Notebooks/Weather-RNN.ipynb
###Markdown ![sf-weather](https://raw.githubusercontent.com/tirthajyoti/Deep-learning-with-Python/master/images/sf-weather.jpg) Weather prediction using Recurrent Neural Network Dr. Tirthajyoti Sarkar, Fremont, CA ([LinkedIn](https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/), [Github](https://tirthajyoti.github.io)) For more tutorial-style notebooks on deep learning, **[here is my Github repo](https://github.com/tirthajyoti/Deep-learning-with-Python)**. For more tutorial-style notebooks on general machine learning, **[here is my Github repo](https://github.com/tirthajyoti/Machine-Learning-with-Python)**.---In this Notebook, we show how the long-term trend of key weather parameters (humidity, temperature, atmospheric pressure, etc.) can be predicted with decent accuracy using a simple recurrent neural network (RNN). We don't even need to use any sophisticated memory module like GRU or LSTM for this. A simple one-layer RNN-based model seems sufficient to predict long-term trends from limited training data surprisingly well. This is almost a proof of what Andrej Karpathy famously called **["The Unreasonable Effectiveness of Recurrent Neural Networks"](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)** The dataset The dataset consists of historical weather parameters (temperature, pressure, relative humidity) for major North American and other cities around the world over an extended time period, 2012 to 2017. Hourly data points are recorded, giving over 45,000 data points in total. By attempting a time-series prediction, we are implicitly assuming that the past weather pattern is a good indicator of the future. For this analysis, we focus only on the data for the city of San Francisco. The full dataset can be found here: https://www.kaggle.com/selfishgene/historical-hourly-weather-data Data loading and pre-processing
###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.set_option('mode.chained_assignment', None) from keras.models import Sequential from keras.layers import Dense, SimpleRNN from keras.optimizers import RMSprop from keras.callbacks import Callback humidity = pd.read_csv("../Data/historical-hourly-weather-data/humidity.csv") temp = pd.read_csv("../Data/historical-hourly-weather-data/temperature.csv") pressure = pd.read_csv("../Data/historical-hourly-weather-data/pressure.csv") humidity_SF = humidity[['datetime','San Francisco']] temp_SF = temp[['datetime','San Francisco']] pressure_SF = pressure[['datetime','San Francisco']] humidity_SF.head(10) humidity_SF.tail(10) print(humidity_SF.shape) print(temp_SF.shape) print(pressure_SF.shape)
###Output (45253, 2) (45253, 2) (45253, 2)
###Markdown There are many `NaN` values (blank) in the dataset
###Code print("How many NaN are there in the humidity dataset?",humidity_SF.isna().sum()['San Francisco']) print("How many NaN are there in the temperature dataset?",temp_SF.isna().sum()['San Francisco']) print("How many NaN are there in the pressure dataset?",pressure_SF.isna().sum()['San Francisco'])
###Output How many NaN are there in the humidity dataset? 942 How many NaN are there in the temperature dataset? 793 How many NaN are there in the pressure dataset? 815
###Markdown Choosing a point in the time-series for training data We choose Tp=7000 here, which means we will train the RNN with only the first 7000 data points and then let it predict the long-term trend (for the next > 35000 data points or so). That is not a lot of training data compared to the number of test points, is it?
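Just to make that concrete, here is a quick back-of-the-envelope check of the split ratio (plain arithmetic on the shapes printed above):
###Code
# Roughly what fraction of the ~45k hourly points is used for training?
train_fraction = 7000 / 45253  # about 0.155, i.e. only ~15.5% of the data
###Output _____no_output_____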
###Code Tp = 7000 def plot_train_points(quantity='humidity',Tp=7000): plt.figure(figsize=(15,4)) if quantity=='humidity': plt.title("Humidity of first {} data points".format(Tp),fontsize=16) plt.plot(humidity_SF['San Francisco'][:Tp],c='k',lw=1) if quantity=='temperature': plt.title("Temperature of first {} data points".format(Tp),fontsize=16) plt.plot(temp_SF['San Francisco'][:Tp],c='k',lw=1) if quantity=='pressure': plt.title("Pressure of first {} data points".format(Tp),fontsize=16) plt.plot(pressure_SF['San Francisco'][:Tp],c='k',lw=1) plt.grid(True) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() plot_train_points('humidity') plot_train_points('temperature') plot_train_points('pressure')
###Output _____no_output_____
###Markdown Interpolate data points to fill up `NaN` values We observed some `NaN` values in the dataset. We could just eliminate these points. But assuming that the changes in the parameters are not extremely abrupt, we can instead try to fill them in using simple linear interpolation.
###Code humidity_SF.interpolate(inplace=True) humidity_SF.dropna(inplace=True) temp_SF.interpolate(inplace=True) temp_SF.dropna(inplace=True) pressure_SF.interpolate(inplace=True) pressure_SF.dropna(inplace=True) print(humidity_SF.shape) print(temp_SF.shape) print(pressure_SF.shape)
###Output (45252, 2) (45252, 2) (45252, 2)
###Markdown Train and test splits at `Tp=7000`
###Code train = np.array(humidity_SF['San Francisco'][:Tp]) test = np.array(humidity_SF['San Francisco'][Tp:]) print("Train data length:", train.shape) print("Test data length:", test.shape) train=train.reshape(-1,1) test=test.reshape(-1,1) plt.figure(figsize=(15,4)) plt.title("Train and test data plotted together",fontsize=16) plt.plot(np.arange(Tp),train,c='blue') plt.plot(np.arange(Tp,45252),test,c='orange',alpha=0.7) plt.legend(['Train','Test']) plt.grid(True) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show()
###Output _____no_output_____
###Markdown Choose the embedding or step size The RNN model requires input sequences of `step` consecutive elements. Suppose x = {1,2,3,4,5,6,7,8,9,10}. For step=1, the x input and its y prediction become:

| x | y |
|---|---|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| ... | ... |
| 9 | 10 |

For step=3, x and y contain:

| x | y |
|---|---|
| 1,2,3 | 4 |
| 2,3,4 | 5 |
| 3,4,5 | 6 |
| ... | ... |
| 7,8,9 | 10 |

Here, we choose `step=8`. In more complex RNNs, and in particular for text processing, this is also called the _embedding size_. The idea here is that **we are assuming that 8 hours of weather data can effectively predict the 9th hour's data, and so on.**
###Code step = 8 # add step elements into train and test test = np.append(test,np.repeat(test[-1,],step)) train = np.append(train,np.repeat(train[-1,],step)) print("Train data length:", train.shape) print("Test data length:", test.shape)
###Output Train data length: (7008,) Test data length: (38260,)
###Markdown Converting to a multi-dimensional array Next, we'll convert the test and train data into matrices using this step value, as illustrated in the example above.
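Before applying this to the weather series, here is a minimal, self-contained sketch of the same windowing on the toy sequence (illustration only; the helper actually used on the data is defined in the next cell):
###Code
import numpy as np

# Toy illustration of the sliding window with step = 3 on x = 1..10
x = np.arange(1, 11)
step_demo = 3
X_demo = np.array([x[i:i + step_demo] for i in range(len(x) - step_demo)])
Y_demo = np.array([x[i + step_demo] for i in range(len(x) - step_demo)])
# X_demo[0]  -> [1 2 3], Y_demo[0]  -> 4
# X_demo[-1] -> [7 8 9], Y_demo[-1] -> 10
###Output _____no_output_____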
###Code def convertToMatrix(data, step): X, Y =[], [] for i in range(len(data)-step): d=i+step X.append(data[i:d,]) Y.append(data[d,]) return np.array(X), np.array(Y) trainX,trainY =convertToMatrix(train,step) testX,testY =convertToMatrix(test,step) trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) print("Training data shape:", trainX.shape,', ',trainY.shape) print("Test data shape:", testX.shape,', ',testY.shape)
###Output Training data shape: (7000, 1, 8) , (7000,) Test data shape: (38252, 1, 8) , (38252,)
###Markdown Modeling Keras model with `SimpleRNN` layer We build a simple function to define the RNN model. It uses a single neuron for the output layer because we are predicting a real-valued number. As activation, it uses the ReLU function. The following arguments are supported: - neurons in the RNN layer - embedding length (i.e. the step length we chose) - neurons in the densely connected layer - learning rate
###Code def build_simple_rnn(num_units=128, embedding=4,num_dense=32,lr=0.001): """ Builds and compiles a simple RNN model Arguments: num_units: Number of units of the SimpleRNN layer embedding: Embedding length num_dense: Number of neurons in the dense layer following the RNN layer lr: Learning rate (uses RMSprop optimizer) Returns: A compiled Keras model. """ model = Sequential() model.add(SimpleRNN(units=num_units, input_shape=(1,embedding), activation="relu")) model.add(Dense(num_dense, activation="relu")) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer=RMSprop(lr=lr),metrics=['mse']) return model model_humidity = build_simple_rnn(num_units=128,num_dense=32,embedding=8,lr=0.0005) model_humidity.summary()
###Output _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= simple_rnn_1 (SimpleRNN) (None, 128) 17536 _________________________________________________________________ dense_1 (Dense) (None, 32) 4128 _________________________________________________________________ dense_2 (Dense) (None, 1) 33 ================================================================= Total params: 21,697 Trainable params: 21,697 Non-trainable params: 0 _________________________________________________________________
###Markdown A simple Keras `Callback` class to print progress of the training at regular epoch intervals Since RNN training is usually long, we want to see regular updates as epochs finish. However, we may not want this update every epoch, as that may flood the output stream. Therefore, we write a simple custom `Callback` class to print the finishing update every 50th epoch. You can think of adding other bells and whistles to this function to print error and other metrics dynamically.
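For instance, here is a sketch of such an extended variant (hypothetical; the minimal version actually used in this notebook is defined in the next cell), reporting the running loss from the `logs` dict that Keras passes to `on_epoch_end`:
###Code
# Hypothetical richer callback: also prints the current training loss.
class VerboseCallback(Callback):
    def on_epoch_end(self, epoch, logs=None):
        if (epoch + 1) % 50 == 0:
            loss = (logs or {}).get('loss')
            print("Epoch {} done, loss: {}".format(epoch + 1, loss))
###Output _____no_output_____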
###Code class MyCallback(Callback): def on_epoch_end(self, epoch, logs=None): if (epoch+1) % 50 == 0 and epoch>0: print("Epoch number {} done".format(epoch+1))
###Output _____no_output_____
###Markdown Batch size and number of epochs
###Code batch_size=8 num_epochs = 1000
###Output _____no_output_____
###Markdown Training the model
###Code model_humidity.fit(trainX,trainY, epochs=num_epochs, batch_size=batch_size, callbacks=[MyCallback()],verbose=0)
###Output WARNING:tensorflow:From c:\users\tirth\docume~1\personal\datasc~2\python~1\tf-gpu\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.cast instead. Epoch number 50 done Epoch number 100 done Epoch number 150 done Epoch number 200 done Epoch number 250 done Epoch number 300 done Epoch number 350 done Epoch number 400 done Epoch number 450 done Epoch number 500 done Epoch number 550 done Epoch number 600 done Epoch number 650 done Epoch number 700 done Epoch number 750 done Epoch number 800 done Epoch number 850 done Epoch number 900 done Epoch number 950 done Epoch number 1000 done
###Markdown Plot RMSE loss over epochs Note that the `loss` metric available in the `history` attribute of the model is the MSE loss, and you have to take its square root to compute the RMSE.
###Code plt.figure(figsize=(7,5)) plt.title("RMSE loss over epochs",fontsize=16) plt.plot(np.sqrt(model_humidity.history.history['loss']),c='k',lw=2) plt.grid(True) plt.xlabel("Epochs",fontsize=14) plt.ylabel("Root-mean-squared error",fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show()
###Output _____no_output_____
###Markdown Result and analysis What did the model see while training? To emphasize, let's look again at exactly what the model sees during training. If you look above, the model fitting code is:```model_humidity.fit(trainX,trainY, epochs=num_epochs, batch_size=batch_size, callbacks=[MyCallback()],verbose=0)```So, the model was fitted with `trainX`, which is plotted below, and `trainY`, which is just the 8-step-shifted and reshaped version of the same series.
###Code plt.figure(figsize=(15,4)) plt.title("This is what the model saw",fontsize=18) plt.plot(trainX[:,0][:,0],c='blue') plt.grid(True) plt.show()
###Output _____no_output_____
###Markdown Now predict the future points Now, we can generate predictions for the future by passing `testX` to the trained model.
###Code trainPredict = model_humidity.predict(trainX) testPredict= model_humidity.predict(testX) predicted=np.concatenate((trainPredict,testPredict),axis=0)
###Output _____no_output_____
###Markdown See the magic! When we plot the predicted vector, we see it closely matches the true values, and that is amazing given how little training data was used and how far into the _future_ it had to predict. Time-series techniques like ARIMA and exponential smoothing cannot predict very far into the future, and their confidence intervals quickly grow beyond being useful.**Note carefully how the model is able to predict the sudden increase in humidity around time-point 12000.
There was no indication of such a shape or pattern in the training set, yet the model is able to predict the general shape pretty well from the first 7000 data points!**
###Code plt.figure(figsize=(10,4)) plt.title("This is what the model predicted",fontsize=18) plt.plot(testPredict,c='orange') plt.grid(True) plt.show()
###Output _____no_output_____
###Markdown Plotting the ground truth and model predictions together We plot the ground truth and the model predictions together to show that the model follows the general trends in the ground truth data pretty well. Considering that only about 15% of the data was used for training, this is sort of amazing. The boundary between train and test splits is denoted by the vertical red line. There are, of course, some obvious mistakes in the model predictions, such as humidity values going above 100 and some very low values. These can be pruned with post-processing, or a better model can be built with proper hyperparameter tuning.
###Code index = humidity_SF.index.values plt.figure(figsize=(15,5)) plt.title("Humidity: Ground truth and prediction together",fontsize=18) plt.plot(index,humidity_SF['San Francisco'],c='blue') plt.plot(index,predicted,c='orange',alpha=0.75) plt.legend(['True data','Predicted'],fontsize=15) plt.axvline(x=Tp, c="r") plt.grid(True) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.ylim(-20,120) plt.show()
###Output _____no_output_____
###Markdown Modeling the temperature data Since we have covered modeling the humidity data step-by-step in detail, we will model the other two parameters - temperature and pressure - quickly with similar code and less commentary.
###Code train = np.array(temp_SF['San Francisco'][:Tp]) test = np.array(temp_SF['San Francisco'][Tp:]) train=train.reshape(-1,1) test=test.reshape(-1,1) step = 8 # add step elements into train and test test = np.append(test,np.repeat(test[-1,],step)) train = np.append(train,np.repeat(train[-1,],step)) trainX,trainY =convertToMatrix(train,step) testX,testY =convertToMatrix(test,step) trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) model_temp = build_simple_rnn(num_units=128,num_dense=32,embedding=8,lr=0.0005) batch_size=8 num_epochs = 2000 model_temp.fit(trainX,trainY, epochs=num_epochs, batch_size=batch_size, callbacks=[MyCallback()],verbose=0) plt.figure(figsize=(7,5)) plt.title("RMSE loss over epochs",fontsize=16) plt.plot(np.sqrt(model_temp.history.history['loss']),c='k',lw=2) plt.grid(True) plt.xlabel("Epochs",fontsize=14) plt.ylabel("Root-mean-squared error",fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() trainPredict = model_temp.predict(trainX) testPredict= model_temp.predict(testX) predicted=np.concatenate((trainPredict,testPredict),axis=0) index = temp_SF.index.values plt.figure(figsize=(15,5)) plt.title("Temperature: Ground truth and prediction together",fontsize=18) plt.plot(index,temp_SF['San Francisco'],c='blue') plt.plot(index,predicted,c='orange',alpha=0.75) plt.legend(['True data','Predicted'],fontsize=15) plt.axvline(x=Tp, c="r") plt.grid(True) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show()
###Output _____no_output_____
###Markdown Modeling the atmospheric pressure data
###Code train = np.array(pressure_SF['San Francisco'][:Tp]) test = np.array(pressure_SF['San Francisco'][Tp:]) train=train.reshape(-1,1) test=test.reshape(-1,1) step = 8 # add step elements into train and test test = np.append(test,np.repeat(test[-1,],step)) train =
np.append(train,np.repeat(train[-1,],step)) trainX,trainY =convertToMatrix(train,step) testX,testY =convertToMatrix(test,step) trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) model_pressure = build_simple_rnn(num_units=128,num_dense=32,embedding=8,lr=0.0005) batch_size=8 num_epochs = 500 model_pressure.fit(trainX,trainY, epochs=num_epochs, batch_size=batch_size, callbacks=[MyCallback()],verbose=0) plt.figure(figsize=(7,5)) plt.title("RMSE loss over epochs",fontsize=16) plt.plot(np.sqrt(model_pressure.history.history['loss']),c='k',lw=2) plt.grid(True) plt.xlabel("Epochs",fontsize=14) plt.ylabel("Root-mean-squared error",fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() trainPredict = model_pressure.predict(trainX) testPredict= model_pressure.predict(testX) predicted=np.concatenate((trainPredict,testPredict),axis=0) index = pressure_SF.index.values plt.figure(figsize=(15,5)) plt.title("Pressure: Ground truth and prediction together",fontsize=18) plt.plot(index,pressure_SF['San Francisco'],c='blue') plt.plot(index,predicted,c='orange',alpha=0.75) plt.legend(['True data','Predicted'],fontsize=15) plt.axvline(x=Tp, c="r") plt.grid(True) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() ###Output _____no_output_____
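###Markdown A closing sanity check: instead of only eyeballing the overlaid curves, one can quantify the test-set error. A minimal sketch reusing the variables still in scope (here from the pressure model, the last one fitted):
###Code
# RMSE on the test segment; testPredict has shape (N, 1), so flatten it first.
test_rmse = np.sqrt(np.mean((testY - testPredict.flatten()) ** 2))
###Output _____no_output_____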
Projects/Report/Instabot/Instabot.ipynb
###Markdown InstaBot Introduction - Part 1 Your friend has opened a new Food Blogging handle on Instagram and wants to get famous. He wants to follow a lot of people so that he can get noticed quickly, but it is a tedious task, so he asks you to help him. As you have just learned automation using Selenium, you decided to help him by creating an Instagram Bot. You need to create different functions for each task.
###Code from selenium import webdriver from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.common.exceptions import TimeoutException from bs4 import BeautifulSoup import time #opening the browser, change the path as per location of chromedriver in your system driver = webdriver.Chrome(executable_path = 'C:/Users/admin/Downloads/Chromedriver/chromedriver.exe') driver.maximize_window() #opening instagram driver.get('https://www.instagram.com/') #update your username and password here username = 'SAMPLE USERNAME' password = 'SAMPLE PASSWORD' #initializing wait object wait = WebDriverWait(driver, 10)
###Output _____no_output_____
###Markdown Problem 1 : Login to your Instagram Login to your Instagram handle. Submit with sample username and password.
###Code def LogIn(username, password): try : #locating username textbox and sending username user_name = wait.until(EC.presence_of_element_located((By.NAME,'username'))) user_name.send_keys(username) #locating password box and sending password pwd = driver.find_element_by_name('password') pwd.send_keys(password) #locating login button button = wait.until(EC.presence_of_element_located((By.XPATH,'//*[@id="loginForm"]/div[1]/div[3]/button/div'))) button.submit() #Save Your Login Info? : Not Now pop = wait.until(EC.presence_of_element_located((By.XPATH,'//*[@id="react-root"]/section/main/div/div/div/div/button'))) pop.click() except TimeoutException : print ("Something went wrong! 
Try Again") #Login to your Instagram Handle LogIn(username, password) ###Output _____no_output_____ ###Markdown Problem 2 : Type for “food” in search barType for “food” in search bar and print all the names of the Instagram Handles that are displayed in list after typing “food” Note : Make sure to avoid printing hashtags ###Code def search(s): try: #locating serch bar and sending text search_box = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'XTCLo'))) search_box.send_keys(s) #waiting till all searched is located wait.until(EC.presence_of_element_located((By.CLASS_NAME,'yCE8d'))) #extracting all serched handle handle_names = driver.find_elements_by_class_name('yCE8d') names = [] #extracting username for i in handle_names : if i.text[0] != '#' : names.append(i.text.split('\n')[0]) time.sleep(5) #clearing search bar driver.find_element_by_class_name('coreSpriteSearchClear').click() return names except TimeoutException : print('No Search Found!') #extracting all the names of the Instagram Handles that are displayed in list after typing “food” usimg search('food') name_list = search('food') for i in name_list : print(i) ###Output dilsefoodie foodtalkindia foodmaniacinthehouse food.darzee yourfoodlab dilsefoodie_ food foodnetwork foodinsider foodiesfeature foodplanet001 delhifoodguide food_belly11 food_lunatic delhifoodie bangalore_foodjunkies food_and_makeup_lover foodgastic_amdavadi street_food_chandigarh buzzfeedfood thefoodranger hmm_nikhil pune_food_blogger food_junc sattorifoodlab foodie_girl_sneha hyderabad.food.diaries indianfood_lovers foodofchennai ndtv_food foodelhi fityetfoodie foodiesdelhite foodys food_affair Food Street / Thindi Bheedi _foodpaths_ mumbaifoodie chandigarhfoodguide food_travel_etc delhifoodwalks foodofbengaluru Shivaji Nagar indian_food_freak gastronome101 yum_crunch thisisdelhi VijayNagar Food Street foodrush.recipe foodtalkprivilege ruchika_asatkar ###Markdown Problem 3 : Searching and Opening a profileSearching and Opening a profile using Open profile of “So Delhi” ###Code def search_open_profile(s): try: #locatong search box bar and sending text search_box = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'XTCLo'))) search_box.send_keys(s) #locating serched result res = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'yCE8d'))) res.click() time.sleep(5) #driver.back() except TimeoutException : print('No Search Found!') search_open_profile('So Delhi') ###Output _____no_output_____ ###Markdown Problem 4 : Follow/Unfollow given handleFollow/Unfollow given handle - 1.Open the Instagram Handle of “So Delhi” 2.Start following it. Print a message if you are already following 3.After following, unfollow the instagram handle. Print a message if you have already unfollowed. 
###Code def follow(): try : #locating follow button btn = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'_5f5mN'))) #checking for text if btn.text == 'Follow' : btn.click() time.sleep(3) else : print('Already Following') except TimeoutException : print("Something Went Wrong!") def unfollow(): try : #locating follow button btn = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'_5f5mN'))) #checking for text if btn.text !='Follow' : btn.click() time.sleep(2) #locating popup window (when you click on follow button) pop_up = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'aOOlW'))) pop_up.click() time.sleep(3) else : print('Already Unfollowed') except TimeoutException : print("Something Went Wrong!") #for searching and opening the 'So Delhi' instagram handle search_open_profile('So Delhi') #for following this instagram handle follow() #for unfollowing this instagram handle unfollow()
###Output _____no_output_____
###Markdown Problem 5 : Like/Unlike posts Like/Unlike posts 1. Liking the top 30 posts of ‘dilsefoodie’. Print a message if you have already liked it. 2. Unliking the top 30 posts of ‘dilsefoodie’. Print a message if you have already unliked it.
###Code def Like_Post(): try : #scrolling for locating post driver.execute_script('window.scrollTo(0, 6000);') time.sleep(3) driver.execute_script('window.scrollTo(0, -6000);') time.sleep(3) #locating post posts = driver.find_elements_by_class_name('v1Nh3') for i in range(30): posts[i].click() time.sleep(2) #locating like/unlike button like = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'fr66n'))) st = BeautifulSoup(like.get_attribute('innerHTML'),"html.parser").svg['aria-label'] if st == 'Like' : like.click() time.sleep(2) else : print('You have already LIKED Post Number :', i+1) time.sleep(2) #locating cross button for closing post driver.find_element_by_class_name('yiMZG').click() time.sleep(2) except TimeoutException : print("Something Went Wrong!") def Unlike_Post(): try : #scrolling for locating post driver.execute_script('window.scrollTo(0, 6000);') time.sleep(3) driver.execute_script('window.scrollTo(0, -6000);') time.sleep(3) #locating post posts = driver.find_elements_by_class_name('v1Nh3') for i in range(30): posts[i].click() time.sleep(2) #locating like/unlike button like = wait.until(EC.presence_of_element_located((By.CLASS_NAME,'fr66n'))) st = BeautifulSoup(like.get_attribute('innerHTML'),"html.parser").svg['aria-label'] if st == 'Unlike' : like.click() time.sleep(2) else : print('You have already UNLIKED Post Number', i+1) time.sleep(2) #locating cross button for closing post driver.find_element_by_class_name('yiMZG').click() time.sleep(2) except TimeoutException : print("Something Went Wrong!") #for searching and opening the 'dilsefoodie' instagram handle search_open_profile('dilsefoodie') #Liking the top 30 posts Like_Post() #Unliking the top 30 posts Unlike_Post()
###Output You have already UNLIKED Post Number 2 You have already UNLIKED Post Number 5
###Markdown Problem 6 : Extract list of followers Extract list of followers 1. Extract the usernames of the first 500 followers of ‘foodtalkindia’ and ‘sodelhi’. 2. Now print all the followers of “foodtalkindia” that you are following but who don’t follow you.
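The scraping itself is Selenium work, but the final comparison reduces to plain set algebra — here is a toy sketch of just that logic with hypothetical usernames (the real implementation appears further below):
###Code
# Toy sketch of the Problem 6 set logic (hypothetical usernames).
their_followers = {'alice', 'bob', 'carol'}  # followers of 'foodtalkindia'
i_follow = {'bob', 'carol', 'dave'}          # accounts I follow
my_followers = {'carol'}                     # accounts that follow me back
# foodtalkindia followers whom I follow but who don't follow me:
result = (their_followers & i_follow) - my_followers  # {'bob'}
###Output _____no_output_____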
###Code def Extract_Followers(): try : # locating followers button and click on it followers_btn = wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME,'g47SY'))) followers_btn[1].click() #locating followers list frame = driver.find_element_by_class_name('isgrP') #scrolling untill first 500 user is located for i in range(50): time.sleep(1) driver.execute_script("arguments[0].scrollTop=arguments[0].scrollHeight",frame) names = [] #extracting userdata followers = driver.find_elements_by_class_name('d7ByH') #extracting username for i in followers[:500] : names.append(i.text.split('\n')[0]) return names except TimeoutException : print("Something Went Wrong!") ###Output _____no_output_____ ###Markdown First 500 followers of ‘foodtalkindia’ ###Code #for search and open 'foodtalkindia' instagram handle search_open_profile('foodtalkindia') # Extracting followers using Extract_Followers() function users = Extract_Followers() ind = 1 for username in users: print(ind,username) ind += 1 ###Output 1 farhinvox 2 baby_mawu 3 jaison_1934 4 anujdigital 5 chef_tamorady 6 badboyharsh192 7 tandooriflames 8 starlit.suites 9 cookbookbylele 10 shree___kashyap 11 secret7society 12 value_investor_15 13 jikky_94 14 srj_universe 15 clicksbyrg 16 thesumptuoussaga 17 divyankarya3467 18 royal_bhamani_11 19 wallacehajimuken 20 _letters_to_self 21 meeta4632 22 leobite9927 23 chefzillabyrimalakhani 24 hiyaguptaa 25 rasa__delights 26 intercuisine_food 27 mayu_makeovers_ 28 im.vishal.gupta_ 29 anii._.10 30 orbitluxuryvillas 31 daniiyaall1 32 shagorika_chronicles 33 t.thanu_san 34 peerohotsauce 35 _badasstaurus_ 36 nandhaprem 37 mr_notty_saif 38 ashmi_raval18 39 _freedom_wings 40 naman4550 41 shamlibarvekar 42 aumagronica 43 vanashreeyoga 44 averiesen 45 sgkalidas 46 manpreet__lohat 47 __shreyanigam__ 48 the_amrit7 49 khkhansa345 50 craftsimpex 51 i_am_jimesh_raval_ 52 sonwane4745 53 culinary_delights_punjab 54 sanrinzbinge 55 jasmine_2902_ 56 itz_rg_ranu 57 mahi_madhu27 58 nikita_ramteke 59 story_danish1 60 curvy_girl_lookbook 61 umeisone 62 thefoodfeelingconnection 63 foodiekomal 64 foodtechmantras 65 mr.abdul.wahab_12 66 meher.bakery 67 krsnaruby 68 dj.rhett 69 estherjcolaco 70 makka.97 71 mansy.me 72 _khaadya_padarth 73 explore_world_universe 74 local_ka_traveller 75 foodkitchenzhindi_official 76 karnika.patnayak 77 deepesh7052 78 ind_healthy_food 79 whats_cookinng 80 samratsethi1 81 the_alluring_cutie 82 jainvinny 83 camerainmyshoes 84 kanhaiya_shandilya 85 anshuman_1996_ 86 foodstationbylalli 87 _priyankabohara_ 88 indianfood7074 89 shri_krishna_mandir_ 90 eyerajdamania 91 could_land 92 way2burger 93 foodhub_n_love 94 suganyadevi2112 95 11.37270 96 muriel_vally_ferns 97 __fatima.amoudi 98 kitetsu7 99 swikritisharma 100 kashish_cute 101 theshahikitchen 102 rishab0077 103 _imainak_roy_ 104 oasiaan 105 samazing.patil 106 playxtpro 107 aloowaala 108 admagnetomedia 109 jaaaanlove12gma 110 simply_unique_1 111 my_own_backpack 112 yashikatayal_94 113 the_silentpoison 114 cafemommyjoon 115 quarantine_foodzilla 116 justlookingforheaven 117 abhiramyenuga 118 _shotsoflife__ 119 suzanepattanayak 120 rupesh_prajapati_919 121 the_classy_bloke 122 ww123c0m 123 sharanpreet491 124 mahen_lams 125 al_bismillah_biriyani 126 alchemy_design_studio 127 nishant.neeraj.549 128 machhindramandlik 129 pragyasharmaofficial 130 thatinsomniackhan 131 subhashree_97512 132 snigdhaberamaity 133 sagarchowdhury1 134 crust_n_creams_ 135 theginbrigade 136 best_famous_tantrik_baba_india 137 foodie.want.food 138 world_famous_best_astrologer 139 
ms._bhatia 140 rapiddart 141 thebig_foodie 142 suravie_pb 143 hauzkhassocial 144 teddy_de_bakker_2 145 vk87792 146 sufitravelspvtltd 147 adityamane200 148 rahul_rachakatla 149 foodforuindia 150 pritidubey1993 151 _iam_reshu 152 love__maan0007 153 aaro.hi9766 154 the_sarash 155 pawan628sharma 156 poojaparam04 157 adh_arch 158 gittushah 159 sweetsagittarius.93 160 vaishnav_mal 161 chef_mesutcanturk 162 colors_valley 163 qrable 164 tarungm 165 charubijlani 166 food_stories_101 167 faizan0000007 168 food_frolic_30 169 madhavi.jedhe 170 thisguytarun 171 ashmeeetsethi 172 culinary_hub_ 173 akshay_foodandtravel 174 classicgravyy 175 sohini_clicks 176 bhavyaadhingra 177 shrees_shetty 178 armankhanaf 179 official_salmakhan 180 bhukkad__billi 181 sbrmsthofficial01 182 foodieclicks6 183 theimpeccablestyle 184 tricky_cat_09 185 khopdi__lover__01 186 subhrangshuslookbook 187 fodie_junction 188 instapromotion_90 189 someshgurav 190 _cottoncandyandshandy_ 191 aartisingh1203 192 kangscookingcorner 193 s_ou_raa_vr46 194 ___naqqu__xx_01 195 the_pleasure_of_loving_foods_ 196 the_bartender_cd 197 desi__sperm_ 198 muksa.ajbani 199 _foodiemedico_ 200 djschocolate 201 somabhambure 202 junglee_jalebi 203 desiigner_subrat 204 mahnfrank2 205 asifmobeen 206 dudaflormendes 207 norbertportka 208 bhatiyare 209 matlubk323 210 his_pureevil.devil 211 gangwar.rinul 212 foodie__ashutosh 213 takeeateasy2020 214 bloglife41 215 djpdaminijparmar 216 deepthi_pullela 217 foodies4forever 218 meeansh_k 219 virdiandsons 220 krishaprivate 221 refectionsthejuicebar 222 foodographybyvipin 223 altafmalik725 224 shivanya3278 225 anishaa.9 226 _ghar_ka__khana 227 pushpendra.khandelwal 228 sumittapadiya 229 foodpact_2020 230 foodonshot 231 jil_247 232 ayanmars 233 advjasbirkaur 234 nehaa7860 235 status_lover_td 236 sa__ha___d 237 er_sandeepyadav 238 jamungoa 239 nutsnspreads 240 danish.q143 241 lohia.shalini 242 _miss_foodiee 243 rehman_blog 244 __adityareddy__ 245 duyguaaltug 246 kunaka_munashe 247 flavorsbychefdeparth 248 dalai_seafoods 249 patelqueen167 250 grafikmanish 251 sonam_r_sachdev_09 252 occupiedwith_myself 253 u.panigrahi 254 food_fun_adventure 255 buzz__twin_kle 256 thefoodiezpage 257 garimarishisain 258 arajurkar 259 prsharma4382 260 chef_siblings 261 shubham_522 262 wellness_withmona 263 _chf_dalai 264 diyakatiyarr 265 chefrahilaga 266 mai_hoon_doctor__strange 267 sowrav.sowrav.792 268 durgabaisa2 269 a27_shutterbug 270 prince_chetanya_pradhan 271 vicentefelixrodriguez 272 ritubajaj13 273 samarth_and_kushal 274 poojamadaan3 275 ashwinsins 276 altitude_ace 277 o0o_boy_yash_o0o 278 ayushi_ajit23 279 anuyadav0707 280 its_shahnawaz_sta 281 hair.blessed06 282 balalalalab 283 rohitgupta6773 284 vipul_uniyal105 285 nehaljoshi1 286 paridhibhatiya 287 ansumanbisoi2003 288 subhan_pathan07 289 tripsntreats01 290 newbombaymart 291 premfulwani2005 292 pleuvine 293 sundarii_kumar 294 a_p_megavar__ 295 notyourpringles_ 296 bindaascookingco 297 foodie_goodie_by_sushi 298 shagunakanwar 299 akhand.pratapgur 300 hameed_kohistan 301 veshalis74 302 nehearttrusha 303 mycoconutstories 304 kaushal_prajapati24 305 mungase_agro 306 headovermeals_7 307 shalis_cooking_window 308 shreesidheshwartraders 309 foodtalkdel 310 ram_sir_karan_n_group 311 the_food_philosophy25 312 broker_chefjulian 313 joyd33p.ghosh 314 i_am_nehapant 315 black_tick_7a3 316 jwmarriottchd 317 mr._right_naga 318 rendi_0649 319 royalsu73 320 laziz_darwaze 321 gurbani949 322 pyramid_atmbar 323 shreya_kapoor925 324 tarashidresses 325 deepu_manugula 326 niladri.s.d 327 atri.ashwin 
328 manjitvdeshmukh 329 ahmedazruff 330 megha._1 331 storytellershive 332 cafe_r19 333 abidevroy 334 tanhadiltraveller 335 vandupai 336 lwd7152 337 _._._a.l.i.z.e.h.h.h.h._._._._ 338 venkzzy 339 food_universe143 340 svs305 341 trend_shoppers_crew 342 maison_cuisine 343 foodprism70 344 foodieesince88 345 maskaonwheels 346 lokesh.sendhav 347 themohammedg 348 sweeterabyminal 349 zoshorganic 350 traces_of_flavours 351 _night_boy________ 352 vishi_001 353 bachelors_kitchen_1min 354 sushmaas_kitchen 355 vaishnavi_raghav_makeup_artist 356 cakeland523 357 dilliiblog 358 shaikhtaj917 359 itzzz_adarsh21 360 firkush09 361 rajuthapa1983 362 khaled.abou.186 363 kingroger123 364 theeverydaycooking 365 a2_unisex_fashionnn 366 loveforfood42 367 lavenderlimecollective 368 headover_meal 369 spiceofpune 370 mysticfoodindia 371 dakshesh.12ka4 372 zarak.han1010 373 amitsharma5914 374 fittness_boy1 375 froshtafoods 376 irenesgroup 377 delhiadvertisements 378 shaileejauhari 379 aggrenu1101 380 nehaaggarwal3 381 klassykunal 382 ruby_arora_87 383 svk.92 384 muskan_virmani 385 yash.ntpl450 386 the_great_indian_kitchen 387 culi.naryarts 388 siva.murugesan.9615 389 dhanraj97 390 solotraveller01 391 chandigarhfoodblog 392 thechocolatepuddin 393 mohit.rathour_ 394 fernweh234 395 itsreallydelicious 396 the_lavender_kitchen 397 thegoanloadedplate 398 susmitabaishya1133 399 mahdimaj1374 400 harleens_kitchen_youtube 401 myrecipesblog 402 anis_turani 403 yadavdharmendra__17 404 avdhoot_247 405 fouzia1222 406 sundar_sethuraman 407 revatir03 408 delhi_hai 409 olive.brew 410 luc_ky7779 411 kiranvyas77 412 soul_kill_sort 413 duttranjan 414 housoflife 415 chefjoshii 416 aamy_ks 417 shahidchoudhary8654 418 emre49000 419 foodwrit 420 hungry._.again 421 sonal.k_0650 422 panchaleagle 423 abhi_guitar143 424 bakearoma_neha 425 latte_s_bakeanddough 426 foodstoriesbyrutuja 427 rishikk_mishra 428 dhrus29 429 fashionbloggershubofficial 430 sweetycookingstories 431 taj_natural_dry_fruit_halwa 432 amirsohel47 433 malaydinerol 434 kartikgupta542 435 batterbutterbakery 436 _rishugarg_ 437 prabhaprabha954 438 ajay_rajwade 439 priyankagohel 440 aug_os_r 441 soyab.khilji.1 442 azhar.shaikh.0220 443 mital.312 444 cookingstudio54 445 armankhan074 446 foodie_mohapatras 447 anand4506kumar 448 manjurkhan1936 449 barnaliattachment 450 anupam_viya 451 __jaykoli60__ 452 salonit16 453 nuskha_e_biryani 454 foodies_court9 455 sethiavish28 456 anjalithakur9516 457 amitatrehan 458 rk.gamer__ 459 m_a_j_e_e_d_madboi 460 foodforfoodie95 461 t_h_o_u_g_h_t_s_ofme 462 zarpashkhan9 463 travelandeatz 464 i_am_not_fake_banana 465 junurajan 466 koulickbiswas 467 souravrana411 468 _.p.r.i.n.c.e._.s.h.a.r.m.a._ 469 theripalrao 470 daddieskichen 471 mohammadibrahim008 472 nikitavarma244 473 thillaiashok 474 la_vedlysevents 475 theboy_who_lives 476 shrutik1012 477 patel.kamlesh.7140497 478 bonappetitko 479 arunkushwah4321 480 mairajkhan63323 481 vinod4891 482 chefharshitagarwal 483 dev__unstoppable 484 arpita_garnaik 485 _._shadow_of_heart._._ 486 flavorfeast_piku 487 rachnakarproductionstudio 488 ni3.kumar 489 sanyasinghal91 490 satakshi.sharma.7thd.sdpsmzn 491 soulhunt_saiash 492 porshiya_s_bose 493 nutri_pickles 494 mr.prashu.4141 495 timetravel_turtle_ 496 _zeeshanali786 497 kumarhariombca 498 bhumikachheda 499 _foodiexplorer_ 500 anarsa_flavours_of_bihar ###Markdown First 500 followers of ‘sodelhi’ ###Code #for search and open 'sodelhi' instagram handle search_open_profile('sodelhi') # Extracting followers using Extract_Followers() function users = 
Extract_Followers() ind = 1 for username in users: print(ind,username) ind += 1 ###Output 1 bambayallaofficiel 2 kanishq_basoya_dellhii0001 3 ashoka2906 4 choudhaarysagar 5 twenty4_into_seven 6 _____mahesh_panchabhai_____ 7 the_positive_rays 8 21shrutikumari 9 vaani.n 10 chahatmalhotra_ 11 _soflattering_ 12 visualsbyfocus 13 shreyaa.xx__ 14 ojasvikori 15 nitantgoradia 16 vaishaligupta6 17 vanshikhanagal 18 ayuroy_20 19 isagarkaushik 20 nau_tan_ki_nik 21 _kehkasha_ 22 kaira_singh16 23 apala.03 24 dagar3905 25 romeopandit_0577 26 shrutika_94 27 monikabanerjee517 28 shubham.sr4 29 rizwan__rk__77 30 quotes_and_facts_0 31 _elenasingh 32 geeta_mehra_12 33 we_share_hearts 34 thestaplework 35 s.m.6662 36 jikky_94 37 sonammverma 38 llamanll_ 39 amsterrgram 40 anushi.singla 41 jackyjwjwb 42 gallivanter.diaries 43 nishavikasbaliyan 44 fungusmaybe 45 nikhilkhetan 46 soodshaab1 47 asmitaarora22 48 goyalsaloni1 49 jollywood_daily 50 makeupnhairbyz 51 buddhism_teaching_page 52 bwithvanshika 53 kritijalan_ 54 tanishkasandhu 55 sumanyudutta 56 printsolutionsdesign 57 lizanedsouza 58 kapurjiya_ 59 adnan_00762 60 rocksaltlampindia 61 jeewandisha 62 manabi_12 63 abushaikh5708 64 vinthemoon 65 kritikapoor79 66 garry_7621 67 arpna_bharti 68 arshi28_ 69 dikit_tsering 70 _delhistreet 71 foodiereviews45 72 asharaf.alam.5205 73 aarzoo_aroraa_ 74 madhavgupta08 75 vaibhav_kain 76 mechanicalengineering_19 77 sid._.mahajan 78 mohi_t3600 79 dimariamit 80 naveenmg1818 81 luxuriousarenaofficial 82 manupriyakaplesh 83 _nambooo 84 aggarwaljii 85 nakshtra_003 86 art_and_craft_with_love 87 nishant3030 88 sgkalidas 89 shivangiamba 90 mes_sysoul01 91 mr_ayanofficial110 92 anandshivani 93 radhikka_khanna 94 indersaheb123 95 riyas._ym 96 sakshams21 97 jass_singh_kaler 98 bindia363 99 rahm_an7860 100 anurag_069 101 chicstyle_etmode_ 102 sakshi__dhiman 103 accrete_designs 104 notror404erfound 105 pranavi.999 106 sanrinzbinge 107 su_naina_7 108 _cherr_david_ 109 yogeshsharma0933 110 itz_rg_ranu 111 rakxw62376 112 iammissy6 113 ammuu8762 114 _the_untold_tales__ 115 khana261549 116 kunaljoshi9555 117 chinmaiverma 118 mahek.singla 119 mohammad.sojib.73113 120 _nimmi.nimmi_ 121 rasoolattia 122 food.overdudes 123 mrs.dr.rayadivyangsharma 124 zanuss.home 125 vishnu_shastri_ji_35 126 anythingbutsugar 127 anmol.rawlani 128 shiivaniyadav 129 bhukkad_at_the_nukkad 130 _thebrowndaughter 131 ishaatyagii 132 delhiboy8691 133 chitta_official_001 134 angerdoll_pg 135 meher.bakery 136 r_rahman567 137 army.adda 138 arche_rai 139 desi_traveler 140 mansy.me 141 anushreyabedi 142 varunsharma6003 143 mahipwadhawan96 144 aagamshah88 145 mohit.gupta.official 146 puniaanjali24 147 _sheena.bhatia_ 148 khaopiyo_official 149 jerrry_sharma 150 photo_phactory_2244 151 ritz6196 152 hitmanpanwar.s 153 _nikhilkotecha_ 154 arorasuksham 155 samratsethi1 156 chodogyaan11 157 namrata8.arora 158 shrreya__ 159 safetogo.life 160 fuck.you8255 161 mcbc_memez 162 raghav2002agg 163 sarab_sainbhee 164 ashishrawat2911 165 realmirmasroor 166 gudduskhan143 167 rebha.wadhwa 168 bpeotiques 169 sachin4459singh 170 _tuhada_apna_chauhan_ 171 rajatbhallaofficial 172 manankarangiya7 173 _simranjon_ 174 rajamahendravaram_raju24 175 angadsngh 176 _.hell_o_leo._ 177 arushisaini_ 178 depesh.photography 179 himmilicious 180 shweta.saigal 181 petals.punjabi 182 abhishek.chauhan___ 183 aksh_kg 184 thefaridhasan 185 islamic_facts786 186 afryn31 187 _imishan_ 188 aasthasinghal786 189 sudeshmahala878 190 mayankrao00 191 cute_luv_mine11 192 earning_trick_04 193 heena.iftekhar 194 himanshidz 195 
_preeti_honey_ 196 kunal_ghatara 197 justperfect.content 198 justyatin 199 ajmeriet 200 shivendrasinghh 201 a__dev__ 202 aryamanmehlawat03 203 saga.rjata 204 aadichaudary 205 _shaikh_ifra_ 206 chianti_over_roses 207 cherrry.89 208 bikashdas9605 209 aniishachandel 210 emekannabude 211 mehar_sehgal_ 212 thenotoriousbabaas 213 bao.the.frenchie 214 theprableenkaur 215 aksweets___ 216 mridul_chillana 217 abhipahwa 218 designerkhusboofficial 219 tharwaniapoorva.ta 220 muskan_2212 221 punitchauhan16 222 i.akshaysharma 223 nath_sachindra 224 saumya_19 225 aslilava 226 anwar_irshad59 227 rachna_dutta 228 asmailaziz312 229 whenwritikawrites 230 _endlesswords__ 231 notkashish 232 kunaaaalb__ 233 oneimperfectguy 234 nikki19valya 235 akashgoel061089 236 _dixitjain_ 237 ash_warriorr 238 pure_passionomist 239 aloowaala 240 _sachintichkule_ 241 notsogregariousgirl 242 ashmeet_reyat 243 rs_fashionstyle 244 agarwalmayuka 245 ocherlybright 246 swapniil____ 247 ncrblog 248 _miss_foodiee 249 jaspreet_mishukaur 250 _sumit_kaushik_ 251 bharatpulyani 252 taannuu.u 253 milind_makeupartist 254 ansh_chadha3112 255 nanci_gayle 256 catchy_captures21 257 vijay_prd 258 shivendu.kumar 259 vishwastanwar 260 _itsprashant_ranjan 261 cci.chefstory 262 kartikey.1996 263 riyasharma13049 264 lexquestfoundation 265 anuarora68 266 _.itsmanya 267 tyagi_swati_08 268 jitenderkumar5336 269 _ayazmusic 270 potty6390 271 ayush_jhambar24 272 ch_pran.tiger_bsntpurya 273 nitish.gupta.08 274 scoresomestyle 275 thehappysoulss 276 jatinxtx 277 kanikasharma22.08 278 its_ankita15 279 chop.stick15 280 sp_chugh 281 sa.lmon6526 282 awsmvibe18 283 anjuchawla24gmail 284 _me_happiest_soul_ 285 sheldonandmissy2020 286 anantyadavvv 287 nehaagg7 288 kshitij_bansal07 289 shivankitbagla 290 surmabhopali69 291 akashgoyal2296 292 high_ratedgabru0 293 gabbardhaliwal 294 tarot_palmist_sushmitah 295 gundeep_mann 296 lesliecuellar_ 297 aditi_1011 298 a__nigma 299 must.aee 300 _fcukall_ 301 mahesh.123111 302 x___sammmy___x 303 imkkmr 304 akashmandal621 305 meghna9962 306 justacoffeeperson 307 007_zayn_shuaib 308 eshanaoberoi 309 mannpesticides 310 mahalaxmi.industries 311 sleepyworld_14 312 anjalipvt.1995 313 sabhyapvt_ 314 handa3131 315 social__distansingh 316 sattu_aulakh_3600 317 thetwobhukkads 318 x_poo_ 319 ig_chronos_ 320 harsh_sahni 321 __chaudhary__monu 322 riyaaa_097 323 reformedfinny 324 leena_500 325 anshika0044 326 manjyot2050 327 ritisharmaaa 328 kishh_shh 329 pratham_pb03 330 vipul_kumar001 331 the_classy_bloke 332 alishba7b3 333 antony____paul 334 shivi4frnds 335 peachorange.in 336 sonasolanki07 337 grisha.ggarwal 338 hanshul18 339 foodie.want.food 340 sameerupadhyay59 341 sachin.499 342 countryside_livingg 343 parineet09 344 firozansari257862 345 vro0om 346 _s.w.e.r.i.l_11_ 347 izuu___2610 348 dharmesh.bunker 349 iam_divay 350 harsh_gupta_06 351 viidiishaaa 352 vipin9494 353 shubhambhandariiiii 354 silvershades_258 355 vanshika.thakkar21 356 _irl_inspired 357 robinsingh105 358 shikhar0104 359 rakhi_sharma_568 360 naveen_baneta 361 nitimasinghal 362 aulakh8602 363 yashansh10 364 tootoo983 365 avi._.meena 366 meenarm24 367 tendol_namtsang 368 sonam_verma0808 369 theartcart_30 370 amoreapparels 371 jsinha27 372 abidsheikh77.official 373 hauzkhassocial 374 dineshkumark27017 375 iam.sakshi.chaudhary 376 aaayusush 377 jayesh_khattar 378 poonamnegi_ 379 teddy_de_bakker_2 380 manmeet_arora83 381 thisisramneekkaur 382 meenalaswal 383 akhilpal9999 384 puii_lalrem 385 _mani.aulakh_ 386 ams.aman 387 tannukuswaha 388 uwyc_1409 389 himanshi.sejwal 390 
ankushgabaa 391 ac_233484 392 suranshchopra 393 khanshazfa786 394 mireya_bymamta 395 sohilkhan2001sohil 396 foodforuindia 397 shrutii.sehgal 398 anup_misal15 399 aru.jha 400 idhashaha10 401 shubhijain.27 402 makeupstoriesbyrajnivirmani 403 abhinavshawarma 404 the_weddings451 405 yuvi.at.war 406 sriyansh_jain 407 _so.cha_ 408 gmfedrina 409 navpreecollection 410 theb_righterside 411 _shaifali._.gaba_ 412 kitchenandkulture 413 harbir2 414 chetanchetan16 415 love__maan0007 416 saintgraam 417 kaurparamveer 418 akshaygarg118 419 sona.sapehia 420 arpitvishnoi89 421 hxxetrii 422 fmlftsanika 423 _sargam_0608_ 424 poorva.16 425 simrndhimn 426 sahil_shadow 427 tanwar_kanika 428 iamnbhullar 429 ias_paradise 430 _vanisha_m 431 barsha._.h 432 wanderlustraveler43 433 __.azaani 434 sylvanaesatpathy 435 hoodadeepika 436 k_saluja1608 437 khwaabonkamusafir 438 nishtha_kharbanda 439 neerumehta28 440 kinnikaur 441 myfoodstory4u 442 ga_education_india 443 priyachristina 444 kukrejasimran 445 thememeion 446 yukti_agarwal14 447 fierce143 448 jolly_sangeeta 449 sheetalbelwal 450 sanyamm_007 451 vikas.s_98 452 justanned.in 453 jamal_cooking 454 anmoll.jainn 455 dr.rashmi21 456 santoshkumarthukral 457 theclassiccollectionskn 458 lokesh.sharma__ 459 _naveen_sheoran 460 _that_rainbowgirl 461 safarbhramantee 462 ketogenic_with_neha 463 alexandermediationgroup 464 tarun.1406 465 chavat_shakha_1 466 jass_sraw_ 467 ivipinjhamb_ 468 dev.editography 469 bhumika_143 470 _jagriti_26 471 anayashodwani 472 arman.sharma.7796 473 sanasiddiqui001 474 yogendraagarwal9 475 akieshabyakanksha 476 mehmakaurkohli 477 travelagain27 478 _muskan2003_ 479 gourmetsfromindia 480 lalit_kumar_sambhariya 481 sumit3806singh 482 thakur_prashant_gaur 483 aryankashyap9563 484 anzeerupp 485 plantohub ###Markdown Print all the followers of “foodtalkindia” that you are following but those who don’t follow you. 
###Code def Following(): try : # locating following button and click on it followers_btn = wait.until(EC.presence_of_all_elements_located((By.CLASS_NAME,'-nal3'))) followers_btn[2].click() #locating the following list frame = driver.find_element_by_class_name('isgrP') #scrolling until all users are located for i in range(20): time.sleep(1) driver.execute_script("arguments[0].scrollTop=arguments[0].scrollHeight",frame) names = [] #extracting userdata following = driver.find_elements_by_class_name('d7ByH') #extracting username for i in following : names.append(i.text.split('\n')[0]) return names except TimeoutException : print("Something Went Wrong!") #for searching and opening the 'foodtalkindia' instagram handle search_open_profile('foodtalkindia') # Extracting followers using the Extract_Followers() function followers_of_foodind = Extract_Followers() #casting into a set followers_of_foodind = set(followers_of_foodind) #now finding all users followed by me: open my own profile with search_open_profile(), then extract them with Following() search_open_profile(username) followed_by_me = Following() followed_by_me = set(followed_by_me) #taking the intersection, so s1 contains only those followers of 'foodtalkindia' whom I follow s1=(followers_of_foodind).intersection(followed_by_me) if len(s1) == 0: print('No such users found') else: #now extracting my followers my_follower = Extract_Followers() my_follower = set(my_follower) #taking the difference with my followers, so s2 contains only users whom I follow but who don't follow me back s2=s1.difference(my_follower) if len(s2) == 0: print('No such users found') else: for user in s2: print(user)
###Output No such users found
###Markdown Problem 7 : Check the story of ‘coding.ninjas’ Check the story of ‘coding.ninjas’. Consider the following scenarios and print messages accordingly - 1. If you have already seen the story. 2. Or the user has no story. 3. Or view the story if not yet seen.
###Code def Check_Story(): try: #locating story or profile pic story = wait.until(EC.presence_of_element_located((By.CLASS_NAME,"RR-M-.h5uC0"))) #check the profile photo size to find out whether a story is available height = driver.find_element_by_class_name('CfWVH').get_attribute('height') if int(height) == 166: print("Already seen the story") else: print("Viewing the story") driver.execute_script("arguments[0].click();",story) except: print("No story is available to view") #searching 'coding.ninjas' using the search_open_profile() function search_open_profile('coding.ninjas') #for checking the story Check_Story()
###Output Already seen the story
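###Markdown As a final housekeeping step, it is good Selenium practice to close the browser and end the WebDriver session once all the tasks are done:
###Code
#closing the browser and ending the WebDriver session
driver.quit()
###Output _____no_output_____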
examples/Genetic_algorithm_helper_functions/GA_final_version .ipynb
###Markdown Initialize and Load Data Initialize the first iteration ###Code np.random.seed(2) conc_array = np.random.dirichlet((1,1,1,1,1), 7) conc_array_actual = conc_array def perform_UV_vis(next_gen_conc, conc_array_actual, spectra_array_actual): current_gen_spectra = conc_to_spectra(next_gen_conc, sample_spectra[:,1:sample_conc.shape[1]+1]) conc_array_actual = np.vstack((conc_array_actual, next_gen_conc)) spectra_array_actual = np.vstack((spectra_array_actual, current_gen_spectra)) return current_gen_spectra, conc_array_actual, spectra_array_actual def export_to_csv(conc_array): sample_volume = 300 #uL conc_array = conc_array*sample_volume for i in range(conc_array.shape[0]): for j in range(conc_array.shape[1]): if conc_array[i,j] < 5: conc_array[i,j] = 0 conc_array = np.round(conc_array) df = pd.DataFrame(conc_array, columns =['red-stock', 'green-stock', 'blue-stock', 'yellow-stock', 'water-stock']) df.to_csv("concentration_array.csv", index = False) def import_from_excel(filename, conc_array_actual, spectra_array_actual): sample_spectra = pd.read_excel(filename) current_gen_spectra = np.asarray(sample_spectra) conc_array_actual = np.vstack((conc_array_actual, next_gen_conc)) spectra_array_actual = np.vstack((spectra_array_actual, current_gen_spectra)) return current_gen_spectra, conc_array_actual, spectra_array_actual ###Output _____no_output_____ ###Markdown Export Concentrations as CSV ###Code conc_array export_to_csv(conc_array) ###Output _____no_output_____ ###Markdown Import UV-Vis Spectra from Excel ###Code df = load_df(r'Spectra_iteration_0.xlsx') df = subtract_baseline(df, 'A8') df = normalize_df(df) df = df.drop(['A8'], axis = 1) current_gen_spectra = np.asarray(df) wavelength = current_gen_spectra[:,0] current_gen_spectra = current_gen_spectra[:,1:].T ###Output _____no_output_____ ###Markdown Load Desired Spectra ###Code df_desired = load_df(r'Target_spectra.xlsx') df_desired = subtract_baseline(df_desired, 'C2') df_desired = normalize_df(df_desired) df_desired = df_desired.drop(['C2'], axis = 1) x_test = df_desired['C1'].values.reshape(-1,1) ###Output _____no_output_____ ###Markdown Additional Steps for the Zeroth Iteration ###Code spectra_array = current_gen_spectra conc_array_actual = conc_array spectra_array_actual = spectra_array ###Output _____no_output_____ ###Markdown Analyze Fitness of Zeroth Iteration ###Code next_gen_conc, current_gen_spectra, median_fitness_list, max_fitness_list, iteration, mutation_rate_list, fitness_multiplier_list = zeroth_iteration(conc_array, spectra_array, x_test) plot_spectra(current_gen_spectra, x_test, wavelength, iteration) ###Output _____no_output_____ ###Markdown Nth Iteration ###Code Iterations = 25 Moves_ahead = 3 GA_iterations = 6 n_samples = 7 seed = np.random.randint(1,100,1)[0] mutation_rate, fitness_multiplier, mutation_rate_list_1, fitness_multiplier_list_1, best_move, best_move_turn, max_fitness, surrogate_score, next_gen_conc_1 = nth_iteration(Iterations, Moves_ahead, GA_iterations, n_samples, current_gen_spectra, next_gen_conc, x_test, conc_array_actual, spectra_array_actual, seed, median_fitness_list, max_fitness_list, iteration, mutation_rate_list, fitness_multiplier_list) best_move ###Output _____no_output_____ ###Markdown Run if satisfied with the best moves taken: ###Code next_gen_conc = next_gen_conc_1 mutation_rate_list = mutation_rate_list_1 fitness_multiplier_list = fitness_multiplier_list_1 ###Output _____no_output_____ ###Markdown Export Concentrations to CSV ###Code export_to_csv(next_gen_conc) ###Output 
_____no_output_____
###Markdown Create those samples using the OT2 and perform UV-Vis on them Import Spectra from Excel
###Code df = load_df(r'Spectra_iteration_2.xlsx') df = subtract_baseline(df, 'B8') df = normalize_df(df) df = df.drop(['B8'], axis = 1) current_gen_spectra = np.asarray(df) wavelength = current_gen_spectra[:,0] current_gen_spectra = current_gen_spectra[:,1:].T conc_array_actual = np.vstack((conc_array_actual, next_gen_conc)) spectra_array_actual = np.vstack((spectra_array_actual, current_gen_spectra))
###Output _____no_output_____
###Markdown Plot the maximum and median fitness of the spectra of the next batch of samples.
###Code median_fitness_list, max_fitness_list, iteration = plot_fitness(next_gen_conc, current_gen_spectra, x_test, median_fitness_list, max_fitness_list, iteration) plot_spectra(current_gen_spectra, x_test, wavelength, iteration) a = np.asarray([1,2,3]) b = np.asarray([4,5,6]) fig, ax = plt.subplots() ax.plot(a,b) ax.set_xticks(a)
###Output _____no_output_____
jupyter_notebooks/machine_learning/mastering_machine_learning/04. Model Evaluation/02. Evaluation Metrics.ipynb
###Markdown Evaluation Metrics So far, we have mainly used the $R^2$ metric to evaluate our models. There are many other evaluation metrics provided by scikit-learn, and they are all found in the metrics module. Score vs Error/Loss metrics Take a look at the [metrics module][1] in the API. You will see a number of different evaluation metrics for classification, clustering, and regression. Most of these end with either the word 'score' or 'error'/'loss'. Those functions that end in 'score' return a metric where **greater is better**. For example, the `r2_score` function returns $R^2$, in which a greater value corresponds with a better model. Other metrics that end in 'error' or 'loss' return a metric where **lesser is better**. That should make sense intuitively, as minimizing error or loss is what we naturally desire for our models. Regression Metrics Take a look at the regression metrics section of the scikit-learn API. These are all functions that accept the ground truth y values along with the predicted y values and return a metric. Let's see a few of these in action. We will read in the data, build a model with a few variables using one of the supervised regression models we've covered, and then use one of the metric functions. [1]: https://scikit-learn.org/stable/modules/classes.htmlsklearn-metrics-metrics
###Code import pandas as pd import numpy as np housing = pd.read_csv('../data/housing_sample.csv') X = housing[['GrLivArea', 'GarageArea', 'FullBath']] y = housing['SalePrice'] X.head()
###Output _____no_output_____
###Markdown Let's use a random forest to model the relationship between the input and sale price and complete our standard three-step process.
###Code from sklearn.ensemble import RandomForestRegressor rfr = RandomForestRegressor(n_estimators=50) rfr.fit(X, y);
###Output _____no_output_____
###Markdown First, use the built-in `score` method, which always returns $R^2$ for every regression estimator.
###Code rfr.score(X, y)
###Output _____no_output_____
###Markdown Let's verify that we can get the same result with the corresponding `r2_score` function from the metrics module. We need to get the predicted y-values and pass them along with the ground truth to the function.
###Code from sklearn.metrics import r2_score y_pred = rfr.predict(X) r2_score(y, y_pred)
###Output _____no_output_____
###Markdown Let's use a different metric such as mean squared error (MSE).
###Code from sklearn.metrics import mean_squared_error mean_squared_error(y, y_pred)
###Output _____no_output_____
###Markdown Easy to construct our own function Most of these metrics are easy to compute on your own. The function below computes the same result as above.
###Code def mse(y_true, y_pred): error = y_true - y_pred return np.mean(error ** 2) mse(y, y_pred)
###Output _____no_output_____
###Markdown Taking the square root of the MSE computes the root mean squared error (RMSE), which provides insight as to what the average error is, though it is theoretically going to be slightly larger than the average error. There is no function in scikit-learn to compute the RMSE. We can use the numpy `sqrt` function to calculate it.
###Code rmse = np.sqrt(mean_squared_error(y, y_pred)) rmse
###Output _____no_output_____
###Markdown The units of this metric are the same as the target variable, so we can think of our model as "averaging" about $18,000. The word averaging is in quotes because this isn't the actual average error, but will be somewhat near it.
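A tiny, hand-checkable example makes this concrete — squaring weights large errors more heavily, so the RMSE can never be smaller than the mean absolute error:
###Code
# Toy errors chosen by hand: the mean absolute error is (1 + 9) / 2 = 5.0,
# while the RMSE is sqrt((1 + 81) / 2) = sqrt(41), about 6.4 -- larger.
toy_errors = np.array([1, 9])
toy_mae = np.mean(np.abs(toy_errors))         # 5.0
toy_rmse = np.sqrt(np.mean(toy_errors ** 2))  # ~6.4
###Output _____no_output_____
###Markdown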
Use the `mean_absolute_error` to calculate the actual average error per observation.
###Code from sklearn.metrics import mean_absolute_error mean_absolute_error(y, y_pred)
###Output _____no_output_____
###Markdown We can compute this manually as well.
###Code (y - y_pred).abs().mean()
###Output _____no_output_____
###Markdown Different metrics with cross validation It is possible to use these scores when doing cross validation with the `cross_val_score` function. It has a `scoring` parameter that accepts a string naming the metric to be returned. Let's see an example with the default $R^2$ and then with other metrics. We use a linear regression here and continue to keep the data shuffled as before by setting the `random_state` parameter to 123.
###Code from sklearn.model_selection import cross_val_score, KFold from sklearn.linear_model import LinearRegression lr = LinearRegression() kf = KFold(n_splits=5, shuffle=True, random_state=123)
###Output _____no_output_____
###Markdown By default, if no scoring method is given, `cross_val_score` uses the same metric as the estimator's `score` method.
###Code cross_val_score(lr, X, y, cv=kf).round(2)
###Output _____no_output_____
###Markdown Use the string 'r2' to return $R^2$ values, which is the default and will be the same as above.
###Code cross_val_score(lr, X, y, cv=kf, scoring='r2').round(2)
###Output _____no_output_____
###Markdown Use the documentation to find the string names The possible strings for each metric are found in the [user guide section of the official documentation][1]. The string 'neg_mean_squared_error' is used to return the negative of the mean squared error. [1]: https://scikit-learn.org/stable/modules/model_evaluation.htmlcommon-cases-predefined-values
###Code cross_val_score(lr, X, y, cv=kf, scoring='neg_mean_squared_error')
###Output _____no_output_____
###Markdown Why are negative values returned? In an upcoming chapter, we cover model selection. scikit-learn selects models based on their scores and treats higher scores as better. But with mean squared error, lower scores are better. In order to make this score work with model selection, scikit-learn negates the value when doing cross validation so that higher scores are indeed better. For instance, a score of -9 is better than -10. Mean squared log error Another popular regression scoring metric is the mean squared log error. This works by taking the natural logarithm of one plus the predicted values and one plus the ground truth (i.e., `log1p`), then computing the error, squaring it, and taking the mean. Let's import the function from the metrics module and use it.
###Code from sklearn.metrics import mean_squared_log_error mean_squared_log_error(y, y_pred)
###Output _____no_output_____
###Markdown We can use the metric with `cross_val_score` by passing it the string 'neg_mean_squared_log_error'. Again, greater scores here are better.
###Code cross_val_score(lr, X, y, cv=kf, scoring='neg_mean_squared_log_error')
###Output _____no_output_____
###Markdown Finding the error metrics You can find all the error metrics by navigating to the scikit-learn API or the user guide, but you can also find them directly in the `SCORERS` dictionary in the `metrics` module. The keys of the dictionary are the string names of the metrics. If you are on Python 3.7 or later, the dictionary will be ordered. There are eight (as of now) regression metrics, and they are listed first. Let's take a look at their names.
###Code from sklearn.metrics import SCORERS list(SCORERS)[:8] ###Output _____no_output_____ ###Markdown Let's use the maximum error as our cross validation metric, which simply returns the maximum error of all the predictions. ###Code cross_val_score(lr, X, y, cv=kf, scoring='max_error').round(-3) ###Output _____no_output_____ ###Markdown Most of the built-in scoring metrics are for classification or clustering and not for regression. Let's find the total number of scoring metrics. ###Code len(SCORERS) ###Output _____no_output_____
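###Markdown The `SCORERS` dictionary only covers the built-in metrics. If you need something that isn't listed, such as the RMSE, the metrics module provides `make_scorer` to wrap any metric function for use with `cross_val_score`. A minimal sketch; the helper name `rmse_scorer` is just an illustration:
###Code
from sklearn.metrics import make_scorer

def rmse(y_true, y_pred):
    # root mean squared error, as computed earlier in this notebook
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# greater_is_better=False makes scikit-learn negate the value,
# just like the built-in 'neg_*' scorers
rmse_scorer = make_scorer(rmse, greater_is_better=False)
cross_val_score(lr, X, y, cv=kf, scoring=rmse_scorer)
###Output _____no_output_____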
courses/2020-LMU-RDM/NDA2015_nix_excercise/module2nix.ipynb
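###Markdown The cells in this notebook assume that a writable NIX file and one data block per recording session (nixf, b097, b108, b147, b151) were created in cells not included here. A minimal sketch of what that setup might look like; the variable names are taken from the code below, while the file mode, block type strings, and matplotlib alias are assumptions:
###Code
import numpy as np
import matplotlib.pyplot as plot
import nixio as nix   # older nixpy releases used 'import nix' directly

# hypothetical setup: a writable file plus one block per recording session
nixf = nix.File.open("module2x.h5", nix.FileMode.Overwrite)
b097 = nixf.create_block("joe097", "nix.session")
b108 = nixf.create_block("joe108", "nix.session")
b147 = nixf.create_block("joe147", "nix.session")
b151 = nixf.create_block("joe151", "nix.session")
###Output _____no_output_____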
###Markdown Metadata ###Code m_gen = nixf.create_section("General","odml.general") m_gen.create_property("Experimenter", values_or_dtype='Alexa Riehle') m_gen.create_property("Institution", values_or_dtype='CNRS, Marseille, France') m_gen.create_property("RelatedPublications", values_or_dtype='doi: 10.1523/JNEUROSCI.5441-08.2009') m_exp = nixf.create_section("Experiment","odml.experiment") m_exp.create_property("Task", values_or_dtype='DelayedCenterOut') m_subj = m_exp.create_section("Subject","odml.subject") m_subj.create_property("Name", values_or_dtype='Joe') m_subj.create_property("Species", values_or_dtype='Macaca mulatta') m_subj.create_property("Sex", values_or_dtype='male') m_rec = m_exp.create_section("Recording","odml.recording") m_rec.create_property("BrainArea", values_or_dtype='M1') m_rec.create_property("RecordingType", values_or_dtype='extracellular') m_rec.create_property("SpikeSortingMethod", values_or_dtype='WindowDiscriminator') # trial conditions: condnames = {1 : "full", 2 : "2 of 6", 3 : "3 of 6"} m_cond = m_exp.create_section("TrialConditions","odml.conditions") def mkcond(cond, target): condname = "condition %d target %d" % (cond,target) sec = m_cond.create_section(condname, "odml.section") sec.create_property("BehavioralCondition", values_or_dtype=cond) sec.create_property("BehavioralConditionName", values_or_dtype=condnames[cond]) sec.create_property("Target", values_or_dtype=target) return sec m_infiles = {"joe097":"joe097-23457.gdf", "joe108":"joe108-124567.gdf", "joe147":"joe147-12467.gdf", "joe151":"joe151-12346.gdf"} #m_mlfiles = {"joe097":"joe097-5-C3-MO.mat", "joe108":"joe108-4-C3-MO.mat", "joe147":"", "joe151":""} m_conds = [[mkcond(c,t) for t in range(1,7) ] for c in range(1,4) ] m_sess = nixf.create_section("Sessions","odml.section") for sess in ["joe097", "joe108", "joe147", "joe151" ]: m_s1 = m_sess.create_section(sess,"odml.session") m_s1id = m_s1.create_property("SessionID", values_or_dtype=sess) m_s1infile = m_s1.create_property("InputFile", values_or_dtype=m_infiles[sess]) m_s1subject = m_s1.create_section("Subject","odml.subject") m_s1subject.link = m_s1subject m_s1conds = m_s1.create_section("TrialConditions","odml.conditions") m_s1conds.link = m_cond def mkunit(sblock,unitid): print(sblock.name) # create single unit su = sblock.create_source("Unit %d" % (unitid), "nix.ephys.unit") su.definition = "Single unit" # count trials as they appear trialcnt = 0 # read all data for target in range(1, 7): # load spike matrix smxdata = np.loadtxt('asciidata/%s-%d-C3-MO_spikematrix_%02d.dat' % (sblock.name,unitid,target), dtype=int) # load motion end times medata = np.loadtxt('asciidata/%s-%d-C3-MO_MEevents_%02d.dat' % (sblock.name,unitid,target), dtype=float) medata = medata - 1000. 
# motion end time is stored as array index, so subtract the time offset # load trial start times tsdata = np.loadtxt('asciidata/%s-%d-C3-TS_MOevents_%02d.dat' % (sblock.name,unitid,target), dtype=float) tsdata = -tsdata # calculate trial start relative to motion from MO time in TS-aligned data # load spike times stf = open('asciidata/%s-%d-C3-MO_spiketrains_%02d.dat' % (sblock.name,unitid,target), 'r') stdata = [] for line in stf: st = [int(i)-1000 for i in line.split()] # shift by -1000 ms for alignment to MO stdata.append(st) stf.close() #_for line # # data array of all trials with this target; time dim first, as in Jans data spikeactivity = sblock.create_data_array("SpikeActivity Unit %d Target %d" % (unitid,target),"nix.timeseries.binary",data=smxdata.T) spikeactivity.definition = "Array of spike occurrences aligned to movement onset (MO)" sa_dim1 = spikeactivity.append_sampled_dimension(1.) # 1 ms sampling interval sa_dim1.offset = -1000. # data aligned to MO sa_dim1.unit = "ms" sa_dim1.label = "time" sa_dim2 = spikeactivity.append_set_dimension() # trials sa_dim1.label = "trial" spikeactivity.sources.append(su) # mov tag # this is not so great because the need to define the positions an extents as dataarrays, # so stick to the movement epochs as tags for each trial #movtag = filter(lambda x: x.name == "Arm movement epochs for Target %d" % (target), sblock.multi_tags ) #if not movtag: # MOlst = sblock.create_data_array("MO times for Target %d" % (target), "nix.positions", data=[[0.0,tr] for tr in range(0,smxdata.shape[0])]) # MOdim1 = MOlst.append_sampled_dimension(1.) # MOdim1.unit = "ms" # MOdim1.label = "time" # MOdim2 = MOlst.append_set_dimension() # MOdim2.label = "trial" # MElst = sblock.create_data_array("Movement durations for Target %d" % (target), "nix.positions", data=[[medata[tr],0] for tr in range(0,smxdata.shape[0])]) # MEdim1 = MElst.append_sampled_dimension(1.) # MEdim1.unit = "ms" # MEdim1.label = "time" # MEdim2 = MElst.append_set_dimension() # MEdim2.label = "trial" # mov = sblock.create_multi_tag("Arm movement epochs for Target %d" % (target), "nix.epoch", MOlst) # mov.definition = "Epochs between detected movement onset (MO) and movement end (ME)" # mov.extents = MElst # mov.units = ["ms",] #else: # mov = movtag[0] #mov.references.append(spikeactivity) #~~~~ # loop over all trials for this target for tr in range(0,smxdata.shape[0]): trialcnt += 1 #spikeactivity = sblock.create_data_array("SpikeActivity Unit %d Trial %03d" % (unitid,trialcnt),"nix.timeseries.binary",data=smxdata[tr]) #spikeactivity.definition = "Array of spike occurrences aligned to movement onset (MO)" #sa_dim.offset = -1000. 
# data aligned to MO #sa_dim.unit = "ms" #sa_dim.label = "time" # array of spike times spiketimes = sblock.create_data_array("SpikeTimes Unit %d Trial %03d" % (unitid,trialcnt),"nix.spiketimes",data=stdata[tr]) spiketimes.definition = "Spike times aligned to movement onset (MO)" spiketimes.append_set_dimension() spiketimes.unit = "ms" spiketimes.label = "spikes" # spike train as multitag #spikepos = [[x,tr] for x in stdata[tr]] #spiketrain = sblock.create_multi_tag("Spiketrain Unit %d Trial %03d" % (unitid,trialcnt), "nix.spiketrain",spikepos) #spiketrain.definition = "Spike times aligned to movement onset (MO)" #spiketrain.references.append(spikeactivity) # assign sources spiketimes.sources.append(su) #spiketrain.sources.append(su) # assign metadata spikeactivity.metadata = m_conds[2][target-1] # so far all data are 'C3' -> index 2 in conds spiketimes.metadata = m_conds[2][target-1] # trial as tag trialtag = list(filter(lambda x: x.name == "Trial %03d" % (trialcnt), sblock.tags )) if not trialtag: trial = sblock.create_tag("Trial %03d" % (trialcnt), "nix.trial",[tsdata[tr],tr]) trial.definition = "Trial start (TS) relative to motion onset (MO)" trial.extent = [3000.,0] # trial length of 3000ms is arbitrary trial.units = ["ms"] trial.metadata = m_conds[2][target-1] else: trial = trialtag[0] trial.references.append(spikeactivity) # arm movement period as tag movtag = list(filter(lambda x: x.name == "Arm movement epoch Trial %03d" % (trialcnt), sblock.tags )) if not movtag: mov = sblock.create_tag("Arm movement epoch Trial %03d" % (trialcnt), "nix.epoch",[0.0,tr]) mov.definition = "Epoch between detected movement onset (MO) and movement end (ME)" mov.extent = [medata[tr],0] # because motion onset is at 0.0, duration is equal to end time mov.units = ["ms",] else: mov = movtag[0] mov.references.append(spikeactivity) #_for tr #_for target #_def mkunit mkunit(b097, 5) mkunit(b108, 4) mkunit(b108, 7) mkunit(b147, 1) mkunit(b151, 1) nixf.close() ###Output _____no_output_____ ###Markdown read data from file ###Code nixf = nix.File.open("module2x.h5", nix.FileMode.ReadOnly) nixf.blocks ###Output _____no_output_____ ###Markdown get overview of file contents ###Code for b in nixf.blocks: tlst = list(filter( lambda x : x.type == "nix.trial", b.tags)) print('%s: %d trials' % (b.name,len(tlst))) for s in b.sources: print('\t%s ' % s.name) ###Output _____no_output_____ ###Markdown select some data from one of the blocks ###Code b108 = nixf.blocks["joe108"] b108.sources dalst = filter(lambda x: ("SpikeActivity" in x.name) & (filter(lambda s: s.name == "Unit 7", x.sources) != []) & (x.metadata['Target'] == 2) & (x.metadata['BehavioralCondition'] == 3), b108.data_arrays) dalst = list(dalst) print(dalst) len(dalst) dalst[0].shape spikedata = dalst[0] spikedata.shape yyy = np.nonzero(spikedata) [tind,jind] = np.nonzero(spikedata) plot.scatter(tind, jind) [jbla, tbla] = np.nonzero(np.array(spikedata).T) plot.scatter(tbla, jbla) nixf.close() ###Output _____no_output_____
preprocessing_split_train_test.ipynb
###Markdown Split datasets into train, validation, and test This module can be used for splitting datasets. You need to modify the ratio of train, validation, and test, and you can modify the output directory you want and the input directory you have.
###Code
# -*- coding: utf-8 -*-
"""
Split datasets into train, validation, and test

This module can be used for splitting datasets. You need to modify
the ratio of the train, validation, and test datasets. And you can modify
the output directory you want and the input directory you have.

################################################################################
# Author: Weikun Han <[email protected]>
# Create Date: 03/6/2018
# Update:
# Reference: https://github.com/jhetherly/EnglishSpeechUpsampler
################################################################################
"""

import os
import csv
import numpy as np

def write_csv(filename, pairs):
    """The function to write pairs to a .csv file

    Args:
        param1 (str): filename
        param2 (list): pairs
    """
    with open(filename, 'w') as csvfile:
        writer = csv.writer(csvfile)
        for n in pairs:
            writer.writerow(n)

if __name__ == '__main__':

    # Please modify input path to locate your file
    DATASETS_ROOT_DIR = './datasets'
    OUTPUT_DIR = os.path.join(DATASETS_ROOT_DIR, 'final_dataset')

    # Define ratio for train, validation, and test datasets
    train_fraction = 0.6
    validation_fraction = 0.2
    test_fraction = 0.2

    # Reset random generator
    np.random.seed(0)

    # Check location to save datasets
    if not os.path.exists(OUTPUT_DIR):
        os.makedirs(OUTPUT_DIR)
    print('Will send .csv dataset to {}'.format(OUTPUT_DIR))

    # Create list to store each original and noise file name pair
    original_noise_pairs = []
    input_original_path = os.path.join(DATASETS_ROOT_DIR, 'TEDLIUM_5S')
    input_noise_path = os.path.join(DATASETS_ROOT_DIR, 'TEDLIUM_noise_sample_5S')
    for filename in os.listdir(input_original_path):

        # Link same filename in noise path
        filename_component = filename.split('_')
        filename_noise = (filename_component[0] +
                          '_' +
                          filename_component[1] +
                          '_' +
                          'noise_sample' +
                          '_' +
                          filename_component[2])
        input_original_filename = os.path.join(input_original_path, filename)
        input_noise_filename = os.path.join(input_noise_path, filename_noise)
        if not os.path.isfile(input_original_filename):
            continue
        original_noise_pairs.append(
            [input_original_filename, input_noise_filename])

    # Shuffle the datasets
    np.random.shuffle(original_noise_pairs)
    datasets_size = len(original_noise_pairs)

    # Create indices for each split
    validation_start_index = 0
    validation_end_index = (validation_start_index +
                            int(datasets_size * validation_fraction))
    test_start_index = validation_end_index
    test_end_index = (test_start_index +
                      int(datasets_size * test_fraction))
    train_start_index = test_end_index

    # Save pairs into .csv
    validation_original_noise_pairs = original_noise_pairs[
        validation_start_index:validation_end_index]
    write_csv(os.path.join(OUTPUT_DIR, 'validation_files.csv'),
              validation_original_noise_pairs)
    test_original_noise_pairs = original_noise_pairs[
        test_start_index:test_end_index]
    write_csv(os.path.join(OUTPUT_DIR, 'test_files.csv'),
              test_original_noise_pairs)
    train_original_noise_pairs = original_noise_pairs[
        train_start_index:]
    # write only the training slice (the original wrote the full, unsplit list here)
    write_csv(os.path.join(OUTPUT_DIR, 'train_files.csv'),
              train_original_noise_pairs)
###Output Will send .csv dataset to ./datasets/final_dataset
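###Markdown A quick sanity check after running the script, reading one of the generated files back. This is a sketch assuming the default ./datasets layout used above:
###Code
import csv

# each row is an (original, noise) file-path pair; there is no header row
with open('./datasets/final_dataset/train_files.csv') as f:
    rows = list(csv.reader(f))
print('train pairs: {}'.format(len(rows)))
print(rows[0])  # [original_path, noise_path]
###Output _____no_output_____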
Benchmarking Sorting Algorithms.ipynb
###Markdown Introduction Sorting is organising data in ascending or descending order. This project will take a comparative look at 6 sorting algorithms (Bubble Sort, Merge Sort, Counting Sort, Quick Sort, Insertion Sort, BogoSort). It is in two parts: first an overview of each algorithm, and then the benchmarking application of the sorting algorithms. Sorting Algorithms Overview:
1. How it works
2. Performance or Time complexity. Time Complexity is the computational complexity that describes the amount of time it takes to run an algorithm. (source: https://en.wikipedia.org/wiki/Time_complexity)
3. An example diagram of how it works
4. A python example of the selected algorithm (with comments)
This project will also highlight the different sorting methods used in each algorithm, whether they are comparison based or non-comparison based. The Benchmarking Application Using python (https://www.python.org/), random number arrays were created ranging in size from 100 to 50,000. The sorting algorithms each run through the arrays and the timings are captured using Python's time module (https://docs.python.org/3/library/time.html). These timings are collected into a table using the pandas library https://pandas.pydata.org/. The timings are then benchmarked against one another in a plot using Seaborn https://seaborn.pydata.org/ and Matplotlib https://matplotlib.org/. The results of the benchmarking application are discussed to see if they match the expected output. This project is written using a Jupyter notebook (https://jupyter.org/) using external python files. Sorting Algorithms 1. Bubble Sort (A simple comparison-based sort) Bubble sort is a simple sorting algorithm (source: https://en.wikipedia.org/wiki/Bubble_sort), named for the way larger values bubble up to the top. It is a comparison based sorting algorithm: it steps through the list, comparing adjacent pairs and swapping them if they are in the wrong order. How it works:
1. It starts at the beginning of the dataset, compares the first two elements, and if the first is greater it swaps them.
2. It continues doing this for each adjacent pair to the end of the list, then starts a new pass; passes repeat until no swaps are needed.
Performance Bubble sort has a worst-case and average time complexity of O(n²), where n is the number of items being sorted. When the list is already sorted (best case), the complexity of bubble sort is only O(n). In the case of a large dataset, Bubble sort should be avoided. It is not very practical or efficient and rarely used in the real world. Bubble sort in action https://www.youtube.com/watch?v=lyZQPjUT5B4&feature=youtu.be Bubble Sort Diagram![title](bubblesort.png)
###Code
# code sourced from http://interactivepython.org/runestone/static/pythonds/SortSearch/TheBubbleSort.html
def bubbleSort(alist):
    # each pass bubbles the largest remaining value to the end of the list
    for passnum in range(len(alist)-1, 0, -1):
        for i in range(passnum):
            if alist[i] > alist[i+1]:
                # swap the out-of-order pair
                temp = alist[i]
                alist[i] = alist[i+1]
                alist[i+1] = temp

alist = [54,26,93,17,77]
bubbleSort(alist)
print(alist)
###Output [17, 26, 54, 77, 93]
###Markdown 2. Merge Sort (An efficient comparison-based sort) Merge sort is a recursive divide and conquer algorithm that was invented by John von Neumann in 1945. This algorithm breaks the array down into sublists until each contains just a single element, then merges them back together until they are sorted. (https://en.wikipedia.org/wiki/Merge_sort) How it works:
1. It starts by breaking down the list into sublists until each sublist contains just one element.
2. Repeatedly merging the sublists produces new sorted sublists until there is only one sublist remaining.
Performance In sorting n objects, merge sort has an average and worst-case performance of O(n log n). Its best, worst and average cases are very similar, making it a good choice for predictable running behaviour. (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology) Merge sort in action: https://www.youtube.com/watch?v=XaqR3G_NVoo Merge Sort Diagram![title](mergesort.png)
###Code
# code sourced from http://interactivepython.org/runestone/static/pythonds/SortSearch/TheMergeSort.html
def mergeSort(alist):
    # print("Splitting ",alist)
    # only lists longer than one element need to be split and merged
    if len(alist) > 1:
        # mid is the length of the list divided by 2
        mid = len(alist)//2
        # left half is the first slice
        lefthalf = alist[:mid]
        # right half is the second slice
        righthalf = alist[mid:]
        # recursively sort the left half
        mergeSort(lefthalf)
        # recursively sort the right half
        mergeSort(righthalf)

        i = 0
        j = 0
        k = 0
        # merge the sorted halves lefthalf and righthalf back into alist
        while i < len(lefthalf) and j < len(righthalf):
            if lefthalf[i] < righthalf[j]:
                alist[k] = lefthalf[i]
                i = i+1
            else:
                alist[k] = righthalf[j]
                j = j+1
            k = k+1
        # copy any remaining elements of the left half
        while i < len(lefthalf):
            alist[k] = lefthalf[i]
            i = i+1
            k = k+1
        # copy any remaining elements of the right half
        while j < len(righthalf):
            alist[k] = righthalf[j]
            j = j+1
            k = k+1
    # print("Merging ",alist)

alist = [54,26,93,17,77,31,44,55,20]
mergeSort(alist)
print(alist)
###Output [17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown 3. Counting Sort (A non-comparison sort) Invented by Harold H. Seward in 1954 (source: https://en.wikipedia.org/wiki/Counting_sort). Counting sort is a technique based on key values (a kind of hashing); it then does some arithmetic to calculate the position of each object in the output sequence. (https://www.geeksforgeeks.org/counting-sort/) How it works (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology):
1. Determine the key range k in the input array (if not already known)
2. Initialise an array count of size k, which will be used to count the number of times that each key value appears in the input instance.
3. Initialise an array result of size n, which will be used to store the sorted output.
4. Iterate through the input array, and record the number of times each distinct key value occurs in the input instance.
5. Construct the sorted result array, based on the histogram of key frequencies stored in count. Refer to the ordering of keys in input to ensure that stability is preserved.
Performance Best-, worst- and average-case time complexity of O(n + k); space complexity is also O(n + k). (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology) Counting Sort Diagram![title](countsort.png)
###Code
# code sourced from http://www.learntosolveit.com/python/algorithm_countingsort.html
def counting_sort(array, maxval):
    """in-place counting sort"""
    n = len(array)
    m = maxval + 1
    count = [0] * m               # init with zeros
    for a in array:
        count[a] += 1             # count occurrences
    i = 0
    for a in range(m):            # emit
        for c in range(count[a]): # - emit 'count[a]' copies of 'a'
            array[i] = a
            i += 1
    return array

print(counting_sort( alist, 93 ))
###Output [17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown 4. Quick Sort Quicksort was developed by British computer scientist Tony Hoare in 1959. It is a recursive divide and conquer algorithm.
Due to its efficiency, it is still a commonly used algorithm for sorting. (https://en.wikipedia.org/wiki/Quicksort) How it works (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology):
1. Pivot selection: Pick an element, called a "pivot", from the array
2. Partitioning: reorder the array so that elements with values < the pivot come before it, while all elements with values ≥ the pivot come after it. After this partitioning, the pivot is in its final position.
3. Recursion: apply steps 1 and 2 above recursively to each of the two subarrays
Performance It has a worst case of O(n²) (rare), an average case of O(n log n), and a best case of O(n log n). Memory usage: O(n) (variants exist with O(n log n)). (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 3, Galway-Mayo Institute of Technology) Quick Sort Diagram![title](quicksort.png)
###Code
# code sourced from http://interactivepython.org/runestone/static/pythonds/SortSearch/TheQuickSort.html
def quickSort(alist):
    quickSortHelper(alist, 0, len(alist)-1)

def quickSortHelper(alist, first, last):
    if first < last:
        # partition the list and recurse on both sides of the split point
        splitpoint = partition(alist, first, last)
        quickSortHelper(alist, first, splitpoint-1)
        quickSortHelper(alist, splitpoint+1, last)

def partition(alist, first, last):
    # use the first element as the pivot value
    pivotvalue = alist[first]
    leftmark = first+1
    rightmark = last
    done = False
    while not done:
        # move leftmark right past values <= pivot
        while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
            leftmark = leftmark + 1
        # move rightmark left past values >= pivot
        while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
            rightmark = rightmark - 1
        if rightmark < leftmark:
            done = True
        else:
            # swap the two out-of-place values
            temp = alist[leftmark]
            alist[leftmark] = alist[rightmark]
            alist[rightmark] = temp
    # put the pivot into its final position
    temp = alist[first]
    alist[first] = alist[rightmark]
    alist[rightmark] = temp
    return rightmark

alist = [54,26,93,17,77,31,44,55,20]
quickSort(alist)
print(alist)
###Output [17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown 5. Insertion Sort Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. (https://en.wikipedia.org/wiki/Insertion_sort) It sorts similar to the way card players sort the cards in their hand. (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 2, Galway-Mayo Institute of Technology) How it works (Source: P.Mannion (2019) Week 10: Sorting Algorithms Part 2, Galway-Mayo Institute of Technology):
1. Start from the left of the array, and set the "key" as the element at index 1. Move any elements to the left which are > the "key" right by one position, and insert the "key".
2. Set the "key" as the element at index 2. Move any elements to the left which are > the key right by one position and insert the key.
3. Set the "key" as the element at index 3. Move any elements to the left which are > the key right by one position and insert the key.
4. …
5. Set the "key" as the element at index n-1. Move any elements to the left which are > the key right by one position and insert the key.
https://www.youtube.com/watch?v=ROalU379l3U Performance This algorithm works well on small lists and lists that are close to sorted. The best case is an array that is already sorted, in which case insertion sort has a run time of O(n). The worst case is an array where no numbers are in sorted order, giving a run time of O(n²). The average is also O(n²).
https://en.wikipedia.org/wiki/Insertion_sort Insertion Sort Diagram![title](insertionsort.png)
###Code
def insertionSort(alist):
    # grow a sorted region at the front of the list, one element at a time
    for index in range(1, len(alist)):
        currentvalue = alist[index]
        position = index
        # shift larger values in the sorted region one place to the right
        while position > 0 and alist[position-1] > currentvalue:
            alist[position] = alist[position-1]
            position = position-1
        # insert the current value into its correct slot
        alist[position] = currentvalue

alist = [54,26,93,17,77,31,44,55,20]
insertionSort(alist)
print(alist)
###Output [17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown 6. BogoSort Bogosort is a highly inefficient sorting algorithm, also known as slowsort. https://en.wikipedia.org/wiki/Bogosort How it works:
1. It randomly shuffles the array until it happens to be sorted.
Performance The best case occurs if the list as given is already sorted; on average it needs on the order of n·n! operations, and the worst case is unbounded. https://www.youtube.com/watch?v=CSe0MWDLevA BogoSort Diagram![title](bogosort.png)
###Code
# Python program for implementation of Bogo Sort
import random

# Sorts array a[0..n-1] using Bogo sort
def bogoSort(alist):
    n = len(alist)
    while (is_sorted(alist) == False):
        shuffle(alist)

# To check if array is sorted or not
def is_sorted(alist):
    n = len(alist)
    for i in range(0, n-1):
        if (alist[i] > alist[i+1]):
            return False
    return True

# To generate a random permutation of the array
def shuffle(alist):
    n = len(alist)
    for i in range(0, n):
        r = random.randint(0, n-1)
        alist[i], alist[r] = alist[r], alist[i]

alist = [54,26,93,17,77,31,44,55,20]
bogoSort(alist)
print(alist)
###Output [17, 20, 26, 31, 44, 54, 55, 77, 93]
###Markdown Implementation & Benchmarking For this section, a function will be defined to call each sorting function defined above:
1. Bubble Sort
2. Merge Sort
3. Counting Sort
4. Quick Sort
5. Insertion Sort
6. Bogosort
Firstly, arrays are generated with random numbers using randint from Python's random library (https://docs.python.org/2/library/random.html). These will be used to test the speed and efficiency of the algorithms.
###Code
# code sourced from project example
# Creating an array using randint from random
from random import *

# creating a random array; the function takes in the array length n
def random_array(n):
    # create an array variable
    array = []
    # if n = 5, i takes the values 0,1,2,3,4
    for i in range(0, n, 1):
        # add random integers between 0 and 100 to the array
        array.append(randint(0,100))
    return array

# assign random arrays of increasing size
alist = random_array(100)
alist1 = random_array(250)
alist2 = random_array(500)
alist3 = random_array(750)
alist4 = random_array(1000)
alist5 = random_array(1250)
alist6 = random_array(2500)
alist7 = random_array(3570)
alist8 = random_array(5000)
alist9 = random_array(6250)
alist10 = random_array(7500)
alist11 = random_array(8750)
alist12 = random_array(10000)
###Output _____no_output_____
###Markdown Benchmarking Multiple Statistical Runs (Reference: Week 12 - 08 to 12 April 2019 lecture notes) Using the time module (https://docs.python.org/3/library/time.html), a start time and end time for each function are recorded, and the elapsed time is what is reported. The random arrays defined above are used to test the performance of each algorithm. 1. Benchmarking Bubble Sort
###Code
#benchmark_bubblesort.py
from benchmark_bubblesort import *
###Output _____no_output_____
###Markdown 2. Benchmarking Merge Sort
###Code
#benchmark_mergesort.py
from benchmark_mergesort import *
###Output _____no_output_____
###Markdown 3. Benchmarking Counting Sort
###Code
#benchmark_countingsort.py
from benchmark_countingsort import *
###Output _____no_output_____
###Markdown 4.
Benchmarking Quick sort
###Code
#benchmark_quicksort.py
from benchmark_quicksort import *
###Output [0.0, 0.002, 0.005, 0.009, 0.015, 0.024, 0.042, 0.069, 0.111, 0.166, 0.237, 0.326, 0.432, 1.97]
###Markdown 5. Benchmarking Insertion sort
###Code
#benchmark_insertionsort.py
from benchmark_insertionsort import *
###Output _____no_output_____
###Markdown 6. Benchmarking bogosort
###Code
#benchmark_bogosort.py
from benchmark_bogosort import *
###Output _____no_output_____
###Markdown Create a table for the results Using the benchmark timings for each sorting algorithm, a table was created with the pandas library https://pandas.pydata.org/
###Code
import pandas as pd
import numpy as np

df = pd.DataFrame(columns = ['Size','Bubble Sort', 'Merge Sort', 'Counting sort', 'Quick sort', 'Insertion sort', 'BogoSort'])
df['Size'] = [100, 250, 500, 750, 1000, 1250, 2500, 3570, 5000, 6250, 7500, 8750, 10000, 50000]
df['Bubble Sort'] = bubble_avglist
df['Merge Sort'] = mergesort_avglist
df['Counting sort'] = countsort_avglist
df['Quick sort'] = quicksort_avglist
df['Insertion sort'] = insertsort_avglist
df['BogoSort'] = bogosort_avg
df
###Output _____no_output_____
###Markdown Summary Statistics Summary statistics can give an elegant overview of your data. You can clearly see that the slowest algorithm was Bubble Sort.
###Code
summary = df.describe()
summary = summary.transpose()
summary
###Output _____no_output_____
###Markdown Plotting the sorting algorithms' timings in a graph Seaborn https://seaborn.pydata.org/ and Matplotlib https://matplotlib.org/ were used to generate a data visualisation of the algorithms
###Code
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style="whitegrid", palette="husl", rc={'figure.figsize':(14,16)})
title="Benchmarking Sorting Algorithms"

# Bubble Sort
bubble = sns.lineplot( x="Size", y="Bubble Sort", data=df, marker='o', label='Bubble Sort')
# Merge sort
merge = sns.lineplot( x="Size", y="Merge Sort", data=df, marker='o', label='Merge Sort')
# Counting sort
counting = sns.lineplot( x="Size", y="Counting sort", marker='o', data=df, label="Counting Sort")
# Quick sort
quick = sns.lineplot( x="Size", y="Quick sort", data=df, marker='o',label="Quick Sort")
# Insertion sort
insert = sns.lineplot( x="Size", y="Insertion sort", data=df, marker='o', label="Insertion Sort")
# BogoSort
bogo = sns.lineplot( x="Size", y="BogoSort", data=df, marker='o', label="BogoSort")

plt.xlabel('Input size n', fontsize=16)
plt.ylabel('Running Time in seconds',fontsize=16)
# Increasing font size
plt.title(title, fontsize=26)
# Show the plot
plt.show()

import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style="whitegrid", palette="husl", rc={'figure.figsize':(14,14)})
title="Benchmarking Sorting Algorithms (closer look)"

# Bubble Sort (excluded here so the faster algorithms are visible)
#bubble = sns.lineplot( x="Size", y="Bubble Sort", data=df, marker='o', label='Bubble Sort')
# Merge sort
merge = sns.lineplot( x="Size", y="Merge Sort", data=df, marker='o', label='Merge Sort')
# Counting sort
counting = sns.lineplot( x="Size", y="Counting sort", marker='o', data=df, label="Counting Sort")
# Quick sort
quick = sns.lineplot( x="Size", y="Quick sort", data=df, marker='o',label="Quick Sort")
# Insertion sort
insert = sns.lineplot( x="Size", y="Insertion sort", data=df, marker='o', label="Insertion Sort")
# BogoSort
bogo = sns.lineplot( x="Size", y="BogoSort", data=df, marker='o', label="BogoSort")

plt.xlabel('Input size n', fontsize=16)
plt.ylabel('Running Time in seconds',fontsize=16)
#
Increasing font size plt.title(title, fontsize=26) # Show the plot plt.show() ###Output _____no_output_____
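###Markdown The benchmark_*.py helper files imported above are not shown in the notebook. A minimal sketch of the timing pattern they presumably implement, using the time module as described; the module name and the variable bubble_avglist are taken from the imports and the table code, while everything else (number of runs, rounding) is an assumption. In the actual files, the arrays and sorting functions would need to be defined or imported within the module:
###Code
# benchmark_bubblesort.py (hypothetical reconstruction)
import time

bubble_avglist = []
for arr in [alist, alist1, alist2, alist3, alist4, alist5, alist6,
            alist7, alist8, alist9, alist10, alist11, alist12]:
    runs = []
    for _ in range(10):            # multiple statistical runs
        trial = list(arr)          # sort a fresh copy each run
        start = time.time()
        bubbleSort(trial)
        runs.append(time.time() - start)
    # keep the average elapsed time, rounded to 3 decimal places
    bubble_avglist.append(round(sum(runs) / len(runs), 3))
###Output _____no_output_____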
jupyterNotebooks/.ipynb_checkpoints/A1 core module notebook-checkpoint.ipynb
###Markdown A1: core module notebook Introduction This notebook includes a description of the 'core' python module in the JBEI Quantitative Metabolic Modeling (QMM) library. A description and demonstration of the different classes can be found below. Setup First, we need to set the path and environment variable properly:
###Code
%matplotlib inline

import sys, os
pythonPath = "/scratch/david.ando/quantmodel/code/core"
if pythonPath not in sys.path:
    sys.path.append('/scratch/david.ando/quantmodel/code/core')
os.environ["QUANTMODELPATH"] = '/scratch/david.ando/quantmodel'
###Output _____no_output_____
###Markdown Importing the required modules for the demo:
###Code
from IPython.display import Image
import core, FluxModels
import os
###Output _____no_output_____
###Markdown Classes description Metabolite related classes metabolite class The *metabolite* class is used to store all information related to a metabolite. For example, the following instantiation:
###Code
ala = core.Metabolite('ala-L', ncarbons=3, source=True, feed='100% 1-C', destination=False, formula='C3H7NO2')
###Output _____no_output_____
###Markdown creates a metabolite with name 'ala-L' and 3 carbon atoms, which is the source of labeling (fed as 100% labeled in the first carbon), is not a destination (measured) metabolite, and has a composition formula equal to 'C3H7NO2'. The **generateEMU** function creates the corresponding Elementary Metabolite Unit (EMU):
###Code
ala.generateEMU([2])
###Output _____no_output_____
###Markdown In this case the EMU contains the first and last carbon in alanine. The input ([2]) specifies which carbons to exclude:
###Code
ala.generateEMU([2,3])
###Output _____no_output_____
###Markdown reactant and product classes *Reactant* and *product* are classes derived from metabolite; the only difference is that they represent metabolites in the context of a reaction.
Hence, the stoichiometry of the metabolite and the labeling pattern in that reaction are included:
###Code
R_ala = core.Reactant(ala, 1, 'abc')
###Output _____no_output_____
###Markdown Notice that the stoichiometry information (1, meaning only 1 molecule participates in the reaction) and the labeling data ('abc', one part of the labeling pattern, see below) only make sense in the context of a reaction, so they are not included in the metabolite class. Both classes are derived from metabolites, so they inherit their methods:
###Code
R_ala.generateEMU([2,3])
###Output _____no_output_____
###Markdown Reaction related classes reaction class The *reaction* class produces a reaction instance:
###Code
# Create reactant metabolites
coa_c = core.Metabolite('coa_c')
nad_c = core.Metabolite('nad_c')
pyr_c = core.Metabolite('pyr_c')

# Convert into reactants
Rcoa_c = core.Reactant(coa_c, 1.0)
Rnad_c = core.Reactant(nad_c, 1.0)
Rpyr_c = core.Reactant(pyr_c, 1.0)

# Create product metabolites
accoa_c = core.Metabolite('accoa_c')
co2_c = core.Metabolite('co2_c')
nadh_c = core.Metabolite('nadh_c')

# Convert into products
Raccoa_c = core.Product(accoa_c, 1.0)
Rco2_c = core.Product(co2_c, 1.0)
Rnadh_c = core.Product(nadh_c, 1.0)

# Create reaction
PDH = core.Reaction('PDH',reactants=[Rcoa_c,Rnad_c,Rpyr_c] , products=[Raccoa_c,Rco2_c,Rnadh_c] ,subsystem='S_GlycolysisGluconeogenesis')
###Output _____no_output_____
###Markdown Reactions can also be initialized from a string:
###Code
PDH2 = core.Reaction.from_string('PDH : coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c ')
###Output _____no_output_____
###Markdown The *reaction* class contains some useful functions such as: **stoichLine**, to obtain the stoichiometric line for the reaction:
###Code
print PDH.stoichLine()
print PDH2.stoichLine()
###Output PDH : coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c
PDH : coa_c + nad_c + pyr_c --> accoa_c + co2_c + nadh_c
###Markdown **getReactDict** produces a dictionary of reactants:
###Code
PDH.getReactDict()
###Output _____no_output_____
###Markdown **getProdDict** produces a dictionary of products:
###Code
PDH.getProdDict()
###Output _____no_output_____
###Markdown Elementary Metabolite Unit (EMU) related classes Elementary Metabolite Units (or EMUs) of a compound are the molecule parts (moieties) comprising any distinct subset of the compound's atoms (Antoniewicz MR, Kelleher JK, Stephanopoulos G: Elementary metabolite units (EMU): a novel framework for modeling isotopic distributions. Metab Eng 2007, 9:68-86.). For example, cit$_{123}$ represents the first 3 carbon atoms in the citrate molecule.
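###Markdown Since an EMU can be any non-empty subset of a compound's atoms, a metabolite with $n$ carbon atoms has $2^n - 1$ distinct carbon EMUs. A quick check for a 3-carbon metabolite such as alanine:
###Code
print 2**3 - 1   # 7 possible carbon EMUs for a 3-carbon metabolite
###Output 7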
EMU class The *EMU* class holds and manipulates EMUs:
###Code
cit321= core.EMU('cit_3_2_1')
###Output _____no_output_____
###Markdown The method **findnCarbons** produces the number of carbons in the EMU:
###Code
print cit321.findnCarbons()
###Output 3.0
###Markdown The method **getMetName** produces the name of the corresponding metabolite:
###Code
print cit321.getMetName()

str(cit321.getMetName()) == 'cit'
###Output _____no_output_____
###Markdown The method **getIndices** produces the indices:
###Code
print cit321.getIndices()
###Output [3, 2, 1]
###Markdown **getSortedName** sorts the indices in the EMU name:
###Code
print cit321.getSortedName()
###Output cit_1_2_3
###Markdown **getEmuInSBML** produces the name of the EMU in SBML format:
###Code
print cit321.getEmuInSBML()
###Output cit_c_3_2_1
###Markdown Transitions related classes Transitions contain the information on how carbon (or other) atoms are passed in each reaction. Atom transitions describe, for example, the fate of each carbon in a reaction, whereas EMU transitions describe this information by using EMUs, as described below. AtomTransition class Atom transitions represent the fate of each carbon in a reaction (Wiechert W. (2001) 13C metabolic flux analysis. Metabolic engineering 3: 195-206). For example, in: AKGDH akg --> succoa + co2 abcde : bcde + a akg gets split into succoa and co2, with the first 4 carbons going to succoa and the remaining carbon going to co2.
###Code
AT = core.AtomTransition('AKGDH akg --> succoa + co2 abcde : bcde + a')
print AT
###Output AKGDH akg --> succoa + co2 abcde : bcde + a
###Markdown The method **findEMUtransition** provides, for a given input EMU (e.g. succoa_1_2_3_4), which EMU it comes from, in the form of an EMU transition:
###Code
emu1 = core.EMU('co2_1')
print AT.findEMUtransition(emu1)

emu2 = core.EMU('succoa_1_2_3_4')
print AT.findEMUtransition(emu2)
###Output ['AKGDH, akg_1 --> co2_1']
['AKGDH, akg_2_3_4_5 --> succoa_1_2_3_4']
###Markdown This is done through the method **findEMUs**, which finds the EMUs from which the input emanates in the given atom transition:
###Code
print emu2.name
print AT.findEMUs(emu2)
for emus in AT.findEMUs(emu2):
    for emu_ in emus:
        print emu_.name
###Output succoa_1_2_3_4
[[<core.EMU instance at 0x7fe7d68e1320>]]
akg_2_3_4_5
###Markdown which in turn uses the method **getOriginDictionary**, which provides for a given input EMU the originating metabolite and the correspondence in indices:
###Code
AT.getOriginDictionary(emu2)
###Output _____no_output_____
###Markdown EMUTransition class Class for EMU transitions that contain information on how different EMUs transform into each other. For example: TA1_b, TAC3_c_1_2_3 + g3p_c_1_2_3 --> f6p_c_1_2_3_4_5_6 indicating that TAC3_c_1_2_3 and g3p_c_1_2_3 combine to produce f6p_c_1_2_3_4_5_6 in reaction TA1_b (backward reaction of TA1), or: SSALy, (0.5) sucsal_c_4 --> (0.5) succ_c_4 which indicates that the fourth atom of sucsal_c becomes the fourth atom of succ_c. The (0.5) contribution coefficient indicates that reaction SSALy contains a symmetric molecule and two labeling correspondences are equally likely. Hence this transition only contributes half the flux to the final labeling.
###Code
emuTrans = core.EMUTransition('TA1_b, TAC3_c_1_2_3 + g3p_c_1_2_3 --> f6p_c_1_2_3_4_5_6')
print emuTrans

str(emuTrans) == 'TA1_b, TAC3_c_1_2_3 + g3p_c_1_2_3 --> f6p_c_1_2_3_4_5_6'
###Output _____no_output_____
###Markdown Ranged number class The *rangedNumber* class describes floating point numbers for which a confidence interval is available. For example, fluxes obtained through 2S-$^{13}$C MFA are described through the flux that best fits the data and the highest and lowest values that are found to be compatible with labeling data (see equations 16-23 in Garcia Martin *et al* 2015). However, this class has been abstracted out so it can be used with other ranged intervals. Ranged numbers can be used as follows:
###Code
number = core.rangedNumber(0.3,0.6,0.9) # 0.3 lowest, 0.6 best fit, 0.9 highest
###Output _____no_output_____
###Markdown Ranged numbers can be printed:
###Code
print number
###Output [0.3 : 0.6 : 0.9]
###Markdown and added, subtracted, multiplied and divided following the standard error propagation rules (https://en.wikipedia.org/wiki/Propagation_of_uncertainty):
###Code
A = core.rangedNumber(0.3,0.6,0.9)
B = core.rangedNumber(0.1,0.15,0.18)

print A+B
print A-B
print 2*A
print B/3
###Output [0.0333333333333 : 0.05 : 0.06]
###Markdown Flux class The flux class describes fluxes attached to a reaction. For example, if the net flux is described by the ranged number A and the exchange flux by the ranged number B, the corresponding flux would be:
###Code
netFlux = A
exchangeFlux = B
flux1 = core.flux(net_exc_tup=(netFlux,exchangeFlux))
print flux1
###Output Forward: [0.445861873485 : 0.75 : 1.05149626863]
Backward: [0.1 : 0.15 : 0.18]
Net: [0.3 : 0.6 : 0.9]
Exchange: [0.1 : 0.15 : 0.18]
###Markdown Fluxes can easily be multiplied:
###Code
print 3*flux1
###Output Forward: [1.33758562046 : 2.25 : 3.1544888059]
Backward: [0.3 : 0.45 : 0.54]
Net: [0.9 : 1.8 : 2.7]
Exchange: [0.3 : 0.45 : 0.54]
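###Markdown Notice that the forward flux printed above is consistent with forward = net + exchange, with the interval half-widths combined in quadrature as in standard uncorrelated error propagation. A quick arithmetic check of the printed values (this only reproduces the output; it is not part of the core API):
###Code
import math

best = 0.6 + 0.15   # net best fit + exchange best fit
# upper bound: add the upper half-widths in quadrature
hi = best + math.sqrt((0.9 - 0.6)**2 + (0.18 - 0.15)**2)
# lower bound: add the lower half-widths in quadrature
lo = best - math.sqrt((0.6 - 0.3)**2 + (0.15 - 0.1)**2)
print lo, best, hi
###Output 0.445861873485 0.75 1.05149626863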
ipython-7.29.0/examples/IPython Kernel/Beyond Plain Python.ipynb
###Markdown IPython: beyond plain Python When executing code in IPython, all valid Python syntax works as-is, but IPython provides a number of features designed to make the interactive experience more fluid and efficient. First things first: running code, getting help In the notebook, to run a cell of code, hit `Shift-Enter`. This executes the cell and puts the cursor in the next cell below, or makes a new one if you are at the end. Alternately, you can use:
- `Alt-Enter` to force the creation of a new cell unconditionally (useful when inserting new content in the middle of an existing notebook).
- `Control-Enter` executes the cell and keeps the cursor in the same cell, useful for quick experimentation of snippets that you don't need to keep permanently.
###Code
print("Hi")
###Output Hi
###Markdown Getting help:
###Code
?
###Output _____no_output_____
###Markdown Typing `object_name?` will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.
###Code
import collections
collections.namedtuple?

collections.Counter??

*int*?
###Output _____no_output_____
###Markdown An IPython quick reference card:
###Code
%quickref
###Output _____no_output_____
###Markdown Tab completion Tab completion, especially for attributes, is a convenient way to explore the structure of any object you're dealing with. Simply type `object_name.<TAB>` to view the object's attributes. Besides Python objects and keywords, tab completion also works on file and directory names.
###Code
collections.
###Output _____no_output_____
###Markdown The interactive workflow: input, output, history
###Code
2+10

_+10
###Output _____no_output_____
###Markdown You can suppress the storage and rendering of output if you append `;` to the last cell (this comes in handy when plotting with matplotlib, for example):
###Code
10+20;

_
###Output _____no_output_____
###Markdown The output is stored in `_N` and `Out[N]` variables:
###Code
_10 == Out[10]
###Output _____no_output_____
###Markdown And the last three have shorthands for convenience:
###Code
from __future__ import print_function

print('last output:', _)
print('next one   :', __)
print('and next   :', ___)

In[11]

_i

_ii

print('last input:', _i)
print('next one  :', _ii)
print('and next  :', _iii)

%history -n 1-5
###Output 1: print("Hi")
2: ?
3: import collections
   collections.namedtuple?
4: collections.Counter??
5: *int*?
###Markdown **Exercise** Write the last 10 lines of history to a file named `log.py`.
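###Markdown One possible solution, combining the `-l` (last n lines) and `-f` (write to file) flags of `%history`. This is a hedged sketch, since flag behavior can vary slightly between IPython versions:
###Code
# write the last 10 history lines to log.py
%history -l 10 -f log.py
###Output _____no_output_____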
Accessing the underlying operating system
###Code
!pwd

files = !ls
print("My current directory's files:")
print(files)

!echo $files

!echo {files[0].upper()}
###Output ANIMATIONS USING CLEAR_OUTPUT.IPYNB
###Markdown Note that all this is available even in multiline blocks:
###Code
import os
for i,f in enumerate(files):
    if f.endswith('ipynb'):
        !echo {"%02d" % i} - "{os.path.splitext(f)[0]}"
    else:
        print('--')
###Output 00 - Animations Using clear_output
01 - Background Jobs
02 - Beyond Plain Python
03 - Capturing Output
04 - Cell Magics
05 - Custom Display Logic
06 - Index
07 - Old Custom Display Logic
08 - Plotting in the Notebook
09 - Raw Input in the Notebook
10 - Rich Output
11 - Script Magics
12 - SymPy
13 - Terminal Usage
14 - Third Party Rich Output
15 - Trapezoid Rule
16 - Working With External Code
--
--
--
--
--
--
--
--
--
--
###Markdown Beyond Python: magic functions The IPython 'magic' functions are a set of commands, invoked by prepending one or two `%` signs to their name, that live in a namespace separate from your normal Python variables and provide a more command-like interface. They take flags with `--` and arguments without quotes, parentheses or commas. The motivation behind this system is two-fold:
- To provide an orthogonal namespace for controlling IPython itself and exposing other system-oriented functionality.
- To expose a calling mode that requires minimal verbosity and typing while working interactively. Thus the inspiration taken from the classic Unix shell style for commands.
###Code
%magic
###Output _____no_output_____
###Markdown Line vs cell magics:
###Code
%timeit list(range(1000))

%%timeit
list(range(10))
list(range(100))
###Output 100000 loops, best of 3: 2.78 µs per loop
###Markdown Line magics can be used even inside code blocks:
###Code
for i in range(1, 5):
    size = i*100
    print('size:', size, end=' ')
    %timeit list(range(size))
###Output size: 100 100000 loops, best of 3: 1.86 µs per loop
size: 200 100000 loops, best of 3: 2.49 µs per loop
size: 300 100000 loops, best of 3: 4.04 µs per loop
size: 400 100000 loops, best of 3: 6.21 µs per loop
###Markdown Magics can do anything they want with their input, so it doesn't have to be valid Python:
###Code
%%bash
echo "My shell is:" $SHELL
echo "My disk usage is:"
df -h
###Output My shell is: /usr/local/bin/bash
My disk usage is:
Filesystem      Size   Used  Avail Capacity   iused    ifree %iused  Mounted on
/dev/disk1     233Gi  216Gi   16Gi    94%  56788108  4190706   93%   /
devfs          190Ki  190Ki    0Bi   100%       656        0  100%   /dev
map -hosts      0Bi    0Bi    0Bi   100%         0        0  100%   /net
map auto_home   0Bi    0Bi    0Bi   100%         0        0  100%   /home
###Markdown Another interesting cell magic: create any file you want locally from the notebook:
###Code
%%writefile test.txt
This is a test file!
It can contain anything I want...

And more...

!cat test.txt
###Output This is a test file!
It can contain anything I want...

And more...
###Markdown Let's see what other magics are currently defined in the system:
###Code
%lsmagic
###Output _____no_output_____
###Markdown Running normal Python code: execution and errors Not only can you input normal Python code, you can even paste straight from a Python or IPython shell session:
###Code
>>> # Fibonacci series:
... # the sum of two elements defines the next
... a, b = 0, 1
>>> while b < 10:
...     print(b)
...     a, b = b, a+b

In [1]: for i in range(10):
   ...:     print(i, end=' ')
   ...:
###Output 0 1 2 3 4 5 6 7 8 9
###Markdown And when your code produces errors, you can control how they are displayed with the `%xmode` magic:
###Code
%%writefile mod.py

def f(x):
    return 1.0/(x-1)

def g(y):
    return f(y+1)
###Output Overwriting mod.py
###Markdown Now let's call the function `g` with an argument that would produce an error:
###Code
import mod
mod.g(0)

%xmode plain
mod.g(0)

%xmode verbose
mod.g(0)
###Output Exception reporting mode: Verbose
###Markdown The default `%xmode` is "context", which shows additional context but not all local variables. Let's restore that one for the rest of our session.
###Code
%xmode context
###Output Exception reporting mode: Context
###Markdown Running code in other languages with special `%%` magics
###Code
%%perl
@months = ("July", "August", "September");
print $months[0];

%%ruby
name = "world"
puts "Hello #{name.capitalize}!"
###Output Hello World!
###Markdown Raw Input in the notebook Since 1.0 the IPython notebook web application supports `raw_input`, which for example allows us to invoke the `%debug` magic in the notebook:
###Code
mod.g(0)

%debug
###Output > /Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py(3)f()
      2 def f(x):
----> 3     return 1.0/(x-1)
      4

ipdb> up
> /Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py(6)g()
      4
      5 def g(y):
----> 6     return f(y+1)

ipdb> down
> /Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py(3)f()
      2 def f(x):
----> 3     return 1.0/(x-1)
      4

ipdb> bt
<ipython-input-46-5e708f13c839>(1)<module>()
----> 1 mod.g(0)

/Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py(6)g()
      2 def f(x):
      3     return 1.0/(x-1)
      4
      5 def g(y):
----> 6     return f(y+1)

> /Users/minrk/dev/ip/mine/examples/IPython Kernel/mod.py(3)f()
      1
      2 def f(x):
----> 3     return 1.0/(x-1)
      4
      5 def g(y):

ipdb> exit
###Markdown Don't forget to exit your debugging session. Raw input can of course be used to ask for user input:
###Code
enjoy = input('Are you enjoying this tutorial? ')
print('enjoy is:', enjoy)
###Output Are you enjoying this tutorial? yes
enjoy is: yes
###Markdown Plotting in the notebook This magic configures matplotlib to render its figures inline:
###Code
%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2*np.pi, 300)
y = np.sin(x**2)
plt.plot(x, y)
plt.title("A little chirp")
fig = plt.gcf()  # let's keep the figure object around for later...
###Output _____no_output_____
###Markdown The IPython kernel/client model
###Code
%connect_info
###Output {
  "stdin_port": 62401,
  "key": "64c935a7-64e8-4ab7-ab22-6e0f3ff84e02",
  "hb_port": 62403,
  "transport": "tcp",
  "signature_scheme": "hmac-sha256",
  "shell_port": 62399,
  "control_port": 62402,
  "ip": "127.0.0.1",
  "iopub_port": 62400
}

Paste the above JSON into a file, and connect with:
    $> ipython <app> --existing <file>
or, if you are local, you can connect with just:
    $> ipython <app> --existing kernel-25383540-ce7f-4529-900a-ded0e510d5d8.json
or even just:
    $> ipython <app> --existing
if this is the most recent IPython session you have started.
###Markdown We can automatically connect a Qt Console to the currently running kernel with the `%qtconsole` magic, or by typing `ipython console --existing` in any terminal:
###Code
%qtconsole
###Output _____no_output_____
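###Markdown On installations where the console applications have moved to the Jupyter project, the equivalent terminal commands are typically the jupyter counterparts. This is a hedged note, since the exact command names depend on your installation:
###Code
# from a shell (assumed Jupyter-era equivalents):
#   jupyter qtconsole --existing
#   jupyter console --existing
###Output _____no_output_____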
2. Machine_Learning_Regression/K-Nearest Neighborhood Regression.ipynb
###Markdown Predicting house prices using k-nearest neighbors regression In this notebook, you will implement k-nearest neighbors regression. You will:
- Find the k-nearest neighbors of a given query input
- Predict the output for the query input using the k-nearest neighbors
- Choose the best value of k using a validation set
###Code
import numpy as np
import pandas as pd

dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}

sales = pd.read_csv('kc_house_data_small.csv', dtype = dtype_dict)
train = pd.read_csv('kc_house_data_small_train.csv', dtype = dtype_dict)
test = pd.read_csv('kc_house_data_small_test.csv', dtype = dtype_dict)
validate = pd.read_csv('kc_house_data_validation 2.csv', dtype = dtype_dict)
###Output _____no_output_____
###Markdown 3. To efficiently compute pairwise distances among data points, we will convert the SFrame (or dataframe) into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() (or equivalent). The function takes a dataset, a list of features (e.g. ['sqft_living', 'bedrooms']) to be used as inputs, and a name of the output (e.g. 'price'). It returns a 'features_matrix' (2D array) consisting of a column of ones followed by columns containing the values of the input features in the data set in the same order as the input list. It also returns an 'output_array', which is an array of the values of the output in the dataset (e.g. 'price').
###Code
def get_numpy_data(data, features, output):
    data['constant'] = 1 # add a constant column to a dataframe
    # prepend variable 'constant' to the features list
    features = ['constant'] + features
    # select the columns of the dataframe given by the 'features' list
    # and convert them into a numpy matrix
    features_matrix = data[features].as_matrix(columns=None)
    # assign the column of the dataframe associated with the target
    # to 'output_array' and convert it into a numpy array
    output_array = data[output].as_matrix(columns=None)
    return(features_matrix, output_array)
###Output _____no_output_____
###Markdown Similarly, copy and paste the normalize_features function (or equivalent) from Module 5 (Ridge Regression). Given a feature matrix, each column is divided (element-wise) by its 2-norm. The function returns two items: (i) a feature matrix with normalized columns and (ii) the norms of the original columns.
###Code
def normalize_features(features):
    norms = np.sqrt(np.sum(features**2,axis=0))
    normalized_features = features/norms
    return (normalized_features, norms)
###Output _____no_output_____
###Markdown Using get_numpy_data (or equivalent), extract numpy arrays of the training, test, and validation sets.
###Code
features = [m for m,n in dtype_dict.items() if train[m].dtypes != object]
features

features.remove('price')

training_feature_matrix, training_output = get_numpy_data(train, features, 'price')
testing_feature_matrix, testing_output = get_numpy_data(test, features, 'price')
validating_feature_matrix, validating_output = get_numpy_data(validate, features, 'price')
###Output _____no_output_____
###Markdown In computing distances, it is crucial to normalize features.
Otherwise, for example, the 'sqft_living' feature (typically on the order of thousands) would exert a much larger influence on distance than the 'bedrooms' feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm. IMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently. e.g. in Python:
###Code
features_train, norms = normalize_features(training_feature_matrix)
features_test = testing_feature_matrix / norms
features_valid = validating_feature_matrix / norms
###Output _____no_output_____
###Markdown Compute a single distance To start, let's just explore computing the "distance" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set. To see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1. Similarly, print the 10th row (index 9) of the training feature matrix.
###Code
print features_test[0]
print features_train[9]
###Output [ 0.01345102  0.01807473  0.01375926  0.01362084  0.01564352  0.01350306
  0.01551285 -0.01346922  0.0016225   0.01759212  0.017059    0.00160518
  0.          0.02481682  0.          0.01345387  0.0116321   0.05102365]
[ 0.01345102  0.00602491  0.01195898  0.0096309   0.01390535  0.01302544
  0.01163464 -0.01346251  0.00156612  0.0083488   0.01279425  0.00050756
  0.          0.          0.          0.01346821  0.01938684  0.        ]
###Markdown Quiz Question: What is the Euclidean distance between the query house and the 10th house of the training set?
###Code
np.sqrt(np.sum((features_train[9] - features_test[0])**2))
###Output _____no_output_____
###Markdown Of course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set. To visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. By restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working. Write a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set. Quiz Question: Among the first 10 training houses, which house is the closest to the query house?
###Code
# store the distance to each of the first 10 training houses, keyed by index
distance = {}
for i in range(10):
    distance[i] = np.sqrt(np.sum((features_train[i] - features_test[0])**2))
print distance

# sort (distance, index) pairs so the nearest house comes first
distance_2 = []
for x,y in distance.items():
    distance_2.append((y,x))
distance_2.sort()
distance_2
###Output _____no_output_____
###Markdown It is computationally inefficient to loop over all houses, computing the distance to each one separately. Fortunately, many of the numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.
(The material up to 13 is specific to numpy; if you are using other languages such as R or Matlab, consult relevant manuals on vectorization.)

Consider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):

###Code
for i in range(3):
    print(features_train[i] - features_test[0])
# should print 3 vectors of length 18

print(features_train[0:3] - features_test[0])

###Output
[[ 0.00000000e+00 -1.20498190e-02 -5.14364795e-03 -5.50336860e-03
  -3.47633726e-03 -1.63756198e-04 -3.87821276e-03  1.29876855e-05
   6.69281453e-04 -1.05552733e-02 -8.52950206e-03  2.08673616e-04
   0.00000000e+00 -2.48168183e-02  0.00000000e+00 -1.70254220e-05
   0.00000000e+00 -5.10236549e-02]
 [ 0.00000000e+00 -4.51868214e-03 -2.89330197e-03  1.30705004e-03
  -3.47633726e-03 -1.91048898e-04 -3.87821276e-03  6.16364736e-06
   1.47606982e-03 -2.26610387e-03  0.00000000e+00  7.19763456e-04
   0.00000000e+00 -1.45830788e-02  6.65082271e-02  4.23090220e-05
   0.00000000e+00 -5.10236549e-02]
 [ 0.00000000e+00 -1.20498190e-02  3.72914476e-03 -8.32384500e-03
  -5.21450589e-03 -3.13866046e-04 -7.75642553e-03  1.56292487e-05
   1.64764925e-03 -1.30002801e-02 -8.52950206e-03  1.60518166e-03
   0.00000000e+00 -2.48168183e-02  0.00000000e+00  4.70885840e-05
   0.00000000e+00 -5.10236549e-02]]

###Markdown
Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:

###Code
# verify that vectorization works
results = features_train[0:3] - features_test[0]
print(results[0] - (features_train[0] - features_test[0]))
# should print all 0's if results[0] equals features_train[0]-features_test[0]
print(results[1] - (features_train[1] - features_test[0]))
# should print all 0's if results[1] equals features_train[1]-features_test[0]
print(results[2] - (features_train[2] - features_test[0]))
# should print all 0's if results[2] equals features_train[2]-features_test[0]

###Output
[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
[ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]

###Markdown
Perform 1-nearest neighbor regression

Now that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable ‘diff’ such that ‘diff[i]’ gives the element-wise difference between the features of the query house and the i-th training house.

To test your code, print diff[-1].sum(), which should be -0.0934339605842.

###Code
diff = features_train[:] - features_test[0]
diff
features_train - features_test[0]
diff[-1].sum()

###Output
_____no_output_____

###Markdown
The next step in computing the Euclidean distances is to take these feature-by-feature differences in ‘diff’, square each, and take the sum over feature indices. That is, compute the sum of squared feature differences for each training house (row in ‘diff’).

By default, ‘np.sum’ sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specify the ‘axis’ parameter described in the np.sum documentation. In particular, ‘axis=1’ computes the sum across each row.

###Code
total_row = np.sum(diff**2, axis=1)
total_row.shape
diff.shape

###Output
_____no_output_____

###Markdown
The expression `np.sum(diff**2, axis=1)` computes this sum of squared feature differences for all training houses.
Verify that the two expressions below yield the same result:

###Code
np.sum(diff**2, axis=1)[15]
np.sum(diff[15]**2)

###Output
_____no_output_____

###Markdown
With this result in mind, write a single-line expression to compute the Euclidean distances from the query to all the instances. Assign the result to variable distances.

Hint: don't forget to take the square root of the sum of squares.
Hint: distances[100] should contain 0.0237082324496.

###Code
distances = np.sqrt(np.sum(diff**2, axis=1))
distances[100]

###Output
_____no_output_____

###Markdown
Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.

###Code
def compute_distances(features_instances, features_query):
    # element-wise differences between every training house and the query house
    diff = features_instances - features_query
    # Euclidean distance for each row (training house)
    distances = np.sqrt(np.sum(diff**2, axis=1))
    return distances

dist = compute_distances(features_train, features_test[2])
dist
min(dist)
np.argmin(dist)

###Output
_____no_output_____

###Markdown
Quiz Question: What is the predicted value of the query house based on 1-nearest neighbor regression?

###Code
training_output[382]

###Output
_____no_output_____

###Markdown
Perform k-nearest neighbor regression

Using the functions above, implement a function that takes in

* the value of k;
* the feature matrix for the instances; and
* the feature vector of the query

and returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.

###Code
def k_nearest_neighbors(k, feature_matrix, features_query):
    distances = compute_distances(feature_matrix, features_query)
    return np.argsort(distances)[0:k]

###Output
_____no_output_____

###Markdown
Quiz Question: Take the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?

###Code
k_nearest_neighbors(4, features_train, features_test[2])

###Output
_____no_output_____

###Markdown
Now that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:

* the value of k;
* the feature matrix for the instances;
* the output values (prices) of the instances; and
* the feature vector of the query, whose price we’re predicting.

The function should return a predicted value of the query house.

###Code
def predict_output_of_query(k, features_train, output_train, features_query):
    neighbors = k_nearest_neighbors(k, features_train, features_query)
    prediction = np.sum(output_train[neighbors]) / k
    return prediction

###Output
_____no_output_____

###Markdown
Quiz Question: Make predictions for the first 10 houses in the test set, using k=10. What is the index of the house in this query set that has the lowest predicted value? What is the predicted value of this house?

###Code
for m in range(10):
    print(m, predict_output_of_query(10, features_train, training_output, features_test[m]))

###Output
0 881300.0
1 431860.0
2 460595.0
3 430200.0
4 766750.0
5 667420.0
6 350032.0
7 512800.7
8 484000.0
9 457235.0

###Markdown
Choosing the best value of k using a validation set

There remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value.
Write a loop that does the following:

For k in [1, 2, … 15]:

* Make predictions for the VALIDATION data using the k-nearest neighbors from the TRAINING data.
* Compute the RSS on VALIDATION data.

Report which k produced the lowest RSS on validation data.

###Code
rss_all = np.zeros(15)
for k in range(1, 16):
    # predict each validation house using the k nearest training houses
    predictions_k = np.array([
        predict_output_of_query(k, features_train, training_output, features_valid[i])
        for i in range(features_valid.shape[0])
    ])
    rss_all[k-1] = np.sum((predictions_k - validating_output)**2)

###Output
_____no_output_____
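###Markdown
The loop above stores one RSS value per candidate k, so reporting the winner is a single argmin. A minimal sketch (not part of the original assignment code; it only assumes `rss_all` is filled as above):

###Code
# rss_all[0] corresponds to k=1, so shift the argmin by one
best_k = np.argmin(rss_all) + 1
print("Best k:", best_k, "with validation RSS:", rss_all[best_k - 1])

###Output
_____no_output_____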
Argentina - Mondiola Rock - 90 pts/Practica/TP1/ejercicio 4/.ipynb_checkpoints/Ejercicio 4-checkpoint.ipynb
###Markdown
EXERCISE 4

The “Iris” dataset has been used as a test case for a large number of classifiers and is, perhaps, the best-known dataset in the literature. Iris is a variety of plant that we want to classify according to its type. Three distinct types are recognized: 'Iris setosa', 'Iris versicolor' and 'Iris virginica'. The goal is to classify a plant of the Iris variety from the length and width of the petal and the length and width of the sepal.

The Iris dataset consists of 150 samples in total, 50 of each of the three plant types. Each sample is composed of the plant type, the length and width of the petal, and the length and width of the sepal. All attributes are continuous numeric values.

$$\begin{array}{|c|c|c|c|c|}\hline X & Setosa & Versicolor & Virginica & Invalid \\\hline Setosa & 50 & 0 & 0 & 0 \\\hline Versicolor & 0 & 50 & 0 & 0 \\\hline Virginica & 0 & 0 & 50 & 0 \\\hline \end{array}$$

###Code
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import mpld3
%matplotlib inline
mpld3.enable_notebook()
from cperceptron import Perceptron
from cbackpropagation import ANN  #, Identidad, Sigmoide
import patrones as magia

def progreso(ann, X, T, y=None, n=-1, E=None):
    if n % 20 == 0:
        print("Steps: {0} - Error: {1:.32f}".format(n, E))

def progresoPerceptron(perceptron, X, T, n):
    y = perceptron.evaluar(X)
    incorrectas = (T != y).sum()
    print("Steps: {0}\tIncorrect: {1}\n".format(n, incorrectas))

iris = np.load('iris.npy')

# Build the training and test patterns
clases, patronesEnt, patronesTest = magia.generar_patrones(
        magia.escalar(iris[:,1:]).round(4), iris[:,:1], 80)
X, T = magia.armar_patrones_y_salida_esperada(clases, patronesEnt)

clases, patronesEnt, noImporta = magia.generar_patrones(
        magia.escalar(iris[:,1:]), iris[:,:1], 100)
Xtest, Ttest = magia.armar_patrones_y_salida_esperada(clases, patronesEnt)

###Output
_____no_output_____

###Markdown
a) Train perceptrons so that each one learns to recognize one of the different types of Iris plants. Report the parameters used for training and the performance obtained. Use all the patterns for training. Show the confusion matrix for the best classification obtained after training, reporting the correctly and incorrectly classified patterns.

###Code
print("Training P1:")
p1 = Perceptron(X.shape[1])
I1 = p1.entrenar_numpy(X, T[:,0], max_pasos=5000,
                       callback=progresoPerceptron, frecuencia_callback=2500)
print("Steps: {0}".format(I1))

print("\nTraining P2:")
p2 = Perceptron(X.shape[1])
I2 = p2.entrenar_numpy(X, T[:,1], max_pasos=5000,
                       callback=progresoPerceptron, frecuencia_callback=2500)
print("Steps: {0}".format(I2))

print("\nTraining P3:")
p3 = Perceptron(X.shape[1])
I3 = p3.entrenar_numpy(X, T[:,2], max_pasos=5000,
                       callback=progresoPerceptron, frecuencia_callback=2500)
print("Steps: {0}".format(I3))

Y = np.vstack((p1.evaluar(Xtest), p2.evaluar(Xtest), p3.evaluar(Xtest))).T
magia.matriz_de_confusion(Ttest, Y)

###Output
_____no_output_____

###Markdown
b) Train an artificial neural network using backpropagation as the learning algorithm in order to achieve the requested classification. Use all the patterns for training. Detail the parameters used for training as well as the architecture of the neural network. Repeat the procedure more than once to confirm the results and report the confusion matrix for the best classification obtained.
###Code
# Create the neural network
ocultas = 10              # hidden units
entradas = X.shape[1]     # inputs
salidas = T.shape[1]      # outputs

ann = ANN(entradas, ocultas, salidas)
ann.reiniciar()

# Train
E, n = ann.entrenar_rprop(X, T, min_error=0, max_pasos=100000,
                          callback=progreso, frecuencia_callback=10000)

print("\nNetwork trained in {0} steps with an error of {1:.32f}".format(n, E))

# Evaluate
Y = (ann.evaluar(Xtest) >= 0.97)
magia.matriz_de_confusion(Ttest, Y)

(ann.evaluar(Xtest)[90])

###Output
_____no_output_____
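###Markdown
The `magia.matriz_de_confusion` helper comes from the course's `patrones` module, which is not shown here. For reference, a minimal numpy sketch of what such a helper could look like is given below; `confusion_matrix_sketch` is a hypothetical name introduced here, and it assumes one-hot rows in `Ttest` (targets) and boolean rows in `Y` (predictions), with rows that do not predict exactly one class counted as invalid:

###Code
def confusion_matrix_sketch(T, Y):
    # entry [i, j] counts patterns of true class i assigned to predicted class j;
    # the extra last column counts ambiguous/invalid predictions
    n_classes = T.shape[1]
    M = np.zeros((n_classes, n_classes + 1), dtype=int)
    for t_row, y_row in zip(T, Y):
        true_class = np.argmax(t_row)
        if y_row.sum() == 1:          # exactly one class predicted
            M[true_class, np.argmax(y_row)] += 1
        else:                         # none or several classes predicted
            M[true_class, -1] += 1
    return M

###Output
_____no_output_____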
colloidal_chemotaxis.ipynb
###Markdown Passive and active colloidal chemotaxis in a microfluidic channel: mesoscopic and stochastic models**Author:** Pierre de Buyl and Laurens Deprez *Supplemental information to the article by L. Deprez and P. de Buyl*The data originates from the RMPCDMD simulation program. Please read its documentation and thepublished paper for meaningful use of this notebook. ###Code %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.special import erfc, erf from scipy.integrate import quad, nquad from collections import namedtuple import h5py import os.path from glob import glob plt.rcParams['figure.figsize'] = (8, 6) plt.rcParams['figure.subplot.hspace'] = 0.25 plt.rcParams['figure.subplot.wspace'] = 0. plt.rcParams['figure.subplot.left'] = 0.05 plt.rcParams['figure.subplot.right'] = 0.95 plt.rcParams['figure.subplot.bottom'] = 0.19 plt.rcParams['figure.subplot.top'] = 0.91 plt.rcParams['axes.labelsize'] = 14 plt.rcParams['font.size'] = 14 plt.rcParams['xtick.labelsize'] = 11 plt.rcParams['ytick.labelsize'] = 11 colors = ['#1f77b4', '#ff7f0e'] %load_ext Cython # Set the following variable to point to the location of the chemotaxis mesoscopic simulations mesoscopic_directory = '.' # compute D, v_flow fluid = namedtuple('fluid', ['tau', 'T', 'rho', 'alpha', 'm', 'a', 'eta']) cell = namedtuple('cell', ['Ly', 'Lz', 'g']) colloid = namedtuple('colloid', ['sigma', 'R']) fluid.tau = 0.5 fluid.T = 0.33 fluid.rho = 10 fluid.alpha = 2.6 fluid.m = 1 fluid.a = 1 buffer_length = 20 # Kapral review Eq. 55 eta_kin = fluid.T * fluid.tau * fluid.rho / (2*fluid.m) * \ (5*fluid.rho-(fluid.rho - 1 + np.exp(-fluid.rho))*(2 - np.cos(fluid.alpha)-np.cos(2*fluid.alpha)))/ \ ((fluid.rho - 1 + np.exp(-fluid.rho))*(2 - np.cos(fluid.alpha)-np.cos(2*fluid.alpha))) # Kapral review Eq. 
56 eta_coll = fluid.m / (18 * fluid.a * fluid.tau) * (fluid.rho - 1 + np.exp(-fluid.rho))*(1-np.cos(fluid.alpha)) fluid.eta = eta_kin + eta_coll print("Viscosity", fluid.eta) fluid.D = fluid.T*fluid.tau/(2*fluid.m) * (3*fluid.rho/((fluid.rho - 1 + np.exp(-fluid.rho))*(1-np.cos(fluid.alpha))) - 1) print("Self-diffusion D", fluid.D) cell.Ly = 60 cell.Lz = 15 cell.g = 1/1000 def v_of_eta(fluid, cell): return fluid.rho*cell.g*cell.Lz**2/(8*fluid.eta) v_max = v_of_eta(fluid, cell) v_av = 2/3*v_max print("Flow maximum ", v_max) print("Flow average ", v_av) print("Poiseuille flow Peclet number", v_av*cell.Lz/fluid.D) colloid.sigma = 3 colloid.R = colloid.sigma*2**(1/6) all_EPS = ['0.25', '0.50', '1.00', '2.00', '4.00'] # Quantities for the catalytic reaction on the surface of the colloid probability = 1 k0 = probability*colloid.R**2*np.sqrt(8*np.pi*fluid.T/fluid.m) kD = 4*np.pi*colloid.R*fluid.D # define c_A(x,y) and lambda (derivative) def c_A(x,y): return fluid.rho * 0.5*(1+erf(-(y-cell.Ly/2)/np.sqrt(4*fluid.D*x/v_max))) def lam(x,y): return -fluid.rho*np.exp(-(y-cell.Ly/2)**2/(4*fluid.D*x/v_max))/np.sqrt(4*np.pi*fluid.D*x/v_max) # define Lambda(R, eps) def V(r, sigma, eps): return 4*eps*((sigma/r)**12-(sigma/r)**6) + eps def integrand(r, sigma, eps): return r*np.exp(-V(r, sigma, eps)/fluid.T) def Lambda(R, eps): result, error = quad(integrand, colloid.R/2, colloid.R, args=(colloid.sigma, eps)) return result - colloid.R**2/2 # define placeholder dicts for the numerical data passive_sphere_meso = {} passive_sphere_stoc = {} active_sphere_meso = {} active_sphere_stoc = {} nanomotor_meso = {} nanomotor_stoc = {} ###Output _____no_output_____ ###Markdown Single passive colloidHere, the setup ###Code # Single passive colloid # Lambda lambda sigma = colloid.sigma R = colloid.R y_shift = 3.4 dt = 0.01 gamma = 4*np.pi*fluid.eta*sigma D = fluid.T/gamma x_factor = np.sqrt(2*D*dt) y_factor = np.sqrt(2*D*dt) def run_single_passive(passive_EPS): F_factor = 8*np.pi*fluid.T/3 * R * (Lambda(R, 1)-Lambda(R, float(passive_EPS)))/gamma x, y = sigma, cell.Ly/2 + y_shift xy_data = [] for t in range(1000): for tt in range(50): F_y = F_factor * lam(x, y) xi_x, xi_y = np.random.normal(size=(2,)) x += v_max*dt + x_factor*xi_x y += F_y*dt + y_factor*xi_y xy_data.append((x,y)) return np.array(xy_data) # Collect mesoscopic simulation data for passive_EPS in ['0.25', '0.50', '1.00', '2.00', '4.00']: runs = glob(os.path.join(mesoscopic_directory, 'passive_sphere_EPS{}_*/passive_sphere_no_solvent.h5'.format(passive_EPS))) runs.sort() xy_data = [] for r in runs: with h5py.File(r, 'r') as a: xy_data.append(a['/particles/dimer/position/value'][:,0,:2]) passive_sphere_meso[passive_EPS] = np.array(xy_data) # Generate stochastic simulation data for passive_EPS in all_EPS: passive_sphere_stoc[passive_EPS] = np.array([run_single_passive(passive_EPS) for i in range(16)]) plt.figure(figsize=(12,6)) for i, passive_EPS in enumerate(all_EPS): plt.subplot(2, 3, i+1) m = passive_sphere_stoc[passive_EPS].mean(axis=0).T s = passive_sphere_stoc[passive_EPS].std(axis=0).T color = colors[0] plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5) plt.plot(*m, color=color, lw=2) m = passive_sphere_meso[passive_EPS][:,450:].mean(axis=0).T s = passive_sphere_meso[passive_EPS][:,450:].std(axis=0).T m[0,:] -= 20 color = colors[1] plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5) plt.plot(*m, color=color, lw=2) plt.xlim(0, 26) plt.ylim(25, 40) plt.text(1, 26, r'$\epsilon_F='+passive_EPS+'$') if 
i//3==1: plt.xlabel(r'$x$') if i%3==0: plt.ylabel(r'$y$') ###Output _____no_output_____ ###Markdown Single active colloid ###Code # Single active colloid # Lambda c_2 sigma = colloid.sigma R = colloid.R y_shift = 3.4 dt = 0.01 gamma = 4*np.pi*fluid.eta*sigma D = fluid.T/gamma x_factor = np.sqrt(2*D*dt) y_factor = np.sqrt(2*D*dt) def run_single_active(active_EPS): F_factor = -8*np.pi*fluid.T/3 * R * k0/(k0+2*kD) * (Lambda(R, 1)-Lambda(R, float(active_EPS)))/gamma x, y = sigma, cell.Ly/2 + y_shift xy_data = [] for t in range(1000): for tt in range(50): F_y = F_factor * lam(x, y) xi_x, xi_y = np.random.normal(size=(2,)) x += v_max*dt + x_factor*xi_x y += F_y*dt + y_factor*xi_y xy_data.append((x,y)) return np.array(xy_data) # Collect simulation data for active_EPS in all_EPS: runs = glob(os.path.join(mesoscopic_directory,'active_sphere_EPS{}_*/active_sphere_no_solvent.h5'.format(active_EPS))) runs.sort() active_simulation = [] for r in runs: with h5py.File(r, 'r') as a: active_simulation.append(a['/particles/dimer/position/value'][:,0,:2]) active_sphere_meso[active_EPS] = np.array(active_simulation) # Generate stochastic simulation data for active_EPS in all_EPS: active_sphere_stoc[active_EPS] = np.array([run_single_active(active_EPS) for i in range(16)]) plt.figure(figsize=(12,6)) for i, active_EPS in enumerate(all_EPS): plt.subplot(2, 3, i+1) m = active_sphere_stoc[active_EPS].mean(axis=0).T s = active_sphere_stoc[active_EPS].std(axis=0).T color = colors[0] plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5) plt.plot(*m, color=color, lw=2) m = active_sphere_meso[active_EPS][:,400:].mean(axis=0).T s = active_sphere_meso[active_EPS][:,400:].std(axis=0).T m[0,:] -= 20 color = colors[1] plt.fill_between(m[0,:], m[1,:]-s[1,:], m[1,:]+s[1,:], color=color, alpha=0.5) plt.plot(*m, color=color, lw=2) plt.xlim(0, 26) plt.ylim(25, 40) plt.text(1, 26, r'$\epsilon_B='+active_EPS+'$') if i//3==1: plt.xlabel(r'$x$') if i%3==0: plt.ylabel(r'$y$') ###Output _____no_output_____ ###Markdown Dimer nanomotor ###Code d = 6.7 def F_C_y(x, y, phi): return 8*np.pi*fluid.T/3 * colloid.R * k0/(k0+2*kD) * lam(x-d*np.cos(phi)/2, y-d*np.sin(phi)/2) def torque(f_c_y, f_n_x, f_n_y, phi): return (np.cos(phi) * (f_c_y - f_n_y) + np.sin(phi) * f_n_x)*d/2 def rotate_xy(x, y, phi): rot = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]]) return np.dot(rot, (x,y)) print('cdef double RHO =', fluid.rho) print('cdef double FLUID_D =', fluid.D) print('cdef double V_MAX =', v_max) print('cdef double R =', colloid.R) print('cdef double T =', fluid.T) print('cdef double k0 =', k0) print('cdef double kD =', kD) %%cython import cython cimport cython import numpy as np cimport numpy as np from libc.math cimport exp, abs, cos, sin, sqrt, acos, erf from scipy.integrate import nquad cdef double d = 6.7 cdef double RHO = 10 cdef double LY = 60 cdef double FLUID_D = 0.06559643942750612 cdef double V_MAX = 0.095309639068441587 cdef double R = 3.367386144928119 cdef double T = 0.33 cdef double k0 = 32.6559814827 cdef double kD = 2.77576727425 cdef double PI = np.pi @cython.cdivision(True) cdef double c_A(double x,double y): return RHO * 0.5*(1+erf(-(y-LY/2)/sqrt(4*FLUID_D*x/V_MAX))) @cython.cdivision(True) cdef double lam(double x, double y): return -RHO*exp(-(y-LY/2)**2/(4*FLUID_D*x/V_MAX))/sqrt(4*PI*FLUID_D*x/V_MAX) @cython.cdivision(True) cdef double polar_c_B(double theta, double varphi, double r, double x, double y, double phi): """Concentration of B at location theta, varphi, r from the N bead. 
x, y are the c.o.m. coordinates and phi is the orientation of the dimer.""" cdef double x_C, y_C, x_N, y_N, c0, c1, c2, x_p, y_p, z_p, r_0 x_C = x + d*cos(phi)/2 y_C = y + d*sin(phi)/2 x_N = x - d*cos(phi)/2 y_N = y - d*sin(phi)/2 c0 = c_A(x_C, y_C) c1 = -k0/(k0+kD)*c0 c2 = -k0/(k0+2*kD)*lam(x_C, y_C) x_p = x_N + r*cos(varphi)*sin(theta) y_p = y_N + r*cos(theta) z_p = r*sin(varphi)*sin(theta) r_0 = sqrt((x_p-x_C)**2+(y_p-y_C)**2+z_p**2) theta_0 = acos((r*cos(theta)-d*sin(phi))/r_0) return -c1*(R/r_0) - c2*(R/r_0)**2*cos(theta_0) @cython.cdivision(True) cdef double F_C_y(double x, double y, double phi): return 8*PI*T/3 * R * k0/(k0+2*kD) * lam(x-d*cos(phi)/2, y-d*sin(phi)/2) @cython.boundscheck(False) @cython.cdivision(True) @cython.wraparound(False) def F_N(double x, double y, double phi): cdef double fx = 0 cdef double fy = 0 cdef int i_theta, i_varphi, N_theta, N_varphi cdef double c, th, vphi N_theta = 32 N_varphi = 32 cdef double inv_N_theta = 1.0/N_theta cdef double inv_N_varphi = 1.0/N_varphi for i_theta in range(N_theta): th = (i_theta+0.5)*PI*inv_N_theta for i_varphi in range(N_varphi): vphi = (i_varphi+0.5)*2*PI*inv_N_varphi c = polar_c_B(th, vphi, R, x, y, phi) fx = fx + c*sin(th)*sin(th)*cos(vphi) fy = fy + c*sin(th)*cos(th) factor = 2*T*PI*inv_N_theta*2*PI*inv_N_varphi return fx*factor, fy*factor cdef double torque(double f_c_y, double f_n_x, double f_n_y, double phi): return (cos(phi) * (f_c_y - f_n_y) + sin(phi) * f_n_x)*d/2 def run_nm(nanomotor_EPS): Lambda_NM = Lambda(colloid.R, float(nanomotor_EPS)) - Lambda(colloid.R, 1) y_shift = 3.4 x, y = 5, cell.Ly/2 + y_shift phi = 0 D_para = 0.002 gamma_para = fluid.T/D_para D_perp = 0.0015 gamma_perp = fluid.T/D_perp D_r = 1.4e-4 gamma_r = fluid.T/D_r dt = 0.025 x_para_factor = np.sqrt(2*D_para*dt) x_perp_factor = np.sqrt(2*D_perp*dt) phi_factor = np.sqrt(2*D_r*dt) dimer_data = [] for t in range(500): for i in range(20): F_y = Lambda_NM*F_C_y(x, y, phi) F_N_x, F_N_y = F_N(x, y, phi) F_N_x, F_N_y = Lambda_NM*F_N_x, Lambda_NM*F_N_y F_com_x = F_N_x F_com_y = F_N_y + F_y xi_para, xi_perp, xi_phi = np.random.normal(size=(3,)) F_para, F_perp = rotate_xy(F_com_x, F_com_y, -phi) F_para = F_para*dt/gamma_para + x_para_factor*xi_para F_perp = F_perp*dt/gamma_perp + x_perp_factor*xi_perp F_com = rotate_xy(F_para, F_perp, phi) x += v_max*dt + F_com[0] y += F_com[1] phi += torque(F_y, F_N_x, F_N_y, phi)*dt / gamma_r + phi_factor*xi_phi dimer_data.append((x,y,phi)) return np.array(dimer_data) for nanomotor_EPS in all_EPS: nanomotor_stoc[nanomotor_EPS] = np.array([run_nm(nanomotor_EPS) for i in range(12)]) # Collect simulation data for nanomotor_EPS in all_EPS: runs = glob(os.path.join(mesoscopic_directory,'nanomotor_EPS{}_*/nanomotor_no_solvent.h5'.format(nanomotor_EPS))) runs.sort() nanomotor_simulation = [] for r in runs: with h5py.File(r, 'r') as a: r = a['/particles/dimer/position/value'][:,:,:] orientation = r[:,0,:] - r[:,1,:] r = r.mean(axis=1) r[:,2] = np.arctan2(orientation[:,1], orientation[:,0]) nanomotor_simulation.append(r.copy()) nanomotor_meso[nanomotor_EPS] = np.array(nanomotor_simulation) nanomotor_plot = { 'name': 'nanomotor', 'stoc': nanomotor_stoc, 'meso': nanomotor_meso, 'xlim': (0, 25), 'ylim': (29.5, 35.5), 'xticks': np.linspace(0, 20, 5), 'yticks': np.linspace(30, 35, 6), 'label': r'{\kappa,B}', 'ylabel': r'$y$', 'idx': 1, } nanomotor_phi_plot = { 'name': 'nanomotor_phi', 'stoc': nanomotor_stoc, 'meso': nanomotor_meso, 'xlim': (0, 25), 'ylim': (-np.pi/2, np.pi/2), 'xticks': np.linspace(0, 20, 5), 'yticks': 
np.linspace(-1.5, 1.5, 7), 'label': r'{\kappa,B}', 'ylabel': r'$\phi$', 'idx': 2, } active_sphere_plot = { 'name': 'active_sphere', 'stoc': active_sphere_stoc, 'meso': active_sphere_meso, 'xlim': (0, 25), 'ylim': (26, 39), 'xticks': np.linspace(0, 20, 5), 'yticks': np.linspace(27, 39, 7), 'label': '{C,B}', 'ylabel': r'$y$', 'idx': 1, } passive_sphere_plot = { 'name': 'passive_sphere', 'stoc': passive_sphere_stoc, 'meso': passive_sphere_meso, 'xlim': (0, 25), 'ylim': (26, 39), 'xticks': np.linspace(0, 20, 5), 'yticks': np.linspace(27, 39, 7), 'label': '{N,F}', 'ylabel': r'$y$', 'idx': 1, } for data_plot in [passive_sphere_plot, active_sphere_plot, nanomotor_plot, nanomotor_phi_plot]: fig = plt.figure(figsize=(529*0.9/36,2.8)) idx = data_plot['idx'] for i, EPS in enumerate(all_EPS): ax1 = plt.subplot(1, 5, i+1) m = data_plot['stoc'][EPS][:,:,:].mean(axis=0) s = data_plot['stoc'][EPS][:,:,:].std(axis=0) color = colors[0] ax1.fill_between(m[:,0], m[:,idx]-s[:,idx], m[:,idx]+s[:,idx], color=color, alpha=0.5) ax1.plot(m[:,0], m[:,idx], color=color, lw=2) m = data_plot['meso'][EPS][:,400:].mean(axis=0) s = data_plot['meso'][EPS][:,400:].std(axis=0) m[:,0] -= buffer_length color = colors[1] ax1.fill_between(m[:,0], m[:,idx]-s[:,idx], m[:,idx]+s[:,idx], color=color, alpha=0.5) ax1.plot(m[:,0], m[:,idx], color=color, lw=2) ax1.set_xlim(*data_plot['xlim']) ax1.set_xticks(data_plot['xticks']) ax1.set_ylim(*data_plot['ylim']) if i==0: ax1.set_yticks(data_plot['yticks']) ax1.set_ylabel(data_plot['ylabel']) elif i==4: ax1.yaxis.tick_right() ax1.yaxis.set_label_position("right") ax1.set_yticks(data_plot['yticks']) ax1.set_ylabel(data_plot['ylabel']) else: ax1.set_yticks([]) ax1.set_xlabel(r'$x$') plt.text(0.05, 0.07, r'$\epsilon_'+data_plot['label']+'='+EPS+'$', transform=ax1.transAxes) plt.savefig(data_plot['name']+'_panel.pdf') ###Output _____no_output_____ ###Markdown Extra slides ###Code X, Y = np.meshgrid(np.linspace(0.1, 20, 180),np.linspace(0, cell.Ly, 150)) plt.pcolormesh(X, Y, c_A(X, Y), cmap=plt.cm.viridis) plt.colorbar() plt.axis([X.min(), X.max(), 0, cell.Ly]) plt.pcolormesh(X, Y, lam(X, Y), cmap=plt.cm.viridis) plt.colorbar() plt.axis([X.min(), X.max(), 0, cell.Ly]) from matplotlib.figure import SubplotParams params = SubplotParams(left=0.2) plt.figure(figsize=(150/36,2.8), subplotpars=params) EPS='1.00' X, Y = np.meshgrid(np.linspace(0.1, 20, 180),np.linspace(0, cell.Ly, 300)) plt.pcolormesh(X, Y, lam(X, Y), cmap=plt.cm.viridis, rasterized=True) plt.colorbar() xy = passive_sphere_meso[EPS][0,:,:].copy() idx = np.searchsorted(xy[:,0], buffer_length) xy = xy[idx:,:] - np.array([buffer_length,0]) color = colors[1] plt.plot(xy[:,0], xy[:,1], color='k', lw=2) x_track = [colloid.R] y_track = [cell.Ly/2 + y_shift] plt.plot(x_track, y_track, color='k', marker='o', ms=7.5) plt.xlim(0, 20) plt.ylim(cell.Ly/2-10, cell.Ly/2+10) plt.xlabel(r'$x$') plt.ylabel(r'$y$') plt.savefig('trajectory_and_gradient_'+EPS+'.pdf') ###Output _____no_output_____
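###Markdown
As a small numerical aside (not part of the original analysis), we can print the rate ratio $k_0/(k_0 + 2 k_D)$ that appears as the prefactor of the concentration-gradient force in both the single-colloid and the nanomotor models above:

###Code
ratio = k0 / (k0 + 2*kD)
print("k0 =", k0, " kD =", kD, " k0/(k0 + 2 kD) =", ratio)

###Output
_____no_output_____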
simple-exercises/basic-cryptography/7-basic-rsa-decryption-play.ipynb
###Markdown
RSA Decryption

- ASCII plaintext encoded using PKCS1.5

PKCS1.5

```
RSA modulus size: e.g. 2048 bits or 256 bytes
+------+------------------------------+------+--------------------+
| 0x02 |    RANDOM NONZERO DIGITS     | 0x00 |  MESSAGE IN ASCII  |
+------+------------------------------+------+--------------------+
```

###Code
# Given
message = "Factoring lets us break RSA."
ct_string = "22096451867410381776306561134883418017410069787892831071731839143676135600120538004282329650473509424343946219751512256465839967942889460764542040581564748988013734864120452325229320176487916666402997509188729971690526083222067771600019329260870009579993724077458967773697817571267229951148662959627934791540"
E = 65537
N_string = "179769313486231590772930519078902473361797697894230657273430081157732675805505620686985379449212982959585501387537164015710139858647833778606925583497541085196591615128057575940752635007475935288710823649949940771895617054361149474865046711015101563940680527540071584560878577663743040086340742855278549092581"
p_string = "13407807929942597099574024998205846127479365820592393377723561443721764030073662768891111614362326998675040546094339320838419523375986027530441562135724301"
q_string = "13407807929942597099574024998205846127479365820592393377723561443721764030073778560980348930557750569660049234002192590823085163940025485114449475265364281"

from os import urandom
from gmpy2 import mpz
from gmpy2 import invert, t_mod, mul, powmod

def decrypt(y, d, N):
    return powmod(y, d, N)

def encrypt(x, e, N):
    return powmod(x, e, N)

def decrypt_pipeline(c_string, d, N):
    m_decimal = decrypt(mpz(c_string), d, N)
    m_hex = hex(m_decimal)[2:]
    # assumes a correctly formatted payload; note that splitting on the hex
    # substring '00' is fragile, since '00' can also straddle two nonzero
    # bytes (e.g. 0x10 0x0f -> '100f')
    m = m_hex.split('00')
    return bytes.fromhex(m[1]).decode('utf8')

def encrypt_pipeline(message, e, N):
    raw_message = bytes(message, 'utf8')
    TOTAL_LENGTH = 128
    APPENDLENGTH = TOTAL_LENGTH - len(raw_message) - 2
    # urandom can emit zero bytes, so strictly speaking the padding is not
    # guaranteed to be nonzero as PKCS#1 v1.5 requires
    randomhexstring = urandom(APPENDLENGTH).hex()
    final_bytes = bytes.fromhex('02' + randomhexstring + '00') + raw_message
    final_decimal = mpz(int.from_bytes(final_bytes, 'big'))
    return str(encrypt(final_decimal, e, N))

N = mpz(N_string)
p = mpz(p_string)
q = mpz(q_string)
c = mpz(ct_string)
e = mpz(E)

# compute d: d * e mod phi(N) = 1, where phi(N) = N - p - q + 1
phiN = N - p - q + 1
D = invert(e, phiN)
d = mpz(D)
assert t_mod(mul(d, e), phiN) == 1

print(decrypt_pipeline(ct_string, d, N))

c = encrypt_pipeline(message, e, N)
m = decrypt_pipeline(c, d, N)
print(m)

###Output
Factoring lets us break RSA.
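###Markdown
A more robust way to strip the padding is to work on bytes rather than on the hex string, locating the first zero *byte* after the block-type byte. A minimal sketch under the same format assumptions as above (`unpad_pkcs15` is a name introduced here, not part of the original code):

###Code
def unpad_pkcs15(m_decimal, total_length=128):
    raw = int(m_decimal).to_bytes(total_length, 'big')
    assert raw[0] == 0x02         # block-type byte (a leading 0x00 is lost in integer form)
    sep = raw.index(b'\x00', 1)   # first zero byte terminates the random padding
    return raw[sep + 1:].decode('utf8')

print(unpad_pkcs15(decrypt(mpz(ct_string), d, N)))

###Output
_____no_output_____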
something-learned/Mathematics/hackermath/Module_3a_linear_algebra_eigenvectors.ipynb
###Markdown
Intermediate Linear Algebra - Eigenvalues & Eigenvectors

Key Equation: $Ax = \lambda x ~~ \text{for} ~~ n \times n ~ A$

Transformations

So what really happens when we multiply the $A$ matrix with a vector $x$?

Let's say we have a vector - $x$

$$ x = \begin{bmatrix} -1 \\ 1 \end{bmatrix} $$

What happens when we multiply by a matrix - $A$?

$$ A = \begin{bmatrix} 6 & 2 \\ 2 & 6 \end{bmatrix} $$

$$ Ax = \begin{bmatrix} 6 & 2 \\ 2 & 6 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -4 \\ 4 \end{bmatrix} $$

$$ Ax = 4Ix $$

$$ Ax = 4x $$

So this particular matrix has just scaled our original vector. It is a scaling transformation. Other matrices can perform reflection, rotation and arbitrary transformations in the same 2D space (here n = 2).

Let's see what has happened through code.

###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)

def vector_plot(vector):
    X, Y, U, V = zip(*vector)
    C = [1, 1, 2, 2]
    plt.figure()
    ax = plt.gca()
    ax.quiver(X, Y, U, V, C, angles='xy', scale_units='xy', scale=1)
    ax.set_xlim([-6, 6])
    ax.set_ylim([-6, 6])
    plt.axhline(0, color='grey', linewidth=1)
    plt.axvline(0, color='grey', linewidth=1)
    plt.axes().set_aspect('equal')
    plt.draw()

A = np.array([[ 6 , 2],
              [ 2 , 6]])
x = np.array([[-1],
              [1]])
v = A.dot(x)

# All the vectors start at 0, 0
vAX = np.r_[[0,0], A[:,0]]
vAY = np.r_[[0,0], A[:,1]]
vx = np.r_[[0,0], x[:,0]]
vv = np.r_[[0,0], v[:,0]]

vector_plot([vAX, vAY, vx, vv])

###Output
_____no_output_____

###Markdown
Solving Equation $Ax=\lambda x$

Special Case: $Ax = 0$

So far we have been solving the equation $Ax = b$. Let us just look at the special case when $b=0$.

$$ Ax = 0 $$

If $A^{-1}$ exists (the matrix is non-singular and invertible), then the solution is trivial:

$$ A^{-1}Ax = 0 $$

$$ x = 0 $$

If $A^{-1}$ does not exist, then there may be infinitely many other solutions $x$, and since $A$ is then a singular matrix,

$$||A|| = 0 $$

General Case

The second part of linear algebra is solving the equation, for a given $A$ -

$$ Ax = \lambda x$$

Note that both $x$ and $\lambda$ are unknown in this equation.
We seek all solution pairs:

$$ \text{eigenvalues} = \lambda $$

$$ \text{eigenvectors} = x $$

Calculating Eigenvalues

So let us first solve this for $\lambda$:

$$ Ax = \lambda Ix $$

$$ (A-\lambda I)x = 0 $$

So for a non-trivial solution $x$, the matrix $A - \lambda I$ must be singular:

$$ ||A - \lambda I|| = 0 $$

For 2 x 2 Matrix

Let us use the sample matrix $A$:

$$ A = \begin{bmatrix}3 & 1\\ 1 & 3\end{bmatrix} $$

So our equation becomes:

$$ \begin{bmatrix}3 & 1\\ 1 & 3\end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}\lambda & 0\\ 0 & \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} $$

$$ \begin{bmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = 0 $$

So for a singular matrix:

$$ \begin{Vmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{Vmatrix} = 0 $$

$$ (3 - \lambda)^2 - 1 = 0 $$

$$ \lambda^2 - 6\lambda + 8 = 0 $$

$$ (\lambda - 4)(\lambda - 2) = 0 $$

$$ \lambda_1 = 2, \lambda_2 = 4 $$

$$||A|| = \lambda_{1} \lambda_{2} $$

Calculating Eigenvectors

For $\lambda = 2$,

$$ \begin{bmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}1 & 1\\ 1 & 1 \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = 0 $$

So one simple solution is:

$$ \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}-1 \\ 1\end{bmatrix} $$

For $\lambda = 4$,

$$ \begin{bmatrix}3 - \lambda & 1\\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}-1 & 1\\ 1 & -1 \end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = 0 $$

So one simple solution is:

$$ \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}1 \\ 1\end{bmatrix} $$

The eigenvectors are orthogonal to each other in this case.

Vector Representation (2x2)

A vector representation for this is now:

$$ \begin{bmatrix}3 \\ 1\end{bmatrix} x + \begin{bmatrix}1 \\ 3\end{bmatrix} y = \begin{bmatrix} \lambda \\ 0 \end{bmatrix} x + \begin{bmatrix} 0 \\ \lambda \end{bmatrix} y $$

Now we need to draw these vectors and see the result.

###Code
A = np.array([[ 3 , 1],
              [ 1 , 3]])
eigen_val, eigen_vec = np.linalg.eig(A)
eigen_val
eigen_vec
eigen_vec[:,0]

# All the vectors start at 0, 0
vX1 = np.r_[[0,0], A[:,0]]
vY1 = np.r_[[0,0], A[:,1]]
vE1 = np.r_[[0,0], eigen_vec[:,0]] * 2
vE2 = np.r_[[0,0], eigen_vec[:,1]] * 2

vector_plot([vX1, vY1, vE1, vE2])

###Output
_____no_output_____

###Markdown
3 x 3 Matrix

Let us write it in the form

$$ Ax = \lambda x $$

$$ \begin{bmatrix}1 & 1 & 1 \\ 3 & 8 & 1 \\ 5 & -4 & 3\end{bmatrix}\begin{bmatrix} x \\ y \\ z\end{bmatrix}= \lambda \begin{bmatrix} x\\ y \\ z \end{bmatrix} $$

###Code
f = np.matrix([[1,1,1],
               [3,8,1],
               [5,-4,3]])
np.linalg.eig(f)

###Output
_____no_output_____
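###Markdown
As a quick numerical check (a small addition, not in the original notebook), each eigenpair returned by `np.linalg.eig` should satisfy $Ax = \lambda x$ up to floating-point error:

###Code
vals, vecs = np.linalg.eig(f)
for i in range(3):
    # compare A.v with lambda*v for each eigenpair
    print(np.allclose(f.dot(vecs[:, i]), vals[i] * vecs[:, i]))

###Output
_____no_output_____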
examples/demo/analysis_demo.ipynb
###Markdown *grama* Analysis Demo---*grama* is a *grammar of model analysis*---a language for describing and analyzing mathematical models. Heavily inspired by [ggplot](https://ggplot2.tidyverse.org/index.html), `py_grama` is a Python package that implements *grama* by providing tools for defining and exploring models. This notebook illustrates how one can use *grama* to ***analyze a fully-defined model***.Note that you will need to install `py_grama`, a fork of `dfply`, and dependencies in order to run this notebook. See the [installation instructions](https://github.com/zdelrosario/py_grama) for details. ###Code ### Setup import grama as gr import numpy as np import pandas as pd import seaborn as sns X = gr.Intention() ###Output _____no_output_____ ###Markdown Quick Tour: Analyzing a model---*grama* separates the model *definition* from model *analysis*; once the model is fully defined, only minimal information is necessary for further analysis.As a quick demonstration, we import a fully-defined model provided with *grama*, and carry out a few analyses. ###Code from grama.models import make_cantilever_beam md_beam = make_cantilever_beam() md_beam.printpretty() ###Output model: Cantilever Beam inputs: var_det: t: [2, 4] w: [2, 4] var_rand: H: (+1) norm, {'loc': 500.0, 'scale': 100.0} V: (+1) norm, {'loc': 1000.0, 'scale': 100.0} E: (+0) norm, {'loc': 29000000.0, 'scale': 1450000.0} Y: (-1) norm, {'loc': 40000.0, 'scale': 2000.0} copula: Independence copula functions: cross-sectional area: ['w', 't'] -> ['c_area'] limit state: stress: ['w', 't', 'H', 'V', 'E', 'Y'] -> ['g_stress'] limit state: displacement: ['w', 't', 'H', 'V', 'E', 'Y'] -> ['g_disp'] ###Markdown The method `printpretty()` gives us a quick summary of the model; we can see this model has two deterministic variables `w,t` and four random variables `H,V,E,Y`. All of the variables affect the outputs `g_stress, g_displacement`, while only `w,t` affect `c_area`. Since there are random variables, there is a source of *uncertainty* which we must consider when studying this model. Studying model behavior with uncertaintySince the model has sources of randomness (`var_rand`), we must account for this when studying its behavior. We can do so through a Monte Carlo analysis. We make decisions about the deterministic inputs by specifying `df_det`, and the `py_grama` function `gr.ev_monte_carlo` automatically handles the random inputs. Below we fix a nominal value `w = 0.5 * (2 + 4)`, sweep over values for `t`, and account for the randomness via Monte Carlo. ###Code ## Carry out a Monte Carlo analysis of the random variables df_beam_mc = \ md_beam >> \ gr.ev_monte_carlo( n=1e2, df_det=gr.df_make( # Define deterministic levels w=0.5*(2 + 4), # Single value t=np.linspace(2.5, 3, num=10) # Sweep ) ) ###Output eval_monte_carlo() is rounding n... ###Markdown To help plot the data, we use `gr.tf_gather` to reshape the data, and `seaborn` to quickly visualize results. ###Code df_beam_wrangled = \ df_beam_mc >> \ gr.tf_gather("output", "y", ["c_area", "g_stress", "g_disp"]) g = sns.FacetGrid(df_beam_wrangled, col="output", sharey=False) g.map(sns.lineplot, "t", "y") ###Output _____no_output_____ ###Markdown The mean behavior of the model is shown as a solid line, while the band visualizes the standard deviation of the model output. 
From this plot, we can see:

- The random variables have no effect on `c_area` (there is no band)
- Comparing `g_stress` and `g_displacement`, the former is more strongly affected by the random inputs, as illustrated by its wider uncertainty band.

While this provides a visual description of how uncertainty affects our outputs, we might be interested in *how* the different random variables affect our outputs.

Probing random variable effects

One way to quantify the effects of random variables is through *Sobol' indices*, which quantify variable importance by the fraction of output variance "explained" by each random variable. Since distribution information is included in the model, we can carry out a *hybrid-point Monte Carlo* and analyze the results with two calls to `py_grama`.

###Code
df_sobol = \
    md_beam >> \
    gr.ev_hybrid(n=1e3, df_det="nom", seed=101) >> \
    gr.tf_sobol()
df_sobol

###Output
eval_hybrid() is rounding n...

###Markdown
The indices should lie between `[0, 1]`, but estimation error can lead to violations. These results suggest that `g_stress` is largely insensitive to `E`, while `g_disp` is insensitive to `Y`. For `g_disp`, the input `V` contributes about twice as much variance as the variables `H,E`.

To get a *qualitative* sense of how the random variables affect our model, we can perform a set of sweeps over random variable space with a *sinew* design. First, we visualize the design in the six-dimensional full input space.

###Code
md_beam >> \
    gr.ev_sinews(n_density=50, n_sweeps=10, df_det="swp", skip=True) >> \
    gr.pt_auto()

###Output
Estimated runtime for design with model (Cantilever Beam): 0.0151 sec

###Markdown
The `skip` keyword argument allows us to delay evaluating a model; this is useful for inspecting a design before running a potentially expensive calculation. The `pt_auto()` function automatically detects DataFrames generated by `py_grama` functions and constructs an appropriate visualization. This is provided for convenience; you are of course welcome (and encouraged!) to create your own visualizations of the data.

Here we can see the sweeps cross the domain in straight lines at random starting locations. Each of these sweeps gives us a "straight shot" within a single variable. Visualizing the outputs for these sweeps will give us a sense of a single variable's influence, contextualized by the effects of the other variables.

###Code
df_beam_sweeps = \
    md_beam >> \
    gr.ev_sinews(n_density=50, n_sweeps=10, df_det="swp")
df_beam_sweeps >> gr.pt_auto()

###Output
_____no_output_____

###Markdown
Removing the keyword argument `skip` falls back on the default behavior; the model functions are evaluated at each sample, and `pt_auto()` adjusts to use this new data.

Based on this plot, we can see:

- The output `c_area` is insensitive to all the random variables; it changes only with `t, w`
- As the Sobol' analysis above suggested, `g_stress` is insensitive to `E`, and `g_displacement` is insensitive to `Y`
- Visualizing the results shows that inputs `H,E` tend to 'saturate' in their effects on `g_displacement`, while `V` is linear over its domain. This may explain the difference in contributed variance
- Furthermore, both `t, w` seem to saturate in their effects on the two limit states---there are diminishing returns on making the beam taller or wider

Theory: The *grama* language
---
As a language, *grama* has both *objects* and *verbs*.
Objects
---
*grama* as a language considers two categories of objects:

- **data** (`df`): observations on various quantities, implemented by the Python package `Pandas`
- **models** (`md`): a function and complete description of its inputs, implemented by `py_grama`

For readability, we suggest using prefixes `df_` and `md_` when naming DataFrames and models. Since data is already well-handled by Pandas, `py_grama` focuses on providing tools to handle models. A `py_grama` model has **functions** and **inputs**: the method `printpretty()` gives a quick summary of the model's inputs and function outputs. Model inputs are organized into:

| | Deterministic | Random |
| ---------- | ---------------------------------------- | ---------- |
| Variables | `model.var_det` | `model.var_rand` |
| Parameters | `model.density.marginals[i].d_param` | (Future*) |

- **Variables** are inputs to the model's functions
  + **Deterministic** variables are chosen by the user; the model above has `w, t`
  + **Random** variables are not controlled; the model above has `H, V, E, Y`
- **Parameters** define random variables
  + **Deterministic** parameters are currently implemented; these are listed under `var_rand` with their associated random variable
  + **Random** parameters* are not yet implemented

The `outputs` section lists the various model outputs. The model above has `c_area, g_stress, g_disp`.

Verbs
---
Verbs are used to take action on different *grama* objects. We use verbs to generate data from models, build new models from data, and ultimately make sense of the two.

The following table summarizes the categories of `py_grama` verbs. Verbs take either data (`df`) or a model (`md`), and may return either object type. The prefix of a verb immediately tells one both the input and output types. The short prefix is used to denote the *pipe-enabled version* of a verb.

| Verb Type | Prefix (Short) | In | Out |
| --------- | --------------- | ---- | ----- |
| Evaluate | `eval_` (`ev_`) | `md` | `df` |
| Fit | `fit_` (`ft_`) | `df` | `md` |
| Transform | `tran_` (`tf_`) | `df` | `df` |
| Compose | `comp_` (`cp_`) | `md` | `md` |

Functional programming (Pipes)
---
`py_grama` provides tools to use functional programming patterns. Short-stem versions of `py_grama` functions are *pipe-enabled*, meaning they can be used in functional programming form with the pipe operator `>>`. These pipe-enabled functions are simply aliases for the base functions, as demonstrated below:

###Code
df_base = gr.eval_nominal(md_beam, df_det="nom")
df_functional = md_beam >> gr.ev_nominal(df_det="nom")
df_base.equals(df_functional)

###Output
_____no_output_____
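###Markdown
Since all pipe-enabled verbs share the `df`/`md` conventions above, they compose directly into pipelines. A small illustrative chain (an addition for illustration; it reuses verbs shown earlier in this notebook, and assumes `df_det="nom"` is accepted by `ev_monte_carlo` the same way it is by `ev_nominal`):

###Code
# evaluate the model, then reshape the outputs for plotting, in one pipeline
df_chain = \
    md_beam >> \
    gr.ev_monte_carlo(n=50, df_det="nom") >> \
    gr.tf_gather("output", "y", ["c_area", "g_stress", "g_disp"])
df_chain.head()

###Output
_____no_output_____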
07_Visualization/Tips/Exercises_seaborn.ipynb
###Markdown
Tips

Introduction:

This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html). The dataset being used is tips from Seaborn.

Step 1. Import the necessary libraries:

###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

###Output
_____no_output_____

###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv).

Step 3. Assign it to a variable called tips

###Code
tips = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv')
tips.head(2)

###Output
_____no_output_____

###Markdown
Step 4. Delete the Unnamed 0 column

###Code
del tips['Unnamed: 0']

###Output
_____no_output_____

###Markdown
Step 5. Plot the total_bill column histogram

###Code
ttbill = sns.displot(tips.total_bill)
ttbill.set(xlabel='Value', ylabel='Frequency', title='Total Bill')

###Output
_____no_output_____

###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip

###Code
# sns.relplot(x=tips.total_bill, y=tips.tip, kind='scatter')
sns.jointplot(x='total_bill', y='tip', data=tips)

###Output
_____no_output_____

###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: it is just one function.

###Code
sns.relplot(x=tips.total_bill, y=tips.tip, kind='scatter', hue=tips['size'], size=tips['size'])
sns.pairplot(tips)

###Output
_____no_output_____

###Markdown
Step 8. Present the relationship between days and total_bill value

###Code
sns.relplot(y=tips.total_bill, x=tips.day, kind='line')
sns.stripplot(x='day', y='total_bill', data=tips, hue='sex')

###Output
_____no_output_____

###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex

###Code
sns.relplot(x=tips.tip, y=tips.day, kind='scatter', hue=tips.sex)

###Output
_____no_output_____

###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)

###Code
sns.boxplot(y=tips.total_bill, x=tips.day, hue=tips['time'])

###Output
_____no_output_____

###Markdown
Step 11. Create two histograms of the tip value for Dinner and Lunch. They must be side by side.

###Code
fig, axs = plt.subplots(ncols=2)
sns.histplot(tips[tips['time']=='Dinner'].tip, ax=axs[0])
sns.histplot(tips[tips['time']=='Lunch'].tip, ax=axs[1])

sns.set(style='ticks')
g = sns.FacetGrid(tips, col='time')
g.map(plt.hist, 'tip')

sns.catplot(x='time', y='total_bill', data=tips, kind='violin')

g = sns.FacetGrid(tips, col='time')
g.map(sns.boxplot, 'tip', orient='v')

###Output
E:\python3.6\lib\site-packages\seaborn\axisgrid.py:670: UserWarning: Using the boxplot function without specifying `order` is likely to produce an incorrect plot.
  warnings.warn(warning)
E:\python3.6\lib\site-packages\seaborn\_core.py:1326: UserWarning: Vertical orientation ignored with only `x` specified.
  warnings.warn(single_var_warning.format("Vertical", "x"))
E:\python3.6\lib\site-packages\seaborn\_core.py:1326: UserWarning: Vertical orientation ignored with only `x` specified.
  warnings.warn(single_var_warning.format("Vertical", "x"))

###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the total_bill and tip relationship, differentiating by smoker or non-smoker. They must be side by side.
###Code g = sns.FacetGrid(tips,col='sex',hue='smoker') g.map(plt.scatter,'total_bill','tip',alpha=.7) g.add_legend() ###Output _____no_output_____
Leading Causes of Death in Egypt.ipynb
###Markdown
Data Sources: WHO, CDC, World Bank and UN.

###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from catboost import CatBoostClassifier

# importing plotly and cufflinks in offline mode
import cufflinks as cf
import plotly.offline
cf.go_offline()
cf.set_config_file(offline=False, world_readable=True)
import plotly
import plotly.express as px
import plotly.graph_objs as go
import plotly.offline as py
from plotly.offline import iplot
from plotly.subplots import make_subplots
import plotly.figure_factory as ff

import matplotlib as mpl
import missingno as msno
from p5 import *
import datetime as dt
from datetime import timedelta

from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report

%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 20, 10
from sklearn.preprocessing import MinMaxScaler
std = StandardScaler()
import warnings
warnings.filterwarnings("ignore")

df = pd.read_csv('Leading Causes of Death in Egypt.csv')
df.head()

df = df.replace({'Cause Effect': {'YES': 1, 'NO': 0}})
df.head()
df.info()
df.duplicated().sum()

def missing(df):
    missing_number = df.isnull().sum().sort_values(ascending=False)
    missing_percent = (df.isnull().sum()/df.isnull().count()).sort_values(ascending=False)
    missing_values = pd.concat([missing_number, missing_percent], axis=1,
                               keys=['Missing_Number', 'Missing_Percent'])
    return missing_values

missing(df)
msno.matrix(df)
df.describe().T
df.describe().plot()
sns.pairplot(df, hue='Rate per 100,000')
sns.pairplot(df, kind='kde')
df.hist(figsize=(15,8))
plt.show()
px.treemap(df, path=['Percentage%','Deaths','Rate per 100,000'], values='Rate per 100,000')

y = df['Cause Effect']
print(f"There is: {round(y.value_counts(normalize=True)[1]*100,2)}% --> ({y.value_counts()[1]} of the leading causes of death affect others)\nWhile: {round(y.value_counts(normalize=True)[0]*100,2)}% --> ({y.value_counts()[0]} don't affect others)")
df['Cause Effect'].iplot(kind='hist')

numerical = df.select_dtypes('number').columns
categorical = df.select_dtypes('object').columns
print(f'Numerical Columns: {df[numerical].columns}')
print('\n')
print(f'Categorical Columns: {df[categorical].columns}')

###Output
Numerical Columns: Index(['Deaths', 'Percentage%', 'Rate per 100,000', 'World Rank/183', 'Cause Effect'], dtype='object')

Categorical Columns: Index(['Death Causes'], dtype='object')

###Markdown
Target Variable

Numerical Features

###Code
df[numerical].describe()
df[numerical].iplot(kind='hist');
df[numerical].iplot(kind='histogram', subplots=True, bins=50)

skew_limit = 0.2  # This is our threshold-limit to evaluate skewness. Overall, below abs(5) seems acceptable for the linear models.
skew_vals = df[numerical].skew()
skew_cols = skew_vals[abs(skew_vals) > skew_limit].sort_values(ascending=False)
skew_cols

numerical1 = df.select_dtypes('number').columns
matrix = np.triu(df[numerical1].corr())
fig, ax = plt.subplots(figsize=(14,10))
sns.heatmap(df[numerical1].corr(), annot=True, fmt='.2f', vmin=-1, vmax=1, center=0,
            cmap='coolwarm', mask=matrix, ax=ax);

###Output
_____no_output_____

###Markdown
Categorical Features

###Code
df[categorical].head()
df[categorical].describe()
df[categorical].nunique()

###Output
_____no_output_____

###Markdown
So far so good. There are no zero-variance columns and no extremely high-cardinality columns relative to the size of the data.

MODEL SELECTION

Prediction using different machine learning models. First, we'll use a dummy CatBoost model for hyperparameter tuning of CatBoost.

CATBOOST

###Code
accuracy = []
model_names = []

X = df.drop('Cause Effect', axis=1)
y = df['Cause Effect']
# indices of the non-float (categorical) columns
categorical_features_indices = np.where(X.dtypes != float)[0]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = CatBoostClassifier(verbose=False, random_state=0,
                           objective='CrossEntropy',
                           colsample_bylevel=0.04292240490294766,
                           depth=10,
                           boosting_type='Plain',
                           bootstrap_type='MVS')
model.fit(X_train, y_train, cat_features=categorical_features_indices, eval_set=(X_test, y_test))

y_pred = model.predict(X_test)
accuracy.append(round(accuracy_score(y_test, y_pred), 4))
print(classification_report(y_test, y_pred))

model_names = ['Catboost_tuned']
result_df6 = pd.DataFrame({'Accuracy': accuracy}, index=model_names)
result_df6

###Output
              precision    recall  f1-score   support

           0       0.88      1.00      0.93        14
           1       0.00      0.00      0.00         2

    accuracy                           0.88        16
   macro avg       0.44      0.50      0.47        16
weighted avg       0.77      0.88      0.82        16

###Markdown
Second, we will use sklearn.model_selection to get the label distributions:

###Code
Rate = df['Cause Effect']
Percentage = df['Percentage%']
df.drop(['Cause Effect', 'Percentage%'], axis=1, inplace=True)
df.insert(0, 'Cause Effect', Rate)
df.insert(1, 'Percentage%', Percentage)

# 'Cause Effect' and 'Percentage%' are moved to the front of the DataFrame
df.head()

###Output
_____no_output_____

###Markdown
Splitting the Data (Original DataFrame)

###Code
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold, StratifiedKFold

print(round(df['Cause Effect'].value_counts()[0]/len(df) * 100,2), '% of the dataset are not contagious')
print(round(df['Cause Effect'].value_counts()[1]/len(df) * 100,2), '% of the dataset are contagious')

X = df.drop('Cause Effect', axis=1)
y = df['Cause Effect']

sss = StratifiedKFold(n_splits=5, random_state=None, shuffle=False)

for train_index, test_index in sss.split(X, y):
    print("Train:", train_index, "Test:", test_index)
    original_Xtrain, original_Xtest = X.iloc[train_index], X.iloc[test_index]
    original_ytrain, original_ytest = y.iloc[train_index], y.iloc[test_index]

# We already have X_train and y_train for undersample data; that's why I am using original to distinguish and to not overwrite these variables.
# original_Xtrain, original_Xtest, original_ytrain, original_ytest = train_test_split(X, y, test_size=0.2, random_state=42)

# Check the distribution of the labels

# Turn into an array
original_Xtrain = original_Xtrain.values
original_Xtest = original_Xtest.values
original_ytrain = original_ytrain.values
original_ytest = original_ytest.values

# See if both the train and test label distributions are similarly distributed
train_unique_label, train_counts_label = np.unique(original_ytrain, return_counts=True)
test_unique_label, test_counts_label = np.unique(original_ytest, return_counts=True)

print('-' * 100)
print('Label Distributions: \n')
print(train_counts_label / len(original_ytrain))
print(test_counts_label / len(original_ytest))

###Output
86.27 % of the dataset are not contagious
13.73 % of the dataset are contagious
Train: [10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 26 27 28 29 30 31 32 33 34
 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50] Test: [ 0  1  2  3  4  5  6  7  8  9 25]
Train: [ 0  1  2  3  4  5  6  7  8  9 19 20 21 22 23 24 25 26 27 28 29 31 32 33
 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50] Test: [10 11 12 13 14 15 16 17 18 30]
Train: [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 25 29 30 31 32
 33 34 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50] Test: [19 20 21 22 23 24 26 27 28 35]
Train: [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24 25 26 27 28 30 35 40 42 43 44 45 46 47 48 49 50] Test: [29 31 32 33 34 36 37 38 39 41]
Train: [ 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 41] Test: [40 42 43 44 45 46 47 48 49 50]
----------------------------------------------------------------------------------------------------
Label Distributions: 

[0.87804878 0.12195122]
[0.8 0.2]

###Markdown
Let's predict the increase of the average rate using a KNeighborsRegressor model

###Code
# example of evaluating a knn model on the regression dataset
from pandas import read_csv
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.neighbors import KNeighborsRegressor

# load the dataset (numeric columns only -- KNN cannot use the string cause names)
df = read_csv('Leading Causes of Death in Egypt.csv')
data = df.select_dtypes('number').values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)

# split dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

# define model
model = KNeighborsRegressor()
# fit model
model.fit(X_train, y_train)
# make predictions
yhat = model.predict(X_test)
# evaluate predictions
mae = mean_absolute_error(y_test, yhat)
print('Mean Absolute Error (mae)')
print('Rate Of Increase: %.3f' % mae, '%')

from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn import datasets

# cross-validated accuracy of a decision tree on sklearn's built-in wine dataset
Accuracy = datasets.load_wine()
X = Accuracy.data
y = Accuracy.target
dtree = DecisionTreeClassifier()
Model_Accuracy = cross_val_score(dtree, X, y, scoring="accuracy").mean()
print('Model Accuracy:', Model_Accuracy * 100, '%')

###Output
Model Accuracy: 86.52380952380952 %

###Markdown
Important mention

These statistics were recorded in 2018 from the WORLD HEALTH RANKINGS of world life expectancy, in detail (world rank, percentage, rate and deaths), as we can see in the previous dataset, while the last recorded statistics, listed in 2020, give the total case counts only, without any details, as we can see below. Hence, from the
previous and following statics of 2018 and 2020 we can check the module accuracy in a specific row only as an example:Coronary Heart Disease in egypt in 2018: 271,690 casesas we saw earlier the model shown the Rate Of Increase as: 4.433 % so 4.433 of 271.690 = 12044.0177 cases So, 12044.0177 + (271690 Last record state) = 283,734.0177 cases.And the last recorded cases in 2020 for Coronary Heart Disease was 288,790 cases, which is almost nearly expectation record between the model and the real recorded cases as the model has a 86.52% of Accuracy. ###Code Coronary Heart Disease 288,790 Liver Disease 121,883 Stroke 105,209 Influenza and Pneumonia 39,130 Kidney Disease 32,350 Liver Cancer 31,873 Alzheimers & Dementia 31,781 Diabetes Mellitus 31,593 Lung Disease 30,520 Low Birth Weight 24,134 Road Traffic Accidents 23,848 COVID-19 17,545 Breast Cancer 13,086 Hypertension 11,944 Birth Trauma 10,974 Lymphomas 9,144 Lung Cancers 8,936 Endocrine Disorders 8,719 Bladder Cancer 8,375 Violence 8,177 Diarrhoeal diseases 7,917 Leukemia 7,159 Suicide 6,724 Inflammatory/Heart 6,501 Other Neoplasms 6,355 Colon-Rectum Cancers 5,042 ###Output _____no_output_____
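###Markdown As a quick sanity check, the verification arithmetic above can be reproduced in a short cell. This is only a sketch using the figures quoted in the text (the 4.433% rate of increase and the 2018/2020 Coronary Heart Disease records); it is not part of the original pipeline.
###Code # A sketch of the verification arithmetic described above, using only
# the figures quoted in the text (not recomputed from the dataset).
baseline_2018 = 271_690            # Coronary Heart Disease cases recorded in 2018
rate_of_increase = 4.433 / 100     # model-predicted Rate Of Increase quoted earlier
projected_2020 = baseline_2018 * (1 + rate_of_increase)
actual_2020 = 288_790              # cases recorded in 2020
print('Projected 2020 cases: %.0f' % projected_2020)
print('Gap to the 2020 record: %.0f cases' % (actual_2020 - projected_2020))
###Output _____no_output_____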
docs/python_basics/03_matplotlib.ipynb
###Markdown Python basics 3: Matplotlib This tutorial introduces matplotlib, a Python plotting library that we will use to display numpy arrays as images. We will learn how to visualise arrays with pyplot, index into image data, and modify pixel values. Follow the instructions below to download the tutorial and open it in the Sandbox. Download the tutorial notebook [Download the Python basics 3 tutorial notebook](../_static/python_basics/03_download-matplotlib.ipynb)[Download the exercise image file](../_static/python_basics/Guinea_Bissau.JPG)To view this notebook on the Sandbox, you will need to first download the notebook and the image to your computer, then upload both of them to the Sandbox. Ensure you have followed the set-up prerequisites listed in [Python basics 1: Jupyter](./01_jupyter.ipynb), and then follow these instructions:1. Download the notebook by clicking the first link above. Download the image by clicking the second link above.2. On the Sandbox, open the **Training** folder.3. Click the **Upload Files** button as shown below.4. Select the downloaded notebook using the file browser. Click **OK**.5. Repeat to upload the image file to the **Training** folder. It may take a while for the upload to complete.6. Both files will appear in the **Training** folder. Double-click the tutorial notebook to open it and begin the tutorial.You can now use the tutorial notebook as an interactive version of this webpage.
###Code .. note:: The tutorial notebook should look like the text and code below. However, the tutorial notebook outputs are blank (i.e. no results showing after code cells). Follow the instructions in the notebook to run the cells in the tutorial notebook. Refer to this page to check your outputs look similar.
###Output _____no_output_____
###Markdown Introduction to matplotlib's pyplot We are going to use part of matplotlib called `pyplot`. We can import pyplot by specifying it comes from matplotlib. We will abbreviate `pyplot` to `plt`.
###Code %matplotlib inline # Generates plots on the same page instead of opening a new window import numpy as np from matplotlib import pyplot as plt
###Output _____no_output_____
###Markdown Images are 2-dimensional arrays containing pixels. Therefore, we can use 2-dimensional arrays to represent image data and visualise them with matplotlib. In the example below, we will use the numpy `arange` function to generate a 1-dimensional array filled with elements from `0` to `99`, and then reshape it into a 2-dimensional array using `reshape`.
###Code arr = np.arange(100).reshape(10,10) print(arr) plt.imshow(arr)
###Output [[ 0 1 2 3 4 5 6 7 8 9] [10 11 12 13 14 15 16 17 18 19] [20 21 22 23 24 25 26 27 28 29] [30 31 32 33 34 35 36 37 38 39] [40 41 42 43 44 45 46 47 48 49] [50 51 52 53 54 55 56 57 58 59] [60 61 62 63 64 65 66 67 68 69] [70 71 72 73 74 75 76 77 78 79] [80 81 82 83 84 85 86 87 88 89] [90 91 92 93 94 95 96 97 98 99]]
###Markdown If you remember from the [last tutorial](./02_numpy.ipynb), we were able to address regions of a numpy array using the square bracket `[ ]` index notation. For multi-dimensional arrays we can use a comma `,` to distinguish between axes. ```python[ first dimension, second dimension, third dimension, etc. ]```As before, we use colons `:` to denote `[ start : end : stride ]`. We can do this for each dimension.For example, we can update the values on the left part of this array to be equal to `1`.
###Code arr = np.arange(100).reshape(10,10) arr[:, :5] = 1 plt.imshow(arr)
###Output _____no_output_____
###Markdown The indexes in the square brackets of `arr[:, :5]` can be broken down like this:```python[ 1st dimension start : 1st dimension end, 2nd dimension start : 2nd dimension end ]```Dimensions are separated by the comma `,`. Our first dimension is the vertical axis, and the second dimension is the horizontal axis. Their spans are marked by the colon `:`. Therefore:```python[ Vertical start : Vertical end, Horizontal start : Horizontal end ]```If there are no indexes entered, then the array will take all values. This means `[:, :5]` gives:```python[ Vertical start : Vertical end, Horizontal start : Horizontal start + 5 ]```Therefore the array index selected the first 5 pixels along the width, at all vertical values. Now let's see what that looks like on an actual image. > **Tip**: Ensure you uploaded the file `Guinea_Bissau.JPG` to your **Training** folder along with the tutorial notebook. We will be using this file in the next few steps and exercises. We can use the pyplot library to load an image using the matplotlib function `imread`. `imread` reads in an image file as a 3-dimensional numpy array. This makes it easy to manipulate the array. By convention, the first dimension corresponds to the vertical axis, the second to the horizontal axis, and the third to the Red, Green and Blue channels of the image. Red-green-blue channels conventionally take on values from 0 to 255.
###Code im = np.copy(plt.imread('Guinea_Bissau.JPG')) # This file path (red text) indicates 'Guinea_Bissau.JPG' is in the # same folder as the tutorial notebook. If you have moved or # renamed the file, the file path must be edited to match. im.shape
###Output _____no_output_____
###Markdown `Guinea_Bissau.JPG` is an image of Rio Baboque in Guinea-Bissau in 2018. It has been generated from Landsat 8 satellite data. The results of the above cell show that the image is 590 pixels tall, 602 pixels wide, and has 3 channels. The three channels are red, green, and blue (in that order). Let's display this image using the pyplot `imshow` function.
###Code plt.imshow(im)
###Output _____no_output_____
###Markdown Exercises 3.1 Let's use the indexing functionality of numpy to select a portion of this image. Select the top-right corner of this image with shape `(200,200)`.> **Hint:** Remember there are three dimensions in this image. Colons separate spans, and commas separate dimensions.
###Code # We already defined im above, but if you have not, # you can un-comment and run the next line # im = np.copy(plt.imread('Guinea_Bissau.JPG')) # Fill in the question marks with the correct indexes topright = im[?,?,?] # Plot your result using imshow plt.imshow(topright)
###Output _____no_output_____
###Markdown If you have selected the correct corner, there should not be much water in it! 3.2 Let's have a look at one of the pixels in this image. We choose the top-left corner with position `(0,0)` and show the values of its RGB channels.
###Code # Run this cell to see the colour channel values im[0,0]
###Output _____no_output_____
###Markdown The first value corresponds to the red component, the second to the green and the third to the blue. `uint8` can contain values in the range `[0-255]` so the pixel has a lot of red, some green, and not much blue. This pixel is an orange-yellow sandy colour. Now let's modify the image. What happens if we set all the values representing the blue channel to the maximum value?
###Code # Run this cell to set all blue channel values to 255 # We first make a copy to avoid modifying the original image im2 = np.copy(im) im2[:,:,2] = 255 plt.imshow(im2) ###Output _____no_output_____
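###Markdown If you do not have `Guinea_Bissau.JPG` to hand, the same indexing rules can be practised on a small synthetic image. The sketch below (an addition, not part of the original tutorial) builds a 10 x 10 RGB array and colours its channels with the slice notation introduced above.
###Code # A small self-contained sketch: the same [vertical, horizontal, channel]
# indexing on a synthetic RGB array instead of the JPG file.
import numpy as np
from matplotlib import pyplot as plt

rgb = np.zeros((10, 10, 3), dtype=np.uint8)
rgb[:, :, 0] = 200      # some red everywhere
rgb[:5, :, 1] = 255     # full green in the top half
rgb[:, 5:, 2] = 255     # full blue in the right half
plt.imshow(rgb)
###Output _____no_output_____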
problems/0059/solution.ipynb
###Markdown Problem 59 XOR decryptionEach character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase $A = 65$, asterisk $(*) = 42$, and lowercase $k = 107$.A modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, $65 XOR 42 = 107$, then $107 XOR 42 = 65$.For unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both "halves", it is impossible to decrypt the message.Unfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.Your task has been made easy, as the encryption key consists of three lower case characters. Using [p059_cipher.txt](https://projecteuler.net/project/resources/p059_cipher.txt) (right click and 'Save Link/Target As...'), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text. Solution ###Code from itertools import product def compute(path: str, n: int) -> int: text = list(map(int, open(path).read().split(','))) keys = {i: set() for i in range(n)} letters = range(97, 123) for i in range(n): for j in letters: for k in range(i, len(text), n): keys[i].add(j) if not 32 <= text[k] ^ j <= 122: keys[i].remove(j) break for key in product(*list(keys.values())): decrypted_text = '' result = 0 for i, j in enumerate(text): xor = j ^ key[i % n] decrypted_text += chr(xor) result += xor if ' the ' in decrypted_text: return result compute('p059_cipher.txt', 3) %timeit -n 100 -r 1 -p 6 compute('p059_cipher.txt', 3) ###Output 3.84701 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 100 loops each)
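###Markdown A quick way to convince yourself of the XOR round-trip property the problem relies on is the sketch below (an addition, not part of the original solution): encrypting and decrypting with the same key byte restores the plain text.
###Code # XOR round-trip: applying the same key twice restores the original byte.
assert 65 ^ 42 == 107 and 107 ^ 42 == 65  # the example from the problem text

message = 'XOR decryption'
key = 42
cipher = [ord(ch) ^ key for ch in message]
plain = ''.join(chr(byte ^ key) for byte in cipher)
assert plain == message
print(cipher[:5], '->', plain)
###Output _____no_output_____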
src/notebooks/PICO.ipynb
###Markdown The PICO model based on Reese et al (2018): "Antarctic sub-shelf melt rates via PICO"In part (a) we test a few idealized geometries; in part (b) realistic geometries are presented.There are a few differences from the original implementation w.r.t. real geometries:- underlying datasets: we use the BedMachine2 data- model resolution: we use the BedMachine native grid at 500 m grid spacing, whereas PICO uses 5 km Favier's implementation compares the PICO Box Model (BM) to the simple parametrization (M) and the Plume Model (PME):- use two constant depths for "ambient" temperatures: 500 m or 700 m- use 2, 5, or 10 boxes- avoid pressure dependence of melting because it introduces an energetic inconsistency -> uniform melting in boxes
###Code import sys import numpy as np import xarray as xr import pandas as pd import warnings import geopandas import matplotlib import cartopy.crs as ccrs import matplotlib.pyplot as plt sys.path.append("..") # matplotlib.rc_file('../rc_file') %matplotlib inline %config InlineBackend.print_figure_kwargs={'bbox_inches':None} %load_ext autoreload %autoreload 2 warnings.filterwarnings("ignore", category=matplotlib.MatplotlibDeprecationWarning) from real_geometry import RealGeometry, glaciers from PICO import PicoModel, table2 from compare_models import compare_PICO
###Output _____no_output_____
###Markdown (a) idealized geometries
###Code f, ax = plt.subplots(5,3, figsize=(12,12), sharey='row', constrained_layout=True) for i, testcase in enumerate(['test1', 'test2', 'test3']): geo, ds = PicoModel(name=testcase).compute_pico() geo.draft.plot(ax=ax[0,i]) ax[0,i].set_title(testcase) ds.melt.plot(ax=ax[1,i]) ds.mk.plot(ax=ax[2,i]) ds.Tk.plot(ax=ax[3,i]) ds.Sk.plot(ax=ax[4,i])
###Output _____no_output_____
###Markdown These are test geometries: `test1` is a quasi-1D ice shelf of 100 km length with a grounding line depth of 1000 m and an ice shelf front depth of 500 m. `test2` is simply a rotated version of `test1`. `test3` has a sinusoidal grounding line profile and a flat ice shelf front profile. The geometries (arbitrarily) have 3 boxes. `boxnr=0` represents either the average (for melt) or the ambient conditions (temperature and salinity).The melt is highest near the grounding line, in part because in-situ temperatures are highest there. Both temperature and salinity decrease as the plume ascends towards the ice shelf front. (b) real geometries At first execution, the code creates the real geometries from the BedMachine data and IceVelocity data (these files are too big for version control on Github, but see lines 26f in `real_geometries.py` for their location).
example: Thwaites glacier ###Code geo, ds = PicoModel('Thwaites').compute_pico() f, ax = plt.subplots(1,4, figsize=(20,4), sharey=True) geo.draft.plot(ax=ax[0]) geo.rd.plot(ax=ax[1]) geo.box.plot(ax=ax[2]) ds.melt.plot(ax=ax[3]) ###Output _____no_output_____ ###Markdown comparing the 6 currently implemented ice shelves ###Code for i, glacier in enumerate(glaciers): if glacier in ['Ross', 'FilchnerRonne']: # at the BedMachine resolution, these datasets are too big for laptop memory continue PicoModel(glacier).compute_pico() compare_PICO() ###Output _____no_output_____ ###Markdown maps of Amundsen Sea and East Antarctica ###Code proj = ccrs.SouthPolarStereo(true_scale_latitude=-71) def fn_poly(glacier): return f'../../data/mask_polygons/{glacier}_polygon.geojson' x5, y5, _, _ = geopandas.read_file(fn_poly('MoscowUniversity'), crs='espg:3031').total_bounds _, _, x6, y6 = geopandas.read_file(fn_poly('Totten') , crs='espg:3031').total_bounds x3, _, _, y4 = geopandas.read_file(fn_poly('PineIsland') , crs='espg:3031').total_bounds _, y3, x4, _ = geopandas.read_file(fn_poly('Dotson') , crs='espg:3031').total_bounds import matplotlib.ticker as mticker f = plt.figure(figsize=(8,12)) for i in range(2): # Amundsen Sea, Totten+MoscowUniversity (x1,x2,y1,y2) = [(x3,x4,y3-1e4,y4+2e4),(x5-1e4,x6,y5,y6+1e4)][i] shelves = [['PineIsland','Thwaites','Dotson'], ['Totten','MoscowUniversity']][i] for s, shelf in enumerate(shelves): (x,y) = [[(.65,.88),(.05,.55),(.05,.2)],[(.3,.8),(.4,.1)]][i][s] name = [['Pine\nIsland','Thwaites','Dotson/\nCrosson'], ['Totten','Moscow\nUniversity']][i][s] dsg = xr.open_dataset(RealGeometry(shelf).fn_PICO) dsP = xr.open_dataset(PicoModel(shelf).fn_PICO_output) lon, lat = dsg.lon, dsg.lat for j in range(3): q = [dsg.draft, dsg.box.where(dsg.mask), dsP.melt.where(dsg.mask)][j] cmap = ['viridis', 'Spectral','inferno_r'][j] (vmin,vmax) = [(-2000,0),(1,2),(0,25)][j] ax = f.add_axes([j/3,.545-.54*i,.33,.45], projection=proj) ax.set_frame_on(False) ax.set_extent([x1,x2,y1,y2], crs=proj) ax.coastlines() gl = ax.gridlines() gl.xlocator = mticker.FixedLocator(np.arange(-180,179,5)) gl.ylocator = mticker.FixedLocator(np.arange(-89,89)) im = ax.pcolormesh(lon, lat, q, transform=ccrs.PlateCarree(), cmap=cmap, vmin=vmin, vmax=vmax) if i==0: # colorbars cax = f.add_axes([j/3+.02,.5,.29,.02]) label = ['draft [m]', 'box nr.', 'melt rate [m/yr]'][j] plt.colorbar(im, cax=cax, orientation='horizontal', label=label) if j==0: ax.text(x, y, name, weight='bold', transform=ax.transAxes) if j==2: ax.text(x, y, f'{dsP.mk[0].values:.2f} m/yr', transform=ax.transAxes) # f, ax = plt.subplots(4, 3, figsize=(15,15)) # for i, key in enumerate(list(ds.keys())[:-1]): # if i<9: kwargs = {'cbar_kwargs':{'orientation':'horizontal'}} # else: kwargs = {} # ds[key].plot(ax=ax[int(i/3), i%3], **kwargs) ###Output _____no_output_____
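###Markdown For a quick numerical overview alongside the maps, the shelf-averaged melt rates can be tabulated. This is only a sketch: it assumes, as in the map code above, that `mk` at box number 0 holds the shelf-average melt rate, and it recomputes the model for the four shelves that fit in memory.
###Code # A sketch: tabulate the shelf-average melt rate (box 0) per ice shelf.
rows = []
for glacier in glaciers:
    if glacier in ['Ross', 'FilchnerRonne']:
        continue  # skipped above because of memory limits
    _, ds = PicoModel(glacier).compute_pico()
    rows.append({'shelf': glacier, 'avg melt [m/yr]': float(ds.mk[0].values)})
pd.DataFrame(rows).set_index('shelf')
###Output _____no_output_____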
notes/book_ap/CSVShape.ipynb
###Markdown Import
###Code from dataclasses import dataclass, field, asdict from typing import List from csv2shex.csvreader import ( csvreader, _get_csvrow_dicts_list, _get_corrected_csvrows_list, _get_csvshape_dicts_list, ) from csv2shex.csvrow import CSVRow from csv2shex.utils import pprint_df import pandas as pd
###Output _____no_output_____
###Markdown Declare
###Code @dataclass class CSVTripleConstraint: """Instances hold TAP/CSV row elements that form a triple constraint.""" propertyID: str = None valueConstraint: str = None valueShape: str = None extras: field(default_factory=dict) = None # propertyLabel: str = None # mandatory: str = None # repeatable: str = None # valueNodeType: str = None # valueDataType: str = None # valueConstraintType: str = None # note: str = None @dataclass class CSVShape: """Instances hold TAP/CSV row elements that form a shape.""" shapeID: str = None # shapeLabel: str = None # shapeClosed: str = None # start: bool = False tripleconstraints_list: List[CSVTripleConstraint] = field(default_factory=list) @dataclass class CSVSchema: """Set of shapes.""" csvrow_dicts_list = [{'shapeID': ':book', 'propertyID': 'dc:creator', 'valueConstraint': '', 'valueShape': ':author'}, {'shapeID': '', 'propertyID': 'dc:type', 'valueConstraint': 'so:Book', 'valueShape': ''}, {'shapeID': ':author', 'propertyID': 'foaf:name', 'valueConstraint': '', 'valueShape': ''}]
###Output _____no_output_____
###Markdown For each row 1. Initialize instance of CSVShape
###Code dict_of_shape_objs = dict() for row in csvrow_dicts_list: shape = CSVShape() shape.shapeID = row["shapeID"] shape.tripleconstraints_list = list() dict_of_shape_objs[shape.shapeID] = shape dict_of_shape_objs
###Output _____no_output_____
###Markdown 2. On finding new shapeID, capture shape-related elements in a shape_dict.
###Code shape_dict = dict() shape_dict["shapeID"] = "b" shape_dict["shapeLabel"] = "label" shape_dict["shapeClosed"] = False shape_dict["start"] = True shape_dict["tripleconstraints_list"] = list() shape_dict
###Output _____no_output_____
###Markdown 3. Assign CSVShape instance as value to key "shapeID" in dict_of_shape_objs
###Code dict_of_shape_objs = dict() dict_of_shape_objs[shape_dict["shapeID"]] = CSVShape(**shape_dict) dict_of_shape_objs "b" in dict_of_shape_objs # Triple constraints list for shape "b" dict_of_shape_objs["b"].tripleconstraints_list
###Output _____no_output_____
###Markdown 4. Each new shape is added to dict_of_shape_objs.
###Code shape_dict = dict() shape_dict["shapeID"] = "c" shape_dict["shapeLabel"] = "clabel" shape_dict["shapeClosed"] = False shape_dict["start"] = False shape_dict["tripleconstraints_list"] = list() dict_of_shape_objs[shape_dict["shapeID"]] = CSVShape(**shape_dict) dict_of_shape_objs dict_of_shape_objs.keys() # After the first row, for rows that lack shapeIDs, get the most-recently-inserted key from dict_of_shape_objs list(dict_of_shape_objs.keys())[-1]
###Output _____no_output_____
###Markdown 5.
###Code # Problem: append multiple triple constraint dicts to tripleconstraints_list tc_dict = dict() tc_dict["propertyID"] = "dc:type" tc_dict["valueConstraint"] = "foaf:Person" dict_of_shape_objs["b"].tripleconstraints_list.append(tc_dict) dict_of_shape_objs # Problem: append multiple triple constraint dicts to tripleconstraints_list tc_dict = dict() tc_dict["propertyID"] = "dc:creator" tc_dict["valueConstraint"] = "http://example.org/person1" tc_obj = CSVTripleConstraint(**tc_dict) tc_obj CSVTripleConstraint(**tc_dict) dict_of_shape_objs # This is to pretty-print the entire CSVShape vars(CSVShape(shapeID='b', shapeLabel='label', shapeClosed=False, start=True, tripleconstraints_list=[ {'propertyID': 'dc:type', 'valueConstraint': 'foaf:Person'}, {'propertyID': 'dc:creator', 'valueConstraint': 'http://example.org/person1'}])) ###Output _____no_output_____
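###Markdown Putting the pieces above together, one possible single pass over `csvrow_dicts_list` could look like the sketch below: open a new `CSVShape` whenever a row carries a `shapeID`, and append the remaining non-empty elements of each row to the current shape's `tripleconstraints_list`. This is an illustration of the intended flow, not settled code.
###Code # A sketch of the full loop implied by the steps above.
dict_of_shape_objs = {}
current_id = None
for row in csvrow_dicts_list:
    if row["shapeID"]:                      # a new shape starts here
        current_id = row["shapeID"]
        dict_of_shape_objs[current_id] = CSVShape(
            shapeID=current_id, tripleconstraints_list=[]
        )
    # keep the triple-constraint elements that are actually filled in
    tc_dict = {k: v for k, v in row.items() if k != "shapeID" and v}
    dict_of_shape_objs[current_id].tripleconstraints_list.append(tc_dict)

dict_of_shape_objs
###Output _____no_output_____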
Teaching Session1_part5.ipynb
###Markdown Operations on NumPy ArraysThe learning objectives of this section are:* Manipulate arrays * Reshape arrays * Stack arrays* Perform operations on arrays * Perform basic mathematical operations * Apply built-in functions * Apply your own functions * Apply basic linear algebra operations
###Code import numpy as np
###Output _____no_output_____
###Markdown Example - 1 (Arithmetic Operations)
###Code array1 = np.array([10,20,30,40,50]) array2 = np.arange(5) array1 array2 # Add array1 and array2. array3 = array1 + array2 array3
###Output _____no_output_____
###Markdown Example - 2
###Code array4 = np.array([1,2,3,4]) array4 + array1 print (array1.shape) print (array4.shape)
###Output (4,)
###Markdown Example - 3
###Code array = np.linspace(1, 10, 5) array array*2 array**2
###Output _____no_output_____
###Markdown Stacking Arrays ```np.hstack()``` and ```np.vstack()```Stacking is done using the ```np.hstack()``` and ```np.vstack()``` methods. For horizontal stacking, the number of rows should be the same, while for vertical stacking, the number of columns should be the same.
###Code # Note that np.hstack(a, b) throws an error - you need to pass the arrays as a single tuple (or list) a = np.array([1, 2, 3]) b = np.array([2, 3, 4]) np.hstack((a,b)) np.vstack((a,b)) np.arange(12) np.arange(12).reshape(3,4) array1 = np.arange(12).reshape(3,4) #3x4 array2 = np.arange(20).reshape(5,4) #5x4 print (array1, '\n', array2) np.vstack((array1,array2))
###Output _____no_output_____
###Markdown Example - 4 (NumPy built-in functions)
###Code array1 np.power(array1, 3) np.arange(9).reshape(3,3) x = np.array([-2,-1, 0, 1,2]) x abs(x) np.absolute(x)
###Output _____no_output_____
###Markdown Example - 5 (Trigonometric functions)
###Code np.pi theta = np.linspace(0, np.pi, 5) theta np.sin(theta) np.cos(theta) np.tan(theta)
###Output _____no_output_____
###Markdown Example - 6 (Exponential and logarithmic functions)
###Code x = [1, 2, 3, 10] x = np.array(x) np.exp(x) # e=2.718... # 2^1, 2^2, 2^3, 2^10 np.exp2(x) np.power(x,3) np.log(x) np.log2(x) np.log10(x) np.log
###Output _____no_output_____
###Markdown Example - 7
###Code x = np.arange(5) x y = x * 10 y y = np.empty(5) y np.multiply(x, 12, out=y) y y = np.zeros(10) y np.power(2, x, out=y[::2]) y
###Output _____no_output_____
###Markdown Example - 8 (Aggregates)
###Code x = np.arange(1,6) x sum(x) np.add.reduce(x) np.add.accumulate(x) np.multiply.accumulate(x)
###Output _____no_output_____
###Markdown Apply Basic Linear Algebra OperationsNumPy provides the ```np.linalg``` package to apply common linear algebra operations, such as:* ```np.linalg.inv```: Inverse of a matrix* ```np.linalg.det```: Determinant of a matrix* ```np.linalg.eig```: Eigenvalues and eigenvectors of a matrix Also, you can multiply matrices using ```np.dot(a, b)```.
###Code # np.linalg documentation help(np.linalg) A = np.array([[6, 1, 1], [4, -2, 5], [2, 8, 7]]) A
###Output _____no_output_____
###Markdown Rank of a matrix
###Code np.linalg.matrix_rank(A)
###Output _____no_output_____
###Markdown Trace of matrix A
###Code np.trace(A)
###Output _____no_output_____
###Markdown Determinant of a matrix
###Code np.linalg.det(A)
###Output _____no_output_____
###Markdown Inverse of matrix A
###Code A np.linalg.inv(A) B = np.linalg.inv(A) np.matmul(A,B) # actual matrix multiplication A * B # element-wise multiplication, not matrix multiplication
###Output _____no_output_____
###Markdown Matrix A raised to power 3
###Code np.linalg.matrix_power(A,3) # matrix multiplication A A A
###Output _____no_output_____
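###Markdown Eigenvalues and eigenvectors of matrix A The linalg summary above also lists ```np.linalg.eig```; as a quick illustration (not in the original session), here it is applied to the same matrix A.
###Code # Eigenvalues and eigenvectors of matrix A
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)
print(eigenvectors)
###Output _____no_output_____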
query_language.ipynb
###Markdown pandas temporal query language
###Code #export import re import numpy as np import pandas as pd import ast import glob import ntpath import os from itertools import zip_longest, chain from itertools import product from functools import lru_cache from functools import singledispatch
###Output _____no_output_____
###Markdown General helper functions Info
###Code #export class Info(): """ A class to store information about the data and results from analysis """ def __init__(self): self.evaluated = {}
###Output _____no_output_____
###Markdown memory
###Code #export def memory(info, func, expr): """ Checks if the function has been called with the same argument previously and, if so, returns the stored result instead of running the function again Args: info (Info): cache of previously evaluated expressions func (str): name of the calling function expr (str): the expression used as the cache key """ rows=None if info: if func in info.evaluated: if expr in info.evaluated[func]: rows = info.evaluated[func][expr] else: info.evaluated[func] = {} else: info = Info() info.evaluated[func] = {} return info, rows
###Output _____no_output_____
###Markdown listify
###Code #export def listify(string_or_list): """ return a list if the input is a string, if not: returns the input as it was Args: string_or_list (str or any): Returns: A list if the input is a string, if not: returns the input as it was Note: - allows user to use a string as an argument instead of single lists - cols='icd10' is allowed instead of cols=['icd10'] - cols='icd10' is transformed to cols=['icd10'] by this function """ if isinstance(string_or_list, str): string_or_list = [string_or_list] return string_or_list
###Output _____no_output_____
###Markdown unique
###Code #export # A function to identify all unique values in one or more columns # with one or multiple codes in each cell def unique(df, cols=None, sep=None, all_str=True): """ Lists unique values from one or more columns sep (str): separator if cells have multiple values all_str (bool): converts all values to strings unique(df=df, cols='inpatient', sep=',') """ # if no column(s) are specified, find unique values in whole dataframe if cols==None: cols=list(df.columns) cols = listify(cols) # multiple values with separator in cells if sep: all_unique=set() for col in cols: new_unique = set(df[col].str.cat(sep=',').split(',')) all_unique.update(new_unique) # single valued cells else: all_unique = pd.unique(df[cols].values.ravel('K')) # if need to make sure all elements are strings without surrounding spaces if all_str: all_unique=[str(value).strip() for value in all_unique] return all_unique #unique(df=df, cols='codes', sep=',')
###Output _____no_output_____
###Markdown del dot and zero
###Code #export def del_dot(code): if isinstance(code, str): return code.replace('.','') else: codes = [c.replace('.','') for c in code] return codes def del_zero(code, left=True, right=False): codes = listify(code) if left: codes = [c.lstrip('0') for c in codes] if right: codes = [c.rstrip('0') for c in codes] if isinstance(code, str): codes = codes[0] return codes
###Output _____no_output_____
###Markdown Notations expand hyphen
###Code #export # function to expand a string like 'K51.2-K53.8' to a list of codes # Need regex to extract the number component of the input string # The singledispatch decorator enables us to have the same name, but use # different functions depending on the datatype of the first argument. # # In our case we want one function to deal with a single string input, and # another to handle a list of strings.
It could all be handled in a single # function using nested if, but singledispatch makes it less messy and more fun! # Here is the main function; it is just the name and an error message if the # argument does not fit any of the inputs that will be allowed @singledispatch def expand_hyphen(expr): """ Expands codes expression(s) that have hyphens to list of all codes Args: code (str or list of str): String or list of strings to be expanded Returns: List of strings Examples: expand_hyphen('C00*-C26*') expand_hyphen('b01.1*-b09.9*') expand_hyphen('n02.2-n02.7') expand_hyphen('c00*-c260') expand_hyphen('b01-b09') expand_hyphen('b001.1*-b009.9*') expand_hyphen(['b001.1*-b009.9*', 'c11-c15']) Note: Unequal number of decimals in start and end code is problematic. Example: C26.0-C27.11 will not work since the meaning is not obvious: Is the step size 0.01? In which case C27.1 will not be included, while C27.10 will be (and trailing zeros can be important in codes) """ raise ValueError('The argument must be a string or a list') # register the function to be used if the input is a string @expand_hyphen.register(str) def _(expr): # return immediately if nothing to expand if '-' not in expr: return [expr] lower, upper = expr.split('-') lower=lower.strip() # identify the numeric component of the code lower_str = re.search("\d*\.\d+|\d+", lower).group() upper_str = re.search("\d*\.\d+|\d+", upper).group() # note: what about european decimal notation? # also note: what if multiple groups K50.1J8.4-etc lower_num = int(lower_str.replace('.','')) upper_num = int(upper_str.replace('.','')) +1 if upper_num<lower_num: raise ValueError('The start code cannot have a higher number than the end code') # remember length in case of leading zeros length = len(lower_str) nums = range(lower_num, upper_num) # must use integers in a loop, not floats # which also means that we must multiply and divide to get the decimal back # and take care of leading and trailing zeros that may disappear if '.' in lower_str: lower_decimals = len(lower_str.split('.')[1]) upper_decimals = len(upper_str.split('.')[1]) if lower_decimals==upper_decimals: multiplier = 10**lower_decimals codes = [lower.replace(lower_str, format(num /multiplier, f'.{lower_decimals}f').zfill(length)) for num in nums] # special case: allow k1.1-k1.123, but not k.1-k2.123; the last is ambiguous: should it list k2.0 or only 2.00?
elif (lower_decimals<upper_decimals) & (upper_str.split('.')[0]==lower_str.split('.')[0]): from_decimal = int(lower_str.split('.')[1]) to_decimal = int(upper_str.split('.')[1]) +1 nums = range(from_decimal, to_decimal) decimal_str = '.'+lower.split('.')[1] codes = [lower.replace(decimal_str, '.'+str(num)) for num in nums] else: raise ValueError('The start code and the end code do not have the same number of decimals') else: codes = [lower.replace(lower_str, str(num).zfill(length)) for num in nums] return codes # register the function to be used if the input is a list of strings @expand_hyphen.register(list) def _(expr): extended = [] for word in expr: extended.extend(expand_hyphen(word)) return extended
###Output _____no_output_____
###Markdown expand star
###Code #export # A function to expand a string with star notation (K50*) # to list of all codes starting with K50 @singledispatch def expand_star(code, all_codes=None): """ Expand expressions with star notation to a list of all values with the specified pattern Args: expr (str or list): Expression (or list of expressions) to be expanded all_codes (list) : A list of all codes Examples: expand_star('K50*', all_codes=icd9) expand_star('K*5', all_codes=icd9) expand_star('*5', all_codes=icd9) """ raise ValueError('The argument must be a string or a list') @expand_star.register(str) def _(code, all_codes=None): # return immediately if there is nothing to expand if '*' not in code: return [code] start_str, end_str = code.split('*') if start_str and end_str: codes = {code for code in all_codes if (code.startswith(start_str) & code.endswith(end_str))} elif start_str: codes = {code for code in all_codes if code.startswith(start_str)} elif end_str: codes = {code for code in all_codes if code.endswith(end_str)} return sorted(list(codes)) @expand_star.register(list) def _(code, all_codes=None): expanded=[] for star_code in code: new_codes = expand_star(star_code, all_codes=all_codes) expanded.extend(new_codes) # uniqify in case some overlap expanded = list(set(expanded)) return sorted(expanded)
###Output _____no_output_____
###Markdown expand colon
###Code #export # function to get all codes in a list between the specified start and end code # Example: Get all codes between K40:L52 @singledispatch def expand_colon(code, all_codes=None): raise ValueError('The argument must be a string or a list') @expand_colon.register(str) def _(code, all_codes=None): """ Expand expressions with colon notation to a list of complete code names code (str or list): Expression (or list of expressions) to be expanded all_codes (list or array) : The list to slice from Examples K50:K52 K50.5:K52.19 A3.0:A9.3 Note: This is different from hyphen and star notation because it can handle different code lengths and different number of decimals """ if ':' not in code: return [code] startstr, endstr = code.split(':') # remove spaces startstr = startstr.strip() endstr =endstr.strip() # find start and end position startpos = all_codes.index(startstr) endpos = all_codes.index(endstr) + 1 # slice list (endpos already points one past the end code) expanded = all_codes[startpos:endpos] return expanded @expand_colon.register(list) def _(code, all_codes=None, regex=False): expanded=[] for cod in code: new_codes = expand_colon(cod, all_codes=all_codes) expanded.extend(new_codes) return expanded
###Output _____no_output_____
###Markdown expand regex
###Code #export # Return all elements in a list that fit a regex pattern @singledispatch def expand_regex(code, all_codes): raise ValueError('The argument must be a string or a list of strings') @expand_regex.register(str) def _(code, all_codes=None): code_regex = re.compile(code) expanded = {code for code in all_codes if code_regex.match(code)} # uniqify expanded = list(set(expanded)) return expanded @expand_regex.register(list) def _(code, all_codes): expanded=[] for cod in code: new_codes = expand_regex(cod, all_codes=all_codes) expanded.extend(new_codes) # uniqify in case some overlap expanded = sorted(list(set(expanded))) return expanded
###Output _____no_output_____
###Markdown expand code
###Code #export @singledispatch def expand_code(code, all_codes=None, hyphen=True, star=True, colon=True, regex=False, drop_dot=False, drop_leading_zero=False, sort_unique=True): raise ValueError('The argument must be a string or a list of strings') @expand_code.register(str) def _(code, all_codes=None, hyphen=True, star=True, colon=True, regex=False, drop_dot=False, drop_leading_zero=False, sort_unique=True): # validating input if (not regex) and (':' in code) and (('-' in code) or ('*' in code)): raise ValueError('Notation using colon must start from and end in specific codes, not codes using star or hyphen') if regex: codes = expand_regex(code, all_codes=all_codes) return codes if drop_dot: code = del_dot(code) codes=[code] if hyphen: codes=expand_hyphen(code) if star: codes=expand_star(codes, all_codes=all_codes) if colon: codes=expand_colon(codes, all_codes=all_codes) if sort_unique: codes = sorted(list(set(codes))) return codes @expand_code.register(list) def _(code, all_codes=None, hyphen=True, star=True, colon=True, regex=False, drop_dot=False, drop_leading_zero=False, sort_unique=True): expanded=[] for cod in code: new_codes = expand_code(cod, all_codes=all_codes, hyphen=hyphen, star=star, colon=colon, regex=regex, drop_dot=drop_dot, drop_leading_zero=drop_leading_zero) expanded.extend(new_codes) # uniqify in case some overlap expanded = list(set(expanded)) return sorted(expanded)
###Output _____no_output_____
###Markdown expand columns
###Code #export def expand_columns(expr, all_columns=None, df=None, star=True, hyphen=True, colon=True, regex=None, info=None): """ Expand columns with special notation to their full column names """ notations = '* - :'.split() # return immediately if not needed if not any(symbol in expr for symbol in notations): return [expr] # get a list of columns if it is only implicitly defined by the df # warning: may deprecate this, require explicit all_columns if (df is not None) and (not all_columns): all_columns=list(df.columns) if regex: cols = [col for col in all_columns if re.match(expr, col)] else: if hyphen: cols = expand_hyphen(expr) if star: cols = expand_star(expr, all_codes=all_columns) if colon: cols = expand_colon(expr, all_codes=all_columns) return cols
###Output _____no_output_____
###Markdown More helper functions get rows
###Code #export # mark rows that contain certain codes in one or more columns def get_rows(df, codes, cols=None, sep=None, pid='pid', info=None, fix=True): """ Make a boolean series that is true for all rows that contain the codes Args df (dataframe or series): The dataframe with codes codes (str, list, set, dict): codes to be counted cols (str or list): list of columns to search in sep (str): The symbol that separates the codes if there are multiple codes in a cell pid (str): The name of the column with the personal identifier """ # check if evaluated previously info, rows = memory(info=info, func = 'get_rows', expr=codes) if rows is not None: return rows # check if codes and columns need to be expanded (needed if they use notation) if fix: # do this when cols exist, but handle the case when it does not ... cols = expand_columns(expr=cols, all_columns=list(df.columns), info=info) all_codes = sorted(unique(df=df, cols=cols, sep=sep)) codes = expand_code(codes, all_codes=all_codes) # codes and cols should be lists codes = listify(codes) cols = listify(cols) # approach depends on whether we have multi-value cells or not # if sep exists, then we have multi-value cells if sep: # have multi-valued cells # note: this assumes the sep is a regex word delimiter codes = [rf'\b{code}\b' for code in codes] codes_regex = '|'.join(codes) # starting point: no codes have been found # needed since otherwise the function might return None if no codes exist rows = pd.Series([False]*len(df), index=df.index) # loop over all columns and mark when a code exists for col in cols: rows=rows | df[col].str.contains(codes_regex, na=False) # if not multi valued cells else: mask = df[cols].isin(codes) rows = mask.any(axis=1) return rows
###Output _____no_output_____
###Markdown make codes
###Code #export def make_codes(n, letters=26, numbers=100, seed=False): """ Generate a dataframe with a column of random codes Args: letters (int): The number of different letters to use numbers (int): The number of different numbers to use Returns: A dataframe with a column with one or more codes in the rows """ # each code is assumed to consist of a letter and a number alphabet = list('abcdefghijklmnopqrstuvwxyz') letters=alphabet[:letters] # make random numbers the same if seed is specified if seed: np.random.seed(0) # determine the number of codes to be drawn for each event n_codes=np.random.negative_binomial(1, p=0.3, size=n) # avoid zero (all events have to have at least one code) n_codes=n_codes+1 # for each event, randomly generate the number of codes specified by n_codes codes=[] for i in n_codes: diag = [np.random.choice(letters).upper()+ str(int(np.random.uniform(low=1, high=numbers))) for num in range(i)] code_string=','.join(diag) codes.append(code_string) # create a dataframe based on the list df=pd.DataFrame(codes) df.columns=['code'] return df make_codes(10)
###Output _____no_output_____
###Markdown make data
###Code #export def make_data(n, letters=26, numbers=100, seed=False): """ Generate a dataframe with a column of random codes Args: letters (int): The number of different letters to use numbers (int): The number of different numbers to use Returns: A dataframe with person attributes and one or more codes per event """ pid = range(n) df_person=pd.DataFrame(index = pid) #female = np.random.binomial(1, 0.5, size =n) gender = np.random.choice(['male', 'female'], size=n) region = np.random.choice(['north', 'south', 'east', 'west'], size=n) birth_year = np.random.randint(1920, 1980, size=n) birth_month = np.random.randint(1,12, size=n) birth_day = np.random.randint(1,28, size=n) # ok, I know! events_per_year = np.random.poisson(1, size=n) years = 2020 - birth_year events = years * events_per_year events = np.where(events==0,1,events) events = events.astype(int) all_codes=[] codes = [all_codes.extend(make_codes(n=n, letters=letters, numbers=numbers, seed=seed)['code'].tolist()) for n in events] days_alive = (2020 - birth_year) *365 days_and_events = zip(days_alive.tolist(), events.tolist()) all_days=[] days_after_birth = [all_days.extend(np.random.randint(0, max_day, size=n)) for max_day, n in days_and_events] pid_and_events = zip(list(pid), events.tolist()) all_pids=[] pids = [all_pids.extend([p+1]*e) for p, e in pid_and_events] df_events = pd.DataFrame(index=all_pids) df_events['codes'] = all_codes df_events['days_after'] = all_days #df_person['female'] = female df_person['gender'] = gender df_person['region'] = region df_person['year'] = birth_year df_person['month'] = birth_month df_person['day'] = birth_day df = df_events.merge(df_person, left_index=True, right_index=True) df['birth_date'] = pd.to_datetime(df[['year', 'month', 'day']]) df['event_date'] = df['birth_date'] + pd.to_timedelta(df.days_after, unit='d') del df['month'] del df['day'] del df['days_after'] df['pid'] = df.index df.index.name = 'pid_index' df=df[['pid', 'gender', 'birth_date', 'event_date', 'region', 'codes']] # include deaths too? return df #df = make_data(n=1000) #df #count_person('max 2 L35') #count_person('x before y')
###Output _____no_output_____
###Markdown formatting an expression
###Code #export def format_expression(expr): """ formats an expression so it can be evaluated """ original = expr # easier to parse and split when spaces only exist between significant words expr = remove_space(expr) # insert external variables (maybe unnecessary?) expr = insert_external(expr) # if multiple options are specified in the expression, # make one expression for each alternative specification exprs = get_expressions(expr) return exprs
###Output _____no_output_____
###Markdown remove_space
###Code #export def remove_space(expr): no_space_before = r'(\s)([<=>,])' no_space_after = r'([<=>,])(\s)' expr = re.sub(no_space_before, r'\2', expr) expr = re.sub(no_space_after, r'\1', expr) return expr
###Output _____no_output_____
###Markdown get_expressions
###Code #export def get_expressions(expr): """ Makes a list of all possible statements from an expression, i.e. all possible combinations of expressions involving ?[x, y, z] that are in the expression >>>expr = 'min ?[2,3,4] of (K50, K51) in icd inside ?[10, 20, 30] days before 4AB02 in ncmp' >>>get_expressions(expr) """ original = expr alternatives = re.findall('\?(\[.*?\])', expr) alt_list = [ast.literal_eval(alternative) for alternative in alternatives] combinations = product(*alt_list) all_expressions = [] for n, combination in enumerate(combinations): new_expr = original for i, value in enumerate(combination): new_expr = new_expr.replace('?'
+ alternatives[i], str(value), 1) all_expressions.extend([new_expr]) return all_expressions expr = 'min ?[2,3,4] of (K50, K51) in icd inside ?[10, 20, 30] days before 4AB02 in ncmp' get_expressions(expr)
###Output _____no_output_____
###Markdown insert_external
###Code #export def insert_external(expr): """ Replaces variables prefixed with @ in the expression with the value of the variable from the global namespace Example: x=['4AB02', '4AB04', '4AB06'] expr = '@x before 4AB02' insert_external(expr) """ externals = [word.strip('@') for word in expr.split() if word.startswith('@')] for external in externals: tmp = globals()[external] expr = expr.replace(f'@{external} ', f'{tmp} ') return expr x_1=['4AB02', '4AB04', '4AB06'] expr = '@x_1 before 4AB02' insert_external(expr)
###Output _____no_output_____
###Markdown insert columns
###Code #export def insert_columns(expr, cols=None, all_cols=None, code2col_rules = None, info=None): """ insert column names in expressions (in col ...) logic: all conditions that do not contain column names should end with an 'in' statement general approach: - split on keywords to get each condition separately - next: if the condition is about a column (age>20) no need to do anything - if not, check if it has an 'in', and if not: insert one! todo/problem: row selectors? code2col rule (a function that you can pass in?), sniffing, dict option, or in info?
""" start_pos=expr.find('(') if start_pos == -1: between = expr else: n=1 pos=start_pos while n>0: pos=pos+1 if expr[pos]=='(': n=n+1 elif expr[pos]==')': n=n-1 elif pos>len(expr): print('Error: Not same number of opening and closing parenthesis') between = expr[start_pos+1: pos] return between expr="nothing" expr="more(than nothing)" expr="outside((nested) and (nested))" expr="(x) and (y)" expr = "(x or q) before (y and z)" expr = "c1 before (y and z)" outer_parenthesis(expr) ###Output _____no_output_____ ###Markdown first inner parenthesis expression ###Code #export def first_inner_parenthesis(expr): """ iterates until it finds the (first) innermost expression surrounded by parentheses and returns this expression """ new_expr = expr while '(' in new_expr: outer= outer_parenthesis(new_expr) new_expr = outer #first_inner_parenthesis(new_expr) # hmm check why this is here return new_expr expr="nothing" expr="more(than nothing)" expr="outside((nested) and (nested))" expr="(x) and (y)" expr = "(x or q) before (y and z)" expr = "c1 before (y and z)" expr = "(x and (a or b)) before (y and z)" expr = "(x and (a or (b after c))) before (y and z)" expr = "x and b and c" first_inner_parenthesis(expr) ###Output _____no_output_____ ###Markdown break_upbreaks an neste expresssion into its component partsreturns a dictionary with the name and the expressions to be evaluatedthe dictionary is ordered: evaluate in order (since next expression may depend on calculation of previous expression)logic: recursively find the innnermost expresion, store it, substitute, repeat possible todo (well, never! explicit is better than implicit here): implicit breakup based on priority rules (and, or, before etc, like multiplication addition atc have priority rules that allow implicit breakup) ###Code #export def break_up(expr): """ breaks up an expression with nested parenthesis into sub expressions with no parenthesis, with the innermost expressions first, that needs to be calculated before doing the outer expression. also replaces the statement with a single symbol in the larger expression """ p=0 new_expr = expr expr_store={} while "(" in new_expr: inner = first_inner_parenthesis(new_expr) new_expr = new_expr.replace(f'({inner})', f'p{p}') expr_store[f'p{p}'] = inner p=p+1 return new_expr, expr_store expr="nothing" expr="more(than nothing)" expr="outside((nested) and (nested))" expr="(x) and (y)" expr = "(x or q) before (y and z)" expr = "c1 before (y and z)" expr = "(x and (a or b)) before (y and z)" expr = "(x and (a or (b after c))) before (y and z)" expr = "x and b and c" expr="outside and ((nested) and (nested))" break_up(expr) ###Output _____no_output_____ ###Markdown eval stuffAfter formatting and fixing a query, we end up with a list of expressions to be evaluatd (eval_expr)After breaking up a standard expressions, we have a dictionary of sub expressions (X before Y, min 2 of X and max 3 of Y) (eval_sub_expr)To evaluate a sub-expression, we breake it up into atomic statements (min 2 of Y), evaluate these separately and then apply the transformations that are appropriate based on the type of sub-expression it is (and, before/after etc) (eval_atom)To evaluate an atom we use eval_row selection, get_rows and eval_prefix.Alternative language? 
Paragraph, sentence, statement, condition, compound, molecule eval_expr eval expr ###Code #export def eval_expr(df, expr, cols=None, sep=None, out='series', info=None, fix=True): # check if evaluated previously if cols: name = expr + out + str(cols) else: name = expr + out info, rows = memory(info, 'eval_expr', name) if rows: return rows if fix: expr = remove_space(expr) expr = insert_external(expr) expr = insert_columns(expr=expr, cols=cols, all_cols=list(df.columns)) #print(expr) final_expr, sub_expressions = break_up(expr) for name, expr in sub_expressions.items(): df[name] = eval_sub(df=df, expr=expr, cols=cols, sep=sep, info=info) result = eval_sub(df=df, expr=final_expr, cols=cols, sep=sep, info=info) # return boolean series person (default) or rows, or pids if out == 'series': result = result.any(level=0) elif out == 'pids': result = set(result.index[result]) info.evaluated['eval_expr'][name] = result return result ###Output _____no_output_____ ###Markdown eval sub (expression) ###Code #export def eval_sub(df, expr, cols=None, sep=None, info=None, fix=True): # check if evaluated previously if cols: name = expr + str(cols) else: name = expr info, rows = memory(info, 'eval_sub', name) if rows: return rows splitwords = 'and or not within after before inside outside'.split() operators = '= > <'.split() # a simple existence expression if not any(word in expr for word in splitwords): print('simple') rows = eval_atom(df=df, expr=expr, sep=sep, info=info) # and/or epression (to do:allow multiple and or without parenthesis?) # shortcut before splitting in sub_expressions: if only and or and not before etc: then can simplify: just split on and or, substitute, evaluate separate, keep parenthesis and do an eval elif (' and ' in expr) or (' or ' in expr): print('and or') rows = eval_and_or(df=df, expr=expr, sep=sep, info = info) # a before/after expression elif any(word in expr for word in [' before ', ' after ', ' simultaneous ']): print('before after') rows = eval_before_after(df=df, condition=expr, sep=sep, info=info, fix=fix) # within, not within expression elif ' within ' in expr: rows = eval_within(df=df, expr=expr, sep=sep, info=info) # an inside expression elif (' inside ' in expr) or (' outside ' in expr): rows = eval_inside_outside(df=df, expr=expr, sep=sep, info=info) # store result for future info.evaluated['eval_sub'][name] = rows return rows ###Output _____no_output_____ ###Markdown (1st g after s) > 201st g after s > 201st g after sg after s eval atom ###Code #export # atoms: [prefix] condition[row_selector] [in columns] def eval_atom(df, expr, cols=None, sep=None, info=None): # precautionary move! expr=expr.strip() # check if evaluated previously if cols: name = expr + str(cols) else: name = expr info, rows = memory(info, 'eval_atom', name) if rows: return rows # check if it has a row selector, execute if so if '[rows:' in expr: row_selection = eval_row_selection(df=df, expr=expr) df=df[row_selection] # delete the row selector after applying it before, after = expr.split('[rows:') expr = before + after[1:] # starting point prefix = None words = expr.split() operator = any(operator in expr for operator in list('=><')) function_call = '(' in expr # is the atom a code based atom? 
example K50.1 # code based atoms must have in or cols if (cols) or ('in' in words): if 'in' in words: codes, cols = expr.split(' in ') cols=cols.strip() else: codes = expr if len(words)>3: prefix, codes = codes.rsplit(' ',1) # handle multiple cols # in icd0, icd1, icd3 or in [icd0, icd1, icd3] if ',' in cols: if cols.startswith('['): cols = cols[1:-1].split(',') cols=[col.strip() for col in cols] # deal with list of codes [K50, K51] if ',' in codes: codes=codes[1:-1].split(',') codes=[code.strip() for code in codes] #expand codes and cols? rows = get_rows(df=df, cols=cols, codes=codes, sep=sep, info=info) # a simple column based atom: example glucose>8 elif (operator) and (not function_call): prefix, expr = expr.rsplit(' ',1) rows=df.eval(expr) # a function based atom: example glucose.cumsum()>100 elif (operator) and (function_call): prefix, expr = expr.rsplit(' ',1) rows = pd.eval(f"df.groupby('pid').{expr}") #alternative? rows = df.groupby(pid).apply(eval, expr)? if prefix: rows = eval_prefix(prefix, rows) # store results for future info.evaluated['eval_atom'][name] = rows return rows df=make_data(1000,letters=10, numbers=5) df.head() #df['prescription_code'] =df.codes.str.split(',', expand=True)[0] #df['ddd']=np.random.randint(1,99, size=len(df)) #df.to_csv('/content/sample_prescriptions.csv') #from google.colab import drive #drive.mount('/content/drive') eval_atom(df=df, expr='E2 in codes', sep=',') ###Output _____no_output_____ ###Markdown eval row selection ###Code #export def eval_row_selection(df, expr, sep=None, info=None): """ example K50[rows:age>20] example min 2 K51[rows: after hip_surgery] df[surgery_rows].... min 2 K51 inside after hip surgery min 2 K51 after hip surgery after age>20? """ # check if evaluated previously info, rows = memory(info, 'eval_row_selection', expr) if rows: return rows row_query = expr.split('[rows:')[1].split[']'][0] statement = any(operator in expr for operator in list('=><')) temporal = (' days ' in expr) positional = (' events ' in expr) or (' event ' in expr) #positional = (expr.startswith('inside ')) or (expr.startswith('outside ')) # statement:'age>20' if statement: rows=df.eval(row_query) # positional: 'inside/outside 5 events before/after/around X' elif positional: rows = create_time_interval(df=df, expr=expr,cols=cols, info=info) # temporal: 'after 1st S4' else: rows = create_time_interval(df=df, expr=expr,cols=cols, info=info) info.evaluated['eval_row_selection'][expr] = rows return rows ###Output _____no_output_____ ###Markdown eval prefix ###Code #export def eval_prefix(prefix, rows): interval=True freq = ['min ', 'max ', 'exactly '] first_last = [' first ', ' last'] ordinal = r'(-?\d+)(st|nd|rd|th)' # re to find and split 3rd into 3 and rd etc rowscum = rows.groupby(level=0).cumsum() # freq condition: min 5 of 4A if any(word in prefix for word in freq): word, num = prefix.split() num = int(num) if 'min' in word: select = (rowscum >= num) elif 'max' in word: # double check! 
n_max = rowscum.max(level=0) select = (n_max <= num) elif 'exactly' in word: select = (rowscum == num) # beteween frequency or between ordinals (1st and 3rd) # note: inclusive between elif 'between ' in prefix: word, lower,_, upper = prefix.split() # interval/positional between 4th and 8th if re.match(prefix, ordinal): lower=int(lower[:-2]) upper=int(upper[:-2]) if lower > 0: aboverows = (rowscum >= lower) else: aboverows = (lastrowscum >= abs(lower)) if upper > 0: belowrows = (rowscum <= upper) else: belowrows = (lastrowscum <= abs(upper)) select = (aboverows & belowrows) # frequency between: between 4 and 8 of 4AB02 else: lower=int(lower) upper=int(upper) select = rowscum.between(lower, upper, inclusive=True) # first, last range conditions: first 5 of 4A elif any(word.strip() in prefix for word in first_last): # regex is better word, num = prefix.split() if '%' not in num: num = int(num) if 'first' in word: select = (rowscum <= num) if 'last' in word: select = (rowscum >= num) # pct condition: first 10% of 4A elif '%' in prefix: n_max = rowscum.groupby(level=0).max() pct = float(num.split(r'%')[0]) / 100 pid_num = n_max * pct # first 1% of two observations includes 1st obs pid_num[pid_num < 1] = 1 if word == 'first': # hmm, generalproblem: drop if pid is missing ... select = (rowscum < pid_num) if word == 'last': select = (rowscum > pid_num) # percentile condition elif ' percentile ' in prefix: event_num = rows.groupby(level=0).cumcount() n_count = rowscum.groupby(level=0).size() num = float(num.split(r'%')[0]) / 100 pid_num = n_count * num if word == 'first': rows = (pid_num < event_num) if word == 'last': rows = (pid_num > event_num) # positional condition: 5th of 4a, 3rd to 8th of 4A, (3rd, 4th, 5th) of 4A # also allows: 2nd last (or even -5th last) elif re.match(ordinal, prefix): pos_str = prefix.rsplit(' ',1)[0].strip('(').strip(')') pos_nums = re.findall(ordinal, pos_str) pos_nums = tuple([int(pos[0]) for pos in pos_nums]) # if the conditions includes last, need reversed cumsum # example 2nd last if ' last ' in pos_str or '-' in pos_str: n_max = rowscum.groupby(level=0).max().add(1) # reversed event number (by id) lastrowscum = (rowscum - n_max).abs() last_flag = 1 else: last_flag = 0 # single position: 5th of 4A if len(pos_nums) == 1: interval = False print('single position') if last_flag: select = (lastrowscum == pos_nums) else: select = (rowscum == pos_nums) # from-to positions: 3rd to 8th of 4A, 1st to -3rd elif ' to ' in pos_str: lower, upper = pos_nums if lower > 0: aboverows = (rowscum >= lower) else: aboverows = (lastrowscum >= abs(lower)) if upper > 0: belowrows = (rowscum <= upper) else: belowrows = (lastrowscum <= abs(upper)) select = (aboverows & belowrows) # list of positions (3rd, 5th, 7th) elif prefix.startswith('('): pos_str = prefix.rsplit(' ',1)[0].strip().strip('(').strip(')') pos_re = ordinal.replace(' ', '') # last condition may have ) i.e. 25th) pos_nums = re.findall(pos_re, pos_str) pos_nums = tuple([int(pos[0]) for pos in pos_nums]) pos_num = [num for num in pos_nums if num > 0] neg_num = [num for num in pos_nums if num < 0] pos_select = rowscum.isin(pos_nums) neg_select = rowscum.isin(pos_nums) select = (pos_select | neg_select) if interval==True: return select else: return rows & select # so far, have marked interval of events for expressions with qualifications # (existence conditions are not intervals). 
# example: First 5 of 4A marks
# all events in the interval between the 1st and 5th of 4A.
# If we only want to pick the 4A events in this interval, we 'and' it with
# the boolean for 4A existence (rows). But sometimes we want to keep and use
# the interval, for instance when the qualifiers are used in before/after
# statements, if the evaluated expression should be returned as 'exact row',
# 'interval row' or pid existence.
###Output _____no_output_____
###Markdown testing prefix
###Code
index = np.random.randint(100, size=1000)
code = np.random.binomial(1, p=0.4, size=1000)
rows = pd.Series(code, index=index).sort_index()
rows
df = rows.to_frame()
df['evaluated_prefix'] = eval_prefix('2nd A', rows=rows)
df.head(30)
###Output _____no_output_____
###Markdown eval and or not
###Code
#export
def eval_and_or(df, expr, sep=None, info=None):
    split_words = [' and ', ' or ']
    for word in split_words:
        expr = expr.replace(word, f'@split@{word}')
    conditions = expr.split('@split@')
    final_expr = ''
    for n, condition in enumerate(conditions):
        # strip the leading connector so only the bare atom remains
        atom = condition.replace(' and ', '').replace(' or ', '')
        name = 'c' + str(n)
        df[name] = eval_atom(df=df, expr=atom, sep=sep, info=info)
        final_expr = final_expr + condition.replace(atom, name)
    rows = df.eval(final_expr)
    return rows
###Output _____no_output_____
###Markdown eval before after
###Code
#export
def eval_before_after(df, condition, cols=None, sep=None, info=None, out='pid', fix=True):
    # check if evaluated previously
    if cols:
        name = condition + out + str(cols)
    else:
        name = condition + out
    info, rows = memory(info, 'eval_before_after', name)
    if rows is not None:
        return rows
    # replace conditions so intervals/multiples become positional
    # example: before first 5 of 4A --> before 5th of 4A
    # background: first 5 is not satisfied until ALL five have passed, while some other conditions are
    # may introduce shortcuts for some easy/common evaluations later (like 4A before 4B, easier than 1st of 4A before 1st of 4B?)
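    # A hypothetical worked example of the cumsum trick used below: for one person
    # whose events are A, ., A, B, before_has_happened = [1, 1, 1, 1] (A is seen
    # from the first event on) and after_has_happened = [0, 0, 0, 1] (B is seen
    # only at the fourth); the per-person sum of the difference is 3 > 0, so A
    # occurred strictly before B for this person.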
# before and after are also different, may exploit this to create shortcuts condition = re.sub(r'last (-?\d+)', r'-\1st', condition) condition = re.sub(r'first (-?\d+)', r'\1st', condition) # todo: also fix percentile --> find position, not first 5 percent before_expr, after_expr = re.split(' before | after | simultaneously ', condition) print(f'{before_expr},{after_expr}') # shortcut if ' simultaneous ' in condition: ...just a row and # check if the left side of the before expression has been calculated #if before in info.before_has_happened: # before_has_happened = info.before_has_happened[before] #else: before_has_happened = eval_atom(df=df, expr=before_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int) # info.before_has_happened[before] = before_has_happened # check if the right side of the before expression have been calculated #if after in info.after_has_happened: # after_has_happened = info.after_has_happened[after] #else: after_has_happened = eval_atom(df=df, expr=after_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int) # info.after_has_happened[after] = after_has_happened both_exist = (before_has_happened.any(level=0)) & (after_has_happened.any(level=0)) if ' before ' in condition: is_it_before = (before_has_happened - after_has_happened).groupby(level=0).sum() endrows = both_exist & (is_it_before > 0) elif ' after ' in condition: is_it_after = (after_has_happened - before_has_happened).groupby(level=0).sum() endrows = both_exist & (is_it_after > 0) elif ' simultaneous ' in condition: difference = (before_has_happened - after_has_happened).groupby(level=0).sum() endrows = both_exist & (difference ==0) #info.evaluated['eval_before_after'][name] = endrows return endrows ###Output _____no_output_____ ###Markdown eval withinExamples: expr= '4AB02 within 100 days after 4AB04' expr= 'min 2 of 4AB02 within 100 days' expr= '4AB02 within 50 to 100 days before 4AB04' expr= '4AB02 within 50 to 100 days before 4AB04' maybe use inside on some? expr= 'min 4 of 4AB02 in ncmp within 100 days' expr= 'min 2 of 4AB02 within last 100 days' expr= 'min 2 of 4AB02 within 100 days from end' expr= 'min 2 of 4AB02 within first 100 days' expr= 'between 2 and 5 of 4AB02 within first 100 days' avoid and? well, just use format and replace it with to? expr= 'min 2 of 4AB02 within 100 days from beginning' expr= 'min 2 of 4AB02 within 1st of 4AB04 to 5th of 4AB04' expr= 'min 2 of 4AB02 within 1st of 4AB06 to 3rd of 4AB04' expr= 'min 2 of 4AB02 within first 20 of 4AB04' expr= '3rd of 4AB02 within first 20 of 4AB04' expr= 'min 2 of 4AB02 within 100 days from 5th of 4AB04' expr = '3 or more of 4ab within 100 days' wstart, wend expr= 'min 4 of 4AB02 in ncmp within 100 days' expr= "min 4 of ncmp=='4AB02' within 100 days" expr= "at least 4 of ncmp=='4AB02' within 100 days" expr= "more than 4 of ncmp=='4AB02' within 100 days" best language expr= "less than 4 of ncmp=='4AB02' within 100 days" best language expr= "between 4 and 7 of ncmp=='4AB02' within 100 days" best language inclusive or exclusive between expr= "5 or more of ncmp=='4AB02' within 100 days" best language expr= "from 4 to 7 of ncmp=='4AB02' within 100 days" best language expr= " 4 to 7 events with 4AB02 within 100 days" best language events problem ... again format? expr= " from 4 to 7 events with 4AB02 within 100 days" best language events problem ... again format? expr= " at least 5 events with 4AB02 within 100 days" best language events problem ... again format? 
expr= " no more than 5 events with 4AB02 in ncmp within 100 days" best language events problem ... again format? expr= 'min 3 of days>3 within 100 days' expr= 'ddd[pharma=='pharmaz'].sum()>97 within 100 days' ###Code #export def eval_within(df, condition, cols=None, sep=None, date='event_date', info=None, out='pid', pid='pid'): """ mark observations that satisfy the conditions in a within statement examples: - expr = 'min 3 4AB within 100 days' - expr = '4AB02 in ncmp within 50 to 100 days before 4AB04 in ncmp' """ # within 100 days after 4AB --> within 0 to 100 days after 4AB if ((' after ' in condition) or (' before ' in condition)) and not (' to ' in condition): condition=condition.replace(' within ', ' within 0 to ') left, right = condition.split(' within ') print(f'within {left}, {right}') # within x to y days (better: between x to y days) # expr='4AB02 in ncmp within 50 to 100 days before 4AB04 in ncmp' if re.match(r'\d+ to \d+ ', right): print(f'within x to y') lower, _, upper, unit, direction, *rightsingle = right.split() if direction == 'around': condition = condition.replace(' around ', ' after ') after_prows = eval_within(df=df, condition=condition, cols=cols, sep=sep, info=info) condition = condition.replace(' after ', ' before ') before_prows = eval_within(df=df, condition=condition, cols=cols, sep=sep, info=info) endprows = (after_prows) | (before_prows) return endprows rightsingle = " ".join(rightsingle) lower = int(lower) upper = int(upper) lrows = eval_atom(df=df, expr=left, cols=cols, sep=sep, info=info) rrows = eval_atom(df=df, expr=rightsingle, cols=cols, sep=sep, info=info) pid_change = ((df[pid] - df[pid].shift()) != 0) rdates = df[date].where(rrows == 1, np.datetime64('NaT')) # if not have a date assign one to avoid ffill from person above # risky? rdates[(pid_change & ~rrows)] = np.datetime64('2100-09-09') if direction == 'after': rdates = rdates.fillna(method='ffill') # hmmm must use groupby here? or can it be avoided? inseret a 999 when pid change and fill it with nan after ffill? elif direction == 'before': rdates = rdates.fillna(method='bfill') rdates = rdates.where(rdates != np.datetime64('2100-09-09'), np.datetime64('NaT')) # allow other time units, within 5 seconds etc if unit == 'days': delta = (df[date] - rdates) / np.timedelta64(1, 'D') else: # add s if it is not there? like 1 day, 1 second? delta = (df[date] - rdates) delta = getattr(delta.dt, unit) if direction == 'before': delta = delta.abs() within = (delta >= lower) & (delta <= upper) endrows = (lrows & within) # nb, frequency conditions not work here I think: min 3 x within 10 to 100 days before S cpid = endrows.any(level=0) # pure within statements have few elements to the right # example min 2 4AB within 100 days elif len(right.split()) < 3: print(f'within x days') if ' in ' in left: word, num, codes, _, cols = left.split() rows = get_rows(df=df, codes=codes, cols=cols, sep=sep, info=info) # 'sum(days)>15 within 100 days' or 'min 5 of ddd>200 within 100 days' # expr='sum(days)>15 within 100 days' elif re.search('[>=<]', left): if 'sum(' in left: # may want to create smaller dataframe first, if possible? focus on only some variable, columns, rows? 
sub = df.set_index(date) # assume column date exist, should also drop rows with no time col, operator = left.split(')') col = col.replace('sum(', '').strip(')') threshold, unit = right.split() if unit == 'days': unit = 'D' eval_text = f"(sub.groupby('pid')['{col}'].rolling('{threshold}{unit}').sum()){operator}" rows = pd.eval(eval_text, engine='python') cpid = rows.any(level=0) return cpid # 'min 5 ddd>200 within 100 days' else: word, num, codes = left.split() rows = df.eval(codes) # so far no sumsum etc, only 4 events with sugar_level>10 within 100 days # code expression not quantity expression # example: min 3 G2 within 100 days else: word, num, codes = left.split() cols = cols rows = get_rows(df=df, codes=codes, cols=cols) threshold, unit = right.split() threshold = int(threshold) num = int(num) if word == 'max': num = num + 1 # may need to use expand cols to get the cols (not use cols expression here if it starred) sub = df[date][rows].dropna().to_frame() sub.columns = ['date'] sub['pid'] = sub.index sub['shifted_date'] = sub['date'].shift(-num) sub['shifted_pid'] = sub['pid'].shift(-num) sub['diff_pid'] = (sub['pid'] - sub['shifted_pid']) sub['same_pid'] = np.where(sub.diff_pid == 0, 1, 0) sub = sub[sub.same_pid == 1] # sub['shifted_date'] = sub['date'].groupby(level=0).shift(int(num)) # sub['shifted_pid'] = sub['pid'].groupby(level=0).shift(int(num)) # todo: allow for different units here, months, weeks, seconds etc sub['diff_days'] = (sub['shifted_date'] - sub['date']) / np.timedelta64(1, 'D') # sub[sub.same_pid == 1]['diff_days'].dropna()/np.datetime64(1, 'D') if word == 'min': endrows = (sub['diff_days'] <= threshold) cpid = endrows.any(level=0) elif word == 'max': # n = df.index.nunique() endrows = (sub['diff_days'] <= threshold) cpid = ~endrows.any(level=0) # revise max and exactly elif word == 'exactly': endrows = (sub['diff_days'] <= threshold) n_max = endrows.groupby(level=0).sum() endrows = n_max == threshold cpid = endrows.any(level=0) # #todo (also need to change parsing then ...) # elif word=='between': # endrows=(sub['diff_days']<=threshold) # n_max = endrows.groupby(level=0).sum() # endrows = n_max == threshold # cpid = endrows.any(level=0) return cpid ###Output _____no_output_____ ###Markdown eval inside outside ###Code #export def eval_inside_outside(df, condition, cols=None, sep=None, pid='pid', info=None): """ mark observations that satisfy the conditions in an inside/outside statement 'inside/within/outside 5 events days before/after/around x' examples: - expr = 'X inside 4 events after Y' - expr = 'X inside 4 events after each Y' - expr = 'always X inside 4 events after each Y' - expr = 'always X inside 4 events after a Y' - expr = 'X inside 3 events around Y' - expr = 'X outside 5 events after Y' - expr = 'no X before last 5 events' - hmm this is before after, not inside? - expr = 'no X inside 5 events before Y' - expr = 'min 3 4AB inside last 5 events' - special - expr = 'X inside 1st and 5th Y' (between is better?) 
-special - expr = 'X inside 2 events before and 8 events after Y' - special - expr = 'min 2 X inside 4 events after Y' - expr = 'X inside 4 events after min 3 Y' - expr = 'X inside 4 to 7 events after min 3 Y' """ # some horrible parsing if ' not ' in expr: pre, negate, post = expr.partition(' not ', expr) post='not ' + post else: pre, post = re.split(' inside | outside ', expr) if ' inside ' in expr: post = 'inside ' + post else: post = 'outside ' + post # mark relevant rows inside_rows = create_position_interval(df=df, expr=post, sep=sep, info=info) first_atom = eval_atom(df=df, expr=pre, sep=sep, info=info) endrows = inside_rows & first_atom # todo: warning, frequency conditions not work (at least not as expected) # todo: make a frequency version work? different keywords/sytax? groupby? return endrows #expr='G4 in codes within 1 to 800 days around 3rd G2 in codes' #a=eval_within(df=df, condition=expr, cols='codes', sep=',') #df['a'] = a #a.sum() df.head() ###Output _____no_output_____ ###Markdown row selectors Examples- after/before s- after 3rd 3s- between 5th and 6th s- within last 5 events- within 100 days after s- within 100 days after 2nd s- within 50 days around 3rd s- after min 3 s- after glucose- (pharma x after s) and (pharma y before pharma z)- g after 1st s >20- x before y after 2nd s- x before (y after 2nd s)- (x before y) after 2nd s- after 2nd s: x before y- x before (y before z)- x before (y and z) and (y before z)- (x before y) and (y before z)- (x before y) after z- x before y before z before qbefore after statemetns have to be solved from right to left? ###Code #def select_before_after(df, expr, sep=None, info=None): # word, atom = expr.split(' '.1) # rows = eval_atom(df=df, expr=expr. sep=sep, info=info) #export def eval_before_after(df, condition, cols=None, sep=None, info=None, out='pid', fix=True): # check if evaluated previously if cols: name = condition + out + str(cols) else: name = condition + out info, rows = memory(info, 'row_eval', name) if rows: return rows # replace conditions so intervals/multiples becomes positional # example: before first 5 of 4A --> before 5th of 4A # background: first 5 is not satisfied until ALL five have passed, while some other conditions are # may introduce shortcuts for some easy/common evaluations later (like 4A before 4B, easier than 4st of 4A before 1st of 4B?) 
# before and after are also different, may exploit this to create shortcuts condition = re.sub(r'last (-?\d+)', r'-\1st', condition) condition = re.sub(r'first (-?\d+)', r'\1st', condition) # todo: also fix percentile --> find position, not first 5 percent before_expr, after_expr = re.split(' before | after | simultaneously ', condition) print(f'{before_expr},{after_expr}') # shortcut if ' simultaneous ' in condition: ...just a row and # check if the left side of the before expression has been calculated #if before in info.before_has_happened: # before_has_happened = info.before_has_happened[before] #else: before_has_happened = eval_atom(df=df, expr=before_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int) # info.before_has_happened[before] = before_has_happened # check if the right side of the before expression have been calculated #if after in info.after_has_happened: # after_has_happened = info.after_has_happened[after] #else: after_has_happened = eval_atom(df=df, expr=after_expr, cols=cols, sep=sep, info=info).groupby(level=0).cumsum().astype(bool).astype(int) # info.after_has_happened[after] = after_has_happened both_exist = (before_has_happened.any(level=0)) & (after_has_happened.any(level=0)) if ' before ' in condition: is_it_before = (before_has_happened - after_has_happened).groupby(level=0).sum() endrows = both_exist & (is_it_before > 0) elif ' after ' in condition: is_it_after = (after_has_happened - before_has_happened).groupby(level=0).sum() endrows = both_exist & (is_it_after > 0) elif ' simultaneous ' in condition: difference = (before_has_happened - after_has_happened).groupby(level=0).sum() endrows = both_exist & (difference ==0) info.evaluated['eval_before_after'][name] = endrows return endrows ###Output _____no_output_____ ###Markdown eval selector prefix ###Code #export def eval_selector_prefix(df, prefix, sep=None, cols=None, date='date', info=None): prefix=prefix.strip() words = prefix.split() # before last 5 events if ' event' in prefix: prefix = prefix.replace('events', 'event') #ad hoc, works with little code, but slow, optimize later df['event'] = 'e' prefix = prefix.replace('event', 'e in event') # just before if prefix=='before': print(prefix) rows=~(df['rowscum']>0) # just after elif prefix=='after': print(prefix) rows=df['rowscum']>0 # 100 days before elif words[1]=='days': days, _, direction = prefix.split() rows=mark_days(df=df, max_days=days, direction=direction, date=date, info=info) # 3 events after elif (words[1]=='events') or (words[1]=='event'): days, _, direction = prefix.split() rows=mark_events(rows=df['atom_rows'], max_event=days, direction=direction, info=info) # 50 to 100 days after elif ('to' in words) and ('days' in words): min_days, _, max_days, unit, direction = prefix.split() rows = mark_days(df=df, min_days=min_days, max_days=max_days, direction=direction) # 2 to 5 events after elif ('to' in words) and (words[1]=='events') or (words[1]=='event'): min_events, _, max_events, unit, direction = prefix.split() rows = mark_events(rows=df['atom_rows'], min_events=min_days, max_events=max_days, direction=direction) return rows ###Output _____no_output_____ ###Markdown create time interval ###Code #export def create_time_intervals(df, expr, cols=None, sep=None, date='date', info=None): """ expr='before 4AB04 in ncmp' expr='100 days before 4AB04 in ncmp' expr='50 to 100 days after 4AB04 in ncmp' expr='5 to 10 events after 4AB04 in ncmp' expr='50 days around 4AB04 in ncmp' expr='5 events around 4AB04 in ncmp' #next 
x events, inside 1 event after expr='before 3rd 4AB04 in ncmp' expr='100 days before last event' expr='between 3rd s in cod and 7th b in cod' expr='before age>20' expr='a pd statement' age >20 expr='100 days before last event' expr='inside last 5 events' create_time_intervals(df=df, expr=expr) """ original = expr words = expr.split() expr = re.sub(r'last (-?\d+)', r'-\1st', expr) expr = re.sub(r'first (-?\d+)', r'\1st', expr) # todo: also fix percentile --> find position, not first 5 percent if any(word in words for word in 'before after around'.split()): splitted = re.split(r'\bbefore\b|\bafter\b|\bsimultaneously\b|\baround\b', expr) atom = splitted[-1] prefix='' for word in words: prefix = prefix + ' ' + word if word in 'before after sametime around'.split(): break print(atom, prefix) atom_rows = eval_atom(df=df, expr=atom, cols=cols, sep=sep, info=info) rowscum = atom_rows.groupby(level=0).cumsum() df['atom_rows'] = atom_rows df['rowscum'] = rowscum rows = eval_selector_prefix(df=df, prefix=prefix, sep=sep, cols=cols, date=date, info=info) return rows ###Output _____no_output_____ ###Markdown mark days ###Code #export # mark days (ex 'within 22 days before/after x') def mark_days(df, direction, max_days, min_days=0,inside=True, date='date', info=None): """ mark days (ex 'inside/within/outside 22 days before/after/around x') """ df['reference_event'] = np.where(df['atom_rows'] == 1, df[date], np.datetime64('NaT')) min_days=float(min_days) max_days=float(max_days) if direction=='around': rows_before = mark_days(df=df, max_days=max_days, direction='before', date=date, inside=True) rows_after = mark_days(df=df, max_days=max_days, direction='after', date=date, inside=True) rows = rows_before | rows_after if not inside: rows = ~rows return rows elif direction=='before': df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='bfill') elif direction =='after': df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='ffill') event_diff = (df[date] - df['reference_event']).dt.days.abs() # inside 50 to 100 days before x if inside: rows = event_diff.between(min_days, max_days) # outside 50 to 100 days before x else: rows = ~(event_diff.between(min_days, max_days)) return rows ###Output _____no_output_____ ###Markdown create position interval ###Code #export # mark rows (ex 'within 5 events before/after x') def create_position_interval(rows, expr, info=None): """ mark events (ex '(not,always, never) inside/within/outside 5 events before/after/around x') """ df = rows.to_frame() df.columns = ['reference_event'] df['hard_way']=1 df['event_num'] = df.groupby(level=0)['hard_way'].cumsum() pre, post = re.split(' inside | outside ', expr) # to do: handle pre if it exists ... 
# maybe only handle not (never and always is higher level)
    last_atom = re.split(' before | after | around', post)[-1]
    last_rows = eval_atom(df=df, expr=last_atom, cols=cols, sep=sep, info=info)
    inside_statement = post.replace(last_atom, '')
    # inside 2 to 5 events before x
    # to do: validate, since this should not work if the direction is around
    if ' to ' in inside_statement:
        in_or_out, min_events, _, max_events, direction = inside_statement.split()
    # inside 5 events before x
    else:
        in_or_out, max_events, direction = inside_statement.split()
        min_events = 0
    if in_or_out in ('inside', 'between', 'not outside'):
        inside = True
    else:
        inside = False
    min_events = int(min_events)
    max_events = int(max_events)
    rows = mark_events(rows=last_rows, direction=direction, max_events=max_events,
                       min_events=min_events, inside=inside, info=info)
    return rows
###Output _____no_output_____
###Markdown mark events
###Code
#export
# mark rows (ex 'within 5 events before/after')
def mark_events(rows, direction, max_events, min_events=0, inside=True, info=None):
    """
    mark events (ex '(not, always, never) inside/within/outside 5 events before/after/around x')
    """
    df = rows.to_frame()
    df.columns = ['reference_event']
    df['hard_way'] = 1
    df['event_num'] = df.groupby(level=0)['hard_way'].cumsum()
    min_events = int(min_events)
    max_events = int(max_events)
    if direction == 'around':
        rows_before = mark_events(rows=rows, max_events=max_events, direction='before')
        rows_after = mark_events(rows=rows, max_events=max_events, direction='after')
        rows = rows_before | rows_after
        if not inside:
            rows = ~rows
        return rows
    # keep the event number at reference events only, then fill towards the
    # nearest reference event (the next one for 'before', the previous one for 'after')
    df['reference_event'] = df['event_num'].where(df['reference_event'] == 1)
    if direction == 'before':
        df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='bfill')
    elif direction == 'after':
        df['reference_event'] = df['reference_event'].groupby(level=0).fillna(method='ffill')
    event_diff = (df['event_num'] - df['reference_event']).abs()
    # inside 50 to 100 events before x
    if inside:
        rows = event_diff.between(min_events, max_events)
    # outside 50 to 100 events before x
    else:
        rows = ~(event_diff.between(min_events, max_events))
    return rows
df = make_data(1000, letters=10, numbers=5)
df.head()
df['date'] = df.event_date
df = df.sort_values(['pid', 'date'])
df.head()
row = eval_atom(df=df, expr='B2 in codes', sep=',')
row.head()
eval_row_selection(df=df, expr='2 events before I2 in codes', sep=',')
expr='before 4AB04 in ncmp'
expr='100 days before 4AB04 in ncmp'
expr='50 to 100 days after 4AB04 in ncmp'
expr='before 3rd 4AB04 in ncmp'
expr='100 days before last event'
expr='after 1st D3 in codes'
expr='100 days before D3 in codes'
expr='500 days before H3 in codes'
expr='900 days around H3 in codes'
expr='100 to 600 days after H3 in codes'
# interesting problem: what if something is 150 days from one H3, but also 50 days from another H3?
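# Note on the question above: mark_days compares each row with the nearest
# matching event in the given direction, and 'around' OR-s the before- and
# after-windows; so being 150 days from one H3 does not prevent a match at
# 50 days from another H3.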
df['around']=create_time_intervals(df=df, expr=expr, sep=',') df.head(50) df df['atom_rows']=eval_atom(df=df, sep=',', expr='H4 in codes') df['atom_rows'] df.groupby(level=0).atom_rows.cumsum()<1 create_time_intervals(df=df, expr=expr, sep=',') expr='before D3 in codes' x = re.split(r'\bbefore\b|\bafter\b|\bsimultaneously\b|\baround\b', expr) x df.head() df=df.sort_values(['pid', 'event_date']) df['date'] = df.event_date df.head() count_persons(df=df, expr='min ?[10, 20] G2 in codes', sep=',') df.groupby('pid').size().value_counts().sort_index() #export def before_after(before, after, condition, info=None): before_has_happened = before.groupby(level=0).cumsum().astype(bool).astype(int) after_has_happened = after.groupby(level=0).cumsum().astype(bool).astype(int) both_exist = (before_has_happened.any(level=0)) & (after_has_happened.any(level=0)) if condition=='before': is_it_before = (before_has_happened - after_has_happened).groupby(level=0).sum() endrows = both_exist & (is_it_before > 0) elif condition=='after': is_it_after = (after_has_happened - before_has_happened).groupby(level=0).sum() endrows = both_exist & (is_it_after > 0) elif condition == 'same time': difference = (before_has_happened - after_has_happened).groupby(level=0).sum() endrows = both_exist & (difference ==0) #expand to fit df length? endrows =endrows.reindex(index=before.index) #expand_endrows['result'] = endrows #endrows = expand_endrows['result'] return endrows index = np.random.randint(100, size=1000) code1 = np.random.binomial(1, p=0.8, size=1000) code2 = np.random.binomial(1, p=0.1, size=1000) df=pd.DataFrame(code1, index=index).sort_index() df['col2'] = code2 df.columns =['col1', 'col2'] df.head() col1=df.col1 col2=df.col2 condition='before' df['before'] = before_after(col1, col2, condition) condition='after' df['after'] = before_after(col1, col2, condition) df.head(50) df=make_data(1000,letters=10, numbers=5) df.head() ###Output _____no_output_____ ###Markdown count persons ###Code #export def count_persons(df, expr, cols=None, sep=None, codebook=None, info=None, use_caching=True, insert_cols=True, fix=True): """ count persons who satisfy the conditions in an expression examples expr = 'K50* in icd' expr = 'K50* before K51*' expr = '(K50 or K51) and K52' expr = 'min 3 of glucose>8 within 100 days' expr = '3rd of 4AB04 in ncmp before 3th of 4AB02 in ncmp' """ expr = remove_space(expr) expr = insert_external(expr) exprs = get_expressions(expr) count = {} for expr in exprs: rows = eval_expr(df=df, expr=expr, cols=cols, sep=sep, info=info) count[expr] = rows.any(level=0).sum() # return only number if only one expression if len(count) == 1: return count[expr] return count count_persons(df=df, expr='G3 in codes before I2 in codes', sep=',') count_persons(df=df, expr='first 5 G3 before 3th G2', cols='codes', sep=',') count_persons(df=df, expr='min 5 G3 before 3th G2', cols='codes', sep=',') eval_atom(df=df, expr='1st G2 in codes', sep=True) insert_columns('G2', cols='codes') df.head() count_persons(df=df, cols='codes', expr='B2 before H2', sep=',') df.codes.str.contains('G2', na=False).any(level=0) get_rows(df=df, codes='G2', cols='codes').any(level=0).sum() expr = "?['4AB02', '4AB04'] in ncmp" expr = '4AB02 in ncmp and 4AB04 in ncmp' expr = 'min 10 of 4AB02 in ncmp' expr = 'min ?[4,5,6] of 4AB02 in ncmp' expr = 'min 6 of 4AB02 in ncmp' expr = 'min 10 of 4AB02 in ncmp' expr = 'min ?[6,8] of 4AB02 in ncmp' expr = '1st of 4AB02 in ncmp' expr = '2nd of 4AB02 in ncmp' expr = '4AB02 in ncmp before 4AB04 in ncmp' expr = 
'4AB04 in ncmp before 4AB02 in ncmp' expr = '4AA23 in ncmp before 4AB02 in ncmp' expr = 'max 2 of 4AB02 in ncmp before 4AB04 in ncmp' expr = 'max 2 of 4AB02 in ncmp' # should include zero ? expr = 'min ?[1,2,3,4] of 4AB02 in ncmp' expr = 'max ?[1,2,3,4] of 4AB02 in ncmp' # should include zero ? expr = 'min 2 of days>4' expr = 'min 8 of days>6' expr = 'min 3 of 4AB02 in ncmp within 200 days' %time count_p(df=df, expr=expr, cols=None, codebook=None, info=None, sep=',') %time count_p(df=df, expr=expr, cols=None, codebook=None, info=info) def count_persons(df, codes=None, cols=None, pid='pid', sep=None, normalize=False, dropna=True, group=False, merge=False, length=None, groupby=None, codebook=None, fix=True): """ Counts number of individuals who are registered with given codes Allows counting across multiple columns and multiple codes in the same cells. For instance, there may be 10 diagnostic codes for one event (in separate columns) and in some of the columns there may be more than one diagnostic code (comma separated) and patient may have several such events in the dataframe. args: codes (str, list or dict): Codes to be counted. Star and hyphen notations are allowed. A dict can be used as input to merge codes into larger categories before counting. The key is the name of the category ('diabetes') and the value is a list of codes. Examples: codes="4ABA2" codes="4AB*" codes=['4AB2A', '4AB4A'] codes = {'diabetes' = ['34r32f', '3a*']} cols (str or list): The column(s) with the codes. Star and colon notation allowed. Examples: cols = 'icdmain' cols = ['icdmain', 'icdside'] # all columns starting with 'icd' cols = ['icd*'] # all columns starting with 'icd' # all columns including and between icd1 and icd10 cols = ['icd1:icd10'] pid (str): Column name of the personal identifier sep (str): The code seperator (if multiple codes in the same cells) normalize (bool, default: False): If True, converts to pct dropna (bool, default True): Include counts of how many did not get any of the specified codes length (int): If specified, will only use the number of characters from each code as specified by the length parameter (useful to count codes at different levels of granularity. 
For example, sometimes oe wants to look at how many people get detailed codes, other times the researcher wants to know only how many get general atc codes, say the first four characters of the atc) Examples >>> df.atc.count_persons(codes='4AB04') >>> df.atc.count_persons(codes='4AB04', dropna=False, normalize=True) >>> df.atc.count_persons(codes=['4AB*', '4AC*']) >>> df.atc.count_persons(codes=['4AB*', '4AC*'], group=True) >>> df.atc.count_persons(codes=['4AB*', '4AC*'], group=True, merge=True) >>> df.count_persons(codes={'adaliamumab':'4AB04'}, cols='ncmp', sep=',', pid='pid') >>> df.count_persons(codes='4AB04', cols='ncmp', groupby=['disease', 'cohort']) >>> df.groupby(['disease', 'cohort']).apply(count_persons, cols='ncmp', codes='4AB04', sep=',') """ sub = df sub, cols = to_df(df=sub, cols=cols) cols = expand_cols(df=sub, cols=cols) if normalize: sum_persons = sub[pid].nunique() # if an expression instead of a codelist is used as input if isinstance(codes, str) and codes.count(' ') > 1: persons = use_expression(df=sub, expr=codes, cols=cols, sep=sep, out='persons', codebook=codebook, pid=pid) if normalize: counted = persons.sum() / len(persons) else: counted = persons.sum() # if codes is a codelist (not an expression) else: if fix: if not codes: counted = count_persons_all_codes(df=sub, cols=cols, pid=pid, sep=sep, normalize=normalize, dropna=dropna, length=length, groupby=groupby) return counted # if some codes are specified, expand and format these, and reduce the df to the relevant codes else: # expands and formats columns and codes input codes, cols, allcodes, sep = fix_args(df=sub, codes=codes, cols=cols, sep=sep, group=group, merge=merge) rows = get_rows(df=sub, codes=allcodes, cols=cols, sep=sep, fix=False) if not dropna: sum_persons = df[pid].nunique() sub = sub[rows].set_index(pid, drop=False) # unsure if this is necessary, may drop it. Requred if method on a series? well not as long as we add pid column and recreate a series as a df # make a df with the extracted codes code_df = extract_codes(df=sub, codes=codes, cols=cols, sep=sep, fix=False, series=False) labels = list(code_df.columns) counted = pd.Series(index=labels) # maybe delete groupby option, can be done outside df.groupby. apply ... if groupby: code_df = code_df.any(level=0) sub_plevel = sub.groupby(pid)[groupby].first() code_df = pd.concat([code_df, sub_plevel], axis=1) # outer vs inner problem? code_df = code_df.set_index(groupby) counted = code_df.groupby(groupby).sum() else: for label in labels: counted[label] = code_df[code_df[label]].index.nunique() if not dropna: with_codes = code_df.any(axis=1).any(level=0).sum() # surprisingly time consuming? 
nan_persons = persons - with_codes counted['NaN'] = nan_persons if normalize: counted = counted / sum_persons else: counted = counted.astype(int) if len(counted) == 1: counted = counted.values[0] return counted eval_atom(df=df, expr='B1-B5 in codes', sep=',') def eval_condition(expr): @lru_cache() def get_conditions(expr): split_on = [' or ', ' and '] split_rep = ' @split@ ' for split_word in split_on: expr = expr.replace(split_word, split_rep) conditions = expr.split(split_rep) conditions = [condition.strip('(').strip(')') for condition in conditions] return conditions expr = 'max 2 of 4AB0552 before 4AB04' _insert_columns(expr, 'icd') ###Output _____no_output_____ ###Markdown evaluating atomic expressions ###Code def eval_single(df, condition, cols=None, sep=None, codebook=None, out='pid', info=None): """ evaluates a single expressions (1st 4A), not relational conditions (A before B, within 100 days after etc) condition ='first 5 of 4AB02 in ncmp' condition ='min 2 of days>10' condition ='ddd>10' condition ='ddd[4AB02 in codes]>10' condition ='ddd[4AB02 in codes].cumsum()>50' condition ='sum(ddd[4AB02 in codes])>50' a=eval_single(df=npr, condition=condition, sep=',') todo: info bank problems after allowing code selections? """ # create temporary storage to avoid recalculations if not info: info = Info() original_condition = condition # no re-evaluation necessary if it it has been evaluated before if out == 'pid' and (condition in info.single_pid): return info.single_pid[condition] elif out == 'rows' and (condition in info.single_rows): return info.single_rows[condition] elif out == 'interval' and (condition in info.single_interval): return info.single_interval[condition] quantity = r'[>=<]' # better to use term comparison freq = ['min ', 'max ', 'exactly '] first_last_between = [' first ', ' last ', ' between '] ordinal = r'(-?\d+)(st |nd |rd |th )' # re to find and split 3rd into 3 and rd etc row_selection = '' # select sub df if specified by [] after a code if ('[' in condition) and (']' in condition): row_query = condition.split('[')[-1].split(']')[0] row_selection = row_query # check if evaluated before if row_query in info.single_rows: rows = info.single_rows[row_query] else: condition = condition.replace(f'[{row_query}]', '') if ' in ' in row_query: row_query = row_query.replace(' in ', ' in: ') # using old use_expresssion wich requires in with colon relevant_rows = use_expression(df=df, cols=cols, expr=row_query, sep=sep) info.single_rows[row_query] = relevant_rows df = df[relevant_rows] # is it a functional expression? ddd.cumsum()>10 # expr="ddd.cumsum()>10" # condition=expr # expr='gender.nunique()==1' # hmm what about properties like .is_monotonic? (no parenthesis!) # if ('.' in condition) and ('(' in condition) and (')' in condition): # still imperfect ... a code could also be a column name ... ok usually not also with a period mark in column name so ok if ('.' 
in condition) and (condition.split('.')[0] in df.columns): codetext = condition codes = re.split('[<=>]', condition)[0] if codes in info.single_rows: rows = info.single_rows[codes] # not evaluated before, so calc else: cols, funcexpr = condition.split('.') # a method if '(' in funcexpr: func, threshold = funcexpr.split(')') func, args = func.split('(') rows = pd.eval(f"tmpdf.groupby(['pid'])['{cols}'].transform('{func}', {args}) {threshold}", engine='python') # an attribute (like is_monotonic) else: rows = pd.eval(f"tmpdf.groupby(['pid'])['{cols}'].transform(lambda x: x.{funcexpr})", engine='python') info.single_rows[codes] = rows # if it is a simple quantiative conditions (oxygen_level>20) elif re.search(quantity, condition): codetext = condition codes = condition.split()[-1] # code condition always last hmm unnecessary # check if evaluated before if codes in info.single_rows: rows = info.single_rows[codes] # not evaluated before, so calc else: # sum(glucose_level)>10 # if this, then may skip further processing? # well: 1st sum(glucose)>20 ok makes sense, maybe # but not: max 5 of sum(glucose)>20 ... well maybe # first 5 of sum(glucose)>20 # if the modifiers does not make sense, the sum might be in the # list of other modifiers i.e. first 5, 3rd etc and not a # pre-modifier when finding rows (which allows skipping) # complex quantitative expression: sum(glucose_level)>10 # better, more flexible ...: glucose.sum()>10 ... can make any function work, and can pass arguments if 'sum(' in codes: # can use ddd.cumsum() now, keep this to double check col, operator = codes.split(')') col = col.replace('sum(', '').strip(')') eval_text = f"df.groupby(df.index)['{col}'].cumsum(){operator}" rows = pd.eval(eval_text, engine='python').fillna(False) # is fillna false better than dropna here? # simple quantitative expression: glucose_level)>10 else: rows = df.eval(codes).fillna(False) codecols = codes info.single_rows[codecols] = rows # code expression (involving a code, not a quantitative expressions else: codetext, incols = condition.split(' in ') codes = codetext.split()[-1].strip() # codes always last in a simple string after cutting 'in cols' if incols.strip() == '': cols = cols else: cols = incols codecols = codes + ' in ' + cols + ' row ' + row_selection # cannot use just codes to store rows since same code may be in different columns, so need to include col in name when storing # If conditions is about events in general, create an events column if (' event ' in codes) or (' events ' in codes): rows = pd.Series(True, index=df.index).fillna(False) codecols = ' event ' # not a quantitative condition or an event conditions, so it is a code condition else: if codecols in info.rows: rows = info.rows[codecols] else: # cols = expand_cols(df=df, cols=cols) # expanded_codes = expand_codes(df=df, codes=codes, cols=cols, sep=sep) # allcodes=_get_allcodes(expanded_codes) # rows = get_rows(df=df, codes=allcodes, cols=cols, sep=sep, fix=False) rows = use_expression(df=df, expr=codes + ' in:' + cols, sep=sep) info.rows[codecols] = rows # is there a prefix to the conditions? if not, isolated condition, just return rows # if not, start preparing for calculating conditions with qualifiers # todo: quite messy! refactor: one function to evluate the code/expression itself, another to evalute the qualifier? 
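    # Hypothetical examples of the branch below: condition='4AB02 in ncmp' has a
    # bare codetext ('4AB02', no spaces), so its rows are returned directly; for
    # condition='min 5 of 4AB02 in ncmp' the codetext is 'min 5 of 4AB02', so
    # evaluation continues with the qualifier logic further down.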
if ' ' not in codetext.strip(): # remember answer info.single_rows[codecols] = rows info.rows[codecols] = rows if out == 'pid': endrows = rows.groupby(level=0).any() info.single_pid[codecols] = endrows info.pid[codecols] = endrows else: endrows = rows return endrows # calculate and remember cumsum per person # use previous calculation if exist if codes in info.cumsum: rowscum = info.cumsum[codes] else: rowscum = rows.groupby(level=0).cumsum() info.cumsum[codecols] = rowscum ## if not a simple existence condition, it must be one of the conditions below # positional condition: 5th of 4a, 3rd to 8th of 4A, (3rd, 4th, 5th) of 4A # also allows: 2nd last (or even -5th last) if re.match(ordinal, codetext): pos_str = condition.split('of ')[0].strip().strip('(').strip(')') # pos_re = ordinal.replace(' ', '[ )]|') # last condition may have ) i.e. 25th) pos_re = ordinal.replace(' ', '') # last condition may have ) i.e. 25th) pos_nums = re.findall(pos_re, pos_str) pos_nums = tuple([int(pos[0]) for pos in pos_nums]) # if the conditions includes last, need reversed cumsum if ' last ' in pos_str or '-' in pos_str: n_max = rowscum.groupby(level=0).max().add(1) # reversed event number (by id) lastrowscum = (rowscum - n_max).abs() last_flag = 1 else: last_flag = 0 # single position: 5th of 4A if len(pos_nums) == 1: if last_flag: select = (lastrowscum == pos_nums) else: select = (rowscum == pos_nums) # from-to positions: 3rd to 8th of 4A, 1st to -3rd elif ' to ' in pos_str: lower, upper = pos_nums if lower > 0: aboverows = (rowscum >= lower) else: aboverows = (lastrowscum >= abs(lower)) if upper > 0: belowrows = (rowscum <= upper) else: belowrows = (lastrowscum <= abs(upper)) select = (aboverows & belowrows) # list of positions (3rd, 5th, 7th) elif pos_str.strip().startswith('('): pos_num = [num for num in pos_num if num > 0] neg_num = [num for num in pos_num if num < 0] if pos_num: pos_select = rowscum.isin(pos_nums) if neg_num: neg_select = rowscum.isin(pos_nums) select = (pos_select | neg_select) # freq condition: min 5 of 4A elif any(word in codetext for word in freq): word, num, _, codes = codetext.split() num = int(num) if 'min' in word: select = (rowscum >= num) elif 'max' in word: # doublecheck! n_max = rowscum.max(level=0) select = (n_max <= num) elif 'exactly' in word: select = (rowscum == num) # first, last range conditions: first 5 of 4A elif any(word.strip() in condition for word in first_last_between): # regex is better word, num, _, codes = codetext.split() if '%' not in num: num = int(num) if 'first' in word: select = (rowscum <= num) if 'last' in word: select = (rowscum >= num) # if pct condition: first 10% of 4A elif '%' in codetext: n_max = rowscum.groupby(level=0).max() pct = float(num.split(r'%')[0]) / 100 pid_num = n_max * pct # first 1% of two observations includes 1st obs pid_num[pid_num < 1] = 1 if word == 'first': # hmm, generalproblem: drop if pid is missing ... select = (rowscum < pid_num) if word == 'last': select = (rowscum > pid_num) # percentile condition elif ' percentile ' in codetext: event_num = rows.groupby(level=0).cumcount() n_count = rowscum.groupby(level=0).size() num = float(num.split(r'%')[0]) / 100 pid_num = n_count * num if word == 'first': rows = (pid_num < event_num) if word == 'last': rows = (pid_num > event_num) # so far, have marked interval of events for expressions with qualifications # (existence conditions are not intervals). 
example: First 5 of 4A, markes # all events in the interval between the 1st and 5th of 4A # if we only want to pick the 4A events in this intereval, we and it with # the boolena for 4A existence (row). But sometimes we want to keep and use # the interval. For instance when the qualifiers are used in before/after # statements if the evaluated expression should be returned as 'exact row', # 'interval row' or pid existence # store and return results if out == 'pid': endrows = (rows & select) endrows = endrows.any(level=0) info.single_pid[original_condition] = endrows info.single_rows[original_condition] = rows elif out == 'interval': endrows = select info.interval[original_condition] = endrows elif out == 'rows': endrows = (rows & select) info.single_rows[original_condition] = endrows return endrows ###Output _____no_output_____ ###Markdown get inpatient data To test the functions and to calculate the Charslon index we need some data. Here we will use data on hospital visits from Medicare: ###Code # Use pandas import pandas as pd # Read synthetic medicare sample data on inpatient hospital stays path = 'https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/SynPUFs/Downloads/' inpatient_file = 'DE1_0_2008_to_2010_Inpatient_Claims_Sample_1.zip' inpatient = pd.read_csv(path+inpatient_file) inpatient.columns = inpatient.columns.str.lower() # easier to use a column called 'pid' than 'desynpuf_id' inpatient['pid']=inpatient['desynpuf_id'] #set index to the personal id, but also keep id as a column inpatient = inpatient.set_index('pid', drop=False) inpatient.index.name='pid_index' # Have a look inpatient.head() # make a list of columns with information about diagnostic codes icd_cols = list(inpatient.columns[inpatient.columns.str.startswith('icd9_dgns_cd')]) icd_cols ###Output _____no_output_____ ###Markdown Make a list of all unique ICD9 codes that exist, a all_codes: ###Code # Codes to calculate CCI using ICD-9 (CM, US, Enhanced) # Source: http://mchp-appserv.cpe.umanitoba.ca/concept/Charlson%20Comorbidities%20-%20Coding%20Algorithms%20for%20ICD-9-CM%20and%20ICD-10.pdf infarction = ''' 410* 412* ''' heart_failure = ''' 390.91 402.21 402.11 402.91 404.01 404.03 404.11 404.13 404.91 404.93 425.4-425.9 428* ''' peripheral_vascular = ''' 093.0 437.3 440* 441* 443.1-443.9 447.1 557.1 557.9 V43.4 ''' cerebrovascular = ''' 362.34 430*-438* ''' dementia = ''' 290* 294.1 331.2 ''' pulmonary =''' 416.8 416.9 490*-505* 506.4 508.1 508.8 ''' rheumatic = ''' 446.5 710.0-710.4 714.0-714.2 714.8 725* ''' peptic_ulcer = '531*-534*' liver_mild =''' 070.22 070.23 070.32 070.33 070.44 070.54 070.6 070.9 570.* 571.* 573.3 573.4 573.8 573.9 V42.7 ''' # Interesting, diabetes seems to be 5 digits long in the data, but not the specified codes diabetes_without_complication = '250.0*-250.3* 250.8* 250.9*' diabetes_with_complication = '250.4*-250.7*' plegia = ''' 334.1 342.* 343.* 344.0-344.6 344.9 ''' renal = ''' 403.01 403.11,403.91 404.02 404.03 404.12 404.13 404.92 404.93 582.* 583.0-583.7 585* 586* 588.0 V42.0 V45.1 V56* ''' malignancy = ''' 140*-172* 174.0-195.8 200*-208* 238.6 ''' liver_not_mild = ''' 456.0-456.2 572.2-572.8 ''' tumor = '196*-199*' hiv = '042*-044*' ###Output _____no_output_____ ###Markdown Put all the strings that describe the codes for the comorbitities in a single datastructure: ###Code icd9 = unique(df=inpatient, cols = icd_cols, all_str=True) # A dictionary with names of cormobitities and the associated medical codes code_string = { 'infarction' : infarction, 
'heart_failure' : heart_failure, 'peripheral_vascular' : peripheral_vascular, 'cerebrovascular' : cerebrovascular, 'dementia' : dementia, 'pulmonary' : pulmonary, 'rheumatic' : rheumatic, 'peptic_ulcer' : peptic_ulcer, 'liver_mild' : liver_mild, 'diabetes_without_complication' : diabetes_without_complication, 'diabetes_with_complication' : diabetes_with_complication, 'plegia' : plegia, 'renal' : renal, 'malignancy' : malignancy, 'liver_not_mild' : liver_not_mild, 'tumor' : tumor, 'hiv' : hiv} ###Output _____no_output_____ ###Markdown Having created a all_codes, we can use the functions we have created to expand the description for all the different comorbidities to include all the specific codes: ###Code codes = {disease : expand_code(codes.split(), all_codes=icd9, drop_dot=True, drop_leading_zero=True) for disease, codes in code_string.items()} ###Output _____no_output_____ ###Markdown And we can check if it really expanded the codes, for instance by examining the codes for mild liver disease: ###Code codes['liver_mild'] ###Output _____no_output_____ ###Markdown In order to do the calculations, we need the weights associated with each comorbidity. These weights are related to the predictive power of the comorbididy for the probability of dying in a given time period. There are a few different standards, but with relatively minor varitions. Here we use the following: ###Code charlson_points = { 'infarction': 1, 'heart_failure': 1, 'peripheral_vascular': 1, 'cerebrovascular': 1, 'dementia': 1, 'pulmonary': 1, 'rheumatic': 1, 'peptic_ulcer': 1, 'liver_mild': 1, 'diabetes_without_complication': 1, 'diabetes_with_complication': 2, 'plegia': 2, 'renal': 2, 'malignancy': 2, 'liver_not_mild': 3, 'tumor': 6, 'hiv': 6} ###Output _____no_output_____ ###Markdown We also need the function that takes a set of codes and identifies the rows and persons who have the codes (a function we developed in a previous notebook): ###Code #hide from nbdev.showdoc import * from nbdev.export import * notebook2script() ###Output Converted 00_core.ipynb. Converted index.ipynb. Converted query_language.ipynb.
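###Markdown Finally, a rough sketch of the Charlson computation itself, referring back to the pieces assembled above. This is a minimal sketch, not the package's final implementation: it assumes the `get_rows` helper used earlier in this notebook, marks the persons with at least one code from each comorbidity group, and sums the corresponding weights into a person-level Charlson score. ###Code
# A minimal sketch, assuming get_rows(df, codes, cols) as used earlier.
cci = pd.Series(0, index=sorted(inpatient['pid'].unique()))
for disease, disease_codes in codes.items():
    # boolean per person: at least one event with a code for this comorbidity?
    has_disease = get_rows(df=inpatient, codes=disease_codes, cols=icd_cols).groupby(level=0).any()
    # add the weight for this comorbidity to the persons who have it
    cci = cci.add(has_disease.astype(int) * charlson_points[disease], fill_value=0)
cci = cci.astype(int)
cci.describe()
###Output _____no_output_____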
convolutional_networks/week2/KerasTutorial/Keras_Tutorial_v2a.ipynb
###Markdown Keras tutorial - Emotion Detection in Images of FacesWelcome to the first assignment of week 2. In this assignment, you will:1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK. 2. See how you can in a couple of hours build a deep learning algorithm. Why are we using Keras? * Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. * Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. * Being able to go from idea to result with the least possible delay is key to finding good models. * However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you would still implement in TensorFlow rather than in Keras. * That being said, Keras will work fine for many common models. Updates If you were working on the notebook before this update...* The current notebook is version "v2a".* You can find your original work saved in the notebook with the previous version name ("v2").* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* Changed back-story of model to "emotion detection" from "happy house."* Cleaned/organized wording of instructions and commentary.* Added instructions on how to set `input_shape`* Added explanation of "objects as functions" syntax.* Clarified explanation of variable naming convention.* Added hints for steps 1,2,3,4 Load packages* In this exercise, you'll work on the "Emotion detection" model, which we'll explain below. * Let's load the required packages. ###Code import numpy as np from keras import layers from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D from keras.models import Model from keras.preprocessing import image from keras.utils import layer_utils from keras.utils.data_utils import get_file from keras.applications.imagenet_utils import preprocess_input import pydot from IPython.display import SVG from keras.utils.vis_utils import model_to_dot from keras.utils import plot_model from kt_utils import * import keras.backend as K K.set_image_data_format('channels_last') import matplotlib.pyplot as plt from matplotlib.pyplot import imshow %matplotlib inline ###Output Using TensorFlow backend. ###Markdown **Note**: As you can see, we've imported a lot of functions from Keras. You can use them by calling them directly in your code. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`. In other words, unlike TensorFlow, you don't have to create the graph and then make a separate `sess.run()` call to evaluate those variables. 1 - Emotion Tracking* A nearby community health clinic is helping the local residents monitor their mental health. 
* As part of their study, they are asking volunteers to record their emotions throughout the day.* To help the participants more easily track their emotions, you are asked to create an app that will classify their emotions based on some pictures that the volunteers will take of their facial expressions.* As a proof-of-concept, you first train your model to detect if someone's emotion is classified as "happy" or "not happy."To build and train this model, you have gathered pictures of some volunteers in a nearby neighborhood. The dataset is labeled.Run the following code to normalize the dataset and learn about its shapes. ###Code X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() # Normalize image vectors X_train = X_train_orig/255. X_test = X_test_orig/255. # Reshape Y_train = Y_train_orig.T Y_test = Y_test_orig.T print ("number of training examples = " + str(X_train.shape[0])) print ("number of test examples = " + str(X_test.shape[0])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) ###Output number of training examples = 600 number of test examples = 150 X_train shape: (600, 64, 64, 3) Y_train shape: (600, 1) X_test shape: (150, 64, 64, 3) Y_test shape: (150, 1) ###Markdown **Details of the "Face" dataset**:- Images are of shape (64,64,3)- Training: 600 pictures- Test: 150 pictures 2 - Building a model in KerasKeras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.Here is an example of a model in Keras:```pythondef model(input_shape): """ input_shape: The height, width and channels as a tuple. Note that this does not include the 'batch' as a dimension. If you have a batch like 'X_train', then you can provide the input_shape using X_train.shape[1:] """ Define the input placeholder as a tensor with shape input_shape. Think of this as your input image! X_input = Input(input_shape) Zero-Padding: pads the border of X_input with zeroes X = ZeroPadding2D((3, 3))(X_input) CONV -> BN -> RELU Block applied to X X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) MAXPOOL X = MaxPooling2D((2, 2), name='max_pool')(X) FLATTEN X (means convert it to a vector) + FULLYCONNECTED X = Flatten()(X) X = Dense(1, activation='sigmoid', name='fc')(X) Create model. This creates your Keras model instance, you'll use this instance to train/test the model. model = Model(inputs = X_input, outputs = X, name='HappyModel') return model``` Variable naming convention* Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. * Instead of creating unique variable names for each step and each layer, such as ```X = ...Z1 = ...A1 = ...```* Keras re-uses and overwrites the same variable at each step:```X = ...X = ...X = ...```* The exception is `X_input`, which we kept separate since it's needed later. Objects as functions* Notice how there are two pairs of parentheses in each statement. For example:```X = ZeroPadding2D((3, 3))(X_input)```* The first is a constructor call which creates an object (ZeroPadding2D).* In Python, objects can be called as functions. Search for 'python object as function and you can read this blog post [Python Pandemonium](https://medium.com/python-pandemonium/function-as-objects-in-python-d5215e6d1b0d). 
See the section titled "Objects as functions."* The single line is equivalent to this:```ZP = ZeroPadding2D((3, 3)) ZP is an object that can be called as a functionX = ZP(X_input) ``` **Exercise**: Implement a `HappyModel()`. * This assignment is more open-ended than most. * Start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. * Later, come back and try out other model architectures. * For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. * You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`. **Note**: Be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to. ###Code # GRADED FUNCTION: HappyModel def HappyModel(input_shape): """ Implementation of the HappyModel. Arguments: input_shape -- shape of the images of the dataset (height, width, channels) as a tuple. Note that this does not include the 'batch' as a dimension. If you have a batch like 'X_train', then you can provide the input_shape using X_train.shape[1:] Returns: model -- a Model() instance in Keras """ ### START CODE HERE ### # Feel free to use the suggested outline in the text above to get started, and run through the whole # exercise (including the later portions of this notebook) once. The come back also try out other # network architectures as well. X_input = Input(input_shape) X = ZeroPadding2D((3, 3))(X_input) X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X) X = BatchNormalization(axis = 3, name = 'bn0')(X) X = Activation('relu')(X) # MAXPOOL X = MaxPooling2D((2, 2), name='max_pool')(X) # FLATTEN X (means convert it to a vector) + FULLYCONNECTED X = Flatten()(X) X = Dense(1, activation='sigmoid', name='fc')(X) # Create model. This creates your Keras model instance, you'll use this instance to train/test the model. model = Model(inputs = X_input, outputs = X, name='HappyModel') ### END CODE HERE ### return model ###Output _____no_output_____ ###Markdown You have now built a function to describe your model. To train and test this model, there are four steps in Keras:1. Create the model by calling the function above 2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])` 3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)` 4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)` If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/). Step 1: create the model. **Hint**: The `input_shape` parameter is a tuple (height, width, channels). It excludes the batch number. Try `X_train.shape[1:]` as the `input_shape`. ###Code ### START CODE HERE ### (1 line) happyModel = HappyModel(X_train.shape[1:]) ### END CODE HERE ### ###Output _____no_output_____ ###Markdown Step 2: compile the model**Hint**: Optimizers you can try include `'adam'`, `'sgd'` or others. See the documentation for [optimizers](https://keras.io/optimizers/) The "happiness detection" is a binary classification problem. The loss function that you can use is `'binary_cross_entropy'`. 
Note that `'categorical_crossentropy'` won't work with your dataset as it's formatted, because the data is an array of 0s and 1s rather than two arrays (one for each category). Documentation for [losses](https://keras.io/losses/) ###Code
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = "adam", loss = "binary_crossentropy", metrics = ["accuracy"])
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
Step 3: train the model
**Hint**: Use the `'X_train'`, `'Y_train'` variables. Use integers for the epochs and batch_size.
**Note**: If you run `fit()` again, the `model` will continue to train with the parameters it has already learned instead of reinitializing them. ###Code
### START CODE HERE ### (1 line)
happyModel.fit(x = X_train, y = Y_train, epochs = 30, batch_size = 50)
### END CODE HERE ###
###Output
Epoch 1/30
600/600 [==============================] - 9s - loss: 2.2086 - acc: 0.5483
Epoch 2/30
600/600 [==============================] - 9s - loss: 0.4844 - acc: 0.8217
Epoch 3/30
600/600 [==============================] - 9s - loss: 0.1732 - acc: 0.9267
Epoch 4/30
600/600 [==============================] - 9s - loss: 0.1774 - acc: 0.9300
Epoch 5/30
600/600 [==============================] - 9s - loss: 0.0983 - acc: 0.9650
Epoch 6/30
600/600 [==============================] - 9s - loss: 0.0840 - acc: 0.9717
Epoch 7/30
600/600 [==============================] - 9s - loss: 0.0926 - acc: 0.9683
Epoch 8/30
600/600 [==============================] - 9s - loss: 0.0754 - acc: 0.9767
Epoch 9/30
600/600 [==============================] - 9s - loss: 0.0618 - acc: 0.9783
Epoch 10/30
600/600 [==============================] - 9s - loss: 0.0654 - acc: 0.9850
Epoch 11/30
600/600 [==============================] - 9s - loss: 0.0612 - acc: 0.9767
Epoch 12/30
600/600 [==============================] - 9s - loss: 0.0480 - acc: 0.9867
Epoch 13/30
600/600 [==============================] - 10s - loss: 0.0530 - acc: 0.9800
Epoch 14/30
600/600 [==============================] - 9s - loss: 0.0458 - acc: 0.9883
Epoch 15/30
600/600 [==============================] - 9s - loss: 0.0306 - acc: 0.9933
Epoch 16/30
600/600 [==============================] - 9s - loss: 0.0327 - acc: 0.9950
Epoch 17/30
600/600 [==============================] - 9s - loss: 0.0271 - acc: 0.9917
Epoch 18/30
600/600 [==============================] - 9s - loss: 0.0283 - acc: 0.9933
Epoch 19/30
600/600 [==============================] - 9s - loss: 0.0226 - acc: 0.9950
Epoch 20/30
600/600 [==============================] - 9s - loss: 0.0228 - acc: 0.9983
Epoch 21/30
600/600 [==============================] - 9s - loss: 0.0233 - acc: 0.9950
Epoch 22/30
600/600 [==============================] - 10s - loss: 0.0172 - acc: 0.9967
Epoch 23/30
600/600 [==============================] - 10s - loss: 0.0192 - acc: 0.9950
Epoch 24/30
600/600 [==============================] - 11s - loss: 0.0206 - acc: 0.9950
Epoch 25/30
600/600 [==============================] - 11s - loss: 0.0254 - acc: 0.9950
Epoch 26/30
600/600 [==============================] - 11s - loss: 0.0175 - acc: 0.9950
Epoch 27/30
600/600 [==============================] - 11s - loss: 0.0178 - acc: 0.9950
Epoch 28/30
600/600 [==============================] - 11s - loss: 0.0238 - acc: 0.9950
Epoch 29/30
600/600 [==============================] - 11s - loss: 0.0207 - acc: 0.9917
Epoch 30/30
600/600 [==============================] - 10s - loss: 0.0116 - acc: 1.0000
###Markdown
Step 4: evaluate model
**Hint**: Use the `'X_test'` and `'Y_test'` variables to evaluate the model's performance.
###Code
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(x = X_test, y = Y_test)
### END CODE HERE ###

print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
###Output
150/150 [==============================] - 1s
Loss = 0.151273331046
Test Accuracy = 0.966666664282
###Markdown
Expected performance
If your `happyModel()` function worked, its accuracy should be better than random guessing (50% accuracy). To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini batch size of 16 and the "adam" optimizer.

Tips for improving your model
If you have not yet achieved a very good accuracy (>= 80%), here are some tips:
- Use blocks of CONV->BATCHNORM->RELU such as:
```python
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
```
until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You can then flatten the volume and use a fully-connected layer.
- Use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
- Change your optimizer. We find 'adam' works well.
- If you get memory issues, lower your batch_size (e.g. 12).
- Run more epochs until you see the train accuracy no longer improving.

**Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. Normally, you'll want separate dev and test sets. The dev set is used for parameter tuning, and the test set is used once to estimate the model's performance in production.

3 - Conclusion
Congratulations, you have created a proof of concept for "happiness detection"!

Key Points to remember
- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures.
- Remember the four steps in Keras: 1. Create 2. Compile 3. Fit/Train 4. Evaluate/Test

4 - Test with your own image (Optional)
Congratulations on finishing this assignment. You can now take a picture of your face and see if the model can classify whether your expression is "happy" or "not happy". To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go to your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder.
3. Write your image's name in the following code.
4. Run the code and check if the algorithm is right (0 is not happy, 1 is happy)!

The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try! ###Code
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###

img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happyModel.predict(x))
###Output
[[ 1.]]
###Markdown
5 - Other useful functions in Keras (Optional)
Two other basic features of Keras that you'll find useful are:
- `model.summary()`: prints the details of your layers in a table with the sizes of its inputs/outputs
- `plot_model()`: plots your graph in a nice layout. You can even save it as ".png" using SVG() if you'd like to share it on social media ;).
The saved image can be found via "File" then "Open..." in the upper bar of the notebook. Run the following code. ###Code
happyModel.summary()

plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
###Output
_____no_output_____
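###Markdown
As a recap, the four Keras steps used throughout this notebook compose end-to-end in just a few lines (a sketch reusing only the objects and settings defined above):
```python
happyModel = HappyModel(X_train.shape[1:])                               # 1. Create
happyModel.compile(optimizer = "adam", loss = "binary_crossentropy",
                   metrics = ["accuracy"])                               # 2. Compile
happyModel.fit(x = X_train, y = Y_train, epochs = 30, batch_size = 50)   # 3. Fit/Train
preds = happyModel.evaluate(x = X_test, y = Y_test)                      # 4. Evaluate; preds = [loss, accuracy]
```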
instagram_analysis/insta_agds_full_dataset.ipynb
###Markdown
Softmax version ###Code
from scipy.special import softmax

def get_trait_dot_product(post_text: str, word_map: list, word_dataframe: pd.DataFrame) -> list:
    # Filter out the text
    filtered_post = remove_stopwords(clean_up_text(post_text))
    filtered_post += extract_hashtags(post_text)

    # Create a vector for the dot product
    post_vector = [0] * len(word_map)

    # Calculate word occurrences
    word_ctr = Counter(filtered_post)
    for word, freq in word_ctr.items():
        if word in word_map:
            post_vector[word_map.index(word)] = freq

    # Calculate dot product for a given text
    word_dot = word_dataframe.dot(post_vector)

    out_vec = pd.Series()
    for trait in trait_list:
        out_vec = out_vec.append(pd.Series([np.argmax(softmax(word_dot.loc[trait]))], index=[trait]))

    return out_vec

# Trait accuracy - round the results
def natural_round(x: float) -> int:
    out = int(x // 1)
    return out + 1 if (x - out) >= 0.5 else out

def accuracy_per_trait(input_vector: pd.Series, annotated_vector: pd.Series) -> np.array:
    out_array = np.array([0] * 37, dtype=np.int)
    for i in range(len(out_array)):
        if input_vector[i] == annotated_vector[i]:
            out_array[i] = 1
    return out_array

pbar = tqdm(arch_df.iterrows())
accuracy = 0

# Out accuracy vector
total_accuracy = np.array([0] * 37, dtype=np.int)

for idx, row in pbar:
    user_text = list(itertools.chain.from_iterable(posts[users.index(idx)]))
    user_text = " ".join(user_text)
    sim_output = get_trait_dot_product(user_text, softmax_word_map, softmax_word_df)
    user_accuracy = accuracy_per_trait(sim_output, row)
    total_accuracy += user_accuracy
    pbar.set_description(f"Average accuracy: {round(np.mean(np.divide(total_accuracy, users.index(idx)+1))*100, 2)}")

# Test dataset
# Load the .csv with archetypes
arch_df = pd.read_csv('test_archetypes_pl.csv', index_col=0)

# Save the order of columns
trait_list = arch_df.columns.tolist()

# Show the table header and column list
print(trait_list)
arch_df.head()

# Table preprocessing - replace all NaN with 2 (Unrelated/Don't know class)
arch_df = arch_df.fillna(2)

# Remove duplicated annotations, to exclude conflicting entries
arch_df = arch_df[~arch_df.index.duplicated(keep='first')]

# Print the head of the dataset after modification
arch_df.head()

# Check if a user has a non-empty directory in the dataset, otherwise delete the user from the list
available_arch_df = copy.deepcopy(arch_df)

posts = []
BASE_DIR = "instagram_cleared"

# Iterate over whole DataFrame
for i, row in tqdm(arch_df.iterrows()):
    profile_posts = []
    profile_hashtags = []

    # Get all posts per profile
    profile_path = os.path.join(BASE_DIR, i)
    for file in os.listdir(profile_path):
        if not file.endswith(".toml"):
            with open(os.path.join(profile_path, file), "r") as post_f:
                read_text = post_f.read()
                profile_posts.append(remove_stopwords(clean_up_text(read_text)))
                profile_hashtags.append(extract_hashtags(read_text))

    # Merge lists - a single list for a single influencer
    profile_hashtags = list(itertools.chain.from_iterable(profile_hashtags))
    posts.append(list(itertools.chain.from_iterable([profile_posts, [profile_hashtags]])))

# Map usernames to indices
users = list(arch_df.index.values)
user_indices = {k: users.index(k) for k in users}

pbar = tqdm(arch_df.iterrows())

# Out accuracy vector
test_total_accuracy = np.array([0] * 37, dtype=np.int)

for idx, row in pbar:
    profile_path = os.path.join(BASE_DIR, idx)
    user_text = ""
    for file in os.listdir(profile_path):
        if not file.endswith(".toml"):
            with open(os.path.join(profile_path, file), "r") as post_f:
                read_text = post_f.read()
                user_text += read_text

    sim_output = get_trait_dot_product(user_text, softmax_word_map, softmax_word_df)
    user_accuracy = accuracy_per_trait(sim_output, row)
    test_total_accuracy += user_accuracy
    pbar.set_description(f"Average test dataset accuracy: {round(np.mean(np.divide(test_total_accuracy, users.index(idx)+1))*100, 2)}")

# Show total accuracy
scaled_test_accuracy = np.divide(test_total_accuracy, len(arch_df))
avg_test_accuracy = np.mean(scaled_test_accuracy)

print("--- ACCURACY ON TESTING DATASET ---")
print(f"Average test dataset accuracy: {round(avg_test_accuracy*100, 2)}%")
print("Accuracy per trait:")

for i in range(len(trait_list)):
    print(f"{trait_list[i]}: {round(scaled_test_accuracy[i] * 100, 2)}%")
###Output
0it [00:00, ?it/s]<ipython-input-17-4ffef00e153a>:21: DeprecationWarning: The default dtype for empty Series will be 'object' instead of 'float64' in a future version. Specify a dtype explicitly to silence this warning.
  out_vec = pd.Series()
Average test dataset accuracy: 37.64: : 177it [15:25, 5.23s/it]
###Markdown
Regression model - testing dataset ###Code
# Methods
def get_trait_dot_product(post_text: str, word_map: list, word_dataframe: pd.DataFrame) -> list:
    # Filter out the text
    filtered_post = remove_stopwords(clean_up_text(post_text))
    filtered_post += extract_hashtags(post_text)

    # Create a vector for the dot product
    post_vector = [0] * len(word_map)

    # Calculate word occurrences
    word_ctr = Counter(filtered_post)
    for word, freq in word_ctr.items():
        if word in word_map:
            post_vector[word_map.index(word)] = freq

    # Calculate dot product for a given text
    word_dot = word_dataframe.dot(post_vector)
    return word_dot

# Replace NaN with 0 in word_frequency_table
word_df = word_df.fillna(0)

# Method for calculating the dot product of trait <-> influencer relation
def get_influencer_dot_product(trait_output: list, influencer_dataframe: pd.DataFrame) -> pd.DataFrame:
    return influencer_dataframe.dot(trait_output)

# Method for calculating the similarity
def calculate_similarity(post_text: str, word_map: list, word_dataframe: pd.DataFrame, influencer_dataframe: pd.DataFrame) -> pd.DataFrame:
    # Calculate word-trait dot product
    post_result = get_trait_dot_product(post_text, word_map, word_dataframe)

    # Calculate trait-influencer dot product
    inf_dot_product = get_influencer_dot_product(post_result, influencer_dataframe)

    # Get the sum of influencer traits
    influencer_sum = influencer_dataframe.sum(axis=1)

    # Divide the dot product by the sum calculated above
    inf_dot_product = inf_dot_product.divide(influencer_sum)

    return inf_dot_product

# Trait accuracy - round the results
def natural_round(x: float) -> int:
    out = int(x // 1)
    return out + 1 if (x - out) >= 0.5 else out

def accuracy_per_trait(input_vector: pd.Series, annotated_vector: pd.Series) -> np.array:
    out_array = np.array([0] * 37, dtype=np.int)
    for i in range(len(out_array)):
        if natural_round(input_vector[i]) == annotated_vector[i]:
            out_array[i] = 1
    return out_array

pbar = tqdm(arch_df.iterrows())

# Out accuracy vector
test_reg_total_accuracy = np.array([0] * 37, dtype=np.int)

for idx, row in pbar:
    profile_path = os.path.join(BASE_DIR, idx)
    user_text = ""
    for file in os.listdir(profile_path):
        if not file.endswith(".toml"):
            with open(os.path.join(profile_path, file), "r") as post_f:
                read_text = post_f.read()
                user_text += read_text

    sim_output = get_trait_dot_product(user_text, word_map, word_df)
    user_accuracy = accuracy_per_trait(sim_output, row)
    test_reg_total_accuracy += user_accuracy
    pbar.set_description(f"Average test dataset accuracy: {round(np.mean(np.divide(test_reg_total_accuracy, users.index(idx)+1))*100, 2)}")

# Show total accuracy
scaled_reg_test_accuracy = np.divide(test_reg_total_accuracy, len(arch_df))
avg_reg_test_accuracy = np.mean(scaled_reg_test_accuracy)

print("--- ACCURACY ON TESTING DATASET ---")
print(f"Average test dataset accuracy: {round(avg_reg_test_accuracy*100, 2)}%")
print("Accuracy per trait:")

for i in range(len(trait_list)):
    print(f"{trait_list[i]}: {round(scaled_reg_test_accuracy[i] * 100, 2)}%")
###Output
Average test dataset accuracy: 18.08: : 177it [14:44, 5.00s/it]
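###Markdown
To make the two-stage similarity above concrete, here is a toy sketch with made-up matrices (the real `word_df` and influencer dataframe are built earlier in the notebook; only the shapes and the two `dot` stages matter here):
```python
import pandas as pd

# Toy word -> trait matrix: rows are traits, columns are words (made-up counts)
word_df_toy = pd.DataFrame([[2, 0, 1],
                            [0, 3, 1]],
                           index=["sporty", "artsy"], columns=["gym", "paint", "photo"])

# Toy trait weights per influencer: rows are influencers, columns are traits
influencer_df_toy = pd.DataFrame([[1, 0],
                                  [1, 2]],
                                 index=["anna", "bart"], columns=["sporty", "artsy"])

post_vector = [1, 0, 2]                                        # word counts extracted from one post
trait_scores = word_df_toy.dot(post_vector)                    # stage 1: words -> traits
raw_scores = influencer_df_toy.dot(trait_scores)               # stage 2: traits -> influencers
similarity = raw_scores.divide(influencer_df_toy.sum(axis=1))  # normalise by each influencer's trait mass
print(similarity)  # anna 4.0, bart ~2.67
```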
Codigos/Bicycle Thefts_Toronto.ipynb
###Markdown
**Import libraries** ###Code
from google.colab import drive
drive.mount("/content/gdrive/")
%cd "/content/gdrive/My Drive/Colab Notebooks/bikes-theft-model"

# Libraries: Standard ones
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.io as pio

# Library for boxplots
import seaborn as sns

# GRAPHS CLASS
from Codigos.DataStatistics import GraphsStatistics as gp
###Output
Mounted at /content/gdrive/
/content/gdrive/My Drive/Colab Notebooks/bikes-theft-model
###Markdown
DATA SET DESCRIPTION - TORONTO
This dataset contains Bicycle Theft occurrences from **2014-2019**. The location of crime occurrences has been deliberately offset to the nearest road intersection node to protect the privacy of parties involved in the occurrence. All location data must be considered as an approximate location of the occurrence and users are advised not to interpret any of these locations as related to a specific address or individual. *Total of 26 features.*

| Field | Field Description | Variable Type | Num of Unique values |
|---|---|---|---|
| X | Location in cartesian coordinates (X) | float | 4885 |
| Y | Location in cartesian coordinates (Y) | float | 4874 |
| FID | ID | int | 21584 |
| Index | Record Unique Identifier | int | 21584 |
| event_unique_id | Event Occurrence Identifier | String | 19350 |
| Primary_Offence | Offence related to the occurrence | String | 65 |
| Occurrence_Date | Date of occurrence | String | 2104 |
| Occurrence_Year | Occurrence year | int | 6 |
| Occurrence_Month | Occurrence Month | int | 12 |
| Occurrence_Day | Occurrence Day | int | 31 |
| Occurrence_Time | Occurrence Time | String | 933 |
| Division | Police Division where event occurred | int | 18 |
| City | City where event occurred | String | 1 |
| Location_Type | Location Type where event occurred | String | 44 |
| Premise_Type | Premise Type where event occurred | String | 5 |
| Bike_Make | Bicycle Make | String | 725 |
| Bike_Model | Bicycle Model | String | 7008 |
| Bike_Type | Bicycle Type | String | 13 |
| Bike_Speed | Bicycle Speed | int | 62 |
| Bike_Colour | Bicycle Colour | String | 233 |
| Cost_of_Bike | Cost of Bicycle | float | 1458 |
| Status | Status of event | String | 3 |
| Hood_ID | Neighbourhood Id | int | 140 |
| Neighbourhood | Neighbourhood name | String | 140 |
| Lat | Latitude of point extracted after offsetting X and Y coordinates to nearest intersection node | float | 4874 |
| Long | Longitude of point extracted after offsetting X and Y coordinates to nearest intersection node | float | 4885 |

**Descriptive statistics and visualisation**
The following is a statistical analysis of the data, initially showing the type of variable per field: ###Code
data_bikes = pd.read_csv('Data/Bicycle_Thefts_Toronto.csv', header=0)
display(data_bikes.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 21584 entries, 0 to 21583
Data columns (total 26 columns):
 #   Column           Non-Null Count  Dtype
---  ------           --------------  -----
 0   X                21584 non-null  float64
 1   Y                21584 non-null  float64
 2   FID              21584 non-null  int64
 3   Index_           21584 non-null  int64
 4   event_unique_id  21584 non-null  object
 5   Primary_Offence  21584 non-null  object
 6   Occurrence_Date  21584 non-null  object
 7   Occurrence_Year  21584 non-null  int64
 8   Occurrence_Month 21584 non-null  int64
 9   Occurrence_Day   21584 non-null  int64
 10  Occurrence_Time  21584 non-null  object
 11  Division         21584 non-null  int64
 12  City             21584 non-null  object
 13  Location_Type    21584 non-null  object
 14  Premise_Type     21584 non-null  object
 15  Bike_Make        21584 non-null  object
 16  Bike_Model       13443 non-null  object
 17  Bike_Type        21584 non-null  object
 18  Bike_Speed       21584 non-null  int64
 19  Bike_Colour      19855 non-null  object
 20  Cost_of_Bike     20048 non-null  float64
 21  Status           21584 non-null  object
 22  Hood_ID          21584 non-null  int64
 23  Neighbourhood    21584 non-null  object
 24  Lat              21584 non-null  float64
 25  Long             21584 non-null  float64
dtypes: float64(5), int64(8), object(13)
memory usage: 4.3+ MB
###Markdown
The following table presents a summary of the numerical variables in the database. It can be seen that the year with the most thefts is 2019, with a peak in December, and that division 58 is the most affected (http://www.torontopolice.on.ca/divisions/map.php). The average cost of stolen bicycles is 937.9778 Canadian dollars. ###Code
data_bikes.describe()
###Output
_____no_output_____
###Markdown
The following table presents a summary of the categorical variables. It can be seen that the majority of thefts occur in apartments (Rooming House, Condo). The neighborhood in which most robberies occur is Waterfront Communities-The Island, and most of the thefts occur at 18 h. ###Code
data_bikes.describe(include=['object'])
###Output
_____no_output_____
###Markdown
**Visualisation**
In general, the status has three main types: stolen, recovered and unknown. The following graph shows the proportion of each status with respect to the general data. 97% of the bicycles correspond to stolen items. ###Code
fig = px.pie(data_frame=data_bikes, names='Status', title='Theft status')
fig.show()
###Output
_____no_output_____
###Markdown
***What is the trend of annual thefts?***
The following graph presents the number of reported thefts per year. Thefts increased between 2014 and 2018, with a slight decrease between 2018 and 2019. ###Code
Dias_aux_robo_year = data_bikes.groupby(['Occurrence_Year','Status']).size().reset_index().rename(columns={0:'Count'})
#display(Dias_aux_robo_year)
fig = px.line(Dias_aux_robo_year, x='Occurrence_Year', y='Count', color='Status', title='Bicycle theft in Toronto')
fig.show()
###Output
_____no_output_____
###Markdown
***What day do the most thefts occur?***
According to the figure, there is not a great difference in the number of thefts reported across the days of the week; however, the greatest number of thefts occurs on Friday. ###Code
data_bikes['Occurrence_Date'] = pd.to_datetime(data_bikes['Occurrence_Date'])  # Convert the column to date format
data_bikes['Week_day'] = data_bikes['Occurrence_Date'].dt.day_name()  # Add the day name
day_type = pd.api.types.CategoricalDtype(categories=["Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday"], ordered=True)
data_bikes["Week_day"] = data_bikes["Week_day"].astype(day_type)
Dias_aux_week = data_bikes[data_bikes['Status']=='STOLEN'].groupby(['Week_day']).size().reset_index().rename(columns={0:'Count'})
fig = px.bar(Dias_aux_week, x='Week_day', y='Count', title='Bicycle theft in Toronto per day')
fig.show()
###Output
_____no_output_____
###Markdown
***At what time do most thefts occur?***
According to the following histogram, the greatest number of thefts occurs between 12h and 18h. ###Code
data_bikes['Occurrence_Time'] = pd.to_datetime(data_bikes['Occurrence_Time'], format='%H:%M:%S').dt.time
data_bikes['hour'] = pd.to_datetime(data_bikes['Occurrence_Time'], format='%H:%M:%S').dt.hour
Dias_aux_time = data_bikes[data_bikes['Status']=='STOLEN'].groupby(['hour']).size().reset_index().rename(columns={0:'Count'})
fig = px.bar(Dias_aux_time, x='hour', y='Count', title='Bicycle theft in Toronto per hour')
fig.show()
###Output
_____no_output_____
###Markdown
***What are the most stolen bicycles?***
According to the information in the following table, the most stolen bicycles are those costing between 1 and 1000 Canadian dollars. ###Code
bins = [1, 1000, 2000, 5000, 10000, 120000]
aux = data_bikes[data_bikes['Status']=='STOLEN']
aux_2 = aux['Cost_of_Bike'].value_counts(bins=bins, sort=False)
aux_3 = pd.DataFrame(aux_2).reset_index().rename(columns={'index':'Range','Cost_of_Bike':'Total'})
aux_3
###Output
_____no_output_____
###Markdown
***What are the most dangerous neighborhoods?***
![image.png](attachment:image.png)
N1wrYN8ctgr2WLuGdf4w7sp3ycu0S6r3G7OM/rar6WSncZN4g8xzl2IL5tD1bVvo5/LvsH0c6S3PtwoGUPjjXHg3Wf6TqELEUXjrfIulfP81p2RyJyOw+L/qCoNw2a6+kovXUCutsZqLibBd3dzGVRc7sAhltFAsbbxTDfLYG1X4vGoXI4fRVoDlSibagclgENdD3HcLRxDwraEtH98BL6H9UgMGfCQMiEx6FGhENNEuFm2UdGWxGOtj3fn0baEWYfKs6/LG1HaL4PdiES7ZhHOOqS18IuREIdy8CFcLgDc9FOzEZdiMy1gx7EG/erccqRAufAKUxG2e+7MR3twFTUhWfRdpHy3BxIkroEeZTEl4S2AxBwIRptFwBcyjmV3MSQTOXehXvUayTG8nkk0yrk8/kceT0MyqINc5FWzIZbMBNqxmyoBSDpWgIkXioJlkRYIaSKV5UeXxLU6Wi7wAxcmAWJtkrqWX6SaLVM3ZhGJ6z95ShqOgzDrWI8CrdIz7Egd7He5xgyGJPHuXA7iFCEY6xSp6KcfIdH8UIzJZT3Cm+uQh4VMsk8hdCJMDzz4G8VPM9xPLZMatmY8jzLzfKp//N20plwO2bCrCPq5vNyVw0SQYRjZCHqThgLLkTn2gUw0w5Mt0OkqhGiGiBzboAgqUe3ApXs01NOA0caPCEwNKVFYA6tCKEN1KPndYz/2wP/YyOyXQexr2mbCC1zcv3UMD3e3K41dl3Jjy7OO9a7/q6Tb8bVcTeQZi5A4Sr8sTKU3DmO+Pr1YtHUk3ALZmmBKpgJuyAqPNSG6RhMzbVjas6F6bkOTIcIN2aIcCyU/xUKQ6V5K6ZmW0DMhNqVd7oxGyFcmA23Y3quVSib+M1zEZdQvLlIh5LK37OhNow8taLhRhEKHUnoe9aAWXRhjh0iyxDpwEykE7Phzpg8dkCWj/mUeeU71Ya7eMpndmA64sLjSCtm2HnPtiEU6cRoqAl1g6XYULMGf6v9OySYt6DjyRU8QivmxADgxgw68RideMaO5MVG8uJvWpiig1A6CmFhd4opLVrknOISnX7EDcqE+acs5hGmfFifbkyFOhS48CzkwrM5F54Ss0Q7Hs+24cFci4JmPJhtxsQ8mjAx04T7M00Ym5HHE9MOPJx24v50I0ZnnPDOONA4fBa5tgMwXy/G0FMzxqYcmJhyYmKqCRPPGjHx1IbxZzZ4JxvgnryKjgdX0D5xGa3jF9E8eh5Nw2fhCJ5BY+A0DIM6bKhfh1T3QZgnLqDzWT06Ji6jffgcXMM0DC/C/bAa7aMX0LYM2kcuYDncHr2Ke2M1GLhfi8GJengn9fA/MiDw2IThp2aMPLNgfMqO+8tgQpSZ5W7EgykHHoiUx42YeGaXchByU+SoyPX+rBP3Z5sg0jknJuda8XCuTaRqvfDco7l2PJ5z4UmoA7qbucjoSEartwpPZ5tF3TU/uIgdF9agur8EA08M4nkP5nhNXme9PphtweRsCx7MvIhmTMwQTZiYbsJ9kU/WMevQgUlixonJGaZNmHxiw+R0MwZnbLhwIxc6dzpagxcQnGnGg5k2PJppwaOZZimz6SaMTjdhbJr/34TJaSceTMfIZ9ohzolrvC7ew3uV+5Q8SH1yYvyZHSNPzLj/1IqJxxb5v1N2jE81YmjaAoevEqda03ClKwf3pqiLTiH7B1PNQh/vTzkwNtWIkSnbsnV6n3Un9JhyeDnuTzlF22CZJp/Y8eBZI/zTdpgGdChtOQLD9UIEnllEe2GbWRJ8H2UjZNSEB9MqnOI887OcHvI69Y3p+HQT7j414ZInEwX6OLQPncY45TrjxNh0o3LcjMmZFqWNN2JUOc9z83pCnRGI0SdFtx4KfaXOLo6nc214Ntc+DzGOhFyYCrkwTYQ7xDjyKOSCZ+wqTnsycLUrG08ibQiFSKQ6MS0INT2aClFS0xhvHPvVsCC1ktjRextLitRjEhaVSCyWkmwsGzIQ7kQkTK+i9DAKDyG6cPNBPcoak2HrL8cj4Wl2K303yXw7wnNtCM/J8YNjSIQkieVRy6JO7zMP6jGvcZwQBFG5l9dEPlVCGUMqY73AKkEX3uAYGaoE9DnvtAeR6ALpjCWgzx0zPCE2DEIc00u8AIZCLBB0HkuonmqR0hNOUhjtwVO4cbEnFyXtaWgduYBnLGuYZep97rnSI628PyZkIzacguNn7G/1eE549enZV0GSLD319NYTqtGCKGUloZ4TxktUGjUL5xZmAORsgHI9So/0MuA9y0J9n3wPjTNhrEXbEYkQbYjQ+BN6Td2WUGcZ1PQtbUdtQzEpy8SZC0KSaeVZiiHH6zxPsq3e97KU7wxHOnB3sg7pTXHYa9kIo7cYTm5Owe2GxY5uXHT57pBu9T3vOvnmHqTcXcTJnQl8WtSPaAX5jqteA0t/EabCDrChIMI4n+4YdMlzEQ+iIraIsT49EuFuRBXMn4v2yv/lc5ZA9Ll38Jnq/TENWsRLMS8x19gQ5xs8z3swHe5Ec/9ZnDQkoNF3RsRsRcOyHGF0Ixz2IDznRiTUiaj4f+V583lQ45uWSfnMsJzWmRbWNGOb2Bl6MB5pwaXbRXizbh2+eumbSG1LQH/YjBDvm25FdKYDkbkOhCLSEo5E3VgSwsqkR4RxUx6Eo7JD5G/GbAnLdD7/qvxiZCPitSi7HtlxoRdRFdEeRGLAuuCU1VvRhWhEIhLxCLIv4spCnUCoE3MzJPtuPIYb7rFLKDDuR1t/JR7ROkc3IhE5MNF7RIOB+aZxNxXpxFTELTrWp2GSyfYFzLULEnguoMWBq5twzp2BgUdGPI3QcGjHs1AHnvGdMy3Kb8WgoFGxGOakwUGj42WYFgYYjbAF43OOA3h0wWuhDhpLpdF5b4HqNegWZZYxdl2IRhUoMo2VLeWrIhT2IBTuQijShTAXtqBH1B2PZ+c6MTXdgfzuEyi6kYnBpxYxOLEMbU+q8bru26jp12Ji1in0hs9U37NoGlbrfqG+5/PKPM+3TRlPKDxacyQLPRgJN6PcfRylzam4Pa7HDPNKnZvtBOY8iM7RYOxFGD1iqjNMneAzhayomywby9iFMHWMRqai40L3Ff1X24C8h/lgDCL1Xuq8MEajXXgIN5p9Z6FrSoX9VjmeiGf3ifbPPoHtn8+Yoy4Kr9YCIVi0bpX2x7y9DJzGnVIJEPuXaLcwsq2DVShpSoHltg4z7IdivGgvO5ZtW8ojHGXdexBSZKLKTMpq6TyT0LCOqL9P4MbNRwZUtaVDZ09GcNYJ9okkdOJ9Qie7IfoAIStZF7L+eT5GJxbRW6lX7EMWh6gryPoWRIpkSqBH6JF4/xwJdie679eh0p2BS+5sTJNchki+eqSOKAM6CfZC/0mdiol1VUIQXiR4C/er4Qpu4R2lh1T1lApvqeLdoz4s562kXrKfo6xZp1KuHtx6qIfGloSGWxpMhJolqVZJtJIyNlhM74dcwpnCsWIuqnqFSYBYRyTBan0zv26E5rFwrxoiwvwj4lqAGnox701XyKQaLsA0hnjNE30hzyVCKpRwi9j7549fCK8Q5WRZF4EwO
Dg+RDoxHmqGtjUNxZ3p8DzSY4rvEQYH27kSAsKUMmToxwtQ+wOZqkaAEs4RcgMhhr1wdqINEeGppbdWDblZSIF2CERlmA5DdebP8dqLv9X7X5ZGef8SeNn/vHBOzorIvMp8tylhQkqeSd5fKMdb8k8nINowFwN6rOnFjoWUB5//PGLLH6Xx8Zzs1TEkJo12CqO5/5FRzKzGWzaKxefNYstmbnvLbxP8NGw1yNX6XIWv7JvJrcrO3MnEXvNGVNw6gemwQ2mE7EBirWjl93zHof5eJCUxjLFEFz1+cQpI/I9q5b+Qh/nnqdfVlPfJCr41VouSliPIb0vFCBsVB+ZZtzJ4KR5jtdGKssSUUXk3lUk0Wv6viL+K/c2FDfS4yE4+NEeldeP2k3qc7c1EpnM/0h0JONV7Ak2BKtyYuIopEWPHwcOleCykx2Y+/i3W2/CWY4WE0CtAi5NTTYxb4zSk0pEKa5bEiANk2IMIjSOWRRAu2Zm9zHMjyU2MV+A52cfKNraOWW4SLsUI4/tCHsxF3LgfaYZtpBLpxl247b8qCHKEA7lYwMFGyg5PypteJhmTx1Sx3hVPg5xqleUcnnPC0FeEPbWbcNJ5ANdHL8sOVykrB2l6Mp6bHfghf79afcTELb6lruS1V8qLkHWsXN96zMU+LN8U4/NEB0cjrFt0ZCSOvaPVyGhLROWdLPQy/Mp/Fq2DFciw70NCzWZ03a8Rs1LzAxAJqiBZHEwVHY64pTGl6PT8wKnmL1aWanvhADndJgxO5iMIN6qu5yHdGAdDbyEezzQLsiDkoHq/RJuTU5diap86LPRX1QPWIYkUvTf0AFHfmU9V3lJPVeJEPZKkSHb+olyKgUH9fxxth8WrQ67jAJx3K/FEEPkeMIZTeuQkCRBEJ0b/xPvFoMy+Qx2EVQ+YmpeXpyR5JGkc2MGFSOjGI3Sg9q4W+c3JcAcvYmYRL5zqjVNTyo7eIhkGIT1HDF0T3jT2dWEevzwfsedZn8wX32vxVSGzORHVPXmYFl5hOk9emFmLre/YY1Uf3ol00YVnShsIMx61Ezce6nGqJxOnXMfxJNQm+rNZkm1Rl+yHlLjhF+QwH0ZAshpuRzhMwsQQDkmcQgzjE3VL44TODMpZypZ91VyoRelfGeIhz4c40xpmWEtrDFljXhhfqxjXiqwF+aaxhC4EH5mR7zyE2jsFeBDi2Kp6qOW75/Mq2pYsjzQWno9lVuv0OUNi3riIMSLmY6A51nkQmmF8M9s6SWSTQBStQndoyMhxuROzM02yT+Azw2yDlBedBOx76OFkfqnTPQgzBIdEmA4lypFx4+J6bNleOBblU8eUxVKFHEe74H5YgyznIVy+lofAtF3MFstFhNTXH+bZyjvF2K7OFqhhPbIsC+OR2iepaex96r0vnnuV3+rz3l4qeECM9/n5flR99tL54f+oXvHFUvncxZ6jvOct9frWcYw6xjbre2JGvjsJeywbwN3HuGNVg9jViruj/RSQb7nXMr9ayX2HNeLTz1fu5WKvYysyr6dgOuxUSNELCv6iwr/Xfisd8fBTKy5cz0OSZQ9uh+yg5wjT7DhjY8qWKRuJiZj280jCrE6jcdCIdOJxuAX+GStGnlhE/PazaAfO38xBkm0XstsOo/auBoOPDDHkUm0QL0uXyYsgIPSqyGklQVQ5fRnlANGqgAtnSL5VD5XyTGEgSML+lg7rh64/loGDEi1byonv7RExi8PhRvElylTzdoyMm/qjJK4AACAASURBVDArvNz0rtMjxUasllVt0HJwk8+jQdEmp80irYhwkZDwKvQg+MSKzYaN2GXfhrZAFaIcTFk2hjkJL5r63J+slORiiqvuOaBSv0lKFKJBD1v1rSKUupJx7uZJaPpSkWjdJmJ3MyzxaBk8LcJTZgRRiwlvUsmsUh8c0DjQCjI+Xz+LyFHoEwdnLk5i7GAnHobbUT1UgSON+1DuTkf/RF1MPb/8OcJbIsgA9YDkiHVNXZA6wAFdHQh5rzwmOSfhkvrM/HIae4aeIWFI0DPLe0goO8U6C+OQBpmOBLgGzuGpQnoRQ745bc+Bh9PLUgdlKggFSYXIR+y1l7XfF895hFFCTzEHnclICy7ezEdBawpuT9aL7boW2sHL5SOvu0VbkHlQy8y8yLbD/uDVnkNy6cbj2RacvpaDo80H0ew9Lby0UyTXgkQtlY8fwzVFR+88bsCpa1nQdRzD47lm0edw+zNJBKU+LCcDGnIk35FIC2Zn7SDxFOtsRJuSHl3qfzikEk46L0jSSdakrIXDQ5B42d8CLYhEmmWIQIT1JEkq9ZAkmYSc/SJnFh48tiG3+aD4uMtkqPEV6+zty1y063AnwpxxErOeDA2wIyJAA6IX0egtINor8j83RxlQz9gf0GhhO2eborEvDYzpGRfCLBevCaONRnKLXMy5XN/xqteFB74L9YEKnGw6CNvdMjwOtYiYaWEo0Zv/qs9aue89Iiu2sy4EnphR4knBTvN61A8XCwKuD5SIb2rwGxFqOMi7lb7rYSck3/xgAck396FtCZajuj8PcY5tSO8+JMJO5GD39juAd7ORcJEByeDkXDOM3nJBvlsfXEaIK4BnZRy58Eot0yCFd1ZMb0nPoPQMkdRK4sPV1ow7Lr1+Ehd6s9AZOI+u4GWk1W5DsSsFfY/0gpAwNl0lEEunS8tZkIywC6EQCSkHBQ727CQ5oKiDAT3RHADkYLIgd3mvJBYcSJZ+16td5zNZNnbMJHW0dHvEYqHArA1XBwuQZt2Jh0+cIryA0+OM0yT5FgSPA5ogWnJgW8gbvSgdooyhEBeXsmy8p1fEnh9pP4gjnoPoHD+PqPCCMYSFe5ByCvudKNd78BnRTrFAa0Z4mGjscFqbMunGRKgJJW2puOg5Lr7odqRjn1jsuLdlNypvZuPugwYRtvOEHjthTCqLpnhMgiCeoyyiEeRVnl9aliRyncIjy86U6wiu3a9GinE3irvS0fOgBs9eiRRSF9RBlGsWqNskk6pOMGV+pE7IY1k/Ukc46Mt4xFnFEFQ93yKsIdqJJ9FW6AeKxALs68FqTIvFbDSmaTAqs1fsM5S4xlg9VN+9kDI/P0B7DjEkQHpmR2fsONObhcK2Ixh4bBSzZEvLWNVDlt0l5CJlJWXBY9kuXo18sp0ylML70IAiVypyO47g9qMGQaiesg7eq+Q72onbj/SoUMl3iAvuPGLv4Vcn39IAEroSbsXMrFPEx7L9kESGwr0Ik1wyBCnEcDrOrkgHBnANiPYhqpBTElL2X9RTOj1k38v6Uci3mNlh38bZCBnOyDCxycd2Qb6v3M7Ggzn7u9ZXsczRsAwJA/pEvsPRJkS4OC7swoOHLoyOtmNmpgvhUA8ePmzDzZsXEQgYMT3twfSUB8NBKwYH9Jh4QIfcDRHmRieWKB8NDOrPfNiFqrdvM6WMQy6cv5WP3PYUuALnZIy/MLzlTMartZ+3mY+f1DHlx1IuhXw/NUPTlSp23KoLFsE0rAXJN78I/VNBvvmBEhM/uc4PYARK0TpcgZqBfMQ3bUdKexyeCev8VQbi95Zy
q+R7Cm647l9GsnWv8EDPzrYKgjgryPPy5aJXTyx6IbEUZFZ62BgbOhlux8AzC4q6jmFt7Roctu1GvuMgNM5EpFzcANM9Dca5mwkX0ZAgPhdaIcnEW88tLUd6ZUIhEm1JTuTA++JAzGfQO87BgCSB5EYSdOkdp5dc8ea87cYnSYE6JcmYb8b508vjfWbGuVtZOO7YgxlOTYppSsaqk+wt5JnbPUkCTpmQTMg8UzYLhEP1fvZgKuJC+c0s8ZUw82Ap5vj/Im6QsaHS2/QT2SGLraAYb045yR0DuMqdXqtbj/RIM+1BdW8Wbj6pgd6rQUrjHqR5kpDRm4pizzHo72lxfbweD2fp6VKneqW8SMgY4kC9mV+8tYxuSK8eQ0Po3evGZKgJFzyZSNbvhC14RhidJMHL14VKsqm3qj7xnPytknDqsAxBUa9JAiR1O5Z8y5hv4W0WMzEePIw24+q9PGQ5E+B7aMaciGOmrsp2LWe21MVLKrmW+VKfr+ZjIV2sDb9wPuoRi/to4HgfG1HuOQ5Nx1EEp23KdP/SbV6VA/PB8BIpA9l+ZBgF28srkm9uIRZux7WxK8hrOYwz13PxUBgcXXjCPuG5MIhXyde7cI/oKzpFvHR5X6aYUXkSZmiDB9Ni8der5kGdCaTsZBsi8Z6b64QvYML4/VZMTUsSPjXlwfi4A5MPSaxvYHq6B8GgA16vBZOTrQhzTRPj78OcdWRfyn6LzyWx4GwLvd0kowr5Fms0esXC29yWg+LDPdx/evm28aplW+4+GgLsQ+mcoIEh8xsKuxEctqG+vhhVlScwPNyIyQftuHjxGDIyNqOs9AC6u66gt6caZ8+kISdnG65UH8XkJI2O65BrNVR9l46UqBiblsvPq11nOOLMVBO0rqPQ9pzAjck6MdvH2QTKm/r/7snw1fK8kp/l5CTJd/CZBaW9R7HNsAa1gULxcT2xBtHLD7L9lHi+Y8l3C8n3YAH2t3LLr+14GqJ1zsa1nEDfW9fFgCm8fJ24/qABRxv34UxPFmZIvumlU2Lhli8XvSX0XHA/TxdCYZfo8J+E3eiZqMH5niwctuzBUXcSGsaqUHOzAJfcJ2G4VgD/tAXT3FtVTMlxcFQ7qaXS5eWoerSF9z2kTtHz+apnRxIY1UMmp/KbEQ43IRxRQc/R8u9a/h75LvluueCS5JtkjJvon+o5huy2A+K3jIXklKWMTVWJluis51d1y3AaOdVP8qPKSiFCyuDhHD+PRPsu6DxpGJ2yiAElih7M0eP0jpTrPfgcxtcKssi8sQPjugaGSfSg0V+FJOMuOAbKMRVuwfgjE1y3y3CpNwtZ7hQktuzDftNOnGzcj/bgeTwT8pbPI4kmIZPkW1mLIGI8l5aBeDeNOhHb7IE/5MAx/S6cvpmLoSk7QmIh9qv0HQt1zHYrya1KkNTf0oBcMCjf+lyGjHB3AkEwOBUuYlTpkevCWMiBc7cykdtyQOy4woWeDN0Ri7XUkCnl3XJPYL5XGoI0dKWHkwO9ahjyuprvxVNZHm6VJhfK3ZmoQ2nHMVR0n8REWIZOvJK+qrGU87pN+dAYWcArPYd9YqQdrcGzyG09DP0trVgETVkwhEO04/l3LF3/r/S+d+JZgtS6cXOyHrreDFR0HsczxXnArduk5/tV8kriSe+1nKVjWBI93UNeA4pK9qK2LhcjI0149qwLXV3nUFmVCJtNg2h0EH29tSgoiMOJ45thNuVi4r5dCd9gv0v9YD9PJ4sLYS7e4wyC0jbUsBMayQ+fNiKv5RDo+X5Xybdo73K2QJSfRgK68PRZK5qayrHqjX/Cl/7uU+jtvQpP5yX8/Zc+gi2bvoqvfuUzSDm8CclJG/D9734e3//eX+Gb3/4EHE4NZud6BPkmARZjDfUxzN1a3hnHDvsV7kLz6LEN2fYD4iuTPsVgpfOGfSH1/13Tw3dCl1eeoYxdXRh+ZkX5tXRsrn9DfPiMX0zm7nv0fP9UkG9zsFSGnfCT0X4tmgM61A4WILEjTiy6fDxrVRZCvUrn9l66hyEQ7Ag7cGfShJPNh1DuOSG2R1TJt/QgLp1nObjRgyEHak5DTsGFW5N1KLIeQsK5dciy7EO77zyeoUdMZz9DB56I/S3pVZH7vUrP1OKD9MJAvnR+2Mlxf29ChmOw4+P/yN1fuL0V3xkKSY8AiTrJDL0zqodmwXO39LterVMjCeC7SHJU8s0dHLpwa7wamo5klPSkgsSYuz9I4s08qaDnkVsutWNuToaWSFLBcjL+l+WTRIfESyw0jHYjGHWi+Fo6jjvi0NZfLoijGOSEd/2dKNd78BmUr5jOpvEiyRcHKMaAn7+Wi/Tmg+gbviK8bgypoldQfAI46sZ4tAUNwxXYZd2GiutZ4CAmwk/4wQQuKhPkm15qEnLFK77MQMFt2YTnPcJ9bTsQmGvDwZotMI1UYSzsRJjx1AzrWOY5vM48iPoVsbbSu8u65zn5/9QXtU5i25F6jrrFbeQU8s0POXCnFIVk+J6ZUH7tGIrcyWKxNQm6CJMSbYflpR5Sz6T3W74r9j0vHvO9L557629JDtgvMEzNg76RK8Lrfe5GHp6InQtiy6WW5a2pmIEThE7WzcJvGmEehOZke1+Q0VufIa4xdCnSjvp+LbKaE2G/Wy535VAWist2ucj/zsv/Xb6u7GByY7IWup6TOOU5IfpglucHI999gnyLvlLsvtOHZ1MdqKw8gA//wS9h797v4/ZtI27f0SMu/lv46Md+FVnZ2zE7ew/79r2Ob3zjs/jSl17D3j1fR5OjENNTUtfYP83OOpR+kG1I7hojY77lblScEaTB9+ipA/mtie8++RYhXK1iO0IaBzQSONvF3ZOGhuxIPbIL3/rmF+Byn0ZPbzW02iT09Vmxe9frWLf2X/Dd7/wNkg9vQnv7FXz723+Oq9UZYpaA7Wt+wT+NVBG29M6E/nGseAoX7k+acMIcjyveUoxGmsSsKRcbi60Vf1w6ufLemP74B+0PpOebW7qeunECG2tfF19AZtjJTxX5tgyXic9ym320ODRw+kpR5y3Cke4D2K5fiwdTRoX8/KAC/vHeT1LC3UTCcy7ce2hEblsyyrpOiP1j/3/23sM5zjvLFvOf4bKrnt8r2+Wyy7Nvw9vd2dm8s5M00ozSSBpJI4mURDGIOecMgiDBAIAAQSKSAAEmMIDIOXcD3Wg0ciQyQOTcufu4zv19HwBKItAaYjlci6z6qsEOX/jFc+8991yCCyUBxc1y6ftUWq/cINWCSu8gvS6mnpu4lH0AKUymdBSBdBbhhzPxTPTECRq5EHMh4u+/vTF/93tL3w+lhcR7LV4znrNWPOtuF2WpeJ0G4eORm8gNeW62VC2IDHtSL91ZDpdLJQct9+z+fc57UEoL8pwi2WgVoFE7cAeRFUeQ3BEqnm96palIMU9vkHYhmKAMlg60tLYWryU9Gzp1RvWVgG+n0mlPbAzB2YqDqOi9IV5MxS9emYXfv2dfuq/+Q85BA4ZgWStIQYNj0luBq4ZTuFp3Du0TWXD7ajEHK2a44ZJ2MWOCb9aAUU85LlQeR4Q
pUKI2TL6UiA414WnoaEoPSgFEtfdSz0DjlfJmHPfUWm5zF+N49i5k9V7HmKtMk+/0r410L67yvD5rzuhjgyoLemLoggda8bX5f6oSaZ5voZfUoGXsEaKspxBfF6zmpMhx6kYGn1U/FABX/+e985qcx5phMr9e6N9f+lWMBx/pEUrGsLrntoDv1I4YURgRpZX5cz67rWRtEWqVam9lcGtATyJez/7t031oFcCa3HQJl02nYRm4qxk+NDqsymtJQ8SPe3ph36GH02dG43ga4q3ncaPmghiWBLf0jPrr+WYbcj8g8ORa6HIyslEvFIovv/w5AgI2oL4hDTZHHXJyI/HFlz9H/LXjePLEjC+/fB2XLx9DYsIFpNw5h8b6uyKnK0pS0lY0ftRYFEoL1zTODZlTlNYkXcyKqbkSRBpPaOC74AW2M+9NpxVRzpHymExWtcJma8DN5EvYuuVjNDTdgcvdiImJWoSFHsJbb/4TDuxfi0OH1iI0dB+qqh5g1Wev4ebNYNhspL5xrqo5wCRVt5Neb84Zf8fjs7/Hfp1CFYYmc3Eu7wDS+6+Lgha561RgUfkJnAPPPserz17GtlHge8hehKTWEHydugoPeyIk4VLAN2kn3T+AIju62gklBgm+S3pikdEbheDGk9iS+iWGpzIWeZ9exo787nsSzyAXbY8ZjyezEWEIQGzNOcy66WVVoIHqCMtNToIcqTzJCU/vEs+JGrSOpSEsYx9ut0ai11eutIUJtkXGiYNLl21itjzDcfSSLL1RL2z43/1M6l7pqSTnkCFEns8ixSYEePsa4fU0wOWsgcdFTmIDvN56uCXDXclCicb2PA1mqet8n88U9UAACsGe1yq63abuJISXHkLqkxhZ5JXnW5cy1M+vLdTi4WSyjwk0JETeUbzYpNLoIIkSXvWiFc2QcULteQRXH4VpPEVAPfn9SiNYP/f/v16F50gDxkuPH5U9CEosmHCWIqTwMBJbwtE1V4JZbw1m+T0CTHqf7QTZFonGJJnP4GLRQRT1JGGOmxY9w4yWsJgUASLzCZxMvFt+bojOsOaNt7sNME2m4nDGNhQOJGNKk4JTtI6l+4FgRVGzuJESGFjnlRaU0cvkN0Zz9HD20+CbwF3AO8PeAoDIW6eRx/HC/IMaWPpvI8J8HMltYZIQrBKD1biaT/7VjGSeSwCEeONV4Srd08y1g22mAIYf85lhcQ+rGFpE3q+8IxFRVaeR2X0dzDvxD3zzespr63Yp0EhFDZU4VwMqU6jPl25nWT80/nlM7TnEN4Tg8VSOMtAEwPNcjD4t3/fLrZsr+rkGvhvGHiGu5hySai4qlRgB398nx4OJlATfBMp0RlAfn7kxrdiz5wOcObMZTU2Z8PpaUWFMxPYd7yAu/jiczk68/vrf4O23/hEf/v7nCA/bg86ObFlrnZwr8wYax7FRqakwkiQJvMrzTfDNhMtpWykuG0/gXksoRh2ky/nRZyvyHQ180/khY4ca5uVi1MzNVSPhehA2bHgTLR234fM1w2ZvREpKON773b9g967V2L3rE1y4sB3WmnR89ukbePjwMhyOOjhdJnHm8LnVPORYXRkHiALfJgzP5ON8/kE86onHiKtkkUSvvu+8qDZ8dZ2VGa8KfA87inGzPQzrH3yKB93hCnwPUvb6BwK+mXCZ3xODgn5qK8aguCcWWf0xuNh6GpserMbAeKqElVam0V/c4JWytpLYZ0HnZBbCK04iuiYY0wTfWrUzfzwmSvdXJZWJdBkXb2pYu8oQV3QS16zn0Tyj1AK46NDb7ZLFvRZeaqqS/yeZ8Xx2PzbrZRdabjasWkW+qAIJQi3x1WJ0tBglxVdwI+EAcrIuYnaaG0MzWlvuoc6ahOGhfLjdNfAyIZJ83GWv5e93lK6sAt/0OtZIhVJDx3VcKjmAnIkb8yBIPEG6ZrHw9RRdRSXp1GBqshAjw9mYnSkRtQHVptXKW8UKngIolTrFnbYIHKvYi9TmCMzMFWNWPL0rs/CvXNv424bLf08805QL86qqqARvXrcZ4/ZiBOfsw83Wy+ibLYPbWw+HzyqFSPhder+dzgrM+ExIbb+KwPw9eNB8BVOkPXBs0uCRktBWoStJ0SwBmEvfk4BvgnSXGTaXEYbRFOy5vwHG8QeYFhWIpX+vtzEBLY1kAiLOlempYoyO5GhRG27mVokszcwUaQCc75GqokC3Pq9ocItqC2kzGvgmmCfdw9CViNCqo7jdFak0t128NzX2GMJmNEH9n/OV84r3YobTYYBtrkzuxW4rV/QOaRuCXz/mM8/lZNlwRQ8qbInDlcpA5PTeEBqWJCj7MQ9VwZoaTE4UYXQkDzabAS6nCeNjBXgymC0Gtt6eS76y4rDPhNCqU7jWFCqynT67KrktnloxPPzrtyWv48cz+f17HXyPpooDZR58k79O+pWf1yJlj+uMzuGnN51rIdAiQPv8he1oa8uBF60oNVzHpq1vIiEpEDbHY/z6jb/BkaNfY9Pmj7F39x9QkBsJl5NRRxqLeuIlPekckxxDvJYC5rI+i3JKHWbsZeL5Tmm+iBF73gquwcv1mZ5wSb475w77vFwKsMzOGcXD//WmN9DalQyDKRoP0y+graMI23d+gq/WvYU1617HxbDtMNc8wEe/fw3pabGw2+uVZK9WZ0LyITg3RMpwuftZ/nPuz9OwYGS2ABcKDuJhZwzoLRWPtxj9+r6z/Ln8Hmt+jqVX53ueNl8A37c6LmHd/U9wv/uSgO9sAd9xPwzPt9BN+mLBVx75FDofjEV8xwVsvfsZjGM3YRNJO7XhLQy6xRsPNyv+/3k6ZGV/y01YqpL5LOiazEWY4aTIn825aaFrYNqP++XCLh5HKZ5AD6IqdMHS9TG5R0V3tt2RLx5ILrKSuOSkmkgNfOT5USt1BT3NXDRF1kpL/KI3kFXgXK5aZGVdwv79H2Hdul9j164PkZMTi8nJeuzd83vs2vEOKsrj4HTS02OFewUlxfh8c0I90Ti2PgvYzrltMThXfBBNs9R6VuocbE85tCQqfcyQIvP4cRbCL+3AqZPrkJx4Go878uGw16Ox4SGuxx9DccF1uJ0sHGQV6TbL5ANEmI7h6MN1CCvYi6S2MCQ1heKxK08poMzTMzQJRCc3alb4+8/pMeGmSaOJBYxYEZSRBI4vJq4cyd6J7M54TDhUFIZjloajKl7E6IgBTp8J3TPZuFx5EuE1p9E0m6W8nmwXB4FDrVCq3KI2Q+BAT7Pi3KtIC9/jXNc3bxoCNVLVdMZRgfTeOOxP34T68VTYBOSopCi9j5/1ytC9x80xXIeuzmycO7sBpwPXIuH6CTzuKER9Qzri4o/g/PlNuHv3NGZmOAeUmg8NUYDKFzxInaEcI5+F64mSeHP6LMhsj0Wo8RhKehKU53pRYq4o78hawDVO0To4R7zeBjzuyMC1uEMIDdmKhIQjsFhuorUtFcm3jyM8fBfu3A7C1FQVXOJB5TW5sZCPzxC/pnbhqpS+mvQZca8xAleqz6Bq+J4C/KJFvtzax3PWY+hJGUIubML1awfR0vIA1eZbuHhuK4
4fXY3794IwN8O1QIFB1dbsK9Vnir6l1qIhZwnOV57A7ZbLUvLeQxqX5LRwDf/mOv+se2Mf8Hqk/1gxMlKEqqoElJXHwu1pwMCTQqRnXEBW9kV0dj2CW6pR6gaLMqZVdUs/ridOFDPaxtNxvfYCYmqCpY9pLDCiKJr3/qzlotVOz74C4Covph5eTz02rP8ZwkK3oLMzBx5fE4pKY7Bh42tIuBGA6dkmvPPO3yMtPRaXwo9g754P8fB+EFwORmAahTPP9hVwLzx+BfIFgMv+QWOQRXZqMeesEM93StOfAHxLpUP2G++VXG31OjVZjvi4/di+47do70pFVm4oPvv8n3H6zCasWv0rHDr8FXbs/AO+WvcbHD36Jd773Y9RZbwJl7NByTGKjCbPpWvs+9Gn/vQXE8FhlfXtQvERPGyPxpCtWOaNqhqq5tqz1pVX7z9r7v6J36fx57VgwlGGtMdX8OWjT3H/cSiyBqKR068SLpl0+aL0vfXrvHCd7/zeGNH3ptRgTm80cgfikP0kDnc6w7Hz9me4PxQjRWSeXpS5SHNh1w+G8V4u8E0PhO7R6pnMR2hlgIBvO8Pq38NbwgmswDqBDAetKqDAgjrReUcQbT2HRoIYTceXoN/p5rXNcNGjyGI+pJ2IN+35Bz2fy7tIB1mB7zrJ0H/4MAQhIbsRfG4XVn/xW1wMOYihJ3X44otf4/33foLcHIYKCZgo9ff89yKLG5/bU41JKcmrgC3BzJSrHA9bruBsyWGMOUqWNczm5spx/34wNn79FvbsXI2N699HZkYsenvKEBqyFz/76d8gIuwE3A6qFTQKf3LCV4HCrhgEZW3G3kdrEFR9CHsKNuNa8wUMzeRKmzPp0EEPlMMMzJGbXieeP3+9ZS/VAu4mPYQJc2aV6Ei1E3cVOqezsTdnuyhY2PSCIDJWOSfVoXuJ+ZrafBmnKw4hvfsa5sRQMsPntEipdWo9u+ZBiirc5GEoXQxwznfmDRjFQ65KndcKsJhyVOBmeziOFexE+0Sa5FRIpEgMz6XHGr/n9Sht4fv3zuH37/8jdu78A9av+x2uXzuP8Ihj2LjpQ2zZ/CE2bHgbDQ2pcLtJqyIdpQJudzG8XmoPV4qGPBMZ56NabjNsXjPutFyRvI9mJqQut/GTiiPRoVbk5kRh9We/wO5dnyE6+iRycmNxI/kUvvjqF9i163N89NG/o6ryDmZmaAzUCMVMJQnrHGA+G0G4RdRNrtdfRJT1HJom01WSp19gl2CmFdmZsfiHn/w/2LzpHWSkh+PqlSNY9clvsGv7Knzx+c/wuD0NTgfvQwclBN40ngg2KU+qcjM6ZnNwzngcqW1RUn3UQ1oM28TPRFvVfuTdGuFymTFns6CoOAYbN/0Ge/d9jPHxGlyNOYQDBz/Ftu1vIyn5OPoH8sRJoPYR0ssI1AjG/QFqKpm2cyITN+pCcNlyBg5J8lZ5PW6hGy09xvR7VonnBIo0ZGloWeG0V2PXjjdwNXIburoy4fE1osIQj+07f4vEpBOYszVi3brXEXxuG3bs+BDHjn6K8rJoeDxci+rFAFbn51xj6W3m/7ACJ9tbSdMy+kTON+lZTLgk+H7RtBPel9enVFkY2SHn3esh57saZSVRSEo6jKHRUrS1Z+D4iVXYved9nApcj/z8BNy/fxkH9n+BnTt+jwvn1mJkuEQkGKXwjhRz0xPvFwz0ZefZcvNQHDVW9M7l40LpMaS2EXyXSA0JUuTYpsqx5U/fv/rOSvTHipxDIpMWKfSV1xmN1emf4V5HCDL7o5QDuC8Oef0/APAtlS3p+e6JVg8+EIfcJ/G4130Ze+59gcSuMEy4WIZ28SLJRUYH3nx9GcE3SzCrcHLPVD7Cqk7hiuWsqJ0IH9yPia8vqMqLzmdWvGnyvifd5bhhPItzFcdQNnBTFTDgOSXkaBHQTVlDAeDUuF4h8M17EsPCTcUTevu46NXJBtjxOA9t7YXIzUvAmq/eRFx8EKan2hAefgCbNr6JkmLKQ6l7o0GwIhNJryqHJAAAIABJREFUwLcZ05rX0cnEPa8Jw7YiqSwaajgBpcm79OJns5UjK/s8UlLO4OGDcKxd8zquxZ9AWVkivvrq1/irv/w/ER56BG5nI+BtkWpr9PwOzuahsi8R+Z0xKBxIRJzpNDbd/wI1A7fgcBvE+0tKBZzkPnMzZJEf9uXS9/NSfs5iLaSc0MDzEmiSH29E4/gj7M/dgZqR+3CK15rPxmdcOBaD7/rBFERWnMSlipMo7L+NgsFbaJvMxZytDE7xKNNDSoCiojyKosTwvFVoP3ZbpaIvsViH5DbUiHZ4YnMoAkv3oXc2W5I3hRvth5En1/FWYXKyGKmpZxAZuQdxcSewZs1vcOjgV9i67WNs2vwRLpw/KNzT2tqHGvCpFRoKE4jFM8828ZpAtaF58O2qwrTXiKSmcFw1B6GXHGe/+p6e+HokJ53Cu2//BKcDtyM/Lxn5+TcQdGYL1n/9FvJyb+KXv/zvuHkzCOPjBNh1AiqlfLmsA2qOEfTSOB92lQpnOab2PNpnslUuw1Pr6nePSeZpTE5WI+DEZvzd3/4Ie3atwo3EQBw7sh47tnyB1PvX8Pprfw5jRTxmZwxC4aHDQHm9uTarviTgpNHZMJmG4IpjSGuPxqzLoOUQaHkVboKn776Pp94X774FLleNaF+Hhu7AX//1/4IPPvgndHdX4ONP/h1Hj63HqlW/wLFjn8NiuSXKGioJnXkc5M77Ow818D2ZiRv1oYiwBMEuxqAOvnXHyHL3zbZQkRw9MkJDhQbLzaQjyM+7hOHhQnh99Whte4jriQdQXhEPt6cVyckBAka3bXsbN5OPY3CAfO1GiRYxH4BjWM03trUyWkXFSSqv0rBR4NvmMiCC4Lv5xYJvJup7WVRHk0SkE8nlrARzCGhAMJo0NFQABxNDvbXo7cuDwXgNbe2ZmJ2rx9hYNczmFBQXxaOnm9WauQYzakBDT+VgqPHG52c7L9cX/n3OSGXvbD7OlxzFo7ZoDNtLxYhhNIt5LGqc+3eulbqnV+d5zvYm+PbWCPjO74rB6gyC74vIHIhCHmnQvXHI6/sBgO+s7qtgsiU94Ey4JPc770k8HvZewcG09YioP4UxB8vg/ucC31wICZrphe2dzkdEdRAiTKdhZ0KRLAw6MFl6IPG73MgZxld634ojO+urREp9OIIqjyO/JxEeFxd1taFQ51TfZJyU8BI+K708S1/Ln89lA5WNrwpOVmjzKUDmctfC5WpBZ2cJIiMPSfJMQ0M2fL4+JN0IwpbNb6K4KErC+6zy6XKtkM63Br4pNUdZO0YWCIC6JrORaL2IKNMZlRy1zLPTIJKkP1ed0A127fwAcbGHkZBwFOvW/hKrPvslLl08AteMFXA3weOgx71Gkl9JM6DSyYzbhMaBB/j4zkdIbb2MMdGEZT+TfsDyyFVCS6Dx5E9bv3TfYRTFy3ZWScCUCuP4qhi8hcP5u9E8niEl1heDbv3vxeB7zlmO3JYYnMjahd0Zm7EufR3CTAEYtZeAm5pU7/OwEEetGGt2OylDDfChC
Q4n+df1cLlYYKNBCvwQlI/byxFtPYtzxiN4YstX0SXhmy/f1h4vvdelknQ5ZzPCZqsTj9uaNW8gMHArNm/+EO+//+/Yu3sNzgXvwtRULby+ZvG6SqKmeKqpw18pBbGcnK86sHNXYdRThvj6i6J2pBwJy81DAhLyvU24fes03n377/HF6jdx+tQuXIkMwOnT23E6aDueDFrx9tt/i6SkkxgdpYIQS3QTENB44b2QemIWI4nryICjENHmM6D3u9dRoIHv5e6lGna7AXm50di4/gOs+fxdnA3ageSkYASe3I7NG1Yh6vI5/OoXf4GSoquYm9U9yjTSuSYpOo4ay8rTXDV0F8FlR5D1OE7mDeXiOKbk3plUrrfdEnNWAJevQcZCYUEcdu38GB+8/68SrejtqcZXa9/B8eNbseqzNxBw4mvU1j6Ay6USadWY1MHq8s/PvYdrcNdkFm7UhyC8OgjzER4m4pJ+ssS96vNYtYcqUOYWI4O8fu5rjRgbLYaDick0zr1mzM5VYGy8GC43+dyNsNutsFpvo7U1DVNTLLzD/J469Xuh3ugeX7YjqS3sewXKSYUixYaF2Ug7oc73i/Z8C8ebMn1SBZngWHG/mdzucXOs1EtU0ENao+ybVjicZrilmFCDJO97vKyMyb+bJJHf5STth79VIHjBAFl+zut9stwrwXfPTB7OFR/RwHeZBr61ZGU/jPvlrvHqc3/m4Ap+RwPfM04DCnvi8HnGKtxtu6A83z8k8J3ZEwVqfRdQ7aSHVkcM8gbj8ag/CifztuF0xT4Mz7ES138+8K0SsMzon8nH1dpzuFQZAJuTSSZcMLhALL9IkKLCsKbwx1kVTKSUasS7ltpyFWcqjyOvN0kST1iAR0LhLiO8bma7s1qaoqA83X5//EAmgHRT1nBRQpyL0lm+RoyOVSL5ZiB27Pwdbt0+A4+3A/B1ITnpNPbueR8lxVfhsFOLlc+9uD//+PsRJRI+o6hvmOCQ6ptmNAw/FKBxwxqiDJdlNkf2CUsVd3Zm46s1r+Fy+EEkJgTi0IFPsGH9b7Fj26c4dmgbpiYYRWiBj4lO4nFRhYXEC+IldaEUX+Wsx0XDUbSNp8FNXiqBkBSjYQjYX4WJ52iTZZ71j17sJUxOrrdZSWV6TRLGftQRg5MF+9A5kfMNw1If4wuvAgxhQe9sATLaoxFYsAfv3/oAm9K/wrCjTIrvMBxvmzPhyVAJmpoeoqbmDqanLXA4mvD4cR6am9PR0ZGH8YlKuMmNRg1G54oRUnEU4dUnMWZnARItcdEPIKc8ZgRFBD1mOOzN2Pj1u1iz5i0End6NDes/xHvv/Qxr176LX7/2V6ipeQC3uwUOOyldNKpIxaKqiEmSR6luIqW0Ob99ZvQ7CqUKbWLtRcUVXrZ/2F4mKZZSXn4dx46sRfCZfVi/9kP84ePXsWPHKoSEHUJXVwXefPN/4O7dIAyPFMHtIugmn5GGIRPctBwRzYB/PJMjZeVvNl7CkEdFExVgWXqsTUwUYvOmN7H564+wcd3H2Lb5Q9xIOIOYq0H44J3X8dtf/xv+6s//V5SVRMNu47lU0rGim+jVbPlM/KwGBT2JOF92FAXdCbAJ0LIoGhNBrlYfYPkx2gj4WtDXV4JLYXuxeeP7OHtmD7Zu+QRdnWbsP7AOP/23/4G/+9v/G2eDdqKnuxQ+b5NQPXjuBaC29LPr98H1oXsyG0n1oQivPo05t4r6sc/9Bd/sU5Uzw/oBaq12OXVHCbnytTJ/CEA9dCr4uH5XYXauTIA2K2F6JEGzQegqyuPN+9fyD8TjSxCuAXEB31xzuNbWKvDtqpinnYzYX6TaCftfRUHm70/ui/euJzzzPimVa4DdRd3yatgpTUuaI6NJ0iZW8XaTsqLWX9K/aKzqxpS+1vjXr3r/PutV93wTfKe2RmHYrsA3qXeqhsAK7WPLrgkr8zzPes4f1PsStajBjMuA4v5rAr7vtJwTznf+Dwp89yuet4Dvrmjkd0VJ0mXGYAzOVuzDofwtIL/5abCmNqcF6gkntb64vzyDVKeXDMwVILYxBBfLj2PWrrzF/oJv8ZoK3YSeb054LjJ1UlQksykKx7N24nrVWfRP5UtCjdr41ObvtJfLQsWFnOB9pSYYNU6ddm7eVKcgEGc4uQ6pacH4Ys2/4dDhjzEyYoDD0QigG1cv78a2zb+GwRAjgEJJba1gfwmlhlrGiovM+6nsu43wygA8aI78xtj57vFBb+vEhBH79n6MUye2ormhFMk3QvCHj36K137xl/jzH/1X/PSff4yW7kzMelihlIV8KFtIgMOESr7WwemsQYD1GPZkbkLNk4dwuitht5fC5qwQ1QvpHz8Lv6xUf63YeTj+RPmBeuncdMyYcZUjrvYCzpUdRe9k7iKjUt8Ev/3qQDVs5J+iGi1TqQis2IddOV8LSGUlRvJAe7rzpMDIG7/5M3zy6T/j2vUTKK9Ixqef/hTvvfcT7N79EYqLY+F0qQTQ/qkcnCrYg/im85hwkH/NfuY9+rM5cgOvw+R4BYoKYzHYbxYAt3P7l/h81e+wY/tXuHMnCpWVj7B69S9wMzkQM7Pc8BvgZjEdXf+a3kxpnxp4nSoCQ3Wd9tkchJtPS4Vb/+Yh5yqNASuam1MxNFiNqYl2hIUew6d/eB3btn2CyCvHMTRUg9ff+BFSU89haoqATkUNZM3Q+oq1BljwiEf9yENEGgLwsC1KciRE5tGPdWF8vABfrfkFPvvDa/jXf/oRfvw3/wUH96/C3dsR4vU+E3gAv3vnH1FpiId9jkY/Pa18BoJABbgIxJVUYw0etF5FmOEEKvpvws7v0YBhNEwiRP6uU7XwuGtRXBSL9evewGu//Av8/oN/x1/95X9FRPgJ/P6jnyPychC2bV2FwFNbUGt9ILkaIiXpM8HlLIJXDBB/rqcUTbqnFPi+ZA7ErItqT5z3dIwoOo0/80yMTzEIaaAoI0UMNx8VoDiW1P3w1eEok0JmXGfVuekFN80X/yK3f2GMK7qFSkLV1lYBpQvg2wmLRAap832n4TyG53JXbE9Y7tn53Lw3tWdznDIqwvGhz1HeM5+dEVEaNuSus435G5V4TZojo61iVMrv6P1nm/HZ1XxX1+F5v3ud/77vE3z3zRWI5/tB81UMzSnOt4BvF+95ZSLK3/e+Xn3/OfpXFHFqMOs2oHQwAZ9nrsat5mAFvvtjkU/O9w+BdpI5ECNJlgW9sSjqikZRZ7TQTzIGYxFmOYYdmWsxMM1FYvEmqk9UTlYeLyP45oKtFu1BWyGut4bjXMlhTM0WaSFpbYFcbpFg2JjeZuFIk2emdHvJHe6ezUN6WxQuFxxBUnkQWiYzpdqgKiJTK94vZtQTWK7cIqE2eJeDi6TaFEhFmJgsRuSVrXj73T/Dm2//v9ix8w1kZ4fB52vDreTjOHr4A5SXXpFQulo4Vm6BVJug0jFmuJ+AuKgrESGGE8h7fM2vMPb0dAkiwnfjL/77/4xD+zcg9X4sykpSUFF2C1cjD+O1X/41tmz8FP3jRbCRpylFS+hx15VL2BZ18HjrkDZ0HRtT
PkdGTzymJMlIRS6kX8Tq9rPvlxsbL/pzggaRGqSSCakg1ZhwlCDYcAxXq89gcCZ/3gulQAbnJp/16cNB6T7x8lkw6izGna5QrLvzIdqn0wSMMQSdnRWB/fv/gMOHP0dw8Fa8//7fIyxsLz7++N8QHx8ole4Gn5RLe7t9VnROZuJY3k7c7bkiybZiEGkb8nIblcjoeeoxNFiMc8Fb8NZvf4wD+9fg009ex8EDG7B166fYtOlDHDnyJV5//UcwmZLhdtdrCWNq01eAl+FvGroWgAojkvBWA+vIQ5HWu9d4WdPbX27jYLtRyq8St2+fwIb1r+Po0bX46qs3sXHj77D/wCr84dN/RWjYTvz7z/535OVHYHaW4FsZAlwr6fmWgiNUpiEdC2YYu5MRWRGA7MdMdGVkgBrQvNbS90PA192TgzprJvbu/gzr1v4KZ86sQ8DJ9diw9n0cO7wRH334D3jcng63i8miumdZB9/0+Ko1mnMgofYirpqCUDN8X1RzKBXpdZmkOq8/lJOF+23A0FAJCvKjEXJxBz7//DX8wz/8HwgJ2Ydf/OovUFp6H4GB27B//ycoLLwqNAaORZX0yDHJ+/NnLqo1pWcqB8kNYSD4nnHS+aCMUUazxAhfph153wSKetVc/k2DhNrmCoyqdlKAUlHzJJLpoSGmedrl/gk21b1zvClQyz7UFWDY1qq8PNuTAJJeYq5ZzMmIMp3CrbpgDE5nLdv3C2299BhZ7ntcD1gRWd0r21yBcf6fB4G1280CbKVSrZMVlMkPJ1fc5SqDy10qErc8j9rzlaGi2k7tJcoDznMvxgrPed+owYC9COdKjiCl8TIGZ4tk7XNyT3Xz3K/A93J9/9J9vgh8lz+5IeD7ZtNZZA8q2jM537l9P4AiO+kDMVpJzxiUdKmDet+ZgzG43BCIrx+uRp8kKS2eUGryqknIyfjygm8uBEP2YtzoiERQwQFMTBUIb9bfxZpWvizG4l0hH7AKXodBPC5OrxmD9mKkt0XjXMFBxJUHomk8HR56ZSUJh2Bgcbs930Ikk4j3oXn6+H9uAk63EQ5XFXr6slBVnYiCokjkF4ShtzsLXncD+ntz0dZ8HxPjJRpHkeFxejNW4H5k42NEoFoKiZB7SQpKZkcsLlSehHHwjl/ge3a2DEk3TuCzT36KwwfX4srlo6g238XMTDVaWh4hNvYgbiafkedUajVV8MIAD8G1bOJs5zp4fXVonU3H17dX4UzpQTSNP1JSkTrolrLjK9wnK9SOy/eHAt9uTzUUr7kaQ7ZCHCzcg+SWCIzai5cB36q/mSRJgM25MeotwK3hCKxKfgfNw3ckFOj01mHgSTnq6zORkXEV+/Z9Ih7na9cC8c47P8Hevatw714EuroKxPvp8NSgaTQVh3N3IGs0UXklNT19zp1ln0sW41rY5qphZHGT7e9g/YY3cOXKAVhrH+FOyjlQg3j9hl/jUsQ2zM6a4HBSb195G7kOUZublCxej9rPQkcg0EINynqTcbEqAOntMfL+cvfDOcUETnq+G5vuIeDkl1iz5lc4ePBTZGdfQUFRNDZvewNfrvk59h34QJQ8PB7yvdmuar1QXm2ljMR8iFmYkd8ah6vGUyjuTYZDeOr+RcS4Vrk8JuHC5+fHIDsnFHX1N/Hw4Xls2vgOdu38Pe7dD4LNxvWG/aqMcgJu4R4LKOQaTYlOEyIrA3HNegEtE5kqH8Wl6Dsz3wM0SaEvb6Xw7mdmzLBa7+PCxa3Ytu1d1NY+woZNv8WHH/0T3n33bxAathmPO9OVTKarHG63otx4PaTELG986GOodypXwHeY6RSmHX8c+Oa59POpvtKpKEqC0O0iCOU4okoN+e8LQNvlVJxpPZrAccT7VzKcbHv1XXmVJFuVaCv94WW0jnKD1YipDkKy9Qz6qHjzwtYO3qvaLwi0lbGhuOkL963ylqjaI1r3YiDSoGAbqT2DSlvKaOJaotNN1Hoq3xGDW42/lXg29sOQqxTnS4/hTn04+qbzpRiYJFxKRedX4Hsl2vmFnkMD33MeAyqGk/FF1mokN55B1mC05BwK+O79AYDvtMFoZPYrb3dZdxx4FPUqre+olrNYe+9T9E4yu3kxYHn5wbdaZLlAWDDkKEFy51UE5u3D2ESe2rT9XfS4mEhCj/JgiUY0J71TFShx+6rRbivBrbY4nC84iLumCxh3VcDjtYD6uSqhh23nBwj5HvekJJbM4s0g0Ofm7PJY4HBZMTNrwtRUBZzkxLqoCNEIt4NcPVbdpBeGXhnlrXjeSUdgIMltTjNsonNeLTralDG7YAqAdTzVr8QyJoH29GSjujoFVksqmprSMTbOgjEmTM8WoKcvFYOUKyPth8miTGgi8GaVNtnE1YbJULLNU4YTBbuxN3sLyvqSRN+abUTKELXXKVPlt/Hlb5+8gO9JJIfAW6ThmMhnxsBsPnbkbsPD7jhMOMlNVRui2ggJbDju9EMztpg05akHEy+Nw4nYVvE1Pkt5D3Vd12GTgjD1sHlaMDXdiNS0SLz97o+x78Aq3Lx9Ae998E/4euPvsHHz+7iXcg6zc5WYdhlR3X8bB7K3omj6DuZcNE7pldQUNJZtGzU3yN+enCpFXcMtGE3X0N2biVl7JQaGsmG2JMJYmYyOzkxJ/pxzlInXmMmaKrlN01b2UCqR9BOq+vDvauS0xwv4zum+7lflTt0ryMqRLKFNacMKQxIaGh5hbNyIsckyVFuTUFJ6HU3Nj0QVxe5gdIx66vQwKqBLbyfHmQ0mzMCMjPoriKo8DcPAXbio1EC6h5/rAj2QbnctxseNGJ8ohs1uxNBQMczmu7BYUjA8UgCftxFuFwE/1xu+EjgpoKUSrK1weIy4WHYMLC/fNZuvim1JYp0Fk+wnf4wlMbjZ7lznmE+h1pyWtnTU1N6VcuXllYlIuHEM9x+cRXPLA8yyKJBHL3BjhMddoRUz0sbkEmNEB8u90xr4riL4JrWJnH4a+/55vhf2BAUc+Xu2EcHkgpFilgREyYsRRwcBtCqYo9YZAlY+uwKhNHDoIWZ7K0CrnkcK7GhFdgR8e1S0jhHTWMsZ3LCcRs/4oxcGvvnssgaKwaBTTqj+xOdQz0RQziR21lGgcg/XUtKRhNtOAK7lVtArrs8RFVFRBqe07zx/fPl+9WfvYRRn2FOGi2XHcbv2EnqmcsXx4OI+JpU6X4Fvf9rx5fqOEq8g+DaO3hTwndQQ9LTn+4cAvtP7Y5AtaiexKOiLU5Uu+2KR0xeNxMch+PL+J6gd0sKTkkxEDhv5gbq3m0COQGgFweUSC7Hfg4ibCEOCsGDYUSJerwNZ29E/wwIsGvDULDDdsGDBjoWFlcBFLbxclFzucpGtoydNEk3cavOkN4FayQ0zebjREIagvP3I7IzHJMPOzCInL02jnTCBRxZAXw28LmbD65udDo6+Rxtqm6S+4EvSDL1prMwniYhKB1mB/3p1z7qurvz2e1xrqf5gG7vNsLP4i/BvqzHtrsCtxnDh2XZryhf+9JtSiGmQTHp6Hfl/r68Kbq/a5KWfhPajPFISeaF3z8v/awl
u7B93FdIex2BHzhZkdcRLIieTUqn3TYoDC9T4B77Z1/pGrY0HTWKT4FY9E4EON29uyJwTDPtTx5i/42fc3PTvPt+GRMPD4zZLWXJ6vunFHJjLx7r7a1DZe1Nk48RLJYWgdKlN9jOfuQqTzjJMeQ0Y9VnQOluE3LZYXC4+hJNFu3C7/RL6JjLgpBGHWgyP0RiqRFn5XRw48IUA7rT0GMTFn8bdlAisW/8OLoXuwNhYMcacpSjojsORjM1ons6Ck6CSyWqinOCPkafGPzdvRVuiuoUykCTRS6hFBJN1WtVTK1wezlV+XwcS6reUYqRRIkl4WiXQew2XEWkMRFX/bT/17ZUBQ7oZy7Z7PCy4QzBCxRcqrZAuUqMl36my704XQZgKxSuQww3GIklh7Cvqp0fVBCPOEoyWkTStSBD70J/24bghTYDeVfKUOa7U38rT3aA9l1pf2C4cm8oA49hUPGVSTsbcRQguOYD7LZHotxWDIIZjyutizob/xoDyfjLRmwfbi+3Bg17PBpGsm5gsw+ycYR7Muj3k4TMaoQCdgGaujXKo9XphTqmiXBJJk6TQagxM5+FuQzguVQWKoakKrZD2ofqehoyKiikwLo4B7T25pqwR+rqnxpyam5yf+qHRdrjOcO7Lq0k0u8VhId5tfcwpJ4ZafxiR1D3dHP9sF32N55hRdCg+c7wlCNesgeiYSNX6cvG6oN8XX3lPWrvQEJCoIsGmApzqeTlPOB71ZyfN5duHyJJKFVsaK4quxwilB+wPFoEywk1DiqpQ3Oe5j4hBoahUEm3lWs/CZZx7mtdbEpvp2GDejXjWVR0JiT7xPRfbqEYMJN3QVP2y+Jmf/Tf310mPAWGGACTVhqJrIlsKrInOtwDwZ/9WjaVXn79s7aDGa40oz1UP3cLnmatwo+UsMgeiUSSV1uOQ90MA35nU+O6LlyOb1S0HYiXhMr8nCnd7I/DFo09R0HUd0+QYs1S6xywg1DHP1+PCS4ChL2ovyWDXwDcTnUbsJShuu4btGZvx2J6rPGPi5WF4TUkuiRcEBFCUDDOIFB85gQS3Ljel0LjZW5QuqoBpVQpb+JXeasx4KlE99ggXKwNwMmcvCtsTMDBdCLuzSrST7aiBw1sNp7tKtL+9zBbn4iQLNsGgOl62ibLs/XDBdZoxxY3bpZKfhGNvVZXoxj3lmvzdCxwXLrMUnjlatA+JtaEYnilWuusECU4mhSoO6bLPxvC9SO9RP1p5+uZBBzdl+Zzjh7xJciTpObLA566Dx0XNZ27Gyku0/LX8aB+nScaOQxR4zCK31jmZhS9vrUb3cDrsoqxDUKYABKtfun01GPdVwzT2EI9ao5HRFotHHdeQUHcJIYWHEZ57CGXdSZiECTbZiNVmW1GRiOvXzuBG4jmcPbMDH7z/L0hJicDdu+F4+PAqNm18DxGXdmF8tBjDjmKkdl7G6cztGJ4pgddTB5fPCofPAI+sDX4821IGnp+fEdi4NC+oXUC5BTZUI8kaguuVZ9A6lOqX8s6K9JWsL9UAvcqwYMpVhkDzCSQ1huHJNLn5Sj7Pf/DtTxvS+NNBpAbc5ttOGTPdzkycLd6LjNarGLaVQfj/TFZ2qCgFq6L6Z5gufz9Cc6Chy7nCOUepTwGAvJcaoQe5vdXg4WKpePKSaShIH+rGF0GrcqIMTuXhXl04wowBsqYzQVTGupbQ7hZQSUCppAcJ3HUaHN8jYFT71BKvsm8QSH6PQ56PIFP7jdw/1UEqAVKhPOpZFQg1IVHAdwBaJx7OG1ILY25x/3GPqFKqHi7lXbbJcxB8W1T7CeAl+FbAWIFi7slPHwTnbNelDrbV08B9wYjhmNBBM41Jleyu9k5iApVnoIxmAuNpjkN6ph3qVe9bMRql+u7y44dtwnuadRsRXnkKCbUh6BzPFl16Bb6VcbvQdv6d89X3/7TtxLFDQ93pMKJu8BZWZ6xCYlsw0vuvorg/DsW9xKM/ANoJwXd+fzzy++KR1RcjFBTqfhf1x+LBwBV8lb4aaa1XMTZXrEL+zOgHCwZoi4TwLl9ezzfB96ijFGWPE7AjcwvaZrMXwLdY9qySR+8QPVBUBGAIjrJqLEBQI2WRqajASmj0ZjrsRiVtRu+izoWj58htwYSrCiX9t3Eh/xAuZOzDfUsYqnruom44A/XjGeicycWE24AZJ8OUdaJLrMA3w3882KZ/2onxva/PTdXBsDol3+jJM6N9IgOx1cG4UReKGZ/yknzv8z5PO3jNmLKX4LpVqYDQeJzhJsaQ6owBdvHq+NHOEr5XpdldblW5Q96tAAAgAElEQVSgQsLMmtdV6fzWynMzNCshWob9Pc3w+drh4hhy05hboX51E6hoRXY8lZixl8LafxcbH6wTFYARlwGPp/LxeDwHgxP5GJsuRvdUATK6k3C66iROFh9EaPFRRBYdQ1TRcdyqOo/K3lsY8xnhRA3s9NZxTnhMyM0Jx65dH2L16p9h48Y3ceTI50hNDcfGjW9h3bo3sGHDb3E/5ayUNH9iK0RKayguFO7DuL0CXnet6IU7hY+/crkFy40hbtSc7wQe5FkTLMz4qpBgOY9kywV0T2Sr5LfnGVvf57f0nDJSAdYayMXpyuN42BEtZZXp/VEg049x6Pc1lwffLdMPcKZ4L3I7YjHmqBDVIEleI7hjtG4FnShcH6msJBS5eS8yQX6NJMPyWuRBU32HKjtSjVYqrNJrrIFZ8ahSArAGg5O5Ar4vGU9heK5Ifsv3uV4z2rHg0aUTQIuiaB5jFSngexp/WQPjCpAv6oNv0LZU9ED7jfbbeW+4Dra1VxoZknvARE5fJZw+o3iJWedhhtr8fE6vEYnVp5FYcwod436Ab/LOnSZ4XdWY8xjRacvFwEyeqGt1TOWiYzwLnWOZ6JjIlr/bxzLQPpq26EgH32sbTUfL8KMlj9aRNPkev8ujfUz9tmM8A/rRPp6JFl5rIgedE7nonswBE2F7p/PQM5Mve1zHdA46bYVw0QlAZSlftVD/uDeofuA89c9Zxzk97arAJUMAEupC0TWZIwnVYmRzTIls7qL+83uuvPrNcuvpf9TnBN9uDXzXP7kt4Duh9SzS+q6iqC8OxX3Eoz+EIjvU9+6/Ng++M/qipehOCbW+n0RjfebnuN90GcOzRRr4pgWswLck7Ync0ssNvsedZTD23BTwzbLOrA7IgaW8A3xVGetcaGdnqW/agLk5K54MlGF4qBxzs9VwO2vgcjHcy/AzK7vR08lNwgAvowJ2gnErZj3VqJvKQKLpAs5k7MGZzD0ILzqGqxWncLsuBNaJVExLuI/lfcu1UKUC//95wXcV5qQ9VbEO8+BdXKkMxP2mSNBL+x81iZ91XulXVxXMQ/cRVHJY1BHabXmAuwbeuSpMMOztxyLNccGNg2oASqFBgWmnk15wCyYnSvBksAhjYxWwzVVibsaAsZFyDA0ZMTZKvWoLbHaWPveXWrDMhsCQsE9tXAwjz9jLUNgah11ZW9BvK0Pp4B1EVJ/BxYoTSKoJQXpzFBKrz2N/ytc4nrkTWV3X0TCcitbRNLSNpaN/tk
DUeegdd4KJuwynk2pjwfhYGdLTQ3HixCpER+9FZ2cOpqaqERCwGrt3v4ObN0+iuytHyrz3zuQiqf4sIg3HMcViUwK+6fHluqDm2rP6aiXfZ5+K7JxQZ9S8ZoGdGNMZ3G24hCFb0XzIfiWv+8xzEfgx2gUTrEP3EFRxFPl9N5S2tnj/ODdWkrO6NPjm2KkauoGg4r0o6UnEtKsSLirfCD1GaXzTQ/zM5/Fjzjz9WyozafrQQtdTxojPzQJhBrBCLY0Q5VVVnmnR42chKW+V1AywOytkTE04K2TMJtWH4YzhOJrH0gTsPZkuwJOpfAxO52lHLgamctA/uXDw/4PTueidzkHXdA66Fx8zOeiZzpVD3p/KRvdk1pJHz1Q2eqb5O4JOdXRPZaFrIhOPx9LRMZqGpvGHqJu4j8aJVLRPZME0mQ7rRBqs4w8QbDiIqIZTaJtNW9bzzT2BFCy7z4Jme64kVt+qCUGi5bxo118xnUak4RRCqwIQWqmOkMqT4KH/n68XDSdEMYSqIc86qP1+wXBcDn7/ovHEwnl4/qoAhFWewqXKQESaghBVdQaxprO4Vn0eCdYQAccxtRcQZQnG/dYrGHGVaI4tRqSU510Skb2qdsbTY+W71z7O6UlHGULLTyCxLhTdU3kSSaJBy/EsErPfe1x+97X8uZ9X33n+thPw7VMVVptGUgR8X285g7S+K5JvKOD7h1BePqsvFoX911HYfw3ZfTHI6I0Chc5LBxbA993WyxixlamBTg+fhL0r4WXiG4uqyCa7kov283ewCgGqMNuk2wDLkxQB37UjDyTpiOFHerkVnURxIplgQr6ix9OCsuIEhF3YhchL+2AsT4Jthp+1akoC3FBYrIfggsDKAp+zGt6ZcrBimBt1GEE16uy5KOhLQnrDVVwqPIz9jzYg0XoOE7TmUQGboxRez0touHyfxYwAw6OqW3IjpeexoDMBEYYAZLXHCvB40QuWyF/5ajDkNSGq9iLOmwJQO/EIcFXB567BrBZCXf6+SOFg+FaBb+H+uwi862C3m5H6IBCnjn+B6Cv70Fh/T47E+GMIDtqMyPC9qK+7AxfL2j+VrPzHj211LyqsTq/p+FwZUqyhOJK/By32PISXn8Sh/D04XnEEx0oO4kj2boQWHkFecyxGbMXibfDxfhgm1qoGkgbgYNVX8UrVaomSnMuqmp3LbQUPp4sgkZxncvLrpPIl5f48aMDjyUxEm04griYI06JyUie8dM/8/Pjjn3n5Plo4t4TFOR5J7RJOtBk9s7mIqAjAw5YrmGAU4vuM7ZX4rmYs5XbGI7jkMIz9d0QqTUUP2c406Bee4fn+Xg58m5H7+ArOlO6D+ckdUK3JDavwvkmX8rA67YqCb/KDK0WNiVUjmQRLLy5IuYMVDkqGesxweqpkTaZ296ijGD22fLTOZaNuOh3Vk6l49OQGkrujEdF4AV/mb8a/3HwPx0oOIKDsMM4UHkJQ0SGcLjyAwIL988ep/H345nGoZB/2lu/D3rJ92FO6F7tL9mBXyW7sKNqFHYU7sb1wB3bkb1/22FW4E7uLdmF34S7w750FO7A9bxu25W7F1uwt2Jy1CdszvsautA3Ynfo19jzahHWZX2N72mYcK96H9x9+jP01e1E9t7znm958RgdGnOW49zgaW+6vxemMXbhYcAgXyo8joioQseaziLeck0jfjboQJDWEIbnxEljMiX8zAnm7KQLpXXFLHqkd0bjVFIGbTeHyexo6iXUhSLBewDXLecRXByPGfBaR5jO4wldjoDoMgYioOIXQ8pMIKjyIA+nbcDR7J5qmMiTPhmCLYFnGFuen8NP9G/ME3xP2UoSUn8CN+jB0T+crqUGNWiRjasXmj3/39Hxz9NU1uHfR8KaqUOv4A+F8X2sOEvBNpT2C74L+eOT2xbzQI68/FvqhX5v/z2cRysFrSKg4h//rz/4L+O9/Si4+h4KBaOQORgtnu6jnGopZobL3KrL7rvl141k9MSjsuyYHwXdmX5RUuSzui0XaQBQ2ZHyO5GZ6jcqUriYBjHh8GcY3At5KuJktvcKL9nMPcFnsmXxVjWmvEfVjqdiVvQ3mgTtS6ECdn+FEevt4mOFiEiQa0fU4G+vW/AofvfcveOc3f4ftm9+DyXALXg8TFzl5NIqKhyWxy+EREE+QRTWROgnb0+NrF48F+Yw1qBy9h8DqQwipPIxmTxbmpJKYvvkqDt/KbsQvcJJrnEvePxNSU9uiEW4MQFlXooSXhdv+QhfIGoyjEkUDN3E6f59Iq7XPZksI3Mvy2Vr0YrkxpvSA2Uc8GJ0gt7tKgHdpyRX86hf/DV9+/iu889Zf4+zZ9bh7Jwjnz23D5o3v4R9+8r8hMfEwpqaLV2xuSBIiAZOnUja1oelCRJUFiIeq0ZaNgOzduF0bhq7ZYox6LRjyVGLMWwmnlwDLBJuDiatUCWAyIMeHKq+90CZ1cNkNUqJduO5MonVSB5ltpugo8kqqkSTvNsGNejSPpeJS2UEkNV4UL6WXxW8E3L9Yz7f051Pg24Sm4YcILT2GtPZoTEtp7RWiAH2P8UxDh0AorOQoagfuyRzhPGGfCIXhe5xr6TG7NPgmFeNu/XmcMx5Cw9hD4e0q8M3kcdLvuJavYPtoSYAOGYPkL5MDrzjMfH5z/x0UdicgqyMW9xojcM0cjJDSoziRuwf7MrdhR/ombEnfiN1pW3A0axeOFu7D6uyv8etHnyG86QJiH19GXEMoYuovIq4pTGo5JLZfFllZqlvd6o7G7Z4YOe70xkqFxMzGq8hujkJuawwK2uNR1HkdJd2JKO1JQnlfMoz9tyQxl8m5zzpKOxMl4lTUHo/SzgRU9CShauCO6KbXjaeiYTINbVPp6Jx4hM7RR+gey0TDXDbaJjLRNpqKiJIDiC0/jLahlOU93yyG4zFh2lmO7I44XCw8jPbJTNg8Jsx6zZij8eypxLS7HLMeA+xe5hgxr4X7TrX8rd5TVByh55Ci8x0HaSH8rn44eO5vHDZvFWa8JrnurLcSNq9JjlkmdHsrMWAvRkFXAvbl7kCLO1+TV1QRUSlwJesO9zv/5Ahfge8XuI+v2Dq09D0vgG8T2idTsTrjM8Q3nVa0k97YHxj47o1HkUZyz+6LRl5PNAp7opE2GIUtuesQWR2ErukCVZZcksh00MhX8gQJvldw0V6JQbAIfJP32TKThd0522DsSYbNpZQSCKgoE0XpJOHyCh2lFkODRUi5dRYmQwpCgndj28b3UVp4TZLoRH5JpKV0LjM5h9T3VeoiqqAOOYp1akOjseIxo2zkJrYXfo0vr72LsJwdGPMUKn1qAhlyI7+HN2DpDXjpgf8f8VvxJkoRIpVcltwUgcum07AOpojc24sE37yXcXc54qrOYt/tdThxb7NstlT5YPEVG7n9cq/Lt5PKA2ACopIT4xihgcFCJl0dGYiP2Y/WpkJs3PA2jh5dhdraVAw9qUZ2Zhze+u1fo7HxPihXt1JtLvxWAUlUJDGjfyoX54oO4WbDJXTa8hBecgzJ5vNoH8mAw01pMSvczirYuamyvDXD/
FTfkWqg9ERy7JnhcRjgslfAw6qQDi1plN/VymMrr3g13Nq8UYnIjP7Uwe61wvrkLi4U7sG9riuYobymr1G44yoRmwB8+bZese8I+Ca/llEYE0w9NxFSegxZnfGYkYTYFaIA+flMuvoGS6LHGk+jfUTpzos3kOCb66mf51r+e8uAb68Z8ZYAhNWcFAAnCYA+an+zeBiT2ziumHuyQv0lfG1SFNkXaryxoI/HW42e2TyEZO/H6dy9OFt0EBfLjuJy5SnEW8/jbstlZPZeR8noHVROp6J7Jg8jM4XoGMtASnMkzhtPomMiE2MuAybtZZh0VGDCVoJJWzGmHKUCVMkTnnYbMO02yjHjNooa0JyzAuowwObiYYSNVXDnD9Jdlj7s/I3TIFVz+Wp3GYQi4/Ao4Ov0WmD3mjDjrcCMpxw2t1GSnlkcjIn5SbXBuG46gZahu8uCb0mg9dVgymnAw7arOJmxE6z0ybVMlH1YT8KrEiAZdeTBtUE/9PcIeAhklz5UPpfkfYAKJXSmqGimfh72oyTS0nBnBM2nJb1q3510lKOi8waOFO9Du4+8fA3oy+8U/5u/9zfCwvt9RTtZofm4UvP6Oc8jY4oKOB4TOqYezYPvjIEoFLLYY28c8n8ItJMc0k5641HYo2WY9scgn+C7OwpZQ7HYV7oNZ0oOoWksU7LSBbiI0gMHBME3J7wORF+iQTIPvs2YgwmPZ/OwK2ebJF7OOfUwu0qk8Xjo3aMqhXoel9OMsVED2luzERSwEYf3f47Wpix4vU3iBdQT6JiAqbSzWWCD1ctMAu4IkqT6ltciHkpuFPENwViX+RnWJLyLixlbMOEj+DaKAoAkOjHTX7v+8pvsy9PO3KhlIeUGQC8ITIiuv4CrNcHonMhUdAbxdryYe6Z3pbD3BtbdXI1jaduQ1RqDgbkieDjZfaScEAT4C4g5HgiOaKzxVSmBOB1m2GZqMDxUhUpDKtavexNXru7H6KgZjztKcf78LklMHJ+kxvLKPbdsiu5qSfwloOmZyMKJ/D0o7EnEmKsUmW3ROJW3D3GmYDwezwYo1+hS/FkqQ7BiI+lR9HwLF5dzhAlr/L9s4gqM0+utPOP8XLWXKmDD97XNVzxntZj1WGDqu4mzeTuRNZyAOSYno0nAAfMhIAbLyrXBsnNDwDeNDQVCStqvIcxwUrjWczSu/e77579nzg0aAKy0ebLiMFLqLmFgMke4qtyAxPNN3frn3MwWfr8c+GaBnSO40nQGXbO5koDMgipuRkI0KpJbInLP/+xyT06jnFc9q6oUzLFHz2z5wE3sfbQJae2xsI49QhP50lO56J8pxIitBBOuchAwz2la7YxgjswWIa0lSigI/ZO5Qpkh8JQIjhiLKkdjoT30vB49YZLf1WQaNeoZ+0jxzjX+uR8Jl5ILMk8X4nzQEy1JX2SBJjpTmLzMnKAqScRmFJTa4Q6vEbfrz+O6+SSahu7M38/CPXM86EcVPD5GuUwYdlfgfv81HM3djbbpLDhYXE1kWFVEQQxzUq1oUEvyd5UqOqXPa3ku3usSB/n5pI2xHoL+O4lyLbQRo28E9pL3JYIDCpwzwXkOZgw7ylD0+DoOF+5BiyMXNleFSnJle4mRpyRQqUa08MzPHm8E3/MJl5QafJVw6Ve7+dO2f6rvcD2Q/dhjRsf0I6xK/wxxjYHIHIwWzndRT9wPI+Eyt195vQt74kTehRyXgp5oFHVHI2/sOk6Y9+FY7m7UDacpHiUnpWSw04OoJpBasBVw/VN16LeuS2AhiyDBtxlds4XYnbMd3Ix18M0wqwIh1ORmgRL+n9qstbDbqnEj4TAO7/8U926FYGaSnu1mLXnOLAspJdWobAF3vWhdc5Fk4qEHtfA4mOVvxSgqcbs5ErvzN2N70VoEG/agvO865lAmnHHqfYt8Iz0ZK7YJP3sx+1Y7Pec1STfiJkAvGvXTp7wVuFQThJi683gyV6gSyl4g+GZRnoTHkViT+hXuNUej314qnl8aQ9Rdn5DKnv4nuaniGwTfHBvc0Ah+6+F1NWF8tBqnArbg5MkNMFbehsPZjJKSJKz+/JeIij4El7teVA9WipLF8eFxmuRweM1oHU7F/qxtsAzfE5nBjtlcRLddwtHSg4itOovG4UcCdJQOMmkj3PgsopNLQ5GHSws/U4psfmzw70XyaRKZEUpW7bz3myCclTJn3NUwdt9AUO52KbBDHq8Ue5FrUYLUv012/trPOR7Fm+82g5JsVEQoaI4RXmzx4C2RUlRGxYuZHwR2Lqp4OCuwv2gPsttjMWFjFVLlqWTbqxoAK3U/S4Nvzo3QioOIbb+IHluBSkD2UJvbovSYyf9eSUeKS1UdpWedgJtJlEySo9783aYI7C3ajYbxR6JMw77ipkwnhvSROE8IprlRq8/G7KVIb43G+dKj6J3IBg1KynkqCpRO3eNvdJlApZgyDzgXAW6lka20sUUjW8azBqR1RZQlXzXDVHOYzF+D5xGteYJvzdhz8TpKlo8FoO40hSCm5iTq/ADfKgG1WlSyMgYScDxrl8x7h9MgEQQx7kXxQ8sd4For44pjS28H9br4Hp/5tz7vv/Fb6RPhazMBnc9eI0YF257/p1Skw2fGmL0MxR3XcTh/N1pms2Czl4lnnt+X+9H2ZIefClAE37MeSg0GalKDWa+kBp93jfwT/55zweNjFXAzHgv4/lTAd9aTGBRRae+HAr7z+pWgeX4vJQfjFOGcnm/qfw/FI9J6EvuyNqBy6K7oVMPNrHiGkmhxE5TQyve3aMlKbTJ+nEcreEEeG3VJe2cLsD9zG3I7r2FSkuYIoPXSuZXweEpF31sWEqcVWenh2L39A0Se342uhlx4XS2YcVtgGXuA3rEMdI2lY8hWgmFHBUZs5WC4bcZjxKx4sJmkRq+FFU+8FUiuCcGhrC04bz2KwtEbmISSoKLurY/Fery1ksRGL9TCxqGDoUUJWfqiOr+5PL24PvXbZb+jNjbl5dS8ErLxPf23eJboXfqOzwgcCCRY5ZPeU06mgblChFUF4kZjOMYc5fJ8elhbgMYzzsPFXQ/RE7RIUi/D4JQeE2qTli0/v3As3vz0v6l2YUbWZAq2ZG7G/cexGJZqo5SCrJQCMjYajA7qd/sxhkTGjBsyx3m50JNYvZARkIlRE2KuHMbmjR/gXkoEhoctGBo2Ceh+//0fw1R9WxJ0WdDnKWA7f//+XP/p73Ce0TPF+TfnNsI0cAfbcraieyJLolKM8DQ78nG39SouFB1GRPlJPGyPQcXIXdSOPUL3VC5sLNQDC2ZRhQmfUZQ3CG7II9cBsFCgvjF+6BnnBirScdpn5O2Ou43C2z1VuAdtzhx4XPT0sqoq55cWMfjGub49Tjm+vn0sBgjf/s13jX2lLsJ7YAVJFgtKqbssyWGmoftwyBjj+F7028X9sfh9AX8agPmuv785fhb/VjungG+vGf3TeTiYtwtV/ar6J4GKJBxRZYbzQRRPlMqM8g5qhWcWzxUC5GfNQ/33vK52PnooeQi/X4oeWaTy7smivbjXHYMnNAIoLyjrTw1AfXoq
n2gSfCoKqM+r73pVHmQ1ZhaD1oXvkl5CI5F9IRV/pfohpV/Lcc54EsFVp9A7lTM/7uRcOnjU21NbC+idnnCUIaM9BufKjqJrPEv4xJwPyoD5xjqp/56vi/pYebrpyVXqKvqrDsCVobzwDN/VDvq4VJ89PUf1a4nXWyJmTHBmUjrvj+1bjftN4YisCYBpmGvE4nbkuXSvN1+V/CwLOs24jMjtSRDVooaRVDiYh8G8Cxbz8jN6svDsfP5vH/q9f+t1cVvSc65HjwSEK2+7MjhqMO0worwzEfvzNqNhKh0OJth6Scuk8cF8MR0zPN0v37qm1meky9icFbhaeRoJdSF4PJElBpw4qhjtkHv47j541jlfvf+nbS9J9hfalAm9kxn4IvUzRDWeRs6TWNH5LuiNQW7/i022ZIKlnmzJ1xeScMkLZfdEycWY1alfuGBQZZvergvGjozVKH6SiDkp7V0HLzdXWSRUsiIrvnEyv1SDmolp1FuVrGgL+mfycSRtK9J6r2HUpUpTyyZMPWBJpCuTaoQEkEMDRdi19ff46Hc/x/mgHaiquo3e8XJUjaXhoPEQkqlhbQ3GveYw3GuLwP22SGR0xKKgOwHlPUmwDj1A7Wgq6icyUT2dJl6ewMJ9OFm2H3daL6NxNANTbiOcbvL2SFmpB5jMKWV91QZK0Lu44IRw9TSOrgLCC2CFAFjfEPx/5e8JQlUlScXdI3+Ph3pfPuOmzI1fk6BTrwQKWnlfrRIlS2VTR7tjPBuRxiA8aI3GlMsIr7MaDo3aIEUX5Dza7/W/tSRApcHLZ1EayPSGOWk8yQauuKLPHmNqw2Q79cKIk1l7cK7kECp6kzBlL5YKbpR8k2ex+Qm+pVIo24M0JY6ZMrg9FXDYq1FruYc3f/1XWLf2LcTHnUZzcwEqq1Jw6PAn2Lj5NUxMsipqHdwebjyLgMFzzBNueuxfAtUZZzkK+25ga+EOTM0WaRXuFPgcnilEUcc1RFScRGDZYZy3nEKYKRA3LBdQ3pmM5sFUmAdTUDGSgp65Qulvqv4o75TSEicAf3Zbq7lOwDLqKEFGZxwCig9ghhurkx4w0lpUQa55jed5cL0I+D61qS+MZz7fd43jBdD89HcXv8/xy4qNnEtO1CO2JgTR5nNoGEoVA4Vr1WKv/vwzLroXXpv3v9zxrd8SlOjnYT+LUWNG7fADnMzerUUimMSmAVK3RSJlknz2jbkgoHl+3mlJsgLANUCtf19/JQCXZ+Mr12erHLKmOM1S3KttOgd7MrejoDcZE/YyieIQdMshxcZYDIbj/Vl9tPC+0BsIfJ5BYyAwJUAi/Yc611wfOPd8LgsmnCYcLNqPJGuYUEnm23HJuWHBuKMU6R0xCC4/isdjGcLrl/El9/GS7T8CnKkEpowDL3MtGFFFNdIaLiO8OgBlQ7eWBd9UOyHFbNZViYLeGziSvQuWyUeYE/lS9p2Sb/SvDZ+/jWicyDPJ/s81QgFg7kFeWCXno6I7Cbvz18E6mQqnuw7wNMi+QvUjL1jjgtGK5dcXYgqOIdtcKWIrg5BQH4KOqSzhrZPqSBDnL3f8RbXPq+v4N8bEueYzYWgiG2sffo7LjaeRNxiLEnK9+6OQ3a8wqQ6CX8TrnwR85/RGz4Nv/SELn1wTIH6v6SK2Z65Gbn+s6MIKxUIWd3q7ldoJJ8FLN+gY+qOHSQPfA7MFOJ65A6ndcRhxlQoPTSWOkcur83qNcDgM6Gh/hGOHVmPdF29h294/IColAIbuB7jZEYP3Hq5CRNVxXDYFINxwDBfLDyOo7BBO8Sg5iKDig4gwBYq29CVLEK60hCKkIRh7inbhD7c+xoeJv8fBwj3IGr6DNlsxhpxlYDiVfMbB6XwMTPEolOqYLFbzeDILvdP5mg5tLvpm8tE/XYj+6QL5zsBMIQZmtWOuUPjN9D4vdzyZK8LITClGZkowPFssx8hcKcZsZRi305NvwLSTyUhVkjA05zZgzlUhHD4mEEkGvU95tvi+XRKPKmF9ch/hhgA8bI/CpLtCEg5J+yHFgaFGlni3eyqlOiPPQw+uHC4jJu2lmGIEwWkQrie9gm4fS3szxGmB2sC+e3LrYI2AkFKP9yyXcCJ9G8JKDqG0+xrG7IViMJJrxnP6M17F6y9A0KCkJX0VQkuamzWgsiIRm9b/Ftu3vYfDh1cjJzcKJaXxCI/YhhtJh+H11WPOXgW3RyXi+nO9Zb/De6HHymfG+FyRAJG9JXvEM8TNjxs1ZfbI12RBDvJ6y3qTcbv2EiLLA3G26BBO5lF3/iiCCw7gdOlhFPQkymYmHj+CN5dZeOJ+eZS81RibKUJ681WReJsRwFGLOUqLSRifRrlOB/jGGvGUh/PpPtW9kYtfxcuobdg0ZhaMxW+cVwO9NPRmfWbE1VzA9ZoLEqon7UYMjHlDgABZA5QC/Hguxf3lpr/UQY+c5H7ov/+OV35udxtR3peE0/l7UTdyHzafUYrJSNLeXBmmfAaZC3aPUeaG4vIq6U6VNMckSMUpFuNLK+TCvxVlTnF7GcVjVIb9T9CvfkuPKyvQVsoYqR19iL0ZW1EykMndOgAAACAASURBVCzJiDpXWvj+pBNK9eIaZchpNAzdI/zNV4+LERheWxmEuqddverVFllAh+o8qpw8EwQJwMecFTiavwdZTVcxaSvxay6yX8YdZRr4PoaOsXQBsvPG3ZLA/enxtew8W5Fz0WutgW/OW3qovTViMGQ2X8Ul80mUDCYvC76ZBEvwzTWydPA2Dmftgnn8IWYkeqvyN5Tn/8U841PgW5+PwgtnpMWKKXq+uxKxp2A9aqfSxAHxXeCbY3S5ftDBt91WtgC+JxeDb655y4P45a7z6vPl+2Kl22gefE9mY13qF4hoCBTwXSxeZ4JvhUl1PPoiXv+k4Fv3evNBCb7p/X7UHo7tWauR+viyADN6aKlyQM+3FwaRGlShzhffgUsOCOHd6eC7WgDqqZzdopU64tTAt3C+GW4n+KZ3k94/KkRY0ViXCqs5DZbKFHS2pGNyogJl7TdwtHAPhj1FsMOMSWcpBmdz0TSdDsPkA5QM3UZO9zWRhLpTG4bY8lOicBBlDMRl4ykElBzEp/dW41eJ7+KLrLW40nAG6R2R/x977/0VZ5qlCe7fsD/v2Z2ZH3Z75+z0nO6Z2dnuraruLtPVndVZ3mRVZlZWOkkpj5BASAIESMgLCQnkkJCQNwiB8J4IIgLC4r333gbhI549z32/QGRWpSAr6dzsqdI57wEREV985n3v+9zn3vtcFHal4Wl7CrKazuK29QzuWM/iruM8btSfxA3zSWTazyLDdgY3HedwuzEFN21nRG/1tuMcshpTcLf5Iu63XMLDtjQ8okZre5q0sWYr688bVMh43pSG5438mY6c5ivIa72Ggo4MFPdkoqwvCzVD92GffArrxBNYxx/DMv5YfndMPUPTbA5aFyirlQc7/z/2FC2Tz3G/LRUHqyJxtek0muZeoH+uEG3LxfLeloU8+VvD9HM5rm3NcevHHqF05B7KR+5DN/YIlpkctC0WYWClEmN0Ujx6THp0mHV
WY3alRsDn3IoOcy6d/M6/zyxXYWapClNOEywzeUhvPoPdJdsRVxEBXf8dcbioKbyA5g21GVfAgvOauapG+AO1oNZ3KNgAn6cJY8N6NDXlwG5/itHRakxP12JouASz8zrJ8aSkIfN9CZJeO1c3vNmrCBOBF7v9PWq5jOOmoxLh4UYlaSniLISZYQV+3aFGzKEJZmcRrjWew3XLaSRVHkJk4S48bU2XnFtGWZgGxY6MAmwFNK+zpoMOaXDyrPky4suj0O0sx+yyHr3LVVJjMb6iw6SzBhPOSkxyrFRhcqUaUy6OGky5dDJmXDWYdVVj1lWlxkoVZmRUYsapxvRKFaZWquUY48sVGFsql8HfefzwaxP8/3I5JhdKZc6dqzuKGw2nYZ94iik5Bx0meV4rNZh0VmPCWaWNaowvc/B8a+R4PObrxtrrWvs+XidfG3OWo2s5H7e7LuCIfj9KhjPRspSHNmc+2mdfoGXkEWzzXAvP0CCpQTlomc9D21I+OpYL0LFciM7lQnQtF2J0sVjGyEIxOIbni7RRiKE5Nfpm8zE6X4QJNo5ZKcXAcjFG5oswzjQ5ZwnKxu8hThcNw/wzqc1g/jWBvSdUBx8HzFIfw+K59YZPPsvPs4kQO4qyyNUqXVLpmLuCZtBhX/SbwFQvOt0seHUF69HrLMGhwl0wdGfC6a7dwNpQDOu814CivkycNyaiZ6ZQ8vrpaG4EyG3O+ltnPXxqHTPnu16lZ5Kc0iIMvlAjyrszccl6DNUj9zcEvhmtoSpL3eRzxBXth3UmVxRdGJlRUdA/4IB+6ly+yHmv994w801GX9k1FSlRMrPzbhNq++7iUM1OtCwVwicR3VbwGqiKFgSFDZSDuN4zCYNvr5vqVWdkn+tZKBECIpyi+Gfwvd7z+nq+vgq+F8uwo2AL0luSUT5GmcFMxXxTQvt/dJ3vtYB77e8E3gTgJYM3sK/8IzxqT8H4Ug0QbEEQzWK0JYRErW+GPr9u7LdWSCbMd8iO8ZVqnKk6jKedVzHl0Wt5uDTqKl8xLDmoDAL1ujuB0AAQ6gX8bVhcMKKm+SaOlUZiyVcjslxifGj8ZdAQakUl9OoZtpb8YAt8/nqM+nRoC1RhKFSLLncZHrak4HeP3sL3bnwHbz39KWJqduCi7SiuO87gquWkgO27TRdxr/USMptScNV2CunWk7hiO4UrlhNINycjrf44LpuOiUxXij4e53VxOFcTK4MSdK8bp/VxSDDFId4Qizj9YSkIO1gZhajy/ThQtg/7S/fhQOlexJTtWR0HS3cjumQXoop34kDxDuwv2o7Iwk+wt2ArjuTvQKI+Gr8qeA//74N/wS9z38Whkt2IfrEVewq3yWf4ufCILtmJ6NJd4DHD3xFdsR/7SvZib8FOROTvQHTRHmHIzpgScNlxCtcaz+J0TSzO6uNxrvYoLpiSpHNbav1xcLAr2wXTMRxjV9H6EzjbfBJvl36MHz35Ba6Yj8EVMCHgqsMyQ+sb2JxUMRcdMzpoYc1qsneq4ymCnQDaRRve52+A3891wEY0jRKaDcEChlqFVd7A9623GXGNKQbSiv65YtywnUG69ZQw4TRmBN8EQRzMxxZmk4VQbqMUVvrRhAXQ+bCjavgJLpqSUNyeIfr0PGc61AT2oqKwIUbJIV0Hz9TE4mcZP8GJoihceBGJpLy9OFV6EKllR3C8/CD2VUUiko1IqvZLM5MoXRSi9dHaOIiE2kM4ZziCM/rD0n3xZHU0TlRF4XjFfiSV7UNiaQRiKyJxoCZKGqFEVOzD3vIIGfydx+Vr+yr2iiHfWbwNkWU78bvyLfiLO9/Gd17+Crtq9iBGH4l9NRHYVxkJfm5P+V7sKtmNHUU7sb1wO7blfyJjV9FOxFRGvXYcrDog18LmLPzutWN/jXotuuYAYvWR+Nb9N/Ct+/+M7cVbEFW2E9HlO3GwaDv253yIyKJt2F+4/fcG11VkwTbsy9+GiPyt2FW6/VNjZ8kn+OzYVrINe0q2I6J8J7aXb8fW0m3gveDftlTvxDezf4J9RZ+gfCgTg4uFmHFXYM5Xg8WAHq5ALRb8NRj3V4lyzpyvFq8bs169vE5wzVoXjgW/EdPuGowtlmNorhhdo8/QPvYUA9O5GJh7ia6FPHTP5ULXn4no5x+jj1Kkku70eoAQTlEg+C7uv62B7wJRsfpag+9QvRRcy97I1DoWfYcaUdl7B6nmRFQOZa0LvtX+5MCKpw6miWzEvNwL62weqLGtajPoaH+14FvJCr8C38r5oTRhg0RN9b1ZOKLfjdblIgW+WaD+WfC9AbwQBt8+bx1us5Nm0wX0UEFrtQ7gz8z3+nvG69fW/1+fV+DbhqnFcuwq2obLTcdRNsoGO3+C4DtMuYe9jTD45v/3V29DZsNJDM2VrwHfBBZG1Wjnawy+w8VjE64apOiP4kHrZUyy5bcWMns1+VSBC//v9bKorg1+Xys8/hb40Iolvw2GiSeI10dh1FuOJRbgiQFpVDJdEu5V+aQE3a+Oy5w4lW5BBQYWzwS8Nnj8ZjS58nDaHIOPXvwaB2o+wbPhKxhwV2Peb8Wcv042M1fABqffLL/P+A3gYKc+jjm/AXM+g2yWlJqb8ekx41WD/1938L0eNabdZCFrPsUsDi+UoHs+Dz3zL9G7kC+Dv3fOvEDbZDZaxsl2P0OPsxBj0/noWnyJa/ZknDEewcvOdNl4O8mIz2SjbfIZ2qay0T71HJ0zOeiazZWNuEc7fs9CPjo85Whzl6LJWYT62RxUDt9Hbuc1aaV8q+4krhuP4077JdzpuCzjdvsl3Gq9iIzmFFx1nEGa9SQuW07gsukEHptT8LzrKs50nMJeSxSutJwWYMAcWFX8tBGjxK6mdM5Y26AVD0pubZPKlw22aA17qFmqJMbIRqmuqTY4nZUiNfZqLmzkOz//PQK8tUKxjpl8pNQdw+Pmy58qemaqBFlL1VFOS6NgfijnJ/P3mcaDJpgGn+CKIQml7TfFWRAGUwqiVIHoRpwTgorO+WKkNZ5BkikeJZOPYBh5iIK+W8juz8Cz9nQUtF2TZiZsaFLZfRsV3Zmo6MpEeectlHXcRFlHBnI6ruJOZyqyui7hXk8a7vem42H/VTwauIYnQzfwdDgDeQM35XM8RnVvFnT9d6Hrv4eavixU9dyW16p7MmEcuAP9cBZMw/fwpPMSoqv24mrraVQO3IKuOwNl3TdECo2frR28D+PwQ5hGHqN+7CnM489kGEYfoaj31mtHcW+mXE9lN3+qa6rouqWdo3qtpvcO9P03sS/vI1y3JKF+OAuN4w/RMvkE3dPZ6JnKRv8S1xZbkL9E91weumZfoGM6R9YJ1wrXWetkNmzTTz4znsI28+lhmc+GbfIxzGMPYZx4APP0EzTOZsMx8wxFY7dxQLpD/gh7a/YivjwCB3O3YW/2B4h4/iEO5mzFwZwtiMjdggO52xH1YsfrR+5OROftFLnAQwV7cLhwrwzKBx58uQvRuTtxpHA3Yop24vDL7TiQu1Wc9IN0kEo/wbcf/Qi6pefSbGy99fFvE3wzMkyATOedNTONq+
C7qu8uLpoTUD5wZwPgm2k9DlHoMo09Q3TuLpinX2BFapqUvOtXy/wz11rbK8MRPUkh+dcD3wFfvTRgogZ891zRK/BNgm1DJMHn29T15t6fX//XuXer4HupQsiB1IYklI5k/GmB7zDYDoPvMPutWmrekdaa0YaduGJJRM9UsQLfoSZpxrAKvlkItCnM3iY+aI35FvAdtGHSrcelumPIakgBgbiAbwHgBFcsrqLyCUGWVTpdsnFOgB3+/Db4pdrcBMPQPRwq3oUZbxW8TFNh2orXLHJXChipXEtviKHYVyCIeZAEfMxZVo0JCOrs8CzXYHKhBPVj95FmS8DJqigUNV3BmEevWuj6lQwh8xrlHEXBgDmZKqdSvvPzcjNXc0LDuaGf/hkIWuAL1IM/RZN29ThUqzALW+/x10mjCGfQBGewDivBOgkbrwSYk10Hp7BeJsyH6qQ73lSwFlftJ3HDcQpNUzlwMeTsN2EhYJQQND/jCtSvDh6Px10OGuV7vH6rKJJQspGhahYVMq2EqQpMX5hy6jHr1mHWo1fDrccMh0uHaaYxOKsxtcy0EyPmpsqx4NSjZOwujtYfQrrtOCbc1VJ5T/3rjcxX6r9Lmolo0nITVPdQPisOJ+eIKh5Sr7EDJkE674dq3qRY883JS1RpJUoerWHqBZJN8ajovSN5vpwLBMxS1MaGKcx9p5yg5IyqPHGJ0PisknNfM/QAqaYkZLelYzpgwCJTa6TrXJOknsh711vTITuax3OQVp+MRx1XMBowwem3YNatx5BXL/OYhX1Obx2cXhOczOfncBvUcNVieYXNUbTn6tZhzsOhx5y3FvM+DoOMRa9BPuv0mOD01sPpNWuDv5vUa24DVjy1cHr1WPboUNefhbTqWOgHsjDjrILTxb/r4eQ5eYxasxXWMWgNV7wmyY1e8Rix5DGsM4xyXcse9d2r1yXXyGutw6LXhJ6VUhws2YXSwVuYcldh0W/Agl+PpYAOSwE9vMF6cH1JQ5agEUsBA5YCVE5SHQtlzfjr4PYaXg2fAW6fEW4/h+nVCNbLe1wrOqy4dVjxGeDxm7DiN6JvuQiX9HE4ZTmK6pnHaJ3LRfP0czhmnksKWftsLlqn1P8pXdmw7ngB+0Q2LKNPUDf0CCyyqx9+DNt4Nphb3jpfgM6FInQsFKJnrgAdc3lwzOehbuY57rSmYFfRdvQ7SyTasv5a/LeYdsKaKLOKkJKZJvgOUNu9AVV9WRr4vr0h8E1m2+2zwDr9AocL9qF68D6WvGYw9YfykVK3sN5a3bTXv0DayfLmpJ2wnoHdT5ly2TVXqDUKUnvin8H3JmKmTZsj65/TKvherkRE2U5csCegZDgDNVQ6GclA6Z9C2snaYsuw2kkYiIcB+CHzXlysi0P7RD6CwRYENPCt0k6oZfr1B99Tbr0UQmbazkoKCh++MvoaOxjOlRWt2XAuG4GpAt/c/HWDWThQuA3T/mqsgHmSZMjJbFMhgMCHhomFJ3ROWIxohk/yylWqAllRvk/uF5Vj2Go5aJdiGjbiedZ4CafLYvCi/RqGfXpQM5lAm8yGgG+pKGchmPL4aXg+d7AA7zVDwFqAWuTMv+P1hu8Hf/I8NbAp17MG7H9GBSIQZK6nymmcRR3SzSfwsDEVgwulSs+YrD/vgVZ0JY6EJlEYLtIKd19jvrGSU2PKBAe/l7JslBxUutT8/6ecDo0J5vNUr3EzagGc9SJ3l991A0fK9uCO/TSW/SYBlqJrvAFDw8Y6avBe0DlTQ90rxXAH/HXws5GEMN9m+PkdTN/gPRXQTuYrfG/XN0qvAyIsqOR1spCYbbCPGmPRMPFcFVnyO8JzS1QxqMer7h3vM1NQuIkzTE+HyzTyCKeqD+NY8QHct5zFy84b6F4pFzk+cU42GMq2DD3BeX08KgbvqQI4Fsa6WVBLiU+uC7LtaqPkz9V7oc1rNbdVkSMLt9VgWtCnR3g+qLkZViKhM6SOLfdNnFJ+r1qz7GabrkuAeeypOHPURJbNmufB7187B+Rv/HtYXpDrdJ0hc/TzbB+Blg197gpEF+6GbfyZFC5Lt0fO5yALLOtVh1HRuFYymbQ3fL5STKmtGa4d0Uhmjq/k+WrnufacpdBYqd1IwxzeFwKzAJ1pK0Zc1Ug3HsPT1jSMu2tACUzWP7hDFomUcH35AhYpinYTxAdeP1ThtFaM7VPO1YqvThw76j3T/nGIrfJZpDnZSsiCcbcOL9quIK7sACbmy75APYQquKSq1Dnjv4WCS8UOix2VZ9aoFVzaxWFm2knF4EbSTmhHHPAGbGhfKcXJ6iPIaUjD1LJOCB5K6X5qHq+d0/8Kv2+44LJ6cwoupaFPyKrAd8N5dIbBt9gUyqZ+tdf/Vd7r/5G/i6pkqnFWJSIrduO8NR7FQzf+tMF3GICTEScIZ/rJEVskzhsOo3UsD4Fgswa+CQRMqtEOi0k+u5n9Kyz8LzQZNSZXmO+AFVOeWlyzn0aG5ZQGvj+9+SrwHG4gQIDHAkyTBrrswvKaRx/gYME2zHgr4GIxjSgeNAr4FuBIpQnJR1PdLgkaCcqUAgVZdQ2IcYMN2MDiGy+a4WcBa6gZnYuVuO5IwYnC/TCOPIKXrZ/Znl4ABkGMBsLDmzN/rjLWBJ9r2gevsuNh4PzZn3y/Yr0DkptMQ8bBdCJKNPI1bvphSUFKmGkShOG/MY0h4IAn1ASsWDHurUVqXTJedNzApFMvLCy1fkW/PCxpFv4pAHHNMZkOIc4C1UroxFDZQ4ExBdI1NQ8NbK9VwhDGV97Pa7Rh2V2Pobky6XSZUB2Ng2W7UTqQKYud6g9UUdlIWoVisRVgJJAhIFIAknOHz9CIEFvPU89aVCiUdryAHtkUCZKUI/eF5u7nrB0Cb56311cP3eB9xBqPoG+xRAEyYb0JrJXuOue9AraaDrDWiEOAcMCO/qVyvBzOwq32VFw2J+NIxQGkO06jqOsWjEOPMe6sWndT58ZYNfQAJ4zxqJ99oaTtPDZghVEVpe1MsCiOiGySYfD92Q2TIL0ZIagBMJ3ns0NFi8LgO+wcfgrQS+c0aphT4q4ZtYOPkVqTCOv4C5G7pPweI1LqWYQdAuUorT1Hzi0BwWuiZ5/9vzivmlOx6lB85thUOnFMvUBkwW50zhZIqhn1vZl6xrlO9Q8F8DX5wPC64trgOqFCBiXq6JQK0NecUmFSaXOVDKhEO0JUFlGRD3HuZa0qiUIfO8ktV+Bk5RHkt2eIkpF0P6X+tsh6autQtL5VPcSn1xfn3afHWruzdv2p39W6ZQoUG2rA65DOqlQamnAZ8LQpHQm6WIwvVcl63dja+LcmNRgG37QJBNDKdjL1sLwnE6kW5nzfXZ/5FhtM1acGDPkMuGQ5iczq4xiYLpamNqq2QxEVG7uP4fn/x/3kPNgMqcGN4AXOJdoYOsRZ9vPIdJyTdSRzjGsv+Gfw/VU8883/Ds4h2lh2ra3E/so9OGeJQ/HQ9T8t8B0G2QTaa4F3+O8E36cbj+BYbRTMo08RYPoEN1ZtEfpE/eGzm+kft7A39SFrGyM3OYJXylRlNpxDe
n2yqBsIcF4DCASUCKtLo0lQy/QBSimSnbYJg90+lYPI/G0Y85TDxxQVYcm5AaoCS7JMyjgRQGqNh+iUhIsyNbaP7yEr5gupRiBsShIKtsIZaoJlNh8pNUeQUhODJa9BgW8W87ENNAEVFSnImIebS2jXJ8ZMNmXtXOS7FACTexD+v1YcSkZSFgDTEkJ0JAi6zfAHVZGQAHFJs6CRI9PI+xJm+bjxK1DOBkGU04PHgralEpw1JKKo9zZmPQZJ2xGWmaCC75fPhIGGAg+rf2fHNGEA1b0LA031nSo6IfNDYzp5fqr5jvKg6ejwvf6ABfdb0pBqTkai7hDiq6Nxry0VfSvl6vs1ObWNzDWVhqTmtsyP32OZmNvP18NDi1KsDQX/3mf++LUhQDNkx7yrFvk9N3HUEItBd6XGeKtnRDBFdjUMvgnQOFeCPhMgTL4ZoUCDdKYcCZrQ7dfDMJmDy+YTiGGObv5OHK06ANPoYxVV4X2lgyfrneCVv6tn7w6aUTSYieN1ceh2VSDob5Si05CXjoldgQ4tLUfdIwJddc8U2FVgcXGhFp2deWhwPMP0FB3edvT1FaK5KQcd7QWYnjJqxazr3zsadKYteUNNeN5xAxm1J9A09RIugl2Rrgwf47PgO/wMFcj83IgSNw25B8qpWAvaV+eUOMl2LPqMKBq+jdiCPehfKIFPog7s7kbwrZrgyDMVRzSs362tLT43LYIhAH0t+JYcYg18S2RBOeZ0WinpqboesvMgv8sBd9CKjqVCxBTthqHnPpbcSrGH58D7pSIjfKa8dj4j6rRz0HFWTruyddTsJsHArphmxdAzIsR5pTmGotWvOcJkvnkcNmbjCAQbMOzU44btLNLNpzDm1sn3h9e4IkDUfOPf6NSqeUPg2iTpRlRhOmmMl/QDP+8Jr4E64htZZ2IDlS1RzhvPm3aD10M7wmeiOf0h1fhMOYF0/OggUja0VZqiMdqp/sZ5w2PwWhkp0+yk2GU6wxx8Puq5MuJADf7L1uOoGLqnpDjDtln7yXspamK8r+Lwk5hoxLyvDrqhezhDucGxJ5KyJ8+Ox151/MLz+4/9qdaBWqe8pjBpxHmhIoEBP6N7vOckk15FZekIMB3MPPgQcWW7YVvIw0qgEYEg5WL5PnUMISQ28ry0iBDt/B3bOWSS+Z4tEBJK9greG4mWapEhHlPG2mtfs3f8Effo9+y+rEsbhpbLYZvLRb/sK3SWtTkoalFcU7STShp29dnw3Lg3SCRk7Tl+md/5vNTnea7s2q2eHWuQSCCG1xPfo80TuffELVptmsxZ4hkOPlPOYY7wsfkz/Pcvc67aefLeyJ5iB1WsDlXvw0nLEZQOXkfNyC2UjWag7E9BajAMssOpJvzJv3GE005utCYjRh+BiuE7Kh/aS2Cp2A1n0Li6oYYnwdfjp5o4YVaG+tH3mi7iQl2SyJ6tpjqstyC1Bc1J3DtTgG15W9HjqRAwLgt9dYKuMyll0SomipOaedXKUKtQsUpdaRKG/llfOnbkvoMJZ5GAGHhb4A00SrdHkGX1qk6H6hoU0CLYIjinsSdYFlZMm+BhJlnO12eRVtJ0GISRoaMRIDDjIiUwUyyu+jxD58xbtMAfMop0lj9gQDBYpwE+Xk8T4FPXbpx9jlO6WFRIF1GDhLu5wYujsN593ujrBM9Bs4AMSpt5GVb3WUXabNZbC8vgY7yb+yG2FXwiBYmVA/cxulKtMSiaYxI2Qhv9zq/L+zSmdHi5Eg/aLwsIGfBVq2dNgC0Op9rkJVpCtpuAQstdR4jPjdKHDdL8wuvnptiMFZ9NwrmP7OdxQheNnS8/wMOmC1IbQeUTGnV2MSWbzS6aATQBgSbJOc7vvYJk0xHM+o0IuR1wh5oFyEjXPQ2MyLmIgScwJLjWoghBB5YXTaipvopTp7bgZPJO3Ms6gbm5ejzLTsaF8/twLHE78nJTMTu7MT1oAWE+Rm0akGY/jSeW8+iYK8RySIFF2fg26XkSuCkWUAFxZfc08BxswKzPiLtDl3CuKBJDCyUC/smAvgK03KQVgNkcm2mFE/Uqj9pnQ4A1FCErnKF6tCzlYF/RR+gayZF8dwH/Eh1bu9EyBY3AvFEIFgHg4kQROGkA318n5x/e2LnRhyNEBM8EoHItYSAbTiWj9Kvfht6lSpwxHEVeYxomvbWqaF07tjj5JBe0eRPwG7VjU32qGR6vBdV9d5FkikX7/EuxX/xOPwkhAtd1nittnooSqPPkubLeZXq6Cqb6W+jseYnlFRM8PiuGRyphtz3D2JgOgUALOruew2S6A73uHtpa8rAwTztIB4dzmU6CHawRUTZdOU6BgCq0p5MgUUTp8miFdeix1EkU9d4RRltS8gRkk6hRa22Zco8BszQocjONMdQgOfyznkocKtuFkpFMzAWo2kW7vf61r3dv1OskWqgXz9oV1f05IL8TNHNfUVG+gDjy4WdNh02lczEHnTUQ9oHHOFF4AKa5HCwEGWkMp1JpoIt2is7Vp6K2r1779N/tWAzVC+t9u+kCulhwyWJTIa/sgIeAUxW7MwVSBr/vMzr9PMeN3YMwWFdA9bPgm4TZ8ALVyi7huCke99rTMLBQBo9fKUXBy7WiwLdy0rXeDMQJxBIkv5hyKgB4o+f0uvcp0kWtPZUKqeajGT6fXturNWlHzSEMR2O5vpRDxXlLu8z/1wnQluetpVqK2ABlpTfJVtG5lPol2DDjqkKCLgpJlkMoH7gundUV+P4TaLKzEfCd2X4KB3V7UNSfgRWvUbzNMEPgkocWZgBfN0m+6tfWgm+bFE/db04VmSpq8K4C0vUWpYBv5d33zRZie/4n6FwphY+KBSHpzwAAIABJREFUJl9gAQmDsbroaMx4zxQTwJbdNMBkDoacVXjYloLEmgjMLJcIsAy5mxD0k4lhwaZZWDOG40TVgu3XtQY2LGaU/GoBlyoUyfzeMBslni8BnF+Bd79XMUs8ppxfmKWURUrj6oCXG5Sw43UIoA4erw5+P7VayfIShDWo0DjsqJl4grO18dAPPxRWRpwB3iOy9Ovd5w2+TtaNjUgYuhfmyW1GYKUO0x4DSoeyEF0cgVMVh/Cy5yZ63czNp5EncKDx4xxUTUQ263y+0uNoTlX/Yhmymi8ipf4YRoO1CnzL5k1DrKUpCdgxw+8zw+cJa41zzrFAmA4X06WapB28j++BHXMwo81TimumBJyriEZp1y3MuGsVg0PA7jIJUykMMhow79PheWcqTpvisEjg4WnACmUWeZ+5yUg9gWKEFCjj72RjFONCEDo0UIBLqdvx3m+/hdhDH+EH//zXaGrKRk7OKZw8sRNv/NNf4cjhd9DXl7+xOcS56zdjJWTGBfNxZNsuoHe+VHT5ZQ5IBGdzbJFiidU6fsUSKUaaIHvGU4sbXWdwuTIGo84KYVb5dxV1Cm/Km7c2eA4uPl+uJb8dQb+KsLHg2Tz1AJEFH2J4tlhyvbmO+JzIWq+yWvI7bQK7stLmqTVGgEmAQ/vBzdrvDz9DzjfzK4ZNXieYV/aSAEtaqxP0Ms3Ob0bHYgmSqw+jtCMD
zLDvi9XIckJmzweqzw++iosUDQicV5M4ZY17PACCcVXPh/noFv/p3K+WaaHO210qanVnbNQAbiLYdQ2Z2lpZ20aqSGVnC5icw3bQubtA32m7B3+7vYtf0t9PYa0dNbgu7eMjwqTsWuPW/hePSHOBz5W+zZ82t8+P4/I/zgO7BabkuTOwGTAc7f7x67L/dz7het4JLOHGs3Qg4s+i0wDGSD4Ns1cfdbCr7VelRKTXTMKTWo0k6s47dA8N0zW6hqL1ZTTLXzVNaqbms2Y5y/eZ8h6Wsk+mjr6XiQVJAaHRfGvfWoGsjB+dpDOFq1E7efXkLVVLakm1T3p8JA9ROmnIyko2JUkx8c2pxumN/IgsvqflVlahhOF/Cd1R+HqIYDSK49KuFW3yaCb9V1jEBapXBQooz5pgFRPqBOLJkDLiidEdQX1x8bfPMQoCfbJDnJ0WUH0DVWoACygCAyJsyn1BsAKdaeqQ+q7ToZX5V2wsILhocP1BxA08BtyYXVwbcw5SK/xHC3HYFQG6zW29i56w18/OnrSE87h4a6XPi8bfD5GAolo6RCuJJbG7RjyWdB/WA2dpZswZOVGiyFGjAdqMVsyCyHpwAnTYKLC19C2wTfkjLTiK7JAqQ4z+Jq+wUFblgkJmNOVn0NeBMAskEQztxkbrAVHuCq25kUpHnJ/JHJ5IFFx4FpFfwflCMisKT0l0rB4QFOwEHwSUDA9tKUdct1JiDDHosk62kcrwrDPsM2nK8PR2zNQcQYI3C34zKeUEuXBUR+HpCs9F/ruOlr6E/7qtaAiqCo8Vb3SBlLj8eEOXc12Kluwl2NiaUazCzVYGHZJJ3SBr11KO7JQozlBIoHr2NBHCzVpU1Xr+Bn0uBtyPGQYkEF8JmW4PaZcaXuBGKdpzHgZmMoDWyudxDzfTSuApTVvqaDuhugmQAAIABJREFUqthFJZkpHQFZ6EaAI6yjFQE/9d41tQgB7CrF6jM537L+yFw70DL2CBfN0dhRtgUxliicqQpDguU48rtT0DVXKrJW3ItMlaDzLqyLDt65rrSfEbhxP7DgkuwuD1DFbtNxUJd6Fq57JXWpfv5FQK4YcQVE52ZrMD9XI50Tl5etmJysxPhYGTxMj9L07KmUwvUtF4E7nSe52NCJnQJ1hltjocW20EliB9omLPnrpZCJES3uJ6q7UFWA2snJthM4XbMPLQN3pVX4uL8Ogz4jWuaKUD2ci3ud15DujMElczQSTMeQaDqOVNsZ3G5NRGlvBuoHbqN5rADds6UYWDZg0Vsje5U6+myzHgy2SLEegbjXx3QdJR/nCzRKZEHlrytiQdY2547sP1WiqKThVxKpZAplnxNIc22J463ytXUGvn+6FBdqjqCwheBbySByPUpUQos6EOgqUkTN01p1F7W3fvdeD4VMojIlzpKPLcfbVFSEPw8RmHMNqCiUpEeS7PDRsVJgX4C/OIwkGlQRrQLfmp3VUmy8QStqBzKRaDsCQ2+2KrgMUPaP+7MeK6BzskHbut4elN8rQDc/04Ljh3chbN/76O2tQmilA/1DBly8fBg797yFKmM2Tp/dhRdf/Bt877v/ASeObUNXZ6VS6+Jcbyb4ZvM6Roal2JZdJp2Y95pR8SRL1E6+reCbJBz3fkDrMyFqJ0GSRxbYxm9J1kDPzCON8eYcaxftkbbmFTm2mfP/u9f78/bCH/N3cg7pdp0CEXrRbcgpzf8edV5DePk2XGo6hvLxTNSOXUf101Rhv8v6U1A6rOSvN0PlRP+MbyT4NvRnwDCcKVIvZL5z+hNw0hqOi4YIAXM8/LjYNmOyJO1kNeebh7VKoyAjKmyDbpS/AJ4+D743Z+GqQ4MGUXX8Y1W2deY+jpXtRft4vsr1lGIw3h+BN8GTxu7RSFPuj0yXAEcabrscRs3j+ThQvR8NA7lYlmfj78j0EHyyQxYPMP59B1zOB9i+7Zd48cXv4uP3XsapIx+K9GIAzVIRHwrWqQImAQcuzAcsMIzewO7iTzERrFVds4RFJIjm/yHgJbDRmtHoMmlyaDSheeQeLjedRHZ3ohx+IWpriwFXxSGKneQBpCmgrGv8+Rwu1ZxDA99kbZmTTNk36nkLGJaDlvPIw1Y1s5EDVEAj89jZaZTNW1QOLcPjLKpiqJJgwxeyY3yxChPeSjydLkD/dAGmlqskp5Qyd4uLFoyOVmNyvA5+tqOmPNl8HcbHKzA7W4OASGwp1ko1C9qcNb2xfaE7ayrPWHcweZhTW759NA9Vj5NR1nYV5Xx9nCIpOjXdWajuzEDJ0+tIqD+J44aDKO7PxByjMExJCdGBcWJu3oTxiSosLLFdtEP0fBcXGzA7WwcfG2vIfuI8qH0jKhxka1ccon4z5a3FqYqDSG+Px/BytQLfBNbrzT3tAhlcAbpqnp+BbzqrZLvJgmhrLGiD31sne0HAjaRCacocogyiPoMgmvM+sVCNrolC3G1JwkeFH+PH+W8g3nUGpv4bknrEhlNcS9x/8kx+1gZoa0+AjgK7XI9S+6CBbZ/WTp7rjb9T6VcKQMn+EYdQRbMI1vWIgu7Y8JUgmtKGgyMVePDwPOobMvFkoAw2+w3cvncCd+6dgrUxR7o6MmWC+aH6+uYaV04o945qcKJYVY3B5Vho6ScE//r+hIdRLT4TC+QIyu2YhxUmdx6OGXbjQtlBZNtjUDt8SxzSRQl9c2xUF0jvnBHV03dxvz8dmc5YJNVEIbYsHHFl4bhqPIabTbEoepwMa28WWofuomeCDYCqMb1sliLJJT/T61Q3XxIl/hWrACw6WGqOGZWkU8F1wSJS7mfFbvP9otIkJIAWnSBTLHOi5olj8nSmVFrUF7QkYUmUUlTaD+dJzZVuI9RnyPgx6rahVDmuQ802UmfeTxvA9aMYe5Wap70HKvd9yU3pW443QZdKc5G0w1Xwzf2lHF86CCRwGP2sHcxEYmOkyvmm1KCondCxIvhWhcUSBdHB2Vd5JcMYsmN6wob9u95BVOQWPHlqxPSsBTdyo7E//B1cvxGLlrYSvPPujxEdvRsfffhLHI74GA5HgUQwOPa6fVh3369nF/gs7ElB8E2QKilJDgHf5X2ZOFd3CM7xO5pd0ta8YAzew2fPeu49uR+xMWudYPVzfU9+5p75Xp1AkM/lPtscey8kAh0K6SZMu96ClaAD3oAFTVN3EVa5Db2TBc+EB/j/OT8aycT75ZisRm30/S3PpxMZ2jOv3jMxytpLOa/yWbL2Pvv9Zj3rH/Y52hzKmlTpuIyosWM0Uw3hsws5GW07iKPGXSgcTUX9VA4qniaL/KCBzDezMEbSVxvxsABz7cUsjSo24/kSrPg3E3wPZqFiKFPybMqHU5H35BLiG48iqmw3pshQL9OwfX4xbM5C/sMmd/P+N42ggAOGGYPN8ITs6Bq9g8NV22CdvA2vKIEw9/r5/1MOY7IzfioGNOHpRBHCK/ejtP86Zn1syKCArALu3LhkPmh0mtHXU4Sc68dx++Z5pMUewrZXfwiLMx+zaFessM8sDUWEAQu1YDrQiNKx2zia+wkGvQbJO5PPlw2thaq1g
0EkANEMH50DaXHvFN3jGGsUKvrS1n2u9Z77T/N7pjgoMCvAZMWFmRkTDIYUxMXtRkrKMTwZrMDkdANKy67gytWDouXLgj/RH2Yef6hWM/LPn9fNez6nAFo6GGT2VZGOA0FfA8ZWLDhkDMNHjz5BRPUBybs+XLEPh6rDEFF7EOGV+xBZfxhby3fgUOE2FHVdkwZYiyxoDTiwMO9ASWESriYfgaEuF6MTDegfMKKy4joe5F1BW2uxanrCQ40pX1IoxHxlstaKWe72VuOToi2o77mOKW+9qPgIO7POut/I+BDYEmxKSoY4YwQ/SttdOjVyLpli5WHhXDPYIXZ6vgaWsXxkOGJx2hiOtwrfwZt5byHflohZYY+Z2qEfsEqP3g86aQoIbeS+nv8eOrHco3qKm4pYKGBOW8jDvBkebzvu5Sfi3/3Z/4x9+z9EXn4KIg59gB/86Dt49bWf4sMP3sTiAp0NAjwVqVCAS193yil//r3o79VfNQdbO7TJ8jNyMjVTjvOOU9heuh07H21BmvMCuubLxOkMevgsVGUgOGSkiREiJzwBC/oXKlA7dAt32q/gUs1xXCgKQ2TxHhwr348LxiNIs53Fg/YrqOvPQcd0odSMzHrrMe+vx2KgAcshAnAVyVNASKsZkYgI71lzMLRIgkT8WJiq66JLIxKSH0yxsUlX4HjzcWQ6YkCwv+EozCas1Y3PgwYwRGKTa05fE7RLTPGzo24wC3G2Q6jo1XO+mXZCkKQcUlkHAgx1sP9VXkmcdGB80oxPt/wYhw6/ib4ntSgpScUnn/wcx6K2oKuzGiZTDn7+8/+CggcpuHAhHNu2vgaj8Qb8wVaRxlPAV19nX+VV5XwLkSOSs6rg0k0lsIEbYMGlfZQ530opRCIb4nzrY6nGl+QYtfJJjlGliBEgSWWSV75XEUbP5u2Z8wrK3XH90PGU//PZtbh2XUoERjAO/y/319o9pn6m/gedRN4HlZloE5QUKouNWRzdvvQIEWVb0Dt0HyuU9pTUMt6zIuxINkm0l7iK6agip8vP0SIrdDYDFvl8SU9cvQ86gkqNiM6qXqsl6aRCBGiKS/xc7m8B+zqQ/yrz+Mf5W8fMXUQ27sbHht/i7kwySsdSkNp2GgU9l9DwNF0a8FQMpcM4kI7q/jQYB9TFpjz8vmZAvYcNeXRme73Xbzz4rhhJw4P+y0iwRyGybCeGYUZQWtL+cSbh2ab503y+At8MlzK02gxvyIHu8Xs4YtgOy8QtLItX+eXBd/9kMSKqDnwWfMthyU3IjoAa+EazyD2Z67LQ2VGBopIMvP3ej0SDlk0SvF6GQ+kxUgOawMmFkaU63Om5hkvlhzERMsEn4Xzd+LM4ySGMFNlvFlXRwJF9UuDbBUP/DcRaolA7eP1bCb5FHk2MFg0tLxdqa5Pxy1/+JX7+8nfw/e//GWIvbsO1lIP47Xv/gB/88M/x+mt/jWtXw0QHVwyTzqZ8XQc2514KUFROKMGohKADNkzAimPGCFQ+vY75QIPkeHtDFiwFG7AUMEs4k4Bp2lODR454lD6+giGPAd4VhzDb929H43t/87/ihX/+z3jl9b9F0rUDOHb8Xbzw0/8Xb7z+d/jNm9/D/CzTPBQryYNGVDt4MASU0kmrtwxbSraiefge5iQNTAO2mzE+K0w3asSir16p4jCPeKUJC7BhPtQAn9cqGt1k8IcDFhQPZuNs+SF8fOdDHCnbh7zuK7jfdRFXTUdR3pKGBWFS2fpcu0dJ61AtuzcTfLM+Yy34VoeyOijlsGbh85wF23e8iP/rz/4HnDi+FalpUTh+4n1kZkWjqjIX//zCX2NivFxrbEJgpjF3csBzb+rX+vZPAX91sOopZApAENg3wutlelozplcacKPuNGIf7oOhKxPzQQuWPCZFoHBuGQEjIGaUQLR4XViEEzNwYhJNmIINDm8pqiZv41bXFSSaT+B42T6EP9qB8OJdOGoIw0lzJAqbL6PxyU30zRRL0ycpCtUAtrov9iRQBeu8X6k5EdCi1F1U7nQzwPOFcylAyo6+uTLEmqKQ4jiPab8ZPkYJv+79uu665/1y7rQo6FcC3/oa+MNfmWrl9bswO2vDjl0/wbWUA2hte4QzZ7fghZ/9BX71q/+Oc+e2IzsnCrv3/ALf+97/iR/84D8hJmYvunrKhPlWTtHm5nxLwaVE51StAnO+aecIvlXaiXICpZmWzlTLWHJ8tbNS1lTzqrIII12y/0g2cd0ss/6FX2uXODSMcHL9sZ7CBneIDs+z9Cban7WX/rvfxSRz7fFaTXWSvdME+EmoqdQv9pZgUeHThSIce/gJuofuQyLKst9p6zm3fCYFkhlJYSSMkSE6oXqtBTvY6hd/pooWNRAta/LzjqyK2vPznl36e7SIJM8dzVH/U+Mt/f8/mSvEZdtRvHb7l3jz4ZvYXvohXkr+Ec4Z96G8/xrKJjNRMJKMsqdK8cQwmonqsSzRBCcrzmJMXvx6PdCt//4bC77LBzSt75F0PBq8hiTXCRwq2Y4ub6UK18uCXv+A0Af32/Iq4FsUSzTwveLEk6l8HK3egfqxHLjloGKHyec/u2K+GQZmk48mDEyV4JAhDMVPszDj1ZhvGUNuQP1qlqJKlyMHp0/+Fh+//wO8/MZf47VPfoDZSbIgHcJSeWBBQO6R4dZWOEeLEVN2EHmOi3CLDjLTTJThUTmtmuce0phAAVsurASYg+lEWW8GYhuOommSBS/Pf65v5u8JuJ0I+FkAy/tvQ3d3Ie7cOY3unipRjTl46FfYuv1niE/YhwcPriA3NwYP8uPh97dKcwl2EFUg/Ot5fgU8VASJoXM2kREwx5bEaMDJqoOo78/BEiX75JBg22InpBMoC7yoEBJswoyvFjP+OiyLHrMdC/MWFOadR+rVcFgb7+Oj7T/H3oOvIiZ2G9LTonHrZjx+8uP/CEdTNvx+5s8yDMwqfLL/dkkJWvaaYZl9gK0l29A3Vwq3FIBuHvgmU+MXaUCVLsDDhUW5Kv+ZxXYODHqMKBzIxKnqQzh491NkN8bBOHkfvcu1GAuYUdJ1DZeNh9E0cE+ay6jIAfOhVa4rGTJ20/vjgG/Olea4aKwY1VNmZqpxP/8stu94Ce+880NcuhSOjMxInDrznmicm+pv4vXX/hZPnzyQfU4nUYFvxarpOfD6obz+XlN2g+v2s+Cb9okAnECEqUgu1AzcxDlTJA6U70Z07WHc6rqKvhUz2jxlWBAwyzQxAgiVWsE6FQIBOmWMRpDJ5hqbDTRg2FeDdncpTJP38Kg/A9fbL+Ka67yK0JTvRWTpXpypOIgU8ykUPr4Gx3geBkTm0KzY1DVOEp9RbKWW7kPgwbofaosTjJCRH5yvwmXLSWQ3acw3WUthz7+evbr+PPA+OBfPB9+moeuIb1zDfAdaVplvWU9CHPCz/nDQ/exv6dgxH98OV8tNPO0vFj3vx48forg4EQ8fxsNkykL74/vo7HyEq1cPIOfGSTx5Uina4FQiCUl6mF7T9FXHmuPDbpu0eUw3YqTLiQWfGWV9GThXd1hrskNii+y3xlivnkf8
e01VTGsiR7ZX0hfQBLcGUgmgaVs9sArIJtlEooJFf7rNlYiTgFOuI7VX1HrSQaoC6iLLu9oJVtUt8fP07rD6q4fKMlTsYYE0XCIDS+KA3/f4SnGkZAtax+5jKWQRYM01zr/VU6b4HAL2GQ1jSi/rPigbKwX0nEfuZfU72gwFqjk/TLvi87FOStk6fi/CBmTZmXql2UMB2wK4lc3Y2Jr+qnO+8b8nAfB0oRx5/clI7Y3H9tKP8L3E7yLCtAt3+hOkDX3pRBoqRzIEZBcPJKNw4BpKhlNROpqOopE0PBxKRuHgxuUIv5nge+g6yjW5weqRdJQMp+Bq6ymEl2yDbTofPo2V+KZN4Gbcz6pUII1PkOkZTvTPPkSUcSfqhrOxQNAgG/b5C0s8ZoJrKkiE7BiaKcOh6oMoepKJaenepxsXHrxUAyDbThbLhekpIwofnseZk+/j3LlPUVR+RTqTAe1Y8NXLQRhk++WQGdbph7hmOY/E4nDYR+9BFcNqrGqwUfK/xVvmIUeGnZtTHAg2p3GKVFlhdwoSLMfQuVD4rQTf4mgw75TPy4IovwNzcxaMj1vQ1l6CV1/9W2TlRGPrthfl6zfe+Hscingf1YYs+P3tCARo4Gh4nz+nm/p7OVwUO0GAQ2MsaybUhJHlapwrC0Nq/Uk87EpG1UguHrsrscSIRYgGnuoWDtFrJmMuLAzz4H2MiDRjeqoBvV3lOHdhJ375q+/h+s0TGBg0oqYmF+Fh7+JXb/wNxkYrhUVRaToa862t10WvCVXDOdhZugPjS9VywIni0CaxJeJsaLnZcnhwbZLl8lgxNleJ/J4UHK2LwGHjAaQ+ToBt4gEm3HVYmK8V/fF+TxXSW84iznQYT+cq4RHpOY1B5udozA5ZNiVtthnzykNLMd/cR58B31Tt8FvR1/dAWO+DYe/i7bd+goiI93Du3FZEHfsNbt2Ohtl8Cz/72X/Gk7781ZbewvBpoWQFoHVQv5F71sEC36sfqnzl99p6dlNJyAnnUimirVF4Oed1vJT9Kj4q+AhhpXuw68GnKO9Kx7i3RhhwFRZX8yFsuMj30dlTyjBSxCra6Da4Aw2Y85owuWTE+KIBnXPFcE4XoG7kFgq6kpHWdA7njYcRVXkAkcaDiKo/jNTGcyjouAbz+F30easwjQZh9uiQCVHANAxGHQn8mUMbsOHJfBkuW08hpeG0qGOwWFPWy9e5X9f9Xxz3jYHv8t5MBNjmXQPfbD4k60nGmHP31cE30zsIRL0+i3SvpE30B+zweBxYWLBKWh7lIZeWG+BesmBioh4z01Z4vZTDZO2CBYGQSRojbY7d4/gQIOrdTVUnSOlw2ZMm4Lt9tkCT01VOqczxKsmnjS/T5BgZE2CqimwVIFZ1QVyzipnWajBgl2jagKdK1mbV4A0pys59nIRbrUm42ZyIG8543HDEI7clEXmdySgZyIZx6h7MC48w66nBsrceHl89vH6zNOVSqjwaYGbxoF67QOCvR+GkqNAmzPfpku3onnigJE61tBjZ61odxwqFCWi7gywQf/7lCykVINp9/l/ZJ/LMfF5+z3QVLRVHX0+fGUMtDUVj3Tdnbjdiq9Z7D882F+Z8DdKpuvxpKrYUvoe38l5HQkskSoeTYBi4Knne7HRZNZopXdiZB87mPCWj6SgeSUPpyLec+a4avo6ygXSReDFS8WQkDWkd53GwZBvK+6/DS29KN+7rGqX1Bv2b9fsvgm8Xhhk6qtmF6oEMzLH7nnjNz7/vz4LvJozMVuCwMRyFfRmY9rB19jPwzQ1A1ovsK1kfgsHJCSO6uwrR11OGmakGeHx2LFJejE1GVmwYWixHQVsSzlRFIMYYCWNnBqZ8St2EOV7K0yWoVEoJ/B98NjIP7AwpRSGsNvebcf/xZSRZT2DQW/H1AtDNWjtMMxDgTcPDCn1eLRgarsOZs1uxdevL6OquxPYdL2PLlpcRfvA9HDzwDrIyTyIU6kYo1IoA8wdXjdTz53YzDJYwHZpxZFGWpJ2Q6Qs1Ydpbizv2OFwxHMWFqkO4YDqKBz2pmF2xSMtisj0+FpwyR1bSj+i4sWiHIIkKCl3wejpwPPpj/OL1v0VeQRy8vg40NRVi25Zf4s1ffxfTkzVSMCbhT9GnJYPCcKYdcx4THnQnS2OTqeUaUashe8Tfbc6zs6iRDgdZT9WAgZ/9eDgPqTUncKLqIFJb4mAeuYOOxVJJR6FTuuKnoocdjcO3cLHhCLJc5zEvTRtUOoNyJHTwrQAMGfDNuGcFbp8Pvp/0PcS+vb/Glk9ew3f/+j/ilZ9/H9u2voyDB1/HrVunUVWZhZdf+q8YHSkRJ5EMFqMXOpBUkReC7z/0nhVAUU68lg7jo3PvwFioHsVPM5HqOI/s9kTktCfioukYrjadwf/P3ns/xZVu2YJ/08TExIuYmYiJiXjvTb/XPRP9uuea7n59u29VXZWuVFKVVPIGIZBBBklICJCwQkIWhBM+EzIhPZl47733JKTPNbH2dw4glUpwb3HrqXr6hy9O+jznO59Ze++1104o/x6umTdYE3UPcmLJTSWlzQ0w+VqUSbRNX0sg1BVh1DqjDB/eUyZFr4RsYMGv4eVqdMyUwjnxBoahJ3jX/RgZLYm433gNDxxXkG6/jpeeu6jpyRRd/9H1GswGGhAIuCRhjSCcxvToeh0eNt/ATdMFzAcc8IvW/f6Mxf0ZGzyXPw18B98D3xqdSQdL7xlS/N0/r0mNANLWZL1nJJeVk+k9Zr/SWaEMHUrfKpEAykdq6kC6qAD2kXayBb6pKESA3YoVnw0VfZmScNm9UqrAt3h56cGmQaZfu+pjzmcCUNHCl6Rqj9rPNIOMawBpSaMhM5oXilHRnY4cxw3cM13CDfMFqZya4I7H9earSPbcwEP3dktuTJCiZawafNcWjzvWOIneZDkSkOO6iafuRLxoSUJ+50MU96Xj3VC2RLIdEy/RPvUW/XPvMLxSg3Eaoz4LxldrQKnBS4Zv0T9fKso5gpu4jvK6iAF4z7WkcI519ksk2q61NpERDZKmI9ECJQIh0SjZz2kkc90n2NaVkrTfFXyhYQxZ1/V+VM4aJaLw+cwh5neEKN0ZakPE14RlvwV1s89xqvoQzhr/iFf9t2CZzYZpIhsN00+l3Dy1wA2sgCmJlky2zN4qTa9TSz51/Cw938bJXFSPZ4uVUT+ZjdrpbDztv4ezVUdR0J0KJkns36L1+QwAXpOinThk8Vee71ZMb1QhvuE4jEOZWPKTp0qP5afPW4FvNfgJaqZXjbjUcB5lg9lY8DXIxOPiQrCtL9xMulRFKSiFRpoByw93IRqkfJkHPpFHa8L4ugFvOpNxw3Qe6Y4EOMZfY3azHlRD8cuiqW/ePE/dO8ZJSS1xJp1w8WqT0N/0hhkv2u8j3XMd82EmIH76uj7P91sRDrpksRbvd7hZVE7y8uLx5Vf/CdXVOVhaacWRo3+Lu3dP4PWr+7gUcwAJ1w8DIPjmRkQlDi5QP8/1S9EcSaZRoF8tqAr8BiIedC+VoXHsDfI7UnHXFoc3HQ/hpfZ1UJX/pgHIRTd
CVRsm64r3tBXra3a4XXmimFNjyMYXX/0XJN49AlfjazQ3VyD3yS38/d/9L+jreStjT3IHJMyvJB65ISxuWvC8LUmSPilvyDFFz6QArX3pH+V92U7wbIJz4jWS6mMloa+kNx0jKwbhhVO6jso2Pt7fSKtEaqq60pBsi4FxnB5EVXBFjA/Z0LghKR70tgdxP+4px4YGkoRyonuoFWWEHtqVZQusljcoK8nBb/7f/4xDX/8WNxOO4dzZf8LJE79G/OU/4ve/+yvx4BN4sK8FYIhRo0deeO57W185XhXQ3h63fE341Ft8/maRUKRm+NymBVNrdaBBNeutQ898OZpWSnHixRcwjOdiKWKXaqpKGYlrCDd0VSOA0QoaiPQy6k1FLZTiCIG3hMqpKiRRDa59vD5qlTfCG7RjZcOCzqUyMaooL1fYmYIn7ttItV1FsuMq0jw3kdF+F8bebLROFWJgtQazfisGfGbRI79luogl1gXQqpP+XHN1b//De8A+43xWfaevvXrCpaKdXAQ938FolyaDyvv+Ifjej/GqfoPnwpoBaqyo+gHcc3Rgq6KFar4wArtNX+JrDoRFp30/zof9w8ir8nz/AHw3XIAC3yrhkuNGnSPPg9/V+9ctBgLXQVExIvAMNYke/pyvHi1TBajqTsejltt44LqG+7Y4pNqvisZ9SW8GaoefwzL1BraZN2ieLULLXDFa50rQMleC5pkiuCffwjH6GpbBFzD356Gg7zFe9qTgWcd95LTeRabnFh67EvDIeR1pzutyfOi8gkxXAp403kaO+w6ym+4hr/k+nnoS8bDtOn5b+k940XIXtuE8uMZfwzOVD6qf9axUYHC9BsNeA0Y3aoWCExC5QnrvVa4WpTiJH0gLlIi77rQTQ02B7i01J6GN0fmgIqocfwqHcF9h/2ljYl/W8P0YE9u/wesMkFYpevtuqYRN1abKkUycLjuAi6ajeDGUhNqpHDTM5YHeb/K/a8eUHKGZSZijWTCNZ/+yOd8GAd85YlE0TGSjbiobL4Yf4FzNd5JxvhK0K03Wz/Am6gPszz/SC0AeLPVoCWZbMOurRrzle1T3P8bCpmXvnm8CX+FgeyQsG2u5iHcDWVjY1MG3Wgy52Qr4E/k1pX3MzTxMOkSYCRxqkNI6pFf0Xfcj3KyPwZP2JPQuV6qy1RLyolY2F/6dFq8+wBX4Fkm9EBU9FrXyAAAgAElEQVQOyE1rxuBKNXKb7yC75RZWIpatCfrn95/+fz/fkTw5faGmLm8g4EFDQwZ+9y//O377D/8r6ury0DtYhYuXfoeLMf+K+PiDOHnyH/H40TmEw10iz6dkI7cXqL/09QsPkd52oXAxQ57eDY2jF23Ripy0wL1YivS2RBR1PVKcbJ9KKmJ4U75LL5CUcue4acX8vBlZWWdw6uSv8OzpdRz4w18jJuafcffeUTx4cA6P0uLwd//tf0ZfT6F4+8PaRkxjgLxBjtc5bwMyXDeQ0BiHZW+9Uq7YR/CtAKJbFE02/HbYJ97gesNFaXUTz7EcsisurHiEqB/dig0/+emtwjt+25qEVHccPKtF4ikTyhA561r0QgFS3YPM+bAfY5Fj48fAt/JeRyj5F+lDJDyKvNzbKC5IhbuxEHnP4vH9sf+Gs6f+O7LS4xGk6gI9f1KURxlcO8+RUbCdz3/sMa9TB9oKoKjrVNevACCNOm7opCsxahCVDZr8VzeWwy7Ujj5DbMlRWGdfY0XAFjm5qu/oHSNtR6dD8XxJYaMhxnWGoICgXLUmBIJu+ENu+KWICvtCeeUESNHA4JqqKV1wY50ImtG+VAbjSB5edabgcdMtKf7z0HkNaU038aT1Lkq6H6Nk+AmSOm/jjjkGq5SlZP+JR3Q/7ut+/YYODvcGvgNb4Jv/r8aVSjLcn/OROaatLYwqCd9Zi1xxzLCYHROueeT44ucJvkX9iMZllADdIYn9Pzb+/rTX2T87wTfleFshtJP+LNxquICupWLhYCt+s5pT6rx044DnRIqGG0xoDIc8WPQ1oHehHNbxVyjqeyyA+GbdBWTYruFNSxJqh56gfaEUU4EGbBKASuVYepsp8aryGbh+SCRSwCrXG721Yg4OTIQtGPbXoXetSn7LM1UA5+gr2IaewzqYh+K+x1INuLj7EV61P0S25x4ynbeQ4biOO83x+Jv8v0Ns7UmkOK/hkfuGRJvSm24hu/0ecjvv40lHEnI77qO6Nx0NAzmwDT+Fa+w5PJOv0DpbgM7FYvSulKN/tRKTXiPmNs1YCliwJspCjUIDFu94mEYIPf8qqqhkUxUdhXN4637tAOJbr+18/3/AYxrvPF/ua5Qg3KDBGHBL4nZ28018W3YQMeYTeNp3F2Uz2TAs5aFm7gnqZnJhYhn6kUzUj2ShnnTpPSZdfpae75rJXNRM5Ihrv2E8S8D3q9EUXKg9jkfOBCwErAIWPpcbt7/nQdkzBb5DIXramjHnrxHwXdmbhoUNgm8mPewYzB95LBvWDvA97zXhsjUGpf2ZmN+s1ziUyivNRU8S/iTEx+IjDuGAh2n908KlNwVubEQaUTv2DAnGc3jadh/9qwYRq2fSHIGbAt6a1cvNaSdgEtoADQsCPKUFzUSRrvl3ws181n4Ha79Q8M2NhaBReZpasOlzoc6UiqNH/xqnTv0Wycnn4Gx6gZJ393Du/D/hqz/8X8LFbW7Ol2p86nsMr/584FvAs4QdmanPUCMtf8pV6UmYSg3AMVOAh003UNTzWNFKvNwsFVdfQv/0RoadUiQmgjasel0or7qPf/nd/4HDB/8GV2O/RuHbRNx/cBxffPFX+ObQ3yPu8pfwrnGcsd+4yTVr3hWGeZswvWbGQ0sckjsSseq1qGqFMp72p39oaLBCI72v9aN5iK85jUR7HBzzhVhh4RitgAoNFJ6bVHUUDet2eP0OvPYk4lHrNXQEKiVcToUOAgzyrmUjlWvSgcT+nLMaGzr4JrBVTQFdBaAJFKRSbaQDU+NmzM/a4Pe1YmqyBjZLOqz1uViYJchRyVMsNqNTnbZ+R6Og7ba+6O/rAJz/rb8mHmsxwuk1Iw9UJX+xEi49hkyw9UVc6PMacK3sBJ403US/txqbrNZL1SXxlDYirCXIhQjEQswfcSvtdD2JSyINmlct0gRftAXeaBO8VFoRbx1BDSlBSmVC/bcKnfsjjWJIEagzr2Yj4saMz4yhxQoYR/PwpPOBAKnUhjjcqL+II8bvcOzdN2iaKcDEhklk3Lav99Nr8c/zOfY/x8RewHcOPg2+9bXszz8KN18iBBxrTUqWT/YSDWjL3Of5MlJEY4v/pbzdqlgQjXoaaTvH1U/pZ+X8Ec+35sUl+GaRnarBbAHf7QuFWrKlkuHUaW4c2zRIWfDH57dgdq0Wg0vVaJwuwqueNNy2X8YV0zncs8WJc5AOrrllAzboKNOKyxFQc72Ua6KRHqQRqVNs9GjNdvKvfJZ0QKGxKAUSlSipgK2okkgicpMUw6K8JqUFVyMuTPkaMLxiwMBiBRrmXuJQ8b/iSWsiyoeyUTqYKUXgnrffRy496e5bSLFdwQNLHO
7bL+Ge7SLuWM7jjuUckuwXkeKOR3rrdWS235D2qicZRQPpKBthFfLnsM7ko5Ee/Pl3aFssQ8dyOQbWqjHmrcW0gHQrNgQfqOJh0qcyb3/Kvdz/75Krzkga+32D0TWukRsuRINN6F034lnbA5wr/xbfGw4jsS0eWSNJeDmWgnfTmahm9UvdAz75Cwff9HzXTD5B3Th1FbNQO5WNgrE0xJlP4p41FtMBsxD7P72o7dek3f8b/anzpmdYGjf9oFJemA0Ycb3hJMp7UsUrqMD3pxdGbqoq/MriMU0S6r1iiRHu7syGWXG1pHoaF0D2FRdAAiuG10mBUAuiLDxhO/xRF/o3zLhkOIW71jj0LlQp3miAngBFmRAwJZniOvBm3/E8ueCSNqBli0dahEPGja9puhDp7pt43Z0Mb8S2YwP/efv9U/dkt/ck4XILfLCgjAcrK1aMjRnRP1CNxsYCTM8zcaYFXd2lcLqY/W8UrzfQocLrYHLRzzdmudhIIqzGa1SLjwbipPoXVXIaYRl/jhTPDVQMZEv1QqyRYtIEH73dsokRJDGRiTxwBWS8via0tRaizpCB8UETfBttGBurhaEmExVljzA7YwNYCILFTUhlYZVRjTPPTXt8tQ53TReR15eCVR9VYLYT7na9FxxrO43RrfuiXud1rgcd6F+pwtv+xzhRdhR3TZcw5DWIx5QAkWFx8jY3WJY86lHJpeT/Bpsxs2zAE/s15LbdwnioTnmymBTIojU/AN/63NqPsUzQrHko3wPeelic/8EkLAuCQep485zalMpChN/j824g2iueNRrWig6gQsPKg62BIZn7HIu7NRraCky9P3Z5rtpYJlCmMSJePZa9bpb/pUKOdeYNvsr4DayzL7FCylvICQQdYrSLWowkaNOzppK8STuRcSvXqYFu/j4bK6RKcqbGYxWvPj3kyjNOr1ZAj9aIJrOWFMaoHgEem1RZ5cbbjIWwHeN+EzqW3uFFTwoOGI7gP+f8rRhpT3sfonn2LSZXqjHvrcNqwILNsEOMV84Zcp3ptVeee8VxVUl4H3ksRoJSw+B36dWXIkpaxUw+5+s0QJjkLiBQk2xlQaGQJCaqwkIBVhSO2OAL21WjZzJkl6JqjPCYRrJw13kO5f0ZWA65selvlCql60EzVv1mrGzWY9VnwVrQirWAFeshG9Z9FqwGrVgMNGDF34AVXwOWfQ1Y4+f8FqzzcwErvEGbyJBu8j/DDviYHxTyYNNnk0Q/qscEww4EqbgRssEfsimFjAijFQ6EqCRCI4t7jxZ5ITim8bs15zmmdrYP57k+5na+rj/mexG9mBxBLeVvm7EYsKFqIBuJDRfhnnktSb+sBsrrWfY3YG6jDhNrNRheLkfvQhFaZ97gRX8aktzXccbwPb54+yX+pfALxFjPo3LyGRZCar0SZ5SMAeXV5jhkrhTXFY5jOj8EiDIa8KmmOx20qIGax9p35T3OWcp0kgbDsUPuMhNWSRVpwXjEjPNVh9G/UCrvMRLli7qwHrZjNWTDYtCCmY06TG7UomWzFPWrr1E5nYPCoVS87knC885E5LbeQEZjPFJtMbhlicGVhguIN59DXN1ZxNWexTXTOdxsiMFdezweNF7Di9Z7KOtKQ01fBizDT9E0/QbdC8UYXqnA+HqNeM9nfWbJzaARTmcIr0E5DD++Vqo1XY0FtbZofaDf3594ZIQtLFWaVaSNaljMYYoE3PBFWrDod6GgMwX/nP+P+LLoX3Ck4mtcqP0Oyc3xKBp7hGqWnp/cO+WE3vHP0vMtmaRT2TCOZqgqQrO5oLRLsuMSThi/wZivQpPw2jExZaPgQNQbgeTPB2a2FoifOAgUr5EbQrOEt2iJjQXNSK29hJKOVEytmxGOMhGCVd1+vBEAsrwsARFlleYD9bhTfwn5I5kY2jRI8Qg1kT8+2LeuJ+hBcMONoUA9rrddx9mSI+hfKocUyZASxypJb+vzXHA0UX0B3FsbuJIsovYvQQGVJULhZtSOPkeq+wZqhnPh+wXer+3r3qUff+K42P//oXFFUKZKcOvhfS6ECkgzsciFqtEnSGq6jtqRZ5rEVKuAZQSd4pVRC+dPv3aCbmloAXWVY2uOobEvGytBq3B8lfdSeej5nwQzAnA0lQqdU8hz5xwi2BNqgFAEtIIaYRem/GZUT+Uhtv4cjr/6A57Yb2CBmyJa4Q074Y26hHLDsUgFGAJw0c5lRCDqQf96NR54EvC2/SFCfl0bn+vQX7bxehSAUxKGUlhja61TRkyU4IIgRmsEsDubWg/5fW54Kpql3n//O+pzHB+6Qa7GiT5e1FHRBXYdlwS1BDqihKCAMYuTLPkbUD6ei1/n/gats4VYJojkHGFI36+kJ72iqKOqYQpA+ZnmkE9k07h38J62SI7My94UHCw8gBfO20goP4lTb79GTNk3SLZdQlF/Chzzr9G9VorhtXLMequx6jdhOdiAhaDlk23OZ8b0ei0mmCi3XIXJ1RrMeOswu1GHGW+tPB9dK8fIeimmVt9hfqkUI8vF6PZXYWDjHcbnCzG6WoL+9RJ0rbxFx1I+2uZfo3nmOTzTeWicyIVr/Amc40+Q676CS1VHkNFyFca5J7AvvkDDZA6MI+ko6bmPt933pKhIxXAaygdSYJjIhLEzGW9HUpDZn4i3nXdR2HEHrzpv411XEsp7HqCqPwWGoUeoG8tEw9QT2GaewTGXB8/CS3Qt5qNnqQB9q0UYWCvBkLcUQ953GPSWYnC9BMNrZZhaN8p+NBo0YcFXj5WNeqz6bVgLOrG2acXquhkC6IN2Uf3Y8FvBJgogQTt8IYcUZmLV5s2ADT55zSWiDBRmYISLlR5ZsdoXdMj3OPamfSaMB8xoXy0XUHXPeAFv+9IwtF6K9sU3cj0vuxKR1BiDCw3H8U3tH/H7mq/wz9Vf4JvKo3jcdhfO8VfCn052XsGRdweR2ngNU9xbQ6RP6tjj/f3w/T3xp6+bav4pA5dYQY80UfqWALLPX4lTBiU1GBClNLV28rPqu5qjTObWNldbxj5zJxgF1FR/SKvbCNix4rdgYdOMiTUDeubfwTX2SpKWC1se4HljIpIsF3DFdBwxhqM4XfFHHC8/gGMVB3DccAin6o/inO173OlMwJuJbHhWSrFA7fwINcbVmiu66Tt0wnmuQVFIU4ntIYmM6XUVlMNg13VoH9aOkdUa5LU9QFzlSSRYYhBnOo3jb75AVlsCKuefoHrxKSqnf+FqJ6KlOJklZHZmktbO5qJmLBOPGuPwXfVB9K2ViA7r++CRg0kH3jz+MsE3Qx/CvRVLlhOlDUPRetytu4icpkS0rVRgKeLAWsj6ycZiIfNwYTJiw/hGLTwzBbhkOInXY5kYDxLAa16EXQel4kC51ktxtvw75HelYJHeSLQIyPGiUSTnuNjooHt74eHiwvui/ZdwNrmhc5K3CsgpH8gWLpppJE8K7/wck+jf/4PeSoJuRiyUN1XNHX0z4P1pk5LUpb3pSGm5JRKfCgAxKUVRUxgS3S/wLbQdTSO+e+UdYmq+Q+fEK6yF7ErakmFAlt4Wzjm9J
JQ7bALHH3W61Xkoz6pQZ3RvqMb/HQmaUTSchRjjKXxfdAgp1isSMl1h6JcSitQcl/C3kpkTTrym4ys8TVImoh50r5QjuekG3gkNh2NZ77O/7JHeUD/7IkK1IwWc5Z7pPFke5fy1+Sbzbud6yMc6oNaP+vv6cx7VayqR1gnQC8lGr5B4xPXv8LiHa6ZesHBAla6wGERhanY74VkuwcWaE0irj8PIaq2WSK6MGN5fUkbEG0xe8BZY2MN/7uW8PvEZGl30sOsAZH7DhNf9jxBvu4hJnwXDYSsavCV4NfoYd11xOFtxFMeKvsbxd4dwquY7nDWdwNn6k4irP4eE+oufbLGmM1IxllVjf6ydNh7DKeNRnDd+h5iaYzhU8TX+puQf8WXVH3C+6ijOGb/DiepvcLzsoGrvDuL4R9qhd3/Al2Vf4nDl1zhl/BanKg7jTNkhXKj5FheNx3DJfAKXzSdx2XgccYbjiK87iZuUg6w+jF/n/wPOVhzGbdsFxDnPIcZ6GhcsJ3G+/nucMR3DScNRfF99GN9V/BFHy7/G8fKDOFN+WDX+D1v5IWmnyw7hdNkfcb7iKK4aziDGdBqXG84j0RKLJMtlXDOfR6z5LG5YLyHFdR0P3QnSUptu4FHzLTxuTZTE2MyOe8jqTEJO133k9iTjSfcDOTIy8aw/FXkDaXg++Egan/OzKc03cdd5RfI7LpvP4ozhBA4UfInfPvk1DpYcwIGSL3Dk3QGcrfwGt+pO44n9CirbHqBp4AmGJ/IxM1+G+XAz5lbq4A82wh9txeBKDcoG0nHTcgFnSg+jN1CpchX0yo76Hrg15vQ5ul9jeb/AtzI2Fa5SnmU9L4QAXPKatGRmmZ+U5Iy2Cm1rM9KMtYgHy5FGrEScEslaDjswH7ZiLFiLrrUSWCefobw7BflNt5FafR7xb75BTP5h3DddgnniBZiHIb/LJHeux0LnVMnxNCQIusXpwii6UFp1h8d+9eOnfycUbcYKaZGMioXt6AmbcK3yOBJqv8eboYeoms1B5UTGL5vzTZd81WQmasezUEu1k5knknyZ3Xodfyz7Ep75l/Dp5dDfG9A7N4ZfJvgmiKCFGaFXLcTM/mb0Rk24VXcBMaXf4r7xItJsl5HkPP/J9tB2Can1cUg2xSK95hIeVl/AP2f/Bo/bEzGyWStAYk9JQ5SBC3swsmnBbcslXK06ibKhbHT5DQLu/Vth/o+Bb31ykNdLviY3d1II1Ma2hiYUtKci1XoFrsl8rdrgpyfAnjb8rTHx77/1Y/2le0S3DCVSA+hRYOhdvJUtWNu04IX7Dh64r6PFWy50AerByn2UMsYfUDx+Ur8rlQHOa89cvoCKsaV3AtIk9C7AnJ5vAuwm8Y5GN5yIMPqiFX4gCPfTQx1U2fnLUTfawgbk9aXiivEM4gynkdp0EzUTzzC6bsQGNxTSpEQmi9U1PVKCmUUihLYlSTj0DDGsSwDYBM9cIR56EqQwlDJGdq45f7nHNGx1fd/tzVHds3CIyaAqgW3rfuvG9ceOujH8g/e0TU02tm0gvmVAb4EI/Tr3ML+YtEreqqiUqCQ29jd59HNBq+hxnyn+BqaRZ+LtpLOBdDR6K1kpMLpJmhNzXLiW7OH/9uEzQh3Z2txbRGs+r/OBhNlXA4yMNAuvnAlnTF4fXzOgc6EUtolXqBjMxtvex3jdk4YX3Q/xoiv5k62gJw2l/RkoG8wSTm7ZQCZKelUCXWmfet0wyqSu56ifeYX66Vd4NpCKQ4ajSO9JgmXqJWzTr4UGM7RQhuHFcowuV2J8tUroEpPrBuhtfN2IsXUjxr0GTJOTu1KD2VUj5rxmzG2YQSNjmhQEL73uZoxuGDC/Xit5PvcbYlHfn4Op9VpMUq1m0yxyjjPynVpMeY3yP5SgHV+rweBqBVqWS9C8WATPQiHccwVwzeaLpCSPjbP5aJh8IfzhlwOP8KwvBQW9aXjRmSzg+1jpYcTVnUF2+108brktLb01EemtfHxLEmJT3AlIdl1DsusqHjZex0PXNTxwXkWSPV4qkt6xxILtrjUWSfY4pHluINVzAxmtiZJgSDpR0XAmKsafCmXEulSIvvUaDKwbhIY2tlEr17jkt4hajo+RZALAYDPCITow2hAMuODzOzARMCOv/yF+9/wfUDzyWJyDP3RG6fuhRsPch7Gq5sR+ge+d80s3ELb3dfHa6zSvHUfZ2+lY02o+RJicvuFAxK9yNagqRXlG9p/Xb8XaRj0W1k3oWixDYf9jKeh2vvgI8hy3MLRUKQmckovCOahhBebhBAMOAeV6Xo0ULvoZjXIaH1LLQmqneLAOD0r603Cx/DDSW6+Jk9g89QvnfBvHM1FFz/ek0k00TmWjfjIHz7vv4KuSfxVQvh4kfWHngszBom8KPP4ywTfBqWSda8lQDMUuwCOFTqiH/bY1GW877+NN771PtoLeZLztTcPb3keo7E4XT93JsqN42peCCV+9WJjv99/OibfjMTfKgAtLYQccy8XiMUyoU16KyoFszK6ZFEVGFhI1UbcA3VYoXlmvqtgOQVaTJFxNR+3IdSciw3oNvUvliiqwbwvSjmv499/8AXBRNBPOE/aTNo+40JEDS7DJJLSgAwXN98Xr1OStUPxLaiCTl6px9PYPEKloyHrQjvqJZzhbdQRz3hoExENP3m6zqGaEyFGXRFFSs1gEhdw8AmTmODA82YSlAAF8IZ62Jgk/8Ur9BWS1JKJh4gWG141YEcqMxhkkLYVNeMAa/5f9IB4ezbgg31JoJ00wjeYh2Z0A0+hzrd92eo3/0o/1zXvHukejSZq6Z+Ih/sGGtPN7Ox9/ZI7ogHwLaOub8M61VX/8ke9/ONck6U5V5RM1ARljvIctEmoeWq7EddN53DWcR1VnOobXDPByEyfPP6Sod+Sp7t842/23ohxjQacYm5Q1nPAaReYt3nAa636nJMpFpAIg+5wGYZtcizfgxJK3AXPrZsx56zFLveUN0yfb/KZZFDOWfDYs+e1Y8lmxuFmPxa3XrVjZtGHNZxfjZDFgh3uhBDHVp1E/+RoLPju8AXK3mXBHqhipBypZWlG12iR6SqOG1CoaDiyQJTr9kmxKlQdq9jdpJcXV8yjatUhkO5xTb5FSfwVNw68RCKu8DqlIy2RayjkKJYqKSUqijt5LnoMv2oTNiKpMyqQ7b9CJDbaQS9QkWNp9YaMBs756oZ/MbZgw5a1DZXcGHjdcQWlHGsbWybmuVU0eq+fjq0aMrFZjaKUSHEMjK9XShperMLhUgf7FMvQvlMmRz4eXquT9sdUaTK7XYdprxqzXLMbTcsCO1aADvH90NEmL8NqYLKxfU6tUO2XFU2pBr0ebsUHjPOBCwO9AE6VKTReQbIlF50aZiixr0eAf7oefK/jW5/onjlx7pWla4TtAuOReyPNmIESnBh0lWrl7qrxwTmuNHnNWtp3w16N5/h1edT1EbOl3eGC8COf4a6xS0pP5N+IB5/ko/r84RrQ9QNRiZJ3afU7vy/ohuu7NgF8VZPPDDdP8C1yoPoIHjbEwjmXC+ktXO6kdy0L1ZBYM
LOU5kYWa8Uw0TOTg7UAyviz7PQoHkiUZYgs0yOLMG6RvCjz+MsG3DCjZ/BVHmklcm2jCXMiGCUn+MGByzYBJ8gE/0ca9RgxtkE9XCyZYjmzU4q49Dnk9DzG8WqOSw+jl3GVj48YuxRGiTTJZeudKUdGfhcS6C0i3XEXnZJGm68lNfdtK3l5wODG4mbIxuURt/uvRRniWS5FqicfrpnuY9f9SNb5/pom/y33a7T5++H5YKwss3gPhTRPAKhBLmUuCi82QExVdj6UgiXHqFUJcSFndTaNj7BvlhNcmC3orlv02VA1n4WzNt1jy1SIUUbxq6mmz+poCPYqHKDrPlI6LNmM17MCUtxb1o8+R3pSIZFu8jK381geibTu8Wi1JZBJVCqnKheQei7ERbkaY3hpJmCTgVvxqSfIhCOe4lQW/Ce9Iw3EnwDH5WqNC/KUBt/779G4pzrQKBVOJhfNNGQicWwQLygulv6e9r12DfPa9xzpwV9eo3lfe6a2kP0kCZCIgaTeqYqeeTPjhmProc3q+mTTIdYQGG+X/tH5nxIJcXMdsKe5b4nGn9gJMY3lYI8WF0Q2JwuxMKP2Z5hr7mU4HglmC741akVUlRYLgm9GhYLhJZGD9AlxZZZDSdW2IRtgIOgg0NANP1r0fSazbmtc7jSId/PB6CWTYF2o/2Ag1omWpHBfLT8I5XQgvOfIhAhNVLl3Xr1bnoIEf0XRn8m2LqDiEWI2XybhspA4wCVYUh1pE/nUT5Ojzmsix7UDt9BvcsV+Fa6IAm9FmoVrQ4FDXy//WgBaLbIVUk2Q1rYQ7DSkmArKJ0g6BlCTmMwrK625RY4KGA1pgH3mJZ85bMPfnShSE/S0GtjgGtvuR41XtK1pkVYxQ7TWRQGWCvyZVybEnRqkGHjm+aLTret0CrCi3qhIWudZIiXuWW+c5ikSgBiSDzViIerBKhYywB1OrNXjZcgc3686hZbEUG7BtOZJ+6P1W+6F4kbfu/U8d1/vl+eZcY9Rvx5rzHqZSdCzm1DC3htQs6VPdYNf2Dxbr4Vh7rwlFRdVEoPoRDbDNMBP1W8RIG980o3L4KW4ZzyPZHAvbdD6WgnYpxMTk8JBgFc4p7gFUlWrRKmxyrvzU/tvb97nfkVPP+RgNOLEQsSKt8Qoumo4ho+MG6kYyYB/de9LlZ5lwaRrLAst2slVNZKJmLAPWsRy8G32Er6q/wpOuW1jwEaxx4uod928DfHOTowqB0DPoVaHeNl8jHUXT5eViFkL7Lk3Je1FyixszE8jSG28gt+0eBpYqVEVFWqlb/af34/tHASJciPwuRNkizZgONCDTnoDHtqtomylSmdvyO2rTeH/B4QTlpq2oAGIdowlL8CC/Ow0P6mNhGsgV2a/dzuXf33//3vyU/iCwJO+fAIONY4QeZdkkuQGFPaB8lWX0OZIdV/CqOw3rGt2D4HudMnBciHYZP3t+n78dacXchgWFvUuvq+UAACAASURBVCmiqcqkNZGeI+WEgIFJyKITrfiH/ogLC34buhYrUTX0RHIiWBAoqfEa3nSlwj1egPl1k8jbySavKVvQs6mKIWnXTaUSFvrRJDPJZdc3FhoikgioyQ++anuAx8230DpHo5PzRwdKP8NRjABSTaivrYA1vY1cG5QkGUGIAgty3AG0eX91oKLutbrfKoRLYPL+c/V59R0ZI5r6hh5aVuoDe7j/3DQJgPj7BFlyDUqZQcBpmEmtbSgdzUWc8TSKezMknCvgmyH+LfrHHv5rv8aiBr7537z3U5tmqcp53XhWwLfohUuCWKt4kEkNJFBTfa8iOPTwsQ9ljnE8/UhT/chICwGUAlFqTPF6tddl3LLvmsVz3LxYipiKE2icK8IGjWECc/7/FsjVkpf1JGYBvVriKucPZTP5m/TiEgjL76uIl+igc76FuN+0IBJqgXHiJe47r6NpslA831TTECUSqRLJ81LSldGw+o8owQkNeNY/2Gra5+Sz/Lz6jhiM4tXkXHTDj1aYx14iw30TNYO52BBQrCQmZRyJo0AZZBLx0WmP4rDSjXhl9CjjVDMwt8aGAuG8t5xDXAckr0DWPu6rXGsofcoy654PWhNC7KugBxv06gstrQU986V4aLuM88YTqF8oEL739rqn9j8Ftj9cI/ZrTO8P+FbAm8BWtR/miFCphfeIRqkC4ASkujGujlrEQHMGSD9qeTfsVxrgbEyEpSKLSFLKPWgCaYL1Y89F+vCOPQ51o3lY9DPnpxlBUcRiYrmi2nF+kcoitSG27u1+9efHf4cGh8JTLZLQ61l8i9PFB3G7+TLyJx6hfjwbtl86+DaztDy93pNZqJzIFA1F26h6ftB0CMkt8SKP828RfKtFgZOJCwkXcs1TxEVIPEFcyBk25ML/440eDQWmlCeLId9M103kNCdiaLlKo52o399eKD4y6OR/mxHadCC66RTv4wIakVIfj2RLHFoWShDQim3oC4wsijsUT/ickzTIa5LwvQd9G7V4YL+KrKbb6FmqENAnm/HPNJE+ec3/vzgH5WlTXl61IHJxpCePG6AfbZLQ2L1agazWO3jsuYWJQIOy+tEkyTGkgMg43Y/+EvDSionVWjxru4Prrhj4gg2IkvvL0sasuMYiMkE3NoNOTG2Y0DpbhLLedGQ13hRPd7LtCt52paFtsRSzIYsAI5XAQ0+5AqYcY/S+SsEXbgQaKFLjU3ETKQcmmvVaURjmNTAqwE06x3MbOR1JGFit3OIjqnVop+fyL/NYgCvPX0L9yotEygCLVZFKwLalCiPezW1PE71NOrAIiHFBmoCugU1PqPosw+qkHChaEdcHJZXGcUKgyH5iUwmJH1kvPhwL4g3Tx4nmCSagZn6BcDoJrJrQ4a3EbWssclvuY1yq5ao+5Hr4c9NOJAJEb6sk4zZhZtOMvO6HuCrgm3r8ukdbm0Nc3xgZYHREuK9KMYib9W5NnCoyBnUQqK5XXuf3heuv1nyu4eshB9wLxYipPgnXYhG8AVYlppGk3xtlOHL8ShMeLt/TimL5WSCGBUTUvJL5IFQBValxC5j7Fcik8gwLIT2wX0HrRIGKRhAAaaD3R49c68VjrHmNuZdpz8VQEHCuAziCdIJhN6hwYxh7gceemzAMPwO98GovpHGogLQelaExKYCPRy16x7mtr0lqvOpRV5X0L59jv9KhRZpa1ANqvpOuQ8qa8tDzv5ThyYiNUiBTR84dGR9UGhOKTwvGVw0o7ExGrPk0bjZeRvt8sdQ+0PdDGlEf7onbBtYe5tCHc+oHz/cHfBNsfwp4832JmIrXm2uAGnN6P/NeqHvD61UROXlP+lL1qYBnrgk0YsQwa4Lo/7PSdpi5FI2wTr9BojUWDxviYB54iqmNBhWxkToBjIqp9YQJmHQM/Fx7Oa9X0bMoRepEXssdnC07jNzRZFQsUBo7C/Vjv3C1E9Oo8nxXTmaCjYmXjpEcSbw8YvsWN1wXhCP2bxF8y0AS4M0BrEJm9Mpxw932UvJ1TQ/8R446zYMbJScCB01m401kN93G0HK1xr3i5vHpyS+LEEEPvRPRFtF3no8
6ke66CXIg33SmYnCpCpQQVNXKaN3rv6mugQsNB66PCxdBUNSD4p5HuOWIQ/lkHuZJLdBC6tvf1X/j349/kT6RMaaAHKsPCv/SZ5WFbmrTjllfI1YCToz5TXjZlyrAyD3zVhZILqir1PZmyJYL6db9/in3imCwBQNLlXjsvoa7nQmqtDSVLkJKl5cVEYe9tXDO5ONVTwoSXfG4UncWDy3xMPRlY2K9VjZTbr4c/6osMgE2PVmqzDU3eHIJg1ua3HxPKZmIsSueOWWwCo/c7xYPFzeNQNiJx403wASxyYBZjVnxHtKD+JdtBMWMeDFcT8N6M+TGst8uJdtnN62Y99lFi3bF7wDbqt+J1YATa2xBF9aDjfCGGuW4SAm3gFO9FnRjLeDa+uxGkIlRTdgMOuBnNIBRAt1rLQokitvL+bzn+66BbAVa1PfEINfWN8oLzgcteN6WhDvWWFSP5mrUEyrrsP8ZOfsT/u+nflbmhgbA4JGku+c9KYirPYNVv1WMGALbrTWYGtxR6t6rccT3fCxUJuH7D72d7z+nEUNwR3qLagoQqkiG/hq9jUpVinroDklIPg7rQr7oNQsFYMtzroDYNvd/x/8J1YLOHA8irM0gERTOBypLMEJBbzWjSloOBAF6qBFVk89w3X4JrslXomG+F+C4E5DxsWoaWBaDQH9MsKyBsRA9ym7UDOUi03kDDYPPZN+QffA9Z47m+Saolf2N1SZd0njuArAFmPN/Nf1oAegeBINUeKIHm7rpCrwRxAVFkpDl7JU2vA7s1bVqwF3Ogf/JNaMFLFJFo55ry/RmLSqGMhFvOo048ymseE3SrwqA62NX29N5r37qGH3v+/sHvhVdl9TCjze5HqGZaPdNHqv+4X3k+2KIitY958iO/YHv8/4weT2g1nVF52lGlMW3SOmKtoqaW8tCMbIdCUgyXkB1N6ty25WzQEA6+52YiP26O4bZt77m/Agz4uHGaLABt41nca32FAomH6NyJhvVo+moHc/8Zaud1JHzPZUtXm96v+smsuEczUH9zFOcdp/GJcsJjKzSe6t1vNwETgoOag5EHknd2HHj3xus+mT4DI/aYJbFS/MY+yJOCdUQQG/KYk3lBbvWWJTCBkT4nCBWvc5iBczMpqXOfmJySBa9dp5EDC1VK06lqFr8sA+2F0vl4WKYyU9vII2AgBP+qBsdG1XI7EzC6apjUo1wftmAAAtkSKhUeYTUoFeLPzdrv1jGzaLjevPdSaS1JaJt3QifroDCpDd9QhGMa143deRmyGtRXnSeI39feT70Sb5jo9E2UN0L8t4EZB/veTzs+E0ZTzuf/7Dvfvx3d37vxx7v/L2dn9n5Oh9/+B6ff/iZTz9XHghyVxsxs1aL5tHXsPQ/RVVnBqp7n8DQ/xTO4RewT7zEY/cNXCj7FqVtKQgFOBY9WCOYFT6vfi76/+nP9aP+urpXuudUP24tzqRSRZvRNVeCh7ZYZAzdF1BMrXGqrkyvm1A78xr3224hpvY0zlUcE+51/WIBplmEhWNBL56iecu40Ev5epHgU14aAh3FPWYRKybVcZOhd40AQFN64byQ0HwzQixEQvAZaRK94fuOK8jtfYi+jRqpNruwYQPbot427Vjaag4s+dic283vxLI0F5b9LiwHtKPPieVPNX6HGrtBO8bXa9EyVQjLYB5M3TkwdWfLvbONvIBz6AVcQy/hGn4F98hreMby0TxegJbJQrROFaF1slA+4x59jebxfDRPFMAz9gZ8zmPL5Fu0TxehfaIA3dNFGJh/h7HlKsys12JpswHeoEMiXdKHW6BKB1fq+P5YpOdT79tt9QKuSWoM0ABsEx3f5oViJDmuIL35NqZCDVLeORJoFG7t3sD3x+bBjtf2OO8FvElInB5nN6jFXdD3CPGmc1j012OT8pYRO0KU6hSQQg14J0AFLo4nMRg1D91u81LWe81bq69ZEi3QPawK5BDok/LjC7nQOvUGceXH0DybL3kZ0t8CvrWqqrL/7dgLaQToTQArAT/5tox0EfDynlL7nWsrPe8Et4wQKfpjzewLxNsuwDrxHCzsQ6nLj1ESdr7G82XxLb19XHterffsbwXCVHXS6sEcZDlvwDH8QhmbMmc1j/5HHquxQaOQc1x9Tj1Wc57rFb3cNFxJVRMDVmRvdRBMNR3y491g/QIIwHtfWpPYgtfAvqExuhbVdOnJxxcKjwcrgXqUDWfgV29+C9kPA3YBogqn6FFmDajq2OUHGGXnusnP7thLeQ2yP/K3FP9ZzTXeX9J2aAiQ9sEoQ4uA/z5/JU4bvkHnLClD+j1W0QD1XY4zvR94X3VNf66LrEhNQKw3LReD/HdGBTk+5N6p31CJ7zwPdd8jUqGWhY2UnK3ikqv5KBFJrrEE4rKP8Lw59mgstUqUv3m2EKm2q0isPCM64isBRp3UXAn4yQfneeuRkZ39tr3nqGtU65I+Nng/iCUEG0h//vDz+vfeO3J/8TNa04yJkBVJlkuIrfgW+YMPYZh+AgNr07DYzi+5vLxhTul614/nwDyRg9rRTFgnc2GZykV2TyJ+X/kFnAsFmidVJcdwstFzIB1Kq+pP6dTdFsjP7H1eK8GsGkStCPsdahBL0R1toeGk5WAJcpFtwzSceO65izxLAobnKqR6k+gmS/ITw6WKl00vis4fJXAnVUQ8X6QFhD3C96KHkqWevWEXygZzEF9zClXtj4SryYnDxDXxIgqAUSF8/gbDo0uhZmQ130F88TG0TpcI7wusJMUEjWAjIlzM0I0wk4iinQiF2xGJdEjVLj8rpIFSX0qCURbTKKupNSAk1TE5yTUjhIlInNRCc1FRA/4/Q55yfjrg1zc8HnduevrjrcWIv/2hN0DbhHTOpITSdoZbCTy0tlWw5MPf4AJHo4mGC3+PY1jbLLc0lrXXtYWaVe4iES6I+uf097kQ7SyeQqPk/aZvgiyzHIID63CifDAbXxf8M87WHsW3NX/ERftJxJUewvmywzhZehAnC77C9fLvUNaTgtloPQJwCSVEKvmJ/rNaXFXCM89Fu6YtYKK4leSQr8CNFbRKWxVNe+YReAAqDQRdaJsqxH3jWZjHcrEUdqErXINXY49w0XwCZ4oP4X71eZR3pGF4uVIrtKXUc2Tj0Qyz9xbMP3fuMloUcWORfcwkMrRiOlCPU8bvcaD4AC7Vnka8+TTiTN8h3nwMVxuO45rlBG7YT+Gm8wxuu84h0X0edzwxSGyOw52WeNxtvYJ7bddwv+M6HnTdwMPuW0jpuY2HXTeR3HZ915bafh3P+u7igukY/vrF3+LvCn+DYxVf42r5EVwrPyJZ9yeqj+CH7Rt8X6XaiarDOFFzEN9Xf43vqw7geNUBnKj+GidrDkr7vvoAjlUfwAHTQRyqPYhvqg/gePkfEFt5GGnWC6gaSEX/RhnWI05sRFySq7EZIRilt9cNf4Rh/O3GxDVRLRDajlsAAD3qOp82EHIIf3IzbMVMxI7croe447iMtuUSxS2VSoQc07tvkFIZkd5JqZyqAEmUQJj3kOtSmONyBxj/sd9klFFbY7lhz3
nrxPCMaTgnlR1/dqeOthaRWrIWcqJlsgCna74HvYPegE0BiS0wt3s/fXJ+iKGpvLnk9RLc2Kde43Z9DFzj9HwrL+snf+PH+vXD1wkmIy4EZN0jJdGDzagblUNP8KjxFiyjr7SI0k+8pg//9yc9555qgy/QoPqC6zXX47ADY2sVSG9NwFflf8CG1yL73UbYBb9OmyPoI4+eEUOJlBG8cu1X4FdxrrkPcI/guu1CNNQu1YDZ32FWAg0SjPcDkV6EA+wX3iMW9WGFZHrymQRKg6AFIb8TgxsVOF97FD3zxVI0jYYIx7Q+nySqoOWYKS+9vpcoKo9OpVLn26HlFrQjHGXl0gqEI/xvnocdoQB5/A6EolaJBPH8w1ErAuF6ObdQyK6MOubuMAqz631oxsCmEWmd93Ci7FtUdaRL0j/pt6QKsT6DUlPhPs99hIaVctDJXkrDQPZ3JtG6sRZ1YCFqRyDahs0Nq8jUSlI359eu56KS7rlPiUEedOC64yJOVh5GwVAqGqbzUDOejbLxDNQQfE9mo5qqfWMZMBCQTyvaNJ3JOkD/LBMuWaqzejQD5H6bJrJB6cGGyScCvp8P3seXlV+ibjJP+G8CMBlG+wH43mOH7qXTP7PPKPBNnpwHPh9LdauQt89rlVCYWPGih9ws4USGSRbQipe9aXjgugbHfClmok4sEbiCPLtGGZhLIStmNuswslKFvrlSdE4WonOqUOSaqAu7GrFLmWGW3Z1YqETL9Fs87bwvWqoNfbkCvgnMRc2A4F94tloolqoA5HbOlCCh9ITo4A5umkVWjN605YAN6/oiTGAsyXU8tgMByvtoHFEBly1KrUUUKPRQPDPVG8WDxusnh5XgnB5RbiIsqUtuepCNISsuZuIN4WKmPAm6N04tTHyN/cqF8dONFfFUU8koEimQUDLDyaopDy/H5Md+i68rvhwntloc1QLJc2ETq13eU96cbW8Kz5/fV9fBsCuBKMeIatseI1EHIeBleJxAJOzEetSKmuEs3DKchGulAB2BSrQGKzGyWYm+zUr0+qsxsFmN4Y0qzAfMQg2h12InkOG1EdSH6A1i+F08HgRYpC7YtEWfkQsWyvEgusHkXXrX2iSRckU8iUzwbIRt5hViDSdw330V9+tjcK7oEC5VfIfH7bdQu/AKvX4jZlmeWjxCvD/Ks6qiIppHaF/mKxUfmrFKsBYgp9eN9rliXKg7jSu2GDzvfoiC/lSpCFjcn4yivgd425OE/K67eNN5B6/ab+Nl2y08b7mJJ56byHHfQHZjAjKd15Buv4JH1jikWS4jtSEWKZbLeNh45dPNGY8U8wUkv/sez5pvIqbhJE6Yj+HZSApaNt+hY70Ebesl6NmoeK91b1Sge6N8u3nL0OUtRsdaIdpX36JjtRCda0XoWi+W1rlehM7VQgzMF2JouRQ9y8VwTD1HcXcy0uyXca3me1wk0K84giRbvGrWONxtiEVifQxumS7iZu15JBjP4brhLG4Yzkll3QeOK6K/fNN8EdeMZ8HkxZt1F3CTsqXUX686jRRrLI4WH8Dl6u/Rv1KGaNCB0JoFUUqW7QlcqvnFsSnjk/OCRn/IgbDQDQgs+JldNn7h8nLeqLm3sGFGWUcaLtafFfCtQMoefme3/9nr+5pjgM6DjbAH3YvvcLz8KNwz+VIefk/X9Cf8F41hUikU+G6Bkxzchkv7D74JlKS6LO/LNviu0MC3dfT1Zwi+FeClU0NFCGick0LhxnLIjorpp/i25giWg3Y4Jl7iYf0lvGCOyHqNJLGHAwSgHvjDdgWAaSjSQNQLWekOGNkn+F/6HkSvr0MieUE6ltCtjWO+zkYJTJdEu0OitNMhRm//RjnO1BxGFz3fAk7VmNb3I6GI8Pw5xuQ/dfCtzRHNGFPJxHTu9CLo70EwwPFP6kcnIsFO+H2k84wiEmauWSuCIWr1E3xzr1Xzjg4j3mdxtuzBCCaWCIbcmF4xorj7MQ4WH0Ry1z2skaIiWuv0sjOC1ix7GqPzTJCWfTyq6FTUBo9sOqU4G/EA6UKrQZvkSIgBILKcKsK167ogORxuTAXMeNP/ECfefoV7thgY5vNgWXwN8+RT1I3nCPhmlXYpN0+5bGp/s/S8BsirCcg/1/LyVVNZqBnNAOkn5HsbSGSfyBHvd9H4Ixyo+BLvBtJFC5VcIQ58Ag2GlKUDxfP6My6Oe13Y9ulzcq2UwQrRA0rA0Y7Qpl30uAlwgsFGSUpjaex5fwNcoy9R0p2OK3XncPD1F4ipPoVUZ4IktSQ1JeCO6ypu2mJxvf4C4mvPIt54Rvjc5HTH151FjPksThu+R5z5HK5Vn8Zt43ncsFzCOdt5/FPl1/i26lt0TBcJN1YGPy1pTlrhESpKCAHpZKBBEuJ+l/VbxLCimfMqUpwJSLFfQ5L9iuhJv3TfQO1INjwLRehZr8ZowIzZsEWMAx+93lGbogSIV5/WKOWf2hENtiEaaEM02IFwuANLoGGhuO4E2fQayQIihVSUtvM2gOXmroXxaAUL/UBREHTwqx8/NkGFOyhhP5Wxz9DthwCa4EH9BoGq9jn9O1veCH6GITFGCRTgU6oWHMvaexrvlgaGgGBemx5+40LNSIXmZeR1K0+HtiBpC5Nct3hHPJjdNKG4IwVphotYClokeW8xZIU/6hJ+Pz0MnFcM17KUM40DfXGTzUdAEc95J9dPNxw0jiM3WSZEcdEU9QsCZnqs+T16GVswH7TCtVqMe+0J+Pu8X+Foydd4aL4E0/hT9C5XYMZnxkrYJQlZTDDk/RSpMG2MyXiTvqERom0cP+Uo/dgklBNGUBi5MQ89RaLrCkrHn2LQb8Kk34T5gAnzfhPmpNVhzleHWV8tZjdVIxd0csOICTbvjrZuwLjWJtYNmPEaP9mmvUaMrdVicqECixsWWObycaVWFSEZDprFY+iPsAgI6WY7WsiBQMi+ozkQiDjBKBIrZkrj5/ldGkva69QvpsYuee7UaF7y2zDqrUPzUgmM489Q2JuGvLYkPG+/jxcdD0Sn901PKvJZW6D/MQoH0lE0mIHnA2nI60/FU1Yg7HqAZz3JeDXwCAXDmSgYzMCr3lQUDKbCNJKN8uF0xNZ8j+s1p9C3XCZjg3xcFXrfyz3lPFGRI5U4pkAdHyswvpffUOOKnnnOHc5Z0m1YK+GC+SxWNxsUlWAP4GFfxiHHsA6+o03wRtwYWK0S8N04TfBNMEcwtk/7HddJoS/o4LsNrtkCAd8spy6e758yr9777sfBd/lgjni+rWNvxIGyb/343n/vcSx87DtC06BsI6O6lCtV4NsbdcO+XIhviw+gavYZEm0X8e3bL3Gq9I/IarqB5sW3WI1ShpA5KbxnynGiRzJVhIZOCV1thGOZylNWRKJUd7NqUoBcN9sQFgBfjyDBJCPTxAEa8I0GPAgHXZgMm3C8/Gu0TLyW9ZvYQfYYGS80TrXid3JNnC8KKKs+5/kpWkeYeTeBRjS53yIl+RLKyx9gdc2CJk8+bt/8DidP/B1e5t3HwoIL/kAXrLaneJ1/FW0dL+X/1PikA00H+nvrf+KZQ
MCNwcUKZHUn43DlEWS23MXQmkEkZikQQNUZUkEIrtmovc+9kbk+VCehfCSVVVgsiUV8rD05UoGTxiWBu+Q5aNTET4219agd7uV83PVcxreGQ7jWcAp5ffeE8/1mLA0VE5lCja6dykHd9BORyBbgPZ6J6pF0KRRpmsqBUaOlfJ6eb2p8j2UK3cRI8D2RBcoPWiaeoGr2CQ5X/AGv2u9jct2ogAf5eVpmuXSeVKjbp8XoY5Pvf/hr1NBsEr6uABgaHwSjrAwXcCIQcGEh0gj7YikyrdeQVH4G6darSHZcxT17PB413kBe0x288tzFS89dvGlKQlHbQ5R3PUZtfw5sY8/hnsxHy3QRmqcKYZ18hZrJ5zCMPkVt+2PY+nLRMJWPZ4PpOGs8iZuGs5j0GkFVCNJWZDPQwLeALu3erIepUVuKx4038ag1Edlt9/C05R7yWu/jaft9PGq9hXueGNx1xeBmwzmpVpbiuIZMz028aE9CYU8yKgbT0DT1Gj3LpRharcKsvwFrWmgvHG4DIt2IRjqkShaF/Gmc0NsqvDxSWjR+GTm89GyS88jIABujADrVJiAFKLbVCghAf9A0T7pwFrkZb7WdC4tawGiUCKVnx+/wuWp8T3mGha5DqgopO1LIQ6OxaMUJ1PtKOYAqEfIZ3nvxdvD6dUkvLtDbj+V7OxQH+JwJfINeg1SVy7UlSNY5QbGfhQ1kE1a/QcOEIJkedTFkNLkonjM1WIXrJ5/nNehgQTuKV8INJnUy5Co0HKkQ5sZG1IWhlSpJrMptvoNY2zl8WXkQX777A95NPsHQQgVWAxZsSjKv4gOS0ygGiRaKF2BCEK/1LwHTpxbQPb9H4yesxgONMYY4X7PgUON1NM+XqrEuBgeNpI80MfhUf3GD3mqax0y4o1v8VIK89+lBHz7n9zckqtEinvhJvxnZrlu4Z7oE2+hrhKmx7KOCBcPgeiMw+EiTxDvOCb1xjGlNe4160dy8lDazGgf0Gm2G3Vjx2zG/Xo/JVYO0qTUjptdrMc3KiCwqs2nGHBuLp/gapCLimLcWY6y2yMqKm2bM+tjq5XG7zwDn5AsUtt3DjeoTeGyLx/B6lcw3UsxIS+D93f3e8TyVEagDUgLJcJjcfnrdNDCx22+JxrfyfPM/V3wW1PRl4bzptIBvme/7Nc52Oxe+L2NdrRErYTv6litwsuoYmuYKsSGh/H0E37IOqSii8ny3o3HurZR+J/jm/N/9PuzlXvEzO8G3okmSdlLWnyXg2z6R/xmCb9JOlEDAFv+dwC3cCB8a0bb2DpfeHcXJij8ioe6kyKa+6LyHa3UncNl4HLltt1DflwF/kLQV8vi5Nri2jBqOWx18C7UwzD2M66oTkQg55BzP7fBtsu94LyjH2Y5wqA2hUCtCkXZEuA/6lRzrctSFU1WH4Rl5IdUlef/EubPTYNP3rR94vjlflFMqFHQj4HchPe0i/vqv/gNu3DiEkdEK5L+5g4MH/hb37h7DP/76r9DcVIyuTgPOnP4Sv/rV/4aXL68gGu2Qqt2836GgDeGtMbvLOOG1SnXMJvgCjRhbMuBdbwbO1Z7Gw+ZbaJsvhdfPwkhc/1UxKTrcNqSiLp06jIC3YsxnFvzy1JOIe9XnEPPiIN51PcYs9xXubeJJV4b6p8b2oLcS95pi8f/k/DV+W/gPuGw/hXjjdzhXeQjnDUdwv/kyCsfSQK+3aSYX9HATiJsmc2AYyUDtWCZM49lgHZvP1vNdw+qW4+EB7gAAIABJREFULC0/pjJHayazUTeWCctYDkwLefi+6hCyPDfBwhmyGHDzY9EQLogcSFI2WnktPtWZv9z3SKkgT1qr8Cf8KQ42DwKhRqz57bBO5uO2/QoSas6gqCMVnskieOZK0LpUjq7lcgwsVmBooRIDCxUYWarG5Got5r0N8l0fxe+FukCliCbhcJKfOxe2Yt1rxbzXhIbxV+K1Ztn7sq7HWBc+NuUEuVGqBVqANz2etDB5rkG3FEMZWq9D69I79C1Xo2+hAgNLVRhaNaBj+R0cC29gnHqC8sEsFPdn4nnrfeS4EyUB5zENB8dlPPJcRnrzFTxyXcbT1pso6H6A0r40GIafwD5TgJbFEoyuGjDtb8BqmB5cTmL2l6Lq0PrnuGETGgopKRGnlARWElRqLHE8cfJL0z6vuKDs6+2m5JXo7f5EoydVvqM49AS30nhu1DmlwSgAWlnu4hmWYh36c42KQ3AtknseUE9XQHeEHu824fmR66de++AohTa034oy/MYE2Ga0LJcjs+0uXjUnKUM2wIxuLtDsK/0xF0kuwrqXgHNLqZMwmqCH73gkEGc+gESkZFwSxKkcBS50m9EmzATr4Z4pwNvOFKS5riPRdhnpTbekdPRl63kk1F/AoM+4BerZN/TAi4EikQF6Y7SNYYsypJKDaYTvy7zW9GQZfueawryJVOtV0R/uX6pStCjdi7+T88/HNIb0JknN2rpEUKxvdLrxwKNsojzvH28E40H5fqvoDHsjLrjmi5BkisVTx22p6kn9Yc41mW+cc3pk5QdHbezpY/AjRymSInrOyoDTjTcadFLIRcamlhQo16AZW7rxJR4zpRevkvdofGnjiH3EiKV2D1+NZyHDfgUpxnNIbbiEmqEsrAQtokbBa6YXei/gmxE/kaMUY4wbsr5OqpwSFcLnuNnDps/5rvXlit8K40AOzhhPYnWzXgM9u/zGbv/xp7yvjRXO2eWwHd0L77bA92aI4X593OzPOYkhK7QT/l47Gme3wbeaX3vowz1d38fBd2lvpoBvx+TbzxB8c36Tk01jSCXBCmc7wjwkDyb8JhR3piDDfRWWsSeY9ZswtFGJZ+23cbL6G5yuOYLEmhOYChhkr6TSUkiSWPX1jH2iPN7BgB0dbUWoqkrG0HApNjYd6O4uxOtXN5H/+ha6OgsxOVWDoqI7eP3yCioq7qF3sBRBiQbT89uE6YgFp6qPCPhe91lkz1NOFBpsvL877yVfU0V21JjS3uf6rXm+35U8xO//9W+QeOcbjE1Wwm5/gYyMOJSVPcB//av/AGNtFt4WPMJvfvVf8J/+4/+ErMxYRKM9ShmLaxKTPkUcYnewK7lnxA1cN2iE+N1Y2KhH+ehTJJgvIsVxHfax11jctIlEbhjtEu2mVC5rBwyvm1DX+xSZzltIdMQj1XNDKh3fMcfgasVJtK5Uwss9W1+/dxmzgxuVeNyVgOOlXyG24XsktcQh1X0Z2S3XkGS7gBjjt7hlP4ey0Ueom38m9BPyvqVS+6hyJtOJTEbHZwu+yY2hwgkBOIE4lU8IxK0j2bAsvkBM3TE8tMejZ7FchRtk81AcLD2zXoVX9mcx2nWx3uWm7fv3tQ2O3CnKHZGCQKDIQjqLfhvcQy+RUR+PO6YYGIZzpTomASgVT6ibSi8vFwqCGX6H3l/d48sJq7yeXARUQQ+GsKizTKWV9YATjv5nyLBexe26C3jTkSIlu8ULKlre2+BbBxoCCJjQGaInkZ6sVjBEJ2WNWUKWYE0DhEya9YZsQi+Y9TkwslqPngUjWqbK4RgtQO1QLkoG
HqCw7z6etSXIwH/cFI9k9yU88MQipe0qUluvIavlDvK6k1Eyko3qsTzUjTyHc7JADI9hrwFTvjoE/FaEw9y8lKdQPFr0YkiClhZ++xAkse95vwkctKZAN7nWO5okRXKhUU0BLP27O4CKgDz2DZNlVFlo9jX7RjXFPaYSiFR4FH3ZFgmjUZeWln1EdJnpIWFCj/L2K71a5d2WCnMClrd1nXnfORZcM0V45LmJos40BY6CamwwHCeGCKWh5JoJZgigaCRws1ceEc4z3vstQ4XGRIgGBXmjjBYQ5DdhMWRHz0oljOMvpMoqC+HcsV1GRtNtFPano22hBO0LpXjVmYx0x3WM+00qWiEAsFk850yaEQAn41+FQ3ku0oS/r/iM+zLfCGjk/2g8kr7UgtsNsXjZ8QAT63XK0GFkQsaH7p3UwOiWR1nzLOuA+8MjNxW9ffjeD56ra6Pni+OORuM0HHje9gB362LgmHwjkRveD9UUeCSAVI0GpVojtz+j+JISet36Hu8zIy/0RNKw0se5ArI0oBgxCuzQSNcN2a2j/KcyuKi4xM1er2opIJ7GXYDevmb4Io04aT6Oh/ZYlHekwDb+EiNegxiyqv+VMScRjl3W2WCIlLs2LC5ZsLRsFU/g+roTU1N1mJkxY22NHnCO3V32Bc4rAg6JXrhFXtA0/BSnao4L+N7Luez6H7udw873Od41J8BayIW26UIcKzsC0k5YhVbWafFk7nJdO3/zE4853sWRI59pFWnPxIZYuCZeS4R5/66NRlUj/JJguO35LulJF/Dtmi78PMG3tgfoc0uAKvdNel0jLkx5TRhfrcJGxC4RL1/Uie7VMhQOPcI912WcensA9eusFsoiMXRYcV5q1EDxfHPvbcXmej3uJZ7A7//1P6KmJg3Dw9XIyryEP3z5f+P82a9w/dohVFWl4OsD/xXnz/wTMjLOwdP6En46Z8IqYbxtuRAnqr5By8Qb+IIOtTaII0mPlnB+q7VOkvWF8sI1n3sy1xq1V9HJwt/s6irHpUtf42HqcSysmLC07EFbexmu3/gSv/n1/4m3hbdwPykGX3/1jzh08NfIe3YDkUiPcMF1KeKoCAzsDr5lX+F6J8INLQiEaIg3YQ2NqB18irv2K0hibYfONDTOFGI6YMVq1I3RZSPqep4hp/EObjXE4oHtCoo6U+GeL8KAvw7OpWKcfv01qqdfYCpsQUCoizuNkI/Po6WwBa2rxXCOv0Dj9CtY5l6gdaUYoz4DWmbzkeNJwBXjcXEKlk1moG4xTxWLJM2ETI6xLPF8myaUIspnSTvRwTcpJ1WTWagS8J0l4Nu2+AK3HOdxy3QeLXPFsgkojqySAdIVK3jj9uIt2b+F5OM37C/y+zIgaR3zPynRRhDdhBmfBeahZ0ivu4ycuisiHUbRem6k4lmVz3LjVHQHTi5JztO9ulqhA3qpCUTlSP78plO8kAxDD65UIq3qgoDv2pFnGPOahHtFC5rgjIBN8Z31/tC8MkJDIffKLYknpINEgh7J/BaJOFZpCyoLOxpoxtxsAzo6i9HVXYHZxSasB3sxtuCAu6cAA0P5WKSnPmjByGYNWpfewirZxpkoHHyA3PYbeOS8jhTXdTxoSsA1eyxOVX6HS3VnkNaeiGd9KXg7mA5LTyYah/PQNvUWXXOl6F0sR/9yBQZXKzC0VomR9Wos+k1YCpixHDBjJViP1RBpLhZ4I1Z4IzbhoDP8GGSyi3BuFe9WuLRBO/xBG/xMJg1bsRa2yveXg/VY8pux6DNhwVeHhU3Vxr21GF03Yni9BoNrVehfrZA2sFaJobVqeb13vhx9ixXoX65E7xIfl2FgsRKDS5UYXK7E6HINRlaqMbxSJZSOoWX1HiMd/Qvl6F8ok2PfUjmGVmpQ1ZOJu+aLyGu5i8k1I6bWatC/VoHJtWqMr1VJ42PVqjC1XoWpNQMm12owtVqJcW81xjYNGPcZMemvw7TfjCmfCaNrNehbfIf22QKhLZUMZiOzJRG3Gi7hZkOMGEfmyRcY2zBKJU1GBag//7L1vmjIz4BZ//T2a57kYJMU2OECLhEVbg4cowR3ovmqz4fdF9G9zEmuHWJ0SFTNIxJ/8fXnUTqYiQWfRUApgbdSfOFGtZM2su2BFoNFQr36a/xdbq7ve5/p3fl0I9DnZu0S+TZ6zJig7JwvRDK9xg2xGPRSflWte+q4Pb/1eS5AWPPmSz/I5qoZ2jteF7rJFvAmnUOpFXHzowIS1xtlwH+EjrWDWsWIBUEJIyJiOEl0ROkrB6MtmNqsw9Hir1A3mo2VkFXRv7hmSX8qahiTruRefAIwqnvaj6GhGuTnJ6CqOgWjY7Wor89CWtoJPH8eB4fjGdbWqQqhr00/cpRiK9vgey1gQ/1onnibVzbqtXX3R76722//We8z2sXEUTfWAy7YB57hu7JvJHq0ycIjNED3C3xrYIv3jIYzDT375GvhfDdO/H/cvYd31GmWJbj/x56zPbtzznZPd582Oz09M90z01VZpquyKquqsyp9VmalJ/FOIJCEBPIWgQwgA8gAkhBCICHvpVBEKBQR8gYnQN4gH97cPfd98QNlFQnqLBWZu5zzOyERoYhffOZ999333n3Mwd7K7809wSJ1MslsS6/UTsqHsnDWlADTTPl3GnzznCMA51hxTWnnqUQhHVQyIRnCqK9Fco8XvJ3QTRQjvHIvTg0nY97eIQWAHGchL3gOa+euz4K1pWbERe/BD175TygvT8XgYCVysiMQdWI3TiaH4LWf/wNyc0Px1hvfQ1rqUTQ35WHsUSM83l6RhCXYrh89i+CmfRhZqBDFD1WvoxUYqrNZtLclqkZAvBF4PwXfdJoZ9RoZqcHRox/jVNpuLCy1YGnZikfjOgQfeQPvvftjRMV8jH1738f2be9j7+73kHb6EFZXCfQHJTddUmp8JLxeDL6FmOC691rBNFC2oyeRRKzBdDTDdBmyzYlIaA3Bqc7jyB84hfK7uSjoTUFifTBOtx7D1bvn0LdciRWHEqNgQeii14jI8r0oGErDsK1WyXVuwinnXEujQIk+m4U1530xAs2Mg7vLNSi0JCK0Yhty++JQ97gQ9QsFQh4ze4O53iSVm5mG8l0tuOSNSV7MI9Vkp3rqvCSrs3Wncb4IZ3qjcaL+IAxTZHvUApGDmD8LM6QUJrRN8UJju6UGZSuN09e8lwa+xeiaYfd2iQh90+2LOFUfgkxdNLpnbqh8Jr6W1dG+Xik4dEljgQDzyg0nXh83IQ9YxXrRwLKwTqmH0CBY4XHoJUe3Y+4qYusOo2H8EpZENYOAmUUUaiM/VesgCOJ7qoNdUhJc3aJyAQerlS3wrRvgd5qlctnnNMPnMMPptMLtHkZJcRyior/AsfDPUFGVg/7hBly9kYHw45/iwsmD6NcXwbbeD79/FF5he1WKzLpXh8euJrDKedHVgdH1OpTdy0FozX4E1+3Hyd54pFrjkGKMxInaA4hsCEJMSzASO0KRajyBDEsszvUlInsgCbmDKagaO4uaB+dQ9ygbDdScn76I1rkCtC9cgm7xCvSLxRhYvom
BpZvoX7yBvsfl6F24jp75MljnrsEyWwrzzFV0LFyWv2uaIhOfjeqxs7h1LxOVdzPkqribgeKRU7g8fBKXh1JQOJCMgv4kFPQnqseBJPm/pKZDSNMdQ64lFuctscjqikSOKQoXzDG4YInFRXMs8nvikU+VDWs8Llri5LlcUzSyuyKRbYxElvEEcrujccWaiLSWowiu2IaY1sMoHjyJoqFUXBhIQlnfSVztT0bRQBKuBK6iwWRcGUxCXk8sLlpjcGkgEcWjqSi9expl9zNQ/iAT5Q/OoPReGgoHEpHVFYFTbcGIqN+PsPogpHWcQOXAOYzOVYpmNtcG1w0PKBqx3rkKZLMRlDUVS9yTZIU9VngZeXGr1s7CRgZYWmGeAikVKs85cFhswX6m7aBTy3vgmn60WIOjzYdQ96gABGMiYUl5q0CTK2raqotgU/tZaSLzveQKAG5hiANFsWTzmQ9PMPu8i7KHfhf1zKmzz9xP5uWbMOlpQX5vEo6Wf4HmifxAGFntPY21+uojo1nMDQ9I8FEqkrKVXl4G+AMSlmTjaA9YvMR9zPx36qL7qIMsdodpRwRPT8G+srOBzw7kVzuFmdXY84AsLJuasJDRbUTHWCGOVGzHwEwpPF4yheyiS61sHrQW2NADhxRQfY0t3DDXbvcI8vMj8fPX/hb79v8G1TXnEHH8Y7z66l8h+Mg7KC5JwPx884vBN7+rzLuyXetuIzoni3GgZg+W1tlcibbuxfezda9RMq9MT1tYb0eFMVXWIrvP0rnZnGOyyfvld6ODxFoNiYB0Qzd+RcB390RpgAndqu//bPB9ffCcAt+zNyR1aOvGcZNj8KK55ZkqkcQAaBbgrYC26jVghc+lUp0k39nN3+mA9mF2TYciawb23diN3oelWHd1yThzP5D9lqizdEq1wGnTw2qpxEcf/xCVVaexuGzCw0dtMJkqkJJyGG+9/c/ILzyGN379Ck4mheDmzbMYGK6UiA8jqesOHa7oohHbHYGH6w2BQktVu0MnXI0rU0Ho7PJ3lXai0t8CY0VwKZiKoLcXw0P1OBL8OVJP7cfdsUq0d1yG3nADJnMZXv/VK/jgw+/jww9exb/96nt45ft/ie3bX8WDB7Xw+/ulp4LPSylgOlva5z9nTuh0e7gOVdSdxZT8XnQIWWDp8Vhwz16Pq2OZOFq9A2+f/xU+LHkXCaZw1ExexJi9Gg5fp7K9jORRYctFVS0zcrrikdQeDuvCTTgJvHnWvHDeqeRGKWU1n/ANAJ4+wE69cmrI98BIDf6m/TjRsBfZfbEoGUvHzUdZqJ+5KDrgTEOpDzTi+U4y37y5xoe5qCf4nshFzdR58Rb0jy7COHcF+SPJCK3eg9aJS7DRa5bQs1KY0MA31TVUftomBvVFg/6ynxdGe8N9b/hdDjv+znQTsh7UfnV1obr/LOLK9yHHFA/DYi1meHBR9k3LE3UY4XMZVV4xFxtzLvkoAEOFnbw8FH1dKg86kJbClAF6mcx9XfN14+Z4Hqhx3Dl9A056ocxtJbgX2T9V+Opm8SKVKTxGrLs7QS1fzgUZd4at2S2TxobGnWkTBF5s3ONl0YRvBAsLRnzx6b/iwO538O5vvo+g3e8j/sRe7N32Bt598/v44S/+K2JTg3F/wgC35x78vlH4ff0qTM6CShZYsgiV6gAwS7vx4q5kNA/nSm46DQ0jAXfQim5bBWrHL6J0KA2Xe5LFcy0wJ+CCMQbndBE41rIXoS17ENq8GyFNu3C0YadINwXVbcOuWx9je+WHONoWpK7WgzjScgDBzftwuGkvDjU+vUKa9+Bo0y4cadyJ4IYdONKwU97vWOs+RHQcQKT+EOI7jyDJEIrU7gikWSOR1hMljydN4UjoPIqIlgP4ovg9hDXtRUZ/DLKHE3G2PxbUf84ajBOnNM18HGmW40i3nECGNRKZPVE42xeDc/1xyB6MR85QAnKGEpHWF42zlmhkdB1HYnc4EntPIKU7AgnWCESZw3DaGI5UUxiSukORZA5DsvUYEnuOIVR/EB9efxfbqz7GMdNhxBiDEdcRhJj2g4hu248oal637UeCPhhpPRHIGo5D63g+xlfrwTxllXfLwlC15sTRCxS9mqauSW5/1XCOGGtK/JFF4hw+RjdsEl1hG3QaX+3gCxhxOSQCTiVBw5M9oxjnFxrW39vjZKKYIsX1TZZ5ZPI6wloOQz93TdaqSjkiQCWL8/R62qSCBX68eKApCUSVg6yBU/V/2nNaTvXXPpI5dnTAI+9pBhwd8Hs6MTJfhix9BBJbD8NiuxEAR4Fx4JgEDk8eoE8uaXrBuVBMl9IVVo001M+0DYFmMaKTrYWoOR+URjPCSwfkOTnq2nMqAmmVQ9PHVBPaGD/TkIzomC5G5PU9iDGG4e4a63cU462iHUqFgfJo7kBqhfaefziXHFMLHj5sRVTUNvzX//YfsH3Hr1Fdk4O4uF2IiPgUg4N1WFqiEhLHfYNtfdbPAr5VTQgdRNoxw8xVHKjeg8X1lq1jmZ/12c/6P9krqj7l/mI1cqrDcNIah0mnTuoAFPjmGGwENPx54+8v+M7a5wr45p7pEYDDM6ZpLB8p7RGwTl7bYvBNp5RpiBuYb1+3pBGc60qAZfaGSJu+cL60e39Jj7KP/H0idUzZulVeLj3W3HqsOQ1YXW7DnMuEOTbFchiwuqrD6mIbFhdbMbXajsHHTQgp2YWbHcmYWmlUihwCvpnKR9tI54dObz/0+mLs2PkqauvT4HL3Y2XViorKM3jllT9H8JG3UFoWjbfe+BE++uAX+OCDH+Hc+SNYslHPvhcrKw04XXkA5+6cwoyTWttMHVNkwlO1FRIM3N+qWFnZK66VgJ16Yj9oU3vQP1CHw8Ff4uSpwzBZS5CQtB8f/u41nL8Yg9de+x4KLiWiru4SoiL34De/+SeEhb2HublWeJk6Il1Bqc5Du/iHa5P3tnGuRUJQUty4TgL3Q3vsNWPdbcL4TDUq7mUh2hwqRY+pDUHofHQRqz4dfJRepEqMTwcnDLAznYYEBqxY81twa/oSIuqDYJ26CS+104kZfu/z/+B3jwVeF9N2++DCEOBmkTu1xgfhoWyuvxfs/l2zeBkRtbsRdPUjxNbvwxnTcVx/cBb1M3mi8U3Q/Z1mvjWvgDcpYJx54NN54P/XPszG/tovUPEwW0KVkpvoVe3PNdk1N6uD5eD76oT+wYC+aMBf9vMEsW7V+lYx94GFweI65noy/1TCUyr3lsC7aCgDHxZ9gMS+eIysN8ND8X2C0C24d94DPXKCGb+jS9IWIpqDUTtVjBVxAnoANzduv3ii3LSr0EvHp6y7yTg/FAvzxAWAEoHPvB/tkOAjAbMRq+sd6NQX4PF8H85lnsDe3e/g1MlDaG8rgNF4FcGHtmHfvvcxP28SPVEvWUBWgrPDnOTQUhu1Vxp6kL1rny0VlYrG0fOKpZfvw88j068xFMPw+XgNyUWvWopMvGzr3S/pMk67Ec51M9ZtVgzP1SDbeBxBVR+gfvQyGkcK0TyaB91YAbqYEzZRBNNECfrmbuL+aiMmFhoxvdiMx+
udsHvIqlIWagBufz8cPobV2OWvDz7KJvoG4fD2we7tk2IVn28Qdl8/au5dRkZrNCyzlbAzSsFoBgtRAsoU7GzGi/dLxZfnXmQTA6FEyQl/xs8OUXNhkaVVcmXpbN2x63CkfBd6717BolMvQIpOzrMuAX5cN8+c96/uS6ffgJaJAqSbomEYvywpDaxgh6x7FXJ/2g6boJFhUKVsoQF6toznvIGOmCjc8HmjGF9Niko1PVKMLR3CZ128X64LKXp1q1Sq6ntZyGg6irvMQ7V3AQ4yWcyZ3shy/+HPStWEh16vNAFzELhKXQGdD41xUeDx2ftDjRPTV0SfXcaS8po9WPN3oXXuKtJNsSg1p8LlUYcamXpGBSR3W8ZJsVsi/SmOwFfH/nmf+8c+J4yeNMBg0RSZbyUNZlwox96ybTjQfljSeNS99gIcW5tRxktC8bxfriG/MSCpphwazv2T7qQYkCLhwsvRiIzZju27XkfKqUMSMdu5+038/X/+3/Hm268gOzcWs7ObYL6F6VNrgKkybGRjWCrHwZs7RcXlZZ8rTP9j6gLD7t1z13Gk7iCsj0qxKiknKjqjaku0Rl1kww2qsO2JA8jzQBXy8RyRSAZZRFkPtJcbHESNkJF9YEXWYCou6uNwf7ZyS8E355y59W5GU4R46YbTY0BR32lcMMTj7nSFONl/7Brc8r9nJNitQ+l4Fi7fPo383kScNp3Aye7jSOqOQJw5AoldIYjXH0GM7hBiOg4hNvAY2bofofU7cajyM2TePITbC7cEfD9RHqJd9qrOwyQZGhsvYvv2X6Gp6RJmZ3vw4EEnpqfNqKzMwquv/h3a24vRN1yP+Rkz4qO+QEril1hapCM9hEVbK+KM4bg+lIZFO8H3RjujzlsFtrk2NsdGP3xQhRvX49HceBaz063o7DiPLz7/Hn752l/g3Jn9mJ1th99/D9be6ygqjUG7LhdOFwv9h0F9cs1RVNFA3g/tOFPumPuvFdUHZGS1ehiu00BtG9fM0Eo1coZO4cvGPThYsRMFHbGwTl+DjcSA9DdRZzvfm6QhSUWud9Yg0dHgWTW0WIljjYekU7A6u/g3W2EXVT3MhKsV7XfzcK4hDJ8XvINkcwSqps6LCop0wpy88BUGXJMnJMblzxSdKNSn4M//5v8A//1vRW0paJ7KRcN0Lhon8qQLXdv4eTSOZ6NuIl/QPMHyVlzPA9+UcCH4Lh87i0VXuzqgRdydzT+UF8fcSB5YWzOgWzEpm3wPWWhk26ywsVmE6FYyRaNTQjDMLWIOJAvA7q42IrsrHgdKPse14TMYt7XBxg5MwmhvguHZzGJjzhoXrmySPszbWhBXcwhZPcnoW63GmhSNsJCPm6hPcjYt82WI0Yfg30rfxAc33kfJ6CmsQ+nRSm6s5CdybjZe2vgwzaUHTucI7t1tx/497+F0aggejukxdr8FZzKP4u/+9s+QmxuD1VXqefYH3ifAmvHgkDA12VGlp90wXiga4u1jhU/SY8hoeTx0CDQGsg9eT49c8PM9CU6Zo07ng4UeBvgIIjx98PqGMbXeidLRNCR2HcSsw4Qlt1nC6MsevbDr7Gy27jXBzq6gbPUrlyVQNNkHj7cHTvG2B+U7CNCUhgo8CAfg8jPczhBhn4DsNZ8FV3vPIEcXj4GFWrCiW8ChRymYcPy5VvjIe9cA6dc/8jXPv2j81byqlCSmRVhttTh6YzfuTN7AGpsoaIzEcx+1uf36R5uvE3UPzuOU/rg4LzTEBG4SxhWZRcW8aN9HGvRQ3cM/JF3fXE62ku+F19cvOY9KepHGndEWI1x+doczKtlJ5jJrBxHTXlhU+iT0qu6R34sRGa/HhCWfHnkjqUjsDEPP4xti5KnawkuUDriev+YSkBhIzyDQVzZJKccIEyOs09ePy0b7pRzhAGh3KWWPrpnrONVxHAX6RLgkRYRAlYecWvtSgyH7jc6aYqA3vuef+mdxArTiZCrheLux7jNKHmZU7SHsKvsCXY+KMOZoxBL3pDB/SlWHxddiRmyvAAAgAElEQVSynjnOEtrXmHr1/dT3tMDlNGJ6qgpffPETRER8js8++xmCgt5HVdV55OXFIzU1GDHRe/H+ez/G4CAbt7xgvJ+Ab6X4Q/DduXgdQRU7pdX8ywbfKmKhwDeVRwi+e8evYW0D+FZpA2p8mFcrMnUBvWj+zsJyAnRJ9ZK1zygG7a9KDRSAIswgWcZARFQUryzI6ElEoSkZDxeqXzx2LxrbDc8rFpZypQR+JBHMAr6v9J5CXlciHsxXKeJnw9+8cO7+1K/lmLkMuDteiui2IzjZForznVHI7ozExa445JsTcKkvBVcHU1EyoK7S4dMov52JintnUXHvHG7cyRSFru7xYiy7eMaztkWptAl5xHkV0suK2vosfPTxD3CzIh21ddmIjfsSWVmhuHQ5Dj/+179AQWEMcvIjcbUoEUcPvonsM0FYWtbD7jFjbqUREW3BIHGw7GQnyj8efHs8fViYb8fi4054vUOwrXfjwf1buHv7BhYX2kXyEBjF2roF8wvtWFtTjXjImss8U3aVkXCn0kZnCh6FHCjTKOtcHHTmebNxHbufsm6lFyw05tpP74zE4fIdiG84gutDZ2BZuIFxRxvWGBmUM5ERQgMgvV7oQKuUrT8A348rEU7wPU1FHb5ui7CiKIJZpZjW7unGA3sTkg3hON4WhKKhFHRO5qNzplAB7AADTnKZF3HtU6I5/9sB3xrw5uNG5pseAS/e4L76bSi5nSZNLST14Svgm4Dp/6Pgm01HvBY4vSY4/QytBxgJgkCR0jJLW3frzHWc1cdK45ui22cwZWuCh2BEQIgqetwSQ8X78ZFlM0sHL3apZG45G+1EVR9EblcC2qdL8MDegDurNRhZvoXr/Wk4UP4F3rz0JqL0IehdLIdXQscbD03180YAzg1AL9SPYTjsQ0g7fQShRz5Da/NVOOz3sbLcj05dEd5+8xXEx+/G3FwnXG6CdY35ZriNBytBqNp4Ho8RFbezkWmOF0UIZYAU6Pf7aZCUGgmBNr1mst0ekZLSXsPDiCCzE35KJHEefP2YWO9AyXAaTlpCYHMT+PaLeoPKeVe6xPKeVP1gd04CEDI9zPF1k8Ujm6UBf3UAqntT/8dwPVOnJMfZZ8ay14SsjmgUdp/EveUmyYOT8DwBPAvEJIVHgWT1PhrA/OaPNGZS6MVoDCMaHhMapq4gtGIvppYaYKcM1BYddsueDlTcOYfUzgj0z5aJtjO/vxTqydj1YHKyHtevJyIx8XMUF0dhYUGP8fFmXLgQgpycw+jquozlVYLOIeWYuAhGuA44HyblPGnhTwHlvH+OceAKqIFILjmdTo8qplnz6pFjiZU0ndGVKhV9Qq/qskkW+3lXoHBQ9qKAYK5PlR/KWgmCns3uUx4S7OIma4IpZF4rRhaqkNsZi8yOSNz3t6v3ImgKqNQQbAnIF/aYSjib/7zN3tfzXsex5dqR2hKCb8nv78aC24D2qSLkmRKReCsIyY0hKLSkYmi+OuDoM8rHBhkB1kscCgUuNTk0fi73rMtpw
MMH5ZJf+sknP8Err/wFXnvtH3D2bDja2kpgMVeh9Gom3n3nh+jvK3rxeJNxCwBPMrNMJ+iYL8XhW7sxZ2956emMT8G3AiBH6g+id+L3wLeAB9oZLZ0owHRLAZ1q2qLYcY6hZhP4eu3ia7hPAumatKOyv7uR0hWFq31pmFiu31QU63nrYeNzAr7pkAacQ9psSidesp5EoTkFU8v1Ysc3/s134WfOR+94MaL0YSi7n4Xh5SqMrdSDeveza02YW2/FnK0Vs4Frzt6GBUc7Hjt1eOzswLyjXXTuF90GIVlYX6BUjBhN64bPTexC8siM7u4iJCV9CZOpGAMDN5GY+CXeffe/YdeunyE5eTva2i/ig09fwXvv/Hfs2f4TVN5IhMtFxrcPj206hDUH4dads1sGvuWMZEGnu0e6XLqcVLciccWaMpIKJIWoNd4PLyO5UpzItcR5pu2jPTJIN07RL+d35xkujRKZ6mcV4pFjzAj/uL0F9Q8LccoYjWMNQdJFt3IkC0PzlZizt2Pdy4JdJToh/SYkHVelwdKZoy3n2FI+V2O+aT8H5m8iounw1oPvQCTdJdEcFeXLGWQPi8O4evs0uicKoBvL/Qr4JsYl8K55kCVZHcS9LTMF3w3wrTHpGvjm7/ubdiB/KBkTa3Uq75hC62yqQuZbAByNzBZ5M1sEMDZjOAg+VbEPWUf+TJDKRURZLjMeew3oGC/CmY5IpLcdR/X9i5h0dkAKKZ1d8LtZpKDyOzfzeS98DTeHxwoHrFhmlTL6sGjXQTdeLNXCUR1hOFK3D2mdx5DZdQJnDCeQ0BiMI/X78VHZx4jTR6B/qVIVbch8KFCrgW6NpdNYLALblZVuFORHYtvnP8elgiTcHmmDrr0EJcVJqK/Lwc6dr2Pf/tcxO9spjgE9asX0UAFD6WZz05HVJtC9NpCBM5YEdM9eDzDdirH0+lsAgmppYKBYIRoIUXmR5gUqVcYPfm8Cdb06kHy9eLTaigJrMpJNIXAw9wsDASaZn8uDT30+0xx8zImXUC/vU6kTKIUCAjwelly7PPSeOiS8f6YZiAas3ywV2sn09gcyMbneJrnzKjdWMaFKLo7vowycNr5f/6hA4NODWDuQNz6ywCWg240ePHZ04urwWUTUHMRjW7uSm9uivbHgasW1kQyk60/gwWqNrH2yHjKfBEI+M27eSMKB/W8h+PD7OBT0LvIuRCEhfrf8vmP7L5EQtxsWC5mMYUlx8LPpDHqE9SaQt/vJpij5Q8VCE5grCb2nj9x3yglkZIEpOYteHTL1x1E0ckrsDVUnvB5KMfJA0cDL1z2y2DnAVott4rpQThLnSjllXD+cj+dffL3Na1B68aILb8GCvQ11o7k42RYOOuFkuiXcSpZf6j16FKP0JK+Tn//8z9nK5wV4M4JCQCF6+1Q74rruxbLHiDuLVagey0d6dxyOVx9AxcBZLLnY0Y8OJVOoAs6lONgMpysArsad34WpWhYsLTajq6sEDQ0X8OWXP8e2bT9HaekpZGaG4Nixz7Bv79v4ctuvMDpa9uLvzz0UOLjpBK+6DWieLsKR6r2YtbWoe3qZYyhF65z7LnTNXMPR+iD0TZQp5puMKXOExTnRgLc2x7QnarxokxQj/jQ1Rcv13/hIGyS2MwC+aYNi24/h5mgWZm3NAfBNG/HHryGSLXTO+F5cqzzv1tydyLcko6j3FBZsrVsK9rfinvkeXBPmR0WI0YehZaZYVIeoR+8n4HIZJEVCpfQFUvsIQDdcovTlU5K/lNpkUTJ7SXDfEnx7XbRbqm/H7FwLRkdvYGnJiJWVLgwMXEVNbSqqqpJx714lHj/WoaohA9euxqK96RymJhrh9vbB4e3Gkr0DYU0Htxh80+YpoK3SOfh7nzT24+9izwSABxqq0cZxbBh58bA5Eb8bbVi31JbRFvhYfxbIc2cd2JS7Hf2rVbgxmoWzumiktR1HljEWN+7mwDpbKY271nxdMg+0LwKueY6S7ZaGcwrsK/CtVJr4Gjozsq/9ZvTNluN4czB6ZyhnyXPgj1/Pah2zs6hSuCEJ+NjVgQRjOEL1h1D68AxM05cEfG8kloln+TsBOFOqFfgu/HbBtwa6tZvTwDfzYg627kZWbyzGlqoCIZsN4FsWsgaAtmZQt2rjvuh96JWJnrFmVEU3uUtC3OOeDlSP5SG14zgy9NHQT5Rgwd4hCiI8mD0Uwpf8Jh4eW2MghUlmbqvPAgfZa2FN+rDuM2HYWY/MgWQE3dqJUnMC6u9mo/5ONhruMIxShETdCSS1H0fXBAER0ymeNRc0OtzAasG6XH2YnGzGl1/+K95+65+QkrwfdbW5yLsQiYMH3sChQ2/hgw//l0gdLS2TIWCbXQVqubGoQ03pOR4gBFRkvkt6TuOsNRHW+UBBGr+DaHK3w+tjjpr6e2EJBXxTe9wAH1Ul6AyBBaPtCoDL3/VhfKUNF0zxSDaFwcZ0FeadSuoHD68uUWzw0JCKxBo/T4W/PFJ0xvukU8MCNIIzzheNOh0Asvf8XqrzGdUf+F1mvR2IrN2Pmtu5mLMTnFCDOQDqBKgoZ43f4WkhnXbwPuuRr3vWfGz8P3XPwj7Ditn1VlwwJyO2OQTLDr0wlDwwvx7gq7l98edYMGVvRNHQKZwzxWLB3SbrjMwnvw8ZC6Z/1FVnICMtDAV5idi9800cCvoQb7/1fVw8H4e0UyEIPvQ73LiZLvn0BGTUliZT3DdZjO6pq6KOZJi4CkqmmadKYZ26But0uXRylW6u0tG1DJapUnRPXYN5uhyW6XI0Tl3CwZodKBo7gxlHkxh4v4cHqgZ6lNP0VUcy8H9UoyC7zbni3AiQ0sZeOUs8GDYzRtzjdh5iEqJXLBn3O6UnL1iSENV4WPJymS5DG8IIDu9R1p7kDXNNKbCzmc/bmtcoOyQOrUT1uuGWwm91KLKgcgYm1ExdRlzdYZT3pGHJ1RFwIgNpUTJmWkEo71+BSjqyynlRESuvdxR2+yDKy0+irCwFt2/Xo6AgCju2/wLbt7+GoqJELCzQ4d64xp/xszhFXHdK837Z1Ynah3kIrTsQyPne3Hy98HNedB/a8+IMWGDzGGGcLlXge5Lgm1KbGvjmXuE+V/fGfaPGR3MKtTXK37Wx1OwCQbsC7k/At0RnmINtxImmI0LyLDjbttTxUPfI+WQkSOX7Ljk7cKE7EVcH0qWQccvGUBvLLXikaEDf+FXEtYfCMFWqQCCdW4JmFwuVlUqTqkVjSgmFCVS9g/zMBlWM2LDY1GeCW84vzhttilIVkjGR80tFd1h86XZb4HLxbLJgnXURoiBilfNnddkAF1U3mHbns4oKzpK9FcdaglB5+wyWHYyKbcQE/Dy1l9Qj5//F65prTO1ltZ7U2a1sjVpvivxUka6AohKjL17WZHUCLE6HPtDZk51UzUIqrlIf3daK/uly3Bg+hzRjDJL0bKaXhLo7FzA6dwuLIt6gGHElc/o0SiOst6TuaOk7Ku2OZ6eAc5JyAfDN84TpJidajmw9+OZZLLVAxCVmjM7fQHDtboSZQ3Bl4gzaZvLRMZ0vAJu4
fh7sAVRFTu0cB3OxgEeQlg503wz7dimfot+nJK8wx2POP9sSLgMcPvpna9TTYdP1x47NUj0RCFyDpV1bzIgkZpeMQgiwvK5sCR90RYJGYf+FzoNb7CZkDBhYqLEg/eP947HpSnsOkQQXoHECCTsLbgfdvnMj1J8F1gTUCG9TzuLbC4hZkCl3xfbSTsftf+/Y+tXL/PjkXajvF65BzseLTUgh13dsAxXiwe91sB3xK80QUEdszDgtHlRqS3ROKSLQaP6PHNwmEeAr5dytue9ytgB8F3XMtp3O1KwdjSFtxOgmNcXvlceN8J7tTYXxGPbp3YBQqbKve3TZ4VnUqaH93ARWsk7j5IljbjSysOuFfa4AvcgyfQiSV/G5b9Liz5HVjymeHx6UWu5vWY4AnYhIkd9DQg2RmDXEcMvD7V/EXugYwTntP6zSr49VMEBrLebfJ5ImewoMtbg7Pm0zhYsgt5+jMYGSuTObSs3ddtWfM47pmVEPkb5yWfPQEJ1zjeL7oTKQ3vfMCASb8BtsfF0pjkdM1B2KeLlTTqa4Bgk+vb8B5ozPeKBavg2xarxpH8Hc/p+7z/d/1bdT7jS43S6fF40xEsBNjwSd0b6cAszdCY0aT7hxH35yqlD8NVfQROVR3AwdJd2Fe1G6d0objuOA/Tg2uYcusx6NcjqSUcuc4L6JytxDxZSJHvafeeAT+LBGkRukIXHAUon9r1iyTBikf+Fpwu2YNsYwz6ZmokiCfzyWxg70QVjupDUDqYhcfLJKq4DwQb+HDua5m1p/nMtE62zFhSSij3z29B31Q5kkxR+KDwI3Qu1gghBA87xrKgVcMi3G/l75lZ/K5j5dt+n2NFZWfVvsw9mGQYWWHVxGlFsAb3b5oM8JX1UryfJD0ofQpKRIP7ZvCzgmsYX4OEWvBnT36V69Kuk/iLGVHJrMxR3mSXYGtosQ7NozdwbSAZZzrPYH/1HqSYojHpaUaAGS+SNgwUN7lHs147zH3FuFx1EDuK/ogvGsLwvr4UP6ux4pmCdvxleTf+urYdz1Ya8ZMyA35UpBOZycvldC4x4/liO14s6cBLpe1/znKX2jXQ/XXwvR508+tvgu4g2N4MfP/7/+e//PA03w3DWagdykDlWAaudkQirH4Pah8kScEho3YpnmQUzKgy6PpBkKYZ8nPiysTwspiMD5FifCum/UZYFkqloj4sfxfq+3KgmylEvDkSqbZYTAR0WPHS0UKlQvn+tMhjsWCweJADi5Ero1aCe/pUSsMSYaDZwUoPz4peJCCUh1AKQpAtzDQjfZ9F0mfSRIaWUOjC3LwVR4++g0OH3oZel6E2JWHT1OATIB3c1Ak0JH1LvTf9eYNWd5SkMLXFg90CDSJHYfGhHFoahx3DqMeSIkwBLiy6U2DZDwPmVxqRaA9Hbsd5DC3UKG2aVC1zcisQz5QSmUf1fzXxFUDl5OTvMKpWBXEE7vx6YrEat+/F4UTDPoyKptEunQqVqwQdRJiiInOvroeuLmRFgilR/pyfK5u4m8yzEbb5Ihxt+gq7qz7EeUco0ntikWo/i1xjHK44opDWcQ6ZHXHIdMUiw3EGGfYzyHCele/l3IvHTdsFVPZmoHYgCw3D2Wgey0HTaDY4/lof56Jp5BoahzPQPJaF6v5kVDy4iooHSajsTUHVw1SU9SShrCcZ9UM50I3fQMvYdTSNZKFxJFOOusEMlHRdQeWDdOR3X0FS13l8VvOFHBedUbjReRHX2uKQfS8e2fcu4HrXFWR3XERWx3mkt51FuusMrnXHI7fnEq53JyCr4xLyuhNwrfMCcjrjcK0zFllt0Uh3RiKnM1a+n9t1Cbk9F3CtOxZZnTFId4YjzR6Oa51xyHTEIt0Wh+ttiShgQ6b2C8hxnUNeWxxu915BWv8lvHfzbVzVnUTZYBrqRrJR3ZcmrdRZ7MriWP3EbbnOmoE0NI9lQ/84C+bJa7DP58M8X4CKwXQcz/8cF1tPwLpQgG5fOXqWy9GzUIbu+WJ0LxTDOX0btulbqBxIwYmyL3Gu9TjuPLgC2/QdKO9zK5bpg0/wLGyXqn7nnGYTEQkm6XAjAQ01ygwQlWZe6h4EjNPX3oVF8Yt2YgFOlPVn4bL5FOp709QcZnpWQB0DIGYBlKew2F1JlkPzndXkD9SWumbLcNV+BgVtCfB4VUHyt28QClhxzCowpbJvkvXSir9VVo6AVGN7+KptXH/2ugrwOeeecASBiMh8vrFx8WdyHkwDs6DJgmWPHkNztSjtTsXZ2qNI0UdhTNYrguRv/P33+L8AJPn7dRu7nI/2GVxv3Ub4PSZxRRj1GVDVdwP7b+1A/UAOlqVIV8ksJABmNlIyUgb4/JTv8TmpDKXfx6YbBBhcB0ky2LEiGSoGavy+6nxJT/PuqXLRfGc5z8MvNUGURQXv6/Zd/8b3ksDEisn5elR3J+N0yxFMBpqk6Q0dImjV6ZgrRv5QEtKdMYirP4yo0q8QV7EfmboIlHReQUN/FuzDN9E3UY6xhVbMLhF8dWB8pRWJhlPIbb+A3pkaaVBFpyapYxK5Ivcwjk0WAWq2vCRlgpr/YHDEecIGTBqJI70qCMS4HwSJHa7PLDbkuBLApeQDwrLDqeQxIpVU18s5tsj+Fh4DiruvIjL3MziHbmHRZ5ACRhYx0uZ3YLYWEfrDKOi+IkWpJKuY4WSmkPu9zCt5Zlt5XsHxt8ErAd/qXAzW4XBPY4G+AqLssiv4QuRSFtEs2ydKEN18DCcq9kofCxZsk2DjPkzZpp+v0hmT3TF5qKy16s689jXxwvpDkWr8/ODBPVbtrQoMq74JBLmCTbg+CmillEh5jnOdVGvOeiDN61E1Yhyf6to49je6j5v9/M//lufE5yRBNtdyWXOtWPQbMO3RYcJtxP3FetzqS8af7vwBu8s/QEFvAoaXG+FlrQ5IRtFtygQstQCU666wiZEB3YtlyGyLxZ6qffiiNhx/akzD72or8PMyM54tacNzPMpseJ5Au8yCF0rNeL7EjBfLLFrjG6XtfqnModq+a9aAq+4k5SyYdIAuJXKUO/AzHpqW+2+p465wqoN67konfsGjgky3fd3hwCuVTvyyyoVfVbvw6+o2vJxZhv/wn36A4Lu+nwWfGaibyEFaz1mcqPsSua4zWASLMyxSZLfAyRug1Y8yWucA40MWdw+mVZn28Nvg9VswHzCj39OCskc5iKg+JHY3zQ9yMO5ulm58rLK+0BoG01i+FIiJdpgLRYAbslXscLhYCGCVwc3FTVWiMwpn9Ks+n9GtUU06TdNFAC4uJTLZFOgloJeiArZTRxecrjt47/2XERv7Bfr6y7SJojZsNWm+/nWQjaVsIpjSJRsOHgLOGfFSo6wq4YOuJzznrx3aIrNCVwGZiEYs+huQbD2NG644DM/VyLXJpi0pLTW5lL6Wnx2cjJxUBP9POGjDFDBgfL4SN5wxONV6GBMenTCi1FCTNeC9U/ouLvwE31oVPy39xNqPTisKcMvmS6/tgA2PocON7gv4ouRd7Kr5GJfaI3Ddfg63Ledx1RaJtLazSHXEiJ94hpPaz
rNIsUQiwRiGRHM4LpkiccEQhjjdMVwwHsMFUyjOG0IRbzqFRHs0LpmjEG86iSv2cCTYwpHoiMJlazgumsJw1RGDRHsMkl2xuGSKwDndcZzTheKi+QQuWY7jgjEUF02ncF5/ApfNUUg3RyHSdAxvF/wR7xe+jzO6UKQZI5CkO4UrptNIMJ/GFWMULuvDkWyNQpozGsn2cFwxnUR86zEkWaORajuHDOd5pFijcdV0Gqn2CFzrPIPrXQw6InFZdxIplhhktcepv7eGIdlyEinWMGQ6o5BqiUKOKx4p+lik6MORYAnDJfMpXDacQIw+BJ/Wf45/yvtHnK7ejQR+3xqGi+aTch3xxjBcMJzGeUMYzhtO4pLlNBIdkbhiP4U4wxGcMx7DJVe0MKkHSr/E0YrdOMdrt4ThiiUCFw2nENcSgvO6EPX7llBEm0Kwq+hjHKzbh9D6vUg1nZKiXMUkcS47EPAyQ2PDss+A7pkyGEduo2vkLu4P5qPjcSFcE8XofFyInrEiPBgvQe9MGXpny0VC0jVVgfbRIjwcLYNzohRX+UxNJ2AdyVtLwWuOQQFpkMEmGRyvqiMg2SORQ0iArzIfhvG7uGyJRklnMrxB4LHRhiX6S25u1HtapaZEglaZiwRC60D3Rl+vC8Y33CCFzSSIVAQBSQJugCpzw/OwwU2tsNS20GvbBud0CS4aTiO2NgQDS+wWx036zzfSf7XvyeZK6Ymynnu03ISirlREluzFvflyKX5nbcIjd5MUvNYMZcI5lY9lsnmUqclarTSlkhWkdaIQE7y3HVjx0cGIpAWBOsG3KnLsmiwTq8GctngB32Tl1ZoWXNeexj1Q42p6qQlVvak41LwXxqlbaBrJQ2H7FWToYhDfdAxxLUdw1XgSee3nUNOXDtvQbfRMlGBovgpTnia4GYxwLKND6XoD7RgPNCHBeAxZHbF4MFcjTk4E+sECRQZgzLgxY+ghcOaeSVCrWRkquSWZU02rrAVvBOzL3OckgNLuFckTTYvNLJhbyzSTWffS7tdtENtfr1ZsyEJv6rprpu7g+O1PccccJ17fBN2cEwwSuP8OzdUjRncId+9dkroUkjGSRZO9g+OZn79941X00nKPgplOjgHeJ03euaITi1hpoU6Q66GVow61j3JxrP4gMtsuYI6dk2U/VYB9bQ9XWEA6XUvm+usGCZRqrh7MbrPWSorGn7CvroJx5WgmJIK2XgkpKc0Blfaa5OHaXv00xvTmn0FfejefIa2JPQ4MLjUiwXESu2s+wonaL9E6cgOzLJ5nJpjsPaWxPkoOTeIRXtuXhYutx7G36iA+qo3B+43Z+EN9DV4pM+HZAid+UtiOF0oceLHEqA6tC2XQDnD96xrYXrMHFA23BrKDYHv9K/Xc3zyUkwnlJQTbXz9+WeUU4L0GvkvxH/7zDxF8P0xHQ18G6h9nI+fBeYQ17EGK8SRm0SogTToiEnj7rcoVhNZDYsdHEKgmDZlSMllD3ka0jOcjveMSIptDcb42BK191zBPrRkLAWFE62AeElqZnruIKUaVATs8XkpUHJKmE0ZM2HRG8yYB4YqVoj5JK7jgxslJS5Aqm7dKL61G0QTv7OQkE8wiGwoBJG3RTJYcREa+j6rqRCwscsKqlJSKZFVqTRZKkVYwKOCEIwvMRYdRvGo0wI1IJhkXRdnItdSNRPP8OrigqFcWKQaCBXECrk1Y9DQgyXgaNx0XMDJbpzlyMNDgeXFSsejwm5kAXvvGx+O5aly3RSLccBRT0uCI906BbwYw1K8L6F5RE4xpKrI3c/Q/n2qFe5kSEdr92bC06MKc14nllTZ0jpUjovwwdt7ZgWzHBdyfKMWD8QJ0jBeBm2vHeDGcjwvQNlmEzqlStD0ugn34Nmwjt2EZvQ3d4DW09GWjuT8TzX08rqHhQS7KHGmo6byOlod50A3cgH7wFnQDN9E6wC6k+Wjpu4ki61VUd2ahpe8GGh/koLGXrhk5aOnPRGNvKnSD16Efugnj8B04Ru6gZDADp5qOINZwCrWDOXAO58P+6DZMw7dgGcmH7VEhrI/uwvW4CO2TRXCO34Fl+CbMA7fgGi1BY881lFiT0frgJhwjxWjuyUG+PhY3W2JQ1ZYC60ARqlyZuNlyASW2qzD03YRrrEC6Vt6bKBKQ2jVdCudwAayPbsE0eguWsTuwDt9CUXci9lTtwmnjMVT1pkE/eA36oWzoH+XA8Oi6XEvrwA00PcxF08ObaO2/hYb719F0PxsN9zNR3ZWJqo4MFNpTUN6eheqeXFS3ZaDUlIBKexJq7mWguisd1T2pqH2QhtqHKagfyELF/VTU9OfgkjEMcc2H0TZ2R0kTVhwSCCtZkB1DM5XIcp3D8ZajSLbGIM0YjcvmSMTbo5FgiRIHmSRztGQ+suxnkWGIRpr5DDJaY5DbEIMkYxS+LNmBKMNRuOaKFKBgoRnniUggNJZL5i43O84tAg+rAm6ymVnRMnwDF02RKO9OE0nUpoCU7+8mQGTdBiv/WUDM4JtzSgMabAr1tfmjrR1fAwFBfb36G573kw4WKwmgYhaQfQpo6cf3IbvFz+TcZUE5s24+M+YDRlT2ZSGmKRTZtjhMuluV7/JTBN+c/woU2oT5tjy6hfi6Y0jURWBkxYSZFT3aB+/iuj0eJw3H8FXTblxyRGDE36DumzQbI3ur3BqElBDgQrlhJxDolGfo87Oj6VrPh3sTJdJkhz7fIsMTBvBpAm+ygiQe7JhcbkZ+dwL+6dabSNSdwCX9SSQYIpDsiEP+vSto7c1A++gdDC1WY47yPPYHWDFLrQzHEplLEjEB3gORw7kwu9KKq9aTSGyLgHWqENMBIyYDOszQJlUrZmcRI4kNKXSTnhiKLWWnYL+QMtxfNKkhm/nALDU1DHz4tWR/ydZyf5B9h/fcLs3SsKzmkMejVwGd3wp3wCR76tySDva+XMTqjmN/9rtwTBaL7RyDQQkORKLpwNBMzTaCbz7bjQ+lsVcNiwha5T6IyQEzbdz7lZWtFBQyKPdasOA3onEoF2cbQ9AwlKOCB21+c45/7ZAMNEk6bb352rzfeA9dXSNknKrAQ4A99+7gfZN5zf1fw0Gsz5Lzf7rjerN1MahOWKF8c7kNPq8TvQuVyOyLw+78d3HLHofRhSb4V1zwrjgx77Pg0RK7gOch1XwGhytC8FnJMXxUlYg/1Rbh9xWV+FUJ28Q78FxxJ54v7sSLpQ68XGrCy2Wm1RbvTwLd3wTf60F28GsB2tRvf/MQttuFX1SqQxjuCgd+uf6odOCXlU78qoqstwu/Eeb7Bwq+mequ60tH3Ugmbj68iDMtB3Gl/hBmvQ3CRFFCIsUhHGjBwSapYcVCL3r1mJqvh3nsNm73XJUGLqfrDiPVFA3HaL7W3IQNHlS3PVqe5Tku4mzzSbiWawC6WCyyeItdHS3w+CkvUWk0SjjIgKvBRSBKKQklIBr45iujtFUwrgFgbuLCRqtCMZm8Wrvf4UfVsNtv4PFjMjPUEDPty42fBSUqxUaQL44eZIClo6NZvEJVuokSDrpXKM9RLuhMJQb/PvhK
RnH9QcmHjynZAB1YCK4tmPc2I8kchVuuyxiZb5RiQKle5iKiTfo12Yr6nmQcJHXJDVAdCgCs/f/xQj3yHDEI0x1RzLf4aKtz5OYRBN9cHCn74WR8NFSP4sJYlBXHYWigFh63Cw/ulyIn7yzMxtuYn2mDznQL+2M/xv889BuczPoSnSOF8AVaFbjwO8RNhgVK3GzIyFCKQMsjahzJLAhrItkMMmcsUDHCasxFelIIdI258HraMfW4FbWVl1FZGo++B1UYH9OjujwBiZf2IilhP+613ZEGPCKToGMOI3Qf35sLIJka6tys6FioQprtHO52J2PU16o6QrJQUarfqbtsg49jTxo8qUYk/Dt/oB1zc2bcKYzC6aiPoDPmYeyxAV6kqwgAACAASURBVBXVCTgW9jZORf4J6dmh6OmpQVrmSYRHf4r4hH0wmK5jcdkmhUp+HzdoOtpQCkXnFSXhYLt0fl73VClO1hxE0UQuJgOULHFx5z3R2Be5R6xZaINnyfX/M/feX3HnWZbgf7BzZs/Zsz/sD3v2zJnpnundnp7e3emurvRZlZVZ6V2lUTplpqRUykuAkIQEQiBhhBAyWGGEAIEkhPAEgYkgiAi8DbyE996Gh7vnvk+ERGZVKbvn1GblD58TQeAivubz7rvvvvvQ3JCN4gdXMNhfgvW1JnR23sftzHNITw+G0ZiJmfkm1FQm4+6tsygvuoyhwRI43HRn8Dg0SBDjNadsAksHU3C19Rw6ZnM9CWSHOKUQlPF3OibvI8Z4FmcNp3B/IAkFfTdwpzcO2f0JuNeXgNzuOLEsvNsbi9zu6yjoiEW+JQFFHYkoa45HZnsMfEv2I6YxCH3rJcLMUc5gZUIsJWWy0JRWsc+AZWFVnZJkk6yp7DXN0I1kykjyike3HmvOnxpouA/Y2KuhALUak+1tpOP/VDI33sfeJcDKM72Qz+kOw8cfJuBKJ739NTK6VukHUY10BA6PWS8B+gTeDRBQ5FTTDKc3jTJq/mSFDwrHboE6eNFb/ozgm+/Tyzguu0zQ9qcg4MF+RJnPoWLyPoofpSJVfx5RxrM4rDuKXWV7cK0lFNMuvdKFutpgc7Vjba0FTgclbM1wOhqwsV6PtRUmGr2y70kjLisbnmoAKybbx8tzX+P3nno+/8LHheeWMoZZqwG5PdexI+cTZDWGoagrHqbxO+i2VmB20wSXJBOUXHn2FsYgAi4SK7w2xTedEgTuqV3SW7SBOsS3BuGs4Shud0ZDP3hL5itUD91CzVAWdPTRni/Eutiq0uub0hGlTeaezDjD6188xanPFlmWAtyUKLpAZxxOK6VThQLflIlxdoXbTfabv08gTsmKil+08F11NaBjPA+pVWcRqT0u8r8plxlszBM7YcY9gm9XG8YXyv+C4Nsr3fjzj0ruoqSm28G3yD94nbo5cZMJNCfhMrltxphNj/u98QjT+qFmOF3Z/jG+yFID7qTa4qm4yLXO/ZX3pCyVfHrJLZXsMOHxDskjSadi+pP46q22KzJM/U0l6RGwL9V47ve8vpiU/bzX9U/dQ+5N9neR6OwAHF3YsrOfqxm5k4kIKN6L0o5YLG0YwSm3Mw4zzJP3pHIbUO2L3SUnsKPoEj4tz8FH2mq8WVyH3zyow7N51HH34LmyfjxX2o3nRLPdjJc8MpMf2AN6JCbbXUrEqeRPsN1ehltkJZ4mSq+em02U29fv2FS5HXhr2vEqgXd5B17TKuD9eoUFL6b/QsG3ZiQF1Mvqp28hfyweV+pP47LmKBZsOtFw08eS4ImAeM1Vh3lbLcZXq/FwsRzts/nQD91CkSVeSqlh+gAkmDi2OBWjKxUS4KQkS+acIISgdasZuqFMhOgDUDCRIaNtYVOOFtIwRvE/wT03EzLsHv2bAGICaN4Y3Ag9DIEw117WSphpr8aMN4BnmArBroB0bjQE22zeY4mlw1NW85bX/viRujeCMrIIPAb8/9y0+Nwh2Tpv/G1AW5xeVCOigB3P1wqIdknDHUvotK5addfhRmsU7nUnYmqdlm0WJedh6c2TxSv5jYe5E6kPASKPAzcLtfj149fQLp3fBf1XcUbvg0lpnOExI/CiN62yS1TMN0FKO5z2DpiN6djz7cs4fOBN1JuzsbrSjuKia/j18/8FceHHMfzQhMgrp/DJV2/gD1+/hu9CdiBZF4o5R5Vs3FsObvgMRp4mSjqa8Pi61TFWOjxeSw1qFO5WB8bHq3H9ug9efvFvcSX6FNasvejr02LPnlfx7Te/hVabgpbWQvj5foK9372J9977J1yO3o/xKXpj98JJqdMPrLBolcbSbis65oqRVBuGgq4kLLjqJMCQ4XfwmmKytdmp3puLGks1/IbXxqa7GwszRoSf+xov/Mt/wK3kMDTU3kXYuT348tPncO7M1/j842eRkRKGz/7wIo7t/xQ7P30F1y75YHaKfQFdcNkJolWQZoIj8oNNdb1QAtU7V4xTJUehtxZjldcUEzKOAN80w+XppGciyqRofr4BkeEH8dEHz6O4OA59A6WIvnoUH+x4Ft/sextHT3yB4opUHPP7BHu/fx0HDryBzNvBWF5n6dYiDTfUhdrYSLbVKeXwkuFUxLSHoG32niSCcs4oy+J1znHo43dwqzESVX0pkjQzoaR8QyUJLEuSCW7CxlY9VrbMohtlk6skXJttGHBU4yoZRf1J6B8mo37iLgzj2TBMZKN5Jg+WmQfonLiH1sk7aJq5h76lUtlbGBAJcMWxwN2Eikc3Ra6kG856XHV6WqAhoF51NWJkqghjSxoMrZRjeLUC42tV4m0+s2HAnLUWCzYjFrnsRizYOfTHiEWnEUtOE5ZcJiy7zNhw1j99ueqxtsmSN+9FD3suCSDPu6qGUVLH5uVNexNW7XVoWirEeeMZ+FX7omzuLjYIvjlZ9y8MMp/29wTIcB/YbMaqy4zGqXu4agxGqOE0LpiDcLbCT/Tod4aTEdYQhChzEJoX8pRlpbsTW/ZO9D4sgVYbj8GHpZidqUJrSzpKiqNRUZ6I6UnOb+iUaqHStqpj0z2Tj6sN53F3IEE+ryQBPzNI8YJvMt8Vj1IQqPPB2HqZ+Bx75Vec0sxGe2m258AZB/XW/Axs+m6XSsaWi+edvUVM+C1go9/6lgnXmk/jSNkuXKzyR4YxDGnG87hRdwGXagJxKv8A2LTZOH4f06vVcm3ReURJALlvqgZZGfTmGWzDxEZkIdJ8x4oK5Z18Pwqw0qp2bK0aj9arMew2SIylnIgAnIQTwerQcjkyWi7hnMYXhuFbWEaTNGszHjspV+H/IFh3tmB8Ufuzgm/VY8Jj62HgJa6TWFPAe9NpEuMHp+i4m4Wtrxi5hbDqE7ioPwXzZLayRJbfV+yztxr/OFnyVKIFnAtRRmD9o+XBD4yhLsZSb2wVQP4kzkq8FaDNmK8qaopcUQQLtem0jVW665+WgzztPv2Lfo8yKRd707h392DTbsHylhmXGwNxuT5IqtbWzTYMrVeibDAeofrD+OL+TnxQGICPdLfxvl6P35XV4+WiFrxU1I7nCtrwTFEHnivvwXOaLjx
T0oZni5vxYkkLXtpmD/g0sK003NRxe5ZHx03Q7dVzv0LXEo9zibiXEGzLsoiTyavlnXiNQJuAW9P++PnvtwHvNyq78FJ6Cf6nX6Lmu3IiHYaJDOin0lEwmYSYznM4W3EYvTYtZqxGDK1oMbhcIT6anN5V2J+E9LZoxJjPIbj6BE5W+YqONLY1AtWTtzFlY7mRJWY2XrL7vkGapagJdztUGbZj6gGizSGIaQlH30o5pldrMLKmx9i6HrOrOiyt1WDNboLVWa8GA3DypFPpymWjYBlYSkhqY/eWhJU8xfOaMGxkXtSAHJfTrBpdmCGTMRYNlMeazUnfaoJHVcaTJhMBqwSsqgwoJXEB8Wogg2yKXu2dNJoxo/ZsIpIYcDP1WOuhU0qUAq7JzIr3bxtWHLW40RSBu+2xmFo2CDiSjV4YFroUeFh9TzB/LG+howWBrnd5XS4E/HZI41DlRAr8NPvxaIU6Tm42PCeqAUhYdXFOIQvG99iPhwMaHNr/Jvbt+T2MNZlYX+1CVXkKfvMv/xfCzx5ER78eB09+g6PHdiM7Owm3SuMQXOCD/Il4kSS5nJwqRuaLII5l/3YBtZv0iqXNnqd0SukRfcYdWxZMLzYg/U4EfvvGPyL6WhDmrD14NF2HQyc+xWe7X8X9kiS091UjNi0UeZoUHA3YiV0Hf4+Oh4WwbXVhY4sOIkzsWrG+xfJuK+xbivFtnLyPOMN5FPelYlm0+TyXtJQk+GawY4MhdXA8xrxmeHz4M73A1iNUlMTj8z+8jMzUSyjITcC5M3uQGHsGvZ1avPP6f0d0xHF8+sFvkJpwEUcPfI6QwL141M9KTq9MsWQABL19xS/eU1qVQU5N6BjPxTlqbLcMauSvvUmSCDoMUd9Jqz5+FutWJ2ZWW3A57jReeeNXuJt7FdqKZPj570BI+CFk3LmCF3/3D4iM9kfMFX88yLuGwwfewbnAL7G0QHkF3wuDAM8Fr9tObG524sFAIqKaA9Exn6eSUiZIbJbbahX3g5rR20g0hqC4M17GaYtLA8vkwlaSxVUNubYtWk2qKgYDPQczsWmn1abF6SofHL3zJSK1vrhgCMSZmpM4UeMv3srndSdwofI4AiqOYr92Hy43h2B6jbIrlago8N2M8oFUxDVHoHYsWwHanwCpa1sNqF8pxjV9kMhlElsjZaJremcMcrqvy3TPov4kGB6mw/AoHcbBWzANZcI8koX6sWyZAts0cRfNE3fRMvb01TpxD5bFQgwsl2FwsRRjy1pMrFZifKUCE6tVmNjgflaFCace4xs6ISniTCE4lLsb5/Sn0LxeBgcZRytlMj9foKaXP4EvK0YEXuubTei3alExegv3euKE+R6116B66jZCq/1wvS4UDzeq4Ob94eqFfakVkVcO4/0P/gl37oSjqioBgYGf4MMP/hFf73wZufei4HB0wMnhS2KByj2xBQTf7GO4N5CowPf2ZsOf6fMLANtsw5qtDnXDt3BBfxxTy6UiJ1EAlPs/GW5PRVV06fRr5p7GyiKreIxrlDTWKRtPMt9r9VjeNOBG13mEt5xG/lAyuhaK0bNYjJaFAuim7+CK4Sz8cnYjXHMcd7uuo3ExH5PuWg9TymSI92erSsb4HsiwO9lsXwc73TGk8bARyzYDptf0eLhSDv1kNm52XUV85yUUTmegZ10DK4kyNuhvtWHJaYamPxnnawKQ3H9VgCw/C40L+Fms9jppYORnosPQ6Gr1XxB8s2L9tKWSbPmcjyvYSs5DSY+qmPB90oHLLMn+hLUKt+ou4ELhYRjGs7C0yWFAiliTeCyxXTUdSoWYEitJLvh3n1SGpUoqRN2P8QMr4CrJEpJO9M/b4qzEV4Jzr0SGLDf/nxePeMk9JU/9Oe/rp/4vzi8RAwW6h3XLWoMZJ4r3IrH/CrrW9OhYKkFaRyiOFH6Jzx58g10NCfii2YQXSlvw/9xvxT8Xd+IFbRde0nbgBW07nitvxbPlzXimrBHPlFBq0oKXyxSQfhroprSEgNvrWuJltb2PP2C2NWS2n6xXyy3Yvl4rt+D35R0/WK9rO/FGhUXWm5Vd4HopvfSXCb6NU1moHb6JyqEbyJtKRFjvOXyavwMXLedxtTUMF03BOF56GEcLvsfxwgMILj2Gy5WnkW4KQ2lnPCyz+Zh1mbAqnfvcaNm41QS7tRYrDqOaSkWm0UpGQV2czLBvN0XjSO53CCrzRWT5SUTozyDUGIhrzWHI7o2FZiIDjSsF6LdXYsppxIqrEVaybe56WN1mODylFAYSllSYHfPR+9y6acbaphEcX23brMfGphk2dyNW3EasbBqlBO7wNJHa2TUui6XkRmzIasDGZgNsLH1xbZIpaJQpi/I1XQFksXmmHhy+4NV9OjYbxHOT+i/vkqYHsZlSGTMZ6GW7DjdbL+JBZyyml6plzD2b3liSlEc+l2YYamGbhIVR31c6UhkLK1py/jxZDMXCr27Ww7iQhSNFe9AzVwAHmQ0P+BYgJRsPQROnntWJ3hsYQHLSSZwL/BomQya2XIOYm2pC+JHvcDL0O/RMGuB38iu8/voz+Ozzd+Ef/B3O5/rgaMV36JgtwAobNjmsSBxyuOl6ZULcQMl+8/95wJUkF9zUetDfo8Fpvy8ReykANusAnLaHyL1zCQH+XyDv3hW4bEPYdIxicqwZn3/8Eq5FH8b0pMFjEejx42UFwkF3BZaFKfVog344C7Hm8ygfyhCmh4GNSSHP4ZOSoxf00H2gDS5HE5x2JltD6Okqxne730d6WjTu5yYgOHgfEhOD0dOjw2c7XsGDvGT84cPf4tVXfoW333oW6TfDYbP1gAMUKNUSgE/ZCR0hHAQ6bFZuwfyaDtUdiQjPO4yZLaMaNOTg4BPV+CqNtCLRIQNEhr4fXa2lOPb9DmiKU6ApT8LJM1/h5u2LqG/X4I33f40KfTaWVnvQ2lqI436f4PKlQ7Cu8nd7PbIGPud76sCqrQ7prZGIMJ1A31KhSjzsPG5MSJR/uG44Awn151H28CYcou3lcaYGVbmeMNCx9C1gmXpHR6vIK7bE+cGC5qUiXGs5j8zOi+haeIDhjUr02irQadOgY7kI3bMP0DuXD/1YhjTUJtSHYnRR89gNQgLyVgvK+pKR0HIR5ok7Kkn4CZA2btMhtv8Kvr/7DS42hCC6PgRRhkBE6c/gYtUpRGhPILzqJAKqjuNUpR9OVfjipNYH/uVHcVxzBH5lh+Fbehh+pYfgV/L05VN6EN+V7cP+0v04WLRPfv+E9hiOV/rAv/YEjtcH4KDBBzfGEhBZH4xDGV/hXOERxBtDUDWUgRUmhKzwWb3X4M/zSEmYyA0IML1Dzjh8xVknCazTI8PqspYguNYfn2V9jPj6C1hnUmvvwXBfJX71yt/g7/7+f8WNpCBkZYbicvRBZGeF40zAN/h652+xtkYtMwGU2ge84JvM92Pwzf3gZ2a+RY7kbMbqmgGGgVScrzmOSZtWfPZZtbER4LLXRxy4lDyBybrVQe0xbWu7JblUwI6SEdqV9gPrzZh16HCj4zyS+qPQsaqFAxZJonmsed8tox
7N0/eRZD4v8fRUwxncnbmFMUcN5p11WLBxXkObqvRK0zGBId2F6G/NWNaEXmc5cvqvI1R3EieKDyGwzAfny/0RXHQUp/L2IaomEE2rRZhz1cLqbkb3ShmS26MQ3RgK02qB9Fa4nWZPz4OStlD3TQafswYGVsr+guD7X3E9S8OykvOw6sZrhomBdasBtMJcdZulIrZm02HFXo32uXviDsUG9gmXEatOExaslVhzmrEqFataLLkMWHTVYMltkFhPH/eNzXrYiAPcJtg3FXZ4jBdEkufRlpNUcDRuWw3YdDSq2MreDS4B3kp3r4wfFGkhlXr2dBHn8N76ib3q5/w+EzfKCZ3uNjht6vNsbBnhn78bEZaLSHp4C6d0R/BN0Sc4aLyEAy3V+LCmDf90vx7/+KAF/6Lpxa81D/FMWRee1zTiOU09ni0z49clJjxbYsbzZY14ScOR7h3iUOIF314Nt/fx3y4pUcB7O+Dmc3p3c/1eawHBNhdBuPc5wbcXeL9V2Y2X00vx7/7mFzhkp2I4BbqHyagdu4mqlSxcG4zEh9nv40TFYcQYg3Gz7TL0j9LRMZWHoVUN5hwGrLnrsOGug9VhhlO8sBX7JQHe2aB0ZwK0mG16s8B22UjIPpBhm1vXi51a1mAiMjquILM5GnGtkQhj6bPKHwFlx+BXsB9+D/bhXLkfEloixSKneDoTlWM3UTOcAtPYTZgnbsE4dhPG0TT1OJYGw0gqqkaSUTqUgKKHcSgfvSGraCAWxQ/jUDaUiNLBBJSPJEM3cQvVE7dQNXYTlaNp0I6kQDOUjLLBGyh7lCSWdxWDSageTpZVOZiEqqEb0D5KQPnDBFQOJUE3lALdo2ToB1NRO3IT5vFMNE7noHX+HjqW8tC5nI/OlUJ0rpaid7kEj9ZL8MheCsvqA1y2nMcNyyU0Td/Ho/UKDNmqMOaqwbi7BpNbtZhFHaZhxrirBsP2KgzZKjFoLceQTYshO39ei0dWjawhuxajrioMuythWryNYyV70T2bDweBHTd2MuDCghOIUUunHAkUyOtGQpwvggK+gFF/C27HAKYnm3Di+89w/sphDIxpkBx/XBhff7/v8f3Bj3Gj8Jw4WoSX+6F+OQ9LlOZQ8yZuMCq40srJLhaIbbA7TbA5jQJOCXbdmx3o7SmUsbuhId9hdZVe3CO4d+8STgV8jsKiOLhcwxgbMyMx8Qyef/4/oK0tDw57F1zuTg/IZRCsw6YMi+CmaBaQqOlLQmLDeRgmcsSxRzHQrLzQC10187B0TA93Jzdkl8kDzCmbGUJHewG++eYt3MqMQVFJKi6EH0Z8UhBa2kvx4UfPIyr6JL7a+TYOHf4COz77Ha7HnsLiEpOLHvFKtjOp8YAL6jUfuSpQtXwHOUNx4vgSXHEMD91aWJ0GSRzpfiDSIbGnpK0eGVFKGroxPFSDPbveQv69y6gojUXw6S+QkxWBpsYCvPj8f0J/TxWc6724GLIHASd2wGzMwpa9G7B3ANQbE2RtsXmrAa3D2QgrOYwLFUfRu5CvehVcBOc8H81SSTCPZSOpMQwFA8mg0xG1y5Sd2DlaWUrD3MzpKdukfPpZEeKYYmcjVrdakDuYjAstwSieSMcqWThbgzD7BDc2HnsOGdlsxsh6BTLaopBgOicyNZZtyfoLA4g2FHcnIbHlIhqm76sG658IaBPLWqTXRyDTcAGz9lowqVYJcqPsU05Xg2hj192N0gS+6q7HirsOy24zllyUntRiwbMIYJ62pp016LWWo2e1FJ3TebBM3kfTcLYw6tqhm8jpicWJB/vxnwP/AYcKv0f5VBbGnNw7m2B1NsDOxk8ya+s8N/8KoPITn/1fG8w5/Y7nTmR7BJn0VXc2YnK+DIMrGlgWi2AYTkNiXRAO3/8ah+/vwu22K1iwN2J9uQ0xUcew99gn2PH5aygsSMLERB0eDVajID8O3377FiIjD8Nu71bVRalkKHbRq/m+0x8v4O+vofkWre6mGrZW3Z+Ms3o/jKyXYWmrTnqTeH2KjptEBYeSsZIj+t8ebKELTvZRUK6ILpEm2Fnhw4AA8FU0I60jCimWaHQvlouckiQMJ2WqSo6SKqy5W9C0WoqoxlDsv/stzhYexg3aGHYlYNHNc8PmZ/bh1Mv8C8dWA8aclagcTUeg9hj2Fu/B0YrDuNoaDvNcnvzOvM2EysF0+GXvRkDBQWR3XkFBXyKuG4IRWnQU+W2x4rUvchqPXNPB0eKUL1Jawc8LC0ZXK/6C4NvTS0Xi7U8skhPsm6IUiFINmVuwZUKPXQPj3F2UDCXjTu8VRLeeQ1JTKOLqzuKU7ig+L/wK35btQnhTIKIMp3C99TzS6kNws+6crPSGEGQ2XUBO+0XkdcdIvNcMJ8Ewmir4wDSuMEPDVCaaZm+jeS4HLfN30LpwF50LeehayEfXYgF6lovQv1YisXbUWYWJrRpMoxYTmwZZU1tGTMOEmU0CfpNHakIpo6r0/Gvvx5/j51QllfLZdjnf61sGtG/kYHfOx/hDzk58lOuDr6qC8Y05Dh9XavB6UQ+ezRvEfy9sx6/K6vFMOa0D6WhiEevA50qb8HyJWi+UNuO5kkbx8f51USOeK25+zGr/OXabLPcrZWptZ7b53Au0hdXWWPD7bev18i78cD0B39uB958C3//+lzhevmw8DbrRmzCOp6NyKhWJlgs4UbgH+tF0zG/UYHmDDLYJ6+562DzgiiPWlXSBNy3ZbFV2UV7cyphfOoBpei+a8WYsolEW9afUl7mtbAZpxKizHvN2E5aX9ZjdqMWstVbG9g4ua9A0lYuyR6nI6r6Ga01hOF3hiz13v8aRgl0Iqj6IEMMxnDf6IrT2GM7VHEGI4ag8ntUdQkiNDy4Y/BCi90GIzgfhxhMIN/ojrNYf5w2+CK3xQajeB+e3rQs1vni8DH4I4++b/XCm3gcnTUdkBTX64ZTpKPaVfo3d+Z/hSMUeBBqPIbTWD6G1vgg1+CLkB8sHIQYfnKs5jnM6f4RUHEOkzg8Rtb64aPLHB8U78GXRToRU+iGs+hSCqvwRWnsaF0xBiKgPRnTzBbGUo1Y0qNofgVV+CNL5IFjvh+Ca4+LlHVjtAy4Gk1DjCSR0hKF2PhPHyw8o5ls0yEqXTvDEcyd2h1tmbG7Sq5cNRT2Ijz2GwIDPYa7NxJZ7EBNTjdh7+GNcjz6AseECWFqzMdStQ8m9dAQE7EVKbjC6ZnNxJPcbpLVGoG+tGDYCbeoSXWaA5S422m4p6ythdqS86mlgc7Wgx5IL3yPvICp8H9atZFiHcDv7Anx830fegxgsL3eirCwBu/e8CpM5GxsbXXA5OMyF1x3ZbAJDJhHUWXPaIt0J6lDSeRUpjedRP31P9Pog40PgLe+LchAOP1Agl0MRxLVBgJ8Fm85uWNoe4IvPX0FqSgTu34/F6dNfI/LiYdTW5uDdd/8Zu759G9/teR/lmkz4+3+JwMCvMDBQIgkF7wmCUWH6xXqqHobhVJyvOgb/4u8QpvfD7e5oWOlH6+RUSMqjPOVOSU5Z8eAgHLIwLXjUV4b9e95EUX4MysuvI
eDMDmTkhKGpvQCvvPH3sPRocOdOJE4H7MCDklgsWjniuFdca9SgogZJiqZhREZTBC6ZTqF8+iYWXbWKfbWRZVcDqciMmoYzkVx/ASUDKbCy2UwkKyzhehrPWOXYVE4Mdnb/SwN2O7bsBBidAprj6kPEuUU03MLsqz4HScYFeLZgwlaN7O4rSG4KE0srMnAE4Kpq0IGCzgTEN0eicfaBp8fj6SD10UIJwvWnkdV5FXN0YPKUhOlcQI0mHZfEa/lPuJc8tgoVdyHub09fZOlYySBTSrcmMqQca71hr8OC0yyTOq9XncGuOzthmMvHEv36OYLawTI5ExoFglkl+TkCsPd/SBmffRKexjza1PXMFSJBd1YsK/31x+GrP4xw80mUDidhcKUUy1Yj1tabUGfMxO9f/QdciTmNj/7wCpISL2BysgXj4w1ISAjCa6/+30hJCYXVxmuJ9zJZXyakzfI/Hjdc8hhLqZ7M+M/HErL5m77r01YdNANJCDH4Y8FepRrCKUHjdepuh8PRhMWFaszMaMXxiT1CQ4NFaGvJwPRkJVzWJizO6dAzUITu7mLMjeswZatBSmskaKX4cKESm26LAEs64qhj3yb30ZajQaw859f1YvEW03MJQVXHcSRjJ8oepmLcUSMMPK+tDYcJONQ5uQAAIABJREFU9WO3cdboj3PVxxFdfQr1c3cxtq4T8mrdSU9rWvw2YclZh0dLFbj3MAn+Oj8c0h4WaaiuPxXTSxVK58y+Ewf3Pe5NniSXiZ8YKbRifOkvBb75t1kV+PNra8uizoXDgLbpPEk+ks3nEVl9AmcrfUW2dlrvg/iWUOTUh+F+YyQyui4hfuASEnouIrMtEvdaLuJWSwSutwfjekcwYjvPyeLzK62BiGo4iTCTH84bGJ+PCV4gZuDi197XvM9Da/3lmmBclfiq95O4zZgaZg5ARP0ZRNaeQVRtEC7VBiGq5gyu6AOR3RyNtpkHWBYZnsJG3vvtl/BIuSV7kezuVjxcLEKc+QQOlX2G3eU7sbvCF/t1UfhCq8FbJRa8VtaE54pr8SvaBmr78WL5Izxf1oEXy0x4qbQRLxZ34aXiLrxc3IXflnbh5ZIOPF/cgueo9da242WtmjLpBd4/HoLzWF5S1vEDScl24C3sNiUl24A3n/8QeBOIW/BGeecP1ptaC96s6MJblV14u7Jb1m/SS/GLBN/0+SaTXD6egvKpFKT3XsRpzX6QEbfa2UDmHVbBoPhkqYvK09zoYW64oQoo97I0DCxS7qaLAMvWylqJr6nXVcmL5RsF4JXTgDgQ0FvcRWBei+kNHUZWqtAyl4/omiDEmM6hYjgDPbNF6JvKR+9cASzzqpTdN18EWr31zxdhYKFE/Fn754vxkCOtPUteXyjGwHwR+uVnC+U5vx5Y+OHqXyxG72IxuheKYFkoQt9CEXoWS5Heflk2zdtd19C1WIy+hRL0LhSja64AHdPK1q559A4ahm6jbjATxsFMVA9lovpROqoGbqK8LxnlvTfENSK/NxEV/WnQ9KchrzcJud0JyO1KwN3OWNxuvSK2X/cs8XjQdwPF/Smo6EtFac8NlPenQtuTjIquGyjvviFgKbEpEgGaI7g5ehlBpfvRv1IEBzV1LBt7bK6U7IKbRCNc1CWTFbe34HrUbgSd/ggNDVlwbg2gb6wKn37xPBLj/TEyrEdCXABCgw/g8P5P8f3376K0IgHr9ha0PMrE8ZKDSLNEi86VpVIyNlaR3bQJc+qiBlRK0QRCSh7jdLairS0Hvj5vIzxsN1atHbCjD2nZ53Ds5EfILbqKmvq7+OjzF/DSq3+PlIwINDXdwtoKpx0qpwXlZMPPpxalNLTJ5ECAxI4ItM7nqU5vAjDRvCvNPku6BAfqOvYCAKULt66b0W3JxrGj7+P+/Suoa7iPyOij+OCTX+PLb17FEZ9PkJYWhvff+zW+/fp1fLPzNSQmBGB62gT3ZjccbBrmIA1+XrGpbETFo1REmoOQ2XUNfQtl4rMrSRCZMdFK8x7xuh+0SsLAUvjWZg+a2+5g997forDoMkymdPid+AN27vktAsP34P0vXkD2g+t4491/xv6DH+LOvSj0DZZhw9UJxyaDHzvcaYHXiiFXlXiBZ7bzPFVL8LOzuYj9GFJSbQUZvIrBdKSaImHouw2X6E95nCiX4rXCygGHc9TBKckb2XAm3A0i+SF4z2i/jOtNoWibyZMg7yaTx0E+vM5ESsO/147RNS0yOi+KH/zseo06p3Je2rCOVhR2JSKrOQqdMw9E4iKMrewdT1wHpNxLuZW7GZapPARU+KJhKV9cbdjMJtc6ExqP5SD3IWHWpcnqSXOkl0AQ5wkBhE/2NuWS4NV18vfJ5qqGbiftUplsSn+G8iqn+87ASjHOGI6huiteGHU514+tFXkNevpRpAqj/IMFkHpAqeheBbipa1NsV+XzENAygVbSHyZB6n5uk2PktdPjebB6rOooD5JrXe479hfUy17AHhBKglIfxuJUlR9yHyaicjYHLTO5eLRYhkU7nT9YLevEylI98u9H4b23/xHffPUb/Lf/+r9h73cfIj//Bvr7DGhqKsapUzvh7/8prFaCbzLErHyqmNG3UIy4lghkdl0Vi9i/BvjmvsO9iRMuObDrHMfLO2tVdVaa/FvETahGfwtBp3cjKuIIXM5+tDTnIuDE19jzzTu4GH4Y2tJ43M4MxVff/g5Bgbtx52YwJtdqkNISjpsdEeJmxP3W+9kfx0rp3WECq66fVTf19mbUTRchoSYUEWV+iKw6jvjGc8jo4dyECzhxZxc+Cf4dYgpOon+hAnPrRhQVRiA16RiMugQszdVidFiL7DuRyMw+j/6pCrRPlqBlPB+DCxVYsdKxxdP87pnMqBIe756nKhO8J8bJfDecwt2uy5hzciojQbTSQZMhV82h3NtUo6jXjcx7/fG6tG82YMamg3nhLoxzOaiZzgK94llNLhqIx4Oea8i1xCCnIxoJ7RdwUL8fZyuOIF4XgKyGCJFgVj28iZa5InQvlGFsRYdpEnMrNZjeqMWkzYjJDQOmV9kfZsDUSg1GVqqlZ2x4uQrDfL6qA58PLlZgYLYMAwtadM88QN9cESxT99E1nSdfd4zfgWXqHjon76J17DaaJ++JFW3dWA5qh7JQTVvX3lSUdSehxJKIks4EFHQlorA7CYXdiWDcvtV5BRG6AFzVB6FjsUAmpooJAPs5OKCIza9OminQsYYxsBbY5PLGIR5/byLk7Tdgf5uqJsqx5T7g9SCXiij3DjOcqJXKrVxfTKhEMsV9qwdwtAE2mlcow4ihlTIU9FzB+WpffKc9jF31kfiqIQcfVFbh3dIOvFbSi5dKLHixtBXPlTbi2dImvKDpxIuaLhmK82JZE14sa8GLZZSWcHXi5bJOz3M1DEeAtwy+2T70pu2PfLhfEYcS5UridSdRj53iUkKnEq7XKSXZtt6gnvtH601tJ7avtyo68VaFBW9XduGdqm68W9WD32WU49//x1+gz3fNUBoqx9JQMZUG3dwtZD+KwUntftzrvSr6XYIctXn8PAwN
gyPBt1j/iIOC8mdl8+IyGpHZGoNbbVfRs6gViyXYqZVlgxofGRgZ4LnhqwD/p0peT17zAjBPoJXf+dFz3hi8iajXZvlMbMy6UDmYhRjTedSM5MjrLN3RQ5YaOoJdu6sJNmcjNuz1WLfVYc1ejxV7g6xFWx1m1w2YWavB7IYB81ajrFkrNxZ2sOsxvlaD0RWduMo8XNJidFWHaasJc1YzFjfM4sspz9dNWF43Y2HdKCPLK4Zu41y5L6JbT+FyhQ8erRWLqwGrFmQp6dLCTZNWjNxMaUfFMci0QdRoLiG/IAyjI1q4Xb2YnqpBbNxRFJVcxfxCK+7du4LAM7tw+PCHiL58BI8Gtdjc6sUaJ6d1xeNcpT/ye5MwvVEj8gHq3N1izUWmzztSl0CIoJJj5i0YH63E/exzqNbEwrpBFqcL5vpbyMuPQrulEE0t+Th69A8C9iPCj0BfHo3VuUpVyqWln2eqqmJf1Sa24jIgpTlMysC9S0VyPcj1xKDHUru8J55Xz6YnzI9icEUX7mjA/JQG1RXxePhQi8WlVtTV5yA29jhiY/1h0GdifNSI5MRAXLl0FOmpwWhroRymWxo23QSswn5TjlUvyYfuYToSmiKgH70tZV4V1NpkcATfh3TPi/0emVGCYXraq+70R72FSI47htamHIwNVyInIxh+Pu/hxMlPkXozBKWlyfj6q9dw9NAniL/ii9qaJCyuUlLTo9gnjw6RlYlrLSHI747HspXynHaxhSTTTIaafQEraEZZfwrS66PQNJIHJ48vRzbTLYUA3E0ArvxsBRBSqiJ2iaphbN5Ri6TmCCRbotGzXCrJOO3CVHWFCQkdZQhkWzG6Xo5MyxPwLT74nmO3hFbc5+CTxgg0T92XRlCCSdVozYDC5JEBTiXydle9NE2eqfTDoFsnmvTH+5aAHsV+CxClJE7Y7202pbJnKJZWCAT+XS9Al0dvgNz2uowI9wzfkuqfCoAupwkdc3dxRLcPj6byYHXTHYOaVrLdariHDOqS6aDKQYn71eMl1Q8eoyd7rmqQ9O5xCtCqpIcJm9LfM7nmtcNEi3uody9V5AbPo7JDFOkP7x13CwbWtIjtjsZ53SmMbFSIFl3cmDyfh0269LK229rR112ArFvBiLt2DC+/9Lc4HbALV2JOIuqiL+LjA3HgwNvw8/9QLDcpY1DM9w/Bd4blyl8NfNOhh+d9wVqDkn6C7wBscF/ivk+ww6FyG03QlCXhy89ewzc7X4fd3oP8/BicPrkHIUGHcXD/hzh5fAeCg3bji69fx+3My2isycL0hhEpHDbWHo7eBXrbPzl3j69DLynleaT7hMPdg3V3J7oXSqF5dAM5vZdxq+ciUi1huKj3wyehv8Pfv/K/40TkLti2HuLevSic9P0YR/e/g/NBu5B58zzS085j7973cOjQh7h+3VdsdNm7skl3mk0Lttw8h6xEUMpGJt67tgFwtGJsVYsLZn/kWKKx4DQpdwxKFTYZ0xrhcNPSVfla81gqEqUVVncDxlcrUT+SjbutMbhWE4SrptO4aj6D63WBMliMA8wSmkOQ1BKKG20XkNR2HjGtZ/Bx6WeIbjoD/XCqkGbja9VYtplhE7KI+6jnvcv7p8yH15UiFSgN2nSzV4f3ikViCl9T16z6PTXynY3mrXDYG+GgKcAmZ1g0w2Yzg8SA01kHp5MViSaZ0rvmbMSKvR6LjLcbtdLgOrWqw/hKtcTlqTUDptb0mNjQoXdVi9KhdETrzuBGXSj6bZUi8XHaGuCyM9Yoa2H2osm+IuQFpWYkI0hksPq6bfHrTS6SGozR3F88E049+Eb1AtBcgAm4ci5y0x6TsYayNmIiV5cM1XG4GtE6noGUukAEavxwQBOMPaY07DAX4/WKGrxc2ooXi/rwQnEXfmwP6NVuP378E9aA0kC5bQjOb8p/OHHyxwNw+LX4cHssAbeDby/oFptALTXc7Y/XGxUd+PF6U4A2wfYP19uVFrxT1YV3q7vxXnUPXs3Q4t//EsfL15D5Hk2DbjwN5tlM3B+Nw8maI7jWECwTLcUOSFien95M/twm8295ncFRaRIZjBQbyKBPMLOMBmS1xeBWawy6FzRwUzNJizvxVKaWXLFayqZPAXj+3p9fPwLafw58e8p0ogckQ7JlQdUQwXco9CPZ0rDCxhqxHRNWyqulo55OLQIbAVwgK6T077RwJLtAOQYDJkvQZJrE7kjGBHfI36Y/tLrJVGMgrZA26ANLsC/skgL+a2hB03QBYvRncKH2KM7c/xY5LZEotMSipPcGygfSYBy+DbL565t1AoKYMLDCwKl2Y7NaTExXwM4mMGcXHGuteDhYgJHRcjic3Rgfr4HRlIHKqhuwdD3Aho1ghZ6hbVherxG9LRvcdCNZWKNmkaDWoRotnXRZ8Pgo8/Nw4pZjsxMbtnaMj1dicb4OLicdFTpl0M/sdA3WV1uxON+Exroc1OoyUFORhv7+u7CumaRBkt7pstgkxpHL0tDahmVXDRLqziGt8xIGVsskuFL3xkYe/hylAgJUBGAp+QplAC72KwjQapOmm43VRlit9NHtwtpqE8ZGtJiaqMbGGqeA9mBysgbDQxpMjldhfZ3gjlpQJd9QbBAZ/kbRkuuHsnCj+SJom8f34rWz4jFiiZmMK0GtNOvwfYmTTackKKsLJgz2FWJ+3ijvZ3S4HDWGZGjKYzE5XouRET2qKm+iqDgBuopEDHQ/wMYGr39aanaoQS6crriYjyuNwSjsScCyzaTGNhMI8P9Jk1kbFjfNyO+4hvSGi2idK1aT0ZwtsHuazpxOBq9O2J2tsNla4HKSYae1nAVbjjZMrumRVBcuDP+jNa1iaD1WafysBDqUePCzjm9okNV1UUALk1Anbff497dasYgW3OuKR5QuAJmt0aiduIv6mfvoFOYqHy0zeWibLYBlJh+W6Qcwjt1GWmsULugDMCUME/cMD7ggK02ph8ffm0zTY/Dt3dvkvHvBiEcyJOfEw54/Bsdqn5G9RuwjyULxGlIghdWVdVstaodScLzqIMaXNDLMhtMGRQMslRDPuZb9ahvz9fh/eNltAna1JLnk3sPzJEGY55f7DKuHnIJIVyeej0ZY7Was2o3S8L7iNGPFYcaKrRYrq3qMr1dheL0SY4vlYi1HT9/ouhCczz+ClVWdkl7IgDHvVF0P87nZCaejE3MztRh8WIKI8L3QaJJQUpKAoMBvsW/fmzh0+E1oyq/C4eI+xs+ojh3vBTLfsZz6+ldkvr3VMcopi/uSFPhmMrkNfLvc7WhtvY/TAV9j7543sLbegra2uzAbc5GflwDfY3/A4QNvw9/vC3z82SvIyriMdnMuJtZrkdoWiZsdUVIFVZW1p8dM6b/hRNAtC9bddZh16zHm1GJgowSWhXyYhjJwJccfb+x4DnuPfQSrcxhpKReQkhiKqLBj8Dv2BXyOfYETJ3bh8KHPEHBqF7768iUMD5E86fSMuCfRwWZqvpeng29O8IyoP4nc7itYpjsYgaznunZRNkULRjqlyDWoCIbJdR1o/xffEoFI01lcrQvBraaL0HT
GQduVgKreG9A/TIVpOAMN49lonr6Ltrn7aJ3LRcVEMnyq90M7kYJ5l16uYV7LjI+q+VT1hz1mhn8Qy3mPcuAd/dI5oVL1GLndrOoQzPJrZeOpZkx0QvYu6vbd7fJc5jFIbGcyouI046GQaBJbVWWemnQuOlCJ2w0TXLH3ZRxrxozDhPJHqThbfAgJlmgMrFdi3dGg/t9mu0yO5vRo9yb7lJgEdXnkWJRlMqFRS+SPlEDK4mBC7tFGuMEEUSX58pkpNWKPjTRHt4Hnxr3J82WCw23ChrMJK1vtmHKZYBpJw1XDSRzX+MGn6jK+1+fis2oDfl/WhBeK6dXdgOdKzXihjANyWmU9BttlT6ZQPm6a9FoDbnuUYThei0BNO17RtHkWn6vpk78T0P0EeHstArc/ikUgmyelifIJ+H5D2/7HwLuyE29W/hB0E4RvB96/ePBtGM+Abiwd+uEUGMdSkTcahzNNJxBUeQzr3EAlKHuD0tM3k38LyP5zP0s29gn4Zmb3ZLokO8az2mOQ3hKN7vkyyWCFifbcKGpzVXpUAnAptf4pQP2D1/4Vn0mYKE8zmByPDuhGbiPGHCquGmwaYWDfvoTF8viSE0yIxk6qCJ5k4HGpiRsjWViPvzn/vshE+Dob4TzfE0BGtkrJNtis4pXxCKARg38GuTJpNPy2Yie+vvMJomsDkNoYgSTzBVwxBOOy4SySWy+iaiQDC/MVwgC4nAq4OgkEXSzhkwntkIlXMrCA7x9q46CkwuVSA3VkqIg0ybGTugEDM0XijHCt8Twap+7DSX2rkyyp8nlnKZw6XBr8r201YV0qFl2wwQKbmxsSLQr5uRV7QcZmy92lLPM2++Cyd4PNctwAxZXBU40QpwKysWziQQsW7Tpcrw0S8P1wtVTAHjfPVTBhaZX3IKOV5TpgeVjZibk8x1YqI6ygoN0zCppNpIqNlkZRsi0E/uiUyoEwQLIxqymHPB+qREumVzUUGcbuIKU5CtWDGeAgIlZRuMlzM92Qzd0LVLzXB4+DRTTULlYJtrpl+hibZlzoBH1Z1+hSskm/czag9mHD1QOrrR1uB/8vEwElNSLTzEauxrn7uNxwFsX9N6QR8rG+Wjz1VdP0jNuAe63RSGkIE19qlsYXl0yob76Fzv58rK21YWG2Hk2NOaiuuIHu9mKsrnVhasqMekMaSnRXEX7fF7md8Ziw0o+d1RY6LDBoE6gyaPPebMLoWgkyLOFIbr2AWZsRLqlAKK/vNbSKtOwq9ZVVJxFXF4KE1kikNkbiZvNFJLREIKntItKaLyK15SLCjYE4UnIAV0znsCzSDYJaz77FqZpkNuW88HUeWw8AFhaSrDYZPW/yowK7gEf+HPdB7/Lox6VS4W4UByVWEDiwicDNtdkuPs55HTEIqfXHymqNCti8VjmwTPoUVPBmNU3ei4eZ5/9nUkKygdcNkzTVkNYijbCcFLxsZ5WsBjMbOkysVWNkpQIDC6Uid2NCYhrPRtXgTWgepaLUs0oepqBkIBml/cm41xWL25ZruNN5DXe7Y3G1JQz7C/fC/95eZffIa0qGuDDYE7yo90RgT8aRgMxua0R31wPMzBgxP1cHozEdt9IDUVQcjbV1ggASCKpCQuDN1Ttf9HjCJT/nX0N24o0PBN9FvYkIqT0Nq/SlKJcknm/2LIyMVCMp4QwO7H8ba+vU9FuwttKDhLhABJ/9GilJAYgMP4L3PnwBASf24OLZvRiYrUBqaxTS2qPRN1/uYZp/IrbIns9KIN2aauFg4zfqRLJHUsWJDvQ8Kkbk5aPYd/BdrK33oae7Er09VYiO9sNRnx0IPr8fgcH7sXvXH+Dr8xX273sHo6NVcLstQjAw0ZXzKQ5KTwffE2sViKo/hXzveHmx2vPa7zHBZFWO0zgp7WtC+1yB2A6HGk8jtO4s0rqvoX72PhZdRnCap8PFRfKB1nxcXqcv9ko04uFGEc5VHkHLTDbsXvcysusiD6QVIfdHXoPeWM7jqa5Hxd6rBI9Ywu3mddcg1oxOYe3Z1F0nCQNZYRmGRFtINs46W+EUtpzMuVeTT6D/w6WYfSVVIlHGJZbE/F2vHM/RLBbK1Pynt0fhQPFeZPdcx9CGHlbGD5JpElPZwMvFZKtLkmZvX4m3L4KPfI3NyG6RbBKAk/1mIkHmnBU0BcgJyh2gLIzSYO6vdZKEsIdjA52wrGuRM3ANIfrDOFIZhAPGm/jOVISPK2vwSmEjXijuxgvlfXhW04ZnNXXiYvI00C0M9zaLwD+r5dZQy/2jqZPbht/IAJw/YQ/obZj0Poq8ZBvb/WZFB9RSoFuAd6UC2wTcb3uB9w9Y7268r+vFq5la/M//8RfodlIzngH9ZIa4h+gHk5A7ch0hliCc0BwScPTXAd8qaHIzZPAWRhAtWEYdbnfE4GbzJXTNlngyc96QinVWmys3UJbEvDfs09ht/i5/7mmLwUjdlAyGiiFsR814Dq7UhUI3nCkB0htkBIB7ArToR/kZRG/MAK58tuVRAK3He9QDziXAcUN+/HMKbAsw97wuzwW4kxFT1QEvM0DJxJzViKyeK/hV7qvYXbkblUvZGHXp8XBRA44sz+mOQ4QxEGc1x1DQGAPLZJ40vNIflhIRWha6XASyzKx5fLh5ckKkslSkxy2bKMnochIcdcB8H5QEkIUzj9+RQSJXmy7IiPllZz0mbQYs2SmZqZGhJhxswmFNczYj5mws7ZkwsaLDjLUGs1Yd5uwGmUI3u8Gv2YBrwOSKHjO2BsxbazG/QclOtTQdLWzUYn69RgDJOq0h0YyFtQok1J5FesclsbUj47doNWJqvQZzqwbMrhkws64X3TUHr1DuM8cSo83k+b96ASKTG1pMi999BWatVZi3GTBvq8HsehVmbXrMWPWY2ijHNH9uQyvPp9a1mNmokj6FOZtBPtPMRg20fWlIMZxHpSUZS2zKs5GdrMPahhGzHPbiMGLZYcayox5LNhMWNmrAz7ZkM2PZ3oAlGz1+TViyGbBor8Wyq04kTFZrA+y2Jqys8+easOGk60gLNpyNWHfVw+qqg40/626AfuI2YhpDZMT5Cu00XY3SKEgXECeP3VaTdPQXdMcitjYImpGbmLbWoq4hHd/sfRURMYcxNd2MypIbCDr1Jb7f8xbCQ4+g2nAHWdlR8DnyPr7a9zJ2nnkTZe1pWN/i2PUG2KUhVjmuMBFVzDcHgBQjveMCklvDMGMzYlOa9CzSwMnKwPBaJYxjWdD0JqKgKx73ehOQZ4nD/a443O66jpyu6/J1bnc8ohtCcbT0INIaI0XLTrAsAJvgzwO+eX1S88vkVr7vAd4E5QK0GYgl+fVMuBQff5ZwmTjQr36b7MNj0WZFi+yTts020P2CFnP982VINJ/D9ZZQuMTNhMGWA7Q8fv/8WTf7IpgsdmBjqxEcYLZsr8U8r6uNakysVWF0pRJDS1o8WtSgYbEAhrl7qJjIRPFQKvL6E3G3JxZZnVdwsyUKyY0RuFgXjDBTIC4YzyDcHITIumBEmoIQXstGsUBEm88hzhSKWGMI4hrDcKUjEr56P+y48z
kuNZ3HhDj/8FgQpHnYOE5dla/VPkzfaRePxWYHnC5WgJiI016zC5ubXdiwsqGPiSR/X+nRt4PvnN64v5rsxAu+6bRV0JMg4Nsm+xn3XH4+Og51YHSkBknxZ/HNzlfhdHbDbu+FQX8XRw9/joxbYejqKIamLAVJqeGIjz2HN176O7QNFSCpMRJpbTHom9eqIV4eecmfI5r4P9mULjIDt8d5irFI+gcIenswNlKJxHhf7N/3O6yssNrUg6FhHQ4few87d/8G15NOIjLaFy88/9/wxuv/jFMnv8TsLCuD9HTmZ2IjOn2yzT/NfK+UI9xwHMX9cdhgQy5ZbxfvV0VQEJw6XHWYcNSiYiYbZ2tPwbf0EG52XEbfcjlsrGZKbxdjgUdi5nlk8u9llfnIat+4VYOQskNoGcuC3UUJSD1cdInyzBSQyo4kDQTIjNFKxiefyQPKec64By4t6WFnQzOlKJsdsNn4mg7r67weLVhdacT8nEkqlE5HO1xOz7VL4kGqcR6LQE8i7E22pRH7cfLQhC2vlEQIG1bem4E1Ghe0YXyzBtfNwWLHnNudiIdrOqyJNET5jQtOcXmHzPGzeBfft0p25TW5b3juSDz1AZs9UsHkZ92UfhsjXJsGuDbZJ1MFuGtkjsMmq5BbPRhcr0ZGVzQO5H+HA9XHsa8pDztqG/BaqR4vFtXiBQ09ujvxjKYbz5X34jlND14Q/XYbXipT6/Hwmx8x3N6GSXEs+dEQHDUIRw284dCb7YNvfsBwlyt7wB8A7W3+3OJWUkEtd8fj9WNpyWOWm2B723qXcpOqLrxXTeDd88sG37rRdGhHU1E9mgLj9C0Uz6UioicUfgXfYZWeyAR9AsIIxP7/X3RSYaanMltu3k/0aSsw405HDNKaItE5U+hp3lIAWnTiBNxi0K9KUD/9fnnjef+1yUtyAAAgAElEQVT+n3mkFECCCbXoBN9K21o7eQdX6kNQPZwhjXI/BPB/fJx449CTnAyxSDA46EL019Rgq2yXCYMC8SztN8qAIjWRTgU+/g0pOYv7hKe5la9xs6adnLtFABWbaP+Q8zYO3P8CD/pj0b9ajnlXI1Y327Gy2YzxFS0qO+NxuPgAgiqO48FwCpqX8jG2ocOilSVqE1bttdhw1coG4XDUiEZcBWEyWtQDcsOph8NpVHZzZKzRjvWtRhSP38KJhtM41xwM7XA6igdozZiGou5ElD9Mg/bhTWnqqxy6hfKHN1HSm4KSvhRkD8Qh+1E87o0k485QEnJZiRlPQ3rPNaR0XkbWQDyKepJQYIlHfkccCrsS5G/ya01/KpoWCtC3Xo2e9RJENgcjouM88vrjoe9JhqYnCXfJ9jVfwf32a8juvIYsyzUF4nricK8vEbkDSbjTm4Ds7lh5Pas7GjkDl5HdF43bPZdwty8G9wauIvdRHHL6r+F2Twyyui8iqzsS2b2XcKcvBtk9l5FlicHdvjjc60+Qv0lwGKMPQnDRMSTWnkfJYBru9MRBP5gBY186SoduokzuwwyUj2Sg5FEqHvQm4kFPAkrJYg5noGggDdqBFGj6k8SGq2I6C+WDaWidykXn5H00judAT4vLsRxYxnPRNZmHgbliDC+VSTNx53wB7liuIcJwBrl9CehaLELX9AP0TudjYLYQ3Qv56FguQOdGCfJ6riGk8ADSas9jetGA+3kR+D/+07/DgWMfYWS0EWdPfotjBz/A2TN7sPubd7HvwEf46uvfI/jcHuzc/xre3/UMyowpsDktwjLxmuZ9SWBLJszBIVObLXi0WIT09nCktUdiirITSSo7AA6NIrhldYce/tIHwkoDGWbFCFulDMx7QJWDWxeLZXQ7PfOVPEyx2EyG1XRb9bOqn8Rjf+bpD+F9p9hmJRcT72N3HZzeRU07HXuEfaJrC8GFx/mFjDY99hmE3R1wbbXDMn4f8RUnUdhzDY6tdtjcbViwmzHHBNRejylbPUY2TBhcN6J/TY/GtWLULuZBO5mFB4PJuN19HSmtUYitP4/LtWfFpzyo5iTO6E8gUOePs7oTMjjlkvEsYpvCkMp7o/c6NAMpMAxnomXqPvqWSjC8Wo6RlXKMLmkwsVohiaPIrVxNWHGZ0b5UiFsNkbhedgoPNziRuB1bNupGTar0vUX2kIyiB0gz0XYTxFEq592TSEwQKJIVpya3QyxMt0DffbUPb3c7EatBD/Ot9sw/3it/et/+H/wdiQ8tkrTnd8UhxHhGGgQFlHvAN3tY+vt0iAjzxffffQi3exC9PRp8+dnvEXL2KNpaNRh8qENDfS7auyuhq8rB6y/+LVoHH+BG00XcslzDwLJeSb5+Il4ytqljy+PL48pH+r9zsd+jG0MDpYi9fAhnTn4Mu70L3T0lmJtvwd28y/A58Qm+O/AG/Py/QNTFMwi7cAy7vn0Vo6OVci5YgaAEwy3AngnV05lvxoUo4wmUD92QOLLpagG1y3TycXAAl7tR7EETh2Jx6MFehFWegHnyLlZpzECdsUM10rOngOTN0xb7jEY2NAguP4r6kVvYkCF4ypqY95ZTMAABKY+Jeu9s9iab7X2Nz+32GphN6SgrvY7xcT3c7n7R6ff3aVBSfBUzMyasLLdAWxaH7MxzMBpSMT9nhIsN6QTSnsRbelkeV75U4v7kOtxG4HmSEbFNld9tB5xNj6V9k2sVuKYLxJn7+3CnJRpjG3S+aYDLqlPnlpPDCZqlosZZF7SC9Qz7o+MO+6FAy11Wfbux5ezGFkfCOyyAsxNwkZzoBDiPgg2VNh1gpVyMLHsfqJm/2x6KA7lfwNcUhSPNefiguh7PFrTjV/lNeKa0Bc9XdOCZ8jb8qrQNvy7txvPlj/CypvePLAJ/zG7LtEmPRSCtAn9sE6gcS5Tn9p8afuMF23zc7lIiDiV0Kdm23tJa/kjL7QXcwnR7ALcXbBOAe58L8K7uwQe6Hnyg78NrmRW/TOa7eigNtBusXshEzfJtZI9exyndYfhmfol5q15pYKVU/D+44f3EBvTkAld/XwIbs1rpBubN8YSVXkUdcruuCbvVPpX/J8A3gbEXsPP3fuo9/yvBtzSXeMA3ge5WC0zTdwV8Vw2l/wnw/eQ9e98/AQjlNAQSCkwQRHibosjwK/aer1En7v05pflUDVTyPSZDDh4XBXblkckRATkZ661mdCw+wGH9d9ib8ykiig7jtuEC6oayMLSmxYqLG1k7VrfM0G7kI6o+RIY+HMjbhYDyI4iuPY2U+hCU9ydgfqtWkgD+vMtthNNVA/cmJ5hyE1QlPrKG6wQj6ybZSCjvGHUbEN9zGa9lvI+3k9/FlbKTCNAcg3/JYQRofHCi9Ig8P6U5hpNlR+BfTGvIU6LBDNKdQHDNKXGtOKU9hjNVvgis9kNEw1kE11AOdRyBWj+E6E4hwnQW4bVBCNIeh3/RIRwvPAj/wkMI1vnh5ax38Jt7H+Bg2T6EV/hLQ9lZwylcMJxBTHMYoozBiKo9i0hDoNjThekCEK47Lc8jawJxuS4El81ncNF0EuGmEwioOIS9d7/EgbyvcanpLEKqjuOiIQAxdacRVXsCl
2pPIqaONlQB4PTGS7VnEFF9ChHVAQg3ncWZ2gD4Vx/HiUo/7C3eh9+nvoeTZcdwvtgXvpoj8NEeg2/5Ufhw2Iv2GPzKj8GHx6bSByer/OBTdhj+RQfgX3oQB8oP4Muib7Hj7mfwLz2AgLzvxcLylPYQ/LSH4FN+EEdK9sGn5ABOaY7Cn3+n2hdfFezCO7mf4avCXThWdkiGypyq8MHx6mP4qvBrvHP7A+yrOYjvyr/D4bxvkWkKw8RiFSZHqnDy5JdS4h4ZNOOU72e4mXIW7e0liIzww9tv/wofffIiJuZa8aA8Ad8dfRullYlwOPuwiV5hm9WkuQ5sbfV4mqNaBfjfbIuURIX9CwRxBH8SkBiI7A2AuMGoICmJJ4GJ7EcEfATpHI7SDONEDq7UBgv4JOBmrwFBFeU/BIp8ztfpKMPEVlhHWu5RCiNJb7s0oC5tNWJ20yjAX5XKvSXzJg+LT8tVM6y8H+yGbX0abJBtkma+ykfJiKnyQ+1UOix2LWpm7iG3Kw4ZLdFIa4pCvPkCoqpOI6TEBwEPDsD/wT6cyN+PgKJDOKvxQbguQKZBJrZHIa33KjIG4kDvcOPUHXSuFGPIWY2ZLZMMbllDA6xoFL0spQDs+6CsSXSrHCLmrIPNYZYhYNSmKza0HbObZty1XEdk6XFU9qVhGW1YlAZs1Yytqo081p5GrscgnKV+NoORUWSVjIPHlJUokyqeG6eLewT3GZIJanXPFjyZcPlXBd+tmF3T44ElVoa6Uf4g14Jo1Fvgcnegu6sM0Rf9cdz3K6yudCAi/Hv83X/+X7Dr63eQGB+I1OQAnDz5Cd58///Fjk9fxKHdr6F/phzJzZFIbbvkkZ0wEXl6/OG16fAkdAp4K+tT9XudMtSnt7sIF4L24KTvZ3A5hnAh4hBS08ORkhaG4/478d67z2D3rndQVJiGe/eu4e23/iuamrNgt1MeSImIl/Hmufwp8F2OyBo/FPZex6zTIGQTe3o2ttoxZjegbvIObtZdwNd3dyKn5TKWl6qwJYlypwBtJsqUGUriIHJJT1P74+esOqnFJHB0U4fg6uMwjNzCik0vElMZ3rVZBweY7Hq022Inq7TcjDs8PrxnbTYDpqaK8Obr/4CXXvgb1OhuA1uTsFgq4OezAy+/+Leo1KYj9+4VvPHq/4n33v4HfLvzBRQVRMFNECuOPJSiUIetqh9CZEnlgcm72je8MVw9sg+L8bPRYy1IoN6mKmZWxsAm9NvKkWoOwdUSX9T2p8rgIDLWlL84nDWwu42wbXIOAaVGRjg2/z/m3js46nPtEty/9u+tmZ3aqp2amp2a/e7MfGG+ne+79zoRbbDB2djGYAw2NibnLBBBgBCIjIQAZUSUkFDuboVu5dxJrVZGEooooIBid6uls3WetxsJX9sws3dmlqq3uml1/P3e3/Oe57znOU+xZ7D5XwHGJvMw4srByKQeI1N6jE7lyOMukl4s3GSXZtrDjjDOqZjqnK5G83Am7hqPYMv9T3DCdh57rTqszLRiToIJb2tsWFLQinf1zXgj2Yb52nq8l1WP+To6mtggum3db3eeVEz3TLfJ2cDb682tLAJfBt+zAbf3PiUls4G29/6ntAh8MTxSEspJsqteYre9TPcXeju+1Fe/NL4y1ODrnNqZkfv/Y/Bd8OQWDAOxSBmMwc3aMziStQ2+2s2ILT0jGkWyvcJcvSKQvCrQvO7fJVjLYsmJT+ZE3XLij6IcKfU3EFUeCHNHgrJCIhgVnbgHSMv2DS/QGfb8lez277HfkgiwEK9CrLzE8me6AqU9ZL5PQt8SLQFUPoPP9Tyf9mPT0taVbhcKrJLRd02VyiCLxkFGjY9NSmUzP4OyDgIFtQPgXbzYqIVDFkVujQtY58LGrJwBjcy30oc1PU/HxbKDuGk9hYTOMATkHsKB5K2y4Je1PZQtQW4hTtBxYkwPY18i0lujcP/xNdywBeB41nb463fB5EyFk+2vCXIkCeJiy66MRYrJlMep1bZhxFks4IPb+gQDTWNZCLdfwtr0DdB23kH9cBaeTOTiiSNHWLaG4Qw0jrA5UBYej+rQ6cxH93MDep4bRHLBTpDdQ9noHsrCwHiuvH/PcBY6HHloGdOjZdyANkce2p15cr9+SAdbTzKM7XHQ1V4XEOqTuxdpTWF48jwDreN6tDhz0TaRg6fOAgxNFGKUhWkTRRgYzkXvoB59gwY8G8qR/49MFGNsPF8WhX5nPqwd8biXfxqxBWfR4yxAZ38mhofzMDpeiOGxfAyP5WHUUYARRwEGhw1yf+C5HgPPs9E1kYMnzny0OPNgHkzDLftVHNMfgKk/BU+HslE/loXakUzUDGpR+1yLx+PZeOLIRfOEHk3j2Wgaz0TjqA7NQ1o0jxiga70H36y9uFR2AjWOLDSNaMVvveZ5MqqHU1A9lIzagWQ09CXj8bNU1HQ+QnVvMoxd8SjvToCpMx62toeoeZqI2v4UGAcTEW0NxInUbUhtvA5Tbyxsg0lod+VjcsqOgX4zTp/YhKPHN6KlqQBHfVYhIsIHeYUxOHp8Pb75Zi5Wr12E6ie5CIo4gp17v0Jx0X2MjVRj4LkR4+LcUo3+ngL09BRg4FmRFO+1jxTgblWQ6LiHhguFPXV7dpaU7p4yLXqel2FSusxWgK4mamubW9eKsWYtSEprFE7lHUb9QLoAbUlQKXvhrpLMXSa1JlkI2SxIzWfGDs5tq/iZN4gd101czT+OgNJjOF16FP4lR3CqyBcnCw/Dr8AHx/MP4ljeARzP3Y/TmTsQVHgYF0qP4XL5CdwoPYXAYl+s0qzBnNuLsC9/C07nH4J/zkEEGA7iarEfIk2BuG0+jwfWS0irv4mcthg09KSgqS8Nrf1adA1m4tmwQeYUrV7HJ9ll1yQOSmzEREmYm0y7owLTExWYdrAuR7kHiWZV6nTYvVLVjihJDesEWOtgQh9K0D9dAstwujDrkcVnwcY/3C1gl2KRCwgQncX2yTHyEiHUsBbD7S7EJO3oKOkh+KBUjvI5+T+PLweJkJfBd0JTmIAataa8DkHy+wD2ddeVmedZ0Duci0dV13C6+JjUbDAhIdCS4wcbBgfLYbOliMXg2HgFcnKvIejqbly8sBN37/rBZLyN3Nyb2HvoG/gd/xHledHoGs9HmPE0om0X0TCY7XHl+P3vznOjpIM8Dkw6Cd7ogMHjxr9VoqlZi/AQH5wL2ILp6TaERh7Dp1/+CUs/+kcc9f0Rd6ICcOroBsyZ83/jk0//EVeubJfCbEoIlVMZ5wY7InNNfDX4Pu+RnQy4aI1nx+CUEeZnqbhXeRX+mj048WgLmjqT8Zz2eVxD3Mq3XApspVeBcvcYQ4VIESlHnBksDFaDDbo6UCy9K4o672NUrA1V8zNJ7vhdPXUDvFa57jFpFg27MM4WsFHQs2eZOHt2O776ag5ycu7DMdGCtLRwfPbZG1iy5P9BbOwV+Pj8gLOntyAj/Qb27lwmMp6xERZqWjFG0kiwjXd+e4kz/p/nQTkUeddq/k6p4eIODxN/d7n0QGENj6ojMqJ/uhz6rjsIzD2A
c/lHoeuJR0bnI0SX30RQcTACCi7jZAHjygGcLvHBmbLDMnj/ZOF+HMvdDV/9DuxIY/fcHTioP4CA0hO4Yb+AhJZQFD+LQ4srE8+nivF82o4+2GGkDXPefmxJ/BF+9uvYZkrBF1mFWJxai3czqqUN/NupFVioq8WH+lYs1jTifW0NlmZXYZG2Au9pjFisrXzR/ObXmuAsyajCUt3M+KUPt/r/y90mXwBuT8v3F63fZ7PcLwD3X4JvL9D23nrZbZGWGKpBsM1BEO697wXfy3Pr8DXB971s/Ms//C347395kH8ROV0RyH4aAX3HLWlok98eCX17GDI7YpDdEfk/bJh64pDcFYVTJl/s0P6MU4X7oOu9JY1uqPemM4UwSf+DwLcsil4AS7sdBiXZeuLFXI70xlBElZ9BRVuc0m8K+FYuCgKAPZY7qtJZXUDy+K8CbC8b4A1Mv3IrNj9/Cb7L+uIRRPDdHCVZuoBuL/j2JgwvkggmEIqRV9ouBbBVokFpzyy9lyezl8SD9+U9Zp4jgYIadClcJBDhNjjlKGRwlB67cywLEXZ/3Cj0RcN4BlqGc1Dw+A5u6U8ivSIIvRP5AkqmJ5VV0bi7AsPUEk/koG1UA231FdzIOQjreLL4TCtrJxZiqiSCgEiYzEnlTsKF10H3FTKNLlWMMuYuR01fCqKM57BXvwc9/Xo4+TwW4YhtFW/LBUCxCY/qMsmqbRamUXtpUp3euGXnLse4gyy8clGgFRotEvk8BjwOOpjwvcdFM12IGPtlBFf4w9gZJ69nRTi3QaVrncsoPtoEJJQY0IKRw+s0oywZVcCnGw2L/2zP0nHfeAkPjUEYRhXGpOKdAVc5l3AhIhjkTgU9zqUJC+0cKS/ie4v9mxVdo3lIrrmJK/l+GHAXi4/2OFkcAiwXNZUmuCaNnt+uPJldkyphozZ7zG2EsTcZ18sDpFX3IKjzLsDEJFmTQgxPFWF4qgRjbgK3QrjcZXC4SsT6kq421BaPufgYF81SsFttt7sUGe1RCDYcgrk7DsPuAgxMFaKfieBkNZ49KcTBXd/i7Jkd6GwrxsljqxFzxwdFFTHwO/Uzli9/E9//uAi2Fj0CLu/Cls2fIjzYF4H+W3E1eA+e9uWiujIePns+x7Iv/gF+x1ahvjYZrUN58Hu0C1/6zkd6ir/oNKnZ5HmiR7xDtq9NnjnC82GR5jSc+8LieoqTh6fLkdYShYB8X3S58pVMjlu5tP1k0ky5gTiDeHaaREbnqd/gjtGU2r2xDaUj0nxe2Of0xptIrb8uvsSJ1UHgSK69hrSGG6r77ZNwJDVfRXztJdyo9Eeo9Qwe2C8isu48dpTvwNKEj7Bb+wOCa8/hsvkUrhlPIqExBBX98WgZy0DPuAHPHfkYduaj312KQXe5gOBxNu2ReU2w4S0A5RyfkadJLYmneJPzk3NPimf5fGHtyMZRAmARGdiTcQPy2u7hnuUyIm2XRPPN9uSHcg8gtT1GwJEwiZLcVyrQJjZulC0QlCogTtaNmlQlRVFyFAVcPfUs0za42UCJ4Ih2aUKeKPBd3ZMszPej5vD/6eC75wX4Pq5cWTzHjHGUIIu65vEJIyYmGIdKMfg8C719ZFnz0PtMjzFpOlSBto48tLblYnzYjP6pYkRXnkVM1QU0DOhUsfwr1kyVGFmVLEoakymLORJHSnphwuhECXqeZqHraTYmUY2uZ3kotd5BcVkUmpo04hLVWKdBYtJVZOvD0N1doIotWeRHrTZ125Q2UBr0SvCdhQuFB6FrCUfvZDFaB/XQWG7iovYQzhgO43ZjKMoGMsWCmDuzUsvEBFgsLukgRdvaCnHiUkCZc+XXByWYHSjCYd1ulHbGYtxJKUgJ2A+CNnuSjHD99yZ30nWUaxCvC17P3CmzyK6LVncT362ej7y8e6isTEfwtQPYvOVTbNr8CRIeXcau3V/hZogvGmqy4HtwFW5c2wvHOJMSWhAqCRr15qq4k5iApB/PgRokzWRHnuQZd6bpRMXC5HEjHBMl6BrLgfVZCvJaYpBhC0FkuR+O5+3CqvhV+OTBKqzS7cVXyT745OFhLE70wzvJp/BO4mW8kxCMt+OD8BbHw6ty+86ja5iXfAMLUm7izaQw/DkpDG+lhODt5HOYE38E7z3Yia8TtmFH9j6cKj6GW7Yg3Cw5icOaddiRsRE7Sy7jpyIdPs8yiwf3nAwT5mcZ8V62BYuyrFiss+B9jQlLdBZ8qKvEUp0VH2jNeF9r/V3gLZKS1wDfv9b45uNZQNvLcs++nWG7Z8C3+HTP0nLPlpV4AfiyWeDbC7x5S/BN4C2DspO/NvjWt0Uguz1CQHpWZxQyOiOh64iQwfvZXVHQdYYh42kEsnqikfU0Elnt4eDrctujoO+8hdy+ewivuQRfwz4cztiJSEsgjP1J6Oa2CJs/SLvpcrnI6EDBSn1OfjIE9P31sqFqi5GPk5nl4yqLZLYqTC0XAnkdL5rfZwMUuOQiwouW9nR0O1FykgkYRdcYaQxEcUcs6CM9Y7ulHAwUg6EYF/VZXtaWzAy/Hy9sfg91kXlBLr+ny6W6CgrglcILbptxYWH2q1hmZb1oFauzK2UnkN0SAbYpV+/J9509Zn4rj9mUsBHeY+B9Hn/bzDET/aEcI6Vff/F+olf0aNGkSIQ2e0oXzoVTXufRLQ5NFCHzcRgO5+xF3VgmJtwWPJnIR6j1IsIrL6DDmSfnlACWQZm6fuVqUoFRdzHyH0chNP846iYyPD6rXExZbMbv6ilA5f+lcI36TyZoleI24mJ1N/1rp20YcJbB3PkI13JO4FKBH/rG8jFFTSm3+/kaAdmcIyYpGFEe0mrB49+np2swPVUnCYCLTB53BNwsPmlQLeqnaadVLfo94In6rizwmTIi/UkkQtjhsu2BsHLiIENmZopOLgThHp9aj0sNgTybI5DF58IgxXmy+0AJkBU1gymIrg5ArP2CVK27xUObW8NsOa28VhUjwgI0siNkZdWukVwvYt9mElY/vTECR3P2Y8RJTT2BJOcYj68aci5FE6iSLm8CRjBJK8Pq3nSElp/Fo8ZwcW6hBzRfQ+9oFsTKosjfxevPk8yJHZdcj0zQSjHNpkcu6qntGJ+ywdAYhYv5PjA9SxAwRgBMnScdZ7pbDVi/9l2cD9yGztYiHPVZgwD/jYiMOiXe4ptWL8V3KxbgTsY1rPP5Gj+t+wDnjm/Gzz98CJ9D3+FxYyru3DmK79bMwe27Afh25VykpoSgrDoVPx/9Dn8379/g0rmNcDn4+yix4rDDNV0L93QD3JSu0J6LDYwc3CJ+DOdUndgwUh/eO5GDhw1XcKnMFxOSfDMu8NpTzWh4Ljh3eJxJJBBkKeBDnTZ3jfhbzaAuPtx8BjdMp/FsrBh9o4XoHc1D74gBvaN69I0aRIo3MFGIAUcRekdzpYj3yXM9mp9no304B02jBpGZRBkDcVV3EIFZB3Cp8BiCTKcRWOCLkKITKGq7L8mFxCpP4qqON5l5JUdjvJHvzbnlKBWZB6VfBAliw0ZdqNiWUmJSJrZkTDBVbFaAh7uGw5OFKO6
6A5/UjdidtwMXzP64b72MuJLzyGmIRrsjV3okkL0m0GEiyGtPgRu1ve+N2RLLeexICHhivMQniT/qeAsYF2ZSxWDFcFtQ05sq4DuhMdQTs3mOPNe+59x4WfK/vFVkw18+zvOo/sbzqiR6v30rScS0FT2jemk9fjXPF+MTLPBjcy1VaCrNo+T78Do2YwSlUtOirO/YjIxSJTK0VkxP2jBF7e20HYOokIZQd43nUU8zAGmpzmOijpfEaInraj3ysrgyDyUOM4YTJPM1/E1kvhWxIDGC8ZI7HlNmjLvKMeYsg4ugeqoSThaiDxVieJQF8LYXtRH8ziq+q+ZPdBt6UR/ksQ0UksBTU/F0NBfXTL44Y/XFHftFXM09hvO5RxFruwbz00R0jnIHskI15JJzx/fnuuYhY6SuiaQK44uKSWq9YBxRTimynso8NWEcZdiXsR3ZbdFSYySxi0kbgbAkemp+eHen5LgwznmTZ0nyTNDpwrF58+coKUlETEwgNm36Atu3L8eqVe+JD/2G9Z/hbvgJPLZrcPTQV4iK2C0Jg7wvu3A67JiebMDktF1ip2AEdxGmXQWYnmSSSaBfg0k0YGKyEH1jOtlpzGiKRFjlJRwtOYkNhkNYk7Ef32n24dPkK/ggOQILk6OwIOUu3tMmYUFqGuYm6fB2UjbeTMnBmyl5Mt5IzoV38LG3UvPxdloB3kkvxDvaEryjK8Y72ny8na7H26lavJ2UiHcSYjEv4Q4WxMdgycMwfJIQgi804ViRl4BvCvT4MKMIi9OteE/LpjlsiGPF4kzV7GZxhgWLtSZ8kGHBEtr/ZVhnuZNQxz0zvI1vlmSy7XsVWDT5YUYlPszkeLkBjjTD8TTAmd30hvdVsSQ13GqIM4lHTqJcSn4hKzHYQUnJMn0VvjTYfzGq8VXO7GHH1znV+Dq3Wt167i/Pq8E3+bVYUVCHDx9k4F/84e//esy3vl2x41nsTEng3RkBbUe4gG/+P7szClmd4cjqikR2dzQMbKLTFimuJoVP7yD1aTQCTcdxIHMX/HMOIb7mhlQsM/iK1pHgl3ZX3Lb1sB/UNEqlOwO1bE0yyLMAQkk8OGkVuPUAJYJWuQgJ3pR9lzeQ/9atBFMCX8lwVfMItQgpu7qsxihEms6hsDNWmsMo8M3FVQFlubg97iSy3cbAJoCYGmkv+GYQo88124wzSBBU8ruqIp03ID0AACAASURBVCHFIjNgWMRqkWBOFWox4BJ82kTicKX8BLJawj3aNBU0f/N3CdjkMeCxI4PEtug8drxVPqW/9dqXHudvk+1nbserglguHDxGXpZ1zFEMc1scdmbsgHFMg353GfK7H4r2mO4IQ+4SkdA4uYBKkxoCEgYYM4achdDVhCI49zieuAtegG0F7gjmuEgoAK4AP5luAhh2sjRjimw6xxRlATZx6zA3x2J39m7csV7G06EcZdtH/a2w1mrRFgcKSYwILkoxNJgHY9k9ZOvC8KQpB1PuagwPG5CVGYqK8gQMDOSLP7heH4bY2NPI0d9D/0CheL/zd2R33hHmu6jlnkou2GBE2qB7C1o4H6iX4+8mC8LGCxZMutT2ORct5dFOWU017M+SEWLxxe3K03L+2XlSFjYmolyMJ3kevQsoFwjPnPTIGpTjjVHcSnRPbuFg3j6RplALyPPPeaCSQ+985cLjTcp4jFTiw8Yc9m62CD6N+IZQxXh6EmJJjphUyDazWqQoeSJQejHfpamDsqciI+ZCLVxuOwy1kQjMPwjrsyTxpWb1Phtc8Lg8683D2dNrERV+GD1PyxFx8xj27vwG27Z+hUMH1iI+4iyO+67FmkPL8E+f/Qf4n92CPO0t+B3diCPHfkS1LR5Xgrbjp41L0N1nxbq1H+HBgyuIeXgF7y2fi7/5p/9TttXdLi5+tPaywuG0o9KWKoyeNuM6Wtr0GBurQmnpA9y5fxZFFQ/xfJhFaVXoHs9GbOMFBJuOCNOmwKH3evSCH+9xJbBhwaRnR496VLdij+29SYgwn8adusviNKEaevA4eBN47zlVoEYKpcQWk1vQfA9lDzc8aULroAHG1ocwNNxGbts9FPUkSrJ0JusgrhecRO2gTuwyCbgItPgZUpDNxE8SMsYKxSpS4+l2s+iZ580Ml5NFnmTtWNBKAMpYTYBuF7tDJTlgYk3JSiGsfbE4nLYR2w1bEMguq8YLMLfEiWsQSRbuRHEuu52lcDCWvA5B8hrP4e/ib2Fsqu1PR7AlEPce35BGX+KbLMmkF1i/zq06VgJYXyS2MyCen/N7g6TAxLQVzWOZuN9wESHFRzHqLJPeApI4ewpKxRpPdPMs5lWSHxbHkzwBberc3OlTcVisVGFGN0oRa70qrcarelSHVbLDE9PlGJksxjAL0ydLMDZVJo/xce78CZBnTwQB4F7wreYDjx3dWOhaxPMjtRDSDVc1omNyL3If6u5lx03tmMkxEGCqSC8hVgiWOY+kxwFvVRExDQ6Y8A2SKGlPgG/mz/gu/VtcLT0CklzJLdFoGM2Rnh/EAUx2pyXWU7ah1gN+d65FkuALcFagmd2TKXcRcsVDxDFGKQkMr6UK7MnahvQnYRgcZ7EkiQMPkUCQ/TpzbNqEuLiL2LZtGYqLE3Dv3kVs2fIlVq9+H2+/9e+xY/sKrFz5LmLvBKCs6B727VuGqFuH4ZqqUnaxTNJptetphCW4hliFhZVO7r7SzagCDcOpMDSFIrnmDKLNfggoOoKt2UewXHsSH6RfxIK0cMxNeYC5yY/wRpIWb6YYMEdTiPm6cizQWTBPU4m5aXbMS6vD/PQGzNPY5TE+zjFfSx32y2Mh9dg6GxZorTIW6iqxMIPPteCd1DK8kViIPz7UYE6yBh9kF+KjnAos1pVhQWopFmks+EBnlxbwi3U2LM6Y6Tr512p882tNcGgH+FvOJN5Cyc/0BNu2F+MLQxV+OZYZ7PjqV4YAbQJsGQTeVViea8c3udVyuzzHDg7+f0V+LVYW1uKjOC3+xR/+4a8IvjvJXkcp5rsjAgThlKsYOqJk5LRHwdAZibzOKBS0RaKgNRL5Hbekk+X95mvwLz2MHZlbEFx0Anmt99E5wQIAgid1kYq/tIAqLgAErXTg4CRV97nAEWgr4OgFDgpUKrCrFj0BJJ4topdA5G9cWMJuELh4WBGyGUpWobRiBN8RxkAUdHjAt7AuL4NvBh8CMLLtbAygFmMGNO8CzC2ymQIOlTAQ6HAotpG/U5IOTxLA4CZskAd8V/WlQoHvsNcC317wKsDIk9VTPy2giEUUnsdedYwYkAhQnJMMngxQSqZBZwgJrNy2nizFk34t9mbuQmTjNaQ/jsC1opMIyT8BY/N9Sa7YEVR2DiaoreNv4/abFYOOIqRX38DVvBNo4/GQRMYDYGQXQAVaKeYRyzGVnPFYydxxk0Hmol+G8Sl6eZejdSgDe/R78P2jH5HSECE2e9Q6cqucrLJaTNVneM9B0+NEnPFfj/U/foQsbTicjipYLRFYuWIBAvy3obExFRm6YBw+9C127PgUu3d8i/yCMLhYSDNthaH7voDv/KbbSo
vu4i4BdZA2DI+UwlYVh87OHDgcNjzrK4TF8gAFBREwm+4BqMfTrhyUlsWgwhiH3p4S1PRrcKPyBGKsAapJgpsd1cjee7dVCXB5TXgTUc5JNchW8dxwXvL4ZrbFwLfIR4FvJoECvtXcVNeVeh8vGFeLF+URCpjaezzguz5UWC4W2hIceEGnXKfSeEKdqxfXqSyEvEaVZID2d07UwTFpg742QjSK1c9SxL2DkiQCQn730ZFiWM33YK64i9FRO+pqdLh32x9BQXuh0V5HT1sh8guisCd4PT7Y9A4y8qLxrNOEkGvHcOL0Rtgscbh0ZTu27Pwco2ON2LJhGYKCjmDXkR/xj0v+E/689O9xMXAnJkbpq87jZEdtnQbnzu3Atu1f4uChVYiIPorikvvw8VmF7Xu+wd5Dq2E2J8ExZkHXqAGxDZcRYvH7/wa+e5IQYfLHnforHvDNojmCb14f3tih4oA8Ln78TERVbFQFmjxHTGqs0pVzmP7y0+WglWfPZAnuVFzEkZQdiKm8AvuATupYhJGVnRHOJ2+Cq8gK0cBzd8WtrNim2CDEaUdbaw7M5ji0trKJFJt2mdDYoEV9Qxqes7spGXNXGVyTJehz5kLbeBMRVRcQY7uMqzlHkVITipbRHJEI0N9egfpS+e6vA3peFafU3xVJw3lfN6jDBesZ+Bcfhak1DuVt8SjvSIC5OxHWniRYehJh6U4UpxZTVwKMnfGokPFQahVMTx/J38jA8jV0dDE9VfULxo6HqGiPQ3l77O+PtgSUdTxCWks4zhoP41L5UYwwprnZw4BJAhuDlYkjzyglcQJuLeJkwetgeKocLUPZKOp4CO2T28htuYOCxhhkN99CfGsUpFg85yCiKy+Kx3pm8y1oH0citT4UKXU3kVoXKh772qZI6JqjxPkpuyESmfURyH4cBUPTLWQ3RSO76Rb0zTHIojNUY5T8NtqQitSKiTbXYU+dk8gwaF/HTo0kYDxzUkAkCTRpwOLZhWNiK7tKlNdUwDldhr7RHFg7E5FcGylN436KXYZvk1ci+0kkHj/PxFMmDGxAM12p5F8SE0imqQJj744246AC34yHCie4nMWegmbPLjNlbIx1IithfKnAPv0OaFsjMDiWJ69XTmcq+Xidecjfee+eP7Zv/xQlJQ9QXa3Bo0dX4Oe3DkuW/D2OHPkR+/Z9i8NHf0bA2V1Yv+kT3IsLhHOqFs4pK5zyXWizqlfXnpux2ophlxFNz/Uo7IhDXO0NnDMexf7crVivPYZvU/zxSdJZvJsUjLmp9zBXl4n5GUbM09RibnID5qQZMVdTgrmaMszVVGBOuhFz0kyYk2bBvHQ75mtqME9T9RL4/jUAvlBbhYXaSsxPt7wY9OKm5/ZCnUXavb+nK8P7maX4IKsci3XlWJhWhnfTKkRC8kGGHYs1NqXlnt34JvO/rfHNh7/oOukF37M7T/5e10kv+P78FeB7WY4dX+a8HvgWoJ1TLWBbAPgs8L3yvxf4ptSEgwy3nqC7PQp5HdEo6LiF/I5o5LURiKvGOWUtkchviUByeyhCmi5gb8Fu7E3fjPv2y6h5loZhsnZk/1y0C+LWlNrOkqIl0UQq6YcCBoqxJWgkU0x2jgU4XvCogIIHrAn7x8WLnpoE7VzAuHj99ngBvj2ZL4PFDPg2IrMhEuEVZ5Hf/kCKNxQLq4CCl4kV8M2tOwlGBBteYMLncduai4Jna1Y0jJSNMKjxO6rn06ifAN27sAr4JkvlAd81/em4WnESmS2hmBSbqN/+Ter3KvbR5aKfq1rApWW3p6iEQIvH9PeODf/GgMQuoOJ8Imy6UToocvGV4Cxg1oT+iQIcLD6EtanrxRLqVkkgSlruYWCsQLSgZGXEx1vcJbxstkXAYVrNTQRk+iCvOxH2/nSYuxNg7Hoobb65CJa1P0BRyx2Utt2Xv5V3xaK04x7KOh7I80q7YpHXfhuG1mjZUkxquA4fwz5sztyGk7mHUNh6H8POGX9wh0iLCAAIODhHLGh9osUx37X4/KO3kfLoGhzjVSgtuYG33/wD9u5eDbv9EUKu7cXOHV8i5NpBrF3zMa5f3w2Xq1YWp9ye+wgxBiD38S04qUOV967CxHgFjJZo7N//DR49uoz2tiLk5MRI4RQfO+H3ExobcnDvbgB8fFbDx2ctdJpQWDpSEdN4GXerzkv74mm33SOl8s4vznnOH84lNdfVXKKvKwf/bsWAoxhkvn2LDmFkokCA9wyw8143nK/qPpNHzhP+n0kFNdF1vVrcLDuDh7U3lDc2k0zOH9niVkBQzW++jhIxulEwCVVzVMA4i4Anqa9+DMdkNfS1kThr2Ifq3kQp6Jt2cT7yWmcCy6S0Co4J7mrQUq4BgwNGdPcWYnTCDLfLhmFHBfRdCTgVvx91HQY86zHh0pUD8Du9EZWWh7hwaRt27v0aw8P12PjTp9iy5Wt8uOId/GHR/4V//ug/Yd/OFehsz4PDScBfi/iEC1i//lP4Hd+CE6c246f1S+B/ehNWrpiHB/HB+PSrd3D39hn0PS1E50geHtRdRYj55GuDb6+MTJEMSu5i70lEuNEfdxsIvms8MibGL8YNzktv/FIxTR6Xc67Ot1cGJl74Uquh6gEkRjGpmLKirDUOlwuO46ThIG6UB6DGka12EaXQlIw6mUXvNj7jFBMrzgWTJEWTPNbPrYiLu4Q9e1YgXRME93QNBoeL4XfkZ1y8tBO1DelwCUgvh9vFpNyM/sliPB41oGpIh9CKs2I5mdV2R1wbKL2ZdpUL2UCwJnH0d2L0q2LUzN8J0vjdzagdyoC/8SQ2PPoZdwvPILzgFMKLT+NGxRmxS6RlIsc1jooABFcEvHTLx351eF9TdhrBpf6/O24UBSC85CwuFvliR+Z6BFYQfPOaqla7DQTfJCXYZt5VKjsRYns5xsS3Ej3uEiTUhyKg4Ij0SgguPIGb+ScQXOCHi0Z/cSY6lnMQl4tP4HqJP4KLTuJaMccphJR4RvEp+T8fCy45hSulp3Cp+AQulZ7E5dKTOF/sh/MlarBxVIDBBzcKT6FmSCO70DxPQkyJLEmBXDc787L+gbJQ7qzKLgpZcF7DJLKU5pvrAxN4Fi/3jelR2R2PxOprCCr2x9miUwgoOYl9WRuwq2A7noxnqzoV7k4wdoqc0CiuOlzLmYxwHeW8FKAvSSPjloo39IPn3OXz1PpLAM54xDjErqLEFGXYb9gp9SZDE/mzmG9PgvE6eGHahPz8MIRc34n6+jRMTtZKIzCj8QGCgnbBYAhHdnYoNm1bjQ2bV+LEqU0oLbsrzZR4vdLthiTEqPxOu8h5mvpTkN1yAzcqT+JA/kGsSNmDxXGH8HlWMJakafBuah7mpReIJOTNjHK8oTPizXQz3k61Ym5yJeZrrJgvbLcZc9MJutXg/XkasuAc1pfA9/xfYb/f1dnxrvhv2wSEL9TO7jhpwXseKQm7R76vs2KR1iKM9/u6StB5hDrtRRorFmtnS0tmGuC81Nr9NRrffJRlg3SbZMdJT9fJmeY3bIJThU/Z+OYlSYlddZ38hX57maEKfzkU6
Cbw5vjaMGt42O7lOdV4eSiWm8DbC76F9c6rAcH3t4W1+DhOh3/512S+tZ0RyOxS7LehPRI5rRHIa41EQVsU8ujb3RqB/LYIFLQrdjz2SQjOWo9ji24jDmftRP7T+xiSCmPFsEj3OQJtAd4KiINAnBcvta9udqCjRlXZIRFYOEWzSnBApoVBlhej0s/xwvOCbV6QBBF8bCYw/zpYfRl8K99dAd9SzGEUJiCs/AzyqJvkoigSCM8WFe97GEZ5jYd5ZIAQ+YuAE2rXPVm7fB9vwLAKQyq/g2zyZCncU8VS7EkgQ7AqC6OA70rUDWkFfGc03cQkO/i9crGixrgYLvHOVo1GxBOX7XE97DqP6aveh8eHizGZauroppx0VVHJEtkIWaSnjGgdy8HuskNYk7JOutv1jRdIQaQ4RHicCbj1qwKzeh3lLEPjhdDWhIoFWqDhMALK/OBX6Av6DNOq73jBIRzJOQBfw3557DRdIfIPwi97t4yTufvgl7sXx/S7cVS/C776XTiq342Ex2HIf5YgnsVBhSdg7HokzDidUYalqJZsB7fSOY9s4nGaFH8J+3etQm72XUxPN2Fiokw0xlcuHkJNTSounN8mRYDV9mwcP7IdZ89uEC9cLkSGp3dxw3wW+S23hYnnuWOhTH9fDhJTTuLNN/4dzgTsRlFBAq5c9sFPaz/ESb/NWLF8AQLP7saqb9/DoYPr8NPazxFwageSim4guu4y7ldfFHmNtG6X+eRNRr1+yGqRYVGaSqboIMH5yZ2FKjwbK0RqYzgOFx6SNuQz7DZlAjyvHLw2eDx43RAUcdHiAqhcLer7MgR837UFKU9rYb24iKjFmPIaSg+8u1TCgrpNmBgvhcsjq2GiPTleCcdUM1zuBuTU38LZnP2o6okXRw3R5Xv00+o7qjnndPIzeI0TsFDmYIRrohjjUxXI6Y9HYMZhNPXloa+nDIEXd+KY/zrYKh/hytWd2Lj5I9RWZ2H9D0uxceOnWLl5Cf5myb/F//a3/yu++vzPqKlKxjTqMOmuxp07p7Fn92rcjrmEsDB/fLNiHjZv/gI+B35AY0spvl/3Ca4HHUZbswE94yWIawjB1fLXZ74JvoW9Fhs0qzC+Vd2JCKs4hfuNQaojHjuuynlg3PKyd3ydl5njeaF0TEl7XoB0StnoWiGD4McKuLiDaMbQVBksg6liS+eTsg0PWyNkB4rnWYEY7j4qXTE/h8k23VmUnrsKU5N1sJqTsWnDV/jTH/8GwcEH4XDWo7D4Fv7ub/41Pv74DeQW3QGtx6RpEQvKXIxd9A6uRa+zAg+bIuCr34979iD0T1LbWoEpSbYsLywaXxWHXu/vCnwz7tc/1+Ji5VmJG8WP7yCzNhwZdRFIYvOkqmt4VB0ivvbJ9aFIa4xAelMktLTBfXIL6Y8jkVIfhiQ2Waq9IYP3UxrC5G/a5miQZaYd4++NzMYYZNRH47btPHxztyOkKgDjUpSoOg5KF0EWS8s6wnPGa9ICyO6gFb3uclw0nZXGRZb2h6jrSERLZwqaulLQNMAuo6loGcwQl6auZzq09mnQOZCJnmEDekdyZHQ/16NrMEsebx/KFPvXpiEdHj/XoXFIi7rBdNQNalA7qJEahAT7Nfhl7Ufps0fi8iO7JLLmKSKGRJOwz1wD2E3YXS4yEtpNkrmXHTeRixox5ixF77ABdd2JeFQbhBP5B+GTsxc3bZdQ0ZeGpmEDHrQFw7doLx4PpklnSkrT6N8N+niTRZeYx07InPsE3lw7uKayjbpyWFEkCq8RdtlUSYAAdk+iSlmhm2vhVDH2GXYo8O0gEUEtvdp9EBmfxMNfxwmz59/YeCm6nmZhfJzXaI3IsZ4Pl6K9Iwtj4yaMT5hRXJKI7MxoNNakY3zYSzByranE+JQdz9016HXmo6L7Dm6UHsaWpB+wMnkjVuSdx8c5d/HnR6mYn2HD3LRKzE2vwnxdNeZmVuPtDBveIvBOMWJ+qhmL021YqLFhnsaGuelWzCVrrbUIGJ+vMwkT/k5aGealmzE/naz2zFigsWH2IPP9rrYK7xFEZ9qxKJP3K7FQa8FCLQsqzXhPY8OidDLc1XhfW40PMqo9wJva7RlNt7fpjdxm2uDtNOltfqNaur/c/Eb8uGe5lCgt90zjGzbB+aXEhOy215lk9q23SFK5lNAisOrF+C15ycsgW4Hub3JrMDOq8U2OHSsoMcmrkVsB3rnVWJlXg28L6rCqsA6fxunwv/+H//zXk51oOsIFfBs6FdA2NIfB0BwqoJvSk6yOCJQ+vYP0p5G42nweOwp2YFvyOtwpC8Cz8RxZ5KVIhUV7Evi5wBA8U9dcDheZSckKCRz4NzJ6XGiYuXKSc+IyqKtFWellTZ5iDC94UEwm9VQzW+u/fzHNgG8GF4JvD/PtVm4a2Y+jQfCd23pPgW8uhB6WXAFxJVOhdozMnVoQPd+VwYh6aS6EU2TA2b7bCqfLCFpKTaNGBvV/qrqZ37tEtu684FuSk+lKNA5n4mrFKWgfX5cgMjsY/Pp9FroVCbNOX1zHBEEVJRJVoqNncjIDvH7nGPF4S0JEDSD1aibRbdMT2T1BcFUhHRMftoTjx/i1SOq4hcHJYukc5hYnDRMmnJSsKH9k0T0zgMr2uirWaR7WI7khHHGmS0grv4qChluwdj2CvTcZdQPpaB7NQquDxWVZqBtIQ8dYJp4MpUvHQnYtbBnWoH0sA90uA3omc9A7mSvAn4HO2BGPczmHEWoKRMNIluh7uYBQOjFTF2DBpLMS8Q/OYe+OlcjT3wWmGyXI+/r8hMCAPbBVJiI4aB/On9sFY3kSjhzaiosXt8HlahCQm9oUgUvFfshrihHgQsZH6Tqr8WwgFz///BGiowKRl/sQZ8/sgf+pHbBVZmPrluXYtPFLfPXlPJiMGbh5/QxOHt+KsCR/RNgvIrEpRNoJ0yNWOTpwLpHhUYuRuoYU+6OuE7JRKonk7+t9bhDP58OFh0FtviRLlBDxGvRotwXceRZYdZ9z3KOnd1tQ26cVr+go4wWwCJlJmDDfUhBrxfhEBbp7s9H5NEMcG4AGDAyWoLIyDo+bNBgZVXNvZMyG1t4SPB+xIrsmWrzJ6/rTJKGjbzy/PxMI9R0JMLmgUgJhhMtZDsck/Xi5HV6G0cki3Ku/IslV3UAWWp5k4vLVXbh4bReaWzJw+64fPvr473Hq2DqsXj4HKanXUNaswYbL3+M/Lvk3OOO/CeOjLCJmN716FBXFYsvm5fh53TJ8//2H+OLLt3HyxCasWf0+jp/chv/yp3+La1cPoetJPrqGCxFbfw0hplOvzXxT6/pL8G17+gih5Sfx4HGwFNEpKRavRZVQqVvGO3V9stCRg2BCdjykdsTzd7lOK1TzDeqK6WhD1xqJY2YBWKHlZ8Qyku5DdAviDo0qnp0pamaMltgjjTdq4Ziowe2YAHzz9QKs+vZ9REacwbNndmzd8QWWLPwjfvjhU5RVJGMKDdKsQ5jbkVJp0EFNfV7zXZzL8cX+pK2ItV5Bn4NF5jaJg2NjhSqRfyWR8Dvx6S9eq+Zu44AGYfZL
X/ygLjYCmwHrr+FNys9MND802MSvMPwSglG0qzp91lEC043R2WbLUwP0/D/vzdqJx5ikWml3ETRG2tMB7KZ0GwTr2fLMyysVDAAzdcjIGfc9SicTwf+9L3QtufjMnFCgWIygZE8cSl1ILNZDYHmXAyKS0oKY7CqZObcfjQj/D13ofPPnsH+w9shu/ZrWhsyYfvmU24FrIX7R0Eny2iR+UCQT07I8y54WmbyBH3guUVK2aoF+YGh4DYRTeUWkUDygWU79dpQvXIE5wrPIIHjaGYcmhFesSFfBWIiBRCbYwlk0y9vPveEM2vAvqkFE8ALedYKcPKJpbnyV6NpvFsnDSeRXZnLKZsRgHaZE25IVP+llKWOnFr+Nd//d/Eq/f8+Z244L8VRWX3ERMXgN//7u/xp09/h7qaAtjtnWKR6GK5X5Iy66DrT8JVzVl0DWYI4JPFWhrpuJCy8tCMvn4NoqIv4KdN/y9z7xkcV5ZeCUbsz92/uzMTMTOhiN2NiZnWSNqVRhq11N1VXdXs8rbLN8ux6L0FCRKGFvQOdAAJgiRAECRBA+9dAokEEunhDeEBwhDepAfOxvluPjCBIouUVNG9P268lzRA5sv37j33fOc75w3s27sBb7/5S2zdsgxffv4Gvl36Llav+gQ7d36PvnY95rzNcDEdlRp4VCG77RIu6Q/iZt5xxN/cgxtXdqDJdhfTk2bkFp6H3nANQ4PlEsJxwbofOc2xspkUplEWceXLLHpvgjuxEiTQ4P1IAKHcZihhYNT73GylWFHSC94nm0ReL9XYpv49g60CuQa8jvRWnrPAMHAPhyv34qzlKOzDmVLVY9VDXHP8FtFMTzNrQNyY+Dxa0OMqxYXG4zibvQMDE4XynArLy3mQPtx+kwSMyLPFjRVdKfhMwwLbaBqirVHinz4+WfxyJAE/P73ryRjL3MAKVC2mpx0o1SXgvff+FidO7EZm9lXYa7Jw5NgmbNz0JbZvX4YVKz/C3XvHQTmKpjWX4KVZbvrMGHaXo/ZJBrKb43HdfArHjQdwXrcXMcbDSHKcQU5LHKoHH6DLQ49w1bTIOVa776XqKeBOVUAJSGk1x8+6AAA+kwzRgPNPHZ8CPMbLX7YdR5Lj7EKryZdggElocGPHDYGXrLGvGn0z+difuwG/uPobXLIfwNAMv4/F5M3ipneNBHk+6KUTjAK66prItQpaG1k9lUZJqQpwnmfl0Ir2yUJcrzwqftiF0mPFTbwigNx+M24/uizEVfsTzmnqZ7/4GqvqqzQhB6Lrn3XOzerMTAUmJ3TiHMX7eWy0GEODeZicKBPrTnXf0MGEvzvwfXMDIT1MDPii8w/DpFRFEd4mwNmAWV8NBmdKkdd+GfvKt2BtwWbstKZilSEfHxdV4PVcC36Va8Wv8q14Lc+KN7NteDtzceLk84Nvgu0AtfO3CmqhjWDvbZ5rFoDBRzqUPMudZGGj5NPQGwbgLLYD5Os/FNfOB988r1Hy05I6fFqqxmeldfis9BlBOLr6ANjWQLc6ag4lPH5VRuBdhz+W1T9zMAyHftyLXUr4WnMoeVmLQDZSavrt4OO8lpsNkxVPXUqe61byDGtA2gQSYGtjfVXLAqBN0L3B+Agbqp8Opk9uNDL+XY1N1UyjfDo2m5hM2aYAOEF49SMB2wTcwWObAPBWbDc9wg5zmwyea693Wjuwy9aF1enl+Hd/+acG331XUT6ciMsNRxBWvhWJtccx5tYJ281FjdpTMju0c1OJaXZMeqtQO/AQyZZT2FcYgrXZ6xBtikL9SDY8sxaMuiug60nGkeLdiEjdiM6pAvjoX0vP3RkrrsVHYuvmT+GwZ+Lx43KYTMkoLU7Eyh8+xOsbf4mwtA2wDD2UUBDFCKnJW6yn5jvxnz85vsyEJf+GwIsAiZuDAIvmlaAYltdrMOOrxoCnHAO+Cpk4r9hPYGfxdjS4GMVN7SmbA5VeUQAsy89elfxF/TSZ6zsN0bhiOyQMGBkX5QRCAMMGu2o8mS5Dy2A28tsSkChplKdx2XwM5wwHcEoXIeNsxX5cEiu6k0ipuwhz3110e4rFPWNq1ggyzq45o7icMJSGEyfBBEvgBAUEx2Qae52luGc7J77TppEM0RSrIBNVWfD6DXB7K2TRJvP2aDgP2/LXI6/7Oqbd5ZhnvglsuABJtYDOAmQhKb3QEkBZ+iQTbcGwuxIFjxKw+s4K6HtTMOEiW/x0wSV7RPA9A2rPjQFg3oTmplycO7sTK5Z/gKV/fAtvv/1LrFr9BQ4cXI9H7XrRfB87TvCdA6AJDMKRnztnQXbnFRzS7UR5fwqm/WSmabVGdwFKOqjF5zWyCuDme+Tmic16XTMluFp7ElEFO/DInSffETebWhVDFqQAUKKum04tEiZF4MwyNgEkQ1i4GeOf0UWB6XWB70Ds39zVaHmShb1lYcivj5OGOnhr1b/nv5tn102oq3+Af/iHf4eCggTsDl2BPeHfIy5+H06c3IE/fvUmVq/8DFZzDtyuJuXJLbIVkzDdCnyHo70/IwC+uclQbJ1852jA8LARmVkXsWvXtwgPW4UP3v9nbN3yDVatZBk6CpevHMQ33/4epXlXMetpgFesB21wz1UirTka18zHUDuQAw9ZV7cdvineb42YcptVsqS/FubH96DAd8xPgG9WXLipIZjm/UrgTSDK+5I6VgJTI3xePVSQiHIdUv+e4Jf3Ddm4anGJYB+D+Fz7bZiYMyGr8wYOl+6RtNru6SK5xxigQ8cVNqqyCU+qGwQcdE3xG2EYu4PQOz+gdOA2njDplN+LzyqaYeqGhSGkZIHkhLjFqKpA9cA9XHQcw8O2yypw65mgNHjuoqSpSvzTlcaalpoEiA2YmrKjqJjBUH+L0F0rER29B3r9HZw/vxs7tn+HLZu/xfp1n8t3yGeAzzyBHu9JJYPh5pabPxsYhsJwo/bhHNS23cED2zmcKduLfUU7sdcQjsvN0Wgfz8cYm6+lYqOYYW4uOdTnZbXLAqdYqQZ/hued/xToVn/H6oF6bq1oepKBGMtRJNefU4SIkCLP+9mL/1w5dDHMjWSNx1WBZmcWDpZuw5LUT3Cj4TieOH8e8C1EVGDOE6BNgOrV5i7OfSQkuKFUlQFK1OR0ygAAIABJREFUo3qcJdK8fygvBGUtNzDmYnN5YPPI3gW/CcnB4DtwXVSFdPFnXfya11D1Tzz/qOYWFVjG/0/PftozBiQ3rO5wLg7o76WiQ8JitlqaV0nEUZvP70vmOlZZOM/5rBieLkdBwzUcygvF1oItCKuNxnelRfi0wIC3sq14NbMWv84m601Ntx1v5jjwdrYjKPSGATjPDr55a1HIjbwuqMXbQSMYfAcDbu18sfe2BsIVw/009j04dfJZwFuCb4JSJ58Hvj8rqYM2FrLbiunW7AFV8I0WgKOOwQE4Ku79KfBeWlYPGeVPkygXA+/5yPeAN/ezwLdmD6gdaRP4LItAgvAF4DvIIpDg+1mOJWS5gz25tXMNeAvLHZQ6Od88KWCbgPvp2CRMdys2VzMQJ2gQdAfGVoLuRcBbALepHdvMtBIky
92KHUHgWzsPsbSD4DvU1oU1fw7wTRlL6eB1nLftw56STbjXFC3lcxVprEpYjL+mWwCZhWm3EZaeFBwvCUO4LhSpI8m49+gKdhdsRXLTOfRNF8jkN+WrQunjZCxPXorCvkRMi/MDGdJa3LkThc2bPoKp+i78/mbMOG24GheGV379X/BuyKs4WRYB23Ca+KrK5BNgPuThfynGZfHk9JzXAdbzKVhnWVjZn3FiJEihbZjXZ0bbYKYwRlvzN6N45CF6nHpM0v9VnEOoz6NO1YQ5N/8/y9e1cs3u1p3FZctB1AymCOtJWYpqgLOJ9jqzKQ6Hsndgf8ZWXCjbj7jKw0g0ncD92gsoaLshrFSzqxADs1WYgB1uKfmSPVeWjX7aBUq5nPIckyQzskxOYEFmXmygZINAEGmDtf0uLhRF4prltNj+MaadJUMtGp6aY4JMgs6uqWLsqQhBdmc8ptxlsphQgkQwSWmDW1hqxZx7PGS0jfB7DQKkyPyQIWwfK8LZwjCE54agabJQkgMJmjjRcwjY0iznfBWSdkrWz2pNQeKNQ7h+LQpxVw7hh2XvYt3aT7Bn90qYqrOwedNnOH1qC7q6SkTjyEWE7A5/dosrF2F5m3GzPQZdbh18EgXOMillJHR2oRON2hzwnpbmRo8Jw74K5D5OQMj9VcgfTMTAjA6TzgpMuSox4zVJ0JDLrYfLVQ4vwSDdOeboysESPfW8tIGkdSNlKGYB90wLHfWVY5r6cH+1NL41jOYiSh+B/PorcPN+oHsF2SZ+F5SwsJnWbcbQkBGffvKP+Obrt/DPv/xLLPv+HWzY8ClCd62QiPuVP3wCXckd+P0t8Iich0yaAt/6njs4XhqGzsEsKcfzO1UVHm58yLbVoqurEAk39uLihV2orLiPjes/lbANSsKuXtmP2MsRWLv+PZgNyZiba8Yk3S08VZjxlyOt+RzizUfheJwmMgRQS89y/VyNSpakjt9vhYVg1HYA2U2XFoBvbtz4HZD5nqU9J+VpArZVkxmvBRNOedQGX3u9bIhVoFKYOjZ7sRlzTn12MtNOkQXRn5zgwI5ep04ag1kOv117Hk6fCaNgBDvtLNkQrvTavHfIwHs9RnS6cyW06WTVfjRPFcKDWng9FnhoQShhQax61ABe3v8MDeIzb4e+4xbOGA8gu+fGSzDDirVkCI56hjUQxe+nDhOTZpToruG7719H+J51ePvtv8PlyxE4d243Pv/sNfzd3/6fWPrHN1FXxw1oq8inWP2RTUngvpbPRrkYSQaRDyi5i3OOIUGVKBlOQVjpTrx//WOcqtiHrJHbGHLpFFvLpksPwTjnBH5OyuzoIa7Z5T1nXp3fcLwYfCu9N2VaJtQzqrw6SsJtFGmgZFsvJzNUn0uRNQ559ozudIQXb8ZXud/hTtNpDM0U/SzMt9wnsnHjBkVtav1s5OSzzGsk/tysiFkx461Cl6sUcQ1nsTF7PbIfxWOE6ZUBWSKfRT4HTHVNbo0V5rtjOGd+QyJgd/56/tT1ftG1VuCb8wvnQXVNA/cK3wPna18VnLO0ElVrBedvr5/PIT8Xn0O65VQA7krMeSxw+5jMWY68hmhEpK/ChowdCLPdwiZTId7OtuL3WXV4PaMRv81owm+z6/FaXg2W5NtFRsImyeCwG+1cY7a1oyYleV7DJOUlGsgOPv4cwTcE4BJ+U1IHAu/F4FuzBFx8fBbg5p9poJvHr4IsAnmuOZUEH5fqGrBUVz8/tBAc7SgMd1kjvg0amlNJ8FED2cHHZQzCCUqeDAbfKxaBbg2ArzT82CJQZCYBb27Nn1sD3IsZbw2Arw/y4w4G2zxn5LvEvjN9MsBoa8dgRlskJZqshNKSwNhuootJ8KDc5BFCzG0/Gjst7dhl7cBuWxfWpev/9Mx3XlcMyp4kIsZxEKEF63G35vQ8g0i/Zd90hdrBz1kwOlWKYkcsDqdvxYHS3SiYfIAndGhADeJrTiJKH4qCjqsYd5eKPVXbTAl26rcjb/w2pty6ALtcjwf3j4u9UXlZIsbGTOjrK0FvTwU2bfgcn0a+hb1ZW1DelSTJdGrnryYJsgxSBn2pyeinJir1d/PlQyn3cvJSJUlqsLnjZ9KZ01cFl8coYQgNvfcR+XADNqQsF2baMZKDSZahCUbnVMoZF3v6gc9R6jFrxsPGC4izHIJ9IEXAvNIl2jHlr8It40mEZG7EOccJ0cf3zZRgYo6hGyyDcwHWGD4uLEp7r9gUToRcTPme6bFdK6VEj4f2WQQzykdZXbtazIgPMhdOO0Ymy5BeF4P9BTtR3XdP7Mu4YM9yQg2km3IhpESjdSIfW0o2Iqf3hnhLk5EkQ0nv92moIS4DXlpUVcHjLhUGT9hsnxHTvko0TObhUvUhxNpPY9hLUE+Gmg1iKkVRFi8BBfxOVLe+c7oaOdlnJKXtg/f/BmvWvI2ow6tx794pvPvOP+LLL5bgg/f+HjcTozA+TtbJLjHiAjro6QwzLlTux7na46ibyAnovG3Sje+hfpgLiVZ+5oJJZ5FZphoaYZvMxLaU5dieuRrXq44ixXQG6TUXUdyeAEPfbRj778A+koo2TwF6fCXo85fhsV+PXl8ZOl1FaJ3MReNoJqjttPffh7nnDvT9t1E9lIK6yQzY3Dm4N5CAiIowsTCbkvIy72+CvkoBfvxOfT4bent1uBK7Fw/uX8bmTd/jk09exZIlf4133/6feOv3/4i/+E//B85F74LH0xiQayj2lxvl8u7bOFYahu4nOQp8C+uv5FW8z+m60dWdi5iYbXjllf+M9979a4Tu+grVxhQcPLAcb7353/DWO7/AnogvMdhXCnqtsweD98qUpxz36k7jmuUIap+kiTcx3UYEdAigtgUitK2wPXmIS/aDyGy48BR8Uzu+CHwz3dTj1Ym0RFwnJMSDzyPdkRh4w8oZ7w8y3bz3leMMwQSBO1AhFQ1u6BgAwsqVgMQZ2mna8NhXgZuN5xGZtQW1j9mLYlYNpASTmmyMQIr/32dC03QO9mdvxDXrSXRN6+EVX2h6Q/PacRNHwNIgmxJ6wDunq2STV9F5G9GmQ8jtuaEY/BfOVeozcmPEqpUCRnSVqRU/+9FxM4pLrsPrHcTWrX/Eli2fYPPmT3Hm9G7EXDyIbdu+xoOHpyTKm0FCZL45r8l3QXbSrQXuqEZeN/sceL976Dpkw6ivSpoKN+RswAF9GJanr8Td1lg0T+ZimuxtQN9NXbJTPgs3NIH56IWf7UWAUJtzVWAS5XaXLEdxq47MNyVcBIkvnsfl34iE0C7NzXBWY8Klx+3uGOzKXYtVucvxsCX652O+A3074tIiFZsA8JcGzoB2ntfXbcKg24Cr7Zfw6d2lyOu4IQ2JAryl+qcamAV8+xT4PlWxHx0jtMckWFabs5e7Bi9zrXktycYHrx/8fwHWnBVTrnleNQ/RGUlsgDnve3Tw+0rAsDsSTVP+Stj6H+Bi2R5sSvsGG8t2IqQ+BT9UVOOVW9V4JacZv81twWs5Tfhdbj1+vwBsU8e9EHxrYHvxMVi7HXy+wKXkZwu++enQGzLdi91JtNcL
myWfarm/WAS6NQCuBd8Eg21xLnlO6mQw4A5mu6nrDgba2rnmVMJjsEXgc+Ulz7AIpFVgcONksEXgM20CKS+pbMFii0BxLgkC3KLnDmK4g0H3s8D3j0A3Ge3AWAi0FejeYaaFoDao81bgm2CbQwPiPA+1dWKPvRvrMvT4939q2Ulxbxx0A9dxu/M8Dup34IphL8ZcpQIUp0QKwIWAwSDpuFB1CKF5WyVCvtVJFtMu3eHUcbb5inCkPBT7y3eiYCgRTdNZqBi9i3WZKxDfeBIjbi7gnGTbkHTzKNav/RgV5XdRpkvA2jVvw2RMw/rVX+LD3W8gJGUViluuw82Jl5MqmduA3dU8aHrZSfml/p1inLioqomOwFdjCB3weS3wOCuFAadHcuqjK7IBOZWxA7csp2EafoiROYOUvMmcsSwrEpO5GmS0XkaM+QAqe2+Jhp2NqR2uEpwqZxTvZjxsvYIupw4EYXQfUYudAt1k4Sm14VAsMSd5drZz0q6T9+P0cMGlVpVNUHXwzdXAJX7IZGSU5EQFJamFiZZ0jSO5iDYcwOmcEHTNlgpjBi7UbHr0UXtNX2EzjCP3sfbed6gcuQcXQStt9wLgm7IbVgYU80Pww4VCLfCzdANxMaDJhpbxEpzTH8B1ezTGxB0hUJYNLC7i5xtg39VmQjGIM1PVqCiPx6ULW5GYGIn29lzMzNiQnHQS+/etRtLNI+jsKIXf1wi/jzpzG3xs0qN/uasKBa3XcFwfAV1nklgFckGe89KRhNdFsU1yL5EVFI9cVj3M6HOW4UTpHqxKW4YwUwT2GCMQot+FHfqd2FEegq3FW7CxYAM25K7FyrRlWJuxGuvS12Bt+mrxfSa7tSlng4zNuRuxrWAL9mZvwpHU9TiWsQkhqWvwxc3Psf7hKmEdh9koSCDpqhTXDq/XJCy202lBlTEB777zV0hLvSwSg4MH1iMjPQaZ6Zexc/vXeO2V/4KC3FhMTyogJVZv3AQK+E7GsZI96GVSrPbdSAVJMexc2On8Qt9yc3USqioSMTxogNfTiJ7uYpSXXUGxLhZt3fmY9ddLWBJ7CdhMOu0rR3rzRSTaT4jLAysqlPNQT08JCDdX3ACxgdY6eFfAd3rd+afgmw2fQeCbcpE5Bi7RLYbe71LeJvisxfSMGS4XG2MbxQVkTqwXyYgrqRGBzKxspvh72ZipehImZ1XYjep5oAzGCtPjVJzVReKe9az49XMjITIN3rd89nn/wy7+3vtLt+LYgw3oGi1Ee28hGlqy0NCQioG+fExOlaKh7TYcNffR112EiQlWoZhVUI+q7hRcMEUhu+MqxHbypeYftflSYVvcWPD6ODAxWSna+R07P0FJ8R1sWP8RQkO/xNdLfyMViqTEU1i18n3Exe2SjbqqJKi5TNvUyufXqniBz0sHHsqhqI3ucRYjrSMe0ebDaHMXQ9d0DVvT1uJI6R4JXxt0lsqGnlU1Sg8oPZyRZ/9lQPGLASHnOA4+i2y4vGQ9JiFeXpIBQjC8zO8JuJywoZnVuFkz+ifzcUi/C1drj2KvIQTZ7Zcw7CyZJ1ieAlpeb21w/uf5T/9OAlTa7Am5IRvBAEgl4PZodpZ2jLgNyGqLx6b0tbjZclEcqlg9mGGAVoA44ef+ecD3T79nNbdqn43rqao0sWrEypL6e9X/xLlBNqQiLSHrzf/HXqgqjHmLoeu7ivNlO7A+5Wt8lbICy8rOYLktFx+XV+FVarkLu/Grolq8WmTH60VW/L7QjDfzzXgzz4I3c8l81+KN/FoEs9rB5wtAdv5Tl5LFPtzaa01Gsvi4UM/9tHFysTuJ9jpYRhJ8Hsxsa2BbOy4E3SoU58tFrLbGcgcDbc2lRDsudivhazZRBrPaPNfAtXb8vqwJy4LGD+XNWDykifIFyZOaJ7d2DAbd2vkaw4slJYvBtyYrCT4yHGfesWSxLzeDcAJDY7O14zOBdnU7QkwdC8ZOcyeejg7sMrUh1NyO3ZYOOfJce73H2okwWzfWZ+jx737xl099vv8UDZelffHQPb6G+90XcaQyFDEVEVKeo+xiDFYM+itQNZmBg8Wh2F20A3c6r6PNRZ9rpXX0zxiEWaQM4ZJ+P5be+QqbctfhVHEojmVsQVjaOqS2xmBCGnoY1FCP9PQLOHp4Ixrqc1FTk4YvvvgH/LDsTSz98k1sjF2Ok2WRqOhMFpkEu89FTiESB04wZIpeNNH8S/+egI9MFktsaiIS9kjAigNevwVOAXZcKBwY9RhR3/sAadVncKhwF46XR6CsIxHjbHibY5oh2WfGq1skJvyieT+q+29LBG/l0APsL9mNsKKdKOxLFmbEJ/HPdrjJglK6QjZWtJbUVSsnFJFpyIagRvzQp0f5u5ow62/A5LgRTwbLMNBfivEJ+iHXBxxV7PAwsIGbJAJxgs45u9im6R4n42DuJiSbj2GcOnH+TtEoWjEwUwJdTwLirYdxqngXerwl0oAlDZdk6MjS8jugztlJwMXvpE7YOgmo8bEDvg4eNODRpB4JpnNIskZjQvSY6vvkdVIbK2ojjSKjmSNgoocz36/fgekpI4aflGNkpAJOFxeJGoyP1aCvT4/RUTNcTnrDB0JEROoTWEQ8NvFVPlcWiZSG8xiZLRdwxZ9LoEvmjpIcxfRrGxo2USn7xFv2Uwg37YFjNBO9M6ViNdc1VYjuqQJ0T+ahczwHrU/S0Pg4BbVDqbAPPoRt4AFqnqShYSwLjePZMponctE+UYRH43loGUlHzcgDJDVFY0feRtyynkKvrxIzsqipDYmfyZ7UkrLy4KnGwGAxNm16B998+yq2bv0SxcXxGB0x4fFjPdLoeBGxFH09RfCJ3IObH7qDKPBd1pWMo8W70T9eIN+V3NcBJs3vV/7NrBSwKdbtssPltMHtJCCvE8nL1JQBkzNVYn0nmxU6jEipneBbj5SaM7hijELdYKpqyHIz6ZTAj7prymsIfqthH3kgVoOpNdELwLc8b/OyE+r96R6kKltzDJSZrEZhUQyuXt2DjPTz6Okuk41Cfv4FJCSEo6w8HpOT3CDXKycG2VRRh6ockaZQLUFPsvHgn/ltaBnOxbXqo0gynlD3LxhURctCXjuemxXJYDiAnTlr0TaWh9HeUhw9sBLfL1uCtWvfxtXL21Bafg7R8euwftP7+PKz/4Gr8aHiF0+9u6nvoeiWs9rj5Lt44VzFTQg3f6Ibd0jF0EsNOEGa0wKzLQlbtr2HL7/4Nb755jfIyDiF2NgQrF71Fr74/FcICfkCLa0Zal7kRkTcgtRmXfoV6FACG2bonMT0UMoivOrZpVSqa7oYt+vO40TBHjDifob+96PZuGY8ij25W3Gi+iBKh+9hlE2vfK+BqpdYsr5wHn4x+JYkV/kOLGgZy8Ml+3HEW0/AI70hvJ+0n/GiOZ2OWiQwLJiYM8M0ch/bU1fAPHEPZ237kdFyHgNThT8L+ObGUjbxYlOoNvOy6WPlUZrPHRhwVyCt5Yp4zd+znMXYdLl8pwxpI6kkFqVebjr/VLITstskSch6qx4K7fX8n3EDyqoGJUuyvnODwSqsGYN
uPSz9aUiynsb6/G+xKncrvs07ic+L7uL9oiIsKajCqwU1eKWgBb/Ob8ZvCs34bUEVXivUY0mBHm/kGfFGrg1v5DrwRr4DbxQ4FoDvYMAdfK4B7MVHLfSGx+DQG54vdijha3pvayM4+OYPxQ3QRnDoDc81gB18fBbY5p8Fh+BoYFs7BoNu7fxHntwBj+6FFoEqdVID2tpxAav9DLBN8L1c3xI0mrFC3zQ/NIAdfFxV0YzVFS3zY01FC2QYWrEmMNYaWhZEvj+L4X4Wy72Y6d4kDiUq6l0Lw9EAd/DxJ0E3me0g0P0UbD8F3rssXdhl7lwAvjUAziNHmK0L4fYebMiowL//b39q8N0dh6qhm0jtjUGUPkSikEf8erGMM09kScLbwfJwmYhv1USjbbpYkhXZbEiATnmFaAJnrdLEszlzPaI4Yfffh7EnBZb+VPRNl0mjEu3A6PLR21uMurpUDI9UCrtjMl1HUtJ+6EqScMsRixjLMZR3JIkLiALeBHdkv6ij4+TBCflFk/GL/54LiViWBUqHZFpU45fS8WlSD06ofN8CLrwmYS6cXiN6xwtwr+0Kjlftw73a83giDF4tnFKqrReW6EF7LK40Hkdp/02Utt/E3pztOFKxF5bBTIx5KkXSQktH/nxxDhFbxwDjHwCoBCrUt3IMPi5C5sMzuHxuGyZHrehuL8bV2F04cmg5os9sRHHhFcxMk0Enm0bvVwVa1WdghLlFZCg9bh2ut59BVNo6DLp0KrDFY8aM14jy9gSEp64SZuNB+yV0zZVixFuOJ64S9HvL0OMvl02ZxNz7uRlohK3mPpLvRyG/KBb9j/UYH7Wg0nQH0YnhiIzfgmTzWUxTe80mRFrBiU6dTAvdR1je5kRfjVluBOTzEoBTYsBNETcNlKyY4fYQYNSJ3pZ/zmvCBjPqdAkOXDPl4lc+M2tCgvkYThojkNt/De2uPAyhEiPCKtbK72eTJQE4JUMusTw0Y9ptQHbDJWwr2YT6/lT5foSllWakpwsXJQLcKFATyUqB22sUn2mfnzZ4dOQwwk8dtpfNqYyLrsLMnBFlj5NxwhCBup4UWYTZ6Ke02PQe5v3NhZEAvFoqGw0N91Cii4XNcQ8DgyXigkKZw5MnZXjUmgH3jGK9FXOlNh/cHJV13pKG58HJ4qfgWxhwBZAZLsLPwN+tNk8NcE7zM/HvuRllox7T6gK+zTznRoHAerYKDxsuiOabvtcEdLy/CHzl/QtIU5Ujx2gaYmui8NBx9in45j3AzW4AfEv8uyTCsp+AG6N6lOriBVguX/42tm/7GlevHkB+/mUBwBs3fogNG99DTU0qXO4a+GZps6icSkSfyiZDymPI0lObL6FZdpgfP0B02V7QQpThVAQ+9Asn0B33Vokn85Wqw5Jkaei/Aycc6GsvwbaNn2N36HKkpcaitaUQjwdK4Gi+i+z8y/j4w/8X8dfCMfi4DHOz9TD23MPF6ihktsQCPgbOvHgeUtaPBODc/CgAzcAgl9uM8Qkj7DUpyM2+CGNlgnzvXZ25KCqIQXrqGdQ6HkhKMCsHCjCxEqA27ASjdOIgW00igJULSRcl2GdzsN+MgekSZDRfxpnCcKlMMhDK56pG10gBUjuv46DlAMLKdiHecgINTzICkjZVGXzxZ9OA8/OPAmT5fmGVMKAYxwkk1J6V+19rZny5+d4Bl9eE9okCFPTfRnzTGezLXI9eZy6iTZFIbTgjbLgib4IJHI31VuzuyzDfrJCRrRfyQOxXWUngXMJnxoEnHgOymuPk+btB2dJQtqT68juYobSDldyAHSbfD+/Df1vDJe+x519j9XcE35wf1Od9eq7uT76WtdVfC+d0Bfy0x4QRI75SCc9LtMdiX+ERrE4Px2cl+/BpUQK+KDPjzZw6Ybt/V1SDJSU1eDXPgddyG/C7XBuW5Fbjd/kVWJJXiTfyrHgjrw5v5tXjLQLvQouA72CgrZ0vlJQ8K+J9kRf3cwC3xmi/jPc2PbhfBnB/UdIAbZDh1gJwgo+i4aaOuzQQgKMddU0ShKMB72/IbmsjEIaz0JN7oaQkGHRr5/TjDma6F4JuBcBX6BX4DgbbPNcYbR4JvOcBd0UL1hpafzTWBcD3YtC9gNUOCsNhMA7DcBYP2gTOe3PPs9wqfVK8uU0d2M4EysDYUc1AHMpHFgJuMt6LQbcAboLu+dE5z3JrgHsh692FCHsPNmUY/vTgu6w7DpWPbyCtW4HvC4YIdMzkQ/foujhtMIUtyX4Gpe0J6BwvED2k2HixxCzaUbVYkFVtGM7GFctJnDMcRlVvKqalQdABt8cs8pS5WaXLJEBxeRiQwTIhGzwqMThUDLe7FkUD9xBrPYbSRwki45DmJgJvLqACTF622eclFj2xHDNLTLdaSAKgmwtmoNNeQJ801TDWmQyaFSNzdOewSrNlSXuCLLZ3Wy7BMZIJR8sd1PdkYMqlmuvutcXisCEUh/WhiKrcizjjMTj6U+H1K7adwI3acGGVBDQQOCgvcIITeV9kIdiQ5zWhueE+du/+Bq+//t/R3VOBqqq72Lz5Y0TuXYZr1/dDX5EsFmW8bgJw6WtN0Ou3YYZODlyQvVYMTutwo/ccDqavFUbXJ24dDkz6q6HrS0JkyWasyvwOZ6wHcLP2NO7XnMUdx2lcd5wS2dG9xkuoHUrHjM+Knq5y7Aj9Eht2foYNOz7Fg9RzyEi7iPDI5Vi24QO8+/2vkVB2FDNuXleyrfze+f74mosBFw4uDIEwFTKRAg4I0jgIzlX8s9dHcE4QTLBNVwzlDy1BK5TgeJh0qPyUczrisbV4A7YUrcVlexTuNp5DYfdtdAwXzd/HvG+ZaEdgzJ9FD2/bQAo2Zq+CtSMZM27lziL3QUAryU2akhRoTjf8HIGNgIAnxaoT3JDJZlIm/cwZxFPecxOnysMwMJQt0gy1weO/p5uGKpnLQiiLnxkegnov+wfsAvQp5aA7DK+hSJCk7E2LOD4XvI7q/qHc5khRKIam6NHOzR2vGa+lAt90aiEAl/dHHbPXAi+fU4b+SCm6Gj4mocp1UVZ4TEblBsjpNyC18RKumY/D0f8wUAWzY9ZF5o+bKFoZctNkhWMkVcD3A8eZBeCbwEUD32I3GGhanZ2rh9Ndg9jY3cL2n4uOwJEoBv98i5OntuLb75YgMfEoPvzo75GXfwXURPv8ddLITSaXgFMkVPw80ovB/g0zRrwGie8+rAtD41C6AHOyl7yOPq8Jlu67uFBxAAfLwnCv4xqcbjJ+jWjq1eO7Hz7EimV/wPEjISgsvIHxSTOmZkyotNzHRx/+LQqKrmBy0iL2gJVddxFjPoK8jniA37v2DP/Ucc6hNpEBnTO/JxIaTLik7aZ/rh4uJ90paoOqQmZMjRPE1QfICP3TeyBhFA4iAAAgAElEQVTwPau5ixIDfq8cFhVRT4eWAPge9pQjq/M6jhTtxjQ3Z2w4laAfK4Z8lTANpyG54QJO6vdKQ7i+NREusaILBrDPm2tfBAgtGJ/WYXiyBGNTOmnkP2M8iMS6aLlufP8vB7yVnefIlE42VltyNmCPcT
fuN5/HtKcU0dWReNh4Bv1TBT8L801JoszNfAbZ6yDyPm5MHRj1V6OsLRHnyvbhsvUYmsaypV+IPTK833xCsvB54rXhNdTA97/eanB+7fpJAK6Y76dsNzf47BEiG66AuThWsfIlibxG9E9nIaf5PI7rIrAlPxLLi47hy6LLeKewAu8W1+H3eW14LasNr+c04fcFNXirwIq38yx4K9uGd7Na8U42kycJuh14M78Rb+U/wtsFzXin0I53ioygH/eLgm+CGe9gtls7/6Co/plMdzDwXgy+tbCb4ONLB9+UNODLoPHMEBwNbAcdF8tK5kF3AHwvBN3N4GtKSujHrQXeBB+14Jtg4P1jxjsAvp+h5w4G3i8Nvitbf6TnDgbewnDTk5ugOzA0eUnwUUuiFKAdiHzXzhn5ro150B0A3z+Sl5goKSGzrUaouQsLhqULoZan4JsSEw2A88jX4fZuRNp7sfnPAb51vTdQ3nUNub1xOG4Ow86CtYizROFySSTiy6NQ3H0XvVNFcPFB5QKyADBxkVB6X+q+6ZfLRf9UYRjOF0fC0HNX/IY7p/NFuqGihymDINukWDrqyPziVMCJqEZSEC9ZolDUei3AjqrmQS4gyu7oeRP9v+bPlfZU2EdtcdRYdR4D3r0SikMQy7TCOTsm56ploRr1GpBluyDWdFGmCIl2jyvfh/jyA8iwn0N57y1E2w9hbcYybMtcjaTac+iYLFTWZiyFkzEUtkttKISJW+ArHbR4iUzChMd9+bgQswd/9/d/gbbuAuj0SfhhxXuI2LscKfej4ajLgcdXp9hlavZQC580f9rgZPl5zooBWkO1XsdR/U7ctp/FkFNZqfGzUadNDXrp4G3cbbuAWzWnkVxzHg/M55BhPIsk+o1XH8QJXRiS7Ocx7rLAWHkHv/v9L3DszE58t+JdAd3hEctUEM7ZXXjvs1/jclIYpicJUuuV/EU+j5IJqEZJm7iuKJD4lOkX2y4mH4rdHJ0uAiBdwleUJZ0AcJbbqbcUaYlDLB8bJ7KxT78Lq7NX4pw9CpeMB3G+8hBuGo4jo/4S9J030TKajSGvHtNMqCRA9lWj212I8LS1qG69gSlXhbBy1CH75Z4NPAOB50DpjQkSFLDlMyKWhIEjmVV6U/M9kmHXdSTgRFEouibyhEkW6QvBjLDGvCcU4KArC/XV9D7XWGJZ9CnR8TGFVm1+haWmxEl8+PkMsDHOgqLOm6L57vSUYoIbF95nlAsJg8/rRLcOo2i0eZ97A5sMdS0pe6C+1yyyGEoWuEEki8r3wOCnB02xuFF9HLU99+GSa8OGUfpxKxkP2X+6eFT1JOGUMRxpdedF/sB5gtUPAnO+H9VEra4tqwi0zPN4G3DmzDZs2vgFYmOO4NDBXVi37o+IPrcbn332Gg5HheDjj15BWVkypqb5DKtGUII1glYy8Kxq8DMTjNP+tH4wHbHmo7hoOYZpVipkM682vn3OYiQwAbY4DDldSejhpsTNSksTmh+XY9uub8Xab8UP72P3nu/hqM3AkycmRMdEYOk3v0VDYwa8Xrrp2FHeeQuXzUdg6E4W+0JeM3VPqM2G+FHL59Zek6lWAEy+Z7oJyWAFgL0dtSJhm5urD/TM8B5hTwkBe33geeL3rpo1yXoLYyz3X2Czxc29ZC0oFpyWiZp14IinHNkdV3CwNATjsqGtExtVryT0kgSwYdBdBl3vLVypPoLDWduR84hWeTrQKYn3Bn2gRaogPSBWgLZ7vI9lTlV9AqLflueH37FiX8uH7iDFcQb3raeQVXMOcYaD2Jq3CTE1J1UDsfw86qk1GYS6X1iRkT/j9yzN4dxAW6SR9kxRGNanrkZSWyy62bvkNuBc1V7crz+Dx5P/dvAtz6dIApW+nFUVykhUxcYKe/c9nCvfJ1Hy9aOZYvfKKuSMVEV53VnBVJtkAd5CQliE+b7ddhknyveCPt8yn/A71NYjbX167jForXgOCBfSgs88g4KkOZf2iOzVqJZNBDfO7I3woQmd0zokO44gsmAjNhdGYnlpPD4vy8YH5UYsyWnDmwXteDWrHr/LbcSbBQ0iJ3kz14Z38+14J9eK93Ja8G5OA97Oo7a7Dm8XNOHtgha8U9iIdwsdeLfIshB452sBOE+Z7vcLahE8lLykDhKGI4E41HHXg8E3wUMCcIrr8XHQ+KS4HvNDC74pacCn2ihtED/uhU4lCz24f+RSMh980/jUrUQLvmH4jTYkAIca7qfjRVru7wKJkwy++UHfPD+W6ykpYcw7B8E1X6tgHJGWzAfiqHAcYbgpKTEsHi0S886od21QViKjsgXrKlsXjqpWKJeSFmwQq0Dt+NSLm57ctAfcLJrugK47EIKjheFsCQThbKt+NO9Q8ix5iRaIE3wMMbNZkpHwauw0U8PNQYDdid3asBBkd2GPjE7ssbRjj7UDYdZOOe4h8KbkxNqJCHs39jp6sCnzzyE76UtCWVcCigduILr+AL7JXIrVad/hju00eidKMTFXKxPwC8taEjZhwqPRTCRYj2Nf3hacM+7DnoJNyBu8gSlPmcgKFHhSk+/CEp9iASr7b+Gi5QDyHlEzSSkG9YrK7UQasWRi+dcA7X/d/1GgS7GMwoZTGsBI6zkbhnwG5DfGYV/uFqzLXYnD1XuQ3HoeCbZjOFkSgmMVodiWtwbLMr7FJdMhjE7qxHlCQhkCTZkCLv8FUhq/3w6z5T7eeuuv0NOfBZ0uEcuXfYg1qz9EePgKPHgQLaydMKNuC3w+WgKyBG9B/3QZavpSkVofgxPlkTiWuR2ZjhvoeKxTbIfbiJHBMnR2FqNvWFkpDk7oUNRwEyX6OAw9ysHjySJYhlNwxXIYpw1H4XQ3wGZ7gPff/TscPRKCtWs+xe7d32N7yBeI3L8cGVnX8M237+PcScpkqCdsUgs/F2zRGKoNArXHLItPe42YcVNrTA91Zb2npAwEHWRllfxHvgtZ9Fmt4PcTOJJVpxzKTW1yFe7WnsepqgMoeZKCqoEHeNgQg4TKozhcsQfHDWG4YT2KkparqOm9i5bhdDz2laHRm4+o9A0wNF/FOG0FpXRPJlixx4qtpzd3AChrzKIwlwpAC9uoMeXy/mhFaENl+22cLQpHvadQglrEE3ye8SZg0RZm7X5Vv0Mt+pTYEHAwQIZAmkCfGziHgA8GKbEZd8pfjeK2BESXRKBlTofRWYMAQ94TDHTiPS0Nklx4CYhYcqZNo4A6VUkQBtTvwNScFRP8/GIzp8ADm3HTW+Jwy3gcjT0PBHzQdnLCa8Coy4DO0Vw4+u7D0H4TV6sPIaJgI1LrL8DD70USawkwNVCqql9efwXEXclPsNWKhw/PYt26T7Fixaf48sv3sWbtV7gctx/vvPtrfP7J+/j4gyWwW7Ph9ZFlV57rEk9PGYAEAdHr3izuDbzu+pabOFe2FxmdN5Tun5UeN6sR1TCPP8Tp8nCkOM5jmuDIE2DQ0YDHA2VIvHkAxcUJOHFiKzZt+gx5efGobyjG19+9g6gTmzA0oIffw6qFCSXdN3DedBCVHbcxI2meJgGnlBqwSY/uQATgB
G1OTxXGnWUY85Rh1K3DqKtifoy5DXI9xzzlmPDqMek1Ytyjx5S/FNOzOkz5KjDhNmLcVY0JTyVmZiukr2bCrZfjpKdC0ivpOMTnYMbPoTIBpryVmPIxPdcs/QupDSdx2LBTyBM2tPIe8QSqHDIHz1kxNVeNhuk8XDQcQkThTuQ3XELrVB6G56oEOBLQ0xFFQPg0m8+5CSDQJNESIFu4iZZNpUmsS/eY9iCybDsuGCNx3XoYJw3hWFO4AWfsx5SUSTTVtDjkZpzPt7I5FSkUfxerGz4TBn1G9HkrkNl0GQeLd+F63TkMSeNvLeAyI9qwH3frzqB34l8CvtVzp9Y97VkMbGYCVQSt+ZRzq2vOKna0ibpDOGXYD/1AyktVPfhccyNN2cmd9isyL7c/yZK1T9vQv1T15LmgPPDeOVfIJkxtSmVOk7mAG39lH8nqG/ubHs9UIanuHLZlr8PG4nBsrLqNb/RVeCO/Aa/ktf7IInCxQwlfP6+BktISTVbC4Jvgsbhhkq8/KHTMDwbfLB4SfFP01KXkD4Hz4IZJnr9U8M1znEmCg28WB94w6Gbx+NcE3wTbAGrnz3UmMTTP+2+/TPDNc51JgoJvNBtA7bg4CGeDBOEsBNrPCsChH7dmDagdf+RWEgi50cJutKMWgqMdNVcS7ai5lWjHXRbVRBnMZmuMNlltNToQZmlHuLXjRyPC1olIe5cC31kV+A+/+MWftuGypDcRpT0JKBpMQEzzUWwuXi0yiS6fThpwGKBBRvZF4FseYL8F4/5K2CcykNdzA5ltV7Dj/nIkN5zBGEt+dMwQ5vL54Ns4eBuXrAeR23pFge9Zgm8yH0r3SzZw8YT4s0xOz5m8ODHSEUE1BxK0ULPMxZ6Mhw0dEwXIbogVSYZ9PB2jcyaMzFXBMfwQt+2nEZmzCSH5G3Hv0WVMzFWLLZtieskAKnD58uwGk/kU+H7nnb9B/5M8VBju4PjRnbhwPhxr1/wBu0KX4vFAgSxYtKubcXHxrUbHTDEyGi/jVP4enMjehQTDCSSXncX6sD8i6e5JjAwb8aSvFGl3TuD82VAU65LgmW3B4HAV1kR8gi07PkJLTZp4Pbe68pHUeBYXqo7B62tEZ4cOX3z2Cl579f/Bxx+9iosX9+LEqa1Ys/5DhIWvxauv/S0iT6xEz4geUz4bprxVmPLqMeUtx6RHjwl3JQYnS1A3kgV7zz2YO29LkFDdSDqGnDpZxMnUsIFPQKImsRAgroHxp7ILanjZQEc2Sv8oEXEVh1DcnijMv9K9VsPqykFG+1VcqziEi7mhuJgXiviqKKT1XcOdgWvYnL0WVR0JmHRXBAKm2OPA+0AtYuLcIywWm+S4yaEFHS0PCY6t4pBDlxyGFREw87smQ2jsuI0LxRGwzeQI0KOlpAZUhM0LsF0CtheV3We9Rgnv4L8juKfURt6PyAAIfGgnaRUvbmNzAq4V7EW/Xw8/S+NssqP8SGQxZJ75PFECRAbWIRaQ/lmbWFZS9w0PKz30LFabYjm6yJIxZMeC/KY4JFYdhaEzGV0zpXg0ng9z711xmYk3HcNhfQR2lzN0YyPC8tbjfm20MPIEaYr5VuBb9Mn+Svh8FZJO6HRTUtMIhyMNp07twIYNX+Hrr9/Hd9+/h52hS7F587d4eO8mPnj3FRQV3sDUNDdpjJan041RSU7Ek1r9fAXEeR/cxFXzMRSP3seEXD9+PkqXTLA8voP4ykPIb47HDBlit7LrZHDTk5EKJN7cj8Kiazh7NgQR4cuQnx+PzIwYvP7b/47cgquYcdbA52KzrAF5fTew1xiGm7Zo1I8WoHMkB+1PMtE2mIGO4Wz0jOcLCOR5XdddGJtvoKQ3HvntMch9dAm5j2JR1BmPku7rKOq6iuLuqyjvT4R+4C5KeuJRPnAVxpFEGIZuobQnGaXdt0UmVjF4A+WdidB33ERFZ5Iw71U9t2HsuwvT4xSY++/JqBy8jar+FFQ9foiyvrvIao3Bhao9CDdslwbVWTQG7kdVjZMNJoH1rB3jMKN3rgrJlmjsy9iMu46zaB/NlY07Aft8Q6erWjYX4uHuU/esVIO40WM/w6wJtt472FmyDbld8ehyF2JwTg/zRDrO1Z9ArP04XB7DvMxQApAC7DrvWc7J0hCPGrhmragYvIf0jiu4ZNiH8xX7YRhOhYtpjLyPUYszVfuRUh+NvpdkvtWzpxy2hLFnxYcb0wALLRUWaTxnJdEE15wdbWM5OF+2D0d1YSjrvgXmXLx4XVI/l5tht9+Cux1xAr4fDWUGseNPgf+Lf97z/y3fO68jfxc363SqIuDnhlBVKejtXYthZw1S609gS/oXCDEcRIg5A18XlOOdTCOWFFjx67zqefD9JuUkIimpAQF3cCCOBr41Hbd21IA3j8HA+7kJlAHwvRh087UWhKMBbu24GHjztRZ6w+NCdnth8I1mBbggAKesERoADw6+WVr2DOCtb3x24uRPpE5qYDv4+NLBN4tSJ/8twTfBWu4fyUkWJU5qLLcE4QQnTwaB78XhN8qbm3aAbZIwqYFu7aiBbh41wK0dNcCtHenPHUqP7kDjZPDxKfBWTLcGvgm2g0G4Br73kfn+c4DvvM441Qw4cQfxj04homQLEmzHZUKR5qtZMlF8qF9U1uK/Y5KgTSbpYVSjd7oMJ4p2IrUjFuPiBU1W6mnZ8VnMt/nJXcRYD0ocNXf+nCioRSTolRK1lC4Xs4PPn3T+LRMW/6+ywiLAIxAh8FchCnPU8s7SScAqjC2BHkvu0mgnPsAOzKAW1tFsnK84gIuGg2h05mNS5BKBsq/4cauAFgFeL1FidLvN0OsT8d47f4PBJwXo7zPCbi1Eb48Zly5EInTXUrS0pkrJemKyTCwIW8fycMt6BkfyQ8W32DGRh47RMlxPPoT//Bf/qwTX9PVWw2y8h2Vfv4nf/vqvcTn2MGZmWlGsu4n/8F//N7zy2n+FueQGpp1mWJ88xHnDXlzSR4nW9f690/jDh7/C9fiz2LDua1y6uB/3HpxHxL7V+PKrN/H3//yX+P2uV5DSGI+y/hSU991CWW8idL0JKO6+jvy2q7htPYU1WWuxNz9EnHW2523GroItyGy+LL0F3PxQZy3JcVLSZkle2WDKYslScIAJVmE29HG2omE4Czeqj+Ga8RiGJaLcJBugmYBrwwiq0DSRjcyGSzhbEoZ9Bduxs2Azfnj4Pay9yZjxVcIjvutK76wBX7kPZq1wu22YmrJidMyIySlaNTbC7anF5KQNY2NmTE1ZVBPjrFl+r74jCcfyduJB51XYxzPR6SzGI1cxhlxl4k084SyHNiad5Zh06THlrsC0x4CpmTKRwXAz5fRUwjlVJg2yM06DuJI4vWY4vSaMeCtQ2nodccX70Okvg5MbRT+HGWQ9J3z06WXCqh1TszbxqufrMZ8JY4yhnqXDjwIvDNpw0RqNgVJ0Z/FWos9TKvr5E7o9uFR9GHG1p3GkPBy7szZhX0EImMj6oC0O5pkcZD5OwBlzJAo7rolPOGUnovuX70qbVxRzTb09A2vcbjuSbx3C4cPrcPFC
BI4c3oYNGz7Btu2fYPu2b5CVkYT33v4l0tOiMT7OpkbOPZwTAoCRFQhuOPi75pg0a0H5o0TEVEUhp+8WZuYc0iDLDRFBVcPgfSTqD6K4Ph7uWbvo38WG0GdBT2+BOM58/fWv8fln/xNRUatQbbyLmEu78MUn/4Tevgr4fPT7ppOOCSndcfg+dyU23l6Oo/m7cbh4t9j20RHptGE/zlQewAn9Xhwu2Y0D+SHYm7cNYSXrEFq0GrsKVmF34VqElWyQ4878NYjUbcOhylAcqtyLffoQhJWuw+7iNQgv3YIDFeE4ZIhEaOEmhOStRHjRDkQUhSCco1Ad+Xr+z4pCcCh3C45mbcfxrFCcyAnF8ewt2P1wGTblrBDwPDfXqJxX6L7iYQCVHbM+B7wusr4NmJ1rxpCrGg/rryDswQbE6Q6iZSgLM35TIKOAGxpWuHhtuemjRamKV6dFH5lwt68K123HsV+3C00DaaIz5/fUMVGEG3VncaUqCs5pnWiS/f5KZT0pBBA31XYVROVSzcLdrmJE5e7Aqpt/xM57K3DXdgYDDPqaqwM3cvwej1VE4G79WTx+SbcTzid0B9KAN+8tyivVn1EmxH4CkzQkk4RhaA6dhbblbUZu702M+lRewYvWHu338OctAN+DGcreUdZdbn5/jvWNP4fBcRZ4pCrANU1VFViJ5TPyxG1EUXsK1t/6COtL12KLIwVfFZnxbloD3siqxeuFJrxeXI238hwyNIAdfNRA9uJjMOjWzjXwvZjxFnlJkUqeXAy6PyqqnQfdi8H3s0C3ZhP4LMD9U8E3mivJ4qPmt734uMB7e5FFoOZSEnwMBtrB58HBNwti3p8TfLMYbGuvGXijjcWpkxq7HXwMDr4JBt704NYGXUo2U1YSGBqzHXzcGhR4o8lJeAy2CdxhUsE3kjT5vAAcc5t4cmtgWzsSdGtjMfheCLoV8x1GxjvAfBNsywiw4Dzf6+jG/ppebMn8czDfAwkoHEzA7bZzOFi8DQdzNqO8K1FkFRKHzRS3lwDf1OEpzZhid/kwj0yV4WzeTtxuPodBdzFmJUTlp8G3ZeQeYmyHkN0cK2CKAT4s61EbKxpCKZMTBP8cE9KLfwZZf07g88y3VlplxDTLdSJ3UIsUXFbAQyDQAK+T/ts2YdnK22/jYul+3K45JzZY4qE6zcXDITIUbljYvPoyn2l6uhLZ2efwz//0n4QZKi25jtUrP0Zk+HIsX/YW9oQuxZNRncgEKB0gcLrpOIvw7G1Ib4jFgEsnuu8pbw3am/Kx8tv3cDXmEIYGTBgZsSL+6mHs3LESGenX0dtrxq5dP+DdT1/Fxx//BrXmDAEa2XWxOFywTUIjhofLcfLERnz2h1dRkJeCdWu+wrp1n+FQ1Hok3z0NY3Uulnz8Cv6XVf87luauxhbdNmwt2IAt+euwpWg9tpZuwPbSjdhRuBG3W2JgHngA+3QOkh5fQ4QpAgkNZzFFmzP6i6NK3AIY9qENugewmYkWXhw8p1c6WboJmNDvLceD+os4WRKGuolsWYDEMo/gTKQWStdKy79hVKLTr4Np7CF0A8kYcBaqJMs5yoyowSbAI1usGHAC8aGhStxKikLYnqVIuLEf/Y8r0dNdjlMnNyJkx2cwGG5heooNo3ZMz1WjtCMRex6ux56szYjK2IqTOTsRlReCQwU7EVUUKmDtmC4cp/R7EW08hIuWo7jsOCkAN7H1Im50XERi5yU86IzDvdZYpPRew522yxKhfr8zHmmd1/CgIw7HqvZhc9Z63Oy9ipyhZJQM3EHJ4B3kPk5CeXcy7F0pqH+cKlH3rcPZaBxIR9NgBh6RqR3LQ4erCM2+QpjdWSiffoiy8bvI6YzD7fpTiLbsw7qs5fgk+TMsS1+Bw9X7kdx0AabB+xj26cUhhYCCoNg0eA/n7QcEfHu5WWLAi1QE+Dxr9zyfZwXWyMy73FZkZZ3GH//4S7z22/9bQmUePDyDOylR+M2v/y+8/upfYfPGz/CoLTfA3msWoWxipvaf4KdGQD7PqYc3997DdetJ5HXdFFtMvjc3NzOzFWieyUZi1WFk1VwUNxDaZlIywefY7bHBYrmFCxe2IjZ2JxyOhxgdNaFCfx364hvw+x/Jv2EFwQkj7vfFIcwajpvN51HQl4TMrutIa7+K+y2xuF1/HjcdZ5BUcxYPH11B+dhDNKEU/rkKzAZCgthYzLROj6cMTFNlFWVsrERsFp1uIwaHczA+VSTMssfjEDeUsckyjE0VYsyjx5i3AqPucgy7yvBkphSDU8XonyjE4/ECGY/ceWiaykXzRCFancVomMxARstZnKigV7gN8DUBvnrMupn22SDhSgxYmvU3wzVjg8dFwPYIPrSgfDgTYcW7sL8wBNUDKdLYS+DtdJYpeRivoeiJqSU2i8MMn53hqUKEl+xAYsNZ9E+UA16mo9rR8SQPCY6zuG46HghO4j1kggequsHNkszDfJ+im3agZaYAa5K+w/LUZbjRGo02ZyH8ssFjHwC/RweOGiKQXHsKvRP5Sis+f9/xO+a9pw1tbVKMtJqPOZ8T8NPFiBppPstKlkT5y4TTgMTKo/j69tfIHkrGmLg3ae/vReuL+j2817gx5nNMOWDbkywlEfk51zjpUVGGAlI9pDsRk0rFjcmMEX8lynsTEfLgG+zSh2Oj/S4+LjJgSUYjlmS3YEleLX5XaMdbusYFkhIC78VAm6+1RsrghkntfL5ZUmQllJao8UyLwMIafFxYA43VDj5qYHuxReCzXEuCwbfGbi8+Lgba2uulC4JvGhAMvINBt3YeDLKDzzWHEh7//xh88zzQrYHvZ0lKCLx/JCsJAuDBoFs7p45bY7SDjxrA5pGSEs2PW3MnCWa3RV5C3bal/UdjAbtt7UCktQN7CbQXjX32Lux3dONAAHz/x7/8E1sN6oeTkN53WfTJe3M340HdeYnZlomHJWdJM+Mk+NODDh3KS5oMibIZG5suw5XCcFxznECvi17Rymf06WQXDKK5ENtET6wx33xN8E2N6Z8LfGtWWNLIRLAW0O9Cwlnoi0pJAV0HGAbkAFzUH9MKzyESBLffiseTOtyzRONk0R48mi4UZmnOSQcMZQXmDrAo/LwvAuBulxH1tQ8QezFEFoKhgQocP7wJK5ctQdiupSguuAy6yUhD21wNHnlKEJG7HXebY9HLWGN/jbhnTLuNGOouxobv38b5UzvQ/7gMfn8T0tLOIzJiFW7dPIYy3Q388O37iLt+Fj/88BHqLVlwemx4WBuN6KoI0YB6PDWorU3FJx//Em/+/n9g5YqPkZx8Emeit+HTL/4Jn335W/xqyV9jdeKXKBpMQuNYOh49SUfbSAbaJrLQMZ2NPlceBp35YBS7x2eDZ9YOy3gWYqxH8dARLWEgvKc8foPSkZL5pjyAzCWPAXmG0s9TGlQDp6avnbPA1HkbF0siUdQcrxbfQPyzfIdi+6XSNiUZdM4Kp1MPl2h02WSs5C6UvKhnQPnBU+7BRTk+PhJrVr+P77//PTZu+AMOH96AS5fC8OEHf49Nmz7B0qW/hc16M/A+bRh
wlqOi/y5KBm6jtDMRxR0JyO66LgvvzdZLSGy5iOuN5xBffxZXak4hxnYcF8xHcN50GMfM+7EydxW+T1+GwxVhOGPYi8OWAzhSEYmzur04bNiLo3IkJs8AACAASURBVBWROKaPRFjRDmzKXIf9+jDsLwhBVP5OHMzdjoNFu7CvdAciS7cionSLVLp4vle3DZGlWySOO6JkMw4U70BUcSj2F+9ERMFWROZtwYHsjTheHIJLtkOIazuNnCdJaHIWotddhiFXqcS6s8lRvgdKs3zVqOxVDZc5LbFwcRMtNneBBjqCC9GZky01YlY83glI6sCQIYf9LkpL49DQkInpaTsGh8qRm3sRaakX0N5aIAw5QZHaDFFPripV6rtxwO+sFMkPN2rlHcm4bIhCXss1YIKBOnaRJTHLIOPRFVwsj5Qmb3H8YJ8L2UHKiubq4fHUY3LCjPExo/ih+301mJmuhttFgNegXGLImM+akNl+BWdsh2AfTRUJHisFrFrMeA2Y9lZh2lMplQxWM5yUy/CZJ8vsJ1BV0inR5lKe46/D+Eg1UpKisPy7t7Fl44dISoyEyZiEW4kHsXHdRzgQuQINdekibxKtNlnUObNsutkQydTLmVmjjOlZBl9VwuWlTlv1goz7KlDeHYfjuu1wuRlQVQvMtqKjsxB3HxzFtaRImB0PMOWsh702DTcTDqCk8CrGJ60Ym63Fg/ZELM9YgR9Sv0VRfwL80rSpmkn5fAhYJsDzkl21ix2guf0WtuduQtXIQ+X9L03YtWgdzcMVxynEVB6C182KAhlsg4RfCTnBuZfXSSodSnr1yFWI0KxNWF+0Aff64zFEr3sSRmTsUS/yimOGSNxpPo++l4qX5zrH36EdVb8RXz+VnnCetmNguhQ3a85i9f3l0A+lYNzH+YtuS+xNCV7bng/CpYmazLO3CrdaY3CqYh86Rhgvr/o/eHzRmvBSf8/5ijIc5jlwc8Jm/CkD4FHe6MwpOFWyE2tyvkV4Yyo+01Xjtex6/K6gHUvyWrAktx5LCurwuwIH3smrkaEB7OCjBrAXHzXAHXz8KdCtuZUEg23tXAPd2lED38F+3Nq55sv9Lw6+kWbJJvzIpWQRq601Ti4A2UGhNwzAWexIolxJmhDMbGvnWpQ7jz9lDai5lSxmtbVo9wWs9qLUyWCQrZ1vCgq+WexMonlxs2lya9AIZra18+3VbcoWULMHfIZTyU5Tu3hvM/xGC8AJBtra+WKwLYB7Xsv9VFKiAfBg0K2dRwSBb4JtDoJw7fxATQ8O1vZiW5YBf3LwXT50HXc6ohFWuAkXqw+hzVkMNm5J2ZYAM7AgvBh805NZWeIpk34bJiZLcVN/GGcNe9HNMqIwDBq7oLEN2sSkwHe1pvluiVOMAxyK+eZCKIs12Y+Xm9healJ6Absgmm+WHAOTrzTbySJAkKccUehYwmtGto1ss8/D90gwQCauBp5ZKwwdtwR8Z9THYIKbCUpWPLSNo55SOQYotk27Hs8+ElRPTVaj/VEWfLM6+Hz16OuugNl4Cw11GZgUYFGNWZcZ47NkU67gaFkEqgfuY8ZdDbLzc242vDkw1F+EZV/9CufPbMbggA5enwPpGWdw4MBKxMbuQsylrXj9V7/A2rVf4K9+8R+RknQco6PlyG6NRaztENqmqC2vk2AOg+EmUu6dRpWRftSVaO3IQVZ+NG7ePoLktGiEZ69FjTMDk7PUcqquerpQuMkqB9INVeCMHT44UDOZizjrcSSbj4u/OJvnxucqwFAObi7ozy4e7QHNojg8sFxM5tNvwrSkyNngQQ1qh7Nw2XIM162nxGpN2NGAjzXdMJQOkoscZQp2zDmNspGiHliaEQPWe9qCLAtrAHzv2vUVoqI2Ij09FufPhyEk5BtkZcXh4cNLuHfvHD744B9g1McBjDl3s9HOhmmfEZMcnkpp6Oz3lInXOvXtQy4dBp06DMxwlOLxdAkeTxWhb7IQdZOZiDEfwtnyMNgH7qJrJAuOyUw0jGSgozcNtSMZsI2mwzaWDutYBswj6bCMponu1zrwAKa+FFifpILVJeOTW6joT0BZ7zU5WsdSUDWUhNLuq9D1xKO0OxEl3cko67+Lsv47qOi9DWPPbdQPp6F1Kgfd7mKMeQ3iRU7AzWvIa0ktqebhThBuYcKl4xCK2q+pJFXq0gk2KU2QhjxKEahdD7jCiF7bBo/bhKnpKkxNMXCIVqO1YllK3+uRERM8rhrMzdLJR/mtK6tHWkwGyun0WJfGaDtax/KRWHkM50sjYRx8iDlPjXizE3hXjDzAyZJQxFUeQN2TDNE2+8WJSLHnfvEkr8UcveWZ9Om1y3vjc81AKX6ffCYJbLgRzG29ikvGA2h7kiZzBj+j3JsC5hRo4yabm0k1P7GS0gCfzwFq7imrkDFbh1l/KzrbyxCy7Wts27oKt26dg9mSjlvJJ7Br1zJs3vwNdu9ehajD6zGLVkV6BDairDz8aMgcxk0FN6yUCDow4i1DYUcMmOhJt6q52SaMTtpx6vxWrN70Hr5d8wZ2H/gB15OPYMXad7B2/Yf4/PNfIS0tGgMjVSjpuY/dup1YnrkMB4q2IKs2GmNzlSr9ViMTxB2ELhoOPHFWILn6BA4V7ULnVDFm5mzwMpkXtWieLMCVhrO4VH0EPj8desgyK4KC9xcrjXMMpuG8y5/pq8bIrBHJzTFYmb4chwp3oKHvgVQ8+P24pMHXihP6CNxpu4g+98skXCqwy2ecmzr2VjxdR9T3RyKob6JQwom2Z29CWu8NTNANzK/scCU19QXrivYzBXwzHMhTiaSWiyJL6hzLk7WOc9nLgnjt5z33yHWc8z/XdiadsslfXLAsaKQTUOl+bEpdjx01Mfj/mHvvr7iyNFvwj5i1Zq2Z1+PW6nmvu6v7VVd1V1eXycqqrKyq9E4mUyblvTdIyILwEpKQBEIWK4MMIIfwniAwEYENgsAID0KA8C78nrW/GxcC5Oh56n79w7duRABBmHvP2Wef/e29JOcRPsmuxR9zm/GHrEb8JbMFn2e14ZOsOvwpq2yG1X4b6P4qqw5S2SZ87Vk5JnyTU49vckz4Nqdupr7LYRDObBiOGoSzONcIKTZNzisVYL/uqIJu9agG3qhHldX2PM4H2ur9ucE3c1MnPUG3enu+97YKvud6cM+G3mwU3farYNsz9GaOD/e/NfhG3EmaoQJt9agy2jzSi/vNoFsJxKE3twqyPY8qoy3Ht4Bu1SqQ4FsF2J7H+WD7qKENM1XRjmPz6nhFG3wqW2bKt7IFUlWt8HXXiao2+FW2w7+qAwHVnXLkbfV+YE0Xgmp7/ueA79KX8YhvCsX+jK2Irj4LarWpyeREooBLDkbvLrvaxKROrs5qjI0X4kndZfhneqFhNAu2BTDfJc9vg4mQmc/IUrJpTnHqkI73/wngW3R5wkJwsuTkSP2fMiirrAR14dTEWiSxTGli4+86pDFT+bvWkUzcqj6Hi/m+KOlLxrBLsU4UgE7bMwLwhQ7YripYrXwthbCyQc1mhnWaW8IMO2HjXSkctir0O0oRZ4pASM5h1P
alio0UHQLIPrIZqfN5Flav+hDh4fvQ16eB3WHEg4dnxUc8Kuo4CgquI9RvG454r8NP/uZ/we14fwwNavDQHIkIvR+eTxWIFRolGRZLBfr6tRifpKawUj6LoVENOp7nwNyVBd/snTBNp2Lawaa4Kris1dKkyEnASY2krVQBb44KkTx12jVINF/C6ZxDuFoWiHhjGG62RuJ+3UUk1UUiuf4yHrgr2XRJHpPHTZfwsDIcydUXkVR7CUnGy4iqOIPD2ftFk1lty1WkCiDIVyZ22ujRQYSTp7Cdwg5xolXAt+oIMuc6cINvP7+N8PZei4jwYzh8eD0Oea9FT08l6k352LF9CVYs/xg1lXeBab2kgZJp4oJWSRStkqNi/6ekmb4CmGZAlAED0CDBeBbxumC8tBYI0zoMnaQ0OqbKMObQY5xsp1MnzCfT/qbodMEwJ6dO9Nq8PyXOF4oLxhjZWHv5jBsG749RF24rx7BFhxErFwp6jNsrRF9uZVOm+JfTBtEgqZCM9CaY5mQuxdvSz1AFXX+SxMvntMRI8A0bQsk6s4dDGGr5W/dCnGCHDjj8PsSOsEY86XmuEtAroJ5WlfWwWtzflbNMYcwdBP11osPluEGZEpvzuqYK8bjuilx3SXWX8cyiQb+9TOQKGR03EJZ/GJcKjqG0/RaGrQwXUcA09cq0TuT1ZLPo4bBXw2FnT4dyrkjgFs9V6sv5HfHadxiQ/SwW18oD0frioeIkQwCogm0BjASQblJDFu8M+qlzu3sobCfBOpN07Q4zmppysXbNZ/j++69x+cpJaMse4ly4N7wPr8O1qFO4fDUIa9Z/ApurUcgKjjuzpX7Gs0emyCo7A0ZYCYbtRcjuvIoThfuE5HC4zGjrKsa2Xd/Cx38jrsf4I+zCQfgGbMXKNX/B07QYrFz1Z1yNOIK2jgLoXmYgouokwsqOIrEhTNjTa2XB6J3WyJionNtKPwwX201j2TiZfRhJVeEYmWY/heKSQilY0zjBdzjCS4Jl/OBjlJBwQSVjrYB45fOW90DiAjXQ9ifjSPZeSVVufP5QcUcBLRNpx1iDCyV+uNF4Ae2TC3c7UcZ5JexKAcAkVxS2vXc8D8mmSPhke0kj/UtbsfQGCevtdhTijukbgfC8cZ4LjGlbKW43XpK+AIJvhUzgcyiLgYU+1xt/j+eow4AJul65SvDcWSwNtkzwfVB7EcezD2BfaSi2VBbh84wC/D69En/IqcOfs+vxRWYTvsxoxOdZtfgib9YicD67zfvCbHuCbY/bCugm8FZKAHdOHVSg7XlUPLhNrwDu10pK8kz4Ps+EH/JZsyE4P3iE4CzPV0JvGHwzE35TaJ4JvVl48I0CvlWgrR7nSEr+Mwff0If7dcE3Zc+wl+WOe58fgsPkyf3lLfDyrPkhOB7AWwXa6tEzDOeQ2APOAvD5oFu9T+A9H3Dz/vHKDimF1XYD7soWnKhqfU3NBd+eAJy3g2q7Z8D3X/1Hx8uX9sXimjEAB3N24oYpXCyaQN9cGdjoJayY8M8BHq8B45KiZtFjerwUlslyiUxm4pjhZQoOPdqBgo7bGLVq3tlwqem6gUi9H7Jb4oQho5aSqWzc5hPrKmEj39OANG8QfP3ApWyRky3i/+dEIHZhHMw4SQrjRx9knURDy+vkwoUDNQGdMLMGjNhLoe1JEi/hCyUBMI5nQhIi3bo7TlL8DF//Gjwf57YhvxMmfmbL/3TaqG9VJhoFdGgkCGPcVQHtiyT4pu5DoukaOibyYROgX4Ux2oz1FWDHviWIiQtGfz8DFxqQmRGF4KDdSEoOx9BQBZpNmUh/chXr1/4J2uKbmJwowwNzBC4a/PF8LE/iyW2uQpkkCLppf0d22uYqhdVZikmrDs0v0nEwaxuqp1IkmAi2WrhsRgH7TCcUr2vqdAnMxO+6ApOuCphfPkW6+QrumsIRV38e0eazuFR5CtdqzyKm/gJizeGihb5cGSqyjIuGEOXnlacRpT+FyPJgROiDwcf98g9j65MtyBpKhMPt/MFQG+7WSPG2MJ6KCwi/CzV1U8A3J1PZcVHOPdmFQTUePYrEgYNrsXETGcGP4H1oHSan2tHdXYW9+1Zi5cq/oLzmrnzX9J5m4iWbtMjayw6RnXZ4BFv8f27wJY4EChhXH+Ox31GA29WhuKkPwZhN455M6R7BHQ09phzuBRwBrYBbup/wdXMXRmELZUFNBxZHHRx2I+y2OvT1FqJCfxNtLWmYGK/A2Gg5ejqz0Vz/CO1NKRgdKhH/be6WOGEUEOq0VckCjiFRfD8C+LjzRQbY7SlOV5SS3ruIMJxAVtN1RWIhwJtNl27pCT9T0ckqCx3a9XExxgUun4dHue0OAiEYVhhvNrcycIkuJwS+ZMJV8K04IzEIK7clXiwX4yvPoehFMvK77iG+7iKuGc8hIO8govRBKO28icEpaq/ZaEemmq9ND7u1RD47ca3heeL2YudYSBDL10p2luc7zyF6m6c2R+GSPhDP+gm+qREme6mAR7k22ahN7bLbJlMsKfn9yHOy4VQJIhOnEKcRba152LbpS6xa8Q12bV+Gq1d8cPLkdhzwWo7jRzfC++BqrF//sVxLHLOVBc38o7JDJ2OLBFNRGlQDq6sKQ/YiFHRcQ0D+XvFrd7rMqDdlY9kPH2LD2k9xJmQvIs8dxdmTXti2ZTF6uquxa89yXDhzEC0tBageykG0MUx6AbrYYGsKh3fKTkTXRaB9LAdMmqXcxGmvxuR0GXQvH+HA012SHmvhucNzXbzQq9A2kYc4cyTO5PlibILhayY4YHL33Cjvib9LYCtgXAKB9NC03kKY1geP6i9J8yOJIBuMsMAITBlExhJVfxbNYxlzFyYy3roXfu5dWfHCdks9+D9E4y33SaoYMDlZjKfPouBbfBQ3q8+hz6qRnQR5XU66ltBBxJ2A+c7xnNeKMp5Y7OW43XRZwHfbULr8Ly7q3j0feM4Nb77N1zdoLUZBy03cNobjXsMl5HXeRWpzrMjU9hYfxD5znADhT7Or8FF2DT7ONuPTTCO+zDDg60ydWP59SZ9u8eSe9eNWQfgcSYkH6CbzrQJuz+ObwPec8JsFsN0/EHjnmeaE3zAIxzMAZwZwLzD45nWhN3xsDYNvPOs1ITgq060e5zLe7zv4pundwTceoTcMv/FkuNXbavDNDOgWltsjdZJA2xN4l79BXqJrg/e8OjQvDGch4PvoPJZbBeEq8ObRp7JdYbrdjPfrwLdfVdsM202w/TrwHWxUmO/3Dr6zu6KR0x2D3OexyO2Jlds5ndHI5dZydyyKe6NwRn8IftoDSG/n9hkna4JvDmAEBvRo5WMcKJQOcHpvDw5qpemovPwmpqer8aIvD8UFV/Ho/inkZ19HbzdlJmY8d+gQkOmF2Noz6BzPUrYS3QOudIxz0FPBKiqR1haNSH2gWGYprghsZCIo4QTFSYOT9PsblN49uHmAbwHciq6UQFzAuAAlZXITtkRSP7mNzqYb6jYVTSpBae+EFikNMfDL9sZl/SnUDqVieEorXrUEYGRi57wesqtSfH53yfvn9iE19
nnK90JtvptFI0B0OkskwZGNh332ElwpD0Goxhd3ayKg607Cs4k8PHfp8HJCj0dPzkOvu4vxsXK4bDV4Vp+C/Owo1BqfwOVqhtNmwoveQmRmX8TAy2IJoXnYEIHLlcHoHcsRAOhwcVGlMrfKVimTS63Ul9rLYe59jN2ZW6AffYgpYTbrJCLeLg1RRpkQ5Xu18X1Ui2OMbOHaq9A9kI+8qjgkaS5A++w2DH1PkVEfjzvaCDytjUdpzxNUDKRC9+IhdC8ewdD/FIbhp6jqewJd7yOUyWNPkNFyEyFFvoitPQ87ZSWMqScQJqgStpPsHCcvLvJqBCwTAAnglvONn/8sE6WA7yo8Tb2Ms2EHcdx3EzZt/hp79y1DsTYZxrpsPHpyFd8t+jUKdbHiJiDgToApG7aUBRq/N37vPJc8F7g8l9SJWWUy++wFuFkTiluGU5iwFgtLbJUo+nK4LB6TvpwrfD43wOV7mrN9rqQl2iw1eN5diNhoHwSc2IAzoTth0CWjID8eUdeOIOLcTiTdDUKj+bHSREw2mvHvDu7qkOVWpADK9ctzla+ZDa/KwpTvsfyFAr5T6y/P6ptFdkJ5hXtM4eKB57X7/Kevu2iuZ5yNeF3wvNLJ+UZZhuyKEXyLfSm/Gy7iGHTjDq9x6aS/4FKpH86W+iC2Phy36yJxrTgY4QVBiK48D0Z/m8fSMGanzSHHITrqMNzGLRWRcYosfq2w6PK4yEIUBpTngJUMOd0vUIVxqxYPzZdwSR+ElqE0cT9REkM5bhJouvsHQFKDwJ1N2eVyzstn5yT4LlYAuCywazHQX4xLEfuQfO8K/H02wff4KlyOPIDQkB3YsvFLrPjhd9i+7c/yvuUakoUMFzTquetxlHGU0hIuaqpgcVZh2KZBYfs1BObuFa24y14Pg+4Bvl/yG6z58WPs2/U9vHb/gACfrdi9Yyle9NZi994VOB3qhZZnhTAN5CC6Jgz3GyJlYc9E1Qemq/DPOYRbxnDUjqWJxSqv58FRRtlfx4nsQ+ibZlAUvy9+NhzbzeidKsWDZ9cQWHhIdixaJwrRMJ6DF9Na0ajLIpKLe+5QOejGQteaSmTXXEO0/qT4+BPoWympcZlgp1xp0oCoshBEm8LQNJbpdsBRnLk4p8wCXO4YcIdFOcdl8crIePrpy+dWKddcYWucyFiuVoSibSRL5iWxPpTrt1rGO7GmVSU3bwHg/N9y3tkM4hYT0xQpDZeNfU9EFsKf8//PmRNe83y8jmQRxzAskmX8G84JLLEVrEOvtQSP2i/CO30LVj9chdUpu7Al4zg2Z+zH8pTVWJa7FWv0cfg8vQGfZhvxSW4dPsk247OManyZUYZvsssEfH+RacaX2QzEUcD3Vzlku9UygYmTkjqZU4dvPEvCcBiIM1uLcuuwKE8pRb9twuJ8pVSXEvHnznf7dOfXSRDO0gLTzJGNlMvyTVjGY4EJy6UIvOuxvFApSktWzisJwClqwI8etarQDLUULXcDVhfNlhp8s04AN5MnqeeeG4LDQJwNmobZKqa8hPHus7VJ24TNjHf3KCUEpwlbGPXurq1q6I37+Obgm4VIStTwGzqWkOH2qJngm/myEtoCzq0Doukm8KZjSSsOepahFSIpoazEswxtOPKWUkJvGIajFMNvWNRtk92erXb40pt7pthMqTDfrwXe1W3wr26XCqhuR2BNhxRvB/B2bQeCjV0IqeuGV5oWf/V3//D+fL4JtrM6o5DVFY3c3jjk9sbPgO+Czhhoe26g6EUUTmh344z+GGpePhKmRwYet/6V0dR20dlx8lDiaCcnKlBWmoxN67/Fnt0r0N9fjaLca/A5tALrV3+BQ16b8PRhFJzOZxhzVsk2bIjBB6aRNBmgORAzZGLSxQh2A1xkPd3a5xsNkYjUh6C2M1HR9YG6Xf5vBZjOyGFeMwi9a5D69/w5ZTHinQo360Hw7W5wIesooMBejYHpUjxqjIJ3yi6E64KQ03kbnZN5sNCCUMCeCrg9jnNYSwIN98/e8DiZZAeBpSSrVaJmOBUJ1RcQlncM4UUnkFAXgfzuO2h/mYP+SQ1Gpwthc5bCYtXCMlUM65RWvHvhNIIhMBIO4gaIjEhPqgvHZX0wBu1FEpuugiYBYbJ4U4Aj5TgM+Kh/8UjcTcr7E0UCQc0vNfJKSIx70mOCpV2xhrPZGH1uwOhoBQryb8PPdyt8fTfgcco5vOgrwY2bQQgM2IHISF/odQ9hsdSJ9pzx23A1S5Ouy1kr1mic1GmR1jdZiqSmaPik7sPIZIGcTwTDM9IT98SrAF1l0bSQRd6lS14IDd2J8xe8EBy8FSf81st9spPxNwLxww+/QUVF/DsnUOUznAX3yrmqSggUsNBrK0QMZSfVZzBpJ1BT0hxFY6oyrAu6LhTmcHKiHIWFMfj261/i2NHNWLzoQ1yK9IOf/3Zs3vIVDh9eidsJQWhsTBF5EYGCAoq5W0C2l17GahM1wbMKoGfP0eqXD3CpOgCPasPd4LvGDQqozybjT10v3/e7QcY7f4eaaaviIMSxKqn2NEJKDiCyPhhnKvzE3u+hIRKN7enoGS9B51Qhhu2lmHboJFLdLiEj1MMq2nGe9yKno2c0/aWpl3UZRd5C/3ayx2Q6Cb4t7EmYLsCTugjE6k+hezRPfOVVwsLF8xvsb2CTZRmsKIVFSpG2cNEpPsxk053lyk6asxJ9LwuQ+DAYdfUFiIkLxOFjK3D52hHE3TiJ4JC92L33e+zZ/7Ui6RGSxANsv+Y+P2uxRHXWwuKqETY0p/Uq/PL2SUMsF9ua4tvYsOkzxMYHISomAD+u/hir136CXXu+R1+/Ebt2/4DQMwfQ/CwP5r4sxFSG4Xb9ZdjZfOqqx7SjCrm1l3E01wtXqkNRN/gIY85SNAw8RVj6YQQ98cbTkmjUVj3C6KgGU1NV6HpehppnqUipj8Q+zTakd8choe48rhtDkNxwHZX9qXhu1aDfWoSekTx0jZSgY6gELRNa3Cw/ixj9KVQPPJFAMavsAtYDU/TQN+JaaTDi68+hhVaDTo4RihMKJS3cCaRlqUiVXFwUU+pSK0YDrik2TlbJbt0ESqHruQ3/Ii9cLwlAR+9TcSjh33IBrC6QVRmSOo687ZwVttxugH2iDO1jeQiuCMDZwuNiv0hpl7pj8rbn4M8Ivrk7ZHdSxlMmfvm8DmzWCoxYdeic1CG16x425S3Dd4++xudPl+DX9zfh72J34r9Gf4efxP8Kv7//HZZmXsc3eSbx3/48sxqfSWJlDVQrQFoDfpE1e19CcNyWgF+r1oB5JnkO1QZQPX6XX4f5tTjfCLWWFNRhfi0tNGF+fV9Yj++L5tbywjqoNT8Eh/cZguPpUKLeVt1J1OPqIhPUWqOpx/yiJSCDb+aX2iypHjcW10OtTVoz5hftALeUNIDhN56lNkuqx+1MmvQIwnlz8M2sHaCq496ta4Far3MpecWhRLy35wJtAu8DhraZUhIm53pwezqUKC4lLThS8WodrWTKpFqvykk8GW2V1X4ToParbsNMVbXCj+X5mIBuBXgH1LQhoKYVgbVt
CDK2S/G2ej/E1ImT9Z3wytDgv/zdz94f+M7vjQeZ78zOKGQL+x2H3OdxyOuJQ2FPHIpf3ELeQDT8S/fjfIUval9SL6doKTkYETxKEpaVMdT0TFWsloaHtCJPWPzt77Douw/Q06PDw8STiDi3F+HnjuKQ1wacOLoJDnsDpp3VuF91FicLvFHScQfPRwvwfDAbXQPp6J0sxAS3+gnY7NSQ6hBVfRaX9CFo6HsiIFPYJYm3Jgvh3jp3A/F3DUr/YT93Mw8EctPgIEh2iQBEYR4JjrgVSRmK1VYuvsua7rsITN+HE6m78bghEl1T1MQrkeHc9n61FKb91cfVaGoOworTBAdibmNyO5ysEreXuT3KprsHNRdxIfsogp/sR4w2FEkd2ODfuAAAIABJREFUMSgffICmkQzUjKSKg8mARSONdHax7FJkNjwPCJYU8B2BKxUhGHIWewAut1vDW8C3biBJ/Knngm+CLrKl1K5Xwen2XuYEaax9DD/fzVj0ze9w9MhGrF7zRzx6GoEVKz/Crp3LsXzZn3EyZAd6e+niYhQdMNCkTKacRKmbBXdxajFm1UPzIhmHH29H7WAyrAIcFXaY55g0agoLNx8Avx0UZudcxn6vRViz9vc4G7YTGs1NJD84i6+/+SlW/vhbeayzkxaHb3+ehYDvzskcXK8+hZs1ZzFN2QOvAy60hBVWzg+FkX/7/xIW1lmBgYEi3L0Tgi8//2fUm4rgd2KXgPAdO5bC98RmJCVdQE1tCiYnKXEyige3lWOBsOhcGPCzegf4HkjGpepAPDZG/LuDbzY7Oy2KhIfn6cWCw4g1nURk6XFE6vyR9+IuRgVoGzHg0CGzORY5XTdQNfQI7ZN56LOWYMhpwJiLjHARuofT0T9RgBdj2RiYzMfLqUK8nNTg5WQxhic1GJ8oxfg0m2Y5jpVgwlqENNMlxJefRO9wnjQYCiPJ8UF2yGZlKnx9VtkdItDm91UjTXeS6iqJonqMWcthbnmEwz5LceDIFqzZ+BX2H1mFY/5bcNx/F46e2I1Dx7fg3CVvTM6Eob39u5fzUCQ/dbJIHbYUIbfpMvxz9oh+mj7elZVPsGfP94iNDUFMTDB27lyKPbuXY83qz1BWmoIfV36CyIvH0NKWA9PzFMSyMdp8WUJa2DfCRtRBuwbpzddxKmUvosqCoBl5hIfP4rEh8gd8d/BjfLf6A2zf+AWqDLfQ2JCC6zEnEHhuK65mHcP6RysQ+HA7wtMO4WL+URzK2AP/smO40RGO5OeXcLM+DHHGCFytjUSE+Tx2ZW9DeLkP6gceuXcjleRLsr5TMCBScwy3msLQPkmnKQJtLsxrhADi2CzjJiWF3HER7T6ZeLcOnxISawkqxh7DN2cfrml8YH7O8YPnPq+LShlrxabRzaQrbDWvj7d/F7JD5KoE00wf1V/BzsdbcLfiHPrH8hXHIGG93/083K2jjMtp0cqYMG2nq00F2qaykfP8Bq4ZT8MrZye+u/sFtmSsw+aM9fg0YR1+EbsBfxPzF/x9zE/x+f2vsDU/GosKG0AgTZBNAK6CbT5GsO0ZhqMCbvVI4K6WCrrV43zgzfsLAd6eFoHzfblVq8A3Ae+VbuD9OvC9ipHvmrn1LuC9cPBtFvA9H3Tz/owP9zzg/boUSs/QG95WGyXVowq0PY/04pbyAN9zfbhfYw1oeBV0q4z328D3K8Bbgm+YOvl28E3vbVVG4qnhVgG4T1WnsNsLAd/+IiuhtGS2hNkmu+2uwHngO8gNxIONHVDB94H3Db4JuMl6E4CTBScY55H3WXkE4pO3cUK7H6e1R1DxItHtFqA0l4iUgX65UgY4nCWKvhRm9PcZEH7+KNav/xzPe8sxMqzH5GQDCgoScezwZkSePwy4ngnja+xLwumUvfB6sAU+aXvh/3QvfJ/sxoVSf2gGkkQTyMGLDVJXdKdwvfI0OsY4UHIbmDrWComepu56IWzkuwa99/5zWbBwq51SnUpZRNgmtYoVHgdn+lKTZaUTB1kypw5jrgrUIxdx5rMIzNqNe/Xn0OksgIS/MABmphTnDsY7s8ZcOgmzYFgPHT34+3xuz5om0CfbbtfBYaEbBHXBCqsz4ihHx1QudL3JuFp6CgcebMXh+xvhn7IT+55sgr/mMFI7YtExnY8RYc4UFlCsJPk9OMuRaAzHFcO/HXzrXyZjiqEbbiZRYb45QRF8kzU1YXK8UHm99ipUGpJxKng/vPZsQNS1M1i39kucDz+EHdt/QHxsOPbvW4/g4B3o6s6V5j+m2QFmt3zBIFIPppBSJsVkxbaRPPim7URC0zmMWjTuxRG3Z7kAVORBZB9nmPx3TJ48j2hF19WVi9bWdIyM8D00Y2qqBrW1Saire4CJCf4Or6e3T8QLAd/PRtJwpSIYt+vOS/OyCr4VyQqvH7cj0Dv/F/XJBN8apD69hC8++yfxdd+xfRm8D67DuvVf4MuvfolPP/kZtm79AiUlN+F00ZGDOxpkDrmopGsJd6PeDr6rCL6rApBSd1FpKGZDtmyH87x6z8y3SNPqZBzps+QjIGsPcvtv4k55MK6X+sMw/FgWZMOuGpSNpWDL7ZVYFbcY3o82idtJXmsczMNpqJ/Iwo2WiziRw3HxGM5ojyG83A+RhiCElwfgoj4Id+ovIqs+BtlNcUgxX4amLwFt1mw8fHYVV/Un0T6UBRuBnpXJoswBoB7eXbRbpCMS3Ses9JvnZ2mEiwmmXLSzUVDGXBPGRvR4fC8Y6xZ/hG0r/wxD3i0YtUmICNyDbSs+w3n/XegyZwJ2Akq3U8pbjiKdos0c3VtgxLRNi7L2aATl75OxZNpSg8HBGpw4sQlfffXPWLz41zh5cg9SUq5j9epP8dmn/4Qvv/glsjOixH6xuT8VNypP4o7pvJx/THnl6+d3O20vQWnHLYQWH8fegv3Ym3MA3wV9hQ8W/QOychLw47JPkHDHF8lJp7Dyx4/x8Zc/QdCNbTihPYj0hih0TRehx6VF7uBdhBt84ZO2Hn7p6xFW6IVLZYEIzfNFSNFRnKvwRX7fDby0M0uCPQIVmJoukUbXIVcRIooO4VZjKFrH00SXP+uWVA47HXtsJbA7yhSbRO7CWunCVIURJx1XqvB8Kg8ncrwQUnYMpgG6qSjfkQBvSVflda/MmYqMZuFjCOeDjvFsnNP64WSJL7ql96ASFmsZrMwikOd9x9hBkM5dVi7ErSWYcJWi0Z6FqPpQbEtdhW9ufoTfR/4zPrzwATbeX4YDqRuxNWU3vn6wBf9658/4TcK/YM3T5ThaehOLixrxXYEZX7lZbhVsqwD7bWy3Crx5fBPoXpRfh0UFJqkFMd0Fr0+gJPBWUydXFprgWSqz7XlcVVgPlmoN6HlUmyZfYbo19fAMwPEE3yrL7XlUbQLfBrpV8K0y3ttKGqEy3epRZbsXFHyzEJcSD9/tN7mUHPBoluRttVHS8+hNWUnFq6WG3kjqpES+z4LvWbZbZb1bIcE37qZJFXAL213VCQLv14Hv+cy2ev9toFuRmrQjqKYNwbXtr1SIsQMn6zpxytSFAxnF+Ku/fY/
MNxnvnJ5YYbsLnsejoCcO1HtndlxHTm8c8gZu4c7zSBzK2oaIEh+YXj6VLTcySCI9kdW9HjYLt0bVAYUDTx0GXxoQfd0fK1Z8iJ7nXHG3oLVdg4CgXfDa9yPMVWlwjVfCNl4k26nt0znIfJGAtOe3kNt7B/fMl3Cm4BhuVYTBwphgaxlGpjS4WBKIBFMEBmxaWc3TWoqg1sqY4v+s4Jtgh4BEJkwCQG6tc6GgMC899iK0jWagZyAF7UMpqOpOQMPgAzy3ZUPffRORuQdxpvgQ0nvj0D6aJdU2konW4Qy0DKUrNZiGZ4Npclv9HR7V32PMM3/e/DIVzxhl3ZuCjpFMNI1moHokBZUjKagdTUXzVDb6HSUYd1XhpTDkOrQNp8E4/hTFfXcRVR6Mo092SCpn23SewqgIE680vFocpbhfex6X9UH/JtnJvuwdMAw+wJREonNyVuQBClOrMN/UqSrb9IzqrkBXRxEuhPngFz//G/z2Vz/FsSM70N5hwOJFH+PX//pTfPbJb3EjPhQut9MD7cUY7+6ycyJlIE+5EuxBX2+bQZjL6KpAHNbsQM94toA0AUSMTZeJjuCVuw7UlC60t4C2dWTaaN9YLYsA3qef+sRUqVjRKazYOyZQ0Tur15j6uzynWMq2eMPAE1w2BOFe/UXF+9gtO1HAN7fLFXu5dwF9AjzuBjA4pbkpF1u3LMIXn/8KP/3v/wXeB9cg/OIRPHh4BQm3z+HQoR9xMWK/LCq4OFKZQ8Uu8N3gu7I/CZFVAUitvzQLvummRO34ewbfdAyicwu1vEPQwU/jjfzBOyjoiENEqS/umS/A4igRKZS+5y7OZR/E4xcxeNIfi8sVgfBN2w3vlN3YnbgVnxz+LX6x6W+xKeIHXCo+iW2XV+Jbnz9g1ekvsTd6DU483Au/nCPwyTmIw+m7EZDrhbzeW7jfehVRhlPoHqBdHB1FlLJTNz5DZJDQUL5TFbQREFL/baMXN6UwljLYbFy4NcJub8LEZDWmpnle1YDyqsnJKgyP0I6RNqV62KCR4KlJVOFtJYtcAjWnUYiRaasG5R3R8M/ZhSGbFhY24Trr0TegRUnZDRSXxKO7R4OJCROMdSkiQ9LpkzAySNlRHVpfpuBmZSBuV5909+MwMIduOxo47OUYdpQid+A+9mftw5fnvsLmsNX4/af/HbHXzuLy+RA0N2dgaLACcXGh2LL7W5y5vQ8RNSF47tTCDpO490y7DBizFmHQkolRey7GHcWYpGbdYZakVpIN027JCBfZDhsXM2bx+p6GEbeqwhCp8UGK6RJKOhJQ3BoPTXMstDy2xqKoNRb6gXuom07BoPTLGMTxhwm5nZZiSW49nXsIDS+fgF7qvM4kRZe7i2wMlsWu0gcg7LmkYfJaVq/jNxypJ3dV4NlkHk4aAnGl9iyGp4uBaTblqvpzXmNv+Hv345wf6QxDmSHZ9JfQ4Lz2CDY+WoM1T1dgfdpXWJfyGdalrsWunPXYm7keB7QHsD5/O/6S9Bk+vPEhVjzeCK/Su1hc2AB6bNODmwBcTZt8XQCO6sXtefwuvx4s1YN75phvwqyOW7mtRr7PZ7ffxHCrTLcKutWjCrw9wbZ6W2W3PcG2ensWdDeAt9dp6qXmy0p4XwXZG0XD7WETWNzwqh+31owtrLew29tLGjG//v8w3Xvc7iSqS4l6pDuJWp6Am7dVVtvzeEDf9gbA3Q7VqWRhjZKvht6o8e7UcKs6bgbfqDU//EZ8uKtoDzjLZqu3VTb7TUdV1z17nAXfBNssFYjzNoF3aH0XDqa/Z/Cd3RODvN545D+Pk+bK/K4YAd85vD+UgIyBeBzK3oFjKVuR2nwNL2j35qyVZioBkmxuYdABwQuZWweT1zSw2yvFazc66gSWLf81nvdqMDllxM2EU/Dx34qUp1GwTpilIUZcQpxlmHIZMGorwZitXKQmjS/Tcb0sBMce7kBaezxS2mOlKep4vjeePIsR5lcGoOkymUzpKiKSGGHb3j0gvWvA+vf4ObeWORHKc7ORzmlAbWcSbhQF4RRTDIsOw7/okDAooSVHJeo5tOAgDmRvx5rM9diWvwOBhUdnKqDgCNTyLzgCFu8HFR2T35n5Wf5h+Zn8Tv5hBOcdwbmcozhX4CNBKceyvXAs5wD88w9LQ881/SmktcSiH1pheRnrPWUvxZRdh96pQmS1x+G89gTuVZyFy8p0R+rYqxTW/n8AfFcOPXoVfIt+nZImbvmyUZWTFpuGjKiueoxTwQewffMq+PsewNYtP+D2nfPYtHEJvA9sw8b1S3Hh/EG8eFEgizMlnpx6TdoWlkjiILW4IpNw6DDl1KBs6DY2Pl2Jyt5ETNN1giys23GGIJmuIwTfM+ml75j4+F0r29ZkvhRQqWw704Na0YwuRP+5EOa7vu8hLlcEI6nxsgJc2ZRnY3Q3Fy/K96P2Rrz9/KZTSRWGBktRXnYP58MOosKQgc2bv4Gf3ybcTTwDoykLhQX3cPz4Wpw9u12SDq1W9n+Q/aY2n/Vu8F3Rl4jISn+kNVxRwDc/7zngm699AUBlId8DpQNuiccAynEk7yASn12BaTwLV4xn4V/ijcrhZNRMpCO5LgKhKbtR0p+ADlsOeqYL0Tyeh6yOuzhyYwd+/sFfYe2Oz/H9qg8QkxCEq/E+8Alai5XrP8S6LR8jSxON9okS1E/komgwEVe0J1DSlYD07hu4U0cP7Bz5nAi+6bxBSzzRjFOX7mBPgTswRkJjaP/JFFeGDSk+1i66utjqgCkTXFMNmLBR4mCClQ4wYM8E9dVm0BrQAh0mXHQeIgCkJ/qba8rJRkdKLWoFsBHIarujcSJ3J/qm8kG3EzrY2Hi9TJdgapo2ogZxKmHC5uioFtMWnTjl8Dnah1NwuzoIt6tCxHebAWNk9B0uOtEQFFZhyKFHzcss3C+/jCNnNuEnP/lfcXD3eiz69GOkPYnAyEAVHj+Kwh7vFQiO2YEwnT9e2LSwWpTmV5ed1rdKc6TSqFqDKYsJ3Z2lqDQkYXRYhwlLLVqfZaEo8xoy06+graMIcLTCgme4lnMawfe8cCLDG8e0R3BcexjHNd7w0x6Bb4k3DmsP4GCpF7YXbMejhkiMThaBO4sN4+kIK/TF/ifb0TiQCutkKcb5ucj1pl73ygJZXK3kc+XCdoE7tG7w3TyZhxCC75ozIkGBRfHgVpo/3z3XcV6k1z5JMY5BuW2X4JW6EevTd+LbxNVYn/UjQpoPYlP+Jnyc8BH+dP8vWJG7Bp8lfIa/xH+KH1K2Y3PJZawvSwdBNkG3CrwXArpVwO15VEH3fMDN+0sL6qUWArpVX+7lhWZ4enJLE6U7eVJltWeORa8y3GsKzVjLmheAMycEZ0F67lng/aYQnLeB7hl22wN8vwl0z/fe9nQmUUE2j29zKNm/QDtAbzqS6GbrsL4drxSbJiterTmx7vNSJ1XA7XlkA+VrAbc7BEcCcOaB79eB7cBqAuuOuVXbieA51YGTNe04VduBU8ZO5cjbtR0INXbidF0XTpu64Z1RjP/jv/3j+9N8Zz2PBYF2bmc02GBZ2B
mLPALw3jik9cXiSnUQvDN34k7lWbQNZ2HKRfspNhJxBc3JQnE6UZhwDtilcHIrzlGN5z1FOBe2G6vXfoAXL4qQ/OAMlq38ELu9liMzJx6dbbngIEJJgcNWLC4JdukkJ7ipwguLBqmtsQjI8MKFEn9E1IZiR9Yu7M3cjeyOBNn2E09qxrhTW6c6IVDi4b79doDxdrbg/f6tewB2UbfO18v/rSTolbfdQXjOMVzQ+iusf+dNlHXdQWX3fZT33EH5QCJyuuPwtP0aMjtikcvUQ1bHTeR33EJB120UdiWgsDsBRT135H5Oazxy2lg3kNt+Q36Pv6PpuYtipif23kVKzw1k9NxCbvdtaJ7fhfZFInK7bolPdqQ2AOfyjiKuNBDawSRxuHG6auGwVMBu1aHlZSpuV55BZKk/rONapYPfLZuZtpci0XhBNN/DTko3uDXqniDI9gvj/2rDJZnvquHHmJZId36fZL+VCZVWcRIv7qyE3aYVBwsu+NJSw2UXJTTkMJITr2Px4t9h177F8PL6EY8exsDbez38/NajrT1NaaB0lMNqoWtLiTDfbNRVnETIZJfDSY9nSy4OZG/GI9MlPJ8sEE9xxRlA0bQzbVTOL1lwLvQcUizJnPI+CCS5UCTo5/ta+HMoGmrPyXYu813XmyyykwfNVwVw01+dvRKi7f83gW/FZWF8rAzFxfH4+suf4ciRNVi16iPcuRuCcxG7cOjIcmza9AXWrvsjUlLOKVaDtNrj2EAXH+lp4PWtfHdKsyXv8/XPNlwaXtwX8J3RdG0O+BZgIQud9we+lf+rMJEj0CNcfxKn0g8itjgE3tl78P39JfBO2wJ/vR+8M/YgNPsgWkbTYaEsa7oCtslqtHdk4XrUYXy39I94/DQOJwJ24cnTGJgbilBbm4WTp3ZJ0ExXD6UKteLe82wqCxfSDyC+NAhn9H7wLT6CrOY46HsfoqLzAaq6H6GuLwVNQxnoHM/FC2sRhp2lIhsTuRh0sEoqZRns7nOWzYAM0OrrK0F29nVcjPRGRvolDAyWoKMjFxkZV5CUdA5VlQ8VwkQ+c8qB3l5scmfi5ySqMYpKvHAWIa8vGsFlB9FvKYDTXgeHs1TpwQD7RkjIMAysHE72/LidWlwE/qhB69BjJNQE407taUWOwYbGaaXpUyz3CAztlIEY8Kw9A1evHMIPSz5AVXkmflz8Oe7eDEBfdwkeJF/BroM/4PTtvQivDEKvpQjWaTaH0paS3u5k5Bliw10mE561F8E/YCe+/+FD1NQ8QeOzXESEeWHb6k+xf+9ynAnzhn26E5l5N/Hp2t/CL3IncoxxqB14AvNgKsyDT9HA48vHqB18jILum7hS5IOEsmCY+x+jYThdxr9jKbtQ2ftIfMcpZ6FBAMcUZUGtSE2Ua13Rjsvjcm14XsdvGAPElrUCbeM5OFPih5BiHwxM0TqzBjYbF0A8lzkGvOHv1ccpNxP3IQbSleFK+UHszduJ9YVB+ONdL/zh7nqsLNyGT+59h/96+af4b3G/xm/vf4Z/ifwlvrm9DDuLr2FzZT6+LijCV5k1+DqrBt9kGyUAxzP4RrntEYSTV49FHrWYjHe+GTyqoHtOAE5BPdTQG+X4bknJfMDtCbrV2HcVdKuMtudRZbcJsueXagWoHjdozJAi0z2f4dY2YiPdSYrn1hZtI14Nw2nANm0DtpWw5jHcbKKclzipgm9Vy60e54NvT8Ct3ibwJsCeX176Nqi1EEkJ7QBfAdv6dhwxdHhU+2zoDQNwVHcSD1tAJfjG06FEuT3rUMKYd3pvexTvz7DcZLyVei3gdjuWqMw2wfdcsK2A7xC6mLBqO2fAN8E2S4C4G3yfMXXjbH03Dmdq3y/4znweg6zuaGR3RKGwMwaarjiQ/c7ujUNixyUcfLIJF6tPwjSYCoudtlm10iDJmHTaiE06qJtT5BP0vXXRU5feta5q9PcXIyEhEOfDd2BouASxscfxw/LfYdWGTxF28SA0xbGKxzCfg7o4J5+fjW1KEty0TYeeyQJUdN+Hru0OqgdSEKUPRbDmOAq67oFhHgTf1EZyG5mMFgchbvEtpKnsnQOWOnC9j6Owtwy9IaPoTpCT9MYKlHTexeXyEDxojMKAQ49Ra7nCJk2UYMpZjnHoMGzTYsRSjFFLMYanNRixaDBiLcaoTYsxWwnG7aUYt5dh3FGGMXup8nP1d6zFGLNpMc6GL4anOMsx6izHC0cJ+m0akfJMTJPV1mPEVobusTwYXzxGVlMULhUdwwmNN5523MQAWU27ES5HDV6M5CCx7jxCy31mwDfTC8mAU8dJ8H218qQkTs46XLy94VLA98hjcZYga8ot+FnwzQWL0qTqYFKcLPIqYNAnwPf4Ovyw9A/YuWMRTvitw+37/li99g/YvXspNm3+EteuH8LgcOHMosdO+zuyiNLgxvONbCjtyfTAdBmmHWWIM51ChNYPVS8fi5+44oPNBD7q592AmfpcYZPeMfHJ+eOWy4gNpAJAxYmE4FKiphcwEcuETuDu+btzwXdtT6LITgi+CdLYtEd5mPgeS9Oosuh517mvLKCZEFqFjs5sRETsxclTW3H12iE0NqUhX3MVYRd2wMd3DWJij6G9I1MkNSIVYVMaY+C5sJDP6t3g+2KlH7Ka6fPt1jw7uehSG4jfI/jmromAxEqM28pQ/vIJkgzhSDKcw9X609hXvBO7H6/F3YYrSGy4gqLuBIzayuCkVMlSDdjM6GzNw7mwPfjNb/4ePj67sH37chQVPcDocBMq9E8RFLANp0N3w2ptAuz01q9Ej70A0aUBCMs7hM1PN+L75FU4U+iLK7pQRBQHIFIbhMvlwaIFj6oMRUz1acTWnBHXmrjqs7hZF4Z75gjcrT+PJPNFJJoi8cB0DQ9qYhCa7IOPN/4GHy3+BT778Vc4G38Yh89sxBerf4c/LvklthxdhjRdNLJrryO98RrSm95e2fVXkF13HenGKKQ2RIO2oRfKD+FI8W4MO7Rw2thYy+uxwu26woh35ZrgXMAFpTQKokEIkoaXyYivCkKCMUyR3bnqAQf15FWS7MtFhDgPWWvQ15WLhFgfrPjht9AWP8DSpR/jfoIfnrfl487ts9hxYAnOJXrhfHUgXtiLxQ5R6ftRMgzYj8HdJMt0FTo7ihEcuAu//vX/jdLSuzDWpyP2uh8unvbC6dP7sXLNJ7COtiAjIwa//PNfIyRsO1ra0yW0R3ZtuAPDHQg7ZY3VGLaVIrf2KmLLgnCnIRwJxvMIyT4gZAjDgdhobKMjDZ1wJL9B2YnlrhqlJtJ/QaDs7lMS9vsdc4t8jq4KDEwVIaH6PHY+3IzC1tvSaCu2lqItX8D4w54VC0O0alDUdwubHnyD9fm7sL4sBl+mx+Nf7wbin25vws+jvsa/JH6Ov43/E35y5UP8PuYjbM45ij3laVheWInPcqrwTVat1LfZRtCHWw2+ofe2521PL27eXpJvnlOeoHsu4K7HD4VmiGZbQnDqweh3iX8Xa0AzyHKrtVL15fYIxJHkSbEIbBSrQE+wrd5WQbd69ATeKthWj7M+3
LOstiovmc9ubyluhFpbtU3wrG0lzVBqHuAuaXzVi5sNlCVN2FXKap5TaggOme53B9/Qf9tdwnK3wUvXhgNzqlUi3z3127ytykl4PDSP6Z4LuhUAftSggG/PZklpmKxQQm9UlxJVTsLj6xhuBXy3vlNWQmZ7pjxAtyfbHVLTCbVO1nZhfp2qVcC2sNxkugnAPVjvs6ZuhNX34EhmCf7P98l8Z/bGIosuJx3RKOqMlcrriEZOXzwSOy9j+72ViDVfwPOxfGnYoUUTbaIEfHOV76TkgNuHSnOVMN8CjCqlqayxKR315ieYsuhQVZmIBw8jcOPeWTxIuwyjmSET1NoqHtAWsRYkgCNIq5AJj6EUElJjI/NTjcK2W2IzWNB9F9MceAiA+LtgLLebkVwIE/COQe9d4OTf+nM2hYrnMRud+DpFU1sp/uRFnXfAEJiM9gTZ0qVe0+k0SQIlt+8dnAAEjDBemuBJGdSVwdwTiHkOwnzcs9Sfefw+GUrKKpimR6kAnSAkeZCWaRV4aSlEefctXNCRqTuKJHMMekeLBXw39afgUmUgTtf6wzFdJowTJyh+V1M2Le7Vnncz38XuFD/3/30H8109+mQe+OZWusJKq8y3ssjjOWfA0FApcnKiERS4BWfDdqGi6h56BrIRHXvPtsJ5AAAgAElEQVQIISe3Ii7OD8a6B7CSJSIgFMkKXXoIDBV5A9MUOWlSE8rPwT5VgurRh/DN2o+srpsYpusJPxeyauJWQwBMKYqbvV3AuSQAWxo1CdgV1ttmLZXz/32C76que4jUBSC5mRIO+ksTFCmNiwQ7vE4WsjAVy1AXP69KTFsr8eJFMWrrHqG9MwcWSzVGJ0vQ0PwUNTWP0d1TAKuN5yYlaLQxK4XTwR4P1WbwHeC79x4uVvgh+1nULPjmeS+LIl7b7xF889qj5Ah0FVKamjtH8qX5sXDgDkJrjiO06CD6Rovw0qLFiKsc09TL2vn9m+B0NcPckgO/U9vxh49+iuAQL3z19a+R8vQ6hgbrcPNGMPbu/Q5pqZHigQ8bddvUl5dD3/8Qj82X4FN4EPtz9uJ+zUVkP7uB1KbreNp0HSnNV/Go6TKSGy7iTt05xNecQnRFEK4ZAnC1JhhXq4IQVROCWD5uOIVYfRgu5Z7EmsCl+H8+/t+w+ugSfLvrD9gRuhwfr/s5Plz5T1i0+zMsO/AFrmYEIKE0BFGVQbhe9fa6oQvEDd0pxOjP4lrNWYQWH8K25O+xM2eDsKYue50009LHnQtZhg7JzqOjWtyEGJZjt1VjwlaLMacBppcPEFsdjFvGc2L3CSeTNpsUoOoywoE6iY+3OYwYGypFccF1bNn4JxzxWYcNuxahuPg6Rnq1eJB4AUcC1+Jy2jGcMhzHC1exW0rB18BGdTpBsfeoFDarDhOjNcjKiMZX3/wjtLpb6O7XoqkxC2Xauzh/8SB+3PBnOCaa0dVWhKWrf4WL4bvQ2Z4p+QJ8D9IEy2AmkjsSRV+DspY7CC44hG1ZmxGkPYy05msYh0GSOpnmSr29y14jzdsybnEBSWtIB8cbRZqpNG8vTHbC3VwCectkKWr7UuCVtRcxlWfQNZrjTnMlofNuSRY/n8GJImS23MHe7K34IO6X+Oj+CixOi8XyHA0+TXmIn930wT8nbMSf0tbjlzeW4ucX/4RPE77ErrKz2KXTYmlGA77KaBDA7Qm63wa4VQA+H3jzvgq+Xw+8Cb7Nihc3/bjd9VqW2w2+1ah39egZ+b6myIz5tVbTAM9SwbcKuNXjLPBuggq4PY9vAt+eoJu3Z4F38ytabmq7VXZbPQrwLmkCQ288S5WWqMDbU1LyOpZ7Bni7Afhc0K2A8INq6A0B90wpgPsQgbfhVYnJm8D3fOA9361ECb55P3ruGeD9OnkJGe8astqzdaq2C/MrtLZLwDZB9/w6U9clwDvM3IOjWe8ZfJPhZsMlpSYFXbHI74xBTkcUcl/E4+nzaJwu8kaA9jDymqLRP5InQQZ2ahLpSMCOexfDRujt7R5YyH47GXRBwFgPp4MNWPUCzuzWWthoLWg3Y9xSgynqaAlEqBknqKEjh4QA8G8r4bAqiZCKhKQGTKMraL0pbgL53bdle49yAHrSCnvJZjwCIWGZVbD5tiMBoSIBUIAq9YN8Ds8J3w0aJYGOwSEKw0PwReZCAcBv+x/Kz9icx8ZQvjaF8eBipQZ02ijsuI0IQzDSWm/KZ+awU25Bmy86GhAgc2uVAIpx56/7X3yNSvH1zIJy92t/5W/coJGTJsGw/Nytl+T3IRplLoL0mHCWoWk0HTeqzyEo+zCemK6hsicJyVXnEVp0GE+exynftTA6ynOwoS2xJgJXdcF4bi8Eg5IUwMfPjGyZ4ugiCyZUCGvf0PcYBzN3oWoiFZPixEJtqlL8G5nAxMud37WSqiqTrasWk1MMcNJgcIg6bnoT6zE+VY7evkKMjlNXX+teHCgpo9SmK9IXt7bYofgzc3EHgqyJEoy4SnEy7xBums6hdYLBT8oEL572BOlsarMo5xr9rEVmIecIvx9+t2SkqXlW9Lv0JFf6Ivg31HxWiFe68jr43SrWkQS7vJbIIEvCI6oF8Mt5IJO38n2Ja4XYwbExzyQNdnCZUNZxDxfLAvG4KUqkA3SLoTaebi7K9UodqupEorBwlHjx+eWc5mubAehKTLdd/KVNsDPxEka3HIisNF1DGuBw8jUoTcQ2a/Es4y1Js7zG2ZjK162co7zWpdzNn/reu4isPIHslhiJM1cW9vwclGRD0curwEWAnsL28/xRnoesogrw1f/Dz8/dFC67Ezzn6RpSCpe1BBY5l7jYrMaoUwfzUDru1YXjXOlRpDZckvOAux3UI0soiyx++RmYYWxMRdCZHVi74VuUGTLw6Rf/ghsJigb+RMAG7PdegmdtbNZtUJhZ1GAC1RhHDRpGMhBnCkNUXRjax3Mx7eKirxRT0n9AvbAGI/ZCDFjy0DOZhY6xdLSNpqJpPBWmwUdoGklB83AaGgfS0NSXjaKqe/DyXY2fffB/4XjoFvif34FzMYfw3drfYcnGj3H09HYEXNyFEnMCTC8eoqI/CdTYG9yl708Ey9CfBIMcE8XRqqL/EfQvn6J08Ani6sKwLul7rE9bLTtmLrtZnFDsFgPoez5s1aFpMAv6zgfQtt5HRc9jlLYloqD9Lgq6E5DcEI6TRQcQrjuBlskcPBvNkRAv49ATNA5lomU0F43j2TCNZsL8MhOGZ0kIi92LI2HrkJh2AX0D+bBPVqK2KhmPs84jueICAkuP4IWNrkcKiGUTNhuouWiUoCVhok0w16fh+2W/REFJHCatdRgfMyIz7Sr27luEyCuHAXsrnNYmbN/9Ga5d2oe29kxYuciy1gDc6ZCmTF4Htei1l+FuXSQ231+NDUmrcLPhAvpsRcLgT1OyRumjhDCp1z/PU15Xyvglcx3nDHXnk7sE4mHvHrslqIfnr3IOcxdBdq/YFzKhw6itEpF1F3BRFyIBTVzsikMVd+wogeOupKsKg7YC1L58jJLue2gYTsOovRzTrkrUjaTiSN5+
83XLx3HbP04GmwZd4SuaSspcazEzXwe2shYeMJlNK2cPgrQPmq/HAbcP1xnOItp9G83AxOrtLBdweO/AZbmdG4v2P/g6ngrbhyqV9OBTyOU4V70BxZ4zCfFN+RMmJT2Pt9NhQ3B6FIOMeAd9wVsI5W405VyPCIw8K+x4fcxx3sq8iNvkkEuOC8Mf/9V/g/ZQ1yOmMwvCcBW5vI9yeatgrU7BtywfYtvVtPHxQjb5+EwIDtmDjhreRlHgWff1mVPVl4aRuJ0oHEzErTXxVgIeNvr7eAzK2/IwIy6ywywr4VhhxAnPu7Ci2jv9E1zM/8K3vSkaoLRCF7XGL4FvePw818wS1dBhRrPTcC2XonSvGFcsxJNdcxuisSSFCPNRGK/0VDO7pn9PDOHAD+wt24m5TBEYf6kRDTr93AlnalPI9mWbolfh312Bi0o7YmBNY9/nPcTpwB0KuHkJq6te4nhqEosIUaVLeuuUN3Lx5RmRW9Prm7oDb9U8MvqW3w6f79l1rJYBL7DOrROfOxVSAfjfWZW/Gx4WXsKmsDOtMHXi/oA7vFNXizZIWvF3agnf09XjXUInVRjtW6yulUfIDXYvEu79XWiONlATfSry7b/QB7w/1TWCt8Qu+oY57GfA2KL7cj1oEau0Bm/GZgcVmyUelJWoIDm0BvzQ2L/PfVu0Al/y3feDb1CTM9qKsxNzsF/POAJylEpZbE4YjtoD05dZ4c68UgrMMcJcp7PauMh/wZpx7GWspeVILuNU5fbhVsK0dl4C3AsCfFHxDS8An2f+xgXIReP8PB990g/7b2vCbQL/wGxVsc5QAHGG6FSmJCsAXme7alYE3wfiTgXcfvinyXZWaEHQvAm7NfBnD3dC/GIyjZbe1c3/grVoEqsCbo1gEtgyCDZNqRZPt1lSsJmVyReDdRg9u2gEqkhJ1fAR4E3S3DiOljQB7eV1vH4VSI7jeNozU9hHcIPgmCPcVjwm8b3bdR6jOgf/tuwi+SwaTkN8dKz7fJcOxKBiKQnxdIL669TGO63fjfF0gSnuTMDCtwzxv8L4mSzb6eBkrLo1KSsPSE1mg7xqw/jbPxwe6pRlLdTsR5qwCmY1hCKPbyVDaP0vNt9NTheaJLBzK3oSDht0IthxGiOkYrhqO4IrpOBIcZ5HHkI7+NHQ8zMeExyKOOAwYmXWXYWi6BK1juagbuouGiTzUjGfD0p+GgrZY3GuJga47GeVDt1AxdAtlPamoG7iF1pFs9E8U4/6sFV3TpSjtTRbW77T+EA4U70Zi6wVY+lPQ9VCPh55q9E6VIKP5Ci7bj2OSsgFPoxL0IjpVAimHODrQTYNsMlMrhRH11MHjrcWs2445j03AryQosmGU7P5COZJqz+FM8U7cabyMvtkCzJOZZOT4QjU8c/zbjCOvlITGsv5UhJefxO3myyIj4HeBzaleRpt7fDaMBIF8jvw7XrK6tAU0w0PPY3GSoH8+wY3Pt13AImPKfdrXhQpYeq4L+M7vThBGl+Cbn00ugmULXyQMFnhddA+phtfNuHnFUs5FoE4meaEFHjfTEW0C/J38dzptzFXhodOGG03nEeMIQn3/PVjLUvHBu/8FJfmJyLgVjtf/4f/CO+/9DbZtfgu7zn2AYONe6PsSFb9o7pAJy0p7uArMu60obA1HkGkvet3F4naz4G3C6H0H9ux7DxfP7URHSykmJ2phNV7HgS3v4R/e+lNsyNyI7NYIjM1yIdGMmbly5JVexSef/RLJ18/C6WqFzX4Tb/3qL/H+h6/j/KX9KNLFw9CcglP6XbBOpGOOCxk2gLrpta0sohWJHBt3VZmHynT7RgJf+phLQ/X/TPAdgPy2WNFo83NCH21ZXIlvtyJ/4a4iF0Vdc4W4YjmB1NorCviW16QkaNKBatRlgbEvFeeKDiG24iy6pouliRJztAPl362Ey8OFIL+TfL95XIXRcRtCQg7iH3/251jz4U+x/su3cOfONdy/X426Gh02bngHmza+AZ0uFgtshvblD8zP6xczCJ50vf82zDe96alVl90JymjcTMzlTlMN+pxG3Gm4hkP5O/F5zh58UhqJjyxmvG/qwK8K2/BuYS0+YCMlA3GKm7G6uBHvlVTjA10F1ujrsEbXig9K2iQYZ7W+Gu/p+fvLI97VuPc1+mawFP/tRnykbwQDb5ZXMz4xNishOD5/7uVsdzOWkieXp04ShKugWx0JvlWwrY5LoLtVgnDU4JvHgW5huC0t2GJWSmW21VHLcC+CbaZOWtsWPbgXJSYSetOBXTalFKCtAu6l8VsF32gAuD/oVo//+RxtAAAgAElEQVS/TfCNvzuJgO2qTqhNkjKuALrVwBt1fFLoDRlvht6otVL4DX92psaXOukD31qgrc4Z+f51bS/OMoFSU+fq+6AWZSVLDPfS/GJDP5aqd0U99zJJiQZwE4RrQbc6J9utBdrqXAu4F8E2mW4mUDYvxb2rc/pyq82T/lpugm4VeCtBN0rku8p4JwjoHkVC+yhk/hg9tyopSRZW+3Gge2XwrQXgKvjO6L6PsO8q+C7tT0Z+dzwK+uNQOBSDwsFIFA5F4XrbBRzW78Cn2Z/gWPY2FLbGYNRpFM0gL5QLBN9y46fV2fImJdmSlG1JtZP90RuZortUXQcUl4FF3Sq3aX31yAVemHb1ZuobF0G0+ngcH31M/kx93Mf9+4o/5+sVSzc2XPIGxhtkBbIaryHcEQi7gG/18RRWfGlrmzd2WnRxVP/tm0bl76iv339c8fk95rUquxJ8LPW5+Y8+zb4w05zz/NBSsgrTHgf6pvXQjaRi3e21+Lu4/441GWsQWnsWuolbaHIWYVyAogOzXjMm540YnClFz1QhmiezYey7jviqKzicfxxf3t6Hz/QH8GHRXvwiazP+n5RP8V8T1uJvUz/Hz+9sxptZm/GLjC/x2b2j2J57GCGlQbA2xmNkohgzXgfuu62wjtxGSnMYjuVswvn8PUisuoiS/uvQDaUgsvEMgsoPoW2+FOOzVjyY02HGbcCsx4xZj0XA9aynDNMuEzjOeW2Yd1XgoasM055yYdydHgvmvVY88JowQ09jjx1t3hJccxzHmaIduNd6DeMug3KOXPS4rxKbRAk/WqhAxdBNRFUG4U5LiAK+F8rhpNaboUeeWnicVcJG02aRUgGy2tSNc6GiaLv5+WAUuVWCk4QBJdAn+01Jh2jHHQK+Q+ynUdSTpDS7ktUU8MJmPS6OCR7NPra7ztenofxt+RxwV4DPh7sCkqRKG7kKeBhT7q7B2GwpkmtOI7HqLOzttxGfegJv/Pw/Yup+IxKTzuFnv/xzBJ/bix3bVuNn6/4T9metE9kJQZQ4ndBphb7TC3bMOE3IawoV8M3Yerhr4HU2wGRKwSef/AQ3Us5i+kEjJkar4DDewPbPfoEf//SPsOfONpR0JWBivhwLrjrMTFlQ25CG2xlXMTJogWu+Bp1t+YgOP4Sgk+txJmADMm6egaExEcf1O1E7lQUnd+oouxAHFuX7q+iJeT55Plj8XmpL/blyzvm+cLdLcURRrim8ti1JLpTv1tKx//
eLen+FpZZF4EIV9J3JCLUGoLA9Xhaq/Pvz4nVepXz3RIKkXF/YvNw5W4zLxuNIrbmMkVmDLKYIqulKMuV1oGEyF3FlwdibsRE9E3mYFm9834LDzV07BmE5MCONplXSME1v99ExGy5c2IsPP/gpggL2Yv2Xv8LOne+jp8eK7s4qbN+yFhvXv4GS4mhlp0hkRXx96vn6puuKch5o6cgdIj7X0RkjclqicMx0ACPOUp/PN1+7DWz4lXPNhRtZ/QU7hudNyO2KwYHbG7CtIBBb7Zn4tMyMt41V+MfSarxJwF3UgQ+KmvGRoQpMjfywpB1rSgmc60WHvbakHe8VtUgs/PvmWnxgrMOHOiUAR0YN203Ge42hecXgG2G5qeFejHpfHnzzqbEZrKUAnBZ8bvz1wfcjwNvSCgm/8em5VQDuLzEh471VyicrES33cueS7TYFcGsBuBZ0c77zEfBNhnsJdO/2xblrQ284Xyn4Zjnj3eWTmqiMdzcOOLo1wTcMv6FFYBcO+9W3Ad/HJXWSyZNKSfANw280tRx8/xrBN74EyqXwmyXwrYJt7UhWW6vdXpKXUMO9VP7AW4JwlgHvflxq6BPwfbmhF0op3tziyS2yEkpLlpjvqytou0XPzRCcJ2i6teA7QrTdywF4ZMsQokTP/WgTpQq8VfDNkJtHqm0EiYtFTfejjZQq8Oa4MvimpEQF3o8y3QTcN6j1puSkU2G+BXzrv6PMt747EsV9Ubg3GIXsgUjkUfvdFwt9fxz0k2lI6LiCj9PW4pzlOFofFCjMlptb8AwRqVK21rnNLMBNSTr0couSWkPxKK7xWVsRKCvgmDcriS6WrXnFios3b1rUsYlvnqEq/B1fiISiZVXYIYZVEPiLrRlvrHJzVRg3aRqjzlN0kT6dJ8G6ABgF6LvJAsmNSAXwPgAqoH6lOQEpX2elRCIrDhyVon8uqA1HbHkwykR24mvg8iqBFARRCphSAJXi08ydgseXNID5XjNdRZRS0uXUY9miXgTxK9zsucDge0HgQW9csRJTQDUBgNiE+RYTCzBJAp+AJhe1tY2Y8dahx10O3VgmgixH8EHGGlxpPIbTxm04nLMJd9uSMOahTVwDPK5auL1NmJi5i7tNoThiOYFPDbvxy/wv8N/S1+I/xG3Gv44Kxg/C4/B0zFU8GxeF5xNS8Hx8Op6NScfTkSl4OjoezyVE48XkKPxGVDq+H3UBvxW6Dn8TvwbB1hOonciHa6FZ2Fq46zA5kY8URyDev/EGfpz8Y7x97218XvoJNhZ/jpDWICQ2ByOr4SKMfXGwDqfANpIK+3g6Kicy4Bi/gbKRZFQ9uIHqkXQ0TWah5UE2mmW8g5aHmWi4n472mbvomMvC+HQxLKMZOGjcj+15W1HYnQg3wcGkQbb/2Ww67bJI46V9IB3X7KeR3RQujO+cx4q5aRPmnFbMuK2Ydlsw47Fi1mNTFgCeMpnzZ6x5T7kAfqenTJj3eQbleGxwesskDZLaVzbOWXtuIMpxDsbeGwKgPQvVcC7UwUMLP2+tsPmeRR912jjWwv1IVcMpVoB2hRlfcOA+yuGktMZpxi37Wdwsu4oifRy2f/U2tu58Ax5vJ6Kjj2HD+jdgMNzB1csB+MWGv8aOtLW41xyuaOK5KJHmPwXszjstyG0ORZB5Hx5IKmQjvN4eHD+4Dls2vgWDMRWd7UWw6uPR1lyEstrb+K0/eQq7Iz5FaX8qhqnBJ+NKxxJXFebFprAR8DYC7nrAVQevqxZzrmpMzDtg6EvHzpLNGHmQJ2ExUzynEv7EfALlOrMkMyGI5i6EUssXxlwQNfh2B+gqwmuK0vtCX2zKdcRxxKssXNxuykX4/VcsABWQrwB3uXbMsk+BDHcNSprjEWEOgKEvDU4+N5cDc6jGQ68NTJBkAy13FakTf4BytMwWI8RwHBkNoRicLYGLOxkPrXBPWdE3VYrUmkvYn7Ye5ukszMoOB68JPntOft/FsrBSPmN8z8XXfKEaw6NWXL58ALt2fILc7BScDd6PtWtfR2TkUfR0VON6/FXs2PwrpCYHYEE+W/xbXDTyuk5LSSUwSUa5xvqus3KtVZ4Dz5lnvgpuZz3GpstxrykKJyxfodd1V0J8FN9/MukMzKEFaSvmF6owPJeP6MYgfHTrA2wv3IDNtlx8ZGvDe4YG/KqkRupdHW0Da6U4f9/QgA/86kNjI7S1xtCANYb6xVprbIC2PjI14iNjwzKLQH97QPHmNrF5smGxPjc3QltfWJrAUq0A1fFLSzO0td7nVKI2UKqWgOq4ydq6GHyjtQX09+LeKumSrdhmU8o/BIfH9OHWupRwrvXfVue7y9ugln8IDo9pCbjMpeQxwTf7yzuWa7rtdC5ZHvHO4BttrQS0v004zonqDviX6r+tjitZAmodSjjX+m6rc3//bdWRRDvSmURb365RsgcX6xUdtyoh0Y5ktx/LZGuCb0Ka+hHS1LesHgnCae5ftAh8rFVgSz8i/SqqdQDaivYF4/h7cse1D2GxxJ97AHGtA4hvo4vJ8pJgHF8ojmoPqI4MxVErpYNykqFldaNjGCtVWucI0rsItgm6R6Q4z+gZx+3e+wgzEnz/0XcvZCdrKBK5wzEoHYqDeSAetqEkWO+n4kbbVZwuO4wd+ZuwKeNz2dLsfJCvWA2yO10YUnot27HgtME1Z5ZueHoMC5hj2ANtsMT32S4WfS7eWHjjku13HzstgJBMEy/YVcC8A5gpB+bsSmCER/HlJTifBplnX+oZWRVfEaTzpsLkOrUEJBPsq4BTbhjc1mQDEp+Dv0OLcjPmzYWl3pTJJtK7muCbDJLCglWDrNS9umsIt5+BaeSm73zwdwliynxlAyPGqQEWRwOeq28oMuqUIig3OOXcKeeF54bHygLCy1RDOf+KtIHPiyl6ysgFTDkmJKCE56haCcBZsGF+gUC+UgI6Zr3lmAODapS0OrfXgrkFIxrupyHGcRQ7M9diZ9bnqOouxoy7AwNeI67XHsMV3U7oO9Mxh1643E1oG07GMcNHeCv9E/wofjd+PyEY/youBL8XHYXfjk7Fi3GF+I04G56JzsBzsdl4PrYIz8cY8GyUAasiSrAqMh/PxWXjxcQs/EZcBl66UYLnYm7juUtn8EcxO7HbdhCGB1GYWSgGXPUCNGe9lajouYVzRUew4eYX2JW7CV+X7kFo6R6E6HfjQPFm7Mj7Eltz1mFH3kbsLNiELTnrsCX7C3xVvBmbsz/F7twN2HNvI/bmbcb+gq04ULhNxn35m+Xnh0p24HDudpwo2ot16Z/ipxE/x/aC7ajyFgkYmx4pFJ26fN5RjdKuJOwt2IlDBbuQWncZMc2XEdV4CaEtlxBXfwkptZdxvf4KbjSEIL0pDBmt4bjVGoHbbZHIbI9CfnMUSpuioWuOhbEtHpbOJJHiVAyko2roFmpGMlE3nIGUuvM4YTmA1NaraLt/F60Td9E8k4/O6UIMzOnQ79Shx1kqRQearrlidM4WoWO2UJpkO2YK0DVfgh6XDn2zJRh7WCxpkDWuPHQ4C9F8PwuJdeeQVn0N8Smn8
N6aP8etO+fgXmhBUXE0Pvjgr7Hui5/j/dU/weqdryOgZOei2wmBpEhyRKpVCafbhsL2CHxddgD33dQrN2Ng0Ip//Psf4VzwVrR1FaGgMATbN/49Nnz2OnYffhd//Pr/gn0J65Dbl4h2twmTlEssVGBOtNN09eDOQY0siAls2Qg7s2DHqMuE0v4kHNBvx0OXHgvzlBLR8pCe5w5ZIPHaQMmJ+l1aaeT3njppJ4NdFhh407rYH6AcN4psiR7ZykKa1zFev+zgYmPeycU2Q4Rq5HrFnof7HqPsqAy4dbhddwWhhqMw9qXIAkiuhWLhyiCgWizM2gAy/qiRYLPm2UJcKTmCyLLTqJnKQdtMAVrni2CbykJkzXkcytyC21VX4PTp7LUsvQLCybwrTLryb0wfrcb0bDVu3ryEjz/673jzjb+SxNXAwI3YuOlnWPfJL/GrX/5XbN34SxQVhCnPSxxOeG1Uzh+vUaLN5shz5rs2KQsQLhQVyQudaryeBoxN25DXFIEAyx6MIh9est3zJE/qMO+h1IvXsga0TOsQYNqBL5N/goDi49jnyMc6ez0+NDdDBdwcCcRXU+Otq5Px1wXfWtDNuQBvP/DtD7w/MTWB9akGfGtBN+cq8P624PuxbLavUVILutX5N4HvxwHvXxd8Pw54f1vwrdVz+4Pug44usLTA+7Egu4Ka7uWl+nCroz/w5rEKutXRH3z7A28l+KZrGQD3B9481oJuzrWgW+wBG/r+PwXfqMB7maSkceXwm2UuJU39CGvqWywVXKsj/bjVuTr6e3TzOOJJwLttEDErpFBKGI4WfLcziXI54OaxCro5MgxHBdzaUQXeHP3B92NBd8cwbnaOIINgu2t02fx2zzgyv8vgu3A4Fnnd4Sjtj0FBXxSiak/joG4H1t36GPuyNyLcfEy2/1qnC0UCAC/dFwjcCDIVhouaSm5v0mWBKYVKhz1vxNzOtoEsnke2132uKT7fZl6oBXCTnfGBWyVmukKYb94saY9FRojb2MKAq5ZqjLr3lQpolXQ/AlQFuAoDLM1rvDlyS97n5bs4J1OlAlnKLbRgloDW12DK18Ttfd5w57moIDNWgaIaxVLPMnRTXB4kSIUhGPxdBlnIDYhAXgHzPB/fWHL+KGmgf6+vCGho9cjFDIsaYp/MYPExxEuXwMK3OyD6X+W9Ecsu0ZLyvHBrm64d1Os74HE1Yx4NmEA1LA/SEFZ5GEfvfYHgom3Iag5F15QeTr5WYeFNcNyPwwXzDpzW7UB8YzDCak9gZ9Em/OXtffhf44LwQlQSnokuwdPRVjwdZcTTcXp8P9mI7yUb8ExMMVbFGPFcbDmei6nEs1FVWBVVgVXRNjwbZ8bzCQY8H2XCcwmVeCrJjt9IzMXzCdfwB8mH8Ubmbpy3ngKjygc8JgEa83OVGBjVoXboLuom7kn639BMCUZmdRiZKUbfw3von8qTsXsyG533s9E1kY2+6Xx0jGeh7kEmykdvwDKUDNvwdZSPpsMxfhvVk3dR2pmA4vY4FAwki1tLYXssEmouYH/+TgktaZ7Oh5vvz3Q55l1M4ayEfiANB4z7sb9wl7i9XKoIwuWqMzhdcQrny07hkvUULllO4oL5OM6bjuGc8SjO6o/ga/1hBOsO46ThEI6ZfGU+hOPmw3LMOO7D+v04pNuHYMt+rMv7CH+b/jN8kf8pzup2I6hkJ47pd0rjY3DpV/havxdnDHukTuv3IEj3FYJ0uxFYukuKYDmwdB9O6w/jrO6IALvzuiM4WXYYFyoDcCh/O77I+xJhFWdhKktAZs5ZdA8WwumuwcxMDe7cOY+gwPUICz2E6PzTYhtZ0h0nu1aq7ztZeu6E0R+6oO0aAk27MDidD6+zEj29RYi6tg/1tXcw7axGd989pCcdwc6NP8NXB95CcMIOHEndhKD8PYh2fI2E8rNItgTjuuMcIioDEV0ZhChHIKLspxDtCJDjMPtJfG06gEOFW3GidKfiaT1rEU0+CMB91ockCpbLv3j9UWtp8T0+XowbNwIRHPwlLl/eBqs1EW1tucjLu4pr13YhN/ci5uZ43SAApa0kbQLJNtP5phJ9D4qRUxOGiNLjOGc6jMv6gwgtPYQw8zGczN+BYMN+6EauS2AXFghmqzEzS/KCxIAiVeH3lmmUE3Mm5FovYF/+Dhwu/gpHC3bjeOl+7C/dgz0FOxHhCBYvcHE94eKdu4nU8ksQmtIs7pEk4mqACwPKkngNXWhAR3sRUq8HIyhwI8LCDmBszAGzOQ4BJ9bjyMGPkJlxFoMDpb5US15TyXorCw2SG+r1UqwUeU2SnU5fYBHBOnssFurgQiMGZ6zIbY7AUf0O9Hty4PawKbkG8856ON0tcC40oXEiF+ete7A+aw1O2i/jSLkOG/RtWKNrkWbJ1b7Id2G4jY3CdEv6JBlvfcNi5Lt/1PtaYxOkDGyefDKz/TjQrQXf3wS6VQD+WMbb2oL11hZsWMGLWwXY6qh1KdHqt2VOKYnPpWSHjRpuTZWtwHSLrKR9WcOkEoKjOpZ0wB90a8Nv1LnKfPtbBIovd0UXxCpwBaZbBd2HKrpFYqKC75UY70VNtw94q0BbO6qe3CerOiDlF4DDQBwVdH87lxLFe1v14FZHLeD2B9vq8bmGPqh1vqF3WeqkNvBGnatgWx21oFtY7xVSJ7WgW52HrcBqE2QvA9jNA9AmTqpzrXNJVMvyuPfFBEo/wK3151b9uB8dl8C3FnSr80RfII4KvLWgW52ndFI+8ijTrTLbwnZ3jiwD2yoA53irewwq+L5m+I4y34ZeNr5FoWQwDgnN57CvaAt2l2zF5doglPYnomcyH+NOsqIEqj6bqwWmpdmF/ZUtVoJSAZaK/ISSEbLFvCEQkNIyiqBV2Xol82rHlNuCkYfF6BnJQddwNgYmCzE2r8cMt5oJzgVkE5STvVJCO/gYvKmII4GkB6r6Q1WruTTS8UFxZVGemwBhn4+sgHI+PwJkFSgLW6dsG/PGorLqBBGS9Mb/y99n4IPoKCuQUx+OS+UBKB68IZHZ9IAm8yc6X59elI9FsMy/xyj2byqXLFCYJFgD90KVgEwnH8tXtNrzcPEjr4NgYaVSZCfK+6FsBfO1c+FBNoqLGHlOjJD21KFh7B6SKi/hWNFuBJoPIq0tAjVj9zA+UwaPu0mCbETas1CDcY8DaU2h2JL9KX5x40P8Rdpe/NuUEPzrpCy8EluIp+LK8b2EWnw/3o6n40qxKiEPTyfn4ntJOXgmvgTPxOmxKs6MVXHlWBVnx6q4MqyKt2BVvAHPJujwUoQdL8VY8GKKAS+m6fFCShFejrmF340Mwx8nHsXfZO7AxcrTyOqOgWMoQz4/8y4j5j0m5f3i9vg07e4q4ZJo7DI4PWbMuUwSpkPZxhzBsrcaD50GTLnMmHbbpB66LJic5+e8Bg+dDjyYt2PSacXEnBkPZk0YmiqBviMRAXlf4UThHtTNFsDpLlP+rscOXWcizttO4k5bJHomC9H2IA9tk/dEqtXG48kCtE7ko3kiD033c9F4PwcN49lS9ePZaLyfjSZfNfLn
Y1moG85E9WAGKvrSYO9JRdlQCsLqA7DduAVXa0+hqDUMea3XcLcrHDmtobjXGILc5hDktISuWLmt6s/DkdtCtj0aRY1RuNcYgVutYbjVfg3H7m7B4dLdKOhPxv2HFoxNGDE1x0jvWszM2sSjvLu7EAN9Bhg7UxFWfQoFHVFiY6dqvvl94ffG6bIivy0MQebdGJ7Nl2vBgykzBvsK8fChVdxu5l0OjA2Voqk6E3V1GegcLRV5QoIjGFG2QMTbg5FiP4cE2xkB3DEVgYitCEKsPRDxVaeRUPs1oqtO46rtGE4WbMfRnI0YdusUzbs0Jyqpm1zUcyG/xHaroNu3A8cFuY/NHR0tQlJSAM6c2Yg33/hTBAdvxIXz27B3z2p8/vlPsGXLz3H3LpNMNd81NnV6KzDtscLWex0XCvfjgvko7jRdwb36K7hXfREFDVeR03AZhp4Y9DsL4WZj55wZcFFWw4WzsuCn3KR9pgi6vhRktkTgmi0QH+etx9vxv8KhvJ1IarqKu71xKOhLRtNUAWZ5vaKsRsgF33XNt7PHawSldrJDM18Op0sP94JFrqnz8zUYHbWis7MAHZ15wvI/mLKgqyMPrU13MTpsgJvpknJt5I6cRdx5JOIdlMYolpJ0AeLjq/cAlRwgeUGCZg51GHWWo7QnAYf129DtzMach37hDfB62zDjbkDVcBaulB3Gtrx1OGC6hP31lVhb1ok1ea14v6Aeq9lYqasTwE0pCQG4RL37pCYfGhrxWNDtA98facD3NwHsT4XdJsO9VJ+Zm6HW52Y2T/pSJzk3ayQmlJv45CWUlWhrAwG3pigvUfXbKtjWjsJwW9vgD7pVwK0dVeDtLy1ZlJdoHEq+ya1kT3k7WHvtHctqWQiOvRP7feBaGG6G4fjY7EWA7ejAIV8ddnSCQThqHanognhzV3TgqJbVpnNJZSe0ANt/rgJudVwpdVILulXwrXUpEaeS2uUOJWrwjRZoa+eLIFuj6T5f37tMx61qulVLwEdsARv7wPAbCcBZ0R5QsQakRSCbJVeyBRTttl/wjcpoPwK6fYE4K4FtFXirFoHRrcsj3/2lJcJwC6NNZnt5LSVO+iLf24cWmW6JfGf0u68IuJOZPtlBdnt5Xe8chVqplI90DC9WOqUlfkXGW2W9CbZZKgDnnKz3nd4JEHz/wR9/B2Unuq4oGIYSkDsYg0tlR3D03hbcbr+G1ukCjLn1mPdy+5baUJ/rAgGoMMRKjLg0G7nIAJF1UQCeMNYC9BS5BrXcUy4L7BNZuNMbj6SGqwgrD8JF4zGpq5aTCLEFSEVUBCOu5gKu111BVlOE+OJWDd5G50QeRmdK8WDeiJl5A+bm9GDnvdNpgMtlhNttEpcI+ijzJsEGOrpYUI5BP1yZS6Okov/kVqk8d8pSfAsHZVvWpxXnzwWYq0BcAdFkl8nwUIua1xwt4LuwL0Vh2XzgmyCEOlUB9hLbTVaMSX4E8o8vOhEsUDNLNw2yaF7Fxo5zD/WWUopnr7qAWHzOPjZccXSgvIavizdObu0qgFucDhYqMOWxyfnMbLyCkNKDuFpwGLeqr8E+cAcD00bM+ZIr6TrgZGCMuwYPHpShdjADUdUn8HHmOvxZ9Ff4V3HXsSrGglcidVgVbRLw/VSCFc/EleCF2Dy8EpeHl+OL8WxMCZ5JKMQzCcV4JqEUzyQYsCrBiFWJLB1WJZZgVWIRXonT49V4PV6L52jGD+KseCXegpcTi/Bc/HU8F3EB/3fSbvx95n6sydqPUNsFNIzcxLynCFigw0c1MNclYIZggBICj5cAmQsmauArRIMKd6PSjLdQLRpat5vnvRYudw3m5ivhdtdhAdQWK/7G/Gxz8TI2Z0BeZwKOGA/guOkQMjuj0T9TKm4S9CqPLj8Dar/5/yiH8Mj3Qmk4o5+4xHkz0ttth9vjEJtDF90nfM28BH/yvPl9YygPWUwPARM9xakVt6KwMxaXyk+iuC8R96d1GJ/VY9RpxtisGWNTeoxN6TA6rcfojFJjswYoZcTYrFoGjM3oMT5jwH3+7rQO3DXonClARPERJFefRetMgcgT+HmkZaDTbZVwFj5v2jcueOtgH8zA1YrjEi/P77gwnwtKIycX5fMusp0hCDDuwqSnVJJwZfHI18/iQpIBMDz/rgYBem7Uon+2BC0TOagbvYOm8Ry0juehaTQbLaOc31ustvv56JgoRNtEARrHcpHfHInjuZtQ2BGFOb7/DEWi2wkXs17G3dMZhs9T2RlTWG8FfPNaoILvmRkL2ttK4HDcxrvv/gXOnt2O7dt/hcOHP8PVKwexbes7CA7eJv0pXFTLd4xuOp4ykZcY+pLwtf4AEnrD0Dp9D91T+eh5kIeeqXx08Hi+EPdhkkZlj8ssz/XhAmPhbWiZyse91mgJqPq69BBCK8/gckUQ9tkO4rTlGIq7k9AzU4rRWQPuz+hlMU8ZCc81SQ/ubPF94HfX5SkTomNePv+VIukjUy9NuXL94+9Sy14Dl6sSbhcZcYWt9rKvw+NrBpZFPr9PvBew6Z7uPHwskgpKn49cFynpY68PpYWSeFsmkhQSCQ+cNmAEzBQAACAASURBVFi6E3C0eDN6Z+/CRVeZhQbMOB2w9CXivHkvdt3bgX2GCGwvN+Ndcy3eMTVDfLnVuHey24ZGfChMty+FklHvRgV4K/HujVhraJJatAg0NuEjYzM+NjZJ0qSaOCmpkxqATbD9ma8+NzfDv76wtEDK3Pxr6blVDbd23GRrBTXd3yb4ZrFJcpHlbl9yKinrkEZJNksqDZPt2EWGm5pun1WgCra/snWApQ3A4XwpBKfzET03WW5/hluVlKhAWzsusdqdOFKxVEcru8SPWztq5SRPAtkE29I0Wd0tMe9qGI74cld1IqCqE4HVrK6l0oTgfJvgG4bePDn4ZsmZRNs0ueRQwkbJ5S4l/qy2MNuPaY5UGW2O3y74ZgDhzUulAm01BIdjZPMgolgtgxKCwyAcbdGbO1ZqELGtSi0y3G1DiPMVQbdqDSjjSmE4TJ/0Y7YFcC/TcyuBOGowDpMo/euGD1irAFs7qgA7o3MUrFtdY7jdPY7bXWMyV4/v9N5HVu8EIgwO/NsffQcTLg1d0TAMJ+FWXwROm/bjdOFu1IzfFiaRjIbCalBywQsqt0UV8C1bjiqAlRsymygVmygCdTYlkWUdW7CgfPw2rjdclW13brdz+z3McQaxdReR3BSGG63hSGkORVLDFUTVXkBoVbDYqfH3rphPIswWiJiKr5FcexGp9VeQXncZt+ou4VbdZdyuv4I7jVeR1RSK7JYw5LReQ25bOPI7Y1DQFY+CjjjoepJR2pMEQ28yLH2pMHYlwdSdDHt/GmqGbstNnmxjy+Q9tD3kDbIAndMF6JouRO/DQvRPFqL/YREGp0tkATA+Z8Sw0yieuuesJ3CnPQZT82WYcpZh2mnzlRUzLitm3VZx16BtmNIsypv0ykXAxRsovXhFfiOuEdSyO+D02qUpj3ZhbN6bZhOfh2WTmnJbhcl9OG/CJAHXQ50CuuYMGHMaMOIyome2FLWjd1H
cFofE8jMiH0msOgNjeyJGp03CCLvd5XjoLMHATBaa7ieipC8UJW1RuFUbgiPFO/F+9gb8t/QD+D8SIvE7CcX4jRgTno7Kw1OxRXgqXi9SE4Ltl6KL8VqMAa/FWPFilA2r4kt8VYpV8To8m6CXei6RYymeTSzBD66X4OV4A16OseEHMeV4Nc6KVxNNeCXZhBeTLHg5zoYXo6PwWnwYfjfyNP488Sj2l56BsS8JzgU6QTgw72qG20n2j7sEXHgpGn6G4jARc8HVjN4uPYZ7iuCcqYR7thJTYyZMDhsxNWbB1LgZbtrxuesEHLo8lB4pch7u5oy6rdAPp+G8+ThOGg/heu1lsVpMawjBpbIAWAZuwMXdEfY+cAHmW8RxIaeUTx7EhZ1vcScLKQF/atIfn7cqs1KchATkUFvelowrliBYhzNlS5/NlfOohxfN8Hrr4fEqzZdswPSgfrHc9Fj3FfXI3FlikZ3kd5lAdBxWRBpP4lbTVQyQIfVWga+fci8nmUpKDWhXxwWotxL2wRu44jiG/NZIWRwK+BarQaVJmZpvNlwG6L/CzIIJ3jmjIs+QnRiCYUor+L2gFKoeHoI/sIHUKqmb7POYc5VLubn7tUCbRFpEcmeoVl7bAujrXi9e1F0PihBacQJf6/ahb14nYFLeAy5eJaeA0fJcPPsDcL5+RXYiOmZJIu2A0ZQiqajXr5/Fnj1rcf78HuTdi8fRIxtw8uQmcKdKgm4IWOW9ZG+LFYb+JBzL34qzNSeR3RGFnLYo5LZH425bFG63R+BmezgyWsJwpykEeS3hSG8JR1pbOK43hSLKEYwQ4wlEWgJxuyEMlsF0lA2lo3T4Buof5OCB0wwvJX5csFCGRkcTPl9p0vR5erMpmGmUXjtmUAECe5EEznIRSr9uFsEvz4UqhauCR7y8FW9w7t5RK8/dSnlfBdTzGk8bTJtcw9TmbSEAuLvJz4X09bBplEy4zeer7sDMnBn2jnic0e3AiIuabwem5yth7I7AOeMO7MzZhZ3F0dhZ5sB75ia8ZajCh6YqvGupwXt0KSHI9rHbnKsR8JwrkpImrOTJTQD+sQBv2gQuT51UgbZ2/FwNwDE344tl1YJ1FrWaF2PeGfcuUe/+LLcE4CyF4DAMZxOZbikFeJPlVi0B1fERlvsbbAGXeXL7JCUCvG2PSkvEHtDWAX+nEsWtROvL3YF95R3YT9At5WO5hd0mw92Jg2yctHfikL1LKUcX6Ml92NGtVEW3MNpktZXqFEabrPYjVdHxCNOtstqLgNsHvAm0/SugulsB21WdCKruWlZatlvrSKLO/cNvxKWktmeZB/eiH3d975I1YF0fLtTRh1utflysXyol8EbjUEI7QK07iQ94k9lWS8JuCLg1FaZxKdHaAmrn4WrqpEZaorLa6iig+zFR72oKZdwKke/+se90KFHBt78vt3oske8+dpsM9yLLTR23r653MhRHLYbjKHWDcfC+oluJFnCr80Xg7dN43+ocFdCd2T0OFgE4i3MC77sCviu+m+C7vDNWtLQZfREIKj+CgwXbcas5TOQmvACLxlukDgrDQWZVNIPUAcvFmzcyhf0h6BZfYwamOMvRcz8PeS3RuFwWgAOlexBlCkBhzTVU9qahe7pIbvbcYp3wWiWdkD7RjFMmAK4eyRSAnNcchVt1IUipuSR+z5GOYITZTyPEEYQQe6DUVXsgpMoDcMVXFysCcdx4EF/lbsNJw0FcqAjERXsALjuCcLnslDynUHsQyLRHVX6N6OpziKu9gPj6S0houCyV2HhFwFV65SWk1V5BWmMosloikNMcibvtUbhoOYHdBdtx1nYSBe0E+vEo7IhDUVc8irsTUNqbCF1/IvQDSbAMpKBq8MY3VuXQDdiH0+EYvomKkQxUDGfIvHwoDdb+6zD1JkHfnSB/n4+hViEXGZ1xyO+IRX57jDBnWU3huN0Yhtst4bjTEYWbHRFIaLyMEMdpnDUfwyXLCeQ3X0OPu1i0pzOwo2+6GLb+G7jVFIJQRwCOGA7g3bu78fbN9Xjz7m78KPEr/F7MGfxufCJ+K+EOXkjIwfcSc/D9uHt4Kr4IT8UZ8HSMGauizXgh2oKXYy14Jc6MF2ONeD7OhOfjOPoq3ogX4k14Id68OL6cYsHz8WY8F2vFi/E2/CDJjFeTDHg1wYAfxtvxO4kNWJVShhdTyvBKcilejo7F/x5zHFsLg2AfycCM14xZVzmcBL1ksj2UmCge3pSIzM5Wwl5+U/yNo67uRbk5BW0NOSjJvYqkqH24d/sMmqpTAbLermrMyOLHZ9umAmWG67jL0H7/HjLqQ3HadASXbKewv3g39hj2wjCeISy1gB3ZQaB0iqCSINY3iiUgvz+aWnS3IRjmIszX6yA2dIrjDfsTSloScMUUgPLRLAANIPiekZ2RBngWGGJEBxr/IvOrKf6+Lz3QKSE7yqJ6BGaEWk4gvfEyBucNAnKdkqDJ524Tna6S0KlIsMoHrgvzXdgeDS8XG3QaWgTf9JMuQ14LA1L2SyqqZ56hOfQ4p4TBjjkvA4342AR61SK9kB0K0QNTs10GJ9NEnWSslec/PVeG8Ydk7k2YmLdict6GB7NWPJyxoGe6CMn9oVif9Rn0wzcwQxaW6Yz0evctMJaYby37vQS+Ffa7ApOTtTh4aA0OHFiD4uIEHDv2JS5c2IfcnHgcP7YFQYE7QOBP9xsu9Mh+Uw72cMEKQ28C9md+gYNFWxFZfhohppMIsZzCZfMJXLadUq5VZQEItZ5CRPlpnC0/hQvlAfJvUeVnxJKvcTIXkwtlsitCuQxdoPgayDCLh7t4l1cB0hOiSOHk80X7Vyf93OmiUoFhj1XsCmm9iDnaUCo9KEqzKBeJZLTVnykyNu4ccifAw4Wn7BzQspKN2YpFJpsuKf8TP27+nq8hnc3c0tAtTfFc2Nng8irkw8hMCXTtdDvZhdqpDDSQBGi+jhPGrdhdyCbpVGw1VuIDSwveNdVjrakOH5rrxFpwtb4WaykZMTbjI2OTRL9/UFoPiXsny01mW5tA6ZuroFsdVfCtBdvqXGW5BXCbyGy3PFJfWlrBWm9h+uRjYt6tLVAZblVSoo5aSQnn1HNL6M0Tgm9US0CV3d5p6wCBt8psy/gYPfdS8E3nI8BbawfI+X6pDqzkVKJltw/bu6CtI45uaOuoo/uJTZJapvtEZSekqrqgAm/VGpDgenn1+IXg0JtbAd2nq7sgVdMNFXirQTgq4OboD7rV40WgXacB2n5+3BcJun11qb4fK9VKoTf+ziWh38J7WwuyOV+0A/QlTi6mTjYPIFJqKfxGDcHhKAC7ZQixrGUx79okyiEsst2t1Gwvl5YI6902DBVka0d/i0CV0daOjzDbYgtIa8ClSuscg1rpPvCdwWZKTUPlLeq5NXW7cwyZPrAtAJzzrnHc6VZY77u9k4gwVOAPv4vMt607HvrOGOT0x+Ba8znsLd2F/YU7EGkNkjhjp4vbl4q1IEG3sEa0llLn6hYumZd5m2hqx5wWVAzcRrzlNE5mbEGo4QRy+5LQN6fz6RJ5kVcZFw
e8vMEzAIQ3DepFCVJ8TCBvArxhUq8857Jias6I0Vk9+md16JspRe9UMbofFqJjMh9t9++hdSwHLaPZ0A+mIdRxBgfvbJGY8js9cbjdGYU7ndG42xGDu+3RyGgOx/X6ECTXXUFC9UXEVTDZLxgRtiBcMwcgzBSAcNMphJsDEFIWiIvlAbhK8G48LjdTBr/sL9qNQ7q9CDYfwWnTYZw2HsAZ00EEmw/hawvroNRZ60FcKDv0jXWx7DAulR/DpfKjylh2FBfLjuKC7QjOWw/jnOUQzpoP4bSRj3MYQYZDCDIo89PGIzhjOoJg81GctRzHxbJTOMfmPvNxXLSelCY/NvjxdaS2h8P64K7iTCF2jw4MefRIbw7H5ux9+HH8dvwoag/+TdRxPBcWipdCTuO3467i1eRbeDZOh2ciLVgVZcBT8cX43g0dnk4oFcb76RgDnokyYhUbJ2NNeD7egBcSi/FCYiFejDXjpTgTXo434eUEI15OMOCleCNe4nG8Ga8k2AR4v5RkwktJeryYUIqXEw14LdmGH6aU4wcJZrwUW4IfpNjwcmINfpDagheSLXgxJgr/Pv4gvio5joaHt+F1myTdUnTzIoNigxklUQ509+Rj+/Y38enHP8W77/w1AgI2ITz8MPbuexcfffKXCD73OUoNIYpDBirABQnZWUojxIub0qs5G9z0WUYVmIyZ15+MSHswttJ1JX87qubyFDDJJrc5An/qzxUnHtHI+hrV6FZD0CIl/RQKMF+0l6RGXwAXQZdiO0nWVt8chXDjUVSNZPi+g5UipyC450JZkTqp7LoyCgspzLsiB5DQGdFDK69LkQw4MLZgQrQ9CFmNIRie0WEB7D8g+61Ens87deJBzsU2wV2ZD3wvxstLyA77MxTvf4b3FLRF45TuoAQusbl6zk2JmBlOt1nCiYTFJaCUxQ3lUmzEMwPU8VNKxlRRbyXG58xonMiCsSsedxtCkd0ajvyuWBR2x6GoMxalXHh2ReNqWzDezFgtC/TRh3p5L5jOy50kpSlSy3yrDPjSSDaYgUwNTbn4yev/DmnpZ1FVnY3Dhz/H2bN7kHUnBgcPrkdw8G4B3wSnZO65aODu1sSCEeb+RHxdtAtxLefQMJwNR08Gqvoz4ei7heph7ijloWWiQOQyHZOFaH6YJ5r/zof5GHHq8WDBJiFBlJvx/HCRIp9BX4M7d6doeSiyD2HBy7FAxplkCXcouVjxVGDMaZbky9t1oRhzczFDKRoXpjwXSqS9woDz7zFx1XdNpqxEdowI9uleVS/FXQdKeBhGRQkPd9vG5/SyI6hch4vR8aAYzZMFqL9/D/Xjt1EzehO2oTRp2r/oOIZ3itfifOMJ2TXamLkTHxfuw1f269hnq8Ha4ha8rWvCx9Z6vFdahdWmFrxX2iLBOGS1CaIJstfoGoTlVkG1CryVQBxfEA7DcNQyteBTqeZFWYk/6F4C348H3VrwrdoDqqMKuLXjY0G3rQ3Uc9MicFFS8k0Mt7UdO62UkywF4HCu9eAms728OrHH58utjirr/QjoXox8737EHlALutW5Cry1gJvzoxU9vloC31qQzbkKrmX0gW4VaGtHleEOrOoGSw2/0Y6LQTg+0K0Cbe2ogm4JvtEE3qjhN2roDUetjEQ710pKyHavBLgvNwxArSsN/ctSJ6nf9i+tpMQfZKvHi2D7G4JvKClRg2+0ozYER/XfVhMn1VENwFEj3+NblwNuYbrbqeNWSgu4OdeCbnVOHfc3gW6CcGG3HwO6l8D3GDI6RkBmW1sqs62OmZ1juEOw7Sv1OKv7PrJ7J6Uiv6vgu2A4HtldESjqi0XxcBKSOkJwoGQnPo5bjayuaLloz8nNX3HKkGY/tbmGEhQfi0c2Zs5Zjs45HTK6YrCvYCd2ZW1Cen0oRub18Ag7RDst3qgoY/F52soNwefTLQELbDqi1rhccTqRx6a2kDpaZZtcdJbSWe/z9F5sylSbMxV9dHPPbWTqzqCyLVlkANTtKgsIX+OoACDFw5uSAm5zs1RrQdHt8ucLdmGHh7xGDLh06BzPEVa/92EBOqcK0PggB1XjmXCMZqB8OA1lg6mwDaTA2p8Ec18CTL1xKOmOQm5nKHK7wpTq5HgNuZ3XkMN55zXc6wxHfkc48tqvIa8tDPfaQpHfHo6CzggU90RD1x8P40AyygZvomwgXanBmygfyoBj5DaqxrNQM5GNpgf3MDBXioHZEgzPlmJkugRjD4ox+aAUczMENQQgtQBj1inHWChHx3QOzhoD8eO43fjdyGA8lZCG76WX4qmUWjwbX4unQ0vwWlIRXkvKw6sxpXgtpgIvxFXh+0lleDGJIFuHF+KK8EJMAV6MLcaL8Tq8kKDH8/E6PJ9QipfidHglQY9XEnV4JbEEryQWS72cUIpXEg34QZIRLyTk47WUUvxmsg6vxevwapwBrxB0p1jwXLoZz94y4ffi7+C3Uwx4KakSLyVX4rWUe/ituGD8nwmbcKXmItoG0jFA7fOsGXPUvS6QfbPA5XKgvu4OVq/+K8Qmfo1Dhzfg4MF1ouM9dGgtoqIPoaMrH/cnTWLjOL9gFQaanweGoMguDz/rqBS7S34nCHRdCzUYGi/CdeMZXDOeRO9cicJOktFdIIAv98W385hb8Rrwx4WlML++ZjVxjOACVy11kasck30uagvDJetBlI0m+yLqyzAHPlfqcJWAHo/4JlvgBYs/p6OG2pys6J5F6kL2W1hzSmLsGFww4lp1MNKar6JvukSaLCmrkMbAxVAayhWUxkUmu161H0NuY5him+djScX1g1aWbjOK2mLwtfEYRucNSq+INPtVwO2xyYKa32vxzCZz7OZnkZIWMq8E/HawEbZ1PBe5rVE44ziCfbpt2Jz/JTaXbMJO0w58Zd6JvcYd2G/YgaPG3Tiq34X3sz7CnqzNaBrMgsvNRQ57KJSmRNUqT5WZEGwr8oslAD45qUdyagD+9M9+iDJ7Orp7Tdh/8GMcOPgproYewedf/AIhoUeUZmR+j8j4s6cDDkwumGAeSMQ1yxHoZ27KzgT7Q7yU/BC4sqlaNPPUzTMh1dfLwc8C5U2Ufqg5BkwQdtKLn/0i5fD4dikIyHnOKKmak1RZ9tZUy/VRXGZcVkzM62DpTcbp3J3Yk/SpAHy3eJ6TzaYMhRr3Mig7GQTf9CxXiBDI7gSj6Csw667AxFwZhmes6JvUYWCqBO2jd1E3fgfWgRvIa41GWvUVxJefR7g1GJfMQTijO4HjhYewN38ntmVtwbqbm/Bh6jq8nvQe/uD6z/DLwo34Re42/DRvP9ab7+DLchvet5RjTWk9Pi1qx3u6JrxlqsCHEmwzio90nVirp0uJwnCr8pJPCKjNLYqsRH5XG4SjxL0vJU624DPTtwHfy1lvFXCr43pLq9I0+USP7hZstrJal5XWJvBbg29htf2B9xL4/qqsE3vKqOVeKgHa5aqeWxn3SSCOVmLC8JulAJwD9q5FSYkqLTnI8BufrEQdD9MmUGqJ8T4qUhMFfB+r7H5ESqIA70fBtxZwc36qenkAjhZsL85rerAIv
BmCU921FPW+mDxJ55KexTpX2wttKSx3H87XLZUWcHO+YvCNH9t9ub5/EXSvBL79Qbc0UlJO0jSAa1KDUMG2Oi5Fui+x3UrwzfLUSSX4xg98+5ju5eCbrLaW5V6ePknw/Ujk+wohOAK4CbrV8oXhMBBHytdAKZKSlXTcXaO40TW2HHx3KoE4KujmmN41hpvUbz8BeIu8pGsMdygx6b4vo4BwHvcQfE8gp28SUcbvKPN9tz8SxQOxMHbHwDQQj9yRWJypOIIvrq+GaTgJsx5qP5k6ViMMFHWEvMCLfnWe2kE2F5ZhkgEg41m4YDmOLTe/QIDpMKzzuZimJy+jjmfIaqmg4v+fsXroFq6UByKj5Zqw6QT9ZAhV2YywhYtA55ueE5uyvql4863/xmKkuXKT5wJEuXkCDLaphcfF5qZ6Re4gN3M+l2rx0eX/Y7Met+WBJqWe8FiyRcwbtDRdKsBAef6KRpVb/VxEwV2mhOx46jDkNCCkKgB/nbQbr4bG4qn4cjyVasAPk3Lw/YhsrIorwYsxRXg5ugAvkP2ON+P7sSY8l2xWADZBtg9wE3T710vCZOvwcuLyeiVJD7VEYkKZiaZeSzZCW7+drMMP+e/JZrx63Ywfppbgd5LT8S8TQvCHiSfwo5R9+PjeBiQ0XkXbdBFmYIYLFvEIH2urwNEDG/A3a/8D/vPf/gmCLx7E+ct78P4Hf4XX//bfY8e2d6HXMyq9DnMwi9RCC1rVuRIFrzjrzKMOw2470toiEV93Hg9cOngE7FoBl0k0r0rAEtlEuoaYANBtQgE7dJ/wUoMrumdu7bMJjkCQf5+gmSM/O2SpK5DfHoeLtpOoHb6t6G4lHIoMusJ6iw73CZ9pylrEdo8LX2HxaRFqk8bKWHswEusuoWOqVGF2hZW3wgODPG8vzHAvEOQ70DCciXjraWQ7QuFdaIJ3ntKTZmlyZGPftKcUxuZriNYdxcCCXulncFtEzuCeL5MYdF5bJPjKwwZMh3hluwjcvXZMeh3QdyUjOG8XjqdvQKo+GJauVHTP6aVaJvLROJKDprFcacrsHM/HMBswx3Kw59YGJLVfw8BcsXzvuOvgXKA8g+f0m77rFRgaykNQwEc4fvQ9cfxwzdXiVvpprHn/P+Ov/uL3sGHdT1BXc2vpmiDMfZlEt1MqYhxIRYjpBLIHE6Xng30fj6s5tw0uTyNmnXrMua2KdIrNofPVmJ/Kg8uTBtdcHVweo+wUuGbtcE3b4JpnWmsRJry5eOhl068dY149JjwGNM/eRFp7APYVfoQPk3+O3Xe/RKvXhElvFWa8DHqySgO6NKETaMtOpo8Amac8hQuhSoy69TCOJeJq7THs0G3AZ0VfYG3ReqzOWYc3cj/BP2Stxd/f/hh/n7ker2dux1+n78Bf3NiL/5J+GH+RehR/lX4BP84Mx+tZcfjJnUT87e0U/MPdO3hHp8dHtkp8UlaHtSaG3jzBqcRAprteyt8qkMcSfGNsxKdGht8sNU1yrjLaHL8waVxKfG4lqh2gOi5quFUt97eQl5DtlgZKNlH6arOtFVs0tbWsDdraVt6O7eXtEn5DD261dto7sLwYhtO2WLvt7fCvrxwdEn6jWgFy3FvRuaz2UbOtcShRWWztyIRJ1f5PHVeyAdT+bNESUBP5fry6C8er2heLvtsrFV1J1FJdSbQjvbiDajoXSw2+8R9VS0COWocSdU6nkvP13Y/UhYae5ZaADL3R1EqNkytJSlYC2NqfaZsopZGyeRBhzU/23/b33mYIjjb0hnMG36zoTKLx3n6S77bqvb2S/7Zq/ccxpXMY1zuGllVq5zD8y98iUGsPKPOuEaR3Dy+rmz0j0FZGzwhudY/gds/ossrspYuJWqO40zuCrL7RxVKP7/aPIXtgHDmDE4iyVOEP/+RH372QneL+GJR0RULfEQljfzzu9EbgpPErrI19A2ltl9Hn0sElDUi0rGIDWRXA5MRZbltW44HLjJGFMtzqjMWRnB04kbkNufURGHfZMLtQjlknWSwCQMX/9Uk3vn/KfxfwbQ/ErZbw/8ngW3EYUBqTlth4/2OCJzasKeCKMd+VcDntklQoQNzHTCn/TrBPEKY07SkpewT4BO+Pfwz137goolyIwUbzLoVlJECT5k1p5iPrqYd7nqCwA+OwIKb1JH6auQ3/Iiocz8Sa8f0UC15MLAabIp9PsuCFeANejCPINuKZeBOejjPg2fiSRfDtD7h5/GKiQcofdPNYBd3qqAXd6lwLvDn/nRQdfjOZchQzXk2x4NUUI34zKR+/G5+G34+Pwb9LCMW/jNyKHyduRkJzOPpchZgnuPC0YbjDhu1frsbqd/4OP379PyEweDfCo4/h/IWdCAzchD27P8T2rW9jYaFVwpC4I6D4GpO19pU0zlK3rNScpxJDs2Zcrw9FuD0Ig1P3MMsodQ+Bt/peE+wp76UCrhW2VeLlxYqTbkJ8n2tE3kJGUgHd1OJyrgBvLnrz22Nx0XYKdSOZS0FRbCb8NcC3LD7JtNK/fp6MqrKwnFsoE40yex/aH5YoQBoN8vhetxVul1lh6iW8qkasEKMcAbhdE4IZNIj23IVasb67DytapnMQXhWAL/LWIawnBAnd15DUfBk3mkMRX30eUVXnEFF9DqEVwbhaHoQo/qzmHOKaruCyIwBHi7/Clsx1OGTcgzvDCRic0mOKSZELdZKEOM2GQgJ2L6U3VUIOuLwOTDhNCKeW2npC5BxkiBWGmxrwJ4NvNouPjxrQ1ZENl4vMdism7tvQWH8LZdYEtDRl4f+l7r2j27qu7P/fd9ZMnGTipmLJ3bETJ5lJJsmkx07iFndLli3LEouoYhVLlqzexCgPPwAAIABJREFUe++9URR7b2LvBIgOEOy9996LxAYS2L917nsXfAApUcl4ZiV/nHUvSIqgCID4vHP32ds0TOmh0oFFYciU4t3TG4NxKmoT9t3+Ckc1e+9f6j04nLARlww7cFKzCfuUa7FDuR7bVFuwTf01tqtXYXPidmwxbsT+nP04oNqD/YlbsS9xPXbLvsQO5Zc4qN6Coyk7cU2+EyfD1+Bw8AqcTdmCKzlHcDRjLwuYOm7YhRPyrditXYu1cU5YGbwAO2NWwTfvHPJ7E9HPArcqYBkpwV2zHlktAXDTH8G22M1YHrUdDrGH8Wn0WXwSdR0LYrzhkBCGz+PD8Xl8FBxSkuEkV8FBZsAieTac1EVwUpXBUZ0FZ10GXAy5WKzLg6MqFw6KQtbRXqyugpOyCs6qsinhW2oRKIVvqV2g1KlECuD3g28O3Hxl6ZMS6L6nrltfZrUIJGmJfXdbAG+SlggALoVu2hN4/z3wbQ/ddJvA+2+Fbylw8/24S8nUqZPcpcReUiKVlXD4ngy6efDNVOAthW974KbbPASHwzeHbenKLQL/Fvi2h266zR1LHgS+rxU34TrVJLpu3t12tYNvG19ubhH4DQXf2MM399vmq9QG0N6p5H7wPQG6a9sRUmPrzT0BvEW7wNu1beBFkG1T9ROhm0P4OHgTgI/DN8G2FML/KeBbXeMOQ703k0bI6t0R
33gLgdWXsTPhS+yIW42LGQdR1ZvIut+kI2Tda4JwE3WSslF0NxHX00/gQPJW0DAk2Y/1DupgNtGbu2BdRW4KvaC0R/HNaorO0zcC4JYs5LbeZp3vyAq3/yF8379TJvy89+uMi58TgzhoqGuUOQsQhBFY0SATWdAZxHTNHIyMEBTT5wRpD/+d0Mfu34UXwnUIqIZZOJEgm6EBPmHgj+BF0BOPggawclDfLUdI7mmsiFmM3wVtwnOBwfiutwb/n6cM/+KrwXcC0/GQH/lxk0e3sD5EH/dT4Ds+qRO63FLotodvDtrS9dEANajuBdwE3dMCtaxmB6gw01+Fx/21eDQgHY8FZmBagAEzfdV4yluOF7xT8XjIbcy6vg2r43eChgJJrz00mAdDZijee+/3iPINxsnje7Bt53J4eB9FVm409PpwnDi2HutWz4F5sEBwbWAdQfvHXrSWE4HVbClkQ3/hxTdwK+sk2gZSWKqpAO7igNxYHnp79aivT0ZTkwxD1Fk0l7KP1dYmor8/HWNjhWyAUnguCI4UBODjMgnB1i2piixBj6KokwYuSaNLWm/BsYR00w/S+Sb4FlxYSOIheOnTc6N7WI1rhmPwLr6EijtpGDEVsHAdcj2h0xR2CsMsMMlOsJhZ/4XknEBC4VV0WTJROSCDqiYQgTmXGEzvU1Jgygp8Fb8Kx40HcC79IC7rDzEwpvuhQdVLWSdxJfsUrhiP43rmSVwyHMaN7JO4nnkMNzKO4VbmCSTVUbiSnskgmJWe1YmDXhukYabTN8H1QxjMzIa+PRw74zdA1hSAOxQDT1I59vuiCyr7x9T2Nr3mRmnWxZQH0widOuWyC+Th4SwMDmTBNFIAmMmOkjcUhNc3cysyZ6HtrgqqSm9E5FxGbJnbfSum/Cb8Ky4gsOIy3AuOY1viarx160388tbreDViKX7puxK/cF+MD8PnYZdxGwJqXZHc7A95iy/krV5IanJHUq0nUir9ENMYhNtVnrhdeR0e+cdwSL0Jq2JXYl7QCjhEH4BT5Bm8E3cTH8m9MCf5Kt4N24s5AevxdfJOeJRfREzDFYQU7Mfh1K3YELsZq+L2YYXsCr7QhWOJToZFch0+T86Bk6wMS+TFcEktxOLUYrikVcBFUQPntFo4yhvhpGyBo7IdjooqOCvL4aKuwhJNLSsXdR1cVLVYrKyDs6KOBegwi0BVyT3dSuzhWwrdrOst2gXeC7o5gNv4cUu8ue8X+T6ZppscSyYANw1QilpurufeQKBtrIQ1dXKSbjf35ebx7tY1sxpbM2uwNXNi8M1ksM273vYdb7q9M6cWu7KFVEkO27QKwE2BN7zGO9/S7jbf8y73/aCba7qtwTfWAJxqa5ebQze3BeQDkyck1oDjA5MkKZFUQS04bPNVCtt8z6Gbr1L4ntDxLmkAhd/cD7ong29pd5v2vMMt1XMzX+4SkpjYykvIHtBqCyjZc4cSWnlnm7rbPNqdhd9I0yYrWqYclPSR+G5z4Gbdbjv/bSl4S6Gb76nzbQ/cdDuEoJuXGO9uD908/p3W8Fo72BbhmwM2X6PqOsArur4TE6sDMfXtiG3oQFxjJyva89sJzd1IbOmFhz7vH7Pzrav1ZF3v1AYPJDd7IqXVC2md/givuYozmfvwRdRihJReQcOgjB170lAXaaBrhxWIrvLAadluXJDvQ1K5O6r7UnCXeUoLum4a7GJDmBS7zPSytm9yU70J/o8+T/DdchuXM44jqvKWFb4FzbcgP3lw2clUP/e4XlTocE52m74Hdaypu8mBOodJSyjtbcxsYINMBOD0dQTndFpAjgLC19PKO7GTff/xj1GXj47uBV9j0pTmYJTF1hOACN69ZCFmNpdjAGqE5ByBs78Lfn5rLZ7zv4npQWn4trca3/FLw7/4G/AdX9Jzk1WgjslR/tUnnQH3I/4yPOybgu/5qScUDUs+7K+x1iN+pPdW4VEqfyrSeQvFoFsiL5lGsM1LhO7pgVpQPRmowRMBavZ51vkOysDjgVmY7p+Dmb45mO2diyfCS/C46zW87bMZUcU3MDamxeBdLbQFwXh7/qvYvmUrnBw+xP79q3Dl2g6cOf81tu9wwpcr5+LUkbWwDBFcCRcxNNQm+BkL8EsXNVTMapO5T+Sgu0+J0LzL8Mg+hc7hNObyQJpl9niPZaGtVcNCW7Zu+QxHj6xAcpIbKitk2LfPBZs3fYob13egpDgKphECSboQ49BNjzmBOD1uwhBjYqUHszQs7ooVL8Io3OVvg296XZHdJRtwHsvGIJtzyESXSQP33LMIqrqJ2iGa0yhgjifkD0+Dl0Mj5JlO+wLQRYexPRxHjNuZlWBw6WX4GE7ANWkfbiTtRUDGOURXukPW6IfM1lAUd8WgrDMGlV2UNprEhqMpbKisJxkUQlTdI3p2dyWivCcBZZ3RqOlJRGOfDN1DGiZJoQsGdtHABmCFAUp2okYSK4Jv5h1O2ukcdJgM2Je0Bb7lV1E/JBMkOsyjf+oGAINv9jrMF3zImWc8XQjT6YRw+kcr6dWZZINdpOWyWQqyu6RQrN4hPdpp9mBAOWW1DKlQ361CXa8SsnJ/bI1ci/cCFuJjxRm8kxiG3wUfxB8u/QkH5RuQ2RmGDpMSPSYV+sY06B5WobVfjo4hA1QD0XArPI+Dsl3YELsRa5J340vNBTineWFhahIWJmfiI1k+PlTkYo5KjzmKJHyU4o8P4y5ibvRBLIjegcXxWzE3+AA+jb4MJ0UYXPQKOOgy8JkyD5+llWKhvAqOaQTPpXBWFMBJUQBnJXW6i7FIWYyFVKoSfK4qgaOyFE4KoajD7aKmLneZmERZAhdxgJJruaUrH6akVerLzffSABzac4vAFdoyUK3kpSvHSrFW6cYtAiezB6RAHILttazKQfaAvNjwpGgTaA/eUujme6bnFsGbS0qkK5OWZFTDCtviXgBugm5e4/DNAVu6MtjOqgHJSqS1K7uGATdBNxWlTu5l+u3xsJt92bWg4qE39wu+mSz8hoO2dBX8t0m3PS4pke6lshIO3Ry0pSuF3vDgGw7Z0pVDNq338udmuu6iRhaEc6GoHlRWWUlxPXgYDofuy0UNYEWdbjEEx9rpLhECcKirLS3qbjObwBLBl9sevCdCdyu4SwnBtxS2+f5/I/iGhd2I0e6TdbqtloBTBN/YSEoohXIS2Ga2gAyyCbTHi9kEUhBOHSVQjgM4h23pGlXfMQlsCwAe00A2glSdVtDm8M3X+KYucPj21OX9Y7qdqBq8kNLgjqQ2byR1+CClyROqRi/oOgIR3e6JzWnrmL+zvNkXHWYtekb1yO+OgWfhRRxQ7YJrxkno60LQOSSkCNIbFHXRSGPJkipZoAYlQ1KXjQBxKpD9hj5P8N0cjssZx5jVHoNu8f7pDZzeNAUQn+r+xqH23mBNb+gExvcroUtH4SKC9ICcDCiciJwMhBL29Pl89iZPEhOSplBnj8lPmASBoO5+90MuCEIojAAG1KkVYIuAiz0GLMKawnwa0GyOxV7ZSvzcdTUeu+WB7wTp8a0AI4uJf8RXg28HyvDIrWB
M9wjD93zT8K8BWfg3/ww87KvE474peMxPxlxJHvHXwL4eDaDutJYNU/KuNl/t5SSsu82BO0DDQJsDN60zgnSsZgfq8QQDcTUeD1Lj8WAtpgUbMD04E9MDczDdLxdPBVVhtk88fu5xCEeVx1DTHYZRkwr1bXJc9T2BL792wfbtS5GS7A6V2gsXLq3HypXvYNd2Z2SlhwHk802dUtGbm3VVKQyGFw3XcamHORttPTIE5Z6HV94pdJsUGLGkM0cP9jhb8qBR+2HP7uVY8+U87Njugr17vkBCvAeWLnkXx46ug7PTWwgLPYHuTiXzxxecJgQbOHqsmWSCANucAYJvsu78n8E3AWQ2k51Qt/wu04DnMLkG+UxfNxxDUvktpJMvfp0fdHW+zNPb2BiErOZQZDWHQ1cbCNes41iVuBJbU9fBNecYQnPPQ1HmjryaUNR2JKGTdMkWo6ARZ/7S9PyjEmQ0gp+9ALNsPoE67MyDPBdDYwZxyJXmS8jNg7TJQvAQ829nIUQk2aISbO+o8zxKgE2BMcjDzazTOJ91FNmdESxyHcMGQJTx3P/vkDCMSK9PeiwE6Q+9Luk1LAQg0QmUcJJEXtj0NdR9FwfJRXccOs3iF9r3W+l5NTZUBoupHD13cxFT6ol1qdswL+kSPlYY8fOg83jPcx78i86iZVAmnJAx+Zrwt4FSQusGFLicvgV7ZduwKfUslid7wkkWBSeNCgtVGVigKMFnqVVYlFaFz9PKMF9RhPnqAnyqycbcNC3eS0zCe3GxmJMYh48TkzBfrsJCdTY+1xRhvrIE89PKsEBRBgdFMRbJC+AoArejqhgOqiI4qAuZH7eDtgCLWOXDRVsMF3UxnJUFcFblw0VDlQdnVRac1VlYqs/BUk0RlqpKhVKXYpm6bEIt19jquO213KyzzWFbXCe1CtQJcM272dLV6lIiiXy3h2x+e72hEl9PUhvSK7EhvYoVpU9uMgrFu9ubM6qwOaOaFYduHu++NaMGvJg9YGYNtmXWYntmjU3SJCVPStMmOXDzIBwW7c4j3sV1T04d9jDIHg+94QE40tAb2nNJCa32HW4pZPO9dWBSDL/hLiVC6E3teOCNGH5DMe+8bGBbHJQch25hWPJMfh3OFlCNQ7YQhNMA5sstJk5KbQJpP3F4chy8OWzzlXe1CbitsC3u7bvbTE4iykp4V5uv9qBNt62OJaI9oFuZMDT5YME346E3FH5jtQK8V/AN+XD/DcE3/jVtsHa1H8ilpB32HW1+W9rZ5l7cPHXSfo2k6Pe6Dmvx7jYBd1R9JyvqcsdIywrcXYht4NWJuIYOxDd0IqGxi1V8I+07kdDUhaTmHiS39MGLOt//iFaDSfW3kNjmhfhOH8S3eSK5/hZUNe7QNXhD0R0A98pz2Bq3ClcyDyKp1RfxjV64nn0ch9N2IqDsBnL64tHPJufFoAkaPqI3QQbfBK5iit1wuhC88H8I3znNYQy+Y2s8WfeSWa4xDTUNjf4t8E1vvFPV/YFY8NOlZD+CZ4JrOs4mqUkW+vs16OtVYWQ4HcNDBrS1pqC7S43+3nSM0jCmWejAMVswNqg3xX0RMLBhykyYSXNP90kXHiwEiTx6jaxr128pgKHPB0viVuEFzxP4lp8W/y+sBP8SosG3fVLwuIcW3/N0wzM3TuDFWxcxwycC3wrQ46EAsv5T4HGvJDwmupU84qdhkE2uJUJp8VjAeD3ur8Hj/mqhJJBt7XATcIs1I0CLGQTc1tJhZpBQswINmBWox8wgNaYHKTEtWIEZIUrMDNViRoge04N0mO2fh9khejzrewtzb++Df/5ZtN+Nx/BQBhrbMxGfcou5WPT0ZjLJR0F+MOLjL0Gt8sXIcCkGWfqkQeiW0sWSCOFsZb9HUd7BLuJyUd+TDN/cM/DMO4WuEZkI3+TcQacM+UhN9cKWzYvx1doF2L1rBbZtXQKDPhpRkW5ISfbHws/fgLfXPnS2p4nwTa8b/hhz6KPgpUwkVpLshMO3eOH4N3a+GTiKoEiAOMCGkPPZUGBYwRW4yg/AU3sQt3IPwjVzP25mHYZ3wVn4Fl2AR+4peOWfwc3Mo7ikOYibxtOIKndHRtttVPYloIsGA5kzUT5MDKTp4o8GHUVHF1rpd0jwypySqHNPw8FCQBdptplXNJ3ekOsRwfuY8FiwExwKnCIPdGYjKcA3/b0RQFe4QCLva7pgUjQH4aR2L2T1vuz/hpEHh28arhX8rmkwlh4Lug/6GIG20EQQhrfpdIo+JpxAsJAZFjBDP4vwtfcDb+GUSgPLWAnMIzQHUojqARmuFZzHJ5EH8W5yHH4WeQ2LotdC1RmGu3QSwtKG6TmYD8toLvpNGQgtuIjt0cuwNfU4NuoTsVRXhE8UNZgjr8ZcWSkD5/kpBXBMLYFDaik+l5VgPlVaGT5XVsJBXQNHZQ0WyirgoCnFIk0JFiiFf/e5ogKOygqm43ZRFWKxKg9OynI4qatZOaoq4Kii2+VwolRKdQkcNcVYoqWkSYLvQixWFmCJhiofi1U5WKzOwhJdDpZpirFMVcpquVpwKLFfv9BQN5t3t2kV0ielITiryJvbWuVYrZtYa3TlrLM93t2WdLlFf+572QNawVu0CLQPw6HbtnHvVbBCt1HYc+Dm69YMciypBvfilq7bGXjXYkfmxNRJDtp8peAba3HgJpkJqzoQfAvDlNXYz4JvanAgWyxr8E2tEIAjxr3fD7ytwH2P8BsWfDNF6A2Bt9QSkPZSdxLmx80TJwvqcU5S9qDNb18g6GYl2AaOh+CQPWADLhXVCyvrbEtCcEoamab7anEjpMU63JIgHCHuXdBzc+CWrhy+rcB9H4tAj7JWZhEodSWR7gV7QFvvbQ7fUg/uvzf4xtrp5sE31tAbCr8ZT5zkoTe0UvCNfcQ73Z7Q3SYfbjHu3X6NrO9CJAG2RFLC91JZiQ1411OHmwP3+BrX0IWEhk4kNnZNqKSmbiQ39yClpQ8+hvx/TPhObHJHQpsXYlrcEdvgBnmjJ7T1XlBWuUHV6ouUDl9cZeE7X2Gr7Cvs0WzB+YxDiC69hS6QI4ERw8yZQ+j6MM9gseNDXSl6AyJfWura8Tet+3edpupEP+DnLVnIaRLgO67Wywa+WUAEHSU/0IUAvdFOBd4CIN27M86/B+9mUvwzHaEXoKY6DvGxZ6CQ30B7qwJ1NTEIDz0Ef5/9CPA9hKqKZAwNkuaUjvvJs1pwybjvfdHveljP3ExGh/QYG03H8JgRg+Z0DJHbgdmIrlENFG1eOJ2+F28HrMKLvpfxPf80/EuwCv8aEo7v+QVhlls0ZlzdiNcCNuPPPrvwnNdNfNsvFQ/5G/CojwLTvGTMp/tRAm8/DR7z19oUDUXykkI27aVdbb6fGaCFtJ4I1MG+Zgbo8USQHk8EE4yrMDMoDU8Ep2F2iBKzQpWYGZKGGSE6TAszYmZoIl72Ow+HiF0IL7qKXkpYRBkDnVFzEfOxFmQEOczHmHTYI5YStI9q0EPx35Z0DDPPZfqdUaqoFv0mNfqGlegZUqB7MA0tgypkNIfiWuYR3Mo7gY7BFJ
CWXuh6k7VcPjIyIrB5kxM+/PC3WLr0A5w5vRX9fRUYG22Cl9cprF/3OVKTb2BogCCPpEmkIaY92frRc4ZuC2E9TPOdTprvaEHv/Xdovgl6hW4xyVvIDpE6tCShyEFeRwx0Ff5QV7ghqeYqEqtvsORYL80JuKYewsXYnfDVn0ZQ5jlEZVyHoSgcZfWJ6OjVYhS5DIzHLDTYm4thSz5MNJzJ5FKihzSXXNFpjERGQv9HOk0gxw+StVFXnOYUBi0G9hgQ6NLPR3Z+BOVkuye8fsV5Cvodsfui9MdMjA0ZUD8oxwX1fkRUuKJrTCf8vh5o7oR+H/Q6oxkMIXCGXm/kb02nUwTatJJLiyAjE2cz2EWACOLMQpEuMO5fFLRkssiF0yrKOhilQXUtdK2B2Jp6AO9GncarMi+8F7EOEbX+aB/SstRJ8rEfG8vH8HAGqgdS8PXtxdivPY/N2nC4KNLxsbwaH6Y1YU5aDT6RFWKRsggOacVYJCvBQlkJg+8F8lLWBXdIK4OzvAwuqSVYnFQMR2UxHNQlcKCutpLSJEuwWEUJkmVYQvIRVSkcGXxXwZkNTpbDUVEKZ3U5k5U40b+h5ElVOSuSqNC/XaopZ97bJC/h7iXLmEXgROiWWgSu0EhkJJIOt7S7vVpbji8nqTW6CvBaqyvHWl2ZtThoS1eCbAq/4UXyEftiHW7Rl5v5dE8S+07gLUS8k1uJIC9hsE3ATV1u6mxnTCwO3XzdkVlrjXznsM1XaZdbquWeTM9N8pIDWVWspN3tyTrcbHDyPsE3R8kSMLfWLvyGwnDGA3BY2A35b4uBN1bQFuUkrMNttQUctwRk0F3YYA3C4dDNAdu6ShMnCxpwoaDBGn7DQ3Bo5Z7cVkmJKC2x73DT7WtURY3CsCQfmrxXl5t03HZabg7dbqUtEKoVt8psy728DVSe5W1C6M0DBt9IgVu6p063vf+2vQd3QFUbAiUl9eHmewJuaeAN39tYANqlTvIOt3Ql2BY62+LKYLsLUfXSEuA7uq4DrEQ9ty1wd4HgemJ1I75RLAl8E2xLIZxup7T0IrW1Hz6GArz8j9j5Vnb5M613Yj15fXtC2ewNRb0HVA2e0LUHsNupPT44od8GZ+952JvyNfKGE9gbNutYoZC90dKbJYvEpjdVeiM1Z7F0OvKnpTd20sk+GOw+IFxPAc4kG8huCmWd7/g6b1v4plCfbxS+hU7Y/S8qCNAJpgjMCLzJGq4IEeGnsNjxVezbtRjFBTHIyw3CmVMr4ezwKl757TMICzqLnm56c6cYbQIO0oRP9TsiUDFijOQnoM6YHiVtEchvD0fNoAz1wyrImn2wMtUBb9yYh9/fdMIvAvbgae9reNjjFr7jfgbfu3EAT904jp+6LcKerD1YlbQVP/M7g+/5xeIh6m77KPCEnxbf9c2wAW4CcA7c0pXDNwdt6colJRy87YGbbs8K0rOiwcuZwXrMDk4HSVBmB2jxVIAGTwdp8HSIGk+FKDDrthozQ9LxZFgWZgfE4Hm3I1gQugmyBi/cMWlhGi7CCPNJJiAkGQR1E7NgGs1F91gm8rujYewKRE5PGHJ7byOrJxTGriDo2v3Z60He4I7kuptIrHFFdJUrfHJO4ZhuJ7xKzqJ3SI4xC3VYBR9nsi1MSnHHtm3OWLr0Xaz+8mMcObIWPT3FqKzSYOXKj+DqegD19XIWF0+ab+EilfTeBHs0eEuAKYTXcLeTgv+R24mgYyfgpW4yhQkJF825uMOgWfDcHiVbQHMW+nq0iI0+h8sX1mP/XgcE+h+AWnUTMXFXcOnmfly8vh2JqVcxahJdjaizzb2q2UWmEGVO/xc2QMokPWLgj4lWgl2SUNBFEHV0SWpC9oskNaFTmgyMkZSH0krpeU3dX3KEYV7q1FGnTjp11DOY9IP0+GY6RTIb4aE/Bt/CC6gelAkXMWLX+v6vIfp56HUmaO+Fx4M+JsxfUJd9mNI6ySOfDV1SBgJdHI//DaTfLf29M7GLBbpguFfRz6+FBVrhJGCUTqq06LwjQ0TRdSy6/TX+HO2BnwaT68leZDdGYGiUMhDyMEhuU2NGpPeE4LOg97FWewOO8nDMTdbio9QSzJOXY4EsH4tSM7FIlg8nVS2LZXdQFmChqgCfq4uwUFMCBw3psMvgIi9hA5QO8mI4KopBKZJOinyhVBTxXopFyjLMl5dgYVo+nDWCVaCzimQleViiLcQyXTGWqguwVFPIgm2WaSqwVF2OZeoKLNNUYrm2iq1LVBVwUZZjmcoWvKXQzfc2Gu57pU/agTcHbulK8C0Fbb7nXW1aSU4yVVebdbgNleDJk3y1iXs3Cl1tJisR95PCNg/CETvdBN0E3Dsy67Ajqw47s2rBYZuvUujmew7f48OTwhDlvpx6UO3PGQ++4fBt392WQvf9uts8+IavVv/tPEqbFMoK2/cAbPsut9DpHoduSp2kIBwrbIt7ezkJ8+OWpE5y8ObQzVcO35NBN5eWSCUlfC/tbNOeAXdJ87icRCItsUpKSlvhblceZW2Qlle5re829+G2Db5pxTcRfEMe3Byypas0dZLDNl+l0M33lDopBW2+l3a3Cbx51Lt0ja7vxnhRAiUNS9pKS6TdbYLu+PrxSmjoxmSVKHa+CbalAC6Fb9/0AvzoZz/7x7MaJJmJrMED6iZvqBu9IKu9xaQn5P2dUucGWY0b4ro84Ft7HidlG+Gh2Yf6wRQGj+YBvQAMo4LkhB0li284dHRMzgNsOI0BuC0wCkex2RgZMaKvT43eXiWGhykRMAtDQwb09alw544GJoqItroK2H6P+76BWrIE+DYegz18Uzoc6UYf7GKAoJm6j/RzUNFeWvQx+hpJB4514ehnHf+YoMPWsuE/+j2xYA6UIynhBpa5vIXd21xQVaaAxVyOwTuFUMr84OLwJmRJXrh7p4B1uaiDLfwMwvflPz+tlAIqvU3dwX6Lhtk9ZjeT9dk67AlbBbfcU/Avv4Svo5fgvzxfwWu+zlil3IHFso34jc9GvOh2CC+478fT11fgPz2XYXHCl5CZ3HCu4AD+EnYa0wOi8W3y2fZV4ckgI77rm20D348HTITvaYE61tEWgHu86z0jUIOZgRrM8FdjRrCefc39wJufn5x0AAAgAElEQVQAfFogfa0Os0My8FRgJp7yz8DTfkY8G5iO54IMeDZIi9nBSjzvb8Bzvrl4JjQfT4bH4PueWzDP+zOkNXqid6wcQyDfeh3MYxrBFpCeE8hH06Aax2PWYle0E3anrMYe+Vesdsu/Aqu0dWzdJfsKO1PX4phsKy6q9uBq/nHIOgPZc8FiNggyCgp4sZTi8tXt2Ll7CfwCTuHGzf1YtWYO8gqTsH2XC9ZvXIDCkhSYTEUsyttkMkrsKElWIYAr6xJbspFU5YULEp9v9jpi8elC6iSDV/HiTIDGia8ZFtYimQsYpq+nxMRhI4bJd9uUCYuJus/UwabuagkMxnBcvbYXH839NTZtXoDQ8HOIjrqBg7vX4OMPfoPdOxair1vL7EeF5zddkAoASmE51AEeoFRLUMgPdamF4UmSh1BoD
IXP0JAwe82QNSkFzFBKKOmhSd89So4l9Hn6/4ge9SQjoa+hoWR6PdNqymQncRSUNTRmhJ/xNHNNyemMFDTkzMbR9ncivH7oNcSLfnZqGtDrjTr0wgkedbzZTAs72cvFGF0ksI/RSv9XCroRGg0slZRgnctsJHKl8Y6/IF+inAQKgxqyFOCOyYjeOxrcGTSi6m4yjuUcwV8Cr+KXkVsx5/oSJOV74O6IASNj2RgezcXASDbSG/2wJnoh3o5YjT+G7MFf48IwP60IzmkNWCqrwDJ5ARYkZ+MzZQUcZaVwkpfCQVGCBapifKoqwieqQnyqLsB8TSEWaAvgpKyBUxoNVpbCSUnWgPlwVOfjc2U+PlMXY6G+Eo7qAgbbS3Tk052Pxcp8LFMX4QstyUgKsVxViGXqQixV5WOJMh8uClrpYyUQoLoCy9VU4/D9hbpM/Fy5dV2hLcdKLaVO2obf8K63NfxGJ+l8k7xE0vGm/Vp9Jb7SV0yAbyl4C/BdMQG+pXIS2nPYlq5S8ObR71LwnkxeQsmTFIZzv8RJFoKTVWMD4By4+UryEhaAY3Ut4eBdx8D7b4XvieBtG3zDoZuvJ/ImJlBK4XtS0Ca4zq+3KaHLPVkYjgDhF6TdbjGRkqVP2sH3pUlSKAm+7wfeBOAcuPl6YzKrwCnAmwDcnUoK30xiMjV824A3Bd9UtMG3otWmJuty84+x8JuqdpvkyQBxgFIK3bSXgveDpE7y4BsO3HyVgjfvekuhm/bR9VTj8B1TTyE49wZvgvCp4DuxkTrd3Uhq7GKVTDITAvDGLramNHcjtaUXstZ+/MPCd2qjB6YqRbMH4htv4mL2fuxXbUJ41Q2M0JtJnxYYzsbIKAVMEIDavqnZ36Y3WuqW05GtyaTH4GAWjOlh2L1zOdavXwCF0g9NrQbExl/H/gNLcPHSZmTnUJhFNQuXIfkKdZ3sv++kty3ZyCbZSfpRJDX4CmDKbPrEN0wGq1N/L/6GLFwAkCRAGF5k2k72s9CbrhBeQ11+1oWj9E6CAnZ/QseSumLsNgXfDNPnqOtYgvp2DU5d3ICTx9ejtkwDmBrQ312EA3tcsHXTPFRWJIjhPXTfop+4hdw4CDIojVGPESgxaJHDZFGxLhr5S1so7ARGjJl08NEdwW9838S33F7BLK+P8atbb+JXF57Fn93nwLX2AFpMyWg2JWCnehN+dusDvOI9D9tlqxBTfRV9PfnotKTgtH433gs5j+e9U/Dvftn4brARjwerMd2LbP+UNjUtQAVpTfdX4UV3HWYFk6WgAk/46ljXfJZ/Cp51j8D0s4F4NDILTwRpMWuSmk2wzStQi9nkehKkm7SeChY65E8HqvACVbAez4ca8XxAMl6+ehbOvpuRc9cbnRbq+FLAUSZAIVBD2TCZ8lEykILPQj7BjYIdiKs9h+TaS1A03EB2hz+q7sai3aTCIOtwEryTVEMYHhSeJ+MXW9ILr8iwy1juPBdz3n8Fy5bOw56963Dp2hHMfPIhrF7vBO+gayiviMPwEEkdCEwF4BMe72yMmcgSUrjAi672xImMQyhqj8KoyYh+5OHuiE6IgSdIZc85mhEgSzzqwNNrMk+0ThS+L3XmR0HWfXqMmin0RziNMbGTEgJxGh7MFEOG6P9UzLyuLaOV2LfDEV439qCxUolRUy302dFYv30BfG8fxSBziKGgILpP+nnpApechWjGoQBDA9TFL2BZASRNGWFd92wM0d8OJuEguM6AZUQPM1ltmvMxainACAVRjZJUi/49XfgSfGcBJvp7IiREDiIH/WSviTy0IR3FljREN/vimGwnLqsOQNscih6Wlktx6cLfAPo5Sc7CE3WHkQEqStcdYxHr5HVOr0nBkpMaCSMwYsisQ99dNerblTAP5wFDucAABRWVYBiV6EUR+pk0Ro/+UQprEnXtNONhofugBF/h1EG4oFAxiVjnkBE+eRewTfElLpTvh3o0CPI+dyz2/QveDTyAD7w34FapGxpMSnaRwe5zsATDAyVQp4fjpGoTlkW8iU8j5mFh6nU4ESir67FIl4dPk7VwSGpg8hBHVRGkRV8nLWdV8QN4bxdD6rnNw26kNoDkTELBN9KS2gGywUkx7GalZOW+23x9kOAb68CkqN2eNPiGWQSWW/23J/XhTq/ABrGsFoESu0DBsaQSm40V1uAbHoJj78O9jXy4JSV1KaH9jqxqq6RkJw1SZlbbQDZ1uknLvSer2qb2ZtdAWvuySctdBW4JyFduDcjXe32cf57CcbgHN61HC2on1LHCOhzLr7HW8YJaTFZkDyh1J+F7qUuJ1P5P2NuG3pAloNWdhPTaxRPtAGlYciqo5p1t6cptAfl6nYJvSptwo7TRWq5lTbCvm6VNU1oE2tgCVjSD7AEnBOFUNI8PUEqGKW0kJdydRLJK7QBpH1DVisCqFmtNGoJT3YrgqvGycSypFny5Q2ttQ28oBEcaekP72xKHEu5WInUoEfbtiKpvY0WBN/bFvbi5HSB3J6GVHEp4JTR1IrGpw6aSmjthX8lNnaBKae5ixfepLd2QtfZA3tYDv/Qs/OSnv/zH63xPBd70eVmzO2TtXvAsPIl9yetwxbAfDSY58wgmn2DqWDG4nAK+CSZY15n5WWeioT4Be/cshrPj21i79lPs2OGIYyfWYPPWhXBxeRMrV76HXbsd0deXDdPoeAAJHf9OCtzS+/+G4JsuKgSbPjvNOj86p+EqBgJCaAlJb0yWbKZ1ZdHZ5HwwZIBliLSj9HMLcdpk80Zx09Utchw+tQonj69DS10GTEPVKCtKxgfv/Ad8PPeio13Hut6C6wk5Q4y7xoxa0jFiJoiiKG7x5yOIMVGnsQgY0sFkMSCs+AJWxCzC7wLn4EnvFXjc9Qhevnka7/otQnS5G+6M5GFsuAHpHSnYY9yFz2KXYINqMzzL9yKtNgyKwRvYnv4V3o45gRdDYvBIkB7fC9Ext5GZ3jRIOQ7fUujme4Lv5z11mOGvwLQgFaZTJ5w61F7xmH3KAw/vv4XHwgm+SV4yEcCt4B08OXATiBN083o+IhffDzPixVAdniVoD83AU+EGPO8VjF9c2o2VYTuhavFF65gSIxRaZC5gModRswo1Q7HYmrwRWZ2p6BquQt9QLe4MVWPIVIlRcxnGSCtOFz9k2UiyA/ECTArb9vvmJhVCgk7jyKEVcHXdjXRjGHSGYHy5bg72H/4CV1x3orgkjEG2IKEQL9QgBiqxCzey78xBAsW16w+hoDmCAXcfk2TkgTrm1ImlSHCSOBAoCiEwwgWBiYZsmUMHWYWSbMHIBgVHSAfPNNm5GLCQ3loAfYp/N5kJdmnegE5eSpGU4IovV32A6MjL6OspQGdHAfz8z8FlyVvIzb+NMUuhcL/IxPCYDiNjOoxZKNFSAPwxBvoGjI1qMDaqFlYzXUCms2h0kpa0mVIQmH4UqTW30DQkY8mNlAJL1oYkO6FBbpLJUJgNvc4oFbPrjhIljZGQFXvAQ38C51L34HDcZhxP2IbLqXuRVu6JjkE1cw7CsNhdZ38rxi1HuXMJwbHJbMQoDXQO
kxSmCv1jxcjtkiO8yg+Xsk9jf9pmbElciTWp6xDTfBPdpmRYLEpYzBp2cUCvRTNor2AdfZoxEULGyB+dHiMCfgpGos43XXhogdE8dAxl4EzaXrx/613M9Z2PVZGbsPL2Diy6vRFzZJvxRuB8uEQ4YFf0KnhpjiK/NRb9llzctWShb1gH78rz2CRfjSWJO7FcHYvFmmp8nFKBz1Q5cFLnYnFaPZzIAlAC31Lopr2zZmLaJE+gdNGUgteU4E3e21OA92TwzYGbr+TBzWwBJeE3UpcSvidLQC4jodW+o72OeXDbpk5OBt82vtwToFtIotwsSZzk4E3r/eDbHrwF+LYdpuSyEulqD99S6Ob7fTk1E8CbQJtDNV8nk5kcypVEvkvg+57gLYHvyaD7RGEdqDhsS1cpeNPeHr7t/bel8H0v8H5Q+ObdbHsrQCYlET24OXzbAzfd5iE4U3lzk13gg8A3H56UrlLwpj3z4paA96QJlBL4nhS8yZtbAt/24D3uVjIeekPhNxyurWt9ByLE1En+sYngTW4lE4Gbp05awbtx3Jf7QeHbHrrpdnIzdbnHwZsAnN+2wnd7D/yN/8TwHd90E2kdPoipuY6Lml04krwBcRWuuMOOkAWN95QwLHZ/mMWZOHTV26OBIs0b6fooeHocx8ZN87Fk2V+xb/8XCAo6B1fX/XByeg2tLXqYTIWwWKio+3v/Djv7/DcE33RETvdJR/g0PMq6buJRtGDdJwyHkY85dR2ZDIA0rCxkSDgKpzffkTEjKzpypiEtgm+CuKqGZOw/sgTHj6xGa0M6eroLEBJwCq//6VmUFEZi1ES2dxRwQv9v+jkIsrVM40rH3OS5zJIRqVtJ90kXAvS1BGymApgsxai6o0RWeyhci07gjdtf49+un8HjXvH4tasjYvM9MTBUDIw0IbfhNg7rN+Kj2PV4M+woPvJZh3le2+CcuBZ/CnbEfwYewmz/MDwcqMDDIUpMC0jDLB+tTZebAzet0wPVrGYEqvGMvx7TfGR4IlCDx4J1mB6qwaxrwXh03WF8d9MFzIzIw2xyKhE727ODJu9uPxWkw9PSCtbj6WA9nuEVYsAzIdl4PsiAl0K1eC5Uh5lhGZgRlY+nItPxnIc//sv1PL4IXQP3/JMo6Y/HCHtu5mF42Iii5ijsjNmE2rtKmCyVLP3TYqkU0h4tRSwGXph1oA41l0FM3vHmED46mouOTiWqa6LR0JiIO3eM6Oym23Goq09CbV0c+vqU7ERIGAamx1i4wGTPJ37RNpYNeZUPrqQfRU5DGNPz36VTDgI5E+mgRYkGc2HJwQgN741QNz2PPTdHKGId5CZCkokcjI2STR7pqzPZc5PNbdDnSWtuJpjPhIkN+pZhdKwCe/cvwbqvP4HeGIbh0Wpk5yTiwN412LHJCQN9dKFQxgY4SZ5Bg6pUDGotOcLANb2WRtOFIikaOfKMCumrA+Y8mEYzUdgZifmu7+N49l6UDiSwi4kxklhQYu6YFs0jcmhagxFWdh3eOefgln6Cua4cS9uNk4q98Mo6g+CSa4ir8YSuNRSl3XHoGtKCBQWZclkyL/19oN8vc/0hpxJ2qkB/UwSZDNOfm0tgMddgBIXQNbrjYOoaLIvZCJfUK1iY7IO50VfwR78VOFd0HXXDGtbRHhs2YGyYtOHC65XuwyRqwdlFN9PBk26c/mYKf1PI5QUjpHUvRs2ADGeNB+Cc8DUWy89jSXIQPov0x9uRl/GLkPn4le9rmBP5KRyj1sE5ZA82JpxEcI0/2ix6NA/EYn/aBmxOO4R1ikCsUGXBSVGL+bJyJhdxUObBmQYrlcWiz7btygGbVmnojYu6BFL/bb6Xdril+2XaUvB6oNRJSdgNB25a/3eCb8a72qy7LYbg2AP3pnTScwtl71hCdoFbMqqwlWBbLGl3W7qfFLglse87yYvbTlLCut02VoG2XW4O3LTuy6lltd8Ovjls269S+OZWgdKVPLqlPtxszzrbdaC0SV4nCmrBy5o8WViHk4V1f1fq5KTQ/XcE30g723xv7WyXNGOCftsu+Ma1tAnU2bZW2cQu963SZrhTlVG1sPIoa2HOJR7lraDyLG+GF9Vk3e3KVhBY3x+62+Bb1QapN/eEjndNO7MJDKx+gMh3sbstBW8O3Xz9e4JvuEsJ2QOOO5UIwTcUfsNKDL3h4TexIniTS4m1RJtAsgu0Dk0yq8BJOt0tXUjmZQfbvPtNK8G3vK0Xae298Ddm//N2vmObbiK12QOqZi+ElJzHUflmnJJvR+lAIosrFyDjQYA4lzkcmFmgjIFZv/X1FuBOXzkbSDt2/Ets2rIQx46vR0qKD0JDL8HF5S00N+sxMkLgXQSTSfD1nRLAvyH45gBFA5okl+HwTZ1+pqklsGDH3YJFGhusIg0uaVBJfsL0oTlMPztEMEO6bdLXEixZ8lBRG4ujJ5bh9IkvUVsuR0VpEpObrFrxOjraNcx6jHW96aicacjpfsmajKzsioCxEmA0H6AUPnMO0+wOQI87iEPTWDo6hnMwMFaCYXMJKnoTcCH7JH4buAPfvnIQP/dagujmW2g26ZFTF4nrqv1wid2C30ZexA8CQ/ED12A8d80N/+F3Fi/6HMXz/l6Y4ZeIhwOUeDREiRmBSjwTYAB1tmcEqCdWoJppuknX/VSoETN90/BkgBYPh+owLTQNM49cw7c/XoVHvj6LZ2NK8GSw/p5SEt7VtgFtEbifDTFAWs8E5+K5wHS8HK7DSxE6zL6djmlRuZgVW4inw3R4KSgZP/c+gPdCtmGv7AiSK33QOqTG4HAmjFUh2HJ7DZpYUqUWJpCLhw6jIKkGaXOpSDZA4TTkusEfl3sDOHPqoK9lzyHB2314mJxXBNnTuIWdMNAnaIuFiz3BXYO+jmA/B2lVPriReRKF3bECbLPnIWmLSS5Cfvr03KLbNPScLYQ1sQtI6oYLA4FjFOQzVgDLSB5zeaHnzhBBsIXkFtS1JV023X8ORk3ZGDEVoqQ0Hp/N/y0unt+E2loN+gZL4Rt6Hl+s+BihfueAgTJgsFDw82ce6GQ3akS/OQP1g3rU9ygFpxlKhxym5ys9d4vYc3dsLA8jlhwMjxphaAvAq2d/jTXqLxDfdQsl/eHIbfJFQtkVXM0+gp3J67E02BGb47/CGe0++BVeREyNJ5IaA6BqDkdpVxyaKNjGpMcdsxFDlPLKrQjZhQY9TnQhS6dIYmgPl4SRRIeVOAQ5WsQkRorGq9gcvxzzw/diYXIS5iflYW6sEm/47sIOxQXIm6NQ2huPgvbb0DYGI6HGG0FFl3C7+CK6B9WiFCgHFrrgZnI04bnC/n6Y8zBqLkVJaywC8k5gq2IzlqkvYKlBBQdFGT6R5eLPMYH4ifcy/NTzDbwSMA9z48/g4+QIvB19HYuTtiG44gIyqwKwKsIB2/U3sEVvwHJ5GRbJKrCQYt7JOjCtUAjEUd0bujmAu6hL7wHc5FgiFA+7sUpLxJRJafiN0NkWrQHF8Bvr8KQYfLNaEnzD5CXU6barcWvA8dAba6fbUIGvDBUTOt087Ea6SoNvpJI
SafgNeXPbAzd3K+Hr1gxbOcm9gJs8uXlxP27pykA7qxq77WoPSU2yhWKQTaAtFgXh2Bfz57ZaBtaM+3Pn1uBQbq21pKDN90IoTh34yuGbg7Z0PVFYz7raPNbdfj1dVA9WkuAbwZO73saTm/y5zzP99v8g+IbkJt9w8M0DpU5y0BZXqT0g7cki0MaTW5SU2He12SBlhTBQyewCRX9uq3tJVRv8q0hWYluB9iE41W2CpIS627SXgrYkBGdqi8Dx0BvqbE/W1aaPEXCPg7YUujttgm+ssC1Jn+RdbvLitkK2FLhJvy0OTiY1CV1s6mSzItAWpSU2kN3UBVlzt7VSm+l2F+Qt3Uhr7YXinx2+E9rckdLgBm2jF5Lq3XDWsAdboldD13sbw6RdJsikN7ApO9IE36Tb1jHXA9JxDgwUIzXZF6tWzUFC4k2cPbcZx098jZQUX4SEXMQXX7yPrk7ywy5lEdcUUkMgPOV9fVPwLepvWZfMOvCVzY70yXeZha0QbBBwW7IwxHSj2RgiOLNksuozGdA9QkAsHJOTkwPTnSIPLR1y3I46gdjIs+hs1qG6MhmHDzgy+8GBAerKkfcveYIL3VA2uAkabCNHiQKAunljGRi26NE4JIehJQK3y27gZsEunC24hFsZZ5FZF4DeoSwMjOYhvTEY21K34BeeDvhPz4+wJ28ToprdcUa9F86Re/C70Jt4LiQOM4JUeNy7AI/4ZGK6bxJm+cbgyYA0TPPV4RF/PR4L0mKmvxbPBWRiZoDaWtTZti8mJQlLx2xyJPHX4dFwCsxJxMwNB/CtP87DE9su4vvxFXg62ADqbPMi0LavZ4P1eDbYYFPPhaRDWs+HFuKF4Ey8HKbFjyLUeCFCgycj0jE7IhtPhWXiB5GZeCEsHt/388AfPE9geeRhXMk5j+RaV9wuO409ig3ooG4pMhh0myx6Ju8hCY/wMa4TJl301PBtIgkHOX2w05MsjIzQ4C1dyNG/FYYqmSe0ODTLnDMIlCmEiTncCHZ1dGGXUHoLFwxHkDeQyD7HtN3ML1vwtsZoDkCBVlQkW2EnLXQ/VMJgNCUwjpDUZiQflgERslnXnKQowoAnuZVYTOTRX4A7AzmIibkAx/m/hSrJDQO9+ahtUuLQ5fVY9NUHyMqNAobLgLvk1y1q1ulnGjMiryUclw2HEZR5Hj3MrzuPheCYzXkYsuShy5yFphEFavoiUNQdguCqs3jz+p8wN+JTbNBtxD7dLuxI2YLlsavwZtRn+JXb63jb8wNczjsObVMQS8HsGFSifzQdA6RzJ4nHKF2IUEefLlSF3x29NsnCkPTW5BFuncEg2KbfESt+8UNDnCpYhotgMhcioycA+9Tb4RB/EQvlRsxLrsK8lDy8G+uKjyOOY4PqJPboD2KzYitWJK3F59EumBMyD8tjHdDSlzqezil6xpspeIwGa5GFO+YMqOp9cUm9H9tTt2FV2jG4aMLgoC3BfEUHPlbU4P00Gf4adwX/7fUhfu35Ad6Pu4aPtUb8KdkHv/Gfiy8j5yKtKBCrkxZja4Yn1muysERWBkcF+W0XwUlTztxMnJQ5WKwqwmIVWQcKxbrc1O0m+z+xlqrF0BsKv/kbgm8YbOsED2774Bs+IGm/8lRJLh/hq42GWyIpmSAnEcH7wYJvxjvam4yV1hAce+DeYqyGtASLQGkITjW2Z1QJlVllBWwO2iwIZxL9Nu9y80FJqaRE2tGW7qm7bR+Ew28fzK0Dq5zaceDOEeCbw7V05YDNVxaIk1+Po2IdkwTg8CCcEwV1OFlQb1OnSVbCqh6nC4VikpKiBpwR64FcSiSJk/ayEhZ8Y58y+b8efNOMW5KOtrWzLXa0ha42dbbHS4BtAu7x8iaoLm+BTwWVCNh8cPIeQTj+doOSdFtqEWg/MGkdmpTCNtdv26VOMi/umvYpnErGQ28oAGe8q20bfEPgzYYlJaE3PHHSOijZ0MmCbyj8hhULvhFCcHgYDh+UtK4idPPByeQmQcMtBW2+p642KwLtpi7Im7uR1tLDVtrz24rWXijb+1jn+z9+9t//nJrvpDYPpDa4QdPgCbIkPJe5H1vivkT6nShmecUkFA8AxEKniYIqDBgzp+POHT10On8sX/oezp3bjJpaJa5e34UDB1ciIuIq3NwOYvnyd3D3bgHIg9lspqNaoWv1fwvfoicxdRn5kBbTzZLulCCJQFg4TqbbdNTcMaxDaV8y9B0RiKy4hZiyW6jqS8UQfQ82mEnfMxtDpgzUNyahuVEG80gx+vsyoNW4ob/PgFGSkVhIb0u+wnTRQi4tepgsOmGIzkSwQ9pxDaoGohFcfgNbkw9hbuB2/CpwK34TsBu/d52LjQkOkDe5o6pfiYQCDxyMX4s1KV/g157v448+87FetRZOUavw+5DLeCYoFdMDdJgeoMd3/PPxrdB8POSnxeN+OswKzMA0/ww8GmjEtEAjZngb8bRP7tTgHaTFE8FazA7VM/nJE5HpmO0ejukLV+GhX7yJpw/fwPNxFXgm2GCVlFhlJFxOIq7PBRsgredD0mFfL4QV4IXgDLwUrMLLoXK8HK7ADyI0eCFMi+eC1PhhmB4vRBTimbAMfN8vGj/2vYJXwvfAMWE91qauwjrVWtT1p2KU5EZM2sPlHKJDDgEbgRS76JwavkcptIe87mngbkxw+DEjj52GUOecPk7PIaFDLqSvMlBn8gSSQgj67QFLBhLLPXBasx8JLf7oMxvQM6BEeV8Smgbk6B7Vs04tfT9yEeoe0aGxJwl3LEb0DGvR3ivH4KABHSNa1N6Vo7dPhdG7erQOKVDXm8qkHRZyF2JzDEIHfRQF6BvKhlLrgZuXNqC1Wg7LSBGq65Nw0HM9Pj86F8qSQGaFaabBVUse02KPWIT/a3zpNXwS+Ba2xDvC2OeDxsE4VHWEI78lGMqmAITVe8K95DyupR/CyfQj+FqxD+8FrMHrwZvxeuQBvBFzDH8KPYRf+q/B8wGv4ZnLv8JHAfOQ2hWIfnKVISnHSAbMJupwizaF9H/gRZ+nKHhyboFwcTwO31xzLcrDxLAiOsmiuHoKXOocykRK2y1s1W3FAvkFzNdq8VFaKT5My8NrinD8PPgEfhO8FX8MWolXApzxeqAz5oQvgXPsMuzTbUH3gJJdoDNrU9GHnOB7xGRkFyNZbeHYnbYYqxM3Y63GAyu18XBSqrFAVYRPNfWYp6AAnHzMTQzAqwHz8efAd/FO1D68FeeHX9/eh596/Rrv3fop/LLcsCzlC6zR3cIXaVoslhfCRVMEZ20hC79ZrKqAkzKLwbeNrEQ1UVZC3ttTB9+Mp06OB+BIwJvcScTQG/Lgniz05ktyJNHfI/hGOjgpwvc9wZu8uB/AIlBwJ7mHnIQH4TyAReD2jGrsyKiyFr3B8doAACAASURBVB+WlK67Mm0lJdYQnCwh6p0AnLrbe7OqWdHQJO9u85XkJIKkZBzAD1IKJSsRvHPrcIhKhG5mF5hbCyl0056Am6dP8vVYfj2kdTy/3urLbfXnlvpyF3A/7jrY67fPFjWA1znqalsDb2jfCGngDd
8LoTffQPDNvby4JbIS7sHNV24LKF0Jtq0Sknt1tqm7LRa3B7RfGXCXt9i4lHDXEhtnkso2MIeSSgLtiRVIFoFVbQiSWAUGW8NwhCCckOp2hFa32ZR9l5vi3rk7iXSVOpUIFoES6BYBXNrl5g4lUktA6T6O+W8LwTdWSckkIThW4LZzKhkH726kNHWDutj2xbrcNExJRaDd1I00Eb7ZSvvmbihaeqBs64OqvQ8BGdn454XvRjekNbpD3eyFiJprOGnYjQOyzSgxJTNrLTZw+ACdb9a9ZcOZBoyOatHQEIeDB13wm189BTe3/cjLj4ab+z5s3+mIAweWY9cuJ2zZ+hlGTCUYMxOECl1vAuD/M/hmx/2koSbtt9DxpuN78hKmoSmSH9CbPjmx9A9p0dSTirL2GKSWe8A39zwuZh7H5qQN2BW3AallnugxG1h3kVL/zOTpy46+yV2iEBZzMdgR/DB1I/MxMpLOLjis/28CGjMNxVHXNJ9Zw8GSjs7RZNyuvIblsbvxC7ddmHXjCh7ySsEM9wD826WleNHnLWxOXwe3krPYEr0Bn7jNw2blRrwZtAo/urwafwn8BK+FrcaLfu54zE+B6f46POFNA5XpeCgyDd8OSGNAPjsoHTODdZhGITZBBN+ZmO2Vg1mBGkmND0ySbpvXLD8VZobr8XSAAc9GZGD2qVt4+M9z8J3/+hOevuCJp6JLGXxLodva5ZbISgi8J8B2qBEvSOr50Ey8EJqOl0I0+EGwEi+HKvHjcA1+HKbByyFK/ChEhR8E5+H5sDy8EJGFZ6KUmBUWguc8T+PHN9fjlbClCCg4g5yWYFR2x6K+PxGtgzL0mDQYMJN2n5JDs4AhEfaYHOjeshOLmTy/85gsaJQ6zMhjKZB3KEQG2RhmlpwE+vT8knpF5wvBMkzDncVkLrntUXA1nsB5w2GU9SUgqz4Y53WH4F14EZE17lA2BSKnIwaZnbGIqfbATf0xKLvCEV9+C2GGsyhqjYaqPRh+haegq7iFNpMKkbU3cVV7AHV9STCRNGJMkLz0DalQ3ZOI6jtKdAxnobkhFSODNPhZiM47WlzVHMFHbnNwWbmPnfAMj2gxYDGiaVSDikEFKvtV8C6+gLdvvwmHmI9xpmAPPErP4Kz2APbJ9+LrlD1wjNuKueFb8XHwYXwQfh5vhbvhndjbeDM+AW+kyvCWPBWvxNzGj333Y7bb7/DshV/iQ9+PkNTmg54xGmokiQjNXQguIkxLzeQ85KNPw4xiWWVbpLcWHIjYaR37PF3wCK9lOp3qMhtRP5CK3NY4xJR74oRxL5xT12FO2inMUUfhncQUvBUTjzeSPPDnyJP4KP4gFqccwCbFMZzNuIzQMm9omkNRcTeepZLSiQN7XEU7RDr1GjJno7gnEde0h7A0cSHWZHliVUYu8752SM3HAkUmPtHmMTvApbJyvBZ0E38OXoJ3Iz/Ch5Eu+Iv/SvzK8xP8OeQvmBv6Pr6OO4qPQr/GMmUAliv1WJxGMe6FbIDSSVkBh7QSuGhy4MLCbwRdN9dv268U8S61AOR77rtN6wpNqbVWSoJvpHtp8I299R+/LQ2+kQ5MSvcE3Tz0hlb70Bu6zb25WeiNJG3S1g7QVlJiH4JDspLJLAHJn5uH3gg+3LaDkhy6eWebr7sza8CL+3BLV+pwk0sJL+5GIl0Pkpwkm2LfhTqUQ6BtW4dz6mxg22oVKEa+c/BmyZP540E4BNvH8xtwomC8rBaBEuhmVoGFDTgtFgdvDtt8PVfcCF7cb5uv3G9bunLvbb5O5lpCoTcPGnxj9eGeBLofJPiGUic9y1psO9sctsXO9gTQrmzH32MRGFDZjkC7CqrqgLSCq9phX9yHm69h1e0Iq24DAbZ92YA2pU7WdlqTJ61BODwEh/lzi6E3EmkJB25aeWebB99Yw2546E1jN4t4n0xSMi4nEfy4paDN9ymUSClWKklJSFIiFu9oS1cG203dUDT3WCtNvK1s6YW6rV+E75x/XvhOrb8JZaM7lK3eCKq6hMPa7Tin2YNWi1o4vqV4aOoGTgHFAmiSVpmGwIxobk7Bnj2OWPHF+9i/bzlCQk9BpvDAxcubsGz5G9iydT5iYi/CbCkHdQpp2JAcE7j/7n3v7xuTnRBgCUUda0rqFJwR8jBizkbvsA6tdxUo70qAvNILHrrjOC3bhb2xG3CKBsAKLsE96wwuyfchMu8a+kWdL7NUGyMrOJIH5MFCVmrD2WK8eJ5wLE0WjvSGLYZ7CINaZI9GtoP5GCM4t+iQ3RWA3cr9+L3vATztF4jHgnPwbZ9qfNdXjX8PDcMTgcfwh6AVWJr6JT6LXYafuX6EX/ouwm+CVuO3AXvxE88l+I+g3XjGxx8z/ckmUItHvOR4MliL6THJmBGkZn7aT/kbMCMwDY+FyDAtWIkn/LV4xt8wPiTJhyUnsQJ8NkCNR8O1eCowHS+GpmPGxmP41g9+g4d/+Spmu3njichiBt/PBulBJe1uS/dS8H4hxBa6OYC/GKbHD24b8cPbmfhhWCZeCjXipWADXg414CcR6fhhuAY/CtbiB6E6/CQyBy9HF+G58AJ8P8CAF4Mi8GT4MbztswHLozdhm2IvjhlP4EbBFYRXe0HddhuFvYloHFSjm6QD7Hl/b/AmbTENQ7JEUxbsk8WkDwPmDFR2J6C6JwUNw0rcHaMBRyFCnkCNTnhIw98/qEYvDX+yIJpc9I+lI6HKE7sTNyG25CbC8i7jlStv4bWb7+M9308wP8gBS2NXYkXqOnwasRh/uvkeVis3wjFkMRzcP8O1/DPYn7MTi8Ln4HjaV4hv98TSBBe8duHPSK72RB8bTszBILKQ1xKG68q98Mu/hPqRTPQOZbCZC7r4aBlS4FzeIfwh4HWsjXVGQf9t1N5JgLEnHJEtvnAtvYHzGRexRX4QH0asw7uhX+OvwZvwfsRBvB58FG+GX8Rfo93xZrQP3oyLwHtyPd5PzsY7ySV4TVWBV1Jz8FpkEt4PC8C7QRfxuu8GfBjyCVamrMSpzL3IG45Bn5kcU3RsaJO9NkkmRCDO/g5RB55kQcLJARucZtp3uvgRLqLZwDQ7XcjD8Fg22oa1KOlPgrwtAGeL9mFb6gYsD/8Kn4dsxCdR+/BJ4kl8EnMKnwYdwuLgg9iaeBindCcRXO4KXXsoqu+koG+EgoTohESHMSiE1zYNVyIHJlO68FwwZ6NlQIXwkutYH7YaG9NdsSY/HfPlFfg8qR6LSautyMXHyhwsVBVjRUoxPkqKx59DjuLNoKVYELMQCyM/xcIIF6wznMHazAD8NegqPgh3xSqDBqsNRXBRFMNRXgrHtErm2e0gL8JyQ4FoD2ir6WYyE1HLTZpuDtt8lUI33z9I6iTrbNv5bXPPbfLdZvUNBt9IQZvvpf7bW9KrIC17H24peJMHNytJ+M14AI4tfHPYlq6s051JNoG1NiUNwWGpkznVE5xKbAYlRfieANy59ThsrTpYgZvSJyfrctt5cZ/Mb4B9ncpvmBjxLkmbpOAbFn4j6XITeJ8rGodugu/zxY3WtEkpbPP95eJms
LpP2iQfmLS6lEgSJycMTorhN7yrLV15Z9vGd1v04JaG3tCed7Slqz1s89s+FZNAd1UHfKlEf27e5ZauXF7CwVsK23wfXN0JVlUdDL45aEtXafDN/aCbAJygO9KuJnhx19n6bnPotulss0TJiWE3iY09GK+JSZNTgTcHbunK4VsK21xOQhITVs09kMK2FMI5fKs7+hGQ8U8M38omT6Q1uCOl2QO+ledxSL0V19MP4y4NZo0Y2bGvrbb0XiBOg15GjI2phTensUz092ejqyMXjQ06dHYbMThSiN7+bNTWydHYqITFUo4h0UHANEopcDSoxvWy97ofYXAqu1Hw+U6sn8Tnm0lIptapM79jGpAUv57e3IfNmegb1aOuOxEZlf5wN5zAV4nrsDHuKxyR70RIhSt0XRGoH9WjC7nIbo6Cv+E0IgtuMN03WZGBDUwSWFMqHxVdwAh2iiYTpevRz5aLUdGWkUlOSI8rulVQ944uePrNaQgpPo/PQ7fiZZ+reDwkCw/5VWKWdwL+X3AYHgs14HkPOV64uBcfRK7DMu12/D5yHb7tPh8zfV/Di24f4elbe/B0kC+e80/DU775mBGeh2nRaXjMOw7T/GSY5ZWFZ71K8YxvHmYFqTE9LAXTw5IxKyQFzwTJJsC3vQc3abh/GKTHv4eRnjsDL/mpMM1hA/7tsRfxyK9fxQwffwbfTMt9H/gm8H4hOB3fDzFOWi+GZoDqh7c1eClMhxfDs/DS7UK8FF6E74fk4sWwHPwoKhffD0/HT6J0eDlUi58FZOIX/jn4aUA6fhakxI/jNHgsVY3nfVLwvNcVvOR7Ej/yOoj/dNuCX91Yjbe9VmFpxAYc1B+Ee81l1A/KRN0/dazJcpOK3DQIuATooseYTihGx/QYMunYyUXznVQc9VmG40FrEFFwFcUdMWjul6HtjhwdA2noGlSiqScFOeUByC3zR+cdOTv5IWvP4u54XMs9hcD8i5B1heJPIfPxwtVX8UOPt/Afvu/juRuvY9rFP+DJm29g9s038N8RDvhvn3n4nftcfB67Bu/ELsAfgl7FwtsfwTnuc7xw+ef4rxuvIqjGFa3DOgyYc5mDRkTZZTjd+BArgpzhV3ENUblXUdkVj8r+eIRkn8ZnIfPwfa//xjv+b2JH/FIcS12PL4Kc8XnYcnwS9RXmRWzEgohTcI6PwIKUOPzxdhj+mJqG38rT8Ud5CV5LqcVbCY14PbEOv08qxhuRxXgnpRq/VuXiF0m38SevvVgVvAYXszYjsekUOshe0azFXUoRNacD5ORCA8ykL6fXBSWVUjgPG6jMxzBdLDO5Cfl/c3s/Gn4W4JsgmSQo/z937x1dZbmu/Z7vjLPH+fZe3+day1VsgHXZsKAiIl26CmJdLpGuIMUKCtIEFCmCIioq0kt6L3Om98wkM7MkmekJEJIQWijSQ9rvjPt55zvzzplQ1tp7f2N4/rjG/bwR6zDkN+/3eq7rXHOu+vcyl/zCN/GLmR0whyHhCxlnXswrsfMZG/Aerwa9xyzzPNZZVxB3cAv7z8dzsjmXZtVaq5UqcbkEmivVGyx5I3ZW7mYoW5r8/iH+edmwF3C2OY+cWj++iPuYmZFzmWtx8npqCa+mH2BiRi0TEqv4e1I1r+c28IY0Qybn8/fcEt7IsfF82EZG7XiVN0PGsSR3PV+WuHgzqZxxlnJVZDMto4TJaaVMzdjPlIx6JqTUMTmjlqmWaiZlOZiaKSU4HbYSOeuQ3dV8O7ur4ptqjBtu/dzJz30t8L7u4puOzbZnw+2udNcLcHTYNk4jeGvFN1eHb7XhluIbW+fiGynCWWSvdauj8r3LbG51WdIbugXCjeAt567g2wu8C2tYIRcmfTfdHujWAfza8C1bbmmf9AVu9Vxcz1fFh/iq+Nrw7ds62WXjZFmDp9ZdB259esBbAPxfKL756V8tvjGW3lQcRbbcV4Nv8W3roK1P2XDvqe5ae/c1okvg2wjc+lkHb5kBPlvuTuAtAL7/n698FxA3brw1+G70gu9O4K1Kb65dfGPuAr47oFsD8AR38Y2+zTbODrjWCnH0P9bxdW3znXz4NCk+Hu5O4C0A7gPfAtw6gBvhO+C3DN+RDVuJP76L3IbdhBWsZUHSLFbbFnMcCxfEEyveY/Va9yow7LUV74BeLdlBWu4EWOSVsDbVDyrZOKtX+vpfV57lh6hsDzXrh+Rwt6nIMi1ZRG3glQfbib0+lPXWL4g4uAOQC2YCRvLKuYRWiVBT1dZaC12LKr6Q9ArxdWt/H9msi//zIoWcx8Gp9jz2nUtQr/DXZ37Gu/HvMiH+HT5LX0i6aws1x02ca87TWgKVRUZ+yLtwNETyS94awos3qYxiBQfqkpx4TTWPrfy7a5f3JL5OtuG6Ov5bqYQGFReXw/n2NNUgmndoC+PNH3CH/3r+vDeOP+1x8Hu/Mm4IsnLj7mx+v2c9N/n/g2F7J7I541tch6IJqfqB0X4v8tD2Idy3bSbdd/zCX/ak8ofAfH4fnMuNQVlqA/7X3drlyZsCxEoiGdzuHG7xbxt1zeKbHHqEpPBv4UXcG7Gfe7/azQ2DhvB//e4G/v2pkdwRlsqdwS5uDxZLSa6X7gjJw6g7Q3K5KzSvk+4Os6Lrb6FWOpTP30LzuTfMW/eF5XF/uMjK/WFWNR8It/JARD73R+ar5/si87gz1MXdUWXcGRPPLQEruemXmdz5/Rzu2Si56c+zq3QlJY2JlDWaKT3xI/vPb6LhTCanLtRxob1c2UouykZW/MmXC2m7WMXR01lsL1vBa+a3GRa6kt471tA3eB5DY99inGkyE01TeStmFq9HvM+IgBmM3fEG4YckT1oKX4o4ciaVsNKv+TL7Qyxngnlvz3QG/TKWntsHc5/fVO4N/IL7Ar7moaBtPB4WTj9TKv1iUnky3MxjwRH0ClrLIwHTuX/XBO78ZRx3/DiC3jtGMdk8mXVFCwk9/A0pl7awvW4Fr/k/w2NfP8hTW/sz0G80L5mn8mbyuzwfPIHeO6bxgN8yHg1cRZ/Q7+gfuZPBMSEMi0tgVGI2zyXbGJ3kZHh8IcPiCxkaV8jQBJdbxQxLKGF4QglDEkvpF1/CEJOdceGFvBBfTr/EGMaGLCe8eCvnm/O42Cx3JNwbZAo5dTmbZvk9R9486G/d5C6FePGVLUsSW+TCagEtqqSnjPbWCmitoP1yibKB1DYnYT68jVWOFUyN+4gXwj7hufCVPBv5NQMCF/K83wzmRk4l1L6YqobtnDmXRNNlyTAv4UyLixMcUEVNsl2XspymC/k0XZIM+Cr1+1RbeyaXyFX/HDRZ1OXoC+2lZB3dzdLMibyTOJ/Fjv2MzyjjHxnlblV45huZlbyRWcF4uRyZUc3UnBLGxoTTd9dahod9wxRrJDNsObyZ5uTN1EomZ5QyOVPTlEwpwRE/t64ypmaXdcrelixuuShp1PTscqZnl7lVjrEAR84qCtCQu63ncOuXJfUpX9fPxmm8THk9SSW+UYCSVOKVUGLrsJPoiST6/MReQ4cOeJXeSEqJbxzglS5Lei5JSgzgf2HxjS9s
S9GNt2r4ouiAV/lNVwU4ekqJPo0pJfrZEwtYfBA5+6aVyLNuKdGnbikxTsnn7hwPWM9Gdzygmu7LlL4XKfVnPZ9b33Abpx4PqM1D/FjRoZ8qGvip0lvGHG49j1vmluojHnXO3j7SqfjGGP+nLk26YwH1pBLf+D89CrBTDOD+YwQY5HtZUrVL6ikl7vIbYwygftZjAPUZerARo8JqTxBe29gpk7tTUkntMSJrteIbvQBHprEAJ8Y3ErDLlJJG4g4d96hTCU7DCRL0dBLD1C9K6lMlklzVUqJttnWQ1qcAdScdPU3msV/JOn5GTTnrz5YT58g5cY5QWyH39vqNluzEHdlJfMN2chp2k1L7C9/alzDPNIPd+WuUv7NFkgX+KfjWYfpfmwLHCtJ1r62KCjNCqmRdO3Acj+Ab11pCarerV/1S8COJB20qt1sub2o+bvm1kvUrErCX19by1xef98kLWdjrQwgt+I6vkhYyP+ZdFpk/ZGPmcmLLNlN8LIr6cymcuJitLtPJtlMuXIoHXDK3ZfOWVxfCT5YviS7/RaWgaB8QtMQFlVqiNmLyOlyAW7b6OnjL1P+9dGuDU2UyH7qUQcGxIL5IfY++/su5NSCCPwea+L1/EDf4pfHHLUXc5p9Kj51zmGGZibluC0fP5VB/PomAqqW8ET2GPqEf82jgD9ztF8fNfnb+GODkD0EafN/kb+EW/7wuS28kveSfKr4JzqF7RAa/Cy3grpASus1fz3/07MX/+N9/5HcjX+buoBTuCOkM30bolvOdodZO0K2DuA7eMjvAWzt3Au9wmwbdAt4+EvjW1Tsql0dCirk/qJB7/fbyZMAHvJX+Pottaxi9awYPbx7HmL2vMCH8HcbHfMq4qI8YEfomr8ZOYF7WXLZVbSBk/zcE1fzAj4WfszZrPl9ZlvNl3kreSV7Ai+a1jEyIpE90PL2jwngiYi+Ph+zmqSB/BkdEMTgujv5mf/pvWcKajG1Un7TT1FyKtS6AuXFTGOo3gM8K5zI55n3GJP9Mn5hInohMpm90Jk9HWekXVUT/qHIGRlcyIKaY/jF2+sXk0D82nQGmJPqZ43kqLponYwPpE7Gd/kGbGL53A2N2r+f14GW8FjKboTunMCTwPUaavmBA2CoGBH/DqMjtjInwY3R0BMMSkhmWkMKw+AyGxVkYHpfPiDgnI8wuRphLGB5XwtB4l9KwBAFubw1PLGFYchlDU8oYEe/k5ZgyxiZV0yc1kUFhK/mh8EcONlloaHdgbY4h7uRethR9xU+Wz7EfD+dkaw4X1eVn+aCsRRY2iyVLohRbxaom38tyEVSyy/M4eSGZ3PqtBBV/x4qM1UwxL+f5iEUMDZvHsNDZPBv6Fm9Evc2anGWE7NtFwek0jl9ycrHF6S4nsvNrs43iw2a+T1pC9blEre1X2j1V46m0nkrMo/x+YuMc8pZPPmDLcqCU+gsWdhatYXbcVD4t2sHsjFLe8IC3DuDlCroFvHW9mbKPqdlV/CPVwvMxobyUEMaUvAym5TqYlFHKpLQat5+7tFPzpLF90rf4pmv41sG7rEvwvl74np1TyRwfGb3cXZbhiL9bFeLs80xf+O4E3vYDzLMdQAdu4+wA7xrm273huxN4O2uQKECjfUQ/e8G3bLYNrZPGdBL97Ju9rfu4O8H21S5KSumNofjmau2TOnTL1GHbOKX4xgjfXYG3L3wbgVs/f1NW3wV4a82URvjWIdt3qhST8kPINAK38WyEb2PjpJw7FeBUHfYU4BjB+9rw3bl10gjfOnDrU8D7X4HvLsG7plGLBbwCeHtg2+Dj9t1mh9fq/u3rgO+6jtZJHb6N4K1KcP5J+O4E3ob2SU88oE/7ZFfw7WsrMW64dejWZyfwPvIrmUdOk3X0V7KPnVFTzvpzTuM5cn/r8J1yeBdph3aSXr+dtCM7iDy4iTXZC5gb9hZFJ6Jpuiyv1f81kL6qb9trW27460tqgMC+2nBr0KwlgQjQyqVM2XDbcRwNY4PjSyIObOWSilCTS28C2vLnupsmBbZVZKBszLRXw43n0sg9FMSesh9YZVnG4sR5qqZ6S84q4ko3U9wQxsFTiZy6YFENfOIhvayaK13K29uk/hnkA4BkgBeSWxfC5txVmKvkQ4D8s0ssnGZlUf/+6jW4QP+14fticx7FRyL4Nmc9c0yLGB32MT1D93Krfxx/2rKBGzd/xM27f1H2kT/uCeDOzRPYWbWWo02pHL5gJWzfHuakfcio6Hd5OGwv9wfFc7ufnZv2FvMnPwc3BlhUzODNgbncGuDwtpSoC5SdWyb1aECZqgDHEBGoLlAG53JreDa3hBXQY3cWv5/4Ef926538219u4obXJnJvYCq3h5aoS5K+wK1Dt4B3V/BthG79/DefLbcO3/eF29B1NejW4fuJIDu9Agt4KCiBx/y+4PXoqfjXryL713Bmx83j8d1jeTzkM/qGbaNPwBYe3/UFT+yawYCgV3no5wcYHjyIUQHjGRXxEU/tncKjW0fx9J4RDAsez7PhK3ghKYQBSXE8aEqgZ2QyvSJzeSKylN5h1TwRVMwjYck8ZtpKn12zmZ+6gPC63cQc2sPK3EUK8h8NnM5L8V8xMngNg+JDeTzRTp+4Qgaa7Qw0FTIwtpj+0SX0iyqgf0w+A80WBidmMNDkpH90Nf3j9tEvqZq+KRU8lVRIX7OVpyMz6R+SwpCgKJ4J8GNQcCDPxJoYlZbJsGQLQ8zZDIvNY3SMg9GxxYwwl3fIVMYIUxnDlUoZbipluAB4gstLIxKLMWp4omzDnYw02XnZVMGopH30tVjpE/sdE2MXsiptKcvT5/MP0xuMDRrHwI0D+PvPY4go38SRpkwuttm5JAkv6oOvZkER6BYbV1N7MYcv5GA9HExw8WrW5XzE21mf8Erc+wwPncXgoDmMCn6fCdEfsyRjOTuKN5B2aAcHTkdxosnGxfZqLrZVcEGKfiQzHCdnmhzkVYby6pb+JNRu5lxTrhb5KTGo7RJRKXYT+WBfxgXEKpejUmAutheRemA3n6V8wqzUL/iwIJmJKQWMTy/njfRyxmdUeOnNjArelJzuzEompNQwJfMAU7MrmJjhYGJmPlMshRpoZx5gUlotUzJly+1d+24swJGzEb6N225jTKDvplvfdnuV4Bg238attpw9m20DeF8Jut/PrUb0QZ5ISy3R54fW/YgkClCXVwa3zRu6vTzcNi0aUKwkHtkPdNp0q6ZJY/GND3x7Q7eWVKJSSnwq3gW69dIbmb7w7Qvd8uzVLumTUKLHAQp8f1lU0yHXQb50HexUhuOJBjTEBa41xAR+pS5Q1vJVsSZP86QhMlC23Tpk+079IqVM49bbCNzG89WgW229Kxo6wbcRuvVzJ9h2V74bQdurdVI23lVHkKZJo4zFN51yuKX0xmfL7QvdXcF3p423p/jmOFeEbgFvN3zrG2596tCtzy4tJB7o7oDvyIPHiZRs7lotLrAjJvAEWgHOcaJrjxEt225poNQLcGTWNyqpuve6Rsyieu+oQD0eME7yuf+FyncdumVKRKBKKXFHAurw7fF
x637uIx2XKLuE7qNnyDSAtsC3EcDl/P8L+E6r34nl0G6S67aS2LAVS+Me/PZvZEroBCz1gVxsEt+lAY6vBM3/VV+Xv5dEiMl2WSL43BcSZeOkvLYqe1+3VQAAIABJREFU29dOXs1evk5dRHLNLlU4Ihcb2yQSsNWpJCUkklgiCRaHL6ZhbQghsPh7NuZ8werMpazKWsp3tlX4V/5Icv1eyk+YtNIMaeeTDGFJhWjRfKOX1Q97aQ0Un6m2mZeIs+b2AnLqgvk5bzXmqm0afMs2Tv1zu3+d+ue9PvhuvJSKf8U6hvtP42871nBHoB83BYZxx95f6L17LgN2vMEjW+dy257t/HHLQp4Nfx/r8QiOX0gjucaPjzLWMDBiDb3Cg+gWkM6te/O5eXcxf91TyF/8cvlLQDp/DczgpkC5SOnssvTGt87dmFCin42lN3K+OTiLe6MK6fZ9GP/+/AT+x5+68//ccjs3TJrOvaEZ3BVayp0hVu6UDbcuD3DLxlvTPaFWOkm23WH5Hgls32eUG7rvVxtv2XrbeCA8nwd9FZFPzwibR4+GFPFwuIOe4eE86r+QcSET2Fi0FL/qH3kt5O88sPMR7gpcyEOBATy542ee81vMB+kL+ThrHgM3P8TomP703vM6d/4ygYf3TOK5uPG8mf4af4+fyZjInQyLzaVXVAYPm230inbyZEwR/WMLGRSWQj+/zfQO/IT+iTMZFP4qM5Le4b3cpUzLXMKLpo/oG/oxTwR/zlC/xTwXMJf+gd/wRGQuT8cVMSg+j4EmGwNiBLIL6R9dxICYAgaY7Aww5TEktpChseUMNpfRP76UpxNKeDrJxYCkIgYnFvJMQgHD4koYZq5ksLmc/urXlTEouZIhSeUMjS9hZFwpI80VDI+t1GSqULAt2+4RccWMiHMxIq6QEfFFjEhwKY1MLMZXo5JKGJXkYlSik2fNBbwUV8nQuCoG55XQN2Ev/XbM4bkfx/L6jud52f9FJkdOZG7CbH7IWUbp8QgutEhsqRQYaSVCsuVub8vn9KUsyk5EYarZwneFa/g4aymTEhcyJnYBfSLn0j9sDi/GvMuHGQvYVLiWuAPbcB2L4vA5i8rDb2t3KWAWX7iU9DSrHH/ZahdxqcWOqyGU4X592ZD/KYfPp6hUorbLuVxuTuNye7a2/W4uUbnwzW1yMb2EfadS+DHnC95PXMS7+eFMzLDxRmYhb2aUG1TBhExvTcyqZFL6fialVzM1ax/TsqtU/vbkzAomZ1QyNUv+2H6mZpUzNauMaaLsci/p5TdvZ1egSyWWWDpKcDzebbGWWMqZKZK2SYP0AhzZautS222B7lytBMcXtPXnrqICu4oI1H3c+uyUw93FlluyuD2QbQBuY0rJAntNRya344DachtLb/Stt7H0RhXeGEpvdPDW4wBl+pbeyPNyQ+mNJJZ4ogCvUnzTAdwdpTdyedK41TaePZttV+01IwIFvgW4dTuJ79RhWwpwdEkRzobSQ176Vi5TltV7SYF2eT3flR/y6Ht3Rrc+fzCU4kg5jmy6peq9kyoO83PFEY984dsI3fq5K9gW8N5WfdSjTsAtGdxdALeKBOwqi3v/cfaqLG7v0hv/A8foXHzjTilRsYCy5TZ4t2tOEFxzAgFs30hAlcd9sJEwt4zbbmMsoDq7U0q8Mrm7iAfUogK9WyeNBTj6WYDbGA8o586pJSdIONRIQoPIx2LirntXgN1wkmSDjCU4OnjLRUktGlDzbuvbbYkITPexl2SqDbdsuX8lU0G3bLl1aVtvy/GzCr51CJdnge+8E+cJ+y3bTtJqd5BTu5PkQ9tIOrqdnGO72VP5DZOjJpN3NIyLkqnr5c3+7wVxucQkPxRVqoBskT1QLxngUixSwMWWHFLKN/N1wifkHwlWqQfq0lWbU70m/rUll/2Sw30omJjqbWwpXM9ay3JWZ3/GD7bVhJT9SFrNXsqOx3D8UiZnJWJONVlKgY72A1lAXgqGWlq1vG/ZdMsFTbGeqP8ekhXc5iSvIZTN1jVEl232sZ3I1l5LUpFc5+uxndSfN7Gu4CPu3j6BG3abuCHIxX/sDqZn0AqmJM1hbsrbvBT0Dr0CZjPI7w2+ca0h+1AIkYXf8Kn5HUYEzOb+vVvp7ufkRv98/uRXwF/9CrnZP59bAjK5JTCVmwPTuSkgj5v8nZ7CG6/ttmGzLSU4WiRgx+wRnIevbgu08FCMnZsXf8fvnn2D/3n/k/y/Dz7FDdPe567obO4NKVEpKDp467DtO30tJfKsb7b1aQRvI3Abzz3D8/HVQxE2jOoZbaNnTD4PRyTzaMAW+vst4bXojxkfO4s+W5/mvi3duX3naHruncejm99n2I6JvJc6nXlJ05ge/BrfV61iee5q5iQv45Ocz/nc+TGf5k5masI8xsXEMyCigkfCHfSOLadPbBl9YwWWkxkYtJlBu95lYNBLjEwbzzMxU3gpfD4jQlfwdNAnDAqczdjwT3gneT3LslfzVf4SXgtcwMDQcPqbchkYl8OA2DxlMxkQU8TA2FIGmcrV7B/tUvA93OzgGZONgaZ8BphtDIyzMTjewdA4O8PMdobGFzEkqYJBSeX0SyjlKbOLp+OKGSg2EgHv+HJGxct0KWmgXcDI+AJGJoicjExwuM8uRia4GJmoaVSiwHaxpqRink128Xyyk+dinYyLq2SEuZJRWVUMSTTRf9ci3gyYyne5Cwkv/ZGkuj24zpipv5BMU4vkfIudQz7wWjndnMn+02asdXsJKNqgrCPvpszn7+aPGBU5l2ci5jMqcjFvmZezJGste8q2kHskhPpzCVxoEWCWS7MltLSXaa2U7Vba5HK4FCK1l9DcWsKJixbKG4OJqFjNoIhXmBI2iYz6PfzabFFtoi0tVuUnl0WAfDCX3wda2nI43+bAVLqNpckLmJ31LTNsVl5L2c/rmUVMyCj3aGJmBd6qZFJWpdpoK3tJhsQEVqlIQgHvyRmSXCIlOW7ozurs6zZuuI0RgXLWgds4Z1oqFHjrXm59Gjfc17KTKEtJbhXvuaU23L6RgYZsbt+IQAFv/fKkbLs9sYBSgCORgD7SL0t6UkrcaSWf2g7yqV3TQntHSoluJ/GdXfm5xWKi20nUhltFBEpMYM2VC3C6KL7pKoPbk1LiEwuoJ5Zo8YAHWS12EoPUpru4Fq0EpxZPRKBLNtt1Sioi0JPNXYe0TX4t220fKdhWoF2PsRyn68uUh/Ddasuz0U6iNtvuQhw9PtBoK5Hzj13UvauoQENM4OaKI0jr5C+VDW5pFpMtarstG25N2wS0faSq3quPst0t3VKiFd+4y28UZB9Fa5h0z+qjnS5Lel+UvLqfWzbeKot7/3GM6SRy1hNKZIbWNF41GlC/KBlhiAj0xAPWniCy9iSRdSeJukJEoKcMRyIC6xqJrTuuZKprxCR+boOM4B0v0O0D3h253Bp8+9pK5Fnfbvv6ufUSHH3DrWwlksXtExGow7dxy+0F3RId6AFuDbyzj57BIjp2lhyB7WNnsRyT5zPkHNfA23ryPGH2ot+u5zu1bicZNdtIa9hB6v
GdmPb9wLqsBcxNfZ+K80lcli20WEE8EPzfC9/SDHfZU+0t3m2p0ZYkEPn7FqlUgQOnTfgXrWdT7ueUnk/gDPkcuJzGgfNpFBwOJ2H/Tna4vuXrvM9Za1nGNzmfs7toI9n1gdSfT+F8m5XLYkkRmJa2QPF0UqhKcuQ1t2zMVROhxMOprZvYSaT0Rn7gCnxLHrJsvovIPxLJ5vy1hBf/6L5wKckmknzQceFSEjIkseFanu+6M9GszpnFPVs/5I/+Tv5XUDX/c08E9wcsY3zSND7MfJu/R81lSNA0PjC/grnhazZav2JSyEwGbnuJR7a9w1279nDjnkJuDLHxlxAbNwdbuS0om25BaXQLSuG2wAxuCcjlFn+7p/RG7CT6Vts4BbyNUYBXyuG+M9TOA8Gp/GHKJ9z4+ixuGT+bP734Fn+YvZTuZgv3BRZpkYChVu72Uj73hHXoXoFtXQZ7iS9wP6C227Lh1mXnwYgO9Qy3eeDbCNzG84OxmTwcncfjoQU8GWTlyaBoegVs4OHdM3h42zP03HQXD2x/mKfC3qTX3pk8sOUV+u0dxpg9g1iftYx9F6xUN2ZSejyd1JpQvslZyqSoiTwbtZgRpjSejiynd7SDJ2MdPBlTQG9TAb1jLTwZGsqTu7+i994PGRA7jwGh8xi1dwlj/Vfyj8jlzE35jJ/s35J6MIzK82YqL/izNGEBQ0PX8lR0LANk6x1rZ0Csk4GySTcXMTjOxWDZipuLGBTnZFCcjYHmfPqbxI7iYIjZyVBzIcNiCxgW7WRIrI1BCXkMTspncJKdwQkOBsuviXNpW2+TbL4LGZVoZ2SCLh22BcKL3GBexKgEb41OdGHUs8lFPJfkZEyMk5cSKhiTUMULSVWMjstiVMj3LMtdTdnFCFouF3K5zUUr0jxZoCxf5y5baDibRNGxUGL2b+Gbwq/5JGMJ46PfZ2zYBzwXOp8XwxczKfYzPkxZwfq8taSXbaPsaCy/NslGu5SW1lJaWlxaDnu7HdVk2pYFrTkgyUMtsu0uovpUCjFVW1mfu5g5ibMYErOCYbtmsDLrc9Jr99B4Udtw01asfagm1/37ooO68ylsyvycj1KWMssWxJuWUsZnneAfGS4mZpZ7gHtSZgWTskQadGuzgsnZTiZnFzIpo5hJAuAC3pkC3qVMzi5iWl4Rb7kvVL4tlyiVOi5S6kU4MwS4lTpKcDxlODlaGY4HtuXCpFKl5+LkHLXhli235ufWt9rGqW+4lZVELCW51Z4cbj2tRJ86dOugbZx6Wom0TXpZStzPxk23QPen+ZokmaQrLbIf9PJzqzhARw1Ga4lsuvXiG5la+Y13AY4vdHddfOPdOulbenOl4htt091RfrNabCRFBz3Sy2+MU9tq6wU4HVOPBlTxgLLRLq5TMgK2fu4StEv1cpwGNpZp+q6sge8Ftj3S/NvaZlsq3zX9qABbIFvXYX6q8JYxMlBAuyt5WUoqDyPPRjuJnI3FOMb2SeN5d5XWOqmX3RinJ43EU3pz5fKbqxbfuC0lCrQFtj0S2PZWWE1jp1QSBdwGW4lvPKA8d0oqqT1BdK1stjtHBeoxgSaxkbgtJb7bbf1Zh+4O0D6JftZTSVTrpIC2QZ1LcE6SKq2TDVrTpAe2VflN54uUGYdPo+TedvsCtzx3gu5jsuUW0D5LztEz5B4720l5x8+Rf+IC+ScvEO5w8bdHe/02Gy6TG3aRUrud9EM7VM38jsKVLE18l22VGzjcmkeLisn77wVuI9gLeKvts4Jtqay3c/lyrtpWyXb6xGUL4eWbWJ41n5/Lv8F+No784+GE7PuZwJLv+SnlM1YnfMra3BXsrPiBnMPBnLiQ4d5YF6nUES31RANiOWubfYFrB0hKimzcBfr1lBS59NnqVIkqAuU6gDdTRF5DOD/lrb6OtJNrw3f96Ri+yviQh39axc278vh9YC7/OzCFv2z/nnu3TuHxPRN4aO+XPLD9C2bF/J3NpQt4OXIpDwSs4s6AVdwT8B09gqP4Y7iNm0JyuCnEws3B2dwalMFtgWl0C0ynW2Am3YNy6B5kvQJw52G0lVwJuPXc7TtCbdwT4+KuHwL5jzFTue2DFdz/7R5uX7KJmxZ9z+0mC/cEOrgnxOqRJJR0JSNkG8/GrfaDYTZ09Qy305WMkK2fH46w8XCk3aNHo/J5LDyf3iFOngot48mIMnoGp/GA/0884TeHnhv78cjOwTxjWsSA6K+4f/dEHtjWmxeChxN04Hvqfk3g9FkXp884SK8I4uOEFQyOXMxTybt4OimH3qF2+sbYeTIuj96xLp4wV/FoQhmPROfyiF8oj+zYQN+g5YyLXsSc+MWsy/yciKrNlJ1L4kKrlCyVqQu5re2J7HZ+ycjw93kiPFBtucVSMjiuhEFm8YA7GGiWzbaNZxLtDExy0jfRSd8EB0/H2RkQ72SIpJEkuLQLkfHi0S5mRJJcjixiaHwBw2WjHV/Ic0kunksqZITZxjCTlRHxNkbE29VlyRFxBW67SQkjzGXKCy72lNEJRZoSixidWMSziS5NSS6eFcnmO8nFy3FlvJxcybjkcl40lTPGXMCo8F18mrmGohMBcLGYttZSzuPi8GUrVWczyGwIZ1fxJpZlfMaUuI8YYZ7LSNNCnjMt5+XYZUyNXciSpCX4OTbiOhLF2eZc2pvlEqS8MSuhqa2Y8y0uLrW6aJUkJGnqbZUirHz1fX65yc6J81k4joWwtXAt78bP55WYz3ghYRN/T0/luZjNvBDyKfPMC0k/GMQFaaW97KC9WboMcqFV+9CeX+/HlykLmJO+jum2VN7IOsCErKNMyixhUla5l1TNe7YAd4c8Hu/sEqZaKtTlyynZFUy1lCrv99v5RbydXcbbWZJU0lGAo5/1zfbM7EqMmmWpwlezLZXMtlR0uiwp224jZPtut33LcHTA1qcC7WtEBM6zSnLJAS/5wrcRuvWzbLi7Bm49HrCWxfaDntIbKb/xzeCWZ2UjMRTfdF2Acx3FN4XexTc6fButJQq0xVaiy904qZXf1KHiAbtonRTw1rfb+jRuuY3QrZ+vJyJQAFwvxZGpxwMa5/elDejbbOP03WxLPrcRrrvaancF279UHsUoBdsC3Eqaj9t3y22EbP2ssroNle9G2JazDtzG6V99DJGewW2cnijA6yq+aSR0fyNhB050UnjNSZQOnlDwbSy+0a0lstFWku22APdB2XB7K7ruFKIYFRMo222D6k96FeJoEYHethJfS4kO2sbZAd1aNGCSWEkOnVDSy2/06bXZFkuJQfpWW58e4D58WrOSGGwlnUD76Blkw61kgG4Bb1GuG74FtvMMEK7Dt+23Dt/mwztIOLyDzLrtxO/bxLf2pSxMehfr2QhOyavaVq0B0gjI/51ngWEp0GhV7XZ6wY924aqmOYP42l18HDuL8SH/YLl1Eb/YVvKtaR5Lw2bxU/ISYpzfYT8Vy3GVgqDlMbdKCYaKOZQLmbLJ19JOtLQE7VmdxavdnE+7SLbiyq/tVNnOavPdLJeu3GkpbU5V951eG8B32Z8TpWwnbpAX24wAgJft5
NrwLRv9JakL6PnzOnrsyeHGgCRujMzhr8Gp3LZrG939flb523/YGcGwgFeYGPsaD/l9xR9Dk/l9mEQHpnBTcDp/jMjlVv8sbvW3cLN/jpJsu7sFCnRbuD04m9uDslThzT9VfGNomjTCdw9zEbcv3sh/jJpA95U/cm+4hR6/xHP7hhDujZTq93wF3l0B971hNnRdCbh94btr4HbwUISmh8NtGPVIhB1fPRlZRe+oInpFWXg0Mo9eEaU8HlbFU5FW+kb48ajfZzy4cyq99izkkV3zuWfzKB7Y/CgvRrzOl4WL+c75EWHl2wit3MAXuZ/yUtwy+sTt5YmUPB43O3gy1MXTkTX0Ne+jn/kgvU0uekZG8GjQekYGLWRa9CKW5qwh8OBOCs6Z+LXVRqskebRLxnU6XMjQbA3NDmwNUcxIns+AyF0MjC1jYGyJunA5MNbFIFMxg0wlDDKVMdhUzoB4F/0S7fQXEE+wMyjBwZBEB4MTHQxMsNE/Pl9tykfEVTMiroJhsSU8E1PIMJOTEfEORiTaGJ6cz8hkO6OTCjUpsC5mVEIxI5UnXLzfpYyKL+a5xEIvPZ9UhFEC9M8nlPFyXAXPJRQxJqmAF2JLeCF9H0PNkUwxLyO28ltlF2tsyiH/XBwBB3axOm89s2MX8lrIh4yN+pgxcSt4NmwdL0UsZkbcB6y3LSLx0M/UNcWraNL2lmLaLhbS2iwfquX73c5FrDRhpVUSSeT7ttlJ60UHFy4Vc7GlhOrziQQUf8WnCXOYHPMhE1J/5C1nHtNdR3g7+xivWQroH/0Lg3d9wPf5G2g4nwAqEjJH/d4g8C2Z3+byDSxL+5iZGZuZmufkTUudigecnlGGXvfu2zQpz1J6I5qc5WKqReICK3krZz9Ts/cxJauKt3P3MS2nnGm5pVeFbl/49gVuee5onbyOlJKcKt7PqVJbbdlsG+UF23lS595hIdHPHsAW2M6T/O0DnfRJfg2aDjA/33Bx0u3pNvq5feG7I5PbG767Am5j/rYvfOspJcapt05ePX/72tnbylJSKJttLYNbn95Z3HVelhIjZBvP64vrVAGODttq2116iK/dkiKcLrfb7o22vtnuCrq9srnLDnvBty90a5YSb/DuCrLla12V4eilOPrcVnHYa7Otb7l1yJZ5PaU4OnwbYVs/67YSI2zrZx269ak3Tsp221e6rST0atDthu/rKr45eJJoH8XUnsKo2NqTmGpPKJnrTmJUXL1eiKNVvvvaSQTAPQU47qp3ge9O0H2F1smuLCVq0+0Gbx229flPW0qOnMFy5Aw5R2XD3aHcY+fQJcAtsh4/p6Sf8xvPYztxAfvJC0T8ljffUYe3YTqynbxDu8mo28q6gqW8EzYB14lwTrdKWkixe2v8f2r77YZ9gd8WLSNcWgEPtWQQUPYDk3f+g1GbRjAtZjLr7MsIdHxFbs1uKpsSOC8+zrZ8mtodyCVJrT1Sa5HUt9gy1QZMLCQi9fex0dpsVQ2Xesyh+hDQlk9zq1zyElgvUttvAWrxgre3SCGPk+z6YH6yyoVL77QT8ZhqkYYavF/d8y3/bQuoPBvPrLwl/GXTAm7ZbePGgFz+V3iqigi8ZZeTW3eX8oegffw+NJ5eu4cyfPcAegdt5pbgIm4MqKCHfyH3BObTzS+b2/dY6e5no0dAId0Ci+kWVEL3oEKk7KZHYDq3ByZfE76vp/jmzlAbf46xcefMz7lh7DRu2bCbW2Ic/NnPwj3bk+kVZuG2CLlI6b3tvlflc3eAtwD4teD7gYgrbbo7wFsA3AjesvH2Be9HIx08Eu7iwWgH98dbuS/OinjA+8Q4EKvGo1FZPBRvp0/Yd/TcNoO7fnmdu3dO5N7dH/Kg30p6+n1Or5AF9PSfRa/I8fQxvcOQuK2MNJcwOLyOfpFOhiS66G+qpH+ck0eDQ3nQby39w2czM30yIfs+pfbSLs61p9KmLExV0FJBS2u5skoIJCL/38lbmeYS2tor2Zi3jkGBS/jbnm+5d8/3POj/C4+HBdDfFM8ziXk8kyAWFPF8FyugHh5bwkhTsZJckBwW7+KZhCIGJRUxKCGfZxIsjEy2MSLBznCzk9GJpQxLKGdgXDmDEqoZmijxgLL5trmtJzZlQxmd5GB0sp1nlZxe4C0gbgRv7VzMmMQKXjRXKK/4SxYXY+NKGJlWTZ+4KMaZl7K9+HNKTgezLf8LZodOZ3zIdF6P/IhXohcxOnIx/UIXMiLsI1YkzyasYhP7zmXS1FquZXsrq4qNi2RzHgtQDu3FIBGpUi/fJvGeebSTSws5XCKfEy1lxNfs4tOYt5gQPJHJiV8wMy+SmVa7ao98LaacCdEneTnnGONKXAyL28iq/K+p/jUUWlNpa8rmckuh+oB+iQJCCz9nQdJ7zM725y1rNf+QEpysEqYnu+gKuo3g7QXcmfuYmlXD5IwDTErfx4y8Ot7OqWFShnaRUt9061OHbn3K1vtq4K0aKA0pJbq323fjfS3wFgA3+rZ16JbpAW/ZcKvWyauBdw0L3HYS3VYis6stt/FrvvC92FHHEkdtp223Eby14purV75rqSX/+eKbVa5abbN9VfCWFsprw7dstb8p9taGkkP4yhe+PcBtAHDjllvOvuAtz1fbeCv4lkuT0jp5BSuJZ7NtAHAdto1za9WxTuC9o8q7/v164Fsrx+l6262Dtyq+8al7FwDXoVufAt++0C3POngrP/e/AN9dbrZrrw++zXViK7kSeGuNk8aNtn72gmw3fHt9reGUp+5dinD01kl92301+NaBW59G8Jazr72kq433tcBbAFyH7SvC96mLRDiLf7u2k8RD20lr2Inl8G7SG3bgV7mBxSkfMCtsEl9kfELYwZ84eDGOZtnIifdbfqjJ61ccXGzNpQ35uruNThXbyCVDbbOs7BzKwiGJBe4LiFKlLT5y5Z12aZYP5SkvUhGB0iR4kRyOt6bhOhNOcPkG1mTNZ47pLZ7d9Rx/N09kbdFyTMd24mpO4kJTPhfOZqvYMZXxLX8fAd92zZetJbWIBztHiwkjn1a1DXPDjUonsdMqZT7ui6Wt8u/VJgUa0kYpv04AukD7Ya4uZrpjD9udWOoC2ZT3JYnVO9SlTbGsyNZevOvyz6EsLALsyt4il1ezaGlJU39MvnaJDC6SSXNbEY7DybwUM4uHAj7nz4HR3BBs58agAm4JsHFbQC5/Dcrjj8E53BxkoltgCD0CQ+keZObWoDRuDcxCLj52C7RwW2C2mt2CxMutqXtwLppy6B5sUfItvZEiHGMc4O0hOdwSmsidIcXc5V/KPbL5Dk/mtvBEuoVncntIFveFpnBnQArdhvydv76/gpsCzNwS7+D2YAcP7sriweg0eoS6uNddduMdA9iRu61HAPrOByNtGPVQZD4Pi6KMsvFIdIcejbLhq17RdrwUZeOxaBuPxbgVbeOJaBu9Y+w8EWPj8Vg7fWKs9I7M5rHwdKUnIrPV156KsdInOocnIrPoHWGmT1QGT8WU8KS5lN7xBfQ1VzA46jCDQjLpG/glz/rNYnHaAuLqNlN/IZGLzSVc
bi2nCRuXkcpyLRpT3igpW5NK9tFiNeV7rqkpj/31kWy1rGS+6X2WmOaxPG4enyZOZ/SWV3gtPorBCUU8lplB37QsdaFyqFhKDBqW6MKjhEJGJBQwQk05FzIysUhdmhyR6EIkz+LnFiuJUcpK4raUqK22z6Z7TLILLyWVMzbhAGOTihmbms+LaYWMSSjhebmYmZTMgKjvGRQyl9dNHyjYHhf3FX3DlvCU37u8tHcGS6M/JqTge0oaTfx6PlelkbTIBxMp22nLobUlndY2sZpY1ff3JZJpke/Z1nJVuoP8PoBV/T51qb2A2rO57LCt5Z2wF5kQP4fp1ijeyitjQkYJb2TkMd5i542cffwjo5ZZ2acZk2SiT8j7LLOvpupsirpo2dLUI9cBAAAgAElEQVSSS3t7CTRVcaQ9ljVZ0/gocykf5ScxK3sfU9MkNrCSKZZSfOMAJU5wmsVbb1ukDMcozc89PacCXcYoQP08M7cSoyStpOPiZFVHLGCedn7XWo3u2TZO3+ztD6zVfGiQFN8YJVncXrGA+dqzMYNbuzi5n09s+zyS4hujFtgPqIQSaZI0yvei5CJ7zdUtJapR0husxVJi3Gh3de4yIrBIK76R8htdxgxuVYLTRRygJ52kuFZlcWuXJTuX3awrqWNdqVslstH2lp5M0jHruGbxTVl954uS7rxt42VJAevO2+zD6NF/2uwovdEKcLxLb6QE53qKb7ZWNeCtw2yr9taO6s553Dv3HWHX/qMe7d4nySVGHWXPfm9JIY5vNKA8+7vjAWUG7j9GkDt7W5vH0GMAjVOPAjROr/Kbg9Iyecyg452LcNzJJHosoJZKcgJ9xtSfJKb+BHoaiT5Nh05glPmQsfimsXMUoG8qyZWytw2lN11CdReNk2rLbYgGlLhASSvJONohvfTGOKUQx1fZjWfx0vEzWI7/6lFOo0QHGnWW3Maz5DWewXrirEfqufEs+SfPYTt1AfvpC0Q5S3jgsSd+m57vpIbtpDbsIOPQTjKP7iH5xF4CD/3EeucyVqR8xNKod1iV+AF77KsoOhrKJalMV7BcoAGp2EOkirnVquwiYstQkK4iA7XEEOTyIU6a22xKWvKAg8vSHKdeDztV5XX5mTjSD+4iyLGOH1MWss70Id+nLSR03yYiD21ldeonLIicwWdRM9mQuZDE2p0a3DaLP7tEFd8IREscmQbd7mQSxJtppaU9R9WES3Z3G+IBle2V5ILLuYTWZjstl/NVIyfYaG/Lob1NtmZu64ls0tzwLYAtrZVXhm/ZvAt8a5t2HeCbWvI1G4u8Er+YRnOzxBCWsv+EmR9yVjE87DPuD9jEn4IS+EOIg78E2ukWYKFbUAZ/DcnkxuB0bg2K59agBG4NTOLWwBRuC0rjtsBMBd8qi1sgPMhb3aUMx0dG+DZCt36Wxsl7A1K4I8jJHcGl3BVk5+7QLO6Myqa7yUb3GCd3hOZy6/qt3PjMq9y2YjM9wjLoYbbxt0gnDwXJZcpk7o0q4/5wrVnSG7611kkduCUe0JNS4o4G1D3baopn2wu6OwD8avDtBd3Rdh6LcfB4tK1LPaHgWwDczpMxTp6MdvJEpF2pd5SDPjEFPBVbqKY8943Jpl+sk6diK3jCVMRjZgtPmpwMjK5mYFgsT4d+yrfF6yg+E8DJ5nhaxW98Wba2lVy4lKuSPNrQPszJmxq9eEll28uvVZeMC7hwIZu6X82U/xpF5a/xlP+aSFrjDmbHTGJGmj8vJxcxOMXGwCSrF3TrAO4B70QXwxMlIlCD7g7wFvjukKSWGC9O6udnJcHELV/49oJuN4SPTSpnTMI+xiQWMiY5j3GpTl5IKuPF1DJeysxlRJI/A6K/YFTMPEb6fcjo3R8y3rSQ1SUbSToczIFTSZw8l8vZiwU0yQd3+dAu31fKlqZ/0JbiGyeXW+SDdb5WTy+/R7U7aW6WnHAnxy9byK33Y1PqEt6PeJ8P0lbzjjWI8ZlZvCkxf5mHmJhewriEGJ5N2MPriamMMu9h5N4lvB+7iA15X/CTdRnhJV9zqCWV88h9lErS6r7nY/Mk3s1Yx3u5mczIqOJtFQ1YwRRLeRfwXd4FfJfxtqWM6TnlnTQjtwKRDtzGaQRvOXuKb3K78G/ndQ3eysvtU3xjhG8jdBvbJ/9Z+DZCt5z1Ehy9dfJq8C0ebl2+1hJ9u+1rKbkSfK9wHkDkiQfUowL1LO7rgu8ar4hAX/CW57UltV22THrA2w3gRvjuAO56Tyb3lYpvjLnbcjamlBiBWz//0EX2tr7JNsK3V/FNZQM/u2HbEwt4ncU33uDd0Am8t+/r3EIp4H0t+O4E3geOdQneneH7qBd8G4FbzsZ4QMngNsoYCxjuA98Rtd7wLQ2UOnTL1IFbnxp4i2/7n4PvK2VwXy2dxJNSYoBvo3/bF7CNf6zLbO6rgHeWFOG4wVuPBNSnRAN6qwO+vaFbA/DcE2cR6fAtsG2EcA2+z//24TuhYRvJknRSv52Uum0kHdmB6dgOIg5vIeTgj+xxfcVy6yfMTZjBroLV1F2SzZLYJKRq2aVZN6TaXXKxZevbZkeKaFRFvLJ+yA9BeX3u/jUKhAu43GblyIUUHMfCiKzewibXOlbkf8Y6y2dsz19DaPmPJNfuwX40ggMXUzjaYqHyRAzFh8KJLvqO7yyLiaz40V1qU8h58Wc3S0a3wK54PLVLk60tVpWWorbZygpSwsWmQmpqk9i3z8SZ0xZaLhdx7GgmGak/U+IK4sIFq/J0yl9HWVf+C+BblQM159PUVsL5NhfNrRZami20txVRdyaLXc6veTP0PXoH7eQO/0huDLLwx2AnNwfm0T0gle5Byfw1JI0bQzK4NTDZAN1ymVLAO5vbAsXXnafUPdCCkrpcmUOPILeCc+jhrnnXIdt36qU3d4dY6b3Hwh0BOXSLKKJHUAEPBBfQMyyfu8JzudNUzF2RJfzl3Y/59xemcMcPMdwfYefuCAv3RuXzUKSTOwOS6BlR1Am+deA2Tsnh9oJt97PxouQjkfl4FJXPI1E2JeOmu1eUHaMei7KjFO3gsWgBb4facsumW2273Rtv2XobpcO3ALhIwNso+drTMbn0iy2gr6mMJ+MK6B2frQptBsYWMSAyhL7hn7C9agP7z4dzpCmZs/L/o9p0l9BySaLr8mkmV0GjBt5a26n8f6dAXb2NkZQfJy3t+Vwih4ZLqWQcimRX5WamRE9lasqPvJaYw7Oq+KaYYfFFHvmW4MjziIQiRhrkSSxRMYEd0O25PJkoFzGLO0kuUnYF3PK1sSnFmpJLGZtYztgkB88n5fBCioOxiaW8mFrBy5lORidHMST8S14NlebJbwgo9iPrcCLVFyyckZhB+dAu+dutxTQLTCO/x2h19BIRCGXQXqY+PMsbhHayaW/LpUU+1LYXcb7NTsnJKPyK1rEs4QPej/2I9zJ38U52Im/luXgjs4Q3UquYknGISWlljIrYzaM7PmDg7qWMDJvJWutaEqp2833WZ8yInM6M2HmsTFuNuWEHlWej2ZAxj1mxHzIney+zch3MyNzHO1n7mZo
hHm7xdXcAuO/G+60c8XhX8vZVoLsr+PaF7ll5VYgki9vXQmLccCvQvo7im4+sYisRP7dbUvnuLsMxQvfHYitRUYEH8Lo46SnB8d50G6G7K/j23Xh7kkocB/8pS8kVt9w6aBumflmyY9awsvCApqIaVnqV4NR4SnCup/jGU3ZTXIvaePtsuQW8uwJu+ZpefLPBp3XSF7r1Zx2yfadAty7j1tsI3MazB7J9oFvP3ZbplVKiIgGPdCq/kcp3X8mme8e+Ix7pEYFqysa7i0IcX9jWn/ceOIYu3623ceOtn4MOXHvTbQTurkpwtIjA40Qc7JAqwVFFOB0lONF1HdDtSSpR227tsqQklUhKiUQDmuuvVnzTxbbb0DaZIPnbV6t6P3Lqnyq+6Rq4fyX9qKaMo1q9+zU33e5SHG/g1gBc8rlzGmWzfcZLeQLbRjWexdp4hvzGs9hOSLrJOXXWn+2nLuA4ffG3vflOaNhO0uEdpB5yA3jDDhIbtpNwZAfpp/xJb9zL9sbv+SR9Ft9lfEr1aROtsvWhlJb2Qlql1KatUEXyNUtpRbtcmNSkovwkJ1yliDg515zLwbOJ2BqCMFX+zO6i9fxoW8lGy2f8YvuSoLLvSKnZTdHxGOouZXEKB+cpQHyVchlN6txb2py4GiPY4VpDSPFGzX/d5oZvBTVFGngLOLfJFtuqIEZyw6GUy5dLKSmJZe3aD1i7ZhYlrlCOHclmy88L+PjDl1k4/zWyMrZw5leJEtNsMf8Vm+82eWMgl+raS7lw2UVLq8Qo2qk/G8te10amRH1Knz2ruNs/hZv9LfwxyMmfgh3KTtIjMEnB900hmfwpRJomBbhFWdwmG26JClTQbaVboI3uQfkdsK1Dd1AOxk23nPXcbc/0Kb6RhJIn99q5KzCTWyOtdA8t4GH/Ah76wUT3L7dy1y+x3BOQzw1jX+B3M+Zz/948ekeW8GBQBneFZ3JfjJO7/NN5MCSPB8Ot1yy+8QVvI3Tr50ciO1tKBLyN220PbLuhW2DbV72jZbPdWX1iHXgU7aCPKMbJU76KddI3toCnY/LU5rufuYS+YjlJzKV/vI1BZicDoyJ4Knwlc5I/Y13O12y0/cKukh1kNvhxsjlNK4RqcdLaLm9X9Dx9PU1H7jvYtLx79SFWvs+cnG5KI7VmB4vMXzAxahXDQz7mxZQtjE3MZGRsOcNN5Z7iG70AR5+eEpwEl1dEoL7VNk4B7+eTirvUmOQSNBUzNsnVoeRixrr1QnIxHiUVMy7VwdgUffNdyrjkCsalFDDSHM9z4d+wLOUDXKfMnLzo4HJrNa0tZbS3Fqg3A+fJ5ayyldjU2zKxl0nkn1S6i9pbimhr0Uq42lrk+7ZE+ecPX8zC0rCXTdZlvBs7l2mmNcyxRDPbWs741BL+kV7JG1kVqmVycuZ+JmYUMy4uggFByxm8eyLfFaym5HQ0iVW/sDxpMdPi1vN63G5eCP6CuVmz+ck6j7mx7/Ju+iZmW9OZbilnRtZ+ZmbvY1p6KW9lV6EX4Bjn2wq4Bbo1qT9P/twcUYWS2nDLxju3UkmAWy/B0efsvCqMMoJ3J+h2b7evq/hGh27rvk72Eo+1pIuIQD2dRJ8LbAdYYNvPAvv+jgIc3WriLsIRS4kutd2WeEDHQY+0TbdEBGqSDbcmSS85yDKRtE5KLrfzgJK+3VaXJ92FOL6b7g7YPoh+1tJKfFon3fDtVYDjOuhVfKOnkhinXJg0brWNZyNw66U3Mv8zxTdGr7ax9MYI3J1Kb6QIx6v45nCnincjdOtn3zhA/bmj+OYICrQFtt3St9rGeX3Nk0cRW4lHbug2AndXZTgBYjWpafRIbbdlw+1WlyU4NWIp6RwTqCeVyPTAtgD3weNeW2594y1bbh269UhA4zQdOqWaJvUoQH36ppTEH7qO4htD4Y0qvzl8UrVMGstv1IZbLkrqkYCHOxom9eIbNfWvu8twpBDHqxTH3TqZdexX1KZbtt1u0M4+foZsKcPptOU2QreA9zmPpUTfbBu32nKWzXa+TDd4C3wbZT95HsepCzhPXyTGWcKDvR7/jdpODu9Q8J18aLsC8IyGXWoLnlS3lYxGPw2+6zewPP199lpXcuR8irp82N7q4qI0QFKoatYvq0uOsq2TS4ya9aS11cG5Jgv7TsdjPRxOxP4tbCtezybr5/yQ/Rnb874kqvg7Mqq3U3kkglMX07mkrCmFtLVJY6XAvUh+sAoIF6u83tJTUexwrcW/YL3WatnmoklsJO0u2iSzW1k9ZHstnm2BGkkeEetJOZcvl5OfH85b08YyaeIwsjK3U1Zi4pVxvfn260947aWn+GnTfOrqUmhrL/akmyg7zX/KdiKvyh20NdtpbS5WF+uqTprY5lrNJPPH9PFfyz1+Udwk5TgBTm4MLuQvwfncFpTB7YEp9AjKUJndfwl2uO0l4u0WP3ce3SQ2UC5aKvC20z3I5oHv2wW6DeB9R3AuSnrTpHv6Ft7Is1yS7BlcyD0hmdwWmckdkQ4e2pbOXbNWcsMzL3PTlI+4fdV2/n3wQG5b/ROPBjroH1zIY0EZ3B2Zwd0xdh4IsfFwWE6HncRgLekE2xIH6JEWC/hIpB2jHjXAdy+BbiVvP/fjUXZ0PRElW+7OetIA331ixN8tMoC3QHi0nadiHKog52lTgZrybPxaP5OV/rEO+plcPG0uoG+cjf7mQi2BJCadfpF7GBKykeHB3zE85DteivqCxZZFJNdvUv+vtze5oLVI2Zm0uwraGxvl//YUTmlvYOR7Yv8pM5vyV/KC33wGh+2kb5Qfw1LNDE/RUkyeMWuNkwLcqvzGMzsaKEcluAwRge5oQD0i0D2fS/QG7zFJxWjSwbtEA20DfHtg2wjeycWMSy7ipbQCXkzN5+X0Il5KqeCllCpeTClhtDmLcZFbWZe9hOOIzUs+ZDhoF/uY2EharVxqy+NCm1ymtqo3a9obN6eymak7FmJ5a5U4UjtNLcVcZh+HmxzEVuxkbcZK5piWMyVhM9Ms6UzNq2ZiegPj02oYn1HF+MwyxmeWMjG7kkk5JbyekcEL5m1MDp1E5dkEqk9Fsjp1HrPi1vBeTh5TsksYHvkdQwNG8U7oy3yQtpKPHIlMzytmalY1b2dWMyOznLczypieXY2xBMdY8z49pxJdCrileTLH215i3HB3eLk1X7en5t3t55bnK5fe7OMDqybftBK9adJ4gVJFAgp4qwuU+nbbPd0NlJLNPd9LNXQuwTnAp7b9Xn5u3V6ib7l1O4k+fW0l8vyZo4bPHAcUYAtkG7VcNU4eZIVTIgI1S4k+OwF3QQ1fiHSLSeFBfItwviw6yKqiGi+tLupcgnOt4htJKPEtvJFnvfTmG9luS/mNQLdbvpcmVTxg2T9ffGMEbv18/cU3eumNzK6KbzqX3kg8oG/xzU7xc3vktpOo7bZsuDUZWyg7ynCOode9y/RA9xV83QLhvnXvqgxHr3p3TwXbB44ReuCYl6XEuO02grdWgNOIb1xg5MFGogS6DeAtWdxG6dGApjrZcHvLXH8K8yFJKjGklPxnim8Mlh
L9sqTRPqLA+zqLb650gVK/UNlV5XtX1hJViiPFOG7lHhfg7lBe4zm11bbKdtvt6VawLcAtG27xdAtwN57FfuJcJzlOnsd56iKFpy8S6yzlwUd/o/Cdcningu+E+q0k120jo34HGbXbSavfTuqx3UTU/szy1A9ZkzwXS81umtRGuZD2lkKamuVyYZHm9ZYfmpKO0mzl7MVsDp1JpeRojKpu31r2LSttn/N53mdssK3E37WR7P17OHjMxMUL2ao1TrbLanMuP3gFtiXppFl+wMqUH8YCr8VqGy2vkbcWfIm/c737wqb8wHUpwBbPt8rxlQtZEjco3u12+Zr4auXP38eRIzbWrp3HvHmvk5W5g5JiM88/+xjBgT8wZdJofv5pCQcPJqvttFRb/5dsvnGq1+WtTeKPt1F1Ioavc1cxNnIeD4es5e6QCLoF5nJjQD5/Cnby5xAbN6nLkpncHphJj8Bcbg0s4OYglxu4NeiWvO7uwfkG2egRnK8Bd1COBto6cPtsu70Lb/QCnI7Sm7vDbNwRUcjfInO4IzKT+0353P9tAH8e+ir/9+9u4n/e+yg3vvE2//7cczyyM4r7vomg59cRPLIniXtjsrkzOp9Hwlz0isjhoatAtw7hvqkk8izJJEb1irTTK9Km5Lvh1p8FuMWP7SvdPqIsJGqrbfcAtg7V3tNOXwXWTvqZC+hncvK0bLxjBLad9DcX0N9sY4DZrqwnT8fIJlzq3ssZFLtPJY8MirEwMCqT/lHZ9I1KYLDpF95I+pQV6e9x4HwcTZfle6lK/b8t8K1tuzXY1i4AywdPuaDspJUSHEfCWZI6n9FhX/JslpMhKRU8k1TM0KRiBifYGBrvcBfgaO2UEg/oq2cTJSLQ1UnGTbcC7URJKtE0NqkEX72QXNKx3VaQLaBdzLiUEo9eTCnmpRQXr6S7eCnVwStpxbySWsErqVW8mlbFuKRCxkQE8IFpAeWX09TFyBbSaCFdu/AsH3hb5LJzkfYGTS4vt9pU1rZsuS+0ZHLonIniI4E46gLJb4jE0Wgm/MB2PjUt5O3IZUxL2spUSyIT8hy8klnE+DSpaq9hWvYBJmaVKevJmzlVTMwt5ZWkOF4IWc9PqYuoupTINudiZppmMz0thJnWev6Rbmdo+DKe3D6EGaZZLCwMZY7dqYpzpmbvZ3q2eL5LmZldxQxLFXoBznSLEbYrmZFTyYzcKiVjxbs6d7Xl9hTgdFykfDevCiVrNXKZ0hgJeKXym+sqvlEpJRIRqKmrEpwF+QdYYD3gKb/pOq3E0DppP8AikaNGyVOCI5ncSgdZ6vDWZ+oiZa0btgW+ZbMtG25pn9TUUYKjwbcvcMuzDtse0DZAt942qc9VAtqFNUrGAhw56+U3At7GKED97BUFaCi+kQIcHbCN89sS74jAjV0W3xwylN50tE3qhTf67Ci80YtvtKZJY/mNMZu7q7QSiQf0spT8J4pvvCwl1UeQZyNsy1mPCLxSNrfEBcpWu7OOI5ttXZ7mSb3yXS/EqWkk2C1V7+7j5fa1lkTUNKKrqxIc+ZoRso1nfdMtU6D7agklEhMYX39StU3q6STG2ZFKctKr9EYKcDoV36jSmysU37gvTOpQLdOTw+1TfKMDtj49iSXuynepfpfUEh229elrLRHg1otxVD63ZHS7ZW08j6YOC4lst7vacAt0Oxo1OU+cRyTPMgtOXqDw1EWKTl/C9FuGb9l4Jx/eoZRyaLuC76z6nWQd2UNM/RY22JaxKHYOAUUbqb+QSVubi7YWSWWQNjoXl2Xb1JrDmYsZHD2bStXpONIP+bOzaAOr0haxPHk+G6xfsL3iB5zHIjh+Pp1Tzbmcw8k5CpWvs0k1QBbQJt7xFkkG0bJ6ZcPVLhF/0i4p9hU5t7soOxXN9qI1BBZtQNIM2lsKtM23vvWWy57Kly4QI2kSsgGXRksBmQqamirw81vHihXTSEnZTHV1KjNnvMiQgQ8x/JlHCAr8hmPHcmiVNjtlsfnPX7iU9BPxwTe1OTj4azjfZC1gVOAH3O3/AzeHJHBLaA63BGfzl0ALt4RauTkkl5sltSTIQg9pmgy00S2ggG6BEheYrylYK8TpEZJHh6zcHpJ3VegWm8ldIXlcq/hG2id7RNm4O8rGPdF59DLlcOfyr/ld78H82//6M//2p5v5934DufHd+Ty2I5obXv+A/5iyiG6bQrk7ysIdEbk8GFTIg2FZPBSR38nP7Vt8Y4RvI3Abzzpgy9S322oarCVG6DYCt/HsDdnaNlvfaEsVvOjpWIFqO/1NDgaYnUpy1r82MK6A/ib54zb6y6+LLVA174NN1Qw27WNwbCWDYkp5JmofQ6MOMCKujHHp6YxP/5kp4TOJPriZE825tDVXQmupSgAS0G4Ve5JUq3sKnTQfeDMuMmv9eC/2PZ4JXs3o1CKeTaljlLma4VFS657P6GSHF2yPTihGKbGE0W75wrcRuvXzmC5gW+D7heRSg0o02HaDt2yyffVSSimvppbx9/RyXkou4uVkFy8nl/JKcjmvpVXxako5L0RHMC1yLumHw91lWrlcQvzeherNU5t8eFYXoP8/7t47Osor2/b944777j3nnQ63+3RwBueAjXHOYEwOJpgkkkQQEjlnMMlgMDbYxoEkJJJyzjlUlSqqkqpKOUuABCILlDXfWPurXbXrkwjue8Z9z++POdb6cBvTY3h0/1jMPacVPe16tLcr0NqjwtWOPJibo3HOcgg7ktZjVdRyLI9bgvVZ67E4ZQU+D/HC5/ELMD5lA8ak/ojJuWmYq3dgvkGHedQ8mUUZ2yXw0ZVidn4RpmXk4vOoI5gZvhL2+jCENv4A39SZWKf+CasKCjEt14Lh8T/js5AZGBc5FdtMp7ClUAfvXBtm5ZVhkb4aS3QV8FfSdbqSPZ7kMYB8+qnLIJerddIjrcTzwi1aSmiX20rom8M3v27zKRbgiLGALBpQHg9IFe/3gW6XpeQhIgK3GKqxxVDleizJr9s0+YV7e0ENvpRph7EWonYaq7HTWMnEr9p89gXb/Mc4dPNJ8L2Xy1oLjyIcXoJj9WydFKHbBd9CrbtkL6nvncMtXLQ9gFuWyf1fUXxD4E3FN1wiZIt7n8B9v+Kbst9QfCOU3gSVNzHY5gAuh27+zbO45dMjIrDSXfne14WbfqyveEB5RKDHhfseDZR9AXecrHkyvrYZCdQ6ySTYS4QiHA7f7ixunslN0YDXmdIarrlaJjl4u6Hbs/iGX7TFya/bYuEN30XY9gDuBxTfuIC7j+ZJDt684p1DN79uS17u26ArN4dtPt3QLcG3/op01abLNreTuC/cLWC2EhlscwAX4bvweiuSzUV45fd6+c4hr/fFIFDkYFrdSWTXBoDBd9NZnC45hCVxPjhjOoTqWwp0oBjd3Q60dZpxu9uIy51qlN2Oh6M5DInVJ3Ck4GtsTV+LVQlLsTNvM8LKj6GkJRVtnVpW436nTYdbnXq0smu5HS3tBbh8PRcdbQTJdvS003XLwv4PuJP8rj1G9liSIEQqxJEsJY7mGAbfYU74RocJbc7LN7sS9
kilGtIf3WvRRVXSMKGTym967GhrdyDo9C7s2DUb2blHodWFYdq0wTj2637M9BqJoKB9uHhRhS5YWUHHf8Xlm0o/bnSrUdKSiF/MezEq1B8vh3+Pp+IU+M8IC/5yPh+PUP072UyipKSSR4IpMlCLJ8Ppmq3Dk6FkIdG4GiifjNDALTWejFTjqUgNE7eXuPzcgs2EW0wIvuXFN7zwRpp6vBStwuMxBeifYMag8Cw86r8Of/hwBB79aCT++Np7+B9vvIvnA5Mw4IdQ/OHNT/EfMxbhqYBEvJxqwYAkE16PKcYrCTqP7G0RssV9YAxdtY29NCjOBC6ePMKnCNriLoI238XHknL45sAtzg+SKPfbDd4igNMuwbeUDf5Rso75vD9NceDTlCIMSS3EJykWfJRoxycJZfgktgRD4goxJkWPqdnJmJ16AAcLdqLubga62L/3ZKsi+wnZpyQAl34DyS/fRrR065BeE4ilaesxLOZbfBiXiE/jsjEhpQBj4swYm2HB2KxCjE61uTQmzY5eIktJH55ut5dbspaIl25P6OYALsF3L+DOdmCSU5OzizE1uxzTcysxObMYkzPt+CKL5MDUnBJMyS7F50lJmBa7GoGGX9DRRQ+5S9CFcrT3FKO1x4Q7UOBOdy57REkpRS0dapS3Jh/27coAACAASURBVCGh/iR2K3dhftwmzE3dD19VEOakRWJsxBEMC9uI106OwAsnX8PLQR/i48gNmJGpgnfOZczRWuBnKMWcbCvmKAoxR23BpLREjA39Bj6xa3HIvBPqm2HwTZ2Bldp92KBVgpohp+XGY2TcMow69xkWpfthd2Ee1miLMCerDN75FfArKMMyfRmWqavhl1+OBVo77gfd/ppykOSWEvqW20o4fPcF3TwqkOVvUwa3oF6wrX244hv5tZtDN588n1vM3+Y7z+GWt06K0N0XfIvATftOUx3TLhPZSh7SUmKmB5OCnYRgm4O2pRZfy8TLb9yzFgfoym2t8SjAkV+7H6b45n7QzS0mDy6+8Wyd5DYScXL4FkGbdjls/6vFN2LpzcNkb592wjeHbD6ZtaTSbSsRoVsEbnEPqXywrUSEbzGTW9yjqy67rtr8uk1TBO646maIkpfg0HdijVR6w8tv+BQv3feD7r7guxd08+KbC9cgAjftHLr55MBNsxd0y67bD4Jr+uu9MrmdzZO88p1ftfnkgC1OnstNZTiiqBiHleM4wZpftvlFWwRs2vmP06WbxP867XT1tt1oQ4ql+PcL31m1AdJjy8YzyG06C+XFM1DUBSKjIQAnSw7CP2UhDA0x6OkuATqL0dVmRXOrFvkNEThf8j325q3E+rg52JC0GN/l70RSRQCqbqUzP2sXXbS7DGjt1uF2jx5llTGoLI3FnZs63LmpR5E5FKkJB2G3h6Ol1Yw7kBINQJnbVGRDPvIONVjhDQiw6QGmAfrGUPyi24Hzxm+d5TcWF3xLMYdSuUZPlw7dXQTfVAetR0cn+cApacSG4yc3YcOmSUhJ/x5ZOQEYMfw13LpRixXLZ2HnzoWwF0WzC3wPlfL8F6Sd3OnMh+XiaWyJmYthZ7wwKOow+kWn4J9RGvyDrtwhKjx2Xol+odl4Noqu3Wo8cj4fj5KXO8KAJyKolTIL/cOzBeAm+CbozmfA/lSUCk9FKfFUpIpdvu8H3gTg9wdvA16K0uPDsFz8I0KPx+NtGHAsAf+Y6IN/jpyCV9fuwn9O8sZ/G/QRnk6y4dENh/HXl17HcyvWYUBMPl5MsuKVWD0GRhXh+fgHwzddtx8E3gTgHLr5FIGbdgLtd/rQuxQR6CH3tVt+8WZX7yTJZsKBWz4JvEmDUwsxONWMT1J0+DhJi48TC/BJkg6fpKowOEODTzLMeDPLhLdyCljz5ODEAoxN08ArLxg+0QtguXEGbe1qdNGf+rBHwfQQV3psKcE37VJRlKM5At+qN2JG/HJMUJzEB8nn8PzxrzAqPgoTM8wYkWbDsNT7w/fYdAfGpttddpL72UoeBN8Tshy9L90CeBOAT84qxpSsSngpajE9twpeikrMVJXDS1mGmapKzFRWY3R2Ht5N+QrLk3aj/nYOuqhFkuJCO6nYyg50k6WM3pY40NydjxjTQWwO98G8EG/MSdkCH/1JzHUkY7pZj6lZNzBb04RZOhtGpURjYlYkZuQlwjtPAX+lA765tZiWcxOzNWYsKFBjtsqAYZHR+PD0DsxJXoajZfsQ1fArpiRNxfq0ZdhryMUK7WXMybZgcvwhTIr1hk/GbPglzsdGlQ4rFbVYrL0MX30VvJUFWJBnwWJVFeZm27FAX8jgW37ppm8O3r8Fvu8H3gTgDwvf92+crMYGnWQpIVsJl2gr4TuHbXFy8Kb5W+D7XuBNAP5b4Pte4M2u3DLwJhB3QzdvopTg28PTLbOZEHg/TPHNg+CbinAeXHzzEPDtLL4R4VsO3vT9IPjuu/imEb8VvnnxjWgrOVvR1GftOwdwEbj5Hlx15aH83A+C78gaqn13W0o4fIvgTbsI3rTL4ZvaJxOdjZMcummK4E17St01pMqUVn8dHmqQrtv3Am9WfPMb4Pte4N1X6U1fkC2HbunSfQsE3Vys8l2oeScAF6Gb7wTfInTzRsrfCt9mspg0t8BCNhMCcL5fuwu6ev/u4VvREITM2pNIrw+QrCcXgqC4fI6lnaxJ8MXBiv0oaM2G5WYG4kuO4ZhqBw5kr8e2tOXYk70BobafoK6PQO2NHFy7o8Kt1ny0M3C2oJNKd3rsaG01IDPrMGZ4fYSt2xegpDwFKs1ZLFg4HJMmv4/Zc4dDZ4zC7Taz1PjHcnwl6wi7ZLPyEanWnSwptZeTcDJ3F9bFL0Vaw3lmRemkSnjWQukstumhby3ze9OjLQIZikPs6XHgxnU1zp7ZgsOHfKHXBcJUEIYpE9/G4oUTMGn8ewgPOYgrl1VSERDLC3ZeJalMqEMl+dC7inCnxwHNpTB8q1iLQ4rNuACllJByl6qsC1l+eEuPgdVYB5X+jPFpK/Fh9B68EnYeT4fn4PEINR6NUOLRcAWeCFbjqfNaPMnq33k0oMYZC6h1JpWo0S9C3UdKiRZPR3rqGWYrIWuJFs+yx5ME2249H6XHC1FamXTwyOGO0uGVcD0eDbfhpfRCPH/gMP708Wj8dcYqvHAqGY/6rMb//fjfMODMKfzb8Bn470Omod+hAAxM1eP1OD3eCFfjxeh89E9QsGIcV0Sg8GiSRwRSWskgKr6J0wvqncXdV0qJK53EmVTybgI9lCS/tigj3k9yi67aoujC/WEy+bglMdhOMuLjZBO7aFN6Cddggm6nPkkxO3/ciE+SjRicYnLKiMGpJgxJteCTtEJ8lGHBx+l6DEnRYWRSISZkqjE+cTUCSjajsSWXAXZHdz46yTrVU4KuDsqzJusJ/btrRmurGWfM++CTswGf557EByHb4R21AktivTE6ciNGpioxJrkSI1NLMSyrBKOzKjE+qxrDk20YlW7C2FwLRmfZMTK9FGNY9F+hh1+bHkuKXm3m3c60YkJmISZm9SVKMCnC+IxSTEhxwIvs
Jbk6jFcaMSavGKPTi/BFhgNz88owLcOK4cFn8FHQ1xgafwJDk4IwNDoA4+LiMTWjBF/kXMSIlEwMPbcBe9Vf4uLdDHR3ZwPtatb82Y5KNEIDzd0z+Fq9FguiFsMv6xCW6mLhq1XCR2mFd0455mZXY3ZeBeYqSaWYlWfD7Dwb5ioc8FYWw0dZinnKCvgq6uCtpN8EZGDY+TXwifDHr6afkNaUhHOVRzE/cjTe+/kZjAyfjWk5aZibX4mpCefhm7YeW7WrsCTVD/OSVsE/3wG//BLm7aZHk775RVjEHk6WSlNTDPJwc1uJNMUiHE9riXjtJg+3qOW6cqwQtFJfAVGrDJVYbagEFeCIWltQBVHrDBVwqxLrCzy1oaAKVH7jIWMVNhmrPbTZ6d12ebhNNXBFA5pqsI35tavwpanSJcre3mGu8pBnLGA1ehff1GC3pQq7LZUu8fIbcXpEAtKDyV5FOLXgxTcHbDXg+kaex02xgILElBK+H3JIxTdUfsP1fVEdPOSoww+CxBxuaW8AxQJ6pJQUNeCn4gt9FOG4f0yMBeT7r6V9pJQ4H0vyhBKa8uztk2WepTdUgtN38Y279EZKKBFLbyQvN48A5JMV31T2zuDm0X80Q6oaXQqtakQoPZ6sdius2hkPWH2Z5XDfq/gmsuYyImubBF1GVJ2nKI87tvYKKIdbVFx9M7ikiMDLiK9zqv4KEkQ1XGHFOCwmUF6Ic+Eqkp1KudAsK8RpBsUCipIiAq+CZ3HzmemMB6RJiSX8ws1n7zjAa8i9dM2j/IaKcMQ4wL52nsntmk03oGpyF9/wEhwxh5viASmhRBR/MOmaLB7wJgzNniq4egtcxqu3YJLJfO02PHT1NixXb8Jy9Ras1yTRLn3fhu1GC+y37iK1sAgvv/767zPtJLvuFLLqTyG9IYAps15qvKS4wS0pS7Ai3R8HlTtwJGcHfs3ZjnOGrxFdegRZTedhuZqOijvZuNahQwdrsnQmFbBoNCd8w8584lbreXh7f4r1G2ZDowvFufNfYeasIdDqkjBhwoc4F7wfTVfIHkKgyz3bfc+7HRpklJ/CjpxNCK84zrJ/O9u1LCWBVUrTgywqx+mhJjqKGiSAoVIM+vVZ0dFhRUlRNEzG07jcmInmy0okxf+IYz9vRVzUj6guz0BbK1kALFL9fAd50ElU3qNjaS7dHXZ0dNtxoS0LYaVfY12aD06VH0Baw0koqoJQczsDV7o1yG8Kx0HtdsxIWIo3onfh1YggPB+ahydDLXiMogTDyWKiwFMh+XgqmOBbAm95LCB98zxu+UWb20jEyaA7QusB3By+Cbz7gm8P8I4mECfbiRnPRzvwepwOfyfLyScT8NTagxhwNh2PzFyGf/9ff8JTC/3wP14bhr9MWYbnAmLwSho9lCzA29FU7a7BwEStK5ubw7Y4eUygJ3hLEC4vw5HHA/YC7yTya4vQ7d5F+Gaw7bSV9GUtYZdtAbg5eNPk4M0mfTs1JMUMD6Va8ClTIWug/CSjAEPTDRiWZMWYDDNGJe3DzvzVKGmOQ0e3lkUOdtKj5Xar9Js3itjsoVp0Pa7ezsMR7W5MTtqC8Wkn4JWwBxGVh5BYvwteScsxODEBI9IrMCa9CqNTSzEmpQyjExyYlOHApKwCTMjSY2y6CePTS/F5JkmIAnTuHvCdZesTuidl28A1MacEE7PLMDXDiNnpJnilFeGLzDJMVhZjSr4DMxRlmJSpxojEHzEz6Uuszt0Ov6St8E38EvPitmJC+HaMTjqJKXo9pigsGBF2FNOD/ZFU9TOut6nY/260dOtRcTce0RWHsIkq3BPXY2nOcazQ5cBXbcZcRRG8FaWYryiFT24R5ipKe8lbWQauuXRxzy3B55lRGBKxDl6xC3G6+Duk1x7DccMOrMtcgbmpyzD01GQMOjkHE3MTMSPXiKlxx7AodQm2qP2xTrENGwynGWATbHP1lVbiCd4lfVpMuKWETUotkRfisJhAN3yL0E07gTdJhG6+i+BNuxu8K3qDt7EKG4xVrgIcnsVNc5Op2kMifIvQTfs2cy3TlyZ6JFnpIU/YrmItlMxSYr538Y0I3yJw0+5qnxTyuOXgzUtw9gvQfU/4trvhm8O2OFkG98PAd5EbvnuDt/BwkoBbkGgn4fsv3M9N0YCCjjpjAo+V9AHffZThiPAtb5uk7//jxTcMuAm6JRFsiwqv6Q3dHMA9WyebEFUjKbq2CdG1l11iJTgy4L4nfNdfRryghIYrEJV4oRmJMujmjZQcvGnK4VuEbpbNfekq0i/2Bm8C8PvBtxy8pfxtd+Pk/donlTwmkCaPB+STNU66i284eNO8H3y7gFtooaTUEjl40zcHb5py+PaAbieEW65J8M3BW5yF1yX4djjh+5WBv1P4pqs3PbrMuCBZTahsh67h1Hb5q3E3lsXOw/pEfwSY90N9IRQVV1NQ35KJ6z063IEF9H+OnVQIwlJQKJVBehxGDyHp0sweScKB69cU2L17AcvX1ukjERi0C7NmDUFZmQZTvhiCgIBduHApDz3M99o3dHMopwpu0+U4HDZ9jeCin9BKD7K6pT+el+rc6Y/ppbQTCb7pwaVU4d1Fj0V7bGhtNaGlRceu850dFly/akBFSSquNRegvdWOrk7643/yjhPIUwsltXHq0YkCdFALJvOmm3GtKw8xVYcwPW4yvkhcBq+0r+EVuRWbMr7EoYJ9WJe7DWOi1uLNsD14OTwYL4Rkon9wIR4PKcWjYVY8ymrflegfko/+wZpeWdwidMvhW4Rtvj8TpQelmHDQFieHbj755bs3dEvNky/G6PF8jAWvRpvw2tl0/GnKMvxh/CL033scz3x/Bn8YNQP/158fwZ/6D8K/P/cRnluzH69G5eLFFDNeiTbhzWgzBkVp8H6MFq+TBcWZUsLSSnhEIC/AYQ8o9XgjTo83Sc4CHD55+Q3Btxy46fvdJJNLcvgWoZvvfQE393FzS4kI3L2gW7h+ewB3itkJ3By8LfgsxYpPU6wYkm7GsEwjPkumOncHhiedxbz4tUitPI6rbTks9ae7y4yudjPLr6bs6o4ePdqhR2NbCr437cXn8dsxOf5HbMv6FvXt8WjqPI+lOSswJDkQn2UoMSbDhAmZdkzMLsXIZCsm5NgwOj0fYzPzMSazAONySvE5+awF+PaEbp5U4gnfHLjFyVoqs4sxM6cQczIs8Ekvx4yUKnyRUY5pyhJ8oTBheEIQxsUuwU8l38HQHIG8unNQ10chve489qj3YFbml5iuD8M0pQ1zc6yYeP4gduZsg/56DKzXYhFXfBiH8tdgXdoqLE75AcuyQ7BCq2axfLNzKzArtxI++eXwUVkxJ0fPQJxg3CUnePsoy0DyVlXCS1OP4alH8GnYTBxw7EVU+a84qdyOzQlLsSjzaywyJmJ2+lmMTvwV09UZmJadgCmxOzA7agZWZCzGDmMgtlhye0UDcvi+X0Qg93d7XLnlsO385l5uZikhyBbEmied1266eLOrt7PmfS3VvxuqmFy53AXVWFdQ7QHcrtbJ3wLd5hpsNtdgi+zSLUJ3X/A
th27XhVsovLnXQ8k9FkosqWK6VwEOxQOySEBKK3FKXoRzoJB83DX4xkaqZXJduQm6qfr9IYpwxMu3x7Xbef1mxTd9Vr67oZuX4XDw5qAtTnbZLr4I0U7Cd09byUUcL7kgiS7ezpQSllziLMGhK7e89Ia+/98svqErtwjbfCfo5uKwzacI3Xwn2JY3TbJLtwDdPIObTWqhrGtm4s2TLJtbvHLXS1duVv3e4Ibu5IZmltGdXN8M2lMarvZSKtW+yy7dLugm8L7UG7xF6OZ79qVrkAM3ffPSG5oicN/r4k253DydRJz80SRN9eWbUBNsOyUvweFXbxYPyApwpJQSllbCIwJZHjfFA950iUBbLrp69wXc9GOW6y2SBPgm2CZxAKfdfvMOHLdakWalB5e/U/imhJMMEiWeXAhEDj24rDmF/AtnEFP5Mw7pt2Ntqh8Cig+irC2DFd5Qc1xPt41larMoPga/ZPMgaKZKd5rUbOkUebnbC3Bg/xIcOrQeRcVpiIn9AdOnf4SD32zEe+8+h+MndqDhYo5UWf+AyzfBd+HVRPxsPYjzth/Rxv54njK+CboJrknkGSePN12+6Qou/doYpLNiIHrYJvlr+TW8o92Ozk47e/zZ2UmJExpWftJFwA0tA29W9kM1190mNLVkQ10bjG80ezE0eiVeDzuMZ0Mi0O/UCQw8vQMfha7HO6E78VJoAJ4JT0H/0Dw8FWzC48FleCS0DI+GW/BYBJXfKPFMaD6eCZGllFBqifBQkjVPRuhAzZMecmZyUy431/OROrjkunST1USPF6INbL4UpYOH5EU4MQa8EGPFoDgtXg9Ow7Orvkf/zUfxzJFgPLplP/5t5FT8ceAQ/Pf/9k/8+6DhePVQIAYmG/BSYiEGRNvwRmwh3ojW46NIDQYJ4C0mlrDUEqF18i1qmxTU69LtzNlmDyYJulm1u5ROwh9LEmCLlhK+c0sJzY+oAp6LHk8mm5k+SSYbiSSPC7d41U61MDsJWUpE8Jau3G7opm+qdx+ebMWwZBuGUrV7pgnDUs0YllaOYSkqjAnZjcPGr1B+MxndlGXPYjbpN60OdHeYWeJHY3sW1BeOYodmCz5P2IoZMQfwk/JndHTp0I5srMlZiuGJX2FY7M8YEROIUYmxGJuhwNgsM8bnFmFkmh5jM/UYl2PGqBwbxghRgATeHg8ms6l9kq7ldkzOsnkqm2IC7W5lFWNqVjm8cksxJaMAM7LM8MqowIzMBkzJtmNI/Dl8HLwGG9JXwXQ7Fh0w406rHp3dRbjeZUBKZSC25m3DvLwj8NYVYaHqCqbHajEt9Et8qV6KvRp/bMhcA//knViYcRzL8wuwWGXBfIUDs/JKMT2vHLNU5fBWl8InvxBzFSb4UHqJS6XMasLsJqpSzFOVMvieqWnAxKxgjItegiNVh/Gdchu2RK7HprSjWKFTYZ6xBou0xZirsWOuToPPkw9iTMgUzI6ainW532C7MRsrtXpmKRFjAnn5jTjZ40kWEyg1UFILJWuiZDGB5VjWB3iL0M13seadX7X5dF23nRncLuA2VGE9A26Cbkku4BYq3vmFm1lL6MJtrAZdtl0yVTPYJuDm2mqixJJqbKPJVIPtJqfMNdjOim+k0hsqv+HX7X+t+KYae4XLtrjzKzcBN48G5Kkk4pQKcNxXbRd022p7leF858zoFgtweD73YUcDK8P53lEHkstaQpfuonomfuk+4qh3xgRSXKAkBttkN3HaS+iq7SFKLhGu27Rz4KbpCd2XcJyllTw4IpCyuHnhDZ9i4Q3f+eNI+XRnbwuFN7z8po/im5CqJoRUkbVEkEfxzWUG3q6ad7KYVF9m4qBNkywlUTVXeim6thlccvAWL9t8jxNaJz2B+yrcle8CcNdfAb9qizOloRm9y3CuIu3CNUFC8yS1UF7sfemmyECxCIftl66BgDv70nUmBtoPUXxD+dtcrgu3UIQjAjftInTzneBbtJOIu+vC7YwFdMUDUmqJvACnubelhFtMXMDNLCV02XaKQPtaC5P1WgskSbDNrtzOSzeHcLKcOG7eQdHt3zl851w6LVlOLpxiqSfZ1SehqDgBZe0pZF0KQnzTKWzOX43dqnXIbTiL26zAxoHuu9QyRxYR8qhK3mrXZdqZSywBOBXuWHD3jhpfbp+D/ftXoLIyGwrFGaxePRWHvtuENwY9gaCgPWhsUoC11z0Qvk2wXonDEdMBhDp+AiWJSJd2Am4J/KW8ZLKd0FWcHrORpMs8QTrtUn08XbYtaG+nS7kDnZQr3k3gTRGFGmY16YIB3dCwKnt0FbHowztdSmRXBWJXxm5MjdyPd2OC8Up0Fv4RosMjkQY8Gp6If549hyeC49CPsrhDC/FYqB6PhVjxSGgRHgmz4dEIPZ6IVKJfhBLPhGnwTJjOFRF4P2uJGBHYZ2JJpN4F3gy2mb9bAu8Xow3gejlKB1GvCFnc0k6PLs0YkKDEgNgsvBeYizfP56Pfj+fwV9/V+E/vpXhy8Xb8+5OD8KcZC/HiuRi8mmLEwDg73ogpxiCa8Sa8G6XFG7EGBuBvxMoiAoUSHA7db8cbIMndQMmLcMSUEg7b8kmw/UBbCQG3E745bMunCNbi7gHZKRZ8StfuFAuGEmwL+izNCtLIJCtGJNvwWVohPsswYkSGBZ+lVuGz9EoMDj0B/+wN0DRFoqvHxrLy0UkPhqloysIaXnPqAvGjai1W5m7CtIx98Ench7PGk+hqt+NquwoLY70wKW4F/JJ2YWHiVxgX8zWGRQVgYpYOEzLKMCm1HF+kUsKIDWOzTBibafYEbieA84QS9lCSIDvL5iECalEzMiswK7MeU/OKMTZPiwkqI6YrS+CVXYjPk2PxVvBmjA9fiLjiH3Cb4j47behuKURPVxk6uuwobU7Ccd1uLEnZhiVGHeZm1WNO+i2MiQjAp+dGYmLsOCzI/g6L1Qos0jiwIL8W81RVmKUowXSlAzPyHZiZTwU5DnirHJinKsI8ZUkvzVeWYr6KqwzzqHlSk4/PY3Zgad5qLElYhFWJB7E1X4sV2ouYo26Cr+4y5qubMVuRg6Eh8zHs9PvwT1+CPZZkrNfYsEhh9fBy84u2fC5Vl/aqfJdHBa7QloOLGih5IY44Cb7X8Jp3dtmm67ZTTl83h+71BN1MInRXY0NBtdPL7WycZF5u8nNL2myqBon7uPmU20rYlZvBdxW2m6p76UtzDUiUxy2mlPDLtjhZ6Y2QxU2xgK5Hk9ZaUOtkX8U37NJd6C6/2U+tk0IWN4G369GkrQ4E3wTc38okL8I5REU498rldsUE1ruh2wngHLj5PFJUDwbfjnoPWwnBN79u85QSHhFIUwRtBtsE3Fwllxh8ux5OOmMCT5RewsnSix4KKPUswzlVdkkovaECnH+t+ObcQxTfUC53X7GAYbLiGxd4U+tkH75u+jGx+IbvYuMka50kLzfzdLt93QTbojyAmzdQOivfOXwzLzdBN5N02WaXbrp2OyWCN48IZFOA7/QLV12179zLLZ99+bnJ1y1eusXWSZ6/LZ+8+MYF3U57iQjcrnhAAu
8md/mNKyaQsrkv3/KAb6p2lyQU4VA0oCB3PKC7CMfYfNvDz22mKzeT29MtWUrc8O0Gbg7eLSi81gIbQTdducnf7RT/Lrp1F8W329jle8Dv9fKdTXXyDQHs8k0gnlcXiLzKE1DWBUJxPQRJ189ih3ET1iQvQqT1e1zv0KGHWuRuUiW2HVSb3u1qkpRSGag2nbVKsinFBN66qcCX22fiwP6lKC1JRmbmMezcMQ86TSwmT/wAyUm/4sZNpz3kAfBNUG9pisWPhn2IKjnK0lDI603ZyKzdsocKbSSABoE5+b17pLY86QEnPbykXyvBugTs0q+ZwJ3gm34zQVdzCb7J7w322FINtNJ10ojKO7H4RrsLo8M24o2QQLxCsX0RGvztfC7+GWPGX2Ns+FOoBY9E2NE/3I4nQgjI1Xg0XI9HIgrwWIQWj0dSMkke+hN8h2vxTLie5W9TBrdLkbz8xj2fY/8sz7SS5xlwG0ARgS9EGfBilF4mA16K9hQHbzl089Ibmq9GmvB0fDb6JWbhrWQrK7f5247D+MPUeXjuq8MYcDoNT/usxWOHf8GziVkYkFiAt6JteDvagYGxhRiUaMarsdoHppSwpJI4A96OM7DiG2qXFMWhmyWRsAxuqfCGSm9IH7B0EkoooextumwbnZLyuZmlhF246cpNjyfdF27aB3OlWDDYqXsCt8vLbcHQFLKVeGpYqhWixiRSxKANw9LpEl6AUZmFGJZageFZTfg0Phnj49chtioQLZ1k3yLLCT3YNaALJlS15eJw3h6sSVqDpaqfMCv3BBam7UVMRQDaOu2wX03G2JMfY1HKYkQWBSG44jj8FVswIuwrTM7MxbjEYsxIuwivlErMyCzCpBwTJmSb2WWbrtvswk2pJDJ9kU353HZMFZVD8YAOTHNqZmY55mbUYLKyEGP0BRirMWFiKKEhmgAAIABJREFUVgYmJR7FqKgN+DBkPvyz1sJ+M4zFh6LNCrRZQO24gB3Xu9WIK/8OK5IXYVHeKSzIKcLc7NuYkqPC4JhtGJ/6FeapM7BA04BZeRcxPc+GOepSzNWWY5a6FLPyi1k+9xyVHd4qO+ap7JivLHFpgbIEkkqxQCXJV1mK5coSrNAVYnLSLxgVtQSzM7dgpTYCK1V2LMqux6L8S1isu4L5uaWYFP8rPjwxGhNCx2GD5hC+sumwRlOBRcraXv7tpfzCLbROcj93Lx+3YC3hedziFJsnaV/LLSXOyUFbnDwekMcBinNjQQ1IvGGSTw7Y8imV4LgfUW5jjyjdfm6CbvJ0u+veq7HDzCVVvlMRDsG3CNt8F7O3XbDtBG8eDcgnFd/wq7Y4PS7bImw7d15+wydrnSysxXeFtQywCbK5xGQSvn9vrweTC7obIBXh1ONHR50kp72EwTYBN124nQ8pfybQFlUsFuBIOwfvXtDtBG15SolUhNOIE6VunSxtREDJRaZTpZcgSkwq4bnb4ux14S53l9/wJBL57F16Q0U47tIbKr8JY4U3fF4BB29eekOTXbbpui1KduXmwE1TDt3828NSUnvFVfneJ3BTHnf91T5FVhJe9S7OewG3PKmEfzPvNvm3ZcqiR5ROMdBmde/O2ndeiMOq3a+DwLsXaF+6ARYXKCu+EcFbhG6+q2UJJX3FBOou34Lu8k2X+HVbnAXO0hsxIpDHAbpiAJtvw9x8yyXXdZtfua/ehvWqG7IJvAm0Jd1B4XVJtustsF+/DceNFhSRxYSu3U7Rd8ntVpTcbkO6tRivvvY7tZ1k1JxgdpPMS0HIaTzDynVyawKguHQGyVdO40f7XqxJ9sfe9DXIqzoNSu9gl+I2LYODtk6d0ypCQEvWE7og0y619Uk2FCvaWjX49ZfVCAzcjrKyBISE7sTo0c9h5YrJWLRwDAoLo9HWrkcP6DotQfC9pwWWS7E4otuL2NLjLAqQeb0ZJBvR3UN53iJc089J124CakqQIPFfp8n5IJOsKvTrl/56N3usqWGV8D099BuIfKBdxfy47T16KK4FYVH2RrwefgDPRSfhqZA8PHE+A09GZOOxaDX+Fm3AP2LMeCLKiGci1Hg6PA+PRyrweIQST0Sq8BQH70gF+keq8HSEBs+E65wQ7lmAI79uPx+hh6gXIg0Q9WKkAS9F6plejjJArleiCyBJ71H7PiDa4FGEQxnc78Tb8GyKGk8nq/FCihHPn0/H31Zsx1+9FuHd4GQMyqzER+fS8FK6Es9n6ljd+ztRJrwVa8HAOBMGJpjwYqIOdPGmeEB5NCCPB5QiAvsGbg7eND9IMLlEpTZ9iV+05VME7sFJZnANSbagL9E12+PKza/aaVZmJyFLyWepEngPS/EEboJvqngnjUuyYxwBeYYBn6UbMTbDgVGpJRiZ1YiRGTp8ErkDx8w/oOlODrpgY4+Cezq0aO/RIrv+PNbGb8cqxVHMy0vGiMjT8E7ahczLJ9HaXYC8xhB8+Msr2Kxag5LrKuhaErDRth1jk3djsjYLI6ldMrsGXjnVmJzrwGiFEWNzzSxnm7K2RVEZDtfUbAcDbw7afE7PLQIXeb1nZ5swWWnDiDwNhsSexsSwHVgUvwYLkpZgRPAMeCcsgvZKKFpQiM6ecrR00OPRHFy5qUTVzSzE1BzBupz5mBe7Euv1ekzPsMNL68BUpQUzcq2YlefAzJwKTM8pxxydFrPVRsymOvi8Kvgo6zBfXYMF2jIs0NmwQGfBAlVxL4kV777KMixNr8NSRTEWaRWYrYqFtzYFiwwG+FHrZG41lqnqsSTXDJ/UEIwJXoBhJ8diSeYe7DAnY61WieXaYixWXwNdtbk4ZMunCN3cQiJOum7fD7o5hHP4FmGb7y5LiaEaInDTzqGbz80FUuukHLbpm1+4eQa3OLebaiGK2ibpMaXcy03f3M/NQVucHLr55ODNym+cedwu6HYW3+x3Ft/IL9vsuu28ajNbyb2g2y6V4JCdhMM2nxy0+WSwbavH97Z6iAU4nnncDa6r9j2v20UN+MUhiQM2n3LQFr972UqcEYEnWPukG7gJvk+WNoHiAQPKGj2Am+BbhG6ezX0/6OYQLgdt+ubRf3xS9jZXXxfuexXfRDDgbgbP3/Yovqm58lBXbunS3YzYuqsuifAtAre49wXcrPq94RqSnLoXcIvwzQFbPikakCtTFhHIk0rEmcPtJLIpAreYy32veEAO2PLJ7SQ05fDNIwHFKcUDCj7uPpon5fDNgVuc5qsUDei+avNdft0uvNoCLtu1O5DLfu0Oiq63oPjGHZf4d8nNuyi91Yqy2+3IsBbjtYGDfp9pJ5k1J5Fz8TQyqWb+gpR0Qj+WWncS32q3wj9+Ho7k74L1SjzugOBYAlfKvm6DdJ2jK5Z0Seb2Ewl2pWsyPcCkx45m1NemofFCDoOLG9cVyMn6BadObEOhORatrVZ2ue6mfN8HwXePCdaL0fhZtxcxxcecPnPJAkN/b1dPAaikp6tLj85OLTo6KEWFft306+LgTVd2AnR6GCoBN9lnOjvz2X+Xzk7yilPjpgWd7fTQUsGqry93aWFrT0BA/QFMz96Gl8OD8HSEC
s9G5eLRsFw8Hm3AYxEa9A9X4MWoPDwXkYWnQtPxZHgO+kVq0D8iH88IejpSw2IC+0dJUYEPspRQNjcHbxG4+f5iVAFI9wZuDt4FGBClZ3o12uBRgkPfvPzmdbqiJxrxRqwZA5IL8Owv5/Go71o8tXQrPooz4KWYSgxOduDlTCteTizAu9F6vBOjw+tJBryVagZ5tl9LteCteM+697edmdy8AEeEbxG2+c6tJQTffQH3R0kWcN0PujmAE3j3DdzS40h6ICm/aovXbL4PT7FCrhGpUrU7n+OTijA21YjhWRqMyDRjTIodY9NsGJ5ZiZFZFnwQ/gMOag6g8loc2rro3zcrurv0uNWdjUDTASxN+gbzc+LxRaYBw6OisTznG1haz6O9qwDZF6Iw+PS72GHdhqLLWYgu+AXTz/lg6Nl1mKCMxFSdBpNSlZiRpsO4RBVGU/JJHllK3ODNgVucU6mVUrhyc+AW54w8I6YrFJiQlY2hkb9i2vlVOJK3E+ZriVDcSYRvhi9GnxyO1JLTuNZTgYY2C5IqA3BcvwvfJG7EzshVWB3hjWUx07AjZwN2GBPgo1BjbkENvPKvYlpmPXyUFayC3Tu3EPM1BfDOM7JUE19lLYsMnJdbAe+8InirrJintnqAtwjdfPdVVsI/6zqWay9gsa4CvppazMmtZhC/2FjBElQWKdTwCvkOs6PXwCtmAmbHeGOzJhJ7rKVYmW/CgrwCLNNfZeAth236FoFbBGtx51DNprYC8jIc1j6pr8Jap/hVW5xy0ObfHLTFuclYC9L9oFsO3yJs8/1Lcx1IdOWWgzeHbj4JurmthEM2ny7AfojsbQ8LyT0Am1+3xSnWvdMuwjeHbT556Q3Ne0M3z+V2wzd/MMnnb7GUiLYSl5WEw7bzsi1dtemy7dap0iZ4qhGBBNyllxBU1thLUumN1DrJIZtPt5dbKr8R4ZvDtnz+VxXfEHyLl22+84s2A+2aZlcRTlztVfQlqnkX6935LgJ3ryzuenfrpLsU58F+7vtBtwjfImjz3cNSch/o5gB+b+AWsrcbb0ItiF+1xamlPO4mumzf9pCYxW24chsFl2+5JL9ui98ibNPOS3Ck6Qnfcujm3wTevYD7+l3YnXJcd8M3wTaJgzjtZbfbUN7SjixrMQb+XuE7ryEI1HJJ4E3xglkXApHbdAapjYFYFuWNpRn+0DRFobWHrtkGoFsN9KgZiFIKAz1gRAf5OAlkCWgJniXbCX98yR9CdncVgkS+7O6uAnR0WNDWVoyu9iL0UJEGdOhhf/8DLt/dRtgbY3G04ACiHL8wWCYrSlsH+bQNzlIc+vXSz0MPQCW4ptnZpUJXt0qKISTvd5fzNwcwM0in/47ShVy6gtNjzO5uK+50Z+NSZzYC9Qex4OwkjDs7DMNj1+DtqCQ8f7YE/c4W4m/hOvwxSoV/hmvx4mkzBgZYMeCMAf0j8/BEbB6eDTfhufACvBBmYKL92QgDKKmkf5Sa6WHhm8M2nxy6+XxY+L4feBOAvxqlxcD4Qnx01oR3EvR4cvd3+PuMhRiw4zCGZ5TjzYh6vBddihdiLXgzxozBMUa8GafByykavJVmxJvhagxMKACDbQJuQb3LcCif27P8hr45eNP8oI9rN4duPh8E32QreRB4Pwx8s8s2XbhlAM6hm8/Pk4owJlWHETkqjM4pxOgkOyawUhwrhqUV4MOIAOxR7kPRlTC0ddvQ2VmEjm49SsnylbgYa1VBmKfSsQbLT6NDsSpnP0rbotHeXon8ixp8GDgYU+OmY3vQPMzy+wjPfdoP/xj7Bp7wm4bXv1yPv0+bjn4zFuIF320Y/0sSZqcW3Re+p+ZQ9bsE3yJs831GXjFIU3KKMDI5G++FbccXcXORWHcCV1uN6OwuRuGtJCzL8sHUsyOgKzuNulYDvlfswfToMVhm9Mbuwg342bYH58v3IqbhW8Q2nMPPVdFYpi/ApIwKzNLXsObJeQoL/OhKnVuMhTlV8M0tw2JVGfxVVJhTiIV5DvjmV2ChqgY+eVUPhm9NGRbqKzE7zwJfTSn8NTVYorvALCuT0hPwWdz3+Ch4NfwTV+D4haPYovPFFv1mbNenYJ2yCssU1VhmKIGf5sFeboJwEbj57gHeusoHgjcBuAjdfOewzSeV34iFN3ynwhuXZPnc4sVbhG8O2+Lk4P1b4JvDtjg5eNPc9y/AtwjYtDPIpkeShZ6SF+Hc89ot2Ep+C3xz4OaTgzfNh/Fzc/i+F3hL1203dHMA9wRvAnEJvuXg7YZuAu8meJTelDfet/Smr4s3QXhfxTfcUiLOhyq+6ePaLYL3w8B3fN21B4I3QfiD4JuaJz083A1X2bcctuXfHLj5zLx0A1kXrz8wn/t+pTgE34rGm73aJvssvhHAmyBchG7a+XX7fuDNSnAeAN8cuHnxDZ+8AMc9HwzfZC25H3gTgDuuS7DNwZvDN03p6v3/A/jOrglATs1JMKsJ+bwvnAb5wFMvBeIX614siZ6L2NJA1LSpWbskyM7RVciq5lu7SyRopuzrborlky7e5I/u6VSgo0OJjh4Ny+HuoKr2bqqnL0MnitBJed49NvYIC+02lmtM1fU9POebwJn9fPRzmkEJJ+SB7YAR7dDB1BiBn/TfINJ+Gl30CLLHjo4eI+5SJX0XPVaTfN/UlklNlxQT2EplJdSY2aWV8rqpwIR+3V30z7Kjp40ekUoJLfSbg27aKf2kvRCXWzNxzL4HX8SuwPuRB/By2Dm8FJnFrtBPhxrQL0yPJ8nHzSrftXg6XI9nwwzMx90/Uot+UXThdvq2I3Ss+OaZSD1YNGCUDs9E6fBstI7lb/MoQGkapHQSSihhIi+3WIZDWdx6vBTDZcDLMQa8EqPz0IBYHQbE6j30Wozelb/9GhXfyMpvXqes7nAjXorV4b0YDcYE5+D5JXvwl3mb8GpQIt5PteHDCDveijWCUkveiivAOzTj9XgjQY+3Ewm6DaCYwHcSDHjXQ+78bR4N+H5iAbg+SOydWCKllBjxMZXfuOQuv5FHA/LvXqklqeTrdhflDCEYFxNMnDtLK3EmlpDFhOuz9EJwDU+3YkSaoHQrRqYXeirVjFHpZozOsGNUeilGpZRhTLoDY7KsGJmpwXtREVia9x3yG0PR3WlHd6sDt9pViLt8AjPj/OCV8wum5WUwu8inCSFYnLYb9uYkUDqPqT4Nw4O/wIDgN/HuT29i/Qk/fPvdJszwHopP/V/FxONTMGD1JLzsNQX/a8R4vH3oPLyzL2JR2gV4ZVdgosKBz3Nt8EqvwoL0RngllWFqthVT8gswPc+EBekVmJZWjgm6MkzMNGJOVgl8VDWYmG3DkMhTGBPmj+3522G7EYObrBzHguvtWhzTfYXhkWOwKH8+9PWB+DlzJ5ZFbsGa/NPY7sjE/qIsnCyPQ2xDENKvHEXC1XP4uSwWa/LzMTvbhpmqIszW2DBPbcciRTn88yrhqy7CAmUxfJXV8CXLSW4Z5itt8FU7sFBVjXl5tfAxFMIrX4uFpkLM01rhk18CX20dFqhr4KMsxmJ9KZZrHFiUdxHL
lBexNr8aK/Wl+CIzCqPjN2NtzlqEm75HxfVEBBm3YmOmH7YWnMAmgxGrFBewQnUJS/RVWKBzYLm2zKP4hpfgiDncq3TlIK3WVUBMLOE7JZas1Vdgrb5ckFiEI+1iSom0987j3lRQic0ybTFWQdRWYxW2eaiPB5PMUkJlONzDTVfuauy01LhlljK6RZsJv3jT3G2twR5rNaj8xpXFXSjte2014Npncz+a/JoeUNpqWBkOFeJwHaBYQNYy6ZlW4pG/ba/DYXutW45alkbCU0lc0y4klPC0Eo9YQHcyiSuhpPgCSyehhBKeUuIB2iUXwQtv+PyVZXA34GiJpGMlF3CsVCZnPCCLBBTjAekBZZkkllJSdhGBouixpFxlFyHaSqS9EfzCTZOKbzwlK8GpbEJw5SWXxBIc2nkJTlhlI1yqchbhyGZETRNERdZSEY5bUXXk4aaIQM/SG0on4aU3ND2Kb1gGt7z4hjzclFLS3CudxCN7m6WUXEFqwxVnAU4z0igSkIsSSS5QSsn9YwDlCSXsqu1MJ3GllJCX+4JbctBm3+TpbpTEkkqabngU41AWN8/jFi0kzEZy5TbUTmmuEGDfdIs9lLwFVyQgZXA7i2/0wjSwPG7K5JZUQCklovgjyau3YSZR/raz5IaX3bgLcCiZhOsWCq855XwwyR9OSpO83C3MUlJEfm5B7LJ98w6KScxu0oKSm3dQeosu33dcou/yljZUtrQj11KEV14f+Pu0nVCuN8UL5pDPu/YUFPVBrHQnrTEIkfW/Yp9iPXanbkRS9Tlc7tYBlAbSasBdmHGTkkQ66BpO0Whm3KVCjJZUpNQG4KhpL3407EZ0xTHYbiTjSpcGd3tMaO0xoxUmdPI88G4CXgmUr3fn40a3Aa09hegkKwsc7GEWKPubfNfdZgbVBNmmS/E4qvsescXn0Mkg3ojbnfm40aNBew/FDprRRRYW9vPb2X+m/Eoy8mvOQlF7FqYrsahtyUHzXTVu3KVIQZtUY91BDZn0KIz+efRQ04prd1VIqw7EwtgVrFlwQGQY+oUq8HS4Gc9GGNE/zF2A48ripmu2TLyFkkBbrmej9XguWiq/EeFbAm4O3jT1rhZKN3Bz8Nbj5VgDXo7V45UYLZME3QTebr0apweJN06KhTd8lxonjXgvyoIBSfQIUo13Dofhn3O34u+rDuCtGBXeSSvE+xFmvEWALcQD0i6PCPQEbwnEOXTz+UFiAbjulVbihm43gHPI7muKkE27C7KF6EC5r/vTNCtIHLbFyaGbTwbfBOCCRmYUQtSIdBNGpVsxOqMYo9LKMCqtHKPSijE224rRWZSDHgefjG+QUXcG3ZQI0mZDY2s29ps3YvCJkZgUsxEjg7/B0IgEDImLxKT4DfjVvBmN3Tmw3snFxARvvBY2EYNOTcL+nN0Ii96H1SsmYvtxH8yJnonxQcvx/MwReHzKELz5wxZ8kHge7ybGYGRWDr7Is2KmogqTUgvwRZYKE3N0mJBXiomKWkzNK8fMdDsm5RrxRYEdC7IrsSz9EmamGjE0PhAzkrfjtInsMjnobC1CZ7sNde15OGzchi9CxmBZmjcCivfiiG47ViduxcLU41haoIGfwY7FahM2aVXYb8zAz/Y4nKiIwj5TJlbmWTEvtxxzVGWYq3Ew+PZVlMNPQRYRK3w1FvhpHVikLscCSjFROrBQ7YCvugSLlFVYnFsHP3YdL4G/qhS+ikr4qmrhr6/DYlM5/I12LNaWYYmmGUvyarCawF5lw/jUc5iZvRXBtd/i4s14VN1MxuH0tdiSvRubDQlYp3NghbIWy/NrsUxfAz9DVZ/gTQAuwjeHbPnkMYESfIvgXe5RhMNLceTwzeMB+aQSnM0FFR7aYqyEXJ7g3TuthFJKqIVSbinpBdn0mLKPjO7dFmqkrAHzdPcB3gTiHLxpMvgmABe0314LUZTJLUYDstQSWR73IaFt8mFbJykmkKeT8EkPJ/klm08RtPlV+0HFN1IBjhO85dBdegGsedIDuC+CIJxDN58PGxHYG7yl5sn7wff5Khl8Uzxg5SUmXnwjTp7DzcFbTCvhO08tiaxuBImX3/ApluDwGED59IBvofSGCnDE0hvaqfhGjAIUdxG+Uy9w6JamPIebsrf/q4pvyM/N4VsO3i5LiTMeUHwsyXYBulWs/OZWr2hADt40GXwLrZOuWECP4pub0F+54VKfJTjNNz1TSuSNkywOkBomqXVSEs/dFieBtydsS4klPKWEUkvo4WRxHxIBm+03WlB68w7KbtG1+y7b+XdFSxuqWtqRZ6GGy9d+p/B9IQhZDYHIqj2J7NoA5NYHIqs+AOkXA5HaFISwiiPYlr4SX+duQn59MFrbtejq0OF2twpt0KCnS42ebg1KrkQitvQY9moPY3rKV3gnYisGnl+HkeHrsCF7F4IKDyGq9EdEFf+AaPtPyHCcQlFDIjrgwKW7uezyftr2FYIK9yOy+BekVQdBezECjuZE1NzIQHOLEq0ddI22oRs2WC7E4phmHxLKjkrNk90atHbns8zvrq5CdHXRJdyOdtjQRtfybgOCdd9gafwyTI9bjgXpO/BV/g8INP6CCPtPMF4PRVVbImpvJeNKh4qVB7X2aNHYlYmMpnPYlrsXQ4O24a2ws3g+IhePUhV8qFFKKAmj2ndnJrcMuAnAn47S4+k+gFu6dutB4C2H797QbcCLMQV4MYZ81m7Y5rsE3QTeEnwPiNGB9CpXrI5VvL8aK0C389rNSm8oh7tX8Y0R78RaMTDNjLcTtei36zj+6rsLz34diA9SjXgr1YJ3o8we4C2Hbl6II8I3h20+efHNQ0UEui7eVOfuefWWg7b47YJu4cIth24RsmlnUYGp0hyWVghRw9NtIBF0j6Q4QeHiPSq9EEwZNowipVswJqMQYzOKMDqtFGPS6PJdhPHZhRiXrcfHccmYnfI1EqtOoqOLGlgtKGpJgHfsBAw/8SF26HZiS/Z38Io+jOHRxzEi9ghmJqzHId1OJN46gxkZS/B+2GoMjVyFXbl78e2J9Viz3gsx5u/hkzoR7x0Zi3+MfA0+u/yxO+VrrFQewOS0fRgVdwDj4s9hVpYS4+JOYUT8dgxP2o9RqQkYn2XB1Dw75mQVwyu/GJM1ZsxRmDE3R4eJiafhnbITp8yHcPFqJoPuro4SWJtzsDN/J7wivbA2bT6OGjbgR91mLM/ZisXKk/DVKTFXX4Zp6nJMy3Ngbq4NfrlFWKMwY7s2B+sUZizNroNv7iXMU1XDJ7+I2UEWKkuxSCXB9zxlAeYpjVigsjEv+CJNOQNvuoovyi/FYmUDFqsr4ZdvxxJdMfN1+2krsVBTgrlKAyalJWFSRjhm5mRiXpYCK5QlWK6rw5SMBHinfYWY6qNo6VQjq/YctsZswC7tOWw16rFWV4Hl+RVYqqL/fDVWGC5hJYG2U+zCLZTg8EKcNTryczvFowLF5BKWze156V5vqACTUP1O8M1BW5xi8+TmPmCb4HurqcqlbabewM2SS5zxgCwiUAbf4kWb7+IjSubt5sAtzK+sNe58boLuwhomns1
NU7p007W7BgfstR5i1e+OOtDk8C1eu/kub528d/GN++rNYVucrPjmQW2TdOWWZW/zB5OejyUpk/sCk7sA5wIrwBFLcE6WXASJRQNSPCBX2SWcooeUFBFYdlGScO1msF1xCVT3fppmuSQRtvku+brdV+++oJvVvrOIQKltsk/odpbicNAWJ4duPqNrmiBXrzxuIZlEigV8yOIbgu7/DxffyIGbvjl08ymPCJQ/mKRveijpEosDvA3NZU9pnYU3/NLtSidpvg29U3TdFotv+G5svukqwHnY4hsRtMWdZ3C7crn7iAnkcYFy+O4F3c4rNwft8tutDMAJwkn07YbvYrwy6Hd6+aZyHdZuWR+AzPoAZBOI10l7TtMZ5DSdxbGir7ApcxFOGHai9lYqSzdp6yDLhhF3uxXQXDyJfbod8ErcjA/Dvka/syfw5zNh+PdTZ/CXowfwyqk1GB7mjwnRmzApajumhq/A/IiF2J+3EZpbMThTdhx+ydsxKmIDhoWuxbjwdZgRtwV+qTuxKWsv9isPIMB0BAkVZIsJh/16ItIrjmF/zioEWnejqT0NXchnfnGWctJpRncXWVtK0YZCtECFq10Z2Jzki1dPjMcjgX7od3Yf3g/5Dp9HbsO0uGVYr1+Lr61fYr92L341H8NZRxCCi4/iZ/tBrFLvwdjIvRh0JhCvROajf6QFj4QW4MkwAyj273lehCOAN2+clMBbspc8S9YSQc9FSdduunizq3e0swSHF+EImdwSeBfgpRiyleg99ArZTGLdGhBrcEO3E775lZtPspi4oNu5y8tvBsUZ8UaMSYLvFAOePHAa/XYcw1uBKfgg3YLXqcY9zoq34wtcudw8j1uaVIIj6b2EAryXYHDZSlz2EqEQh1+75TGBFA3IxYA7yQ3evNpdBG2+u6ICBRuJHLj5txy86XsYSUgt4eklNEcw8LZJ9pI0qwTbHLqdc3SGDaQxGVaMy7RhXIYDY+jinV6CsekOfE552DlGDE3MwvSEvYgs/QWt3Sbc7jIh82oIBh97B75xE5DddBq6y3E4oNuNKYnbMTYpCuMSEzDu3Has0WzBqIgFePuUD6Ykr8W68M1Yu9UP+75eh+IbKfjOsQJPr3oK/ca8gJDE4yi7kIKC+jDElp7EV7q9mBC/AZ9G7sHoyDVYmLsIazQrsTBlB8ZFHsSo+NOYm6HHHEUFRqemYUzGSXwI4dU7AAAgAElEQVSSugOfRvtjr3oniq8k4WabBaXXUhBXcQxbs9bCL8Yfe5Q7cMx6EN/lb8S69HVYrD6DpYZs+OqLMCu/El7qSsxUl2G2qhRz8yqxILcC/rl2+OfUwD+nGYvyrmCBsgrzlRQdaMNCVRn81FVYpCnCvLwizFeUwVdViiX6SvhpKrBQWc583366SvgYSjEtT4mJaTGYnBIGr7QozEmNwpyk85ideAJzUg/h89QVGBq2FnNzwtmVeom6Dl7J8fBP/RrhJcdQd0eJM6U/YXnMemw3RGKT0YrV+hqs0FRimboIKzQVWK1rxCpKKuG2En1vW4mUy+2Gb55YwqcrrYTDtjDll26exS1OVoLjzObezPK5K7GVYNtDVSDg5hKjAfm+g127pXhA2llEIKWWMFWz9JLdZCWRy3n55skl4uT53LwQRwRuceeV7wy07bUMtPl+0FEHEoNsWy17MNk7l7sehxz1OMxKb5xWE1Z4Q6U3XPWg1kkG2iwikJffiJOKbyiDW4oHZMU3smhAVn5TIlW8c+Dm0xO8L0HycTtbJ+9hL6FsbooIFKMB+c7TSiQfN8G3G7D7BO3ySzjD1IizFaKaIGVzU2pJI1Mw2Uuq3GJFONVSGQ6H7rCqRkiSWijDWQkOFeE0IYJyuQV5RAVWUykOPaT0hO/7gnftFYjJJOIuWUruX35DxTc8qURMJhF3Zi9xtk6ms+IbZ/mNs3GSLt9iNCCzlwjRgDwi0OPRpCsW8IZn6+SlG8gj4ObiEYHCFC/efYE32Usoe5t7uLl3Wz51V27DBdxCCY5YfkOWEgbaBNuCePkNA29uLWEXbiq94XI2Tl6neMDbbksJt5bQFOwlFA8oikBbVBHZSQiub7S4JcA2ATcXB22CbRHAaa+8046qlg4orL9j+E67EIC0C1K9PDVcUuNlVv0pZNEVvCEQ+Y3nkND8C3Zql+GgZj3sV2OYJaO73YRrt1XIrA7Gouy1eDt6K54O/QGPBgfjr+ey8adzJvzxXAH+dDYVfzp9DH8O3I2/BQbh0aAQPHX6K7x4eg4+DBmPJcaNGJewFS+eOYR/nA3HX4LC8J8BgfjnqWN48tQPeOHUPrwetA2fUGFH7GbMTN2GjaqvsCl3C2ZF+8I/aSXCyk+g5EYyqm+moqI5DpfvZuJutxqdXeT3tuBulwG2G1GYEzMFTwdMwGMh+/FESCaeDs7D86GBeObcDrwWuQpvhq3A6wEb8X7QfgwJ+QZDwr7Eh2Eb8HbYHrwefhovUyFOZCEei7LgiWg9+kWp8VyEVvJ904VbkOTl5p5uPZ4l0HbWvou2EtrFKzcvwOHzpRgCbrdejqG0EikiUMzjZpncsQV41SmC7IFyscs2v3BLpTfUPPkGq3YXy2+MeJNaJ+OMeDVaj4EpRryZXIBXg1LxVlAaPo7V4p0UI16L0+H9xEK8E290ZXLf68Ekt5PwyUFbnB8lGkGSP5ikb1dKSbIJLuBONnk0TLpg22kp4WAtTg7ZvADHcxZiGLVQOsUgm0CbiUBbJubhpvr2Qg+NYcBN0C1pbEYhxjvhm6B7fEYxxmXYGXxPyLVgeLICk+P24IztMG51GHC5owAnqn7BK98Nwte6Nai7nYLWLj3CS/dhfoYfpmWfxXRlKYaGh2Bw8Ca8f8YXb50cg6lxvpizZwZW+s1AQnQQbrdXwtIYi+cm/xkvz3oGgYofUH4jEzfvqnGnQwdVczBmx/vg9cApGBw+B34qPxwoWoOtWT4YHzge7wQuwNiYYxgZH4ZxcYexXLsHUzK98UnIKGzMXYWk6tOILgnAAc1W+CfNwsrEWQiy7kNoxQns0x7EivSdWJl3Civ0JixWWzA/rxA+eQ7Mzy/FfHUxs5TMUxdiXn4xvJU1mK+8iIWKJpZislBRjIVKCxbShVtdAT91LRapquGraIB/fh37cX+tHX7qMvbj/uoq+GpM8MoNxfj4Q/giYTdmJWzDktTdWJ+2D9vSduNb1T4ElBzCyvx5+Pj455inPI9lRgdmp2VjVuz3WJe9GT9b9uCM4yccsO/CwqSlWJJ7BCs0eVhB5TeGOqwuqMIqbRnWqGs8PNwu0NZXQrSUrNNXwlO8AMc9NxgqsZEu232IwzbP5abZV2IJezjp4eX29HbzMpwdpmpw7WSgTbDtKflVm3+LcE27+IhSgm0qxHGLXbaFIhzm6S709HQzP7cTug+yivdaBtwMugm8HXUg4L7XY0lKK2GPJB3/e8U3ZDHhlhIeCcgnB2xx/ivFNwy2y9zlN3TV5iklfSaVlNNDyUb3VVsAbLpqe0D2ffK5eVoJjwfkUx4TyLK5nX5u8apNO79o0xQjAvnOk0
r4jKlpgiSp+l1egkNWExGyeUIJL7zh818uvhFKb6h98v9o8Y0A2WJcID2Y5CLgflD5DV25Rdh2PZq80gKdU3pqmXxg8Y3Tz+0qvOHFNzQ5ZHvGA/JUEnEWXnNbSOTWEvGqTZGAonhKCZ906eaALU5+2eaz/NZdVNCV26nyW9JeSZaTOx2oufM7h+/0+gCWdEJRg5l0BW84hcyGU8x+klt7Cqr6IMRc+RlfqlfhZ91uVN9KRXunEg13UxBZchTTg7ei/9nd+EtwMP4cko4/nc/BH87m4w9ndfjDOR3+EKLBf4Tk4t+C0/A/g3X4t2At/hgWh7+H/IS/HduE50/vwWMnfsDfQ5PxPyMK8D8j7PiPcBv+GG7Cn8M0+M/QLPw9OAH/OBuKvweewj8CjqLfyX14KehLvHBmN14+sRUTwvdgp+YQvvl/qHvPKCnrtN13n/3O2Xtm3nnHMaGOklFHR8fRMWBCMQGSkShBQpOaJJJMqIgj4AzmRIZuOudM51hVHaqrc84R6G5SZ0D8nXU/Vf/qp6obcPZZ511nPlzrvp9C8OtvXev+X5fpQz5NeAuvsi8xth6l9JQHrWcSaOkyczzvB17ymMt9bmu4+/hRhh0rYIhHE0P8irjTO5rBPu4M9jzKELcgRnjEM9IrhhG+QYzw9WSUjx/3BqQyKqiQ231zuD3AwJDgZIYFJDFcauG9+wpwFHQLbFuVxajALO4OlAIcx5tuPXTLLsBtL8KxAbfAtlXZ3B+czZ+DzQ5Z3M7QLfD9l2AzDwVlXtPZFuCW3G29nDO4/ybwHZTBw5HZ/DUsQ7vHHhMuUX/ZPBqWxcMhmTwZkctoSSSxy7H8xl6CY7vn1sO27JrLLXXvusZJBd9jIhyLcKxlOFb4dgZt+dZDtvOuoFvN6zna4mxrsB1dwLhoOR2Rx5L99WpMAa/GyElJQT9NiitENDm2gClxhZomC3THFSNzakI+05MKGBdlYkrIJ+yz7KGt00BdRxrvGT/i7n+O5mjp15y+kMiV3jyCyz5nZfxSZsXvZHyCBw/7fcFjAdt5NngLLwWs4aWja3ls8dMsWzWBzLwIOtubOJlnYsy4ody/cijTjs/gg4ytRNYdpOxsBKGl+5h/YCYTPeYzLmgZT7i/xoSg2cwIncTLHs/zxOHx/M19NiMPTGRuxALizh3gaPkeZni/xpij45jj/xoLPKYw99gUPkvcSGLLAY7X7eWdpLdxjdzF+tRA1pvztUKaJYn5LE3KYkVaPstTSlmaLNXvRSw2WlicnstCUxVvmGpYmlbJstRilqfmstKQw0qj3HdXsdzQwIrkkyxPrWVJioHpke7MjQtiucHCmsxKXAzpzIn4gTd8XXgnaSO7LZv5qnAbbpW78Sn/Et+iL4iq/B5jqzv7iz5moedyFkXvZ36iHzPD/s4bIUt4M3YBa2KXMitgESsS32B+5HxmB7/FisRA1hpLeDPrJJtzmtmYUcZmU2nfOUmGPJjsX4Qjv6lkEjVVMol+vp1VhUgP2Pr96rDdl82ttU5m17A923pWolxth+kE2Qq6d+TUsiO3zq6Pc2rtxTjOwK2+B4btej61ZXPL3J1f51COoy/FUbu651YnJPop5yTaSUlhA/ZIQBts66MB9fCtld84PKBs0EpvBiy+cXK2JaFEn1Kih23Z9e72tRJKrCklJwd0tZWjrbK39Qklzukk8u0mKj+p6VppJR6Sx60kqSQD6OrQrSvBsT2ctDvbtuZJvbs9UERg/5SSFoJr+6TP4la7HbivUXyjHG3n6eBq25JJJK3EOY1EfesbJvUOd9zJsyipSECZ2t22rfhG73arsxH91ABbV3zzSyICnbO39ckkCrjtsG2LCNRHA6rd3NqJPgZQ7SqdRIsC1BXfqNxt/dQDtvOuSm+0hBJb8Y3mbDs52lZ3uwvN2dZlcwtw6xNLtLQSSSzRAbiCbf0U8K5q79FU3dGLSL5l1nRe1MC7rusyqfmlPPDIv2nJjsQMyqlJgg2+oxsOagAuJTspDUcwNBzjSNVe3kvYxPGMLznVnkRNZwhflH/AEx5zGH5gD7e4J3OTp4kbPVK4wSOV33uZuME3g//0Tue3x1P5Lw8Dt/hmcoNPGv/plcrvvTO5xTOXWw9ZGOlWzOCjZm6T/z4gif8MSOP3fun8zkegPZ0bvLK4ySeXQb5FDPItZpBPCTf5GLnRP5FBAQZu8whn2NEvuG/fIu7/+hnu/fxJnnZfwGsR23gz+j0OGj/V3Pw1kZv4y7FNPBjkxnCfKP7oZ+RO3xKGeFYzwq+Au/2NjPIzcm9AHiP9ihnsk8MdvkYGBxgZ4ZfFnZ5Z3OWTy5DAbAYHJjEsKJ67g1K5NyCbe/xykPxtvaRp0lmq6l252moqZ9vqavflcEsRzgNB2QPIWoajXG41/xIsbrTUu2fbwdv5lERzucXVDs22g7czdOuLb/4WmsXjETk8FJTO6Kg8nozK1R5YyqmJ5nKHmu2lN6oAZ8AcbpurfU13O9yCwL0qv9FPezSg7qGkBtzXgG4F2vppfyhpK8B5JVruth01LrYQkTNsT4gpZEJskYNejS28KnTb4TuukKnxAt0C4QVMSyjSpgbfiYWMjzLzavAuvsrYRfPZBMrbolkZtYGH9r1CWJMXFzoy4acKMs8G8K5hC88fn889bpP5w7djGHbwOR52e5XJwXsZe+wQf3p7Iy/uWsoX5s9Jq/EkJ+UIn+914U23VYx3m8XfDjzGRO/RrIqazkLfOUz5YQZrAtexJ/NT3jd+wAz/Bfx5/7M8cOgJXvQey3PuzzD8y8E873YfB0vkhvsDFnotZurRGayNXs53BTuIKP2CgPIv+ND0Hi7+G1kZupd1hihWpptZlFTEopQ6FiQXsMSUrZ2eLEmtZnFyM0vSTrLYJJGCpSw0FrPYWMQSQyHL0iysMFhYZcxjlalEu/dekVLPqpRTuKTkMyf2MI8fXcJT7uuYHn6URYnRTA37nrGHl/FhxNuk9YTxY8l21gZPY/OJBVpiiWvwIpYHzGN76jp8q75lb/pnLPJ9l/GeK5kSOIUFIeNZE72cjRlfsMC4myf3/4054bN5PfQtNqSFa+cl6zKa2WCuZ2N6CW9nlg4I3JrLrd1xVyNnJQLdetBWuz6D+xeBtrmG92zSl9/od+Voy1Rg7Tx3WGpR+jinjoGkANt5CnAraTGBOtBW0L2roAGl3fn1KMjWT60QR1W+Oz2cFPhW0K3mF4XXz97+pqgve/tqxTf2R5LFTVzL1b4WdAuAK/AesPimvH/xjR64NeiuOM1Rm5yB262yBUddPyJQoNurssVB3lWtOMunqgW99HGAavevPo1IOdr6qVxtNfsBd52u/MYG3gq09VM53vocbrX/f1V8E990BiU9ZOv3gW61ne+19U721SDb7m6f6svjTjtt3Q0tHYgGgm2BbuVoy1SA7Tyz2rpQcoZvPXSr3bn4xhmw5VtrmLxe8Y0te1u52srJ1s9Sge1zXZSd73ZQ+YUeHGS733YAbtt5iXbTLaclOtgW4FYQLntd92Xqu3/694bv1MajJNcdJlEeW
jYfJbrxkCatar7xKKZmdzxKv+bTuK0EFXxLwbkwPjXsZNSRN/it23bu8I7iJvdi/utwIjd7xHKHfzq3+pn5T/d0futm3e/wy+CW44kMcjdwu5eFW3xzucHbwk0e2Qz1LuD2Q0ZuOpTCbT4Z3OaTya0eUtNu5FZPA7d5G7jDL43bfZMZ5JPArd4x3BKQwY3+xdzkUaTF+43y/45bf3iJu/Y9ypDDs7jz+Cfc6HGImw/v4Uk3FzYmL2Nd8moec9vKEC83bvaP5ObwKG4PimeIlzRLmhjhncpIzxTu8TEyUtJLfI0MC0xnZHAOo/wLGeplZoSfmfvCchgZmMZwn2RG+Zu4J8DC3X4WO3j3A+5AM/fapGBbTQXdaurheyDofjDYwoMC2EFmBz0UnI1efw3uA2vlbA8E2M6/6Qtv1P54eJaWrS353I+EW3g43HqS8lSohWfCLIwOzbTD94DQrcpvfslJia51UoG3HbptLZQvROYgcq50l29VfDPQ1N9ryy5utrP6wXZ0Ia/aJOciek2KLUY0UQffCrbVnBxfhGhKXJEG31PjC2xut9X1nhKfx7SEQsZFWHgxYA//TN/NyXPxVLSF4xq9iqc8JuJTuY/z3VL8VEQvVcRVBzDLYxJ37B7EY8df5qOibXxdvpY3/LcwPSSUGYZCXor3Z7S3K3MCF/JFxruUXIyi7ed6Ms9l8m3Odmb4Psaf9g3lz8efYkzIYuYEbuRLw9/JOHOUqJq9fJa2lSme43n8wJ9ZGjSD6d7jGbznj/z18/uZfngqn6Z9hLk1hNNdSURXH2WjYTMzQpez0G8H6xIi2JhZzmpDFUuTi60nJYZKFqaV8EZaMYtSS1iUUsESQz3LTM0sTmtkUXI1S+XMRGIBjcW4GPNZbipgZXoxq03yb0kiSRWuxmrWZVSzMjOCJ9znMuKrx7j/u+mM8XiXsT7v8eSBhczzdGFV9DpeD1nEBI+5PHd4KRMD/87yzCCWZ0fziu8e5rtNJKXHn8jT/riGL+PFI6OZH76Uj/MD2JodzaOHVvLg5w+wMulNthmPsj0zh62ZTdrDTNf0Yt5ML0AytTdnVNmlnG3nqWBbph649ftVnW1LX727gLcetNWuz+G23ms7npAIfOtdbQHvgYG7np25Vjmck+TKo8m+UxK178qrZ7dOe/IbcNZn+fVaRKDEBFqjAuvpn9Ftu+keALo/L25EpM/e/rpIFd1Y5zfFzWjSwbdKKVFTpZXo4fuagC0ud0mzHbLtsP0vFN8cKTt9jbIba+62HrwdgbsF96pWqypb7Pnc+gIctWvZ21V94O0M3PLtU91mlQ2+FWjr50CtkwM53ALewbriG9mvVnwj5TdKepdb7frs7b6iG+fym3+t+EblbTtPAW89aKtd72o7w7fe3Vb71YBbwbVMw6l2jE4ynRbg7pMG2rrSG2fAlu+slk7MOmW3dtFfVudbgbaa+uKb3LOOde4KvvWudv7ZLq1x8lr52yp7Ww/bAzrbOvB2AG4bgFdc6KHyQrddyuHWz+r2Hmrae6ntuNhPdZ2XqO+6TEPXT6QVlPHg3/5NGy4N9UdJqjpAgjRdnjxG7MkjRDUeJLbhkPbw0tDkRkDWXr5L3szBom3syXmXZ46s41b3I/yXVyY3u5v5L48E/svDyO/dM7jBLYWb3eK57Xg8tx9PYpB7Gje5Z3KTRx53HM/ido80bvRN4ffBKdwQlMgNHlH80TuBIb6pDPJI4naPFO7yNth1p1caf/RMcdBg3ySGeScz4kgiIw56MvLIeoYeeIi7jzzCkEOu3OkZxc1BZdzgk8gDx77g7bhPyOv2Y1P0KkbtX8pd3vsZ7pPKUO90hgSlMSwgk+G+2bYSHKl4N2l178P8MhimZXXLzXU2d/unc4+vKIt7fC2aS35PQC6j/MwafF8LvAXAFXSrqaBbTQXfzuBthW4B7/8z+HaGbLuzrSu80cpvQi30L77J5KkQM2MiC3g4LIeHo/IZHVXA08EWngkyaycoA0G3/KZKb7R5nVtu7aTkF8L3vwreA52YXBe85XGkE3Ar+Fbgrc2YQibFFmhnJHJKoiTArZc43wq+ZyQXMi0xH4HvqfFFjIvIYVzY13xf8B3nLxqpPBuGa/QSHj3yNP9I3EjThTDaf06jp6eC1vYCAmt/ZG7gVMZ+ex9JTfto683jeO6PTPJbxp+PvckzfvuYHuXPxAhvng7azorY1ZzIc6elrZBzPQaq2iOJrfPlY8PHjPdZzzOhu7nv+Goe8XyZ2ZGTcA1fxHi3V/nr4SdYEjKfJSeWMPrYi6xL2Ez4qXAy22OIKP6MPVGLcAl/gyUJO1iQHci8/Bzmp5eyJKmKFUl1uKQ0aA8nV6aWsjJVCnLqcEkuZ4Uhn5UmMysNFlamFGkZ3q4p9awx1LE6Q263y1km0YHGClxN1aw1VbPGVMpqUz5r0s6xIauRV4Le5f6DD3Hfgb/wUtBKFqQcYFGiBy7xhxl37D3mROxmrdkT18woVpuyWJ9Vy2pTIzNCY3jFfTJJHcEkVQXyYfhmVsVvYEuBG1uzE5kV8DmP/Pg0kwKm8nHxAfYWZvNeSh2bDafYaKlhnSWPjVnFbDE02MFbINwZupXjfbXiGynA+VeLbxRwq6kHb9kHgm8H8M69mtPdB94C4NeDb3G5Bb6dYVt9f1bQiKbrwLe9dVLL6O47LVF33Oq05Csn4FYAbgdvAXAbfCvgVlOBt7rndoZu+dafk2j7NcD7lxbfOMO3HrTVLsDtXjGAKls5bpcVvhVsq6nuuBV8DwTdDuAtAF7Vgh641a7AW6be6f4l8D0geNeftUP3/xv4/j8pvnGGbvmW4hsF2/qpB2/Zrwffv7z4xhG+9dAte3pLp71pckDotjnb4mpfC74tbd1Y2rpQhTdq9hXedKHtA8C3A3if60Lg+1rgrS++0cO380mJdlZyvtvR5VbQ3d5LhaZfAN9yXmKDb4FtkQJx2Ru6f6Kx+8q/N3xHN7sTXXeEpFo5MTlKYsNhzfmObD5GpJTuNHvxQ9EOtiYvZVXcSiZF72Sw71F+45XAIJ8s/ng0lUFuadzsYeQmTwM3eaZws2cyt3gmc6tXKrd4pXGzl4GbvUwM8jJym1cat3qncauvKJVbvZK5wyeNP/qKy53G7d4G/uhtvKYG+WVxe0AmQ33CGOH2McMOvsrQ/c8y6ugahru7c4dnNjcFVHKjv4FRXt/xevQ7fFG0jcUx87nby4U7fQ4w0jODuz3zGOZvREpwhvtlMMIvg5F+1pp3lcktv2mJJurBpKSbSMW7v5yVmK3yl5QSffGN2vtHAqpoQJmO8YDWmEApwnkwJKuf/hJqRlOIGZXF3TfNSC63XdoDyiweCXXU30LN/C3MKlV8I+U3dkk9vJO0OMAwM09FWKzJJREWnozM4UkpwQnPRiu+iczmGQfZ0kmiLDwr0u65+xJK9BGBKpnkatM5IlBuuV+4TvGNpJS8HJ2rk2MOtxYP6JBMYk0lUekkWkJJXCET5ZGkk+Th5OT4wj7ZTknUSYkVsq1nJta9iGnxBUyLL9Q0PTGPGck5mhM+
I76WGQlVvBSaxozIvRyv/JrOKym0dpo4YvmRB/a9wGiv6aw98SZhdV/S0pnITxdLaOg1crT0WyYdGI939Q+cvZxFYp0bCwPnM/bQVKb5b2BCzB6ejfmBl+OPMMZrB7N9tvPP1M/Jbj5Ox8Vkzl20EF/ny7zjy3lm/zyWJm5lXvA85gdNZm7AeF459jzP7RvDIs8p7E5dj2/Nl0Q1Hcej6AC7Uj5lY/QmVsR8xtKkWJalprEsPY8lGaUsMZSyNLWUZSllWg73stRSVqQVsyKtlOVaZGApKwzFrJDyHEMhKw3FuBrL2WCsYk2qONvNuKRWsiqzAteMMlyNFaxNq2d9Si0b0nJZZW7EJbeYyUE7WRXryvLgxSwKfIcVqf64WjJZk5GDS0IWq1MsrEvPZ6OljjezW1hhKmNGjBdTgrZyJPdzQho8+TD+Pbalf867RQlsLShkcZwfY4++zvNHH2d7xjF25iSzXeIBjY1syKhjg6WMjZYitmRWsc3QjDyUdJC5mm1OktKbd7VHks5lN3KvbVN2bT9X2xmsP7AX30j5jbUAx1p8I+U3VqkYwL5Zq5XdSOGNkhTf2MtvtBKcWj6xRQCqKECZ+kQS7eFkQR27dJICHP1jSWtCST3/KLQmlGhJJb/gpOTzonprUklxPfqIwK+KrQklklKi7dqDSltiif2mu5GvixutN922iEA9bGvA7VCA08T3pY0O+qG0iR/K+iTxgPtLrZnb+vIblbst85cW36hUEpkq9q9v2urcByq+qdLlb2vFN1J+Y5WXuNzVfVKPJp0fSPpWt+Bru9fWP5bs21vxr3VUQF2brfjG+kjyqiU49S2E6BTa0IpeYQ2q+EafUtI/jzuqsRW9TjS1oVe0vfhGym+UBijBaTpDnE3OJTjyLeU3csN9PbhO1hXfSAFOygDFN1dLJtEX4RhbLmBqOW/TBdJ1Odyy94sF1FJKOtAnlMiefaaDbCnBab2gPao0t3WSfaaHnHMXyWrtwnL2IpYzAuDibHeTf6aL3JYL5Jw+R0nnRXLPd2Bp7yT/goB1J3lnbO2T5zq11JOi9l5yWtu1NsnSC70Un+9Bpqb2ixSe7aLobBel7RcpudBL0ZlOys73UNV5ibzT5yk5202J/Ln8vXOibk3l5+Re+yIV53soO9ulzaoLcrd9iZrOS8hefq5Tu92uON9lPSHp+QnZxRHX7ro7eqjt7NZU19WDSP/d0HORpp6LmApLeOjhx/49c75PnHTnRKMbiXVHSa07ohXuxDQeJvrUMaIbjmA85cmerDd5PWA6L3os5kHfPdziG8yvPVO41Sud248lc4d7OoM8jNzimapJoPtWgWydBnmnMcjbekYipyR63e5jROkObyN6/dHHhLNuk0ePvvkM9U1lxPGjDDm4iaH7X+ce91Xc5/0p93odZthxT4Yf+5aHDm3ghaOzmBe+mokn/sH9/m4M941hpI9BOy8Z7mdCyQ7c/n0APs7Yg6kAACAASURBVNI/3Z5SImklzo8mrWklfcU3fwpU4J3RL4/7/sBMlLR4QIkItEniAbWIQFsBjirC0cpwQrP4i0590J3VB9w6+H441NwPvAXEFXjL1OA7PAs5K1F6IsKMs5yzuFUmt8ynIi2aHME72wrcCrxtUw/c+l0P3f0eUZ7IdXxEeZXiG7npVrfcMh3BO9ehAEcD71hbDrdTJKCC71fjChFZ4TufSXF9mhwv8N0nBd0yncFbbrutKmB6QjHTE0qYnpjPjKQcpsYVMDOxnplJ5bwQeoIFYbsJKv+BLgx0XM4iuSWU1abNvBi8hJfcXVgStJHvLd9gaomk4WIGtR2phBb9iPlCGEn17nydtYMpPjN43Xsa78WvYUnsRp48spSJ/tuZGLCdx48vY4zHTJaGzeOfpq341O7jUOPXzA6YzrQjLxPefBBDkycpNV7E1/gRXO2OZ8X3+Ff9QGDtV7iX7OCfGZvYkrCVVfF7cUn2wcVg0E5F5FxEaVlaiUPF+7K0YpbbtMJQgrNWGqUMp5w1KTWsM9Wx2lDJitRyXNOrWSE18oZy1poqWJdWyluGMpannWR1Xh1TAvawK+9TDhTu4sPYHSwK/jvzYo6wIS2HdWmSC17MmwXVrM2pZkVKEYsjIlka8Rk7LdtJO+fHx4nLmOk+jkVxW5gV+08mBu/lFZ+3edF9KjODJ7AnL4YdWWa2mSTdpJH15hoNvjdZStmaUc3bAt/mKgfp87dllwxu55Ib9e2Qve0E3/3AO6f+FxXfqFQSbWqxgLayG5W9LTfbtpZJ5+lQfOME33roln13Yb2Wx60iAfVTxQPaIwJ1AK5uuPXzCyfoVgCuQbeAtw2+9XncatdyuUus8G2NCWzSEkv0AK4aJ2VqoF3ayA82/VjWhF7SQKnP4L4mfFc0c0Sno5UncZaC7z7g7muctGZvSwxgX/627M4Z3BIJeLWHkgLe1kp3cbQdmyb7ILvFnlSiHk0G1Lail3bDXddG4ADw7VyAI98h9ac1SemNs1QJjnNEoL78Ru1RjS0oXa0Epw+6FXy3oX88Kbu65R7otES5287gLd/qlETNlJPn0CTgfeo8+jhA1Tb5r8J3P/C2ld9cLx5Q4NuiK7rJPddFdlsnAuC553oR1zv33EVyzvVgOdtJTlsHeW0dFJ/rpqD1PMa6Jgrbu8g5347kcctDyMKzAt8XyD3brqms6xIFkmDS1k7BmQ7rfraDovNdmvLPtGttlCXtPRRf6KborMQDdlPZeZGy9l6qu3+i6EyH9ltFh/U3ufkuP9+FJJOU2abs5XJm0t6j/V6hAbY8nhQHvIuazl7quuUR5UXqeyTF5CJV7V3UdnZR19Vtl/qu7+6hoaeXpt5e0ouK+evDj/97wndM81FiG4+RWO9GYu0x4moldvAwcacOk9BwgMyTbnyXtYV5fov429FNDPXcz61+J7jBK4VbvNK52c3AbQLfnkYGeaX1kx6yZb/dyybZvY2aHGHbyJ0+pn66y8fEXb7pmu70MjPYK5fhPrkM98zgriMhDDn0KfceX8oj3q/zlJcLz3u5Mit0Pa7R65jptZBnvN/m0fAI7Vzkbp987XHlCP8UBgJu+U01UEou90DAbYVu1TzZH771Drfmcgdl9kspUVGB6sHkg8FZ9tZJLY9bWihDsjQ5AHeImb/qpD2qDDXzcGi2Ju0xpcC2TgLbzlLQLbMfdEdmMzoiGz1sq11Bt5p6+Nacbh14jzmRw5goSS25fiHO1RJLHB5MCmhL8c1Vym+kCMeh7l0KcJxKcKT+fUJsvj0K0B4JqAG3QLdVVtfbCt564Fb7lIRCpmjnJHqnu9AG3Aq8i5ieUMiMhFJmJJTbIDyPafH5zE6uYWZKLs+HerEq9BOiy904edmEucWXH83b+bzoY9bGbGGq99u8EvAV08I+4u2UD0io9eXSpRw6ujM4czGVb0w7mOo5g0eOvcSk4CksiZnNVP8pTDg8iX/mfMaxyiO8m/ERqxPWsSHelaUxi5kW9zpT4mbz2P5HmX34BXLP+9L9s5nLV0rpvlJNY6+FzNZAQip/5Evzh2xO3MT
qpJ2sMB7FJSOOpek5Wlyggu6rz/7ALQAu0K0kgO1qqGKdAHdKCWtNNbgaa1meUs4qUzlrzCWsycznrfQqVsWfwdV0isl+X7I7/zOSW48RVXOY95LfY1rQm6yJj2JzZg3rMk+xLq+SRUmhLIr8jg2Ru/k8/TPCTx3kG8t2FnmPZ/zhqUwJ3sDTXot57NhMxgcuYn7MGyyJmcOu3EQ+yrJo8P1WegNvmmvYaCljc3YpWzOr2WZsGrD0RkG3Kr9RsK2mHrrV/r6lFmfglu8Pcurtcm6c7HO3rTnc8i053M6PJOVbPZK0zprrF99IHKAkleilOd317CmwSqWUaOU3WkSg1fXWp5VoMYG2xBI9cKtbbplfFDc4Ot4KuHVTwbaaeuhWu3Ptux661S7w3a/i3dY+qTVOlttaJkutxTdaNKAqvdFNe/FNRbNDzbsG2/bimz7Q1sO3gm41FXw7Q7d8W4tvWlDutn5aoVullDjCtzN4K4dbPZTUz36PJlVCSV0LwVL9ropwNOiWEhyJCGyxK6y+hTBpnmyw1r4r6I6ob0VTQyuRDW19kkzuxjNo2dxOrZMC4P2aJ+2Od3/oViklmrstDrdOCrrV1MO3gm015ZzEKifgPnUeZ9jW4gFPWWMC7QU4tjIcLZdbiwfsq3vPaBGnu8/tFtdbwFuLB2yxTnNrB3pli+vd1kmOQLUOwM0tAs5d5At8t3aS29ZN9tlOss60a+2Uhee6KDnXhW9CGms/3o1PihHLqVZrm6T2Z50Une2g8GyHBuOyF51p1/6OZG+r+MCSjh6K27u1b4kQlF3T+S4tQlBOSyq7LlHe0av9e2UC1RqQi3NubaSs7BIY76a657KWya09ouzs1WrhKzu6qenopvJcB7WdvdR3X9S+5bc6AfGuXmo7e6jr7KK+q5v6rh5t1nXKn3fT0NVDY89Fmnsv/nvDd1zTQS3XO6HeTWt1i6s/RlyzxA3+SGLt92Q2HsK/4p+siv2YBzy+4RbPEAb5JTPIBt9/OJ7BLR7pDPIy9ANvgXE9fMtJiYJvPXDrdwFvDbSd5mDfdJTkkeQwbxPD/bMY5lfIXR4FDDkWxr3uH/GY5yKe93yN1wJm8nbaSj4vfJ/FkWv5k9s2hgaFMUySTXxLGeWTxQhfA+JsO2uUf18ZjhTh6OFbJZboYwL/FJjJfYEZdjmX4Mi3ONwKtvWzD7ytGd2qBEfL6Q7J6ndiItDtnGAi3wLcSpLRra97d26dlO/HtfOSLJ4Q8JY9wsxou7IZHWkFb2md1EvOTJz1bGQ2VlnPTMZEWbAqBw2+T+TwXJQunztKvnP6ZXQ7xwNqJyZObrcqvhnojls9qtRgW0G3bdpbJ21utwC3Ph5QPZLUT+3EJM6a0S3APUUvAe+EPuhWZyV9brdAd59eSyjntfgqZsSXMiM+nxnxBcxNrWBmqoFnQ77lndhPMNYHYD4Xx5emd1nlM4G9prfY4r+StTFfMMeQwAvBu1gQsJzAgn38dMnCz7059F5M54esT7Ua+r8ce5nHAsbzqMczjDk6hnfj11BxKZazP2dQcCaMjNNBpLcE4179IxsMG5njP5d5btPZHbOR6vYoOn5Op7krgayTfgSVfMsPpo/4MPZ9XCM/YVmCGyvS01ieVcyyjDKWGYtZllaES5r1oaSC7+XyaNKuMlYYSlkpsK3TKnG7HVTCKmMBazJKcDWUst5UjWuqFcZdM4txSc9gebqR9emFbEw6zUZTC1MCj7IlZQ8ppzzIOn2c3enbmB21lg2mRN7Ob2ST5RQuifEsDP8U16jV7Eh1ZX/edtzLdrM8dA7r4nezONKN+XEeTIv5jFeCVzEl9HUWx89jyYl5fJJj4IOsfN5Ol4eVDbyVVcOm7DK2ZJeyTeDb1IDK4NZP5+Ib58IbAW5VeKPmB5ZaHCXgLTfcfZIqd2vpjZTfDFx8c33wloeTfa2Tzqcl6lugW59OIrs+oUR268NJa/OkA3A7Od1SgOOsLzToFvBu5Es5L3Eqw9EKcaQUR6nYsQxHiw0ssZbiiOOtwFtFBUoD5Q+a0+04fyxrHhC+FXjLVMU3Un5z9QIca+ukcrb1Uw/aau/L41anJgMU3+hKbzS3W05L7MU3VgDXcrjF7a62QrdvTStKKhpQudtq9nO4a1tR8N0PvCWt5CopJSqhRKYG3ALdNingVjNCgLveqmtFBCq3O1qgW1ObBt99JThtxDZZpZ2VNOuLcM6iim8UdCdqEYGOMYFJtiIc7aRE53Y7pJbY4NvB6R4AvAXEjaJTF+zlN6oER59ektHSjgbdAt4tUu1uBW79NLe2Y25pv05MoED5eSxt7eSd6yKn9QL5Zzoo0c5Aeig404n5TDtZ59vJFUgW8D3Xwac/HmLoQ4+xdNM7xOcWau6ypI9oKSFdAroXaei5RPm5LrsbLfCtVb9LBGBXL3oAL27vorSzh9IOK1xrOd3t3RSeadcAXEBbQFwgXCIEyzu6qe29TEWnlOH0UqlmV6/2XSMutySbnG3XYLuh+xK1AurigrdbAbxBXPDObk0C22qXKd9ycnJS4Lu45N/X+Y5v3Ed8w0Hi6t2Jb/AiocmD+OZDxNZ9Q3zNdxgbjxDR7MFa43FGeQfwG7dEbvIycrtnsnbHfYN3Fjf7mKwnJQLbmvrOSjTg1lxug3ZOMtA9t97pFvAerJcNuof4ptOnNIb6JzA0MImhgekM8RMHPIt7jwfzF48veNRrKy97b2RR+DusSN7GWP9F/PHACu70O86wYAsjAooY6ZPJKB+jw0mJKsHRn5fYYVvKcALE4bbmceunZHOr4hv91EO27A/q5VCIY83m1vK5dcBtrXuXUpy+e27tpERfiiORgQNID9x9rZPZ2Ovew7NRJyVqKmdbTX3du+Rw6/VsVA5Kv9TVdj4rcYZt7ZZb3XTbnG3HEhy55Zazkj5ZC3D6GiclNnB8dD7jpXXS1jzpXIIj3wLe8lBSym4UcE+WTG6RPaWkLx5QnZf0Oy1Rt9w60FbQPSOxGNFrCUXMjC9nZnw1r8lMKNA0L62U11LieSpwD59nfEbFuRMYT0WxPXYdS/3Hs9u0ge1hK/h77g8szohlUtgeNsdvJa3JnytXcvmpN5NL5JDVGsT22E0siljOnPA3mOs3l7cj1pJW782VK0VwMZ+fenK1vZcC2q/kU92WQmK+Oyll7lha/SltCyWt6RjHCnbxYcqbrI1ezcrod1gdv4/VaYmsMlayzFDPkpQalqVWsFqq3A1luKSWsFzuuW2SinclKcBZaSxjlaFE02pjKXq5msoQrTKWaKU567KK2ZBVxFpDIWtSi9mYVcqSxHimhR1mXrwHa0xmtqRW8k5OK3Pj4lgcshfvoh8Iyt/FW+HLWJCygy2WXN7MzsIlJYTpfh+xNHQt2xJXscPoyjtJq3EJWcCSuMXsLMnknZxzbMxp4c2CQmbHfMNYtxlM85+Aa/xGPrJk835WsZZqsiWzni3marZml7Mtu0zL5H47o94hl/tqiSXvD1B8owpv1PwlEYEOJyXXapZU5TdOsYAqpUS7375e8Y0tnURLKbFFAv
YBtxW8tUhASShxkh60VTa38wNK9ZBSSzGx32873nMrR9sO2rrad3lMqT8tkV3aJ6/7mLKsmf3aaYmclzShoPuAON42yU23apmUaS/BKT+FigyUbG4F3NZWScebbjts24pv1CNJNdVjScniViclaurdbe2sRGD7Kikl6qGkf03rL3ooGVTbiibtvETuu9sI1nSG4HqrVCSgeijpCN0C3m2E17dY1dCKgHaf2ohstEpB94mGNjQ1nkFlc0set1WtRDe26u65+5+W6E9KBjorEfBObLJK726r3e5uOxXfqDhA/Uw7dYE0gW6bs60vwTG2tCNygO3T7aSLbGU4Kpc7s6WdTIFum7M9UAmOyuOWaVFq69Tut9XDSYHu7Nbz5MhZyLluis51Udh2gZzmVjJrG8htbiHn3AVyO7soaO+k5Hw71efb+fjL77jnkdGMfnECh/2CKTrVRk7TaZLyi0m05JGcW0xafiExmTmYyqvJO9VK4dkLFJ3voOhCB8U2lXf1UniundKObko7uynrsFa7SzyguNoC2tU9l6ztk/JnndJE2UVlRxc13b3arGjvoOx8u/Zd1dmNSE5MGsTZbu+m9kKnNhs6e2nsEl3UJH/e0NlNo7jcNsm3/C7/XbMG35c0+H7or/+mN9+SlpCgwfdx4ht9SDrpSWLTfmLqvyWu8QiJzX4cqYng9eRI7vKK438fS+dGz0xu1x5WpvIHnwxu8TNpDrdytfVT72oLeN+p04AOt086QwbQUN8MlAYHpHFXYDJ3+aUy1D+ZEYEJjJLHld7ZjPROZYRvMPe6B/Cng+4MO/YBdx1zYdiRzdznG8jI4EIG++dzj5+B+wPSNPi+56q33AODtlaE49A66QjfztCtwNs5IlC+9RGBDwWb7fncUv0+oMMdYuaq6SWhFuwRgaHma7ZOSka3qnhXsC3T7nLbHG49cGtFOJF90O0M3/r7bbuzbat214O3M3Sr74EiAp0d7ldO5KPknFiivgWupfjGWXqnW6WS6Kc+ncS6W+HbGbjlWzncCrRlzlBS0C3greA7oYxZCVXMSihnVmIhMxMLmJtSxIykOJ4K/AdH8r7lQlcSp7uzCS7fz4aI1/kw403cW79iZ9HfmRm5mzEB69mc+T5556O48nMevT9l0vNzHpfJo7I1ktjqY1r0X2a9L7WtcVz5qZifL5fzc0chV3oL+flKIb0/Z9N1OZf27nxOtadRcjYSn+pv2JH6DkuD1/JawFu8HvcZK81+rM0zsNZiYXV6MS4pZbikVLHSUMsqUw0rDBW4pMljSp0MZawUadAt4F3OKmOZA3ALfCvo7pvVuKY3szaznjXpJazPzGNVcgqLY3yZ4ruTsUeXMT3ofTamF/JmYj7r02tYkVXM6/4/8GPG53jmbmdT9HIWJuxgdbgHy+K/Y1yAK48eeJUpvnNZG/8mW1J34hK1iwlu23gt0oU5J75lpbGADeYLuGZUMN73U5744RUWhq1gb9kJ3s8u5l1zOe9IRGBWrfUhZXY5b2fLb1W8k1k7YNPke9m6eEA5J5HSGyVLDQq49eU3zlncWjygU0SgHr4HcrjlNwXYzlNlcGvFN3l9xTfO7rb6do4DVN+STqKkGifV1IO22vWQrd+/Km5CSR5M2s9JxOG23XDrpwLtPmdbHk46utridOvhu1+CieRzl51Ef8et9n6PKXXQrS/BUfux8lO/qPjGDtu68hs9eOvhuz9069zt6lYtpcSvuhVNNW30gXcb/rqUEuVqq6l3t/+ViEAVB6ifKotbudpqKtDWTwXdfaCtgFtfhGOFbeVuxzb1PaZURTgKvpW7rabmcttKcBRoq6mAWz+V062HbbWrmEBxtJ2lh23N3bYBtwJt/VTJJVktHZqrLc62kh629cCtQFs/VVpJ/rluijsuknemnczGFsrk7rrtPD8GhLBl9z8JTEolr+0MhZ1dGjCXnr2AoaiMRas3MHuxCy9MnMrfv/gGY2k53vFJzF7hysvTZjFz4TKmzl3Ei1NmsnXXP0gsKqGm9yJV3XI+0kVZezvF585Td0nc6y7t9/KOTkrPXaD8QgfVnd0aTDde+kmb6ncrWPdq99hVF9qpbu+goadHm80XL2m/a/faXd00XOikpfcyTZ09NHV0car7Iidtau7qpamjm8aObu3PtW/572yS71M9lzjde4mMkhL+beE7pfkASY2HiKu3wXezOwkN3xPXvJ+Y1kB8G8J50xzL0+Hx3Omdxh+8LQzyz+MO73Ru9UjiFu9UbvGzOt0Kuu/wsrrcevCW/RedlDiBtwJu/Rzin85gv2yGeBUw1CuLkQHRjAqJYkRQFoMDirk9sIC7/Mq4y7uIW33jGRUSyaN+UTzimczd/tnc5SeV7iYeDErX3GzlbuvdbLULaN8f0CcpvhlIAwK3zt2W8xI9fDtCtzWnW/K5BwRu55OSkGw7gCvYdp5Xq3kfHW5BJM2T/domndxtDbYjLFrdu6p318/nonIR6cH6anfbCrD10zm1RA/fL5/oc7f1TrcCbOepz+jWQ/fEq7RPKujuD9ySyV2saVq8JJVYU0rU1MO2BtzxRbxm08yEYvSalViCpoRCZiWUMCu+jDmJJcxNlu8C5iQVMzPRyNiQg+xN/ZKaxmB+ulRES3cm7gV7mXtkIivD5rA+bgWzfJYx/NB4nguYRXDhD3DJzM+XMmi/bOLKz/lwOY+fL2Vz6UoGF8mAn7P56YqFjis5dF3J58yFJM51JHC6O5aS7kAiTn3Proz3mB/syrNuqxkXvpe5aUEsNCay2JTFMmMlS5KbeCOhiKUJZlamFbI6rQqXtDreMFayMKuIpVlFrEorc9Bqg+Ry61WmnZLIOckaY5mD1prkMWU569KrWJfRwPLk07gktbLaUMCrvnt45fg8FkW8jkv4fOZ6rmd9Sh5v59SzLruelVl5vO71D/Zn7CGu8XN2pW1gltsCPgp9j3Wxa3j0yPOM2v8Cz/gsZbnhaz4sTGBHbhUb08p45ugK7v7mr4wL28K6vEyWpAXznMd8xvtM5v1sN/6RK662rXEyqxaJBbQW4VTwjrmCd7MqeTer2l56c7UMbokE1Lva+l0P3Dss1SjpIVu/Xw249ffceuj+NM+xbVIB+EAnJc5nJQq21VTArZ+f61onFWw7O9xfOZXjqIhA/fzGydEeMCKwuMkBrBVkDwTYzr/tLzuF0gEn+NZDt9qt99ynrpnRLa2T12qbFOiW4hvPCquuVYCjXG01VfyfmgLb/lVWBVS3oVdgzRmsarOfkgwE3QrAFXw7xwOG1p0hVOIB689qzvZAwK3AW+Y1odvmcDtDt2qa1M+4xjZECrD10w7ZNldbudsyFWSrmSznJM3yWPJ8PynAdp6ay20rvnGGb2fg1qDbyd0eCLoVfJtt8O0M3Bp0O7nbA0G3gu+sU+covNCJONDFZzs019gnLolnp07nllGjeGfXbkyVVVT3XqS2u5esimq2fLiTG24fzPzlK5ky93U+2PMPTmRk8oX7ce599An+929+z//4n7/mP371W24bOooFa9ZzwpxNXXc3tR2dNHT30
PLTT1Sdv0DTRTlP6aWxpxcNnrt7aOzspuXiJWoudFB7oUM7/2i5fEU7BWnu7qH10iVOX7xIc1cXdefPc/bSJU52d9N28SJtly7RcukSZy5f5vxPV2jpvcjJrl7OXLzIucuXaem9pEG3gLf8LrAtoK2kvgXSNfi+eJnMktJ/X/hOat5PglTKN3iQ2OhJcsMREmq/JaH1GIEtEXyc7cdjwSe03O7b3BMY5GniFl8Lt3pncodHAn+UyEH/VG73sUYE6p1t2fXu9mBvE3oN5HCr3/SwLfswv0y7hvtYGOmVz0jvPEb4ZDLcP4nhgYmMCDYxPNDCUP9chgXlMCIkn5FBldzrV8e97oXc55HO/YEG7gk2MDLQhNxzK8jWT5W7rc1AswbfzsDtmMVt5sHALLv0kK3fHwoSZ1vaJ/v0cIiFPjmW4wzkcKvfnGH78bAc9NLDtwJu/dTDt7O7rUG37axED9taFrcNuBV46+FbD9ZqV48lHbK5T+QNWIjzclSe3dVW7rZMPWhPOFGAXdEF9hIcVYajzasAtzovkTnZKYdbIFxBt5p6+O4H3crddgJugW87dNvge3ZSIbMTipgVX8TspALmpeYxKyGfOQmVzE0pZFJ0JLP9/s43xi+obE+lh2KaL6djaPUguzOQ2svpZJyPZEXCesYdncqx1L/Dzzn8/HOu5n7/TA4/XTTz8095QD6QwxUyOPNTNDGtP3K89p98bdzOrvj32RC6mQX+K5jt78LsoPd5I+44q0xZrDRUsjK5lrWJNWxMqmZNUhnLUuRM5DSrjedZmtTEwqRC3jBYWJqRz7IMKcSR8xM9aA+0OwK3ALiCbjXXZRSx3mzG1VTC6owqFiZH8Zy7K7NCZrGveieRLYfZmbSLaV7fs6W4inWFVYz3/Jy1/psx1LjRdimJ3HMhJJ4KoKQ7mU8yVzMlYgLTE99mdWY4H+VX83F2M1uN9WzKqmNpYiBPHJvI3V+P5nnfNYz1XcboIy8wL2oln1eY+Nh4mq3mBraZ63jHXMd7ZgHtKt4zV/JedoWmd7MrNfhWmdv6qX88eTXgdoZvPWirXQ/cDtnbA5TeCHjvyhVX27H8Rr5V/rZMVXijpgJs56kHbbXvLWpEyRm+9c622gW+9aCtdn0+97fFTSjgHuicZKCTEmfAlm9VhKOm5HE766A8mixtdjgr0U5LdCclzrXvKpNbP/XwrdxtNfXO9rWgW+VyXxW6be625mpfFbr74DuophWRAmz9VLAdWnsGvcLqzuKsiLo2RPoCHLX3FeG04eBy605KnE9L9LCtdpXFrYdttSvoVlNBtwJt/VTO9i+BbsPJC2hyKr+RMhyVw62dkKhTkqs43JmnO8gSaYB9tRzu/pXvqvxGzWtBt8QGatGB53rIlYeXp88jKSIFp9pY/9HHTJg7j0efH8u4GTP4bN8+EnPzMRWVcsjLj8fHjOXBx0azauMmJs6Zw7u7d5NUkE9eYwPuwSEsXLKc//iPXzNq5H18t+8QXuER7A/w5wcfb9xDw0jMtlB//gLxWWaC4hMITkgkMs1AXk0tJ7u6qTlzlgRzNgd8fPnWzR2/6FjKTp4iu6KKE0YTIQmJJGdnU9PaSlx6OlGpqUSmpBCelKTNaIOBjKJi7d9zDwnlR08vZCbn5Gr/XwHy5s4uGjs67fAtsK0AXKZ8n+69TMuln8gqLfv3he/Y+u+JrZdHlx4kCXzLXvsNCW0eHG2MYGHYAQ2wb/RKZ7BnInd6JHGjZxY3epm5S+BbTQAAIABJREFU0zOeoV6xDPJN1uD7WuCt3XJfB7414PaRYhtHDfeVEpw+3eOex/3u2dznH8fdIdEMC8hgsE8pI33z+ZOviQd8ErnHO5b7fJJ41D+LP3tZ/+49wTnc75fBfb6pDA9KZHhIWj/4dgDvIDOq+EYP347gLfXv14dvcbr10G11ufXgbeGRkOvDt0B3vyKcsBye6Kdsa/17mNx2W91u/XwyIkdzvq8F3nJW8kvhW8G2firw1qIAo3LtrZR6h1vt4m7rgVvtevCW3Q7eNgh3gO7oQq2FclJMAc5STrea13K89fDtDN2a220Db+s9t6PbPRB8z0kuZE5SgQbcs5NzmJdmscF3LQsNdcxOzuLZ6P28HLaZjxM/JrsunHOdWfT8lEbvpVwu99TQczGfvJOBRJXtJ7s5gA5y6MFCL9nayUn2ySAS672JrDhOZPERThR+jVvO2ywNHcfKlNfZlPoWrrEf8UbCVyxICWCJwcjy9DwWpxSyJLWKZaYm3kir5Y20SpYYynExVeGSXsfi9EoWZJZoWpJVrpXgvGmoYVNSNW8mV2o53JLFLVIgrnfD5S78ao63gu/1GUWsNsYzLmAXj3st54WQdUwKepfpbtt4N2IXETVuhDbuY07kZlwLDExLjGSm/2b8KvdyujuOy5cq6blcR11HKl6Fe1nuM4+PCvbzUW4aO1JL2JlUwg5TMdssVayzNPFWejkbs028GvQ2D+17kYf3j2W8/xusNX7J7iIjO0xy431SA/B35IzEXM375kq2myt432yDb0t5v2xuAXA9eMt+PfiWEhwF2/qpB2/Z9fCtd7jVbj0puTZ4/xL4FtDeW9DA506SindHXb0UR+BbzkoUbOunHrxlvx58a+cltnvuAaFbOdtOAN4PvMtPc6js1C8C72Nlp1ByKz+NgzTX2+p8K+BWU4G3TO2k5BdWviuXW011UiLzX4FvPXCrXYG3TD14y+4M3vJ9ffA+a73hVrfcMu033H2nJeJ8K9jWTwXeMhVw66eCbjUHcroFwBV4y/yl8O3cOqkHb638xgm8Bcb1DrfsAt/64hu165snJYnEfsdtu+fWw7Z+Vy63mgq81cyVCMHWC1pEX37zaZZv2caSdeuZNGc2L0ycwLjXZjJr6XI2vvchn/zjS6bNnsfazdtYvuFNpixYwDuf7Sa5qICWy5fIKStn09b3+L9/9RtGP/YM335/kDlLXRj10F+4bcgQRt7/AFPmzeez735gwszZDL37Xu4cNoLh9z/Aa28s4bCPH8f8Anh+wkRuv2sIt905mCF/up9tH3/Cth07eW7cBAaPuocXJ03hs2++5eUp0xh23/2MfOAvDBl1D0PuvpdRDz7E2Fcn8dqixdwx8m5uu2sIdwwZxuQ58wiKjafpQgdtcs/d0eUA33oAd4bvvzz06H9v1GBszfck1u8j0dZGGdNwRIsMjGuS6MAjxDcdIbHhIIkNh0hsPKwlmsTWi8N9hMSmIySJGg+RWv8Nsc37iTp5hJimo8Q1eRLT7ENiw35Cao7hGufNKN8kbvJM1wpw5HxkkLekm5i4wyuVO71TuUNcbx8DdzlpsK8RB/kYGWKXiaG+zkpnmJ8U3jhKim5G+Pfpbt8s7vWRops07g5MQ9JJRvrnaCcl9/pn8Cd/I/f6pfEnfwMPBGRwnzRTBmRwb1AW90kySUA69waZuDc4vX8Wd7C17EYV4PxZy9/O1DK4JYdbJJGA1iIcMw+GmK1xgFpMoEQFZvGQSCvDkUKcvljAh0Mc77gf0fK4+5JKpATnUZ0eCzXzWFi2
XY+HCVBbeDIsTwPwp0/k86SU34RmaRXvz4Tl8XRovqZnwyw8HZTDmJBing4s5pmQMsZElPFURAGPScLJCTNPRIqyeSw8k6ejc3haUkvCs3khIoeXwnIYF5nPi5GFPB+Rz7MRuYw5kc+Y6AKeOZGr3YZLec4L8h2Ty9OxuTwRY+GphEKeiS/h2ehixkYW83JYCZMiK3klIofxJwo0wJaTklcEpGOKmBxTzJQTJUwOL+SV2BJeiC7kpZhC5KHlC9EWXonPZXxSHi/FZfFyvJkJTm63ap3Uz1djLExOzGNiQi6vxucwKTFf08SYHKYmFDBDOyeRXO4+aUU4CXLL3SdJJHktobCfZiYWoTQrsQgHJYm7XeygOclFzEmySk5OxO2W77nJZbyeWs68lCKmp5iYFO3PjOBvWRG+h/eSdvJN3id4Vn5FSM2PhNd8z4ma74hv2kfK2aMknT2Kb9XnfGP+gA+StrI18R3ejH8P1/hPcDnxKW+EfMSSqF0sT/meLYXBbMmNZY0hgRWpks2di4uxnOXyiDKtgmVplbgYq1lmkEeVFbgYK1hhssrFVMoykySOlLAivZzV6RWskeIbQwVrJCLQdHWtkZhAUwVrTZWsy6jSdoF0OTNZK7L97mquZUFWLlMjd/HM8aeY6DOR9Yk/4BJ3jMUROzmQs5fUU568kbiJadHv84LnRj7J+ILSC4lcvlLEpcsltPRmEtNwGNew6Ww3fcunOXl8lNHIu6k1bDeVs91cyrbsUjZny+lIDVstcmeexLSwLUwMeIn5EbPZmPI9281m3rWU8I5ZGihrrK639miyivezdbJUIcU3en0oRTg5NQ5yjAjsiwVUUYFSfrMzt9pRWvlNDZ/o5t/zdBGBeQOX4GgnJbbyG60Ap6COzwrrHaSV4NjSSvpSSvpuuSUSUBXffKEKcIqkAEciAfukL76R3Xq3bS290e645XZbOykRZ9umEpXD3cx3JUrymzyg1JffyE23VVKAI1nc+7THkvJgUqfykxywq68Y56Cklmg6iTonOVRxksMVJznipKMVks99yq5jlaewP6KUchx5NFnZX1omt5Th6FXdgqddp/GucpRP9Wn6EksktcSaze0cDehX41h+E1jTgoPUw0ndDK5tRT2WDKmTyvc2+h5LniGsXmR9MKlmuFaGcwZ1UhKhRQK2EtVgk1aCI3BtU1Mb0aJGR8U0CWj3SW63VemNzPimsziX30jxjQJs/VTRgGomNZ9FSyqRAhwlWw63ZHFredynLyA53Ha1XCCtpd1B1seT55EpJTh9sj2kbG0nvbXdXoBjL8Jp6yBTJ0kpyVaSB5FaLGAHFsnlFp3pIEfU1qfcMx3kagU3krNtVd6ZDvLOWpV/toP8s51Wnesk/1wnBec6KbzQTf55+b2d0vZuLE0nWbplK4vWrWPy/Pls370bz5BQPtjzT15b+Abrt73L7EVLWP/2uxp8T319Pu/v3kNqYSGtP18hp6qaNVu28atf/yejn36ONRs2cu8DD3LniOH8+eGHGX73PQwZMZJZ8xfw18ef4Ff/69fc/+BD3D54CLfeeReTZrzGwmUu/OHWQdx+12CeevY5fv27/+LJMc8x+bWZ3PPnB/i/fvW/GHXvfaxat177/t2NNzFu8hTuGj6C//j1r7n3Lw/ywqsTGHr33fzmd7/j2WfH8Ic/3MTNg25nzxdfUd18ijO9vTS2d2gOt3ZeIvfdPZe0UxO5DZe99eJPnLl0hZziUh787y7Zia/bT3LTIZKaDhPbcBiB77imY8QLfDccJqHpCMlNh0m2gXZ8wxFi6g4TWyf53Yc1IE+sP4hB/l6zB1K2E9fkRnyjFzFNnqTV78XQeoCdBSH8NVRaKyXRJJ3bfUQmhyKcO33kvOT68N0H3sYBwNtkB++BsrdV7rZMge2BkkkcIgED+2dvSwGOPn9biwT8/1nxjWNKSV86SV9KiYXRYbk8Gmzmychc7dHk6NBMng638Ex4Hk+H5fFMeAHPCYRLBbw43FEWnoww83SEmeciLLwQnssLAuonCnn2RAFPhmXxXJSZsVFmXjqRw0tR+TwflseLkcW8GFXA2Mg8xkTl8uyJXJ6JzuXZ6Byej87hpRhpkcxjXEwR42OLkIztl2MLeTmugHHxhUxMKGJ8TC4TYiSXO4eXYvN5Mb6AsXEFPBebz/Ox+bwUW8S42GImRBfzanQOE05kMzk2l0kxspt5NTqbSXG5TIjOtpbbROcx6USupsnReThrSkw+U2MsTE/IY3KshYkC4vH5TE4oQOBb4Frca2vrZL6Wty2Z29MTpAinTzMSrdA9M6EQZznAdmIhs5XkvMQG2Qq2NfBOLmbuAJqXUoLS/LQq5iUVMuNELFNCjjA1eDfzQj9gadQHuEbvYP2JnbwX+y67UraxJ/0DPjZ+xFvxH7DyxCesiPue5fGeLIkL5I3EIF6P92VmpCfzYkJYnW7irZxi1maUsDJFVM6KtApcJKlESmtEGogLjFslte4rtEeTfVncWhmOJJPotNqkc7Vt99vKzV6bXs7a9Aqbqtgg2ds24F6fWcP6DKmNr7Qqq4aV2eWsSglncuBCXj42lrdSvuctSxKzYg6yMe4TAit/4OucnawOWMrO1O1ktkTQ2ZsNvdlcuphBblsAO42bWRrvyiflyXxgOcXW9DY2pTew1VzL21kNbDU1sS39FO9lNfOOpZW3sguYHvw+E71fZmnUEt7P9OXj/DLeySu1nZnIqYkAd7WD1KPJDy1V6OUI2tXIt7rltsYE1vS53PoSnLxqdjqpXwmODr5VJKCaqgRnT6HAdn/9K8U31uxtx7bJgYpv1CNJNfUPJNWuIgDVg0k1Ve629dFkk730Rspv9KU3VuhuYl95sz2ZRCWUyDxQcdIuK2w3cbDcqkMVzTjryABFOFKMc6zqlIPcKiW9xCr3qlM463i1roFSt2u53DUtWj63V81pvKtP2eVTcxpnSQOligTUT308YEBtC4G1p+2ytk7qGyitedwKvPuAuw++wxrOIFJRgGqqshv9vFrpjT6D27n4xrn05pcW3+ihW3YF3GomnTqPNE0OJH37ZL8CnJYL6Bsnrft5DC1WGVsvoJeprR1RxgDKPNOBXmYNuC9oDZEWW762miqT2wrbAtx9ynOqeLcCd7sG1ipfW02J/BPlnZMynAsUnGunvLMHS/MpDb5fd3Vl4rx5fHHwIIXVNXgEBjFv2XJWb9rC3KUurHprMys3bmL6/IV8uOcfGIqKaOVnsquqWLV5C7/69W957KlnWbBkKYNHjOCRJ0fz2uuv88TTz3DHXYOZMuM17r7vfv7Xr3/DnPkLePLZMdxw8y089LdHeWn8BH73hxt5aozA+5vccNPN/PmhvzJu4iTuue9+/sf//A8GDxnGzLnzGDpyFCPv/RMfffopTz73HL/9wx+YMH0aaze9xeBhQ7nx5ptYv349o0bdw3/+7vd8/OluKhoatRvwU9092iNMAW15mClTSb7bLl3h3KUr5BWX8eAj/+1pJ0dJaHazwnbjES2ZJKH5mOZqxzcc0lxtzd0WB9zmfMe
J811/iHibEuoPkdp4nLgGAe/DpDYdIUXyvmsOklb7Gamth/koN4i/hiRxq5ckmqRzh4/ICt9/1KrgHaF7sI8Bq4wM9rFqiK8Ru3yuDt6a661rl9RDuAN8B/TBtx641W7N4P5l8K2iAa/+YFJcbql8l8IbvazlN1KCIwU4DrGANrdbYgGtkgxus0PpjSrAcS6+cYDvcEf4fkJc7ogcRkfm8liImSc1F1ymWXPAR9sq4OXPn4wo4qkT+TweaWR0TBpPxaTwVGQKz0Vk8FJEHmNDc3kypoDnYgoYG5HLS+E5vCIzKo8xJ/J4PDqPJ+IKeDHcwouRObx8Io8XoiyMjTDzSlQ2E6NzGB+SzjPusYz1TuZZt1ied4/lJc94xh6PYoxbKM95hvKkRxBP+4TwjH8iTwcmMfZEBi/GW3g+LpsxMWaej83hxZgcxsXlMzXSzPTIbF6LyWNadB4TIyy8GpnDRIkPPCH33cW8GpvLRIFqUVyOpklxOViVy6T4XKafyGdmdCEzoguYHlPIdLnnlkjB6DwtpcR6SiKV71b41kO32q8G347gXcTsRJEVwO3ArQNwge55A+j15BL0mp9cw/zkOhYkl7IgKYt5CXHMjg5lVlQAsyMDmBcZzIqYY2yI/4H1CQdwjT+KS6wXLglRrDJkaakjS5KrWZpazvK0Apal5LEspZAVWttjBcuSy1mWVMGK1FpWGKq1RsrlRgFvR60wlWOVpJZcG74lrWSgG24B8HXp4nArVbIhs5r1GZLdbd3lW3Zpr9yQUcXmrGrezijhRY+3eHz/s2w0fMsHxSUsSEpkVuinfGXeQWGrB0FFe4mo20d+Wwjne41cviiPTg2a+eASuoxlKXvZmpfOpswGNmTW8f8wd9bhUZ5p2//23e3Wu/Wtbt0olAIt7oVSpAY1SgsU11Lci0ORKhQnaJAocZ8kE3d3z8QnnhC0/X3HdU+e5MlAZd93j/f7/jiP636emQn989frOO/zXJiYw5KkXJbHlbIyspbV4Q2sjDSzLrmFGUEGBh6eyqhTY/kqeBtbU0LZmCzNlOLn/n3wFgDXg7ecreG7LZc7Pr8dum8YE2iB781J+Vhk2XpvSS5A09akAkTbkgotao0LlG23VL2LpPhG005VfFPUse49rZj2bXcx+uIb7fxb8N2hcVKXvy3wrcG2fu7J1LbcHacevOW8P7OU/QLdmSW/n8HduvHWIgEtsx2+j2SXcSS7VMkmpwxrtYG3bLpzyjmuynC02Q7flmzu3wfvfxe+raFbnu0Kq7AvrLouItBRtty6LG45OxdW6mQB7/OqBKcdvFU0YLEOuLUtdxt41+giAS3xgG3lN1o8oCrAsbROahnc2tSX3/wRfEtSSUBr8+QfAbb2uQbc2hTw/i341oO3nK8rwqlqvA6+LZvueiLMDR0UWd2IJgXfksldrckC3jE1TWiKq5Ftd4MqtdGgW6YG3jL10C1na/CWZw2+NeDWTw2+k+saEWU0XiC/5TJJ5ZV8sWQpn8yezaiPP2bXvv2k5uXj6OnNlHlfKvj+fOZsBd4C4gLfa7dtJzQ1lapffyEuP5+5y2XzfRuv9+nPZ5On8PhTT9Fv6FCmzZ3L0OFvKvh+d9wHCr5vu+sffPLZ5/QbNJjHn3qaoW+OQD676977GD5yFOs2beae+x/glW7dGfn2Owq+//LXm/jnw4/Sf/AQHnj4EQXs+44cYdioUdxx7z18PPFzNnyzjWeef477H7ifDRs20LlzF+655z627fiWnJJSaq7IZtsC2wLamv6/gW9v0wm8S07iU3Jc2Uxk0x1YJqAtcH1YwXeAyQLbAtxt226JDyw5ikEuVpYeJ7jsOH6mvQQU/0CoaS+hxccwFh0nrHgfniY7vgxyprNDIP88G6G23o/YRVrq3lVkYKjaeCtriYD22TCdraTdYmJtL3nSLoI22Xe0mqh2ybbim/ayG2ma1PS/WXwjNpL/TPGNWEpi2qSHbO0sthJtwy2zV5sEui3q7ZlAb+9EernG0s81jv5uYhdJpLd3Aj294+jpFUMf73h6+6QwODCT3p6x9PWKZZB/AgN94+nvE88Av0T6+cbR1z+SAX6xvOGbzjDvLN7wzmawdwZ9/ZLoZYynpzGGYQLePom85Z/KCN8kRnjF87aPWATiGXLSnxe2HKXLzgM8s+5buu08yKC9p+n93RE6bfieF77exUsbv6fHtwfpecCRHgedecPZyNsBiYwOTOYtQyIjA1N4yy+JMYYUxvhnMdovizH+2Yz2z1Lb8JF+6Yw2ZPGWfwZv+qYyKjCF0UHJbRoTnIK1xvln8pF/Np8YchkflM/HgTmM9U/nPb803g+UivcMxqrWyRTGig1Fp3Gy8W6VbLw/1KSzl1iA2wLeNwJui6Wkfdv9aXA6nwZZNCE4gxsqMIcJgfl8bsxnUlg+kyLz+Fz81pE5TIjI5lPZSkdkMD8yjbmRqcyOSGFaWBJTwlL5IjyTiaE5TAjJZaIxjynGAqaHFTEjvIhpYQVMCcljSohsu3OZHp6nLkpq0D0jPAulVuieGZGNRZK/nak23bLh1mtOpNS/ZzNXtttWG+924NbA2zK/jMpVMC6wLt9ZEC3bcIu+iilkUUwx8w0hvL7/c3oeGsSikN2sS0hkisGfT9y3sDt+A02/GElodGFv4g62G9fhnnmcgkYj5l9jCS93YqPfGqZ4rmeW53FmeXkwPcCNWZFuzIn1Y0FsLCti81kbXcqamHI2pNTyodN++uwdy+cei9mU7M43KemsjsllZWxxezygLiZQHw8o5/UC3EqtW27ZdCu120v0Pm45W3u5LX7uAjYn5rc3T7ZuubXNtsxvkjpKiwXUT9U2mfLfKb6xFN5o5Td/pvimA2jrC28ySxHwFnWIBvwfFN8o0M5qbZ/MbreSaJYSmRZLScfKd6l/11e+K+AW8G6VVoKjn9dtubX69/xKBLo18JYiHE2qDEcKcXQ6WyDV7+3bbjsBbqUqBd43gm9r6LaAt1yirLpO2qZbpoC3ZiXRpmYl0abH76SUaJck1WxtnfzfKL7RQFtmUJss0K3Bt1FZTKTu3VL5roF3SGU9mjT4vn7b3Ui4uUnpt2re9dtuBd3SPlnddJ1iVaFNs7KVKNgW4FZqagPvRAHvVvhWtpKa5jbwbreVSEOltEk2k1rX1EFp9c3oldpwgTQpuGlqUfCdXF7FlCXLGD97NmPGj2fn3n0k5+bh6OnFF3PnK+gW+J6zZJnSu+MnsHrLNkJSUhR8x+fn8+WKlfzt5tvo2XcAk6ZO47EnnqD3wIFMmzOHN94cwT8ffkRtvl/p3oPb7ryLzq905dEnnqTzq92YMWcuX0yfwZ333Kvgev3mLdx9733Id98cNZoXO3fhzn/cwyOPPc6zL76EwHuXbt3Ze/iwgu/b776bDyZ8yvptW3nuxRe49757Wb9+PS++2Il77r2frdt3kVtSqpJPzJcuUyX2kktX21R18Spm0SWL5aT+6q8kZ2T/72++3U2n8Cg5hU/JSQJKLRtv8XeLHcWvSC5QihXlOL7Fx5CNt2y/jcqKIvB9jI
CS4/iXniKw/DSGkv34F36Hb/5e/ArPEFLpSojZntMlXkw3eNDJPoiHzkaojfejdlLzbql/F6uJ5unuYCmR7bbVhrsNtu0irvN0i8db83RbW0o0e4m21ZapRQNqU59Uop07OXXM3v6tDfd/u/jGVVd8oy+9aW2btC6+ec1VvNvi4W6XpfymHbgVbCtft+WypP6CZNvZI56enrH0dolhwPl4Bron01+8335J9PGLp793NIM8whnoEs4Q93gGuSUzyD2Lod65DPHJYpBPMoPVxjmcYV4GBnkYGSQV8D4ZDPLPYbB/JkP9EhnuH8Ew72AGe0ucYAJDfJMY6p3AG55xjPRKYJRrDL0PefDI8p94cfF6nvxyFd03/8Cgvcfp9cMhXvx6F88t2UrnpTvot2E/vXc70+1HJ4adC1UQ/F5gJm8HZvKOMYfRAZm8Y8hmeEgBQwzZDA3IYoQhm1GGTMaorXUK7/ok8a5XvNqIy1bcomTe803mfb38ZKOdwfv+GXxgyOIjQzYfyNlffNjZjA3MVKkmYwNSGSee7gApvLneWiLv9JCtP0tEoKZPxLvdpt/ecuuB+7PgDJSMmXzWpjQ+M6YyIThdvZsQmsNnYfl8FpHP+LBcPg7N5tPgfD4PLGZicBGTQgqYFJbDpLAMJoYmMzEsiUmRiUwKS2VySCZTwrKZGpHN1HBpo8xUnm3xbk8JT2FqWJquCEc23O3Z3FLnLpqtvNyWEhwtj1tgWy+xlsyXLXdENl9G5lgplwVRHSXfmxueqYBdwHthbD5fxeQzP0b+hyCBCWe38+G5jxjvMp75vpuY6bmPjxy3Mtd/Fa6m/ZRfDOF40m4mO89mxOEPme+4AEOBIw3XUqhviich15kfQleyxWMD61x3ssRzM9N91/Gp72Y+9dvDtGBb5hmdWRoVwbKocEaemM+IE++xJHoPOzKT2BhfwOoYaZus5Ou4fCX9hUk5W6eUtNlKbrDdVrAtwG0F3W2XJ1uLcMTPrW22LbOQbVZFOALf20W69kktsUQKcET/qeIbzUpiPTsAty4iULOTWM//ZPHNkexyrlNOBTatEi/3MVX3LpXvN4Du1tr3DqCtebnzLM2TUvl+SqmC3619z/+fV76Lz1vKcayB21KEYynAsZTgmHEprGqXVvfeCt2WMhwpvmmVSTbcetXiWWJRh4SS1ouSGni3RwPeuPhGby3RX5CUs7a91s8/k1Jyo+QS/UVKOVu3TmrPGnDLFOjWSnAss4kIc0dZ/NytNe+t9pK2DXertUSgO9ZKcTXN6GXxcjdet93Wb7iTa5qxVkrtBfRKq7VUvafXNavKdv3MqL+AKE0SThSgN5LT1EJqhZnpy1bw6azZvPPpBL7df4D0gkKcvLyZPGeegu+Pv5jKvGUrlPVEogbFdiItkDWSgVVQwJcrVvC3m26hZ5/+TJs5i3898RS9+vdn8vQZDBn+Jo/86wnGffwJr772Ojffehv33Hc/t9/1DwXYc75cwORp0xV8yzZ87YaNCr41O4qA9tPPPs9LL3fhngce5G+33Kqg/af9+3lzzBjuvP9+xk74lLWbN/HcSy9y7333sXbtWp577gXl+96weRu5xSU0XL1KZcslzC1XqL54lZpL19SUs/Zcd+VXGq/+Skpmzv8+fHuWncGr7DT+ZacILDumLlf6Fx3At/AQvqZj+JaewrfsDP5lpzGUniBY4Nt0mGDTYbX19is5iXeJLX7FLoSYzhFQeIyzmac4kuHCuVI/Tpa7sCnHjY+DzvOSo8HSTimV79I0aR/RDt12YTxhL1YSC3DrIVs7W1+g1J6f1kH3b8G3Hrq1s9hKNMjWT31SSSfnjvB9Q1uJc8fsbS0WsGMG93+q+CaurfSmp2tsWxKJPhJQzm2QrUsokWQSTcpa4hlDP9dYBrsmMcgjlT5iL/GNp493JH0d/elt40SvA3b0+NmOPofc6b3Pg94/u9N/rxuD9joxaPcZ3th3jgE2Lgw46srrB+zovvc0fQ/bM/iIM4P3OvDGHgdGH/Kg1zE/XjvD+5vdAAAgAElEQVTizetHfel51Jtehz3pf9iLIYe96PudAy8s2UP3OTvpunAXr67fQ9ft+3h11z66bfuRris2033eagYv3Ez/Lcd4de1++uyyZfC+8ww44EL/Q64MsvFm4CEPBh32ZMhROwYcPs2AQ2cYYmPPMBtHhh12ZqSNG+8c92b0EU9GHPFhhI1vq/x462hHjTzqT/8zvvS29aK/rTcDT3oz+KQvY5yjGB+cxbigdFULLxcpbwTdeluJHrjlrAG3fo5v3Whrm21tdoDtILFPtOvz4EysNTEkhUmhSUwOS2NyaCaTjRlMDBKlMik4mS9CU9SWe3JoKhONqUwyZjAlNJtp4blMlwuTITlMDclkangaUyNTmBKRokBbwXakvEtjmpTlRGQw3cpO8lt+7t+E7qgc5kVZNtoLxDai01eRuVynqBwWRucqLYjMRrQoJk9JzlOCQvnAbQ+rzi/HPnsXxwp+YJFhPqNOvssbJ97my4D5uJgOE1Zhx2LHmcwLXMh0n9lsCl5FbJkTzRdjaGqIpLE5ivproZQ2GMmtDiSg6ATbo1cwyXsyo85/wlD79xhk+zbvO6xnstc2Rhwbw4eO41gRc5QNiWmsjSlhQ3Il6+IqO6SUKOiOL+wA3qoIpzWf23q7rd9wt4F2a2KJlk6in9sSCxBZb7cVcLfCdhto69onreMBtcIbmfrsbf1ZiwG0nlrpjUop+TeKb/TArTbdmWWqAEdZSqxaJ2+cVtJ+UVIrvZGp32ofybJqnbSKBlTxgDntrZPaZlvmdbDd2jopOd1aSol+qguUOh+3beu5g5/bCr7PFVgKcfRTat+vv0hZhUNh+2VK8XZrEYG/GRNYWK02222NkzfYdAt4a3GA2myPBaxFO/87xTeatUQP3XJWlhJ1ifJ66P53i2+sQbstuaS17t1Y0dABvvXArT9rudxa06R+RpmbEemr3bWz9YZby+XW4gD1M6HmAiLLhcl2a4nl4mT7hltZSlrhWw/b2jm1rgVRet2F34RuDb6lLl6242m1jeRJvXuVmRlLV/Dh1Gm8/fEnfCe2k/wCzrl5MGn2XHXRUrzfC1evZer8BYz56BM27PyW6MwsBd+J+fl8tWIlN/39Vnr17s+UqdOVP7vvwIEKvgcOfYNHHv+XspbIFlvsJbLRls323fc/oLzf77w/ljv/cbeyoCj4vudeur3eU31PgP3VHq/Rb8Agbrr1Nv76t5vU5nvPgYMKvu964H4++Pwzvt6ymWdffIH7Hrhfwbd4vmXzvWnbdnJNJdRevkx5c0sbaAt8K7WCeO3lX6hX8A2pWbn/+/DtU3ZKAbah9BhBJYcxFO3Hr+gg/mUnCTDb41XlhEeZHT6lZwgoOYGh+BCGwr3qe2rzXX4G73I7vIvsCC+3VSC+I9GB6QY3ZoZ48qnhHEPcHehxzoUnzwYpq8mjUvVub4FvAfB/tYL3kw5yWVJnJWm1lWiQrc2n7SJ5plXP2keh13MO0YhutNm2rnV/SVd6IwU4+hhA7dzZKZYbA
vf/o+IbiQe0Bm1r2Jbs7b5u7ernnsB1koZJ33iGij/bI5WBXhm85pVID+9oep8PpMceW55ZsZ1nd+2h+86feGn2V9zVewg3P9eVW555lZtf7MO9g8bS/cvNvPS9HX12HuPZT2Zwf9fe3P3ki9zxdFduefVNHhy3iL7b7eiy+SCdN+/jlW8O8cr2Q3TZsp+um/fRc8sB+mzZz+tr99B7wxn6fHOOThsP8vz6H3ht+4/0XrOBJ8aM5Y6nXuLeh5/hn0++xB0PPcEdjz/DHU89z61PPsetT7/Anc915vanXuLOJ1/i4ccf5tFnn+TeZ5/m708/w3916cbdo8fRdfkGBv94hFe3/UyPbw7x2jeHeX37kY7acYSeO2yUev14iD4/HaLfj/Kd/XT/5iDDjrgxXjbkYjEJzeKDoLR2S8lvbLn1kK0/fxIsRTkWWcO3Hrq1s/i4rWFbnicas9oVnM7k4HSmqibJHKYZs5kemMHMwDRmBacyx5jCrLBkpocnMS0klWnGTGYa85kdamJOSAmzjcXMCi5kRkgO08PTmR6WzgyB7Ei5PJnLjPB8ZkYUMifaxNyogrbWSQ2wtdlhs/0HlhLZdmvgbQ3cC6PyaFN0rgLtpfGFLIkrUOfFsfkKxueEpPGJqy2fnv2M2CJ7rv2aQ8I1bzanzOJj/1G84zORD10X8fnpeUw/P40vPKbxVeRi5gXM4Ej+TnyrbfAo3KsiBl1yD1DUFMrlX7L55ddM0qsc2O43nwkOn/Ke22SGOY+j1/FBjHGcxKqUdUz3G88kly9YFmHLhqRs1idWsTG5irUxhayPK1DaEF+ItTYmSDzg9RGBN7aUWGIC9bCtnduKb6wsJXro1jbdGnxbA7c8aznc1tnbGnTrQfvPFN/oc7f15w6gnV7KPp204hv9vBFsq0xuLR5QZXF3bJ3UQ7d2trkBbFuAu5JjORYdt4JvPXRr55O5lb8B3FVoMYH61kmpfLduntQq37VYQP3sGBHYsfLdesPdbimpvi6bWx8PqFJLdFtuzVain3r41kBbP71L6hC1b7ctkYD6CEDt/N8pvtFvsLXznym+0UcEWpfgaM96yNaf9RYTPWxrZw26tflbwK223ZqlpLUA50bQrcF3UnUzSdVN1222ZdPdBti1F0izUroCboFuizJqL5BR26yUWXcBvbLqWxBlN1+yqOkihS2XyayqYuqipXz0xVTGfvY5Px85SnaxSdlOBL7F5y3wvWDlaqYvWKji+9Zs/YawlFTM164Rn5vLwlWrueWOf9CzVz8++2wSjz72L/oPGszMufOU7eShhx9Rm+/nO72sLlZOmTGTN0a8xX0PPKjA+s2Ro7jr7nvUBUtlOxHPd/ceygMu8D1wyFA+Hj+Bex/8Zxt8H7CxUfB9xz0W24lsvp945mnuu/8+5fnu1Kkz993/IFt37CKruEQV8pgvXqam5Sq1F69Rd+kXNeWsPTdc+ZWmq/+P4Nu35AR+4ts2HSKoeB+BhfuQi5YBVXZ4VDljW2DH2dwTKi5QrCeGokPqO8HFBzFWnMSv/CyupjPKMx5hPohD8VFmhDjQ+Ywnnc7689wZLx6xDeHZMzE8fjYSSTQR4H7CIZInHCIs2277cJ6yD+NpB5nt8K3Btja1rfYzVsAt8K1Btzat4Vu/2dbOGnxroK3Njvnbf5y9LZvuPyq++TPZ23+u+KYjfN9ww90K39bA3d8jEU39PKSZMpIB56MZ4ppEf+90evim0MeQxFAXI69v2sujH87kleWrGb5uIy+88z63PvQIf7nlNv5yyx385a77ubNzD7pO+5LXN/9I/2WbeLTPcO6490luuuke/uuWB/jLP1/gzv5j6LZsK52WreblVWvo8vU6uqxbR6e1q3lp1Uq6rF5JtzWr6L5qNV2Xb+O11dvpvnQDfZZvYNjKDfSaNIN7O7/GX+64n//6253c8ve/cdNNf+Wmm//GX2+5if+65Sb+esvN3HzLrdx8083c9Ne/cctfb+K2v9/O3/5+O3+5+Q7+z90PcGuXrjzz+UQGbdhIz7Vf02vlOnrr1GfVeqw1ZOlKhi9bzRtL1/H6nBV0nbeWET+eYrJfAmMDkngnNJ1xEgmoLkt2nB0gOzCd8Tp9GpSBtSYEpjNBt9XWNtx62J4ooK3TJGMW1ppizGaq0XIxclpwFjOM0hyZw+zQHGYKUKt32cwKzWNOWCFzwkzMCiliRnA+M4zynQxmhaUyKyyD2eG5zI3IZ15UAXMjC5gTnses0Fxmh0kudx5zInKYEyE17+2aG5mFtTQ/t7WPW/zbIrGVLIzsqEVReXRULouislkcncPSWLlYmcui6Bw15d28kFSm+pxlRdBcCpoDKL+Qwq7gTcz1mciKpG9Yle3LgoQQJgfY8rbLVr5MsWVGxG7et1vENNdVTHZYzMcOCxnnsYJ33Ray2HseNvHbia44TmjZAb4xrGGB726WxASyIDqSz3xsecdlAksS5jA/cAaTHVbwlcGJNQlZrE0sZ11iCZuTc34HuDXwLrLYSRLEr90xh1ueNcCW4psOUvXuHfO4lZ0kqfC6AhwB7hvBtvZOg25tWsO3Hrq1858rvin5w+KbvX+idVIrvNGmdfa2PB/OLOdI1v+8+EaDbw20tSlWkjZZwbcG3Pqpwbc1dOu32nL+LejWAFwsJTe8PFmks5QUVuNS0C7rHG7LczXuhWaVv60V4KhZLE2T7dLDtpw14NZPDb410NZPLX/7P2Up0YO1dtaAWpvyXjvrp9Y4KbND3XulFOJcr7YsbrXhli13u2KqLyCKNTehbba1qYds67MG29pMrG1BlPQnLCWpOvDWYFubGfUXEWXWtXQAboFvDbq1mdnQQlbjRbIbW8hvukh6RRXTFy/j81lz+GTyFH7Yf5DErBzsPbyU7UQuXYr3W8BbfN8fTZ5iuXCZnELNr7+SmJfP/GXLlef7tR69mTp1ukoa6dWnH59/MYVBQ99Qnu/3P/yITq90VZ7tiVOmIhvx+/75EN179lKQLZ5vefflosXqvcC3WFbEfjJi1Ghmzp6rfOJ/venv6uLm7v0HGPrWW9x2112MG/+Jsp08+8LzPPjQP1m8eDEvvfQy9973AOs3byUtv5D6q1dpvPZrG3ALfGsArp0FvpuvQVr2/4PNt0QKGkqOYCw5SIhpH8HF+1W8oE/FOWwLzrE9+gCHkg7ikGvxfhtLjhBqOkBYySGM5SfxKD7B6Uwb3LNtCS46jk3WOd43+PPgmRgeckzkQecI7nSI4lG7MB47a7lY2Q7fkci2W+wm1vCtAbc2NfCW+Z+Ab7GWdLpBxXtH8P7/sfjmT8C3xAJabbs16NbmAM8EBnvGMPh8DEPckunvl8XrgZkMCk1npFsory3fxe0vD+H+h57mH4+8wC1PduIfQ9/ixVlz6bl4KV0mTeahvn257dHHuPvRx7nzX09x0zOdeOz9z3hlwSpeX7iSF8Z9wm3PPc/tDz/KPx56grse+he3P/wvbn3kcW55+DFuffgxbnv4Me54+FHufPhRbn38UW577FFue+QR7nj0Ce567Fluf+Il/tFz
MJ1nLqTX6s10WbSSrku/5pVlX/PykjV0WryGzou/puuidfT4aj3d5q/lpWVbeGnJJnos3cTgZRvpM20+j/Ufys2PPcmdjz7O7Q8/zO2PPN5BdzzyOB306L+486HHuevhJ7nrkee5/bGXeaj3KIau2808ydgOSWN0aOoN4VsP3nL+RAfeAuHXgXdw5h+Ct9pw68BbINwavCeHCHhnMMOYzsyQdGaEpjEtLJUp4WlMDstgUlg2k8LzmBqWoT6bEZbJdPWbHKZJuklEFtMjU5kaEc+MsFTmhOYyNyyX2ZJ0YkxjVkgqcyPSkCbJeVGpzI1M6wDeAuHW4C3P16eW5Cjo/u/A98LILAXhX0VksiA8Q8H4srh89W6qvwuz/JYSVm/H4YidzHNbxua4/XyfFMWmaDNLo8wsislmUUQ2S+OyWRKXxHxjIAsCvFge4sfKqCAWxwYzK9aXscEH+PDcWtZ6ruBoylbWBa7mS8MBFsdEqmzvBRFhDDszlpEn+rHIsJQt0c5sSYzj66Qc1iSWsi6pmI3JSX8avn8PvAXAO4B3K4hbt1AKfGubbf3UIFttt5OL+VavlOIbFOH8p4pvfh++VUSglaVENt7Wm24NurVpDd+H/43im2PZ7ZXv+rZJOZ/IrULfOqlVv+vtJDc666FbO5+xspSoTfeNbCX57QCuAbd+Ohb+ceW7FOH8EXy7FdVcB9566NbO3sW1dJCpDh9r/TeKb/Q+bu383ym+0cP175314G0N3zcCb9l0q/IbHXBr8K2Bt0wNuPXzOuCWNkllLRF7Sbu0whvVOllz4bqtt7bx1qYG3xpwa1MD738HvrObLpLTdJGCpktkVJqZvXwVE2fPZdS4D1mwfCXrt+9UsK2B9h6bY2oDLh5w8XxL2klIcgrV166RnJfHvMVLledb4HvWrDk89dQz3PfAP3nsyae494EHlef7o08nqAuTN/39ZiZMmsygN4apz7q99rraeAtUS8SgQLf4wnv26atsJy+83Jm3Ro1h5eq1PP7kU2rzLZGF4vkeOGwYt9xxBx9O+JRNO7bTqUtnbr3tVnr06MHtt9+pPN/bdn5HfmkZdVJD33KpbcutAbe29a6//CsNV/j34TtAovxkQ112kICyA/iXHMa/+BgBxccJKrEhpOwgfvlnCCs4SljxHnzLd+NWdRDvsqOEFtgQVnQIj5K9eJYdw1iyH2P5MQwVZwgttSGy5ACnCh2ZHuHEitgznCyyw7X0NN4lJwgsO46h3BaX4rPsTbdnZZQjM4LPsy7GnU3xHswM96Wfp5GHzkZyv10sDznF8ohjJE+cC+LJc6Eql1uiANtKcBwieUofC2gfwTP2kddJLlBqksuTz0vhjSYpu5HyG72copA8bouiO+RyWzK6Y5B87pedo9rU2VlKcDqqy/loXnFu1floS/GNVoDTWoLzqkssluIbKb+xqL34RspvJB4wltdcYtqkim+k/EaTJJSoS5Idfdy93cW/3e7h7uMeR1+9POKwbLFlk92u/h7xDNDL06pl0juRAX5J9PeKZ5CkmpwPZ4h3DKP8YnjL3o9um/Zwz7uTeLjbAO7pNpj7h77H01MW0mPNTvpt2s1ry7bw9Cczubv/aO4ZMIrbB47mnncn8LSA95Yf6bf5B7rMXMLDQ9/jXwNG80if4TzcZzgPtWkYD/UZxsN9h1vUbzh3DHqTe4aM4u7ew7mv13Ae7P0W/xzwNk+MnczryzfRd9N3dN34Iz13HOT1HQd4dcvPSq9tP0jP7Qd5fesBem47SPft++m87lv6bN7NsK176b9sCy+Mn8k/h77LPX2HcX//N3mk/3CL+sl/g+gNNeX9o6IBb3JPv6Hc0/cN7u37Jvf1H82zH85g9Lc2zPKP4WP/KD4ITeQj1TrZ0cdtyeLWWUqC0/g0KI1PZQanqcuQciGyTUa5HJnO58aMDpoofu2QzDZNCslQHm3xaYsmh4gy2/RFaCZTQzKYHprB9DCLpoWlK9iWy5JyaXJKeDbTBL7VZcnW74ZmIiA+I0KUzvSINGaGZzAnPIs54dnMDs9q3YRnMDcyk3lRIsvZGrbnRWXTQXKZMjKbL6MEuPWSqECLvhIft2y/W7UoOo/rJZvvXJbFlPBVRB6LY1NYGBPHV1GJLInNYVFUDjP9XJnruYjvEtexxHMhG6NOsyMhio2RaayOLGBlfBnL40tYHVei6t5XxhewMjaHldGZrI3L5uuEXNYk5rEysZAFcUUsigpmhvsuZnvOZ6FxDnOD1jI/5AzLYiOY6necYTa9+MjuA9ZFn2BHUgpbEoqUn/vrhEK+TihmU0opGxMkHrBVqgRHkko6anNix+KbLUmy8W4vwNkmMYCSVKKTlN9Yy7LhtqSUWBJLitXGuz0W0MR3qUUd9H1qMVoJjjZ/SC1GKa2YH5VM/JhmQhXiqCIcS/GNlN9oUgU4GaVIJrcmvb1EztbxgPuyylsjAiUm0KIDmZYiHCnD0XQoSywl7bLYSCo4nG3REbksmf0nim90CSXWPm5tw30ip4qTOWZO5Zs5nl/BKZMZm1wTJwoqOJFXjq2WRJIvVpJKzkhySU4Zp/Mq1FnSSUTyLL7vs4XVnCkwqwQT2XQrC0peFcrLXWi2JJcUVuJQbMauSJ6rcMg3Y59XhWOeGaeiGuxLanAoqVGfn8stx7WkDrGUnM8341ZYg3tx67O8K67FSWBcynBMdaqN0q2gBi9THR4mqYGvxrOkDrciM74VjWrKxUqf8gblCfcsrcW3shHJ7PYqlQuW8tsafOR9aR1exdV4l9biVVaDZ1kN3hX1eJfX4V/RgKG8AUNJHcbyRgIq6/AyVWKorMe/olZNdS6vVc8BFXUYKhpUaY5KKSmrbWuYVIU4FbUEVdQSWtWMwVRNaHkDIaV1BJfUYCyrJUQKcGqb8C0zE1JeT2h5PeGVjep7YVL9XtlkeSe171VNqhQnqqaZ0Io6VYwTYZbLlQ1E117AWFaDfBZeWa8KciKrm4iSpJKaZqWoqgZiqpuJq20htqoRDbYVYNe2tAG5bLS1DbfAuXwvSard5XeVDSTXX1JS3zE3Ij5vdcGypomU+haSxULSdIW0xkuk1LWQJlvthktkNV5WU+A7taZZnXOar1o+q79IdsNFte3Wtt4FLdfUNlzbfGc3XiK36TK5jRcpaL5ETl092/YfZsqiZXw0fRbjZ85h/Iw5fDpjLjMXr2DNN9+y4dsfmb5wKV/MX8Rns+axftePuAWHkV1ZTWphKZu/30OXXn0ZO/5zbGzPMHH6TLq83pNHn32e517pxqdTZ3Dg+ClGjfuIV3r24eutO1iwYjUD3xzJB59PYsWGDQwb8zZPderEo889x8uvvcaqTZtYs3Ur740fz+yFizh80pb3x0/gxa7d+OCzzznp4MiUufPo0a8/y9dvwNXPn4XLV/BS11d54rnneaHzK3z02UQ8/A3UNDXTePUqNS2XqLt8lfrL16i/0qrWc8PVX9Rm/MIvkJaTQ5cef7LhMqjIlkDTUQXefmX78Sk5go/pJL7FthhMxwgqPYix4AjhRScJKj6OR9kB3Mx
7MZiPEG46TnD+EYKqj+NTeZqTGQf5NsmWAznnCSo7Q1iJDasT3ejh6saWNEd8BNDLj6gMXIe8E/yceoaVUfZMMLgx1MOPHq5GPvD1ZrzBh7d8g+juFqYsJQ/ZRfOYYxxPOUXy9LkgnrILuy6hRL/Rtmy1I3nWvqO0pBJtPq9gW4C7Xe2gbQHul6QIR1opnaLopCvAkfPLCrpjWqE7kpedI+l8Puo6qVxuge9WSQ63tV5VaSUC3NEd1N01Br0kj1uLCNQnlWhnSyRgLL3dr1cfjzg09VWwLcDdLutqd2mMHHgDSdqIpoFeCaroZpBvEsN84ulx0oueR13pfciRXntsefWbAzyzdDOdFqyiy4ot9Np1kIGH7Rl4zJX+x9wYcMSFgfsd6P/TGXrvOU333SfpceA0r8llR9vz9Lexo+u2vby8cidDdtrQ7+dT9PvZlv579TrNgL2nGbDPor57jzLgpyP0WPcd3ZduoceiTXSTbfbijfRat5Pu63bQZ/cphh9zZcQJd4YdPc/wYy6MPOWp9OZxN0ae9GDUMSd679hH360/M2jLz/Rb+x29lm2j19Kt9Fy2hcHrvuONfba8cegsQw+eYciB02164+AZhh0+x/DDdvTbtIs+qzbTd/lm+i7bwsC13zHqpxO8ffw8Q0+d5wNDNFL5/kfFN58a05hwA30Wko6mz0PSmRgisN1Rk0Iz0WSB7XQmh1j0RWgG1poalqmyt6dZTS0SUKYCbQXgAuEZ1+Vwy8VJiQTU/Nv6qfdyy1mzlGhlOB2sJXKZsgNwt8O3Bt4yFXy3XqTULlDqp/i6F8XksiQ6nxVxlSyKLGRpXBqLYxNYHJPJ8vgiFkSkM9PnDMt8vmR1yJcsCVjF9kQ/NsVksDY6l1Vx2SyPz2VFfDGr4gqUVscXYq01CUWsji9iVVwxXycVsSQ4jskuB5joMZ/xLlOY7r+DxZFufOy4gxE2rzHPfxU7UkPZnlzIxliT2nSvT8zn64QC1ieVsDExr003LMDRld6otJLkduiWaMC24puUQrZbyboI5/ciArX87e/TirCWVn7TNtugW4Pv4tbmSa2B0tTBTqL5ua+DbQHudEtUoCSW6LU/U8Bbg+8b53NrRTjSPqn5tmUKbFurLX/bqgCnQ/GNrvRGK7+RqY8FPJ5dxcncGgXfR3NKOFUkzZXFHM8r5aTAeCt8C3gr+BbIzilTUH2uoAr7omoEstXnBVXYyXOhpJpU4lhcg2NRNXZ5FdjnVyHFNw6F5TiaKrEvkjjBCvUdp6JqHPOrOF9Qg0txHfbF1dgXmXEy1WCfX4lLca2Cb9lmuxfV4l5ci1tJPa6mOgXe50216uwuwF1ch2eRQLPAt1TBSzZ3DV4llmd3gXFTrXqWpko3aZ6sbEAKdATAfcrrFHhLwokkmoitRNlMKupxF0CvqMOztAafsloF38Gl9QSZBMbFsmLGv0LOdRiqGhSQq2bL0moC1XM9RnMT/iYzFm93LcEC1hW1BJaZMVbWElhah/i/w8obiChvQKraI6oaCamqI8hcS1CNALPlMqRsr6XqPbJKni+oc0xNC2HlFqgOq6wjShJJaptVJnd4VT2imLoL6r3kdMfWt1ieWzO64+ouIPGA0kAZU9VITGW9ap2Ml3bKqgbLWdopzZJcIhtvAemLJNW1qM8SJYu79ay9T6q1xARK82RijfyuUbVQSlOltFLK+xRJKamTtBIB8Ittymq4qC5ZWqwkl8hpuqQq47MbWsisF8vJBXKbLpGtbCYXyWpoIbu+hdyGS+TUXyC7romCphaCUzOx9fLD1suXoy5uHHV245SrF+e8AnAOCMHRLwhbN2/Oevpz1tMPl8AwAhNSSDFVkFNZi39MAsfPu+ESEERKYRGGyCj2nzjJlh9+Yu+xUwRERJNaYMLBy48Tji6EJ6XiHxHDGVdPXAMCCU9MxMXPj29272bHnp85ZmdPVGoakSmpuBsCCYiMIiknD1d/AyccnHDxCyAlrwDf0HDOuroTHBNHXlkFUYlJHDxxii3ffs+ewzYEhIZRYq6m4dJl6i7K1vsy9Veu0nD1Wpu058Zrv9D0y69c+FWD7x5/rl7eWHyKoJKj+JUewrvsCF5lJ/EqO4d36Tl8S06pdkpj6X58TGfxNDnhW2JDcMVuDGYbzhY4sCfBFqcyJ/wrz7Al9iRjfRyYHurG0SxHTmSc5S1/X+639WZrkh0Rpp+JLTuAS8FJtsWf4zM/B/o7u9PJMYinHSORzO6eDp4McfGiv4eRbu6RPOcczb8connCKYannaJ42j6Up5S9pL3y3Rq8b/iW3wcAACAASURBVATfGnDrp6p4/x3wFhDvJHKMQosG1M/2C5TRdBbwdm4H7y7no7CotRBHbbtbN943KsKRAhyXGLq7RHeQVn6jzT8Cby2P+9+F7+vA2yuBAV6y5b4ewDXw1uZgzwTe9ElGIv+6HTjP89uP8dTXu3l27Y+8vH4PL679nuc276b7vjMMcwlhZGASbwQkMcA7jsHecYyScpnANMYEpDAyMJnhQUkMCYznjcAE+rsYefmAPZ1/PM1oxzDGBOd00NvGHN425rZJogI/CY1nnHcIQw/b03/nYfpt2kvvtT/w2qpvef3r7+m8cjuDbD15xzOSD/zjGOcXyzj/OD4yJPKhIYFxfnJOYIJvFIMP2tFz5yFe37yX19f9RM+vf6Dfxp8Z9M1+Ru+x5cPgDD6KLODDiHw+jMjjg/A8NT+KyOfjqAI+iS5k9HEnhn53kKE79vPGNwcZtPUAvbfs55Ute+n6wzHG+kSpNsnry28y+NTYrgnG9Db4/swoUYCi3wZvteGWLXdIO3gLgOvh2xq65XlKWCZ/Cr7DZdP92+At7ZN/Br4VcLdepuwA3a2lOO22EinA6SgBbk2SYLJIyZJcIuC9uE35CHwvic1jaYxImiaLFEgvic5lRWw5K+JLmRUcygzvn9kZvYQN4V+x0GcNG+MMrFPNkiZWJRSo3yyPy/9N+BbwVhIojy5gVZRckqxlUUgUU3328LnH18wI+J6ZAXsZc2om7599izUxh/khK5PtycVsjMtTW+5NyYWsT8plXWI6mxLzWnWDAhwpv9HFA6riG100oL74RgPvHSmFWGRpoGwvwWnfeLdvultjA9NMbUU4fwje6e3Abdl6Xw/eEh2oAbc2rcFbnveJ/tDTXcoBKcZRG+9SDmbpZWmkVMDdGhEolya1WEBtHpXLkqraXUBZK7xpnbrWST1w6896+D6ZV82pvBq1rVa17XllOBSUY1dQwdkCiRC0bLRlg306V5on2+MCHU11reAtm20B6yociySVpAr7vAoc8itUvft5ufhYXIOrqZrzxZU4F1fiVFzJuZwStbGWrbVstl2LanErrsNNYLq4Bo/SesvviqpxF3gurcddPpNn+VxttwWG6/EsqcfTJNvnOryL6/AqqsXHVEtAeSN+pfUElDfgnl+h3vmXNah3/uUNeJcJmJvxLq3B02TGr7wOv7JavE0SHViNoaKe0AqB3yZ8TNUKur1VXXwNgSpVpJGAIjNBlQLWjRgEvkurVV
lOoEC4bL7LagipbiK4qgFjZQMBJrNKIwmVzO3yWgSSw6vqCDfXE1RaS7iyhDQTa7YoSiIBzQ2EmOsIq21orXNvIsrc1HaOlO9UNhBb10KYbLsraoky1xNff4Eos0QFNhBb26zeJzVdIqamkdjaJuLrW9SMNjcQJ5vp+hYF3hINmFh7wZK3Le+rG0kwN5AolybrLlimtE/WNJFa30Jq/cXW7zSqszwnmOU3jWqrnd4gaSVy4VKem0hruICkkqizNFPWNakUk7SaJjLqmsmsbyG9tpmc5ssoAK9tQjbaOY0XyW26SF7zJbXZzhEIr2ui+PJVsuqakeccge/6FvIbWyhsvEBB40XKLl+juOmCUlFjM4UNzRTWN1FY10hhfTMFAun1TZQ0X8Qkf7+2gcyqWnJqGjA1tVDa3EJZy2WqLl+m7MIFGq5do7yxkUJzNRUNTdReukzNpavUXhZdo0m815euYG65TM3FKzQIEF++QmFFJeUNjVS3XKRRAFkHyvWXr6jvXPjlV6ovXFSfNV37heZfflUgXX/pCk2XrlDbfIGy6hrKa+ppuXqN5itXqRe7yYVL1LVcbvub8vct/4b8O9cQ+Ja/1fIrpOfk8EqPPwnfQcUSC2iDX6kN3uXH8K6wRXzavuVnCSg9iaHEBu+KkziWnMOz/BwhZYfwK9zL/vRzzAzzZ5y3M5vSHFUV/A8p9oz29Wawpx9TgzyYZnDhaQcjfztmZKrRFefckzjlnmZrvBMf+XvQ1SmAh89G8qBdAg/Zx/PP0+E8Y+tLF/sAXnUNp4tbNC84R1kuT4qlxDGKpxwjUZGAEgtoH8Uz4t3W6VnHaJSk8t0+sq32/XllMRGbSZQuwSSKFx0jebF1s6223LLpFuB2jrZINtxWaofuGJVg0sU5mi7OUR2kbbm12fV8DK9aqZuymQh0W9TdJYYeLtFKGmRrU9ts66cllzu2rQhHK8ERe0kf99g2yZbbWmIt0QO3bLktskC3Am/ZcHvGt6t14z3YKwFNQ7wsLZQjvJIZ6ZFALxtPXvnhHC9tteHlLYfo9o0Nr2w9xLPbjvDqfmcGO0UwQqrf/TMZ7J3GEM9khnsnM9I7lRGeKbzlm8ab/ukM80vlTf80BjpH8cp+N7r85MQoh2hG+eR0lG8uo/Xyy+H9EKl8j1Db9T4/nabPzuP02XqEXpsP0nvrYbpt2qsaLkf5J/CuMZV3glN4JyhFnd8NTlXn90PT+SAgnv7HXOnxky09dh3nte02vLbtkEo16bnLhsF7z/CeTxpjA/MYG5jL+waL5CzvxgXl80FwAcNPedD7h6P0kv8R2HmU/juP02v7MV7dfpSeP9vzsW98W+mNRANqqSTW8zPJ3A5OU/pc2UvEYtIu2XbrLSV6K4n+rABbINtKAtyaZOM9vU3trZNtJTgqi7u1+EbKb3TqUIAjrZN6aWU4qu5dKt8tte+/B93aZcqvonLQpEUF6qdYTBZbSZoq9Voam8/SmFwWRmSzMr6QVYlFLIsqZXl0NYsis5jsd5Y5gRs5lvc164IXMMN5LesTI/k6sZRlseUsjzexMrGAlXG5rJYc7vgC1sQXtiuhkDUJhaxNKOLr+BI2xFSzLi6fTWnFrEmO56toZ2YF2zDd7ycmeS7h4/NjmOw1ke3p3nyfaWJbYiGbEjIVaG9OLmFLagEbU5JU6Y0U32gS2NZL8ritowG1ZBL93JEsfu5C9NttOetB+9vUYjrKxHdpHSXWkh/SivhBZrpFPwpwizJMSj+lFWORZdO9W9onNbUW4fycXoJF1iU4UvNepqRPLLmRl1sllmSWcTCztIP0UYFyVpaSGySVaAklMrWLkr9pJ9GyuGXTrUkrvtGmxAEWVHOmsBaHwhocskrZExDJvsAoTiZk4ZBbjgC5ALfYRkQORdU4Fdcq2D5fUo+9igY0q622U0E55wsrcS2WjO1KzudX4JxXjkOWiXPphThkm7DLLuJYYgYeJWZc8itwl820AmmxiNTgnF3G6bgs9hmiOBSaoIDZu6RWWT/EIuJVVqesIU5ZJuxS89vA2kssLCn5OGUW4yHwn1bA4ZBYTsWk4plXRlB5PQEltfgWVeGcXoh9cg6e+WUKlv1KqzFWN+Enm2stFrC8Vn3mlV+Gc0w6XumFBJfWEFRRR2BlHYbKOgXTkh5iMJlxyywgwFRJmLkBY7lstGvUObSqAb/CcrzyTJZGSYF5aZesqCOiqh6v7EJsI2JxSUojrLSKyJoLRFQ24JKUjWN0Cs6xqfhlFykwjmtoJqJGflfX3iZZ06haJCPN9URU1hFbJ9aRJqIqa0mqv0BSQwux1Q1EllcTUlxOcGEpyfLOXI+0TibUNavPpX0ySc6yYc8tJrailozGS8SUVhNWUEZiVT3pDRdJrW0mpaaZdEkWabpCSq3lLFtrsYioWXuB2NJqQvNKSK9vJkNsIo0XSTXXk1HbpAA6q+ECBRevqI21gLS2sc6uv0Be0yXymi6TUdNEwYXLFDRfIafhIpF5JozpuaRV1mC6eJXC5ssUCoTXN1N+9ReyaxvVO5P8pqGFosaLFDU0EZGVh29MPKGpaWSWV1Jx8TLll65QfvkKVdd+oezSFUwXLmK+9gsVV65See1XKq/+QvmVa5RcvExJy0Uqr1xVs+LSJcpbWjBfukRhbQ1JeXnEpKaTXlhM3ZVrXAAF4IWVVZTV1WO+0EJxdQ2ZxSaKzNU0Xrmqts+Syd0kudyNTeq9qbqW0ppaiisrab56jaYr16i7eJmWa79w8VeobblIcaUZU4WZi1evceHKVQpKymlsuWTRxcs0CuTLFNC+co0msZm0nrVnAXv5e+nZOeqip6y+/89Z43cElh3Bv/yIKrIJLj6B0WRDgOkQklASIKU3pqP4lx7Fr8wGv7IjBJQeIbDkMCGmw4SaDuJXehqP0mN4Vh3hXPFRNsTa8Y67P93twhjh6cHmjDOElR3BvcSZudEGXnMPo5NTKK/YG3jkTBh3nYmlr3cEcyJ8mRLkwRtu3rxoH8TDdjE86JDKI47JCqRfOGfkqXMhPOMQyQvnY3jxfBTPOYbztF0oT9mF85RDFE86WmBbHwt4w5QSiQlsBW6t/EY/JankJcdotdWWzbYma9DWnjs7yXbbAtva7NIaE2jdOtkG3Dp7idS9dzvfqjbYjqV7awlOd9c4i59bZysR8NbDtpwVcLvGqoxu66hALbFEIgL1dhLtrIdtOeu93AOtvdyeAt7tkK3Btswh3okdNNQ7kaEeCYyQeT6C4c7hjHAM5S17IyPtgxhxxp/h54J40yGMUe5Sz57OaNUamc0o33RGeiczyjOJtz3SGO2ZykivFEZ6p/COXxpjPJIYdi6c4bYhfOSZwig/a6Uy2r9do/xTeSM4kTeDEhnmHc0I9whGukUw0iWUt1xCGOUWxnAXI2/6xPNOcDrvGjMZE5TGmMBU3g5K4+3gdDXfC8lkjG8CQxyCGWYXyGjHEN4ROQTztkMwYxyDed89zNJGGZTG2CBpq0xVkrNoXHA6HxgzGOsTwxinIMbYBzLWIYQPHcP40DGCD5yjGOcSz
WcGgerrmyati28sXm4dbBuvt5h0gOyQTL7QFJrFF62aKp7u0Iw2WVtLLOAttpLrNTM8G02zwrPaIgJnh0tSyfUWk7lW8YCavcQatjuU4ujSS7RinK/Ex91qKdHPDraSGEstvB625bw0tqBNy+MKWB6Xx1dh6ayIz2ZlQj7Lo8tYFlXGvNBQJhq+ZUHEVxzOXMNizy9Z6L+fjakpaiu+LN7Msvhytf1el5TPmjiLpNpdL4tXu1Blc2+MzWVTUi4bUwpZmRTNF0H7eddxCe+dm8P0gDmsTJ7O7ODZfJsTys7kUrbE57ElSeC7gE3xFWxNKWVLWmqHzXZ7AU7H8hsNvvWwLWd1abK1+EaD7l0C3ErtFe8acGv53Fo0oH62pZRYWUqsC3DkeXdau7TNtn6qrbYuHlCLCrSGbe3Z+gKlPiZQvNyHJY/bSvoiHIkD1KQuTLbGAurhu82znVvRMZNbl1JiXXyjtthWWdySaHKuqA7nghr2GqL4dPVWPl6+kWUHbTkRl4lDbgV2uRUKxOVsn1OmzmczTTjIdju3HMe8CpzzK3DIKsIuLQ/3ggocMgoJKK/HI7+c7S5+bDjryoGQWPaGRLHk2GnsU3PxLqjEq6AS94JKXAurcC+p4XhMGot3H+HDZeuZuvV77BIycZcNeW4xrgWluBdX4JJnwjY2lb1+obik5eNXUIFzcg5b7d05EBzF+dxivvUK4JM1m5m4YTs/uvvjX1BOkMmMY1IWW865sMrmNMfD4vDOL8W3oEwBs39xJYYSM36mSgLLazCa67GNTmTqyk1sP+lAWImZIFMl/sXlGMqqMJRWElBcjndOEZts7Tnob1QALFvmMPl9SRX++SU4JqRxOCAE/5xSImTTXV5DSFk1hqJSfnL35ot1m1n47Y94JKaReOEaRlMVy3YfYu7W71j500EOewYQYaokorSCuNp6wsuqiK6qVYAdXmYmsqKaaHMdkZU1yLPAdkJ1PdFlZuKrakkw1+GTlsUBD19sfA2k1jepd3EC6DUNxJvrSKiyyJhbhJ0xksCMPAJSsznuG8ghdx/807Is+dq1TSRUVJNSXU+qAHtVLWnmepIrakgzN5Au9pXCMrwT0vCISSKvsYnUqjpSK2uJyCkkoaSSHCnIMddR0HSB1MoasmoblD1E3mea68mpaSJPLCXmenLrGshvuEhx8xUOOLiyy+YkAfHJFNQ3k1Ndr5RVVas23NnmOnJrGyisa6CgrpE8sbWYyjnt5cuk+V+xctsODDHx5FRUUVDfQGFTM4XNzeTU1ZFprqawro7cunr1vkC22k1NpFVUEpWbR051LWUXBeYbVI17WXMTnmGhfPPzz2zYuYvT510x1dTSeO0aJbX1nHNzxys4hIwik7KM2Jw9h29oGALZJTW11FxoUdvv9MIi9d4QEUlEQhIObp6U1dQpD7e5sZnaCy1q052RX4C3IRgv/0ByCk0KvE+etVdlOfXNF2i5co1m2bhfuqrAXcC7+eovCsAF5JuuyPOviN/70q+Q8e/At7/pBP6m46ru3VB8gMCi3QQV/YixaDdhpgNElhwhsewg4eV7OZh9kPFGB150CuUFh0Q+NoRzIMuJoKojxFUcxKvEjvmRXrzkEsW9dsk8bBvJkycDecY+nO5e8XRxkQuQoTx6JoyHzkXziEMijzkkqOjAZ8+F8LJDCE85xvKEUwJPO8Wp+vZn7MN4xi5E/Q0LfEfzjEO0yuR+zt6Sxa3FAlpyuWN43jGmQ+vki47RNy7B0cG3BtnabANspxi6WEkrv9FmV+cYujpH/+5mWyC7x/mO0iIBtfm6Syyvu8QoSfGNXtagrX/WoFub1vBtDd0KvJWdxOrypAC3lzRItuv3oFsgXBon+/vItjxeNVGKjeQN73iGecUw3DOaEV4xjPCKZmRALG8FxDPcT9op4xnoK/XyCQzyi2OIfwzDDfGMDkxjuL+0WbYqIIG3AhKQDfUY/3jeC0xmhH98uwLi1efyHU0jAhJ40zeJ0QFpjDHIFjuNdwWqg1pr30PTVbrIuwHpvG/IUO2SY3xTeNs3RZ3fC0jnXf803vFP472gdN41JPOhFNe0JolYovzSLR7tkDQ+MCYzLjiJscFJao4zJjHOmKzefxCSgmh8aAYTlPUjhylhuUiM3xeBWUwNzmFKUDaTg7L4PCjzj4tvWi9IahclLZcl2y9KCnh/Ycxkik5TQ7Kw1rTQP+PnbgfvmWHtjZMaeMv8U5YSK/i2hm55/vJGRTgSGajP5laRgZbLknJh0nrDLc9LovNYGp3PspiCDloeW4imlXHiz85nWVQOK+OyWBadxspYy4XJeaHuTApaw/Sgiazy/5KVgbvYnBjIxrRcliqLShVrksysTSpmfVJ+B+AW+NagW5vrE3PYkhLL+oRU1idUsjoxm4/cfmLg4cm8dXIS0/3nsSByCpN8prI1LYhtSSa2JhSwLSWHzXLRMsbMloRKNidm37D05oaQrcH2b8QCfptchEgDbP3UQ7b+/ENqCRp0a/P3rCRa82QH0E4v5YaWklb41gBbPzXY1tJJ9NM6qeRIZsfiG5vWIhwNtrUcbutkkrZ0EkkoyZVLkpVo6ST/TvGNvvRGznKh0qG4Fofscna5BfDRknW8P38FHy5dz9qTzuw2RPNzUCw73ALZ4uTDvqBYNtp7sdXZjyPhyezyCOZbTyP7g2PYZwjnB69A9hsi+PqkAyfCEzgQEM6HKzcydcv3HAmJUVA8aMZ81hw7y15vI6fCEjgZlYJtUhbupdXsD45iwrINfLnjZ1YfOMHhgDB2nfdmf0AoxyPj+d4rgE1259nnF4JNYATHDZH86OjFbmcfZu/YzS5XHxwzcllja8eY2QuZvmkns7d9xxG/EPa6+7H9tBMLv9/L0p8OctQQxsmQaL4568xezwCckjJwSs7ELj4Vn4IS5bPe7eFH16GjmbVmM/7peRzyDWK3hw+nI2M5Gx3PPp8AvnN0ZeKq9azYcxCv1Gw8kzM5ExLFfndfDoiHOCwGG98gbDwC2e3siUdaNuEV1Xjn5LP2yDFGTZ/NjLUbOOljIKXlKhFlZmZv3snC7T+y+JsfWPPjfk74BGLj5YdDZAw2fgYOefvinpyKrTGUA57eOETF4J6Uwu7zrvhmZOEQFo1bTCLngsM56ObNd2ccWP79Hqat3YhdSKRqgowoLCUgPQeP+BQ84lIwpOXgm5jOIWdPbL0DWbBpB8u/+4lZ6zez7ehJDOnZKkEktqQCe2M4B1w8CMrI4aSnP66h0fjEJmPnb+S0TyC+ccnYi5c6KJyzfkEcdfVk7fe7sXHxIDglA3uDEc+oOLyi4znu5oV7eDTG1EzO+RpwDAxRZ0eDEVtPX9LLzZS0XGHR5m9Y8c1Oznj6EJSQjH9MPI7+BtyMoURlZqtp7xvASXcvfKMTyK9poOryVRLyi5izfBWHzp4ju7Qc75AwzhsCCUpOxpiSgndUNIbERA47ORGWno5/XDz2/v6cDzZyxPk82w8exD8mlqCEBM56eZFmMqlkkRPOjqzcspk1W7ayavMWnLx91BY7MjlFPQuU
27l7cNj2NJt3fYdHQCDRyakcP2dPeHwilQ1NJGfncuD4SX7cf5AzDs58t/tnwmPi8TEEk5adR3xKOoGhEbj7+LP/yDFWrl3P7n0H8fDxZ8ac+fx84DCxidJU3MyVX+Hi1V8VhF+48gstAttXfmmTPF/6BfW9rOxcunX/kxcufctPKctIkMmGkKL9hBb/REjpTwSXH8a/8hze/7ez+/yK87r3/v8P/B6cc59+TuJ0x2mOU2wnceLYieUiuci2qkVnCr0IJIoA0XtvalaXkEACFUTvIHodGDoDM3TUe7Ps92/tPQzMjJTk3PeD19rluq4ZllhLfNZe+/ruxWLKZ05QvpRPUNcZfpJXxj8faeOjhjYuzJ7BcD2d7okUmg3Z5Azl8155Gf9xupV/Lejlu6ebeelkJb/Kr+DNc+V890wn/yXCdlEv3yvq4fuFXXI1+zsF7fx3fjv/ld/Gt8928R0xXyhCtqi73caLRW1yq8gPCjvki5ZipfvFMx0Wfny2E3Om2tvi0BsTU/3t1dby1EkRvE2h29SaB29T2Da1ppMnXzm/euqkaQuJqRUr2pJZ+DaFbVP76sUejCxPnXzeyZO/K+7m9xeNzA/AEX1TWcA/Xuq1WNU2rXBbr27/uaQXk7dK+3hGmeUKt2nF+y8VGkzeqtDwx8o+3qzS8Je6Qd6u1vB2VR9vVfTIkP2ODNI9vFV+mXeqO3inppt3anpZU6vhndpe/lrbyTv1Hbzb0MlfKjt5u6KDd2t7pHeqOninsp33ajp5v7aTNRWtbCzTriofZNNzbKsYY2v5CBtKtXxaouGTUg0fl/fxcVU/62tFqNayoXqQz6u0fFY5IInxptphSfQ/rRD70Ifk/Z/ViHv7+byijw1VGjaKg3HqBviouhebSi32lYPYVw3hYMWxehhh83L9blkuUBwFX6Nlc2U/W6u02NUO49gwJutum9fiNvXND76xb7QM2qZVbtOKtmidnhO2RfhWiNMnlykvj6K8PLLieSvcLs1juD6HW8s4Ju4t4nj20RWmVW3z9nlhWwbu5dXtlS0lZgHcOnSLsajV/dzAbbatZEfnavA2hW1TG9htQAjo0hHQMcaubrEve4qdHVp29Y4RohnGvekcGy4F8HmRHR4l0SQPNRCl0RLSO0JQr47A7mlCehbY3TNDWPcYu3t0UnjvJNYixMq1Zoq4oUkiBjSE9IwRpV0guGuY7Zdb8a4vxqY4iDf2vsL7xzYT2H4JcchNXP88CQOi7vYs8b1XSei7TnT36pYS68Atxqbj3E1lAc1LApr6K7W3ZUlAURZwlXnQFv30ASPrA3DMa3Kbr2qb+uZhW/TFS5Km1WxTax6u/1bfFLpNrSl0WwduWZd7bAnrEoHmgdsUuo1bSsS2klWyJOBy6Db1/3cH3ywZq42Y7dO2LgF4UreIeHHy3NQSmdXN/MFGyTpXXzxSctiyK47PAqL5xC+Mt5zcedPBFfuIRN529uBTvxDcknP4yDuINWof1m8PxmZ3DA5RCXzqF8yrm+z42Hsnn3jt5C07lQzBRxrbySir5rUtdmwN2I19cAyqqFRUiVmEF1zg+MAIBdpRIg+eYJNvIDnny0gvLOaNLXZ87LmdLQEhbNwRjE1oJKrYZDwS0tiZksN72xRs8tzB1h27SLlwicLhUYKOnuBPW+zxSkhne0o2dkHhfOLmi2NIFDYBYZJvcpac2+gTgEd8Gt6pWThFxhF6+DjntcPUzMwTX1DEh/YqPMLjOFRWzQafHaxRqFFFx+GdnM5G3x18qHTDIyYRt6h4dmXvY3tSOi4RsWzw3M4mb38C0nPwS8rgM4UXa7Yp2F9aRee1G1yeXyRo3wFeX7+BzW5e5J46S+W4nta5KyjCorDzD8Y/NoWgpAw8dsewXunCFk8fbHz92eTuxc6kVOz8dvKRswr38CiCUjNY84Ut/okprLVXEJq5B+fAUOz8AglIzsA3Nol3bRxRBIVRPzhGWe8AkaIKSEAIW738cAmJJCAxHXVwOAn7j/KRnZKQ1CzsfHew0cWD1KMn6dAZKO/sRRGwi9c++IjInH2stXXEKzya3Rk5OGzfiZ2PPxnHT7EzPpktbl5yHJW1B9fAXSTu+5K9pwrYrHJlq4s77rvCZKvw2ymvizYoPomQpFTWbrXB3suXhr4Bue86PC2TkKRkdiUk4hcRiX9kFEo/f1wDAvEJ242DlzefOzrxmaMCn93RFNU0cOXREyavXiMwKobKljZq2zrYGR6Jk6cXO8Ij2RUXT0BktHxx8lMbOyKSU3Hx24F6uz/+4ZG4BQbhERTMvhN5OHh4YO/qxpEzBcxcvUreuSJ57LxfaBjhiUnEpKVz+mIxcRmZKL192OTgiMt2P1mpJCQ6lqSsHBnCP99mK1+W1C9eYWhyipScPTi5ubM9MJDw2FgCQkP5bMsWQqOiZLlBj+3bcfX2JiIunh1BIfgH7qKytgG1uxdKVw/SsnIZHBnlkdiu8vCJDNsiaJuYgvjDp/y/he+K+eNUiaomhkM0zxygeXE/1VePcHr+DCkTpQRpavi45Dw7u87j33WBty+W8WJePZvrqig0HEOzozOTdgAAIABJREFUmEHDTJ7cdpIwdJ7fl9fxbyJMn+mWJfx+ea6FV8/X81p+Md893ci3Totj35vl9pLvFTTxvbOtfP9cJ98u7OLfC9r5Tn4LPygwlgmUJQGLOvlhUScieIuXJUUFkx+ZhW/zwC36LxV2SebhezVsd8nTJ1dqcBd18vPCDsk8ZIu+KWCLVhx8Y+6V892Y+5XVirZY4TYF67/XrobuZ8O3+cq26K+sapudOGkK36bQbWqtw7d16DaNRfC2Dtxvl2kw99eyPlaUa3jHyl/LNbwtwnhlP3+t6ucvlRr+WtnPO1UDkuj/taKPdVU9fFSlYV2ZhrUloh3gw3INa8u7+aC8g3VVXXxYreWjKi2fVA3wcaWGj8p7+aiil0+qNXIF+mNxIE3VMJ9Um4zwSfWymhHWL/uwZph11UOsqxtiXcMwHzYNs+6yMMTahgE+aOjn0+p+Pq3S8Fn1gDzWXRzjvqluCNFuqNHyWbUxqIvgLe+p6mdL9QBb6wbZ2jjM5sYhPmsYYFOtCNPDbK4dZos0wpbaEbbUmVm5ZlxBt20ex6F5Qp4maVtvLPdnfujN82pvi1rcprBtas1Dt6lvHb5Ngdu8NYVv69BtvqptHb5Ngdu8NYVv87At+uaB28vsqHdx+qT1yZOmsTgcx/IwHBG2J83o8G9fJVa4rYkVb1PYNrWm0G1qA7om2NkxRHDXDKE9iwR36wjpEzW1R/G63IhNyZfYlWUQ3FVORN8I0ZpFgtq1hPQMENmvJ7RjmuBWHRF/I3CL0G2yu1dHUOcI0UNTRPYuEtF5hTjNIgkDi/JAHN/LpWw8F8iGs0EEdVYRq5klTnODeHFP/wxJ2hlStPPE907/w7rbppBt3ZpCt6k1hW7rwC1D96Bxhfvvhe7MwTmEvxW4Reg2rXD/vcBtCtb7RD3uZaaQbWpNYfvLkUUOWjk0uoQ5GbLNam+LbSXmQdu8bwrapva47io
mJyauWJw6aaq3bd5a194WVUmsFYjyf6Js4IietJom/uzqSejx05zrH8E9ORe3pD04hSfwkZsXm/0DcQ6PZpPfTtxiE3EKi8Q+JJxPPHz42MMHl/gUgvcdYktgKB+6+eAQGoVN4G5UEQmE7jvCkfoWDtRfZktgGL7Jufgn5eAYFMNaN3/C889z3jDHmaFxci6UkH72PB+L7/TdiWdcsgy3Gz39CM7cy5mWTuJOnsZmRzA+UYk4eAey3tmNrX6BZJWUcX5sjOgzZ7HbGUJhcydHymrxT8rANTwWVWgUtv67sPUPxiEglA2u28lvaKNyYAwRxj9QexJ+5CS1+lkuDo3hmZbFa+s+Za2jioj9h3AMCUcdEUNgejY7E9Nx9AtivaMLXpEJBCRlssnNF8+IeBIP5xGWtU/2A5Oz8IlOIipjP4rA3RyqqKHz6nWONDaxIzsH/6QUXHaGsEXhzqWhERqmZvBJSMV9dwzbIxNQ7diF0/YAvCNj5ervrqRUghNT8N4dKU9udAsOJTQlXc594eaJS8AuHPwCyDyZT3j2Xg6cK6agtpGYfQfxioglJC2b09X1tE3o2ZGQwheevmxy8ZDP+EbGoQ4I4XBRMcEJqaQcOkrcvgN4R0SRcTyPrkkDdf1a/GLi+GCbLTF79uMXFUvi3gMcLrpAwt4DeIWGszs1QwbopP0HZbv3RB4ZXx7i+LkL7M87jYM42MbDWwbdtAMHcfXfiXtQMEfPXaCyrZ2EPXv5zMGRmKxsart7WHzwkNyTeSTv20/yvn0k5OSQItrcXCLT0mQwFvPBcXGEp6QTmZbNqZIybn/zDTPXrsmAfa6iinMVlWQdPCSDcnBUDIHhkTIUp+87QEJmNkGR0bj57SA5Zw9JWbnsjk9k75HjnL1USkDobhIzMjh84iQzCwucvXCeuNRUzldU0abpJ2P/AQ4cP4nXzkDCYuNJ3bOPiIQk/HeF4hsQhIevP4FhERw7fYapuQW5R3t4Yoqc/QeJiI0ne+8+PH23o3JzJyktnZSMTPYc+JLtOwNw9fIiIiYOUdM7NTNHrnbHJ6fJfu7+g/T0D/L462+kB0++livgj54aV8LFarggxmJ1/KtvQKx8//o3r/7vqp2Uzx2lcvoQdfqDNM0cpH7xCGcXThI9cobPa8/z8qmL/OvhXjbVlBI5WIBjSzE/LmjixYJWlC0lHBz5krKF07RfOUDyUAFvlpfzHwXN/FtBF98+08HPL3TzcnEnL5y+LIP1d8VR72da+O6ZZr53VpxG2cz3iy7zg2UvFjTIz//x2VZ+JFa/5Wp3B+I5EcplTe7l8P23grcI4D8tNAbtvxm8z3Xz8+Xw/X8TvJ93AuWvrcL38wL3qxe6sSaOeLdkXPn+W8FbBHDrI9/NV7z/b8L3n5+z0m0eukX/L2Wa1eAtXqy0Ct5ivKZcw3slfbxfqpHeLenj3dI+3i8f4INKLe9XaHm3fIA1ohpK1RDvl4/wfskw68pG+bBiRO7tfq+yj7W1A3xQM8bamjHeLxf7wIf4qHqEj2pGWVs9xHvyBcwB1tb0sq627xkf1mmQajV8VKnl40otYgvJ+tohPpVheoANVf1sqtCwRax2L69ef16v5fOGQUT7ad0An9b181n9AJ83aNlQNyjD+KYaLVtrtIgtJ1/UiVrcA2xqGGBjk1a2mxsGkBq1bHkOW7Gy3TjBtvpR5PHvTSPYXB5lm6jVXafFrnEQu4bhv3vqpDj4Rm4paRqRq9vWIdv58hiS1cq3eegWfWXzOMp/sJdbhPB/FL7dW8WpleOrJQL/RrWS/034Nq1sm4dvy+BtDOH/KHwHdE4R1Klf1WWQdbiDuwxI3QZ29erY1TdMYMc0QR1XCBH1unsmjS9Pdk8S2KUloKeTXQNaQnuusKv9GuG940RqNET2DBPdO0t0zzzhXc9f7TYFb9GG900T2neN3aLedv8sUd3jxGuGiRcnTvZcI7JvjsCeBoJ6i4gd6SV+YJG4vrvE910nTqMjvn+QxIExEjTL+7aXt5RYB2wxFuHa4tAbcQDOwMwzB9+I8P33grfYVvK/Dd/WK93mwdtYpWSe561urwTvkQUZvE1h27w1BW+5um0VvEUQNw/eom8dvs3DtujLoC1rb4utJatOTFzF0t8P33mTVzGFb+vALcYF+mtSvtivPS1Oj1wkvryaD4NDONTeKfc1b886SPDeE3gnZLBe7cJWH188oqJ5z9aeLWJlz8OLz93c+dBZweeevvim5RB7PB/H0Cjec3ZFERbDZt9APnH1wScxgyN1l8mpqGatiydbfALZnXkIn9gM1rn6kVFZT82NO+xv7cArJR33+GQ+dPHAaVc4mz39cNwZgq1PAKrAcAKSs/BKSsMlLIaPbJR87uDKJ3ZqbP2CyCkpp2h4mLjCQnYmZTB07TbnWzrxiopng5u3/Byx8usUGIo6JJJNrr74xCRz4GI5wTn7We/pS2JBIa1L18jv7kURl4BPfCKbvH3xjE/ELSaWgPRMPCJisPXy4wu1J59sc0K1K4J9RZfY4rkdv7gUko6cxDsqgY1u3riGRuETnYjPrhje3+ZIcl4BXYtLHKmtxTctDe+YOLYoXXl3/UZKBofpvnKd4IxcMk+eIfnAMRTbA7Hz8EUdEIx/VKxckVX4+KHw2S657Qxke1gELv47+XDLNnZERKEOCiWvvJqAhBQZ2qNz9xORtUeOd2fmUlR/GcPtO+yMT8LGa7tcoXb2DyQ0JZMd0QkcPnsR14AQQlLSSTl8lJ3xieScyqdjYpL8qho+V6p5c/1nhKSmo9oRgEdwKOGpGWwPj8LW3Qv/yBiC45OIzszhCxd3kvfsJywhiYTsXKLTMvjM1h57Nw+8g0PYe/wk20N34xkUTFBMLCKMR6VlsFGcNBmfSOvAIDcfP2H/yVNkHjxExoEvyT1ylD3HjuMdFIxvSCjuO3aSc/gIUSmpxGXnkrjnAIUV1dz86mumlq6yOyGJi9U1VDU1y0AswnBwWATxKekcOXmarL0HULl7EhYVS0BIGLkHDpK9/yC7Y+JJTMvkaF4+bt6++Pr5k3cqn8WlK5y/cIGDx47RPzbGyJSe6ORkfIKCUHl74xMYiNrHR46DwiMJCNlNYGg4oZExePsHcOZ8MQvXbjA8McmJ/LMcP1VAaWUV4uh53x07CQ7bTWxiEkGhYYij6t29fYhNSCIlLYvE5HRKy6uJS0jhwMGj7BXhu6+fJ199zYPHTzEP3+YB/O+E7xwa5/KonV9+sdJwhLrpozQtHDfu854TK9/HqZk6ROvMl9TNHiNjMJ+PK4r4Xn4Z/5lXz/+X182r5+oJ67xA2sAF1tc08C+nuvl9WS37R47TNJPPwZE87BvO8tNzZfzr6Tr+60wTPzzfzI8vtvH9i+3851kRuM0OvTnbtnzYjWhF0G7lxcJWfny2hZfOtsoKJD8uMu7fFiFcrHqLFyZfKurgpbPmxIuTlkTw/nlhF78427msi5cLrRR188vCDn5Z2MYrK9p5paidX5k718GvzZ3v5DdWfnu+k9ekLl670MXr1i528/rFLl6/2Gnhd8
VdmHu9uIvXL636XUk31l4v7ea1Mkuvl/dg7nflPfzezB8qevhDRe9zdPOHCqM3KnqQKnt4o7KHP1b28iexJ7uil7cqTfp4q7KPt4Uqo79U9fFOZS9rKvtYI/t9vFMl9mxreLe6n3dr+llTLVbFNaypHmCNCNFVWt6rHuS9alHRxHjPe3UDvFc7JL1bLa5r+aB2kA/qhni/Vsu7NeL6AGtre1lb12dhXV0f6+o1RnUaPqrplz6p1fJJnZZPagdYX9PPpzX9fF5jXOn+RFQ3aRhgfYN2hRivzDVq2VCvZWO9lk11WjbXadlSPyhtEnMNWjY1DrKpsU/a3NjH5iaNhS2X+xFEuHZoHMK+QYRsEba12DcOYt84gH2jFscmrQzWjo2jrBrDqcncOIqGMXmsu7JhDFWjtXFUTeOoG8ckl6YxjMZxaTISR8O7Xp7A7fI4bk1juDWNy7775XFWNBtPo/RsnsDzsk7yatbh1TxpwbtlEp8W8eLiBD4tq3xbRCURnQWfNh0mvu2TPM/2jklMZPDunMLPgg7/rokVO7p1mNvZrZMvUgZ1TbBKJ1e2xeq2ya6ecXl0e3DXBLu6RFWSSUJ6Jgjp0cl+mGaSMM0YYf2jhPUaCO3SE6EZJ0IzRISoQqIRpf70RPROENErWqPIPj2RfQYLUZoZIjULRCwf7x7dpyNWM06MOAynb56Y/gViBnTEDmiJ1+rkVpO4vkXi++aR5QI1E8T3T8rwbTry3Xj6pLFiibFyiTGYy4olIoQPmCqViFVzS6JiSfqAgfSBaTK0VgZnyFyWPTiLuZxBUZFkzsoMucPCLHvMjcyyZ0TU2zaV/ptl/4hgVolkdI6VSiQjsxyw8uXoHOYOjsxxSBg1mefw6DyHx1YdGZvjqLnxOY6Nz1s4Mb6AuZMTi5gTJf/yJhbIm5iXTukWsHZ6UpQKXCRfv0SBNYOoz21UMLVAoWGJQv0Ce7s07L5YyqlRHRcNC6RWN5Hb2ElmWQ279h3EPzNXetPGkeC9B0k8dZagPV8SvPdL4k6eYV9FHadaukkquIBHYgaZ58uIPpZPYO5BMs6VckEzSn6nluB9x9mZdYj8yxpC9p3ENT6b4y19NC7doVAzRkpBCUG5R4k9VsiBsgbC9p0g9vAZMs+UknC0kMgDeWSfr2D/pVoi950gLOcI0WKusIyiTi0VYwby2/s5WtlC39J9GscWSM8vISTnKKmnLrG/uJ59F+vYe7GGhNPn8c/az6nWHuLyz+GSkM7+mibart2meGSS3Mo6znZoyC2pIqekij2l1Ryta2FvSTXxx88QdfAkcUdOc6CkmotdA7hFJ5F44gzHqptIzT8vt9AcKK3mUHkdKccLCM7aR15jK12L16gdn+JAeTWxR08Stf8wiUfyqBmdoufKTU7WNVPc2U9xex+HLlWRcaqI+MMn+PJiOTn5hSQdPs6X5y+RdeoM6SdOs7/oouxH5OzjZEU1R0oqaRie4PClCpKP5rH/XAn5tZc5XdPImYYWGocnmLj7gLy6RnLPFUsHSyrIb2iWijt6SRafe7GUotYOTtU3UdLdR8/8ElXaYWIPHyM0dx/Hq+pIPpbHnrPnOVpaRe6Zc6SfzOdUdT0FdZc5eLGMuC+PcK6plRNllRwvreTopTISRWg8e44T5VVUdvdxtq6RE6WV5OQVcKy4TO77FltX9p05x/DiNWbvPZIvWlZ2dFPR3kVtj4aqzh72nCpgf/5ZCmvqKW/toLiphbLWDnlPx6iOpUdPZUnB4sZmuiem6DfMcqK4lP2nz1BYWU1tRzfN/Vqq2zpJ3LOPoqoaLtTWU9vZTV1nD+er6zhbVknF5VYO5Z9l3/E82voHuXLvAd3Do7T2DTB7/SbzN25RUtvA3qMnyDt3kfyLl8g9fJQTheflfFldowz+pbX1cstJeX2jfGb22k16h0fpHR5hZGqKupYWLpSXc/DkCUpqqjlz8SKHTp6k4OIFea2uuYVLlVU0tXdQ1dBIS1c3bT29TC8ucv+rr2TZwftfPZUlCB+KSimiuslyK160fAx8BYyNT/Dr3y7v+c5rzqJh/gSVs3spn9lDxcwBKgwHZLWTKkM2xfNZlC3lUr24h/K5A+wZOckX9cV8v6CO/3Oyle/kNfLPBc1853QTnk3nODVxnGjNeV69dJk3yqo5MHKYnum97Ggt5/dFNXzntAjZHbxyrpk/XGzi5bOigkkrL+S38/2Cdn5Q0CZZVysxjf9RhRJTlZKfnO3AxLSn27z9eWEnv1j2cmEnRl28XGT0y6IuXinqkGFbBG6TX53rwFI7vz4vdBgt1+E2lgQUpQGNp02KFyVfE6xelFzZt31RXG+XXr/YgbXfFXfy++JO/nDRpIs3ip/jUhdvlFj6Y0kXfyztXvGn0u5n6nP/WZxAaXY4jrEv5ozeKu/F3NsVoqpJL38p7eWvZebECviqNeV9vFfRy3sVfbxfaaVKwwfL1lZqMFlX1c+zjC89ihcfV4jtJ5KWj6uEAdaL7SAWBvi0xpLYJiLVaPncyobaQSQRwmsG5P7tjbVazG1a3qO9sX6QDQ1GGxsG2dgwxMZGo02NQ2yS208G2NzUL21pGkC6PMCWy+JFS61k0zCAnaQ1hu8GEb5NAVyE8EEcmrQWHJsGcbxsyaFJ3Lfs8iAOyxwvDyE1D+HUPIhTsxbnlsFnKFqHEJRCyxCq1uFnqNtGEFzbRnEz1z6GW/sY7iYd43i0j0qeHaOY8+ocw8S7YwzfdqPtHWNs7xi34Nc5jl/HGH4do0ado/h3jlnqEuMJxJ7uFV1T7LQS0DlJYIdOCurQIXVOEmQmuHOKkM5pC6FiC4qZsK5pdncZjLqn2b0svHua8J4ZKaJHrGQvrOpZIEpaJLpnkeheo5i+BczF9i0Qq1lcIbahxMvAbWwTxJYTzdKKxP4lEvsXSeyftZDUP4s0MEeSmeSBeZK1lsTWlZTBBVIHF8gYmJMytfNYyxpcQNLOk61deNbgAjmDi1Lu4Dzm9gzNs2dowcK+oXkk07aS4Xn2Dy8YjSwgVroPjMxb+HJEVCwRlUtWHRoVVUqWOGJO7t2+wtFxk0WOjVsS+7ctjC9yQhKnSVo6OXEFIW9iiVMTi5yaWOK0OFnSwhXyJ0Wt7SucNSNOkJQmr1K47NzUVc6JA3BERZPxBfIGDRROiOoj18kfmZVzZ0ZnOd43yr6WPhJK61Bk7Jfjs6PTnNCMc0IzxqnhKQomZmWIz5+YYV/nAGd0cxzVjnNkYJxTY9Oc1y9yYXKOPO04x3qHqVy8SUxxJfGlNZwZ1VOxcINL+kUKh/Uc69LKtkS/SIFWx5mhSS5OzMo2r2+EYt0sZYYFikamON0/wrmRKUqn5imfFrW8l6iYvkLp1BKVM9dpWLrHxbFZzmgnKR6fo0x/RaowXOGSbpqTPQNUzi6xp76F5JIqioYnaLh6i4rpRSpnlqhfvE71zBLl+nnKJmepmb0ilUxMUzQ4TqlumobZJWoMcyQWFXOyrZuqqRnKx/WUjk7SOC+qpCxSPqajZHiMptlFWhau0nblOnX6WYq1I/JazeQ0z
bOLsgxg0/Q8LaL04MJVmueWqJ+aoUQ7KksF1k5MUTkyTvvCFep1eqpHJ2gyzNIwaZB9URWlZXqe/uu3aTbMUT08TqPOQNfCVal78Ro9S9dlbW0x1z6zQOfcEl3zV+heuErv0nV6Fq/RNDlNy+Q0vYtX6Vm4Qt/iNYZv3kV77RZN41M06/Rolq5zeWyK7pkFeueu0Koz0DJhkNVPRJWT9qkZOTd09QY9s/N0z8zRM7tA26SBvrlFBhavMnpTVEm5yeDSNbmnvMcwx/CVG3SJZ8cmmXnwhOkHjxm/cYfRqzcYv3ZTVjvR3bhDn2GOPsOsrMttqn4ianTrrt+W9bsXHj6RwX386k3ZLjx8zODcItqZOXTXbjBz+66kv3GLzrEJJuXcHaZv3Wbm1l2mrt5gbH4R/bUbjIrn9NMs3nvAja+eMn/nHnM373D9nrG+9sL12wxPTjO9dJ1Z8ZxhFv3CFRZv3mHx1h2u3L7H4s1baMcnmblylduPnnDr4WOu3r3P9XsPuPngAUu3b7F44wZDkzoWrl9n7tpVpubnmb16lev37nL19m1ZjnDm6hXZv3b3Djfu3+fukyc8+Pop95+agvdTY+h++jWPvha+4fE3xm0nT775htGxMX7z6nL4vtBzkJ5bF2i5doLmG3m03z5H27UiWq4U0H7tNPU38+i4m0fHnQLyZs6jaq/kp8Ud/EvRGC+cG+Vn+c28cK6LF4p6+PxyFfGjZwkdKGFNVSd/rGgnYbSIvhvHiR1uY21NNz8r1vJS8RC/LenjjZJOXi4S5QJ7efGClh9d1DzjxeJ+zP24WIO5ly5peOlSv4WfXOrnp1Z+VjKAuZ+XDvCLsme9XK7F5JdlWn5ZNmBUruWX5VpeWTHIKxWCll9VDDzj15VaTH5TqeW30iCvVi6rGuTVFUO8VjXIa5UDK16v0mLtD5WD/KnC6M2KQd6sHFrx58ohhLcrh3inwtKaiiHWVA6veLdymPekEd6rHOG9KqP3q0ZZUT3KB9VjzxDbPtbWjK+qHmPtsnU141ioneDD2gk+rh3jk7pxC+vrJ7BQO876ZZ+KmthWRG3sDXU6S/WTbLSyuUGHuS2Nk1jb2jiFyRdNU6zSs+2ynm1NerY16iSbpkms2V6eQviiVb+qTc8XbQZpW5uBbe3CNLbt+mfYdRgw59BmwNFc+zSO7dM4mXTM4NyhQ9E58Qxllw5zqi4dliZRd5vT4dI9gUuPboVrj44VvZO49axy753EvXdqVd8UHn1TePbp8XoOb40BEx+NAV+NHl/R9htt7zcgDUyzfWCaHZppAvuMgvqmkTQzBGlmCF4xzS6NgV2aaXb1i3rYZgZmCBH65wjRLEmh/UuE9l+xENZ/hbD+JXb3L64IH1gkfGDJQsTAFSIHrlqI0l7DXLT2GtHaq1LM4FViBq9ZiB26hhA3tLQifmiJ+KErRsNXiB++QsLwFZJGn+cqyWOrUkevIaSNmVwnbew66eNmxq6Rvixj/BoZ49efK3PiBtaydDcRsnU3ydXdkPZM3sTa3qlbCHt0N9iru8neyWftm7qFsH/yBgesfDl1E3MHp25wcOo6h6ZucEhv6bDhJsKR6VscnbF2m2Ozq47P3uaEhTucnLMye5u82VvkyVb0b3NqzuQOp+bucGr2DqeFuVX5c3eQ5u+SP3+XM/N3ODt/h0Jh4a6FosV7CBfm71I8u2zuLpfm7lmav0fpwn1K5+5SMnuH8sX7lC/co2zhHlVXH1GxZBxXXHlAxY2HlF65z1n9EocGJqm88ZiK6w+ovv2EyhuPKL3+kOLrD7hw7R4V955Sfucrym4/pvj6fcruPKH09iNKrt2h8vpNam7dpe7OfRruPSRvdIIL07NU3bhF9c07VF2/Re2NOzTcvk/9rXvU3bwr+6axaBvFs7fv0nhHXL9N/a07XL77gKY796m/eUfONd6+h1B7/RaX796n+d5DWu4/kq3pWuu9h7TfvU/Xg0e03b1PuWGGipk5mm/epu3OPdmK62237tJx5z5d9x7QdvsuHXfv0/PgEV3i+Tv36L7/kD6xEnr7LpVTepqXrspx/4NHaB48RHPfqP/+fQTtg4d037pNz+07xvHDRww9ekzf3Xv0372H9v4Di1bMDT98xOD9Bww+eMjIo0do791j5NFDhh88YPihcW744QPGHj9mSMzdvcfE4yeMP3zE6P0HjD18xPijx7KdePSYkfuinvYD9PcfYXj4xOjBY6YfPsbw4LEcz4k61w8eMSvmxeEz9x8yJ+teP5b9pSdPWXzytayZvfj4KxYffcX8/UfM3nvA3ANRS9s4d+XxU648+Yo5US/70WOuPX3K4qMnLMlnnshrYiwqk4gXJIWrT4z9pYePuPrkKde++porj5+weP8RVx8+4caTp9Ktr77m5ldPufboidxicuOJ8ZAbediNOPRmmdi2cvvpN9yRB9Z8jTi8xnTIzd3lw25kDe6nX8tDaMSccE8cTPPVUx4B959+I/uiTvZ9cVqkqNn9+Ct58I04/EaE20dPxf7qr+XearG/Wsw9/hoZhMWBOCL43n/8FV99I/Zni0okX8kygWK/9pOvv+bBE3FN7NV+ypOvv5F90Rr7Yiz2bj+Vzz5dXsUWnynmRWv6nK/EsybL159+A0bG8P3b15b3fBfX5aOdbaB3poKu6Qp6Zxvpn21DM9NMv6GBvul+tJMt1Gtria2u4q0Ttfzbvmb++XA7Lxxv5IfHK/n+yVpeONrMy/k1vH3hIn8oLOW7R5v47olGttSVcHC4kjRtJVtrKvhFUT3/dbKe/z5ayfdOVPDt41X8T14jL5xu4oW8Br5z0owY5zXyXQsNfO9kPd/LExr4vrmLUwIAAAAPAUlEQVRTDXz/VAM/ONX4jB+ebsLcD/Ib+UGB0Q8LGvnhGXNN/OhMEy+eaebFghZ+LJwxaeXHZ1qNW1/E9he5DaaZl86aa+Enhat+WtjCz85aKWzl52Z+UdjKy0Utq8618ksrrxS18utCo98UtWHut0Xt/PZcO68XtfNGoaU/FrXzx6IOC38q7MDkzaIOjDp5s+jv+/O5Tv58rsvofBdvXehe8fbFHqz95WIPfy3utvBOcTfvXOpZVdzDOxe6LF3sZo2V94t7sdTHB5csrSvpxeTDkl4+LOnjw1JLH5f28bGocGKuTMMnZf3LRAWUXj4pWVbay/rSPktlfXxe2seGMkFjYWN5PyabyvpZNcCmsgE2l5to2VyuZcuyreWDbK2w9EXFEF9UDsmqKTZVA5izrdJiW73KqWoQReWQpaohlFXDK1RVg6grzVQN4lI1ZEXMrXKtHsKaW/UwFmqGcasZxr1mZIVHzTAe1UOSZ80w5rxqh/GqHcFbqLHkUzuCT+3oiu11o/jVG/nXj+LfMPYcI/g3Dkk7GofY0TRsYWfTMDsvC0PsbB4mwFzLMIEtI8uGCGzpXxHUOoCFNi3BbQMEt/av2NU2gAXxAmZ7PyFtXUbtXYS2d1vq6CasXehid0e3hfCOHsI7jSI6e4jq7F0R3dXHszTEdGqfEds1iLm47iGEeKFn2EJC7zCr
[inline base64-encoded figure omitted] Having identified the top 10 most dangerous neighbourhoods, we now analyze what aspects they might have in common. ###Code Dias_aux_nei=data_bikes[(data_bikes['Status']=='STOLEN')].groupby(['Neighbourhood']).size().reset_index().rename(columns={0:'Count'}) Dias_aux_nei.sort_values(by=['Count'],ascending=False).head(10) #fig = px.bar(Dias_aux_nei.sort_values(by=['Count'],ascending=False).head(10), x='Neighbourhood', y='Count',title='Bicycle theft in Toronto per neighbourhood') #fig.show() ###Output _____no_output_____
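###Markdown To actually draw the ranking, a minimal sketch (it assumes `plotly.express` was imported as `px` earlier in this notebook, as the commented-out cell above suggests; `top10` is a helper name introduced here): ###Code
# ten most-affected neighbourhoods, plotted as a bar chart
top10 = Dias_aux_nei.sort_values(by=['Count'], ascending=False).head(10)
fig = px.bar(top10, x='Neighbourhood', y='Count',
             title='Top 10 Toronto neighbourhoods by bicycle thefts')
fig.show()
###Output _____no_output_____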
bronze/.ipynb_checkpoints/B28_Quantum_State-checkpoint.ipynb
###Markdown prepared by Abuzer Yakaryilmaz (QLatvia) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ Quantum State [Watch Lecture](https://youtu.be/6OE96rgQz8s) _The overall probability must be 1 when we observe a quantum system._ For example, the following vectors cannot be valid quantum states: $$ \myvector{ \dfrac{1}{2} \\ \dfrac{1}{2} } \mbox{ and } \myvector{ \dfrac{\sqrt{3}}{2} \\ \dfrac{1}{\sqrt{2}} }.$$ For the first vector, the probabilities of observing the states $\ket{0} $ and $ \ket{1} $ are $ \dfrac{1}{4} $. So, the overall probability of getting a result is $ \dfrac{1}{4} + \dfrac{1}{4} = \dfrac{1}{2} $, which is less than 1. For the second vector, the probabilities of observing the states $\ket{0} $ and $ \ket{1} $ are respectively $ \dfrac{3}{4} $ and $ \dfrac{1}{2} $. So, the overall probability of getting a result is $ \dfrac{3}{4} + \dfrac{1}{2} = \dfrac{5}{4} $, which is greater than 1. The summation of amplitude squares must be 1 for a valid quantum state. More formally, a quantum state can be represented by a vector having length 1, and vice versa. The summation of amplitude squares gives the square of the length of the vector. But this summation is 1, and its square root is also 1, so we can use the term length in the definition. Technical notes: We represent a quantum state as $ \ket{u} $ instead of $ u $. Remember the relation between the length and dot product: $ \norm{u} = \sqrt{\dot{u}{u}} $. In quantum computation, we use inner product instead of dot product, which is defined on complex numbers. By using bra-ket notation, $ \norm{ \ket{u} } = \sqrt{ \braket{u}{u} } = 1 $, or equivalently $ \braket{u}{u} = 1 $, where $ \braket{u}{u} $ is a short form of $ \bra{u}\ket{u} $. For real-valued vectors, $ \braket{v}{v} = \dot{v}{v} $.
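###Markdown As a quick numerical illustration of the norm condition (a minimal sketch; `is_valid_state` is a helper name introduced here, not part of the tutorial): ###Code
# a state is valid iff its squared amplitudes sum to 1
def is_valid_state(amplitudes, tolerance=1e-9):
    return abs(sum(a * a for a in amplitudes) - 1) < tolerance

print(is_valid_state([1/2, 1/2]))                # False: 1/4 + 1/4 = 1/2
print(is_valid_state([3**0.5 / 2, 1 / 2**0.5]))  # False: 3/4 + 1/2 = 5/4
print(is_valid_state([1 / 2**0.5, 1 / 2**0.5]))  # True:  1/2 + 1/2 = 1
###Output _____no_output_____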
Task 1 If the following vectors are valid quantum states defined with real numbers, then what can be the values of $a$ and $b$?$$ \ket{v} = \myrvector{a \\ -0.1 \\ -0.3 \\ 0.4 \\ 0.5} ~~~~~ \mbox{and} ~~~~~ \ket{u} = \myrvector{ \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{b}} \\ -\frac{1}{\sqrt{3}} }.$$ ###Code # # your code is here # (you may find the values by hand (in mind) as well) # ###Output _____no_output_____ ###Markdown click for our solution Quantum Operators Once the quantum state is defined, the definition of a quantum operator is very easy. Any length-preserving (square) matrix is a quantum operator, and vice versa. Task 2 Remember the Hadamard operator:$$ H = \hadamard.$$ Randomly create a 2-dimensional quantum state, and test whether the Hadamard operator preserves its length or not. Write a function that returns a randomly created 2-dimensional quantum state. Hint: Pick two random values between -100 and 100 for the amplitudes of state 0 and state 1 Find an appropriate normalization factor to divide each amplitude by such that the length of the quantum state is 1 Write a function that determines whether a given vector is a valid quantum state or not. (Due to precision problems, the summation of squares may not be exactly 1 but very close to 1, e.g., 0.9999999999999998.) Repeat 10 times: Randomly pick a quantum state Check whether the picked quantum state is valid Multiply the Hadamard matrix with the randomly created quantum state Check whether the resulting quantum state is valid ###Code # # you may define your first function in a separate cell # from random import randrange def random_quantum_state(): # quantum state quantum_state=[0,0] # # # return quantum_state # # your code is here # ###Output _____no_output_____
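###Markdown For reference, Task 1 follows directly from the norm condition: $a^2 + 0.01 + 0.09 + 0.16 + 0.25 = 1$ gives $a = \pm 0.7$, and $\frac{1}{2} + \frac{1}{b} + \frac{1}{3} = 1$ gives $b = 6$. One possible completion of the Task 2 template (a sketch; any names beyond those in the template are illustrative): ###Code
from random import randrange

def random_quantum_state():
    # pick two random integer amplitudes in [-100, 100], avoiding the zero vector
    first, second = randrange(-100, 101), randrange(-100, 101)
    while first == 0 and second == 0:
        first, second = randrange(-100, 101), randrange(-100, 101)
    length = (first**2 + second**2) ** 0.5  # normalization factor
    return [first / length, second / length]

def is_quantum_state(state):
    # valid if the squared amplitudes sum to (almost) 1
    return abs(sum(a * a for a in state) - 1) < 1e-9

sqrttwo = 2 ** -0.5
hadamard = [[sqrttwo, sqrttwo], [sqrttwo, -sqrttwo]]

for _ in range(10):
    state = random_quantum_state()
    new_state = [hadamard[0][0] * state[0] + hadamard[0][1] * state[1],
                 hadamard[1][0] * state[0] + hadamard[1][1] * state[1]]
    print(is_quantum_state(state), is_quantum_state(new_state))  # expect True True
###Output _____no_output_____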
main/nbs/poc/visualization-proto2.ipynb
###Markdown Easily export jupyter cells to python module: https://github.com/fastai/course-v3/blob/master/nbs/dl2/notebook2script.py ###Code ! python /tf/src/scripts/notebook2script.py visualization.ipynb %matplotlib inline ! pip install -U scikit-learn #export from exp.nb_clustering import * from exp.nb_evaluation import * import matplotlib import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import matplotlib.cm as cmx import matplotlib.patches as patches from mpl_toolkits.mplot3d import Axes3D from matplotlib.colors import LogNorm cd /tf/src/data/features ###Output /tf/src/data/features ###Markdown Generate all the feature vectors (skip if already done) ###Code embdr = D2VEmbedder("/tf/src/data/doc2vec/model") # Generate and Save Human Features hman_dict = embdr("/tf/src/data/methods/DATA00M_[god-r]/test") with open('hman_features.pickle', 'wb') as f: pickle.dump(hman_dict, f, protocol=pickle.HIGHEST_PROTOCOL) # Generate and Save GPT-2 Pretrained Features m1_dict = embdr("/tf/src/data/samples/unconditional/m1_example") with open('m1_features.pickle', 'wb') as f: pickle.dump(m1_dict, f, protocol=pickle.HIGHEST_PROTOCOL) ###Output _____no_output_____ ###Markdown Read in Feature Vectors ###Code models_path = "/tf/src/data/features/output_space" models_features = load_features(models_path) len(models_features[0]), len(models_features[1]) ###Output _____no_output_____ ###Markdown Visualize Features ###Code models_clusters = cluster(models_features, k_range = [2, 3, 4, 5]) _, _, _, kmeans = models_clusters[1] kmeans.n_clusters def setup_data(model): feature_vectors, _, centroids, kmeans = model # Step size of the mesh. Decrease to increase the quality of the VQ. h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max]. # Plot the decision boundary. For that, we will assign a color to each x_min, x_max = feature_vectors[:, 0].min() - 1, feature_vectors[:, 0].max() + 1 y_min, y_max = feature_vectors[:, 1].min() - 1, feature_vectors[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) return feature_vectors, centroids, xx, yy, Z def plot_features(models_clusters): plt.figure(figsize=(12, 8)) # Create 2x2 sub plots gs = gridspec.GridSpec(2, 2) plt.clf() for i, model in enumerate(models_clusters): # Setup data to be plotted feature_vectors, centroids, xx, yy, Z = setup_data(model) # Plot data plt.subplot(gs[0, i]) plt.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower') plt.plot(feature_vectors[:, 0], feature_vectors[:, 1], 'k.', markersize=2) # Plot the centroids as a white X plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10) plt.title('K-means clustering\n' '(PCA & T-SNE - reduced data)\n' 'Centroids are marked with white cross') plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.subplot(gs[1, :]) colmap = {0: 'b.', 1: 'r.'} plt.title('Blue denotes Human Methods and Red denotes GPT-2 Unconditional Samples') for i, model in enumerate(models_clusters): feature_vectors, _, _, _ = model plt.plot(feature_vectors[:, 0], feature_vectors[:, 1], colmap[i], markersize=10) # plt.xticks(()) # plt.yticks(()) plt.show() plot_features(models_clusters) ###Output _____no_output_____ ###Markdown Gaussian Mixture Visualization ###Code ###Output _____no_output_____ ###Markdown Visualize 1D ###Code models_clusters = cluster(models_features, k_range = [2, 3, 4, 5], dims = 1) dims = 2 for model in models_clusters: feature_vectors, _, _, kmeans = model print(feature_vectors.shape) # feature_vectors = reduce_dims(feature_vectors, dims) gmm = generate_distributions(feature_vectors, kmeans.n_clusters) fig = plt.figure() ax = fig.add_subplot(111) x = np.linspace(-10, 10, 1000).reshape(1000,1) logprob = gmm.score_samples(x) pdf = np.exp(logprob) # print(np.max(pdf)) -> 19.8409464401 !?
ax.plot(x, pdf, '-k') plt.show() ###Output _____no_output_____ ###Markdown Visualize 2D ###Code dims = 2 models_clusters = cluster(models_features, k_range = [2, 3, 4, 5], dims = dims) # From http://www.itzikbs.com/gaussian-mixture-model-gmm-3d-point-cloud-classification-primer def visualize_2D_gmm(points, w, mu, stdev, export=True): ''' plots points and their corresponding gmm model in 2D Input: points: N X 2, sampled points w: n_gaussians, gmm weights mu: 2 X n_gaussians, gmm means stdev: 2 X n_gaussians, gmm standard deviation (assuming diagonal covariance matrix) Output: None ''' n_gaussians = mu.shape[1] # print(n_gaussians) N = int(np.round(points.shape[0] / n_gaussians)) # Visualize data fig = plt.figure(figsize=(8, 8)) axes = plt.gca() # axes.set_xlim([-100, 1]) # axes.set_ylim([-1, 1]) plt.set_cmap('Set1') colors = cmx.Set1(np.linspace(0, 1, n_gaussians)) for i in range(n_gaussians): # if idx = range(i * N, (i + 1) * N) plt.scatter(points[idx, 0], points[idx, 1], alpha=0.3, c=colors[i]) for j in range(8): # print(stdev.shape, stdev[0, i], stdev[1, i]) axes.add_patch( patches.Ellipse(mu[:, i], width=(j+1) * stdev[0, i], height=(j+1) * stdev[1, i], fill=False, color=[0.0, 0.0, 1.0, 1.0/(0.5*j+1)])) plt.title('GMM') plt.xlabel('X') plt.ylabel('Y') if export: if not os.path.exists('images/'): os.mkdir('images/') plt.savefig('images/2D_GMM_demonstration.png', dpi=100, format='png') plt.show() feature_vectors, _, _, kmeans = models_clusters[1] gmm = generate_distributions(feature_vectors, kmeans.n_clusters) feature_vectors.shape visualize_2D_gmm(feature_vectors, gmm.weights_, gmm.means_.T, np.sqrt(gmm.covariances_).T) def plot_2d(models_clusters): plt.figure(figsize=(12, 8)) # Create 2x2 sub plots gs = gridspec.GridSpec(1, 2) plt.clf() for i, model in enumerate(models_clusters): # Setup data to be plotted feature_vectors, _, _, kmeans = model gmm = generate_distributions(feature_vectors, kmeans.n_clusters) # Plot data plt.subplot(gs[0, i]) # display predicted scores by the model as a contour plot delta = 30. x = np.linspace(feature_vectors[:, 0].min() - delta, feature_vectors[:, 0].max() + delta) y = np.linspace(feature_vectors[:, 1].min() - delta, feature_vectors[:, 1].max() + delta) X, Y = np.meshgrid(x, y) XX = np.array([X.ravel(), Y.ravel()]).T Z = -gmm.score_samples(XX) Z = Z.reshape(X.shape) CS = plt.contour(X, Y, Z, norm=LogNorm(vmin=1.0, vmax=100.0), levels=np.logspace(1, 2, 10)) CB = plt.colorbar(CS, shrink=0.8, extend='both') plt.scatter(feature_vectors[:, 0], feature_vectors[:, 1], .8) plt.title('Negative log-likelihood predicted by a GMM') plt.axis('tight') plt.show() plot_2d(models_clusters) feature_vectors, _, _, kmeans = models_clusters[0] gmm = generate_distributions(feature_vectors, kmeans.n_clusters) feature_vectors.shape, kmeans.n_clusters feature_vectors[:, 0].max(), feature_vectors[:, 1].min() n_samples = 300 # generate random sample, two components np.random.seed(0) # generate spherical data centered on (20, 20) shifted_gaussian = np.random.randn(n_samples, 2) + np.array([20, 20]) # generate zero centered stretched Gaussian data C = np.array([[0., -0.7], [3.5, .7]]) stretched_gaussian = np.dot(np.random.randn(n_samples, 2), C) # concatenate the two datasets into the final training set X_train = np.vstack([shifted_gaussian, stretched_gaussian]) # fit a Gaussian Mixture Model with two components # clf = GaussianMixture(n_components=2, covariance_type='full') # clf.fit(X_train) # display predicted scores by the model as a contour plot delta = 30. 
x = np.linspace(feature_vectors[:, 0].min() - delta, feature_vectors[:, 0].max() + delta) y = np.linspace(feature_vectors[:, 1].min() - delta, feature_vectors[:, 1].max() + delta) X, Y = np.meshgrid(x, y) XX = np.array([X.ravel(), Y.ravel()]).T Z = -gmm.score_samples(XX) Z = Z.reshape(X.shape) CS = plt.contour(X, Y, Z, norm=LogNorm(vmin=1.0, vmax=100.0), levels=np.logspace(1, 2, 10)) CB = plt.colorbar(CS, shrink=0.8, extend='both') plt.scatter(feature_vectors[:, 0], feature_vectors[:, 1], .8) plt.title('Negative log-likelihood predicted by a GMM') plt.axis('tight') plt.show() ###Output _____no_output_____ ###Markdown Visualize 3D ###Code dims = 3 models_clusters = cluster(models_features, k_range = [2, 3, 4, 5], dims = dims) # From http://www.itzikbs.com/gaussian-mixture-model-gmm-3d-point-cloud-classification-primer def plot_sphere(w=0, c=[0,0,0], r=[1, 1, 1], subdev=10, ax=None, sigma_multiplier=3): ''' plot a sphere surface Input: c: 3 elements list, sphere center r: 3 element list, sphere original scale in each axis ( allowing to draw elipsoids) subdiv: scalar, number of subdivisions (subdivision^2 points sampled on the surface) ax: optional pyplot axis object to plot the sphere in. sigma_multiplier: sphere additional scale (choosing an std value when plotting gaussians) Output: ax: pyplot axis object ''' if ax is None: fig = plt.figure() ax = fig.add_subplot(111, projection='3d') pi = np.pi cos = np.cos sin = np.sin phi, theta = np.mgrid[0.0:pi:complex(0,subdev), 0.0:2.0 * pi:complex(0,subdev)] x = sigma_multiplier*r[0] * sin(phi) * cos(theta) + c[0] y = sigma_multiplier*r[1] * sin(phi) * sin(theta) + c[1] z = sigma_multiplier*r[2] * cos(phi) + c[2] cmap = cmx.ScalarMappable() cmap.set_cmap('jet') c = cmap.to_rgba(w) ax.plot_surface(x, y, z, color=c, alpha=0.2, linewidth=1) return ax # From http://www.itzikbs.com/gaussian-mixture-model-gmm-3d-point-cloud-classification-primer def visualize_3d_gmm(points, w, mu, stdev, export=True): ''' plots points and their corresponding gmm model in 3D Input: points: N X 3, sampled points w: n_gaussians, gmm weights mu: 3 X n_gaussians, gmm means stdev: 3 X n_gaussians, gmm standard deviation (assuming diagonal covariance matrix) Output: None ''' n_gaussians = mu.shape[1] N = int(np.round(points.shape[0] / n_gaussians)) # Visualize data fig = plt.figure(figsize=(8, 8)) axes = fig.add_subplot(111, projection='3d') # axes.set_xlim([-1, 1]) # axes.set_ylim([-1, 1]) # axes.set_zlim([-1, 1]) plt.set_cmap('Set1') colors = cmx.Set1(np.linspace(0, 1, n_gaussians)) for i in range(n_gaussians): idx = range(i * N, (i + 1) * N) axes.scatter(points[idx, 0], points[idx, 1], points[idx, 2], alpha=0.3, c=colors[i]) plot_sphere(w=w[i], c=mu[:, i], r=stdev[:, i], ax=axes) plt.title('3D GMM') axes.set_xlabel('X') axes.set_ylabel('Y') axes.set_zlabel('Z') axes.view_init(35.246, 45) # if export: # if not os.path.exists('images/'): os.mkdir('images/') # plt.savefig('images/3D_GMM_demonstration.png', dpi=100, format='png') plt.show() feature_vectors, _, _, kmeans = models_clusters[0] gmm = generate_distributions(feature_vectors, 2) kmeans.n_clusters visualize_3d_gmm(feature_vectors, gmm.weights_, gmm.means_.T, np.sqrt(gmm.covariances_).T) fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(feature_vectors[:, 0], feature_vectors[:, 1], feature_vectors[:, 2]) fig = plt.figure() ax = fig.add_subplot(111, projection="3d") X, Y = X, Y = np.mgrid[-1:1:30j, -1:1:30j] #np.mgrid[-100:100:30j, -100:100:30j] XX = np.array([X.ravel(), Y.ravel()]).T # Z = 
-gmm.score_samples(XX) Z = np.sin(np.pi*X)*np.sin(np.pi*Y) # ax.plot_surface(X, Y, Z, cmap="autumn_r", lw=0.5, rstride=1, cstride=1) # ax.contour(X, Y, Z, 100, lw=3, cmap="autumn_r", linestyles="solid", offset=-1) CS = ax.contour(X, Y, Z, 20, linewidths=3, colors="k", linestyles="solid", norm=LogNorm(vmin=1.0, vmax=100.0), levels=np.logspace(1, 2, 10)) CB = fig.colorbar(CS, shrink=0.8, extend='both') # the colorbar belongs to the figure, not the 3D axes plt.show() ###Output _____no_output_____
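###Markdown Another way to eyeball the 3-D fit (a sketch; `gmm` is the mixture fitted on the 3-D reduced features above, `GaussianMixture.sample` is scikit-learn's sampler, and the point count is arbitrary): ###Code
# draw synthetic points from the fitted 3D GMM and scatter them by component
samples, component_labels = gmm.sample(500)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(samples[:, 0], samples[:, 1], samples[:, 2], s=4, c=component_labels)
plt.show()
###Output _____no_output_____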
1. pandas/03_Grouping/Occupation/Exercises_with_solutions.ipynb
###Markdown Occupation Introduction:Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries ###Code import pandas as pd ###Output _____no_output_____ ###Markdown Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). Step 3. Assign it to a variable called users. ###Code users = pd.read_table('https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user', sep='|', index_col='user_id') users.head() ###Output _____no_output_____ ###Markdown Step 4. Discover what is the mean age per occupation ###Code users.groupby('occupation').age.mean() ###Output _____no_output_____ ###Markdown Step 5. Discover the Male ratio per occupation and sort it from the most to the least ###Code # create a function def gender_to_numeric(x): if x == 'M': return 1 if x == 'F': return 0 # apply the function to the gender column and create a new column users['gender_n'] = users['gender'].apply(gender_to_numeric) a = users.groupby('occupation').gender_n.sum() / users.occupation.value_counts() * 100 # sort to the most male a.sort_values(ascending = False) ###Output _____no_output_____ ###Markdown Step 6. For each occupation, calculate the minimum and maximum ages ###Code users.groupby('occupation').age.agg(['min', 'max']) ###Output _____no_output_____ ###Markdown Step 7. For each combination of occupation and gender, calculate the mean age ###Code users.groupby(['occupation', 'gender']).age.mean() ###Output _____no_output_____ ###Markdown Step 8. For each occupation present the percentage of women and men ###Code # create a data frame and apply count to gender gender_ocup = users.groupby(['occupation', 'gender']).agg({'gender': 'count'}) # create a DataFrame and apply count for each occupation occup_count = users.groupby(['occupation']).agg('count') # divide the gender_ocup per the occup_count and multiply per 100 occup_gender = gender_ocup.div(occup_count, level = "occupation") * 100 # present all rows from the 'gender column' occup_gender.loc[: , 'gender'] ###Output _____no_output_____
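###Markdown Step 8 can also be written as a single `pd.crosstab` call (an equivalent sketch; `normalize='index'` requires pandas 0.18 or newer): ###Code
# percentage of men and women within each occupation
pd.crosstab(users.occupation, users.gender, normalize='index') * 100
###Output _____no_output_____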
Modelo Inicial No Usar/5. Credit Risk Modeling - LGD and EAD Models - With Comments - 11-7.ipynb
###Markdown Import Libraries ###Code import numpy as np import pandas as pd ###Output _____no_output_____ ###Markdown Import Data ###Code # Import data. loan_data_preprocessed_backup = pd.read_csv('loan_data_2007_2014_preprocessed.csv') ###Output _____no_output_____ ###Markdown Explore Data ###Code loan_data_preprocessed = loan_data_preprocessed_backup.copy() loan_data_preprocessed.columns.values # Displays all column names. loan_data_preprocessed.head() loan_data_preprocessed.tail() loan_data_defaults = loan_data_preprocessed[loan_data_preprocessed['loan_status'].isin(['Charged Off','Does not meet the credit policy. Status:Charged Off'])] # Here we take only the accounts that were charged-off (written-off). loan_data_defaults.shape pd.options.display.max_rows = None # Sets the pandas dataframe options to display all rows. loan_data_defaults.isnull().sum() ###Output _____no_output_____ ###Markdown Independent Variables ###Code loan_data_defaults['mths_since_last_delinq'].fillna(0, inplace = True) # We fill the missing values with zeroes. #loan_data_defaults['mths_since_last_delinq'].fillna(loan_data_defaults['mths_since_last_delinq'].max() + 12, inplace=True) loan_data_defaults['mths_since_last_record'].fillna(0, inplace=True) # We fill the missing values with zeroes. ###Output _____no_output_____ ###Markdown Dependent Variables ###Code loan_data_defaults['recovery_rate'] = loan_data_defaults['recoveries'] / loan_data_defaults['funded_amnt'] # We calculate the dependent variable for the LGD model: recovery rate. # It is the ratio of recoveries and funded amount. loan_data_defaults['recovery_rate'].describe() # Shows some descriptive statistics for the values of a column. loan_data_defaults['recovery_rate'] = np.where(loan_data_defaults['recovery_rate'] > 1, 1, loan_data_defaults['recovery_rate']) loan_data_defaults['recovery_rate'] = np.where(loan_data_defaults['recovery_rate'] < 0, 0, loan_data_defaults['recovery_rate']) # We set recovery rates that are greater than 1 to 1 and recovery rates that are less than 0 to 0. loan_data_defaults['recovery_rate'].describe() # Shows some descriptive statistics for the values of a column. loan_data_defaults['CCF'] = (loan_data_defaults['funded_amnt'] - loan_data_defaults['total_rec_prncp']) / loan_data_defaults['funded_amnt'] # We calculate the dependent variable for the EAD model: credit conversion factor. # It is the ratio of the amount still outstanding at the moment of default (funded amount less recovered principal) to the total funded amount. loan_data_defaults['CCF'].describe() # Shows some descriptive statistics for the values of a column. loan_data_defaults.to_csv('loan_data_defaults.csv') # We save the data to a CSV file. ###Output _____no_output_____ ###Markdown Explore Dependent Variables ###Code import matplotlib.pyplot as plt import seaborn as sns sns.set() plt.hist(loan_data_defaults['recovery_rate'], bins = 100) # We plot a histogram of a variable with 100 bins. plt.hist(loan_data_defaults['recovery_rate'], bins = 50) # We plot a histogram of a variable with 50 bins. plt.hist(loan_data_defaults['CCF'], bins = 100) # We plot a histogram of a variable with 100 bins. loan_data_defaults['recovery_rate_0_1'] = np.where(loan_data_defaults['recovery_rate'] == 0, 0, 1) # We create a new variable which is 0 if recovery rate is 0 and 1 otherwise.
loan_data_defaults['recovery_rate_0_1'] ###Output _____no_output_____ ###Markdown LGD Model Splitting Data ###Code from sklearn.model_selection import train_test_split # LGD model stage 1 datasets: recovery rate 0 or greater than 0. lgd_inputs_stage_1_train, lgd_inputs_stage_1_test, lgd_targets_stage_1_train, lgd_targets_stage_1_test = train_test_split(loan_data_defaults.drop(['good_bad', 'recovery_rate','recovery_rate_0_1', 'CCF'], axis = 1), loan_data_defaults['recovery_rate_0_1'], test_size = 0.2, random_state = 42) # Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes: # Inputs - Train, Inputs - Test, Targets - Train, Targets - Test. ###Output _____no_output_____ ###Markdown Preparing the Inputs ###Code features_all = ['grade:A', 'grade:B', 'grade:C', 'grade:D', 'grade:E', 'grade:F', 'grade:G', 'home_ownership:MORTGAGE', 'home_ownership:NONE', 'home_ownership:OTHER', 'home_ownership:OWN', 'home_ownership:RENT', 'verification_status:Not Verified', 'verification_status:Source Verified', 'verification_status:Verified', 'purpose:car', 'purpose:credit_card', 'purpose:debt_consolidation', 'purpose:educational', 'purpose:home_improvement', 'purpose:house', 'purpose:major_purchase', 'purpose:medical', 'purpose:moving', 'purpose:other', 'purpose:renewable_energy', 'purpose:small_business', 'purpose:vacation', 'purpose:wedding', 'initial_list_status:f', 'initial_list_status:w', 'term_int', 'emp_length_int', 'mths_since_issue_d', 'mths_since_earliest_cr_line', 'funded_amnt', 'int_rate', 'installment', 'annual_inc', 'dti', 'delinq_2yrs', 'inq_last_6mths', 'mths_since_last_delinq', 'mths_since_last_record', 'open_acc', 'pub_rec', 'total_acc', 'acc_now_delinq', 'total_rev_hi_lim'] # List of all independent variables for the models. features_reference_cat = ['grade:G', 'home_ownership:RENT', 'verification_status:Verified', 'purpose:credit_card', 'initial_list_status:f'] # List of the dummy variable reference categories. lgd_inputs_stage_1_train = lgd_inputs_stage_1_train[features_all] # Here we keep only the variables we need for the model. lgd_inputs_stage_1_train = lgd_inputs_stage_1_train.drop(features_reference_cat, axis = 1) # Here we remove the dummy variable reference categories. lgd_inputs_stage_1_train.isnull().sum() # Check for missing values. We check whether the value of each row for each column is missing or not, # then sum accross columns. ###Output _____no_output_____ ###Markdown Estimating the Model ###Code # P values for sklearn logistic regression. # Class to display p-values for logistic regression in sklearn. 
from sklearn import linear_model import scipy.stats as stat class LogisticRegression_with_p_values: def __init__(self,*args,**kwargs): self.model = linear_model.LogisticRegression(*args,**kwargs) def fit(self,X,y): self.model.fit(X,y) #### Get p-values for the fitted model #### denom = (2.0 * (1.0 + np.cosh(self.model.decision_function(X)))) denom = np.tile(denom,(X.shape[1],1)).T F_ij = np.dot((X / denom).T,X) ## Fisher Information Matrix Cramer_Rao = np.linalg.inv(F_ij) ## Inverse Information Matrix sigma_estimates = np.sqrt(np.diagonal(Cramer_Rao)) z_scores = self.model.coef_[0] / sigma_estimates # z-score for each model coefficient p_values = [stat.norm.sf(abs(x)) * 2 for x in z_scores] ### two tailed test for p-values self.coef_ = self.model.coef_ self.intercept_ = self.model.intercept_ #self.z_scores = z_scores self.p_values = p_values #self.sigma_estimates = sigma_estimates #self.F_ij = F_ij reg_lgd_st_1 = LogisticRegression_with_p_values() # We create an instance of an object from the 'LogisticRegression_with_p_values' class. reg_lgd_st_1.fit(lgd_inputs_stage_1_train, lgd_targets_stage_1_train) # Estimates the coefficients of the object from the 'LogisticRegression_with_p_values' class # with inputs (independent variables) contained in the first dataframe # and targets (dependent variables) contained in the second dataframe. feature_name = lgd_inputs_stage_1_train.columns.values # Stores the names of the columns of a dataframe in a variable. summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name) # Creates a dataframe with a column titled 'Feature name' and row values contained in the 'feature_name' variable. summary_table['Coefficients'] = np.transpose(reg_lgd_st_1.coef_) # Creates a new column in the dataframe, called 'Coefficients', # with row values the transposed coefficients from the fitted model. summary_table.index = summary_table.index + 1 # Increases the index of every row of the dataframe by 1. summary_table.loc[0] = ['Intercept', reg_lgd_st_1.intercept_[0]] # Assigns values of the row with index 0 of the dataframe. summary_table = summary_table.sort_index() # Sorts the dataframe by index. p_values = reg_lgd_st_1.p_values # We take the result of the newly added attribute 'p_values' and store it in a variable 'p_values'. p_values = np.append(np.nan,np.array(p_values)) # We add the value 'NaN' in the beginning of the variable with p-values. summary_table['p_values'] = p_values # In the 'summary_table' dataframe, we add a new column, called 'p_values', containing the values from the 'p_values' variable. summary_table summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name) summary_table['Coefficients'] = np.transpose(reg_lgd_st_1.coef_) summary_table.index = summary_table.index + 1 summary_table.loc[0] = ['Intercept', reg_lgd_st_1.intercept_[0]] summary_table = summary_table.sort_index() p_values = reg_lgd_st_1.p_values p_values = np.append(np.nan,np.array(p_values)) summary_table['p_values'] = p_values summary_table ###Output _____no_output_____ ###Markdown Testing the Model ###Code lgd_inputs_stage_1_test = lgd_inputs_stage_1_test[features_all] # Here we keep only the variables we need for the model. lgd_inputs_stage_1_test = lgd_inputs_stage_1_test.drop(features_reference_cat, axis = 1) # Here we remove the dummy variable reference categories.
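###Markdown The hand-rolled p-values above can be sanity-checked against statsmodels (a sketch; it assumes statsmodels is installed and that the fit converges on these inputs): ###Code
import statsmodels.api as sm

# refit stage 1 with statsmodels to compare coefficients and p-values
sm_logit = sm.Logit(lgd_targets_stage_1_train,
                    sm.add_constant(lgd_inputs_stage_1_train)).fit(disp=0)
print(sm_logit.summary())
###Output _____no_output_____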
y_hat_test_lgd_stage_1 = reg_lgd_st_1.model.predict(lgd_inputs_stage_1_test) # Calculates the predicted values for the dependent variable (targets) # based on the values of the independent variables (inputs) supplied as an argument. y_hat_test_lgd_stage_1 y_hat_test_proba_lgd_stage_1 = reg_lgd_st_1.model.predict_proba(lgd_inputs_stage_1_test) # Calculates the predicted probability values for the dependent variable (targets) # based on the values of the independent variables (inputs) supplied as an argument. y_hat_test_proba_lgd_stage_1 # This is an array of arrays of predicted class probabilities for all classes. # In this case, the first value of every sub-array is the probability for the observation to belong to the first class, i.e. 0, # and the second value is the probability for the observation to belong to the second class, i.e. 1. y_hat_test_proba_lgd_stage_1 = y_hat_test_proba_lgd_stage_1[: ][: , 1] # Here we take all the arrays in the array, and from each array, we take all rows, and only the element with index 1, # that is, the second element. # In other words, we take only the probabilities for being 1. y_hat_test_proba_lgd_stage_1 lgd_targets_stage_1_test_temp = lgd_targets_stage_1_test lgd_targets_stage_1_test_temp.reset_index(drop = True, inplace = True) # We reset the index of a dataframe. df_actual_predicted_probs = pd.concat([lgd_targets_stage_1_test_temp, pd.DataFrame(y_hat_test_proba_lgd_stage_1)], axis = 1) # Concatenates two dataframes. df_actual_predicted_probs.columns = ['lgd_targets_stage_1_test', 'y_hat_test_proba_lgd_stage_1'] df_actual_predicted_probs.index = lgd_inputs_stage_1_test.index # Makes the index of one dataframe equal to the index of another dataframe. df_actual_predicted_probs.head() ###Output _____no_output_____ ###Markdown Estimating the Accuracy of the Model ###Code tr = 0.5 # We create a new column with an indicator, # where every observation that has predicted probability greater than the threshold has a value of 1, # and every observation that has predicted probability lower than the threshold has a value of 0. df_actual_predicted_probs['y_hat_test_lgd_stage_1'] = np.where(df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1'] > tr, 1, 0) pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) # Creates a cross-table where the actual values are displayed by rows and the predicted values by columns. # This table is known as a Confusion Matrix. pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0] # Here we divide each value of the table by the total number of observations, # thus getting percentages, or rates. (pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]).iloc[0, 0] + (pd.crosstab(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_lgd_stage_1'], rownames = ['Actual'], colnames = ['Predicted']) / df_actual_predicted_probs.shape[0]).iloc[1, 1] # Here we calculate Accuracy of the model, which is the sum of the diagonal rates.
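###Markdown The same bookkeeping is available from scikit-learn directly (a sketch using the columns created above): ###Code
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = df_actual_predicted_probs['lgd_targets_stage_1_test']
y_pred = df_actual_predicted_probs['y_hat_test_lgd_stage_1']
print(confusion_matrix(y_true, y_pred))  # counts, actual by rows, predicted by columns
print(accuracy_score(y_true, y_pred))    # the same number as the sum of the diagonal rates
###Output _____no_output_____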
from sklearn.metrics import roc_curve, roc_auc_score fpr, tpr, thresholds = roc_curve(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1']) # Returns the Receiver Operating Characteristic (ROC) Curve from a set of actual values and their predicted probabilities. # As a result, we get three arrays: the false positive rates, the true positive rates, and the thresholds. # We store each of the three arrays in a separate variable. plt.plot(fpr, tpr) # We plot the false positive rate along the x-axis and the true positive rate along the y-axis, # thus plotting the ROC curve. plt.plot(fpr, fpr, linestyle = '--', color = 'k') # We plot a secondary diagonal line, with dashed line style and black color. plt.xlabel('False positive rate') # We name the x-axis "False positive rate". plt.ylabel('True positive rate') # We name the y-axis "True positive rate". plt.title('ROC curve') # We name the graph "ROC curve". AUROC = roc_auc_score(df_actual_predicted_probs['lgd_targets_stage_1_test'], df_actual_predicted_probs['y_hat_test_proba_lgd_stage_1']) # Calculates the Area Under the Receiver Operating Characteristic Curve (AUROC) # from a set of actual values and their predicted probabilities. AUROC ###Output _____no_output_____ ###Markdown Saving the Model ###Code import pickle pickle.dump(reg_lgd_st_1, open('lgd_model_stage_1.sav', 'wb')) # Here we export our model to a 'SAV' file with file name 'lgd_model_stage_1.sav'. ###Output _____no_output_____ ###Markdown Stage 2 – Linear Regression ###Code lgd_stage_2_data = loan_data_defaults[loan_data_defaults['recovery_rate_0_1'] == 1] # Here we take only rows where the original recovery rate variable is greater than zero, # i.e. where the indicator variable we created is equal to 1. # LGD model stage 2 datasets: how much more than 0 is the recovery rate lgd_inputs_stage_2_train, lgd_inputs_stage_2_test, lgd_targets_stage_2_train, lgd_targets_stage_2_test = train_test_split(lgd_stage_2_data.drop(['good_bad', 'recovery_rate','recovery_rate_0_1', 'CCF'], axis = 1), lgd_stage_2_data['recovery_rate'], test_size = 0.2, random_state = 42) # Takes a set of inputs and a set of targets as arguments. Splits the inputs and the targets into four dataframes: # Inputs - Train, Inputs - Test, Targets - Train, Targets - Test. from sklearn import linear_model from sklearn.metrics import mean_squared_error, r2_score # Since the p-values are obtained through certain statistics, we need the 'stat' module from scipy.stats import scipy.stats as stat # Since we are using an object oriented language such as Python, we can simply define our own # LinearRegression class (the same one from sklearn) # By typing the code below we will overwrite a part of the class with one that includes p-values # Here's the full source code of the ORIGINAL class: https://github.com/scikit-learn/scikit-learn/blob/7b136e9/sklearn/linear_model/base.py#L362 class LinearRegression(linear_model.LinearRegression): """ LinearRegression class after sklearn's, but calculate t-statistics and p-values for model coefficients (betas). Additional attributes available after .fit() are `t` and `p` which are of the shape (y.shape[1], X.shape[1]) which is (n_features, n_coefs) This class sets the intercept to 0 by default, since usually we include it in X. """ # nothing changes in __init__ def __init__(self, fit_intercept=True, normalize=False, copy_X=True, n_jobs=1): self.fit_intercept = fit_intercept self.normalize = normalize self.copy_X = copy_X self.n_jobs = n_jobs def fit(self, X, y, n_jobs=1): self = super(LinearRegression, self).fit(X, y, n_jobs) # Calculate SSE (sum of squared errors) # and SE (standard error) sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1]) se = np.array([np.sqrt(np.diagonal(sse * np.linalg.inv(np.dot(X.T, X))))]) # compute the t-statistic for each feature self.t = self.coef_ / se # find the p-value for each feature self.p = np.squeeze(2 * (1 - stat.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1]))) return self import scipy.stats as stat class LinearRegression(linear_model.LinearRegression): def __init__(self, fit_intercept=True, normalize=False, copy_X=True, n_jobs=1): self.fit_intercept = fit_intercept self.normalize = normalize self.copy_X = copy_X self.n_jobs = n_jobs def fit(self, X, y, n_jobs=1): self = super(LinearRegression, self).fit(X, y, n_jobs) sse = np.sum((self.predict(X) - y) ** 2, axis=0) / float(X.shape[0] - X.shape[1]) se = np.array([np.sqrt(np.diagonal(sse * np.linalg.inv(np.dot(X.T, X))))]) self.t = self.coef_ / se self.p = np.squeeze(2 * (1 - stat.t.cdf(np.abs(self.t), y.shape[0] - X.shape[1]))) return self lgd_inputs_stage_2_train = lgd_inputs_stage_2_train[features_all] # Here we keep only the variables we need for the model. lgd_inputs_stage_2_train = lgd_inputs_stage_2_train.drop(features_reference_cat, axis = 1) # Here we remove the dummy variable reference categories. reg_lgd_st_2 = LinearRegression() # We create an instance of an object from the 'LinearRegression' class. reg_lgd_st_2.fit(lgd_inputs_stage_2_train, lgd_targets_stage_2_train) # Estimates the coefficients of the object from the 'LinearRegression' class # with inputs (independent variables) contained in the first dataframe # and targets (dependent variables) contained in the second dataframe. feature_name = lgd_inputs_stage_2_train.columns.values # Stores the names of the columns of a dataframe in a variable. summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name) # Creates a dataframe with a column titled 'Feature name' and row values contained in the 'feature_name' variable. summary_table['Coefficients'] = np.transpose(reg_lgd_st_2.coef_) # Creates a new column in the dataframe, called 'Coefficients', # with row values the transposed coefficients from the 'LinearRegression' object. summary_table.index = summary_table.index + 1 # Increases the index of every row of the dataframe by 1. summary_table.loc[0] = ['Intercept', reg_lgd_st_2.intercept_] # Assigns values of the row with index 0 of the dataframe. summary_table = summary_table.sort_index() # Sorts the dataframe by index. p_values = reg_lgd_st_2.p # We take the result of the newly added attribute 'p' and store it in a variable 'p_values'. p_values = np.append(np.nan,np.array(p_values)) # We add the value 'NaN' in the beginning of the variable with p-values. summary_table['p_values'] = p_values.round(3) # In the 'summary_table' dataframe, we add a new column, called 'p_values', containing the values from the 'p_values' variable. summary_table summary_table = pd.DataFrame(columns = ['Feature name'], data = feature_name) summary_table['Coefficients'] = np.transpose(reg_lgd_st_2.coef_) summary_table.index = summary_table.index + 1 summary_table.loc[0] = ['Intercept', reg_lgd_st_2.intercept_] summary_table = summary_table.sort_index() p_values = reg_lgd_st_2.p p_values = np.append(np.nan,np.array(p_values)) summary_table['p_values'] = p_values.round(3) summary_table ###Output _____no_output_____ ###Markdown Stage 2 – Linear Regression Evaluation ###Code lgd_inputs_stage_2_test = lgd_inputs_stage_2_test[features_all] # Here we keep only the variables we need for the model. lgd_inputs_stage_2_test = lgd_inputs_stage_2_test.drop(features_reference_cat, axis = 1) # Here we remove the dummy variable reference categories. lgd_inputs_stage_2_test.columns.values y_hat_test_lgd_stage_2 = reg_lgd_st_2.predict(lgd_inputs_stage_2_test) # Calculates the predicted values for the dependent variable (targets) # based on the values of the independent variables (inputs) supplied as an argument. lgd_targets_stage_2_test_temp = lgd_targets_stage_2_test lgd_targets_stage_2_test_temp = lgd_targets_stage_2_test_temp.reset_index(drop = True) # We reset the index of a dataframe. pd.concat([lgd_targets_stage_2_test_temp, pd.DataFrame(y_hat_test_lgd_stage_2)], axis = 1).corr() # We calculate the correlation between actual and predicted values. sns.distplot(lgd_targets_stage_2_test - y_hat_test_lgd_stage_2) # We plot the distribution of the residuals. pickle.dump(reg_lgd_st_2, open('lgd_model_stage_2.sav', 'wb')) # Here we export our model to a 'SAV' file with file name 'lgd_model_stage_2.sav'. ###Output _____no_output_____ ###Markdown Combining Stage 1 and Stage 2 ###Code y_hat_test_lgd_stage_2_all = reg_lgd_st_2.predict(lgd_inputs_stage_1_test) y_hat_test_lgd_stage_2_all y_hat_test_lgd = y_hat_test_lgd_stage_1 * y_hat_test_lgd_stage_2_all # Here we combine the predictions of the models from the two stages. pd.DataFrame(y_hat_test_lgd).describe() # Shows some descriptive statistics for the values of a column. y_hat_test_lgd = np.where(y_hat_test_lgd < 0, 0, y_hat_test_lgd) y_hat_test_lgd = np.where(y_hat_test_lgd > 1, 1, y_hat_test_lgd) # We set predicted values that are greater than 1 to 1 and predicted values that are less than 0 to 0. pd.DataFrame(y_hat_test_lgd).describe() # Shows some descriptive statistics for the values of a column. ###Output _____no_output_____
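###Markdown As a final check of the combined model (a sketch; it aligns the combined predictions with the actual recovery rates through the test-set index, and the variable names introduced here are illustrative): ###Code
# actual recovery rates for the stage 1 test observations
recovery_actual = loan_data_defaults.loc[lgd_inputs_stage_1_test.index, 'recovery_rate']
# correlation between actual recovery rates and the combined LGD-model predictions
pd.concat([recovery_actual.reset_index(drop=True),
           pd.DataFrame(y_hat_test_lgd, columns=['y_hat_test_lgd'])], axis=1).corr()
###Output _____no_output_____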
examples/01_copying_task.ipynb
###Markdown Copying Task Inspired by the task described in the following paper: [https://arxiv.org/pdf/1511.06464.pdf](https://arxiv.org/pdf/1511.06464.pdf) Introduction The copying task is one of the simplest benchmark tasks for recurrent neural networks. The general idea of the task is to reproduce a random sequence of symbols with length `len_sequence` chosen from an alphabet of size `num_symbols` after a certain waiting period `len_wait`. Assuming the waiting symbol is `0`, the symbols for the sequence are chosen from the alphabet `{1,2,3}` and the stop-waiting symbol is `4`; an example input and target for a waiting time of 20 symbols and a sequence length of 5 can be given by:``` 213310000000000000000000400000 000000000000000000000000021331``` As discussed in the [paper](https://arxiv.org/pdf/1511.06464.pdf), it is always useful to compare the loss of a certain implementation to the baseline loss of guessing. Assuming one uses the categorical cross-entropy loss, one can describe a baseline by predicting the waiting symbol for the first `len_wait + len_sequence` timesteps, followed by a random sampling for the remaining `len_sequence` positions out of the alphabet of symbols `{a1,...,an}` with `num_symbols` elements. This baseline cross-entropy loss boils down to``` len_sequence*log(num_symbols)/(len_wait + 2*len_sequence)``` Imports ###Code %matplotlib inline import torch import numpy as np import matplotlib.pyplot as plt import sys; sys.path.append('..') from torch_eunn import EURNN torch.manual_seed(24) np.random.seed(42) ###Output _____no_output_____ ###Markdown Constants ###Code # Training parameters num_steps = 500 batch_size = 128 test_size = 100 valid_size = 100 # Data Parameters len_wait = 100#0 # very slow if len_wait=1000 num_symbols = 8 len_sequence = 10 # RNN Parameters capacity = 2 num_layers_rnn = 1 num_hidden_rnn = 128 # Cuda cuda = True device = torch.device('cuda' if cuda else 'cpu') # Baseline Error baseline = len_sequence*np.log(num_symbols)/(len_wait+2*len_sequence) print(f'baseline = {baseline}') ###Output baseline = 0.17328679513998632 ###Markdown Data ###Code def data(len_wait, n_data, len_sequence, num_symbols): seq = np.random.randint(1, high=(num_symbols+1), size=(n_data, len_sequence)) zeros1 = np.zeros((n_data, len_wait-1)) zeros2 = np.zeros((n_data, len_wait)) marker = (num_symbols+1) * np.ones((n_data, 1)) zeros3 = np.zeros((n_data, len_sequence)) x = torch.tensor(np.concatenate((seq, zeros1, marker, zeros3), axis=1), dtype=torch.int64, device=device) y = torch.tensor(np.concatenate((zeros3, zeros2, seq), axis=1), dtype=torch.int64, device=device) return x, y x,y = data(len_wait, 1, len_sequence, num_symbols) print(x) print(y) ###Output tensor([[7, 4, 5, 7, 3, 8, 5, 5, 7, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], device='cuda:0') tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 4, 5, 7, 3, 8, 5, 5, 7, 2]], device='cuda:0') ###Markdown Model ###Code class Model(torch.nn.Module): def
__init__(self): super(Model, self).__init__() self.embedding = torch.nn.Embedding(len_wait+2*len_sequence, num_symbols+2) self.rnn = EURNN(num_symbols+2, num_hidden_rnn, capacity, batch_first=True) self.fc = torch.nn.Linear(num_hidden_rnn, num_symbols+1) # optimizers and criterion self.lossfunc = torch.nn.CrossEntropyLoss() self.optimizer = torch.optim.Adam(self.parameters(), lr=0.03) # move to device self.to(device) def forward(self, data): data = self.embedding(data) rnn_out, _ = self.rnn(data) out = self.fc(rnn_out) return out def loss(self, data, labels): return self.lossfunc(self(data).view(-1, num_symbols+1), labels.view(-1)) def accuracy(self, data, labels): return torch.mean((torch.argmax(self(data), -1).view(-1) == labels.view(-1)).float()) def prediction(self, data): return torch.argmax(self(data), -1) ###Output _____no_output_____ ###Markdown Train Create the model ###Code model = Model() ###Output _____no_output_____ ###Markdown Start Training ###Code %%time for step in range(num_steps): # reset gradients model.optimizer.zero_grad() # calculate validation accuracy and loss if step %100 == 0 or step == num_steps -1: valid_data, valid_labels = data(len_wait, valid_size, len_sequence, num_symbols) loss = model.loss(valid_data, valid_labels).item() print(f'Step {step:5.0f}\t Valid. Loss. = {loss:5.4f}') # train batch_data, batch_labels = data(len_wait, batch_size, len_sequence, num_symbols) loss = model.loss(batch_data, batch_labels) loss.backward() model.optimizer.step() ###Output Step 0 Valid. Loss. = 20.2436 Step 100 Valid. Loss. = 0.0035 Step 200 Valid. Loss. = 0.0008 Step 300 Valid. Loss. = 0.0009 Step 400 Valid. Loss. = 0.0002 Step 499 Valid. Loss. = 0.0001 CPU times: user 3min 14s, sys: 91.3 ms, total: 3min 14s Wall time: 3min 14s ###Markdown Test ###Code test_data, test_labels = data(len_wait, test_size, len_sequence, num_symbols) test_loss = model.loss(test_data, test_labels).item() test_acc = model.accuracy(test_data, test_labels).item() print("Test result: Loss= " + "{:.6f}".format(test_loss) + ", Accuracy= " + "{:.5f}".format(test_acc)) print('baseline = %f'%baseline) ###Output Test result: Loss= 0.000193, Accuracy= 1.00000 baseline = 0.173287
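###Markdown Two closing remarks. First, `torch.nn.Embedding` takes the vocabulary size as its first argument, so `num_symbols + 2` would be the more natural choice than `len_wait + 2*len_sequence` in the model above; the latter merely over-allocates embedding rows and does not affect correctness. Second, a quick qualitative check of the trained model (a sketch reusing the helpers defined above): ###Code
# decode one fresh example and compare the recalled sequence with the target
x_one, y_one = data(len_wait, 1, len_sequence, num_symbols)
print(y_one[0, -len_sequence:])                    # true sequence
print(model.prediction(x_one)[0, -len_sequence:])  # sequence recalled by the RNN
###Output _____no_output_____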
modeling-perchlorate-reduction/modeling_perchlorate_reduction.ipynb
###Markdown Modeling a theoretical co-culture of perchlorate-reducing bacteria and chlorate-reducing bacteria This Jupyter Notebook is a supplement to the manuscript Barnum et al. 2019. Reduction of perchlorate by bacteria involves the respiration of several high-energy substrates in multiple steps: perchlorate is reduced to chlorate, chlorate is reduced to chlorite, chlorite is converted to chloride and oxygen (without energy conservation), and oxygen is reduced to water. Chlorate accumulates to varying levels during perchlorate reduction because one enzyme (perchlorate reductase, or Pcr) reduces both perchlorate and chlorate (Dudley et al. 2008). Substrate inhibition of perchlorate reductase at high concentrations of perchlorate (>1 mM) may also contribute (Youngblut et al. 2016). Accumulation of chlorite or oxygen has not been observed. In the present study, we found that chlorate-reducing bacteria (CRB), which cannot reduce perchlorate to chlorate, can dominate cultures of perchlorate-reducing bacteria (PRB) in a metabolic interaction based on the exchange of chlorate. To understand this interaction, we present models to simulate the behavior of the interaction *in silico*. We chose a model based on an approximation of Michaelis-Menten kinetics known as Equilibrium Chemistry Approximation (ECA). Other models, a simple Michaelis-Menten kinetics model and a Michaelis-Menten kinetics model including competitive inhibition (Dudley et al. 2008), are included for comparison. The Equilibrium Chemistry Approximation model allows the inclusion of the following features: - Competition of chlorate and perchlorate for Pcr following diffusion of chlorate from the active site - Competition for the reduction of chlorate to chloride by either the perchlorate reducer or chlorate reducer - Competition for acetate as a source of electrons and carbon - Substrate inhibition of Pcr by perchlorate (IGNORED HERE) The code uses the following data: - The half-velocity constant (Ks) of Pcr for chlorate and perchlorate, from Youngblut et al. 2016a - Redox potential of perchlorate/chlorate, from Youngblut et al. 2016b - Redox potential of chlorate/chloride adjusted to reflect the perchlorate reduction pathway, from Youngblut et al. 2016b Scripts required to run this code: - energetics.py -- Equations for calculating yield, stoichiometry - kinetics.py -- Equations for kinetics models - perchlorate_reduction_models.py -- Formulations of models specific to perchlorate reduction using energetics.py and kinetics.py Load Functions ###Code # Custom functions and variables import sys sys.path.append('./scripts') # Location of modules from perchlorate_reduction_models import * # External packages # Integration function from scipy.integrate import odeint # Data wrangling import numpy as np import pandas as pd from pylab import * # Plotting import matplotlib.pyplot as plt import matplotlib.colors as colors import seaborn as sns; sns.set(style='ticks',palette='Set2') # Tufte and Brewer style sns.despine() plt.rcParams['svg.fonttype'] = 'none' # Editable SVG text %matplotlib inline # Functions for resetting values and assisting plotting and data interpretation def reset_parameters(): # All concentrations in molarity (M) # Substrate affinity [PRB,CRB], Ks (M) ks_clo4 = np.array([0.0060,10000])*10**-3 # Ks perchlorate, 0.0019 (Dudley); very high value for CRB because it does not catalyze this reaction ks_clo3 = np.array([0.0074,0.0074])*10**-3 # Ks chlorate 0.0007 M - Dudley #0.159 CRB #ks_clo3 = np.array([0.0074,0.159])*10**-3 # Ks chlorate 0.0007 M - Dudley #0.159 CRB ks_acet = np.array([1,1])*10**-3 # Ks acetate # Substrate inhibition factor, Haldane kinetics [PRB,CRB], Ki (M) # Inhibition only occurs for PRB with perchlorate # Youngblut et al. 2016 http://www.jbc.org/content/291/17/9190.full.pdf ki_clo4 = np.array([10000, 10000])*10**-3 # Ki perchlorate, only low for ClO4- and PRB #ki_clo4 = np.array([7.5, 10000])*10**-3 # Ki perchlorate, only low for ClO4- and PRB ki_clo3 = np.array([10000, 10000])*10**-3 # Ki chlorate ki_acet = np.array([10000, 10000])*10**-3 # Ki acetate # Growth and death rate mu = np.array([0.5,0.5]) # Maximum growth rate m = np.array([0.0,0.0]) # Death rate return ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m def reset_default_concentrations(): # Initial concentrations (M) x0 = [0.00001, # PRB 0.00001, # CRB 0.015, # Acetate 0.01, # Perchlorate 0.0, # Chlorate 0.0, # Cum. chlorate to PRB 0.0] # Cum. chlorate to CRB return x0 # Format a dataframe from ODE output def odeint_to_dataframe(state): df = pd.DataFrame(state) df.columns = values[0:7] # "PRB","CRB","C2H3O2-","ClO4-","ClO3-", "ClO3- to PRB", "ClO3- to CRB" df[time] = t_span # Time df["CRB"] = df["CRB"] * 113 # 113 g / mol biomass (C5H7O2N) df["PRB"] = df["PRB"] * 113 # 113 g / mol biomass (C5H7O2N) df["CRB/PRB"] = df["CRB"] / df["PRB"] # Ratio df["Total Cells"] = df["CRB"] + df["PRB"] # Sum df["CRB Growth Rate"] = df["CRB"].diff() / df[time].diff() # Ratio df["PRB Growth Rate"] = df["PRB"].diff() / df[time].diff() # Ratio df["Cumulative fClO3- CRB"] = df["ClO3- to CRB"] / (df["ClO3- to CRB"] + df["ClO3- to PRB"]) df["fClO3- CRB"] = df["ClO3- to CRB"].diff() / (df["ClO3- to CRB"].diff() + df["ClO3- to PRB"].diff()) df["ClO3-:ClO4-"] = df["ClO3-"] / df["ClO4-"] df["ClO4-:ClO3-"] = df["ClO4-"] / df["ClO3-"] return df # Summarize different attributes of the dataframe def dataframe_statistics(df): statistics = {"Max. ClO3-" : max(df['ClO3-']), "Max. ClO4-" : max(df['ClO4-']), "Max. CRB" : max(df['CRB']), "Max. PRB" : max(df['PRB']), "Max. Total Cells" : max(df["Total Cells"]), "Max.
CRB/PRB" : max(df["CRB/PRB"]), "% ClO3- to CRB" : 100 * max(df["ClO3- to CRB"] / (x0[id_clo4] + x0[id_clo3])), "f:CRB/PRB" : df["CRB/PRB"].tolist()[-1], "f:CRB/PRB / i:CRB/PRB" : df["CRB/PRB"].tolist()[-1] / df["CRB/PRB"].tolist()[0], "Max. ClO4- Reduction Rate (M/h)" : max(abs(df['ClO4-'].diff().fillna(0))), } return statistics def plot_growth_curves(df,plot_title='',save_to_file=None): # Plot values_to_plot = ["PRB","CRB","C2H3O2-","ClO4-","ClO3-", "ClO3- to PRB", "ClO3- to CRB", time] colors = ['#D65228','#E69C26','black','#D65228','#E69C26','#D65228','#E69C26'] linestyles = ['-','-',':',':',':','--','--'] fig, (ax0,ax2,ax3,ax4) = plt.subplots(4, 1, figsize=(6,9), gridspec_kw = {'height_ratios':[10,4,4,4]}) ax0.set_title(plot_title, size=13) ax1 = ax0.twinx() ax=ax0 ax.set_ylabel('Biomass (g/L)') df.loc[:,["PRB","CRB", time]].plot(ax=ax,x=time, color=['#D65228','#E69C26'], style=['-','-']) ax.legend(loc='center right') ax=ax1 ax.set_ylabel('Concentration (M)') df.loc[:,["C2H3O2-","ClO4-","ClO3-", time]].plot(ax=ax,x=time, color=['black','black','black'], style=[':','-','--']) ax.legend(loc='lower right') ax=ax2 df.loc[:,[time,"ClO3-:ClO4-"]].plot(ax=ax,x=time,color='black',legend=None) ax.set_ylabel("[ClO3-] / [ClO4-]") ax=ax3 df.loc[:,[time,"fClO3- CRB"]].plot(ax=ax,x=time,color='black',legend=None) ax.set_ylabel("Fraction ClO3- to CRB") ax.set_ylim([0,1]) ax=ax4 df.loc[:,[time,"CRB Growth Rate","PRB Growth Rate"]].plot(ax=ax,x=time,color=['#E69C26','#D65228'],legend=None) ax.set_ylabel("Growth Rate (g L-1 h-1)") # Formatting for ax in (ax0,ax1,ax2,ax3,ax4): ax.spines["top"].set_visible(False) ax.xaxis.set_ticks_position('bottom') for ax in (ax2,ax3,ax4): ax.spines["right"].set_visible(False) ax.yaxis.set_ticks_position('left') for ax in (ax2,ax3): ax.set_xlabel("") ax.xaxis.set_ticks_position('none') for xlabel_i in ax.axes.get_xticklabels(): xlabel_i.set_visible(False) xlabel_i.set_fontsize(0.0) if save_to_file == None: pass else: plt.savefig('./data/' + save_to_file) return plt.show() ###Output _____no_output_____ ###Markdown Model Input ###Code # Energetics of energy = pd.DataFrame({"Electron acceptor half-reaction" : ["Gr", "A", "fs", "fe"], "O2/H2O": energy_to_fractions(-78.72), "NO3-/N2": energy_to_fractions(-72.20), "SO42-/HS-": energy_to_fractions(20.85), "CO2/CH4": energy_to_fractions(23.53), "ClO4-/ClO3-" : energy_to_fractions(redox_to_Ga(ne_perc,E_perc)), "ClO3-/Cl-": energy_to_fractions(redox_to_Ga(ne_chlor,E_chlor)), }) energy = energy.set_index("Electron acceptor half-reaction") energy = energy.transpose() # save to CSV energy.to_csv('./data/energetics.csv') energy.sort_values(by='A') ###Output _____no_output_____ ###Markdown Above Table: Energetic properties of different electron acceptors. Gr, energy per equiv. oxidized for energy production; A, equivalents of donor used for energy per cells formed; fs, fraction of donor used for cell synthesis; fe, fraction of donor used for energy. 
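###Markdown A quick consistency check on the table (a sketch): fs and fe partition the electron donor between cell synthesis and energy generation, so they should sum to one for every acceptor: ###Code
# fs + fe should be 1 for each electron acceptor half-reaction
(energy['fs'] + energy['fe']).round(6)
###Output _____no_output_____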
###Code df = pd.DataFrame({"Population" : ['Perchlorate-reducing bacteria (PRB)', 'Chlorate-reducing bacteria (CRB)'], "Ks ClO4- (M)" : ks_clo4, "Ks ClO3- (M)" : ks_clo3, "Ks C2H3O2- (M)" : ks_acet, "Ki ClO4- (M)" : ki_clo4, "Ki ClO3- (M)" : ki_clo3, "Ki C2H3O2- (M)" : ki_acet, "Maximum growth rate (h^-1)" : mu, "Death rate (h^-1)" : m, "Yield coefficient ClO4- to ClO3-" : ypc, "Yield coefficient ClO3- to Cl-" : yc, "Stoichiometric ratio ClO4- to ClO3-" : ypa, "Stoichiometric ratio ClO3- to Cl-" : yca, }) df = df.set_index("Population") df = df.transpose() # save to CSV df.to_csv('./data/model-input.csv') df ###Output _____no_output_____ ###Markdown Above Table: Model parameters for each population

Model Simulations

Equilibrium Chemistry Approximation (ECA) kinetics
The Equilibrium Chemistry Approximation (ECA) of Michaelis-Menten kinetics includes competitive inhibition for Pcr and competition for substrates.

PRB Only ###Code ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m = reset_parameters() # Initial concentrations (M), with no CRB x0 = reset_default_concentrations() x0[id_crb] = 0 # Time steps t_end = 400; # end time (hours) dt = 0.1; # time step (hours) t_span = np.arange(0,t_end,dt) # Run ECA kinetics model and plot state = odeint(eca_kinetics, x0, t_span) df = odeint_to_dataframe(state) plot_growth_curves(df,save_to_file='eca-prb-only.png') ###Output _____no_output_____ ###Markdown PRB + CRB ###Code # CRB concentration = PRB concentration x0[id_crb] = x0[id_prb] state = odeint(eca_kinetics, x0, t_span) df = odeint_to_dataframe(state) plot_growth_curves(df,save_to_file='eca-prb-and-crb.png') ###Output _____no_output_____ ###Markdown Results

*Without* chlorate-reducing bacteria:
- Chlorate (dashed line) accumulates during perchlorate reduction
- Growth rate of the perchlorate-reducing population peaks when chlorate reduction begins

*With* chlorate-reducing bacteria present:
- Chlorate concentration decreases
- Chlorate-reducing bacteria dominate the culture
- Chlorate-reducing bacteria consume nearly all chlorate while the chlorate fraction is low
- Chlorate-reducing bacteria have a higher growth rate throughout growth

Conclusions
- Chlorate-reducing bacteria utilize chlorate when perchlorate-reducing bacteria cannot: when the concentration of chlorate relative to perchlorate is low
- Because chlorate reduction to chloride has a higher total yield than perchlorate reduction to chlorate, chlorate-reducing bacteria have a higher growth rate
- The consumption of chlorate by chlorate-reducing bacteria maintains the low chlorate:perchlorate ratio conducive to their success

Other Model Simulations for Comparison

Michaelis-Menten (MM) kinetics
Simple model for growth limited by substrate concentrations

PRB Only ###Code # No CRB x0[id_crb] = 0 # Time steps t_end = 150; # end time (hours) dt = 0.1; # time step (hours) t_span = np.arange(0,t_end,dt) # Run Michaelis-Menten kinetics and plot simulation state = odeint(mm_kinetics, x0, t_span) df = odeint_to_dataframe(state) plot_growth_curves(df,save_to_file='mm-prb-only.png') ###Output _____no_output_____ ###Markdown PRB + CRB ###Code # CRB concentration = PRB concentration x0[id_crb] = x0[id_prb] # Run Michaelis-Menten kinetics and plot simulation state = odeint(mm_kinetics, x0, t_span) df = odeint_to_dataframe(state) plot_growth_curves(df,save_to_file='mm-prb-and-crb.png') ###Output _____no_output_____ ###Markdown Michaelis-Menten kinetics with competitive inhibition (CI) (Dudley et al. 2008)
Accounts for the consumption of perchlorate and chlorate by the same cell

PRB Only ###Code # Initial concentrations (M) x0[id_crb] = 0 # Time steps t_end = 300; # end time (hours) dt = 1; # time step (hours) t_span = np.arange(0,t_end,dt) # Run competitive inhibition model and plot simulation state = odeint(ci_kinetics, x0, t_span) df = odeint_to_dataframe(state) plot_growth_curves(df,save_to_file='ci-prb-only.png') ###Output _____no_output_____ ###Markdown PRB + CRB ###Code # CRB concentration = PRB concentration x0[id_crb] = x0[id_prb] state = odeint(ci_kinetics, x0, t_span) df = odeint_to_dataframe(state) plot_growth_curves(df, save_to_file='ci-prb-and-crb.png') ###Output _____no_output_____ ###Markdown Results
- Michaelis-Menten kinetics with competitive inhibition of Pcr by chlorate produces similar results to the ECA kinetics model and live co-cultures
- The model without competitive inhibition of Pcr by chlorate shows little chlorate accumulation and much less CRB growth

Conclusions
- Competitive inhibition is an important component of the model to recapitulate experimental behavior

Effect of varying initial concentrations on the interaction ###Code def correlate_two_z_variables(id_N1,id_N2,z1_variable,z2_variable,save_to_file=None): import matplotlib.colors as colors fig, (ax0,ax1,ax2) = plt.subplots(1,3, figsize=(15,4)) Z1 = np.array(x_range) Z2 = np.array(x_range) z_variable = [z1_variable, z2_variable] z_scale = [z1_scale,z2_scale] Z = [Z1,Z2] axes = [ax0,ax1] color_map = [z1_color_map,z2_color_map] for N in [0,1]: # Initialize X-, Y-, and Z- dimensions X = np.array(x_range) Y = np.array(x_range) for y in y_range: output = [] # Plot by varying concentration for x in x_range: # Initial values x0 = reset_default_concentrations() # Replace one initial condition with x, a varying value x0[id_N1] = x x0[id_N2] = y # Calculate from each initial condition state = odeint(eca_kinetics, x0, t_span) df = odeint_to_dataframe(state) statistics = dataframe_statistics(df) output.append(statistics[z_variable[N]]) x_row = x_range X = np.vstack((X,x_row)) y_row = np.array([y]*len(x_range)) Y = np.vstack((Y,y_row)) z_row = output Z[N] = np.vstack((Z[N],z_row)) X = X[1:] # X-dimension Y = Y[1:] # Y-dimension #Z[N] = Z[N][1:] # Z-dimension Z[N] = Z[N][1:-1, :-1] # Z-dimension within X-Y bounds # Plot each heatmap # Normalization cmap = plt.get_cmap(color_map[N]) levels = MaxNLocator(nbins=1000).tick_values(Z[N].min(), Z[N].max()) if z_scale[N] == 'log': # https://matplotlib.org/users/colormapnorms.html if (z_variable[N] == 'f:CRB/PRB') | (z_variable[N] == "f:CRB/PRB / i:CRB/PRB"): z_bound = np.max([1/Z[N].min(),Z[N].max()]) norm = colors.LogNorm(vmin=1/z_bound, vmax=z_bound) else: norm = colors.LogNorm(vmin=Z[N].min(), vmax=Z[N].max()) else: # https://matplotlib.org/api/_as_gen/matplotlib.colors.BoundaryNorm.html norm = colors.BoundaryNorm(levels, ncolors=cmap.N, clip=True) im = axes[N].pcolormesh(X, Y, Z[N], cmap=cmap, norm=norm) fig.colorbar(im, ax=axes[N]) axes[N].set_xscale(x_scale, basex=2) axes[N].set_yscale(y_scale, basey=2) axes[N].set_title(z_variable[N]) axes[N].set_xlabel(values[id_N1]) axes[N].set_ylabel(values[id_N2]) plt.tight_layout() # Plot correlation Z1 = Z[0] Z2 = Z[1] ax2.scatter(x=Z1,y=Z2, s=20,
facecolor="None", edgecolors='black', linewidths=1,) ax2.set_xscale(z1_scale) ax2.set_yscale(z2_scale) ax2.set_xlim([Z1.min(),Z1.max()]) ax2.set_ylim([Z2.min(),Z2.max()]) ax2.set_xlabel(z1_variable) ax2.set_ylabel(z2_variable) ax2.set_title("Variable "+values[x_variable]+" and "+values[y_variable]) if save_to_file is not None: plt.savefig('./data/' + save_to_file) return plt.show() # Reset values ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m = reset_parameters() x0 = reset_default_concentrations() # Longer time period to capture all growth t_end = 5000; # end time (hours) dt = 5; # time step (hours) t_span = np.arange(0,t_end,dt) # Z1 (X): final CRB/PRB ratio z1_variable = "f:CRB/PRB" z1_scale = 'log' z1_color_map = 'RdBu' # Z2 (Y): percent of chlorate consumed by CRB z2_variable = "% ClO3- to CRB" z2_scale = 'linear' z2_color_map = 'binary' # Y: PRB concentration steps y_variable = id_prb y_base2_min = -23 y_base2_max = -10 y_base2_steps = 1 + y_base2_max - y_base2_min y_range = np.logspace(start=y_base2_min,stop=y_base2_max,base=2,num=y_base2_steps) y_scale = 'log' # X: Perchlorate concentration steps x_variable = id_clo4 x_base2_min = -20 x_base2_max = -4 x_base2_steps = 1 + x_base2_max - x_base2_min x_range = np.logspace(start=x_base2_min,stop=x_base2_max,base=2,num=x_base2_steps) x_scale = 'log' correlate_two_z_variables(x_variable,y_variable,z1_variable,z2_variable,save_to_file="corr-prb-perc-ratio-chlor.svg") # Reset values ks_clo4, ks_clo3, ks_acet, ki_clo4, ki_clo3, ki_acet, mu, m = reset_parameters() x0 = reset_default_concentrations() # Z1 (X): final CRB/PRB ratio z1_variable = "f:CRB/PRB" z1_scale = 'log' z1_color_map = 'RdBu' # Z2 (Y): percent of chlorate consumed by CRB z2_variable = "% ClO3- to CRB" z2_scale = 'linear' z2_color_map = 'binary' # Y: PRB y_variable = id_prb y_base2_min = -23 y_base2_max = -10 y_base2_steps = 1 + y_base2_max - y_base2_min y_range = np.logspace(start=y_base2_min,stop=y_base2_max,base=2,num=y_base2_steps) y_scale = 'log' # X: CRB x_variable = id_crb x_base2_min = -23 x_base2_max = -10 x_base2_steps = 1 + x_base2_max - x_base2_min x_range = np.logspace(start=x_base2_min,stop=x_base2_max,base=2,num=x_base2_steps) x_scale = 'log' correlate_two_z_variables(x_variable,y_variable,z1_variable,z2_variable, "corr-prb-crb-ratio-chlor.svg") print('Other z-dimensions available to plot:') statistics = dataframe_statistics(df) for stat in statistics.keys(): print(stat, "\t") ###Output Other z-dimensions available to plot: Max. ClO3- Max. ClO4- Max. CRB Max. PRB Max. Total Cells Max. CRB/PRB % ClO3- to CRB f:CRB/PRB f:CRB/PRB / i:CRB/PRB Max. ClO4- Reduction Rate (M/h)
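###Markdown Any of the statistics listed above can be swapped in as a z-dimension. As a small sketch (reusing the PRB/CRB ranges defined in the previous cell), the maximum perchlorate reduction rate can be mapped alongside the final CRB/PRB ratio: ###Code
# Sketch: map a different statistic from dataframe_statistics
z2_variable = "Max. ClO4- Reduction Rate (M/h)"
z2_scale = 'linear'
z2_color_map = 'binary'
correlate_two_z_variables(x_variable, y_variable, z1_variable, z2_variable)
###Output _____no_output_____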
jupyter_notebooks/notebooks/NB9_CVIII-randomforests_ising.ipynb
###Markdown Notebook 9: Using Random Forests to classify phases in the Ising Model

Learning Goal
The goal of this notebook is to show how one can employ ensemble methods such as Random Forests to classify the states of the 2D Ising model according to their phases. We discuss concepts like decision trees, extreme decision trees, and out-of-bag error. The notebook also introduces the powerful scikit-learn `Ensemble` class.

Setting up the problem
The Hamiltonian for the classical Ising model is given by$$ H = -J\sum_{\langle ij\rangle}S_{i}S_j,\qquad \qquad S_j\in\{\pm 1\} $$where the lattice site indices $i,j$ run over all nearest neighbors of a 2D square lattice of side $L$, and $J$ is some arbitrary interaction energy scale. We adopt periodic boundary conditions. Onsager proved that this model undergoes a phase transition in the thermodynamic limit from an ordered ferromagnet with all spins aligned to a disordered phase at the critical temperature $T_c/J=2/\log(1+\sqrt{2})\approx 2.26$. For any finite system size, this critical point is expanded to a critical region around $T_c$. We will use the same basic idea as we did for logistic regression. An interesting question to ask is whether one can train a statistical model to distinguish between the two phases of the Ising model. In other words, given an Ising state, we would like to classify whether it belongs to the ordered or the disordered phase, without any additional information other than the spin configuration itself. This categorical machine learning problem is well suited for ensemble methods and in particular Random Forests. To this end, we consider the 2D Ising model on a $40\times 40$ square lattice, and use Monte-Carlo (MC) sampling to prepare $10^4$ states at every fixed temperature $T$ out of a pre-defined set. Using Onsager's criterion, we can assign a label to each state according to its phase: $0$ if the state is disordered, and $1$ if it is ordered. It is well-known that, near the critical temperature $T_c$, the ferromagnetic correlation length diverges which, among others, leads to a critical slowing down of the MC algorithm. Therefore, we expect identifying the phases to be harder in the critical region. With this in mind, consider the following three types of states: ordered ($T/J<2.0$), near-critical ($2.0 \leq T/J \leq 2.5$), and disordered ($T/J>2.5$). We use both ordered and disordered states to train the random forest and, once the supervised training procedure is complete, we shall evaluate the performance of our classifier on unseen ordered, disordered and critical states. A link to the Ising dataset can be found at [https://physics.bu.edu/~pankajm/MLnotebooks.html](https://physics.bu.edu/~pankajm/MLnotebooks.html).
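###Markdown As a quick numerical check of Onsager's critical temperature quoted above: ###Code
import numpy as np
# exact critical temperature of the 2D square-lattice Ising model
T_c_exact = 2.0 / np.log(1.0 + np.sqrt(2.0))
print(f'T_c/J = {T_c_exact:.4f}')  # ~2.269
###Output _____no_output_____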
###Code import numpy as np np.random.seed() # shuffle random seed generator # Ising model parameters L=40 # linear system size J=-1.0 # Ising interaction T=np.linspace(0.25,4.0,16) # set of temperatures T_c=2.26 # Onsager critical temperature in the TD limit import pickle, os from urllib.request import urlopen # path to data directory (for testing) #path_to_data=os.path.expanduser('~')+'/Dropbox/MachineLearningReview/Datasets/isingMC/' url_main = 'https://physics.bu.edu/~pankajm/ML-Review-Datasets/isingMC/'; ######### LOAD DATA # The data consists of 16*10000 samples taken in T=np.arange(0.25,4.0001,0.25): data_file_name = "Ising2DFM_reSample_L40_T=All.pkl" # The labels are obtained from the following file: label_file_name = "Ising2DFM_reSample_L40_T=All_labels.pkl" #DATA data = pickle.load(urlopen(url_main + data_file_name)) # pickle reads the file and returns the Python object (1D array, compressed bits) data = np.unpackbits(data).reshape(-1, 1600) # Decompress array and reshape for convenience data=data.astype('int') data[np.where(data==0)]=-1 # map 0 state to -1 (Ising variable can take values +/-1) #LABELS (convention is 1 for ordered states and 0 for disordered states) labels = pickle.load(urlopen(url_main + label_file_name)) # pickle reads the file and returns the Python object (here just a 1D array with the binary labels) ###### define ML parameters from sklearn.model_selection import train_test_split train_to_test_ratio=0.8 # training samples # divide data into ordered, critical and disordered X_ordered=data[:70000,:] Y_ordered=labels[:70000] X_critical=data[70000:100000,:] Y_critical=labels[70000:100000] X_disordered=data[100000:,:] Y_disordered=labels[100000:] del data,labels # define training and test data sets X=np.concatenate((X_ordered,X_disordered)) Y=np.concatenate((Y_ordered,Y_disordered)) # pick random data points from ordered and disordered states # to create the training and test sets X_train,X_test,Y_train,Y_test=train_test_split(X,Y,train_size=train_to_test_ratio,test_size=1.0-train_to_test_ratio) print('X_train shape:', X_train.shape) print('Y_train shape:', Y_train.shape) print() print(X_train.shape[0], 'train samples') print(X_critical.shape[0], 'critical samples') print(X_test.shape[0], 'test samples') ##### plot a few Ising states %matplotlib inline #import ml_style as style import matplotlib as mpl import matplotlib.pyplot as plt #mpl.rcParams.update(style.style) from mpl_toolkits.axes_grid1 import make_axes_locatable # set colourbar map cmap_args=dict(cmap='plasma_r') # plot states fig, axarr = plt.subplots(nrows=1, ncols=3) axarr[0].imshow(X_ordered[20001].reshape(L,L),**cmap_args) #axarr[0].set_title('$\\mathrm{ordered\\ phase}$',fontsize=16) axarr[0].set_title('ordered phase',fontsize=16) axarr[0].tick_params(labelsize=16) axarr[1].imshow(X_critical[10001].reshape(L,L),**cmap_args) #axarr[1].set_title('$\\mathrm{critical\\ region}$',fontsize=16) axarr[1].set_title('critical region',fontsize=16) axarr[1].tick_params(labelsize=16) im=axarr[2].imshow(X_disordered[50001].reshape(L,L),**cmap_args) #axarr[2].set_title('$\\mathrm{disordered\\ phase}$',fontsize=16) axarr[2].set_title('disordered phase',fontsize=16) axarr[2].tick_params(labelsize=16) fig.subplots_adjust(right=2.0) plt.show() ###Output _____no_output_____ ###Markdown Random Forests

**Hyperparameters**

We start by training with Random Forests. As discussed in Sec. VIII of the review, Random Forests are ensemble models. Here we will use the scikit-learn implementation of random forests.
There are two main hyperparameters that will be important in practice for the performance of the algorithm and the degree to which it overfits/underfits: the number of estimators in the ensemble and the depth of the trees used. The former is controlled by the parameter `n_estimators` whereas the latter (the complexity of the trees used) can be controlled in many distinct ways (`min_samples_split`, `min_samples_leaf`, `min_impurity_decrease`, etc). For our simple dataset, it does not really make much difference which one of these we use. We will just use the `min_samples_split` parameter, which sets the minimum number of samples a node of the classification tree must contain before it can be split. The bigger this number, the more coarse our trees and data partitioning.

In the code below, we will just consider extremely fine trees (`min_samples_split=2`) or extremely coarse trees (`min_samples_split=10000`). As we will see, both of these tree complexities are sufficient to distinguish the ordered from the disordered samples. The reason for this is that the ordered and disordered phases are distinguished by the magnetization order parameter which is an equally weighted sum of all features. However, if we want to train deep in these simple phases, and then use our algorithm to distinguish critical samples, it is crucial that we use more complex trees even though the performance on the disordered and ordered phases is indistinguishable for coarse and complex trees.

**Out of Bag (OOB) Estimates**

For more complicated datasets, how can we choose the right hyperparameters? We can actually make use of one of the most important and interesting features of ensemble methods that employ Bagging: out-of-bag (OOB) estimates. Whenever we bag data, since we are drawing samples with replacement, we can ask how well our classifiers do on data points that are *not used* in the training. This is the out-of-bag prediction error and plays a similar role to cross-validation error in other ML methods. Since this is the best proxy for out-of-sample prediction, we choose hyperparameters to minimize the out-of-bag error.
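###Markdown To see why these OOB estimates come "for free": a bootstrap sample of $N$ points drawn with replacement leaves any given point out with probability $(1 - 1/N)^N \approx e^{-1} \approx 0.37$, so each tree has roughly a third of the data held out for validation. A quick numerical sketch: ###Code
# fraction of points missed by a single bootstrap sample
import numpy as np
N = 10000
draws = np.random.randint(0, N, size=N)  # N draws with replacement
frac_oob = 1.0 - np.unique(draws).size / N
print(f'out-of-bag fraction: {frac_oob:.3f} (expect ~{np.exp(-1):.3f})')
###Output _____no_output_____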
###Code # Apply Random Forest #This is the random forest classifier from sklearn.ensemble import RandomForestClassifier #This is the extreme randomized trees from sklearn.ensemble import ExtraTreesClassifier #import time to see how performance depends on run time import time import warnings # comment out the next line to turn warnings back on warnings.filterwarnings("ignore") # We will check ensemble sizes in this range min_estimators = 10 max_estimators = 101 classifier = RandomForestClassifier # BELOW WE WILL CHANGE for the case of extremely randomized forest n_estimator_range=np.arange(min_estimators, max_estimators, 10) leaf_size_list=[2,10000] m=len(n_estimator_range) n=len(leaf_size_list) #Allocate Arrays for various quantities RFC_OOB_accuracy=np.zeros((n,m)) RFC_train_accuracy=np.zeros((n,m)) RFC_test_accuracy=np.zeros((n,m)) RFC_critical_accuracy=np.zeros((n,m)) run_time=np.zeros((n,m)) print_flag=True for i, leaf_size in enumerate(leaf_size_list): # Define Random Forest Classifier myRF_clf = classifier( n_estimators=min_estimators, max_depth=None, min_samples_split=leaf_size, # minimum number of samples required to split a node oob_score=True, random_state=0, warm_start=True # this ensures that you add estimators without retraining everything ) for j, n_estimator in enumerate(n_estimator_range): print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size)) start_time = time.time() myRF_clf.set_params(n_estimators=n_estimator) myRF_clf.fit(X_train, Y_train) run_time[i,j] = time.time() - start_time # check accuracy RFC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train) RFC_OOB_accuracy[i,j]=myRF_clf.oob_score_ RFC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test) RFC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical) if print_flag: result = (run_time[i,j], RFC_train_accuracy[i,j], RFC_OOB_accuracy[i,j], RFC_test_accuracy[i,j], RFC_critical_accuracy[i,j]) print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score")) print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result)) plt.figure() plt.plot(n_estimator_range,RFC_train_accuracy[1],'--b^',label='Train (coarse)') plt.plot(n_estimator_range,RFC_test_accuracy[1],'--r^',label='Test (coarse)') plt.plot(n_estimator_range,RFC_critical_accuracy[1],'--g^',label='Critical (coarse)') plt.plot(n_estimator_range,RFC_train_accuracy[0],'o-b',label='Train (fine)') plt.plot(n_estimator_range,RFC_test_accuracy[0],'o-r',label='Test (fine)') plt.plot(n_estimator_range,RFC_critical_accuracy[0],'o-g',label='Critical (fine)') #plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train') plt.xlabel('$N_\mathrm{estimators}$') plt.ylabel('Accuracy') lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.savefig("Ising_RF.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight') plt.show() plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse') plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine') plt.xlabel('$N_\mathrm{estimators}$') plt.ylabel('Run time (s)') plt.legend(loc=2) #plt.savefig("Ising_RF_Runtime.pdf") plt.show() ###Output _____no_output_____ ###Markdown Extremely Randomized Trees

As discussed in the main text, the effectiveness of ensemble methods generally increases as the correlations between members of the ensemble decrease. This idea has been leveraged to make methods that introduce even more randomness into the ensemble by randomly choosing features to split on as well as randomly choosing thresholds to split on.
See Section 4.3 of Louppe 2014 [arxiv:1407.7502](https://arxiv.org/pdf/1407.7502.pdf). Here we will make use of the scikit-learn function `ExtraTreesClassifier` and we will just rerun what we did above. Since there is extra randomization compared to random forests, one can imagine that the performance on the critical samples will be much worse. Indeed, this is the case. ###Code #This is the extreme randomized trees from sklearn.ensemble import ExtraTreesClassifier #import time to see how performance depends on run time import time import warnings # comment out the next line to turn warnings back on warnings.filterwarnings("ignore") # We will check ensemble sizes in this range min_estimators = 10 max_estimators = 101 classifier = ExtraTreesClassifier # only changing this n_estimator_range=np.arange(min_estimators, max_estimators, 10) leaf_size_list=[2,10000] m=len(n_estimator_range) n=len(leaf_size_list) #Allocate Arrays for various quantities ETC_OOB_accuracy=np.zeros((n,m)) ETC_train_accuracy=np.zeros((n,m)) ETC_test_accuracy=np.zeros((n,m)) ETC_critical_accuracy=np.zeros((n,m)) run_time=np.zeros((n,m)) print_flag=True for i, leaf_size in enumerate(leaf_size_list): # Define Extra Trees Classifier myRF_clf = classifier( n_estimators=min_estimators, max_depth=None, min_samples_split=leaf_size, # minimum number of samples required to split a node oob_score=True, bootstrap=True, random_state=0, warm_start=True # this ensures that you add estimators without retraining everything ) for j, n_estimator in enumerate(n_estimator_range): print('n_estimators: %i, leaf_size: %i'%(n_estimator,leaf_size)) start_time = time.time() myRF_clf.set_params(n_estimators=n_estimator) myRF_clf.fit(X_train, Y_train) run_time[i,j] = time.time() - start_time # check accuracy ETC_train_accuracy[i,j]=myRF_clf.score(X_train,Y_train) ETC_OOB_accuracy[i,j]=myRF_clf.oob_score_ ETC_test_accuracy[i,j]=myRF_clf.score(X_test,Y_test) ETC_critical_accuracy[i,j]=myRF_clf.score(X_critical,Y_critical) if print_flag: result = (run_time[i,j], ETC_train_accuracy[i,j], ETC_OOB_accuracy[i,j], ETC_test_accuracy[i,j], ETC_critical_accuracy[i,j]) print('{0:<15}{1:<15}{2:<15}{3:<15}{4:<15}'.format("time (s)","train score", "OOB estimate","test score", "critical score")) print('{0:<15.4f}{1:<15.4f}{2:<15.4f}{3:<15.4f}{4:<15.4f}'.format(*result)) plt.figure() plt.plot(n_estimator_range,ETC_train_accuracy[1],'--b^',label='Train (coarse)') plt.plot(n_estimator_range,ETC_test_accuracy[1],'--r^',label='Test (coarse)') plt.plot(n_estimator_range,ETC_critical_accuracy[1],'--g^',label='Critical (coarse)') plt.plot(n_estimator_range,ETC_train_accuracy[0],'o-b',label='Train (fine)') plt.plot(n_estimator_range,ETC_test_accuracy[0],'o-r',label='Test (fine)') plt.plot(n_estimator_range,ETC_critical_accuracy[0],'o-g',label='Critical (fine)') #plt.semilogx(lmbdas,train_accuracy_SGD,'*--b',label='SGD train') plt.xlabel('$N_\mathrm{estimators}$') plt.ylabel('Accuracy') lgd=plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig("Ising_RF.pdf",bbox_extra_artists=(lgd,), bbox_inches='tight') plt.show() plt.plot(n_estimator_range, run_time[1], '--k^',label='Coarse') plt.plot(n_estimator_range, run_time[0], 'o-k',label='Fine') plt.xlabel('$N_\mathrm{estimators}$') plt.ylabel('Run time (s)') plt.legend(loc=2) plt.savefig("Ising_ETC_Runtime.pdf") plt.show() ###Output _____no_output_____ ###Markdown Notebook 9: Using Random Forests to classify phases in the Ising Model Learning GoalThe goal of this notebook is to show how one can employ ensemble methods such as Random Forests to classify the states of the 2D Ising model according to their phases. We discuss concepts like decision trees, extreme decision trees, and out-of-bag error. The notebook also introduces the powerful scikit-learn `Ensemble` class. Setting up the problemThe Hamiltonian for the classical Ising model is given by$$ H = -J\sum_{\langle ij\rangle}S_{i}S_j,\qquad \qquad S_j\in\{\pm 1\} $$where the lattice site indices $i,j$ run over all nearest neighbors of a 2D square lattice of side $L$, and $J$ is some arbitrary interaction energy scale. We adopt periodic boundary conditions. Onsager proved that this model undergoes a phase transition in the thermodynamic limit from an ordered ferromagnet with all spins aligned to a disordered phase at the critical temperature $T_c/J=1/\log(1+\sqrt{2})\approx 2.26$. For any finite system size, this critical point is expanded to a critical region around $T_c$.We will use the same basic idea as we did for logistic regression. An interesting question to ask is whether one can train a statistical model to distinguish between the two phases of the Ising model. In other words, given an Ising state, we would like to classify whether it belongs to the ordered or the disordered phase, without any additional information other than the spin configuration itself. This categorical machine learning problem is well suited for ensemble methods and in particular Random Forests.To this end, we consider the 2D Ising model on a $40\times 40$ square lattice, and use Monte-Carlo (MC) sampling to prepare $10^4$ states at every fixed temperature $T$ out of a pre-defined set. Using Onsager's criterion, we can assign a label to each state according to its phase: $0$ if the state is disordered, and $1$ if it is ordered. It is well-known that, near the critical temperature $T_c$, the ferromagnetic correlation length diverges which, among others, leads to a critical slowing down of the MC algorithm. Therefore, we expect identifying the phases to be harder in the critical region. With this in mind, consider the following three types of states: ordered ($T/J2.5$). We use both ordered and disordered states to train the random forest and, once the supervised training procedure is complete, we shall evaluate the performance of our classifier on unseen ordered, disordered and critical states. A link to the Ising dataset can be found at [https://physics.bu.edu/~pankajm/MLnotebooks.html](https://physics.bu.edu/~pankajm/MLnotebooks.html). 
book/thermochemistry/cea_cantera.ipynb
###Markdown Implementing CEA calculations using Cantera ###Code # this line renders figures inline in Jupyter notebooks %matplotlib inline from matplotlib import pyplot as plt import numpy as np import cantera as ct from pint import UnitRegistry ureg = UnitRegistry() Q_ = ureg.Quantity # for convenience: def to_si(quant): '''Converts a Pint Quantity to magnitude at base SI units. ''' return quant.to_base_units().magnitude # these lines are only for helping improve the display from IPython.display import set_matplotlib_formats set_matplotlib_formats('pdf', 'png') plt.rcParams['figure.dpi']= 150 plt.rcParams['savefig.dpi'] = 150 ###Output _____no_output_____ ###Markdown [CEA](https://www1.grc.nasa.gov/research-and-engineering/ceaweb/) (Chemical Equilibrium with Applications) is a classic NASA software tool developed for analyzing combustion and rocket propulsion problems. It was written in Fortran, but is available to run via a [web interface](https://cearun.grc.nasa.gov/index.html). Given rocket propellants, CEA can not only determine the combustion chamber equilibrium composition and temperature, but also calculate important rocket performance parameters. Although CEA is extremely useful, it cannot (easily) be used within Python. Plus, we might want to integrate these equilibrium calculations into larger Python-based analyses. [Cantera](https://cantera.org/) is a modern software library for solving problems in chemical kinetics, thermodynamics, and transport that offers a Python interface. Cantera natively supports phase and chemical equilibrium solvers. In particular, it can simulate finite-rate chemical reactions. This article examines how we can use Cantera and Python to perform the calculations of CEA.

Fixed temperature and pressure

Given a fixed temperature and pressure, determine the equilibrium composition of chemical species. This problem is relevant to an isothermal process, or where temperature is a design variable, such as in nuclear thermal or electrothermal rockets. For example, say we have gaseous hydrazine (N2H4) as a propellant, with a chamber temperature of 5000 K and pressure of 50 psia.
For this system, determine the equilibrium composition. In [CEA](https://cearun.grc.nasa.gov/), this is a `tp` problem, or fixed temperature and pressure problem. We should expect that, at such high temperatures, the equilibrium state will have mostly one- and two-atom molecules, based on the elements present: N2, H2, H, N, and NH. The CEA plaintext input file looks like:
```text
prob tp
 p,psia= 50 t,k= 5000
reac
name N2H4 mol 1.0
output siunits
end
```
and the output is (with the repeated input removed):```text******************************************************************************* NASA-GLENN CHEMICAL EQUILIBRIUM PROGRAM CEA2, FEBRUARY 5, 2004 BY BONNIE MCBRIDE AND SANFORD GORDON REFS: NASA RP-1311, PART I, 1994 AND NASA RP-1311, PART II, 1996 ******************************************************************************* THERMODYNAMIC EQUILIBRIUM PROPERTIES AT ASSIGNED TEMPERATURE AND PRESSURE REACTANT WT FRACTION ENERGY TEMP (SEE NOTE) KJ/KG-MOL K NAME N2H4 1.0000000 0.000 0.000 O/F= 0.00000 %FUEL= 0.000000 R,EQ.RATIO= 0.000000 PHI,EQ.RATIO= 0.000000 THERMODYNAMIC PROPERTIES P, BAR 3.4474 T, K 5000.00 RHO, KG/CU M 5.5368-2 H, KJ/KG 42058.0 U, KJ/KG 35831.8 G, KJ/KG -103744.4 S, KJ/(KG)(K) 29.1605 M, (1/n) 6.677 (dLV/dLP)t -1.04028 (dLV/dLT)p 1.4750 Cp, KJ/(KG)(K) 11.1350 GAMMAs 1.2548 SON VEL,M/SEC 2795.1 MOLE FRACTIONS *H 0.74177 *H2 0.04573 *N 0.00806 *NH 0.00021 *N2 0.20422```So, CEA not only provides the equilibrium composition in terms of mole fraction ($X_i$), but also the mean molecular weight of the mixture $MW$; thermodynamic properties and derivatives: density $\rho$, enthalpy $h$, entropy $s$, $\left(\partial \log V / \partial \log P\right)_T$, $\left(\partial \log V / \partial \log T\right)_P$, specific heat $C_p = \left(\partial h / \partial T\right)_P$, the ratio of specific heats ($\gamma$), and the sonic velocity (i.e., speed of sound) $a$. We can perform the same equilibrium calculation in Cantera, but we need to construct an object that contains the appropriate chemical species. Cantera actually comes with a NASA database of gaseous species thermodynamic models, in the `nasa_gas.cti` file. ###Code # extract all species in the NASA database full_species = {S.name: S for S in ct.Species.listFromFile('nasa_gas.cti')} # extract only the relevant species species = [full_species[S] for S in ( 'N2H4', 'N2', 'H2', 'H', 'N', 'NH' )] gas = ct.Solution(thermo='IdealGas', species=species) temperature = Q_(5000, 'K') pressure = Q_(50, 'psi') gas.TPX = to_si(temperature), to_si(pressure), 'N2H4:1.0' gas.equilibrate('TP') gas() ###Output temperature 5000 K pressure 3.4474e+05 Pa density 0.055346 kg/m^3 mean mol. weight 6.6743 kg/kmol phase of matter gas 1 kg 1 kmol --------------- --------------- enthalpy 4.2088e+07 2.8091e+08 J internal energy 3.586e+07 2.3934e+08 J entropy 29182 1.9477e+05 J/K Gibbs function -1.0382e+08 -6.9294e+08 J heat capacity c_p 3779.4 25225 J/K heat capacity c_v 2533.6 16910 J/K mass frac. Y mole frac. X chem. pot.
/ RT --------------- --------------- --------------- N2 0.8567 0.20411 -30.731 H2 0.013688 0.045315 -24.65 H 0.1121 0.74225 -12.325 N 0.017042 0.0081204 -15.365 NH 0.00046864 0.00020831 -27.691 [ +1 minor] 2.3328e-15 4.8586e-16 ###Markdown Comparing the results from CEA and Cantera, we see very good agreement between (most) thermodynamic properties and the species mole fractions. But, the heat capacity $C_p$ appears **very** different, and if we calculate the specific heat ratio,$$\gamma = \frac{C_p}{C_v} \;,$$we will see it also differs quite substantially: ###Code gamma_ct = gas.cp_mole / gas.cv_mole print(f'Cantera specific heat ratio: {gamma_ct: .4f}') ###Output Cantera specific heat ratio: 1.4917 ###Markdown 🤯 Well, 1.492 is quite different from 1.255, and this would lead to substantially different rocket performance parameters that depend on $\gamma$. So, what's going on? Well, the key lies in examining the actual definition of specific heat, following Gordon and McBride {cite}`cea_analysis`:$$C_p = \left( \frac{\partial h}{\partial T} \right)_P \;.$$In this derivative, while enthalpy and temperature change, and pressure is held constant, what happens to the species composition? We could assume the composition is "frozen" and remains fixed, or that the composition adjusts to a new equilibrium instantaneously. The "equilibrium" specific heat then has two components, a frozen contribution and a reaction contribution:$$\begin{align}C_{p,e} &= C_{p,f} + C_{p,r} \\&= \sum_{j=1}^{N_s} n_j C_{p,j}^{\circ} + \sum_{j=1}^{N_g} n_j \frac{H_j^{\circ}}{T} \left( \frac{\partial \log n_j}{\partial \log T}\right)_P + \sum_{j=N_g+1}^{N_s} \frac{H_j^{\circ}}{T} \left( \frac{\partial n_j}{\partial \log T}\right)_P \;,\end{align}$$where $N_s$ is the number of species and $N_g$ is the number of gas-phase species (so that $N_g + 1$ refers to the first condensed-phase species, if present). But, Cantera defines quantities like specific heat (and other thermodynamic quantities based on derivatives) at fixed composition, meaning Cantera's specific heat is just the frozen contribution $C_{p,f}$. We can obtain the full equilibrium-based value of specific heat, but it requires determining additional thermodynamic derivatives.
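###Markdown Before deriving those derivatives, we can sanity-check the idea numerically: re-equilibrate the gas at slightly perturbed temperatures (at constant pressure) and take a central finite difference of the enthalpy. This is a sketch that reuses the `species` list, `temperature`, and `pressure` defined above: ###Code
# finite-difference estimate of the equilibrium (shifting-composition)
# specific heat: h(T+dT) and h(T-dT) at constant P, each re-equilibrated
gas_fd = ct.Solution(thermo='IdealGas', species=species)
delta_T = 1.0  # K

gas_fd.TPX = to_si(temperature) + delta_T, to_si(pressure), 'N2H4:1.0'
gas_fd.equilibrate('TP')
h_plus = gas_fd.enthalpy_mass

gas_fd.TPX = to_si(temperature) - delta_T, to_si(pressure), 'N2H4:1.0'
gas_fd.equilibrate('TP')
h_minus = gas_fd.enthalpy_mass

cp_equilibrium = (h_plus - h_minus) / (2 * delta_T)
print(f'equilibrium Cp, finite difference: {cp_equilibrium: .1f} J/(K kg)')
# this should land near CEA's equilibrium value (~11135 J/(K kg)),
# not the frozen-composition value (~3779 J/(K kg))
###Output _____no_output_____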
Following Gordon and McBride {cite}`cea_analysis` again, we can obtain this system of linear equations:$$\begin{align}\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{kj} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log T}\right)_P + \sum_{j=N_g+1}^{N_s} a_{ij} \left( \frac{\partial n_j}{\partial \log T}\right)_P + \sum_{j=1}^{N_g} a_{kj} n_j \left( \frac{\partial \log n}{\partial \log T} \right)_P &= -\sum_{j=1}^{N_g} \frac{a_{kj} n_j H_j^{\circ}}{RT} \;, \quad k=1, \ldots, {N_e} \\\sum_{i=1}^{N_e} a_{ij} \left( \frac{\partial \pi_i}{\partial \log T}\right)_P &= - \frac{H_j^{\circ}}{RT} \;, \quad j = N_g + 1, \ldots, N_s \\\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log T}\right)_P &= -\sum_{j=1}^{N_g} \frac{n_j H_j^{\circ}}{RT} \\\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{kj} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log P}\right)_T + \sum_{j=N_g + 1}^{N_s} a_{kj} \left( \frac{\partial n_j}{\partial \log P}\right)_T + \sum_{j=1}^{N_g} a_{ij} n_j \left( \frac{\partial \log n}{\partial \log P}\right)_T &= \sum_{j=1}^{N_g} a_{kj} n_j \;, \quad k=1, \ldots, {N_e} \\\sum_{i=1}^{N_e} a_{ij} \left( \frac{\partial \pi_i}{\partial \log P}\right)_T &= 0 \;, \quad j = N_g + 1, \ldots, N_s \\\sum_{i=1}^{N_e} \sum_{j=1}^{N_g} a_{ij} n_j \left( \frac{\partial \pi_i}{\partial \log P}\right)_T &= \sum_{j=1}^{N_g} n_j \;,\end{align}$$where ${N_e}$ is the number of elements. As a first pass, let's assume that no condensed species are present. (This is fine for conditions in the combustion chamber, but for some systems the rapid expansion in the nozzle may drop below the dew point for some species.)Then, the unknowns in that system of equations are $ \left( \frac{\partial \pi_i}{\partial \log T}\right)_P$,$ \left( \frac{\partial \log n}{\partial \log T}\right)_P$,$ \left( \frac{\partial \pi_i}{\partial \log P}\right)_T$,$ \left( \frac{\partial \log n}{\partial \log P}\right)_T$,with a total of $2 \times N_e + 2$ unknowns. For the current system of N2H4, $N_e = 2$ and thus there are six unknowns.Since this is a linear system of equations, we can solve it using linear algebra, via NumPy's`linalg.solve` function. Let's set up a function to solve this system: ###Code def get_thermo_derivatives(gas): '''Gets thermo derivatives based on shifting equilibrium. 
''' # unknowns for system with no condensed species: # dpi_i_dlogT_P (# elements) # dlogn_dlogT_P # dpi_i_dlogP_T (# elements) # dlogn_dlogP_T # total unknowns: 2*n_elements + 2 num_var = 2 * gas.n_elements + 2 coeff_matrix = np.zeros((num_var, num_var)) right_hand_side = np.zeros(num_var) tot_moles = 1.0 / gas.mean_molecular_weight moles = gas.X * tot_moles condensed = False # indices idx_dpi_dlogT_P = 0 idx_dlogn_dlogT_P = idx_dpi_dlogT_P + gas.n_elements idx_dpi_dlogP_T = idx_dlogn_dlogT_P + 1 idx_dlogn_dlogP_T = idx_dpi_dlogP_T + gas.n_elements # construct matrix of elemental stoichiometric coefficients stoich_coeffs = np.zeros((gas.n_elements, gas.n_species)) for i, elem in enumerate(gas.element_names): for j, sp in enumerate(gas.species_names): stoich_coeffs[i,j] = gas.n_atoms(sp, elem) # equations for derivatives with respect to temperature # first n_elements equations for k in range(gas.n_elements): for i in range(gas.n_elements): coeff_matrix[k,i] = np.sum(stoich_coeffs[k,:] * stoich_coeffs[i,:] * moles) coeff_matrix[k, gas.n_elements] = np.sum(stoich_coeffs[k,:] * moles) right_hand_side[k] = -np.sum(stoich_coeffs[k,:] * moles * gas.standard_enthalpies_RT) # skip equation relevant to condensed species for i in range(gas.n_elements): coeff_matrix[gas.n_elements, i] = np.sum(stoich_coeffs[i, :] * moles) right_hand_side[gas.n_elements] = -np.sum(moles * gas.standard_enthalpies_RT) # equations for derivatives with respect to pressure for k in range(gas.n_elements): for i in range(gas.n_elements): coeff_matrix[gas.n_elements+1+k,gas.n_elements+1+i] = np.sum(stoich_coeffs[k,:] * stoich_coeffs[i,:] * moles) coeff_matrix[gas.n_elements+1+k, 2*gas.n_elements+1] = np.sum(stoich_coeffs[k,:] * moles) right_hand_side[gas.n_elements+1+k] = np.sum(stoich_coeffs[k,:] * moles) for i in range(gas.n_elements): coeff_matrix[2*gas.n_elements+1, gas.n_elements+1+i] = np.sum(stoich_coeffs[i, :] * moles) right_hand_side[2*gas.n_elements+1] = np.sum(moles) derivs = np.linalg.solve(coeff_matrix, right_hand_side) dpi_dlogT_P = derivs[idx_dpi_dlogT_P : idx_dpi_dlogT_P + gas.n_elements] dlogn_dlogT_P = derivs[idx_dlogn_dlogT_P] dpi_dlogP_T = derivs[idx_dpi_dlogP_T] dlogn_dlogP_T = derivs[idx_dlogn_dlogP_T] # dpi_dlogP_T is not used return dpi_dlogT_P, dlogn_dlogT_P, dlogn_dlogP_T ###Output _____no_output_____ ###Markdown Using these derivatives, we can then calculate the specific heat, other relevant derivatives, and the ratio of specific heats:$$\begin{align}\frac{C_{p,e}}{R} &= \sum_{i=1}^{N_e} \left( \sum_{j=1}^{N_g} \frac{a_{ij} n_j H_j^{\circ}}{RT} \right) \left( \frac{\partial \pi_i}{\partial \log T}\right)_P + \sum_{j=N_g+1}^{N_s} \frac{H_j^{\circ}}{RT} \left( \frac{\partial n_j}{\partial \log T}\right)_P \\&+ \left( \sum_{j=1}^{N_g} \frac{n_j H_j^{\circ}}{RT} \right) \left( \frac{\partial \log n}{\partial \log T}\right)_P + \sum_{j=1}^{N_s} \frac{n_j C_{p,j}^{\circ}}{R} + \sum_{j=1}^{N_g} \frac{n_j (H_j^{\circ})^2}{R^2 T^2} \\\left( \frac{\partial \log V}{\partial \log T}\right)_P &= 1 + \left( \frac{\partial \log n}{\partial \log T}\right)_P \\\left( \frac{\partial \log V}{\partial \log P}\right)_T &= -1 + \left( \frac{\partial \log n}{\partial \log P}\right)_T \;.\end{align}$$The ratio of specific heats shows up via the speed of sound:$$\begin{align}a^2 &= \left( \frac{\partial P}{\partial \rho}\right)_s = -\frac{P}{\rho} \left( \frac{\partial \log P}{\partial \log V} \right)_s \\&= n R T \gamma_s\end{align} \;,$$where the ratio of specific heats is$$\gamma_s = \left( \frac{\partial \log P}{\partial 
\log \rho} \right)_s = - \frac{\gamma}{ \left( \frac{\partial \log V}{\partial \log P}\right)_T}$$and $$\gamma \equiv \frac{C_p}{C_v} \;.$$The constant volume specific heat is$$C_v \equiv \left( \frac{\partial u}{\partial T}\right)_V = C_p + \frac{ \frac{PV}{T} \left( \frac{\partial \log V}{\partial \log T}\right)_P^2}{ \left( \frac{\partial \log V}{\partial \log P}\right)_T} \;.$$ ###Code def get_thermo_properties(gas, dpi_dlogT_P, dlogn_dlogT_P, dlogn_dlogP_T): '''Calculates specific heats, volume derivatives, and specific heat ratio. Based on shifting equilibrium for mixtures. ''' tot_moles = 1.0 / gas.mean_molecular_weight moles = gas.X * tot_moles # construct matrix of elemental stoichiometric coefficients stoich_coeffs = np.zeros((gas.n_elements, gas.n_species)) for i, elem in enumerate(gas.element_names): for j, sp in enumerate(gas.species_names): stoich_coeffs[i,j] = gas.n_atoms(sp, elem) spec_heat_p = ct.gas_constant * ( np.sum([dpi_dlogT_P[i] * np.sum(stoich_coeffs[i,:] * moles * gas.standard_enthalpies_RT) for i in range(gas.n_elements) ]) + np.sum(moles * gas.standard_enthalpies_RT) * dlogn_dlogT_P + np.sum(moles * gas.standard_cp_R) + np.sum(moles * gas.standard_enthalpies_RT**2) ) dlogV_dlogT_P = 1 + dlogn_dlogT_P dlogV_dlogP_T = -1 + dlogn_dlogP_T spec_heat_v = ( spec_heat_p + gas.P * gas.v / gas.T * dlogV_dlogT_P**2 / dlogV_dlogP_T ) gamma = spec_heat_p / spec_heat_v gamma_s = -gamma/dlogV_dlogP_T return dlogV_dlogT_P, dlogV_dlogP_T, spec_heat_p, gamma_s derivs = get_thermo_derivatives(gas) dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties( gas, derivs[0], derivs[1], derivs[2] ) print(f'Cp = {cp: .2f} J/(K kg)') print(f'(d log V/d log P)_T = {dlogV_dlogP_T: .4f}') print(f'(d log V/d log T)_P = {dlogV_dlogT_P: .4f}') print(f'gamma_s = {gamma_s: .4f}') speed_sound = np.sqrt(ct.gas_constant * gas.T * gamma_s / gas.mean_molecular_weight) print(f'Speed of sound = {speed_sound: .1f} m/s') ###Output Cp = 11104.47 J/(K kg) (d log V/d log P)_T = -1.0400 (d log V/d log T)_P = 1.4722 gamma_s = 1.2549 Speed of sound = 2795.8 m/s ###Markdown 🎉 Success! These calculations agree very closely with those from CEA.

Adiabatic combustion

CEA also supports calculating the chamber temperature (along with composition) for adiabatic combustion, both with gaseous and liquid propellants. Cantera's equilibrium solver that we used above handles constant enthalpy and pressure equilibrium (`HP`) just fine with gaseous reactants, but how do we handle liquid propellants? CEA has a database of reactants with assigned enthalpies, as described by Gordon and McBride {cite}`cea_analysis`:
- noncryogenic reactants are represented via enthalpy of formation (i.e., heat of formation) at the standard reference temperature of 298.15 K
- cryogenic liquid reactants are represented via enthalpies given at their boiling points, which represent the standard enthalpy of formation minus the sensible heat (between 298.15 K and the boiling point), the heat of vaporization at the boiling point, and also the difference in enthalpy due to real gas effects at the boiling point.

For example, CEA's thermodynamic database {cite}`NASA_thermo` represents liquid dinitrogen tetroxide (N2O4), which is an oxidizer used with hydrazine, with```textN2O4(L) Dinitrogen tetroxide. McBride,1996 pp85,93. 0 g 6/96 N 2.00O 4.00 0.00 0.00 0.00 1 92.0110000 -17549.000 298.150 0.0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.000```while cryogenic liquid hydrogen is given with
```text
H2(L) Hydrogen. McBride,1996 pp84,92.
0 g 6/96 H 2.00 0.00 0.00 0.00 0.00 1 2.0158800 -9012.000 20.270 0.0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.000```The full format of species thermodynamic entries is given in the CEA User Manual {cite}`cea_manual`, but for these reactants the key information includes:
- species name, given in the first line
- elemental composition, given in a fixed column format in the second line
- phase, given as an integer in the third-to-last entry of the second line (zero for gases, nonzero for condensed phases)
- molecular weight, in the second-to-last entry of the second line
- enthalpy at the boiling point, in J/mol, at the end of the second line
- boiling point temperature, in K, at the beginning of the third line

In general, both CEA and Cantera represent the thermodynamic properties of gaseous and condensed species via more-sophisticated polynomial fits across multiple ranges of temperatures, but these problems only require the initial enthalpy of the reactants. Let's consider the Space Shuttle main engine (SSME), which used cryogenic liquid hydrogen and liquid oxygen at an oxidizer-to-fuel ratio of 6.0 and a chamber pressure of around 3000 psia. This is a constant enthalpy and pressure problem (`hp`) in CEA: ``` NASA-GLENN CHEMICAL EQUILIBRIUM PROGRAM CEA2, FEBRUARY 5, 2004 BY BONNIE MCBRIDE AND SANFORD GORDON REFS: NASA RP-1311, PART I, 1994 AND NASA RP-1311, PART II, 1996 ******************************************************************************* CEA analysis performed on Wed 27-Jan-2021 13:09:27 Problem Type: "Assigned Enthalpy and Pressure" prob case=_______________3446 hp Pressure (1 value): p,psia= 3000 Oxidizer/Fuel Wt. ratio (1 value): o/f= 6.0 You selected the following fuels and oxidizers: reac fuel H2(L) wt%=100.0000 oxid O2(L) wt%=100.0000 You selected these options for output: short version of output output short Proportions of any products will be expressed as Mole Fractions. Heat will be expressed as siunits output siunits Input prepared by this script:prepareInputFile.cgi IMPORTANT: The following line is the end of your CEA input file! end THERMODYNAMIC EQUILIBRIUM COMBUSTION PROPERTIES AT ASSIGNED PRESSURES CASE = _______________ REACTANT WT FRACTION ENERGY TEMP (SEE NOTE) KJ/KG-MOL K FUEL H2(L) 1.0000000 -9012.000 20.270 OXIDANT O2(L) 1.0000000 -12979.000 90.170 O/F= 6.00000 %FUEL= 14.285714 R,EQ.RATIO= 1.322780 PHI,EQ.RATIO= 1.322780 THERMODYNAMIC PROPERTIES P, BAR 206.84 T, K 3598.76 RHO, KG/CU M 9.4113 0 H, KJ/KG -986.31 U, KJ/KG -3184.12 G, KJ/KG -62768.7 S, KJ/(KG)(K) 17.1677 M, (1/n) 13.614 (dLV/dLP)t -1.01897 (dLV/dLT)p 1.3291 Cp, KJ/(KG)(K) 7.3140 GAMMAs 1.1475 SON VEL,M/SEC 1588.1 MOLE FRACTIONS *H 0.02543 HO2 0.00003 *H2 0.24740 H2O 0.68635 H2O2 0.00002 *O 0.00202 *OH 0.03659 *O2 0.00215 * THERMODYNAMIC PROPERTIES FITTED TO 20000.K NOTE. WEIGHT FRACTION OF FUEL IN TOTAL FUELS AND OF OXIDANT IN TOTAL OXIDANTS``` The key results include the chamber temperature $T_c$ of 3598.8 K, the specific heat ratio $\gamma_s$ of 1.148, and the mean molecular weight of 13.614 kg/kmol. To perform this calculation using Cantera, we need the reactant information:
```
H2(L) Hydrogen. McBride,1996 pp84,92.
0 g 6/96 O 2.00 0.00 0.00 0.00 0.00 1 31.9988000 -12979.000 90.170 0.0000 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.000```Cantera has fairly sophisticated ways of representing the thermodynamics of condensed [phases](https://cantera.org/documentation/dev/sphinx/html/yaml/phases.htmlsec-yaml-fixed-stoichiometry), but in this case we actually do not need that—we just need a way of easily representing the elemental composition and enthalpy of the reactants, which is the data neededfor constraining the equilibrium solver.So, we can actually use the [ideal gas thermodynamic model](https://cantera.org/documentation/dev/doxygen/html/d7/dfa/classCantera_1_1IdealGasPhase.htmldetails) (`ideal-gas`) for the phase.For each species, we can use the [constant heat capacity](https://cantera.org/science/science-species.htmlconstant-heat-capacity) (`constant-cp`) thermodynamic model,with the reference temperature set to boiling point (for the cryogenic liquid propellants in this case; for non-cryogenic reactants, this would be 298.15 K), the reference enthalpy set to the assigned value, and the reference specific heat and entropy set to zero.I've constructed a representative Cantera [YAML input file](https://cantera.org/tutorials/yaml/defining-phases.html), that describes separate phases for liquid hydrogen and liquid oxygen.⚠️ Warning ⚠️ these phases are **only** valid at the specific cryogenic temperature specified,and should only be used for this specific purpose (as reactants). ###Code h2o2_filename = 'h2o2_react.yaml' print('Contents of ' + h2o2_filename + ':\n') with open(h2o2_filename) as f: file_contents = f.read() print(file_contents) ###Output Contents of h2o2_react.yaml: phases: - name: liquid_hydrogen thermo: ideal-gas elements: [H] species: [H2(L)] - name: liquid_oxygen thermo: ideal-gas elements: [O] species: [O2(L)] species: - name: H2(L) composition: {H: 2} thermo: model: constant-cp T0: 20.270 h0: -9012.0 J/mol s0: 0.0 cp0: 0.0 - name: O2(L) composition: {O: 2} thermo: model: constant-cp T0: 90.170 h0: -12979.0 J/mol s0: 0.0 cp0: 0.0 ###Markdown To set up the system in Cantera, we create separate `Solution` objects for the liquid hydrogen and oxygen phases, and also a `Solution` containing the gas-phase products (actually, this could also include condensed species as well!).Then, create a `Mixture` that contains all three objects, and specify the initial moles of hydrogen and oxygen based on the oxidizer-to-fuel ratio: ###Code o_f_ratio = 6.0 temperature_h2 = Q_(20.270, 'K') temperature_o2 = Q_(90.170, 'K') pressure_chamber = Q_(3000, 'psi') h2 = ct.Solution(h2o2_filename, 'liquid_hydrogen') h2.TP = to_si(temperature_h2), to_si(pressure_chamber) o2 = ct.Solution(h2o2_filename, 'liquid_oxygen') o2.TP = to_si(temperature_o2), to_si(pressure_chamber) molar_ratio = o_f_ratio / (o2.mean_molecular_weight / h2.mean_molecular_weight) moles_ox = molar_ratio / (1 + molar_ratio) moles_f = 1 - moles_ox gas2 = ct.Solution('nasa_h2o2.yaml', 'gas') # create a mixture of the liquid phases with the gas-phase model, # with the number of moles for fuel and oxidizer based on # the O/F ratio mix = ct.Mixture([(h2, moles_f), (o2, moles_ox), (gas2, 0)]) # Solve for the equilibrium state, at constant enthalpy and pressure mix.equilibrate('HP') gas2() derivs = get_thermo_derivatives(gas2) dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties( gas2, derivs[0], derivs[1], derivs[2] ) print(f'Cp = {cp: .2f} J/(K kg)') print(f'(d log V/d log P)_T = {dlogV_dlogP_T: .4f}') print(f'(d log V/d log T)_P = 
{dlogV_dlogT_P: .4f}') print(f'gamma_s = {gamma_s: .4f}') speed_sound = np.sqrt(ct.gas_constant * gas2.T * gamma_s / gas2.mean_molecular_weight) print(f'Speed of sound = {speed_sound: .1f} m/s') ###Output gas: temperature 3597.5 K pressure 2.0684e+07 Pa density 9.4137 kg/m^3 mean mol. weight 13.613 kg/kmol phase of matter gas 1 kg 1 kmol --------------- --------------- enthalpy -9.8628e+05 -1.3426e+07 J internal energy -3.1835e+06 -4.3338e+07 J entropy 17175 2.3381e+05 J/K Gibbs function -6.2775e+07 -8.5457e+08 J heat capacity c_p 3795.4 51668 J/K heat capacity c_v 3184.7 43354 J/K mass frac. Y mole frac. X chem. pot. / RT --------------- --------------- --------------- H 0.0018901 0.025526 -8.7917 HO2 8.3393e-05 3.4395e-05 -40.623 H2 0.036632 0.24736 -17.583 H2O 0.908 0.68615 -33.499 H2O2 4.23e-05 1.693e-05 -49.415 O 0.0023965 0.0020392 -15.916 OH 0.045862 0.036711 -24.707 O2 0.0050891 0.0021651 -31.831 O3 2.257e-08 6.4014e-09 -47.747 Cp = 7330.18 J/(K kg) (d log V/d log P)_T = -1.0190 (d log V/d log T)_P = 1.3305 gamma_s = 1.1474 Speed of sound = 1587.8 m/s ###Markdown 🎉 Success! We get an equilibrium temperature of 3597.5 K, which is just 0.036% off the value calculated by CEA.Similarly, the ratios of specific heats match within 0.009%, and the speed of sounds within 0.019%. Rocket calculationsCEA also calculates performance quantities specific to rockets, such as the effective velocity (C-star, $c^*$), thrust coefficient ($C_F$), and specific impulse ($I_{\text{sp}}$).For the above example, but choosing the `rocket` problem and specifying a nozzle area ratioof 68.8, CEA provides this output:```******************************************************************************* NASA-GLENN CHEMICAL EQUILIBRIUM PROGRAM CEA2, FEBRUARY 5, 2004 BY BONNIE MCBRIDE AND SANFORD GORDON REFS: NASA RP-1311, PART I, 1994 AND NASA RP-1311, PART II, 1996 ******************************************************************************* Problem Type: "Rocket" (Infinite Area Combustor) prob case=_______________3446 ro equilibrium Pressure (1 value): p,psia= 3000 Supersonic Area Ratio (1 value): supar= 68.8 Oxidizer/Fuel Wt. 
ratio (1 value): o/f= 6.0 You selected the following fuels and oxidizers: reac fuel H2(L) wt%=100.0000 oxid O2(L) wt%=100.0000 output short output siunits end THEORETICAL ROCKET PERFORMANCE ASSUMING EQUILIBRIUM COMPOSITION DURING EXPANSION FROM INFINITE AREA COMBUSTOR Pin = 3000.0 PSIA CASE = _______________ REACTANT WT FRACTION ENERGY TEMP (SEE NOTE) KJ/KG-MOL K FUEL H2(L) 1.0000000 -9012.000 20.270 OXIDANT O2(L) 1.0000000 -12979.000 90.170 O/F= 6.00000 %FUEL= 14.285714 R,EQ.RATIO= 1.322780 PHI,EQ.RATIO= 1.322780 CHAMBER THROAT EXIT Pinf/P 1.0000 1.7403 961.12 P, BAR 206.84 118.85 0.21521 T, K 3598.76 3381.67 1233.84 RHO, KG/CU M 9.4113 0 5.8080 0 2.9602-2 H, KJ/KG -986.31 -2161.66 -10544.8 U, KJ/KG -3184.12 -4208.00 -11271.8 G, KJ/KG -62768.7 -60217.2 -31727.0 S, KJ/(KG)(K) 17.1677 17.1677 17.1677 M, (1/n) 13.614 13.740 14.111 (dLV/dLP)t -1.01897 -1.01412 -1.00000 (dLV/dLT)p 1.3291 1.2605 1.0000 Cp, KJ/(KG)(K) 7.3140 6.6953 2.9097 GAMMAs 1.1475 1.1487 1.2539 SON VEL,M/SEC 1588.1 1533.2 954.8 MACH NUMBER 0.000 1.000 4.579 PERFORMANCE PARAMETERS Ae/At 1.0000 68.800 CSTAR, M/SEC 2322.8 2322.8 CF 0.6601 1.8823 Ivac, M/SEC 2867.9 4538.6 Isp, M/SEC 1533.2 4372.3 MOLE FRACTIONS *H 0.02543 0.02034 0.00000 HO2 0.00003 0.00002 0.00000 *H2 0.24740 0.24494 0.24402 H2O 0.68635 0.70506 0.75598 H2O2 0.00002 0.00001 0.00000 *O 0.00202 0.00123 0.00000 *OH 0.03659 0.02704 0.00000 *O2 0.00215 0.00137 0.00000 * THERMODYNAMIC PROPERTIES FITTED TO 20000.K NOTE. WEIGHT FRACTION OF FUEL IN TOTAL FUELS AND OF OXIDANT IN TOTAL OXIDANTS```The key properties include:- C-star of 2322.8 m/s (based on combustion chamber conditions)- throat pressure of 118.85 bar and temperature of 3381.67 K- exit pressure of 0.21521 bar, temperature of 1233.84 K- at the nozzle exit, thrust coefficient = 1.8823, Isp = 4372.3 m/s ###Code area_ratio = 68.8 pressure_throat_cea = Q_(118.85, 'bar').to('Pa') temperature_throat_cea = 3381.67 pressure_exit_cea = Q_(0.21521, 'bar').to('Pa') temperature_exit_cea = 1233.84 c_star_cea = 2322.8 thrust_coeff_cea = 1.8823 specific_impulse_cea = 4372.3 specific_impulse_vac_cea = 4538.6 ###Output _____no_output_____ ###Markdown C-starWe can calculate $c^*$ directly using the combustion chamber state already obtained with Cantera: ###Code def calculate_c_star(gamma, temperature, molecular_weight): return ( np.sqrt(ct.gas_constant * temperature / (molecular_weight * gamma)) * np.power(2 / (gamma + 1), -(gamma + 1) / (2*(gamma - 1))) ) entropy_chamber = gas2.s enthalpy_chamber = gas2.enthalpy_mass mole_fractions_chamber = gas2.X gamma_chamber = gamma_s c_star = calculate_c_star(gamma_chamber, gas2.T, gas2.mean_molecular_weight) print(f'c-star: {c_star: .1f} m/s') print('Error in c-star: ' f'{100*np.abs(c_star - c_star_cea)/c_star_cea: .3e} %' ) ###Output c-star: 2323.0 m/s Error in c-star: 7.130e-03 % ###Markdown Throat conditionsThe nozzle flow from the combustion chamber to the throat is isentropic, and at the throat the flow velocity matches the sonic velocity. 
We need to iterate to determine the pressure and other properties.From 1D isentropic flow assumptions, the equation$$\frac{p_c}{p_t} = \left( \frac{\gamma_s + 1}{2} \right)^{\frac{\gamma_s}{\gamma_s - 1}}$$applies exactly, but only if $\gamma_s$ remains constant from the chamber to the throat.This works for the frozen-flow assumption, but not for shifting equilibrium, where the gas compositionwill adjust with changing pressure and temperature.We can use this equation to get a first estimate of throat pressure, $p_{t,1}$, then equilibratethe gas mixture at $s_c$ (chamber entropy) and $p_{t,1}$.The throat state is correct and converged when the velocity is sonic (i.e., equals the speed of sound).CEA checks for convergence using$$\left| \frac{u_t^2 - a_t^2}{u_t^2} \right| = \left| 1 - \frac{1}{M_t^2} \right| \leq 0.4 \cdot 10^{-4} \;,$$where $$\begin{align}M_t &= \frac{u_t}{a_t} \\u_t &= \sqrt{2 \left( h_c - h_t \right) } \\a_t &= \sqrt{ \gamma_s R T_t } \;,\end{align}$$using the properties at the current iteration.If the solution is not converged, we get an improved estimate for pressure:$$p_{t, k+1} = \left( p \frac{1 + \gamma_s M^2}{1 + \gamma_s} \right)_{t, k} \;,$$where $k$ is the iteration. ###Code gas_throat = ct.Solution('nasa_h2o2.yaml', 'gas') pressure_throat = pressure_chamber / np.power( (gamma_chamber + 1) / 2., gamma_chamber / (gamma_chamber - 1) ) # based on CEA defaults max_iter_throat = 5 tolerance_throat = 0.4e-4 print('Throat iterations:') mach = 1.0 num_iter = 0 residual = 1 while residual > tolerance_throat: num_iter += 1 if num_iter == max_iter_throat: print(f'Error: more than {max_iter_throat} iterations required for throat calculation') break pressure_throat = pressure_throat * (1 + gamma_s * mach**2) / (1 + gamma_s) gas_throat.SPX = entropy_chamber, to_si(pressure_throat), mole_fractions_chamber gas_throat.equilibrate('SP') derivs = get_thermo_derivatives(gas_throat) dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties( gas_throat, derivs[0], derivs[1], derivs[2] ) velocity = np.sqrt(2 * (enthalpy_chamber - gas_throat.enthalpy_mass)) speed_sound = np.sqrt( ct.gas_constant * gas_throat.T * gamma_s / gas_throat.mean_molecular_weight ) mach = velocity / speed_sound residual = np.abs(1.0 - 1/mach**2) print(f'{num_iter} {residual: .3e}') temperature_throat = gas_throat.T pressure_throat = Q_(gas_throat.P, 'Pa') gamma_s_throat = gamma_s print('Error in throat temperature: ' f'{100*np.abs(temperature_throat - temperature_throat_cea)/temperature_throat_cea: .3e} %' ) print('Error in throat pressure: ' f'{100*np.abs(pressure_throat - pressure_throat_cea)/pressure_throat_cea: .3e~P} %' ) ###Output Throat iterations: 1 9.420e-04 2 1.590e-06 Error in throat temperature: 2.640e-02 % Error in throat pressure: 5.430e-03 % ###Markdown Exit conditionsThe conditions at the nozzle exit (or any location, really) can be determined with a given exit-to-throat area ratio $A_e / A_t$, by an iterative approach.First, calculate the area per unit mass flow rate at the throat:$$\left( \frac{A}{\dot{m}} \right)_t = \frac{1}{\rho_t u_t} = \frac{T_t n_t \mathcal{R}}{p_t u_t} \;,$$where $n_t = 1 / \text{MW}_t$ is the number of moles. 
Then, for a supersonic nozzle with an area ratio greater than two ($A_e / A_t \geq 2$),we can obtain an initial estimate for pressure ratio using an empirical formula:$$\log \frac{p_c}{p_e} = \gamma_s + 1.4 \log \frac{A_e}{A_t} \;,$$where $\gamma_s$ is evaluated using the throat state.An improved estimate of pressure ratio for the next iteration can be found using:$$\left( \log \frac{p_c}{p_e} \right)_{k+1} = \left( \log \frac{p_c}{p_e} \right)_k + \left[ \left( \frac{\partial \log \frac{p_c}{p_e} }{\partial \log \frac{A_e}{A_t} } \right)_s \right]_k \times \left[ \log \frac{A_e}{A_t} - \left( \log \frac{A_e}{A_t} \right)_k \right] \;,$$where the derivative is$$\left( \frac{\partial \log \frac{p_c}{p_e} }{\partial \log \frac{A_e}{A_t} } \right)_s = \left( \frac{\gamma_s u^2}{u^2 - a^2} \right)_e$$and the $k$th estimate of area ratio comes from$$\left( \frac{A_e}{A_t} \right)_k = \left( \frac{T_e n_e \mathcal{R}}{p_e u_e} \right)_k \frac{1}{ \left(A/\dot{m}\right)_t } \;.$$ ###Code # this is constant A_mdot_thr = gas_throat.T / (gas_throat.P * velocity * gas_throat.mean_molecular_weight) gas_exit = ct.Solution('nasa_h2o2.yaml', 'gas') gas_exit.SPX = gas_throat.s, gas_throat.P, gas_throat.X # initial estimate for pressure ratio pinf_pe = np.exp(gamma_s_throat + 1.4 * np.log(area_ratio)) p_exit = to_si(pressure_chamber) / pinf_pe gas_exit.SP = entropy_chamber, p_exit gas_exit.equilibrate('SP') Ae_At = gas_exit.T / (gas_exit.P * velocity * gas_exit.mean_molecular_weight) / A_mdot_thr print('Iter T_exit Ae/At P_exit P_inf/P') num_iter = 0 print(f'{num_iter} {gas_exit.T:.3f} K {Ae_At: .2f} {gas_exit.P/1e5:.3f} bar {pinf_pe:.3f}') max_iter_exit = 10 tolerance_exit = 4e-5 residual = 1 while np.abs(residual) > tolerance_exit: num_iter += 1 if num_iter == max_iter_exit: print(f'Error: more than {max_iter_exit} iterations required for exit calculation') break derivs = get_thermo_derivatives(gas_exit) dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties( gas_exit, derivs[0], derivs[1], derivs[2] ) velocity = np.sqrt(2 * (enthalpy_chamber - gas_exit.enthalpy_mass)) speed_sound = np.sqrt(ct.gas_constant * gas_exit.T * gamma_s / gas_exit.mean_molecular_weight) Ae_At = gas_exit.T / (gas_exit.P * velocity * gas_exit.mean_molecular_weight) / A_mdot_thr dlogp_dlogA = gamma_s * velocity**2 / (velocity**2 - speed_sound**2) residual = dlogp_dlogA * (np.log(area_ratio) - np.log(Ae_At)) log_pinf_pe = np.log(pinf_pe) + residual pinf_pe = np.exp(log_pinf_pe) p_exit = to_si(pressure_chamber) / pinf_pe gas_exit.SP = entropy_chamber, p_exit gas_exit.equilibrate('SP') print(f'{num_iter} {gas_exit.T:.3f} K {Ae_At: .2f} {gas_exit.P/1e5:.3f} bar {pinf_pe:.3f}') print(f'Exit temperature: {gas_exit.T: .2f} K') print(f'Exit pressure: {Q_(gas_exit.P, "Pa").to("bar"): .5f~P}') print() print('Error in exit temperature: ' f'{100*np.abs(gas_exit.T - temperature_exit_cea)/temperature_exit_cea: .3e} %' ) print('Error in exit pressure: ' f'{100*np.abs(Q_(gas_exit.P, "Pa") - pressure_exit_cea)/pressure_exit_cea: .3e~P} %' ) ###Output Exit temperature: 1234.18 K Exit pressure: 0.21528 bar Error in exit temperature: 2.731e-02 % Error in exit pressure: 3.329e-02 % ###Markdown Those results look good! 
Now we can calculate thrust coefficient and specific impulse, using$$\begin{align}C_F &= \frac{v_e}{c^*} \\I_{\text{sp}} &= \frac{v_e}{g_0} \\I_{\text{vac}} &= I_{\text{sp}} + \frac{p_e A_e}{\dot{m}} = I_{\text{sp}} + \frac{T_e \mathcal{R}}{v_e \overline{M}} \;.\end{align}$$CEA prints specific impulse with units of velocity, without the reference gravity term,so we will compute both versions for comparison. ###Code derivs = get_thermo_derivatives(gas_exit) dlogV_dlogT_P, dlogV_dlogP_T, cp, gamma_s = get_thermo_properties( gas_exit, derivs[0], derivs[1], derivs[2] ) velocity = np.sqrt(2 * (enthalpy_chamber - gas_exit.enthalpy_mass)) thrust_coeff = velocity / c_star print(f'Thrust coefficient: {thrust_coeff: .4f}') g0 = 9.80665 Isp = velocity Ivac = Isp + gas_exit.T * ct.gas_constant / (velocity * gas_exit.mean_molecular_weight) print(f'I_sp = {Isp: .1f} m/s') print(f'I_vac = {Ivac: .1f} m/s') print() print('Error in Isp: ' f'{100*np.abs(Isp - specific_impulse_cea)/specific_impulse_cea: .3e} %' ) print('Error in Ivac: ' f'{100*np.abs(Ivac - specific_impulse_vac_cea)/specific_impulse_vac_cea: .3e} %' ) print('Actual specific impulse:') print(f'I_sp = {Isp / g0: .1f} s') print(f'I_vac = {Ivac / g0: .1f} s') ###Output Actual specific impulse: I_sp = 445.8 s I_vac = 462.8 s
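###Markdown As a closing sanity check, we can turn these performance parameters into a thrust estimate. This is an illustrative sketch only: the mass flow rate below is a hypothetical round number (the real SSME flows roughly 500 kg/s at full power), and we reuse `velocity` and `gas_exit` from the cells above. ###Code
# hypothetical mass flow rate, chosen only for illustration
mdot = 500.0  # kg/s

# exit area from the area-per-mass-flow relation, A/mdot = R*T/(p*u*MW)
area_exit = mdot * ct.gas_constant * gas_exit.T / (
    gas_exit.P * velocity * gas_exit.mean_molecular_weight
)

# vacuum thrust = momentum flux plus pressure term (ambient pressure = 0)
thrust_vacuum = mdot * velocity + gas_exit.P * area_exit
print(f'Estimated vacuum thrust = {thrust_vacuum/1e6: .2f} MN')
###Output _____no_output_____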
_notebooks/2021-01-10-License-Plate-Detection.ipynb
###Markdown License Plate Detection > Detecting license plate with an open source model - toc: true - badges: true - comments: true - categories: [object detection] Install library ###Code !git clone https://github.com/quangnhat185/Plate_detect_and_recognize.git %cd Plate_detect_and_recognize ###Output _____no_output_____ ###Markdown Import packages ###Code import cv2 import numpy as np import matplotlib.pyplot as plt from local_utils import detect_lp from os.path import splitext,basename from keras.models import model_from_json import glob ###Output _____no_output_____ ###Markdown Load model ###Code def load_model(path): try: path = splitext(path)[0] with open('%s.json' % path, 'r') as json_file: model_json = json_file.read() model = model_from_json(model_json, custom_objects={}) model.load_weights('%s.h5' % path) print("Loading model successfully...") return model except Exception as e: print(e) wpod_net_path = "wpod-net.json" wpod_net = load_model(wpod_net_path) ###Output _____no_output_____ ###Markdown Data loading and preprocessing ###Code def preprocess_image(image_path,resize=False): img = cv2.imread(image_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = img / 255 if resize: img = cv2.resize(img, (224,224)) return img #hide_input !mkdir /content/plates !wget -q -O /content/plates/plate1.jpg 'https://images.squarespace-cdn.com/content/v1/5c981f3d0fb4450001fdde5d/1563727260863-E9JQC4UVO8IYCE6P19BO/ke17ZwdGBToddI8pDm48kDHPSfPanjkWqhH6pl6g5ph7gQa3H78H3Y0txjaiv_0fDoOvxcdMmMKkDsyUqMSsMWxHk725yiiHCCLfrh8O1z4YTzHvnKhyp6Da-NYroOW3ZGjoBKy3azqku80C789l0mwONMR1ELp49Lyc52iWr5dNb1QJw9casjKdtTg1_-y4jz4ptJBmI9gQmbjSQnNGng/cars+1.jpg' !wget -q -O /content/plates/plate2.jpg 'https://www.cars24.com/blog/wp-content/uploads/2018/12/High-Security-Registration-Plates-Feature-Cars24.com_.png' ###Output _____no_output_____ ###Markdown Create a list of image paths ###Code image_paths = glob.glob("/content/plates/*.jpg") print("Found %i images..."%(len(image_paths))) ###Output _____no_output_____ ###Markdown Visualize data in subplot ###Code #collapse-hide fig = plt.figure(figsize=(12,8)) cols = 5 rows = 4 fig_list = [] for i in range(len(image_paths)): fig_list.append(fig.add_subplot(rows,cols,i+1)) title = splitext(basename(image_paths[i]))[0] fig_list[-1].set_title(title) img = preprocess_image(image_paths[i],True) plt.axis(False) plt.imshow(img) plt.tight_layout(True) plt.show() ###Output Found 2 images... ###Markdown Inference Forward image through model and return plate's image and coordinates. if error "No Licensese plate is founded!" pop up, try to adjust Dmin. ###Code def get_plate(image_path, Dmax=608, Dmin=256): vehicle = preprocess_image(image_path) ratio = float(max(vehicle.shape[:2])) / min(vehicle.shape[:2]) side = int(ratio * Dmin) bound_dim = min(side, Dmax) _ , LpImg, _, cor = detect_lp(wpod_net, vehicle, bound_dim, lp_threshold=0.5) return LpImg, cor ###Output _____no_output_____ ###Markdown Obtain plate image and its coordinates from an image ###Code #collapse-output test_image = image_paths[0] LpImg,cor = get_plate(test_image) print("Detect %i plate(s) in"%len(LpImg),splitext(basename(test_image))[0]) print("Coordinate of plate(s) in image: \n", cor) ###Output Detect 1 plate(s) in plate2 Coordinate of plate(s) in image: [array([[298.04423448, 433.05399526, 436.51161569, 301.50185491], [338.43327592, 359.12640589, 385.37416506, 364.6810351 ], [ 1. , 1. , 1. , 1. 
]])] ###Markdown Visualize our result ###Code #collapse-hide plt.figure(figsize=(12,5)) plt.subplot(1,2,1) plt.axis(False) plt.imshow(preprocess_image(test_image)) plt.subplot(1,2,2) plt.axis(False) plt.imshow(LpImg[0]) ###Output _____no_output_____ ###Markdown Visualize all obtained plate images ###Code #collapse-hide fig = plt.figure(figsize=(12,6)) cols = 5 rows = 4 fig_list = [] for i in range(len(image_paths)): fig_list.append(fig.add_subplot(rows,cols,i+1)) title = splitext(basename(image_paths[i]))[0] fig_list[-1].set_title(title) LpImg,_ = get_plate(image_paths[i]) plt.axis(False) plt.imshow(LpImg[0]) plt.tight_layout(True) plt.show() ###Output _____no_output_____
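###Markdown A natural follow-up (not part of the original notebook) is preparing a detected plate for character recognition. The sketch below shows typical OpenCV preprocessing, reusing `LpImg` from the cells above: convert the floating-point crop back to 8-bit, grayscale it, and binarize it with Otsu's threshold. ###Code
# scale the detected plate from [0, 1] floats back to an 8-bit image
plate_image = cv2.convertScaleAbs(LpImg[0], alpha=255.0)

# grayscale, blur slightly, then binarize so the characters stand out
gray = cv2.cvtColor(plate_image, cv2.COLOR_RGB2GRAY)
blur = cv2.GaussianBlur(gray, (7, 7), 0)
_, binary = cv2.threshold(blur, 180, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

plt.figure(figsize=(10, 4))
plt.axis(False)
plt.imshow(binary, cmap='gray')
###Output _____no_output_____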
jupyter/example_notebooks/.ipynb_checkpoints/spark_example_v0.2.1-checkpoint.ipynb
###Markdown 0 - Setup Notebook Pod 0.1 - Run in Jupyter Bash Terminal```bash # create application-default credentials gcloud auth application-default login``` 1 - Initialize SparkSession ###Code import pyspark from pyspark.sql import SparkSession # construct spark_jars list spark_jars = ["https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-hadoop2-latest.jar"] if pyspark.version.__version__[0] == "3": spark_jars.append("https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-latest_2.12.jar") else: spark_jars.append("https://storage.googleapis.com/spark-lib/bigquery/spark-bigquery-latest_2.11.jar") # create SparkSession spark = SparkSession \ .builder \ .master("local[1]") \ .config("spark.driver.cores", "1") \ .config("spark.driver.memory", "4g") \ .config("spark.jars", ",".join(spark_jars)) \ .config("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "LEGACY") \ .config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem") \ .config("spark.hadoop.fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS") \ .config("spark.hadoop.fs.gs.auth.service.account.enable", "true") \ .config("spark.hadoop.fs.gs.auth.service.account.json.keyfile", "/home/jovyan/.config/gcloud/application_default_credentials.json") \ .getOrCreate() ###Output _____no_output_____ ###Markdown 2 - SparkSQL 2.0 - Docs* https://spark.apache.org/docs/latest/sql-getting-started.html* https://spark.apache.org/docs/latest/api/python/pyspark.sql.html 2.1 - Write CSV ###Code # create a DataFrame df = spark.createDataFrame( [("aaa", 1, "!!!"), ("bbb", 2, "@@@"), ("ccc", 3, "###"), ("ddd", 4, "%%%")], schema=["col1", "col2", "col3", ] ) # write CSV out_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.csv" df.write \ .format("csv") \ .mode("overwrite") \ .option("header", "true") \ .save(out_uri) # link to GUI print("----------------") print("View in GUI:") print(f"https://console.cloud.google.com/storage/browser/{out_uri.lstrip('gs://')}/") print("----------------") ###Output _____no_output_____ ###Markdown 2.2 - Read CSV ###Code # read CSV in_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.csv" df2 = spark.read \ .format("csv") \ .option("mode", "FAILFAST") \ .option("inferSchema", "true") \ .option("header", "true") \ .load(in_uri) # view DataFrame df2.show() ###Output _____no_output_____ ###Markdown 3 - BigQuery 3.0 - Docs* https://github.com/GoogleCloudDataproc/spark-bigquery-connector 3.1 - Write to BigQuery ###Code # create a DataFrame df3 = spark.createDataFrame( [("aaa", 1, "!!!"), ("bbb", 2, "@@@"), ("ccc", 3, "###"), ("ddd", 4, "%%%")], schema=["col1", "col2", "col3", ] ) # write to BigQuery out_project = "<<<MY_PROJECT>>>" out_table = "<<<MY_DATABASE>>>.example__spark_notebook" billing_project = "<<<MY_PROJECT>>>" df3.write \ .format("bigquery") \ .mode("overwrite") \ .option("temporaryGcsBucket", "<<<MY_BUCKET>>>") \ .option("parentProject", billing_project) \ .option("project", out_project) \ .option("table", out_table) \ .save() # link to GUI print("----------------") print("View in GUI:") print(f"https://console.cloud.google.com/bigquery?project={out_project}") print("----------------") ###Output _____no_output_____ ###Markdown 3.2 - Read from BigQuery ###Code # read from BigQuery in_project = "<<<MY_PROJECT>>>" in_table = "<<<MY_DATABASE>>>.example__spark_notebook" billing_project = "<<<MY_PROJECT>>>" df4 = spark.read \ .format("bigquery") \ .option("readDataFormat", "ARROW") \ .option("parentProject", billing_project) \ .option("project", 
in_project) \ .option("table", in_table) \ .load() # view DataFrame df4.show() ###Output _____no_output_____ ###Markdown 4 - Advanced Functions 4.1 - Write File (Hadoop Java API) ###Code def hadoop_write_file(spark: SparkSession, fs_uri: str, overwrite: bool, file_data: str) -> str: """ Write a string as a file using the Hadoop Java API. :param spark: a running SparkSession :param fs_uri: the URI of the file :param overwrite: if we should replace any existing file (error if False) :param file_data: the string to write as the file data :return: the URI of the written file """ # create py4j wrappers of java objects hadoop = spark.sparkContext._jvm.org.apache.hadoop java = spark.sparkContext._jvm.java # create the FileSystem() object conf = spark._jsc.hadoopConfiguration() path = hadoop.fs.Path(java.net.URI(fs_uri)) fs = path.getFileSystem(conf) # write the file output_stream = fs.create(path, overwrite) output_stream.writeBytes(file_data) output_stream.close() return fs_uri # write file out_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.txt" file_data = "Hello World! " * 100 hadoop_write_file(spark=spark, fs_uri=out_uri, overwrite=True, file_data=file_data) # link to GUI print("----------------") print("View in GUI:") print(f"https://console.cloud.google.com/storage/browser/{out_uri.lstrip('gs://')}") print("----------------") ###Output _____no_output_____ ###Markdown 4.2 - Read File (Hadoop Java API) ###Code def hadoop_read_file(spark: SparkSession, fs_uri: str, encoding: str = "utf-8") -> str: """ Read the content of a file as a string using the Hadoop Java API. :param spark: a running SparkSession :param fs_uri: the URI of the file :param encoding: the file's encoding (defaults to utf-8) :return: the content of the file (or None if the file is not present) """ from py4j.protocol import Py4JJavaError # create py4j wrappers of java objects commons = spark.sparkContext._jvm.org.apache.commons hadoop = spark.sparkContext._jvm.org.apache.hadoop java = spark.sparkContext._jvm.java # create the FileSystem() object conf = spark._jsc.hadoopConfiguration() path = hadoop.fs.Path(java.net.URI(fs_uri)) fs = path.getFileSystem(conf) # read file as string try: input_stream = fs.open(path) file_data = commons.io.IOUtils.toString(input_stream, encoding) input_stream.close() return file_data except Py4JJavaError as ex: java_exception_class = ex.java_exception.getClass().getName() if java_exception_class == "java.io.FileNotFoundException": return None else: raise ex # read file in_uri = f"gs://<<<MY_BUCKET>>>/example/spark_test.txt" file_data = hadoop_read_file(spark=spark, fs_uri=in_uri) print("-------- File Content --------") print(file_data) print("------------------------------") ###Output _____no_output_____
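###Markdown 4.3 - Delete File (Hadoop Java API) For completeness, here is a matching delete helper, built the same way as the read/write helpers above. This section is an illustrative addition rather than part of the original example. ###Code
def hadoop_delete_path(spark: SparkSession, fs_uri: str, recursive: bool = False) -> bool:
    """
    Delete a file or folder using the Hadoop Java API.

    :param spark: a running SparkSession
    :param fs_uri: the URI of the file or folder
    :param recursive: if folders should be deleted recursively
    :return: True if the path was deleted, False if it did not exist
    """
    # create py4j wrappers of java objects
    hadoop = spark.sparkContext._jvm.org.apache.hadoop
    java = spark.sparkContext._jvm.java

    # create the FileSystem() object
    conf = spark._jsc.hadoopConfiguration()
    path = hadoop.fs.Path(java.net.URI(fs_uri))
    fs = path.getFileSystem(conf)

    # FileSystem.delete() errors on missing paths, so check existence first
    if fs.exists(path):
        return fs.delete(path, recursive)
    return False
###Output _____no_output_____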
intro-to-python/intro-to-python.ipynb
###Markdown Introduction to Python Purpose: To begin practicing and working with Python *Step 1: Hello world* ###Code # print(Hello World)  # this first attempt fails: Python strings need quotes print('Hello World') ###Output Hello World ###Markdown *Step 2: Print a User Friendly Greeting* ###Code #Input your name here after the equal sign and in quotations user_name = 'Hawley' print('Hello,', user_name) print('It is nice to meet you!') ###Output Hello, Hawley It is nice to meet you! ###Markdown *Step 3: The area of a circle* ###Code import numpy as np # Define your radius r = 3 area = np.pi * r ** 2 print(area) # Define your radius r = 3 area = np.pi * r ** 2 area ###Output _____no_output_____ ###Markdown *Step 3b: Doing it in a function* ###Code def find_area(r): return np.pi*r**2 find_area(3) ###Output _____no_output_____ ###Markdown *Step 3c: Doing more with a function, faster!* ###Code numberlist = list(range(0, 10)) numberlist for numbers in numberlist: area = find_area(numbers) print('Radius:', numbers, 'Area:', area) ###Output Radius: 0 Area: 0.0 Radius: 1 Area: 3.141592653589793 Radius: 2 Area: 12.566370614359172 Radius: 3 Area: 28.274333882308138 Radius: 4 Area: 50.26548245743669 Radius: 5 Area: 78.53981633974483 Radius: 6 Area: 113.09733552923255 Radius: 7 Area: 153.93804002589985 Radius: 8 Area: 201.06192982974676 Radius: 9 Area: 254.46900494077323 ###Markdown *Step 4: Intro to Plotting* ###Code import matplotlib.pyplot as plt %matplotlib inline fig, ax = plt.subplots() ax.set(xlim=(-1, 1), ylim=(-1, 1)) a_circle = plt.Circle((0, 0), 0.5) ax.add_artist(a_circle) for numbers in numberlist: fig, ax = plt.subplots() ax.set(xlim=(-10, 10), ylim=(-10, 10)) a_circle = plt.Circle((0, 0), numbers/2) ax.add_artist(a_circle) ###Output _____no_output_____
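###Markdown *Step 5: A vectorized shortcut* As a small extra (not part of the original exercise), the loop from Step 3c can be replaced by a single call, because `find_area` only uses operations that also work on whole numpy arrays: ###Code
radii = np.array(numberlist)
areas = find_area(radii)  # applies element-wise to the whole array
for r, a in zip(radii, areas):
    print('Radius:', r, 'Area:', a)
###Output _____no_output_____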
FFSSN.ipynb
###Markdown This page should automatically redirect to the updated page. If it does not please go to [http://nmsutton.herokuapp.com/nmsuttondetails/Snnrl](http://nmsutton.herokuapp.com/nmsuttondetails/Snnrl) ###Code from IPython.core.display import display, HTML display(HTML('<script>window.location = "http://nmsutton.herokuapp.com/nmsuttondetails/Snnrl";</script>')) ###Output _____no_output_____
8-Puzzle-Solver.ipynb
###Markdown 8 Puzzle solver* Parsa KamaliPour - 97149081* In this repository we're going to solve this puzzle using $ A^* $ and $ IDA $ imports: ###Code import copy import pandas as pd import numpy as np import collections import heapq ###Output _____no_output_____ ###Markdown Test case 1: ###Code input_puzzle_1 = [ [1, 2, 3], [4, 0, 5], [7, 8, 6] ] print('Input: ') print(pd.DataFrame(input_puzzle_1)) print() desired_output_1 = [ [1, 2, 3], [4, 5, 0], [7, 8, 6] ] print('Desired Output:') print(pd.DataFrame(desired_output_1)) ###Output Input: 0 1 2 0 1 2 3 1 4 0 5 2 7 8 6 Desired Output: 0 1 2 0 1 2 3 1 4 5 0 2 7 8 6 ###Markdown Test case 2: (Hardest) ###Code input_puzzle_2 = [ [8, 6, 7], [2, 5, 4], [3, 0, 1] ] print('Input 2: ') print(pd.DataFrame(input_puzzle_2)) print() desired_output_2 = [ [6, 4, 7], [8, 5, 0], [3, 2, 1] ] print('Desired Output 2:') print(pd.DataFrame(desired_output_2)) ###Output Input 2: 0 1 2 0 8 6 7 1 2 5 4 2 3 0 1 Desired Output 2: 0 1 2 0 6 4 7 1 8 5 0 2 3 2 1 ###Markdown code's configs: ###Code heuristic_method = input("Enter the desired Heuristic method: h1 or h2") f_function_omega = eval(input("Enter the desired f function omega: 2 is Greedy, " "0 is Uninformed best-first search, " "0 < omega <= 1 is A*")) test_case = eval(input("which test case? 1:easy, 2:hard")) if test_case == 1: input_puzzle = input_puzzle_1 desired_output = desired_output_1 elif test_case == 2: input_puzzle = input_puzzle_2 desired_output = desired_output_2 ###Output _____no_output_____ ###Markdown Matrix to dictionary converter ###Code class Mat2dict: def __init__(self, matrix): self.matrix = matrix self.dic = {} def convert(self): for r in range(len(self.matrix)): for c in range(len(self.matrix[0])): key = self.matrix[r][c] self.dic[key] = [r, c] return self.dic ###Output _____no_output_____ ###Markdown the heuristic calculator class:* H1 heuristic (misplaced tiles) $ \Sigma_{i=1}^{9} \; if \; currentPuzzleBoard[node_i] \; != \; goalPuzzleBoard[node_i]$ $ then \; h(state_y) = h(state_y) + 1$* H2 heuristic (manhattan distance) $ goalPuzzleBoard.find(currentPuzzleBoard[node_i]) $ $ retrieve \; Row \; \& \; Col \; of \; goal $ $ manhattanDistance = |(goal.row - current.row)| + |(goal.col - current.col)| $ $ TotalHeuristic[state_i] = \Sigma_{i=1}^{9} manhattanDistance_i $ ###Code class Heuristic: def __init__(self, node, current_puzzle, desired_answer, method): self.method = method self.node = node self.current_puzzle = current_puzzle self.desired_answer = desired_answer #self.current_puzzle_dict = Mat2dict(self.current_puzzle) self.desired_answer_dict = Mat2dict(self.desired_answer).convert() def do(self): if self.method == 'h1': return self.h1_misplaced_tiles() elif self.method == 'h2': return self.h2_manhattan_distance() def h1_misplaced_tiles(self): misplaced_counter = 0 for row in range(len(self.current_puzzle)): for col in range(len(self.current_puzzle[0])): if self.current_puzzle[row][col] != self.desired_answer[row][col]: misplaced_counter += 1 return misplaced_counter def h2_manhattan_distance(self): total_distance_counter = 0 for row in range(len(self.current_puzzle)): for col in range(len(self.current_puzzle[0])): key = self.current_puzzle[row][col] correct_row, correct_col = self.desired_answer_dict[key] total_distance_counter += abs(row - correct_row) + abs(col - correct_col) return total_distance_counter ###Output _____no_output_____ ###Markdown The node class:* F function is calculated in a such way that you can control how the Heuristic and G-costcan perform: $ FCost(n) = 
(2-\omega) * GCost(n) + \omega * h(n)$ $ if \; \omega = 2: $ $ then: algorithm \; is \; Greedy \; due \; to \; GCost \; being \; 0:$ $ FCost(n) = 0 + 2 * h(n) $ $ if \; \omega = 0: $ $ then: algorithm \; is \; uninformed \; search \; due \; to \; h(n) \; being \; 0:$ $ FCost(n) = 2 * GCost(n) + 0 $ $ if \; 0 \lt \omega \le 1 : $ $ then: algorithm \; is \; informed \; search(A^*):$ $ FCost(n) = (2-\omega) * GCost(n) + \omega * h(n) $ ###Code class Node: def __init__(self, current_puzzle, parent=None): self.current_puzzle = current_puzzle self.parent = parent if self.parent: self.g_cost = self.parent.f_function self.depth = self.parent.depth + 1 else: self.g_cost = 0 self.depth = 0 self.h_cost = Heuristic(self, current_puzzle, desired_output, heuristic_method).do() self.f_function = (2 - f_function_omega) * self.g_cost + f_function_omega * self.h_cost def __eq__(self, other): return self.f_function == other.f_function def __lt__(self, other): return self.f_function < other.f_function def get_id(self): return str(self) def get_path(self): node, path = self, [] while node: path.append(node) node = node.parent return list(reversed(path)) def get_position(self, element): for row in range(len(self.current_puzzle)): for col in range(len(self.current_puzzle[0])): if self.current_puzzle[row][col] == element: return [row, col] return [0, 0] ###Output _____no_output_____ ###Markdown The puzzle solver class: ###Code class PuzzleSolver: def __init__(self, start_node): self.final_state = None self.start_node = start_node self.depth = 0 self.visited_nodes = set() self.expanded_nodes = 0 def solve(self): queue = [self.start_node] self.visited_nodes.add(self.start_node.get_id()) while queue: self.expanded_nodes += 1 node = heapq.heappop(queue) if node.current_puzzle == desired_output: self.final_state = node Result(self.final_state, self.expanded_nodes) return True if node.depth + 1 > self.depth: self.depth = node.depth + 1 for neighbor in NeighborsCalculator(node).get_list_of_neighbors(): if not neighbor.get_id in self.visited_nodes: self.visited_nodes.add(neighbor.get_id()) heapq.heappush(queue, neighbor) return False ###Output _____no_output_____ ###Markdown result class ###Code class Result: def __init__(self, final_state, expanded_nodes): self.expanded_nodes = expanded_nodes self.final_state = final_state self.solved_puzzle = self.final_state.current_puzzle self.path = self.final_state.get_path() self.show_puzzles() self.show_path() def show_puzzles(self): print("Inital Puzzle: ") print(pd.DataFrame(input_puzzle)) print("Result Puzzle: ") print(pd.DataFrame(self.solved_puzzle)) print("Expected Puzzle: ") print(pd.DataFrame(desired_output)) print() print("Number of expanded nodes: {}".format(self.expanded_nodes)) print() def show_path(self): counter = 0 while self.path: counter += 1 step = self.path.pop(0) print("step {}: ".format(counter)) print(pd.DataFrame(step.current_puzzle)) ###Output _____no_output_____ ###Markdown Neighbors calculator ###Code class NeighborsCalculator: def __init__(self, current_state): self.current_state = current_state self.puzzle = self.current_state.current_puzzle self.neighbors = [] def get_list_of_neighbors(self): row, col = map(int, self.current_state.get_position(0)) #if row or col is None: # return [] # move right if col < len(self.puzzle[0]) - 1: moved_right = copy.deepcopy(self.puzzle) moved_right[row][col], moved_right[row][col + 1] = moved_right[row][col + 1], moved_right[row][col] self.neighbors.append(Node(moved_right, self.current_state)) # move left if col > 0: 
moved_left = copy.deepcopy(self.puzzle) moved_left[row][col], moved_left[row][col - 1] = moved_left[row][col - 1], moved_left[row][col] self.neighbors.append(Node(moved_left, self.current_state)) # move up if row > 0: moved_up = copy.deepcopy(self.puzzle) moved_up[row][col], moved_up[row - 1][col] = moved_up[row - 1][col], moved_up[row][col] self.neighbors.append(Node(moved_up, self.current_state)) # move down if row < len(self.puzzle) - 1: moved_down = copy.deepcopy(self.puzzle) moved_down[row][col], moved_down[row + 1][col] = moved_down[row + 1][col], moved_down[row][col] self.neighbors.append(Node(moved_down, self.current_state)) return self.neighbors initial_state = Node(input_puzzle) PuzzleSolver(initial_state).solve() ###Output Inital Puzzle: 0 1 2 0 1 2 3 1 4 0 5 2 7 8 6 Result Puzzle: 0 1 2 0 1 2 3 1 4 5 0 2 7 8 6 Expected Puzzle: 0 1 2 0 1 2 3 1 4 5 0 2 7 8 6 Number of expanded nodes: 2 step 1: 0 1 2 0 1 2 3 1 4 0 5 2 7 8 6 step 2: 0 1 2 0 1 2 3 1 4 5 0 2 7 8 6 ###Markdown The puzzle solver IDA class: ###Code class PuzzleSolverIDA: def __init__(self, start_node, iterate): self.iterate = iterate self.final_state = None self.start_node = start_node self.depth = 0 self.visited_nodes = set() self.expanded_nodes = 0 self.f_cutoff = 0 def solve(self): while True: self.f_cutoff += self.iterate queue = [self.start_node] self.visited_nodes.add(self.start_node.get_id()) while queue: self.expanded_nodes += 1 node = heapq.heappop(queue) if node.current_puzzle == desired_output: self.final_state = node Result(self.final_state, self.expanded_nodes) return True if node.depth + 1 > self.depth: self.depth = node.depth + 1 for neighbor in NeighborsCalculator(node).get_list_of_neighbors(): if not neighbor.get_id in self.visited_nodes: if neighbor.f_function <= self.f_cutoff: self.visited_nodes.add(neighbor.get_id()) heapq.heappush(queue, neighbor) initial_state = Node(input_puzzle) PuzzleSolverIDA(initial_state, 4).solve() ###Output Inital Puzzle: 0 1 2 0 8 6 7 1 2 5 4 2 3 0 1 Result Puzzle: 0 1 2 0 6 4 7 1 8 5 0 2 3 2 1 Expected Puzzle: 0 1 2 0 6 4 7 1 8 5 0 2 3 2 1 Number of expanded nodes: 2191847 step 1: 0 1 2 0 8 6 7 1 2 5 4 2 3 0 1 step 2: 0 1 2 0 8 6 7 1 2 5 4 2 3 1 0 step 3: 0 1 2 0 8 6 7 1 2 5 0 2 3 1 4 step 4: 0 1 2 0 8 6 7 1 2 0 5 2 3 1 4 step 5: 0 1 2 0 8 6 7 1 0 2 5 2 3 1 4 step 6: 0 1 2 0 0 6 7 1 8 2 5 2 3 1 4 step 7: 0 1 2 0 6 0 7 1 8 2 5 2 3 1 4 step 8: 0 1 2 0 6 7 0 1 8 2 5 2 3 1 4 step 9: 0 1 2 0 6 7 5 1 8 2 0 2 3 1 4 step 10: 0 1 2 0 6 7 5 1 8 2 4 2 3 1 0 step 11: 0 1 2 0 6 7 5 1 8 2 4 2 3 0 1 step 12: 0 1 2 0 6 7 5 1 8 0 4 2 3 2 1 step 13: 0 1 2 0 6 7 5 1 8 4 0 2 3 2 1 step 14: 0 1 2 0 6 7 0 1 8 4 5 2 3 2 1 step 15: 0 1 2 0 6 0 7 1 8 4 5 2 3 2 1 step 16: 0 1 2 0 6 4 7 1 8 0 5 2 3 2 1 step 17: 0 1 2 0 6 4 7 1 8 5 0 2 3 2 1
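###Markdown One useful extra check (an addition, not part of the assignment above) is puzzle solvability. For a 3x3 board, a start state can reach a goal state only if both have the same parity of tile inversions, so impossible inputs can be rejected before searching: ###Code
def count_inversions(puzzle):
    """Count tile pairs that appear in the wrong order (the blank 0 is ignored)."""
    flat = [tile for row in puzzle for tile in row if tile != 0]
    return sum(1 for i in range(len(flat))
               for j in range(i + 1, len(flat))
               if flat[i] > flat[j])

def is_solvable(start, goal):
    """On odd-width boards, reachability requires equal inversion parity."""
    return count_inversions(start) % 2 == count_inversions(goal) % 2

print(is_solvable(input_puzzle, desired_output))
###Output _____no_output_____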
docs/apphub/image_styletransfer/fst_coco/fst_coco.ipynb
###Markdown Fast Style Transfer with FastEstimatorIn this notebook we will demonstrate how to do a neural image style transfer with perceptual loss as described in [Perceptual Losses for Real-Time Style Transfer and Super-Resolution](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf).Typical neural style transfer involves two images, an image containing semantics that you want to preserve and another image serving as a reference style; the first image is often referred as *content image* and the other image as *style image*.In [paper](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf) training images of COCO2014 dataset are used to learn the style transfer from any content image. ###Code import os import cv2 import fastestimator as fe import tensorflow as tf import numpy as np import matplotlib from matplotlib import pyplot as plt ###Output _____no_output_____ ###Markdown In this notebook we will use *Wassily Kandinsky's Composition 7* as a style image.We will also resize the style image to $256 \times 256$ to make the dimension consistent with that of COCO images. ###Code style_img_path = tf.keras.utils.get_file( 'kandinsky.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg' ) style_img = cv2.imread(style_img_path) style_img = cv2.resize(style_img, (256, 256)) style_img = (style_img.astype(np.float32) - 127.5) / 127.5 style_img_t = tf.convert_to_tensor(np.expand_dims(style_img, axis=0)) style_img_disp = cv2.cvtColor((style_img + 1) * 0.5, cv2.COLOR_BGR2RGB) plt.imshow(style_img_disp) plt.title('Wassily Kandinsky\'s Composition 7') plt.axis('off'); #Parameters batch_size = 4 epochs = 2 steps_per_epoch = None validation_steps = None img_path = 'panda.jpeg' saved_model_path = 'style_transfer_net_epoch_1_step_41390.h5' ###Output _____no_output_____ ###Markdown Step 1: Input Pipeline Downloading the dataFirst, we will download training images of COCO2014 dataset via our dataset API. The images will be first downloaded. Then, a csv file containing relative paths to these images will be created. The root path of the downloaded images will be parent_path.Downloading the images will take awhile. ###Code from fastestimator.dataset.mscoco import load_data train_csv, path = load_data() ###Output reusing existing dataset ###Markdown Once finished downloading images, we need to define an *Operator* to recale pixel values from $[0, 255]$ to $[-1, 1]$.We will define our own `Rescale` class in which we define the data transform logic inside `forward` method. ###Code from fastestimator.op import TensorOp class Rescale(TensorOp): def forward(self, data, state): return (tf.cast(data, tf.float32) - 127.5) / 127.5 ###Output _____no_output_____ ###Markdown Creating tfrecordsOnce the images are downloaded, we will create tfrecords using `RecordWriter`.Each row of the csv file will be used by `ImageReader` to read in the image using `cv2.imread`.Then, we resize the images to $256 \times 256$. ###Code from fastestimator.op.numpyop import ImageReader, Resize from fastestimator.util import RecordWriter tfr_save_dir = os.path.join(path, 'tfrecords') writer = RecordWriter( train_data=train_csv, save_dir=tfr_save_dir, ops=[ ImageReader(inputs="image", parent_path=path, outputs="image"), Resize(inputs="image", target_size=(256, 256), outputs="image") ]) ###Output _____no_output_____ ###Markdown Defining an instance of `Pipeline`We can now define an instance of `Pipeline`. 
###Code pipeline = fe.Pipeline(batch_size=batch_size, data=writer, ops=[Rescale(inputs="image", outputs="image")]) ###Output _____no_output_____ ###Markdown Step 2: NetworkOnce `Pipeline` is defined, we need to define network architecture, losses, and the forward pass of batch data. Defining model architectureWe first create a `FEModel` instance which collects the following:* model definition* model name* loss name* optimizerThe architecture of the model is a modified resnet. ###Code from fastestimator.architecture.stnet import styleTransferNet model = fe.build(model_def=styleTransferNet, model_name="style_transfer_net", loss_name="loss", optimizer=tf.keras.optimizers.Adam(1e-3)) ###Output _____no_output_____ ###Markdown Defining LossThe perceptual loss described in the [paper](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf) is computed based on intermediate layers of VGG16 pretrained on ImageNet; specifically, `relu1_2`, `relu2_2`, `relu3_3`, and `relu4_3` of VGG16 are used.The *style* loss term is computed as the squared l2 norm of the difference in Gram Matrix of these feature maps between an input image and the reference stlye image.The *content* loss is simply l2 norm of the difference in `relu3_3` of the input image and the reference style image.In addition, the method also uses total variation loss to enforce spatial smoothness in the output image.The final loss is weighted sum of the style loss term, the content loss term (feature reconstruction term in the [paper](https://cs.stanford.edu/people/jcjohns/papers/eccv16/JohnsonECCV16.pdf)), and the total variation term.We first define a custom `TensorOp` that outputs intermediate layers of VGG16.Given these intermediate layers returned by the loss network as a dictionary, we define a custom `Loss` class that encapsulates all the logics of the loss calculation.Since `Loss` is also yet another `TensorOp`, the final loss value is returned by `forward` method. 
###Code from fastestimator.architecture.stnet import lossNet from fastestimator.op.tensorop import Loss class ExtractVGGFeatures(TensorOp): def __init__(self, inputs, outputs, mode=None): super().__init__(inputs, outputs, mode) self.vgg = lossNet() def forward(self, data, state): return self.vgg(data) class StyleContentLoss(Loss): def __init__(self, style_weight, content_weight, tv_weight, inputs, outputs=None, mode=None): super().__init__(inputs=inputs, outputs=outputs, mode=mode) self.style_weight = style_weight self.content_weight = content_weight self.tv_weight = tv_weight def calculate_style_recon_loss(self, y_true, y_pred): y_true_gram = self.calculate_gram_matrix(y_true) y_pred_gram = self.calculate_gram_matrix(y_pred) y_diff_gram = y_pred_gram - y_true_gram y_norm = tf.math.sqrt(tf.reduce_sum(tf.math.square(y_diff_gram), axis=(1, 2))) return (y_norm) def calculate_feature_recon_loss(self, y_true, y_pred): y_diff = y_pred - y_true num_elts = tf.cast(tf.reduce_prod(y_diff.shape[1:]), tf.float32) y_diff_norm = tf.reduce_sum(tf.square(y_diff), axis=(1, 2, 3)) / num_elts return (y_diff_norm) def calculate_gram_matrix(self, x): x = tf.cast(x, tf.float32) num_elts = tf.cast(x.shape[1] * x.shape[2] * x.shape[3], tf.float32) gram_matrix = tf.einsum('bijc,bijd->bcd', x, x) gram_matrix /= num_elts return gram_matrix def calculate_total_variation(self, y_pred): return (tf.image.total_variation(y_pred)) def forward(self, data, state): y_pred, y_style, y_content, image_out = data style_loss = [self.calculate_style_recon_loss(a, b) for a, b in zip(y_style['style'], y_pred['style'])] style_loss = tf.add_n(style_loss) style_loss *= self.style_weight content_loss = [ self.calculate_feature_recon_loss(a, b) for a, b in zip(y_content['content'], y_pred['content']) ] content_loss = tf.add_n(content_loss) content_loss *= self.content_weight total_variation_reg = self.calculate_total_variation(image_out) total_variation_reg *= self.tv_weight return style_loss + content_loss + total_variation_reg ###Output _____no_output_____ ###Markdown Defining forward passHaving defined the model and the associated loss, we can now define an instance of `Network` that specify forward pass of the batch data in a training loop.FastEstimator takes care of gradient computation and update of the model once this forward pass is defined. ###Code from fastestimator.op.tensorop import ModelOp style_weight=5.0 content_weight=1.0 tv_weight=1e-4 network = fe.Network(ops=[ ModelOp(inputs="image", model=model, outputs="image_out"), ExtractVGGFeatures(inputs=lambda: style_img_t, outputs="y_style"), ExtractVGGFeatures(inputs="image", outputs="y_content"), ExtractVGGFeatures(inputs="image_out", outputs="y_pred"), StyleContentLoss(style_weight=style_weight, content_weight=content_weight, tv_weight=tv_weight, inputs=('y_pred', 'y_style', 'y_content', 'image_out'), outputs='loss') ]) ###Output _____no_output_____ ###Markdown Step 3: EstimatorHaving defined `Pipeline` and `Network`, we can now define `Estimator`.We will use `Trace` to save intermediate models. ###Code from fastestimator.trace import ModelSaver import tempfile model_dir=tempfile.mkdtemp() estimator = fe.Estimator(network=network, pipeline=pipeline, epochs=epochs, steps_per_epoch=steps_per_epoch, validation_steps=validation_steps, traces=ModelSaver(model_name="style_transfer_net", save_dir=model_dir)) ###Output _____no_output_____ ###Markdown We call `fit` method of `Estimator` to start training. 
###Code estimator.fit() ###Output _____no_output_____ ###Markdown InferenceOnce the training is finished, we will apply the model to perform the style transfer on arbitrary images.Here we use a photo of a panda. ###Code test_img = cv2.imread(img_path) test_img = cv2.resize(test_img, (256, 256)) test_img = (test_img.astype(np.float32) - 127.5) / 127.5 test_img_t = tf.expand_dims(test_img, axis=0) model_path = os.path.join(model_dir, saved_model_path) trained_model = tf.keras.models.load_model(model_path, custom_objects={ "ReflectionPadding2D":fe.architecture.stnet.ReflectionPadding2D, "InstanceNormalization":fe.architecture.stnet.InstanceNormalization}, compile=False) output_img = trained_model.predict(test_img_t) output_img_disp = (output_img[0] + 1) * 0.5 test_img_disp = (test_img + 1) * 0.5 plt.figure(figsize=(20,20)) plt.subplot(131) plt.imshow(cv2.cvtColor(test_img_disp, cv2.COLOR_BGR2RGB)) plt.title('Original Image') plt.axis('off'); plt.subplot(132) plt.imshow(style_img_disp) plt.title('Style Image') plt.axis('off'); plt.subplot(133) plt.imshow(cv2.cvtColor(output_img_disp, cv2.COLOR_BGR2RGB)); plt.title('Transferred Image') plt.axis('off'); ###Output _____no_output_____
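###Markdown To wrap up, here is a small convenience function (an illustrative addition, not part of the tutorial) that bundles the inference steps above so any image on disk can be stylized and saved with OpenCV; the output file name is just an example: ###Code
def stylize_image(input_path, output_path, model, size=(256, 256)):
    """Run one image through the trained network and save the stylized result."""
    img = cv2.imread(input_path)
    img = cv2.resize(img, size)
    img = (img.astype(np.float32) - 127.5) / 127.5  # rescale to [-1, 1]
    out = model.predict(tf.expand_dims(img, axis=0))
    out = ((out[0] + 1.0) * 127.5).astype(np.uint8)  # back to [0, 255]
    cv2.imwrite(output_path, out)
    return output_path

stylize_image(img_path, 'panda_stylized.jpg', trained_model)
###Output _____no_output_____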
AppStat2022/Week5/original/MVA_part1/2par_discriminant.ipynb
###Markdown 2-parameters discriminant analysisPython notebook for constructing a Fisher discriminant from two 2D Gaussianly distributed correlated variables. The notebook creates artificial random data for two different types of processes, and the goal is then to separate these by constructing a Fisher discriminant. Authors: - Christian Michelsen (Niels Bohr Institute)- Troels C. Petersen (Niels Bohr Institute) Date: - 15-12-2021 (latest update) References:- Glen Cowan, Statistical Data Analysis, pages 51-57- http://en.wikipedia.org/wiki/Linear_discriminant_analysis*** ###Code import numpy as np # Matlab like syntax for linear algebra and functions import matplotlib.pyplot as plt # Plots and figures like you know them from Matlab from numpy.linalg import inv r = np.random # Random generator r.seed(42) # Set a random seed (but a fixed one) save_plots = False # For now, don't save plots (once you trust your code, switch on) ###Output _____no_output_____ ###Markdown Functions:Function to calculate the separation between two lists of numbers (see equation at the bottom of the script).__Note__: You need to fill in this function! ###Code def calc_separation(x, y): print("calc_separation needs to be filled out") d = 0 return d ###Output _____no_output_____ ###Markdown Define parameters:Number of species, their means and widths, correlations and the number of observations of each species: ###Code # Number of 'species': signal / background n_spec = 2 # Species A, mean and width for the two dimensions/parameters mean_A = [15.0, 50.0] width_A = [ 2.0, 6.0] # Species B, mean and width for the two dimensions/parameters mean_B = [12.0, 55.0] width_B = [ 3.0, 6.0] # Coefficient of correlation corr_A = 0.8 corr_B = 0.9 # Amount of data you want to create n_data = 2000 ###Output _____no_output_____ ###Markdown Generate data:For each "species", produce a number of $(x_0,x_1)$ points which are (linearly) correlated: ###Code # The desired covariance matrix. V_A = np.array([[width_A[0]**2, width_A[0]*width_A[1]*corr_A], [width_A[0]*width_A[1]*corr_A, width_A[1]**2]]) V_B = np.array([[width_B[0]**2, width_B[0]*width_B[1]*corr_B], [width_B[0]*width_B[1]*corr_B, width_B[1]**2]]) # Generate the random samples. 
spec_A = np.random.multivariate_normal(mean_A, V_A, size=n_data) spec_B = np.random.multivariate_normal(mean_B, V_B, size=n_data) ###Output _____no_output_____ ###Markdown *** Plot your generated data:We plot the 2D-data as 1D-histograms (basically projections) in $x_0$ and $x_1$: ###Code fig_1D, ax_1D = plt.subplots(ncols=2, figsize=(14, 6)) ax_1D[0].hist(spec_A[:, 0], 50, (0, 25), histtype='step', label='Species A', color='Red', lw=1.5) ax_1D[0].hist(spec_B[:, 0], 50, (0, 25), histtype='step', label='Species B', color='Blue', lw=1.5) ax_1D[0].set(title='Parameter x0', xlabel='x0', ylabel='Counts', xlim=(0,25)) ax_1D[0].legend(loc='upper left') # uncomment later #ax_1D[0].text(1, 176, fr'$\Delta_{{x0}} = {calc_separation(spec_A[:, 0], spec_B[:, 0]):.3f}$', fontsize=16) ax_1D[1].hist(spec_A[:, 1], 50, (20, 80), histtype='step', label='Species A', color='Red', lw=1.5) ax_1D[1].hist(spec_B[:, 1], 50, (20, 80), histtype='step', label='Species B', color='Blue', lw=1.5) ax_1D[1].set(title='Parameter x1', xlabel='x1', ylabel='Counts', xlim=(20, 80)) ax_1D[1].legend(loc='upper left') # uncomment later #ax_1D[1].text(22, 140, fr'$\Delta_{{x1}} = {calc_separation(spec_A[:, 1], spec_B[:, 1]):.3f}$', fontsize=16) fig_1D.tight_layout() if save_plots : fig_1D.savefig('InputVars_1D.pdf', dpi=600) ###Output _____no_output_____ ###Markdown NOTE: Hold off on drawing the 2D distribution, so that you think about the 1D distributions first!*** From the two 1D figures, it seems that species A and B can be separated to some degree, but not very well. If you were to somehow select cases of species A, then I can imagine a selection as follows: - If (x0 > 16) or (x0 > 13 and x1 < 52), then guess / select as A.Think about this yourself, and discuss with your peers, how you would go about separating A from B based on x0 and x1. ----------------------- 5-10 minutes later ----------------------- As it is, this type of selection is hard to optimise, especially with more dimensions (i.e. more variables than just x0 and x1). That is why Fisher's linear discriminant, $F$, is very useful. It makes the most separating linear combination of the input variables, and the coefficients can be calculated analytically. Thus, it is fast, efficient, and transparent. And it takes linear correlations into account. ###Code # fig_corr, ax_corr = plt.subplots(figsize=(14, 8)) # ax_corr.scatter(spec_A[:, 0], spec_A[:, 1], color='Red', s=10, label='Species A') # ax_corr.scatter(spec_B[:, 0], spec_B[:, 1], color='Blue', s=10, label='Species B') # ax_corr.set(xlabel='Parameter x0', ylabel='Parameter x1', title='Correlation'); # ax_corr.legend(); # fig_corr.tight_layout() #if save_plots : # fig_corr.savefig('InputVars_2D.pdf', dpi=600) ###Output _____no_output_____ ###Markdown Fisher Discriminant calculation:We want to find $\vec{w}$ defined by:$$\vec{w} = \left(\Sigma_A + \Sigma_B\right)^{-1} \left(\vec{\mu}_A - \vec{\mu}_B\right)$$ which we use to project our data into the best separating plane (line in this case) given by:$$ \mathcal{F} = w_0 + \vec{w} \cdot \vec{x} $$We start by finding the means and covariance of the individual species: (__fill in yourself!__) ###Code mu_A = 0 # fill in yourself mu_B = 0 # fill in yourself mu_A cov_A = 0 # fill in yourself cov_B = 0 # fill in yourself cov_sum = cov_A + cov_B cov_sum ###Output _____no_output_____ ###Markdown where `cov_sum` is the sum of all of the species' covariance matrices. We invert this using numpy's `inv` function. __Note__: fill in yourself! 
###Code # Delete the definition below of cov_sum when you have filled in the cells above: cov_sum = np.diag([1, 2]) # Inverts cov_sum cov_sum_inv = inv(cov_sum) cov_sum_inv ###Output _____no_output_____ ###Markdown We calculate the fisher weights, $\vec{w}$. __Note__: fill in yourself: ###Code wf = np.ones(2) # fill in yourself wf ###Output _____no_output_____ ###Markdown We calculate the fisher discriminant, $\mathcal{F}$. __Note__: fill in yourself: ###Code fisher_data_A = spec_A[:, 0] * (-1.4) + 10 # fill in yourself fisher_data_B = spec_B[:, 0] * (-1.4) + 10 # fill in yourself ###Output _____no_output_____ ###Markdown and plot it: ###Code fig_fisher, ax_fisher = plt.subplots(figsize=(12, 8)) ax_fisher.hist(fisher_data_A, 200, (-22, 3), histtype='step', color='Red', label='Species A') ax_fisher.hist(fisher_data_B, 200, (-22, 3), histtype='step', color='Blue', label='Species B') ax_fisher.set(xlim=(-22, 3), xlabel='Fisher-discriminant') ax_fisher.legend() # ax_fisher.text(-21, 60, fr'$\Delta_{{fisher}} = {calc_separation(fisher_data_A, fisher_data_B):.3f}$', fontsize=16) fig_fisher.tight_layout() if save_plots: fig_fisher.savefig('FisherOutput.pdf', dpi=600) ###Output _____no_output_____
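###Markdown For reference, here is one possible way to fill in the blanks above; try the exercise yourself before peeking. Note one assumption: the separation measure below uses the common definition $\Delta = |\mu_A - \mu_B| / \sqrt{\sigma_A^2 + \sigma_B^2}$, so check it against the equation the exercise refers to before relying on it. ###Code
# one possible reference solution (assumes the separation definition above)
def calc_separation_solution(x, y):
    d = np.abs(np.mean(x) - np.mean(y)) / np.sqrt(np.std(x, ddof=1)**2 + np.std(y, ddof=1)**2)
    return d

# means and (summed) covariances of the two species
mu_A_sol = np.mean(spec_A, axis=0)
mu_B_sol = np.mean(spec_B, axis=0)
cov_sum_sol = np.cov(spec_A, rowvar=False) + np.cov(spec_B, rowvar=False)

# Fisher weights and the projected, one-dimensional data
wf_sol = inv(cov_sum_sol) @ (mu_A_sol - mu_B_sol)
fisher_A_sol = spec_A @ wf_sol
fisher_B_sol = spec_B @ wf_sol
print(f'Fisher separation: {calc_separation_solution(fisher_A_sol, fisher_B_sol):.3f}')
###Output _____no_output_____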
Projeto House Rocket/ProjetoHouseRocket_MachineLearning.ipynb
###Markdown 1 - Which houses should the House Rocket CEO buy, and at what purchase price? 2 - Once a house is owned by the company, what should its sale price be? 3 - Should House Rocket renovate to increase the sale price? What changes would be suggested? How much does each renovation option add to the price? ###Code import pandas as pd import numpy as np from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import train_test_split from sklearn.metrics import r2_score ###Output _____no_output_____ ###Markdown Step 1: Import the data and create the model ###Code tabela = pd.read_csv('kc_house_data.csv') modelo = RandomForestRegressor() tabela2 = tabela ###Output _____no_output_____ ###Markdown Step 2: Check the state of the data ###Code tabela.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 21613 entries, 0 to 21612 Data columns (total 21 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 21613 non-null int64 1 date 21613 non-null object 2 price 21613 non-null float64 3 bedrooms 21613 non-null int64 4 bathrooms 21613 non-null float64 5 sqft_living 21613 non-null int64 6 sqft_lot 21613 non-null int64 7 floors 21613 non-null float64 8 waterfront 21613 non-null int64 9 view 21613 non-null int64 10 condition 21613 non-null int64 11 grade 21613 non-null int64 12 sqft_above 21613 non-null int64 13 sqft_basement 21613 non-null int64 14 yr_built 21613 non-null int64 15 yr_renovated 21613 non-null int64 16 zipcode 21613 non-null int64 17 lat 21613 non-null float64 18 long 21613 non-null float64 19 sqft_living15 21613 non-null int64 20 sqft_lot15 21613 non-null int64 dtypes: float64(5), int64(15), object(1) memory usage: 3.5+ MB ###Markdown Step 3: Cleaning and organizing ###Code tabela = tabela.drop(['date', 'id'], axis=1) tabela.floors = tabela.floors.astype(int) tabela.price = tabela.price.astype(int) tabela.bathrooms = tabela.bathrooms.astype(int) tabela.price = tabela.price.round(-3) display(tabela) ###Output _____no_output_____ ###Markdown Step 4: Modeling ###Code X = tabela.drop('price', axis=1) y = tabela['price'] x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=0.3, random_state=52) ###Output _____no_output_____ ###Markdown Step 5: Training the algorithm ###Code modelo.fit(x_train, y_train) pred = modelo.predict(x_test) r2_score(y_test, pred) ###Output _____no_output_____ ###Markdown Step 6: Exporting the model ###Code import joblib joblib.dump(modelo, 'model2.pkl') teste = np.array([[3,1,1180,5650,1,0,0,3,7,1180,0,1955,0,98178,47.5112,-122.257,1340,5650]]) modelo.predict(teste) ###Output _____no_output_____
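###Markdown Step 7: Reloading the exported model. As a quick check (added here as a sketch), the file saved in Step 6 can be loaded back and should reproduce the same prediction: ###Code
# load the model back from disk and confirm it predicts the same price
modelo_carregado = joblib.load('model2.pkl')
print(modelo_carregado.predict(teste))
###Output _____no_output_____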
greengems.ipynb
###Markdown Clash of Clans: How many builders do you *really* need? (or, how should I spend those green gems?)
Hello everyone! I am an avid mid-level town hall 8 clasher with 4 builders. Recently I discovered (like so many other [people](https://www.reddit.com/r/ClashOfClans/comments/2psnf3/strategy_lab_time_longer_than_builder_time_what/)) that at my level, research, not build time, is the limiting factor for progress. This made me wonder: is it really worth it to save up for the fifth builder? Or should I just spend gems on barracks/collector boosts, finishing research/hero upgrades in a timely fashion, etc.? To solve this conundrum I decided to do a bit of simple data analysis using the upgrade time data available on the [Clash of Clans wiki](http://clashofclans.wikia.com/wiki/Clash_of_Clans_Wiki). This next section contains a bit of Python used to prepare the dataset for visualization and analysis. If you aren't interested, just skip down to the [results section](#Results) ###Code %matplotlib inline import numpy as np import pandas as pd building_df = pd.read_csv("building_upgrade_data.csv") building_df = building_df[building_df["town_hall"] != 11] research_df = pd.read_csv("research_data.csv") research_df = research_df[research_df["town_hall"] != 11] # CONSTANTS HOURS_PER_DAY = 24.0 MIN_PER_DAY = HOURS_PER_DAY * 60 SEC_PER_DAY = MIN_PER_DAY * 60 UNIT_MAP = {"seconds": SEC_PER_DAY, "minutes": MIN_PER_DAY, "hours": HOURS_PER_DAY, "days": 1.0} # These functions parse the possible time strings from functools import reduce def parse_time(t): return int(t[0]) / UNIT_MAP[t[1]] def chunks(l, n): for i in range(0, len(l), n): yield l[i:i + n] def parse_time_string(s): return reduce(lambda x, y: x + y, map(parse_time, chunks(s.split(' '), 2))) building_df["build_days"] = building_df["build_time"].map(parse_time_string) research_df["research_days"] = research_df["research_time"].map(parse_time_string) def get_build_time(df): """This calculates total build time per town hall level""" build_time = {} grouped = df.groupby(["type"]) for name, group in grouped: regrouped = group.groupby("town_hall") prev_quant = group.iloc[0]["quantity"] for rname, rgroup in regrouped: quant = rgroup["quantity"].iloc[0] build_days = quant * rgroup["build_days"].sum() build_time.setdefault(rname, 0) build_time[rname] += build_days # This adds time to each town hall level based on new structure acquisition if quant > prev_quant: diff = quant - prev_quant catch_up_days = diff * group[group["town_hall"] < rname]["build_days"].sum() build_time[rname] += catch_up_days prev_quant = quant return pd.Series(build_time) build_times = get_build_time(building_df) # Get research times by town hall, don't forget to add lab upgrade time lab_build_days = building_df.groupby("type").get_group("laboratory")[["town_hall","build_days"]] research_times = research_df.groupby("town_hall")["research_days"].sum() lab_build_days["total_time"] = lab_build_days["build_days"] + research_times.values research_times = lab_build_days.set_index("town_hall")["total_time"] times = pd.concat([research_times, build_times], axis=1) times.columns = ["research_time", "build_time"] times["percent_research_time"] = times["research_time"].map( lambda x: x / times["research_time"].sum()) times["percent_build_time"] = times["build_time"].map( lambda x: x / times["build_time"].sum()) times = times.fillna(0) times ###Output _____no_output_____
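###Markdown As a quick sanity check on the time-string parser defined above, here are two hand-computed examples (the input strings are hypothetical, but use the same `"<number> <unit>"` format as the wiki data): ###Code
parse_time_string("1 days 12 hours")   # 1/1 + 12/24 = 1.5 days
parse_time_string("30 minutes")        # 30/1440 = about 0.0208 days
###Output _____no_output_____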
spatialmath/introduction.ipynb
###Markdown Working in 3D
Rotation
Rotations in 3D can be represented by rotation matrices – 3x3 orthonormal matrices – which belong to the group $\mbox{SO}(3)$. These are a subset of all possible 3x3 real matrices. We can create such a matrix, a rotation of $\pi/4$ radians around the x-axis, by ###Code R1 = SO3.Rx(pi/4) ###Output _____no_output_____ ###Markdown which is an object of type ###Code type(R1) ###Output _____no_output_____ ###Markdown which contains an $\mbox{SO}(3)$ matrix. We can display that matrix ###Code R1 ###Output  1  0  0    0  0.707107 -0.707107    0  0.707107  0.707107   ###Markdown which is colored red if the console supports color. The matrix, a numpy array, is encapsulated and not directly settable by the user. This way we can ensure that the matrix is a proper member of the $\mbox{SO}(3)$ group. We can _compose_ these rotations using the Python `*` operator ###Code R1 * R1 ###Output  1  0  0    0  0 -1    0  1  0   ###Markdown which is a rotation by $\pi/4$ _then_ another rotation by $\pi/4$, which is a total rotation of $\pi/2$ about the X-axis. We can double-check that ###Code SO3.Rx(pi/2) ###Output  1  0  0    0  0 -1    0  1  0   ###Markdown We could also have used the exponentiation operator ###Code R1**2 ###Output  1  0  0    0  0 -1    0  1  0   ###Markdown We can also specify the angle in degrees ###Code SO3.Rx(45, 'deg') ###Output  1  0  0    0  0.707107 -0.707107    0  0.707107  0.707107   ###Markdown We can visualize what this looks like by ###Code fig = plt.figure() # create a new figure SE3().plot(frame='0', dims=[-1.5,1.5], color='black') R1.plot(frame='1') ###Output _____no_output_____ ###Markdown Click on the coordinate frame and use the mouse to change the viewpoint. The world reference frame is shown in black, and the rotated frame is shown in blue. Often we need to describe more complex orientations and we typically use a _3 angle_ convention to do this. Euler's rotation theorem says that any orientation can be expressed in terms of three rotations about different axes. One common convention is roll-pitch-yaw angles ###Code R2 = SO3.RPY([10, 20, 30], unit='deg') R2 ###Output  0.813798 -0.44097  0.378522    0.469846  0.882564  0.0180283   -0.34202  0.163176  0.925417   ###Markdown which says that we rotate by 30&deg; about the Z-axis (yaw), _then_ 20&deg; about the Y-axis (pitch) and _then_ 10&deg; about the X-axis – this is the ZYX roll-pitch-yaw convention. Note that:
1. the first rotation in the sequence involves the last element in the angle sequence.
2. we can change the angle convention, for example by passing `order='xyz'`.
We can visualize the resulting orientation. 
###Code plt.figure() # create a new figure SE3().plot(frame='0', dims=[-1.5,1.5], color='black') R2.plot(frame='2') ###Output _____no_output_____ ###Markdown We can convert any rotation matrix back to its 3-angle representation ###Code R2.rpy() ###Output _____no_output_____ ###Markdown Constructors
The default constructor yields a null rotation ###Code SO3() ###Output  1  0  0    0  1  0    0  0  1   ###Markdown which is represented by the identity matrix. The class supports a number of variant constructors using class methods:
| Constructor | rotation |
|---------------|-----------|
| SO3() | null rotation |
| SO3.Rx(theta) | about X-axis |
| SO3.Ry(theta) | about Y-axis|
| SO3.Rz(theta) | about Z-axis|
| SO3.RPY(rpy) | from roll-pitch-yaw angle vector|
| SO3.Eul(euler) | from Euler angle vector |
| SO3.AngVec(theta, v) | from rotation and axis |
| SO3.Omega(v) | from a twist vector |
| SO3.OA | from orientation and approach vectors |
Imagine we want a rotation that describes a frame that has its y-axis (o-vector) pointing in the world negative z-axis direction and its z-axis (a-vector) pointing in the world x-axis direction ###Code SO3.OA(o=[0,0,-1], a=[1,0,0]) ###Output  0  0  1   -1  0  0    0 -1  0   ###Markdown We can redo our earlier example using `SO3.Rx()` with the explicit angle-axis notation ###Code SO3.AngVec(pi/4, [1,0,0]) ###Output  1  0  0    0  0.707107 -0.707107    0  0.707107  0.707107   ###Markdown or ###Code SO3.Exp([pi/4,0,0]) ###Output  1  0  0    0  0.707107 -0.707107    0  0.707107  0.707107   ###Markdown or a more complex example ###Code SO3.AngVec(30, [1,2,3], unit='deg') ###Output  0.875595 -0.381753  0.29597    0.420031  0.904304 -0.0762129   -0.238552  0.191048  0.952152   ###Markdown Properties
The object has a number of properties, such as the columns, which are often written as ${\bf R} = [n, o, a]$ where $n$, $o$ and $a$ are 3-vectors. For example ###Code R1.n ###Output _____no_output_____ ###Markdown or its inverse (in this case its transpose) ###Code R1.inv() ###Output  1  0  0    0  0.707107  0.707107    0 -0.707107  0.707107   ###Markdown the shape of the underlying matrix ###Code R1.shape ###Output _____no_output_____ ###Markdown and the order ###Code R1.N ###Output _____no_output_____ ###Markdown indicating it operates in 3D space. Predicates
We can check various properties of the object using properties and methods that are common to all classes in this package ###Code [R1.isSE, R1.isSO, R1.isrot(), R1.ishom(), R1.isrot2(), R1.ishom2()] ###Output _____no_output_____ ###Markdown The last four in this list provide compatibility with the Spatial Math Toolbox for MATLAB. Quaternions
A quaternion is often described as a type of complex number, but it is more useful (and simpler) to think of it as an ordered pair comprising a scalar and a vector. We can create a quaternion ###Code q1 = Quaternion([1,2,3,4]) q1 ###Output 1.000000 < 2.000000, 3.000000, 4.000000 > ###Markdown where the scalar is before the angle brackets which enclose the vector part. 
Properties allow us to extract the scalar part ###Code q1.s ###Output _____no_output_____ ###Markdown and the vector part ###Code q1.v ###Output _____no_output_____ ###Markdown and we can represent it as a numpy array ###Code q1.vec ###Output _____no_output_____ ###Markdown A quaternion has a conjugate ###Code q1.conj() ###Output 1.000000 < -2.000000, -3.000000, -4.000000 > ###Markdown and a norm, which is the magnitude of the equivalent 4-vector ###Code q1.norm() ###Output _____no_output_____ ###Markdown We can create a second quaternion ###Code q2 = Quaternion([5,6,7,8]) q2 ###Output 5.000000 < 6.000000, 7.000000, 8.000000 > ###Markdown Operators allow us to add ###Code q1 + q2 ###Output 6.000000 < 8.000000, 10.000000, 12.000000 > ###Markdown subtract ###Code q1 - q2 ###Output -4.000000 < -4.000000, -4.000000, -4.000000 > ###Markdown and to multiply ###Code q1 * q2 ###Output -60.000000 < 12.000000, 30.000000, 24.000000 > ###Markdown which follows the special rules of Hamilton multiplication. Multiplication can also be performed as the linear algebraic product of one quaternion converted to a 4x4 matrix ###Code q1.matrix ###Output _____no_output_____ ###Markdown and the other as a 4-vector ###Code q1.matrix @ q2.vec ###Output _____no_output_____ ###Markdown The product of a quaternion and its conjugate is a scalar equal to the square of its norm ###Code q1 * q1.conj() ###Output 30.000000 < 0.000000, 0.000000, 0.000000 > ###Markdown Conversely, a quaternion with a zero scalar part is called a _pure quaternion_ ###Code Quaternion.Pure([1, 2, 3]) ###Output 0.000000 < 1.000000, 2.000000, 3.000000 > ###Markdown Unit quaternions
A quaternion with a unit norm is called a _unit quaternion_. It is a group and its elements represent rotation in 3D space. It is in all regards like an $\mbox{SO}(3)$ matrix except for a _double mapping_ -- a quaternion and its element-wise negation represent the same rotation. ###Code q1 = UnitQuaternion.Rx(30, 'deg') q1 ###Output 0.965926 << 0.258819, 0.000000, 0.000000 >> ###Markdown the convention is that unit quaternions are denoted using double angle brackets. The norm, as advertised, is indeed one ###Code q1.norm() ###Output _____no_output_____ ###Markdown We create another unit quaternion ###Code q2 = UnitQuaternion.Ry(-40, 'deg') q2 ###Output 0.939693 << 0.000000, -0.342020, 0.000000 >> ###Markdown The rotations can be composed by quaternion multiplication ###Code q3 = q1 * q2 q3 ###Output 0.907673 << 0.243210, -0.330366, -0.088521 >> ###Markdown We can convert a quaternion to a rotation matrix ###Code q3.R ###Output _____no_output_____ ###Markdown which yields exactly the same answer as if we'd done it using SO(3) rotation matrices ###Code SO3.Rx(30, 'deg') * SO3.Ry(-40, 'deg') ###Output  0.766044  0 -0.642788   -0.321394  0.866025 -0.383022    0.55667  0.5  0.663414   ###Markdown The advantages of unit quaternions are that:
1. they are compact, just 4 numbers instead of 9
2. multiplication involves fewer operations and is therefore faster
3. numerical errors build up when we multiply rotation matrices together many times, and they lose the structure (the columns are no longer unit length or orthogonal). Correcting this, a process called _normalization_, is expensive. 
For unit quaternions errors will also compound, but normalization is simply a matter of dividing through by the norm.
Unit quaternions have an inverse ###Code q2.inv() q1 * q2.inv() ###Output 0.907673 << 0.243210, 0.330366, 0.088521 >> ###Markdown or ###Code q1 / q2 ###Output 0.907673 << 0.243210, 0.330366, 0.088521 >> ###Markdown We can convert any unit quaternion to an SO3 object if we wish ###Code q1.SO3() ###Output  1  0  0    0  0.866025 -0.5    0  0.5  0.866025   ###Markdown and conversely, any `SO3` object to a unit quaternion ###Code UnitQuaternion( SO3.Rx(30, 'deg')) ###Output 0.965926 << 0.258819, 0.000000, 0.000000 >> ###Markdown A unit quaternion is not a minimal representation. Since we know the magnitude is 1, then from any 3 elements we can compute the fourth, up to a sign ambiguity. ###Code q1.vec3 a = UnitQuaternion.qvmul( q1.vec3, q2.vec3) a ###Output _____no_output_____ ###Markdown from which we can recreate the unit quaternion ###Code UnitQuaternion.Vec3(a) ###Output 0.907673 << 0.243210, -0.330366, -0.088521 >> ###Markdown Representing position
In robotics we also need to describe the position of objects, and we can do this with a _homogeneous transformation_ matrix – a 4x4 matrix – which belongs to the group $\mbox{SE}(3)$, a subset of all 4x4 real matrices. We can create such a matrix, for a translation of 1 in the x-direction, 2 in the y-direction and 3 in the z-direction, by ###Code T1 = SE3(1, 2, 3) T1 ###Output  1  0  0  1    0  1  0  2    0  0  1  3    0  0  0  1   ###Markdown which is displayed in a color-coded fashion: rotation matrix in red, translation vector in blue, and the constant bottom row in grey. We note that the red matrix is an _identity matrix_. The class supports a number of variant constructors using class methods.
| Constructor | motion |
|---------------|-----------|
| SE3() | null motion |
| SE3.Tx(d) | translation along X-axis |
| SE3.Ty(d) | translation along Y-axis |
| SE3.Tz(d) | translation along Z-axis |
| SE3.Rx(theta) | rotation about X-axis |
| SE3.Ry(theta) | rotation about Y-axis|
| SE3.Rz(theta) | rotation about Z-axis|
| SE3.RPY(rpy) | rotation from roll-pitch-yaw angle vector|
| SE3.Eul(euler) | rotation from Euler angle vector |
| SE3.AngVec(theta, v) | rotation from rotation and axis |
| SO3.Omega(v) | from a twist vector |
| SE3.OA(ovec, avec) | rotation from orientation and approach vectors |
We can visualize this ###Code plt.figure() # create a new figure SE3().plot(frame='0', dims=[0,4], color='black') T1.plot(frame='1') ###Output _____no_output_____ ###Markdown We can define another translation ###Code T12 = SE3(2, -1, -2) ###Output _____no_output_____ ###Markdown and compose it with `T1` ###Code T2 = T1 * T12 T2.plot(frame='2', color='red') ###Output _____no_output_____ ###Markdown Representing pose ###Code T1 = SE3(1, 2, 3) * SE3.Rx(30, 'deg') T1 ###Output  1  0  0  1    0  0.866025 -0.5  2    0  0.5  0.866025  3    0  0  0  1   ###Markdown This is a composition of two motions: a pure translation and _then_ a pure rotation. We can see the rotation matrix, computed above, in the top-left corner and the translation components in the right-most column. 
In the earlier example `Out[24]` was simply a null rotation, which is represented by the identity matrix. The frame now looks like this ###Code plt.figure() # create a new figure SE3().plot(frame='0', dims=[0,4], color='black') T1.plot(frame='1') ###Output _____no_output_____ ###Markdown Properties
The object has a number of properties, such as the columns, which are often written as $[n, o, a]$ ###Code T1.o ###Output _____no_output_____ ###Markdown or its inverse (computed in an efficient manner based on the structure of the matrix) ###Code T1.inv() ###Output  1  0  0 -1    0  0.866025  0.5 -3.23205    0 -0.5  0.866025 -1.59808    0  0  0  1   ###Markdown We can extract the rotation matrix as a numpy array ###Code T1.R ###Output _____no_output_____ ###Markdown or the translation vector, as a numpy array ###Code T1.t ###Output _____no_output_____ ###Markdown The shape of the underlying SE(3) matrix is ###Code T1.shape ###Output _____no_output_____ ###Markdown and the order ###Code T1.N ###Output _____no_output_____ ###Markdown indicating it operates in 3D space. Predicates
We can check various properties ###Code [T1.isSE, T1.isSO, T1.isrot(), T1.ishom(), T1.isrot2(), T1.ishom2()] ###Output _____no_output_____ ###Markdown An important point: when we compose motions they must be of the same type. An `SE3` object can represent pure translation, pure rotation or both. If we wish to compose a translation with a rotation, the rotation must be an `SE3` object - a rotation plus zero translation.
Transforming points
Imagine now a set of points defining the vertices of a cube ###Code P = np.array([[-1, 1, 1, -1, -1, 1, 1, -1], [-1, -1, 1, 1, -1, -1, 1, 1], [-1, -1, -1, -1, 1, 1, 1, 1]]) P ###Output _____no_output_____ ###Markdown defined with respect to a body reference frame ${}^A P_i$. Given a transformation ${}^0 \mathbf{T}_A$ from the world frame to the body frame, we determine the coordinates of the points in the world frame by ${}^0 P_i = {}^0 \mathbf{T}_A \, {}^A P_i$ which we can perform in a single operation ###Code Q = T1 * P ###Output _____no_output_____ ###Markdown which we can now plot ###Code fig = plt.figure() SE3().plot(frame='0', dims=[-2,3,0,5,0,5], color='black') ax = plt.gca() ax.set_xlabel('X'); ax.set_ylabel('Y'); ax.set_zlabel('Z'); ax.scatter(xs=Q[0], ys=Q[1], zs=Q[2], s=20) # draw vertices # draw lines joining the vertices lines = [[0,1,5,6], [1,2,6,7], [2,3,7,4], [3,0,4,5]] for line in lines: ax.plot([Q[0,i] for i in line], [Q[1,i] for i in line], [Q[2,i] for i in line]) ###Output _____no_output_____ ###Markdown This is often used in SLAM and bundle adjustment algorithms since it is compact and better behaved than using roll-pitch-yaw or Euler angles. Twists
A twist is an alternative way to represent a 3D pose, but it is more succinct, comprising just 6 values. In contrast, an SE(3) matrix has 16 values with a considerable amount of redundancy, but it does offer considerable computational convenience. Twists are the logarithm of an SE(3) matrix ###Code T = SE3.Rand() T T.log() ###Output _____no_output_____ ###Markdown How do we know this is really the logarithm? Well, we can exponentiate it ###Code lg = T.log() SE3.Exp(lg) ###Output  0.570802  0.722709  0.389714  0.483881   -0.255076  0.607224 -0.752472 -0.702483   -0.780462  0.330106  0.53095  0.497569    0  0  0  1   ###Markdown and we have reconstituted our original matrix. 
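###Markdown We can make that round-trip check explicit with numpy (a small sketch; it assumes `.A` extracts the underlying numpy array from a pose object, and uses `T` and `SE3.Exp` exactly as above): ###Code
np.allclose(T.A, SE3.Exp(T.log()).A)   # True: exp(log(T)) reproduces T
###Output _____no_output_____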
The logarithm is a matrix with a very particular structure: it has a zero diagonal and bottom row, and the top-left 3x3 matrix is skew-symmetric. This matrix has only 6 unique elements: three from the last column, and three from the skew-symmetric matrix, and we can request the `log` method to give us just these ###Code T.log(twist=True) ###Output _____no_output_____ ###Markdown This 6-vector is a twist, a concise way to represent the translational and rotational components of a pose. Twists are represented by their own class ###Code tw = Twist3(T) tw ###Output (0.42701 -0.3207 0.89151; 0.69954 0.75614 -0.63182) ###Markdown Just like the other pose objects, `Twist3` objects can have multiple values. Twists can be composed ###Code T = SE3(1, 2, 3) * SE3.Rx(0.3) tw = Twist3(T) tw ###Output (1 2.435 2.6775; 0.3 0 0) ###Markdown Now we can compose the twists ###Code tw2 = tw * tw tw2 ###Output (2 4.87 5.3549; 0.6 0 0) ###Markdown and the result is just the same as if we had composed the transforms ###Code Twist3(T * T) ###Output (2 4.87 5.3549; 0.6 0 0) ###Markdown Twists have great utility for robot arm kinematics, to compute the forward kinematics and Jacobians. Twist objects have a number of methods. The adjoint is a 6x6 matrix that relates velocities ###Code tw.Ad() ###Output _____no_output_____ ###Markdown and the `SE3` object also has this method. The logarithm of the adjoint is given by ###Code tw.ad() ###Output _____no_output_____ ###Markdown The name twist comes from considering the rigid-body motion as a rotation and a translation along a unique line of action. It rotates as it moves along the line, following a screw-like motion, hence its other name, a _screw_. The line in 3D space is described in Plücker coordinates by ###Code tw.line() ###Output { 0.91 2.435 2.6775; 0.3 0 0} ###Markdown The pitch of the screw is ###Code tw.pitch() ###Output _____no_output_____ ###Markdown and a point on the line is ###Code tw.pole() ###Output _____no_output_____ ###Markdown Working in 2D
Things are actually much simpler in 2D. There's only one possible rotation, which is around an axis perpendicular to the plane (where the z-axis would have been if it were in 3D). Rotations in 2D can be represented by rotation matrices – 2x2 orthonormal matrices – which belong to the group SO(2). Just as in the 3D case, these matrices have special properties: each column (and row) is a unit vector, the columns are all orthogonal, the inverse of the matrix is equal to its transpose, and its determinant is +1. We can create such a matrix, a rotation of $\pi/4$ radians, by ###Code R = SO2(pi/4) R ###Output  0.707107 -0.707107    0.707107  0.707107   ###Markdown or in degrees ###Code SO2(45, unit='deg') ###Output  0.707107 -0.707107    0.707107  0.707107   ###Markdown and we can plot this on the 2D plane ###Code plt.figure() # create a new figure R.plot() ###Output _____no_output_____ ###Markdown Once again, it's useful to describe the position of things, and we do this with a homogeneous transformation matrix – a 3x3 matrix – which belongs to the group SE(2). ###Code T = SE2(1, 2) T ###Output  1  0  1    0  1  2    0  0  1   ###Markdown which has a similar structure to the 3D case. 
The rotation matrix is in the top-left corner and the translation components are in the right-most column. We can also call the function with the elements in a list ###Code T = SE2([1, 2]) plt.figure() # create a new figure T.plot() T2 = SE2(45, unit='deg') T2 plt.figure() # create a new figure T2.plot() ###Output _____no_output_____ ###Markdown The in-place versions of operators are also supported, for example ###Code X = T X /= T2 X ###Output  0.707107  0.707107  1   -0.707107  0.707107  2    0  0  1   ###Markdown Operators
Group operators
For the 3D case, the classes we have introduced mimic the behavior of the mathematical groups $\mbox{SO}(3)$ and $\mbox{SE}(3)$, which contain matrices of particular structure. They are subsets respectively of the sets of all possible real 3x3 and 4x4 matrices. The only operations on two elements of the group that also belong to the group are composition (represented by the `*` operator) and inversion. ###Code T1 = SE3(1, 2, 3) * SE3.Rx(30, 'deg') [type(T1), type(T1.inv()), type(T1*T1)] ###Output _____no_output_____ ###Markdown If we know the pose of frame {2} and a _rigid body motion_ from frame {1} to frame {2} ###Code T2 = SE3(4, 5, 6) * SE3.Ry(-40, 'deg') T12 = SE3(0, -2, -1) * SE3.Rz(70, 'deg') ###Output _____no_output_____ ###Markdown then ${}^0{\bf T}_1 \bullet {}^1{\bf T}_2 = {}^0{\bf T}_2$, and hence ${}^0{\bf T}_1 = {}^0{\bf T}_2 \bullet ({}^1{\bf T}_2)^{-1}$, which we write as ###Code T1 * T2.inv() ###Output  0.766044  0  0.642788 -5.9209    0.321394  0.866025 -0.383022 -1.31757   -0.55667  0.5  0.663414 -1.2538    0  0  0  1   ###Markdown or more concisely as ###Code T1 / T2 ###Output  0.766044  0  0.642788 -5.9209    0.321394  0.866025 -0.383022 -1.31757   -0.55667  0.5  0.663414 -1.2538    0  0  0  1   ###Markdown Exponentiation is also a group operator since it is simply repeated composition ###Code T1 ** 2 ###Output  1  0  0  2    0  0.5 -0.866025  2.23205    0  0.866025  0.5  6.59808    0  0  0  1   ###Markdown Non-group operations
Operations such as addition and subtraction are valid for matrices but not for elements of the group, therefore these operations will return a numpy array rather than a group object ###Code SE3() + SE3() ###Output _____no_output_____ ###Markdown yields an array, not an `SE3` object. As do other non-group operations ###Code 2 * SE3() SE3() - 1 ###Output _____no_output_____ ###Markdown Similar principles apply to quaternions. Unit quaternions are a group and only support composition and inversion. Any other operations will return an ordinary quaternion ###Code UnitQuaternion() * 2 ###Output 2.000000 < 0.000000, 0.000000, 0.000000 > ###Markdown which is indicated by the single angle brackets. In-place operators
All of Python's in-place operators are available as well, whether for group or non-group operations. For example ###Code T = T1 T *= T2 T **= 2 ###Output _____no_output_____ ###Markdown Multi-valued objects
For many tasks we might want to have a set or sequence of rotations or poses. The obvious solution would be to use a Python list ###Code T = [ SE3.Rx(0), SE3.Rx(0.1), SE3.Rx(0.2), SE3.Rx(0.3), SE3.Rx(0.4)] ###Output _____no_output_____ ###Markdown but the pose objects in this package can hold multiple values, just like a native Python list can. 
There are a few ways to do this, most obviously ###Code T = SE3( [ SE3.Rx(0), SE3.Rx(0.1), SE3.Rx(0.2), SE3.Rx(0.3), SE3.Rx(0.4)] ) ###Output _____no_output_____ ###Markdown which has the type of a pose object ###Code type(T) ###Output _____no_output_____ ###Markdown but it has a length of five ###Code len(T) ###Output _____no_output_____ ###Markdown that is, it contains five values. We can see these when we display the object's value ###Code T ###Output [0] =   1  0  0  0    0  1  0  0    0  0  1  0    0  0  0  1   [1] =   1  0  0  0    0  0.995004 -0.0998334  0    0  0.0998334  0.995004  0    0  0  0  1   [2] =   1  0  0  0    0  0.980067 -0.198669  0    0  0.198669  0.980067  0    0  0  0  1   [3] =   1  0  0  0    0  0.955336 -0.29552  0    0  0.29552  0.955336  0    0  0  0  1   [4] =   1  0  0  0    0  0.921061 -0.389418  0    0  0.389418  0.921061  0    0  0  0  1   ###Markdown We can index into the object (slice it) just as we would a Python list ###Code T[3] ###Output  1  0  0  0    0  0.955336 -0.29552  0    0  0.29552  0.955336  0    0  0  0  1   ###Markdown or from the second element, stepping by two, up to (but not including) the last ###Code T[1:-1:2] ###Output [0] =   1  0  0  0    0  0.995004 -0.0998334  0    0  0.0998334  0.995004  0    0  0  0  1   [1] =   1  0  0  0    0  0.955336 -0.29552  0    0  0.29552  0.955336  0    0  0  0  1   ###Markdown We could append another value to the end ###Code T.append( SE3.Rx(0.5) ) len(T) ###Output _____no_output_____ ###Markdown The `SE3` class, like all the classes in this package, inherits from the `UserList` class, giving it all the methods of a Python list such as append, extend, del, etc. We can also use them as _iterables_ in _for_ loops and in list comprehensions. You can create an object of a particular type with no elements using this constructor ###Code T = SE3.Empty() len(T) ###Output _____no_output_____ ###Markdown which is the equivalent of setting a variable to `[]`. We could write the above example more succinctly ###Code T = SE3.Rx( np.linspace(0, 0.5, 5) ) len(T) T[3] ###Output  1  0  0  0    0  0.930508 -0.366273  0    0  0.366273  0.930508  0    0  0  0  1   ###Markdown Consider another rotation ###Code T2 = SE3.Ry(40, 'deg') ###Output _____no_output_____ ###Markdown If we write ###Code A = T * T2 len(A) ###Output _____no_output_____ ###Markdown we obtain a new list where each element of `A` is `T[i] * T2`. Similarly ###Code B = T2 * T len(B) ###Output _____no_output_____ ###Markdown which has produced a new list where each element of `B` is `T2 * T[i]`. Similarly ###Code C = T * T len(C) ###Output _____no_output_____ ###Markdown yields a new list where each element of `C` is `T[i] * T[i]`. We can apply such a sequence to a coordinate vector as we did earlier ###Code P = T * [0, 1, 0] P ###Output _____no_output_____ ###Markdown where each element of `T` has transformed the coordinate vector (0, 1, 0), the results being consecutive columns of the resulting numpy array. This is equivalent to writing ###Code np.column_stack([x * [0,1,0] for x in T]) ###Output _____no_output_____ ###Markdown C++ like programming model
Lists are useful, but we might like to use a programming model where we allocate an array of pose objects and reference them or assign to them. We can do that too! 
###Code T = SE3.Alloc(5) # create a vector of SE3 values for i, theta in enumerate(np.linspace(0, 1, len(T))): T[i] = SE3.Rz(theta) T ###Output [0] =   1  0  0  0    0  1  0  0    0  0  1  0    0  0  0  1   [1] =   0.968912 -0.247404  0  0    0.247404  0.968912  0  0    0  0  1  0    0  0  0  1   [2] =   0.877583 -0.479426  0  0    0.479426  0.877583  0  0    0  0  1  0    0  0  0  1   [3] =   0.731689 -0.681639  0  0    0.681639  0.731689  0  0    0  0  1  0    0  0  0  1   [4] =   0.540302 -0.841471  0  0    0.841471  0.540302  0  0    0  0  1  0    0  0  0  1  
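###Markdown A short sketch tying these ideas together: build a sequence of poses with `Alloc`, then apply every pose to a single point in one call (only constructs already shown in this notebook are used): ###Code
T = SE3.Alloc(5)                          # five poses
for i, theta in enumerate(np.linspace(0, 1, len(T))):
    T[i] = SE3(1, 0, 0) * SE3.Rz(theta)   # translate, then rotate about z
P = T * [0, 1, 0]                         # one column of P per pose
###Output _____no_output_____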
20210519/housing_force03.ipynb
###Markdown State
$$x = [w,n,m,s,e,o]$$
- $w$: wealth level, size 20
- $n$: 401k level, size 10
- $m$: mortgage level, size 10
- $s$: economic state, size 8
- $e$: employment state, size 2
- $o$: housing state, size 2

Action
- $c$: consumption amount, size 20
- $b$: bond investment, size 20
- $k$: stock investment, derived from the budget constraint once $c$ and $b$ are determined.
- $h$: housing consumption size, related to housing status and consumption level

If $o = 1$, the agent owns a house:
- $A = [c, b, k, h=H, action = 1]$: sell the house
- $A = [c, b, k, h=H, action = 0]$: keep the house

If $o = 0$, the agent does not own a house:
- $A = [c, b, k, h= \frac{c}{\alpha} \frac{1-\alpha}{pr}, action = 0]$: keep renting
- $A = [c, b, k, h= \frac{c}{\alpha} \frac{1-\alpha}{pr}, action = 1]$: buy a house of $H$ units

Housing
20% down payment, fixed mortgage rate, and a single housing unit available. Between ages 20 and 50 agents can choose to buy the house, and can choose to sell it at any moment. $H = 1000$ ###Code %%time for t in tqdm(range(T_max-1,T_min-1, -1)): if t == T_max-1: v,cbkha = vmap(partial(V,t,Vgrid[:,:,:,:,:,:,t]))(Xs) else: v,cbkha = vmap(partial(V,t,Vgrid[:,:,:,:,:,:,t+1]))(Xs) Vgrid[:,:,:,:,:,:,t] = v.reshape(dim) cgrid[:,:,:,:,:,:,t] = cbkha[:,0].reshape(dim) bgrid[:,:,:,:,:,:,t] = cbkha[:,1].reshape(dim) kgrid[:,:,:,:,:,:,t] = cbkha[:,2].reshape(dim) hgrid[:,:,:,:,:,:,t] = cbkha[:,3].reshape(dim) agrid[:,:,:,:,:,:,t] = cbkha[:,4].reshape(dim) np.save("Value03",Vgrid) ###Output _____no_output_____
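###Markdown The saved value function can be reloaded later for policy analysis (a minimal sketch; the `np.save` call above writes the file `Value03.npy`): ###Code
Vgrid = np.load("Value03.npy")
print(Vgrid.shape)   # the (w, n, m, s, e, o, T) grid of state values
###Output _____no_output_____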
P3-Traffic_sign_classifier/CarND-Traffic-Sign-Classifier-Project/Traffic_Sign_Classifier.ipynb
###Markdown Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary. > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project. The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode. --- Step 0: Load The Data ###Code # Load pickled data import pickle from sklearn.model_selection import train_test_split import random import matplotlib.pyplot as plt from tensorflow.contrib.layers import flatten from sklearn.utils import shuffle import tensorflow as tf import numpy as np import cv2 from skimage.transform import rotate import glob training_file = '../data/train.p' # '../data/mod_train0.p' '../data/train.p' testing_file = '../data/test.p' # '../data/mod_test0.p' '../data/test.p' with open(training_file, mode='rb') as f: train = pickle.load(f) with open(testing_file, mode='rb') as f: test = pickle.load(f) X_train, y_train = train['features'], train['labels'] X_test, y_test = test['features'], test['labels'] ###Output _____no_output_____ ###Markdown --- Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height), representing the original width and height of the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2), representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
Complete the basic data summary below. 
Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas ###Code assert(len(X_train) == len(y_train)) assert(len(X_test) == len(y_test)) n_train = len(X_train) n_test = len(X_test) n_classes = len(set(y_train)) image_shape = X_train[0].shape print("Number of training examples =", n_train) print("Number of testing examples =", n_test) print("Image data shape =", image_shape) print("Number of classes =", n_classes) ###Output Number of training examples = 34799 Number of testing examples = 12630 Image data shape = (32, 32, 3) Number of classes = 43 ###Markdown Include an exploratory visualization of the dataset Visualization of the dataset: ###Code ### Data exploration visualization code goes here. %matplotlib inline # select and show a random sample from each class rows, cols = 6, 8 fig, axs = plt.subplots(rows, cols) plt.suptitle('Random images from the German Traffic Signs Dataset ') for sign_class_idx, ax in enumerate(axs.ravel()): if sign_class_idx < n_classes: sign_class_img_set = X_train[y_train == sign_class_idx] sign_class_rnd_img = sign_class_img_set[np.random.randint(len(sign_class_img_set))] ax.imshow(sign_class_rnd_img) #ax.set_title('{:02d}'.format(sign_class_idx), fontweight='bold') ax.axis('off') else: ax.axis('off') # hide x and y ticks plt.setp([a.get_xticklabels() for a in axs.ravel()], visible=False) plt.setp([a.get_yticklabels() for a in axs.ravel()], visible=False) plt.show() ###Output _____no_output_____ ###Markdown A histogram of the original training data set shows that some classes may not have enough data for high-accuracy recognition. On the other hand, the distribution of the test data is quite similar to the train data, so I would expect that sign recognition will not be biased toward a particular sign. ###Code plt.hist(y_train, bins=n_classes, color='blue', alpha=0.7, rwidth=0.85) plt.hist(y_test, bins=n_classes, color='orange', alpha=0.7, rwidth=0.85) plt.legend(["Train data", "Test data"]) plt.grid(axis='y', alpha=0.75) plt.title('Histogram of the German Traffic Signs Dataset') plt.xlabel('Traffic sign') plt.ylabel('Counts') ###Output _____no_output_____ ###Markdown ---- Step 2: Design and Test a Model Architecture Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset). The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. 
There are various aspects to consider when thinking about this problem:
- Neural network architecture (is the network over- or underfitting?)
- Play around with preprocessing techniques (normalization, rgb to grayscale, etc)
- Number of examples per label (some have more than others).
- Generate fake data.
Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper, but it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) To improve the accuracy, the first thing I did was increase the data set size. Initially, I implemented several steps: rotation, warping, and shifting the images in x and y coordinates. But the data size increased hugely, so that my PC would run out of memory very quickly. In the end, I kept only the image rotation option, and the data set size increased by a factor of 5. ###Code import numpy.matlib # required for np.matlib.repmat below def rotate_image(image, max_angle =15): rotate_out = rotate(image, np.random.uniform(-max_angle, max_angle), mode='edge') return rotate_out aug_mode = 1 # =1 generated augmented data set # =0 load already generated augmented data set save_aug_img = 0 num_rot = 5 # number of rotations per image mod_training_file = '../data/mod_train0.p' if 1 == aug_mode: # # generated augmented data set y_train1 = np.matlib.repmat(y_train, num_rot, 1) y_train1 = y_train1.T.reshape(-1) X_train1 = np.zeros([len(X_train)*num_rot, 32, 32, 3], dtype=np.uint8) for idx in range(len(X_train)): for idx1 in range(num_rot): k = idx * num_rot + idx1 # convert back to 8-bit integers (uint8), saving memory by a factor of 8 vs float64 X_train1[k, :, :, :] = np.uint8(rotate_image(X_train[idx, :, :, :], max_angle=15)*255.0) X_train, y_train = X_train1, y_train1 if 1 == save_aug_img: with open(mod_training_file, mode='wb') as f: pickle.dump({'features': X_train, 'labels': y_train}, f) else: # load already generated augmented data set with open(mod_training_file, mode='rb') as f: train = pickle.load(f) X_train, y_train = train['features'], train['labels'] ###Output _____no_output_____ ###Markdown After some experimenting I kept just the straightforward normalization: ###Code # Normalise input X_train = (X_train - np.min(X_train)) / (np.max(X_train) - np.min(X_train)) X_test = (X_test - np.min(X_test)) / (np.max(X_test) - np.min(X_test)) X_train, y_train = shuffle(X_train, y_train) X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=60) ###Output _____no_output_____ ###Markdown Other things I tried briefly were color spaces (for example YUV, as mentioned in [[LeCun]](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf), and grayscale) and histogram equalization. But I didn't see much progress, nor did I want to use more GPU time. Model Architecture With the LeNet architecture I reached 94% accuracy. After that I modified it slightly: I removed one of the fully connected layers and increased the depth of the activation volumes instead. The idea here was to increase the depth column in order to capture more detail, as mentioned in [[LeCun]](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf): "In the case of 2 stages of features, the second stage extracts“global” and invariant shapes and structures, while the firststage extracts “local” motifs with more precise details." 
###Code BATCH_SIZE = 128 EPOCHS = 10 rate = 0.001 mu = 0 sigma = 0.1 conv1_depth = 64 conv2_depth = 128 fc1_depth = 64 fc2_depth = n_classes last_saved_epoch = 0 # not a parameter, don't change it # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, conv1_depth), mean=mu, stddev=sigma), name='weights_0') conv1_b = tf.Variable(tf.zeros(conv1_depth), name='bias_0') conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, conv1_depth, conv2_depth), mean=mu, stddev=sigma), name='weights_1') conv2_b = tf.Variable(tf.zeros(conv2_depth), name='bias_1') fc1_W = tf.Variable(tf.truncated_normal(shape=(5*5*conv2_depth, fc1_depth), mean=mu, stddev=sigma), name='weights_2') fc1_b = tf.Variable(tf.zeros(fc1_depth), name='bias_2') fc2_W = tf.Variable(tf.truncated_normal(shape=(fc1_depth, fc2_depth), mean=mu, stddev=sigma), name='weights_3') fc2_b = tf.Variable(tf.zeros(fc2_depth), name='bias_3') def ConvNet(x): # Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x64. conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b conv1 = tf.nn.relu(conv1) # Pooling. Input = 28x28x64. Output = 14x14x64. conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # Layer 2: Convolutional. Output = 10x10x128. conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b conv2 = tf.nn.relu(conv2) # Pooling. Input = 10x10x128. Output = 5x5x128. conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID') # Flatten. Input = 5x5x128. Output = 3200. fc0 = flatten(conv2) # Layer 3: Fully Connected. Input = 3200. Output = 64. fc1 = tf.matmul(fc0, fc1_W) + fc1_b fc1 = tf.nn.relu(fc1) # Layer 4: Fully Connected. Input = 64. Output = 43. logits = tf.matmul(fc1, fc2_W) + fc2_b return logits ### Define your architecture here. ### Feel free to use as many code cells as needed. x = tf.placeholder(tf.float32, (None, 32, 32, 3)) y = tf.placeholder(tf.int32, (None)) one_hot_y = tf.one_hot(y, n_classes) logits = ConvNet(x) cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits) loss_operation = tf.reduce_mean(cross_entropy) optimizer = tf.train.AdamOptimizer(learning_rate = rate) training_operation = optimizer.minimize(loss_operation) correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1)) accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) saver = tf.train.Saver() #Model Evaluation def evaluate(X_data, y_data): num_examples = len(X_data) total_accuracy = 0 sess = tf.get_default_session() for offset in range(0, num_examples, BATCH_SIZE): batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE] accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y}) total_accuracy += (accuracy * len(batch_x)) return total_accuracy / num_examples ###Output _____no_output_____ ###Markdown Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting. 
###Code #Train the Model with tf.Session() as sess: sess.run(tf.global_variables_initializer()) num_examples = len(X_train) print("Training...") print() for i in range(EPOCHS): X_train, y_train = shuffle(X_train, y_train) for offset in range(0, num_examples, BATCH_SIZE): end = offset + BATCH_SIZE batch_x, batch_y = X_train[offset:end], y_train[offset:end] sess.run(training_operation, feed_dict={x: batch_x, y: batch_y}) training_accuracy = evaluate(X_train, y_train) validation_accuracy = evaluate(X_val, y_val) print("EPOCH {} ...".format(i + 1)) print("Training Accuracy = {:.3f}".format(training_accuracy)) print("Validation Accuracy = {:.3f}".format(validation_accuracy)) print() if (i % 3) == 0: saver.save(sess, './lenet_epoch'+str(i + 1)+'.ckpt') last_saved_epoch = i + 1 print("Model saved") ### Train your model here. ### Calculate and report the accuracy on the training and validation set. ### Once a final model architecture is selected, ### the accuracy on the test set should be calculated and reported as well. ### Feel free to use as many code cells as needed. with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) test_accuracy = evaluate(X_test, y_test) print("Test Accuracy = {:.3f}".format(test_accuracy)) ###Output INFO:tensorflow:Restoring parameters from ./lenet_epoch10.ckpt Test Accuracy = 0.956 ###Markdown --- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images ###Code images_new = glob.glob('../data/'+'*.jpg') images_new = [cv2.cvtColor(cv2.imread(img), cv2.COLOR_BGR2RGB) for img in images_new] y_new = [3, 34, 11, 25, 18] # class id's fig, axs = plt.subplots(1, len(images_new)) for idx, ax in enumerate(axs.ravel()): ax.imshow(images_new[idx]) ax.set_title('{:02d}'.format(y_new[idx])) ax.axis('off') ###Output _____no_output_____ ###Markdown Predict the Sign Type for Each Image ###Code # normalize for idx in range(len(images_new)): images_new[idx] = (images_new[idx] - np.min(images_new[idx])) / (np.max(images_new[idx]) - np.min(images_new[idx])) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) prediction = np.argmax(np.array(sess.run(logits, feed_dict={x: images_new})), axis=1) for i, pred in enumerate(prediction): print('Target = {:02d} | Predicted = {:02d}'.format(y_new[i], pred)) ### Run the predictions here and use the model to output the prediction for each image. ### Make sure to pre-process the images with the same pre-processing pipeline used earlier. ### Feel free to use as many code cells as needed. ###Output INFO:tensorflow:Restoring parameters from ./lenet_epoch10.ckpt Target = 03 | Predicted = 03 Target = 34 | Predicted = 34 Target = 11 | Predicted = 11 Target = 25 | Predicted = 25 Target = 18 | Predicted = 18 ###Markdown Analyze Performance ###Code print('Test Accuracy = {:.3f}'.format(np.sum(y_new == prediction) / len(y_new))) ### Calculate the accuracy for these 5 new images. ### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images. ###Output Test Accuracy = 1.000 ###Markdown The images used are actually "good" and easily detectable images, so it is not a surprise that accuracy is 100%. 
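###Markdown A small sketch of restoring one of the intermediate checkpoints instead of the latest one (the file name pattern follows the `saver.save` call above, which writes checkpoints at epochs 1, 4, 7 and 10; this assumes the training cell has been run so the files exist): ###Code
with tf.Session() as sess:
    saver.restore(sess, './lenet_epoch7.ckpt')   # an earlier checkpoint
    print("Validation Accuracy = {:.3f}".format(evaluate(X_val, y_val)))
###Output _____no_output_____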
Output Top 5 Softmax Probabilities For Each Image Found on the Web ###Code # visualizing softmax probabilities num_tops = 5 with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('.')) top_k = sess.run(tf.nn.top_k(logits, k=num_tops), feed_dict={x: images_new}) softmax_probs = sess.run(tf.nn.softmax(logits), feed_dict={x: images_new}) # plot softmax probabilities per each test image n_images = len(images_new) fig, axs = plt.subplots(n_images, 2) plt.suptitle('Softmax probabilities per each test image') for idx in range(0, n_images): axs[idx, 0].imshow(images_new[idx]) axs[idx, 1].bar(np.arange(n_classes), softmax_probs[idx]) axs[idx, 1].set_ylim([0, 1]) axs[idx, 1].set_xlim([0, n_classes-1]) # Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. for img_idx in range(len(images_new)): print() print('Top predictions for the target image {:02d}'.format(y_new[img_idx])) for idx_within_tops in range(num_tops): pred_img = top_k[1][img_idx][idx_within_tops] probability = softmax_probs[img_idx][pred_img] print('Predicted {:02d} with probability {:.5f}'.format(pred_img, probability)) ### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. ### Feel free to use as many code cells as needed. ###Output Top predictions for the target image 03 Predicted 03 with probability 1.00000 Predicted 05 with probability 0.00000 Predicted 01 with probability 0.00000 Predicted 11 with probability 0.00000 Predicted 06 with probability 0.00000 Top predictions for the target image 34 Predicted 34 with probability 1.00000 Predicted 36 with probability 0.00000 Predicted 19 with probability 0.00000 Predicted 35 with probability 0.00000 Predicted 38 with probability 0.00000 Top predictions for the target image 11 Predicted 11 with probability 1.00000 Predicted 30 with probability 0.00000 Predicted 27 with probability 0.00000 Predicted 24 with probability 0.00000 Predicted 28 with probability 0.00000 Top predictions for the target image 25 Predicted 25 with probability 1.00000 Predicted 05 with probability 0.00000 Predicted 14 with probability 0.00000 Predicted 29 with probability 0.00000 Predicted 30 with probability 0.00000 Top predictions for the target image 18 Predicted 18 with probability 1.00000 Predicted 27 with probability 0.00000 Predicted 26 with probability 0.00000 Predicted 37 with probability 0.00000 Predicted 01 with probability 0.00000 ###Markdown Project Writeup Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file. > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. --- Step 4 (Optional): Visualize the Neural Network's State with Test Images This section is not required, but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. 
We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol. Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable. For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above) ###Code ### Visualize your network's feature maps here. ### Feel free to use as many code cells as needed. 
# image_input: the test image being fed into the network to produce the feature maps # tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer # activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output # plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1): # Here make sure to preprocess your image_input in a way your network expects # with size, normalization, etc. if needed # image_input = # Note: x should be the same name as your network's tensorflow data placeholder variable # If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function activation = tf_activation.eval(session=sess,feed_dict={x : image_input}) featuremaps = activation.shape[3] plt.figure(plt_num, figsize=(15,15)) for featuremap in range(featuremaps): plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number if activation_min != -1 & activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray") elif activation_max != -1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray") elif activation_min !=-1: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray") else: plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray") ###Output _____no_output_____
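###Markdown The helper above is never actually invoked in this notebook. A sketch of one way to call it, assuming the earlier cells defining `x`, `conv1_W`, `conv1_b`, `saver` and `images_new` have been run: since `ConvNet` returns only the logits, the first conv layer's activation tensor is rebuilt here from the module-level weights, and only the first 48 feature maps are passed because the helper's 6x8 subplot grid holds at most 48 images: ###Code
# Rebuild the first conv layer's activation from the shared weight variables
conv1_act = tf.nn.relu(tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1],
                                    padding='VALID') + conv1_b)
conv1_first48 = conv1_act[:, :, :, :48]   # helper's 6x8 grid fits 48 maps
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    outputFeatureMap([images_new[0]], conv1_first48, plt_num=1)
###Output _____no_output_____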
Desafio 6/DF6_Lit_2.ipynb
###Markdown Install ###Code !pip install tpot import pandas as pd import numpy as np import matplotlib.pyplot as plt # fixed: was "import matplotlib as plt" import seaborn as sns sns.set() from sklearn.preprocessing import KBinsDiscretizer, LabelEncoder from sklearn.feature_selection import SelectKBest, f_classif from sklearn.model_selection import train_test_split from xgboost import XGBClassifier from sklearn.metrics import classification_report, f1_score from tpot import TPOTClassifier df = pd.read_csv("https://github.com/maratonadev-br/desafio-6-2020/blob/master/dataset/training_dataset.csv?raw=true") test = pd.read_csv("https://raw.githubusercontent.com/maratonadev-br/desafio-6-2020/master/dataset/to_be_scored.csv") df.shape, test.shape ###Output _____no_output_____ ###Markdown Preprocessing ###Code colsToDrop = ["id", "importante_ter_certificado" #"profissao", "graduacao", "modulos_iniciados", #"pretende_fazer_cursos_lit", "como_conheceu_lit", #"universidade", "organizacao" ] for col in colsToDrop: try: df.drop(col, axis=1, inplace=True) test.drop(col, axis=1, inplace=True) except: print(f"{col} already dropped") df_num = df[["certificados", "modulos_finalizados", "modulos_iniciados", "total_modulos", "categoria"]] df_num = df_num.dropna() df_num.shape df_dropedna = df.dropna() colsNumber = df.select_dtypes(include="number").columns df[colsNumber] = df[colsNumber].fillna(0) df["graduacao"].fillna("SEM FORMAÇÃO", inplace=True) df["profissao"].fillna("SEM EXPERIÊNCIA", inplace=True) df["como_conheceu_lit"].fillna("OUTROS", inplace=True) df["organizacao"].fillna("Eletroeletronicos", inplace=True) df["universidade"].fillna("FATEC", inplace=True) colsToDummy = ['universidade', "organizacao", "como_conheceu_lit", 'graduacao', 'profissao'] # df = pd.get_dummies(df, columns=colsToDummy) le = LabelEncoder() df_dropedna[colsToDummy] = df_dropedna[colsToDummy].apply(lambda x: le.fit_transform(x)).astype(int) df_dropedna ###Output _____no_output_____ ###Markdown Trying to Balance the Classes ###Code df["categoria"].value_counts() ###Output _____no_output_____ ###Markdown Training ###Code X = df[["certificados", "modulos_finalizados", "modulos_iniciados", "total_modulos"]] # X = df_dropedna.drop("categoria", axis=1) # X = df_num.drop("categoria", axis=1) # y = df_num["categoria"] y = df["categoria"] Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, train_size=.7, random_state=0) Xtrain.shape, Xtest.shape, ytrain.shape, ytest.shape from imblearn.over_sampling import SMOTE smote = SMOTE() Xtrain_smote, ytrain_smote = smote.fit_resample(Xtrain, ytrain) Xtrain_smote.shape, ytrain_smote.shape tp_smote = TPOTClassifier(scoring="f1_micro", random_state=0, verbosity=2, config_dict="TPOT light") tp_smote.fit(Xtrain_smote, ytrain_smote) tp_smote.export("pipeline_smote") # fixed: was tp_smote("pipeline_smote"), which is not callable tp = TPOTClassifier(scoring="f1_micro", random_state=0, verbosity=2) tp.fit(Xtrain, ytrain) tp.export("pipeline3") from xgboost import XGBClassifier # f1_micro: 0.8317582241150573 | ((2431, 12), (1043, 12), (2431,), (1043,)) exported_pipeline = XGBClassifier(learning_rate=0.1, max_depth=1, min_child_weight=17, n_estimators=100, subsample=0.7, random_state=0) exported_pipeline.fit(Xtrain, ytrain) results = exported_pipeline.predict(Xtest) print(f1_score(ytest, results, average="micro")) # 0.8475398475398476 huh? 
| ((6732, 4), (2886, 4), (6732,), (2886,)) from sklearn.ensemble import GradientBoostingClassifier # f1_micro: 0.8351172767395709 | ((6732, 4), (2886, 4), (6732,), (2886,)) exported_pipeline = GradientBoostingClassifier(learning_rate=0.01, max_depth=10, max_features=0.25, min_samples_leaf=15, min_samples_split=10, n_estimators=100, subsample=0.8, random_state=0) exported_pipeline.fit(Xtrain, ytrain) results = exported_pipeline.predict(Xtest) print(f1_score(ytest, results, average="micro")) from sklearn.feature_selection import SelectPercentile, f_classif from sklearn.pipeline import make_pipeline from sklearn.tree import DecisionTreeClassifier from tpot.export_utils import set_param_recursive # f1_micro: 0.8351171664289474 | ((6732, 4), (2886, 4), (6732,), (2886,)) exported_pipeline = make_pipeline( SelectPercentile(score_func=f_classif, percentile=61), DecisionTreeClassifier(criterion="entropy", max_depth=4, min_samples_leaf=3, min_samples_split=14) ) # Fix random state for all the steps in exported pipeline set_param_recursive(exported_pipeline.steps, 'random_state', 0) exported_pipeline.fit(Xtrain, ytrain) results = exported_pipeline.predict(Xtest) print(f1_score(ytest, results, average="micro")) !pip install scikit-optimize from skopt import gp_minimize def tunar_modelo(params): learning_rate = params[0] max_depth = params[1] min_child_weight = params[2] n_estimators = params[3] subsample = params[4] print(params,'\n') mdl = XGBClassifier( learning_rate = learning_rate, n_estimators = n_estimators, max_depth = max_depth, min_child_weight = min_child_weight, subsample = subsample, random_state = 0) mdl.fit(Xtrain, ytrain) p = mdl.predict(Xtest) return -f1_score(ytest, p, average="micro") space = [(1e-2, 1e-1), # learning_rate (1, 1000), # n_estimators (1, 100), # max_depth (1, 100), # min_child_weight, (0, 1)] # subsample resultado_gp = gp_minimize(tunar_modelo, space, random_state=0, n_calls=50, n_random_starts=20, verbose=1) resultado_gp.x # 0.8496 [0.1, 1000, 70, 100, 1] xgb = XGBClassifier( learning_rate = 0.1, n_estimators = 1000, max_depth = 70, min_child_weight = 100, subsample = 1, random_state = 0) xgb.fit(Xtrain, ytrain) xgb_p = xgb.predict(Xtest) print(f1_score(ytest, xgb_p, average="micro")) xgb2 = XGBClassifier( learning_rate = 0.01, n_estimators = 100, max_depth = 8, min_child_weight = 20, subsample = 0.45, random_state = 0) xgb2.fit(Xtrain, ytrain) xgb2_p = xgb.predict(Xtest) print(f1_score(ytest, xgb2_p, average="micro")) xgb2_smote = XGBClassifier( learning_rate = 0.01, n_estimators = 100, max_depth = 8, min_child_weight = 20, subsample = 0.45, random_state = 0) xgb2_smote.fit(Xtrain, ytrain) xgb2_smote_p = xgb.predict(Xtest) print(f1_score(ytest, xgb2_smote_p, average="micro")) ###Output 0.7635561160151324 ###Markdown Predict test ###Code # test[colsToDummy] = test[colsToDummy].apply(lambda x: le.fit_transform(x)).astype(int) test2 = test[["certificados", "modulos_finalizados", "modulos_iniciados", "total_modulos"]] results2 = xgb2.predict(test2) results1 = pd.read_csv("/content/results.csv") results1.shape[0], results2.shape[0] count = 0 for row in range(0, 1000): if results1["target"][row] != results2[row]: print(f"{results1['target'][row]} != {results2[row]}, {row}") count += 1 print(count) ###Output perfil4 != perfil6, 408 1 ###Markdown --- ###Code results_smote = exported_pipeline.predict(test2) count = 0 for row in range(0, 1000): if results1["target"][row] != results_smote[row]: print(f"{results1['target'][row]} != {results_smote[row]}, {row}") count += 1 
print(count) test_results_smote = pd.DataFrame({"target":results_smote}) test_results_smote test_results_smote.to_csv("results", index=False) ###Output _____no_output_____
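###Markdown Before choosing between the plain and the SMOTE-trained model, a per-class report is more informative than a single micro-F1 -- a minority profile can collapse without moving the aggregate much. A minimal sketch, assuming `xgb2`, `xgb2_smote`, `Xtest`, and `ytest` are still in scope from the cells above: ###Code from sklearn.metrics import classification_report

# Per-class precision/recall/F1 for both candidates on the held-out split
for name, mdl in [("xgb2", xgb2), ("xgb2_smote", xgb2_smote)]:
    preds = mdl.predict(Xtest)
    print(f"--- {name} ---")
    print(classification_report(ytest, preds)) ###Output _____no_output_____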
python/Crawler/ipy/crawler4 - urllib.ipynb
###Markdown Python Web Crawler 4 - Urllib Request ###Code import urllib.request
from bs4 import BeautifulSoup

def getPage(url):
    page = urllib.request.urlopen(url)  # <class 'http.client.HTTPResponse'>
    print(page.status)
    # print(page.getheaders())
    return page.read().decode('utf-8')

tree = BeautifulSoup(getPage("https://www.bing.com/"), "lxml")
tree.div.select('#bgDiv')  # JS rendered
# 200
# [<div data-minhdhor="" data-minhdver="" data-priority="0" id="bgDiv"></div>] ###Output 200 ###Markdown urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, context=None)
urllib.request.Request(url, data=None, headers={}, origin_req_host=None, unverifiable=False, method=None) ###Code import socket
from urllib import request, parse, error

def getInfo(url, data="", headers=None, method="GET", timeout=1):
    headers = headers or {}  # avoid a mutable default argument
    dat = bytes(parse.urlencode(data), encoding='utf8')
    req = request.Request(url=url, data=dat, headers=headers, method=method)
    req = request.urlopen(req, timeout=timeout)
    print(req.read().decode('utf-8'))

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36',
    'Host': 'httpbin.org'
}
payload = {  # renamed from `dict`, which shadowed the builtin
    'words1': "you're a miracle",
    'words2': 'what do you fear'
}
getInfo("http://httpbin.org/post", payload, headers, "POST", 5)
# {
#   "args": {},
#   "data": "",
#   "files": {},
#   "form": {
#     "words1": "you're a miracle",
#     "words2": "what do you fear"
#   },
#   "headers": {
#     "Accept-Encoding": "identity",
#     "Connection": "close",
#     "Content-Length": "49",
#     "Content-Type": "application/x-www-form-urlencoded",
#     "Host": "httpbin.org",
#     "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
#   },
#   "json": null,
#   "origin": "183.246.20.118",
#   "url": "http://httpbin.org/post"
# } ###Output {
  "args": {}, 
  "data": "", 
  "files": {}, 
  "form": {
    "words1": "you're a miracle", 
    "words2": "what do you fear"
  }, 
  "headers": {
    "Accept-Encoding": "identity", 
    "Connection": "close", 
    "Content-Length": "49", 
    "Content-Type": "application/x-www-form-urlencoded", 
    "Host": "httpbin.org", 
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
  }, 
  "json": null, 
  "origin": "183.246.20.118", 
  "url": "http://httpbin.org/post"
}
{
  "args": {}, 
  "headers": {
    "Accept-Encoding": "identity", 
    "Connection": "close", 
    "Content-Type": "application/x-www-form-urlencoded", 
    "Host": "httpbin.org", 
    "User-Agent": "Python-urllib/3.6"
  }, 
  "origin": "183.246.20.118", 
  "url": "http://httpbin.org/get"
} ###Markdown ERROR ###Code def getInfo(url, data="", headers=None, method="GET", timeout=1):
    headers = headers or {}
    try:
        dat = bytes(parse.urlencode(data), encoding='utf8')
        req = request.Request(url=url, data=dat, headers=headers, method=method)
        req = request.urlopen(req, timeout=timeout)
        print(req.read().decode('utf-8'))
    except error.HTTPError as e:
        print(e.reason, e.code, e.headers, sep='\n')
    except error.URLError as e:
        if isinstance(e.reason, socket.timeout):
            print('TIME OUT')
        else:
            pass

getInfo('http://httpbin.org/index.htm')
# NOT FOUND
# 404
# Connection: close
# Server: meinheld/0.6.1
# Date: Sun, 11 Mar 2018 06:25:37 GMT
# Content-Type: text/html
# Content-Length: 233
# Access-Control-Allow-Origin: *
# Access-Control-Allow-Credentials: true
# X-Powered-By: Flask
# X-Processed-Time: 0
# Via: 1.1 vegur
getInfo('http://httpbin.org/get', timeout=.1)
# TIME OUT
getInfo('http://httpbin.org/get')
# {
#   "args": {},
#   "headers": {
#     "Accept-Encoding": "identity",
#     "Connection": "close",
#     "Content-Type": "application/x-www-form-urlencoded",
#     "Host": "httpbin.org",
#     "User-Agent": "Python-urllib/3.6"
#   },
#   "origin": "183.246.20.118",
#   "url": "http://httpbin.org/get"
# } ###Output _____no_output_____ ###Markdown Parse The parse module supports the following URL schemes: file, ftp, gopher, hdl, http, https, imap, mailto, mms, news, nntp, prospero, rsync, rtsp, rtspu, sftp, shttp, sip, sips, snews, svn, svn+ssh, telnet, wais, ws, wss. Split & Combine ###Code from urllib.parse import urlparse as pr
from urllib.parse import urlunparse as upr
# scheme://netloc/path;parameters?query#fragment
result = pr('http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt')
print(type(result), '\n', result)
# <class 'urllib.parse.ParseResult'>
# ParseResult(scheme='http', netloc='www.xiami.com', path='/play', \
#             params='', query='ids=/song/playlist/id/1/type/9', fragment='loadedt')
[print(result[i]) for i in range(len(result))]
# http
# www.xiami.com
# /play
# ids=/song/playlist/id/1/type/9
# loadedt
print(pr('www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt', scheme="https"))
# ParseResult(scheme='https', netloc='', path='www.xiami.com/play',\
#             params='', query='ids=/song/playlist/id/1/type/9', fragment='loadedt')
print(pr('https://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt', scheme="http", allow_fragments=False))
# ParseResult(scheme='https', netloc='www.xiami.com', path='/play', \
#             params='', query='ids=/song/playlist/id/1/type/9#loadedt', fragment='')
data = [result.scheme, result.netloc, result.path, result.params, result.query, result.fragment]
print(upr(data))
# http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt
from urllib.parse import urlsplit as sp
from urllib.parse import urlunsplit as usp
# scheme://netloc/path?query#fragment
result = sp('http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt')
print(type(result), '\n', result)
# <class 'urllib.parse.SplitResult'>
# SplitResult(scheme='http', netloc='www.xiami.com', path='/play', \
#             query='ids=/song/playlist/id/1/type/9', fragment='loadedt')
data = [result.scheme, result.netloc, result.path, result.query, result.fragment]
print(usp(data))
# http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt
### More
from urllib.parse import urljoin as jo
print(jo("http://www.xiami.com/", "play?ids=/song/playlist/id/1/type/9#loadedt"))
print(jo("http://www.xiami.com/play?ids=/song/playlist/", "play?ids=/song/playlist/id/1/type/9#loadedt"))
print(jo("http:", "//www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt"))
# http://www.xiami.com/play?ids=/song/playlist/id/1/type/9#loadedt
from urllib.parse import urlencode, parse_qs, quote, unquote
params = {
    'tn': 'baidu',
    'wd': 'google chrome',
}
base_url = 'http://www.baidu.com/s?'
base_url + urlencode(params)
# 'http://www.baidu.com/s?tn=baidu&wd=google+chrome'
print(parse_qs(urlencode(params)))
# {'tn': ['baidu'], 'wd': ['google chrome']}
'https://www.baidu.com/s?wd=' + quote("百度")
# 'https://www.baidu.com/s?wd=%E7%99%BE%E5%BA%A6'
url = 'https://www.baidu.com/s?wd=%E7%99%BE%E5%BA%A6'
print(unquote(url))
# https://www.baidu.com/s?wd=百度 ###Output {'tn': ['baidu'], 'wd': ['google chrome']} https://www.baidu.com/s?wd=百度 ###Markdown Handler [`BaseHandler`](https://docs.python.org/3/library/urllib.request.html#urllib.request.BaseHandler)
- `HTTPDefaultErrorHandler`
- `HTTPRedirectHandler`
- `HTTPCookieProcessor`(*cookiejar=None*)
- `ProxyHandler`(*proxies=None*)
- `HTTPPasswordMgr`
- `HTTPPasswordMgrWithDefaultRealm`
- `HTTPPasswordMgrWithPriorAuth`
- ... Cookies ###Code import http.cookiejar, urllib.request

cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response)
# <http.client.HTTPResponse object at 0x04D421F0>
for item in cookie:
    print(item.name+"="+item.value)
# BAIDUID=7A55D7DB4ECB570361D1D1186DD85275:FG=1
# ...
filename = 'cookies.txt'
cookie = http.cookiejar.LWPCookieJar(filename)
# cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True, ignore_expires=True)
## LWP-Cookies-2.0
# Set-Cookie3: BAIDUID="990E47C14A144D813BB6629BEA0D1BEF:FG=1"; path="/"; domain=".baidu.com"; path_spec; domain_dot; expires="2086-03-29 08:56:02Z"; version=0
# ...
cookie = http.cookiejar.LWPCookieJar()
cookie.load('cookies.txt', ignore_discard=True, ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf-8'))
# <!DOCTYPE html>
# <!--STATUS OK-->
# ... ###Output _____no_output_____ ###Markdown Password ###Code from urllib.request import HTTPPasswordMgrWithDefaultRealm, HTTPBasicAuthHandler, build_opener
from urllib.error import URLError

username = 'username'
password = 'password'
url = 'url'
p = HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, url, username, password)
auth_handler = HTTPBasicAuthHandler(p)
opener = build_opener(auth_handler)
try:
    result = opener.open(url)
    html = result.read().decode('utf-8')
    print(html)
except URLError as e:
    print(e.reason) ###Output _____no_output_____ ###Markdown Proxy ###Code from urllib.error import URLError
from urllib.request import ProxyHandler, build_opener

proxy_handler = ProxyHandler({
    'http': 'url',
    'https': 'url'
})
opener = build_opener(proxy_handler)
try:
    response = opener.open('https://www.baidu.com')
    print(response.read().decode('utf-8'))
except URLError as e:
    print(e.reason) ###Output _____no_output_____ ###Markdown Using requests ###Code import requests

proxies = {
    'http': 'url',
    'https': 'url'
}
# http://user:password@host:port
proxies = {
    "http": "http://user:[email protected]:3128/",
}
# socks
proxies = {
    'http': 'socks5://user:password@host:port',
    'https': 'socks5://user:password@host:port'
}
requests.get("https://www.baidu.com", proxies=proxies) ###Output _____no_output_____ ###Markdown Robots e.g. 
Robots.txt https://www.taobao.com/robots.txt ###Code User-agent: Baiduspider
Allow: /article
Allow: /oshtml
Disallow: /product/
Disallow: /

User-Agent: Googlebot
Allow: /article
Allow: /oshtml
Allow: /product
Allow: /spu
Allow: /dianpu
Allow: /oversea
Allow: /list
Disallow: /

User-agent: Bingbot
Allow: /article
Allow: /oshtml
Allow: /product
Allow: /spu
Allow: /dianpu
Allow: /oversea
Allow: /list
Disallow: /

User-Agent: 360Spider
Allow: /article
Allow: /oshtml
Disallow: /

User-Agent: Yisouspider
Allow: /article
Allow: /oshtml
Disallow: /

User-Agent: Sogouspider
Allow: /article
Allow: /oshtml
Allow: /product
Disallow: /

User-Agent: Yahoo! Slurp
Allow: /product
Allow: /spu
Allow: /dianpu
Allow: /oversea
Allow: /list
Disallow: /

User-Agent: *
Disallow: / ###Output _____no_output_____ ###Markdown RobotFileParser ###Code from urllib.robotparser import RobotFileParser

url = "http://httpbin.org/robots.txt"  # the original string had a trailing space, which breaks the request
rp = RobotFileParser(url)
rp.read()
print(rp.can_fetch('*', 'http://httpbin.org/deny'))
print(rp.can_fetch('*', "http://httpbin.org/image"))
# False
# True ###Output False True ###Markdown REFERENCES - https://docs.python.org/3/library/urllib.html - http://httpbin.org/
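###Markdown Putting the pieces together: a minimal sketch of a polite fetch helper that consults robots.txt before downloading a page. This is illustrative glue code added here, not part of the original crawler series; the helper name is mine and the target is just httpbin again. ###Code from urllib.robotparser import RobotFileParser
from urllib.request import urlopen
from urllib.parse import urlsplit, urlunsplit

def polite_get(url, agent='*'):
    """Fetch `url` only if the site's robots.txt allows `agent` to crawl it."""
    parts = urlsplit(url)
    robots_url = urlunsplit((parts.scheme, parts.netloc, '/robots.txt', '', ''))
    rp = RobotFileParser(robots_url)
    rp.read()
    if not rp.can_fetch(agent, url):
        print('Blocked by robots.txt:', url)
        return None
    return urlopen(url).read().decode('utf-8')

# polite_get('http://httpbin.org/deny')  # would print "Blocked by robots.txt: ..." ###Output _____no_output_____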
src/NYUDataTester.ipynb
###Markdown PRC ###Code for i in range(len(NYU_CLASSES)):
    plt.figure()
    y = prec[i]
    x = rec[i]
    writer_recall.writerow(x)
    writer_precision.writerow(y)
# close the CSV files once, after the loop (closing inside the loop would fail on the second pass)
f_recall.close()
f_precision.close()
#     plt.plot(x, y)
#     plt.axis([0, 1.0, 0, 1.0])
#     plt.title(NYU_CLASSES[i])
#     plt.xlabel('Recall')
#     plt.ylabel('Precision')
#     plt.savefig(('../results/PRC/RGB/' + NYU_CLASSES[i]+'.png'))
mAP_array = []
for thresh in np.linspace(0, 1, 101):
    prec, rec, mean_iou = calc_detection_prec_rec(pred_labels, pred_scores, pred_bboxes, gt_bboxes, gt_labels, iou_thresh=thresh)
    ap = calc_detection_ap(prec, rec, use_07_metric=True)
    mAP_array.append(np.nanmean(ap))
print(mAP_array)
plt.plot(np.linspace(0, 1, 101), np.array(mAP_array))
plt.title('Overlap Threshold and mAP')
plt.xlabel('Overlap Threshold')
plt.ylabel('mAP')
plt.savefig('../results/map_overlap/RGB.png')
ap_array = np.zeros((len(NYU_CLASSES), len(np.linspace(0, 1, 101))))
for i, thresh in enumerate(np.linspace(0, 1, 101)):
    prec, rec, mean_iou = calc_detection_prec_rec(pred_labels, pred_scores, pred_bboxes, gt_bboxes, gt_labels, iou_thresh=thresh)
    ap = calc_detection_ap(prec, rec, use_07_metric=True)
    for k in range(len(NYU_CLASSES)):
        ap_array[k][i] = ap[k]
for k in range(len(NYU_CLASSES)):
    plt.figure()
    plt.plot(np.linspace(0, 1, 101), np.array(ap_array[k]))
    plt.title(NYU_CLASSES[k])
    plt.xlabel('Overlap Threshold')
    plt.ylabel('Average Precision')
    plt.savefig(('../results/ap_overlap/RGB/'+NYU_CLASSES[k]+'.png'))
images = results  # bug in the original: `results` holds detections -- `images` should hold the rendered test frames loaded in the last cell below
for i, img in enumerate(images):
    plt.figure()
    if len(results[i]) == 0:
        continue
    det_label = results[i][:, 0]
    det_conf = results[i][:, 1]
    det_xmin = results[i][:, 2]
    det_ymin = results[i][:, 3]
    det_xmax = results[i][:, 4]
    det_ymax = results[i][:, 5]
    # Get detections with confidence higher than 0.6.
    top_indices = [j for j, conf in enumerate(det_conf) if conf >= 0.6]
    top_conf = det_conf[top_indices]
    top_label_indices = det_label[top_indices].tolist()
    top_xmin = det_xmin[top_indices]
    top_ymin = det_ymin[top_indices]
    top_xmax = det_xmax[top_indices]
    top_ymax = det_ymax[top_indices]
    colors = plt.cm.hsv(np.linspace(0, 1, 21)).tolist()
    plt.imshow(img / 255.)
    currentAxis = plt.gca()
    for k in range(top_conf.shape[0]):  # renamed from `i`: reusing `i` here clobbered the image index used by savefig below
        xmin = int(round(top_xmin[k] * img.shape[1]))
        ymin = int(round(top_ymin[k] * img.shape[0]))
        xmax = int(round(top_xmax[k] * img.shape[1]))
        ymax = int(round(top_ymax[k] * img.shape[0]))
        score = top_conf[k]
        label = int(top_label_indices[k])
        label_name = NYU_CLASSES[label - 1]
        display_txt = '{:0.2f}, {}'.format(score, label_name)
        coords = (xmin, ymin), xmax-xmin, ymax-ymin
        color = colors[label]
        currentAxis.add_patch(plt.Rectangle(*coords, fill=False, edgecolor=color, linewidth=2))
        currentAxis.text(xmin, ymin, display_txt, bbox={'facecolor':color, 'alpha':0.5})
    plt.savefig('../results/detection_images/RGB/image' + str(i) + '_v10.png')
y_true = []
for key in val_keys:
    y_true.append(gt[key])
y_true = np.array(y_true)
print(y_true.shape)
inputs = []
images = []
for key in val_keys:
    img_path = path_prefix + key
    img = image.load_img(img_path, target_size=(300, 300))
    img = image.img_to_array(img)
    images.append(imread(img_path))
    inputs.append(img.copy())
inputs = preprocess_input(np.array(inputs))
preds = model.predict(inputs, batch_size=1, verbose=1)
results = bbox_util.detection_out(preds)
#calc_map(y_true, results)
print(results[0]) ###Output _____no_output_____
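###Markdown The overlap threshold swept above is just intersection-over-union (IoU) between a predicted and a ground-truth box. A minimal standalone sketch for boxes in (xmin, ymin, xmax, ymax) format -- illustrative only, not the metric code used by `calc_detection_prec_rec`: ###Code def iou(box_a, box_b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ~0.143 -> below a 0.5 threshold, so this pair would not count as a match ###Output _____no_output_____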
Taller_Grupal_21_Marzo.ipynb
###Markdown ###Code pd.crosstab(df["viaje_noche_fuera"], df["estrato"], normalize=True) pd.crosstab(df["viaje_noche_fuera"], df["estado_civil"], normalize=True) sns.histplot(x=df["edad"]) df["edad"].describe() mode(df["edad"]) sns.histplot(x=df["estrato"]) sns.barplot(x=df["estrato"].value_counts().index, y=df["estrato"].value_counts()) sns.barplot(x=df["nivel_educativo"].value_counts().index, y=df["nivel_educativo"].value_counts()) sns.barplot(y=df["estado_civil"].value_counts().index, x=df["estado_civil"].value_counts(),orient='h') #plt.xticks(rotation=90) df["viaje_noche_fuera"]=df["viaje_noche_fuera"].replace({"si": 1, "no": 0}) df.drop(columns="parentesco_jefe_hogar",inplace=True) df = df[df["nivel_educativo"]!= "no_sabe_no_informa"] df = pd.get_dummies(df, drop_first=True) X = df.copy() y = X.pop("viaje_noche_fuera") X = sm.add_constant(X) model = sm.OLS(y,X) reg = model.fit() reg.summary() X = df[["edad", "estrato", "estado_civil_pareja_union_libre", "nivel_educativo_superior_universitaria", "viaje_noche_fuera"]].copy() y = X.pop("viaje_noche_fuera") X = sm.add_constant(X) model = sm.OLS(y,X) reg = model.fit() reg.summary() ###Output _____no_output_____
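###Markdown Since `viaje_noche_fuera` is binary, the OLS fits above are linear probability models. A logit model is a natural robustness check; a minimal sketch, assuming `sm`, `X`, and `y` from the cells above are still in scope (this is my addition, not part of the original workshop): ###Code # Logistic regression on the same design matrix used in the last OLS cell
logit_model = sm.Logit(y, X)
logit_res = logit_model.fit()
logit_res.summary() ###Output _____no_output_____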
Stephen_Lupsha_LS_DS_211_assignment.ipynb
###Markdown Lambda School Data Science*Unit 2, Sprint 1, Module 1*--- Regression 1 AssignmentYou'll use another **New York City** real estate dataset. But now you'll **predict how much it costs to rent an apartment**, instead of how much it costs to buy a condo.The data comes from renthop.com, an apartment listing website.- [ ] Look at the data. Choose a feature, and plot its relationship with the target.- [ ] Use scikit-learn for linear regression with one feature. You can follow the [5-step process from Jake VanderPlas](https://jakevdp.github.io/PythonDataScienceHandbook/05.02-introducing-scikit-learn.htmlBasics-of-the-API).- [ ] Define a function to make new predictions and explain the model coefficient.- [ ] Organize and comment your code.> [Do Not Copy-Paste.](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit) You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.If your **Plotly** visualizations aren't working:- You must have JavaScript enabled in your browser- You probably want to use Chrome or Firefox- You may need to turn off ad blockers- [If you're using Jupyter Lab locally, you need to install some "extensions"](https://plot.ly/python/getting-started/jupyterlab-support-python-35) Stretch Goals- [ ] Do linear regression with two or more features.- [ ] Read [The Discovery of Statistical Regression](https://priceonomics.com/the-discovery-of-statistical-regression/)- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapter 2.1: What Is Statistical Learning? ***Do Not Copy-Paste. You must type each of these exercises in, manually. If you copy and paste, you might as well not even do them. The point of these exercises is to train your hands, your brain, and your mind in how to read, write, and see code. If you copy-paste, you are cheating yourself out of the effectiveness of the lessons.*** ###Code import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/' # If you're working locally: else: DATA_PATH = '../data/' # Ignore this Numpy warning when using Plotly Express: # FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead. import warnings warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy') # Read New York City apartment rental listing data import pandas as pd from sklearn.metrics import mean_absolute_error from sklearn.linear_model import LinearRegression import numpy as np import seaborn as sns df = pd.read_csv(DATA_PATH+'apartments/renthop-nyc.csv', parse_dates=['created'], index_col='created') #assert df.shape == (49352, 34) #changing this to 33 since i am indexing on "created" assert df.shape == (49352, 33) df.shape df.info() # Remove outliers: # the most extreme 1% prices, # the most extreme .1% latitudes, & # the most extreme .1% longitudes df = df[(df['price'] >= 1375) & (df['price'] <= 15500) & (df['latitude'] >=40.57) & (df['latitude'] < 40.99) & (df['longitude'] >= -74.1) & (df['longitude'] <= -73.38)] df.shape # Alright, this is our trimmed dataframe. 
df.head()
import matplotlib.pyplot as plt

plt.scatter(df['bedrooms'], df['price'])
plt.xlabel('Bedrooms')
plt.ylabel('Price')
# Well, that looks like a mess.
plt.scatter(df['latitude'], df['price'])
plt.xlabel('Latitude')
plt.ylabel('Price')
plt.scatter(df['longitude'], df['price'])
plt.xlabel('Longitude')
plt.ylabel('Price')
# Splitting the data into a feature matrix (latitude first) and one target vector (price)
X_lat = df[['latitude']]
y = df['price']  # the original named this y_lat but then used `y` everywhere below
assert len(X_lat) == len(y)
y_mean = y.mean()
print("mean rental price:", y_mean)
y_pred = [y_mean]*len(y)  # [y_mean] is a one-element list; multiplying repeats it len(y) times, giving a constant prediction vector
print('Baseline Mean Absolute Error:', mean_absolute_error(y, y_pred))
plt.scatter(df['latitude'], df['price'])
plt.plot(df['latitude'], y_pred, label='baseline model line', color='grey')
plt.xlabel('Latitude')
plt.ylabel('Price') ###Output _____no_output_____ ###Markdown Clearly this is useless without Longitude, so I'm gonna repeat my process for that just so we have those variables. I mean, theoretically neither is USELESS without the other, but why would one look at rent prices across only a certain latitude or longitude within NYC? I can't imagine what shared characteristics those would have except for stops along public transpo etc. Also, no apparent linear relationship at all, although clearly a high-value area in there between 40.7 and 40.8. Since latitude is north-south I would bet money that that range is probably Central Park... ###Code # Step 1: Import predictor class
from sklearn.linear_model import LinearRegression

# Step 2: Instantiate my predictor
model_lat = LinearRegression()

# Step 3: FIT my predictor on the (training) data
model_lat.fit(X_lat, y)
plt.scatter(df['latitude'], df['price'])
plt.plot(df['latitude'], y_pred, label='baseline model line', color='grey')
plt.plot(df['latitude'], model_lat.predict(X_lat), label='Lat model', color='red')
plt.xlabel('Latitude')
plt.ylabel('Price')
plt.legend() ###Output _____no_output_____ ###Markdown Re-doing all this for longitude: ###Code # Splitting the data into a feature matrix (longitude this time) and one target vector (price)
X_long = df[['longitude']]
assert len(X_long) == len(y)

# Step 2: Instantiate my predictor
model_long = LinearRegression()

# Step 3: FIT my predictor on the (training) data
model_long.fit(X_long, y)
plt.scatter(df['longitude'], df['price'])
plt.plot(df['longitude'], y_pred, label='baseline model line', color='grey')
plt.plot(df['longitude'], model_long.predict(X_long), label='Long model', color='red')
plt.xlabel('Longitude')
plt.ylabel('Price')
plt.legend()
print(f'Price = {model_lat.intercept_} + {model_lat.coef_[0]} * Latitude')
print(f'Price = {model_long.intercept_} + {model_long.coef_[0]} * Longitude')
# (the lat & long model is only fit in the next cell, so printing model_both here was a NameError in the original) ###Output _____no_output_____ ###Markdown We can see from our results above that the longitude in Manhattan being negative is really throwing the calculation off. I need to figure out how to do a multi-variable regression model here, and possibly even change the longitude to positive somehow... I don't know, reading more now. 
###Code X_Both = df[['latitude', 'longitude']] model_both = LinearRegression() model_both.fit(X_Both, y) ###Output _____no_output_____ ###Markdown To quote the internet, and why I can't use Lat/Long *You cannot use them directly, as it is unlikely there is a true linear relationship unless you're looking to predict "how far east or north" someone is. As mentioned in the comments, you need to convert them into zones. If you wanted to keep it really simple, you could use a kNN clustering algorithm with a low number of potential clusters and then assign each instance a new feature with the cluster ID, and then one-hot encode that.* Obviously, I'm not doing all that today. I'm gonna move on to another observation or X. Most of the variables here are CATEGORICAL and not QUANTITATIVE - which will make it tough ###Code df.head() # Back to the beginning plt.scatter(df['elevator'], df['price']) plt.xlabel('Elevator in Bldg?') plt.ylabel('Price') plt.scatter(df['laundry_in_unit'], df['price']) plt.xlabel('Laundry in unit?') plt.ylabel('Price') plt.scatter(df['bathrooms'], df['price']) plt.xlabel('# of Bathrooms') plt.ylabel('Price') plt.scatter(df['bedrooms'], df['price']) plt.xlabel('# of Bedrooms') plt.ylabel('Price') ###Output _____no_output_____ ###Markdown Impossible. God this is going nowhere. ###Code df.head() # df["sum"] = df.sum(axis=1) # nope. OK. I'm going to add the "amenities" - one would assume that each of those factors increases the value of a rental right? df_amenities = df.copy() #copy df_amenities.drop(['description','bathrooms', 'bedrooms', 'display_address', 'latitude', 'longitude', 'price', 'street_address', 'interest_level'], axis=1, inplace=True) #drop the non-boolean variables. df_amenities["amenities_score"] = df_amenities.sum(axis=1) # sum it up into a new column. df_amenities.head() # and finally add that column back onto the original dataframe. df["amenities_score"] = df_amenities['amenities_score'] df.head() plt.scatter(df['amenities_score'], df['price']) plt.xlabel('# of Amenities') plt.ylabel('Price') ###Output _____no_output_____ ###Markdown We can pretty clearly see that there are plenty of places with "few" amenities that are priced awfully high...at least compared to what I would pay for rent. ###Code # predictor class is already imported. # Step 2: Instantiate my predictor model_amenities = LinearRegression() # Step 3: FIT my predictor on the (training) data # 3. Arrange X features matrix & y target vector # gotta remember to ask about the different language here in the steps. the 5 steps of modeling here... features = ['amenities_score'] target = ['price'] x_train = df[features] y_train = df[target] model_amenities.fit(x_train, y_train) plt.scatter(df['amenities_score'], df['price']) plt.plot(df['amenities_score'], y_pred, label = 'baseline model line', color='grey') plt.plot(df['amenities_score'], model_amenities.predict(x_train), label='amenities rating', color='red') plt.xlabel('# of Amenities') plt.ylabel('Price') plt.legend() ###Output _____no_output_____ ###Markdown at least amenities seems to go up at this point, as we expected. we expected some degree of positive slope. ###Code print(f'Price = {model_amenities.intercept_} + {model_amenities.coef_[0]} * Amenities "Score"') ###Output Price = [2843.40065221] + [157.55177491] * Amenities "Score" ###Markdown I'm having some doubts about how I "did" here - perfect questions for support hours I guess. I feel like this unit is foundational for what employers will want us to be doing on a daily basis. 
###Code # ok, lets try something else. amenities_test = 13 x_test = [[amenities_test]] y_pred_am = model_amenities.predict(x_test) y_pred_am ###Output _____no_output_____ ###Markdown Ok, based off of what we saw above that's not actually a terrible prediction. I mean, I've never paid rent in NYC, just saying... ###Code amenities_test = 3 x_test = [[amenities_test]] y_pred_am = model_amenities.predict(x_test) y_pred_am ###Output _____no_output_____
.ipynb_checkpoints/10.Random Forest Classifier-checkpoint.ipynb
###Markdown Select some important columns for classification and turn the arrival delay into a binary class: * late * not late ###Code df1 = df.select("DayofMonth","DayofWeek","originAirportID","DestAirportID","DepDelay",\
                ((col("ArrDelay") > 15).cast("Int").alias("Late")))
df1.show() ###Output +----------+---------+---------------+-------------+--------+----+
|DayofMonth|DayofWeek|originAirportID|DestAirportID|DepDelay|Late|
+----------+---------+---------------+-------------+--------+----+
|        19|        5|          11433|        13303|      -3|   0|
|        19|        5|          14869|        12478|       0|   0|
|        19|        5|          14057|        14869|      -4|   0|
|        19|        5|          15016|        11433|      28|   1|
|        19|        5|          11193|        12892|      -6|   0|
|        19|        5|          10397|        15016|      -1|   0|
|        19|        5|          15016|        10397|       0|   0|
|        19|        5|          10397|        14869|      15|   1|
|        19|        5|          10397|        10423|      33|   1|
|        19|        5|          11278|        10397|     323|   1|
|        19|        5|          14107|        13487|      -7|   0|
|        19|        5|          11433|        11298|      22|   1|
|        19|        5|          11298|        11433|      40|   1|
|        19|        5|          11433|        12892|      -2|   0|
|        19|        5|          10397|        12451|      71|   1|
|        19|        5|          12451|        10397|      75|   1|
|        19|        5|          12953|        10397|      -1|   0|
|        19|        5|          11433|        12953|      -3|   0|
|        19|        5|          10397|        14771|      31|   1|
|        19|        5|          13204|        10397|       8|   1|
+----------+---------+---------------+-------------+--------+----+
only showing top 20 rows ###Markdown Dividing Data into Train and Test ###Code # (this raw split is only used for counts; it is redone below on the assembled features)
train_data, test_data = df1.randomSplit([0.7, 0.3])
train_data.count()
test_data.count() ###Output _____no_output_____ ###Markdown Preparing Data ###Code # Vector Assembler
assembler = VectorAssembler(inputCols=["DayofMonth","DayofWeek","originAirportID","DestAirportID","DepDelay"],
                            outputCol="features")
tran_data = assembler.transform(df1)
tran_data.show(5) ###Output +----------+---------+---------------+-------------+--------+----+--------------------+
|DayofMonth|DayofWeek|originAirportID|DestAirportID|DepDelay|Late|            features|
+----------+---------+---------------+-------------+--------+----+--------------------+
|        19|        5|          11433|        13303|      -3|   0|[19.0,5.0,11433.0...|
|        19|        5|          14869|        12478|       0|   0|[19.0,5.0,14869.0...|
|        19|        5|          14057|        14869|      -4|   0|[19.0,5.0,14057.0...|
|        19|        5|          15016|        11433|      28|   1|[19.0,5.0,15016.0...|
|        19|        5|          11193|        12892|      -6|   0|[19.0,5.0,11193.0...|
+----------+---------+---------------+-------------+--------+----+--------------------+
only showing top 5 rows ###Markdown Final Dataset ###Code tran_data = tran_data.select("features", tran_data["Late"].alias("label"))
tran_data.show(5)
train_data, test_data = tran_data.randomSplit([0.7, 0.3])
train_data.count()
test_data.count()
train_data.show(2) ###Output +--------------------+-----+
|            features|label|
+--------------------+-----+
|[1.0,1.0,10140.0,...|    0|
|[1.0,1.0,10140.0,...|    0|
+--------------------+-----+
only showing top 2 rows ###Markdown Training the Model ###Code # renamed from `lr`/`lrmodel`: this is a random forest, not a logistic regression
rf = RandomForestClassifier(featuresCol="features", labelCol="label", predictionCol="prediction",
                            numTrees=3, maxDepth=5, seed=42)
rf_model = rf.fit(train_data)
print("Model is trained")
rf_model.transform(train_data).show(10)
# Grab the correct predictions
train_pred = rf_model.transform(train_data)
train_pred.show(5)
correct_prediction = train_pred.filter(train_pred["label"] == train_pred["prediction"]).count()
print("Accuracy for training data:", correct_prediction/(train_data.count())) ###Output Accuracy for training data: 0.9263481017511112 ###Markdown Testing the Model (RF) ###Code test = rf_model.transform(test_data)
correct_prediction_test = test.filter(test["label"] == test["prediction"]).count()
print("Accuracy for test data:", correct_prediction_test/(test_data.count())) ###Output Accuracy for test data: 0.926432533259034
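###Markdown The manual filter-and-count above works, but Spark ML ships an evaluator for this. A small sketch, assuming the `test` DataFrame with `label` and `prediction` columns from the previous cell is still in scope: ###Code from pyspark.ml.evaluation import MulticlassClassificationEvaluator

evaluator = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction",
                                              metricName="accuracy")
print("Accuracy for test data (evaluator):", evaluator.evaluate(test)) ###Output _____no_output_____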
notebook/experiment_1/3_intervention_timer.ipynb
###Markdown List of figures: 2. [Figure S1: Time spent on intervention screen](#timer) Import libraries ###Code import matplotlib.pyplot as plt              # Plotting
import os                                    # File system handling
import pandas as pd                          # Dataframe handling

from matplotlib.ticker import FuncFormatter  # Formatting graphs ###Output _____no_output_____ ###Markdown Set project directory ###Code PROJECT_FOLDER = os.path.dirname(os.path.dirname(os.getcwd()))
FINAL_DATA_FOLDER = os.path.join(PROJECT_FOLDER, 'data', 'final')
TABLES_FOLDER = os.path.join(PROJECT_FOLDER, 'reports', 'tables')
FIGURES_FOLDER = os.path.join(PROJECT_FOLDER, 'reports', 'figures') ###Output _____no_output_____ ###Markdown Pandas options ###Code pd.set_option("display.precision", 3)
pd.set_option("display.expand_frame_repr", False)
pd.set_option("display.max_rows", 40) ###Output _____no_output_____ ###Markdown Set plotting style ###Code plt.style.use('classic') ###Output _____no_output_____ ###Markdown Set plotting properties ###Code font_kw = dict(fontsize=11, color='k')
xlab_kw = dict(fontsize=11, labelpad=3)
ylab_kw = dict(fontsize=11, labelpad=3)
tick_kw = dict(
    size=5,
    which='both',
    direction='out',
    right=False,
    top=False,
    labelbottom=True
) ###Output _____no_output_____ ###Markdown Retrieving dataframe ###Code DATA = os.path.join(
    FINAL_DATA_FOLDER,
    'experiment_1',
    'data_final.feather'
)
df = pd.read_feather(DATA)
df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 3076 entries, 0 to 3075 Columns: 442 entries, Age to Q80_timer dtypes: float64(225), int64(25), object(192) memory usage: 10.4+ MB ###Markdown Separate quality-concern treatments from the following main analysis ###Code sel = (df['Dataset'] == 'Main')
df = df[sel] ###Output _____no_output_____ ###Markdown Figure S1: Time spent on intervention screen ###Code treat = ['Praise', 'Reference point']
hist_params = dict(bins=20, range=(0, 60), density=True, color='0.4', alpha=0.8)

fig, axis = plt.subplots(ncols=2, nrows=1, figsize=(10, 5), dpi=150, facecolor='w')
fig.subplots_adjust(hspace=0.35, wspace=0.25)
for i, ax in enumerate(fig.axes):
    timer = df[df['Leadership_technique'] == treat[i]]['Intervention_timer']
    timer.hist(ax=ax, **hist_params)
    ax.set_title(treat[i], **font_kw)
    ax.grid(False)
    ax.set_ylim(0, 0.18)
    ax.tick_params(**tick_kw)
    ax.set_xlabel("Time spent on intervention screen in seconds", **xlab_kw)
    ax.set_ylabel("Share of subjects", **ylab_kw)
    ax.yaxis.set_major_formatter(FuncFormatter('{:.0%}'.format))
    mean, med = timer.mean(), timer.median()
    ax.text(45, 0.16, f"$\\tilde{{x}}={mean:.1f}$\n$q_{{0.5}}={med:.1f}$")
path = os.path.join(
    FIGURES_FOLDER,
    'experiment_1',
    'intervention_timer_hist.pdf'
)
plt.savefig(path, bbox_inches='tight')
!jupyter nbconvert --output-dir='./docs' --to html 3_intervention_timer.ipynb ###Output [NbConvertApp] Converting notebook 3_intervention_timer.ipynb to html [NbConvertApp] Writing 658987 bytes to docs/3_intervention_timer.html
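###Markdown The two timer distributions in Figure S1 invite a formal comparison. A hedged sketch using a Mann-Whitney U test -- my addition, not part of the original analysis pipeline -- assuming `df` still holds the main-dataset rows from above: ###Code from scipy.stats import mannwhitneyu

praise = df[df['Leadership_technique'] == 'Praise']['Intervention_timer'].dropna()
ref = df[df['Leadership_technique'] == 'Reference point']['Intervention_timer'].dropna()

# Nonparametric test of whether one treatment tends to spend longer on the screen
stat, p = mannwhitneyu(praise, ref, alternative='two-sided')
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3f}") ###Output _____no_output_____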
m2-data-analysis-and-hypothesis-testing/case-study-data-visualization.ipynb
###Markdown ![ibm cloud logo](./images/ibm-cloud.png) Data Visualization ###Code from IPython.display import IFrame
IFrame('https://player.vimeo.com/video/349962138/', width=600, height=400) ###Output _____no_output_____ ###Markdown Make Notebook Run in Watson Studio ###Code # The code was removed by Watson Studio for sharing.
# START CODE BLOCK

# cos2file - takes an object from Cloud Object Storage and writes it to file on container file system.
# Uses the IBM project_lib library.
# See https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html
# Arguments:
# p: project object defined in project token
# data_path: the directory to write the file
# filename: name of the file in COS

import os

def cos2file(p, data_path, filename):
    data_dir = p.project_context.home + data_path
    if not os.path.exists(data_dir):
        os.makedirs(data_dir)
    open(data_dir + '/' + filename, 'wb').write(p.get_file(filename).read())

# file2cos - takes file on container file system and writes it to an object in Cloud Object Storage.
# Uses the IBM project_lib library.
# See https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html
# Arguments:
# p: project object defined in project token
# data_path: the directory to read the file from
# filename: name of the file on container file system

import os

def file2cos(p, data_path, filename):
    data_dir = p.project_context.home + data_path
    path_to_file = data_dir + '/' + filename
    if os.path.exists(path_to_file):
        file_object = open(path_to_file, 'rb')
        p.save_data(filename, file_object, set_project_asset=True, overwrite=True)
    else:
        print("file2cos error: File not found")

# END CODE BLOCK ###Output _____no_output_____ ###Markdown **Create data directory and save the file** ###Code cos2file(project, '/data', 'world-happiness.csv') ###Output _____no_output_____ ###Markdown "The first task of the data scientist is always data visualization." You recall these words from the many courses and bootcamps you've attended, and here you are working for AAVAIL about to do just that! Your team lead has asked you to start looking at the market churn in Singapore. Singapore has a higher rate of churn than AAVAIL's other geographic regions, so it makes sense to start looking there. *How comfortable are you with creating common data science plots in Python?* If you are comfortable with creating plots, then this next section can serve as a review for you, before you move on to the challenging task of looking at AAVAIL's Singapore data. For everyone else, this section is an important exercise that covers a topic that every data scientist should know: **Making continuous variables easier to visualize by breaking them up into discrete categories.** That sounds easy, but in fact it can sometimes be very challenging, because the discrete categories you create must make sense in the context of the problem you are working on. There isn't a "one size fits all" approach to doing this. Before you take a deep dive into your client's data, let's level-set with some practice data. *You always want to test your tools with non-critical data before you start working with actual client data.* AAVAIL wants the deliverables for this project to be prepared in Jupyter notebooks. Jupyter has become an industry standard in the Python ecosystem and in data science. But here is a pro-tip: Jupyter notebooks don't do well when used in a version control system. 
Your first Jupyter notebook for this project will be a practice notebook to make sure you can execute some basic data manipulation tasks to make EDA easier. Download the notebook from the following link, then open it locally using a Jupyter server, or use your IBM cloud account to log in to Watson Studio. Inside of Watson Studio Cloud, if you have not already, ensure that this notebook is loaded as part of the project for this course. As a reminder, fill in all of the places in this notebook marked with YOUR CODE HERE or YOUR ANSWER HERE. The data and notebook for this unit are available below. * [m2-u2-data-visualization.ipynb](m2-u2-data-visualization.ipynb) * [world-happiness.csv](./data/world-happiness.csv) This unit is organized into the following sections: 1. EDA and pandas 2. Data visualization best practices 3. Essentials of matplotlib 4. Pairs plots and correlation 5. Beyond simple plots Data visualization in Python It is expected that you already know and use both pandas and matplotlib on a regular basis. If you are comfortable with plotting---meaning that you can readily produce any of the several dozen types of common plots used in data science---then this unit will serve as a review. In the sections above we will touch on the essential tools and best practices, and survey the landscape of available tools for more advanced plotting. If you would like additional context, a few links are available below: - [Anaconda's article on 'moving toward convergence'](https://www.anaconda.com/blog/developer-blog/python-data-visualization-moving-toward-convergence/) - [Anaconda's article on the future of Python visualization libraries](https://www.anaconda.com/blog/developer-blog/python-data-visualization-2018-where-do-we-go-from-here/) - [Matplotlib tutorial](http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html) First let's make all of the necessary imports and configure the plot fonts. If you use Jupyter notebooks as a presentation tool, then ensuring that your fonts are readable is both the professional thing to do and it will help with communication. ###Code import re
import numpy as np
import pandas as pd
from IPython.display import Image
import matplotlib.pyplot as plt

plt.style.use('seaborn')
%matplotlib inline

SMALL_SIZE = 12
MEDIUM_SIZE = 14
LARGE_SIZE = 16

plt.rc('font', size=SMALL_SIZE)          # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE)     # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE)    # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE)    # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE)    # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE)    # legend fontsize
plt.rc('figure', titlesize=LARGE_SIZE)   # fontsize of the figure title ###Output _____no_output_____ ###Markdown Let's load the data that we will be using to practice EDA and then perform some basic cleanup. 
###Code ## load the data and print the shape
df = pd.read_csv("../data/world-happiness.csv", index_col=0)
print("df: {} x {}".format(df.shape[0], df.shape[1]))

## clean up the column names and remove some
df.columns = [re.sub(r"\s+", "_", col) for col in df.columns.tolist()]
df.head(n=4)
## missing values summary
print("Missing Value Summary\n{}".format("-"*35))
print(df.isnull().sum(axis=0))
df.info()
## drop the rows that have NaNs
print("Original Matrix:", df.shape)
df.dropna(inplace=True)
print("After NaNs removed:", df.shape) ###Output Original Matrix: (495, 12) After NaNs removed: (470, 12) ###Markdown The data The original data are produced by the [UN Sustainable Development Solutions Network (SDSN)](http://unsdsn.org/about-us/vision-and-organization) and the report is compiled and available at [https://worldhappiness.report](https://worldhappiness.report). The following is the messaging on the report website:> The World Happiness Report is a landmark survey of the state of global happiness that ranks 156 countries by how happy their citizens perceive themselves to be. The report is produced by the United Nations Sustainable Development Solutions Network in partnership with the Ernesto Illy Foundation. > The World Happiness Report was written by a group of independent experts acting in their personal capacities. Any views expressed in this report do not necessarily reflect the views of any organization, agency or program of the United Nations. So knowing this, it makes sense to [sort the data](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html). ###Code df.sort_values(['Year', "Happiness_Score"], ascending=[True, False], inplace=True)
df.head(n=4) ###Output _____no_output_____ ###Markdown EDA and pandas The pandas documentation is quite good compared to other packages, and there are a substantial number of arguments available for most of the functions. See the [docs for pandas.read_csv](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html) as an example of this point. The Python [package pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html) is very commonly used during EDA. If you are not yet familiar with it or you need a refresher, the [pandas tutorials](https://pandas.pydata.org/pandas-docs/stable/getting_started/tutorials.html) are a good place to start. [Pivot tables](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html) and [groupbys](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html) are methods that perform aggregations over a [pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html). ###Code columns_to_show = ["Happiness_Score","Health_(Life_Expectancy)"]
pd.pivot_table(df, index='Year', values=columns_to_show, aggfunc='mean').round(3)
df.groupby(['Year'])[columns_to_show].mean().round(3) ###Output _____no_output_____ ###Markdown There [are some differences between pivot_table and groupby](https://stackoverflow.com/questions/34702815/pandas-group-by-and-pivot-table-difference), but either can be used to create aggregate summary tables. See the [pandas tutorial on reshaping and pivots](http://pandas-docs.github.io/pandas-docs-travis/user_guide/reshaping.html) to learn more. Also note that you can have more than one index. 
###Code pd.pivot_table(df, index=['Region', 'Year'], values=columns_to_show).round(3) ###Output _____no_output_____ ###Markdown When we want to summarize continuous data the functions [qcut()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.qcut.html) and [cut](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html) are useful to partition them. The binned observations can be summarized with the same tabular functions as above. QUESTION: tabular summary Use `pd.qcut()` or `pd.cut` to bin the data by `Happiness_Rank` and create a tabular summary that summarizes `Happiness_Score` and `Health_(Life_Expectancy)` with respect to `Region`. Take a minute to think about presenting this information in a way that makes it easily interpretable. How many bins should you use? Is there a way to make those bins more understandable? Here is the [difference between `pd.qcut()` and `pd.cut()`](https://stackoverflow.com/questions/30211923/what-is-the-difference-between-pandas-qcut-and-pandas-cut). ###Code happiness_rank_labels = ["Very Happy", "Happy", "Happy/Unhappy", "Unhappy", "Very Unhappy"]
df["Happiness_Rank_bins"] = pd.qcut(df.Happiness_Rank, 5, labels=happiness_rank_labels)
pd.pivot_table(df, index=["Happiness_Rank_bins", "Region"],
               values=["Happiness_Score", "Health_(Life_Expectancy)"],
               aggfunc="mean").round(3) ###Output _____no_output_____ ###Markdown Data visualization best practices When tables like the one we just created become difficult to navigate it can be useful to use a simple plot to summarize the data. It is possible that both a table and a plot might be needed to communicate the findings, and one common practice is to include an appendix in the deliverable. Another related practice when it comes to EDA is that the communication of your findings, usually via deliverable, is done in a clean and polished way. If using a notebook as a communication tool, take the time to remove unnecessary code blocks and other elements, as they can distract from the principal takeaways. Best practices as a data scientist generally require that all work be saved as text files: 1. [Executable scripts](https://docs.python.org/3/using/cmdline.html) 2. [Python modules](https://docs.python.org/3/tutorial/modules.html) 3. [Python package](https://www.pythoncentral.io/how-to-create-a-python-package) A module is a file containing Python definitions and statements. The file name is the module name with the suffix `.py` appended. Jupyter notebooks have the suffix `.ipynb` and use JSON, with a lot of custom text. The readability of such files is difficult using a standard programming editor, and file **readability** is key to leveraging version control. That being said, the two notable exceptions to this rule of always preserving your code in readable files are EDA and results communication, both of which are tasks that come up frequently in data science. Data visualization is arguably the most important tool for communicating your results to others, especially business stakeholders. Most importantly, there are three important points to communicate to your stakeholders: >1. what you have done >2. what you are doing, and >3. what you plan to do. 1) **Keep your code-base separated from your notebooks** Here we will show the import of code from a Python module into a notebook to showcase the best practice of saving a maximum amount of code within files, while still making use of Jupyter as a presentation tool. Version control is a key component to effective collaboration and reproducible research. 
Version control is not within the scope of this course, but systems are generally built on [git](https://git-scm.com) or [mercurial](https://www.mercurial-scm.org). There are a [host of other websites and services as well](https://en.wikipedia.org/wiki/List_of_version-control_software). The following links provide more context to the topic of reproducible research. * [Introduction to version control in the context of research](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004668) * [Collection of articles surveying several areas of reproducible research](https://www.nature.com/collections/prbfkwmwvz). 2) **Keep a notebook or a record of plots and plot manipulations** Outside of established software engineering practices there are a couple of guidelines that have proven to be useful in practice. The first is related to version control and it revolves around the use of galleries. The [matplotlib gallery](https://matplotlib.org/gallery.html) and the [Seaborn gallery](https://seaborn.pydata.org/examples/index.html) are good starting points, but you should have your own. Just as you would do when engineering a piece of software, you should be making the extra effort when something is reusable to ensure that it can be used in a different context. It could be as simple as a folder with a script for each plot. 3) **Use your plots as a quality assurance tool** The other guideline is to make an educated guess **before** you see the plot. Before you execute the cell or run the script, take a moment to predict what the plot should look like. You have likely already seen some of the data or created a tabular summary, so you should have some intuition. This habit is surprisingly useful for quality assurance of both data and code. Essentials of matplotlib Matplotlib has a "functional" interface similar to MATLAB® that works via the `pyplot` module for simple interactive use, as well as an object-oriented interface that is more *pythonic* and better for plots that require some level of customization or modification. The latter is called the `artist` interface. There is also built-in functionality from within pandas for rapid access to plotting capabilities. * [pandas visualization](http://pandas-docs.github.io/pandas-docs-travis/user_guide/visualization.html) * [matplotlib pyplot interface](https://matplotlib.org/users/pyplot_tutorial.html) * [matplotlib artist interface](https://matplotlib.org/users/artists.html) ###Code fig = plt.figure(figsize=(14,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)

table1 = pd.pivot_table(df, index='Region', columns='Year', values="Happiness_Score")
table1.plot(kind='bar', ax=ax1)
ax1.set_ylabel("Happiness Score");

table2 = pd.pivot_table(df, index='Region', columns='Year', values="Health_(Life_Expectancy)")
table2.plot(kind='bar', ax=ax2)
ax2.set_ylabel("Health (Life_Expectancy)");

## adjust the axis to accommodate the legend
ax1.set_ylim((0,9.3))
ax2.set_ylim((0,1.3)) ###Output _____no_output_____ ###Markdown There are some interface limitations when it comes to using the pandas interface for plotting, but it serves as an efficient first pass. You may also notice that if this notebook were to be used as a presentation there is some exposed plot generation code that can limit communication. There are ways to hide code in Jupyter, but in keeping with the best practices of storing code in text files for version control, as well as the cataloging of plot code, here is a script that makes a nicer version of the same plot with the `artist` interface. 
See the file for details on the additional customization. * [make-happiness-summary-plot.py](./scripts/make-happiness-summary-plot.py) **Create scripts directory and save the file** ###Code cos2file(project, '/scripts', 'make-happiness-summary-plot.py')
def create_images_dir(p, images_path):
    images_dir = p.project_context.home + images_path
    if not os.path.exists(images_dir):
        print("...create images directory")
        os.makedirs(images_dir)
    else:
        print("...images directory exists")

create_images_dir(project, "/images")
!python ../scripts/make-happiness-summary-plot.py
Image("../images/happiness-summary.png", width=800, height=600) ###Output ... data ingestion ... creating plot ../images/happiness-summary.png created. ###Markdown Pair plots and correlation There are many useful tools and techniques for EDA that could be a part of these materials, but the focus of this course is on the workflow itself. In addition to indispensable summary tables and simple plots there is one more that deserves special mention, because of its utility, and that is the pair plot, sometimes referred to as the pairs plot. At a minimum these are used to visualize the relationships between all pairwise combinations of continuous variables in your data set. Importantly, we can quantify these relationships using [correlation](https://en.wikipedia.org/wiki/Correlation_and_dependence). There are also ways to get additional insight into the data by overlaying discrete variables, using a coloring scheme, and including univariate distributions along the diagonal. * [seaborn pairplot](https://seaborn.pydata.org/generated/seaborn.pairplot.html) * [seaborn pairwise correlations plot](https://seaborn.pydata.org/examples/many_pairwise_correlations.html) ###Code import seaborn as sns
sns.set(style="ticks", color_codes=True)

## make a pair plot
columns = ['Happiness_Score','Economy_(GDP_per_Capita)', 'Family',
           'Health_(Life_Expectancy)', 'Freedom', 'Trust_(Government_Corruption)']
axes = sns.pairplot(df, vars=columns, hue="Year", palette="husl") ###Output _____no_output_____ ###Markdown QUESTION: Correlation plot Use the [following code snippet from the seaborn examples](https://seaborn.pydata.org/examples/many_pairwise_correlations.html) to create your own grid plot of pairwise correlations for this data set. Do this as a separate script then run and display the image here. 
###Code %%writefile ../scripts/make-happiness-correlations-plot.py
#!/usr/bin/env python

import os
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

## plot style, fonts and colors
plt.style.use('seaborn')

SMALL_SIZE = 12
MEDIUM_SIZE = 14
LARGE_SIZE = 16
COLORS = ["darkorange","royalblue","slategrey"]

plt.rc('font', size=SMALL_SIZE)          # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE)     # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE)    # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE)    # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE)    # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE)    # legend fontsize
plt.rc('figure', titlesize=LARGE_SIZE)   # fontsize of the figure title

DATA_DIR = os.path.join(".","data")
IMAGE_DIR = os.path.join(".","images")

## allow script to be run from parent directory
if not os.path.exists(DATA_DIR):
    DATA_DIR = os.path.join("..","data")
    IMAGE_DIR = os.path.join("..","images")

if not os.path.exists(DATA_DIR):
    raise Exception("cannot find DATA_DIR")
if not os.path.exists(IMAGE_DIR):
    raise Exception("cannot find IMAGE_DIR")

sns.set(style="ticks", color_codes=True)

def ingest_data():
    """
    ready the data for EDA
    """
    print("... data ingestion")
    ## load the data and print the shape
    df = pd.read_csv(os.path.join(DATA_DIR, "world-happiness.csv"), index_col=0)

    ## clean up the column names
    df.columns = [re.sub(r"\s+", "_", col) for col in df.columns.tolist()]

    ## drop the rows that have NaNs
    df.dropna(inplace=True)

    ## sort the data for more intuitive visualization
    df.sort_values(['Year', "Happiness_Score"], ascending=[True, False], inplace=True)
    return(df)

def create_correlations_gridplot(df):
    """
    create grid plot of pairwise correlations
    """
    print("... creating plot")
    # Compute the correlation matrix
    corr = df.corr()

    # Generate a mask for the upper triangle
    # (np.bool was removed from newer NumPy releases; the builtin bool works everywhere)
    mask = np.triu(np.ones_like(corr, dtype=bool))

    # Set up the matplotlib figure
    f, ax = plt.subplots(figsize=(11, 9))

    # Generate a custom diverging colormap
    cmap = sns.diverging_palette(220, 10, as_cmap=True)

    # Draw the heatmap with the mask and correct aspect ratio
    sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
                square=True, linewidths=.5, cbar_kws={"shrink": .5})

    image_path = os.path.join(IMAGE_DIR, "pairwise-correlations.png")
    plt.savefig(image_path, bbox_inches='tight', pad_inches=0, dpi=200)
    print("{} created.".format(image_path))

if __name__ == "__main__":
    df = ingest_data()
    create_correlations_gridplot(df)
file2cos(project, '/scripts', 'make-happiness-correlations-plot.py')
!python ../scripts/make-happiness-correlations-plot.py
Image("../images/pairwise-correlations.png", width=800, height=600) ###Output ... data ingestion ... creating plot ../images/pairwise-correlations.png created.
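###Markdown As a follow-up to the qcut question earlier: `pd.cut` is the complementary tool when you want bins at fixed score values rather than equal-sized quantiles. A small illustrative sketch -- the bin edges and labels here are my own choices, not from the report: ###Code ## fixed-width bins on the raw score, in contrast to the quantile bins from pd.qcut
score_bins = pd.cut(df["Happiness_Score"], bins=[2, 4, 6, 8], labels=["low", "mid", "high"])
pd.pivot_table(df, index=[score_bins, "Region"],
               values=["Health_(Life_Expectancy)"], aggfunc="mean").round(3) ###Output _____no_output_____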
Resources/Starter_Code/credit_risk_ensemble.ipynb
###Markdown Read the CSV and Perform Basic Data Cleaning ###Code # https://help.lendingclub.com/hc/en-us/articles/215488038-What-do-the-different-Note-statuses-mean- columns = [ "loan_amnt", "int_rate", "installment", "home_ownership", "annual_inc", "verification_status", "issue_d", "loan_status", "pymnt_plan", "dti", "delinq_2yrs", "inq_last_6mths", "open_acc", "pub_rec", "revol_bal", "total_acc", "initial_list_status", "out_prncp", "out_prncp_inv", "total_pymnt", "total_pymnt_inv", "total_rec_prncp", "total_rec_int", "total_rec_late_fee", "recoveries", "collection_recovery_fee", "last_pymnt_amnt", "next_pymnt_d", "collections_12_mths_ex_med", "policy_code", "application_type", "acc_now_delinq", "tot_coll_amt", "tot_cur_bal", "open_acc_6m", "open_act_il", "open_il_12m", "open_il_24m", "mths_since_rcnt_il", "total_bal_il", "il_util", "open_rv_12m", "open_rv_24m", "max_bal_bc", "all_util", "total_rev_hi_lim", "inq_fi", "total_cu_tl", "inq_last_12m", "acc_open_past_24mths", "avg_cur_bal", "bc_open_to_buy", "bc_util", "chargeoff_within_12_mths", "delinq_amnt", "mo_sin_old_il_acct", "mo_sin_old_rev_tl_op", "mo_sin_rcnt_rev_tl_op", "mo_sin_rcnt_tl", "mort_acc", "mths_since_recent_bc", "mths_since_recent_inq", "num_accts_ever_120_pd", "num_actv_bc_tl", "num_actv_rev_tl", "num_bc_sats", "num_bc_tl", "num_il_tl", "num_op_rev_tl", "num_rev_accts", "num_rev_tl_bal_gt_0", "num_sats", "num_tl_120dpd_2m", "num_tl_30dpd", "num_tl_90g_dpd_24m", "num_tl_op_past_12m", "pct_tl_nvr_dlq", "percent_bc_gt_75", "pub_rec_bankruptcies", "tax_liens", "tot_hi_cred_lim", "total_bal_ex_mort", "total_bc_limit", "total_il_high_credit_limit", "hardship_flag", "debt_settlement_flag" ] target = ["loan_status"] # Load the data file_path = Path('../Resources/LoanStats_2019Q1.csv.zip') df = pd.read_csv(file_path, skiprows=1)[:-2] df = df.loc[:, columns].copy() # Drop the null columns where all values are null df = df.dropna(axis='columns', how='all') # Drop the null rows df = df.dropna() # Remove the `Issued` loan status issued_mask = df['loan_status'] != 'Issued' df = df.loc[issued_mask] # convert interest rate to numerical df['int_rate'] = df['int_rate'].str.replace('%', '') df['int_rate'] = df['int_rate'].astype('float') / 100 # Convert the target column values to low_risk and high_risk based on their values x = {'Current': 'low_risk'} df = df.replace(x) x = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk') df = df.replace(x) df.reset_index(inplace=True, drop=True) df.head() ###Output _____no_output_____ ###Markdown Split the Data into Training and Testing ###Code # Create our features X = # YOUR CODE HERE # Create our target y = # YOUR CODE HERE X.describe() # Check the balance of our target values y['loan_status'].value_counts() # Split the X and y into X_train, X_test, y_train, y_test # YOUR CODE HERE ###Output _____no_output_____ ###Markdown Ensemble LearnersIn this section, you will compare two ensemble algorithms to determine which algorithm results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble AdaBoost classifier . For each algorithm, be sure to complete the folliowing steps:1. Train the model using the training data. 2. Calculate the balanced accuracy score from sklearn.metrics.3. Print the confusion matrix from sklearn.metrics.4. Generate a classication report using the `imbalanced_classification_report` from imbalanced-learn.5. 
For the Balanced Random Forest Classifier onely, print the feature importance sorted in descending order (most important feature to least important) along with the feature scoreNote: Use a random state of 1 for each algorithm to ensure consistency between tests Balanced Random Forest Classifier ###Code # Resample the training data with the RandomOversampler # YOUR CODE HERE # Calculated the balanced accuracy score # YOUR CODE HERE # Display the confusion matrix # YOUR CODE HERE # Print the imbalanced classification report # YOUR CODE HERE # List the features sorted in descending order by feature importance # YOUR CODE HERE ###Output loan_amnt: (0.09175752102205247) int_rate: (0.06410003199501778) installment: (0.05764917485461809) annual_inc: (0.05729679526683975) dti: (0.05174788106507317) delinq_2yrs: (0.031955619175665397) inq_last_6mths: (0.02353678623968216) open_acc: (0.017078915518993903) pub_rec: (0.017014861224701222) revol_bal: (0.016537957646730293) total_acc: (0.016169718411077325) out_prncp: (0.01607049983545137) out_prncp_inv: (0.01599866290723441) total_pymnt: (0.015775537221600675) total_pymnt_inv: (0.01535560674178928) total_rec_prncp: (0.015029265003541079) total_rec_int: (0.014828006488636946) total_rec_late_fee: (0.01464881608833323) recoveries: (0.014402430445752665) collection_recovery_fee: (0.014318832248876989) last_pymnt_amnt: (0.013519867193755364) collections_12_mths_ex_med: (0.013151520216882331) policy_code: (0.013101578263049833) acc_now_delinq: (0.012784600558682344) tot_coll_amt: (0.012636608914961465) tot_cur_bal: (0.012633464965390648) open_acc_6m: (0.012406321468566728) open_act_il: (0.011687404692448701) open_il_12m: (0.01156494245653799) open_il_24m: (0.011455878011762288) mths_since_rcnt_il: (0.011409157520644688) total_bal_il: (0.01073641504525053) il_util: (0.010380085181706624) open_rv_12m: (0.010097528131347774) open_rv_24m: (0.00995373830638152) max_bal_bc: (0.00991410213601043) all_util: (0.009821715826953788) total_rev_hi_lim: (0.009603648248133598) inq_fi: (0.009537423049553) total_cu_tl: (0.008976776055926955) inq_last_12m: (0.008870623013604539) acc_open_past_24mths: (0.008745106187024114) avg_cur_bal: (0.008045578273709669) bc_open_to_buy: (0.007906251501807723) bc_util: (0.00782073260901301) chargeoff_within_12_mths: (0.007798696767389274) delinq_amnt: (0.007608045628523077) mo_sin_old_il_acct: (0.0075861537897335815) mo_sin_old_rev_tl_op: (0.007554511001273182) mo_sin_rcnt_rev_tl_op: (0.007471884930172615) mo_sin_rcnt_tl: (0.007273779915807858) mort_acc: (0.006874845464745796) mths_since_recent_bc: (0.006862142977394886) mths_since_recent_inq: (0.006838718858820505) num_accts_ever_120_pd: (0.006413554699909871) num_actv_bc_tl: (0.006319439816216779) num_actv_rev_tl: (0.006160469432535709) num_bc_sats: (0.006066257227997291) num_bc_tl: (0.005981472544437747) num_il_tl: (0.0055301594524349495) num_op_rev_tl: (0.004961823663836347) num_rev_accts: (0.004685198497435334) num_rev_tl_bal_gt_0: (0.0045872929977180356) num_sats: (0.0041651633321967895) num_tl_120dpd_2m: (0.004016461341161775) num_tl_30dpd: (0.0032750717701661657) num_tl_90g_dpd_24m: (0.0027565184136781346) num_tl_op_past_12m: (0.0026174030074401656) pct_tl_nvr_dlq: (0.002279671873697176) percent_bc_gt_75: (0.0021899772867773103) pub_rec_bankruptcies: (0.0020851101815353096) tax_liens: (0.0018404849590376573) tot_hi_cred_lim: (0.001736019018028134) total_bal_ex_mort: (0.0015472230884974506) total_bc_limit: (0.0012263315437383057) total_il_high_credit_limit: (0.0012213148580230454) 
home_ownership_ANY: (0.0012151288883862276) home_ownership_MORTGAGE: (0.0008976722260399365) home_ownership_OWN: (0.0008125182396705508) home_ownership_RENT: (0.000573414997420326) verification_status_Not Verified: (0.0005168345750594915) verification_status_Source Verified: (0.0004192455022893127) verification_status_Verified: (0.0) issue_d_Feb-2019: (0.0) issue_d_Jan-2019: (0.0) issue_d_Mar-2019: (0.0) pymnt_plan_n: (0.0) initial_list_status_f: (0.0) initial_list_status_w: (0.0) next_pymnt_d_Apr-2019: (0.0) next_pymnt_d_May-2019: (0.0) application_type_Individual: (0.0) application_type_Joint App: (0.0) hardship_flag_N: (0.0) debt_settlement_flag_N: (0.0) ###Markdown Easy Ensemble AdaBoost Classifier ###Code # Train the Classifier # YOUR CODE HERE # Calculated the balanced accuracy score # YOUR CODE HERE # Display the confusion matrix # YOUR CODE HERE # Print the imbalanced classification report # YOUR CODE HERE ###Output pre rec spe f1 geo iba sup high_risk 0.09 0.92 0.94 0.16 0.93 0.87 101 low_risk 1.00 0.94 0.92 0.97 0.93 0.87 17104 avg / total 0.99 0.94 0.92 0.97 0.93 0.87 17205
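###Markdown The `# YOUR CODE HERE` cells above are left for the reader. As a reference point only, here is one hedged sketch of how they might be completed, assuming imbalanced-learn is installed and that categorical columns are one-hot encoded with `pd.get_dummies` (consistent with the dummy feature names in the importance listing above); it is not the graded solution, and the parameters are illustrative. ###Code
# A hypothetical end-to-end sketch of the two ensemble learners.
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, confusion_matrix
from imblearn.ensemble import BalancedRandomForestClassifier, EasyEnsembleClassifier
from imblearn.metrics import classification_report_imbalanced

# Features/target, with categoricals one-hot encoded
X = pd.get_dummies(df.drop(columns="loan_status"))
y = df["loan_status"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)

# Balanced Random Forest Classifier
brf = BalancedRandomForestClassifier(n_estimators=100, random_state=1)
brf.fit(X_train, y_train)
y_pred = brf.predict(X_test)
print(balanced_accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report_imbalanced(y_test, y_pred))

# Feature importances for the Balanced Random Forest only, most important first
for score, name in sorted(zip(brf.feature_importances_, X.columns), reverse=True):
    print(f"{name}: ({score})")

# Easy Ensemble AdaBoost Classifier
eec = EasyEnsembleClassifier(n_estimators=100, random_state=1)
eec.fit(X_train, y_train)
y_pred = eec.predict(X_test)
print(balanced_accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report_imbalanced(y_test, y_pred))
###Output _____no_output_____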
refactoring-101/refactoring.ipynb
###Markdown Refactoring*A pattern for evolution of code*Scott Hendrickson, 2016 April 29 What is refactoring? Code refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior. - wikipedia An illustration of the two work streams: ![title](img/streams.jpg)1. "Normal" coding work. You are trying to add new functionality to your code.2. Changing the code, structure, architecture, etc. (but *not* the functionality).3. Know *when* and *how* to move between the two activities. More from wikipedia, Refactoring improves nonfunctional attributes of the software. Advantages include improved code readability and reduced complexity; these can improve source-code maintainability and create a more expressive internal architecture or object model to improve extensibility. Typically, refactoring applies a series of standardised basic micro-refactorings, each of which is (usually) a tiny change in a computer program's source code that either preserves the behaviour of the software, or at least does not modify its conformance to functional requirements. Why worry about it? Refactoring enables graceful code evolution -- you don't have to know all the answers when you start; you can discover them as you go. Refactoring is a key part of the *how*. ... code refactoring may also resolve hidden, dormant, or undiscovered computer bugs or vulnerabilities in the system by simplifying the underlying logic and eliminating unnecessary levels of complexity. What motivates refactoring? Sometimes we have a distinct feeling that something isn't quite right. Or, we hit a wall where someone asks us to add some functionality to the code and we realize that code choices we made earlier make adding the new piece very hard, risky or clunky. More formally, smart people have identified some common "this doesn't feel right" moments and cataloged them. Bad Smells Refactoring is usually motivated by noticing a code smell.[2] For example the method at hand may be very long, or it may be a near duplicate of another nearby method. Once recognized, such problems can be addressed by refactoring the source code, or transforming it into a new form that behaves the same as before but that no longer "smells". These *bad smells* can be at code, architecture, data structure, etc. levels. Learning to identify many types of bad smells and then having ready tools to address them is part of what we mean by "professional developer skills." For a long routine, one or more smaller subroutines can be extracted; or for duplicate routines, the duplication can be removed and replaced with one shared function. Failure to perform refactoring can result in accumulating technical debt; on the other hand, refactoring is one of the primary means of repaying technical debt.[3] When you don't refactor regularly......you accumulate technical debt. Can we say any generally useful things about the risks of not paying down technical debt? There are two general categories of benefits to the activity of refactoring:1. Maintainability. It is easier to fix bugs because the source code is easy to read and the intent of its author is easy to grasp.[4] This might be achieved by reducing large monolithic routines into a set of individually concise, well-named, single-purpose methods. It might be achieved by moving a method to a more appropriate class, or by removing misleading comments.2. Extensibility.
It is easier to extend the capabilities of the application if it uses recognizable design patterns, and it provides some flexibility where none before may have existed.[1] - wikipedia What is a design pattern? A design pattern is the re-usable form of a solution to a design problem. The idea was introduced by the architect Christopher Alexander[1] and has been adapted for various other disciplines, most notably computer science.[2] - wikipedia Can you give an example, maybe from Alexander's work? ... how the components of the pattern relate to each other to give the solution.[3] Christopher Alexander describes common design problems as arising from "conflicting forces" — such as the conflict between wanting a room to be sunny and wanting it not to overheat on summer afternoons. A pattern would not tell the designer how many windows to put in the room; instead, it would propose a set of values to guide the designer toward a decision that is best for their particular application. Alexander, for example, suggests that enough windows should be included to direct light all around the room. He considers this a good solution because he believes it increases the enjoyment of the room by its occupants. Other authors might come to different conclusions, if they place higher value on heating costs, or material costs. These values, used by the pattern's author to determine which solution is "best", must also be documented within the pattern. - wikipedia![title](img/access_to_water.jpg) Important attributes of a Pattern The elements of this language are entities called patterns. Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice. — Christopher Alexander[1] - wikipedia So, summarizing the attributes:1. The Context section sets the stage where the pattern takes place.2. The Problem section explains what the actual problem is.3. The Forces section describes why the problem is difficult to solve.4. The Solution section explains the solution in detail.5. The Consequences section demonstrates what happens when you apply the solution.(These steps follow http://www.europlop.net/sites/default/files/files/0_How%20to%20write%20a%20pattern-2011-11-30_linked.pdf )1. People living, working, playing near water. Desire to be near water.2. Desire to access water by many people at once --> reduced access to water3. Identify the forces * Desire - private land on water * Water adjacent land = common access land * Roads parallel to water aren't the kind of access we mean4. Form of solution and considerations * Common land along water * Approach roads perpendicular5. Preserving some common areas near water provides broad access. Preserving common water's-edge space by building roads perpendicular to the water maximizes the common areas. WRT bad smells, give a more code-y example
WRT bad smells, give a more code-y example ###Code import string, math a = { x:[] for x in string.punctuation } i = 0 for x in a: a[x].append(math.sin(i)) i += 1 print(a) ###Output {'!': [0.0], '#': [0.8414709848078965], '"': [0.9092974268256817], '%': [0.1411200080598672], '$': [-0.7568024953079282], "'": [-0.9589242746631385], '&': [-0.27941549819892586], ')': [0.6569865987187891], '(': [0.9893582466233818], '+': [0.4121184852417566], '*': [-0.5440211108893699], '-': [-0.9999902065507035], ',': [-0.5365729180004349], '/': [0.4201670368266409], '.': [0.9906073556948704], ';': [0.6502878401571169], ':': [-0.2879033166650653], '=': [-0.9613974918795568], '<': [-0.750987246771676], '?': [0.14987720966295234], '>': [0.9129452507276277], '@': [0.836655638536056], '[': [-0.008851309290403876], ']': [-0.8462204041751706], '\\': [-0.9055783620066239], '_': [-0.13235175009777303], '^': [0.7625584504796028], '`': [0.956375928404503], '{': [0.27090578830786904], '}': [-0.6636338842129675], '|': [-0.9880316240928618], '~': [-0.404037645323065]} ###Markdown This activity has a name, we call it enumeration... ###Code import string a = { x:[] for x in string.punctuation } for i, x in enumerate(a): a[x].append(math.sin(i)) print(a) ###Output {'!': [0.0], '#': [0.8414709848078965], '"': [0.9092974268256817], '%': [0.1411200080598672], '$': [-0.7568024953079282], "'": [-0.9589242746631385], '&': [-0.27941549819892586], ')': [0.6569865987187891], '(': [0.9893582466233818], '+': [0.4121184852417566], '*': [-0.5440211108893699], '-': [-0.9999902065507035], ',': [-0.5365729180004349], '/': [0.4201670368266409], '.': [0.9906073556948704], ';': [0.6502878401571169], ':': [-0.2879033166650653], '=': [-0.9613974918795568], '<': [-0.750987246771676], '?': [0.14987720966295234], '>': [0.9129452507276277], '@': [0.836655638536056], '[': [-0.008851309290403876], ']': [-0.8462204041751706], '\\': [-0.9055783620066239], '_': [-0.13235175009777303], '^': [0.7625584504796028], '`': [0.956375928404503], '{': [0.27090578830786904], '}': [-0.6636338842129675], '|': [-0.9880316240928618], '~': [-0.404037645323065]}
notebooks/UD_Staff_Management_EDV.ipynb
###Markdown Premise of NotebookThis exploratory notebook will explore increasingly granular levels of visualization, utilizing Altair for its high customizability and functionality, as well as interactivity that's retained when embedded to a website. This exercise focuses specifically on mentor/mentee data in regards to high level categorical metrics, but the concepts applied can be utilized in many other areas of data visualization, including attendance time series graphs, financial/resource management (tracking usage, supply/demand growths, etc.), mentor availability for mentees by day and time, staff metric overviews like what we're about to dive into...the list goes on. Data RetrievalFor starters, we will explore comparing counts of mentees and mentors based on certain categories, namely experience level and subject. This could be a useful tool for staff administration (superadmins). ###Code # These are requests to the live database, showcasing how data could be retrieved for usage. mentees_df = pd.DataFrame(requests.post("http://underdog-devs-ds-a-dev.us-east-1.elasticbeanstalk.com/Mentees/read").json()["result"]) mentors_df = pd.DataFrame(requests.post("http://underdog-devs-ds-a-dev.us-east-1.elasticbeanstalk.com/Mentors/read").json()["result"]) #verifying data retrieval print("Mentees\n") mentees_df.info() print("------------------------------------------------------------------\nMentors\n") mentors_df.info() mentees_df['tech_stack'].unique() # Using local files mentees_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentees.csv") mentors_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentors.csv") #verifying data retrieval print("Mentees\n") mentees_df.info() print("------------------------------------------------------------------\nMentors\n") mentors_df.info() # Using local files mentees_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentees.csv") mentors_df = pd.read_csv("https://raw.githubusercontent.com/BakerJr1904/Altair-visualization-for-underdogsDevs/main/mentors.csv") # Adding role column to distinguish mentee vs mentor mentees_df["role"] = ["Mentee"]*len(mentees_df) mentors_df["role"] = ["Mentor"]*len(mentors_df) # Generating random skill levels levels = ['Beginner', 'Intermediate', 'Advanced', 'Master'] mentees_df['experience_level'] = [r.choice(levels) for _ in range(len(mentees_df))] mentors_df['experience_level'] = [r.choice(levels) for _ in range(len(mentors_df))] # Filtering for relevant columns mentees_df = mentees_df[["role", "profile_id", "first_name", "last_name", "tech_stack", "experience_level"]] mentors_df = mentors_df[["role", "profile_id", "first_name", "last_name", "tech_stack", "experience_level"]] # Concatenating df = pd.merge(mentees_df, mentors_df, how="outer") # There shouldn't be nulls, but there are "None" values in tech_stack that shouldn't exist, so we'll remove them df = df.loc[df['tech_stack'] != "None"] # Checking redundant variances print(df["role"].unique()) print(df["tech_stack"].unique()) print(df["experience_level"].unique()) # Generating full_name column df["full_name"] = df["first_name"] + [" "]*len(df) + df["last_name"] # High level data overview df.info() # actual dataframe df.head(5) ###Output _____no_output_____ ###Markdown Building Graphs Part 1: Mentors and MenteesNext, we'll build some graphs. 
Let's start with a lot of information in a single graph, something we really don't want to use, but that showcases information density. ###Code graph = alt.Chart(df).mark_bar().encode( x="role", y=alt.Y( "count()", title="Head Count" ), color="tech_stack", column="experience_level" ).properties(width=200).configure_axisX( title="null", labelFontSize=15 ).configure_header( labelFontSize=15 ).configure_axisY( labelFontSize=12 ) graph ###Output _____no_output_____ ###Markdown This is obviously a lot to look at. It's hard to compare mentors and mentees across disciplines, even if their skill levels are clustered together. But what if we could choose what subject(s) to compare? With the power of Altair, we can! One way is to use Altair's included checkboxes, but a prettier way is to make our own selection panels, so let's go for it! ###Code # Create selection panel with selection functionality selection = alt.selection_multi(fields=['tech_stack']) color_select = alt.condition(selection, alt.Color('tech_stack:N'), alt.value('lightgray')) selector = alt.Chart().mark_rect().encode(y='tech_stack', color=color_select).add_selection(selection) # Create main graph main_graph = alt.Chart().mark_bar().encode( x="role", y=alt.Y( "count()", title="Head Count" ), color="tech_stack", column="experience_level" ).transform_filter(selection).properties(height=400, width=150) # Concatenate with data full_graph = alt.hconcat(selector, main_graph, data=df) full_graph.configure_axisX(labelFontSize=15, title="null").configure_header(labelFontSize=15, titleFontSize=20).configure_legend(disable=True) ###Output _____no_output_____ ###Markdown Ta-da! Now we can filter them at will, both with single subjects and multiple if we click while holding shift! Using the same process, we could add a secondary filter for experience level that would condense the graph we're viewing into two bars, if we wanted to get really granular! ###Code # Create secondary selection panel with selection functionality selection2 = alt.selection_multi(fields=['experience_level']) color_select2 = alt.condition(selection2, alt.Color('experience_level:N'), alt.value('lightgray')) selector2 = alt.Chart(df).mark_rect().encode(y='experience_level', color=color_select2).add_selection(selection2) # Add secondary filter to main_graph main_graph = main_graph.transform_filter(selection2) granular_graph = alt.hconcat(selector, main_graph, selector2, data=df).configure_legend(disable=True) granular_graph.configure_axisX(labelFontSize=15).configure_header(labelFontSize=15) ###Output _____no_output_____ ###Markdown Now we have two working filter panels! But it might make more sense to condense multiple selected experience levels for comparisons...let's do that! And while we're at it, since the sizes may get very small, let's make it so we can view each person's full name when we hover over the segmented bar graph. Though this may not seem useful in this situation, the same method can be applied to point graphs, for instance a time series graph that plots mentors' meetings with mentees and their resultant attendances; making it so each time you hover over a marked absence, the mentee's name, id, or other chosen identifier will appear for quick reference (a small sketch of that point-graph idea appears after the next cell).
###Code # Recreate main graph without column clustering main_graph = alt.Chart().mark_bar().encode( x="role", y=alt.Y( "count()", title="Head Count" ), color="experience_level", tooltip="full_name" ).transform_filter(selection).transform_filter(selection2).properties(width=100, height=600) granular_graph = alt.hconcat(selector, main_graph, selector2, data=df).configure_legend(disable=True).configure_axisY(labelFontSize=15, titleFontSize=15, tickMinStep=1).configure_axisX(labelFontSize=15, titleFontSize=15) granular_graph ###Output _____no_output_____
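###Markdown To make that point-graph idea concrete, here is a small hypothetical sketch: a scatter of meeting dates per mentor where hovering over a point reveals the mentee's name. The `meetings` frame and its columns are invented for illustration; real data would come from a meetings/attendance endpoint. ###Code
# Hypothetical attendance records, for demonstration purposes only
meetings = pd.DataFrame({
    "date": pd.to_datetime(["2022-01-03", "2022-01-05", "2022-01-10"]),
    "mentor": ["A. Smith", "A. Smith", "B. Jones"],
    "mentee_name": ["C. Lee", "D. Kim", "E. Diaz"],
    "attended": [True, False, True],
})

alt.Chart(meetings).mark_circle(size=100).encode(
    x="date:T",
    y="mentor:N",
    color="attended:N",
    tooltip="mentee_name",  # hover to identify the mentee, as described above
)
###Output _____no_output_____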
Labs28 Notebooks/Preprocessing_Labs28_Data_2020PB_and_826Pt1.ipynb
###Markdown ** Labs28 Notebook for creating merged dataset.** ###Code # Install newspaper3k for article parser ! pip3 install newspaper3k import pandas as pd import numpy as np from bs4 import BeautifulSoup from collections import Counter from newspaper import Article import json import re import requests import spacy from spacy.tokenizer import Tokenizer import urllib3 nlp = spacy.load('en_core_web_sm') ###Output _____no_output_____ ###Markdown First dataframe created from the github r/Police Brutality 2020 page. ###Code # Import aggregated json data create to dataframe all_locs = 'https://raw.githubusercontent.com/2020PB/police-brutality/data_build/all-locations-v2.json' # Copy and paste link in url to see current update from Github 2020PB reddit page df_gitjson = pd.read_json(all_locs) # Pull data column out and create its own dataframe df_2020PB = pd.json_normalize(data=df_gitjson['data']) df_2020PB['updated_at'] = df_gitjson['updated_at'] # Create a last updated to save in .csv filename last_updated = df_gitjson['updated_at'].iloc[0] ### Create a preprocessing function for df_2020PB # Rename columns df_2020PB.rename(columns = {'name':'title'}, inplace = True) # Drop irrelevant columns df_2020PB.drop(labels=['edit_at', 'date_text'], axis=1,inplace=True) # Reorder column headers df_2020PB = df_2020PB[['date', 'links', 'id', 'city', 'state', 'geolocation', 'title', 'tags', 'description']] # Update the "date" column to timestamps df_2020PB['date'] = pd.to_datetime(df_2020PB['date'],format='%Y-%m-%d') # Write function to create hyperlinks for the 'links' columns def cleanlinks(json): links_out = [] for link in json: links_out.append(link['url']) return links_out # Apply function to the dataframe 'links' column df_2020PB['links'] = df_2020PB['links'].apply(cleanlinks) # Ensure that dataframe was created correctly df_2020PB # Extract and clean the data from the 846 API # https://incidents.846policebrutality.com/ url="https://api.846policebrutality.com/api/incidents" # Copy and paste link in url to see current update from 846 http = urllib3.PoolManager() response = http.request('GET', url) soup = BeautifulSoup(response.data, "html.parser") json_846 = json.loads(soup.text) # Check length of the json_846 file # print(len(json_846['data'])) # json_846 # Commented to see the json_846 object # Retrieve data from the json_846['data'] key # Create dataframe from the 846 API incident data df_846 = pd.DataFrame(json_846['data']) ### Preprocessing # Change data type for 'date' column to datetime type df_846['date'] = pd.to_datetime(df_846['date'], infer_datetime_format=True) # Drop irrelevant columns df_846 = df_846.drop(columns=['data','pb_id']) # Rename Columns df_846.rename(columns = {'geocoding': 'geolocation'}, inplace = True) # Reorder columns df_846 = df_846[['date', 'links', 'id', 'city', 'state', 'geolocation', 'title', 'tags']] # Check the dataframe df_846 ###Output /usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings InsecureRequestWarning) ###Markdown What is the difference between the information recieved in the first dataframe and the second. Is there any duplicate links relaying the same information? 
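One quick way to probe that question is to compare the sets of URLs in each source; a minimal sketch, assuming both `links` columns hold lists of URL strings: ###Code
# Flatten each source's link lists into sets of URLs and intersect them
links_2020pb = {url for row in df_2020PB["links"] for url in row}
links_846 = {url for row in df_846["links"] for url in row}
print(f"{len(links_2020pb & links_846)} URLs appear in both sources.")
###Output _____no_output_____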
###Code # The 846 API data already arrives sorted by date df_846['date'][0] ###Output _____no_output_____ ###Markdown Create the merged dataframe from the r/2020PB data and the 846 API
###Code merged_dfs ###Output _____no_output_____ ###Markdown Natural Language Pre-Processing and Analytics ###Code def remove_list(col): l = [] rows = "" for row in col: for item in row: if item not in rows or len(rows) == 0: rows = rows + " " + str(item) l.append(rows) rows = [] rows = "" return l # Apply function to remove tags within a list merged_dfs['words'] = remove_list(merged_dfs['tags']) from spacy.tokenizer import Tokenizer nlp = spacy.load("en_core_web_sm") # Tokenizer tokenizer = Tokenizer(nlp.vocab) # Update stop words with all non-police of force terms stop_words = [ "celebrity", "child", "ederly", "lgbtq+", "homeless", "journalist", "non-protest", "person-with-disability", "medic", "politician", "pregnant", "property-desctruction", " ", "bystander", "protester", "legal-observer", "hide-badge", 'body-cam', "conceal", 'elderly' ] # Update stop words default list stop = nlp.Defaults.stop_words.union(stop_words) from tqdm import tqdm tqdm.pandas() def remove_stops(_list_): keywords = [] for keyword in _list_: phrase = [] words = keyword.split() for word in words: if word in stop: pass else: phrase.append(word) phrase = ' '.join(phrase) if len(phrase) > 0: keywords.append(phrase) return keywords # Apply function to use remove stop words and words that aren't indicative # of police use of force merged_dfs['cleaned_tags'] = merged_dfs['tags'].progress_apply(remove_stops) merged_dfs.drop(labels=['words', 'tags'], axis=1, inplace=True) merged_dfs.rename(columns={'cleaned_tags':'tags'}, inplace=True) merged_dfs # Analyzing tokens # Object from Base Python from collections import Counter # The object `Counter` takes an iterable, but you can instaniate an empty one and update it. word_counts = Counter() # Update it based on a split of each of our documents merged_dfs['tags'].apply(lambda x: word_counts.update(x)) # Print out the 20 most common words word_counts.most_common(75) # All of the words # NOTE: ALL CATEGORIES STRICTLY FOLLOW THE NATIONAL INJUSTICE OF JUSTICE USE-OF-CONTINUM DEFINITIONS #for more information, visit https://nij.ojp.gov/topics/articles/use-force-continuum VERBALIZATION = ['threaten', 'incitement'] EMPTY_HAND_SOFT = ['arrest', 'grab', 'zip-tie', ] EMPTY_HAND_HARD = ['shove', 'push', 'strike', 'tackle', 'beat', 'knee', 'punch', 'throw', 'knee-on-neck', 'kick', 'choke', 'dog', 'headlock'] LESS_LETHAL_METHODS = ['less-lethal', 'tear-gas', 'pepper-spray', 'baton', 'projectile', 'stun-grenade', 'pepper-ball', 'tear-gas-canister', 'explosive', 'mace', 'lrad', 'bean-bag', 'gas', 'foam-bullets', 'taser', 'tase', 'wooden-bullet', 'rubber-bullet', 'marking-rounds', 'paintball'] LETHAL_FORCE = ['shoot', 'throw', 'gun', 'death', 'live-round', ] UNCATEGORIZED = ['property-destruction', 'abuse-of-power', 'bike', 'inhumane-treatment', 'shield', 'vehicle', 'drive', 'horse', 'racial-profiling', 'spray', 'sexual-assault', ] # UNCATEGORIZED are Potential Stop Words. Need to talk to team. # Need dummy columns to fill. Create a cleaner function to handle this problem. DJ. 
merged_dfs['Verbalization'],merged_dfs['Empty_Hand_Soft'],merged_dfs['Empty_Hand_Hard'],merged_dfs['Less_Lethal_Methods'],merged_dfs['Lethal_Force'],merged_dfs['Uncategorized'] = merged_dfs['date'],merged_dfs['date'],merged_dfs['date'],merged_dfs['date'],merged_dfs['date'],merged_dfs['date'] merged_dfs # Created dummy data filled with the date column def Searchfortarget(list, targetl): for target in targetl: res = list.index(target) if target in list else -1 # finds index of target if res == -1: return 0 # if target is not in list returns -1 else: return 1 # if the target exist it returns def UseofForceContinuumtest(col): for i, row in enumerate(col): merged_dfs['Verbalization'].iloc[i], merged_dfs['Empty_Hand_Soft'].iloc[i], merged_dfs['Empty_Hand_Hard'].iloc[i], merged_dfs['Less_Lethal_Methods'].iloc[i],merged_dfs['Lethal_Force'].iloc[i],merged_dfs['Uncategorized'].iloc[i] = Searchfortarget(VERBALIZATION, row),Searchfortarget(EMPTY_HAND_SOFT, row), Searchfortarget(EMPTY_HAND_HARD, row),Searchfortarget(LESS_LETHAL_METHODS, row),Searchfortarget(LETHAL_FORCE, row), Searchfortarget(UNCATEGORIZED, row) """Alternatively, this (below) is what is happening under the hood""" # def UseofForceContinuum(col): # for i, row in enumerate(col): # # print("--------------") # # print(row, i) # merged_dfs['Verbalization'].iloc[i] = Searchfortarget(VERBALIZATION, row) # merged_dfs['Empty_Hand_Soft'].iloc[i] = Searchfortarget(EMPTY_HAND_SOFT, row) # merged_dfs['Empty_Hand_Hard'].iloc[i] = Searchfortarget(EMPTY_HAND_HARD, row) # merged_dfs['Less_Lethal_Methods'].iloc[i] = Searchfortarget(LESS_LETHAL_METHODS, row) # merged_dfs['Lethal_Force'].iloc[i] = Searchfortarget(LETHAL_FORCE, row) # merged_dfs['Uncategorized'].iloc[i] = Searchfortarget(UNCATEGORIZED, row) # # return merged_dfs # UseofForceContinuum(merged_dfs['cleaned_words']) # Apply function to the cleaned_tags columns UseofForceContinuumtest(merged_dfs['tags']) ###Output /usr/local/lib/python3.6/dist-packages/pandas/core/indexing.py:670: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy iloc._setitem_with_indexer(indexer, value) ###Markdown The newly added columns are objects instead of integers.***** ###Code # Saved the data in on .csv file for all sources. # Create a copy of the data cleaned_df = merged_dfs.copy() cleaned_df # Saved the data in on .csv file for all sources. # cleaned_df.to_csv(f'Labs28_AllSources_Data{last_updated}.csv', sep="|",index=False) # Uncomment to save. ###Output _____no_output_____ ###Markdown * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Proceed to Labs28_D_Duplicate_LinkExperiment.ipynb * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * ###Code cleaned_df ###Output _____no_output_____
examples/loading_saving.ipynb
###Markdown Loading and Saving Loading mzML To accommodate disparate instrument types and manufacturers (e.g. Bruker, Waters, Thermo, Agilent), DEIMoS operates under the assumption that input data are in an open, standard format.As of this publication, the accepted file format for DEIMoS is mzML (or mzML.gz), which contains metadata, separation, and spectrometry data that reproduce the contents of vendor formats.Conversion to mzML from several other formats can be performed using the free and open-source [ProteoWizard](https://proteowizard.sourceforge.io/) msconvert utility. By default, DEIMoS will load frame, scan, *m/z*, and intensity from the mzML, as well as precursor *m/z* for MS2, as available.Additional "accession" fields may be specified for data of higher dimension.To view these fields, a convenience function is provided. ###Code import deimos accessions = deimos.get_accessions('example_data.mzML.gz') accessions ###Output _____no_output_____ ###Markdown The example data referenced is from an Agilent 6560 Ion Mobility LC/Q-TOF system. Thus, we will additionally need to parse retention time and ion mobility drift times.Consulting the list above, we are able to supply appropriate accession fields to the `load` function, renaming as convenient (here, "scan start time" becomes "retention_time" and "ion mobility drift time" becomes "drift_time").The `load` function will infer file type based on extension (here, .mzML or .mzML.gz) ###Code %%time data = deimos.load('example_data.mzML.gz', accession={'retention_time': 'MS:1000016', 'drift_time': 'MS:1002476'}) ###Output CPU times: user 4min 2s, sys: 38.9 s, total: 4min 41s Wall time: 4min 57s ###Markdown The resulting data will be returned as a dictionary containing data frames, with keys per MS level. The example data contains MS1 and MS2 (collected at 20 eV). ###Code data['ms1'] data['ms2'] ###Output _____no_output_____ ###Markdown HDF5 If the data is already parsed and saved in the Hierarchical Data Format, loading will be much faster. The function does not change, as the loader will again infer format by file extension. However, arguments will be different: specifing accessions is no longer required, but the relevant MS level must be selected using the `key` flag. ###Code %%time ms1 = deimos.load('example_data.h5', key='ms1') ms1 ###Output CPU times: user 7.81 s, sys: 10.1 s, total: 17.9 s Wall time: 22.6 s ###Markdown Multi-file Loading For certain alignment applications, a high number of input files bars reading each into memory simultaneously.In these situations, [Dask](https://dask.org/) is used to virtually load multiple data frames, thus more amenable for downstream computation.The `load` function will detect whether a list of inputs is passed and read using the appropriate backend.Dask chunksize (see [docs](https://docs.dask.org/en/stable/array-chunks.html)) may be specified by the `chunksize` flag, and additional meta data per input file can be passed as a dictionary with keys for each path (e.g. date, sample type, etc.). Only HDF5 format is support for multi-file loading. ###Code ms1 = deimos.load(['example_data.h5', 'example_data.h5'], key='ms1', chunksize=1E7, meta=None) ms1 ###Output _____no_output_____ ###Markdown Note that additional columns are appended to indicate each source file name and index.As the data frames are loaded virtually, the output is a placeholder for would-be data.For more on loading multiple files, see the section on [alignment](alignment.ipynb). 
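For instance, the per-file metadata mentioned above might be supplied along the following lines. This is a sketch based only on the description here; the exact schema expected by `deimos.load` should be checked against its documentation, and the field names are illustrative. ###Code
# Hypothetical per-path metadata, keyed by input path
meta = {'example_data.h5': {'sample': 'replicate_1'},
        'example_data_2.h5': {'sample': 'replicate_2'}}
ms1 = deimos.load(['example_data.h5', 'example_data_2.h5'],
                  key='ms1', chunksize=1E7, meta=meta)
###Output _____no_output_____ ###Markdown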
Saving HDF5 By default, DEIMoS exports a lightweight, data frame-based representation in the Hierarchical Data Format version 5 (HDF5) file format. One must specify a path, the data frame to be saved, and a key for the container. Multiple keys may be saved to the same container (e.g. MS1 and MS2). The `mode` flag is used to indicate file overwrite (`mode='w'`) or append (`mode='a'`), the latter to be used when saving multiple data frames to the same file. ###Code # save ms1 to new file deimos.save('example_data.h5', data['ms1'], key='ms1', mode='w') # save ms2 to same file deimos.save('example_data.h5', data['ms2'], key='ms2', mode='a') ###Output _____no_output_____
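###Markdown Because the export is a data frame-based HDF5 file, one way to confirm that both keys landed in the container is to inspect it with pandas. This assumes the file is readable by `pandas.HDFStore`, which is an assumption about the storage layer rather than a documented DEIMoS API: ###Code
import pandas as pd

# List the keys present in the saved container
with pd.HDFStore('example_data.h5', mode='r') as store:
    print(store.keys())  # expected to include entries for ms1 and ms2
###Output _____no_output_____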
NY Stock Price Prediction RNN LSTM GRU/.ipynb_checkpoints/NY Stock Price Prediction RNN LSTM GRU-checkpoint.ipynb
###Markdown **Author:** Raoul Malm **Description:** This notebook demonstrates the future price prediction for different stocks using recurrent neural networks in tensorflow. Recurrent neural networks with basic, LSTM or GRU cells are implemented. **Outline:**1. [Libraries and settings](1-bullet)2. [Analyze data](2-bullet)3. [Manipulate data](3-bullet)4. [Model and validate data](4-bullet)5. [Predictions](5-bullet)**Reference:** [LSTM_Stock_prediction-20170507 by BenF](https://www.kaggle.com/benjibb/lstm-stock-prediction-20170507/notebook) 1. Libraries and settings ###Code import numpy as np import pandas as pd import math import sklearn import sklearn.preprocessing import datetime import os import matplotlib.pyplot as plt import tensorflow as tf # split data in 80%/10%/10% train/validation/test sets valid_set_size_percentage = 10 test_set_size_percentage = 10 #display parent directory and working directory print(os.path.dirname(os.getcwd())+':', os.listdir(os.path.dirname(os.getcwd()))); print(os.getcwd()+':', os.listdir(os.getcwd())); ###Output /home/seyfullah/github-projects/syf_bindsnet: ['rl', 'guide_part_ii.py', '0', '1', 'guide_part_i.py', 'experiments', 'aifortrading.py', 'NY Stock Price Prediction RNN LSTM GRU', 'guide_part_i-2.py', 'LSTM'] /home/seyfullah/github-projects/syf_bindsnet/NY Stock Price Prediction RNN LSTM GRU: ['NY Stock Price Prediction RNN LSTM GRU.ipynb', 'NY Stock Price Prediction RNN LSTM GRU.zip', 'NY Stock Price Prediction RNN LSTM GRU', '.ipynb_checkpoints'] ###Markdown 2. Analyze data - load stock prices from prices-split-adjusted.csv- analyze data ###Code # import all stock prices df = pd.read_csv("./NY Stock Price Prediction RNN LSTM GRU/prices-split-adjusted.csv", index_col = 0) df.info() df.head() # number of different stocks print('\nnumber of different stocks: ', len(list(set(df.symbol)))) print(list(set(df.symbol))[:10]) df.tail() df.describe() df.info() plt.figure(figsize=(15, 5)); plt.subplot(1,2,1); plt.plot(df[df.symbol == 'EQIX'].open.values, color='red', label='open') plt.plot(df[df.symbol == 'EQIX'].close.values, color='green', label='close') plt.plot(df[df.symbol == 'EQIX'].low.values, color='blue', label='low') plt.plot(df[df.symbol == 'EQIX'].high.values, color='black', label='high') plt.title('stock price') plt.xlabel('time [days]') plt.ylabel('price') plt.legend(loc='best') #plt.show() plt.subplot(1,2,2); plt.plot(df[df.symbol == 'EQIX'].volume.values, color='black', label='volume') plt.title('stock volume') plt.xlabel('time [days]') plt.ylabel('volume') plt.legend(loc='best'); ###Output _____no_output_____ ###Markdown 3. 
Manipulate data - choose a specific stock- drop feature: volume- normalize stock data- create train, validation and test data sets ###Code # function for min-max normalization of stock def normalize_data(df): min_max_scaler = sklearn.preprocessing.MinMaxScaler() df['open'] = min_max_scaler.fit_transform(df.open.values.reshape(-1,1)) df['high'] = min_max_scaler.fit_transform(df.high.values.reshape(-1,1)) df['low'] = min_max_scaler.fit_transform(df.low.values.reshape(-1,1)) df['close'] = min_max_scaler.fit_transform(df['close'].values.reshape(-1,1)) return df # function to create train, validation, test data given stock data and sequence length def load_data(stock, seq_len): data_raw = stock.to_numpy() # convert to numpy array (as_matrix was removed from pandas) data = [] # create all possible sequences of length seq_len for index in range(len(data_raw) - seq_len): data.append(data_raw[index: index + seq_len]) data = np.array(data); valid_set_size = int(np.round(valid_set_size_percentage/100*data.shape[0])); test_set_size = int(np.round(test_set_size_percentage/100*data.shape[0])); train_set_size = data.shape[0] - (valid_set_size + test_set_size); x_train = data[:train_set_size,:-1,:] y_train = data[:train_set_size,-1,:] x_valid = data[train_set_size:train_set_size+valid_set_size,:-1,:] y_valid = data[train_set_size:train_set_size+valid_set_size,-1,:] x_test = data[train_set_size+valid_set_size:,:-1,:] y_test = data[train_set_size+valid_set_size:,-1,:] return [x_train, y_train, x_valid, y_valid, x_test, y_test] # choose one stock df_stock = df[df.symbol == 'EQIX'].copy() df_stock.drop(columns=['symbol'], inplace=True) df_stock.drop(columns=['volume'], inplace=True) cols = list(df_stock.columns.values) print('df_stock.columns.values = ', cols) # normalize stock df_stock_norm = df_stock.copy() df_stock_norm = normalize_data(df_stock_norm) # create train, test data seq_len = 20 # choose sequence length x_train, y_train, x_valid, y_valid, x_test, y_test = load_data(df_stock_norm, seq_len) print('x_train.shape = ',x_train.shape) print('y_train.shape = ', y_train.shape) print('x_valid.shape = ',x_valid.shape) print('y_valid.shape = ', y_valid.shape) print('x_test.shape = ', x_test.shape) print('y_test.shape = ',y_test.shape) plt.figure(figsize=(15, 5)); plt.plot(df_stock_norm.open.values, color='red', label='open') plt.plot(df_stock_norm.close.values, color='green', label='close') plt.plot(df_stock_norm.low.values, color='blue', label='low') plt.plot(df_stock_norm.high.values, color='black', label='high') #plt.plot(df_stock_norm.volume.values, color='gray', label='volume') plt.title('stock') plt.xlabel('time [days]') plt.ylabel('normalized price') plt.legend(loc='best') plt.show() ###Output _____no_output_____ ###Markdown 4.
Model and validate data - RNNs with basic, LSTM, GRU cells ###Code ## Basic Cell RNN in tensorflow index_in_epoch = 0; perm_array = np.arange(x_train.shape[0]) np.random.shuffle(perm_array) # function to get the next batch def get_next_batch(batch_size): global index_in_epoch, x_train, perm_array start = index_in_epoch index_in_epoch += batch_size if index_in_epoch > x_train.shape[0]: np.random.shuffle(perm_array) # shuffle permutation array start = 0 # start next epoch index_in_epoch = batch_size end = index_in_epoch return x_train[perm_array[start:end]], y_train[perm_array[start:end]] # parameters n_steps = seq_len-1 n_inputs = 4 n_neurons = 200 n_outputs = 4 n_layers = 2 learning_rate = 0.001 batch_size = 50 n_epochs = 100 train_set_size = x_train.shape[0] test_set_size = x_test.shape[0] tf.reset_default_graph() X = tf.placeholder(tf.float32, [None, n_steps, n_inputs]) y = tf.placeholder(tf.float32, [None, n_outputs]) # use Basic RNN Cell layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.elu) for layer in range(n_layers)] # use Basic LSTM Cell #layers = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons, activation=tf.nn.elu) # for layer in range(n_layers)] # use LSTM Cell with peephole connections #layers = [tf.contrib.rnn.LSTMCell(num_units=n_neurons, # activation=tf.nn.leaky_relu, use_peepholes = True) # for layer in range(n_layers)] # use GRU cell #layers = [tf.contrib.rnn.GRUCell(num_units=n_neurons, activation=tf.nn.leaky_relu) # for layer in range(n_layers)] multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers) rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32) stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons]) stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs) outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs]) outputs = outputs[:,n_steps-1,:] # keep only last output of sequence loss = tf.reduce_mean(tf.square(outputs - y)) # loss function = mean squared error optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) training_op = optimizer.minimize(loss) # run graph with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for iteration in range(int(n_epochs*train_set_size/batch_size)): x_batch, y_batch = get_next_batch(batch_size) # fetch the next training batch sess.run(training_op, feed_dict={X: x_batch, y: y_batch}) if iteration % int(5*train_set_size/batch_size) == 0: mse_train = loss.eval(feed_dict={X: x_train, y: y_train}) mse_valid = loss.eval(feed_dict={X: x_valid, y: y_valid}) print('%.2f epochs: MSE train/valid = %.6f/%.6f'%( iteration*batch_size/train_set_size, mse_train, mse_valid)) y_train_pred = sess.run(outputs, feed_dict={X: x_train}) y_valid_pred = sess.run(outputs, feed_dict={X: x_valid}) y_test_pred = sess.run(outputs, feed_dict={X: x_test}) ###Output _____no_output_____ ###Markdown 5. 
Predictions ###Code y_train.shape ft = 0 # 0 = open, 1 = close, 2 = highest, 3 = lowest ## show predictions plt.figure(figsize=(15, 5)); plt.subplot(1,2,1); plt.plot(np.arange(y_train.shape[0]), y_train[:,ft], color='blue', label='train target') plt.plot(np.arange(y_train.shape[0], y_train.shape[0]+y_valid.shape[0]), y_valid[:,ft], color='gray', label='valid target') plt.plot(np.arange(y_train.shape[0]+y_valid.shape[0], y_train.shape[0]+y_test.shape[0]+y_test.shape[0]), y_test[:,ft], color='black', label='test target') plt.plot(np.arange(y_train_pred.shape[0]),y_train_pred[:,ft], color='red', label='train prediction') plt.plot(np.arange(y_train_pred.shape[0], y_train_pred.shape[0]+y_valid_pred.shape[0]), y_valid_pred[:,ft], color='orange', label='valid prediction') plt.plot(np.arange(y_train_pred.shape[0]+y_valid_pred.shape[0], y_train_pred.shape[0]+y_valid_pred.shape[0]+y_test_pred.shape[0]), y_test_pred[:,ft], color='green', label='test prediction') plt.title('past and future stock prices') plt.xlabel('time [days]') plt.ylabel('normalized price') plt.legend(loc='best'); plt.subplot(1,2,2); plt.plot(np.arange(y_train.shape[0], y_train.shape[0]+y_test.shape[0]), y_test[:,ft], color='black', label='test target') plt.plot(np.arange(y_train_pred.shape[0], y_train_pred.shape[0]+y_test_pred.shape[0]), y_test_pred[:,ft], color='green', label='test prediction') plt.title('future stock prices') plt.xlabel('time [days]') plt.ylabel('normalized price') plt.legend(loc='best'); corr_price_development_train = np.sum(np.equal(np.sign(y_train[:,1]-y_train[:,0]), np.sign(y_train_pred[:,1]-y_train_pred[:,0])).astype(int)) / y_train.shape[0] corr_price_development_valid = np.sum(np.equal(np.sign(y_valid[:,1]-y_valid[:,0]), np.sign(y_valid_pred[:,1]-y_valid_pred[:,0])).astype(int)) / y_valid.shape[0] corr_price_development_test = np.sum(np.equal(np.sign(y_test[:,1]-y_test[:,0]), np.sign(y_test_pred[:,1]-y_test_pred[:,0])).astype(int)) / y_test.shape[0] print('correct sign prediction for close - open price for train/valid/test: %.2f/%.2f/%.2f'%( corr_price_development_train, corr_price_development_valid, corr_price_development_test)) ###Output _____no_output_____
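###Markdown Beyond the sign-prediction rate, the held-out fit can be summarized with the same mean-squared-error metric used during training, reusing the arrays already computed above: ###Code
# MSE on the test set, matching the loss optimized during training
mse_test = np.mean((y_test - y_test_pred)**2)
print('MSE test: %.6f' % mse_test)
###Output _____no_output_____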
pba.ipynb
###Markdown Visualize Population Based AugmentationRun all cells below to visualize augmentations on a sample image.0, 1, or 2 augmentations will be applied.Does not include additional horizontal flip, pad/crop, or Cutout that may be applied beforehand during training. ###Code import PIL import matplotlib.pyplot as plt import numpy as np import pba.augmentation_transforms_hp as augmentation_transforms_hp from pba.utils import parse_log_schedule from pba.data_utils import parse_policy # Initialize CIFAR & SVHN policies. cifar_policy = (parse_log_schedule('schedules/rcifar10_16_wrn.txt', 200), 'cifar10_4000') svhn_policy = (parse_log_schedule('schedules/rsvhn_16_wrn.txt', 160), 'svhn_1000') def parse_policy_hyperparams(policy_hyperparams): """We have two sets of hparams for each operation, which we need to split up.""" split = len(policy_hyperparams) // 2 policy = parse_policy( policy_hyperparams[:split], augmentation_transforms_hp) policy.extend(parse_policy( policy_hyperparams[split:], augmentation_transforms_hp)) return policy ###Output INFO:tensorflow:final len 200 INFO:tensorflow:final len 160 ###Markdown User input possible in cell belowDefaults to CIFAR policy and image of bird from CIFAR-10, at the final epoch of the schedule.You can change the image path (`image_path`), augmentation schedule (`cifar_policy` or `svhn_policy`), and which epoch to use within the schedule. ###Code image_size = 32 image_path = 'figs/bird5.png' # Choice of either cifar_policy or svhn_policy policy, dset = cifar_policy # Epoch number to use for policy. 200 epochs for CIFAR and 160 for SVHN. epoch = 200 # Number of images to display num_images = 10 # Load image img = np.array(PIL.Image.open(image_path)) # Normalize Image img = img / 255. img = (img - augmentation_transforms_hp.MEANS[dset]) / augmentation_transforms_hp.STDS[dset] print('Showing 10 example images at epoch {}.\n'.format(epoch)) for _ in range(num_images): # Apply augmentations print('Applied augmentations:') img_aug = augmentation_transforms_hp.apply_policy( policy=parse_policy_hyperparams(policy[epoch - 1]), img=img, aug_policy='cifar10', dset=dset, image_size=image_size, verbose=True) # Unnormalize Image img_aug = (img_aug * augmentation_transforms_hp.STDS[dset]) + augmentation_transforms_hp.MEANS[dset] plt.imshow(img_aug) plt.show() ###Output Showing 10 example images at epoch 200. Applied augmentations: ('TranslateY', 0.6, 7) ('Contrast', 1.0, 7)
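###Markdown Because the schedule evolves over training, it can also be instructive to compare augmentations drawn from different points in it. A small sketch reusing the helpers above, printing only the applied operations without plotting: ###Code
# Sample one augmented image at several points in the schedule;
# verbose=True prints which operations were applied.
for probe_epoch in [1, 100, 200]:
    print('epoch {}:'.format(probe_epoch))
    augmentation_transforms_hp.apply_policy(
        policy=parse_policy_hyperparams(policy[probe_epoch - 1]),
        img=img,
        aug_policy='cifar10',
        dset=dset,
        image_size=image_size,
        verbose=True)
###Output _____no_output_____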
HWKS/assignment0_01_kNN/.ipynb_checkpoints/kNN_practice_0_01-checkpoint.ipynb
###Markdown k-Nearest Neighbor (kNN) implementation*Credits: this notebook is deeply based on Stanford CS231n course assignment 1. Source link: http://cs231n.github.io/assignments2019/assignment1/*The kNN classifier consists of two stages:- During training, the classifier takes the training data and simply remembers it- During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples- The value of k is cross-validated. In this exercise you will implement these steps, understand the basic Image Classification pipeline, and gain proficiency in writing efficient, vectorized code. We will work with the handwritten digits dataset. Images will be flattened (8x8 sized image -> 64 sized vector) and treated as vectors. ###Code ''' If you are using Google Colab, uncomment the next line to download `k_nearest_neighbor.py`. You can open and change it in Colab using the "Files" sidebar on the left. ''' # !wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/basic_s20/homeworks_basic/assignment0_01_kNN/k_nearest_neighbor.py from sklearn import datasets dataset = datasets.load_digits() print(dataset.DESCR) # First 100 images will be used for testing. This dataset is not sorted by the labels, so it's ok # to do the split this way. # Please be careful when you split your data into train and test in general. test_border = 100 X_train, y_train = dataset.data[test_border:], dataset.target[test_border:] X_test, y_test = dataset.data[:test_border], dataset.target[:test_border] print('Training data shape: ', X_train.shape) print('Training labels shape: ', y_train.shape) print('Test data shape: ', X_test.shape) print('Test labels shape: ', y_test.shape) num_test = X_test.shape[0] # Run some setup code for this notebook. import random import numpy as np import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (14.0, 12.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = list(np.arange(10)) num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].reshape((8, 8)).astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() classes ###Output _____no_output_____ ###Markdown Autoreload is a great tool, but sometimes it does not work as intended. The code below aims to fix that. __Do not forget to save your changes in the `.py` file before reloading the `KNearestNeighbor` class.__ ###Code # This dirty hack might help if the autoreload has failed for some reason try: del KNearestNeighbor except: pass from k_nearest_neighbor import KNearestNeighbor # Create a kNN classifier instance. 
# Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.fit(X_train, y_train) X_train.shape ###Output _____no_output_____ ###Markdown We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: 1. First we must compute the distances between all test examples and all train examples. 2. Given these distances, for each test example we find the k nearest examples and have them vote for the labelLet's begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.**Note: For the three distance computations that we require you to implement in this notebook, you may not use the np.linalg.norm() function that numpy provides.**First, open `k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. ###Code # Open k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print(dists.shape) dists # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolation='none') plt.show() ###Output _____no_output_____ ###Markdown **Inline Question 1** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the columns?$\color{blue}{\textit Your Answer:}$The y-axis of the plot indexes the test points; the x-axis indexes the train points.- An extremely bright row means that the corresponding test point is very far from most of the train points. Its class is probably not in the training data set, or it is an outlier.- An extremely bright column means that the corresponding train point is very far from all of the test points. ###Code # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) ###Output Got 95 / 100 correct => accuracy: 0.950000 ###Markdown You should expect to see approximately `95%` accuracy. Now let's try out a larger `k`, say `k = 5`: ###Code y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)) ###Output Got 93 / 100 correct => accuracy: 0.930000 ###Markdown Accuracy should slightly decrease with `k = 5` compared to `k = 1`.
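###Markdown For reference, here is a minimal sketch of what the two methods used above, `compute_distances_two_loops` and `predict_labels`, could look like inside `k_nearest_neighbor.py` (assuming, as in the CS231n skeleton, that `fit` stores the training data as `self.X_train` and `self.y_train`; your own implementation may differ): ###Code
import numpy as np

def compute_distances_two_loops(self, X):
    """Naive O(num_test * num_train) Euclidean distance computation."""
    num_test, num_train = X.shape[0], self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            # L2 distance between the i-th test and the j-th train example
            dists[i, j] = np.sqrt(np.sum((X[i] - self.X_train[j]) ** 2))
    return dists

def predict_labels(self, dists, k=1):
    """Majority vote among the k nearest training examples of each test point."""
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
        # Labels of the k training points closest to the i-th test point
        closest_y = self.y_train[np.argsort(dists[i])[:k]]
        # np.bincount + argmax implements the vote, breaking ties toward the smaller label
        y_pred[i] = np.argmax(np.bincount(closest_y))
    return y_pred
###Output _____no_output_____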
**Inline Question 2**We can also use other distance metrics such as L1 distance.For pixel values $p_{ij}^{(k)}$ at location $(i,j)$ of some image $I_k$, the mean $\mu$ across all pixels over all images is $$\mu=\frac{1}{nhw}\sum_{k=1}^n\sum_{i=1}^{h}\sum_{j=1}^{w}p_{ij}^{(k)}$$And the pixel-wise mean $\mu_{ij}$ across all images is $$\mu_{ij}=\frac{1}{n}\sum_{k=1}^np_{ij}^{(k)}.$$The general standard deviation $\sigma$ and pixel-wise standard deviation $\sigma_{ij}$ are defined similarly.Which of the following preprocessing steps will not change the performance of a Nearest Neighbor classifier that uses L1 distance? Select all that apply.1. Subtracting the mean $\mu$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu$.)2. Subtracting the per pixel mean $\mu_{ij}$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu_{ij}$.)3. Subtracting the mean $\mu$ and dividing by the standard deviation $\sigma$.4. Subtracting the pixel-wise mean $\mu_{ij}$ and dividing by the pixel-wise standard deviation $\sigma_{ij}$.5. Rotating the coordinate axes of the data.$\color{blue}{\textit Your Answer:}$1. True2. False3. True4. False5. False$\color{blue}{\textit Your Explanation:}$1. Subtracting the mean shifts all the points in the same direction by the same amount; it does not change the distances between those points.2. Subtracting the per pixel mean moves different pixels by different amounts, which can change the L1 distances.3. Same as (1), but then also scaling. Scaling does not affect the relative ordering of distances because all distances are scaled by the same constant factor, in this case the standard deviation $\sigma$.4. Same as (2).5. The L1 distance changes for every point after a rotation of the coordinate axes (except for special rotations such as multiples of 90 degrees), so performance can change. ###Code # Now let's speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of the difference of two matrices # is the square root of the sum of squared differences of all elements; in other # words, reshape the matrices into vectors and compute the Euclidean distance # between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print('One loop difference was: %f' % (difference, )) if difference < 0.001: print('Good! The distance matrices are the same') else: print('Uh-oh! The distance matrices are different') # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print('No loop difference was: %f' % (difference, )) if difference < 0.001: print('Good! The distance matrices are the same') else: print('Uh-oh! The distance matrices are different') ###Output No loop difference was: 0.000000 Good! The distance matrices are the same ###Markdown Comparing handcrafted and `sklearn` implementationsIn this section we will just compare the performance of handcrafted and `sklearn` kNN algorithms. The predictions should be the same. No need to write any code in this section.
###Code from sklearn import neighbors implemented_knn = KNearestNeighbor() implemented_knn.fit(X_train, y_train) n_neighbors = 1 external_knn = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors) external_knn.fit(X_train, y_train) print('sklearn kNN (k=1) implementation achieves: {} accuracy on the test set'.format( external_knn.score(X_test, y_test) )) y_predicted = implemented_knn.predict(X_test, k=n_neighbors).astype(int) accuracy_score = sum((y_predicted==y_test).astype(float)) / num_test print('Handcrafted kNN (k=1) implementation achieves: {} accuracy on the test set'.format(accuracy_score)) assert np.array_equal( external_knn.predict(X_test), y_predicted ), 'Labels predicted by handcrafted and sklearn kNN implementations are different!' print('\nsklearn and handcrafted kNN implementations provide same predictions') print('_'*76) n_neighbors = 5 external_knn = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors) external_knn.fit(X_train, y_train) print('sklearn kNN (k=5) implementation achieves: {} accuracy on the test set'.format( external_knn.score(X_test, y_test) )) y_predicted = implemented_knn.predict(X_test, k=n_neighbors).astype(int) accuracy_score = sum((y_predicted==y_test).astype(float)) / num_test print('Handcrafted kNN (k=5) implementation achieves: {} accuracy on the test set'.format(accuracy_score)) assert np.array_equal( external_knn.predict(X_test), y_predicted ), 'Labels predicted by handcrafted and sklearn kNN implementations are different!' print('\nsklearn and handcrafted kNN implementations provide same predictions') print('_'*76) ###Output sklearn kNN (k=1) implementation achieves: 0.95 accuracy on the test set Handcrafted kNN (k=1) implementation achieves: 0.95 accuracy on the test set sklearn and handcrafted kNN implementations provide same predictions ____________________________________________________________________________ sklearn kNN (k=5) implementation achieves: 0.93 accuracy on the test set Handcrafted kNN (k=5) implementation achieves: 0.93 accuracy on the test set sklearn and handcrafted kNN implementations provide same predictions ____________________________________________________________________________ ###Markdown Measuring the timeFinally let's compare how fast the implementations are.To make the difference more noticeable, let's repeat the train and test objects (the duplicates add no new information; they simply create more pairs of points to compute distances between). ###Code X_train_big = np.vstack([X_train]*5) X_test_big = np.vstack([X_test]*5) y_train_big = np.hstack([y_train]*5) y_test_big = np.hstack([y_test]*5) classifier_big = KNearestNeighbor() classifier_big.fit(X_train_big, y_train_big) # Let's compare how fast the implementations are def time_function(f, *args): """ Call a function f with args and return the time (in seconds) that it took to execute. """ import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier_big.compute_distances_two_loops, X_test_big) print('Two loop version took %f seconds' % two_loop_time) one_loop_time = time_function(classifier_big.compute_distances_one_loop, X_test_big) print('One loop version took %f seconds' % one_loop_time) no_loop_time = time_function(classifier_big.compute_distances_no_loops, X_test_big) print('No loop version took %f seconds' % no_loop_time) # You should see significantly faster performance with the fully vectorized implementation!
# NOTE: depending on what machine you're using, # you might not see a speedup when you go from two loops to one loop, # and might even see a slow-down. ###Output Two loop version took 25.361200 seconds One loop version took 0.365414 seconds No loop version took 0.033853 seconds
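###Markdown The fully vectorized version timed above is usually built on the expansion $\|a-b\|^2 = \|a\|^2 - 2\,a\cdot b + \|b\|^2$, which turns the whole distance matrix into one matrix product plus broadcasting. A minimal sketch (again assuming the training data is stored as `self.X_train`), together with a cross-check against `scipy.spatial.distance.cdist`: ###Code
import numpy as np
from scipy.spatial.distance import cdist

def compute_distances_no_loops(self, X):
    """Vectorized Euclidean distances via ||a-b||^2 = ||a||^2 - 2*a.b + ||b||^2."""
    test_sq = np.sum(X ** 2, axis=1)[:, np.newaxis]   # shape (num_test, 1)
    train_sq = np.sum(self.X_train ** 2, axis=1)      # shape (num_train,)
    cross = X @ self.X_train.T                        # shape (num_test, num_train)
    # Clip tiny negative values caused by floating-point round-off before sqrt
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0.0))

# Cross-check the handcrafted distance matrix against scipy's reference implementation
dists_scipy = cdist(X_test, X_train)  # Euclidean metric by default
print('Max deviation from scipy:', np.abs(dists - dists_scipy).max())
###Output _____no_output_____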
pivot_table and crosstab.ipynb
###Markdown pivot_tableA pivot table is a data summarization tool frequently found in spreadsheet programs and other data analysis software. It aggregates a table of data by one or more keys, arranging the data in a rectangle with some of the group keys along the rows and some along the columns. Pivot tables in Python with pandas are made possible through the groupby facility combined with reshape operations utilizing hierarchical indexing. DataFrame has a pivot_table method, and there is also a top-level pandas.pivot_table function. In addition to providing a convenience interface to groupby, pivot_table can add partial totals, also known as margins. Using the tipping dataset, suppose you wanted to compute a table of group means (the default pivot_table aggregation type) arranged by day and smoker on the rows: ###Code import pandas as pd import numpy as np tips = pd.read_csv('tips.csv') tips tips.pivot_table(index=['day', 'smoker']) ###Output _____no_output_____ ###Markdown Adding a column named tip_pct by calculating the tip percentage of the total bill ###Code tips['tip_pct']=tips['tip']*100/tips['total_bill'] tips[:6] ###Output _____no_output_____ ###Markdown Now, suppose we want to aggregate only tip_pct and size, and additionally group by time. I'll put smoker in the table columns and day in the rows: ###Code tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'], columns='smoker') ###Output _____no_output_____ ###Markdown Passing margins=True has the effect of adding `All` row and column labels, with corresponding values being the group statistics for all the data within a single tier: ###Code tips.pivot_table(['tip_pct', 'size'], index=['time', 'day'], columns='smoker', margins=True) ###Output _____no_output_____ ###Markdown To use a different aggregation function, pass it to aggfunc. For example, 'count' or len will give you a cross-tabulation (count or frequency) of group sizes: ###Code tips.pivot_table('tip_pct', index=['time', 'smoker'], columns='day', aggfunc=len, margins=True) ###Output _____no_output_____ ###Markdown If some combinations are empty (or otherwise NA), you may wish to pass a fill_value: ###Code tips.pivot_table('tip_pct', index=['time', 'size', 'smoker'], columns='day', aggfunc='mean', fill_value=0) ###Output _____no_output_____ ###Markdown pivot_table options![pivot%20table%20options.PNG](attachment:pivot%20table%20options.PNG) Cross-tabulation: crosstab ###Code from io import StringIO data = """\ Sample Nationality Handedness 1 USA Right-handed 2 Japan Left-handed 3 USA Right-handed 4 Japan Right-handed 5 Japan Left-handed 6 Japan Right-handed 7 USA Right-handed 8 USA Left-handed 9 Japan Right-handed 10 USA Right-handed""" data = pd.read_table(StringIO(data), sep='\s+') data ###Output _____no_output_____ ###Markdown As part of some survey analysis, we might want to summarize this data by nationality and handedness. You could use pivot_table to do this, but the pandas.crosstab function can be more convenient: ###Code pd.crosstab(data.Nationality, data.Handedness, margins=True) ###Output _____no_output_____ ###Markdown The first two arguments to crosstab can each be an array, a Series, or a list of arrays. As in the tips data: ###Code pd.crosstab([tips.time, tips.day], tips.smoker, margins=True) ###Output _____no_output_____
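###Markdown The same cross-tabulation can be reproduced with pivot_table by counting rows per group; crosstab simply packages this common pattern. A minimal sketch (the choice of 'Sample' as the values column is arbitrary here, since len only counts rows): ###Code
# Equivalent of pd.crosstab(data.Nationality, data.Handedness, margins=True)
data.pivot_table(values='Sample', index='Nationality', columns='Handedness',
                 aggfunc=len, fill_value=0, margins=True)
###Output _____no_output_____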
tensorflow-tflite_converter.ipynb
###Markdown Tensorflow Lite Model Converter> Converts a SavedModel into Tensorflow Lite format. For details, see [Tensorflow Lite Converter](https://www.tensorflow.org/lite/convert) ###Code # export def convert_model(saved_model_dir): """ Convert a SavedModel into Tensorflow Lite format. `saved_model_dir`: the path to the SavedModel directory returns: the converted Tensorflow Lite model """ logger.info('Converting SavedModel from: {}'.format(saved_model_dir)) converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) # path to the SavedModel directory tflite_model = converter.convert() return tflite_model # export def save_model(tflite_model, output_file): """ Save a Tensorflow Lite model to disk. `tflite_model`: the Tensorflow Lite model `output_file`: the path and filename to save the Tensorflow Lite model """ with open(output_file, 'wb') as f: f.write(tflite_model) logger.info('Successfully saved model to file: {}'.format(output_file)) ###Output _____no_output_____ ###Markdown Helper Methods ###Code # export def read_pipeline_config(pipeline_config_path): """ Reads the pipeline config file. `pipeline_config_path`: The path to the pipeline config file. """ pipeline_config = {} with tf.io.gfile.GFile(pipeline_config_path, 'r') as f: text_format.Parse(f.read(), pipeline_config) return pipeline_config # export def configure_logging(logging_level=logging.INFO): """ Configures logging for the system. `logging_level`: The logging level to use. """ logger.setLevel(logging_level) handler = logging.StreamHandler(sys.stdout) handler.setLevel(logging_level) logger.addHandler(handler) ###Output _____no_output_____ ###Markdown Run from command line To run from command line, use the following command:`python -m mlcore.tensorflow.tflite_converter [parameters]` The following parameters are supported:- `--source`: The path to the folder containing the SavedModel. (e.g.: *datasets/image_object_detection/car_damage/saved_model*)- `--categories`: The categories file to add to the Tensorflow Lite model. (e.g.: *datasets/image_object_detection/car_damage/categories.txt*)- `--name`: The name of the model. (e.g.: *"SSD MobileNetV2"*)- `--version`: The version of the model, defaults to *1* (=v1)- `--type`: The type of the model; if not explicitly set, it is inferred from the categories file path.- `--output`: The folder to store the Tensorflow Lite model.
(e.g.: *datasets/image_object_detection/car_damage/tflite*) ###Code # export if __name__ == '__main__' and '__file__' in globals(): configure_logging() parser = argparse.ArgumentParser() parser.add_argument("-s", "--source", help="The path to the folder containing the SavedModel.", type=str) parser.add_argument("-c", "--categories", help="The categories file to add to the Tensorflow Lite model.", type=str) parser.add_argument("-n", "--name", help="The name of the model.", type=str) parser.add_argument("-v", "--version", help="The version of the model.", type=int, default=1) parser.add_argument("-t", "--type", help="The type of the model, if not explicitly set try to infer from categories file path.", choices=list(DatasetType), type=DatasetType, default=None) parser.add_argument("-o", "--output", help="The folder to store the Tensorflow Lite model.", type=str) args = parser.parse_args() model_type = args.type # try to infer the model type if not explicitly set if model_type is None: try: model_type = infer_dataset_type(args.categories) except ValueError as e: logger.error(e) sys.exit(1) output_file = join(args.output, TFLITE_MODEL_DEFAULT_NAME) save_model(convert_model(args.source), output_file) model_meta = create_metadata(args.source, args.categories, model_type, args.name, args.version) write_metadata(model_meta, output_file, args.categories) logger.info('FINISHED!!!') # hide # for generating scripts from notebook directly from nbdev.export import notebook2script notebook2script() ###Output Converted annotation-core.ipynb. Converted annotation-folder_category_adapter.ipynb. Converted annotation-multi_category_adapter.ipynb. Converted annotation-via_adapter.ipynb. Converted annotation-yolo_adapter.ipynb. Converted annotation_converter.ipynb. Converted annotation_viewer.ipynb. Converted category_tools.ipynb. Converted core.ipynb. Converted dataset-core.ipynb. Converted dataset-image_classification.ipynb. Converted dataset-image_object_detection.ipynb. Converted dataset-image_segmentation.ipynb. Converted dataset-type.ipynb. Converted dataset_generator.ipynb. Converted evaluation-core.ipynb. Converted geometry.ipynb. Converted image-color_palette.ipynb. Converted image-inference.ipynb. Converted image-opencv_tools.ipynb. Converted image-pillow_tools.ipynb. Converted image-tools.ipynb. Converted index.ipynb. Converted io-core.ipynb. Converted tensorflow-tflite_converter.ipynb. Converted tensorflow-tflite_metadata.ipynb. Converted tensorflow-tfrecord_builder.ipynb. Converted tools-check_double_images.ipynb. Converted tools-downloader.ipynb. Converted tools-image_size_calculator.ipynb.
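###Markdown Run from Python The same conversion can be driven programmatically with the functions defined above. A minimal sketch (the paths below are placeholders mirroring the command line examples): ###Code
# Placeholder paths -- substitute your own SavedModel and output locations
saved_model_dir = 'datasets/image_object_detection/car_damage/saved_model'
output_file = 'datasets/image_object_detection/car_damage/tflite/model.tflite'

tflite_model = convert_model(saved_model_dir)
save_model(tflite_model, output_file)
###Output _____no_output_____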
Chicago_predictions_combo_comparisons.ipynb
###Markdown Day of week analysis for each month of each block id ###Code # start month = 3, end_month = 2 (months are 0-indexed) # X: 4/2017 -> 3/2019 actual date # y: 4/2019 -> 3/2020 actual date # X_test_start_month = 0 X_test_end_month = 0 X_test_start_year = 2016 X_test_end_year = 2018 TRAIN_NUM_BLOCKIDS = TEST_NUM_BLOCKIDS = 801 TRAIN_BLOCKIDS = random.sample(list(range(1,802)), k=TRAIN_NUM_BLOCKIDS) train_blockid_dict = {} for ind, blockid in enumerate(TRAIN_BLOCKIDS): train_blockid_dict[blockid] = ind TEST_BLOCKIDS = random.sample(list(range(1,802)), k=TEST_NUM_BLOCKIDS) test_blockid_dict = {} for ind, blockid in enumerate(TEST_BLOCKIDS): test_blockid_dict[blockid] = ind def plot_output(y, y_pred, dataset_type, x_label, y_label): fig = plt.figure(figsize=(10, 8)) # Plot actuals first (blue), predictions second (red), so the legend labels below match plt.plot(np.arange(len(y.flatten())), y.flatten(), color='blue'); plt.plot(np.arange(len(y_pred.flatten())), y_pred.flatten(), color='red'); plt.xlabel(x_label, fontsize=16) plt.ylabel(y_label, fontsize=18) plt.title(dataset_type + ' dataset', fontsize=18) if use_counts == True: plt.legend(labels=['count', 'predicted count'], prop={'size': 20}) else: plt.legend(labels=['risk', 'predicted risk'], prop={'size': 20}) plt.show() from sklearn.ensemble import RandomForestRegressor from sklearn.multioutput import MultiOutputRegressor from sklearn.metrics import mean_squared_error random.seed(101) def get_predictions(X_train, y_train, X_test, y_test, x_label, y_label, model, do_gridsearch=False): def print_data_info(data, data_name): flat = data.flatten() print('Number of data points:', len(flat)) print('Number of non-zero elements:', len(flat[flat > 0.0])) print('Percentage of non-zero elements:', len(flat[flat > 0.0])/len(flat)) if use_counts == True: pd.Series(flat).hist(); else: pd.Series(flat).hist(bins=[0.25, 0.5, 1.0, 1.5, 2.5, 5.0, 10, 15, 20]); plt.title(f'Histogram of {data_name}') plt.show() print_data_info(y_test, 'y_test') print('Correlation between y_train and y_test:\n', np.corrcoef(y_train.flatten(), y_test.flatten())) X_train = X_train.reshape((TRAIN_NUM_BLOCKIDS, X_train.shape[1] * X_train.shape[2])) y_train = y_train.reshape((TRAIN_NUM_BLOCKIDS, y_train.shape[1] * y_train.shape[2])) X_test = X_test.reshape((TEST_NUM_BLOCKIDS, X_test.shape[1] * X_test.shape[2])) y_test = y_test.reshape((TEST_NUM_BLOCKIDS, y_test.shape[1] * y_test.shape[2])) print('y_test shape after reshaping:', y_test.shape) if do_gridsearch == True: # For regressors: param_grid = { # param_grid values not working - have to debug --- TODO --- 'estimator__n_estimators': [80, 100, 120], 'estimator__max_depth': [2, 3, 4, 5, 6], } # For classifiers: # param_grid = { # 'estimator__n_estimators': [80, 100, 120], # 'estimator__max_depth': [2, 3, 4, 5, 6, 7, 8], # } gridsearch = GridSearchCV(model, param_grid=param_grid, scoring='neg_mean_squared_error', cv=3, n_jobs=-1, return_train_score=True, verbose=10) model = gridsearch model.fit(X_train, y_train) best_training_score = model.score(X_train, y_train) best_testing_score = model.score(X_test, y_test) print(f' Best training score:', -best_training_score) print(f' Best testing score: ', -best_testing_score) if do_gridsearch == True: best_model_params = model.cv_results_['params'][model.best_index_] print('Best Grid Search model:', best_model_params) y_pred = model.predict(X_test) print('mean_squared_error:', mean_squared_error(y_test, y_pred)) plot_output(y_test, y_pred, 'Testing', x_label, y_label) def relative_percent_difference(y_true, y_pred): return 1 - np.absolute((y_true - y_pred) /
(np.absolute(y_true) + np.absolute(y_pred))) return y_test, y_pred, relative_percent_difference(y_test, y_pred), model ###Output _____no_output_____ ###Markdown Compare two different blocks of data ###Code X_train_dow, X_test_dow, y_train_dow, y_test_dow = \ ready_data(2015, 2017, train_blockid_dict, # training (2015 2016) 2017 X_test_start_year, X_test_end_year, test_blockid_dict, # testing (2016 2017) 2018 DAY_OF_WEEK) train_blockid_dict[1], test_blockid_dict[1] def plot_block(blockid): train_blockid = train_blockid_dict[blockid] test_blockid = test_blockid_dict[blockid] y1 = y_train_dow[train_blockid].flatten() # blockid, month, dow = Jan (M,Tu,W,...,Sun), Feb (M,Tu,W,...,Sun), ... y1_mean = np.mean(y1) train = pd.concat([pd.Series(X_train_dow[train_blockid].flatten()), pd.Series(y1)]) train_mean = np.mean(train) y2 = y_test_dow[test_blockid].flatten() y2_mean = np.mean(y2) y_p = y_pred_dow[test_blockid].flatten() y_p_mean = np.mean(y_p) # plt.plot(np.arange(len(y1)), y1, color='red') # 2017 plt.plot(np.arange(len(y2)), y2, color='blue') # 2018 plt.plot(np.arange(len(y_p)), y_p, color='green') plt.show() print('train mean:', train_mean, '2017 mean:', y1_mean, '\n2018 mean:', y2_mean, \ 'y_pred_mean:', y_p_mean) [plot_block(i) for i in range(1, 6)] ###Output _____no_output_____ ###Markdown Day of week analysis for each month of each block id ###Code %%time X_train_dow, X_test_dow, y_train_dow, y_test_dow = \ ready_data(2015, 2017, train_blockid_dict, X_test_start_year, X_test_end_year, test_blockid_dict, DAY_OF_WEEK) print(X_train_dow.shape, y_train_dow.shape, X_test_dow.shape, y_test_dow.shape) model = MultiOutputRegressor(RandomForestRegressor(max_depth=5, n_estimators=100)) if use_counts == True: y_test_dow, y_pred_dow, rpd_dow, model_dow = \ get_predictions(X_train_dow, y_train_dow, X_test_dow, y_test_dow, 'day of week for each month', f'crime count / population {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}', model, do_gridsearch=do_gridsearch) else: y_test_dow, y_pred_dow, rpd_dow, model_dow = \ get_predictions(X_train_dow, y_train_dow, X_test_dow, y_test_dow, 'day of week for each month', f'risk {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}', model, do_gridsearch=do_gridsearch) plot_output(y_test_dow.flatten()[:100], y_pred_dow.flatten()[:100], 'Test', 'day of week for each month', 'crime count per million') ###Output _____no_output_____ ###Markdown Day of month analysis for each month of each block id ###Code # %%time # X_train_dom, X_test_dom, y_train_dom, y_test_dom = \ # ready_data(2015, 2017, train_blockid_dict, # X_test_start_year, X_test_end_year, test_blockid_dict, # DAY_OF_MONTH) # print(X_train_dom.shape, y_train_dom.shape, X_test_dom.shape, y_test_dom.shape) # model = MultiOutputRegressor(RandomForestRegressor(max_depth=4, n_estimators=120)) # if use_counts == True: # y_test_dom, y_pred_dom, rpd_dom, model_dom = \ # get_predictions(X_train_dom, y_train_dom, X_test_dom, y_test_dom, # 'day of month for each month', # f'crime count / population {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}', # model, do_gridsearch=do_gridsearch) # else: # y_test_dom, y_pred_dom, rpd_dom, model_dom = \ # get_predictions(X_train_dom, y_train_dom, X_test_dom, y_test_dom, # 'day of month for each month', # f'risk {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}', # model, do_gridsearch=do_gridsearch) ###Output _____no_output_____ ###Markdown Hour of day analysis for each month of each block id ###Code %%time X_train_hod, X_test_hod, y_train_hod, y_test_hod = \ ready_data(2015, 2017, 
train_blockid_dict, X_test_start_year, X_test_end_year, test_blockid_dict, HOUR_OF_DAY) print(X_train_hod.shape, y_train_hod.shape, X_test_hod.shape, y_test_hod.shape) model = MultiOutputRegressor(RandomForestRegressor(max_depth=4, n_estimators=120)) if use_counts == True: y_test_hod, y_pred_hod, rpd_hod, model_hod = \ get_predictions(X_train_hod, y_train_hod, X_test_hod, y_test_hod, 'hour of day for each month', f'crime count / population {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}', model, do_gridsearch=do_gridsearch) else: y_test_hod, y_pred_hod, rpd_hod, model_hod = \ get_predictions(X_train_hod, y_train_hod, X_test_hod, y_test_hod, 'hour of day for each month', f'risk {SEVERITY_OPERATOR} {SEVERITY_SCALING_FACTOR}', model, do_gridsearch=do_gridsearch) ###Output (801, 24, 24) (801, 12, 24) (801, 24, 24) (801, 12, 24) Number of data points: 230688 Number of non-zero elements: 120160 Percentage of non-zero elements: 0.5208766819253711 ###Markdown Weigh and combine predictions into one array ###Code NUM_BLOCKIDS = 801 NUM_MONTHS_IN_YEAR = 12 NUM_DAYS_IN_WEEK = 7 NUM_HOURS_IN_DAY = 24 risks = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY)) y_test_dow_times_hour = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY)) # Returns number of days in a month def days_in_month(year, month): p = pd.Period(f'{year}-{month}-1') return p.days_in_month # Day of week returns 0-based day value def day_of_week(dt): return dt.weekday() end_year = X_test_end_year for blockid in range(NUM_BLOCKIDS): for month in range(1, NUM_MONTHS_IN_YEAR + 1): for dow in range(7): for hour in range(24): weight_dow = 7 weight_hod = 24 weight_sum = weight_dow + weight_hod risks[blockid, month-1, dow * 24 + hour] += \ (y_pred_dow[blockid, (month - 1)*7+dow] * weight_dow + y_pred_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum y_test_dow_times_hour[blockid, month-1, dow * 24 + hour] += \ (y_test_dow[blockid, (month - 1)*7+dow] * weight_dow + y_test_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum risks_descaled = descale_data(risks) risks_descaled = np.nan_to_num(risks_descaled) risks = risks_descaled.copy() y_test_dow_times_hour = descale_data(y_test_dow_times_hour) y_test_dow_times_hour = np.nan_to_num(y_test_dow_times_hour) y = y_test_dow_times_hour.flatten() r = risks.flatten() print('Number of zeros in y_test_dow_times_hour:', len(y[y == 0.0]), 'out of:', len(y)) print('Number of zeros in risks:', len(r[r == 0.0]), 'out of:', len(r)) def plot_y_vs_ypred(y, y_pred): fig = plt.figure(figsize=(10, 8)) plt.plot(np.arange(len(y.flatten())), y.flatten(), color='blue'); plt.plot(np.arange(len(y_pred.flatten())), y_pred.flatten(), color='red'); plt.xlabel('dow * hour', fontsize=16) plt.ylabel('crime count / population * 1000', fontsize=18) plt.title('Test dataset', fontsize=18) if use_counts == True: plt.legend(labels=['count', 'predicted count'], prop={'size': 20}) else: plt.legend(labels=['risk', 'predicted risk'], prop={'size': 20}) plt.show() plot_y_vs_ypred(y_test_dow_times_hour, risks) plot_y_vs_ypred(y_test_dow_times_hour[0][0], risks[0][0]) ###Output _____no_output_____ ###Markdown Save data to file ###Code import pickle old_model_objs_to_pkl = [risks, test_blockid_dict] with open("old_data.pkl", "wb") as f: pickle.dump(risks, f) pickle.dump(test_blockid_dict, f) ###Output _____no_output_____ ###Markdown Create predictions ###Code X_test_start_year = 2017 X_test_end_year = 2019 X_train_dow, X_test_dow, y_train_dow, y_test_dow = \ 
ready_data(2016, 2018, train_blockid_dict, X_test_start_year, X_test_end_year, test_blockid_dict, DAY_OF_WEEK) X_train_dom, X_test_dom, y_train_dom, y_test_dom = \ ready_data(2016, 2018, train_blockid_dict, X_test_start_year, X_test_end_year, test_blockid_dict, DAY_OF_MONTH) X_train_hod, X_test_hod, y_train_hod, y_test_hod = \ ready_data(2016, 2018, train_blockid_dict, X_test_start_year, X_test_end_year, test_blockid_dict, HOUR_OF_DAY) y_pred_dow = model_dow.predict(X_test_dow.reshape((NUM_BLOCKIDS, X_test_dow.shape[1] * X_test_dow.shape[2]))) y_pred_hod = model_hod.predict(X_test_hod.reshape((NUM_BLOCKIDS, X_test_hod.shape[1] * X_test_hod.shape[2]))) y_test_dow = y_test_dow.reshape((TEST_NUM_BLOCKIDS, y_test_dow.shape[1] * y_test_dow.shape[2])) y_test_hod = y_test_hod.reshape((TEST_NUM_BLOCKIDS, y_test_hod.shape[1] * y_test_hod.shape[2])) NUM_BLOCKIDS = 801 NUM_MONTHS_IN_YEAR = 12 NUM_DAYS_IN_WEEK = 7 NUM_HOURS_IN_DAY = 24 risks = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY)) y_test_dow_times_hour = np.zeros((NUM_BLOCKIDS, NUM_MONTHS_IN_YEAR, NUM_DAYS_IN_WEEK * NUM_HOURS_IN_DAY)) # Returns number of days in a month def days_in_month(year, month): p = pd.Period(f'{year}-{month}-1') return p.days_in_month # Day of week returns 0-based day value def day_of_week(dt): return dt.weekday() end_year = X_test_end_year for blockid in range(NUM_BLOCKIDS): for month in range(1, NUM_MONTHS_IN_YEAR + 1): for dow in range(7): for hour in range(24): weight_dow = 7 weight_hod = 24 weight_sum = weight_dow + weight_hod risks[blockid, month-1, dow * 24 + hour] += \ (y_pred_dow[blockid, (month - 1)*7+dow] * weight_dow + y_pred_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum y_test_dow_times_hour[blockid, month-1, dow * 24 + hour] += \ (y_test_dow[blockid, (month - 1)*7+dow] * weight_dow + y_test_hod[blockid, (month - 1)*24+hour] * weight_hod) / weight_sum risks_descaled = descale_data(risks) risks_descaled = np.nan_to_num(risks_descaled) risks = risks_descaled.copy() y_test_dow_times_hour = descale_data(y_test_dow_times_hour) y_test_dow_times_hour = np.nan_to_num(y_test_dow_times_hour) ###Output _____no_output_____ ###Markdown Store predictions in DB ###Code from decouple import config pred_blockid_dict = test_blockid_dict def store_predictions_in_db(y_pred): DB_URI_WRITE = config('DB_URI_WRITE') # Put predictions into pandas DataFrame with corresponding block id predictions = pd.DataFrame([[x] for x in pred_blockid_dict.keys()], columns=["id"]) predictions.loc[:, "prediction"] = predictions["id"].apply(lambda x: y_pred[pred_blockid_dict[x],:,:].astype(np.float64).tobytes().hex()) predictions.loc[:, "month"] = 0 predictions.loc[:, "year"] = 2019 predictions.to_csv("predictions.csv", index=False) # Query SQL query_commit_predictions = """ CREATE TEMPORARY TABLE temp_predictions ( id SERIAL PRIMARY KEY, prediction TEXT, month INTEGER, year INTEGER ); COPY temp_predictions (id, prediction, month, year) FROM STDIN DELIMITER ',' CSV HEADER; UPDATE block SET prediction = DECODE(temp_predictions.prediction, 'hex'), month = temp_predictions.month, year = temp_predictions.year FROM temp_predictions WHERE block.id = temp_predictions.id; DROP TABLE temp_predictions; """ # Open saved predictions and send to database using above query with open("predictions.csv", "r") as f: print("SENDING TO DB") RAW_CONN = create_engine(DB_URI_WRITE).raw_connection() cursor = RAW_CONN.cursor() cursor.copy_expert(query_commit_predictions, f) RAW_CONN.commit() RAW_CONN.close() for r in 
SESSION.execute("SELECT ENCODE(prediction::BYTEA, 'hex'), id FROM block WHERE prediction IS NOT NULL LIMIT 5;").fetchall(): print(np.frombuffer(bytes.fromhex(r[0]), dtype=np.float64).reshape((12,7,24))) print(y_pred[pred_blockid_dict[int(r[1])], :].reshape((12,7,24))) with session_scope() as SESSION: store_predictions_in_db(risks) ###Output SENDING TO DB [[[ 3.62226775 4.29141527 4.99341643 ... 18.65912714 19.76846219 19.71990031] [ 3.48902621 4.15817373 4.86017489 ... 18.5258856 19.63522065 19.58665876] [ 3.44368941 4.11283693 4.81483809 ... 18.48054881 19.58988385 19.54132197] ... [ 3.72530414 4.39445167 5.09645283 ... 18.76216354 19.87149859 19.8229367 ] [ 3.6650971 4.33424463 5.03624579 ... 18.7019565 19.81129155 19.76272966] [ 3.55497764 4.22412516 4.92612633 ... 18.59183704 19.70117209 19.6526102 ]] [[ 3.54364996 4.09162981 4.68260826 ... 17.90739264 18.94988511 19.52136698] [ 3.24526502 3.79324488 4.38422333 ... 17.60900771 18.65150018 19.22298204] [ 3.47677822 4.02475808 4.61573653 ... 17.84052091 18.88301337 19.45449524] ... [ 3.26190582 3.80988568 4.40086413 ... 17.62564851 18.66814097 19.23962284] [ 3.31992703 3.86790688 4.45888533 ... 17.68366971 18.72616218 19.29764404] [ 3.61764469 4.16562455 4.756603 ... 17.98138738 19.02387985 19.59536171]] [[ 3.72496658 4.31471053 4.91321745 ... 19.02188491 19.53100906 19.36907928] [ 3.4719046 4.06164855 4.66015547 ... 18.76882294 19.27794709 19.1160173 ] [ 3.53337759 4.12312155 4.72162847 ... 18.83029593 19.33942008 19.1774903 ] ... [ 3.58954227 4.17928622 4.77779314 ... 18.88646061 19.39558476 19.23365497] [ 3.42981132 4.01955528 4.6180622 ... 18.72672966 19.23585381 19.07392403] [ 3.55494787 4.14469182 4.74319874 ... 18.85186621 19.36099036 19.19906058]] ... [[ 3.47282897 4.1201407 4.71222005 ... 18.76036277 19.82987611 19.82978825] [ 3.75071815 4.39802989 4.99010924 ... 19.03825196 20.10776529 20.10767744] [ 3.63353221 4.28084394 4.87292329 ... 18.92106601 19.99057935 19.99049149] ... [ 3.80752995 4.45484169 5.04692103 ... 19.09506376 20.16457709 20.16448923] [ 3.69850744 4.34581918 4.93789853 ... 18.98604125 20.05555458 20.05546673] [ 3.54380552 4.19111726 4.78319661 ... 18.83133933 19.90085266 19.9007648 ]] [[ 3.31854055 3.92801438 4.45727301 ... 18.28841996 19.16752258 19.17617787] [ 3.66278758 4.2722614 4.80152004 ... 18.63266699 19.51176961 19.5204249 ] [ 3.66540822 4.27488204 4.80414067 ... 18.63528762 19.51439024 19.52304553] ... [ 3.47868867 4.08816249 4.61742112 ... 18.44856808 19.3276707 19.33632599] [ 3.40795192 4.01742575 4.54668438 ... 18.37783133 19.25693395 19.26558924] [ 3.62582826 4.23530208 4.76456071 ... 18.59570766 19.47481028 19.48346557]] [[ 3.28655525 3.89916418 4.38915978 ... 18.06270713 18.8524392 19.0097691 ] [ 3.56468602 4.17729495 4.66729055 ... 18.34083789 19.13056997 19.28789987] [ 3.50913761 4.12174653 4.61174213 ... 18.28528948 19.07502155 19.23235146] ... [ 3.47367935 4.08628827 4.57628387 ... 18.24983122 19.0395633 19.1968932 ] [ 3.18837562 3.80098455 4.29098015 ... 17.96452749 18.75425957 18.91158947] [ 3.39143035 4.00403927 4.49403487 ... 18.16758222 18.9573143 19.1146442 ]]] [[[ 3.62226775 4.29141527 4.99341643 ... 18.65912714 19.76846219 19.71990031] [ 3.48902621 4.15817373 4.86017489 ... 18.5258856 19.63522065 19.58665876] [ 3.44368941 4.11283693 4.81483809 ... 18.48054881 19.58988385 19.54132197] ... [ 3.72530414 4.39445167 5.09645283 ... 18.76216354 19.87149859 19.8229367 ] [ 3.6650971 4.33424463 5.03624579 ... 18.7019565 19.81129155 19.76272966] [ 3.55497764 4.22412516 4.92612633 ... 
14.86507338 15.93886328 15.41814623]]] ###Markdown Load data from file ###Code import pickle with open("old_data.pkl", "rb") as f: risks = pickle.load(f) test_blockid_dict = pickle.load(f) risks.shape, test_blockid_dict y_pred_dow[0].reshape((12, 7)).sum(axis=1) y_pred_dow[1].reshape((12, 7)).sum(axis=1) plt.plot(np.arange(len(y_pred_dow[0].flatten())), y_pred_dow[0].flatten(), color='blue'); plt.plot(np.arange(len(y_pred_dow[1].flatten())), y_pred_dow[1].flatten(), color='green'); plt.plot(np.arange(len(y_pred_dow[0].flatten())), y_pred_dow[0].flatten(), color='blue'); ###Output _____no_output_____
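###Markdown For reference, a minimal hypothetical sketch of how a file like `old_data.pkl` could have been written in the first place (the dump step is not shown in this notebook; it assumes `risks` and `test_blockid_dict` already exist in memory):
###Code
import pickle

with open("old_data.pkl", "wb") as f:
    pickle.dump(risks, f)              # objects must be loaded back in the same order they were dumped
    pickle.dump(test_blockid_dict, f)
###Output _____no_output_____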
Machine Learning/Classification/Week 4/module-6-decision-tree-practical-assignment-blank.ipynb
###Markdown Decision Trees in Practice

In this assignment we will explore various techniques for preventing overfitting in decision trees. We will extend the implementation of the binary decision trees that we implemented in the previous assignment, so you will have to use your solutions from that assignment and extend them. In this assignment you will:
* Implement binary decision trees with different early stopping methods.
* Compare models with different stopping parameters.
* Visualize the concept of overfitting in decision trees.

Let's get started!

Fire up GraphLab Create

Make sure you have the latest version of GraphLab Create.
###Code
import graphlab
###Output This non-commercial license of GraphLab Create for academic use is assigned to [email protected] and will expire on August 21, 2017.
###Markdown Load LendingClub Dataset

This assignment will use the [LendingClub](https://www.lendingclub.com/) dataset used in the previous two assignments.
###Code
loans = graphlab.SFrame('lending-club-data.gl/')
###Output _____no_output_____
###Markdown As before, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
###Code
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
###Output _____no_output_____
###Markdown We will be using the same 4 categorical features as in the previous assignment:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.

In the dataset, each of these features is a categorical feature. Since we are building a binary decision tree, we will have to convert this to binary data in a subsequent section using 1-hot encoding.
###Code
features = ['grade',           # grade of the loan
            'term',            # the term of the loan
            'home_ownership',  # home_ownership status: own, mortgage or rent
            'emp_length',      # number of years of employment
           ]
target = 'safe_loans'
loans = loans[features + [target]]
###Output _____no_output_____
###Markdown Subsample dataset to make sure classes are balanced

Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We used `seed = 1` so everyone gets the same results.
###Code
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]

# Since there are fewer risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)

print "Percentage of safe loans  :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
###Output Percentage of safe loans : 0.502236174422 Percentage of risky loans : 0.497763825578 Total number of loans in our new dataset : 46508
###Markdown **Note:** There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this [paper](http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5128907&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F69%2F5173046%2F05128907.pdf%3Farnumber%3D5128907). For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
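###Markdown The balancing trick above generalizes to any binary ±1 target. As a hypothetical sketch (the helper below is invented here and is not part of the assignment):
###Code
def undersample_majority(data, target, seed=1):
    # Split into the two classes, then sample the larger class down to
    # (approximately) the size of the smaller one, as done for the loans above.
    pos = data[data[target] == +1]
    neg = data[data[target] == -1]
    if len(pos) > len(neg):
        pos = pos.sample(len(neg) / float(len(pos)), seed=seed)
    else:
        neg = neg.sample(len(pos) / float(len(neg)), seed=seed)
    return pos.append(neg)
###Output _____no_output_____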
###Markdown Transform categorical data into binary features

Since we are implementing **binary decision trees**, we transform our categorical data into binary data using 1-hot encoding, just as in the previous assignment. Here is the summary of that discussion:

For instance, the **home_ownership** feature represents the home ownership status of the loanee, which is either `own`, `mortgage` or `rent`. For example, if a data point has the feature
```
{'home_ownership': 'RENT'}
```
we want to turn this into three features:
```
{ 'home_ownership = OWN'      : 0,
  'home_ownership = MORTGAGE' : 0,
  'home_ownership = RENT'     : 1 }
```

Since this code requires a few Python and GraphLab tricks, feel free to use this block of code as is. Refer to the API documentation for a deeper understanding.
###Code
loans_data = risky_loans.append(safe_loans)
for feature in features:
    loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
    loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)

    # Change None's to 0's
    for column in loans_data_unpacked.column_names():
        loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)

    loans_data.remove_column(feature)
    loans_data.add_columns(loans_data_unpacked)
###Output _____no_output_____
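###Markdown As a quick illustration of the pattern above, here is a hypothetical toy SFrame (not part of the loans data) run through the same apply/unpack steps:
###Code
toy = graphlab.SFrame({'home_ownership': ['RENT', 'OWN', 'RENT', 'MORTGAGE']})
toy_encoded = toy['home_ownership'].apply(lambda x: {x: 1}).unpack(column_name_prefix='home_ownership')

# Change None's to 0's, exactly as in the cell above
for column in toy_encoded.column_names():
    toy_encoded[column] = toy_encoded[column].fillna(0)
# toy_encoded now holds 0/1 columns: 'home_ownership.RENT', 'home_ownership.OWN', 'home_ownership.MORTGAGE'
###Output _____no_output_____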
###Markdown The feature columns now look like this:
###Code
features = loans_data.column_names()
features.remove('safe_loans')  # Remove the response variable
features
###Output _____no_output_____
###Markdown Train-Validation split

We split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use `seed=1` so that everyone gets the same result.
###Code
train_data, validation_set = loans_data.random_split(.8, seed=1)
###Output _____no_output_____
###Markdown Early stopping methods for decision trees

In this section, we will extend the **binary tree implementation** from the previous assignment in order to handle some early stopping conditions. Recall the 3 early stopping methods that were discussed in lecture:
1. Reached a **maximum depth** (set by parameter `max_depth`).
2. Reached a **minimum node size** (set by parameter `min_node_size`).
3. Don't split if the **gain in error reduction** is too small (set by parameter `min_error_reduction`).

For the rest of this assignment, we will refer to these three as **early stopping conditions 1, 2, and 3**.

Early stopping condition 1: Maximum depth

Recall that we already implemented the maximum depth stopping condition in the previous assignment. In this assignment, we will experiment with this condition a bit more and also write code to implement the 2nd and 3rd early stopping conditions. We will be reusing code from the previous assignment and then building upon it. We will **alert you** when you reach a function that was part of the previous assignment so that you can simply copy and paste your previous code.

Early stopping condition 2: Minimum node size

The function **reached_minimum_node_size** takes 2 arguments:
1. The `data` (from a node).
2. The minimum number of data points that a node is allowed to split on, `min_node_size`.

This function simply calculates whether the number of data points at a given node is less than or equal to the specified minimum node size. This function will be used to detect this early stopping condition in the **decision_tree_create** function. Fill in the parts of the function below where you find `## YOUR CODE HERE`. There is **one** instance in the function below.
###Code
def reached_minimum_node_size(data, min_node_size):
    # Return True if the number of data points is less than or equal to the minimum node size.
    ## YOUR CODE HERE
    return len(data) <= min_node_size
###Output _____no_output_____
###Markdown **Quiz question:** Given an intermediate node with 6 safe loans and 3 risky loans, if the `min_node_size` parameter is 10, what should the tree learning algorithm do next?

Early stopping condition 3: Minimum gain in error reduction

The function **error_reduction** takes 2 arguments:
1. The error **before** a split, `error_before_split`.
2. The error **after** a split, `error_after_split`.

This function computes the gain in error reduction, i.e., the difference between the error before the split and that after the split. This function will be used to detect this early stopping condition in the **decision_tree_create** function. Fill in the parts of the function below where you find `## YOUR CODE HERE`. There is **one** instance in the function below.
###Code
def error_reduction(error_before_split, error_after_split):
    # Return the error before the split minus the error after the split.
    ## YOUR CODE HERE
    return error_before_split - error_after_split
###Output _____no_output_____
###Markdown **Quiz question:** Assume an intermediate node has 6 safe loans and 3 risky loans. For each of 4 possible features to split on, the error reduction is 0.0, 0.05, 0.1, and 0.14, respectively. If the **minimum gain in error reduction** parameter is set to 0.2, what should the tree learning algorithm do next?
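###Markdown To make the early-stopping check concrete, here is a minimal sketch with made-up error values (the numbers below are illustrative only and are not taken from the loans data or the quiz):
###Code
error_before_split = 0.45  # hypothetical error of the majority-class prediction at a node
error_after_split  = 0.42  # hypothetical error after the best candidate split
gain = error_reduction(error_before_split, error_after_split)  # roughly 0.03
# With min_error_reduction = 0.0 the algorithm would split (0.03 > 0.0);
# with min_error_reduction = 0.05 it would stop early and return a leaf instead.
###Output _____no_output_____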
###Markdown Grabbing binary decision tree helper functions from past assignment

Recall from the previous assignment that we wrote a function `intermediate_node_num_mistakes` that calculates the number of **misclassified examples** when predicting the **majority class**. This is used to help determine which feature is best to split on at a given node of the tree. **Please copy and paste your code for `intermediate_node_num_mistakes` here**.
###Code
def intermediate_node_num_mistakes(labels_in_node):
    # Corner case: If labels_in_node is empty, return 0
    if len(labels_in_node) == 0:
        return 0

    # Count the number of 1's (safe loans)
    ## YOUR CODE HERE
    count_ones = (labels_in_node == +1).sum()

    # Count the number of -1's (risky loans)
    ## YOUR CODE HERE
    count_negative_ones = len(labels_in_node) - count_ones

    # Return the number of mistakes that the majority classifier makes.
    ## YOUR CODE HERE
    if count_ones > count_negative_ones:
        return count_negative_ones
    else:
        return count_ones
###Output _____no_output_____
###Markdown We then wrote a function `best_splitting_feature` that finds the best feature to split on given the data and a list of features to consider. **Please copy and paste your `best_splitting_feature` code here**.
###Code
def best_splitting_feature(data, features, target):

    best_feature = None  # Keep track of the best feature
    best_error = 10      # Keep track of the best error so far
    # Note: Since error is always <= 1, we should initialize it with something larger than 1.

    # Convert to float to make sure error gets computed correctly.
    num_data_points = float(len(data))

    # Loop through each feature to consider splitting on that feature
    for feature in features:

        # The left split will have all data points where the feature value is 0
        left_split = data[data[feature] == 0]

        # The right split will have all data points where the feature value is 1
        ## YOUR CODE HERE
        right_split = data[data[feature] == 1]

        # Calculate the number of misclassified examples in the left split.
        # Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
        ## YOUR CODE HERE
        left_mistakes = intermediate_node_num_mistakes(left_split[target])

        # Calculate the number of misclassified examples in the right split.
        ## YOUR CODE HERE
        right_mistakes = intermediate_node_num_mistakes(right_split[target])

        # Compute the classification error of this split.
        # Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
        ## YOUR CODE HERE
        error = (left_mistakes + right_mistakes) / num_data_points

        # If this is the best error we have found so far, store the feature as best_feature and the error as best_error
        ## YOUR CODE HERE
        if error < best_error:
            best_error = error
            best_feature = feature

    return best_feature  # Return the best feature we found
###Output _____no_output_____
###Markdown Finally, recall the function `create_leaf` from the previous assignment, which creates a leaf node given a set of target values. **Please copy and paste your `create_leaf` code here**.
###Code
def create_leaf(target_values):
    # Create a leaf node
    leaf = {'splitting_feature': None,
            'left'             : None,
            'right'            : None,
            'is_leaf'          : True}  ## YOUR CODE HERE

    # Count the number of data points that are +1 and -1 in this node.
    num_ones = len(target_values[target_values == +1])
    num_minus_ones = len(target_values[target_values == -1])

    # For the leaf node, set the prediction to be the majority class.
    # Store the predicted class (1 or -1) in leaf['prediction']
    if num_ones > num_minus_ones:
        leaf['prediction'] = 1   ## YOUR CODE HERE
    else:
        leaf['prediction'] = -1  ## YOUR CODE HERE

    # Return the leaf node
    return leaf
###Output _____no_output_____
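###Markdown Before wiring these helpers into the tree builder, it can help to sanity-check them on a tiny hand-made SFrame. The `toy_data` below (and the expected values in the comments) are hypothetical and not part of the assignment data:
###Code
toy_data = graphlab.SFrame({'x'         : [0, 0, 1, 1, 1],
                            'safe_loans': [-1, +1, +1, +1, +1]})
num_mistakes = intermediate_node_num_mistakes(toy_data['safe_loans'])  # majority class is +1, so this is 1
best = best_splitting_feature(toy_data, ['x'], 'safe_loans')           # 'x' is the only candidate feature
leaf = create_leaf(toy_data['safe_loans'])                             # leaf['prediction'] is +1
###Output _____no_output_____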
###Markdown Incorporating new early stopping conditions in binary decision tree implementation

Now, you will implement a function that builds a decision tree handling the three early stopping conditions described in this assignment. In particular, you will write code to detect early stopping conditions 2 and 3. You implemented above the functions needed to detect these conditions. The 1st early stopping condition, **max_depth**, was implemented in the previous assignment and you will not need to reimplement this. In addition to these early stopping conditions, the typical stopping conditions of having no mistakes or no more features to split on (which we denote by "stopping conditions" 1 and 2) are also included as in the previous assignment.

**Implementing early stopping condition 2: minimum node size:**
* **Step 1:** Use the function **reached_minimum_node_size** that you implemented earlier to write an if condition to detect whether we have hit the base case, i.e., the node does not have enough data points and should be turned into a leaf. Don't forget to use the `min_node_size` argument.
* **Step 2:** Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.

**Implementing early stopping condition 3: minimum error reduction:**

**Note:** This has to come after finding the best splitting feature so we can calculate the error after splitting in order to calculate the error reduction.
* **Step 1:** Calculate the **classification error before splitting**. Recall that classification error is defined as:
$$\text{classification error} = \frac{\#\text{mistakes}}{\#\text{total examples}}$$
* **Step 2:** Calculate the **classification error after splitting**. This requires calculating the number of mistakes in the left and right splits, and then dividing by the total number of examples.
* **Step 3:** Use the function **error_reduction** that you implemented earlier to write an if condition to detect whether the reduction in error is less than the constant provided (`min_error_reduction`). Don't forget to use that argument.
* **Step 4:** Return a leaf. This line of code should be the same as the other (pre-implemented) stopping conditions.

Fill in the places where you find `## YOUR CODE HERE`. There are **seven** places in this function for you to fill in.
###Code
def decision_tree_create(data, features, target, current_depth = 0,
                         max_depth = 10, min_node_size=1,
                         min_error_reduction=0.0):

    remaining_features = features[:]  # Make a copy of the features.

    target_values = data[target]
    print "--------------------------------------------------------------------"
    print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))

    # Stopping condition 1: All nodes are of the same type.
    if intermediate_node_num_mistakes(target_values) == 0:
        print "Stopping condition 1 reached. All data points have the same target value."
        return create_leaf(target_values)

    # Stopping condition 2: No more features to split on.
    if remaining_features == []:
        print "Stopping condition 2 reached. No remaining features."
        return create_leaf(target_values)

    # Early stopping condition 1: Reached max depth limit.
    if current_depth >= max_depth:
        print "Early stopping condition 1 reached. Reached maximum depth."
        return create_leaf(target_values)

    # Early stopping condition 2: Reached the minimum node size.
    # If the number of data points is less than or equal to the minimum size, return a leaf.
    if reached_minimum_node_size(data, min_node_size):
        print "Early stopping condition 2 reached. Reached minimum node size."
        return create_leaf(target_values)

    # Find the best splitting feature
    splitting_feature = best_splitting_feature(data, features, target)

    # Split on the best feature that we found.
    left_split = data[data[splitting_feature] == 0]
    right_split = data[data[splitting_feature] == 1]

    # Early stopping condition 3: Minimum error reduction
    # Calculate the error before splitting (number of misclassified examples
    # divided by the total number of examples)
    error_before_split = intermediate_node_num_mistakes(target_values) / float(len(data))

    # Calculate the error after splitting (number of misclassified examples
    # in both groups divided by the total number of examples)
    left_mistakes = intermediate_node_num_mistakes(left_split[target])
    right_mistakes = intermediate_node_num_mistakes(right_split[target])
    error_after_split = (left_mistakes + right_mistakes) / float(len(data))

    # If the error reduction is LESS THAN OR EQUAL TO min_error_reduction, return a leaf.
    if error_reduction(error_before_split, error_after_split) <= min_error_reduction:
        print "Early stopping condition 3 reached. Minimum error reduction."
        return create_leaf(target_values)

    remaining_features.remove(splitting_feature)
    print "Split on feature %s. (%s, %s)" % (splitting_feature, len(left_split), len(right_split))

    # Repeat (recurse) on left and right subtrees
    left_tree = decision_tree_create(left_split, remaining_features, target,
                                     current_depth + 1, max_depth, min_node_size, min_error_reduction)
    ## YOUR CODE HERE
    right_tree = decision_tree_create(right_split, remaining_features, target,
                                      current_depth + 1, max_depth, min_node_size, min_error_reduction)

    return {'is_leaf'          : False,
            'prediction'       : None,
            'splitting_feature': splitting_feature,
            'left'             : left_tree,
            'right'            : right_tree}
###Output _____no_output_____
###Markdown Here is a function to count the nodes in your tree:
###Code
def count_nodes(tree):
    if tree['is_leaf']:
        return 1
    return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
###Output _____no_output_____
###Markdown Run the following test code to check your implementation. Make sure you get **'Test passed'** before proceeding.
###Code
small_decision_tree = decision_tree_create(train_data, features, 'safe_loans',
                                           max_depth = 2, min_node_size = 10, min_error_reduction=0.0)
if count_nodes(small_decision_tree) == 7:
    print 'Test passed!'
else:
    print 'Test failed... try again!'
    print 'Number of nodes found :', count_nodes(small_decision_tree)
    print 'Number of nodes that should be there : 7'
###Output -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Early stopping condition 1 reached. Reached maximum depth. Test passed!
###Markdown Build a tree!

Now that your code is working, we will train a tree model on the **train_data** with
* `max_depth = 6`
* `min_node_size = 100`
* `min_error_reduction = 0.0`

**Warning**: This code block may take a minute to learn.
###Code
my_decision_tree_new = decision_tree_create(train_data, features, 'safe_loans',
                                            max_depth = 6, min_node_size = 100, min_error_reduction=0.0)
###Output -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Split on feature emp_length.n/a.
(96, 5) -------------------------------------------------------------------- Subtree, depth = 3 (96 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 3 (5 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Split on feature grade.E. (22024, 1276) -------------------------------------------------------------------- Subtree, depth = 3 (22024 data points). Split on feature grade.F. (21666, 358) -------------------------------------------------------------------- Subtree, depth = 4 (21666 data points). Split on feature emp_length.n/a. (20734, 932) -------------------------------------------------------------------- Subtree, depth = 5 (20734 data points). Split on feature grade.G. (20638, 96) -------------------------------------------------------------------- Subtree, depth = 6 (20638 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (96 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (932 data points). Split on feature grade.A. (702, 230) -------------------------------------------------------------------- Subtree, depth = 6 (702 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (230 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 4 (358 data points). Split on feature emp_length.8 years. (347, 11) -------------------------------------------------------------------- Subtree, depth = 5 (347 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 3 (1276 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Early stopping condition 3 reached. Minimum error reduction. ###Markdown Let's now train a tree model **ignoring early stopping conditions 2 and 3** so that we get the same tree as in the previous assignment. To ignore these conditions, we set `min_node_size=0` and `min_error_reduction=-1` (a negative value). ###Code my_decision_tree_old = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1) ###Output -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). 
Split on feature grade.B. (8074, 1048) -------------------------------------------------------------------- Subtree, depth = 3 (8074 data points). Split on feature grade.C. (5884, 2190) -------------------------------------------------------------------- Subtree, depth = 4 (5884 data points). Split on feature grade.D. (3826, 2058) -------------------------------------------------------------------- Subtree, depth = 5 (3826 data points). Split on feature grade.E. (1693, 2133) -------------------------------------------------------------------- Subtree, depth = 6 (1693 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2133 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (2058 data points). Split on feature grade.E. (2058, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2058 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (2190 data points). Split on feature grade.D. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 5 (2190 data points). Split on feature grade.E. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2190 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1048 data points). Split on feature emp_length.5 years. (969, 79) -------------------------------------------------------------------- Subtree, depth = 4 (969 data points). Split on feature grade.C. (969, 0) -------------------------------------------------------------------- Subtree, depth = 5 (969 data points). Split on feature grade.D. (969, 0) -------------------------------------------------------------------- Subtree, depth = 6 (969 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (79 data points). Split on feature home_ownership.MORTGAGE. (34, 45) -------------------------------------------------------------------- Subtree, depth = 5 (34 data points). Split on feature grade.C. (34, 0) -------------------------------------------------------------------- Subtree, depth = 6 (34 data points). Early stopping condition 1 reached. Reached maximum depth. 
-------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (45 data points). Split on feature grade.C. (45, 0) -------------------------------------------------------------------- Subtree, depth = 6 (45 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Split on feature emp_length.n/a. (96, 5) -------------------------------------------------------------------- Subtree, depth = 3 (96 data points). Split on feature emp_length.< 1 year. (85, 11) -------------------------------------------------------------------- Subtree, depth = 4 (85 data points). Split on feature grade.B. (85, 0) -------------------------------------------------------------------- Subtree, depth = 5 (85 data points). Split on feature grade.C. (85, 0) -------------------------------------------------------------------- Subtree, depth = 6 (85 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (11 data points). Split on feature grade.B. (11, 0) -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature grade.C. (11, 0) -------------------------------------------------------------------- Subtree, depth = 6 (11 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (5 data points). Split on feature grade.B. (5, 0) -------------------------------------------------------------------- Subtree, depth = 4 (5 data points). Split on feature grade.C. (5, 0) -------------------------------------------------------------------- Subtree, depth = 5 (5 data points). Split on feature grade.D. (5, 0) -------------------------------------------------------------------- Subtree, depth = 6 (5 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Split on feature grade.E. (22024, 1276) -------------------------------------------------------------------- Subtree, depth = 3 (22024 data points). Split on feature grade.F. (21666, 358) -------------------------------------------------------------------- Subtree, depth = 4 (21666 data points). Split on feature emp_length.n/a. (20734, 932) -------------------------------------------------------------------- Subtree, depth = 5 (20734 data points). Split on feature grade.G. (20638, 96) -------------------------------------------------------------------- Subtree, depth = 6 (20638 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (96 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (932 data points). Split on feature grade.A. (702, 230) -------------------------------------------------------------------- Subtree, depth = 6 (702 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (230 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 4 (358 data points). Split on feature emp_length.8 years. (347, 11) -------------------------------------------------------------------- Subtree, depth = 5 (347 data points). Split on feature grade.A. (347, 0) -------------------------------------------------------------------- Subtree, depth = 6 (347 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature home_ownership.OWN. (9, 2) -------------------------------------------------------------------- Subtree, depth = 6 (9 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1276 data points). Split on feature grade.A. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 4 (1276 data points). Split on feature grade.B. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 5 (1276 data points). Split on feature grade.C. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 6 (1276 data points). Early stopping condition 1 reached. Reached maximum depth. 
-------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (0 data points). Stopping condition 1 reached. All data points have the same target value.
###Markdown Making predictions

Recall that in the previous assignment you implemented a function `classify` to classify a new point `x` using a given `tree`. **Please copy and paste your `classify` code here**.
###Code
def classify(tree, x, annotate = False):
    # If the node is a leaf node.
    if tree['is_leaf']:
        if annotate:
            print "At leaf, predicting %s" % tree['prediction']
        return tree['prediction']
    else:
        # Split on feature.
        split_feature_value = x[tree['splitting_feature']]
        if annotate:
            print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
        if split_feature_value == 0:
            return classify(tree['left'], x, annotate)
        else:
            return classify(tree['right'], x, annotate)
###Output _____no_output_____
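###Markdown Because `classify` only walks nested dictionaries, it can also be exercised on a tiny hand-built tree before trusting it on the learned models. The `toy_*` values below are hypothetical, chosen to mimic the node format produced by `decision_tree_create`:
###Code
toy_leaf_safe  = {'is_leaf': True,  'prediction': +1, 'splitting_feature': None, 'left': None, 'right': None}
toy_leaf_risky = {'is_leaf': True,  'prediction': -1, 'splitting_feature': None, 'left': None, 'right': None}
toy_tree = {'is_leaf': False, 'prediction': None, 'splitting_feature': 'grade.A',
            'left': toy_leaf_risky, 'right': toy_leaf_safe}
prediction = classify(toy_tree, {'grade.A': 1})  # follows the right branch, so this is +1
###Output _____no_output_____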
###Markdown Now, let's consider the first example of the validation set and see what the `my_decision_tree_new` model predicts for this data point.
###Code
validation_set[0]

print 'Predicted class: %s ' % classify(my_decision_tree_new, validation_set[0])
###Output Predicted class: -1
###Markdown Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class:
###Code
classify(my_decision_tree_new, validation_set[0], annotate = True)
###Output Split on term. 36 months = 0 Split on grade.A = 0 At leaf, predicting -1
###Markdown Let's now recall the prediction path for the decision tree learned in the previous assignment, which we recreated here as `my_decision_tree_old`.
###Code
classify(my_decision_tree_old, validation_set[0], annotate = True)
###Output Split on term. 36 months = 0 Split on grade.A = 0 Split on grade.B = 0 Split on grade.C = 0 Split on grade.D = 1 Split on grade.E = 0 At leaf, predicting -1
###Markdown **Quiz question:** For `my_decision_tree_new` trained with `max_depth = 6`, `min_node_size = 100`, `min_error_reduction=0.0`, is the prediction path for `validation_set[0]` shorter, longer, or the same as for `my_decision_tree_old` that ignored the early stopping conditions 2 and 3?

**Quiz question:** For `my_decision_tree_new` trained with `max_depth = 6`, `min_node_size = 100`, `min_error_reduction=0.0`, is the prediction path for **any point** always shorter, always longer, always the same, shorter or the same, or longer or the same as for `my_decision_tree_old` that ignored the early stopping conditions 2 and 3?

**Quiz question:** For a tree trained on **any** dataset using `max_depth = 6`, `min_node_size = 100`, `min_error_reduction=0.0`, what is the maximum number of splits encountered while making a single prediction?

Evaluating the model

Now let us evaluate the model that we have trained. You implemented this evaluation in the function `evaluate_classification_error` from the previous assignment. **Please copy and paste your `evaluate_classification_error` code here**.
###Code
def evaluate_classification_error(tree, data):
    # Apply classify(tree, x) to each row in your data
    predictions = data.apply(lambda x: classify(tree, x))

    # Once you've made the predictions, calculate the classification error and return it
    ## YOUR CODE HERE
    num_mistakes = (predictions != data['safe_loans']).sum()
    return num_mistakes / float(len(predictions))
###Output _____no_output_____
###Markdown Now, let's use this function to evaluate the classification error of `my_decision_tree_new` on the **validation_set**.
###Code
evaluate_classification_error(my_decision_tree_new, validation_set)
###Output _____no_output_____
###Markdown Now, evaluate the validation error using `my_decision_tree_old`.
###Code
evaluate_classification_error(my_decision_tree_old, validation_set)
###Output _____no_output_____
###Markdown **Quiz question:** Is the validation error of the new decision tree (using early stopping conditions 2 and 3) lower than, higher than, or the same as that of the old decision tree from the previous assignment?

Exploring the effect of max_depth

We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (**too small**, **just right**, and **too large**).

Train three models with these parameters:
1. **model_1**: max_depth = 2 (too small)
2. **model_2**: max_depth = 6 (just right)
3. **model_3**: max_depth = 14 (may be too large)

For each of these three, we set `min_node_size = 0` and `min_error_reduction = -1`.

**Note:** Each tree can take up to a few minutes to train. In particular, `model_3` will probably take the longest to train.
###Code
model_1 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 2,
                               min_node_size = 0, min_error_reduction=-1)
model_2 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6,
                               min_node_size = 0, min_error_reduction=-1)
model_3 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 14,
                               min_node_size = 0, min_error_reduction=-1)
###Output -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months.
(9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Split on feature grade.B. (8074, 1048) -------------------------------------------------------------------- Subtree, depth = 3 (8074 data points). Split on feature grade.C. (5884, 2190) -------------------------------------------------------------------- Subtree, depth = 4 (5884 data points). Split on feature grade.D. (3826, 2058) -------------------------------------------------------------------- Subtree, depth = 5 (3826 data points). Split on feature grade.E. (1693, 2133) -------------------------------------------------------------------- Subtree, depth = 6 (1693 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2133 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (2058 data points). Split on feature grade.E. (2058, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2058 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (2190 data points). Split on feature grade.D. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 5 (2190 data points). Split on feature grade.E. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2190 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 3 (1048 data points). Split on feature emp_length.5 years. (969, 79) -------------------------------------------------------------------- Subtree, depth = 4 (969 data points). Split on feature grade.C. (969, 0) -------------------------------------------------------------------- Subtree, depth = 5 (969 data points). Split on feature grade.D. (969, 0) -------------------------------------------------------------------- Subtree, depth = 6 (969 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (79 data points). Split on feature home_ownership.MORTGAGE. (34, 45) -------------------------------------------------------------------- Subtree, depth = 5 (34 data points). Split on feature grade.C. (34, 0) -------------------------------------------------------------------- Subtree, depth = 6 (34 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (45 data points). Split on feature grade.C. (45, 0) -------------------------------------------------------------------- Subtree, depth = 6 (45 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Split on feature emp_length.n/a. (96, 5) -------------------------------------------------------------------- Subtree, depth = 3 (96 data points). Split on feature emp_length.< 1 year. (85, 11) -------------------------------------------------------------------- Subtree, depth = 4 (85 data points). Split on feature grade.B. (85, 0) -------------------------------------------------------------------- Subtree, depth = 5 (85 data points). Split on feature grade.C. (85, 0) -------------------------------------------------------------------- Subtree, depth = 6 (85 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (11 data points). Split on feature grade.B. (11, 0) -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature grade.C. 
(11, 0)
--------------------------------------------------------------------
Subtree, depth = 6 (11 data points).
Early stopping condition 1 reached. Reached maximum depth.
--------------------------------------------------------------------
Subtree, depth = 6 (0 data points).
Stopping condition 1 reached. All data points have the same target value.
... (the rest of this max_depth = 6 trace is analogous: subtrees keep splitting on the remaining grade, emp_length, and home_ownership indicators until they reach the maximum depth or contain a single target value) ...
--------------------------------------------------------------------
Subtree, depth = 0 (37224 data points).
Split on feature term. 36 months. (9223, 28001)
--------------------------------------------------------------------
Subtree, depth = 1 (9223 data points).
Split on feature grade.A. (9122, 101)
--------------------------------------------------------------------
Subtree, depth = 2 (9122 data points).
Split on feature grade.B. (8074, 1048)
--------------------------------------------------------------------
Subtree, depth = 3 (8074 data points).
Split on feature grade.C. (5884, 2190)
... (this branch is expanded depth-first down to depth 14) ...
--------------------------------------------------------------------
Subtree, depth = 1 (28001 data points).
Split on feature grade.D. (23300, 4701)
--------------------------------------------------------------------
Subtree, depth = 2 (23300 data points).
Split on feature grade.E. (22024, 1276)
... (the trace continues in the same depth-first pattern: branches that hit the depth cap report "Early stopping condition 1 reached. Reached maximum depth."; pure branches report "Stopping condition 1 reached. All data points have the same target value.") ...
-------------------------------------------------------------------- Subtree, depth = 13 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 7 (4799 data points). Split on feature grade.B. (4799, 0) -------------------------------------------------------------------- Subtree, depth = 8 (4799 data points). Split on feature grade.C. (4799, 0) -------------------------------------------------------------------- Subtree, depth = 9 (4799 data points). Split on feature term. 60 months. (4799, 0) -------------------------------------------------------------------- Subtree, depth = 10 (4799 data points). Split on feature home_ownership.MORTGAGE. (2163, 2636) -------------------------------------------------------------------- Subtree, depth = 11 (2163 data points). Split on feature home_ownership.OTHER. (2154, 9) -------------------------------------------------------------------- Subtree, depth = 12 (2154 data points). Split on feature home_ownership.OWN. (1753, 401) -------------------------------------------------------------------- Subtree, depth = 13 (1753 data points). Split on feature home_ownership.RENT. (0, 1753) -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 14 (1753 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 13 (401 data points). Split on feature home_ownership.RENT. (401, 0) -------------------------------------------------------------------- Subtree, depth = 14 (401 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (9 data points). Split on feature emp_length.3 years. (8, 1) -------------------------------------------------------------------- Subtree, depth = 13 (8 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (2636 data points). Split on feature home_ownership.OTHER. (2636, 0) -------------------------------------------------------------------- Subtree, depth = 12 (2636 data points). Split on feature home_ownership.OWN. 
(2636, 0) -------------------------------------------------------------------- Subtree, depth = 13 (2636 data points). Split on feature home_ownership.RENT. (2636, 0) -------------------------------------------------------------------- Subtree, depth = 14 (2636 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 6 (96 data points). Split on feature grade.A. (96, 0) -------------------------------------------------------------------- Subtree, depth = 7 (96 data points). Split on feature grade.B. (96, 0) -------------------------------------------------------------------- Subtree, depth = 8 (96 data points). Split on feature grade.C. (96, 0) -------------------------------------------------------------------- Subtree, depth = 9 (96 data points). Split on feature term. 60 months. (96, 0) -------------------------------------------------------------------- Subtree, depth = 10 (96 data points). Split on feature home_ownership.MORTGAGE. (44, 52) -------------------------------------------------------------------- Subtree, depth = 11 (44 data points). Split on feature emp_length.3 years. (43, 1) -------------------------------------------------------------------- Subtree, depth = 12 (43 data points). Split on feature emp_length.7 years. (42, 1) -------------------------------------------------------------------- Subtree, depth = 13 (42 data points). Split on feature emp_length.8 years. (41, 1) -------------------------------------------------------------------- Subtree, depth = 14 (41 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (52 data points). Split on feature emp_length.2 years. 
(47, 5) -------------------------------------------------------------------- Subtree, depth = 12 (47 data points). Split on feature home_ownership.OTHER. (47, 0) -------------------------------------------------------------------- Subtree, depth = 13 (47 data points). Split on feature home_ownership.OWN. (47, 0) -------------------------------------------------------------------- Subtree, depth = 14 (47 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (5 data points). Split on feature home_ownership.OTHER. (5, 0) -------------------------------------------------------------------- Subtree, depth = 13 (5 data points). Split on feature home_ownership.OWN. (5, 0) -------------------------------------------------------------------- Subtree, depth = 14 (5 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 7 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (932 data points). Split on feature grade.A. (702, 230) -------------------------------------------------------------------- Subtree, depth = 6 (702 data points). Split on feature home_ownership.OTHER. (701, 1) -------------------------------------------------------------------- Subtree, depth = 7 (701 data points). Split on feature grade.B. (317, 384) -------------------------------------------------------------------- Subtree, depth = 8 (317 data points). Split on feature grade.C. (1, 316) -------------------------------------------------------------------- Subtree, depth = 9 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (316 data points). Split on feature grade.G. (316, 0) -------------------------------------------------------------------- Subtree, depth = 10 (316 data points). Split on feature term. 60 months. (316, 0) -------------------------------------------------------------------- Subtree, depth = 11 (316 data points). 
Split on feature home_ownership.MORTGAGE. (189, 127) -------------------------------------------------------------------- Subtree, depth = 12 (189 data points). Split on feature home_ownership.OWN. (139, 50) -------------------------------------------------------------------- Subtree, depth = 13 (139 data points). Split on feature home_ownership.RENT. (0, 139) -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 14 (139 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 13 (50 data points). Split on feature home_ownership.RENT. (50, 0) -------------------------------------------------------------------- Subtree, depth = 14 (50 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (127 data points). Split on feature home_ownership.OWN. (127, 0) -------------------------------------------------------------------- Subtree, depth = 13 (127 data points). Split on feature home_ownership.RENT. (127, 0) -------------------------------------------------------------------- Subtree, depth = 14 (127 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (384 data points). Split on feature grade.C. (384, 0) -------------------------------------------------------------------- Subtree, depth = 9 (384 data points). Split on feature grade.G. (384, 0) -------------------------------------------------------------------- Subtree, depth = 10 (384 data points). Split on feature term. 60 months. (384, 0) -------------------------------------------------------------------- Subtree, depth = 11 (384 data points). Split on feature home_ownership.MORTGAGE. (210, 174) -------------------------------------------------------------------- Subtree, depth = 12 (210 data points). Split on feature home_ownership.OWN. (148, 62) -------------------------------------------------------------------- Subtree, depth = 13 (148 data points). Split on feature home_ownership.RENT. (0, 148) -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 14 (148 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 13 (62 data points). Split on feature home_ownership.RENT. (62, 0) -------------------------------------------------------------------- Subtree, depth = 14 (62 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (174 data points). Split on feature home_ownership.OWN. (174, 0) -------------------------------------------------------------------- Subtree, depth = 13 (174 data points). Split on feature home_ownership.RENT. (174, 0) -------------------------------------------------------------------- Subtree, depth = 14 (174 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 7 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 6 (230 data points). Split on feature grade.B. (230, 0) -------------------------------------------------------------------- Subtree, depth = 7 (230 data points). Split on feature grade.C. (230, 0) -------------------------------------------------------------------- Subtree, depth = 8 (230 data points). Split on feature grade.G. (230, 0) -------------------------------------------------------------------- Subtree, depth = 9 (230 data points). Split on feature term. 60 months. (230, 0) -------------------------------------------------------------------- Subtree, depth = 10 (230 data points). Split on feature home_ownership.MORTGAGE. (119, 111) -------------------------------------------------------------------- Subtree, depth = 11 (119 data points). Split on feature home_ownership.OTHER. (119, 0) -------------------------------------------------------------------- Subtree, depth = 12 (119 data points). Split on feature home_ownership.OWN. (71, 48) -------------------------------------------------------------------- Subtree, depth = 13 (71 data points). Split on feature home_ownership.RENT. (0, 71) -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). 
Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 14 (71 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 13 (48 data points). Split on feature home_ownership.RENT. (48, 0) -------------------------------------------------------------------- Subtree, depth = 14 (48 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (111 data points). Split on feature home_ownership.OTHER. (111, 0) -------------------------------------------------------------------- Subtree, depth = 12 (111 data points). Split on feature home_ownership.OWN. (111, 0) -------------------------------------------------------------------- Subtree, depth = 13 (111 data points). Split on feature home_ownership.RENT. (111, 0) -------------------------------------------------------------------- Subtree, depth = 14 (111 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 7 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (358 data points). Split on feature emp_length.8 years. (347, 11) -------------------------------------------------------------------- Subtree, depth = 5 (347 data points). Split on feature grade.A. (347, 0) -------------------------------------------------------------------- Subtree, depth = 6 (347 data points). Split on feature grade.B. (347, 0) -------------------------------------------------------------------- Subtree, depth = 7 (347 data points). Split on feature grade.C. (347, 0) -------------------------------------------------------------------- Subtree, depth = 8 (347 data points). Split on feature grade.G. 
(347, 0) -------------------------------------------------------------------- Subtree, depth = 9 (347 data points). Split on feature term. 60 months. (347, 0) -------------------------------------------------------------------- Subtree, depth = 10 (347 data points). Split on feature home_ownership.MORTGAGE. (237, 110) -------------------------------------------------------------------- Subtree, depth = 11 (237 data points). Split on feature home_ownership.OTHER. (235, 2) -------------------------------------------------------------------- Subtree, depth = 12 (235 data points). Split on feature home_ownership.OWN. (203, 32) -------------------------------------------------------------------- Subtree, depth = 13 (203 data points). Split on feature home_ownership.RENT. (0, 203) -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 14 (203 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 13 (32 data points). Split on feature home_ownership.RENT. (32, 0) -------------------------------------------------------------------- Subtree, depth = 14 (32 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (2 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (110 data points). Split on feature home_ownership.OTHER. (110, 0) -------------------------------------------------------------------- Subtree, depth = 12 (110 data points). Split on feature home_ownership.OWN. (110, 0) -------------------------------------------------------------------- Subtree, depth = 13 (110 data points). Split on feature home_ownership.RENT. (110, 0) -------------------------------------------------------------------- Subtree, depth = 14 (110 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 7 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature home_ownership.OWN. (9, 2) -------------------------------------------------------------------- Subtree, depth = 6 (9 data points). Split on feature grade.A. (9, 0) -------------------------------------------------------------------- Subtree, depth = 7 (9 data points). Split on feature grade.B. (9, 0) -------------------------------------------------------------------- Subtree, depth = 8 (9 data points). Split on feature grade.C. (9, 0) -------------------------------------------------------------------- Subtree, depth = 9 (9 data points). Split on feature grade.G. (9, 0) -------------------------------------------------------------------- Subtree, depth = 10 (9 data points). Split on feature term. 60 months. (9, 0) -------------------------------------------------------------------- Subtree, depth = 11 (9 data points). Split on feature home_ownership.MORTGAGE. (6, 3) -------------------------------------------------------------------- Subtree, depth = 12 (6 data points). Split on feature home_ownership.OTHER. (6, 0) -------------------------------------------------------------------- Subtree, depth = 13 (6 data points). Split on feature home_ownership.RENT. (0, 6) -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 14 (6 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (3 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 7 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 6 (2 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1276 data points). 
Split on feature grade.A. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 4 (1276 data points). Split on feature grade.B. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 5 (1276 data points). Split on feature grade.C. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 6 (1276 data points). Split on feature grade.F. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 7 (1276 data points). Split on feature grade.G. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 8 (1276 data points). Split on feature term. 60 months. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 9 (1276 data points). Split on feature home_ownership.MORTGAGE. (855, 421) -------------------------------------------------------------------- Subtree, depth = 10 (855 data points). Split on feature home_ownership.OTHER. (849, 6) -------------------------------------------------------------------- Subtree, depth = 11 (849 data points). Split on feature home_ownership.OWN. (737, 112) -------------------------------------------------------------------- Subtree, depth = 12 (737 data points). Split on feature home_ownership.RENT. (0, 737) -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (737 data points). Split on feature emp_length.1 year. (670, 67) -------------------------------------------------------------------- Subtree, depth = 14 (670 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (67 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 12 (112 data points). Split on feature home_ownership.RENT. (112, 0) -------------------------------------------------------------------- Subtree, depth = 13 (112 data points). Split on feature emp_length.1 year. (102, 10) -------------------------------------------------------------------- Subtree, depth = 14 (102 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (10 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (6 data points). Split on feature home_ownership.OWN. (6, 0) -------------------------------------------------------------------- Subtree, depth = 12 (6 data points). Split on feature home_ownership.RENT. (6, 0) -------------------------------------------------------------------- Subtree, depth = 13 (6 data points). Split on feature emp_length.1 year. (6, 0) -------------------------------------------------------------------- Subtree, depth = 14 (6 data points). Early stopping condition 1 reached. Reached maximum depth. 
-------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (421 data points). Split on feature emp_length.6 years. (408, 13) -------------------------------------------------------------------- Subtree, depth = 11 (408 data points). Split on feature home_ownership.OTHER. (408, 0) -------------------------------------------------------------------- Subtree, depth = 12 (408 data points). Split on feature home_ownership.OWN. (408, 0) -------------------------------------------------------------------- Subtree, depth = 13 (408 data points). Split on feature home_ownership.RENT. (408, 0) -------------------------------------------------------------------- Subtree, depth = 14 (408 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (13 data points). Split on feature home_ownership.OTHER. (13, 0) -------------------------------------------------------------------- Subtree, depth = 12 (13 data points). Split on feature home_ownership.OWN. (13, 0) -------------------------------------------------------------------- Subtree, depth = 13 (13 data points). Split on feature home_ownership.RENT. (13, 0) -------------------------------------------------------------------- Subtree, depth = 14 (13 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 7 (0 data points). Stopping condition 1 reached. 
All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Split on feature grade.A. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 3 (4701 data points). Split on feature grade.B. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 4 (4701 data points). Split on feature grade.C. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 5 (4701 data points). Split on feature grade.E. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 6 (4701 data points). Split on feature grade.F. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 7 (4701 data points). Split on feature grade.G. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 8 (4701 data points). Split on feature term. 60 months. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 9 (4701 data points). Split on feature home_ownership.MORTGAGE. (3047, 1654) -------------------------------------------------------------------- Subtree, depth = 10 (3047 data points). Split on feature home_ownership.OTHER. (3037, 10) -------------------------------------------------------------------- Subtree, depth = 11 (3037 data points). Split on feature home_ownership.OWN. (2633, 404) -------------------------------------------------------------------- Subtree, depth = 12 (2633 data points). Split on feature home_ownership.RENT. (0, 2633) -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (2633 data points). Split on feature emp_length.1 year. (2392, 241) -------------------------------------------------------------------- Subtree, depth = 14 (2392 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (241 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 12 (404 data points). Split on feature home_ownership.RENT. (404, 0) -------------------------------------------------------------------- Subtree, depth = 13 (404 data points). Split on feature emp_length.1 year. (374, 30) -------------------------------------------------------------------- Subtree, depth = 14 (374 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (30 data points). Early stopping condition 1 reached. Reached maximum depth. 
-------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (10 data points). Split on feature home_ownership.OWN. (10, 0) -------------------------------------------------------------------- Subtree, depth = 12 (10 data points). Split on feature home_ownership.RENT. (10, 0) -------------------------------------------------------------------- Subtree, depth = 13 (10 data points). Split on feature emp_length.1 year. (9, 1) -------------------------------------------------------------------- Subtree, depth = 14 (9 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (1 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 10 (1654 data points). Split on feature emp_length.5 years. (1532, 122) -------------------------------------------------------------------- Subtree, depth = 11 (1532 data points). Split on feature emp_length.3 years. (1414, 118) -------------------------------------------------------------------- Subtree, depth = 12 (1414 data points). Split on feature emp_length.9 years. (1351, 63) -------------------------------------------------------------------- Subtree, depth = 13 (1351 data points). Split on feature home_ownership.OTHER. (1351, 0) -------------------------------------------------------------------- Subtree, depth = 14 (1351 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (63 data points). Split on feature home_ownership.OTHER. (63, 0) -------------------------------------------------------------------- Subtree, depth = 14 (63 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (118 data points). Split on feature home_ownership.OTHER. (118, 0) -------------------------------------------------------------------- Subtree, depth = 13 (118 data points). Split on feature home_ownership.OWN. (118, 0) -------------------------------------------------------------------- Subtree, depth = 14 (118 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 11 (122 data points). Split on feature home_ownership.OTHER. (122, 0) -------------------------------------------------------------------- Subtree, depth = 12 (122 data points). Split on feature home_ownership.OWN. (122, 0) -------------------------------------------------------------------- Subtree, depth = 13 (122 data points). Split on feature home_ownership.RENT. (122, 0) -------------------------------------------------------------------- Subtree, depth = 14 (122 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 14 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 13 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 12 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 9 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 8 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 7 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (0 data points). Stopping condition 1 reached. All data points have the same target value.
###Markdown
Evaluating the models
Let us evaluate the models on the **train** and **validation** data. Let us start by evaluating the classification error on the training data:
###Code
print "Training data, classification error (model 1):", evaluate_classification_error(model_1, train_data)
print "Training data, classification error (model 2):", evaluate_classification_error(model_2, train_data)
print "Training data, classification error (model 3):", evaluate_classification_error(model_3, train_data)
###Output
Training data, classification error (model 1): 0.125
Training data, classification error (model 2): 0.0603911454975
Training data, classification error (model 3): 0.0355415860735
###Markdown
Now evaluate the classification error on the validation data.
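As a refresher, here is a minimal sketch of how `evaluate_classification_error` can be implemented for the dictionary-based trees built by `decision_tree_create`. This is an illustration under stated assumptions, not necessarily the notebook's exact helper: the name `classify_sketch`, the node keys `'prediction'` and `'splitting_feature'`, and the 0-goes-left split convention are assumed here.
###Code
# A minimal sketch, not necessarily the exact helper defined earlier in this
# notebook. Assumed node keys: 'is_leaf', 'prediction', 'splitting_feature',
# 'left', 'right' (the same dictionary format used throughout this section).

def classify_sketch(tree, x):
    # Walk down the tree until a leaf is reached, then return its prediction.
    if tree['is_leaf']:
        return tree['prediction']
    # Binary (one-hot) features: value 0 goes left, value 1 goes right
    # (assumed convention, matching the (left, right) split counts printed above).
    if x[tree['splitting_feature']] == 0:
        return classify_sketch(tree['left'], x)
    else:
        return classify_sketch(tree['right'], x)

def evaluate_classification_error_sketch(tree, data, target='safe_loans'):
    # Classification error = fraction of rows whose predicted label disagrees
    # with the true target. Assumes iterating `data` yields row dictionaries
    # (true for an SFrame).
    mistakes = 0
    for x in data:
        if classify_sketch(tree, x) != x[target]:
            mistakes += 1
    return mistakes / float(len(data))
###Output
_____no_output_____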
###Code
print "Validation data, classification error (model 1):", evaluate_classification_error(model_1, validation_set)
print "Validation data, classification error (model 2):", evaluate_classification_error(model_2, validation_set)
print "Validation data, classification error (model 3):", evaluate_classification_error(model_3, validation_set)
###Output
Validation data, classification error (model 1): 0.12623869022
Validation data, classification error (model 2): 0.0606419646704
Validation data, classification error (model 3): 0.0297285652736
###Markdown
**Quiz Question:** Which tree has the smallest error on the validation data?

**Quiz Question:** Does the tree with the smallest error in the training data also have the smallest error in the validation data?

**Quiz Question:** Is it always true that the tree with the lowest classification error on the **training** set will result in the lowest classification error in the **validation** set?

Measuring the complexity of the tree

Recall in the lecture that we talked about deeper trees being more complex. We will measure the complexity of the tree as

```
complexity(T) = number of leaves in the tree T
```

Here, we provide a function `count_leaves` that counts the number of leaves in a tree. Using this implementation, compute the number of leaves in `model_1`, `model_2`, and `model_3`.
###Code
def count_leaves(tree):
    # A leaf contributes 1; an internal node's leaf count is the sum of the
    # leaf counts of its left and right subtrees.
    if tree['is_leaf']:
        return 1
    return count_leaves(tree['left']) + count_leaves(tree['right'])
###Output
_____no_output_____
###Markdown
Compute the number of leaves in `model_1`, `model_2`, and `model_3`.
###Code
print count_leaves(model_1)
print count_leaves(model_2)
print count_leaves(model_3)
###Output
4
41
341
###Markdown
**Quiz Question:** Which tree has the largest complexity?

**Quiz Question:** Is it always true that the most complex tree will result in the lowest classification error in the **validation_set**?

Exploring the effect of min_error_reduction

We will compare three models trained with different values of the stopping criterion. We intentionally picked models at the extreme ends (**too negative**, **just right**, and **too positive**).

Train three models with these parameters:
1. **model_4**: `min_error_reduction = -1` (ignoring this early stopping condition)
2. **model_5**: `min_error_reduction = 0` (just right)
3. **model_6**: `min_error_reduction = 5` (too positive)

For each of these three, we set `max_depth = 6` and `min_node_size = 0`.

**Note:** Each tree can take up to 30 seconds to train.
###Code
model_4 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1)
model_5 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=0)
model_6 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=5)
###Output
-------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Split on feature grade.B. (8074, 1048) -------------------------------------------------------------------- Subtree, depth = 3 (8074 data points). Split on feature grade.C.
(5884, 2190) -------------------------------------------------------------------- Subtree, depth = 4 (5884 data points). Split on feature grade.D. (3826, 2058) -------------------------------------------------------------------- Subtree, depth = 5 (3826 data points). Split on feature grade.E. (1693, 2133) -------------------------------------------------------------------- Subtree, depth = 6 (1693 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2133 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (2058 data points). Split on feature grade.E. (2058, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2058 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (2190 data points). Split on feature grade.D. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 5 (2190 data points). Split on feature grade.E. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2190 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1048 data points). Split on feature emp_length.5 years. (969, 79) -------------------------------------------------------------------- Subtree, depth = 4 (969 data points). Split on feature grade.C. (969, 0) -------------------------------------------------------------------- Subtree, depth = 5 (969 data points). Split on feature grade.D. (969, 0) -------------------------------------------------------------------- Subtree, depth = 6 (969 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (79 data points). Split on feature home_ownership.MORTGAGE. (34, 45) -------------------------------------------------------------------- Subtree, depth = 5 (34 data points). Split on feature grade.C. (34, 0) -------------------------------------------------------------------- Subtree, depth = 6 (34 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 5 (45 data points). Split on feature grade.C. (45, 0) -------------------------------------------------------------------- Subtree, depth = 6 (45 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Split on feature emp_length.n/a. (96, 5) -------------------------------------------------------------------- Subtree, depth = 3 (96 data points). Split on feature emp_length.< 1 year. (85, 11) -------------------------------------------------------------------- Subtree, depth = 4 (85 data points). Split on feature grade.B. (85, 0) -------------------------------------------------------------------- Subtree, depth = 5 (85 data points). Split on feature grade.C. (85, 0) -------------------------------------------------------------------- Subtree, depth = 6 (85 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (11 data points). Split on feature grade.B. (11, 0) -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature grade.C. (11, 0) -------------------------------------------------------------------- Subtree, depth = 6 (11 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (5 data points). Split on feature grade.B. (5, 0) -------------------------------------------------------------------- Subtree, depth = 4 (5 data points). Split on feature grade.C. (5, 0) -------------------------------------------------------------------- Subtree, depth = 5 (5 data points). Split on feature grade.D. (5, 0) -------------------------------------------------------------------- Subtree, depth = 6 (5 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Split on feature grade.E. (22024, 1276) -------------------------------------------------------------------- Subtree, depth = 3 (22024 data points). Split on feature grade.F. (21666, 358) -------------------------------------------------------------------- Subtree, depth = 4 (21666 data points). Split on feature emp_length.n/a. (20734, 932) -------------------------------------------------------------------- Subtree, depth = 5 (20734 data points). Split on feature grade.G. (20638, 96) -------------------------------------------------------------------- Subtree, depth = 6 (20638 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (96 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (932 data points). Split on feature grade.A. (702, 230) -------------------------------------------------------------------- Subtree, depth = 6 (702 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (230 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 4 (358 data points). Split on feature emp_length.8 years. (347, 11) -------------------------------------------------------------------- Subtree, depth = 5 (347 data points). Split on feature grade.A. (347, 0) -------------------------------------------------------------------- Subtree, depth = 6 (347 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature home_ownership.OWN. (9, 2) -------------------------------------------------------------------- Subtree, depth = 6 (9 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1276 data points). Split on feature grade.A. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 4 (1276 data points). Split on feature grade.B. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 5 (1276 data points). Split on feature grade.C. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 6 (1276 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Split on feature grade.A. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 3 (4701 data points). Split on feature grade.B. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 4 (4701 data points). Split on feature grade.C. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 5 (4701 data points). Split on feature grade.E. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 6 (4701 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Split on feature emp_length.n/a. (96, 5) -------------------------------------------------------------------- Subtree, depth = 3 (96 data points). Split on feature emp_length.< 1 year. (85, 11) -------------------------------------------------------------------- Subtree, depth = 4 (85 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 4 (11 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 3 (5 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Split on feature grade.E. (22024, 1276) -------------------------------------------------------------------- Subtree, depth = 3 (22024 data points). Split on feature grade.F. 
(21666, 358) -------------------------------------------------------------------- Subtree, depth = 4 (21666 data points). Split on feature emp_length.n/a. (20734, 932) -------------------------------------------------------------------- Subtree, depth = 5 (20734 data points). Split on feature grade.G. (20638, 96) -------------------------------------------------------------------- Subtree, depth = 6 (20638 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (96 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (932 data points). Split on feature grade.A. (702, 230) -------------------------------------------------------------------- Subtree, depth = 6 (702 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (230 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 4 (358 data points). Split on feature emp_length.8 years. (347, 11) -------------------------------------------------------------------- Subtree, depth = 5 (347 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature home_ownership.OWN. (9, 2) -------------------------------------------------------------------- Subtree, depth = 6 (9 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1276 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Early stopping condition 3 reached. Minimum error reduction. -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Early stopping condition 3 reached. Minimum error reduction. ###Markdown Calculate the classification error of each model (**model_4**, **model_5**, or **model_6**) on the validation set. ###Code print "Validation data, classification error (model 4):", evaluate_classification_error(model_4, validation_set) print "Validation data, classification error (model 5):", evaluate_classification_error(model_5, validation_set) print "Validation data, classification error (model 6):", evaluate_classification_error(model_6, validation_set) ###Output Validation data, classification error (model 4): 0.0606419646704 Validation data, classification error (model 5): 0.0597802671262 Validation data, classification error (model 6): 0.503446790177 ###Markdown Using the `count_leaves` function, compute the number of leaves in each of the models (**model_4**, **model_5**, and **model_6**).
###Code print count_leaves(model_4) print count_leaves(model_5) print count_leaves(model_6) ###Output 41 13 1 ###Markdown **Quiz Question:** Using the complexity definition above, which model (**model_4**, **model_5**, or **model_6**) has the largest complexity?Did this match your expectation?**Quiz Question:** **model_4** and **model_5** have similar classification error on the validation set, but **model_5** has lower complexity. Should you pick **model_5** over **model_4**? Exploring the effect of min_node_sizeWe will compare three models trained with different values of the stopping criterion. Again, we intentionally picked models at the extreme ends (**too small**, **just right**, and **too large**).Train three models with these parameters:1. **model_7**: min_node_size = 0 (too small)2. **model_8**: min_node_size = 2000 (just right)3. **model_9**: min_node_size = 50000 (too large)For each of these three, we set `max_depth = 6`, and `min_error_reduction = -1`.**Note:** Each tree can take up to 30 seconds to train. ###Code model_7 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 0, min_error_reduction=-1) model_8 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 2000, min_error_reduction=-1) model_9 = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6, min_node_size = 50000, min_error_reduction=-1) ###Output -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Split on feature grade.B. (8074, 1048) -------------------------------------------------------------------- Subtree, depth = 3 (8074 data points). Split on feature grade.C. (5884, 2190) -------------------------------------------------------------------- Subtree, depth = 4 (5884 data points). Split on feature grade.D. (3826, 2058) -------------------------------------------------------------------- Subtree, depth = 5 (3826 data points). Split on feature grade.E. (1693, 2133) -------------------------------------------------------------------- Subtree, depth = 6 (1693 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2133 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (2058 data points). Split on feature grade.E. (2058, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2058 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (2190 data points). Split on feature grade.D. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 5 (2190 data points). Split on feature grade.E. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2190 data points).
Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1048 data points). Split on feature emp_length.5 years. (969, 79) -------------------------------------------------------------------- Subtree, depth = 4 (969 data points). Split on feature grade.C. (969, 0) -------------------------------------------------------------------- Subtree, depth = 5 (969 data points). Split on feature grade.D. (969, 0) -------------------------------------------------------------------- Subtree, depth = 6 (969 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (79 data points). Split on feature home_ownership.MORTGAGE. (34, 45) -------------------------------------------------------------------- Subtree, depth = 5 (34 data points). Split on feature grade.C. (34, 0) -------------------------------------------------------------------- Subtree, depth = 6 (34 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (45 data points). Split on feature grade.C. (45, 0) -------------------------------------------------------------------- Subtree, depth = 6 (45 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Split on feature emp_length.n/a. (96, 5) -------------------------------------------------------------------- Subtree, depth = 3 (96 data points). Split on feature emp_length.< 1 year. (85, 11) -------------------------------------------------------------------- Subtree, depth = 4 (85 data points). Split on feature grade.B. (85, 0) -------------------------------------------------------------------- Subtree, depth = 5 (85 data points). Split on feature grade.C. (85, 0) -------------------------------------------------------------------- Subtree, depth = 6 (85 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). 
Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (11 data points). Split on feature grade.B. (11, 0) -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature grade.C. (11, 0) -------------------------------------------------------------------- Subtree, depth = 6 (11 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (5 data points). Split on feature grade.B. (5, 0) -------------------------------------------------------------------- Subtree, depth = 4 (5 data points). Split on feature grade.C. (5, 0) -------------------------------------------------------------------- Subtree, depth = 5 (5 data points). Split on feature grade.D. (5, 0) -------------------------------------------------------------------- Subtree, depth = 6 (5 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Split on feature grade.E. (22024, 1276) -------------------------------------------------------------------- Subtree, depth = 3 (22024 data points). Split on feature grade.F. (21666, 358) -------------------------------------------------------------------- Subtree, depth = 4 (21666 data points). Split on feature emp_length.n/a. (20734, 932) -------------------------------------------------------------------- Subtree, depth = 5 (20734 data points). Split on feature grade.G. (20638, 96) -------------------------------------------------------------------- Subtree, depth = 6 (20638 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (96 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (932 data points). Split on feature grade.A. (702, 230) -------------------------------------------------------------------- Subtree, depth = 6 (702 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (230 data points). Early stopping condition 1 reached. Reached maximum depth. 
-------------------------------------------------------------------- Subtree, depth = 4 (358 data points). Split on feature emp_length.8 years. (347, 11) -------------------------------------------------------------------- Subtree, depth = 5 (347 data points). Split on feature grade.A. (347, 0) -------------------------------------------------------------------- Subtree, depth = 6 (347 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (11 data points). Split on feature home_ownership.OWN. (9, 2) -------------------------------------------------------------------- Subtree, depth = 6 (9 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1276 data points). Split on feature grade.A. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 4 (1276 data points). Split on feature grade.B. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 5 (1276 data points). Split on feature grade.C. (1276, 0) -------------------------------------------------------------------- Subtree, depth = 6 (1276 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Split on feature grade.A. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 3 (4701 data points). Split on feature grade.B. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 4 (4701 data points). Split on feature grade.C. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 5 (4701 data points). Split on feature grade.E. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 6 (4701 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. 
-------------------------------------------------------------------- Subtree, depth = 3 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Split on feature term. 36 months. (9223, 28001) -------------------------------------------------------------------- Subtree, depth = 1 (9223 data points). Split on feature grade.A. (9122, 101) -------------------------------------------------------------------- Subtree, depth = 2 (9122 data points). Split on feature grade.B. (8074, 1048) -------------------------------------------------------------------- Subtree, depth = 3 (8074 data points). Split on feature grade.C. (5884, 2190) -------------------------------------------------------------------- Subtree, depth = 4 (5884 data points). Split on feature grade.D. (3826, 2058) -------------------------------------------------------------------- Subtree, depth = 5 (3826 data points). Split on feature grade.E. (1693, 2133) -------------------------------------------------------------------- Subtree, depth = 6 (1693 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (2133 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (2058 data points). Split on feature grade.E. (2058, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2058 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (2190 data points). Split on feature grade.D. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 5 (2190 data points). Split on feature grade.E. (2190, 0) -------------------------------------------------------------------- Subtree, depth = 6 (2190 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (1048 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 2 (101 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 1 (28001 data points). Split on feature grade.D. (23300, 4701) -------------------------------------------------------------------- Subtree, depth = 2 (23300 data points). Split on feature grade.E. (22024, 1276) -------------------------------------------------------------------- Subtree, depth = 3 (22024 data points). Split on feature grade.F. 
(21666, 358) -------------------------------------------------------------------- Subtree, depth = 4 (21666 data points). Split on feature emp_length.n/a. (20734, 932) -------------------------------------------------------------------- Subtree, depth = 5 (20734 data points). Split on feature grade.G. (20638, 96) -------------------------------------------------------------------- Subtree, depth = 6 (20638 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (96 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 5 (932 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 4 (358 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 3 (1276 data points). Early stopping condition 2 reached. Reached minimum node size. -------------------------------------------------------------------- Subtree, depth = 2 (4701 data points). Split on feature grade.A. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 3 (4701 data points). Split on feature grade.B. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 4 (4701 data points). Split on feature grade.C. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 5 (4701 data points). Split on feature grade.E. (4701, 0) -------------------------------------------------------------------- Subtree, depth = 6 (4701 data points). Early stopping condition 1 reached. Reached maximum depth. -------------------------------------------------------------------- Subtree, depth = 6 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 5 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 4 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 3 (0 data points). Stopping condition 1 reached. All data points have the same target value. -------------------------------------------------------------------- Subtree, depth = 0 (37224 data points). Early stopping condition 2 reached. Reached minimum node size. ###Markdown Now, let us evaluate the models (**model_7**, **model_8**, or **model_9**) on the **validation_set**. 
###Code print "Validation data, classification error (model 4):", evaluate_classification_error(model_7, validation_set) print "Validation data, classification error (model 5):", evaluate_classification_error(model_8, validation_set) print "Validation data, classification error (model 6):", evaluate_classification_error(model_9, validation_set) ###Output Validation data, classification error (model 4): 0.0606419646704 Validation data, classification error (model 5): 0.053425247738 Validation data, classification error (model 6): 0.503446790177 ###Markdown Using the `count_leaves` function, compute the number of leaves in each of each models (**model_7**, **model_8**, and **model_9**). ###Code print count_leaves(model_7) print count_leaves(model_8) print count_leaves(model_9) ###Output 41 19 1
prediction_model/notebooks/prepare_train_data/merge_data_k80.ipynb
###Markdown Dense Layer ###Code dfDense = pd.read_pickle(os.path.join(DATA_DIR,'%s/8/benchmark_dense__20180907.pkl' %file_name)) for i in range(9,10): dfDense = pd.concat([dfDense,pd.read_pickle(os.path.join(DATA_DIR,'%s/%d/benchmark_dense__20180907.pkl' %(file_name, i)))]) ops = (dfDense['batchsize'] * dfDense['dim_input'] * dfDense['dim_output']) memory_weights = dfDense['dim_input'] * dfDense['dim_output'] memory_in = dfDense['batchsize'] * dfDense['dim_input'] memory_out = dfDense['batchsize'] * dfDense['dim_output'] dfDense['optimizer'] = dfDense['optimizer'].replace({0:'opt_None', 1:'opt_SGD', 2:'opt_Adadelta', 3:'opt_Adagrad', 4:'opt_Momentum', 5:'opt_Adam', 6:'opt_RMSProp'}) dfDense['activation_fct'] = dfDense['activation_fct'].replace({0:'act_None', 1:'act_relu', 2:'act_tanh', 3:'act_sigmoid'}) dfDense['ops'] = ops dfDense['memory_weights'] = memory_weights dfDense['memory_in'] = memory_in dfDense['memory_out'] = memory_out one_hot_optimizer = pd.get_dummies(dfDense['optimizer']) dfDense = dfDense.drop(labels='optimizer',axis=1) dfDense = pd.concat([dfDense,one_hot_optimizer],axis=1) one_hot_activation = pd.get_dummies(dfDense['activation_fct']) dfDense = dfDense.drop(labels='activation_fct',axis=1) dfDense = pd.concat([dfDense,one_hot_activation],axis=1) dfDense.describe() dfDense.to_pickle(os.path.join(SAVE_DIR,'Data_dense_%s.pkl'%model_name)) ###Output _____no_output_____ ###Markdown Convolutional Layers ###Code dfConv = pd.read_pickle(os.path.join(DATA_DIR,'%s/0/benchmark_convolution__20181031.pkl' %file_name)) header = dfConv.columns for i in range(1,8): dfConv = pd.concat([dfConv,pd.read_pickle(os.path.join(DATA_DIR,'%s/%i/benchmark_convolution__20181031.pkl' %(file_name, i)))]) padding_reduction = ((dfConv['padding']==0) *(dfConv['kernelsize']-1)) elements_output = ((dfConv['matsize'] - padding_reduction) / dfConv['strides'])**2 ops = (dfConv['batchsize'] * elements_output * dfConv['kernelsize']**2 * dfConv['channels_in'] * dfConv['channels_out']) memory_weights = (dfConv['kernelsize']**2 * dfConv['channels_in'] * dfConv['channels_out'] + dfConv['use_bias'] * dfConv['channels_out']) memory_in = (dfConv['batchsize'] * dfConv['matsize']**2 * dfConv['channels_in']) memory_out = (dfConv['batchsize'] * elements_output * dfConv['channels_out']) dfConv['elements_matrix'] = dfConv['matsize']**2 dfConv['elements_kernel'] = dfConv['kernelsize']**2 dfConv['ops'] = ops dfConv['memory_weights'] = memory_weights dfConv['memory_in'] = memory_in dfConv['memory_out'] = memory_out dfConv['optimizer'] = dfConv['optimizer'].replace({0:'opt_None', 1:'opt_SGD', 2:'opt_Adadelta', 3:'opt_Adagrad', 4:'opt_Momentum', 5:'opt_Adam', 6:'opt_RMSProp'}) dfConv['activation_fct'] = dfConv['activation_fct'].replace({0:'act_None', 1:'act_relu', 2:'act_tanh', 3:'act_sigmoid'}) dfConv['use_bias'] = np.uint8(dfConv['use_bias']) dfConv.dropna(inplace=True) one_hot_optimizer = pd.get_dummies(dfConv['optimizer']) dfConv = dfConv.drop(labels='optimizer',axis=1) dfConv = pd.concat([dfConv,one_hot_optimizer],axis=1) one_hot_activation = pd.get_dummies(dfConv['activation_fct']) dfConv = dfConv.drop(labels='activation_fct',axis=1) dfConv = pd.concat([dfConv,one_hot_activation],axis=1) dfConv.describe() dfConv.to_pickle(os.path.join(SAVE_DIR,'Data_convolution_%s.pkl'%model_name)) ###Output _____no_output_____
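###Markdown Both sections above (Dense and Convolutional) repeat the same `pd.get_dummies` pattern: drop the categorical column, then concatenate its one-hot encoding. They also assume `pandas`/`numpy`/`os` imports and `DATA_DIR`, `SAVE_DIR`, `file_name`, `model_name` from an earlier setup cell that is not shown here. A self-contained sketch of the one-hot pattern on toy data (the column names below are illustrative, not the benchmark schema):

```python
import pandas as pd

# Toy frame standing in for dfDense/dfConv; 'optimizer' is the categorical column
toy = pd.DataFrame({'optimizer': ['opt_SGD', 'opt_Adam', 'opt_SGD'],
                    'batchsize': [32, 64, 128]})

one_hot = pd.get_dummies(toy['optimizer'])  # one indicator column per optimizer value
toy = toy.drop(labels='optimizer', axis=1)  # remove the original categorical column
toy = pd.concat([toy, one_hot], axis=1)     # append the indicator columns
print(toy)
```

The result has one 0/1 column per category (`opt_Adam`, `opt_SGD`), which is the form the downstream prediction model expects.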
notebooks/4.1/CSSReference.ipynb
###Markdown CSS Playground A notebook that contains most of the things that could be displayed, to test CSS; feel free to add things to it and send modifications Title first level Title second level Title third level h4 h5 h6 h1 h2 h3 h4 h6This is just a sample paragraph> With a blockquote def some_code(): return 'by indenting'```def some_other_code(): return 'between_backticks'``` You can look at different levels of nested unordered lists - level 1 - level 2 - level 2 - level 2 - level 3 - level 3 - level 4 - level 5 - level 6 - level 2- level 1- level 1- level 1 Ordered list 1. level 1 2. level 1 3. level 1 4. level 1 1. level 1 2. level 1 2. level 1 3. level 1 4. level 1 1. level 1 2. level 13. level 14. level 1 some Horizontal line***--- copy-paste from Daring Fireball link : This is [an example](http://example.com/ "Title") inline link.[This link](http://example.net/) has no title attribute. inline HtmlThis is a regular paragraph. Foo This is another regular paragraph. > This is a blockquote with two paragraphs. Lorem ipsum dolor sit amet,> consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus.> Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.> > Donec sit amet nisl. Aliquam semper ipsum sit amet velit. Suspendisse> id sem consectetuer libero luctus adipiscing.---> This is a blockquote with two paragraphs. Lorem ipsum dolor sit amet,consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus.Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus.> Donec sit amet nisl. Aliquam semper ipsum sit amet velit. Suspendisseid sem consectetuer libero luctus adipiscing. > This is the first level of quoting.>> > This is nested blockquote.>> Back to the first level. > This is a header.> > 1. This is the first list item.> 2. This is the second list item.> > Here's some example code:> > return shell_exec("echo $input | $markdown_script"); 1. This is a list item with two paragraphs. Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aliquam hendrerit mi posuere lectus. Vestibulum enim wisi, viverra nec, fringilla in, laoreet vitae, risus. Donec sit amet nisl. Aliquam semper ipsum sit amet velit.2. Suspendisse id sem consectetuer libero luctus adipiscing. * This is a list item with two paragraphs. This is the second paragraph in the list item. You're only required to indent the first line. Lorem ipsum dolor sit amet, consectetuer adipiscing elit.* Another item in the same list. * A list item with a blockquote: > This is a blockquote > inside a list item. * A list item with a code block: 1986. What a great season.1986\. What a great season. See my [About](/about/) page for details.
ref linkThis is [an example][id] reference-style link.[id]: http://example.com/ "Optional Title Here" *single asterisks*_single underscores_**double asterisks**__double underscores__un*frigging*believable // should render partially as bold\*this text is surrounded by literal asterisks\*``There is a literal backtick (`) here.`` Other notebook elements A small tooltipCloseExpandOpen in PagerCloseAnd some text inside Close Expand Open in Pager Close This one should be big ###Code # a code cell import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LightSource # example showing how to make shaded relief plots # like Mathematica # (http://reference.wolfram.com/mathematica/ref/ReliefPlot.html) # or Generic Mapping Tools # (http://gmt.soest.hawaii.edu/gmt/doc/gmt/html/GMT_Docs/node145.html) # test data d = 1 def maltc(ax, lambd=1, n=1): I0 = 1 I = lambda theta,d : I0*(np.sin(2*theta)*np.sin(np.pi*n*d/lambd))**2 X,Y = np.mgrid[-5:5:0.05,-5:5:0.05] Z = np.sqrt(X**2+Y**2)+np.sin(X**2+Y**2) r = np.sqrt(X**2+Y**2) theta = np.angle(X+1.0j*Y) Iv = np.vectorize(I) Z = Iv(r,theta) # create light source object. #ls = LightSource(azdeg=0,altdeg=65) # shade data, creating an rgb array. #rgb = ls.shade(Z,plt.cm.copper) # plot un-shaded and shaded images. #plt.figure(figsize=(12,5)) #plt.subplot(121) ax.imshow(Z,cmap=plt.cm.copper) ax.set_title('d=%d lambda=%f'%(d,lambd)) fig, (axes) = plt.subplots(3,4) fig.set_figheight(10) fig.set_figwidth(20) flatten = [item for sublist in axes for item in sublist] for ax,l in zip(flatten,range(len(flatten))): maltc(ax,lambd=(l+1)*np.pi/8.0) from __future__ import print_function import sys print('stdout') print('stderr',file=sys.stderr) ###Output stdout
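###Markdown The shaded-relief calls are commented out in the cell above; for reference, a small sketch of how `matplotlib.colors.LightSource` could be used on the same kind of array (a standalone toy surface, not the `maltc` output):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LightSource

X, Y = np.mgrid[-5:5:0.05, -5:5:0.05]
Z = np.sqrt(X**2 + Y**2) + np.sin(X**2 + Y**2)  # toy scalar field to shade

ls = LightSource(azdeg=0, altdeg=65)   # light direction: azimuth 0, altitude 65 degrees
rgb = ls.shade(Z, cmap=plt.cm.copper)  # RGB array with hillshading applied

plt.imshow(rgb)
plt.show()
```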
notebooks/plots_ellipticity.ipynb
###Markdown Results Visualisation --- ###Code import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np from annex_new import import_ from annex_new import get_ellipticity from annex_new import get_bh_errors from annex_new import get_sep_errors from annex_new import get_elli_errors from annex_new import count_per_bin from annex_new import get_bh_results from annex_new import get_sep_results ###Output _____no_output_____ ###Markdown Path ###Code """Check the folders hierarchy""" from os.path import expanduser user_home = expanduser("~") path = user_home+'/Cosmostat/Codes/BlendHunter' ###Output _____no_output_____ ###Markdown IMPORT DATA ###Code """Retrieve results for non-padded images""" bh_results = get_bh_results(path_bh_results = path+'/bh_results') sep_results = get_sep_results(path_sep_results = path+'/sep_results') """Retrieve ellipticity components""" e1_total= get_ellipticity(path, get_e1=True) e2_total= get_ellipticity(path, get_e2=True) """Retrieve missed blends for sep and bh""" bh_errors = [[get_bh_errors(results=bh_results[i][j]) for j in range(len(bh_results[i]))] for i in range(len(bh_results))] sep_errors = [[get_sep_errors(results=sep_results[i][j]) for j in range(len(sep_results[i]))] for i in range(len(sep_results))] """For each noise realisation and each noise level, retrieve the components corresponding to the missed blends""" e1_errors_bh = [[get_elli_errors(e1_total, errors = bh_errors[i][j]) for j in range(len(bh_errors[i]))] for i in range(len(bh_errors))] e1_errors_sep = [[get_elli_errors(e1_total, errors = sep_errors[i][j]) for j in range(len(sep_errors[i]))] for i in range(len(sep_errors))] e2_errors_bh = [[get_elli_errors(e2_total, errors = bh_errors[i][j]) for j in range(len(bh_errors[i]))] for i in range(len(bh_errors))] e2_errors_sep = [[get_elli_errors(e2_total, errors = sep_errors[i][j]) for j in range(len(sep_errors[i]))] for i in range(len(sep_errors))] """Get e1,e2 for missed blends by bh and sep""" def get_e1_errors(e1=None, errors=None): return [e1[i] for i in errors] def get_e2_errors(e2=None, errors=None): return [e2[i] for i in errors] """How to retrieve information from each bin in the total dataset""" def count_per_bin(data =None, get_bins =False, bins_=int(180/3)): (n, bins, patches) = plt.hist(data, bins = bins_) if get_bins: return n, bins[1:], bins else: return n """Computation of error ratios""" def acc_ratio_bins(data=None, N=None, bins=int(180/3)): """N being the total obs per bin""" n = count_per_bin(data, bins_=bins) ratio = 1 - (n/N) return ratio """Compute mean accuracy for each noise level""" def get_mean_acc(data=None, data_total=None, nb_ratios=23, get_mean_total=False): if get_mean_total: #Get total number per bin and bin positions for the whole test set n_total, mean_dist, bin_edges = count_per_bin(data=data_total, get_bins=True) return mean_dist else: """Get total number per bin and bin positions for the whole test set""" n_total, mean_dist, bin_edges = count_per_bin(data=data_total, get_bins=True) """Compute accuracy ratio for each bin, for each noise realisation and noise level""" acc_ratios = [[acc_ratio_bins(x[j], N= n_total , bins=bin_edges) for j in range(len(x))] for x in data] """For each noise level, create sub_lists of accuracy ratios for corresponding bins but with all noise realisations""" sub_ratios = [[np.array([acc_ratios[k][i][j] for i in range(len(acc_ratios[k]))]) for j in range(nb_ratios)] for k in range(len(acc_ratios))] """Compute the mean on each sub_list. The function returns the mean accuracy ratios for each noise level""" return [np.array([np.mean(k[i]) for i in range(len(k))]) for k in sub_ratios] """Retrieve the mean e1, e2 per bin for the x axis (names match the plotting code below)""" mean_e1 = get_mean_acc(get_mean_total=True, data_total=e1_total) mean_e2 = get_mean_acc(get_mean_total=True, data_total=e2_total) """Compute the mean accuracy ratio (on all noise realisations) for each bin and each noise level""" mean_acc_bh_e1 = get_mean_acc(data=e1_errors_bh, data_total=e1_total, nb_ratios = 23) mean_acc_sep_e1 = get_mean_acc(data=e1_errors_sep, data_total=e1_total, nb_ratios = 23) mean_acc_bh_e2 = get_mean_acc(data=e2_errors_bh, data_total=e2_total, nb_ratios = 23) mean_acc_sep_e2 = get_mean_acc(data=e2_errors_sep, data_total=e2_total, nb_ratios = 23) ###Output _____no_output_____ ###Markdown PLOT ACCURACY ACCORDING TO $e1, e2$ ###Code #Seaborn theme sns.set(context='notebook', style='whitegrid', palette='deep') #Font dictionary font = {'family': 'monospace', 'color': 'k', 'weight': 'normal', 'size': 15} #Noise level of each subplot, in row-major order noise_levels = [5.0, 14.0, 18.0, 26.0, 35.0, 40.0] #First figure: mean accuracy according to e1 fig, ax = plt.subplots(3,2,figsize=(20,17), sharex=False) fig.suptitle('Mean accuracy according to e1 w.r.t $\sigma_{noise}$', fontdict = {'family': 'serif','color': 'k','weight': 'heavy','size': 23}, fontsize=23) for k, ax_k in enumerate(ax.ravel()): ax_k.set_title('$\sigma_{noise}$ =%.1f' % noise_levels[k], fontdict=font, fontsize=18.5) ax_k.plot(mean_e1, 100*mean_acc_bh_e1[k], color = 'k', marker='.', label='Bh') ax_k.plot(mean_e1, 100*mean_acc_sep_e1[k], color = 'steelblue', marker='.', label='SExtractor') ax_k.set_ylabel('Accuracy (%)', fontdict = font) ax_k.set_xlabel('e1', fontdict = font) ax_k.set_ylim(0,105) ax_k.tick_params(axis='both', which='major', labelsize=15) ax_k.legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15}) plt.subplots_adjust(hspace=0.4) plt.show() #Second figure: mean accuracy according to e2 (same layout) fig, ax = plt.subplots(3,2,figsize=(20,17), sharex=False) fig.suptitle('Mean accuracy according to e2 w.r.t $\sigma_{noise}$', fontdict = {'family': 'serif','color': 'k','weight': 'heavy','size': 23}, fontsize=23) for k, ax_k in enumerate(ax.ravel()): ax_k.set_title('$\sigma_{noise}$ =%.1f' % noise_levels[k], fontdict=font, fontsize=18.5) ax_k.plot(mean_e2, 100*mean_acc_bh_e2[k], color = 'k', marker='.', label='Bh') ax_k.plot(mean_e2, 100*mean_acc_sep_e2[k], color = 'steelblue', marker='.', label='SExtractor') ax_k.set_ylabel('Accuracy (%)', fontdict = font) ax_k.set_xlabel('e2', fontdict = font) ax_k.set_ylim(0,105) ax_k.tick_params(axis='both', which='major', labelsize=15) ax_k.legend(borderaxespad=0.1, loc="lower center", fontsize=18, prop ={'family': 'monospace','size': 15}) plt.subplots_adjust(hspace=0.4) plt.show() ###Output _____no_output_____
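###Markdown The per-bin accuracy machinery above reduces to one idea: within each ellipticity bin, accuracy is one minus the fraction of blends that were missed. A self-contained sketch with hypothetical numbers (not the BlendHunter results):

```python
import numpy as np

e1_all = np.array([-0.3, -0.1, 0.0, 0.1, 0.2, 0.4])  # e1 of every test blend (toy values)
e1_missed = np.array([-0.3, 0.4])                     # e1 of the missed blends (toy values)

edges = np.linspace(-0.5, 0.5, 5)                     # 4 bins over the e1 range
n_total, _ = np.histogram(e1_all, bins=edges)
n_missed, _ = np.histogram(e1_missed, bins=edges)

acc = 1 - n_missed / np.maximum(n_total, 1)           # guard against empty bins
print(acc)                                            # accuracy per e1 bin
```

In the notebook the same ratio is first computed per noise realisation and then averaged, which is what `get_mean_acc` does on top of `acc_ratio_bins`.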
TeraGreen.ipynb
###Markdown Distribution of classes ###Code import pandas as pd import pickle df = pd.read_csv("file-all.lst", header=None, delimiter=" ") df[1].hist(bins=50) ###Output _____no_output_____ ###Markdown Top-N classes ###Code df[1].value_counts().sort_values(ascending=False) ###Output _____no_output_____ ###Markdown Extract top-N classes ###Code THRESHOLD = 2000 N_TRAIN = int(THRESHOLD * 0.7) # 70/30 train/test split within each kept class df = pd.read_csv("file-all.lst", header=None, delimiter=" ") df_train = pd.DataFrame() df_test = pd.DataFrame() count = 0 for i in range(max(df[1])+1): # keep classes with at least THRESHOLD samples; classes 713 and 747 are excluded if len(df[(df[1]==i)&(df[1]!=713)&(df[1]!=747)]) >= THRESHOLD: count += 1 df_sample = df[df[1]==i].sample(n=THRESHOLD) df_train = df_train.append(df_sample[:N_TRAIN], ignore_index=True) df_test = df_test.append(df_sample[N_TRAIN:], ignore_index=True) print("total {} classes".format(count)) ###Output total 27 classes ###Markdown Save ###Code df_train.to_csv("train-c{}.lst".format(count), index=False, header=False) df_test.to_csv("test-c{}.lst".format(count), index=False, header=False) df = pd.read_csv("labels.csv") tr = pd.read_csv("train-c{}.lst".format(count), header=None) labels = set(tr[1]) # map each kept class label to a contiguous neuron index dict_class_neuron = {j:i for i,j in enumerate(labels)} f = lambda x: dict_class_neuron[x] if x in labels else -1 df = df.drop("NEURON", axis=1) df["NEURON"] = df["CLASS"].apply(f) with open("dict_class_neuron.pkl", 'wb') as f: pickle.dump(dict_class_neuron, f) # note: labels.csv is re-read below, so the NEURON update above is persisted only through the pickle df = pd.read_csv("labels.csv") df.head() df.to_csv("labels.csv", index=False) ###Output _____no_output_____
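###Markdown The per-class loop above can be expressed more compactly with `groupby`. A hedged sketch on a toy frame, with a tiny `THRESHOLD` for illustration (the real run uses 2000):

```python
import pandas as pd

toy = pd.DataFrame({0: list(range(10)), 1: [0]*6 + [1]*4})  # column 1 = class label, as in file-all.lst
THRESHOLD, N_TRAIN = 4, 2

counts = toy[1].value_counts()
eligible = counts[counts >= THRESHOLD].index                 # classes with enough samples
balanced = (toy[toy[1].isin(eligible)]
            .groupby(1, group_keys=False)
            .apply(lambda g: g.sample(n=THRESHOLD, random_state=0)))

df_train = balanced.groupby(1, group_keys=False).head(N_TRAIN)  # first N_TRAIN rows per class
df_test = balanced.drop(df_train.index)                          # the remaining rows per class
print(len(df_train), len(df_test))                               # 4 and 4 for this toy frame
```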
fun_coding/codility/4_MaxCounters(Counting Elements).ipynb
###Markdown
###Code
# O(N*M) first try. correctness 100%
def solution(N,A):
    array = [0 for i in range(N)]
    for i in A:
        if i > N:
            # max counter: every counter jumps to the current maximum
            array = [max(array)] * N
        else:
            array[i-1] += 1
    return array

solution(5, [3,4,4,6,1,4,4])

# O(N+M) 10th try? lol
def solution(N,A):
    array = [0 for i in range(N)]
    max_i = 0  # running maximum over all counters
    min_i = 0  # lazy floor: the value every counter was raised to by the last max operation
    for i in A:
        if i > N:
            min_i = max_i  # don't rewrite the whole array; just remember the floor
        else:
            array[i-1] = max(array[i-1], min_i)+1
            if array[i-1] > max_i:
                max_i = array[i-1]
        # print(array,min_i)
    for i in range(N):
        array[i] = max(min_i, array[i])  # apply the floor to counters untouched since
    return array

solution(5,[3,4,4,6,1,4,4])

a = [1,2,3,4,5]
max(a)

solution(5,[3,4,4,6,1,4,4])
###Output
_____no_output_____
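###Markdown
A minimal property test (assuming the two `solution` cells above were run in this order, so the O(N+M) version is the live one). The `brute` helper below is illustrative and not part of the original solution:
###Code
import random

def brute(N, A):
    counters = [0] * N
    for op in A:
        if op > N:
            counters = [max(counters)] * N
        else:
            counters[op - 1] += 1
    return counters

for _ in range(100):
    N = random.randint(1, 10)
    A = [random.randint(1, N + 1) for _ in range(random.randint(0, 30))]
    assert solution(N, A) == brute(N, A)
print("ok")
###Output
_____no_output_____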
Session_9_Random_Data.ipynb
###Markdown
Confidence Interval on random data
###Code
from numpy.random import seed
from numpy.random import randn
import numpy as np
from scipy.stats import ttest_ind
import scipy.stats as ss

seed(1)
data_1 = 5*randn(100) + 25
data_2 = 5*randn(100) + 45
print(data_1)
print(data_2)

import seaborn as sns
sns.distplot(data_1)

sns.distplot(data_2)

mean_1, mean_2 = np.mean(data_1), np.mean(data_2)
print(mean_1, mean_2)

se_1, se_2 = ss.stats.sem(data_1), ss.stats.sem(data_2)
print(se_1, se_2)

se_diff = np.sqrt(se_1**2 + se_2**2)

t_stat = (mean_1 - mean_2) / se_diff

dof = len(data_1) + len(data_2) - 2
print(dof)

alpha = 0.05
cv = ss.t.ppf(1-alpha, dof)  # one-sided critical value; not used below

p = (1.0 - ss.t.cdf(abs(t_stat), dof)) * 2.0
print(p)

n1, n2 = len(data_1), len(data_2)
std_1, std_2 = np.std(data_1, ddof=1), np.std(data_2, ddof=1)  # sample standard deviations

mean_diff = mean_1 - mean_2
# pooled standard deviation, then the standard error of the difference in means
sp = np.sqrt(((n1-1) * std_1**2 + (n2-1) * std_2**2) / (n1 + n2 - 2))
se_diff = sp * np.sqrt(1/n1 + 1/n2)

lcbx = mean_diff - 1.96*se_diff
ucbx = mean_diff + 1.96*se_diff
print(lcbx, ucbx)
###Output
_____no_output_____
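###Markdown
As a cross-check, scipy's `ttest_ind` (imported above but unused so far) computes the two-sample t statistic and p-value in one call; it should broadly agree with the manual `t_stat` and `p` above:
###Code
t_stat_sp, p_sp = ttest_ind(data_1, data_2)
print(t_stat_sp, p_sp)
###Output
_____no_output_____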
week3/ex2-logistic regression/ML-Exercise2.ipynb
###Markdown
Machine Learning Exercise 2 - Logistic Regression

This notebook contains the Python version of the second programming exercise from the machine learning course on Coursera. See the [exercise file](ex2.pdf) for the detailed description and equations.

In this exercise we will implement logistic regression and apply it to a classification task. We will also improve the robustness of the algorithm by adding regularization to the training procedure, and test it on a more complex problem.

Code modified and annotated by: Huang Haiguang, [email protected]

Logistic Regression

In the first part of the exercise, we will build a logistic regression model to predict whether a student is admitted to a university. Imagine that you are the administrator of a university department and you want to decide each applicant's admission based on their scores on two exams. You have a training set of previous applicants that can be used to train logistic regression. For each training example, you have the applicant's scores on the two exams and the final admission decision. To accomplish this prediction task, we are going to build a classification model that estimates the probability of admission from the two exam scores.

Let's start by examining the data.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

path = 'ex2data1.txt'
data = pd.read_csv(path, header=None, names=['Exam 1', 'Exam 2', 'Admitted'])
data.head()
###Output
_____no_output_____
###Markdown
Let's create a scatter plot of the two scores, color coded by whether the example is positive (admitted) or negative (not admitted).
###Code
positive = data[data['Admitted'].isin([1])]
negative = data[data['Admitted'].isin([0])]

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['Exam 1'], positive['Exam 2'], s=50, c='b', marker='o', label='Admitted')
ax.scatter(negative['Exam 1'], negative['Exam 2'], s=50, c='r', marker='x', label='Not Admitted')
ax.legend()
ax.set_xlabel('Exam 1 Score')
ax.set_ylabel('Exam 2 Score')
plt.show()
###Output
_____no_output_____
###Markdown
It looks like there is a clear decision boundary between the two classes. Now we need to implement logistic regression so we can train a model to predict the outcome. The equations implemented in the code below are described in "ex2.pdf" in the "exercises" folder.

sigmoid function

g stands for the commonly used logistic function, an S-shaped (sigmoid) function, given by: \\[g\left( z \right)=\frac{1}{1+{{e}^{-z}}}\\]

Putting it together, we obtain the hypothesis of the logistic regression model: \\[{{h}_{\theta }}\left( x \right)=\frac{1}{1+{{e}^{-{{\theta }^{T}}X}}}\\]
###Code
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
###Output
_____no_output_____
###Markdown
Let's do a quick check to make sure it works.
###Code
nums = np.arange(-10, 10, step=1)

fig, ax = plt.subplots(figsize=(12,8))
ax.plot(nums, sigmoid(nums), 'r')
plt.show()
###Output
_____no_output_____
###Markdown
Excellent! Now we need to write the cost function to evaluate a solution.

Cost function: $J\left( \theta \right)=\frac{1}{m}\sum\limits_{i=1}^{m}{[-{{y}^{(i)}}\log \left( {{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)-\left( 1-{{y}^{(i)}} \right)\log \left( 1-{{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)]}$
###Code
def cost(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    return np.sum(first - second) / (len(X))
###Output
_____no_output_____
###Markdown
Now we need to do some setup, very similar to what we did in exercise 1 for linear regression.
###Code
# add a ones column - this makes the matrix multiplication work out easier
data.insert(0, 'Ones', 1)

# set X (training data) and y (target variable)
cols = data.shape[1]
X = data.iloc[:,0:cols-1]
y = data.iloc[:,cols-1:cols]

# convert to numpy arrays and initalize the parameter array theta
X = np.array(X.values)
y = np.array(y.values)
theta = np.zeros(3)
###Output
_____no_output_____
###Markdown
Let's check the shapes of our matrices to make sure everything looks good.
###Code
theta

X.shape, theta.shape, y.shape
###Output
_____no_output_____
###Markdown
Let's compute the cost for the initial parameters (theta all zeros).
###Code
cost(theta, X, y)
###Output
_____no_output_____
###Markdown
Looks good. Next, we need a function that computes the gradient of the model parameters given our training data, labels and some parameters theta.

gradient descent
* this is batch gradient descent
* the vectorized form: $\frac{1}{m} X^T( Sigmoid(X\theta) - y )$

$$\frac{\partial J\left( \theta \right)}{\partial {{\theta }_{j}}}=\frac{1}{m}\sum\limits_{i=1}^{m}{({{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}})x_{_{j}}^{(i)}}$$
###Code
def gradient(theta, X, y):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)

    parameters = int(theta.ravel().shape[1])
    grad = np.zeros(parameters)

    error = sigmoid(X * theta.T) - y

    for i in range(parameters):
        term = np.multiply(error, X[:,i])
        grad[i] = np.sum(term) / len(X)

    return grad
###Output
_____no_output_____
###Markdown
Note that we don't actually perform gradient descent in this function - we only compute a single gradient step. In the exercise, an Octave function called "fminunc" is used to optimize over the cost and gradient. Since we are using Python, we can use SciPy's "optimize" namespace to do the same thing.

Let's look at the result of a single gradient step with our data and an initial theta of 0.
###Code
gradient(theta, X, y)
###Output
_____no_output_____
###Markdown
Now we can use SciPy's truncated Newton (TNC) implementation to find the optimal parameters.
###Code
import scipy.optimize as opt
result = opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X, y))
result
###Output
_____no_output_____
###Markdown
Let's see what the cost function evaluates to with this solution.
###Code
cost(result[0], X, y)
###Output
_____no_output_____
###Markdown
Next, we need to write a function that outputs predictions for a dataset X using the parameters theta that we learned. We can then use this function to score the training accuracy of our classifier.

The hypothesis of the logistic regression model: \\[{{h}_{\theta }}\left( x \right)=\frac{1}{1+{{e}^{-{{\theta }^{T}}X}}}\\]

When ${{h}_{\theta }}$ is greater than or equal to 0.5, predict y=1; when ${{h}_{\theta }}$ is less than 0.5, predict y=0.
###Code
def predict(theta, X):
    probability = sigmoid(X * theta.T)
    return [1 if x >= 0.5 else 0 for x in probability]

theta_min = np.matrix(result[0])
predictions = predict(theta_min, X)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y)]
accuracy = sum(map(int, correct)) / len(correct) * 100
print ('accuracy = {0:.0f}%'.format(accuracy))
###Output
accuracy = 89%
###Markdown
Our logistic regression classifier correctly predicts whether a student is admitted or not 89% of the time. Not bad! Keep in mind that this is accuracy on the training set. We didn't hold out a validation set or use cross-validation to get a true approximation of the error, so this number is probably higher than the model's true performance (a topic covered later).

Regularized Logistic Regression

In the second part of the exercise, we will improve the logistic regression algorithm by adding a regularization term. If regularization is new to you, or you would like the background behind this section's equations, see "ex2.pdf" in the "exercises" folder. In short, regularization is a term in the cost function that makes the algorithm prefer "simpler" models (in this case, models with smaller coefficients). In theory this helps reduce overfitting and improves the model's ability to generalize. So let's get started.

Imagine that you are the production supervisor of a factory and you have the results of two tests on some microchips. From the two tests, you want to decide whether the chips should be accepted or rejected. To help you make this difficult decision, you have a dataset of tests on past microchips, from which you can build a logistic regression model.

Just like in the first part, let's start by visualizing the data!
###Code
path = 'ex2data2.txt'
data2 = pd.read_csv(path, header=None, names=['Test 1', 'Test 2', 'Accepted'])
data2.head()

positive = data2[data2['Accepted'].isin([1])]
negative = data2[data2['Accepted'].isin([0])]

fig, ax = plt.subplots(figsize=(12,8))
ax.scatter(positive['Test 1'], positive['Test 2'], s=50, c='b', marker='o', label='Accepted')
ax.scatter(negative['Test 1'], negative['Test 2'], s=50, c='r', marker='x', label='Rejected')
ax.legend()
ax.set_xlabel('Test 1 Score')
ax.set_ylabel('Test 2 Score')
plt.show()
###Output
_____no_output_____
###Markdown
Wow, this data looks quite a bit more complicated than the previous example. In particular, you will notice that there is no linear decision boundary that separates the two classes well. One way to handle this with a linear technique like logistic regression is to construct features derived from polynomials of the original features. Let's start by creating a set of polynomial features.
###Code
degree = 5
x1 = data2['Test 1']
x2 = data2['Test 2']

data2.insert(3, 'Ones', 1)

for i in range(1, degree):
    for j in range(0, i):
        data2['F' + str(i) + str(j)] = np.power(x1, i-j) * np.power(x2, j)

data2.drop('Test 1', axis=1, inplace=True)
data2.drop('Test 2', axis=1, inplace=True)

data2.head()
###Output
_____no_output_____
###Markdown
Now we need to modify the cost and gradient functions from part 1 to include the regularization term. First the cost function:

regularized cost
$$J\left( \theta \right)=\frac{1}{m}\sum\limits_{i=1}^{m}{[-{{y}^{(i)}}\log \left( {{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)-\left( 1-{{y}^{(i)}} \right)\log \left( 1-{{h}_{\theta }}\left( {{x}^{(i)}} \right) \right)]}+\frac{\lambda }{2m}\sum\limits_{j=1}^{n}{\theta _{j}^{2}}$$
###Code
def costReg(theta, X, y, learningRate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    reg = (learningRate / (2 * len(X))) * np.sum(np.power(theta[:,1:theta.shape[1]], 2))
    return np.sum(first - second) / len(X) + reg
###Output
_____no_output_____
###Markdown
Note the "reg" term in the equation, and also the additional "learning rate" parameter - a hyperparameter that controls the strength of the regularization term. Now we need to add the regularized gradient function:

If we minimize this cost function with gradient descent, then since we do not regularize ${{\theta }_{0}}$, the algorithm splits into two cases:
\begin{align} & \text{Repeat until convergence } \{ \\ & \quad {{\theta }_{0}}:={{\theta }_{0}}-a\frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{_{0}}^{(i)}} \\ & \quad {{\theta }_{j}}:={{\theta }_{j}}-a\left[ \frac{1}{m}\sum\limits_{i=1}^{m}{[{{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}}]x_{j}^{(i)}}+\frac{\lambda }{m}{{\theta }_{j}} \right] \\ & \} \\ \end{align}

Adjusting the update rule for j=1,2,...,n gives: ${{\theta }_{j}}:={{\theta }_{j}}(1-a\frac{\lambda }{m})-a\frac{1}{m}\sum\limits_{i=1}^{m}{({{h}_{\theta }}\left( {{x}^{(i)}} \right)-{{y}^{(i)}})x_{j}^{(i)}}$
###Code
def gradientReg(theta, X, y, learningRate):
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)

    parameters = int(theta.ravel().shape[1])
    grad = np.zeros(parameters)

    error = sigmoid(X * theta.T) - y

    for i in range(parameters):
        term = np.multiply(error, X[:,i])

        if (i == 0):
            grad[i] = np.sum(term) / len(X)
        else:
            grad[i] = (np.sum(term) / len(X)) + ((learningRate / len(X)) * theta[:,i])

    return grad
###Output
_____no_output_____
###Markdown
Just like in part 1, initialize the variables.
###Code
# set X and y (remember from above that we moved the label to column 0)
cols = data2.shape[1]
X2 = data2.iloc[:,1:cols]
y2 = data2.iloc[:,0:1]

# convert to numpy arrays and initalize the parameter array theta
X2 = np.array(X2.values)
y2 = np.array(y2.values)
theta2 = np.zeros(11)
###Output
_____no_output_____
###Markdown
Let's initialize the learning rate to a sensible value. If necessary (i.e. if the penalty is too strong or not strong enough), we can fiddle with it later.
###Code
learningRate = 1
###Output
_____no_output_____
###Markdown
Now let's try calling the new regularized functions with theta initialized to 0 to make sure the computations are working.
###Code
costReg(theta2, X2, y2, learningRate)

gradientReg(theta2, X2, y2, learningRate)
###Output
_____no_output_____
###Markdown
Now we can use the same optimization function as in part 1 to compute the optimized result.
###Code
result2 = opt.fmin_tnc(func=costReg, x0=theta2, fprime=gradientReg, args=(X2, y2, learningRate))
result2
###Output
_____no_output_____
###Markdown
Finally, we can use the prediction function from part 1 to see how accurate our solution is on the training data.
###Code
theta_min = np.matrix(result2[0])
predictions = predict(theta_min, X2)
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y2)]
accuracy = sum(map(int, correct)) / len(correct) * 100
print ('accuracy = {0:.0f}%'.format(accuracy))
###Output
accuracy = 77%
###Markdown
Although we implemented these algorithms ourselves, it is worth noting that we could also use a high-level Python library like scikit-learn to solve this problem.
###Code
from sklearn import linear_model  # use scikit-learn's linear model module
model = linear_model.LogisticRegression(penalty='l2', C=1.0)
model.fit(X2, y2.ravel())

model.score(X2, y2)
###Output
_____no_output_____
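###Markdown
A small usage note: the fitted scikit-learn model can be applied to new chips directly. Here it is called on the first five training rows, purely for illustration:
###Code
model.predict(X2[:5])
###Output
_____no_output_____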
notebooks/05_Displaying_data_with_Python.ipynb
###Markdown
Displaying data with Python

Haskell is a great language for complex processing, but it lacks the visualization libraries that R and Python users have come to enjoy. This tutorial shows how to integrate the two when doing interactive analysis.
###Code
:extension DeriveGeneric
:extension FlexibleContexts
:extension OverloadedStrings
:extension GeneralizedNewtypeDeriving
:extension FlexibleInstances
:extension MultiParamTypeClasses

import GHC.Generics (Generic)

import Spark.Core.Dataset
import Spark.Core.Context
import Spark.Core.Functions
import Spark.Core.Column
import Spark.Core.Types
import Spark.Core.Row
import Spark.Core.ColumnFunctions

conf = defaultConf {
    confEndPoint = "http://10.0.2.2",
    confRequestedSessionName = "session05_python" }

createSparkSessionDef conf

data MyData = MyData {
    aBigId :: Int,
    importantData :: Int } deriving (Show, Eq, Generic, Ord)

instance SQLTypeable MyData
instance FromSQL MyData
instance ToSQL MyData

let collection = [MyData 1 2, MyData 3 2, MyData 5 4]
let ds = dataset collection @@ "dataset"
let c = collect (asCol ds) @@ "collected_data"

_ <- exec1Def c

from kraps import *
ks = connectSession("session05_python", address='localhost')
ks

ks.pandas("collected_data")

print(ks.url('collected_data'))
###Output
_____no_output_____
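###Markdown
The payoff of handing the collected data to Python is plotting it. A minimal sketch, assuming `ks.pandas` returns an ordinary pandas DataFrame carrying the `aBigId` and `importantData` fields defined above:
###Code
df = ks.pandas("collected_data")
df.plot(x='aBigId', y='importantData', kind='bar')
###Output
_____no_output_____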
_notebooks/2022-02-21-Pet-Breeds.ipynb
###Markdown "A pet breed classifier"> "If a team did it at a hackathon, surely I can too... right?"- comments: true- categories: [vision] IntroI remember volunteering at a hackathon and sitting in the award ceremony when I saw a group win in the "fun" category for creating a pet breed classifier. You give it an image and it'll tell you what breed it thinks it is and how confident it is. It was "fun" because you could override the threshold and allow images that aren't cats and dogs to be classified as a dog or cat breed. This blog post will show you how you can train your own pet breed classifer and how it isn't that hard nor time consuming to do so. You don't need a beefy computer either since you can use Colab's GPUs.![Pet breed classifier meme](https://cdn.discordapp.com/attachments/880195928731041882/945358008014237716/y6oguouim9k41.jpg) Training our own pet breed classifierFirst, we'll download the Pet dataset and see what we're given: ###Code path = untar_data(URLs.PETS) Path.BASE_PATH = path path.ls() (path/'images').ls() ###Output _____no_output_____ ###Markdown In this dataset, there are two subfolders: `images` and `annotations`. `images` contains the images of the pet breeds (and their labels) while `annotations` contains the location of the pet in each image if you wanted to do localization. The images are structured like so: the name of the pet breed with spaces turned into underscores, followed by a number. The name is capitalized if the pet is a cat. We can get the name of the pet breed by using regular expressions: ###Code fname = (path/'images').ls()[0] fname, fname.name # # () = extract what's in the parentheses -> .+ # .+ = any character appearing one or more times # _ = followed by an underscore # \d+ = followed by any digit appearing one or more times # .jpg$ = with a .jpg extension at the end of the string re.findall(r'(.+)_\d+.jpg$', fname.name) ###Output _____no_output_____ ###Markdown This time, we'll be using a `DataBlock` to create our `DataLoaders` ###Code pets = DataBlock( blocks = (ImageBlock, CategoryBlock), get_items = partial(get_image_files, folders = 'images'), splitter = RandomSplitter(), get_y = using_attr(RegexLabeller(r'(.+)_\d+.jpg$'), 'name'), item_tfms = Resize(460), batch_tfms = aug_transforms(size = 224, min_scale = 0.75)) dls = pets.dataloaders(path) ###Output _____no_output_____ ###Markdown In our pets `DataBlock`, we give it the following parameters:- `blocks = (ImageBlock, CategoryBlock)`: our independent variable is an image and our dependent variable is a category. - `get_items = partial(get_image_files, folders = 'images')`: we are getting our images recursively in the `images` folder. If you've used functional programming before, `partial` is like currying; we give a function some of its parameters and it returns another function that accepts the rest of its parameters, except `partial` allows us to specify which parameters we want to give.- `splitter = RandomSplitter()`: randomly splits our data into training and validation sets with a default `80:20` split. We can also specify a seed if we want to test how tuning our hyperparameters affects the final accuracy. The final two parameters are part of "presizing":- `item_tfms = Resize(460)`: picks a random area of an image (using its max width or height, whichever is smallest) and resizes it to 460x460. 
This process happens for all images in the dataset.
- `batch_tfms = aug_transforms(size = 224, min_scale = 0.75)`: take a random portion of the image which is at least 75% of it and resize it to 224x224. This process happens for all images in a batch (like the batch we get when we call `dls.one_batch()`).

We first resize an image to a much larger size than the actual size used for training so that we can avoid the data destruction done by data augmentation. The larger size allows transformation of the data without creating empty areas.

![Presizing (image taken from fastbook)](https://github.com/fastai/fastbook/blob/master/images/att_00060.png?raw=1)

We can check if our `DataLoaders` was created successfully by using the `.show_batch()` feature:
###Code
dls.show_batch(nrows = 1, ncols = 4)
###Output
_____no_output_____
###Markdown
We can then do some Googling to make sure our images are labelled correctly. Fastai also allows us to debug our `DataBlock` in case we make an error. It attempts to create a batch from the source:
###Code
#collapse-output
pets.summary(path)
###Output
Setting-up type transforms pipelines
Collecting items from /root/.fastai/data/oxford-iiit-pet
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}

Building one sample
Pipeline: PILBase.create
starting from /root/.fastai/data/oxford-iiit-pet/images/great_pyrenees_179.jpg
applying PILBase.create gives PILImage mode=RGB size=500x334
Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
starting from /root/.fastai/data/oxford-iiit-pet/images/great_pyrenees_179.jpg
applying partial gives great_pyrenees
applying Categorize -- {'vocab': None, 'sort': True, 'add_na': False} gives TensorCategory(21)

Final sample: (PILImage mode=RGB size=500x334, TensorCategory(21))

Collecting items from /root/.fastai/data/oxford-iiit-pet
Found 7390 items
2 datasets of sizes 5912,1478
Setting up Pipeline: PILBase.create
Setting up Pipeline: partial -> Categorize -- {'vocab': None, 'sort': True, 'add_na': False}
Setting up after_item: Pipeline: Resize -- {'size': (460, 460), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor
Setting up before_batch: Pipeline:
Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Flip -- {'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} -> RandomResizedCropGPU -- {'size': (224, 224), 'min_scale': 0.75, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'max_scale': 1.0, 'p': 1.0} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False}

Building one batch
Applying item_tfms to the first sample:
Pipeline: Resize -- {'size': (460, 460), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} -> ToTensor
starting from (PILImage mode=RGB size=500x334, TensorCategory(21))
applying Resize -- {'size': (460, 460), 'method': 'crop', 'pad_mode': 'reflection', 'resamples': (2, 0), 'p': 1.0} gives (PILImage mode=RGB size=460x460, TensorCategory(21))
applying ToTensor gives (TensorImage of size 3x460x460, TensorCategory(21))

Adding the next 3 samples

No before_batch transform to apply

Collating items in a batch

Applying batch_tfms to the batch built
Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} -> Flip -- {'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} -> RandomResizedCropGPU -- {'size': (224, 224), 'min_scale': 0.75, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'max_scale': 1.0, 'p': 1.0} -> Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False}
starting from (TensorImage of size 4x3x460x460, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} gives (TensorImage of size 4x3x460x460, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying Flip -- {'size': None, 'mode': 'bilinear', 'pad_mode': 'reflection', 'mode_mask': 'nearest', 'align_corners': True, 'p': 0.5} gives (TensorImage of size 4x3x460x460, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying RandomResizedCropGPU -- {'size': (224, 224), 'min_scale': 0.75, 'ratio': (1, 1), 'mode': 'bilinear', 'valid_scale': 1.0, 'max_scale': 1.0, 'p': 1.0} gives (TensorImage of size 4x3x224x224, TensorCategory([21, 30, 15, 2], device='cuda:0'))
applying Brightness -- {'max_lighting': 0.2, 'p': 1.0, 'draw': None, 'batch': False} gives (TensorImage of size 4x3x224x224, TensorCategory([21, 30, 15, 2], device='cuda:0'))
###Markdown
Now, let's get to training our model. This time, we'll be fine tuning a pretrained model. This process is called transfer learning, where we take a pretrained model and retrain it on our data so that it can perform well for our task. We randomize the head (last layer) of our model, freeze the parameters of the earlier layers and train our model for one epoch. Then, we unfreeze the model and update the later layers of the model with a higher learning rate than the earlier layers. The pretrained model we will be using is `resnet34`, which was trained on the ImageNet dataset with 34 layers:
###Code
learner = cnn_learner(dls, resnet34, metrics = accuracy)

lrs = learner.lr_find()

learner.fit_one_cycle(3, lr_max = lrs.valley)

learner.unfreeze()

lrs = learner.lr_find()

learner.fit_one_cycle(6, lr_max = lrs.valley)
###Output
_____no_output_____
###Markdown
When we use a pretrained model, fastai automatically freezes the early layers. We then train the head (last layer) of the model for 3 epochs so that it can get a sense of our objective. Then, we unfreeze the model and train all the layers for 6 more epochs. After training for a total of 9 epochs, we now have a model that can predict pet breeds accurately 94% of the time. We can use fastai's confusion matrix to see where our model is having problems:
###Code
interp = ClassificationInterpretation.from_learner(learner)
interp.plot_confusion_matrix(figsize = (12, 12), dpi = 60)

interp.most_confused(5)
###Output
_____no_output_____
###Markdown
Using the `.most_confused` feature, it seems like most of the errors come from pet breeds that look very similar. We should be careful, however, that we aren't overfitting on our validation set by repeatedly tuning hyperparameters against it. We can see that our training loss always goes down, while our validation loss fluctuates, sometimes falling and sometimes rising.

And that's all there is to training a pet breed classifier. You could improve the accuracy by exploring deeper models like `resnet50`, which has 50 layers; training for more epochs (whether before unfreezing, after, or both); or using discriminative learning rates (giving lower learning rates to earlier layers by passing `slice(lr1, lr2)` as the `lr_max` keyword argument of `fit_one_cycle`), as sketched below.
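A minimal sketch of that last idea (hypothetical learning-rate values; assumes the `learner` trained above). `slice(1e-6, 1e-4)` gives the earliest layers steps near 1e-6 and the head steps near 1e-4, interpolating in between:
###Code
learner.unfreeze()
learner.fit_one_cycle(6, lr_max = slice(1e-6, 1e-4))
###Output
_____no_output_____
###Markdown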
Using our own pet breed classifier

First, let's save the model using `.export()`:
###Code
learner.export()

#hide
from google.colab import files

#hide
files.download('export.pkl')
###Output
_____no_output_____
###Markdown
Then, let's load the `.pkl` file:
###Code
learn = load_learner('export.pkl')
###Output
_____no_output_____
###Markdown
Create some basic UI:
###Code
def pretty(name: str) -> str:
    return name.replace('_', ' ').lower()

def classify(a):
    if not btn_upload.data:
        lbl_pred.value = 'Please upload an image.'
        return
    img = PILImage.create(btn_upload.data[-1])
    pred, pred_idx, probs = learn.predict(img)
    out_pl.clear_output()
    with out_pl:
        display(img.to_thumb(128, 128))
    lbl_pred.value = f'Looks like a {pretty(pred)} to me. I\'m {probs[pred_idx] * 100:.02f}% confident!'

btn_upload = widgets.FileUpload()
lbl_pred = widgets.Label()
out_pl = widgets.Output()
btn_run = widgets.Button(description = 'Classify')
btn_run.on_click(classify)

VBox([
    widgets.Label('Upload a pet!'),
    btn_upload,
    btn_run,
    out_pl,
    lbl_pred])
###Output
_____no_output_____
###Markdown
And there we have it! You can make it prettier and go win a hackathon.

However, a bit of a downside with deep learning is that it can only predict what it has been trained on. So, drawings of pets, night-time images of pets, and breeds that weren't included in the training set won't be accurately labelled.

We could solve the last case by turning this problem into a multi-label classification problem. Then, if we aren't confident that we have one of the known breeds, we can just say we don't know this breed (a small sketch of this thresholding idea appears at the end of this notebook).

Siamese pair

When I was watching the fastai lectures, I heard Jeremy talking about "siamese pairs" where you give the model two images and it will tell you if they are of the same breed. Now that we have a model, let's make it!
###Code
def pair(a):
    if not up1.data or not up2.data:
        lbl.value = 'Please upload images.'
        return
    im1 = PILImage.create(up1.data[-1])
    im2 = PILImage.create(up2.data[-1])
    pred1, x, _ = learn.predict(im1)
    pred2, y, _ = learn.predict(im2)
    out1.clear_output()
    out2.clear_output()
    with out1:
        display(im1.to_thumb(128, 128))
    with out2:
        display(im2.to_thumb(128, 128))
    if x == y:
        lbl.value = f'Wow, they\'re both {pretty(pred1)}(s)!'
    else:
        lbl.value = f'The first one seems to be {pretty(pred1)} while the second \
                      one is a(n) {pretty(pred2)}. I\'m not an expert, but they \
                      seem to be of different breeds, chief.'

up1 = widgets.FileUpload()
up2 = widgets.FileUpload()
lbl = widgets.Label()
out1 = widgets.Output()
out2 = widgets.Output()
run = widgets.Button(description = 'Classify')
run.on_click(pair)

VBox([
    widgets.Label("Siamese Pairs"),
    HBox([up1, up2]),
    run,
    HBox([out1, out2]),
    lbl
])
###Output
_____no_output_____
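###Markdown
Following up on the "unknown breed" idea mentioned above the Siamese section: a minimal sketch with a hypothetical confidence cut-off (the 0.8 is illustrative and would need tuning on a validation set; it reuses `learn`, `pretty` and `btn_upload` from the cells above and assumes an image was uploaded):
###Code
pred, pred_idx, probs = learn.predict(PILImage.create(btn_upload.data[-1]))
if probs[pred_idx] < 0.8:  # hypothetical threshold
    print("I'm not confident this is one of the breeds I know.")
else:
    print(f"Looks like a {pretty(pred)}.")
###Output
_____no_output_____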
notebooks/Banana Classification.ipynb
###Markdown
SGLD from here onwards
###Code
import gc

import numpy as np
import pandas as pd
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from tqdm import tqdm

# DNN, log_posterior, train_data, train, device, evaluate, num_exp, n_epochs,
# data_batch_size, results_path and plt_contour come from the project setup.

@torch.enable_grad()
def gradient(x, y, params):
    params_ = params.clone().requires_grad_(True)
    loss = log_posterior(x, y, params_)
    grad, = torch.autograd.grad(loss, params_)
    return loss.detach().cpu().numpy(), grad

lr = 2e-3

def step_size(n):
    return lr / (1 + n)**0.55

def sgld(n_epochs, data_batch_size):
    dataloader_train = DataLoader(train_data, shuffle=True, batch_size=data_batch_size, num_workers=0)
    model = DNN(2, 2).to(device)
    params = torch.cat([param.flatten() for param in model.parameters()]).detach()
    losses = []
    step = 0
    for _ in range(n_epochs):
        epoch_losses = []
        for x, y in tqdm(iter(dataloader_train)):
            x = x.to(device)
            y = y.to(device)
            eps = step_size(step)
            loss, grad = gradient(x, y, params)
            # burn-in: plain gradient steps; the Langevin noise is only injected
            # while collecting posterior samples below
            params = params + 0.5 * eps * grad  #+ np.sqrt(eps) * torch.randn_like(params)
            step += 1
            epoch_losses.append(loss)
        losses.append(epoch_losses)

    param_samples = []
    iterator = iter(dataloader_train)
    for _ in range(100):
        # draw a fresh minibatch for every sample, restarting the loader if it runs out
        try:
            x, y = next(iterator)
        except StopIteration:
            iterator = iter(dataloader_train)
            x, y = next(iterator)
        x = x.to(device)
        y = y.to(device)
        eps = step_size(step)
        loss, grad = gradient(x, y, params)
        params = params + 0.5 * eps * grad + np.sqrt(eps) * torch.randn_like(params)
        param_samples.append(params)
        step += 1

    param_samples = torch.stack(param_samples)
    losses = np.array(losses)
    return param_samples, losses

accuracies, eces, logps = [], [], []
accuracies_train, eces_train, logps_train = [], [], []
params_all = None
for i in range(num_exp):
    param_samples, losses = sgld(n_epochs, data_batch_size)
    if params_all is None:
        params_all = param_samples.unsqueeze(0).cpu().numpy()
    else:
        params_all = np.concatenate((params_all, param_samples.unsqueeze(0).cpu().numpy()))
    accuracy, ece, logp = evaluate(param_samples)
    accuracy_train, ece_train, logp_train = evaluate(param_samples, train)
    accuracies.append(accuracy)
    eces.append(ece)
    logps.append(logp)
    accuracies_train.append(accuracy_train)
    eces_train.append(ece_train)
    logps_train.append(logp_train)
    del param_samples
    gc.collect()
    torch.cuda.empty_cache()

accuracies = np.array(accuracies)
eces = np.array(eces)
logps = np.array(logps)
accuracies_train = np.array(accuracies_train)
eces_train = np.array(eces_train)
logps_train = np.array(logps_train)

SGLD_df = pd.DataFrame({"Accuracy": accuracies, "ECE": eces, "log predictive": logps})
SGLD_train_df = pd.DataFrame({"Accuracy": accuracies_train, "ECE": eces_train, "log predictive": logps_train})
with open(f"{results_path}sgld_test.txt", "w") as f:
    f.write(str(SGLD_df.describe()))
with open(f"{results_path}sgld_train.txt", "w") as f:
    f.write(str(SGLD_train_df.describe()))

plt_contour(params_all, "banana_sgld", "SGLD")

def sgd(n_epochs, data_batch_size):
    lr = 1e-1
    dataloader_train = DataLoader(train_data, shuffle=True, batch_size=data_batch_size, num_workers=0)
    model = DNN(2, 2).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    losses = []
    for i in range(n_epochs):
        for x, y in tqdm(iter(dataloader_train)):
            x = x.to(device)
            y = y.to(device)
            optimizer.zero_grad()
            out = model(x)
            l = F.cross_entropy(out, y, reduction="mean")
            l.backward()
            losses.append(l.detach().cpu().numpy())
            optimizer.step()
    losses = np.array(losses)
    return model, losses

accuracies, eces, logps = [], [], []
accuracies_train, eces_train, logps_train = [], [], []
params_all = None
for i in range(num_exp):
    model, losses = sgd(n_epochs, data_batch_size)
    params = torch.cat([param.flatten() for param in model.parameters()]).detach()
    params = params.view(1, -1)
    if params_all is None:
        params_all = params.unsqueeze(0).cpu().numpy()
    else:
        params_all = np.concatenate((params_all, params.unsqueeze(0).cpu().numpy()))
    accuracy, ece, logp = evaluate(params)
    accuracy_train, ece_train, logp_train = evaluate(params, train)
    accuracies.append(accuracy)
    eces.append(ece)
    logps.append(logp)
    accuracies_train.append(accuracy_train)
    eces_train.append(ece_train)
    logps_train.append(logp_train)
    del params
    gc.collect()
    torch.cuda.empty_cache()

accuracies = np.array(accuracies)
eces = np.array(eces)
logps = np.array(logps)
accuracies_train = np.array(accuracies_train)
eces_train = np.array(eces_train)
logps_train = np.array(logps_train)

SGD_df = pd.DataFrame({"Accuracy": accuracies, "ECE": eces, "log predictive": logps})
SGD_train_df = pd.DataFrame({"Accuracy": accuracies_train, "ECE": eces_train, "log predictive": logps_train})
with open(f"{results_path}sgd_test.txt", "w") as f:
    f.write(str(SGD_df.describe()))
with open(f"{results_path}sgd_train.txt", "w") as f:
    f.write(str(SGD_train_df.describe()))

plt_contour(params_all, "banana_sgd", "SGD")
###Output
_____no_output_____
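###Markdown
A note on the `sgld` sampler above: each sampling step implements the Langevin update $\theta_{t+1} = \theta_t + \frac{\varepsilon_t}{2}\nabla_\theta \log p(\theta_t \mid \mathcal{D}) + \sqrt{\varepsilon_t}\,\eta_t$ with $\eta_t \sim \mathcal{N}(0, I)$ and the decaying step size $\varepsilon_t = \mathrm{lr}/(1+t)^{0.55}$ from `step_size`. During the burn-in epochs the code deliberately drops the noise term, so those steps are plain gradient ascent on the log posterior; noise is only injected while the 100 posterior samples are collected.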
scripts/shift_hh_quartiles_for_UBI_TM_interimyrs.ipynb
###Markdown
TAZ File
###Code
import numpy as np
import pandas as pd

TAZ_DATA_PATH = "C:/Users/etheocharides/Box/Modeling and Surveys/Urban Modeling/Bay Area UrbanSim/PBA50/Final Blueprint runs/Final Blueprint (s24)/BAUS v2.25 - FINAL VERSION/"
TAZ_DATA_FILE = "run182_taz_summaries_2040.csv"

taz_data = pd.read_csv(TAZ_DATA_PATH+TAZ_DATA_FILE)

# number of households to move, by interim year:
# 2025: 94174
# 2030: 102211
# 2035: 112964
# 2040: 120786
# 2045: 126888
# 2050: 132529
NUM_HH_TO_MOVE = 120786

print("number of households is {}".format(taz_data.TOTHH.sum()))
print("number of q1 households is {}".format(taz_data.HHINCQ1.sum()))
print("number of q2 households is {}".format(taz_data.HHINCQ2.sum()))

# randomly select TAZs to shift HHs in (Q1 -> Q2), loop so that we never end up with negative Q1 HHs in a TAZ
for i in range(0, NUM_HH_TO_MOVE):
    taz_i = np.random.choice(taz_data.loc[taz_data.HHINCQ1 > 0].TAZ)
    taz_data.loc[taz_data.TAZ == taz_i, 'HHINCQ1'] = taz_data.loc[taz_data.TAZ == taz_i].HHINCQ1 - 1
    taz_data.loc[taz_data.TAZ == taz_i, 'HHINCQ2'] = taz_data.loc[taz_data.TAZ == taz_i].HHINCQ2 + 1

print("number of households is {}".format(taz_data.TOTHH.sum()))
print("number of q1 households is {}".format(taz_data.HHINCQ1.sum()))
print("number of q2 households is {}".format(taz_data.HHINCQ2.sum()))

# save a copy of the data in the output folder, manually make it the master file as needed
# don't want to risk overwriting the master file (run182_taz_summaries_2040_UBI.csv)
NEW_TAZ_DATA_FILE = "run182_taz_summaries_2040_UBI_output.csv"
taz_data.to_csv(TAZ_DATA_PATH+NEW_TAZ_DATA_FILE)
###Output
_____no_output_____
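###Markdown
An optional, assumption-laden sketch of a vectorized version of the shift above. It is not identical in distribution to the loop (it samples donor TAZs weighted by how many Q1 households they hold, while the loop picks uniformly among TAZs with any Q1 households left), but it respects the same no-negative-Q1 constraint in a single pass:
###Code
# one pool entry per movable Q1 household, so no TAZ can be drawn below zero
donors = taz_data.loc[taz_data.HHINCQ1 > 0, ['TAZ', 'HHINCQ1']]
pool = np.repeat(donors.TAZ.values, donors.HHINCQ1.values.astype(int))
moved = np.random.choice(pool, size=NUM_HH_TO_MOVE, replace=False)
counts = pd.Series(moved).value_counts()

taz_data = taz_data.set_index('TAZ')
taz_data.loc[counts.index, 'HHINCQ1'] -= counts.values
taz_data.loc[counts.index, 'HHINCQ2'] += counts.values
taz_data = taz_data.reset_index()
###Output
_____no_output_____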
nbs/03_StaticGenerator.ipynb
###Markdown
Web functions

> API details.
###Code
#hide
from nbdev.showdoc import *

import os
import sys

cPath = os.path.dirname(os.getcwd())
cPath

sys.path.insert(0,cPath)
#export
import json
import shutil

from jinja2 import Template, Environment, FileSystemLoader

from refreshment.School import StudySystem
from refreshment.Program import Program, Subject, Record, Lesson
from refreshment.munge import dateGrid, processGuide, unassignedForSubject, printAllUnassigned
#hide
guide = StudySystem("./data/programs.json")
for x in guide.programs:
    print(x.name)
    for y in x.subjects:
        print("\"" + x.name + "\",\"" + y.name + "\"")

guide.loadDirectory()
for x in guide.programs:
    print(x.name)
    for y in x.subjects:
        print("\"" + x.name + "\",\"" + y.name + "\"")
#hide
cath = guide.programs[0]
foo = cath.subjects
sorted(foo, key=lambda x: x.name)[:2]

reading = [x for x in cath.subjects if x.name == "Reading"][0]
for less in reading.lessons:
    print(less.fileName)
#hide
template = Template('Hello {{ name }}!')
template.render(name=cath.name)
#hide
guide = StudySystem("./data/programs.json")
for x in guide.programs:
    print(x.name)
    for y in x.subjects:
        print("\"" + x.name + "\",\"" + y.name + "\"")
#export
class Render:
    def __init__(self, program, guide, dir="../../web", template="../templates"):
        self.guide = guide
        self.program = [x for x in guide.programs if x.name == program][0]
        print("loaded program " + self.program.name)
        self.outputdir = os.path.join(dir, self.program.name)
        self.template = template
        self.file_loader = FileSystemLoader(self.template)
        self.env = Environment(loader=self.file_loader)
        self.resourcesName = "resources"
        self.resourcesStart = os.path.join(".", self.resourcesName)
        self.resourcesEnd = os.path.join(self.outputdir, self.resourcesName)

    def basePath(self):
        return self.outputdir

    def renderLesson(self, sub, lesson):
        template = self.env.get_template("lesson.html")
        resDir = "../../.."
        output = template.render(item=sub, resDir=resDir, less=lesson, program=self.program,
                                 styleDir=os.path.join("..", self.resourcesName))
        subDir = os.path.join(self.outputdir, sub.name)
        if not os.path.exists(subDir):
            os.mkdir(subDir)
        f = open(os.path.join(subDir, "vid_" + str(lesson.key) + ".html"), "w")
        f.write(output)

    def renderSubject(self, sub):
        template = self.env.get_template("subject.html")
        videos = [x for x in sub.lessons if x.fileName.endswith(".mp4")]
        resources = [x for x in sub.lessons if not x.fileName.endswith(".mp4")]
        for x in sub.lessons:
            self.renderLesson(sub, x)
        resDir = "../../.."
        #print(sub.sequences)
        output = template.render(item=sub, videos=videos, res=resources, resDir=resDir,
                                 program=self.program,
                                 grid=reversed(dateGrid(self.program, subject=sub.name)),
                                 styleDir=os.path.join("..", self.resourcesName))
        subDir = os.path.join(self.outputdir, sub.name)
        if not os.path.exists(subDir):
            os.mkdir(subDir)
        f = open(os.path.join(subDir, "index.html"), "w")
        f.write(output)

    def renderCalendar(self):
        grid = reversed(dateGrid(self.program))
        if not os.path.exists(self.outputdir):
            os.mkdir(self.outputdir)
        template = self.env.get_template("calendar.html")
        output = template.render(program=self.program, grid=grid,
                                 styleDir=os.path.join(".", self.resourcesName))
        f = open(os.path.join(self.outputdir, "calendar.html"), "w")
        f.write(output)
        f.close()

    def renderSchool(self):
        foo = self.program.subjects
        seq = sorted(foo, key=lambda x: x.name)
        if not os.path.exists(self.outputdir):
            os.mkdir(self.outputdir)
        for x in seq:
            self.renderSubject(x)
        template = self.env.get_template("school.html")
        output = template.render(program=self.program, seq=seq,
                                 styleDir=os.path.join(".", self.resourcesName))
        f = open(os.path.join(self.outputdir, "index.html"), "w")
        f.write(output)
        f.close()
        self.addResources()

    def addResources(self):
        if not os.path.exists(self.resourcesEnd):
            os.mkdir(self.resourcesEnd)
        src_files = os.listdir(self.resourcesStart)
        for file_name in src_files:
            full_file_name = os.path.join(self.resourcesStart, file_name)
            if os.path.isfile(full_file_name):
                shutil.copy(full_file_name, os.path.join(self.resourcesEnd, file_name))
        with open(os.path.join(self.resourcesEnd, "lemonade.json"), "w") as dataFile:
            json.dump(self.program.toDict(), dataFile, indent=4, sort_keys=True)

    def addFiles(self):
        for sub in self.program.subjects:
            for lesson in sub.lessons:
                source = os.path.join(self.guide.origin, self.program.name, sub.name, lesson.fileName)
                dest = os.path.join(self.outputdir, sub.name, lesson.fileName)
                if not os.path.isfile(dest):
                    print("found " + dest)
                    shutil.copy(source, dest)
#hide
guide.loadDirectory()
processGuide(guide)
cath = guide.programs[0]
printAllUnassigned(prog=cath)
#export
def makeSite(school, programName):
    baseRender = Render(programName, school)
    baseRender.renderSchool()
    school.save()
    baseRender.addFiles()
    baseRender.addResources()
    baseRender.renderCalendar()
#hide
makeSite(school=guide, programName="Cathedral")

#import json
#print(json.dumps(guide.toDict(), sort_keys=True, indent=4))
###Output
_____no_output_____
05-Pandas-with-Time-Series/.ipynb_checkpoints/Rolling and Expanding-checkpoint.ipynb
###Markdown
___
___
*Copyright Pierian Data 2017*
*For more information, visit us at www.pieriandata.com*

Rolling and Expanding

A very common process with time series is to create data based off of a rolling mean. Let's show you how to do this easily with pandas!
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

# Best way to read in data with time series index!
df = pd.read_csv('time_data/walmart_stock.csv',index_col='Date',parse_dates=True)
df.head()

df['Open'].plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
Now let's add in a rolling mean! The rolling method produces one entry per row, where each entry is an aggregate (here, the mean) over the preceding window of rows.
###Code
# 7 day rolling mean
df.rolling(7).mean().head(20)

df['Open'].plot()
df.rolling(window=30).mean()['Close'].plot()
###Output
_____no_output_____
###Markdown
The easiest way to add a legend is to make this rolling value a new column; then pandas does it automatically!
###Code
df['Close: 30 Day Mean'] = df['Close'].rolling(window=30).mean()
df[['Close','Close: 30 Day Mean']].plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
expanding

Now what if you want to take into account everything from the start of the time series as a rolling value? For instance, not just a period of 7 days or a monthly rolling average, but everything since the beginning of the time series, continuously:
###Code
# Optionally specify a minimum number of periods
df['Close'].expanding(min_periods=1).mean().plot(figsize=(16,6))
###Output
_____no_output_____
###Markdown
Bollinger Bands

We will talk a lot more about financial analysis plots and technical indicators, but here is one worth mentioning!

More info : http://www.investopedia.com/terms/b/bollingerbands.asp

*Developed by John Bollinger, Bollinger Bands® are volatility bands placed above and below a moving average. Volatility is based on the standard deviation, which changes as volatility increases and decreases. The bands automatically widen when volatility increases and narrow when volatility decreases. This dynamic nature of Bollinger Bands also means they can be used on different securities with the standard settings. For signals, Bollinger Bands can be used to identify Tops and Bottoms or to determine the strength of the trend.*

*Bollinger Bands reflect direction with the 20-period SMA and volatility with the upper/lower bands. As such, they can be used to determine if prices are relatively high or low. According to Bollinger, the bands should contain 88-89% of price action, which makes a move outside the bands significant. Technically, prices are relatively high when above the upper band and relatively low when below the lower band. However, relatively high should not be regarded as bearish or as a sell signal. Likewise, relatively low should not be considered bullish or as a buy signal. Prices are high or low for a reason. As with other indicators, Bollinger Bands are not meant to be used as a stand alone tool.*
###Code
# a 20-period window, matching the 20-period SMA described above
df['Close: 20 Day Mean'] = df['Close'].rolling(window=20).mean()
df['Upper'] = df['Close: 20 Day Mean'] + 2*df['Close'].rolling(window=20).std()
df['Lower'] = df['Close: 20 Day Mean'] - 2*df['Close'].rolling(window=20).std()
df[['Close','Close: 20 Day Mean','Upper','Lower']].plot(figsize=(16,6))
###Output
_____no_output_____
Note_books/Explore_Models/.ipynb_checkpoints/betting_strategies_v4_spread-checkpoint.ipynb
###Markdown
Part 2 Evaluate betting ... "tune" on x_16 ... test again on x_17
###Code
y_pred = model.predict(x_test_sc)  #this is <= 20152016 data
acc_pre = accuracy_score(y_test, y_pred)  #this x_test_sc needs to be defined already above during the training session on <=2015

def make_bet_df(model=rfc, x = x_16, y = y_16, Y = Y_16):
    dic = {}  #x_16, lgr 0.55 ... rfc 0.58 on x_16 and 0.6 on x_17 (later)
    dic['model_name'] = str(model)[0:10]
    dic['actual_res'] = [ t[0] for t in list(y)]
    dic['model_pred'] = list(model.predict(x))
    dic['model_conf_1'] = [round(t,4) for t in model.predict_proba(x)[0:, 1]]
    dic['model_conf_0'] = [round(t,4) for t in model.predict_proba(x)[0:, 0]]
    dic['home_odds'] = list(Y['home_odds'])
    dic['away_odds'] = list(Y['away_odds'])
    df = pd.DataFrame(dic)

    df['home_impl_proba'] = v_impl_proba(df['home_odds'])
    df['away_impl_proba'] = v_impl_proba(df['away_odds'])
    df['conf_1_sub_home_impl'] = df['model_conf_1'] - df['home_impl_proba']
    df['conf_0_sub_away_impl'] = df['model_conf_0'] - df['away_impl_proba']
    df['pre_acc_sub_home_impl'] = (acc_pre - df['home_impl_proba']).copy()
    df['pre_acc_sub_away_impl'] = (acc_pre - df['away_impl_proba']).copy()
    df['fav_pred'] = v_fav_pred(df['home_odds'])
    return df

df_bet_info = make_bet_df(model=rfc, x = x_16, y = y_16, Y = Y_16)  ##this is for model = rfc and season = 20162017; rfc did well on both seasons
df_bet_info

#simpler approach?
y_pred = model.predict(x_test_sc)  #this is <= 20152016 data
acc_pre = accuracy_score(y_test, y_pred)  #this x_test_sc needs to be defined already above during the training session on <=2015

##bet the spread mofo!
#model_conf_min >= 0 is how far from 0.5 you want conf_1 or conf_0 to be (if one is close to 0.5, so is the other)
##min_pre_acc_above_break_even = 0.1 means you ask pre_acc >= break_even_prob + 0.1; pre_acc is the model's accuracy on pretraining
##min_model_conf_above_break_even = 0.1 means you ask predict_proba >= break_even_prob + 0.1; predict_proba is the current model's
#estimate of how likely it thinks the answer is 1 (or 0 if it is predicting 0)
def make_pay_off_df(df_bet_info=df_bet_info, model_conf_min = 0,
                    min_pre_acc_above_break_even = 0,
                    min_model_conf_above_break_even = 0,
                    bet_fav = True, bet_und = True):
    ##return df with columns for each season in question:
    #type_bet, total_invested, total_earned, profit, ROI
    ##rows should be: avg for the season in question; let's do x_16 for now
    ###filters for df ..
    df = df_bet_info.copy()  ##keep every column; we only filter rows below

    ##STEP 1
    ##decide on the bet_fav / bet_und strategy
    HA_bet = df['model_pred'].copy()  #0 or 1
    fav_pred = df['fav_pred'].copy()  #0 or 1
    if bet_fav and not bet_und:
        filt_bet = (HA_bet == fav_pred)
    elif not bet_fav and bet_und:
        filt_bet = (HA_bet != fav_pred)
    elif not bet_fav and not bet_und:
        filt_bet = (HA_bet != HA_bet)  #all False
    else:
        filt_bet = (HA_bet == HA_bet)  ##all True, no filter
    df = df.loc[filt_bet, :].copy()  #set betting fav/und strategy

    ##STEP 2
    ##impose the restrictions on confidence levels, if any; to keep it simple we impose the same
    ##constraint whether H/A or fav/und
    ##! filtering by betting fav or und FIRST, before calculating confidence levels, is important,
    ##therefore USE df and not df_bet_info (unfiltered) below
    HA_bet = df['model_pred'].copy()  #recompute on the filtered rows so the indexes line up
    #first we define "model not confident" to mean proba is within model_conf_min of 0.5
    #(conf_1 and conf_0 will then both be close to 0.5); eg within 0.05 of 0.5 means not confident
    model_not_conf = (0.5 - model_conf_min <= df['model_conf_1']) & (df['model_conf_1'] <= 0.5 + model_conf_min)
    filt_model_is_conf = ~model_not_conf

    ##tricky because if you are betting on the home team you need conf_1 to be high; if betting the away team, conf_0
    conf_above_impl_proba_home_bet = (HA_bet*df['conf_1_sub_home_impl'] >= HA_bet*min_model_conf_above_break_even)  #if HA_bet = 0, reduces to 0 >= 0 (all True)
    conf_above_impl_proba_away_bet = ((1-HA_bet)*df['conf_0_sub_away_impl'] >= (1-HA_bet)*min_model_conf_above_break_even)
    filt_conf_above_impl_proba = conf_above_impl_proba_home_bet & conf_above_impl_proba_away_bet  #one of these is all True per row

    pre_acc_above_impl_proba_home_bet = (HA_bet*df['pre_acc_sub_home_impl'] >= HA_bet*min_pre_acc_above_break_even)  #if HA_bet = 0, reduces to 0 >= 0 (all True)
    pre_acc_above_impl_proba_away_bet = ((1-HA_bet)*df['pre_acc_sub_away_impl'] >= (1-HA_bet)*min_pre_acc_above_break_even)
    filt_acc_above_impl_proba = pre_acc_above_impl_proba_home_bet & pre_acc_above_impl_proba_away_bet

    #second restriction:
    filt_conf = filt_model_is_conf & filt_conf_above_impl_proba & filt_acc_above_impl_proba
    df = df.loc[filt_conf, :].copy()
    return df

df_pay = make_pay_off_df(df_bet_info=df_bet_info, model_conf_min = 0,
                         min_pre_acc_above_break_even = -10**6,
                         min_model_conf_above_break_even = -10**6,
                         bet_fav = True, bet_und = True)  ##return df with the games we would bet on

df_pay
###Output
_____no_output_____
###Markdown
can do a version where we get the y regression target at the beginning ... do all the regression stuff ... then for later cells do y = v_win(y) for the classifiers ...

regression models now ...

    y = Y['goal_diff_target'].copy()
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)
    # do standard/minmax scaling on x_train numeric columns ... better to do a pipeline?
    x_train_sc = std_scal.fit_transform(x_train)
    # fit the scaler from the train portion to the test portion
    x_test_sc = std_scal.transform(x_test)

    lr = Ridge(alpha=50000)
    rfr = RandomForestRegressor(max_depth=4, random_state=0)
    xgbr = XGBRegressor()

    # quick checks
    for model in [lr, rfr, xgbr]:
        model.fit(x_train_sc, y_train.ravel())
        y_pred = model.predict(x_test_sc)
        print(y_pred[0:5])
        y_predw = v_win(y_pred)
        y_predt = model.predict(x_train_sc)
        y_predwt = v_win(y_predt)
        y_trainw = v_win(y_train)
        y_testw = v_win(y_test)
        # same as usual win/loss
        acc = accuracy_score(y_testw, y_predw)
        f1 = f1_score(y_testw, y_predw)
        acct = accuracy_score(y_trainw, y_predwt)
        f1t = f1_score(y_trainw, y_predwt)
        print(str(model)[0:20], 'TEST: ', acc, f1, 'training : ', acct, f1t)
###Code
#Let's go with lgr, lgr2 (tuned or not) for the betting investigation
x_16 = std_scal.transform(x_16).copy()

svc2.predict_proba(x_16)

gnb.predict_proba(x_16)[0:, 0]

y_16

##make betting strategy on x_16, test it on x_17 ...
##and other seasons if needed

df_lgr.columns

df_lgr['away_impl_proba'] = v_impl_proba(df_lgr['away_odds'])
df_lgr['home_impl_proba'] = v_impl_proba(df_lgr['home_odds'])
df_lgr['conf_1_sub_home_impl'] = df_lgr['lgr_conf_1'] - df_lgr['home_impl_proba']
df_lgr['conf_0_sub_away_impl'] = df_lgr['lgr_conf_0'] - df_lgr['away_impl_proba']

df_lgr = df_lgr.loc[:, ['actual', 'lgr_pred', 'lgr_conf_1', 'home_impl_proba', 'conf_1_sub_home_impl',
                        'conf_0_sub_away_impl', 'lgr_conf_0', 'away_impl_proba', 'home_odds', 'away_odds']].copy()

df['away_odds']

df_100.describe()

df_lgr.loc[df_lgr['home_odds'] == df_lgr['away_odds'], :]  # games where home and away odds are even

np.abs(-6)

def v_fav(x):
    # favourite flag: negative (American) odds mark the favourite
    if x < 0:
        return 1
    return 0

df['profit'].sum()

cols = ['actual', 'lgr_pred', 'lgr_conf_1', 'home_impl_proba', 'conf_1_sub_home_impl', 'conf_0_sub_away_impl',
        'away_impl_proba', 'bet_HA', 'bet', 'pay_out', 'profit', 'total_profit', 'cumul_profit', 'total_bet', 'total_ROI']

df_100.shape

df_100.loc[800:900, cols]

df_100 = find_pay_off_bet_100(conf_1_at_least=0, conf_0_at_least=0, conf_thresh=0,
                              diff_home_C_1_at_least=0, diff_away_C_0_at_least=0, type_bet="Bet_100")  # or "bet_100"
# df_100 = find_pay_off_bet_100(conf_1_at_least = 0, conf_0_at_least = 0, conf_thresh = -100, diff_home_C_1_at_least =0.2, diff_away_C_0_at_least =0.1, type_bet = "bet_100" )
df_100['total_profit']

df = find_pay_off(conf_1_at_least = 0.2, conf_0_at_least = 0.2, diff_home_C_1_at_least = -10, diff_away_C_0_at_least = -10, type_bet = "get_100" )
print(accuracy_score(df['actual'], df['lgr_pred']))
df['total_profit']

#OR# #AND# ##OR##
df_lgr.describe()

accuracy_score(df_lgr['actual'], df_lgr['lgr_pred'])

# breakdown of profit by bet side and outcome; each average divides by that case's own count
home_win  = df.loc[(df['bet_HA'] == 1) & (df['pay_out'] > 0), 'profit']
home_loss = df.loc[(df['bet_HA'] == 1) & (df['pay_out'] == 0), 'profit']
away_win  = df.loc[(df['bet_HA'] == 0) & (df['pay_out'] > 0), 'profit']
away_loss = df.loc[(df['bet_HA'] == 0) & (df['pay_out'] == 0), 'profit']
for name, part in [('home bets won ', home_win), ('home bets lost', home_loss),
                   ('away bets won ', away_win), ('away bets lost', away_loss)]:
    print(name, part.sum(), part.count(), 'avg profit ', part.sum() / part.count())

print(df['profit'].sum())
print('avg profit per bet', df['profit'].sum() / df['profit'].count())

(df.loc[(df['bet_HA'] == 0) & (df['pay_out'] == 0), ['profit']] < 0).sum()

dic2 = {}
dic2['lgr_pred'] = list(lgr.predict(x_16))
dic2['gnb_pred'] = list(gnb.predict(x_16))
dic2['svc_proba'] = list(svc.predict_proba(x_16)[:, 1])

predictions = gnb.predict(x_17)
actual = y_17
confusionMatrix = confusion_matrix(actual, predictions)
display(confusionMatrix)

tn = confusionMatrix[0][0]
fp = confusionMatrix[0][1]
fn = confusionMatrix[1][0]
tp = confusionMatrix[1][1]
actualYes = fn + tp
actualNo = tn + fp
predictedYes = fp + tp
predictedNo = tn + fn

print('When home team wins, classifier predicts they will win %6.2f%% of the time' % (tp / actualYes * 100))
print('When home team loses, classifier predicts they will win %6.2f%% of the time' % (fp / actualNo * 100))
print('When home team loses, classifier predicts they will lose %6.2f%% of the time' % (tn / actualNo * 100))
print('When classifier predicts home team will win, home team actually wins %6.2f%% of the time' % (tp / predictedYes * 100))
print('When classifier predicts home team will lose, home team actually loses %6.2f%% of the time' % (tn / predictedNo * 100))

## is the betting and calcs being done correctly? 62% accuracy should easily have a profit ...
##try all models
##try regression
##think about this "confidence diff" Leung mentioned
##also ... sigh ... put in the FW% thing I guess ... [that was what I was struggling with calculating yesterday]
###Output
_____no_output_____
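###Markdown
For reference, the break-even ("implied") probability that the `conf_*_sub_*_impl` columns compare against: with American odds, a -150 favourite must win 150/(150+100) = 0.60 of the time to break even, and a +130 underdog 100/(130+100) ≈ 0.435. A minimal sketch with a hypothetical helper (the notebook's own `v_impl_proba`, used above, was defined earlier):
###Code
def impl_proba(odds):
    # American odds -> break-even win probability
    if odds < 0:
        return -odds / (-odds + 100)
    return 100 / (odds + 100)

impl_proba(-150), impl_proba(130)
###Output
_____no_output_____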
examples/grid_resilience/PowerBalance.ipynb
###Markdown Defining Generation and Load Nodes ###Code # define the power used by each of the nodes - this is where you would put your human behavior decisions # for each of these nodes power_used = np.zeros((N,Nt)) # keep track of power used by each node power_used[:,0] = 100 * np.random.randn(N) # kW - starting power # Assume each node has solar and can generate power or consume power, if generated power, the power will be positive # and if consuming power the power will be negative # TODO: we don't need to worry currently about control of solar # eventually we will want to add some controls to make the output at the substation near zero for i in range(1,Nt): for j in range(N): # TODO: correlation with time of day power_used[j,i] = power_used[j,i-1] + 1 * np.random.randn(1) # change the power by a small amount for each minute # Caity - this is where you would add your inputs # plot the profiles of each of the nodes plt.figure(figsize=(10,7)) for i in range(N): plt.plot(power_used[i,:]) plt.xlabel('Time (min)', fontsize=15) plt.ylabel('Power (kW)') # plot the total power requested by the substation plt.figure(figsize=(10,7)) plt.plot(np.sum(power_used,axis=0)) plt.xlabel('Time (min)', fontsize=15) plt.ylabel('Total Power out of Substation (kW)', fontsize=15) plt.grid() ###Output _____no_output_____ ###Markdown Visualizations ###Code # def cluster(x, y, distance, method='radius', view_plot=False): # # 3 clustering techniques: # # 1. radius - agents within a radius # # 2. box - agents within a box # # 3. nearest - nearest agents # # number of turbines # nAgents = len(x) # # reference point # xMin = np.min(x) # yMin = np.min(y) # # initialize cluster dictionary # cluster_agents = dict() # if view_plot: # plt.figure(figsize=(10, 7)) # for i in range(nAgents): # # print('Agent ', i, 'out of ', nAgents) # # plot the turbines to see the clustering # if view_plot: # plt.plot(x[i] - xMin, y[i] - yMin, 'ko', markersize=10) # cluster_agents[i] = [] # if method == 'nearest': # dist = np.zeros(nAgents) # for j in range(nAgents): # if i != j: # # =============================================== # # cluster turbines within some radius # # =============================================== # if method == 'radius': # dist = np.sqrt((x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2 ) # if dist < distance[0]: # if view_plot: # plt.plot([x[i] - xMin, x[j] - xMin], # [y[i] - yMin, y[j] - yMin], # color=plt.cm.RdYlBu(i)) # cluster_agents[i].append(j) # # =============================================== # # cluster turbines within some box boundaries # # =============================================== # elif method == 'box': # dist_east = abs(x[i] - x[j]) # dist_north = abs(y[i] - y[j]) # if dist_east < distance[0] and dist_north < distance[1]: # if view_plot: # plt.plot([x[i] - xMin, x[j]] - xMin, # [y[i] - yMin, y[j] - yMin], # color=plt.cm.RdYlBu(i)) # cluster_agents[i].append(j) # # =============================================== # # cluster nearest turbines # # =============================================== # elif method == 'nearest': # dist[j] = np.sqrt((x[i] - x[j]) ** 2 \ # + (y[i] - y[j]) ** 2) # # nearest neighbor has a different plotting strategy # if method == 'nearest': # idx = np.argsort(dist) # cluster_agents[i] = idx[0:distance[0] + 1] # for k in cluster_agents[i]: # if view_plot: # plt.plot([x[i] - xMin, x[k] - xMin], \ # [y[i] - yMin, y[k] - yMin], # color=plt.cm.RdYlBu(i)) # if view_plot: # plt.title('Clustering', fontsize=25) # plt.xlabel('x (m)', fontsize=25) # plt.ylabel('y (m)', fontsize=25) # 
plt.tick_params(which='both', labelsize=25) # plt.grid() # return cluster_agents # def plot_distributed(x, y, cluster_agents): # # number of turbines # nAgents = len(x) # # reference point # xMin = np.min(x) # yMin = np.min(y) # xMax = np.max(x) # yMax = np.max(y) # plt.plot(x-xMin,y-yMin,'ko') # for i in range(nAgents): # for j in cluster_agents[i]: # scale_factor = 1.0 # plt.plot([x[i],x[j]]-xMin,[y[i],y[j]]-yMin,color='g') # plt.xlim([xMin - 10, xMax+10]) # plt.ylim([yMin - 10, yMax + 10]) # # Show power at each nodes - for t = 0 # x = 10*np.random.rand(N) # y = 10*np.random.rand(N) # # connect to nearest turbines # cluster_turbines = cluster(x,y,np.ones(N)) # # plot the network # # plot_distributed(x,y,cluster_turbines) # plt.plot(x,y,marker_size=power_used[:,0]) ###Output _____no_output_____
tensorflow/keras_vivit.ipynb
###Markdown
tf.data pipeline
###Code
@tf.function
def preprocess(frames: tf.Tensor, label: tf.Tensor):
    frames = tf.image.convert_image_dtype(
        frames[
            ..., tf.newaxis
        ], tf.float32
    )
    label = tf.cast(label, tf.float32)
    return frames, label


def prepare_dataloader(
    videos: np.ndarray,
    labels: np.ndarray,
    loader_type: str = "train",
    batch_size: int = BATCH_SIZE
):
    dataset = tf.data.Dataset.from_tensor_slices((videos, labels))

    if loader_type == 'train':
        dataset = dataset.shuffle(BATCH_SIZE * 2)

    dataloader = (
        dataset.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(batch_size)
        .prefetch(tf.data.AUTOTUNE)
    )
    return dataloader


trainloader = prepare_dataloader(train_videos, train_labels, 'train')
validloader = prepare_dataloader(valid_videos, valid_labels, 'valid')
testloader = prepare_dataloader(test_videos, test_labels, 'test')

class TubeletEmbedding(layers.Layer):
    def __init__(self, embed_dim, patch_size, **kwargs):
        super().__init__(**kwargs)
        self.projection = layers.Conv3D(
            filters=embed_dim,
            kernel_size=patch_size,
            strides=patch_size,
            padding="VALID",
        )
        self.flatten = layers.Reshape(target_shape=(-1, embed_dim))

    def call(self, videos):
        projected_patches = self.projection(videos)
        flattened_patches = self.flatten(projected_patches)
        return flattened_patches

class PositionalEncoder(layers.Layer):
    def __init__(self, embed_dim, **kwargs):
        super().__init__(**kwargs)
        self.embed_dim = embed_dim

    def build(self, input_shape):
        _, num_tokens, _ = input_shape
        self.position_embedding = layers.Embedding(
            input_dim=num_tokens, output_dim=self.embed_dim
        )
        self.positions = tf.range(start=0, limit=num_tokens, delta=1)

    def call(self, encoded_tokens):
        encoded_positions = self.position_embedding(self.positions)
        encoded_tokens = encoded_tokens + encoded_positions
        return encoded_tokens
###Output
_____no_output_____
###Markdown
Video vision transformer with spatio-temporal attention
###Code
def create_vivit_classifier(
    tubelet_embedder,
    positional_encoder,
    input_shape=INPUT_SHAPE,
    transformer_layers=NUM_LAYERS,
    num_heads=NUM_HEADS,
    embed_dim=PROJECTION_DIM,
    layer_norm_eps=LAYER_NORM_EPS,
    num_classes=NUM_CLASSES
):
    inputs = layers.Input(shape=input_shape)
    patches = tubelet_embedder(inputs)
    encoded_patches = positional_encoder(patches)

    # Create multiple layers of transformer block
    for _ in range(transformer_layers):
        # Layer norm and multi head self attention
        x1 = layers.LayerNormalization(epsilon=1e-6)(encoded_patches)
        attention_output = layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=embed_dim//num_heads, dropout=0.1
        )(x1, x1)

        # Skip connection
        x2 = layers.Add()([attention_output, encoded_patches])

        # Layer norm and MLP
        x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
        x3 = keras.Sequential([
            layers.Dense(units=embed_dim*4, activation=tf.nn.gelu),
            layers.Dense(units=embed_dim, activation=tf.nn.gelu)
        ])(x3)

        # Skip connection
        encoded_patches = layers.Add()([x3, x2])

    # Layer norm and global average pooling
    representation = layers.LayerNormalization(epsilon=layer_norm_eps)(encoded_patches)
    representation = layers.GlobalAvgPool1D()(representation)

    # Classify outputs
    outputs = 
layers.Dense(units=num_classes, activation='softmax')(representation) # Create the keras model model = keras.Model(inputs=inputs, outputs=outputs) return model def run_experiment(): model = create_vivit_classifier( tubelet_embedder=TubeletEmbedding( embed_dim=PROJECTION_DIM, patch_size=PATCH_SIZE ), positional_encoder=PositionalEncoder(embed_dim=PROJECTION_DIM) ) optimizer = keras.optimizers.Adam(learning_rate=LEARNING_RATE) model.compile( optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=[ keras.metrics.SparseCategoricalAccuracy(name='accuracy'), keras.metrics.SparseTopKCategoricalAccuracy(5, name='top-5-accuracy') ] ) # Train the model _ = model.fit(trainloader, epochs=EPOCHS, validation_data=validloader) _, accuracy, top5_accuracy = model.evaluate(testloader) print(f"Test acc: {round(accuracy * 100, 2)}%") print(f"Test top5 acc: {round(top5_accuracy * 100, 2)}%") return model model = run_experiment() ###Output _____no_output_____ ###Markdown Inference ###Code NUM_SAMPLES_VIZ = 25 testsamples, labels = next(iter(testloader)) testsamples, labels = testsamples[:NUM_SAMPLES_VIZ], labels[:NUM_SAMPLES_VIZ] ground_truths = [] preds = [] videos = [] for i, (testsample, label) in enumerate(zip(testsamples, labels)): with io.BytesIO() as gif: imageio.mimsave(gif, (testsample.numpy() * 255).astype('uint8'), 'GIF', fps=5) videos.append(gif.getvalue()) output = model.predict(tf.expand_dims(testsample, axis=0))[0] pred = np.argmax(output, axis=0) ground_truths.append(label.numpy().astype('int')) preds.append(pred) def make_box_for_grid(image_widget, fit): if fit is not None: fit_str = '{}'.format(fit) else: fit_str = str(fit) h = ipywidgets.HTML(value=str(fit_str)) boxb = ipywidgets.widgets.Box() boxb.children = [image_widget] vb = ipywidgets.widgets.VBox() vb.layout.align_items = 'center' vb.children = [h, boxb] return vb boxes = [] for i in range(NUM_SAMPLES_VIZ): ib = ipywidgets.widgets.Image(value=videos[i], width=100, height=100) true_class = info['label'][str(ground_truths[i])] pred_class = info['label'][str(preds[i])] caption = f'T: {true_class} | P: {pred_class}' boxes.append(make_box_for_grid(ib, caption)) ipywidgets.widgets.GridBox( boxes, layout=ipywidgets.widgets.Layout(grid_template_columns='repeat(5, 200px)') ) ###Output _____no_output_____
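###Markdown Note: the pipeline and model above reference constants (BATCH_SIZE, INPUT_SHAPE, NUM_LAYERS, NUM_HEADS, PROJECTION_DIM, PATCH_SIZE, LAYER_NORM_EPS, NUM_CLASSES, LEARNING_RATE, EPOCHS) defined in earlier cells that are not shown here. The sketch below lists plausible values, assuming a MedMNIST-style setup of 28-frame, 28x28 grayscale videos; every number is an assumption, not the notebook's confirmed configuration. ###Code
# Hypothetical hyperparameters -- all values are assumptions chosen to be
# self-consistent with the preprocessing above (which appends a channel axis),
# not the author's confirmed settings.
BATCH_SIZE = 32
INPUT_SHAPE = (28, 28, 28, 1)  # (frames, height, width, channels)
NUM_CLASSES = 11
LEARNING_RATE = 1e-4
EPOCHS = 60
PATCH_SIZE = (8, 8, 8)         # tubelet size: (time, height, width)
LAYER_NORM_EPS = 1e-6
PROJECTION_DIM = 128
NUM_HEADS = 8
NUM_LAYERS = 8
###Output _____no_output_____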
04-Linear-Regression-with-Python/related-tutorials/01-B-Introduction-Numpy.ipynb
###Markdown *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* Introduction to NumPy This chapter, along with chapter 3, outlines techniques for effectively loading, storing, and manipulating in-memory data in Python. The topic is very broad: datasets can come from a wide range of sources and a wide range of formats, including collections of documents, collections of images, collections of sound clips, collections of numerical measurements, or nearly anything else. Despite this apparent heterogeneity, it will help us to think of all data fundamentally as arrays of numbers. For example, images–particularly digital images–can be thought of as simply two-dimensional arrays of numbers representing pixel brightness across the area. Sound clips can be thought of as one-dimensional arrays of intensity versus time. Text can be converted in various ways into numerical representations, perhaps binary digits representing the frequency of certain words or pairs of words. No matter what the data are, the first step in making it analyzable will be to transform them into arrays of numbers. (We will discuss some specific examples of this process later in [Feature Engineering](05.04-Feature-Engineering.ipynb)) For this reason, efficient storage and manipulation of numerical arrays is absolutely fundamental to the process of doing data science. We'll now take a look at the specialized tools that Python has for handling such numerical arrays: the NumPy package, and the Pandas package (discussed in Chapter 3). This chapter will cover NumPy in detail. NumPy (short for *Numerical Python*) provides an efficient interface to store and operate on dense data buffers. In some ways, NumPy arrays are like Python's built-in ``list`` type, but NumPy arrays provide much more efficient storage and data operations as the arrays grow larger in size. NumPy arrays form the core of nearly the entire ecosystem of data science tools in Python, so time spent learning to use NumPy effectively will be valuable no matter what aspect of data science interests you. If you followed the advice outlined in the Preface and installed the Anaconda stack, you already have NumPy installed and ready to go. If you're more the do-it-yourself type, you can go to http://www.numpy.org/ and follow the installation instructions found there. Once you do, you can import NumPy and double-check the version: ###Code import numpy numpy.__version__ ###Output _____no_output_____ ###Markdown For the pieces of the package discussed here, I'd recommend NumPy version 1.8 or later. By convention, you'll find that most people in the SciPy/PyData world will import NumPy using ``np`` as an alias: ###Code import numpy as np ###Output _____no_output_____ ###Markdown Throughout this chapter, and indeed the rest of the book, you'll find that this is the way we will import and use NumPy. 
Reminder about Built In DocumentationAs you read through this chapter, don't forget that IPython gives you the ability to quickly explore the contents of a package (by using the tab-completion feature), as well as the documentation of various functions (using the ``?`` character – Refer back to [Help and Documentation in IPython](01.01-Help-And-Documentation.ipynb)).For example, to display all the contents of the numpy namespace, you can type this:```ipythonIn [3]: np.```And to display NumPy's built-in documentation, you can use this:```ipythonIn [4]: np?```More detailed documentation, along with tutorials and other resources, can be found at http://www.numpy.org. *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).**The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!* The Basics of NumPy Arrays Data manipulation in Python is nearly synonymous with NumPy array manipulation: even newer tools like Pandas ([Chapter 3](03.00-Introduction-to-Pandas.ipynb)) are built around the NumPy array.This section will present several examples of using NumPy array manipulation to access data and subarrays, and to split, reshape, and join the arrays.While the types of operations shown here may seem a bit dry and pedantic, they comprise the building blocks of many other examples used throughout the book.Get to know them well!We'll cover a few categories of basic array manipulations here:- *Attributes of arrays*: Determining the size, shape, memory consumption, and data types of arrays- *Indexing of arrays*: Getting and setting the value of individual array elements- *Slicing of arrays*: Getting and setting smaller subarrays within a larger array- *Reshaping of arrays*: Changing the shape of a given array- *Joining and splitting of arrays*: Combining multiple arrays into one, and splitting one array into many NumPy Array Attributes First let's discuss some useful array attributes.We'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array.We'll use NumPy's random number generator, which we will *seed* with a set value in order to ensure that the same random arrays are generated each time this code is run: ###Code import numpy as np np.random.seed(0) # seed for reproducibility x1 = np.random.randint(10, size=6) # One-dimensional array x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array ###Output _____no_output_____ ###Markdown Each array has attributes ``ndim`` (the number of dimensions), ``shape`` (the size of each dimension), and ``size`` (the total size of the array): ###Code print("x3 ndim: ", x3.ndim) print("x3 shape:", x3.shape) print("x3 size: ", x3.size) ###Output x3 ndim: 3 x3 shape: (3, 4, 5) x3 size: 60 ###Markdown Another useful attribute is the ``dtype``, the data type of the array (which we discussed previously in [Understanding Data Types in Python](02.01-Understanding-Data-Types.ipynb)): ###Code print("dtype:", x3.dtype) ###Output dtype: int64 ###Markdown Other attributes include ``itemsize``, which 
lists the size (in bytes) of each array element, and ``nbytes``, which lists the total size (in bytes) of the array: ###Code print("itemsize:", x3.itemsize, "bytes") print("nbytes:", x3.nbytes, "bytes") ###Output itemsize: 8 bytes nbytes: 480 bytes ###Markdown In general, we expect that ``nbytes`` is equal to ``itemsize`` times ``size``. Array Indexing: Accessing Single Elements If you are familiar with Python's standard list indexing, indexing in NumPy will feel quite familiar.In a one-dimensional array, the $i^{th}$ value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists: ###Code x1 x1[0] x1[4] ###Output _____no_output_____ ###Markdown To index from the end of the array, you can use negative indices: ###Code x1[-1] x1[-2] ###Output _____no_output_____ ###Markdown In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices: ###Code x2 x2[0, 0] x2[2, 0] x2[2, -1] ###Output _____no_output_____ ###Markdown Values can also be modified using any of the above index notation: ###Code x2[0, 0] = 12 x2 ###Output _____no_output_____ ###Markdown Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.This means, for example, that if you attempt to insert a floating-point value to an integer array, the value will be silently truncated. Don't be caught unaware by this behavior! ###Code x1[0] = 3.14159 # this will be truncated! x1 ###Output _____no_output_____ ###Markdown Array Slicing: Accessing Subarrays Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the *slice* notation, marked by the colon (``:``) character.The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array ``x``, use this:``` pythonx[start:stop:step]```If any of these are unspecified, they default to the values ``start=0``, ``stop=``*``size of dimension``*, ``step=1``.We'll take a look at accessing sub-arrays in one dimension and in multiple dimensions. 
One-dimensional subarrays ###Code x = np.arange(10) x x[:5] # first five elements x[5:] # elements after index 5 x[4:7] # middle sub-array x[::2] # every other element x[1::2] # every other element, starting at index 1 ###Output _____no_output_____ ###Markdown A potentially confusing case is when the ``step`` value is negative.In this case, the defaults for ``start`` and ``stop`` are swapped.This becomes a convenient way to reverse an array: ###Code x[::-1] # all elements, reversed x[5::-2] # reversed every other from index 5 ###Output _____no_output_____ ###Markdown Multi-dimensional subarraysMulti-dimensional slices work in the same way, with multiple slices separated by commas.For example: ###Code x2 x2[:2, :3] # two rows, three columns x2[:3, ::2] # all rows, every other column ###Output _____no_output_____ ###Markdown Finally, subarray dimensions can even be reversed together: ###Code x2[::-1, ::-1] ###Output _____no_output_____ ###Markdown Accessing array rows and columnsOne commonly needed routine is accessing of single rows or columns of an array.This can be done by combining indexing and slicing, using an empty slice marked by a single colon (``:``): ###Code print(x2[:, 0]) # first column of x2 print(x2[0, :]) # first row of x2 ###Output [12 5 2 4] ###Markdown In the case of row access, the empty slice can be omitted for a more compact syntax: ###Code print(x2[0]) # equivalent to x2[0, :] ###Output [12 5 2 4] ###Markdown Subarrays as no-copy viewsOne important–and extremely useful–thing to know about array slices is that they return *views* rather than *copies* of the array data.This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices will be copies.Consider our two-dimensional array from before: ###Code print(x2) ###Output [[12 5 2 4] [ 7 6 8 8] [ 1 6 7 7]] ###Markdown Let's extract a $2 \times 2$ subarray from this: ###Code x2_sub = x2[:2, :2] print(x2_sub) ###Output [[12 5] [ 7 6]] ###Markdown Now if we modify this subarray, we'll see that the original array is changed! Observe: ###Code x2_sub[0, 0] = 99 print(x2_sub) print(x2) ###Output [[99 5 2 4] [ 7 6 8 8] [ 1 6 7 7]] ###Markdown This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer. Creating copies of arraysDespite the nice features of array views, it is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be most easily done with the ``copy()`` method: ###Code x2_sub_copy = x2[:2, :2].copy() print(x2_sub_copy) ###Output [[99 5] [ 7 6]] ###Markdown If we now modify this subarray, the original array is not touched: ###Code x2_sub_copy[0, 0] = 42 print(x2_sub_copy) print(x2) ###Output [[99 5 2 4] [ 7 6 8 8] [ 1 6 7 7]] ###Markdown Reshaping of ArraysAnother useful type of operation is reshaping of arrays.The most flexible way of doing this is with the ``reshape`` method.For example, if you want to put the numbers 1 through 9 in a $3 \times 3$ grid, you can do the following: ###Code grid = np.arange(1, 10).reshape((3, 3)) print(grid) ###Output [[1 2 3] [4 5 6] [7 8 9]] ###Markdown Note that for this to work, the size of the initial array must match the size of the reshaped array. 
Where possible, the ``reshape`` method will use a no-copy view of the initial array, but with non-contiguous memory buffers this is not always the case. Another common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix. This can be done with the ``reshape`` method, or more easily done by making use of the ``newaxis`` keyword within a slice operation: ###Code x = np.array([1, 2, 3]) # row vector via reshape x.reshape((1, 3)) # row vector via newaxis x[np.newaxis, :] # column vector via reshape x.reshape((3, 1)) # column vector via newaxis x[:, np.newaxis] ###Output _____no_output_____ ###Markdown We will see this type of transformation often throughout the remainder of the book. Array Concatenation and Splitting All of the preceding routines worked on single arrays. It's also possible to combine multiple arrays into one, and to conversely split a single array into multiple arrays. We'll take a look at those operations here. Concatenation of arrays Concatenation, or joining of two arrays in NumPy, is primarily accomplished using the routines ``np.concatenate``, ``np.vstack``, and ``np.hstack``. ``np.concatenate`` takes a tuple or list of arrays as its first argument, as we can see here: ###Code x = np.array([1, 2, 3]) y = np.array([3, 2, 1]) np.concatenate([x, y]) ###Output _____no_output_____ ###Markdown You can also concatenate more than two arrays at once: ###Code z = [99, 99, 99] print(np.concatenate([x, y, z])) ###Output [ 1 2 3 3 2 1 99 99 99] ###Markdown It can also be used for two-dimensional arrays: ###Code grid = np.array([[1, 2, 3], [4, 5, 6]]) # concatenate along the first axis np.concatenate([grid, grid]) # concatenate along the second axis (zero-indexed) np.concatenate([grid, grid], axis=1) ###Output _____no_output_____ ###Markdown For working with arrays of mixed dimensions, it can be clearer to use the ``np.vstack`` (vertical stack) and ``np.hstack`` (horizontal stack) functions: ###Code x = np.array([1, 2, 3]) grid = np.array([[9, 8, 7], [6, 5, 4]]) # vertically stack the arrays np.vstack([x, grid]) # horizontally stack the arrays y = np.array([[99], [99]]) np.hstack([grid, y]) ###Output _____no_output_____ ###Markdown Similarly, ``np.dstack`` will stack arrays along the third axis. Splitting of arrays The opposite of concatenation is splitting, which is implemented by the functions ``np.split``, ``np.hsplit``, and ``np.vsplit``. For each of these, we can pass a list of indices giving the split points: ###Code x = [1, 2, 3, 99, 99, 3, 2, 1] x1, x2, x3 = np.split(x, [3, 5]) print(x1, x2, x3) ###Output [1 2 3] [99 99] [3 2 1] ###Markdown Notice that *N* split-points lead to *N + 1* subarrays. The related functions ``np.hsplit`` and ``np.vsplit`` are similar: ###Code grid = np.arange(16).reshape((4, 4)) grid upper, lower = np.vsplit(grid, [2]) print(upper) print(lower) left, right = np.hsplit(grid, [2]) print(left) print(right) ###Output [[ 0 1] [ 4 5] [ 8 9] [12 13]] [[ 2 3] [ 6 7] [10 11] [14 15]]
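###Markdown The view-versus-copy behavior of ``reshape`` noted above can be checked directly: ``np.shares_memory`` reports whether two arrays overlap in memory. A minimal sketch using only standard NumPy: ###Code
# reshape on a contiguous array returns a no-copy view; reshape on a
# non-contiguous array (e.g. a transpose) must copy the data.
a = np.arange(6)
b = a.reshape((2, 3))
print(np.shares_memory(a, b))  # True: b is a view of a
c = b.T.reshape(6)             # b.T is non-contiguous, so reshape copies
print(np.shares_memory(b, c))  # False: c is a copy
###Output _____no_output_____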
02_datasets.ipynb
###Markdown Welcome to the Google Earth Engine (GEE) Python API! These notebooks will provide an overview of how to use the GEE python API and access all it has to offer. Notebook 2: Dataset Types There are a **ton** of datasets available for use on GEE. Check out the [GEE Catalog](https://developers.google.com/earth-engine/datasets/catalog) to get a sense of what's there. The datasets can be broken down into 3 main types: features, images, and collections. From the GEE website: - **Features** which are geometric objects with a list of properties. For example, a watershed with some properties such as name and area, is an ee.Feature.- **Images** which are like features, but may include several bands. For example, a satellite image is an ee.Image.- **Collections** which are groups of features or images. For example, the Global Administrative Unit Layers giving administrative boundaries is a ee.FeatureCollection and the MODIS Land Surface Temperature dataset is an ee.ImageCollection. We'll look into each type. 1. Features Features are geometric objects. Typically these are points or vector boundaries, such as a lat/lon location or a zip code. We can create a Feature simply by calling _ee.Feature_ ###Code import ee import numpy as np import geemap ee.Initialize() feat = ee.Feature(None) ###Output _____no_output_____ ###Markdown This is a simple, empty feature, with the basic attributes (which are empty): type, geometries, and properties. You can add any properties you'd like: ###Code feat = feat.set({'ID':'test1','name':'testing_feature_1'}) feat.getInfo() ###Output _____no_output_____ ###Markdown * One key difference between the GEE python API and the online javascript workspace is that _getInfo()_ needs to be called here when printing info about a given feature/image. You'll find _getInfo()_ throughout this notebook. To create a feature with a point in mind, simply pass a lon/lat pair to _Point_... ###Code point = ee.Feature(ee.Geometry.Point([-105.25, 40.015]),{'name':'Boulder'}) ###Output _____no_output_____ ###Markdown ...or a list of lon/lat pairs to _Polygon_ (repeating the first point to close the ring) to create an area: ###Code poly = ee.Feature(ee.Geometry.Polygon([-102.06, 41.01,-102.05,37.01,-109.05,37.00,-109.05,41,-102.06, 41.01], proj='EPSG:4326',geodesic=False),{'name':'Colorado'}) poly.getInfo() ###Output _____no_output_____ ###Markdown _proj_ defaults to the coordinate inputs, where numbers are assumed to be 'EPSG:4326'. _geodesic=False_ gives straight edges in a map projection, where _geodesic=True_ gives curved edges that follow the shortest path on the curved earth surface. Plotting will be covered more in the next notebook, but just to check our features: ###Code Map = geemap.Map() Map.centerObject(poly, 7) Map.addLayer(poly, {}, "Colorado") Map.addLayer(point, {'color':'red'}, "Boulder") Map ###Output _____no_output_____ ###Markdown 2. Images Images are 2D datasets. Typically, these are rasters or satellite images. \We can create an image similar to how we made a feature: ###Code transparent = ee.Image() transparent.getInfo() ###Output _____no_output_____ ###Markdown This is a blank image with a single band. A multi-band image can be created by providing a list of constants: ###Code orange = ee.Image([0xff, 0x88, 0x00]) orange.getInfo() ###Output _____no_output_____ ###Markdown An image can also be read in from the GEE catalog by providing the product ID. 
For example, the JAXA 30m global DSM can be read by: ###Code image = ee.Image('JAXA/ALOS/AW3D30/V2_2') image.getInfo() ###Output _____no_output_____ ###Markdown While there's a lot of info, it can be subset by specifying the attribute desired: ###Code image.getInfo()['bands'] ###Output _____no_output_____ ###Markdown And can be added to a map by selecting a given band - in this case the height above sea level _AVE_DSM_ ###Code Map = geemap.Map(center=(40, -105), zoom=3) Map.addLayer(image.select('AVE_DSM'), {'min': 0, 'max': 2000}, 'AVE_DSM'); Map ###Output _____no_output_____ ###Markdown 3. Collections Collections are groups of features or images. They provide an easy way to grab many features/images and organize based on whatever parameters you choose. You can create a collection by simply providing a list of features or images: ###Code listOfFeatures = [ ee.Feature(ee.Geometry.Point(-62.54, -27.32), {'key': 'val1'}), ee.Feature(ee.Geometry.Point(-69.18, -10.64), {'key': 'val2'}), ee.Feature(ee.Geometry.Point(-45.98, -18.09), {'key': 'val3'}) ] FC = ee.FeatureCollection(listOfFeatures); FC.getInfo() img1 = ee.Image('COPERNICUS/S2_SR/20170328T083601_20170328T084228_T35RNK'); img2 = ee.Image('COPERNICUS/S2_SR/20170328T083601_20170328T084228_T35RNL'); img3 = ee.Image('COPERNICUS/S2_SR/20170328T083601_20170328T084228_T35RNM'); IC = ee.ImageCollection([img1, img2, img3]) IC.getInfo() ###Output _____no_output_____ ###Markdown Most frequently, image collections are loaded in from the GEE catalog. This enables easier filtering. \Below is an example using Landsat 8 data that shows loading in and filtering collections by applying spatial and temporal filters. 4. An example with Landsat 8 Here's a quick example of working with all three of the dataset types described above. First, import the Landsat 8 surface reflectance image collection and filter it by a given date range (a cloud-cover filter is sketched below) ###Code date_strt ='2021-05-01' date_end = '2021-06-01' landsat_collection_unfiltered = ee.ImageCollection("LANDSAT/LC08/C02/T2_L2") landsat_collection = landsat_collection_unfiltered.filterDate(date_strt, date_end) #print the size of the collection print(str(landsat_collection.size().getInfo())+' images in filtered collection between '+date_strt+' and '+date_end) ###Output 5740 images in filtered collection between 2021-05-01 and 2021-06-01 ###Markdown Next, we'll import a feature collection (TIGER 2018 US Census Counties) and filter the image collection by a given feature. ###Code county_name = 'Boulder' counties = ee.FeatureCollection('TIGER/2018/Counties') #get boulder county boulderCo = counties.filter(ee.Filter.eq("NAME", county_name)) #filter landsat images by Boulder County Bounds landsat_boulder = landsat_collection.filterBounds(boulderCo) print(str(landsat_boulder.size().getInfo())+' images in filtered collection between '+date_strt+' and '+date_end+' in '+county_name+' County.') print(landsat_boulder.aggregate_array('LANDSAT_PRODUCT_ID').getInfo()) ###Output 2 images in filtered collection between 2021-05-01 and 2021-06-01 in Boulder County. ['LC08_L2SP_033032_20210511_20210524_02_T2', 'LC08_L2SP_033033_20210511_20210524_02_T2'] ###Markdown Get the first image that fits our subset and plot it: ###Code ls_img = landsat_boulder.first() Map = geemap.Map(center=(40, -105), zoom=7) Map.addLayer(ls_img.select('SR_B1'), {'min': 20000, 'max': 50000}, 'Landsat Band 1') Map.addLayer(boulderCo,{'opacity':.5,'color':'red'},'County Boundary') Map ###Output _____no_output_____
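###Markdown The text above mentions filtering by cloud cover, but the code only applies date and bounds filters. Landsat Collection 2 scenes carry a CLOUD_COVER property (percent, 0-100) that can be filtered on with ee.Filter.lt; a minimal sketch (the 20% threshold is an arbitrary assumption): ###Code
# Hedged sketch: keep only scenes with less than 20% cloud cover.
landsat_clear = landsat_boulder.filter(ee.Filter.lt('CLOUD_COVER', 20))
print(str(landsat_clear.size().getInfo()) + ' images with < 20% cloud cover')
###Output _____no_output_____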
Feature_Section_Regression.ipynb
###Markdown We will be using the **Superconductivty Data Data Set**. The goal here is to predict the critical temperature based on the features extracted. Data can be downloaded through this [link](https://archive.ics.uci.edu/ml/datasets/Superconductivty+Data) ###Code df = pd.read_csv('train.csv') df.head() df.isna().sum().sum() ###Output _____no_output_____ ###Markdown The data has no missing values in any of the columns ###Code df.describe() df.shape ###Output _____no_output_____ ###Markdown So the data has 81 features. Our goal is to reduce the number of features. ###Code # Organizing Train data X = df[df.columns[:-1]] y = df[df.columns[-1:]] ###Output _____no_output_____ ###Markdown Dictionary of Regressors used in this tutorial ###Code d = {'Linear Regression': LinearRegression(), 'Lasso Regression': LassoCV(), 'Decision Tree Regression': DecisionTreeRegressor(), 'ExtraTreesRegressor':ExtraTreesRegressor(), 'XGBRegressor':XGBRegressor(verbose=0,objective='reg:squarederror') } ###Output _____no_output_____ ###Markdown Use the regressors below if you need more precise results, but we are avoiding them here as the time they take to run is very high- 'CatBoostRegressor':CatBoostRegressor(verbose=0) Dimensionality Reduction using various techniques 1. Univariate feature selection **Univariate feature selection** works by selecting the best features based on univariate statistical tests. ***GenericUnivariateSelect*** allows performing univariate feature selection with a configurable strategy. This makes it possible to select the best univariate selection strategy with a hyper-parameter search estimator. This function takes as input a scoring function that returns univariate scores and p-values.- **modes** : ‘percentile’ - removes all but a user-specified highest scoring percentage of features ‘k_best’ - removes all but the 'k' highest scoring features ‘fpr’ - false positive rate ‘fdr’ - false discovery rate ‘fwe’ - family-wise error - **score_func** : For regression: f_regression, mutual_info_regression For classification: chi2, f_classif, mutual_info_classif ###Code n_best_parameters = 20 trans = GenericUnivariateSelect(score_func=mutual_info_regression, mode='k_best', param=n_best_parameters) trans.fit(X, y) columns_retained_GUV = df.iloc[:, :-1].columns[trans.get_support()].values columns_retained_GUV X_trans = trans.transform(X) pd.DataFrame(X_trans, columns=columns_retained_GUV).head() ###Output _____no_output_____ ###Markdown 2. 
Backward Elimination using Statistical Significance- This method uses p-values for elimination of features.- The significance level can be set using p_threshold.- We have used OLS (Ordinary Least Squares) regression (commonly known as Linear Regression) for finding p-values ###Code # Using p-values for elimination p_threshold = 0.05 cols = list(X.columns) pmax = 1 while (len(cols)>0): p= [] X_1 = X[cols] X_1 = sm.add_constant(X_1) model = sm.OLS(y,X_1).fit() p = pd.Series(model.pvalues.values[1:],index = cols) pmax = max(p) feature_with_p_max = p.idxmax() if(pmax>p_threshold): cols.remove(feature_with_p_max) else: break selected_features_BE = cols print('number of features selected: ',len(selected_features_BE)) print(selected_features_BE) ###Output number of features selected: 69 ['number_of_elements', 'mean_atomic_mass', 'wtd_mean_atomic_mass', 'gmean_atomic_mass', 'wtd_gmean_atomic_mass', 'entropy_atomic_mass', 'range_atomic_mass', 'std_atomic_mass', 'mean_fie', 'wtd_mean_fie', 'gmean_fie', 'wtd_gmean_fie', 'entropy_fie', 'wtd_entropy_fie', 'range_fie', 'wtd_range_fie', 'std_fie', 'mean_atomic_radius', 'wtd_mean_atomic_radius', 'wtd_gmean_atomic_radius', 'entropy_atomic_radius', 'wtd_entropy_atomic_radius', 'range_atomic_radius', 'wtd_range_atomic_radius', 'std_atomic_radius', 'wtd_std_atomic_radius', 'mean_Density', 'gmean_Density', 'wtd_gmean_Density', 'entropy_Density', 'wtd_entropy_Density', 'range_Density', 'std_Density', 'wtd_std_Density', 'mean_ElectronAffinity', 'wtd_mean_ElectronAffinity', 'gmean_ElectronAffinity', 'wtd_gmean_ElectronAffinity', 'wtd_entropy_ElectronAffinity', 'range_ElectronAffinity', 'wtd_range_ElectronAffinity', 'std_ElectronAffinity', 'wtd_std_ElectronAffinity', 'mean_FusionHeat', 'wtd_mean_FusionHeat', 'gmean_FusionHeat', 'wtd_gmean_FusionHeat', 'entropy_FusionHeat', 'wtd_entropy_FusionHeat', 'range_FusionHeat', 'wtd_range_FusionHeat', 'wtd_std_FusionHeat', 'mean_ThermalConductivity', 'wtd_mean_ThermalConductivity', 'gmean_ThermalConductivity', 'wtd_gmean_ThermalConductivity', 'entropy_ThermalConductivity', 'range_ThermalConductivity', 'wtd_range_ThermalConductivity', 'std_ThermalConductivity', 'mean_Valence', 'wtd_mean_Valence', 'gmean_Valence', 'wtd_gmean_Valence', 'entropy_Valence', 'wtd_entropy_Valence', 'range_Valence', 'std_Valence', 'wtd_std_Valence'] ###Markdown 3. Model-based (Select-from-Model) SelectFromModel is a meta-transformer that can be used along with any estimator that has a coef_ or feature_importances_ attribute after fitting. The features are considered unimportant and removed if the corresponding coef_ or feature_importances_ values are below the provided threshold parameter. Apart from specifying the threshold numerically, there are built-in heuristics for finding a threshold using a string argument. 
Available heuristics are “mean”, “median” and float multiples of these like “0.1*mean” Using 'median' as threshold to remove the features ###Code importance_df = pd.DataFrame(columns=[k for k in d.keys()],index=df.columns[:-1]) for clf_name,classifier in d.items(): trans = SelectFromModel(classifier, threshold='median') X_trans = trans.fit_transform(X, y) classifier.fit(X, y) try: #print(classifier.feature_importances_) importance_df[clf_name] = classifier.feature_importances_ except: if len(classifier.coef_) == 1: #print(classifier.coef_[0]) importance_df[clf_name] = classifier.coef_[0] else: #print(classifier.coef_) importance_df[clf_name] = classifier.coef_ columns_retained = df.iloc[:,:-1].columns[trans.get_support()].values #print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained),']') print('No of columns retained by',clf_name,': ',len(columns_retained)) print() ###Output No of columns retained by Linear Regression : 41 No of columns retained by Lasso Regression : 81 No of columns retained by Decision Tree Regression : 41 No of columns retained by ExtraTreesRegressor : 41 No of columns retained by XGBRegressor : 41 ###Markdown Using 'mean' as threshold to remove the features ###Code for clf_name,classifier in d.items(): trans = SelectFromModel(classifier, threshold='mean') X_trans = trans.fit_transform(X, y) classifier.fit(X, y) try: #print(classifier.feature_importances_) importance_df[clf_name] = classifier.feature_importances_ except: if len(classifier.coef_) == 1: #print(classifier.coef_[0]) importance_df[clf_name] = classifier.coef_[0] else: #print(classifier.coef_) importance_df[clf_name] = classifier.coef_ columns_retained = df.iloc[:,:-1].columns[trans.get_support()].values #print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained),']') print('No of columns retained by',clf_name,': ',len(columns_retained)) print() ###Output No of columns retained by Linear Regression : 18 No of columns retained by Lasso Regression : 7 No of columns retained by Decision Tree Regression : 12 No of columns retained by ExtraTreesRegressor : 15 No of columns retained by XGBRegressor : 13 ###Markdown **importance_df** contains the feature importance of all the features calculated using the 5 regressors ###Code importance_df.head() ###Output _____no_output_____ ###Markdown 4. RFE (Recursive feature elimination) Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute. Then, the least important features are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached. The code below outputs the optimal number of features, which when used by the classifier gives the best score. 
###Code for clf_name, classifier in d.items(): high_score=0 nof=0 score_list =[] for n in range(1,len(X.columns)+1): X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.3, random_state = 0) model = classifier rfe = RFE(model,n) X_train_rfe = rfe.fit_transform(X_train,y_train) X_test_rfe = rfe.transform(X_test) model.fit(X_train_rfe,y_train) score = model.score(X_test_rfe,y_test) score_list.append(score) if(score>high_score): high_score = score nof = n print("Optimum number of features by ",clf_name,' : %d' %nof) print("Score of",clf_name," with %d features: %f" % (nof, high_score)) ###Output Optimum number of features by Linear Regression : 80 Score of Linear Regression with 80 features: 0.738674 Optimum number of features by Lasso Regression : 8 Score of Lasso Regression with 8 features: 0.607689 Optimum number of features by Decision Tree Regression : 81 Score of Decision Tree Regression with 81 features: 0.878737 Optimum number of features by ExtraTreesRegressor : 69 Score of ExtraTreesRegressor with 69 features: 0.918793 ###Markdown We can use **RFE** to find the `n` most important features. *Change `n_features_to_select` to the optimal number of features.* ###Code n_features_to_select = 2 for clf_name, classifier in d.items(): trans = RFE(classifier, n_features_to_select) X_trans = trans.fit_transform(X, y) columns_retained_RFE = df.iloc[:, :-1].columns[trans.get_support()].values #print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained_RFE),']') print("Optimum number of features by ",clf_name,' :', len(columns_retained_RFE)) print() ###Output _____no_output_____ ###Markdown RFE-CV (Recursive feature elimination with Cross Validation) RFECV is similar to RFE but performs RFE in a cross-validation loop to find the optimal number of features. ###Code for clf_name, classifier in d.items(): trans = RFECV(classifier) X_trans = trans.fit_transform(X, y) columns_retained_RFECV = df.iloc[:, :-1].columns[trans.get_support()].values #print('Columns Selected by ',clf_name,' are: [',','.join(columns_retained_RFECV),']') print("Optimum number of features by ",clf_name,' :', len(columns_retained_RFECV)) print() ###Output Optimum number of features by Linear Regression : 77 Optimum number of features by Lasso Regression : 6 Optimum number of features by Decision Tree Regression : 24 Optimum number of features by ExtraTreesRegressor : 70 Optimum number of features by XGBRegressor : 58 ###Markdown 5. Using feature_selector Feature selector is a tool for dimensionality reduction of machine learning datasets. Clone the repository through this [link](https://github.com/WillKoehrsen/feature-selector). Techniques available to identify features to remove:- Missing Values- Single Unique Values- Collinear Features- Zero Importance Features- Low Importance Features Create the feature_selector object for performing various types of feature selection techniques and plotting graphs ###Code from feature_selector import FeatureSelector fs = FeatureSelector(data = X,labels=y) ###Output _____no_output_____ ###Markdown Finding Missing Values ###Code fs.identify_missing(missing_threshold=0.2) missing_features = fs.ops['missing'] missing_features ###Output 0 features with greater than 0.20 missing values. ###Markdown Finding Features with Single Unique Values ###Code # Single Unique Value Features fs.identify_single_unique() ###Output 0 features with a single unique value. 
###Markdown Finding Collinear Features ###Code fs.identify_collinear(correlation_threshold=0.95) collinear_features = fs.ops['collinear'] fs.record_collinear.head() ###Output 23 features with a correlation magnitude greater than 0.95. ###Markdown Plotting Correlation Heatmap ###Code sns.set(rc={'figure.figsize':(15,10)}) sns.heatmap(fs.corr_matrix) ###Output _____no_output_____ ###Markdown Finding Zero Importance Features ###Code fs.identify_zero_importance(task='regression',eval_metric='rmse',n_iterations=5,early_stopping=True) ###Output Training Gradient Boosting Model Training until validation scores don't improve for 100 rounds. Did not meet early stopping. Best iteration is: [1000] valid_0's rmse: 9.38301 valid_0's l2: 88.0409 Training until validation scores don't improve for 100 rounds. Did not meet early stopping. Best iteration is: [1000] valid_0's rmse: 8.89715 valid_0's l2: 79.1593 Training until validation scores don't improve for 100 rounds. Did not meet early stopping. Best iteration is: [1000] valid_0's rmse: 9.68113 valid_0's l2: 93.7244 Training until validation scores don't improve for 100 rounds. Did not meet early stopping. Best iteration is: [1000] valid_0's rmse: 9.68775 valid_0's l2: 93.8525 Training until validation scores don't improve for 100 rounds. Did not meet early stopping. Best iteration is: [996] valid_0's rmse: 9.47357 valid_0's l2: 89.7485 0 features with zero importance after one-hot encoding. ###Markdown Finding Low Importance Features cumulative_importance = the fraction of total feature importance to retain; features that are not needed to reach this cumulative importance are flagged as low importance. ###Code fs.identify_low_importance(cumulative_importance=0.95) ###Output 67 features required for cumulative importance of 0.95 after one hot encoding. 14 features do not contribute to cumulative importance of 0.95. ###Markdown Plotting the feature importances, highest first ###Code fs.plot_feature_importances() feature_importances = pd.DataFrame(fs.feature_importances) feature_importances.head(10) ###Output _____no_output_____ ###Markdown Removing Features using the methods used above. methods - 'all', 'missing', 'single_unique', 'collinear', 'zero_importance', 'low_importance' ###Code train_removed = fs.remove(methods = ['missing', 'single_unique', 'collinear', 'zero_importance', 'low_importance']) ###Output Removed 34 features. ###Markdown We can even run all at once using the 'identify_all' function ###Code fs.identify_all(selection_params = {'missing_threshold': 0.6, 'correlation_threshold': 0.9, 'task': 'regression', 'eval_metric': 'rmse', 'cumulative_importance': 0.99}) ###Output _____no_output_____ ###Markdown Analysis of feature selection In this analysis we have used the feature importance values gained from the RFE technique and the number of features selected by the RFE-CV technique. We then plot the model score vs the number of features selected, in descending order of importance. 
###Code feature_list = [] scores_lr = [] Sorted_features = importance_df.sort_values(by='Linear Regression',ascending=False).index.values for feature in Sorted_features: feature_list.append(feature) cls = LinearRegression() X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42) cls.fit(X_train,y_train) scores_lr.append(cls.score(X_test,y_test)) ax = sns.lineplot(x=list(range(1,len(scores_lr)+1)),y=scores_lr) ax.set_title('Model score vs number of features using Linear Regression') n = 77 plt.axvline(x=n,color = 'red') ax.annotate('Score = '+str(scores_lr[n]), xy=(n,scores_lr[n] ), xytext=(n+2,scores_lr[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),) feature_list = [] scores_lasso = [] Sorted_features = importance_df.sort_values(by='Lasso Regression',ascending=False).index.values for feature in Sorted_features: feature_list.append(feature) cls = LassoCV() X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42) cls.fit(X_train,y_train) scores_lasso.append(cls.score(X_test,y_test)) ax = sns.lineplot(x=list(range(1,len(scores_lasso)+1)),y=scores_lasso) ax.set_title('Model score vs number of features using Lasso Regression') n = 21 plt.axvline(x=n,color = 'red') ax.annotate('Score = '+str(scores_lasso[n]), xy=(n,scores_lasso[n] ), xytext=(n+3,scores_lasso[n]-0.007),arrowprops=dict(facecolor='black', shrink=0.05),) feature_list = [] scores_dtr = [] Sorted_features = importance_df.sort_values(by='Decision Tree Regression',ascending=False).index.values for feature in Sorted_features: feature_list.append(feature) cls = DecisionTreeRegressor() X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42) cls.fit(X_train,y_train) scores_dtr.append(cls.score(X_test,y_test)) ax = sns.lineplot(x=list(range(1,len(scores_dtr)+1)),y=scores_dtr) ax.set_title('Model score vs number of features using Decision Tree Regression') n = 24 plt.axvline(x=n,color = 'red') ax.annotate('Score = '+str(scores_dtr[n]), xy=(n,scores_dtr[n] ), xytext=(n+2,scores_dtr[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),) feature_list = [] scores_etr = [] Sorted_features = importance_df.sort_values(by='ExtraTreesRegressor',ascending=False).index.values for feature in Sorted_features: feature_list.append(feature) cls = ExtraTreesRegressor() X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42) cls.fit(X_train,y_train) scores_etr.append(cls.score(X_test,y_test)) ax = sns.lineplot(x=list(range(1,len(scores_etr)+1)),y=scores_etr) ax.set_title('Model score vs number of features using Extra Trees Regression') n = 70 plt.axvline(x=n,color = 'red') ax.annotate('Score = '+str(scores_etr[n]), xy=(n,scores_etr[n] ), xytext=(n+2,scores_etr[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),) feature_list = [] scores_xgb = [] Sorted_features = importance_df.sort_values(by='XGBRegressor',ascending=False).index.values for feature in Sorted_features: feature_list.append(feature) cls = XGBRegressor(objective='reg:squarederror') X_train, X_test, y_train, y_test = train_test_split(X[feature_list], y, test_size=0.2, random_state=42) cls.fit(X_train,y_train) scores_xgb.append(cls.score(X_test,y_test)) ax = sns.lineplot(x=list(range(1,len(scores_xgb)+1)),y=scores_xgb) ax.set_title('Model score vs number of features using XGB regression') n = 58 plt.axvline(x=n,color = 'red') ax.annotate('Score = '+str(scores_xgb[n]), xy=(n,scores_xgb[n] ), 
xytext=(n+2,scores_xgb[n]-0.02),arrowprops=dict(facecolor='black', shrink=0.05),) ###Output _____no_output_____
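###Markdown The five plotting loops above repeat the same train-and-score pattern per regressor; a hedged refactor into a single helper is sketched below. The helper name is ours, and it indexes scores as scores[k-1] for k features, which also avoids the slight off-by-one in the annotations above (where scores[n] is the score obtained with n+1 features). ###Code
def plot_score_vs_n_features(clf_name, estimator, n_mark):
    # Train on the top-k features (ranked by importance_df[clf_name])
    # for k = 1..n_features and plot the resulting test score.
    sorted_features = importance_df.sort_values(by=clf_name, ascending=False).index
    scores = []
    for k in range(1, len(sorted_features) + 1):
        cols = list(sorted_features[:k])
        X_train, X_test, y_train, y_test = train_test_split(
            X[cols], y, test_size=0.2, random_state=42)
        estimator.fit(X_train, y_train)
        scores.append(estimator.score(X_test, y_test))
    ax = sns.lineplot(x=list(range(1, len(scores) + 1)), y=scores)
    ax.set_title('Model score vs number of features using ' + clf_name)
    plt.axvline(x=n_mark, color='red')
    ax.annotate('Score = ' + str(scores[n_mark - 1]),  # k features -> scores[k-1]
                xy=(n_mark, scores[n_mark - 1]),
                xytext=(n_mark + 2, scores[n_mark - 1] - 0.02),
                arrowprops=dict(facecolor='black', shrink=0.05))

# e.g. plot_score_vs_n_features('XGBRegressor',
#                               XGBRegressor(objective='reg:squarederror'), 58)
###Output _____no_output_____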
Kaggle/ House Prices: Advanced Regression Techniques/Simple_House_Prediction.ipynb
###Markdown ###Code from google.colab import drive drive.mount('/content/drive') #Import And Load Data import numpy as np # For numerical fast numerical calculations import matplotlib.pyplot as plt # For making plots import pandas as pd # Deals with data import seaborn as sns # Makes beautiful plots from sklearn.preprocessing import StandardScaler # Testing sklearn import tensorflow as tf # Imports tensorflow from sklearn import metrics import keras # Imports keras from tensorflow.python.data import Dataset import math tf.logging.set_verbosity(tf.logging.ERROR) Ames_House_data = pd.read_csv("/content/drive/My Drive/House Price Prediction/train.csv") #Ames_House_data = Ames_House_data.reindex( # np.random.permutation(Ames_House_data.index)) sns.scatterplot(x=Ames_House_data["OverallQual"],y=Ames_House_data["SalePrice"]) sns.scatterplot(x=Ames_House_data["GrLivArea"],y=Ames_House_data["SalePrice"]) sns.scatterplot(x=Ames_House_data["GarageCars"],y=Ames_House_data["SalePrice"]) sns.scatterplot(x=Ames_House_data["GarageArea"],y=Ames_House_data["SalePrice"]) sns.scatterplot(x=Ames_House_data["TotalBsmtSF"],y=Ames_House_data["SalePrice"]) sns.scatterplot(x=Ames_House_data["1stFlrSF"],y=Ames_House_data["SalePrice"]) def preprocess_features(Ames_House_data): selected_features = Ames_House_data[ ["LotArea" ]] #Mean Normalization selected_features["LotArea"]=((np.log2(selected_features["LotArea"])-13.1)/(0.7)) processed_features = selected_features.copy() return processed_features #define output features def preprocess_targets(Ames_House_data): output_targets = pd.DataFrame() output_targets["SalePrice"] = ( Ames_House_data["SalePrice"]) output_targets["SalePrice"] = (output_targets["SalePrice"]/100) return output_targets training = preprocess_features(Ames_House_data) training_examples = training.head(1022) training_examples.describe() test_examples = training.tail(438) training_targ = preprocess_targets(Ames_House_data) training_targets =training_targ.head(1022) test_targets=training_targ.tail(438) training_targets.describe() sns.scatterplot(x=training_examples["LotArea"],y=training_targets["SalePrice"],color="g") sns.scatterplot(x=test_examples["LotArea"],y=test_targets["SalePrice"],color="g") def construct_feature_columns(input_features): return set([tf.feature_column.numeric_column(my_feature) for my_feature in input_features]) def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): # Convert pandas data into a dict of np arrays. features = {key:np.array(value) for key,value in dict(features).items()} # Construct a dataset, and configure batching/repeating. ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified. if shuffle: ds = ds.shuffle(10000) # Return the next batch of data. features, labels = ds.make_one_shot_iterator().get_next() return features, labels def train_model( learning_rate, steps, batch_size, training_examples, training_targets): periods = 10 steps_per_period = steps / periods # Create a linear regressor object. my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate) my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0) linear_regressor = tf.estimator.LinearRegressor( feature_columns=construct_feature_columns(training_examples), optimizer=my_optimizer ) # Create input functions. 
training_input_fn = lambda: my_input_fn( training_examples, training_targets["SalePrice"], batch_size=batch_size) predict_training_input_fn = lambda: my_input_fn( training_examples, training_targets["SalePrice"], num_epochs=1, shuffle=False) # Train the model, but do so inside a loop so that we can periodically assess # loss metrics. print("Training model...") print("RMSE (on training data):") training_rmse = [] for period in range(0, periods): # Train the model, starting from the prior state. linear_regressor.train( input_fn=training_input_fn, steps=steps_per_period, ) # Take a break and compute predictions. training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn) training_predictions = np.array([item['predictions'][0] for item in training_predictions]) # Compute training loss. training_root_mean_squared_error = math.sqrt( metrics.mean_squared_error(training_predictions, training_targets)) # Occasionally print the current loss. print(" period %02d : %0.2f" % (period, training_root_mean_squared_error)) # Add the loss metrics from this period to our list. training_rmse.append(training_root_mean_squared_error) print("Model training finished.") # Output a graph of loss metrics over periods. plt.ylabel("RMSE") plt.xlabel("Periods") plt.title("Root Mean Squared Error vs. Periods") plt.tight_layout() plt.plot(training_rmse, label="training") plt.legend() return linear_regressor training_examples.describe() #print(type(training_examples)) training_targets.describe() #print(type(training_targets)) linear_regressor = train_model( learning_rate=0.8, steps=800, batch_size=5, training_examples=training_examples, training_targets=training_targets) def my_input_fn1(features, batch_size=1, shuffle=True, num_epochs=None): # Convert pandas data into a dict of np arrays. features = {key:np.array(value) for key,value in dict(features).items()} # Construct a dataset, and configure batching/repeating. ds = Dataset.from_tensor_slices((features)) # warning: 2GB limit ds = ds.batch(batch_size).repeat(num_epochs) # Shuffle the data, if specified. if shuffle: ds = ds.shuffle(10000) # Return the next batch of data. 
features = ds.make_one_shot_iterator().get_next() return features predict_test_input_fn = lambda: my_input_fn1( test_examples, # test_targets["SalePrice"], num_epochs=1, shuffle=False) test_predictions = linear_regressor.predict(input_fn=predict_test_input_fn) #print(type(test_predictions),list(test_predictions),type(list(test_predictions))) test_predictions2 =np.array([item['predictions'][0] for item in test_predictions]) x = np.array(test_examples["LotArea"]) y = np.array(test_targets["SalePrice"]) # #print(test_predictions) # main =[] # k=1461 # for i in range(len(test_predictions1)): # l=[k+i,test_predictions1[i]] # main.append(l) df = pd.DataFrame(test_predictions2) #sns.distplot(df[1], bins=10, kde=False) #print(len(main)) # df = pd.DataFrame(main) # df.to_csv('/content/drive/My Drive/House Price Prediction/submission2.csv', index=False) #main_np = np.array(main) #pd.DataFrame(main_np).to_csv("/content/drive/My Drive/House Price Prediction/submission.csv") #print(main_np) #root_mean_squared_error = math.sqrt( #metrics.mean_squared_error(test_predictions, test_targets)) #print("Final RMSE (on test data): %0.2f" % root_mean_squared_error) plt.plot(x, y, 'ro', label ='Original data') plt.plot(x, test_predictions2, label ='Fitted line') plt.title('Linear Regression Result') plt.legend() plt.show() print(type(df)) sns.scatterplot(x=test_examples["LotArea"],y=test_predictions2) Ames_House_test_data = pd.read_csv("/content/drive/My Drive/House Price Prediction/test.csv") testf = preprocess_features(Ames_House_test_data) #test_targets = preprocess_targets(Ames_House_test_data) predict_test_input_fn = lambda: my_input_fn1( testf, # test_targets["SalePrice"], num_epochs=1, shuffle=False) testpf = linear_regressor.predict(input_fn=predict_test_input_fn) #print(type(test_predictions),list(test_predictions),type(list(test_predictions))) testpf =([item['predictions'][0] for item in testpf]) #print(test_predictions) main =[] k=1461 for i in range(len(testpf)): l=[k+i,testpf[i]*100] main.append(l) df = pd.DataFrame(main) sns.distplot(df[1], bins=10, kde=False) #print(len(main)) # df = pd.DataFrame(main) #main_np = np.array(main) #pd.DataFrame(main_np).to_csv("/content/drive/My Drive/House Price Prediction/submission.csv") #print(main_np) #root_mean_squared_error = math.sqrt( #metrics.mean_squared_error(test_predictions, test_targets)) #print("Final RMSE (on test data): %0.2f" % root_mean_squared_error) df.to_csv('/content/drive/My Drive/House Price Prediction/submission_simple1.csv', index=False) ###Output _____no_output_____
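###Markdown The constants 13.1 and 0.7 hard-coded in preprocess_features are presumably the (approximate) mean and standard deviation of log2(LotArea) on the training data; a quick hedged check (assumes Ames_House_data is loaded as above): ###Code
# Sanity check on the hard-coded normalization constants.
log_lot = np.log2(Ames_House_data["LotArea"])
print("mean of log2(LotArea):", round(log_lot.mean(), 2))  # expected ~13.1
print("std of log2(LotArea):", round(log_lot.std(), 2))    # expected ~0.7
###Output _____no_output_____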
StudentPerformance(minor_project).ipynb
###Markdown We do not have any null values in the database. ###Code #display the type of data stored in the column database.dtypes ###Output _____no_output_____ ###Markdown Numerical Variables are Math score, Reading score and Writing score. Categorical Variables are Gender, Race/ethnicity, Parental level of education, Lunch and Test preparation course. ###Code # to display the count of gender count=database['gender'].value_counts() print(count) #barplot to display the gender database['gender'].value_counts().plot(kind='bar'); # to display the count of parental level of education count=database['parental level of education'].value_counts() print(count) #barplot to display the parental level of education database['parental level of education'].value_counts().plot(kind='bar'); #to display the count of race/ethnicity count=database['race/ethnicity'].value_counts() print(count) #barplot to display race/ethnicity database['race/ethnicity'].value_counts().plot(kind='bar') #to display the count of lunch type count=database['lunch'].value_counts() print(count) #barplot to display lunch type database['lunch'].value_counts().plot(kind='bar') #to display count of test preparation course completed or not count=database['test preparation course'].value_counts() print(count) #barplot to display the test preparation course taken or not database['test preparation course'].value_counts().plot(kind='bar') ###Output none 642 completed 358 Name: test preparation course, dtype: int64 ###Markdown OBSERVATIONS: We have almost the same ratio of boys and girls in the database. Most of the parents have 'some college' as their education level, followed by 'associate's degree'. Group C has the highest number of students, followed by groups D, B, A, and E respectively. Almost two-thirds of the students have standard lunch compared to one-third who got free/reduced lunch. Again, almost two-thirds of the students did not take any test preparation course. ###Code database.describe() ###Output _____no_output_____ ###Markdown Descriptive statistics of the numerical variables, such as total count, mean, standard deviation, minimum and maximum values, and the three quartiles (25%, 50%, 75%), are shown above. ###Code #adding new column named total i.e. sum of all the three subjects database['total'] = database['math score'] + database['reading score'] + database['writing score'] #adding new column named average i.e. 
the average score obtained combining all three subjects database['average'] = database['total'] / 3 #display top 5 rows database.head(5) #display bottom 5 rows database.tail(5) print("Minimum total score in the database is:",database.total.min()) print("Minimum average in the database is:",database.average.min()) print("Maximum total score in the database is:",database.total.max()) print("Maximum average in the database is:",database.average.max()) plt.figure(figsize=(20,5)) sns.countplot(database['math score']) #set passing score to 40 and check the number of students passed in maths database['Math_PassStatus'] = np.where(database['math score']<40, 'Fail', 'Pass') count=database.Math_PassStatus.value_counts() print(count) database.Math_PassStatus.value_counts().plot(kind='bar') plt.figure(figsize=(20,5)) sns.countplot(database['reading score']) #set passing score to 40 and check the number of students passed in reading database['Reading_PassStatus'] = np.where(database['reading score']<40, 'Fail', 'Pass') count=database.Reading_PassStatus.value_counts() print(count) database.Reading_PassStatus.value_counts().plot(kind='bar') plt.figure(figsize=(20,5)) sns.countplot(database['writing score']) #set passing score to 40 and check the number of students passed in writing database['Writing_PassStatus'] = np.where(database['writing score']<40, 'Fail', 'Pass') count=database.Writing_PassStatus.value_counts() print(count) database.Writing_PassStatus.value_counts().plot(kind='bar') sns.distplot(database['average']) ###Output _____no_output_____ ###Markdown From this graph we can see that the average score across all three subjects mostly lies between 60 and 80. ###Code # scores obtained based on gender plt.figure(figsize=(4,4)) sns.barplot(database['gender'], database['math score']) plt.show() plt.figure(figsize=(4,4)) sns.barplot(database['gender'], database['reading score']) plt.show() plt.figure(figsize=(4,4)) sns.barplot(database['gender'], database['writing score']) plt.show() ###Output _____no_output_____ ###Markdown We can hereby observe that boys have scored better marks in maths than girls, but girls have an edge in the reading and writing tests. ###Code # scores obtained based on race/ethnicity plt.figure(figsize=(4,4)) sns.barplot(database['race/ethnicity'], database['math score']) plt.show() plt.figure(figsize=(4,4)) sns.barplot(database['race/ethnicity'], database['reading score']) plt.show() plt.figure(figsize=(4,4)) sns.barplot(database['race/ethnicity'], database['writing score']) plt.show() # Data to plot pie chart based on race/ethnicity labels = 'group A', 'group B', 'group C', 'group D','group E' sizes = database.groupby('race/ethnicity')['average'].mean().values colors = ['orange', 'yellow','green', 'lightcoral', 'lightskyblue'] explode = (0, 0, 0, 0,0.1) # explode the last slice (group E) # Plot the pie chart for the average score plt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=140) plt.title('Average for Every Race/Ethnicity Mean') plt.axis('equal') plt.show() ###Output _____no_output_____ ###Markdown Group E students have scored higher marks in all three subjects and group A students have scored the lowest marks. 
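###Markdown As an aside, the three *_PassStatus columns created above use three near-identical statements; a hedged loop-based equivalent (same threshold of 40, same column names) is: ###Code
# Equivalent loop form of the Math/Reading/Writing pass-status columns.
for subject in ['math', 'reading', 'writing']:
    col = subject.capitalize() + '_PassStatus'
    database[col] = np.where(database[subject + ' score'] < 40, 'Fail', 'Pass')
###Output _____no_output_____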
###Code # scores obtained based on test preparation courses plt.figure(figsize=(4,4)) sns.barplot(database['test preparation course'], database['math score']) plt.show() plt.figure(figsize=(4,4)) sns.barplot(database['test preparation course'], database['reading score']) plt.show() plt.figure(figsize=(4,4)) sns.barplot(database['test preparation course'], database['writing score']) plt.show() ###Output _____no_output_____ ###Markdown We can observe that students who have completed the test preparation course have scored better in all three subjects. ###Code sns.countplot(x='gender', data = database, hue='test preparation course', palette='bright') plt.show() plt.figure(figsize=(10,5)) sns.countplot(x='race/ethnicity', data = database, hue='test preparation course', palette='bright') plt.show() sns.countplot(x='lunch', data = database, hue='test preparation course',palette='bright') plt.show() plt.figure(figsize=(10,5)) sns.countplot(x='parental level of education', data = database, hue='test preparation course',palette='bright') plt.show() ###Output _____no_output_____ ###Markdown Most of the students have not completed the test preparation course. Among the ethnic groups, group C has the highest number of students who completed the test preparation course. More standard-lunch students have completed the test preparation course compared to free/reduced-lunch students. More students whose parental level of education is 'some college', 'associate's degree', or 'some high school' have completed the test preparation course. ###Code # compute the overall pass status database['OverAll_PassStatus'] = database.apply(lambda x : 'Fail' if x['Math_PassStatus'] == 'Fail' or x['Reading_PassStatus'] == 'Fail' or x['Writing_PassStatus'] == 'Fail' else 'Pass', axis =1) count=database.OverAll_PassStatus.value_counts() print(count) database['OverAll_PassStatus'].value_counts().plot(kind='bar'); ###Output Pass 949 Fail 51 Name: OverAll_PassStatus, dtype: int64 ###Markdown If the passing mark is set to 40, only 51 students have failed the exam. ###Code # function to assign grades based on the average score and the overall pass status def GetGrade(average, OverAll_PassStatus): if ( OverAll_PassStatus == 'Fail'): return 'Fail' if ( average >= 80 ): return 'A' if ( average >= 70): return 'B' if ( average >= 60): return 'C' if ( average >= 50): return 'D' if ( average >= 40): return 'E' else: return 'F' database['Grade'] = database.apply(lambda x : GetGrade(x['average'], x['OverAll_PassStatus']), axis=1) count=database.Grade.value_counts() print(count) database['Grade'].value_counts().plot.pie(autopct="%1.1f%%") plt.show() ###Output B 261 C 256 A 198 D 178 E 56 Fail 51 Name: Grade, dtype: int64 ###Markdown Most students have obtained grade B (26.1%), followed by grade C (25.6%) and grade A (19.8%). ###Code sns.countplot(x='gender', data=database, hue='Grade', palette='pastel') ###Output _____no_output_____ ###Markdown Female students have more A and B grades than male students. ###Code plt.figure(figsize=(15,5)) sns.countplot(x='race/ethnicity', data=database, hue='Grade', palette='pastel') ###Output _____no_output_____ ###Markdown Group C has the most failed students, and group A has the worst grades overall.
###Code plt.figure(figsize=(15,5)) sns.countplot(x='parental level of education', data=database, hue='Grade', palette='pastel') ###Output _____no_output_____ ###Markdown Students whose parents have a master's degree did not fail the exam. Students whose parents' educational level is an associate's degree or some college have obtained better grades. ###Code plt.figure(figsize=(15,5)) sns.countplot(x='Grade', data=database, hue='test preparation course', palette='pastel') ###Output _____no_output_____ ###Markdown Most students who scored grade A have completed their test preparation course. Other students have scored good grades without taking the test preparation course. ###Code plt.figure(figsize=(15,5)) sns.barplot(x = "race/ethnicity", y = "math score", hue = "gender", data = database, palette='pastel') ###Output _____no_output_____ ###Markdown Boys have scored better marks in maths than girls irrespective of their race/ethnicity. ###Code plt.figure(figsize=(15,5)) sns.barplot(x = "race/ethnicity", y = "writing score", hue = "gender", data = database,palette='pastel') plt.show() plt.figure(figsize=(15,5)) sns.barplot(x = "race/ethnicity", y = "reading score", hue = "gender", data = database, palette='pastel') plt.show() ###Output _____no_output_____ ###Markdown Girls have scored better than boys in reading and writing irrespective of their race/ethnicity. ###Code plt.figure(figsize=(15,5)) sns.barplot(x = "race/ethnicity", y = "average", hue = "gender", data = database, palette='pastel') ###Output _____no_output_____ ###Markdown Overall, girls have scored better marks than boys in all the groups. ###Code plt.figure(figsize=(15,5)) sns.barplot(x = "parental level of education", y = "math score", hue = "lunch", data = database, palette='pastel') plt.show() plt.figure(figsize=(15,5)) sns.barplot(x = "parental level of education", y = "writing score", hue = "lunch", data = database, palette='pastel') plt.show() plt.figure(figsize=(15,5)) sns.barplot(x = "parental level of education", y = "reading score", hue = "lunch", data = database, palette='pastel') plt.show() plt.figure(figsize=(15,5)) sns.barplot(x = "parental level of education", y = "average", hue = "lunch", data = database, palette='pastel') plt.show() ###Output _____no_output_____ ###Markdown Students who got standard lunch have scored better marks irrespective of their parents' level of education. ###Code plt.figure(figsize=(15,5)) sns.barplot(x = "test preparation course", y = "math score", hue = "Grade", data = database, palette='pastel') plt.show() plt.figure(figsize=(15,5)) sns.barplot(x = "test preparation course", y = "writing score", hue = "Grade", data = database, palette='pastel') plt.show() plt.figure(figsize=(15,5)) sns.barplot(x = "test preparation course", y = "reading score", hue = "Grade", data = database, palette='pastel') plt.show() plt.figure(figsize=(15,5)) sns.barplot(x = "test preparation course", y = "average", hue = "Grade", data = database, palette='pastel') plt.show() ###Output _____no_output_____
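###Markdown The pass/fail and grading steps used above can be collected into one reusable function. The sketch below is an addition to the notebook, not part of the original analysis; it assumes the same pass mark of 40 and the same grade bands on the three-subject average, and the helper name `add_grades` is ours. ###Code
import numpy as np
import pandas as pd

def add_grades(df, pass_mark=40):
    # Flag each subject as Pass/Fail, exactly as done cell-by-cell above.
    for subject in ['math score', 'reading score', 'writing score']:
        col = subject.split()[0].capitalize() + '_PassStatus'
        df[col] = np.where(df[subject] < pass_mark, 'Fail', 'Pass')
    # A student passes overall only if no subject is failed.
    fails = (df[['Math_PassStatus', 'Reading_PassStatus', 'Writing_PassStatus']] == 'Fail').any(axis=1)
    df['OverAll_PassStatus'] = np.where(fails, 'Fail', 'Pass')
    # Grade bands on the average of the three subjects (>=80 A, >=70 B, ..., >=40 E, else F).
    avg = df[['math score', 'reading score', 'writing score']].mean(axis=1)
    bands = pd.cut(avg, bins=[-1, 39.99, 49.99, 59.99, 69.99, 79.99, 100],
                   labels=['F', 'E', 'D', 'C', 'B', 'A'])
    df['Grade'] = np.where(fails, 'Fail', bands.astype(str))
    return df
###Output _____no_output_____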
Project_1_RBM_and_Tomography/h2_energy_with_rnn.ipynb
###Markdown In the file `rnn_helper.py` we defined a recurrent neural network and the functions necessary to train it on the H2 data. Below we will show the results of training multiple RNNs on new values of R. ###Code rnn_helper = RNNHelper(epochs=9, verbose=True) rnn_helper.iterate_over_r() ###Output _____no_output_____
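###Markdown Since `rnn_helper.py` itself is not reproduced here, the sketch below only illustrates the general shape such a wrapper could take: build a small recurrent model and train it once per interatomic separation R. Everything in it — the class internals, the architecture, and the `load_h2_samples` placeholder — is an assumption for illustration, not the project's actual implementation. ###Code
import numpy as np
import tensorflow as tf

def load_h2_samples(r, n=256, n_sites=4):
    # Hypothetical stand-in for loading H2 measurement samples at separation r.
    rng = np.random.default_rng(int(r * 100))
    return rng.integers(0, 2, size=(n, n_sites)).astype('float32'), rng.integers(0, 2, size=n)

class RNNHelperSketch:
    def __init__(self, epochs=9, verbose=True):
        self.epochs, self.verbose = epochs, verbose

    def iterate_over_r(self, r_values=(0.5, 1.0, 1.5, 2.0)):
        # One model per value of R, trained independently.
        for r in r_values:
            x, y = load_h2_samples(r)
            model = tf.keras.Sequential([
                tf.keras.layers.SimpleRNN(32, input_shape=(x.shape[1], 1)),
                tf.keras.layers.Dense(2, activation='softmax'),
            ])
            model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
            model.fit(x[..., None], y, epochs=self.epochs, verbose=int(self.verbose))
###Output _____no_output_____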
lightautoml/google_colab_sberbank_lightautoml_demo.ipynb
###Markdown **Sberbank LightAutoML (LAMA)** *The code in this notebook is borrowed from the library's official repository: https://github.com/sberbank-ai-lab/LightAutoML* Install LightAutoML ###Code #! pip install -U lightautoml ###Output _____no_output_____ ###Markdown Import necessary libraries ###Code # Standard python libraries import logging import os import time import requests logging.basicConfig(format='[%(asctime)s] (%(levelname)s): %(message)s', level=logging.INFO) # Installed libraries import numpy as np import pandas as pd from sklearn.metrics import roc_auc_score from sklearn.model_selection import train_test_split import torch # Imports from our package from lightautoml.automl.presets.tabular_presets import TabularAutoML, TabularUtilizedAutoML from lightautoml.dataset.roles import DatetimeRole from lightautoml.tasks import Task ###Output [2021-07-20 15:13:32,248] (WARNING): /usr/local/lib/python3.7/dist-packages/gensim/similarities/__init__.py:15: UserWarning: The gensim.similarities.levenshtein submodule is disabled, because the optional Levenshtein package <https://pypi.org/project/python-Levenshtein/> is unavailable. Install Levenhstein (e.g. `pip install python-Levenshtein`) to suppress this warning. warnings.warn(msg) ###Markdown Parameters ###Code N_THREADS = 8 # threads cnt for lgbm and linear models N_FOLDS = 5 # folds cnt for AutoML RANDOM_STATE = 42 # fixed random state for various reasons TEST_SIZE = 0.2 # Test size for metric check TIMEOUT = 60 # Time in seconds for automl run TARGET_NAME = 'TARGET' # Target column name ###Output _____no_output_____ ###Markdown Fix torch number of threads and numpy seed ###Code np.random.seed(RANDOM_STATE) torch.set_num_threads(N_THREADS) ###Output _____no_output_____ ###Markdown Example data load ###Code DATASET_DIR = './example_data/test_data_files' DATASET_NAME = 'sampled_app_train.csv' DATASET_FULLNAME = os.path.join(DATASET_DIR, DATASET_NAME) DATASET_URL = 'https://raw.githubusercontent.com/sberbank-ai-lab/LightAutoML/master/example_data/test_data_files/sampled_app_train.csv' %%time if not os.path.exists(DATASET_FULLNAME): os.makedirs(DATASET_DIR, exist_ok=True) dataset = requests.get(DATASET_URL).text with open(DATASET_FULLNAME, 'w') as output: output.write(dataset) %%time data = pd.read_csv(DATASET_FULLNAME) data.head() ###Output _____no_output_____ ###Markdown Some user feature preparation ###Code %%time data['BIRTH_DATE'] = (np.datetime64('2018-01-01') + data['DAYS_BIRTH'].astype(np.dtype('timedelta64[D]'))).astype(str) data['EMP_DATE'] = (np.datetime64('2018-01-01') + np.clip(data['DAYS_EMPLOYED'], None, 0).astype(np.dtype('timedelta64[D]')) ).astype(str) data['constant'] = 1 data['allnan'] = np.nan data['report_dt'] = np.datetime64('2018-01-01') data.drop(['DAYS_BIRTH', 'DAYS_EMPLOYED'], axis=1, inplace=True) ###Output [2021-07-20 15:13:34,761] (INFO): NumExpr defaulting to 2 threads. ###Markdown Data splitting for train-test ###Code %%time train_data, test_data = train_test_split(data, test_size=TEST_SIZE, stratify=data[TARGET_NAME], random_state=RANDOM_STATE) logging.info('Data split.
Parts sizes: train_data = {}, test_data = {}' .format(train_data.shape, test_data.shape)) train_data.head() ###Output _____no_output_____ ###Markdown ========= AutoML preset usage ========= Create Task ###Code %%time task = Task('binary', ) ###Output CPU times: user 3.96 ms, sys: 0 ns, total: 3.96 ms Wall time: 4.08 ms ###Markdown Setup columns roles ###Code %%time roles = {'target': TARGET_NAME, DatetimeRole(base_date=True, seasonality=(), base_feats=False): 'report_dt', } ###Output CPU times: user 51 µs, sys: 11 µs, total: 62 µs Wall time: 67.5 µs ###Markdown Create AutoML from preset ###Code %%time automl = TabularAutoML(task = task, timeout = TIMEOUT, cpu_limit = N_THREADS, reader_params = {'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE}, ) oof_pred = automl.fit_predict(train_data, roles = roles) logging.info('oof_pred:\n{}\nShape = {}'.format(oof_pred, oof_pred.shape)) %%time # Fast feature importances calculation fast_fi = automl.get_feature_scores('fast') fast_fi.set_index('Feature')['Importance'].plot.bar(figsize = (20, 10), grid = True) %%time # Accurate feature importances calculation (Permutation importances) - can take long time to calculate accurate_fi = automl.get_feature_scores('accurate', test_data, silent = False) accurate_fi.set_index('Feature')['Importance'].plot.bar(figsize = (20, 10), grid = True) ###Output LightAutoML used 111 feats 1/111 Calculated score for FLOORSMIN_AVG: 0.0002140 2/111 Calculated score for LIVINGAPARTMENTS_MEDI: -0.0007269 3/111 Calculated score for FLAG_DOCUMENT_13: 0.0008186 4/111 Calculated score for BASEMENTAREA_AVG: 0.0001495 5/111 Calculated score for NAME_EDUCATION_TYPE: 0.0003465 6/111 Calculated score for FLAG_DOCUMENT_3: 0.0006182 7/111 Calculated score for NAME_INCOME_TYPE: 0.0006216 8/111 Calculated score for ENTRANCES_MEDI: 0.0004110 9/111 Calculated score for LANDAREA_AVG: 0.0000849 10/111 Calculated score for WEEKDAY_APPR_PROCESS_START: 0.0014130 11/111 Calculated score for REG_REGION_NOT_LIVE_REGION: -0.0000102 12/111 Calculated score for DAYS_ID_PUBLISH: 0.0006454 13/111 Calculated score for NAME_TYPE_SUITE: -0.0000951 14/111 Calculated score for FLAG_DOCUMENT_8: -0.0007235 15/111 Calculated score for ELEVATORS_MODE: 0.0005774 16/111 Calculated score for LIVINGAREA_MEDI: -0.0000917 17/111 Calculated score for YEARS_BUILD_MODE: 0.0002955 18/111 Calculated score for YEARS_BEGINEXPLUATATION_AVG: 0.0003329 19/111 Calculated score for NONLIVINGAREA_MEDI: -0.0003940 20/111 Calculated score for AMT_CREDIT: 0.0008050 21/111 Calculated score for HOUR_APPR_PROCESS_START: -0.0000951 22/111 Calculated score for NAME_CONTRACT_TYPE: 0.0013179 23/111 Calculated score for FLOORSMIN_MODE: 0.0001902 24/111 Calculated score for YEARS_BEGINEXPLUATATION_MEDI: -0.0010054 25/111 Calculated score for FLAG_OWN_REALTY: -0.0003057 26/111 Calculated score for FLAG_DOCUMENT_6: 0.0000068 27/111 Calculated score for FLAG_DOCUMENT_16: -0.0015115 28/111 Calculated score for FLOORSMAX_MODE: -0.0002819 29/111 Calculated score for FLAG_PHONE: 0.0000917 30/111 Calculated score for ENTRANCES_AVG: -0.0002649 31/111 Calculated score for ORGANIZATION_TYPE: -0.0020856 32/111 Calculated score for FLOORSMIN_MEDI: -0.0000679 33/111 Calculated score for LIVE_REGION_NOT_WORK_REGION: 0.0002717 34/111 Calculated score for EMERGENCYSTATE_MODE: -0.0012704 35/111 Calculated score for FLAG_OWN_CAR: 0.0014572 36/111 Calculated score for REG_CITY_NOT_WORK_CITY: 0.0000951 37/111 Calculated score for LIVE_CITY_NOT_WORK_CITY: -0.0006386 38/111 Calculated score for 
NONLIVINGAPARTMENTS_MODE: -0.0001970 39/111 Calculated score for AMT_ANNUITY: 0.0104178 40/111 Calculated score for CNT_FAM_MEMBERS: -0.0003091 41/111 Calculated score for DAYS_REGISTRATION: 0.0012160 42/111 Calculated score for APARTMENTS_MEDI: -0.0008424 43/111 Calculated score for AMT_REQ_CREDIT_BUREAU_WEEK: -0.0000034 44/111 Calculated score for CNT_CHILDREN: -0.0000136 45/111 Calculated score for EXT_SOURCE_1: 0.0088349 46/111 Calculated score for NONLIVINGAREA_MODE: 0.0002038 47/111 Calculated score for LIVINGAREA_AVG: -0.0000713 48/111 Calculated score for FLAG_DOCUMENT_5: -0.0001053 49/111 Calculated score for FLAG_DOCUMENT_11: -0.0004416 50/111 Calculated score for NONLIVINGAPARTMENTS_MEDI: 0.0001257 51/111 Calculated score for NAME_HOUSING_TYPE: 0.0023573 52/111 Calculated score for FLAG_EMP_PHONE: -0.0006793 53/111 Calculated score for BASEMENTAREA_MODE: -0.0000510 54/111 Calculated score for NAME_FAMILY_STATUS: -0.0026630 55/111 Calculated score for COMMONAREA_AVG: -0.0042833 56/111 Calculated score for AMT_REQ_CREDIT_BUREAU_MON: 0.0027344 57/111 Calculated score for FLOORSMAX_MEDI: -0.0000611 58/111 Calculated score for CODE_GENDER: 0.0026834 59/111 Calculated score for REG_REGION_NOT_WORK_REGION: 0.0000476 60/111 Calculated score for ELEVATORS_AVG: -0.0007167 61/111 Calculated score for FLAG_DOCUMENT_14: 0.0002276 62/111 Calculated score for FLAG_DOCUMENT_9: 0.0006012 63/111 Calculated score for HOUSETYPE_MODE: 0.0003363 64/111 Calculated score for FLAG_EMAIL: -0.0009783 65/111 Calculated score for NONLIVINGAREA_AVG: 0.0002106 66/111 Calculated score for ENTRANCES_MODE: -0.0003125 67/111 Calculated score for OBS_30_CNT_SOCIAL_CIRCLE: 0.0001291 68/111 Calculated score for DEF_60_CNT_SOCIAL_CIRCLE: -0.0015999 69/111 Calculated score for FONDKAPREMONT_MODE: -0.0008696 70/111 Calculated score for COMMONAREA_MEDI: -0.0004042 71/111 Calculated score for TOTALAREA_MODE: 0.0012466 72/111 Calculated score for DEF_30_CNT_SOCIAL_CIRCLE: 0.0018852 73/111 Calculated score for APARTMENTS_AVG: -0.0003804 74/111 Calculated score for EXT_SOURCE_2: 0.0522011 75/111 Calculated score for BIRTH_DATE: -0.0036311 76/111 Calculated score for NONLIVINGAPARTMENTS_AVG: -0.0003227 77/111 Calculated score for EMP_DATE: -0.0025951 78/111 Calculated score for OWN_CAR_AGE: -0.0009375 79/111 Calculated score for YEARS_BUILD_AVG: 0.0008764 80/111 Calculated score for YEARS_BEGINEXPLUATATION_MODE: 0.0000068 81/111 Calculated score for LANDAREA_MODE: 0.0005774 82/111 Calculated score for LIVINGAREA_MODE: 0.0005027 83/111 Calculated score for FLOORSMAX_AVG: -0.0016338 84/111 Calculated score for BASEMENTAREA_MEDI: -0.0001461 85/111 Calculated score for LANDAREA_MEDI: -0.0010326 86/111 Calculated score for YEARS_BUILD_MEDI: -0.0007303 87/111 Calculated score for report_dt: 0.0000000 88/111 Calculated score for FLAG_CONT_MOBILE: -0.0000781 89/111 Calculated score for EXT_SOURCE_3: 0.0467425 90/111 Calculated score for REGION_POPULATION_RELATIVE: -0.0014266 91/111 Calculated score for COMMONAREA_MODE: -0.0015489 92/111 Calculated score for AMT_REQ_CREDIT_BUREAU_HOUR: -0.0011549 93/111 Calculated score for LIVINGAPARTMENTS_AVG: 0.0002344 94/111 Calculated score for REGION_RATING_CLIENT_W_CITY: 0.0020380 95/111 Calculated score for AMT_GOODS_PRICE: 0.0027446 96/111 Calculated score for REGION_RATING_CLIENT: 0.0010530 97/111 Calculated score for AMT_INCOME_TOTAL: 0.0004416 98/111 Calculated score for APARTMENTS_MODE: -0.0009986 99/111 Calculated score for REG_CITY_NOT_LIVE_CITY: 0.0007575 100/111 Calculated score for 
LIVINGAPARTMENTS_MODE: -0.0008798 101/111 Calculated score for OCCUPATION_TYPE: -0.0006861 102/111 Calculated score for DAYS_LAST_PHONE_CHANGE: -0.0010598 103/111 Calculated score for AMT_REQ_CREDIT_BUREAU_DAY: -0.0000068 104/111 Calculated score for ELEVATORS_MEDI: -0.0003651 105/111 Calculated score for FLAG_DOCUMENT_18: -0.0000374 106/111 Calculated score for OBS_60_CNT_SOCIAL_CIRCLE: 0.0011141 107/111 Calculated score for SK_ID_CURR: -0.0011175 108/111 Calculated score for AMT_REQ_CREDIT_BUREAU_YEAR: 0.0029008 109/111 Calculated score for AMT_REQ_CREDIT_BUREAU_QRT: -0.0012704 110/111 Calculated score for WALLSMATERIAL_MODE: 0.0001257 111/111 Calculated score for FLAG_WORK_PHONE: 0.0002072 CPU times: user 1min 2s, sys: 675 ms, total: 1min 3s Wall time: 55.5 s ###Markdown Predict to test data and check scores ###Code %%time test_pred = automl.predict(test_data) logging.info('Prediction for test data:\n{}\nShape = {}' .format(test_pred, test_pred.shape)) logging.info('Check scores...') # logging.info('OOF score: {}'.format(roc_auc_score(train_data[TARGET_NAME].values, oof_pred.data[:, 0]))) logging.info('TEST score: {}'.format(roc_auc_score(test_data[TARGET_NAME].values, test_pred.data[:, 0]))) oof_pred.data[:,0] ###Output _____no_output_____
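###Markdown The "accurate" importances logged above are permutation importances: shuffle one feature, re-score the model, and record the drop. The minimal sketch below only illustrates that general idea and is independent of LightAutoML's own implementation; it assumes a pandas DataFrame `X` and a model exposing `predict_proba`, and the function name is ours. ###Code
import numpy as np
from sklearn.metrics import roc_auc_score

def permutation_importance(model, X, y, feature, n_repeats=3, seed=42):
    # Importance = baseline score minus score with the feature's values shuffled.
    rng = np.random.default_rng(seed)
    baseline = roc_auc_score(y, model.predict_proba(X)[:, 1])
    drops = []
    for _ in range(n_repeats):
        X_shuffled = X.copy()
        X_shuffled[feature] = rng.permutation(X_shuffled[feature].values)
        drops.append(baseline - roc_auc_score(y, model.predict_proba(X_shuffled)[:, 1]))
    return float(np.mean(drops))  # near zero (or negative) means the feature carries little signal
###Output _____no_output_____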
Notebook-Class-exercises/.ipynb_checkpoints/Hello_world-checkpoint.ipynb
###Markdown Hello World ###Code import os import pandas as pd your_name = "Arvind Sathi" print("Hello ", your_name) current_dir = os.getcwd() current_dir ###Output _____no_output_____
notebooks/1.0-rec-initial-data-cleaning.ipynb
###Markdown Initial Data Cleaning Purpose: This notebook provides the initial data cleaning and data exploration for the Christmas Bird Count project. We also limit the scope to just circles in the USA. Author: Jeff Hale Date: 2019-05-29 Update Date: 2020-03-27 Inputs: Raw Christmas Bird Count data from the Audubon Society. Example: cbc_effort_weather_1900-2018.txt - tab-separated file of Christmas Bird Count events going back to 1900. Each row represents a single count in a given year. The data dictionary can be found here: http://www.audubon.org/sites/default/files/documents/cbc_report_field_definitions_2013.pdf Data is saved in this folder: https://drive.google.com/drive/folders/1Nlj9Nq-_dPFTDbrSDf94XMritWYG6E2I Output Files: 1.0-rec-initial-data-cleaning.txt - tab-separated file that has been cleaned, with the scope limited to CBC circles in the United States. The file will be saved in the Google Drive folder: https://drive.google.com/drive/folders/1Nlj9Nq-_dPFTDbrSDf94XMritWYG6E2I Steps or Procedures in the notebook - Load data from the Audubon Society - Drop the test sites - Explore the Shape and Contents of the Data - Data Metric Conversions for temperature, snow and wind - Impossible value removal - Limit the Data to Circles in the USA Where the Data will Be Saved: All data for this project will be saved in Google Drive. To start experimenting with data, download the folder here and put it into your data folder: https://drive.google.com/drive/folders/1Nlj9Nq-_dPFTDbrSDf94XMritWYG6E2I The path should look like this: audubon-cbc/data/Cloud_Data/ See data dictionary: http://www.audubon.org/sites/default/files/documents/cbc_report_field_definitions_2013.pdf ###Code # Imports import numpy as np import pandas as pd import plotly_express as px import matplotlib.pyplot as plt import seaborn as sns import gcsfs pd.set_option('display.max_columns', 500) ###Output _____no_output_____ ###Markdown Set Global Variables ###Code # ALL File Paths should be declared at the TOP of the notebook PATH_TO_RAW_CBC_DATA = "../data/Cloud_Data/cbc_effort_weather_1900-2018.txt" raw_data = pd.read_csv(PATH_TO_RAW_CBC_DATA, encoding = "ISO-8859-1", sep="\t") raw_data.head() raw_data.tail() len(raw_data) ###Output _____no_output_____ ###Markdown Drop the test sites ###Code raw_data = raw_data.drop(raw_data[raw_data["circle_name"].str.contains("do not")].index) ###Output _____no_output_____ ###Markdown Explore the Shape and Contents of the Data ###Code raw_data.shape raw_data.info() raw_data.describe(include = 'all') raw_data.isnull().sum() #What percentage are null pd.DataFrame((raw_data.isnull().sum())/len(raw_data) * 100).sort_values(by = 0, ascending = False) #What types are the different variables raw_data.dtypes.sort_values() ###Output _____no_output_____ ###Markdown __N Field Counters__ ###Code raw_data['n_field_counters'].describe() (raw_data['n_field_counters'].isnull().sum()) / len(raw_data) * 100 raw_data['n_field_counters'].hist(bins = 50); raw_data.loc[raw_data['n_field_counters'] < 100].shape[0] / len(raw_data) ###Output _____no_output_____ ###Markdown __Count Year__ ###Code raw_data['count_year'].describe() (raw_data['count_year'].isnull().sum())/len(raw_data) * 100 raw_data.min() raw_data.max() ###Output _____no_output_____ ###Markdown Give the dataframe the new name df before we add columns to it. Note that this is a reference to raw_data, not a copy. ###Code df = raw_data ###Output _____no_output_____ ###Markdown Data Metric Conversions: Will make two columns for each measured quantity - one metric (SI) and one imperial.
Create columns for imperial and metric. Key = distance_units: miles = 1, inches = 2, kilometers = 3, centimeters = 4 ###Code distance_cols = ['field_distance', 'nocturnal_distance'] df.distance_units.value_counts() df['field_distance_imperial'] = np.where(df['distance_units']=='Miles', df['field_distance'], (df['field_distance'] * .6214)) df['field_distance_metric'] = np.where(df['distance_units']=='Kilometers', df['field_distance'], (df['field_distance'] / .6214)) df['nocturnal_distance_imperial'] = np.where(df['distance_units']=='Miles', df['nocturnal_distance'], (df['nocturnal_distance'] * .6214)) df['nocturnal_distance_metric'] = np.where(df['distance_units']=='Kilometers', df['nocturnal_distance'], (df['nocturnal_distance'] / .6214)) df.head() df.tail() df.nocturnal_distance_imperial.value_counts().head() df.field_distance_imperial.value_counts().head() df.field_distance_metric.value_counts().head() df.nocturnal_distance_metric.value_counts().head() ###Output _____no_output_____ ###Markdown Convert snow. Key = snow_unit: 2 = inches, 4 = centimeters ###Code snow_cols = ['min_snow', 'max_snow'] df.snow_unit.value_counts() df['min_snow_imperial'] = np.where(df['snow_unit']==2, df['min_snow'], (df['min_snow'] / 2.54)) df.head() df.tail() df['min_snow_metric'] = np.where(df['snow_unit']==4, df['min_snow'], (df['min_snow'] * 2.54)) df.tail() df['min_snow_metric'].value_counts().sort_values(ascending=False).head() df['min_snow_imperial'].value_counts().sort_values(ascending=False).head() df['max_snow_metric'] = np.where(df['snow_unit']==4, df['max_snow'], (df['max_snow'] * 2.54)) df['max_snow_imperial'] = np.where(df['snow_unit']==2, df['max_snow'], (df['max_snow'] / 2.54)) df.loc[:, ['max_snow_imperial', 'max_snow_metric']].describe() df.max_snow_imperial.value_counts().sort_values(ascending=False).head() df.max_snow_imperial.plot(kind='hist', bins = 100, range=[200, 2000]) ###Output _____no_output_____ ###Markdown Convert temperatures. Key = temp_unit: 1 = celsius, 2 = fahrenheit ###Code temp_cols = ['min_temp', 'max_temp'] df['min_temp_imperial'] = np.where(df['temp_unit']==2, df['min_temp'], df['min_temp']*9/5 + 32) # C -> F is C*9/5 + 32, not (C+32)*9/5 df.head() df['max_temp_imperial'] = np.where(df['temp_unit']==2, df['max_temp'], df['max_temp']*9/5 + 32) df.tail() df['min_temp_metric'] = np.where(df['temp_unit']==1, df['min_temp'], (df['min_temp']-32)*5/9) df['max_temp_metric'] = np.where(df['temp_unit']==1, df['max_temp'], (df['max_temp']-32)*5/9) df[df.loc[:, 'temp_unit']==2].head() ###Output _____no_output_____ ###Markdown Convert wind ###Code df['max_wind'].value_counts().sort_values(ascending=False).head() df["wind_unit"].value_counts() ###Output _____no_output_____ ###Markdown wind_unit key: 1 = mph, 3 = km/h ###Code df['min_wind_metric'] = np.where(df['wind_unit']==3, df['min_wind'], (df['min_wind'] /.6214)) df['max_wind_metric'] = np.where(df['wind_unit']==3, df['max_wind'], (df['max_wind'] / .6214)) df['min_wind_imperial'] = np.where(df['wind_unit']==1, df['min_wind'], (df['min_wind'] * .6214)) df['max_wind_imperial'] = np.where(df['wind_unit']==1, df['max_wind'], (df['max_wind'] * .6214)) df.min_wind_metric.value_counts().head(10) df.min_wind_imperial.value_counts().head(10) ###Output _____no_output_____ ###Markdown Note that due to rounding, km to miles aren't exact. (A quick round-trip check of the temperature formulas is sketched below.)
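###Markdown Because the Celsius-to-Fahrenheit direction is easy to get wrong (F = C*9/5 + 32, not (C+32)*9/5 — the original cell had the latter, corrected above), a quick round-trip sanity check is worth running. This snippet is an addition, not part of the original notebook: ###Code
import numpy as np

c = np.array([-40.0, 0.0, 20.0, 100.0])
f = c * 9 / 5 + 32                 # C -> F
c_back = (f - 32) * 5 / 9          # F -> C, the same formula used for the *_metric columns
assert np.allclose(c, c_back)      # round trip recovers the originals
print(list(zip(c, f)))             # (-40, -40), (0, 32), (20, 68), (100, 212)
###Output _____no_output_____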
List all columns ###Code df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 106925 entries, 0 to 106928 Data columns (total 47 columns): circle_name 106925 non-null object country_state 106925 non-null object lat 106925 non-null float64 lon 106925 non-null float64 count_year 106925 non-null int64 count_date 106925 non-null object n_field_counters 106707 non-null float64 n_feeder_counters 50575 non-null float64 min_field_parties 56210 non-null float64 max_field_parties 57093 non-null float64 field_hours 97024 non-null float64 feeder_hours 61910 non-null float64 nocturnal_hours 58526 non-null float64 field_distance 98802 non-null float64 nocturnal_distance 53193 non-null float64 distance_units 106597 non-null object min_temp 82436 non-null float64 max_temp 82420 non-null float64 temp_unit 57026 non-null float64 min_wind 80027 non-null float64 max_wind 80109 non-null float64 wind_unit 57026 non-null float64 min_snow 76936 non-null float64 max_snow 77165 non-null float64 snow_unit 53379 non-null float64 am_cloud 82396 non-null float64 pm_cloud 82268 non-null float64 am_rain 81682 non-null object pm_rain 81582 non-null object am_snow 81484 non-null object pm_snow 81410 non-null object field_distance_imperial 98802 non-null float64 field_distance_metric 98802 non-null float64 nocturnal_distance_imperial 53193 non-null float64 nocturnal_distance_metric 53193 non-null float64 min_snow_imperial 76936 non-null float64 min_snow_metric 76936 non-null float64 max_snow_metric 77165 non-null float64 max_snow_imperial 77165 non-null float64 min_temp_imperial 82436 non-null float64 max_temp_imperial 82420 non-null float64 min_temp_metric 82436 non-null float64 max_temp_metric 82420 non-null float64 min_wind_metric 80027 non-null float64 max_wind_metric 80109 non-null float64 min_wind_imperial 80027 non-null float64 max_wind_imperial 80109 non-null float64 dtypes: float64(38), int64(1), object(8) memory usage: 39.2+ MB ###Markdown Impossible value removal ###Code df.min_wind_imperial.max() ###Output _____no_output_____ ###Markdown That's okay ###Code df.min_wind_imperial.min() ###Output _____no_output_____ ###Markdown That's not okay. Wind speeds must be non-negative. We'll replace the negative values with NaN. ###Code df['min_wind_imperial'] = np.where(df['min_wind_imperial']<0, np.NaN, df['min_wind_imperial']) df.min_wind_imperial.min() ###Output _____no_output_____ ###Markdown Also need to do this for the metric equivalent. ###Code df['min_wind_metric'] = np.where(df['min_wind_metric']<0, np.NaN, df['min_wind_metric']) df.min_wind_metric.min() df.max_wind_imperial.max() ###Output _____no_output_____ ###Markdown That's not okay. Pretty sure that's faster than the highest wind speed ever recorded. Yep. The second fastest ever, according to Wikipedia, is 231 mph (372 km/h). ###Code df['max_wind_imperial'] = np.where(df['max_wind_imperial']>231, np.NaN, df['max_wind_imperial']) df.max_wind_imperial.max() ###Output _____no_output_____ ###Markdown Better. Need to do the same for metric. ###Code df['max_wind_metric'] = np.where(df['max_wind_metric']>372, np.NaN, df['max_wind_metric']) df.max_wind_metric.max() ###Output _____no_output_____ ###Markdown ok ###Code df.max_wind_imperial.min() df.min_snow_imperial.max() ###Output _____no_output_____ ###Markdown That's a lot of inches of snow, but not impossible, so we'll keep it for now. ###Code df.min_snow_imperial.min() ###Output _____no_output_____ ###Markdown Ok. (These per-column checks all follow one pattern; a consolidated helper is sketched just below.)
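###Markdown Before continuing the checks, here is a consolidated version of the pattern used above: replace physically impossible values with NaN rather than dropping rows. This helper is an addition for illustration (the name `clip_to_bounds` is ours), not part of the original notebook: ###Code
import numpy as np

def clip_to_bounds(df, col, lo=None, hi=None):
    # Mirror the cell-by-cell np.where() checks: out-of-bounds values become NaN.
    if lo is not None:
        df[col] = np.where(df[col] < lo, np.NaN, df[col])
    if hi is not None:
        df[col] = np.where(df[col] > hi, np.NaN, df[col])
    return df

# e.g. clip_to_bounds(df, 'max_wind_imperial', lo=0, hi=231)
#      clip_to_bounds(df, 'max_wind_metric',   lo=0, hi=372)
###Output _____no_output_____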
###Code df.max_snow_imperial.max() ###Output _____no_output_____ ###Markdown That's a lot of inches of snow, but not impossible, so we'll keep it for now. ###Code df.max_snow_imperial.min() ###Output _____no_output_____ ###Markdown Ok. ###Code df['max_temp_imperial'].max() ###Output _____no_output_____ ###Markdown That's not okay. The highest air temperature reading according to Wikipedia is 56.7 °C / 134.1 °F. Anything higher than those numbers will be removed. ###Code df['max_temp_imperial'] = np.where(df['max_temp_imperial']>134, np.NaN, df['max_temp_imperial']) df.max_temp_imperial.max() df['max_temp_metric'] = np.where(df['max_temp_metric']>56, np.NaN, df['max_temp_metric']) df.max_temp_metric.max() df['max_temp_imperial'].min() ###Output _____no_output_____ ###Markdown That's mighty cold, but possible. Leaving it alone for now. ###Code df['min_temp_imperial'].max() ###Output _____no_output_____ ###Markdown Nope. ###Code df['min_temp_imperial'] = np.where(df['min_temp_imperial']>134, np.NaN, df['min_temp_imperial']) df.min_temp_imperial.max() df['min_temp_metric'] = np.where(df['min_temp_metric']>56, np.NaN, df['min_temp_metric']) df.min_temp_metric.max() df['min_temp_imperial'].min() ###Output _____no_output_____ ###Markdown Nope, that's not possible. Let's drop anything less than -305F. ###Code df['min_temp_imperial'] = np.where(df['min_temp_imperial']<-305, np.NaN, df['min_temp_imperial']) df.min_temp_imperial.min() df['min_temp_metric'] = np.where(df['min_temp_metric']<-187, np.NaN, df['min_temp_metric']) df.min_temp_metric.min() df['nocturnal_distance'].max() ###Output _____no_output_____ ###Markdown Seems like a lot. Leaving it alone for now. ###Code df['nocturnal_distance'].min() ###Output _____no_output_____ ###Markdown Ok. ###Code df['field_distance'].max() ###Output _____no_output_____ ###Markdown Seems like a lot. Leaving it alone for now. ###Code df['field_distance'].min() ###Output _____no_output_____ ###Markdown Ok. Let's look at the other columns. ###Code df.info() df.feeder_hours.min() ###Output _____no_output_____ ###Markdown That doesn't make sense. Let's drop values < 0. ###Code df['feeder_hours'] = np.where(df['feeder_hours']<0, np.NaN, df['feeder_hours']) df.feeder_hours.min() df.min_field_parties.max() df.min_field_parties.min() df.field_hours.max() df.field_hours.min() df.nocturnal_hours.min() df.nocturnal_hours.max() ###Output _____no_output_____ ###Markdown Question: What's the maximum number of hours possible? ###Code df.describe() ###Output _____no_output_____ ###Markdown All these numeric values appear to be possible (perhaps with the exception of the maximum number of hours for some tasks). Note that impossible values for derived columns with _imperial_ and _metric_ suffixes were replaced with NaN. The original column values were not replaced (e.g. max_wind wasn't replaced). Also note that missing values were not imputed. Depending upon the variable of interest and the analysis, missing values may need to be treated in various ways. Limit the Data to Circles in the USA ###Code print(df.shape) df.head(10) # Drop all the locations that are not in the United States indexNamesNUSA = df[~df['country_state'].str.contains("US-")].index # Delete these row indexes from the dataFrame df.drop(indexNamesNUSA , inplace=True) print(df.shape) df.head(10) ###Output (89568, 47) ###Markdown Save the Output ###Code df.to_csv("../data/Cloud_Data/1.0-rec-initial-data-cleaning.txt", sep="\t") ###Output _____no_output_____
AppStat2022/Week4/original/HypothesisTesting/HypothesisTesting_original.ipynb
###Markdown Hypothesis Testing Python notebook for illustrating the concept of Hypothesis Testing and specific test statistics; among them the very useful Kolmogorov-Smirnov test. The Kolmogorov-Smirnov test (KS-test) is a general test to evaluate if two distributions in 1D are the same. This program applies an unbinned KS test, and compares it to a $\chi^2$-test and a simple comparison of means. The distributions compared are two unit Gaussians, where one is then modified by changing: - Mean - Width - Normalisation The sensitivity of each test is then considered for each of these changes. References: - Barlow: p. 155-156 - __[Wikipedia: Kolmogorov-Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov-Smirnov_test)__ - Though influenced by biostatistics, a good discussion of p-values and their distribution can be found here: [How to interpret a p-value histogram?](http://varianceexplained.org/statistics/interpreting-pvalue-histogram/) Authors: Troels C. Petersen (Niels Bohr Institute) Date: 07-12-2021 (latest update)*** ###Code import numpy as np # Matlab like syntax for linear algebra and functions import matplotlib.pyplot as plt # Plots and figures like you know them from Matlab import seaborn as sns # Make the plots nicer to look at from iminuit import Minuit # The actual fitting tool, better than scipy's import sys # Module to see files and folders in directories from scipy.special import erfc from scipy import stats sys.path.append('../../../External_Functions') from ExternalFunctions import Chi2Regression, BinnedLH, UnbinnedLH from ExternalFunctions import nice_string_output, add_text_to_ax # useful functions to print fit results on figure ###Output _____no_output_____ ###Markdown Set the parameters of the plot: ###Code r = np.random # Random generator r.seed(42) # Set a random seed (but a fixed one) save_plots = False verbose = True ###Output _____no_output_____ ###Markdown The small function below is just a simple helper function that takes a 1D-array input along with axis, position and color arguments, and plots the number of entries, the mean and the standard deviation on the axis: ###Code def ax_text(x, ax, posx, posy, color='k'): d = {'Entries': len(x), 'Mean': x.mean(), 'STD': x.std(ddof=1), } add_text_to_ax(posx, posy, nice_string_output(d), ax, fontsize=12, color=color) return None ###Output _____no_output_____ ###Markdown and finally a function that calculates the mean, standard deviation and the standard deviation (i.e.
uncertainty) on mean (sdom): ###Code def mean_std_sdom(x): std = np.std(x, ddof=1) return np.mean(x), std, std / np.sqrt(len(x)) ###Output _____no_output_____ ###Markdown Set up the experiment:How many experiments, and how many events in each: ###Code N_exp = 1 N_events_A = 100 N_events_B = 100 ###Output _____no_output_____ ###Markdown Define the two Gaussians to be generated (no difference to begin with!): ###Code dist_mean_A = 0.0 dist_width_A = 1.0 dist_mean_B = 0.0 dist_width_B = 1.0 ###Output _____no_output_____ ###Markdown Define the number of bins and the range, initialize empty arrays to store the results in and make an empty figure (to be filled in later): ###Code N_bins = 100 xmin, xmax = -5.0, 5.0 all_p_mean = np.zeros(N_exp) all_p_chi2 = np.zeros(N_exp) all_p_ks = np.zeros(N_exp) # Figure for the two distributions, A and B, in the first experiment: fig1, ax1 = plt.subplots(figsize=(10, 6)) plt.close(fig1) ###Output _____no_output_____ ###Markdown Loop over how many times we want to run the experiment, and for each calculate the p-value of the two distributions coming from the same underlying PDF (put in calculations yourself): ###Code for iexp in range(N_exp): if ((iexp+1)%1000 == 0): print(f"Got to experiment number: {iexp+1}") # Generate data: x_A_array = r.normal(dist_mean_A, dist_width_A, N_events_A) x_B_array = r.normal(dist_mean_B, dist_width_B, N_events_B) # Test if there is a difference in the mean: # ------------------------------------------ # Calculate mean and error on mean: mean_A, width_A, sdom_A = mean_std_sdom(x_A_array) mean_B, width_B, sdom_B = mean_std_sdom(x_B_array) # Consider the difference between means in terms of the uncertainty: d_mean = mean_A - mean_B # ... how many sigmas is that away? # Turn a number of sigmas into a probability (i.e. p-value): p_mean = 0.5 # Calculate yourself. HINT: "stats.norm.cdf or stats.norm.sf may be useful!" all_p_mean[iexp] = p_mean # Test if there is a difference with the chi2: # -------------------------------------------- # Chi2 Test: p_chi2 = 0.5 # Calculate the p-value of the Chi2 between histograms of A and B yourself. all_p_chi2[iexp] = p_chi2 # Test if there is a difference with the Kolmogorov-Smirnov test on arrays (i.e. unbinned): # ----------------------------------------------------------------------------------------- p_ks = stats.ks_2samp(x_A_array, x_B_array)[1] # Fortunately, the K-S test is implemented in stats! 
all_p_ks[iexp] = p_ks # Print the results for the first 10 experiments if (verbose and iexp < 10) : print(f"{iexp:4d}: p_mean: {p_mean:7.5f} p_chi2: {p_chi2:7.5f} p_ks: {p_ks:7.5f}") # In case one wants to plot the distribution for visual inspection: if (iexp == 0): ax1.hist(x_A_array, N_bins, (xmin, xmax), histtype='step', label='A', color='blue') ax1.set(title='Histograms of A and B', xlabel='A / B', ylabel='Frequency / 0.05') ax_text(x_A_array, ax1, 0.04, 0.85, 'blue') ax1.hist(x_B_array, N_bins, (xmin, xmax), histtype='step', label='B', color='red') ax_text(x_B_array, ax1, 0.04, 0.65, 'red') ax1.legend() fig1.tight_layout() fig1 ###Output 0: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206 1: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206 2: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206 3: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.70206 4: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.36819 5: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.28194 6: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.90841 7: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.03638 8: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.90841 9: p_mean: 0.50000 p_chi2: 0.50000 p_ks: 0.28194 Got to experiment number: 1000 ###Markdown Show the distribution of hypothesis test p-values: ###Code N_bins = 50 if (N_exp > 1): fig2, ax2 = plt.subplots(nrows=3, figsize=(12, 14)) ax2[0].hist(all_p_mean, N_bins, (0, 1), histtype='step') ax2[0].set(title='Histogram, probability mu', xlabel='p-value', ylabel='Frequency / 0.02', xlim=(0, 1)) ax_text(all_p_mean, ax2[0], 0.04, 0.25) ax2[1].hist(all_p_chi2, N_bins, (0, 1), histtype='step') ax2[1].set(title='Histogram, probability chi2', xlabel='p-value', ylabel='Frequency / 0.02', xlim=(0, 1)) ax_text(all_p_chi2, ax2[1], 0.04, 0.25) ax2[2].hist(all_p_ks, N_bins, (0, 1), histtype='step') ax2[2].set(title='Histogram, probability Kolmogorov', xlabel='p-value', ylabel='Frequency / 0.02', xlim=(0, 1)) ax_text(all_p_ks, ax2[2], 0.04, 0.25) fig2.tight_layout() if save_plots: fig2.savefig('PvalueDists.pdf', dpi=600) ###Output _____no_output_____
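###Markdown The loop above deliberately leaves `p_mean` and `p_chi2` at 0.5 as exercises. One possible way to fill them in is sketched below, kept separate so the exercise stays intact; the two-sided convention and the binning/ndf choices are ours, not the notebook's official solution. ###Code
import numpy as np
from scipy import stats

def p_value_mean(x_A, x_B):
    # Difference of means in units of its combined uncertainty, turned into a two-sided p-value.
    sdom_A = x_A.std(ddof=1) / np.sqrt(len(x_A))
    sdom_B = x_B.std(ddof=1) / np.sqrt(len(x_B))
    n_sigma = (x_A.mean() - x_B.mean()) / np.sqrt(sdom_A**2 + sdom_B**2)
    return 2.0 * stats.norm.sf(abs(n_sigma))      # the hinted stats.norm.sf

def p_value_chi2(x_A, x_B, N_bins=100, xmin=-5.0, xmax=5.0):
    # Chi2 between the two histograms, using only bins where either sample has entries.
    h_A, _ = np.histogram(x_A, bins=N_bins, range=(xmin, xmax))
    h_B, _ = np.histogram(x_B, bins=N_bins, range=(xmin, xmax))
    mask = (h_A + h_B) > 0
    chi2 = np.sum((h_A[mask] - h_B[mask])**2 / (h_A[mask] + h_B[mask]))
    return stats.chi2.sf(chi2, df=mask.sum())
###Output _____no_output_____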
Play_Pandas.ipynb
###Markdown Testing some different commands in pandas ###Code df2 = pd.DataFrame([[1, 2], [4, 5], [7, 8],[7,5]], index=['cobra', 'viper', 'sidewinder','gladius'], columns=['max_speed', 'shield']) ###Output _____no_output_____ ###Markdown Documentation See: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html Selecting data with loc ###Code # df2.loc? df2 ###Output _____no_output_____ ###Markdown Get some indices ###Code inds1 = [False, False, True,True] print(inds1) inds2 = df2.shield == 5 print(inds2) ###Output cobra False viper True sidewinder False gladius True Name: shield, dtype: bool ###Markdown Selecting rows ###Code # Select single row df2.loc['cobra'] df2.loc[[True,False,False,False]] # Select rows using indices, i.e. boolean indexing df2.loc[inds2] # Slice range of rows - note: it's inclusive df2.loc['cobra':'sidewinder'] # Select several rows df2.loc[['cobra','sidewinder']] ###Output _____no_output_____ ###Markdown Selecting single element ###Code df2.loc['cobra','shield'] ###Output _____no_output_____ ###Markdown Selecting rows + columns ###Code # Select rows + single column - returns Series df2.loc[['cobra','sidewinder'], 'shield'] # Select rows + multiple columns - returns DataFrame df2.loc[['cobra','sidewinder'], ['shield']] # Slice rows + single column - returns Series df2.loc['cobra':'sidewinder', 'shield'] # Slice rows + 1 or more columns - returns DataFrame # NBNB ** Double braces are not needed when using : ** df2.loc['cobra':'sidewinder',['shield']] # Use indices df2.loc[inds1,'shield'] # Use inds, multiple columns df2.loc[inds1,['shield']] # Use inds, multiple columns df2.loc[inds1,['shield','max_speed']] ###Output _____no_output_____ ###Markdown Selecting data with iloc ###Code # df2.iloc? # Use positional indices, multiple columns df2.iloc[0:2,0:1] ###Output _____no_output_____ ###Markdown Selecting data with query ###Code df2 df2.query('shield >= 5 and max_speed >= 4') # Query and then select columns df2.query('shield >= 5 and max_speed >= 4')[['shield','max_speed']] ###Output _____no_output_____ ###Markdown Pivot ###Code df2.pivot(columns='shield') df3 = pd.DataFrame({"A": ["foo", "foo", "foo", "foo", "foo", "bar", "bar", "bar", "bar"], "B": ["one", "one", "one", "two", "two", "one", "one", "two", "two"], "C": ["small", "large", "large", "small", "small", "large", "small", "small", "large"], "D": [1, 2, 2, 3, 3, 4, 5, 6, 7], "E": [2, 4, 5, 5, 6, 6, 8, 9, 9]}) df3 table = pd.pivot_table(df3, values='D', index=['A', 'B'], columns=['C'], aggfunc=np.sum) table # This breaks because pivot cannot aggregate duplicate (A, B, C) combinations - see the note below # df3.pivot(values='D', index=['A', 'B'],columns=['C']) ###Output _____no_output_____
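###Markdown Why does the commented-out `df3.pivot(...)` break? Two likely reasons: older pandas versions only accepted scalar `index`/`columns` arguments in `pivot`, and, even where lists are accepted, `df3` contains duplicate (A, B, C) combinations, which `pivot` refuses to reshape — aggregating duplicates is exactly what `pivot_table` with an `aggfunc` is for. A small self-contained demonstration (the frame here is ours, built to contain a duplicate): ###Code
import numpy as np
import pandas as pd

dup = pd.DataFrame({"A": ["foo", "foo", "bar"],
                    "C": ["small", "small", "large"],
                    "D": [1, 2, 3]})
try:
    dup.pivot(index="A", columns="C", values="D")    # ("foo", "small") occurs twice
except ValueError as err:
    print("pivot fails:", err)
# pivot_table aggregates the duplicates instead of failing:
print(pd.pivot_table(dup, values="D", index="A", columns="C", aggfunc=np.sum))
###Output _____no_output_____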
notebooks/8_Model_Data_Augmentation_Pseudo-label_Gen1.ipynb
###Markdown Swish-based classifier using cosine-annealed LR with restarts and data augmentation - Swish activation, 4 layers, 100 neurons per layer - LR using cosine-annealing with restarts and a cycle multiplicity of 2 - Data is augmented via phi rotations, and transverse and longitudinal flips - Validation score uses an ensemble of 10 models weighted by loss Import modules ###Code %matplotlib inline from __future__ import division import sys import os sys.path.append('../') from Modules.Basics import * from Modules.Class_Basics import * ###Output Using TensorFlow backend. ###Markdown Options ###Code with open(dirLoc + 'features.pkl', 'rb') as fin: classTrainFeatures = pickle.load(fin) nSplits = 10 patience = 2 maxEpochs = 200 ensembleSize = 10 ensembleMode = 'loss' compileArgs = {'loss':'binary_crossentropy', 'optimizer':'adam'} trainParams = {'epochs' : 1, 'batch_size' : 256, 'verbose' : 0} modelParams = {'version':'modelSwish', 'nIn':len(classTrainFeatures), 'compileArgs':compileArgs, 'mode':'classifier'} print ("\nTraining on", len(classTrainFeatures), "features:", [var for var in classTrainFeatures]) ###Output Training on 31 features: ['DER_mass_MMC', 'DER_mass_transverse_met_lep', 'DER_mass_vis', 'DER_pt_h', 'DER_deltaeta_jet_jet', 'DER_mass_jet_jet', 'DER_prodeta_jet_jet', 'DER_deltar_tau_lep', 'DER_pt_tot', 'DER_sum_pt', 'DER_pt_ratio_lep_tau', 'DER_met_phi_centrality', 'DER_lep_eta_centrality', 'PRI_met_pt', 'PRI_met_sumet', 'PRI_jet_num', 'PRI_jet_all_pt', 'PRI_tau_px', 'PRI_tau_py', 'PRI_tau_pz', 'PRI_lep_px', 'PRI_lep_py', 'PRI_lep_pz', 'PRI_jet_leading_px', 'PRI_jet_leading_py', 'PRI_jet_leading_pz', 'PRI_jet_subleading_px', 'PRI_jet_subleading_py', 'PRI_jet_subleading_pz', 'PRI_met_px', 'PRI_met_py'] ###Markdown Import data ###Code with open(dirLoc + 'inputPipe.pkl', 'rb') as fin: inputPipe = pickle.load(fin) trainData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'pseudo_train.hdf5', "r+"), inputPipe=inputPipe, augRotMult=16) ###Output _____no_output_____ ###Markdown Determine LR ###Code lrFinder = batchLRFind(trainData, getModel, modelParams, trainParams, lrBounds=[1e-5,1e-1], trainOnWeights=True, verbose=0) ###Output 2 classes found, running in binary mode ###Markdown Train classifier ###Code results, histories = batchTrainClassifier(trainData, nSplits, getModel, {**modelParams, 'compileArgs':{**compileArgs, 'lr':2e-3}}, trainParams, trainOnWeights=True, maxEpochs=maxEpochs, cosAnnealMult=2, plotLR=1, reduxDecay=1, patience=patience, verbose=1, amsSize=250000) ###Output Using cosine annealing Training using weights Running fold 1 / 10 2 classes found, running in binary mode 1 New best found: 3.3494162287467103e-05 2 New best found: 3.1251350577619516e-05 3 New best found: 3.0968467955218404e-05 4 New best found: 3.0624923153513834e-05 5 New best found: 2.972758020216545e-05 6 New best found: 2.937384930813088e-05 7 New best found: 2.9207264546658048e-05 10 New best found: 2.908599202641484e-05 11 New best found: 2.887690176432633e-05 12 New best found: 2.8494943980020672e-05 13 New best found: 2.8454158588881782e-05 14 New best found: 2.8288575807931837e-05 15 New best found: 2.8256847588616272e-05 21 New best found: 2.8249826953841972e-05 23 New best found: 2.803021944427844e-05 24 New best found: 2.7972700118413557e-05 26 New best found: 2.792655028370097e-05 27 New best found: 2.7758187295728315e-05 28 New best found: 2.7746438404781287e-05 29 New best found: 2.7712935441725306e-05 30 New best found: 2.7699189685260517e-05 31 New best found:
2.7697248742828404e-05 46 New best found: 2.7691872886670435e-05 50 New best found: 2.7597760291347505e-05 52 New best found: 2.7534906033198385e-05 55 New best found: 2.751687941008138e-05 56 New best found: 2.744188206562769e-05 57 New best found: 2.7411726427314398e-05 59 New best found: 2.7408041547214524e-05 60 New best found: 2.7386482904749667e-05 61 New best found: 2.7376100023715723e-05 62 New best found: 2.737245789523894e-05 96 New best found: 2.7313614653958784e-05 97 New best found: 2.7238783947878085e-05 102 New best found: 2.7208994674232294e-05 113 New best found: 2.7160848750986777e-05 115 New best found: 2.714154850817119e-05 116 New best found: 2.7121460546043484e-05 CosineAnneal stalling after 255 epochs, entering redux decay at LR=0.0001425649778654921 Early stopping after 265 epochs Score is: {'loss': 2.7121460546043484e-05, 'wAUC': 0.05572767934471157, 'AUC': 0.08961860231515995, 'AMS': 10.074470784980099, 'cut': 0.9971368312835693} ###Markdown The impact of data augmentation is pretty clear. Comparing the training here to that of the CLR Swish model without augmentation, we can see that we effectively gain another LR cycle's worth of training epochs before we start overfitting, which allows the networks to reach much lower losses (3.18e-5 c.f. 3.23e-5) and higher AMSs (3.98 c.f. 3.71) Construct ensemble ###Code with open('train_weights/resultsFile.pkl', 'rb') as fin: results = pickle.load(fin) ensemble, weights = assembleEnsemble(results, ensembleSize, ensembleMode, compileArgs) ###Output Choosing ensemble by loss Model 0 is 5 with loss = 2.6590147800629504e-05 Model 1 is 6 with loss = 2.6820445510714143e-05 Model 2 is 2 with loss = 2.6826869298535008e-05 Model 3 is 3 with loss = 2.689515164667775e-05 Model 4 is 1 with loss = 2.7071041502239852e-05 Model 5 is 0 with loss = 2.7121460546043484e-05 Model 6 is 8 with loss = 2.7251977471026633e-05 Model 7 is 9 with loss = 2.7374359068844444e-05 Model 8 is 4 with loss = 2.7409945491431063e-05 Model 9 is 7 with loss = 2.7461055062593557e-05 ###Markdown Response on validation data with TTA ###Code valData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'val.hdf5', "r+"), inputPipe=inputPipe, rotate = True, reflect = True, augRotMult=8) batchEnsemblePredict(ensemble, weights, valData, ensembleSize=ensembleSize, verbose=1) print('Testing ROC AUC: unweighted {}, weighted {}'.format(roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source)), roc_auc_score(getFeature('targets', valData.source), getFeature('pred', valData.source), sample_weight=getFeature('weights', valData.source)))) amsScanSlow(convertToDF(valData.source)) %%time bootstrapMeanAMS(convertToDF(valData.source), N=512) ###Output 50000 candidates loaded Mean AMS=1.25+-0.02, at mean cut of 0.21+-0.02 Exact mean cut 0.2129122701298911, corresponds to AMS of 1.2450575639639025 CPU times: user 3.48 s, sys: 5.27 s, total: 8.75 s Wall time: 1min 26s ###Markdown Adding test-time augmentation provides further benefits: overall AMS 3.90->3.97, AMS corresponding to mean cut 3.89->3.91.
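###Markdown Test-time augmentation as used here amounts to averaging the ensemble's predictions over symmetry-transformed copies of each event (phi rotations and transverse/longitudinal flips leave the physics unchanged). The generic sketch below only illustrates that averaging; the `model.predict` interface and the transform list are placeholders, not the internals of `RotationReflectionBatch` or `batchEnsemblePredict`: ###Code
import numpy as np

def predict_with_tta(model, x, transforms):
    # Average predictions over augmented copies of the same events;
    # each transform must map features to a physically equivalent event.
    preds = [model.predict(t(x)) for t in transforms]
    return np.mean(preds, axis=0)

# e.g. transforms = [identity] + phi rotations + transverse/longitudinal flips
###Output _____no_output_____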
###Code val = convertToDF(valData.source) plotFeat(val, 'pred_class', [(val.gen_target==0), (val.gen_target==1)], ['bkg', 'sig']) batchEnsemblePredict(ensemble, weights, trainData, ensembleSize=1, verbose=1) train = convertToDF(trainData.source) plotFeat(val, 'pred_class', [(val.gen_target==0), (val.gen_target==1)], ['bkg', 'sig']) ###Output /Users/giles/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval /Users/giles/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6499: MatplotlibDeprecationWarning: The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead. alternative="'density'", removal="3.1") /Users/giles/anaconda3/lib/python3.6/site-packages/matplotlib/axes/_axes.py:6499: MatplotlibDeprecationWarning: The 'normed' kwarg was deprecated in Matplotlib 2.1 and will be removed in 3.1. Use 'density' instead. alternative="'density'", removal="3.1") ###Markdown Test scoring ###Code testData = RotationReflectionBatch(classTrainFeatures, h5py.File(dirLoc + 'testing.hdf5', "r+"), inputPipe=inputPipe, rotate = True, reflect = True, augRotMult=8) %%time batchEnsemblePredict(ensemble, weights, testData, ensembleSize=ensembleSize, verbose=1) scoreTestOD(testData.source, 0.9619597619166598) ###Output _____no_output_____ ###Markdown Using the cuts we optimised by bootstrapping the validation data, we end up with a private score which would have beaten the winning entry (3.817 c.f. 3.806). It would be nice if the public score were higher, though. Save/Load ###Code name = "weights/Swish_CLR_TTA_Pseudo1" saveEnsemble(name, ensemble, weights, compileArgs, overwrite=1) ensemble, weights, compileArgs, _, _ = loadEnsemble(name) ###Output _____no_output_____
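###Markdown For reference, the "cosine annealing with restarts" schedule used throughout (`cosAnnealMult=2`) follows the SGDR recipe: within a cycle of length T_i, lr(t) = lr_min + 0.5 (lr_max - lr_min)(1 + cos(pi t / T_i)), and after each restart the cycle length is multiplied, here doubled. The sketch below is a generic illustration of that schedule, not the library's implementation; lr_max = 2e-3 matches the compile args above, while the initial cycle length t0 = 10 is an assumption. ###Code
import numpy as np

def cosine_annealing_with_restarts(n_epochs, lr_max=2e-3, lr_min=0.0, t0=10, mult=2):
    # SGDR-style schedule: cosine decay within a cycle, cycle length growing by `mult`.
    lrs, t, t_i = [], 0, t0
    for _ in range(n_epochs):
        lrs.append(lr_min + 0.5 * (lr_max - lr_min) * (1 + np.cos(np.pi * t / t_i)))
        t += 1
        if t >= t_i:              # restart: jump back to lr_max, lengthen the cycle
            t, t_i = 0, t_i * mult
    return lrs
###Output _____no_output_____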
Lab_Activity_Week_2.ipynb
###Markdown Declaring Variables ###Code import numpy as np import matplotlib.pyplot as mplib lit1 = ([4, 6,]) lit2 = ([2, 5,]) lit3 = ([6, 7,]) lit4 = ([5, 1,]) lit5 = ([6, 6,]) vec1 = np.array(lit1) vec2 = np.array(lit2) vec3 = np.array(lit3) vec4 = np.array(lit4) vec5 = np.array(lit5) ###Output _____no_output_____ ###Markdown Printing Vertically ###Code #printing arrays print(vec1) print(vec2) print(vec3) print(vec4) print(vec5) ###Output [4 6 4] [2 5 2] [6 7 8] [5 1 0] [6 6 2] ###Markdown Printing Horizontally ###Code #using for loop to print vertically int_list = [[2, 5, 6], [3, 5, 1], [9, 2, 8]] for i in range(len(int_list)): for v in int_list: vec_vert=np.array(np.array) print(v[i], end =' ') print() ###Output 2 3 9 5 5 2 6 1 8 ###Markdown addition and subtraction ![2.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAASwAAADhCAIAAAD9Hh/8AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAD4bSURBVHhe7Z0HfBRF+8dt/9fX195eUZHeEUTpRQQFX1ERRUWaSJEu1VCkJ6EkCIQOEgiE0EsKoQUSCElI7tJ77yG9l0u53N3+f5NZjnAh5C65shfm++HDJ7s7uzf7zPyeeWZ3Z+YpjsFgGBQmQgbDwDARMhgGhomQwTAwTIQMhoFhImQwDAwTIYNhYJgIGQwDw0TIYBgYJkIGw8AwETIYBoaJkMEwMEyEDIaBYSJkMAwMEyGDYWCYCBkMA6OxCLOysoKCgsLCwiQSCb+L4/Lz84ODg7EfxMfHl5WV8Qc4rqqqKioqih4KDw/Py8uTy+X0kEKhSEpKwv7k5OTc3FyaRklCQgIS47I4q6Kigp4ilUpxtatXrzo4ONy+fTszMxMXoYdoHkJDQ0tKSugeHMKVcSn8T/doSnFxsZeXl4uLi5+fHy6LCypvJycnh6bJyMjAZkxMTHV1dVFRkdIOSIbTaZrU1FS6U0l6DfxGzc2Wl5fTxEpo5lVMTYEdoqOjb968CSPAhtnZ2fQ6SlJSUpTWq6yspGfRzFPrubu74xbqWk+ZDaX1YGS6pza4NbFY7OTkdOnSJX9/f2UOcRYtUyXIGz0EqB0iIiKQE7pHJpPFxcVhZ2JiIs1wzUmE2NhYZVUpLS319fVFQYhEIlQVHOIT3aewsJCmRBF4e3s7Ojo6Oztjv/J2cCkYmSbGryAnyjwAlB1yhevDLGlpacgVf0AvaCZCiGHWrFnvvPNOq1atLly4wO/luDNnznzwwQfYDzp06DBq1CgUM24Mh+7du/fpp5/SQ0jTv3//devWwY44hFulV/vjjz+OHj1K0yj59ddfjx8/jlM++ugjlBPSFxQUrFq1ql27dv/+97//9a9/vfTSS/369Tt37hz9IZqH9957b8OGDdS+uD6ujEvNmzdPWZxKUO1QoqhAKGB+18OgMCZMmPDqq6/i5956663ffvsNtUR5O4cPH6bJLC0t33333W+++QbZQykq7dCmTZuvvvoqJCQEaUxMTOhOJWZmZubm5vTvFi1adOrU6ccff4Rg6DUBqvUvv/yCoyqmBlDF4sWL27Zt+8ILL7z44ouDBg3auHEjvZSShQsXUut9/PHH1AfBe2InzqLWe/nll4cMGXL+/Hla4aj1wM6dO+kepfW2b9+uYj0YbfTo0bAJrgP++9//jh8/HgLGIWWZKjl48KBS6seOHcPtoIa4urrSPcgbChEGhDVOnz6NDPCnvfPOmDFjICGcCw/4/fffv/3228j566+/PmfOHFQwPlENKHSoDikhIRx64403kKvnn38el500aRKtPLAn/qbpYfDu3bsjn/AXOIS7QyapZVCpUN+uX79ekzs9oZkI0cq1b9/+qRp+//13pS+BhHDb//d//9exY0fc5DPPPIP7P3ToEIoELqdbt25ID0vhPpEMtzpz5kzoEOJBPcMh6A0F0LNnT5gPmzAEbLRgwQJcAenff/99+G8YcdGiRah2Tz/9NMoDl0L9Q+KWLVuiJqEAaB6wBxmAC8AeXB9Xxp7JkyfTiqUEvhClhQoBpk+fDnXxB+6D0zdt2vTcc8+98sorqKwffvjhihUr0P4ob2fPnj005dq1a5999tmBAwfiImgWkAfwySefoKSR1dWrVyMbkByuQDP85ptv9ujRAzV7+fLl2MT1+/btiwqNxMin0iPA8eNGkADUNjXsNm3aNJgaRoYdkAYVF1lVsd6aNWuo9WAftACwHuyJTZyFjMFB0My0bt2a6kFpPRjk1q1bta2HzNe2Htp8GAS5RXokxvXxBzaHDx+O5ldZplALbhO5gi/A1ei5sF6vXr2QeOnSpfSadnZ2KFPcPiIOZS3q3LkzTpw6dSqcPhwljIlTUG1QhV577bVvv/0WmlfaExaAo4FsEDJAP9iDlKgeSIxyAfBuMBoMC0dJ0/fp0wdWwiEUAbKBmAK/CMvg/wEDBuDn0FrSDOsHzUS4f/9+2AhOFMZC2QcEBND91Hy4bYQoAD4MtwS54vaUtXbz5s2oDaamprAjvBQ8bm0Rwkaw1JQpU7CJpgZqR8BDL0tFiLgLlQyG++mnnxBvwIOePHmyS5cuSA/DoXFQViOAEoJjfowI//zzT+QBdRduFeWBascfuA9CuJ9//hnnDhs2DE0iQke0JNivjghhB1gGMkAy3BEuhbtDBUU+sQeaRwSLqImKEKEBLr5+/Xr8jUYeVRzXRG6RN9gQ9QwXR5VCAEZ/bt++fcg56hlcEoQaGBgIS8LZq1gPwTM1CBWhvb09tR5uCnlDC4BGCZdF5f7666+ViXE66N27d23r1RYh/kDziOtAY2i78LuRkZEIT5BPVAzsgZ+iZTpu3DgUCnJVO8xGm4PEOL1r167IAw6hiUPiH374ASaieYD7RvniRESY6NcgGEEmUQfQWOG3EIHb2NggxFXac9myZUgMLzN//nxYDD7OysoKF8ctIKswFMyFU3CPVIS4KZQUlIm/P/vsM8QvCDSQDLeADGAT1sO98znWCxqIELcBdcEiiO7gz3DDFhYW1MnVLm9swogocty8ra2tSq2FWVHq2EQ8g4qoFCG9bUQI2Pz8889xSHlZFADidUR9+Gl4bjg8HAL46X/++Qe/AlWjOtLEMCWqMvI2dux
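###Markdown Two small fixes for the cells above, offered as a sketch rather than as the original author's code: the headings "Printing Vertically"/"Printing Horizontally" disagree with the comment in the second cell ("#using for loop to print vertically"), and the line `vec_vert=np.array(np.array)` assigns the `np.array` function itself rather than building an array, so it can simply be dropped. (The printed outputs also correspond to an earlier three-element version of the lists, e.g. `[4 6 4]` versus `lit1 = [4, 6]` — stale cell output.) The column-wise printing loop reduces to a transpose: ###Code
import numpy as np

int_list = [[2, 5, 6], [3, 5, 1], [9, 2, 8]]
# Print column-by-column, i.e. the transpose of the row-by-row layout:
for col in np.array(int_list).T:
    print(*col)
###Output 2 3 9 5 5 2 6 1 8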
<remaining base64 image data truncated>)

###Code
#Addition
add1 = np.add(vec1, vec2)
add2 = np.add(add1, vec3)
add3 = np.add(add2, vec4)
add4 = np.add(add3, vec5)

#Subtraction
sub1 = np.subtract(vec1, vec2)
sub2 = np.subtract(sub1, vec3)
sub3 = np.subtract(sub2, vec4)
sub4 = np.subtract(sub3, vec5)

print(f"The sum of the arrays is {add4}")
print(f"The difference of the arrays is {sub4}")
###Output
The sum of the arrays is [23 25]
The difference of the arrays is [-15 -13]
###Markdown
Squaring ![squaring formula.png](data:image/png;base64,<base64 data omitted>) Square root (for lengths)
![1.png](data:image/png;base64,<base64 data omitted>) ![square rooting table.jpg](data:image/jpeg;base64,<base64 data omitted>)

###Code
#Squaring
# NB: the second positional argument of np.square is the `out` array, so each
# call below squares its first argument and overwrites the second in place.
sq1 = np.square(vec1, vec2)
sq2 = np.square(sq1, vec3)
sq3 = np.square(sq2, vec4)
sq4 = np.square(sq3, vec5)

#Square root
sr1 = np.sqrt(add4)
###Output
_____no_output_____
###Markdown
Summation ![image.png](data:image/png;base64,<base64 data omitted>)

###Code
#Summation
su1 = np.sum(add4)
su2 = np.sum(sub4)
su3 = np.sum(sq4)
print(sq4)
###Output
48
[-15 -13]
[ 4294967296 2821109907456]
###Markdown
Visualizing

###Code
A = add4
B = sub4
C = sq1
D = sr1

mplib.scatter(A[0], A[1], label='A', c='black')
mplib.scatter(B[0], B[1], label='B', c='blue')
mplib.scatter(C[0], C[1], label='C', c='yellow')
mplib.scatter(D[0], D[1], label='D', c='red')

mplib.title("Visualizing the Vectors")
mplib.xlim(-100, 100)
mplib.ylim(-100, 100)
mplib.axhline(y=0, color='black')
mplib.axvline(x=0, color='black')
mplib.grid()
mplib.legend()
mplib.show()
###Output
_____no_output_____
###Markdown
Result of the Operations

###Code
print(f"The sum of the arrays is {add4}")
print("I used the numpy add function, which is used when we want to compute the")
print("addition of two arrays. It adds its arguments element-wise.")
print("------------------------------------------------")
print(f"The difference of the arrays is {sub4}")
print("The numpy.subtract() function is used when we want to compute the difference")
print("of two arrays. It returns the difference of arr1 and arr2, element-wise.")
print("------------------------------------------------")
print(f"Squaring the vectors result is {sq4}")
print("The function returns a new array with each element value as the square of the")
print("source array elements. The source array remains unchanged.")
print("------------------------------------------------")
print(f"The square root of a vector is {sr1}")
print("The output of the function is simply an array of those calculated square")
print("roots, arranged in exactly the same shape as the input array.")
print("------------------------------------------------")
print(f"The summation of the given vector is {su1}")
print("This sums up the elements of an array: it takes the elements within an")
print("array and adds them together.")
###Output
The sum of the arrays is [23 25]
I used the numpy add function, which is used when we want to compute the
addition of two arrays. It adds its arguments element-wise.
------------------------------------------------
The difference of the arrays is [-15 -13]
The numpy.subtract() function is used when we want to compute the difference
of two arrays. It returns the difference of arr1 and arr2, element-wise.
------------------------------------------------
Squaring the vectors result is [ 4294967296 2821109907456]
The function returns a new array with each element value as the square of the
source array elements. The source array remains unchanged.
------------------------------------------------
The square root of a vector is [4.79583152 5.        ]
The output of the function is simply an array of those calculated square
roots, arranged in exactly the same shape as the input array.
------------------------------------------------
The summation of the given vector is 48
This sums up the elements of an array: it takes the elements within an
array and adds them together.
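###Markdown
A short aside on `np.square`: a minimal sketch (independent of the vectors above) showing that a second positional argument is used as the `out` buffer and gets overwritten, while the single-argument form returns a new array and leaves its input untouched. The arrays `a` and `b` here are illustrative only, not from the original notebook.

###Code
import numpy as np

a = np.array([2, 3])
b = np.array([10, 20])

squared = np.square(a)   # returns a new array: [4 9]; a is unchanged
np.square(a, b)          # writes the result into b in place
print(squared)           # [4 9]
print(b)                 # [4 9] -- b was overwritten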
beginner-lessons/interdisciplinary-communication/.ipynb_checkpoints/Welcome-checkpoint.ipynb
###Markdown
Welcome to the Hour of CI! The Hour of Cyberinfrastructure (Hour of CI) project will introduce you to the world of cyberinfrastructure (CI). If this is your first lesson, then we recommend starting with the **[Gateway Lesson](https://www.hourofci.org/gateway-lesson)**, which will introduce you to the Hour of CI project and the eight knowledge areas that make up Cyber Literacy for Geographic Information Science. This is the **Beginner Interdisciplinary Communication** lesson. To start, click on the "Run this cell" button below to set up your Hour of CI environment. It looks like this:

###Code
!cd ../..; sh setupHourofCI # Run this cell (button on left) to set up your Hour of CI environment
###Output
_____no_output_____
Part-2-PyMC3-modeling/ipynb/GAM/Step-2-Modeling_AOSTZ_with_pymc3-Piecewise_trend_model.ipynb
###Markdown
Notebook Synopsis: Here I develop a set of models similar to those of Step 1, substituting a piecewise trend sub-model for the single trend component. Specifically I:
* Load the training data generated and saved in the previous NB.
* Develop and combine piecewise trend, seasonal, and residual noise submodels similar to the previous NB.
* Compare models using WAIC or PSIS-LOOCV.
* Retain and save models predicted to perform better.

###Code
import pickle
import pathlib
from platform import python_version as pyver

import pandas as pd
import numpy as np
import pymc3 as pm
import theano.tensor as tt
from sklearn.preprocessing import MinMaxScaler
import arviz as ar
import matplotlib.pyplot as pl
import matplotlib.dates as mdates
from matplotlib import rcParams

def print_ver(pkg, name=None):
    try:
        print(f'{pkg.__name__}: {pkg.__version__}')
    except AttributeError:
        print(f'{name}: {pkg}')

print_ver(pyver(), 'python')
for pi in [np, pd, pm, ar]:
    print_ver(pi)

%matplotlib inline

years = mdates.YearLocator(day=1)
months = mdates.MonthLocator(bymonthday=1)
rcParams['xtick.major.size'] = 8
rcParams['xtick.minor.size'] = 4
rcParams['xtick.minor.visible'] = True
rcParams['xtick.labelsize'] = 16
rcParams['ytick.labelsize'] = 16
rcParams['axes.labelsize'] = 16
rcParams['axes.titlesize'] = 18
rcParams['axes.formatter.limits'] = (-3, 2)

with open('../../pickleJar/datadict.pkl', 'rb') as fb:
    datadict = pickle.load(fb)

df = pd.DataFrame(datadict['frame'])
df['aostz_scaled'] = datadict['y_s']
minmax_t = MinMaxScaler()
df['t_scaled'] = minmax_t.fit_transform(datadict['x'][:, None]).squeeze()
del datadict
df.head()
###Output
_____no_output_____
###Markdown
Modeling a Piecewise Trend: Within the context of Generalized Additive Models (GAMs), which arise from the simple additive combination of submodels, I develop here a set of models following $$y(t) = g(t) + s(t) + ar1(t)$$ where \\(y(t)\\) is the modeled signal (chlorophyll in the AOSTZ sector), \\(g(t)\\) is the trend (i.e. *rate of change*) sub-model, \\(s(t)\\) is the seasonal sub-model, and \\(ar1(t)\\) is the AR1 residual. The piecewise model is implemented by inserting a fixed number of changepoints such that $$g(t) = (k + a(t)^T\delta)t + (m + a(t)^T\gamma)$$ where \\(k\\) is the base trend, modified by preset changepoints stored in a vector \\(s\\). At each unique changepoint \\(s_j\\) the trend is adjusted by \\(\delta_j\\), stored in a vector \\(\delta\\), every time \\(t\\) surpasses a changepoint \\(s_j\\). Used for this purpose, \\(a(t)\\) is basically a vectorized switchboard that turns on for a given switchpoint such that \begin{equation}a(t) = \begin{cases} 1, & \text{if $t \geq s_j$} \\ 0, & \text{otherwise}\end{cases}\end{equation} The second part, \\(m + a(t)^T\gamma\\), ensures the segments defined by the switchpoints are connected. Here, \\(m\\) is an offset parameter, and \\(\gamma_j\\) is set to \\(-s_j\delta_j\\). The issue, though, is to find the right number of preset changepoints that will capture actual changepoints while not bogging down the inference. Moreover, for the sake of practicality, these will need to be regularly spaced. Here I try several setups, including one changepoint at the beginning of the year, one for every season (4 pts/year), one every two months (6 pts/year), and one for every month (as many changepoints as data points).
The idea is then to put a rather restrictive Laplace prior on \\(\delta\\) to rule out unlikely changepoints, effectively setting the corresponding \\(\delta_j\\) to 0. First is to define some [helper functions as in the previous notebook](https://tinyurl.com/y3nubuquhelpers):

###Code
π = np.pi  # not defined by the imports above; needed by fourier_series below

def fourier_series(t, p=12, n=1):
    """
    input:
    ------
    t [numpy array]: vector of time index
    p [int]: period
    n [int]: number of fourier components

    output:
    -------
    sinusoids [numpy array]: 2D array of cosines and sines
    """
    p = p / t.size
    wls = 2 * π * np.arange(1, n+1) / p
    x_ = wls * t[:, None]
    sinusoids = np.concatenate((np.cos(x_), np.sin(x_)), axis=1)
    return sinusoids

def seasonality(mdl, n_fourier, t):
    """
    mdl [pymc3 Model class]: model object
    n_fourier [int]: number of fourier components
    t [numpy array]: vector of time index
    """
    with mdl:
        σ = pm.Exponential('σ', 1)
        f_coefs = pm.Normal('fourier_coefs', 0, sd=1, shape=(n_fourier*2))
        season = pm.Deterministic('season',
                                  tt.dot(fourier_series(t, n=n_fourier), f_coefs))
    return season

def piecewise_trend(mdl, s, t, a_t, obs,
                    k_prior_scale=5, δ_prior_scale=0.05, m_prior_scale=5):
    """
    input:
    ------
    mdl [pymc3 Model class]: model object
    s [numpy array]: changepoint vector
    t [numpy array]: time vector
    obs [numpy array]: vector of observations
    a_t [numpy int array]: 2D (t*s) adjustment indicator array
    k_prior_scale [float]: base trend normal prior scale parameter (default=5)
    δ_prior_scale [float]: trend adjustment laplace prior scale param. (default=0.05)
    m_prior_scale [float]: base offset normal prior scale param. (default=5)
    """
    with mdl:
        # Priors:
        k = pm.Normal('k', 0, k_prior_scale)  # base trend prior
        if δ_prior_scale is None:
            δ_prior_scale = pm.Exponential('τ', 1.5)
        δ = pm.Laplace('δ', 0, δ_prior_scale, shape=s.size)  # rate of change prior
        m = pm.Normal('m', 0, m_prior_scale)  # offset prior
        γ = -s * δ
        trend = pm.Deterministic('trend',
                                 (k + tt.dot(a_t, δ)) * t + (m + tt.dot(a_t, γ)))
    return trend

def ar1_residual(mdl, n_obs):
    with mdl:
        k_ = pm.Uniform('k', -1.1, 1.1)
        tau_ = pm.Gamma('tau', 10, 3)
        ar1 = pm.AR1('ar1', k=k_, tau_e=tau_, shape=n_obs)
    return ar1

def changepoint_setup(t, n_changepoints, s_start=None, s=None, changepoint_range=1):
    """
    input:
    ------
    t [numpy array]: time vector
    n_changepoints [int]: number of changepoints to consider
    s [numpy array]: user-specified changepoint vector (default=None)
    s_start [int]: changepoint start index (default=0)
    changepoint_range [int]: adjustable time proportion (default=1)

    output:
    -------
    s [numpy array]: changepoint vector
    a_t [numpy int array]: 2D (t*s) adjustment indicator array
    """
    if s is None:
        if s_start is None:
            s = np.linspace(start=0, stop=changepoint_range*t.max(),
                            num=n_changepoints+1)[1:]
        else:
            s = np.linspace(start=s_start, stop=changepoint_range*t.max(),
                            num=n_changepoints)
    a_t = (t[:, None] > s) * 1
    return a_t, s

def model_runner(t_, obs_s, add_trend=False, add_season=False, add_AR1=False, **payload):
    mdl = pm.Model()
    a_t, s = None, None
    with mdl:
        y_ = 0
        σ = pm.HalfCauchy('σ', 2.5)
        if add_trend:
            n_switches = payload.pop('n_switches', t_.size)
            s_start = payload.pop('s_start', None)
            s = payload.pop('s', None)
            chg_pt_rng = payload.pop('changepoint_range', 1)
            k_prior_scale = payload.pop('k_prior_scale', 5)
            δ_prior_scale = payload.pop('δ_prior_scale', 0.05)
            m_prior_scale = payload.pop('m_prior_scale', 5)
            a_t, s = changepoint_setup(t_, n_switches, s_start=s_start, s=s,
                                       changepoint_range=chg_pt_rng)
            trend_ = piecewise_trend(mdl, s, t_, a_t, obs_s,
                                     k_prior_scale, δ_prior_scale, m_prior_scale)
            y_ += trend_
        if add_season:
            n_fourier = payload.pop('n_fourier', 4)
            season = seasonality(mdl, n_fourier=n_fourier, t=t_)
            y_ += season
        if add_AR1:
            ar1 = ar1_residual(mdl, obs_s.size)
            y_ += ar1
        pm.Normal('obs', mu=y_, sd=σ, observed=obs_s)
    return mdl, a_t, s

def sanity_check(m, df):
    """
    :param m: (pm.Model)
    :param df: (pd.DataFrame)
    """
    # Sample from the prior and check if the model is well defined.
    y = pm.sample_prior_predictive(model=m, vars=['obs'])['obs']
    pl.figure(figsize=(16, 6))
    pl.plot(y.mean(0), label='mean prior')
    pl.fill_between(np.arange(y.shape[1]), -y.std(0), y.std(0),
                    alpha=0.25, label='standard deviation')
    pl.plot(df['y_scaled'], label='true value')
    pl.legend()

def plot_component(axi, x, y, hpd_=None, obs=None, line_label=None,
                   y_axis_label=None, ax_title=None):
    if isinstance(obs, np.ndarray):
        axi.plot(x, obs, color='k', label='observations')
    axi.plot(x, y, color='darkblue', label=line_label)
    if isinstance(hpd_, np.ndarray):
        axi.fill_between(x, hpd_[:, 0], hpd_[:, 1], color='steelblue',
                         alpha=0.5, label='95% CI')
    if y_axis_label:
        axi.set_ylabel(y_axis_label)
    axi.legend()
    if ax_title:
        axi.set_title(ax_title)
    axi.xaxis_date()
    axi.xaxis.set_major_locator(years)
    axi.xaxis.set_minor_locator(months)
    axi.tick_params(axis='x', labelrotation=30)
    axi.grid()

mdl_trend_only, A, s_pts = model_runner(df.t_scaled, df.aostz_scaled,
                                        add_trend=True, n_switches=df.shape[0])

render = pm.model_to_graphviz(mdl_trend_only)
render.render('piecewise_trend_only', directory='../../figjar/', format='png')
###Output
_____no_output_____
###Markdown

###Code
y = pm.sample_prior_predictive(model=mdl_trend_only, vars=['obs'])['obs']
pl.figure(figsize=(16, 6))
pl.plot(y.mean(0), label='mean prior')
pl.fill_between(np.arange(y.shape[1]), -y.std(0), y.std(0),
                alpha=0.25, label='standard deviation')
pl.plot(df.aostz_scaled.values, marker='.', label='true value')
pl.hlines(0, 0, df.shape[0], linestyles='--', label='expected trend')
pl.legend();

with mdl_trend_only:
    trace_trend_only = pm.sample(2000, tune=2000)

trend_only_inference = ar.from_pymc3(trace=trace_trend_only,
                                     prior=pm.sample_prior_predictive(model=mdl_trend_only),
                                     posterior_predictive=pm.sample_posterior_predictive(trace_trend_only,
                                                                                         model=mdl_trend_only))
trend_only_inference.to_netcdf('../../pickleJar/model_results_nc/piecewise_trend_only_inference.nc')

ppc_obs = pm.sample_posterior_predictive(trace_trend_only, model=mdl_trend_only)['obs']

f, ax = pl.subplots(nrows=1, sharex=True, figsize=(12, 4))
ylbl = 'standardized chl'
# NB: plot the posterior-predictive draws stored in ppc_obs (the original cell
# referenced an undefined name `obs` here).
plot_component(ax, df.index, ppc_obs.mean(axis=0), hpd_=pm.hpd(ppc_obs),
               obs=df.aostz_scaled.values, line_label='model_mean',
               y_axis_label=ylbl, ax_title='All Components')
#plot_component(ax[1], d_aostz.index, ts_m4_f4_trend_mu, hpd_=ts_m4_f4_trend_hpd,
#               line_label='mean', y_axis_label=ylbl, ax_title='Trend')
#plot_component(ax[2], d_aostz.index, ts_m4_f4_season_mu, hpd_=ts_m4_f4_season_hpd,
#               line_label='mean', y_axis_label=ylbl, ax_title='Season')
#plot_component(ax[3], d_aostz.index, ts_m4_f4_ar1_mu, hpd_=ts_m4_f4_ar1_hpd,
#               line_label='mean', y_axis_label=ylbl, ax_title='AR1')
ax.axhline(label='trend=0', color='r', ls=':');
ax.legend();
###Output
_____no_output_____
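###Markdown
The synopsis promises a WAIC / PSIS-LOO comparison once more than one model variant has been fit. A minimal sketch of how that comparison could look with ArviZ, assuming a second InferenceData object (here called `full_inference`, a hypothetical name for a trend + season + AR1 fit) has been produced the same way as `trend_only_inference`; the `ic` keyword follows the ArviZ versions contemporary with PyMC3:

###Code
# Hedged sketch: `full_inference` is assumed, not defined in this notebook.
comparison = ar.compare({'trend_only': trend_only_inference,
                         'full_model': full_inference},
                        ic='loo')   # or ic='waic'
print(comparison)
ar.plot_compare(comparison);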
Nanodegree Blog Post.ipynb
###Markdown
Initialise the Data

###Code
##source: https://www.kaggle.com/saurabhbagchi/dish-network-hackathon?select=Train_Dataset.csv
##60% of data was randomly chosen to allow the csv to be uploaded to GitHub
import pandas as pd
import requests
import io

# Downloading the csv file from GitHub account
url = "https://raw.githubusercontent.com/docju/nanodegreeblogpost/master/CarLoan.csv"
download = requests.get(url).content
df = pd.read_csv(io.StringIO(download.decode('utf-8')))
print(df.head())
###Output
         ID  Client_Income  Car_Owned  Bike_Owned  Active_Loan  House_Own  \
0  12162008        45000.0        0.0         0.0          0.0        1.0
1  12201095        30150.0        1.0         0.0          0.0        1.0
2  12188608        14400.0        0.0         0.0          0.0        1.0
3  12188085        14850.0        0.0         0.0          1.0        1.0
4  12136418        12150.0        0.0         1.0          0.0        1.0

   Child_Count  Credit_Amount  Loan_Annuity Accompany_Client  ...  \
0          0.0       131211.0       4875.30            Alone  ...
1          0.0        72846.0       4456.35            Alone  ...
2          0.0       102202.2       4342.95            Alone  ...
3          0.0        83538.0       4032.00         Relative  ...
4          0.0       100956.6       3349.35            Alone  ...

  Client_Permanent_Match_Tag Client_Contact_Work_Tag       Type_Organization  \
0                         No                     Yes  Business Entity Type 3
1                        Yes                     Yes                     XNA
2                        Yes                     Yes                     XNA
3                         No                     Yes           Self-employed
4                        Yes                     Yes                     XNA

   Score_Source_1  Score_Source_2  Score_Source_3  Social_Circle_Default  \
0             NaN        0.489135        0.067794                 0.0515
1             NaN        0.738472        0.298595                 0.1031
2             NaN        0.386343             NaN                    NaN
3        0.501634        0.448004        0.538863                    NaN
4             NaN        0.799336             NaN                 0.0742

   Phone_Change  Credit_Bureau  Default
0         832.0            0.0        0
1        2945.0            3.0        0
2        1002.0            NaN        0
3        2413.0            2.0        0
4        2029.0            3.0        1

[5 rows x 40 columns]
(121856, 40)
###Markdown
Data Understanding - get the shape of the data, which variables are numeric, which are binary, which are categorical, the proportion of missing values, distributions, etc.

###Code
df.shape ##121856 rows, 40 columns
df.dtypes

##create list of categorical variables
cat_df = df.select_dtypes(include=['object']).copy()
cat_columns = list(cat_df.columns.values)

##create list of numeric variables
num_df = df.select_dtypes(include=['float', 'integer']).copy()
num_columns = list(num_df.columns.values)
###Output
_____no_output_____
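###Markdown
The Data Understanding text above also mentions the proportion of missing values, which the retained cells never compute. A minimal sketch of how that check could look for this `df` (the 0.4 cutoff is an illustrative choice, not from the original):

###Code
# Fraction of missing values per column, sorted worst-first
missing_frac = df.isnull().mean().sort_values(ascending=False)
print(missing_frac.head(10))

# Columns that are more than 40% missing (threshold is illustrative)
mostly_missing = missing_frac[missing_frac > 0.4].index.tolist()
print(mostly_missing)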
Staircase.ipynb
###Markdown
https://www.hackerrank.com/challenges/staircase/problem

###Code
def staircase(n):
    for i in range(1, n+1):
        print("#"*i)

staircase(5)

def staircase(n):
    for i in range(1, n+1):
        print((" "*(n-i)) + ("#"*i))

staircase(5)

#!/bin/python3

import math
import os
import random
import re
import sys

# Complete the staircase function below.
def staircase(n):
    for i in range(1, n+1):
        print((" "*(n-i)) + ("#"*i))

if __name__ == '__main__':
    n = int(input())
    staircase(n)
###Output
10
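###Markdown
The right-aligned step is the whole trick here, so the same solution can lean on Python's built-in string alignment instead of manual space padding. A small alternative sketch (not part of the original submission) using `str.rjust`:

###Code
def staircase_rjust(n):
    # rjust pads the hash run with spaces on the left up to width n
    for i in range(1, n + 1):
        print(("#" * i).rjust(n))

staircase_rjust(5)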
Feature-Selection-for-Machine-Learning-master/Filter Methods/Combining-all-Methods.ipynb
###Markdown
**Connect with me on LinkedIn**: https://www.linkedin.com/in/dheerajkumar1997/

Filter Methods
- Basics
- Correlations
- Univariate ROC-AUC

Putting it all together

###Code
import pandas as pd
import numpy as np

import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

from sklearn.model_selection import train_test_split
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# load the Santander customer satisfaction dataset from Kaggle
data = pd.read_csv('santander-train.csv')
data.shape

# separate dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(
    data.drop(labels=['TARGET'], axis=1),
    data['TARGET'],
    test_size=0.3,
    random_state=0)

X_train.shape, X_test.shape

# Keep a copy of the dataset with all the variables
# to measure the performance of machine learning models
# at the end of the notebook
X_train_original = X_train.copy()
X_test_original = X_test.copy()
###Output
_____no_output_____
###Markdown
Remove constant features

###Code
# remove constant features
constant_features = [
    feat for feat in X_train.columns if X_train[feat].std() == 0
]

X_train.drop(labels=constant_features, axis=1, inplace=True)
X_test.drop(labels=constant_features, axis=1, inplace=True)

X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Remove quasi-constant features

###Code
# remove quasi-constant features
# threshold=0.01 drops features whose variance indicates they are
# (approximately) constant in around 99% of observations
sel = VarianceThreshold(threshold=0.01)

# fit finds the features with low variance
sel.fit(X_train)

# how many are not quasi-constant?
sum(sel.get_support())

features_to_keep = X_train.columns[sel.get_support()]

# we can then remove the features like this
X_train = sel.transform(X_train)
X_test = sel.transform(X_test)

X_train.shape, X_test.shape

# sklearn transformations lead to numpy arrays
# here we transform the arrays back to dataframes
# please be mindful of getting the columns assigned
# correctly
X_train = pd.DataFrame(X_train)
X_train.columns = features_to_keep

X_test = pd.DataFrame(X_test)
X_test.columns = features_to_keep
###Output
_____no_output_____
###Markdown
Remove duplicated features

###Code
# check for duplicated features in the training set
duplicated_feat = []
for i in range(0, len(X_train.columns)):
    if i % 10 == 0:  # this helps me understand how the loop is going
        print(i)

    col_1 = X_train.columns[i]

    for col_2 in X_train.columns[i + 1:]:
        if X_train[col_1].equals(X_train[col_2]):
            duplicated_feat.append(col_2)

len(duplicated_feat)

# remove duplicated features
X_train.drop(labels=duplicated_feat, axis=1, inplace=True)
X_test.drop(labels=duplicated_feat, axis=1, inplace=True)

X_train.shape, X_test.shape

# Keep a copy of the dataset except constant and duplicated variables
# to measure the performance of machine learning models
# at the end of the notebook
X_train_basic_filter = X_train.copy()
X_test_basic_filter = X_test.copy()
###Output
_____no_output_____
###Markdown
Remove correlated features

###Code
# find and remove correlated features
def correlation(dataset, threshold):
    col_corr = set()  # Set of all the names of correlated columns
    corr_matrix = dataset.corr()
    for i in range(len(corr_matrix.columns)):
        for j in range(i):
            if abs(corr_matrix.iloc[i, j]) > threshold:  # we are interested in absolute coeff value
                colname = corr_matrix.columns[i]  # getting the name of column
                col_corr.add(colname)
    return col_corr

corr_features = correlation(X_train, 0.8)
print('correlated features: ', len(set(corr_features)))

# removed correlated features
X_train.drop(labels=corr_features, axis=1, inplace=True)
X_test.drop(labels=corr_features, axis=1, inplace=True)

X_train.shape, X_test.shape

# keep a copy of the dataset at this stage
X_train_corr = X_train.copy()
X_test_corr = X_test.copy()
###Output
_____no_output_____
###Markdown
Remove features using univariate roc_auc

###Code
# find important features using univariate roc-auc
# loop to build a tree, make predictions and get the roc-auc
# for each feature of the train set
roc_values = []
for feature in X_train.columns:
    clf = DecisionTreeClassifier()
    clf.fit(X_train[feature].fillna(0).to_frame(), y_train)
    y_scored = clf.predict_proba(X_test[feature].fillna(0).to_frame())
    roc_values.append(roc_auc_score(y_test, y_scored[:, 1]))

# let's add the variable names and order it for clearer visualisation
roc_values = pd.Series(roc_values)
roc_values.index = X_train.columns
roc_values.sort_values(ascending=False).plot.bar(figsize=(20, 8))

# by removing features with univariate roc_auc == 0.5
# we remove another 30 features
selected_feat = roc_values[roc_values > 0.5]
len(selected_feat), X_train.shape[1]
###Output
_____no_output_____
###Markdown
Compare the performance in machine learning algorithms

###Code
# create a function to build random forests and compare performance in train and test set
def run_randomForests(X_train, X_test, y_train, y_test):
    rf = RandomForestClassifier(n_estimators=200, random_state=39, max_depth=4)
    rf.fit(X_train, y_train)
    print('Train set')
    pred = rf.predict_proba(X_train)
    print('Random Forests roc-auc: {}'.format(roc_auc_score(y_train, pred[:, 1])))
    print('Test set')
    pred = rf.predict_proba(X_test)
    print('Random Forests roc-auc: {}'.format(roc_auc_score(y_test, pred[:, 1])))

# original
run_randomForests(X_train_original.drop(labels=['ID'], axis=1),
                  X_test_original.drop(labels=['ID'], axis=1),
                  y_train, y_test)

# filter methods - basic
run_randomForests(X_train_basic_filter.drop(labels=['ID'], axis=1),
                  X_test_basic_filter.drop(labels=['ID'], axis=1),
                  y_train, y_test)

# filter methods - correlation
run_randomForests(X_train_corr.drop(labels=['ID'], axis=1),
                  X_test_corr.drop(labels=['ID'], axis=1),
                  y_train, y_test)

# filter methods - univariate roc-auc
run_randomForests(X_train[selected_feat.index],
                  X_test_corr[selected_feat.index],
                  y_train, y_test)
###Output
Train set
Random Forests roc-auc: 0.8105671870819526
Test set
Random Forests roc-auc: 0.7985492537265694
###Markdown
We can see that by removing constant, quasi-constant, duplicated, correlated and, now, **features with univariate roc-auc == 0.5**, we still keep or even enhance the performance of the random forests (0.7985 vs 0.7900), while reducing the feature space dramatically (from 371 to 90). Let's have a look at the performance of logistic regression.

###Code
# create a function to build logistic regression and compare performance in train and test set
def run_logistic(X_train, X_test, y_train, y_test):
    # function to train and test the performance of logistic regression
    logit = LogisticRegression(random_state=44)
    logit.fit(X_train, y_train)
    print('Train set')
    pred = logit.predict_proba(X_train)
    print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_train, pred[:, 1])))
    print('Test set')
    pred = logit.predict_proba(X_test)
    print('Logistic Regression roc-auc: {}'.format(roc_auc_score(y_test, pred[:, 1])))

# original
scaler = StandardScaler().fit(X_train_original.drop(labels=['ID'], axis=1))
run_logistic(scaler.transform(X_train_original.drop(labels=['ID'], axis=1)),
             scaler.transform(X_test_original.drop(labels=['ID'], axis=1)),
             y_train, y_test)

# filter methods - basic
scaler = StandardScaler().fit(X_train_basic_filter.drop(labels=['ID'], axis=1))
run_logistic(scaler.transform(X_train_basic_filter.drop(labels=['ID'], axis=1)),
             scaler.transform(X_test_basic_filter.drop(labels=['ID'], axis=1)),
             y_train, y_test)

# filter methods - correlation
scaler = StandardScaler().fit(X_train_corr.drop(labels=['ID'], axis=1))
run_logistic(scaler.transform(X_train_corr.drop(labels=['ID'], axis=1)),
             scaler.transform(X_test_corr.drop(labels=['ID'], axis=1)),
             y_train, y_test)

# filter methods - univariate roc-auc
scaler = StandardScaler().fit(X_train[selected_feat.index])
run_logistic(scaler.transform(X_train[selected_feat.index]),
             scaler.transform(X_test_corr[selected_feat.index]),
             y_train, y_test)
###Output
/Users/anujdutt/miniconda3/envs/deeplearning/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
  FutureWarning)
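###Markdown
One design note on the logistic-regression comparison above: fitting the `StandardScaler` and the classifier separately for each feature subset invites leakage mistakes. A hedged sketch of the same comparison wrapped in a scikit-learn `Pipeline` (an alternative pattern, not what the original author used):

###Code
from sklearn.pipeline import Pipeline

def run_logistic_pipeline(X_tr, X_te, y_tr, y_te):
    # the pipeline scales inside fit, so the scaler only ever sees X_tr
    pipe = Pipeline([('scaler', StandardScaler()),
                     ('logit', LogisticRegression(random_state=44))])
    pipe.fit(X_tr, y_tr)
    print('Train roc-auc:', roc_auc_score(y_tr, pipe.predict_proba(X_tr)[:, 1]))
    print('Test  roc-auc:', roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1]))

run_logistic_pipeline(X_train[selected_feat.index],
                      X_test_corr[selected_feat.index],
                      y_train, y_test)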