Data Analysis/NumPy/numpy-exercises.ipynb
###Markdown NumPy PracticeThis notebook offers a set of exercises for different tasks with NumPy.It should be noted there may be more than one way to answer a question or complete an exercise.Exercises are based on (and directly taken from) the quick introduction to NumPy notebook.Different tasks will be detailed by comments or text.For further reference and resources, it's advised to check out the [NumPy documentation](https://numpy.org/devdocs/user/index.html).And if you get stuck, try searching for a question in the following format: "how to do XYZ with numpy", where XYZ is the function you want to leverage from NumPy. ###Code # Import NumPy as its abbreviation 'np' import numpy as np # Create a 1-dimensional NumPy array using np.array() a1 = np.array([1,2,3]) # Create a 2-dimensional NumPy array using np.array() a2 = np.array([[1,2,3], [4,5,6]]) # Create a 3-dimensional Numpy array using np.array() a3 = np.array([[[1,2,3], [4,5,6], [7,8,9]], [[10,11,12], [13,14,15], [16,17,18]], [[19,20,21], [22,23,24], [25,26,27]]]) ###Output _____no_output_____ ###Markdown Now that you've created 3 different arrays, let's find details about them.Find the shape, number of dimensions, data type, size and type of each array. ###Code # Attributes of 1-dimensional array (shape, # number of dimensions, data type, size and type) a1.shape,a1.ndim,a1.dtype,a1.size,type(a1) # Attributes of 2-dimensional array a2.shape,a2.ndim,a2.dtype,a2.size,type(a2) # Attributes of 3-dimensional array a3.shape,a3.ndim,a3.dtype,a3.size,type(a3) # Import pandas and create a DataFrame out of one # of the arrays you've created import pandas as pd pd.DataFrame(a2) # Create an array of shape (10, 2) with only ones np.ones((10,2)) # Create an array of shape (7, 2, 3) of only zeros np.zeros((7,2,3)) # Create an array within a range of 0 and 100 with step 3 np.arange(0,100,3) # Create a random array with numbers between 0 and 10 of size (7, 2) np.random.randint(0,10,size=(7,2)) # Create a random array of floats between 0 & 1 of shape (3, 5) np.random.random((3,5)) # Set the random seed to 42 np.random.seed(42) # Create a random array of numbers between 0 & 10 of size (4, 6) np.random.randint(0,10,size=(4,6)) ###Output _____no_output_____ ###Markdown Run the cell above again, what happens?Are the numbers in the array different or the same? Why do you think this is? ###Code # Create an array of random numbers between 1 & 10 of size (3, 7) # and save it to a variable one_and_10 = np.random.randint(1,10,size=(3,7)) # Find the unique numbers in the array you just created np.unique(one_and_10) # Find the 0'th index of the latest array you created one_and_10[0] # Get the first 2 rows of the latest array you created one_and_10[:2] # Get the first 2 values of the first 2 rows of the latest array one_and_10[:2,:2] # Create a random array of numbers between 0 & 10 and an array of ones # both of size (3, 5), save them both to variables zero_and_10 = np.random.randint(0,10,(3,5)) ones = np.ones((3,5)) # Add the two arrays together zero_and_10 + ones # Create another array of ones of shape (5, 3) ones_t = np.ones((5,3)) # Try to add the array of ones and the other most recent array together ones + ones_t ###Output _____no_output_____ ###Markdown When you try the last cell, it produces an error. Why do you think this is?How would you fix it?
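###Markdown One possible fix (a sketch of an answer added here, not part of the original exercise): the arrays have shapes (3, 5) and (5, 3), which NumPy cannot broadcast together, so transposing one of them makes the shapes match. ###Code
# Transpose ones_t from (5, 3) to (3, 5) so it broadcasts with ones
ones + ones_t.T
###Output _____no_output_____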
###Code # Create another array of ones of shape (3, 5) ones = np.ones((3,5)) # Subtract the new array of ones from the other most recent array zero_and_10 - ones # Multiply the ones array with the latest array zero_and_10 * ones # Take the latest array to the power of 2 using '**' zero_and_10 ** 2 # Do the same thing with np.square() np.square(zero_and_10) # Find the mean of the latest array using np.mean() np.mean(zero_and_10) # Find the maximum of the latest array using np.max() np.max(zero_and_10) # Find the minimum of the latest array using np.min() np.min(zero_and_10) # Find the standard deviation of the latest array np.std(zero_and_10) # Find the variance of the latest array np.var(zero_and_10) # Reshape the latest array to (3, 5, 1) zero_and_10.reshape(3,5,1) # Transpose the latest array zero_and_10.T ###Output _____no_output_____ ###Markdown What does the transpose do? ###Code # Create two arrays of random integers between 0 to 10 # one of size (3, 3) the other of size (3, 2) a4 = np.random.randint(0,10,(3,3)) a5 = np.random.randint(0,10,(3,2)) # Perform a dot product on the two newest arrays you created np.dot(a4,a5) # Create two arrays of random integers between 0 to 10 # both of size (4, 3) a6 = np.random.randint(0,10,(4,3)) a7 = np.random.randint(0,10,(4,3)) # Perform a dot product on the two newest arrays you created np.dot(a6,a7) ###Output _____no_output_____ ###Markdown It doesn't work. How would you fix it? ###Code # Take the latest two arrays, perform a transpose on one of them and then perform # a dot product on them both np.dot(a6,a7.T) ###Output _____no_output_____ ###Markdown Notice how performing a transpose allows the dot product to happen.Why is this?Checking out the documentation on [`np.dot()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) may help, as well as reading [Math is Fun's guide on the dot product](https://www.mathsisfun.com/algebra/vectors-dot-product.html).Let's now compare arrays. ###Code # Create two arrays of random integers between 0 & 10 of the same shape # and save them to variables a7 = np.random.randint(0,10,(4,3)) a8 = np.random.randint(0,10,(4,3)) # Compare the two arrays with '>' a7 > a8 ###Output _____no_output_____ ###Markdown What happens when you compare the arrays with `>`? ###Code # Compare the two arrays with '>=' a7 >= a8 # Find which elements of the first array are greater than 7 a7 > 7 # Which parts of each array are equal? (try using '==') a7 == a8 # Sort one of the arrays you just created in ascending order np.sort(a7) # Sort the indexes of one of the arrays you just created np.argsort(a7) # Find the index with the maximum value in one of the arrays you've created np.argmax(a7) # Find the index with the minimum value in one of the arrays you've created np.argmin(a7) # Find the indexes with the maximum values down the 1st axis (axis=1) # of one of the arrays you created np.argmax(a7,axis=1) # Find the indexes with the minimum values across the 0th axis (axis=0) # of one of the arrays you created np.argmin(a7,axis=0) # Create an array of normally distributed random numbers np.random.randn(1,10) # Create an array with 10 evenly spaced numbers between 1 and 100 np.linspace(1,100,10) ###Output _____no_output_____
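###Markdown A short note added here (not in the original exercise) on why the transpose above makes `np.dot` work: for matrices, the inner dimensions must match, i.e. (m, n) dot (n, p) gives (m, p). ###Code
# a6 and a7 are both (4, 3): inner dimensions 3 and 4 differ, so np.dot(a6, a7) fails;
# transposing a7 to (3, 4) gives (4, 3) dot (3, 4) -> (4, 4)
a6.shape, a7.T.shape, np.dot(a6, a7.T).shape
###Output _____no_output_____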
State_vectors_to_orbital_elements.ipynb
###Markdown State vectors to orbital elements ###Code import numpy as np ##Unit Conversions## rad_to_deg = 180/np.pi; ##Known values## mu = 398600; t = 1000; ###Output _____no_output_____ ###Markdown Defining the given state vector ###Code ##Given state vector## x = -13686.889393418738; y = -13344.772667428870; z = 10814.629905439588; s = np.array([x,y,z]); xdot = 0.88259108105901152; ydot = 1.9876415852134037; zdot = 3.4114313525042017; sdot = np.array([xdot,ydot,zdot]); ###Output _____no_output_____ ###Markdown Converting the given state vector to orbital elements $$r = \sqrt{x^2+y^2+z^2}$$$$speed = \sqrt{v_x^2+v_y^2+v_z^2}$$Using the vis-viva equation,$$a = {\left(\frac{2}{r}-\frac{v^2}{\mu}\right)}^{-1}$$e can be found from the following equations:$$\mu \bar e = \left(v^2-\frac{\mu}{r}\right)\bar r - (\bar r \cdot \bar v)\bar v$$$$e = \sqrt{e_x^2+e_y^2+e_z^2}$$i can be evaluated from the following relation:$$\cos i = \hat w \cdot \hat k$$where $$\hat w = \frac{\bar r \times \bar v}{\mid \bar r \times \bar v \mid}$$ $\hat N$ is the unit vector along the nodal line.$$\hat N = \frac{\hat k \times \hat w}{\mid \hat k \times \hat w \mid}$$ $\Omega$ can be computed as follows:$$\cos \Omega = \hat i \cdot \hat N$$$$\sin \Omega = (\hat i \times \hat N) \cdot \hat k$$Both the sine and the cosine are computed because $\Omega$, $\omega$, and $\nu$ vary from 0 to 360 degrees.$$\hat e = \frac{\bar e}{e}$$$\omega$ can be calculated using the following equations:$$\cos \omega = \hat N \cdot \hat e$$$$\sin \omega = (\hat N \times \hat e) \cdot \hat w$$The last orbital element, $\nu$, is calculated as follows:$$\cos \nu = \hat e \cdot \hat r$$$$\sin \nu = (\hat e \times \hat r) \cdot \hat w$$ ###Code ##Converting to orbital elements## r = np.sqrt((x**2)+(y**2)+(z**2)); v = np.sqrt((xdot**2)+(ydot**2)+(zdot**2)); a = np.power((2/r)-((v**2)/mu),-1); e_vec = ((((v**2)-(mu/r))*s)-(np.dot(np.transpose(sdot),s)*sdot))/mu; e = np.sqrt(np.dot(np.transpose(e_vec),e_vec)); w_cap = np.cross(s,sdot)/np.sqrt(np.sum(np.power(np.cross(s,sdot),2))); k_cap = np.array([0,0,1]); i_cap = np.array([1,0,0]); j_cap = np.array([0,1,0]); N_cap = np.cross(k_cap,w_cap)/np.sqrt(np.sum(np.power(np.cross(k_cap,w_cap),2))); cos_nu_0 = np.dot(s/r,np.transpose(e_vec/e)); sin_nu_0 = np.dot(np.cross(e_vec/e,s/r), np.transpose(w_cap)); if sin_nu_0 >= 0: nu_0 = np.arccos(cos_nu_0); if sin_nu_0 < 0: nu_0 = (2*np.pi)-np.arccos(cos_nu_0); i = np.arccos(np.dot(w_cap,np.transpose(k_cap))); cos_Omega = np.dot(i_cap,np.transpose(N_cap)); sin_Omega = np.dot(np.cross(i_cap,N_cap), np.transpose(k_cap)); if sin_Omega >= 0: Omega = np.arccos(cos_Omega); if sin_Omega < 0: Omega = (2*np.pi)-np.arccos(cos_Omega); cos_omega = np.dot(N_cap,np.transpose(e_vec/e)); sin_omega = np.dot(np.cross(N_cap,e_vec/e),np.transpose(w_cap)); if sin_omega >= 0: omega = np.arccos(cos_omega); if sin_omega < 0: omega = (2*np.pi)-np.arccos(cos_omega); print('a =',a); print('e =',e); print('i =',i*rad_to_deg); print('Omega =',Omega*rad_to_deg); print('omega =',omega*rad_to_deg); print('nu_0 =',nu_0*rad_to_deg); ###Output a = 20000.018460650677 e = 0.09999900702499649 i = 100.0 Omega = 230.0 omega = 199.99988817526065 nu_0 = 190.0001118247393 ###Markdown Estimating the true anomaly at t = 1000 seconds The same procedure as in the code above is followed.
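###Markdown The code below relies on Kepler's equation, which the original text leaves implicit; it is added here for reference. The mean anomaly $M$ is related to the eccentric anomaly $E$ by$$M = E - e\sin E$$which is solved for $E$ with Newton–Raphson iteration,$$E_{k+1} = E_k - \frac{E_k - e\sin E_k - M}{1 - e\cos E_k}$$after which the true anomaly follows from$$\tan \frac{\nu}{2} = \sqrt{\frac{1+e}{1-e}}\,\tan \frac{E}{2}$$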
###Code ##Calculating anomaly at t= 1000 seconds## n = np.sqrt(mu/(a**3)); E_0 = 2*np.arctan(np.sqrt((1-e)/(1+e))*np.tan(nu_0/2)); M_0 = E_0-(e*np.sin(E_0)); M = M_0+(n*t); E = 0; fdotE = 1-(e*np.cos(E)) fE = E-(e*np.sin(E))-M; epsilon = 0.0000001; while fE>epsilon or fE<(-1*epsilon): E = E-(fE/fdotE); fE = E-(e*np.sin(E))-M; fdotE = 1-(e*np.cos(E)); nu = 2*np.arctan(np.sqrt((1+e)/(1-e))*np.tan(E/2)); ###Output _____no_output_____ ###Markdown State vector at t = 1000 seconds The state vector after 1000 seconds is displayed as the result of the code. ###Code ##State vector at t=1000 seconds## H = np.sqrt(mu*a*(1-(e**2))); r_1000 = a*(1-(e**2))/(1+(e*np.cos(nu))); s_p_1000 = np.zeros((3,1)); s_p_1000[0,0] = r_1000*np.cos(nu); s_p_1000[1,0] = r_1000*np.sin(nu); R_omega = np.array([[np.cos(omega),-np.sin(omega),0], [np.sin(omega),np.cos(omega),0],[0,0,1]]); R_Omega = np.array([[np.cos(Omega), -np.sin(Omega),0], [np.sin(Omega), np.cos(Omega),0],[0,0,1]]); R_i = np.array([[1,0,0],[0,np.cos(i),-np.sin(i)], [0,np.sin(i),np.cos(i)]]); R = np.dot(R_Omega,np.dot(R_i,R_omega)); s_1000 = np.dot(R,s_p_1000); v_p_1000 = np.zeros((3,1)); v_p_1000[0,0] = -mu*np.sin(nu)/H; v_p_1000[1,0] = mu*(e+np.cos(nu))/H; v_1000 = np.dot(R,v_p_1000); print('X_1000 =',s_1000[0,0],'km'); print('Y_1000 =',s_1000[1,0],'km'); print('Z_1000 =',s_1000[2,0],'km'); print('Xdot_1000 =',v_1000[0,0],'km/s'); print('Ydot_1000 =',v_1000[1,0],'km/s'); print('Zdot_1000 =',v_1000[2,0],'km/s'); ###Output X_1000 = -12552.04150633802 km Y_1000 = -11118.283622706676 km Z_1000 = 14000.844806355082 km Xdot_1000 = 1.3812752457561022 km/s Ydot_1000 = 2.4525144474178697 km/s Zdot_1000 = 2.939582308195734 km/s
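###Markdown The Newton–Raphson loop above can also be packaged as a small reusable function. This is a sketch of the same logic (a refactoring added here, not part of the original notebook): ###Code
def solve_kepler(M, e, tol=1e-7):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton-Raphson."""
    E = M  # starting guess; converges for elliptical orbits (e < 1)
    f = E - (e*np.sin(E)) - M
    while abs(f) > tol:
        E = E - f/(1 - (e*np.cos(E)))
        f = E - (e*np.sin(E)) - M
    return E

solve_kepler(M, e)  # should agree with E computed above
###Output _____no_output_____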
notebooks/query_context/qc3_ctx_household.ipynb
###Markdown Query by Household Overview Explore the FEC data by specifying SQL predicates identifying "households" (defined based on `indiv` records conjectured to represent real-world people residing at the same physical address)This approach will create the following query contexts:* `ctx_household`* `ctx_indiv`* `ctx_contrib` Notebook Setup Configure database connect info/options Note: database connect string can be specified on the initial `%sql` command:```pythondatabase_url = "postgresql+psycopg2://user@localhost/fecdb"%sql $database_url```Or, connect string is taken from DATABASE_URL environment variable (if not specified for `%sql`):```python%sql``` ###Code %load_ext sql %config SqlMagic.autopandas=True %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # connect string taken from DATABASE_URL environment variable %sql ###Output _____no_output_____ ###Markdown Set styling ###Code %%html <style> tr, th, td { text-align: left !important; } </style> ###Output _____no_output_____ ###Markdown Validate Context ###Code %%sql select count(*) from ctx_household ###Output * postgresql+psycopg2://crash@localhost/fecdb 1 rows affected. ###Markdown Queries / Use Cases Demographic Summary by State ###Code %%sql select hx.state, count(*) from ctx_household hx group by 1 order by 2 desc ###Output * postgresql+psycopg2://crash@localhost/fecdb 1 rows affected. ###Markdown Top Contributing Households across Election Cycles Note that we could simplify this query by introducing a `ctx_household_contrib` context view (analogous to `ctx_donor_contrib`). *\[It is actually a somewhat-deliberate design choice not to extend all of the Donor constructs over to Household, even though the two entities have identical underlying structures&mdash;we may complete and maintain the analogy later, if analysis and reporting by Household becomes more important and/or interesting\]* ###Code %%sql select hx.id as hh_id, hx.name as hh_name, count(*) contribs, sum(ic.transaction_amt) total_amount, round(avg(ic.transaction_amt), 2) avg_amount, max(ic.transaction_amt) max_amount, array_agg(distinct ic.elect_cycle) as elect_cycles, round(sum(ic.transaction_amt) / count(distinct ic.elect_cycle), 2) avg_cycle_amount from ctx_household hx join indiv i on i.hh_indiv_id = hx.id join indiv_contrib ic on ic.indiv_id = i.id group by 1, 2 order by 4 desc limit 50 ###Output * postgresql+psycopg2://crash@localhost/fecdb 1 rows affected.
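###Markdown As noted above, the second query could be simplified by defining a `ctx_household_contrib` context view analogous to `ctx_donor_contrib`. A sketch of what that view might look like, reusing the same joins as the query above (hypothetical — this view is not part of the notebook's schema): ###Code
%%sql
create or replace view ctx_household_contrib as
select hx.id   as hh_id,
       hx.name as hh_name,
       ic.*
  from ctx_household hx
  join indiv i on i.hh_indiv_id = hx.id
  join indiv_contrib ic on ic.indiv_id = i.id
###Output _____no_output_____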
DecisionTree_RF/RegressionTree.ipynb
###Markdown Building a regression tree 0. Import the libraries ###Code import pandas as pd from sklearn import preprocessing from sklearn import tree from sklearn.datasets import load_boston ###Output _____no_output_____ ###Markdown 1. Load the data ###Code boston_house = load_boston() boston_feature_name = boston_house.feature_names boston_features = boston_house.data boston_target = boston_house.target boston_feature_name print(boston_house.DESCR) boston_features[:5,:] boston_target ###Output _____no_output_____ ###Markdown Build the model ###Code rgs = tree.DecisionTreeRegressor(max_depth=4) rgs = rgs.fit(boston_features, boston_target) rgs import pydotplus from IPython.display import Image, display # class_names applies to classification trees only, so it is not passed for a regressor dot_data = tree.export_graphviz(rgs, out_file = None, feature_names = boston_feature_name, filled = True, rounded = True ) graph = pydotplus.graph_from_dot_data(dot_data) display(Image(graph.create_png())) ###Output _____no_output_____
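###Markdown The notebook fits the tree on the full dataset and only visualizes it. As a minimal sketch of how the fit could be evaluated on held-out data (an addition, using the standard scikit-learn API): ###Code
from sklearn.model_selection import train_test_split

# Hold out 20% of the data to estimate generalization
X_train, X_test, y_train, y_test = train_test_split(
    boston_features, boston_target, test_size=0.2, random_state=0)
rgs_eval = tree.DecisionTreeRegressor(max_depth=4).fit(X_train, y_train)
rgs_eval.score(X_test, y_test)  # R^2 on the held-out set
###Output _____no_output_____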
8.Factor in R.ipynb
###Markdown What is a Factor in R?Factors are variables in R which take on a limited number of different values; such variables are often referred to as categorical variables.In a dataset, we can distinguish two types of variables: categorical and continuous.1. In a categorical variable, the value is limited and usually based on a particular finite group. For example, a categorical variable can be countries, year, gender, or occupation.2. A continuous variable, however, can take any value, integer or decimal. For example, we can have the revenue, the price of a share, etc.**Categorical Variables**R stores categorical variables as factors. Let's check the code below to convert a character variable into a factor variable. Character values are not supported by machine learning algorithms, and the only way around this is to convert a string to an integer. ###Code factor(x = character(), levels, labels = levels, ordered = is.ordered(x)) ###Output _____no_output_____ ###Markdown Arguments:1. x: A vector of data. It needs to be a string or integer, not decimal.2. Levels: A vector of possible values taken by x. This argument is optional. The default value is the unique list of items of the vector x.3. Labels: Add a label to the x data. For example, 1 can take the label `male` while 0 takes the label `female`.4. ordered: Determines whether the levels should be ordered.**Example:**Let's create a factor data frame. ###Code # Create gender vector gender_vector <- c("Male", "Female", "Female", "Male", "Male") class(gender_vector) # Convert gender_vector to a factor factor_gender_vector <- factor(gender_vector) class(factor_gender_vector) ###Output _____no_output_____ ###Markdown It is important to transform a string into a factor when we perform a machine learning task.A categorical variable can be divided into nominal and ordinal categorical variables.**Nominal Categorical Variable**A nominal categorical variable has several values, but their order does not matter. For instance, a male/female categorical variable has no ordering. ###Code # Create a color vector color_vector <- c('blue', 'red', 'green', 'white', 'black', 'yellow') # Convert the vector to a factor factor_color <- factor(color_vector) factor_color ###Output _____no_output_____ ###Markdown From factor_color, we cannot tell any order. Ordinal Categorical VariableOrdinal categorical variables do have a natural ordering. We can specify the order of the levels, from lowest to highest, with order = TRUE; with order = FALSE the levels are treated as unordered.**Example:**We can use summary to count the number of values at each level. ###Code # Create Ordinal categorical vector day_vector <- c('evening', 'morning', 'afternoon', 'midday', 'midnight', 'evening') # Convert `day_vector` to a factor with ordered levels factor_day <- factor(day_vector, order = TRUE, levels = c('morning', 'midday', 'afternoon', 'evening', 'midnight')) # Print the new variable factor_day ## Levels: morning < midday < afternoon < evening < midnight # Append the line to the above code # Count the number of occurrences of each level summary(factor_day) ###Output _____no_output_____ ###Markdown Continuous VariablesContinuous variables are the default variable type in R. They are stored as numeric or integer. We can see this in the dataset below: mtcars is a built-in dataset that gathers information on different types of cars. We can load it by referring to mtcars and check the class of the variable mpg (miles per gallon). It returns a numeric value, indicating a continuous variable.
###Code dataset <- mtcars head(dataset) class(dataset$mpg) ###Output _____no_output_____
ch05/01_KPE.ipynb
###Markdown Keyphrase Extraction with textacyIn this notebook, we use textacy, a library that supports a wide range of natural language processing tasks, to extract keyphrases. Setup Installing the packages ###Code !pip install textacy==0.11.0 spacy==3.1.2 ###Output Successfully installed catalogue-2.0.6 cytoolz-0.11.0 jellyfish-0.8.8 pathy-0.6.0 pydantic-1.8.2 pyphen-0.11.0 spacy-3.1.2 spacy-legacy-3.0.8 srsly-2.4.1 textacy-0.11.0 thinc-8.0.10 typer-0.3.2 ###Markdown Downloading the model ###Code !python -m spacy download en_core_web_sm ###Output Successfully installed en-core-web-sm-3.1.0 ✔ Download and installation successful You can now load the package via spacy.load('en_core_web_sm') ###Markdown Imports ###Code import spacy import textacy from textacy import extract ###Output _____no_output_____ ###Markdown Uploading the dataFirst, upload the data file. The Data folder at the same level as this notebook contains `nlphistory.txt`, so upload that file. ###Code from google.colab import files uploaded = files.upload() ###Output _____no_output_____ ###Markdown Loading the dataRead the uploaded file. If you are not on Colab, load `Data/nlphistory.txt` directly. ###Code mytext = open("nlphistory.txt").read() mytext ###Output _____no_output_____ ###Markdown Getting a spaCy document ###Code # Load the spaCy model en = textacy.load_spacy_lang("en_core_web_sm") # Convert the text into a spaCy document doc = textacy.make_spacy_doc(mytext, lang=en) ###Output
_____no_output_____ ###Markdown Keyphrase extraction with TextRankWe extract keyphrases with `extract.keyterms.textrank`. ###Code extract.keyterms.textrank(doc, topn=5) ###Output _____no_output_____ ###Markdown Let's compare the results of TextRank and SGRank. ###Code kps_textrank = [kps for kps, _ in extract.keyterms.textrank(doc, normalize="lemma", topn=5)] kps_sgrank = [kps for kps, _ in extract.keyterms.sgrank(doc, topn=5)] print(f"Textrank output\t: {kps_textrank}") print(f"SGRank output\t: {kps_sgrank}") ###Output Textrank output : ['successful natural language processing system', 'statistical machine translation system', 'natural language system', 'statistical natural language processing', 'natural language task'] SGRank output : ['natural language processing system', 'statistical machine translation', 'early', 'research', 'late 1980'] ###Markdown To deal with overlapping keyphrases, textacy provides the `aggregate_term_variants` function. Using it, we can obtain keyphrases without duplicates. ###Code terms = set([term for term, _ in extract.keyterms.sgrank(doc)]) extract.utils.aggregate_term_variants(terms) ###Output _____no_output_____ ###Markdown Noun chunks can also be considered keyphrase candidates. The drawbacks of this approach are that it produces a huge number of phrases and that there is no way to rank them. ###Code [chunk for chunk in extract.noun_chunks(doc)] ###Output _____no_output_____ ###Markdown textacy offers a variety of other information-extraction features, many of them based on regular-expression patterns or heuristics, covering expressions such as acronyms and quotations. Beyond these, it can extract matches for regexes over part-of-speech tag patterns, sentences containing named entities, subject–verb–object triples, and more. For details, see the documentation:- [textacy: NLP, before and after spaCy](https://textacy.readthedocs.io/en/latest/) ###Code ###Output _____no_output_____
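###Markdown As a quick illustration of the pattern-based extraction mentioned above, the sketch below pulls adjective–noun pairs out of the same document with `extract.token_matches` (an addition to the original notebook; it assumes textacy 0.11's `token_matches` accepts spaCy-style token patterns, and the pattern itself is ours): ###Code
# ADJ+NOUN keyphrase candidates via part-of-speech token patterns
patterns = [[{"POS": "ADJ"}, {"POS": "NOUN"}]]
[m.text for m in extract.token_matches(doc, patterns)][:10]
###Output _____no_output_____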
train_notebooks/tweet-extraction_train.ipynb
###Markdown Data Pre-processing and Transformation ###Code import re import string import pandas as pd train = pd.read_csv("/kaggle/input/tweet-sentiment-extraction/train.csv") test = pd.read_csv("/kaggle/input/tweet-sentiment-extraction/test.csv") print("Training set has {} data points".format(len(train))) print("Testing set has {} data points".format(len(test))) train.head() test.head() ###Output _____no_output_____ ###Markdown Check for NaN values ###Code train.isna().sum() test.isna().sum() # Since there is only one NaN value, let's drop it # Dropping it in the TweetDataset class below # train = train.dropna(axis=0).reset_index(drop=True) print("Training set has {} data points".format(len(train))) print("Testing set has {} data points".format(len(test))) train.isna().sum() ###Output Training set has 27481 data points Testing set has 3534 data points ###Markdown Removing punctuation & stopwords, or not? ###Code # Checking if punctuation appears in selected_text selected_text_has_punctuation = train.selected_text.str.extract( r'([{}]+)'.format( re.escape( string.punctuation))) # number of selected_text rows without punctuation selected_text_has_punctuation.isna().sum() # observing some tweets whose selected_text contains punctuation train.loc[selected_text_has_punctuation.dropna().index].head() ###Output _____no_output_____ ###Markdown - Punctuation seems to appear in quite a lot of our extracted examples. I won't remove punctuation for this dataset.- Also, I need to preserve stopwords: as can be seen in the `neutral` example above, the tweet *text* has been extracted as-is into the *selected_text*. Deciding the max *text* length ###Code train.text.str.len().max() MAX_LEN = 148 ###Output _____no_output_____ ###Markdown Tokenizer The pretrained RoBERTa model and tokenizer are from the huggingface [transformers](https://huggingface.co/transformers/main_classes/model.html?highlight=save_pretrained) library. They can be downloaded by using the `from_pretrained()` method or attached to a Kaggle kernel from [here](https://www.kaggle.com/cdeotte/tf-roberta) ###Code class TweetDataset: def __init__(self, data_df, tokenizer, train=True, max_len=96): self.data = data_df.dropna(axis=0).reset_index(drop=True) self.is_train = True if train else False self.sentiment_tokens = { 'positive': tokenizer.encode('positive').ids[0], 'negative': tokenizer.encode('negative').ids[0], 'neutral': tokenizer.encode('neutral').ids[0] } self.tokenizer = tokenizer self.max_len = max_len def ByteLevelBPEPreprocessor(self, text, selected_text, sentiment): """Return Input IDs, Attention Mask, Start/End tokens This function returns Input IDs and an Attention Mask. If this is a training dataset, it also returns start and end tokens.
""" text = " " + " ".join(text.split()) enc = self.tokenizer.encode(text) s_tok = self.sentiment_tokens[sentiment] # Get InputIDs input_ids = np.ones((self.max_len), dtype = 'int32') input_ids[:len(enc.ids)+5] = [0] + enc.ids + [2,2] + [s_tok] + [2] # Get Attention mask attention_mask = np.zeros((self.max_len), dtype='int32') attention_mask[:len(enc.ids)+5] = 1 if self.is_train: selected_text = " ".join(selected_text.split()) idx = text.find(selected_text) char_tokens = np.zeros((len(text))) char_tokens[idx:idx+len(selected_text)] = 1 # if text has ' ' prefix if text[idx-1] == ' ': char_tokens[idx-1] = 1 # Get start and end token for selected_text in input IDs start_tokens = np.zeros((self.max_len), dtype='int32') end_tokens = np.zeros((self.max_len), dtype='int32') ptr_idx = 0 label_idx = list() for i, enc_id in enumerate(enc.ids): sub_word = self.tokenizer.decode([enc_id]) if sum(char_tokens[ptr_idx:ptr_idx+len(sub_word)]) > 0: label_idx.append(i) ptr_idx += len(sub_word) if label_idx: # + 1 as we added prefix before start_tokens[label_idx[0] + 1] = 1 end_tokens[label_idx[-1] + 1] = 1 return input_ids, attention_mask, start_tokens, end_tokens return input_ids, attention_mask def __call__(self): data_len = len(self.data) input_ids = np.ones((data_len, self.max_len), dtype='int32') attention_mask = np.zeros((data_len, self.max_len), dtype='int32') token_type_ids = np.zeros((data_len, self.max_len), dtype='int32') if self.is_train: start_tokens = np.zeros((data_len, self.max_len), dtype='int32') end_tokens = np.zeros((data_len, self.max_len), dtype='int32') for i, row in tqdm(self.data.iterrows(), total=len(self.data)): out = self.ByteLevelBPEPreprocessor( row['text'], row['selected_text'] if self.is_train else None, row['sentiment'] ) if self.is_train: input_ids[i], attention_mask[i], start_tokens[i], end_tokens[i] = out else: input_ids[i], attention_mask[i] = out if self.is_train: return input_ids, attention_mask, token_type_ids, start_tokens, end_tokens return input_ids, attention_mask, token_type_ids class TransformerQA: def __init__(self, max_len, model_path, tokenizer, fit=True): self.max_len = max_len self.model_path = model_path self.tokenizer = tokenizer def roberta_model(self): """Return RoBERTa base mode with a custom question answering head """ input_ids = tf.keras.layers.Input((self.max_len,), dtype=tf.int32) attention_mask = tf.keras.layers.Input((self.max_len,), dtype=tf.int32) token_type_ids = tf.keras.layers.Input((self.max_len,), dtype=tf.int32) config = RobertaConfig.from_pretrained( os.path.join(self.model_path, 'config-roberta-base.json') ) roberta_model = TFRobertaModel.from_pretrained( os.path.join(self.model_path, 'pretrained-roberta-base.h5'), config=config ) x = roberta_model(inputs=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids) x1 = tf.keras.layers.Dropout(0.1)(x[0]) x1 = tf.keras.layers.Conv1D(1,1)(x1) x1 = tf.keras.layers.Flatten()(x1) x1 = tf.keras.layers.Activation('softmax')(x1) x2 = tf.keras.layers.Dropout(0.1)(x[0]) x2 = tf.keras.layers.Conv1D(1,1)(x2) x2 = tf.keras.layers.Flatten()(x2) x2 = tf.keras.layers.Activation('softmax')(x2) model = tf.keras.models.Model( inputs=[input_ids, attention_mask, token_type_ids], outputs=[x1,x2] ) optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5) model.compile(loss='categorical_crossentropy', optimizer=optimizer) return model def jaccard(self, str1, str2): """Return Jaccard similarity score betweeen two strings """ a = set(str1.lower().split()) b = set(str2.lower().split()) if (len(a)==0) & 
(len(b)==0): return 0.5 c = a.intersection(b) return float(len(c)) / (len(a) + len(b) - len(c)) def get_model_selected_text(self, data_df, preds_start, preds_end): """Return list of 'selected_text using the predicted start/end tokens' """ st_list = [] for k in range(len(data_df)): idx_start = np.argmax(preds_start[k,]) idx_end = np.argmax(preds_end[k,]) if idx_start > idx_end: st = data_df.loc[k,'text'] # if data_df.loc[k, 'sentiment'] != 'neutral': # st = st.split()[idx_start] else: text = " " + " ".join(data_df.loc[k,'text'].split()) enc = self.tokenizer.encode(text) st = self.tokenizer.decode(enc.ids[idx_start-1:idx_end]) st_list.append(st) return st_list def fit(self, train_df, input_ids, attention_mask, token_type_ids, start_tokens, end_tokens, stratify_y, VER='v0', verbose=1): """Fit a RoBERTa model with the training dataset """ avg_score = [] oof_start = np.zeros((input_ids.shape[0], self.max_len)) oof_end = np.zeros((input_ids.shape[0], self.max_len)) skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) for fold, (idxT,idxV) in enumerate(skf.split(input_ids, stratify_y)): print('Training FOLD {}:'.format(fold+1)) K.clear_session() model = self.roberta_model() sv = tf.keras.callbacks.ModelCheckpoint( '{}-roberta-{}.h5'.format(VER, fold), monitor='val_loss', verbose=verbose, save_best_only=True, save_weights_only=True, mode='auto', save_freq='epoch' ) model.fit([input_ids[idxT,], attention_mask[idxT,], token_type_ids[idxT,]], [start_tokens[idxT,], end_tokens[idxT,]], epochs=3, batch_size=32, verbose=verbose, callbacks=[sv], validation_data=( [ input_ids[idxV,], attention_mask[idxV,], token_type_ids[idxV,] ], [start_tokens[idxV,], end_tokens[idxV,]] ) ) # Load best saved model from disk print('Loading model...') model.load_weights('{}-roberta-{}.h5'.format(VER, fold)) # Predicting OOF samples print('Predicting OOF...') oof_start[idxV,],oof_end[idxV,] = model.predict( [ input_ids[idxV,], attention_mask[idxV,], token_type_ids[idxV,] ], verbose=verbose ) pred_df = train_df.loc[idxV].reset_index(drop=True) pred_df['oof_st'] = self.get_model_selected_text( data_df=pred_df, preds_start=oof_start[idxV,], preds_end=oof_end[idxV,] ) fold_val_score = pred_df.apply( lambda x: self.jaccard(x['selected_text'], x['oof_st'] ), axis=1 ).mean() avg_score.append(fold_val_score) print('>>>> FOLD {} Jaccard score = {}'.format(fold+1, fold_val_score)) def predict(self, pred_df, input_ids, attention_mask, token_type_ids, n_models, VER='v0', verbose=1): """Return a list of predicted 'selected_text' by loading saved models """ preds_start = np.zeros((input_ids.shape[0], self.max_len)) preds_end = np.zeros((input_ids.shape[0], self.max_len)) for i in range(n_models): K.clear_session() model = self.roberta_model() print('Loading model...') model.load_weights('{}-roberta-{}.h5'.format(VER, i)) preds = model.predict( [input_ids, attention_mask, token_type_ids], verbose=verbose ) preds_start += preds[0]/n_models preds_end += preds[1]/n_models test_st = self.get_model_selected_text( data_df=pred_df, preds_start=preds_start, preds_end=preds_end ) return test_st PATH = '../input/tf-roberta/' tokenizer = tokenizers.ByteLevelBPETokenizer( vocab_file=PATH+'vocab-roberta-base.json', merges_file=PATH+'merges-roberta-base.txt', lowercase=True, add_prefix_space=True ) # Get pre-processed & transformed training inputs and labels train_data = TweetDataset(train, tokenizer, train=True, max_len=MAX_LEN) input_ids, attention_mask, token_type_ids, start_tokens, end_tokens = train_data() # Get pre-processed & 
transformed testing inputs test_data = TweetDataset(test, tokenizer, train=False, max_len=MAX_LEN) test_input_ids, test_attention_mask, test_token_type_ids = test_data() QA_model = TransformerQA(max_len=MAX_LEN, model_path=PATH, tokenizer=tokenizer) # Train the RoBERTa model QA_model.fit(train_df=train_data.data, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, start_tokens=start_tokens, end_tokens=end_tokens, stratify_y=train_data.data.sentiment.values) # Test the RoBERTa model test['selected_text'] = QA_model.predict(pred_df=test, input_ids=test_input_ids, attention_mask=test_attention_mask, token_type_ids=test_token_type_ids, n_models=5) test[['textID','selected_text']].to_csv('submission.csv',index=False) # Show 25 random predicted 'selected_text' test.sample(25) ###Output _____no_output_____
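###Markdown The competition metric implemented in `TransformerQA.jaccard` above is word-level Jaccard similarity. A small standalone check of the metric (a sketch added here, mirroring the method above): ###Code
def jaccard(str1, str2):
    # Word-level Jaccard: |intersection| / |union| of the two word sets
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    if (len(a) == 0) and (len(b) == 0):
        return 0.5
    c = a.intersection(b)
    return float(len(c)) / (len(a) + len(b) - len(c))

jaccard("so sad today", "sad")  # 1 shared word out of 3 distinct -> 1/3
###Output _____no_output_____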
Past/DSS/NLP/3.Scikit_data_creaning.ipynb
###Markdown BOW (bag of words)1. Build a fixed vocabulary covering the whole document collection,1. compare each individual document against the vocabulary to see which of its words appear,1. and record either the counts or simply the presence/absence of each word. - `DictVectorizer`: builds BOW vectors from dictionaries of pre-counted words.- `CountVectorizer`: counts the words in a document collection and builds BOW vectors.- `TfidfVectorizer`: counts the words in a document collection and builds BOW vectors whose word weights are adjusted with the TF-IDF scheme.- `HashingVectorizer`: uses the hashing trick to build BOW vectors quickly. DictVectorizerConverts feature information stored in dictionary form into matrix form- the dictionaries often hold the usage count of each word over the corpus ###Code from sklearn.feature_extraction import DictVectorizer v = DictVectorizer(sparse=False) D = [{'A': 1, 'B': 2}, {'B': 3, 'C': 1}] X = v.fit_transform(D) # fit / transform X print(v.feature_names_) print(v.inverse_transform(X)) print(v.transform({'C': 4, 'D': 3})) ###Output ['A', 'B', 'C'] [{'A': 1.0, 'B': 2.0}, {'B': 3.0, 'C': 1.0}] [[0. 0. 4.]] ###Markdown CountVectorizer`= tokenizing + counting + BOW` It takes many arguments; the important ones are the following.- `stop_words` : string {'english'}, list, or None (default)The stop-word list. 'english' uses the built-in English stop words.- `analyzer` : string {'word', 'char', 'char_wb'} or callableWord n-grams, character n-grams, or character n-grams within word boundaries- `tokenizer` : callable or None (default)Custom token-generation function- `token_pattern` : stringRegular expression defining a token- `ngram_range` : (min_n, max_n) tupleThe n-gram range- `max_df` : int or float in [0.0, 1.0]. Default 1.0Maximum document frequency for a word to be included in the vocabulary- `min_df` : int or float in [0.0, 1.0]. Default 1Minimum document frequency for a word to be included in the vocabulary- `vocabulary` : dict or listThe vocabulary ###Code from sklearn.feature_extraction.text import CountVectorizer corpus = [ 'This is the first document.', 'This is the second second document.', 'And the third one.', 'Is this the first document?', 'The last document?', ] vect = CountVectorizer() vect.fit(corpus) vect.vocabulary_ # assign an index to each word - builds the vocabulary vect.transform(['This is the second document.']).toarray() vect.transform(['Something completely new.']).toarray() vect.transform(corpus).toarray() ###Output _____no_output_____ ###Markdown Stop WordsStop words are words that can be ignored when building the vocabulary (essential)- too many words can actually hurt the model's ability to discriminate ###Code vect = CountVectorizer(stop_words=["and", "is", "the", "this"]).fit(corpus) vect.vocabulary_ vect = CountVectorizer(stop_words="english").fit(corpus) # built-in default stop words vect.vocabulary_ ###Output _____no_output_____ ###Markdown TokenThe analyzer, tokenizer, and token_pattern arguments select the token generator- analyzer : string, {'word', 'char', 'char_wb'} or callable- tokenizer : Override the string tokenization step while preserving the preprocessing and n-grams generation steps.- token_pattern : customize ###Code vect = CountVectorizer(analyzer="char").fit(corpus) vect.vocabulary_ vect = CountVectorizer(token_pattern="t\w+").fit(corpus) vect.vocabulary_ import nltk vect = CountVectorizer(tokenizer=nltk.word_tokenize).fit(corpus) vect.vocabulary_ ###Output _____no_output_____ ###Markdown n-gramDetermines the size of the tokens (chunks) used to build the vocabulary ###Code vect = CountVectorizer(ngram_range=(2,2)).fit(corpus) vect.vocabulary_ # bigrams only vect = CountVectorizer(ngram_range=(1,2), token_pattern="t\w+").fit(corpus) vect.vocabulary_ # unigrams + bigrams ###Output _____no_output_____ ###Markdown FrequencyWith the `max_df` and `min_df` arguments, the vocabulary can be restricted by how often a token appears in the documents- int argument value : a count- float argument value : a proportionVery rare words can cause overfitting, and very common words carry little meaning ###Code vect = CountVectorizer(max_df=4, min_df=2).fit(corpus) vect.vocabulary_, vect.stop_words_ vect.transform(corpus).toarray().sum(axis=0) ###Output _____no_output_____ ###Markdown TF-IDFTerm Frequency - Inverse Document Frequency- how often a word occurs within a document- in how many documents the word occurs- rather than counting words as-is, the weights of words that appear in every document are shrunk (why?
because such words are judged to have little power to distinguish between documents) ###Code from sklearn.feature_extraction.text import TfidfVectorizer tfidv = TfidfVectorizer().fit(corpus) tfidv.transform(corpus).toarray() ###Output _____no_output_____ ###Markdown Hashing Trick- `CountVectorizer` performs all of its work in memory, so as the documents to be processed grow it becomes slow or impossible to run.- With `HashingVectorizer`, the index of each word is generated by a hash function, which reduces both memory use and run time. ###Code from sklearn.datasets import fetch_20newsgroups twenty = fetch_20newsgroups() len(twenty.data) %time CountVectorizer().fit(twenty.data).transform(twenty.data) from sklearn.feature_extraction.text import HashingVectorizer hv = HashingVectorizer(n_features=10) %time hv.transform(twenty.data) ###Output CPU times: user 4.09 s, sys: 39.7 ms, total: 4.13 s Wall time: 4.13 s ###Markdown Example ###Code from urllib.request import urlopen import json import string from konlpy.utils import pprint from konlpy.tag import Hannanum hannanum = Hannanum() f = urlopen("https://www.datascienceschool.net/download-notebook/708e711429a646818b9dcbb581e0c10a/") nb = json.loads(f.read()) # parse the downloaded notebook JSON cell = ["\n".join(c["source"]) for c in nb["cells"] if c["cell_type"] == "markdown"] docs = [w for w in hannanum.nouns(" ".join(cell)) if ((not w[0].isnumeric()) and (w[0] not in string.punctuation))] %matplotlib inline vect = CountVectorizer().fit(docs) count = vect.transform(docs).toarray().sum(axis=0) idx = np.argsort(-count) count = count[idx] feature_name = np.array(vect.get_feature_names())[idx] plt.bar(range(len(count)), count) plt.show() pprint(list(zip(feature_name, count))) ###Output [('컨테이너', 81), ('도커', 41), ('명령', 34), ('이미지', 33), ('사용', 26), ('가동', 14), ('중지', 13), ('mingw64', 13), ('삭제', 12), ('이름', 11), ('아이디', 11), ('다음', 10), ('시작', 9), ('목록', 8), ('옵션', 6), ('a181562ac4d8', 6), ('입력', 6), ('외부', 5), ('출력', 5), ('해당', 5), ('호스트', 5), ('명령어', 5), ('확인', 5), ('경우', 5), ('재시작', 4), ('존재', 4), ('컴퓨터', 4), ('터미널', 4), ('프롬프트', 4), ('포트', 4), ('377ad03459bf', 3), ('가상', 3), ('수행', 3), ('문자열', 3), ('dockeruser', 3), ('항목', 3), ('마찬가지', 3), ('대화형', 3), ('종료', 2), ('상태', 2), ('저장', 2), ('호스트간', 2), ('작업', 2), ('지정', 2), ('생각', 2), ('문헌', 2), ('동작', 2), ('시스템', 2), ('명시해', 2), ('특정', 2), ('관련하', 2), ('이때', 2), ('의미', 2), ('추가', 2), ('조합', 1), ('container', 1), ('폴더', 1), ('a1e4ed2ac65b', 1), ('작동', 1), ('자체', 1), ('자동', 1), ('image', 1), ('정지', 1), ('핵심', 1), ('초간단', 1), ('중복', 1), ('id', 1), ('최소한', 1), ('일부분', 1), ('컨테이', 1), ('daemon', 1), ('컨테이너상', 1), ('한다', 1), ('콜론', 1), ('태그', 1), ('하나', 1), ('툴박스', 1), ('파일', 1), ('포워딩', 1), ('주의해', 1), ('이해', 1), ('누른다', 1), ('이미지는', 1), ('공유', 1), ('브라우저', 1), ('복사', 1), ('문제', 1), ('문자', 1), ('관련', 1), ('명시', 1), ('길벗', 1), ('사용법', 1), ('메시지', 1), ('마지막', 1), ('리눅스', 1), ('나오기', 1), ('도서출판', 1), ('데몬', 1), ('대화적', 1), ('대표적', 1), ('내부', 1), ('머신', 1), ('이재홍', 1), ('사용자', 1), ('생략', 1), ('tag', 1), ('가능', 1), ('의존', 1), ('으로', 1), ('내용', 1), ('원본', 1), ('요약', 1), ('가지', 1), ('사용해', 1), ('오류', 1), ('연결', 1), ('여기', 1), ('개념', 1), ('실행', 1), ('시스템상', 1), ('소개', 1), ('설명', 1), ('생성', 1), ('연습', 1), ('윈도우즈', 1)]
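###Markdown For reference on the TF-IDF section above (added here; this is scikit-learn's documented default behavior with `smooth_idf=True`, not something stated in the original notebook): `TfidfVectorizer` computes, for term $t$ in document $d$,$$\text{tf-idf}(t, d) = \text{tf}(t, d) \times \left( \ln \frac{1 + n}{1 + \text{df}(t)} + 1 \right)$$where $n$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$; each document vector is then normalized to unit Euclidean length.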
.ipynb_checkpoints/SupportVectorMachines6-checkpoint.ipynb
###Markdown Support Vector Machine Models**Support vector machines (SVMs)** are a widely used and powerful category of machine learning algorithms. There are many variations on the basic idea of an SVM. An SVM attempts to **maximally separate** classes by finding the **support vectors** with the lowest error rate or maximum separation. SVMs can use many types of **kernel functions**. The most common kernel functions are **linear** and the **radial basis function** or **RBF**. The linear kernel attempts to separate classes by finding hyperplanes in the feature space that maximally separate the classes. The RBF uses a set of local Gaussian-shaped basis kernels to find a nonlinear separation of the classes. Example: Iris datasetAs a first example you will use SVMs to classify the species of iris flowers. As a first step, execute the code in the cell below to load the required packages to run the rest of this notebook. ###Code from sklearn import svm, preprocessing #from statsmodels.api import datasets from sklearn import datasets ## Get dataset from sklearn import sklearn.model_selection as ms import sklearn.metrics as sklm import matplotlib.pyplot as plt import pandas as pd import numpy as np import numpy.random as nr %matplotlib inline ###Output _____no_output_____ ###Markdown To get a feel for these data, you will now load and plot them. The code in the cell below does the following:1. Loads the iris data as a Pandas data frame. 2. Adds column names to the data frame.3. Displays all 4 possible scatter plot views of the data. Execute this code and examine the results. ###Code def plot_iris(iris): '''Function to plot iris data by type''' setosa = iris[iris['Species'] == 'setosa'] versicolor = iris[iris['Species'] == 'versicolor'] virginica = iris[iris['Species'] == 'virginica'] fig, ax = plt.subplots(2, 2, figsize=(12,12)) x_ax = ['Sepal_Length', 'Sepal_Width'] y_ax = ['Petal_Length', 'Petal_Width'] for i in range(2): for j in range(2): ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = 'x') ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = 'o') ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = '+') ax[i,j].set_xlabel(x_ax[i]) ax[i,j].set_ylabel(y_ax[j]) ## Import the dataset from sklearn.datasets iris = datasets.load_iris() ## Create a data frame from the dictionary species = [iris.target_names[x] for x in iris.target] iris = pd.DataFrame(iris['data'], columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']) iris['Species'] = species #print(species) ## Plot views of the iris data plot_iris(iris) ###Output _____no_output_____ ###Markdown You can see that Setosa (in blue) is well separated from the other two categories. The Versicolor (in orange) and the Virginica (in green) show considerable overlap. The question is how well our classifier will separate these categories. Scikit Learn classifiers require numerically coded numpy arrays for the features and the label. The code in the cell below does the following processing:1. Creates a numpy array of the features.2. Numerically codes the label using a dictionary lookup, and converts it to a numpy array. Execute this code. ###Code Features = np.array(iris[['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width']]) levels = {'setosa':0, 'versicolor':1, 'virginica':2} Labels = np.array([levels[x] for x in iris['Species']]) ###Output _____no_output_____ ###Markdown Next, execute the code in the cell below to split the dataset into test and training sets.
Notice that, unusually, 100 of the 150 cases are being used as the test dataset. ###Code ## Randomly sample cases to create independent training and test data nr.seed(1115) indx = range(Features.shape[0]) indx = ms.train_test_split(indx, test_size = 100) X_train = Features[indx[0],:] y_train = np.ravel(Labels[indx[0]]) X_test = Features[indx[1],:] y_test = np.ravel(Labels[indx[1]]) ###Output _____no_output_____ ###Markdown As is always the case with machine learning, numeric features must be scaled. The code in the cell below performs the following processing:1. A Z-score scale object is defined using the `StandardScaler` function from the Scikit Learn preprocessing package. 2. The scaler is fit to the training features. Subsequently, this scaler is used to apply the same scaling to the test data and in production. 3. The training features are scaled using the `transform` method. Execute this code. ###Code scale = preprocessing.StandardScaler() scale.fit(X_train) X_train = scale.transform(X_train) ###Output _____no_output_____ ###Markdown Now you will define and fit a linear SVM model. The code in the cell below defines a linear SVM object using the `LinearSVC` function from the Scikit Learn SVM package, and then fits the model. Execute this code. ###Code nr.seed(1115) svm_mod = svm.LinearSVC() svm_mod.fit(X_train, y_train) ###Output _____no_output_____ ###Markdown Notice that the SVM model object hyperparameters are displayed. Next, the code in the cell below performs the following processing to score the test data subset:1. The test features are scaled using the scaler computed for the training features. 2. The `predict` method is used to compute the scores from the scaled features. Execute this code. ###Code X_test = scale.transform(X_test) scores = svm_mod.predict(X_test) ###Output _____no_output_____ ###Markdown It is time to evaluate the model results. Keep in mind that the problem has been made deliberately difficult, by having more test cases than training cases. The iris data has three species categories. Therefore it is necessary to use evaluation code for a three-category problem. The function in the cell below extends code from previous labs to deal with a three-category problem. Execute this code and examine the results.
###Code def print_metrics_3(labels, scores): conf = sklm.confusion_matrix(labels, scores) print(' Confusion matrix') print(' Score Setosa Score Versicolor Score Virginica') print('Actual Setosa %6d' % conf[0,0] + ' %5d' % conf[0,1] + ' %5d' % conf[0,2]) print('Actual Versicolor %6d' % conf[1,0] + ' %5d' % conf[1,1] + ' %5d' % conf[1,2]) print('Actual Virginica %6d' % conf[2,0] + ' %5d' % conf[2,1] + ' %5d' % conf[2,2]) ## Now compute and display the accuracy and metrics print('') print('Accuracy %0.2f' % sklm.accuracy_score(labels, scores)) metrics = sklm.precision_recall_fscore_support(labels, scores) print(' ') print(' Setosa Versicolor Virginica') print('Num case %0.2f' % metrics[3][0] + ' %0.2f' % metrics[3][1] + ' %0.2f' % metrics[3][2]) print('Precision %0.2f' % metrics[0][0] + ' %0.2f' % metrics[0][1] + ' %0.2f' % metrics[0][2]) print('Recall %0.2f' % metrics[1][0] + ' %0.2f' % metrics[1][1] + ' %0.2f' % metrics[1][2]) print('F1 %0.2f' % metrics[2][0] + ' %0.2f' % metrics[2][1] + ' %0.2f' % metrics[2][2]) print_metrics_3(y_test, scores) ###Output Confusion matrix Score Setosa Score Versicolor Score Virginica Actual Setosa 34 1 0 Actual Versicolor 0 24 10 Actual Virginica 0 3 28 Accuracy 0.86 Setosa Versicolor Virginica Num case 35.00 34.00 31.00 Precision 1.00 0.86 0.74 Recall 0.97 0.71 0.90 F1 0.99 0.77 0.81 ###Markdown Examine these results. Notice the following:1. The confusion matrix has dimension 3X3. You can see that most cases are correctly classified. 2. The overall accuracy is 0.86. Since the classes are roughly balanced, this metric indicates relatively good performance of the classifier, particularly since it was only trained on 50 cases. 3. The precision, recall and F1 for each of the classes are relatively good. Versicolor has the worst metrics since it has the largest number of misclassified cases. To get a better feel for what the classifier is doing, the code in the cell below displays a set of plots showing correctly (as '+') and incorrectly (as 'o') classified cases, with the species color-coded. Execute this code and examine the results.
###Code def plot_iris_score(iris, y_test, scores):
    '''Function to plot iris data by type'''
    ## Find correctly and incorrectly classified cases
    true = np.equal(scores, y_test).astype(int)
    print(true)
    ## Create data frame from the test data
    iris = pd.DataFrame(iris)
    levels = {0:'setosa', 1:'versicolor', 2:'virginica'}
    iris['Species'] = [levels[x] for x in y_test]
    iris.columns = ['Sepal_Length', 'Sepal_Width', 'Petal_Length', 'Petal_Width', 'Species']

    ## Set up for the plot
    fig, ax = plt.subplots(2, 2, figsize=(12,12))
    markers = ['o', '+']
    x_ax = ['Sepal_Length', 'Sepal_Width']
    y_ax = ['Petal_Length', 'Petal_Width']

    for t in range(2): # loop over correct and incorrect classifications
        setosa = iris[(iris['Species'] == 'setosa') & (true == t)]
        versicolor = iris[(iris['Species'] == 'versicolor') & (true == t)]
        virginica = iris[(iris['Species'] == 'virginica') & (true == t)]
        # loop over all the dimensions
        for i in range(2):
            for j in range(2):
                ax[i,j].scatter(setosa[x_ax[i]], setosa[y_ax[j]], marker = markers[t], color = 'blue')
                ax[i,j].scatter(versicolor[x_ax[i]], versicolor[y_ax[j]], marker = markers[t], color = 'orange')
                ax[i,j].scatter(virginica[x_ax[i]], virginica[y_ax[j]], marker = markers[t], color = 'green')
                ax[i,j].set_xlabel(x_ax[i])
                ax[i,j].set_ylabel(y_ax[j])

plot_iris_score(X_test, y_test, scores)
###Output [0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 1 0
 1 1 0 1 1 0 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1]
###Markdown Examine these plots. You can see how the classifier has divided the feature space between the classes. Notice that most of the errors occur in the overlap region between Virginica and Versicolor. This behavior is to be expected. There is an error in classifying Setosa, which is a bit surprising and probably arises from the projection of the division between classes. Is it possible that a nonlinear SVM would separate these cases better? The code in the cell below uses the `SVC` function to define a nonlinear model with a radial basis function (RBF) kernel. This model is fit with the training data, and the evaluation of the model is displayed. Execute this code, and answer **Question 1** on the course page. ###Code nr.seed(1115)
svc_mod = svm.SVC()
svc_mod.fit(X_train, y_train)
scores = svc_mod.predict(X_test)  # score the test data with the nonlinear (RBF) model just fit
print_metrics_3(y_test, scores)
plot_iris_score(X_test, y_test, scores)
###Output                  Confusion matrix
                 Score Setosa   Score Versicolor    Score Virginica
Actual Setosa        34              1                  0
Actual Versicolor     0             24                 10
Actual Virginica      0              3                 28

Accuracy  0.86
 
           Setosa   Versicolor   Virginica
Num case   35.00     34.00       31.00
Precision  1.00      0.86        0.74
Recall     0.97      0.71        0.90
F1         0.99      0.77        0.81
[0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 1 1 1 0 0 1 0
 1 1 0 1 1 0 1 1 1 1 1 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1]
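###Markdown The RBF model above was run with its default settings. As an optional aside (not part of the original lab), a cross-validated search over the RBF kernel's `C` and `gamma` is one way to probe whether a better-tuned nonlinear SVM separates Versicolor and Virginica more cleanly. This sketch reuses the scaled `X_train`/`y_train` from the cells above; the grid values are illustrative assumptions only. ###Code from sklearn.model_selection import GridSearchCV

# Illustrative grid; widen or refine as needed
param_grid = {'C': [0.1, 1.0, 10.0], 'gamma': [0.01, 0.1, 1.0]}
svc_search = GridSearchCV(svm.SVC(kernel='rbf'), param_grid, cv=5)
svc_search.fit(X_train, y_train)
print(svc_search.best_params_)
print_metrics_3(y_test, svc_search.predict(X_test))
###Output _____no_output_____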
notebooks/chapter14_graphgeo/03_dag.ipynb
###Markdown > This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python. 14.3. Resolving dependencies in a Directed Acyclic Graph with a topological sort You need the `python-apt` package in order to build the package dependency graph. (https://pypi.python.org/pypi/python-apt/)We also assume that this notebook is executed on a Debian system (like Ubuntu). If you don't have such a system, you can download the data *Debian* directly on the book's website. Extract it in the current directory, and start this notebook directly at step 7. (http://ipython-books.github.io) 1. We import the `apt` module and we build the list of packages. ###Code import json import apt cache = apt.Cache() ###Output _____no_output_____ ###Markdown 2. The `graph` dictionary will contain the adjacency list of a small portion of the dependency graph. ###Code graph = {} ###Output _____no_output_____ ###Markdown 3. We define a function that returns the list of dependencies of a package. ###Code def get_dependencies(package): if package not in cache: return [] pack = cache[package] ver = pack.candidate or pack.versions[0] # We flatten the list of dependencies, # and we remove the duplicates. return sorted(set([item.name for sublist in ver.dependencies for item in sublist])) ###Output _____no_output_____ ###Markdown 4. We now define a *recursive* function that builds the dependency graph for a particular package. This function updates the `graph` variable. ###Code def get_dep_recursive(package): if package not in cache: return [] if package not in graph: dep = get_dependencies(package) graph[package] = dep for dep in graph[package]: if dep not in graph: graph[dep] = get_dep_recursive(dep) return graph[package] ###Output _____no_output_____ ###Markdown 5. Let's build the dependency graph for IPython. ###Code get_dep_recursive('ipython'); ###Output _____no_output_____ ###Markdown 6. Finally, let's save the adjacency list in JSON. ###Code with open('data/apt.json', 'w') as f: json.dump(graph, f, indent=1) ###Output _____no_output_____ ###Markdown 7. We import a few packages. ###Code import json import numpy as np import networkx as nx import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown 8. Let's load the adjacency list from the JSON file. ###Code with open('data/apt.json', 'r') as f: graph = json.load(f) ###Output _____no_output_____ ###Markdown 9. Now, we create a directed graph (`DiGraph` in NetworkX) from our adjacency list. We reverse the graph to get a more natural ordering. ###Code g = nx.DiGraph(graph).reverse() ###Output _____no_output_____ ###Markdown 10. A topological sort only exists when the graph is a **directed acyclic graph** (DAG). This means that there is no cycle in the graph, i.e. no circular dependency here. Is our graph a DAG? ###Code nx.is_directed_acyclic_graph(g) ###Output _____no_output_____ ###Markdown 11. What are the packages responsible for the cycles? We can find it out with the `simple_cycles` function. ###Code set([cycle[0] for cycle in nx.simple_cycles(g)]) ###Output _____no_output_____ ###Markdown 12. Here, we can try to remove these packages. In an actual package manager, these cycles need to be carefully taken into account. ###Code g.remove_nodes_from(_) nx.is_directed_acyclic_graph(g) ###Output _____no_output_____ ###Markdown 13. The graph is now a DAG. Let's display it first. 
###Code ug = g.to_undirected() deg = ug.degree() plt.figure(figsize=(4,4)); # The size of the nodes depends on the number of dependencies. nx.draw(ug, font_size=6, node_size=[20*deg[k] for k in ug.nodes()]); ###Output _____no_output_____ ###Markdown 14. Finally, we can perform the topological sort, thereby obtaining a linear installation order satisfying all dependencies. ###Code nx.topological_sort(g) ###Output _____no_output_____
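###Markdown One practical note on the cell above: in NetworkX 2.x, `topological_sort()` returns a generator rather than a list, so to inspect or reuse the installation order you need to materialize it, for example with `list()`. ###Code # Materialize the ordering; works in both NetworkX 1.x and 2.x
install_order = list(nx.topological_sort(g))
install_order[:10]  # the first ten packages to install
###Output _____no_output_____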
final_project/fraud500.ipynb
###Markdown Credit card fraud detection A binary classification problem: determine whether a transaction is fraudulent or not. The dataset contains transactions made by credit cards in September 2013 by European cardholders. To guarantee anonymity, all the independent variables are transformed into numerical features using PCA transformations. ###Code import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn.ensemble import IsolationForest

!pwd
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

fraud = pd.read_csv('/kaggle/input/creditcardfraud/creditcard.csv')
fraud.head(3)
fraud.columns
fraud.info()
fraud.describe()
fraud.shape
fraud.isna().sum().any()
###Output _____no_output_____ ###Markdown There are no NaN values in the entire dataset. ###Code fraud['Class'].value_counts()

plt.figure(figsize=(9,4))
plt.bar(['non fraud', 'fraud'], np.log10(fraud['Class'].value_counts().to_numpy()), width=0.3,
        color=['navy', 'firebrick'], zorder=3)
plt.title('Non fraud/fraud count', fontsize=14)
plt.ylabel('class count (log10 scale)')
plt.xlabel('non fraud/fraud')
plt.grid(color='y', axis='y', linewidth=0.5)
plt.show()

print('Non frauds: {} \nFrauds: {}'.format(fraud['Class'].value_counts()[0], fraud['Class'].value_counts()[1]))
###Output _____no_output_____ ###Markdown Thus we can see the dataset is highly unbalanced. Data distributions and correlation matrix ###Code fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud.loc[fraud['Class'] == 0], x="Time", ax=ax[0], color='navy', kde=True)
ax[0].set_title('Non frauds distribution of transaction time', fontsize=14)
sns.histplot(data=fraud.loc[fraud['Class'] == 1], x="Time", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction time', fontsize=14)
plt.show()

fig, ax = plt.subplots(1, 2, figsize=(18,4))
sns.histplot(data=fraud.loc[(fraud['Class'] == 0) & (fraud['Amount'] <= 1000)], x="Amount", color='navy', ax=ax[0], kde=True)
ax[0].set_title('Non frauds distribution of transaction amount in (0-1000) interval', fontsize=14)
sns.histplot(data=fraud.loc[fraud['Class'] == 1], x="Amount", ax=ax[1], color='firebrick', kde=True)
ax[1].set_title('Frauds distribution of transaction amount', fontsize=14)
plt.show()

plt.subplots(figsize=(11,9))
corr = fraud.corr()
sns.heatmap(corr, cmap='YlOrBr', annot_kws={'size':20})
plt.title("Correlation matrix of whole dataset", fontsize=14)
plt.show()
###Output _____no_output_____ ###Markdown Outlier removal The adopted outlier removal technique is Isolation Forest. 
###Code removal_fraud = IsolationForest(max_samples='auto', random_state=150, contamination='auto', n_jobs=-1) removal_nofraud = IsolationForest(max_samples='auto', random_state=150, contamination='auto', n_jobs=-1) f = fraud.loc[(fraud['Class'] == 1)] nof = fraud.loc[(fraud['Class'] == 0)] mask_f = removal_fraud.fit_predict(f[[col for col in f.columns if 'V' in col]]) mask_nof = removal_nofraud.fit_predict(nof[[col for col in nof.columns if 'V' in col]]) print(f.shape) print(nof.shape) (mask_f == -1).sum() (mask_nof == -1).sum() fraud_outliers = pd.concat([f.iloc[(mask_f == -1)], nof.iloc[(mask_nof == -1)]]) fraud_clean = pd.concat([f.iloc[~(mask_f == -1)], nof.iloc[~(mask_nof == -1)]]) fraud_outliers['Class'].value_counts() fraud_clean['Class'].value_counts() ###Output _____no_output_____ ###Markdown Plot the outliers distributions ###Code fig, ax = plt.subplots(1, 2, figsize=(18,4)) sns.histplot(data=fraud_outliers.loc[fraud_outliers['Class'] == 0], x="Time", ax=ax[0], color='navy', kde=True) ax[0].set_title('Non frauds distribution of transaction time', fontsize=14) sns.histplot(data=fraud_outliers.loc[fraud_outliers['Class'] == 1], x="Time", ax=ax[1], color='firebrick', kde=True) ax[1].set_title('Frauds distribution of transaction time', fontsize=14) fig.suptitle('Outlier distribution of Time', fontsize=14) plt.show() fig, ax = plt.subplots(1, 2, figsize=(18,4)) sns.histplot(data=fraud_outliers.loc[(fraud_outliers['Class'] == 0)], x="Amount", color='navy', ax=ax[0], kde=True) ax[0].set_title('Non frauds distribution of transaction amount', fontsize=14) sns.histplot(data=fraud_outliers.loc[fraud_outliers['Class'] == 1], x="Amount", ax=ax[1], color='firebrick', kde=True) ax[1].set_title('Frauds distribution of transaction amount', fontsize=14) fig.suptitle('Outlier distribution of Transaction Amount', fontsize=14) plt.show() ###Output _____no_output_____ ###Markdown Plot of distributions of clean data ###Code fig, ax = plt.subplots(1, 2, figsize=(18,4)) sns.histplot(data=fraud_clean.loc[fraud_clean['Class'] == 0], x="Time", ax=ax[0], color='navy', kde=True) ax[0].set_title('Non frauds distribution of transaction time', fontsize=14) sns.histplot(data=fraud_clean.loc[fraud_clean['Class'] == 1], x="Time", ax=ax[1], color='firebrick', kde=True) ax[1].set_title('Frauds distribution of transaction time', fontsize=14) fig.suptitle('Clean data distribution of Time', fontsize=14) plt.show() fig, ax = plt.subplots(1, 2, figsize=(18,4)) sns.histplot(data=fraud_clean.loc[(fraud_clean['Class'] == 0)], x="Amount", color='navy', ax=ax[0], kde=True) ax[0].set_title('Non frauds distribution of transaction amount', fontsize=14) sns.histplot(data=fraud_clean.loc[fraud_clean['Class'] == 1], x="Amount", ax=ax[1], color='firebrick', kde=True) ax[1].set_title('Frauds distribution of transaction amount', fontsize=14) fig.suptitle('Clean data distribution of transaction amount', fontsize=14) plt.show() ###Output _____no_output_____
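###Markdown Given the heavy class imbalance noted earlier, a natural next step (sketched here as an illustration, not part of the original analysis) is a stratified train/test split of the cleaned data, so the rare fraud class keeps the same proportion in both sets. ###Code from sklearn.model_selection import train_test_split

X = fraud_clean.drop('Class', axis=1)
y = fraud_clean['Class']

# stratify=y keeps the small fraud proportion consistent across the split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=42)
y_train.mean(), y_test.mean()  # fraud rate in each split
###Output _____no_output_____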
31-3-graphmatriceadja-incid.ipynb
###Markdown ![image.png](attachment:image.png) ###Code a=matrix([[1,1,1,0,0,0,0], [1,0,0,1,1,0,0], [1,0,0,0,0,1,1], [0,1,0,1,0,1,0], [0,1,0,0,1,0,1], [0,0,1,0,1,1,0], [0,0,1,0,1,1,0]]) %display latex a F1=Graph([(1,2,4),(1,3,6),(1,5,7),(2,6,7),(3,4,7),(4,5,6)]);F1 ###Output _____no_output_____
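###Markdown To check the hand-entered matrix against the graph itself, Sage's built-in `adjacency_matrix()` and `incidence_matrix()` methods can be used (a suggestion beyond the original cells; both are standard Sage Graph methods). ###Code A = F1.adjacency_matrix()   # vertex-by-vertex 0/1 matrix of F1
B = F1.incidence_matrix()   # vertex-by-edge matrix of F1
A, B
###Output _____no_output_____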
Data Science Resources/zero-to-mastery-ml-master/section-2-data-science-and-ml-tools/scikit-learn-exercises-solutions.ipynb
###Markdown Scikit-Learn Practice SolutionsThis notebook offers a set of potential solutions to the Scikit-Learn exercise notebook.Exercises are based off (and directly taken from) the quick [introduction to Scikit-Learn notebook](https://github.com/mrdbourke/zero-to-mastery-ml/blob/master/section-2-data-science-and-ml-tools/introduction-to-scikit-learn.ipynb).Different tasks will be detailed by comments or text.For further reference and resources, it's advised to check out the [Scikit-Learn documentation](https://scikit-learn.org/stable/user_guide.html).And if you get stuck, try searching for a question in the following format: "how to do XYZ with Scikit-Learn", where XYZ is the function you want to leverage from Scikit-Learn.Since we'll be working with data, we'll import Scikit-Learn's usual companions: Matplotlib, NumPy and pandas.Let's get started. ###Code # Setup matplotlib to plot inline (within the notebook)
%matplotlib inline

# Import the pyplot module of Matplotlib as plt
import matplotlib.pyplot as plt

# Import pandas under the abbreviation 'pd'
import pandas as pd

# Import NumPy under the abbreviation 'np'
import numpy as np
###Output _____no_output_____ ###Markdown End-to-end Scikit-Learn classification workflowLet's start with an end-to-end Scikit-Learn workflow.More specifically, we'll:1. Get a dataset ready2. Prepare a machine learning model to make predictions3. Fit the model to the data and make a prediction4. Evaluate the model's predictions The data we'll be using is [stored on GitHub](https://github.com/mrdbourke/zero-to-mastery-ml/tree/master/data). We'll start with [`heart-disease.csv`](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv), a dataset which contains anonymous patient data and whether or not they have heart disease.**Note:** When viewing a `.csv` on GitHub, make sure it's in the raw format. For example, the URL should look like: https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv 1. Getting a dataset ready ###Code # Import the heart disease dataset and save it to a variable
# using pandas and read_csv()
# Hint: You can directly pass the URL of a csv to read_csv()
heart_disease = pd.read_csv("https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/heart-disease.csv")

# Check the first 5 rows of the data
heart_disease.head()
###Output _____no_output_____ ###Markdown Our goal here is to build a machine learning model on all of the columns except `target` to predict `target`.In essence, the `target` column is our **target variable** (also called `y` or `labels`) and the rest of the other columns are our independent variables (also called `data` or `X`).And since our target variable is one thing or another (heart disease or not), we know our problem is a classification problem (classifying whether something is one thing or another).Knowing this, let's create `X` and `y` by splitting our dataframe up. ###Code # Create X (all columns except target)
X = heart_disease.drop("target", axis=1)

# Create y (only the target column)
y = heart_disease["target"]
###Output _____no_output_____ ###Markdown Now we've split our data into `X` and `y`, we'll use Scikit-Learn to split it into training and test sets. 
###Code # Import train_test_split from sklearn's model_selection module
from sklearn.model_selection import train_test_split

# Use train_test_split to split X & y into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y)

# View the different shapes of the training and test datasets
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output _____no_output_____ ###Markdown What do you notice about the different shapes of the data?Since our data is now in training and test sets, we'll build a machine learning model to fit patterns in the training data and then make predictions on the test data.To figure out which machine learning model we should use, you can refer to [Scikit-Learn's machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html).After following the map, you decide to use the [`RandomForestClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html). 2. Preparing a machine learning model ###Code # Import the RandomForestClassifier from sklearn's ensemble module
from sklearn.ensemble import RandomForestClassifier

# Instantiate an instance of RandomForestClassifier as clf
clf = RandomForestClassifier()
###Output _____no_output_____ ###Markdown Now you've got a `RandomForestClassifier` instance, let's fit it to the training data.Once it's fit, we'll make predictions on the test data. 3. Fitting a model and making predictions ###Code # Fit the RandomForestClassifier to the training data
clf.fit(X_train, y_train)

# Use the fitted model to make predictions on the test data and
# save the predictions to a variable called y_preds
y_preds = clf.predict(X_test)
###Output _____no_output_____ ###Markdown 4. Evaluating a model's predictionsEvaluating predictions is as important as making them. Let's check how our model did by calling the `score()` method on it and passing it the training (`X_train, y_train`) and testing data. ###Code # Evaluate the fitted model on the training set using the score() function
clf.score(X_train, y_train)

# Evaluate the fitted model on the test set using the score() function
clf.score(X_test, y_test)
###Output _____no_output_____ ###Markdown * How did your model go? * What metric does `score()` return for classifiers? * Did your model do better on the training dataset or test dataset? 
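###Markdown One way to answer the last question directly (an optional aside, reusing the fitted `clf` from above) is to print the two scores side by side; a large gap between them is a common sign of overfitting. ###Code # Compare training and test accuracy of the fitted model
print(f"Train accuracy: {clf.score(X_train, y_train):.2f}")
print(f"Test accuracy:  {clf.score(X_test, y_test):.2f}")
###Output _____no_output_____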
Experimenting with different classification modelsNow we've quickly covered an end-to-end Scikit-Learn workflow and since experimenting is a large part of machine learning, we'll now try a series of different machine learning models and see which gets the best results on our dataset.Going through the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we see there are a number of different classification models we can try (different models are in the green boxes).For this exercise, the models we're going to try and compare are:* [LinearSVC](https://scikit-learn.org/stable/modules/svm.htmlclassification)* [KNeighborsClassifier](https://scikit-learn.org/stable/modules/neighbors.html) (also known as K-Nearest Neighbors or KNN)* [SVC](https://scikit-learn.org/stable/modules/svm.htmlclassification) (also known as support vector classifier, a form of [support vector machine](https://en.wikipedia.org/wiki/Support-vector_machine))* [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) (despite the name, this is actually a classifier)* [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) (an ensemble method and what we used above)We'll follow the same workflow we used above (except this time for multiple models):1. Import a machine learning model2. Get it ready3. Fit it to the data and make predictions4. Evaluate the fitted model**Note:** Since we've already got the data ready, we can reuse it in this section. ###Code # Import LinearSVC from sklearn's svm module from sklearn.svm import LinearSVC # Import KNeighborsClassifier from sklearn's neighbors module from sklearn.neighbors import KNeighborsClassifier # Import SVC from sklearn's svm module from sklearn.svm import SVC # Import LogisticRegression from sklearn's linear_model module from sklearn.linear_model import LogisticRegression # Note: we don't have to import RandomForestClassifier, since we already have ###Output _____no_output_____ ###Markdown Thanks to the consistency of Scikit-Learn's API design, we can use virtually the same code to fit, score and make predictions with each of our models.To see which model performs best, we'll do the following:1. Instantiate each model in a dictionary2. Create an empty results dictionary3. Fit each model on the training data4. Score each model on the test data5. Check the resultsIf you're wondering what it means to instantiate each model in a dictionary, see the example below. ###Code # EXAMPLE: Instantiating a RandomForestClassifier() in a dictionary example_dict = {"RandomForestClassifier": RandomForestClassifier()} # Create a dictionary called models which contains all of the classification models we've imported # Make sure the dictionary is in the same format as example_dict # The models dictionary should contain 5 models models = {"LinearSVC": LinearSVC(), "KNN": KNeighborsClassifier(), "SVC": SVC(), "LogisticRegression": LogisticRegression(), "RandomForestClassifier": RandomForestClassifier()} # Create an empty dictionary called results results = {} ###Output _____no_output_____ ###Markdown Since each model we're using has the same `fit()` and `score()` functions, we can loop through our models dictionary and, call `fit()` on the training data and then call `score()` with the test data. 
###Code # EXAMPLE: Looping through example_dict fitting and scoring the model example_results = {} for model_name, model in example_dict.items(): model.fit(X_train, y_train) example_results[model_name] = model.score(X_test, y_test) example_results # Loop through the models dictionary items, fitting the model on the training data # and appending the model name and model score on the test data to the results dictionary for model_name, model in models.items(): model.fit(X_train, y_train) results[model_name] = model.score(X_test, y_test) results ###Output /Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/svm/_base.py:947: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations. "the number of iterations.", ConvergenceWarning) /Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html. Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) ###Markdown * Which model performed the best? * Do the results change each time you run the cell? * Why do you think this is?Due to the randomness of how each model finds patterns in the data, you might notice different results each time.Without manually setting the random state using the `random_state` parameter of some models or using a NumPy random seed, every time you run the cell, you'll get slightly different results.Let's see this in effect by running the same code as the cell above, except this time setting a [NumPy random seed equal to 42](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.seed.html). ###Code # Run the same code as the cell above, except this time set a NumPy random seed # equal to 42 np.random.seed(42) for model_name, model in models.items(): model.fit(X_train, y_train) results[model_name] = model.score(X_test, y_test) results ###Output /Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/svm/_base.py:947: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations. "the number of iterations.", ConvergenceWarning) /Users/daniel/Desktop/ml-course/zero-to-mastery-ml/env/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:939: ConvergenceWarning: lbfgs failed to converge (status=1): STOP: TOTAL NO. of ITERATIONS REACHED LIMIT. Increase the number of iterations (max_iter) or scale the data as shown in: https://scikit-learn.org/stable/modules/preprocessing.html. Please also refer to the documentation for alternative solver options: https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG) ###Markdown * Run the cell above a few times, what do you notice about the results? * Which model performs the best this time?* What happens if you add a NumPy random seed to the cell where you called `train_test_split()` (towards the top of the notebook) and then rerun the cell above?Let's make our results a little more visual. 
###Code # Create a pandas dataframe with the data as the values of the results dictionary,
# the index as the keys of the results dictionary and a single column called accuracy.
# Be sure to save the dataframe to a variable.
results_df = pd.DataFrame(results.values(), results.keys(), columns=["Accuracy"])

# Create a bar plot of the results dataframe using plot.bar()
results_df.plot.bar();
###Output _____no_output_____ ###Markdown Using `np.random.seed(42)` results in the `LogisticRegression` model performing the best (at least on my computer).Let's tune its hyperparameters and see if we can improve it. Hyperparameter TuningRemember, if you're ever trying to tune a machine learning model's hyperparameters and you're not sure where to start, you can always search something like "MODEL_NAME hyperparameter tuning".In the case of LogisticRegression, you might come across articles, such as [Hyperparameter Tuning Using Grid Search by Chris Albon](https://chrisalbon.com/machine_learning/model_selection/hyperparameter_tuning_using_grid_search/).The article uses [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) but we're going to be using [`RandomizedSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html).The different hyperparameters to search over have been set up for you in `log_reg_grid` but feel free to change them. ###Code # Different LogisticRegression hyperparameters
log_reg_grid = {"C": np.logspace(-4, 4, 20),
                "solver": ["liblinear"]}
###Output _____no_output_____ ###Markdown Since we've got a set of hyperparameters we can import `RandomizedSearchCV`, pass it our dictionary of hyperparameters and let it search for the best combination. ###Code # Setup np random seed of 42
np.random.seed(42)

# Import RandomizedSearchCV from sklearn's model_selection module
from sklearn.model_selection import RandomizedSearchCV

# Setup an instance of RandomizedSearchCV with a LogisticRegression() estimator,
# our log_reg_grid as the param_distributions, a cv of 5 and n_iter of 5.
rs_log_reg = RandomizedSearchCV(estimator=LogisticRegression(),
                                param_distributions=log_reg_grid,
                                cv=5,
                                n_iter=5,
                                verbose=True)

# Fit the instance of RandomizedSearchCV
rs_log_reg.fit(X_train, y_train);
###Output Fitting 5 folds for each of 5 candidates, totalling 25 fits
###Markdown Once `RandomizedSearchCV` has finished, we can find the best hyperparameters it found using the `best_params_` attribute. ###Code # Find the best parameters of the RandomizedSearchCV instance using the best_params_ attribute
rs_log_reg.best_params_

# Score the instance of RandomizedSearchCV using the test data
rs_log_reg.score(X_test, y_test)
###Output _____no_output_____ ###Markdown After hyperparameter tuning, did the model's score improve? What else could you try to improve it? Are there any other methods of hyperparameter tuning you can find for `LogisticRegression`? Classifier Model EvaluationWe've tried to find the best hyperparameters on our model using `RandomizedSearchCV` and so far we've only been evaluating our model using the `score()` function which returns accuracy. 
But when it comes to classification, you'll likely want to use a few more evaluation metrics, including:* [**Confusion matrix**](https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) - Compares the predicted values with the true values in a tabular way; if 100% correct, all values in the matrix will be top left to bottom right (the diagonal line).* [**Cross-validation**](https://scikit-learn.org/stable/modules/cross_validation.html) - Splits your dataset into multiple parts, trains and tests your model on each part, and evaluates performance as an average. * [**Precision**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html#sklearn.metrics.precision_score) - Proportion of true positives over the total number of positive predictions (true positives plus false positives). Higher precision leads to fewer false positives.* [**Recall**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html#sklearn.metrics.recall_score) - Proportion of true positives over the total number of actual positives (true positives plus false negatives). Higher recall leads to fewer false negatives.* [**F1 score**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html#sklearn.metrics.f1_score) - Combines precision and recall into one metric. 1 is best, 0 is worst.* [**Classification report**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html) - Sklearn has a built-in function called `classification_report()` which returns some of the main classification metrics such as precision, recall and f1-score.* [**ROC Curve**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_curve.html) - [Receiver Operating Characteristic](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a plot of true positive rate versus false positive rate.* [**Area Under Curve (AUC)**](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html) - The area underneath the ROC curve. A perfect model achieves a score of 1.0.Before we get to these, we'll instantiate a new instance of our model using the best hyperparameters found by `RandomizedSearchCV`. ###Code # Instantiate a LogisticRegression classifier using the best hyperparameters from RandomizedSearchCV
clf = LogisticRegression(solver="liblinear",
                         C=0.23357214690901212)

# Fit the new instance of LogisticRegression with the best hyperparameters on the training data
clf.fit(X_train, y_train);
###Output _____no_output_____ ###Markdown Now it's time to import the relevant Scikit-Learn methods for each of the classification evaluation metrics we're after. ###Code # Import confusion_matrix and classification_report from sklearn's metrics module
from sklearn.metrics import confusion_matrix, classification_report

# Import precision_score, recall_score and f1_score from sklearn's metrics module
from sklearn.metrics import precision_score, recall_score, f1_score

# Import plot_roc_curve from sklearn's metrics module
from sklearn.metrics import plot_roc_curve
###Output _____no_output_____ ###Markdown Evaluation metrics very often compare a model's predictions to some ground truth labels.Let's make some predictions on the test data using our latest model and save them to `y_preds`. ###Code # Make predictions on test data and save them
y_preds = clf.predict(X_test)
###Output _____no_output_____ ###Markdown Time to use the predictions our model has made to evaluate it beyond accuracy. 
###Code # Create a confusion matrix using the confusion_matrix function
confusion_matrix(y_test, y_preds)
###Output _____no_output_____ ###Markdown **Challenge:** The in-built `confusion_matrix` function in Scikit-Learn produces something not too visual, how could you make your confusion matrix more visual?You might want to search something like "how to plot a confusion matrix". Note: There may be more than one way to do this. ###Code # Import seaborn for improving visualisation of confusion matrix
import seaborn as sns

# Make confusion matrix more visual
def plot_conf_mat(y_test, y_preds):
    """
    Plots a confusion matrix using Seaborn's heatmap().
    """
    fig, ax = plt.subplots(figsize=(3, 3))
    ax = sns.heatmap(confusion_matrix(y_test, y_preds),
                     annot=True, # Annotate the boxes
                     cbar=False)
    # confusion_matrix puts true labels in rows, which heatmap draws on the y-axis
    plt.xlabel("Predicted label")
    plt.ylabel("True label")
    
    # Fix the broken annotations (this happened in Matplotlib 3.1.1)
    bottom, top = ax.get_ylim()
    ax.set_ylim(bottom + 0.5, top - 0.5);
    
plot_conf_mat(y_test, y_preds)
###Output _____no_output_____ ###Markdown How about a classification report? ###Code # classification report
print(classification_report(y_test, y_preds))
###Output               precision    recall  f1-score   support

           0       0.83      0.69      0.75        35
           1       0.77      0.88      0.82        41

    accuracy                           0.79        76
   macro avg       0.80      0.78      0.78        76
weighted avg       0.79      0.79      0.79        76

###Markdown **Challenge:** Write down what each of the columns in this classification report are.* **Precision** - Indicates the proportion of positive identifications (model predicted class 1) which were actually correct. A model which produces no false positives has a precision of 1.0.* **Recall** - Indicates the proportion of actual positives which were correctly classified. A model which produces no false negatives has a recall of 1.0.* **F1 score** - A combination of precision and recall. A perfect model achieves an F1 score of 1.0.* **Support** - The number of samples each metric was calculated on.* **Accuracy** - The accuracy of the model in decimal form. Perfect accuracy is equal to 1.0.* **Macro avg** - Short for macro average, the average precision, recall and F1 score between classes. Macro avg doesn't take class imbalance into account, so if you do have class imbalances, pay attention to this metric.* **Weighted avg** - Short for weighted average, the weighted average precision, recall and F1 score between classes. Weighted means each metric is calculated with respect to how many samples there are in each class. This metric will favour the majority class (e.g. it will give a high value when one class outperforms another due to having more samples).The classification report gives us a range of values for precision, recall and F1 score; time to find these metrics using Scikit-Learn functions. 
###Code # Find the precision score of the model using precision_score()
precision_score(y_test, y_preds)

# Find the recall score
recall_score(y_test, y_preds)

# Find the F1 score
f1_score(y_test, y_preds)
###Output _____no_output_____ ###Markdown Confusion matrix: done.Classification report: done.ROC (receiver operator characteristic) curve & AUC (area under curve) score: not done.Let's fix this.If you're unfamiliar with what a ROC curve is, that's your first challenge: read up on what one is.In a sentence, a [ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is a plot of the true positive rate versus the false positive rate.And the AUC score is the area beneath the ROC curve.Scikit-Learn provides a handy function for creating both of these called [`plot_roc_curve()`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_roc_curve.html). ###Code # Plot a ROC curve using our current machine learning model using plot_roc_curve
plot_roc_curve(clf, X_test, y_test);
###Output _____no_output_____ ###Markdown Beautiful! We've gone far beyond accuracy with a plethora of extra classification evaluation metrics.If you're not sure about any of these, don't worry, they can take a while to understand. That could be an optional extension: reading up on a classification metric you're not sure of.The thing to note here is all of these metrics have been calculated using a single training set and a single test set. Whilst this is okay, a more robust way is to calculate them using [cross-validation](https://scikit-learn.org/stable/modules/cross_validation.html).We can calculate various evaluation metrics using cross-validation using Scikit-Learn's [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function along with the `scoring` parameter. ###Code # Import cross_val_score from sklearn's model_selection module
from sklearn.model_selection import cross_val_score

# EXAMPLE: By default cross_val_score returns 5 values (cv=5).
cross_val_score(clf, X, y, scoring="accuracy", cv=5)

# EXAMPLE: Taking the mean of the returned values from cross_val_score
# gives a cross-validated version of the scoring metric.
cross_val_acc = np.mean(cross_val_score(clf, X, y, scoring="accuracy", cv=5))
cross_val_acc
###Output _____no_output_____ ###Markdown In the examples, the cross-validated accuracy is found by taking the mean of the array returned by `cross_val_score()`.Now it's time to find the same for precision, recall and F1 score. ###Code # Find the cross-validated precision
cross_val_precision = np.mean(cross_val_score(clf, X, y, scoring="precision", cv=5))
cross_val_precision

# Find the cross-validated recall
cross_val_recall = np.mean(cross_val_score(clf, X, y, scoring="recall", cv=5))
cross_val_recall

# Find the cross-validated F1 score
cross_val_f1 = np.mean(cross_val_score(clf, X, y, scoring="f1", cv=5))
cross_val_f1
###Output _____no_output_____ ###Markdown Exporting and importing a trained modelOnce you've trained a model, you may want to export it and save it to file so you can share it or use it elsewhere.One method of exporting and importing models is using the joblib library.In Scikit-Learn, exporting and importing a trained model is known as [model persistence](https://scikit-learn.org/stable/modules/model_persistence.html). 
###Code # Import the dump and load functions from the joblib library from joblib import dump, load # Use the dump function to export the trained model to file dump(clf, "trained-classifier.joblib") # Use the load function to import the trained model you just exported # Save it to a different variable name to the origial trained model loaded_clf = load("trained-classifier.joblib") # Evaluate the loaded trained model on the test data loaded_clf.score(X_test, y_test) ###Output _____no_output_____ ###Markdown What do you notice about the loaded trained model results versus the original (pre-exported) model results? Scikit-Learn Regression PracticeFor the next few exercises, we're going to be working on a regression problem, in other words, using some data to predict a number.Our dataset is a [table of car sales](https://docs.google.com/spreadsheets/d/1LPEIWJdSSJYrfn-P3UQDIXbEn5gg-o6I7ExLrWTTBWs/edit?usp=sharing), containing different car characteristics as well as a sale price.We'll use Scikit-Learn's built-in regression machine learning models to try and learn the patterns in the car characteristics and their prices on a certain group of the dataset before trying to predict the sale price of a group of cars the model has never seen before.To begin, we'll [import the data from GitHub](https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv) into a pandas DataFrame, check out some details about it and try to build a model as soon as possible. ###Code # Read in the car sales data car_sales = pd.read_csv("https://raw.githubusercontent.com/mrdbourke/zero-to-mastery-ml/master/data/car-sales-extended-missing-data.csv") # View the first 5 rows of the car sales data car_sales.head() # Get information about the car sales DataFrame car_sales.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1000 entries, 0 to 999 Data columns (total 5 columns): Make 951 non-null object Colour 950 non-null object Odometer (KM) 950 non-null float64 Doors 950 non-null float64 Price 950 non-null float64 dtypes: float64(3), object(2) memory usage: 39.2+ KB ###Markdown Looking at the output of `info()`,* How many rows are there total?* What datatypes are in each column?* How many missing values are there in each column? ###Code # Find number of missing values in each column car_sales.isna().sum() # Find the datatypes of each column of car_sales car_sales.dtypes ###Output _____no_output_____ ###Markdown Knowing this information, what would happen if we tried to model our data as it is?Let's see. ###Code # EXAMPLE: This doesn't work because our car_sales data isn't all numerical from sklearn.ensemble import RandomForestRegressor car_sales_X, car_sales_y = car_sales.drop("Price", axis=1), car_sales.Price rf_regressor = RandomForestRegressor().fit(car_sales_X, car_sales_y) ###Output _____no_output_____ ###Markdown As we see, the cell above breaks because our data contains non-numerical values as well as missing data.To take care of some of the missing data, we'll remove the rows which have no labels (all the rows with missing values in the `Price` column). 
###Code # Remove rows with no labels (NaN's in the Price column)
car_sales.dropna(subset=["Price"], inplace=True)
###Output _____no_output_____ ###Markdown Building a pipelineSince our `car_sales` data has missing values and isn't all numerical, we'll have to fix these things before we can fit a machine learning model on it.There are ways we could do this with pandas but since we're practicing Scikit-Learn, we'll see how we might do it with the [`Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) class. Because we're modifying columns in our dataframe (filling missing values, converting non-numerical data to numbers) we'll need the [`ColumnTransformer`](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html), [`SimpleImputer`](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) and [`OneHotEncoder`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) classes as well.Finally, because we'll need to split our data into training and test sets, we'll import `train_test_split` as well. ###Code # Import Pipeline from sklearn's pipeline module
from sklearn.pipeline import Pipeline

# Import ColumnTransformer from sklearn's compose module
from sklearn.compose import ColumnTransformer

# Import SimpleImputer from sklearn's impute module
from sklearn.impute import SimpleImputer

# Import OneHotEncoder from sklearn's preprocessing module
from sklearn.preprocessing import OneHotEncoder

# Import train_test_split from sklearn's model_selection module
from sklearn.model_selection import train_test_split
###Output _____no_output_____ ###Markdown Now we've got the necessary tools we need to create our preprocessing `Pipeline` which fills missing values along with turning all non-numerical data into numbers.Let's start with the categorical features. ###Code # Define different categorical features
categorical_features = ["Make", "Colour"]

# Create categorical transformer Pipeline
categorical_transformer = Pipeline(steps=[
    # Set SimpleImputer strategy to "constant" and fill value to "missing"
    ("imputer", SimpleImputer(strategy="constant", fill_value="missing")),
    # Set OneHotEncoder to ignore the unknowns
    ("onehot", OneHotEncoder(handle_unknown="ignore"))])
###Output _____no_output_____ ###Markdown It would be safe to treat `Doors` as a categorical feature as well, however since we know the vast majority of cars have 4 doors, we'll impute the missing `Doors` values as 4. ###Code # Define Doors features
door_feature = ["Doors"]

# Create Doors transformer Pipeline
door_transformer = Pipeline(steps=[
    # Set SimpleImputer strategy to "constant" and fill value to 4
    ("imputer", SimpleImputer(strategy="constant", fill_value=4))])
###Output _____no_output_____ ###Markdown Now onto the numeric features. In this case, the only numeric feature is the `Odometer (KM)` column. Let's fill its missing values with the median. ###Code # Define numeric features (only the Odometer (KM) column)
numeric_features = ["Odometer (KM)"]

# Create numeric transformer Pipeline
numeric_transformer = Pipeline(steps=[
    # Set SimpleImputer strategy to fill missing values with the median
    ("imputer", SimpleImputer(strategy="median"))])
###Output _____no_output_____ ###Markdown Time to put all of our individual transformer `Pipeline`s into a single `ColumnTransformer` instance. 
###Code # Setup preprocessing steps (fill missing values, then convert to numbers)
preprocessor = ColumnTransformer(
    transformers=[
        # Use the categorical_transformer to transform the categorical_features
        ("cat", categorical_transformer, categorical_features),
        # Use the door_transformer to transform the door_feature
        ("door", door_transformer, door_feature),
        # Use the numeric_transformer to transform the numeric_features
        ("num", numeric_transformer, numeric_features)])
###Output _____no_output_____ ###Markdown Boom! Now our `preprocessor` is ready, time to import some regression models to try out.Comparing our data to the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), we can see there's a handful of different regression models we can try.* [RidgeRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html)* [SVR(kernel="linear")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine.* [SVR(kernel="rbf")](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html) - short for Support Vector Regressor, a form of support vector machine.* [RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) - the regression version of RandomForestClassifier. ###Code # Import Ridge from sklearn's linear_model module
from sklearn.linear_model import Ridge

# Import SVR from sklearn's svm module
from sklearn.svm import SVR

# Import RandomForestRegressor from sklearn's ensemble module
from sklearn.ensemble import RandomForestRegressor
###Output _____no_output_____ ###Markdown Again, thanks to the consistent design of the Scikit-Learn library, we're able to use very similar code for each of these models.To test them all, we'll create a dictionary of regression models and an empty dictionary for regression model results. ###Code # Create dictionary of model instances, there should be 4 total key, value pairs
# in the form {"model_name": model_instance}.
# Don't forget there's two versions of SVR, one with a "linear" kernel and the
# other with kernel set to "rbf".
regression_models = {"Ridge": Ridge(),
                     "SVR_linear": SVR(kernel="linear"),
                     "SVR_rbf": SVR(kernel="rbf"),
                     "RandomForestRegressor": RandomForestRegressor()}

# Create an empty dictionary for the regression results
regression_results = {}
###Output _____no_output_____ ###Markdown Our regression model dictionary is prepared, as well as an empty dictionary to append results to; time to get the data split into `X` (feature variables) and `y` (target variable) as well as training and test sets.In our car sales problem, we're trying to use the different characteristics of a car (`X`) to predict its sale price (`y`). ###Code # Create car sales X data (every column of car_sales except Price)
car_sales_X = car_sales.drop("Price", axis=1)

# Create car sales y data (the Price column of car_sales)
car_sales_y = car_sales["Price"]

# Use train_test_split to split the car_sales_X and car_sales_y data into
# training and test sets.
# Give the test set 20% of the data using the test_size parameter.
# For reproducibility set the random_state parameter to 42. 
car_X_train, car_X_test, car_y_train, car_y_test = train_test_split(car_sales_X,
                                                                    car_sales_y,
                                                                    test_size=0.2,
                                                                    random_state=42)

# Check the shapes of the training and test datasets
car_X_train.shape, car_X_test.shape, car_y_train.shape, car_y_test.shape
###Output _____no_output_____ ###Markdown * How many rows are in each set?* How many columns are in each set?Alright, our data is split into training and test sets, time to build a small loop which is going to:1. Go through our `regression_models` dictionary2. Create a `Pipeline` which contains our `preprocessor` as well as one of the models in the dictionary3. Fit the `Pipeline` to the car sales training data4. Evaluate the target model on the car sales test data and append the results to our `regression_results` dictionary ###Code # Loop through the items in the regression_models dictionary
for model_name, model in regression_models.items():
    # Create a model pipeline with a preprocessor step and model step
    model_pipeline = Pipeline(steps=[("preprocessor", preprocessor),
                                     ("model", model)])
    
    # Fit the model pipeline to the car sales training data
    print(f"Fitting {model_name}...")
    model_pipeline.fit(car_X_train, car_y_train)
    
    # Score the model pipeline on the test data, appending the model_name to the
    # results dictionary
    print(f"Scoring {model_name}...")
    regression_results[model_name] = model_pipeline.score(car_X_test, car_y_test)
###Output Fitting Ridge...
Scoring Ridge...
Fitting SVR_linear...
Scoring SVR_linear...
Fitting SVR_rbf...
Scoring SVR_rbf...
Fitting RandomForestRegressor...
Scoring RandomForestRegressor...
###Markdown Our regression models have been fit, let's see how they did! ###Code # Check the results of each regression model by printing the regression_results
# dictionary
regression_results
###Output _____no_output_____ ###Markdown * Which model did the best?* How could you improve its results?* What metric does the `score()` method of a regression model return by default?Since we've fitted some models but only compared them via the default metric contained in the `score()` method (R^2 score or coefficient of determination), let's take the `RidgeRegression` model and evaluate it with a few other [regression metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics).Specifically, let's find:1. **R^2 (pronounced r-squared) or coefficient of determination** - Compares your model's predictions to the mean of the targets. Values can range from negative infinity (a very poor model) to 1. For example, if all your model does is predict the mean of the targets, its R^2 value would be 0. And if your model perfectly predicts a range of numbers, its R^2 value would be 1. 2. **Mean absolute error (MAE)** - The average of the absolute differences between predictions and actual values. It gives you an idea of how wrong your predictions were.3. **Mean squared error (MSE)** - The average of the squared differences between predictions and actual values. Squaring the errors removes negative errors. It also amplifies outliers (samples which have larger errors).Scikit-Learn has a few classes built-in which are going to help us with these, namely, [`mean_absolute_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html), [`mean_squared_error`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html) and [`r2_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html). 
###Code # Import mean_absolute_error from sklearn's metrics module from sklearn.metrics import mean_absolute_error # Import mean_squared_error from sklearn's metrics module from sklearn.metrics import mean_squared_error # Import r2_score from sklearn's metrics module from sklearn.metrics import r2_score ###Output _____no_output_____ ###Markdown All the evaluation metrics we're concerned with compare a model's predictions with the ground truth labels. Knowing this, we'll have to make some predictions.Let's create a `Pipeline` with the `preprocessor` and a `Ridge()` model, fit it on the car sales training data and then make predictions on the car sales test data. ###Code # Create RidgeRegression Pipeline with preprocessor as the "preprocessor" and # Ridge() as the "model". ridge_pipeline = Pipeline(steps=[("preprocessor", preprocessor), ("model", Ridge())]) # Fit the RidgeRegression Pipeline to the car sales training data ridge_pipeline.fit(car_X_train, car_y_train) # Make predictions on the car sales test data using the RidgeRegression Pipeline car_y_preds = ridge_pipeline.predict(car_X_test) # View the first 50 predictions car_y_preds[:50] ###Output _____no_output_____ ###Markdown Nice! Now we've got some predictions, time to evaluate them. We'll find the mean squared error (MSE), mean absolute error (MAE) and R^2 score (coefficient of determination) of our model. ###Code # EXAMPLE: Find the MSE by comparing the car sales test labels to the car sales predictions mse = mean_squared_error(car_y_test, car_y_preds) # Return the MSE mse # Find the MAE by comparing the car sales test labels to the car sales predictions mae = mean_absolute_error(car_y_test, car_y_preds) # Return the MAE mae # Find the R^2 score by comparing the car sales test labels to the car sales predictions r2 = r2_score(car_y_test, car_y_preds) # Return the R^2 score r2 ###Output _____no_output_____
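###Markdown As with the classifier earlier, metrics from a single train/test split can be unrepresentative. A short optional sketch (reusing `ridge_pipeline` and the full car sales data from above): because a `Pipeline` refits its preprocessing on each fold, it can be passed straight to `cross_val_score()`. ###Code from sklearn.model_selection import cross_val_score

# Cross-validated R^2 for the Ridge pipeline
cv_r2 = np.mean(cross_val_score(ridge_pipeline, car_sales_X, car_sales_y,
                                scoring="r2", cv=5))

# Cross-validated MAE (sklearn returns it negated, hence the minus sign)
cv_mae = -np.mean(cross_val_score(ridge_pipeline, car_sales_X, car_sales_y,
                                  scoring="neg_mean_absolute_error", cv=5))
cv_r2, cv_mae
###Output _____no_output_____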
MasterIndex/TextProcessingOne13Form_update_sanjiv.ipynb
###Markdown Code to process a 13D/F/G filing and pull out required fields ###Code %pylab inline
import pandas as pd
from bs4 import BeautifulSoup
import requests
import re
###Output Populating the interactive namespace from numpy and matplotlib
###Markdown Do all extraction in one block of code - Run this ###Code %time
# Create an empty data frame. To be populated later with quarterly form info.
form_links = pd.DataFrame(columns = ['CIK', 'Company Name', 'Form Type', 'Date Filed', 'Filename'])

# Function to extract URLs of Form 13*. URLs to be used to access HTML/XML documents of interest
#
# Input: First and last years in the range of years we want Form 13s pulled
#        (range(first, last) excludes last, so pass last = final year + 1)
#
# Output: Populates form_links with all Form 13* filed during the range of years specified.
def extract_13_url(first, last):
    global form_links   # accumulate into the module-level data frame defined above
    for yr in range(first, last):
        for qr in range(1,5):
            #print(yr)
            df = pd.read_csv('MasterIndex'+str(yr)+ '_' + str(qr)+'.idx', encoding='latin1', sep = '\t')
            data = df.iloc[6:].reset_index(drop=True)
            data = data.rename(columns={'Description: Master Index of EDGAR Dissemination Feed': 'messy'})
            meep = data.messy.str.split('|', expand = True)
            raw = meep.rename(columns={0: 'CIK', 1: 'Company Name', 2: 'Form Type', 3: 'Date Filed', 4: 'Filename'})
            #raw = raw[raw['Form Type'].str.contains('13D')| raw['Form Type'].str.contains('13F') |\
            #          raw['Form Type'].str.contains('13G')].reset_index(drop=True)
            raw = raw[raw['Form Type'].str.match('.*13D.*|.*13F.*|.*13G.*')].reset_index(drop = True)
            form_links = form_links.append(raw)
    form_links = form_links.reset_index(drop = True)

#extract_13_url(1997, 2019)
###Output _____no_output_____ ###Markdown Save the dataframe to a csv file so we only need to load it later down the pipeline. ###Code ### DO NOT RUN ###
#form_links.to_csv('form_info.csv', index = False)
### DO NOT RUN ###
###Output _____no_output_____ ###Markdown Below, load the saved table and test with the first filename to check single-file processing ###Code df = pd.read_csv('form_info.csv', index_col = False)
df.head()
###Output _____no_output_____ ###Markdown This is a very clean table. We can likely use the CIK from each row to match with the respective company's CIK in the Compustat table.However, this Form 13* table includes both the target's and acquirer's CIK for the same filing (in other words, we have duplicate filings). We will need to figure out how to work around duplicates, or find another way to uniquely identify each filing. Fortunately, we know that each filing only contains the target's CUSIP number. Compustat data also contains a company's CUSIP number (which is unique). Therefore, we will have to extract the CUSIP no. from each filing and append it to the table as a new field. ###Code ### DO NOT RUN ###
### Pseudocode for later reference ###
if df['Date Filed'][1] >= last_filing & <= this_filing:
    return that row's qtr
else:
    check the next row of datetbl
### DO NOT RUN ###
###Output _____no_output_____ ###Markdown Test - do not delete ###Code #Fields we want to collect
fields = ["COMPANY CONFORMED NAME","CENTRAL INDEX KEY","STANDARD INDUSTRIAL CLASSIFICATION","CUSIP NO."]
for x in range(1,10):
    url = 'https://www.sec.gov/Archives/' + df.Filename[x]
    f = requests.get(url)
    BeautifulSoup(f.content,'lxml').get_text()
###Output _____no_output_____ ###Markdown Content Extraction ###Code td = df[df['Form Type'].str.match('.*13D')].reset_index(drop = True)

# Specify index of the file we want. Here, we will use index = 1 as a test. 
x = 1
url = 'https://www.sec.gov/Archives/' + td.Filename[x]

# Fields we want to collect
fields = ["COMPANY CONFORMED NAME","CENTRAL INDEX KEY","STANDARD INDUSTRIAL CLASSIFICATION","CUSIP No."]

f = requests.get(url)
BeautifulSoup(f.content,'lxml').get_text()
###Output _____no_output_____ ###Markdown Code separately in the function below:
- Item 4
- Item 5
- (Date of Event Which Requires Filing of this Statement)
- SOURCE OF FUNDS*
- AGGREGATE AMOUNT BENEFICIALLY OWNED BY EACH REPORTING PERSON
- PERCENT OF CLASS REPRESENTED BY AMOUNT IN ROW (11) ###Code
# MAIN FUNCTIONS TO DO ALL EXTRACTION

# SEARCH LIST OF TEXT FOR ITEM LINE NUMBER
def findLineNumber(list_of_text, text_item):
    # Return the index of the last line matching text_item, or None if there is
    # no match, so callers can detect a missing item instead of hitting an
    # undefined variable
    idx = None
    for j, line in enumerate(list_of_text):
        if re.search(text_item, line):
            idx = j
    return idx

def extract13D(url):
    print("URL: ",url)
    f = requests.get(url)
    text = BeautifulSoup(f.content,'lxml').get_text()
    # Split into lines
    list13d = text.splitlines()
    # Remove everything that starts with '\xa0', then all blank lines
    list13d = [j for j in list13d if j.startswith('\xa0')==False]
    list13d = [j for j in list13d if len(j)>0]
    for field in fields:
        print('Field: ',field)
        res = [j for j in list13d if re.search(field,j)]
        for r in res:
            x = re.split('[\t]',r)
            y = [j for j in x if len(j)>0][-1]
            print(y)
        print(' ')
    # PROCESS ALL ITEMS
    # Item 4
    print('Field: Purpose of Transaction')
    start_idx = findLineNumber(list13d,'Item 4')
    end_idx = findLineNumber(list13d,'Item 5.')
    print(' '.join(list13d[(start_idx+1):end_idx]))

extract13D(url)
###Output URL: https://www.sec.gov/Archives/edgar/data/1000079/0000950116-97-000432.txt
Field: COMPANY CONFORMED NAME
SC&T INTERNATIONAL INC
CAPITAL VENTURES INTERNATIONAL /E9/

Field: CENTRAL INDEX KEY
0001000079
0001011712

Field: STANDARD INDUSTRIAL CLASSIFICATION
COMPUTER PERIPHERAL EQUIPMENT, NEC [3577]
[]

Field: CUSIP No.
CUSIP No. 783975 10 5 Page 2 of 5 Pages
CUSIP No. 783975 10 5 Page 3 of 5 Pages
CUSIP No. 783975 10 5 Page 4 of 5 Pages
CUSIP No. 783975 10 5 Page 5 of 5 Pages

Field: Purpose of Transaction
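###Markdown As noted above, the CUSIP number is the natural key for joining filings to Compustat. A minimal sketch of that extraction step might look like the following; the regex is illustrative only and would need tuning against real filings, and it assumes `text` holds the plain text of a filing as produced by `BeautifulSoup(...).get_text()`. ###Code
def extract_cusip(text):
    # CUSIPs are 9 characters, often printed with internal spaces, e.g. '783975 10 5'
    m = re.search(r'CUSIP\s*No\.?\s*:?\s*([0-9A-Z]{6}(?:\s*[0-9A-Z]){3})', text, re.IGNORECASE)
    return m.group(1).replace(' ', '') if m else None

# Example (hypothetical usage):
# extract_cusip(BeautifulSoup(requests.get(url).content, 'lxml').get_text())
###Output _____no_output_____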
codes/tests/Teste_mag_modelo_escada.ipynb
###Markdown Step 1: Defining the observation coordinates: ###Code
nx = 100 # number of observations in the x direction
ny = 100 # number of observations in the y direction
size = (nx, ny)
xmin = -10000.0 # meters
xmax = +10000.0 # meters
ymin = -10000.0 # meters
ymax = +10000.0 # meters
z = -100.0 # flight height (with constant Z), in meters
#zmax = -100.0 # flight height, in meters

dicionario = {'nx': nx, 'ny': ny, 'xmin': xmin, 'xmax': xmax, 'ymin': ymin, 'ymax': ymax, 'z': z, 'color': '.r'}

x, y, X, Y, Z = plot_3D.create_aquisicao(dicionario)
###Output _____no_output_____ ###Markdown Step 2: Defining the coordinates of the modeled prisms: ###Code
# coordinates of the prism vertices (corners), in meters:
x1,x2 = (-2000.0, 2000.0)
y1,y2 = (-3000.0, 3000.0)
z1,z2 = (1500.0,2000.0) # z is positive downward!
deltaz = 100.0
deltay = 4000.0
incl = 'positivo'

dic = {'n': 3, 'x': [x1, x2], 'y': [y1, y2], 'z': [z1, z2], 'deltay': deltay, 'deltaz': deltaz, 'incl': 'positivo'}

pointx, pointy, pointz = plot_3D.creat_point(dic)
print(pointx)
print(pointy)
print(pointz)

#%matplotlib notebook
dic1 = {'x': [pointx[0], pointx[1]], 'y': [pointy[0], pointy[1]], 'z': [pointz[0], pointz[1]]}
dic2 = {'x': [pointx[2], pointx[3]], 'y': [pointy[2], pointy[3]], 'z': [pointz[2], pointz[3]]}
dic3 = {'x': [pointx[4], pointx[5]], 'y': [pointy[4], pointy[5]], 'z': [pointz[4], pointz[5]]}
#----------------------------------------------------------------------------------------------------#
vert1 = plot_3D.vert_point(dic1)
vert2 = plot_3D.vert_point(dic2)
vert3 = plot_3D.vert_point(dic3)
#----------------------------------------------------------------------------------------------------#
color = 'b'
size = [9, 10]
view = [210,145]
#----------------------------------------------------------------------------------------------------#
prism_1 = plot_3D.plot_prism(vert1, color)
prism_2 = plot_3D.plot_prism(vert2, color)
prism_3 = plot_3D.plot_prism(vert3, color)
#----------------------------------------------------------------------------------------------------#
prisma = {'n': 3, 'prisma': [prism_1, prism_2,prism_3]}

plot_3D.plot_obs_3d(prisma, size, view, x, y, pointz)
###Output _____no_output_____ ###Markdown Step 3: Simulating the main field over the observation region: ###Code
I = -30.0 # inclination of the main field in degrees
D = -23.0 # declination of the main field in degrees
Fi = 40000.0 # intensity of the main field (nT)

# Main field varying with position, F(X,Y):
F = Fi + 0.013*X + 0.08*Y # nT
###Output _____no_output_____ ###Markdown Step 4: Defining the properties of the crustal sources (vertical prisms): ###Code
# Magnetic properties of the crustal source:
inc = I # purely induced magnetization
dec = -10.0
Mi = 10.0 # magnetization intensity in A/m
Mi2 = 15.0
Mi3 = 7.0

fonte_crustal_mag1 = [pointx[0], pointx[1], pointy[0], pointy[1], pointz[0], pointz[1], Mi]
fonte_crustal_mag2 = [pointx[2], pointx[3], pointy[2], pointy[3], pointz[2], pointz[3], Mi2]
fonte_crustal_mag3 = [pointx[4], pointx[5], pointy[4], pointy[5], pointz[4], pointz[5], Mi3]
###Output _____no_output_____ ###Markdown Step 5: Computing the anomalies via the function (prism_tf) ###Code
tfa1 = prism.prism_tf(X, Y,z, fonte_crustal_mag1, I, D, inc, dec)
tfa2 = prism.prism_tf(X, Y,z, fonte_crustal_mag2, I, D, inc, dec)
tfa3 = prism.prism_tf(X, Y,z, fonte_crustal_mag3, I, D, inc, dec)
tfa_final = tfa1 + tfa2 + tfa3
###Output _____no_output_____ ###Markdown Step 6: Adding noise via the function (noise_normal_dist) ###Code
mi = 0.0
sigma = 0.1
#ACTn = noise.noise_gaussiana(t, mi, sigma, ACT)
tfa_final1 = auxiliars.noise_normal_dist(tfa_final, mi, sigma)

%matplotlib inline
#xs = [x1, x1, x2, x2, x1]
#ys = [y1, y2, y2, y1, y1]
#xs1 = [pointx[0], pointx[0], pointx[5], pointx[5], pointx[0]]
#ys1 = [pointy[0], pointy[5], pointy[5], pointy[0], pointy[0]]
#flechax = [[numpy.absolute(pointx[0] + pointx[5])], [pointx[5]]]
#flechay = [[numpy.absolute(pointy[0] + pointy[5])], [pointy[5]]]
#origin = [[numpy.absolute(pointx[0] + pointx[5])], [[numpy.absolute(pointy[0] + pointy[5])]]]
#ponta = [[pointx[5]], [pointy[5]]]
#print(ponta)

# plots
plt.close('all')
plt.figure(figsize=(9,10))
#******************************************************
plt.contourf(Y, X, tfa_final1, 20, cmap = plt.cm.RdBu_r)
plt.title('Anomalia de Campo Total(nT)', fontsize = 20)
plt.xlabel('East (m)', fontsize = 20)
plt.ylabel('North (m)', fontsize = 20)
#corpo, = plt.plot(ys1,xs1,'k-*', label = 'Extensão do Dique')
#plt.plot(ys2,xs2,'k-')
#plt.plot(ys3,xs3,'m-')
#arrow = plt.arrow(2000.0, 0.0, 4500.0, 0.0, width=250, length_includes_head = True, color = 'k')
#first_legend = plt.legend(handles=[corpo], bbox_to_anchor=(1.25, 1), loc='upper left', borderaxespad=0.0, fontsize= 12.0)
#plt.legend([arrow, corpo], ['Direção de mergulho', 'Extensão do Dique'], bbox_to_anchor=(1.25, 1), loc='upper left', borderaxespad=0.0, fontsize= 12.0)
plt.colorbar()
#plt.savefig('prisma_anomalia.pdf', format='pdf')
plt.show()

xs1 = [pointx[0], pointx[0], pointx[1], pointx[1], pointx[0]]
#xs2 = [pointx[2], pointx[2], pointx[3], pointx[3], pointx[2]]
#xs3 = [pointx[4], pointx[4], pointx[5], pointx[5], pointx[4]]
ys1 = [pointy[0], pointy[1], pointy[1], pointy[0], pointy[0]]
#ys2 = [pointy[2], pointy[3], pointy[3], pointy[2], pointy[2]]
#ys3 = [pointy[4], pointy[5], pointy[5], pointy[4], pointy[4]]

# plots
plt.close('all')
plt.figure(figsize=(10,10))
#******************************************************
plt.contourf(Y, X, tfa_final, 20, cmap = plt.cm.RdBu_r)
plt.title('Campo Total (nT)', fontsize = 12)
plt.xlabel('East (m)', fontsize = 10)
plt.ylabel('North (m)', fontsize = 10)
plt.plot(ys1,xs1,'g-')
#plt.plot(ys2,xs2,'k-')
#plt.plot(ys3,xs3,'m-')
plt.colorbar()
#plt.savefig('teste_100_40000_D10.png', format='png')
plt.show()

dici1 = {'nx': nx, 'ny': ny, 'X': X, 'Y': Y, 'Z': Z, 'ACTn': tfa_final }

data_e_hora_atuais = datetime.now()
data_e_hora = data_e_hora_atuais.strftime('%d_%m_%Y_%H_%M')

dicionario = {'Data da Modelagem': data_e_hora, 'Tipo de Modelagem': 'Modelagem de prisma', 'números de corpos': 3,
              'Coordenadas do prisma 1 (x1, x2, y1, y2, z1, z2)': [pointx[0], pointx[1], pointy[0], pointy[1], pointz[0], pointz[1]],
              'Coordenadas do prisma 2 (x1, x2, y1, y2, z1, z2)': [pointx[2], pointx[3], pointy[2], pointy[3], pointz[2], pointz[3]],
              'Coordenadas do prisma 3 (x1, x2, y1, y2, z1, z2)': [pointx[4], pointx[5], pointy[4], pointy[5], pointz[4], pointz[5]],
              'inclinação': 'positivo',
              'Informação da fonte (Mag, Incl, Decl)': [Mi, inc, dec],
              'Informação regional (Camp.Geomag, Incl, Decl)': [Fi, I, D]}
print(dicionario)

Data_f = salve_doc.reshape_matrix(dici1)
Data_f

#salve_doc.create_diretorio(dicionario, Data_f)
###Output _____no_output_____
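###Markdown The `auxiliars.noise_normal_dist` call above presumably just perturbs the anomaly with zero-mean Gaussian noise. A minimal NumPy sketch of that idea (our assumption of what the helper does, not its actual source) would be: ###Code
import numpy as np

def add_gaussian_noise(signal, mean, std, seed=None):
    # Add independent N(mean, std^2) noise to every sample of the anomaly grid
    rng = np.random.default_rng(seed)
    return signal + rng.normal(mean, std, size=np.shape(signal))

# Hypothetical usage with the variables defined above:
# tfa_noisy = add_gaussian_noise(tfa_final, mi, sigma, seed=42)
###Output _____no_output_____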
deep-learning/fastai-docs/fastai_docs-master/dev_nb/snapshot/001a_nn_basics.ipynb
###Markdown What is `torch.nn` *really*? A quick journey: from neural net "from scratch", to fully utilizing `torch.nn`, `torch.optim`, `Dataset`, and `DataLoader` *by Jeremy Howard, fast.ai. Thanks to Rachel Thomas and Francisco Ingham.* PyTorch provides the elegantly designed modules and classes `torch.nn`, `torch.optim`, `Dataset`, and `DataLoader` to help you create and train neural networks. In order to fully utilize their power and customize them for your problem, you need to really understand exactly what they're doing. To develop this understanding, we will first train a basic neural net on the MNIST data set without using any features from these modules; we will initially only use the most basic PyTorch tensor functionality. Then, we will incrementally add one feature from `torch.nn`, `torch.optim`, `Dataset`, or `DataLoader` at a time, showing exactly what each piece does, and how it works to make the code either more concise, or more flexible. **This tutorial assumes you already have PyTorch installed, and are familiar with the basics of tensor operations.** (If you're familiar with Numpy array operations, you'll find the PyTorch tensor operations used here nearly identical). MNIST data setup We will use the classic [MNIST](http://deeplearning.net/data/mnist/) dataset, which consists of black-and-white images of hand-drawn digits (between 0 and 9). We will use [pathlib](https://docs.python.org/3/library/pathlib.html) for dealing with paths (part of the Python 3 standard library), and will download the dataset using [requests](https://docs.python-requests.org/en/master/). We will only import modules when we use them, so you can see exactly what's being used at each point. ###Code
from pathlib import Path
import requests

DATA_PATH = Path('data')
PATH = DATA_PATH/'mnist'
PATH.mkdir(parents=True, exist_ok=True)

URL='http://deeplearning.net/data/mnist/'
FILENAME='mnist.pkl.gz'

if not (PATH/FILENAME).exists():
    content = requests.get(URL+FILENAME).content
    (PATH/FILENAME).open('wb').write(content)
###Output _____no_output_____ ###Markdown This dataset is in numpy array format, and has been stored using pickle, a python-specific format for serializing data. ###Code
import pickle, gzip

with gzip.open(PATH/FILENAME, 'rb') as f:
    ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
###Output _____no_output_____ ###Markdown Each image is 28 x 28, and is being stored as a flattened row of length 784 (=28x28). Let's take a look at one; we need to reshape it to 2d first. ###Code
%matplotlib inline
from matplotlib import pyplot
import numpy as np

pyplot.imshow(x_train[0].reshape((28,28)), cmap="gray")
x_train.shape
###Output _____no_output_____ ###Markdown PyTorch uses `torch.tensor`, rather than numpy arrays, so we need to convert our data. ###Code
import torch

x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid))
n,c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
###Output _____no_output_____ ###Markdown Neural net from scratch (no `torch.nn`) Let's first create a model using nothing but PyTorch tensor operations. We're assuming you're already familiar with the basics of neural networks. (If you're not, you can learn them at [course.fast.ai](http://course.fast.ai).) PyTorch provides methods to create random or zero-filled tensors, which we will use to create our weights and bias for a simple linear model. These are just regular tensors, with one very special addition: we tell PyTorch that they require a gradient.
This causes PyTorch to record all of the operations done on the tensor, so that it can calculate the gradient during back-propagation *automatically*! For the weights, we set `requires_grad` **after** the initialization, since we don't want that step included in the gradient. (Note that a trailing `_` in PyTorch signifies that the operation is performed in-place.) *NB: We are initializing the weights here with [Xavier initialisation](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) (by multiplying with 1/sqrt(n)).* ###Code
import math

weights = torch.randn(784,10)/math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
###Output _____no_output_____ ###Markdown Thanks to PyTorch's ability to calculate gradients automatically, we can use any standard Python function (or callable object) as a model! So let's just write a plain matrix multiplication and broadcasted addition to create a simple linear model. We also need an activation function, so we'll write `log_softmax` and use it. Remember: although PyTorch provides lots of pre-written loss functions, activation functions, and so forth, you can easily write your own using plain python. PyTorch will even create fast GPU or vectorized CPU code for your function automatically. ###Code
def log_softmax(x):
    return x - x.exp().sum(-1).log().unsqueeze(-1)

def model(xb):
    return log_softmax(xb @ weights + bias)
###Output _____no_output_____ ###Markdown In the above, the '@' stands for the matrix multiplication operation. We will call our function on one batch of data (in this case, 64 images). This is one *forward pass*. Note that our predictions won't be any better than random at this stage, since we start with random weights. ###Code
bs=64 # batch size

xb = x_train[0:bs] # a mini-batch from x
preds = model(xb) # predictions
preds[0], preds.shape
###Output _____no_output_____ ###Markdown As you see, the `preds` tensor contains not only the tensor values, but also a gradient function. We'll use this later to do backprop. Let's implement negative log-likelihood to use as the loss function (again, we can just use standard Python): ###Code
def nll(input, target):
    return -input[range(target.shape[0]), target].mean()

loss_func = nll
###Output _____no_output_____ ###Markdown Let's check our loss with our random model, so we can see if we improve after a backprop pass later. ###Code
yb = y_train[0:bs]
loss_func(preds, yb)
###Output _____no_output_____ ###Markdown We can now run a training loop. For each iteration, we will:
- select a mini-batch of data (of size `bs`)
- use the model to make predictions
- calculate the loss
- `loss.backward()` updates the gradients of the model, in this case, `weights` and `bias`.
- We now use these gradients to update the weights and bias. We do this within the `torch.no_grad()` context manager, because we do not want these actions to be recorded for our next calculation of the gradient. You can read more about how PyTorch's Autograd records operations [here](https://pytorch.org/docs/stable/notes/autograd.html).
- We then set the gradients to zero, so that we are ready for the next loop. Otherwise, our gradients would record a running tally of all the operations that had happened (i.e. `loss.backward()` *adds* the gradients to whatever is already stored, rather than replacing them).

*Handy tip: you can use the standard python debugger to step through PyTorch code, allowing you to check the various variable values at each step.
Uncomment `set_trace()` below to try it out.* ###Code from IPython.core.debugger import set_trace lr = 0.5 # learning rate epochs = 2 # how many epochs to train for for epoch in range(epochs): for i in range((n-1)//bs + 1): # set_trace() start_i = i*bs end_i = start_i+bs xb = x_train[start_i:end_i] yb = y_train[start_i:end_i] pred = model(xb) loss = loss_func(pred, yb) loss.backward() with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_() ###Output _____no_output_____ ###Markdown That's it: we've created and trained a minimal neural network (in this case, a logistic regression, since we have no hidden layers) entirely from scratch!Let's check the loss and compare to what we got earlier. We expect that the loss will have decreased, and it has. ###Code loss_func(model(xb), yb) ###Output _____no_output_____ ###Markdown Using `torch.nn.functional` We will now refactor our code, so that it does the same thing as before, only we'll start taking advantage of PyTorch's `nn` classes to make it more concise and flexible. At each step from here, we should be making our code one or more of: shorter, more understandable, and/or more flexible.The first and easiest step is to make our code shorter by replacing our hand-written activation and loss functions with those from `torch.nn.functional` (which is generally imported into the namespace `F` by convention). This module contains all the functions in the `torch.nn` library (whereas other parts of the library contain classes). As well as a wide range of loss and activation functions, you'll also find here some convenient functions for creating neural nets, such as pooling functions. (There are also functions for doing convolutions, linear layers, etc, but as we'll see, these are usually better handled using other parts of the library.)If you're using negative log likelihood loss and log softmax activation, then Pytorch provides a single function `F.cross_entropy` that combines the two. So we can even remove the activation function from our model. ###Code import torch.nn.functional as F loss_func = F.cross_entropy def model(xb): return xb @ weights + bias ###Output _____no_output_____ ###Markdown Note that we no longer call `log_softmax` in the `model` function. Let's confirm that our loss is the same as before: ###Code loss_func(model(xb), yb) ###Output _____no_output_____ ###Markdown Refactor using nn.Module Next up, we'll use `nn.Module` and `nn.Parameter`, for a clearer and more concise training loop. We subclass `nn.Module` (which itself is a class and able to keep track of state). In this case, we want to create a class that holds our weights, bias, and method for the forward step. `nn.Module` has a number of attributes and methods (such as `.parameters()` and `.zero_grad()`) which we will be using.**NB**: `nn.Module` (uppercase M) is a PyTorch specific concept, and is a class we'll be using a lot. `nn.Module` is not to be confused with the Python concept of a (lowercase m) [module](https://docs.python.org/3/tutorial/modules.html), which is a file of Python code that can be imported. 
###Code
from torch import nn

class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = nn.Parameter(torch.randn(784,10)/math.sqrt(784))
        self.bias = nn.Parameter(torch.zeros(10))

    def forward(self, xb):
        return xb @ self.weights + self.bias
###Output _____no_output_____ ###Markdown Since we're now using an object instead of just using a function, we first have to instantiate our model: ###Code
model = Mnist_Logistic()
###Output _____no_output_____ ###Markdown Now we can calculate the loss in the same way as before. Note that `nn.Module` objects are used as if they are functions (i.e. they are *callable*), but behind the scenes Pytorch will call our `forward` method automatically. ###Code
loss_func(model(xb), yb)
###Output _____no_output_____ ###Markdown Previously for our training loop we had to update the values for each parameter by name, and manually zero out the grads for each parameter separately, like this:
```python
with torch.no_grad():
    weights -= weights.grad * lr
    bias -= bias.grad * lr
    weights.grad.zero_()
    bias.grad.zero_()
```
Now we can take advantage of model.parameters() and model.zero_grad() (which are both defined by PyTorch for `nn.Module`) to make those steps more concise and less prone to the error of forgetting some of our parameters, particularly if we had a more complicated model:
```python
with torch.no_grad():
    for p in model.parameters(): p -= p.grad * lr
    model.zero_grad()
```
We'll wrap our little training loop in a `fit` function so we can run it again later. ###Code
def fit():
    for epoch in range(epochs):
        for i in range((n-1)//bs + 1):
            start_i = i*bs
            end_i = start_i+bs
            xb = x_train[start_i:end_i]
            yb = y_train[start_i:end_i]
            pred = model(xb)
            loss = loss_func(pred, yb)

            loss.backward()
            with torch.no_grad():
                for p in model.parameters(): p -= p.grad * lr
                model.zero_grad()

fit()
###Output _____no_output_____ ###Markdown Let's double-check that our loss has gone down: ###Code
loss_func(model(xb), yb)
###Output _____no_output_____ ###Markdown Refactor using nn.Linear We continue to refactor our code. Instead of manually defining and initializing `self.weights` and `self.bias`, and calculating `xb @ self.weights + self.bias`, we will instead use the Pytorch class [nn.Linear](https://pytorch.org/docs/stable/nn.html#linear-layers) for a linear layer, which does all that for us. Pytorch has many types of predefined layers that can greatly simplify our code, and often makes it faster too. ###Code
class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784,10)

    def forward(self, xb):
        return self.lin(xb)
###Output _____no_output_____ ###Markdown We instantiate our model and calculate the loss in the same way as before: ###Code
model = Mnist_Logistic()
loss_func(model(xb), yb)
###Output _____no_output_____ ###Markdown We are still able to use our same `fit` method as before. ###Code
fit()

loss_func(model(xb), yb)
###Output _____no_output_____ ###Markdown Refactor using optim Pytorch also has a package with various optimization algorithms, `torch.optim`. We can use the `step` method from our optimizer to take an optimization step, instead of manually updating each parameter. This will let us replace our previous manually coded optimization step:
```python
with torch.no_grad():
    for p in model.parameters(): p -= p.grad * lr
    model.zero_grad()
```
and instead use just:
```python
opt.step()
opt.zero_grad()
```
(`optim.zero_grad()` resets the gradient to 0 and we need to call it before computing the gradient for the next minibatch.)
###Code
from torch import optim
###Output _____no_output_____ ###Markdown We'll define a little function to create our model and optimizer so we can reuse it in the future. ###Code
def get_model():
    model = Mnist_Logistic()
    return model, optim.SGD(model.parameters(), lr=lr)

model,opt = get_model()
loss_func(model(xb), yb)

for epoch in range(epochs):
    for i in range((n-1)//bs + 1):
        start_i = i*bs
        end_i = start_i+bs
        xb = x_train[start_i:end_i]
        yb = y_train[start_i:end_i]
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        opt.step()
        opt.zero_grad()

loss_func(model(xb), yb)
###Output _____no_output_____ ###Markdown Refactor using Dataset PyTorch has an abstract Dataset class. A Dataset can be anything that has a `__len__` function (called by Python's standard `len` function) and a `__getitem__` function as a way of indexing into it. [This tutorial](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) walks through a nice example of creating a custom FacialLandmarkDataset class as a subclass of Dataset. PyTorch's [TensorDataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#TensorDataset) is a Dataset wrapping tensors. By defining a length and way of indexing, this also gives us a way to iterate, index, and slice along the first dimension of a tensor. This will make it easier to access both the independent and dependent variables in the same line as we train. ###Code
from torch.utils.data import TensorDataset
###Output _____no_output_____ ###Markdown Both `x_train` and `y_train` can be combined in a single TensorDataset, which will be easier to iterate over and slice. ###Code
train_ds = TensorDataset(x_train, y_train)
###Output _____no_output_____ ###Markdown Previously, we had to iterate through minibatches of x and y values separately:
```python
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
```
Now, we can do these two steps together:
```python
xb,yb = train_ds[i*bs : i*bs+bs]
``` ###Code
model,opt = get_model()

for epoch in range(epochs):
    for i in range((n-1)//bs + 1):
        xb,yb = train_ds[i*bs : i*bs+bs]
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        opt.step()
        opt.zero_grad()

loss_func(model(xb), yb)
###Output _____no_output_____ ###Markdown Refactor using DataLoader Pytorch's `DataLoader` is responsible for managing batches. You can create a `DataLoader` from any `Dataset`. `DataLoader` makes it easier to iterate over batches. Rather than having to use `train_ds[i*bs : i*bs+bs]`, the DataLoader gives us each minibatch automatically. ###Code
from torch.utils.data import DataLoader

train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
###Output _____no_output_____ ###Markdown Previously, our loop iterated over batches (xb, yb) like this:
```python
for i in range((n-1)//bs + 1):
    xb,yb = train_ds[i*bs : i*bs+bs]
    pred = model(xb)
    ...
```
Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader:
```python
for xb,yb in train_dl:
    pred = model(xb)
    ...
``` ###Code
model,opt = get_model()

for epoch in range(epochs):
    for xb,yb in train_dl:
        pred = model(xb)
        loss = loss_func(pred, yb)

        loss.backward()
        opt.step()
        opt.zero_grad()

loss_func(model(xb), yb)
###Output _____no_output_____ ###Markdown Thanks to Pytorch's `nn.Module`, `nn.Parameter`, `Dataset`, and `DataLoader`, our training loop is now dramatically smaller and easier to understand. Let's now try to add the basic features necessary to create effective models in practice.
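###Markdown Before we do, a quick aside: the `Dataset` contract used above really is as small as it sounds. The class below is our own illustrative sketch (not from the tutorial) of a hand-rolled dataset that behaves like `TensorDataset`:
```python
class MyDataset():
    # Anything with __len__ and __getitem__ satisfies the Dataset contract
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __len__(self):
        return len(self.x)
    def __getitem__(self, i):
        return self.x[i], self.y[i]

# DataLoader(MyDataset(x_train, y_train), batch_size=bs) would behave like the TensorDataset version
```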
Add validation First try In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you **always** should also have a [validation set](http://www.fast.ai/2017/11/13/validation-sets/), in order to identify if you are overfitting.Shuffling the training data is [important](https://www.quora.com/Does-the-order-of-training-data-matter-when-training-neural-networks) to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data.We'll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn't need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly. ###Code train_ds = TensorDataset(x_train, y_train) train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True) valid_ds = TensorDataset(x_valid, y_valid) valid_dl = DataLoader(valid_ds, batch_size=bs*2) ###Output _____no_output_____ ###Markdown We will calculate and print the validation loss at the end of each epoch.(Note that we always call `model.train()` before training, and `model.eval()` before inference, because these are used by layers such as `nn.BatchNorm2d` and `nn.Dropout` to ensure appropriate behaviour for these different phases.) ###Code model,opt = get_model() for epoch in range(epochs): model.train() for xb,yb in train_dl: pred = model(xb) loss = loss_func(pred, yb) loss.backward() opt.step() opt.zero_grad() model.eval() with torch.no_grad(): valid_loss = sum(loss_func(model(xb), yb) for xb,yb in valid_dl) print(epoch, valid_loss/len(valid_dl)) ###Output 0 tensor(0.2969) 1 tensor(0.3138) ###Markdown Create fit() and get_data() We'll now do a little refactoring of our own. Since we go through a similar process twice of calculating the loss for both the training set and the validation set, let's make that into its own function, "`loss_batch`", which computes the loss for one batch.We pass an optimizer in for the training set, and use it to perform backprop. For the validation set, we don't pass an optimizer, so the method doesn't perform backprop. ###Code def loss_batch(model, loss_func, xb, yb, opt=None): loss = loss_func(model(xb), yb) if opt is not None: loss.backward() opt.step() opt.zero_grad() return loss.item(), len(xb) ###Output _____no_output_____ ###Markdown `fit` runs the necessary operations to train our model and compute the training and validation losses for each epoch. ###Code import numpy as np def fit(epochs, model, loss_func, opt, train_dl, valid_dl): for epoch in range(epochs): model.train() for xb,yb in train_dl: loss_batch(model, loss_func, xb, yb, opt) model.eval() with torch.no_grad(): losses,nums = zip(*[loss_batch(model, loss_func, xb, yb) for xb,yb in valid_dl]) val_loss = np.sum(np.multiply(losses,nums)) / np.sum(nums) print(epoch, val_loss) ###Output _____no_output_____ ###Markdown `get_data` returns dataloaders for the training and validation sets. 
###Code
def get_data(train_ds, valid_ds, bs):
    return (DataLoader(train_ds, batch_size=bs, shuffle=True),
            DataLoader(valid_ds, batch_size=bs*2))
###Output _____no_output_____ ###Markdown Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code: ###Code
train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
model,opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output 0 0.36996033158302305 1 0.34184285497665406 ###Markdown You can use these basic 3 lines of code to train a wide variety of models. Let's see if we can use them to train a convolutional neural network (CNN)! Switch to CNN First try We are now going to build our neural network with three convolutional layers. Because none of the functions in the previous section assume anything about the model form, we'll be able to use them to train a CNN without any modification. We will use Pytorch's predefined [Conv2d](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d) class as our convolutional layer. We define a CNN with 3 convolutional layers. Each convolution is followed by a ReLU. At the end, we perform an average pooling. ###Code
class Mnist_CNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)

    def forward(self, xb):
        xb = xb.view(-1,1,28,28)
        xb = F.relu(self.conv1(xb))
        xb = F.relu(self.conv2(xb))
        xb = F.relu(self.conv3(xb))
        xb = F.avg_pool2d(xb, 4)
        return xb.view(-1,xb.size(1))

lr=0.1
###Output _____no_output_____ ###Markdown [Momentum](http://cs231n.github.io/neural-networks-3/#sgd) is a variation on stochastic gradient descent that takes previous updates into account as well and generally leads to faster training. ###Code
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

xb, yb = next(iter(valid_dl))
loss_func(model(xb), yb)

fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output 0 0.3650521045684815 1 0.28984554643630983 ###Markdown nn.Sequential `torch.nn` has another handy class we can use to simplify our code: [`Sequential`](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential). A `Sequential` object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network. To take advantage of this, we need to be able to easily define a **custom layer** from a given function. For instance, PyTorch doesn't have a `view` layer (`view` is PyTorch's version of numpy's `reshape`) and we need to create one for our network. `Lambda` will create a layer that we can then use when defining a network with `Sequential`.
###Code
class Lambda(nn.Module):
    def __init__(self, func):
        super().__init__()
        self.func=func

    def forward(self, x):
        return self.func(x)

def preprocess(x):
    return x.view(-1,1,28,28)
###Output _____no_output_____ ###Markdown The model created with `Sequential` is simply: ###Code
model = nn.Sequential(
    Lambda(preprocess),
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AvgPool2d(4),
    Lambda(lambda x: x.view(x.size(0),-1))
)

opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)

xb, yb = next(iter(valid_dl))
loss_func(model(xb), yb)

fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output 0 0.3841403673171997 1 0.26314052562713625 ###Markdown Wrapping `DataLoader` Our CNN is fairly concise, but it only works with MNIST, because:
- It assumes the input is a 28\*28 long vector
- It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used)

Let's get rid of these two assumptions, so our model works with any 2d single channel image. First, we can remove the initial Lambda layer by moving the data preprocessing into a generator: ###Code
def preprocess(x,y):
    return x.view(-1,1,28,28),y

class WrappedDataLoader():
    def __init__(self, dl, func):
        self.dl = dl
        self.func = func

    def __len__(self):
        return len(self.dl)

    def __iter__(self):
        batches = iter(self.dl)
        for b in batches:
            yield(self.func(*b))

train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output _____no_output_____ ###Markdown Next, we can replace `nn.AvgPool2d` with `nn.AdaptiveAvgPool2d`, which allows us to define the size of the *output* tensor we want, rather than the *input* tensor we have. As a result, our model will work with any size input. ###Code
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    Lambda(lambda x: x.view(x.size(0),-1))
)

opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output _____no_output_____ ###Markdown Let's try it out: ###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output 0 0.3101793124198914 1 0.24328768825531005 ###Markdown Using your GPU If you're lucky enough to have access to a CUDA-capable GPU (you can rent one for about $0.50/hour from most cloud providers) you can use it to speed up your code. First check that your GPU is working in Pytorch: ###Code
torch.cuda.is_available()
###Output _____no_output_____ ###Markdown And then create a device object for it: ###Code
dev = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
###Output _____no_output_____ ###Markdown Let's update `preprocess` to move batches to the GPU: ###Code
def preprocess(x,y):
    return x.view(-1,1,28,28).to(dev),y.to(dev)

train_dl,valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output _____no_output_____ ###Markdown Finally, we can move our model to the GPU.
###Code model.to(dev); opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9) ###Output _____no_output_____ ###Markdown You should find it runs faster now: ###Code fit(epochs, model, loss_func, opt, train_dl, valid_dl) ###Output 0 0.2065797366142273 1 0.16616964597702027
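###Markdown One convenient pattern (our own addition, reusing the `dev` object defined above) is to keep any extra data movement device-agnostic, so the same notebook runs unchanged on CPU-only machines: ###Code
def to_dev(batch, device=dev):
    # Move an (xb, yb) pair onto whichever device is available
    return tuple(t.to(device) for t in batch)

# Hypothetical usage:
# xb, yb = to_dev(next(iter(valid_dl)))
###Output _____no_output_____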
Apriori e sue varianti.ipynb
###Markdown Apriori Algorithm and Its Variants A machine learning library that includes Apriori needs to be installed. Specifically, we use `mlxtend`, which we install with `pip`. ###Code
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, fpmax, fpgrowth, association_rules

# the dataset is a list of transactions, themselves expressed as lists
dataset = [['Milk', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Dill', 'Onion', 'Nutmeg', 'Kidney Beans', 'Eggs', 'Yogurt'],
           ['Milk', 'Apple', 'Kidney Beans', 'Eggs'],
           ['Milk', 'Unicorn', 'Corn', 'Kidney Beans', 'Yogurt'],
           ['Corn', 'Onion', 'Onion', 'Kidney Beans', 'Ice cream', 'Eggs']]

# The TransactionEncoder class parses the data, which may come in any iterable
# form, and generates an intermediate structure from which the dataframe
# is obtained
te = TransactionEncoder()
te_ary = te.fit(dataset).transform(dataset)
df = pd.DataFrame(te_ary, columns=te.columns_)
df

# Generating the vertical database as tid-lists of the individual items is immediate
df.transpose()

# Apply Apriori with a minimum support of 0.6
#frequent_itemsets = apriori(df, min_support=0.6, use_colnames=True)
frequent_itemsets = fpgrowth(df, min_support=0.6, use_colnames=True)
#frequent_itemsets = fpmax(df, min_support=0.6, use_colnames=True)

frequent_itemsets.sort_values('support',ascending=False)

# Compare the performance of apriori and fpgrowth -- Frequent Pattern Tree Growth
%timeit -n 100 -r 10 apriori(df, min_support=0.6)
%timeit -n 100 -r 10 fpgrowth(df, min_support=0.6)

# compute the rules at minimum support 0.6 and confidence 0.8
association_rules(frequent_itemsets, metric="confidence", min_threshold=0.8)

# The following alternative metrics are reported
# - support(A->C) = support(A+C), range: [0, 1]
# - confidence(A->C) = support(A+C) / support(A), range: [0, 1]
# - lift(A->C) = confidence(A->C) / support(C), range: [0, inf]
# - leverage(A->C) = support(A->C) - support(A)*support(C), range: [-1, 1]
# - conviction = [1 - support(C)] / [1 - confidence(A->C)], range: [0, inf]
###Output _____no_output_____
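###Markdown A common follow-up step (this is our own sketch, reusing the `frequent_itemsets` computed above) is to keep only the rules whose lift indicates a positive association and rank them: ###Code
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.8)

# lift > 1 means antecedent and consequent co-occur more often than expected
# under independence, so these are the genuinely interesting rules
rules[rules['lift'] > 1].sort_values(['lift', 'confidence'], ascending=False)
###Output _____no_output_____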
bin/MODIS2DL.ipynb
###Markdown Ingest MODIS Land Cover Data This notebook will ingest MODIS land cover data onto the DL platform. The MODIS land cover data product is released yearly at a maximum resolution of 500m. The product features five different land cover classification bands. They are quite similar - we'll use the first one, the _Annual International Geosphere-Biosphere Programme (IGBP) classification_. The data are available from a number of US government data services, see https://lpdaac.usgs.gov/products/mcd12q1v006/. The land cover data is available in tiles that follow the MODIS Sinusoidal Grid, a special projection system for MODIS products, see Figure. We'll need to use GDAL to convert the hdf tiles to GeoTiffs. The tiles will be downloaded from NASA's Earthdata, for which a registered account is required. A free account can be created [here](https://urs.earthdata.nasa.gov/home). User credentials should then be stored as a dict in json: `{username:, password:}`. **Figure: MODIS Sinusoidal Grid**![img](MODIS_sinusoidal_grid1.gif) ###Code
import logging, os, sys, json, requests, glob, pickle, subprocess # subprocess is needed for the gdal_translate calls below
from requests.auth import HTTPBasicAuth

import descarteslabs as dl
from descarteslabs.catalog import Product
from descarteslabs.catalog import Image as dl_Image
from descarteslabs.catalog import ClassBand, DataType, Resolution, ResolutionUnit

from bs4 import BeautifulSoup
###Output _____no_output_____ ###Markdown Approach

**Fetch the Data**
- Create and store login credentials
- For each year of the land cover product:
    - Parse the website and extract the hdf files
    - Retrieve the hdf files

**Push to DL**
- Create the DL product and land cover band
- Convert the hdf files to GeoTiff
- Upload the GeoTiffs to the DL product ###Code
params = {}
params['modis_path'] = '/home/jovyan/solar-pv-global-inventory/data/MODIS' # path to the geodatabase
params['credentials_path'] = os.path.join(params['modis_path'], 'earthdata.cred')
params['product_params'] = {'_id':'modis-land-cover', 'name':'MODIS land cover product for uploaded MODIS land cover tiles'}
params['year'] = '2014'
params['band_params'] = {'name':'IGBP_class', 'data_range':(0,255), 'display_range':(0,20), 'resolution':500, 'index':0}
###Output _____no_output_____ ###Markdown Download the Data ###Code
credentials = json.load(open(params['credentials_path'],'r'))

def get_url_paths(url, ext='', params={}):
    response = requests.get(url, params=params)
    if response.ok:
        response_text = response.text
    else:
        return response.raise_for_status()
    soup = BeautifulSoup(response_text, 'html.parser')
    parent = [url + node.get('href') for node in soup.find_all('a') if node.get('href').endswith(ext)]
    return parent

url = 'https://e4ftl01.cr.usgs.gov/MOTA/MCD12Q1.006/'+params['year']+'.01.01/'
ext = 'hdf'
hdf_urls = get_url_paths(url, ext)
print (len(hdf_urls), hdf_urls[0])

with open(os.path.join(params['modis_path'], 'list.txt'),'w') as f:
    f.write('\n'.join(hdf_urls))

!wget --user={credentials["username"]} --password={credentials["password"]} -i {os.path.join(params['modis_path'],'list.txt')} -P {params['modis_path']+'/tmp'} -q
###Output _____no_output_____ ###Markdown Get Class Labels ###Code
class_labels = {
    1 : 'Evergreen Needleleaf Forests: dominated by evergreen conifer trees (canopy >2m). Tree cover >60%.',
    2 : 'Evergreen Broadleaf Forests: dominated by evergreen broadleaf and palmate trees (canopy >2m). Tree cover >60%.',
    3 : 'Deciduous Needleleaf Forests: dominated by deciduous needleleaf (larch) trees (canopy >2m).
Tree cover >60%.', 4 : 'Deciduous Broadleaf Forests: dominated by deciduous broadleaf trees (canopy >2m). Tree cover >60%.', 5 : 'Mixed Forests: dominated by neither deciduous nor evergreen (40-60% of each) tree type (canopy >2m). Tree cover >60%.', 6 : 'Closed Shrublands: dominated by woody perennials (1-2m height) >60% cover.', 7 : 'Open Shrublands: dominated by woody perennials (1-2m height) 10-60% cover.', 8 : 'Woody Savannas: tree cover 30-60% (canopy >2m).', 9 : 'Savannas: tree cover 10-30% (canopy >2m).', 10: 'Grasslands: dominated by herbaceous annuals (<2m).', 11: 'Permanent Wetlands: permanently inundated lands with 30-60% water cover and >10% vegetated cover.', 12: 'Croplands: at least 60% of area is cultivated cropland.', 13: 'Urban and Built-up Lands: at least 30% impervious surface area including building materials, asphalt and vehicles.', 14: 'Cropland/Natural Vegetation Mosaics: mosaics of small-scale cultivation 40-60% with natural tree, shrub, or herbaceous vegetation.', 15: 'Permanent Snow and Ice: at least 60% of area is covered by snow and ice for at least 10 months of the year.', 16: 'Barren: at least 60% of area is non-vegetated barren (sand, rock, soil) areas with less than 10% vegetation.', 17: 'Water Bodies: at least 60% of area is covered by permanent water bodies.', } pickle.dump(class_labels, open('./class_labels_MODIS.pkl','wb')) class_labels = [': '.join([str(kk),vv.split(':')[0]]) for kk,vv in class_labels.items()] class_labels ###Output _____no_output_____ ###Markdown Convert MODIS to GeoTiff ###Code hdf_files = glob.glob(params['modis_path']+'/tmp'+'/*.hdf') print (len(hdf_files), hdf_files[0]) for f in hdf_files: fname = f.split('/')[-1] #gdal_translate HDF4_EOS:EOS_GRID:"MCD12Q1.A2018001.h22v04.006.2019200003144.hdf":MCD12Q1:LC_Type1 test.ti tifname = f.split('/')[-1].split('.')[2]+'.tif' subprocess.call(['gdal_translate', 'HDF4_EOS:EOS_GRID:"{}":MCD12Q1:LC_Type1'.format(f), os.path.join(params['modis_path'],params['year'],tifname)]) ###Output _____no_output_____ ###Markdown Prep DL Product and Bands ###Code product = Product.get('oxford-university:modis-land-cover')#(params['product_params']['_id']) if not product: product = Product(id=params['product_params']['_id'], name=params['product_params']['name']) product.save() bands = [bb for bb in product.bands().limit(2)] if not bands: band = ClassBand(name=params['band_params']['name'], product=product) band.data_type = DataType.BYTE band.data_range = params['band_params']['data_range'] band.display_range = params['band_params']['display_range'] band.resolution = Resolution(unit=ResolutionUnit.METERS, value=params['band_params']['resolution']) band.band_index = params['band_params']['index'] band.class_labels = class_labels band.save() ### delete the product if it needs to be remade # status = product.delete_related_objects() # status # product.delete() ### add readers # product.readers = ["email:[email protected]", "email:[email protected]", "email:[email protected]"] # product.save() ###Output _____no_output_____ ###Markdown Upload Images ###Code image_files = glob.glob(os.path.join(params['modis_path'],params['year'],'*.tif')) print (len(image_files), image_files[0]) uploads = [] for f in image_files: image = dl_Image(product=product, name=params['year']+'.'+f.split('/')[-1]) image.acquired = params['year']+"-01-01" image_path = f uploads.append(image.upload(image_path)) for u in uploads: print (u.status) ###Output _____no_output_____
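###Markdown For completeness, the converted GeoTIFFs can be sanity-checked with GDAL's Python bindings before (or after) uploading. This is an optional sketch of ours, assuming the bindings are installed alongside the `gdal_translate` CLI used above: ###Code
from osgeo import gdal

for f in glob.glob(os.path.join(params['modis_path'], params['year'], '*.tif'))[:3]:
    ds = gdal.Open(f)
    # MCD12Q1 tiles should be 2400 x 2400 pixels in the MODIS sinusoidal projection
    print(f.split('/')[-1], ds.RasterXSize, ds.RasterYSize, ds.GetProjection()[:40])
###Output _____no_output_____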
Jupyter_Notebooks/Interfaces/.ipynb_checkpoints/w2v_Dashboard-checkpoint.ipynb
###Markdown MHS-Word2Vec Dashboard ###Code
import re, json, warnings, gensim
import pandas as pd
import numpy as np

# Primary visualizations
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import seaborn as sns

# PCA visualization
from scipy.spatial.distance import cosine
from sklearn.metrics import pairwise
from sklearn.manifold import MDS, TSNE
from mpl_toolkits.mplot3d import Axes3D
from sklearn.decomposition import PCA

# Import (Jupyter) Dash -- App Functionality
import dash
from dash.dependencies import Input, Output, State
import dash_table
import dash_core_components as dcc
import dash_html_components as html
from jupyter_dash import JupyterDash

# Ignore simple warnings.
warnings.simplefilter('ignore', DeprecationWarning)

# Declare directory location to shorten filepaths later.
abs_dir = "/Users/quinn.wi/Documents/"

# Load model.
model = gensim.models.KeyedVectors.load_word2vec_format(abs_dir + 'Data/Output/WordVectors/jqa_w2v.txt')
###Output _____no_output_____ ###Markdown Functions ###Code
%%time

# https://www.kaggle.com/pierremegret/gensim-word2vec-tutorial
def tsnescatterplot(model, word, list_names):
    """
    Plot in seaborn the results from the t-SNE dimensionality reduction
    algorithm of the vectors of a query word, its list of most similar words,
    and a list of words.
    """
    arrays = np.empty((0, 100), dtype='f') # 100 == vector size when model was created.
    word_labels = [word]
    color_list = ['red']

    # adds the vector of the query word
    arrays = np.append(arrays, model.__getitem__([word]), axis=0)

    # gets list of most similar words
    close_words = model.most_similar([word])

    # adds the vector for each of the closest words to the array
    for wrd_score in close_words:
        wrd_vector = model.__getitem__([wrd_score[0]])
        word_labels.append(wrd_score[0])
        color_list.append('blue')
        arrays = np.append(arrays, wrd_vector, axis=0)

    # adds the vector for each of the words from list_names to the array
    for wrd in list_names:
        wrd_vector = model.__getitem__([wrd[0]])
        word_labels.append(wrd[0])
        color_list.append('green')
        arrays = np.append(arrays, wrd_vector, axis=0)

    # Reduces the dimensionality from 100 to x dimensions with PCA; error will arise if x is too large.
    reduc = PCA(n_components=41).fit_transform(arrays)

    # Finds t-SNE coordinates for 2 dimensions
    np.set_printoptions(suppress=True)

    Y = TSNE(n_components=2, random_state=0, perplexity=15).fit_transform(reduc)

    # Sets everything up to plot
    df = pd.DataFrame({'x': [x for x in Y[:, 0]],
                       'y': [y for y in Y[:, 1]],
                       'words': word_labels,
                       'color': color_list})

    fig, _ = plt.subplots()
    # fig.set_size_inches(9, 9)

    # Basic plot
    p1 = sns.regplot(data=df,
                     x="x",
                     y="y",
                     fit_reg=False,
                     marker="o",
                     scatter_kws={'s': 40,
                                  'facecolors': df['color']
                                 }
                    )

    plt.xticks([])
    plt.yticks([])
    plt.xlabel("")
    plt.ylabel("")

    # add annotations one by one with a loop
    for line in range(0, df.shape[0]):
        p1.text(df['x'][line],
                df['y'][line],
                '  ' + df['words'][line].title(),
                horizontalalignment = 'center',
                verticalalignment = 'bottom',
                size = 'small',
                color = 'gray',
                weight = 'normal')

    plt.xlim(Y[:, 0].min()-50, Y[:, 0].max()+50)
    plt.ylim(Y[:, 1].min()-50, Y[:, 1].max()+50)

    plt.title('t-SNE visualization for {}'.format(word.title()))

    return fig # return the figure so the Dash callback below can use it
###Output CPU times: user 5 µs, sys: 0 ns, total: 5 µs Wall time: 7.87 µs ###Markdown App ###Code
%%time

# App configurations
app = JupyterDash(__name__)
app.config.suppress_callback_exceptions = True

# Plot configurations.
sns.set_style("whitegrid", {'axes.grid' : False})
font = {'family' : 'serif', 'weight' : 'normal', 'size' : 18}
matplotlib.rc('font', **font)
palette = sns.color_palette("Set1", 4)
plt.figure(figsize=(25, 12))

# Layout.
app.layout = html.Div(
    className = 'wrapper',
    children = [
        # app-header
        html.Header(
            className="app-header",
            children = [
                html.Div('Word2Vec Dashboard', className = "app-header--title")
            ]),
        # content-wrapper
        html.Div(
            className = 'content-wrapper',
            children = [
                dcc.Input(id = 'text', type = 'text', placeholder = 'work'),
                # dcc.Slider(id = 'slider', min = 5, max = 35, step = 1, value = 20),
                dcc.Graph(id = 'text-plot')
            ])
    ])

###########################
######### Callbacks #######
###########################

@app.callback(
    Output('text-plot', 'figure'), # a dcc.Graph is updated through its 'figure' property
    Input('text', 'value') # a dcc.Input exposes its contents through its 'value' property
    # State('slider', 'slider_value')
)
def update_textPlot(text_value):
    # Note: dcc.Graph expects a Plotly figure, so the matplotlib figure returned by
    # tsnescatterplot would still need converting (e.g. with plotly.tools.mpl_to_plotly).
    fig = tsnescatterplot(model, text_value, model.most_similar([text_value], topn = 25))
    return fig

if __name__ == "__main__":
    # app.run_server(mode = 'inline', debug = True) # mode = 'inline' for JupyterDash
    app.run_server(debug = True)
###Output Dash app running on http://127.0.0.1:8050/ CPU times: user 60 ms, sys: 56.1 ms, total: 116 ms Wall time: 328 ms
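###Markdown The callback above just wraps the model's nearest-neighbour query, so the same lookup can be sanity-checked directly in a cell without the Dash UI (assuming the query word exists in the model's vocabulary, otherwise gensim raises a KeyError): ###Code
# 'work' is the placeholder word used in the dcc.Input above
model.most_similar('work', topn = 5)
###Output _____no_output_____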
ML_Models_BA.ipynb
###Markdown Machine Learning to Predict Brittleness from other Geophysical Logs Data: 4 wells from the Appalachian Basin ###Code
import os
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors

from sklearn import metrics
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingRegressor as gbR, GradientBoostingClassifier as gbC, IsolationForest
from sklearn.svm import SVC, SVR
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.feature_selection import mutual_info_regression

pd.set_option('display.max_columns', None) # to display all the column information
pd.options.display.max_seq_items = 2000
###Output _____no_output_____ ###Markdown Load data ###Code
file_directory = r"../Thesis work/Thesis work/Well_Data_CSV_Merged" # for macbook google drive
file_name1 = "Poseidon.csv"
file_name2 = "Boggess.csv"
file_name3 = "Mip3h.csv"
file_name4 = "Whipkey.csv"
file_name = [file_name1, file_name2, file_name3, file_name4]

data = []
for i in file_name:
    file_path = os.path.join(file_directory,i)
    df = pd.read_csv(file_path)
    data.append(df)

data_poseidon = data[0]
data_boggess = data[1]
data_mip3h = data[2]
data_whipkey = data[3]

# ## Marcellus Shale interval
# data_poseidon = data_poseidon.loc[(data_poseidon['DEPT'] > 7880) & (data_poseidon['DEPT'] < 8040)]
# data_boggess = data_boggess.loc[(data_boggess['DEPT'] > 7880) & (data_boggess['DEPT'] < 7970)]
# data_mip3h = data_mip3h.loc[(data_mip3h['DEPT'] > 7450) & (data_mip3h['DEPT'] < 7560)]
# data_whipkey = data_whipkey.loc[(data_whipkey['DEPT'] > 7730) & (data_whipkey['DEPT'] < 7840)]

print("The Poseidon data has {} rows".format(data_poseidon.shape[0]))
print("The Boggess data has {} rows".format(data_boggess.shape[0]))
print("The Mip3h data has {} rows".format(data_mip3h.shape[0]))
print("The Whipkey data has {} rows".format(data_whipkey.shape[0]))
###Output _____no_output_____ ###Markdown Input and Output of the Model Data for Regression task ###Code
features = ['DEPT', 'GR', 'NPHI','RHOZ', 'HCAL', 'DTCO','PEFZ','Brittleness_new'] # list of the feature names to select
# features = ['DEPT', 'GR','RHOZ', 'HCAL', 'NPHI','DTCO', 'Brittleness_new'] # list of the feature names to select
target = 'Brittleness_new' # name of the output feature

data = pd.concat([data_whipkey, data_boggess, data_poseidon], ignore_index=True)
data = data.loc[: ,features]

fig, ax = plt.subplots(1, 2, figsize = (15,6))

m = ax[0].scatter(data_poseidon.PR_DYN, data_poseidon.YME_DYN, c = data_poseidon.Brittleness)
ax[0].set_xlabel("Poisson's ratio", fontsize =15)
ax[0].set_ylabel("Young's modulus", fontsize =15)
ax[0].axhline(y=6, color='r', linestyle='--')
ax[0].axvline(x=0.2, color='r', linestyle='--')
ax[0].text(0.23, 0.6, 'Brittle Region',fontsize=15,
           horizontalalignment='center', verticalalignment='center',
           transform=ax[0].transAxes, c='r')
ax[0].text(0.75, 0.08, 'Ductile Region',fontsize=15,
           horizontalalignment='center', verticalalignment='center',
           transform=ax[0].transAxes, c='r')

l = ax[1].scatter(data_poseidon.PR_DYN, data_poseidon.YME_DYN, c = data_poseidon.Brittleness_new)
ax[1].set_xlabel("Poisson's ratio", fontsize =15)
ax[1].set_ylabel("Young's modulus", fontsize =15)
ax[1].axhline(y=6, color='r', linestyle='--')
ax[1].axvline(x=0.2, color='r', linestyle='--')
ax[1].text(0.23, 0.6, 'Brittle Region',fontsize=15,
           horizontalalignment='center', verticalalignment='center',
           transform=ax[1].transAxes, c='r')
ax[1].text(0.75, 0.08, 'Ductile Region',fontsize=15,
           horizontalalignment='center', verticalalignment='center',
           transform=ax[1].transAxes, c='r')

fig.colorbar(l, ax = ax[1])
fig.colorbar(m, ax = ax[0])
# fig.savefig(r'./Images/{}.png'.format('YME-PR plot'), dpi=300)

data[data < 0] = np.nan # remove negative values
data.dropna(inplace = True)
data.shape

data.describe()

# add correlation plot
data.corr(method = 'spearman')

def StatRelat(data, target): # Mutual information and Pearson's correlation for measuring the dependency between the variables.
    """
    function to estimate the Mutual information and Pearson's correlation for measuring the dependency between the variables.

    Parameters
    ----------
    data : DataFrame
        The data
    target: Str
        The column name of the target feature

    Returns
    -------
    A histogram of mutual information and heatmap of correlation between features
    """
    df2 = data.copy().dropna()
    X = df2.drop(['DEPT',target], axis=1)._get_numeric_data() # separate DataFrames for predictor and response features
    y = df2.loc[:,[target]]._get_numeric_data()

    mi = mutual_info_regression(X,np.ravel(y), random_state=20) # calculate mutual information
    mi /= np.max(mi) # calculate relative mutual information
    indices = np.argsort(mi)[::-1] # find indices for descending order

    print("Feature ranking:") # write out the feature importances
    for f in range(X.shape[1]):
        print("%d. feature %s = %f" % (f + 1, X.columns[indices][f], mi[indices[f]]))

    fig, ax = plt.subplots(nrows=1,ncols=2,figsize=(15, 7))
    # fig.subplots_adjust(left=0.0, bottom=0.0, right=1., top=1., wspace=0.2, hspace=0.2)
    ax[0].bar(range(X.shape[1]), mi[indices],color="g", align="center")
    ax[0].set_title("Mutual Information")
    ax[0].set_xticks(range(X.shape[1]))
    ax[0].set_xticklabels(X.columns[indices],rotation=90)
    ax[0].set_xlim([-1, X.shape[1]])

    cmap = sns.diverging_palette(250, 10, as_cmap=True)
    mask = np.zeros_like(df2.drop(['DEPT'], axis=1).corr())
    mask[np.triu_indices_from(mask)] = True
    with sns.axes_style("white"):
        sns.heatmap(df2.drop(['DEPT'], axis=1).corr(), mask=mask,cmap=cmap, vmax=.3, ax=ax[1], square=True, annot = True)
    ax[1].set_yticklabels(ax[1].get_yticklabels(), rotation=45)
    fig.savefig(r'./Images/{}.png'.format('feature_selection'), dpi=300)

StatRelat(data, target)

data_summary = data.drop(['DEPT'], axis=1).describe().T.round(2)
# data_summary.to_excel(r'./Images/{}.xlsx'.format('data_summary_before_stand'))
data_summary

# range
data_summary['max'] - data_summary['min']

# standard deviation
data.std()

scaler = MinMaxScaler()
data_norm = pd.DataFrame(scaler.fit_transform(data.drop(['DEPT'], axis=1)), columns = data.drop(['DEPT'], axis=1).columns)
data_norm_summary = data_norm.describe().T.round(2)
data_norm_summary
# data_norm_summary.to_excel(r'./Images/{}.xlsx'.format('data_summary_after_minmax'))

scaler = StandardScaler()
data_norm = pd.DataFrame(scaler.fit_transform(data.drop(['DEPT'], axis=1)), columns = data.drop(['DEPT'], axis=1).columns)
data_norm_summary = data_norm.describe().T.round(2)
data_norm_summary
# data_norm_summary.to_excel(r'./Images/{}.xlsx'.format('data_summary_after_standard'))

X = data.drop(['DEPT','RHOZ',target], axis=1)
y = data.loc[:,[target]]

X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, random_state = 1)
X_train.shape

def box_plot(X_train, save_file_name):
    fig, ax = plt.subplots(1,len(X_train.columns), figsize = (15,8))
    for i, feature in enumerate(X_train.columns):
        ax[i].boxplot(X_train[feature])
        ax[i].set_ylabel(feature, fontsize = 20)
        right_side = ax[i].spines["right"]
        top_side = ax[i].spines["top"]
        bottom_side = ax[i].spines["bottom"]
        right_side.set_visible(False)
        top_side.set_visible(False)
        bottom_side.set_visible(False)
        ax[i].axes.get_xaxis().set_visible(False)
    # fig.savefig(r'./Images/{}.png'.format(save_file_name), dpi=300)

box_plot(X_train, "before_outlier_removal")

# identify outliers in the training dataset
iso = IsolationForest(contamination=0.1)
yhat = iso.fit_predict(X_train)

# select all rows that are not outliers
mask = yhat != -1
X_train, y_train = X_train[mask], y_train[mask]

# summarize the shape of the updated training dataset
print(X_train.shape, y_train.shape)

box_plot(X_train, "after_outlier_removal")

df = data_mip3h.loc[: ,features].dropna()
X_blind = df.drop(['DEPT','RHOZ',target], axis=1)
y_blind = df.loc[:,[target]]

# X_blind = data_boggess.loc[: ,features].drop([target], axis=1)
# y_blind = data_boggess.loc[:,[target]]

X_test.shape
X_blind.shape
###Output _____no_output_____ ###Markdown Model Building ###Code
def modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm, hyper_parameters, scaler,
             classification, printFeatureImportance=True, cv_folds=3):
    """
    function to tune the selected model and return the optimum pipeline

    Parameters
    ----------
    X_train : DataFrame
        The input features for the training set
    X_test : DataFrame
        The input features for the testing set
    X_blind : DataFrame
        The input features for the blind set
    y_train : DataFrame
        The output feature for the training set
    y_test : DataFrame
        The output feature for the testing set
    y_blind : DataFrame
        The output feature for the blind set
    algorithm : {'neural','svm','gradientboosting'}
        The machine learning algorithm to use
    hyper_parameters : dict
        A dictionary of the hyperparameters of the models that will be tuned
    scaler : {'standard','minmax'}
        Scaling technique to employ.
    classification : bool
        Flag to specify the modeling technique. True for classification and False for regression
    printFeatureImportance : bool
        Flag to specify whether to display the feature importance histogram.
    cv_folds : int
        Number of cross-validation folds. Default is 3.

    Returns
    -------
    model : the trained best-estimator Pipeline, which can be deployed or saved
    """
    # step to assign the selected standardization
    if scaler == 'standard':
        scaler = StandardScaler()
    elif scaler == 'minmax':
        scaler = MinMaxScaler()
    else:
        print("invalid scaler: use 'standard' or 'minmax'")

    # step to assign the selected machine learning algorithm
    if algorithm == 'svm':
        if classification is True:
            algo = SVC(random_state=83)
        else:
            algo = SVR()
    elif algorithm == 'neural':
        if classification is True:
            algo = MLPClassifier(random_state=677)
        else:
            algo = MLPRegressor(random_state=134)
    elif algorithm == 'gradientboosting':
        if classification is True:
            algo = gbC(random_state=10)
        else:
            algo = gbR(random_state=824)
    else:
        print("invalid algorithm: use 'svm' or 'neural' or 'gradientboosting'")

    if classification is True:
        pipe = Pipeline(steps=[('scaler', scaler), ('model', algo)])
        model = GridSearchCV(estimator = pipe, param_grid = hyper_parameters, scoring='accuracy',
                             n_jobs=-1, cv=cv_folds, verbose = 1)

        # Fit the model on the data
        model.fit(X_train.values, y_train.values.ravel())

        # Predict training set:
        y_train_pred = model.predict(X_train)
        # Predict testing set:
        y_test_pred = model.predict(X_test)
        # Predict blind set
        y_blind_pred = model.predict(X_blind)

        # Print model report:
        print("Model Report")
        print("-------------------------------")
        print("The training accuracy : {0:.4g}".format(metrics.accuracy_score(y_train.values, y_train_pred)))
        print("The testing accuracy is : {0:.4g}".format(metrics.accuracy_score(y_test.values,y_test_pred)))
        print("The blind well accuracy is : {0:.4g}".format(metrics.accuracy_score(y_blind.values,y_blind_pred)))
        print("CV best score : {0:.4g}".format(model.best_score_))
        print("CV best parameter combinations : {}".format(model.best_params_))

        if algorithm == 'gradientboosting':
            # Print Feature Importance:
            if printFeatureImportance:
                feat_imp = pd.Series(model.best_estimator_.named_steps.model.feature_importances_, X_train.columns).sort_values(ascending=False)
                feat_imp.plot(kind='barh', title='Feature Importances')
                plt.xlabel('Feature Importance Score')

    else:
        pipe = Pipeline(steps=[('scaler', scaler), ('model', algo)])
        model = GridSearchCV(estimator = pipe, param_grid = hyper_parameters, scoring='r2',
                             n_jobs=-1, cv=cv_folds, verbose = 1)

        # Fit the model on the data
        model.fit(X_train.values, y_train.values.ravel())

        # Predict training set:
        y_train_pred = model.predict(X_train)
        # Predict testing set:
        y_test_pred = model.predict(X_test)
        # Predict blind set
        y_blind_pred = model.predict(X_blind)

        # Print model report:
        print("Model Report")
        print("-------------------------------")
        print("The training R2 score : {0:.4g}".format(metrics.r2_score(y_train.values, y_train_pred)))
        print("The testing R2 score is : {0:.4g}".format(metrics.r2_score(y_test.values,y_test_pred)))
        print("The blind well R2 score is : {0:.4g}".format(metrics.r2_score(y_blind.values,y_blind_pred)))
        print("CV best score : {0:.4g}".format(model.best_score_))
        print("CV best parameter combinations : {}".format(model.best_params_))

        if algorithm == 'gradientboosting':
            # Print Feature Importance:
            if printFeatureImportance:
                feat_imp = pd.Series(model.best_estimator_.named_steps.model.feature_importances_, X_train.columns).sort_values(ascending=False)
                feat_imp.plot(kind='barh', title='Feature Importances')
                plt.xlabel('Feature Importance Score')

    return model.best_estimator_

# Suggested hyperparameter ranges to try in the grids below (kept here as notes):
# "model__min_samples_split" : [2,3,4,5]
# "model__min_samples_leaf" : [1,2,3,4,5]
# "model__max_depth" : range(4,8,1)
# "model__n_estimators" : range(100,301,50)
###Output _____no_output_____ ###Markdown Training the Gradient Boosting ###Code #use the documentation of SVR() to understand the parameters #put new parameters in the grid by using "model__" before the parameter name as below hyper_parameters = { } model_gb = modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm='gradientboosting', hyper_parameters=hyper_parameters, scaler='minmax', classification=False,printFeatureImportance=True, cv_folds=3) #sample size vs score m = Pipeline(steps=[('scaler', StandardScaler()), ('model', gbR(max_depth= 7, min_samples_leaf= 1, min_samples_split= 3))]) size = np.arange(500,X_train.shape[0], 500) train_scores = [] test_scores = [] blind_scores = [] for i in size: m.fit(X_train.iloc[:i,:].values, y_train.iloc[:i,:].values.ravel()) train_scores.append(metrics.r2_score(y_train.iloc[:i,:].values, m.predict(X_train.iloc[:i,:].values))) test_scores.append(metrics.r2_score(y_test.values, m.predict(X_test))) blind_scores.append(metrics.r2_score(y_blind.values, m.predict(X_blind))) plt.plot(size, train_scores, label = 'train') plt.plot(size, test_scores, label = 'test') plt.plot(size, blind_scores, label = 'blind') plt.legend() feat_imp = pd.Series(model_gb.named_steps.model.feature_importances_, X_train.columns).sort_values(ascending=False) feat_imp.plot(kind='barh', title='Feature Importances') plt.xlabel('Feature Importance Score') # plt.savefig(r'./Images/{}.png'.format('gb_feature_importance'), dpi=300) # model2 = modelfit(X_train2, X_test2, X_blind2, y_train2, y_test2, y_blind2, algorithm='gradientboosting', # hyper_parameters=hyper_parameters, scaler='standard', # classification=True,printFeatureImportance=True, cv_folds=3) ###Output _____no_output_____ ###Markdown Training the SVM ###Code 'model__kernel': ['linear', 'poly','rbf','sigmoid'] 'model__gamma': ['scale', 'auto'] 'model__C': [1,10,100] #use the documentation of SVR() to understand the parameters #put new parameters in the grid by using "model__" before the parameter name as below hyper_parameters = { 'model__epsilon': np.arange(0.01,0.1,0.01)} model_svm = modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm='svm', hyper_parameters=hyper_parameters, scaler='standard', classification=False,printFeatureImportance=True, cv_folds=3) model_svm.named_steps.model.support_vectors_.shape #sample size vs score m = Pipeline(steps=[('scaler', StandardScaler()), ('model', SVR(epsilon=0.02))]) size = np.arange(500,X_train.shape[0], 500) train_scores = [] test_scores = [] blind_scores = [] for i in size: m.fit(X_train.iloc[:i,:].values, y_train.iloc[:i,:].values.ravel()) train_scores.append(metrics.r2_score(y_train.iloc[:i,:].values, m.predict(X_train.iloc[:i,:].values))) test_scores.append(metrics.r2_score(y_test.values, m.predict(X_test))) blind_scores.append(metrics.r2_score(y_blind.values, m.predict(X_blind))) plt.plot(size, train_scores, label = 'train') plt.plot(size, test_scores, label = 'test') plt.plot(size, blind_scores, label = 'blind') plt.legend() ###Output _____no_output_____ ###Markdown Training the Neural Network ###Code #use the documentation of MLPClassifier() to understand the parameters #put new parameters in the grid by using "model__" before the parameter name as below hyper_parameters = {'model__hidden_layer_sizes': [(10,10,),(19,19,),(20,),(20,20,)], 'model__tol': [0.0001,0.00001,0.001], 'model__solver': ['lbfgs'], 'model__max_iter': [1000]} model_nn = modelfit(X_train, X_test, X_blind, y_train, y_test, y_blind, algorithm='neural', 
hyper_parameters=hyper_parameters, scaler='minmax',
                    classification=False,printFeatureImportance=True, cv_folds=3)
#sample size vs score
m = Pipeline(steps=[('scaler', StandardScaler()), ('model', MLPRegressor(hidden_layer_sizes= (19, 19), max_iter= 1000, solver= 'lbfgs', tol= 1e-05))])
size = np.arange(500,X_train.shape[0], 500)
train_scores = []
test_scores = []
blind_scores = []
for i in size:
    m.fit(X_train.iloc[:i,:].values, y_train.iloc[:i,:].values.ravel())
    train_scores.append(metrics.r2_score(y_train.iloc[:i,:].values, m.predict(X_train.iloc[:i,:].values)))
    test_scores.append(metrics.r2_score(y_test.values, m.predict(X_test)))
    blind_scores.append(metrics.r2_score(y_blind.values, m.predict(X_blind)))
plt.plot(size, train_scores, label = 'train')
plt.plot(size, test_scores, label = 'test')
plt.plot(size, blind_scores, label = 'blind')
plt.legend()
###Output _____no_output_____
###Markdown Visualizing the Result ###Code
#create folder to save images
if os.path.exists(r'./Images'):
    pass
else:
    os.mkdir(r'./Images')
def plot_logs2(data, well_name, model_gb, model_svm, model_nn, formation):
    """
    function to plot the log data and the predictions

    Parameters
    ----------
    data : DataFrame
        The well data to be plotted
    well_name : str
        The name of the well being plotted
    model_gb, model_svm, model_nn :
        The trained models used for the predictions
    formation : dict
        The formation tops ( names as keys and depth interval as the item in a list)

    Returns
    -------
    A plot of the well logs
    """
    #assigning the logs to variable names to make the code cleaner and easier to read
    MD = data.DEPT
    GR = data.GR
    RHOB = data.RHOZ
    NPHI = data.NPHI
    DT= data.DTCO
    PEFZ = data.PEFZ
    BA = data.Brittleness_new
    #creating the figure
    fig, ax = plt.subplots(nrows=1, ncols=6,figsize=(15,10), sharey=True, gridspec_kw={'width_ratios': [3,3,3,3,3,3]})
    # fig.suptitle("O {}".format(well_name), fontsize=25)
    fig.subplots_adjust(top=0.85, wspace=0.2)
    # ax[0].set_ylim(formation['Upper Marcellus'][0],formation['Lower Marcellus'][1]) #display only a depth range
    ax[0].set_ylim(7600, formation['Lower Marcellus'][1]) #display only a depth range
    ax[0].invert_yaxis()
    ax[0].set_ylabel('MD (M)',fontsize=20)
    ax[0].yaxis.grid(True)
    ax[0].get_xaxis().set_visible(False) #removing the x-axis label at the bottom of the fig
    ##Track 1 ##Gamma_ray and PEF
    ax_GR = ax[0].twiny() #share the depth axis
    ax_GR.set_xlim(0,270)
    ax_GR.plot(GR,MD, color='black')
    ax_GR.set_xlabel('GR (API)',color='black')
    ax_GR.tick_params('x',colors='black') ##change the color of the x-axis tick label
    ax[0].get_xaxis().set_visible(False)
    ax[0].yaxis.grid(True)
    ax_GR.grid(True,alpha=0.5)
    #variable colorfill
    GR_range = abs(GR.min() - GR.max())
    cmap = plt.get_cmap('nipy_spectral') #color map
    color_index = np.arange(GR.min(), GR.max(), GR_range / 20)
    #loop through each value in the color_index
    for index in sorted(color_index):
        index_value = (index - GR.min())/GR_range
        color = cmap(index_value) #obtain colour for color index value
        ax_GR.fill_betweenx(MD, 0 , GR, where = GR >= index, color = color)
    ax_PEFZ = ax[0].twiny()
    ax_PEFZ.plot(PEFZ,MD, color='red')
    ax_PEFZ.set_xlabel('PEFZ',color='red')
    ax_PEFZ.tick_params('x',colors='red') ##change the color of the x-axis tick label
    ax_PEFZ.spines['top'].set_position(('outward',40)) ##move the x-axis up
    ax_PEFZ.spines["top"].set_edgecolor("red")
    #Track 2 ##NPHI and RHOB
    ax_NPHI = ax[1].twiny()
    ax_NPHI.set_xlim(-0.1,0.4)
    ax_NPHI.invert_xaxis()
    ax_NPHI.plot(NPHI, MD, label='NPHI[%]', color='green')
    ax_NPHI.spines['top'].set_position(('outward',0))
    ax_NPHI.set_xlabel('NPHI[%]', color='green')
    ax_NPHI.tick_params(axis='x', colors='green')
    ax_NPHI.spines["top"].set_edgecolor("green")
    ax_RHOB = ax[1].twiny()
    ax_RHOB.set_xlim(1.95,2.95)
    ax_RHOB.invert_xaxis()
    ax_RHOB.plot(RHOB, MD,label='RHOB[g/cc]', color='red')
    ax_RHOB.spines['top'].set_position(('outward',40))
    ax_RHOB.set_xlabel('RHOB[g/cc]',color='red')
    ax_RHOB.tick_params(axis='x', colors='red')
    ax_RHOB.spines["top"].set_edgecolor('red')
    ax[1].get_xaxis().set_visible(False)
    ax[1].yaxis.grid(True)
    ax_RHOB.grid(True,alpha=0.5)
    ax[1].axis('off')
    # #color fill
    # x = np.array(ax_RHOB.get_xlim())
    # z = np.array(ax_NPHI.get_xlim())
    # nz=((NPHI-np.max(z))/(np.min(z)-np.max(z)))*(np.max(x)-np.min(x))+np.min(x)
    # ax_RHOB.fill_betweenx(MD, RHOB, nz, where=RHOB>=nz, interpolate=True, color='green')
    # ax_RHOB.fill_betweenx(MD, RHOB, nz, where=RHOB<=nz, interpolate=True, color='yellow')
    #Track 3 ##Sonic
    ax_DT = ax[2].twiny()
    ax_DT.grid(True)
    ax_DT.set_xlim(100,50)
    ax_DT.spines['top'].set_position(('outward',0))
    ax_DT.plot(DT, MD, label='DT[us/ft]', color='blue')
    ax_DT.set_xlabel('DT[us/ft]', color='blue')
    ax_DT.tick_params(axis='x', colors='blue')
    ax_DT.spines["top"].set_edgecolor("blue")
    ax[2].get_xaxis().set_visible(False)
    ax[2].yaxis.grid(True)
    ax_DT.grid(True,alpha=0.5)
    ax[2].axis('off')
    #Track 4 ##Brittleness ##gb model
    ax_BA1 = ax[3].twiny()
    ax_BA1.grid(True)
    ax_BA1.set_xlim(0,1)
    ax_BA1.spines['top'].set_position(('outward',0))
    ax_BA1.plot(BA, MD, label='BRITTLENESS ESTIMATE', color='black')
    ax_BA1.set_xlabel('BRITTLENESS ESTIMATE', color='black')
    ax_BA1.tick_params(axis='x', colors='black')
    ##Plotting the predicted data ###work on this for generalization
    ax_pred = ax[3].twiny()
    df = data.loc[: , features].dropna()
    pred = model_gb.predict(df.drop(['DEPT','RHOZ',target], axis=1))
    df['Brittleness_predict'] = pred
    ax_BA1.plot(df.Brittleness_predict, df.DEPT, color='red', linestyle='--')
    ax_pred.spines['top'].set_position(('outward',40))
    ax_pred.set_xlabel('BRITTLENESS (GB)',color='red')
    ax_pred.tick_params(axis='x', colors='red')
    ax_pred.spines["top"].set_edgecolor('red')
    ax[3].get_xaxis().set_visible(False)
    ax[3].yaxis.grid(True)
    ax[3].axis('off')
    ax_BA1.grid(True,alpha=0.5)
    #Track 5 ##Brittleness ##nn model
    ax_BA2 = ax[4].twiny()
    ax_BA2.grid(True)
    ax_BA2.set_xlim(0,1)
    ax_BA2.spines['top'].set_position(('outward',0))
    ax_BA2.plot(BA, MD, label='BRITTLENESS ESTIMATE', color='black')
    ax_BA2.set_xlabel('BRITTLENESS ESTIMATE', color='black')
    ax_BA2.tick_params(axis='x', colors='black')
    ##Plotting the predicted data ###work on this for generalization
    ax_pred = ax[4].twiny()
    df = data.loc[: , features].dropna()
    pred = model_nn.predict(df.drop(['DEPT','RHOZ', target], axis=1))
    df['Brittleness_predict'] = pred
    ax_BA2.plot(df.Brittleness_predict, df.DEPT, color='blue', linestyle='--')
    ax_pred.spines['top'].set_position(('outward',40))
    ax_pred.set_xlabel('BRITTLENESS (NN)',color='blue')
    ax_pred.tick_params(axis='x', colors='blue')
    ax_pred.spines["top"].set_edgecolor('blue')
    ax[4].get_xaxis().set_visible(False)
    ax[4].yaxis.grid(True)
    ax[4].axis('off')
    ax_BA2.grid(True,alpha=0.5)
    #Track 6 ##Brittleness ##svm model
    ax_BA3 = ax[5].twiny()
    ax_BA3.grid(True)
    ax_BA3.set_xlim(0,1)
    ax_BA3.spines['top'].set_position(('outward',0))
    ax_BA3.plot(BA, MD, label='BRITTLENESS ESTIMATE', color='black')
    ax_BA3.set_xlabel('BRITTLENESS ESTIMATE', color='black')
    ax_BA3.tick_params(axis='x', colors='black')
    ##Plotting the predicted data ###work on this for generalization
    ax_pred = ax[5].twiny()
    df = data.loc[: , features].dropna()
    pred =
model_svm.predict(df.drop(['DEPT','RHOZ',target], axis=1)) df['Brittleness_predict'] = pred ax_BA3.plot(df.Brittleness_predict, df.DEPT, color='purple', linestyle='--') ax_pred.spines['top'].set_position(('outward',40)) ax_pred.set_xlabel('BRITTLENESS (SVM)',color='purple') ax_pred.tick_params(axis='x', colors='purple') ax_pred.spines["top"].set_edgecolor('purple') ax[5].get_xaxis().set_visible(False) ax[5].yaxis.grid(True) ax[5].axis('off') ax_BA3.grid(True,alpha=0.5) # #formation top # ax_top = ax[-1] # ax[-1].axis('off') # formation_midpoints = [] # for key, value in formation.items(): # #Calculate mid point of the formation # formation_midpoints.append(value[0] + (value[1]-value[0])/2) # zone_colours = ["red", "blue", "green"] # for ax in [ax_GR, ax_NPHI, ax_BA1, ax_BA2, ax_BA3, ax_top]: # # loop through the formations dictionary and zone colours # for depth, colour in zip(formation.values(), zone_colours): # # use the depths and colours to shade across the subplots # ax.axhspan(depth[0], depth[1], color=colour, alpha=0.1) # for label, formation_mid in zip(formation.keys(), # formation_midpoints): # ax_top.text(0.5, formation_mid, label, rotation=90, # verticalalignment='center', horizontalalignment='center', fontweight='bold', # fontsize='large') # fig.savefig(r'./Images/{}.png'.format(well_name), dpi=600) X_train.columns # formation = {'Tully': [7195,7310], # 'Mahantango': [7310,7455], # 'Marcellus': [7455,7560]} formation = {'Upper Marcellus': [7453,7476], 'Middle Marcellus': [7476,7517], 'Lower Marcellus': [7517,7555]} plot_logs2(data_mip3h, "MIP3H2", model_gb, model_svm, model_nn, formation) # formation2 = {'Tully': [7604,7670], # 'Mahantango': [7670,7882], # 'Marcellus': [7882,8052]} formation_pos = {'Upper Marcellus': [7883,7961], 'Middle Marcellus': [7961,8015], 'Lower Marcellus': [8015,8052]} plot_logs2(data_poseidon, "Poseidon", model_gb, model_svm, model_nn, formation_pos) formation_bog = {'Upper Marcellus': [7877,7905], 'Middle Marcellus': [7905, 7951], 'Lower Marcellus': [7951,7974]} plot_logs2(data_boggess, "Boggess", model_gb, model_svm, model_nn, formation_bog) formation_whip = {'Upper Marcellus': [7736, 7785], 'Middle Marcellus': [7785, 7811], 'Lower Marcellus': [7811, 7835]} plot_logs2(data_whipkey, "Whipkey", model_gb, model_svm, model_nn, formation_whip) ###Output _____no_output_____
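###Markdown The `modelfit` docstring above notes that the returned best estimator "can be deployed or saved", but the saving step itself is never shown. A minimal sketch with joblib (the file names and the `./Images` location are assumptions, reused only because that folder already exists in this notebook): ###Code
import joblib

# persist the tuned pipelines; each one re-applies its fitted scaler before predicting
joblib.dump(model_gb, r'./Images/model_gb.joblib')
joblib.dump(model_svm, r'./Images/model_svm.joblib')
joblib.dump(model_nn, r'./Images/model_nn.joblib')

# reload later for deployment on new well data
model_gb_loaded = joblib.load(r'./Images/model_gb.joblib')
###Output _____no_output_____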
notebooks/Predictive_modeling.ipynb
###Markdown Importing the data ###Code
# setting the raw path
processed_data_path = os.path.join(os.path.pardir,"data","processed")
train_file_path = os.path.join(processed_data_path,"train.csv")
test_file_path = os.path.join(processed_data_path,"test.csv")
# note: train.csv holds the input features and test.csv the target column (see the shapes below)
X = pd.read_csv(train_file_path)
y = pd.read_csv(test_file_path)
X.shape
y.shape
###Output _____no_output_____
###Markdown Splitting the data ###Code
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.1, random_state=0)
print("Number transactions X_train dataset: ", X_train.shape)
print("Number transactions y_train dataset: ", y_train.shape)
print("Number transactions X_test dataset: ", X_test.shape)
print("Number transactions y_test dataset: ", y_test.shape)
###Output Number transactions X_train dataset: (65786, 10)
Number transactions y_train dataset: (65786, 1)
Number transactions X_test dataset: (7310, 10)
Number transactions y_test dataset: (7310, 1)
###Markdown Dummy model Creating a dummy classifier
A dummy classifier is a type of classifier which does not generate any insight about the data and classifies the given data using only simple rules. The classifier's behavior is completely independent of the training data: the trends in the training data are ignored, and one of a few fixed strategies is used to predict the class label.
It is used only as a simple baseline for the other classifiers, i.e. any other classifier is expected to perform better on the given dataset. It is especially useful for datasets where we are sure of a class imbalance. It is based on the philosophy that any analytic approach for a classification problem should be better than a random guessing approach.
###Code
'''
Below are a few strategies used by the dummy classifier to predict a class label –
Most Frequent: The classifier always predicts the most frequent class label in the training data.
Stratified: It generates predictions by respecting the class distribution of the training data. It is different from the "most frequent" strategy as it instead associates a probability with each data point of being the most frequent class label.
Uniform: It generates predictions uniformly at random.
'''
strategies = ['most_frequent', 'stratified', 'uniform']
test_scores = []
for s in strategies:
    #if s =='constant':
    #    dclf = DummyClassifier(strategy = s, random_state = 0)
    #else:
    dclf = DummyClassifier(strategy = s, random_state = 0)
    dclf.fit(X_train, y_train)
    score = dclf.score(X_test, y_test)
    test_scores.append(score)
test_scores
# accuracy score
print('accuracy for baseline model : {0:.2f}'.format(accuracy_score(y_test, dclf.predict(X_test))))
ax = sns.stripplot(strategies, test_scores);
ax.set(xlabel ='Strategy', ylabel ='Test Score')
plt.show()
###Output _____no_output_____
###Markdown Machine learning models Defining the helper function for k-fold and stratified k-fold cross validation ###Code
# Defining the path to save the figures/plots.
figures_data_path = os.path.join(os.path.pardir, 'reports','figures')
kf = KFold(n_splits = 10, shuffle = True, random_state = 4)
skf = StratifiedKFold(n_splits = 10, shuffle = True, random_state = 4)
def model_classifier(model, X, y, cv):
    """
    Creates the folds manually, fits the model on each training fold, and
    prints the mean accuracy across folds together with the classification
    report, confusion matrix and ROC curve of the last fold.
    """
    scores = []
    for train_index,test_index in cv.split(X,y):
        X_train,X_test = X.loc[train_index],X.loc[test_index]
        y_train,y_test = y.loc[train_index],y.loc[test_index]
        # Fit the model on the training data
        model_obj = model.fit(X_train, y_train)
        y_pred = model_obj.predict(X_test)
        y_pred_prob = model_obj.predict_proba(X_test)[:,1]
        # Score the model on the validation data
        score = accuracy_score(y_test, y_pred)
        report = classification_report(y_test, y_pred)
        conf_matrix = confusion_matrix(y_test, y_pred)
        scores.append(score)
    mean_score = np.array(scores).mean()
    print('Accuracy scores of the model: {:.2f}'.format(mean_score))
    print('\n Classification report of the model')
    print('--------------------------------------')
    print(report)
    print('\n Confusion Matrix of the model')
    print('--------------------------------------')
    print(conf_matrix)
    print("\n ROC Curve")
    logit_roc_auc = roc_auc_score(y_test, y_pred)
    fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
    plt.figure()
    val_model = input("Enter your model name: ")
    plt.plot(fpr, tpr, label= val_model + ' (area = %0.2f)' % logit_roc_auc)
    plt.plot([0, 1], [0, 1],'r--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic')
    plt.legend(loc="lower right")
    my_fig = val_model + '.png'
    plt.savefig(os.path.join(figures_data_path, my_fig))
    plt.show()
###Output _____no_output_____
###Markdown Logistic regression model ###Code
# instantiating the model;
logregression = LogisticRegression()
###Output _____no_output_____
###Markdown 1. Using k-fold. Overview: The k-fold cross-validation procedure involves splitting the training dataset into k folds. The first k-1 folds are used to train a model, and the holdout kth fold is used as the test set. This process is repeated and each of the folds is given an opportunity to be used as the holdout test set. A total of k models are fit and evaluated, and the performance of the model is calculated as the mean of these runs. ###Code model_classifier(logregression, X, y,kf)
###Output Accuracy scores of the model: 0.73
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.70      0.78      0.74      3636
           1       0.75      0.67      0.71      3673
    accuracy                           0.73      7309
   macro avg       0.73      0.73      0.72      7309
weighted avg       0.73      0.73      0.72      7309
 Confusion Matrix of the model
--------------------------------------
 [[2825  811]
 [1195 2478]]
 ROC Curve
###Markdown 2. Stratified k-fold validation Overview: This is a modified k-fold cross-validation in which the train-test splits preserve the class distribution of the dataset.
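###Markdown To make the "preserved class distribution" claim concrete before running the model, a small sketch using the `skf` splitter and the data defined above (it assumes the target is encoded 0/1, as the confusion matrices in this notebook suggest): ###Code
# with StratifiedKFold, the positive-class fraction in every fold stays
# essentially equal to the overall positive-class fraction
overall = y.values.ravel().mean()
for fold, (train_idx, _) in enumerate(skf.split(X, y.values.ravel())):
    fold_frac = y.iloc[train_idx].values.ravel().mean()
    print('fold %d: positive fraction %.3f (overall %.3f)' % (fold, fold_frac, overall))
###Output _____no_output_____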
###Code model_classifier(logregression, X, y,skf)
###Output Accuracy scores of the model: 0.73
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.71      0.79      0.75      3654
           1       0.76      0.68      0.72      3655
    accuracy                           0.73      7309
   macro avg       0.74      0.73      0.73      7309
weighted avg       0.74      0.73      0.73      7309
 Confusion Matrix of the model
--------------------------------------
 [[2873  781]
 [1172 2483]]
 ROC Curve
###Markdown Hyperparameter Optimization: logistic regression ###Code
lr_hyp = LogisticRegression()
# regularization penalty space
penalty = ['l1', 'l2']
# regularization hyperparameter space
C = np.logspace(0, 4, 10)
# hyperparameter options
param_grid = dict(C=C, penalty=penalty)
log_reg_cv = GridSearchCV(lr_hyp, param_grid, verbose=0)
model_classifier(log_reg_cv, X, y, skf)
'''
lr_hyp = LogisticRegression()
# regularization penalty space
penalty = ['l1','l2']
solver = ['liblinear', 'saga']
# regularization hyperparameter space
#C = np.logspace(0, 4, 10)
C = np.logspace(0, 4, num=10)
# hyperparameter options
param_grid = dict(C=C, penalty=penalty, solver=solver)
log_reg_cv = RandomizedSearchCV(lr_hyp, param_grid)
model_classifier(log_reg_cv, X, y, kf)
'''
###Output Accuracy scores of the model: 0.73
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.71      0.79      0.75      3654
           1       0.76      0.68      0.72      3655
    accuracy                           0.73      7309
   macro avg       0.74      0.73      0.73      7309
weighted avg       0.74      0.73      0.73      7309
 Confusion Matrix of the model
--------------------------------------
 [[2873  781]
 [1172 2483]]
 ROC Curve
Enter your model name: log_reg_cv
###Markdown In our case, since we had already tackled the class imbalance, there is not much difference between the accuracy scores obtained with k-fold and with stratified k-fold cross-validation.

Confusion Matrix: The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier. The higher the diagonal values of the confusion matrix the better, indicating many correct predictions.

classification_report F1 score: A measurement that considers both precision and recall to compute the score. The F1 score can be interpreted as a weighted average of the precision and recall values, where an F1 score reaches its best value at 1 and worst value at 0. Out of all the classes, 72 % of our data was predicted correctly.

**ROC Curve:** The ROC curve shows the trade-off between sensitivity (or TPR) and specificity (1 – FPR). Classifiers that give curves closer to the top-left corner indicate a better performance. The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test. In our case the curve is well away from the diagonal, so we can conclude that the test is reasonably accurate.
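###Markdown As a quick sanity check, the class-1 numbers in the report above follow directly from the confusion matrix [[2873, 781], [1172, 2483]] printed by the stratified run: ###Code
# rows are true labels, columns are predictions: [[TN, FP], [FN, TP]]
TN, FP, FN, TP = 2873, 781, 1172, 2483
precision = TP / (TP + FP)   # 2483/3264 ~ 0.76, as reported
recall    = TP / (TP + FN)   # 2483/3655 ~ 0.68; this is also the TPR on the ROC curve
f1        = 2 * precision * recall / (precision + recall)   # ~ 0.72, as reported
fpr       = FP / (FP + TN)   # ~ 0.21, i.e. 1 - specificity
print(precision, recall, f1, fpr)
###Output _____no_output_____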
**XGBoost Model** ###Code
#instantiating the model
xgb_regressor = XGBClassifier()
###Output _____no_output_____
###Markdown XGBoost using k-fold cross validation ###Code model_classifier(xgb_regressor,X,y,kf)
###Output Accuracy scores of the model: 0.84
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.82      0.87      0.85      3636
           1       0.87      0.82      0.84      3673
    accuracy                           0.84      7309
   macro avg       0.84      0.84      0.84      7309
weighted avg       0.84      0.84      0.84      7309
 Confusion Matrix of the model
--------------------------------------
 [[3169  467]
 [ 675 2998]]
 ROC Curve
###Markdown XGBoost using stratified k-fold cross validation ###Code model_classifier(xgb_regressor,X,y,skf)
###Output Accuracy scores of the model: 0.85
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.83      0.88      0.85      3654
           1       0.87      0.82      0.84      3655
    accuracy                           0.85      7309
   macro avg       0.85      0.85      0.85      7309
weighted avg       0.85      0.85      0.85      7309
 Confusion Matrix of the model
--------------------------------------
 [[3209  445]
 [ 656 2999]]
 ROC Curve
###Markdown XGBoost Hyperparameter tuning ###Code
def timer(start_time=None):
    if not start_time:
        start_time = datetime.now()
        return start_time
    elif start_time:
        thour, temp_sec = divmod((datetime.now() - start_time).total_seconds(), 3600)
        tmin, tsec = divmod(temp_sec, 60)
        print('\n Time taken: %i hours %i minutes and %s seconds.' % (thour, tmin, round(tsec, 2)))
# A parameter grid for XGBoost
params = {
        'min_child_weight': [1, 5, 10],
        'gamma': [0.5, 1, 1.5, 2, 5],
        'subsample': [0.6, 0.8, 1.0],
        'colsample_bytree': [0.6, 0.8, 1.0],
        'max_depth': [3, 4, 5]
        }
xgb_hyp = XGBClassifier()
xgb_random_search = RandomizedSearchCV(xgb_hyp, param_distributions=params, n_iter=5, scoring='roc_auc', n_jobs=-1, cv=10, verbose=3)
start_time = timer(None) # timing starts from this point for "start_time" variable
xgb_random_search.fit(X, y)
timer(start_time)
xgb_random_search.best_estimator_
xgb_random_search.best_params_
xgb_tuned = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
              colsample_bynode=1, colsample_bytree=0.6, gamma=0.5,
              learning_rate=0.1, max_delta_step=0, max_depth=5,
              min_child_weight=10, missing=None, n_estimators=100, n_jobs=1,
              nthread=None, objective='binary:logistic', random_state=0,
              reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
              silent=None, subsample=0.6, verbosity=1)
model_classifier(xgb_tuned,X,y,skf)
###Output _____no_output_____
###Markdown MLP Definition - What does Multilayer Perceptron (MLP) mean? A multilayer perceptron (MLP) is a feedforward artificial neural network that generates a set of outputs from a set of inputs. An MLP is characterized by several layers of input nodes connected as a directed graph between the input and output layers. MLP uses backpropagation for training the network. MLP is a deep learning method.
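###Markdown The feedforward idea in the definition above can be made concrete with a tiny NumPy sketch of one forward pass through a single hidden layer. The weights here are random placeholders (not the trained network); 10 input features match X_train, and ReLU is the scikit-learn MLP default activation: ###Code
rng = np.random.default_rng(0)
x = rng.normal(size=10)                             # one sample with 10 input features
W1, b1 = rng.normal(size=(20, 10)), np.zeros(20)    # hidden layer of 20 units
W2, b2 = rng.normal(size=(1, 20)), np.zeros(1)      # output layer

h = np.maximum(0, W1 @ x + b1)                      # ReLU activation on the hidden layer
out = 1 / (1 + np.exp(-(W2 @ h + b2)))              # sigmoid output for a binary class score
print(out)
###Output _____no_output_____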
###Code mlp = MLPClassifier()
###Output _____no_output_____
###Markdown MLP using K-fold cv ###Code model_classifier(mlp,X,y,kf)
###Output Accuracy scores of the model: 0.75
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.71      0.85      0.78      3636
           1       0.82      0.66      0.73      3673
    accuracy                           0.75      7309
   macro avg       0.76      0.75      0.75      7309
weighted avg       0.76      0.75      0.75      7309
 Confusion Matrix of the model
--------------------------------------
 [[3097  539]
 [1257 2416]]
 ROC Curve
###Markdown MLP using stratified k-fold cv ###Code model_classifier(mlp,X,y,skf)
###Output Accuracy scores of the model: 0.75
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.72      0.85      0.78      3654
           1       0.81      0.66      0.73      3655
    accuracy                           0.76      7309
   macro avg       0.76      0.76      0.75      7309
weighted avg       0.76      0.76      0.75      7309
 Confusion Matrix of the model
--------------------------------------
 [[3095  559]
 [1231 2424]]
 ROC Curve
###Markdown Hyperparameter tuning for Multilayer perceptron ###Code
parameter_space = {
    'hidden_layer_sizes': [(10,30,10),(20,)],
    'activation': ['tanh', 'relu'],
    'solver': ['sgd', 'adam'],
    'alpha': [0.0001, 0.05],
    'learning_rate': ['constant','adaptive'],
}
mlp_hyp = MLPClassifier()
randomized_mlp = RandomizedSearchCV(mlp_hyp, parameter_space, n_jobs=-1, cv=10)
start_time = timer(None) # timing starts from this point for "start_time" variable
randomized_mlp.fit(X, y)
timer(start_time)
randomized_mlp.best_estimator_
randomized_mlp.best_params_
mlp_tuned = MLPClassifier(alpha=0.05, hidden_layer_sizes=(20,), learning_rate='adaptive', solver='sgd')
model_classifier(mlp_tuned,X,y,skf)
###Output Accuracy scores of the model: 0.74
 Classification report of the model
--------------------------------------
              precision    recall  f1-score   support
           0       0.71      0.82      0.76      3654
           1       0.79      0.66      0.72      3655
    accuracy                           0.74      7309
   macro avg       0.75      0.74      0.74      7309
weighted avg       0.75      0.74      0.74      7309
 Confusion Matrix of the model
--------------------------------------
 [[3012  642]
 [1248 2407]]
 ROC Curve
###Markdown Support Vector Machine Model: The objective of the support vector machine algorithm is to find a hyperplane in an N-dimensional space (N — the number of features) that distinctly classifies the data points. ###Code
from sklearn.svm import SVC
# this is a classification problem, so we use SVC (SVR is a regressor and has no
# predict_proba); probability=True is required because model_classifier calls predict_proba
best_svc = SVC(kernel='rbf', probability=True)
###Output _____no_output_____
###Markdown SVM using stratified k-fold cross validation ###Code model_classifier(best_svc,X,y,skf)
###Output _____no_output_____
###Markdown SVM using k-fold cross validation ###Code model_classifier(best_svc,X,y,kf)
###Output _____no_output_____
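###Markdown The hyperplane idea described above is easiest to see on a tiny, separable toy problem (self-contained, not the dataset used in this notebook); the fitted linear SVC exposes the hyperplane w·x + b = 0 and the support vectors that define the margin: ###Code
from sklearn.datasets import make_blobs

X_toy, y_toy = make_blobs(n_samples=60, centers=2, random_state=6)
svc_toy = SVC(kernel='linear').fit(X_toy, y_toy)

print('coefficients w:', svc_toy.coef_[0])                    # normal vector of the hyperplane
print('intercept b  :', svc_toy.intercept_[0])
print('support vectors per class:', svc_toy.n_support_)      # the points that set the margin
###Output _____no_output_____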
docs/notebooks/tweet-ouptut.ipynb
###Markdown Tweet output `TweetOutput` is a generic class which can export scraped tweets. Under the hood it has an abstract method `export_tweets(tweets: List[Tweet])`. There are a few implementations of `TweetOutput` ###Code import stweet as st ###Output _____no_output_____ ###Markdown PrintTweetOutput PrintTweetOutput prints all tweets to the console. It does not store tweets anywhere. ###Code st.PrintTweetOutput(); ###Output _____no_output_____ ###Markdown CollectorTweetOutput CollectorTweetOutput stores all tweets in memory. This is the best option when we want to analyse a small set of tweets. To get all tweets, run the method `get_scrapped_tweets()` ###Code st.CollectorTweetOutput(); ###Output _____no_output_____ ###Markdown CsvTweetOutput `CsvTweetOutput` stores tweets in a csv file. It has two parameters, `file_location` and `add_header_on_start`. When `add_header_on_start` is `True`, the header is added only when the file is empty, so it is possible to keep appending tweets to the same file in subsequent tasks. ###Code st.CsvTweetOutput( file_location='my_csv_file.csv', add_header_on_start=True ); ###Output _____no_output_____ ###Markdown JsonLineFileTweetOutput `JsonLineFileTweetOutput` stores tweets in a file as JSON lines. This format avoids the problems of appending to and reading one large JSON document: with JSON lines it is possible to append new tweets quickly and to read the file line by line, without loading the whole document into memory. The class has only one property – `file_name` – the file in which tweets are stored in JSON line format. ###Code st.JsonLineFileTweetOutput( file_name='my_jl_file.jl' ); ###Output _____no_output_____ ###Markdown PrintEveryNTweetOutput `PrintEveryNTweetOutput` prints every N-th scraped tweet. This is the best way to confirm that new tweets are still being scraped. The class has only one parameter – `each_n` – the N value described above. ###Code st.PrintEveryNTweetOutput( each_n=1000 ); ###Output _____no_output_____ ###Markdown PrintFirstInRequestTweetOutput PrintFirstInRequestTweetOutput is a debug TweetOutput. It allows tracking every request and shows the first part of the response. ###Code st.PrintFirstInRequestTweetOutput(); ###Output _____no_output_____
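###Markdown The line-by-line property mentioned for `JsonLineFileTweetOutput` is easy to verify with the standard library. A minimal sketch, assuming tweets were already collected into `my_jl_file.jl` (the exact JSON schema of each line is not shown here, so the tweet is just printed as a dict): ###Code
import json

# stream the file one tweet per line; the whole document never sits in memory
with open('my_jl_file.jl') as f:
    for line in f:
        tweet = json.loads(line)  # one tweet dict at a time
        print(tweet)              # process the tweet here
        break                     # this sketch only shows the first one
###Output _____no_output_____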
04 OOPS-2/4.4 Polymorphism (Method overriding).ipynb
###Markdown Polymorphism - the ability to take multiple forms. One thing existing in many forms is nothing but polymorphism. ###Code
class Vehicle:
    def __init__(self, color, maxSpeed):
        self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed private using '__' before the member
    def getMaxSpeed(self): #get method is a public function
        return self.__maxSpeed
    def setMaxSpeed(self, maxSpeed): #set method is a public function
        self.__maxSpeed = maxSpeed
    def print(self): #Another way of accessing/printing private members --> printing within a class
        print("Color : ",self.color)
        print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
    def __init__(self, color, maxSpeed, numGears, isConvertible):
        super().__init__(color, maxSpeed) #Inheriting from the Vehicle class
        self.numGears = numGears #Passing via arguments in the Car class
        self.isConvertible = isConvertible
    def print(self):
        #self.print() #Here there is only one print() and we can use self or super in this example.
        print("NumGears :", self.numGears)
        print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
###Output NumGears : 5
IsConvertible : False
###Markdown Here the Vehicle and Car classes have a method with the same name and the same arguments ---> Method Overriding (in programming). So, c.print() first searches in the Car class: if print() is present there it is used, else the lookup goes to its parent class (the Vehicle class in the example below). When we remove print() from the Car class, control goes to the print() of the Vehicle class (the parent class). If print() is not available in the parent class either, the lookup continues up the chain of parent classes until it reaches a print(). ###Code
class Vehicle:
    def __init__(self, color, maxSpeed):
        self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed private using '__' before the member
    def getMaxSpeed(self): #get method is a public function
        return self.__maxSpeed
    def setMaxSpeed(self, maxSpeed): #set method is a public function
        self.__maxSpeed = maxSpeed
    def print(self): #Another way of accessing/printing private members --> printing within a class
        print("Color : ",self.color)
        print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
    def __init__(self, color, maxSpeed, numGears, isConvertible):
        super().__init__(color, maxSpeed) #Inheriting from the Vehicle class
        self.numGears = numGears #Passing via arguments in the Car class
        self.isConvertible = isConvertible
    # def printCar(self):
    #     #super().print() #Instead of super() you can also use self.print() because a car inherits all properties and methods from the parent class
    #     self.print() #Here there is only one print() and we can use self or super in this example.
    #     print("NumGears :", self.numGears)
    #     print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
class Vehicle:
    def __init__(self, color, maxSpeed):
        self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed private using '__' before the member
    def getMaxSpeed(self): #get method is a public function
        return self.__maxSpeed
    def setMaxSpeed(self, maxSpeed): #set method is a public function
        self.__maxSpeed = maxSpeed
    # def print(self): #Another way of accessing/printing private members --> printing within a class
    #     print("Color : ",self.color)
    #     print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
    def __init__(self, color, maxSpeed, numGears, isConvertible):
        super().__init__(color, maxSpeed) #Inheriting from the Vehicle class
        self.numGears = numGears #Passing via arguments in the Car class
        self.isConvertible = isConvertible
    # def printCar(self):
    #     #super().print() #Instead of super() you can also use self.print() because a car inherits all properties and methods from the parent class
    #     self.print() #Here there is only one print() and we can use self or super in this example.
    #     print("NumGears :", self.numGears)
    #     print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
###Output _____no_output_____
###Markdown In the above example, there is no print() defined anywhere in the class hierarchy, so calling c.print() raises an AttributeError. ###Code
class Vehicle:
    def __init__(self, color, maxSpeed):
        self.color = color
        self.__maxSpeed = maxSpeed #Making maxSpeed private using '__' before the member
    def getMaxSpeed(self): #get method is a public function
        return self.__maxSpeed
    def setMaxSpeed(self, maxSpeed): #set method is a public function
        self.__maxSpeed = maxSpeed
    def print(self): #Another way of accessing/printing private members --> printing within a class
        print("Color : ",self.color)
        print("MaxSpeed :", self.__maxSpeed)
class Car(Vehicle):
    def __init__(self, color, maxSpeed, numGears, isConvertible):
        super().__init__(color, maxSpeed) #Inheriting from the Vehicle class
        self.numGears = numGears #Passing via arguments in the Car class
        self.isConvertible = isConvertible
    def print(self):
        super().print() #Here there is only one print() and we can use self or super in this example.
        print("NumGears :", self.numGears)
        print("IsConvertible :", self.isConvertible)
c = Car("red", 35, 5, False)
c.print()
print("_____________________________")
v = Vehicle("green", 98)
v.print()
#Predict the Output:
class Vehicle:
    def __init__(self,color):
        self.color = color
    def print(self):
        print(c.color,end=" ")
class Car(Vehicle):
    def __init__(self,color,numGears):
        super().__init__(color)
        self.numGears = numGears
    def print(self):
        print(c.color,end=" ")
        print(c.numGears)
c = Car("black",5)
c.print()
#Predict the Output:
class Vehicle:
    def __init__(self,color):
        self.color = color
    def print(self):
        print(c.color,end=" ")
class Car(Vehicle):
    def __init__(self,color,numGears):
        super().__init__(color)
        self.numGears = numGears
    def print(self):
        self.print()
        print(c.numGears)
c = Car("black",5)
c.print()
###Output _____no_output_____
Week1/.ipynb_checkpoints/BjetSelection-checkpoint.ipynb
###Markdown Classification of b-quark jets in the Aleph simulated dataPython macro for selecting b-jets in Aleph Z->qqbar MC in various ways:* Initially, simply with "if"-statements making requirements on certain variables. This corresponds to selecting "boxes" in the input variable space (typically called "X"). One could also try a Fisher discriminant (linear combination of input variables), which corresponds to a plane in the X-space. But as the problem is non-linear, it is likely to be sub-optimal.* Next using Machine Learning (ML) methods. We will try both tree based and Neural Net (NN) based methods, and see how complicated (or not) it is to get a good solution, and how much better it performs compared to the "classic" selection method.In the end, this exercise is a simple start on moving into the territory of multidimensional analysis. Data:The input variables (X) are:* energy: Measured energy of the jet in GeV. Should be 45 GeV, but fluctuates.* cTheta: cos(theta), i.e. the polar angle of the jet with respect to the beam axis. The detector works best in the central region (|cTheta| small) and less well in the forward regions.* phi: The azimuth angle of the jet. As the detector is uniform in phi, this should not matter (much).* prob_b: Probability of being a b-jet from the pointing of the tracks to the vertex.* spheri: Sphericity of the event, i.e. how spherical it is.* pt2rel: The transverse momentum squared of the tracks relative to the jet axis, i.e. width of the jet.* multip: Multiplicity of the jet (in a relative measure).* bqvjet: b-quark vertex of the jet, i.e. the probability of a detached vertex.* ptlrel: Transverse momentum (in GeV) of possible lepton with respect to jet axis (about 0 if no leptons).The target variable (Y) is:* isb: 1 if it is from a b-quark and 0, if it is not.Finally, those before you (the Aleph collaboration in the mid 90'ies) produced a Neural Net based classification variable, which you can compare to (and compete with?):* nnbjet: Value of original Aleph b-jet tagging algorithm (for reference). Task:Thus, the task before you is to produce a function (ML algorithm), which given the input variables X provides an output variable estimate, Y_est, which is "closest possible" to the target variable, Y. The "closest possible" is left to the user to define in a _Loss Function_, which we will discuss further. In classification problems (such as this), the typical loss function to use is "Cross Entropy", see https://en.wikipedia.org/wiki/Cross_entropy.* Author: Troels C. Petersen (NBI)* Email: [email protected]* Date: 20th of April 2021 ###Code
from __future__ import print_function, division # Ensures Python3 printing & division standard
from matplotlib import pyplot as plt
from matplotlib import colors
from matplotlib.colors import LogNorm
import numpy as np
import csv
###Output _____no_output_____
###Markdown Possible other packages to consider:cornerplot, seaplot, sklearn.decomposition(PCA) ###Code
r = np.random
r.seed(42)
SavePlots = False
plt.close('all')
###Output _____no_output_____
###Markdown Evaluate an attempt at classification:This is made into a function, as this is called many times. It returns a "confusion matrix" and the fraction of wrong classifications. ###Code
def evaluate(bquark) :
    N = [[0,0], [0,0]] # Make a list of lists (i.e. matrix) for counting successes/failures.
for i in np.arange(len(isb)): if (bquark[i] == 0 and isb[i] == 0) : N[0][0] += 1 if (bquark[i] == 0 and isb[i] == 1) : N[0][1] += 1 if (bquark[i] == 1 and isb[i] == 0) : N[1][0] += 1 if (bquark[i] == 1 and isb[i] == 1) : N[1][1] += 1 fracWrong = float(N[0][1]+N[1][0])/float(len(isb)) return N, fracWrong ###Output _____no_output_____ ###Markdown Main program start: ###Code # Get data (with this very useful NumPy reader): data = np.genfromtxt('AlephBtag_MC_small_v2.csv', names=True) energy = data['energy'] cTheta = data['cTheta'] phi = data['phi'] prob_b = data['prob_b'] spheri = data['spheri'] pt2rel = data['pt2rel'] multip = data['multip'] bqvjet = data['bqvjet'] ptlrel = data['ptlrel'] nnbjet = data['nnbjet'] isb = data['isb'] ###Output _____no_output_____ ###Markdown Produce 1D figures:Define the histogram range and binning (important - MatPlotLib is NOT good at this): ###Code Nbins = 100 xmin = 0.0 xmax = 1.0 ###Output _____no_output_____ ###Markdown Make new lists selected based on what the jets really are (b-quark jet or light-quark jet): ###Code prob_b_bjets = [] prob_b_ljets = [] bqvjet_bjets = [] bqvjet_ljets = [] for i in np.arange(len(isb)) : if (isb[i] == 1) : prob_b_bjets.append(prob_b[i]) bqvjet_bjets.append(bqvjet[i]) else : prob_b_ljets.append(prob_b[i]) bqvjet_ljets.append(bqvjet[i]) # Produce the actual figure, here with two histograms in it: fig, ax = plt.subplots(figsize=(12, 6)) # Create just a single figure and axes (figsize is in inches!) hist_prob_b_bjets = ax.hist(prob_b_bjets, bins=Nbins, range=(xmin, xmax), histtype='step', linewidth=2, label='prob_b_bjets', color='blue') hist_prob_b_ljets = ax.hist(prob_b_ljets, bins=Nbins, range=(xmin, xmax), histtype='step', linewidth=2, label='prob_b_ljets', color='red') ax.set_xlabel("Probability of b-quark based on track impact parameters") # Label of x-axis ax.set_ylabel("Frequency / 0.01") # Label of y-axis ax.set_title("Distribution of prob_b") # Title of plot ax.legend(loc='best') # Legend. Could also be 'upper right' ax.grid(axis='y') fig.tight_layout() fig.show() if SavePlots : fig.savefig('Hist_prob_b_and_bqvjet.pdf', dpi=600) ###Output _____no_output_____ ###Markdown Produce 2D figures: ###Code # First we try a scatter plot, to see how the individual events distribute themselves: fig2, ax2 = plt.subplots(figsize=(12, 6)) scat2_prob_b_vs_bqvjet_bjets = ax2.scatter(prob_b_bjets, bqvjet_bjets, label='b-jets', color='blue') scat2_prob_b_vs_bqvjet_ljets = ax2.scatter(prob_b_ljets, bqvjet_ljets, label='l-jets', color='red') ax2.legend(loc='best') fig2.tight_layout() fig2.show() if SavePlots : fig2.savefig('Scatter_prob_b_vs_bqvjet.pdf', dpi=600) # However, as can be seen in the figure, the overlap between b-jets and light-jets is large, # and one covers much of the other in a scatter plot, which also does not show the amount of # statistics in the dense regions. 
Therefore, we try two separate 2D histograms (zoomed):
fig3, ax3 = plt.subplots(1, 2, figsize=(12, 6))
hist2_prob_b_vs_bqvjet_bjets = ax3[0].hist2d(prob_b_bjets, bqvjet_bjets, bins=[40,40], range=[[0.0, 0.4], [0.0, 0.4]])
hist2_prob_b_vs_bqvjet_ljets = ax3[1].hist2d(prob_b_ljets, bqvjet_ljets, bins=[40,40], range=[[0.0, 0.4], [0.0, 0.4]])
ax3[0].set_title("b-jets")
ax3[1].set_title("light-jets")
fig3.tight_layout()
fig3.show()
if SavePlots :
    fig3.savefig('Hist2D_prob_b_vs_bqvjet.pdf', dpi=600)
###Output _____no_output_____
###Markdown Selection: ###Code
# I give the selection cuts names, so that they only need to be changed in ONE place (also ensures consistency!):
loose_propb = 0.10
tight_propb = 0.16
loose_bqvjet = 0.12
tight_bqvjet = 0.28
# If either of the variables clearly indicates a b-quark, or if both loosely do so, call it a b-quark, otherwise not!
bquark=[]
for i in np.arange(len(prob_b)):
    if (prob_b[i] > tight_propb) : bquark.append(1)
    elif (bqvjet[i] > tight_bqvjet) : bquark.append(1)
    elif ((prob_b[i] > loose_propb) and (bqvjet[i] > loose_bqvjet)) : bquark.append(1)
    else : bquark.append(0)
###Output _____no_output_____
###Markdown Evaluate the selection: ###Code
N, fracWrong = evaluate(bquark)
print("\nRESULT OF HUMAN ATTEMPT AT A GOOD SELECTION:")
print("  First number is my estimate, second is the MC truth:")
print("  True-Negative (0,0)  = ", N[0][0])
print("  False-Negative (0,1) = ", N[0][1])
print("  False-Positive (1,0) = ", N[1][0])
print("  True-Positive (1,1)  = ", N[1][1])
print("  Fraction wrong = ( (0,1) + (1,0) ) / sum = ", fracWrong)
### Compare with NN-approach from the 1990s:
bquark=[]
for i in np.arange(len(prob_b)):
    if (nnbjet[i] > 0.82) : bquark.append(1)
    else : bquark.append(0)
N, fracWrong = evaluate(bquark)
print("\nALEPH BJET TAG:")
print("  First number is my estimate, second is the MC truth:")
print("  True-Negative (0,0)  = ", N[0][0])
print("  False-Negative (0,1) = ", N[0][1])
print("  False-Positive (1,0) = ", N[1][0])
print("  True-Positive (1,1)  = ", N[1][1])
print("  Fraction wrong = ( (0,1) + (1,0) ) / sum = ", fracWrong)
###Output _____no_output_____
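###Markdown The Task section above points to cross entropy as the typical loss function for this problem. A minimal NumPy sketch of the binary cross entropy between the truth `isb` and a probability-like estimate — here `nnbjet` is used as the estimate, clipped away from 0 and 1 to keep the logarithms finite: ###Code
p = np.clip(nnbjet, 1e-7, 1 - 1e-7)
cross_entropy = -np.mean(isb * np.log(p) + (1 - isb) * np.log(1 - p))
print("Binary cross entropy of nnbjet:", cross_entropy)
###Output _____no_output_____
###Markdown The introduction also mentions a Fisher discriminant (a linear combination of the input variables) as a step up from box cuts. A hedged sketch with scikit-learn's closely related Linear Discriminant Analysis — scikit-learn is an assumption here, since it is not imported in this notebook, and the fit assumes the MC arrays contain no NaNs: ###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_lda = np.column_stack([energy, cTheta, phi, prob_b, spheri, pt2rel, multip, bqvjet, ptlrel])
lda = LinearDiscriminantAnalysis().fit(X_lda, isb)
bquark_lda = (lda.predict(X_lda) > 0.5).astype(int)   # turn class predictions into 0/1 tags
N_lda, fracWrong_lda = evaluate(bquark_lda)
print("Fraction wrong (Fisher/LDA):", fracWrong_lda)
###Output _____no_output_____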
ChatBotTraining.ipynb
###Markdown https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html https://medium.com/cindicator/building-chatbot-weekend-of-a-data-scientist-8388d99db093 ###Code
from keras.models import Model
from keras.layers.recurrent import LSTM
from keras.layers import Dense, Input, Embedding
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import ModelCheckpoint, EarlyStopping
from collections import Counter
import nltk
import numpy as np
from sklearn.model_selection import train_test_split
np.random.seed(2018)
BATCH_SIZE = 128
NUM_EPOCHS = 100
HIDDEN_UNITS = 256
MAX_INPUT_SEQ_LENGTH = 20
MAX_TARGET_SEQ_LENGTH = 20
MAX_VOCAB_SIZE = 100
DATA_PATH = 'data/movie_lines.txt'
TENSORBOARD = 'TensorBoard/'
WEIGHT_FILE_PATH = 'data/word-weights.h5'
input_counter = Counter()
target_counter = Counter()
# read the data
with open(DATA_PATH, 'r', encoding="latin-1") as f:
    df = f.read()
rows = df.split('\n')
lines = [row.split(' +++$+++ ')[-1] for row in rows]
input_texts = []
target_texts = []
prev_words = []
for line in lines:
    next_words = [w.lower() for w in nltk.word_tokenize(line)]
    if len(next_words) > MAX_TARGET_SEQ_LENGTH:
        next_words = next_words[0:MAX_TARGET_SEQ_LENGTH]
    if len(prev_words) > 0:
        input_texts.append(prev_words)
        for w in prev_words:
            input_counter[w] += 1
        target_words = next_words[:]
        target_words.insert(0, 'START')
        target_words.append('END')
        for w in target_words:
            target_counter[w] += 1
        target_texts.append(target_words)
    prev_words = next_words
# encode the data
input_word2idx = dict()
target_word2idx = dict()
for idx, word in enumerate(input_counter.most_common(MAX_VOCAB_SIZE)):
    input_word2idx[word[0]] = idx + 2
for idx, word in enumerate(target_counter.most_common(MAX_VOCAB_SIZE)):
    target_word2idx[word[0]] = idx + 1
input_word2idx['PAD'] = 0
input_word2idx['UNK'] = 1
target_word2idx['UNK'] = 0
input_idx2word = dict([(idx, word) for word, idx in input_word2idx.items()])
target_idx2word = dict([(idx, word) for word, idx in target_word2idx.items()])
num_encoder_tokens = len(input_idx2word)
num_decoder_tokens = len(target_idx2word)
np.save('data/word-input-word2idx.npy', input_word2idx)
np.save('data/word-input-idx2word.npy', input_idx2word)
np.save('data/word-target-word2idx.npy', target_word2idx)
np.save('data/word-target-idx2word.npy', target_idx2word)
encoder_input_data = []
encoder_max_seq_length = 0
decoder_max_seq_length = 0
for input_words, target_words in zip(input_texts, target_texts):
    encoder_input_wids = []
    for w in input_words:
        w2idx = 1
        if w in input_word2idx:
            w2idx = input_word2idx[w]
        encoder_input_wids.append(w2idx)
    encoder_input_data.append(encoder_input_wids)
    encoder_max_seq_length = max(len(encoder_input_wids), encoder_max_seq_length)
    decoder_max_seq_length = max(len(target_words), decoder_max_seq_length)
context = dict()
context['num_encoder_tokens'] = num_encoder_tokens
context['num_decoder_tokens'] = num_decoder_tokens
context['encoder_max_seq_length'] = encoder_max_seq_length
context['decoder_max_seq_length'] = decoder_max_seq_length
np.save('data/word-context.npy', context)
def generate_batch(input_data, output_text_data):
    num_batches = len(input_data) // BATCH_SIZE
    while True:
        for batchIdx in range(0, num_batches):
            start = batchIdx * BATCH_SIZE
            end =
(batchIdx + 1) * BATCH_SIZE encoder_input_data_batch = pad_sequences(input_data[start:end], encoder_max_seq_length) decoder_target_data_batch = np.zeros(shape=(BATCH_SIZE, decoder_max_seq_length, num_decoder_tokens)) decoder_input_data_batch = np.zeros(shape=(BATCH_SIZE, decoder_max_seq_length, num_decoder_tokens)) for lineIdx, target_words in enumerate(output_text_data[start:end]): for idx, w in enumerate(target_words): w2idx = 0 if w in target_word2idx: w2idx = target_word2idx[w] decoder_input_data_batch[lineIdx, idx, w2idx] = 1 if idx > 0: decoder_target_data_batch[lineIdx, idx - 1, w2idx] = 1 yield [encoder_input_data_batch, decoder_input_data_batch], decoder_target_data_batch # Compiling and training encoder_inputs = Input(shape=(None,), name='encoder_inputs') encoder_embedding = Embedding(input_dim=num_encoder_tokens, output_dim=HIDDEN_UNITS, input_length=encoder_max_seq_length, name='encoder_embedding') encoder_lstm = LSTM(units=HIDDEN_UNITS, return_state=True, name='encoder_lstm') encoder_outputs, encoder_state_h, encoder_state_c = encoder_lstm(encoder_embedding(encoder_inputs)) encoder_states = [encoder_state_h, encoder_state_c] decoder_inputs = Input(shape=(None, num_decoder_tokens), name='decoder_inputs') decoder_lstm = LSTM(units=HIDDEN_UNITS, return_state=True, return_sequences=True, name='decoder_lstm') decoder_outputs, decoder_state_h, decoder_state_c = decoder_lstm(decoder_inputs, initial_state=encoder_states) decoder_dense = Dense(units=num_decoder_tokens, activation='softmax', name='decoder_dense') decoder_outputs = decoder_dense(decoder_outputs) model = Model([encoder_inputs, decoder_inputs], decoder_outputs) model.compile(loss='categorical_crossentropy', optimizer='adam') json = model.to_json() open('data/word-architecture.json', 'w').write(json) X_train, X_test, y_train, y_test = train_test_split(encoder_input_data, target_texts, test_size=0.2, random_state=42) train_gen = generate_batch(X_train, y_train) test_gen = generate_batch(X_test, y_test) train_num_batches = len(X_train) // BATCH_SIZE test_num_batches = len(X_test) // BATCH_SIZE cb = [EarlyStopping(monitor='val_loss', patience=5, verbose=1, mode='min', min_delta=0.0001), ModelCheckpoint(filepath=WEIGHT_FILE_PATH, monitor='val_loss', save_best_only=True)] model.fit_generator(generator=train_gen, steps_per_epoch=train_num_batches, epochs=NUM_EPOCHS, verbose=1, validation_data=test_gen, validation_steps=test_num_batches, callbacks=cb) model.save_weights(WEIGHT_FILE_PATH) ###Output /usr/local/lib/python3.7/site-packages/tensorflow_core/python/framework/indexed_slices.py:433: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
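###Markdown Training above uses teacher forcing: `decoder_input_data_batch` feeds the target words, and `decoder_target_data_batch` holds the same words shifted one step earlier, so the network learns to predict the next word. For completeness, a hedged sketch of how the trained layers defined above can be rewired for greedy decoding at inference time — this is the standard Keras seq2seq pattern, not part of the original notebook: ###Code
# encoder: reuse the trained embedding/LSTM to turn a question into a thought vector
encoder_model = Model(encoder_inputs, encoder_states)

# decoder: same trained LSTM/Dense, but fed its own states one step at a time
decoder_state_input_h = Input(shape=(HIDDEN_UNITS,))
decoder_state_input_c = Input(shape=(HIDDEN_UNITS,))
dec_states_in = [decoder_state_input_h, decoder_state_input_c]
dec_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=dec_states_in)
dec_outputs = decoder_dense(dec_outputs)
decoder_model = Model([decoder_inputs] + dec_states_in, [dec_outputs, state_h, state_c])

def reply(input_wids):
    # input_wids: a list of input word ids built with input_word2idx
    states = encoder_model.predict(pad_sequences([input_wids], encoder_max_seq_length))
    target = np.zeros((1, 1, num_decoder_tokens))
    target[0, 0, target_word2idx['START']] = 1      # start decoding from the START marker
    words = []
    for _ in range(decoder_max_seq_length):
        probs, h, c = decoder_model.predict([target] + states)
        idx = int(np.argmax(probs[0, -1, :]))       # greedy choice of the next word
        word = target_idx2word[idx]
        if word == 'END':
            break
        words.append(word)
        target = np.zeros((1, 1, num_decoder_tokens))
        target[0, 0, idx] = 1                       # feed the chosen word back in
        states = [h, c]
    return ' '.join(words)
###Output _____no_output_____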
Modulo 4 - Analisi per regioni/regioni/Calabria/CALABRIA - SARIMA mensile.ipynb
###Markdown CREAZIONE MODELLO SARIMA REGIONE CALABRIA ###Code import pandas as pd df = pd.read_csv('../../csv/regioni/calabria.csv') df.head() df['DATA'] = pd.to_datetime(df['DATA']) df.info() df=df.set_index('DATA') df.head() ###Output _____no_output_____ ###Markdown Creazione serie storica dei decessi totali della regione Calabria ###Code ts = df.TOTALE ts.head() from datetime import datetime from datetime import timedelta start_date = datetime(2015,1,1) end_date = datetime(2020,9,30) lim_ts = ts[start_date:end_date] #visulizzo il grafico import matplotlib.pyplot as plt plt.figure(figsize=(12,6)) plt.title('Decessi mensili regione Calabria dal 2015 a settembre 2020', size=20) plt.plot(lim_ts) for year in range(start_date.year,end_date.year+1): plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.5) ###Output _____no_output_____ ###Markdown Decomposizione ###Code from statsmodels.tsa.seasonal import seasonal_decompose decomposition = seasonal_decompose(ts, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative') ts_trend = decomposition.trend #andamento della curva ts_seasonal = decomposition.seasonal #stagionalità ts_residual = decomposition.resid #parti rimanenti plt.subplot(411) plt.plot(ts,label='original') plt.legend(loc='best') plt.subplot(412) plt.plot(ts_trend,label='trend') plt.legend(loc='best') plt.subplot(413) plt.plot(ts_seasonal,label='seasonality') plt.legend(loc='best') plt.subplot(414) plt.plot(ts_residual,label='residual') plt.legend(loc='best') plt.tight_layout() ###Output _____no_output_____ ###Markdown Test stazionarietà ###Code from statsmodels.tsa.stattools import adfuller def test_stationarity(timeseries): dftest = adfuller(timeseries, autolag='AIC') dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used']) for key,value in dftest[4].items(): dfoutput['Critical Value (%s)'%key] = value critical_value = dftest[4]['5%'] test_statistic = dftest[0] alpha = 1e-3 pvalue = dftest[1] if pvalue < alpha and test_statistic < critical_value: # null hypothesis: x is non stationary print("X is stationary") return True else: print("X is not stationary") return False test_stationarity(ts) ###Output X is not stationary ###Markdown Suddivisione in Train e Test Train: da gennaio 2015 a ottobre 2019; Test: da ottobre 2019 a dicembre 2019. 
###Code from datetime import datetime train_end = datetime(2019,10,31) test_end = datetime (2019,12,31) covid_end = datetime(2020,8,31) from dateutil.relativedelta import * tsb = ts[:test_end] decomposition = seasonal_decompose(tsb, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative') tsb_trend = decomposition.trend #andamento della curva tsb_seasonal = decomposition.seasonal #stagionalità tsb_residual = decomposition.resid #parti rimanenti tsb_diff = pd.Series(tsb_trend) d = 0 while test_stationarity(tsb_diff) is False: tsb_diff = tsb_diff.diff().dropna() d = d + 1 print(d) #TEST: dal 01-01-2015 al 31-10-2019 train = tsb[:train_end] #TRAIN: dal 01-11-2019 al 31-12-2019 test = tsb[train_end + relativedelta(months=+1): test_end] ###Output X is not stationary X is not stationary X is not stationary X is stationary 3 ###Markdown Grafici Autocorrelazione e Autocorrelazione Parziale ###Code from statsmodels.graphics.tsaplots import plot_acf, plot_pacf plot_acf(ts, lags =12) plot_pacf(ts, lags =12) plt.show() ###Output _____no_output_____ ###Markdown Creazione del modello SARIMA sul Train ###Code from statsmodels.tsa.statespace.sarimax import SARIMAX model = SARIMAX(train, order=(12,1,1)) model_fit = model.fit() print(model_fit.summary()) ###Output c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\base\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used. warnings.warn('No frequency information was' c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\base\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used. warnings.warn('No frequency information was' c:\users\monta\appdata\local\programs\python\python38\lib\site-packages\statsmodels\tsa\statespace\sarimax.py:965: UserWarning: Non-stationary starting autoregressive parameters found. Using zeros as starting parameters. 
warn('Non-stationary starting autoregressive parameters' ###Markdown Checking the stationarity of the residuals of the fitted model ###Code residuals = model_fit.resid test_stationarity(residuals) plt.figure(figsize=(12,6)) plt.title('Model fitted values vs. actual Train values', size=20) plt.plot(train.iloc[1:], color='red', label='train values') plt.plot(model_fit.fittedvalues.iloc[1:], color = 'blue', label='model values') plt.legend() plt.show() conf = model_fit.conf_int() plt.figure(figsize=(12,6)) plt.title('Confidence intervals of the model parameters', size=20) plt.plot(conf) plt.xticks(rotation=45) plt.show() ###Output _____no_output_____ ###Markdown Model prediction on the Test set ###Code # start and end of the prediction pred_start = test.index[0] pred_end = test.index[-1] #pred_start= len(train) #pred_end = len(tsb) # model prediction on the test set predictions_test = model_fit.predict(start=pred_start, end=pred_end) plt.plot(test, color='red', label='actual') plt.plot(predictions_test, label='prediction' ) plt.xticks(rotation=45) plt.legend() plt.show() print(predictions_test) # Accuracy metrics import numpy as np def forecast_accuracy(forecast, actual): mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE: mean absolute percentage error me = np.mean(forecast - actual) # ME: mean error mae = np.mean(np.abs(forecast - actual)) # MAE: mean absolute error mpe = np.mean((forecast - actual)/actual) # MPE: mean percentage error rmse = np.mean((forecast - actual)**2)**.5 # RMSE: root mean squared error corr = np.corrcoef(forecast, actual)[0,1] # corr: correlation between actual values and forecasts mins = np.amin(np.hstack([forecast[:,None], actual[:,None]]), axis=1) maxs = np.amax(np.hstack([forecast[:,None], actual[:,None]]), axis=1) minmax = 1 - np.mean(mins/maxs) # minmax: min-max error return({'mape':mape, 'me':me, 'mae': mae, 'mpe': mpe, 'rmse':rmse, 'corr':corr, 'minmax':minmax}) forecast_accuracy(predictions_test, test) import numpy as np from statsmodels.tools.eval_measures import rmse nrmse = rmse(predictions_test, test)/(np.max(test)-np.min(test)) print('NRMSE: %f'% nrmse) ###Output NRMSE: 1.833846 ###Markdown Model prediction including the year 2020 ###Code # start and end of the prediction start_prediction = ts.index[0] end_prediction = ts.index[-1] predictions_tot = model_fit.predict(start=start_prediction, end=end_prediction) plt.figure(figsize=(12,6)) plt.title('Model prediction vs. observed data - from 2015 to September 30, 2020', size=20) plt.plot(ts, color='blue', label='actual') plt.plot(predictions_tot.iloc[1:], color='red', label='predict') plt.xticks(rotation=45) plt.legend(prop={'size': 12}) plt.show() diff_predictions_tot = (ts - predictions_tot) plt.figure(figsize=(12,6)) plt.title('Difference between observed values and model estimates', size=20) plt.plot(diff_predictions_tot) plt.show() diff_predictions_tot['24-02-2020':].sum() predictions_tot.to_csv('../../csv/pred/predictions_SARIMA_calabria.csv') ###Output _____no_output_____ ###Markdown Confidence intervals of the full prediction ###Code forecast = model_fit.get_prediction(start=start_prediction, end=end_prediction) in_c = forecast.conf_int() print(forecast.predicted_mean) print(in_c) print(forecast.predicted_mean - in_c['lower TOTALE']) plt.plot(in_c) plt.show() upper = in_c['upper TOTALE'] lower = in_c['lower TOTALE'] lower.to_csv('../../csv/lower/predictions_SARIMA_calabria_lower.csv') upper.to_csv('../../csv/upper/predictions_SARIMA_calabria_upper.csv') ###Output _____no_output_____ 
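###Markdown The model above is fit with a non-seasonal `order=(12,1,1)` despite the SARIMA title. A fully seasonal specification would look like the sketch below; the orders are illustrative assumptions for a monthly series, not the result of a grid search. ###Code # Hedged sketch: SARIMA with an explicit seasonal component (s=12 for monthly data). # The (p,d,q) and (P,D,Q) values here are placeholders, not tuned for this series. seasonal_model = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)) seasonal_fit = seasonal_model.fit(disp=False) print(seasonal_fit.summary()) ###Output _____no_output_____ 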
Ch12_computational-performance/12-2mx.ipynb
###Markdown http://preview.d2l.ai/d2l-en/master/chapter_computational-performance/async-computation.html ###Code from d2l import mxnet as d2l import numpy, os, subprocess from mxnet import autograd, gluon, np, npx from mxnet.gluon import nn npx.set_np() with d2l.Benchmark('numpy'): for _ in range(10): a = numpy.random.normal(size=(1000, 1000)) b = numpy.dot(a, a) with d2l.Benchmark('mxnet.np'): for _ in range(10): a = np.random.normal(size=(1000, 1000)) b = np.dot(a, a) with d2l.Benchmark(): for _ in range(10): a = np.random.normal(size=(1000, 1000)) b = np.dot(a, a) npx.waitall() with d2l.Benchmark(): for _ in range(10): a = np.random.normal(size=(1000, 1000)) b = np.dot(a, a) npx.waitall() x = np.ones((1, 2)) y = np.ones((1, 2)) z = x * y + 2 z with d2l.Benchmark('waitall'): b = np.dot(a, a) npx.waitall() with d2l.Benchmark('wait_to_read'): b = np.dot(a, a) b.wait_to_read() with d2l.Benchmark('numpy conversion'): b = np.dot(a, a) b.asnumpy() with d2l.Benchmark('scalar conversion'): b = np.dot(a, a) b.sum().item() with d2l.Benchmark('synchronous'): for _ in range(1000): y = x + 1 y.wait_to_read() with d2l.Benchmark('asynchronous'): for _ in range(1000): y = x + 1 y.wait_to_read() def data_iter(): timer = d2l.Timer() num_batches, batch_size = 150, 1024 for i in range(num_batches): X = np.random.normal(size=(batch_size, 512)) y = np.ones((batch_size,)) yield X, y if (i + 1) % 50 == 0: print(f'batch {i + 1}, time {timer.stop():.4f} sec') net = nn.Sequential() net.add(nn.Dense(2048, activation='relu'), nn.Dense(512, activation='relu'), nn.Dense(1)) net.initialize() trainer = gluon.Trainer(net.collect_params(), 'sgd') loss = gluon.loss.L2Loss() def get_mem(): res = subprocess.check_output(['ps', 'u', '-p', str(os.getpid())]) return int(str(res).split()[15]) / 1e3 for X, y in data_iter(): break loss(y, net(X)).wait_to_read() mem = get_mem() with d2l.Benchmark('time per epoch'): for X, y in data_iter(): with autograd.record(): l = loss(y, net(X)) l.backward() trainer.step(X.shape[0]) l.wait_to_read() # Barrier before a new batch npx.waitall() print(f'increased memory: {get_mem() - mem:f} MB') mem = get_mem() with d2l.Benchmark('time per epoch'): for X, y in data_iter(): with autograd.record(): l = loss(y, net(X)) l.backward() trainer.step(X.shape[0]) npx.waitall() print(f'increased memory: {get_mem() - mem:f} MB') ###Output _____no_output_____
docker/dockerfiles/jupyter/docker/notebooks/PNDA minimal notebook.ipynb
###Markdown Minimal PNDA Jupyter notebook`%matplotlib notebook` must be set before `import matplotlib.pyplot as plt` or plotting with matplotlib will fail ###Code %matplotlib notebook import matplotlib.pyplot as plt import sys import pandas as pd import matplotlib print(u'▶ Python version ' + sys.version) print(u'▶ Pandas version ' + pd.__version__) print(u'▶ Matplotlib version ' + matplotlib.__version__) import numpy as np values = np.random.rand(100) df = pd.DataFrame(data=values, columns=['RandomValue']) df.head(10) df.plot() ###Output _____no_output_____
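###Markdown As a small extension of the demo above (not part of the original notebook), a rolling mean is an easy way to smooth the random values before plotting: ###Code # 10-sample rolling mean; min_periods=1 avoids NaNs at the start of the window df['Rolling10'] = df['RandomValue'].rolling(window=10, min_periods=1).mean() df[['RandomValue', 'Rolling10']].plot() ###Output _____no_output_____ 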
Reinforcement Learning Summer School 2019 (Lille, France)/practical_rec_systems/practical_rec_systems.ipynb
###Markdown Bandits for Recommendation - RLSS 2019 For convenience, the slides used for the presentation are available at https://www.dropbox.com/s/6px7a37qddstgtn/RLSS.pdf?dl=0 The objective of this notebook is to apply bandit algorithms to the recommendation problem using a simulated environment. Although in practice you would also use real data, the complexity of the recommendation problem and the associated algorithmic challenges can already be revealed even in this simple setting. ![RecSys](fig/recsys_scheme.png) Controlled recommendation environment In the simulated environment, a user browses a website and might click on recommendations served by a recommendation agent. The goal of the agent is to maximize the number of clicks. The simulation is going to be a little more involved than the ideal situation from the previous practical session. In particular, we are going to work on a stream of users who also generate some "organic" observations, meaning that you collect some browsing events and have to serve recommendations until the user decides to leave. Define action, observation, reward* Action -- a recommended item (e.g., in e-commerce it can be a product). Here the simulator will be set to have 100 possible products* Reward -- user interaction with the recommendation (e.g., a click)* Observation -- user activity (e.g., the list of products that the user has visited during his/her browsing session). Here we report only "organic" events, which are not the recommended items and can occur even if the recommendation was not clicked. ###Code from reco_env import RecoEnv, env_1_args from configuration import Configuration from agent import Agent, RandomAgent, random_args # you can overwrite environment arguments here RND_SEED = 1234 env_1_args['random_seed'] = RND_SEED # create environment with configuration env = RecoEnv(Configuration({ **env_1_args, })) env.reset() # random agent rand_agent = RandomAgent(Configuration({ **random_args, })) # counting steps i = 0 observation, _, _, _ = env.step(None) reward, done = 0, False while not done: # choose action given current observation action = rand_agent.act(observation) # execute action in the environment observation, reward, done, info = env.step(action['a']) print(f"Step: {i} - Action: {action} - Observation: {observation} - Reward: {reward}") i += 1 ###Output _____no_output_____ ###Markdown Simulation of user response to recommendation The user response to a recommendation is modeled as a function of (1) the affinity of the user to the recommended product, and (2) a correction due to the recommendation. $\mu(u,p,t) := f(\Lambda(u,p,t) + \epsilon(u,p,t))$, where $f$ is an increasing function, $\Lambda(u,p,t)$ is the log odds of user $u$ being interested in product $p$ at time $t$, and $\epsilon(u,p,t)$ is a zero-mean correction. Assuming a latent space for users and products, let $\omega \in \mathbb{R}^K$ be the latent representation of user $u$ of size $K$ and $\beta \in \mathbb{R}^K$ the latent representation of product $p$; then the user response to a recommendation can be modeled as $\mu(u,p,t) := \text{sigmoid}(\beta^T \omega + \mu_\epsilon)$, where $\omega_i \sim \mathcal{N}(0, \sigma^2_\omega)$, $\beta_i \sim \mathcal{N}(0, 1)$, $\mu_\epsilon \sim \mathcal{N}(0, \sigma^2_\mu)$. In advertisement, typical values for $\mu(u,p,t)$ are around $0.02$. Online learning using recommendation environment We introduce functions that train, evaluate and plot the evaluation metrics of recommendation agents. 
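###Markdown Before introducing those helpers, a quick numeric illustration of the response model above (a sketch; `K` and the noise scales are illustrative values, with $\omega$, $\beta$ and $\mu_\epsilon$ sampled as defined): ###Code import numpy as np K, sigma_omega, sigma_mu = 5, 1.0, 1.0 # illustrative latent size and scales rng = np.random.RandomState(RND_SEED) omega = rng.normal(0, sigma_omega, size=K) # user latent vector beta = rng.normal(0, 1, size=K) # product latent vector mu_eps = rng.normal(0, sigma_mu) # zero-mean correction def sigmoid(x): return 1.0 / (1.0 + np.exp(-x)) # click probability for this (user, product) pair print(f"mu(u,p) = {sigmoid(beta @ omega + mu_eps):.4f}") ###Output _____no_output_____ 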
###Code from train_eval_utils import train_eval_agents, plot_ctr help(train_eval_agents) help(plot_ctr) ###Output _____no_output_____ ###Markdown Bandits Code the bandit algorithms that you have already seen in class. In this part the simulator is configured in such a way that we are actually facing the stochastic bandit setting: all the users are the same and their preferences are not evolving. UCB algorithm [Auer 2002] Code the agent that runs the UCB algorithm for the product click-through rate (number of clicks / number of displays). * First, we provide the code for the bound based on Hoeffding's inequality$I_k = argmax_k \hat{\mu}_{k,n} + \sqrt{\frac{2 \log t}{n}}$* Then, improve it by tuning $\alpha$ $I_k = argmax_k \hat{\mu}_{k,n} + \alpha \sqrt{\frac{2 \log t}{n}}$* Finally, use the fact that a click is a Bernoulli random variable to obtain a sharper bound ###Code import numpy as np from scipy.stats import beta as beta_dist # Implement an Agent interface class AlphaUCBAgent(Agent): def __init__(self, config): super(AlphaUCBAgent, self).__init__(config) # Init with ones to avoid division by zero self.product_rewards = np.zeros(self.config.num_products, dtype=np.float32) self.product_counts = np.ones(self.config.num_products, dtype=np.float32) # alpha parameter (already tuned) self.alpha = 0.01 def train(self, observation, action, reward, done): """Train from observed data""" if reward is not None and action is not None: self.product_rewards[action] += reward self.product_counts[action] += 1 def act(self, observation): """Return an action given current observation""" t = sum(self.product_counts) # empirical mean per product plus the alpha-scaled Hoeffding exploration bonus ucb = self.product_rewards / self.product_counts + self.alpha * np.sqrt(2.0*np.log(t)/self.product_counts) action = np.argmax(ucb) return { 'a': action } # Beta bound -- one possible solution (an assumption, not the official one): # treat clicks as Bernoulli draws and use an upper quantile of the Beta # posterior (Bayes-UCB style) as the index. def ucb_bound(num_clicks, num_displays): return beta_dist.ppf(0.975, num_clicks + 1, num_displays - num_clicks + 1) class BetaUCBAgent(Agent): def __init__(self, config): super(BetaUCBAgent, self).__init__(config) self.product_rewards = np.zeros(self.config.num_products, dtype=np.float32) self.product_counts = np.ones(self.config.num_products, dtype=np.float32) self.ucb_func = np.vectorize(ucb_bound) def train(self, observation, action, reward, done): if reward is not None and action is not None: self.product_rewards[action] += reward self.product_counts[action] += 1 def act(self, observation): ucb = self.ucb_func(self.product_rewards, self.product_counts) action = np.argmax(ucb) return { 'a': action } ###Output _____no_output_____ ###Markdown Compare UCB agents' performance and running time in the stochastic bandits setting* Train and evaluate UCB with the Hoeffding bound and the exact bound* Achieve similar performance by tuning the $\alpha$ parameter* Compare the running time ###Code # number of products to recommend num_products = 10 # number of users for train and evaluation num_train_users, num_eval_users = 1000, 1000 custom_args = { 'num_products': num_products, 'random_seed': RND_SEED, } config = Configuration({ **env_1_args, **custom_args, }) alpha_ucb_agent = AlphaUCBAgent(config) beta_ucb_agent = BetaUCBAgent(config) rand_agent = RandomAgent(config) agents = [rand_agent, alpha_ucb_agent, beta_ucb_agent] stats = train_eval_agents(agents, config, num_train_users, num_eval_users) print(stats) plot_ctr(stats) ###Output _____no_output_____ ###Markdown Exp3 / Boltzmann exploration algorithm * Code the agent that runs the Exp3 / Boltzmann exploration algorithm. 
The adversarial setting is going to be described later in the course, but you can find it on Wikipedia: https://en.wikipedia.org/wiki/Multi-armed_bandit (see the Exp3 section).__Remark:__ *it is possible to obtain an exponential speedup of the sampling by storing the weights in a binary tree containing partial sums, see http://timvieira.github.io/blog/post/2016/11/21/heaps-for-incremental-computation/** Tune the temperature parameter ###Code from scipy.special import logsumexp from numpy.random import choice class Exp3Agent(Agent): def __init__(self, config): super(Exp3Agent, self).__init__(config) self.product_rewards = np.zeros(self.config.num_products, dtype=np.float32) self.product_counts = np.ones(self.config.num_products, dtype=np.float32) # softmax temperature -- one reasonable value (an assumption, to be tuned) self.eta = 0.05 def train(self, observation, action, reward, done): if reward is not None and action is not None: self.product_rewards[action] += reward self.product_counts[action] += 1 def log_softmax(self, vec): return vec - logsumexp(vec) def softmax(self, vec): probs = np.exp(self.log_softmax(vec)) probs /= probs.sum() return probs def act(self, observation): # Boltzmann exploration: p(a) proportional to exp(empirical mean reward / eta) prob = self.softmax(self.product_rewards / self.product_counts / self.eta) # renormalize in float64 so np.random.choice accepts the probabilities prob = prob.astype(np.float64) / prob.sum() # sample an action from the softmax distribution action = choice(self.config.num_products, p=prob) return { 'a': action, 'ps': prob[action] } ###Output _____no_output_____ ###Markdown Compare UCB and Exp3 agents' performance and running time in the stochastic bandits setting* Train and evaluate the UCB and Exp3 algorithms against random product recommendation* Increase the number of products to 100 and explain the change ###Code # number of products to recommend num_products = 10 # number of users for train and evaluation num_train_users, num_eval_users = 1000, 1000 custom_args = { 'num_products': num_products, 'random_seed': RND_SEED, } config = Configuration({ **env_1_args, **custom_args, }) ucb_agent = AlphaUCBAgent(config) exp3_agent = Exp3Agent(config) rand_agent = RandomAgent(config) agents = [rand_agent, ucb_agent, exp3_agent] stats = train_eval_agents(agents, config, num_train_users, num_eval_users) print(stats) plot_ctr(stats) ###Output _____no_output_____ ###Markdown Compare UCB and Exp3 agents in a non-stationary setting Now we set the simulation to have some updates in the preferences of the users, though they are initialized equally (using the random_seed given in the custom_args). This modeling of user-state change leads to a non-stationary user response to recommendation that depends on what was recommended. ###Code num_products = 100 num_train_users, num_eval_users = 1000, 1000 # adversarial setting: increase user state change during time custom_args = { 'num_products': num_products, 'random_seed': RND_SEED, 'sigma_omega': 0.3 } config = Configuration({ **env_1_args, **custom_args, }) ucb_agent = AlphaUCBAgent(config) exp3_agent = Exp3Agent(config) rand_agent = RandomAgent(config) agents = [rand_agent, ucb_agent, exp3_agent] stats = train_eval_agents(agents, config, num_train_users, num_eval_users) print(stats) plot_ctr(stats) ###Output _____no_output_____ ###Markdown Compare sample complexity $\sigma$-subgaussian distribution* UCB$R_T = \mathcal{O}(\sum_{i>1} \frac{\log T}{\Delta_i})$* Exp3$R_T = \mathcal{O}(\sum_{i>1}\frac{\log^2 (T \Delta_i^2)}{\Delta_i})$distribution-independent* UCB$R_T = \mathcal{O}(\sqrt{KT\log T})$* Exp3$R_T = \mathcal{O}(\sqrt{KT}\log K)$ Using side data: browsing events To make the algorithms more sample efficient, we will use side data to bootstrap the recommendation. 
Specifically, we will use the user browsing events (aka "organic" events). The amount of browsing data is typically much larger than the number of click events on recommendations. Popularity agent The simplest recommendation agent is based on the total number of views of each product during browsing. ###Code class PopularityAgent(Agent): def __init__(self, config): super(PopularityAgent, self).__init__(config) # Track number of times each item is viewed during browsing self.nb_views = np.ones(self.config.num_products) def train(self, observation, action, reward, done): if observation: for view in observation: # increment the view count of each organically viewed product self.nb_views[view] += 1 def act(self, observation): # probability of choosing an action is proportional to the total number of views prob = self.nb_views / self.nb_views.sum() # sample an action according to popularity action = np.random.choice(self.config.num_products, p=prob) return { 'a': action, 'ps': prob[action] } ###Output _____no_output_____ ###Markdown Contextual Bandit Improve the popularity agent by personalizing popularity to the user's interest. We represent the user's interest by the last product he/she has seen. ###Code from scipy.special import logsumexp class ContextualExp3Agent(Agent): def __init__(self, config): super(ContextualExp3Agent, self).__init__(config) self.product_rewards = np.zeros((self.config.num_products, self.config.num_products)) self.last_product_seen = None # softmax temperature parameter self.eta = 0.01 def update_lps(self, observation): """Update the last product seen based on the current observation""" if observation: self.last_product_seen = observation[-1] def train(self, observation, action, reward, done): if observation: # one possible update (an assumption): count transitions between # successively viewed products, starting from the last product seen prev = self.last_product_seen for view in observation: if prev is not None: self.product_rewards[prev, view] += 1 prev = view def log_softmax(self, vec): return vec - logsumexp(vec) def softmax(self, vec): probs = np.exp(self.log_softmax(vec)) probs /= probs.sum() return probs def act(self, observation): self.update_lps(observation) if self.last_product_seen is None: # no context yet: fall back to a uniform distribution prob = np.ones(self.config.num_products) / self.config.num_products else: # Boltzmann distribution over products, conditioned on the last product # seen (eta scales the co-view counts; its value is a tuning assumption) prob = self.softmax(self.product_rewards[self.last_product_seen] * self.eta) # sample an action from the contextual distribution action = np.random.choice(self.config.num_products, p=prob) return { 'a': action, 'ps': prob[action] } ###Output _____no_output_____ ###Markdown Compare agents that use click events to agents that use view events ###Code num_products = 100 num_train_users, num_eval_users = 1000, 1000 # increase difference among users custom_args = { 'num_products': num_products, 'random_seed': RND_SEED, 'sigma_omega_initial': 2.0, } config = Configuration({ **env_1_args, **custom_args, }) contextual_exp3_agent = ContextualExp3Agent(config) exp3_agent = Exp3Agent(config) pop_agent = PopularityAgent(config) ucb_agent = AlphaUCBAgent(config) rand_agent = RandomAgent(config) agents = [pop_agent, contextual_exp3_agent, exp3_agent, ucb_agent, rand_agent] stats = train_eval_agents(agents, config, num_train_users, num_eval_users) print(stats) plot_ctr(stats) ###Output _____no_output_____ 
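###Markdown The earlier remark about speeding up sampling can be made concrete. A binary tree of partial sums (see the linked blog post) gives O(log K) updates and draws; the plain cumulative-sum version below is the O(K) reference point (a sketch, not the agents' implementation): ###Code import numpy as np def sample_from_weights(weights, rng): """Draw index i with probability weights[i] / weights.sum(), in O(K) via cumulative sums; a partial-sum tree brings this to O(log K).""" cum = np.cumsum(weights) u = rng.random_sample() * cum[-1] return int(np.searchsorted(cum, u)) rng = np.random.RandomState(0) w = np.array([0.1, 2.0, 0.5, 1.4]) draws = [sample_from_weights(w, rng) for _ in range(10000)] print(np.bincount(draws) / 10000) # should be close to w / w.sum() ###Output _____no_output_____ 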
05_Duplicate_Questions/02_Duplicate_Questions_FastText.ipynb
###Markdown Question Pairs In the last notebook we used packed padded sequences to do a binary classification on questions. We were classifying whether questions are duplicated or not. In this notebook we are going to create a modified `FastText` that will do the same task.**Note**: The rest of the notebook will remain unchanged from the previous one. Where there's a change I will highlight it. Imports ###Code import time, os, torch, random, math from prettytable import PrettyTable import numpy as np from matplotlib import pyplot as plt import pandas as pd from torch import nn import torch.nn.functional as F torch.__version__ ###Output _____no_output_____ ###Markdown SEEDS ###Code SEED = 42 np.random.seed(SEED) random.seed(SEED) torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) torch.backends.cudnn.deterministic = True ###Output _____no_output_____ ###Markdown Device ###Code device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device ###Output _____no_output_____ ###Markdown Mounting the google drive ###Code from google.colab import drive drive.mount('/content/drive') ###Output Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True). ###Markdown Paths to data ###Code base_path = '/content/drive/MyDrive/NLP Data/duplicates-questions' train_path = 'train.csv' val_path = 'val.csv' test_path = 'test.csv' os.path.exists(base_path) ###Output _____no_output_____ ###Markdown Data Loading This is a binary classification task where we are going to predict whether questions are duplicates or not. We are going to have 2 inputs, which are two different questions that map to one label, is_duplicate (1) or is_not_duplicate (0). We are going to create the fields of our data. FastText According to the FastText paper we need to generate bigrams for each question. We are going to create a function called ``generate_bigrams()`` that will generate bigrams for both of these two input questions. We will pass this function to the Text field as the preprocessing function. What do we have? We have three `csv` files, one for each set, which makes it easy to create the dataset for this task. ###Code def generate_bigrams(x): x = [i.lower() for i in x] n_grams = set(zip(*[x[i: ] for i in range(2)])) for n_gram in n_grams: x.append(' '.join(n_gram)) return x generate_bigrams(['What', 'is', 'the', 'meaning', "of", "OCR", "in", "python"]) from torchtext.legacy import data ###Output _____no_output_____ ###Markdown Fields ###Code TEXT = data.Field( tokenize = 'spacy', tokenizer_language = 'en_core_web_sm', preprocessing = generate_bigrams, ) LABEL = data.LabelField(dtype = torch.float) fields = { "question1": ("qn1", TEXT), "question2": ("qn2", TEXT), "is_duplicate": ("label", LABEL), } ###Output _____no_output_____ ###Markdown Next we will create our dataset using our favorite class from torchtext, `TabularDataset`. We are going to load the data that is in `csv` format as follows. 
###Code train_data, val_data, test_data = data.TabularDataset.splits( base_path, train=train_path, test= test_path, validation= val_path, format = "csv", fields=fields ) print(vars(train_data.examples[0])) ###Output {'qn1': ['is', 'it', 'right', 'for', 'a', 'woman', 'to', 'date', 'someone', '2', '-', '3', 'years', 'younger', 'than', 'her', '?', 'is it', 'for a', 'date someone', 'a woman', 'woman to', 'to date', 'someone 2', '2 -', 'younger than', 'it right', 'her ?', '3 years', 'right for', '- 3', 'than her', 'years younger'], 'qn2': ['is', 'it', 'strange', 'to', 'have', 'a', 'crush', 'on', 'someone', 'say', '17', 'years', 'younger', 'than', 'me', '?', 'crush on', 'is it', 'to have', 'have a', 'me ?', 'on someone', 'younger than', 'strange to', 'than me', 'it strange', 'say 17', 'someone say', 'a crush', 'years younger', '17 years'], 'label': '0'} ###Markdown Next we will build the vocabulary. We are going to use the pretrained word vectors `glove.6B.100d`, which were trained on a corpus of about 6 billion English tokens. ###Code MAX_VOCAB_SIZE = 100_000 TEXT.build_vocab( train_data, max_size = MAX_VOCAB_SIZE, vectors = "glove.6B.100d", unk_init = torch.Tensor.normal_ ) LABEL.build_vocab(train_data) LABEL.vocab.stoi ###Output _____no_output_____ ###Markdown Creating iterators We are going to use the `BucketIterator` to create iterators for all these sets that we have. ###Code sort_key = lambda x: len(x.qn1) BATCH_SIZE = 128 train_iter, val_iter, test_iter = data.BucketIterator.splits( (train_data, val_data, test_data), device = device, batch_size = BATCH_SIZE, sort_key = sort_key, sort_within_batch=True ) ###Output _____no_output_____ ###Markdown Next we are going to create the model. We are going to have two inputs, which will be Question1 and Question2.* Each question will be passed through its own embedding layer.* The two embedded outputs will then be concatenated, average-pooled, and passed through a linear layer for predictions. ###Code class DuplicateQuestionsFastText(nn.Module): def __init__(self, vocab_size, embedding_size, output_dim, pad_index, dropout=.5 ): super(DuplicateQuestionsFastText, self).__init__() self.embedding_1 = nn.Embedding( vocab_size, embedding_size, padding_idx = pad_index ) self.embedding_2 = nn.Embedding( vocab_size, embedding_size, padding_idx = pad_index ) self.out = nn.Linear( embedding_size, out_features = output_dim ) self.dropout = nn.Dropout(dropout) def forward(self, question1, question2, ): embedded_1 = self.embedding_1(question1).permute(1, 0, 2) embedded_2 = self.embedding_2(question2).permute(1, 0, 2) embedded = self.dropout(torch.cat((embedded_1, embedded_2), dim=1)) pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1) ).squeeze(1) return self.out(pooled) ###Output _____no_output_____ ###Markdown Creating the model instance. 
###Code INPUT_DIM = len(TEXT.vocab) EMBEDDING_DIM = 100 OUTPUT_DIM = 1 PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token] duplicate_questions_model = DuplicateQuestionsFastText( INPUT_DIM, EMBEDDING_DIM, OUTPUT_DIM, pad_index = PAD_IDX ).to(device) duplicate_questions_model ###Output _____no_output_____ ###Markdown Model parameters ###Code def count_trainable_params(model): return sum(p.numel() for p in model.parameters()), sum(p.numel() for p in model.parameters() if p.requires_grad) n_params, trainable_params = count_trainable_params(duplicate_questions_model) print(f"Total number of parameters: {n_params:,}\nTotal trainable parameters: {trainable_params:,}") ###Output Total number of parameters: 20,000,501 Total trainable parameters: 20,000,501 ###Markdown Loading the pretrained vectors into the `embedding` layers.* We now have two embedding layers in the model, so we need to copy the word vectors into each embedding layer as follows: ###Code pretrained_embeddings = TEXT.vocab.vectors duplicate_questions_model.embedding_1.weight.data.copy_( pretrained_embeddings ) duplicate_questions_model.embedding_2.weight.data.copy_( pretrained_embeddings ) ###Output _____no_output_____ ###Markdown Zeroing the `<unk>` and `<pad>` tokens. These tokens are not actually necessary for the model training; that's the reason we are zeroing them. We will do this for all the embedding layers in the model. ###Code UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token] duplicate_questions_model.embedding_1.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM) duplicate_questions_model.embedding_1.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM) duplicate_questions_model.embedding_2.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM) duplicate_questions_model.embedding_2.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM) duplicate_questions_model.embedding_1.weight.data ###Output _____no_output_____ ###Markdown Loss and optimizer For the optimizer we are going to use `Adam()` with default parameters, and for the loss function we are going to use `BCEWithLogitsLoss()` since we are doing a binary classification. ###Code optimizer = torch.optim.Adam(duplicate_questions_model.parameters()) criterion = nn.BCEWithLogitsLoss().to(device) ###Output _____no_output_____ ###Markdown Accuracy function. For the accuracy we are going to create a `binary_accuracy` function that takes predicted labels and actual labels and returns the accuracy as a probability value. ###Code def binary_accuracy(y_preds, y_true): rounded_preds = torch.round(torch.sigmoid(y_preds)) correct = (rounded_preds == y_true).float() return correct.sum() / len(correct) ###Output _____no_output_____ ###Markdown Train and evaluation functions. This time around we have two input features, which are our two question fields. The model expects 2 positional args, which are:``` question1, question2``` Where are we going to get them? Well, our iterator contains all this information, so we don't have to worry much about that. Let's create the train and evaluation functions. 
###Code def train(model, iterator, optimizer, criterion): epoch_loss,epoch_acc = 0, 0 model.train() for batch in iterator: optimizer.zero_grad() qn1 = batch.qn1 qn2= batch.qn2 predictions = model(qn1, qn2).squeeze(1) loss = criterion(predictions, batch.label) acc = binary_accuracy(predictions, batch.label) loss.backward() optimizer.step() epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) def evaluate(model, iterator, criterion): epoch_loss,epoch_acc = 0, 0 model.eval() with torch.no_grad(): for batch in iterator: qn1 = batch.qn1 qn2 = batch.qn2 predictions = model(qn1, qn2).squeeze(1) loss = criterion(predictions, batch.label) acc = binary_accuracy(predictions, batch.label) epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) ###Output _____no_output_____ ###Markdown Train LoopWe are going to create some helper functions that will help us to visualize every epoch during training.Time to string ###Code def hms_string(sec_elapsed): h = int(sec_elapsed / (60 * 60)) m = int((sec_elapsed % (60 * 60)) / 60) s = sec_elapsed % 60 return "{}:{:>02}:{:>05.2f}".format(h, m, s) ###Output _____no_output_____ ###Markdown Tabulate training epoch ###Code def visualize_training(start, end, train_loss, train_accuracy, val_loss, val_accuracy, title): data = [ ["Training", f'{train_loss:.3f}', f'{train_accuracy:.3f}', f"{hms_string(end - start)}" ], ["Validation", f'{val_loss:.3f}', f'{val_accuracy:.3f}', "" ], ] table = PrettyTable(["CATEGORY", "LOSS", "ACCURACY", "ETA"]) table.align["CATEGORY"] = 'l' table.align["LOSS"] = 'r' table.align["ACCURACY"] = 'r' table.align["ETA"] = 'r' table.title = title for row in data: table.add_row(row) print(table) N_EPOCHS = 10 best_valid_loss = float('inf') for epoch in range(N_EPOCHS): start = time.time() train_loss, train_acc = train(duplicate_questions_model, train_iter, optimizer, criterion) valid_loss, valid_acc = evaluate(duplicate_questions_model, val_iter, criterion) title = f"EPOCH: {epoch+1:02}/{N_EPOCHS:02} {'saving best model...' if valid_loss < best_valid_loss else 'not saving...'}" if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(duplicate_questions_model.state_dict(), 'best-model.pt') end = time.time() visualize_training(start, end, train_loss, train_acc, valid_loss, valid_acc, title) ###Output +--------------------------------------------+ | EPOCH: 01/10 saving best model... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.530 | 0.739 | 0:00:45.04 | | Validation | 0.492 | 0.765 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 02/10 saving best model... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.466 | 0.783 | 0:00:44.70 | | Validation | 0.470 | 0.777 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 03/10 saving best model... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.435 | 0.800 | 0:00:45.06 | | Validation | 0.458 | 0.785 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 04/10 saving best model... 
| +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.411 | 0.813 | 0:00:44.78 | | Validation | 0.451 | 0.789 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 05/10 saving best model... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.393 | 0.823 | 0:00:44.83 | | Validation | 0.449 | 0.795 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 06/10 not saving... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.377 | 0.831 | 0:00:45.10 | | Validation | 0.450 | 0.799 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 07/10 not saving... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.365 | 0.837 | 0:00:44.85 | | Validation | 0.455 | 0.798 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 08/10 not saving... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.353 | 0.843 | 0:00:44.85 | | Validation | 0.457 | 0.796 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 09/10 not saving... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.344 | 0.848 | 0:00:44.45 | | Validation | 0.460 | 0.796 | | +------------+-------+----------+------------+ +--------------------------------------------+ | EPOCH: 10/10 not saving... | +------------+-------+----------+------------+ | CATEGORY | LOSS | ACCURACY | ETA | +------------+-------+----------+------------+ | Training | 0.335 | 0.852 | 0:00:44.83 | | Validation | 0.464 | 0.797 | | +------------+-------+----------+------------+ ###Markdown Evaluating the best model. ###Code duplicate_questions_model.load_state_dict(torch.load('best-model.pt')) test_loss, test_acc = evaluate(duplicate_questions_model, test_iter, criterion) print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%') ###Output Test Loss: 0.450 | Test Acc: 78.80% ###Markdown Model Inference Our prediction function will:* take a pair of questions, tokenize them, and convert them to sequences of indices.* pass the questions, converted to tensors, to the model.* apply the sigmoid to get the predicted probability. ###Code import en_core_web_sm nlp = en_core_web_sm.load() def predict_sentiment(model, q1, q2): model.eval() tokenized_q1 = [tok.text for tok in nlp.tokenizer(q1.lower())] tokenized_q2 = [tok.text for tok in nlp.tokenizer(q2.lower())] indexed_1 = [TEXT.vocab.stoi[t] for t in tokenized_q1] indexed_2 = [TEXT.vocab.stoi[t] for t in tokenized_q2] tensor_1 = torch.LongTensor(indexed_1).to(device).unsqueeze(1) tensor_2 = torch.LongTensor(indexed_2).to(device).unsqueeze(1) prediction = torch.sigmoid(model(tensor_1, tensor_2)) return prediction.item() ###Output _____no_output_____ 
###Code dataframe = pd.read_csv(os.path.join( base_path, test_path )) qns1 = dataframe.question1.values qns2 = dataframe.question2.values true_labels = dataframe.is_duplicate.values from prettytable import PrettyTable def tabulate(column_names, data, max_characters:int, title:str): table = PrettyTable(column_names) table.align[column_names[0]] = "l" table.align[column_names[1]] = "l" table.title = title table._max_width = {column_names[0] :max_characters, column_names[1] :max_characters} for row in data: table.add_row(row) print(table) for i, (q1, q2, label) in enumerate(zip(qns1, qns2, true_labels[:10])): pred = predict_sentiment(duplicate_questions_model, q1, q2) classes = ["not duplicate", "duplicate"] probability = pred if pred >=0.5 else 1 - pred table_headers =["KEY", "VALUE"] table_data = [ ["Question 1", q1], ["Question2", q2], ["PREDICTED CLASS", round(pred)], ["PREDICTED CLASS NAME", classes[round(pred)]], ["REAL CLASS", label], ["REAL CLASS NAME", classes[label]], ["CONFIDENCE OVER OTHER CLASSES", f'{ probability * 100:.2f}%'], ] title = "Duplicate Questions" tabulate(table_headers, table_data, 50, title=title) ###Output +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | Do you watch Korean dramas? | | Question2 | Is it normal to watch Korean drama if you are a | | | guy? | | PREDICTED CLASS | 0 | | PREDICTED CLASS NAME | not duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 89.91% | +-------------------------------+----------------------------------------------------+ +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | What are some good home remedies for getting rid | | | of stress bumps on the lips? | | Question2 | How do I get rid of an acidic tummy and a sore | | | mouth? Is there any home remedies? | | PREDICTED CLASS | 0 | | PREDICTED CLASS NAME | not duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 65.72% | +-------------------------------+----------------------------------------------------+ +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | “Everyone wants to go to Baghdad. Real men want to | | | go to Tehran.” What does this mean? | | Question2 | Why do you want to go back to college days? 
| | PREDICTED CLASS | 1 | | PREDICTED CLASS NAME | duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 56.61% | +-------------------------------+----------------------------------------------------+ +-------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+-----------------------------------+ | KEY | VALUE | +-------------------------------+-----------------------------------+ | Question 1 | How can I ask my wife for sex? | | Question2 | Do I have to ask my wife for sex? | | PREDICTED CLASS | 1 | | PREDICTED CLASS NAME | duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 93.09% | +-------------------------------+-----------------------------------+ +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | How do you deal with having a bad reputation in | | | college? | | Question2 | How can I deal with bad reputation in college? | | PREDICTED CLASS | 1 | | PREDICTED CLASS NAME | duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 76.68% | +-------------------------------+----------------------------------------------------+ +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | What credit card is the one that you pay the | | | least? | | Question2 | What are the consiquences of not paying the credit | | | card? | | PREDICTED CLASS | 0 | | PREDICTED CLASS NAME | not duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 67.27% | +-------------------------------+----------------------------------------------------+ +----------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+--------------------------------------+ | KEY | VALUE | +-------------------------------+--------------------------------------+ | Question 1 | Does Elon Musk have a lack of focus? | | Question2 | What is the origin of the Drama? | | PREDICTED CLASS | 1 | | PREDICTED CLASS NAME | duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 98.80% | +-------------------------------+--------------------------------------+ +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | What universities does Investors Real estate | | | recruit new grads from? What majors are they | | | looking for? | | Question2 | What universities does Renaissance Real estate | | | recruit new grads from? What majors are they | | | looking for? 
| | PREDICTED CLASS | 0 | | PREDICTED CLASS NAME | not duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 99.37% | +-------------------------------+----------------------------------------------------+ +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | Could God who is truly all powerful create a rock | | | that he himself could not lift? | | Question2 | If God is all powerful can he make a rock so heavy | | | even he cannot lift it? | | PREDICTED CLASS | 1 | | PREDICTED CLASS NAME | duplicate | | REAL CLASS | 1 | | REAL CLASS NAME | duplicate | | CONFIDENCE OVER OTHER CLASSES | 99.31% | +-------------------------------+----------------------------------------------------+ +------------------------------------------------------------------------------------+ | Duplicate Questions | +-------------------------------+----------------------------------------------------+ | KEY | VALUE | +-------------------------------+----------------------------------------------------+ | Question 1 | How do I convey my mom (single mother) that i | | | want/need to get married asap indirectly? Please | | | help | | Question2 | How do I convey my parents that they need to let | | | me take my own decisions? | | PREDICTED CLASS | 0 | | PREDICTED CLASS NAME | not duplicate | | REAL CLASS | 0 | | REAL CLASS NAME | not duplicate | | CONFIDENCE OVER OTHER CLASSES | 89.29% | +-------------------------------+----------------------------------------------------+ ###Markdown Conclusion. We have learned how to create a model that maps 2 inputs to one output using the modified FastText model. What's Next? Next we are going to try to use 1 embedding instead of the two embeddings we used in this notebook. We are also going to use the same model architecture, `FastText`. ###Code ###Output _____no_output_____ 
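###Markdown As a preview of that next step (a sketch only; the hyperparameters and training loop are assumed unchanged), sharing one embedding table between the two questions roughly halves the embedding parameters: ###Code class SharedEmbeddingFastText(nn.Module): """Variant of the model above with a single embedding table shared by both questions.""" def __init__(self, vocab_size, embedding_size, output_dim, pad_index, dropout=.5): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_size, padding_idx=pad_index) self.out = nn.Linear(embedding_size, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, question1, question2): # both questions go through the same lookup table e1 = self.embedding(question1).permute(1, 0, 2) e2 = self.embedding(question2).permute(1, 0, 2) embedded = self.dropout(torch.cat((e1, e2), dim=1)) pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1) return self.out(pooled) ###Output _____no_output_____ 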
notebooks/feature-importance-clinvar.ipynb
###Markdown Is feature order the same for clinvar and panel? ###Code import numpy as np import matplotlib.pyplot as plt import pandas as pd from sklearn.datasets import make_classification from sklearn.ensemble import ExtraTreesClassifier panel_file = '../data/interim/panel.dat' panel_df_pre = pd.read_csv(panel_file, sep='\t') panel_df = panel_df_pre[['Disease', 'gene']].rename(columns={'Disease':'panel_disease'}) clinvar_file = '../data/interim/clinvar/clinvar.eff.dbnsfp.anno.dat.limit.xls' clinvar_df_pre = pd.read_csv(clinvar_file, sep='\t') clinvar_df_pp = pd.merge(panel_df, clinvar_df_pre, on=['gene'], how='left') clinvar_df = clinvar_df_pp[clinvar_df_pp.Disease=='clinvar_single'] cols = ['ccr', 'fathmm', 'vest', 'missense_badness', 'missense_depletion', 'is_domain'] for disease in set(clinvar_df['panel_disease']): print(disease) X = clinvar_df[clinvar_df.panel_disease==disease][cols] y = clinvar_df[clinvar_df.panel_disease==disease]['y'] # Build a forest and compute the feature importances forest = ExtraTreesClassifier(n_estimators=250, random_state=0) forest.fit(X, y) importances = forest.feature_importances_ std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0) indices = np.argsort(importances)[::-1] print("Feature ranking:") for f in range(X.shape[1]): print("%s %d. %s (%f)" % (disease, f + 1, cols[indices[f]], importances[indices[f]])) plt.figure() plt.title("Feature importances " + disease) plt.bar(range(X.shape[1]), importances[indices], color="r", yerr=std[indices], align="center") plt.xticks(range(X.shape[1]), indices) plt.xlim([-1, X.shape[1]]) plt.show() !jupyter nbconvert --to=python feature-importance-clinvar.ipynb --stdout > cv3_tmp.py ###Output [NbConvertApp] Converting notebook feature-importance-clinvar.ipynb to python
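###Markdown Impurity-based importances from tree ensembles can be biased toward high-cardinality features, so a useful cross-check (a sketch, reusing `X`, `y`, `cols` and the fitted `forest` from the last loop iteration above) is scikit-learn's permutation importance: ###Code from sklearn.inspection import permutation_importance # n_repeats and random_state are illustrative choices result = permutation_importance(forest, X, y, n_repeats=10, random_state=0) for idx in result.importances_mean.argsort()[::-1]: print("%s (%f +/- %f)" % (cols[idx], result.importances_mean[idx], result.importances_std[idx])) ###Output _____no_output_____ 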
Mining OpenStackNova Repo Notebook.ipynb
###Markdown Mining Software Repositories: OpenStack Nova Project. Goal The goal of this tool and analysis is to help in capturing insights from the commits on a project repo, in this case the OpenStack Nova project repo. This will help in understanding the project as well as provide guidance to contributors and maintainers. Objectives The following questions will be answered:* Which module is the most actively modified?* How many commits occurred during the studied period?* How much churn occurred during the studied period? Churn is defined as the sum of added and removed lines by all commits. **NB**: This workflow is responsible for the pre-processing, analysis, and generation of insight from the collected data. It is assumed that the automated collection of the data via the script accessible in the same folder as this notebook has been completed. The collected data will be loaded here before the rest of the workflow executes. Required imports: ###Code # Built-in libraries import json import os # The normal data science ecosystem libraries # pandas for data wrangling import pandas as pd # Plotting modules and libraries required import matplotlib as mpl import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Required settings: ###Code # Settings: # 1. Command needed to make plots appear in the Jupyter Notebook %matplotlib inline # 2. Command needed to make plots bigger in the Jupyter Notebook plt.rcParams['figure.figsize']= (12, 10) # 3. Command needed to make 'ggplot' styled plots - a professional and yet good-looking theme. plt.style.use('ggplot') # 4. This will make the plot zoomable # mpld3.enable_notebook() ###Output _____no_output_____ ###Markdown Other utility functions for data manipulation ###Code # Utility data manipulation functions # 1. Extract path parameters from filename def get_path_parameters(dframe): filename = os.path.basename(dframe["filename"]) filetype = os.path.splitext(dframe["filename"])[1] directory = os.path.dirname(dframe["filename"]) return directory, filename, filetype ###Output _____no_output_____ ###Markdown 1. Loading the data ###Code # Open and load json file with open('data.json', encoding="utf8") as file: data = json.load(file) print("data loaded successfully") ###Output data loaded successfully ###Markdown Data normalization The collected commit data is semi-structured JSON with nested data similar to the image below. Files is a list of file objects. The loaded data will be normalized into a flat table using pandas.json_normalize. ![json_structure.png](attachment:json_structure.png) ###Code df = pd.json_normalize(data, "files", ["commit_node_id", "commit_sha", "commit_html_url", "commit_date" ]) ###Output _____no_output_____ ###Markdown 2. 
Displaying the current state of the data ###Code # The first 3 rows df.head(3) # The last 3 rows df.tail(3) # Summary of the dataframe df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1973 entries, 0 to 1972 Data columns (total 15 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 sha 1973 non-null object 1 filename 1973 non-null object 2 status 1973 non-null object 3 additions 1973 non-null int64 4 deletions 1973 non-null int64 5 changes 1973 non-null int64 6 blob_url 1973 non-null object 7 raw_url 1973 non-null object 8 contents_url 1973 non-null object 9 patch 1965 non-null object 10 previous_filename 8 non-null object 11 commit_node_id 1973 non-null object 12 commit_sha 1973 non-null object 13 commit_html_url 1973 non-null object 14 commit_date 1973 non-null object dtypes: int64(3), object(12) memory usage: 231.3+ KB ###Markdown 3. Verify data ###Code # Let us manually examine at least one commit and see if the present rows are correct. # We use the most recent commit as at the development time. # Please note that this commit will not be among the collected commits 6 months from today, February 17th, 2022 commit = '3a14c1a4277a9f44b67e080138b28b680e5e6824' df[df["commit_sha"] == commit] ###Output _____no_output_____ ###Markdown ![verified_commit.png](attachment:verified_commit.png) 4. Data cleaning ###Code # Removing columns not needed for the analysis columns = ['previous_filename', 'patch', 'contents_url', 'raw_url', 'commit_node_id'] df.drop(columns, inplace=True, axis=1) # Generating and adding extra columns df[["directory", "file_name", "file_type"]] = df.apply(lambda x: get_path_parameters(x), axis=1, result_type="expand") # Delete the previous filename column as it is no longer required df.drop("filename", inplace=True, axis=1) # Rename columns df.rename(columns={"sha": "file_sha", "status": "file_status", "additions":"no_of_additions", "deletions": "no_of_deletions"}, inplace=True) # Optimising the data frame by correcting the data types. # This will also make more operations possible on the data frame df = df.astype({'file_sha': 'str', 'file_status': 'category', 'no_of_additions':'int', 'no_of_deletions':'int', 'changes':'int', 'blob_url':'str', 'commit_sha':'str', 'commit_html_url':'str', 'directory':'str', 'file_name':'str', 'file_type':'category'}) df['commit_date'] = pd.to_datetime(df['commit_date'], infer_datetime_format=True) df.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 1973 entries, 0 to 1972 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 file_sha 1973 non-null object 1 file_status 1973 non-null category 2 no_of_additions 1973 non-null int32 3 no_of_deletions 1973 non-null int32 4 changes 1973 non-null int32 5 blob_url 1973 non-null object 6 commit_sha 1973 non-null object 7 commit_html_url 1973 non-null object 8 commit_date 1973 non-null datetime64[ns, UTC] 9 directory 1973 non-null object 10 file_name 1973 non-null object 11 file_type 1973 non-null category dtypes: category(2), datetime64[ns, UTC](1), int32(3), object(6) memory usage: 135.8+ KB ###Markdown A. Basic Analysis and Visualization 1. Total number of commits that occurred during the studied period. ###Code # value_counts returns a series object counting all unique values, # the 1st value being the most frequently occurring, i.e. the commit with the highest no. of file changes. df["commit_sha"].value_counts() print("The total no. 
of processed commits is: {commits_total}".format(commits_total = len(df["commit_sha"].value_counts()))) ###Output The total no. of processed commits is: 467 ###Markdown 2. The 12 most modified files ###Code df["file_name"].value_counts().head(12) df["file_name"].value_counts().head(12).sort_values().plot.barh(figsize=(10, 9)); plt.axhline(0, color='k'); plt.title('The 12 most modified files'); plt.xlabel('Total number of changes'); plt.ylabel('File Names') ###Output _____no_output_____ ###Markdown 3. The 12 most modified directories ###Code # the term directory is used in place of module df["directory"].value_counts().head(12) df["directory"].value_counts().head(12).sort_values().plot.pie(autopct='%1.1f%%', figsize=(20,8), shadow=True, startangle=90, ylabel="");plt.title('The 12 most modified modules') ###Output _____no_output_____ ###Markdown 4. The most modified file types ###Code df["file_type"].value_counts() df["file_type"].value_counts().head().sort_values().plot.bar(figsize=(10, 8)); plt.axhline(0, color='k'); plt.title('The most modified file types'); plt.xlabel('File extension'); plt.ylabel('Total number of files') ###Output _____no_output_____ ###Markdown 5. Churn ###Code # sum of all changes across all directories that occurred during the period df["changes"].sum() ###Output _____no_output_____ ###Markdown B. Exploring activities at module level using aggregation operations 1. The total number of file modifications by directory, i.e. the no. of rows per directory. A row in df records a file change & the commit responsible ###Code # the term directory is used in place of module # the same files may be modified in a particular directory by different commits # we can't have rows where a particular commit modifies the same file more than once # Reveal the total no. of changes in each directory, i.e. the no. of times the directory was modified. grp1 = df.groupby(['directory']).size().sort_values(ascending=False).head(12) grp1 ###Output _____no_output_____ ###Markdown 2. The total number of commits per directory ###Code # split data by column <directory> # pass column <commit_sha> to indicate we want the number of unique values for that column # apply .nunique() to count the number of unique values in that column grp2 = df.groupby('directory')['commit_sha'].nunique().sort_values(ascending=False).head(12) grp2 grp2.sort_values().plot.barh(figsize=(10, 8)); plt.axhline(0, color='k'); plt.title('Total number of commits by directory'); plt.xlabel('Number of commits'); plt.ylabel('Directories') ###Output _____no_output_____ ###Markdown 3. The number of files by directory ###Code # The number of unique files per directory group_by = df.groupby('directory')['file_name'].nunique().sort_values(ascending=False).head(12) group_by ###Output _____no_output_____ ###Markdown 4. The churn by directory ###Code df.groupby('directory').sum().sort_values(by='changes', ascending=False).head(12) ###Output _____no_output_____ ###Markdown 5. The churn by file name ###Code df.groupby('file_name').sum().sort_values(by='changes', ascending=False).head(12) ###Output _____no_output_____ ###Markdown 6. The churn by file type ###Code df.groupby('file_type').sum().sort_values(by='changes', ascending=False).head(12) ###Output _____no_output_____ 
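###Markdown A natural follow-up to the churn totals above (a sketch using the cleaned frame): resampling by week shows how the churn was distributed over the studied period. ###Code # Weekly churn: sum of line changes grouped by commit week weekly_churn = df.resample('W', on='commit_date')['changes'].sum() weekly_churn.plot.bar(figsize=(12, 6)); plt.title('Weekly churn during the studied period'); plt.xlabel('Week'); plt.ylabel('Churn (lines changed)') ###Output _____no_output_____ 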
chapter_02/l01_fbprophet_hs300.ipynb
###Markdown Load the HS300 (CSI 300) index quotes. At this point in the tutorial we have not yet deployed QUANTAXIS, so there is no way to fetch the market data through its API directly. In this teaching example we can only substitute a pre-saved market-data file. The actual path of the data file needs to match your deployment directory, so please edit the path string below yourself. ###Code # If this example data file cannot be loaded, download the data file to your user # directory yourself and change the user name in the storage path below market_df = pd.read_pickle(u'/home/wangdong/Downloads/996Quant/chapter_02/kline_399300_60min_21-01-29_15_00.pickle') # daily bars #market_df = pd.read_pickle(u'C:\\Users\\azai\\OneDrive\\Documents\\kline_399300_day_21-01-29_00_00.pickle') # Handle both daily and hourly bars. The double-indexed market data structure comes from QUANTAXIS: QADataStruct if('type' in market_df.columns): market_df['date'] = pd.to_datetime(market_df.index.get_level_values(level=0)) ###Output _____no_output_____ ###Markdown Prophet requires the two columns to be named 'ds' and 'y', where 'ds' is the timestamp and 'y' is the value of the time series, so the columns of the pd.DataFrame usually need to be renamed. If the original two columns were named 'timestamp' and 'value', you would simply write: ###Code df = market_df.reset_index().rename(columns={'date':'ds', 'close':'y'}) df['y'] = np.log(df['y']) # build the model and fit the data model = Prophet() model.fit(df); # forecast the next `periods` data points future = model.make_future_dataframe(periods=22) forecast = model.predict(future) plt.rcParams['font.sans-serif'] = ['Microsoft YaHei'] plt.rcParams['figure.figsize']=(20,10) plt.style.use('ggplot') figure = model.plot(forecast) for changepoint in model.changepoints: plt.axvline(changepoint,ls='--', lw=1) two_years = market_df.copy() code = market_df.index.get_level_values(level=1)[0] forecast['code'] = code forecast = forecast.rename(columns={'ds':'date'}).set_index(['date', 'code']) two_years = two_years.reindex(columns=[*two_years.columns, *['yhat', 'yhat_upper', 'yhat_lower']]) two_years.loc[:, ['yhat', 'yhat_upper', 'yhat_lower']] = forecast.loc[two_years.index, ['yhat', 'yhat_upper', 'yhat_lower']] two_years['yhat']=np.exp(two_years.yhat) two_years['yhat_upper']=np.exp(two_years.yhat_upper) two_years['yhat_lower']=np.exp(two_years.yhat_lower) two_years[['close', 'yhat']].plot() fig, ax1 = plt.subplots() each_day = market_df.index.get_level_values(level=0) ax1.plot(each_day, two_years.close) ax1.plot(each_day, two_years.yhat) ax1.plot(each_day, two_years.yhat_upper, color='black', linestyle=':', alpha=0.5) ax1.plot(each_day, two_years.yhat_lower, color='black', linestyle=':', alpha=0.5) ax1.set_title(u'HS300 actual price (orange) and forecast confidence band (black dotted lines)') ax1.set_ylabel('Price') ax1.set_xlabel('Date') print(each_day[-1]) model.changepoints ###Output _____no_output_____ 
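###Markdown To quantify the forecast error beyond the visual comparison above, fbprophet ships a rolling cross-validation utility (a sketch; the window and horizon values are illustrative assumptions, not tuned to this dataset): ###Code from fbprophet.diagnostics import cross_validation, performance_metrics # Rolling-origin evaluation: fit on an initial window, then forecast `horizon` # ahead every `period`. The values below are illustrative. df_cv = cross_validation(model, initial='365 days', period='90 days', horizon='30 days') print(performance_metrics(df_cv).head()) ###Output _____no_output_____ 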
IBM_Week_3_Part_1.ipynb
###Markdown IBM Week 3 Capstone Project The code below scrapes the provided Wikipedia page for the raw data and cleans it. The data includes the postal codes, boroughs, and neighborhoods in Toronto. ###Code import pandas as pd raw_data = pd.read_html('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M', header=0) data = pd.DataFrame(raw_data[0]).dropna(subset=['Neighborhood']).reset_index(drop=True) print(data.head()) ###Output Postal Code Borough Neighborhood 0 M3A North York Parkwoods 1 M4A North York Victoria Village 2 M5A Downtown Toronto Regent Park, Harbourfront 3 M6A North York Lawrence Manor, Lawrence Heights 4 M7A Downtown Toronto Queen's Park, Ontario Provincial Government ###Markdown The code below prints the shape of the data obtained from Wikipedia. ###Code print(data.shape) ###Output (103, 3)
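###Markdown Since each postal code should appear on exactly one row after cleaning, a quick sanity check is worthwhile before using the table downstream. A small sketch under that assumption (the column names are those printed above):
###Code
# every postal code should be unique; if not, neighborhoods would still need grouping
assert data['Postal Code'].is_unique
print(data['Borough'].nunique(), "boroughs,", data['Neighborhood'].nunique(), "neighborhood rows")
###Output _____no_output_____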
Day_6/lab6/lab6-drive-ham-rabi-ramsey.ipynb
###Markdown by nick brønn Treat the transmon as a qubit for simplicity Then we can describe the dynamics of the qubit with the **Pauli Matrices**:$$\sigma^x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad\sigma^y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \qquad\sigma^z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad$$ They obey the commutator relations$$ [\sigma^x, \sigma^y] = 2i\sigma^z \qquad [\sigma^y, \sigma^z] = 2i\sigma^x \qquad [\sigma^z, \sigma^x] = 2i\sigma^y $$ On its own, the qubit Hamiltonian is$$ \hat{H}_q = -\frac{1}{2} \hbar \omega_q \sigma^z $$ Ground state of the qubit ($|0\rangle$): points in the $+\hat{z}$ direction of the Bloch sphere Excited state of the qubit ($|1\rangle$): points in the $-\hat{z}$ direction of the Bloch sphere ###Code from qiskit.quantum_info import Statevector from qiskit.visualization import plot_bloch_multivector excited = Statevector.from_int(1, 2) plot_bloch_multivector(excited.data) ###Output _____no_output_____ ###Markdown The Pauli matrices also let us define raising and lowering operators$$ \sigma^+ = \frac{1}{2}( \sigma^x - i\sigma^y) \qquad {\rm and} \qquad \sigma^- = \frac{1}{2}( \sigma^x + i\sigma^y)$$ They raise and lower qubit states$$ \sigma^+ |0\rangle = |1\rangle \qquad \sigma^+ |1\rangle = 0\qquad {\rm and} \qquad \sigma^-|1\rangle = |0\rangle \qquad \sigma^-|0\rangle = 0$$ Electric Dipole Interaction The **qubit** behaves as an electric dipole$$\vec{d} = \vec{d}_0 \sigma^+ + \vec{d}_0^* \sigma^-$$ The **drive** behaves as an electric field$$\vec{E} = \vec{E}_0 e^{-i\omega_d t} + \vec{E}_0^* e^{i\omega_d t}$$ The drive Hamiltonian is then$$ \hat{H}_d = -\vec{d} \cdot \vec{E} $$ And now, some math...$$\hat{H}_d = -\left(\vec{d}_0 \sigma^+ + \vec{d}_0^* \sigma^-\right) \cdot \left(\vec{E}_0 e^{-i\omega_d t} + \vec{E}_0^* e^{i\omega_d t}\right) \\= -\left(\vec{d}_0 \cdot \vec{E}_0 e^{-i\omega_d t} + \vec{d}_0 \cdot \vec{E}_0^* e^{i\omega_d t}\right)\sigma^+-\left(\vec{d}_0^* \cdot \vec{E}_0 e^{-i\omega_d t} + \vec{d}_0^* \cdot \vec{E}_0^* e^{i\omega_d t}\right)\sigma^- $$ $$\equiv -\hbar\left(\Omega e^{-i\omega_d t} + \tilde{\Omega} e^{i\omega_d t}\right)\sigma^+-\hbar\left(\tilde{\Omega}^* e^{-i\omega_d t} + \Omega^* e^{i\omega_d t}\right)\sigma^-$$by setting $\Omega \equiv \vec{d}_0 \cdot \vec{E}_0/\hbar$ and $\tilde{\Omega} \equiv \vec{d}_0 \cdot \vec{E}_0^*/\hbar $ Rotating Wave Approximation Move the Hamiltonian to the interaction picture $$\hat{H}_{d,I} = U\hat{H}_dU^\dagger \qquad \qquad ^*{\rm omitting\, terms\, that\, cancel} $$ with $$U = e^{i\hat{H}_q t/\hbar} = e^{-i\omega_q t \sigma^z/2} = I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)$$ Calculate the operator terms $$\left(I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)\right) \sigma^+ \left(I\cos(\omega_q t/2) + i\sigma^z\sin(\omega_q t/2)\right) = e^{i\omega_q t} \sigma^+ \\\left(I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)\right) \sigma^- \left(I\cos(\omega_q t/2) + i\sigma^z\sin(\omega_q t/2)\right) = e^{-i\omega_q t} \sigma^-$$ The transformed Hamiltonian is$$\hat{H}_{d,I} = -\hbar\left(\Omega e^{-i(\omega_q-\omega_d) t} + \tilde{\Omega} e^{i(\omega_q+\omega_d) t}\right) \sigma^+ -\hbar\left(\tilde{\Omega}^* e^{-i(\omega_q+\omega_d) t} + \Omega^* e^{i(\omega_q-\omega_d) t}\right) \sigma^-$$ Rotating Wave Approximation$$\hat{H}_{d,I} = -\hbar\left(\Omega e^{-i(\omega_q-\omega_d) t} + \tilde{\Omega} e^{i(\omega_q+\omega_d) t}\right) \sigma^+ -\hbar\left(\tilde{\Omega}^* e^{-i(\omega_q+\omega_d) t} + \Omega^* e^{i(\omega_q-\omega_d) 
t}\right) \sigma^-$$ $\omega_q-\omega_d$: slow-rotating terms contribute most of the interaction $\omega_q+\omega_d$: fast-rotating terms tend to average out Define $\Delta_q = \omega_q - \omega_d$ and make the RWA$$\hat{H}_{d,I}^{\rm (RWA)} =-\hbar\Omega e^{-i\Delta_q t} \sigma^+ -\hbar \Omega^* e^{i\Delta_q t} \sigma^-$$ Transform the Hamiltonian back to the Schrödinger picture$$\hat{H}_{d}^{\rm (RWA)} = -\hbar\Omega e^{-i\omega_d t} \sigma^+ -\hbar\Omega^* e^{i\omega_d t} \sigma^-$$ And the total qubit Hamiltonian is$$ \hat{H}_{\rm tot} = \hat{H}_q + \hat{H}_d^{\rm (RWA)} = -\frac{1}{2} \hbar\omega_q \sigma^z -\hbar\Omega e^{-i\omega_d t} \sigma^+ -\hbar\Omega^* e^{i\omega_d t} \sigma^- $$ Qubit Drive Example. Set $\Omega^* \equiv \Omega$ and transform into the frame of the drive $$\hat{H}_{\rm eff} = U_d \hat{H}_{\rm tot} U_d^\dagger - i\hbar U_d \dot{U_d}^\dagger$$ with $U_d = \exp\{-i\omega_d t\sigma^z/2\}$ In a similar calculation to earlier, the effective Hamiltonian is$$\hat{H}_{\rm eff} = -\frac{1}{2}\hbar \Delta_q \sigma^z -\hbar\Omega \sigma^x$$ What this means: On-resonance ($\Delta_q = 0$): the drive rotates the qubit around the $\hat{x}$-axis Off-resonance ($\Delta_q \ne 0$): an additional $\hat{z}$-rotation on top of the drive Qiskit Pulse: On-resonant Drive (Rabi)$$\hat{H}_{\rm eff} = -\hbar\Omega \sigma^x$$ Import Necessary Libraries ###Code from qiskit.tools.jupyter import * from qiskit import IBMQ IBMQ.load_account() provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main') backend = provider.get_backend('ibmq_armonk') ###Output ibmqfactory.load_account:WARNING:2020-07-24 16:35:36,355: Credentials are already in use. The existing account in the session will be replaced. ###Markdown Verify Backend is Pulse-enabled ###Code backend_config = backend.configuration() assert backend_config.open_pulse, "Backend doesn't support Pulse" ###Output _____no_output_____ ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) Take care of some other things ###Code dt = backend_config.dt print(f"Sampling time: {dt*1e9} ns") backend_defaults = backend.defaults() ###Output Sampling time: 0.2222222222222222 ns ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) ###Code import numpy as np # unit conversion factors -> all backend properties returned in SI (Hz, sec, etc) GHz = 1.0e9 # Gigahertz MHz = 1.0e6 # Megahertz us = 1.0e-6 # Microseconds ns = 1.0e-9 # Nanoseconds # We will find the qubit frequency for the following qubit. qubit = 0 # The Rabi sweep will be at the given qubit frequency. center_frequency_Hz = backend_defaults.qubit_freq_est[qubit] # The default frequency is given in Hz # warning: this will change in a future release print(f"Qubit {qubit} has an estimated frequency of {center_frequency_Hz / GHz} GHz.") ###Output Qubit 0 has an estimated frequency of 4.974452425330183 GHz. ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) ###Code from qiskit import pulse, assemble # This is where we access all of our Pulse features!
from qiskit.pulse import Play from qiskit.pulse import pulse_lib # This Pulse module helps us build sampled pulses for common pulse shapes ### Collect the necessary channels drive_chan = pulse.DriveChannel(qubit) meas_chan = pulse.MeasureChannel(qubit) acq_chan = pulse.AcquireChannel(qubit) inst_sched_map = backend_defaults.instruction_schedule_map measure = inst_sched_map.get('measure', qubits=[0]) ###Output _____no_output_____ ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) ###Code # Rabi experiment parameters # Drive amplitude values to iterate over: 50 amplitudes evenly spaced from 0 to 0.75 num_rabi_points = 50 drive_amp_min = 0 drive_amp_max = 0.75 drive_amps = np.linspace(drive_amp_min, drive_amp_max, num_rabi_points) # drive waveform durations must be a multiple of 16 samples drive_sigma = 80 # in dt drive_samples = 8*drive_sigma # in dt ###Output _____no_output_____ ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) ###Code # Build the Rabi experiments: # A drive pulse at the qubit frequency, followed by a measurement, # where we vary the drive amplitude each time. rabi_schedules = [] for drive_amp in drive_amps: rabi_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_amp, sigma=drive_sigma, name=f"Rabi drive amplitude = {drive_amp}") this_schedule = pulse.Schedule(name=f"Rabi drive amplitude = {drive_amp}") this_schedule += Play(rabi_pulse, drive_chan) # The left shift `<<` is special syntax meaning to shift the start time of the schedule by some duration this_schedule += measure << this_schedule.duration rabi_schedules.append(this_schedule) rabi_schedules[-1].draw(label=True, scaling=1.0) ###Output _____no_output_____ ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) ###Code # assemble the schedules into a Qobj num_shots_per_point = 1024 rabi_experiment_program = assemble(rabi_schedules, backend=backend, meas_level=1, meas_return='avg', shots=num_shots_per_point, schedule_los=[{drive_chan: center_frequency_Hz}] * num_rabi_points) # RUN the job on a real device #job = backend.run(rabi_experiment_program) #print(job.job_id()) #from qiskit.tools.monitor import job_monitor #job_monitor(job) # OR retrieve result from previous run job = backend.retrieve_job("5ef3bf17dc3044001186c011") ###Output _____no_output_____ ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) ###Code rabi_results = job.result() import matplotlib.pyplot as plt plt.style.use('dark_background') scale_factor = 1e-14 # center data around 0 def baseline_remove(values): return np.array(values) - np.mean(values) ###Output _____no_output_____ ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) ###Code rabi_values = [] for i in range(num_rabi_points): # Get the results for `qubit` from the ith experiment rabi_values.append(rabi_results.get_memory(i)[qubit]*scale_factor) rabi_values = np.real(baseline_remove(rabi_values)) plt.xlabel("Drive amp [a.u.]") plt.ylabel("Measured signal [a.u.]") plt.scatter(drive_amps, rabi_values, color='white') # plot real part of Rabi values plt.show() ###Output _____no_output_____ ###Markdown Qiskit Pulse: On-resonant Drive (Rabi) Define Rabi curve-fitting function ###Code from scipy.optimize import curve_fit def fit_function(x_values, y_values, function, init_params): fitparams, conv = curve_fit(function, x_values, y_values, init_params) y_fit = function(x_values, *fitparams) return fitparams, y_fit fit_params, y_fit = fit_function(drive_amps, rabi_values, lambda x, A, B, drive_period, phi: (A*np.cos(2*np.pi*x/drive_period - phi) + B), [10, 0.1, 0.6, 0]) ###Output _____no_output_____ ###Markdown
Qiskit Pulse: On-resonant Drive (Rabi) ###Code plt.scatter(drive_amps, rabi_values, color='white') plt.plot(drive_amps, y_fit, color='red') drive_period = fit_params[2] # get period of rabi oscillation plt.axvline(drive_period/2, color='red', linestyle='--') plt.axvline(drive_period, color='red', linestyle='--') plt.annotate("", xy=(drive_period, 0), xytext=(drive_period/2,0), arrowprops=dict(arrowstyle="<->", color='red')) plt.xlabel("Drive amp [a.u.]", fontsize=15) plt.ylabel("Measured signal [a.u.]", fontsize=15) plt.show() ###Output _____no_output_____ ###Markdown Save $\pi/2$ pulse for later ###Code pi_amp = abs(drive_period / 2) print(f"Pi Amplitude = {pi_amp}") # Drive parameters # The drive amplitude for pi/2 is simply half the amplitude of the pi pulse drive_amp = pi_amp / 2 # x_90 is a concise way to say pi_over_2; i.e., an X rotation of 90 degrees x90_pulse = pulse_lib.gaussian(duration=drive_samples, amp=drive_amp, sigma=drive_sigma, name='x90_pulse') ###Output _____no_output_____ ###Markdown Qiskit Pulse: Off-resonant Drive (Ramsey)$$\hat{H}_{\rm eff} = -\frac{1}{2}\hbar \Delta_q \sigma^z -\hbar\Omega \sigma^x$$ ###Code # Ramsey experiment parameters time_max_us = 1.8 time_step_us = 0.025 times_us = np.arange(0.1, time_max_us, time_step_us) # Convert to units of dt delay_times_dt = times_us * us / dt # create schedules for Ramsey experiment ramsey_schedules = [] for delay in delay_times_dt: this_schedule = pulse.Schedule(name=f"Ramsey delay = {delay * dt / us} us") this_schedule += Play(x90_pulse, drive_chan) this_schedule += Play(x90_pulse, drive_chan) << this_schedule.duration + int(delay) this_schedule += measure << this_schedule.duration ramsey_schedules.append(this_schedule) ramsey_schedules[-1].draw(label=True, scaling=1.0) ###Output _____no_output_____ ###Markdown Qiskit Pulse: Off-resonant Drive (Ramsey) ###Code # Execution settings num_shots = 256 detuning_MHz = 2 ramsey_frequency = round(center_frequency_Hz + detuning_MHz * MHz, 6) # need ramsey freq in Hz ramsey_program = assemble(ramsey_schedules, backend=backend, meas_level=1, meas_return='avg', shots=num_shots, schedule_los=[{drive_chan: ramsey_frequency}]*len(ramsey_schedules) ) # RUN the job on a real device #job = backend.run(ramsey_experiment_program) #print(job.job_id()) #from qiskit.tools.monitor import job_monitor #job_monitor(job) # OR retrieve job from previous run job = backend.retrieve_job('5ef3ed3a84b1b70012374317') ###Output _____no_output_____ ###Markdown Qiskit Pulse: Off-resonant Drive (Ramsey) Ramsey curve-fitting function ###Code ramsey_results = job.result() ramsey_values = [] for i in range(len(times_us)): ramsey_values.append(ramsey_results.get_memory(i)[qubit]*scale_factor) fit_params, y_fit = fit_function(times_us, np.real(ramsey_values), lambda x, A, del_f_MHz, C, B: ( A * np.cos(2*np.pi*del_f_MHz*x - C) + B ), [5, 1./0.4, 0, 0.25] ) ###Output _____no_output_____ ###Markdown Qiskit Pulse: Off-resonant Drive (Ramsey) ###Code # Off-resonance component _, del_f_MHz, _, _, = fit_params # freq is MHz since times in us plt.scatter(times_us, np.real(ramsey_values), color='white') plt.plot(times_us, y_fit, color='red', label=f"df = {del_f_MHz:.2f} MHz") plt.xlim(0, np.max(times_us)) plt.xlabel('Delay between X90 pulses [$\mu$s]', fontsize=15) plt.ylabel('Measured Signal [a.u.]', fontsize=15) plt.title('Ramsey Experiment', fontsize=15) plt.legend() plt.show() ###Output _____no_output_____
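###Markdown A natural follow-up is to fold the fitted oscillation frequency back into the qubit frequency estimate: the Ramsey fringes oscillate at the true drive–qubit detuning, so the difference between the applied 2 MHz offset and the fitted `del_f_MHz` measures the error in `center_frequency_Hz`. A minimal sketch, reusing the fit above; the sign of the correction is an assumption that depends on whether the qubit sits above or below the drive:
###Code
# updated estimate: drive frequency minus the measured detuning
precise_qubit_freq_Hz = ramsey_frequency - del_f_MHz * MHz
print(f"Updated qubit frequency estimate: {precise_qubit_freq_Hz / GHz:.6f} GHz")
###Output _____no_output_____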
homeworks/08/Pandas_exercises_part2.ipynb
###Markdown Problem 1 Import the pandas library as pd ###Code import pandas as pd ###Output _____no_output_____ ###Markdown Import the dataset gender.txt, assign it to a variable gender and use the user_id column as the index ###Code gender = pd.read_csv('data/gender.txt', sep = '|', index_col = 'user_id') gender ###Output _____no_output_____ ###Markdown Print the first 20 entries ###Code gender.head(20) ###Output _____no_output_____ ###Markdown Print the last 10 entries ###Code gender.tail(10) ###Output _____no_output_____ ###Markdown What is the number of rows in the dataset? ###Code gender.shape[0] ###Output _____no_output_____ ###Markdown What is the number of columns in the dataset? ###Code gender.shape[1] ###Output _____no_output_____ ###Markdown Print the names of all the columns ###Code gender.columns.to_list() ###Output _____no_output_____ ###Markdown How is the dataset indexed? ###Code gender.index # gender.index.to_list() ###Output _____no_output_____ ###Markdown What is the data type of each column? ###Code gender.dtypes ###Output _____no_output_____ ###Markdown Print only the occupation column ###Code gender['occupation'] ###Output _____no_output_____ ###Markdown How many different occupations are there in this dataset? ###Code gender['occupation'].nunique() ###Output _____no_output_____ ###Markdown What is the most frequent occupation? ###Code gender['occupation'].mode() ###Output _____no_output_____ ###Markdown Problem 2 Import the pandas library as pd ###Code import pandas as pd ###Output _____no_output_____ ###Markdown Import the dataset drinks.csv, assign it to a variable called drinks ###Code drinks = pd.read_csv('data/drinks.csv') drinks ###Output _____no_output_____ ###Markdown Which continent drinks more beer on average? ###Code drinks.groupby('continent')['beer_servings'].mean().idxmax() ###Output _____no_output_____ ###Markdown For each continent print the statistics for wine consumption (mean, min, max, std, quartiles) ###Code drinks.groupby('continent')['wine_servings'].describe() ###Output _____no_output_____ ###Markdown Print the mean alcohol consumption per continent for every column ###Code drinks.groupby('continent').mean() ###Output _____no_output_____ ###Markdown Print the median alcohol consumption per continent for every column ###Code drinks.groupby('continent').median() ###Output _____no_output_____ ###Markdown Print the mean, min and max values for spirit consumption. 
###Code import numpy as np drinks['spirit_servings'].agg([np.mean, np.min, np.max]) # use the spirit_servings column the exercise asks for ###Output _____no_output_____ ###Markdown Problem 3 Import the libraries ###Code import pandas as pd ###Output _____no_output_____ ###Markdown Import the dataset baby_names.csv, assign it to a variable called baby_names ###Code baby_names = pd.read_csv('data/baby_names.csv') baby_names ###Output _____no_output_____ ###Markdown See the first 10 entries. ###Code baby_names.head(10) ###Output _____no_output_____ ###Markdown Delete the columns 'Unnamed: 0' and 'Id' ###Code baby_names = baby_names.drop(['Unnamed: 0', 'Id'], axis = 1) baby_names ###Output _____no_output_____ ###Markdown Are there more male or female names in the dataset? ###Code baby_names.groupby('Gender')['Count'].sum().idxmax() ###Output _____no_output_____ ###Markdown Delete the column 'Year'. Group the dataset by name and assign the result to names. ###Code del baby_names['Year'] baby_names names = baby_names.groupby('Name') names ###Output _____no_output_____ ###Markdown How many different names exist in the dataset? ###Code len(names) ###Output _____no_output_____ ###Markdown What is the name with the most occurrences? ###Code names['Count'].sum().idxmax() ###Output _____no_output_____ ###Markdown What is the median name occurrence? ###Code names['Count'].sum().median() ###Output _____no_output_____ ###Markdown Get a summary with the mean, min, max, std and quartiles. 
Module07_SimpleDynamicalModel/SimpleBucketModel.ipynb
###Markdown A Simple Bucket Hydrology Model April 9, 2018 ###Code import numpy as np import matplotlib.pyplot as plt Nt = 100 dt = 1.0 P = np.zeros((Nt,1)) P[19:39] = 4.0 print(P.shape) t = np.arange(1,Nt+1,1) print(t.shape) plt.figure(1) plt.bar(t,P) plt.ylabel('Precip. [mm/d]') plt.xlabel('Time [d]') plt.show() k1 = 0.02 # Drainage coefficient in units of day^-1 W1_0 = 250.0 # Water storage in units of mm # Initializing a data container for our water storage at each time step W1 = np.zeros(t.shape) # Update initial condition W1[0] = W1_0 # Initializing a data container for our discharge at each time step Q = np.zeros(t.shape) # Update initial condition Q[0] = k1*W1[0] # The main loop for i in np.arange(1,Nt,1): # Compute the value of the derivatives dW1dt = P[i-1] - k1*W1[i-1] # Compute the next value of W W1[i] = W1[i-1] + dW1dt*dt # Compute the next value of Q Q[i] = k1*W1[i] plt.figure(2) plt.subplot(311) plt.bar(t,P) plt.subplot(312) plt.plot(t,W1) plt.subplot(313) plt.plot(t,Q) plt.show() ###Output _____no_output_____
4_pose_estimation/4-2_DataLoader.ipynb
###Markdown 4.2 DataLoaderの作成- 本ファイルでは、OpenPosetなど姿勢推定で使用するDatasetとDataLoaderを作成します。MS COCOデータセットを対象とします。 学習目標1. マスクデータについて理解する2. OpenPoseで使用するDatasetクラス、DataLoaderクラスを実装できるようになる3. OpenPoseの前処理およびデータオーギュメンテーションで、何をしているのか理解する 事前準備書籍の指示に従い、本章で使用するデータを用意します ###Code # 必要なパッケージのimport import json import os import os.path as osp import numpy as np import cv2 from PIL import Image from matplotlib import cm import matplotlib.pyplot as plt %matplotlib inline import torch.utils.data as data ###Output _____no_output_____ ###Markdown 画像、マスク画像、アノテーションデータへのファイルパスリストを作成 ###Code def make_datapath_list(rootpath): """ 学習、検証の画像データとアノテーションデータ、マスクデータへのファイルパスリストを作成する。 """ # アノテーションのJSONファイルを読み込む json_path = osp.join(rootpath, 'COCO.json') with open(json_path) as data_file: data_this = json.load(data_file) data_json = data_this['root'] # indexを格納 num_samples = len(data_json) train_indexes = [] val_indexes = [] for count in range(num_samples): if data_json[count]['isValidation'] != 0.: val_indexes.append(count) else: train_indexes.append(count) # 画像ファイルパスを格納 train_img_list = list() val_img_list = list() for idx in train_indexes: img_path = os.path.join(rootpath, data_json[idx]['img_paths']) train_img_list.append(img_path) for idx in val_indexes: img_path = os.path.join(rootpath, data_json[idx]['img_paths']) val_img_list.append(img_path) # マスクデータのパスを格納 train_mask_list = [] val_mask_list = [] for idx in train_indexes: img_idx = data_json[idx]['img_paths'][-16:-4] anno_path = "./data/mask/train2014/mask_COCO_tarin2014_" + img_idx+'.jpg' train_mask_list.append(anno_path) for idx in val_indexes: img_idx = data_json[idx]['img_paths'][-16:-4] anno_path = "./data/mask/val2014/mask_COCO_val2014_" + img_idx+'.jpg' val_mask_list.append(anno_path) # アノテーションデータを格納 train_meta_list = list() val_meta_list = list() for idx in train_indexes: train_meta_list.append(data_json[idx]) for idx in val_indexes: val_meta_list.append(data_json[idx]) return train_img_list, train_mask_list, val_img_list, val_mask_list, train_meta_list, val_meta_list # 動作確認(実行には10秒ほど時間がかかります) train_img_list, train_mask_list, val_img_list, val_mask_list, train_meta_list, val_meta_list = make_datapath_list( rootpath="./data/") val_meta_list[24] ###Output _____no_output_____ ###Markdown マスクデータの働きを確認 ###Code index = 24 # 画像 img = cv2.imread(val_img_list[index]) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) plt.imshow(img) plt.show() # マスク mask_miss = cv2.imread(val_mask_list[index]) mask_miss = cv2.cvtColor(mask_miss, cv2.COLOR_BGR2RGB) plt.imshow(mask_miss) plt.show() # 合成 blend_img = cv2.addWeighted(img, 0.4, mask_miss, 0.6, 0) plt.imshow(blend_img) plt.show() ###Output _____no_output_____ ###Markdown 画像の前処理作成 ###Code # データ処理のクラスとデータオーギュメンテーションのクラスをimportする from utils.data_augumentation import Compose, get_anno, add_neck, aug_scale, aug_rotate, aug_croppad, aug_flip, remove_illegal_joint, Normalize_Tensor, no_Normalize_Tensor class DataTransform(): """ 画像とマスク、アノテーションの前処理クラス。 学習時と推論時で異なる動作をする。 学習時はデータオーギュメンテーションする。 """ def __init__(self): self.data_transform = { 'train': Compose([ get_anno(), # JSONからアノテーションを辞書に格納 add_neck(), # アノテーションデータの順番を変更し、さらに首のアノテーションデータを追加 aug_scale(), # 拡大縮小 aug_rotate(), # 回転 aug_croppad(), # 切り出し aug_flip(), # 左右反転 remove_illegal_joint(), # 画像からはみ出たアノテーションを除去 # Normalize_Tensor() # 色情報の標準化とテンソル化 no_Normalize_Tensor() # 本節のみ、色情報の標準化をなくす ]), 'val': Compose([ # 本書では検証は省略 ]) } def __call__(self, phase, meta_data, img, mask_miss): """ Parameters ---------- phase : 'train' or 'val' 前処理のモードを指定。 """ meta_data, img, mask_miss = 
self.data_transform[phase]( meta_data, img, mask_miss) return meta_data, img, mask_miss # 動作確認 # 画像読み込み index = 24 img = cv2.imread(val_img_list[index]) mask_miss = cv2.imread(val_mask_list[index]) meat_data = val_meta_list[index] # 画像前処理 transform = DataTransform() meta_data, img, mask_miss = transform("train", meat_data, img, mask_miss) # 画像表示 img = img.numpy().transpose((1, 2, 0)) plt.imshow(img) plt.show() # マスク表示 mask_miss = mask_miss.numpy().transpose((1, 2, 0)) plt.imshow(mask_miss) plt.show() # 合成 RGBにそろえてから img = Image.fromarray(np.uint8(img*255)) img = np.asarray(img.convert('RGB')) mask_miss = Image.fromarray(np.uint8((mask_miss))) mask_miss = np.asarray(mask_miss.convert('RGB')) blend_img = cv2.addWeighted(img, 0.4, mask_miss, 0.6, 0) plt.imshow(blend_img) plt.show() ###Output _____no_output_____ ###Markdown 訓練データの正解情報として使うアノテーションデータの作成 ※ Issue [142] (https://github.com/YutaroOgawa/pytorch_advanced/issues/142)以下で描画時にエラーが発生する場合は、お手数ですが、Matplotlibのバージョンを3.1.3に変更して対応してください。 ###Code from utils.dataloader import get_ground_truth # 画像読み込み index = 24 img = cv2.imread(val_img_list[index]) mask_miss = cv2.imread(val_mask_list[index]) meat_data = val_meta_list[index] # 画像前処理 meta_data, img, mask_miss = transform("train", meat_data, img, mask_miss) img = img.numpy().transpose((1, 2, 0)) mask_miss = mask_miss.numpy().transpose((1, 2, 0)) # OpenPoseのアノテーションデータ生成 heat_mask, heatmaps, paf_mask, pafs = get_ground_truth(meta_data, mask_miss) # 画像表示 plt.imshow(img) plt.show() # 左肘のheatmapを確認 # 元画像 img = Image.fromarray(np.uint8(img*255)) img = np.asarray(img.convert('RGB')) # 左肘 heat_map = heatmaps[:, :, 6] # 6は左肘 heat_map = Image.fromarray(np.uint8(cm.jet(heat_map)*255)) heat_map = np.asarray(heat_map.convert('RGB')) heat_map = cv2.resize( heat_map, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC) # 注意:heatmapは画像サイズが1/8になっているので拡大する # 合成して表示 blend_img = cv2.addWeighted(img, 0.5, heat_map, 0.5, 0) plt.imshow(blend_img) plt.show() # 左手首 heat_map = heatmaps[:, :, 7] # 7は左手首 heat_map = Image.fromarray(np.uint8(cm.jet(heat_map)*255)) heat_map = np.asarray(heat_map.convert('RGB')) heat_map = cv2.resize( heat_map, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC) # 合成して表示 blend_img = cv2.addWeighted(img, 0.5, heat_map, 0.5, 0) plt.imshow(blend_img) plt.show() # 左肘と左手首へのPAFを確認 paf = pafs[:, :, 24] # 24は左肘と左手首をつなぐxベクトルのPAF paf = Image.fromarray(np.uint8((paf)*255)) paf = np.asarray(paf.convert('RGB')) paf = cv2.resize( paf, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC) # 合成して表示 blend_img = cv2.addWeighted(img, 0.3, paf, 0.7, 0) plt.imshow(blend_img) plt.show() # PAFのみを表示 paf = pafs[:, :, 24] # 24は左肘と左手首をつなぐxベクトルのPAF paf = Image.fromarray(np.uint8((paf)*255)) paf = np.asarray(paf.convert('RGB')) paf = cv2.resize( paf, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_CUBIC) plt.imshow(paf) ###Output _____no_output_____ ###Markdown Datasetの作成 ###Code from utils.dataloader import get_ground_truth class COCOkeypointsDataset(data.Dataset): """ MSCOCOのCocokeypointsのDatasetを作成するクラス。PyTorchのDatasetクラスを継承。 Attributes ---------- img_list : リスト 画像のパスを格納したリスト anno_list : リスト アノテーションへのパスを格納したリスト phase : 'train' or 'test' 学習か訓練かを設定する。 transform : object 前処理クラスのインスタンス """ def __init__(self, img_list, mask_list, meta_list, phase, transform): self.img_list = img_list self.mask_list = mask_list self.meta_list = meta_list self.phase = phase self.transform = transform def __len__(self): '''画像の枚数を返す''' return len(self.img_list) def __getitem__(self, index): img, heatmaps, heat_mask, 
pafs, paf_mask = self.pull_item(index) return img, heatmaps, heat_mask, pafs, paf_mask def pull_item(self, index): '''画像のTensor形式のデータ、アノテーション、マスクを取得する''' # 1. 画像読み込み image_file_path = self.img_list[index] img = cv2.imread(image_file_path) # [高さ][幅][色BGR] # 2. マスクとアノテーション読み込み mask_miss = cv2.imread(self.mask_list[index]) meat_data = self.meta_list[index] # 3. 画像前処理 meta_data, img, mask_miss = self.transform( self.phase, meat_data, img, mask_miss) # 4. 正解アノテーションデータの取得 mask_miss_numpy = mask_miss.numpy().transpose((1, 2, 0)) heat_mask, heatmaps, paf_mask, pafs = get_ground_truth( meta_data, mask_miss_numpy) # 5. マスクデータはRGBが(1,1,1)か(0,0,0)なので、次元を落とす # マスクデータはマスクされている場所は値が0、それ以外は値が1です heat_mask = heat_mask[:, :, :, 0] paf_mask = paf_mask[:, :, :, 0] # 6. チャネルが最後尾にあるので順番を変える # 例:paf_mask:torch.Size([46, 46, 38]) # → torch.Size([38, 46, 46]) paf_mask = paf_mask.permute(2, 0, 1) heat_mask = heat_mask.permute(2, 0, 1) pafs = pafs.permute(2, 0, 1) heatmaps = heatmaps.permute(2, 0, 1) return img, heatmaps, heat_mask, pafs, paf_mask # 動作確認 train_dataset = COCOkeypointsDataset( val_img_list, val_mask_list, val_meta_list, phase="train", transform=DataTransform()) val_dataset = COCOkeypointsDataset( val_img_list, val_mask_list, val_meta_list, phase="val", transform=DataTransform()) # データの取り出し例 item = train_dataset.__getitem__(0) print(item[0].shape) # img print(item[1].shape) # heatmaps, print(item[2].shape) # heat_mask print(item[3].shape) # pafs print(item[4].shape) # paf_mask ###Output _____no_output_____ ###Markdown DataLoaderの作成 ###Code # データローダーの作成 batch_size = 8 train_dataloader = data.DataLoader( train_dataset, batch_size=batch_size, shuffle=True) val_dataloader = data.DataLoader( val_dataset, batch_size=batch_size, shuffle=False) # 辞書型変数にまとめる dataloaders_dict = {"train": train_dataloader, "val": val_dataloader} # 動作の確認 batch_iterator = iter(dataloaders_dict["train"]) # イタレータに変換 item = next(batch_iterator) # 1番目の要素を取り出す print(item[0].shape) # img print(item[1].shape) # heatmaps, print(item[2].shape) # heat_mask print(item[3].shape) # pafs print(item[4].shape) # paf_mask ###Output _____no_output_____
test/eis-metadata-validation/Planon metadata validation5b.ipynb
###Markdown EIS metadata validation scriptUsed to validate Planon output with spreadsheet input 1. Data import ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline ###Output _____no_output_____ ###Markdown Read data. There are two datasets: Planon and Master. The latter is the EIS data nomencalture that was created. Master is made up of two subsets: loggers and meters. Loggers are sometimes called controllers and meters are sometimes called sensors. In rare cases meters or sensors are also called channels. ###Code planon=pd.read_excel('EIS Assets v2.xlsx',index_col = 'Code') #master_loggerscontrollers_old = pd.read_csv('LoggersControllers.csv', index_col = 'Asset Code') #master_meterssensors_old = pd.read_csv('MetersSensors.csv', encoding = 'macroman', index_col = 'Asset Code') master='MASTER PlanonLoggersAndMeters 17 10 16.xlsx' master_loggerscontrollers=pd.read_excel(master,sheetname='Loggers Controllers', index_col = 'Asset Code') master_meterssensors=pd.read_excel(master,sheetname='Meters Sensors', encoding = 'macroman', index_col = 'Asset Code') planon['Code']=planon.index master_loggerscontrollers['Code']=master_loggerscontrollers.index master_meterssensors['Code']=master_meterssensors.index set(master_meterssensors['Classification Group']) set(master_loggerscontrollers['Classification Group']) new_index=[] for i in master_meterssensors.index: if '/' not in i: new_index.append(i[:i.find('-')+1]+i[i.find('-')+1:].replace('-','/')) else: new_index.append(i) master_meterssensors.index=new_index master_meterssensors['Code']=master_meterssensors.index new_index=[] for i in master_meterssensors.index: logger=i[:i.find('/')] if master_loggerscontrollers.loc[logger]['Classification Group']=='BMS controller': meter=i[i.find('/')+1:] if meter[0] not in {'N','n','o','i'}: new_index.append(i) else: new_index.append(i) len(master_meterssensors) master_meterssensors=master_meterssensors.loc[new_index] len(master_meterssensors) master_meterssensors.to_csv('meterssensors.csv') master_loggerscontrollers.to_csv('loggerscontrollers.csv') ###Output _____no_output_____ ###Markdown Unify index, caps everything and strip of trailing spaces. ###Code planon.index=[str(i).strip() for i in planon.index] master_loggerscontrollers.index=[str(i).strip() for i in master_loggerscontrollers.index] master_meterssensors.index=[str(i).strip() for i in master_meterssensors.index] ###Output _____no_output_____ ###Markdown Drop duplicates (shouldn't be any) ###Code planon.drop_duplicates(inplace=True) master_loggerscontrollers.drop_duplicates(inplace=True) master_meterssensors.drop_duplicates(inplace=True) ###Output _____no_output_____ ###Markdown Split Planon import into loggers and meters Drop duplicates (shouldn't be any) ###Code # Split the Planon file into 2, one for loggers & controllers, and one for meters & sensors. 
planon_loggerscontrollers = planon.loc[(planon['Classification Group'] == 'EN.EN4 BMS Controller') | (planon['Classification Group'] == 'EN.EN1 Data Logger')] planon_meterssensors = planon.loc[(planon['Classification Group'] == 'EN.EN2 Energy Meter') | (planon['Classification Group'] == 'EN.EN3 Energy Sensor')] planon_loggerscontrollers.drop_duplicates(inplace=True) planon_meterssensors.drop_duplicates(inplace=True) ###Output C:\Anaconda2\envs\python3\lib\site-packages\pandas\util\decorators.py:91: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy return func(*args, **kwargs) ###Markdown Index unique? show number of duplicates in index ###Code len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()]) len(planon_meterssensors.index[planon_meterssensors.index.duplicated()]) ###Output _____no_output_____ ###Markdown Meters are not unique. This is becasue of the spaces served. This is ok for now, we will deal with duplicates at the comparison stage. Same is true for loggers - in the unlikely event that there are duplicates in the future. ###Code planon_meterssensors.head(3) ###Output _____no_output_____ ###Markdown 2. Validation Create list of all buildings present in Planon export. These are buildings to check the data against from Master. ###Code buildings=set(planon_meterssensors['BuildingNo.']) buildings len(buildings) ###Output _____no_output_____ ###Markdown 2.1. Meters Create dataframe slice for validation from `master_meterssensors` where the only the buildings located in `buildings` are contained. Save this new slice into `master_meterssensors_for_validation`. This is done by creating sub-slices of the dataframe for each building, then concatenating them all together. ###Code master_meterssensors_for_validation = \ pd.concat([master_meterssensors.loc[master_meterssensors['Building Code'] == building] \ for building in buildings]) master_meterssensors_for_validation.head(2) #alternative method master_meterssensors_for_validation2 = \ master_meterssensors[master_meterssensors['Building Code'].isin(buildings)] master_meterssensors_for_validation2.head(2) ###Output _____no_output_____ ###Markdown Planon sensors are not unique because of the spaces served convention in the two data architectures. The Planon architecture devotes a new line for each space served - hence the not unique index. The Master architecture lists all the spaces only once, as a list, therefore it has a unique index. We will need to take this into account and create matching dataframe out of planon for comparison, with a unique index. ###Code len(master_meterssensors_for_validation) len(planon_meterssensors)-len(planon_meterssensors.index[planon_meterssensors.index.duplicated()]) ###Output _____no_output_____ ###Markdown Sort datasets after index for easier comparison. 
###Code master_meterssensors_for_validation.sort_index(inplace=True) planon_meterssensors.sort_index(inplace=True) ###Output C:\Anaconda2\envs\python3\lib\site-packages\ipykernel\__main__.py:2: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy from ipykernel import kernelapp as app ###Markdown 2.1.1 Slicing of meters to only certain columns of comparison ###Code planon_meterssensors.T master_meterssensors_for_validation.T ###Output _____no_output_____ ###Markdown Create dictionary that maps Planon column names onto Master. From Nicola: - Code (Asset Code) - Description- EIS ID (Channel)- Utility Type- Fiscal Meter- Tenant Meter`Building code` and `Building name` are implicitly included. `Logger Serial Number`, `IP` or `MAC` would be essential to include, as well as `Make` and `Model`. `Additional Location Info` is not essnetial but would be useful to have. Locations (`Locations.Space.Space number` and `Space Name`) are included in the Planon export - but this is their only viable data source, therefore are not validated against. ###Code #Planon:Master meters_match_dict={ "BuildingNo.":"Building Code", "Building":"Building Name", "Description":"Description", "EIS ID":"Logger Channel", "Tenant Meter.Name":"Tenant meter", "Fiscal Meter.Name":"Fiscal meter", "Code":"Code" } ###Output _____no_output_____ ###Markdown Filter both dataframes based on these new columns. Then remove duplicates. Currently, this leads to loss of information of spaces served, but also a unique index for the Planon dataframe, therefore bringing the dataframes closer to each other. When including spaces explicitly in the comparison (if we want to - or just trust the Planon space mapping), this needs to be modified. ###Code master_meterssensors_for_validation_filtered=master_meterssensors_for_validation[list(meters_match_dict.values())] planon_meterssensors_filtered=planon_meterssensors[list(meters_match_dict.keys())] master_meterssensors_for_validation_filtered.head(2) planon_meterssensors_filtered.head(2) ###Output _____no_output_____ ###Markdown Unify headers, drop duplicates (bear the mind the spaces argument, this where it needs to be brought back in in the future!). ###Code planon_meterssensors_filtered.columns=[meters_match_dict[i] for i in planon_meterssensors_filtered] planon_meterssensors_filtered.drop_duplicates(inplace=True) master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True) planon_meterssensors_filtered.head(2) ###Output _____no_output_____ ###Markdown Fiscal/Tenant meter name needs fixing from Yes/No and 1/0. 
###Code planon_meterssensors_filtered['Fiscal meter']=planon_meterssensors_filtered['Fiscal meter'].isin(['Yes']) planon_meterssensors_filtered['Tenant meter']=planon_meterssensors_filtered['Tenant meter'].isin(['Yes']) master_meterssensors_for_validation_filtered['Fiscal meter']=master_meterssensors_for_validation_filtered['Fiscal meter'].isin([1]) master_meterssensors_for_validation_filtered['Tenant meter']=master_meterssensors_for_validation_filtered['Tenant meter'].isin([1]) master_meterssensors_for_validation_filtered.head(2) planon_meterssensors_filtered.head(2) ###Output _____no_output_____ ###Markdown Cross-check missing meters ###Code a=np.sort(list(set(planon_meterssensors_filtered.index))) b=np.sort(list(set(master_meterssensors_for_validation_filtered.index))) meterssensors_not_in_planon=[] for i in b: if i not in a: print(i+',',end=" "), meterssensors_not_in_planon.append(i) print('\n\nMeters in Master, but not in Planon:', len(meterssensors_not_in_planon),'/',len(b),':', round(len(meterssensors_not_in_planon)/len(b)*100,3),'%') (set([i[:5] for i in meterssensors_not_in_planon])) a=np.sort(list(set(planon_meterssensors_filtered.index))) b=np.sort(list(set(master_meterssensors_for_validation_filtered.index))) meterssensors_not_in_master=[] for i in a: if i not in b: print(i+',',end=" "), meterssensors_not_in_master.append(i) print('\n\nMeters in Planon, not in Master:', len(meterssensors_not_in_master),'/',len(a),':', round(len(meterssensors_not_in_master)/len(a)*100,3),'%') len(set([i for i in meterssensors_not_in_master])) set([i[:9] for i in meterssensors_not_in_master]) set([i[:5] for i in meterssensors_not_in_master]) ###Output _____no_output_____ ###Markdown Check for duplicates in index, but not duplicates over the entire row ###Code print(len(planon_meterssensors_filtered.index)) print(len(set(planon_meterssensors_filtered.index))) print(len(master_meterssensors_for_validation_filtered.index)) print(len(set(master_meterssensors_for_validation_filtered.index))) master_meterssensors_for_validation_filtered[master_meterssensors_for_validation_filtered.index.duplicated()] ###Output _____no_output_____ ###Markdown The duplicates are the `nan`s. Remove these for now. Could revisit later to do an index-less comparison, only over row contents. ###Code good_index=[i for i in master_meterssensors_for_validation_filtered.index if str(i).lower().strip()!='nan'] master_meterssensors_for_validation_filtered=master_meterssensors_for_validation_filtered.loc[good_index] master_meterssensors_for_validation_filtered.drop_duplicates(inplace=True) len(planon_meterssensors_filtered) len(master_meterssensors_for_validation_filtered) ###Output _____no_output_____ ###Markdown Do comparison only on common indices. Need to revisit and identify the cause missing meters, both ways (5 Planon->Meters and 30 Meters->Planon in this example). ###Code comon_index=list(set(master_meterssensors_for_validation_filtered.index).intersection(set(planon_meterssensors_filtered.index))) len(comon_index) master_meterssensors_for_validation_intersected=master_meterssensors_for_validation_filtered.loc[comon_index].sort_index() planon_meterssensors_intersected=planon_meterssensors_filtered.loc[comon_index].sort_index() len(master_meterssensors_for_validation_intersected) len(planon_meterssensors_intersected) ###Output _____no_output_____ ###Markdown Still have duplicate indices. For now we just drop and keep the first. 
###Code master_meterssensors_for_validation_intersected = master_meterssensors_for_validation_intersected[~master_meterssensors_for_validation_intersected.index.duplicated(keep='first')] master_meterssensors_for_validation_intersected.head(2) planon_meterssensors_intersected.head(2) ###Output _____no_output_____ ###Markdown 2.1.2. Primitive comparison ###Code planon_meterssensors_intersected==master_meterssensors_for_validation_intersected np.all(planon_meterssensors_intersected==master_meterssensors_for_validation_intersected) ###Output _____no_output_____ ###Markdown 2.1.3. Horizontal comparison Number of cells matching ###Code (planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum() ###Output _____no_output_____ ###Markdown Percentage matching ###Code (planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\ len(planon_meterssensors_intersected)*100 ((planon_meterssensors_intersected==master_meterssensors_for_validation_intersected).sum()/\ len(planon_meterssensors_intersected)*100).plot(kind='bar') ###Output _____no_output_____ ###Markdown 2.1.4. Vertical comparison ###Code df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum()) df df=pd.DataFrame((planon_meterssensors_intersected.T==master_meterssensors_for_validation_intersected.T).sum()/\ len(planon_meterssensors_intersected.T)*100) df[df[0]<100] ###Output _____no_output_____ ###Markdown 2.1.5. Smart(er) comparison Not all of the dataframe matches. Let us do some basic string formatting, maybe that helps. ###Code sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']) planon_meterssensors_intersected['Description']=[str(s).lower().strip()\ .replace(' ',' ').replace(' ',' ').replace('nan','')\ for s in planon_meterssensors_intersected['Description'].values] master_meterssensors_for_validation_intersected['Description']=[str(s).lower().strip()\ .replace(' ',' ').replace(' ',' ').replace('nan','')\ for s in master_meterssensors_for_validation_intersected['Description'].values] sum(planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']) ###Output _____no_output_____ ###Markdown Some errors fixed, some left. Let's see which ones. These are either: - Wrong duplicate dropped- Input human erros in the description.- Actual erros somewhere in the indexing. ###Code for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index: print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Description'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Description']) ###Output _____no_output_____ ###Markdown Let us repeat the exercise for `Logger Channel`. Cross-validate, flag as highly likely error where both mismatch. 
###Code sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel']) planon_meterssensors_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_meterssensors_intersected['Logger Channel'].values] master_meterssensors_for_validation_intersected['Logger Channel']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_meterssensors_for_validation_intersected['Logger Channel'].values] sum(planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel']) ###Output _____no_output_____ ###Markdown All errors fixed on logger channels. ###Code for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Logger Channel']!=master_meterssensors_for_validation_intersected['Logger Channel']].index: print(i,'\t\tPlanon:',planon_meterssensors_intersected.loc[i]['Logger Channel'],'\t\tMaster:',master_meterssensors_for_validation_intersected.loc[i]['Logger Channel']) ###Output _____no_output_____ ###Markdown New error percentage: ###Code (planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\ len(planon_meterssensors_intersected)*100 ###Output _____no_output_____ ###Markdown 2.2. Loggers ###Code buildings=set(planon_loggerscontrollers['BuildingNo.']) buildings master_loggerscontrollers_for_validation = \ pd.concat([master_loggerscontrollers.loc[master_loggerscontrollers['Building Code'] == building] \ for building in buildings]) master_loggerscontrollers_for_validation.head(2) len(master_loggerscontrollers_for_validation) len(planon_loggerscontrollers)-len(planon_loggerscontrollers.index[planon_loggerscontrollers.index.duplicated()]) master_loggerscontrollers_for_validation.sort_index(inplace=True) planon_loggerscontrollers.sort_index(inplace=True) planon_loggerscontrollers.T master_loggerscontrollers_for_validation.T ###Output _____no_output_____ ###Markdown Create dictionary that maps Planon column names onto Master. From Nicola: - EIS ID (Serial Number)- Make- Model- Description- Code (Asset Code)- Building Code`Building code` and `Building name` are implicitly included. `Logger IP` or `MAC` would be essential to include, as well as `Make` and `Model`. `Additional Location Info` is not essnetial but would be useful to have. Locations (`Locations.Space.Space number` and `Space Name`) are included in the Planon export - but this is their only viable data source, therefore are not validated against. 
###Code #Planon:Master loggers_match_dict={ "BuildingNo.":"Building Code", "Building":"Building Name", "Description":"Description", "EIS ID":"Logger Serial Number", "Make":"Make", "Model":"Model", "Code":"Code" } master_loggerscontrollers_for_validation_filtered=master_loggerscontrollers_for_validation[list(loggers_match_dict.values())] planon_loggerscontrollers_filtered=planon_loggerscontrollers[list(loggers_match_dict.keys())] master_loggerscontrollers_for_validation_filtered.head(2) planon_loggerscontrollers_filtered.head(2) planon_loggerscontrollers_filtered.columns=[loggers_match_dict[i] for i in planon_loggerscontrollers_filtered] planon_loggerscontrollers_filtered.drop_duplicates(inplace=True) master_loggerscontrollers_for_validation_filtered.drop_duplicates(inplace=True) planon_loggerscontrollers_filtered.head(2) master_loggerscontrollers_for_validation_filtered.head(2) a=np.sort(list(set(planon_loggerscontrollers_filtered.index))) b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index))) loggerscontrollers_not_in_planon=[] for i in b: if i not in a: print(i+',',end=" "), loggerscontrollers_not_in_planon.append(i) print('\n\nLoggers in Master, but not in Planon:', len(loggerscontrollers_not_in_planon),'/',len(b),':', round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%') a=np.sort(list(set(planon_loggerscontrollers_filtered.index))) b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index))) loggerscontrollers_not_in_master=[] for i in a: if i not in b: print(i+',',end=" "), loggerscontrollers_not_in_master.append(i) print('\n\nLoggers in Planon, not in Master:', len(loggerscontrollers_not_in_master),'/',len(a),':', round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%') print(len(planon_loggerscontrollers_filtered.index)) print(len(set(planon_loggerscontrollers_filtered.index))) print(len(master_loggerscontrollers_for_validation_filtered.index)) print(len(set(master_loggerscontrollers_for_validation_filtered.index))) master_loggerscontrollers_for_validation_filtered[master_loggerscontrollers_for_validation_filtered.index.duplicated()] comon_index=list(set(master_loggerscontrollers_for_validation_filtered.index).intersection(set(planon_loggerscontrollers_filtered.index))) master_loggerscontrollers_for_validation_intersected=master_loggerscontrollers_for_validation_filtered.loc[comon_index].sort_index() planon_loggerscontrollers_intersected=planon_loggerscontrollers_filtered.loc[comon_index].sort_index() master_loggerscontrollers_for_validation_intersected.head(2) planon_loggerscontrollers_intersected.head(2) planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected ###Output _____no_output_____ ###Markdown Loggers matching ###Code (planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum() ###Output _____no_output_____ ###Markdown Percentage matching ###Code (planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\ len(planon_loggerscontrollers_intersected)*100 ((planon_loggerscontrollers_intersected==master_loggerscontrollers_for_validation_intersected).sum()/\ len(planon_loggerscontrollers_intersected)*100).plot(kind='bar') ###Output _____no_output_____ ###Markdown Loggers not matching on `Building Name`. 
###Code sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']) planon_loggerscontrollers_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in planon_loggerscontrollers_intersected['Building Name'].values] master_loggerscontrollers_for_validation_intersected['Building Name']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ') for s in master_loggerscontrollers_for_validation_intersected['Building Name'].values] sum(planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']) ###Output _____no_output_____ ###Markdown That didnt help. ###Code for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index: print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Building Name'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name']) ###Output EX001-B01 Planon: roads - main campus Master: underpass MC029-B01 Planon: cetad Master: bowland hall cetad MC033-L03 Planon: county john creed Master: john creed MC047-B02 Planon: welcome centre Master: conference centre MC047-L01 Planon: welcome centre Master: conference centre MC047-L02 Planon: welcome centre Master: conference centre MC055-B01 Planon: furness residences Master: furness blocks MC071-B01 Planon: furness college Master: furness MC072-B02 Planon: psc Master: psc building MC072-B04 Planon: psc Master: psc building MC103-B01 Planon: lancaster house hotel Master: hotel MC198-B01 Planon: grizedale college - offices, bar & social space Master: grizedale MC198-B02 Planon: grizedale college - offices, bar & social space Master: grizedale OC004-B01 Planon: chancellor's wharf, wyre house Master: chancellors wharf OC005-B01 Planon: chancellor's wharf, lune house Master: chancellors wharf OC006-B01 Planon: chancellor's wharf, kent house Master: chancellors wharf ###Markdown Follow up with lexical distance comparison. That would flag this as a match. Loggers not matching on `Serial Number`. 
###Code sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']) planon_loggerscontrollers_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in planon_loggerscontrollers_intersected['Logger Serial Number'].values] master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=[str(s).lower().strip().replace(' ',' ').replace(' ',' ').replace('{','').replace('}','') for s in master_loggerscontrollers_for_validation_intersected['Logger Serial Number'].values] sum(planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']) for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index: print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']) ###Output MC032-L04 Planon: 50198367 Master: 050198367e00 MC046-L05 Planon: 50198895300 Master: 050198895300 MC063-L01 Planon: 50198829500 Master: 050198829500 MC064-L03 Planon: 50198872600 Master: 050198872600 MC071-L02 Planon: 50201286300 Master: 050201286300 MC071-L05 Planon: 50201221 Master: 050201221e00 MC071-L16 Planon: 50198904000 Master: 050198904000 MC078-L03 Planon: 50198864300 Master: 050198864300 MC102-L01 Planon: 50157909800 Master: 050157909800 ###Markdown Technically the same, but there is a number format error. Compare based on float value, if they match, replace one of them. This needs to be amended, as it will throw `cannot onvert to float` exception if strings are left in from the previous step. 
###Code z1=[] z2=[] for i in planon_loggerscontrollers_intersected.index: if planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']: if float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])==\ float(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']): z1.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])))) z2.append(str(int(float(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'])))) else: z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']) z2.append(master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']) else: z1.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']) z2.append(planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number']) planon_loggerscontrollers_intersected['Logger Serial Number']=z1 master_loggerscontrollers_for_validation_intersected['Logger Serial Number']=z2 for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Logger Serial Number']!=master_loggerscontrollers_for_validation_intersected['Logger Serial Number']].index: print(i,'\t\tPlanon:',planon_loggerscontrollers_intersected.loc[i]['Logger Serial Number'],'\t\tMaster:',master_loggerscontrollers_for_validation_intersected.loc[i]['Logger Serial Number']) ###Output _____no_output_____ ###Markdown New error percentage: ###Code (planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\ len(planon_loggerscontrollers_intersected)*100 ###Output _____no_output_____ ###Markdown (Bearing in my mind the above, this is technically 0) ###Code a=np.sort(list(set(planon_meterssensors_filtered.index))) b=np.sort(list(set(master_meterssensors_for_validation_filtered.index))) meterssensors_not_in_planon=[] for i in b: if i not in a: print(i+',',end=" "), meterssensors_not_in_planon.append(i) print('\n\nMeters in Master, but not in Planon:', len(meterssensors_not_in_planon),'/',len(b),':', round(len(meterssensors_not_in_planon)/len(b)*100,3),'%') q1=pd.DataFrame(meterssensors_not_in_planon) a=np.sort(list(set(planon_meterssensors_filtered.index))) b=np.sort(list(set(master_meterssensors_for_validation_filtered.index))) meterssensors_not_in_master=[] for i in a: if i not in b: print(i+',',end=" "), meterssensors_not_in_master.append(i) print('\n\nMeters in Planon, not in Master:', len(meterssensors_not_in_master),'/',len(a),':', round(len(meterssensors_not_in_master)/len(a)*100,3),'%') q2=pd.DataFrame(meterssensors_not_in_master) a=np.sort(list(set(planon_loggerscontrollers_filtered.index))) b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index))) loggerscontrollers_not_in_planon=[] for i in b: if i not in a: print(i+',',end=" "), loggerscontrollers_not_in_planon.append(i) print('\n\nLoggers in Master, but not in Planon:', len(loggerscontrollers_not_in_planon),'/',len(b),':', round(len(loggerscontrollers_not_in_planon)/len(b)*100,3),'%') q3=pd.DataFrame(loggerscontrollers_not_in_planon) a=np.sort(list(set(planon_loggerscontrollers_filtered.index))) b=np.sort(list(set(master_loggerscontrollers_for_validation_filtered.index))) loggerscontrollers_not_in_master=[] for i in a: if i not in b: print(i+',',end=" "), loggerscontrollers_not_in_master.append(i) print('\n\nLoggers in Planon, not in Master:', 
len(loggerscontrollers_not_in_master),'/',len(a),':', round(len(loggerscontrollers_not_in_master)/len(a)*100,3),'%') q4=pd.DataFrame(loggerscontrollers_not_in_master) q5=pd.DataFrame((planon_meterssensors_intersected!=master_meterssensors_for_validation_intersected).sum()/\ len(planon_meterssensors_intersected)*100) q6=pd.DataFrame((planon_loggerscontrollers_intersected!=master_loggerscontrollers_for_validation_intersected).sum()/\ len(planon_loggerscontrollers_intersected)*100) w1=[] for i in planon_meterssensors_intersected[planon_meterssensors_intersected['Description']!=master_meterssensors_for_validation_intersected['Description']].index: w1.append({"Meter":i,'Planon':planon_meterssensors_intersected.loc[i]['Description'], 'Master':master_meterssensors_for_validation_intersected.loc[i]['Description']}) q7=pd.DataFrame(w1) w2=[] for i in planon_loggerscontrollers_intersected[planon_loggerscontrollers_intersected['Building Name']!=master_loggerscontrollers_for_validation_intersected['Building Name']].index: w2.append({"Logger":i,'Planon':planon_loggerscontrollers_intersected.loc[i]['Building Name'], 'Master':master_loggerscontrollers_for_validation_intersected.loc[i]['Building Name']}) q8=pd.DataFrame(w2) writer = pd.ExcelWriter('final5b.xlsx') q1.to_excel(writer,'Meters Master, not Planon') q2.to_excel(writer,'Meters Planon, not Master') q3.to_excel(writer,'Loggers Master, not Planon') q4.to_excel(writer,'Loggers Planon, not Master') q5.to_excel(writer,'Meters error perc') q6.to_excel(writer,'Loggers error perc') q7.to_excel(writer,'Meters naming conflcits') q1 q9=[] try: for i in q1[0].values: if i[:i.find('/')] not in set(q3[0].values): q9.append(i) except:pass pd.DataFrame(q9).to_excel(writer,'Meters Master, not Planon, not Logger') q10=[] try: for i in q1[0].values: if 'L82' not in i: q10.append(i) except:pass pd.DataFrame(q10).to_excel(writer,'Meters Master, not Planon, not L82') q11=[] try: for i in q1[0].values: if 'MC210' not in i: q11.append(i) except:pass pd.DataFrame(q11).to_excel(writer,'Meters Master, not Planon, not 210') writer.save() test=[] for i in planon_meterssensors_intersected.index: test.append(i[:9]) planon_meterssensors_intersected['test']=test planon_meterssensors_intersected.set_index(['test','Code']) ###Output _____no_output_____
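###Markdown
To make the number-format issue above concrete, here is a small illustrative sketch (added for this write-up, not part of the original validation run) showing why the raw strings differ while their float values match: a serial such as '050198367e00' is parsed by Python as scientific notation.
###Code
# Values taken from the mismatch printout above
planon_serial = '50198367'
master_serial = '050198367e00'

print(planon_serial == master_serial)                 # raw strings differ
print(float(planon_serial) == float(master_serial))   # but the float values match
print(str(int(float(master_serial))))                 # the normalised form used above
###Output
False
True
50198367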
hist/Processamento Yelp 2.ipynb
###Markdown
Integrative Project (Atividade Integradora)

Creating the Spark Environment
###Code
# import findspark as fs
from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.sql import functions as f
from pyspark.sql.window import Window
from pyspark.ml.feature import StopWordsRemover
import pandas as pd
import seaborn as sns
sns.set(style="ticks", palette="pastel")
import os
from wordcloud import WordCloud, ImageColorGenerator
import matplotlib.pyplot as plt
%matplotlib inline

# MAC Local (Viviane)
# spark_location='/Users/vivi/server/spark' # Set your own
# java8_location= '/Library/Java/JavaVirtualMachines/jdk1.8.0_251.jdk/Contents/Home/' # Set your own
# os.environ['JAVA_HOME'] = java8_location
# fs.init(spark_home=spark_location)

datapath = 'C:\\Users\\RuWindows\\Desktop\\PI\\yelp_dataset\\'
#datapath = '../data/yelp'
files = sorted(os.listdir(datapath))
files

#!head data/yelp_academic_dataset_review.json

# Spark Session
spark = SparkSession.builder \
    .master('local[*]') \
    .appName('Integradora Yelp') \
    .config("spark.ui.port", "4060") \
    .getOrCreate()
spark
###Output
_____no_output_____
###Markdown
spark = SparkSession.builder \
    .master('local[8]') \
    .appName('Yelp Integradora') \
    .getOrCreate()
###Code
sc = spark.sparkContext
spark#.stop()
###Output
_____no_output_____
###Markdown
Importing the Source Bases - Raw
###Code
usr_raw = spark.read.json(datapath+'/yelp_academic_dataset_user.json')
rv_raw = spark.read.json(datapath+'/yelp_academic_dataset_review.json')
bz_raw = spark.read.json(datapath+'/yelp_academic_dataset_business.json')
tp_raw = spark.read.json(datapath+'/yelp_academic_dataset_tip.json')

bz_raw.createOrReplaceTempView('bz')
rv_raw.createOrReplaceTempView('rv')
usr_raw.createOrReplaceTempView('usr')
tp_raw.createOrReplaceTempView('tp')

# Inspecting the structure
bz_raw.printSchema()

# Checking the SQL catalog
print(spark.catalog.listTables())

# bz_raw.columns
# usr_raw.columns
# rv_raw.columns
# tp_raw.columns
###Output
_____no_output_____
###Markdown
The Business Base
###Code
# Expand the nested "hours" and "attributes" structs into flat columns
dfs = []
for x in ["hours", "attributes"]:
    cols = bz_raw.select(f"{x}.*").columns
    for col in cols:
        try:
            dfs.append(dfs[-1].withColumn(col, f.col(f"{x}.{col}")))
        except IndexError:
            dfs.append(bz_raw.withColumn(col, f.col(f"{x}.{col}")))
df_final = dfs[-1].drop("hours", "attributes")
df_final.createOrReplaceTempView("df_final")
df_final.printSchema()
###Output
root
 |-- address: string (nullable = true)
 |-- business_id: string (nullable = true)
 |-- categories: string (nullable = true)
 |-- city: string (nullable = true)
 |-- is_open: long (nullable = true)
 |-- latitude: double (nullable = true)
 |-- longitude: double (nullable = true)
 |-- name: string (nullable = true)
 |-- postal_code: string (nullable = true)
 |-- review_count: long (nullable = true)
 |-- stars: double (nullable = true)
 |-- state: string (nullable = true)
 |-- Friday: string (nullable = true)
 |-- Monday: string (nullable = true)
 |-- Saturday: string (nullable = true)
 |-- Sunday: string (nullable = true)
 |-- Thursday: string (nullable = true)
 |-- Tuesday: string (nullable = true)
 |-- Wednesday: string (nullable = true)
 |-- AcceptsInsurance: string (nullable = true)
 |-- AgesAllowed: string (nullable = true)
 |-- Alcohol: string (nullable = true)
 |-- Ambience: string (nullable = true)
 |-- BYOB: string (nullable = true)
 |-- BYOBCorkage: string (nullable = true)
 |-- BestNights: string (nullable = true)
 |-- BikeParking: string (nullable = true)
 |-- BusinessAcceptsBitcoin: string (nullable = true)
 |-- BusinessAcceptsCreditCards: string (nullable = true)
 |-- BusinessParking: string (nullable = true)
 |-- ByAppointmentOnly: string (nullable = true)
 |-- Caters: string (nullable = true)
 |-- CoatCheck: string (nullable = true)
 |-- Corkage: string (nullable = true)
 |-- DietaryRestrictions: string (nullable = true)
 |-- DogsAllowed: string (nullable = true)
 |-- DriveThru: string (nullable = true)
 |-- GoodForDancing: string (nullable = true)
 |-- GoodForKids: string (nullable = true)
 |-- GoodForMeal: string (nullable = true)
 |-- HairSpecializesIn: string (nullable = true)
 |-- HappyHour: string (nullable = true)
 |-- HasTV: string (nullable = true)
 |-- Music: string (nullable = true)
 |-- NoiseLevel: string (nullable = true)
 |-- Open24Hours: string (nullable = true)
 |-- OutdoorSeating: string (nullable = true)
 |-- RestaurantsAttire: string (nullable = true)
 |-- RestaurantsCounterService: string (nullable = true)
 |-- RestaurantsDelivery: string (nullable = true)
 |-- RestaurantsGoodForGroups: string (nullable = true)
 |-- RestaurantsPriceRange2: string (nullable = true)
 |-- RestaurantsReservations: string (nullable = true)
 |-- RestaurantsTableService: string (nullable = true)
 |-- RestaurantsTakeOut: string (nullable = true)
 |-- Smoking: string (nullable = true)
 |-- WheelchairAccessible: string (nullable = true)
 |-- WiFi: string (nullable = true)
###Markdown
Road Map
* bz:
  - process the attributes
  - select the ones that make sense and transform true/false into dummies
  - run a k-means on the base (see the sketch at the end of this notebook)

Creating the Main Unified Base

Unifying the bases to build the models - "Joins"

Joining the review information, the establishments of the chosen city, and the users who frequent those establishments.

Reviews + Business
###Code
base = spark.sql("""
SELECT A.business_id,
A.cool AS cool_rv,
A.date AS date_rv,
A.funny AS funny_rv,
A.review_id,
A.stars AS stars_rv,
A.text AS text_rv,
A.useful AS useful_rv,
A.user_id,
B.address AS address_bz,
B.categories AS categories_bz,
B.city AS city_bz,
B.hours AS hours_bz,
B.is_open AS is_open_bz,
B.latitude AS latitude_bz,
B.longitude AS longitude_bz,
B.name AS name_bz,
B.postal_code AS postal_code_bz,
B.review_count AS review_count_bz,
B.stars AS stars_bz,
B.state AS state_bz
FROM rv as A
LEFT JOIN bz as B
ON A.business_id = B.business_id
WHERE B.city = 'Toronto' AND B.state = 'ON' AND B.review_count > 20
AND (B.categories like '%Restaurant%' OR B.categories like '%Food%')
""")
# base.show(5)
base.createOrReplaceTempView('base')
###Output
_____no_output_____
###Markdown
- Count the number of rows to guarantee that the integrity of the dataset is maintained throughout the processing.
###Code
# rows in the reviews + business base
# spark.sql('''
# SELECT Count(*)
# FROM base
# ''').show()
###Output
_____no_output_____
###Markdown
(Reviews + Business) + Users
###Code
base1 = spark.sql("""
SELECT A.*,
B.average_stars AS stars_usr,
B.compliment_cool AS compliment_cool_usr,
B.compliment_cute AS compliment_cute_usr,
B.compliment_funny AS compliment_funny_usr,
B.compliment_hot AS compliment_hot_usr,
B.compliment_list AS compliment_list_usr,
B.compliment_more AS compliment_more_usr,
B.compliment_note AS compliment_note_usr,
B.compliment_photos AS compliment_photos_usr,
B.compliment_plain AS compliment_plain_usr,
B.compliment_profile AS compliment_profile_usr,
B.compliment_writer AS compliment_writer_usr,
B.cool AS cool_usr,
B.elite AS elite_usr,
B.fans AS fans_usr,
B.friends AS friends_usr,
B.funny AS funny_usr,
B.name AS name_usr,
B.review_count AS review_count_usr,
B.useful AS useful_usr,
B.yelping_since AS yelping_since_usr
FROM base as A
LEFT JOIN usr as B
ON A.user_id = B.user_id
""")
base1.createOrReplaceTempView('base1')

# rows in the reviews + business + users base
# spark.sql('''
# SELECT Count(*)
# FROM base1
# ''').show()

aux = spark.sql('''
SELECT user_id,
city_bz,
yelping_since_usr,
COUNT(review_id) AS city_review_counter_usr,
review_count_usr
FROM base1
GROUP BY user_id, review_count_usr, city_bz, yelping_since_usr
ORDER BY city_review_counter_usr DESC
''')
aux.createOrReplaceTempView('aux')
# aux.show()
###Output
_____no_output_____
###Markdown
Apparently the users write reviews at establishments not only in Toronto. To include this information in the model, a variable will be created with the ratio between the user's number of reviews in the city and the user's total number of reviews.

- Average number of reviews per user, in the city and in total
###Code
# spark.sql('''
# SELECT AVG(city_review_counter_usr),
# AVG(review_count_usr)
# FROM aux
# ''').show()
###Output
_____no_output_____
###Markdown
- Removal of users with only 1 review in the city
###Code
base2 = spark.sql('''
SELECT A.*,
B.city_review_counter_usr,
(B.city_review_counter_usr/B.review_count_usr) AS city_review_ratio_usr
FROM base1 as A
LEFT JOIN aux as B
ON A.user_id = B.user_id
WHERE B.city_review_counter_usr > 1
''')
base2.createOrReplaceTempView('base2')

# rows in the reviews + business + users base
# spark.sql('''
# SELECT Count(*)
# FROM base2
# ''').show()
###Output
_____no_output_____
###Markdown
- Classification of the ratings into Good (1: rated 4 stars or above) and Bad or nonexistent (0: rated below 4 stars).
###Code
base3 = spark.sql("""
SELECT *,
(CASE WHEN stars_rv >=4 THEN 1 ELSE 0 END) as class_rv,
(CASE WHEN stars_bz >=4 THEN 1 ELSE 0 END) as class_bz,
(CASE WHEN stars_usr >=4 THEN 1 ELSE 0 END) as class_usr
FROM base2
""")
base3.columns
base3.createOrReplaceTempView('base3')

# spark.sql('''
# SELECT Count(*)
# FROM base3
# ''').show()
###Output
_____no_output_____
###Markdown
((Reviews + Business) + Users) + Tips
###Code
# spark.sql('''
# SELECT business_id, user_id,
# count(text) AS tips_counter,
# sum(compliment_count) as total_compliments
# FROM tp
# GROUP BY business_id, user_id
# ORDER BY total_compliments DESC
# ''').show()

base4 = spark.sql('''
SELECT A.*,
IFNULL(B.compliment_count,0) AS compliment_count_tip,
IFNULL(B.text,'') AS tip
FROM base3 as A
LEFT JOIN tp as B
ON (A.user_id = B.user_id AND A.business_id = B.business_id)
''')
# base4.select('business_id', 'user_id','tip','compliment_count_tip').show()
# base4.select('text_rv','tip').show()
###Output
_____no_output_____
###Markdown
Text Processing
###Code
def word_clean(sdf,col,new_col):
    # Expand common English contractions, strip non-word characters and lowercase
    rv1 = sdf.withColumn(new_col,f.regexp_replace(f.col(col), "'d", " would"))
    rv2 = rv1.withColumn(new_col,f.regexp_replace(f.col(new_col), "'ve", " have"))
    rv3 = rv2.withColumn(new_col,f.regexp_replace(f.col(new_col), "'s", " is"))
    rv4 = rv3.withColumn(new_col,f.regexp_replace(f.col(new_col), "'re", " are"))
    rv5 = rv4.withColumn(new_col,f.regexp_replace(f.col(new_col), "n't", " not"))
    rv6 = rv5.withColumn(new_col,f.regexp_replace(f.col(new_col), '\W+', " "))
    rv7 = rv6.withColumn(new_col,f.lower(f.col(new_col)))
    return rv7

base5 = word_clean(base4,'text_rv','text_clean')
base6 = word_clean(base5,'tip','tip_clean')
# base6.select('text_clean','tip_clean').show()
###Output
_____no_output_____
###Markdown
- Counting each user's friends
###Code
base7 = base6.withColumn('friends_counter_usr', f.size(f.split(f.col('friends_usr'),',')))
base7.createOrReplaceTempView('base7')
base8 = spark.sql('''
SELECT *,
(CASE WHEN friends_usr = 'None' THEN 0 ELSE friends_counter_usr END) as friends_count_usr
FROM base7
''')

df = base8.select('friends_usr','friends_counter_usr','friends_count_usr').limit(10).toPandas()
df.dtypes
df
# base8.select('friends_usr','friends_counter_usr','friends_count_usr').show()
###Output
_____no_output_____
###Markdown
Concatenating Comments per User - Review + Tips
###Code
base9 = base8.withColumn('rv_tip', f.concat(f.col('text_clean'),f.lit(' '), f.col('tip_clean')))
# base9.select('text_clean','tip_clean','rv_tip','stars_rv','compliment_count_tip','funny_rv','cool_rv').show()
base9.createOrReplaceTempView('base9')

# spark.sql('''
# SELECT stars_rv, count(tip_clean) as tip_counter
# FROM base9
# GROUP BY stars_rv
# ORDER BY tip_counter DESC
# ''').show()
###Output
_____no_output_____
###Markdown
- Removal of columns that will not be used in the first modelling round
###Code
base_final = base9.drop('friends_usr','friends_counter_usr','name_usr','city_bz',
                        'address_bz','state_bz', 'hours_bz','text_rv','tip','tip_clean','elite_usr')#,'review_id')
base_final.columns
###Output
_____no_output_____
###Markdown
Saving the Analytical Base to CSV
###Code
base_final.write \
    .format('csv') \
    .mode('overwrite') \
    .option('sep', ',') \
    .option('header', True) \
    .save('output/yelp.csv')
###Output
_____no_output_____
###Markdown
Base for the Topic Model

Text information that will be processed with topic models in R
###Code
words = base_final.select('review_id','user_id','business_id','categories_bz','stars_rv','rv_tip')
words2 = words.withColumn('category_bz', f.explode(f.split(f.col('categories_bz'),', ')))
words3 = words2.drop('categories_bz')
# words3.show()
#words4 = words3.withColumn('word', f.explode(f.split(f.col('review_tip'),' ')))
###Output
_____no_output_____
###Markdown
Saving the Auxiliary Base for the Topic Model - "Reviews + Tips"
###Code
words3.write \
    .format('csv') \
    .mode('overwrite') \
    .option('sep', ',') \
    .option('header', True) \
    .save('output/yelp_words.csv')
###Output
_____no_output_____
###Markdown
Distance Matrix

Structuring the data for hierarchical clustering - preparation for building a distance matrix based on the score of each review.
###Code
dist1 = base_final.select('user_id','categories_bz','stars_rv')
# dist1.show()
dist2 = dist1.withColumn('category_bz', f.explode(f.split(f.col('categories_bz'),', ')))
# dist2.show()
dist2.createOrReplaceTempView('dist')
###Output
_____no_output_____
###Markdown
- Number of users and establishments
###Code
# spark.sql('''
# SELECT Count(DISTINCT user_id)
# FROM dist
# ''').show()

# spark.sql('''
# SELECT Count(DISTINCT categories_bz)
# FROM dist
# ''').show()

# spark.sql('''
# SELECT Count(DISTINCT category_bz)
# FROM dist
# ''').show()
###Output
_____no_output_____
###Markdown
- Increasing the maximum number of pivot columns according to the number of establishments
###Code
#spark.conf.set('spark.sql.pivotMaxValues', u'21000')
dist3 = dist2.groupBy("user_id").pivot("category_bz").mean("stars_rv")
dist4 = dist3.fillna(0)
# dist4.show()
###Output
_____no_output_____
###Markdown
Saving the Auxiliary Base for the Distance Matrix - "Category"
###Code
dist4.write \
    .format('csv') \
    .mode('overwrite') \
    .option('sep', ',') \
    .option('header', True) \
    .save('output/yelp_dist.csv')
###Output
_____no_output_____
###Markdown
Graphical Analysis

Heatmap - creating a heat map of the review concentration
###Code
base_mapas = base_final#.limit(1000)
base_mapas.createOrReplaceTempView('base_mapas')
mapa1 = spark.sql("""
SELECT latitude_bz,
longitude_bz
FROM base_mapas
WHERE latitude_bz is not null
AND longitude_bz is not null
""")
mapa1.show(10)
###Output
+-------------+--------------+
|  latitude_bz|  longitude_bz|
+-------------+--------------+
|   43.6697687|    -79.382838|
|43.6386597113|   -79.3806966|
|43.6630940441|-79.3840069721|
|    43.656838|    -79.399237|
|43.6599496025| -79.479805281|
|   43.6547562|   -79.3874925|
|   43.6376269|    -79.393259|
|43.6543411559|-79.4004796073|
|43.6729833023|-79.2866801843|
|    43.655584|   -79.3985383|
+-------------+--------------+
only showing top 10 rows
###Markdown
Finding the central latitude and longitude point of the map
###Code
# spark.sql("""
# SELECT avg(latitude_bz) as avg_lat,
# avg(longitude_bz) as avg_long
# FROM base_mapas
# """).show()

import folium
from folium import plugins
mapa = folium.Map(location=[43.6732, -79.3919], zoom_start=11, tiles='Stamen Toner')
# OpenStreetMap, Stamen Terrain, Stamen Toner
mapa

# Note: the columns in this base carry the `_bz` suffix, so they are selected here as
# `latitude_bz`/`longitude_bz` (the bare names `latitude`/`longitude` would raise a KeyError)
lat = mapa1.toPandas()['latitude_bz'].values
lon = mapa1.toPandas()['longitude_bz'].values
coordenadas = []
for la, lo in zip(lat, lon):
    coordenadas.append([la,lo])
mapa.add_child(plugins.HeatMap(coordenadas))

lat_lon3 = spark.sql("""
SELECT 'ON' as state,
(SUM(review_count_bz) / (select SUM(review_count_bz) from base_mapas))*100 as review_perc
FROM base_mapas
WHERE latitude_bz is not null
AND longitude_bz is not null
GROUP BY state
""")

## Geo-Json of Canada - https://geojson-maps.ash.ms/
#url = 'https://raw.githubusercontent.com/AshKyd/geojson-regions/master/countries/110m/'
# state_geo = f'{url}/CAN.geojson'
url = 'https://raw.githubusercontent.com/jasonicarter/toronto-geojson/master/'
state_geo = f'{url}/toronto_crs84.geojson'

df = lat_lon3.toPandas()
m = folium.Map(location=[43, -79], zoom_start=10)
bins = list(df['review_perc'].quantile([0, 0.25, 0.5, 0.75, 1]))
folium.Choropleth(
    geo_data=state_geo,
    name='choropleth',
    data=df,
    columns=['state', 'review_perc'],
    key_on='feature.properties.name',
    fill_color='BuPu',
    fill_opacity=0.7,
    line_opacity=0.2,
    bins=bins,
    legend_name='Reviews (%)',
    reset=True
).add_to(m)
m
###Output
_____no_output_____
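###Markdown
The Road Map earlier mentions running a k-means on the business base. That step is not implemented in this notebook; the following is only a minimal sketch of how it could be done with `pyspark.ml`. The feature columns chosen here (`latitude_bz`, `longitude_bz`, `stars_bz`) and the choice of k=5 are illustrative assumptions, not the original feature set:
###Code
# Hedged sketch: k-means over a few numeric business columns (assumed feature set)
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

features_df = VectorAssembler(
    inputCols=['latitude_bz', 'longitude_bz', 'stars_bz'],
    outputCol='features'
).transform(base_final.na.drop(subset=['latitude_bz', 'longitude_bz', 'stars_bz']))

kmeans = KMeans(k=5, seed=42, featuresCol='features')
model = kmeans.fit(features_df)
clustered = model.transform(features_df)  # adds a 'prediction' column with the cluster id
clustered.select('business_id', 'prediction').show(5)
###Output
_____no_output_____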
preprocessing/CVAT_to_CSV.ipynb
###Markdown
Example to convert XML annotations from CVAT to a csv format
###Code
# Import statements
import pandas as pd
import numpy as np
import os
import xml.etree.ElementTree as ET
import copy
###Output
_____no_output_____
###Markdown
1. Prepare header of CSV file
###Code
# List of the keypoints
keypoints = ['LFHoof', 'LFAnkle', 'LFKnee',
             'RFHoof', 'RFAnkle', 'RFKnee',
             'LHHoof', 'LHAnkle', 'LHKnee',
             'RHHoof', 'RHAnkle', 'RHKnee',
             'Nose', 'HeadTop',
             'Spine1', 'Spine2', 'Spine3']

# Make header for the CSV file. Here, we have video, frame, and then 3 columns per keypoint: x, y and likelihood.
# Note that I never used the likelihood in my research, but also never really bothered to remove it from my csv files...
header = ['video','frame']
for k in keypoints:
    header.append(k+"_x")
    header.append(k+"_y")
    header.append(k+"_likelihood")
###Output
_____no_output_____
###Markdown
2. Parse the XML file
###Code
def xml_to_csv(save_path, xml_file, header):
    """
    Function that parses a CVAT XML file and saves the annotations in a csv format
    :param save_path: path of the folder where to save the csv file
    :param xml_file: the CVAT xml file containing the annotations. It should be saved as images, and not video format (I think, it was a long time ago)
    :param header: the header of the csv file
    """
    # Get the parser for the CSV file
    tree = ET.parse(xml_file)
    root = tree.getroot()
    video_name = root.find('meta').find('source').text
    print(video_name)
    images = root.findall('image')
    print(len(images))

    # Init dict
    video_labels = {}
    for h in header:
        video_labels[h] = [None] * len(images)  # empty list of the number of images

    stop_video = False
    i = -1
    # Loop through images
    for j, image in enumerate(images):
        points = list(image)
        if len(points) == 0:
            # Frame has no labels: shrink every column by one entry
            for h in video_labels:
                video_labels[h].pop()
            continue
        i += 1
        if len(points) != 17:
            # If there are more or fewer than 17 keypoints, there is a problem with
            # the labels of that frame; you need to check it in CVAT
            print(video_name, "frame:", image.attrib['name'], len(points))
            stop_video = True
        video_labels['video'][i] = video_name
        video_labels['frame'][i] = int(image.attrib['name'].split('_')[1])  # "frame_123456"
        # print(video_labels['frame'][i])
        for point in points:  # loop through the keypoints
            bodypart = point.attrib['label']
            xy = point.attrib['points'].split(',')  # [x,y]
            attributes = point.findall('attribute')
            like = None  # default in case no likelihood attribute is present
            # likelihood -- you should probably comment this part out if you don't use likelihood
            for attr in attributes:
                if attr.attrib['name'] == 'likelihood':
                    like = attr.text
            if video_labels[bodypart+'_x'][i] != None:
                print(bodypart, 'double keypoint', video_name, "frame:", image.attrib['name'], video_labels[bodypart+'_x'][i], xy[0])
                stop_video = True
                continue
            # Check that the keypoints are not too far from the ones in the neighbouring
            # frames (wrong labels). You can comment this out.
            if i > 0 and video_labels[bodypart+'_x'][i-1] != None:
                diff_x = np.abs(float(xy[0]) - float(video_labels[bodypart+'_x'][i-1]))
                diff_y = np.abs(float(xy[1]) - float(video_labels[bodypart+'_y'][i-1]))
                if diff_x >= 100:
                    print(bodypart, 'outlier', video_name, "frame:", image.attrib['name'], 'x', diff_x)
                    stop_video = True
                    continue
                if diff_y >= 30:
                    print(bodypart, 'outlier', video_name, "frame:", image.attrib['name'], 'y', diff_y)
                    stop_video = True
                    continue
            video_labels[bodypart+'_x'][i] = xy[0]
            video_labels[bodypart+'_y'][i] = xy[1]
            video_labels[bodypart+'_likelihood'][i] = like

    if stop_video:
        print('stop')
    else:
        df = pd.DataFrame(video_labels)
        # df.head()
        csv_file = video_name.split('.')[0]+'.csv'
        print(os.path.join(save_path, csv_file))
        df.to_csv(os.path.join(save_path, csv_file), index=False)
###Output
_____no_output_____
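###Markdown
3. Example usage. A hedged example call of the function above; the folder and file names are hypothetical placeholders, not files shipped with this repository:
###Code
# Hypothetical paths - adjust to your own CVAT export and output folder
xml_file = 'annotations/task_horse_video_01.xml'
save_path = 'labels_csv'

os.makedirs(save_path, exist_ok=True)
xml_to_csv(save_path, xml_file, header)
###Output
_____no_output_____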
OMP.ipynb
###Markdown
Orthogonal Matching Pursuit (OMP): greedily pick, at each iteration, the dictionary column most correlated with the current residual, solve a least-squares problem on the selected columns, and update the residual until it is (near) zero or the iteration limit is reached.
###Code
import numpy as np

def find_sparse_representation(phi, y):
    iteration = 0
    max_iteration = phi.shape[1]
    epsilon = 1.0e-5
    A = np.empty((phi.shape[0], 0))        # matrix of the selected columns of phi
    A_index = np.array([], dtype="int")    # indices of the selected columns
    sparse = np.zeros(phi.shape[1])
    x = np.array([])
    residue = y.copy()
    while iteration < max_iteration and np.linalg.norm(residue, ord=1) > epsilon:
        # Pick the column of phi most correlated with the residual
        projection = np.absolute(phi.T @ residue)
        argmax_index = np.argmax(projection)
        insert_index = np.searchsorted(A_index, argmax_index)
        A_index = np.insert(A_index, insert_index, argmax_index, axis=0)
        A = np.insert(A, insert_index, phi.T[argmax_index], axis=1)
        # Least-squares solve on the selected columns, then update the residual
        x = np.linalg.pinv(A) @ y
        residue = y - (A @ x)
        iteration += 1
    for idx, x_val in enumerate(x):
        sparse[A_index[idx]] = x_val
    return sparse

phi = np.array([[1, 0, 1, 0, 0, 1],
                [0, 1, 1, 1, 0, 0],
                [1, 0, 0, 1, 1, 0],
                [0, 1, 0, 0, 1, 1]])
y = np.array([0, -10, -100, 0])
sparse = find_sparse_representation(phi, y)
print(sparse)
print(phi @ sparse)
###Output
[-45.   0.  45. -55.   0.   0.]
[ 1.42108547e-14 -1.00000000e+01 -1.00000000e+02  0.00000000e+00]
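###Markdown
As an optional cross-check (an addition for this write-up, not part of the original notebook), scikit-learn ships its own OMP implementation; on the same toy problem it should recover a comparable sparse representation. This assumes scikit-learn is installed:
###Code
from sklearn.linear_model import OrthogonalMatchingPursuit

# n_nonzero_coefs=3 matches the sparsity found above; fit_intercept=False keeps the model y = phi @ x
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
omp.fit(phi, y)
print(omp.coef_)        # sparse coefficients, comparable to `sparse` above
print(phi @ omp.coef_)  # reconstruction of y
###Output
_____no_output_____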
demos/NWIS_demo_1.ipynb
###Markdown
National trends in peak annual streamflow

Introduction
This notebook demonstrates a slightly more advanced application of data_retrieval.nwis: collecting a national dataset of historical peak annual streamflow measurements. The objective is to use a regression of peak annual streamflow against time to identify any trends, not for a single station, but for stations across the entire country.

Setup
Before we begin any analysis, we'll need to set up our environment by importing any modules.
###Code
from scipy import stats
import pandas as pd
import numpy as np
from mpl_toolkits.basemap import Basemap, cm
import matplotlib.pyplot as plt
from data_retrieval import nwis, utils, codes
###Output
_____no_output_____
###Markdown
Basic usage
Recall that the basic way to download data from NWIS is through the `nwis.get_record()` function, which returns a user-specified record as a `pandas` dataframe. The `nwis.get_record()` function is really a facade of sorts, that allows the user to download data from various NWIS services through a consistent interface. To get started, we require a few simple parameters: a list of site numbers or state codes, a service, and a start date.
###Code
# download annual peaks from a single site
df = nwis.get_record(sites='03339000', service='peaks', start='1970-01-01')
df.head()

# alternatively, information for the entire state of Illinois can be downloaded using
#df = nwis.get_record(state_cd='il', service='peaks', start='1970-01-01')
###Output
_____no_output_____
###Markdown
Most of the fields are empty, but no matter. All we require are date (`datetime`), site number (`site_no`), and peak streamflow (`peak_va`).
Note that when multiple sites are specified, `nwis.get_record()` will combine `datetime` and `site_no` fields to create a multi-index dataframe.

Preparing the regression
Next we'll define a function that applies ordinary least squares on peak discharge and time. After grouping the dataset by `site_no`, we will apply the regression on a per-site basis. The results from each site will be returned as a row that includes the slope, y-intercept, r$^2$, p value, and standard error of the regression.
###Code
def peak_trend_regression(df):
    """Regress normalized annual peak discharge against time (in days) for one site."""
    #convert datetimes to days for regression
    peak_date = df.index
    peak_date = pd.to_datetime(df.index.get_level_values(1))
    df['peak_d'] = (peak_date - peak_date.min()) / np.timedelta64(1,'D')
    #df['peak_d'] = (df['peak_dt'] - df['peak_dt'].min()) / np.timedelta64(1,'D')

    #normalize the peak discharge values
    df['peak_va'] = (df['peak_va'] - df['peak_va'].mean())/df['peak_va'].std()

    slope, intercept, r_value, p_value, std_error = stats.linregress(df['peak_d'], df['peak_va'])
    #df_out = pd.DataFrame({'slope':slope,'intercept':intercept,'p_value':p_value},index=df['site_no'].iloc[0])
    #return df_out
    return pd.Series({'slope':slope,'intercept':intercept,'p_value': p_value,'std_error':std_error})
###Output
_____no_output_____
###Markdown
Preparing the analysis
###Code
def peak_trend_analysis(states, start_date):
    """
    states : list
        a list containing the two-letter codes for each state to include in the analysis.

    start_date : string
        the date to use as the beginning of the analysis.
    """
    final_df = pd.DataFrame()
    for state in states:
        # download annual peak discharge records
        df = nwis.get_record(state_cd=state, start=start_date, service='peaks')
        # group the data by site and apply our regression
        temp = df.groupby('site_no').apply(peak_trend_regression).dropna()
        # drop any insignificant results
        temp = temp[temp['p_value']<0.05]
        # now download metadata for each site, which we'll use later to plot the sites
        # on a map
        site_df = nwis.get_record(sites=temp.index, service='site')

        if final_df.empty:
            final_df = pd.merge(site_df, temp, right_index=True, left_on='site_no')
        else:
            final_df = final_df.append(
                pd.merge(site_df, temp, right_index=True, left_on='site_no')
            )
    return final_df
###Output
_____no_output_____
###Markdown
To run the analysis for all states since 1970, one would only need to uncomment and run the following lines. However, pulling all that data from NWIS takes time and could put a burden on resources.
###Code
# Warning: these lines will download a large dataset from the web and
# will take a few minutes to run.
#start = '1970-01-01'
#states = codes.state_codes
#final_df = peak_trend_analysis(states=states, start_date=start)
#final_df.to_csv('datasets/peak_discharge_trends.csv')
###Output
_____no_output_____
###Markdown
Instead, let's quickly load some predownloaded data, which I generated using the code above.
###Code
final_df = pd.read_csv('datasets/peak_discharge_trends.csv')
final_df.head()
###Output
_____no_output_____
###Markdown
Notice how the data has been transformed. In addition to statistics about the peak streamflow trends, we've also used the NWIS site service to add latitude and longitude information for each station.

Plotting the results
Finally we'll use `basemap` and `matplotlib`, along with the location information from NWIS, to plot the results on a map (shown below). Stations with increasing peak annual discharge are shown in red, whereas stations with decreasing peaks are blue.
###Code
fig = plt.figure(num=None, figsize=(10, 6) )

# setup a basemap covering the contiguous United States
m = Basemap(width=5500000, height=4000000, resolution='l',
            projection='aea',
            lat_1=36., lat_2=44, lon_0=-100, lat_0=40)

# add coastlines
m.drawcoastlines(linewidth=0.5)

# add parallels and meridians.
m.drawparallels(np.arange(-90.,91.,15.),labels=[True,True,False,False],dashes=[2,2])
m.drawmeridians(np.arange(-180.,181.,15.),labels=[False,False,False,True],dashes=[2,2])

# add boundaries and rivers
m.drawcountries(linewidth=1, linestyle='solid', color='k' )
m.drawstates(linewidth=0.5, linestyle='solid', color='k')
m.drawrivers(linewidth=0.5, linestyle='solid', color='cornflowerblue')

increasing = final_df[final_df['slope'] > 0]
decreasing = final_df[final_df['slope'] < 0]

#x,y = m(lons, lats)
# categorical plots get a little ugly in basemap
m.scatter(increasing['dec_long_va'].tolist(),
          increasing['dec_lat_va'].tolist(),
          label='increasing', s=2, color='red',
          latlon=True)

m.scatter(decreasing['dec_long_va'].tolist(),
          decreasing['dec_lat_va'].tolist(),
          label='decreasing', s=2, color='blue',  # label fixed: this scatter plots the decreasing stations
          latlon=True)
###Output
_____no_output_____
casestudy_agriculture.ipynb
###Markdown
Agriculture Case Study

Background
During a normal year, sugar cane in Queensland typically flowers early May through June; July to November is typically cane harvesting season.

The Problem
While sugar is growing, fields may look visually similar, but health or growth rates from these fields can be quite different, leading to variability and unpredictability in revenue. Identifying underperforming crops can have two benefits:
- Ability to scout for frost or disease damage.
- Ability to investigate poor performing paddocks and undertake management action such as soil testing or targeted fertilising to improve yield.

Digital Earth Australia Use Case
Satellite imagery can be used to measure pasture health over time and identify any changes in growth patterns between otherwise similar paddocks.
The normalised difference vegetation index (NDVI) describes the difference between visible and near-infrared reflectance of vegetation cover. This index estimates the density of green on an area of land and can be used to track the health and growth of sugar as it matures. Comparing the NDVI of two similar paddocks will help to identify any anomalies in growth patterns.
In this example, data from the European Sentinel-2 satellites is used to make a near real-time assessment of crop growing patterns, facilitating management decisions in the field. This data is made available through the Copernicus Regional Data Hub and Digital Earth Australia within 1-2 days of capture.
The worked example below takes users through the code required to:
- Create a time series data cube over a farming property.
- Select multiple paddocks for comparison.
- Create graphs to identify crop performance trends over the previous month.
- Interpret the results.

Technical details
**Products used: NDVI**
The normalised difference vegetation index is calculated from near infra-red (NIR) and red band measurements. It takes values from -1 to 1, with high values corresponding to dense vegetation. It is calculated as
$$\text{NDVI} = \frac{\text{NIR}-\text{Red}}{\text{NIR}+\text{Red}}.$$
**Satellite data: Sentinel-2**
Near real-time optical data for the last 90 days. Available from the [Amazon S3 dea-public-data](http://dea-public-data.s3-website-ap-southeast-2.amazonaws.com/?prefix=L2/sentinel-2-nrt/S2MSIARD/) bucket. Covers both Sentinel-2a and Sentinel-2b.
**App functions:** [casestudy_agriculture_functions](/user/user/edit/examples/utils/casestudy_agriculture_functions.py)
* `load_agriculture_data()`: Loads, combines and cleans data from Sentinel-2a and -2b.
* `run_agriculture_app()`: Launches an interactive map and plots average NDVI for selected areas.

Run this notebook

Load the app functions
The relevant Open Data Cube commands are executed by the two app functions `load_agriculture_data()` and `run_agriculture_app()`. To run the notebook, these need to be imported from `utils.casestudy_agriculture_functions`, where they're described.
The `%matplotlib notebook` command allows the notebook to contain interactive plots.
**To run the following cell, click inside and either press the** `Run` **button on the tool-bar or press** `Shift+Enter` **on the keyboard.**
###Code
%matplotlib notebook
from utils.casestudy_agriculture_functions import load_agriculture_data, run_agriculture_app
###Output
_____no_output_____
###Markdown
Load the data
The `load_agriculture_data()` command performs several key steps:
* identify all available Sentinel-2 near real time data in the case-study area over the last 90 days
* remove any bad quality pixels
* keep images where more than half of the image contains good quality pixels
* collate images from Sentinel-2a and Sentinel-2b into a single data-set
* calculate the NDVI from the red and near infrared bands
* return the collated data for analysis
The cleaned and collated data is stored in the `dataset_sentinel2` object. As the command runs, feedback will be provided below the cell, including information on the number of cleaned images loaded from each satellite.
**To run the following cell, click inside and either press the** `Run` **button on the tool-bar or press** `Shift+Enter` **on the keyboard.**
###Code
dataset_sentinel2 = load_agriculture_data()
###Output
_____no_output_____
###Markdown
Run the agriculture app
The `run_agriculture_app()` command launches an interactive map. Drawing polygons within the boundary (which represents the area covered by the loaded data) will result in plots of the average NDVI in that area.
The command works by taking the loaded data `dataset_sentinel2` as an argument.
**To run the following cell, click inside and either press the** `Run` **button on the tool-bar or press** `Shift+Enter` **on the keyboard.**
*Note:* data points will only appear for images where more than 50% of the pixels were classified as good quality. This may cause trend lines on the average NDVI plot to appear disconnected. Available data points will be marked with the `*` symbol.
###Code
run_agriculture_app(dataset_sentinel2)
###Output
_____no_output_____
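###Markdown
For reference, the NDVI calculation described in the technical details can be reproduced directly from red and near-infrared measurements. The sketch below is illustrative only: it assumes an `xarray.Dataset` with `red` and `nir` data variables, which is a common layout for Open Data Cube loads but is not guaranteed to match the internals of `load_agriculture_data()`:
###Code
# Hedged sketch: NDVI = (NIR - Red) / (NIR + Red), assuming float band values
def calculate_ndvi(ds):
    return (ds.nir - ds.red) / (ds.nir + ds.red)

# Hypothetical usage, assuming the loaded dataset exposes 'red' and 'nir' bands:
# ndvi = calculate_ndvi(dataset_sentinel2)
# ndvi.isel(time=0).plot(cmap='RdYlGn', vmin=-1, vmax=1)
###Output
_____no_output_____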
0203_tensor_mani.ipynb
###Markdown
---
View: reshapes a tensor while keeping the number of elements. Very important!!
###Code
t = np.array([[[0, 1, 2],
               [3, 4, 5]],
              [[6, 7, 8],
               [9, 10, 11]]])
ft = torch.FloatTensor(t)

mu.log("t.shape", t.shape)
mu.log("ft.shape", ft.shape)
###Output
t.shape : (2, 2, 3)
ft.shape : torch.Size([2, 2, 3])
###Markdown
---
Reshaping a 3D tensor into a 2D tensor
###Code
mu.log("ft.view([-1, 3])", ft.view([-1, 3]))
mu.log("ft.view([-1, 3]).shape", ft.view([-1, 3]).shape)
###Output
ft.view([-1, 3]) : torch.Size([4, 3]) tensor([[ 0., 1., 2.], [ 3., 4., 5.], [ 6., 7., 8.], [ 9., 10., 11.]])
ft.view([-1, 3]).shape : torch.Size([4, 3])
###Markdown
---
Changing the shape of a 3D tensor
###Code
mu.log("ft.view([-1, 1, 3])", ft.view([-1, 1, 3]))
mu.log("ft.view([-1, 1, 3]).shape", ft.view([-1, 1, 3]).shape)
###Output
ft.view([-1, 1, 3]) : torch.Size([4, 1, 3]) tensor([[[ 0., 1., 2.]], [[ 3., 4., 5.]], [[ 6., 7., 8.]], [[ 9., 1 ...
ft.view([-1, 1, 3]).shape : torch.Size([4, 1, 3])
###Markdown
---
Squeeze: removes the dimensions whose size is 1.
###Code
ft = torch.FloatTensor([[0], [1], [2]])
mu.log("ft", ft)
mu.log("ft.shape", ft.shape)
# the squeeze itself, completing the example promised above
mu.log("ft.squeeze()", ft.squeeze())
mu.log("ft.squeeze().shape", ft.squeeze().shape)
###Output
ft : torch.Size([3, 1]) tensor([[0.], [1.], [2.]])
ft.shape : torch.Size([3, 1])
ft.squeeze() : torch.Size([3]) tensor([0., 1., 2.])
ft.squeeze().shape : torch.Size([3])
###Markdown
---
Unsqueeze: adds a dimension of size 1 at the given position.
###Code
ft = torch.Tensor([0, 1, 2])
mu.log("ft", ft)
mu.log("ft.shape", ft.shape)

mu.log("ft.unsqueeze(0)", ft.unsqueeze(0)) # indices start at 0, so 0 means the first dimension
mu.log("ft.unsqueeze(0).shape", ft.unsqueeze(0).shape)

mu.log("ft.view(1, -1)", ft.view(1, -1))
mu.log("ft.view(1, -1).shape", ft.view(1, -1).shape)

mu.log("ft.unsqueeze(1)", ft.unsqueeze(1))
mu.log("ft.unsqueeze(1).shape", ft.unsqueeze(1).shape)

mu.log("ft.unsqueeze(-1)", ft.unsqueeze(-1))
mu.log("ft.unsqueeze(-1).shape", ft.unsqueeze(-1).shape)
###Output
ft : torch.Size([3]) tensor([0., 1., 2.])
ft.shape : torch.Size([3])
ft.unsqueeze(0) : torch.Size([1, 3]) tensor([[0., 1., 2.]])
ft.unsqueeze(0).shape : torch.Size([1, 3])
ft.view(1, -1) : torch.Size([1, 3]) tensor([[0., 1., 2.]])
ft.view(1, -1).shape : torch.Size([1, 3])
ft.unsqueeze(1) : torch.Size([3, 1]) tensor([[0.], [1.], [2.]])
ft.unsqueeze(1).shape : torch.Size([3, 1])
ft.unsqueeze(-1) : torch.Size([3, 1]) tensor([[0.], [1.], [2.]])
ft.unsqueeze(-1).shape : torch.Size([3, 1])
###Markdown
---
Type Casting
###Code
lt = torch.LongTensor([1, 2, 3, 4])
mu.log("lt", lt)
mu.log("lt.float()", lt.float())

bt = torch.ByteTensor([True, False, False, True])
mu.log("bt", bt)
mu.log("bt.long()", bt.long())
mu.log("bt.float()", bt.float())
###Output
lt : torch.Size([4]) tensor([1, 2, 3, 4])
lt.float() : torch.Size([4]) tensor([1., 2., 3., 4.])
bt : torch.Size([4]) tensor([1, 0, 0, 1], dtype=torch.uint8)
bt.long() : torch.Size([4]) tensor([1, 0, 0, 1])
bt.float() : torch.Size([4]) tensor([1., 0., 0., 1.])
###Markdown
---
Concatenate
###Code
x = torch.FloatTensor([[1, 2], [3, 4]])
y = torch.FloatTensor([[5, 6], [7, 8]])

mu.log("torch.cat([x, y], dim=0)", torch.cat([x, y], dim=0))
mu.log("torch.cat([x, y], dim=1)", torch.cat([x, y], dim=1))
###Output
torch.cat([x, y], dim=0) : torch.Size([4, 2]) tensor([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])
torch.cat([x, y], dim=1) : torch.Size([2, 4]) tensor([[1., 2., 5., 6.], [3., 4., 7., 8.]])
###Markdown
---
Stacking
###Code
x = torch.FloatTensor([1, 4])
y = torch.FloatTensor([2, 5])
z = torch.FloatTensor([3, 6])

mu.log("torch.stack([x, y, z])", torch.stack([x, y, z]))
mu.log("torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0)",
       torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0))
mu.log("torch.stack([x, y, z], dim=1)",
       torch.stack([x, y, z], dim=1))
###Output
torch.stack([x, y, z]) : torch.Size([3, 2]) tensor([[1., 4.], [2., 5.], [3., 6.]])
torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0) : torch.Size([3, 2]) tensor([[1., 4.], [2., 5.], [3., 6.]])
torch.stack([x, y, z], dim=1) : torch.Size([2, 3]) tensor([[1., 2., 3.], [4., 5., 6.]])
###Markdown
---
ones_like and zeros_like: tensors filled with ones and tensors filled with zeros
###Code
x = torch.FloatTensor([[0, 1, 2], [2, 1, 0]])
mu.log("x", x)
mu.log("torch.ones_like(x)", torch.ones_like(x))
mu.log("torch.zeros_like(x)", torch.zeros_like(x))
###Output
x : torch.Size([2, 3]) tensor([[0., 1., 2.], [2., 1., 0.]])
torch.ones_like(x) : torch.Size([2, 3]) tensor([[1., 1., 1.], [1., 1., 1.]])
torch.zeros_like(x) : torch.Size([2, 3]) tensor([[0., 0., 0.], [0., 0., 0.]])
###Markdown
---
In-place Operation (overwriting operation)
###Code
x = torch.FloatTensor([[1, 2], [3, 4]])
mu.log("x", x)
mu.log("x.mul(2.)", x.mul(2.)) # print the result of multiplying by 2 (x itself is unchanged)
mu.log("x", x)
mu.log("x.mul_(2.)", x.mul_(2.)) # multiply by 2 in place, storing the result back into x, then print it
mu.log("x", x)
###Output
x : torch.Size([2, 2]) tensor([[1., 2.], [3., 4.]])
x.mul(2.) : torch.Size([2, 2]) tensor([[2., 4.], [6., 8.]])
x : torch.Size([2, 2]) tensor([[1., 2.], [3., 4.]])
x.mul_(2.) : torch.Size([2, 2]) tensor([[2., 4.], [6., 8.]])
x : torch.Size([2, 2]) tensor([[2., 4.], [6., 8.]])
3_Query_Diabetic_Patients.ipynb
###Markdown
After the two demos, we now try to pull diabetic patients' records

Advice: as we move into more complex SQL queries, I recommend, for efficiency, practising the SQL in pgAdmin first before loading it into Jupyter.

Reference
* Article: "Improving Patient Cohort Identification Using Natural Language Processing", 10 Sep. 2016, https://link.springer.com/chapter/10.1007/978-3-319-43742-2_28. Accessed 25 Nov. 2018.
* Git repository: https://github.com/MIT-LCP/critical-data-book
* To understand MIMIC-III tables: https://mimic.physionet.org/mimictables/
* To understand ICD_9 codes: http://www.icd9data.com/
* SQL queries: https://github.com/MIT-LCP/critical-data-book/tree/master/part_iii/chapter_28

Prepare database

Import libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import psycopg2
import getpass

%matplotlib inline
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Create the database connection. You can hard-code your password here for practice, but it is not recommended to save it into a repository.
###Code
user = 'postgres'
password = 'postgres' # hard-coding the password is fine for offline practice, but don't post it to a repository
host = 'localhost'
dbname = 'mimic'
schema = 'mimiciii' # set to your defined schema name, I use mimiciii here
###Output
_____no_output_____
###Markdown
Connect to the database (execute again if the connection is lost)
###Code
con = psycopg2.connect(dbname=dbname, user=user, host=host, password=password)
cur = con.cursor()
cur.execute('SET search_path to {}'.format(schema)) # this is a compulsory step so the queries can find your schema
print('Thread opened' if not cur.closed else 'Thread closed')

# If you need the password to be entered interactively, use the code below
# con = psycopg2.connect(dbname=dbname, user=user, host=host,
#                        password=getpass.getpass(prompt='Password:'.format(user)))
# cur = con.cursor()
# cur.execute('SET search_path to {}'.format(schema)) # this is a compulsory step so the queries can find your schema
# print('connected' if not cur.connection.closed else 'not connected')
###Output
Password:········
connected
###Markdown
Query using Structured data

Structured data means all records are categorized in the tables, so we can classify them directly.
Unstructured data means the records live in the free-text note area and need text mining to capture them.

First, validate whether the tables as we understand them match what the article describes:
* The unstructured clinical notes include:
  - discharge summaries
  - nursing progress notes
  - physician notes
  - electrocardiogram (ECG) reports
  - echocardiogram reports
  - and radiology reports
* We excluded clinical notes that were related to any imaging results (ECG_Report, Echo_Report, and Radiology_Report).
* We extracted notes from MIMIC-III with the following data elements:
  - patient identification number (SUBJECT_ID),
  - hospital admission identification number (HADM_IDs),
  - intensive care unit stay identification number (ICUSTAY_ID),
  - note type,
  - note date/time,
  - and note text.

Since this is our first time querying the MIMIC-III db, we will practise a few queries to validate that the procedure works well.
The tables that are used in the queries:
* admissions: includes subject_id; all patients
* diagnoses_icd: includes icd9_code, subject_id; patients under diagnosis (covers all patients)
* patients: includes subject_id, dob (covers all patients)
* PROCEDURES_ICD: includes subject_id; those who were under procedures (a subset of all patients)
###Code
# Find the list of categories under the noteevents table
query = \
"""
select distinct(category)
from noteevents;
"""
data = pd.read_sql_query(query, con)
data

# Discharge summaries
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Discharge summary%';
"""
data = pd.read_sql_query(query, con)
data

# Nursing/Nursing others
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Nursing%';
"""
data = pd.read_sql_query(query, con)
data

# Physician notes
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Physician%';
"""
data = pd.read_sql_query(query, con)
data

# ECG reports
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'ECG%';
"""
data = pd.read_sql_query(query, con)
data

# Echocardiogram reports
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Echo%';
"""
data = pd.read_sql_query(query, con)
data

# Radiology reports
query = \
"""
select count(distinct(subject_id))
from noteevents
where category like 'Radiology%';
"""
data = pd.read_sql_query(query, con)
data
###Output
_____no_output_____
###Markdown
Now query Diabetes `structured` data

Diabetes types in ICD9 code
* Diabetes mellitus
  * 249 secondary diabetes mellitus (includes the following codes: 249, 249.0, 249.00, 249.01, 249.1, 249.10, 249.11, 249.2, 249.20, 249.21, 249.3, 249.30, 249.31, 249.4, 249.40, 249.41, 249.5, 249.50, 249.51, 249.6, 249.60, 249.61, 249.7, 249.70, 249.71, 249.8, 249.80, 249.81, 249.9, 249.90, 249.91)
  * 250 diabetes mellitus (includes the following codes: 250, 250.0, 250.00, 250.01, 250.02, 250.03, 250.1, 250.10, 250.11, 250.12, 250.13, 250.2, 250.20, 250.21, 250.22, 250.23, 250.3, 250.30, 250.31, 250.32, 250.33, 250.4, 250.40, 250.41, 250.42, 250.43, 250.5, 250.50, 250.51, 250.52, 250.53, 250.6, 250.60, 250.61, 250.62, 250.63, 250.7, 250.70, 250.71, 250.72, 250.73, 250.8, 250.80, 250.81, 250.82, 250.83, 250.9, 250.90, 250.91, 250.92, 250.93)
* Hemodialysis
  - 585.6 end stage renal disease (requiring chronic dialysis)
  - 996.1 mechanical complication of other vascular device, implant, and graft
  - 996.73 other complications due to renal dialysis device, implant, and graft
  - E879.1 kidney dialysis as the cause of abnormal reaction of patient, or of later complication, without mention of misadventure at time of procedure
  - V45.1 postsurgical renal dialysis status
  - V56.0 encounter for extracorporeal dialysis
  - V56.1 fitting and adjustment of extracorporeal dialysis catheter
* Procedure codes
  - 38.95 venous catheterization for renal dialysis
  - 39.27 arteriovenostomy for renal dialysis
  - 39.42 revision of arteriovenous shunt for renal dialysis
  - 39.43 removal of arteriovenous shunt for renal dialysis
  - 39.95 hemodialysis
###Code
# The total number of patients in the MIMIC III db is 46520, with 58976 records (some patients have multiple records)
# Note: joining the diagnoses_icd table won't impact the query result, so it has the same full records as admissions
query = \
"""
select subject_id, hadm_id
from admissions;
"""
data = pd.read_sql_query(query, con)
print('Total', data.shape[0], ' lines of record')
print('Totally', data.subject_id.unique().shape[0], 'unique patients')

# The total number of patients who have Diabetes mellitus is 10403
# Note: joining the patients table won't impact the query result, so it has full records
query = \
"""
select count(distinct(a.subject_id))
from diagnoses_icd di, admissions a
where di.subject_id = a.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
);
"""
data = pd.read_sql_query(query, con)
data

# The total number of patients who have Diabetes mellitus and are older than 18 is 10397
query = \
"""
select count(distinct(a.subject_id))
from diagnoses_icd di, admissions a, patients p
where di.subject_id = a.subject_id
and a.subject_id = p.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
)
and (
(cast(a.ADMITTIME as date) - cast(p.DOB as date))/365.242 >= 18
);
"""
data = pd.read_sql_query(query, con)
data

# The total number of patients who have Diabetes mellitus, are older than 18, and also received procedures is 9460
query = \
"""
select count(distinct(a.subject_id))
from diagnoses_icd di, admissions a, patients p, procedures_icd pi
where di.subject_id = a.subject_id
and a.subject_id = p.subject_id -- we join patients here for the p.DOB information
and pi.subject_id = a.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
)
and (
(cast(a.ADMITTIME as date) - cast(p.DOB as date))/365.242 >= 18
);
"""
data = pd.read_sql_query(query, con)
data

# Compared to that, the number of all patients who received Hemodialysis is 1316
query = \
"""
select count(distinct(subject_id))
from diagnoses_icd
where ICD9_CODE in ('5856','9961','99673','E8791','V451','V560','V561');
"""
data = pd.read_sql_query(query, con)
data

# Patients who have Diabetes mellitus, are older than 18, and received Hemodialysis: 718
# Note: diagnoses_icd also includes the Hemodialysis ICD9 codes
# We want patients who have both Diabetes mellitus and Hemodialysis, e.g. di.ICD9_CODE of one patient has both '249x' and '5856'
query = \
"""
with diab as
(
select distinct(a.subject_id) -- secondary diabetes adults who were under procedures: 9460
from diagnoses_icd di, admissions a, patients p, procedures_icd pi
where di.subject_id = a.subject_id
and a.subject_id = p.subject_id -- we join patients here for the p.DOB information
and pi.subject_id = a.subject_id
and (
di.ICD9_CODE like '249%' -- secondary diabetes mellitus
or di.ICD9_CODE like '250%' -- diabetes mellitus
)
and ((cast(a.ADMITTIME as date) - cast(p.DOB as date))/365.242 >= 18) -- adults
)
select distinct(di.subject_id) -- secondary diabetes adults under hemodialysis procedures: 718
from diagnoses_icd di, diab
where di.subject_id = diab.subject_id
and di.ICD9_CODE in ('5856','9961','99673','E8791','V451','V560','V561'); -- Hemodialysis
"""
data = pd.read_sql_query(query, con)
print('There are ', len(data), 'patients with diabetes mellitus, adults and received Hemodialysis')
###Output
There are  718 patients with diabetes mellitus, adults and received Hemodialysis
###Markdown
Remember to close the thread after all queries
###Code
cur.close()
print('cursor still open ...' if not cur.closed else 'cursor closed ...')
del cur
print('cursor deleted from instance ...')
con.close()
print('connection still open ...' if not con.closed else 'connection closed')
###Output
cursor closed ...
cursor deleted from instance ...
connection closed
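###Markdown
The procedure codes listed earlier (38.95, 39.27, 39.42, 39.43, 39.95) were not used in the queries above. If you also want to identify hemodialysis patients through the procedures_icd table, a sketch of the query is below. Two caveats: the connection has already been closed at this point, so you would need to reconnect first, and MIMIC-III stores ICD-9 procedure codes without the decimal point (an assumption worth verifying against your local load):
###Code
# Hedged sketch - reconnect before running (see the connection cell above)
query = \
"""
select count(distinct(subject_id))
from procedures_icd
where icd9_code in ('3895','3927','3942','3943','3995'); -- hemodialysis-related procedures
"""
# data = pd.read_sql_query(query, con)
# data
###Output
_____no_output_____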
Modulo3/Ejercicios/Problema2.ipynb
###Markdown
Background Theory
--------------------------
In this exercise you are going to work with the concepts of points, coordinates and vectors on the Cartesian plane, and with how Object-Oriented Programming can be an excellent ally when working with them. It is not meant for you to do any kind of calculation; it is for you to practise the automation of tasks.

The Cartesian plane
Represents a two-dimensional space (in 2 dimensions), formed by two perpendicular lines, one horizontal and one vertical, that cross at a point. The horizontal line is called the abscissa axis or **X axis**, while the vertical one is known as the ordinate axis or simply the **Y axis**. As for the point where they cross, it is known as the **origin point O**. It is important to note that the plane is divided into 4 quadrants:

Points and coordinates
The goal of all this is to describe the position of **points** on the plane in the form of **coordinates**, which are formed by pairing the value on the X axis (horizontal) with the value on the Y axis (vertical).
The representation of a point is simple: **P(X,Y)** where X and Y are the horizontal distance (left or right) and the vertical distance (up or down) respectively, using the origin point (0,0), right at the centre of the plane, as reference.

Vectors in the plane
Finally, a vector in the plane refers to an oriented segment, generated from two distinct points.
For practical purposes it is simply a line drawn from an initial point in the direction of another, final point, so a vector is understood to have length and direction/sense. In this figure, we can observe two points A and B that we could define as follows:
- **A(x1, y1) => A(2, 3)**
- **B(x2, y2) => B(5, 5)**
And the vector is represented as the difference between the coordinates of the second point with respect to the first (the second minus the first):
- **AB = (x2 - x1, y2 - y1) => (5-2, 5-3) => (3,2)**
Which in the end is simply: 3 to the right and 2 up.
And with this we finish this mini review.

Exercise
--------------------------
- Create a class called **Punto** with its two coordinates X and Y.
- Add a **constructor** method to create points easily. If a coordinate is not received, its value will be zero.
- Override the **string** method so that printing a point on screen shows it in the format (X,Y).
- Add a method called **cuadrante** that indicates which quadrant the point belongs to, bearing in mind that if X == 0 and Y != 0 it sits on the Y axis, if X != 0 and Y == 0 it sits on the X axis, and if X == 0 and Y == 0 it is at the origin.
- Add a method called **vector** that takes another point and calculates the resulting vector between the two points.
- **(Optional)** Add a method called **distancia** that takes another point, calculates the distance between the two points and prints it on screen. The formula is the following:

$$ d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} $$

- Create a class called **Rectangulo** with two points (initial and final) that will form the diagonal of the rectangle.
- Add a **constructor** method to create both points easily; if they are not provided, two points at the origin will be created by default.
- Add to the rectangle a method called **base** that shows the base.
- Add to the rectangle a method called **altura** that shows the height.
- Add to the rectangle a method called **area** that shows the area.

Remember
###Code
import math

# square root
print(math.sqrt(9))

# absolute value
print(abs(-4))
###Output
3.0
4
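###Markdown
One possible starting skeleton for the exercise (only a sketch; the quadrant logic and the Rectangulo class are left for you to complete):
###Code
import math

class Punto:
    def __init__(self, x=0, y=0):
        # Missing coordinates default to zero
        self.x = x
        self.y = y

    def __str__(self):
        # Print points in the (X,Y) format
        return "({},{})".format(self.x, self.y)

    def cuadrante(self):
        # TODO: return the quadrant, the axis, or the origin, as described above
        pass

    def vector(self, otro):
        # Vector from this point to the other: (x2 - x1, y2 - y1)
        return Punto(otro.x - self.x, otro.y - self.y)

    def distancia(self, otro):
        # Distance formula from the statement
        return math.sqrt((otro.x - self.x)**2 + (otro.y - self.y)**2)

print(Punto(2, 3).vector(Punto(5, 5)))  # (3,2)
###Output
_____no_output_____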
Misc/mosaicking_and_masking.ipynb
###Markdown Creating a composite image from multiple PlanetScope scenes In this exercise, you'll learn how to create a composite image (or mosaic) from multiple PlanetScope satellite images that cover an area of interest (AOI). We'll use `rasterio`, along with its vector-data counterpart `fiona`, to do this. Step 1. Aquiring Imagery In order to visually search for imagery in our AOI, we'll use [Planet Explorer](https://www.planet.com/explorer/).For this exercise, we're going to visit Yosemite National Park. In the screenshot below you'll see an AOI drawn around [Mount Dana](https://en.wikipedia.org/wiki/Mount_Dana) on the eastern border of Yosemite. You can use [data/mt-dana-small.geojson](data/mt-dana-small.geojson) to search for this same AOI yourself.Here we want an image that depicts the mountain on a clear summer day, so for this data search in Planet Explorer we'll set the filters to show only scenes with less than 5% cloud cover, and narrow down the date range to images captured between July 1-July 31, 2017. Since we're only interested in PlanetScope data, and we're creating a visual - not analytic - product, we'll set the Source to `3-band PlanetScope Scene`. Finally, since we want to create a mosaic that includes our entire AOI, we'll set the Area coverage to full coverage. ![Mount Dana in Planet Explorer](images/explorer-mount-dana.gif) As you can see in the animated gif above, this search yields multiple days within July 2017 that match our filters. After previewing a few days, I decided I like the look of July 21, 2017.After selecting a single day, you can roll over the individual images to preview their coverage. In the gif above, you'll notice that it takes three individual images to completely cover our AOI. In this instance, as I roll over each item in Planet Explorer I can see that the scenes' rectangular footprints extend far beyond Mount Dana. All three scenes overlap slightly, and one scene touches only a small section at the bottom of the AOI. Still, taken together the images provide 100% coverage, so we'll go ahead and place an order for the Visual imagery products for these three scenes. ![Download imagery Planet Explorer](images/explorer-data-order.png) Once the order is ready, download the images, extract them from the .zip, and move them into the `data/` directory adjacent to this Notebook. Step 2. Inspecting Imagery ###Code # Load our 3 images using rasterio import rasterio img1 = rasterio.open('data/20170721_175836_103c_3B_Visual.tif') img2 = rasterio.open('data/20170721_175837_103c_3B_Visual.tif') img3 = rasterio.open('data/20170721_175838_103c_3B_Visual.tif') ###Output _____no_output_____ ###Markdown At this point we can use `rasterio` to inspect the metadata of these three images. Specifically, in order to create a composite from these images, we want to verify that all three images have the same data type, the same coordinate reference systems and the same band count: ###Code print(img1.meta['dtype'], img1.meta['crs'], img1.meta['count']) print(img2.meta['dtype'], img2.meta['crs'], img2.meta['count']) print(img3.meta['dtype'], img3.meta['crs'], img3.meta['count']) ###Output uint8 EPSG:32611 4 uint8 EPSG:32611 4 uint8 EPSG:32611 4 ###Markdown Success - they do! 
But wait, I thought we were using a "Visual" image, and expecting only 3 bands of information (RGB)?Let's take a closer look at what these bands contain: ###Code # Read in color interpretations of each band in img1 - here we'll assume img2 and img3 have the same values colors = [img1.colorinterp[band] for band in range(img1.count)] # take a look at img1's band types: for color in colors: print(color.name) ###Output red green blue alpha ###Markdown The fourth channel is actually a binary alpha mask: this is common in satellite color models, and can be confirmed in Planet's [documentation on the PSSCene3Band product](https://developers.planet.com/docs/api/psscene3band/).Now that we've verified all three satellite images have the same critical metadata, we can safely use `rasterio.merge` to stitch them together. Step 3. Creating the Mosaic ###Code from rasterio.merge import merge # merge returns the mosaic & coordinate transformation information (mosaic, transform) = merge([img1, img2, img3]) ###Output _____no_output_____ ###Markdown Once that process is complete, take a moment to congratulate yourself. At this stage you've successfully acquired adjacent imagery, inspected metadata, and performed a compositing process in order to generate a new mosaic. Well done!Before we go further, let's use `rasterio.plot` (a matplotlib interface) to preview the results of our mosaic. This will just give us a quick-and-dirty visual representation of the results, but it can be useful to verify the compositing did what we expected. ###Code from rasterio.plot import show show(mosaic) ###Output _____no_output_____ ###Markdown At this point we're ready to write our mosaic out to a new GeoTIFF file. To do this, we'll want to grab the geospatial metadata from one of our original images (again, here we'll use img1 to represent the metadata of all 3 input images). ###Code # Grab a copy of our source metadata, using img1 meta = img1.meta # Update the original metadata to reflect the specifics of our new mosaic meta.update({"transform": transform, "height":mosaic.shape[1], "width":mosaic.shape[2]}) with rasterio.open('data/mosaic.tif', 'w', **meta) as dst: dst.write(mosaic) ###Output _____no_output_____ ###Markdown Step 4. Clip the Mosaic to AOI Boundaries Now that we've successfully created a composite mosaic of three input images, the final step is to clip that mosaic to our area of interest. To do that, we'll create a mask for our mosaic based on the AOI boundaries, and crop the mosaic to the extents of that mask.You'll recall that we used Explorer to search for Mount Dana, in Yosemite National Park. The GeoJSON file we used for that search can also be used here, to provide a mask outline for our mosaic.For this step we're going to do a couple things: first, we'll use rasterio's sister-library `fiona` to read in the GeoJSON file. Just as `rasterio` is used to manipulate raster data, `fiona` works similarly on vector data. Where `rasterio` represents raster imagery as numpy arrays, `fiona` represents vector data as GeoJSON-like Python dicts. You can learn [more about fiona here](http://toblerity.org/fiona/manual.html).After reading in the GeoJSON you'll want to extract the _geometry_ of the AOI (_hint:_ `geometry` will be the dict key). A note about Coordinate Reference SystemsIf you attempt to apply the AOI to the mosaic imagery now, rasterio will throw errors, telling you that the two datasets do not overlap. This is because the Coordinate Reference System (CRS) used by each dataset do not match. 
You can verify this by reading the `crs` attribute of the Collection object generated by `fiona.open()`._Hint: the CRS of mt-dana-small.geojson is:_ `'epsg:4326'`You'll recall that earlier we validated the metadata of the original input imagery, and learned it had a CRS of `'epsg:32611'`. Before the clip can be applied, you will need to transform the geometry of the AOI to match the CRS of the imagery. Luckily, fiona is smart enough to apply the necessary mathematical transformation to a set of coordinates in order to convert them to new values: apply `fiona.transform.transform_geom` to the AOI geometry to do this, specifying the GeoJSON's CRS as the source CRS, and the imagery's CRS as the destination CRS. ###Code # use rasterio's sister-library for working with vector data
import fiona

# use fiona to open our original AOI GeoJSON
with fiona.open('data/mt-dana-small.geojson') as mt:
    aoi = [feature["geometry"] for feature in mt]

# transform AOI to match mosaic CRS
from fiona.transform import transform_geom
transformed_coords = transform_geom('EPSG:4326', 'EPSG:32611', aoi[0])

aoi = [transformed_coords]
###Output _____no_output_____ ###Markdown At this stage you have read in the AOI geometry and transformed its coordinates to match the mosaic. We're now ready to use `rasterio.mask.mask` to create a mask over our mosaic, using the AOI geometry as the mask line. Passing `crop=True` to the mask function will automatically crop the bits of our mosaic that fall outside the mask boundary: you can think of it as applying the AOI as a cookie cutter to the mosaic. ###Code # import rasterio's mask tool
from rasterio.mask import mask

# apply mask with crop=True to cut to boundary
with rasterio.open('data/mosaic.tif') as mosaic:
    clipped, transform = mask(mosaic, aoi, crop=True)

# See the results!
show(clipped)
###Output _____no_output_____ ###Markdown Congratulations! You've created a clipped mosaic, showing only the imagery that falls within our area of interest. From here, the only thing left to do is save our results to a final output GeoTIFF. ###Code # save the output to a final GeoTIFF

# use the metadata from our original mosaic
meta = mosaic.meta.copy()

# update metadata with new, clipped mosaic's boundaries
meta.update({"transform": transform,
    "height":clipped.shape[1],
    "width":clipped.shape[2]})

# write the output to a GeoTIFF
with rasterio.open('data/clipped_mosaic.tif', 'w', **meta) as dst:
    dst.write(clipped)
###Output _____no_output_____
content/courses/ml_intro/17_big_data_spark/01-Introduction to Spark and Python.ipynb
###Markdown Introduction to Spark and PythonLet's learn how to use Spark with Python by using the pyspark library! Make sure to view the video lecture explaining Spark and RDDs before continuing on with this code. This notebook will serve as reference code for the Big Data section of the course involving Amazon Web Services. The video will provide fuller explanations for what the code is doing. Creating a SparkContextFirst we need to create a SparkContext. We will import this from pyspark: ###Code from pyspark import SparkContext
###Output _____no_output_____ ###Markdown Now create the SparkContext. A SparkContext represents the connection to a Spark cluster, and can be used to create an RDD and broadcast variables on that cluster.*Note! You can only have one SparkContext at a time the way we are running things here.* ###Code sc = SparkContext()
###Output _____no_output_____ ###Markdown Basic OperationsWe're going to start with a 'hello world' example, which is just reading a text file. First let's create a text file.___ Let's write an example text file to read; we'll use some special jupyter notebook commands for this, but feel free to use any .txt file: ###Code %%writefile example.txt
first line
second line
third line
fourth line
###Output Overwriting example.txt ###Markdown Creating the RDD Now we can take in the text file using the **textFile** method off of the SparkContext we created. This method will read a text file from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI, and return it as an RDD of Strings. ###Code textFile = sc.textFile('example.txt')
###Output _____no_output_____ ###Markdown Spark’s primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs. ActionsWe have just created an RDD using the textFile method and can perform operations on this object, such as counting the rows. RDDs have actions, which return values, and transformations, which return pointers to new RDDs. Let’s start with a few actions: ###Code textFile.count()
textFile.first()
###Output _____no_output_____ ###Markdown TransformationsNow we can use transformations; for example, the filter transformation will return a new RDD with a subset of items in the file. Let's create a sample transformation using the filter() method. This method (just like Python's own filter function) will only return elements that satisfy the condition. Let's try looking for lines that contain the word 'second'. In which case, there should only be one line that has that. ###Code secfind = textFile.filter(lambda line: 'second' in line)

# RDD
secfind

# Perform action on transformation
secfind.collect()

# Perform action on transformation
secfind.count()
###Output _____no_output_____
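###Markdown Transformations are lazy: nothing is actually computed until an action is called on the resulting RDD. As one more illustration of chaining a transformation with an action, the sketch below (reusing the `textFile` RDD from above) maps each line to its word count and then sums those counts with `reduce`: ###Code # Transformation: map each line to the number of words it contains
line_word_counts = textFile.map(lambda line: len(line.split()))

# Action: sum the per-line counts to get the total word count of the file
line_word_counts.reduce(lambda a, b: a + b)
###Output _____no_output_____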
Second Meetup/Binary Classification.ipynb
###Markdown Content* Libraries* Introduction to Problem * Loading Dataset * Visualizing Raw Dataset * Preprocessing * Visualizing Proprocessed Dataset* Logistic Regression with numpy * Forward Propagation * Backward Propagation * Complete Propagation * Combining All Together * Training * Prediction * Evaluation* Logistic Regression with tensorflow * Just Forward Propagation * Training and Evaluation* Logistic Regression with keras * Just Layer Description * Training * Evaluation Libraries ###Code %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import seaborn as sb import pandas as pd from sklearn import datasets from sklearn.model_selection import train_test_split from utils import plot_prediction ###Output _____no_output_____ ###Markdown Introduction to ProblemGiven a set of inputs X as features of flowers, we want to assign them to one of two possible categories 0 or 1 that means what type of flower they are.For solving this problem we use logistic regression because it models the probability that each input belongs to a particular category. Loading Dataset How many features this dataset have? What is the label for this problem? ###Code iris = pd.read_csv('./data/Iris.csv') iris.sample(5) print("Number of Flowers: {}".format(iris.shape[0])) ###Output Number of Flowers: 150 ###Markdown Visualizing Dataset ###Code sepalPlt = sb.FacetGrid(iris, hue="Species", size=6).map(plt.scatter, "SepalLengthCm", "SepalWidthCm") plt.legend(loc='upper left') ###Output C:\Users\Erfan\Anaconda3\envs\tf\lib\site-packages\seaborn\axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code. warnings.warn(msg, UserWarning) ###Markdown Preprocessing In this step we will simplify the problem into a binary classification problem with just 2 features. ###Code X = iris.iloc[:, 1:3] Y = (iris['Species'] != 'Iris-setosa') * 1 ###Output _____no_output_____ ###Markdown We devide data to train and test sets: ###Code X_train, X_test, y_train, y_test = train_test_split(X, Y) plt.figure(figsize=(10, 6)) plt.scatter(X_train[y_train == 0].iloc[:, 0], X_train[y_train == 0].iloc[:, 1], color='b', label='Iris-setosa train') plt.scatter(X_train[y_train == 1].iloc[:, 0], X_train[y_train == 1].iloc[:, 1], color='r', label='Others train') plt.scatter(X_test[y_test == 0].iloc[:, 0], X_test[y_test == 0].iloc[:, 1], color='b', label='Iris-setosa test', marker='+', s=150) plt.scatter(X_test[y_test == 1].iloc[:, 0], X_test[y_test == 1].iloc[:, 1], color='r', label='Others test', marker='+', s=150) plt.xlabel('SepalLengthCm') plt.ylabel('SepalWidthCm') plt.legend() ###Output _____no_output_____ ###Markdown Questions Logistic Regression Logistic Regression is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 (yes, success, etc.) or 0 (no, failure, etc.). 
In other words, the logistic regression model predicts P(Y=1) as a function of X.![logistic_regression](figs/logistic_regression.png) \begin{align}Z = w_1x_1+w_2x_2+\dots+w_nx_n + b\end{align} Forward Propagation First we will implement linear multiplication: $$W=\begin{bmatrix}w_1, & w_2, & \dots & w_n \end{bmatrix}$$$$X=\begin{bmatrix}x_1, & x_2, & \dots & x_n \end{bmatrix}$$ \begin{align}Z = W^TX + b\end{align} ###Code def linear_mult(X, w, b): return np.dot(w.T, X) + b ###Output _____no_output_____ ###Markdown Now we implement a function could generate W and b for us: ###Code def initialize_with_zeros(dim): """ This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0. Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias) """ w = np.zeros((dim,1)) b = 0 assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b ###Output _____no_output_____ ###Markdown Next we will implement sigmoid function to map calculated value to a probablity: ![sigmoid_function](figs/sigmoid.png) \begin{align}A = \sigma(Z) = \frac{1}{1 + e^{-Z}}\end{align} ###Code def sigmoid(z): return 1 / (1 + np.exp(-z)) ###Output _____no_output_____ ###Markdown Now we implement the cost function, cost function represent the difference between our predictions and actual labels(y is the actual label and a is our predicted label): \begin{align}J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(A^{(i)})+(1-y^{(i)})\log(1-A^{(i)})\end{align} ###Code def cost_function(y, a): return -np.mean(y*np.log(a) + (1-y)*np.log(1-a)) ###Output _____no_output_____ ###Markdown Now we implement the whole forward propagation which will calculate cost and the predicted value for the each data point: ###Code def forward_propagate(w, b, X, Y): m = X.shape[1] Z = linear_mult(X, w, b) A = sigmoid(Z) cost = cost_function(Y, A) cost = np.squeeze(cost) assert(cost.shape == ()) back_require = { 'A': A } return back_require, cost ###Output _____no_output_____ ###Markdown Backward Propagation Now we calculate W and b derivative as follow:$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$ ###Code def backward_propagate(w, b, X, Y, back_require): m = X.shape[1] A = back_require['A'] dw = (1/m) * np.dot(X,(A-Y).T) db = (1/m) * np.sum(A - Y) assert(dw.shape == w.shape) assert(db.dtype == float) grads = {"dw": dw, "db": db} return grads ###Output _____no_output_____ ###Markdown Complete Propagation ###Code def propagate(w, b, X, Y): """ Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation. 
np.log(), np.dot()
    """
    # FORWARD PROPAGATION
    back_require, cost = forward_propagate(w, b, X, Y)

    # BACKWARD PROPAGATION
    grads = backward_propagate(w, b, X, Y, back_require)

    return grads, cost
###Output _____no_output_____ ###Markdown Combining All Together Now we combine all our implemented functions together to create an optimizer which can find a linear function to divide the zero-labeled data points from the one-labeled data points by optimizing W and b as follows:$$W = W - \alpha \, dw$$$$b = b - \alpha \, db$$$\alpha$ is the learning rate ![sigmoid_function](figs/gradient_w.gif) ![sigmoid_function](figs/8yDt.gif) ###Code # GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.

    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """
    costs = []

    for i in range(num_iterations):
        grads, cost = propagate(w,b,X,Y)
        dw = grads["dw"]
        db = grads["db"]
        w -= learning_rate*dw
        b -= learning_rate*db

        # Record the costs
        costs.append(cost)

        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))

    params = {"w": w,
              "b": b}

    grads = {"dw": dw,
             "db": db}

    return params, grads, costs
###Output _____no_output_____ ###Markdown Training ###Code %%time
X_train_t, y_train_t = np.array(X_train.T), np.array(y_train.T)

w, b = initialize_with_zeros(2)
params, grads, costs = optimize(w, b, X_train_t, y_train_t, num_iterations= 800, learning_rate = 0.1, print_cost = False)
plt.plot(range(len(costs)),costs)
plt.xlabel('Iterations')
plt.ylabel('Cost(loss) value')
###Output _____no_output_____ ###Markdown Prediction ###Code def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''
    m = X.shape[1]
    Y_prediction = np.zeros((1,m))

    Z = linear_mult(X, w, b)
    A = sigmoid(Z)

    for i in range(m):
        Y_prediction[0][i] = 1 if A[0][i] > .5 else 0

    assert(Y_prediction.shape == (1, m))

    return Y_prediction
###Output _____no_output_____ ###Markdown Evaluation ###Code preds = predict(params['w'], params['b'], X_train_t)
print('Accuracy on training set: %{}'.format((preds[0] == y_train).mean()*100))

preds = predict(params['w'], params['b'], np.array(X_test.T))
print('Accuracy on test set: %{}'.format((preds[0] == y_test).mean()*100))

plot_prediction(X_train, y_train, X_test, y_test, predict, params)
###Output _____no_output_____ ###Markdown Logistic Regression with tensorflow Just Forward Propagation ###Code y_test = np.array(y_test).astype(np.float32).reshape(-1,1)
y_train = np.array(y_train).astype(np.float32).reshape(-1,1)

graph = tf.Graph()
with graph.as_default():
    with tf.device("/cpu:0"):

        # Input data.
        # Load the training, validation and test data into constants that are
        # attached to the graph.
        X_data = tf.placeholder(tf.float32, shape=(None, 2))
        y_data = tf.placeholder(tf.float32 , shape=(None, 1))

        # Variables.
        # These are the parameters that we are going to be training. The weight
        # matrix will be initialized using random values following a (truncated)
        # normal distribution. The biases get initialized to zero.
        weights = tf.Variable(tf.truncated_normal([2, 1]))
        biases = tf.Variable(tf.zeros([1]))

        # Training computation.
        # We multiply the inputs with the weight matrix, and add biases. We compute
        # the softmax and cross-entropy (it's one operation in TensorFlow, because
        # it's very common, and it can be optimized). We take the average of this
        # cross-entropy across all training examples: that's our loss.
        logits = tf.matmul(X_data, weights) + biases
        loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=y_data, logits=logits))

        # Optimizer.
        # We are going to find the minimum of this loss using gradient descent.
        optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

        # Predictions for the training, validation, and test data.
        # These are not part of training, but merely here so that we can report
        # accuracy figures as we train.
        prediction = tf.nn.sigmoid(logits)
###Output _____no_output_____ ###Markdown Training and Evaluation ###Code %%time
num_steps = 800

with tf.Session(graph=graph) as session:
    # This is a one-time operation which ensures the parameters get initialized as
    # we described in the graph: random weights for the matrix, zeros for the
    # biases.
    tf.global_variables_initializer().run()
    print('Initialized')
    for step in range(num_steps):
        # Run the computations. We tell .run() that we want to run the optimizer,
        # and get the loss value and the training predictions returned as numpy
        # arrays.
        d, l, predictions = session.run([optimizer, loss, prediction], feed_dict={X_data: X_train.astype(np.float32), y_data: y_train})

        if (step % 100 == 0):
            print('Loss at step %d: %f' % (step, l))
            print('Training accuracy: %.1f%%' % accuracy(predictions, y_train))

    # Calling .eval() on valid_prediction is basically like calling run(), but
    # just to get that one numpy array. Note that it recomputes all its graph
    # dependencies.
print('Test accuracy: %.1f%%' % accuracy( prediction.eval(feed_dict={X_data: X_test.astype(np.float32)}), y_test)) plot_prediction(X_train, y_train, X_test, y_test, prediction, params, fm_type='tensorflow') #TODO ###Output Initialized Loss at step 0: 2.653436 Training accuracy: 33.0% Loss at step 100: 0.112133 Training accuracy: 99.1% Loss at step 200: 0.086459 Training accuracy: 99.1% Loss at step 300: 0.074192 Training accuracy: 99.1% Loss at step 400: 0.066845 Training accuracy: 99.1% Loss at step 500: 0.061882 Training accuracy: 99.1% Loss at step 600: 0.058268 Training accuracy: 99.1% Loss at step 700: 0.055495 Training accuracy: 99.1% Test accuracy: 100.0% Wall time: 647 ms ###Markdown Keras Just Layer Description ###Code model = tf.keras.models.Sequential([ tf.keras.layers.Dense(1, activation=tf.nn.sigmoid, input_shape=(2,)) ]) model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Training ###Code %%time history = model.fit(X_train, y_train, epochs=800, verbose=0) plt.plot(range(len(history.history['acc'])),history.history['acc']) plt.xlabel('Iterations') plt.ylabel('Accuracy value') ###Output _____no_output_____ ###Markdown Evaluation ###Code print('Test accuracy: %.1f%%' % accuracy( model.predict(X_train.astype(np.float32)), y_train)) print('Test accuracy: %.1f%%' % accuracy( model.predict(X_test.astype(np.float32)), y_test)) plot_prediction(X_train, y_train, X_test, y_test, model.predict, params, fm_type='keras') ###Output _____no_output_____
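###Markdown Note: the evaluation cells above rely on an `accuracy` helper that is not defined anywhere in this notebook (it was presumably defined elsewhere). A minimal sketch of what such a helper could look like for these single-column probability outputs is given below; the 0.5 decision threshold and the percentage scale are assumptions inferred from how it is used above. ###Code import numpy as np

def accuracy(predictions, labels):
    # Threshold the predicted probabilities at 0.5 and compare them with the
    # true 0/1 labels; returns the percentage of correct predictions.
    predicted_classes = (np.asarray(predictions) > 0.5).astype(np.float32)
    return 100.0 * np.mean(predicted_classes == np.asarray(labels))
###Output _____no_output_____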
build/workspace/dividedby.ipynb
###Markdown Divided by: no decimals ###Code # initialising spark context from pyspark import SparkContext, SparkConf, SQLContext conf = SparkConf().setAppName("pysparkApp") sc = SparkContext(conf=conf) series = sc.parallelize(range(0, 100)) divided_by = 5 numbers = series.map(lambda number: (number%divided_by, 1)).reduceByKey(lambda x, y : x + y) ordered = numbers.map(lambda x:(x[1], x[0])).sortBy(lambda x: x[1]) ordered.take(1) sc.stop() ###Output _____no_output_____
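###Markdown For a histogram this small, the same remainder counts could also be computed in a single step with the `countByValue` action, which returns a plain Python dict to the driver instead of an RDD. A minimal sketch (assuming a fresh SparkContext, since the one above was stopped): ###Code sc = SparkContext(conf=conf)
series = sc.parallelize(range(0, 100))

# countByValue collects the count of each distinct remainder to the driver
remainder_counts = series.map(lambda number: number % divided_by).countByValue()
print(dict(remainder_counts))
sc.stop()
###Output _____no_output_____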
DAY 001 ~ 100/DAY078_[BaekJoon] 가장 긴 감소하는 부분 수열 (Python).ipynb
###Markdown Friday, April 24, 2020 BaekJoon - No. 11722: Longest Decreasing Subsequence (Python) Problem: https://www.acmicpc.net/problem/11722 Blog: https://somjang.tistory.com/entry/BaekJoon-11722%EB%B2%88-%EA%B0%80%EC%9E%A5-%EA%B8%B4-%EA%B0%90%EC%86%8C%ED%95%98%EB%8A%94-%EB%B6%80%EB%B6%84-%EC%88%98%EC%97%B4-Python First attempt ###Code inputNum = int(input())

inputNums = input()
inputNums = inputNums.split()
inputNums = [int(num) for num in inputNums]

nc = [0] * (inputNum)

maxNum = 0

for i in range(0, inputNum):
    minNum = 0
    for j in range(0, i):
        if inputNums[i] < inputNums[j]:
            if minNum < nc[j]:
                minNum = nc[j]
    #print(inputNums, nc)
    nc[i] = minNum + 1
    if maxNum < nc[i]:
        maxNum = nc[i]

print(maxNum)
###Output _____no_output_____
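###Markdown The solution above is the classic O(n²) dynamic program: `nc[i]` holds the length of the longest strictly decreasing subsequence ending at index `i`, and the answer is the maximum over all `i`. The same idea as a self-contained function (a hypothetical helper, so it can be tested without stdin): ###Code def longest_decreasing_subsequence(nums):
    if not nums:
        return 0
    # dp[i] = length of the longest strictly decreasing subsequence ending at i
    dp = [1] * len(nums)
    for i in range(len(nums)):
        for j in range(i):
            if nums[i] < nums[j]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

# Example from the problem statement: the answer is 3 (e.g. 30 > 20 > 10)
print(longest_decreasing_subsequence([10, 30, 10, 20, 20, 10]))
###Output _____no_output_____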
Scikit-Learn/5) Improve a Model(Hyperparameter Tuning)/HyperParameterTuning_explanation.ipynb
###Markdown 5. IMPROVING a Model (Hyperparameter Tuning) The **first predictions** you make with a model are generally referred to as **baseline predictions**. The same goes for the first evaluation metrics you get. These are generally referred to as baseline metrics. So, **first model = baseline model**. Your next goal is to improve upon these baseline metrics. Two of the main methods to improve baseline metrics are from a data perspective and a model perspective. The data perspective asks:* Could we collect more data? In machine learning, more data is generally better, as it gives a model more opportunities to learn patterns.* Could we improve our data? This could mean filling in missing values or finding a better encoding (turning things into numbers) strategy. The model perspective asks:* Is there a better model we could use? If you've started out with a simple model, could you use a more complex one? (we saw an example of this when looking at the [Scikit-Learn machine learning map](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html), ensemble methods are generally considered more complex models)* Could we improve the current model? If the model you're using performs well straight out of the box, can the hyperparameters be tuned to make it even better? **Note**: Patterns in data are also often referred to as data parameters. The difference between parameters and hyperparameters is that a machine learning model seeks to find parameters in data on its own, whereas hyperparameters are settings on a model which a user (you) can adjust. Hyperparameters vs Parameters* Parameters = patterns a model finds in data* Hyperparameters = settings on a model you can adjust to (potentially) improve its ability to find patterns. ###Code from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier()
clf.get_params() # get the parameters of a model
###Output _____no_output_____
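###Markdown One common way to adjust hyperparameters is an exhaustive search over a small grid of candidate values, scoring each combination with cross-validation. A minimal sketch using Scikit-Learn's `GridSearchCV` (the parameter grid below is only illustrative, and `X`, `y` stand in for your own feature matrix and labels): ###Code from sklearn.model_selection import GridSearchCV

param_grid = {"n_estimators": [10, 100, 200],
              "max_depth": [None, 5, 10]}

grid = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
# grid.fit(X, y)        # X, y: your training data
# grid.best_params_     # hyperparameter combination with the best CV score
###Output _____no_output_____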
2_Ejercicios/Primera_parte/Entregables/2. Ejercios_Mod1/2020_12_21/web_scraping_delivery.ipynb
###Markdown 1. From HTML*Using only Beautiful Soup, no regex*Using web scraping, save the following information in a dataframe. Each row of the dataframe must have in different columns:- The name of the title- The id of the div where the value is scraped. If there is no id, then the value must be numpy.nan- The name of the tag where the value is scraped.- The following scraped values in different rows: - The value: "Este es el segundo párrafo" --> Row 1 - The url https://pagina1.xyz/ --> Row 2 - The url https://pagina4.xyz/ --> Row 3 - The url https://pagina5.xyz/ --> Row 4 - The value "links footer-links" --> Row 5 - The value "Este párrafo está en el footer" --> Row 6 ###Code html = """
<html lang="es">
<head>
    <meta charset="UTF-8">
    <title>Página de prueba</title>
</head>
<body>
<div id="main" class="full-width">
    <h1>El título de la página</h1>
    <p>Este es el primer párrafo</p>
    <p>Este es el segundo párrafo</p>
    <div id="innerDiv">
        <div class="links">
            <a href="https://pagina1.xyz/">Enlace 1</a>
            <a href="https://pagina2.xyz/">Enlace 2</a>
        </div>
        <div class="right">
            <div class="links">
                <a href="https://pagina3.xyz/">Enlace 3</a>
                <a href="https://pagina4.xyz/">Enlace 4</a>
            </div>
        </div>
    </div>
    <div id="footer">
        <!-- El footer -->
        <p>Este párrafo está en el footer</p>
        <div class="links footer-links">
            <a href="https://pagina5.xyz/">Enlace 5</a>
        </div>
    </div>
</div>
</body>
</html>"""

from bs4 import BeautifulSoup
import pandas as pd
import numpy as np

soup = BeautifulSoup(html, 'html.parser')
type(soup)
# Find all tags that carry an id attribute and list their ids
ids = soup.find_all(id=True)
[tag.get('id') for tag in ids]
###Output _____no_output_____
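###Markdown One possible solution sketch, using only Beautiful Soup navigation (no regex). The element choices below follow one reading of the exercise: the "div id" column is taken to be the id of the nearest ancestor div that has an id, or numpy.nan if there is none, and the row order matches the list above. ###Code title = soup.title.text

# Locate each requested element
elements = [
    soup.find("div", id="main").find_all("p")[1],     # "Este es el segundo párrafo"
    soup.find("a", href="https://pagina1.xyz/"),
    soup.find("a", href="https://pagina4.xyz/"),
    soup.find("a", href="https://pagina5.xyz/"),
    soup.find("div", class_="links footer-links"),    # value: its class attribute
    soup.find("div", id="footer").find("p"),          # "Este párrafo está en el footer"
]

def enclosing_div_id(tag):
    # id of the nearest ancestor div that carries an id, else numpy.nan
    for parent in tag.parents:
        if parent.name == "div" and parent.get("id"):
            return parent["id"]
    return np.nan

values = [elements[0].text,
          elements[1]["href"],
          elements[2]["href"],
          elements[3]["href"],
          " ".join(elements[4]["class"]),
          elements[5].text]

rows = [{"title": title,
         "div_id": enclosing_div_id(el),
         "tag": el.name,
         "value": val}
        for el, val in zip(elements, values)]

pd.DataFrame(rows)
###Output _____no_output_____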
Example_Notebooks/Window_Wall_Ratio_pix4d_HQ.ipynb
###Markdown First, we set the image and parameter directories, as well as the merged polygons file path. We load the merged polygons, as we also initialize a dictionary for the Cameras. The Camera class stores all information related to the camera, i.e. intrinsic and extrinsic camera parameters. ###Code #Example file filename = "DJI_0033.JPG" #directory = "../data/Drone_Flight/" facade_file = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/Data_results/merged_polygons.txt"#"../data/Drone_Flight/merged.txt" image_dir = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/images/" #directory + "RGB/" param_dir = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/pix4d_HQ/1_initial/params/" #directory + "params/" #predictions_dir = directory + "predictions/" offset = np.loadtxt(param_dir + "pix4d_HQ_offset.xyz",usecols=range(3)) #offset = np.loadtxt(param_dir + "offset.txt",usecols=range(3)) #Initializes a dictionary of Camera classes. See utils.py for more information. camera_dict = utils.create_camera_dict(param_dir, filename="pix4d_HQ_calibrated_camera_parameters.txt", offset=offset) #Loads pmatrices and image filenamees p_matrices = np.loadtxt(param_dir + 'pix4d_HQ_pmatrix.txt', usecols=range(1,13)) # p_matrices = np.loadtxt(param_dir + 'pmatrix.txt', usecols=range(1,13)) #Loads the merged polygons, as well as a list of facade types (i.e. roof, wall, or floor) merged_polygons, facade_type_list, file_format = fw.load_merged_polygon_facades(filename=facade_file) #Offset adjustment parameter # height_adj = np.array([0.0, 0.0, 108]) # offset = offset + height_adj polygon_offset = np.loadtxt("/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/Data_results/polygon_offset.txt", delimiter=",") #Adjust height if necessary for the camera images height_adj = -polygon_offset[2] #108.0#108.0 offset_adj = np.array([0.0, 0.0, height_adj]) offset = offset + offset_adj ###Output _____no_output_____ ###Markdown Next, we extract the contours for the window predictions, by taking the window prediction points and using them to create a shapely polygon. ###Code window_file = "/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/masks_2/DJI_0033.png" # window_file ='/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/mask_GT/DJI_0033_pink.png' print("Window predictions: ") image = cv2.imread(window_file) plt.imshow(image) plt.show() #Extract the contours of the window file contours = contour_extraction.extract_contours(window_file) #Create polygons from the window contours window_polygons = utils.convert_polygons_shapely(contours) def plot_shapely_polys(image_file, polys): for poly in polys: s = poly s = poly.simplify(0.1, preserve_topology=True) x,y = s.exterior.xy plt.plot(x,y) plt.show() print("Extracted contours: ") plt.imshow(image) plot_shapely_polys(window_file, window_polygons) ###Output Window predictions: ###Markdown Finally, for each window point, we obtain its 3D coordinates and use them to calculate the window to wall ratio. 
###Code camera = camera_dict[filename] pmatrix = camera.calc_pmatrix() # print(pmatrix) image_file = utils.load_image(image_dir + filename) # print(image_dir + filename) #Projects the merged polygon facades onto the camera image projected_facades, projective_distances = extract_window_wall_ratio.project_merged_polygons( merged_polygons, offset, pmatrix) # print(projected_facades) #Creates a dictionary mapping the facade to the windows contained within them, keyed by facade index facade_window_map = extract_window_wall_ratio.get_facade_window_map( window_polygons, projected_facades, projective_distances) # print(facade_window_map) #Creates a list of all the facades in the merged polygon facades = [] for poly in merged_polygons: facades = facades + poly facade_indices = list(facade_window_map.keys()) # print(facade_indices) for i in facade_indices: #Computes window to wall ratio win_wall_ratio = extract_window_wall_ratio.get_window_wall_ratio( projected_facades[i], facades[i], facade_window_map[i]) #Output printing: print("Facade index: " + str(i)) print("Window-to-wall ratio: " + str(win_wall_ratio)) #Uncomment this line to plot the windows and facades on the image # extract_window_wall_ratio.plot_windows_facade(projected_facades[i], facade_window_map[i], image_file) ###Output facade_area: 538.634465447367 None window_area: 115.04176815306703 None Facade index: 2 Window-to-wall ratio: 0.2135804066260747 facade_area: 911.3013916015625 None window_area: 0.573974609375 None Facade index: 5 Window-to-wall ratio: 0.000629840593534342 ###Markdown convert json to mask (ground truth) ###Code import json import pandas as pd import numpy as np from PIL import Image, ImageDraw, ImageChops data = json.load(open('/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/mask_GT/DJI_0033.json')) nested_lst_of_tuples = [] for i in range(len(data.get("objects"))): objects = data.get("objects")[i] points_lab = objects['points'] nested_lists = points_lab['exterior'] nested_lst_of_tuples0 = [tuple(l) for l in nested_lists] nested_lst_of_tuples.append(nested_lst_of_tuples0) def getMask(original,polygon): #Returns the mask of the polygon mask = Image.new('L', original.size, 0) mask_draw = ImageDraw.Draw(mask) for i in range(len(data.get("objects"))): mask_draw.polygon(polygon[i], outline=1, fill=1) return np.array(mask) original=Image.open(image_dir + filename) mask = getMask(original,nested_lst_of_tuples) plt.figure() plt.imshow(mask) import matplotlib matplotlib.image.imsave('/Users/stouzani/Desktop/Unstructured_ML/Drone/Drone_Data_Capture/pix4d_HQ/dji_demo/mask_GT/DJI_0033.png', mask) ###Output _____no_output_____
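###Markdown With a ground-truth mask rasterised from the annotations, one simple way to score the predicted window mask is intersection-over-union. A minimal sketch, assuming the prediction in `window_file` and the ground-truth `mask` from above have the same pixel dimensions and that any non-zero pixel counts as "window": ###Code pred = cv2.imread(window_file, cv2.IMREAD_GRAYSCALE) > 0
gt = mask > 0

intersection = np.logical_and(pred, gt).sum()
union = np.logical_or(pred, gt).sum()
print("Window mask IoU: %.3f" % (intersection / union))
###Output _____no_output_____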
Bloque 1 - Ramp-Up/05_Python/03_Funciones/04_RESU_Ejercicios funciones.ipynb
###Markdown ![imagen](../../imagenes/ejercicios.png) Function exercises Exercise 1Write a function that converts numbers from 1 to 7 into names of the days of the week. The function takes a single numeric argument and returns a string ###Code def dia_semana(dia_num):
    if dia_num == 1:
        return "Lunes"
    elif dia_num == 2:
        return "Martes"
    elif dia_num == 3:
        return "Miércoles"
    elif dia_num == 4:
        return "Jueves"
    elif dia_num == 5:
        return "Viernes"
    elif dia_num == 6:
        return "Sábado"
    elif dia_num == 7:
        return "Domingo"
    else:
        return "No es un dia de la semana"

print(dia_semana(1))
print(dia_semana(7))

def day_of_week():
    day_num = int(input("Introduzca un número de día de la semana"))
    if day_num == 1:
        day = "Lunes"
    elif day_num == 2:
        day = "Martes"
    elif day_num == 3:
        day = "Miercoles"
    elif day_num == 4:
        day = "Jueves"
    elif day_num == 5:
        day = "Viernes"
    elif day_num == 6:
        day = "Sabado"
    elif day_num == 7:
        day = "Domingo"
    else:
        print("Erro al introducir el número, vuelva a intentarlo")

day_of_week()
###Output Introduzca un número de día de la semana 10 ###Markdown Exercise 2In exercise 8 of the loops section, we built an inverted pyramid whose number of levels was determined by user input. Create a function that replicates the behavior of the pyramid, using a single input parameter of the function to determine the number of rows of the pyramid, i.e. remove the input statement. ###Code def piramide(filas):
    for i in range(filas):
        out = ""
        for j in range(filas-i):
            out = out + " " + str(j + 1)
        print(out)

piramide(4)
piramide(6)
###Output _____no_output_____ ###Markdown Exercise 3Write a function that compares two numbers. The function has two arguments and there are three possible outputs: they are equal, the first is greater than the second, or the second is greater than the first ###Code def compare_fun(num_1, num_2):
    if num_1 == num_2:
        return "Son iguales"
    elif num_1 > num_2:
        return str(num_1) + " es mayor que " + str(num_2)
    else:
        return str(num_2) + " es mayor que " + str(num_1)

print(compare_fun(3,4))
print(compare_fun(4,4))
###Output 4 es mayor que 3 Son iguales ###Markdown Exercise 4Write a function that counts letters. The first argument is a text and the second is the letter to count. The function must return an integer with the number of times that letter appears, whether uppercase or lowercase ###Code def letter_count(texto, letra):
    texto = texto.lower()
    return texto.count(letra.lower())

letter_count("En esta clase aprenderemos mucho Python", "C")
###Output _____no_output_____ ###Markdown Exercise 5Write a function with a single argument, a string. The output of the function has to be a dictionary with the count of all the letters in that string. ###Code def letter_count_dic(texto):
    all_letters = {}
    for i in texto.lower():
        if i in all_letters:
            all_letters[i] = all_letters[i] + 1
        else:
            all_letters[i] = 1
    return all_letters

letter_count_dic("En esta clase aprenderemos mucho Python")
###Output _____no_output_____ ###Markdown Exercise 6Write a function that adds or removes elements in a list. The function needs the following arguments:* lista: the list where the elements will be added or removed* comando: "add" or "remove"* elemento: None by default.
###Code def listas_program(lista, comando, elemento = None):
    if comando == "add" and elemento != None:
        lista.append(elemento)
        return lista
    elif comando == "add" and elemento == None:
        lista.append(9999)
        return lista
    elif comando == "remove":
        try:
            lista.remove(elemento)
            return lista
        except:
            print("El elemento", elemento, "no existe en la lista")

print(listas_program([1,2,3], "add", 9))
print(listas_program([1,2,3], "add"))
print(listas_program([1,2,3], "remove", 2))
print(listas_program([1,2,3], "remove", 9))
###Output [1, 2, 3, 9] [1, 2, 3, 9999] [1, 3] El elemento 9 no existe en la lista None ###Markdown Exercise 7Create a function that receives an arbitrary number of words and returns a complete sentence, separating the words with spaces. ###Code def junta_palabras(*args):
    return " ".join(args)

junta_palabras("Hola", "me", "llamo", "Dani")
###Output _____no_output_____ ###Markdown Exercise 8Write a program that computes the [Fibonacci series](https://es.wikipedia.org/wiki/Sucesi%C3%B3n_de_Fibonacci) ###Code def fibonacci(n):
    if n == 1 or n == 2:
        return 1
    else:
        return (fibonacci(n - 1) + (fibonacci(n - 2)))

print(fibonacci(6))
###Output 8 ###Markdown Exercise 9Define the following functions in a single cell:* A function that computes the area of a square* A function that computes the area of a triangle* A function that computes the area of a circleIn another cell, compute the area of:* Two circles of radius 10 + a triangle with base 3 and height 7* A square with side = 10 + 3 circles (one of radius = 4 and the other two of radius = 6) + 5 triangles with base = 2 and height = 4 ###Code def cuadrado(lado):
    return lado * lado

def triangulo(base, altura):
    return (base * altura)/2

def circulo(radio):
    import math
    return math.pi * radio**2

circ1 = 2 * circulo(10)
tria1 = triangulo(3,7)
print(circ1 + tria1)

cuadr2 = cuadrado(10)
circ2 = circulo(4) + 2 * circulo(6)
trian2 = 5 * triangulo(2,4)
print(cuadr2 + circ2 + trian2)
###Output 638.8185307179587 396.46015351590177
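###Markdown Note that the recursive solution to exercise 8 recomputes the same values many times, so its running time grows exponentially with n. An iterative sketch that produces the same series in linear time: ###Code def fibonacci_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci_iterative(6))  # 8, matching fibonacci(6) above
###Output _____no_output_____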
notebooks/Cardano_Price.ipynb
###Markdown [DATA SCIENCE CHALLENGE SCL WEEK 1]() [Predict Cardano Price](https://github.com/jesussantana/Predict-Cardano-Price) Models*************[![forthebadge made-with-python](http://ForTheBadge.com/images/badges/made-with-python.svg)](https://www.python.org/) [![Made withJupyter](https://img.shields.io/badge/Made%20with-Jupyter-orange?style=for-the-badge&logo=Jupyter)](https://jupyter.org/try) [![Linkedin: Jesus Santana](https://img.shields.io/badge/-JesusSantana-blue?style=flat-square&logo=Linkedin&logoColor=white&link=https://www.linkedin.com/in/chus-santana/)](https://www.linkedin.com/in/chus-santana/) [![GitHub JesusSantana](https://img.shields.io/github/followers/jesussantana?label=follow&style=social)](https://github.com/jesussantana) ###Code import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import numpy as np
# ^^^ pyforest auto-imports - don't write above this line

# ==============================================================================
# Auto Import Dependencies
# ==============================================================================
# pyforest imports dependencies according to use in the notebook
# ==============================================================================

#!pip install mplfinance
#import mplfinance as mpf
#!pip install keras
#!pip install tensorflow
#!pip install imblearn

# Dependencies not Included in Auto Import*
# ==============================================================================
import time
import keras
from tensorflow import keras as ks
from keras.layers import TimeDistributed
from keras.models import load_model
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers import Dense
from keras.layers import TimeDistributed
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import ModelCheckpoint
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import MinMaxScaler

# Pandas configuration
# ==============================================================================
pd.set_option('display.max_columns', None)

# # Graphics
# ==============================================================================
import matplotlib.ticker as ticker
from matplotlib import style
import matplotlib.pyplot as plotter
import plotly.express as px

# Matplotlib configuration
# ==============================================================================
plt.rcParams['image.cmap'] = "bwr"
#plt.rcParams['figure.dpi'] = "100"
plt.rcParams['savefig.bbox'] = "tight"
style.use('ggplot') or plt.style.use('ggplot')
%matplotlib inline

# Seaborn configuration
# ==============================================================================
sns.set_theme(style='darkgrid', palette='deep')
dims = (20, 16)

# Warnings configuration
# ==============================================================================
import warnings
warnings.filterwarnings('ignore')

# Folder configuration
# ==============================================================================
from os import path
import sys
new_path = '../scripts/'
if new_path not in sys.path:
    sys.path.append(new_path)

path = "../data/"
###Output _____no_output_____ ###Markdown Explore Dataset********** ###Code df_train = pd.read_csv(path + 'raw/train.csv')
df_test = pd.read_csv(path + 'raw/test_predictors.csv')
df_train.head()
df_train.info()
df_test.head()
df_test.head().info()
def convert_time(df):
    df = df.sort_values('Open_time')
    df['Year'] = [int(x[:4]) for x in df['Open_time'] ]
    df['Month'] = [int(x[5:7]) for x in df['Open_time'] ]
    df['Day'] = [int(x[8:11]) for x in df['Open_time'] ]
    df['Hour'] = [int(x[11:13]) for x in df['Open_time'] ]
    df['Minute'] = [int(x[14:16]) for x in df['Open_time'] ]
    df['Open_time'] = [time.mktime(time.strptime(x, "%Y-%m-%d %H:%M:%S")) for x in df['Open_time'] ]
    return df

df_train = convert_time(df_train)
df_test = convert_time(df_test)
###Output _____no_output_____ ###Markdown Train Test********** ###Code cols = df_train.columns
cols = cols.drop("target")
X_train = df_train[cols]
y_train = df_train["target"]

# Transform Dataset
# ==============================================================================
# strategy = {0:15000, 1:10000, 2:10000}
strategy = {0:20000, 1:5000, 2:5000}
oversample = SMOTE(sampling_strategy=strategy)
# X_train_over, y_train_over = X_train, y_train
X_train_over, y_train_over = oversample.fit_resample(X_train, y_train)
y_train_over = to_categorical(y_train_over, num_classes=3)
scaler = MinMaxScaler(feature_range=(0, 1))
# scaler = StandardScaler()
X_train_over = scaler.fit_transform(X_train_over)
# X_train_over = X_train_over.copy()
X_test = scaler.fit_transform(df_test)
pd.DataFrame(y_train_over).value_counts()
# [samples, timesteps, features].
timesteps = 20
samples = int(X_train_over.shape[0]/timesteps)
features = len(cols)
X_train_reshape = np.array(X_train_over).reshape(samples, timesteps, features)
y_train_reshape = np.array(y_train_over).reshape(samples, timesteps, 3)
X_train_reshape.shape
y_train_reshape
###Output _____no_output_____ ###Markdown Model************ ###Code model = Sequential()
model.add(LSTM(500, dropout=0.25, input_shape=( timesteps, features ),return_sequences=True))
model.add(Dense(3,activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
checkpoint_filepath = 'model.h5'
model_checkpoint_callback = keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    monitor='val_accuracy',
    mode='max',
    save_best_only=True)
X_train_orig = scaler.fit_transform(X_train[:-1])
y_train_orig = to_categorical(y_train[:-1], num_classes=3)
timesteps = 20
samples = int(X_train_orig.shape[0]/timesteps)
features = len(cols)
X_train_orig_reshape = np.array(X_train_orig).reshape(samples, timesteps, features)
y_train_orig_reshape = np.array(y_train_orig).reshape(samples, timesteps, 3)
X_train_orig_reshape.shape
history = model.fit(X_train_reshape, y_train_reshape, epochs=3000, batch_size=512, verbose=0, callbacks=[model_checkpoint_callback], validation_data=(X_train_orig_reshape, y_train_orig_reshape))
fig, ax = plt.subplots(2,1,figsize=(10,10))
ax[0].set_title('Cross Entropy Loss', fontsize = 20)
ax[0].plot(history.history['loss'], color='blue', label='Train')
ax[0].plot(history.history['val_loss'], color='orange', label='Validation')
ax[0].set_ylabel('Cross Entropy Loss', fontsize = 16)
ax[0].legend(fontsize = 16)
ax[1].set_title('Classification Accuracy', fontsize = 20)
ax[1].plot(history.history['accuracy'], color='blue', label='Train')
ax[1].plot(history.history['val_accuracy'], color='orange', label='Validation')
ax[1].set_ylabel('Classification Accuracy', fontsize = 16)
ax[1].set_xlabel('Epochs', fontsize = 16)
ax[1].legend(fontsize = 16)
plt.show()
###Output _____no_output_____ ###Markdown Analysis************ ###Code best_model = load_model('model.h5')
best_model.summary()
p = pd.DataFrame(best_model.predict(X_train_reshape).reshape(X_train_reshape.shape[0]*X_train_reshape.shape[1],3))
p.idxmax(axis=1).value_counts()
p['predicted'] = p.idxmax(axis=1)
p['real'] = y_train
p
accuracy = (p['predicted'] == p['real']).sum()
print('Accuracy: ',accuracy/p.shape[0])
###Output Accuracy: 0.40996666666666665 ###Markdown Prediction*********** ###Code X_test.shape
X_test_orig = X_test[:-3]
timesteps = 20
samples = int(X_test_orig.shape[0]/timesteps)
features = len(cols)
X_test_orig_reshape = np.array(X_test_orig).reshape(samples, timesteps, features)
res = pd.DataFrame(best_model.predict(X_test_orig_reshape).reshape(X_test_orig_reshape.shape[0]*X_test_orig_reshape.shape[1],3))
res.idxmax(axis=1).value_counts()
result = list(res.idxmax(axis=1))
result.append(0)
result.append(0)
result.append(0)
pd.DataFrame(result).shape
pd.DataFrame(result).to_csv('result.csv')
res
###Output _____no_output_____
notebooks/03_spurious_antisense.ipynb
###Markdown Detecting Spurious Antisense: ###Code import sys import os from glob import glob from collections import defaultdict, Counter, namedtuple import itertools as it import random import numpy as np import pandas as pd from scipy import stats import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap from matplotlib import cm import seaborn as sns import pysam ## Default plotting params %matplotlib inline sns.set(font='Arial') plt.rcParams['svg.fonttype'] = 'none' style = sns.axes_style('white') style.update(sns.axes_style('ticks')) style['xtick.major.size'] = 2 style['ytick.major.size'] = 2 sns.set(font_scale=2, style=style) pal = sns.color_palette(['#0072b2', '#d55e00', '#009e73', '#f0e442', '#cc79a7', '#88828c']) cmap = ListedColormap(pal.as_hex()[:2]) sns.set_palette(pal) sns.palplot(pal) plt.show() ###Output _____no_output_____ ###Markdown ERCC antisense reads:We can simply count the number of reads that map antisense to the ERCC spikein controls to estimate antisense here ###Code ercc_bams = [ '/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180201_1617_20180201_FAH45730_WT_Col0_2916_regular_seq/aligned_data/ERCC92/201901_col0_2916.bam', '/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180413_1558_20180413_FAH77434_mRNA_WT_Col0_2917/aligned_data/ERCC92/201901_col0_2917.bam', '/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180416_1534_20180415_FAH83697_mRNA_WT_Col0_2918/aligned_data/ERCC92/201901_col0_2918.bam', '/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180418_1428_20180418_FAH83552_mRNA_WT_Col0_2919/aligned_data/ERCC92/201901_col0_2919.bam', '/cluster/ggs_lab/mtparker/ONT_guppy_pipeline_runs/20180405_FAH59362_WT_Col0_2917/aligned_data/ERCC92/201903_col0_2917_exp2.bam', ] def get_antisense(bam_fn): antisense = 0 with pysam.AlignmentFile(bam_fn) as bam: mapped = bam.mapped for aln in bam.fetch(): if aln.is_reverse: antisense += 1 return antisense, mapped total_mapped = 0 total_antisense = 0 for bam in ercc_bams: a, t = get_antisense(bam) total_mapped += t total_antisense += a print(total_antisense, total_antisense / total_mapped * 100) ###Output 2 0.021175224986765485 ###Markdown Antisense at RCAAs an example of a highly expressed gene with no genuine antisense annotations, we use RCA. 
###Code rca_locus = ['2', 16_570_746, 16_573_692] def intersect(inv_a, inv_b): a_start, a_end = inv_a b_start, b_end = inv_b if a_end < b_start or a_start > b_end: return 0 else: s = max(a_start, b_start) e = min(a_end, b_end) return e - s def intersect_spliced_invs(invs_a, invs_b): score = 0 invs_a = iter(invs_a) invs_b = iter(invs_b) a_start, a_end = next(invs_a) b_start, b_end = next(invs_b) while True: if a_end < b_start: try: a_start, a_end = next(invs_a) except StopIteration: break elif a_start > b_end: try: b_start, b_end = next(invs_b) except StopIteration: break else: score += intersect([a_start, a_end], [b_start, b_end]) if a_end > b_end: try: b_start, b_end = next(invs_b) except StopIteration: break else: try: a_start, a_end = next(invs_a) except StopIteration: break return score def bam_cigar_to_invs(aln): invs = [] start = aln.reference_start end = aln.reference_end strand = '-' if aln.is_reverse else '+' left = start right = left for op, ln in aln.cigar: if op in (1, 4, 5): # does not consume reference continue elif op in (0, 2, 7, 8): # consume reference but do not add to invs yet right += ln elif op == 3: invs.append([left, right]) left = right + ln right = left if right > left: invs.append([left, right]) assert invs[0][0] == start assert invs[-1][1] == end return start, end, strand, np.array(invs) PARSED_ALN = namedtuple('Aln', 'chrom start end read_id strand invs') def parse_pysam_aln(aln): chrom = aln.reference_name read_id = aln.query_name start, end, strand, invs = bam_cigar_to_invs( aln) return PARSED_ALN(chrom, start, end, read_id, strand, invs) counts = Counter() with pysam.AlignmentFile('/cluster/ggs_lab/mtparker/analysis_notebooks/chimeric_transcripts/vir1_vs_col0/aligned_data/col0.merged.bam') as bam: for aln in bam.fetch(*rca_locus): aln = parse_pysam_aln(aln) overlap = intersect_spliced_invs([rca_locus[1:]], aln.invs) aln_len = sum([e - s for s, e in aln.invs]) if overlap / aln_len > 0.1: counts[aln.strand] += 1 counts ###Output _____no_output_____
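###Markdown As a quick sanity check of the interval helpers defined above, the sketch below intersects two small spliced interval sets by hand: the "exons" [0, 10] and [20, 30] overlap [5, 25] over 5 + 5 = 10 bases. ###Code toy_a = [[0, 10], [20, 30]]
toy_b = [[5, 25]]
print(intersect_spliced_invs(toy_a, toy_b))  # expected: 10
###Output _____no_output_____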
ECE365/machine learning/lab3_vvv_2021/.ipynb_checkpoints/ass-checkpoint.ipynb
###Markdown Lab 3: Classification (Part 2) and Model Selection Name: Your Name Here (Your netid here) Due February 16th, 2021 11:59 PM**Logistics and Lab Submission**See the [course website](https://courses.engr.illinois.edu/ece365/fa2019/logisticsvvv.html). Remember that all labs count equally, despite the labs being graded from a different number of total points. **What You Will Need To Know For This Lab**This lab covers a few more basic classifiers which can be used for M-ary classification:- Naive Bayes- Logistic Regression- Support Vector Machines as well as cross-validation, a tool for model selection and assessment. The submission procedure is provided below:- You will be provided with a template Python script (main.py) for this lab where you need to implement the provided functions as needed for each question. Follow the instructions provided in this Jupyter Notebook (.ipynb) to implement the required functions. **Do not change the file name or the function headers!**- Upload only your Python script (.py file) on Gradescope. Don't upload your datasets or Jupyter Notebook (.ipynb file).- Your grades and feedback will appear on Gradescope. The grading for the programming questions is automated using the Gradescope autograder; no partial credit is given. Therefore, if you wish, you will have a chance to re-submit your code **within 72 hours** of receiving your first grade for this lab, only if you have *reasonable* submissions before the deadline (i.e. not an empty script).- If you re-submit, the final grade for the programming part of this lab will be calculated as .4 \* first_grade + .6 \* .9 \* re-submission_grade.- This lab also has Multiple Choice Questions (MCQs) that need to be completed on Gradescope **within the deadline**.There are some problems which have short answer questions. They are not graded, but we are free to discuss answers to these problems. **Multiple Choice Questions (MCQs) will be graded on Gradescope!**Remember that in many applications, the end goal is not always "run a classifier", like in a homework problem, but to use the output of the classifier in the context of the problem at hand (e.g. detecting spam, identifying cancer, etc.). Because of this, some of our Engineering Design-type questions are designed to get you to think about the entire design problem at a high level.**Warning: Do not train on your test sets. You will automatically have your score halved for a problem if you train on your test data.** **Preamble (don't change this)** ###Code %pylab inline
import numpy as np
from sklearn import neighbors
from sklearn import svm
from sklearn import model_selection
from numpy import genfromtxt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import glob
%run main.py
###Output Populating the interactive namespace from numpy and matplotlib ###Markdown Problem 1: Spam Detection (70 points)In this problem, you will be constructing a crude spam detector. As you all know, when you receive an e-mail, it can be divided into one of two types: ham (useful mail, label $-1$) and spam (junk mail, label $+1$). In the [olden days](http://www.paulgraham.com/spam.html), people tried writing a bunch of rules to detect spam. However, it was quickly seen that machine learning approaches work fairly well for a little bit of work.
You will be designing a spam detector by applying some of the classification techniques you learned in class to a batch of emails used to train and test [SpamAssassin](http://spamassassin.apache.org/), a leading anti-spam software package. Let the *vocabulary* of a dataset be a list of all terms occuring in a data set. So, for example, a vocabulary could be ["cat","dog","chupacabra", "aerospace", ...]. Our features will be based only the frequencies of terms in our vocabulary occuring in the e-mails (such an approach is called a *bag of words* approach, since we ignore the positions of the terms in the emails). The $j$-th feature is the number of times term $j$ in the vocabulary occurs in the email. If you are interested in further details on this model, you can see Chapters 6 and 13 in [Manning's Book](http://nlp.stanford.edu/IR-book/).You will use the following classifiers in this problem:- sklearn.naive_bayes.BernoulliNB (Naive Bayes Classifier with Bernoulli Model)- sklearn.naive_bayes.MultinomialNB (Naive Bayes Classifier with Multinomial Model)- sklearn.svm.LinearSVC (Linear Support Vector Machine)- sklearn.linear_model.LogisticRegression (Logistic Regression)- sklearn.neighbors.KNeighborsClassifier (1-Nearest Neighbor Classifier)In the context of the Bernoulli Model for Naive Bayes, scikit-learn will binarize the features by interpretting the $j$-th feature to be $1$ if the $j$-th term in the vocabulary occurs in the email and $0$ otherwise. This is a categorical Naive Bayes model, with binary features. While we did not discuss the multinomial model in class, it operates directly on the frequencies of terms in the vocabulary, and is discussed in Section 13.2 in [Manning's Book](http://nlp.stanford.edu/IR-book/) (though you do not need to read this reference). Both the Bernoulli and Multinomial models are commonly used for Naive Bayes in text classification. A sample Ham email is: From [email protected] Mon Jun 24 17:06:54 2002 Return-Path: [email protected] Delivery-Date: Tue May 28 02:53:28 2002 Received: from mp.opensrs.net (mp.opensrs.net [216.40.33.45]) by dogma.slashnull.org (8.11.6/8.11.6) with ESMTP id g4S1rSe14718 for ; Tue, 28 May 2002 02:53:28 +0100 Received: (from popensrs@localhost) by mp.opensrs.net (8.9.3/8.9.3) id VAA04361; Mon, 27 May 2002 21:53:26 -0400 Message-Id: Date: Mon, 27 May 2002 21:53:26 -0500 (EST) From: "Starflung NIC" To: Subject: Automated 30 day renewal reminder 2002-05-27 X-Keywords: The following domains that are registered as belonging to you are due to expire within the next 60 days. If you would like to renew them, please contact [email protected]; otherwise they will be deactivated and may be registered by another. 
Domain Name, Expiry Date nutmegclothing.com, 2002-06-26 A sample Spam email is: From [email protected] Fri Aug 23 11:03:31 2002 Return-Path: Delivered-To: [email protected] Received: from localhost (localhost [127.0.0.1]) by phobos.labs.example.com (Postfix) with ESMTP id 478B54415C for ; Fri, 23 Aug 2002 06:02:57 -0400 (EDT) Received: from mail.webnote.net [193.120.211.219] by localhost with POP3 (fetchmail-5.9.0) for zzzz@localhost (single-drop); Fri, 23 Aug 2002 11:02:57 +0100 (IST) Received: from smtp.easydns.com (smtp.easydns.com [205.210.42.30]) by webnote.net (8.9.3/8.9.3) with ESMTP id IAA08912; Fri, 23 Aug 2002 08:13:36 +0100 From: [email protected] Received: from mymail.dk (unknown [61.97.34.233]) by smtp.easydns.com (Postfix) with SMTP id 7484A2F85C; Fri, 23 Aug 2002 03:13:31 -0400 (EDT) Reply-To: Message-ID: To: [email protected] Subject: HELP WANTED. WORK FROM HOME REPS. MiME-Version: 1.0 Content-Type: text/plain; charset="iso-8859-1" X-Priority: 3 (Normal) X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook, Build 10.0.2616 Importance: Normal Date: Fri, 23 Aug 2002 03:13:31 -0400 (EDT) Content-Transfer-Encoding: 8bit Help wanted. We are a 14 year old fortune 500 company, that is growing at a tremendous rate. We are looking for individuals who want to work from home. This is an opportunity to make an excellent income. No experience is required. We will train you. So if you are looking to be employed from home with a career that has vast opportunities, then go: http://www.basetel.com/wealthnow We are looking for energetic and self motivated people. If that is you than click on the link and fill out the form, and one of our employement specialist will contact you. To be removed from our link simple go to: http://www.basetel.com/remove.html 1349lmrd5-948HyhJ3622xXiM0-290VZdq6044fFvN0-799hUsU07l50 First, we will load the data. Our dataset has a bit over 9000 emails, with about 25% of them being spam. We will use 50% of them as a training set, 25% of them as a validation set and 25% of them as a test set. ###Code # Get list of emails spamfiles=glob.glob('./Data/Spam/*') hamfiles=glob.glob('./Data/Ham/*') # First, we will split the files into the training, validation and test sets. 
np.random.seed(seed=222017) # seed the RNG for repeatability fnames=np.asarray(spamfiles+hamfiles) nfiles=fnames.size labels=np.ones(nfiles) labels[len(spamfiles):]=-1 # Randomly permute the files we have idx=np.random.permutation(nfiles) fnames=fnames[idx] labels=labels[idx] #Split the file names into which set they belong to tname=fnames[:int(nfiles/2)] trainlabels=labels[:int(nfiles/2)] vname=fnames[int(nfiles/2):int(nfiles*3/4)] vallabels=labels[int(nfiles/2):int(nfiles*3/4)] tename=fnames[int(3/4*nfiles):] testlabels=labels[int(3/4*nfiles):] from sklearn.feature_extraction.text import CountVectorizer # Get our Bag of Words Features from the data bow = CountVectorizer(input='filename',encoding='iso-8859-1',binary=False) traindata=bow.fit_transform(tname) valdata=bow.transform(vname) testdata=bow.transform(tename) ###Output _____no_output_____ ###Markdown The $100$ most and least common terms in the vocabulary are: ###Code counts=np.reshape(np.asarray(np.argsort(traindata.sum(axis=0))),-1) vocab=np.reshape(np.asarray(bow.get_feature_names()),-1) print ("100 most common terms: " , ','.join(str(s) for s in vocab[counts[-100:]]), "\n") print ("100 least common terms: " , ','.join(str(s) for s in vocab[counts[:100]])) ###Output 100 most common terms: slashnull,dogma,ist,thu,not,lists,cnet,mail,wed,as,html,have,click,jmason,exmh,00,are,align,freshrpms,or,mailman,date,text,mon,message,12,postfix,type,arial,users,bgcolor,ie,rpm,linux,version,22,be,taint,your,mailto,sourceforge,admin,content,20,color,table,jm,on,aug,border,127,example,face,href,this,nbsp,gif,09,subject,10,img,src,sep,it,that,0100,spamassassin,height,esmtp,is,size,xent,fork,you,tr,www,in,list,11,br,width,received,localhost,id,of,and,org,by,with,net,for,td,http,2002,font,from,3d,to,the,com 100 least common terms: g6mn17405760,e17titx,e17tvdy,e17ueb2,e17vjs8,e17vjsf,e17w5r4,e17wchv,e17wcmr,s4tkh2qxhrdntbervcuydvpgt4frugzlf3xwvohcrdtxohcfpaziiaed0ne9lw5,e17wosd,e17wosk,e17wssb,e17titf,e17wsyl,e17xbmd,e17xd4y,e17xlhj,e17yawz,s4lyze220qd,e17yozl,e17ysm1,e17ysna,e17ysox,e17ywux,e17z5re,e17z65d,e17wved,e17tfo0,e17texc,e17stjj,e17kazn,e17kb3f,e17kb3l,e17kba2,e17kcfg,e17kkxb,e17kxx7,e17kxxd,e17lk0h,e17lzkx,e17m2xi,e17mbzo,e17mpr7,e17n4br,e17n8od,e17nmuf,e17oai5,e17owlg,e17owlz,e17pfia,e17pfih,e17r7cf,e17rqza,e17rqzi,e17s52j,e17s6q9,e17sd3a,e17zimu,e17zl6i,e18bs5u,e18ec44161,e1n_n,e1pyognhf88zoewompdrqazaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa,s42bvq,s3zy0uqn9cxgumxzswr1e,e1s_jim_mac_gearailt,e1t,e1xwdo3b1k3wvr1u6cyugmvhm1nnyssndv2knuhw4g,s3wul4rjqofkdbzdhdtzxxnb005aaaaaa,e208716f77,e208e2940b3,e20c8406ff,s3w3ibekx4my0f8afuy,s3ulb6cl,e2178f6d01a70dfbdf9c84c4dcaf58dc,e22,e22432940aa,e224536,e226e294098,e22ab2d42c,e23,e23917,e23a916f1e,s3qjh,e240,e240merc,e241b6184464107168656739bf96c6b9,e242f2940ef,e1l_,e17k4ao,e1l1o9q,e1irt,e18gf17,e18hpmg,e18ifxm,e193416fea,e1amfeffcsliuttecieokbirfye5ds7mqt6dpbmltqjmwz5kzz5qvkvkvknb0i8hihpnwqro1z3a,e1b2916f03,e1bf816efc ###Markdown We will have our training data in `traindata` (with labels in `trainlabels`), validation data in `valdata` (with labels in `vallabels`) and test data in `testdata` (with labels in `testlabels`). 
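###Markdown Tasks 4 and 5 below ask you to time the fit and predict calls. One simple pattern for this, sketched here with the Bernoulli Naive Bayes model standing in for any of the classifiers, is to difference `time.time()` around each call: ###Code import time
from sklearn.naive_bayes import BernoulliNB

clf = BernoulliNB()
start = time.time()
clf.fit(traindata, trainlabels)
fit_time = time.time() - start

start = time.time()
clf.predict(valdata)
predict_time = time.time() - start
print(fit_time, predict_time)
###Output _____no_output_____ ###Markdown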
For each of the following classifiers **(10 points each)**:
- sklearn.naive_bayes.BernoulliNB (Naive Bayes Classifier with Bernoulli Model)
- sklearn.naive_bayes.MultinomialNB (Naive Bayes Classifier with Multinomial Model)
- sklearn.svm.LinearSVC (Linear Support Vector Machine)
- sklearn.linear_model.LogisticRegression (Logistic Regression)
- sklearn.neighbors.KNeighborsClassifier (as a 1-Nearest Neighbor Classifier)

In *main.py*, you are required to finish the following:
1. Train on the training data in `traindata` with corresponding labels `trainlabels`. Use the default parameters, unless otherwise noted.
2. Report the Training Error.
3. Report the Validation Error.
4. Report the time it takes to fit the classifier (i.e. the time to perform xxx.fit(X,y)).
5. Report the time it takes to run the classifier on the validation data (i.e. the time to perform xxx.predict(X)).

You can ignore all warnings. After you finish all parts above, you can retrieve your performance figures as follows:

###Code
%run main.py
q1 = Question1()

classifier_list = ["BernoulliNB", "MultinomialNB", "LinearSVC", "LogisticRegression", "NN"]

for name in classifier_list:
    ret = eval("q1." + name + "_classifier(traindata, trainlabels, valdata, vallabels)")
    print(name, "classifier:")
    print("The Training Error is: %.3f" % ret[1])
    print("The Validation Error is: %.3f" % ret[2])
    print("The Fitting Time is: %.5f sec" % ret[3])
    print("The Predicting Time is: %.5f sec" % ret[4])
    print("-----------------------------------------------------------------")

###Output
BernoulliNB classifier:
The Training Error is: 0.034
The Validation Error is: 0.055
The Fitting Time is: 0.89972 sec
The Predicting Time is: 0.03590 sec
-----------------------------------------------------------------
MultinomialNB classifier:
The Training Error is: 0.019
The Validation Error is: 0.027
The Fitting Time is: 0.01693 sec
The Predicting Time is: 0.00499 sec
-----------------------------------------------------------------
LinearSVC classifier:
The Training Error is: 0.000
The Validation Error is: 0.011
The Fitting Time is: 0.53161 sec
The Predicting Time is: 0.00299 sec
-----------------------------------------------------------------
LogisticRegression classifier:
The Training Error is: 0.000
The Validation Error is: 0.008
The Fitting Time is: 1.38427 sec
The Predicting Time is: 0.00399 sec
-----------------------------------------------------------------
NN classifier:
The Training Error is: 0.000
The Validation Error is: 0.016
The Fitting Time is: 0.00702 sec
The Predicting Time is: 1.77223 sec
-----------------------------------------------------------------
###Markdown
**Extra (not evaluated):** Based on the results of this problem and knowledge of the application at hand (spam filtering), pick one of the classifiers in this problem and describe how you would use it as part of a spam filter for the University of Illinois email system. Sketch out a system design at a very high level -- how you would train the spam filter to deal with new threats, whether you would filter everyone's email jointly, etc. You may get some inspiration from the [girls and boys](https://gmail.googleblog.com/2007/10/how-our-spam-filter-works.html) at [Gmail](https://gmail.googleblog.com/2015/07/the-mail-you-want-not-spam-you-dont.html), the [chimps at MailChimp](http://kb.mailchimp.com/delivery/spam-filters/about-spam-filters) or other places.

Write a function that calculates the confusion matrix (cf. Fig. 2.1 in the notes).
You may wish to read Section 2.1.1 in the notes -- it may be helpful, but is not necessary to complete this problem. **(10 points)**

Run the classifier you selected in the previous part of the problem on the test data. The following code displays the test error and the output of the function. **(10 points)**

###Code
_, testError, cm = q1.classify(traindata, trainlabels, testdata, testlabels)

print("The Test Error is: %3f" % testError)
print("Confusion matrix for test data:")
print ("True Positives:", cm[0,0], "False Positive:", cm[0,1])
print ("False Negative:", cm[1,0], "True Negatives:", cm[1,1])
print ("True Positive Rate : ", cm[0,0]/(cm[0,0] + cm[1,0]))
print ("False Positive Rate: ", cm[0,1]/(cm[0,1] + cm[1,1]))
###Output
The Test Error is: 0.008982
Confusion matrix for test data:
True Positives: 616.0 False Positive: 17.0
False Negative: 4.0 True Negatives: 1701.0
True Positive Rate :  0.9935483870967742
False Positive Rate:  0.00989522700814901
###Markdown
As a sanity check, you should observe that your true positive rate is above 0.95 (i.e. highly sensitive).

Problem 2: Cross-Validation (45 Points)

Now, we will load some data (acquired from K.P. Murphy's PMTK toolkit).
###Code
problem2_tmp= genfromtxt('Data/p2.csv', delimiter=',')

# Randomly reorder the data
np.random.seed(seed=2217) # seed the RNG for repeatability
idx=np.random.permutation(problem2_tmp.shape[0])
problem2_tmp=problem2_tmp[idx]

#The training data which you will use is called "traindata"
traindata=problem2_tmp[:200,:2]
#The training labels are in "trainlabels"
trainlabels=problem2_tmp[:200,2]
#The test data which you will use is called "testdata" with labels "testlabels"
testdata=problem2_tmp[200:,:2]
testlabels=problem2_tmp[200:,2]

# You should not re-shuffle your data in your functions!
###Output
_____no_output_____
###Markdown
Write a function which implements $5$-fold cross-validation to estimate the error of a k-Nearest Neighbors (kNN) classifier under the 0,1-loss. You will be given as input:
* A (N,d) numpy.ndarray of training data, *trainData* (with N divisible by 5)
* A length $N$ numpy.ndarray of training labels, *trainLabels*
* A number $k$, for which cross-validated error estimates will be outputted for $1,\ldots,k$

Your output will be a vector (represented as a numpy.ndarray) *err*, such that *err[i]* is the cross-validated estimate of the error when using i neighbors (*err* will be of length $k+1$; the zero-th component of the vector will be meaningless). **For this problem, take your folds to be 0:N/5, N/5:2N/5, ..., 4N/5:N for cross-validation (In general, however, the folds should be randomly divided).**

Use scikit-learn's sklearn.neighbors.KNeighborsClassifier to perform the training and classification for the kNN models involved. Do not use any other features of scikit-learn, such as things from sklearn.model_selection. (25 points)

Write a function that *calls the above function* and returns 1) the output from the previous function, 2) the number of neighbors within $1,\ldots,30$ that minimizes the cross-validation error, and 3) the corresponding minimum error. (10 points)

The following code helps you to visualize your result. It plots the cross-validation error with respect to the number of neighbors. Your best number of neighbors should be roughly at the middle of your err array.
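For reference, the contiguous-fold scheme described above can be sketched as follows (a minimal illustration rather than the graded *main.py* solution; `cv_error_sketch` is a hypothetical name, and $N$ is assumed divisible by 5):

###Code
from sklearn.neighbors import KNeighborsClassifier

def cv_error_sketch(trainData, trainLabels, k):
    N = trainData.shape[0]
    err = np.zeros(k + 1)  # err[0] is meaningless by construction
    folds = [slice(i * N // 5, (i + 1) * N // 5) for i in range(5)]
    for n_neighbors in range(1, k + 1):
        wrong = 0
        for fold in folds:
            train_mask = np.ones(N, dtype=bool)
            train_mask[fold] = False  # hold this fold out
            clf = KNeighborsClassifier(n_neighbors=n_neighbors)
            clf.fit(trainData[train_mask], trainLabels[train_mask])
            wrong += np.sum(clf.predict(trainData[fold]) != trainLabels[fold])
        err[n_neighbors] = wrong / N  # 0,1-loss averaged over all held-out points
    return err
###Output
_____no_output_____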
###Code %run main.py q2 = Question2() k = 30 err, k_min, err_min = q2.minimizer_K(traindata,trainlabels,k) print(err) plot(np.arange(1,k+1),err[1:]) xlabel('Number of Neighbors') ylabel('Cross-validation error') axis('tight') print("The best number of neighbors is:", k_min) print("The corresponding error is:", err_min) ###Output [1. 0.26 0.245 0.24 0.225 0.215 0.2 0.19 0.2 0.185 0.19 0.185 0.19 0.18 0.175 0.18 0.19 0.195 0.19 0.19 0.195 0.19 0.195 0.21 0.2 0.205 0.2 0.21 0.205 0.2 0.195] The best number of neighbors is: 14 The corresponding error is: 0.175 ###Markdown Train a kNN model on the whole training data using the number of neighbors you found in the previous part of the question, and apply it to the test data. **(10 points)** ###Code _, testError = q2.classify(traindata, trainlabels, testdata, testlabels) print("The test error is:", testError) ###Output The test error is: 0.214 ###Markdown As a sanity check, the test error should be around 0.2. Problem 3: Detecting Cancer with SVMs and Logistic Regression (35 points) We consider the [Breast Cancer Wisconsin Data Set](https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29) from W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on Electronic Imaging: Science and Technology, volume 1905, pages 861-870, San Jose, CA, 1993. The authors diagnosed people by characterizing 3 cell nuclei per person extracted from the breast (pictures [here](http://web.archive.org/web/19970225174429/http://www.cs.wisc.edu/~street/images/)), each with 10 features (for a 30-dimensional feature space):1. radius (mean of distances from center to points on the perimeter) 2. texture (standard deviation of gray-scale values) 3. perimeter 4. area 5. smoothness (local variation in radius lengths) 6. compactness (perimeter^2 / area - 1.0) 7. concavity (severity of concave portions of the contour) 8. concave points (number of concave portions of the contour) 9. symmetry 10. fractal dimension ("coastline approximation" - 1)and classified the sample into one of two classes: Malignant ($+1$) or Benign ($-1$). You can read the original paper for more on what these features mean.You will be attempting to classify if a sample is Malignant or Benign using Support Vector Machines, as well as Logistic Regression. Since we don't have all that much data, we will use 10-fold cross-validation to tune our parameters for our SVMs and Logistic Regression. We use 90% of the data for training, and 10% for testing.You will be experimenting with SVMs using Gaussian RBF kernels (through sklearn.svm.SVC), linear SVMs (through sklearn.svm.LinearSVC), and Logistic Regression (sklearn.linear_model.LogisticRegression). Your model selection will be done with cross-validation via sklearn.model_selection's *cross_val_score*. This returns the accuracy for each fold, i.e. the fraction of samples classified correctly. Thus, the cross-validation error is simply 1-mean(cross_val_score). First, we load the data. We will use scikit-learn's train test split function to split the data. The data is scaled for reasons outlined here. In short, it helps avoid some numerical issues and avoids some problems with certain features which are typically large affecting the SVM optimization problem unfairly compared to features which are typically small. 
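Concretely, for a feature range of $[-1, 1]$ the scaler maps each feature as $x \mapsto -1 + 2\,(x - x_{\min})/(x_{\max} - x_{\min})$, with $x_{\min}$ and $x_{\max}$ taken from the training data only.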
###Code
cancer = genfromtxt('Data/wdbc.csv', delimiter=',')

np.random.seed(seed=282017) # seed the RNG for repeatability
idx=np.random.permutation(cancer.shape[0])
cancer=cancer[idx]

cancer_features=cancer[:,1:]
cancer_labels=cancer[:,0]

#The training data is in data_train with labels label_train.
# The test data is in data_test with labels label_test.
data_train, data_test, label_train, label_test = train_test_split(cancer_features,cancer_labels,test_size=0.1,random_state=292017)

# Rescale the training data and scale the test data correspondingly
scaler=MinMaxScaler(feature_range=(-1,1))
data_train=scaler.fit_transform(data_train) #Note that the scaling is determined solely via the training data!
data_test=scaler.transform(data_test)

%run main.py
q3 = Question3()

# The following lines ignore the warnings.
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
The soft margin linear SVM is tuned based on a parameter $C$, which controls the extent to which points may violate the margin (this isn't the same $C$ as in the notes, though it serves the same function; see the [scikit-learn documentation](http://scikit-learn.org/stable/modules/svm.html#svc) for details). Use cross-validation to select a value of $C$ for a linear SVM (sklearn.svm.LinearSVC) by varying $C$ from $2^{-5},2^{-4},\ldots,2^{15}$. (10 points)
###Code
C_min, min_err = q3.LinearSVC_crossValidation(data_train, label_train)
print("Soft Margin Linear SVM:")
print("The best C is:", C_min)
print("The corresponding error is:", min_err)
###Output
Soft Margin Linear SVM:
The best C is: 0.125
The corresponding error is: 0.02714932126696823
###Markdown
You will now experiment with using kernels in an SVM, particularly the Gaussian RBF kernel (in sklearn.svm.SVC). The SVM has two parameters to tune in this case: $C$ (as before), and $\gamma$, which is a parameter in the RBF. Use cross-validation to select parameters $(C,\gamma)$ by varying $(C,\gamma)$ over $C=2^{-5},2^{-4},\ldots,2^{15}$ and $\gamma=2^{-15},\ldots,2^{3}$ [So, you will try about 400 parameter choices]. This procedure is known as a **grid search**. Use *GridSearchCV* (see doc [here](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html)) to perform a grid search (and you can use *clf.best\_params_* to get the best parameters). Out of these, which $(C,\gamma)$ parameters would you choose? What is the corresponding cross-validation error?

We are using a fairly coarse grid for this problem, but in practice one could use a finer grid once the rough range of good parameters is known (rather than starting with a fine grid, which would waste a lot of time). (10 points)
###Code
C_min, gamma_min, min_err = q3.SVC_crossValidation(data_train, label_train)
print("SVM with RBF kernel:")
print("The best C is:", C_min)
print("The best gamma is:", gamma_min)
print("The corresponding error is:", min_err)
###Output
SVM with RBF kernel:
The best C is: 8
The best gamma is: 0.125
The corresponding error is: 0.01953125
###Markdown
As stated in a footnote in the notes, Logistic Regression normally has a regularizer parameter to promote stability. Scikit-learn calls this parameter $C$ (which is like $\lambda^{-1}$ in the notes); see the [LibLinear](http://www.csie.ntu.edu.tw/~cjlin/papers/liblinear.pdf) documentation for the exact meaning of $C$. Use cross-validation to select a value of $C$ for logistic regression (sklearn.linear_model.LogisticRegression) by varying $C$ from $2^{-14},\ldots,2^{14}$.
You may optionally make use of sklearn.model_selection.GridSearchCV, or write the search by hand. **(5 points)** ###Code C_min, min_err = q3.LogisticRegression_crossValidation(data_train, label_train) print("Logistic Regression:") print("The best C is:", C_min) print("The corresponding error is:", min_err) ###Output Logistic Regression: The best C is: 2 The corresponding error is: 0.02734375 ###Markdown Train the classifier selected above on the whole training set. Then, estimate the prediction error using the test set. What is your estimate of the prediction error? How does it compare to the cross-validation error? (10 points) ###Code _, error = q3.classify(data_train, label_train, data_test, label_test) print("The prediction error is:", error) ###Output The prediction error is: 0.017543859649122806
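###Markdown
For reference, the RBF-kernel grid search above can be sketched with *GridSearchCV* along these lines (a minimal sketch, not the graded solution; it assumes `data_train` and `label_train` from this notebook):

###Code
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# The grid described in the problem: C = 2^-5 ... 2^15, gamma = 2^-15 ... 2^3.
param_grid = {"C": 2.0 ** np.arange(-5, 16), "gamma": 2.0 ** np.arange(-15, 4)}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10)
search.fit(data_train, label_train)
cv_error = 1.0 - search.best_score_  # cross-validation error = 1 - mean accuracy
###Output
_____no_output_____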
code/post_process_data.ipynb
###Markdown Process the raw mooring dataContents:* Raw data reprocessing.* Interpolated data processing.* ADCP processing.* VMP processing.Import the needed libraries. ###Code import datetime import glob import os import gsw import numpy as np import numpy.ma as ma import scipy.integrate as igr import scipy.interpolate as itpl import scipy.io as io import scipy.signal as sig import seawater import xarray as xr from matplotlib import path import munch import load_data import moorings as moo import utils from oceans.sw_extras import gamma_GP_from_SP_pt # Data directory data_in = os.path.expanduser("../data") data_out = data_in def esum(ea, eb): return np.sqrt(ea ** 2 + eb ** 2) def emult(a, b, ea, eb): return np.abs(a * b) * np.sqrt((ea / a) ** 2 + (eb / b) ** 2) ###Output _____no_output_____ ###Markdown Process raw data into a more convenient formatParameters for raw processing. ###Code # Corrected levels. # heights = [-540., -1250., -2100., -3500.] # Filter cut off (hours) tc_hrs = 40.0 # Start of time series (matlab datetime) t_start = 734494.0 # Length of time series max_len = N_data = 42048 # Data file raw_data_file = "moorings.mat" # Index where NaNs start in u and v data from SW mooring sw_vel_nans = 14027 # Sampling period (minutes) dt_min = 15.0 # Window length for wave stress quantities and mesoscale strain quantities. nperseg = 2 ** 9 # Spectra parameters window = "hanning" detrend = "constant" # Extrapolation/interpolation limit above which data will be removed. dzlim = 100.0 # Integration of spectra parameters. These multiple N and f respectively to set # the integration limits. fhi = 1.0 flo = 1.0 flov = 1.0 # When integrating spectra involved in vertical fluxes, get rid of # the near inertial portion. # When bandpass filtering windowed data use these params multiplied by f and N filtlo = 0.9 # times f filthi = 1.1 # times N # Interpolation distance that raises flag (m) zimax = 100.0 dt_sec = dt_min * 60.0 # Sample period in seconds. dt_day = dt_sec / 86400.0 # Sample period in days. N_per_day = int(1.0 / dt_day) # Samples per day. print("RAW DATA") ############################################################################### # Load w data for cc mooring and chop from text files. I checked and all the # data has the same start date and the same length print("Loading vertical velocity data from text files.") nortek_files = glob.glob(os.path.join(data_in, "cc_1_*.txt")) depth = [] for file in nortek_files: with open(file, "r") as f: content = f.readlines() depth.append(int(content[3].split("=")[1].split()[0])) idxs = np.argsort(depth) w = np.empty((42573, 12)) datenum = np.empty((42573, 12)) for i in idxs: YY, MM, DD, hh, W = np.genfromtxt( nortek_files[i], skip_header=12, usecols=(0, 1, 2, 3, 8), unpack=True ) YY = YY.astype(int) MM = MM.astype(int) DD = DD.astype(int) mm = (60 * (hh % 1)).astype(int) hh = np.floor(hh).astype(int) w[:, i] = W / 100 dates = [] for j in range(len(YY)): dates.append(datetime.datetime(YY[j], MM[j], DD[j], hh[j], mm[j])) dates = np.asarray(dates) datenum[:, i] = utils.datetime_to_datenum(dates) idx_start = np.searchsorted(datenum[:, 0], t_start) w = w[idx_start : idx_start + max_len] # Start prepping raw data from the mat file. 
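# moorings.mat holds one record per mooring (c, nw, ne, se and sw), each with
# Dates, Temp, Sal, u, v and Pres arrays that are unpacked and renamed below.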
print("Loading raw data file.") data_path = os.path.join(data_in, raw_data_file) ds = utils.loadmat(data_path) cc = ds.pop("c") nw = ds.pop("nw") ne = ds.pop("ne") se = ds.pop("se") sw = ds.pop("sw") cc["id"] = "cc" nw["id"] = "nw" ne["id"] = "ne" se["id"] = "se" sw["id"] = "sw" moorings = [cc, nw, ne, se, sw] # Useful information dt_min = 15.0 # Sample period in minutes. dt_sec = dt_min * 60.0 # Sample period in seconds. dt_day = dt_sec / 86400.0 # Sample period in days. print("Chopping time series.") for m in moorings: m["idx_start"] = np.searchsorted(m["Dates"], t_start) for m in moorings: m["N_data"] = max_len m["idx_end"] = m["idx_start"] + max_len # Chop data to start and end dates. varl = ["Dates", "Temp", "Sal", "u", "v", "Pres"] for m in moorings: for var in varl: m[var] = m[var][m["idx_start"] : m["idx_end"], ...] print("Renaming variables.") print("Interpolating negative pressures.") for m in moorings: __, N_levels = m["Pres"].shape m["N_levels"] = N_levels # Tile time and pressure m["t"] = np.tile(m.pop("Dates")[:, np.newaxis], (1, N_levels)) # Fix negative pressures by interpolating nearby data. fix = m["Pres"] < 0.0 if fix.any(): levs = np.argwhere(np.any(fix, axis=0))[0] for lev in levs: x = m["t"][fix[:, lev], lev] xp = m["t"][~fix[:, lev], lev] fp = m["Pres"][~fix[:, lev], lev] m["Pres"][fix[:, lev], lev] = np.interp(x, xp, fp) # Rename variables m["P"] = m.pop("Pres") m["u"] = m["u"] / 100.0 m["v"] = m["v"] / 100.0 m["spd"] = np.sqrt(m["u"] ** 2 + m["v"] ** 2) m["angle"] = np.angle(m["u"] + 1j * m["v"]) m["Sal"][(m["Sal"] < 33.5) | (m["Sal"] > 34.9)] = np.nan m["S"] = m.pop("Sal") m["Temp"][m["Temp"] < -2.0] = np.nan m["T"] = m.pop("Temp") # Dimensional quantities. m["f"] = gsw.f(m["lat"]) m["ll"] = np.array([m["lon"], m["lat"]]) m["z"] = gsw.z_from_p(m["P"], m["lat"]) # Estimate thermodynamic quantities. m["SA"] = gsw.SA_from_SP(m["S"], m["P"], m["lon"], m["lat"]) m["CT"] = gsw.CT_from_t(m["SA"], m["T"], m["P"]) # specvol_anom = gsw.specvol_anom(m['SA'], m['CT'], m['P']) # m['sva'] = specvol_anom cc["wr"] = w print("Calculating thermodynamics.") print("Excluding bad data using T-S funnel.") # Chuck out data outside of TS funnel sensible range. funnel = np.genfromtxt("funnel.txt") for m in moorings: S = m["SA"].flatten() T = m["CT"].flatten() p = path.Path(funnel) in_funnel = p.contains_points(np.vstack((S, T)).T) fix = np.reshape(~in_funnel, m["SA"].shape) m["in_funnel"] = ~fix varl = ["S"] if fix.any(): levs = np.squeeze(np.argwhere(np.any(fix, axis=0))) for lev in levs: x = m["t"][fix[:, lev], lev] xp = m["t"][~fix[:, lev], lev] for var in varl: fp = m[var][~fix[:, lev], lev] m[var][fix[:, lev], lev] = np.interp(x, xp, fp) # Re-estimate thermodynamic quantities. m["SA"] = gsw.SA_from_SP(m["S"], m["P"], m["lon"], m["lat"]) m["CT"] = gsw.CT_from_t(m["SA"], m["T"], m["P"]) print("Calculating neutral density.") # Estimate the neutral density for m in moorings: # Compute potential temperature using the 1983 UNESCO EOS. m["PT0"] = seawater.ptmp(m["S"], m["T"], m["P"]) # Flatten variables for analysis. lons = m["lon"] * np.ones_like(m["P"]) lats = m["lat"] * np.ones_like(m["P"]) S_ = m["S"].flatten() T_ = m["PT0"].flatten() P_ = m["P"].flatten() LO_ = lons.flatten() LA_ = lats.flatten() gamman = gamma_GP_from_SP_pt(S_, T_, P_, LO_, LA_) m["gamman"] = np.reshape(gamman, m["P"].shape) + 1000.0 print("Calculating slice gradients at C.") # Want gradient of density/vel to be local, no large central differences. 
slices = [slice(0, 4), slice(4, 6), slice(6, 10), slice(10, 12)] cc["dgdz"] = np.empty((cc["N_data"], cc["N_levels"])) cc["dTdz"] = np.empty((cc["N_data"], cc["N_levels"])) cc["dudz"] = np.empty((cc["N_data"], cc["N_levels"])) cc["dvdz"] = np.empty((cc["N_data"], cc["N_levels"])) for sl in slices: z = cc["z"][:, sl] g = cc["gamman"][:, sl] T = cc["T"][:, sl] u = cc["u"][:, sl] v = cc["v"][:, sl] cc["dgdz"][:, sl] = np.gradient(g, axis=1) / np.gradient(z, axis=1) cc["dTdz"][:, sl] = np.gradient(T, axis=1) / np.gradient(z, axis=1) cc["dudz"][:, sl] = np.gradient(u, axis=1) / np.gradient(z, axis=1) cc["dvdz"][:, sl] = np.gradient(v, axis=1) / np.gradient(z, axis=1) print("Filtering data.") # Low pass filter data. tc = tc_hrs * 60.0 * 60.0 fc = 1.0 / tc # Cut off frequency. normal_cutoff = fc * dt_sec * 2.0 # Nyquist frequency is half 1/dt. b, a = sig.butter(4, normal_cutoff, btype="lowpass") varl = [ "z", "P", "S", "T", "u", "v", "wr", "SA", "CT", "gamman", "dgdz", "dTdz", "dudz", "dvdz", ] # sva for m in moorings: for var in varl: try: data = m[var].copy() except KeyError: continue m[var + "_m"] = np.nanmean(data, axis=0) # For the purpose of filtering set fill with 0 rather than nan (SW) nans = np.isnan(data) if nans.any(): data[nans] = 0.0 datalo = sig.filtfilt(b, a, data, axis=0) # Then put nans back... if nans.any(): datalo[nans] = np.nan namelo = var + "_lo" m[namelo] = datalo namehi = var + "_hi" m[namehi] = m[var] - m[namelo] m["spd_lo"] = np.sqrt(m["u_lo"] ** 2 + m["v_lo"] ** 2) m["angle_lo"] = ma.angle(m["u_lo"] + 1j * m["v_lo"]) m["spd_hi"] = np.sqrt(m["u_hi"] ** 2 + m["v_hi"] ** 2) m["angle_hi"] = ma.angle(m["u_hi"] + 1j * m["v_hi"]) ###Output _____no_output_____ ###Markdown Save the raw data. ###Code io.savemat(os.path.join(data_out, "C_raw.mat"), cc) io.savemat(os.path.join(data_out, "NW_raw.mat"), nw) io.savemat(os.path.join(data_out, "NE_raw.mat"), ne) io.savemat(os.path.join(data_out, "SE_raw.mat"), se) io.savemat(os.path.join(data_out, "SW_raw.mat"), sw) ###Output _____no_output_____ ###Markdown Create virtual mooring 'raw'. ###Code print("VIRTUAL MOORING") print("Determine maximum knockdown as a function of z.") zms = np.hstack([m["z"].max(axis=0) for m in moorings if "se" not in m["id"]]) Dzs = np.hstack( [m["z"].min(axis=0) - m["z"].max(axis=0) for m in moorings if "se" not in m["id"]] ) zmax_pfit = np.polyfit(zms, Dzs, 2) # Second order polynomial for max knockdown np.save( os.path.join(data_out, "zmax_pfit"), np.polyfit(zms, Dzs, 2), allow_pickle=False ) # Define the knockdown model: def zmodel(u, zmax, zmax_pfit): return zmax + np.polyval(zmax_pfit, zmax) * u ** 3 print("Load model data.") mluv = xr.load_dataset("../data/mooring_locations_uv1.nc") mluv = mluv.isel( t=slice(0, np.argwhere(mluv.u[:, 0, 0].data == 0)[0][0]) ) # Get rid of end zeros... 
mluv = mluv.assign_coords(lon=mluv.lon) mluv = mluv.assign_coords(id=["cc", "nw", "ne", "se", "sw"]) mluv["spd"] = (mluv.u ** 2 + mluv.v ** 2) ** 0.5 print("Create virtual mooring 'raw' dataset.") savedict = { "cc": {"id": "cc"}, "nw": {"id": "nw"}, "ne": {"id": "ne"}, "se": {"id": "se"}, "sw": {"id": "sw"}, } mids = ["cc", "nw", "ne", "se", "sw"] def nearidx(a, v): return np.argmin(np.abs(np.asarray(a) - v)) for idx, mid in enumerate(mids): savedict[mid]["lon"] = mluv.lon[idx].data savedict[mid]["lat"] = mluv.lat[idx].data izs = [] for i in range(moorings[idx]["N_levels"]): izs.append(nearidx(mluv.z, moorings[idx]["z"][:, i].max())) spdm = mluv.spd.isel(z=izs, index=idx).mean(dim="z") spdn = spdm / spdm.max() zmax = mluv.z[izs] zk = zmodel(spdn.data[:, np.newaxis], zmax.data[np.newaxis, :], zmax_pfit) savedict[mid]["z"] = zk savedict[mid]["t"] = np.tile( mluv.t.data[:, np.newaxis], (1, moorings[idx]["N_levels"]) ) fu = itpl.RectBivariateSpline(mluv.t.data, -mluv.z.data, mluv.u[..., idx].data) fv = itpl.RectBivariateSpline(mluv.t.data, -mluv.z.data, mluv.v[..., idx].data) uk = fu(mluv.t.data[:, np.newaxis], -zk, grid=False) vk = fv(mluv.t.data[:, np.newaxis], -zk, grid=False) savedict[mid]["u"] = uk savedict[mid]["v"] = vk io.savemat("../data/virtual_mooring_raw.mat", savedict) ###Output _____no_output_____ ###Markdown Create virtual mooring 'interpolated'. ###Code # Corrected levels. # heights = [-540., -1250., -2100., -3500.] # Filter cut off (hours) tc_hrs = 40.0 # Start of time series (matlab datetime) # t_start = 734494.0 # Length of time series # max_len = N_data = 42048 # Sampling period (minutes) dt_min = 60.0 dt_sec = dt_min * 60.0 # Sample period in seconds. dt_day = dt_sec / 86400.0 # Sample period in days. N_per_day = int(1.0 / dt_day) # Samples per day. # Window length for wave stress quantities and mesoscale strain quantities. nperseg = 2 ** 7 # Spectra parameters window = "hanning" detrend = "constant" # Extrapolation/interpolation limit above which data will be removed. dzlim = 100.0 # Integration of spectra parameters. These multiple N and f respectively to set # the integration limits. fhi = 1.0 flo = 1.0 flov = 1.0 # When integrating spectra involved in vertical fluxes, get rid of # the near inertial portion. moorings = utils.loadmat("../data/virtual_mooring_raw.mat") cc = moorings.pop("cc") nw = moorings.pop("nw") ne = moorings.pop("ne") se = moorings.pop("se") sw = moorings.pop("sw") moorings = [cc, nw, ne, se, sw] N_data = cc["t"].shape[0] ###Output _____no_output_____ ###Markdown Polynomial fits first. ###Code print("**Generating corrected data**") # Generate corrected moorings z = np.concatenate([m["z"].flatten() for m in moorings]) u = np.concatenate([m["u"].flatten() for m in moorings]) v = np.concatenate([m["v"].flatten() for m in moorings]) print("Calculating polynomial coefficients.") pzu = np.polyfit(z, u, 2) pzv = np.polyfit(z, v, 2) # Additional height in m to add to interpolation height. hoffset = [-25.0, 50.0, -50.0, 100.0] pi2 = np.pi * 2.0 nfft = nperseg levis = [(0, 1, 2, 3), (4, 5), (6, 7, 8, 9), (10, 11)] Nclevels = len(levis) spec_kwargs = { "fs": 1.0 / dt_sec, "window": window, "nperseg": nperseg, "nfft": nfft, "detrend": detrend, "axis": 0, } idx1 = np.arange(nperseg, N_data, nperseg // 2) # Window end index idx0 = idx1 - nperseg # Window start index N_windows = len(idx0) # Initialise the place holder dictionaries. 
c12w = {"N_levels": 12} # Dictionary for raw, windowed data from central mooring c4w = {"N_levels": Nclevels} # Dictionary for processed, windowed data c4 = {"N_levels": Nclevels} # Dictionary for processed data # Dictionaries for raw, windowed data from outer moorings nw5w, ne5w, se5w, sw5w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"} moorings5w = [nw5w, ne5w, se5w, sw5w] # Dictionaries for processed, windowed data from outer moorings nw4w, ne4w, se4w, sw4w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"} moorings4w = [nw4w, ne4w, se4w, sw4w] # Initialised the arrays of windowed data varr = ["t", "z", "u", "v"] for var in varr: c12w[var] = np.zeros((nperseg, N_windows, 12)) var4 = [ "t", "z", "u", "v", "dudx", "dvdx", "dudy", "dvdy", "dudz", "dvdz", "nstrain", "sstrain", "vort", "div", ] for var in var4: c4w[var] = np.zeros((nperseg, N_windows, Nclevels)) for var in var4: c4[var] = np.zeros((N_windows, Nclevels)) # Initialised the arrays of windowed data for outer mooring varro = ["z", "u", "v"] for var in varro: for m5w in moorings5w: m5w[var] = np.zeros((nperseg, N_windows, 5)) var4o = ["z", "u", "v"] for var in var4o: for m4w in moorings4w: m4w[var] = np.zeros((nperseg, N_windows, Nclevels)) # for var in var4o: # for m4 in moorings4: # m4[var] = np.zeros((N_windows, 4)) # Window the raw data. for i in range(N_windows): idx = idx0[i] for var in varr: c12w[var][:, i, :] = cc[var][idx : idx + nperseg, :] for i in range(N_windows): idx = idx0[i] for var in varro: for m5w, m in zip(moorings5w, moorings[1:]): m5w[var][:, i, :] = m[var][idx : idx + nperseg, :] print("Interpolating properties.") # Do the interpolation for i in range(Nclevels): # THIS hoffset is important!!! c4["z"][:, i] = np.mean(c12w["z"][..., levis[i]], axis=(0, -1)) + hoffset[i] for j in range(N_windows): zr = c12w["z"][:, j, levis[i]] ur = c12w["u"][:, j, levis[i]] vr = c12w["v"][:, j, levis[i]] zi = c4["z"][j, i] c4w["z"][:, j, i] = np.mean(zr, axis=-1) c4w["t"][:, j, i] = c12w["t"][:, j, 0] c4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu) c4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv) dudzr = np.gradient(ur, axis=-1) / np.gradient(zr, axis=-1) dvdzr = np.gradient(vr, axis=-1) / np.gradient(zr, axis=-1) # Instead of mean, could moo.interp1d c4w["dudz"][:, j, i] = np.mean(dudzr, axis=-1) c4w["dvdz"][:, j, i] = np.mean(dvdzr, axis=-1) for m5w, m4w in zip(moorings5w, moorings4w): zr = m5w["z"][:, j, :] ur = m5w["u"][:, j, :] vr = m5w["v"][:, j, :] m4w["z"][:, j, i] = np.full((nperseg), zi) m4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu) m4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv) print("Filtering windowed data.") fcorcpd = np.abs(gsw.f(cc["lat"])) * 86400 / pi2 varl = ["u", "v"] for var in varl: c4w[var + "_lo"] = utils.butter_filter( c4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0 ) c4w[var + "_hi"] = c4w[var] - c4w[var + "_lo"] varl = ["u", "v"] for var in varl: for m4w in moorings4w: m4w[var + "_lo"] = utils.butter_filter( m4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0 ) m4w[var + "_hi"] = m4w[var] - m4w[var + "_lo"] c4w["zi"] = np.ones_like(c4w["z"]) * c4["z"] print("Calculating horizontal gradients.") # Calculate horizontal gradients for j in range(N_windows): ll = np.stack( ([m["lon"] for m in moorings[1:]], [m["lat"] for m in moorings[1:]]), axis=1 ) uv = np.stack( ( [m4w["u_lo"][:, j, :] for m4w in moorings4w], [m4w["v_lo"][:, j, :] for m4w in moorings4w], ), axis=1, ) dudx, dudy, dvdx, dvdy, vort, div = moo.div_vort_4D(ll[:, 0], ll[:, 
1], uv) nstrain = dudx - dvdy sstrain = dvdx + dudy c4w["dudx"][:, j, :] = dudx c4w["dudy"][:, j, :] = dudy c4w["dvdx"][:, j, :] = dvdx c4w["dvdy"][:, j, :] = dvdy c4w["nstrain"][:, j, :] = nstrain c4w["sstrain"][:, j, :] = sstrain c4w["vort"][:, j, :] = vort c4w["div"][:, j, :] = div for var in var4: if var == "z": # Keep z as modified by hoffset. continue c4[var] = np.mean(c4w[var], axis=0) freq, c4w["Puu"] = sig.welch(c4w["u_hi"], **spec_kwargs) _, c4w["Pvv"] = sig.welch(c4w["v_hi"], **spec_kwargs) _, c4w["Cuv"] = sig.csd(c4w["u_hi"], c4w["v_hi"], **spec_kwargs) c4w["freq"] = freq.copy() # Get rid of annoying tiny values. svarl = ["Puu", "Pvv", "Cuv"] for var in svarl: c4w[var][0, ...] = 0.0 c4[var + "_int"] = np.full((N_windows, 4), np.nan) # Horizontal azimuth according to Jing 2018 c4w["theta"] = np.arctan2(2.0 * c4w["Cuv"].real, (c4w["Puu"] - c4w["Pvv"])) / 2 # Integration ############################################################# print("Integrating power spectra.") for var in svarl: c4w[var + "_cint"] = np.full_like(c4w[var], fill_value=np.nan) fcor = np.abs(gsw.f(cc["lat"])) / pi2 N_freq = len(freq) freq_ = np.tile(freq[:, np.newaxis, np.newaxis], (1, N_windows, Nclevels)) # ulim = fhi * np.tile(c4["N"][np.newaxis, ...], (N_freq, 1, 1)) / pi2 ulim = 1e9 # Set a huge upper limit since we don't know what N is... llim = fcor * flo use = (freq_ < ulim) & (freq_ > llim) svarl = ["Puu", "Pvv", "Cuv"] for var in svarl: c4[var + "_int"] = igr.simps(use * c4w[var].real, freq, axis=0) c4w[var + "_cint"] = igr.cumtrapz(use * c4w[var].real, freq, axis=0, initial=0.0) # Change lower integration limits for vertical components... llim = fcor * flov use = (freq_ < ulim) & (freq_ > llim) # Usefull quantities c4["nstress"] = c4["Puu_int"] - c4["Pvv_int"] c4["sstress"] = -2.0 * c4["Cuv_int"] c4["F_horiz"] = ( -0.5 * (c4["Puu_int"] - c4["Pvv_int"]) * c4["nstrain"] - c4["Cuv_int"] * c4["sstrain"] ) # ## Now we have to create the model 'truth'... # # Load the model data and estimate some gradients. print("Estimating smoothed gradients (slow).") mluv = xr.load_dataset("../data/mooring_locations_uv1.nc") mluv = mluv.isel( t=slice(0, np.argwhere(mluv.u[:, 0, 0].data == 0)[0][0]) ) # Get rid of end zeros... mluv = mluv.assign_coords(lon=mluv.lon) mluv = mluv.assign_coords(id=["cc", "nw", "ne", "se", "sw"]) mluv["dudz"] = (["t", "z", "index"], np.gradient(mluv.u, mluv.z, axis=1)) mluv["dvdz"] = (["t", "z", "index"], np.gradient(mluv.v, mluv.z, axis=1)) uv = np.rollaxis(np.stack((mluv.u, mluv.v))[..., 1:], 3, 0) dudx, dudy, dvdx, dvdy, vort, div = moo.div_vort_4D(mluv.lon[1:], mluv.lat[1:], uv) nstrain = dudx - dvdy sstrain = dvdx + dudy mluv["dudx"] = (["t", "z"], dudx) mluv["dudy"] = (["t", "z"], dudy) mluv["dvdx"] = (["t", "z"], dvdx) mluv["dvdy"] = (["t", "z"], dvdy) mluv["nstrain"] = (["t", "z"], nstrain) mluv["sstrain"] = (["t", "z"], sstrain) mluv["vort"] = (["t", "z"], vort) mluv["div"] = (["t", "z"], div) # Smooth the model data in an equivalent way to the real mooring. 
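# A Hann-weighted rolling mean of length nperseg mirrors the Hann windowing
# applied to the mooring records, so the model and mooring gradients are
# smoothed consistently before they are compared.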
dudxs = ( mluv.dudx.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) dvdxs = ( mluv.dvdx.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) dudys = ( mluv.dudy.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) dvdys = ( mluv.dvdy.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) sstrains = ( mluv.sstrain.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) nstrains = ( mluv.nstrain.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) divs = ( mluv.div.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) vorts = ( mluv.vort.rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) dudzs = ( mluv.dudz.isel(index=0) .rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) dvdzs = ( mluv.dvdz.isel(index=0) .rolling(t=nperseg, center=True) .reduce(np.average, weights=sig.hann(nperseg)) .dropna("t") ) # Make spline fits. fdudx = itpl.RectBivariateSpline(dudxs.t.data, -dudxs.z.data, dudxs.data) fdvdx = itpl.RectBivariateSpline(dvdxs.t.data, -dvdxs.z.data, dvdxs.data) fdudy = itpl.RectBivariateSpline(dudys.t.data, -dudys.z.data, dudys.data) fdvdy = itpl.RectBivariateSpline(dvdys.t.data, -dvdys.z.data, dvdys.data) fsstrain = itpl.RectBivariateSpline(sstrains.t.data, -sstrains.z.data, sstrains.data) fnstrain = itpl.RectBivariateSpline(nstrains.t.data, -nstrains.z.data, nstrains.data) fdiv = itpl.RectBivariateSpline(divs.t.data, -divs.z.data, divs.data) fvort = itpl.RectBivariateSpline(vorts.t.data, -vorts.z.data, vorts.data) fdudz = itpl.RectBivariateSpline(dudzs.t.data, -dudzs.z.data, dudzs.data) fdvdz = itpl.RectBivariateSpline(dvdzs.t.data, -dvdzs.z.data, dvdzs.data) # Interpolate using splines. dudxt = fdudx(c4["t"], -c4["z"], grid=False) dvdxt = fdvdx(c4["t"], -c4["z"], grid=False) dudyt = fdudy(c4["t"], -c4["z"], grid=False) dvdyt = fdvdy(c4["t"], -c4["z"], grid=False) sstraint = fsstrain(c4["t"], -c4["z"], grid=False) nstraint = fnstrain(c4["t"], -c4["z"], grid=False) divt = fdiv(c4["t"], -c4["z"], grid=False) vortt = fvort(c4["t"], -c4["z"], grid=False) dudzt = fdudz(c4["t"], -c4["z"], grid=False) dvdzt = fdvdz(c4["t"], -c4["z"], grid=False) c4["dudxt"] = dudxt c4["dvdxt"] = dvdxt c4["dudyt"] = dudyt c4["dvdyt"] = dvdyt c4["sstraint"] = sstraint c4["nstraint"] = nstraint c4["divt"] = divt c4["vortt"] = vortt c4["dudzt"] = dudzt c4["dvdzt"] = dvdzt io.savemat("../data/virtual_mooring_interpolated.mat", c4) io.savemat("../data/virtual_mooring_interpolated_windowed.mat", c4w) ###Output _____no_output_____ ###Markdown Signal to noise ratios. 
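For each estimated quantity $\hat{x}$ with model truth $x$, the signal-to-noise ratio is computed level by level as $\mathrm{SNR} = \operatorname{var}(\hat{x})/\operatorname{var}(\hat{x} - x)$, and saved for later use as an error estimate on the real mooring.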
###Code
print("Estimating signal to noise ratios.")
M = munch.munchify(utils.loadmat('../data/virtual_mooring_interpolated.mat'))

# shear strain
dsstrain = M.sstrain - M.sstraint
SNR_sstrain = M.sstrain.var(axis=0)/dsstrain.var(axis=0)
np.save('../data/SNR_sstrain', SNR_sstrain, allow_pickle=False)

# normal strain
dnstrain = M.nstrain - M.nstraint
SNR_nstrain = M.nstrain.var(axis=0)/dnstrain.var(axis=0)
np.save('../data/SNR_nstrain', SNR_nstrain, allow_pickle=False)

# zonal shear
ddudz = M.dudz - M.dudzt
SNR_dudz = M.dudz.var(axis=0)/ddudz.var(axis=0)  # use the zonal shear variance here
np.save('../data/SNR_dudz', SNR_dudz, allow_pickle=False)

# meridional shear
ddvdz = M.dvdz - M.dvdzt
SNR_dvdz = M.dvdz.var(axis=0)/ddvdz.var(axis=0)
np.save('../data/SNR_dvdz', SNR_dvdz, allow_pickle=False)

# divergence
ddiv = M.div - M.divt
SNR_div = M.div.var(axis=0)/ddiv.var(axis=0)
np.save('../data/SNR_div', SNR_div, allow_pickle=False)
###Output
_____no_output_____
###Markdown
Generate interpolated data. Set parameters again.
###Code
# Corrected levels.
# heights = [-540., -1250., -2100., -3500.]

# Filter cut off (hours)
tc_hrs = 40.0

# Start of time series (matlab datetime)
t_start = 734494.0

# Length of time series
max_len = N_data = 42048

# Data file
raw_data_file = "moorings.mat"

# Index where NaNs start in u and v data from SW mooring
sw_vel_nans = 14027

# Sampling period (minutes)
dt_min = 15.0

# Window length for wave stress quantities and mesoscale strain quantities.
nperseg = 2 ** 9

# Spectra parameters
window = "hanning"
detrend = "constant"

# Extrapolation/interpolation limit above which data will be removed.
dzlim = 100.0

# Integration of spectra parameters. These multiply N and f respectively to set
# the integration limits.
fhi = 1.0
flo = 1.0
flov = 1.0  # When integrating spectra involved in vertical fluxes, get rid of
# the near inertial portion.

# When bandpass filtering windowed data use these params multiplied by f and N
filtlo = 0.9  # times f
filthi = 1.1  # times N

# Interpolation distance that raises flag (m)
zimax = 100.0

dt_sec = dt_min * 60.0  # Sample period in seconds.
dt_day = dt_sec / 86400.0  # Sample period in days.
N_per_day = int(1.0 / dt_day)  # Samples per day.
###Output
_____no_output_____
###Markdown
Polynomial fits first.
###Code
print("REAL MOORING INTERPOLATION")
print("**Generating corrected data**")

moorings = load_data.load_my_data()
cc, nw, ne, se, sw = moorings

# Generate corrected moorings
T = np.concatenate([m["T"].flatten() for m in moorings])
S = np.concatenate([m["S"].flatten() for m in moorings])
z = np.concatenate([m["z"].flatten() for m in moorings])
u = np.concatenate([m["u"].flatten() for m in moorings])
v = np.concatenate([m["v"].flatten() for m in moorings])
g = np.concatenate([m["gamman"].flatten() for m in moorings])

# SW problems...
nans = np.isnan(u) | np.isnan(v)

print("Calculating polynomial coefficients.")
pzT = np.polyfit(z[~nans], T[~nans], 3)
pzS = np.polyfit(z[~nans], S[~nans], 3)
pzg = np.polyfit(z[~nans], g[~nans], 3)
pzu = np.polyfit(z[~nans], u[~nans], 2)
pzv = np.polyfit(z[~nans], v[~nans], 2)

# Additional height in m to add to interpolation height.
hoffset = [-25.0, 50.0, -50.0, 100.0] pi2 = np.pi * 2.0 nfft = nperseg levis = [(0, 1, 2, 3), (4, 5), (6, 7, 8, 9), (10, 11)] Nclevels = len(levis) spec_kwargs = { "fs": 1.0 / dt_sec, "window": window, "nperseg": nperseg, "nfft": nfft, "detrend": detrend, "axis": 0, } idx1 = np.arange(nperseg, N_data, nperseg // 2) # Window end index idx0 = idx1 - nperseg # Window start index N_windows = len(idx0) # Initialise the place holder dictionaries. c12w = {"N_levels": 12} # Dictionary for raw, windowed data from central mooring c4w = {"N_levels": Nclevels} # Dictionary for processed, windowed data c4 = {"N_levels": Nclevels} # Dictionary for processed data # Dictionaries for raw, windowed data from outer moorings nw5w, ne5w, se5w, sw5w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"} moorings5w = [nw5w, ne5w, se5w, sw5w] # Dictionaries for processed, windowed data from outer moorings nw4w, ne4w, se4w, sw4w = {"id": "nw"}, {"id": "ne"}, {"id": "se"}, {"id": "sw"} moorings4w = [nw4w, ne4w, se4w, sw4w] # Initialised the arrays of windowed data varr = ["t", "z", "u", "v", "gamman", "S", "T", "P"] for var in varr: c12w[var] = np.zeros((nperseg, N_windows, cc["N_levels"])) var4 = [ "t", "z", "u", "v", "gamman", "dudx", "dvdx", "dudy", "dvdy", "dudz", "dvdz", "dgdz", "nstrain", "sstrain", "vort", "N2", ] for var in var4: c4w[var] = np.zeros((nperseg, N_windows, Nclevels)) for var in var4: c4[var] = np.zeros((N_windows, Nclevels)) # Initialised the arrays of windowed data for outer mooring varro = ["z", "u", "v"] for var in varro: for m5w in moorings5w: m5w[var] = np.zeros((nperseg, N_windows, 5)) var4o = ["z", "u", "v"] for var in var4o: for m4w in moorings4w: m4w[var] = np.zeros((nperseg, N_windows, Nclevels)) # for var in var4o: # for m4 in moorings4: # m4[var] = np.zeros((N_windows, 4)) # Window the raw data. for i in range(N_windows): idx = idx0[i] for var in varr: c12w[var][:, i, :] = cc[var][idx : idx + nperseg, :] for i in range(N_windows): idx = idx0[i] for var in varro: for m5w, m in zip(moorings5w, moorings[1:]): m5w[var][:, i, :] = m[var][idx : idx + nperseg, :] c4["interp_far_flag"] = np.full_like(c4["u"], False, dtype=bool) print("Interpolating properties.") # Do the interpolation for i in range(Nclevels): # THIS hoffset is important!!! 
c4["z"][:, i] = np.mean(c12w["z"][..., levis[i]], axis=(0, -1)) + hoffset[i] for j in range(N_windows): zr = c12w["z"][:, j, levis[i]] ur = c12w["u"][:, j, levis[i]] vr = c12w["v"][:, j, levis[i]] gr = c12w["gamman"][:, j, levis[i]] Sr = c12w["S"][:, j, levis[i]] Tr = c12w["T"][:, j, levis[i]] Pr = c12w["P"][:, j, levis[i]] zi = c4["z"][j, i] c4["interp_far_flag"][j, i] = np.any(np.min(np.abs(zr - zi), axis=-1) > zimax) c4w["z"][:, j, i] = np.mean(zr, axis=-1) c4w["t"][:, j, i] = c12w["t"][:, j, 0] c4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu) c4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv) c4w["gamman"][:, j, i] = moo.interp_quantity(zr, gr, zi, pzg) dudzr = np.gradient(ur, axis=-1) / np.gradient(zr, axis=-1) dvdzr = np.gradient(vr, axis=-1) / np.gradient(zr, axis=-1) dgdzr = np.gradient(gr, axis=-1) / np.gradient(zr, axis=-1) N2 = seawater.bfrq(Sr.T, Tr.T, Pr.T, cc["lat"])[0].T # Instead of mean, could moo.interp1d c4w["dudz"][:, j, i] = np.mean(dudzr, axis=-1) c4w["dvdz"][:, j, i] = np.mean(dvdzr, axis=-1) c4w["dgdz"][:, j, i] = np.mean(dgdzr, axis=-1) c4w["N2"][:, j, i] = np.mean(N2, axis=-1) for m5w, m4w in zip(moorings5w, moorings4w): if (m5w["id"] == "sw") & ( idx1[j] > sw_vel_nans ): # Skip this level because of NaNs zr = m5w["z"][:, j, (0, 1, 3, 4)] ur = m5w["u"][:, j, (0, 1, 3, 4)] vr = m5w["v"][:, j, (0, 1, 3, 4)] else: zr = m5w["z"][:, j, :] ur = m5w["u"][:, j, :] vr = m5w["v"][:, j, :] m4w["z"][:, j, i] = np.full((nperseg), zi) m4w["u"][:, j, i] = moo.interp_quantity(zr, ur, zi, pzu) m4w["v"][:, j, i] = moo.interp_quantity(zr, vr, zi, pzv) print("Filtering windowed data.") fcorcpd = np.abs(cc["f"]) * 86400 / pi2 Nmean = np.sqrt(np.average(c4w["N2"], weights=sig.hann(nperseg), axis=0)) varl = ["u", "v", "gamman"] for var in varl: c4w[var + "_hib"] = np.zeros_like(c4w[var]) c4w[var + "_lo"] = utils.butter_filter( c4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0 ) c4w[var + "_hi"] = c4w[var] - c4w[var + "_lo"] for i in range(Nclevels): for j in range(N_windows): Nmean_ = Nmean[j, i] * 86400 / pi2 for var in varl: c4w[var + "_hib"][:, j, i] = utils.butter_filter( c4w[var][:, j, i], (filtlo * fcorcpd, filthi * Nmean_), fs=N_per_day, btype="band", ) varl = ["u", "v"] for var in varl: for m4w in moorings4w: m4w[var + "_lo"] = utils.butter_filter( m4w[var], 24 / tc_hrs, fs=N_per_day, btype="low", axis=0 ) m4w[var + "_hi"] = m4w[var] - m4w[var + "_lo"] c4w["zi"] = np.ones_like(c4w["z"]) * c4["z"] print("Calculating horizontal gradients.") # Calculate horizontal gradients for j in range(N_windows): ll = np.stack( ([m["lon"] for m in moorings[1:]], [m["lat"] for m in moorings[1:]]), axis=1 ) uv = np.stack( ( [m4w["u_lo"][:, j, :] for m4w in moorings4w], [m4w["v_lo"][:, j, :] for m4w in moorings4w], ), axis=1, ) dudx, dudy, dvdx, dvdy, vort, _ = moo.div_vort_4D(ll[:, 0], ll[:, 1], uv) nstrain = dudx - dvdy sstrain = dvdx + dudy c4w["dudx"][:, j, :] = dudx c4w["dudy"][:, j, :] = dudy c4w["dvdx"][:, j, :] = dvdx c4w["dvdy"][:, j, :] = dvdy c4w["nstrain"][:, j, :] = nstrain c4w["sstrain"][:, j, :] = sstrain c4w["vort"][:, j, :] = vort print("Calculating window averages.") for var in var4 + ["u_lo", "v_lo", "gamman_lo"]: if var == "z": # Keep z as modified by hoffset. 
continue c4[var] = np.average(c4w[var], weights=sig.hann(nperseg), axis=0) print("Estimating w and b.") om = np.fft.fftfreq(nperseg, 15 * 60) c4w["w_hi"] = np.fft.ifft( 1j * pi2 * om[:, np.newaxis, np.newaxis] * np.fft.fft(-c4w["gamman_hi"] / c4["dgdz"], axis=0), axis=0, ).real c4w["w_hib"] = np.fft.ifft( 1j * pi2 * om[:, np.newaxis, np.newaxis] * np.fft.fft(-c4w["gamman_hib"] / c4["dgdz"], axis=0), axis=0, ).real # Estimate buoyancy variables c4w["b_hi"] = -gsw.grav(-c4["z"], cc["lat"]) * c4w["gamman_hi"] / c4["gamman_lo"] c4w["b_hib"] = -gsw.grav(-c4["z"], cc["lat"]) * c4w["gamman_hib"] / c4["gamman_lo"] c4["N"] = np.sqrt(c4["N2"]) print("Estimating covariance spectra.") freq, c4w["Puu"] = sig.welch(c4w["u_hi"], **spec_kwargs) _, c4w["Pvv"] = sig.welch(c4w["v_hi"], **spec_kwargs) _, c4w["Pww"] = sig.welch(c4w["w_hi"], **spec_kwargs) _, c4w["Pwwg"] = sig.welch(c4w["gamman_hi"] / c4["dgdz"], **spec_kwargs) c4w["Pwwg"] *= (pi2 * freq[:, np.newaxis, np.newaxis]) ** 2 _, c4w["Pbb"] = sig.welch(c4w["b_hi"], **spec_kwargs) _, c4w["Cuv"] = sig.csd(c4w["u_hi"], c4w["v_hi"], **spec_kwargs) _, c4w["Cuwg"] = sig.csd(c4w["u_hi"], c4w["gamman_hi"] / c4["dgdz"], **spec_kwargs) c4w["Cuwg"] *= -1j * pi2 * freq[:, np.newaxis, np.newaxis] _, c4w["Cvwg"] = sig.csd(c4w["v_hi"], c4w["gamman_hi"] / c4["dgdz"], **spec_kwargs) c4w["Cvwg"] *= -1j * pi2 * freq[:, np.newaxis, np.newaxis] _, c4w["Cub"] = sig.csd(c4w["u_hi"], c4w["b_hi"], **spec_kwargs) _, c4w["Cvb"] = sig.csd(c4w["v_hi"], c4w["b_hi"], **spec_kwargs) print("Estimating covariance matrices.") def cov(x, y, axis=None): return np.mean((x - np.mean(x, axis=axis)) * (y - np.mean(y, axis=axis)), axis=axis) c4["couu"] = cov(c4w["u_hib"], c4w["u_hib"], axis=0) c4["covv"] = cov(c4w["v_hib"], c4w["v_hib"], axis=0) c4["coww"] = cov(c4w["w_hib"], c4w["w_hib"], axis=0) c4["cobb"] = cov(c4w["b_hib"], c4w["b_hib"], axis=0) c4["couv"] = cov(c4w["u_hib"], c4w["v_hib"], axis=0) c4["couw"] = cov(c4w["u_hib"], c4w["w_hib"], axis=0) c4["covw"] = cov(c4w["v_hib"], c4w["w_hib"], axis=0) c4["coub"] = cov(c4w["u_hib"], c4w["b_hib"], axis=0) c4["covb"] = cov(c4w["v_hib"], c4w["b_hib"], axis=0) c4w["freq"] = freq.copy() # Get rid of annoying tiny values. svarl = ["Puu", "Pvv", "Pbb", "Cuv", "Cub", "Cvb", "Pwwg", "Cuwg", "Cvwg"] for var in svarl: c4w[var][0, ...] = 0.0 c4[var + "_int"] = np.full((N_windows, 4), np.nan) # Horizontal azimuth according to Jing 2018 c4w["theta"] = np.arctan2(2.0 * c4w["Cuv"].real, (c4w["Puu"] - c4w["Pvv"])) / 2 # Integration ############################################################# print("Integrating power spectra.") for var in svarl: c4w[var + "_cint"] = np.full_like(c4w[var], fill_value=np.nan) fcor = np.abs(cc["f"]) / pi2 N_freq = len(freq) freq_ = np.tile(freq[:, np.newaxis, np.newaxis], (1, N_windows, Nclevels)) ulim = fhi * np.tile(c4["N"][np.newaxis, ...], (N_freq, 1, 1)) / pi2 llim = fcor * flo use = (freq_ < ulim) & (freq_ > llim) svarl = ["Puu", "Pvv", "Pbb", "Cuv", "Pwwg"] for var in svarl: c4[var + "_int"] = igr.simps(use * c4w[var].real, freq, axis=0) c4w[var + "_cint"] = igr.cumtrapz(use * c4w[var].real, freq, axis=0, initial=0.0) # Change lower integration limits for vertical components... 
llim = fcor * flov use = (freq_ < ulim) & (freq_ > llim) svarl = ["Cub", "Cvb", "Cuwg", "Cvwg"] for var in svarl: c4[var + "_int"] = igr.simps(use * c4w[var].real, freq, axis=0) c4w[var + "_cint"] = igr.cumtrapz(use * c4w[var].real, freq, axis=0, initial=0.0) # Ruddic and Joyce effective stress for var1, var2 in zip(["Tuwg", "Tvwg"], ["Cuwg", "Cvwg"]): func = use * c4w[var2].real * (1 - fcor ** 2 / freq_ ** 2) nans = np.isnan(func) func[nans] = 0.0 c4[var1 + "_int"] = igr.simps(func, freq, axis=0) func = use * c4w[var2].real * (1 - fcor ** 2 / freq_ ** 2) nans = np.isnan(func) func[nans] = 0.0 c4w[var1 + "_cint"] = igr.cumtrapz(func, freq, axis=0, initial=0.0) # Usefull quantities c4["nstress"] = c4["Puu_int"] - c4["Pvv_int"] c4["sstress"] = -2.0 * c4["Cuv_int"] c4["F_horiz"] = ( -0.5 * (c4["Puu_int"] - c4["Pvv_int"]) * c4["nstrain"] - c4["Cuv_int"] * c4["sstrain"] ) c4["F_vert"] = ( -(c4["Cuwg_int"] - cc["f"] * c4["Cvb_int"] / c4["N"] ** 2) * c4["dudz"] - (c4["Cvwg_int"] + cc["f"] * c4["Cub_int"] / c4["N"] ** 2) * c4["dvdz"] ) c4["F_vert_alt"] = -c4["Tuwg_int"] * c4["dudz"] - c4["Tvwg_int"] * c4["dvdz"] c4["F_total"] = c4["F_horiz"] + c4["F_vert"] c4["EPu"] = c4["Cuwg_int"] - cc["f"] * c4["Cvb_int"] / c4["N"] ** 2 c4["EPv"] = c4["Cvwg_int"] + cc["f"] * c4["Cub_int"] / c4["N"] ** 2 ## c4["nstress_cov"] = c4["couu"] - c4["covv"] c4["sstress_cov"] = -2.0 * c4["couv"] c4["F_horiz_cov"] = ( -0.5 * (c4["couu"] - c4["covv"]) * c4["nstrain"] - c4["couv"] * c4["sstrain"] ) c4["F_vert_cov"] = ( -(c4["couw"] - cc["f"] * c4["covb"] / c4["N"] ** 2) * c4["dudz"] - (c4["covw"] + cc["f"] * c4["coub"] / c4["N"] ** 2) * c4["dvdz"] ) c4["F_total_cov"] = c4["F_horiz_cov"] + c4["F_vert_cov"] ###Output _____no_output_____ ###Markdown Estimate standard error on covariances. ###Code bootnum = 1000 np.random.seed(12341555) idxs = np.arange(nperseg, dtype="i2") # def cov1(xy, axis=0): # x = xy[..., -1] # y = xy[..., -1] # return np.mean((x - np.mean(x, axis=axis))*(y - np.mean(y, axis=axis)), axis=axis) print("Estimating error on covariance using bootstrap (slow).") euu_ = np.zeros((bootnum, N_windows, Nclevels)) evv_ = np.zeros((bootnum, N_windows, Nclevels)) eww_ = np.zeros((bootnum, N_windows, Nclevels)) ebb_ = np.zeros((bootnum, N_windows, Nclevels)) euv_ = np.zeros((bootnum, N_windows, Nclevels)) euw_ = np.zeros((bootnum, N_windows, Nclevels)) evw_ = np.zeros((bootnum, N_windows, Nclevels)) eub_ = np.zeros((bootnum, N_windows, Nclevels)) evb_ = np.zeros((bootnum, N_windows, Nclevels)) for i in range(bootnum): idxs_ = np.random.choice(idxs, nperseg) u_ = c4w["u_hib"][idxs_, ...] v_ = c4w["v_hib"][idxs_, ...] w_ = c4w["w_hib"][idxs_, ...] b_ = c4w["b_hib"][idxs_, ...] euu_[i, ...] = cov(u_, u_, axis=0) evv_[i, ...] = cov(v_, v_, axis=0) eww_[i, ...] = cov(w_, w_, axis=0) ebb_[i, ...] = cov(b_, b_, axis=0) euv_[i, ...] = cov(u_, v_, axis=0) euw_[i, ...] = cov(u_, w_, axis=0) evw_[i, ...] = cov(v_, w_, axis=0) eub_[i, ...] = cov(u_, b_, axis=0) evb_[i, ...] = cov(v_, b_, axis=0) c4["euu"] = euu_.std(axis=0) c4["evv"] = evv_.std(axis=0) c4["eww"] = eww_.std(axis=0) c4["ebb"] = ebb_.std(axis=0) c4["euv"] = euv_.std(axis=0) c4["euw"] = euw_.std(axis=0) c4["evw"] = evw_.std(axis=0) c4["eub"] = eub_.std(axis=0) c4["evb"] = evb_.std(axis=0) ###Output _____no_output_____ ###Markdown Error on gradients. ###Code finite_diff_err = 0.06 # Assume 6 percent... 
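# Gradient uncertainties combine (i) noise levels implied by the virtual
# mooring signal-to-noise ratios loaded below and (ii) a finite-difference
# truncation error taken as a fixed fraction of the gradient itself.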
SNR_dudz = np.load("../data/SNR_dudz.npy") SNR_dvdz = np.load("../data/SNR_dvdz.npy") SNR_nstrain = np.load("../data/SNR_nstrain.npy") SNR_sstrain = np.load("../data/SNR_sstrain.npy") ones = np.ones_like(c4["euu"]) c4["edudz"] = ones * np.sqrt(c4["dudz"].var(axis=0) / SNR_dudz) c4["edvdz"] = ones * np.sqrt(c4["dvdz"].var(axis=0) / SNR_dvdz) c4["enstrain"] = esum( ones * np.sqrt(c4["nstrain"].var(axis=0) / SNR_nstrain), finite_diff_err * c4["nstrain"], ) c4["esstrain"] = esum( ones * np.sqrt(c4["sstrain"].var(axis=0) / SNR_sstrain), finite_diff_err * c4["sstrain"], ) ###Output _____no_output_____ ###Markdown Error propagation. ###Code euumvv = 0.5 * esum(c4["euu"], c4["evv"]) c4["enstress"] = euumvv.copy() enorm = emult( -0.5 * (c4["Puu_int"] - c4["Pvv_int"]), c4["nstrain"], euumvv, c4["enstrain"] ) eshear = emult(c4["Cuv_int"], c4["sstrain"], c4["euv"], c4["esstrain"]) c4["errF_horiz_norm"] = enorm.copy() c4["errF_horiz_shear"] = eshear.copy() c4["errF_horiz"] = esum(enorm, eshear) euumvv = 0.5 * esum(c4["euu"], c4["evv"]) c4["enstress_cov"] = euumvv.copy() enorm = emult(-0.5 * (c4["couu"] - c4["covv"]), c4["nstrain"], euumvv, c4["enstrain"]) eshear = emult(c4["couv"], c4["sstrain"], c4["euv"], c4["esstrain"]) c4["errF_horiz_norm_cov"] = enorm.copy() c4["errF_horiz_shear_cov"] = eshear.copy() c4["errF_horiz_cov"] = esum(enorm, eshear) euwmvb = esum(c4["euw"], np.abs(cc["f"] / c4["N"] ** 2) * c4["evb"]) evwpub = esum(c4["evw"], np.abs(cc["f"] / c4["N"] ** 2) * c4["eub"]) c4["evstressu"] = euwmvb c4["evstressv"] = evwpub edu = emult( -(c4["Cuwg_int"] - cc["f"] * c4["Cvb_int"] / c4["N"] ** 2), c4["dudz"], euwmvb, c4["edudz"], ) edv = emult( -(c4["Cvwg_int"] + cc["f"] * c4["Cub_int"] / c4["N"] ** 2), c4["dvdz"], evwpub, c4["edvdz"], ) c4["errEPu"] = edu.copy() c4["errEPv"] = edv.copy() c4["errF_vert"] = esum(edu, edv) c4["errEPu_alt"] = emult(-c4["Tuwg_int"], c4["dudz"], c4["euw"], c4["edudz"]) c4["errEPv_alt"] = emult(-c4["Tvwg_int"], c4["dvdz"], c4["evw"], c4["edvdz"]) c4["errF_vert_alt"] = esum(c4["errEPu_alt"], c4["errEPv_alt"]) edu = emult( -(c4["couw"] - cc["f"] * c4["covb"] / c4["N"] ** 2), c4["dudz"], euwmvb, c4["edudz"] ) edv = emult( -(c4["covw"] + cc["f"] * c4["coub"] / c4["N"] ** 2), c4["dvdz"], evwpub, c4["edvdz"] ) c4["errEPu_cov"] = edu.copy() c4["errEPv_cov"] = edv.copy() c4["errF_vert_cov"] = esum(edu, edv) c4["errF_total"] = esum(c4["errF_vert"], c4["errF_horiz"]) c4["errF_total_cov"] = esum(c4["errF_vert_cov"], c4["errF_horiz_cov"]) ###Output _____no_output_____ ###Markdown Save the interpolated data. ###Code io.savemat(os.path.join(data_out, "C_alt.mat"), c4) io.savemat(os.path.join(data_out, "C_altw.mat"), c4w) ###Output _____no_output_____ ###Markdown ADCP Processing ###Code print("ADCP PROCESSING") tf = np.array([16.0, 2.0]) # band pass filter cut off hours tc_hrs = 40.0 # Low pass cut off (hours) dt = 0.5 # Data sample period hr print("Loading ADCP data from file.") file = os.path.expanduser(os.path.join(data_in, "ladcp_data.mat")) adcp = utils.loadmat(file)["ladcp2"] print("Removing all NaN rows.") varl = ["u", "v", "z"] for var in varl: # Get rid of the all nan row. adcp[var] = adcp.pop(var)[:-1, :] print("Calculating vertical shear.") z = adcp["z"] dudz = np.diff(adcp["u"], axis=0) / np.diff(z, axis=0) dvdz = np.diff(adcp["v"], axis=0) / np.diff(z, axis=0) nans = np.isnan(dudz) | np.isnan(dvdz) dudz[nans] = np.nan dvdz[nans] = np.nan adcp["zm"] = utils.mid(z, axis=0) adcp["dudz"] = dudz adcp["dvdz"] = dvdz # Low pass filter data. 
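# NaN gaps are interpolated across before filtering (filtfilt cannot handle
# missing samples) and the NaNs are reinstated afterwards.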
print("Low pass filtering at {:1.0f} hrs.".format(tc_hrs)) varl = ["u", "v", "dudz", "dvdz"] for var in varl: data = adcp[var] nans = np.isnan(data) adcp[var + "_m"] = np.nanmean(data, axis=0) datalo = utils.butter_filter( utils.interp_nans(adcp["dates"], data, axis=1), 1 / tc_hrs, 1 / dt, btype="low" ) # Then put nans back... if nans.any(): datalo[nans] = np.nan namelo = var + "_lo" adcp[namelo] = datalo namehi = var + "_hi" adcp[namehi] = adcp[var] - adcp[namelo] # Band pass filter the data. print("Band pass filtering between {:1.0f} and {:1.0f} hrs.".format(*tf)) varl = ["u", "v", "dudz", "dvdz"] for var in varl: data = adcp[var] nans = np.isnan(data) databp = utils.butter_filter( utils.interp_nans(adcp["dates"], data, axis=1), 1 / tf, 1 / dt, btype="band" ) # Then put nans back... if nans.any(): databp[nans] = np.nan namebp = var + "_bp" adcp[namebp] = databp io.savemat(os.path.join(data_out, "ADCP.mat"), adcp) ###Output _____no_output_____ ###Markdown VMP data ###Code print("VMP PROCESSING") vmp = utils.loadmat(os.path.join(data_in, "jc054_vmp_cleaned.mat"))["d"] box = np.array([[-58.0, -58.0, -57.7, -57.7], [-56.15, -55.9, -55.9, -56.15]]).T p = path.Path(box) in_box = p.contains_points(np.vstack((vmp["startlon"], vmp["startlat"])).T) idxs = np.argwhere(in_box).squeeze() Np = len(idxs) print("Isolate profiles in match around mooring.") for var in vmp: ndim = np.ndim(vmp[var]) if ndim == 2: vmp[var] = vmp[var][:, idxs] if ndim == 1 and vmp[var].size == 36: vmp[var] = vmp[var][idxs] print("Rename variables.") vmp["P"] = vmp.pop("press") vmp["T"] = vmp.pop("temp") vmp["S"] = vmp.pop("salin") print("Deal with profiles where P[0] != 1.") P_ = np.arange(1.0, 10000.0) i0o = np.zeros((Np), dtype=int) i1o = np.zeros((Np), dtype=int) i0n = np.zeros((Np), dtype=int) i1n = np.zeros((Np), dtype=int) pmax = 0.0 for i in range(Np): nans = np.isnan(vmp["eps"][:, i]) i0o[i] = i0 = np.where(~nans)[0][0] i1o[i] = i1 = np.where(~nans)[0][-1] P0 = vmp["P"][i0, i] P1 = vmp["P"][i1, i] i0n[i] = np.searchsorted(P_, P0) i1n[i] = np.searchsorted(P_, P1) pmax = max(P1, pmax) P = np.tile(np.arange(1.0, pmax + 2)[:, np.newaxis], (1, len(idxs))) eps = np.full_like(P, np.nan) chi = np.full_like(P, np.nan) T = np.full_like(P, np.nan) S = np.full_like(P, np.nan) for i in range(Np): eps[i0n[i] : i1n[i] + 1, i] = vmp["eps"][i0o[i] : i1o[i] + 1, i] chi[i0n[i] : i1n[i] + 1, i] = vmp["chi"][i0o[i] : i1o[i] + 1, i] T[i0n[i] : i1n[i] + 1, i] = vmp["T"][i0o[i] : i1o[i] + 1, i] S[i0n[i] : i1n[i] + 1, i] = vmp["S"][i0o[i] : i1o[i] + 1, i] vmp["P"] = P vmp["eps"] = eps vmp["chi"] = chi vmp["T"] = T vmp["S"] = S vmp["z"] = gsw.z_from_p(vmp["P"], vmp["startlat"]) print("Calculate neutral density.") # Compute potential temperature using the 1983 UNESCO EOS. vmp["PT0"] = seawater.ptmp(vmp["S"], vmp["T"], vmp["P"]) # Flatten variables for analysis. lons = np.ones_like(P) * vmp["startlon"] lats = np.ones_like(P) * vmp["startlat"] S_ = vmp["S"].flatten() T_ = vmp["PT0"].flatten() P_ = vmp["P"].flatten() LO_ = lons.flatten() LA_ = lats.flatten() gamman = gamma_GP_from_SP_pt(S_, T_, P_, LO_, LA_) vmp["gamman"] = np.reshape(gamman, vmp["P"].shape) + 1000.0 io.savemat(os.path.join(data_out, "VMP.mat"), vmp) ###Output _____no_output_____
tools/4_extract_routines_from_transcripts.ipynb
###Markdown This notebook extracts routines of referring expressions that are "fixed", i.e. become shared or established amongst interlocutors. ###Code import re import pathlib as pl import pandas as pd import numpy as np from spacy.lang.en import English from read_utils import read_tables def natural_sort(l): convert = lambda text: int(text) if text.isdigit() else text.lower() alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ] return sorted(l, key = alphanum_key) ###Output _____no_output_____ ###Markdown Define paths. ###Code # Inputs. data_dir = pl.Path('../data') transcripts_dir = data_dir.joinpath('transcripts') dialign_dir = pl.Path('dialign-1.0') dialign_jar_file = dialign_dir.joinpath('dialign.jar') output_dir = pl.Path('../outputs') interm_dir = output_dir.joinpath('intermediate') task_features_file = interm_dir.joinpath( 'log_features/justhink19_log_features_task_level.csv') # Outputs. processed_data_dir = pl.Path('../processed_data') dialign_inputs_dir = processed_data_dir.joinpath('dialign_inputs') dialign_outputs_dir = processed_data_dir.joinpath('dialign_outputs') routines_dir = processed_data_dir.joinpath('routines') utterances_dir = processed_data_dir.joinpath('utterances') tokens_dir = processed_data_dir.joinpath('tokens') dirs = [ dialign_inputs_dir, dialign_outputs_dir, routines_dir, utterances_dir, tokens_dir, ] for d in dirs: if not d.exists(): d.mkdir() print('Created {}'.format(d)) synthesis_dep_file = dialign_outputs_dir.joinpath( 'metrics-speaker-dependent.tsv') synthesis_indep_file = dialign_outputs_dir.joinpath( 'metrics-speaker-dependent.tsv') ###Output _____no_output_____ ###Markdown Define task-specific referents. ###Code node_words = { 'basel', 'luzern', 'zurich', 'bern', 'zermatt', 'interlaken', 'montreux', 'neuchatel', 'gallen', 'davos', } task_words = node_words print(len(task_words), sorted(task_words)) ###Output 10 ['basel', 'bern', 'davos', 'gallen', 'interlaken', 'luzern', 'montreux', 'neuchatel', 'zermatt', 'zurich'] ###Markdown Load data. Read transcripts. ###Code transcript_dfs = read_tables(transcripts_dir, form='transcript') ###Output Reading transcript files from ../data/transcripts. transcript 10 files found. File justhink19_transcript_07 belongs to team 7 File justhink19_transcript_08 belongs to team 8 File justhink19_transcript_09 belongs to team 9 File justhink19_transcript_10 belongs to team 10 File justhink19_transcript_11 belongs to team 11 File justhink19_transcript_17 belongs to team 17 File justhink19_transcript_18 belongs to team 18 File justhink19_transcript_20 belongs to team 20 File justhink19_transcript_28 belongs to team 28 File justhink19_transcript_47 belongs to team 47 Transcript of 7 has 639 utterances Transcript of 8 has 669 utterances Transcript of 9 has 810 utterances Transcript of 10 has 469 utterances Transcript of 11 has 567 utterances Transcript of 17 has 325 utterances Transcript of 18 has 359 utterances Transcript of 20 has 507 utterances Transcript of 28 has 348 utterances Transcript of 47 has 396 utterances ###Markdown Refine task and transcript durations. Compute speaking durations from transcripts. ###Code end_times = dict() for team_no, df in transcript_dfs.items(): dff = df[df['utterance'] == '(omitted)'] if len(dff) > 0: end_time = dff['start'].min() else: end_time = df.iloc[-1]['end'] end_times[team_no] = end_time end_times ###Output _____no_output_____ ###Markdown Print the total transcribed duration in hours. 
###Code values = [td / 60 / 60 for td in end_times.values()] sum(values) ###Output _____no_output_____ ###Markdown Slice the transcripts by their inferred duration. There is sometimes more talk after the task ends, some of which was also transcribed, we omit that.This is specifically when the team fails i.e. time is up, and we intervene. ###Code for team_no in transcript_dfs: df = transcript_dfs[team_no] df = df[df.end <= end_times[team_no]] transcript_dfs[team_no] = df # # A quick check. # transcript_dfs[7].tail(), end_times[7] ###Output _____no_output_____ ###Markdown Generate inputs for dialign to extract routines. Define a tokeniser. Create a tokeniser with the default settings for English, including punctuation rules and exceptions. ###Code nlp = English() tokeniser = nlp.tokenizer ###Output _____no_output_____ ###Markdown Define a tokeniser method for dialign (as per dialign input format). ###Code def tokenise_utterances(df, tokeniser): df = df.copy() texts = list() for u in df['utterance']: tokens = tokeniser(u) text = ' '.join([t.text for t in tokens]) texts.append(text) df['utterance'] = texts return df ###Output _____no_output_____ ###Markdown Define an exporter for dialign. ###Code def export_for_dialign(df, file): with open(str(file), 'w') as f: for i, row in df.iterrows(): print('{}\t{}'.format(row['interlocutor'], row['utterance']), file=f) ###Output _____no_output_____ ###Markdown Rework the transcripts for dialign: obtain simpler transcripts (tokenised and interlocutors A & B only). ###Code print('Reworking the transcripts to input into dialign...') utterance_dfs = dict() for team_no, df in transcript_dfs.items(): print('Processing Team {}'.format(team_no)) # Filter for interlocutors A and B only. df = df[df['interlocutor'].isin(['A', 'B'])] # Tokenise. df = tokenise_utterances(df, tokeniser) # Reset the utterance numbers. df['utterance_no'] = range(len(df)) # Keep. utterance_dfs[team_no] = df print('Done!') ###Output Reworking the transcripts to input into dialign... Processing Team 7 Processing Team 8 Processing Team 9 Processing Team 10 Processing Team 11 Processing Team 17 Processing Team 18 Processing Team 20 Processing Team 28 Processing Team 47 Done! ###Markdown Export the transcripts in dialign input format. ###Code print('Exporting the transcripts in dialign input format...') for team_no, df in utterance_dfs.items(): # Construct filename. file = 'justhink19_dialogue_{:02d}.tsv'.format(team_no) file = dialign_inputs_dir.joinpath(file) # Export table to file. export_for_dialign(df, file) print('Written for team {:2d} to {}'.format(team_no, file)) print('Done!') ###Output Exporting the transcripts in dialign input format... Written for team 7 to ../processed_data/dialign_inputs/justhink19_dialogue_07.tsv Written for team 8 to ../processed_data/dialign_inputs/justhink19_dialogue_08.tsv Written for team 9 to ../processed_data/dialign_inputs/justhink19_dialogue_09.tsv Written for team 10 to ../processed_data/dialign_inputs/justhink19_dialogue_10.tsv Written for team 11 to ../processed_data/dialign_inputs/justhink19_dialogue_11.tsv Written for team 17 to ../processed_data/dialign_inputs/justhink19_dialogue_17.tsv Written for team 18 to ../processed_data/dialign_inputs/justhink19_dialogue_18.tsv Written for team 20 to ../processed_data/dialign_inputs/justhink19_dialogue_20.tsv Written for team 28 to ../processed_data/dialign_inputs/justhink19_dialogue_28.tsv Written for team 47 to ../processed_data/dialign_inputs/justhink19_dialogue_47.tsv Done! ###Markdown Run dialign. 
###Code cmd = 'java -jar {} -i {} -o {}'.format( dialign_jar_file.resolve(), dialign_inputs_dir.resolve(), dialign_outputs_dir.resolve()) print(cmd) print('Running for dialogues...') !$cmd print('Done!') ###Output java -jar /home/utku/playground/justhink-alignment-analysis/tools/dialign-1.0/dialign.jar -i /home/utku/playground/justhink-alignment-analysis/processed_data/dialign_inputs -o /home/utku/playground/justhink-alignment-analysis/processed_data/dialign_outputs Running for dialogues... Done! ###Markdown Read routine tables.i.e. shared expression lexicons as termed by dialign. ###Code routine_dfs = dict() for team_no in sorted(transcript_dfs): routine_file = 'justhink19_dialogue_{:02d}_tsv-lexicon.tsv'.format( team_no) routine_file = dialign_outputs_dir.joinpath(routine_file) df = pd.read_csv(str(routine_file), sep='\t') print('Read for team {:02d}: {} routines'.format(team_no, len(df))) l = list() for e in df['Surface Form']: tokenised_e = [t.text for t in tokeniser(e)] v = 0 for n in task_words: if n in tokenised_e: v += 1 l.append(v) df.insert(3, 'task_spec_referent_count', l) df['utterances'] = [[int(n) for n in seq.split(', ')] for seq in df['Turns']] routine_dfs[team_no] = df # # Example/debugging. # team_no = 18 # routine_dfs[team_no].head(3) ###Output Read for team 07: 384 routines Read for team 08: 420 routines Read for team 09: 533 routines Read for team 10: 226 routines Read for team 11: 371 routines Read for team 17: 149 routines Read for team 18: 149 routines Read for team 20: 287 routines Read for team 28: 194 routines Read for team 47: 223 routines ###Markdown Filter for routines with task-specific referents. ###Code for team_no, df in routine_dfs.items(): df = df[df.task_spec_referent_count > 0] df = df.drop('task_spec_referent_count', axis=1) routine_dfs[team_no] = df ###Output _____no_output_____ ###Markdown Rework routine instances with token positions. Construct token tables. ###Code token_dfs = dict() for team_no, df in utterance_dfs.items(): df = df.copy() # Split the utterances into words, convert to a list. df['token'] = [u.split() for u in df['utterance']] # df = df.assign(**{'words': df['object'].str.split()}) # Transform each word to a row, preserving the other values in the row. df = df.explode('token') # Assign a subutterance no. df.insert(2, 'token_no', range(len(df))) token_dfs[team_no] = df df = token_dfs[7].copy() df.head() def find_sub_list(sl, l): # allows for multiple matches # from https://stackoverflow.com/a/17870684 results = [] sll = len(sl) for ind in (i for i, e in enumerate(l) if e == sl[0]): if l[ind:ind+sll] == sl: results.append((ind, ind+sll-1)) return results # # Try. # greeting = ['hello', 'my', 'name', 'is', 'bob', # 'how', 'are', 'you', 'my', 'name', 'is'] # print(find_sub_list(['my', 'name', 'is'], greeting)) ###Output _____no_output_____ ###Markdown Find routine expressions' subutterance numbers from utterance numbers. ###Code def get_start_indices(subutterance, u_no, u_df): l = list() # subutterance list to be built. # Find the utterance (row) with that utterance no. utterance_row = u_df[u_df.utterance_no == u_no] # Make sure there is only one such row. assert len(utterance_row) == 1, print( 'Multiple utterances found at {}'.format(u_no)) # Select the first (and only) row. utterance_row = utterance_row.iloc[0] # Get the utterance string at that row. utterance = utterance_row['utterance'] # Find the occurrences of subutterance routine in the utterance. 
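# Both strings are whitespace-split here, mirroring how the token tables
# were built, so the returned start/end positions line up with the
# token_no numbering once the utterance's token offset is added below.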
indices = find_sub_list(subutterance.split(), utterance.split()) assert len(indices) != 0, print( 'Could not find subutterance "{}" at utterance "{}" ({})'.format( subutterance, utterance, u_no)) # Get the token offset of the utterance. offset = t_df[t_df.utterance_no == u_no].iloc[0]['token_no'] # Put the initial positions of the occurrences into a list. for start, end in indices: l.append(start + offset) return l for team_no, df in routine_dfs.items(): print('Finding routine token indices for team {:2d}'.format( team_no)) u_df = utterance_dfs[team_no] t_df = token_dfs[team_no] tokens_list = list() establish_list = list() priming_list = list() for i, row in df.iterrows(): subutterance = row['Surface Form'] # subutterance list to be built, for each row. l = list() for u_no in row['utterances']: # for each utterance no l += get_start_indices(subutterance, u_no, u_df) tokens_list.append(l) # priming token from the priming utterance no. u_no = row['utterances'][0] t = get_start_indices(subutterance, u_no, u_df)[0] priming_list.append(t) # establishment token from the establishment utterance no. u_no = row['Establishment turn'] t = get_start_indices(subutterance, u_no, u_df)[0] establish_list.append(t) df['tokens'] = tokens_list df['priming_token'] = priming_list df['establish_token'] = establish_list print('Done!') routine_dfs[7].head() ###Output Finding routine token indices for team 7 Finding routine token indices for team 8 Finding routine token indices for team 9 Finding routine token indices for team 10 Finding routine token indices for team 11 Finding routine token indices for team 17 Finding routine token indices for team 18 Finding routine token indices for team 20 Finding routine token indices for team 28 Finding routine token indices for team 47 Done! ###Markdown Export routine tables. ###Code print('Exporting routine tables...') for team_no, df in routine_dfs.items(): # Construct filename. file = 'justhink19_routines_{:02d}.csv'.format(team_no) file = routines_dir.joinpath(file) # Write the table to file. df.to_csv(file, index=False, sep='\t') print('Exported routines for {:2d} to {}'.format(team_no, file)) print('Done!') ###Output Exporting routine tables... Exported routines for 7 to ../processed_data/routines/justhink19_routines_07.csv Exported routines for 8 to ../processed_data/routines/justhink19_routines_08.csv Exported routines for 9 to ../processed_data/routines/justhink19_routines_09.csv Exported routines for 10 to ../processed_data/routines/justhink19_routines_10.csv Exported routines for 11 to ../processed_data/routines/justhink19_routines_11.csv Exported routines for 17 to ../processed_data/routines/justhink19_routines_17.csv Exported routines for 18 to ../processed_data/routines/justhink19_routines_18.csv Exported routines for 20 to ../processed_data/routines/justhink19_routines_20.csv Exported routines for 28 to ../processed_data/routines/justhink19_routines_28.csv Exported routines for 47 to ../processed_data/routines/justhink19_routines_47.csv Done! ###Markdown Export the simplified transcripts ("utterances"). ###Code print('Exporting tokenised filtered transcripts (utterances)') for team_no, df in utterance_dfs.items(): file = 'justhink19_utterances_{:02d}.csv'.format(team_no) file = utterances_dir.joinpath(file) # Export table to file. 
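# Tab-separated despite the .csv extension, matching the routine tables
# above; float_format='%.3f' keeps the start/end timestamps compact.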
df.to_csv(file, index=False, float_format='%.3f', sep='\t') print('Exported utterances for {:2d} to {}'.format(team_no, file)) print('Done!') ###Output Exporting tokenised filtered transcripts (utterances) Exported utterances for 7 to ../processed_data/utterances/justhink19_utterances_07.csv Exported utterances for 8 to ../processed_data/utterances/justhink19_utterances_08.csv Exported utterances for 9 to ../processed_data/utterances/justhink19_utterances_09.csv Exported utterances for 10 to ../processed_data/utterances/justhink19_utterances_10.csv Exported utterances for 11 to ../processed_data/utterances/justhink19_utterances_11.csv Exported utterances for 17 to ../processed_data/utterances/justhink19_utterances_17.csv Exported utterances for 18 to ../processed_data/utterances/justhink19_utterances_18.csv Exported utterances for 20 to ../processed_data/utterances/justhink19_utterances_20.csv Exported utterances for 28 to ../processed_data/utterances/justhink19_utterances_28.csv Exported utterances for 47 to ../processed_data/utterances/justhink19_utterances_47.csv Done! ###Markdown Export the token tables. ###Code for team_no, df in token_dfs.items(): file = 'justhink19_tokens_{:02d}.csv'.format(team_no) file = tokens_dir.joinpath(file) # Export table to file. df.to_csv(file, index=False, float_format='%.3f', sep='\t') print('Exported tokens for {:2d} to {}'.format(team_no, file)) print('Done!') ###Output Exported tokens for 7 to ../processed_data/tokens/justhink19_tokens_07.csv Exported tokens for 8 to ../processed_data/tokens/justhink19_tokens_08.csv Exported tokens for 9 to ../processed_data/tokens/justhink19_tokens_09.csv Exported tokens for 10 to ../processed_data/tokens/justhink19_tokens_10.csv Exported tokens for 11 to ../processed_data/tokens/justhink19_tokens_11.csv Exported tokens for 17 to ../processed_data/tokens/justhink19_tokens_17.csv Exported tokens for 18 to ../processed_data/tokens/justhink19_tokens_18.csv Exported tokens for 20 to ../processed_data/tokens/justhink19_tokens_20.csv Exported tokens for 28 to ../processed_data/tokens/justhink19_tokens_28.csv Exported tokens for 47 to ../processed_data/tokens/justhink19_tokens_47.csv Done!
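###Markdown
The exported tables can be loaded back for further analysis. A minimal sketch for one routines table; the literal_eval step is only needed because to_csv serialises the list-valued columns as strings.
###Code
from ast import literal_eval

check = pd.read_csv(routines_dir.joinpath('justhink19_routines_07.csv'), sep='\t')
for col in ['utterances', 'tokens']:
    check[col] = check[col].apply(literal_eval)
print(len(check), 'routines; first surface form:', check['Surface Form'].iloc[0])
###Output
_____no_output_____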
KCM/KCM_Keyword_Correlation_Models_from_Open_Corpus.ipynb
###Markdown KCM (Keyword Correlation Models) [figure: image.png, embedded base64 PNG omitted]
k9Yc//KHlKU1a+jnr68VedWn+umZ8QYiTyMfziIpMgeXfAQQ9fkRoZ71TPAR5Jq43+fBvSzvjouroowmYgAmYgAmYgAmYgAmYgAmMKMHOr8METMAEBk2gX4Ida4N1ukZYP4RwCXadePzpnWiqZRPBTs80PfJDSfpjzRVXXBE81lIhDs871qrE4w+hrd0fV6gTP7DwQwPTdcUEcY3lDPCYbjfARj/WcLT3XLsEnd4ETMAETMAETMAETMAETCAmYMEupuFzEzABEzABEzABEzABEzABEzABEzABEzABExhmAhbshvkFuHgTMAETMAETMAETMAETMAETMAETMAETMAETiAlYsItp+NwETMAETMAETMAETMAETMAETMAETMAETMAEhpmABbthfgEu3gRMwARMwARMwARMwARMwARMwARMwARMwARiAmNKsGP3wFdeeSX85XYlrIuPweTOtaj4N998k4se6D0WYe9kofWqStK+dDfeqvSDiKON7JqZW8CdXXiJczCBbgiwacGTTz5ZbN68uZts/GwPCIyXcU07iuf+Ldm0adOIG4d78GqdhQmYgAmYgAmYgAmYgAmYQJsEBibYffHFF0N20It302tyjjCDkVMVEHXYvbBsB8O6+Kq8ibvjjjuKbbbZprWjYF36fsVj5J1xxhnFEUcc0UhkWL16dXHjjTfWGoG0b7vttivuv//+nlRdvHfZZZeCHSo7CXwbBx54YGCfPj9v3rxijz32KN555500ytcm0JhAr3aJ1S6s9KNc4H7TvvDuu+8GEREhsdM/hMg4fP7558WyZcs6zi+uBz+M5H4Uictr93w8jWv6VrQ7rVjB9LzzziumTJniHyMExUcTMAETMAETMAETMAETGKcEBibYyShG8Orkr4mhK4FotAl233//fYG3WBPhEqMWgQqh6qqrrip4tip8++23xW9/+9ti2223LQ455JDijTfeyCb/8MMPi/322y8Yinh49CLofSC40bamAa+6LVu2hORlgh0CLkbtaaedVuCVM5ID74h2PPDAA8W0adOKBQsWjOTqjru6aWxKxZMqEDlBDUFuwoQJxYUXXpgVxbhPPOli8Ytz8osDaToZJ+NnaFccJBLFaTo9Lxtj4/I8rsU0ivCD01dffRVu6l2k31w7Y/vQ3Ad/5XFt8MxdogmYgAmYgAmYgAmYwPgiMFDBDpFp8eLFWxmrGKzHH398EKHK4vEMwUNE4cUXXywOOOCAIX+TJ08udtxxx/DHebvxixYtUvbZo4zo1MjKJm7jpoStJsYzZc+fP7/YaaedCjznmgQMqyVLlhQTJ04Mf3/605+GTKcl/ne/+12xww47FI8//viQLBHGYu5DImsuECHxAjz44IPb8haRSIcHnc5hH4fly5eH9/zYY4/Ft0fMOUxffvnlAtGE7z5+t2lbRkylx2lFOhHsNBbE77Wb8/Sb4HrnnXcuVqxY0UjIp5/o7+233y7233//8O3Fr7RMJIrTcK6lA/DQQzxnnJEorrGqiWCntE24jIdxTfwfeeSR4HHMj1DpvyULFy5sa2xP312/rz2u9Zuw8zcBEzABEzABEzABEzCBHwgMVLCr8rTCaK6K/6HK/zobS4IdRlDqYYfH27HHHlucf/75LUMcgxwPDDzhZsyYUWzcuHFInAx2juRHvnHA8N5rr72Kyy+/fMj0YgxzjMdLLrlkyP0PPvig2HfffYszzzyzZbDH+dWd46mHcNrEuI/zQnDYbbfdwtRc2pJOiUVQuOCCC4LX4N57772VMJsKtXwrgw6xWME0YzzrDjrooCDcpeLMoOvm8oYS6ESwG5rDv64kyJS9X+438RQmt3bSpnVRn6FdcVD9UpEoTsO5vl2epw/jyaoxQ3FN+rTHtaFkn3/++fAjA+NR7l2wfugxxxwTliVgjE/Hsfh66tSpxZo1a4YWMIArvX9EWI9rAwDuIkzABEzABEzABEzABMY1gYEKdniFMS0M4y/9Q8yoimc6YV2QMSFjUgajxKs0viy/uXPnZo2lXXfdNQgukyZNysb30ohSXWOjm/bgCdfEY0UM0jbiLSdvGeIwEvFuxAtu/fr1Q5JT3p133llsv/32HU3jZPotXkKsydTOelcYtHj7XX/99cEjEw81TTNEXER45FvhPWDYYjjybjBoOXLNfeJhVSdQDGl0jy6YuoxXI4a51l6UMFQm6PSoaGcTEeA93HXXXVuNN/H4IyH1lFNOqUxXNwbRf+bMmVMgzOQC94lP+1ku7UgR7KjbypUri5NOOin8OKBxqWx8ybUlvqfnx+O4xtqgmhLN++WcMU5TovEWZqzlRwj+WMZA/9bgHY33OD+AMMY1FX5j9r0497jWC4rOwwRMwARMwARMwARMwASaERioYCchJfYU0DkGSVU8IlpdkHGOgc65jEMZl2l8WX6dCHbUv5dGlOoeG7avvfZaEKpyU4sffvjh4vDDDw9/nGtReAQ5PNYQ39IgARBD8Kmnnkqjw7UEPcpMF7HPPhDdZOoXghnTYhFNmwYM2zJREs+6yy67LAhyeCHKU4WyCPDSFFyEuirBLl4rr2nduklnwa4bep09q35U9j21cz/ui9Qm5+Wr8aydY84DtN+Cnb7FdtpPWvoZ46nG1Hbfit5HzHK8jGtXX3116bjGpkAwnT59evHZZ5+FHxkQ9GAjZoimrIFX9214XGv3q3R6EzABEzABEzABEzABExiZBAYq2FVNecWAq4rvBJ8MnU6Ny7RM6ojHWG7jBuK6FezYSTf1BpRhy1pyRx55ZDD4cnXAc45plzLqqDuG23//938HTw3yQXxTQKx78MEHg9eGprspTkc8wygX4QuPNvKO81C6sqMMVLzlcqJE7jnqxdRc2kKbNL0PI5UggQ5PQ9LitUTdMGy5xhNP7/vpp58ufSd8G3hU4cUyqE0geAeIHmpLrv2+11sCTcYAvZcyT0x9c+qLquFbb73V8sjDS4/p5ny78t7j++J9x557xJOO9EpHPmngG2lXTEvTp/VVO2hn3a7dTMVPp+Nrqj39S30srXfu2uPaD9OM5W0cvwuYMVbxw4nWUb377rvDcgTr1q0LG/Acd9xxrTUJb7vtttJ/Kz2u5b5A3zMBEzABEzABEzABEzCB0UlgoIIdC6Hj7YXhl/5hHFbFY/Tlggzy1Fhtet2O4YkBXCbKVcXl6p27Rx6qj9olo3vDhg1hLSkMPjwv5FGmfDDsWG8O77M4ILCRR7xLLDvH4kXIPf7iKVhV3NoRtyQgKj/EuyZBu78ibCDAxYIdU3lZT4/prnjXERA2aDftT5kRV/a+KAdPPOqXMmtSz07S8B4oz4JdJ/Q6e0bfhPpVLhe9l3YFuzgv8kjLID/ed5yv6kP6qsA3gtB9zjnntIQ9CXw6Hn300dk0iNaI2GkZsUikjSXS3WrlRcuzsfi/atWqsGYa4wntTNta1ZaYTdr+8TKuafdX9f34XejHGHnX1f1oAc+yH7c8rlV9iY4zARMwARMwARMwARMwgdFFYKCCncSbTo6p8SnMMgCZDspU0NgATaeJ5uKaGp4YuIhle+65Z/
Huu++q+NaR+pWJQ61ENSc5w5b15VRvPOIwnBGoUgFM677hmZEGPO3uueee4MGBtw/MWC8JAU5rJLHYuYSA6667LsSrXKbXIrQedthhwTto7dq1aRFbXWudOcpjgwwWrseYrAu0A08T7f6aCnZMd77vvvtCNnonZ5xxRpgCrU0u8EAhVAl2xP/1r38tbr755uLjjz8O6fv9H96vBbt+Ux6av8aHqn6u9xILa3EuEldIVxbivqs03Qp2deMJU8fpw+maeQjb3JP4pvqoHdRLXNKxWG1ExBYz9bPTTjst9GHuK055Vx1jNip3vI1rvCtEVMZFQvwuEEGvueaa4rnnngtx+rFDY7zWAsULjwDPMsGOeI9rAZP/YwImYAImYAImYAImYAKjnsBABbvc2msShTDgquJT41PkZQDmDMhO45R3fFReZYYSRlSdgR3nlzvPGbaxQU3+r776avB80ZRR5YM4pTWPdC89MvVO4hQinjZDSNOVXSOAHnrooUE0LEuj+whh1H3hwoVBfGMxdYlwSpM7Ijb85S9/CbtTEh8LdlyzDqGCvArltcK0WBhIeOE+i7Qj5I2EwPu1YDfYN6F+mxsfVBO9F303uq+jxBXSKeiZuH92cx7nTRl1YjNpEOUQ7FJvW9UxPaodsWCnvkNa6qB6cF9rQco7jHKa8EzLJU/x1/Mxq/EwriHKMd2VIyF+F3jU4fWsoDX99D3CPf63BZ5Mkd2yZYseGdYj9fG4NqyvwIWbgAmYgAmYgAmYgAmMUQIDFezKxC7Y8j/9VfFl/GUAyiCM03UaF+ehcwlHZYYS9Y+NKj3XzpE81A7VPV5HCg81RDbEsFicQ+TC+6XdzR3aqRtpm4p81BOPOgRYjP2PPvooCGdqWzvlijteekwZY0qsRDvEwNhrBS+W2AOy02+qnfq1k5b62LBth1j3adWPqr49vRcJJGmpEldIp8COsfJI5djNGnY8n+5A20Sww+t1t912K+bNm6dqVR7VjiaCHSKRppoz3iDUb9y4sWeC3Xgf1/QumL58yCGHtDymEe+uuuqqMF4ybhLwtNPY3uR7rvwI+hCp/hOLv30oxlmagAmYgAmYgAmYgAmYwLgjMFDBDnEFAyU2dHWOwVsVT7o5c+YU69evH/KSqgyYTuOGFPB/F5qWpEXD0zQYLf0Q7GKRQGVquimGHQZeeq108ZF0//M//xMMbt3HQ0f8mxzTqXfKJz3iSYLnDzu6Mp2Ostkkgnt46TUJCJMrVqwoTj/99LB7MEIXm23cdNNNQbDjO8ADCAEPwRIRj6mxsTADu05E4Cb16ySNDdtOqHX3jMaAdEOI+Htn7OH7ijeHiONZR4715HJ9UbUjLv72uI8wRr6xEKj6VOXFs4gfdWvYaa26eAOLuN4616YWEoliwY48Yi9n1Yup6QjuxCHWIdoRVP+0rSGy5D8xGz2vcuJH0nEsvY7T6ny0jWu0H6b8+MB4yPfBGqJ//vOfQ5PwJOffEW2qwyZEiHVa01P82uEvVv068i5phwW7fhF2viZgAiZgAiZgAiZgAuOVwEAFu+222y5sGHDAAQcU7f5NnDhxiPjCOnIYPk3WqWu6vl3ZtFs+Dry5qowSjJZBCXbyqEOEwIMN4S72NMt9zDL8DjvssNZachhYGI0YjFXvg3jSNTHI8MLBwKc+TO1SYN076ovnXdUUVTzqEEhYxw7e+kOo0xRejogR8NY707Q9CQsY8ogR6dRh1Wc4jjZsB09dAoe+o26OOZFJLSIuFVG6Fey6qWv8rATDnGAXp+NcbWR8xVsVUZxNXuhfL7/8csEmEbQzbas45I4xG70PlROnH8vjGjyPPfbY1o8P4o4nI57LBKbLstEHzPXD1PLly4es6ck0WLy8y344inkO6px3SXua/PswqDq5HBMwARMwARMwARMwARMYCwQGJtgx5euGG24oynZ7/eqrr4InFlAx6ljv7LPPPmsxxiiIvaUwDmT09OqYMyKpAIYUmzJUiWI8mwp2iEbx2kStxpSckIcM4SrDlse1acRRRx0VypWnWUnWQTxjGq08NUgHw7TOuedl6NcZZHjTzZ49O7wXFq3nWgEW8+fPD3GIbRLfFK+jpsAyTYwprggFvHeVTT533nlnEBBvv/328M0o7/j9iJ94Kv/hPPJ+bdgO9g00+Q70XiRspTXU90+6OPRqDNK3nebdpG/SR/DEQ9ipC2oH7WS8nTlzZlhXTc8hfksA18YHCPULFixoedYxlbVfgh31GKvjGuyZvoxnHT80sZEP71ffnH6E4F0+/vjj4ZVwD96sw6npsRof029R73A4juo/ue94OOrjMk3ABEzABEzABEzABExgrBAYiGCH8XfyyScXt9xySxBx2A2PNckwPgjs+okxg1FDWLlyZfAqQOBBjCFgFMSCXbhZ8R+JRxicbHggcafikdIovPjIh3Xi8ALJBeqXGtiIBSeeeGKYalcmVMZ5kYcEJgkN3MsF4pnChwCUerPl0mNM0QYZiKThXlrn3LMy9OsMsqeeeiq8NzzpcjvJ4lmHhx31QHTTu43L5L3hDag4GaiUzT08HfG+wxNFC7gz5Y/pe5qCS37iJ55xGcN1zru0YDdY+nwzfE/xN5XWQO8l7htxGoQT1mVM+7C8fBFgcn/XX399eN8cc/G6Rz5paLdvanp8nA99580332x5cKkfp+2kfXjOUZ8rr7wybGLBNd64eNd+8MEHgR9TM2+88ca+CnZjdVyDMd+gQvwuiGOXWMbF+McM3hNjnbyGeVbjYdm/C8p/kEf1n7p/HwZZJ5dlAiZgAiZgAiZgAiZgAmOBwEAEO9ZDwnMAbxAC04Bi8UjTGWV0YsDgWSDhBxGHKUBNBbtY2GHdu9tuuy0YPghK7QYZrhhOCIllAaMlFb/kpaIFw8ue1X3ykMAkwQkvGAxmPE9mzZrV8qSBEd5yCECUiwhaFpRX7KlB2nZFgSqDbNWqVUE0iz1EcvWRqAfPJu9DBiplc/6f//mfBTsKa7dbTSNDsNNaXZSLwMLUMqbFSvxL68O7xSiW8JfG9/rahm2vifYmP72XVMjqNnfyo392km/TvqlppNrgJa6zdpHVuCuR6IknnijuuuuuIHpPmjQp1JF60ienTp0aftxAEOeHDsZpfjhhqjvjx5IlS3oi2I33cU3vgm9jzZo1YUmCiy66KPzQwDvUvzvpEgLaaIR/08qCx7UyMr5vAiZgAiZgAiZgAiZgAqOLQN8FO8QShDjtOAgerY8kgQ5BDu8oCXSkiRccx7MFIatsh9YYOQbstddeG9YKkheWRB0M0nvuuaflcRI/lzvXcxizTPWknmUhJ9hJbJIIV/Ys9+GEuMR6e7feemtx1llnBQOasuM/DDzayJRT7rO+FOv78YeHTE6ckiCarnvUVBSQcVkm2GmBeuoTe4jk2kv9NKW1yfsQQ5WNCMofAdGS8nIee3qO95ILeLsgpFJnWPYjzJ07d8jagLwjytt1112H3GeTDofhI9CuYMc4wJRGeciVHZt62PE8+cXjS1nfRHyXWC1imkaKgE+fULj77ruH7CatfswYw
hiDOMc3irCHwE3fJH+EI/om+TLucc5akHhBU0/GsyZjGvXwuKa3MfQYvwti4A97gv7dgXv6o4ae03g4NNei5QnpcS0l42sTMAETMAETMAETMAETGH0E+i7YsfYOnhnxdEUtnB17nr3wwgvBYwzvAAKiFMYJC3LLU63KSMRQxcDECw9j5dxzz20JO+SHEcSOo8SxjhAeYVrsO/faKPPUU08N6SX85dLpXk6w086yZaKRnmX9KTbkoG76w1hj8wa86mgXXna0EWOd+pCO9tAuvOvwsiOPq6++umX4KX9tmIFnYxww+th5ld1YEbjK/ognXWokYozjcSMhqk6sU9kS2mgDYtuvfvWr0o0oJLylZceiZa5cmOLtN2/ePBU75Ag31iWkDkxH60eQEKR3WnZM29aPujjPcgJ6T0094eSxWvY+O7mfjm18E/LYZZzCC4t6Mi6k9aQ+TI+PhWvEPwR6xl5t8iKxJ30+JkO/uPTSS4sHH3wwiG1c4w1H3oypCEtNBTuPa+XjWtm74N+dX/ziF0PeZfx+ysZypfG4JhI+moAJmIAJmIAJmIAJmMDoJ9B3wQ7DE+EkXRT91VdfDdOsYoQYnqlohLHJlCwMRsSoOCAYsZse6yqx1hKG8u677148+uijWTEOoYipYBKYmEaGIIaHyebNm1veKRjHTEUiPzxLZPDGZafn1I30GNV4zCCOIQxyT1PS0md0TXmIg6xVhAdLbr0t2vrSSy8FzyzyjKdPkQ/iIDu9EseGDcuWLQsMJC6w42O6VhbvhvRN/2JhCcOSacu8F/7giIjWNPAutG4T5ctDMH0+J9jFoiUcEBEQ5hDu+OMd8B3Em1Ck+XL9zTffhGerhNvcc743tgi0K9g1bT3CGN92lUBWlhdTHtkkhrU+GafIh37GJjPx1G89r3UcEfRYr/Ppp58Ogl/sVSuRCHFPfaXdI158eEKnAqPqER89rpWPa3oX8bfB+Kw1Pvm3gH93rrjiita7YrxlnEyXNoiZc+5xLSXiaxMwARMwARMwARMwARMYnQT6LtghpiCIIRzVBRm4OQEJI5H1exQeeuihliFLegQaplriYVAXSENaiXw8r+lHbJbA9F3upV56VflinEowi+svL7iqZ+vi2GkWQQyDnXpSdwSvNCBkMZWW8jHESaOpxWeccUYw5OJnEOAQBTiWTevjfpxOz2saLAbkn/70p6xAqrRlR0RIvAfxJCrzYswJdrx76o3gJw4SXcQej0A2M6EMBxOoIqBvJxZPqtI3jdN41m6+jJls+KBvGQGenVrjXbNzdaCvM07qudQbTyKR4rs5NhHscnWM743ncU3vQt8GHpF/+MMfwriGFx3jln5sid8T/2bx76mDCZiACZiACZiACZiACZjA2CfQd8GuHYR4y7FJROr18cADD2wlxCG64X2CEMWUTQk37ZSHZ9W6deuCMYwXnMQdvFVYmL1dzyvqgLEtL8Gcp1w79YvTvv7662GnzNvv6AAAIABJREFU3dzuq3E66qwprtznGjERz700sHYa61gRXxWIJ1261hrl1NWnKl/FwQyPvVwgjrUL47IxblkEX++L52in2HNs993lyva98UGgX4IdojYiPsd2w+LFi4OnKP0r/s7r8mFcxDvv7LPPLpYuXTrk2VQkqssrFy8RqReCHfmP13FNY2r8beChzL8dcYj/TenlvydxGT43ARMwARMwARMwARMwARMYmQRGlGA3MhG5ViZgAiZgAiZgAiZgAiZgAiZgAiZgAiZgAiYwOAIW7AbH2iWZgAmYgAmYgAmYgAmYgAmYgAmYgAmYgAmYQC0BC3a1iJzABEzABEzABEzABEzABEzABEzABEzABEzABAZHwILd4Fi7JBMwARMwARMwARMwARMwARMwARMwARMwAROoJWDBrhaRE5iACZiACZiACZiACZiACZiACZiACZiACZjA4AiMKcGO3UPZDZU/ztNQF5+mT6/ZIZFd/L755ps0atRe05ax1qZR+zJc8WEhwE6lTz75ZLF58+ZhKd+F9p6Ax7XeM3WOJmACJmACJmACJmACJmACgyUwMMHuiy++CMIQ4lAnf5988knxj3/8o5IOgtoJJ5wQ/jhPQ118mj69vuOOO4ptttmmeOaZZ9Konlx/9913Be1EcLz++uuL/fffv1i4cGEo7/jjjy/Wrl1bkOaBBx4opk+fXrzxxhtBaEBs0B/iQzuBtvSzTe3UxWlNYDgIXHzxxT3pA3/729+KXXbZpWCcyAXuE0+6uvDuu++2+rT6drvHdCz4/PPPi2XLlnWdL/Uo+1Ek1y6PazkqvmcCJmACJmACJmACJmACJmAC1QQGJtjJKEYc6uSviaFbJ8jVxVejKoIh3k9xCyHzwAMPLLbddtsg1sFs+fLlxZYtW4o777yz+NGPflTsvPPOxQEHHBAM5hdffDGccz158uRixx13LHimndBEsNuwYUNx8803F19++WU7WfctLd4zq1atKmbNmlUcffTRQQDuW2HOeMwT0NjUjhCfE9QQ5CZMmFBceOGFWVGM+8STLhXfyC8OpOlknIyfSccCCYpxmk7P+WEk96NI3Aade1wTieojPBnvZ86cWcyYMaMx3+pcHWsCJmACJmACJmACJmACJjBaCQxUsNtjjz2KxYsXb2WsYrziQVYVj2cIHiIKsViFYBWLVghXCFi6r6NErbL4RYsWKfvsUUZ0O4Z9NqOSmzJsKScOH330UXHJJZcU//Zv/1b8+Mc/DsLd7bffPsSgkxiZGunK56233iouv/zyrf5OOeWUIAxwTOPx5EOkO+mkk4KIuGDBAmU38OOnn35aUP60adOK7bbbriVmIHDCzcEEOiXQiWCnsaBTwSt9Lu3zXCPOr1ixom2P5Lfffrsl+MdMJNjVjV9aOgAPPbzjVq9eXXz99dchK40znQh2aRs9rhUFP4bMmzevOPzww8MYq++iHb7xO/a5CZiACZiACZiACZiACZjA2CEwUMGuSlzBaK6KT5GPZcFu1113bYmN55xzTvHrX/+6OOuss4r33nuv+P7774uXXnopCFcY3scdd9wQsbJMsCOtjMGmx1/+8pfF+eefH55DzKubkpy+o15ex/VH2D3xxBOLHXbYoa1vppf1cV5jh0Angl2u9RLEUmFKabnfxFOY9O2kVf46SvhPxwLVr06wkyjH85s2bSqmTJkSxHz6v+LaEZRUH49rekM/HHnPGo9ZAoEfrrhuh+8PufnMBEzABEzABEzABEzABExgLBEYqGC30047heliqScX1wcddFBRFY+3V11IjUnELTyz+OM8jS/Lb+7cuS3BTN55HDE4MaYmTZqUjZ86dWqxZs2asmxr78uwjafUsVYUhvZpp50W1rCjDZdddllx3333Ba+X559/PngsPvzww8FLIzXS6wqVEJYa8XgznnrqqcHrY/bs2cMq1tEGuNJG1vgjSHxoR+StY+H4sUeA6dN33XXXVt6j8RjE2EO/znmZxunqxqD169cXc+bMKeiTucB94klXF0aKYEc9V65cGbxsGZ+ajqFx+zyuxTSGnvPjC2OvlhvQ
eGzBbignX5mACZiACZiACZiACZjAeCQwUMGOqYz77bdfVuyaOHFimOpYFo+IVhdknGOgc54al2l8WX6dCHbUv6n3TFm5Mmznz5/fmgaH2Pjtt9+GNex23333wI9poXi+4PGCgMVzH374YXHsscf2ZA078jryyCODWMfadU086xBEWWuP4yCCBbtBUB79ZWgMkBdTN8dUDM95+cYCf9Nz8klDvwU72tIJi0ceeaRyY5+0HVx7XMtRyd+zYJfn4rsmYAImYAImYAImYAImMB4JDFSwq/KGwoCsiu/k5chY75W3AnVkXSl2Z00Dcb0S7GJDmrrjffHoo4+GNf6233774uc//3nw0pFoFaenHnHILY4fL3jPbrQ8z5H7rBNHO7h33nnnZdcb3Lx5c1xEwZpXeOHxDEeu+x3U9l5/M/2ut/MfLIEmYwB9hm839TJVTfWtpX0rXhcSL7299torrDUpr7zc+pCsRUk60isd+aQhnipJ3Tr5S+urdtDOul27mQrPH2Jb/McPCIxJ7YypEuziNnhcS9/4v64t2OW5+K4JmIAJmIAJmIAJmIAJjEcCAxXsWKOHBdFjA1DnGIdV8RiYuSCDPDYG2zlvx/DEAC4T5aricvXO3ZNhi7GugNcamz7gXff444+H6b3nnntu8R//8R+Bo9KJQ2qk98LwT3mmwsZXX30V6kg66sp1v4PEBwt2/SY9uvNXv6jq5/QZvt30u1bL9a2lfUvxHIlLy5D4Euer+lTlRX70W9ZoZA1LCXvpkR2Sc2mYUs/yAmkZagf10cYSsXjPORtNqD1xX2ZXZqal8+MB7UzbGh4q+Y/HtRIwmdv6Ztrhm8nGt0zABEzABEzABEzABEzABMYAgYEKdqnw0851anyKvQxgdtljjbPYANW6blVxTQ0jDFw8zvbcc88Cr7U0UL8yMS9Nm7tmfSwZ2hjisXHOxhIImrr3m9/8ppgxY0brmvup5468duAjUTR3XLduXfH73/++YOdcpixfc801YbptLq3uMbU4De+//35x3XXXFRwHESQ+WLAbBO3RW4bGh6p+Tt8diYJd3Xhy//33h2nr6Zp57OjKPYlvenvqM4hC4pKOwRpnWSdTzDT2sY4mU/C5rzjlXXb0uFZGJn/fgl2ei++agAmYgAmYgAmYgAmYwHgkMFDBjt09Fy9ePERUk8DG7nhV8anxqZclwzNnQHYap7zjo/IqE4i6Fex4vmqNv6brYbEhRpX4oDZ99913xbJly4pDDjkkpJ8+fXrY1ELxI/0o8aHsfYz0+rt+gyGgfpsbH1SDTgQ7PZMKXp1ek18c8LCrE+wQ5bbddtuCdeWaBPWZWLCLvXmpg+rB/YMPPjgIdO+8804YmymnCc+4LuTncS0mUn1uwa6aj2NNwARMwARMwARMwARMYDwRGKhgVyWuYNhVxZe9lCoDstO4XFl4l1E/vN2YppoG6l9nYKfPxNedtj/Og3MZfBxzgQ0kHnvssdAWiQts9IHXnjz40qM28cjlN1z3JD508s0MV51d7uAJVI0Bqg19r0rk1rdGOgU8x+J+0s0aduST7kDbRLBjeYHddtutmDdvnqpVeVQ7mgh2iHP77rtvgQcuG88ceuihxcaNGzsS7HrRR8fLuKZ2VgnMlS/ZkSZgAiZgAiZgAiZgAiZgAmOGwEAFO9ZVYtpnbOjqHIO3Kp50c+bMCZstxPSrDPJO4+L8dc5GE2w4wbTY3KYKvRTsWK/vhhtuyHISr/gYc5HBxzEOn332WfGHP/whtAFxgl1t67z2Jk+eHKbKjkTjUeJDL8SAmJPPxxYBjQHphhBx/2HsoU+wSUR8X+esI8dacbFgl1IiLu0nub6o+lTlRd4Idrn16VQnjppCH29gEcfrXNPj1Weol+pBHrGXs+rFzrV4PBOHWIdoR9BzaVtTHromP/VRj2uiUn7UN9OUb3lOjjEBEzABEzABEzABEzABExjtBAYq2HUzNQqBSYYf0LX7aZN16qrWsIvjyqbdUt7ChQuDUR9PIYtfPoZpNx52eNgg0mHUsk4UnnxVghpecfBEaIjL1c6VHJn2+txzzxXHHntsKy0iQJU3Udymdo3z+Nl+n0t8iL+Jfpfp/EcfAX3DfPPd/knMylEgLhVZJL5wVFB9qvIiLeNMt/XV8ypffYZr1UNpdFS9GF9Zr5NpsYw1TIt9+eWXiw0bNrS9hp3HNb39+qO+mfRbqn/SKUzABEzABEzABEzABEzABMYagYEJdrEglYPIzqLff/99iMKYZNomXmEKGJKxONNLgzY1VlWmjuyMeMwxxwQPwNWrV+v2kCP1i4UzImnPt99+OyRdtxfxlFYEOzab+Oijj7LZUv7vfve7sM4Va9Qh3i1dutSCXZaWb45FAhKmqgQQ+i5jgIStlIOELolZiu/VGJT7EYB76XiicuMjG08gwi9fvjy+nT1XO2gn4+3MmTOLRYsWtdLyg4V+tPj888+LadOmhbFjwYIFLYGPzW9gWcWzlWEbJx7X/gXLgl0bH42TmoAJmIAJmIAJmIAJmMAYJzAQwQ7j7+STTy5uueWWMJ0U4YhdTlkXjnDfffeFtZgwKAkrV64MUzHnz5/fEvFSwS4krPgP01Znz54dDM7tt9++uP3221t5VTyWjcKLj8Xd2SWRHRhzISfYIRaceOKJYaodnnPdBnZgZfobO7peeeWVLX5V+SI2ijPpZBCWTf/TNDqO2nm218Z5VX2bxkl8iEXcps863fghgGj96aefhj/9IJC2vk6wQ0zC6zXtw/Ly1ZTS9Hj99dcHIZBjGhdf53adbirYqR9cddVVW41v9Ps333wzeNrSZqVNhUnah+ccdWJcYf06rg877LBi7733Lj744IPA74gjjihuvPHGngt2Htd++CI1Po/EMfeHWvrMBEzABEzABEzABEzABExgEAQGItixHhJeIHiDEDAIEcBkOGoXQhmdGJB4crDu1Nq1a4PIx9pxTcUZDHOmsCJssb7bbbfdFs6feuqptpnKcCUvhMSykBPs5KWCoYtoUBbgUzX9VXEYz3BjJ1jdKzsypRaRIQ0yCHst2NG+Z599dog4mJbdy2uJD02/iV6W7bzGFoE6wa7T1qqvaZxrJ5+mgh0/IPBDAuvNMY7GQbvIatxVn3niiScKNpI56aSTwlgiD2PGuKlTp4YfN4jjhw7GG344YcMJ1rRcsmRJY8HO41r8Npqd65uxYNeMl1OZgAmYgAmYgAmYgAmYwFgm0HfBDvEMIU47DgJT6yNJoMMb7oILLmgJdKRh6imbUJAGzxYMmLIdWuMXhAF77bXXhjXbMDrxMOOPcwzSe+65p+VxEj+XO9dzGLR46+U2m9BzOcFOO8vWGV9ady72bsuda/H7JmKb1o1S/XSUQdhERGgynZB8v/nmm+KMM84I3kRwZrpdr8OaNWuCmCCBUuJlui5imVDZ6/o4v7FDoF3BjnHglVdeqfSaw1utqYcdackvHl/KBDv65McffzwEPssHIKzhEcuPHQp33313MWHChOK1114LtyTY0fdvvfXW0J/mzp1bIOwh7jNWk/9FF10Uxkr
yZdxj3GR6LDvSUk/Gs7oxjQI9rulNlB9TUZMfY/j3BuYIpBrvmL7cj3G1vGaOMQETMAETMAETMAETMAETGG4CfRfsWF8NwwNBTgbpli1bgvgWe5698MILweDEo42A8IaBycYJ8lSrMhIxVDEw8bjC4Dn33HPDcwKM+Hb66aeHONZzW7VqVaVwR5mnnnpqS4Ti+aqQE+y0syxxvQixwd1pfv0Q7HivvF+4Iyoi4PU6qO2UUfVnj7tekx/7+bUr2EnIrvoO241Lx7ZYsGMMRLCmngg5qdhOfRDxEe3uvPPOILzRJ/FKZuzdtGlTeInqQ+nz8RtmnLv00kuLBx98MOTDNWIReTOmIuw1FezifKvOm9Sr6nniRuu4pnrXfS/p91HHw/EmYAImYAImYAImYAImYAKjn0DfBTsMz9yi6K+++mqYZhUjxPDEKy3+w9hkShYG49VXXx0nDwbl+vXrw7pKeFxh9Oy+++7Fo48+mhXjEPWYCsaOs6RlGtmsWbOCh8nmzZtb3ikYx1OmTAlp8CyRwTuk8OSCupEnRjUeM0z7RRjknqakJY9UXuLtgmiIsU7gmo07yK/K4K7MNDJsm0yrxdhHIGhiLMIWYz728Kmri+NNYCQQaFewa1pniTGd9Fem8eMdx1qfjFP0e8bAo446KniupXXAm4109FfW63z66afDphWIdvqhRMJYEw/dnIdvL9a19LiWvjlfm4AJmIAJmIAJmIAJmIAJmECeQN8FO0QcplwhxtUFGbgYp+kf69m9/fbbrSweeuihliFLWoQ6vEvqPOHIgDSklcjH8xi6rHHHmnlM3+Ve6qXXKjxzgsjH9KW03nj1NalTmmU8zTTOU+v6pembXotxE6N9JG860bS9TmcCdQRGmmDHmMmGD+r3hxxySMFOrfGu2bk2sYwA44OeS73xJNgpvptjExE/V0ePazkqvmcCJmACJmACJmACJmACJmACWxPou2C3dZHld/CWY5OI1LsDz7JU9OIa7xPWTluxYkVHnl14r61bty4Yw3jB4f1BwFuFhdnl3VZe46Ex8jKThyAbMSjPoSmbXdHumAVefBjd3QStmcSxLrBmEtPhvH5SHSnHj2YC/RLs2ulrKb/FixcHb11+QGhnDGFcxDvv7LPPLpYuXTrkWQl2nXj8qX6aDtypYEc+HtdE00cTMAETMAETMAETMAETMAETKCcwogS78mo6xgRMwARMwARMwARMwARMwARMwARMwARMwATGBwELduPjPbuVJmACJmACJmACJmACJmACJmACJmACJmACo4SABbtR8qJcTRMwARMwARMwARMwARMwARMwARMwARMwgfFBwILd+HjPbqUJmIAJmIAJmIAJmIAJmIAJmIAJmIAJmMAoIWDBbpS8KFfTBEzABEzABEzABEzABEzABEzABEzABExgfBAYU4LdP//5z+KVV14Jf5ynoS4+TZ9es0MiO8B+8803aVRPr+vyZzfadNfcphX49ttva3fUZVdKdoh1MIHxQOD1118vnnzyyWLz5s3jobnD1kaPa8OG3gWbgAmYgAmYgAmYgAmYgAmMQgIDE+y++OKLIHYheHXy98knn9QKTQhqJ5xwQvjjPA118Wn69PqOO+4ottlmm+KZZ55Jo3p2jRD385//vDjvvPOKr7/+OpvvI488Uuywww7Fiy++mI0vu4kQ97vf/a6YPHly8eGHH2aTUSZlT5kypdiwYUM2jW+awFgicPHFF/ekX//tb38rdtlll4JxIhe4Tzzp6sK7774bRESExE7/ECLj8PnnnxfLli3rOL+4HvwwkvtRJC4vPve4FtPwuQmYgAmYgAmYgAmYgAmYgAnUExiYYCejGMGrk78mhm6dIFcXX4drEILdY489Vmy//fbFww8/nK0ORjKC2jHHHNO2l93atWuLvfbaK4h2iHdl4bXXXit22mmn4swzzywVDcueHcR9PHVWrVpVzJo1qzj66KODADyIcl3G2CSgsakdIT4nqDE+TJgwobjwwguzohj3iSddLH5xTn5x0FjTyVipZ2hXHCQoKr6bIz+M5H4UicuLzz2uxTTy5/Bcvnx5MXPmzGLGjBlt8c3n6LsmYAImYAImYAImYAImYAKjmcBABbs99tijWLx48VbGKgbr8ccfX1TF4xmCh4gC3mUHHHDAkD88x3bcccfwx3m78YsWLVL22aOM6HYM+2xGJTclKE6fPr347LPPWqnaNbRzxjRC32WXXRY85zZt2tTKOyc88D5OPfXU4On30EMPDXlf6XtoZdTnk08//bRYsGBBMW3atGK77bZrib4HHnigBbs+sx/r2Xci2Gks6Eb0ip8lvzhwvfPOOxcrVqxo2yP57bffLvbff/+iTLCrG7+0dAAeet99912xevXqlnCvMSo3xsT1j8/1jMe1mMq/zvFinjdvXnH44YcX2267bWtca4fv1rn6jgmYgAmYgAmYgAmYgAmYwFggMFDBrkpcwbisik9hj0XB7umnny4mTpxYcMRoRqjD2JVgt2TJkmC8b9y4sbj//vuLpUuXbmXMn3/++dkpwStXrix+9KMfFY8//nhAydRXjPF2hYcmno7pu+rFNSKDBA6E3RNPPDFMC27nm+lFPZzH2CPQiWCXo6B+mopvSsv9pv2nnbTKX0eWHKBfdCrYSWDjecR9psdffvnlYUkCxbUjKHlc05vZ+hiPv4is/HDFONcO361z9R0TMAETMAETMAETMAETMIGxQGCggh3TLJkWhvGX/h100EFhGmZZ/AMPPFDLOzUmmfaJZxZ/nKfxZRnOnTt3K+88vPV23XXXYExNmjQpGz916tRizZo1ZdlW3sejDg8UBDc2lZDxf9FFFxV4umDoyzOGKavUhWmz8XqArBOIkZ0ae2ne4vDTn/60WLdu3ZA88GIjb6awxXnrvMlagpUN7TASrrSX8gniY8GuQ6Dj5DGmT991111bjTfx+MPYg0hyyimnVKarG4PWr19fzJkzp3j++eezdLlPPOnqwkgR7KgnYv9JJ50UxgONHekYU9aedOxRv/W49i9iL730UhjXtYmQfphoyreMu++bgAmYgAmYgAmYgAmYgAmMfgIDFeyYyrjffvtlxS48y6riEdHqgoxzDHTOU+MyjS/LrxPBjvo39Z5Jy0VMnD9/fpgS/NZbbwXvugsuuKCVn4xcjDkZwPI2i4+Idalgp7zhro0mEL5YJy+dAoyQcPDBBxeHHXZYa8MJvPAkkqX1jq8pZ8uWLUEYje/361xMLNj1i/DYyFdjQNxPOj2nb8Uh5+WbTsNvcp3bPKbfgh1t6YQDG94gJjURlDT24BHrcS3+csrPLdiVs3GMCZiACZiACZiACZiACYw3AgMV7KrEFQzIqvhOXoyM9SbGZZP8qSPrSr3xxhtbJSeuU8GOHRdZew9PHzx/2FSCa3Z0xeiVOPXUU0+FeAQ1NpAgjr8777wzLGZPPPWI28s9PBv//Oc/Bw8Z1rfiebyJ4KOAVx9lkxYPPgLTci+55JIg4lV5BZFu9uzZQQDgyHW/g5j0+pvpd72d/2AJNBkD6DOIV/JgTWuob4
10cUCEkqcefZcNXegvukcfI9/Yc4940qmvk5Z80hBPlexEWOOZtL5qB+2s27UbT1/+5FmrI97KTQU7j2vpW62/tmBXz8gpTMAETMAETMAETMAETGC8EBioYMcaPQhGMv7iI8ZhVTwGZi7IIO/UqI3FrVz+8T0M4DJRriouziN3zsYPe+65Z1i/Dk84PA3xdJNIJkP76quvDh6KTFvdZ599gkH+pz/9KYh1iHaId9QjbhMLmqds5PES10Ved8pHcUzHJT1T4jRtS3E6fvXVVyGeckjHdb+DmFiw6zfp0Z2/xoe4T6Qtos/w7bYr2MX5pP2OOIkvcb6qD+mrAoLdDjvsUJxzzjktAVBCoI7skJxLw7ICCO9pGeoz1AdRHUEt3a2W/k7g2bgvsysz09IZA5oKdh7Xqt5wPk7fTNX3mn/Sd03ABEzABEzABEzABEzABMYagYEKdqlw1M51anzqRcgAZpc9RKfYAOWa+1VxTQ0jDFw83xDWMETTQP3KxLw0bdU1C7TjXRdPV40NbT373HPPBaMchkyf/fbbb0MU9YjbxDRghFE8Y9hxcsKECWG3VeXDEWMcUQ4hAE+7NCAQsoPh7bffHkTBNJ7r999/v7juuuvCMRff63tiYsGu12THVn4aH+I+kbaQPjMSBbu68YSNZ+iX6Zp5bCjDPYlvaq/6DKKQuKRjsMZZdpQWM419p512Wpgez33FKe+6o8e1OkL/irdg14yTU5mACZiACZiACZiACZjAeCAwUMEOUWjx4sVDRDUJbOyOVxWfGp96OTI8cwZkp3HKOz4qrzKBqBeCHWvFHXnkkWEKHeLY//t//y94tMSGNh4ueMHtvvvu4W/atGlhPTrKJ45jjsWGDRvCVNgzzzwzCI4Y5NxjcwnW38NTh51X8eyL193SRhsY9giJLEA/EoKYlL2PkVBH12H4Cajf5vqEakefaVew0zOp4NXpNfnFockadohyCHasK9ckqM/Egh3lKFAH1YP7jAWMSe+8804YmymnCU/lp6PHNZGoP1qwq2fkFCZgAiZgAiZgAiZgAiYwXggMVLCrElcwFKviy15IlQHZaVyuLLzUqN9xxx0XNldI01D/Oo+Y9Jn4mumsrFmHaIk4uXr16rBeHoJabGizJh0CG9Nj2YCC55YsWVL87Gc/CwIc9UjFCbzm4rXoMKCnTJlS3HzzzaEcps+x8y3iXDzFjnW25J3IGnjUDUGR54c7iEkn38xw193lD45A1RigWtBn2hXs2DFWU1M5drOGHc+nO9A2EexYXmC33XYrmPbeJKjPNBHsEOf23XffsIs048Shhx5abNy4sW3BzuNakzfzQxoLdj+w8JkJmIAJmIAJmIAJmIAJjHcCAxXsEIZYXyk2dHWOwVsVT7o5c+a01nXTi6syyDuNU97xkY0m2HCCabG5TRW6FewQ4piuivccu62+9957QRxjGhqbQCAGXn/99cXjjz8eBDp5JqZHPBVjwQ6x7pprrgl54924adOmMDV2xowZQbRDiGQnWEIqEsTiH4b3PffcE+rHlLvhDhIfLNgN95sY2eVrDEg3hNC4w5GxB8Eu3hwijmcdOYRs+kNZiPuK0kh84aig+lTlRVr6Yiyex/XRudaqo/66lztqUwv1mViwIw+NIYwdqhc71yLQE4dYh2hHUP3jMUZtyx09ruWolN/TN9OUb3lOjjEBEzABEzABEzABEzABExjtBAYq2LGZApsqxFMum57jVRaLM6wjhzHZZJ06eYnJMC17rmwH4vNRAAAgAElEQVTaLS954cKFwajHkM4FDN1OPezwWGP6WW46HSImQhl55+Jz92JjTzs15tJxj3Yp0DZ51XAvJ0Io7XAfJT7E38Rw18nljzwCEpjKvv927kvMyrUy11ckvnQq2LVTt6q0Kl99husyLmqjNoxgXGLMZlrsyy+/HLx4GV/iMSbHg3se18rIlN/XN9OEb3kujjEBEzABEzABEzABEzABExgLBAYm2DHl64YbbijKdntlZ1G8uAgYk0wFZcqnAoZkLM4gLlUZqZ3EyVhVmTqyNtwxxxwTPACZqpoLPJsKdrRHm0HkntE9NobA0+Xss88OHmxsDrF58+bigw8+KC699NLgUTd16tQCrxcFNnlgM4h0k4i5c+cWM2fObO3UitHMVFs898gXDzvKw0sQj0W87uRhx7p2MWPaNFINR4kPcX3FxkcTEAEJU1XfMd8544WELT2ro761dHzo1RiU+xGAe+l4ovrERzaewBNv+fLl8e3sudpBOxlvGSfizW34wUI/Wnz++ecF62OyRh7jjDiymzcsq3iqcI9rItH8aMGuOSunNAETMAETMAETMAETMIGxTmAggh3G38knn1zccsstQShih1OmZDIdk3DfffeFtZgwKAlsbMAGB/Pnz2+JeBjL7YgzCFKzZ88OBuf2229fucNpKLTiP3jxYbgyPbVsOij1Sw1sjFw2cmCqXZlQqWJhhDecjqwXxbpzCGoEjF/yQPRksfl4qlksdLL2lLgq7/RIerxl/vrXv7aiMOBPOumkIYY4bWpimLcyGeCJxId2vokBVs9FjRACfOvskMyf+klaNb7zKsEOURzhO+3D8vKNPXfjc6awky/H+H56ntt1uqlgp35w1VVXbdU+xoE333yzJcgrbSpM0j42oKFeV155ZdjEQpvU7L333uGHA/gdccQRxY033thYsIOzxjMdPa6lX9/Qawt2Q3n4ygRMwARMwARMwARMwATGM4GBCHZ4huEFgjcIAVEJAUyGo3YhlNGJAYknB+tOrV27Noh8rB3XVJzBMGeqJ6IfXmS33XZbOEfkajfIcK3bITUn2GGk4qWCoYvBWxVgRHsR42gnbNhYAsEMI3f69OnFueeeG+Iw5mHEelVxvfD+Ywqt1puiPITL6667Lixqj8HOOlV42ZEGr0G8Bwl6B5SpkAp2iBZ44a1atUpJWkfa9+yzz9aKha0HujyR+ND0m+iyOD8+hgnUCXadNl3ii8a5dvJpKtjxAwI/JLDeHH04DtpFVuOu+swTTzxR3HXXXUGgZ7MZeSMzluDJyw7ViPf80ME4zQ8njEGTJ08O3r5NPeyoi8e1+I3Un+ubGak/lNS3wClMwARMwARMwARMwARMwAR6RaDvgh3iGUJcvDaa1keSQIeodMEFF7QEOhon8Yk0eLZgwJTt0BrDwIC99tprC9bLw+hEkOKPcwxS1oPTFND4udy5nsOgxVuPepaFnGCHhwuCUhPj6+677w7C3rp160I7MdgxdjGoiUOIY6dW8iOOgICGaLd+/fpi1qxZxVlnnRW8+TCsP/roo5BGnjGIpHDneTaxwLjfZ599wlQ63hFGeTq1LhXsqBvvUeWLBd5/Z5xxRjD84Yy3Xq/DmjVrgpigNQ/x/EFMSNdF5BuBi4MJNCXQrmDHOIA3bOopl1439bDjOfKLx5cywQ6v3Y8//nhI01g+gL6ARy5CvgLjBhvZ0N8JEuwQhW699dbQn5hCj7BHn2EcIP+LLroojJXky7jHuMkPD+xISz3bEew8rult5I+M8RrTOEpAhTnjuOLiZ
Q7yOfmuCZiACZiACZiACZiACZjAWCPQd8EO4QjDA0FOBim7oCKsxJ5nL7zwQjA48WgjILxhYCKuyVOtSvjCUMXARJBCYMMbjecUEN9OP/30EIe3Gl5iVcIdz5566qktEYrnq0JOsNPOssRVBbjgQcgf7ebIM6xjh0fc/vvvH3Z0ZbfHWLBTnhLMMKqZbhx72WGsI8zBEqOcReTxuKEcpiVfccUVYedZDMQzzzxzyJRf6hAzl8H/9NNPq+hwpP68X7izoyb16XVQ2fIGKjvCp25KcK/r5vxGNwG+c76npp5wWs+t7Bvs5H7cz6AZC3aMUwjW1JN+mtaT+jDtHtGOtSoR3jSmMPbiUUtQH0qfj98e4xzrZj744IMhH64Ri8ibMZUxpKlgpzp4XIsJDz3nXTT5XtLvY2guvjIBEzABEzABEzABEzABExiLBPou2GF4pp5bgHz11VfDNKsYKoYnYkv8h7GJ9xcGYzxdk+cwTPEuY10lPK4wfHbffffi0UcfzYpxiHpMBWPHWdIyjQzPNDxMEMfknYJxPGXKlJAGEUwGb1zX9Jy6kSdGNR4zeLQhDHJPU9LSZ3RN2T/+8Y9bnmt4zf3qV78KnmqaqsoUVnnLyQDW84iPtIU6IMQxRQ5BFNER/rSBcwkNrINHwBOOzUDgcdhhh4V1rJQnR6a/whUesPn9739f7LzzzgVCZBqIx5gXwzTe1yYwUgm0K9g1bYfEmCqBrCwvpvHjHYeoTt9mHGEMPOqoowqE+zRwj3QIekxpRVRnTU3GCoQzggQ7xD3GmE7+8OJj6n4TAcnjWvqWfG0CJmACJmACJmACJmACJmACzQn0XbBDxEEQQyyqCzJwMU7TP4zEt99+u5XFQw891DJkSYtQh3dJnSccGZCGtBL5eB5DlzXuWDOPaZ/cS730WoVnThC1mL6U1huvvro6sQHEfvvt15q6FmePdw2CHJ6HCJTs+JqWwTXGunZ4RLTjOXaonTdvXtiogjx5Xl6FxCFWIgJQ75wIwPuAS1xe6oUX19XnJjAaCYw0wY4xEwFd/e6QQw4JO7XGu2bnOLOMAOOknku98STYKb6bYxPBzuNa7i35ngmYgAmYgAmYgAmYgAmYgAk0I9B3wa5ZNf6VCm85NolIPT/wAktFL67xPmHttBUrVnTk2YV4xbpsCxYsCF5wCFoExCsWZpe41bQN8jKThyAeccqzaR516agThne8XhbCWp0xn8uX9ZOYypquiRWn/fDDD1tlsXsvYqCDCYwlAv0S7LQ+Gcd2A7tDUy9+QGhnDGFcxDvv7LPPLpYuXTrkWQl2nXj8qf7y0m0i2OmZJkePa00oOY0JmIAJmIAJmIAJmIAJmMB4IjCiBLvxBN5tNQETMAETMAETMAETMAETMAETMAETMAETMIEcAQt2OSq+ZwImYAImYAImYAImYAImYAImYAImYAImYALDRMCC3TCBd7EmYAImYAImYAImYAImYAImYAImYAImYAImkCNgwS5HxfdMwARMwARMwARMwARMwARMwARMwARMwARMYJgIWLAbJvAu1gRMwARMwARMwARMwARMwARMwARMwARMwARyBMaUYPfPf/6zeOWVV8If52moi0/Tp9fskMgOsN98800aNZDr119/PezYunnz5oGUN9YLYVffdCdgrj/55JPiiy++2Kr57NaZpt8qkW+MOgLuV8P7ysy/t/w9rvWWp3MzARMwARMwARMwARMwgeEiMDDBDgEEsavTP0QUDJGqgKB2wgknhD/O01AXn6ZPr++4445im222KZ555pk0aiDXF198cUflyyB+8skng+DXzTEWC999992u81NdyCsOX3/9dfH88893lf+yZcuKzz//PM62df7ll18WJ510UvHzn/+8WL9+fes+3+eBBx5YwDoOa9euLaZPn16cfvrpBc86jB0CnfarlMDf/va3YpdddikYJ3KB+8STri70om/R7+NAX6BPqM91c+SHkdyPInF5Tc875T8ax7VevAPGRcbHXPC4lqPieyZgAiZgAiZgAiZgAiYwOgkMTLCTUYbg1clfE0O3TpCri697haNVsOuWffy+YrFSPOL4Ts9TkUPCWaf58VzVN4O33MqVK4u99tqrmDhxYvHYY4+F169yYUbAo+6mm24qtttuu+KQQw4pXnrppYJnR0rA23PVqlXFrFmziqOPPjoI4iOlbqOlHuof8bddV/ecoMY3PGHChOLCCy/MimLcJ550qViWCta96Fv6htUWCYrd9Ck9yw8juR9FVFY7x074k7+eU526OcbvvhfsVRfyikMv3gE/KDBO5YLHtRwV3zMBEzABEzABEzABEzCB0UlgoILdHnvsUSxevHgrYxXj9fjjjy+q4lNvqRdffLE44IADhvxNnjy52HHHHcMf5+3GL1q0qPItypCLjbvKB3ocKQO13fJ5romBjfecOG/atKl47733Wi2gTIzQuGx4VBmPelheJeTP+VtvvdUSvWTApoathLPzzz+/I6/M+fPnbyXYIW7dddddxeWXX976O+2004qdd965mDlzZriHqLLTTjsVBx100JA0e+65ZxBi4mcfeOABNXGgx08//bRYsGBBMW3atCAkShxo8i4GWtFRUlgn/Upjgdh3e0y/f675LlesWNH29//2228X+++//1ZeouprcR/OvSItHYAHG4L16tWrWx5d3f7okSuvE/7kMxrHNb0DxifGuHb/GA/Tfu5xLfdV+Z4JmIAJmIAJmIAJmIAJjH4CAxXsUkMjxofxVRUfp+Xcgl1KpPy6qWGLIS+vNAShSZMmFZpW141gJyOVPFTGU089FSqsuFSwkGBH3TsJ5Ke26HmJDbEYFwtwTc8l6nVaN9Wn06PeBSIRIveJJ55Y7LDDDm31n07LHovP8R5hCdduQtm3rDxz36Ti0mM7adNny/qO6lfXTvUTuCDcT5kyJYjXLEmguCY/AKT1KrvulD/PNamHxhzaP9zjmt5BOt6VsUnv0+b030m9E49rKS1fm4AJmIAJmIAJmIAJmMDoJjBQwQ7PJcSOnDCCsVEV38SbSYaLjDimB+GNxB/naXzZq5s7d+5W3nl46+26667BsEfISr33uJ46dWqxZs2asmwr7+e8JFJOMEJYOOWUU7IMlT5l1YlhyxpJeJ3df//9od4SiWJjH6MzNR5zjZSRyrO8h2uuuaaYN29eWANLcakBWyY65PLP3csJHnr/8CDkpjWmUxXja3kfdlu3XH3bucc39vDDD4fNMXhODJu8i3bKGQtp+9mvUj6shThnzpyw9mIaxzVrjxEfr5mYS8e93Pdblja9X/Z96juJ+3D6LNdpP2HqOOs9kq/iNMbmno/v9ZP/aBzX9A7S8S5mVnVeJdh5XKsi5zgTMAETMAETMAETMAETGH0EBirYsQ7YfvvtlxW7WEesKh4RrS7IOGTaI+epcZnGl+XXiWBH/VOPrrL8c/dV126n1vG8DDeVI8OWheKpY7tlYJw/8sgjW3khYXQiEr355pvh2G6+PIt4QJ1SA7ZMdFCb6o45wUOMxYc0eKadc845lQIoQigiqd5v
Xd0QJbds2dKa9ltX127jJQJYsNuapN55u99mLr2+G5WS8/LNCfl198gnDbnvN01Tdl32feo7oc/Rllwb6+4xDjAeNBXs+sl/NI5regfpeFf2LtP7tDnt52Ks79PjWkrN1yZgAiZgAiZgAiZgAiYwOgkMVLBLDY0YWc4QieM7OZch09S4rCuDOrKu1BtvvLFVUuIk6GwV2eBGk7pSBgZ1mYeMjEHSxYFrGLB+HLvtYtDn/lgvi7Wv0nWz8FCkzLRsDEPeKdPmqvLVmlpLliwZUi7PMOW2SrCrExCq4uveB/Uv2wQg9qzj/Prrr2/0fln/a/bs2YEVx17tpBm/z/Rc772qf6XPjJfrfvYr1mKUVyver2xgcskll7TuIfLyfcYescSTLp6+SD5p4Nus+rabxKXjgL4T+nLdrt2slZZbP5KxoBPBrmoMHk/jmt5Bk/dXlqaun3tcS3uTr03ABEzABEzABEzABExgdBIYqGCHGIR4kxOLMA6r4jEwc0EGeZlxU3e/ypBMy8OwLBOBquLSfHLXakdVfbo1bCmDjR9SMUpTPTEmd9ttt9aUPpg/99xzxZdfflkp2PE+tbFEmjflEY+Reffdd4em4+n47LPPBpFPBixGZhz0TCxsSBzRUWvJ5dKwY2rZu1I5lLntttsWe++9d9brM/aMYhp0XX7k+9VXX4Xpg3x3TCPkut9BDOsM+X7XYyTm389+FbeXvpn23ZzIrfqQvirwbdZ5f/KN59KoX6Rl6DuhXtpYIu2vWrOSZ+Pvl52ImYrNWEA707aWtUXtrUpPWfQX6pULqnfaHq6V72gZ19QW3p3GsfSIxy/vNZeGsa6un3tcy31FvmcCJmACJmACJmACJmACo4/AQAW7OvGsKj411oRaBuHhhx8e1vWKDVDW+eJ+VZwMPuVXdsTAPe+88wp2C2XtszRQvyaCTvqcrtWOqvpQRreGrUSEmLXqjZiKYCfD+bXXXgttWr58ea1gJ0M0zld1xZPu4IMPbk17XbduXbHvvvsWCxcubK2/VibYlb13uOHtc8QRRwwRFsSTdyQhUvd0lLj4l7/8pbj33nuLRx99dCsRM/6OOOdbUlrWImONv7Lw/vvvF9ddd13BcRBB7OsM+UHUZaSV0c9+FbeV7zTtu+pr6k+kV32qvmvS0R/UL+Ny4nPWl0Rw5nuMA98m9yS+KU7fCfVRPdL+qnpddtllrfZo7GNHZfoy7UzbqjLSo8qpSk+ZGivS57lWvVU3peFa+Yp13B7xG0njmtqSjndqE0fGLv6d4R2kgXdaNv54XEtp+doETMAETMAETMAETMAERjeBgQp27Gi5ePHirDhy/PHHhx0vy+JT41PYqwzCTuOUd3xUXmWiCMajDMT4uabnyl8GaO65Xhm2cRswIPEk4yivNjzhWIftqquuKqZNmxa852QQx+IDRqfyivOh7sorFgdkgCLUIdgh3JUZsHo+NdJjLvJmUx3juKpzlampinh3ajOU3JTFGTNmhA048ITB60VtripjkHFqz0ir1yAZlJXVr36lvhgLRN2cp995E8EO4QbBjnXlmgR9J3GfjIUj6qB6cB+RHYHunXfeCWMz5TThGdelSXqxjMeWOA/VW3VTHNcaL3k2/v55ZiSOa2pLzF3t0VE/cPADUTtT6pW3xzWR9NEETMAETMAETMAETMAERjeBgQp2sUGVYsP4qopP0+u6yiDsNE55x0cJSMcdd1zYUCCO45z6j3bBDg8NxC92cP3oo4+KyZMnt7ziMIhTLxiMTr2z2ECGh3jxnDx0MEDxijvmmGOC9wj3ZWSmBqyeT430lDsioMS/NK7sWmVSN7yRfvnLXwYvPab76ZtRuRs2bCgOO+yw4oILLii+/fbbwENtLst/0PfVnpFWr0FzyJWn9ylhJ5eGd51+23E68dU3QRw7McdTGZmq2OkaduST7uxMf6gbT+Q5Rn9tEtQOvntxifsd7VMbEefUr26++ebi0EMPLTZu3Nh6ropnXBeVU5W+E/6UwXPKlzbF3z9tlWA3ksY1vYOYe8yL825/iPC4lhL1tQmYgAmYgAmYgAmYgAmMTgIDFex22mmnlidTbOxyjsFbFU+aOXPmFOvXrx9Cusog7DRuSAH/d8FGE2w4Ueb1gPFYZ2Dn8tU91TU1+mNOMEJYkAdFHMe51j6S0a28U8M29nTEcNxnn32CcMbacmeccUZ4R7fffnsQ7BDuCBiBqajBszKSMUTJh3tMIcVTknJ4jnD11VcXiJ1MQ2Xa7erVq8P9MgNWgh1tTtsZX5et4xWn0a7BocBoip3qxjfFdD+8OPUexPDPf/5zceKJJxYff/xxeDxus/Ib7qMY6l0Md31GUvl6n/3oV3E74z6m+7k+o/ro+1La9Mh3llufLv6utVZdXR/Rphb6TqiX6kEemv6Nl7Pqxc619F/iEOsQ7Qh6TkJZWu/0Wun7wT9mTptGw7imd5Bbny5+t03+PUxFXuXtcS39Cn1tAiZgAiZgAiZgAiZgAqOTwEAFu+22267Yb7/9ahf4jxf71/nEiRNb4hCoWedHa4vVrVNXtYZdHFc27ZbymMaJYIUhnQsYj70Q7Cij2z8Z3apnatim+cf1Rlhj/STEyfnz5xcffvhhELKWLl1aK9iRT5q3jEfW20KQZYMHpqDiucYaeQh3PJdylWCX5tfJdSouyLDtJC+eGWnCmNoz0uql7284jxKMOn3X8XNpv4rbFfcx3efb53n1Ae6rPlV5kY7+EJfdzbnK13fCteqR5qt6aR01psUyZjMt9uWXXw79lv6U9im1OT2WlZOW2+RadVMZMXOxjvMZieOa3kFcz07PUx7d5j3Sxg+1Z6TVS9+fjyZgAiZgAiZgAiZgAibQbwIDE+zwBrjhhhuKst1emQbEumkEjLzHHnus+Oyzz1rtxziJ/8e9lwatDKbUAFLhTJVkGieCkzzDFKcjz8YGIvdpD9MomwQZtlWGMGVQVxngab4ycNJ2cK188ZzB0411kghMF2MtLI4EpsJRxpQpU4pNmzaFsvCIw8MmLZt3oHfCDpLky5Gghe/ZvZGAODdhwoTgBYPHD3VVvlWCXdqWkFn0H208gYdc1UYQ0SPBSxNvTXkexXF6D2XlImSULfoe5zPIc713vYtBlj3Sy9L71Pefqy/vOv2243Tim34TvRqDyCcN3EvHkzQN1wjheOKxMUxdUDsYPxhvZ86cWSxatKj1GD9Y6EcLTSNljbwFCxa0BD7EdlhW8WxlGAmUVek75c9zyne0jGt6B7l3HnOTYMo6ovp3MY7PneMp7HEtR8b3TMAETMAETMAETMAETGB0EhiIYIfxd/LJJxe33HJLWM/sueeeK1jIHy8qwn333RfEG4wZwsqVK4sdd9wxeHjJWME4a0eQYH202bNnh0XZt99++4IpnsorFNLGf9ghFMO1ShSifqmBjVjAdEqmsJYJlaoGdUN84q+snnWG7T/+8Y8gxKVlxYatyuNIOgz0O++8s7jiiivCtE/Oaeu1114bkiIIwB3DPhU1MDpz74R245HDdNLf/va3xdq1a4unnnoqvFOm7VJ
PDOxJkyaFfLsR7GDFRhFMh6PMNPz1r38dIvwST1vkuZkeWbePbw+PzjQuvqb+IyVIBMi9i5FSx+GqRz/7lbx8NaU0PV5//fWhz3BM4+Jr8klDU8FO7z4n7DC+vvnmm8V3330XslfaVPCnP+LxSp2uvPLKINpr7UY8Yj/44IMwLrEj84033tiWYNdP/qNxXNM7qBPsJDTzw4l+XNE3wtIFq1atCiKq7nH0uBbT8LkJmIAJmIAJmIAJmIAJjH4CAxHsEDfwAkH8IeDFhSgkw1G7EMroxIDEk4N1jxB7tGlBU0ECI5EprAgveBzcdttt4RzRqN0gw5W8EBLLQk6wk5cKhi5CXLehTrAry1+GLV5unLMmFe1BgOMP4YxNFTAiuc+7Yhoc02Fhh+ecPO/0zihLgh3GI+/upz/9aRC6lO+uu+4aNnNAkEVQI1+8FKnH008/HdbIY6ptN4Id9cC7iLz1/YiDFm9P+VcJLYizTJNmPa9YVEnP5TmosuIj7/rZZ59tCdJxXD/OJQI07R/9qMNozrPTflXXZvpKKnLXPaP4poIdXqX8kJATrLWLrMZdfSdPPPFEwbqOJ510Uuj76q/0/alTp4YfN4jjhw7GaabGs+EEYvaSJUvaEuzUnqpjp/xH47imd1An2MFLyzBwjIN2kWW9UcQ7BY9rIuGjCZiACZiACZiACZiACYwNAn0X7BDPEFK04yDYMCxYJ00CC4IcgpEEOtIw9RRxhzR4gjH1CeFoy5YtleQxYPEOY708jE6ms/LHOQbpPffc0/I4qcyoKFrPYdDirUc9ywLGY+php3XYNG2r7Nmm97s1bP/+978XP/nJT4qzzz47eL8hlMrgQ1TF6IcTa1VxzvpVvDc82HLigwQ7RFU8KHkWDz289vAQIfCuyYedVvF2w2OEPLkHl1deeaUtwY7vCa7xVGNNWYa/pvRRtgxbbRSiKWPx4u7pOW3lO6xbyJ/n0s0sKFMbd/DNwAPRsNeBaccIK/L4wwsKYSVdIzKe+tzrOoyl/NrtV4wDfLepiJteN/Ww4znyi8eXMsGOfqUNUPQOWD6A98+3y48dCnfffXeYho5ATpBYRF++9dZbwzc0d+7cMMWbvkLfIv+LLroojJXky7jHuMnu0Uxhp570216NadSrXf5qH89Rj9E0rukdpIIdXpCIovKGpI3aqZuxMt5sKd0d2OOavggfTcAETMAETMAETMAETGBsEei7YCejA0FOBimiG2JC7Pn0wgsvBIMTjzYCwhvGDQaMPNWqjEQMVQxMvIwQS84999zWumzkh6hz+umnh7jp06eHKUWxcZS+Vso89dRTQ3qEF56vChiPqWCnnWWJ60Xo1rCViJbWBUN9xYoVYZdZiQF45+y+++7BYw4xr0qw09TmNF+uEZfgiHBHYN24Qw45JIhL7CRbZsBK7BQ7+D/44IPh2dx3QP0QFhAG9Q1p3Ty8BAkybJmmS76pWJdek+43v/nNkHTxvZxgxzfOt843yPRfCaKhAj36j5hRRtWfPe6aAW+3X9GP+Aar2Lcbl37TsWDHOEU/op5843zrcaA+TLtHtEMwpz/zHSJU4xXHWpQEfTfp83Fe9LNLL7009DXy4Zq17sibMRVhj7qm9Y3zaPe8Xf7Kn+eq6jESxzW9Awl28OSdIbrTnjSwfiDs43+D5O2MlzLB41pKzdcmYAImYAImYAImYAImMDYI9F2wwzBhumK6KPqrr74aPApijBieCDXxH8YmU7IwWtjBNA4YZBgrrKuEwYORjMj06KOPDvFU0DOIeogsrE9GWrzIZs2aFTxMmOIo7xSMYzzBSINniQxe5ZM7UjfSY3ThMYNRhTDIPU1Jyz3Xzr1uDFu8xvDASUWpptcIArQlNvZ5t3hBXnjhhR3ni6jF9yEDVjxgjtiAqHvssccGgY/y2b32pptu2koIQ6DAG4g0Rx11VBBkmVbNRhfyMFLeTI3eZ599gich9/juqMd//dd/tb4BvkE2GoGPvgumurImIR5IPFMWSI8hrufK0vn+yCDQab+qqz19JTZy3i0AACAASURBVO0zdc8oHpGZb5e1PhmnyIcxkG87t1kK90iHoMd6nYg5/IAg71LylVhEX27a79N08kCtEsrUhqbHTvnz3Ggb1/QjDlPumXrPO+Xd8u8Xa36mQWIsac4666wwrjEuxR7resbjmkj4aAImYAImYAImYAImYAJjg0DfBTuEC7y1qgQOoZSBi3GS/mGYMRVI4aGHHmoZsqRFqMNToc4TjudJI68GlYOhi8HD9E6MIe6nXnoqO3dE5GOKovLTEa++JnXK5Zne68awVX26PaaCXbf56flUsLv33ntbLHk3iGmslVflFYlXJgKsjGDyRpyIvz1N/U3fC+tEIRzKawX21Cn1aOIb4V4s5KXvydeji0Cn/aqulRrP4j5T9wzxjJl4iqpv4JGKp1W8a3YuH5YRYJzUc+m3K8FO8d0cR4pg100b4mfjd0S/j+O6OU/HNcYn5cePD1zzbw4/PpWF2Ntbz6ZLNHhcK6Pn+yZgAiZgAiZgAiZgAiYwegn0XbBrB42m9qReHQ888MBWohciGN4nLLzNdM5OvJkQf9atWxeMYbzgZDThrcLC7FXiUK5d8qySh2DVjq+55+vudSos8Fy3BnZOfMAY7XbapUSE1LBFtDjzzDPD1LxYcKtjxDvEe5M2s+GIpvjyHF54f/zjH8MU3VRExaPv17/+dfiWVAZ1wHOSOiqQP16aeO/97//+r277OIoJdNqv6pqMiIKIz7HdwHRx6lUn5qT58l3jncc6lWzoojGNdOprsTiVPl93TV9kLOl2PInL6ZQ/z3Vbj0GPa7xPfiyg3Hb+zeLfIn4s4IcLdluPx0SPa/HX5HMTMAETMAETMAETMAETGDsERpRgN3awuiUjlQACRrtC7Ehti+tlAiZgAhDwuObvwARMwARMwARMwARMwATGHgELdmPvnbpFJmACJmACJmACJmACJmACJmACJmACJmACo5iABbtR/PJcdRMwARMwARMwARMwARMwARMwARMwARMwgbFHwILd2HunbpEJmIAJmIAJmIAJmIAJmIAJmIAJmIAJmMAoJjCmBDsW337llVfCH+dpqItP06fXLPTNhhLffPNNGjWQa8pPN0voVcEsgP73v/+9rYXQe1W28zEBEzABEzABEzABEzABEzABEzABEzABE/iBwMAEuy+++CKIXdpBtd0jO3bW7apXt4NhXfwPWPJn7GS6zTbbhB3+8in6d5cdHvfYY4+wc+rXX3/d04LYhGHWrFnFjjvuOCxt62ljnJkJtEng9ddfL5588sli8+bNbT7p5L0g4B8iekHReZiACZiACZiACZiACZiACYw1AgMT7C6++OIgdiF4dfK3yy67FIhWVaFOkKuLr8qbuOEU7BDpzjzzzGLbbbctHn744bqqth2/du3aYq+99iqmT59efPbZZ20/P8gH8HBctWpVEBmPPvroIAQPsnyXNbYIaGx65plnumoY4xPjFONELnC/yTjGs++++24QERESO/1DiIzD559/Xixbtqzj/OJ64M
mc82KOy2ty7h8imlByGhMwARMwARMwARMwARMwgfFIYKCCHR5iixcvzhqMxx9/fPAgK4vH0MTgVHjxxReLAw44YMjf5MmTg5cYnmKctxu/aNEiZZ89DqdgR4UkCPzsZz8bwiJb2eimhMpOhNLcM90KG1HVGp9++umnxYIFC4pp06YV2223XUv0PfDAAy3YNabohDkCnQh2OUGN8WHChAnFhRdemB3juE886WLxi3Pyi4PGmlz/a3qPdsVB40fT56vSnXDCCQXjSrfBP0T8QBCey5cvL2bOnFnMmDGjJ3x/yN1nJmACJmACJmACJmACJmACo43AQAW7KnEF47IqPgU71gQ7puOlRnx6/dhjjxVXXHFF8dBDD1WmTb1fJNilgun9999fLFy4sHj88cez+f3lL38p7r333uLRRx8dEj8cUwcRCSUg0I4TTzyx2GGHHdr6ZtJvyNcmAIFOBLteCGr6njmSXxy43nnnnYsVK1a0vZTA22+/Xey///6hXXGeEuzqBHet9YmHHtPlV69eXWgavsaSXgl21E/1Go8/RGzYsKGYN29ecfjhhwfvaX0TveQbfwM+NwETMAETMAETMAETMAETGD0EBirY7bTTTsH75PLLLy/Sv4MOOqioin/ggQdqqabG5Pfff1/gmcUf52l8WYZz587dyjsPb71dd901iEaTJk3Kxk+dOrVYs2ZNWbaV92NBSkZbp8fU2FO7U0G0TiRFNGg6ha+ycT2IhCtTgVnLkCAjP21TD4pyFuOMQCeCXQ6RvslUfFPadvpTO2mVv46sD0q/oF1xUP3qBDuNFzy/adOmYsqUKWG8Zg1RxaVjTFxOfO4fImIaW5/znjXOI7Liac51U75b5+g7JmACJmACJmACJmACJmACY4XAQAU7pjLut99+WbFr4sSJYapjWTwiWl1gbbO77ror/HGeGpdpfFl+nQh21L8bcYu61W3Ecf755wdjbsmSJZVpJVCqfeKAEf/BBx+0niU/jEQ8cnJlz58/fysvn7LNPxBEt2zZEoRRldvPo8QHC3b9pDz681afT38giK/5sQCR5JRTTtnqh4Q4Xd2PBuvXry/mzJlTPP/881lw3CeedHVhpAh21HPlypXFSSedFMYIjSVNBSX/EFH9pl966aWw0Y92/xavpnyrc3esCZiACZiACZiACZiACZjAaCYwUMGuSlyp8/bqBHK7xmVdGdSRaWpvvPHGVkmJ60aw2yrDzA3KQFio85BJHxUH+F999dUhD3l1tHvMtZEpdLNnzw75cuzFYvRpG9JrC3YpEV/nCOjbb/c7z6Wn/8UhNy0/XTezyTX5pKHfgp3Gklw7q+498sgjwfurqaDkHyLSN1t9bcGumo9jTcAETMAETMAETMAETGA8ERioYFflzVXn7fXFF19k30u3BnlTw5PCq0S5qrhsxTu4KSO7G8GuHx52X331VfDAwdDHE4frfgcLdv0mPDby1/hQ1c/r+pW+NdLF4a233mp55OGlxy7Ll1xySeseHnv0idhzj3jSkV7ee+SThniqZJWAVhWX1lftYPxgPM151eoe4zF/utYR711YVvFM21J3Xce/7Hm927H0Q4QFu7K37fsmYAImYAImYAImYAImMP4IDFSwqzIu6+JS41OvSkYbi3azxlm8UQPX3K+Ka2p44jV23nnnFXvuuedWOzpSF+qX8z5TPXtx7IVhi+GtQH5VXo/tePm8//77xXXXXVdwHESQ+FBV/0HUw2WMbAIaH6r6eV2/0rdGurJAXFqGxJdYYFd9qvKiDPoem6qcc845LWFPAp+ORx99dDYNu9GyHmhahtpBfbSxRDxecs5GEwSejcX3VatWhfU5mbpJO9O2lnFpcp+yGP9jTk2eE0vGgLHyQ4S+mV7ybcLSaUzABEzABEzABEzABEzABEYegYEKdukupbGxyGLbVfEyJFOEMtpyBk6ncWkZXCuvMoEIo3M0CnZV6wqyyUa/25Rj3eSexIey99EkD6cZ+wTUb3Pjg1pfJxjpWyNdWSAuLUPiSyxEqT5VeVFGE7GcXZ633XbbrdbMY0dX1stLx0y1g/qoHukPJarXZZdd1mqPfqw47bTTwqYvtDNtaxmXJvcps1vBbqz8EKFvppd8m7wDpzEBEzABEzABEzABEzABExh5BAYq2FWJKxhtVfFl6GR45gycTuNyZWEQUr/jjjsubK6QpqH+3YpbixYtym7IoXWw2NgCw7Zsl1rS5XaqFYeUL3WOd+ZNPXa47rZNKadeXUt8SNvUq/ydz9ggoG8/Nz6ohXWCkb410inomVTw6vQ6zpsymgh2iHIIdqwr1ySoHbFgRzkK1EH14P7BBx8cBLp33nkn/JhCOU14Kr+mR7GMhc0mz6ou6RhAfqP1hwgLdk3evNOYgAmYgAmYgAmYgAmYwPggMFDBLhaHNK1LR9Z0qoonXW6HRRltOYO807jcq2ejCTacYFpsblMFjMRuxa1+C3ap2Mg6Vuz6+t1334UmpyIB/Mp2hc0xGuQ9iQ+psT7IOriskU+gagxQ7esEI31rpFNgx1iNXRy7WcOO59MdaNO+qHLjI7s777bbbsW8efPi26XnakcTwQ5xbt999y3WrVtX3HzzzcWhhx5abNy4sSPBrt/jWjoG8J7if0tG0w8RFuxKP19HmIAJmIAJmIAJmIAJmMC4IzBQwa7K60FeZGVHvMtiw+zdd98N69U1Waeuag27OC6dQhZ/DQsXLgzebbFHShzfC8Euzi93Xics5J7hXipacK1F5OPj/Pnzgyi5YsWKbDw7Po6UIPEh/iZGSt1cj5FDQN9+uiFEKrbhGRdvDhHHs44c68nFgl3aQuLSHw0kvnBUUH2q8iIt40zdGnZaqy7ewCKut861qYX6TCzYkYeWJmBZAtWLnWtZooA4xDpEO4Lqn7ZV7csd+y3YjaUfIvTNtMM3x9z3TMAETMAETMAETMAETMAERj+BgQl2eJDccMMNYXfCHDZ2Fv3+++9DFEbhY489Vnz22WetpBiSsTiDQdvp9LOy52Sstgr9vxMWWj/mmGOC18bq1avT6HDNs6mHHe359ttv/z97b/prS1Xt7/sP+MYXJL4wJCbEGGKMIUSDIUggavCELigQhSugkUZaQRAhHkWU/iDS2IEi0nwRpbkIKIo0gjTKAQEbEBCQznO5Clf03qP31i/P/PnZjD32rFq11l5r7cVen5nsXau6WXM+VTXGmGOOOat6/CgbuQZljw6APvloOK+iA0dlN+x1+5Rt1GPkfIjPxKh5+bzVS0AOprZ3fpjtbfIBeuzLThY5X+J7o/J05UV+o76jtfro+npnWFc58vEqFx0ifGCHYbHbbrttw7DYe++9t3nuuedKPXNdl/MEcU3KoXL2zUt1UFlYjx0Q+v1q6oiAASxUp74sfJwJmIAJmIAJmIAJmIAJmMDqIzAVh91LL73UHHTQQc35559fhpPecccdzWGHHVYaVyC98sory9AuGpSke+65p9l0000bGlpy4tGoG8Y5w7DVU045pczxtMkmmzQXXXTRQl7D3kai+JgriknXmdC9lihfdtjRg
Nx///1L5A7DT5ebRm3YqvF9xhlnlCLgDGB4b46ka2vYchzHD9ugXm59u86X82GYZ6IrP+9bnQSyU6dWy0HvlZ41jotpXE418smJbVme5GNY58MTROLdddddtd2LtqkevMd0kBx33HEN0W9KRBgryhiZvddeexW5d+mlly44+I499tiZcditxo4IO+z0NHppAiZgAiZgAiZgAiZgAiYwFYcdw6toVNK4JDE/Eg4wOYA0qfnpp59enGr/+Mc/GhqGDGN7/PHHi5OP6LC+zhmcfAxhxenHvHcXXnhh+X3zzTcPfceJKNlpp53K+TgS21LNYadG72677db85S9/aTu19/ZBjoW2jDQ5vfi3OQPatseGfts1qN/tt9++4IRtO25c21Wmvs/EuK7rfF5dBJAFPJv8yfmfazDovUIeMZdjdrprWL6GlOblueeeW6KlWOZ9cZ18cmp7F/Nxeg8kO+N+HFq/+93vFuao1LGSuzqW+iHnKNNpp51W5LPk3tZbb9089dRThR9y7Ktf/erMOOxWY0eEHXZ6Kr00ARMwARMwARMwARMwAROYuMOORjKNSU1gDnI1tNTIJBru+OOPX3DQcQxDT5k4nGNoKDNEKM9VVLt9RMCdffbZ5SuBBx54YMNwVv74jQPv8ssvX2jA1s6P23QeQ5SI1qt9bELH1xx2igAZ1/CmQY4FlSUvmZR+s802a+6///6yq80Z0LZdH9zIDX1dh7ntDj/88OKcgDPRO+NOjz76aPkCruY4xJGA0zfPi8gzgnPFyQT6Ehj1vRqUv5wvbe9N1/lt72I+B3lH5C/zzdHxEVN21Mthd9NNNzWXXHJJkYl8cVrDYpGPfGWaaGTeYyKTeceIvOWDE9tvv31z/fXXz4zDLtevjVnbdvHouj/T7ojQMzMunRGfB/82ARMwARMwARMwARMwARN4dRGYuMPumWeeKQ09HHJyeL388svF+RYjz+68887mhBNOKJEeIKQhSoOKL5gqUq2rEUOUCPPeEXFFA/SYY44p5+l24Hw79NBDy7699967eeCBBzodd1zz4IMPXnBCcX5Xqjns5Ohi3zjSKI4FnFe77LJLE1m3NWDjdpxwOBz/9Kc/la9Q0nCngVxLcrjCnQn6J/FxCjWu5VxoWzrirnaHvK2LwLDvFc/7+vXrO6PmiFbrG2HHseQn+UhZ47sYy84Q3xdeeCFuKnKP9xP5iRxUuuyyyxY56vUO4RS64IILinPurLPOKu81coLOFfJfu3Zt6dxAntJRgSOP4bF8kZZyIoe7ZLGu33c5LH/luxo6Iog+VycESzlQYY6DVPsYvjyJjhCx9NIETMAETMAETMAETMAETGD2CEzcYUfDszbH0n333VeiNiISGos4ieIfDiMiPGiQag42nUMD89lnny3DtIi4womz5ZZbNjfeeGPVGUdjlsgSvjjLsUSlnHzyyaXBumHDhoXGLtFcOLk4hoYqZRiUKBvH0/ikAc6wXxyDbNNQ1LY81JDm2HH9qUGt+ff0lUfK0OYMiNuJxiMqT+WBRxcH2NLojw6Dtvp6uwnMEoFhHUaaF0/vxjiWel/FJb6LdFogkygnjpwcEUZ5+MItMvLiiy8ujjecf0wjgNNH763kTD5f12RJx8TnP//55rrrriv5sI6ziLzpBOEdnwWH3WrpiFBE3aBnKD8f8Z75twmYgAmYgAmYgAmYgAmYwOokMHGHHQ0rIrNoVA5KXY0X5rN75JFHFrK44YYbisNNDR0cdTRWB0XCkQHHcKycfORBQ5g57pgzj+G7bMtRegsXr/ygQU00hMqjJVF9g8qE05G59k466aSx/eGYhP2+++67aKgxRed+cD2uG1N0EhBheNtttxXnIx+d6HP/Yl7+bQKvFgLDOuz61kvyrMtB1pYX827iMOfjPHQsIE9wmu25557Nww8/vOQ0tnEccowhrbfcckv5aIW+DM0Jctjh3BtV1hDFhyzu40DS9SQLx7HUdd0RseQR8AYTMAETMAETMAETMAETMIFVRmDiDrtheLU5rq699tolTi+cYDRmmTsNh9IokV1Erjz99NMNX0EkCk6T0tP4ZZ4n9g+TFGWmCMGuie6HyXc5x9Jovuaaaxbq1pVXdNh1Hed9JrCaCMyaww5HOx+6kYNrzZo1RUa9+OKLndiZ9xNnms7L0XjjdKDJcdZVoDZ5PqqzkPPcEdFF3PtMwARMwARMwARMwARMwARWE4GZctitJrCuiwmYwKuDwKQcdpqfjOWwCSc75SLiVx0JffKgI4PovKOOOqq59dZbF50rh90oEX+6toYD93HY6ZxJLN0RMQmqztMETMAETMAETMAETMAETGCWCNhhN0t3w2UxARMwARMwARMwARMwARMwARMwARMwAROYewJ22M39I2AAJmACJmACJmACJmACJmACJmACJmACJmACs0TADrtZuhsuiwmYgAmYgAmYgAmYgAmYgAmYgAmYgAmYwNwTsMNu7h8BAzABEzABEzABEzABEzABEzABEzABEzABE5glAnbYzdLdcFlMwARMwARMwARMwARMwARMwARMwARMwATmnoAddnP/CBiACZiACZiACZiACZiACZiACZiACZiACZjALBGww26W7obLYgImYAImYAImYAImYAImYAImYAImYAImMPcE7LCb+0fAAEzABEzABEzABEzABEzABEzABEzABEzABGaJgB12s3Q3XBYTMAETMAETMAETMAETMAETMAETMAETMIG5J2CH3dw/AgZgAiZgAiZgAiZgAiZgAiZgAiZgAiZgAiYwSwTssJulu+GymIAJmIAJmIAJmIAJmIAJmIAJmIAJmIAJzD0BO+zm/hEwABMwARMwARMwARMwARMwARMwARMwARMwgVkiYIfdLN0Nl8UETMAETMAETMAETMAETMAETMAETMAETGDuCdhhN/ePgAGYgAmYgAmYgAmYgAmYgAmYgAmYgAmYgAnMEgE77GbpbrgsJmACJmACJmACJmACJmACJmACJmACJmACc0/ADru5fwQMwARMwARMwARMwARMwARMwARMwARMwARMYJYI2GE3S3fDZTEBEzABEzABEzABEzABEzABEzABEzABE5h7AnbYzf0jYAAmYAImYAImYAImYAImYAImYAImYAImYAKzRMAOu1m6Gy6LCZiACZiACZiACZiACZiACZiACZiACZjA3BOww27uHwEDMAETMAETMAETMAETMAETMAETMAETMAETmCUCdtjN0t1wWUzABEzABEzABEzABEzABEzABEzABEzABOaegB12c/8IGIAJmIAJmIAJmIAJmIAJmIAJmIAJmIAJmMAsEbDDbpbuhstiAiZgAiZgAiZgAiZgAiZgAiZgAiZgAiYw9wTssJv7R8AATMAETMAETMAETMAETMAETMAETMAETMAEZomAHXazdDdcFhMwARMwARMwARMwARMwARMwARMwARMwgbknYIfd3D8CBmACJmACJmACJmACJmACJmACJmACJmACJjBLBOywm6W74bKYgAmYgAmYgAmYgAmYgAmYgAmYgAmYgAnMPQE77Ob+
ETAAEzABEzABEzABEzABEzABEzABEzABEzCBWSJgh90s3Q2XxQRMwARMwARMwARMwARMwARMwARMwARMYO4J2GE394+AAZiACZiACZiACZiACZiACZiACZiACZiACcwSATvsZuluuCwmYAImYAImYAImYAImYAImYAImYAImYAJzT8AOu7l/BAzABEzABEzABEzABEzABEzABEzABEzABExglgjYYTdLd8NlMQETMAETMAETMAETMAETMAETMAETMAETmHsCdtjN/SNgACZgAiZgAiZgAiZgAiZgAiZgAiZgAiZgArNEwA67WbobLosJmIAJmIAJmIAJmIAJmIAJmIAJmIAJmMDcE7DDbu4fAQMwARMwARMwARMwARMwARMwARMwARMwAROYJQJ22M3S3XBZTMAETMAETMAETMAETMAETMAETMAETMAE5p6AHXZz/wgYgAmYgAmYgAmYgAmYgAmYgAmYgAmYgAmYwCwRsMNulu6Gy2ICJmACJmACJmACJmACJmACJmACJmACJjD3BOywm/tHwABMwARMwARMwARMwARMwARMwARMwARMwARmiYAddrN0N1wWEzABEzABEzABEzABEzABEzABEzABEzCBuSdgh93cPwIGYAImYAImYAImYAImYAImYAImYAImYAImMEsE7LCbpbvhspiACZiACZiACZiACZiACZiACZiACZiACcw9ATvs5v4RMAATMAETMAETMAETMAETMAETMAETMAETMIFZImCH3SzdDZfFBEzABEzABEzABEzABEzABEzABEzABExg7gnYYTf3j4ABmIAJmIAJmIAJmIAJmIAJmIAJmIAJmIAJzBIBO+xm6W64LCZgAiZgAiZgAiZgAiZgAiZgAiZgAiZgAnNPwA67uX8EDMAETMAETMAETMAETMAETMAETMAETMAETGCWCNhhN0t3w2UxARMwARMwARMwARMwARMwARMwARMwAROYewJ22M39I2AAJmACJmACJmACJmACJmACJmACJmACJmACs0TADrtZuhsuiwmYgAmYgAmYgAmYgAmYgAmYgAmYgAmYwNwTsMNu7h8BAzABEzABEzABEzABEzABEzABEzABEzABE5glAnbYzdLdcFlMwARMwARMwARMwARMwARMwARMwARMwATmnoAddnP/CBiACZiACZiACZiACZiACZiACZiACZiACZjALBGww26W7obLYgImYAImYAImYAImYAImYAImYAImYAImMPcE7LCb+0fAAEzABEzABEzABEzABEzABEzABEzABEzABGaJgB12s3Q3XBYTMAETMAETMAETMAETMAETMAETMAETMIG5J2CH3dw/AgZgAiZgAiZgAiZgAiZgAiZgAiZgAiZgAiYwSwTssJulu+GymIAJmIAJmIAJmIAJmIAJmIAJmIAJmIAJzD0BO+zm/hEwABMwARMwARMwARMwARMwARMwARMwARMwgVkiYIfdLN0Nl8UETMAETMAETMAETMAETMAETMAETMAETGDuCdhhN/ePgAGYgAmYgAmYgAmYgAmYgAmYgAmYgAmYgAnMEgE77GbpbrgsJmACJmACJmACJmACJmACJmACJmACJmACc0/ADru5fwQMwARMwARMwARMwARMwARMwARMwARMwARMYJYI2GE3S3fDZTEBEzABEzABEzABEzABEzABEzABEzABE5h7AnbYzf0jYAAmYAImYAImYAImYAImYAImYAImYAImYAKzRMAOu1m6Gy6LCZiACZiACZiACZiACZiACZiACZiACZjA3BOww27uHwEDMAETMAETMAETMAETMAETMAETMAETMAETmCUCdtjN0t1wWUzABEzABEzABEzABEzABEzABEzABEzABOaegB12c/8IGIAJmIAJmIAJmIAJmIAJmIAJmIAJmIAJmMAsEbDDbpbuhstiAiZgAiZgAiZgAiZgAiZgAiZgAiZgAiYw9wTssJv7R8AATMAETMAETMAETMAETMAETMAETMAETMAEZomAHXazdDdcFhMwARMwARMwARMwARMwARMwARMwARMwgbknYIfd3D8CBmACJmACJmACJmACJmACJmACJmACJmACJjBLBOywm6W74bKYgAmYgAmYgAmYgAmYgAmYgAmYgAmYgAnMPQE77Ob+ETAAEzABEzABEzABEzABEzABEzABEzABEzCBWSJgh90s3Q2XxQRMwARMwARMwARMwARMwARMwARMwARMYO4JrCqH3X//9383P//5z5tf//rX1Rs7aH/1pH9t/L//+7/mL3/5S/Of//mfzf/+7/92Hbqsa6JQgAAAIABJREFUfeT9j3/8ozOPv/3tb83//M//dB7TtvPvf/97Q126EtffuHFj1yHeZwImYAIm0DTNP//5zxXngN5AN6EbRk3KY1TdMup1fZ4JmIAJSP4sR4aJInn8x3/8x8h2svKpLZGzL730Um3XsrZN0vbH5sf2H5Sw+we1Pwbl4f0mYAImYALjJzA1h91//dd/FQWKEh3lDyU5SJGQ73ve857mc5/7XJXUoP3Vk/61EQPgYx/7WMmffCaVHnzwwWbLLbdsrr766uolaBx+6lOfanbbbbfiQKwe1LIRhu9///ub448/vtUh9/jjjzfvfve7m5NOOmkg75bLeLMJmIAJ9CawYcOG5qc//emy/9avXz+U82w5+kCVo+G29957N2vXrq3K1EsvvbR57Wtf23ziE5+o7lc+y12qLt/5zndGzkp5/OxnP1uUhzqr2N/1R4fWoM6gRRl7xQRMwAT+RUDyZzkyTDDJ4zWveU2TZZn2j7p84IEHmje96U3N6aefPnZZN0nbH7t+m222ab72ta+1Vv3uu+9utthii+biiy8ee91aL+odJmACJmACvQhMzWGHEw0FOurfW9/61ua3v/1tZ6Wk8F+tDjuccTjTtt122+aPf/xjta5PP/108653vav5xje+Ud3ftfGqq65qNttss+aee+5pPYwGF0qdRibHz0r605/+1Bx44IFNX2PumWeeaU477bTiYH3961/fbL311s0hhxzS/OEPf1hSJep83333NR/5yEeat73tbeXviCOOKJGatQYoESiXXHJJcWri2Mx/5513XoOD2skETGAwARpVo+qFeB4dKsNEZwzSF4NL3jR33XVXg3ypySUivd/+9rc373jHO5pNN920+clPftInyyXHDJI3yJ/PfOYzzZvf/ObmQx/60BJ5lOUTsos8FdEiJ9wjjzzSbLfdds3111+/4JhTNAlsI+va72H5L6nomDY88cQTzcknn9zssMMOhTsyHdmPjK/J83xZuFx77bWlUww2XYmo/SuuuKLZcccdmze+8Y2FH3oHfeVkAvNMoE2uf/azny2dzZI7Wkr+YH9qW1wOI9sn5bDDRv/CF77Q9GmPDHPvJ2n7I/POPPPM5p3vfGeD464tERBx7LHHFpnZ1UZoO3/a2//617+W4IwTTjhh4KWRx1/84hdLOwA5/YEPfKDoOeR3LaEDbr755ubDH/5waQ/gpN1jjz2a73//+03bOeRjfVCj6W0mYALjIDB
Vhx2Nl2uuuaYaSfHRj360NG7a9t92222LwtBxJhEJFv9wdL3uda8rPWBxu34P2v+LX/yilek0IuxwSGIIYLDEhoWMj1ojqbat1nh8/vnnS6PilFNOWYhCwUggKiVHt/z7v/97cXTR6Mn7ho1iaQXacwdK+YILLij3lLrW6hazgtv3vve9YnRwPIqW+0+jjfXc44qR8uUvf7k4KHFS4tijgc2xPEtEyMR7wbX0LHBM7Y8oTwxNJxMwgcEE1LCLjqLYUMPYpjFHYyNu1286N/bZZ58SAT1Mo47zuyKyB5UcuUCkBXrtscceW3T4c8891+y0007lDwcSEXYc1zZdw6KT0wpDmY477rhFuk46TUvpNuSctrUtyYs8Vf+aDNM29BEyH2dcW0cYxWXfLDjsYp1gsf322y/oAuR7V/QI95P7Q8Qk9R8kx9FNdCJxLA5ZnJ0sWadxPKiDMd1mr5rAqiIguX7uuecWOxLbHhmIU6hPB4BkkJaDbL8ITzZztvfiMW2/da6uO8pyWIfeJG1/8iZyDltWCccS0wdl+/6yyy5rttpqq+brX//6kn2j6C5db5xLyk6HCiORuDddeonr3nHHHaVdxbHoBOlK1g899NAGOR7Tn//85+aggw4qeec2Aefstdde1Q4Z64NI0b9NwATGTWCqDrsuAxih27U/V3y1OezUc0dPPc41hlr9/ve/L1EQGBA0Bmi00iBhO4r1N7/5zaIGrHoos2FDQ4Qetve9733Ns88+WxxQNNgGOZ5QTvlvWo0yGrw8E2oAEcVCWXLd8nNBrxjnYDD96Ec/WjTfIFxfeOGFRafQk8jxsHn00UcX9jH0AeOSqBWGKsRE1AnHx3vCfdEf++mhczIBExhMQA27tsaVnDBthrnkWJRNyLl169Z1RpspKo1IrByFltcVlRZrQxQvDiGiopHfSjgQcdZFB50ceGxDtow7idEg+RivO2yEXRt/8mRf5B+vM83fyF50HfdGSRFzyPmaPOfe3Xjjjc2aNWuKjtlkk01KB06XPcI5dH6hk4455piFzkQabTxXbMdJ2xWNofJ5aQKrkUCW63SIY8f98Ic/nFqEHTZgLeHEog1Rs9OQoW95y1sahojKphtmSYf7MA67Sdr+yB/kEAERyCbqyzbpi2zfd613yf8a43Fvo0109tlnl3tDOdUm6CoX+oBpgJD9OPl0v8nrgAMOKHI6j1bi2WAU00UXXbQoYp/2wS677FLOQcfEjnzrg3HfbednAiaQCUzVYYexjDGbG0Os02jq2o+wHZSkhCTAEc4IbA1PzPtr+XE8QyNzdIJ66om6oocm72ddkQu1fAdtu//++4siwpghYTTQu0MPGL/VeEBJYBDASw48GRM0UjguN9py3upxo0HLvEM6n0blxz/+8dKDlPPWMdOap0jGHoYTPYPf+ta3iqLMdYtcaShzb3iOqHOfxPAGlP9111235HAi9dh34YUXLtqn54jn5OWXX160zysmYALDEdC7LocdPfmx51+RGTQ64nZFXdccdsg4Gk1dDZBh9tWcURj62QFEJwq6IDrrRENOO6J+iSaMBr+O6VrSYGAoT033KGqgK8LurLPO6sp+oRGn+6CDxVfRyrXrs6/GSHms9JJGKsNiuedZh6h+6FsiNYkoRI9K59bKTkQl9xjuefoK6SGmn+irh2rX8DYTeDUTyHL9K1/5SnGEMK1LLcmuyu9n7dhB28iDd513PjvNJYex5W+55ZYlWXHuMA63nMGw52f7nPPHZfvTnsCGlhyivgwJZWgnbR3Z9cgz5sX+5Cc/WeSZtsel2lG5vtNal07H+YbDjE4W7rHae7VyEEUIS+qV50CXDM92PM9n7LyP+d57771lWiE67OGnpLysD0TESxMwgXETmKrDrsvZhcHftX9QYwMwKBTmD5NzTwaABHreX4M5isNOjaVRGywvvvhiGYaDs4zGgyI3lB8KXI0HFC+NxFpjkx7D7LBT3gxDQGFpnorafBbMsUSvFQ5UKTfKk0PGa9xwjo7TefXwww+XnliuT4JBrbEVy0IDmmOGmRCYZ6PNOJPBiVMvJs0jqPsT9/m3CZjAcAT0nslRxDtZk295m95bOVyGeR8Vias8o8zrU3pNMaAGIfIPvYMeI0KLxml0Luo3kdGbb755aUQcffTR1aE1bddXg4VOL+WnpZyatX002nbeeedqw4aoFzngpMeYEkDb0Lvi2xWJyL4u/uPWD22MurbruUKXxMR8fnzkiSHXOFFlN0jnxmP1W/qopmvIg3mVeLZ4DpxMYB4JRLkuGcIH04hGkiyT/B20lA0PR73HtXMkg/R+5g4VOetw4jAViuzceH84d1oRdrLPJ2H7q+NAU+zoWkScRWcT8opoMpjEKQOIQsvOzshJv6cl24maR05TD5Ker/hsqExa6pgs89kvOZ+dbzq3ttQ5WTfoebM+qFHzNhMwgXEQmKrDLgu5WAGEbtf+eGzf3xKuXQJ92Lxybwzn6zoyFvrmqeMwHDA+UNo0HOnlpxdJE7+iDGDD0EyGWp144okLTjTNm8B29nOclBPGCPkxdJN9lJPGGz3/cT4LyiFDhvl7pBBplNIQo/FRM2xUfowxyo7Cz/nqmOUupRBVt5wfHJggdtioBjn5uiLs8hd7ZWzK+Mxl8boJmEB/AjKqWdaS5GubHFdjsK/8JYIPp9QZZ5xRhrQiN5CffZ12NHBoBCGzKRPykmGRrDO/DTqi1pjUtj333LN8VRZ5SScVHS19kuROTQaKUW2f+NT4wZxGLY4+/vgtPYTs5xzqS2R1V4QF+9qir6ehHwbxw0mAvIZ523OmPMSyzR5RXtzPtiF36Az2W0eIqpfzRiDKdckuvS9a1/x2dDzUOh1qnQ10jCCr459kF3NKMt2L7EXeQTpceWdxYCHnkQHYfW02bTxXMnvYpTqTBt3zSdn+ahfwESLmUEWmMUoF+zjLP/Qh0cLq1KfMbCNaG0ce8r8traRs1/NV02sqL+0edFpXhN3hhx9ePsKkc7qW6qyPusH6oIuY95mACYyLwFQddm1zfqFMUBZd+9saCzKuh1WoOr5L2EfIuk6tUdi1L+bR9lvGPY1IffAgOskwIIh+4CtHfKkIY4MoDYZsYqho+JXKoUYbPWQ0IFVXLdWbp/JguMBf+Wg7ipoeNyJGYs+b9mtJbx29VOSfo9F0zHKXMqJUt5wfX39lslz1llEnvgqIIchcJG1RgnJUElHCcDYlIk/gkXsj2c9x9MDqa2fUn15GJxMwgeEJyPDODQnlJLnWJqvlkKrJZuWhpd5rGnu8+xje5CsnHh/aGRRVQCQWEcrIO85FTjLHHQ5AytIncc4vf/nL0uhsk005HzVyJceHXdb4wVyND3HWfeB4/oiyU8TdoCXOyhi9QR2moR8yq7gOa6LH6QSLHVLxmPhbHMQl7uM3keTUs6tzSM90n2cy5+91E1gNBPQOsMRWZV4wDYeVLJOsob5676KNJ9lek12RkfJTRKvsRZxOOM8YAorsooMEW7bLXuPc5UTYkT9TF7QNq4zlnpTtL8dS1hHxo3OUQ1F3ODLRh0
rYz+hIZCbR6G1pJWW7nq+uZyPW4/LLL1+478xlzRQbMTCirY5xOw5kHL7RyWd9EAn5twmYwKQITNVhl5XHMOttQllKPs9vFHvsuva15ZuBSwFGQa1jVIblGuf01ODwysNVMSBi4wElpJ45nGk33HBDKYrKEQ0eHJ1sR6kw7wNOKIabKmG40ItG46OmmNWDhtFDo7Yt/epXvyqOxPxRh7bjh90uAyzWLeahuSq4BwwbxjiLzxeGGj2MsMsJZx+TyaKI+Uosf/zG2UkYfk4yFGL+HM+ws5tuumnBKMjned0ETGApAb1PLGvOIQ3VrM2hRsOIjzjw3nfJX+SchqyuXbu2ONYkL6UDmN+Ha+y7777V956S48xjEm99mEDnLq3V+LeoUVqTgapLbV9Xoxfm6ASiWxThoqgX9Cb1gxuRGuglOpJiZAu/2cY+IvKYkqLWuTZp/RBpc4/QBz/+8Y+bc845pyGikft18MEHL/noUDxPv8Uy6lztY6n9XVE06tRRB1I8379NYB4ISK7feuutCx3Lsj0lyxTNixxRlByyRjJGsmWQnMUhGB3oshex+ZBv2GrIdtoFOPC7Eud2vdtd5466b9y2P3XU/NR03KOzsHH54roScpL2Bnxqtj3Hcg4yrGYHK59pynZdk6Wer0HPBvqP5wgbHQeu5iPnHpPHoOdB14wRmugWJesDkfDSBExgkgSm6rBTwwClmf9oHHTtrykUwEhY1oT2qPtqwGVgdF2nq8FYyzNvYwgsRsf5559fQrgvuOCCcggGhBoPhLcT2YUDCmXKdgwRGlUoWNZrjTacceTN8Cs+c05DBkVG1B6KjAYX9yBGUEixyTFVizbLdZjUugywWt24ppQ3jXt6co888siyjYYbdYQX9czDzxR9oU/Eq64s2Xb77bcvUehE6RDpyNBkDEuG1HEPOIdr0ItZcwxOio3zNYFXMwG9uyxrDrsok/LvPg47GixEEOO0iRF0Nf2AbMSQ5y9/ZRrGDPnnHT/11FOL4S99oLyi/Ojzu02e1e6ndBAN26w/5Wyr7asNK1P+Yt9WVtWPcrbpty6HoK4zzWW+F3LWUdc+clnnS+fmsmt/V6Ne96otj5yn101gtRGQbGGpkQw4jnAU6f2I82WqYyZ+OEc2qORQjZGmQ8EOU7Sy7EWuzbxiyDei7PokzuXdxhFF5LSch8MusT37pknZ/ti3RPxh+995552lU4ZIMzpV9t9//8KFjhbYRd2qeyG9oPvWtz7TOE7PV9ezQTmefPLJElmtumhJm4CvzvI8Dkr6mBTn8hxEPWJ9MIie95uACYyDwFQddl3GK0K3a39bZSUsa0J71H21a0k55C+Gcqyu09agqeWXtxFajkOMOTgwOhj2irHCBygwIGCD0YOywJFEYxKlod4jGqIa4sXxMdE7Ri8ZvUycc9dddzVbbLFFmdeO66C4cHIRHq4oCvV2qgHIcC8aqvlz5vE6k/wtAyzXTdfU/UGhMiwiDnmQ0UL5cXLGIVtqgK9Zs6bMGch5MGIYLQbMoCEBuj5Kn0hFjufDHbWvj+lYL03ABF4hoHeX5ShJDqMsf3mPkZPIS2RcjrCV3M66QxG3yBLkAl+jlTxBXtKxQccJMlnnKi9Fp8mBpnXkFo0mrWt/mzyrcVAjNzZo1chSA6u2j2OQ7zhDc4K59K7qoPtA3VQ/ykkEMc6/7Czscgjm601jnXuFjKcDCzn+7W9/u9SR+0nUdIwyqZVHHMQlH6P9dthlMl43gVcIZLmOnMA+wv6ULJOs4Sy9V1EmSrZLDr2S+yu/yA+biyg7JfLgfSd/fXwhD/vUsXnJudjHjNRAp9AZKzmLgxE7klEY2sZ+6oW9zjbJ4liPfI24Pknbn0AH5JTmooMR5Xzqqaea448/vpQb3YB+xHFH+4L53uApXaW5P3Nnd6zDSvzW89X1bBDRSSAI9wgGPE+0B3gm6NTnGckOuFgXdIki87nv6NDs4NNza30Qyfm3CZjAuAlM1WGnya0RkPkPZdG1n+PXrVu3JDRbwrImtEfdV4OMow7hrklz4zG6Tm4wxmO6ftOwpH4KTSeUnWEEfH4dBYviV/Qh1+dz5rnRxHqtEUjjhEYKDjuUN4qaIbTMF0jPI6H4KDAZRqqD6iSjAyciDj/mXeL4aSfKAX+VJ19fyhvjQ/OkxGM2bNjQvPe97y2NZn3iXl/jxUnJvFQ5ydjpG1moYQ2UE1YrwSnXwesmMOsE9O6yJPHVzj6RDRpelWUX7x0RxThceBdpRK1fv34JBsm4mu5A3tFJQUOGPLbZZpvSyKRThB77fK7WVYe8nhuo2t8mz5YUtmkaIgVxHuJQzKkrP2QbzrbalAeUV44p5aE6wEVsKGdslKqxyrJPFEwu77TXaWQx/It7SUMVWd2WxEFc8nHa39VAi1M08Hw6mcC8EUCO8L5JnuCYosMU2yjLQ9jovYoyUbJdcigz1H51bms/ecRrq2M2z9+s4+MSW593H1sZezhem7rk9579spvJp1aPmH/8PUnbH/2IfGa+TTqYHnvssYboOubno10hGajyqp753tDhj8ys2cixLtP+redL5c7Xp36UG0dbzdmIjqfzDb2mD/zFPBSZz/ncc4bB1mx68cvPRczL+iDS8G8TMIFRCEzVYUfjh96naOz3/U0PSTSgcaZEJxWCNzux5MDqu6/WEAIqxv4hhxyyRFELuAR2VNra12cpYY6Bkf8IVWdeoLy9az0aPJrrrnY8TkCUOEmGj75qpzrFvPrUZVLHyABrK48Ytt0D1Q8OMiCJgmOdKMKaIsZxwJyF9DYyDLZPUp5t5eiTh48xgXkiQCdEfC/ju1qTW9qm91jHx3eOfegMnG4Y5rUkGddm8HMOjRQ+VKDIZ+WTz9W6ypTXcyNI+9vkma6jJb38uZMrrisiOs7/pP3UD1lPw4SPL0RZR3nFs7YUG8oZ+apcLMVfx8Z9s/Rb94DGfVeUne5NtDdiPVRfGnLonVqSHpA+rR3jbSawmglItkgmIneQUziQ7r333mJP12RObVubbCFqj/eQr3ZHuSZ7UdfmncVZx7FEWiliusafa03LYSe7tVbn5dj+GzdubI466qhW2R5Hmkje6YMdkpNiV2M0C9v0fLU9G8h4ZH2XvNdHP7ATYlKgA/cF3d+lL6wPIjn/NgETmBSBqTnsaHC0TUhN5fgUuxQuApDeDL5gpCQlinIhsV5TcsvZ1tZ40qfB4xwZKhdLKbzcoKGHR71Y8fj8m96vf/u3fytzosEJZ6Si7JhjgaFc8et7cCJaIjcW6MHkuNibxDwcDJdVvhyDsUKkyDHHHFN6OikP24nCk/JTndqY5DpMel0GWFt5HnnkkTKMgTpQl5ykVKPzTXmqzvkc1vWc9TVeZESg5HmmnUzABLoJ8B7S66+vNOtdzfJUuegd0ztZOx4ZOej9k4zrev+5JnnlBl4+V+saRqTOIq1Tx+UMiaUxhW5rG/JK40wOurhk3iZ1ZOE8wml3/fXXC2XpvFD0di4zHV1iQ/n32WefMpSIusY/hhexT8cuZD5jP3SPGO6GvmhLOq7NYcfzg
pzx1k3SCZft/6fOJq3Rz/5V/+pVusNwoGue46LwQKgekIqNhxbBnsWgqF16YY7KCJGH8wssUIOmlupOm33HJLp6yhsBG1nGmkEU3btm3rvPeWta5IK6b8lt6PoYWS6dRSdxEFrylJOtZSXOQz9AMaj9ITlVWfTVQe5dke4d2R99FXFOg+Yx19ZhowSlZUlPJYMI7yPpCPumL0OA4r34GxIxHZOQLDcU3FMPdtI8/tm+9Wqy0XYHcqrHl4jrx3rfdqvXinbdWxEBhCwPeYo2vH6fCV7sQoYR0zRMX5TRsdPPQtYNxnWQP+dAD4HdC2m4QRZTclURYaiCGKyGmNh/MeI70aalfe4TcrNpF/LCr7Wxd1Q991glAfji74p1jHo88iOsn6DHZDY1uve46D53LggQd2m0/Af9jJF12IfoKXkca0a5kW37NfvqORfxWdFJ06FgKFwGZCYKkGuyHCCUMeut8HmsyuxdAXvddqS8L+hS98YdVt24lEf1WmkQvWkZk2Ozlxjz/CwlE+EUyicY4IOIydRNltZJrXyLeRfYl1oyAyvQyBA6btmiUxj7/Jy3qAeG3JS+Rhn5GTvGAO9kTaIfBhEKQcbekttu46FgKFwPwISFs5LpI0SGX6C73C6LbXXnt1kWNEW3DNJM3NvIPoWaZi8p2jHLAbrYY7Im0xUO3YsWNlOi31WZeGLaO7PYeuEwHiufel9/apdYyRday1d8UVV3R9i9EpQ8qkUSot/qpSYz+MwGC9ICKMfTZgMfSXsUcZRNlGqYKOxuSGHdnYFPP4m7JE4mEAjPXYr6jcR4XT3yhyrXFbftF3zv5txNG+5ffStniHed4YMjHiRlx4jjwnn6dl+o6Unco7++qo64VACwHfY78xZ2kQySvd8R7lpaHx3ZW2930LlHMnUGiEye+A+t19emhzNctxpCxOcZzj0LVotMLAyHe3zz77rBi6uI9DAXoH3dHYFccR68+/87jFZj1kfxwbOHiIzNbhA33I9BpaH5N9GsI95t/o32JC39m4JC9bEZdeQB8iWaZF/+2v72jGw/sei06KRB0LgUJgVyGwVIMdAjph7y3lgoWrh+5T5oILLlg1VWmIsSx6r/UwMNTBLFoLntrOGNFv1es165DJy2yiQGNelEaYsDv55XPzxSOex1/+8pcriifrvs3jPWxhH+vfDL8JbwcX1mFiPaacYPKf+MQnOuUdAQsBJirwOX/rPCrP0aPbylvXCoFCYBwBhWZp3VTapMFcpU76i3CNAI+gDs1GicLgnpM0t6WUQC9xgDBFlDr23XffTgEgUuQPf/jDinJpWetyDPk803PvS+9z3+L5ZZdd1imJKpziRb/m+WspLvbLfoAdESl77713N+3ftji6lEKme0xXxRhKtEOkp621Ranf3c7PO++8rs441qm/7Zf97ivH82mN2/IcN1uyb75bsX/gp+KNTBTxJh948E6M4RLr5PcY78z567wQGEPA99hvjMglHCE4EKQ73qOuFk2Utre+Bcp4H2PZQw89tNIlvwPrNzIaJwdlhhKyPjSDNZLhKbFt6osRyNTDfXkP561xDLWX87ewsXyW9fO5+TzSX+gB0XHsTs50Y5zO9pcNOngeOGgiLbFPcezWuSuOYhKnPcd++B4wVnU0y7Tov2XZpI8yU8dZdFLk6lgIFALLRmCpBru4Zpoe8KnH7ClniizeeyMVhtapm3qvb10yDD1s9pAZtQ9L5iYT9Po8R+tQ0JbZKHDEuoyoY3oBQgpCUFaiYn5+62ljShhJBhc9g33PwoV76dNmTi4y23pOKOBOBwAvNuNYNLk4OwsTs55GpUKgEFgcAQRshGZpnbSJa0N/OX+kv9yDZ2B0c5pU7qE0d0hYJ+LsqKOO6qZuxnpyWc/tUz7P9Nz70vvct3gOr2NaJ8ZCEm1EvGLe1m/xbCku9iv2w3V9aCe3RbQbPENnEe2BCxgy5fjKK69sdWHlmruvQoOh1zGh0PdFO8d8/LZfsd85D+f0qzVuy3PcbMm+td5LDM/gj+J6+umnr3J+upmU64C5K+7YGId451jZul8ItBDwPfYbw9iMwx5a9tOf/rSTp4foe7zX+hZok6g9It5wAlC/CboQaSQ0EGMdeYk2M2La/PEozdiMBrt5ZX+fATwW7KG/8El5JZgxjTRHO8uf+nCPeC3jd1zDjr61En3lmcsTHEPf2t7UYSBGXGKoVbfXik6KRB0LgUJg2QgszWCHF35IeMTTI8OFuaIYsE6LSSYqsZY4R6a+1t8Setv0eO+993YMrc8oJmOQCVqOiIQcau69fLQO+6AipbAT8xvpQDTZscce2wkhrAk0lGBICCvWpxKX+9yqgz61jGCtvLvymhhmBg1eRHPwfgxNgZ3ad7EDz6nrlEytu/IVAnsaAtCXaPz2++qjTSohQ7QMXgJPGUrSizGlhLqygpfLep6nvHrOGBedEpvbz+MfGiP3xLNluJLPyHfIb/1EH/hbrF07DScYhjocFkwbhhayrl80auZ+4fjCyYTBiQjImIj+Zupy33qiMS+/7Vfsd87DeZYbzGN5x+X1zXC0b6330ntTZZ3WM2+N0fc3885W3rpWCExBwHc1fmN+j1xDppQ+Xn/99d16ar7XRIThkMcgd/DBBzcjoFiH+IADDpgddNBBM6bZxwRdoK7YtuvoQavyVHLLEhXMxnIYFXHMwoPid2i/oZsm7kde5bc0Rpssn/NLk2PfzTuv7E8f5K2UlRfE/moEdBkE2rJPcez2YVcccebwrIfoE33lmV9zzTVdF93tNeo9se/gwfRayhiVF++3fovLUD9a5epaIVAIFAJrRWApBju85m9729tml156aWfAYmHvd73rXR1TYABXX311R4hlgqwxhlAfvWYQ46nCJ3XCmPCo4fWnLnasWyQRJm5kVlwjI9YlEY9MkPuMh7XSmNI0lqxDJj/EtKnLSAWYDREg0biZ25JJx6miXst9zmU5p09bwWDnWiZs/87acyYNrmM4mX/s6I7BThsby1/3C4FCoB8BnAmRvozRJhSZqIwN5c/Grr5etPJhpIvThGJZ6bUKjef0a54/6X2se+y349+INexo24hHjrYVlUeUaPgqU6uImicym3XQwHAoudFEayonZYl8AbvI9/vqs19j+PXJDZaP4+pra9nX7Zvv1jztgwcYjuGS6+zjnTlfnRcCUxHwPb7hhhu6IhjYDj/88M4YhjHFBI1laifyKXyAjRAOO+ywbu058+Sjxn+MMS1ntd9B/r7dWRm6hdEuO2KQG5EfMeSgtyAfz0PPY96p36C8w/zrKfuznvN+++03Q2YltXglhivyEWWHrEyyT4vQoK6Cdf6noRJ8W3qY063zlFlkC8qwzEPmT8xOYip1xGes20UnxxCq+4VAIbBRCCzFYOd0TD0fKALR66GHR6KqkQxjF1OS9IRMNdhR/qKLLuqUChboZo0GNmXo22l2CFwXM0U5gSm0kswtG79YtBYGMmUzCOvITBus6DcbHjCd1WlEehdhRmxA4fVW/xQAogetxbhbZblGn6JC3cqH9xLhaCjCol
Vu3mu00xorCwuzxhN4IPzFJNNubRgS88XfN998c/fuxWv8RlBEuKEddzzLeeq8ECgEpiHQou3SpqOPPrpT2qCN8Y/FuPn+VMbMn+kvPcD5w/pzbBxBghZCJ4jgNnGNPNJerqPgHHLIIV1kLn3MSXqtQuO5C4UzhWf//ffvdpTmHuu7EemQ78c2cxt95yrCYDDPX4t/yhvOPPPM2X/8x390U1pRYFTebEus6ZP8WRr497//va+rK9dVlMHeqb0rN//5A95BhF0rAi/ntV8btekEyh0bZ8DDs1Kf+7Le547Nd2ue+nmfeC6t92oR3jlP25W3EIgI+B6/5S1vWaFTyP2sJ2dSVmczhxtvvLFzyrMeJkZ9aAVyXU58m67j2CeD+R1EumU9yvT0hUg+ZDqT9BBZUb6Cw17+A/2GjkPPvcb9yKuk/a1v0HbiUd5hfvuwVtnf/scN6ryWeeV9993XBTiwPivJPk2hQcuS/Q3kyO9FfB/yuMQS/SXqf7x3vGPQyrizLGMvOhnfzvpdCBQCmwWBDTfYQUwxxEUvBmvFEZ2kgU7viQY6wDEqijwoBBBiwtSjZ64FIl4xGCjMWE8+BBgiD9GGgdOnKUklAwWixfitQ+aWmYUCi4zY/K2jayOcdtpps4svvnh2xBFHdGOICpkKl8YpvIQ34RwDAAAgAElEQVR4JRkrBsWWcENbeKSoBwHA1Me4vR+P9H/IYKdXkjYwZm1kAlPGDdYIbRgyTzzxxE7Jo/3WlCqEDu6NKXdx23rGTDsIOxgAwQ6DKbuDURfvE+9VpUKgEFgcAb3cTENykwJpE9/Z0J802fyZ/hqFofGJXhodG9vTO8+6ajplLAvtR1HISZqvQuO5fcrnKg75/hTekNumDnCxrnw/n4uP/CPet19MZyXagHqheyjV8MncFtdY/xMnGzQQfHCKDRm15L99WMb+yHPH6Kv9GqPpRP8hN/hcbcvyfRjqZIS3LnvZA/vmu2Wfpxx5n3iGrfeKeuflnVParDyFQAsB32NoCU5raAyza6QVjz76aCe74UzHOC7N5N3VeI9OQPRuTBrchmiE30Hf9806naxxyrfC9H7pA0EFLDXD9y/djN8h9WVZmPuR98RxxH73/d4o2V89i4AFdR7HFPvb6pdjiGNv5Vum7B+NbCyfwEZ4rJkKfYdOt4IyGLfGXZ43jil0BozIlGm9Q0UnW0+6rhUChcCuRmDDDXYqZDG6y7UF4tTF2267rYuE0wiCwoQyAXPHCIfhZIjJwIhQHCDkEOLsOaNejFrcw8BD5J5MrPUQMBgiLJAfgj+UV+aW++eCpn1Cg+1iCEJwiH8oIqyZh7EIbx4CBX1AeKFfKlWPP/54F01IPxk7U39jX8GRDTOiwZR2ZdzRM8g4Wn9MUcpCin3niHBFX+k/kY0bmYh4wMMZseI319j1lvHmhNCR87fOo3ACzqzNlPOhdJ599tkzhM1KhUAhsDYEdCZEA8NU2pQNfJn+6pHHgYOwTzKiD+Eeo5MJOht33uY6xiPonuu1mZejNF+a4bm0Pp9rGMv347hj/UO/qQO6ZF1DebknnkMGO6ajonzBZ8SKsrEt8GBZAZcCkKfSF3hqK/IZPovziTzwBhabZ22q+IcChXONv3POOaebYkv++NzyGO3XIvhRV9xYI9fNuQ5D6H1rh+FWmfW65th8t+apV0NFC5dFeOc8bVfeQiAi4HvMMSZketasQ15Fls3TMH13oSfI/ci6bh4EXSXyuWWciW34HeS2Yx5kPORinRPKyi4dI92M3yH1ZVmY+5H3SPsdR2wz/95I2R/eil4Qo8kdU+xv7hPnjiGOvZVvmbI/7cObkPM1tsIn+IMvGR2Y+9kqAy6nnHJKU44vOpkRrPNCoBDYDAhsuMEOppUVIQa+ffv2VQvFwkxgFPEPpo3BCAKbp5bC+PH2w/Q04mBkgRFHo5VAUz91IABA5JmyxKK3COQoKtTHHxtkwBBoEyUjKjDWFY96mYjkgJmgjLBgNwKJyk3Mn3+zBsdJJ53UeX5QIulnTggTKFUoEPxdd911K2OkzxgruU6fUZ527NjRVaGiGCNKuCHjluGNHbOQkvuH8iyG+d56nzNe2vI92ch2ie6M7Yy9C+s91qqvENhdEeC7xYmC0oZhxwQtgY7yp1HOexzlE5RnYwmMQChwRNe60YT0DZqMAB4T0bLQu7gWjnQyOpYw7hExDE1l3baYoAkYwFRoPFdBzOfW732jKqYodbFdflMH/beufF98wA4+yPjBp6Wk2a++fogVSzvQJhHGXJMO4rgAd+7BVzG4uZ6qz4B78/zhrJL/ZtwdKwvCw2f7dnY3H+OHP0vDOWJohB/yXIei56D9U6b7/n/2zvzX9+ne//4Bv/hB4odGIhERERERQgQhCCemHFPMU7SooWaOOqjhmM4prelqr6GGL0Wd65qKGlqtWQ2XyzUcao70uqlKzm3y/uaxbp/ba7/2Wu/pM+zP3ue1kr3f0xqf78/7Na+11NZcOI6Td84FPKKPo0MgR6f4FlnPGnqAMY6lXZREMy0twlnPRmHkP/7449P3CO1kCqcSdBoZHsc2z/im2aCgSWalPHI13wRJDh7N/BH9Eo0nD2Py9XqDXRfaPirZX5sDETVN0ISSxuR5AfRcMjUBFdB49Ak7dtXhj+OU/dW27W9bGk0/xQfAoS4FnaxDJ54FAoHAbCAwcoMdTAAG2kQgGbwYfE64R6ljbQgSDJp1LqyXBeWJ9SVyEVYeWIg2hjgZ+WhPHjtC4lE8YFZNU31svWycQRnbd+qR987m7XqOUEM0InUzxafk8ec+z+mHduEjcoRyWj9QbYtxswMXSpGNePDnbAfvhRTVE8dAIBAIBPogAF+AVvk1ZJrqUnSUpbUYX6C1StA07mFwg1/YpKlCOEmIUCZBD5kmw0YOVgHQ+qp+DVMpl1JodC0jGnyPqTpMrSLhWGIxdZxL0GjWbKL/KEZdk/ik2vLlwZWx1+GjMnUGO7DRLnrwWiLNc/wV5YapSbkIb9YO3H///VNUMrxIvIVoPSlPKPHewaboRq9wqt9tjyhpGOcsFjr377RtnZEvEAgEmhEo0SloDvK7pyWiodZgRyvQFxb7xxCWS9B30Sl92xy1JE6uTO4eNAwHuyKvJSOLxlOGMXlZmOeHH354tXjx4kTbcThAf21kW669Nvf6yv7/9V//lXgMNNcmjckb7OBXGDktfsMag20/zgOBQCAQCAT6ITByg12XbsGcWJdA02N0JOKN0Gub8JKjCLA+A8K9vGQ2T9M5XhoiO2BqUn5QHFCiULC6JpihlBCOXA8j0SeiTS677LIZOPj6wYnFe6WkorAg7ICXTUSiLFq0KP0pKsU+t+d4AVE2+2Bi64nzQCAQCASEAHSNHQRLipjy+aPnE0RNswaS5QHQQeh6LgILus96nzY/bUArfaKPOGOgfZwrSbmUMucNdMrnjzLcYbRjTSfP13z+3DVGQMrLGOjzeHxYExVnjh8v5eoMdjxnN3KmprZ5R9RvMfL96nJNPb/85S9nLPHQpQ7lRX6QLKHjv/7rvyZer
TxxDAQCgeEiUDLYlVoRDc3t+loqo/sYx/RtQ/eJKs7RO+XPHeELb7755hQNk4xs1zaGR9BP8irh4MHpQMAAdJmI4+XLl0/Lo7xdj9DBvrK/nCK2TY0J2d/K/YyLdoQhPAPeMCx6bvsQ54FAIBAIBALdEZgog1337keJQCAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUBgfiEQBrv59T5jNIFAIBAIBAKBQCAQCAQCgUAgEAgEAoFAIBAIBAJzHIEw2M3xFxjdDwQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUAgEJhfCITBbn69zxhNIBAIBAKBQCAQCAQCgUAgEAgEAoFAIBAIBAKBwBxHIAx2c/wFRvcDgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUAgEAgE5hcC88pgxzbxf/zjH9OusbnX1PQ8V0b32C2JnaDYIarr7lOqY1KP2lFW/euz2x+7ZnlcuAYvdtHyCTx9fp8nrgOBQCAQmHQEPP2c9P5G/wKBQCAQmDQEJC/+/e9/H7hr1MFO3rmdvwetHJn2f/7nfwatZlbL53a7ndUOReOBQCAQCAQCtQiMzWCH0QYG2vfPb6WeGxV177jjjtV5552Xe5zarnueLfTPmwgARx11VKqfdkaRPvjgg+p3v/vdQH9s5e7TF198Ub399tv+drpG2fzxj39cHXPMMdVf//rXZOxcuHBh9cQTT2Tz527+7W9/S1vZH3bYYdWnn346laX0Pt5///3qoIMOqo4//viKspECgUBg1UXgyy+/HIjmiWa+/PLLVRfjWYk+dXkTKG7QssWLF1crV66cUfT222+vVl999eqkk07KPp9RoOYGThH4IP3u85fjoXJiCcO6Ywnfzz77rLrkkksq+EykQCAQCAT6ICB6/Ktf/apP8WllqGO11Varfv/730+7P+jFa6+9Vq233nrV5ZdfXuF0HkbCUPnnP/95mixc0gW4X6LZTz/9dNaQCN+ARkPblyxZUsGTwGWjjTaq3nrrrWlDyPGYHN+YViguAoFAIBAIBEaOwNgMdhjRYKB9/3LMxaMjhj9XDXYSMvpiRDk/dhTYs88+u9p2222rv/zlLx6yJCisv/761fXXX58EEBnfNttss2Kkoq8EweX555+vttxyyyTMPProoymLfx8IJv/yL/9SrbHGGtWCBQuqF198cWhCj+/TsK6//vrr6uqrr64uvvjiotFTbX3yySfVZZddloy6a621VrXVVltVxx13XIWQ5RNC1z333FPtvffe1dprr11tvvnm1fnnn199/PHHPmtcBwLzGgGUh0FonsriUOkSneHpUx+Qn3322YpvPadk4jyBjvJt840/9thjfZqYKoNyBR/UeLseczxUGGy88cbVDjvsUPxDSc3h+9VXX1VHHnlkGh88YNISPOf++++vcEIx1lJasWJFUmZ33nnnNBbwgHa/8sorM3gU0Sm33XZb4gnwhbo/8o0iyqc0jrgfCEwKAiW6fu6556bZKnyP9u+dd96ptttuuySL2vs670LbJUsP22CHPH3hhRdmjV19cX/99dcrZPBly5ZN0Rr6j5yM3A5d3n777RNd4j54EHhgaTY8JkffhQO8Ah606667Vpdeemn161//OvGtU089dRr9uu+++2bwmFy9fcc67HLI0bfccksaA7OrSokggUWLFk3xYuTuhx56qMJAWUroQsj+YP29730v6Sw33HDDNMNqqWzcDwQCgUBg2AiM1WCH8rJ8+fJsNAVCf91z7z36zW9+M0O5gLnB5FAucspH0/MXXnihiO84IuxgrhtuuGH13HPPTRNkJLDUHSXseIMdA4JZYUwjygMGp8Q592Din3/+uW4nQx0CAsy7lHJKCwoO/YcxosScc845SRBBCZJSQ54tttgiPdM9jihVk5j4nREhg8BTEv4wWN59991JICKffn8IVLlyRKIceOCB6ZmEMsqQF+HomWeemUQook+BwEgQkGL34IMPZuke3wv0DTqWo4E4Ig499NCsQamuw1J8cjSzrpye8d0TaQHfeu+993Q7HYlo2G233dIfxiDoLPlyEdDTCva8EH/qOhZhAO+pS9TrDXZy7qAIPvDAA1PKZl0943rGuwFroh+hqyhdjDWXhAH5oNlSjrmG9t96663Txiased70d+KJJ4bBLgd63Jv3CIiuY/QgugvZHxp45plnJlrS9O345000ygIqQ1VJZrN5/bnK+va7XHcxckGrMNZBR+X0oA+WZonmcF/0inMlxplrkzwYQeFBNpEfR5M32PmZOKV6bV2zeQ5e4Ma7sXioT142h77jSJdMz28xN8tH/Jt6kc2lO3K91157pYhFtRHHQCAQCATGgcBYDXaWAfnBoRDUPff556vBLsd0/dhz12LiJYWNMPg111yzAjclIj5g2vvss8+UQQ3j2UUXXZQi9axBTedi6BIgrDFOebocZdQr9Vt9nY0j03t32WWXKaWsJPwxfRihgXf3yCOPTFubD0MoUShKePQQEmD8p59++tQUBoyneO8QJPbYY4+YXibA4jjvEeC74nsofV9NtE20yBqU+HaXLl06ja55uiTa04aG5SKliKjFuEMEs52KiwERY5010EkB4B7TqrokjQ+MhvFncRK2Nloj5+xCabHlFFmHwwHegmI0CYn38Nvf/jZFQ4AVPA+aWidbMOULhZn3qaTIPOg60S9EwSgxVtazBbvcH+/62GOPnaaAq2wcA4FVBQFP13GII28+/PDD2e9HTmdme+S+K+hg24Txhu8feSyXiFZGFuY794myfR3n9Jv+d5XjkRNxnGMMgh7RB3iFAhzAbPfdd0/3aQN6Bv/SEgYYRXNtUg8GO7AVptT/1FNPZfODB8+VFydarl6P2WxcY2gj0EM8kbH6xFRj6Ddj4PcoPvXuu+8m2R7eAP+yid8Z9JtnV1555VSQA45DtWejIW3ZOA8EAoFAYFQIjNVgB+GEyXjFiWuUprrnbSKwxMhk/BHz0aYH/nkOVJjVEUccMSNCT153RUTllBoiy7799ttcta3uwXD6MsemsTG1k7XqNA3VG6PE9JqOUqqlRArr0pobEij8URGTTf1uBdwIMqH4sS4Tv0k8cuCisdvmUM7xvpEP4aApafqDj2qknDXmEbEXKRBYFRDwih2RUZZeKDIDYdneFw0RLbIGpUGnj3o6aOvWO8HAzndvjTlvvvlm4h3WWKf8Mtph/EIRkvKg56WjxoehH3qZ+1OUYV0eyvHcjkX01yp/FmOdg73KoezgVMCYBZ1qO47S+IZ5X1ihbBF1ybp7KLd1BrtS+zhRiAjnt5BTBkvlFPUB1nVTrkrl434gMB8Q8HT92muvrbbZZpvish+iRV2+tRJO1MF3y/drZ5WQX3QYWf7JJ5+cUcUgcjiV9S2Po+GKK65IEV/UkdM1MDIKJ+9kYZontNkm4WD5GbSQenIRdkwr5bnN31cnsf0YxbmCENAdczQaGR5nWu4Z/ZExD16G3qekZS6IzkZvskll6n7HNn+cBwKBQCAwLATGarDLMSAZvlBi6p7j6WhKGOauueaaqemVYmwy
Kvnnufr6GOwULi2FJldvm3t9GT11+7HWtaepTDAyYWPzt+mHFCOVp0xOAMgZZ/FeSQho02+MoONWfKR0sV6JmH7OYIfSDo5tFyG+4447Un7qzSXaQNmMqVQ5dOLefETAK3bQFL6ppj/RENGiLvRXUbFqAzrVhcYoIkIKoSKy4GNEdaGcythlj3z/G2ywQfrGTzvttFaRtBofuOQiB4kkRFFj/HV5KMtzi5Pobymqhef8ydD37//+72mKENjz3mSsAzvq17X/nYLPN998428P/ZqlGljKgenT9EXj62Owo3P6LcLf2iS9q7YOnDZ1Rp5AYC4iYOm6voszzjgjRSN3dajwHSrpmxTttkfRNr5X7nuHiox1yFhXXXVVluZTdtwRdhqbjvTB0yxoPDxDRjVPk6B7P/3pTysFKFBXrh7uM1PGyuZMjUV+553ZxLX4rL1vz8dF222bWuaH933dddeld+3xQJdjhswmm2ySIgxtec71m2TcWg4JnsFa0vx2ck5zleF5KXrTtxPXgUAgEAgMA4GxGuw8A7IDgAnXPbd5255LWLfMvm1Zn091EX3nFQ89k7Dgy7a9huFY4aPPedNYUaxg1Ko7l59+NDFpPybKrLvuuklAsApq7rwUvu/r5BqBgUiOcU4ThdEzNQFDLJErYAReXpjB8HnIIYekcbeJrmM8eFCpywsXGrsEWRZJZ9pVpEBgviPAd5X7vjRu0dccrSKPhOi29JcIPhbp5lskcppvmCmsbY12CPUYuOgzfcILz/R2rlmbEh7Beelvv/32S7vKojTipLLLFGjM9qjx0daoDHalvtr74MtY77zzzmm7gYMH67yh+DB1yyf6rylGfvqRzzvsa/12+sgWRGhgYOA9edpf6icRO+Dgp0mX8sf9QGC+ImDpuuQaGTl0rfXtkBMVSW2jfTUV1NJ+ZttYYxPnlME4d/TRR6dZLlaWxjnKt6ylCviecbSWHDS2rKV/Xc67ys/+NwBO1PHKPze9kUOIMUJjkA/ZvAPaSlKggcav+hhLbkos62TbiDy9D6bKUhd0k7+mKbGzQdu19racInpfHG1i1g/rZdfRfn5XvFet182u7/BwdJmSTC8ZHqdcpEAgEAgExoXAWA12nnGIKXDEg1/33HqNLDiUhSB3YaY2rxUEbL3+XO3klMK6Z76eumsYDsK+XwjWCicHHHBANo8EFjseRRSqPJEYMGSU1bPOOispqkSEKMpRR0LtEWqYCqp7HOuiHOl7rowtr/PSjlY5bBSRVvKS5coMck9TYRmLFGkxda+0SSDAi4eQgwCIgIXwiUCUW8xWwkUpwo4pdXh364SMQcYXZQOBSUPAKna5vom+Wtpm88mglaPNNh/neNKZrgpNJNqC74x6ZcRbsmTJjClUvg559+EjlMVgxRp3CPJSoHwZf00ZliegHzk6YfOTF+M99ByFgqnA1hHCNWNh/PSnlIf71EFd1EkSj9AOeyhD1PHDH/5w2lhQknPr+FGHohVRFnNjgTZqLdAS3bPjHea5fjtd6Sn4sMYrzqLc1KhcH/U79FE9ubxxLxCY7whYuk60kp1GKAORlan0rVrDi74paFJdUn0yokjOQpbF8HXPPfckWRYHCc4FDGClRNlBIuyo3xrDSu1wH7or+Zyj6DDrpbG2tNVVOD/hhBMSjaUNZFT7HFneT/EVDjYftBDZ1hoVhR/3eW7z23x+LLNB23H6MHatI6cxcrRJv6eS7gCNR29irCoL7jjx7G/V1sm52mv6TfpycR0IBAKBwCAIjNVgZ5lA1/MScRRR9usbWY9d3bNSvR7Ujz/+OBHx3FRF9aGNwujrtdcwgjrmSF48QTArMXaVR9HiHoqnkvolrFX3Sy+9lKZi0d/cguslo2BuHUEph0yVYqt41uGwymTuHK+p8tJnv8aI+s8RgY1t2zE0Ssm0z4d9LuXT7qjLbwQMrXBJu/SddwGOeOMwSAprjgiHN9544zRPLnVQBgWWaBmbGJ8id7oqmLaeOA8E5hICVrHDKSDDvo5aciDnXEAxYhMHvsE6+qsIBepYvHhxoiuij+IBKDs8P+yww2Z8m8JT3n1tZqCyej7q46ARdrZ/ot2eRmO0hEblpvVq3UDVI3rZtHPeq6++mqJa7AY8qmOUR73jJnoq/vnoo49WP//5zyuiIHnHGC7b9llrH8XadaN8o1H3XEFAdB3ZDcMIjhJtWiYDEZG3MljJ6Yz8qXtMT9xyyy2TE6Fu3BgEbVSUjCrIX7SLPAZth9Y1yZFt5PC6vnR5pn5KbuRaCZ6FQQw5k6mwwk7PicIDMxzH0DmWA/CJ+nK0j3cjfYAyeh/ct8nns890Pk7ajn7D+9TmHPRBGFrsuC9jL9iiQ/j3rvWnea6ywiGHmcaLnEAZH82o53EMBAKBQGAUCIzVYAeh1a5HXknAqFb33BqiLBASyHOKU99ntn6di5DXtVOnMKqeuiNMwzLRXF4ZiTS1IJcnd8/XLWaWG4/Pm6tP94SLBC+UFYQIBC4JW9YoiIeQzTl4jmBWxxjVxriOYuCaCqt2wQgGnRNmuE9+PHInn3xyysM7YqddDHYovorUoz6mlBGxQTl+89qZEIURoQKhkjKThItwiGMgMAoE+K70feUMdjLc5Y5tDHYYpqBLGGBsBF2OPzzzzDOJBkOHobE+EkPe/UsvvTR54kU/VRfj6PInRaENrkTwosDRVp8/ytrxsCYS+OVwLd1T5AjKjyLQmox1bcY2qjx6L030VPn07mSs47dZmjpn+yxDLhHyGO4iBQKrOgKWrmvtODlCJTcy20K0Ro4Zu5mCNnsTnc1hqqVJWNpAEb7QVfEU1hfmnCi7NknyL4YoooplPOx6RA5sm0R/aJtZHmyWIx2JacO55WZkBLU6FcY7kox9OIBzM5eY6mqjCJkRwjX3LW+x+YjGns0k2VlTYdUXvWuOPhEcgDxNpDQ4iZYTJY9TBnrNb0Nl9bus4xf6XQ+q7/m+xnUgEAgEAnUIjNVgV0cEYch1z0uDEKPLMfS+z3JtiUjfdNNNMx6rnUEJOEyjyWDH9uyEeCv0f0ZnCjd83cM22IEPSsuPfvSj5HlCcPJtSGhjfZ+VK1cmJtnnnReGONBt+ovHDMZO1IhN/LYk/Nn7+k3wjPdhlWEUWk1b8LtQIRTgNaac/cPAx0LIYDIpuNjxxnkgMAoE9B1x7JNEZzz9RTjH6Lbppptmo11Ftz3vQOnhm+XbXLBgQZqCqm+btY8wtK9YsSJ9oyqrurQmk9Zj0jX0F6VL13ouRaHNuKVMWJrR5XxYNAVciVyBXrHeD9OIJjXpvTSNXQouY0F5vfnmm9P7Bd82Y9Tu39ZoMKmYRL8CgXEg4Ok6xhPkKwzaomWW5utbtTRRtF10NtdvRbbaTQKog2+X+uWI3W233dLSAbk67D3KImMzawKeghNVRkUMjBiA7HIxPGdcGBfJJ8OjHYetP3dux64xd6Htyqs2VZ/uD+NY9w5yYxrmPXgOBlOwR6620XJ61xq7bZdyl1xySSrnMWDKMY437qusfpd1/EK/ay9v2HbjPBAIBAKBYSMwVoM
dnhFFX3lvFVFYdc/JzxpsfhqhGFOOmfR9lgMZQx2EPRfZpnYGJeAwjaY17MAPnGzUmseSax8+T93WGNhHKMhhLAYnwYv3w86JRESqDZXDw/n9739/aooRfapjjLn3MIp7dcIA7dF/3r3GqD6IcZfWu/jyyy+rnXbaadpUDZXFQIjyv/vuuychD8xY/473hrCIkkhkUKRAYL4joO9I3xdTe9pENojGic6I/iLMY3SHtvDdokQRseCT6Lbok33O98madBilqGPrrbdOSiZOhw8//DBFIVC/yqoujcFfezqp51IUbNulc1+HzScM1B/7jPM6Wiv6xjjr/qiD6aFEEpOP6aKTTqOEcx8+gwOKNfcYa9MmElqIvMv79O8orgOB+YSAp+tE+OIIYeZFjpbpW7XfUBNd03OMZZqtAIbUwXcreqzIaGaCUKYuIetDLz766KNksLM0lfqsHE09PBfv4To3jrr2msrk2qQMY+xC1xg32KNjgAd8EueUljmQQ513RPT1pCTr/Ea3QV63Se/a/m7sc8ojWyNjIwsgcyN7w+NF36XX6XdZh6vW1ba/C9tenAcCgUAgMAoExmqwQ/nB+yRvVZcjXixLRDEIETKuSIW6deraPlM4uQcawR1i7xm18olBW6atZ12OYjx1SlPbZxJU1D512/5L0Mlhg1EwF4Kfm5YsBte2Xz6ffafq67iPrM2H544p2TBhbwDFOEq/Ne1Xi69renLpvQtjyvr3URqj6uQdIGhECgTmOwIIy/Ybsd+Npxf2Wt+U8tvvkGfwDAwpCOa5JLpdJ3gzdYYp7H5DBV9W1+qTvxad9M9LSkauv6rDT1uiLaJIDj300DT1l2v/x9SoEq1l/JSlDl+Oa6K6mVZFX8HymGOOSRsQwReVoFXsHnvXXXdNizTW89k60n/GXRp7U7+EOQaBUiShDBHwj/fee0CaD0kAACAASURBVK+pyngeCKwSCEDrLF2HRiDXsIs26ygjj1p6XndeotGa8gh9s/KSZGnRW3gE8htyHkuPKGI69yJoC3oxnwx2mgJKJCCRiMLK6wXwAJY4gHcSyWZpfA6rcdzD2Ua/0Um0nI6V0Vlah9+O1j685ppr0mZKTX379ttvE19nKjCbvZHEL0obVZBHwRtdZzk19SeeBwKBQCBQh8DYDHYYReoIKcRTTATmyuLPrFmgJCYKQSVxXcfg+zwrKU9N011E5K3CSB9Zi4K/tskzz1I5Np7oulaOr1tKbk4Q8nlL/eC+FkFXtIvNW9cG+TCQYqCabaGA8Xb5veg9a3qydom1Y+dc4+ddsTtlU+L3r/VWtM18U5l4HgjMdQT4/qzQrO9G35kfn1cEc/n5luApdUl0O0cDbTnq8gqeL6trP+VV14xxUqfEMv4S1uCgsTEGEliIVwsnnDkYrLxhU89n66i+9zXYqXydAqcpeTj1ZpuXzRbO0W4g4BHwdJrn0Bq+RZ5hsBN9ZLMy1seUHMZapjjkMcgREZWj0cieyF677rpr9fnnn09rXjKdDHY8VARZblqlChPdzcZyGBWZIQFdtG2r3xjylTz9FM0QvVS+uqMto74Li65HO2YbKc7SEOBMW/rD0GnXsuM+xj2WtqFdymC4wykxW4nxdMGgLa3HuQLPWrhwYdo5nfF988036d3zG7E4auzoc2eccUbqj6Ly9CyOgUAgEAiMEoGxGOyYNoNX/he/+EUyYLGwN5sPwBxI9957b5oGKCb4/PPPJ4+K9ZqJ0atMEygocXjUWDwa7wwLZPdJhF+zYDkMw66RYeuiTzAJr/QwHtYqYz2cNglGbaPgSmXk9ce44xUn+vKf//mfMxRMX7eUXMZGGfuXY+J67heeZUOFUqSkFgzGW1fKw/02xqwSFuO4z2+P9+8ZOMIQaxaVjKcSCEpTZn3ftdaKn97h88V1IDCfEMBTbemeaJOnpxqzBHh9j3X5c8Y21WOPuXwYpvz0G5WBHkLzpczpuotiQd4uSp3ovsatvnAUBuqPfcY57dBflFBvfBR9a+p7qa9ShOF1KHuTlPRe2ipxvu+sY4Wh1Sp1Pk9Mh/WIxHUgUCV5CZry+OOPJzgwsO27777JIIJhRAkayxRFlg+BD2C4Y30x5KFSwjDOBhYYVuymXsoPraJtTyvlWGC2D4YoTwv/+7//O33rGGXQW+BBTXSx9LxEL9VHexSdogy0nGv7x5qpyPW6B83905/+lIyQuqcjRkfGgdET2RuM2GWW8tDBUn91n3cAThj3MNjpPjobfZu0pHfdBW/4PXoOY+O3ZxPyCPdz+hXTrpHP28r0tt44DwQCgUBgEATGYrDDIINRQ1FDeCasB0OGDRFIGcmkAMir0VbopjyL92OsY5oO6zbgSclN6WwCj7WQMPjZbcR9GRglffMKpoR9BPo2CYZjFddSGYQVvPm5KTiaUimsVYevWwqemHHbo1cIiZLTblb+KO9obtqtzYsSWUq8S34/RLN542SpzLDvS6H1wh/tICzyW/7BD34wbT0n+k3YPriyToaNtESg8om1ocCJulhfJFIgsCogkKPtok2laZpMCeW70veo/J7+gh/OH9afY50eEkYvFh8ngluJe+SxAv/HH39c7bnnnmnBavvtqoxovuihrjVdVdNIde134dNz26bqLh1lsFNUiqWhTbSWqWjwKNYmOuyww6r/9//+31QzRL+ziQRKIOPwfaXMz372sxQNPVXonyeiW/BIv1mPzUvdvK/S9GSbd5jnei91sgPySG66Kwo7dJ3f2rJly7L8hzysN9o2inqYY4u6AoFJRoDvnW+HdYs58uflG8nqbObw1FNPJRoFLUJ2gk7njHbIgRjbqEu7znocoKu0Jx5hn0umpzxGLRsVKxrLtEfxFevUhp77iDSeW17Vh7aLTll+wDhxgMBjoK8YMaG3JOkW3Oc5+ax8DJ0l2pn16F588cX0DOMkkXK0pb+cc548vBcS2MAfiGSkzVyaLdquvuhdW+z0DN7t+Tc4PfDAAwlT8PG0X78BGS5Vl5XpS/xAeeMYCAQCgcCwERi5wQ7iiCHOeiQw8myxxRZTHgwIKos6y0DHIDUNlbJEdaGMEaZuPXM5MBCgYaAwY5g+RBaGAvOHAMPALWPL1aF78sbBFHOMX/nEbL3CKIElx0hU1h7JlzPYITiIUSs/CidjxBgp5sozFkQlIsAyVzHv3LSzJmOaVQo5l9FTU2HtWhL+nL7xTps2yKCc1oXT+HTE8IjgVTclSXlHdawz2Alb+ohSiHcOYRIDL/f43fH7s+knP/lJUvLw7CEMoUzzbsiv36zNH+eBwHxFQB5rpiHJkC1Fie+h7k80Wfk9/VUUBpv0wE9IGOLgRbY9rUGG8K6pPyoL7cfo55NovjfYqU96rmspAbrW87a8gfZVB8qtj1huE80M/2Q9IHgvfEIJfgjNOv3006u//vWvU+3QV/gMdaP0+WSNdShAJb6qqBXeJc6LcSbhXGewk0KMcQ4cPE2um+aLcg5vAiOv+I1znNFWIDBpCEA/+OZxQOK0ZkdOZtcoqg36wQY2OJ5fe+21ZETiO4UmSq5CfvS0Rw
a3nGwlDKiDtkVvdV/HJ598MkWfkQcZWHQfeVPGd/EV0XjKUp+X0XlueY9oThfarjHhjIFeYkhjJ1r6xxGaRH9sos/QK+VjR3Oi4mSgQi8Q1racPaePfjz2edP5bNJ29U3vOoc3PBNjIzI2GCJz77fffkl3KgVxiB+iXxGheO6551baMI97db879SmOgUAgEAgMG4GRG+ykkNld1rROgJ1mQng3Bh4ZN1CYILYwHHmxLVP0QMDMiKYjhBui6j1n1IsRhWcICd4j5etDwUNYID9MsaSMUE4M2vdPi5OWhAbfpmWejJvIBikTvg7Gq0V01T8YNaH8VnlA8Dn++OPTboff+973knCCoIRyjJEMJtYnyWB30UUXpT56Y52/Jt8FF1yQDFJ6Zu+VDHYyTBIBw7SA2Ui8gzrhj3eBQKodJcnL74ZpCN7QSv8xQvOcfPpD6GIDlSYBazbGH20GAqNCgGUG+AassM33BC21UQvQWP/nDXye/mppBRw4cmqIRvroZAR5P7Ud5wTKDAoddNQm0Xwpc7oWnfbXMrb553bctv7cua/D5hFm6o99Zs+JwMYIqWUI4Gu8A5w8l1xyScLJtkO9ixcvTmWsUQ6nGwZO6rKLmNu2dA52LB3AeybyfZxJ76HOYEfUjJRe0WOOOFFYdxcMSon3Sd42zsRSHXE/EJiPCOjbEM3TGJFxNN0SGVvOFH2rookYwIleRa5CvoKOkIfI55KxRW1QB9+lb1vPOWIIxHmjHVPROZi5wtRcZn3kaCr1eQMXNNfyHj8O22buXMbDddZZJ9FhoqWhxehD6ABNMiHPMXgefvjhU7OY1A56kHfu2OuNN944yaLQP3vfnjNFmX7k0mzSdvVH71q/G93nSL89bef3RGQmOkwpIS+gk2CwE0+ok+lL9cT9QCAQCASGhcDIDXYQUa8I0Xm22fYLxcIgYXb2D6aNtwli6aeWwqggyEuWLJmKUMLLpBBwDxL1W8MKO9/h0SLqAG8V9fGHEUtrP6BgSNnz9elaXiYiOSDyRKIRwYDxkGgGlJs2CQMfihNrRSCQwCgYNx6h3KYO3CMfShM7X8H4ESYw2qGYMl6ULerAqKeIQRQRQroRPjBkWrzrzsGJMlZ5xTOIMY1dv0i0eeqpp1ZnnXXWFG7UicJmo8fAjKkS9I8ypYRCSd66PKWy476PAUH4NfWX58rL+OoMwuMeR7QXCIwDAWguThSUNrv2mZwJJSO+vh3Ks7EEtAc6SCSUNpogD0oUNNlGGzMu7Upr1ySVkco6lqChRIRBP1GibOLbxQgkA5mupSD6a9Wv5/A1HCs5JcO20/Zchkj6xI538DGbxCuZDiuFVMY6+IelzZpupb6iyIIDhk9oP04U+Aw8EmNXG9rFOxWPtf2alHPxGdHkSe7rpGAW/QgE6hCAfnijGbIO61lzH2OcNZqIZlqaiLOeyF/y43hmtg2084033phqGtqHbMpUfp7hlCCqyhvWpgqYE2ibaKUcPFqaRzxENJ5ijMnX6w12XWg7hjZ4F1Fb7EoLHYbf4IxHj+gin7O0EGVYq04JXUYO8tyRnVXRz5DZc8+5t3Tp0mnvSXXrOOm0nfcLPRdtp79tE3lVjt9DpEAgEAgEZguBkRvsIJQw0DbETgwe5uz/UOqYfkKCQbPOhfV+oKigPMCAmxIEGEOcpiHSFkwTgxYh8XhgUGKI2BMzb6qTjTMoY/tNPfLeNZUHJ5i2ymN4pKzdKTdXhyIBVY4+SNGiThRimK6MjijGRK4gFKhMlyOGNxnsiNIANwQp3aOPKMIIARgQlRDCbN+4r2kAtn/KH8dAIBCY3wgo2qvrejDQFU+zoGfQSyUMbNzz60fyXEsynHLKKdXKlStTEfgTDgQUJbuxjtZXhY5CT5WkXEqZ07WlvURdKZINxxKRCjiXiF5gWitjGOZOcxgF4ZMeG3sNzxRdZvorYyaaA0yI0MDRQkSCdzTBc8mPQRNcGQOGwUiBQCAQCOQQkDwvmqg80Cnkdy+ri4Zagx1lkMHZiRlDWC7JWWHpHOdd5cq77ror0T1oPqmLwQ7aifMZuqioNRwbTQlZWboHecEER5XVTfy46q6hzW3aVb/A2hsg9SyOgUAgEAgEApODwMgNdl2GqmmWMFr7h5fIGoSoE8/T/vvvn8LGMbS1NazZ/mDEwoDFdCgJFXjaUaJKIeC2vD+HwSN06K+NkdLWwZRIFMCm6bq2DOdgQ3QeCigL99qIB+ryghFlrOcI/PDMtfkTLghJ7H77wx/+cMa7QbD68Y9/nDye6ivKLtGNCGtK9BPhhMgNlMFIgUAgsOogwPfPDoIlRayEhOcT0BUiFSwPgCZC13PRzdB9FjO3+Wkr53mnjzhjoHuWrkq5lMEO+mYNdKW+y3CHYseaTp6vlcq1vc8YiDbJ0XLue54EBoyLfjBNiH4xLYwIeDtetY+xD9487H6r/jgGAoHA/ECgZLArjU40NLfra6mM7mOkks4A3SdK2NN35S0d4Qs4IUT3iNZetGhRWl5HZaCv9FMOcO7j4CFykIAB6CeR3sjyNo/K+yMOeSsT6zl9p50SLc/Rd+6xuRJRiW1TGOzaIhX5AoFAIBCYXQQmymA3u1BE610RQLDpKhR1bSPyBwKBQCAQCAQCgUAgEAgEAoFAIBAIBAKBQCCwqiEQBrtV7Y3HeAOBQCAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQmGoEw2E3064nOBQKBQCAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCKxqCITBblV74zHeQCAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgYlGYF4Z7NhcgZ0H2UQhl5qe58roHuu1sbAsC8HO1rpttN9mIVv1ucuRxW8//PDDLkUibyAQCAQCqzwCbMATKRAIBAKBQKA/AtpowW+M06dGbQCX20ioT322DDpAl40dbNk+54yBTY6GORb6v2LFiqHW2WdsUSYQCAQCgUCgHQJjM9ixk5J2T+1z9Dsz5Ybnd+7zeZqe+/z2GgHgqKOOqnbcccc0DvtsHOfs5rTWWmtVfXbQaurfl19+We22227VHnvs0XnHxqa643kgEAgEAnUIQH9Ku951uc+Oel2MZ4PwA40Hxeeggw6qFi9eXK1cuVK3p4633357tfrqq6cdWHPPpzIOeCL+tNFGG2V3HRyw+igeCAQCgcBIERA9RtYdNFHHaqutVrFT7TATu5Gvt9561eWXXz61m+yg9bNj+TnnnFOx+3ku3XfffdWw6Tp1gs/dd9+dazLuBQKBQCAQCEwYAmMz2J133nmJQcAk+vy1YVhi+LSVS03Pc2V0TwrRbBns3n///WrLLbestt122woGP+yEIRDF8vrrrx+aIDLsPtbVh2B28cUXV7fddlvWa/j222+n5+TJ/RGZ2Sa9+eabqfw111xTYYSOFAgEAoMhwLfbhyf4MjhUoNNt0yD8QG08++yzyZGSUzKJ9N5ss82qzTffvFp77bWrxx57TMWGfpxt/jTogL744ovq6KOPrnI4+rrhhT/60Y8SrijPhxxySPXEE09kI9+JSoEn5
Gg+94KOe3TjOhAYDgIlun7uueem2SrQX/v3zjvvVNttt12SQe19nXeh7aMy2OEQuvDCC4dqQLv33nsTf0C+f/3112eAf8UVVyTZfKuttqp22GGH7N/ee+9dvfvuuzPK5m4w0+i4446rtt9+++qTTz7JZZkz9xjLLbfckuh7kwzP7CQcgPCLjTfeOBled99998Qfco4++MyiRYum+DcYP/TQQyOb5TRnQI+OBgKBwKwgMFaDHcrL8uXLs9EURx55ZFJuSs+ffvrpaWHoGJg888KYtcYaayRC7J9x3fT8hRdeKL6ESVCIJISgZHRJb731VhIwvILb53q2DJZ14yX6cq+99kpKf0lpLwmPwqCNoohwcNJJJ6V2JhGHOoziWSAwqQjo23zwwQenKXBS1DDmoMwhQOuePeLAOPTQQ1MEdBeljjr4jksOnia8WCaBSAv42nvvvTct+2effZailolcZuoRdIN8peUaphXuccFyCQsXLqx22WWXtGxDjypmpcjf/va36pe//GXi2dDiJjqMYQ7jJ3kxhPLHOc6mSy65ZIYyJb4tOu+PQcdn5bVHo6sAAqLrV199dZL5ke2hgWeeeWai1f5bbLpuog0WUsnK9KFrUtmm/tQ9bxNgYPv1zDPPJBkdBwRGJXgLSfRrm222qS644IKs4+GAAw6YYUCsc1ScccYZiYaiE1100UXZOnMOjpIz3I5j3OfPP//8FD+o+30gQxx44IFTvALjp/RBrzOAPZGH4jMY98gPj+Gd8/uFb0UKBAKBQGCcCIzVYFcnHKM01T33oMw3gx2KXNP0r3vuuaf62c9+1pjvgw8+mAaXDHYYRdUG0R4wJULjdc8f8fzxZ+/jxcJwNSkJ5oqHTcKTZ77q5x133JHyEEFolX2dt1H0wYxpybTV5beqPsQxEAgEZiIgxa6kXDUZ1qTU2G+f6UVLly6tVUaYhrT++utXO++8c20+lJecskJ0AlEKZ5999rSpuBgQMdRZA50MeNxjWtWwEwoJfbEYDLuNYdYHHvB8KUWiq3VKF1HS4IcyzG8F2s/fc889l+5TFwqcTThzMGISuYPRV/ReR57P1pq0tp9xHgjMNwQ8Xcchznf+8MMPjy3C7pFHHsnCikyMDpH79qFBG264YaIrohNdjsiYXQ12dBJ8ZNCUPIojiHvXXnttdhzcJALPy6PiiZKLh3GcNN6C0QydRmMr8Q7yEb1NvpNPPnnasj9E3eEItBF2f/7zn5NcYPkMOBPBCC/BcMdSF5ECgUAgEBgnAmM12KEcoSTlvDcoTXXP77///kZcYKowLkVMaBFbTV30z3MVIsAfccQRM6L3UIZQCIjgwzOTi+AjfPrbb7/NVdt4jz6L8Qx69IxLBjvhQmeasBDDnzQm7YGUErf11ltX6667blFhBRNwffLJJ30Vra5RiFnjD08bv1MvILWqJDIFAoHADAS8YuedF4rMsA4HnAiKus7RKtG8QWmpyufo4A033JBogZ3GxJR5eANKlo+mk9GOKAqiCTE2DSvRLkom0RNW+RhW/cOuR++cPqP83HjjjYk+e96ldsGKaEbeh1+2gWfc45kfv/gcPP2bb75RdXEMBAKBESOgb5wjCaMTkWIff/xxtmV9qyUakC1UuCl5j6mf3sEsOowsn5MHKdvH4KauDFL+o48+mlrWQXQNedPyGLXDkTzoVESPtdkIg3eBHrNs2bKh8h/bp3GdwzfWXHPN5HCD9ud+N8KQ50TGNW3aB+/EAVeqT8Y8dAF0xUiBQCAQCIwLgbEa7OqMXSgxdc+vvPLKRkwwzDFdVMY9CQAyVPnnuQr7GOxKodW5+kv36Bv9Lf1pfY9SpIAtJ++c2pLyatcOUX0wMVtW55pmxlQzznWfqVcwQZ8wjo5bIdIUVYQrvKUcc4o1fUVYhAlLePT9r7tmvAg4eId//etfJ2NdGOzqEItngUB7BLxi19Z5IaUqZ7Brat1OrYQu4ERqEuZtnZ9//nm16667prWAoEPQP/gOfAwlAnpjI5N1TqTvBhtskLz0p5122jRvv62/6zmRJIyDabHQ6L6JqVTD3I2w1A8cLUTaiFdJwc4pXdTBxiQ77bRTMoT66cc8F4/zBgGMA9wr8YVS/+J+IBAIDIaApeui0TKo63uFZrX5kwxPj+r4g75z0RNv7JKxjiipq666KkvzKTuuCDtkS3SOnExN5Bfr2uWMjnoz0Pr999+/OvHEExvpNu2wdAz6CjL9sNJsyP7Chvd93XXXFQ1sioLHgZbjGx4DMCKKbpNNNkkR2f65fsfoAnVLKPlycR0IBAKBwKAIjNVgV2fkgAnXPe8zUIxM1GmZfZ96KKO6cp56PZOw0LeNunJqow9GEo5OOeWUpLi0EZBKeXJjhIkde+yxYw8Vx0iH4EV0hcaY6x+48hsgAg8PWdekdTIwbiLw8Q76vIeu7Ub+QGBVQMAqdrnxivaV6LiE6NK37+sk8o21z5hKROQ0i1AzhbWt0Q7lShFd9Onrr7+uTj/99KQ0EOkAjyjRT+7vt99+aVdZaBdOKujYoImxUPcgioSilVFuOB9nkoJdMthBt6HfpUgSreHnabz4ggwF4xxTtBUIrMoIWLqu71BTVHWt9e1waCiSmogxOTgw6rMxgKX9OEb8LB0tb8DUR2a5iJ5AE9kkgsgpLVUA3SU6uuSgsWXr6HjdMzmTmt4/jiOcPCx1YyMBNY3TT/N/6aWXkuNCM3xYX43xNPEQxs8an/R5n332mYGfx9Ne1/GC2ZD95ajHGAtf0PvK8Y5HH3004dOW/rOc0BZbbFEr3/NbBEeWE4oUCAQCgcC4EBirwa4uOgxjSN1zTWv1wEiZq2Oedc+sIODrttdqJ6cU1j2zdQxyrjb6GIokHI0qwk5eKXBGOBpHknKJx5D2Ncbc+0HBRqBDiHr11VeT8RVBo03SVFg8nVr0Pgx2bZCLPIFAOwSsYpcrIdpXotVdDHZaJwiFRMZ36pURb8mSJdMUp1x/5N2H3lEW+sIadxjN2tIVyrz44otJcRp0AWumQmlBbfpEP/ok1idF+eOvace9PvXXlalTuijH1DXGVlK89BsgD78nJU0VFu+DVxAREikQCARGi4Cl66yXbKNfJa/Zb1V03hpe9F2XaL9GoPq01pvoCQYt5D7Wf8bIhYPk1ltvraUBlB0kwo762+7a+tVXX02tw4axkWuMa8zogJb5qasffvhhiqiTwY5oMKaGloyPwkcR5RgHVbbpCHaenqo+HWdD9me88Chho3dtfzfqnxxZRLaTiIx/6qmnkkEY45znBfoNliLspEuAS649tRvHQCAQCASGjcBYDXYQub5/JYYtAuvXN7Ieu7pnpXo90Jpakws9Vx9yxiJfT99rtTGIwc6OVfXZe7ZvEpTajglDGF5LBI5Rp5z3UQJbrr8ai//tIbywSxZGuVxCCEKxtx5M4dbnPeTaiHuBwKqOgFXsWPrAKxJaciCnbKAYsYkD333u2xe2dsrq4sWLk2FN37JoIEYh2jjssMMqNq3IJXn3mfYKXVDZXN5x3WN9I6INiHI+4YQT0lqbKFJdE/SO
KBf+mhTArnU35a9Tuiir5yW8rSIl5Yxy+m1Z2s97I2rn8ccfn6GwNfUzngcCgUA7BPTtYSDBYWojdyWvQbMUzaUoOXY91b3zzz8/TQstfffqCQZBG10resHamLTL9w9tRy+AVtQlyraNkKurp+0z5FPGCV2Cn8GXuPfb3/62tQOori34ozBowtHWIwytUdU+1/k4ZX8ca4xFjnr6oH5ytIlIS202gcGSKdBgbHkBa9HZTfqsrsBmdv63QpQm8gh1+PZs23EeCAQCgcCwERirwQ5CizKgcHd7xKhW99wv4C0gvNKl+xz7PrN16FwCRo7hqZ06hVH19D2qjT6GolzfVV9OCUZh1iYboxxTHyxgoJqOxlEMVWPM9RfGzYYgTAlGEDzrrLNSNKeYN9FzlPeJ6RjksYvVCrc+78HXH9eBQCDwnVEFxSBnsPMGPHvdxmBHBBrfMEY2G0Gnb9nS9GeeeSYpayhsTN/yHnh59y+99NJEI1VWdVlloM35MIR+HCW0hdLKFCDGyXEupZLSpTHoufDWfXvkmVekiKjcd999qwsuuCDRfqY/w/PIB21nmti4jZO2z3EeCMxXBGSw46i140466aQUwSx5jaUJRM/lmCGyS/ckh9Z99zhw+a75U7Sy6AVta7MaouzaJMpqNgaR0zIedj12iVKGBt18881Tm2Awg6Rre7mpqzJw9XEwWQzb4DbqPCw9cdBBByXnlF3aRv3kaJM1vmEERr/EmfPyyy+n6awLFixIfMAa/ygvuZ/pyNZ5RWQ9y1mw7ITnM7bdOA8EAoFAYBQIjNVgV2fkgCHXPS8NXopSjqH3fZZrS8LHTTfdNOOx2skZi2ZkLtyQUUmCij9KmKnbmIMyuZ1qJRxZjNRndudFMJAnE+MV57oeZEyFoQ50W6H9eM4knFGhxtilv2LAMF9fToIOa1shbCoJtz6/VdURx0AgEPgOAdFWjn2SBHP/DaMEYXTbdNNN01Qooi2scUbfsqWLtI/HHc87dAGhnt1oZbjDOIZzacWKFYlfqazq0ppMCPooCLpGmSACRNd67pWMruMnko6+0haLamuRbY9F13rHnb+kdKkfei68dd8eedZGkSJKkugJFDKUr9xOkbbeOA8EAoHuCHi6jiGEb+7ZZ5+dktcszRcNtTRRtL3uu6c+vmMcFkqiF9SvqCgvyymvP1KWKZEYhaCj1qmNgRFD/1ZbbTVlVOQ5HEMshwAAIABJREFU48K4iAwuWd2Ow7fRdC3soGdt/yyW1K/IOvp25513Jn4leb+NMRAjF237epv6Porn8G36DPZMOZajnrb0rj3e+u0wBt4JPNsmGZF5bn87tIUjh7Y89qz/h7OO+749W3ecBwKBQCAwbATGarBj2g5h7zlmASOpe06ZpUuXzpiqJCafY+h9n+VAxlAHkdaiuTaP2hlESRqHwc4aG1FAUfa0NqCYm8YAQ2Qhb/4sc7TjHvd5yYhGP/oY7Gw5OwVCjBwl2Ed26l2HwW7cbz/am68ISDmRYsAupW0iGxRRkKNdGPb5RqHZKFF41X3St5zjHTgDWP8GBwl1bL311onGQBtYR8iX1bXG4K9Fn/zzQYV+bbxz9tlnp7WPoNWs7YOygYI8V1JJ6VL/9Tz3rshjpz8JY5XNHVknivVWebc4pyaFx+X6GvcCgbmIAN8h35e+RzkX+N48PWR8opmWJoq2l757PcdYhrNCSfRCbSsymim4lKlLyMnwjo8++igZ7Gzb1GdlRerhueRmrnPjqGuv7hkRwhjcSrQc4xLR41pjWXUhtxKpSFl4ofrE++j6JwxV97iP0GaMdPA09EDrdKMvetf2d8N9/TYYr9Y29H1XWb82Km2+8soraYde5AeWUMBZh1wgvpHTBX39cR0IBAKBwLAQGKvBrik6DMJY+sOLZY0kMKS269TVrWFnn9m1DCzAeOTZWt0zauURM7RMW8+GdVQbFoO2dUs4gjnBiDDCUZ/9wwt56KGHpj/O7TPOEbYUZdK23WHmIxwe7xbMF5y90ffUU09NXlZFCPJcCn1dP7755pupXR0RTDAWUBft5HbT0jor1rjcZepDXV/iWSCwKiKA4Mv3JsXACtrcL/35/Jb+8gyegdHNRuJafKFr0FOrkNnnnBOFyzQcH9Hry+paffLXosH+uVcyfPt11yygveuuu86YIqRNMYi8K63PWVfvbDyT4lTCA9z4HfAeMM75JDpu17Hyefy1NrKwvxufJ64DgUCgHwL6ZkXzkD2Rn9hFm91OkadLtN3fL9FoTV+0y6PQW9ETtQ1PwViH0Yfo2jpZlrZmw2BHvw4++ODkEBLi4oV+2qaeP//888kopw0YdB+ed/zxx09FD4sflXBUOXv0GNpn4zzH2YbhEdrODCIv+ysSUGsfXnPNNSkQoY0TR7/RtjxAdbIpCRsaRQoEAoFAYFwIjM1gx1bsIqS5wUEIYegkmBRr8GCkURIThfGQuPZMfdDrkrKgRb3tGhnqF0cxQ0/08eLzN4ykNvoY7MSUUIwlAHTFqk+7wxi36tD4u/RbwprqyB2FB4Ichjddd2mn9LvJtRf3AoFAYDoCfD9WANY36OmpSome6fvO5YeX5Aw7qoOjaEqTEkNdXsHzZXXtp7zqmjEOc0osfIVpO9ApPP6Wz9BfrfOZi0iwGEzKOfgwlhIt1W6vO+20U/Xll1/O6LY2hbI7Uc7I5G7od1QyArrscRkIBAIdEND3JTpNUcnx3MNgJ/rIBgusRyq5i7VMcchjkCO6KUej2RiIXVJxWuC8sEn0xLatmRPIen5apcrisGVjOYyK0Bl4kG1b/cYBo8Rzy6vEC0q0TOXsUe3m6JeMct4oqfGUHDOWJ6hPdiy2/dx5DsNcvlHfA3P9Ltocra6iaDi7EZHtr+rObSZo8+mcZSeYebNw4cIU+KD7cQwEAoFAYNQIjMVgx6LfxxxzTPWLX/wiKRYs7M1udjAR0r333pvWjBATzDEoGI0lxE3AoMThUWOxVbwzjz32WFOR7HOFnMMo7DoHNrOYoWXaPGc8RHyxmOygSW10wUBt3nfffSn6jPB6KbdE09lIuroIOwSFPu2q/XEcFcHi30FT22LAW2yxxbTdokrlBnkPpTrjfiCwKiPAdBUbvSwaVfqWJWRzJNXlzxnbcljn8mGk89NvVFZ0QAqQrtsoFDZPF6VObXPUWp5+KpTyEFmHIlennCrvJBylHJbwQIY48MADEx9jzSqfcPAxVj+1yefTNe9bi9EThRkpEAgEhouA6DS7MZMwsLEBDMYwImKVoLFMN8QYDx/AcMfsBmTSUtJu3XzzLAvgk+iJeISea1kVZvtgtPOOGGafYIyBjkBz4EGWXnc5L9Ey9cUe5XDI8TzwwfEiPQba9dVXX6W1VLnnx2jr1bn401xdw07jyB31rnN4iy8wQ4rfjE2WB5SmzPr8coTxe40UCAQCgcA4ERiLwQ5DEYvCYjgiEekFoxWjkdEEARoiKiOZlBE8RTDQtkYjyrOFN8Y6FltlzYzcemRtgJZiVApJpw4xQ89sWbSWqIphKARqoy0Gdmx4mWSQKim3pfv
UAyNsahdvH++zNP3M9mcU53UGO7BDIPIJgewHP/hBEsi0BpTP468HeQ++rrgOBFZ1BHK0XbTIOxX49vh78MEH0zcr/qH8nv6CLc4f1p9j4wgSdILFx+0uqtwjjxX4oRd77rlnimKzkQp6X6ID3mBH33j2zjvvpJ2odf3cc8+lKEJd67ltU3U3HbWYODy1zhEl3oVSx3lTgvdSN3zLK7JNZQd9Dg4ow3V4KA/LWFg+o0gTxsn7tol3m1PUwI38LG1ABH2kQCAQGC4C0Ge+6e9///vpyDlyP+vJKUlWZzOHp556KsmZbA6EgQo6nTPaQae0ppl2nVV9OopWiEfoPkfRRfpCJJ+lD5IjWcdOfIU14izvIRoceq57PLe8qg9tp5/0p2Q4gt4RCQzNIg/OGM4feOCBqZlJdoz+XPyqi8FReXMY2vpnW/bXu87xDpbyQXcDK2g+vx0lGW+ts5Bn8HvP8ykH1tRTimhUvXEMBAKBQGAUCIzcYAehwxBnQ71ZKw4Dkgx0EEcMJjLQMVBNQyUPGyOgjHnPXA4QjDAwUJifpgNJoIcww3ws0c7VoXsi6BDpOqYlZugVRsrA9HKMRG20PaqNJsOZr0+RCUQncC4hxPe1dJ/6tAgvfcgleSUZK8bB2UgStPy46AvP2PWLLdnxjDE9m7VUELzoc9vdw6ir73uYDUyizUBg0hHQrqZ2SopokRSG0lE0Wfn9t68oDGuUUSSDbU+LoSOIc05SWWi/NwLxXHTAG+zUJz3XteiTrvW8K2/A+YXzCUzE30rvGD4nxZZx4LyqM8TJsQbvHPe6nHVKl8YnpZWxwwdRXIle57yEB+8HOg+9h+5D/+EDjJG/0tQ4tRnHQCAQ6IcAtI7vEgMdTmt212R2jWgQUWInn3xyomc4CixN1LeOTvDiiy9O64AMbnVym+iJ6O20Cqoqre3GGqf0DweA6H5uNopoPHVQnzfw8NzyHjsO327pGnxwwECDSwk9Rus4E4xAX9vqMuqTHUupHd1vwpB8kyD7q58cc0m/F6IqWZ/6nnvuqc4999y0xm2OB8CrmWqd4xl9Az9y/Yp7gUAgEAh0QWDkBjspZDaCSQtE23UA/vSnP6VIOJgSCYUJwglzl9HJMkU/SJQ2FJJNN900CeLec0a9eFog0AgJLMxdx+wwGCIs5Ai6b1vM0PcPQxcCQUlo8PXUXauNrgY7bXmvKL+ScuvvYySlTYyrrN0HA5NQ4/uJcEUexkpk42wkKcT+HdAXdnVkKgD9s38oseBC/9umvu+hbf2RLxBYlRBgmQG+SStsixbZqAW+O//Huj8k5fffvpZWwIFDJAcJ5xDR2gjeRHYrYchBYbLTLXHYoJz5iC7KiA5IAdK1aL2/Fn3yz+241ZfcEV5FdJ6UzCZjnepg3OQFY3jZaaedVtyIQk4y6GJuV13VOYpjk9KlNlHyGQNjES0HE3YV1jtWXo4Y9bTTr/JzxDiLIaBOBrD1xHkgEAh0QwBax7cmmqfSyPSsWYesjoytCFfRTNFEpvXjaOb7lZxGHiKfmwwnoie+bfWBI98/gQQYFKED6BxMndQ6meIrovGUob5hG+xk9FK7to+cQ9eIPlywYEHCc5111kn0D/zQeehnUxK2dixNZdpgOAmyv/qp301uXPAz4Sc+AH4+6o6y7777brXVVltN8Rfy8xskmpNp3ZECgUAgEJgNBEZusIOIekWIgbJltl8oFsYDY7F/MG3WDUBAl9FJQMH4Ia5LliyZipaCKJcEceqnDgnw2223XVr0FmKOMYr6+MMTjxJAmxigcoqA+sBRDJdIDhQHFstlkVMYgqai2vz+XAxHjGQYR+qk3yirNsJEQohXbv19FB3bDxY492Hidhwoz8LQ3p+UcwQy3pN+W/S16b1OSt+jH4HAfESAbxAnio2sZpzQEugofzLK2fGLT1CejSXYcRAFzm4eIHoG7WOKp03aldauSSqDmnUsQe+IGIYPsPi5TV4B0rUURH+t+vUcvrb99ttPM1Ta+u25jRqnL/A7lMu2CToHH6MsNB3eBo/KJRw1/E16kkOpDR3nNyS6z3EujG/S8Y/+BQJNCEDroDeieeRHBmM9a+5jjLMGENFMa3iB9p1++ukpP7ue8u1CO994442p5qHTyPBMU+UZ0cE4mL1hbaqAOYGOIvOT5ODRzB/xEGvkYiy+Xp5beboLbaddRTazdI8cCBzBBroNbxPdZrowAQ8YmtAvuI+D5ayzzkrOJvqcS8LWjsXmg0dAS0UbOWe5GD9WW0bnky77q59WB2B8eu96bo88I4/4Rk4OsfnjPBAIBAKBUSMwcoMdRE+7bzYNRgweJuT/UOpYG4IEg4ZxKdqAvESeEYHQRpGBCMMINSWS8vLYEWaOQa/NFCI7HjbOoIztN/XIe2fz+nPwIQpimH/USYTIBhtskDxDwgXGk1OGJZxI8CCyDqWOP5TNOubmxxPXgUAgEAg0IQCNgmYuW7ZsSlFpKsPzJ598chqdheZijLJrI2Fg4x4GN+9o0JIMp5xySrVy5crUJPSPtZbYqEhKCw+0vqpfw9QrQLqWcgrfYwkHTXHCscRi6hjbdthhh4o1m+g3xsOmRB3wJ/jdXXfd1YsWo6ywbh8KGIbNLlHFTf2L54FAIBAIeAQkz4sm6jnyJPK7ZFLdFw21BjueIXsS+YwhLJeg70RNW9mb87ZRyKoT2oqDXZHXkomtkYuxeCMWzw8//PBq8eLFibZvvPHGiffYdVLVRu5IEIF4AWOBj8lIx330FHbT/frrr6cVBz+CC2w0GDwPXcQnYWvHYvPArzByegzhhyUjoC0f54FAIBAIBAKjRWDkBrsu3cejtHTp0hmGK5iSVzCY4rr//vunabQYpvoYlPAqMTWW6VASKlBsUKJQsLomGBuMUX+zzegYy/Lly5PBrWksEk5ksGvKH88DgUAgEBgEAegTOwiWFLFS3Z5PoPCwBpLlAfAL6DrGOZ+g+yxmbvOTJ+dFp48oQPADzpW8AuQNdMrnjzLcYbRjTSfP13x+XRM9Aq8aNNFPolYiBQKBQCAwSgRKBrtSm6KhuV1fS2V0H+OYHN7Q/T4b58AX3nzzzSk6T/T2okWL0sYUakfRZORVwsFD5CABA9B1HCLI3TaP8vojPIUIcdZUY+kWEvyNab8//OEPk17SVA98DB0IZ1BpRpCwZamgXJIjXxjCm3B6teVPuTrjXiAQCAQCgcDwEJgog93whhU1BQKBQCAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCcxOBMNjNzfcWvQ4EAoFAIBAIBAKBQCAQCAQCgUAgEAgEAoFAIBCYpwiEwW6evtgYViAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgcDcRCAMdnPzvUWvA4FAIBAIBAKBQCAQCAQCgUAgEAgEAoFAIBAIBOYpAmGwm6cvNoYVCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUAgMDcRmFcGO7Y5/+Mf/5h2TMq9jqbnuTK6x25O7ATFbkt+d0HlGeWRnaLY+Wq2d54d5Rij7kAgEAgE5hoC//jHP+
Zal2v7K17H7oc2wT//+te/2ltxHggEAoHAUBBArka+HoaMSx3s5J3b+XvQztLH2Gl7UBSjfCAQCAQCgUAXBMZmsEP4h4H2/YNJNm1vTt077rhjdd5552UxaHqeLfTPmwgARx11VKqfesaZEGR+8pOfVKuvvnrVZ8t79VUCUZ934JU31RnHQCAQCAQGQeDLL7+sfve73w389/LLL1ddjGeD8AONF8XtoIMOqhYvXlytXLlSt6eOt99+e6LbJ510Uvb5VMYBT8SfNtpoo+qtt97qVNvbb79dXXPNNZVovOqyfBQj3i233FL1qb9TZyJzIBAIrJIIiB7/6le/Gnj81LHaaqtVv//97weuy1bw2muvVeutt151+eWXV9DEcSTR46C940A72ggEAoFAYDIRGJvBDuEfBtr3rw2zEsO3ioaFvem5zevPxTQxCFLPuNP7779fbbnlltW2225b/eUvf+nVvMbf5x2UMO3VkREUQjC7+OKLq9tuuy3rVWXs1113XbX77rsngQuh65BDDqmeeOKJxohJDJ0vvvhiddxxx1Wbb755tfbaa1c777xzdeWVVw7FGzwCOKLKQGDOIMC324cm+TI4VKDTbZPo4SC07dlnn63WWmutKqdk/sd//Ee12WabTdGMxx57rG3XOucbhD+Bv+VrqsviAp2E7vG3/fbbVzvssMOMvyOOOCJFyHTu/AgKwC8XLVo0hf3ee+9dPfTQQ0WnHzSeMcITNt5448Qj9t133+qee+6piCyMFAgEAt0QKNH1c889N81Wgf7av3feeafabrvtquuvv37afeXpQttHZbDDIXThhReO1XEhemxpdLc3Mfu5v/766+rqq69OMjoOolzCAPrKK69U8BFoMH/I3NwrGUct3Uamp8yPfvSjzk6rXH/iXiAQCAQCk4TAWA12KC/Lly/PRlIceeSRSbkpPX/66aenhaETaeaVBoxZa6yxRhK2/TOum56/8MILxXcz20wThnXDDTdUxx57bPXxxx8X+1n3QArqmWeemRWIJBjljoq+qKt/tp4RfbnXXnslpT+ntOvdoeDD1PktcJTCj6GvFL35t7/9rQIv5cVgh8KK4jqXBajZelfRbiDgEZBi9+CDD2bp0hdffFGhzGGEydEmHBiHHnpoioDuotSJHlrDlO9b3TU0mUgL+Np77703Letnn31W7bbbbulvxYoVFRF25MOIN4rEcg0LFy6sdtlll85GszqDHWN84IEHqnXXXbe64oorktELQxjGL0VF/tu//Vvirfvss0+FYjabif7efffdiT5Ds1HgttpqqxTlyDW0HJpuE9N8jznmmETjiWInP3ReNP/AAw+s+A1GCgQCgfYIiK5jqIFWINtDA/kGkdP0fbU95pwipd4MYrBT2bb9yuVrE2BQ6ru/Pwht93XN1jX6GrQVrPhd+ATdvvXWW1Me0WBoN/nR6YhUJ49NVjb3ZZDP4QO+jC0f54FAIBAIzCUExmqwqzNwoDTVPfegzieDnQSbHOPvey8n3AyqoPp3MAnXMGSmagmnksHuqquuqt59990pBk655557LgmQMPtHH310xnAw4mHMo+799tsvGQyUCc8eBoSIvhAicQwE+iEg+pcT5KmxiW7JIG+//U8//bRaunRp+n75hnN/55xzTrX++uunaNncc3svF7n7ySefJOP92WefPW0qLgZEjHXWQCcDHveYVjXshEEJR4LFoG0b4E5kC0ZRsJYBFJ5MhNmaa65ZLVu2LBm6oKE4PI4++ujqq6++qsAZg5Yda9t2R5GPdV55pyjMjEsKG/3GmAmtR/mziSnE22yzTeIj1uBLmT322CPRf8avumzZOA8EAoE8Ap6u4xAnGvnhhx8eW4TdI488ku0c3zw6BHKcT8jOG264YZIPoYdd/4gQHKbBbhDa7sc2G9fwCGivZPQcn+ceRjZ4y5tvvpm6ybu5//77031mFyFvK0GLwZk64T+0QbJl4EmlaD7VE8dAIBAIBOYKAmM12CFIoyRZRUjnTDGsew7hbkpesYN4E32l6DD/PFcf+QnJhnHYP0VV4e0hUs8+0zmRB99++22u2tp7MBXhUDqef/75aUosjIvzUj7dZ/MNnzR+Mc4uxxyT9fXPxjXYwZi33nrrFAXSRWGF6RMhAw65KBsESxQ81qia7ciR2cA22gwExoGAV+yIQlP0lo3MIArb3lfUdc5gh0KG0tSFxtXlzdEVIp7hWa+//voUTCgb8IOcAUtGO6J7iSYcpgGIdlEyzzjjjGnGw6mO1ZwIfz9+aCJOizfeeKO66667UuQZSwOwbt/pp59erbPOOkmZwqj1wQcf1LQwnkdMV8N4yjhyDisZ8+gvfF6JiHWMc7n00ksvJb7SJ3IxV1/cCwRWFQREVyQ7XnvttckwXpohIvk09+12xYw6oANMqfROVdFhZPknn3xyRtWUHcTgNmh536FBaLuva9zX0ORLLrkk8Ukil3kn+j2oL7wf3hOyNjK3TfAfzXDh96MkZ1nuPVlj3jjXGlTf4hgIBAKBwCgQGKvBrs7YhRJT95z1wpoShjkWz5ZxTwKAjDH+ea6+PgY7TbXNKXW5NvrcyymlXesRHhhHZdhre2zyVGEc/eabb7p2aaD8MHqmmsG08ZZy7PoOJNjpN6IOEW7PekZ4hFmnKlIgEAiMBgGv2PEteuNR7lrCeh/aqDXZVC90sDQtPjfqzz//vNp1112nFEJ59uFjRKShXFjjos7vuOOOaoMNNkjKyWmnnTa0qZZEkjAWpsUyhapLYidFeIP+bISd6gEbIhoYHw4toiEwEBJ56BVilbHHcfAHeDeGtU022SRFC9r2OdfvBJpet/yFLSee2SX635aP80BgVUXA0nV9e3IodHWoWPmsjj9I/pNc5x0qMtZhHGLWRY7mU3aSIuwGoe32t0cwQW68Ns+wz59//vnEK1j3T84Ub7DTb8E7UtQX8vO+iKTT7rwEJHCPSO9ckITq7MMP1W4cA4FAIBCYJATGarCrE3phwnXP+4AmYdsy+z71UEZ1oax4w5SeSVjo20ZdOQk8g7Shfg4DD9tX+sbaejBQP93I5hv2udbFQJEUg+6KD4o1ii5Cmk1EzSDsWSHBPo/zQCAQGA4CVrHL1dhEt7rSRiL4WKOMNdmInMYwzxTWtkY768GHlhJ9S8QZdAR6AY+QITB3ZHo9u8pCL3FSQccGTYyFtroYo0ptCk/GhjHupz/96dQ6cPSZ/hMJQT7RXa+E2brHxR+I8ttiiy1q5QjGBE733Xef7WLxnGggpssOWzYpNhgPAoF5goCl66ITmqKqa61vZyOpmYUjBwd0ho3CrMyKQ947mrW8gQw4MtjxrWMsItJLSxVAw4iOLhmvbNkc/W5zT86kYbzKYdB23gVOFoxi41qPEwcKa0sT0AD2or2eV8ggKWOux0x0HTqs6EzoN+/B/i5sOckM8INJiP62fYvzQCAQCAT6IDBWg51dJweCav8Ie657rmmtfpAizG2YaC5PieCX2skZhNSH3DNfT901jEWCij9KcEF44dw/13Xdoubq57Aj7BTZAL4IR+NIm
gqLQED7EgC7vAPWkmLqmgQK228JBBoPSjmCBjjTVknYs3XEeSAQCDQjYBW7XG7RrRKtloGpzbdPZBXfPAof0RYYYqhXRrwlS5Y0Royxlg5LE0hhwIBHpBmKFX1pkyjD9FL64TdBaFPe5iHqAEOh+Bv9aJOgm3XLPwhvoiQuu+yyFGnsxye665Uw2/64+IN+J6UIOzBHsQcn76Cx/bXn8FoU/BNPPDG7+7jNG+eBQCDwHQKWrrMBgDW45OiGvl/7bYq2ixZ9V/v0M9WnaZMyurFxAcYz1uJkqQIcJGxuQMRvKVF2kAg76mdX6tI0+1K7uft9abuvi8hu6F6JNvr8g15rKiy0Uw4p3iF98LzipptuSvf17nzb+l1YI6h+WzLQ+jJa98+W8XniOhAIBAKBuYTAWA12Uij6HEsMW8Tcr2+EYUW7UtU9K9XrX6I87TnBXX1oozD6eu21hIw++KhM3XjUT6Y1ad29tsemKUSvvvpq8lqyEPmoEwoujBqPIcokSQJb6R0gQLz88svJ4IZ3lXwIbwhWOQ8chjowZUMLu+ugcEZhR+GOFAgEAoMhIOGbI0sfeJqkJQdydIvvF8M733Pp26d3dsoq0W0ogqKHopmsZ0Qbhx122NQi1n5kmobPtFeUEZX1+cZ5rWhgopxPOOGEFEWBkawpMRamFsEriXRhJ1h4kJxDjK0rTypFoo2DP0i5F93GQGcTUR78lnjOuJqSjcjJbUrUVD6eBwKrMgKi60899VQylOMo0dIqktegWYqWU5TcAQccMHVPazc30VlkNOgX61SSRLduvPHG5KDhm4e2Q+s8XfDviLKTYujpS9v9mKCNyLK8i6bx+7J9rrXkBEvWaMkE3iHvgd+FTbpfosnMaMKxBL/V2tz6/fCeckEKcrRMynu0443zQCAQCAT6IDBWgx0MGyMaTNP/YVSre54jygzYK10WhL7PbB06F4PICQ5qp05hVD11RxhWybOH8nDooYemP85p0/6xwx8Rirn+qU31sy6P8k7qEWFDu0NxlPCh91N6B1aZQ2jg7+CDD06Lv/PMJwkRGAQ23XTTtCYVRksUNzahoLzfucrXEdeBQCDQjIAUu5LBzhvw7HUbgx1RCkRwY2SzEXQ5evjMM88kZQ1Bn6k6PhKDKf8oDpdeemmaTitaqrpEW9oeS0pKM2rf5cABQXsordAnxtnVwEQ/mB5MZIJoJWPjnEjEm2++OSlGludwzk7b8Cw20eAaQ6HH7Luejv5MihrOHGQNRUITFclUXqYMg1UT7to8hLwYFFTP6EcQLQQC8wMBS9e1dpwMOJLXWJpA9FyOGaLidE+bvYnO5pDResMsbaBoZb5vvl36oI3FiLJrkygL/cfJQOS0DIpdjzIutWmzlGcYtL1U96juyzHiZ65Ipuad2KT7JZosfqT3SVnosTajYN1SDMHoAvAe+Da/IfhgGOws0nEeCAQCcxmBsRrsSt53AIRo1z0vgSxFKcfQ+z4RJ5iXAAAgAElEQVTLtSXhg/Btn9ROyVjk85euJSggzPgkplVqQ32wOMDAWICcZ/zJqAej072ux9lWyOS5I8JOwhlYSQAs4WOxINIO4Y3ptAgBCIfy/Ap3CRGsY+cj6RThR9nYhUqIxTEQ6IeAaKsX5NvWVqKNCPUI7xjciaYl2sIaXqB9mhJr2yLilrV++L4XLFhQsRutjFAoUDiXVqxYMa2s6tKaTIru1jW0nQgQXet5SUmx/ak7hx7TV5xd7733XqXd80p0MFcXzjDKsyg4kcjCU7yEsR1++OEpQoXpXoqYoC7tYNj33eX6M8g93i+7EmJU5f3Zv3322ScZWrlXwp33zBpZRONQBxGfdryD9C3KBgKrEgKermNMx5DOJl6S1yzdEA2136anRTn8qA9DPA4LJergO6d+GZBYpxTDYVOiLFNHidaDjtrIbgyM0AV2PJVRkeeMC+Mi92R4tONoajP3fBi0PVfvKO9JNgYPZHWbJFPbd85z3S/hpd+A3qfq5F1Khrd0nvdz7rnnpkj5MNgJrTgGAoHAXEdgrAY7jB+Evec8VayrVvecMkuXLp0xVUlMXsqFfSF9n9k6dK51FlAAfVI7XZQkXwfXMKwSgxHTKrWhPlgcVMYys0HP+xhVc2Ptc0+KZU7wkgBYwifXHsoZ0wRg8NY7S14JEaWFcBE6KBebUuSQjXuBQHsE+JasMM6upW0iG2RkF53Tt49xHmUBWkW9KFEY6X3K0UzlQfFgLTgMfdSx9dZbJyUTJeHDDz9MDg9r7FNdUkb8teiTf15SUtSPpqM23pGxjbEvW7Ys0SYU5LqEcY7IOHiOpanC0/ISaOVdd92V6CRrM/3sZz9LfJw1VSdtYW8weOWVV9IOvrx7+oihlXeqpQ5yfFyRmNB1MCFKkboiBQKBQHcEPF2XAYpprp4eUrtopqWJOVpke6LnGMtwVihRh+UpioxmCi5l6hKyPrT9o48+SgY7SwcZk5fReS7eQ725cdS1V3o2CG0v1TnK+zhL0NOgnzh2PO2UTC0eqL7ovn3vesaR3w1RdB53nuFMwfm+7777Jj6PHA/v5x1Qxq6baOuM80AgEAgE5hoCYzXYofzgfZJnqssRL5Y1FmG8YVqtIhXq1qlr+yy3lhkvFKZw3HHHZRkGz8WgLdPu80OAYeWYEnVJMCm1oT5Y4QKGaSPscht7ELaPoZQt7qkj90c0yUsvvZSewTwVbdJnjH3LsOkDERIIYWDgjb6nnnpq8rIyTRWBkOdS6Ova/PLLL6uddtopTesiWkRJil1JiJDAaX+TKhvHQCAQaI8AxhOrXInWca/uT4K/8lvayDN4BkY3DDW5lKOZPh9TKZkC7yN6fVldq0/+WvTCPy/RF9+P3PXnn39e7brrrol+a+0m8mlTjDY7ArJuH8Y+u3Og8BQvwYAK3Vd0Im3BU+HfKEooSF45y/V3tu99++236T0yhdfSevrF+LVxB+/a4jHb/Y72A4G5iAC0ztJ1aAQOe9YjQ55E1q2j7/aZaJHHQVPg7fIo5IGu2rahaRjrMCbhpK2TYWlrtg12w6DtHqtRXxOZDL5Ea4Ohl9EJyuCdaN1CnHLwFr0rbfDm+wktxiDbxfjGbCKiJPmtsQZepEAgEAgE5joCYzPYQcyvueaaqrTbK8K0hH6YK95tjDRKYqIoQiSuLUMfxnlJedLCrz4KS32TcmYVRp4RwcBf20T7wzTY+XY9huojjBIm69cJxFCJpwzFN/fc1z/Ka2Hc5T1LOa7rl5RTBA275oh21SoJEVLAFy5cmIyidW3Es0AgECgjAN2zRhR9k56eqgavCObyw0vgKXVJNAW6WJeoyyt4vqyu/ZRXXTPGYU6Jha8w9RN6CI2yfIb+ap1PlCYZ2urGaJ9RHkePeDW790lZYpzwQXgCbaxcudIWnehzpgzDxzzN5vcjZT6mwE70K4zOzSEEPJ2m65JBeYasK/r429/+Nm0AJvmO7xCHPAY5ImRzNPrTTz9NUVQ4LTBw2QS9pS4rA2odvVIEGOUxILGxHIYenLnwINu2+o38p8Rzy6vEC+hDnzRK2t6n
P23LCHO9w6ajMMNpRF5d+/ZwEME7u8xmYXo0dbbdMd23GdeBQCAQCEwaAmMx2DHV5Jhjjql+8YtfJMWChb3ZzQ7GRrr33nuTN0RMkJ0/WQPBes3E6FWmCUgJ4Sw8Sl2PPfZYU5Hsc5QdLW5q18iwmcWgPcNhPER8sWB3mwTDG7fBjn5J8NG0KJRTBCjW6YDpccTgOslr+ciA5t9BE+7a/ddP65KRligVokt8kkBQmjLr88d1IBAI5BHAIGTpXs4AZ0t6RbAuf87YZuvSeS4fdLBk7BLNlzKn6yYlxT/vq9QR1QZfK218Q1QCtKtOOcUghxPNR0LY6wsuuCAZ62zkMvcUKX/RRRdNK1/nlBPWs3G0Rkymx9qEcxCc4POl923zx3kgEAg0IyA6/fjjj6fMyJlE5PqoJ745vklmOsAH2EiI2RSsPVdKyKJsYMF3y9RRn2Q8sgY78mhZFWb74Iz2jhgcFRj0kevQW5AnPc1uez2btN3jMQnX8Eqw8+9EMjjOFJwqPvHboBxyQptEoAdR8cwcQo6PFAgEAoHAfEBgLAY7dtdkUdj77rsvYcYUKBitCLc831rAX0YyKSN4nGCgbacfUp4pnhjr7rzzzjRFsm+EmBQjFjfNGW4YkJQ1byySZ6itl2cQg53CxqVA5n6cdUZPjRNFjD8YJHijzLRRYvBe8j5L089y/RnmvTqD3R/+8Ic0Tcy3h9BHdApj1e5lymMFQj+FQp5aftN4ByMFAoFAPwRytF0GOHbFzu2IzbprVvBXfk9/6RHOH9afY+MIEnQCxwR0TYl75LEKFkrEnnvumaLYbPSayojmi97qWrulaoMfXfvdVPXctqm6m46vvfZaihSD/tQ5okTTcwuA0wb8DOW5bmkKdtsDa/6oRwurl8pQX45Pjos/8K78+8JY98ADD6T++2nC+v0xvqDlTb+8eB4ItEcAeZDv6vvf//4UDUHuZz05JcnqbObw1FNPJZmTzYFwGkCnc0Y7vmeMbdTl5TbVC12lbekYus9RdJHyPqJWciTr2Imv2E3aoOdEg0PPofn88dzyqkmg7Xa8nIMzehh9A7/ZSCWDHTRYcjjL2Vh9QwZW6YK230R3+7HwzhYvXpzePRHonhfY8nEeCAQCgcBcQmDkBjsIKoY4TakBHNaKI6JJBjqIKuvoWKKsCCfyEAmAMuY9czmg7cLRmg4kAwveOxi4J/K5ergnZoGikmP8KidlzSuMEljaKmXks5Emqp+jhAe1wThRQpk2wN9PfvKTJMDUeaFKBju8jBioUFiY6oSgc9lll01jnLYv/lxeScqVppD6MsO+lqAlfGz94IpHlZB6vHUYjJcsWTIVQajIQluGc71/BDt+eyy6zsYn7DrJWPX78uXiOhAIBNohoF1NmYbEdCSSaB3fWN2faLLy+29fRnfraZc337aHgQlDjo2mVVloP0Y/n0TzvcFOfdJzXYs+6VrP2/IGtY/ShfOpDf2Bz0mxZRw4r3xEierNHeHTLNzNenXXXXddcrr5nXZz5fy9cfIHcKbPrJVFVDj0fr/99ku8Mee0028HPDEalAyR3M9F8vixxnUgEAj8HwLQOr4rDHQ4rS+99NKK2TWiQV999VV18sknJ3qGE8LSRBy/rCWJTvDiiy9Og1QGt5LcRmboKm2L3k6roKqmybrQNzkZCCrAEQKdFW0QjacO6vMyOs8t77Hj8O3WXY+StjMu8GBdN4x2s5HAqfROpKPxnOAI+BbTpZG1kb+59noba+AxXRqjK4ZUK9P7NWdnY7zRZiAQCAQCw0Rg5AY7KWTaxY7OswgoBhC7lsyf/vSnFAkH4SahMCF8w9wxTmFssUzRgwBzRSERgfeeM+qFEUD8ERJYmNszAFsnBkOEhRKzsHnFoH3/tLNsSWiwdXDexWAnoycMTn/0t44ZwzBtlCLrfqCIMeVViiOLcaOcUCeKDsKSBCzfX10jXLGuEWWIbJyNJIXYvwP6whgWLFgwhZPwQolleheCYymxIyTh9SrDEaMmwoL1BJbKx/1AIBAoI6Cp5aI/5JSiZKMWoLH+zxv4/LevpRXsVEdFVPnpNxh2UNSeffbZqc5isEc5Q6HzkcOi+VLmdC1a769Fn/xzO+6phjMn8CqUEjlU2joLoFHkhW7By0477bTGDRUow9pRjB1ewOY98ACUJhwf0EOwqeOfdgjj5A/sYKulHESz6TOROEzJ80m/NeWtO7Z9V76NuA4EVkUEoHV8T6J5wgBawpIryOrIrJq2KJqp74xZI8j9fL/aPIg8RD7njO+qnyN15Nq2eZALCSTAoAgtQ+dgczmm5rJ+nWiDaDxlGcuwDXbjoO2a9k8kOZvIzUaqM9jRn5yszW+EKPIcr2F3WH4blmYT/cjSS7zLSIFAIBAIzCcERm6wg3F6RQgAX3nllRkLxcIgYcj2D6bNWnYoG35qKYwfAR3PCoQawo1hBkacI/DUTx0i8tttt13y4rz88svJw0Z9/OGZRzGiTQxQTYYZRRAQyYEhh4g3Ni2A2fi10ep+PGx6QARXnWIhpZR+4hW0WEmBzbVhFVUYmjVgYbhCGVRCwbI4IRyBMV7QUhvcpz/0a1ITkZrCi742vVc7DpXtWs7WEeeBQCDwHQJ8SzhRUNpwoChBS6Cj/OXojfgE5dlYgh0HoVF41bXRhJQtaLLdQZU2tCutXZNUBjXrWIJmEjEMH8CAZRN0BOeHlDldSzn116pfz7WEgZRTW7c/t1Hj9AVa3EUhgc7BxygLj4S3waNsgm4TfSgHDvkOO+ywabwInmqdH+KfGO/Auy6Nkz943pj7DdX1NZ4FAoHA4AhA66AjonnUiKzMetbcxxhnZV3RTEsToX2nn356yn/88cen2TbQzjfeeGOqg9BpZHimqfIMOZooW29YmypgTqCjklnl4NHMH/EQ0XiKMRZfL88ll5Nn0mg7fYJ2g30TnTbQzMqp+slvAf6ud1PqDLSdvG3zl+qJ+4FAIBAITDoCIzfYQXRhoG0YhRg8zNz/2egxGDRTcxRtQF6UJ4xObRQZiDsKjIx8lJfHjtBxDHpdpxCxcQZlbL+pR967QX8IEh6sYNClTq2np/7RV6aFvfrqq0XDVQ4nvwZQlz5E3kAgEAgEhAB8ATq0bNmyrINF+fxRu8qJlnHEGGXXRsLAxj0MbvALmxSdfMopp0ztcgp9Za0ldgvFOK+k9VX9GqbQxjqDHXyPKHKmOZFwLLGYOsY2otaYfkm/MR42JU2Vgt8xLb9JicnVhyJElAXKpqYL4Zhhkwj6ZXkXNJ7lFkrtYACkLqYjyQiod/DjH/+4FQ/O9THuBQKBwPxBQPK8NdgxOpwXyO9eVhdNtQY78kOHiHzGEJZL0HfWuLb8gPO2UciqE9qKg10bH0jmbmOwO/zww9PaadB21v2ELtp1UtWGP46Ktvt24joQCAQCgUBgbiMwcoNdF3jwthFhBqO1f0S8+SlJTHHdf//90zRaPPwl5aKufRQPIjuYDiWhAsUGJQoFq2uCwSN06I/rYSUiRxYtWpT+FEXSpW6EI6YFMc0VZaxL1AE44dFkXTs8mZE
CgUAgEBgUAWgtOwiWFLFS/Z5PEA1M9K/lAfAL6DrGOZ+gZyxmbvOTJ0cT6SPOGPgB50pSLqXMeQOd8vmjDHcodqzp5Pmaz69rokdsFKLudz3ST6JWlFiXbZ111qkOPvjgZPBkmYQuiagNovVYM44dIFHGIwUCgUAgUDLYlZARDe2zViTGMekM0H0c1J6+l9rVffgCS8KIzkvmZnkdJXgE/SSvEg4e6CABA9B1HCLLly+flkd5c8dR0fZcW3EvEAgEAoFAYG4iMFEGu7kJ4dzpNQKMhJG50+voaSAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCqxYCYbBbtd53jDYQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUBgwhEIg92Ev6DoXiAQCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgcCqhUAY7Fat9x2jDQQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUAgEJhwBMJgN+EvKLoXCAQCgUAgEAgEAoFAIBAIBAKBQCAQCAQCgUAgsGohMK8MduyE+sc//rFi19hcanqeK6N7bNbATlDsENV19ynVMchR7bMjVaRAIBAIBAKByUDgH//4x2R0pKqqb775ZiD+BI+cDf42MQBGRwKBQGBWEIDuIF///e9/H7h96mAn79zO34NWTh/tTtuD1jeb5cGp7U7ls9nPaDsQCAQCgVUdgbEZ7DA0wUD7/vmt1HMvjrp33HHH6rzzzss9Tm3XPc8W+udNGNtRRx2V6qedcSe1Xxpb1/786le/qjbaaKPqrbfemla0dH9aprgIBAKBQGBICHz55ZfV7373u4H/Xn755aqL8ayJX7QZHorbQQcdVC1evLhauXLljCK33357tfrqq1cnnXRS9vmMAgPceO2116oNNtigos0+CR671157VUccccSUQorCe9ttt1X3339/nyqjTCAQCAQCrRAQPUYGHTRRx2qrrVb9/ve/H7SqaeWhseutt151+eWXVzjRh5U++OCDgflfKVCh1Edk/8022yzxJhw1kQKBQCAQCAQmF4GxGewwNMFA+/7ljEseVjH8klGr6bmvz17LYIbBj3qGmT799NNq6dKl1cUXX1z8O//886stt9yy2nnnnYt5VJ66qLMulQxzpft1dU3CMwQzxo9y2eRVRdB6/vnnq912222GwdKP5Ysvvqguu+yyarvttqvWWmutasGCBakNfg+RAoFAYHAE+Hb78gVbDodKl+9yEH6gUT/77LOJLuSUTBQoFKLNN9+8WnvttavHHntMxXodm5xen332WXXaaaelNl988cVa55jHCZq4bNmyZFz8zW9+M9U/8b1zzz039f/ee+9Niip5dthhhxl/V1555VTZ2TyBbl900UXVVlttVa2zzjrV3nvvXT344INVnWLap8xsjjHaDgQmGYESXYeWMFsF+mv/3nnnnSRnXX/99dPuK4+nWXVjH5XBDofQhRdemHV21/Wn6Zn6a/lZ1/OS3lNqG1qIIwmH0sMPP1zKNjH3icB85plnquOOOy7R9TXXXDP9XpDPod116f33368WLVo0xYvhBw899FD1v//7v8VifcoUK4sHgUAgEAgMiMBYDXYoL8uXL896ko488sikaJSeP/3001Nef8acUxi23Xbbao011kgesJwy0fT8hRdeKMIpxWUUBjuiNBhfXZQJDHX33XevwKkuH888VrlBlQxzpfu5OiblniJDEHCalPYVK1ZUJ598chJSmozACAfkoV68qvr9cH3ggQc2CgmTgk/0IxCYZASk2GFQkXJmjwjjKHMI0Pa+zv/yl79Uhx56aOO37zGgfN+Ia+rCyEWkBXztvffem1Y9xjMcAvxBc1CMyNc1CsJWOqjTyyqA0HmbnnjiiWRUPPPMM6cpMeJ7tA0PWnfddSvy5vgvPHcSDHaWbm+88cbT6Pbxxx+fnQLWp4zFL84DgUBgOgKi61dffXWSWZHtoYHQGOQ0S4/anHuaNb216VfkpU760DWpbJs+lfI0yZa+T7S54YYbVs8991yWx4nX5Y4ydHY12NEHouzo6/777z9Nv/L9m4Rr/Z4wMOKI4Y9z3kGJt8Kj77777sTbyAc/sOX4LfopwX3KTAI+0YdAIBCY3wiM1WBXZ+yC2dQ9968hpzDIoIJxZS4Z7Bhbk9GurcGujbGO9hAQckJF6b7Hf1KuYa633HLLlPBXMtghCDF1DQbPH9653Pg1LowA/J6IjLnzzjun1nXCeIDRFOZ/ySWXdJqCp7rjGAgEAt8hIEG8pFw1GdZkVLLffpuo5XPOOadaf/31W0Ut5yJ3P/nkk2r77bevzj777Gl0ANqBoc4qETLgcY9pVX0SPNKOkToY5y9/+cuqtLYp93leF3GtSED6TD9tEra0TTTCBRdckCLxctN/bbnZOpfzBrrNNF6txwd//cEPfpDo9g033DCte33KTKsgLgKBQGAGAp6u4xBnlgKy7Lgi7B555JEZ/eIGhip0CNEHm2kQ4xm8igjBOtnStqXzQeTuEn9ss9TEo48+Wl166aUp2qwuEKDrchMa1zCP/H6Q9eFJShjb4OPI44cccsgM49uf//znxON5H/we0RdI7777brXLLrskXcAvIdGnjPoTx0AgEAgERoXAWA12KEcQV03dtEemetY9b7OGjmdcWsRWyox/ngMV4Z01fLzBD8UMJYAIPgw5/jnXhFx/++23uWob78FACNPO1dvlHnVQV1MqCQi5+zA5plgRil6n+DW1OYrnb7/9dlKMt9566xT94RVatYnCCVPfY489qj/84Q9J8a0TqlDqyM/0B78ulox5KN8+skbtxTEQCATaIeAVOwxIVnlQZIaPLpZzQkYl++0rcoBveBh/tm6NChoBz3r99dd1q3rzzTcTDbfGOj2U0Q6HEtGEUh70vOkogx18TH84Ivbcc88Ugcg9+BdLAnDkmugLntvIDavwgBNLLeT6S3+ELW2T4Kld+900rmE+Z9MpHDKnnnrqtEhB2oBWM074O5tzKPUpo7JxDAQCgTwCnq5fe+211TbbbFN9/PHH2QLQK5z2yKCDJuqA7iOz+mnwosPI8k8++eSMpnIy8IxMNTf6lO9TRl0QbqLRui/8R8X/1M5sH0XXvTyP3I4zjfGDr08yzKETwC9Jfcr4euM6EAgEAoFRIDBWg12dsQslpu55m6k2GOauueaaqQWyPSPzz3OA9jHYKbIvp9Tl2ijdG8bCs9TRJpUEBH8fD9YVV1yR3g3vJ8f4aA9FzipBbfowaB6twQGjxlvKsfQO8LQS2UI/pYR6Bq/+YHQ9+uijk+KHMucTCitrCiII3Hffff5xXAcCgUAHBKRYcCTJuN6kaOj71fdc+vZzXdEUULWB86huPRtfx+eff17tuuuuUwohdAWnEnyM6F2UU2t01Pkdd9yRNobAqMR6c01r79h2ZbC79dZbpxw7nm9ijCISQc4fnjNGjnL8QCtJjAFHGc+kuBJJ8corr6S1gsAkt3ZqG+eZ7Tfn4+IP+i3l+JTkASIrpKDRtz5l/PjiOhAIBKYjoO+Ko2j0GWeckYwiXR0q1h
hVxx/EA/j+oXveoSJjHfT3qquuytJ8yvadngqNmZQIOxw39Kfujymh4FRajkJliYgsOWrGRdun/7q+u6KPGHolD+gJNB5av8kmmyTHle7rqN8kUZ9aDqlPGdUXx0AgEAgERonAWA12dVNeYcJ1z/uAIEJumX2feiijurx33j6TsNC3DQQFmAfRATb6sM25pndZRUUMSUqpP3oGR78pz/1XX321+vWvf11tuummiaEznZQ1pHKJdo499thseHku/7DuoXgieCEgSQBs8w6ES2789A3hZOHChTMEANtvcAJPjJmRAoFAoD8CVrHL1SLaW6Lj+p7bfPvUTwQfG0Hw7RI5zVQapoO2NdqhuEBz+P7p09dff12dfvrp6Zq1LeERntba6/322y/tKgvtwgkiA1pu7PYebdkx4oAgYow1eW6++eZpxj/6CK6Mj3XnoONce6WLTSRYv42EssIusfz99Kc/LY6h9B5sX+35OPkD0Y4o6XURdieeeOK0jYn6lLHji/NAIBCYiYCl65LPNEVV11rfDoeGIqmRZeXg0FIwlubgMPAyseRfHK04XCWfQXc1S0JLFUB3iY4uOWhsWUu3u5yXZMuZKP3fnWG0aTEqtePvU4Zx8a76pHHS9lL/iNgkcnOnnXaqmAasRPDCFltsUatXavxyvPcpo/biGAgEAoHAKBEYq8GOnTaZooMC5v/w9NQ917RWD4aUuS7M1OZty+TUjlWY1Je6Z8rT5gjT7uvZ08Kz1KGEh421l7xww/UBBxyQNUhRHoGGnfXAiV1R2VEVL1opyStFfoSjcSRNhUW5pH0JgLn34/sjBb8kVOk5OOQi7KjvpptuSvi0/f34PsR1IBAI/B8CVrHLYSL6WvrW9L22+fbxpGPkggYSbYGTiHplxFuyZMmMKVS+TzgumEYKvaMsRjDoLAZA+tImUYZlBuiHX/S6VJ62GCP577rrrmSMgw5BA3/+85+nCDroIHixvibRcxjj9t133xRBAS1nwx367w13XGOExGFkd7MFL+5Z7LmnaD1/zC3JME7+gBIOpn7t0a+++iqtPcp9+JlNfcrY8nEeCAQCMxGwdJ2F/+10WMlr1lAkOm9lWNF2S39mtvTd5glENpOoA/rMJgPIeffcc0+iWThIiFCuk2cp21cOZwzUn6ODuX7rHm02OeuR2XN5ZKxswkht2SNlwMm+B/u86XyctD3XF2bZoHMwBviX5Wv6PZUi7MgLdpTVb65PmVy/4l4gEAgEAsNGYKwGOwhj378SMxKB9esbWY9d3bNSvR5oeXG8d5586kMbhdHXa69hGhiJiJjwilDTtablivHYenPn5MsZrLjPO0KRffzxx2sFG1svEXl4LVGMRp1QWPGkWuVLAmCbdyAhMDd++g4j15RXjt4TS0QNEYfg1Pb3M2pMov5AYK4iYBU7lj7wtE60zU7rVB4UIyLN+O7rvn0UNE1ZXbx4cTKsiW7rG2ZaKG0cdthhxbU6NQ2faa/QapUdB/a0dfDBB1cnnHBCwoj18qB7GOQYC7vRoizeeOONFXyK6bbc5zn5oM1Mwz3mmGNmGAm1no+ndxghPZ2zBjut7Qot5rykqI6TP0DfGQfvB4VdfYTe81uzSp3eW58yKhvHQCAQmImA6PpTTz2VDCM4SnC0kiSvMTMDAzt/MjxhmNI9vmOcI010FoMgkcTQMZLkWGgh7ULDoO3oBbnvPxX657+SbGzzDPu8TZtEgeWcyPAkHMs4nbomcAWbvgY72hsnbddGGvByNuJW94oAACAASURBVEBCV4L38J69nC45n/GxWYV/71qLmufgT+pTpivmkT8QCAQCgT4IjNVgB+Mk7F3h7vaIUa3ueYkZeaXLgtD3ma1D5xIwcoKD2qlTGFVP7gijwVP10UcfJYGGyA/q7PKHckaUHXVQl2devt2SgFC678vP1jVMV9PRrEdN76fNOxBTLhnsGJsUWAQk2qEMCZyJUsHTCaPP/R5mC5toNxCYiwhIseOYM9jJOJc7tjHYsUMoEdwY2WwEnei2/YaJSIMu8Mf0LR+JwY5y0AR21sMQpLKqC5rQ5U+KQpv3Rlv/n70z/fWtqNKw/4Bf/GDiB2NCYowxxhhyo8EYMRglShCIokQFEWlpW0CCgCg4NiKToICIIs5Cg4KCjcoFRFRUEFHBARUVFBRxAsUBh915dvd7et11q2oPZ59zp7eSc2r/alhV9da4Vq2qYnxjDNIri8wVP/7xj3s37sfj6CsCNc0dhMWfcMwLXKqdy4QfR4LRKkEIKEN4LuSmPIccckh/5xNho9FYOmbcjfHW8vvOO+9c2VCJdYF2zVlnnVXUoJwTZy3LYNpGYFtHII7rGmOOPPLIvv9pvcbVBBrXtTGDkF1uErZrnC1hwgYu1xrEV0IZV+n75OG0007rv9GyG2O0BkYQVTuhIoFiy66dzijlQWmCS81Aj7lHx4pr4aa4g6twmhJvS4VVm9K4TltBm7ykNU4eOVINZgj14D3FFxGeqym0jo/z8Jw4WwoPp2sEjMCOg8C6Cuxad9QxcbT8a1UiRqk0oc/1K6WliYIjSNkonblMS2nxogXLVFtHAFoTP/mvLRBq7rnMW+q3LotHwy4eJROGY+pATGZLYIdgEOYcJk+LA9ns+OoV2VK721LYOF0jsC0ioLEVe45Rf859n8U5zA1CLPpx3oXXuJ37MPfYSFDFMVJeo5WQi37P5hLabDpOS55FS3cy6T4m/WZcRQNEv+UfGYWhspNPlRGsKBNMbpwjGP+l7RbdYYxL86u0lRnb8nh41VVX9W7cAwQejHt53K1hP1SWtfLXVQlo06BVQf4Yy9GmYKOFcsJki3EjH3PirFX+TdcIbC8I5HEdQQhj09e//vUVDbs45msMjWOixpc8RkeMoIfgBS07GWjQ16EvTSo2JfKGg8JHm7gco2TTlvE2anYzjiIAiqdg8I9jrgSPsRyRfumbsHn8zeHYkCdfOvab/Uu/uR81zgP5m7yDUxSc5jA1relSemvtpoc0UExA4YOxnDJQJ6VHRBjnEejhTznj3wte8IJ+4w23WFdz4qx1uU3fCBgBI7CuAjsug0btvbQrxX07LX/inH766ZsdVdIkX5rQ5/qVmoXuLCvtbikdMVOl+C03CZs4OoBWA/Tm/BEXpnRo4icvtQVCzb2V//XyQ8sSLczSwksYjqkDLQLH4MRO3HHHHdcvenhx6tRTT13RZmGin7J4Wi+cnI4R2JYQyIxd6+7NOHfoeJX6s/o+AhoE+wio6KMwILx+mo3G7dLcgSCL46AS2D/taU/rmUwYPrSxclz9FgOaf2t8yv6RUcj5y7+zwK4kgIOecIjxSTeH1/0/MDMwbHE8xO+www7rj9AioCRtbZacccYZK8eLMvYxzfX+Rnvw+OOP75mz0kMe1CllgbnWPXZz4qx3uZyeEdgWEcjjujR2Oeaax0PKpzEzjokaX0pjNHHkjybe3XffvQITNBj7Nd5KM5ojuMRpGdb6jJUIhRhLY9rQi+MkdOK4zO9SOVrp4Ud+S/fTxflOR4bhlaJ7/ta8CN3tTWBXwjE+J
)
Install packages
###Code
!pip install gdown
###Output
Requirement already satisfied: gdown in /usr/local/lib/python3.7/dist-packages (3.6.4)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from gdown) (2.23.0)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gdown) (1.15.0)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from gdown) (4.62.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (2021.5.30)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->gdown) (1.24.3)
###Markdown
Download the wiki data
資料隨機50000筆維基百科條目的JSON檔,格式如下:![json.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA9wAAAIWCAYAAAClVIV7AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAAFiUAABYlAUlSJPAAAP+lSURBVHhe7J0JnBTF+b9fbhC5PPACvC/AC2MENIoxCho1RgU84wFqEpOoqFFjIqDG468RNJdR0Bg1yqHxFjQmeAH+VNQIeKJyeABegBde/Pep6Vprm+6ZntnZ2V34Pvupz+xU93RXV1VX17fet6qbDBs2bIUJIYQQQgghhBCirDSNPoUQQgghhBBCCFFGmixevHiVtnCvs8460X/f8txzz9kaa6xhXbp0cZ9xPv30U1uwYIH73HHHHaNYIYQQQgghhBAiO6udhfv11193InurrbZKFNsQbmd/IYQQQgghhBCiWFY7wb1w4UJn2c4C+7G/EEIIIYQQQghRLKud4F6+fHmqZTsO+7G/EEIIIYQQQghRLFo0TQghhBBCCCGEqANWu0XTpkyZYv369Yu+mU2aNMmefPLJ6JtZixYtbPDgwbb55pu77/H9hRBCCCGEEEKILNSLhfv999+3v//973bggQfa9ddfH8UKIYQQQgghhBCrDhUV3CxA9pe//MWOO+44u+222+zLL7+MttQvbdu2tV/96lc2fPhw+81vflNt3RZCCCGEEEIIIUqlYoIbcY3Ivvvuu22nnXayESNG2HrrrRdtFUIIIYQQQgghVi0qJriZG927d28bNmyY/e53v7MNN9ww2iKEEEIIIYQQQqx6VNSlfNddd7V9993XiW8hhBBCCCGEEGJVpl4WTSuVJ554Im8QQgghhBBCCCEaCo1KcD/88MP24IMP2v33318jEMc2IYQQQgghhBCiodCoBPfAgQOtV69e1rJlyxqBOLYJIYQQQgghhBANhUYluLt37+7CNttsY+3bt3eB/328EEIIIYQQQgjRUGhUgnvFihW22Wab2f7772+dO3d2gf+JY5sQQgghhBBCCNFQaFSCG5o3b25rrLGG7bLLLi7wP3FCCCGEEEIIIURDoqKCm8XNBgwY4MJJJ51kCxcutPHjx1fHXX/99dGe6WDJbtasmW299dYu8L+s20IIIYQQQgghGhqNzsLtQWRLaAshhBBCCCGEaKg0Wbx48SqtWtdZZ53ovxxTpkyxfv36Rd/MJk2aZDNnzrQhQ4ZYp06dothvie8vhBBCCCGEEEJkodFauIUQQgghhBBCiIaMBHcVn3zyiV199dU2cuRIu/jii23OnDnRFiGEEEIIIYQQojQkuIUQQgghhBBCiDpgtZvDPW3aNNthhx3c68QK8emnn9rzzz9vffr0iWKEEEIIIYQQQohsrHYW7vXWW88WLFgQfcsP+7G/EEIIIYQQQghRLKud4N5ss82c5fqVV15xn0mE29lfCCGEEEIIIYQoltXOpdzz+uuv28KFC2358uVRzLe0atXKWbYltoUQQgghhBBClMpqK7iFEEIIIYQQQoi6RKuUCyGEEEIIIYQQdYAEtxBCCCGEEEIIUQdIcAshhBBCCCGEEHWA5nDXIR9++KHdc889Nn/+fPvqq6/swAMPtF69ekVbhRBCCCGEEEKsylTcwv3uu+/aVVddZQMHDrQDDjjAzj77bHv22WdtxYpVS/d/9tlnNm7cOHvjjTec2BZCCCGEEEIIsXpRUQs3wvriiy+2ZcuWRTE5mjdv7oT39773vSimfNSXhfvNN9+0W2+91dq3b2+HHXaYe82YEEIIIYQQQojVh4pauNdaay3bbbfd7C9/+Yvdf//9dt9999mpp57qtj3yyCPOKryq8PXXXzur/U477SSxLYQQQgghhBCrIRUV3BtvvLGddtppttlmm1nTpk2tWbNm1q9fP9tll13svffesy+//DLac9WhdevW0X9CCCGEEEIIIVYnGswq5WussYYT4fl44okn8oaGxKo2J10IIYQQQgghRHHUu+B+66237JVXXrEddtjB2rZtG8Um8/DDD9uDDz7o3NHDQBzbGgqff/65zZw507mVt2jRIooVQgghhBBCCLE6Ua+vBWPO9ujRo+2DDz6wc889183xzseLL77oxPlzzz0XxeTYcccdbauttrJtt902ivmWSi6axmvAxo4da5988omz1m+33XZuJXYWhRNCCCGEEEIIsXpRbxZuxPY111xjr732mv3iF78oKLahe/fuLmyzzTZu9W8C//t4IYQQQgghhBCioVAvFu6lS5fan//8Zye2f/vb39qmm24abSkMbtqffvqp3X333e77QQcd5OZ/swBbEvXxWjBee3bnnXfavHnz3PvGsb4LIYQQQgghhFi9qLiFu0rg22WXXebE9vnnn1+U2AbcsxHYrGxO4P+G5rLdrl07lzYWTvv444+jWCGEEEIIIYQQqxMVFdwLFy60Sy65xL0CDLHNa8KKBRGLNXvrrbd2gf8b4orgLJZWaNV1IYQQQgghhBCrLhVVhPfdd5/Nnj3b5s6dayeffLINGDCgRrj++uujPQuDyNart4QQQgghhBBCNFRkghVCCCGEEEIIIeqAen0tWCWoj0XTYM6cOTZu3Djr27ev9evXL4oVQgghhBBCCLG6IAt3HdGqVSs3v3zGjBn2xhtvRLFCCCGEEEIIIVYXZOGuI7766iu75ZZb7M0334xizA488EDr1atX9E0IIYQQQgghxKqMLNx1BK8qO+SQQ9xK6g3ttWVCCCGEEEIIIeoeWbiFEEIIIYQQQog6QBZuIYQQQgghhBCiDpDgFkIIIYQQQggh6gAJ7jLw6quvRv8JIYQQQgghhBA5JLjLQLdu3aL/hBBCCCGEEEKIHBLcZWDJkiXRf0IIIYQQQgghRA4J7jLQEAX3119/bddee63ttNNOdu+990axQgghhBBCCCEqhQR3GaiES/no0aNto402cp9J3+MsXbrUHnnkEVu0aJE99thj9vnnn0db8vPhhx/aUUcdtdJxiz2/EEIIIYQQQqzuVFRwf/PNN/bMM8/Y2WefbQcccIAdeOCBdt5557k4tjVWKmHh7tKli/ts1qyZ+2zbtq379PFx2rdvb3vuuadtuummttdee1nr1q2jLaVR7PnLzldfmd12m9l3vmPWpInZBhuYnXGG2cKF0Q4JrFhhdsMNZi1b5n7zyCPRhhjLl5uNG2fWv79Zmza5fY86yuzTT6MdIqijTz1lVZU3//HOOSe3PR422cTsxRejnSI++cTs0ktz18M+W29t9ve/59IkhBBCCCGEaNRUVHAvWLDArr76anv++eer9NNX9uWXXzqxPXz4cHviiSeivRoflRDcTZvmigqrMqy99tru08fHQRifdNJJ9vjjj9uAAQOi2NIp9vxlBbF94YVmRxxhVRUmF/fuu2ZXXmk2eLDZO+/k4uLMnp0Ts/kGG954w2z//c0OP9zswQfNkjwBENpTppjts4/Zd79rdt990YZa8vHHZj//udm55+auB155xez4480uvjh33UIIIYQQQohGS
0UFNyLw4IMPtltuucUeeOCBKt1yn11yySXWoUMHe/LJJ50Ab4xUwqW8efPm0X/1Q72e///+z2zUqJwofuutnOUa6/PVV5tNnWr2r39FOwYgZv/f/8tZjk8/PYqMwT6/+U3uGCNHms2fz+T33PGr6qitsUZuv7lzc8L4scfMLrggl5ZCHHlkznrNsXx4802zbbeNdqhi/Hizf/zD7Kc/zV0Xwp5rRdT/8Y9mTz8d7SiEEEIIIYRojFRUcGMd/fGPf+yso02aNHECfNsqAdKzZ09r0aJFtFfjoxIW7g0QjlWsueaa7nOdddZxn127dnWfnjlz5tg+++zj8tqHO+64I9pakxVVInD27Nl25pln2q677lpdPs8991y0x7dkPb/ns88+szFjxthll13m0lQrsC5z/osuMttww1wcrt9DhpgNHGj26KM5cetB3E6YYDZpUk5Ib7xxtCHG44+b3X232V/+Yva73+Efj8k+2hhQlS929NFmeGH89rf460cbasGyZblz77GH2fDhuevCpXyXXb61bv/nP9HOQgghhBBCiMZIRQV3HEQZC3u99NJLVbpjj0YruishuHepEmJvvfWW7bvvvu47+cV34ktlSpWQPf744+3WW2917v7w6quv2kcffeT+Dyn2/G+//bZbsI3F2p5i3nNtWLo0Z22Ou4YTh+DHclxVl6qZNSsntBHHCNokEOUI9d12M/vxj3NiNw3mgGMJ51rz7VcMuMHPnJmbN77++rk40lSV//anP+UEOdcRn0cuhBBCCCGEaDRUXHCzevZpp53m5hVjTZ04caL9+te/th133DHao/FRCZfyrGy++eb20EMPOTH8aJWg7N69e7SlJojhv/3tb85yfc8999jcuXOd6J4+fbrtvvvu0V6ls+GGG1rnzp2rtGrL2pftVluZYXVnRXQ/uMGgAG7ft99u9t57Zh98kIvHTfz3vzfr0yc3FzpNICPQcRVfd92cS3rfvjlhjeWcueKvvx7tWCL//Ccry+XOjxs5gp2BAc+HH1II34ptFn/75S/NevQwu/POXBzpy7i6vBBCCCGEEKLhUa8Wbpg/f75ddNFFiW7MjYWG+B7uQpDfzz77rHMn79Wrl5ujjZs/LuPlmK/dpkq4Dh061M4991zbZpttotgS2W8/sz33NLviCrOOHXMitlOnnJv3a6/lhDLeEViIWW18xoycdTtyf08EyzGu7gjjE04wmzbNjDUEELishn7QQbnt5eCll8wuuSRnzcZqDZwHKzZpx4WcFcz//GeryqycqzlzxnErb8Sr9wshhBBCCLG6U3HBzeuqeHfzpEmTnGV1xIgRzgp6zTXX2Dtpq003cBqj4MbCjWXez81u0JDGG280O/HEnNCGnXc2u/56s5NPzs2pbtfO7Nlnc67kvJYrxbK/EliYmcP9/vs5wY7lnIXRWC383nujnYqEldE5FoFF2FiM7bTTcse86iqzL77IuceT5mOOMTvvvNxAAteD+z2roWOpZ+CjEqvACyGEEEIIIeqEeu3NM2e7d+/e9tOf/tS5NL+Ha3AjpCG5lGflQ1yaGxMsfHbttTnXcYQsK3jzSjCsxLy7Gms2XhKIWyzWCFUs4QQWV4N+/XLfEbaIWYQ6i64h5NdaK7dPhw45Eb/DDumvGysG0sFibLz6i3eIs7gblmvOzTkR3rjAY6nHBb5Vq5z1m3Nvvvm3K6ULIYQQQgghGh0NwnzWOt97khsBjdHCzYrkDHB8Eq7uXQUL2L2G+GsM4Ab+wAM5d/Ni6xBCFjduXLxZlC0E1/JyvwObgQHO463WWO2Zm87ccV4LxnxvD+8Pnz49N5DQyO8NIYQQQgghVmfqVXDz3u0XX3zR/v73v1dpj63cQluNkcYouLfYYgv3yQrlpJ/VxG+//Xa3oJ1fsbw2lPW1YCEIYazYuG1jnd5pJ7MDDshtw7LtXbnDMHZsbjuvF+M7+zF3+vvfz71bG8s5Ltzw9tu5148hxLGI1xbSi+X91FPN3ngjl1ZENJZ0zs+rv3gtmLfcv/BC7r3huJv76wogL8nTyy+/vLz5KoQQQgghhCg7TRYvXlzVy68MDz74oF155ZXRt2/p1KmTW6l8J8RTmfHvi65Lli9fbq1wBW4A8M7tX7LadQp33nmne5UXApvF6m644YZoS44TqsTo+++/7wZAEN+lghgcN26cG1TZeuut7fDDD4+2lAALnOH2zQJnId/9rtk//pGzBOcDF3LcyhHcWMM9rBLOiuRJ77seOjQ33xpLeNr5Q/yxmRaBUH7yyWhDBIu6sZAbq5X7RelwG+f8jzyS++5h37/9zey443Iu8AGsQD916lT3P2V0BL8XQgghhBBCNEjq1cK96aab2rHHHmt//etf60RsV4rGaOHGjf/000+3k08+2Tp27OhWEr/00kvtjDPOsHZYV2tJWV8LFsKiaaz2zauzELmFxHY+qtLnRDQDC+FibMTxLuxyzJ/muIMGmT3xRE5whyvA41bOqurx8991l1XdGCuJbWDNg7XXXtv9/zULsgkhhBBCCCEaLBW1cNcHlbBwv/rqq7bllltG34SoOz744AP37npW9P/e975n38ctXQghhBBCCNEgkeAuAw3JpVysmoQu+rD++uvb0UcfbW3DxdaEEEIIIYQQDYoGsUp5Y6cxupSLxkeTJk1szTXXtN13392OP/54iW0hhBBCCCEaOLJwlwG5lAshhBBCCCGEiCMLdxno1q1b9J8QQgghhBBCCJFDgrsMaP62EEIIIYQQQog4EtxCCCGEEEIIIUQdIMEthBBCCCGEEELUARLcZYBF04QQQgghhBBCiBAJ7jKgRdNEIQ6feLjd/uLt0beasO2CRy6IvgkhhBBCCCFWFfRasDKwaNEi69y5c/RNiJVBVA/sMdAO3fbQKOZb2NZ93e52/p7nRzEr89ayt6zv2L42b8m8KCaZwT0Gu2MNnzI8ivmW3bruZvcfdb+1b9U+ijFbunyp7X/L/vbE/CeimOKZOGhijet68q0nrf9N/W3J8tLfT9+tQzebOmSqbdRuoyhGCCGEEEKIxocEdxlo7O/hfvPNN+20006zDTbYwC666CJbe+21oy2iFLBkHzb+sOjbyiB8rzvoOhtw84BUAd2hVQebfMxk23WjXaOYHAjvwRMG27iB46rFaDwOa/nsxbPttsNuc9uBNE2YNaFGXBLxY/G7UdNGVQt1vs9aNCvv4AAg5BlIGN5v+ErXIIQQQgghxOqCXMrLQCVcykePHm0bbbSR+0z6XhtmzJhhTz31lN199932+uuvR7GlMWfOHNtnn31c2ny44447oq2lQ/o41lFHHWUffvjhSt/rGqy2HS/t6AL/x7+HYO1dMXxFjYDlGUsw/z9+wuO27Trb2tzT5lZvG9lvZI39Pzrno7IKVcQ2FvZ8IKaxoodiPo63ZO9+/e5OVFcSBHyTkU3cZ9J3IYQQQgghGhr1Jri//vpru+mmm2zAgAF27LHH2rx5+V1lGzJLlpTuOpuVLl26uM9mzZq5z7Zt27pPHx/y5Zdf2o033uis1Z9//nkUm06vXr1sl112
sYEDB9oWW2wRxa7MPffcY8OGDauIwI3TqVMn22abbdz1N23a1L37HDf+DTfc0Fq3bh3tVXdssOYG1q5VO2vapKk1b9rc1mixhrVs1tLFsa0QWJYRq4hUxGooWNlWyGJcGxgQ4Fz7bLZPFFMTrNobj97YiXIGAQq5cZPWP/T/g3Ub1a2ouefsi0BOC4WE81Zrb+U+WzRr4T47tO7gPn28EEIIIYQQDY16E9z/+9//7L777nPCqbFTCcGNyASsuuDdvn18CIMZs2fPtmXLlkUx+dlkk03szjvvdNZyhG0aWK8XLlwYfUtm8803t4ceesjeeuste/TRR6179+7RltrRpEkTd624vVNnGHBgugBxbKtrmjWtEvpVYhuBvf6a69tabdayNi3auDi2xUkTlx0u7eDmSxP4P2kfQj4hO3/pfOtyZZfqffmfuDQmvzbZenfpXWPutgerdo8/97AVK1bYuFnjaqSBgGt8Ulp7j+nt5mgzV7wYa3fcku8D8YVgoAO2XCs3fcMPDPh4IYQQQgghGhr1Irg/+OAD++c//2nf+c53bO+9945iGy+VcClv3nz1FhVYtuszD5ywbrKysE4DK7AXk9OHTreu7bu6z1Bk+oCrOfO6l5yzpDoun8WbYy0YtqB6X/4nLgms1w/OedCG9BoSxdQEq/u80+e5Rcq8y3sYktJG4Jw9O/d014SLfJKYLzfesi2EEEIIIURjoeKCG+srlm1E9+DBg23NNdeMtjReKmHhxrILPr/8YnBdu34rtJgrjQUcK/PNN9/sAv8T5wNznz1+HrgPSfOh/VxpwuWXX25Tpkyxnj171vhdbeZoz58/3379619bjx49XMBlHes8FteQDh062FprrWVt2rRx4pt8II58SXIpp56NHz/efv/739e45lJp26KtbdhuQ+dK3qJpC2vTvI2t3WZtJ3TbtWwX7ZUMc7EnDJpgA8cPXGm+N99HTBnh5k3XhWidvmC6dWnfpaCbOGClZsVy0oTlO3TxJo5tD73+kPvMatGOg0U8tJT7kLSqehzvOt6xdUf3yXVBj8493KcQQgghhBANjYoL7ueee84tznXyySfXEIuNmUoIbuZY46a97777uu977LGH+058Y4XV3X/2s5/ZLbfcYh999JEL48aNs+OPP96mT58e7ZUDV3f2GzFihLVo0cLWW289mzhxoltdPYmlS5e6dQG++uorV+eWL18ebSkNxDCW3BdPedE6t+3sXMpnnDwj0bqLGMXNOhSUuGDj9s1nPH7mopk1XMQJ5VoIjLTMWjzLWboLwXWwevqgCYNs2ORhbp62h0ED3NIZNGDl8VIGB0Krf1IotIK6X4zuV7v+yn0/YacT3He/kJsQQgghhBANjYoK7vfff9+Jph/+8Ie28847R7GNn0q4lGfhkEMOcSKcudZHH320C/xPnA+hQEes+vizzjoriq2JF/p+n379+tnMmTOr4wict1hYzG3MmDHufwZg5s6d66zd999/v7PKY53+9NNP3fZSaN++vSsX3NCZR17ptQIQpGku5MTjjh26hYeh0HzmYuZwH7/j8Tai3wj3qq9irNK8rgzruGfZF8ucazpW5TMmn1GyhVsIIYQQQojViYoJblx8WeUad+CDDjrIfa4qVMLCvaqBUOd1ZD//+c/d4AvCmAXQdthhByfgX3rpJVu0aFG0d/FQvwYNGmTnnXee7bbbblHsqkExc7gBCzBCGXfwfCCiT7z7RBs/cLw7Ju/fXrB0gdtG/Ol9TreZP59p+26+rxPfWcE9PbTeE/xCa0nbWDE9i0VeCCGEEEKIhk7FBPczzzxjkyZNsiOPPNLNxV2VkOAuHubwM1f7xBNPrDEfnHDqqafau+++69zCGyNYt+8/6n7nhu3fFR2GNDdyAquT43pdyL26WHgHNwI6n2WabaSdd4SPnTHWCWs/T7r7ut1t1qJZbi43lm+2FQPvGvcDBFj4Q5f0cFG2+DYhhBBCCCEaMxUT3LghM0f33HPPde/e9gHXYV41ddJJJ7nvL7zwQvSLxkNDcSkXDZP46t9pLuVZXo1VKsy/BlzD89F/i/5uUbT4yuZYt4ljsIB4iWIhhBBCCCEKU/FF01ZFGqqFmxXHv/zyy+hbefj444/tk08+ib6VDquMb7rppnbrrbfWmA/uw7PPPmvbb799tLeIw5xsbxUv9B7urGDNZs43n4RwZXNWYkd0d2jVodrqLYQQQgghhMhPxQT3CSec4FzK44F5tqw4fe2117rv2223XfSLxkNDE9ys4s2q3rwOa+rUqW6l7nLAq8iefvpp92owFj2rDRtuuKFts802ds0117gVycs9MFDu14LVN7h7M+8Zcf3E/CecS7e3jBeaww1+AbRCrzDDpR1wIff/A3OqcUkf+6OxNvTuofbW0uLmWI+bNa56gAAreejazvV0uLRD4rbawMDQddddZxdeeKE99FD++etCCCGEEELUBbJwl4GG5lLOgmE77bSTW+WbgY6NN964en60F59Yv3nvto+Pv2P77LPPXklUMxjSpUsXty18v3f4Hm7/LnACry5jnvYvf/nL6jh/ft6hPWTIELeK+qGHHmqbbLJJ9T6EpPMXQ7lfC1bf4MLNK8i8yGaedyGYb93x0o5OyB42/jBnoc7nCs4CabMXz7ZrD7zWHR/3c79o2sgpI91rwliAbcxBY2z8rPEuPgv+dV5h8K9TS9o297S5md4bXojFixfbe++9Z998842bqsKUFiGEEEIIISqJBHcZaIgu5XvvvbeNGjXKdt999yim9uDiPXr0aDfXHpfw2tK7d2+74YYb7IgjjnBCvpzU92vBsuIt14jiK6dd6eZQlwsWbfvonI+qhWyh91XjKs5ibV6UX77P5dXu44hwjgd8It4LwbV5wV4fMIjTo0cP9z+ie8WKFe5/IYQQQgghKkWTxYsXr9K9UNyg65pXX33Vttxyy+ibEPnB8oxb9qSjJ5XFkou7N3O6xw0c546HKziW6qwrnSOMWSgNER0X5by2C1dyVl1HiHOuvmP7uvd0s7q4j0+C6+x/U39bsry0ASkWmys0SJCPzz77zB577DGbNm2am8Jw7LHHWsuWLaOtQgghhBBC1D0S3GUAd+WGakEVYnUDoX3jjTe6tx8A9+Zhhx1mW2yxhfsuhBBCCCFEpZBLeRnQe7iFaHhgzd5qq61s6NChEttCCCGEEKJekIW7DMilXAghhBBCCCFEHFm4y0BDW6VcCCGEEEIIIUT9I8FdBjR/WwghhBBCCCFEHAluIYQQQgghhBCiDpDgLgPM4RZCCCGEEEIIIUIkuMuA5nCLhs7hEw9379ROgm28u1uI+oQ6SF0sN7wPnvfM8755T1Kcpzbp4B7b/frdE48rRLFQj6hPaW13KcTb+7eWveXOwWcWSBP3DvfQqgZ5sN1ft0u9NuI7XtrRmoxsUidtlRBi1UWrlJeBRYsWWefOnaNvQjQ86BwM7DHQDt320CjmW9jWfd3udv6e50cxybDfuFnjom/JdOvQza7sf6UNuWuILVle83V5HVp1sMnHTLZdN9o1islB52/4lOH
Rt+IZ3GOw3XbYbdG3XKeo/039Vzp/MXAdU4dMtY3abRTFCCg1b7Pkp+/In97n9Op6mqVupNUrD2keOWWkqyPtW7WPYnPieMKsCTXqDnGjpo2y+4+6v8a+WSnl9/66n5j/RBRTPBMHTaxxbzfWe6DYtmC3rrul5rXP13033zdv28Z+tG3D+w1PrUMhlPFh4w+LvhVPvKzy4UVdWEdrS7y9R2QOnjDYxg0cl6msya+HXn/ItfHD+gxzxymmvhVz/WlkeRalMbLfyLz1wQ9G5Nsnqe0QQoh8SHCXAb0WbGW+/vprGzt2rP31r3+1Cy+80A444IBoi6hrCnUI6aRed9B1NuDmATZvybwotib5RAydnbh4D+PofA29e6hNOnpSdQeumE5deCx+RzrHHDTGpYXvZ0w+w6498Nq8gqbYTnQ58flfm45lOY5RVxSbt3RgH5zzYKIw8qIoSWzSMfbQ+fX1Yp/N9qkW5r279K5RP5JIE9xA2np07lFdb9P2ixOvlx7KrTaCHeL3SvyYfJ+1aFZeQQD1eQ+UShax4ymU11z/gqUL7KR7TrL5S+enDh4Um0+ctxSx5et6OKAUUuxgQ0haO8F1lSpM4wOZcainfcf2tSE7DbH+W/Rfqc2PU+j6i8G3Bf44Sfdj0v2cpX5laQdKrQNCiNUXuZSXgUq4lI8ePdo22mgj95n0vaGxdOlSe+SRR5z1/7HHHrPPP/882lIa/np9OOqoo+zDDz+MttY9c+bMsX322ccF/o9/r2voBODKRuD/+PcQOiErhq+oEeg80Snj/8dPeNy2XWdbm3va3OptiJtw/4/O+aisnfTpC6Zbl/Zd8optOk0bj954JTEfwu/psPX8S8+VrrshQJqw/NRWKPNbymTY5GEuXxoSdEIRupNfmxzF5Gf24tnOypjUeSWO+hjWPR+yiK40/P2B62fvMb3tgdcesA6XdnAddQLxBAQOAxvx/fhOp7ochGkpdEy2I2LyDUz5eiXX9fxQt7DkUr/wumEQoxL5RXkjLL2Q9yIvC7TFSfdCGNjH77vknCVuADUNBGHS78P2fsGwBe4YfPo42q9CUD8ZxGAwjYGNxgz3nW8T4u1AMeUnhBBpVFRwz5s3z4499lgbMGDASuH666+P9mp8LFlSutteVrp06eI+mzVr5j7btm3rPn18Jbjnnnts2LBhmYRu+/btbc8997RNN93U9tprL2vdunW0pXHSsWNHW3vtta158+auDHgV3BprrOHi2FbXbLDmBtauVTtr2qSpNW/a3NZosYa1bNbSxbGtEHS86KjTCaSjHnbW2VYbcVMIzjN2xlgnlNOgY4rQoANXSKgyEDDz5zOdpbsY0UHHyXeqkgJpqA2kgzThZlkbse05rfdp1rV9V3fMhgYWrQmzJxQcDGD7rMWz3P6lgiimfLDUIY7pCGMR5/8uV3ax+UvmR3t+C3WEQSMvHhAUiBPqelyEIDR6du5p04dOr47zoqYU8BrpNqpbdb2i/GadMiuvMPKDTVjNGAjLNzAF3K9/6P8Hd55iBEFd3wPlwJd3oUD5Z4X7EeGNCA8HXAjUJwQWQiuMr82ABhb1ZV8sc544DDYWMzjo60KYljAgbn1drm+op+Qrg6mVxg+UEWgHZi6aWaMM48KZkOZBQP0I24QwcK/F6wznpj0K4xrCvSOEaLjIwl0GKiG4mzbNFRXWXUDogY+vBFhyFy5cGH3LD6L0pJNOsscff9wNqNSW0047zd566y0XzjrrrCi2cpDPXNNaa61lHTp0cAMea665pourRBk0a1p1niqxjcBef831ba02a1mbFm1cHNvipHWsvVghhB2ReCjUiQ87O4R8bosvvvei+8SqHsd3LuksIVToOIXHTepIEcLrQHRk7dDGLfk+hO7LpTJ6+mjX0R7Sa0gUUzsQBwxSTHptUlEd9kpAWTLtADGRD7azX1LZZ4HOblhOiOL9ttjPCQ4fV8gbAxFLPaF86grfIXf3RZMmTmD79CFIgPmtScIE61qPP/ewFStWrNSJ98dMul+5JzgmIqIYcViX90A5SEtfPGSxwiYRH3ChLlGnwgEXghfoaaS1sZQLbRZtF2XmxXxWjwlvOWYOvfdKIk98vjDQstv1uxUc7KorqOtJ18LAV7z9DoNvs8tBvAyLCaUOLvuyIFAWfhDPfxdCiHzUi+C+/PLLbdKkSTXCCSecEG1tfFTCpRzLqqg/vOCuL5ywbpL9/KFQoSOJpTTeofQhtAD6uEKdkrDzQchnEWSeJSI0qfNK5xKL3qAeg9wxwmMSkqyPPnjXyHK7v5cCYgfXSuYzFrJOFgOu2x1ad8jsvl0p/GAAZZsm9IhnO/vlEy6hO6cPDMIgKOKWpSSrFSFNzHCMaQumuTpE+TBwET9f0qBOoXmvWBkRGP43TIOgTnJfcK+1a9ku2jOHd7lNEtxY1+adPq+GwApD0v1JCO+NQuJwdYRy9vWoLogPBoXlQnmFLtqEYrxeaEPGDxxvI6aMqJF+rgmPiSdOeCJTO8M9yGCMr9e+bodeBNR/hHAolhnkSQPBz1SX+KBs1w5dV7rmMFB/83l5lAJpSLMsE8927vlCg7JJ+URIOzZ4LwYhhMiCLNxloBIW7g02yLkNY1UFvxhc165d3acHKwmLuI0cOdL22GMPZxHv37+//elPf7KPPvoo2isH86LPPvts+/TTT+2hhx6yQw45xO0/dOhQe+2119w+Tz31VPW8aQZKpkyZYj179qyOI9xxxx1uX/Bzm9O2x/nmm29s6tSpNmTIENt6662tR48e9utf/9refvvtaI/i+eKLL+zOO+90C7Vxfj7/8Y9/uOuMQxzb/L7k2ZgxY1baFxdyygDLNoMfLVq0cGWAS3+bNm2ivWoyY8YM+/3vf2/jxo2zr776KootjbYt2tqG7TZ0ruQtmrawNs3b2Npt1k7s3MdBjE4YNMEGjh+4UqeD73TqmDNaFx12OouIDYRjFuggEfgdHaC3luY6m3SImBfJ6rh85us85SPNXTXN1TArWPGx0NfGdToJOtV9uvRxYjFN2NYXLF4GlEkSPt7vl0aSO6d3q45bshCXcQs3IU3MMJWhx7o93H3Colh8h1DApg3qxI8Zdsq5l7hf/G8KiSnuAdJRSCT5ek79RlyFHX7iwnug1PpQV/dAuUhLXzzkE4VAmSBa8R7IJ5xqA21VmCbqB1A3/CAL5Uh8VnHmjxlayskTQtzbgWPng3PiBRHW7SxzuAlpLuvU4Rd+9oJrk0hrfcH9wLQWBgB4XjC44vODbdwfTMvh+Te011A3+JcG5cWgVXj9+fKAdSmEEKIYJLjLQCUE9y677OLcqffdd1/3HWHId+JDENUjRoywa6+9tnoxr5kzZ9oll1xiV199tROjIeyP8D7uuOPsySdzIuaBBx6wSy+9tM6vCxH6t7/9zY4//njn5fDxxx+79Nxyyy02fvz4aK/i4JjXXXednXLKKfbss8+6OD7PPfdcu+iii2oIaRZyu/jii902vy95Nnz4cJeeEOagX3bZZS5f27Vr5wY+rrrqKheXNj999uzZLj1vvPGGK6
mBqy1UOjDwPoBkyxhb4mEPu7/2ljUw8nJexGQJ9xy4CSDluTOB3XnJnOxl3OSLsUGHCnoUD6MEuILu58R2LaW8D8CCbtp2YccRxsIQIfdYVPPB95Nb01cbbVqBY7fNPfWXceCAaDkcJHCfaGB01xtDqRyHxhSPG8y5q9aYLxrEmcTlLkH8Ca7/Vkb7mVyu4T9v6hb/YDMbOxrDk4JqdJOBgR5XZKq1wT4c577e/9fZ2a+kYiTd72Ck7enjuio1VhLHmGJJPuWMM9Ap9HPNWMpvXkD9AZmqSqUOgP/o+NDKF+REl+2aBXk6XDO4rLyUCzEdhPK2vUr/mUIL+ZH6fdUIXanYV1UK9p7GlNs42XthDdvgAfYk8my9LPyMfNff5Y0u7/GDOd/2G6+fZyRTMSSbv1Bd7bDJjPjT61u5YmPsfWbDbQHhdSpD2RQG3EA4UDCVcDMxhY2turXOKMco597n7d0vQfs6ZuweLSS6ubmPscNBzExTGmECny9kYGwuszNhaP9FGMGfYggvqTht4naorwXLu4Tgp4gN6mNj6BJWMAZNAnyDd8IFP3FJOMorx8NvnKXFLeS5Q5n0cTO7zLU3W9CfXos84X84Zoa4LEUBRb9pYcnyRDRIJrDDm9KoEYzX0HsYU2AG/3do7yvSZfx3BjqGvrACZXvqsPvA/TquXMYJzPs8lJrBTrCyw6lrjYHI8SQMSPtSpTzDB8RYsMC4LbUNSUCdumWAF35w9Cn2d+9UKGz9KTtbp7x1f2dHaIHYcsWW5z46YI514T6zd8tNFtLMR+tUCKGSb28La5xM8/q3svb1d1+acrGwTEKCDBNQEFxECRuSfYxjP0hPWkSHGn6DF9HUaOB86w/Zs6lmSvbHAZpIktqBrKLSffrCVxgj9j42KCf4I32Wu60os0y/MU4w79rDvbkbsIP7Vtj3b2vFOYbaM44bsBCAAxOaMsYtgLBhv7JvJK7bPVqR5j+cDeHvIZ/zqQIk+frDpVSnh5fE+HPWzOZ7/KciU+YXIMK/9AxKnRQG3EA6UzJRvIcFgc59Jn2AbA9w6DJnAIN4Hxpb6LkrtNwfcO/kYOZwInBmboMzB9s4LgQ4OaY6YHOD4EvQXjhJtT02M1AEHyG9HwmhiPC3YACZ1/zAPzsV7qboysaQCnNw9a0z000yOjAGTf9eWxSHQDi8blLilnPZ6bFyZsPo4JHYtqA2aojykSs1WWr8VF6ecephzjJ7FXxpAnmhb10N4UtAuxooxW2j7kYLVUtoTZZY+8zrgt9MBdc85u7xGN1hZuO9t97WBCzLBOaK8GPSjJQ1KO4/MIWWscEopPlgoydzQVZ8h37OVTerE80fWfnBtW6hnrU5EsK2svHI+O4dPqEyzpZx+TH0PHSOoTJ2XzxaaeE8sCcLck8IN2o0M5uysBdskB61f+YsccutL6j7crmunxopxYryYB9Al29HAL4BYgonPsQ25n/tDr9avWn+Yvg6Fug21ZUPhmrSBMSnJZA30Q85elD7z4CNxjI2TB3vHfMQ4Maack3E3m5YLnA2frGGuR264R5+5hevFXT3Ugzpz3JAxxjdj99TQXWlHAoxDXz8YvSLoxgaXxozPWt3M9B9zga2+M5dxXgL0MedzBdxiWcAqD9ljcyhyhQwlzuZCQvCIwS5NjLYl1DtHXdsTa/CTDoU+wnGMMHlifOJnOAlxpZT3uDffOzAUjFbOIMbkAIVJKTVxAcaQY1JOPm2iDje9/qZ5hpL/cZT9liZegw9AbDLE+HdhW3JTqyScM26974vdn7YUVroYA2R0iLMAjFltoJ6Sh1i6ttLS/2wHs3NxfaD+9v+Y2Bh13VM4BuijBacE+dGJQAewKfH+N9MBT0oHkFuOxWH1EMRjH+lDPseBtF9hiPKfK9TbO+N2+wO2DefGj3HKJlrix86Hza55roLvM8qQlSazCWZ3TCewd9gcO3epRGeeeuGk46zn7P/QLeVmJ/sWdGYMYp/XjlUJxsDOh8wgvymbxHEkjUo/39kX2sM5c/djtzbnyZ/ZnBgUMh7IeLzFCnng3Kn7VSP4C9Z+Skr/u4jjQhnL9+Ec/PRStB1d+HGNhfm19LNgFPolgk2O9pj5n4Q+5yT5yjiZ/JhN8/qc2ull42zPBfL2oGYM+8J4kQggsBxLlhcS72eSzEsFsegATyT3P1NZC3qFTpa2l5OwSPnL5g9yDorXL3t/LFbs37//6aWHo5QTTzxx8p8QYjHBoDIB2sNuRB1MACRC/CqNWFrgELFyhfOfW+kVQojlAEEqQenYPzm2XCEZwa6KUlJOLH20wi2EWBCYKBRs94f71Nj6mLpHUCwNWLFii2Fqd4gQQiwnsIPYw7GeCL+cYcWW2yL4WSsF20c2WuEWQoglDpMuDwbh/jitoC4tWH1gmxzbSIdutxdCiKMJdrTZ7x+PuS13OWE7p0C7BY58FHALIYQQQgghhBAzQFvKhRBCCCGEEEKIGaCAWwghhBBCCCGEmAEKuIUQQgghhBBCiBmggFuIGcGDrl72py+b95urwM88xd+M5BiOzf2GoDjyYWzttyhTv+05LcjU2Oelzvxe63KVS/RyzfVrDtPhpUDOvhjIAg90W4rMyt5NqwPUi4cU8bCiWcGYxN8iF6IvXfrf9XmKIfrD8dGf6WIs/Z+VHckxizlW/IwhMnskoYB7mYMBsR9571tOeO8JgxSDa5qBxunA+ZjWAbGnOR6tihopGaYxJ4Uhk2kXQ40q37OAdUjJyVhJB6zt9EPq85wOmFz7Y3nKOL+lffCag+2Txvke3/fH+DJkDE9//umT/8aB31I98NSByavFpzRWNaWvLPMzLFvO3NJctfuq3vapa3xLpcse8tnFOy9uDh482D4JOMogr3mi7VJ9ajr9uvH0jUv+p+5qbU5tYoNxo838bNJSfeIwslNKOvB5nyBnqM7W9Cl1RFf66nUtXX1RwuqWalufMk3SbOWxK5tVz1k1edUfro38d4312hesnfzXDf2CPb32vGubk44/afJuf+w8a5+/dt55GLMuu5ubW0vfpS82n7m52bNvT/s/JTf/e2r7kONq5/yUTvH9SF9dnQU538mKn+vwN/gJ1Jee+NL2dQnOW9NfHJPqmxSxrrnx5Xxdc3QKPaVc9AYhYxLCkA/5iSKbHPmpCM71xI+eaO7ed3f1D/tjPHA2b9548yFDy3lQlmvOu2bJ/FYh9dzwyQ3N9tdvbw0JPx0U+cgbPtI6YHc+cufknac5d/W5xZ+BoA/v339/c8X6K5qtd2xtx8GOtc/82NA3G9du7O2A8z2CuDF/1sP3S5+xGvo9wGjGfjK8PHr8+6n+K8mc6QjjU9PnY8hvrh3TQL9tvGVjs3PTzpnoFRPXWD+plbILNdhYRR2s5eK1F/e2g/Trltu2NLvesmuQ0+n1knMRdO94w462D7va02VbDORp255tVTY5hY3tULaet7W3LKdsXx+ineiyOV16PlS2hrR9LLpsfl/ZzY1JTl+7+tTTVdcxoP63P3j7zH6WCT25fs/1vc/vZTPnX+TIyVdqrOhjyOkUn/fx
LbrOVwt1veaOayavngZbjNyYfFpCj0DZ+mr3A7t72QjGh9/BJlmOnJq9XfnslYfe45ide3cmz8m1vvbXX2vHp6TXpXNE/DihL/QpbaSeDx94eHJUnto5YGy6ZIV2QZc+02biBhIuux7YVZyj+sgnx5o9MfuEPBGb1Cw8dPkCWuEWvUAIyZgNDbYjKDwGC2V47F2PVTl2GFGuXzPZLxVoHyubFIwuhf9/8xW/2Xzxki82d2+5uznjBWc0+67c177PezljyBjs+MqO1ojQX+tWrWtuuPuGyaeHwwSxd//e9rijGYy1Gey+MHH7zCYlTuZMmP7zle+dm3AfuWvyaZr4HSsxc8qEDDWZ3RxMwEcStJ+JbLF/vxo9Q99MP6kPMHnae1bQU1aNOMbeS9lBHILS6ta67eua+x69r1m1bVXyc4o5ph477817bz4ks5t2bmpuf+vtrbPG59gCnATqRj1jO0q2xYPTseG0DYNW9w0cuwPvOjDv+jUF+3g04eXFCuNi84Avi9l2mytw3pc62HqvB7WlZrXRgx6cf+r5bRJgqB7k4HwE2+jrNMGP9y+8HKXki9IV0HjYjWGruhHqT7BfgmO8LWTMKH5MUqW0gsjYM+6mV/hN+E/04zTQRi8fXIfxufe37z3kb+Jzkdw07JgbX3fj5J35EMAxPthBEjepfpwG8x1ec8prmm9e/s154xz9Siu1c8CY0KfICn4ofWDzm8nHR7/60bZ/oj6n+ou6Mx7MvQTbJFj66HQNLAISZ1hs4vsPuUvNbV0xkQJuUQ2Cj4O4ftX6omChSF5hYkGhuibJXOCEcqKUBJtLFepOG+grnGqca16nDAd9lWtrDhIOjIEFKThH9EnO4JDRZftmLkFh9U2VLodm7MmjBox2aptnTq5KE7dR4/hGpxlje87qcyafpkk52kx+q1eubj+n/6gjMvL5Bz7fBvG87usUGn229tVAXz/y+COdTlVfGA8CuSvXX7mowXYKkh6MEzbGZIgECfdyE9zuffveqjrjFODwIANAoM5rzp1yhOw4/tr/kcs+e9mhYNoKTtZZJ53V2mSCbRIvY/UpzjbjX0roLTQle4X+l5z5VBIDPTN74vWw1cHHx3XiljKmk7bVl9cEmVfffvW8Pswli1J9a6TGhO+z8h/PQ9937QjgWsi514Pagr70TdZb0I3+jQl6ZU79EBgjgj3wckzJzYd952z6atsF29rrpOZR7BxtKGG20MYgJpuwdz4xyLG5gJD6s+CA7aTfaHfchfLIgUdaubI+8L4Yr3NwPlatuf2LvoJUcMpx9AnnpS7s0sjV1+BzzsV3SWzFresk51N60jVejAs+YNf1Fxv8VuYu5MnknbZRb+SHhDH/WxBrSZQu2WLMLSGSmhvo07jw0eVfIedW3zFRwC2qYIKzLTFdWRw+N8OZKpwjFeD4ksvAMkExGadWG3GULLhNlZrAawyoO20wg2ETjRkZPqdgbDAGBMs4+T6LmgMjgYH38B2Mkm2f8mDYd96/s7hiYfVNldwqjBVrUw7qW3O/Nn3RZQRpJ33ENWMml5KTKyY5sr+LsYWqhrgC2Qbkz3k6IPfQRym5tpKaWGLpm9zZ++jeeX/HAj0mkFuKK2msSERIrLCtHnCyumTVg7wyrgTqBOw+MCGgMycZh4PjSjqFbeW4OK6U93/p/a0tIUgeC3Ru8ys3tzantr0RvucDga5SCtyMnE3i/ahP/rMUtNHsCbb6wtMubJ2+NjB7zji7qFJ6ib6mAqJSQDBLLKmCrYQvPPSFdjXquvOvm9ePqWQRpeQXpMaE77NKFM9D3/N+DmwYOsr1kBULSJhj/P2qHOdlyR9bA+fzq9rmk/Q5RwnOv/2e7b2detpHO1mBA+TI6sS8Efuztl9LYJPwMZAJD9t66Z+uoGhMqMstG29p1n5wbasvPpjGzlAXViQJ4nN2Iier9C3BNgsaHGdjnoJ60NfA1uM+viVBYmrVNKUnpfmA+tqqMUQfIZUc6zN3jQWyTj19W5B7fFb6jeR267/PtcP7aH2fQZDyZelT+ta/15V0izsOYyCPPScOsQUSSk18oYBbFEFRyMRxnwQTbckATQvCygTHNVNgJAgeLYD1hckERyn1mZXF2EaTgzZeseuKtr4YlJrMOf2DYT/+WcdP3vkZGCyc7ajwZD+nfTjJNOAUYDS7tkrj5OEspJIGC0GN45tK8gzZUs7kRwa+L0xWKbmmMPHjBPMAmZIO9NVfVpGYsFKyNRTOw/kI5BZLLlNgX3xgaAEwhWQNTpIFZubw9XG+LbDzzh4BnZ2zFLBEogPBGGFHsHGsUlobkD8ccns9JOl4wWkXNAeePHDIwa8FeaU+6L61MdZ5zco1hwUI9ANyOsu5pg/0Vy6RW7MaC3G8KLQ/FRDkEgOzBDm2lTqCKtrG3HH5ussnRywdFlM2SGbZrRvTgK0h+Tb0PMypzJf4M1etv6q9d5fkHUlMH1z5UiurOXyAZBBEUYeUTzJrSILi91lAxbxn/h11RH5rd3yZ7SfYpl+9zaRYoprzYkPtffqa43n2Cjug4nxgx/dNdNfCeLDzws+jfqszfeKTY/ylvgsJfcucRB9ZPEHfYVMJsrE3zF3U0yfgS7LFeWb56yGpXQPImdlo5C32sz82hwJukcUmBRTh4SsePrRdpwYMTJ9MGseRtSXDlboOhotgk63RfeqxVCCgNuPAXx66cf2G69u22D1SXUaZc2CYtr76cIfMnNuo9DivpQzprME5YULoMkZ8zoReE9ixUmETni8EyKnAmdLVt9HxNWcYx86+i1FlsrKAFmM7ZEs5xT+vIE7ubUA+5zj1wRIbBDDcQjAG6CSTHtu+eWCIZX2nhfPwYBcCuaWEBcQ2RqUAmLFjDPvolq0++FU3v8Id7+ufBpNn5M8chVqnIELAzByALo8B+kR7ka3Urh76qUtfF4ooE+g8jlbU68W0sWPA2KKT2B6ceJLXzEtD5CVFtHEUrkUAGAPErsDQ9IjCeS2pGVfzmAv8df2xMTAyeL+UlEIesAv0C8d6Xa4Fu0pgh13lwVvUu4+8HwpEjn06ECHZfsGpF7RzBvNTTGBZMdkdCjaP6/pVbqvLWHIyFtQRGUolvClxjE3PKfhRvt+wnwYr+syF5gOYTc3NB3zG+Uhcj2nfDRJi+Ew5eV4KYE9Mz3kAK31i/Ycdxfbgs7AIgTwZvJeTLfp7z5Y9bXwyJIncBbbv2juuHX28FHCLIgj7kImXQMXucUlNSj6wwlhwHMfnMtdsdyNwW6zM9lBoIxMqgTJGgrby2p4oDBh77htihSE3gaP4nCs+mCMVfEZHw8pCO7HUl1IbWPmtjF3kVoZy7/eBMSIBErc7m5EnW1ubSAL6gJ0btIvJIU6OPnNKGZKFZss3+kGdx1qNtoeGcH8wK+djBfKch8RA166HhQT9ivfU9SlRt7CXtipuOopjwfjisPM+wSZ9awEdD38hEce5sA2lgD86kuj7rKAtyBbO0FC58v0LtBfZsm3MBm1ntW4p3GrAuKGrlFk4dUsJZI0xISijnfQ/MgrYOr/zIwa2lFJ
ynTmbc8eCnSMAzAWIuSQG79sx2E5LfuDE+9U87L63rf7Y3Lmxo8hljb/DfEVf9XHKLdjG17no9Iva61BvqA3IUkEufgGLFbNc4QbkxPcd/ZW6BWcxIYlJsES/Mt5+JZJiQfIQmPtrduxF6LdZPNyLdlhwaOdljBlrxjzqKn/7JvOnxes//hNJJgNZov7sXGXxwvsYJClKz2qyJAlz09jPVuDc7AxlvM3ue5+a+Tf2c838oIBbzAybGCE6pAirOTI4WEySuUkQUNqFDrap39CsJO3FuLVbMecMPHXHgUNRCV6iw8yx/KYuW/dT18T5Sd2DbI6SL9HRsBL7LzpSqYKRya0aW8klCQgwMUK1kxNtw3iOtZI2BOrbbidy251tWxrtZJyQXZymromT9uC80Q9snTr/E+e337WxjzoxFOpMcExiA6drrNVoxsEmvNwtC0NgIq11aiMlmbWx4W/q8y5dti2KUW9isQSOT+6UbBM6iuNXCliRBeSOVZIaO2eBgxX0fZagAzhqrPDU4scKx4VdUtZX2FZsi90bTr8wFyBjNc+ymCXcJoIjBcxJlswaIq9GtPeUnG3lvaEgR6XgtwRjQGKQnUa0G3tHMcfWy1ssfL6YYzYWXU6+B3kgMYHe1oDtWb99fXs7SvR10An0Az3pSu5gl3NB7ixXuCPU0eadxcD0FD1CvyzoIfjE7+I17/vACLtTwuyQ10fOD+gWNjz6YVaPrnGzJCsrvv78Vqgr14rvd81bnBefkvOCTzDEJBR/h2wpt37J+XrTQJ9yzzxbzpEnroX94tcS8BO6QHdIOMVxs2LyEd/vGq+4AOR96pjIqU3iKOAWMweHMzqRbFnjHmYcmT6TNUYvKg4Fg1d6aNpQJ4Tgpe/KHnVk4m6Nm3saNaumvHff2+47pLi+MOHicLPVrHYSn4YuRwoD44OKXGF8U+AY4Lz1cVSZvDG0XWOVc1SndWDtYV78HJIFCzYR4uRYm2tlFoPOWJqxpj+tT00n4iRLwNgnC+0TG9SJ2y6mlR8meM5pk479jSuSfeGcTLBDKcksulVaMfPb+PtAX+D02PgA54s2zYP8+sRAdPz4zBJ6vEZGoxwMtVlLBT9WdruL9aUlWXG0kHf6hbkgdVvMQoBc4oAxf+Cg4kiZXcOWD336P23xW1SxA5Ys4a934nwpyVYXbA3Hce0DcsYtXRZsM04kfyklZ79E1JtYuhJklK5gY0zoA2yTPYCqhtpVbnwCtr+ywpezQSYrjAH6kAsOU/6UMesVbk98sNRCk7uH+6bX39TO37y2wMjmhlpMR70+UqJ9YiyZVzim1nZxHn9uSgyMfamZt/BNsFOzBh927DmJthEwI0foEv4M9qvPToJoZ63ga+VsbNd48RkB/5gPjFXALRYElBSHxpSVSY3Asu+qWcpYUTC8pYem9QnqDSZTjHRfZ4s6xiAUJ6a2DqnvLzRM9jgf/qE5ZDe7MsQGDgjjaoFaLRhfn63NkUoE+GA2vl8DbbPg3D/MCplCtugL2lR6sF+EZM1jTz52KJiypIAPpuKEwKRbm4WmPgTXPrHBqgtJi6GOKudkpYXz2Dn5yzWmeVq1Z6ltQ/Qg595R9auzFPQTcKJzWXIfbGKbsCMUy4jzGTYBZ4rxZos9Doa3XyV7ETP2yNdShf6x1Qfry9g2dMD6tQZ0yLffSipx4T9LgZ7wAB7km/73ThiyngrCYv93rZYAn+eSTXzWx67kQGdx3GtWhjztuMwF6tYuGydufcKG5RLdVlLzgg9GYiGgwcZRSg96HJokGwJjw9j3eQAYxyMbuQQn8oOtJxgq6bMHu4Cd4Jw1cuWZ1Qo344sMeKgfbff6cqRj92in9KekAzXJUT5nbonjyWv0bZqH2yJfNqf6xC4JxDG2lNv8j3z1fUCe7zeuT338wpitmnMNkq74ftGnKZGSTcCWslWdLetDMHtt/r+fV2ICnfbU6KkCbjFzMDRswUWZvEHBwcKw4Wj0mVQWipLxHQJtNMczV/pk9DkutYJQcjprAmbOy+6DeO8+WUgMYdc5bALxAVsfxty+XAvXYkX+S1u+1E4qHsafSYp+QVZxMmqdQNrCQ+4sqLakQK3z1YU9SM9vUeS806xyx58HMrgGK5KM7bQsRDa+D+ZsU0h2eUc1ZsItIIeaVQ0y9kzOlNS9ZjgYBDucZ+sdW6vkPq7AULwMLAXM0SKYxZb4vrRAxBIF/K2xTcC8EdtuBd3KrWhQUolMCwxT/YdcMAdEYv/XyIFtx4/BO9guFc7BHNHlvOewgL7vnJWTefrEEiG5fuX9PjDO2CbuaSXR9JE3fKS9t7V23vMJMeTGkgQxuIjzoD/WnHwPiYXaANLmAsaMeY5+j2PGMbaFvG8CnTowBvhMY9+f2gfahzwyXj75zhjSn0N3fkwL/c2ORgIeG1cf9PCX1xYYIRP8BW/rU5D4TOkP44y8xqQ+r2t+cQM7jz8Qr4vu+zmcvk0FkBEbG+TZnjnDOXzdSGTFlfOhvkef5xt4vL2mHtTHJ9i8btAGFgugZrGGPsBv43sRZNYnMdCjGhuDfDA3IVtAPeKcYzsnLIFOqZkDFHCLmYKA8/M5uQeioWxMKrN8xP9QUDwCjLEz7ClnmYLydj312pNbQSg5nV1OOQaMgGrLmVsOazfGhPuXuoLuOIH0Baen6z7k1AoX7+Xe74K2IYup393FYLMKwz3YyGpKjiM41/SlTdzm+FkdfWLFO4WU2iw0Y8BEm8rg4hwhvzUTt4c6kRXm54FSkwfX4pql8S/BOVPOzGJTeviPBYc2PjhN6BJyYE5Prj/4Ltun0UcmaNrO8XwP+B7JJX7aB92nf/o42Jx/mgCtliHb6sxJiauUBDvIOHMCDhj9cumZl7bO8VC5mhUEYQRxXTtuauAcjH90dpEFPkO32EJpqzxDQOdJtvV1qJEftpSj/9gMCmORCkyngfMyf/j7YLG5d15yZzvv1Ix/TA4MKTEApv3YvSH3I9PXjGvcxo/Ms0tgGv8BHYp1jViAYEGlJRxisS3l6Jm3QTkYe77DSqMPJpAREvI3bLih/btYflv0cfzuFJMRC4xuf+vtbaCXSnZ56EuftLGCXDLO3IONvbb5m/fZ9ZUK+DzIF0nmGETyPvrg53D8Jo7t0gXayrjQzrES+CXoG2RhlmD3SD7Xgq/JvB3bbn3nfVDuEcfGdM2V6DL9Sb/WBNF9UMAtZgZCz1N3yWSXgi8mlb1v39ve41SzNWehYKKMTwU/mmECJmuPwckFlRgfgjGCiNyEwFh3OQklbGLLOSo5h4tEQ8w+W8m1pwtk0VbimLgty/+7X/jdopwyOfmVMXMOrI4WhNBXJFri7RBdEyh9v/kzm9vVk9RxNk44A10Tt0F70MHS1jbe55rTOFpMkPRPl8O3kODg2GoNQYF3ttidw68I2NiYLfMOacq+0T5+ypDVD3sQE3JIQIWjQP/xgB+f3MDe0De1iRLGA2ft0tsunXl/DtlOaFA3W40B34+AM4ReEAzUtn3WUGeCKJx2nOxp5iXGevs925MOK7LA+ON40z8k5Lge7xME97
kufTrE1iFzgO1HD9AHghPen6bdBufAjnLelBPLa95ntaomGATsmiUETL68reMzL0v8n0sgMM4kVWuDY/qFOls7sGnUfTHwAUJtKQUSlgAGjvV6ihzz0047N+1s3nH2O9q/Yz95u4ZSgjRCkoFkeU3ijPGPixh+B4fJqfmrpTnYw3Xj6jYyy/yATYjfx6ZjD/r0a5y3KHHXhy+1foHB3FSrHzUgSyQuTN+p/7Y921o/CJnuSjzzndTqts2rPokB9DF93XeHHjYj9p3tnGD+9+937VBVwC1mAkJKUIZhqlFSlIFJA+PFivhScbqWCxgvC7ZLwTLjgvEmC8lkUzIuRzL2+8gWbBEUA9utaTMTfs5hARwyHLiuiRiYWHBu0RPO3XWLhQXbJLJKusW1qXspOWIwsdvP1aSCRw/X5DfkSaYNGX8cepyAsX7Xe1poO+Nlqx8ELN7h8skP2mu3cdhKd66/GNdU8orXvI/juv312+fJCDLF6h8BV40NpD6MA9srSw+041w4X5y/JLcpkEXqQ537fpfr0lc4Jnyf/vL2hX63xBT9grOF09XluCwEFggzvjhqOMd9ntJuIF845+hMSl9xGglyCCDoH2wNfUVwyvbWoUmOPjAO/lq8Nvo+gM2D7OBQM6fXbK3mcxxldryV5J8+9SuDjBN95ndioJ/+mROmhylbyBhYUqyGGPBx7q62LRTojU/U8Je5LJdsMOgXkj1sm2bnSWwPn2NrCLJNjvlLghbdKM1ZY8J1kMncbgSbu7E7BEa2oxDbwvf4fm2wnsOCtk1rN3XOg3zGdePqNvaa5Gtq/mBOwOb02emCDNJOX2hz7mFsXfP82JjvyNzPXEQfsgOEBAZzJXbffBpLPJdklv5jHP38ydhyW1acVw3ajK3oM7egB7HvUlvKKXE3V0QBtyiSu5+lBEqCInSt0qVAWBFamxyPRpgELCPmC04pPzGx0DAhM0Ezzn6STWX2wIwLxrH2ntMxSNXHF9uunfrMSpfTYdhD00yGMdK2YswEy4pcyRGODlluS7k5lSbvyD8Z21zQTf37JrIs6M61nXrYbR+1kzDH8UAlJs6uYD5CYMtvcfZ9+v+swKlB9nO2ysudOQnIf6mv6FPGL7VDxpwC77h6TNbY4ppzDMyGEMghC0z8BGjoMjoc7ac5Y0OCAhIjOEl9AhJz8tEXc/ZS1yZo8okps/88UJO2WdCw0BBYIxe2Is1Y4xzjMNq9oDWgG3ynpFvm0NmYMf68x08bIZe8njXYK9rHteya2Df+tyA23vpihfdT0HbbAdLliHpMBgAZijKA/rCjg2Db6yz2Fh0wCHDYleETBrmVQ/o7NT7c2pNKsnCdxbp/OYfpHPMT8uZhxxj9w3ilEhnYGXYtEaCl/LbSvGO60edWmGlgPJDP3BOs/QNPvZ2m3iQzHzlQ/9Aw+tT/PBUyTR8C18C+4wfR56l+NVuPv+D1OM774OcZCjaeIDQ3BxwJmExan9mYUAi0GUs+5zY1L1v0FXMg76f8FvoEvfbPFaCvOZa+tvP461u56Z6b2nmc4xeaFfv373/6h5KPUk488cTJf6IGDAqKbpDFib87OAQUwSZmgviu1blIrNcQhlx3CCgyAROOme8/ex8HJDW5pwxGCvqCjHzOecaY4xDkPo/4sfGQxcs5iUZq4kiB4WMF1e7PYZtWbf1qqa1LZMj3rD1AQBsdFLacMxnwfm68SuPIZzi6vk59xzViE5f/Pu3Y8MkNbUZ4iF4w8dkWwz7fpy2smvaxLVwLxyb1EKs+eLmOOud116jRgy4YTxz9XHtrx9bqBzV912UrahgyViW8vWE1LaU/QxmqI162sE3YaBw6ts1GUvYyNbdQFxIOufZFmxgZu29y1Mw7pX7luyRj+uoI7WdlFMe6Txtz10vpGMcSaHo7WqsTKVtgjOEXTWt77fsE1wR1JAdq5IU+gRodsT4goC7NjxxHwO2TIKX+60Psawu2TDf9dXJ+hT+mNHbIRvQzudWFuZzvQ+q7nD+lQ5wvzuO5eSw1z3Asib+YuK21c3yfLf+73rJr5nZkCF3tyPUrssbCRLS3LGZ4cjaUY0mYxf5O2YsUKVtTgwJuIcRRAUYUuoylWHws2OD+Zo3X0hyG/SMAAF+mSURBVEXjJIQQi4dP3otxYF5jS/r7XvO+yTsLgwJuIYQQCw5ZYrYpLsQqnhgGGX/upZt2NU8IIYRYzugebiGEEAsO27lYOeX+cba+iaUFO0Z2PbCr3SaqYFsIIYQYjla4hRBCCCGEEEKIGaAVbiGEEEIIIYQQYgYo4BZCCCGEEEIIIWaAAm4hhBBCCCGEEGIGKOAWQgghhBBCCCFmgAJukYXfqnvVR17V/h0KTx9+7b97bfsD9jn46Rn7DeWhcJ2X/enLete1to3UkZ8xWkpQZ9o8xhOeGR/6oTQOS7EPxqZGXsH6q29/cPyKrSumlvflTl+bwbie8N4TBsnvLOR+DN3t6oNa2ybqoB9PvuHkQbLQZS8Yx+ViE2gnNhB9HGPuivS1DTUs5PjYHEE7lgIm99O0Pzcmufm2dh4+0mGs6dvlZKORg5xsj6ln9Ok0c2ypnkPlUwG3OMQsJqqzTzq7/UmZLzz0hck780Fg+Z3XtS9YO3mnDHXs6/CUFAc2nzn/p4k4lgnPl5v33txcdMtF896blcPQBe3h+qu2rWquPe/a5qUnvrR15nzdUqXUB4zRFeuvaG5/8PakEeE9yqrnrJq8MxwM4ZBAgL6mz1Nt6ypcr69xHAtzoCibP7O5uXvL3c1Fp1807/1YUpOwjfs0paYfxrYDpYmPupQCkRR8p8Zm+P7atHNTs/fte9ufIuuDXWsoXbYnBX3R9R36cu/+va3tGoNpdGshHUauw/W6rj9Erkpwvot3Xtw8fODhdh7oqx+Xffay9rvof6wTdb9///3N5esun7xz9JCycXDwmoPNY+96rPUPUvOtlaFzbK0/UQvjMwtSNn3n3p1t//BzfMhdpNRftaV2PqTv8Y34+cZ3/9q72/f4HsFGn3G58XU3Nnfvu3vQWEZSfUYxnUzJXCwp/TWbkTqegp3h3CU72dfWMx+tX7W+7ZvFJNd2r39dcjdUV4eQqm+8Pn268tiVrX/cF8Z5255tzV2P3JUc0x337GiuOe+a/j+Xyc+CHc1F1DHnVB48d8e57V8j9d4Qbr3/1oNzzsrk1XzmlOLgGR86o/oanIfzRUrn2XrH1uz1Dc655vo17fc5lu944nUPPHng4IWfvLC97kLA9RiL5tqm/evHxdeF9+wz6sv/fF4aA8Oukepff95a/PU9uXPxmjEcu085H/0T65Gi9thSXxnIEONFSR2X6x/q0EcnjNz5+tCnr2qhH6I+GTlZKOHlxI+D728rXTLfhZ0/V/8u+F6qDiVZpy1d9ebzWCfOtfIPVh7WB77weeqaqXFP1Z3XXpaHjJ/H+pdSI3PxeibzV+2+KtneVIltsjrE46x/7XP/PT6z42zeyOH7keM43o8dn/nr+tJ1boPz5cZ2Melrk2pkvwu+72V0DDjnkHrlZMuXadvrGaP/DJPxn
Kx2yVup7aYTqc98qZVpzmf17CtzBsebHxXhvdSc7K+bg89TbastQ8fT+n9IX0DsD+rR1dYI3y3NSXzGeGHnUp9bifoc65YauyH1hTjWnMf3IXVJ1dFKrKtHK9yihYwNq5cnHX/S5J3hxAwkKwKsEKcyUXNC3Wahjn/W8e3rGuJKM2Xd9nXNfY/e1676xs+uueOayTfzkGncdsG2th+WImTSvnjJF9vM980bb568+3QmLpWtrcVnLVe+d2Vz5yN3zutfy4TPGaH2s1T/1mbLlyIxU4ocff6Bz7d9wetps7YXr724HbO+K6t9oR3X77m+3aXQO+s6gXPMBS7z2l9TSvJH3+28f2d2JRZ9O//U83vZHWwGxN0WrMDQ11Zu3XTr5JPFxds+K+gR9gp5i5+hfyXoU8bKVkSxAay+0B+sGlr7t563tS2+T2xVcanAyi987s2fGyS3rAYi89edf928dh5414Hm3NXntjLg36d86qJPTb79NGZb43eQJ9udge1jHM3Wmayh36xQ5eQX+8r8YzLAauH2129vV0w5N2N3+vNPb8/FzpcLT7uwrYfV9ZuXf7NKN6jPhtM2tPpL/ZYS9F3OnsTVI8Zz49qNk1f9oe1mH5YCft5OFfRzKYKNYYUPkF8bL8bG6s6OIeQt7i4xSm1HXtHDlMxT7P2Hr3h4Qe0V8pOyybwX9Yp2syOwZpeR+QF9yzTyMa1tHQPGzs9J1ibrDz7DN8LO2WdxzqL09Z+Q310P7Jonu77k/DrscYwlsM/4KOzSZcyxUcgrBRlFVmvrqoBbtEK04ys7DnMMEbhckEXJOdrR8Y3FO30Ib2pCTm3jAIx06pxdJTpZKVAU6g5RUekbH4hSX7abLBQYe5w9DIIH5+KC0y44NFH4MaO+1rc5J976EyczNSkwYWKs9z66d57zipE54wVnNPuu3HfomCOR6BREBwBZJZCJ21jpU/qWfl3IrVQ57JaN15zymvbvEG64+4Z2YrG+yPVJLKYzHmwKfRYnL4rJMDKLw5KaFFN9yvf4zJ/Tj0O0R8jstDzxoyeaRx5/ZPCW0i5bmCs5e0Wfbb1j66HtbPQR25RJFvrAjPdJdNC/9FlNUg575u2wDxStRDvIGNA/Q2A89+zb0yYQh9gPZAz7t27Vusk748L5z9lxTtu3Ni5s811z/ZpWPtlWy/XZLpuCPrdg2gr2BB1tg419d7dJsrG2klM3xgI9XkqQxEjZDgsmbG5LyVeN3EaQpTFufTKoH+NM4f+jHfTSbsGJY2bBBP4ZwTZz5xiLNGOT8ilrZArZiQEUhffMRtk85H0tXi+lhYdpbWuOvgFsF3yHhCa2diwYA2TzyvVXHjaOFGzROavPmRz9M6gLCwDRVjGHU5hnNnxyw1SLGgq4RSuc3KPjhYzig6r4GSXlaHsQYHuwAAqFkfYGife4D9EbOMt+1QTIY0CdME4eC0J9oU5xtcQnDmYNCo6ik6AwvvX4t5o9j+xp6xAzyan6Du1TxoyAIzq2q5+zutfOhDFAZqZ92FRfcCgsA2sFo2yrYQspBynok2vvuLZ1uIdOBOgAwdmY95EiH952WJ8ZJAlwjH2/2nGpCRFnj899hpxjLcvcZY+GgJN94MkDvZ1tjvX3OTJGOGT85bNU8sy+8/4vvf/Q6kSE99mBQNIBR4e/rGz6rDo2DYd511t2HbIL6C/Hx2saqZWI2lK7CuuhH67cfWU77wx12P3uCM7nk2I+EWPvUXKJ3BSm9wTIjBfftX7CmWO8SqtHJo/++lauvv3qtv7T6GyE+tKfJM/pjyMJ+iAGOhaQRwiYUn1KYdy9fuRKTg9SWNKNwv9HM8gNc8ktG29p5Ym+jjqDTcNvQ3aBY3JBLP0c+z4GZzHRR2H8+tjbSC7JQynNE1wzJTuxPvH8+AFLhTFsa47U6jOljw9E/5svuvuB3c3a5689VE8+m3YeJ+GIX9H3PNSfRDYJ1Tj+2H+SoxtP3ziVr6eAe5mDQcSZzDnaQ4MqjFMbyJ+5uZ1MUSiy/d6RxOFA2aIAc1wKjDTGOioDjpY5GLnJOJeBY3UCRWJSoc4oVur7MfPuS5/Jexps9ZJAG7bfs/3Q9rtY71R9OSY3iZVWA2311Bvv5ZLtX2qkZJQMe257shWvIxH0Aidr7Gx4CeqC3pnT1geCR5N7HGD6JLeilbMltSDnJB0PPHWg+dpff23ybjf0402vv6nZctuW1j6wUkrCDB3iM9qN/UvpEImtlc9emRwvnwzEyVuzcs2hPsSGMdb0TQyC+R4OItdM2UGDa+YSWrxnqxFcKzrifWB1F4Y+9A0byAo+u3sMH7TRVkuIWX/lArgSjB3BAIE9fUjb0SVksHZnT3RSec33CNZxOE1H0V9/O0dJZ3PQHySIrH+XArnVRlbLhoAj7fvTF8Yfn6K0UEApbfuMYAPwg1r56tmvKXvtS2oXiZWhq4ZDoa4X77y4fRArPhn6bfUz3wF5xKZxWwT2BX+LYyipoNuSpL7E4IzkagyOGceSbkU/b6gsebgecyB2w8sO+koZklj0mO3w9c6VacZ+qG2NsorOznIXJ9cjyT/NLSQRdBXbnNt1VEOUR5NF7A721WIQ+icm97piAQXcyxgMAE7vLBxtW73yE5tlnBBKlA3DgBPqQVlKTxhdvXL+qhn/Mxl6ooPTHjP3vRS0G8cHRSUZEFeKa0qfyXsaqCtO30nPedrok43j2rxfs8Ld5SDGid8capzCOE7A+HbJTcrZilux/PuPHBi2PXUoyKKvQ3R6KaVERRcpZ2qIIx2JY1sqJeeFepAYYzV0GmeiLwR+FoD2gfqyumIBNnpLMEwAFp0ZcwT9e30dGbtHGCcePegDbcMxtS3ftlJKPaKcRWeVoJs+ykE/sIphW8l5zRZCyCUGzUHA5udAFiyLT5287PM9nHGuR5DPOAx1CunX0r3PXWADqQv1HaqbBt+nnfSPrYqbnFjAgPOKfJE4uWvzXYdWaMbAdmugp3brRmpeq4En8hJs+p1Q0+JtZJdDmaK02mj9yBjEhOGQIAodJQnFc2HGSjpwThIu6EXffk3Nzb7gq9j4xzJ05xRzzpA5Bv/H7wrx92ub74C++eQu8Dnji+/W95pD4VrRz/MrmrkkDyWVGDBoH/aeeYXdRiRZ7T5tvmffjeenz3JQr9Zvm7N1cadcrtjY23f7MNS2krxmLsUOxXoYcT61kptXec8C1Fi8vU197kvK7nibwblIDuAXEGwzdvEcfef+HBaDIPP+Hm70uAsF3MsYFBJDmlNMjA5GtiuoiiDUV+y64rAgDWOFkuBIokA4lV6Z+YySW62aFbSPoJtrU/cYhOXKNCs8fWEis4DCB6xjBG+Gn/iZzMAMVJz4WRHvWj1MZbcpGCucQm/YrdQ4Gd7Q1hSO5TspcnX0BR0hgTQk8E45U9NmysfE24Dc5Ej/xSSElSGTGLdBYFvQf/o0ym9p1Zrvgf+MwORXTvyVec4MkyGOfuz/Pk4s9bJ64nziePUZf/rlgk9c0Dox6EvJ+fbOIiCX8T0PNhSniiAe54J6lpw5r3M5Bw77zHnY7UTd
2UFDYE29sYk48uYcYXu8493HDnEcyYFpVzboI+TA7ln2diHlzOUCOBsXkxmSWcgJ5/Y2l5/1ivfK1xCd1CGBZC20heCQcexrq1IwViRakB8KzvzQ8yJDzJn8jbaUesf5wOagPhCIERzRB2MkHagjgTsrWxSSTLVyvlhg80gKkRzqM2dhF3z/o1+GzQ3oLXJgx5iNYvwsqPSkfKk4Z/TdUs77yHdpYaaU5MnZVeoK2FQCb+QIGUKWwAJviOfHZiwFprGt9CmU/O+Y5LCSm1d5j89S30FOc+eLxcui4W0GY2G3oZk9j+eonfujPJZksS8KuJcxZkS9ofMFZyW1QudLyqAzQV+/4frDhBuFZlUWQ4kRw3j57Ytd20OBFVCcH7s+/8eH9kQHpz2mY+UUJSXotjqXDDZlqIGlffRZ32Cdicwcaq5NFp+Jj/dwvn17Kd459sUmlRSpAJpzx+wubTAnZDFIOWddBQPM97pAFglgokzbdn7bXn+0EneQUOjr3EPTSpMYeul11YKg9avXt1loZJqkHNsYY3/nYLXJ3/PFa8Y1ji22hOtPE3gg49girkXQ3WdbOXK08ZaNze637m77j3NRD3TI6yOF+3mxCfSP36KWC2LRYZw/+pB2U7e+q+8R6sU57R426mA2nOthS7xNjDLRJ4nEmBw8eLBo52tBfqxvvV2gXhY8m6z2DeDM5tq5IGVTKSV7Hp3KIYFkH5hfl9L9xjbO7W66ybxkK4RjJow5D7KFPowVHJu+s3OAMubK+Swx2UU/6OfSvF+C/mOMSBxuffXW9nYFbHqt/5JKaPs5g78WkEWbkpuzbSV2DPth0E52DCGf9BfnZs5vbeKcP0mb8VlrbZzR5WPXlD5J7WlsK9/t+4tBQ0Eeud6Yz4wZi9yW8jFQwL2M8cYuFu9keKcllpRRJNNp2yfN0GPQcIqYtN73mve1hovit2raRFZS+LG3lOcwhyBlACk4DdNAFnKIM2ATA6stFqiQsaWdjFNMFNB23rM+S2UKIbWVnwAcpx454TNb4fNOSI5c4NoF7UNuhvTNLEHGcV5wGvu2abmCXnpd9TbFQB5xZGqerEy/I4M+e4/cprL5TOZcn3EbEnhwLZJNtksHW8WW0totzOiM/aSN6Ql6Y7rqCz9pFVdYeT8VxKIXyCD3h9u2ORxC000c4WirupxtdJWfAMLmYVuwldSBsbHr3bDhhnk2kYDc73pIJV5zMDa1x3aBY0lwiRzMAupJPyOjjEm0p1b6bvs8UkD+0AECDgq6NsT5JFHJjgicV9+HJuP0syWarPTdCYC+Ig+cr52z5nwL3psGZJ+EM222OQAdWWrzUw50GLtCO/roqNkRu4UCP/EdZ7+j/csY8pNLfN4nGByTKC/UAztHcsAndOzzWGKiB5lBFi3ot2Qmdtne88FhPH/OHyz52JScPfGF73OeGqa1rbFdyMxHv/rR9n+SEXExK5aaRAzj1CY3FvCZMR5Lxi8GCrhFEpxDsog4XkxafZUYRUKhUCyUFoNPABcVjAncnCXbvrMYSkj7WGmydmIEfeAaC/0yBNrGpE0wMCSTiMN/wakXtPXjPBg4qzOTKw6HrcLy/qW3XdpuC42Ou4fjKKWsKAkU6o5ckCBhFWExxmlsaDfyaRNGDCS8Q9F3lXOW5FbaUoU2mYwsRXBkSLZZP+M0IFtRP3jPbApyj8Pk7+f2IKNmS4asSJEA4Lve0cHxZqLuu8vBnixOu1Ir3OakmCNQ2gqL/uMUrd++vtVrbBHOGlhiwScc2bbXhTmFdh7uO7eEKX2H00kfe5toTqi9zq1G5eChcCV70wdWZbCnyLg54egv/eT1pG8AB4wdclTa3t9FdFKH1GMxYV4xeeL/vjAu+BDIB+ND4M7Y8Nd2TbV6OidTdh1Kn50A2AIeomcJMuB/3hsaHNsqnO1uAv5H1moCeexZ1+omstC1i7ArYdYFfYt+Isf+obWeWFf8MsbAJ0SwB9gqH0Ty02H8IkKurqk+INj98re+3P7178e510oM4iw4NjnBBrEqiT3h/dTCQyzWLg915YFwJDOx8cimf4++s/bF8w/1B2fBUNvq9ZyCPiI79n7NFvCuxCN9ik1mwQj99+OcKkNkPzXHWjFZYs7I9ZG2lIsFBeHCsWNFB2Uj0Bjy255mGBFsJj67/8XDuXGabXJLHeOZ1ZZygii/ShIzfbHkMpo1cD8nfdLHQQUMCX1kTgV9R3+RgbTJy28lp/5+iyoTZspw4KjzfslIt87S3PWQi/h04CMFjG004OaM2IQRAwmfXUaehzw8axYwwVudu4pNnEsV6sZW5prA2GwKoNdMnHGnBQ42K77oBrrSN2GIk8X9y955B67NPby1K1wcg14Cjgp6n1rhNicF3SbJQF1z9eVYvhPlEhnm71DQC1azeHge58XeYwNntZpnP7U2lFvvv7W1acgAduusk86a54RbosFuvYl9XQN9Eld1uJ4Fi/aelZxzmHJSpwngjzTQb/rdniHgV/XG6Ad0hd1e9rA/g/+HrnIj88g+NsDbTv4n0YV9wE6U4PoWmOYKsoGspj6zMiTJkYK+zsl/rKtdE5lGtvEl6Au+T9/YrYBmj0t19Vt0bS7ioYf+e/aZn3utdOls14olfpMP2mlTDOKRIXwbdmGg2/hQzCEkP+09bAELR33mkhSp6xt8lvPTapnWthqcI1cP3qee3ual/CvDjqdPr1p/VWfgjgxgI4aQmmOt1Nj/uKWc0jehnEMBtzgMC65tCw1/UZScMnXByrbfAu3BYDMpYuDIwHY5jVEZmLTJOBsEzTEYaY/p2FLut45BV4Z0moymD5prYZJjDOI2HCY4tnrFyYtJPPZDzmhQH973K4qpLZomD4yXd2yWExjsaZxEm3imnVTHwoJCnCoSM+Zc+EAit/Lgy1DbYCBPyDGYPKZk1WDLHzqKo2UruwaOk9kSgnH6uXZVmmP5fuoZFIC+8bAyjimBM8oqNL9n6yd5+inXd7SbABFyuyioH1ucOb85ODiUlKFYcOFXHQCbwW+5WiLEJyGjTPSR5yHJxshFp190yK5FJ4p+ISGIfPB72cw7tLEPJDFJMNo1rPhgMX7WFRj5sZslpZ94XGj8wzVpP3KDjFnSwgIP+8x0glK7EyD6Kx6CY36XvK99Qr+R/dSYYhe2nLmlPaZW5o8kGBMbAwJOk2/b9bL2g2vbxCN6TPv9zsAhYLv8+GDPqUMfnS2tWGILWLVkDqD++HrMHx7skU+8U3jN7T7xPY6NCzK1CzC0NXV9g/mKuYvnaAyxE2PYVk/ufLxHPWkL/couBvQ8pS+MLX3FucYKXKeBeZadijVMO7dGFHCLeWDoCOx4gJgpBn95jXL1nbg4nvMRvKJwqft6bUIuPXUSuhxxzkvJGd4cGDa+57eORYMayzQr3HZfdB8IHjBWqYQEBiHWz690+5Iav9wW8TgeturF/VuzdhrHZqhsjAX9Tv8zkeIExonHPqfgeJLU6Eo+jQHXiMmaWHIrD750BRtjY042Pw3Fyqw5Z+gCdsKcb/qY5Ba2i/Hvgm2DyEipPegv1yh
NxOi33cMd8ck8v+Ub55akH7qY20Xhd6NgNwlocCgJiq19fkUWO9BFSgYsqcRf6wtf7ygTfRwp6r5ixYq2D8cEm4Tjx3ZYqzPXQt8IErrmLhxH7Bt2kx1R9AvfsaAwB593nRvoH7amEqj1TQD0ZegtS2PjE7e0P65wkyzhfeSHBHFc7e1KbqKDBNS5+0EZQ5Je/GpK7ZwVbUgK6oVs5bZoH8kwJtb//G/+hc1drITjK2Fz8JF2P7h7sDwzJpzH+17ILTrcldSsBRvMQs35nzi/DRJztoo2EJDz13SatlOop/mu2BbsHvYPeba+KtlAvse5o28dwaYjWzxskwfV1dgVz6xsawrmKhLe9nDQlK7Sb/wsJrodE6OLSa19pE3YsL7jkEMBt2gxgwApw8FrC7q7HBADIWXlxCZDFA6DYM4y2ISJQqKYJcHGIcUQ57BVodLDvFLQJr+6jUHFgFInMqdmVDGyOArmFCwVA4JRMKNvJbXCTYmBBOPOhNe1RdzGCScJ44qRrXVglgJd285SMPGy+oteWDDTFwIenBVkDPnxW4E9JnNWltLkNBT626+W45wR0HThV8VSYCPMyf7NV/xmu9qCc+bf9/aLvo12J4XZta6+59xcg2uhFzUwaZcSiugSMobjiS3CMUs5sbbajLPA+TivfygbdfdyRIk6n4K+MzmnH2iXf29MqDsOT+le9T6YnnIvKfeUxvaib7zPXMRxsV/t+341L+U8TgvXITmEDpSCCfodvYG+QTNjhewg714HFgPqQqm5/Qj5v/aOa9uADmqSGDYn8VCvUnKS8d+5ac53qJizOGdXUGRY4q3WHzpSoD1msymATtjcRR+S2EKf8Afu/e17e/tcgGxsvWNreztR7GvT4Rr7yhh0rVgyVvhuqZ17Budh3kGWYjKT9lFH2ylFooXX9Ad1LMkAcszcVwr2I95m1fSBMaZt7dopY8ksyAX4tAO5qZmD+oL82K4Y+pddDDVQV8agZhwYV3w4xiE1HwPnY/5gIQq7W0IBt2iFyrKXJWcTAcVgEPQi5CVDwGcIaZwMOT/OjDk5GEBWV1DIkoHheASb1Z8cBDWplVrIbTnCGHJen2H1Ewp1wmgYONoYY9qfct6ONJhA6C/fRnOU6C/+WmaWiZWx5Ng9W/a0Dm5JBlK/r0nBmfS/4Rs/KwXGNqH0pc3GJ+739SAL1Mm2ytoTWnOTJO/lDCw6ZQkKnJXaibaW1O6FXKFNjONCwzhZ+ymWsCpBPZG1XHCasivYDmwSCTscAG9vDBwuzpuSV5NxqE10cA3qwBiXnC3ARviHu/ndM7YC7ZN+nBvnLHXvKU4Q5/GyxP+UaZ4tgINm96vSl/QVNpFrWaLC19vrCa+xl13BjME1aN/QX2rw8H30lKQL80hq7IH3+ZzbAeKtTfZZbuwtaUbBPtEPPpHUtYvAdhywyo58ogv0Af3FXBivizwPtRkkXbCtpcT0QuF3YwD/Ize+D9EdxpAHU/GwPhs/9BWdSAXdpq9+TuqCOYugm1XD3JxFXThnTbANHMOx+A+zSEwtFsij2WwKMso4mLyTLGL3TknfPN4P8HMRQSs2z/seHvxRxqPGRqRWLK3O2Cb0omussJ8xOYTsciyyTL+gm8gJPqDprSXnUv4gdWC+Yh7sm8Sjb5ljavsAxrStkEp80376lX7En2IBhrm35AtG4nOWYkFesBElaKtfDGMXQxeMJXYl1S7DyyvHcX7mF+ZjZDXaHGQEXcgtpsxj//79B4/mIvLcev+tB5trm/bvEOYcl4NzgfrBOSM2eedptt6xtf0sB5+nvmfw3fh96ujf43/qTjl3x7kH54zSwQs/eeHBA08eaD/nL+/7Y+wzz6W3XXpwLhhr/7fvlOrm4Xurt60+9P2FgrpRz1Id6Z+acZ1zpg/V3/cp/1t/MF45OI4yFjVtM7ntU3Jj6s/V1Y543WnbzflycpnDxqRmbA3G94wPnVHs0xJ83+tWDalxzNW9tl85LjWOtXrI9+iHeBzXK8l4iZKOcJ2Vf7ByXpvimPP+nAPb2iFPTf/VjIvvW39dT6wT543yEo+ZFs7NWFpbakn1S4rYV7OEMcxdh8/6zCd99SyCHI45TtNAe5hfgPrQNt7z5N4HPqP/Yt/x3tj6ymvOOxS+W/N95MT00cqYMsq5pmkH8P1Yx6HnjDLNa2wL9i6OQTwWaE88zsbQ18/3IcfzXk4PODbqJMfRRnuPc8TrAsfkxmtI33M96lljH2rhXENsK/1v8xXF95Hv81S/AG2vaX+ubz12vb5tSMG1rE2UUh25HsfUXJc65mxXjhUEpZPY+6jkxBNPnPwnhBBCiKUAKyI1W4JFHaxosdrPw776rqYJIY4eZFuXJtpSLoQQQogFhYdSsRV0yM82icNhqy/9WXrYlxDi6Ee2dWmigFsIIYQQCwr34NnD57rugxdlWNHioT3c91pz/7EQ4uhFtnVpoi3lQgghhBBCCCHEDNAKtxBCCCGEEEIIMQMUcAshhBBCCCGEEDNAAbcQQgghhBBCCDEDFHCLowJ+EuVlf/qy5kvf+tLknX7UfP/TX/v0kn0ABXV71Ude1f6w/yzgvJyf6yw2jBFjxZjl4CFClD4wtqXvcN3X/rvXzqyP+7BQ4zHGdejTkt5wDfp1qO4udZBT+pC/OTvjjxFHH8g4OsD48ndMvbXzUUp6djSyUHYwwjiefMPJzYqtK9qy0NeH2rYjE33nwrGhjtT1Cw99ITmH2txq+jHUDvJdrhPPL8aBcbF5ij6exbxt9uxoRAG3mClm7Gsmh5qApst5T8E1u77Dzyesff7a5uyTzp68Mx3U0ybjvqVv++A1p7ym/XvD3Te0f2uJjgOF13HC46mXPAH32juuHTwZGrMy1MuJr/3115oDTx1o1q1ad0i3/BimSkn3zCEq6V6kRl+RlZ3372yuWH/F5J3pSMlrbTnhvScMkjmzYbNk3+P72vHsg437rOp2JOgpYzPEXtbCucdw/i777GXN6c8/vf1NXOzolbuvnHderpOS2dK1bY7Zs29Pa5/3Prp38snPiPqSsu2eIfpVshvIDnqX+l5XGaqvXdfM1bfPnM3vnfMbxwevOdiW33jpb0zOsnAw5tjV6/dcX+x/nl5///77J+9009d36Zo3+Iw6Ulf8FOod/ZTdD+xu5zL0g+Muve3S4jlz3Pi6G9u/ff0gY+j8UpLVUn+a7c7pPyVlA3gvdWxt6XtOs7HHP+v4dvyYr/h70+tv6pS/ki6aPeK4rkWUHHwnt1hGvZDPkh2dBj9uVdfgKeVHcxH92XrH1oMr/2Dlwbv33T15px98/9b7b23/v3jnxe3rA08eOHjujnPb9/3/Hq534ScvbD/Pwbk4Z2TOABw840NnJOvMdVLfMXLf5VrNtU2x0I5UffkuxUi1mevyHn+NUl35LFWH2uLrY8Q62Ou5Cbr9mzpPLHEcPak20heM81D54nuMl52zZpyslOpqsgp9+rokW7OCetp1c/pkdH0OfMYxUZZL3x2qr4zbmuvXJPvSl9Q1U/KUqjvXtbEEPsvJHN9PXb+2pGwlr3
nfjqG91DnVdq6fakMXXl5nBe1YvW11st8Wm5RsjY2NV0oWa0nVM/Zraix5L3Vd08kodxzv5cpky0jpTqTmGA/X77IBkGqfh+vl5u8S1he+nZzDzw+emvpSz1jXnH727a+xoT45u2Z9Mxe89pLhVPtz5PrFw7miXFqhXl2+Rm29jaGyBEPGszQGQPuj/oPXiZR+2Pj1bX8JrtM1Xl39xzmoEyWOVUluON7aGY/jM98XfdqcOp/Rdzytz2O7csXXc++jeyf/5dEKtziMd//au5sNp21ortp91aAs4+YzNzc79+7MfpdtRWCrsn25ee/N87JklFXbVjX3PXpfs277usM+u+iWiybfTMPq9sbTNx5a3SZrRaEfLIN94F0HmgtPu7C5e8vdh96jfPGSL7aZvoWALLpdl3qsPHZlc+umW+fVx8rW87Y2564+t623vUd7aqFNtC11Hsq+K/c1Z7zgjLYepew+fbvqOavazPWs8ONkhXpT4vu1KxG+r63QVmQg9sWnLvrU5FsLA3o1NxmOtmps3PnInc3K966cpzu85n10yL9PQddK9oHM8+0P3t6u6AEZ6DkHq82Of/Pybx7qP2T44rUXz+tTykKtGqXG2hfqlpIlK4+967HDdsbwmvf5HF1Z/ZzVk0+e1i1vR7g+9vL8U8+vtiWsjGAHr7njmsPGpVS6VjgjtGPnpp3Nxls2ZldwFgNWE2h7ai4olb6rptgtVjLjinQtjBO/hWs6YNCv12+4fvAci5zsfuvuw+QuzgcLpUMlaB/lgtMumLyztLl83eWtffVyQj9ib9mpYNCmueCgfX+W81vEr0Rinz//wOfn+T22amp1ve7865pbNt4yyu60vtCH2/ZsO+RDUPjf5JR5gH58+MDDh2xinA+iDHPO0qppyRe00tcOTEvKTvGeJ9pym3vHgD5mpZe593Nv/lxxnuEY5quXnvjSyTvzV3KpN/4Ac5aNkZU+PqaHc1n77fx2va4dNOygI+ZIgf/JvFqrn+b3xnZRUn4AsolOMa+y67AL/Q63SIIQsW1q8ys3D1IiFHTj2o2tIrGVjkmM7YkoBtuHcECiEqA8W+/Y2gYwtY7ntMRrmoOEk+Xrx/sED488/khz4MkDSWfHw/EYkCGg2DVBnI3Rtgu2zZuUaBMOMo5yqY7AOXAabt54c9ve+BpDx7hhtLwcML6Ma0k27Ls4JL5+vM/3rznvms76paB9W27b0ux6y65DdaYfmLRLrFm55rBxBWsL2+7YnmnH8H6cFCMkPbpkYWxwuAgArJ65fja6Pp8VZgNsUkrZE97f8MkN7QSP49gl+7VjnaI0VtPoKw7iNP1q4xOdKxzTlHNEv8JCJnqizi0mZou7HMcxMTnd/vrt1boe+8zG2dtSgiLmRLZlRnvq9ceTsp+p9yI1Y0g7vf3vgnN2zdk15xzSv5CybaV21tQXOA5n3bYop0AOoTQHzhLmAfyrlB2gbtGPqdWbvrYwZ6cAmSThvv2e7clbZq5af1Wb3MgFlrk5OzJUfiKcZ8j80jW34FvEcaJvTOf9/0ZKtoeALF/wiQvaBZIam1mSK4j6VWsz7Ly0E6ytfJ9bAN73mve1r+mLlN3zlMbJ5mPrv5RslcbL+mvHG3bMq0NsN9g1kPFcf80jbsE+2ooYDts02PJTux0jxZwitudJbY/hs9Q2DQqfefi+37ZDnWyrSOrcYN+ZU+Z2W1WE79K+eG1/HjvG18fem7Zv+pCra98S+4jz+i038TXQ536LHufgGPq3BOOeqkNNSZ3f6sb12U7J+XPwWelzD2PLOfmLvFBSbaMfcp8tJNTTyx71sX5J0fW5tR841usZn6X6kffRK/6m+iM19nFMOSZui7Xv1Y7dUib2Aa/je/Q7JfZNTtbsHODHLeoox/C54Y+tIcoB8H1/zsXA18H3Be322yBjn/pja4l9lhuTWjiXyUGEa8X3c2PG9WmbyRA6xHFxfuD9uQBr3rFdc5bZ2NIxnpo+qTkmjl8J+sm30xfajF3yuuDJ1YX3+W7qnLWlr3xNC2Oe0kfej3bVoI5dMsAxtW3hWtF2RXyfUzje5Jp60Iaou6l2legjPyU4Tx/5r4G+TLWH96yf/f9G7KshcE5ks09/2nespOTF6luSNSOezxc+s/MYnLurzalxiv3F31S7OQ55LNWZ89JuX494fntN/WvRlnKRhS1gc0LVZiD7QEbPbz8hWxq3ppLFIiPkt2f4LbsxW0RWjgc0kGEiU0Z2i8wfmSY+Y8WcjDQZpwgP41j57JVtVszDd/3WVlbXKGS1OJZtIlyHDKuvj32PbVpdD/egrqWHtNAPdi2OzRHrOrTUZkrnjMqhMWRrFlu02KrFa8aQsbTtx6n20Sa2+vits1YY39T2fF9SW/XnjOChLPnqlasP2/Y3BOpNIUMJyArXtbb5Qj+wChs/W8jtaVyHh+CMCas4rLB99KsfbbO19IFlftErVkWi7gB9tn71+uRWKrLXNpaMMyvYnIu+NfsAcTu2fY8VgZI+MGa5h6FQV3uICtcqbUnzeLtVU2rG3bagYVcM/vfvkfXHltGP9D91Rdbpq6gD9I9fBVlIsB3UZ6FkPQV1qFpJmAHIKbpht0T1gTFFxz7yho+0+oTdnwbGwewnOkS/+PmhvY1hzkZynG2R5PgoTynYxWW2vqt03VYCOVkeircttJNVUNuyTF8wRo8cSLehVF/6i/PZuX1Bj01vU8Xr90KRe0gecrb37XuTq3f0HTvi5gKRKps4Fnc9clc7b0Y/EJ+NeYG62BZwPsNv5P+F3vqdguunHuiFDpfmKI+1x5e4e862VFuxvhoCdcWn5Jys+NbaTMYBv8psS85mMF9xqwCy9vAVD7eyxjXpk5RcoT92qwD/W+FY5np0dkxMD5CtIZi/7f1ls6X2nr3uMx8r4BZZuIeDbSg4hH1AAFNBlX9viNOEErBlaPNnNreTBveA40xjnGIwhPPsQaEx7jkwnGwnNgXF6LBdhZJzPuyaOQcMpee+KTsndbV68RmOF5/RLtqDgeiaBM2QpuqTK0MmrYtOv+jQuHWVGBzTBiZ0f1/8GOC4sRXIrmVbMU0GfGGiiRMYJTq6BDmtgzkJuAHZ9O1DbnHozKnzn1FS9/DOAvqVez63nLklOQn6+5586Zq4ORfbwZBV2+5KP/FdZJ/vmg7E/uvSK19n+ojXyD2kxscKzkhJH9iCxpgx+aG76AS6AdTHZI9bWaA2SCo51b4gB+esPmfyrTzWRu9cmfPFX2Qa4vMscKZt652Htvo+sjHHFvmkGNe168Rjc04i/Zd72qtBcsacK/TO+nyhoG7YM2uj9S/t9vdt0lafFPTHxrnBQNZoU8mJRjeQOY4lOVJrV5FX4LvYS4i6NC2cz7cNW+ntWi0kx3KBZyw1QTyy7GUxVeL4Wemau7AtzNGMtfUx5ILnmvpyPbb6IwfIA2PNuLPNlT42Pbn69qtHH8O+xH7F9iFf+BTIQqp+JoOlfjBd6SroWQ3YSmymzaUEX4yHBSvUxXxFC8z4v8/cOlTeIzHhFG2rFW9TKSW7kZtXrP3R56BYX
/UFecX/AnTAAsQamCf5vr9/G3vA08kBeaKtm3Zuara+emurdza3oovMWTm5Ss1pHEvbkdcxQQ9sQa4PNgfYmNaWWjuggFtkQRlwwHEKEcQ+8F3uJ9t6x9NZLLD71cyAIqQlIxVhIuTeClY5UV6uwcQRDRXFDJmB0YnvGdQP556gl7biSOEAp85rhUnBArCUQeOcPug0Z8ACAIyTD2Sig1hSYNrtExmlgsHF+SjhJxj+8noaaGu853sMGHMmVNoPGG9ec19Squ2pEhM9FsTbhBJB5tZcv6Ztj3dSGB8f5C0EFmTi+KUwJyaWrombNtj9UGSbIeUAUGL/IdsEYTlsMkbu6TNe53SW4h2ynFOIzbAHUVF37mdHd5lcGS8y7+YwWrLBHKOFXDFB/mmPJRlNH7zz9Y6z39H2Ke00O/vEj55okxJrX7C2Pd6DrbHvch4bc85PctSCjLgi54/NOWAW/Nc4rfQ1OlFKtswC5A0n3NpI4X/abQ9x5LW3z/HYnF2i35lbatrPeFnCrwvkjXtYkVeTZ9OjXPDfBXJiwSkybY6yPZiMsaQtvGcOJMeXbN2soL/p91xhzPhJThz7OK+VAi7swN79e9txv2HDDe2OqqG6zTXu/e17277BZ2FsI+gN42Z6gkPPHMQY8v7Y810NJtMU/vdYffrOU13jFUtX8L5Q1OzMqNG3mHCKttWKt6mUnF1dLGxO7IPttrLxRNb5H9+D/iPxQLtZ/WX3K3rH/MtnUNIBfIvUnIZf4McoJjIofeKEPY/saevNYkAqKYydxy6m7DxtzfkncU6xEvWuhAJuUQQFwZAhpLVg1FASJni/6uwViWNQbBw2hL8LJlIeBMaDDjCArEDzPbuWL2SeUTSuy/XNMclNPCQCcB4xmJbZq3niYA7qRdDOOYFrEyCwiohCY2DoizhZ+tcxsJklfoLhL6+BvupaTU9NYBjdWTgfGGyfIaUveQIrf1NyEEsqicGYpxwG5A1HFtnBcY0Zfww6DtqQzKz1a6rvciBTtN92S4wFdSHYbpNNc2NPu3iPCc63l0L/2eTodTkXxPIeq+Ym9yQK+u6WiVAvJvidG3e29cbB4kFsjCPXIyHH7SNenpkk7XXXignt8m3OFWwL2yS7wLZ4G4Qc+WvYCpqBrf3W49+qDvzGhLEn8KmVaYKS3O0GXTCOC5n8qIH2Y/v86k4JkrKMXakN9A2rQTyRPModiSr6r48z6TFnH5mmHuiYXYO2EFwzluZAMm9GO7fYUG8CXFbL0NtazLZwW5etbF573rUNiWyvT31BX3HU6UfOSYBh/osVdJlEOn2beqJ5LdPqQC6A8eBHYNvXfnDtoupabkt565PN2TvGzGwkn3kbWaMfZrtiMORLn8BoVsTgMlf6+AZjgXxwuxr9b9e3VWlLwqCn9DXjhf0nuMUP4rOSz8rxlNScxvf8OKVKKpkREywmW9zmhm1FP4kxFvr2iRIKuMXoxAwpE73dm23vcQyODYpgq2AlmADtXhEfFMdrUQjCcDK4nq2W8T5ZuehMmjFnZYDJD6XFQeZJ6hYYeUOYC9o9ZANRfJxrJmOyo/akTbseWUIfxPG/f10yuPSZTU5dpV2xPjB8xTpme33pmsByQbD1ca4NuT7uWkXwK3qx9J1skTNbSYvnQJ7GyOrjbNdOBFyL9kf5nRbOZ/cq8T8TIjLLa992Cte3yZG+tgRRKoilXTjS3AKCE8y4MpFz2wZjm5KNLieD7xFssxK/cefGVjapAzJh18MhIPNuEzF/0UV7XdJfsyXIu18t9YXrmZx1Be+0B8fRr4LwXf/ayxHODXWf+6R1YkrnngUkQ3K7J1IgL9OscpNUwM4uFagLwVatXnNcaZUbOSMpxKqpBRi+sC2ZYAh9mwZkn7EjGcf8wrm5nl+pqgVnmnFZCKg3iWn6kH6vhX4liUGA7e0hNovggEB0yD3cNjfbHIP+cS8094ibrbNix9C/PFumdPtLiWl0gGvXJOXoF564TDIyF3TzPv5P7K++JRUcp+ZSK8w9Z510Vvs39TklFWxFkFv6o6+8R2Yt/zZ/xjkmvi75ObMCOURO8JvxTVi4ijutkDfrY3SP3ZM1i3HoBt/Dd+d85u9OQ/RNzd/3IDvMUbbDtESXDmBT8SXwX/37tMMSRF0JNAXcYtFAAXEK+q562aoRxjHluNtKJoaALFju/ObAoyztvWBzAT2Ki0PMpA3U0RtFW/0tYY4758HQ3HnJnW1dcbBwLDC64Cdx/vevSwbX16mrtHXu2FI+K6wfYsEwlh6alkqM1OCz4rFMY9zNiWXyoe4YcWSn637XHLSNVVk/eS0FLEAk8LM2+2KrsRTkuZQwYIKzpApBHONK37EKwXfBJ0iQ/S7oN2SD86CH2A5kDH3netxagH75+zf561e4o2xRf9tyawVHPXU/KcVPrr6YzTGQCwIp5NwHW3zXv7bvUY9WvuacPeufhcTGtcaB95CoZCtvLomRg9VhxqVrhW4hqVkxjCBvbGtO2QHmFALqTWs3HRaw2Vxy0nNOOvRzOEPBhlgyDttk16gJVFLkgtVUKQWwJfgOwTY63KeefI8VK/o19T1WtqIN8IU5J2dzOR/fRyeRZ2wg/dD106j0O/3f15ZPowMpfTWfJQVtI5jKJYdKQbEVn2TNlaEyNy34eH73W4pS/3ii/CPjNfdw1yzGLGWQccYPOWYuZf7Cf/bJMOQNuaOdzJvX3XXdvH6J86Bx0z03tX4A5+YayIqf8/menSOWVBKnD3EXCvMr9Yi31nTpAP48+or/4d+nHbVJeAXcYnQQbJ8pwmDhfFtmyGeBUGYUoMZQcQxGDRByAoNUUGcBLedFqWxyipgDz3f8ShMGwc4xFAwIkwDnBZwvjDZ/jyTith1fps1Qjo0P4GLxxr0WS+bgpGBkvYwgO/weI0/NTznbXTD5dzkICwW6weRJwMdTlKmbTYq+WPvtXlfIZbe9XpoTxneZyEsTUhf09RkfOqNdzeO8TMY4Pty3PWQcqBPt8u0cUqK9oI32m6elFW77nvUpAXopAekdE+/wRacwJgb8sSmnyG6hqdlOjbzYA8NoJ0mUvqu01t6+Ab5PsNJGCv/Tbp8koa1+NcIfG7fxA3ML84V3LkvQh8gedoDtxalVSsbWZH8suCZtom3e2a8pNg/zTIqcrtAHrLRFpzJXSgFsjqjDfTB9HbtfDc5PYIA80b9gcpMqKVmqZagOgNndGDRY8I6cRD2nz2r9Gr4/Tds85rel+q9UUnYqBfLElnV7fkGJruQG8r/htA2H5NsHU12la6GAPrW2RXsVX3etlM4a9ICVYZJNYONHPbE967evb/U39kFOvujHLp3lGH8ubFBcsR4CbeF5UtPuJBoDBdyiCEFtX2KmiEnZbyn3WSCbMLqUAeODknPPlldqb8Ss8B5gPNkOBrl7sjkvziOfM8Hw15zJoXB9HFxzeHFwME60Hyy54Cdy7xBScCxzMAmaYe4qGMjF2lK+kIwZwNK/yALjAaltRBS2i7Kt0T8YsAY7NvVgnmnIOeDmoKdAFlmpZ6XJJxSQ/7i9ymQS
vUIuOLaktxaU8D0K/9c6USnoNxJWbdZ9Iv8Ep8giqzcW9PgVCnMQ7HVuFWKWTqEfFx/4+u/Tj9yLyjiwYpqqI2D7oh72LSmniKQSwaaNfx+G3JuPDK1ftb538iWVYO1bvJwbbIsn8Ck5zDlw9Bm/PjZgKDitqTb1KaVVGOZ7+mGIHHRhOtYmn992X++xXwiwUxZoo4+5XVgEYXzOvD60r4bqAJjdzV3bEmc5e1cCOcYeYIvi+XPJh1KAaAsb9CV95hOOsViQhd9G39ZAXXkKeqkfaROlK7nRtVLOHBb71HzILv33uktfkNgy34q/fkt510rprKF9JODxUWz8rO6MH0kJqx/tZh4ryRnH4FOZvEzjBwyBuvKAUkDvkOuU7sR6+oJdSPmC5ifWoIBbdIJzHTOpY4HQ12SsURi7hzvit3kwERoYT7s/LHeflCkfRoW/KJw/vn3tspA4+11Ep8iMJ+fH2TWnzmf0+N+/zm1hi8YvGm5KPNdiG+9ZwxgxjjlngNLHKJpMxv7lf0sc0b/2sC5byawlysG0+PqWSkrHTJ5Sn/ltmbTXQDfQEXQrF2gxJhQcHFYVSIjwHcbJJma/AlszPql2WvBI/U1n+mwp93B+c3hqCo5HDV0r3PSTObjoaevodyQgffKC7+Mk8J7BZz5px/85J4fxYFxqE0CsrtnYAn9LSYIUtLP0dPsuopPb1V76hj6y4z28R3/T/zVwfGw/80IuqbsQcJtVDHioI23OjXuE8ePhnn3u46+BvkfHSVDyLJNUwmOxsb6y24bQTfwNkgM+wKIvaQtB2bTz6jQ6QGLEj1OUSeqHbWHrPX6Ltw1d2P2u8acKwfsVvtT0BZ/Tp9i6GKDTv/QzwQx6WLLTHvTc/5xrDmxWl+9GHbBjpZVy5hnmfXsQF/3KvfGsoPaRaextnwdULiS0i/axwp2qH/O43UZDYUHJZBH94PuROHfbvL3USPkYVpi3U1vKKbXtUcAtsqA4TEAY8aU2QeLAl7YHYQioP5MGjiTtSDmEltGkfRQMoa1YkGTwwS2ldiIwB5Br2r2+9t5Y0D4cgvjwmBQcG51yw28br00qLCVse50FVaniA8ZaGDOejM+Dv2L/cl8QMjXmeC4FcORKGX7k2TLfpac0E3y0cjVnOygchz7xP7qVWqnMJZk8XAt9oh4WVPn3jkTsoZH0JzYIh5NnS+Taw/s4reZk8h361e9GwgEieLJzWN+m9J/x4Pu1zh82kmtaEhZHeugT+4eALLGrxDu5yCxzgoENZ5upySaOcu7hORYo12ynh5hwoN1jPeTOglMKgQTObQk7nmOtLzzICJ9xTJetYpWf5Hoq0BoC1+O6zLHod+3cOQZ9H5pG3+Fo+2SAvceuOp7yzfeRs1pbNSvQaWxwl2wAck8ScfNnNlfNVegLP2HHgylrVm37Yn3Kw+h48J2NC/1LP9O3qeRvCmSfW4n4XpdcYbPAdDYFdpBkZ5cdYOyxJSQHsNO0pY/u06fY77ETW2OBjaSfcjJOX+NzokuMIe1v5Wyu/yBlh2aB91sppZ18ni4fZ5Yo4BZZcES4r2Qsw2DZYUpqtQGjWGPgmXDIsJnx9Ns8WDUDDBpBAcqPgcBAxqfpcq020xgCd86LMg9dsaB+nJfrc20ymawK4vz43wr1K7L871/nVmMM+tKywTUTFE5i7n4xv22cv7w2olHzpc+q8SxhXHPZ2KEwmZO53rlpZ3IyRa5Y2T6agm7kjfaYPnin1cba9zV9gIyjaxHkH+cl7oxhosutiteAHnHPLNcnqDI7wHv2fITSlnJKKugE2l97qwbFtp52kdtSzioPwTZ9im0wRwV9ZquptcdDHS+97dL2eC/vMeAkaEKPzQkCvsNYWRBucL1Uht7sY4T3qKvVd6FhNRe583qJzFIvs5l8xjZTv7Mp9xNOHJvbpeKTGIa1v8+uL2/bKchjKrHJWGCHuwJUC7QJpAioCPhjoEAdaRvnwa6TgMlt/bVAy8vhtOCw047FWM0a8tA0j/dVmAf4OVLrQ3s/Z0dmDTIc5zuf4IwgAwRF1L00V/k5j5VoCyrpi7GwfkX+CerZ8UPBN8H2lrame2gHss+4eDuQA50trShjN7CN+FMp+TB9o3BtZBt5ANrSRxawX4xTjd82FNqQkoUuaBs+QG7nhbc7zC/8qobNEdja3C6haP8o0X+Mx3QF0N5vpdjtCCUYZ+/jLDj79+8/eDQXMZytd2w9eO6Ocw/OKdXknTou3nnxweba5rBy6/23To74GVyj6xhjbqI5uPIPVrbHcQ3geF9H3p9zVA/OOaXta2POKLTH8dfgfGd86IxD71EXSgnfttyxHOM/43+rrxGP6YI6rrl+TVU/8Zkd50usA6T6xSh9BtQ/1Ybc9fsUxpnxSeHHjXGnXak68hn1t3OW+gxsTFJyYeeIfcgxvNdXR6bB149rW91qitcVg/OYbNnnqT5Ap2Jb+f/CT1542FhZXwLnKMkR8H3TbeqSOjbWib+89teOx9SSOlcXyFOUhwifd+lqSoeoT6rPcufjvTi2fgyMmjobXu59KenmrKFOqf5KjTvHxbZyHPIadSAFfZVqPyU1BjlSdY7jmxqrHBzXtw5Gbvx5r+Z81NfPQ5TaentSfVsrlx7GkX70dS/ZgNz4x/rU9AX19d+J+jcrrr796kNt8/XuGococ55Sn5m81ZTYb7GPuvrVxtN/J7aLc3T1Ne2x+YSSm1MMPqNfI1b/ruuVjqP+Jtv8nzqG69v848e0q95G3/amyOkGr21MrB2Gv25O/ng/9Zl/n/PGY+y6KZmhbXzm25g73vdnqg05fLutpNrRhxUEpZPY+6jkxBNPnPwn+sAqCPdcdf0khhBCCCGEEEKINNpSLpKwnZEtG2zDE0IIIYQQQgjRHwXc4jC4j2PXA7tGvZ9LCCGEEEIIIZYb2lIuhBBCCCGEEELMAK1wCyGEEEIIIYQQM0ABtxBCCCGEEEIIMQMUcAshhBBCCCGEEDNAAbcQQgghhBBCCDEDFHAvcz79tU83//TWfzp5VQ+/0/2yP31Z86VvfWnyTh6Oee2/e237Hf7WfCcFT0/vquvjTz3evOojr2qPPZrp6gs+X2p9YHLAGMFSrKOYTx89X0ogY9PYGiGEEEKIsdBTypc5BNwX3XLR5NV81qxc02y7YFuz+TObmwNPHZi8+zNWHruy2f3W3c3ZJ509eWc+OOvn7DinefjAw82lZ17aXLH+imbTzk3N9tdvn/cdgq5r7rhm8qpptp63tXn3r7178mo+HLvjKzuauzbf1Zx0/EmTd38G7bly95XZzyMWtH7qok+1f6HUJxH6qPZaY0I/3L///nn1Ngg2aNc1512THZtaLCDOjUcfCH623rG1rTM/Nzfk3BZI3fnInZN36rh47cXJvpolUa5rOXf1uc3n3vy55E/yMa4377158qofQ2QVHb5458XNzRtv7vW91Dilrm/HYRt+46W/MXk3T6pPb91062HfHaIDHL9x7cZD56LtGz654ZC9Qn4v+MQFh9nC0njNkqhPET7fctuWZtdbdlWNXR+75+l
qf189WKz+nDV9x0MIIcTRg1a4RRuMHLzmYFv4n4CX/795+Tdb5/Oxdz122Gf7rtzXrF65enKGw8F5W7VtVbN+1fr2+Btfd2PrBD9y4JFm3fZ1zYqtK9rCavTl6y5vj6Fw/hIEZ7dsvKW59LZL2/MBjrKdD4eRAJ9r23t2HTveg7OKk33yDSe3DjbQZquPlbu33N2c8YIz2nb79+mjpeY8PfGjJ1pn9aUnvnTyTh6cYd9PseAoU1KfWbHA2aCf6e94HOMexyB3buSnBEGWH4dS6ZKpWeJ1q6bQri76tN0K8rvQAQzX47pjX99sEIX+TcnbyveubD7/wOfn2RpKzg7Ucs7qc5oD7zowWr9ic7A92LC+XL/n+mbdqnWjjivBrm9fV6ltvx+zUqmR/yMVxuva866dyXyBvUS+u+ymEEKIxUEBt5gZOMNxVTE6rF+85Iu9HUZWm+IKSCkI6XIKCbBjEO/BiWFli5WuxQquzTG3wIFAlZVOe+2drR337Gg/I+jwn1NicEwCI9VnVnCUu5zluEJNXzOu8bjUOOTOHVcsIyRWYttypc/q2pEO40tBXsbYCr7v8X3J3S192f3A7mbt89e2/3s5RkZZBY/j6RNgXaTkDRtz4WkXtjLn30/ZG0s6oTO+HiTt7nv0vjZonzZQT8H52D1AUrLv7gvGde/+vc3mMzdP3hFLGWxzlK+u0kcHsJfYUnZ31X7H8AkrdCG+FkIIMT0KuMW8wI3/bdXRVl1sBdl/hjPKanUfcN5xdqPD62GbdArL4PsyZFUoRyqIx6k94b0ntCsTW87c0gYNHq6/UCsKBPqsplvggHMVV09xunCW6EO/Eu+Dj9T27RjM+8J425jH0sch7APnxOErnftIWeFeDBh/5IUEEVuOpw0UVz9ndXP8s46fvOoP17/9wdvbnSRRjpFNVlXjeC7kzhFLOqFPvh7oELta0JshicEubrj7hvYvu3/6QH9etfuqQ6ulvGZb/tW3Xz1PP0kUkDCIu31KdpPkRypRlyupXSspcjYkFgLSLv76h3/d/M7nf6f5+d//+WIiBD04/xPnN8/+t89ujnnPMc1LbnhJ87Gvfqz5ycGfTI74Gby3+8Hdza/+6a+29UjZdUvM5ApzRSrBhR1jDom7oyhR5nzpqwPsFENXkY0+HH/s8c2Ljn9R+/8zj3lmW4575nHt61/6P36p/SuEEGI6FHCLeYEb/9uqo6268Dd+1rWlHHwgj5O399G9896z930wzecp/DZvrs39oDjw05AKNHHgvvDQF1rnCcfl4Ssebp1trrXz/p2HgkDqTQLhNae8pn29VKDu4B01tpjjlK56zqrJO4dDMBFXAymMt425L+34zzl3tTDGOOds88WhLyUq7t53d/u3FOQdKSvcUd67Sk3AwVjaClQstNWuaf3NSjeymqJ0Loofs9TnlK7Ey9f++mutfC01XVlMCMy27dnW3rveN5C3QN36E51na/l15183T0fR59RtMKXV9L5byik1yYiUDUkVgs8cyBj3QJNA+MCXP9D88Mc/nHxyOHc9clfzDz76D9q+eeonTzU/PfjT5psHvtlsvm1z8/v/z+83c1drjyPQxq6f/sHT2/v173303vb9sUC/2MXAOM86gcQYcJ1dD+zqtbNlToubY1Yc0/5PgP23nvG3muf9ree1rwm+hRBCTI8CbjEzfCCPkxe3L+OEgQ+m+U4XBMJsw/TbjksBWG4VBgfIVttspQ2HBUeW+9bNkcR52XjLxubAkweaW++/tQ0wYBarXtNAG9lOThs8trNgmlXKPtjOAD8GO/fuPDTGFMZu7QvWJle+GMtcIMJ79Ls/F4WggvGLwYWVvlt2DRIr1GnoTgavA77wfm5lq0uucn1A4Zzxmsh4LtlSOhcFHWUreK4dlK6VOHaGEBBCDO5zW8oppdXLSEyecd7UPdxd4+jrkdrFQyDnkw8529IFfUIw3DcJQf15aCQPsSOYpA6snrK6uZDQZsanzxhNCyvMtP0FP/+CdvdGTqYJxH/v//695vs/+n7zgQs/0Dz1b+YC7mt+2vzXS/9rc/LKk5sb/98bmwe/+2B7LKvgb/9Pb28e+O4DzSWvvKS5Yt182+nJ3X7zv676X+3zMhjP+NyMyz57WXP+qefPm6tmCXq28tkrD9uN1YUCayGEmC0KuMW8VTj+tyAIBxYHz4In/9mQLeUEYqy2+dWw059/+uS/OjgH9y7+8QV/PHnnaXLBi5WuIAbHCAcuOka0n/u3d27a2Vy/Yc6x3XV5++T2oQGcD0bHvj+OlS8crvhU5pqt/DjNMTihMN6poLgd/8fT48/17UF7VlL95RMtscRxMAc/1sMK9SFw42/qcyul7bQLCXLPjo+hpPoDeUKG+cyCINo7VM44B1vBN5y2odW50ip2CYIaSgzufXIglTDp0lmPT56VSlfg4+0IdYm7eMZ4aJr1K4FY3++SuLKHQpJc41YRbGHf8/QlldAgKVY7RikbkiqlHR4XnX5R87a/97bm3rfd28rkz61Iuy8Pfu/B5t5v39u89eVvbd5+1tubZx3zrHYV9++++O82f/SaP2r+8om/bP7bX/639tjTnnda849f+o/b24kI4k949gnt+31gXvyf3/2fzaV/59J5fYHeoeep23hmBXpAMhr5MhtQwy8+7xebZ/7cz7aSn/Sck9pfISntihJCCFGPAm6RXb3CgcVBjcETJeWMRnwgb1uwWS2z1TCccCCoteP4DiufKXAg/L2LY8H1qZu/l5JrEdCwesS2coJInN1p4JzcU8tPqeG00/6hQUyE83A+tr7Hh2UR2JUSG6VgJbelnNK1shmd9JqSWzHDkS2txNaWvokSjud7XYHaEEz+h0Kf2G0AtlsEB5lECNu4LTk19MFa3IoAfB/n27b69wW9Qr/QMw/61DfhBj54w150gRwupd8SZ2wImi847YLJO/WYPGI/0BPGxhJsUd9IoMV7uPm8y+Z4e2yFc9gvTlip1YncynCu5IJ4dgN88LUfbGWxBCvcP/rJj9pzzNV+8u7TMGex0+d/fOd/tK/ZPv3hf/Th5oJTLzjs2Br+6vt/1Xz4v324ecUvvKJ5wy+/YfLu09BuC7Ztp0wsyG9ud9bQ5CBzADbA9LcG6vmj3/tRm8CBPz7/j9t5PyZvhRBCDEMBt5gZ/n5AnG6/3RlnsQ3A54JrHDdztgj+c8R7Fz05p8WX6PADTvgVu66Yt0pkDrpfwWG1gr+0iSA8da5Z4x1hgg6f0CAYwhknAMYhpW+Bfma1I5fEANpm54klt8JNyT0kyMN93rlt3rHUrBbSntJKd64s1sq2HyNf4vjFUhMYpWD8SWoRzJOc2nj60w8qGwIyRQDPNllWY4cmnBhTdAm9sWQKsrxn355ByQCfACrZiyF0bSkfA3RzxYq58w9cPaQP/W+XI9uUUuLMSleSjDr95it+87DvWULnSODFx7+4Xan++Fc/3nzmLz7T3qdNAP7lb325edefv6sNRFmRHgN0mHP9y7P+ZefqeCpxifzmdmf1TQ4ajOHBgwcPzQFCCCEWHwXcYkGwB/
vgvBMwsSURau9hJPjg/j3/FF2/VbZrS3nKMcdxJShhNcC2U1M3VkBsdR+4DgHMNedd0/5EEsE598j1XTEj8OAcbFGnDazu9Q2GfBIj1Tau4YMjW+Wwe2hTlFagcBJTjmLNDgegb1PbvP3YDaFrvH1ZzGCBsUnVif4r3XPeFRgB8mtySwBvsMJ1yWcuaf8fem8v5yZAtgcTEhizWj50ldjkD3lk7AlsuTXD2shfAgUvK7VJEupaSsJwTvt5L16XkmWlLeUEMFxrWqa5lcAnA7FPjAcPyer7oKwh0P7UjpVUcoi6xOc4DClDEpvIEfL6vSe/1/z6p369ecZ7ntEc+2+Pbc7efnZzxzfuaB+gZlunp8FWt3/1hb/a/MNf/IeTd4UQQojDUcAtFgScQwI7AkLbGlx7/x/gkPvVGwKZ0qptDb4uFAIznDVfJ5x+gu242kDgTLDedxXS399s2w3HBmcT57hNUtyzo21TLnijfSlH10puhdsHMblt4JBb4aZP7Vy1gdXRhD3Abug2bS+7llAgIcWYM970u+3aIMDtuzptT7u3hBjyw2o5QfgQaC914fs8nZvdDNgEC5YJrGy7tJXaFb6ox7Egf/bzXry2RNoQ4s9mIf9DgnDGZ8hDDLE5bEe3lfj29RUPNzvesKNNAjLWVrdUKQWwdutJKljGBpA8u2vzXYf1byo5lHqOQyzY8FQyz5chYzVX4+ad576zvV/b6nXsMce2svy+17yvlZecPeyDrW5zX/mJx504eVcIIYQ4HAXcYlTMgcYh7PubrhScGL5bCsIIKggoS6u2QyAItBU9czpxQL3jbw+SwWFjlQmHd6lB3agj94vzkzfUM0cMcmLBIU45xT6IySVOGCN2BKTw160NrDx+629X8au/SwX6a+g2bdMxax9ySz8S+BDAEwTycC9kk/uFuaVg/er1k293g+zH2yxgmlVuVrapN7I4q3tDsQv0S58kmO9Lsz3Wrz6phD0iII07Fmpug0jR9x5bI+qr6Z4lNCG3q4L3SyBHJDFTwTI2gHu4hwaqBPr085DkxBB4UNpV6686lOx78t882dz+1tubl5zwkvYe75f/wssnRw7DVrd52JhWt4UQQnShgFuMCs5fabWptpSCsK5V2yH4IB7ncNPOTc3et++dtwodA0hb/evj4C8U7bb3B3e39/Mu5oNvalbybJW9ZrWwJF+lFbMhQf2sKQWwrFTmViNjH1jb+A4ryNxywHvoCL8vTJDL/zXQ/wTqW87ccpjcoG88sBDdqJV5EgAkrgheSTDw2gJaCgm51M+CDbmH3ZJg/O5xbWBXa6/oTwLSIQ95i0yzM4c+sQfAMd4mJ9PuEqG/KCU5wf6ltpR3jRfnRS4ZG/rbk9o9Q6l5PkRfHnvysfb3u5GTM1905uTdYdjq9m/93d9qfuFv/8Lk3fn43QYcn2prTPLEMiRJwTj1/Y4QQojZooBbLAg4T/z0VgRnsY9zhVPHih0BpadrxRPHpgRBPAEBzhirRaltkmzZ9E9Zx3nEEY/HLQUIbNh6WnNvp3cMY8FJTDmKBFBdD5Siv+gjc7KpR9yqSmFXAUFN39XCnEwBDifOai5oXQogN2zTJsCNDjLBXV9IDtmKJyCb9GvttlzqwMO4CLpytztwrs2v3Fwd1JLA8rdQ8H0LYikkB7iXPd6TX7qH3ctj1GvOT2CXk4sStIfAFRuDbNEX1kbeIzky5MniEfp36EOt7Ducg8AdOaGPeT8X8NbATghW3ed6NRlU0+e5LeVd41V62GUuQTbmLgju5Wanzzk7zmm++PAXm8v+zmVtInAotrp9ynNPaTat3TR593CQ91TbfCFJWHoehdfnPvBb3LVJNiGEELNHAbeYKTisBD6sXuIgRiedh6hduf7K9vOuVS2+i6PfOpsfXHsokLTAoqvkAg/Ow32ldg+sx+qP00mg7386bDFIbdOPQQcB9JW7r2y3U3JvJw9ps75KUXIMc1vKKSWnmH6jv9iSa/W0+03jeWoDQg9tZKWVe+lT4KTy27qsrE2zldVW38cK3DkP57MVSXuomQ8Qra65Wyb6bKe30rV7gM8s2EafSiAvJKd8QLqQeHlMbZM2eeo7ZoyBPcgQuab/bVxIYHE7CfZqWuhjnlJOQqovBNbUw3aN+KCdOg7FkolnnXRWGzx7/aTQ57kt5dgWVt1TtpvPtt+zvU3+TSsvpjsUEn6c09tDr+f+2Of94fNaW/EX3/mL9n7rq8+5uj0GON7sO4XEAngdi3KEveVcBO4v+tsvmry7dOA2laHPCBBCCDEbFHCLmWCODM4QDiZO2zvOfsdh2XocXAv4WDnDkcptj8T5xdEnIOC3rAkkh2w99VBPAkECU56azPW940Z9bcvp0NWGMSk9pdz6nED33t++t3WOCT6srwhSFwoeusW43/e2+w7Vc4z+Y6wZc9/GHDZ2yEwuIJg10aHHGaYvLKiljiQGCJxM7uyBZbm29XlCu5XS7gGCojXXrzmkWzVY0M33SsmcxYJ29EnkmM2hXQb/02eMR25L9BAYV4JXZNjsTC3cR06/Uw/0i78EVvxvTz8nIDR58yW3ywe96HrWQ2kbfEwCGASq2J2dm3a2iTbqzXyQs++zggeavfGMNzZf+a2vNH9y4Z+093cPxVa3T3veac2bXvamybtLB8aSn9szGRFCCLE0WLF///6Dk/+PSk48UU8PLYFTRBBQ42jjWONA2X3MBH8EC3FiZ9Jn+x5Z9tTnJXBAcchYufSrp7l62rVY7aiBYMU74gShbMv05+U9W+moIZ5zVtAHOP6lPidpkKoL/coKE46YDypScB1WeIy+7aMu7ES4eePNxYDY6sQqlUHyICeLqTZ6mcTpJ7mQWnXvI+djgRwT5NT2nx2/ZuWadutuqe/Ggj7d8MkNzfbXbx+0hZf+33jLxjao8t9P6bG1r5aUfbGkUZcMR9nK2Srokg0+J5i1a8Zzs/rbVZ9Irt9KpPo0krJnBt/l9o0oi9SFW2pKu3e67GyUcc655bYtza637DpMjvvY14WyrYtFbkyGgqyyu2mh7IcQQog6FHCLZQtOIU/yxjntkxQQQohpIdhidbhvUlKIFJYIYrfDQiYWhRBCdKMt5WLZwiqRnF0hxGLAgx95EJndQiDENPBwOuQpPlBUCCHE4qOAWwghhFhg2PJ7y8Zbms2f2Xxoq7wQQ2ArOdv0ud1GW8mFEGLpoS3lQgghhBBCCCHEDNAKtxBCCCGEEEIIMQMUcAshhBBCCCGEEDNAAbcQQgghhBBCCDEDFHALIYQQQgghhBAzQAG3EEIIIYQQQggxAxRwCyGEEEIIIYQQM0A/CzYy99xzT/PZz362ecYzntGsXr26ed3rXtc897nPnXwqhBBCCCGEEGK5MNoK9w9+8IPm05/+dPPGN76x+cM//MPmqaeemnxyON/5zneaj33sY20w+pGPfGTy7tHF3/zN3zRf//rXm507dzb/+3//78m7QgghhBBCCCGWC1MH3I8//njz7//9v2/++T//581NN93UfO9735t8cjjf/va3mw996EPtsZ/61KeaH
//4x5NPjh7OPPPM5pprrmn+xb/4F+3qOskF2i2EEEIIIYQQYnkxdcD9uc99rvn4xz/evPjFL27e8573NL/0S780+WQ+BNcE2bfddlvzyle+srn22mubF77whZNPjz5oG+08ePBg85Of/GTyrhBCCCGEEEKI5cLUATcrups3b263kf/Kr/xK83M/lz7lM5/5zGbdunXNlVde2fze7/1eG6Af7Tz72c+e/CeEEEIIIYQQYrkxdcDNivbGjRurgsuzzz67Of/889vgeznBKrcQQgghhBBCiOXF1AG3yENige3ke/bsaZ544onJu0IIIYQQQgghlgMKuGfIS1/60uZlL3tZ841vfKPZtm1bc9111xUfKieEEEIIIYQQ4uhBAfeMedaznjX5TwghhBBCCCHEckIB9wx56KGHmq985SvNqaee2rzzne9srr766ua5z33u5FMhhBBCCCGEEEczCrhnyPe///1mxYoVzVlnnaUnlgshhBBCCCHEMkMB9wJA0C2EEEIIIYQQYnmxYv/+/VP9ZtW9997b/O7v/u7k1eG8+tWvbi6//PLm2GOPbW6//fb24WE5Nm3a1FxyySWTV+Nw4oknTv5beO65555m165dzcUXX9xuKxdCCCGEEEIIsXzQCrcQQgghhBBCCDEDpl7hXuos5gr3HXfc0dx1111a4RZCCCGEEEKIZYhWuGfE17/+9XZL+THHHNNupxdCCCGEEEIIsbzQCvfIEGR/9rOfnbxqmpe85CXNm9/85uYZz3jG5B0hhBBCCCGEEMsBrXDPCALsX/7lX27+yT/5Jwq2hRBCCCGEEGIZohVuIYQQQgghhBBiBizbgPuhhx5qvv3tbzdPPfXU5J2fwT3XL3zhC5tTTjll8o4QQgghhBBCCNGPZRlwf/WrX22OO+64ZtWqVe3fyA9/+MNm37597d9XvOIVk3eFEEIIIYQQQoh6VjzwwANHdcB96sk/mPz3NA9944fNj//mYPPLp/385J08f/HAD5pnPmNFc8pLDg/KhRBCCCGEEEKIEsvuoWmP7v9Rs/rFz568KsNxHC+EEEIIIYQQQvRl2QXcTz710+a4446ZvCrDcRwvhBBCCCGEEEL0RT8LJoQQQgghhBBCzAAF3EIIIYQQQgghxAxYdg9Nu+OL323Oe9Xz2v8ffOjR5uZb725+/OOftK/h1FNe0LzljedOXs0/XgghhBBCCCGEqGW0FW5+Qus//sf/2PzWb/1W84EPfCD5+9bAb1/feOONzSWXXNK86U1vat7znvc09913X3Pw4FEd9wshhBBCCCGEWGZMvcL9xBNPNF/4whfaYPsHP3h6NflVr3pVc9lllzXHHnts+9ogsN62bduh44xjjjmm+Z3f+Z1m3bp1k3fGo7TC7fneYz9odnzsvzS/8MKVWuEWQgghhBBCCDE1U69w//mf/3lz8803Ny960Yuad77znc0pp5wy+eRwTjjhhObss89u/uiP/qj51Kc+1fzZn/1ZG5jDnXfe2Tz55JPt/0IIIYQQQgghxJHO1AH3y1/+8ubNb35z8+53v7v5xV/8xebnfi5/ylWrVrVbzk8++eRmxYoV7cr2Oeec07zyla9svvvd7zY//vGPJ0cKIYQQQgghhBBHNlMH3Kxov/71rz9s+3hfjjvuuGKwLoQQQgghhBBCHEkseoT7V3/1V82DDz7YrF27tg26hRBCCCGEEEKIo4FFDbi5Z/u2225r7//+tV/7tXabuRBCCCGEEEIIcTSwaAE3wfbHP/7x5utf/3qzefPm5rnPfe7kEyGEEEIIIYQQ4shnUQJufkrswx/+cPO1r32tufLKK9uHqQkhhBBCCCGEEEcTCx5wf+c732n+5E/+pF3Zvvrqq5s1a9ZMPhFCCCGEEEIIIY4eFjTgfvTRR5v3v//9zfe+97022NbKthBCCCGEEEKIo5UVDzzwwMHJ/4O4//77m61bt05eHc6rXvWq5rLLLmt/NuzP/uzPms985jOTTw7nDW94Q/OmN71p8mocTj35B5P/nuaOL363Oe9Vz5u8+hnfe+wHzY6P/ZfmF164snnLG8+dvJs/XgghhBBCCCGEKLGoTykXQgghhBBCCCGOVqZe4V7qaIVbCCGEEEIIIcRioBVuIYQQQgghhBBiBizrgPvBhx5tfv+Pbmu2/p//V/MnH7y9+cEPnpp8IoQQQgghhBBCTIdWuIUQQgghhBBCiBmw7O7hvvu/Ptb86trjm+OOO2byTp4f/vAnzX/f+0Sz7u+dMHlHCCGEEEIIIYSoY9mtcL/g+c9qHvnLJyevynAcxwshhBBCCCGEEH1ZsX///qN6hfvEE0+c/PczvvrVrzbHHXdcs2rVqvZv5Ic//GGzb9++9u8rXvGKybtCCCGEEEIIIUQ9yzLghoceeqj59re/3Tz11OEPSjv22GObF77whc0pp5wyeUcIIYQQQgghhOjHsg24hRBCCCGEEEKIWaKnlAshhBBCCCGEEDNAAbcQQgghhBBCCDEDFHALIYQQQgghhBAzQAG3EEIIIYQQQggxAxRwCyGEEEIIIYQQM0ABtxBCCCGEEEIIMQMUcAshhBBCCCGEEDNAAbcQQgghhBBCCDEDFHALIYQQQgghhBAzQAG3EEIIIYQQQggxAxRwCyGEEEIIIYQQM0ABtxBCCCGEEEIIMQMUcAshhBBCCCGEEDNAAbcQQgghhBBCCDEDFHALIYQQQgghhBAzQAG3EEIIIYQQQggxAxRwCyGEEEIIIYQQM0ABtxBCCCGEEEIIMQMUcAshhBBCCCGEEKPTNP8/OwvVQh/oZ/oAAAAASUVORK5CYII=) ###Code #untokenize wiki_data wiki_2021_10_05_50000 !gdown --id 1rQnbaOiqoN40AzHVq_IrRW4ki-rFPRxZ ###Output Downloading... 
From: https://drive.google.com/uc?id=1rQnbaOiqoN40AzHVq_IrRW4ki-rFPRxZ
To: /content/wiki_2021_10_05_50000.json
100% 62.9M/62.9M [00:00<00:00, 87.8MB/s]
###Markdown
Tokenize the data
[Jieba](https://blog.pulipuli.info/2017/11/fasttag-identify-part-of-speech-in.html) can be used to segment the text into words with part-of-speech tags. Example of a record after tokenization:
```
[
  {
    "id": "0",
    "title": "克拉西瓦亞梅恰河",
    "token": [
      [('克拉西瓦亞梅恰河', 'n'), ('俄羅斯', 'n'), ('河流', 'n')],
      [('位於', 'v'), ('圖拉州', 'n'), ('利佩茨克州', 'n')]
    ]
  }
]
```
![image.png](…embedded base64 image data omitted…)
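###Markdown
A minimal sketch of this segmentation step with Jieba's part-of-speech mode follows. It assumes each record in `wiki_2021_10_05_50000.json` keeps its raw content in a `"text"` field and that sentences are split on the Chinese full stop 「。」; both are assumptions, since the raw schema is not shown here.
###Code
# Sketch only: the "text" field name and the sentence split on 「。」 are assumptions
import json
import jieba.posseg as pseg  # pseg.cut() yields pairs with .word and .flag (POS tag)

with open("wiki_2021_10_05_50000.json", encoding="utf-8") as f:
    wiki_data = json.load(f)

tokenized = []
for article in wiki_data:
    # Split the raw text into sentences, dropping empty fragments
    sentences = [s for s in article["text"].split("。") if s]
    tokenized.append({
        "id": article["id"],
        "title": article["title"],
        # One list of (word, POS) tuples per sentence, matching the example above
        "token": [[(p.word, p.flag) for p in pseg.cut(s)] for s in sentences],
    })
###Output
_____no_output_____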
EQjbO1t5cudE+yspmmNhRittTDo9XBkuImD2Qg7evM0awpanQm3w82W/iq3ds/z7mIv+4fb2NdfYCAV48BoL5cmOji3MMKtzT3srITxms1kExne2zrMu+1xVuan+Wg2T9F9VxA+XIdHt8E3+/ncQQ4vjHN7vJGVoX4+XBlmNGl7qqjOVLu5uXWA8VyIjw5u5VJ3kohWZ1wYw4khlvcc5fQ7t/j69g4iQddT2vab8jyqvK5AhKUTN/jV7fc4c3Q/Y73Nd86nsDrWz+d7xhhoSPP1oSn2t8XQLAp14RQ3NvdyaqrL8BW6v7ixMsln2wbuvT8w3sPhuQG2NLlxqmZqhSwXOyNGtFfvewGPjx0Tw5xsi/DmSCvb25NETCYshv8wkwjG+MWbWzjWGObgRI3f7KgQsZjvfP/kOj1cT108ODm9bYxbW7oYK7by530jjGS8T7adasVfaOEfDw4z3ZLl1OIYPx6ux60LAtVBsHmE/YcOcfGnv+OdU9tIefTgwnct29399cBFKxc/+pwf336HvWu76W7SsKp1OO0O5vpq/GJbN6tdrby7c4bjlQBezWL4Wz3YoG9ut4fZjmaGEzbS8QQHOpJk3LoIWv/+btl0H+W022lPhY3vjxkC2EUxFuVSZxqn6rongPVjl0aGuDxfu9e2J2b6+Xh1gkvTHRy6c704NdXNj+ZKxiAzFg4z3V5ka08LW7oamSnlOLd1nMtDLezsbWZrrcLWycF7Aljvq1tHh/l8Uyt7eqp8tHuWlYID150gxMPteddmj3/NdvRyc7mPkWyYW4e2caEjRlh7/P76b3idDs5tX+bGQJqtPWW+OjbFrFvBaarD5o1QW9jJ2qkLfPr5R7y1kHkyO08RZPrvpfKHuXnrR1w6c4Q9y32E/Tbq6iw4bWF+fGKeM515Nvf18o+7OowBm36M365xaaxCb1sblxd6ONxTT3skwqbuCleWx7gwUWZ7b4HusBvHnTKYFZViIkRXqdG4jh4YqXJ+vp/PlwfWr53DVXb3NtEbsqOa7eyYqPHzA4OM1ef5et8kO8uh719XzUqgWOafDg4zWspwZnGM24OpO2L28e3hcnk4PDPE7zY1MdTWwWcrNVbjTkxmM45QkqnNq5w5d5Y//OnnDIUCRorbkzgRAfyaCmC720ff1sN8+c4x+nMpXE77PVjjqTRXdy2yK+vgzKYhPtrUTt5Uh2K2GJ29MRZkbbzKlMPFpo4SF9sydGTiVBviDDRneWdzhbJqQbsngHt5b76X87P9vDU/wMWOMJX6Ro7UmugK+8mF/TT4neu30xQLO/urLHW1cm5hkCtLw7y7Mskn2yf4ZHWUK0tDXJhu5kCtRN6vGVGxSCDM0Ykerk91897KJDcW2plLB2lviFPNxFkd6uLyfB/TWTd2xUpvY463yllOLQxxY7aFwWI95+famEtG6c1kObc0yEohy43lMn0+mxHdGh3o5oRe/rk+rm/q59zCEBdGcmRDLqJuO3rEq5qJMOKyGoKsWq5wabyVN+emONUVpzkZYrKvjT09jUy3Zpgo59kz0sNoQsNms9LX3sJbc7ro7mE8HqIQC9GbS/LO0jCzzWmqDTEq6TBplxWnokfi/Rxf3cSbLR7Whtr5+YEBOswmY9DxpA796O/0C72NM1vHuDbVQjFX4h/29NObeJpIMuN3BfjZsWm25BIs9nfzu6WmOwLYgqra8AdD9B68xZWzS6RDj3dcjy7Xw/sffOdHXDm+lwaPB4d+29Vsxqxa8WWL/HbnIFtbG3hjYZQPe5JGFFiPpDZG/dyY66EzFWS2u8pbm0d5f3mUrw7M8+t9M3y6bZBzU1VG425semTdYePIzBjLhaAR+fPaNZrqM+zuSBL3OOgpFTjSEiNt16PMZlLRBFcXa7zVGGXL5ATvjaVJur7/rUGf28eeqVE+GEoz2tvNzS2DDMX0qPLD9rjfbpmmMm9v6jHuIryzdzNny3HCFl1cmVA1B85QPc1LJ/njj3cRCbnv9ff7z/Gs/+spIpveuM6tN1YopaLY7XcjWyY2DfZwe3s/nb4Yv9k/wUo5jFkXwJE058eybOlrN7g+WAmyY7ibCwMZWtNxpgd7OTvWQrlaYWtxXQBXclneXhrizPwQF+f7uLTQz+WlEbY02Mn4bAScdgKOIHNFj9Ev4kE/nx/bxI5QkH2Dnfx6awmrxXxvgP2s9VvfT+8XVo5vGubd+TLFhib+tGeQ4fRTbKdo+BM5/rR3gPFiksPTw3xaS60LYLMF1WrHHwjQdf7HXH5zhbT3eQVwmcsffsq5Q9NGdMuqmTGb63DY7Ax3dvOTzRU2Nxe5tjrF7ma3EQV22p3M9XVyfrqHs3P9nBmv0BZUCQWCHBjrNnzRG9N9nO5PkLHrz2yYiXkcxp2JaibNm60RTkwPcW22jSOlPNdn+jhaTXNgsMmIAOsCePNgjZ39zZxdGOTqeJmhxjSnxjuNO1nD5UbemqlxpjPJ2+NNxl0TPfoa9rpJ+D3UutvZ21Pk+KYBVlMecgE38XCQ+pYyb3QFjQiwy+FkcbCf2zN5Zqtl3ts5w5aMhvM50iOyxTbeXuhiuN7Nu/s2c7o5Sugp/c7tdLBreZG3OkJs7mjiF4dHGdbvGt25U2Jz+/GUenjznbe5sCn7XP1O5zJdOMaVqxfoLafxe/SIvx7h1FPlvHxycJp9lQb6Ojr482rFEMBWVSMd8vP+Qi8ztW4uTTQzUB9lrLWJi7WCkWIy0dzA1r521hojBC0KzdnseqBpoY/zHSEy0RCt9Ql2Dld5qztHVzZONR0mF9KvqWbMJo1tw118ubOTpmgDv10bZ0fJ//3rqj8XlMzzD3v7GSokOTYzzMfdiacKYMXhZmm0xs+nMvS1Vri1pZOloIM6fTCn35F0umhua+dnX/+KwVDQaKMn+QQRwK+zAN5ymB9d3k979EFQXT4/sz0VeoIWBlvybG6N4b+Tk6ffZon5AlyZbCWs59LWp9k7VGFrT7MRGd7R18Lu3nqSFjPKHQG8MNDFmZY4fbkUfZUyFzrCtNU3cnq6h+N9rby7NMRarUTZv36LQ7+VHPV5yMbCdBQa2DVU5dpMlelkgFQ4QCkaphxzGREiPTqhC+BDw21MJnwsdbVwoLOexkSc5Z4WDk71cqA9RdbrxGM1o5gVsuEws/UpdvQUODjSxt6+PG8MF5hqzLPW38HZ4SZqkRDLXSkanVaCgRBDrXlmO0ocnerl4nCZpc4ik81pulJeAg4nPfkU25qjRBwaXl+AtZEqp2cH+WjHJFdGWljpybJQ6+DSdDvbay3sGurk5HQvMykNPeod9HlpK2S5OVGiGNUvQn1cmO3h6uIAp6Z7OTHVy9tbB1nwK0YutN3hYLCzjbGYRkc2wZ7eeqImk2HzJ3XoR3+nX+gVJjqKzJVihDwh9vfkaPBoKBaFaDBEZ9pPaypCbzZOXyaER1OMyL3D6mDXUDNtQQ8t9Sl2tQQNwXH3d/TbvZXd73Ph9KYfSADf5vyBFYJ3eDR+x6KgBqPs62mkPealr7nIXL0Hh7Kej+21W7k8XWM4FyQd8FBOp9jV
084Xuyf50VIfa91ZBpJ+Yk6rESk0mSwcnRnlYE+eaiZBc9zH/uF2FtIuI72g2tjAXNyHx0gLMuF1e5hryzERdFEpNLGYc+N5joch7VY7tZYmNjf6yCeSLLTnyblMKKpGUzJMJaxHfHQxbCLm99KbC+M1mQiFEyy2ZSj4NSa7WxkOe4wo1N22qHMEqZ86wh9+IAG8eOwa7x9YMNJ77v1GXR2VfIbVrhxpi4sd/SWqMQd1uh91B5htjjDWmmNLe4WTgxkjqneyt4FNXS0cme5hoTlBfUOSzpgNt81GNZdmtlpi50gvHy+0s9JdZK4tz2BDkJTPQTIcYVt7IzWfYqREeZx2Vofb6LI76G5Isbsces6HFhVG2gpsao0TcgfZ15sn79VQFZX6WIzOmItKfYyuTJzelM94kFL3kfp0lof68hQiXrqbcmxKeR/oF/qt8eaTn3D+xLYfRABfev9jTuwefEBw6GlBjfUN7KomqQRDTHW30B1RjTsJPreXXZM1dqa9NIY8RPQ0JtWMns4TcDtJBf3Up3O8N52h1a1hUzV2dJcYykeYacrwwUiGk5N9vLM4wM2BEjdGO7kx3c6V8ZY7KRB1tDdm2dRaz0SlwOGhNtbak+zuKbOvO8fesSonOnLM5EPMFaNod/qq7m+CgQinp1qYyKU5tjTEwWbd58Roz9fT1d3BiTsCWBd2lXyeldYQpViUxZ5WKj4T2nPkggcjSRYr9eT9VqZ7WhkMunDqftVsoysfpyUZpjuXoLcxTt7IhTZh1TQ62ytMpZ20JkPs6tPZN6Pejebqg790K4evXP7BBPCFC6epZP33sW1CU2xsG2ihN+GnIRpnX3scPQ1BUx1MdDYb15P3d07z9mIvu2qtrPW2cGW8yoFaMyu1Vk4vDrAvHyFsthiR/uZ0ikPDrSwnvIz3VDkz2cvZmRqX5np5Y6Kby5sGOd4du8Ocmc7GBnZ2pwk6vOzpK9Fxz089eeB+v++497/ZgsMb4nBfjlzYS28xy0LCYzx/4XS5qWaiVBMhOjMJejMhm51/qQAAIABJREFUsu47A3DVSrkpy65SwOifWyoJis4H78al84188XsRwA84i3uGvwOtnksSCAQwb0AB/G1b3P/ebnMw3N7BrcEscbfNyC3S84t0Ebf+qhi5NXpOl/5gg56rOT/QywfzNS7M6RHgQSMC3FafY1clRcGqcmG8g4x3/aKupw/otvfoI7mBTs5Mt7Gpq8jpgQL1biv2QIgjQwVq/m8i1roAPjBYYakpy1tDWXIB/SEJCzZNpbW5xI6WGEFFz/9dzwW0WBTcbi9DxQa60n4jtaGYDJB22oi4PbSkovRG3es5w4pCOBBkuDlrPMA2dud1vDXHWClNX2OCzb0dvNEeJu35Jo86ks2y0t7IiYU+ZrwWI8+6o62JsYwbt54n5vUy0dm2LoAVE15PkB3TPayk7OQiIY5ND3FjsdcQv29O9XJhcYhbeyZYCal4nhKRuL+9nvd/u83OYGc3X+4e5tJAkfnWHB9sH2M5asf5DBeaFyKA715oHvGqR2gjHic3V+d4szVstO2Bsf+fvffubvPY8nS/CZFzTkQGQZAASAAEM5hzDpJIiaRyTlTOVrAlW3L2sX1in9Cnw8z0Wd0zfeeu+5Geu+plsCQzQKJEyXL9gQUS4UW9VU9V/WrXrr2LLESjXBuu51iVn9a6NFe7asladagqhE+xkWuzPZxqraW7sZE7Y3XcaEpwvDlJ2qFVFkMNAetPE9wGv7vTev/Z94UPrc7CoYFWngzGFP9YjVrHaFOObyZSeNRrbgBbTDa7JIB/Vvbn6kdEuklHQ3TXRkk69NQE3IrfodiijYQqKYRd+C1axWIrrJSFRBilr4l+l61ioK6KwboEvUkfI005znRn6Bd+9LscNcBqtrK3r5fv5zs42V7NQLaG7xdKlBwmZedrqzoQ7+2GAN6qDE6bnYWBFqbcOtyGFQuvGB+F5Vb4zRsMBszuEA8GY4oA1mv1JHxuFruKXOus4ZO+Kq4PNXOqr43PBut41Jbk+nAfT8ZzigBW7lGtJux101kToxQW46yDlM9FjdOA225XDtHlA1Zl0S7GfbVKjc1iYbKng8f9EQqREMv7+rnUVKW4kvTnUvR3ta+7QGx1f2/2PQ0mvZevTozx5d5mhnJJTo918aAjQspYxk7PrgjgLfp9hRqz3qK4gC32l1hoCGI2GmjN1nI86SZs1Cpufk3pWvYnfHiE4NdoGWptZq6+kojDqFhV742XuDrchpiPxOPzAwNc74ptqaPebDus3GM4luTJ0gi/Gc/Rl67i/FALn/RGVowXz401m/22FMBlVJIUwBt1KA11qRq+WBjid/Nd3GuNcWeyS1kRXpvp58u5Tq5PtHN2soezTR6cRo0igMdLRRb8BrwmE8FwFdcbfTQkq1nMh0m+JIDFgYJETS2HOjKMhOxk4hGODZZ4tr+H+631XB9K02A1KIcM1gAPeAKc6m3gVFOWS0MlHk0WORyr5GBfCx/ND/F0Xzfnugvk7ColqoPD6uLoRK8iyq+OtHJtooOvDg1xd6SN5aFWro+18f1UTvGbElsn4hDcZE8bYgC4NtLGpdESt/YMcH8gRUsmRl/QilUcsFs9cKCq0LC3s4mDhSpOTnUw4nez1BJhoreFT/eUuDDUwvJ4J3emOtYtwENNBT4fzTKaDVIIBjjUlFaiUBiFK4lWo0QvODjczl63dvcFcGMz383k6fCa8dhtLI4NciFjxrONb5xon3ctgCNuH9dGWvl+uolcpY8DpXoONNcylsvycK6Tm20Z5QT8TF2cnqAQtUIAm7ky3shQxI0jXM2RBjcht53BXA2TMTsX+4sUfObX3FbfqF+V+ZpKRamQ48m+NhqN4mS2g6W+Ri6nHIo1fq0/bPr8HgjgeKHIw3093BC7GkMtPN4/xJfz3ZwbbOH8cDuPZ9s5nVhxMRBirJCq5sv9fVwabVd2TG5MdvF0sZ9TqQBzNW7cRg16cRCyjDF903p5je8KAbynt4cnA1Eydj0+h4W7S0PMu63YyrjeuxfADg6NdCricrS+ivaEn4BZWP7M5BNh+uurGWhq4OlYlSKAhRuNcDcSh3VrIl4OFENc7SvSlqnmbEeaB61x9jY3cWukfv0QnDAynBju4MFMB8vDrYrb2NP9g9yfaOHCUCu3Zrq5N5KluqJCEcEmg4X2dIY/HC5xo7eNY/kUN/f1c601xVQhxWgxzWhv6Z0J4G9ODHAg48NlMpFN1/HNTD0d9rXdmC368LsWwBo9dleEC81BDg2UONYWp6tYy8XGOj6baOf6sOh7rTyY7+NkTQC/VoevMs6X880cLMTpitqZ6a6nx2XBoV2Zj8ScNNCU52Ip+g4EcDUf7etlMmHGbtAyWKzj27k8MeGmWUbfkwK4jEqSAvilDq1SYTBamC6mGGnIcrstxkAmyqW+AsPpMMOtjXw5mqYnHaXY0sxyqxe/1UAhnebaVDc3+xs41t3AiYE2Pp4scqY5y96MF59Bx43nLMAiokMs4KWzvpazPXlm0hFGc3Hu7BvhrycmOVt
w4zW9eHoz5AlysjfDktgejUY4U6qiwWNCnAZvyuc41hAhZDZg1AgrQwUuu5tT0wM8nWznxmgbt6a7+c2xUR6KFe5IG7cnSvxuT2H9AIE44d7T3MD1kXbuTHby8Z5ubk73cK0jqsQ0Fod1xIGBqkovw34zequbS51RGgI+Do22cLIQZ0/Cz1B7nsWmuOI/JUIQLfWtukDoNIS9LjprIpwfztMQCXJ6sIPrA3kOdeZZ6shzarCVjw/0M/cuLMCNzXw+nKDOplcsNPtHB7mYMeM1vMTIBv3qXQtgcUCl4LXz0VCRKpeFhmScx2ONjBfq+Xi+m9ulOsZaGng8XEXGqlcOjKnMXm6PJGn02jFXpTlWcCKiGeSq4lxur+dub4Kql7bV3qS42upafm+Ai5M9zCWt1EaTnO/J0GDXbRshQrnmeyCAk41NfDLfx02xkBxp49niCL9Z7GN5RAjcEk/3dnA++dMhs1Q8zoOZLpbHOvloeuVxf28n4x4DTp0ao06L2WhluMZD5U5OnW/A7lbtIATwbG8Pt5tcVJo1OK0mbi8Ns99tU0K4bfVd8d67FsDC1SHq91IIe2lMRplvE+4RegI+P8faqulKBsiE/dT7LTiU8IDCTUpNT65WWfB7rA5u9mbJuq0UomHutcTwW10caK9ZF8B2h4vTo518MtuhLGBE9JuvFof5eLqNy6Nt3N3Ty/2xempFJJ8KNdlomKO5MMujeUa8LkZSETpSIU4MNLK3PslkR4HZQoRaj2HbA6Hb1f+rvb9iAf76cA/9CQfC9aI6leXb2RxdqzuXW17vHQtgs8lIe2OeAZ+B4ZYiR0opzpfiDNVnudmUoKsqSFNViOm2Bo7XVuJVq7FZ7RRiQRbastyodzPb18oVcWi8Y2U+EnPSg709XO18NwL4zkwXfX4VJk0FfQ1Zvp0vkFAJjrafk6QALqOSpAB+ESRhARA+iMJCUB1LcLXoIxX0c3a4g3vjLVydGeDzvR3cmGjn1EQPF5pXBLAIOVZT6acj6aclEaQzGaDW5+RwbzvTSbsSPup5AazVGJntama5qwYRiqwzLkRhIyf6m/j2QC93+usoBkWINd1qiKcKxCR5vCVKR67Avd40FxojZPwOkn4XncUcR5sSZLwOQhY9OrUKu8XJ4bEOLjRWKbF/x5syPD7QxdFcNUPZKsbySR4NZQnaTYr1VafT4vd5aUlUMl/M8Flnip7aKFVOMzbdasgjn5f9zTUMV5pxeX0MunUErCZCbjcLrdUk3XZaCjm+Whrk9mgbV0ZLXOwv0OTVKhYs4Tfotdm41puhNlzJyZ5GxtJBJdaiOFxYqqni2r4e9pcjgDV69K4QhWKBuNelhFnbcoDeoj8oLhCNzTwdjCsuAmKL8k0IYHEQKBBLUcynlYXKyin+F5nbqMwnHm7gA7xF+cU1REKLWwMNRGxGcvEIn88PcGagxLP5Xp7OdHFquJNPh+JkRLxTrQ5nJMnNFi9hlxVvTR1H6+24TDpioRAfHxhlud6G31zG1udquSy+KDWZLDVBrxLqaaP7Kvc14YLU3VDkk84qhvs6Ff92tzjotk0dKNffUAALP2kzsVSGbDqJx6ZTQsttVx5xCG4zH+CtvhvJNXBlMM/YqkvD2fFenozlFPeG/mySs/1FTifsiq+12FFx263UV8UUf9JPZ3IcSIdoEpZ5IX61QnhaGG6o41Cdi6Bh67BLKq0egy9CQ0OeiM+57aHCre5jTQDfaHTiN4nQim9GAOsNNkLxNIX6alw207ZhmkQItIqKHBv5AG9V/uffs1jtjBSzdIeMhCtDHC2GiFlfrEvhCmg1uzjZHiLntCjRfK40B6g06slUVnK3+cUoEGLnTITqOtrbyGyxWnEfm2jMcn+6m4NNwo2livFiPddH68jqdZhUauVcQdRt40BfPS1mEw6dFp/NyEJ3jnill8b6FF1RMzajFp04/LoN81q9nmBtjmy6Cq/tJ5e55++9vL81mHQevj7YTW/cju5NCWDBoydCsbGBqMv+go/4VuUSh+B+7gO8+dhpMVoZrfMoLmsRl53B+ipmwi7SiSRXJ7t5uqdTOWNyY7KFmagLq1KvKowmK5OFBLNRJ7M9RRZqIrSKg+Wrj0P9bdzoLs8FwmC2k0ilyafjSr/dru22uv9wrJrbM130eCsUo5YUwNt0hK0qc7P3pADepENp9ARD8XUBvNiYJO3Sk4gnWW70EnFZsFTVcrbJjdskHPB1ygn6Q9U2miM+ltoz1JrMXB5roMdr/pkA1qg1hLx+5jqLXBhu5mRzFXm7nlRVlPsjNaSqark93cJSPoRXhB1TqRhtrONA0o3d7uWbo0Ms5SI0J+JMNdauPyby1XSF7MpBD6PeSDYRYSSfYqxQw972HJ8u9HC2OcMe8Z2GFBNpv2LpzvnM6PU68g1ZjohT/rVxHjSFmW7MMuC3Y9fpiITCnBsp0B+wr58qFfFprSYbAw1ZFhN2vAadcjDv1liBBr+LsUwltWadEsZN3IM4+NRZ38DdtjDpoJ9zI93cHWtRtobPDrZweayDpwcH2O/Tb+8CYXLiGj7Df/7f/82nx8epXA1MvxnrW71uMhrpLDbx6UBMEYhWi4W54QHOp014d+ACYfMkOXTrK/7vf/6JuULNer1tVRbx3qsKYFG3TpOB24oANijRBm63x7Ho7RzsqmU24MTkCvFRf5S0VYfJaKC/Oc+8x0DIZqG5oYmDNWYl0sdQYz03u4vcG8qS8RgVl5dyhGdsz2V++Ke/8vsrc9jMZWyZbjmeqQm7vHy1t5tPF3sZjv1kLd2u7jY+BCeEfDXXP/8Tf/njl8wWKjE8lxBks2u+rgB2+vx01SWVfif63iURsWOycb2fjgn/3lAl0zVexQe/JhLiTG896XCQ24NJ5tIRTjaHEK5ByjmB1jqONIUxl2H91VhdpPYu81//5x/cO9KJx7TJGLdl/a98RwjgmZ4urhUd+IwrAvjG4iBzLiv2Mr6/mQXY4c1x8u6P/O9/+4apxuoy/IlfTwAL9xLh0iB85O02x0oYtLBJCYN2TIRBs69EOBHvi8+K8xRt2RTDXiMRu52p7jwjTh1WjUYJg3arSYSo+ikMmuBGbzCSS0TX21qEMXs428vpjlpmmsTYLCLiRJnMxagTW+sVFYgMjIoANoloDjpcoQi3OhIE3U4KuRR9CRvZRJA6p2lba5/F6ebY1//OX//wBQtdoR1s1a8I4C+WuuhZE8DVGb4WLhA7sQCbXfimLvP//H//zcdLffjL3FV6VQEs2kIxYKnVNKRr2ZMPKwt4vc3BfE+BI9lKGmurafdqlQgaa2xEYtWc60gh5sDZ3jZuC9/fwRbEfCQej+f7FQG8XXxy8fveSDv3v/kD//jrYxos5jK43rxvhqJJbk530u1ZEcA9hQxf78tLC/Bmg/XrvC4F8CYAviSAL4z18GS2A+GbJ0IU3ZzuVuKwXmzx4rYaCPg8nG6JKkH9xSDqr4xypK/A9ZwDv0HzogBWa9D7Qyz356ivtGE1GfG7nDTEgkr833ttfuX0tcNqYaRYy4jLjEbvYH9zhJ
FwQIn4sNwe58poO+fzUTprY+uPUjJEo19kVtOsH/QQ2drE4T2/28XJ4SIHamP0iu/UxOhMJbgx2UZH2IIhGGNv2kO60sdMQ5o7+UqcbjcT6Uo6quPcGE6Ttxgwa9asEmoSlV7G23PMh21YtHoywRjn++rIegzKCtgVinK+v8hUlQ2H2URjTRXXu6L0Feu5OpBWyt2T/qn8Xek43ekYrWGn4gKyJdNqPXZPE1/923/z+Y09hN0/Hc7b8nsbTNziJHl9qpZTTQHChpXDMT1NRfbG9FjLSGyxmQuExuQmMHKB//rvf+Vksa4sv0lR9lcVwOLQlQjTdb8/R9SmJ5+I8tXiiHIY8/ZUJ3emu7k21cfTVQuwSHpwoDlFzOumWBXi5GgPB5IexTJ5oN6LiCiRCoW40FNDNmBTws5tJ4K9sV7ufPkD//LssDLBv2obvPx5cU+T3a1c6opSZX7xhPPLn33h/w0twCsRQPrPPOGH3/6GxZYa5RDrC9/bgIvXFcBiXBUB6EW/E4/uxgZu9Nau99OO2jgDDTku1Hnw2SyMZWMk3DbytWnuDVfT4bPQVl/DSCLA6e4GlmocivgVk/Z2ZVapjdjdPfz4H//FJxd7qbRvMsZtcL8vX9ugN9KRz7GYsihpmEUinvn+RvqsxrIWc5sJYL3NS3H/Tf7PP/7EXGuuDKHwOgJYTTZexYWxElfGSlyb6OTmZCd3JlfG8VuTnUrIuStj7VwZaWI45sJtMHK8NYnfbCAdruRyS1AJOyfqPSJ2uVJe9M8lwhD1JUSXCNG11tZul5tDPc3sL8ToVsbmqJLZ7epwlrSIKvOcAO6wi2RFMa6NFOlwaDEazbQUsiy2JFgcamI4bN9WABvNdjpn7vHH333F6cnItny83MY//a9Gp7FzZihPzm9CHKIOhmJc7E6SNZXR/zZzgVAbcAdKfP+//l+eXh4l6CrjWqth0F7FAlyh0WC12ZltrFOiNnitRrxOOwf7i0wHHTh1Wgx6MweG2jlY61fcBf2uSi4M1rOUr+baeCuHczF6amN0PTen9mZitCUCRK06ZWf1p/r6eb/S2kKMXnzCP/7X9/RazdvG393qWk5PJQe78tTbK5QzPblknIu9VQTEmZMy+q50gSijkqQA/jnECpQiRIrFRtFnwmkxkwt5ibksykl7cdre77ThD/gpBEwIgelwOUi6TehEZiyVSrEIi0MU4vSsSV2hxJdsjPgUZ/YKkSVJZFuyGJRsSyK7jAhs3ZwM05Pyk7KuhDoRbWMW2epEhim9iWqHgbjbSUvEhdtsIGCz0VoTpT0VWX+ImLpFv4jz+vOJ0qDTUR2upEPxOVv7TpSumF3J+KXRarDrNQR9PtqTQRpdZiXTk8VgJB/1k3QZlVPoP23riDSTQWq8FiVLkapCSybqJ2U3YNIIq3WFEjpHZD0q+ixK5rF0pRO/iKphMuLx+9bL/fw9tItsSDEvrtV0yxsPEhrMdh+J1im++OF79gzXYTW9uJ258fc2bm+x9SnqRwTxF9nNhNXIbDQqsR+3y5AkfmdDAawz4qyMMTB/nR+f3aA9Hihjol8p36sKYJ3JTEMiRGfEqSx+RJ23Rz0EHNYVZh1WAm43pZgdt16tJFzIeO3Y7E6aqyN0pXzUuJyUwjbFDUJM6sIHMOxzUwzZlIF8MwGshMOyuqjpnOXOk085PFXAaChvktuqjQT/FrNJ6TOvlHFuAwGsFUkb/Cn2XvyYJ3eWqQ84lWQvW/2+eO91BfDL1xUH+dprYj/xLv5OhkjZtVj0OsXH16o3UEjGKEVXXFHEQjrodZN3mZVYzy9fc+P/dZjtfuI9M3z7w5fs7U9jKcOHfeNrVSj9QGSaMouMdeoKpc7sIgtWmUk3NhLAGr0JbzjNxKn7fP/gErlIoIwJ/XUEsNhx0uMV4/W2DxM2sVjRmMgHLMriPWC3kHIYFUOCqB8RI1hk3FKrdCT9Dmo2WZQJl4RIsJLe2gil1bFZnHtoCznWx0+R3bEm7CFoEEYDD9V2EyaR/l2txedy05sVRoogfrNuSxcIEffaE4gxef4un909TmPSuQMBLBKCqBHhOcXOg2JN1eqU+aes6CMbCmAtFoefZMcevvntbxjvTmI1roRt3Iy5tddf2QKsUmNwuGiPeBAZCcU9iMVIISRiQq8sPMQ473TYSAfsyn0FvR7l8KPPYsRjs9NW/VObPT8ntSQCJITr2GaJXEQULYOZyqoMx5Yf8cXVvYSN+rIOq63d78vPYldZLDjXzvSYVrOhigXUZmPx89eQAlgKYAwWGy1Th/n8zknak1EcNvMOBoiNxdPz0Mm/d6uOtCQL41y594izc214nNsls3hb5VKjE1mvfH5aDn3M9QtjBF2rv2Uw07Z4hoePrjAS9mB/Lg33dpwcvPUp108uEHc5sZrEobWfL2i2u8Zuva/W6Ihnh7l+7yFH9nTgtJnWI4XsVhmU31ESYViw+2NkJs/yuyf7lHTI4j2d3sTo0Zs8vHOe9lh4S0H/fJktTi9jJ2/x6OReMpFKLDt27XhbHK5d10Ayt4+rDz/m+GQYj/MdjXcqDVqjBY/XS3H5M66eniFoW0mEoTVZGT1ymY/vXabd51F2x56v843/FguqDJcfPObC4QFcLicmg3BbWLvvX+ezwZhg9sQl7t87TTrpw2wo32d/43p+vXoUh6PNDjeuTDOnb99ieWTNEq0j2zbL9Y8ecXy2GZfdUvb8G4wf5fqNy7TUx/A4RVrz8oTzm7yvsq+l1lARTnP62i2uHtlDvc+ObpdDFiplFYkwdHqsdgeZfJ4vf/sN7S7XtomjZCKMDzQOsFZvJJJpZN/8fg4u7mdmsFlZ2ZYNdhmLCHmt1xs0P4x6M+INNDN38CBLBw8y0p3Fbt55fTT0jDO/eJAD++dpyyeUVMgfRn3tvG42qwflgGWwxMTsfpYW5tg7XI/F/PpuMeJ39CYLmfYBFhcXObgwR1dTuuwJfLNy/ipeF4vC2g4WDx7k4MElhkr12A3lWa42rh+xu+Ojc3iKhYOiX8zRUC0slW+Pp43LIX9vo3oxOnw0D+3hwOISB/ZNUKrZiSV6pY7t7kZGpudZWtzP3HgLXiUVsqz/jep/7TWVSKzhCdE3NsPBg4sszU8Qt5i2tURLAbwLAlg0kgg8bjLt5KTqq3UAkRtbqzdgsVixWS3K1vsaLPL51epS1tdG9SWiYxiw2mzYbDZMYtvrDSTwMBhNiExAVqsFg177XluA3xsulAguRkzmlb5uNesR8bZ3Uj4hqnUGI1arTRk/jIadCeqdlOUX9V1hAdYblD6x3i/KSCiz+T2KHRANRpNZ6WuiX+hFWMZfuQV48/raaKx6e6+JFOAGk0XpJ1bLilvgTssm3JZMZgs2qxWr2VCWy9JOf/MX/30lFbJO6Sc228o4KCzR27lMSAG8SwJY+OAI36JfPGjSMizbUDIgGZAMSAYkA5KBXzgDUgDvkgCWwvftrcJl3cq6lQxIBiQDkgHJgGTgVRiQAlgKYLmK/YWvYl+lw8vPyglCMiAZkAxIBiQDFUgBvIsCWGTHap8+w
rlzl3l4/w5jvUUpPqX4lAxIBiQDkgHJgGRAMrDLDEgBvIsCWGswYnf5icU6ePr7P3F0flACv8vAy1WvtHxIBiQDkgHJgGRAMiAF8C4K4JUOp0OlquKjb3/gkBTAcgEgFwCSAcmAZEAyIBmQDOw6A1IASwG869DJlbdceUsGJAOSAcmAZEAy8C4ZkAJYCmApgOXKWzIgGZAMSAYkA5KBXxUDUgBLAfyrAv5drjblb0trh2RAMiAZkAxIBt4PBqQAfo8EsJK9TadDq9WglkkzpDCX1gjJgGRAMiAZkAxIBt4KA1IAv0cC2JNp4vpXv+ezqweo9tneSoPLlef7sfKU7SDbQTIgGZAMSAYkA++OASmA3yMB7K9q5OqzP/H3727QlnVLASxXvZIByYBkQDIgGZAMSAbeAgNSAL8nAlij1eMNJth38WN+fHKWQsIhgX8LwMvV9rtbbcu6l3UvGZAMSAYkA+8LA1IAvycC2OasZ+LIGT774jo9jVHMBq0UwFIASwYkA5IByYBkQDIgGXgLDEgBvIsCWGc007P/LDdu3OOLzz9lZqh5HWqtzozT7cEfcGM06NCoVevvvS+rJVkOuXKXDEgGJAOSAcmAZOBDYEAK4F0UwFq9AZvTRTgcJhwMYLdZpMh9C6u6D6FjynuQE4xkQDIgGZAMSAbeHgNSAO+iAJYgvz2QZd3KupUMSAYkA5IByYBkoFwGpACWAlhaoaUVWjIgGZAMSAYkA5KBXxUDUgBLAfyrAr7claH8nLQiSAYkA5IByYBk4MNlQArg3RTAWgOWQBXt7Y3kcnUkwj4Mmg8XLjlwyLaVDEgGJAOSAcmAZOB9ZEAK4N0UwGYP0dGL/MffHrJ3zwTNuWrshgpUcttFWqElA5IByYBkQDIgGZAM7BoDUgDvsgCODJ3hX78/SizsQafTolbJleH7uDKUZZJcSgYkA5IByYBk4MNlQArgXRfAp/mX7w5S6bPt2ipHduAPtwPLtpVtKxmQDEgGJAOSgVdnQApgKYClEJdbTpIByYBkQDIgGZAM/KoYkAJYCuBfFfBylfzqq2RZZ7LOJAOSAcmAZOBDY0AKYCmApQCWq37JgGRAMiAZkAxIBn5VDEgBLAXwrwr4D20FK+9HWmUkA5IByYBkQDLw6gxIASwFsBTActUvGZAMSAYkA5IBycCvigEpgKUA/lUBL1fJr75KlnUm60wyIBmQDEgGPjQGpACWAlgKYLnqlwxIBiQDkgHJgGTgV8WAFMC7KYCNDgI9R/ntlxfp7+skmwxh1MpV5Ye2qpT3I5mWDEgGJAOSAcnA+81OVYCNAAAgAElEQVSAFMC7JIBVKhUVWiPWQBVtpRKlUjt1qSg2vUyFLAeJ93uQkO0j20cyIBmQDEgGPjQGpADeJQGs1WrRanWo1Bp0Op3y0Go0qGQq5F/VlsuHNoDI+5GTomRAMiAZkAz8EhmQAngXBLCw/hqNRsxm8wcn9oSItxgNWDQqdGo1Oq0Gj1GDTq1auVfF8q3DbtBi1ak/uPt/rzu9RotBp8GiVaGSvm2SvQ+IAYtRj1Gvfa5N1Zh0WjxmPRox5mxxryaDDrNOi6FC7r5tVU/yPSlqP3QGpADeJQFsMpmwWCxbDszvA2w6o5GQy4xdq8ZstRLzuUj63S89XCTcFsw6DVaTmbG2PPuDFoJ2K7FoklMNAfwWI26ziapKD9XxMPuLQVJ6KYDLbWOxYxBwu6jxu5XHz9tgpU2yASchsw61SoXTbqe60kM64KbGacRpszKQT1IMWLCbTcR9TtKVbjJeC5q1BcoWQqHcsv4aPqfXqLEa9fhcNrxaDSa9nqDHqfSLupAX8Uj6XUQdJkxqNUadniqPDbNWjU6jxma1Uu22YhHf1WqxmcwkPVYsclH4ymOiWMwF0xm6q1yIBbjTYsRlsdCaquZSQxC/3YzLbMCi16JSqbGaDDgtJlyrj0y0msWGKBGLCff660acJr2ycJeLxfKFn1ajVhbZmtXFhOgjVS6zMh5VVKjQqtVYdJrV/yuoUKlQaQ3UiHEo6HlhXklVeqgJeolateg1Wy9ifg1jjrzH8jl83bqSAvgDFcDC6qzR6tDr9eh1OjQazfpEo1Gr0QuXDLUKvVaDXqtGrQghFQ6Ph47mLL1BM6FgkN76aqYba5ksrjymxHNDisFqPz6LgXg0wSeLozwbb2V5qJUrk738fqGdhaoIJ/ua+HSqyGh9hJTThEMMfioVOq1WEQVCGBh0WrTi9dcWYirWB2GVGsOq4Ni+Q6zdu0YZpI06zarlSJRRjRCger0BnV6HRr1zS5FyPYNBuXeNRq3Ugyij+C0hrkQ7iMlc/C0sWFazhdneFi4XapjOVzOy+phoqFHaYryhhrFCmnuTrRyK2RTLe1Mhz3JvA0udzdxtqqRQl+ZIe5qjPUXmCwnGcklOdDfwWW8Mk5iUhMVetIFmxXIv/hZMbF93Gw9MCnOr96DXaJQyiett5+YjyiHufY1HUQ7NqmuQqB+NRotOp8dgEBzvbBG10i9E2670C/G7a/cr+oXCo9I/VtpljcuI38v+Oh+jHXlmbCbCdjvN6SrGi1ke7x/gQV+O8UI13TEXHr0Wj9XO8d4mhkMGnCYDiVCIs30NTFRaqA74WOjMcTwTJGzVK+JAlEswIO5d7KT8xOPGdb1W5s2fn+sX4r7K7Rer5VjhceV7KxbVF/uFqL+d9wvRz3QYRL8QY9RzrChjlOgXgkmlXlSoVaK/iPvSkG5sYKyuEqvTz5G+Rk71t3J/tp9/Wurl/GALJ7tzjCZsaNQWpjsLnOhv4nR/M2cHW/hsYZQnezo41d/006OvyNFSnIiopw36xRqPm9f35u2klFmMt6Jtlb4uxrztxxRRB2JHba1fiLpYGacrUKnVyviu0+sx6HXKtV+nbGvfWRnzDEq/0Om0L/SzlX6xMk4qY9VzfdBqszGaC+BVVSBEcE0owN3uBAat6Fca/HYXg2G70o7Kb6k1qKx+ns01caI1vT6uifFtojHDkeEeztVbcRtWxiFVhQq9TvQLsau40i/EYn+t3K/8rPCjxqjVsMbY+m7lFnOQqG8xT66N02LMVPqFmLvE+CXmC4MevV633kavXLYtfl9ea/P+tZO6kQL4AxXABrON5slDfPflUx7fu8nZhb510ZWMRnlwYJxDtR6Wp7t4PFkkrVOh1elJBP2UaqKUqiupslmZairw3cIAd6a7uT3dxaO5Xr6eL9Jg0mFUq7BbLMx0NXGw0kTAYiYSTXKrpZJCNMnhhhgJZbtxxSVCDN5isF4YHeaj3hSd1Um+Oz3Cfq8B62v7QusZbS3ym2MdDITifLO/l6MNvm0GSDVqlZ17h4e50ZVmoKGRvy22knSbqKgQi4YGjl66zP1n3/Lpk1u0pCrQ7zBaR9f0At//9rfcv36dU/u6sZj0yqLEXxnh4z093Cgl2NvbwRdj9eQdeqxmMzM9BaYsJrx6LabVx2hbE8c7ajjeVUu1y8ap3jyLQgBr1DTlM+yvj9GSz7NQE+TuTAc3e3IMd3VwsSfDtZYgexpSHK1yYNCocNk9nB/v
4dP2GG2lDh5MFig59NvU3eYDkdFgoK+5lWezeQ7V13J3foRDdQ5s21j+vdEE9/YPM59xc2nPMA9GCiRNFYoINtuaWDh7jVsPP+N3X1+nlHO/dvnEQKk3WWgYOcC3Xzzlyb2bnF8cVBYCFRVq8skkXx0bYTrm4+ZUJ4+GkzhWLVuFZJxHPTHGOtr4eLydk4UomYCfzsYcz/Y0MVWoopQKEjfrMKlVaNQaLAYH060R9rU1cG2ik7vTXXx+YIjP9vVyc6qLO1OdLPelKRp1GHQW7iyOcbWtmuZ8nr8dLdFgMSii4vUGeB39jXl+c7yToVCUz/f1cKY5sHXdabRU+MJ8sdDHmfYkc32dfDeRo1owodGji7Vz9tJlHn7xDZ99epfmUAV6zeY8bF1uIWI8jOy5wu9+9zU3blxnrs+LSV+hiIlsVQ2fzLZzLhfhyOQgd3oixExazAYjuUSIscFuznfXkKu0YzboMOlNlFIpHrVHFOuvWEAIsaYIKK1WcZdwWM00F+q51RYiYjWs9ymlb+m0q4uOCjyOSi7P9nC3IcTM4AD3x/JErZrXXqSbjEbGRwa5159gtJDmi8PjzIa0WLapu3RVkk8OjDGfdHNjvo+HIzmqtCtC01LdwdFLN7jx+Bk/PrtGR3pn/cJodtK352M+/+wJH91a5uBIaFUEa6mvquH7031MV1ZyZ7KLR72RdY68Pi+XhlMkhGHjZwJYS7U/xIU6v7JbqPCwKoAfjlbR4ja/0AYOq5XGfJFTWcuqABYLABcPjw1zubWa4aYW/r7YTMBqWP/9rRnbgE2DmaGuFv64t46GmjRP9nRzqHrrnVmxAA+F4zyb7+VCS4KloS4+H8pQY9Oj0RkINw1y8dJlPv/qS754eJ5wRQVaKWZfv412se6kAP5ABbDJ5qS07xQ/fHKR3nQVLodt3RIXjcZ4cHCaQ9VWrs708GymgZRKTDw6RQD3ZBKcG2thzGZlujHDrUKCxkSQYjxIZ7aKh7N5cmKCqahQhNpERwu3O7MstNaz2N3MzUYf+ViKk60pCi47EZedoN2k+AEb9Xqmxka5152kvybKHy+MMWMz7kAA65hsL/LHEyXy/ii/XernVMG7TedTo1ZbeHR0jBs9aeoyef5xqI1aj/DR1qDRmHB7veRmlnn8+UO6MhUYdBsMpq/QUQfnj/D7H5/QHAjgtpkVq4ZKo8XiD/Nkrpc7HXH29AjBkSFvM2A1mxQBvD8WZ1+hhqnGWuVxc7aPB9NtnBrp5EJbimtDDSxE1wRwmiOFJKd7s3QGPZwp5TlYqueT8QJpn41jvUXOtSfJ2lYsUG6Hm3NjvXzRFqO7tZ2PJ/LUOHXb1N3m9SAm+oHWdr6dqWNfrpaPFic4mjbj0G9tsamMJHiwOMZS2srynlE+GSmQ0q8IYL3Bgt0bJdpzmD///i69Tf7XLp+YLI0WOy3TR/ntp5cZyCRxOUW/EOVT0ZCq4sfTY0x4fXw03cWToYSyOyGY7azP8N14LQuDrVxIhWkO+9jX1cnNgRyHOuo51Jnn2nQH024tDr2W+mQ1M011nOmuZzDlJxX0UYzHuD3Xzcl8jGK8kmLMR9xhxqbVYNQbuXdoiqvNKUo1Gf5xokSlzbgDa5KW4aY8fzrZSWMgwjf7+7jYuM3CUK2lwhnk+8ODLHdUMdXVzR8m6ql1GKhQa9DozSv9YmmZZ18/piNasYMwjqLO/cwcuMUfnx4lGKzEZjGg01QouySZZA3P9rZxIRvl+OQwdzsrCZiFT7uBQlWYvcOdnOpO0xqx43PYCLpcjOTr+aw7SZXPqexOPX/mQKfVEXB6uDrby/WuHPtb0ut9am9rPVP1MaKriy6fq5IrM3181FhJd18/j0ZzVNleXwBbTCZmR4d42BtlsljDdyfGmXXrsKo370uC1USymieHptifsHBnfoBPRuuJr1paxULOHowRnjrBn7+7Q39uZ/3CbPUwuvgNn13eS7Y6iNNuQq3sBmlpSFbz53P99AWC3Jzs4rOe4Hof9Pv9XB2tIbmhANZREwhzNV+puEE8L4CfTKXpD7sJu+zrj0TAS29LC2fq1izAwuJq57PjYyy3pSjkGvivQ82E7cb1339lAaw3M97ZzN/n6sjX1PL5vg6OxG1bXk+t1WINxvlmoZ8rbXEWBnv4ZriGWqseMYaLtvB4vcyfucT3n98kJgXwlvX5ym32CvPsq15bCuAPWQDvPcU3d47REHC9AKTD5+fEWA+jQS0z7Q1c6Ijjr1ixLIitML/Dzs2RAj6Tme5Mmod7u7kyVlIet6Y6uDeeISW2ClcF8GRnM1fqgpSSEUr5nCKAC7EUt2Z6+WSig6dz/dwYbaE7oFMmt4bWNo4XKskFndyaaabBqFcOpLwqvCufV9NRV8P9yQwRq4frYy1MxC2KqBEWSZ/NhNdmJmAzE3JaFBEutq7UKgOnJjpZzIcJ+qM8Hs4Str1o/Yz2H+fep/ffkAA+zA/ffETV6kShlF2w5/BydrCZQxk37bkCV9tDRE3aFQHcW2Ap4KM9FqSnNkJvTZje7nbOd6WoC/lpDftYHnxRAC8PNHO7LUxnbYr7XVGO9dUzEXbjMBupDcU535slb9OjV6mwCTeLjmbOp33UxWs52Z6g0vyTS8CrtodwtWnK1HGzN04pEuTYSCcjQQ1mbQUmvY6Q04rHasZvtyp+mm7Dyvau0xPg7HgnoxEDI6VmTrckqFRX/CT+DDZM+Vn+8OPtNyaAv71/kubKF61myXAlt/a002awsdRR4EzRo3DksLpYHu/i28MjfHtikicz3dwdKHBksIcv95a4M1nizmwPTxf7mfPo8Bp0+BwOilUpHu9rYiwe4vhkD4/3DPDd0hBfLgzycG8vn82XGAxYFfbF1urh0W4WqkPUukI8nsjiML3+YkRYtFvSSe5P1RG3ulkeama2yqK4WwhB5reJh1npGyGHCZte+GmqqTC5uDnZxlydn0I2x632GCHji+WITp/g0eefvBkBPH+NHx/Ov+iipdEQCYZYHqhnJuBhtNTCkYwNx+pOgnCpSTUUmKwPKFzPdbVzabSDe9PdipXuo5luLvZUU6NfYVmr0eK1uzkz1sOToQyl6pCymBcLevHoSFdxrr+ZJmcFenUFdquTxd4WDtc4yabrOdGawG98fTctg15PT0sL5xo8NFaFuDbdTJte9AstdrORoEW/4nokXLh0eqJOk2K9dlWGODfRTb9fw1x3C6daongqnusXJjv67nn+8PXNNyaA758aIqQYAtbEuZrqUJAHs0WydicLHQVOZn4SjB6vj8ujaRpWx9jmVIxPBmsJir5us1GMxTmfFRbgVdcllRqV2c21yRL3pjvX5xUxv1yb6OLGVDcHUmZsupVFqVZj5NxkB3szQaLBOE+GavHspF9o9HTUZ/i4N0YkGObiQJYBh1Hp52LXTfSLgM2C1ybOtBixCobUGipcAa6OtjJf66a72Mil5kqCxucPYVYwdvAU3zyTAvhV5413+XkpgH+FAngr4HQ6I8VUlmfdCWWgEZPNymPFl3P9/9VVmRg0Jjtb+XigwKneRk4
MtnOr0UchlmQpH6HWoOPmQCMJxw5W7a+xAhR+xjWJah7v6+LmWBMXSjme7B/kYNq3reVlrX7eugDe4r7WLMAHApVcG2jh6kCe72ab6MzWcqA5QT6R5npzmBuDhRcswAdySWa6mrnakeZ6dw1LTTUstRc43FVgbyHMUG2cG10Jglr1DrbX1ybH8p+zsTA/HBnl0lAzZ3ubuD7Vy8V697buEUpb7JIAXmv3jZ4Tfi+zdX4WJ1ooCfFosTDcXmJvWI1dV4HdYmW6q5E9Li12TQVWq4Wp7kb2hM1Ue/1cnu/lZleBY10FDonn3gYeLfUwHXFi2YKDjcqyk9eEJbQpU+TpnhL3xooslhr4drGLmbhL2dEp59pvWwBvVwZhsa8pNnCuL8t0IYhXuJysuqoIV6Dm6gihVSuh+KzHEeBQKcfpnmZ+mC9xureotIFoB/E4PdjG7ZGfBPB2v/+m3jdarIx2tvFVVwyHQUuF1kA0Ucuf9xXwWV5cjG/4m7sigLfu4xarnd58Lfvb6tnfXs/5kXZ+t9DFUkdO+X9fY4ouv1VxuVq7B9Em6/PImq+/ch5iba7Z3j967Vpv6lnMF81NLXwx28HHI40stDfz3WIrA3H7C8ajrX5PCuCtWdmq7t7Ve1IASwH8XAdXEw7FebY0yl8O9XKtrpKrEz082dfDg7khPp3r45O5Pm7v6eNYwY3doFZcIMZLRQ4FzQRtK1EgbjT6aIgnWSxEqH7HAvjJ3g6OVznwWEx0tbRytRQnYCivo74vAvhqVz0TWWEVLNKUiHG+t5GH0wW6TGZOdOVeEMAz1T4yHj/3++ppq6qkUdlqX7F05aN+CmE3WZ+YkIR/ZHn18CYGp2w8zHfHxzkQtOK32cjVZvlhLIWrHGvOuxbAah19dVVcrfMqArjP72Ygm+R0fzffznfzaFb4xg/wxdKAYgF26/VUBxN81JelGHKQDQU5N1Oky2jAY9Qr0SQ8djNTwy2MBh3vRAA/ni4w5DMr0REOjfdxPh/BWSYP704AqzAZ9MSrq/j6yCi3ejPEHBZSAff6IxvycnGim/FsJZU2E2qVlQNdSTJ2G/FYNXfaQ0TsJqUNRMQC8Qj7vCy0F9YtwG+C93KuodboqI9X87vFJvwmPW6rjX293Yjx02koYyfmnQtgNU6TkYjTgm21LvOJCI/6q3FbjErduixGKh1mzOLQWIUGnz3Ep4sD3J/t4dG+Hh5NZkiorZwaaefJXB93Z7p5stDDQsSFrUwey6nr7T6zIoBbeTxeT8Ztwm4ysjwzxMmGUNn9Uwrg3ZtPtmvPct+XAlgK4FUBrEKnNzHZlOVge467pSQLLVVc7W9gOB1muLWRL0fT9KSjFFuaWW71KlYKcaDu1EiJC41VDNclGWtp4N5Amrlilrmsj0qDjhvv0AL88VgdAx6T4nrR1LgigMOm8jrq+yCAZx0OBmuTLLRluNBVQy5WyeGBNh63R8n7/VwcLL4ggOdqg+QDfj4abWKmsYbJhhr2t2S4N9XBwVIdE4Vq2itN28ZKLXcAKfdz2ViIbw520WzSY9IaSCVr+cNUBrf5xe31Da/3DgWwsFZZLDam6yN0+W1M9+WZSERZKlRxqLeHS01B2quCtNfEOD/ZybxXh0unpToa40BrltvjJfbWxbiwp8TRdJzBdJxu8ZxLcmlvJ+OhdyOA7/RUUWUT0Vg0HBjv50J+ZXt9w/p/SYi8OwFcgVFvoisdprmjlT25Suw2J1MtWaYaahjPVSt+vZ8uDHGqu46WkFWJsmLTazGbzASq0twoRaj22gk4rOuPVKiSw53FXRfAwu+80uvhxlwfewJGaqMxbs620+vRYSonDNg7F8A6ujJJTlXZ1sOcvRgFogKrxcxcRzUZEW2oQkvQEeZhXw0pv5NsVZR7U3Wk1HaujOeZi3mJeh0MNuVYjHlwvcRdOWy+7mdWBHAb1zriRC3iEKuK8zPDnC5GsK/uLmx3bSmAy5tXt6vH3XxfCmApgBUBLCZ6EfbLZbNSE09wtegjFfRzYayHZ/u6eXhglG8X+ng818eNvQNcbPHitxnxOZ00xIO0JCpprQrRUhWiGPVxpKeN2WoHtlcRwMIHUWfE4XLhsFlXT3G/Xqdac4F4MJqlz7USfuxNCGC1RoPBZMPlcip+umthibbrtIPzG/gAbzHAr7hA5Jk0G/Hq9bj9Ea5NNrGnIc/NkUaelCIcbG/g/kTzCwL4RHeR28OtzNd58FoM1IY8TLqtXB5oIJOI013lwKQVYdi2r1edwYTV4cJtt6LXvejvtt39vvx+Nhbkm6UOigYdhjcmgLWYzDZcbhdW40pYv5d/9+X/1w7BbeQD/PJnxf+iX/hcHnrDTvwmLeGAm6WuIhNRD8OlDr5b6FWieDzY08un+7uVQ3AWETZPpcGgd3J1oo6xRIzL8yUOVkforI7QJp5ro5ye7SjPBUKtRq03rfYLC7odhOVbc4G42R0nZl0JL/UmBLBao8VoduByOrCayzm8t3oIbgMf4I3a4fnXBLvCB3g47VVCouXSCQaEf79Wg1Wv5dRwG41hx3M7WxUYzBbaW9v5fqFH8T29PtHB2kPsbt0faypbAOuNJmwuN077ymHW58v2qn9bzBbGO0vcbK9krpTn4VAS6+p5jG2vtZkA1uox2+y43S4l3OFKGLut+/vaIbif+wBv9T09I/labmRd6wvqlwWw027j6ECWgl6LsUJLpa2SO+1RnEYtfp+PK6MZqtV2Lg6mGXaL+tTSlK1lX9i1rQAWWVX1JitOpwO71bwjl641AXylPYYwkEgBvFW7fzjvSQEsBfALE4UIUh4KxdcF8GIxSdqpJxGv5lKjl4jLjDlRy9kmNx6TBr04tBGIcDzvprc6wqm2JDG9kQsjLQwGLa8mgPUmKtIDfP/Xf+ab22eodhrXB9ZtJ4OXxOTbEsBGi5OO2Qv8y9//zPJ8nzJZleNK8HoCuMChSJxj7XUstURIeXyMJJ1U2m101NcwlvRxvjf/ggCeSrixq0S8UR299Skl4H/SaeDyQIGox04mEWYsLqJGCAGy9UDmbxzik9/9nb88PEfqpQNj23335fffvAAW5Xcxtv82f//3f+X0aBUuy/bW5FcVwOI+1nwWjQY96XxaObAoXBnsdhcXBvMMxfw0RQPErNr1RAp6g5HqbI4bLW7qQ0Euz3dzqSXLfEuWPS1Z9rfXcftAd1kCWG0wYWsa4fs//5Uvbh8jbl+JkPFyHZfz/9sSwCaLi769j/jbX37Pub2NGLe1mr0ZASysqCqVgZFiij1JD5V6Lac3EMAuu4uFwQ6mgjpsOrUicITIEQ+P08Vca75sARxtG+LpP/0b3987RCpYvo/oRu2j0ugweWI8m+viy8lmTtbayndN2kwAO0PMXHvK//z33zOd8mHVbe9O8UsUwHqjhULfEn/48594eGZaceEp1yDxcltIAbz1XPByfX0o/0sBLAXwi0JIoyf4nAA+N9LFg8lWrkx082C6xJWJDs5P9ykWYLdFj8fp4FhznLBZi91kIC58f9uz3G7xEjZpX00Aq3VUmKu598Pf+P
HLO9RHXz9jmUj8ERWnfHuqabSsJFCoqa3nRDGEW19eZ9/IBUKjt+Js2svf/vWf+fj4PuVU9tsRwGZme9t5PNHJ/YkObk52sDzSzuWRVi6PtnFZnJoeLfH1vhIH1+MAp5lMuHCoNcRDUU4WvASterxmPZcGCkScZmwmE+P1AUJq1U9RFjYRws5wA+ce/8B//uEedcltYshuco21gTIe8HFrokBaJ0SilnAkwf2ehGKxW/vMps+bukDoaRo7wm//9u/cWezDZ9v+oOXrCOAKlUY5FT7Z1sRcXQCfxUgg4OfkYJHJsBW3UU9NOMiJ/kbGvKK+jUS8fs6VkkyXilzsStOTidOfjtOz6gIh/h/KJcj77Hi28fdUqXVobfU8+vEvfP/5FeqCFWi3CaG1WV2KJBK18VrOtITwGoUQVDPc2crB2kDZPpcbuUBojTaSPYf4y9/+iXsnZ8tYGL6eABZCxeWw09LWxHR2LdyhCqvwDfbYlSyIl0baaA4/J0w1WlpEVJT+Klw61UqSFWE9NBiJeZ3k4iGOdRYpWCvQlbEz4o20cOWz3/Ev31wkn3wxws5m9b7p6yIxg87AkeF+PupLEbduv4hbv9ZmAlhnY2D/Vf72P/4Hx4aT2E1vSwBr6arP8mimQ0kwIpKMXBvv4PHebs4PtSiviWgPX+xvotGgUyzAIWeE7w/1c22sjVsznTzdk1NcIK5OlHgy28mFkVY+XujjSNy7rQVYrTNizQzy1R/+xNe3TxMp13K+wVgl5ot0up7jDUFlfhDJNub7O9mf9mPa4PPrbfDce9IForx5daO6e1evSQEsBfCLAlitxeHyMhKzKfE1+2ujpP0Ooh4HEbeDiNdFJJFgOGnDYtDhDwYoBlfcFSpUakTosZZMjFazDotGpWTcGcvG8W572EmNTm/BFcly9pPfcO/KAiGPVsn+9DqdQ1jtRPghh1GHiAcq/hfhn6xiIC5TPPxMAGu0GOweksVZvvrxG06NtZUx0a8MCq9qARZlb8ok6HU7SIh63/DhYjhXRYfHqFiy4pEgRa9lJRWvwYBVL7K8VSgB/oeyUeXAmchGJQL/60QouOcG75frWGcwE6wucOj2Fzy7fYBI4KfQRy9/tpz/jXotXpsJg9jOF5zotEpAe3Fqf9vvbySAVSrMNjfFoSW++vF75trSyqGb7a71WgJYrUUkwhiPWfEYNYhMWbXC3cdnw6pZEVQmg4GAP8h4ykmlyayE1mryGfA7rNTFoyy01zHf9vPHnmJ0PVzXxmXXoNXbcMdzXH3yFR8t7yPkqHjtfiEmdrPRiNWgVfxMRb9wWE3KQlXE9d64DC++/rIAVmt1WJw+sv2H+PbbZyz1N5WRCOD1BLBWpyOXTnGwo47GShG3e61sKvwuN7OtdRxuThB5PoqCVk9rTZBGp0h0s/p5kdnLU8m+1jqWOgvM1AWp1GxvWRdxqUM1BU5/9DlPLs8Q92+dRGH999Z+d4Nn0SZum1VJ4GHSinpZu6dtnn8mgMV3tdhcQfrmTvHDD1/SWS+STWzfx17PAqzGZjIS2nBsWhmzqgJuJhujJHQatBUibbWHfbkICZ+T6ojdZjcAACAASURBVJCf+eYIAZWJscYEnZUuYl4HjakYXT4b5q3qQWSHtDqJ1g3y8OnnXF8c2JEF+OX5QbjZiAN8woWv3KQWUgBvw+tW7fmO3pMC+AMVwEarnbbZY3x1+xiNIZ8yaZc9sL4TGLWY7TkO33zC09tL5Gp/yja0u+UWW6oaJSVofOA49z65Sym9mghDb8LTN8ejZ485XyqSeCFe5tadv2/vEt99dZcaoxGTVqQhfoWJ7h20h8WR4/DybT67d4rqqA+Dfmc+wK/bhmq1Bq3FhaNxD7/97gbdxRWrn4hXXZg4xd27dzk+1IlXpCMto54MFhtNE4f4+qNTtIb9O/Ztft37Kv97eky2Do7df8KD5T7yqW2SWZRRB+X/9vNMi5SvK/0iMXOCh589oC1SgUFbgdZopm7sMI8/e8bB9kYi1ueE5qblEfz7mNx3hd/cn8dsNinpfEW649cr3+58z+oocfTqLR7cnKE64sJQhnvB27gf4XOts7ux9u7nt59fp7dO9AshdD2MHb3E3XvXmOwrKGmky/l9k9XN0P7PuX96hGjArqRgLmdnq5xrv5XP6IxY2sa598lDLg13Ueu3vhtuhCFBJMPQ6xk/eJIvn1wlKhNhvJu22HSs2XxskAL4AxXA4gBTTdsAy8vLXF0+z/G93YoV9K0MRq8B3s/LsTLBinIbjeLQ2vbbdj+/xuagl/9ZHXpDjqWz57h87TpnThygJlSBVqQtFYOdSKtqMikpnbWvwE5dex/L125yeXmZQ9MlrOYy4ny+kXp9vToRwlMcijQajYowERaS8uvw9X5zo+tb7EX2Hj3L8uXLXDk7Tyb+00Sn0eoxGIwI31xhzS5nwtbqjSSberh48SJXL53nxFyf4gaw0W+/H6+tCk+jSblPnXZ7a95bKbdIhRxt4fiZs1y5do0zJxZJuVdcMcRiRKPVYTKZMeh0ym7E9mUQPNlo6pzj6s3ryjg10+XFXGaIwu2v/+YYfP631GotOtEvTEYlVXA5h0mf//6b+tuabGPx1HmWL1/h0qk5suG1frHiUmEwGpQxqtzyCct2vnSI8xcucen8SfYPhpR+/6bK+8avsyo8xVhs1OveWVlFKuRQQy+nz5zl2rXLnDk0iff5ZCXvcAx/43X+Ad6LFMCvIGJ2ApTBYMBkKscy8mYGbkXAGM04XS68HhFVwVKWQNjJPX4Y31Wj0RhxutxKekuHw6akQS53ItmsDowmM063F6/Xi9UiwrK9K4H/Zvja7D7f9Os6vRmrzYXH48HttGAUCQN2MBD/rF/YRaisdyPud3Ifu/5dlQbhc7nSLzyI0/3CvWZn/WIliodIOy76hcWsX1lo7qB9d71e3lFZ9UYzNqdb6RcuhwXhYrSTexfCXkTxUNrX7cBhNa6HNtvJdT/074pdEZ3RjMvlxutxK9GL1pKyfOj3/iHcnxTAuySA1eoVP9QPARp5D78sESnbS7aXZEAyIBmQDEgGXmRACuBdEsASvBfBk/Uh60MyIBmQDEgGJAOSgXfFgBTAUgDvaOvsXYErf1cOmpIByYBkQDIgGZAMvC4DUgDvogDW6g2k2waYmJjl0NIBWgopKT7fkQ/d63YY+T052EoGJAOSAcmAZOCXz4AUwLsogBVneV+Impo+vvjjP3F0flAKYCmAJQOSAcmAZEAyIBmQDOwyA1IA76IAXlkx6lCpqvjo2x84JAWw7PC73OGl1eKXb7WQbSjbUDIgGZAM7JwBKYClAJYiVIpQyYBkQDIgGZAMSAZ+VQxIASwF8K8KeLlq3vmqWdahrEPJgGRAMiAZ+KUzIAWwFMBSAMtVv2RAMiAZkAxIBiQDvyoGpACWAvhXBfwvfcUqyy+tLpIByYBkQDIgGdg5A1IAv0cC2FGV5fC1+1w6NELEaZHCVK7GJQOSAcmAZEAyIBmQDLwFBqQAfo8EcCjdyt3v/pl//u4GrRm3BP4tAC9XzTtfN
cs6lHUoGZAMSAYkA790BqQAfk8EsFqtweVLMHfxMX/+8hLNNS4pgKUAlgxIBiQDkgHJgGRAMvAWGJAC+D0RwBZ7jKaRvXz81RP2D2WwmfUS+LcA/C99xSrLL60ukgHJgGRAMiAZ2DkDUgC/QQFs1etwmQzYDDqmaqKcbKilymnDpNNS73Nht1jJ906xf/8JPv3iGXtH29ZFrs4QIFNopKkhgdVsQKtRrb8nQd856LIOZR1KBiQDkgHJgGRAMrDGgBTAb1AAu00GjhZSTNZEudCc4UBdFfe7GhiqCnE4X43LZsfp8ZNMJkkmIriddilypZVXMiAZkAxIBiQDkgHJwC4zIAXwGxTAwtLbGfGzWJ/EYdBj1mlpCXo5nKvGYzKgVkmr7trKSz7LVbhkQDIgGZAMSAYkA++KASmA36AAFgLXoNVg0WkVsSv+12vUCNcIjVqK33cFufxdOcBKBiQDkgHJgGRAMvA8A1IAv0EB/HzFyr9lR5MMSAYkA5IByYBkQDLwfjIgBfBuCmC1Dr3Jgt/nxmG3YTTo0ajfTzBkh5XtIhmQDEgGJAOSAcnAh8qAFMC7KYDNHqIjF/mPf/uKixdO///tnXd3U9e6r/cnUVlFvVrdtuQi4yJbslxw72BcMcXUEBKSEFpoMQmEACmQHUKy905Ozt03u+See+8f554xzjjf6bljLtOSA7YxWCl+M4ZiY0mr/OYzl545NedczIyUCLls2O1SwX6vFUzOS9gWBoQBYUAYEAZ+fQyIAFdYgKun3uHf/vwmDbkUHreJJj3AMvO1wjNf5UL867sQS5lImQgDwoAwUFkGRIArLcCTb/Hj10dJxvwifiJ+woAwIAwIA8KAMCAM/AIMiACLAEvF+wUqnrT0K9vSl7wlb2FAGBAGhIGnGRABFgEWARYBFgaEAWFAGBAGhIFtxYAIsAjwtgL+6daf/C69AcKAMCAMCAPCwPZkQARYBFgEWFr9woAwIAwIA8KAMLCtGBABFgHeVsBLS397tvSl3KXchQFhQBgQBp5mQARYBFgEWFr9woAwIAwIA8KAMLCtGBABFgHeVsA/3fqT36U3QBgQBoQBYUAY2J4MiABXUoBdIVKjJ/n+ixPsyGcJB7zociMMEXDpdRAGhAFhQBgQBoSBijIgAlxJAdbdBOu72b+8zPLyQSb6CnIrZKnwFa3w0tOxPXs6pNyl3IUBYUAY+CkDIsAVEmCn04nDqeHUDXw+n/VwmQZO6QEWAZRGgDAgDAgDwoAwIAxUlAER4AoIsN1uxzRN3G53RQtXWns/be1JHuvk4XDg1Jz4DAdO+zqvlQv1S9Vlh92O2zRwa050yfqlspR6LXVVGBAGNsOACHCFBFjJr9frlQu9iFMFGbBjak68hhO7zYaha9a/NZsNu82OoWl4DA235sChJMyp4a+pY6ohiEd3WM973SY+08Br6vhcJn63iVuziyC/IMcOhwO3qeMxdQJuk6jPzY6aHEvZMHG3jk93YGqq4WF/wofdgctqkKyWn0vXrH87XnDfm/lgkPeIUAgDwsDvnQERYBHgJx+48sH6+8rCoTHYnONsocoSq0yqlqPlGpI2Ow6bTnt9LbOlRiZyIVyaA5vNTsQf4NJ0B3Ueg3RNNTeWRzk/XOLkUIlT492sLE1yNKcT1OXDYeMfDg48Zoj9Q0VeGyqxMj/I98d2Uo7GOTdR4p3RIq83hGjPxIm5tScM6m4mi3nezPmt8uto3sHh9jgeqwEj+W88f8lKshIGhIH/zoAI8O9UgA23l8LoAjeurfDB5Quc2j+EGoqhKkFNKsXZmREW60Icn+jivbFW6jUbzooKsJOaeDUfHe5jV1WUk8NlVgYzVk/lpiqqw4ktHOfSnl6Oddayq6eT62PN1AeMJ0LxCs8vHKji1HiZd3dEGerbybmROpo9GnaHA2+8ln3HT/H+ylU+vXOFdr8P4yX2bfhCdC69y92b17l68TzzQyXcDyUoVpfn3ckyfdVh3p0f463ODGHNZpV1TTTGkWKWpmCQQyPd3D4wwb1DE1za3cfpzgRNER+JoJeYR8dhdxD0emhIVtHfVE1bLEBLUz03D/Ywk0vRlU3R01jDa1OjHH8owF5/gIXxAd7ojLOn3MbK3CC9IRvmpsa1KzZ1Dk30cWawmaZcHZ8vdTOS8KF6rJ/PhB2/O8DlfaMs11UxXi5we7aFWs2J6ikNJzIce/sC7394i8unj5KqCqyxrbX28+S5gWOXuPbhR1x+9zRLQ0X8umb1nrvSWW7PdzHSmGF5uIdrfTX4DSc2uxN/NMOlkVYG6lOcHivRlw3h0XWy2QZWZsrMFfOcKsYIqdersrXbScTSvNnfzHyxjTO7+ri7PM7dfUOcnezh3YePEzt3kPPaCQZCvDbSxdXeNDvb21mZamdn2Nz0uXrdXmb7urg2UE9noZUzU53sTnkwHE/1UD+jXKLBNG9NdvJabYS9o0Oc6aulyu3ArviKZzh44jTvX13h3s2zBAO+TR/f83l4Uk7yGslCGBAG1mJABPh3KsBuf4i+fW/x7acXmCg0Ea8KPxbgXDbLrWMLnMgHubQ4zL2FInm7Dc1mJ+DxUMilKWeT1qPab1i9g7GqKIWaJL0NGYZ31NKZS9GeiZJ2qQ94O5rpYkdNioGGBCnTjma3oes6NYkEg41punJpuuuSFGN+3E4HDptGU3Udfzs/xWwyybndA3wz8bICnOKro+NcGW5gZmCAf51vpy3k2pIP2UQ0yQcLQ9zuSDE3NsqnUw1kA7r1Qa+5fURjCXqHhvn+H9/RHwnjeoYsrFUxn37OHUkycuo23184QWdTPZFg4LEU1jYXuLV/mJmGANePLrLSkyOp27Gbbg73t9OXCZKOVbG72MTlxUmujheYH+3jzlSetqBpSeLqvuw0VqdZ7G5lX2+B92f6OTbQxPWDw7zV28pyTwtHB0tcnB/naFazeoBD4TBv7N3Dtf4kh4bK/PnEBKM+G+5NC7DGmaVd3Jku01md4/+eGqW3OrhOw0z1Wof4+vRe3smnOdBb4p+HOwgamnVuLreHVE0dfeNn+PjGRRprql6ah4WVB9w69zaldIp40IepO7FpBt7qBn48OcpSMcuJqVH+PFpHxKVjGgaZaJRL0wNcnu5ld12YtN/A0HXikRD97W18dqCPI01VpN06mt2OQzOYGBvgvZ56mgJeWhsbuTGZZ7YxzVx7LdmqENWRAHG/G6/uwB+u4uLcMN9M5hjr7uEvB4qMhDc/5yDgC3B4YoRvp5uZKBb5eGmQiVoPLufaApyO5bixNMDl5gR7Z6a5OZKl2rsqwKbHR1W8lp6dh/nx718Sq4q8dFk8XU/kd5EdYUAYeBEGRIB/zwK8dIqvrr1OMRH+yQdNJJnm3NIUi9U6x8Z6uDaeJ22zoztMdnW18/m+MrP5GkZ31DDfnKLKrZFJJSjX1/DacDf/PDHISHOWntoYOZ9h9Xoq4bs6N8S9w+MsZgy8mh2f18fCyAgPZtuZaa3j8EA33yz30O3SMO0Ocokk94/10xWOcGSwkxtdkc33ANsd2DwRPloa5M1ymp6O
Tj4fb6TBa1ji71RjMHUNt6GjxlKqMa5qIpJancNl6HjVWFj1vPWcE7vdZj1vqDG0pm69T/1Ur9McNsLBEKcnuzmTjTJS7uFyX5qkZ7X37lEFrGtu5U8/fEffKxDg4ZM3+ObkAungT3vNktUNXJnpYSTj4uTsGG+1pIk4HdhdHuoibvKZOHsbkoy31TPeWsdYax3jLTkmWzLkIz4CxqMsVi+cDocTlz/C1ckmZjoaWFko0KrG/+oaYb+fyd5eDtc6LQH2+wIsT41wtiPIZKmZTw/20qnZMF9iUpfa3iXVA+xN8uBQL4WE1yo/zanGMuvWmGU1blmVhe6wW7wEPC6uH9nFgXgVU807uD/TgFtJ6cNGh1MzaCwdZ+XquVckwPdZObFMxmZ70oBwaLjDab4+2MdkPsH0zh5uljMEDCfpRJLTU118c2SK92cHOD9R4khdiERVnKNDZa7ND/Ll4QFODxfZWx3C49Rwx9Lc3z/Eez05agyDnkKew5kgw43VfDLTZo0j9uhOTKfDaoDafEGOjvdwvTtGV2uBz6YbKfo3/+2Hx+1h184+7gzl6Klt5Nx0F70JA8PpwKU7rXzVuHKVsaojbkOzxoXHwgnO7SpzJBdmfGCAs+UEcdfT0uwhV7+bv/3tS6pEgB8z+ohV+SkCJwxUjgER4G0owGpCjq5rlkCo3is1UUpzOAhHYlyZ7edKa9j6m/pg0xz2VVFU73Hq9O9o5LsDRYJuA925OmlHSWRLczvnBpuZ7yrxcW+KiEuzBHhucIhbAxlqfSY7sjV8dmyWQxkDv9OGJaWWUDpWP1itcagvAb9abUN7eD5OpzW5S00q0jSNVFUVb031cXW6nwvTaghAjIDpJBpLcu3gGLdG2rgwtZNrMz2cLSWtXm+36aG/uYlP9g5yWfWuvbaHr/Z20xFZnUxmyYDmQNe0h1n89NgrIcAqe11zYjjtltirY1Jf/auv0AP+EHu7G1moDXNxYYIrMwOc39XHxd1D3F0sc3mqj5tLo3zYn8b7UBj9Hg9Lo8Oca/HQnq/lo8MTvK9Ef6Kb89P9fHRgF8ceDoFQDYjVc3+Yu65ZPdOPxGgzF3JDd1pCpdkdeIxV/gzDpGVHgT8udrEy3sXVPTv5dG8vhaiJ27naUFGNGN2xypGSX9WAebT/igiwzW71/qt9q3qj6sajyYWrOTnxGDqmZtBbX8NKMU13WytnSknaUmFOTbZSdht4HXYCbhf7+1o4VN/EyUINjd4Apw/u4rPZfj7ZO8L/OLmbs1M7ubPQy5Hqhw0i+8M6pDtRjQUlxqruPsrgRX8qftR21Njw1fNxrjY4NDevjXVzY7jaGovssNnpbslzd38PTabd2q9iUO1fTbpcvUY8KQubTQT4RctCXv80P/K78PDqGBAB3oYC/KwKpD7wTNPH8nAvf1zq42R7DXVhD37zqUk5Nie9+Xq+3d9h9cStbsduifGxXUMsNKdojNfw7b4iKa9J0OtjfniEuxMNdGaqGOsscP9gNz1eDddTgvKs43mVfwv4/RyaGOBMR4yGkI+G6hQ39vcx5DXJJjN8sLybS71p6sI+BlsaeXCgRJ3uJJ2s5dJsP4t1AaojUU7vHuD+7jrcTtUoWL8SVkKAn5mT3Y5dNxjv6eXG7jZ6Mn5enxri0uwQ788N8sHMAB+MNlJOx1no62Qu58GjOXC5XdQ3N/P1/jE+nM4z0dqw5hjgZ+77JYZ6PG97humiraWD75eHONBURS4W4ejUKO91BIiY60teZQR4bR68psGhwW5rEty7Y51caKnl6EALpYibWNDPwZEyb/c2kQ+7rNU3clE/zdX1vF2u52BnF7eXujmRiTKaz3JvoYNkKMSJoXaOZ/2bltzn5b3m3+1ORspFPl0o0+Gx43G52dtX4pOBJC7VWF63/EWA18x33fzW5ky2LfkIAxtnQARYBPipD1AH0WCAyZ4urs/0c3vfMAv5CBG39nBown8XYPWVudcX597hYaYbqqxJVH98bZLpuJdEwMfC8AjfLw+yMjfCp0v9HG2tIqCvTlCqVEWtSSa4eXSKY7Ux2hIh2rIpzuwb40jcRUMqw/sHpliq0fDrdjrqa3hwuJeCoVHXsIPri0Ms5HykwxFrktFHO+NP5bV2RfvlBNiBPRDlSKmaAwPNDDbEGWurZ1epmStzg5zormOwNkJtVZixUoGJjBrn7SRRk+HCWJ57u0t0NtaztzfP6d07+XxpxOo9vr44yrU9vexKOvE41z73V1m2qwLczp/mWykHXfjcXuaGhlgph0m41eoVax/LLy/AdiJeH7fm+hnKV9OerqItFmGspYZiTYzWTJyefJbhXJiER7fORzVIc6kcbw22c7E9wWJ/G0thLz3VSW5Nt2JqLpZ3tnK0tsICbLORiKY4OzfEgZyX6liMd8Y7WI57ftLr/vwyEQF+fjZrcyzvk3yEgVfLgAiwCPBjgVAfuuqrY7VGrN1uMlZu486+IY49XIbJ9oweYN00aW7r4K+vTXJzppd3d/XxxZv7uD2UJB8LMDs4yM2+JE2JBAcm+3kwmrNmxq/7NbndYd05T91ARFNjWjfQ4/q8i0NdOsndk7Nc6W5hbzHPnmIj0x11tPp0alMZru6fYiHtwKfZfiLAtdEM15Z38ad9/VycHWBluJn+yMZn1j9PgJ26jq6+steejFN93rGrv6tJcM8bA/z899mxefxMdjUxW6zny0NjvDFa5sPFEc4M7WC+NU1DIvpEgA2DbNRHKebn6kSBeMhH2u+h2mNwcaxIZ22K/V1tlNQqD85VTp6/b3WRUq9xYpimNUzC+RJfxz8S4Ad7mikFzFcnwIoxTbNuUvMijC2sPGMM8JoS7iTijXP/yDjnprp5Y7DAkcYMn7+5wJ3xTo72tvHe/AQXiyGSrlWhfyTAh1vTZE2D8e425oOrAnx7uhW34eFQ38YFWN2FUk1KNQy14sf6veZrla3p9tLXXeZufy09bQXO9tZT7Vodj73W+1afEwFeP6NX+yEv+5M8hYFnMyACLAL8uMfJ43IRdel4HGoMqYNsJsP784Ocblhdh/RZAqwmQu2fGuFiR5iMz8Dv9dDe1s7XBzpZqI09FuBqn4v2hnq+PDHGdNDEt85scqcvRHLhNP/8+79wqLeJgPlsgDdSscORKO8sjnM2HyHrd1kT2tSkHTWJKpl8vgD3l4rcmC3R4zUIuQxrAtx6s+CfPp7nCXDTiQ+5/9VnHO9t2JAEv4wAj5UaGcymuTPbzXxpB5dmBjhazjJUHyMbf0qA1U0YHHYSAY8lwBGviWmYVEequdhXTWsyZglwV8RONuL72dCYZ5WNi0Cwn3t//ZGPTw9QE9n8igRbJcBOX5j00jn+1z++40Bn3YYZe3EBtltj7INuE69pWmOALzTHqfKHODSxk6+ODPP9fAdxj2FNsFQMrSXAn84WaMxkeGesyNFHY4DXFHAbsfQEK5894Ktru4m6zXVW1nhWeT75m93hJBdP8GBvF6dnRznSEse74ZU/RICfvkbI70+4kiwki0ozIAIsAvxYgF2mi5F8jvlSnplSnoMDZQ6
1p0j51PJeqnI6KGRruD3dZE02U7BGwyHenu5hOLAqzmpIhOGPsbLQz77WOOPdPZwrxagynYSCIY5NDXC6FCeur/3VtcP0kS0f59/+8z+5eaqTmHfzFwe1DFVTbTVvj5U5Ut7BTLGRPc0Ja+JdKBrn3ZkhJuJ2a0JVU02KG3Md5DUn+VyWK/OjnO7Os1DKs9TTwnxrHJ++sTuhPU+AC5OXefDdt3x5toip1pBdR15eRoBHi/V0JsOMteaYKjZb68ee6Moy2JCkuTrOws4iowkd18MhDeoOZaoHOBH00ZBKcGqkjcEqk0Q4xMGdRXZlA7w9XKJQtZ7Q6gRCbdz8n//BX+5dZmcquO55Pi8HXTdobNjBjbEGmj06itOx7i7OFAKEXmIMsMP0U9fzBv/nv/6LGyfbqNogY5sRYEPTSYT9pCJhJgr1HM9FyKtGRW8rb/U1cbK/lcX2DBmXjv4zAU47ndQkotSZOoXqJHdm2miqqbaGtjQ8HDLxvOwe/T2Y7OG9z/7Cv//9E1p9npcSYLXNkM/HG2oS5VyZ7ojrBbYnAvyoTOTn5q/pkp1k9yoYEAEWAX5KTOyEfV6y8QgN8Qi5iJeQ62lBs+N3GaSCHqu3UAGolhVLhX141IoQlsipr1edJAJuYgEPPreLhEez1g9VKwaodYbVBDnvmis+ODFcQVK5Fu799R+c29tG2PNyFV6tOBEPBcnG1LmFqI96reWcXKZByOshoK+uXex1GSSDXlwOh7WU1edL/UzXRGhKROjtbOf+0QlGkvqGbvbwcwFWPWe66aGurYdrn97l/X07trAH2IbN9NDbUk02qL72tuF2exhtraclrKEZbhYGypwfa6XgVUvgrearbtN7cmeeWCDC0d5GWuI+XHY7aj3dzlIHtxb6uT6Wt+4Ut9YFSNNNoslaDn/4LXevvk51dPM3oLCWZjNNkh4D38NVDgJeLyFzlau1jkM99+wxwIqxEOm6Nr784Z+cmcsT2iBjmxFglxHlrT19XJwdslZL6ahJc3GynXLKT8LvRq2+Ud7RwEJNGPOhACejGebycULq3w8bSbWJKs4P1VurK6x33up51ZOs1hSOZPLMvXOTH+6eJ+R5EWF9dr1TdVkti5fymfifWnJu/WMSAV4/o2dnLu+TXISBV8uACPDvVoCD9D1nHeBffyXyUN+xm2M37/HJhUmyqc3L0+bP1cHyWB93F0o06TZ0u8NaturekXF2xzY2CeyxAIdXb4ShGx4a23fz5u0vOH9qmkTYvaHxmKoHeOjhOsCZn60DvPnze7UXkp8fRyzXxtKRy9z//ENGc5mXuhHIz7f9ov9+IsBnn1oH2EO+cw/HP/6CO+fGqY5vfDLZ/AuPAd7arNfKw6HpBDJ5Dp24zMcrl9hVm8Rwbmzs+Vrb3fxzIsCbz+6X40iOWbL/PTIgAvw7FWDd5aFjYolPbn/M7RvXuHB8wuoN+m1ArHrnfAQjESJBN2pt2F/iuKNBP4fGBrk+N2DdwODadDtDES9etYbxcyblqVsh+5J11u13P751ky/vXqPo865+ra2Gh5hewpEIgYAXfc1e8CcXXMMbpGv5Il99/gk3r69wcKLHWoP1Ua/gL5HNevvUDBf+QJhYNIzbMDawPNaT811v2y/yfDRVy6mLH3Lzzl2uv/c68cijG4k4MdybY2zg9WvcuvMZN65c4chkt3UzkRc5poq+VvUA6waBUJhwMGiNY3/ZSXCbOX5VL0LJGk6cvszHN2/x4M4FAn7vL1KvN3P88p6tqZ+Sq+T6SzIgAlwhAXa5XHg8nopd8NWsb5c3QDyRIJ1KUBUJ/oYE+NdxUVA30Ah43KTDfusR87rwrTN2WU0e1Fxe9Du82AAABFRJREFUItEY6XSKVDKGW9dfSgCVwHj8IdLpNOl0knDA9/hWyL/kxeO3sG/T5SaWTJNKp4lGQrhdq8uMvcyx+0NR4mqbyQQRv9e6UcTLbG87vFcNxTDcXiJVq/UinaiyVqTYDucu5/jruJ5LOUg5/JwBEeAKCLAKXd2xS92B7ecFIP+WSikMCAPCgDAgDAgDwkBlGRABFikVKV9nFQa5KFX2oiR5S97CgDAgDAgDW82ACLAIsAiwCLAwIAwIA8KAMCAMbCsGRIArKMBOtRZotolCoUR/Xw91tcltBdtWt+Zk+9JjIAwIA8KAMCAMCAMbYUAEuIICbHh85FrKjI0t86cffuD1gxMiwNLiFgaEAWFAGBAGhAFhoMIMiABXUIDVUkBOzY1pNnHzm285fkAEeCOtNHmNtOaFAWFAGBAGhAFh4FUyIAJcQQFeLTh1W+E6rj/4M8dEgKXFW+EW76u8eMi25MNIGBAGhAFh4LfKgAiwCLBIqEioMCAMCAPCgDAgDGwrBkSARYC3FfC/1ZaqHLf0sggDwoAwIAwIA6+OARFgEWARYGn1CwPCgDAgDAgDwsC2YkAE+FckwL5ULRP7jrA4Xibmd28rEKVV++patZKlZCkMCAPCgDAgDKzNgAjwr0iAazsGuP0v/5t/PLhCT3NEBFha48KAMCAMCAPCgDAgDGwBAyLAvyIBDkTqOHj+c3785jK9LVEBfguAlxbx2i1iyUfyEQaEAWFAGNgODIgA/0oE2O2LkOsa4YMvH3DmQBeRgEsEWARYGBAGhAFhQBgQBoSBLWBABLiCAuzUDWqay/T1zfLxF1+wf6b/MdSmO8/43CIL0+2EAm50zfH4ue3QEpNzlB4HYUAYEAaEAWFAGKgUAxsWYJfLhd1u33Ip0zSNcDiM0+nc0n2p7UcikS3fjypIlZvD4cDjD5JtaqW/v5/+3jJ1tSkMw8A0zYocR6Wgkv3IBUwYEAaEAWFAGBAGfs0MTE9P84f1DtDv9+PxeFByutUPJYPRaNQSw63clxLPqqqqLd+POgdd1wmFQuSbmii0t1MsFil2dNBeKFAoFOjv77NkfCvPV7a99exKxpKxMCAMCAPCgDDw22BgZmZmfQEOBAKoh+oFVoK6lQ8l2kpM3W73lu8nFott+X5UVupcstkshVKBRG3iJ4/65nrufHKbpqb8lp7vVpaZbHtr64TkK/kKA8KAMCAMCAOvloG5uTn+oP631uPgwYOox8LCwpY/FhcXOXz4MOrnVu6vUvtR56D2dfLkSd58501m983+5LF8fJn/9x//ztvvvL2l57uVWcq2t75eSMaSsTAgDAgDwoAw8GoZ+APynyQgCUgCkoAkIAlIApKAJLCNEhAB3kaFLacqCUgCkoAkIAlIApKAJAAiwEKBJCAJSAKSgCQgCUgCksC2SkAEeFsVt5ysJCAJSAKSgCQgCUgCkoAIsDAgCUgCkoAkIAlIApKAJLCtEvj/4gxCVEA/D4wAAAAASUVORK5CYII=) ###Code """ wiki_data tokenize you download """ ###Output _____no_output_____ ###Markdown KCM-Keyword Correlation Models from Open Corpus ###Code import json from tqdm import tqdm 
#data have been tokenized
with open('wiki_tokenize.json') as file:
    data = json.load(file)
print(len(data))

flag_list = ['n','ng','nr','nrfg','nrt','ns','nt','nz'] #Part of Speech list (noun tags)

#Compute KCM model: for every noun, count how often each other noun co-occurs
#in the same article (assumes each entry of data is a list of [word, flag] pairs)
kcm_res = {}
for article in tqdm(data):
    nouns = [word for word, flag in article if flag in flag_list]
    for word in nouns:
        counter = kcm_res.setdefault(word, Counter())
        for other in nouns:
            if other != word:
                counter[other] += 1
###Output
50000
###Markdown
Assignment
You will be given 20 target words; for each one, return the top-10 related words and submit the KCM results on the website.

Grading: 20 questions worth 5 points each; a question scores if your top-10 list overlaps the reference answer in at least 5 words.

Submission: https://github.com/NCHU-NLP-Lab/110_Advanced-Data-Mining-and-Big-Data-Analysis
Fill in your group and answers, then press submit. The answer format is a single list containing the 20 answers (comma-separated):
```
Example
question_list=['臺灣','美國']
Answer=[["日本", "香港", "中國大陸", "分佈", "中國", "中華民國", "日治", "臺北市", "名稱", "臺北"],["非建制地區", "城市", "人口普查", "加拿大", "英國", "地區", "加利福尼亞州", "國家", "伊利諾伊州", "公司"]]
```
###Code
def Top_10_Related_Words(QueryTerm):
    """
    Return 10 most related keywords from kcm result.

    Parameters
    ----------
    input : string
        only keywords on kcm dict will return result, otherwise return empty list

    Returns
    -------
    value : list of dict, resultant keywords as keys, and its frequency will be value.
    """
    if QueryTerm not in kcm_res:
        return []
    res = [{keyword: frequency} for keyword, frequency in kcm_res[QueryTerm].most_common(10)]
    return res

test_word_list = ['臺灣', '美國', '大學', '肺炎','天安門']
for word in test_word_list:
    Top_10_Related_Words(word)
###Output
_____no_output_____
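###Markdown
For reference, a minimal sketch of how a tokenized dump like `wiki_tokenize.json` could be produced with jieba part-of-speech tagging. This is an illustrative assumption, not part of the assignment: the `raw_articles` variable and the exact output layout are hypothetical.
###Code
import json
import jieba.posseg as pseg

def tokenize_articles(raw_articles):
    # each article becomes a list of [word, POS-flag] pairs,
    # matching the format the KCM loop above assumes
    return [[[word, flag] for word, flag in pseg.cut(text)] for text in raw_articles]

# tokenized = tokenize_articles(raw_articles)
# with open('wiki_tokenize.json', 'w') as file:
#     json.dump(tokenized, file, ensure_ascii=False)
###Output
_____no_output_____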
Conditional_Statements.ipynb
###Markdown
Python Conditional Statements Explained in Minutes - 18 ASA Learning
###Code
# Comparison Operators
# Equals: a == b
# Not Equals: a != b
# Less than: a < b
# Less than or equal to: a <= b
# Greater than: a > b
# Greater than or equal to: a >= b
print('asa' == 'aaa')

# Logical Operators
a = 1
b = 2
c = 3
print(a < b or a > c)
print(not True)

# if-else statement
if a > b:
    print('A is greater than B')
else:
    print('A is less than B')

# Nested if
if a > b:
    print('A is greater than B')
elif a < c:
    print('A is less than C')
else:
    print('None')

# One-liners
a = 5
print('A is less than 7') if a < 7 else print('A is greater than 7')
###Output
A is less than 7
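###Markdown
A small addition, not from the original video: the one-liner above is a conditional *expression*, so it can also produce a value for an assignment, and Python comparisons can be chained.
###Code
a = 5
label = 'small' if a < 7 else 'large'   # expression form of if/else
print(label)
print(1 < a < 7)                        # chained comparison, same as 1 < a and a < 7
###Output
_____no_output_____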
src/python_cheatsheet/jupyter_notebooks/19_Context_Manager.ipynb
###Markdown
Python Cheat Sheet
Basic cheatsheet for Python mostly based on the book written by Al Sweigart, [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) under the [Creative Commons license](https://creativecommons.org/licenses/by-nc-sa/3.0/) and many other sources.

Read It
- [Website](https://www.pythoncheatsheet.org)
- [Github](https://github.com/wilfredinni/python-cheatsheet)
- [PDF](https://github.com/wilfredinni/Python-cheatsheet/raw/master/python_cheat_sheet.pdf)
- [Jupyter Notebook](https://mybinder.org/v2/gh/wilfredinni/python-cheatsheet/master?filepath=jupyter_notebooks)

Context Manager
While Python's context managers are widely used, few understand the purpose behind their use. These statements, commonly used with reading and writing files, assist the application in conserving system memory and improve resource management by ensuring specific resources are only in use for certain processes.

with statement
A context manager is an object that is notified when a context (a block of code) starts and ends. You commonly use one with the with statement. It takes care of the notifying.

For example, file objects are context managers. When a context ends, the file object is closed automatically:
###Code
with open(filename) as f:  # 'filename' is a placeholder for a path to an existing file
    file_contents = f.read()

# the file object has automatically been closed.
###Output
_____no_output_____
###Markdown
Anything that ends execution of the block causes the context manager's exit method to be called. This includes exceptions, and can be useful when an error causes you to prematurely exit from an open file or connection. Exiting a script without properly closing files/connections is a bad idea that may cause data loss or other problems. By using a context manager you can ensure that precautions are always taken to prevent damage or loss in this way.

Writing your own contextmanager using generator syntax
It is also possible to write a context manager using generator syntax thanks to the `contextlib.contextmanager` decorator:
###Code
import contextlib

@contextlib.contextmanager
def context_manager(num):
    print('Enter')
    yield num + 1
    print('Exit')

with context_manager(2) as cm:
    # the following instructions are run when the 'yield' point of the context
    # manager is reached.
    # 'cm' will have the value that was yielded
    print('Right in the middle with cm = {}'.format(cm))
###Output
_____no_output_____
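###Markdown
A class-based equivalent, added here for illustration (it is not in the original cheatsheet): any object with `__enter__` and `__exit__` methods is a context manager, which is exactly what the decorator above builds for you from a generator.
###Code
class ContextManager:
    def __init__(self, num):
        self.num = num

    def __enter__(self):
        print('Enter')
        return self.num + 1          # bound to the 'as' target

    def __exit__(self, exc_type, exc_value, traceback):
        print('Exit')                # runs even if the block raised
        return False                 # do not suppress exceptions

with ContextManager(2) as cm:
    print('Right in the middle with cm = {}'.format(cm))
###Output
_____no_output_____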
Sentiment_analysis_NLP.ipynb
###Markdown
###Code
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 /root/.kaggle/kaggle.json
!kaggle competitions download -c sentiment-analysis-on-movie-reviews
!unzip train.tsv.zip -d ./dataset
!wget http://nlp.stanford.edu/data/glove.6B.zip
!unzip glove.6B.zip -d ./glove/

#paths
data_file = './dataset/train.tsv'
dataset = './dataset'

import pandas as pd

reviews = pd.read_csv(data_file, sep='\t')
reviews.head()

review_phrase = reviews['Phrase']
review_sentiment = reviews['Sentiment']

import numpy as np

# one-hot encode the labels; sentiments in this competition are integers 0 to 4,
# so they can be used directly as column indices
Y_dim = (len(review_sentiment), 5)
Y = np.zeros(Y_dim)
for i, sentiment in enumerate(review_sentiment):
    Y[i][sentiment] = 1

#clean the Phrases
import re
import string

rem = string.punctuation
pattern = r"[{}]".format(re.escape(rem))  #remove punctuation (escaped so the character class is safe)
review_phrase_cleaned = review_phrase.str.replace(pattern, '', regex=True)
review_phrase_cleaned  #the dataset is quite noisy

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

max_features = 10000
tokenizer = Tokenizer(num_words=max_features, split=' ')
tokenizer.fit_on_texts(review_phrase_cleaned.values)
X = tokenizer.texts_to_sequences(review_phrase_cleaned.values)
X = pad_sequences(X)
X.shape

#split train and validation data
from sklearn.model_selection import train_test_split

x_train, x_val, y_train, y_val = train_test_split(X, Y, test_size=0.2, random_state=33)
print(x_train.shape, y_train.shape)
print(x_val.shape, y_val.shape)

def get_coefs(word, *arr):
    return word, np.asarray(arr, dtype='float32')

def get_embed_mat(EMBEDDING_FILE, max_features, embed_dim):
    # word vectors
    embeddings_index = dict(get_coefs(*o.rstrip().rsplit(' '))
                            for o in open(EMBEDDING_FILE, encoding='utf8'))
    print('Found %s word vectors.' % len(embeddings_index))

    # embedding matrix: random init, then overwrite rows with known GloVe vectors
    word_index = tokenizer.word_index
    num_words = min(max_features, len(word_index) + 1)
    all_embs = np.stack(embeddings_index.values())  #for random init
    embedding_matrix = np.random.normal(all_embs.mean(), all_embs.std(),
                                        (num_words, embed_dim))
    for word, i in word_index.items():
        if i >= max_features:
            continue
        embedding_vector = embeddings_index.get(word)
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector

    max_features = embedding_matrix.shape[0]
    return embedding_matrix

# embedding matrix
EMBEDDING_FILE = './glove/glove.6B.100d.txt'
embed_dim = 100  #word vector dim
embedding_matrix = get_embed_mat(EMBEDDING_FILE, max_features, embed_dim)
print(embedding_matrix.shape)

from keras.models import Sequential
from keras.layers import Dense, Embedding, GRU, Flatten, Conv1D, MaxPooling1D, Dropout, Bidirectional
from keras.optimizers import Adam

filters = 64
pool_size = 2
kernel_size = 3
gru_out = 128

model = Sequential()
model.add(Embedding(max_features, embed_dim, input_length=X.shape[1],
                    weights=[embedding_matrix], trainable=True))
model.add(Conv1D(filters, kernel_size=kernel_size, padding='same', activation='relu'))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Bidirectional(GRU(gru_out, return_sequences=True)))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

from keras.callbacks import ModelCheckpoint

model_name = 'Sentiment_analysis_nlp.h5'
# note: newer tf.keras versions name these metric keys 'accuracy'/'val_accuracy'
checkpointer = ModelCheckpoint(filepath=model_name, monitor='val_acc', verbose=0, save_best_only=True)
history = model.fit(x_train,
                    y_train,
                    epochs=5,
                    batch_size=128,
                    callbacks=[checkpointer],
                    validation_data=(x_val, y_val))

import matplotlib.pyplot as plt

plt.figure(figsize=(15,5))
plt.subplot(1, 2, 1)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend(['train', 'test'], loc='upper left')

plt.subplot(1, 2, 2)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.xlabel('epochs')
plt.ylabel('acc')
plt.legend(['train', 'test'], loc='upper left')
plt.show()

from keras.models import load_model

#retrieving model from the checkpoint
best = load_model(model_name)
_, acc = best.evaluate(x_val, y_val)
print("validation accuracy: ", acc)
###Output
_____no_output_____
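###Markdown
A short inference sketch, added for illustration (not in the original notebook): it reuses the fitted `tokenizer`, the training pad length, and the reloaded `best` model to score a new phrase. The `predict_sentiment` helper and its class names are assumptions based on the usual 0-4 label meaning for this competition.
###Code
def predict_sentiment(phrase):
    labels = ['negative', 'somewhat negative', 'neutral', 'somewhat positive', 'positive']
    seq = tokenizer.texts_to_sequences([phrase])
    seq = pad_sequences(seq, maxlen=X.shape[1])   # pad to the length used in training
    probs = best.predict(seq)[0]
    return labels[probs.argmax()], probs.max()

print(predict_sentiment('a gorgeous and deeply moving film'))
###Output
_____no_output_____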
maps_hash/.ipynb_checkpoints/hash_map-checkpoint.ipynb
###Markdown
(In addition to having fun) We write programs to solve real world problems. Data structures help us in representing and efficiently manipulating the data associated with these problems.

Let us see if we can use any of the data structures that we already know to solve the following problem.

Problem Statement
In a class of students, store heights for each student.

The problem in itself is very simple. We have the data of heights of each student. We want to store it so that the next time someone asks for the height of a student, we can easily return the value. But how can we store these heights?

Obviously we can use a database and store these values. But, let's say we don't want to do that for now. We want to use a data structure to store these values as part of our program. For the sake of simplicity, our problem is limited to storing heights of students. But you can certainly imagine scenarios where you have to store such `key-value` pairs and later on, when someone gives you a `key`, you can efficiently return the corresponding `value`.

The class diagram for HashMaps would look something like this.
###Code
class HashMap:
    def __init__(self):
        self.num_entries = 0

    def put(self, key, value):
        pass

    def get(self, key):
        pass

    def size(self):
        return self.num_entries
###Output
_____no_output_____
###Markdown
Arrays
Can we use arrays to store `key-value` pairs?

We can certainly use one array to store the names of the students and use another array to store their corresponding heights at the corresponding indices. What will be the time complexity in this scenario?

To obtain the height of a student, say `Potter, Harry`, we will have to traverse the entire array and check if the value at a particular index matches `Potter, Harry`. Once we find the index in which this value is stored, we can use this index to obtain the height from the second array. Thus, because of this traversal, the complexity of the `get()` operation becomes $O(n)$. Even if we maintain a sorted array, the operation will not take less than $O(log(n))$ complexity.

What happens if a student leaves the class? We will have to delete the entry corresponding to the student from both arrays. This would require another traversal to find the index, and then we will have to shift the entire array to fill this gap. Again, the time complexity of the operation becomes $O(n)$.

Linked List
Is it possible to use linked lists for this problem?

We can certainly modify our `LinkedListNode` to have two different value attributes - one for the name of the student and the other for height. But we again face the same problem. In the worst case, we will have to traverse the entire linked list to find the height of a particular student. Once again, the cost of the operation becomes $O(n)$.

Stacks and Queues
Stacks and Queues are LIFO and FIFO data structures respectively. Can you think why they too do not make a good choice for storing `key-value` pairs?

------------------------------------------------------------------------

Can we do better? Can you think of any data structure that allows for a fast `get()` operation?

Let us circle back to arrays. When we obtain the element present at a particular index using something like `arr[3]`, the operation takes constant i.e. `O(1)` time. *For review - Does this constant time operation require further explanation?*

If we think about `array indices as keys` and the `element present at those indices as values`, we can fairly conclude that at least for non-negative integer `keys`, we can use arrays.
However, like our current problem statement, we may not always have integer keys.

`If only we had a function that could give us array indices for any key value that we gave it!`

Hash Functions
Simply put, hash functions are these really incredible `magic` functions which can map data of any size to fixed size data. This fixed sized data is often called a hash code or hash digest.

Let's create our own hash function to store strings.
###Code
def hash_function(string):
    pass
###Output
_____no_output_____
###Markdown
For a given string, say `abcd`, a very simple hash function can be the sum of the corresponding ASCII values.

*Note: you can use `ord(character)` to determine the ASCII value of a particular character, e.g. `ord('a')` will return 97*
###Code
def hash_function(string):
    hash_code = 0
    for character in string:
        hash_code += ord(character)
    return hash_code

hash_code_1 = hash_function("abcd")
print(hash_code_1)
###Output
394
###Markdown
Looks like our hash function is working fine. But is this really a good hash function?

For starters, it will return the same value for `abcd` and `bcda`. Do we want that? We can create 24 different permutations for the string `abcd`, and each will have the same value. We cannot put 24 values into one index.

Obviously, this makes it clear that our hash function must return unique values for unique objects. When two different inputs produce the same output, we have something called a `collision`. An ideal hash function must be immune from producing collisions.

Let's think of something else. Can a product help? We would again run into the same problem. The honest answer is that we have different hash functions for different types of keys. The hash function for integers will be different from the hash function for strings, which, again, will be different for some object of a class that you created. However, let's try to continue with our problem and come up with a hash function for strings.

Hash Function for Strings
For a string, say `abcde`, a very effective function is treating it as a number in a prime number base `p`. Let's elaborate this statement. For a number, say `578`, we can represent this number in the base 10 number system as

$$5*10^2 + 7*10^1 + 8*10^0$$

Similarly, we can treat `abcde` as

$$a * p^4 + b * p^3 + c * p^2 + d * p^1 + e * p^0$$

Here, we replace each character with its corresponding ASCII value.

A lot of research goes into figuring out good hash functions, and this hash function is one of the most popular functions used for strings. We use prime numbers because they provide a good distribution. The most common prime numbers used for this function are 31 and 37. Thus, using this algorithm, we can get a corresponding integer value for each string key and store it in the array.

Note that the array used for this purpose is called a `bucket array`. It is not a special array. We simply choose to give a special name to arrays used for this purpose. Each entry in this `bucket array` is called a `bucket`, and the index in which we store a bucket is called the `bucket index`.

Let's add these details to our class.
###Code class HashMap:

    def __init__(self, initial_size=10):
        self.bucket_array = [None for _ in range(initial_size)]
        self.p = 37
        self.num_entries = 0

    def put(self, key, value):
        pass

    def get(self, key):
        pass

    def get_bucket_index(self, key):
        # no compression yet - this simply returns the (potentially huge) hash code
        return self.get_hash_code(key)

    def get_hash_code(self, key):
        key = str(key)
        num_buckets = len(self.bucket_array)
        current_coefficient = 1
        hash_code = 0
        for character in key:
            # treat the string as a number in base p
            hash_code += ord(character) * current_coefficient
            current_coefficient *= self.p

        return hash_code

hash_map = HashMap()
bucket_index = hash_map.get_bucket_index("abcd")
print(bucket_index)

hash_map = HashMap()
bucket_index = hash_map.get_bucket_index("bcda")
print(bucket_index) ###Output 5054002 ###Markdown Compression FunctionWe now have a good hash function which will return unique values for unique objects. But look at the values - they are huge, and we cannot create such large arrays. So we use another function - a `compression function` - to compress these values so that we can create arrays of reasonable sizes. A very simple, good, and effective compression function is `hash_code mod len(bucket_array)`. The modulo operator `%` returns the remainder of one number when divided by another. So, if we have an array of size 10, we can be sure that the modulo of any number with 10 will be less than 10, allowing it to fit into our bucket array.Because of how the modulo operator works, instead of creating a new function, we can write the logic for the compression function in our `get_hash_code()` function itself.https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/modular-multiplication ###Code class HashMap:

    def __init__(self, initial_size = 10):
        self.bucket_array = [None for _ in range(initial_size)]
        self.p = 31
        self.num_entries = 0

    def put(self, key, value):
        pass

    def get(self, key):
        pass

    def get_bucket_index(self, key):
        bucket_index = self.get_hash_code(key)
        return bucket_index

    def get_hash_code(self, key):
        key = str(key)
        num_buckets = len(self.bucket_array)
        current_coefficient = 1
        hash_code = 0
        for character in key:
            hash_code += ord(character) * current_coefficient
            hash_code = hash_code % num_buckets                      # compress hash_code
            current_coefficient *= self.p
            current_coefficient = current_coefficient % num_buckets  # compress coefficient

        return hash_code % num_buckets                               # one last compression before returning

    def size(self):
        return self.num_entries ###Output _____no_output_____ ###Markdown Collision HandlingAs discussed earlier, when two different inputs produce the same output, we have a collision. Our implementation of the `get_hash_code()` function is satisfactory. However, because we are using a compression function, we are prone to collisions. Consider the following scenario. We have a bucket array of length 10 and we get two different hash codes for two different inputs, say 355 and 1095. Even though the hash codes are different in this case, the bucket index will be the same because of the way we have implemented our compression function. Such scenarios, where multiple entries want to go to the same bucket, are very common. So, we introduce some logic to handle collisions.There are two popular ways in which we handle collisions.1. Closed Addressing or Separate Chaining2. Open AddressingClosed addressing is a clever technique where we use the same bucket to store multiple objects. The bucket in this case will store a linked list of key-value pairs. Every bucket has its own separate chain of linked list nodes. 
In open addressing, we do the following: * If, after getting the bucket index, the bucket is empty, we store the object in that particular bucket * If the bucket is not empty, we find an alternate bucket index by using another function which modifies the current hash code to give a new code Separate chaining is a simple and effective technique to handle collisions and that is what we discuss here. Implement the `put` and `get` function using the idea of separate chaining. ###Code class LinkedListNode: def __init__(self, key, value): self.key = key self.value = value self.next = None class HashMap: def __init__(self, initial_size = 10): self.bucket_array = [None for _ in range(initial_size)] self.p = 31 self.num_entries = 0 def put(self, key, value): bucket_index = self.get_bucket_index(key) new_node = LinkedListNode(key, value) head = self.bucket_array[bucket_index] # check if key is already present in the map, and update it's value while head is not None: if head.key == key: head.value = value return head = head.next # key not found in the chain --> create a new entry and place it at the head of the chain head = self.bucket_array[bucket_index] new_node.next = head self.bucket_array[bucket_index] = new_node self.num_entries += 1 def get(self, key): bucket_index = self.get_hash_code(key) head = self.bucket_array[bucket_index] while head is not None: if head.key == key: return head.value head = head.next return None def get_bucket_index(self, key): bucket_index = self.get_hash_code(key) return bucket_index def get_hash_code(self, key): key = str(key) num_buckets = len(self.bucket_array) current_coefficient = 1 hash_code = 0 for character in key: hash_code += ord(character) * current_coefficient hash_code = hash_code % num_buckets # compress hash_code current_coefficient *= self.p current_coefficient = current_coefficient % num_buckets # compress coefficient return hash_code % num_buckets # one last compression before returning def size(self): return self.num_entries hash_map = HashMap() hash_map.put("one", 1) hash_map.put("two", 2) hash_map.put("three", 3) hash_map.put("neo", 11) print("size: {}".format(hash_map.size())) print("one: {}".format(hash_map.get("one"))) print("neo: {}".format(hash_map.get("neo"))) print("three: {}".format(hash_map.get("three"))) print("size: {}".format(hash_map.size())) ###Output size: 4 one: 1 neo: 11 three: 3 size: 4 ###Markdown Time Complexity and Rehashing We used arrays to implement our hashmaps because arrays offer $O(1)$ time complexity for both put and get operations. *Note: in case of arrays put is simply `arr[i] = 5` and get is `height = arr[5]`* 1. Put Operation* In the put operation, we first figure out the bucket index. Calculating the hash code to figure out the bucket index takes some time.* After that, we go to the bucket index and in the worst case we traverse the linked list to find out if the key is already present or not. This also takes some time.To analyze the time complexity for any algorithm as a function of the input size `n`, we first have to determine what our input is. In this case, we are putting and gettin key value pairs. So, these entries i.e. key-value pairs are our input. Therefore, our `n` is number of such key-value pair entries.*Note: time complexity is always determined in terms of input size and not the actual amount of work that is being done independent of input size. 
That "independent amount of work" will be constant for every input size so we disregard that.** In case of our hash function, the computation time for hash code depends on the size of each string. Compared to number of entries (which we always consider to be very high e.g. in the order of $10^5$) the length of each string can be considered to be very small. Also, most of the strings will be around the same size when compared to this high number of entries. Hence, we can ignore the hash computation time in our analysis of time complexity.* Now, the entire time complexity essentialy depends on the linked list traversal. In the worst case, all entries would go to the same bucket index and our linked list at that index would be huge. Therefore, the time complexity in that scenario would be $O(n)$. However, hash functions are wisely chosen so that this does not happen. `On average, the distribution of entries is such that if we have n entries and b buckets, then each bucket does not have more than n/b key-value pair entries.` Therefore, because of our choice of hash functions, we can assume that the time complexity is $O(\dfrac{n}{b})$.This number which determines the `load` on our bucket array `n/b` is known as load factor. Generally, we try to keep our load factor around or less than 0.7. This essentially means that if we have a bucket array of size 10, then the number of key-value pair entries will not be more than 7.**What happens when we get more entries and the value of our load factor crosses 0.7?**In that scenario, we must increase the size of our bucket array. Also, we must recalculate the bucket index for each entry in the hashn map.*Note: the hash code for each key present in the bucket array would still be the same. However, because of the compression function, the bucket index will change.* Therefore, we need to `rehash` all the entries in our hash map. This is known as `Rehashing`. 2. Get and Delete operationCan you figure out the time complexity for get and delete operation?*Answer: the solution follows the same logic and the time complexity is O(1). Note that we do not reduce the size of bucket array in delete operation. 
Rehashing Code ###Code class LinkedListNode: def __init__(self, key, value): self.key = key self.value = value self.next = None class HashMap: def __init__(self, initial_size = 15): self.bucket_array = [None for _ in range(initial_size)] self.p = 31 self.num_entries = 0 self.load_factor = 0.7 def put(self, key, value): bucket_index = self.get_bucket_index(key) new_node = LinkedListNode(key, value) head = self.bucket_array[bucket_index] # check if key is already present in the map, and update it's value while head is not None: if head.key == key: head.value = value return head = head.next # key not found in the chain --> create a new entry and place it at the head of the chain head = self.bucket_array[bucket_index] new_node.next = head self.bucket_array[bucket_index] = new_node self.num_entries += 1 # check for load factor current_load_factor = self.num_entries / len(self.bucket_array) if current_load_factor > self.load_factor: self.num_entries = 0 self._rehash() def get(self, key): bucket_index = self.get_hash_code(key) head = self.bucket_array[bucket_index] while head is not None: if head.key == key: return head.value head = head.next return None def get_bucket_index(self, key): bucket_index = self.get_hash_code(key) return bucket_index def get_hash_code(self, key): key = str(key) num_buckets = len(self.bucket_array) current_coefficient = 1 hash_code = 0 for character in key: hash_code += ord(character) * current_coefficient hash_code = hash_code % num_buckets # compress hash_code current_coefficient *= self.p current_coefficient = current_coefficient % num_buckets # compress coefficient return hash_code % num_buckets # one last compression before returning def size(self): return self.num_entries def _rehash(self): old_num_buckets = len(self.bucket_array) old_bucket_array = self.bucket_array num_buckets = 2 * old_num_buckets self.bucket_array = [None for _ in range(num_buckets)] for head in old_bucket_array: while head is not None: key = head.key value = head.value self.put(key, value) # we can use our put() method to rehash head = head.next hash_map = HashMap(7) hash_map.put("one", 1) hash_map.put("two", 2) hash_map.put("three", 3) hash_map.put("neo", 11) print("size: {}".format(hash_map.size())) print("one: {}".format(hash_map.get("one"))) print("neo: {}".format(hash_map.get("neo"))) print("three: {}".format(hash_map.get("three"))) print("size: {}".format(hash_map.size())) ###Output size: 4 one: 1 neo: 11 three: 3 size: 4 ###Markdown Delete OperationCan you implement delete operation using all we have learnt so far? 
###Code class LinkedListNode: def __init__(self, key, value): self.key = key self.value = value self.next = None class HashMap: def __init__(self, initial_size = 15): self.bucket_array = [None for _ in range(initial_size)] self.p = 31 self.num_entries = 0 self.load_factor = 0.7 def put(self, key, value): bucket_index = self.get_bucket_index(key) new_node = LinkedListNode(key, value) head = self.bucket_array[bucket_index] # check if key is already present in the map, and update it's value while head is not None: if head.key == key: head.value = value return head = head.next # key not found in the chain --> create a new entry and place it at the head of the chain head = self.bucket_array[bucket_index] new_node.next = head self.bucket_array[bucket_index] = new_node self.num_entries += 1 # check for load factor current_load_factor = self.num_entries / len(self.bucket_array) if current_load_factor > self.load_factor: self.num_entries = 0 self._rehash() def get(self, key): bucket_index = self.get_hash_code(key) head = self.bucket_array[bucket_index] while head is not None: if head.key == key: return head.value head = head.next return None def get_bucket_index(self, key): bucket_index = self.get_hash_code(key) return bucket_index def get_hash_code(self, key): key = str(key) num_buckets = len(self.bucket_array) current_coefficient = 1 hash_code = 0 for character in key: hash_code += ord(character) * current_coefficient hash_code = hash_code % num_buckets # compress hash_code current_coefficient *= self.p current_coefficient = current_coefficient % num_buckets # compress coefficient return hash_code % num_buckets # one last compression before returning def size(self): return self.num_entries def _rehash(self): old_num_buckets = len(self.bucket_array) old_bucket_array = self.bucket_array num_buckets = 2 * old_num_buckets self.bucket_array = [None for _ in range(num_buckets)] for head in old_bucket_array: while head is not None: key = head.key value = head.value self.put(key, value) # we can use our put() method to rehash head = head.next def delete(self, key): bucket_index = self.get_bucket_index(key) head = self.bucket_array[bucket_index] previous = None while head is not None: if head.key == key: if previous is None: self.bucket_array[bucket_index] = head.next else: previous.next = head.next self.num_entries -= 1 return else: previous = head head = head.next hash_map = HashMap(7) hash_map.put("one", 1) hash_map.put("two", 2) hash_map.put("three", 3) hash_map.put("neo", 11) print("size: {}".format(hash_map.size())) print("one: {}".format(hash_map.get("one"))) print("neo: {}".format(hash_map.get("neo"))) print("three: {}".format(hash_map.get("three"))) print("size: {}".format(hash_map.size())) hash_map.delete("one") print(hash_map.get("one")) print(hash_map.size()) ###Output size: 4 one: 1 neo: 11 three: 3 size: 4 None 3
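###Markdown As a final sanity check, the sketch below (an illustrative addition, not part of the original exercise) mirrors the same sequence of operations against Python's built-in `dict` and asserts that both containers agree. ###Code # compare our HashMap against Python's built-in dict on the same operations
reference = {}
test_map = HashMap(7)

for key, value in [("one", 1), ("two", 2), ("three", 3), ("neo", 11)]:
    test_map.put(key, value)
    reference[key] = value

test_map.delete("two")
del reference["two"]

for key in ["one", "two", "three", "neo"]:
    # get() returns None for missing keys, matching dict.get()
    assert test_map.get(key) == reference.get(key)

print("HashMap agrees with dict on all checked keys") ###Output _____no_output_____ 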
.ipynb_checkpoints/Pre-processing-checkpoint.ipynb
###Markdown Pre-processing ###Code #https://medium.com/@bedigunjit/simple-guide-to-text-classification-nlp-using-svm-and-naive-bayes-with-python-421db3a72d34 import pandas as pd import numpy as np from nltk.tokenize import word_tokenize from nltk import pos_tag from nltk.corpus import stopwords from nltk.stem import WordNetLemmatizer from sklearn.preprocessing import LabelEncoder from collections import defaultdict from nltk.corpus import wordnet as wn from sklearn.feature_extraction.text import TfidfVectorizer from sklearn import model_selection, naive_bayes, svm from sklearn.metrics import accuracy_score from collections import Counter #import dill if __name__ == "__main__": # Reproduce the same result every time if the script is kept consistent otherwise each run will produce different results np.random.seed(500) #[1] Read the data Corpus = pd.read_json(r"C:\Users\Panos\Desktop\Dissert\Code\Sample_Video_Games_5.json", lines=True, encoding='latin-1') Corpus = Corpus[['reviewText','overall']] # Print some info Corpus.info() print(Corpus.overall.value_counts()) #[1.5] Reduce number of classes for index,entry in enumerate(Corpus['overall']): if entry == 1.0 or entry == 2.0: Corpus.loc[index,'overall_final'] = -1 elif entry == 3.0: Corpus.loc[index,'overall_final'] = 0 elif entry == 4.0 or entry == 5.0: Corpus.loc[index,'overall_final'] = 1 #[2] Preprocessing # Step - a : Remove blank rows if any. Corpus['reviewText'].dropna(inplace=True) # Step - b : Change all the text to lower case. This is required as python interprets 'dog' and 'DOG' differently Corpus['reviewText'] = [entry.lower() for entry in Corpus['reviewText']] # Step - c : Tokenization : In this each entry in the corpus will be broken into set of words Corpus['reviewText'] = [word_tokenize(entry) for entry in Corpus['reviewText']] # Step - d : Remove Stop words, Non-Numeric and perfom Word Stemming/Lemmenting. # WordNetLemmatizer requires Pos tags to understand if the word is noun or verb or adjective etc. By default it is set to Noun tag_map = defaultdict(lambda : wn.NOUN) tag_map['J'] = wn.ADJ tag_map['V'] = wn.VERB tag_map['R'] = wn.ADV for index,entry in enumerate(Corpus['reviewText']): # Declaring Empty List to store the words that follow the rules for this step Final_words = [] # Initializing WordNetLemmatizer() word_Lemmatized = WordNetLemmatizer() # pos_tag function below will provide the 'tag' i.e if the word is Noun(N) or Verb(V) or something else. 
for word, tag in pos_tag(entry):
    # Below condition is to check for Stop words and consider only alphabets
    if word not in stopwords.words('english') and word.isalpha():
        word_Final = word_Lemmatized.lemmatize(word,tag_map[tag[0]])
        Final_words.append(word_Final)
# The final processed set of words for each iteration will be stored in 'text_final'
Corpus.loc[index,'text_final'] = str(Final_words)

#Print the first 3 rows
print(Corpus.iloc[:3])
print("hey yo")

#dill.dump_session('notebook_env.db')

#[3] Prepare Train and Test Data sets
Train_X, Test_X, Train_Y, Test_Y = model_selection.train_test_split(Corpus['text_final'],Corpus['overall_final'],test_size=0.3)
print(Counter(Train_Y).values()) # counts the elements' frequency

#[4] Encoding
Encoder = LabelEncoder()
Train_Y = Encoder.fit_transform(Train_Y)
Test_Y = Encoder.transform(Test_Y) # reuse the encoding learned from the training labels

#[5] Word Vectorization
Tfidf_vect = TfidfVectorizer(max_features=10000)
Tfidf_vect.fit(Corpus['text_final']) # learn the vocabulary (ideally this would be fit on Train_X only, to avoid leakage)
Train_X_Tfidf = Tfidf_vect.transform(Train_X)
Test_X_Tfidf = Tfidf_vect.transform(Test_X)

#[6] NearMiss (under-sampling the majority classes)
from imblearn.under_sampling import NearMiss, RandomUnderSampler
nm = NearMiss(ratio='not minority',random_state=777, version=1, n_neighbors=1)
X_nm, y_nm = nm.fit_sample(Train_X_Tfidf, Train_Y)
print(Counter(y_nm).values()) # counts the elements' frequency

# the vocabulary that it has learned from the corpus
print(Tfidf_vect.vocabulary_)

# the vectorized data
print(Train_X_Tfidf)

#[7] Use the ML Algorithms to Predict the outcome

# fit the training dataset on the NB classifier
Naive = naive_bayes.MultinomialNB()
Naive.fit(X_nm,y_nm)

# predict the labels on validation dataset
predictions_NB = Naive.predict(Test_X_Tfidf)

# Use accuracy_score function to get the accuracy
print("Naive Bayes Accuracy Score -> ",accuracy_score(predictions_NB, Test_Y)*100)

# Making the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(Test_Y, predictions_NB)
print("-----------------cm------------------")
print(cm)
print("-------------------------------------")

#[8] Support Vector Machine
# Classifier - Algorithm - SVM
# fit the training dataset on the classifier
SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto')
SVM.fit(X_nm,y_nm)

# predict the labels on validation dataset
predictions_SVM = SVM.predict(Test_X_Tfidf)

# Use accuracy_score function to get the accuracy
print("SVM Accuracy Score -> ",accuracy_score(predictions_SVM, Test_Y)*100)

#A try to parallelize the for loop

# #https://medium.com/@bedigunjit/simple-guide-to-text-classification-nlp-using-svm-and-naive-bayes-with-python-421db3a72d34
# import pandas as pd
# import numpy as np
# from nltk.tokenize import word_tokenize
# from nltk import pos_tag
# from nltk.corpus import stopwords
# from nltk.stem import WordNetLemmatizer
# from sklearn.preprocessing import LabelEncoder
# from collections import defaultdict
# from nltk.corpus import wordnet as wn
# from sklearn.feature_extraction.text import TfidfVectorizer
# from sklearn import model_selection, naive_bayes, svm
# from sklearn.metrics import accuracy_score
# from collections import Counter

# import multiprocessing
# from joblib import Parallel, delayed

# if __name__ == "__main__":

#     # Reproduce the same result every time if the script is kept consistent otherwise each run will produce different results
#     np.random.seed(500)

#     #[1] Read the data
#     Corpus = pd.read_json(r"C:\Users\Panos\Desktop\Dissert\Code\Sample_Video_Games_5.json", lines=True, 
encoding='latin-1') # Corpus = Corpus[['reviewText','overall']] # # Print some info # Corpus.info() # print(Corpus.overall.value_counts()) # #https://medium.com/@mjschillawski/quick-and-easy-parallelization-in-python-32cb9027e490 # num_cores = multiprocessing.cpu_count() # processed_list = Parallel(n_jobs=num_cores)(delayed(my_function(i,parameters) # for i in enumerate(Corpus['overall']) # #[2] Preprocessing # # Step - a : Remove blank rows if any. # Corpus['reviewText'].dropna(inplace=True) # # Step - b : Change all the text to lower case. This is required as python interprets 'dog' and 'DOG' differently # Corpus['reviewText'] = [entry.lower() for entry in Corpus['reviewText']] # # Step - c : Tokenization : In this each entry in the corpus will be broken into set of words # Corpus['reviewText'] = [word_tokenize(entry) for entry in Corpus['reviewText']] # # Step - d : Remove Stop words, Non-Numeric and perfom Word Stemming/Lemmenting. # # WordNetLemmatizer requires Pos tags to understand if the word is noun or verb or adjective etc. By default it is set to Noun # tag_map = defaultdict(lambda : wn.NOUN) # tag_map['J'] = wn.ADJ # tag_map['V'] = wn.VERB # tag_map['R'] = wn.ADV # for index,entry in enumerate(Corpus['reviewText']): # # Declaring Empty List to store the words that follow the rules for this step # Final_words = [] # # Initializing WordNetLemmatizer() # word_Lemmatized = WordNetLemmatizer() # # pos_tag function below will provide the 'tag' i.e if the word is Noun(N) or Verb(V) or something else. # for word, tag in pos_tag(entry): # # Below condition is to check for Stop words and consider only alphabets # if word not in stopwords.words('english') and word.isalpha(): # word_Final = word_Lemmatized.lemmatize(word,tag_map[tag[0]]) # Final_words.append(word_Final) # # The final processed set of words for each iteration will be stored in 'text_final' # Corpus.loc[index,'text_final'] = str(Final_words) # #Print the first 3 rows # print(Corpus.iloc[:3]) # print("hey yo") # def my_function(): # #[1.5] Reduce number of classes # for index,entry in enumerate(Corpus['overall']): # if entry == 1.0 or entry == 2.0: # Corpus.loc[index,'overall_final'] = -1 # elif entry == 3.0: # Corpus.loc[index,'overall_final'] = 0 # elif entry == 4.0 or entry == 5.0: # Corpus.loc[index,'overall_final'] = 1 ###Output _____no_output_____
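###Markdown One refinement worth noting: above, the TF-IDF vectorizer was fitted on the full corpus before the split, which leaks test vocabulary into training. The cell below is an illustrative sketch of the leak-free ordering; it assumes `Train_X` and `Test_X` from the cells above are in scope. ###Code # fit the vectorizer on the training split only,
# then apply the fitted vocabulary to both splits
from sklearn.feature_extraction.text import TfidfVectorizer

Tfidf_vect_clean = TfidfVectorizer(max_features=10000)
Train_X_Tfidf_clean = Tfidf_vect_clean.fit_transform(Train_X)  # learn vocabulary from train only
Test_X_Tfidf_clean = Tfidf_vect_clean.transform(Test_X)        # reuse it on the test split

print(Train_X_Tfidf_clean.shape, Test_X_Tfidf_clean.shape) ###Output _____no_output_____ 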
01 Prepare point spread data.ipynb
###Markdown Load packages ###Code import numpy as np
import pandas as pd ###Output _____no_output_____ ###Markdown Load data ###Code spread_df=[]

years_list = [18,17,16,15,14,13,12,11,10,'09','08','07','06','05','04','03']
year_int = 2019

for year in years_list:
    #Note: 18 refers to 2019 and so on
    temp_spread_df = pd.read_csv("Data/Point spreads/ncaabb" + str(year) + ".csv")
    temp_spread_df['Season'] = year_int
    year_int = year_int-1
    if year==18:
        spread_df = temp_spread_df
    else:
        spread_df = spread_df.append(temp_spread_df)

spread_df['home'] = spread_df['home'].str.lower()
spread_df['road'] = spread_df['road'].str.lower()

spread_df.head() ###Output _____no_output_____ ###Markdown Match Team IDs to data ###Code teams_df = pd.read_csv('Data/Kaggle NCAA/TeamSpellings.csv', sep=',', engine='python')
teams_df.head()

team_list = ['home','road']

for team in team_list:
    spread_df = pd.merge(spread_df, teams_df, left_on=[team], right_on = ['TeamNameSpelling'], how='left')
    if team=='home':
        spread_df.rename(columns={'TeamID': 'HTeamID'}, inplace=True)
    if team=='road':
        spread_df.rename(columns={'TeamID': 'RTeamID'}, inplace=True)
    spread_df = spread_df.drop(['TeamNameSpelling'], axis=1) ###Output _____no_output_____ ###Markdown Derive the winning and losing team and score from the home and road columns ###Code spread_df.loc[spread_df['hscore']>spread_df['rscore'], 'WTeamID'] = spread_df['HTeamID']
spread_df.loc[spread_df['hscore']<spread_df['rscore'], 'LTeamID'] = spread_df['HTeamID']
spread_df.loc[spread_df['hscore']>spread_df['rscore'], 'WScore'] = spread_df['hscore']
spread_df.loc[spread_df['hscore']<spread_df['rscore'], 'LScore'] = spread_df['hscore']

spread_df.loc[spread_df['rscore']>spread_df['hscore'], 'WTeamID'] = spread_df['RTeamID']
spread_df.loc[spread_df['rscore']<spread_df['hscore'], 'LTeamID'] = spread_df['RTeamID']
spread_df.loc[spread_df['rscore']>spread_df['hscore'], 'WScore'] = spread_df['rscore']
spread_df.loc[spread_df['rscore']<spread_df['hscore'], 'LScore'] = spread_df['rscore']

# assign the result back, otherwise the cleaned frame is discarded
spread_df = spread_df.replace([np.inf, -np.inf], np.nan).dropna(subset=['WTeamID','LTeamID','WScore','LScore','line'])

spread_df = spread_df[spread_df['WScore'] != "."]
spread_df = spread_df[spread_df['LScore'] != "."]
spread_df = spread_df[spread_df['line'] != "."]

spread_df['line'].value_counts(dropna=False)

#drop spreads with bad coverage
spread_df = spread_df.drop(['line7ot','lineargh','lineash','lineashby','linedd','linedunk','lineer','linegreen','linemarkov','linemass','linepib','linepig','linepir','linepiw','linepom','lineprophet','linerpi','lineround','linesauce','lineseven','neutral','lineteamrnks','linetalis','lineespn','linemassey','linedonc','linesaggm','std','linepugh','linefox','linedok','lineopen'], axis=1)

spread_df.tail()

spread_df = spread_df.dropna(subset=['line']) ###Output _____no_output_____ ###Markdown Write the data to a csv ###Code spread_df.to_csv('Data/~Created data/spreads_all.csv', index=False) ###Output _____no_output_____ 
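###Markdown One more diagnostic worth running before relying on the written CSV: team-name spellings that found no Kaggle TeamID are the usual source of silent NaNs in this pipeline. The check below is an illustrative addition, not part of the original notebook. ###Code # count rows whose team names failed to match TeamSpellings.csv
unmatched_home = spread_df[spread_df['HTeamID'].isnull()]['home'].value_counts()
unmatched_road = spread_df[spread_df['RTeamID'].isnull()]['road'].value_counts()

print("Unmatched home team spellings:", len(unmatched_home))
print("Unmatched road team spellings:", len(unmatched_road))
print(unmatched_home.head(10)) ###Output _____no_output_____ 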
TITANIC-SUBMISION.ipynb
###Markdown '''Titanic challenge''' ###Code #Import the required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

#Importing dataset
titanic=pd.read_csv('D:\\R-Projects\\train.csv')

df=titanic.copy()

#Lets check our columns
df.columns

#Lets list our columns starting with the target column
columns= ['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked']

#what's the datatype of our columns
df.dtypes

#Are there null values in our dataset? Let's see by writing a function to check through the set
def isnull():
    names=[]
    values=[]
    for column in columns:
        null_values_count=df[column].isnull().sum()
        names.append(column)
        values.append(null_values_count)
    data={'Column':names, 'Null values':values}
    data=pd.DataFrame(data=data)
    return data

d = isnull()
d

titanic.describe() ###Output _____no_output_____ ###Markdown In the describe output above, we discover that the dataset contains a total of 891 rows. ###Code #Lets plot the distribution of each of our variables.
#We define a function first then we will be calling it
def distribution(column, bins=20, figsize=(8,8)):
    try:
        df[column].hist(bins=bins, figsize=figsize)
    except KeyError:
        print(f'The column {column} is not part of the dataframe')

sns.set(style='darkgrid')
distribution('Age')

distribution('Fare') ###Output _____no_output_____ ###Markdown We can only view the distribution of numerical values ###Code #We have our numerical variables as
numerical_attributes=['Age','Fare']
Categorical_attributes=['Pclass','Sex','SibSp','Parch','Embarked']
#You notice we have ignored the variables (i.e. Cabin and Ticket) with string values which need encoding

df['Sex'].value_counts() ###Output _____no_output_____ ###Markdown We discover that the column 'Sex' has 577 males and 314 females ###Code #How many males in the first class
males=df[df['Sex']=='male']
males_first_class=males[males['Pclass']==1]
len(males_first_class)

# note: this counts all surviving males, not only those in first class
males_survived=males[males['Survived']==1]
len(males_survived) ###Output _____no_output_____ ###Markdown We can see that 109 male passengers survived in total (the filter above is not restricted to first class) ###Code #Distribution of siblings/spouses aboard
df['SibSp'].value_counts()

#We want to check unique values in Embarked column
df['Embarked'].value_counts() ###Output _____no_output_____ ###Markdown From the cell above we can derive that 644 passengers boarded at S, 168 at C, and 77 at Q. ###Code #can we plot the number of males and females in the ship? yes
sns.set(style='darkgrid')
sns.countplot(df['Sex'])

sns.countplot(df['Pclass']) ###Output _____no_output_____ ###Markdown We can see that 2nd class had the fewest passengers, followed by 1st class, with most passengers in 3rd class. ###Code #Distribution of parents/children aboard
sns.countplot(df['Parch']) ###Output _____no_output_____ ###Markdown We can see that most passengers (678) traveled without parents or children aboard ###Code #Let's mark missing values and impute them with the mean
from numpy import nan
from numpy import isnan
from sklearn.impute import SimpleImputer
# select the numerical columns (Age has NaNs in this dataset)
values = df[['Age','Fare']].values
# define the imputer
imputer = SimpleImputer(missing_values=nan, strategy='mean')
# transform the dataset
transformed_values = imputer.fit_transform(values)
# count the number of NaN values remaining
print('Missing: %d' % isnan(transformed_values).sum()) ###Output _____no_output_____ ###Markdown Now let's start our models.. 
###Code #importing all requiered libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings(action='ignore') %matplotlib inline from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import Imputer from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import normalize from sklearn.base import BaseEstimator, TransformerMixin from sklearn.pipeline import Pipeline from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score from sklearn.metrics import classification_report from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import LogisticRegression from sklearn.neighbors import KNeighborsClassifier from sklearn.neural_network import MLPClassifier from sklearn.tree import DecisionTreeClassifier from sklearn.svm import LinearSVC from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, RandomForestClassifier #Now we want to list the names of the models we are going to use in our project names=['Gaussian NB', 'Logistic Regression', 'Kneighbors Classifier', 'Neural Net', 'Decision Tree', 'Linear SVM', 'AdaBoost', 'Gradient Boosting', 'Random Forest'] #This code now lists the models themselves models=[GaussianNB(), LogisticRegression(), KNeighborsClassifier(), MLPClassifier(), DecisionTreeClassifier(), LinearSVC(), AdaBoostClassifier(), GradientBoostingClassifier(), RandomForestClassifier()] #The are list of all columns in the dset all_coulmns =['Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked'] #usable columns are here columns=['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked'] #Some columns in our dset contains categorical values and they are here categorical_columns=['Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked'] #Lets think of encoding the two usable columns encoded_columns=['Sex','Embarked'] #We have some columns with continous variables continous_columns=['Age','Fare'] #Lets view the structure of our dset df=df[columns] df.tail(12) '''Now we want to make a single function to perfom the entire process of preprocessing''' '''The process will.. 1. fill in the missing values 2. Scale the values 3. 
Encode categorical values'''

class ColumnSelector(BaseEstimator, TransformerMixin):
    def __init__(self,columns):
        self.columns=columns

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        wanted_columns=X[self.columns]
        return wanted_columns

'''
num_pipeline=Pipeline([
    ('Column selector', ColumnSelector(continuous_columns)),
    ('imputer', Imputer(strategy='mean')),
    ('standard scaler', StandardScaler())
])

transformed_numerical_attributes=num_pipeline.fit_transform(df)
'''

#Now we want to develop another function to encode all our string columns in the dataset
encoded_sex=LabelEncoder().fit_transform(df['Sex'])
encoded_pclass=LabelEncoder().fit_transform(df['Pclass'])

embarked=df['Embarked']
embarked_transformed=[]

# cast every value to string so NaN entries become the literal 'nan' and can be label-encoded
for value in embarked:
    value=str(value)
    embarked_transformed.append(value)

encoded_embarked=LabelEncoder().fit_transform(embarked_transformed)

#Now let's generate a clean dataset with this single function
def clean(df):
    columns = ['Survived', 'Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
    categorical_columns = ['Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']
    encoded_columns = ['Sex', 'Embarked']
    continuous_columns = ['Age', 'Fare']

    num_pipeline = Pipeline([
        ('Column selector', ColumnSelector(continuous_columns)),
        ('imputer', Imputer(strategy='median')),
        ('standard scaler', StandardScaler())
    ])

    transformed_numerical_attributes = num_pipeline.fit_transform(df)

    encoded_sex = LabelEncoder().fit_transform(df['Sex'])
    encoded_pclass = LabelEncoder().fit_transform(df['Pclass'])

    embarked = df['Embarked']
    embarked_transformed = []

    for value in embarked:
        value = str(value)
        embarked_transformed.append(value)

    encoded_embarked = LabelEncoder().fit_transform(embarked_transformed)

    dataset = np.c_[transformed_numerical_attributes, encoded_sex, encoded_pclass, encoded_embarked, df['SibSp'], df['Parch']]
    return dataset

dataset = clean(df)
dataset.shape

# split features and labels together so the rows stay aligned
X_train, X_test, y_train, y_test = train_test_split(dataset, df['Survived'], test_size=0.3)

X_train.shape ###Output _____no_output_____ 
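###Markdown The `names` and `models` lists declared earlier are never actually evaluated in this notebook. The sketch below is an illustrative addition that scores each of them with 5-fold cross-validation on the cleaned `dataset`; it assumes the variables defined above are in scope. ###Code # quick cross-validated comparison of the classifiers declared earlier
from sklearn.model_selection import cross_val_score

for name, model in zip(names, models):
    scores = cross_val_score(model, dataset, df['Survived'], cv=5, scoring='accuracy')
    print('{}: {:.3f} (+/- {:.3f})'.format(name, scores.mean(), scores.std())) ###Output _____no_output_____ 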
ENPM690 Robot Learning Final Project.ipynb
###Markdown Training an Autonomous Cab to pick up and drop passengers using Q-LearningAuthor: Akshitha Pothamshetty UID: 116399326About: M.Eng Robotics 2020, University of Maryland, College Park About ProjectIn this project, I have used Reinforcement Learning to pick up and drop off passengers at the right locations. I have used Q-Learning to train the model. To keep the focus on applying Q-Learning, I have used existing OpenAI GYM environment, Taxi-v3. This task was introduced in [Dietterich2000] to illustrate some issues in hierarchical reinforcement learning. There are 4 locations (labeled by different letters) and our job is to pick up the passenger at one location and drop him off in another. We receive +20 points for a successful dropoff, and lose 1 point for every timestep it takes. There is also a 10 point penalty for illegal pick-up and drop-off actions.In this notebook, I will go through the project step by step. 1: Install and import dependenciesFollowing packages needs to be installed successfully for executing rest of the notebook:1. CMAKE2. Scipy3. Numpy4. Atari Gym ###Code !pip install cmake matplotlib scipy numpy 'gym[atari]' # Run only once. import gym # Import OpenAI GYM package from IPython.display import clear_output, Markdown, display # For visualization from time import sleep # For visualization import random # Randomly generating states import numpy as np # For Q-Table import matplotlib.pyplot as plt from scipy.signal import savgol_filter random.seed(42) # Setting random state for consistent results. env = gym.make("Taxi-v3").env # Using existing Taxi-V3 environment # Generate a random environment env.reset() env.render() # Properties of our environment print("Property: Action Space {}".format(env.action_space)) print("Property: State Space {}".format(env.observation_space)) ###Output +---------+ |R: | : :G| | : | : : | | : : : : | | | : | : | |Y| : |B: | +---------+ Property: Action Space Discrete(6) Property: State Space Discrete(500) ###Markdown 2: Explore our environmentOur environment consists of 6 actions:1. Go South2. Go North3. Go East4. Go West5. Pick Up Passenger6. Drop Off a PassengerOur environment consists of 500 possible states:* It is a 5x5 grid, with 4 pick up or drop off location, with one additional state of passenger being inside the taxi:* 5x5 x (4+1) x 4 = 500 states ###Code # Drop off and pick up locations are indexed from 0-3 for the four # different locations. # Let's initiate a forced state: # Taxi at: (3,1), Pick Up at R(0)(Blue) and Drop Off at B(3)(Pink) initialState = env.encode(3, 1, 0, 3) print "Out of 500 possible states, our unique state ID is:", initialState env.s = initialState # set the environment env.render() # Let's see the possible action space in this state print "Action: [Probability, NextState, Reward, GoalReached]" env.P[initialState] ###Output Out of 500 possible states, our unique state ID is: 323 +---------+ |R: | : :G| | : | : : | | : : : : | | | : | : | |Y| : |B: | +---------+ Action: [Probability, NextState, Reward, GoalReached] ###Markdown 3: Establish a baseline, using Brute Force ApproachWe will use a Brute force approach to establish a baseline. We will ask the taxi to reach the goal state with no intelligence at all. It will choose a random state to reach the goal. We will evaluate how the model performs from here. 
###Code # Create a visualizer function def print_frames(frames, episode=1, verbose=False): for i, frame in enumerate(frames): clear_output(wait=True) print(frame['frame']) if (verbose): print("\nEpisode Number {}".format(episode)) print("Timestep: {}".format(i + 1)) print("State: {}".format(frame['state'])) print("Action: {}".format(frame['action'])) print("Reward: {}".format(frame['reward'])) sleep(.1) def printmd(string): display(Markdown(string)) env.s = initialState # Set environment to initial state. epochs = 0 penalties, reward = 0, 0 frames = [] # for animation done = False while not done: action = env.action_space.sample() state, reward, done, info = env.step(action) if reward == -10: penalties += 1 # Put each rendered frame into dict for animation frames.append({ 'frame': env.render(mode='ansi'), 'state': state, 'action': action, 'reward': reward } ) epochs += 1 # Start visualization only if steps taken are less than 1000. Otherwise it takes too long. if epochs < 1000: print_frames(frames) print("\nSteps taken: {}".format(epochs)) print("Penalties incurred: {}".format(penalties)) ###Output +---------+ |R: | : :G| | : | : : | | : : : : | | | : | : | |Y| : |B: | +---------+ (Dropoff) Timestep: 508 State: 475 Action: 5 Reward: 20 Steps taken: 508 Penalties incurred: 163 ###Markdown 4: Train the model using Reinforcement LearningUsing a Q-Learning approach to train the model. Q-learning lets the agent use the environment's rewards to learn, over time, the best action to take in a given state. For this purpose, we will create a Q-table to store Q-values, that map to a state and corresponding action combination. A Q-value for a particular state-action combination is representative of the "quality" of an action taken from that state. Better Q-values imply better chances of getting greater rewards.For evaluation of the model, We will keep a track of how many timesteps and number of penalties incurred while training the model. ###Code %%time # initialize a q-table to store q-values for each state-action pair. q_table = np.zeros([env.observation_space.n, env.action_space.n]) # Initialize a q-table with zeros print"Training the agent .. .. .." # Typically takes around 3m 17seconds around # Hyperparameters alpha = 0.1 gamma = 0.6 epsilon = 1.0 min_epsilon = 0.1 max_epsilon = 1.00 # For plotting metrics all_epochs = [] all_penalties = [] all_epsilons = [] frames = [] # for animation rewards = [] # for analyzing training. 
for episode in range(1, 100001):
    state = env.reset()

    episode_rewards = []
    epochs, penalties, reward = 0, 0, 0
    done = False

    while not done:
        # Explore Action Space (compare against the decaying epsilon)
        if random.uniform(0, 1) < epsilon:
            action = env.action_space.sample()
        # Exploit learned Values
        else:
            action = np.argmax(q_table[state])

        # Get next state, reward, goal_status
        next_state, reward, done, info = env.step(action)

        # current q-value
        old_value = q_table[state, action]
        next_max = np.max(q_table[next_state])

        # Learning: update current q-value based on best chosen next step
        new_value = (1 - alpha) * old_value + alpha * (reward + gamma * next_max)
        q_table[state, action] = new_value

        if done:
            # decay epsilon so the agent gradually shifts from exploration to exploitation
            epsilon = min_epsilon + (max_epsilon - min_epsilon)*np.exp(-0.0001*episode)
            all_epsilons.append(epsilon)

        # check if penalties incurred
        if reward == -10:
            penalties += 1

        episode_rewards.append(reward)

        # update state
        state = next_state
        epochs += 1

        # Put each rendered frame into dict for animation
        frames.append({
            'frame': env.render(mode='ansi'),
            'state': state,
            'action': action,
            'reward': reward
            }
        )

    rewards.append(np.mean(episode_rewards))

    # render training progress
    if episode % 100 == 0:
        clear_output(wait=True)
        print("Training ongoing...\nSuccessfully completed {} Pick-Drop episodes.".format(episode))

print("Training finished.\n")

plt.figure(figsize=(16, 8), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(savgol_filter(rewards, 1001, 2))
plt.title("Smoothed training reward per episode")
plt.xlabel('Episode'); plt.ylabel('Total Reward'); ###Output _____no_output_____ ###Markdown As we can see, the rewards become more and more positive as training continues and start to converge once the model is trained. Judging by the graph, about 25,000 episodes seem enough to train the agent successfully. ###Code plt.figure(figsize=(16, 8), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(all_epsilons)
plt.title("Epsilon for episode")
plt.xlabel('Episode'); plt.ylabel('Epsilon'); ###Output _____no_output_____ ###Markdown Epsilon is the exploration factor. It shifts the model between exploration mode and exploitation mode. Exploration is finding new information about the environment, whereas exploitation uses existing information to maximize the reward. Initially, our driving agent knows pretty much nothing about the best set of driving directions for picking up and dropping off passengers. After a good amount of exploration, we want our agent to switch to exploitation mode. Eventually, all choices will be based on what is learned. In this graph, we can see how epsilon decreases over training. Lower values of epsilon switch the agent into exploitation mode. This is a desired trend. 5: Analyze what the model has learnedWe want to see if the model has learned to take optimum steps from any state to reach a goal position. ###Code import pprint

# Drop off and pick up locations are indexed from 0-3 for the four
# different locations. 
# Let's initiate first forced state:
# Taxi at: (2,0), Passenger inside the taxi and Drop Off at R(0)(Pink)

statesList = [[2,0,4,0], [2,0,4,2], [2,3,4,3], [2,3,4,1]]
Actions = ["South", "North", "East", "West", "PickUp", "DropOff"]

exampleCount = 0
for state in statesList:
    exampleCount += 1
    exampleState = env.encode(*state)
    print"Example {}: Passenger inside taxi - To be dropped at location: {}".format(exampleCount, state[3])
    env.s = exampleState # set the environment
    env.render()

    # Let's see the possible action space in this state
    env.P[exampleState]

    stateAction = dict(zip(Actions, q_table[exampleState]))
    max_key = max(stateAction, key=stateAction.get)
    print "Best Action:", max_key, "\nQ-Value:",round(stateAction[max_key], 2)
    print "\n--------------------------------------------------------\n" ###Output Example 1: Passenger inside taxi - To be dropped at location: 0
+---------+
|R: | : :G|
| : | : : |
|_: : : : |
| | : | : |
|Y| : |B: |
+---------+
  (Dropoff)
Best Action: North 
Q-Value: 5.6

--------------------------------------------------------

Example 2: Passenger inside taxi - To be dropped at location: 2
+---------+
|R: | : :G|
| : | : : |
|_: : : : |
| | : | : |
|Y| : |B: |
+---------+
  (Dropoff)
Best Action: South 
Q-Value: 5.6

--------------------------------------------------------

Example 3: Passenger inside taxi - To be dropped at location: 3
+---------+
|R: | : :G|
| : | : : |
| : : :_: |
| | : | : |
|Y| : |B: |
+---------+
  (Dropoff)
Best Action: South 
Q-Value: 5.6

--------------------------------------------------------

Example 4: Passenger inside taxi - To be dropped at location: 1
+---------+
|R: | : :G|
| : | : : |
| : : :_: |
| | : | : |
|Y| : |B: |
+---------+
  (Dropoff)
Best Action: North 
Q-Value: 2.36

-------------------------------------------------------- ###Markdown As we can see, after training has been completed the cab agent is able to predict the best step towards the destination accurately.

1. In example 1, the cab with the passenger is at (2,0). The drop-off destination in this case is R(0). The most sensible step is to move north, and the model predicts this accurately:

| Best Action | Q-Value |
| --- | --- |
| North | 5.6 |

2. In example 3, the cab with the passenger is at (2,3). The drop-off destination in this case is B(3). The most sensible step is to move south, and the model predicts this accurately:

| Best Action | Q-Value |
| --- | --- |
| South | 5.6 |

3. Similarly, in example 4, the cab is at (2,3) with the passenger inside. The drop-off destination in this case is G(1). Both North and East have the same q-value here, 2.36. Thus, the model hasn't overfit and identifies both correct paths successfully. 6: Evaluating agent's performance after training.Now we want to test our model on finding the best routes between checkpoints and analyze the penalties it incurs, if any! ###Code total_epochs, total_penalties = 0, 0
episodes = 100 # Give 100 tests to the agent.

# frames = [] # for animation
testRewards = [] # Analyzing performance

sleep(5)
for episode in range(episodes):
    state = env.reset() # Reset the environment to a random new state
    epochs, penalties, reward = 0, 0, 0

    done = False # Episode completed? 
frames = [] episodeRewards = [] while not done: # While not dropped at correct destination action = np.argmax(q_table[state]) # Choose best action for a given state state, reward, done, info = env.step(action) # Get reward, new state and drop status if reward == -10: # Increment penalties if occurred penalties += 1 epochs += 1 # Keep track of timesteps required to successfully reach the destination. # Put each rendered frame into dict for animation frames.append({ 'frame': env.render(mode='ansi'), 'state': state, 'action': action, 'reward': reward } ) episodeRewards.append(reward) total_penalties += penalties total_epochs += epochs testRewards.append(np.mean(episodeRewards)) # Visualize the current episode. print_frames(frames, episode+1, True) sleep(.50) print"\n----------------- Test Results --------------------- \n" print("Results after {} episodes:".format(episodes)) print("Average timesteps per episode: {}".format(total_epochs / episodes)) print("Average penalties per episode: {}".format(total_penalties / episodes)) ###Output +---------+ |R: | : :G| | : | : : | | : : : : | | | : | : | |Y| : |B: | +---------+ (Dropoff) Episode Number 100 Timestep: 11 State: 85 Action: 5 Reward: 20 ----------------- Test Results --------------------- Results after 100 episodes: Average timesteps per episode: 13 Average penalties per episode: 0
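###Markdown To reuse the learned policy without retraining, the Q-table can be persisted to disk and reloaded later. The cell below is an illustrative addition; the filename is arbitrary. ###Code # save the learned Q-table and restore it for greedy action selection
np.save("taxi_q_table.npy", q_table)

restored_q_table = np.load("taxi_q_table.npy")
state = env.reset()
greedy_action = np.argmax(restored_q_table[state])
print("Greedy action for state {}: {}".format(state, greedy_action)) ###Output _____no_output_____ 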
41-problem-solution_introduction_to_machine_learning.ipynb
###Markdown Using `DecisionTreeClassifier` fit a decision tree to the data- What are the attributes/features?- What is the target feature?- Interpret the tree ###Code dtree = tree.DecisionTreeClassifier().fit( iris[['petalWidth','petalLength','sepalWidth','sepalLength']], iris['species']) print(export_text(dtree, feature_names=['petalWidth','petalLength','sepalWidth','sepalLength'])) ###Output |--- petalLength <= 2.45 | |--- class: setosa |--- petalLength > 2.45 | |--- petalWidth <= 1.75 | | |--- petalLength <= 4.95 | | | |--- petalWidth <= 1.65 | | | | |--- class: versicolor | | | |--- petalWidth > 1.65 | | | | |--- class: virginica | | |--- petalLength > 4.95 | | | |--- petalWidth <= 1.55 | | | | |--- class: virginica | | | |--- petalWidth > 1.55 | | | | |--- petalLength <= 5.45 | | | | | |--- class: versicolor | | | | |--- petalLength > 5.45 | | | | | |--- class: virginica | |--- petalWidth > 1.75 | | |--- petalLength <= 4.85 | | | |--- sepalWidth <= 3.10 | | | | |--- class: virginica | | | |--- sepalWidth > 3.10 | | | | |--- class: versicolor | | |--- petalLength > 4.85 | | | |--- class: virginica
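###Markdown Reading the printed tree: the features are the four measurements (`petalWidth`, `petalLength`, `sepalWidth`, `sepalLength`) and the target is `species`. The first split, `petalLength <= 2.45`, isolates setosa completely, and `petalWidth <= 1.75` then largely separates versicolor from virginica, so the petal measurements do most of the work. The sketch below is an illustrative addition (it assumes the fitted `dtree` and the `iris` frame from the cell above) that quantifies this with feature importances and the training accuracy. ###Code # feature importances and training accuracy for the fitted tree
feature_names = ['petalWidth','petalLength','sepalWidth','sepalLength']
for name, importance in zip(feature_names, dtree.feature_importances_):
    print(name, round(importance, 3))

print('training accuracy:', dtree.score(iris[feature_names], iris['species'])) ###Output _____no_output_____ 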
tomef/metrics/clustering.py.ipynb
###Markdown Clustering ← ↑`Description`--- Setup--- ###Code from __init__ import init_vars init_vars(vars(), ('info', {})) from sklearn.metrics.cluster import normalized_mutual_info_score, adjusted_rand_score import data import config from base import nbprint from util import ProgressIterator from widgetbase import nbbox from metrics.widgets import h_mat_picker from metrics.helper import load_ground_truth_classes, load_class_array_from_h_mat if RUN_SCRIPT: h_mat_picker(info) ###Output _____no_output_____ ###Markdown --- NMI---`Definition` ###Code def nmi(labels_true, labels_pred): return normalized_mutual_info_score(labels_true, labels_pred) ###Output _____no_output_____ ###Markdown --- ARI---`Definition` ###Code def ari(labels_true, labels_pred): return adjusted_rand_score(labels_true, labels_pred) ###Output _____no_output_____ ###Markdown --- Show all--- ###Code if RUN_SCRIPT: nbbox(mini=True) labels_true = load_ground_truth_classes(info) labels_pred = load_class_array_from_h_mat(info) nbprint('NMI score: {}'.format(nmi(labels_true, labels_pred))) nbprint('ARI score: {}'.format(ari(labels_true, labels_pred))) ###Output _____no_output_____
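###Markdown As a quick illustration of what these wrappers return: both scores depend only on the grouping, not on the particular label ids, so relabeling a partition leaves them unchanged. The toy check below is an illustrative addition and is independent of the pipeline above. ###Code # NMI and ARI are invariant to permutations of the cluster ids
labels_a = [0, 0, 1, 1, 2, 2]
labels_b = [2, 2, 0, 0, 1, 1]  # the same partition with permuted ids

print(nmi(labels_a, labels_b))  # 1.0
print(ari(labels_a, labels_b))  # 1.0 ###Output _____no_output_____ 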
Chapter07/05 - Enabling ML explainability with SageMaker Clarify.ipynb
###Markdown Enabling ML explainability with SageMaker Clarify This notebook contains the code to help readers work through one of the recipes of the book [Machine Learning with Amazon SageMaker Cookbook: 80 proven recipes for data scientists and developers to perform ML experiments and deployments](https://www.amazon.com/Machine-Learning-Amazon-SageMaker-Cookbook/dp/1800567030) How to do it... ###Code %store -r s3_bucket_name %store -r prefix %store -r training_data_path %store -r test_data_path %store -r model_name import sagemaker session = sagemaker.Session() region = session.boto_region_name role = sagemaker.get_execution_role() s3_training_data_path = training_data_path s3_test_data_path = test_data_path s3_output_path = f"s3://{s3_bucket_name}/{prefix}/output" !aws s3 cp {s3_training_data_path} tmp/training_data.csv !aws s3 cp {s3_test_data_path} tmp/test_data.csv import pandas as pd training_data = pd.read_csv("tmp/training_data.csv") test_data = pd.read_csv("tmp/test_data.csv") target = test_data['approved'] features = test_data.drop(columns=['approved']) features.to_csv('tmp/test_features.csv', index=False, header=False) features base = f"s3://{s3_bucket_name}/{prefix}/input" s3_feature_path = f"{base}/test_features.csv" !aws s3 cp tmp/test_features.csv {s3_feature_path} from sagemaker.clarify import ModelConfig model_config = ModelConfig( model_name=model_name, instance_type='ml.c5.xlarge', instance_count=1, accept_type='text/csv' ) from sagemaker.clarify import SageMakerClarifyProcessor processor = SageMakerClarifyProcessor( role=role, instance_count=1, instance_type='ml.m5.large', sagemaker_session=session ) baseline = features.iloc[0:200].values.tolist() baseline from sagemaker.clarify import SHAPConfig shap_config = SHAPConfig( baseline=baseline, num_samples=50, agg_method='median' ) headers = training_data.columns.to_list() from sagemaker.clarify import DataConfig data_config = DataConfig( s3_data_input_path=s3_training_data_path, s3_output_path=s3_output_path, label='approved', headers=headers, dataset_type='text/csv' ) %%time processor.run_explainability( data_config=data_config, model_config=model_config, explainability_config=shap_config ) output = processor.latest_job.outputs[0] output_destination = output.destination output_destination !aws s3 cp {output_destination}/ tmp/ --recursive !ls -lahF tmp/ !cat tmp/analysis.json ###Output _____no_output_____
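###Markdown Beyond `cat`, the downloaded report can be inspected programmatically, since `analysis.json` is ordinary JSON. The sketch below is an illustrative addition that only prints the top-level keys, because the exact layout of the report depends on the Clarify version in use. ###Code # load the downloaded Clarify report and list its top-level sections
import json

with open('tmp/analysis.json') as f:
    analysis = json.load(f)

print(list(analysis.keys())) ###Output _____no_output_____ 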
natural-language/text-classification.ipynb
###Markdown Contextual Text ClassificationRNN based sentiment analysis on dataset of plain-text IMDB movie reviews. ###Code import os import re import string import numpy as np import tensorflow as tf import tensorflow_datasets as tfds import matplotlib.pyplot as plt tf.__version__ ###Output _____no_output_____ ###Markdown Prepare data ###Code os.listdir("/database/tensorflow-datasets/") # Load data dataset, info = tfds.load( name="imdb_reviews", with_info=True, as_supervised=True, data_dir="/database/tensorflow-datasets/" ) train_dataset, test_dataset = dataset["train"], dataset["test"] for review, label in train_dataset.take(1).as_numpy_iterator(): print("Review;", review, "Label:", label) for review, label in test_dataset.take(1).as_numpy_iterator(): print("Review;", review, "Label:", label) type(train_dataset), type(test_dataset) # Create an optimized dataset BUFFER_SIZE = 10000 BATCH_SIZE = 64 train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE) test_dataset = test_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE) ###Output _____no_output_____ ###Markdown Create text encoder ###Code VOCAB_SIZE = 5000 MAX_SEQUENCE_LENGTH = 512 EMBEDDING_DIM = 100 # Custom text processor def text_processor(input_data): lowercase_text = tf.strings.lower(input_data) stripped_text = tf.strings.regex_replace(lowercase_text, "<br />", " ") return tf.strings.regex_replace( stripped_text, "[%s]" % re.escape(string.punctuation), "" ) # Encoder layer encoder_layer = tf.keras.layers.TextVectorization( max_tokens=VOCAB_SIZE, standardize=text_processor, split="whitespace", output_mode="int", output_sequence_length=MAX_SEQUENCE_LENGTH ) # Learn the encoder layer encoder_layer.adapt(train_dataset.map(lambda text, label: text)) vocabulary = np.array(encoder_layer.get_vocabulary()) print("Top 20 vocabulary:", vocabulary[:20]) # Most frequent print("Bottom 20 vocabulary:", vocabulary[-20:]) # Least frequent ###Output Bottom 20 vocabulary: ['acid' '35' '1971' 'wouldbe' 'voiced' 'victory' 'uplifting' 'unseen' 'unfair' 'tooth' 'technicolor' 'survivor' 'stunned' 'sounding' 'sid' 'screens' 'rolled' 'resulting' 'reflection' 'ramones'] ###Markdown Implement model architecture ###Code def get_bilstm_model(): model = tf.keras.Sequential() model.add(encoder_layer) model.add(tf.keras.layers.Embedding(VOCAB_SIZE, EMBEDDING_DIM, mask_zero=True)) model.add(tf.keras.layers.BatchNormalization()) model.add(tf.keras.layers.Dropout(0.3)) model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True))) model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128))) model.add(tf.keras.layers.Dense(64, activation="relu")) model.add(tf.keras.layers.Dropout(0.5)) model.add(tf.keras.layers.Dense(1)) return model ###Output _____no_output_____ ###Markdown Learn and evaluate model ###Code # Get model model = get_bilstm_model() # Compile model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer="adam", metrics=["accuracy"]) # Learn history = model.fit( train_dataset, epochs=10, validation_data=test_dataset, callbacks=[tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2)] ) # Evaluate model test_loss, test_acc = model.evaluate(test_dataset) print("Test Loss:", test_loss, "Test Accuracy:", test_acc) ###Output 391/391 [==============================] - 20s 50ms/step - loss: 0.3759 - accuracy: 0.8816 Test Loss: 0.3758942484855652 Test Accuracy: 0.8815600275993347 ###Markdown Plot model performance ###Code # Model 
accuracy plt.plot(history.history["accuracy"]) plt.plot(history.history["val_accuracy"]) plt.title("model accuracy") plt.ylabel("accuracy") plt.xlabel("epoch") plt.legend(["train", "val"], loc="upper left") plt.show() # Model loss plt.plot(history.history["loss"]) plt.plot(history.history["val_loss"]) plt.title("model loss") plt.ylabel("loss") plt.xlabel("epoch") plt.legend(["train", "val"], loc="upper left") plt.show() ###Output _____no_output_____ ###Markdown Generate predictions ###Code sample_text = ("The movie was not good. The animation and the graphics were terrible. I would not recommend this movie.") predictions = model.predict(np.array([sample_text])) label_predicted = "Positive" if predictions[0][0] >= 0.0 else "Negative" print("Review:", sample_text) print("Predicted Label:", label_predicted) ###Output Review: The movie was not good. The animation and the graphics were terrible. I would not recommend this movie. Predicted Label: Negative
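###Markdown Because the final `Dense(1)` layer outputs raw logits (the loss was configured with `from_logits=True`), applying a sigmoid turns predictions into probabilities. The short sketch below is an illustrative addition; the two sample reviews are made up. ###Code # convert raw logits into probabilities with a sigmoid
samples = np.array([
    "An absolute masterpiece, I loved every minute of it.",
    "Dull, predictable, and far too long."
])

logits = model.predict(samples)
probabilities = tf.sigmoid(logits).numpy()

for text, p in zip(samples, probabilities):
    print("{:.3f} -> {}".format(p[0], text)) ###Output _____no_output_____ 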
munge/preprocessing_lda.ipynb
###Markdown LDA being a probabilistic graphical model (i.e. dealing with probabilities) only requires raw counts, so we use a CountVectorizer. ###Code from sklearn.feature_extraction.text import CountVectorizer def apply_count_vectorizer(df, series, word_appearal_threshold): vectorizer = CountVectorizer(max_df=word_appearal_threshold, min_df=2, # words that appear in < x lines will be discarded token_pattern='\w+|\$[\d\.]+|\S+') tf = vectorizer.fit_transform(df[series]).toarray() logging.info(f"applying vectorizer : {vectorizer.get_params()}. \n Matrix shape: {tf.shape}") logging.info(f"discard words appearing in more than {word_appearal_threshold}% of cases") # tf_feature_names tells us what word each column in the matric represents tf_feature_names = vectorizer.get_feature_names() return tf, tf_feature_names tf, tf_feature_names = apply_count_vectorizer(df=df, series = 'abstract', word_appearal_threshold=.9) ###Output 2020-26-03: INFO: applying vectorizer : {'analyzer': 'word', 'binary': False, 'decode_error': 'strict', 'dtype': <class 'numpy.int64'>, 'encoding': 'utf-8', 'input': 'content', 'lowercase': True, 'max_df': 0.9, 'max_features': None, 'min_df': 2, 'ngram_range': (1, 1), 'preprocessor': None, 'stop_words': None, 'strip_accents': None, 'token_pattern': '\\w+|\\$[\\d\\.]+|\\S+', 'tokenizer': None, 'vocabulary': None}. Matrix shape: (803, 6465) 2020-26-03: INFO: discard words appearing in more than 0.9% of cases ###Markdown LDA with Sklearn ###Code from sklearn.decomposition import LatentDirichletAllocation number_of_topics = 10 model = LatentDirichletAllocation(n_components=number_of_topics, random_state=0) model.fit(tf) def display_topics(model, feature_names, no_top_words): topic_dict = {} for topic_idx, topic in enumerate(model.components_): topic_dict["Topic %d words" % (topic_idx)]= ['{}'.format(feature_names[i]) for i in topic.argsort()[:-no_top_words - 1:-1]] topic_dict["Topic %d weights" % (topic_idx)]= ['{:.1f}'.format(topic[i]) for i in topic.argsort()[:-no_top_words - 1:-1]] return pd.DataFrame(topic_dict) no_top_words = 10 display_topics(model, tf_feature_names, no_top_words) ###Output _____no_output_____ ###Markdown LDA with Gensimhttps://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/ ###Code # Gensim import gensim import gensim.corpora as corpora from gensim.utils import simple_preprocess from gensim.models import CoherenceModel # spacy for lemmatization import spacy # Plotting tools import pyLDAvis import pyLDAvis.gensim def preprocess_news(df, series): corpus=[] for item in df[series].dropna()[:5000]: words=[w for w in word_tokenize(item)] corpus.append(words) return corpus texts = preprocess_news(df, series='abstract') # Create Dictionary id2word = corpora.Dictionary(texts) # Term Document Frequency - TDF corpus = [id2word.doc2bow(text) for text in texts] print(f"produced corpus shown above is a mapping of (word_id, word_frequency: {corpus[:1]})") # Build LDA model lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=id2word, num_topics=10, random_state=100, update_every=1, chunksize=100, passes=10, # training epochs alpha='auto', per_word_topics=True) # Print the Keyword in the 10 topics import pprint pprint.pprint(lda_model.print_topics()) doc_lda = lda_model[corpus] ###Output 2020-26-03: INFO: topic #0 (0.159): 0.143*"cov" + 0.136*"sars" + 0.023*"early" + 0.015*"protein" + 0.014*"binding" + 0.013*"severe" + 0.012*"associated" + 0.011*"spike" + 0.011*"acute" + 0.011*"syndrome" 2020-26-03: INFO: topic #1 (0.275): 0.085*"data" + 
0.026*"using" + 0.025*"virus" + 0.013*"specific" + 0.013*"reveals" + 0.012*"sequencing" + 0.012*"human" + 0.011*"identification" + 0.011*"influenza" + 0.010*"genome" 2020-26-03: INFO: topic #2 (0.120): 0.100*"infect" + 0.091*"diagnosis" + 0.088*"application" + 0.087*"optimization" + 0.033*"rna" + 0.008*"contact" + 0.008*"highly" + 0.007*"epidem" + 0.007*"different" + 0.007*"detect" 2020-26-03: INFO: topic #3 (0.237): 0.053*"period" + 0.052*"analysis" + 0.050*"incubation" + 0.050*"publicly" + 0.049*"statistical" + 0.049*"available" + 0.049*"truncation" + 0.049*"right" + 0.039*"novel" + 0.009*"immune" 2020-26-03: INFO: topic #4 (0.196): 0.092*"control" + 0.029*"cell" + 0.028*"unknown" + 0.021*"cells" + 0.015*"studi" + 0.015*"single" + 0.014*"expression" + 0.014*"receptor" + 0.011*"molecular" + 0.011*"non" 2020-26-03: INFO: topic #5 (0.544): 0.074*"covid" + 0.069*"coronavirus" + 0.065*"epidemiological" + 0.047*"transmission" + 0.043*"model" + 0.038*"novel" + 0.036*"case" + 0.032*"pcr" + 0.031*"identifying" + 0.031*"characterizing" 2020-26-03: INFO: topic #6 (0.148): 0.040*"virus" + 0.019*"infectious" + 0.018*"host" + 0.016*"author" + 0.015*"zika" + 0.013*"protein" + 0.010*"infection" + 0.009*"activity" + 0.008*"infect" + 0.007*"may" 2020-26-03: INFO: topic #7 (0.181): 0.092*"infections" + 0.089*"strategy" + 0.025*"pneumonia" + 0.019*"prediction" + 0.016*"coronavirus" + 0.013*"evolution" + 0.010*"structure" + 0.009*"health" + 0.008*"diagnostic" + 0.008*"review" 2020-26-03: INFO: topic #8 (0.247): 0.083*"characteristics" + 0.033*"patients" + 0.031*"clinical" + 0.024*"viral" + 0.020*"covid" + 0.013*"dynamics" + 0.012*"study" + 0.012*"disease" + 0.011*"bat" + 0.011*"rna" 2020-26-03: INFO: topic #9 (0.108): 0.017*"gene" + 0.013*"proteins" + 0.013*"social" + 0.013*"epidemics" + 0.012*"evolutionary" + 0.012*"pathogen" + 0.011*"science" + 0.010*"monitoring" + 0.009*"hospitalized" + 0.009*"quantifying" ###Markdown Compute Model Perplexity and Coherence Score ###Code # Compute Perplexity: measure of how good the model is. lower the better. print('\nPerplexity: ', lda_model.log_perplexity(corpus)) # Compute Coherence Score coherence_model_lda = CoherenceModel(model=lda_model, texts=texts, dictionary=id2word, coherence='c_v') coherence_lda = coherence_model_lda.get_coherence() print('\nCoherence Score: ', coherence_lda) ###Output 2020-26-03: INFO: -8.803 per-word bound, 446.5 perplexity estimate based on a held-out corpus of 803 documents with 131769 words 2020-26-03: INFO: using ParallelWordOccurrenceAccumulator(processes=11, batch_size=64) to estimate probabilities from sliding windows ###Markdown Hyperparameter TuningWe have a baseline coherence score for the default LDA model, let’s perform a series of sensitivity tests to help determine the following model hyperparameters: - Number of Topics (K) - Dirichlet hyperparameter alpha: Document-Topic Density - Dirichlet hyperparameter beta: Word-Topic Density We’ll perform these tests in sequence, one parameter at a time by keeping others constant and run them over the two different validation corpus sets. 
We’ll use C_v as our choice of metric for performance comparison ###Code def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=3): """ Compute c_v coherence for various number of topics Parameters: ---------- dictionary : Gensim dictionary corpus : Gensim corpus texts : List of input texts limit : Max num of topics Returns: ------- model_list : List of LDA topic models coherence_values : Coherence values corresponding to the LDA model with respective number of topics """ coherence_values = [] model_list = [] for num_topics in range(start, limit, step): model = gensim.models.ldamodel.LdaModel(corpus=corpus, num_topics=num_topics, id2word=id2word) model_list.append(model) coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v') coherence_values.append(coherencemodel.get_coherence()) return model_list, coherence_values model_list, coherence_values = compute_coherence_values(dictionary=id2word, corpus=corpus, texts=texts, start=2, limit=40, step=6) # Show graph limit=40; start=2; step=6; x = range(start, limit, step) plt.plot(x, coherence_values) plt.xlabel("Num Topics") plt.ylabel("Coherence score") plt.legend(("coherence_values"), loc='best') plt.show() # Print the coherence scores for m, cv in zip(x, coherence_values): print("Num Topics =", m, " has Coherence Value of", round(cv, 4)) # Select the model and print the topics import pprint optimal_model = model_list[5] model_topics = optimal_model.show_topics(formatted=False) pprint.pprint(optimal_model.print_topics(num_words=10)) ###Output 2020-26-03: INFO: topic #8 (0.031): 0.007*"patients" + 0.006*"covid" + 0.005*"exon" + 0.004*"severe" + 0.004*"study" + 0.004*"virus" + 0.003*"model" + 0.003*"disease" + 0.003*"viral" + 0.003*"infection" 2020-26-03: INFO: topic #14 (0.031): 0.006*"virus" + 0.005*"data" + 0.005*"cases" + 0.005*"disease" + 0.004*"viruses" + 0.004*"viral" + 0.004*"preprint" + 0.004*"rna" + 0.004*"model" + 0.004*"protein" 2020-26-03: INFO: topic #31 (0.031): 0.009*"protein" + 0.008*"rna" + 0.006*"virus" + 0.005*"cells" + 0.004*"infection" + 0.004*"sequence" + 0.004*"preprint" + 0.004*"cases" + 0.004*"host" + 0.003*"proteins" 2020-26-03: INFO: topic #12 (0.031): 0.007*"virus" + 0.006*"viral" + 0.006*"rna" + 0.004*"viruses" + 0.004*"transmission" + 0.004*"protein" + 0.004*"using" + 0.003*"species" + 0.003*"epidemic" + 0.003*"genome" 2020-26-03: INFO: topic #10 (0.031): 0.006*"covid" + 0.006*"transmission" + 0.005*"patients" + 0.005*"cases" + 0.005*"data" + 0.005*"clinical" + 0.004*"human" + 0.004*"preprint" + 0.004*"cov" + 0.004*"viral" 2020-26-03: INFO: topic #25 (0.031): 0.011*"cov" + 0.010*"sars" + 0.010*"virus" + 0.006*"host" + 0.005*"infection" + 0.005*"patients" + 0.005*"coronavirus" + 0.004*"ncov" + 0.004*"rna" + 0.004*"novel" 2020-26-03: INFO: topic #19 (0.031): 0.007*"virus" + 0.005*"patients" + 0.005*"pcr" + 0.005*"viral" + 0.004*"rna" + 0.004*"results" + 0.004*"one" + 0.004*"preprint" + 0.003*"two" + 0.003*"data" 2020-26-03: INFO: topic #20 (0.031): 0.006*"infection" + 0.005*"cells" + 0.004*"viral" + 0.004*"human" + 0.004*"preprint" + 0.004*"using" + 0.004*"time" + 0.003*"virus" + 0.003*"patients" + 0.003*"proteins" 2020-26-03: INFO: topic #7 (0.031): 0.009*"rna" + 0.005*"binding" + 0.005*"transmission" + 0.004*"virus" + 0.004*"protein" + 0.004*"viral" + 0.004*"cov" + 0.004*"two" + 0.003*"preprint" + 0.003*"available" 2020-26-03: INFO: topic #27 (0.031): 0.006*"virus" + 0.006*"model" + 0.006*"data" + 0.006*"patients" + 0.005*"disease" + 0.005*"china" + 
0.005*"transmission" + 0.004*"preprint" + 0.004*"epidemic" + 0.004*"infection" 2020-26-03: INFO: topic #15 (0.031): 0.009*"patients" + 0.005*"cases" + 0.005*"transmission" + 0.005*"disease" + 0.005*"data" + 0.004*"infection" + 0.004*"model" + 0.004*"covid" + 0.004*"cells" + 0.004*"preprint" 2020-26-03: INFO: topic #11 (0.031): 0.007*"cases" + 0.006*"transmission" + 0.006*"cov" + 0.006*"sars" + 0.006*"coronavirus" + 0.005*"model" + 0.005*"data" + 0.004*"covid" + 0.004*"disease" + 0.004*"patients" 2020-26-03: INFO: topic #5 (0.031): 0.009*"ncov" + 0.005*"cov" + 0.005*"human" + 0.005*"sars" + 0.005*"coronavirus" + 0.004*"viral" + 0.004*"cases" + 0.004*"patients" + 0.004*"novel" + 0.004*"identified" 2020-26-03: INFO: topic #23 (0.031): 0.015*"sars" + 0.015*"cov" + 0.009*"virus" + 0.009*"protein" + 0.006*"viral" + 0.005*"patients" + 0.005*"infection" + 0.005*"binding" + 0.005*"cells" + 0.005*"human" 2020-26-03: INFO: topic #13 (0.031): 0.008*"cases" + 0.007*"wuhan" + 0.007*"data" + 0.006*"china" + 0.005*"number" + 0.004*"virus" + 0.004*"preprint" + 0.004*"model" + 0.004*"outbreak" + 0.004*"time" 2020-26-03: INFO: topic #1 (0.031): 0.008*"cov" + 0.007*"rna" + 0.007*"sars" + 0.006*"patients" + 0.006*"novel" + 0.005*"human" + 0.005*"virus" + 0.004*"viral" + 0.004*"proteins" + 0.004*"severe" 2020-26-03: INFO: topic #6 (0.031): 0.010*"patients" + 0.006*"covid" + 0.005*"cases" + 0.005*"epidemic" + 0.003*"using" + 0.003*"virus" + 0.003*"study" + 0.003*"preprint" + 0.003*"two" + 0.003*"china" 2020-26-03: INFO: topic #26 (0.031): 0.006*"virus" + 0.006*"cell" + 0.006*"preprint" + 0.005*"viral" + 0.005*"infection" + 0.005*"sars" + 0.004*"study" + 0.004*"infected" + 0.004*"human" + 0.004*"cov" 2020-26-03: INFO: topic #24 (0.031): 0.006*"cases" + 0.005*"cov" + 0.005*"patients" + 0.004*"sars" + 0.004*"human" + 0.004*"model" + 0.004*"preprint" + 0.004*"covid" + 0.004*"data" + 0.004*"disease" 2020-26-03: INFO: topic #4 (0.031): 0.005*"transmission" + 0.005*"human" + 0.004*"disease" + 0.004*"virus" + 0.004*"infection" + 0.004*"protein" + 0.004*"model" + 0.004*"using" + 0.003*"based" + 0.003*"preprint" ###Markdown Finding the dominant topic in each sentenceOne of the practical application of topic modeling is to determine what topic a given document is about.To find that, we find the topic number that has the highest percentage contribution in that document.The format_topics_sentences() function below nicely aggregates this information in a presentable table. 
###Code def format_topics_sentences(ldamodel=lda_model, corpus=corpus, texts=df['title']): # Init output sent_topics_df = pd.DataFrame() # Get main topic in each document for i, row in enumerate(ldamodel[corpus]): row = sorted(row, key=lambda x: (x[1]), reverse=True) # Get the Dominant topic, Perc Contribution and Keywords for each document for j, (topic_num, prop_topic) in enumerate(row): if j == 0: # => dominant topic wp = ldamodel.show_topic(topic_num) topic_keywords = ", ".join([word for word, prop in wp]) sent_topics_df = sent_topics_df.append(pd.Series([int(topic_num), round(prop_topic,4), topic_keywords]), ignore_index=True) else: break sent_topics_df.columns = ['Dominant_Topic', 'Perc_Contribution', 'Topic_Keywords'] # Add original text to the end of the output contents = pd.Series(texts) sent_topics_df = pd.concat([sent_topics_df, contents], axis=1) return(sent_topics_df) df_topic_sents_keywords = format_topics_sentences(ldamodel=optimal_model, corpus=corpus, texts=df['title']) # Format df_dominant_topic = df_topic_sents_keywords.reset_index() df_dominant_topic.columns = ['Document_No', 'Dominant_Topic', 'Topic_Perc_Contrib', 'Keywords', 'Text'] # Show df_dominant_topic.head(10) ###Output _____no_output_____ ###Markdown Topic distribution across documents ###Code # Number of Documents for Each Topic topic_counts = df_topic_sents_keywords['Dominant_Topic'].value_counts() # Percentage of Documents for Each Topic topic_contribution = round(topic_counts/topic_counts.sum(), 4) # Topic Number and Keywords topic_num_keywords = df_topic_sents_keywords[['Dominant_Topic', 'Topic_Keywords']] # Concatenate Column wise df_dominant_topics = pd.concat([topic_num_keywords, topic_counts, topic_contribution], axis=1) # Change Column names df_dominant_topics.columns = ['Dominant_Topic', 'Topic_Keywords', 'Num_Documents', 'Perc_Documents'] # Show df_dominant_topics ###Output _____no_output_____
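###Markdown The pyLDAvis package was imported at the top of the Gensim section but never used. To close the loop, here is a minimal sketch of how the tuned model could be inspected interactively; it assumes the `optimal_model`, `corpus` and `id2word` objects built above, and uses the `pyLDAvis.gensim.prepare` entry point matching the `import pyLDAvis.gensim` above (newer pyLDAvis releases renamed this module to `pyLDAvis.gensim_models`). ###Code # hedged sketch: interactive topic-model visualization with pyLDAvis
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(optimal_model, corpus, id2word)
vis ###Output _____no_output_____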
assignment/Assignment 4 Week 4.ipynb
###Markdown Exercise 4.1 ###Code plt.figure(figsize=(8,8))
x = np.array([[0,1], [0,3], [2,0]])
y = np.array([0, 0, 1]) # 0 is class 1 and 1 is class 2
plt.scatter(x[:,0], x[:,1], marker="x")
plt.plot([2,0], [0,1])
plt.show() ###Output _____no_output_____ ###Markdown a) The classification boundary is the perpendicular bisector of the line segment between (0,1) and (2,0). There are 2 support vectors. ###Code x = np.array([[0,-1], [0,3], [2,0]])
y = np.array([0, 0, 1])
plt.scatter(x[:,0], x[:,1], marker="x")
plt.plot([2,0], [0,0])
plt.show() ###Output _____no_output_____ ###Markdown b) The classification boundary now becomes the vertical line through (1,0). All three points become support vectors.

Exercise 4.2 ###Code from sklearn.preprocessing import minmax_scale
from sklearn.preprocessing import maxabs_scale
from sklearn.preprocessing import normalize

help(pr.svc)

# Consider (0,0) for one class, (1,1) and (2,0) for the other class,
# and (1,0) as a test point, and see
# how the classification of the last point changes with different scalings of the first feature.
plt.figure(figsize=(16,8))
plt.subplot(1,2,1)
x = np.array([[1,1], [2,0], [0,0], [1,0]])
y = np.array([0, 0, 1, 2])
plt.scatter(x[:,0], x[:,1], marker="x", s=100)
train = pr.prdataset(x[:3], y[:3])
c = pr.svc(train, ("linear", 0, 20))
pr.plotc(c)
plt.subplot(1,2,2)
x_rescale = normalize(x)
plt.scatter(x_rescale[:,0], x_rescale[:,1], marker="x", s=100)
train_re = pr.prdataset(x_rescale[:3], y[:3])
c = pr.svc(train_re, ("linear", 0, 20))
pr.plotc(c)
plt.show() ###Output _____no_output_____ ###Markdown This testifies to the fact that the support vector classifier is sensitive to feature scaling.

Exercise 4.3 Difference between LDA and SVM ###Code plt.figure(figsize=(8,8))
x = np.array([[0,1],[2,4],[1,0]])
y = np.array([0,0,1])
data = pr.prdataset(x,y)
w1 = pr.ldc(data,1)
w2 = pr.svc(data,("linear",0, 20))
plt.scatter(x[:,0], x[:,1], marker="x")
pr.plotc(w1)
pr.plotc(w2)
plt.legend()
plt.show() ###Output _____no_output_____ ###Markdown The two solutions will be the same when the number of support vectors is three in the 2D case. LDA will always have three "support vectors" in this 2D setting. See Exercise 3.17 ###Code data = pr.gendatc(n=(1000,1000),dim=2, mu=0.0)
feature = +data
label = pr.genlab(n=(1000,1000),lab=[-1,1])
noiseFeature = np.hstack((feature,np.random.rand(2000,60)))
noiseData = pr.prdataset(noiseFeature, label)
e_nmc = pr.cleval(noiseData, pr.nmc(), trainsize=[5, 10, 20, 40], nrreps=10)
# default
u = pr.svc([],("linear",0,20))
e_svc = pr.cleval(noiseData, u, trainsize=[5, 10, 20, 40], nrreps=10)
# a single title: two consecutive plt.title calls would overwrite each other
plt.title("Learning curves for nmc and svc with gendatc")
plt.legend()
help(pr.cleval) ###Output Help on function cleval in module prtools.prtools: cleval(a, u, trainsize=[2, 3, 5, 10, 20, 30], nrreps=3, testfunc=<function testc at 0x7fb7bcd8a670>) Learning curve E = cleval(A,U,TRAINSIZE,NRREPS) Estimate the classification error E of (untrained) mapping U on dataset A for varying training set sizes. Default is trainsize=[2,3,5,10,20,30]. To get reliable estimates, the train-test split is repeated NRREPS=3 times. 
Example: a = gendatb([100,100]) u = nmc() e = cleval(a,u,nrreps=10) ###Markdown Exercise 4.7 svc(a,(kernel type,par,C)) a) ###Code a=pr.gendatb(n=[20,20], s=1) # a large independent banana test set plt.figure(figsize=(35,60)) widths = [0.1, 0.25, 0.5, 0.75, 1, 1.5, 2, 5, 7.5, 10] for i in range(len(widths)): svc = pr.svc(a, ("rbf", widths[i], 10)) plt.subplot(5,2,i+1) pr.scatterd(a) pr.plotc(svc) plt.title("widths="+str(widths[i])) plt.show() ###Output _____no_output_____ ###Markdown When the width increases from 0.1 to 10, the decision boundary becomes smoother and the performance is deteriorating. Exercise 4.8 Optimize the hyperparameter of an RBF SVC ###Code help (pr.prcrossval) a = pr.gendatb(n=[200,200], s=1) s = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 7.0, 10.0, 25.0]) e = np.zeros(len(s)) for i in range(len(s)): e[i] = pr.prcrossval(a, pr.svc([],("rbf", s[i], 10)), k=10).mean() plt.figure(figsize=(15,10)) plt.plot(s, e, "-D") plt.title("Validation Curve") plt.xlabel("s") plt.ylabel("error") plt.show() ###Output _____no_output_____
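###Markdown As a cross-check outside prtools, the same kernel-width search can be reproduced with scikit-learn. This is a minimal sketch, not prescribed by the exercise: it assumes, as in the gendatc cell above, that the banana generator stacks the two classes in order (so labels can be rebuilt with `pr.genlab`), and it maps the width $s$ to scikit-learn's `gamma` via $gamma = 1/(2s^2)$. ###Code from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# regenerate a banana-shaped dataset and rebuild features/labels
b = pr.gendatb(n=[200, 200], s=1)
X = +b                                    # strip the prdataset wrapper, as done above
y = pr.genlab(n=(200, 200), lab=[0, 1])   # assumption: classes are stacked in order

# widths from the prcrossval experiment above, translated to gamma values
widths = [0.2, 0.5, 1.0, 2.0, 5.0, 7.0, 10.0, 25.0]
grid = {"gamma": [1.0 / (2.0 * s_ ** 2) for s_ in widths], "C": [10]}

search = GridSearchCV(SVC(kernel="rbf"), grid, cv=10)
search.fit(X, np.ravel(y))
print("best parameters:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_) ###Output _____no_output_____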
GoogleCloud/GoogleTranslation/google-translation.ipynb
###Markdown Google Translation Translate text to a specified target language (the list of supported languages is given in the Google Translation documentation). This notebook is largely identical to the Python implementation in the official Google Translation documentation. ###Code from google.cloud import translate
import six

def google_translate(target, text):
    translate_client = translate.Client()
    if isinstance(text, six.binary_type):
        text = text.decode('utf-8')
    result = translate_client.translate(
        text, target_language=target)
    return result['translatedText']

input_text = "ここにgoogle翻訳のための日本語のセンテンスがあります!"
result = google_translate('en', input_text)

# Print translated text
print(result) ###Output Here is a Japanese sentence for google translation!
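###Markdown A related call worth knowing: the same v2-style client can also guess the source language before translating. A minimal sketch, assuming the `translate.Client` used above (the v2 API, which exposes `detect_language`): ###Code def google_detect_language(text):
    translate_client = translate.Client()
    # detect_language returns a dict with 'language', 'confidence' and 'input' keys
    result = translate_client.detect_language(text)
    return result['language'], result['confidence']

lang, confidence = google_detect_language(input_text)
print('Detected language: {} (confidence: {})'.format(lang, confidence)) ###Output _____no_output_____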
code/notebooks/collision_analysis.ipynb
###Markdown Norm ###Code seed = 7 for dataset, features_by_dataset in features.items(): for head_info, f_d_h in features_by_dataset.items(): negs = list(sorted(f_d_h[seed])) fig, axes = plt.subplots(len(f_d_h[seed]), 1, figsize=(20, 20), sharex=True, sharey=True) norms = [] norm_max = 0. norm_min = 10e7 for i, neg in enumerate(negs[::-1]): # use only use seed value X_train = f_d_h[seed][neg][0] norm = np.sqrt(np.sum(X_train ** 2, axis=1)) norms.append(norm) norm_max = max(norm_max, np.max(norm)) norm_min = min(norm_min, np.min(norm)) for i, norm in enumerate(norms): num_bins = int(np.log(len(norm))) sns.histplot(norm, bins=num_bins, stat="probability", ax=axes[i], fill=False, binrange=(norm_min, norm_max)) axes[i].axvline(norm.mean(), color="k", linestyle="dashed", linewidth=2.) for i, neg in enumerate(negs[::-1]): axes[i].set_xlabel("") axes[i].set_ylabel("" + "${}$".format(neg)) axes[-1].set_xlabel("Norm") fig.text(0.02, 0.5, "\# negative samples $+ 1$", va="center", rotation="vertical") fname = "../../doc/figs/norm_hist_{}_{}_seed-{}.pdf".format(head_info, dataset.upper().replace("1", "-1"), seed) plt.savefig(fname) axes[0].set_title("{} {}".format(head_info, dataset)) plt.show() ###Output _____no_output_____ ###Markdown Cosine ###Code seed = 7 # store averaged cosine cosine_sims = {} for dataset, f_d in features.items(): cosine_sims[dataset] = {} for head_info, f_d_h in f_d.items(): cosine_sims[dataset][head_info] = {} C = min(10, len(id2classes[dataset])) negs = list(sorted(f_d_h[seed])) fig, axes = plt.subplots(len(negs), C, figsize=(32, 16), sharex=True, sharey=True) for i, neg in enumerate(negs[::-1]): X_train, y_train, X_eval, y_eval = f_d_h[seed][neg] X_train_normalized = sklearn.preprocessing.normalize(X_train, axis=1) hist = [] for c in range(C): X_train_c = X_train_normalized[y_train == c] cos_sim = X_train_c.dot(X_train_c.T) cos_sim = cos_sim[np.triu_indices(len(cos_sim), 1)].flatten() num_bins = int(np.log(len(cos_sim))) sns.histplot(cos_sim, bins=num_bins, stat="probability", ax=axes[i, c], fill=False, binrange=(-1., 1.)) axes[i, c].axvline(cos_sim.mean(), color="k", linestyle="dashed", linewidth=2.) for r in range(len(negs)): for c in range(C): axes[r, c].set_xlabel("") axes[r, c].set_ylabel("") for i, class_name in enumerate(id2classes[dataset][:C]): axes[-1, i].set_xlabel(class_name.replace("_", " ").capitalize()) for i, neg in enumerate(negs[::-1]): axes[i, 0].set_ylabel(neg, size="large") fig.text(0.06, 0.5, "\# negative samples $+ 1$", va="center", rotation="vertical") fname = "../../doc/figs/cos_hist_{}_{}_seed-{}.pdf".format(head_info, dataset, seed) plt.savefig(fname) axes[0, 0].set_title("{} {}".format(head_info, dataset)) plt.show() ###Output _____no_output_____ ###Markdown relative change ###Code def get_value_hist(edges): results = [(edges[i] + edges[i + 1]) / 2. 
for i in range(len(edges) - 1)] return np.array(results) for dataset, features_by_dataset in features.items(): for head_info, f_d_h in features_by_dataset.items(): fig, axes_ratio = plt.subplots(1, 1, figsize=(16, 9)) cos_w_distance_by_seed = [] norm_w_distance_by_seed = [] for seed, f_d_h_s in f_d_h.items(): negs = np.array(list(sorted(f_d_h_s))) cosine_hist_by_neg = [] norms_by_neg = [] for neg in negs: X_train, y_train, X_eval, y_eval = f_d_h_s[neg] C = len(np.unique(y_train)) # norm norms_by_neg.append(np.sqrt(np.sum(X_train ** 2, axis=1))) X_train_normalized = sklearn.preprocessing.normalize(X_train, axis=1) # histogram for cosine similarity cos_hist = [] for c in range(C): X_train_c = X_train_normalized[y_train == c] cos_sim = X_train_c.dot(X_train_c.T) cos_sim = cos_sim[np.triu_indices(len(cos_sim), 1)].flatten() cos_hist.append( np.histogram( cos_sim, bins=int(np.log(len(cos_sim))), range=(-1.0, 1.0), ) ) cosine_hist_by_neg.append(cos_hist) # num-negs x C x 2 # cosine W distance per class w_mean_over_C = [] min_neg = negs[0] for cos_histogram in cosine_hist_by_neg[1:]: w_distance = [] for c in range(len(cosine_hist_by_neg[0])): base_values = get_value_hist(cosine_hist_by_neg[0][c][1]) base_weights = cosine_hist_by_neg[0][c][0] / np.sum(cosine_hist_by_neg[0][c][0]) values = get_value_hist(cos_histogram[c][1]) weights = cos_histogram[c][0] / np.sum(cos_histogram[c][0]) d = scipy.stats.wasserstein_distance( base_values, values, base_weights, weights ) w_distance.append(d) w_mean_over_C.append(np.mean(w_distance)) cos_w_distance_by_seed.append(w_mean_over_C) # histgram for norm # since norm is not bounded, decide the boundaries among k h_min = 10 ** 9 h_max = 0. for norm_list in norms_by_neg: h_min = min(h_min, np.min(norm_list)) h_max = max(h_max, np.max(norm_list)) norms_hist_by_neg = [] for norm_list in norms_by_neg: norms_hist_by_neg.append( np.histogram( norm_list, bins=int(np.log(len(norm_list))), range=(h_min, h_max), ) ) # norm's W distance # base dist. norm_w_distance = [] base_weights = norms_hist_by_neg[0][0] / np.sum(norms_hist_by_neg[0][0]) base_values = get_value_hist(norms_hist_by_neg[0][1]) for norm_histogram in norms_hist_by_neg[1:]: values = get_value_hist(norm_histogram[1]) weights = norm_histogram[0] / np.sum(norm_histogram[0]) d = scipy.stats.wasserstein_distance( base_values, values, base_weights, weights ) norm_w_distance.append(d) norm_w_distance_by_seed.append(norm_w_distance) # the smallest pair's distance is removed since the relative change is trivial: 0 cos_w_distance = np.array(cos_w_distance_by_seed).mean(axis=0) cos_w_distance -= cos_w_distance[0] cos_w_distance = cos_w_distance[1:] norm_w_distance = np.array(norm_w_distance_by_seed).mean(axis=0) norm_w_distance -= norm_w_distance[0] norm_w_distance = norm_w_distance[1:] axes_ratio.errorbar(np.arange(len(negs) - 2), cos_w_distance, fmt="o", markersize=20, label="Cosine") axes_ratio.errorbar(np.arange(len(negs) - 2), norm_w_distance, fmt="v", markersize=20, label="Norm") axes_ratio.set_xticks(np.arange(len(negs) - 2)) axes_ratio.set_xticklabels(negs[2:]) axes_ratio.set_ylabel("Relative change") axes_ratio.set_xlabel("\# negative samples $+ 1$") plt.legend() fname = "../../doc/figs/wasserstein_distance_{}_{}.pdf".format(head_info, dataset) plt.savefig(fname) axes_ratio.set_title(f"{dataset} {head_info}") plt.show() ###Output _____no_output_____
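###Markdown A quick sanity check of the distance used above, added for illustration: it relies only on `get_value_hist` and `scipy.stats.wasserstein_distance` as already defined/imported in this notebook. The distance between a histogram and itself should be (close to) 0, and shifting a distribution by a constant should move the distance by roughly that constant. ###Code # hedged sanity check for the histogram-based Wasserstein distance
rng = np.random.RandomState(0)
x1 = rng.randn(10000)
x2 = x1 + 0.5  # shifted copy

h1 = np.histogram(x1, bins=50, range=(-4.0, 5.0))
h2 = np.histogram(x2, bins=50, range=(-4.0, 5.0))

v1, w1 = get_value_hist(h1[1]), h1[0] / np.sum(h1[0])
v2, w2 = get_value_hist(h2[1]), h2[0] / np.sum(h2[0])

# identical histograms: distance ~0; shifted histograms: distance ~0.5
print(scipy.stats.wasserstein_distance(v1, v1, w1, w1))
print(scipy.stats.wasserstein_distance(v1, v2, w1, w2)) ###Output _____no_output_____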
ukpsummarizer-be/cplex/python/examples/mp/jupyter/tutorials/Linear_Programming.ipynb
###Markdown Tutorial: Linear Programming, (CPLEX Part 1)This notebook gives an overview of Linear Programming (or LP). After completing this unit, you should be able to - describe the characteristics of an LP in terms of the objective, decision variables and constraints, - formulate a simple LP model on paper, - conceptually explain some standard terms related to LP, such as dual, feasible region, infeasible, unbounded, slack, reduced cost, and degenerate. You should also be able to describe some of the algorithms used to solve LPs, explain what presolve does, and recognize the elements of an LP in a basic DOcplex model.>This notebook is part of [Prescriptive Analytics for Python](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html).>It requires a valid subscription to **Decision Optimization on Cloud** or a **local installation of CPLEX Optimizers**. Discover us [here](https://developer.ibm.com/docloud).Table of contents:* [Introduction to Linear Programming](Introduction-to-Linear-Programming)* [Example: a production problem](Example:-a-production-problem)* [CPLEX Modeling for Python](Use-IBM-Decision-Optimization-CPLEX-Modeling-for-Python)* [Algorithms for solving LPs](Algorithms-for-solving-LPs)* [Summary](Summary)* [References](References) Introduction to Linear ProgrammingIn this topic, you’ll learn what the basic characteristics of a linear program are. What is Linear Programming?Linear programming deals with the maximization (or minimization) of a linear objective function, subject to linear constraints, where all the decision variables are continuous. That is, no discrete variables are allowed. The linear objective and constraints must consist of linear expressions. What is a linear expression?A linear expression is a scalar product, for example, the expression:$$\sum{a_i x_i}$$where a_i represents constants (that is, data) and x_i represents variables or unknowns.Such an expression can also be written in short form as a vector product:$$^{t}A X$$where $A$ is the vector of constants and $X$ is the vector of variables.*Note*: Nonlinear terms that involve variables (such as x and y) are not allowed in linear expressions. Terms that are not allowed in linear expressions include - multiplication of two or more variables (such as x times y), - quadratic and higher order terms (such as x squared or x cubed), - exponents, - logarithms,- absolute values. What is a linear constraint?A linear constraint is expressed by an equality or inequality as follows:- $linear\_expression = linear\_expression$- $linear\_expression \le linear\_expression$- $linear\_expression \ge linear\_expression$Any linear constraint can be rewritten as one or two expressions of the type linear expression is less than or equal to zero.Note that *strict* inequality operators (that is, $>$ and $<$) are not allowed in linear constraints. What is a continuous variable?A variable (or _decision_ variable) is an unknown of the problem. Continuous variables are variables the set of real numbers (or an interval). Restrictions on their values that create discontinuities, for example a restriction that a variable should take integer values, are not allowed. Symbolic representation of an LPA typical symbolic representation of a Linear Programming is as follows:$minimize \sum c_{i} x_{i}\\\\subject\ to:\\\ a_{11}x_{1} + a_{12} x_{2} ... + a_{1n} x_{n} \ge b_{1}\\\ a_{21}x_{1} + a_{22} x_{2} ... + a_{2n} x_{n} \ge b_{2}\\...\ a_{m1}x_{1} + a_{m2} x_{2} ... 
+ a_{mn} x_{n} \ge b_{m}\\x_{1}, x_{2}...x_{n} \ge 0$This can be written in a concise form using matrices and vectors as:$min\ C^{t}x\\s.\ t.\ Ax \ge B\\x \ge 0$Where $x$ denotes the vector of variables with size $n$, $A$ denotes the matrix of constraint coefficients, with $m$ rows and $n$ columns and $B$ is a vector of numbers with size $m$. Characteristics of a linear program Example: a production problemIn this topic, you’ll analyze a simple production problem in terms of decision variables, the objective function, and constraints. You’ll learn how to write an LP formulation of this problem, and how to construct a graphical representation of the model. You’ll also learn what feasible, optimal, infeasible, and unbounded mean in the context of LP. Problem description: telephone productionA telephone company produces and sells two kinds of telephones, namely desk phones and cellular phones. Each type of phone is assembled and painted by the company. The objective is to maximize profit, and the company has to produce at least 100 of each type of phone.There are limits in terms of the company’s production capacity, and the company has to calculate the optimal number of each type of phone to produce, while not exceeding the capacity of the plant. Writing a descriptive modelIt is good practice to start with a descriptive model before attempting to write a mathematical model. In order to come up with a descriptive model, you should consider what the decision variables, objectives, and constraints for the business problem are, and write these down in words.In order to come up with a descriptive model, consider the following questions:- What are the decision variables? - What is the objective? - What are the constraints? Telephone production: a descriptive modelA possible descriptive model of the telephone production problem is as follows:- Decision variables: - Number of desk phones produced (DeskProduction) - Number of cellular phones produced (CellProduction)- Objective: Maximize profit- Constraints: 1. The DeskProduction should be greater than or equal to 100. 2. The CellProduction should be greater than or equal to 100. 3. The assembly time for DeskProduction plus the assembly time for CellProduction should not exceed 400 hours. 4. The painting time for DeskProduction plus the painting time for CellProduction should not exceed 490 hours. 
Writing a mathematical model
Convert the descriptive model into a mathematical model:
- Use the two decision variables DeskProduction and CellProduction
- Use the data given in the problem description (remember to convert minutes to hours where appropriate)
- Write the objective as a mathematical expression
- Write the constraints as mathematical expressions (use “=”, “<=”, or “>=”, and name the constraints to describe their purpose)
- Define the domain for the decision variables

Telephone production: a mathematical model
To express the last two constraints, we model assembly time and painting time as linear combinations of the two productions, resulting in the following mathematical model:
$maximize:\\\ \ 12\ desk\_production + 20\ cell\_production\\subject\ to: \\\ \ desk\_production >= 100 \\\ \ cell\_production >= 100 \\\ \ 0.2\ desk\_production + 0.4\ cell\_production <= 400 \\\ \ 0.5\ desk\_production + 0.4\ cell\_production <= 490 \\$

Using DOcplex to formulate the mathematical model in Python
Use the [DOcplex](https://cdn.rawgit.com/IBMDecisionOptimization/docplex-doc/2.0.15/docs/index.html) Python library to write the mathematical model in Python.
This is done in four steps:
- create an instance of docplex.mp.Model to hold all model objects,
- create decision variables,
- create linear constraints,
- finally, define the objective.
But first, we have to import the class `Model` from the docplex module.

Use IBM Decision Optimization CPLEX Modeling for Python
Let's use the DOcplex Python library to write the mathematical model in Python.

Step 1: Download the library
First install *docplex* if needed. ###Code import sys
try:
    import docplex.mp
except:
    if hasattr(sys, 'real_prefix'):
        # we are in a virtual env.
        !pip install docplex
    else:
        !pip install --user docplex ###Output _____no_output_____ ###Markdown Step 2: Set up the prescriptive engine
* Subscribe to our private cloud offer or Decision Optimization on Cloud solve service [here](https://developer.ibm.com/docloud) if you do not want to use a local solver.
* Get the service URL and your personal API key, and enter your credentials here if applicable: ###Code url = None
key = None ###Output _____no_output_____ ###Markdown Step 3: Set up the prescriptive model

Create the model
All objects of the model belong to one model instance. ###Code # first import the Model class from docplex.mp
from docplex.mp.model import Model

# create one model instance, with a name
m = Model(name='telephone_production') ###Output _____no_output_____ ###Markdown Define the decision variables
- The continuous variable `desk` represents the production of desk telephones.
- The continuous variable `cell` represents the production of cell phones. ###Code # by default, all variables in Docplex have a lower bound of 0 and infinite upper bound
desk = m.continuous_var(name='desk')
cell = m.continuous_var(name='cell') ###Output _____no_output_____ ###Markdown Set up the constraints
- Desk and cell phone production must both be greater than 100
- Assembly time is limited
- Painting time is limited. ###Code # write constraints
# constraint #1: desk production is greater than 100
m.add_constraint(desk >= 100)

# constraint #2: cell production is greater than 100
m.add_constraint(cell >= 100)

# constraint #3: assembly time limit
ct_assembly = m.add_constraint( 0.2 * desk + 0.4 * cell <= 400)

# constraint #4: painting time limit
ct_painting = m.add_constraint( 0.5 * desk + 0.4 * cell <= 490) ###Output _____no_output_____ ###Markdown Express the objective
We want to maximize the expected revenue. 
###Code m.maximize(12 * desk + 20 * cell) ###Output _____no_output_____ ###Markdown A few remarks about how we formulated the mathemtical model in Python using DOcplex:- all arithmetic operations (+, \*, \-) are done using Python operators- comparison operators used in writing linear constraint use Python comparison operators too. Print information about the modelWe can print information about the model to see how many objects of each type it holds: ###Code m.print_information() ###Output _____no_output_____ ###Markdown Graphical representation of a Linear ProblemA simple 2-dimensional LP (with 2 decision variables) can be represented graphically using a x- and y-axis. This is often done to demonstrate optimization concepts. To do this, follow these steps:- Assign one variable to the x-axis and the other to the y-axis.- Draw each of the constraints as you would draw any line in 2 dimensions.- Use the signs of the constraints (=, =) to determine which side of each line falls within the feasible region (allowable solutions).- Draw the objective function as you would draw any line in 2 dimensions, by substituting any value for the objective (for example, 12 * DeskProduction + 20 * CellProduction = 4000) Feasible set of solutions This graphic shows the feasible region for the telephone problem. Recall that the feasible region of an LP is the region delimited by the constraints, and it represents all feasible solutions. In this graphic, the variables DeskProduction and CellProduction are abbreviated to be desk and cell instead. Look at this diagram and search intuitively for the optimal solution. That is, which combination of desk and cell phones will yield the highest profit. The optimal solution To find the optimal solution to the LP, you must find values for the decision variables, within the feasible region, that maximize profit as defined by the objective function. In this problem, the objective function is to maximize $$12 * desk + 20 * cell $$To do this, first draw a line representing the objective by substituting a value for the objective. Next move the line up (because this is a maximization problem) to find the point where the line last touches the feasible region. Note that all the solutions on one objective line, such as AB, yield the same objective value. Other values of the objective will be found along parallel lines (such as line CD). In a profit maximizing problem such as this one, these parallel lines are often called isoprofit lines, because all the points along such a line represent the same profit. In a cost minimization problem, they are known as isocost lines. Since all isoprofit lines have the same slope, you can find all other isoprofit lines by pushing the objective value further out, moving in parallel, until the isoprofit lines no longer intersect the feasible region. The last isoprofit line that touches the feasible region defines the largest (therefore maximum) possible value of the objective function. In the case of the telephone production problem, this is found along line EF. The optimal solution of a linear program always belongs to an extreme point of the feasible region (that is, at a vertex or an edge). 
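Before handing the model to the solver, this vertex can be cross-checked independently with SciPy. The following sketch is not part of the original tutorial: it assumes `scipy` is available and rewrites the maximization as the minimization that `scipy.optimize.linprog` expects. ###Code # hedged cross-check with scipy.optimize.linprog (which minimizes, so negate the profit)
from scipy.optimize import linprog

res = linprog(c=[-12, -20],                       # maximize 12*desk + 20*cell
              A_ub=[[0.2, 0.4], [0.5, 0.4]],      # assembly and painting limits
              b_ub=[400, 490],
              bounds=[(100, None), (100, None)],  # minimum production of each phone
              method="highs")
print(res.x, -res.fun)  # expected: [300. 850.] and 20600.0 ###Output _____no_output_____ ###Markdown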
Solve with the Decision Optimization solve serviceIf url and key are None, the Modeling layer will look for a local runtime, otherwise will use the credentials.Look at the documentation for a good understanding of the various solving/generation modes.If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.In any case, `Model.solve()` returns a solution object in Python, containing the optimal values of decision variables, if the solve succeeds, or else it returns `None`. ###Code s = m.solve(url=url, key=key) m.print_solution() ###Output _____no_output_____ ###Markdown In this case, CPLEX has found an optimal solution at (300, 850). You can check that this point is indeed an extreme point of the feasible region. Multiple Optimal SolutionsIt is possible that an LP has multiple optimal solutions. At least one optimal solution will be at a vertex.By default, the CPLEX® Optimizer reports the first optimal solution found. Example of multiple optimal solutions This graphic shows an example of an LP with multiple optimal solutions. This can happen when the slope of the objective function is the same as the slope of one of the constraints, in this case line AB. All the points on line AB are optimal solutions, with the same objective value, because they are all extreme points within the feasible region. Binding and nonbinding constraintsA constraint is binding if the constraint becomes an equality when the solution values are substituted.Graphically, binding constraints are constraints where the optimal solution lies exactly on the line representing that constraint. In the telephone production problem, the constraint limiting time on the assembly machine is binding:$$ 0.2desk + 0.4 cell <= 400\\ desk = 300 cell = 850 0.2(300) + 0.4(850) = 400$$The same is true for the time limit on the painting machine:$$ 0.5desk + 0.4cell <= 490 0.5(300) + 0.4(850) = 490 $$On the other hand, the requirement that at least 100 of each telephone type be produced is nonbinding because the left and right hand sides are not equal:$$ desk >= 100\\ 300 \neq 100$$ InfeasibilityA model is infeasible when no solution exists that satisfies all the constraints. This may be because:The model formulation is incorrect.The data is incorrect.The model and data are correct, but represent a real-world conflict in the system being modeled.When faced with an infeasible model, it's not always easy to identify the source of the infeasibility. DOcplex helps you identify potential causes of infeasibilities, and it will also suggest changes to make the model feasible. An example of infeasible problem This graphic shows an example of an infeasible constraint set for the telephone production problem. Assume in this case that the person entering data had accidentally entered lower bounds on the production of 1100 instead of 100. The arrows show the direction of the feasible region with respect to each constraint. This data entry error moves the lower bounds on production higher than the upper bounds from the assembly and painting constraints, meaning that the feasible region is empty and there are no possible solutions. Infeasible models in DOcplexCalling `solve()` on an infeasible model returns None. Let's experiment this with DOcplex. 
First, we take a copy of our model and add an extra infeasible constraint which states that desk telephone production must be greater than 1100. ###Code # create a new model, copy of m
im = m.copy()
# get the 'desk' variable of the new model from its name
idesk = im.get_var_by_name('desk')
# add a new (infeasible) constraint
im.add_constraint(idesk >= 1100);
# solve the new problem; we expect a result of None as the model is now infeasible
ims = im.solve(url=url, key=key)
if ims is None:
    print('- model is infeasible') ###Output _____no_output_____ ###Markdown Correcting infeasible models
To correct an infeasible model, you must use your knowledge of the real-world situation you are modeling.
If you know that the model is realizable, you can usually manually construct an example of a feasible solution and use it to determine where your model or data is incorrect. For example, the telephone production manager may input the previous month's production figures as a solution to the model and discover that they violate the erroneously entered bounds of 1100.
DOcplex can help perform infeasibility analysis, which can get very complicated in large models. In this analysis, DOcplex may suggest relaxing one or more constraints.

Relaxing constraints by changing the model
In the case of LP models, the term “relaxation” refers to changing the right hand side of the constraint to allow some violation of the original constraint.
For example, a relaxation of the assembly time constraint is as follows:
$$0.2 \ desk + 0.4\ cell <= 440$$
Here, the right hand side has been relaxed from 400 to 440, meaning that you allow more time for assembly than originally planned.

Relaxing the model by converting hard constraints to soft constraints
- A _soft_ constraint is a constraint that can be violated in some circumstances.
- A _hard_ constraint cannot be violated under any circumstances. So far, all constraints we have encountered are hard constraints.
Converting hard constraints to soft ones is one way to resolve infeasibilities.
The original hard constraint on assembly time is as follows:
$$0.2 \ desk + 0.4 \ cell <= 400$$
You can turn this into a soft constraint if you know that, for example, an additional 40 hours of overtime are available at an additional cost. First add an overtime term to the right-hand side:
$$0.2 \ desk + 0.4 \ cell <= 400 + overtime$$
Next, add a hard limit to the amount of overtime available:
$$overtime <= 40$$
Finally, add an additional cost to the objective to penalize use of overtime. Assume that in this case overtime costs an additional $2/hour; the new objective then becomes:
$$maximize\ 12 * desk + 20 * cell - 2 * overtime$$

Implement the soft constraint model using DOcplex
First, add an extra variable for overtime, with an upper bound of 40. This suffices to express the hard limit on overtime. ###Code overtime = m.continuous_var(name='overtime', ub=40) ###Output _____no_output_____ ###Markdown Modify the assembly time constraint by changing its right-hand side by adding overtime.
*Note*: this operation modifies the model by performing a _side-effect_ on the constraint object. DOcplex allows dynamic editing of model elements. ###Code ct_assembly.rhs = 400 + overtime ###Output _____no_output_____ ###Markdown Last, modify the objective expression to add the penalization term.
Note that the overtime cost is simply subtracted from the revenue terms with the Python subtraction operator. 
###Code m.maximize(12*desk + 20 * cell - 2 * overtime) ###Output _____no_output_____ ###Markdown And solve again using DOcplex: ###Code s2 = m.solve(url=url, key=key) m.print_solution() ###Output _____no_output_____ ###Markdown Unbounded Variable vs. Unbounded modelA variable is unbounded when one or both of its bounds is infinite. A model is unbounded when its objective value can be increased or decreased without limit. The fact that a variable is unbounded does not necessarily influence the solvability of the model and should not be confused with a model being unbounded. An unbounded model is almost certainly not correctly formulated. While infeasibility implies a model where constraints are too limiting, unboundedness implies a model where an important constraint is either missing or not restrictive enough.By default, DOcplex variables are unbounded: their upper bound is infinite (but their lower bound is zero). Unbounded feasible regionThe telephone production problem would become unbounded if, for example, the constraints on the assembly and painting time were neglected. The feasible region would then look as in this diagram where the objective value can increase without limit, up to infinity, because there is no upper boundary to the region. Algorithms for solving LPsThe IBM® CPLEX® Optimizers to solve LP problems in CPLEX include:- Simplex Optimizer- Dual-simplex Optimizer- Barrier Optimizer The simplex algorithmThe simplex algorithm, developed by George Dantzig in 1947, was the first generalized algorithm for solving LP problems. It is the basis of many optimization algorithms. The simplex method is an iterative method. It starts with an initial feasible solution, and then tests to see if it can improve the result of the objective function. It continues until the objective function cannot be further improved.The following diagram illustrates how the simplex algorithm traverses the boundary of the feasible region for the telephone production problem. The algorithm, starts somewhere along the edge of the shaded feasible region, and advances vertex-by-vertex until arriving at the vertex that also intersects the optimal objective line. Assume it starts at the red dot indicated on the diagam. The revised simplex algorithmTo improve the efficiency of the simplex algorithm, George Dantzig and W. Orchard-Hays revised it in 1953. CPLEX uses the revised simplex algorithm, with a number of improvements. The CPLEX Optimizers are particularly efficient and can solve very large problems rapidly. You can tune some CPLEX Optimizer parameters to change the algorithmic behavior according to your needs. The Dual simple algorithm The dual of a LPThe concept of duality is important in linear programming. Every LP problem has an associated LP problem known as its _dual_. The dual of this associated problem is the original LP problem (known as the primal problem). If the primal problem is a minimization problem, then the dual problem is a maximization problem and vice versa. 
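To make this concrete, here is one way to write the dual of the telephone production problem (a worked example added for illustration; the two lower bounds are treated as explicit constraints with multipliers $y_{3}, y_{4}$):
$$minimize\ 400 y_{1} + 490 y_{2} - 100 y_{3} - 100 y_{4}\\subject\ to:\\0.2 y_{1} + 0.5 y_{2} - y_{3} \ge 12\\0.4 y_{1} + 0.4 y_{2} - y_{4} \ge 20\\y_{1}, y_{2}, y_{3}, y_{4} \ge 0$$
At the optimal production plan found earlier (desk = 300, cell = 850), only the assembly and painting constraints bind, so $y_{3} = y_{4} = 0$; solving the two remaining equalities gives $y_{1} = 130/3$ and $y_{2} = 20/3$, with dual objective $400 \cdot 130/3 + 490 \cdot 20/3 = 20600$, exactly the primal profit, as duality predicts.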
A primal-dual pair
For a generic primal (P) that maximizes $z=\sum_{i} c_{i}x_{i}$, the dual (D) minimizes $w= \sum_{j}b_{j}y_{j}$.
- Each constraint in the primal has an associated dual variable, $y_{j}$.
- Any feasible solution to D is an upper bound to P, and any feasible solution to P is a lower bound to D.
- In LP, the optimal objective values of D and P are equal, and this common value occurs where these bounds meet.
- The dual can help solve difficult primal problems by providing a bound that in the best case equals the optimal solution to the primal problem.

Dual prices
In any solution to the dual, the values of the dual variables are known as the dual prices, also called shadow prices.
For each constraint in the primal problem, its associated dual price indicates how much the dual objective will change with a unit change in the right hand side of the constraint.
The dual price of a non-binding constraint is zero. That is, changing the right hand side of the constraint will not affect the objective value.
The dual price of a binding constraint can help you make decisions regarding the constraint.
For example, the dual price of a binding resource constraint can be used to determine whether more of the resource should be purchased or not.

The dual simplex algorithm
The simplex algorithm works by finding a feasible solution and moving progressively toward optimality.
The dual simplex algorithm implicitly uses the dual to try and find an optimal solution to the primal as early as it can, regardless of whether the solution is feasible or not. It then moves from one vertex to another, gradually decreasing the infeasibility while maintaining optimality, until an optimal feasible solution to the primal problem is found.
In CPLEX, the Dual-simplex Optimizer is the first choice for most LP problems.

Basic solutions and basic variables
You learned earlier that the simplex algorithm travels from vertex to vertex to search for the optimal solution.
A solution at a vertex is known as a _basic_ solution. Without getting into too much detail, it's worth knowing that part of the simplex algorithm involves setting a subset of variables to zero at each iteration. These variables are known as non-basic variables. The remaining variables are the _basic_ variables. The concepts of basic solutions and variables are relevant in the definition of reduced costs that follows next.

Reduced Costs
The reduced cost of a variable gives an indication of the amount the objective will change with a unit increase in the variable value.
Consider the simplest form of an LP:
$minimize\ c^{t}x\\s.t. \\Ax = b \\x \ge 0$
If $y$ represents the dual variables for a given basic solution, then the reduced costs are defined as:
$$c - y^{t}A$$
Such a basic solution is optimal if:
$$ c - y^{t}A \ge 0$$
If all reduced costs for this LP are non-negative, it follows that the objective value can only increase with a change in a variable value, and therefore the solution (when minimizing) is optimal.

Getting reduced cost values with DOcplex
DOcplex lets you access the reduced costs of a variable after a successful solve. 
Let's experiment with the two decision variables of our problem: ###Code print('* desk variable has reduced cost: {0}'.format(desk.reduced_cost))
print('* cell variable has reduced cost: {0}'.format(cell.reduced_cost)) ###Output _____no_output_____ ###Markdown Default optimality criteria for CPLEX optimizer
Because CPLEX Optimizer operates on finite precision computers, it uses an optimality tolerance to test the reduced costs. The default optimality tolerance is 1e-6, with the optimality criterion for the simplest form of an LP then being:
$$c - y^{t}A > -10^{-6}$$
You can adjust this optimality tolerance, for example if the algorithm takes very long to converge and has already achieved a solution sufficiently close to optimality.

Reduced Costs and multiple optimal solutions
In the earlier example you saw how one can visualize multiple optimal solutions for an LP with two variables. For larger LPs, the reduced costs can be used to determine whether multiple optimal solutions exist. Multiple optimal solutions exist when one or more non-basic variables with a zero reduced cost exist in an optimal solution (that is, variable values that can change without affecting the objective value). In order to determine whether multiple optimal solutions exist, you can examine the values of the reduced costs with DOcplex.

Slack values
For any solution, the difference between the left and right hand sides of a constraint is known as the _slack_ value for that constraint. For example, if a constraint states that f(x) <= 100, and in the solution f(x) = 80, then the slack value of this constraint is 20.
In the earlier example, you learned about binding and non-binding constraints. For example, f(x) <= 100 is binding if f(x) = 100, and non-binding if f(x) = 80.
The slack value for a binding constraint is always zero, that is, the constraint is met exactly.
You can determine which constraints are binding in a solution by examining the slack values with DOcplex. This might help to better interpret the solution and suggest which constraints may benefit from a change in bounds or a change into a soft constraint.

Accessing slack values with DOcplex
As an example, let's examine the slack values of some constraints in our problem, after we revert the change to soft constraints ###Code # revert soft constraints
ct_assembly.rhs = 440
s3 = m.solve(url=url, key=key)

# now get slack value for assembly constraint: expected value is 40
print('* slack value for assembly time constraint is: {0}'.format(ct_assembly.slack_value))
# get slack value for painting time constraint, expected value is 0.
print('* slack value for painting time constraint is: {0}'.format(ct_painting.slack_value)) ###Output _____no_output_____ ###Markdown Degeneracy
It is possible that multiple non-optimal solutions with the same objective value exist.
As the simplex algorithm attempts to move in the direction of an improved objective value, it might happen that the algorithm starts cycling between non-optimal solutions with equivalent objective values. This is known as _degeneracy_.
Modern LP solvers, such as CPLEX Simplex Optimizer, have built-in mechanisms to help escape such cycling by using perturbation techniques involving the variable bounds.
If the default algorithm does not break the degenerate cycle, it's a good idea to try some other algorithms, for example the Dual-simplex Optimizer. Problems that are primal degenerate are often not dual degenerate, and vice versa. 
Setting a LP algorithm with DOcplexUsers can change the algorithm by editing the `lpmethod` parameter of the model.We won't go into details here, it suffices to know this parameter accepts an integer from 0 to 6, where 0 denotes automatic choice of the algorithm, 1 is for primal simplex, 2 is for dual simplex, and 4 is for barrier...For example, choosing the barrier algorithm is done by setting value 4 to this parameter. We access the `parameters` property of the model and from there, assign the `lpmethod` parameter ###Code m.parameters.lpmethod = 4 m.solve(url=url, key=key, log_output=True) ###Output _____no_output_____
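###Markdown To see whether the algorithm choice matters on this tiny model, here is a minimal timing sketch, added for illustration; it only reuses `m.parameters.lpmethod` and `m.solve` as shown above, with the same url/key credentials: ###Code import time

# hedged sketch: time a solve with the primal simplex, dual simplex and barrier settings
for name, method in [("primal simplex", 1), ("dual simplex", 2), ("barrier", 4)]:
    m.parameters.lpmethod = method
    t0 = time.time()
    m.solve(url=url, key=key)
    print("%s: solved in %.4f s" % (name, time.time() - t0)) ###Output _____no_output_____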
exercises/assignments/dlb-1-neural-networks.ipynb
###Markdown [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JorisRoels/deep-learning-biology/blob/main/exercises/assignments/dlb-1-neural-networks.ipynb) Exercise 1: Neural NetworksIn this notebook, we will be using neural networks to identify enzyme sequences from protein sequences. The structure of these exercises is as follows: 1. [Import libraries and download data](scrollTo=ScagUEMTMjlK)2. [Data pre-processing](scrollTo=ohZHyOTnI35b)3. [Building a neural network with PyTorch](scrollTo=kIry8iFZI35y)4. [Training & validating the network](scrollTo=uXrEb0rTI35-)5. [Improving the model](scrollTo=o76Hxj7-Mst5)6. [Understanding the model](scrollTo=Ult7CTpCMxTi)This notebook is largely based on the research published in: Li, Y., Wang, S., Umarov, R., Xie, B., Fan, M., Li, L., & Gao, X. (2018). DEEPre: Sequence-based enzyme EC number prediction by deep learning. Bioinformatics, 34(5), 760–769. https://doi.org/10.1093/bioinformatics/btx680 1. Import libraries and download dataLet's start with importing the necessary libraries. ###Code import pickle import numpy as np import random import os import matplotlib.pyplot as plt plt.rcdefaults() import pandas as pd from sklearn.metrics import accuracy_score from sklearn.model_selection import train_test_split from sklearn.manifold import TSNE from sklearn.svm import SVC from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier from progressbar import ProgressBar, Percentage, Bar, ETA, FileTransferSpeed import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torch.utils.data as data from torch.utils.data import DataLoader from torchvision import datasets import gdown import zipfile import os ###Output _____no_output_____ ###Markdown As you will notice, Colab environments come with quite a large library pre-installed. If you need to import a module that is not yet specified, you can add it in the previous cell (make sure to run it again). If the module is not installed, you can install it with `pip`. To make your work reproducible, it is advised to initialize all modules with stochastic functionality with a fixed seed. Re-running this script should give the same results as long as the seed is fixed. ###Code # make sure the results are reproducible seed = 0 np.random.seed(seed) random.seed(seed) torch.manual_seed(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False # run all computations on the GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print('Running computations with %s' % torch.device(device)) if torch.cuda.is_available(): print(torch.cuda.get_device_properties(device)) ###Output _____no_output_____ ###Markdown We will now download the required data from a public Google Drive repository. The data is stored as a zip archive and automatically extracted to the `data` directory in the current directory. ###Code # fields url = 'http://data.bits.vib.be/pub/trainingen/DeepLearning/data-1.zip' cmp_data_path = 'data.zip' # download the compressed data gdown.download(url, cmp_data_path, quiet=False) # extract the data zip = zipfile.ZipFile(cmp_data_path) zip.extractall('') # remove the compressed data os.remove(cmp_data_path) ###Output _____no_output_____ ###Markdown 2. Data pre-processingThe data are protein sequences and stored in binary format as pickle files. 
However, we encode the data to a binary matrix $X$ where the value at position $(i,j)$ represents the absence or presence of the protein $i$ in the sequence $j$: $$X_{i,j}=\left\{ \begin{array}{ll} 1 \text{ (protein } i \text{ is present in sequence } j \text{)}\\ 0 \text{ (protein } i \text{ is not present in sequence } j \text{)} \end{array} \right.$$The corresponding labels $y$ are also binary, they separate the enzyme from the non-enzyme sequences: $$y_{j}=\left\{ \begin{array}{ll} 1 \text{ (sequence } j \text{ is an enzyme)}\\ 0 \text{ (sequence } j \text{ is not an enzyme)} \end{array} \right.$$ ###Code def encode_data(f_name_list, proteins): with open(f_name_list,'rb') as f: name_list = pickle.load(f) encoding = [] widgets = ['Encoding data: ', Percentage(), ' ', Bar(), ' ', ETA()] pbar = ProgressBar(widgets=widgets, maxval=len(name_list)) pbar.start() for i in range(len(name_list)): single_encoding = np.zeros(len(proteins)) if name_list[i] != []: for protein_name in name_list[i]: single_encoding[proteins.index(protein_name)] = 1 encoding.append(single_encoding) pbar.update(i) pbar.finish() return np.asarray(encoding, dtype='int8') # specify where the data is stored data_dir = 'data-1/' f_name_list_enzymes = os.path.join(data_dir, 'Pfam_name_list_new_data.pickle') f_name_list_nonenzyme = os.path.join(data_dir, 'Pfam_name_list_non_enzyme.pickle') f_protein_names = os.path.join(data_dir, 'Pfam_model_names_list.pickle') # load the different proteins with open(f_protein_names,'rb') as f: proteins = pickle.load(f) num_proteins = len(proteins) # encode the sequences to a binary matrix enzymes = encode_data(f_name_list_enzymes, proteins) non_enzymes = encode_data(f_name_list_nonenzyme, proteins) # concatenate everything together X = np.concatenate([enzymes, non_enzymes], axis=0) # the labels are binary (1 for enzymes, 0 for non-enzymes) and are one-hot encoded y = np.concatenate([np.ones([22168,1]), np.zeros([22168,1])], axis=0).flatten() # print a few statistics print('There are %d sequences with %d protein measurements' % (X.shape[0], X.shape[1])) print('There are %d enzyme and %d non-enzyme sequences' % (enzymes.shape[0], non_enzymes.shape[0])) ###Output _____no_output_____ ###Markdown Here is a quick glimpse in the data. For a random selection of proteins, we plot the amount of times it was counted in the enzyme and non-enzyme sequences. ###Code # selection of indices for the proteins inds = np.random.randint(num_proteins, size=20) proteins_subset = [proteins[i] for i in inds] # compute the sum over the sequences enzymes_sum = np.sum(enzymes, axis=1) non_enzymes_sum = np.sum(non_enzymes, axis=1) # plot the counts on the subset of proteins df = pd.DataFrame({'Enzyme': enzymes_sum[inds], 'Non-enzyme': non_enzymes_sum[inds]}, index=proteins_subset) df.plot.barh() plt.xlabel('Counts') plt.ylabel('Protein') plt.show() ###Output _____no_output_____ ###Markdown To evaluate our approaches properly, we will split the data in a training and testing set. We will use the training set to train our algorithms and the testing set as a separate set of unseen data to evaluate the performance of our models. ###Code test_ratio = 0.5 # we will use 50% of the data for testing x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=test_ratio, random_state=seed) print('%d sequences for training and %d for testing' % (x_train.shape[0], x_test.shape[0])) ###Output _____no_output_____ ###Markdown 3. Building a neural network with PyTorchNow, we have to implement the neural network and train it. 
###Markdown
3. Building a neural network with PyTorch

Now we have to implement the neural network and train it. For this, we will use the high-level deep learning library [PyTorch](https://pytorch.org/). PyTorch is a well-known, open-source machine learning framework that offers a comprehensive set of tools and libraries and accelerates research prototyping. It also supports transparent training of machine learning models on GPU devices, which can reduce runtimes significantly. The full documentation can be found [here](https://pytorch.org/docs/stable/index.html). Let's start by defining the architecture of the neural network.

**Exercise**: build a network with a single hidden layer with PyTorch:
- The first layer will be a [fully connected layer](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear) with [relu](https://pytorch.org/docs/stable/nn.functional.html?highlight=relu#torch.nn.functional.relu) activation that transforms the input features to a 512-dimensional (hidden) feature vector representation.
- The output layer is another [fully connected layer](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear) that transforms the hidden representation to a class probability distribution.
- Print the network architecture to validate your architecture.
- Run the network on a random batch of samples. Note that you have to transfer the numpy ndarray type inputs to floating point [PyTorch tensors](https://pytorch.org/docs/stable/tensors.html).

###Code
# define the number of classes
""" INSERT CODE HERE """

# The network will inherit the Module class
class Net(nn.Module):

    def __init__(self, n_features=512):
        super(Net, self).__init__()
        """ INSERT CODE HERE """

    def forward(self, x):
        """ INSERT CODE HERE """
        return x

# initialize the network and print the architecture
""" INSERT CODE HERE """

# run the network on a batch of samples
# note that we have to transfer the numpy ndarray type inputs to float torch tensors
""" INSERT CODE HERE """

###Output
_____no_output_____

###Markdown
**Exercise**: manually compute the number of parameters in this network and verify this with PyTorch.

###Code
""" INSERT CODE HERE """

print('There are %d trainable parameters in the network' % n_params)

###Output
_____no_output_____

###Markdown
4. Training and validating the network

To train this network, we still need two things: a loss function and an optimizer. For the loss function, we will use the commonly used cross entropy loss for classification. For the optimizer, we will use stochastic gradient descent (SGD) with a learning rate of 0.1.

###Code
learning_rate = 0.1
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=learning_rate)

###Output
_____no_output_____
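###Markdown
As a quick illustration of how `nn.CrossEntropyLoss` behaves (not part of the exercise): it expects raw, unnormalized network outputs (logits) together with integer class labels, and applies log-softmax internally.

###Code
# minimal cross entropy example: a confident, correct prediction gives a small loss
logits = torch.tensor([[2.0, -1.0]])  # raw network outputs for one sample
label = torch.tensor([0])             # true class index
print(nn.CrossEntropyLoss()(logits, label))  # tensor(0.0486)

###Output
_____no_output_____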
###Markdown
Great. Now it's time to train our model and implement backpropagation. Fortunately, PyTorch makes this relatively easy. A single optimization iteration consists of the following steps:
1. Sample a batch from the training data: we use the convenient [data loading](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html) system provided by PyTorch. You can simply enumerate over the `DataLoader` objects.
2. Set all gradients equal to zero. You can use the [`zero_grad()`](https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=zero_grad#torch.nn.Module.zero_grad) function.
3. Feed the batch to the network and compute the outputs.
4. Compare the outputs to the labels with the loss function. Note that the loss function itself is a `Module` object as well and thus can be treated in a similar fashion as the network for computing outputs.
5. Backpropagate the gradients w.r.t. the computed loss. You can use the [`backward()`](https://pytorch.org/docs/stable/autograd.html?highlight=backward#torch.autograd.backward) function for this.
6. Apply one step in the optimization (e.g. gradient descent). For this, you will need the optimizer's [`step()`](https://pytorch.org/docs/stable/optim.html#torch.optim.Optimizer.step) function.

**Exercise**: train the model with the following settings:
- Train the network for 50 epochs.
- Use a mini batch size of 1024.
- Track the performance of the classifier by additionally providing the test data. We have already provided a validation function that tracks the accuracy. This function expects a network module, a binary matrix $X$ of sequences, their corresponding labels $y$ and the batch size (for efficiency reasons) as inputs.

###Code
# dataset useful for sampling (and many other things)
class ProteinSeqDataset(data.Dataset):

    def __init__(self, data, labels):
        self.data = data
        self.labels = labels

    def __getitem__(self, i):
        return self.data[i], self.labels[i]

    def __len__(self):
        return len(self.data)

def validate_accuracy(net, X, y, batch_size=1024):
    # evaluation mode
    net.eval()

    # save predictions
    y_preds = np.zeros((len(y)))
    for b in range(len(y) // batch_size):
        # sample a batch
        inputs = X[b*batch_size: (b+1)*batch_size]
        # transform to tensors
        inputs = torch.from_numpy(inputs).float().to(device)
        # forward call
        y_pred = net(inputs)
        y_pred = F.softmax(y_pred, dim=1)[:, 1] > 0.5
        # save predictions
        y_preds[b*batch_size: (b+1)*batch_size] = y_pred.detach().cpu().numpy()
    # remaining batch
    b = len(y) // batch_size
    inputs = torch.from_numpy(X[b*batch_size:]).float().to(device)
    y_pred = net(inputs)
    y_pred = F.softmax(y_pred, dim=1)[:, 1] > 0.5
    y_preds[b*batch_size:] = y_pred.detach().cpu().numpy()

    # compute accuracy
    acc = accuracy_score(y, y_preds)

    return acc

# implementation of a single training epoch
def train_epoch(net, loader, loss_fn, optimizer):
    """ INSERT CODE HERE """
    return -1

# implementation of a single testing epoch
def test_epoch(net, loader, loss_fn):
    """ INSERT CODE HERE """
    return -1

def train_net(net, train_loader, test_loader, loss_fn, optimizer, epochs):
    # transfer the network to the GPU
    net = net.to(device)

    train_loss = np.zeros((epochs))
    test_loss = np.zeros((epochs))
    train_acc = np.zeros((epochs))
    test_acc = np.zeros((epochs))
    for epoch in range(epochs):
        # training
        train_loss[epoch] = train_epoch(net, train_loader, loss_fn, optimizer)
        train_acc[epoch] = validate_accuracy(net, x_train, y_train)
        # testing
        test_loss[epoch] = test_epoch(net, test_loader, loss_fn)
        test_acc[epoch] = validate_accuracy(net, x_test, y_test)
        print('Epoch %5d - Train loss: %.6f - Train accuracy: %.6f - Test loss: %.6f - Test accuracy: %.6f'
              % (epoch, train_loss[epoch], train_acc[epoch], test_loss[epoch], test_acc[epoch]))
    return train_loss, test_loss, train_acc, test_acc

# parameters
""" INSERT CODE HERE """

# build a training and testing dataloader that handles batch sampling
train_data = ProteinSeqDataset(x_train, y_train)
train_loader = DataLoader(train_data, batch_size=batch_size)
test_data = ProteinSeqDataset(x_test, y_test)
test_loader = DataLoader(test_data, batch_size=batch_size)

# start training
train_loss, test_loss, train_acc, test_acc = train_net(net, train_loader, test_loader, loss_fn, optimizer, n_epochs)

###Output
_____no_output_____
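###Markdown
For reference, one possible implementation of the training epoch is sketched below; it follows the six steps above literally. This is only one of many valid solutions (the exercise intends you to write your own first), and the `_example` suffix is used so it does not overwrite yours.

###Code
# a sketch of a training epoch following the six optimization steps
def train_epoch_example(net, loader, loss_fn, optimizer):
    net.train()  # make sure layers such as dropout are in training mode
    total_loss = 0
    for inputs, labels in loader:                # 1. sample a batch
        inputs = inputs.float().to(device)
        labels = labels.long().to(device)
        optimizer.zero_grad()                    # 2. zero the gradients
        outputs = net(inputs)                    # 3. forward pass
        loss = loss_fn(outputs, labels)          # 4. compute the loss
        loss.backward()                          # 5. backpropagate
        optimizer.step()                         # 6. one optimization step
        total_loss += loss.item()
    return total_loss / len(loader)              # average loss over the batches

###Output
_____no_output_____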
###Markdown
The code below visualizes the learning curves: these curves illustrate how the loss on the train and test set decays over time. Additionally, we also report a similar curve for the train and test accuracy. The final accuracy is reported as well.

###Code
def plot_learning_curves(train_loss, test_loss, train_acc, test_acc):
    plt.figure(figsize=(11, 4))
    plt.subplot(1, 2, 1)
    plt.plot(train_loss)
    plt.plot(test_loss)
    plt.xlabel('epochs')
    plt.ylabel('loss')
    plt.legend(('Train', 'Test'))
    plt.subplot(1, 2, 2)
    plt.plot(train_acc)
    plt.plot(test_acc)
    plt.xlabel('epochs')
    plt.ylabel('accuracy')
    plt.legend(('Train', 'Test'))
    plt.show()

# plot the learning curves (i.e. train/test loss and accuracy)
plot_learning_curves(train_loss, test_loss, train_acc, test_acc)

# report final accuracy
print('The model obtains an accuracy of %.2f%%' % (100*test_acc[-1]))

###Output
_____no_output_____

###Markdown
5. Improving the model

We will try to improve the model by improving the training time and mitigating the overfitting to some extent.

**Exercise**: Improve the model by implementing the following adjustments:
- Train the network based on [`Adam`](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam) optimization instead of stochastic gradient descent. The Adam optimizer adapts its learning rate over time and therefore improves convergence significantly. For more details on the algorithm, we refer to the [original published paper](https://arxiv.org/pdf/1412.6980.pdf). You can significantly reduce the learning rate (e.g. 0.0001) and the number of training epochs (e.g. 20).
- The first adjustment to avoid overfitting is to reduce the size of the network. At first sight this may seem strange, because it reduces the capacity of the network. However, large networks are more likely to focus on details in the training data because of the redundant number of neurons in the hidden layer. Experiment with smaller hidden representations (e.g. 32 or 16).
- The second adjustment to mitigate overfitting is [Dropout](https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html). During training, Dropout layers randomly switch off neurons (i.e. their value is temporarily set to zero). This forces the network to use the other neurons to make an appropriate decision. At test time, the dropout layers are obviously ignored and no neurons are switched off. The fraction of neurons that is switched off during training is called the dropout rate (e.g. 0.50). For more details, we refer to the [original published paper](https://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf).

###Code
# The network will inherit the Module class
class ImprovedNet(nn.Module):

    def __init__(self, n_features=512, p=0.5):
        super(ImprovedNet, self).__init__()
        """ INSERT CODE HERE """

    def forward(self, x):
        """ INSERT CODE HERE """
        return x

# initialize the network and print the architecture
""" INSERT CODE HERE """

# parameters
""" INSERT CODE HERE """

# Adam optimization
""" INSERT CODE HERE """

# start training
train_loss, test_loss, train_acc, test_acc = train_net(improved_net, train_loader, test_loader, loss_fn, optimizer, n_epochs)

# plot the learning curves (i.e. train/test loss and accuracy)
plot_learning_curves(train_loss, test_loss, train_acc, test_acc)

# report final accuracy
print('The model obtains an accuracy of %.2f%%' % (100*test_acc[-1]))

###Output
_____no_output_____
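###Markdown
For reference, a possible `ImprovedNet` is sketched below: a smaller hidden layer (32 units) combined with dropout, exactly as suggested in the exercise. It is one of many valid solutions, and it assumes two output classes (enzyme vs. non-enzyme).

###Code
# a sketch of an improved architecture with a smaller hidden layer and dropout
class ImprovedNetExample(nn.Module):

    def __init__(self, n_features=32, p=0.5):
        super(ImprovedNetExample, self).__init__()
        self.hidden = nn.Linear(num_proteins, n_features)
        self.dropout = nn.Dropout(p)
        self.output = nn.Linear(n_features, 2)

    def forward(self, x):
        x = F.relu(self.hidden(x))
        x = self.dropout(x)  # randomly switches off neurons during training
        x = self.output(x)
        return x

###Output
_____no_output_____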
###Markdown
6. Understanding the model

To gain more insight into the network, it can be useful to take a look at its hidden representations. To do this, you have to propagate a number of samples through the first hidden layer of the network and visualize them using dimensionality reduction techniques.

**Exercise**: Visualize the hidden representations of a batch of samples in 2D to gain more insight into the network's decision process:
- Compute the hidden representation of a batch of samples. To do this, you will have to select a batch, transform it into a torch Tensor and apply the hidden and relu layers of the network on the inputs. Since these are also modules, you can use them in a similar fashion as the original network.
- Extract the outputs of the network as a numpy array and apply dimensionality reduction. A commonly used dimensionality reduction method is the t-SNE algorithm.

###Code
# select a batch of samples
""" INSERT CODE HERE """

# compute the hidden representation of the batch
""" INSERT CODE HERE """

# reduce the dimensionality of the hidden representations
""" INSERT CODE HERE """

# visualize the reduced representations and label each sample
""" INSERT CODE HERE """

###Output
_____no_output_____

###Markdown
Another way to analyze the network is by checking which proteins cause the highest hidden activations in enzyme and non-enzyme samples. These features are discriminative for predicting the classes.

###Code
# isolate the positive and negative samples
h_pos = h[batch_labels == 1]
h_neg = h[batch_labels == 0]

# compute the mean activation
h_pos_mean = h_pos.mean(axis=0)
h_neg_mean = h_neg.mean(axis=0)

# sort the mean activations
i_pos = np.argsort(h_pos_mean)
i_neg = np.argsort(h_neg_mean)

# select the highest activations
n = 5
i_pos = i_pos[-n:][::-1]
i_neg = i_neg[-n:][::-1]

print('Discriminative features that result in high activation for enzyme prediction: ')
for i in i_pos:
    print(' - %s (mean activation value: %.3f)' % (proteins[i], h_pos_mean[i]))
print('Discriminative features that result in high activation for non-enzyme prediction: ')
for i in i_neg:
    print(' - %s (mean activation value: %.3f)' % (proteins[i], h_neg_mean[i]))

###Output
_____no_output_____
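###Markdown
To close this notebook, here is a sketch of how the visualization exercise above could be filled in. It assumes your network object (`net` here) exposes its first fully connected layer as an attribute called `hidden`; both names are placeholders that you should adapt to your own implementation.

###Code
# a sketch of the hidden-representation visualization with t-SNE
batch, batch_labels = next(iter(test_loader))
inputs = batch.float().to(device)
with torch.no_grad():
    h = F.relu(net.hidden(inputs)).cpu().numpy()  # assumes the layer is called `hidden`
batch_labels = batch_labels.numpy()

# reduce the hidden representations to 2D and plot them
h_2d = TSNE(n_components=2, random_state=seed).fit_transform(h)
plt.scatter(h_2d[:, 0], h_2d[:, 1], c=batch_labels, cmap='coolwarm', s=10)
plt.colorbar(label='non-enzyme (0) vs enzyme (1)')
plt.show()

###Output
_____no_output_____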
Model backlog/Deep Learning/[39th] Bi LSTM - Craw, GloVe - Adam.ipynb
###Markdown Dependencies ###Code import gc import os import random import warnings import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from gensim.models import KeyedVectors from sklearn import metrics from sklearn.model_selection import train_test_split from keras import optimizers from keras.models import Model from keras.callbacks import EarlyStopping, ReduceLROnPlateau, LearningRateScheduler from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.layers import Dense, Input, Embedding, Dropout, Activation, CuDNNGRU, CuDNNLSTM, Conv1D, Bidirectional, GlobalMaxPool1D, GlobalAveragePooling1D, SpatialDropout1D # Set seeds to make the experiment more reproducible. from tensorflow import set_random_seed def seed_everything(seed=0): random.seed(seed) os.environ['PYTHONHASHSEED'] = str(seed) np.random.seed(seed) set_random_seed(0) seed_everything() %matplotlib inline sns.set_style("whitegrid") pd.set_option('display.float_format', lambda x: '%.4f' % x) warnings.filterwarnings("ignore") train = pd.read_csv("../input/jigsaw-unintended-bias-in-toxicity-classification/train.csv") test = pd.read_csv("../input/jigsaw-unintended-bias-in-toxicity-classification/test.csv") print("Train shape : ", train.shape) print("Test shape : ", test.shape) ###Output Train shape : (1804874, 45) Test shape : (97320, 2) ###Markdown Preprocess ###Code train['target'] = np.where(train['target'] >= 0.5, 1, 0) train['comment_text'] = train['comment_text'].astype(str) X_test = test['comment_text'].astype(str) # Lower comments train['comment_text'] = train['comment_text'].apply(lambda x: x.lower()) X_test = X_test.apply(lambda x: x.lower()) # Mapping Punctuation def map_punctuation(data): punct_mapping = {"_":" ", "`":" ", "‘": "'", "₹": "e", "´": "'", "°": "", "€": "e", "™": "tm", "√": " sqrt ", "×": "x", "²": "2", "—": "-", "–": "-", "’": "'", "_": "-", "`": "'", '“': '"', '”': '"', '“': '"', "£": "e", '∞': 'infinity', 'θ': 'theta', '÷': '/', 'α': 'alpha', '•': '.', 'à': 'a', '−': '-', 'β': 'beta', '∅': '', '³': '3', 'π': 'pi'} def clean_special_chars(text, mapping): for p in mapping: text = text.replace(p, mapping[p]) return text return data.apply(lambda x: clean_special_chars(x, punct_mapping)) train['comment_text'] = map_punctuation(train['comment_text']) X_test = map_punctuation(X_test) # Removing Punctuation def remove_punctuation(data): punct = "/-'?!.,#$%\'()*+-/:;<=>@[\\]^_`{|}~`" + '""“”’' + '∞θ÷α•à−β∅³π‘₹´°£€\×™√²—–&' def clean_special_chars(text, punct): for p in punct: text = text.replace(p, ' ') return text return data.apply(lambda x: clean_special_chars(x, punct)) train['comment_text'] = remove_punctuation(train['comment_text']) X_test = remove_punctuation(X_test) # Clean contractions def clean_contractions(text): specials = ["’", "‘", "´", "`"] for s in specials: text = text.replace(s, "'") return text train['comment_text'] = train['comment_text'].apply(lambda x: clean_contractions(x)) X_test = X_test.apply(lambda x: clean_contractions(x)) # Mapping contraction def map_contraction(data): contraction_mapping = {"trump's": 'trump is', "'cause": 'because', ',cause': 'because', ';cause': 'because', "ain't": 'am not', 'ain,t': 'am not', 'ain;t': 'am not', 'ain´t': 'am not', 'ain’t': 'am not', "aren't": 'are not', 'aren,t': 'are not', 'aren;t': 'are not', 'aren´t': 'are not', 'aren’t': 'are not', "can't": 'cannot', "can't've": 'cannot have', 'can,t': 'cannot', 'can,t,ve': 'cannot have', 'can;t': 'cannot', 'can;t;ve': 
'cannot have', 'can´t': 'cannot', 'can´t´ve': 'cannot have', 'can’t': 'cannot', 'can’t’ve': 'cannot have', "could've": 'could have', 'could,ve': 'could have', 'could;ve': 'could have', "couldn't": 'could not', "couldn't've": 'could not have', 'couldn,t': 'could not', 'couldn,t,ve': 'could not have', 'couldn;t': 'could not', 'couldn;t;ve': 'could not have', 'couldn´t': 'could not', 'couldn´t´ve': 'could not have', 'couldn’t': 'could not', 'couldn’t’ve': 'could not have', 'could´ve': 'could have', 'could’ve': 'could have', "didn't": 'did not', 'didn,t': 'did not', 'didn;t': 'did not', 'didn´t': 'did not', 'didn’t': 'did not', "doesn't": 'does not', 'doesn,t': 'does not', 'doesn;t': 'does not', 'doesn´t': 'does not', 'doesn’t': 'does not', "don't": 'do not', 'don,t': 'do not', 'don;t': 'do not', 'don´t': 'do not', 'don’t': 'do not', "hadn't": 'had not', "hadn't've": 'had not have', 'hadn,t': 'had not', 'hadn,t,ve': 'had not have', 'hadn;t': 'had not', 'hadn;t;ve': 'had not have', 'hadn´t': 'had not', 'hadn´t´ve': 'had not have', 'hadn’t': 'had not', 'hadn’t’ve': 'had not have', "hasn't": 'has not', 'hasn,t': 'has not', 'hasn;t': 'has not', 'hasn´t': 'has not', 'hasn’t': 'has not', "haven't": 'have not', 'haven,t': 'have not', 'haven;t': 'have not', 'haven´t': 'have not', 'haven’t': 'have not', "he'd": 'he would', "he'd've": 'he would have', "he'll": 'he will', "he's": 'he is', 'he,d': 'he would', 'he,d,ve': 'he would have', 'he,ll': 'he will', 'he,s': 'he is', 'he;d': 'he would', 'he;d;ve': 'he would have', 'he;ll': 'he will', 'he;s': 'he is', 'he´d': 'he would', 'he´d´ve': 'he would have', 'he´ll': 'he will', 'he´s': 'he is', 'he’d': 'he would', 'he’d’ve': 'he would have', 'he’ll': 'he will', 'he’s': 'he is', "how'd": 'how did', "how'll": 'how will', "how's": 'how is', 'how,d': 'how did', 'how,ll': 'how will', 'how,s': 'how is', 'how;d': 'how did', 'how;ll': 'how will', 'how;s': 'how is', 'how´d': 'how did', 'how´ll': 'how will', 'how´s': 'how is', 'how’d': 'how did', 'how’ll': 'how will', 'how’s': 'how is', "i'd": 'i would', "i'll": 'i will', "i'm": 'i am', "i've": 'i have', 'i,d': 'i would', 'i,ll': 'i will', 'i,m': 'i am', 'i,ve': 'i have', 'i;d': 'i would', 'i;ll': 'i will', 'i;m': 'i am', 'i;ve': 'i have', "isn't": 'is not', 'isn,t': 'is not', 'isn;t': 'is not', 'isn´t': 'is not', 'isn’t': 'is not', "it'd": 'it would', "it'll": 'it will', "it's": 'it is', 'it,d': 'it would', 'it,ll': 'it will', 'it,s': 'it is', 'it;d': 'it would', 'it;ll': 'it will', 'it;s': 'it is', 'it´d': 'it would', 'it´ll': 'it will', 'it´s': 'it is', 'it’d': 'it would', 'it’ll': 'it will', 'it’s': 'it is', 'i´d': 'i would', 'i´ll': 'i will', 'i´m': 'i am', 'i´ve': 'i have', 'i’d': 'i would', 'i’ll': 'i will', 'i’m': 'i am', 'i’ve': 'i have', "let's": 'let us', 'let,s': 'let us', 'let;s': 'let us', 'let´s': 'let us', 'let’s': 'let us', "ma'am": 'madam', 'ma,am': 'madam', 'ma;am': 'madam', "mayn't": 'may not', 'mayn,t': 'may not', 'mayn;t': 'may not', 'mayn´t': 'may not', 'mayn’t': 'may not', 'ma´am': 'madam', 'ma’am': 'madam', "might've": 'might have', 'might,ve': 'might have', 'might;ve': 'might have', "mightn't": 'might not', 'mightn,t': 'might not', 'mightn;t': 'might not', 'mightn´t': 'might not', 'mightn’t': 'might not', 'might´ve': 'might have', 'might’ve': 'might have', "must've": 'must have', 'must,ve': 'must have', 'must;ve': 'must have', "mustn't": 'must not', 'mustn,t': 'must not', 'mustn;t': 'must not', 'mustn´t': 'must not', 'mustn’t': 'must not', 'must´ve': 'must have', 'must’ve': 'must have', 
"needn't": 'need not', 'needn,t': 'need not', 'needn;t': 'need not', 'needn´t': 'need not', 'needn’t': 'need not', "oughtn't": 'ought not', 'oughtn,t': 'ought not', 'oughtn;t': 'ought not', 'oughtn´t': 'ought not', 'oughtn’t': 'ought not', "sha'n't": 'shall not', 'sha,n,t': 'shall not', 'sha;n;t': 'shall not', "shan't": 'shall not', 'shan,t': 'shall not', 'shan;t': 'shall not', 'shan´t': 'shall not', 'shan’t': 'shall not', 'sha´n´t': 'shall not', 'sha’n’t': 'shall not', "she'd": 'she would', "she'll": 'she will', "she's": 'she is', 'she,d': 'she would', 'she,ll': 'she will', 'she,s': 'she is', 'she;d': 'she would', 'she;ll': 'she will', 'she;s': 'she is', 'she´d': 'she would', 'she´ll': 'she will', 'she´s': 'she is', 'she’d': 'she would', 'she’ll': 'she will', 'she’s': 'she is', "should've": 'should have', 'should,ve': 'should have', 'should;ve': 'should have', "shouldn't": 'should not', 'shouldn,t': 'should not', 'shouldn;t': 'should not', 'shouldn´t': 'should not', 'shouldn’t': 'should not', 'should´ve': 'should have', 'should’ve': 'should have', "that'd": 'that would', "that's": 'that is', 'that,d': 'that would', 'that,s': 'that is', 'that;d': 'that would', 'that;s': 'that is', 'that´d': 'that would', 'that´s': 'that is', 'that’d': 'that would', 'that’s': 'that is', "there'd": 'there had', "there's": 'there is', 'there,d': 'there had', 'there,s': 'there is', 'there;d': 'there had', 'there;s': 'there is', 'there´d': 'there had', 'there´s': 'there is', 'there’d': 'there had', 'there’s': 'there is', "they'd": 'they would', "they'll": 'they will', "they're": 'they are', "they've": 'they have', 'they,d': 'they would', 'they,ll': 'they will', 'they,re': 'they are', 'they,ve': 'they have', 'they;d': 'they would', 'they;ll': 'they will', 'they;re': 'they are', 'they;ve': 'they have', 'they´d': 'they would', 'they´ll': 'they will', 'they´re': 'they are', 'they´ve': 'they have', 'they’d': 'they would', 'they’ll': 'they will', 'they’re': 'they are', 'they’ve': 'they have', "wasn't": 'was not', 'wasn,t': 'was not', 'wasn;t': 'was not', 'wasn´t': 'was not', 'wasn’t': 'was not', "we'd": 'we would', "we'll": 'we will', "we're": 'we are', "we've": 'we have', 'we,d': 'we would', 'we,ll': 'we will', 'we,re': 'we are', 'we,ve': 'we have', 'we;d': 'we would', 'we;ll': 'we will', 'we;re': 'we are', 'we;ve': 'we have', "weren't": 'were not', 'weren,t': 'were not', 'weren;t': 'were not', 'weren´t': 'were not', 'weren’t': 'were not', 'we´d': 'we would', 'we´ll': 'we will', 'we´re': 'we are', 'we´ve': 'we have', 'we’d': 'we would', 'we’ll': 'we will', 'we’re': 'we are', 'we’ve': 'we have', "what'll": 'what will', "what're": 'what are', "what's": 'what is', "what've": 'what have', 'what,ll': 'what will', 'what,re': 'what are', 'what,s': 'what is', 'what,ve': 'what have', 'what;ll': 'what will', 'what;re': 'what are', 'what;s': 'what is', 'what;ve': 'what have', 'what´ll': 'what will', 'what´re': 'what are', 'what´s': 'what is', 'what´ve': 'what have', 'what’ll': 'what will', 'what’re': 'what are', 'what’s': 'what is', 'what’ve': 'what have', "where'd": 'where did', "where's": 'where is', 'where,d': 'where did', 'where,s': 'where is', 'where;d': 'where did', 'where;s': 'where is', 'where´d': 'where did', 'where´s': 'where is', 'where’d': 'where did', 'where’s': 'where is', "who'll": 'who will', "who's": 'who is', 'who,ll': 'who will', 'who,s': 'who is', 'who;ll': 'who will', 'who;s': 'who is', 'who´ll': 'who will', 'who´s': 'who is', 'who’ll': 'who will', 'who’s': 'who is', "won't": 'will not', 'won,t': 'will 
not', 'won;t': 'will not', 'won´t': 'will not', 'won’t': 'will not', "wouldn't": 'would not', 'wouldn,t': 'would not', 'wouldn;t': 'would not', 'wouldn´t': 'would not', 'wouldn’t': 'would not', "you'd": 'you would', "you'll": 'you will', "you're": 'you are', 'you,d': 'you would', 'you,ll': 'you will', 'you,re': 'you are', 'you;d': 'you would', 'you;ll': 'you will', 'you;re': 'you are', 'you´d': 'you would', 'you´ll': 'you will', 'you´re': 'you are', 'you’d': 'you would', 'you’ll': 'you will', 'you’re': 'you are', '´cause': 'because', '’cause': 'because', "you've": 'you have', "could'nt": 'could not', "havn't": 'have not', 'here’s': 'here is', 'i""m': 'i am', "i'am": 'i am', "i'l": 'i will', "i'v": 'i have', "wan't": 'want', "was'nt": 'was not', "who'd": 'who would', "who're": 'who are', "who've": 'who have', "why'd": 'why would', "would've": 'would have', "y'all": 'you all', "y'know": 'you know', 'you.i': 'you i', "your'e": 'you are', "arn't": 'are not', "agains't": 'against', "c'mon": 'common', "doens't": 'does not', 'don""t': 'do not', "dosen't": 'does not', "dosn't": 'does not', "shoudn't": 'should not', "that'll": 'that will', "there'll": 'there will', "there're": 'there are', "this'll": 'this all', "u're": 'you are', "ya'll": 'you all', "you'r": 'you are', 'you’ve': 'you have', "d'int": 'did not', "did'nt": 'did not', "din't": 'did not', "dont't": 'do not', "gov't": 'government', "i'ma": 'i am', "is'nt": 'is not', '‘i': 'i', 'ᴀɴᴅ': 'and', 'ᴛʜᴇ': 'the', 'ʜᴏᴍᴇ': 'home', 'ᴜᴘ': 'up', 'ʙʏ': 'by', 'ᴀᴛ': 'at', '…and': 'and', 'civilbeat': 'civil beat', 'trumpcare': 'trump care', 'obamacare': 'obama care', 'ᴄʜᴇᴄᴋ': 'check', 'ғᴏʀ': 'for', 'ᴛʜɪs': 'this', 'ᴄᴏᴍᴘᴜᴛᴇʀ': 'computer', 'ᴍᴏɴᴛʜ': 'month', 'ᴡᴏʀᴋɪɴɢ': 'working', 'ᴊᴏʙ': 'job', 'ғʀᴏᴍ': 'from', 'sᴛᴀʀᴛ': 'start', 'gubmit': 'submit', 'co₂': 'carbon dioxide', 'ғɪʀsᴛ': 'first', 'ᴇɴᴅ': 'end', 'ᴄᴀɴ': 'can', 'ʜᴀᴠᴇ': 'have', 'ᴛᴏ': 'to', 'ʟɪɴᴋ': 'link', 'ᴏғ': 'of', 'ʜᴏᴜʀʟʏ': 'hourly', 'ᴡᴇᴇᴋ': 'week', 'ᴇxᴛʀᴀ': 'extra', 'gʀᴇᴀᴛ': 'great', 'sᴛᴜᴅᴇɴᴛs': 'student', 'sᴛᴀʏ': 'stay', 'ᴍᴏᴍs': 'mother', 'ᴏʀ': 'or', 'ᴀɴʏᴏɴᴇ': 'anyone', 'ɴᴇᴇᴅɪɴɢ': 'needing', 'ᴀɴ': 'an', 'ɪɴᴄᴏᴍᴇ': 'income', 'ʀᴇʟɪᴀʙʟᴇ': 'reliable', 'ʏᴏᴜʀ': 'your', 'sɪɢɴɪɴɢ': 'signing', 'ʙᴏᴛᴛᴏᴍ': 'bottom', 'ғᴏʟʟᴏᴡɪɴɢ': 'following', 'mᴀᴋᴇ': 'make', 'ᴄᴏɴɴᴇᴄᴛɪᴏɴ': 'connection', 'ɪɴᴛᴇʀɴᴇᴛ': 'internet', 'financialpost': 'financial post', 'ʜaᴠᴇ': ' have ', 'ᴄaɴ': ' can ', 'maᴋᴇ': ' make ', 'ʀᴇʟɪaʙʟᴇ': ' reliable ', 'ɴᴇᴇᴅ': ' need ', 'ᴏɴʟʏ': ' only ', 'ᴇxᴛʀa': ' extra ', 'aɴ': ' an ', 'aɴʏᴏɴᴇ': ' anyone ', 'sᴛaʏ': ' stay ', 'sᴛaʀᴛ': ' start', 'shopo': 'shop'} def clean_special_chars(text, mapping): for p in mapping: text = text.replace(p, mapping[p]) return text return data.apply(lambda x: clean_special_chars(x, contraction_mapping)) train['comment_text'] = map_contraction(train['comment_text']) X_test = map_contraction(X_test) # Mapping misspelling def map_misspelling(data): misspelling_mapping = {'sb91': 'senate bill', 'trump': 'trump', 'utmterm': 'utm term', 'fakenews': 'fake news', 'gʀᴇat': 'great', 'ʙᴏᴛtoᴍ': 'bottom', 'washingtontimes': 'washington times', 'garycrum': 'gary crum', 'htmlutmterm': 'html utm term', 'rangermc': 'car', 'tfws': 'tuition fee waiver', 'sjws': 'social justice warrior', 'koncerned': 'concerned', 'vinis': 'vinys', 'yᴏᴜ': 'you', 'trumpsters': 'trump', 'trumpian': 'trump', 'bigly': 'big league', 'trumpism': 'trump', 'yoyou': 'you', 'auwe': 'wonder', 'drumpf': 'trump', 'brexit': 'british exit', 'utilitas': 'utilities', 'ᴀ': 'a', '😉': 'wink', '😂': 'joy', '😀': 'stuck out tongue', 
'theguardian': 'the guardian', 'deplorables': 'deplorable', 'theglobeandmail': 'the globe and mail', 'justiciaries': 'justiciary', 'creditdation': 'accreditation', 'doctrne': 'doctrine', 'fentayal': 'fentanyl', 'designation-': 'designation', 'conartist': 'con-artist', 'mutilitated': 'mutilated', 'obumblers': 'bumblers', 'negotiatiations': 'negotiations', 'dood-': 'dood', 'irakis': 'iraki', 'cooerate': 'cooperate', 'cox': 'cox', 'racistcomments': 'racist comments', 'envirnmetalists': 'environmentalists'} def clean_special_chars(text, mapping): for p in mapping: text = text.replace(p, mapping[p]) return text return data.apply(lambda x: clean_special_chars(x, misspelling_mapping)) train['comment_text'] = map_misspelling(train['comment_text']) X_test = map_misspelling(X_test) # Train/validation split train_ids, val_ids = train_test_split(train['id'], test_size=0.2, random_state=2019) train_df = pd.merge(train_ids.to_frame(), train) validate_df = pd.merge(val_ids.to_frame(), train) Y_train = train_df['target'].values Y_val = validate_df['target'].values X_train = train_df['comment_text'] X_val = validate_df['comment_text'] # Hyper parameters maxlen = 220 # max number of words in a question to use embed_size = 250 # how big is each word vector max_features = 410047 # how many unique words to use (i.e num rows in embedding vector) learning_rate = 0.001 epochs = 5 batch_size = 512 es_patience = 3 rlr_patience = 2 decay_factor = 0.25 # Fill missing values X_train = X_train.fillna("_na_").values X_val = X_val.fillna("_na_").values X_test = X_test.fillna("_na_").values # Tokenize the sentences tokenizer = Tokenizer(num_words=max_features) tokenizer.fit_on_texts(list(X_train)) X_train = tokenizer.texts_to_sequences(X_train) X_val = tokenizer.texts_to_sequences(X_val) X_test = tokenizer.texts_to_sequences(X_test) # Pad the sentences X_train = pad_sequences(X_train, maxlen=maxlen) X_val = pad_sequences(X_val, maxlen=maxlen) X_test = pad_sequences(X_test, maxlen=maxlen) ###Output _____no_output_____ ###Markdown Loading Embedding ###Code def get_coefs(word, *arr): return word, np.asarray(arr, dtype='float32') def load_embeddings(path): emb_arr = KeyedVectors.load(path) return emb_arr def build_matrix(word_index, path): embedding_index = load_embeddings(path) embedding_matrix = np.zeros((len(word_index) + 1, 300)) unknown_words = [] for word, i in word_index.items(): if i <= max_features: try: embedding_matrix[i] = embedding_index[word] except KeyError: try: embedding_matrix[i] = embedding_index[word.lower()] except KeyError: try: embedding_matrix[i] = embedding_index[word.title()] except KeyError: unknown_words.append(word) return embedding_matrix, unknown_words glove_path = '../input/gensim-embeddings-dataset/glove.840B.300d.gensim' craw_path = '../input/gensim-embeddings-dataset/crawl-300d-2M.gensim' glove_embedding_matrix, glove_unknown_words = build_matrix(tokenizer.word_index, glove_path) print('n unknown words (GloVe): ', len(glove_unknown_words)) craw_embedding_matrix, craw_unknown_words = build_matrix(tokenizer.word_index, craw_path) print('n unknown words (Crawl): ', len(craw_unknown_words)) embedding_matrix = np.concatenate([glove_embedding_matrix, craw_embedding_matrix], axis=-1) del glove_embedding_matrix, craw_embedding_matrix gc.collect() ###Output n unknown words (GloVe): 117166 n unknown words (Crawl): 116009 ###Markdown Model ###Code inp = Input(shape=(maxlen,)) x = Embedding(*embedding_matrix.shape, weights=[embedding_matrix], trainable=False)(inp) x = SpatialDropout1D(0.3)(x) x = 
Bidirectional(CuDNNLSTM(128, return_sequences=True))(x) x = Bidirectional(CuDNNLSTM(256, return_sequences=True))(x) # x = GlobalAveragePooling1D()(x) x = GlobalMaxPool1D()(x) x = Dense(512, activation="relu")(x) x = Dense(512, activation="relu")(x) x = Dense(1, activation="sigmoid")(x) model = Model(inputs=inp, outputs=x) optimizer = optimizers.Adam(lr=learning_rate) model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy']) model.summary() es = EarlyStopping(monitor='val_loss', mode='min', patience=es_patience, restore_best_weights=True, verbose=1) rlrop = ReduceLROnPlateau(monitor='val_loss', mode='min', patience=rlr_patience, factor=decay_factor, min_lr=1e-6, verbose=1) history = model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_val, Y_val), callbacks=[es, rlrop]) fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 8)) ax1.plot(history.history['acc'], label='Train Accuracy') ax1.plot(history.history['val_acc'], label='Validation accuracy') ax1.legend(loc='best') ax1.set_title('Accuracy') ax2.plot(history.history['loss'], label='Train loss') ax2.plot(history.history['val_loss'], label='Validation loss') ax2.legend(loc='best') ax2.set_title('Loss') plt.xlabel('Epochs') sns.despine() plt.show() ###Output _____no_output_____ ###Markdown Model evaluation ###Code identity_columns = [ 'male', 'female', 'homosexual_gay_or_lesbian', 'christian', 'jewish', 'muslim', 'black', 'white', 'psychiatric_or_mental_illness'] # Convert taget and identity columns to booleans def convert_to_bool(df, col_name): df[col_name] = np.where(df[col_name] >= 0.5, True, False) def convert_dataframe_to_bool(df): bool_df = df.copy() for col in ['target'] + identity_columns: convert_to_bool(bool_df, col) return bool_df SUBGROUP_AUC = 'subgroup_auc' BPSN_AUC = 'bpsn_auc' # stands for background positive, subgroup negative BNSP_AUC = 'bnsp_auc' # stands for background negative, subgroup positive def compute_auc(y_true, y_pred): try: return metrics.roc_auc_score(y_true, y_pred) except ValueError: return np.nan def compute_subgroup_auc(df, subgroup, label, model_name): subgroup_examples = df[df[subgroup]] return compute_auc(subgroup_examples[label], subgroup_examples[model_name]) def compute_bpsn_auc(df, subgroup, label, model_name): """Computes the AUC of the within-subgroup negative examples and the background positive examples.""" subgroup_negative_examples = df[df[subgroup] & ~df[label]] non_subgroup_positive_examples = df[~df[subgroup] & df[label]] examples = subgroup_negative_examples.append(non_subgroup_positive_examples) return compute_auc(examples[label], examples[model_name]) def compute_bnsp_auc(df, subgroup, label, model_name): """Computes the AUC of the within-subgroup positive examples and the background negative examples.""" subgroup_positive_examples = df[df[subgroup] & df[label]] non_subgroup_negative_examples = df[~df[subgroup] & ~df[label]] examples = subgroup_positive_examples.append(non_subgroup_negative_examples) return compute_auc(examples[label], examples[model_name]) def compute_bias_metrics_for_model(dataset, subgroups, model, label_col, include_asegs=False): """Computes per-subgroup metrics for all subgroups and one model.""" records = [] for subgroup in subgroups: record = { 'subgroup': subgroup, 'subgroup_size': len(dataset[dataset[subgroup]]) } record[SUBGROUP_AUC] = compute_subgroup_auc(dataset, subgroup, label_col, model) record[BPSN_AUC] = compute_bpsn_auc(dataset, subgroup, label_col, model) record[BNSP_AUC] = 
compute_bnsp_auc(dataset, subgroup, label_col, model) records.append(record) return pd.DataFrame(records).sort_values('subgroup_auc', ascending=True) # validate_df = pd.merge(val_ids.to_frame(), train) validate_df['preds'] = model.predict(X_val) validate_df = convert_dataframe_to_bool(validate_df) bias_metrics_df = compute_bias_metrics_for_model(validate_df, identity_columns, 'preds', 'target') print('Validation bias metric by group') display(bias_metrics_df) def power_mean(series, p): total = sum(np.power(series, p)) return np.power(total / len(series), 1 / p) def get_final_metric(bias_df, overall_auc, POWER=-5, OVERALL_MODEL_WEIGHT=0.25): bias_score = np.average([ power_mean(bias_df[SUBGROUP_AUC], POWER), power_mean(bias_df[BPSN_AUC], POWER), power_mean(bias_df[BNSP_AUC], POWER) ]) return (OVERALL_MODEL_WEIGHT * overall_auc) + ((1 - OVERALL_MODEL_WEIGHT) * bias_score) # train_df = pd.merge(train_ids.to_frame(), train) train_df['preds'] = model.predict(X_train) train_df = convert_dataframe_to_bool(train_df) print('Train ROC AUC: %.4f' % get_final_metric(bias_metrics_df, metrics.roc_auc_score(train_df['target'].values, train_df['preds'].values))) print('Validation ROC AUC: %.4f' % get_final_metric(bias_metrics_df, metrics.roc_auc_score(validate_df['target'].values, validate_df['preds'].values))) ###Output Train ROC AUC: 0.9327 Validation ROC AUC: 0.9296 ###Markdown Predictions ###Code Y_test = model.predict(X_test) submission = pd.read_csv('../input/jigsaw-unintended-bias-in-toxicity-classification/sample_submission.csv') submission['prediction'] = Y_test submission.to_csv('submission.csv', index=False) submission.head(10) ###Output _____no_output_____
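###Markdown
A quick worked example of the generalized power mean used above (with made-up AUC values): with p = -5 the mean is pulled toward the weakest subgroup, which is exactly why it is used for the bias metric.

###Code
# illustrative power mean computation with hypothetical subgroup AUCs
aucs = np.array([0.95, 0.93, 0.80])
p = -5
print(np.power(np.mean(np.power(aucs, p)), 1 / p))  # ~0.877, pulled toward the 0.80 minimum
print(aucs.mean())                                  # ~0.893, the plain arithmetic mean

###Output
_____no_output_____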
logi-gsmap/Untitled.ipynb
###Markdown Test ###Code import numpy as np import netCDF4 as nc4 f = nc4.Dataset('jabar_gsmap_1hr.nc','r') print(f) ###Output <class 'netCDF4._netCDF4.Dataset'> root group (NETCDF3_CLASSIC data model, file format NETCDF3): dimensions(sizes): lon(48), lat(32), time(122689) variables(dimensions): float64 lon(lon), float64 lat(lat), float64 time(time), float32 precip(time,lat,lon) groups:
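###Markdown
The printed metadata shows a `precip` variable with dimensions (time, lat, lon). A short follow-up sketch to inspect it:

###Code
# inspect the precipitation variable listed in the metadata above
precip = f.variables['precip']
print(precip.shape)  # expected: (122689, 32, 48)
print(f.variables['lat'][:5], f.variables['lon'][:5])  # first coordinate values

###Output
_____no_output_____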
guides/notebooks/categorical-encoding-DEMO.ipynb
###Markdown Categorical Encoding Demo and ExamplesThis is a Jupyter notebook for exploring the categorical-encoding library discussed in a [Feature Labs article] I wrote on the topic.To use the library, make sure you have the `categorical-encoding` and `featuretools` libraries installed. Encoder API ###Code import categorical_encoding as ce import featuretools as ft from featuretools.tests.testing_utils import make_ecommerce_entityset es = make_ecommerce_entityset() f1 = ft.Feature(es["log"]["product_id"]) f2 = ft.Feature(es["log"]["purchased"]) f3 = ft.Feature(es["log"]["value"]) f4 = ft.Feature(es["log"]["countrycode"]) features = [f1, f2, f3, f4] ids = [0, 1, 2, 3, 4, 5] feature_matrix = ft.calculate_feature_matrix(features, es, instance_ids=ids) print(feature_matrix) ###Output product_id purchased value countrycode id 0 coke zero True 0.0 US 1 coke zero True 5.0 US 2 coke zero True 10.0 US 3 car True 15.0 US 4 car True 20.0 US 5 toothpaste True 0.0 AL ###Markdown Performing a train-test split is standard in machine learning pipelines. Here, I've just simulated an actual train-test split by randomly picking certain rows to be train or test data. ###Code train_data = feature_matrix.iloc[[0, 1, 4, 5]] print(train_data) test_data = feature_matrix.iloc[[2, 3]] print(test_data) ###Output product_id purchased value countrycode id 2 coke zero True 10.0 US 3 car True 15.0 US ###Markdown Next up, we initialize and call the encoder on our data. ###Code enc = ce.Encoder(method='leave_one_out') train_enc = enc.fit_transform(train_data, features, train_data['value']) test_enc = enc.transform(test_data) print(train_enc) print(test_enc) ###Output PRODUCT_ID_leave_one_out purchased value COUNTRYCODE_leave_one_out id 2 2.50 True 10.0 8.333333 3 6.25 True 15.0 8.333333 ###Markdown Note how that the encoder only uses the training data to learn its encoding, and the test data is directly encoded using the learning mappings.Now, we typically would have to redo the entire categorical encoding process for the following feature matrix. ###Code fm2 = ft.calculate_feature_matrix(features, es, instance_ids=[6,7]) print(fm2) ###Output product_id purchased value countrycode id 6 toothpaste True 1.0 AL 7 toothpaste True 2.0 AL ###Markdown However, by integration with Featuretools, we can generate already encoded data. ###Code features_encoded = enc.get_features() fm2_encoded = ft.calculate_feature_matrix(features_encoded, es, instance_ids=[6,7]) print(fm2_encoded) ###Output PRODUCT_ID_leave_one_out purchased value COUNTRYCODE_leave_one_out id 6 6.25 True 1.0 6.25 7 6.25 True 2.0 6.25 ###Markdown Encoding Methods ExamplesFor reference, here is our original encoder: ###Code feature_matrix ###Output _____no_output_____ ###Markdown Classic Encoders ###Code # Creates a new column for each unique value. enc_one_hot = ce.Encoder(method='one_hot') fm_enc_one_hot = enc_one_hot.fit_transform(feature_matrix, features) fm_enc_one_hot # Each unique string value is assigned a counting number specific to that value. enc_ord = ce.Encoder(method='ordinal') fm_enc_ord = enc_ord.fit_transform(feature_matrix, features) fm_enc_ord # The categories' values are first Ordinal encoded, # the resulting integers are converted to binary, # then the resulting digits are split into columns. 
enc_bin = ce.Encoder(method='binary') fm_enc_bin = enc_bin.fit_transform(feature_matrix, features) fm_enc_bin # use a hashing algorithm to map category values to corresponding columns enc_hash = ce.Encoder(method='hashing') fm_enc_hash = enc_hash.fit_transform(feature_matrix, features) fm_enc_hash ###Output _____no_output_____ ###Markdown Bayesian Encoders ###Code # replaces each specific category value with a weighted average of the dependent variable. enc_targ = ce.Encoder(method='target') fm_enc_targ = enc_targ.fit_transform(feature_matrix, features, feature_matrix['value']) fm_enc_targ # identical to target except leaves own row out when calculating average enc_leave = ce.Encoder(method='leave_one_out') fm_enc_leave = enc_leave.fit_transform(feature_matrix, features, feature_matrix['value']) fm_enc_leave ###Output _____no_output_____
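###Markdown
To make the binary encoder's behavior concrete, here is a hand-rolled sketch of the same idea on hypothetical ordinal codes: each code is written in base 2 and the bits become columns. (The category-to-code assignment in the comment is illustrative, not the library's actual mapping.)

###Code
import pandas as pd

# ordinal codes 1..3 written as two binary digits, one column per digit
codes = pd.Series([1, 2, 3, 1])  # e.g. coke zero=1, car=2, toothpaste=3
bits = codes.apply(lambda c: format(c, '02b'))
print(pd.DataFrame({'bit_0': bits.str[0].astype(int),
                    'bit_1': bits.str[1].astype(int)}))

###Output
_____no_output_____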
thPyCh1.ipynb
###Markdown Think Python 2 Outline: Chapter 1 Problem solving: the ability to formulate problems, think creatively about solutions, and express a solution clearly and accurately. The process of learning to program is an excellent opportunity to practice these skills. What is a Program?******What is it to you? ________________- input- output- math- conditional execution- repetition The Environment that you write Python:***- directly into interpreter - the REPL- IDLE- web-based eg. PythonAnywhere- Local Development Environment, like Sublime Text or PyCharm- Local/web hybrid: Jupyter ###Code #First Program! print("Hello World!") #Arithmetic Operators: 40+2 6*7 2**3 #This is exponentiation! #Fill in your own: ______________ ###Output _____no_output_____
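###Markdown
A few more arithmetic operators worth trying (an add-on to the outline above): division comes in two flavors, and `%` gives the remainder.

###Code
#More arithmetic to try (results shown as comments):
print(7 / 2)   # 3.5 -> true division always returns a float
print(7 // 2)  # 3   -> floor division
print(7 % 2)   # 1   -> remainder (modulus)

###Output
_____no_output_____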
DJI_rnn1.ipynb
###Markdown [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lior0110/main/blob/master/DJI_rnn1.ipynb) ###Code import os if not os.path.exists('main'): os.system('git clone https://github.com/lior0110/main/') os.chdir('main') import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt # read the given data data = pd.read_csv('DJI2.csv', index_col=0, parse_dates=True) data.head() data.info() data['Adj Close'].plot() # see if 'Adj Close' is the same as 'Close' # if yes drop 'Adj Close' test = data['Adj Close'] == data['Close'] if all(data['Adj Close'] == data['Close']): data = data.drop(columns='Adj Close') data.head() data.tail() data.columns data.shape # make data and target arrays target_style = 'Price' # / 'Price' / 'Change' / 'ChangeP' / 'UD' window_len = 100 # window length to use for prediction useVolume = True y = np.zeros(data.shape[0]-window_len) if useVolume: X = np.zeros((data.shape[0]-window_len,window_len,data.shape[1])) else: X = np.zeros((data.shape[0]-window_len,window_len,data.shape[1]-1)) for i in range(data.shape[0]-window_len): if target_style == 'Change': # target of Change for next day y[i] = data['Close'][i+window_len] - data['Close'][i+window_len-1] elif target_style == 'ChangeP': # target of Change for next day in percentage y[i] = (data['Close'][i+window_len] - data['Close'][i+window_len-1]) / data['Close'][i+window_len-1] elif target_style == 'UD': # target of Up/Down for next day if (data['Close'][i+window_len] - data['Close'][i+window_len-1]) > 0: y[i] = 1 elif (data['Close'][i+window_len] - data['Close'][i+window_len-1]) < 0: y[i] = -1 else: y[i] = 0 # y[i] = 1 if (data['Close'][i+window_len] - data['Close'][i+window_len-1]) > 0 else -1 else: # target of Price for next day y[i] = data['Close'][i+window_len] if useVolume: X[i,:,:] = data.iloc[i:i+window_len].values else: X[i,:,:] = data.iloc[i:i+window_len,:-1].values plt.plot(y) plt.show() X[-1,-10:,:] y[-1] # train test split y_train = y[:int(0.7*len(y))] y_valid = y[int(0.7*len(y)):int(0.85*len(y))] y_test = y[int(0.85*len(y)):] X_train = X[:int(0.7*len(X)),:,:] X_valid = X[int(0.7*len(X)):int(0.85*len(X)),:,:] X_test = X[int(0.85*len(X)):,:,:] # get the max and min in the train data maxPrice = np.max(X_train[:,:,:-1]) print('max Price: ',maxPrice) minPrice = np.min(X_train[:,:,:-1]) print('min Price: ',minPrice) if useVolume: maxVolume = np.max(X_train[:,:,-1]) print('max Volume: ',maxVolume) minVolume = np.min(X_train[:,:,-1]) print('min Volume: ',minVolume) # data scaling if useVolume: X_train[:,:,:-1] = (X_train[:,:,:-1] - minPrice) / (maxPrice - minPrice) X_train[:,:,-1] = (X_train[:,:,-1] - minVolume) / (maxVolume - minVolume) X_valid[:,:,:-1] = (X_valid[:,:,:-1] - minPrice) / (maxPrice - minPrice) X_valid[:,:,-1] = (X_valid[:,:,-1] - minVolume) / (maxVolume - minVolume) X_test[:,:,:-1] = (X_test[:,:,:-1] - minPrice) / (maxPrice - minPrice) X_test[:,:,-1] = (X_test[:,:,-1] - minVolume) / (maxVolume - minVolume) else: X_train[:,:,:] = (X_train[:,:,:] - minPrice) / (maxPrice - minPrice) X_valid[:,:,:] = (X_valid[:,:,:] - minPrice) / (maxPrice - minPrice) X_test[:,:,:] = (X_test[:,:,:] - minPrice) / (maxPrice - minPrice) # target scaling if target_style == 'Price': y_train = (y_train - minPrice) / (maxPrice - minPrice) plt.plot(y_train) plt.show() y_valid = (y_valid - minPrice) / (maxPrice - minPrice) plt.plot(y_valid) plt.show() y_test = (y_test - minPrice) / (maxPrice - minPrice) plt.plot(y_test) plt.show() 
elif target_style == 'Change': maxChange = np.max(np.abs(y_train)) y_train = y_train / maxChange plt.plot(y_train) plt.show() y_valid = y_valid / maxChange plt.plot(y_valid) plt.show() y_test = y_test / maxChange plt.plot(y_test) plt.show() # from keras.models import Sequential # import keras.layers as layers # from keras.layers import Input, Dense, Dropout, LSTM, CuDNNLSTM, GRU, CuDNNGRU, Bidirectional # from keras.optimizers import SGD, RMSprop, Adam, Adagrad # from keras.losses import mean_squared_error # from keras.models import load_model # from keras import backend as K # from keras.callbacks import EarlyStopping import tensorflow as tf from tensorflow.keras import layers from tensorflow.keras import backend as K # The GRU architecture regressorGRU = tf.keras.Sequential() # First GRU layer with Dropout regularisation regressorGRU.add(layers.GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2]))) regressorGRU.add(layers.Dropout(0.2)) # Second GRU layer regressorGRU.add(layers.GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2]))) regressorGRU.add(layers.Dropout(0.2)) # Third GRU layer regressorGRU.add(layers.GRU(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2]))) regressorGRU.add(layers.Dropout(0.2)) # Fourth GRU layer regressorGRU.add(layers.GRU(units=50)) regressorGRU.add(layers.Dropout(0.2)) # The output layer regressorGRU.add(layers.Dense(units=1)) # The LSTM architecture regressorLSTM = tf.keras.Sequential() # First LSTM layer with Dropout regularisation regressorLSTM.add(layers.LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],X_train.shape[2]))) regressorLSTM.add(layers.Dropout(0.2)) # Second LSTM layer regressorLSTM.add(layers.LSTM(units=50, return_sequences=True)) regressorLSTM.add(layers.Dropout(0.2)) # Third LSTM layer regressorLSTM.add(layers.LSTM(units=50, return_sequences=True)) regressorLSTM.add(layers.Dropout(0.2)) # Fourth LSTM layer regressorLSTM.add(layers.LSTM(units=50)) regressorLSTM.add(layers.Dropout(0.2)) # The output layer regressorLSTM.add(layers.Dense(units=1)) def r2_score(y_true, y_pred): SS_res = K.sum(K.square(y_true - y_pred)) SS_tot = K.sum(K.square(y_true - K.mean(y_true))) SS_reg = K.sum(K.square(y_pred - K.mean(y_true))) # return ( 1 - SS_res/(SS_tot + K.epsilon()) ) return ( SS_res/(SS_tot + K.epsilon()) ) # return ( SS_reg/(SS_tot + K.epsilon()) ) # regressor = regressorLSTM regressor = regressorGRU callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True) regressor.compile(optimizer='Adam', loss=r2_score, metrics=['mse',r2_score]) # optimizer='Adam'/'RMSProp' print(regressor.summary()) hist = regressor.fit(X_train, y_train,epochs = 100, callbacks=[callback], validation_data=(X_valid, y_valid)) # , batch_size=32 hist.history.keys() plt.plot(hist.history['loss']) plt.show() # predict for lest point if useVolume: lest_data = data.iloc[-window_len:].values lest_data[:,:-1] = (lest_data[:,:-1] - minPrice) / (maxPrice - minPrice) lest_data[:,-1] = (lest_data[:,-1] - minVolume) / (maxVolume - minVolume) else: lest_data = data.iloc[-window_len:,:-1].values lest_data = (lest_data - minPrice) / (maxPrice - minPrice) lest_data = lest_data.reshape((1,window_len,X_train.shape[2])) predicted_stock_price = regressor.predict(lest_data) # target_style = 'Price' # / 'Price' / 'Change' / 'ChangeP' / 'UD' if target_style == 'Price': predicted_stock_price = predicted_stock_price * (maxPrice - minPrice) + minPrice 
print("predicted stock price for next step is: ", predicted_stock_price) elif target_style == 'Change': predicted_stock_price = predicted_stock_price * maxChange print("predicted stock price Change for next step is: ", predicted_stock_price) elif target_style == 'ChangeP': print("predicted stock price Change percentage for next step is: ", predicted_stock_price) elif target_style == 'UD': print("predicted stock price Up Down percentage for next step is: ", predicted_stock_price) predicted_stock_price = regressor.predict(X_test) # if target_style == 'Price': # predicted_stock_price = (predicted_stock_price * (maxPrice - minPrice)) + minPrice # predicted_stock_price # Visualising the test results plt.plot(y_test, color = 'pink', label = 'Real Stock Price') plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Stock Price') plt.title('Stock Price Prediction on test data') plt.xlabel('Time') plt.ylabel('Stock Price on test data') plt.legend() plt.show() predicted_stock_price = regressor.predict(X_valid) # if target_style == 'Price': # predicted_stock_price = (predicted_stock_price * (maxPrice - minPrice)) + minPrice # predicted_stock_price # Visualising the validation results plt.plot(y_valid, color = 'pink', label = 'Real Stock Price') plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Stock Price') plt.title('Stock Price Prediction on validation data') plt.xlabel('Time') plt.ylabel('Stock Price') plt.legend() plt.show() # r2_score(y_valid, predicted_stock_price) SS_res = np.sum(np.square(y_valid - predicted_stock_price)) print('SS_res = ',SS_res) SS_tot = np.sum(np.square(y_valid - np.mean(y_valid))) print('SS_tot = ',SS_tot) SS_reg = np.sum(np.square(predicted_stock_price - np.mean(y_valid))) print('SS_reg = ',SS_reg) r2 = 1 - SS_res/SS_tot print('r2 = ',r2) r2 = SS_reg/SS_tot print('r2 = ',r2) mse = np.mean(np.square(y_valid - predicted_stock_price)) print('mse = ',mse) predicted_stock_price = regressor.predict(X_train) # if target_style == 'Price': # predicted_stock_price = (predicted_stock_price * (maxPrice - minPrice)) + minPrice # predicted_stock_price # Visualising the train results plt.plot(y_train, color = 'pink', label = 'Real Stock Price') plt.plot(predicted_stock_price, color = 'blue', label = 'Predicted Stock Price') plt.title('Stock Price Prediction on train data') plt.xlabel('Time') plt.ylabel('Stock Price') plt.legend() plt.show() plt.close('all') ###Output _____no_output_____
###Markdown
Chapter 6: Support Vector Machines

1. Separating data with the maximum margin

**Maximize the distance from the support vectors to the separating hyperplane.**

**1. Linearly separable SVM:** learns a linear classifier through **hard-margin maximization**, and is also called a **hard-margin SVM**.

**2. Linear SVM:** learns a linear classifier through **soft-margin maximization**, and is also called a **soft-margin SVM**.

**3. Nonlinear SVM:** learned through the **kernel trick** combined with **soft-margin maximization**.

2. Finding the maximum margin

3. The efficient SMO optimization algorithm

###Code
import random
from numpy import *
from time import sleep

# helper functions for the SMO algorithm

# parse the text file into a data set
def loadDataSet(fileName):
    dataMat = []; labelMat = []
    fr = open(fileName)
    for line in fr.readlines():
        lineArr = line.strip().split('\t')
        dataMat.append([float(lineArr[0]), float(lineArr[1])])
        labelMat.append(float(lineArr[2]))
    return dataMat, labelMat

# pick a random index j that is different from i
def selectJrand(i, m):
    j = i
    while (j == i):
        # random.randint is inclusive on both ends, so draw from [0, m - 1]
        # to keep j a valid row index
        j = int(random.randint(0, m - 1))
    return j

# clip values that are greater than H or smaller than L
def clipAlpha(aj, H, L):
    if(aj > H):
        aj = H
    if(L > aj):
        aj = L
    return aj

dataArr, labelArr = loadDataSet('D:/data/study/AI/ML/MLcode/Ch06/testSet.txt')
print(dataArr, '\n', labelArr)

# simplified SMO algorithm
def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
    dataMatrix = mat(dataMatIn); labelMat = mat(classLabels).transpose()
    b = 0
    m, n = shape(dataMatrix)  # number of samples and number of features
    alphas = mat(zeros((m, 1)))  # initialize the alphas to a zero vector
    iter_ = 0
    while(iter_ < maxIter):  # outer loop: count of passes without any update
        alphaPairsChanged = 0
        for i in range(m):  # inner loop: visit every sample in the data set
            fXi = float(multiply(alphas, labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
            Ei = fXi - float(labelMat[i])
            if(((labelMat[i]*Ei < -toler) and (alphas[i] < C)) or ((labelMat[i]*Ei > toler) and (alphas[i] > 0))):
                j = selectJrand(i, m)  # randomly select the second alpha
                fXj = float(multiply(alphas, labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
                Ej = fXj - float(labelMat[j])
                alphaIold = alphas[i].copy()
                alphaJold = alphas[j].copy()
                if(labelMat[i] != labelMat[j]):  # keep alpha between 0 and C
                    L = max(0, alphas[j] - alphas[i])
                    H = min(C, C + alphas[j] - alphas[i])
                else:
                    L = max(0, alphas[j] + alphas[i] - C)
                    H = min(C, alphas[j] + alphas[i])
                if(L == H):
                    print("L==H")
                    continue
                eta = 2.0 * dataMatrix[i,:]*dataMatrix[j,:].T - dataMatrix[i,:]*dataMatrix[i,:].T - dataMatrix[j,:]*dataMatrix[j,:].T
                if(eta >= 0):
                    print("eta>=0")
                    continue
                alphas[j] -= labelMat[j]*(Ei - Ej)/eta
                alphas[j] = clipAlpha(alphas[j], H, L)
                if (abs(alphas[j] - alphaJold) < 0.00001):
                    print("j not moving enough")
                    continue
                # update alpha i by the same amount as alpha j, in the opposite direction
                alphas[i] += labelMat[j]*labelMat[i]*(alphaJold - alphas[j])
                b1 = b - Ei - labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[i,:].T \
                     - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[i,:]*dataMatrix[j,:].T
                b2 = b - Ej - labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[j,:].T \
                     - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[j,:]*dataMatrix[j,:].T
                if((0 < alphas[i]) and (C > alphas[i])):
                    b = b1
                elif((0 < alphas[j]) and (C > alphas[j])):
                    b = b2
                else:
                    b = (b1 + b2)/2.0
                alphaPairsChanged += 1
                print("iter: %d i:%d, pairs changed %d" % (iter_, i, alphaPairsChanged))
        if(alphaPairsChanged == 0):
            iter_ += 1
        else:
            iter_ = 0
        print("iteration number: %d" % iter_)
    return b, alphas

smoSimple(dataArr, labelArr, 0.6, 0.001, 40)

###Output
L==H iter: 0 i:1, pairs changed 1 L==H iter: 0 i:4, pairs changed 2 iter: 0 i:5, pairs changed 3 iter: 0 i:8, pairs changed 4 j not moving enough j not moving enough L==H iter: 0 i:17, pairs changed 5 j not moving enough j not moving enough iter: 0 i:23, pairs changed 6 L==H L==H iter: 0 i:29, pairs changed 7 j not moving enough L==H L==H j not moving enough L==H j not moving enough L==H L==H iter: 0 i:54, pairs changed 8 j not moving enough L==H L==H j not moving enough j not moving enough iter: 0 i:71, pairs changed 9 j not moving enough L==H L==H iter: 0 i:82, pairs changed 10 j not
###Markdown
4. Speeding up optimization with the full Platt SMO algorithm

###Code
# Support functions for the full Platt SMO
class optStruct:
    def __init__(self, dataMatIn, classLabels, C, toler, kTup):
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m, 1)))
        self.b = 0
        self.eCache = mat(zeros((self.m, 2)))   # error cache: first column is a validity flag, second is the E value

def calcEk(oS, k):
    fXk = float(multiply(oS.alphas, oS.labelMat).T*(oS.X*oS.X[k, :].T)) + oS.b
    Ek = fXk - float(oS.labelMat[k])
    return Ek

def selectJ(i, oS, Ei):
    # choose the second alpha so that the step size |Ei - Ej| is maximal
    maxK = -1; maxDeltaE = 0; Ej = 0
    oS.eCache[i] = [1, Ei]
    validEcacheList = nonzero(oS.eCache[:, 0].A)[0]
    if((len(validEcacheList)) > 1):
        for k in validEcacheList:
            if(k == i):
                continue
            Ek = calcEk(oS, k)
            deltaE = abs(Ei - Ek)
            if (deltaE > maxDeltaE):
                maxK = k; maxDeltaE = deltaE; Ej = Ek
        return maxK, Ej
    else:
        j = selectJrand(i, oS.m)
        Ej = calcEk(oS, j)
        return j, Ej

def updateEk(oS, k):
    Ek = calcEk(oS, k)
    oS.eCache[k] = [1, Ek]
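###Markdown
innerL below implements the analytic pair update (a recap we add for reference; it matches the code line for line). With $\eta = 2K_{ij} - K_{ii} - K_{jj} \le 0$:

$$\alpha_j^{new} = \alpha_j^{old} - \frac{y_j (E_i - E_j)}{\eta}, \qquad \alpha_j^{new} \leftarrow \mathrm{clip}(\alpha_j^{new}, L, H), \qquad \alpha_i^{new} = \alpha_i^{old} + y_i y_j (\alpha_j^{old} - \alpha_j^{new})$$

followed by recomputing $b$ from whichever of the two alphas is strictly between $0$ and $C$.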
# Inner-loop optimization routine of the full Platt SMO
def innerL(i, oS):
    Ei = calcEk(oS, i)
    if (((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0))):
        j, Ej = selectJ(i, oS, Ei)
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy()
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if(L == H):
            print("L==H")
            return 0
        eta = 2.0 * oS.X[i, :]*oS.X[j, :].T - oS.X[i, :]*oS.X[i, :].T - oS.X[j, :]*oS.X[j, :].T
        if(eta >= 0):
            print("eta>=0")
            return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
        oS.alphas[j] = clipAlpha(oS.alphas[j], H, L)
        updateEk(oS, j)
        if(abs(oS.alphas[j] - alphaJold) < 0.00001):
            print("j not moving enough")
            return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])
        updateEk(oS, i)
        b1 = oS.b - Ei - oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i, :]*oS.X[i, :].T \
             - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[i, :]*oS.X[j, :].T
        b2 = oS.b - Ej - oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.X[i, :]*oS.X[j, :].T \
             - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.X[j, :]*oS.X[j, :].T
        if((0 < oS.alphas[i]) and (oS.C > oS.alphas[i])):
            oS.b = b1
        elif((0 < oS.alphas[j]) and (oS.C > oS.alphas[j])):
            oS.b = b2
        else:
            oS.b = (b1 + b2)/2.0
        return 1
    else:
        return 0

# Outer loop of the full Platt SMO
def smoP(dataMatIn, classLabels, C, toler, maxIter, kTup=('lin', 0)):
    oS = optStruct(mat(dataMatIn), mat(classLabels).transpose(), C, toler, kTup)
    iter_ = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter_ < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
        alphaPairsChanged = 0
        if entireSet:   # go over the full data set
            for i in range(oS.m):
                alphaPairsChanged += innerL(i, oS)
                print("fullSet, iter: %d i:%d, pairs changed %d" % (iter_, i, alphaPairsChanged))
            iter_ += 1
        else:           # go over non-bound (non-railed) alphas only
            nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerL(i, oS)
                print("non-bound, iter: %d i:%d, pairs changed %d" % (iter_, i, alphaPairsChanged))
            iter_ += 1
        if entireSet:
            entireSet = False   # toggle the entire-set loop
        elif (alphaPairsChanged == 0):
            entireSet = True
        print("iteration number: %d" % iter_)
    return oS.b, oS.alphas

b, alphas = smoP(dataArr, labelArr, 0.6, 0.001, 40)

def calcWs(alphas, dataArr, classLabels):
    X = mat(dataArr); labelMat = mat(classLabels).transpose()
    m, n = shape(X)
    w = zeros((n, 1))
    for i in range(m):
        w += multiply(alphas[i]*labelMat[i], X[i, :].T)
    return w

ws = calcWs(alphas, dataArr, labelArr)
print(ws)

datMat = mat(dataArr)
print(datMat[0] * mat(ws) + b)

###Output
[[-0.92555695]]

###Markdown
5. Applying kernels to complex data

###Code
# Kernel transformation function
def kernelTrans(X, A, kTup):
    # calc the kernel or transform data to a higher dimensional space
    m, n = shape(X)
    K = mat(zeros((m, 1)))
    if(kTup[0] == 'lin'):
        K = X * A.T   # linear kernel
    elif(kTup[0] == 'rbf'):
        for j in range(m):
            deltaRow = X[j, :] - A
            K[j] = deltaRow * deltaRow.T
        K = exp(K/(-1*kTup[1]**2))   # divide in NumPy is element-wise, not matrix-wise as in Matlab
    else:
        raise NameError('Houston We Have a Problem -- That Kernel is not recognized')
    return K

class optStruct:
    def __init__(self, dataMatIn, classLabels, C, toler, kTup):
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m, 1)))
        self.b = 0
        self.eCache = mat(zeros((self.m, 2)))
        self.K = mat(zeros((self.m, self.m)))   # precompute the full kernel matrix
        for i in range(self.m):
            self.K[:, i] = kernelTrans(self.X, self.X[i, :], kTup)
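###Markdown
The 'rbf' branch of kernelTrans above computes the Gaussian radial basis function, with kTup[1] playing the role of $\sigma$ (a note we add for reference; this code follows the book's convention of $\sigma^2$ rather than $2\sigma^2$ in the denominator):

$$K(x, y) = \exp\!\left(\frac{-\lVert x - y\rVert^2}{\sigma^2}\right)$$

Precomputing the full $m \times m$ matrix self.K trades memory for speed: every kernel lookup inside the optimization loop becomes a simple index.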
# Kernelized inner-loop routine: identical to innerL above except eta and b use the cached kernel matrix oS.K
def innerL(i, oS):
    Ei = calcEk(oS, i)
    if (((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0))):
        j, Ej = selectJ(i, oS, Ei)
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy()
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if(L == H):
            print("L==H")
            return 0
        eta = 2.0 * oS.K[i, j] - oS.K[i, i] - oS.K[j, j]
        if(eta >= 0):
            print("eta>=0")
            return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
        oS.alphas[j] = clipAlpha(oS.alphas[j], H, L)
        updateEk(oS, j)
        if(abs(oS.alphas[j] - alphaJold) < 0.00001):
            print("j not moving enough")
            return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])
        updateEk(oS, i)
        b1 = oS.b - Ei - oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i, i] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[i, j]
        b2 = oS.b - Ej - oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i, j] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[j, j]
        if((0 < oS.alphas[i]) and (oS.C > oS.alphas[i])):
            oS.b = b1
        elif((0 < oS.alphas[j]) and (oS.C > oS.alphas[j])):
            oS.b = b2
        else:
            oS.b = (b1 + b2)/2.0
        return 1
    else:
        return 0

def calcEk(oS, k):
    fXk = float(multiply(oS.alphas, oS.labelMat).T*oS.K[:, k] + oS.b)
    Ek = fXk - float(oS.labelMat[k])
    return Ek

# Radial-basis test function that classifies with a kernel
def testRbf(k1=1.3):
    dataArr, labelArr = loadDataSet('D:/data/study/AI/ML/MLcode/Ch06/testSetRBF.txt')
    b, alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, ('rbf', k1))   # C=200 important
    datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd = nonzero(alphas.A > 0)[0]
    sVs = datMat[svInd]   # get matrix of only support vectors
    labelSV = labelMat[svInd]
    print("there are %d Support Vectors" % shape(sVs)[0])
    m, n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs, datMat[i, :], ('rbf', k1))
        predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b
        if(sign(predict) != sign(labelArr[i])):
            errorCount += 1
    print("the training error rate is: %f" % (float(errorCount)/m))
    dataArr, labelArr = loadDataSet('D:/data/study/AI/ML/MLcode/Ch06/testSetRBF2.txt')
    errorCount = 0
    datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
    m, n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs, datMat[i, :], ('rbf', k1))
        predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b
        if(sign(predict) != sign(labelArr[i])):
            errorCount += 1
    print("the test error rate is: %f" % (float(errorCount)/m))

testRbf()

###Output
fullSet, iter: 0 i:0, pairs changed 1
fullSet, iter: 0 i:1, pairs changed 1
fullSet, iter: 0 i:2, pairs changed 2
fullSet, iter: 0 i:3, pairs changed 3
...
(repetitive full-set / non-bound optimization log truncated; the captured output ends during iteration 4, before the support-vector count and error rates were printed)
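###Markdown
The classification step inside testRbf generalizes to any new point: evaluate the kernel between the point and the stored support vectors, then take the sign of the weighted sum. Below is a minimal sketch of that step as a standalone helper; it assumes you keep `sVs`, `labelSV`, `alphas[svInd]`, and `b` from training (the function name `classifyRbf` is ours, not the book's).

###Code
# Minimal sketch: classify one new point with a trained RBF model.
# Assumes sVs, labelSV, svAlphas (= alphas[svInd]) and b were produced by smoP, as in testRbf.
def classifyRbf(point, sVs, labelSV, svAlphas, b, k1=1.3):
    kernelEval = kernelTrans(sVs, mat(point), ('rbf', k1))       # kernel between the point and each support vector
    return sign(kernelEval.T * multiply(labelSV, svAlphas) + b)  # +1 or -1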
###Markdown
6. Revisiting handwritten digit recognition

###Code
def img2vector(filename):
    returnVect = zeros((1, 1024))
    fr = open(filename)
    for i in range(32):
        lineStr = fr.readline()
        for j in range(32):
            returnVect[0, 32*i+j] = int(lineStr[j])
    return returnVect

# SVM-based handwritten digit recognition
def loadImages(dirName):
    from os import listdir
    hwLabels = []
    trainingFileList = listdir(dirName)   # load the training set
    m = len(trainingFileList)
    trainingMat = zeros((m, 1024))
    for i in range(m):
        fileNameStr = trainingFileList[i]
        fileStr = fileNameStr.split('.')[0]   # take off .txt
        classNumStr = int(fileStr.split('_')[0])
        if classNumStr == 9:   # binary problem: the digit 9 vs. everything else
            hwLabels.append(-1)
        else:
            hwLabels.append(1)
        trainingMat[i, :] = img2vector('%s/%s' % (dirName, fileNameStr))
    return trainingMat, hwLabels

def testDigits(kTup=('rbf', 10)):
    dataArr, labelArr = loadImages('D:/data/study/AI/ML/MLcode/Ch02/trainingDigits')
    b, alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, kTup)
    datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd = nonzero(alphas.A > 0)[0]
    sVs = datMat[svInd]
    labelSV = labelMat[svInd]
    print("there are %d Support Vectors" % shape(sVs)[0])
    m, n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs, datMat[i, :], kTup)
        predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b   # was np.multiply, but np is never imported in this notebook
        if(sign(predict) != sign(labelArr[i])):
            errorCount += 1
    print("the training error rate is: %f" % (float(errorCount)/m))
    dataArr, labelArr = loadImages('D:/data/study/AI/ML/MLcode/Ch02/testDigits')
    errorCount = 0
    datMat = mat(dataArr); labelMat = mat(labelArr).transpose()
    m, n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs, datMat[i, :], kTup)
        predict = kernelEval.T * multiply(labelSV, alphas[svInd]) + b
        if(sign(predict) != sign(labelArr[i])):
            errorCount += 1
    print("the test error rate is: %f" % (float(errorCount)/m))

testDigits(('rbf', 20))

###Output
L==H
fullSet, iter: 0 i:0, pairs changed 0
fullSet, iter: 0 i:1, pairs changed 1
fullSet, iter: 0 i:2, pairs changed 2
...
(repetitive full-set optimization log truncated; the captured output ends at i:190 of the first full-set pass)
Webscraping/digit.in/GamingLaptops_Webscraping.ipynb
###Markdown
7. Write a program to scrape all the available details of the top 10 gaming laptops from digit.in.

###Code
#Import the libraries needed for scraping
import pandas as pd
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

#Connect to web driver
driver = webdriver.Chrome(r"D://chromedriver.exe")   #r converts string to raw string
#If not r, we can use executable_path = "C:/path name"

#Getting the website to driver
driver.get('https://www.digit.in/')
#When we run this line, the webpage will be opened automatically

#Clicking on the top 10 option
top10 = driver.find_element_by_xpath("//div[@class='menu']/ul/li[4]/a")
top10.click()

#Clicking on the best gaming laptops option
best_gl = driver.find_element_by_xpath("//div[@class='listing_container']/ul/li[26]/a")
best_gl.click()

#Specifying the url of the webpage to be scraped
url = "https://www.digit.in/top-products/best-gaming-laptops-40.html"
driver.get(url)

#Extracting the tags having the laptop name
name = driver.find_elements_by_xpath("//div[@class='right-container']/div/a/h3")
name

#Extracting the text from the tags
prod_name = []   #Empty list
#As we need to scrape data for all the products, we run a for loop to extract all of it
for i in name:
    prod_name.append(i.text)
prod_name

#Extracting the tags having the OS type
OS_type = driver.find_elements_by_xpath("//div[@class='product-detail']/div/ul/li[1]/div/div")
OS_type

#Extracting the text from the tags
OS = []   #Empty list
for i in OS_type:
    OS.append(i.text)
OS

#Extracting the tags having display details
display = driver.find_elements_by_xpath("//div[@class='product-detail']/div/ul/li[2]/div/div")
display

#Extracting the text from the tags
display_specs = []   #Empty list
for i in display:
    display_specs.append(i.text)
display_specs

#Extracting the tags having processor details
processor = driver.find_elements_by_xpath("//div[@class='product-detail']/div/ul/li[3]/div/div")
processor

#Extracting the text from the tags
processor_specs = []   #Empty list
for i in processor:
    processor_specs.append(i.text)
processor_specs

#Extracting the tags having memory specs
#List of specification names
memory = driver.find_elements_by_xpath("//div[@class='Spcs-details'][1]/table/tbody/tr[6]/td[1]")
#Value of the specifications
memory_specs = driver.find_elements_by_xpath("//div[@class='Spcs-details'][1]/table/tbody/tr[6]/td[3]")

#Now we will separate the HDD and RAM text from the memory specs tags
HDD = []
RAM = []   #Empty lists
for i in range(len(memory)):
    if memory[i].text == 'Memory':
        HDD.append(memory_specs[i].text.split('/')[0])
        RAM.append(memory_specs[i].text.split('/')[1])
    else:
        HDD.append('Not Available')
        RAM.append('Not Available')
print('HDD:', HDD)
print('RAM:', RAM)

#Extracting the tags having weight
#List of specification names
weights = driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[1]")
#Value of the specifications
weights_specs = driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[3]")

#Now we will separate the weight text from the tags
weight = []   #Empty list
for i in range(len(weights)):
    if weights[i].text == 'Weight':
        weight.append(weights_specs[i].text)
weight

#Extracting the tags having dimensions
#List of specification names
dims = driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[1]")
#Value of the specifications
dims_specs = driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[3]")

#Now we will separate the dimensions text from the tags
dimension = []   #Empty list
for i in range(len(dims)):
    if dims[i].text == 'Dimension':
        dimension.append(dims_specs[i].text)
dimension

#Extracting the tags having the Graphics Processor
#List of specification names
GPs = driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[1]")
#Value of the specifications
GPs_specs = driver.find_elements_by_xpath("//div[@class='Spcs-details']/table/tbody/tr/td[3]")

#Now we will separate the GP text from the tags
GPU = []   #Empty list
for i in range(len(GPs)):
    if GPs[i].text == 'Graphics Processor':
        GPU.append(GPs_specs[i].text)
GPU

#Extracting the tags having the price
#As only some prices appear on the main url, we will go to each laptop's full specs page and scrape the prices
#First we will extract the urls of all laptops' full specs
full_specs = []   #Empty list
urls = driver.find_elements_by_xpath("//div[@class='full-specs']/span")
#Running a for loop for extraction of text from tags
for i in urls:
    if i.get_attribute('data-href'):
        full_specs.append(i.get_attribute('data-href'))
full_specs

#Now we will extract the price by iterating over full_specs
Price = []   #Empty list
for i in full_specs:
    driver.get(i)
    try:
        prices = driver.find_element_by_xpath("//div[@class='Block-price']/b")   #Getting price tags
        Price.append(prices.text)   #Extracting text
    except NoSuchElementException:   #Handle products whose price is not available
        Price.append("Not Available")   #Message to record where the price is not available
Price   #Checking the extracted prices

#Creating a dataframe for saving our extracted data
laptops = pd.DataFrame({'Product Name': prod_name, 'OS': OS, 'Display': display_specs,
                        'Processor': processor_specs, 'HDD': HDD, 'RAM': RAM,
                        'Weight': weight, 'Dimension': dimension,
                        'Graphic Processor': GPU, 'Price': Price})
laptops

#Saving the data into a csv file
laptops.to_csv('Gaming_Laptops.csv')

#Closing the driver
driver.close()

###Output
_____no_output_____
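###Markdown
One fragility in this flow: the find_element calls fire immediately, so if digit.in is slow to render, the scrape can fail. A possible hardening (this cell is a sketch we add, not part of the original solution) is selenium's explicit waits, which poll until an element is present:

###Code
# Sketch: wait up to 10 seconds for the product-name list before scraping it.
# Uses selenium's standard WebDriverWait / expected_conditions; the XPath is the same one used above.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

name = WebDriverWait(driver, 10).until(
    EC.presence_of_all_elements_located((By.XPATH, "//div[@class='right-container']/div/a/h3"))
)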
figurasaula2.ipynb
###Markdown
Lines in $V_n$

Let's take advantage of these exercises to talk a bit about *matplotlib*, a plotting library used with Python. It is quite complete, and anyone interested can see more at the [official matplotlib site](https://matplotlib.org).

First let's load the library into our environment so we can use its functions.

###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np   # another important library, for the numerical part

###Output
_____no_output_____

###Markdown
Exercise 1:
A line passes through the points $(-3,1)$ and $(1,1)$:

###Code
plt.plot([-3, 1], [1, 1])

plt.plot([0, 0, 1, 2, -2], [0, 1, 2, 1, 1], "ro")
plt.plot([-3, 1], [1, 1])

###Output
_____no_output_____

###Markdown
Exercise 2
Let's check whether the points $P=(2,1,1)$, $Q=(4,1,-1)$ and $R=(3,-1,1)$ lie on the same line.

###Code
from mpl_toolkits import mplot3d

fig = plt.figure()
ax = plt.axes(projection='3d')
xs = [2, 4, 3]
ys = [1, 1, -1]
zs = [1, -1, 1]
ax.plot3D(xs, ys, zs, 'ro')

import numpy as np
import matplotlib.pyplot as plt

plt.rcParams['legend.fontsize'] = 10
fig = plt.figure()
ax = fig.add_subplot(projection='3d')   # fig.gca(projection='3d') is deprecated in recent matplotlib

# Prepare arrays x, y, z
t = np.linspace(0, 6, 100)
s = np.linspace(0, 1.5, 100)
x = 1 + t
y = 1 + 2*t
z = 1 + 3*t
x1 = 2 + 3*s
y1 = 1 + 8*s
z1 = 13*s

ax.plot(x, y, z, label="L1")
ax.plot(x1, y1, z1, label="L2")
ax.legend()

###Output
_____no_output_____
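###Markdown
A plot only suggests the answer; collinearity can be checked exactly. A quick numeric check (this cell is ours, not part of the original exercise): the points are collinear iff $(Q-P)\times(R-P)=\vec{0}$.

###Code
P = np.array([2, 1, 1]); Q = np.array([4, 1, -1]); R = np.array([3, -1, 1])
cross = np.cross(Q - P, R - P)   # zero vector exactly when P, Q, R are collinear
print(cross, "-> collinear" if not cross.any() else "-> not collinear")
# Here the cross product is [-4 -2 -4], so the three points are NOT on the same line.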
Lab 4 - Exploratory Data Analysis.ipynb
###Markdown
Assignment: SQL Notebook for Peer Assignment

Estimated time needed: **60** minutes.

Introduction

Using this Python notebook you will:

1. Understand the SpaceX dataset
2. Load the dataset into the corresponding table in a Db2 database
3. Execute SQL queries to answer assignment questions

Overview of the DataSet

SpaceX has gained worldwide attention for a series of historic milestones.

It is the only private company ever to return a spacecraft from low-earth orbit, which it first accomplished in December 2010.

SpaceX advertises Falcon 9 rocket launches on its website at a cost of 62 million dollars, whereas other providers cost upward of 165 million dollars each; much of the savings comes from SpaceX's ability to reuse the first stage.

Therefore, if we can determine whether the first stage will land, we can determine the cost of a launch.

This information can be used if an alternate company wants to bid against SpaceX for a rocket launch.

This dataset includes a record for each payload carried during a SpaceX mission into outer space.

Download the datasets

This assignment requires you to load the spacex dataset.

In many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. Click on the link below to download and save the dataset (.CSV file):

Spacex DataSet

Store the dataset in a database table

**It is highly recommended to manually load the table using the database console LOAD tool in Db2.**

Now open the Db2 console, open the LOAD tool, select / drag the .CSV file for the dataset, next create a new table, and then follow the on-screen instructions to load the data. Name the new table as follows:

**SPACEXDATASET**

**Follow these steps while using the old DB2 UI, which has the Open Console screen**

**Note: while loading the SpaceX dataset, ensure that detect datatypes is disabled. Later click on the pencil icon (edit option).**

1. Change the date format by manually typing DD-MM-YYYY and the timestamp format as DD-MM-YYYY HH:MM:SS. Here you should place the cursor at the Date field and manually type DD-MM-YYYY.
2. Change the PAYLOAD_MASS__KG_ datatype to INTEGER.

**Changes to be considered when having a DB2 instance with the new UI having the Go to UI screen**

* Refer to the instructions in this link for viewing the new Go to UI screen.
* Later click on the **Data link (below SQL)** in the Go to UI screen and click on the **Load Data** tab.
* Later browse for the downloaded spacex file.
* Once done, select the schema and load the file.

###Code
!pip install sqlalchemy==1.3.9
!pip install ibm_db_sa
!pip install ipython-sql

###Output
Collecting sqlalchemy==1.3.9
  Downloading SQLAlchemy-1.3.9.tar.gz (6.0 MB)
Successfully installed sqlalchemy-1.3.9
Requirement already satisfied: ibm_db_sa in /opt/conda/envs/Python-3.8-main/lib/python3.8/site-packages (0.3.7)
Collecting ipython-sql
  Downloading ipython_sql-0.4.0-py3-none-any.whl (19 kB)
Successfully installed ipython-sql-0.4.0 prettytable-0.7.2 sqlparse-0.4.2
(pip progress bars, terminal control codes, and dependency listings truncated)
###Markdown
Connect to the database

Let us first load the SQL extension and establish a connection with the database

###Code
%load_ext sql

###Output
_____no_output_____

###Markdown
**DB2 magic in case of old UI service credentials.**

In the next cell enter your db2 connection string. Recall you created Service Credentials for your Db2 instance before. From the **uri** field of your Db2 service credentials copy everything after db2:// (except the double quote at the end) and paste it in the cell below after ibm_db_sa:// in the following format

**%sql ibm_db_sa://my-username:my-password@my-hostname:my-port/my-db-name**

**DB2 magic in case of new UI service credentials.**

* Use the following format.
* Add security=SSL at the end

**%sql ibm_db_sa://my-username:my-password@my-hostname:my-port/my-db-name?security=SSL**

###Code
# The code was removed by Watson Studio for sharing.

###Output
_____no_output_____

###Markdown
Tasks

Now write and execute SQL queries to solve the assignment tasks.

Task 1

Display the names of the unique launch sites in the space mission

###Code
%sql select distinct(LAUNCH_SITE) from SPACEXDATASET;

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 2

Display 5 records where launch sites begin with the string 'CCA'

###Code
%sql select * from SPACEXDATASET where LAUNCH_SITE like 'CCA%' limit 5;

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 3

Display the total payload mass carried by boosters launched by NASA (CRS)

###Code
%sql select sum(PAYLOAD_MASS__KG_) from SPACEXDATASET where CUSTOMER = 'NASA (CRS)';

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 4

Display the average payload mass carried by booster version F9 v1.1

###Code
%sql select avg(PAYLOAD_MASS__KG_) from SPACEXDATASET where BOOSTER_VERSION = 'F9 v1.1';

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 5

List the date when the first successful landing outcome on a ground pad was achieved.

*Hint: use the min function*

###Code
%%sql
select DATE from SPACEXDATASET
where LANDING__OUTCOME = 'Success (ground pad)'
order by DATE asc limit 1;

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
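###Markdown
As the hint suggests, the same answer can come straight from min() without sorting (an alternative we add for completeness):

###Code
%sql select min(DATE) from SPACEXDATASET where LANDING__OUTCOME = 'Success (ground pad)';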
###Markdown
Task 6

List the names of the boosters which had success on a drone ship and carried a payload mass greater than 4000 but less than 6000

###Code
%%sql
select distinct(BOOSTER_VERSION) from SPACEXDATASET
where LANDING__OUTCOME = 'Success (drone ship)' and (PAYLOAD_MASS__KG_ between 4000 and 6000);

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 7

List the total number of successful and failure mission outcomes

###Code
%%sql
select MISSION_OUTCOME, count(*) as COUNT from SPACEXDATASET
group by MISSION_OUTCOME;

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 8

List the names of the booster versions which have carried the maximum payload mass. Use a subquery

###Code
%%sql
select distinct(BOOSTER_VERSION) from SPACEXDATASET
where PAYLOAD_MASS__KG_ = (select max(PAYLOAD_MASS__KG_) from SPACEXDATASET);

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 9

List the failed landing outcomes on a drone ship, their booster versions, and launch site names in the year 2015

###Code
%%sql
select BOOSTER_VERSION, LAUNCH_SITE, LANDING__OUTCOME from SPACEXDATASET
where LANDING__OUTCOME = 'Failure (drone ship)' and YEAR(DATE) = '2015';

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.

###Markdown
Task 10

Rank the count of landing outcomes (such as Failure (drone ship) or Success (ground pad)) between the dates 2010-06-04 and 2017-03-20, in descending order

###Code
%%sql
select LANDING__OUTCOME, count(*) as COUNT from SPACEXDATASET
where (DATE between '2010-06-04' and '2017-03-20')
group by LANDING__OUTCOME
order by count(*) desc;

###Output
 * ibm_db_sa://qvq13764:***@3883e7e4-18f5-4afe-be8c-fa31c41761d2.bs2io90l08kqb1od8lcg.databases.appdomain.cloud:31498/BLUDB
Done.
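###Markdown
Task 10 above ranks implicitly by sorting. Db2 also supports window functions, so an explicit rank column is possible (this variant is ours, not required by the assignment):

###Code
%%sql
select LANDING__OUTCOME, count(*) as COUNT,
       rank() over (order by count(*) desc) as OUTCOME_RANK
from SPACEXDATASET
where DATE between '2010-06-04' and '2017-03-20'
group by LANDING__OUTCOME;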
PyCitySchools/PyCitySchools_JiKim.ipynb
###Markdown
District Summary

* Calculate the total number of schools
* Calculate the total number of students
* Calculate the total budget
* Calculate the average math score
* Calculate the average reading score
* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2
* Calculate the percentage of students with a passing math score (70 or greater)
* Calculate the percentage of students with a passing reading score (70 or greater)
* Create a dataframe to hold the above results
* Optional: give the displayed data cleaner formatting

###Code
school_data_complete.head()

school_data_complete["average_score"] = (school_data_complete["reading_score"] + school_data_complete["math_score"])/2
total_school_number = school_data_complete["school_name"].nunique()
total_student_number = school_data_complete["Student ID"].count()
school_budgets = school_data_complete["budget"].unique()
total_budget = school_budgets.sum()
average_math_score = school_data_complete["math_score"].mean()
average_reading_score = school_data_complete["reading_score"].mean()

# Bin students into pass (>=70) and fail (<70).
# The upper edge is 101 so that a perfect score of 100 still lands in the passing bin
# (with right=False, the intervals are [0, 70) and [70, 101)).
bins = [0, 70, 101]
school_data_complete["Math Pass"] = pd.cut(school_data_complete["math_score"], bins, labels=False, right=False)
school_data_complete["Reading Pass"] = pd.cut(school_data_complete["reading_score"], bins, labels=False, right=False)
school_data_complete["Overall Pass"] = pd.cut(school_data_complete["average_score"], bins, labels=False, right=False)

# Calculate % of students that passed
percent_passing_math = school_data_complete["Math Pass"].sum()/total_student_number*100
percent_passing_reading = school_data_complete["Reading Pass"].sum()/total_student_number*100
percent_passing_overall = (percent_passing_math + percent_passing_reading)/2

# Create summary of the district
district_summary_df = pd.DataFrame([{"Total Schools": total_school_number,
                                     "Total Students": "{:,}".format(total_student_number),
                                     "Total Budget": "${:,.2f}".format(total_budget),
                                     "Average Math Score": average_math_score,
                                     "Average Reading Score": average_reading_score,
                                     "% Passing Math": percent_passing_math,
                                     "% Passing Reading": percent_passing_reading,
                                     "% Overall Passing Rate": percent_passing_overall}])
district_summary_df

###Output
_____no_output_____
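###Markdown
Why the bins end at 101 rather than 100: with `right=False`, `pd.cut` builds half-open intervals, so a perfect score of 100 would fall outside `[0, 70), [70, 100)` and come back as NaN, silently dropping that student from the pass counts. A tiny illustration (this cell is ours, added for clarity):

###Code
# With the original edges [0, 70, 100] and right=False, a perfect score is silently dropped:
demo = pd.Series([69, 70, 100])
print(pd.cut(demo, [0, 70, 100], labels=False, right=False).tolist())   # [0.0, 1.0, nan]
print(pd.cut(demo, [0, 70, 101], labels=False, right=False).tolist())   # [0, 1, 1]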
Pass"].sum()/school_total_student_number*100 school_percent_passing_overall = (school_percent_passing_math+school_percent_passing_reading)/2 #Create school summary dataframe school_summary_df = pd.DataFrame({"School Type":school_type, "Total Students":school_total_student_number, "Total Budget":school_total_budget, "Per Student Budget":school_per_student_budget, "Average Math Score":school_average_math_score, "Average Reading Score":school_average_reading_score, "% Passing Math":school_percent_passing_math, "% Passing Reading":school_percent_passing_reading, "% Overall Passing Rate":school_percent_passing_overall}) school_summary_df["Total Students"] = school_summary_df["Total Students"].map("{:,}".format) school_summary_df["Total Budget"] = school_summary_df["Total Budget"].map("${:,.2f}".format) school_summary_df["Per Student Budget"] = school_summary_df["Per Student Budget"].map("${:.2f}".format) #sort school data by overall passing rate (best to worst) sorted_df = school_summary_df.sort_values(by="% Overall Passing Rate", ascending = False) sorted_df = sorted_df.rename_axis(None) sorted_df[0:5] ###Output _____no_output_____ ###Markdown Bottom Performing Schools (By Passing Rate) * Sort and display the five worst-performing schools ###Code #sort school data by overall passing rate (worst to best) sorted_df = school_summary_df.sort_values(by="% Overall Passing Rate", ascending = True) sorted_df = sorted_df.rename_axis(None) sorted_df[0:5] ###Output _____no_output_____ ###Markdown Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting ###Code #Create series for each grade only_9th = school_data_complete.loc[school_data_complete['grade'] == '9th',:] only_10th = school_data_complete.loc[school_data_complete['grade'] == '10th',:] only_11th = school_data_complete.loc[school_data_complete['grade'] == '11th',:] only_12th = school_data_complete.loc[school_data_complete['grade'] == '12th',:] #Group data by school name grouped_9th = only_9th.groupby('school_name') grouped_10th = only_10th.groupby('school_name') grouped_11th = only_11th.groupby('school_name') grouped_12th = only_12th.groupby('school_name') #Calculate average math scores per grade average_math_9th = grouped_9th["math_score"].mean() average_math_10th = grouped_10th["math_score"].mean() average_math_11th = grouped_11th["math_score"].mean() average_math_12th = grouped_12th["math_score"].mean() #Create new dataframe with average math scores broken down by school and grade math_scores_df = pd.DataFrame({"9th":average_math_9th, "10th":average_math_10th, "11th":average_math_11th, "12th":average_math_12th}) math_scores_df = math_scores_df.rename_axis(None) math_scores_df ###Output _____no_output_____ ###Markdown Reading Score by Grade * Perform the same operations as above for reading scores ###Code #Calculate average reading scores per grade average_reading_9th = grouped_9th["reading_score"].mean() average_reading_10th = grouped_10th["reading_score"].mean() average_reading_11th = grouped_11th["reading_score"].mean() average_reading_12th = grouped_12th["reading_score"].mean() #Create new dataframe with average reading scores broken down by school and grade reading_scores_df = pd.DataFrame({"9th":average_reading_9th, "10th":average_reading_10th, 
"11th":average_reading_11th, "12th":average_reading_12th}) reading_scores_df = reading_scores_df.rename_axis(None) reading_scores_df ###Output _____no_output_____ ###Markdown Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) ###Code # create bins based on budget per student spending_bins = [0, 585, 615, 645, 675] group_names = ["<$585", "$585-615", "$615-645", "$645-675"] #put data into bins based on spending ranges school_summary_df["Per Student Budget"]=school_summary_df["Per Student Budget"].replace('[^.0-9]','',regex=True).astype(float) school_summary_df["Spending Ranges (Per Student)"]=pd.cut(school_summary_df["Per Student Budget"], spending_bins, labels=group_names, include_lowest=True) #group data by Spending Ranges df = school_summary_df.groupby("Spending Ranges (Per Student)") binned_average_math_score = df["Average Math Score"].mean() binned_average_reading_score = df["Average Reading Score"].mean() binned_average_percent_passing_math = df["% Passing Math"].mean() binned_average_percent_passing_reading = df["% Passing Reading"].mean() binned_average_overall_passing_rate = df["% Overall Passing Rate"].mean() display_table = pd.DataFrame({"Average Math Score":binned_average_math_score, "Average Reading Score":binned_average_reading_score, "% Passing Math":binned_average_percent_passing_math, "% Passing Reading":binned_average_percent_passing_reading, "% Overall Passing Rate":binned_average_overall_passing_rate}) display_table ###Output _____no_output_____ ###Markdown Scores by School Size * Perform the same operations as above, based on school size. ###Code # create bins based on budget per student size_bins = [0, 1000, 2000, 5000] group_names2 = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"] #put data into bins based on School Size school_summary_df["Total Students"]=school_summary_df["Total Students"].replace('[^.0-9]','',regex=True).astype(float) school_summary_df["School Size"]=pd.cut(school_summary_df["Total Students"], size_bins, labels=group_names2, include_lowest=True) #group data by School Size df = school_summary_df.groupby("School Size") binned_average_math_score = df["Average Math Score"].mean() binned_average_reading_score = df["Average Reading Score"].mean() binned_average_percent_passing_math = df["% Passing Math"].mean() binned_average_percent_passing_reading = df["% Passing Reading"].mean() binned_average_overall_passing_rate = df["% Overall Passing Rate"].mean() display_table = pd.DataFrame({"Average Math Score":binned_average_math_score, "Average Reading Score":binned_average_reading_score, "% Passing Math":binned_average_percent_passing_math, "% Passing Reading":binned_average_percent_passing_reading, "% Overall Passing Rate":binned_average_overall_passing_rate}) display_table ###Output _____no_output_____ ###Markdown Scores by School Type * Perform the same operations as above, based on school type. 
###Code #group data by School Type df = school_summary_df.groupby("School Type") binned_average_math_score = df["Average Math Score"].mean() binned_average_reading_score = df["Average Reading Score"].mean() binned_average_percent_passing_math = df["% Passing Math"].mean() binned_average_percent_passing_reading = df["% Passing Reading"].mean() binned_average_overall_passing_rate = df["% Overall Passing Rate"].mean() display_table = pd.DataFrame({"Average Math Score":binned_average_math_score, "Average Reading Score":binned_average_reading_score, "% Passing Math":binned_average_percent_passing_math, "% Passing Reading":binned_average_percent_passing_reading, "% Overall Passing Rate":binned_average_overall_passing_rate}) display_table ###Output _____no_output_____
Union Fold/0912/547. Friend Circles.ipynb
###Markdown 说明: 一班有N个学生。他们中有些是朋友,有些不是。 他们的友谊本质上是传递的。例如,如果A是B的直接朋友,而B是C的直接朋友,则A是C的间接朋友。 并且我们定义的朋友圈是一组是直接或间接朋友的学生。 给定一个N * N矩阵M表示班级学生之间的朋友关系。 如果M[i][j] = 1,则第i个学生和第j个学生是彼此的直接朋友,否则不是。 而且,您必须输出所有学生之间的朋友圈总数。Example 1: Input: [[1,1,0], [1,1,0], [0,0,1]] Output: 2 Explanation:The 0th and 1st students are direct friends, so they are in a friend circle. The 2nd student himself is in a friend circle. So return 2. Example 2: Input: [[1,1,0], [1,1,1], [0,1,1]] Output: 1 Explanation:The 0th and 1st students are direct friends, the 1st and 2nd students are direct friends, so the 0th and 2nd students are indirect friends. All of them are in the same friend circle, so return 1.Constraints: 1、1 <= N <= 200 2、M[i][i] == 1 3、M[i][j] == M[j][i] ###Code class Solution: def findCircleNum(self, M) -> int: def findFather(x): if father[x] != x: father[x] = findFather(father[x]) return father[x] def union(a, b): x = father[a] y = father[b] father[x] = y father = dict() N = len(M) for i in range(N): father[i] = i for i in range(N): for j in range(N): if i != j and M[i][j] == 1: # 如果i和j不是共同的祖先,并且二者还有联系, # 通过将二者的祖先合并,构成一个完整的朋友圈 if findFather(i) != findFather(j): union(i, j) ancestors = set() for i in range(N): ancestors.add(findFather(i)) return len(ancestors) M_ = [[1,1,0], [1,1,1], [0,1,1]] solution = Solution() solution.findCircleNum(M_) ###Output {0: 0, 1: 1, 2: 2} {0: 1, 1: 1, 2: 2}
homage_to_alignment algorithms.ipynb
###Markdown Sequence Alignment Algorithms- Sequence alignment algorithms are ways to **arrange two or many biological sequences** - They **identify regions of similarity** that may indicate to functional, structural, or evolutionary relationships between the sequences. Biological information in arrays Data science in biology originated from the biological data stored in sequences 3 main biological data types*Image soure: Shutter Stock* (adpated) Central dogma of molecular biology*Image soure: Shutter Stock* (adapted) ... explained in data terms*Image soure: Shutter Stock* (adapted) More about Proteins- Proteins are class of chemicals in our body that **makes our body function**. - In a healthy individual, proteins make them **see, listen, walk and talk and think, process information**, and control immune response. - In **diseases** some these protein are **dysregulated**.*Image source: [loxooncology](https://www.loxooncology.com/genomically-defined-cancers)* Protein composition- Proteins are made up of different combinations of 20 amino acids. - Based on how they are arranged, their characters and functions are decided.Image credit: *Wikimedia commons by LadyofHats* **Each amino acid (aa) is represented as a letter:** ###Code amino_acid_list = ('A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y') ###Output _____no_output_____ ###Markdown 1. **Structural components**: Several amino acid in proteins are arranged closely, forming compact structures.2. **Disorder region**: Several amino acid don't form structures and exist as disorder region.Image credit: *Wikimedia commons by LadyofHats* **Role of protein composition**- Protein composition **determine their physics and chemical nature**.- It makes them function as **machines in different part of body** where it’s needed. Evolutionary biology- Study of evolutionary processes that **produced the diversity of life on Earth**, starting from a single common ancestor. - **natural selection**, common descent, and speciation (origin of species). Evolution of proteins- Process of change in the sequence composition- Studied by **comparing the sequences and structures** of proteins (*homologs*) from other organisms Example Use of such comparative studies- observe pattern of conservation- identify common region that are present in both sequences- transfer functions, understand origin of these sequences etc. 
Complexity increases when comparing proteins with longer aa chains**Full length protein sequence of human P53**```>P53_HUMAN Cellular tumor antigen p53 OS=Homo sapiens length=393MEEPQSDPSVEPPLSQETFSDLWKLLPENNVLSPLPSQAMDDLMLSPDDIEQWFTEDPGPDEAPRMPEAAPPVAPAPAAPTPAAPAPAPSWPLSSSVPSQKTYQGSYGFRLGFLHSGTAKSVTCTYSPALNKMFCQLAKTCPVQLWVDSTPPPGTRVRAMAIYKQSQHMTEVVRRCPHHERCSDSDGLAPPQHLIRVEGNLRVEYLDDRNTFRHSVVVPYEPPEVGSDCTTIHYNYMCNSSCMGGMNRRPILTIITLEDSSGNLLGRNSFEVRVCACPGRDRRTEEENLRKKGEPHHELPPGSTKRALPNNTSSSPQPKKKPLDGEYFTLQIRGRERFEMFRELNEALELKDAQAGKEPGGSRAHSSHLKSKKGQSTSRHKKLMFKTEGPDSD```**Full length protein sequence of mouse P53**```>P53_MOUSE Cellular tumor antigen p53 OS=Mus musculus length=390MTAMEESQSDISLELPLSQETFSGLWKLLPPEDILPSPHCMDDLLLPQDVEEFFEGPSEALRVSGAPAAQDPVTETPGPVAPAPATPWPLSSFVPSQKTYQGNYGFHLGFLQSGTAKSVMCTYSPPLNKLFCQLAKTCPVQLWVSATPPAGSRVRAMAIYKKSQHMTEVVRRCPHHERCSDGDGLAPPQHLIRVEGNLYPEYLEDRQTFRHSVVVPYEPPEAGSEYTTIHYKYMCNSSCMGGMNRRPILTIITLEDSSGNLLGRDSFEVRVCACPGRDRRTEEENFRKKEVLCPELPPGSAKRALPTCTSASPPQKKKPLDGEYFTLKIRGRKRFEMFRELNEALELKDAHATEESGDSRAHSSYLKTKKGQSTSRHKKTMVKKVGPDSD``` ###Code # Example protein: P53: acts as a tumor suppressor in many cancers # Source: UniProt: https://www.uniprot.org/uniprot/P04637 human_p53 = 'TFSDLWKLLPENNV' # first 10 aa of the full length (393 aa) mouse_p53 = 'SQETFSGLWKLLPP' # first 10 aa of the full length (390 aa) ###Output _____no_output_____ ###Markdown Protein Similary and Alignment Algorithms Protein sequences are aligned and their smiliarity is scored. Pairwise Alignments **Global alignment: Needleman Wunsch Algorithm**- Assigns a score to every possible alignment- Finds alignments with highest scoreIt was the first application of **dynamic programming** - simplifies a decision by breaking it down into smaller problems (all alignments)- finds optimal solution (best aligment)**Local alignment (Smith-Waterman algorithm)* ###Code human_p53 = 'TFSDLWKLLPENNV' mouse_p53 = 'SQETFSGLWKLLPP' import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns def create_substution_matrix(residue_list, match_score=1, mismatch_score=-1): ''' This function creates a substituion matrix for residues: Arguments: - residue_list: A list of amino acid or dna/rna residues. - match_score: An integer number indicating match score. (default score = 1) - mismatch_score: An integer number indicating mismatch score. (default score = -1) ''' scoring_matrix = pd.DataFrame(index=residue_list, columns=residue_list) scoring_matrix = scoring_matrix.fillna(0) for residue_col in residue_list: for residue_row in residue_list: if residue_col == residue_row: scoring_matrix.loc[residue_col, residue_row] = match_score else: scoring_matrix.loc[residue_col, residue_row] = mismatch_score return scoring_matrix scoring_matrix = create_substution_matrix(amino_acid_list) def create_heatmap_from_matrix(matrix_name, filename='default', color='Blues'): sns.set(rc={"figure.figsize":(12,8)}) df = pd.DataFrame(matrix_name) g = sns.heatmap(df, annot=True, fmt='g', cmap=color) if not filename == 'default': g.get_figure().savefig(os.path.join(file_path, filename)) create_heatmap_from_matrix(scoring_matrix) def create_dynamic_prog_matrix(seq1, seq2, scoring_matrix, gap_penalty=-1): ''' This function creates a scorinf matrix for two sequences using dynamic programming: Arguments: - seq1: First amino acid sequence - seq2: Second amino acid sequence - scoring_matrix: Scoring matrix for amino acid - gap_penalty: An integer number indicating gap penalty/score. 
(default value = -1) ''' index_list = [0]+list(seq1) column_list = [0]+list(seq2) dp_matrix = pd.DataFrame(index=index_list, columns=column_list) dp_matrix = dp_matrix.fillna(0) for i, residue_seq1 in enumerate(list(seq1)): dp_matrix.iloc[i+1, 0] = (i+1)*-1 for j, residue_seq2 in enumerate(list(seq2)): dp_matrix.iloc[0, j+1] = (j+1)*-1 dp_matrix.loc[residue_seq1, residue_seq2] = scoring_matrix.loc[ residue_seq1, residue_seq2] scored_dp_matrix = _calculate_alignment_scores(dp_matrix, gap_penalty) return scored_dp_matrix def _calculate_alignment_scores(dp_matrix, gap_penalty): for i, rows in enumerate(dp_matrix.index.values): if not rows == 0: for j, cols in enumerate(dp_matrix.columns.values): if not cols == 0: current_score = dp_matrix.iloc[i, j] left_score = dp_matrix.iloc[i, j-1] + gap_penalty up_score = dp_matrix.iloc[i-1, j] + gap_penalty diag_score = dp_matrix.iloc[i-1, j-1] + current_score high_score = max([left_score, up_score, diag_score]) dp_matrix.iloc[i, j] = high_score return(dp_matrix) scored_dp_matrix = create_dynamic_prog_matrix(human_p53, mouse_p53, scoring_matrix) print(scored_dp_matrix) def trace_best_alignment(scored_dp_matrix, match_score=1, mismatch_score=-1, gap_penalty=-1): ''' This function traces back the best alignment. Diagonal arrow is a match or mismatch. Horizontal arrows introduce gap ("-") in the row and vertical arrows introduce gaps in the column. Arguments: - scored_dp_matrix: scored matrix for two sequences. - match_score: An integer number indicating match score. (default score = 1) - mismatch_score: An integer number indicating mismatch score. (default score = -1) - gap_penalty: An integer number indicating gap penalty/score. (default score = -1) ''' i = len(scored_dp_matrix.index.values)-1 j = len(scored_dp_matrix.columns.values)-1 row_residue_list = [] col_residue_list = [] match_positions = [] print("Trackback type:\n") while i > 0 and j > 0: current_score = scored_dp_matrix.iloc[i, j] left_score = scored_dp_matrix.iloc[i, j-1] up_score = scored_dp_matrix.iloc[i-1, j] diag_score = scored_dp_matrix.iloc[i-1, j-1] row_val = scored_dp_matrix.index.values[i] col_val = scored_dp_matrix.columns.values[j] trackback_type = "" if i > 1 and j > 1 and (current_score == diag_score + match_score and row_val == col_val): trackback_type = "diagonal_match" row_val = scored_dp_matrix.index.values[i] col_val = scored_dp_matrix.columns.values[j] i -= 1 j -= 1 match_positions.append(row_val) elif i > 1 and j > 1 and (current_score == diag_score + mismatch_score and row_val != col_val): trackback_type = "diagonal_mismatch" row_val = scored_dp_matrix.index.values[i] col_val = scored_dp_matrix.columns.values[j] i -= 1 j -= 1 match_positions.append(row_val) elif i > 0 and (current_score == up_score + gap_penalty): trackback_type = "up" row_val = scored_dp_matrix.index.values[i] col_val = '-' i -= 1 # match_Score -= 1 elif j > 0 and (current_score == left_score + gap_penalty): trackback_type = "left" col_val = scored_dp_matrix.columns.values[j] row_val = '-' j -= 1 # match_Score -= 1 else: row_val = scored_dp_matrix.index.values[i] col_val = scored_dp_matrix.columns.values[j] i -= 1 j -= 1 match_positions.append(row_val) print(trackback_type) row_residue_list.append(row_val) col_residue_list.append(col_val) print("Total aligned positions: {}".format(len(match_positions))) col_seq = ''.join(map(str, col_residue_list[::-1])) row_seq = ''.join(map(str, row_residue_list[::-1])) return col_seq, row_seq aligned_seq1, aligned_seq2 = trace_best_alignment(scored_dp_matrix) 
print('Optimal global alignment of the given sequences is:\n{}\n{}'.format( aligned_seq1, aligned_seq2)) create_heatmap_from_matrix(scored_dp_matrix) ###Output Optimal global alignment of the given sequences is: TFSGLWKLLPP--- TFSDLWKLLPENNV ###Markdown Substituion Matrix for ProteinsA substitution matrix used for sequence alignment of proteins are used to score alignments between evolutionarily divergent protein sequences. The most popular metrices is BLOSUM (BLOcks SUbstitution Matrix) matrix, [provided by NCBI](https://www.ncbi.nlm.nih.gov/Class/BLAST/BLOSUM62.txt).This matrix gives a similarity score to appropriately align similar (even when not identical) aa based on their bio-chemical properties. ###Code with open('blosum62.txt', 'r') as in_fh: print(in_fh.read()) # Format BLOSUM subsitution file to pandas matrix def read_blosum_file_to_matrix(blosum_file): ''' Creates a matrix from NCBI BLOSUM file. Arguments: - blosum_file: provide local file BLOSUM substitution matrix. (Download the current version: https://www.ncbi.nlm.nih.gov/Class/BLAST/BLOSUM62.txt) ''' header_list= ['A', 'R', 'N', 'D', 'C', 'Q', 'E', 'G', 'H', 'I', 'L', 'K', 'M', 'F', 'P', 'S', 'T', 'W', 'Y', 'V', 'B', 'Z', 'X', '*'] # * is gap blosum = pd.read_csv(blosum_file, skiprows=6, delim_whitespace=True) blosum = blosum.replace('NaN', 0) blosum.columns = header_list blosum.index = header_list return blosum blosum = read_blosum_file_to_matrix('blosum62.txt') print(blosum) ## Calling the function create_dynamic_prog_matrix # with blosum substituion matrix and gap penalty -4 scored_dp_matrix_blosum = create_dynamic_prog_matrix( human_p53, mouse_p53, blosum, gap_penalty=-4) print(scored_dp_matrix_blosum) ## Edited the previous function trace_best_alignment ## by removing match and mismatch score def trace_best_alignment_with_blosum(scored_dp_matrix, gap_penalty=-4): ''' This function traces back the best alignment. Diagonal arrow is a match or mismatch. Horizontal arrows introduce gap ("-") in the row and vertical arrows introduce gaps in the column. Arguments: - scored_dp_matrix: scored matrix for two sequences. - gap_penalty: An integer number indicating gap penalty/score. 
(default gap penalty in NCBI BLOSUM62 = -4) ''' i = len(scored_dp_matrix.index.values)-1 j = len(scored_dp_matrix.columns.values)-1 row_residue_list = [] col_residue_list = [] match_positions = [] print("\nTrackback type:") while i > 0 and j > 0: current_score = scored_dp_matrix.iloc[i, j] left_score = scored_dp_matrix.iloc[i, j-1] up_score = scored_dp_matrix.iloc[i-1, j] diag_score = scored_dp_matrix.iloc[i-1, j-1] row_val = scored_dp_matrix.index.values[i] col_val = scored_dp_matrix.columns.values[j] trackback_type = "" if i > 1 and j > 1 and current_score == diag_score: trackback_type = "diagonal_match" row_val = scored_dp_matrix.index.values[i] col_val = scored_dp_matrix.columns.values[j] i -= 1 j -= 1 match_positions.append(row_val) elif i > 0 and (current_score == up_score + gap_penalty): trackback_type = "up" row_val = scored_dp_matrix.index.values[i] col_val = '-' i -= 1 elif j > 0 and (current_score == left_score + gap_penalty): trackback_type = "left" col_val = scored_dp_matrix.columns.values[j] row_val = '-' j -= 1 else: trackback_type = "diagonal_match" row_val = scored_dp_matrix.index.values[i] col_val = scored_dp_matrix.columns.values[j] i -= 1 j -= 1 match_positions.append(row_val) print(trackback_type) row_residue_list.append(row_val) col_residue_list.append(col_val) print("\nTotal aligned positions: {}".format(len(match_positions))) col_seq = ''.join(map(str, col_residue_list[::-1])) row_seq = ''.join(map(str, row_residue_list[::-1])) return col_seq, row_seq print('Input sequences:\n{}\n{}'.format(mouse_p53, human_p53)) aligned_seq1, aligned_seq2 = trace_best_alignment_with_blosum(scored_dp_matrix_blosum) print('Optimal global alignment of the given sequences using BLOSUM62 is:\n{}\n{}'.format( aligned_seq1, aligned_seq2)) create_heatmap_from_matrix(scored_dp_matrix_blosum) ###Output Input sequences: SQETFSGLWKLLPP TFSDLWKLLPENNV Trackback type: up up up diagonal_match diagonal_match diagonal_match diagonal_match diagonal_match diagonal_match diagonal_match diagonal_match diagonal_match diagonal_match diagonal_match Total aligned positions: 11 Optimal global alignment of the given sequences using BLOSUM62 is: TFSGLWKLLPP--- TFSDLWKLLPENNV ###Markdown Local Alignment: Smith–Waterman AlgorithmA local alignment, instead of global alignment can be carried out to **finds subsequences** (rather than full length) that align the best.**Smith–Waterman algorithm** compares segments of all possible lengths and optimizes the similarity score rather than comparing the entire length. - It is a dynamic programming algorithm, and a **variation of Needleman-Wunsch** (Global) alignment algorithm.- It **sets negative scoring matrix cells to zero**, which makes local alignments visible.- **Traceback starts at the highest scoring matrix** cell and proceeds until a cell with score zero is encountered. - It has a much higher complexity in time and space, and often cannot be applied to large-scale problems. The Biopython CommunityThanks to the very active bioinformatics community of volunteers who have developed, improved and maintained the freely available Python package [Biopython](https://biopython.org/) ([License](https://github.com/biopython/biopython/blob/master/LICENSE.rst)).The Biopython tools for biological computation (including the different algorithms to handle biological sequences) allows the reproducibility of results obtained from different softwares. ###Code from Bio import pairwise2 alignments = pairwise2.align.localxx(human_p53, mouse_p53) alignments ###Output _____no_output_____
Quarterly_to_Monthly.ipynb
###Markdown ###Code !pip install datetime-quarter !pip install numpy import pandas as pd import numpy as np import datetime as dt from datetime import datetime import time import pytz, tzlocal import string import plotly.graph_objects as go from datequarter import DateQuarter from scipy.interpolate import CubicSpline # read and parse part of "Summary" worksheet def get_sum_region(start, size): emp = pd.DataFrame() for i in range(size): df1 = df_sum.iloc[5:26, start+i] emp = pd.concat([emp, df1], axis=0) emp.reset_index(drop = True, inplace = True) emp.columns = ['emp'] emp_p = pd.DataFrame() for i in range(size): df1 = df_sum.iloc[28:49, start+i] emp_p = pd.concat([emp_p, df1], axis=0) emp_p.reset_index(drop = True, inplace = True) emp_p.columns = ['emp_percent'] gdp = pd.DataFrame() for i in range(size): df1 = df_sum.iloc[53:74, start+i] gdp = pd.concat([gdp, df1], axis=0) gdp.reset_index(drop = True, inplace = True) gdp.columns = ['gdp'] gdp_p = pd.DataFrame() for i in range(size): df1 = df_sum.iloc[76:97, start+i] gdp_p = pd.concat([gdp_p, df1], axis=0) gdp_p.reset_index(drop = True, inplace = True) gdp_p.columns = ['gdp_percent'] pop = pd.DataFrame() for i in range(size): df1 = df_sum.iloc[101:122, start+i] pop = pd.concat([pop, df1], axis=0) pop.reset_index(drop = True, inplace = True) pop.columns = ['pop'] pop_p = pd.DataFrame() for i in range(size): df1 = df_sum.iloc[124:145, start+i] pop_p = pd.concat([pop_p, df1], axis=0) pop_p.reset_index(drop = True, inplace = True) pop_p.columns = ['pop_percent'] return(pd.concat([emp, emp_p, gdp, gdp_p, pop, pop_p], axis=1)) # Employment and GDP by sector for the region (df = US, NYPA, NYCY, etc) def get_sector(start, size, df): emp = pd.DataFrame() for i in range(size): df1 = df.iloc[5:26, start+i] emp = pd.concat([emp, df1], axis=0) emp.reset_index(drop = True, inplace = True) emp.columns = ['emp'] gdp = pd.DataFrame() for i in range(size): df1 = df.iloc[29:50, start+i] gdp = pd.concat([gdp, df1], axis=0) gdp.reset_index(drop = True, inplace = True) gdp.columns = ['gdp'] return(pd.concat([emp, gdp], axis=1)) # Imcome info by region (df = US, NYPA, NYCY, etc) def get_income(begin, end, df): # average income from employment (thousands US$) avg_income = pd.DataFrame(df.iloc[52, begin:end]) avg_income.columns = ['avg_income'] avg_income.reset_index(drop=True, inplace=True) # Personal disposable income (million US$) disp_income = pd.DataFrame(df.iloc[53, begin:end]) disp_income.columns = ['disp_income'] disp_income.reset_index(drop=True, inplace=True) # Real personal disposable income (million US$, constant 2012 prices) rdisp_income = pd.DataFrame(df.iloc[54, begin:end]) rdisp_income.columns = ['rdisp_income'] rdisp_income.reset_index(drop=True, inplace=True) # Retail sales (millions US$) retail = pd.DataFrame(df.iloc[55, begin:end]) retail.columns = ['retail'] retail.reset_index(drop=True, inplace=True) # Real retail sales (millions US$, constant 2012 prices) r_retail = pd.DataFrame(df.iloc[56, begin:end]) r_retail.columns = ['r_retail'] r_retail.reset_index(drop=True, inplace=True) return(pd.concat([avg_income, disp_income, rdisp_income, retail, r_retail], axis=1)) # Read in the Oxford County dataset # create functions to automate the data processing file_path = '/content/drive/MyDrive/Oxford_2021Q2/County dataset.xlsm' xls = pd.ExcelFile(file_path) # to read all sheets to a map sheet_to_df_map = {} for sheet_name in xls.sheet_names: sheet_to_df_map[sheet_name] = xls.parse(sheet_name) # Get the "Summary" worksheet df_sum = 
sheet_to_df_map['Summary'] # create time and region columns # Year column year = pd.DataFrame(np.arange(1980,2036), columns = ['year']) year.reset_index(drop=True, inplace=True) # Quarter column, repeat quarter 1-4 for each year q = pd.DataFrame(np.arange(1, 5), columns = ['quarter']) q.reset_index(drop=True, inplace=True) quarter = pd.concat([q]*56, ignore_index=True) # Year column for quarterly report, i.e., repeat every year 4 times year_q = pd.DataFrame(np.repeat(year.values,4,axis=0), columns = ['year']) # Get the 20 regions and US as region_id=0 (total 21 regions) region = pd.DataFrame(df_sum.iloc[5:26, 0]) region.reset_index(drop = True, inplace = True) region.columns = ['regions'] region['region_id'] = np.arange(len(region)) # repeat regions for each year: 21*56 regions = pd.concat([region]*56, ignore_index=True) # repeat year for each region year_region = pd.DataFrame(np.repeat(year.values,21,axis=0), columns = ['year']) # repeat 21 region list for each quarter regions_q = pd.concat([region]*224, ignore_index=True) # repeat each year 84 times for each region-quarter combination year_region_q = pd.DataFrame(np.repeat(year.values,84,axis=0), columns = ['year']) # repeat quarter 1-4 and year combination for each region quarter_region = pd.DataFrame(np.repeat(quarter.values,21,axis=0), columns = ['quarter']) # read "Summary", start from column 3, from year 1980 to 2035 df_data = get_sum_region(3, 56) sum_region = pd.concat([year_region, regions, df_data], axis=1) # read "Summary", Quarterly data, start from column 60, from year 1980Q1 to 2035Q4 df_data = get_sum_region(60, 224) sum_region_q = pd.concat([year_region_q, quarter_region, regions_q, df_data], axis=1) # Get the "US" worksheet df_us = sheet_to_df_map['US'] # read in the list of sectiors, 21 including "Total" as sector 0 sector = pd.DataFrame(df_us.iloc[5:26, 0]) sector.reset_index(drop = True, inplace = True) sector.columns = ['sectors'] sector['sector_id'] = np.arange(len(sector)) # sector set repeat for each year sectors = pd.concat([sector]*56, ignore_index=True) year_sector = pd.DataFrame(np.repeat(year.values,21,axis=0), columns = ['year']) # sector set repeat for each year-quarter combination sectors_q = pd.concat([sector]*224, ignore_index=True) # repeat each year 84 times for each quarter-sector combination year_sector_q = pd.DataFrame(np.repeat(year.values,84,axis=0), columns = ['year']) # repeat each quarter 21 times for each sector quarter_sector = pd.DataFrame(np.repeat(quarter.values,21,axis=0), columns = ['quarter']) # "US", "NYPA", "NYCT", "NY01", ... 
all have the same data structure region_list = xls.sheet_names sector_region = pd.DataFrame() sector_region_q = pd.DataFrame() income_region = pd.DataFrame() income_region_q = pd.DataFrame() for r in region_list[2:23]: df_r = sheet_to_df_map[r] df_data = get_sector(3, 56, df_r) df1 = pd.concat([year_sector, sectors, df_data], axis=1) df1['region']= r sector_region = pd.concat([sector_region, df1], axis=0) df_data = get_sector(60, 224, df_r) df2 = pd.concat([year_sector_q, quarter_sector, sectors_q, df_data], axis=1) df2['region']= r sector_region_q = pd.concat([sector_region_q, df2], axis=0) df_data = get_income(3, 59, df_r) df3 = pd.concat([year, df_data], axis=1) df3['region'] = r income_region = pd.concat([income_region, df3], axis=0) df_data = get_income(60, 284, df_r) df4 = pd.concat([year_q, quarter, df_data], axis=1) df4['region'] = r income_region_q = pd.concat([income_region_q, df4], axis=0) ## Explore variables ## region and region ID #region.head() ## list of region worksheet names #region_list ## sector and sector ID #sector.head() sum_region_q.head() # Summary sheet ## Employment/GDP by sector by region, in US, NYPA NYCY and all the region sheets #sector_region_q.head() ## Incomes and Spending by region, in US, NYPA NYCY and all the region sheets #income_region_q.head() # functions used in interpolation method def get_ym(i, start_year): year = int(i/12) month = int(i%12+1) if (month == 0 and year>0): year = year -1 year = start_year + year ym = str(year)+'-'+str(month).zfill(2) return(ym) def get_ym_next(i, start_year): i = i+1 year = int(i/12) month = int(i%12+1) if (month == 0 and year>0): year = year -1 year = start_year + year ym = str(year)+'-'+str(month).zfill(2) return(ym) def interpolate_sum(df, var, start_year): series2 = df.loc[:, ['q_date', var]] series2.columns = ['datetime', var] series2[var] = pd.to_numeric(series2[var]) series2.loc[-1] = [str(start_year-1)+"-12-31", 0] # adding a row series2.index = series2.index + 1 # shifting index series2 = series2.sort_index() # sorting by index series2['datetime'] = pd.to_datetime(series2['datetime']) # variable value for each quarter is incremental values for each quarter # Generate cumulative series series2[var] = series2[var].cumsum() x = [] for i in range(len(series2.datetime)): x.append((series2.datetime[i].year-start_year)*12 + series2.datetime[i].month) y = np.array(series2[var], dtype = 'float') f = CubicSpline(x, y, bc_type='natural') num_years = 2035 - start_year + 1 x_new = np.linspace(0, num_years*12, num_years*12+1) y_new = f(x_new) x_ym = [] for i in range(len(x_new)): x_ym.append(get_ym(x_new[i], start_year)) output_variable = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_ym), pd.DataFrame(y_new)], axis = 1) output_variable.columns = ['timeid', 'datetime',var] output_variable['datetime'] = pd.to_datetime(output_variable['datetime']) output_variable['datetime'] = output_variable['datetime'] - dt.timedelta(days = 1) output_variable[var] = output_variable[var].diff() output_variable = output_variable.loc[output_variable["timeid"] > 0, ['datetime', var]] return(output_variable) def interpolate_average(df, var, start_year, factor): series3 = df.loc[:, ['month_id', var]] series3.columns = ['datetime', var] series3[var] = pd.to_numeric(series3[var]*3/factor) series3.loc[-1] = [0, 0] # adding a row series3.index = series3.index + 1 # shifting index series3= series3.sort_index() # sorting by index series3[var] = series3[var].cumsum() x = [] x = np.array(series3['datetime'], dtype = 'int') y = np.array(series3[var], 
dtype = 'float') f = CubicSpline(x, y, bc_type='natural') num_years = 2035 - start_year + 1 x_new = np.linspace(0, num_years*12, num_years*12+1) y_dev = f(x_new, 1)*factor x_datetime = [] for i in range(len(x_new)): x_datetime.append(get_ym_next(x_new[i], start_year)) monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_dev)], axis = 1) monthly_avg.columns = ['time_id','datetime', var] monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime']) monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1) return(monthly_avg) def monthly_average(df, var, start_year, factor): series3 = df.loc[:, ['month_id', var]] series3.columns = ['datetime', var] series3[var] = pd.to_numeric(series3[var]*3/factor) series3.loc[-1] = [0, 0] # adding a row series3.index = series3.index + 1 # shifting index series3= series3.sort_index() # sorting by index series3[var] = series3[var].cumsum() x = [] x = np.array(series3['datetime'], dtype = 'int') y = np.array(series3[var], dtype = 'float') f = CubicSpline(x, y, bc_type='natural') num_years = 2035 - start_year + 1 x_new = np.linspace(0, num_years*12, num_years*12+1) x_datetime = [] y_month = [] for m in range(len(x_new)): y_month.append(np.mean(f(np.linspace(m, m+1, 1001), 1))*factor) x_datetime.append(get_ym_next(x_new[m], start_year)) monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_month)], axis = 1) monthly_avg.columns = ['time_id','datetime', var] monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime']) monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1) return(monthly_avg) # GDP by region # the year the data became available start_year = 2001 # region ID region_ID = 2 # select variable for interpolation var = "gdp" df2 = sum_region_q.loc[(sum_region_q["region_id"] == region_ID) & (sum_region_q["year"]>=start_year), ["year", "quarter", "gdp", "region_id"]] df2.reset_index(drop = True, inplace = True) end_date = [] for i in range(len(df2)): end_date.append(DateQuarter(df2.iloc[i].year, df2.iloc[i].quarter).end_date()) df2['q_date'] = pd.DataFrame(end_date) output_variable = interpolate_sum(df2, var, start_year ) output_variable.head() # GDP by region by sector: US, NYPA, NYCY and other regions worksheets # the year the data became available start_year = 1980 # region worksheet name region_worksheet = 'US' # select sector sector_ID = 1 # select the varialbe to be interpolated var = "gdp" # select the dataset input_variable = sector_region_q df2 = input_variable.loc[(input_variable["region"] == region_worksheet) & (input_variable["year"]>=start_year) & (input_variable["sector_id"]==sector_ID), ["year", "quarter", var]] df2.reset_index(drop = True, inplace = True) end_date = [] for i in range(len(df2)): end_date.append(DateQuarter(df2.iloc[i].year, df2.iloc[i].quarter).end_date()) df2['q_date'] = pd.DataFrame(end_date) output_variable = interpolate_sum(df2, var, start_year) output_variable.head() # Incomes and Spending # By region by sector: US, NYPA, NYCY and other regions worksheets # the year the data became available start_year = 1992 # region worksheet name region_worksheet = 'US' # select the dataset input_variable = income_region_q # select variable: avg_income disp_income rdisp_income retail r_retail var = "retail" df2 = input_variable.loc[(input_variable["region"] == region_worksheet) & (input_variable["year"]>=start_year), ["year", "quarter", var]] df2.reset_index(drop = True, inplace = True) end_date = [] for i in 
range(len(df2)): end_date.append(DateQuarter(df2.iloc[i].year, df2.iloc[i].quarter).end_date()) df2['q_date'] = pd.DataFrame(end_date) output_variable = interpolate_sum(df2, var, start_year) output_variable.head() # using interpolate_average & monthly_average function # Select the variable: emp or pop var = "emp" region_ID = 0 # the year the data became available if region_ID == 0: start_year = 1980 else: start_year = 1990 df3 = sum_region_q.loc[(sum_region_q["region_id"] == region_ID) & (sum_region_q["year"]>=start_year), ["year", "quarter", var]] df3.reset_index(drop = True, inplace = True) end_date = [] month_id = [] for i in range(len(df3)): month_id.append((df3.iloc[i].year-start_year)*12+(df3.iloc[i].quarter)*3) end_date.append(DateQuarter(df3.iloc[i].year, df3.iloc[i].quarter).end_date()) df3['month_id'] = pd.DataFrame(month_id) df3['datetime'] = pd.DataFrame(end_date) df3['datetime'] = pd.to_datetime(df3['datetime']) factor = 10000 interpolated_monthly = monthly_average(df3, var, start_year, factor) interpolated_quarterly = interpolate_average(df3, var, start_year, factor) print(interpolated_monthly.head()) print(interpolated_quarterly.head()) # Plotting the results # remove scrolling output window feature from IPython.core.display import display, HTML display(HTML("<style>div.output_scroll { height: 44em; }</style>")) import plotly.graph_objects as go # set report variable report_variable = "Employment" # set plot title plot_title = "Monthly "+report_variable # Create traces fig = go.Figure() fig.add_trace(go.Scatter(x=interpolated_quarterly.time_id[0:72], y=interpolated_quarterly[var][0:72], mode='lines', line_shape = 'spline', name='smooth')) fig.add_trace(go.Scatter(x=interpolated_monthly.time_id[0:72], y=interpolated_monthly[var][0:72], mode='lines', line_shape = 'hv', name='monthly')) fig.add_trace(go.Scatter(x=df3.month_id[0:24], y=df3[var][0:24], mode='lines', line_shape = 'vh', name='quarterly')) fig.update_layout( title = plot_title, xaxis_title="Time", yaxis_title=report_variable, font=dict( family="Courier New, monospace", size=12, color="RebeccaPurple" ) ) fig.show() # Plotting the results # remove scrolling output window feature from IPython.core.display import display, HTML display(HTML("<style>div.output_scroll { height: 44em; }</style>")) import plotly.graph_objects as go # set report variable report_variable = "Employment" # set plot title plot_title = region.iloc[region_ID].regions + " Monthly "+ report_variable fig = go.Figure() # Create traces fig = go.Figure() fig.add_trace(go.Scatter(x=interpolated_quarterly.time_id, y=interpolated_quarterly.emp, mode='lines', line_shape = 'spline', name='smooth')) fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=interpolated_monthly.emp, mode='lines', line_shape = 'hv', name='monthly')) fig.add_trace(go.Scatter(x=df3.month_id, y=df3.emp, mode='lines', line_shape = 'vh', name='quarterly')) fig.update_layout( title = plot_title, xaxis_title="Time", yaxis_title=report_variable, font=dict( family="Courier New, monospace", size=12, color="RebeccaPurple" ) ) fig.show() def interpolate_average2(df, var, start_year, end_year, factor): series3 = df.loc[:, ['month_id', var]] series3.columns = ['datetime', var] series3[var] = pd.to_numeric(series3[var]*3/factor) series3.loc[-1] = [0, 0] # adding a row series3.index = series3.index + 1 # shifting index series3= series3.sort_index() # sorting by index series3[var] = series3[var].cumsum() x = [] x = np.array(series3['datetime'], dtype = 'int') y = np.array(series3[var], dtype = 
'float') f = CubicSpline(x, y, bc_type='natural') num_years = end_year - start_year + 1 x_new = np.linspace(0, num_years*12, num_years*12+1) y_dev = f(x_new, 1)*factor x_datetime = [] for i in range(len(x_new)): x_datetime.append(get_ym_next(x_new[i], start_year)) monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_dev)], axis = 1) monthly_avg.columns = ['time_id','datetime', var] monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime']) monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1) return(monthly_avg) def monthly_average2(df, var, start_year, end_year, factor): series3 = df.loc[:, ['month_id', var]] series3.columns = ['datetime', var] series3[var] = pd.to_numeric(series3[var]*3/factor) series3.loc[-1] = [0, 0] # adding a row series3.index = series3.index + 1 # shifting index series3= series3.sort_index() # sorting by index series3[var] = series3[var].cumsum() x = [] x = np.array(series3['datetime'], dtype = 'int') y = np.array(series3[var], dtype = 'float') f = CubicSpline(x, y, bc_type='natural') num_years = end_year - start_year + 1 x_new = np.linspace(0, num_years*12, num_years*12+1) x_datetime = [] y_month = [] for m in range(len(x_new)): y_month.append(np.mean(f(np.linspace(m, m+1, 1001), 1))*factor) x_datetime.append(get_ym_next(x_new[m], start_year)) monthly_avg = pd.concat([pd.DataFrame(x_new), pd.DataFrame(x_datetime), pd.DataFrame(y_month)], axis = 1) monthly_avg.columns = ['time_id','datetime', var] monthly_avg['datetime'] = pd.to_datetime(monthly_avg['datetime']) monthly_avg['datetime'] = monthly_avg['datetime'] - dt.timedelta(days = 1) return(monthly_avg) file_path = '/content/drive/MyDrive/Interpolate/BLS_employment.xlsx' xls = pd.ExcelFile(file_path) # to read all sheets to a map sheet_to_df_map = {} for sheet_name in xls.sheet_names: sheet_to_df_map[sheet_name] = xls.parse(sheet_name) # Get the "Summary" worksheet df_quarter = sheet_to_df_map['Quarterly'] df_quarter.reset_index(drop=True, inplace=True) df_month = sheet_to_df_map['Monthly'] df_month.reset_index(drop=True, inplace=True) year = pd.DataFrame(np.arange(1990,2021), columns = ['year']) year.reset_index(drop=True, inplace=True) # Quarter column, repeat quarter 1-4 for each year q = pd.DataFrame(np.arange(1, 5), columns = ['quarter']) q.reset_index(drop=True, inplace=True) quarter = pd.concat([q]*31, ignore_index=True) # Year column for quarterly report, i.e., repeat every year 4 times year_q = pd.DataFrame(np.repeat(year.values,4,axis=0), columns = ['year']) year_q.reset_index(drop=True, inplace=True) # read Quarterly data quarterly_us = pd.concat([year_q, quarter, df_quarter['US'][0:124]], axis=1) quarterly_us.columns = ['year', 'quarter', 'US'] quarterly_us.reset_index(drop=True, inplace=True) quarterly_us = quarterly_us.astype({"year": int, "quarter":int, "US":object}).copy() end_date = [] month_id = [] start_year = 1990 end_year = 2020 for i in range(len(quarterly_us)): month_id.append((quarterly_us.iloc[i].year-start_year)*12+(quarterly_us.iloc[i].quarter)*3) end_date.append(DateQuarter(quarterly_us.iloc[i].year, quarterly_us.iloc[i].quarter).end_date()) quarterly_us['month_id'] = pd.DataFrame(month_id) quarterly_us['datetime'] = pd.DataFrame(end_date) quarterly_us['datetime'] = pd.to_datetime(quarterly_us['datetime']) factor = 10000 var = "US" interpolated_monthly = monthly_average2(quarterly_us, var, start_year, end_year, factor) interpolated_quarterly = interpolate_average2(quarterly_us, var, start_year, end_year, factor) 
print(interpolated_monthly.head()) print(interpolated_quarterly.head()) # Plotting the results # remove scrolling output window feature from IPython.core.display import display, HTML display(HTML("<style>div.output_scroll { height: 44em; }</style>")) import plotly.graph_objects as go # set report variable report_variable = "Employment" # set plot title plot_title = " Monthly "+ report_variable fig = go.Figure() # Create traces fig = go.Figure() fig.add_trace(go.Scatter(x=interpolated_quarterly.time_id, y=interpolated_quarterly["US"], mode='lines', line_shape = 'spline', name='smooth')) fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=interpolated_monthly["US"], mode='lines', line_shape = 'hv', name='monthly')) fig.add_trace(go.Scatter(x=quarterly_us.month_id, y=quarterly_us["US"], mode='lines', line_shape = 'vh', name='quarterly')) fig.update_layout( title = plot_title, xaxis_title="Time", yaxis_title=report_variable, font=dict( family="Courier New, monospace", size=12, color="RebeccaPurple" ) ) fig.show() # Plotting the results # remove scrolling output window feature from IPython.core.display import display, HTML display(HTML("<style>div.output_scroll { height: 44em; }</style>")) import plotly.graph_objects as go # set report variable report_variable = "Employment" # set plot title plot_title = " Monthly "+ report_variable fig = go.Figure() # Create traces fig = go.Figure() fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=df_month["US"], mode='lines', line_shape = 'spline', name='true monthly')) fig.add_trace(go.Scatter(x=interpolated_monthly.time_id, y=interpolated_monthly["US"], mode='lines', line_shape = 'spline', name='estimated monthly')) fig.update_layout( title = plot_title, xaxis_title="Time", yaxis_title=report_variable, font=dict( family="Courier New, monospace", size=12, color="RebeccaPurple" ) ) fig.show() from google.colab import files #output_file_name = input('Output File Name: ') #interpolated_monthly.to_csv(output_file_name+'.csv') #files.download(output_file_name+'.csv') ###Output _____no_output_____
Audio_dispersion.ipynb
###Markdown Audio DispersionIn this project I am going to experiment audio dispersion using Phyphox app. I am interested to do a simple task; using FFT to cleanout audio data by denoising lower frequencies from the signal in jupyter notebook and Phyphox. Phyphox is an app that uses the sensors in a smartphone to perform physics experiments. The app is developed by RWTH Aachen University, Germany. More about phyphox can be found on [PhyPhox](http://phyphox.org) website.\Audio dispersion is also done with Broadband Transmision Method . This method requires the measurements of a reference velocity to obtain values for the acoustic dispersion in different medium. In our case we will do it simply by taking fourier transform in Jupyter Notebook. The goal is to clean out noisy Audio signal by Audio dispersion. We will take noisy audio data with Phyphox and than by using fourer transform we will be able to remove some noisy frequencies. Waves and Dispersion: When a wave is a distrubance in a medium (like waves in water) which propagate through the medeum,without moving of the medium.A travelling wave can be described by the following equation:\ $ y(x,t) = asin(kx−ωt) $ \Here,\y(x,t) : The height of the wave at position x, and time t \a : The amplitude of the wave\k : The wave number\ω : The angular frequency. The speed at which the wave propagate is given by: v = w/k . More about traveling waves can be found [Here](https://openstax.org/books/university-physics-volume-1/pages/16-1-traveling-waves) If multiple waves such as\$ y_1= a_1sin(k_1x−ω_1t)$,\$y_2 = a_2sin(k_2x−ω_2t)$\............ \ travels together , the equation of the resultant wave is given by their sum:\$ y_{sum} = a_1sin(k_1x−ω_1t)+a_2sin(k_2x−ω_2t)+......$The amplitude of the resultant wave depends on the frquency of each wave. Lets for $ w_1$ and $ w_2 $ If $ ω_1/k_1 = ω_2/k_2 $ , both waves propagate at the same speed, there will be no dispersion,the shape of the resultant function does not change as it moves forward. But when waves of different frequencies propagate at different speeds it causes Dispersion, and the shape of the resultant wave changes as it moves through medium. The plots of the wave fronts of the above waves are given below : Plot of $ y_1 $ ![image](Data/sin1.PNG) Plot of $y_2$ ![image](Data/sin2.PNG) Plot of $ y_{sum} $ Algorithm :In this project , I am going to use Fourier Transform to experiment the dispersion in sound wave. Fourer transform will allow us to convert the audio signal from the time domain to the frequency domain. Fourier transform is a means of mapping a signal, in the time or space domain into its spectrum in the frequency domain. The time and frequency domains are just alternative ways of representing signals and the Fourier transform is the mathematical relationship between the two representations. A change of signal in one domain would also affect the signal in the other domain, but not necessarily in the same way. 
Conversion from time Domain to frequency domain by FT![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAApoAAAGoCAYAAADmVkXEAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAAA7DAcdvqGQAAP+lSURBVHhe7J0FgFzV1cf/O+667hYnRoAQnGAFinvxtrS0UAFqtP1qtKVUseLeAm2BFneHCHHfbNbdd2Z3dty+c+6b2WxCEiK7G7u/5O3Me/P0vnvP/V87NyNJQCKRSCQSiUQiGWVUqU+JRCKRSCQSiWRUkUJTIpFIJBKJRDImSKEpkUgkEolEIhkTpNCUSCQSiUQikYwJUmhKJBKJRCKRSMYEKTQlEolEIpFIJGOCFJoSiUQikUgkkjFBCk2JRCKRSCQSyZgghaZEIpFIJBKJZEyQQlMikUgkEolEMiZIoSmRSCQSiUQiGROk0JRIJBKJRCKRjAlSaEokEolEIpFIxgQpNCUSiUQikUgkY4IUmhKJRCKRSCSSMUEKTYlEIpFIJBLJmCCFpkQikUgkEolkTJBCUyKRSCQSiUQyJkihKZFIJBKJRCIZE6TQlEgkEolEIpGMCVJoSiQSiUQikUjGBCk0JRKJRCKRSCRjghSaEolEIpFIJJIxQQpNiUQikUgkEsmYIIWmRCKRSCQSiWRMkEJTIpFIJBKJRDImSKEpkUgkEolEIhkTpNCUSCQSiUQikYwJUmhKJBKJRCKRSMYEKTQlEolEIpFIJGOCFJoSiUQikUgkkjFBCk2JRCKRSCQSyZgghaZEIpFIJBKJZEyQQlMikUgkEolEMiZIoSmRSCQSiUQiGROk0JRIJBKJRCKRjAlSaEokEolEIpFIxgQpNCUSiUQikUgkY4IUmhKJRCKRSCSSMUEKTYlEIpFIJBLJmCCFpkSyCySTSSSSidSaRCKRSCSSHSGFpkSyC3T39KCutlYITolEIpFIJDtGCk2JZCdhcblwwQK8/PIr4rsUmxKJRCKR7BgpNCWSnYBFpT8YwtvvvIe33n4bLa3tqV8kEolEIpFsDyk0JZKdgIVm1YYNWLVqFRoaGvHpp5/IGk2JRCKRSL4AKTQlkp0gEAzjrbfeRE9vNyKRCN5+6y10dHRIsSmRSCQSyQ6QQlMi+QJYTG7YsB5vvPEmwqEwkvRv1epVeOeddxFPxFN7SSQSiUQi2RopNCWSHcAiMxQK4cUXX0RLS6sQmbzN6x0QfTX7entlraZEIpFIJNtBCk2JZAckSEQuXbYMb7zxBhKJOAx6fWp7AitXrsQHH34khaZEIpFIJNtBCk2JZAe0tbXh6aefhsfrgdudCY1GI8Smw25HPB7Dv//9b9TU1EixKZFIJBLJNpBCUyLZDtz/csGCBRgaGsIlF12E679xHQry82E0W/CN667D97/3PdhsNrz51tsY9PlSR0kkEolEIkmTkZRVMRLJNvH5A6jasF7UXlqtVmi0alx//Y2oqd2ERx95BJUVFejv70dvXx+ysrJQXlaWOlIikUgkEgkjazQlku1gMhgwY/p0VFZWIicnB3YSnCo1JRkqm+n1erFeRuJy1syZoqZTltkkEolEItkSKTQlku2gJlFpILGpUqmQkZFBWzKg02rEb7zOC//GopMXiUQikUgkWyKFpkSyC4RDIah12xaVihiVSCQSiUSSRgpNiWSnUXxoZtA/iUQikUgkX4wUmhLJTiMFpkQikUgku4IUmhLJLmAwGlPfJBKJRCKRfBFSaEoku0AkKuc2l0gkEolkZ5FCUyLZBfQ6ZdS5RCKRSCSSL0YKTYlkF5C+MiUSiUQi2Xmk0JRIdoFQMJj6JpFIJBKJ5IuQQlMi2WlkbaZEIpFIJLuCFJoSiUQikUgkkjFBCk2JRCKRSCQSyZgghaZEIpFIJBKJZEyQQlMikUgkEolEMiZIoSmRSCQSiUQiGROk0JRIJBKJRCKRjAlSaEokEolEIpFIxgQpNCUSiUQikUgkY4IUmhKJRCKRSCQHITyt8lhPrSyFpkQikUgkEslBRCweh8frRUNDA/yB4JiKTSk0JRKJRCKRSA5gWEgmEgkMDQ2hqqoKL770Eh548CHc/8CD6OjsHFOhmUEnlxM4SyRfACeTaCyCyy79ChqaW/Hk44/ikGnTkJGRkdpDIpFIJJJ9B8634ok4wqEwOklMssDcsGED1qxZi5rGZgR8g5SvRfHMP57CzJkzoVarU0eOLrJGUyKRSCQSiWQ/h4Ul11oGAn60tLZi2bJleOnlV0St5d333IMHHnoYr731LgYjwPwzzsL8086EWqWi4xKpM4wNskZTItkJZI2mRCKRSPZVwuEwOru6sGnTJlRXV6Ouro7WuxElDRlPZMCdlYWCwhIUlZYjt7CUPgvwybvv4JF7/oxHH3oAs2bNGrMaTSk0JZKdQApNiUQikewLpGVbPB5Hf3+/aA7n2st169ahpbUNyQw1sgtKSFgWo6SsHFm5+XC5s2A2m6HRaMXxBpMJH7z1Kp586C489vCDmDVTCk2JZK8ihaZEIpFI9hZJ+hePxTE05EdvXx+am5qwloRl9cYqNDY2wTvogzMrDxUTJ6Ji0lSUVU6Gy5UJrUYj8i9uUh+JwWjCh++8JoWmRLKvIIWmRCKRSMYTznfYDVF/vwdNTY2oqalBTW09icxGDAWCiCZVMBpMyC8sQlFpGYrLKpGZlQO9wUDHAgk6dnsSTwpNiWQfQwrN/Z+0qZPvTCKR7MuwrQoEg6jeuBELFy4Uo8Tb29sRikSRU1CCrOwcZOcXIie/CJmZWXA4nCQujchQZSCZ+Hzt5bYwms149/WX8M9H7sVjj0ihKZHsdaTQ3H+J07vzhKJo7xkkQ6yC26SD06yDXqeFSr4/iUSyF+G8hZdwJEJ5TBzNDQ34bMkSLFiwAC0tzejt9yA7rwhTD5mB4vIJKC6tgNPphFZv3CNhaLSa8fbL/8PTj/4djz/6kBSaEsneRgrN/ZNoPIHmvkF8tL4Bi9ZsRCgaQa7DisMnlOLIaROQ77ZAI9+hRCIZR9KyKxAKo7uzQ4wQ37SpBqvXrkVHWxsCkZgYtDNh0lQUV0xCcVk5cvPyoNeboKLCMtdY7kyt5Y6QQlMi2ceQQnP/g0VmTXsv/vPpSnxKIrO2vRMR7tOk1aA8y4kTZh+C0+bNxPSiTOjle5RIJGNIWmqFwmG0so/LpUuxcuUqNDY1obu7G1kFpbDabMgvmYC83Dzk5GTDnZkFvdEsBCAPBkqQTRstxlNoSoftEonkgCNBRr1twI9/vr8Ez3+4CLUtHRiKJISh9gQiWNPcgec+WoyXPl2B1r6h1FESiUQyOrCsZHGZXrpITL722mu45eab8Y1vfAO33XYbPl28FJmFFTjjwqtx/uXX4MLLrsH8U07HrDmHo7CkHEazVZyL3RiNpsgcb6TQlEgkBxRs4IPROD7b1IYPV1ejxxdCIKmBVqWCXp0BkyYDXH/Z1u/BR6s24JO19eilfWTTjkQi2VO4SZu
dp/f19oopH1944b/44Y9vxVVXXY3f/f52rK2uQ+mkGbjmhh/guz/9Nc7/ylU46rgTUcp9L91ZMOh1oqVsNJrHv4hQKEiF8rEXsFJoSiSSAwo2zvUdffjP+5+huacXCZKVNq0KcyZW4uxjjsShEytgSInNpo4uvL1kJTa0dSGWatqSSCSSnSVdYxmJRtHW3o4PPvwY993/IH5y66347ve+j/sfeRx1rZ2omDEPF1xzA2645Wf4yle/iaOPPxllZZWwmK1K03hCOc94wjWldFGldD6GSKEpkUgOKCKxON5bWYXVm2oQiiVhzEigIMuOy06eh699+RhcPP9IFOcVI5GhxlAsgfVNrVhV3QCfP5Q6g0QikewYFoUs1AYHB/DpggX41S9/hauvuhq33HwT/vX8C0iq9Zh/5oW44rrv4qqv3YBzL7gYRxx5FIpLymGx2IQHDHbAzk3i4y0wxxspNCUSyQED2+vOoRBe/2w1wpEwuFHIaNDipMOm47hZFSjOceLQiUU4buYkOM1m6NVaDIUiWNXQhpa+QSQObHsvkUh2g3StZSIRRyQagcfrxcqVK/GXv/4VF19yGb761a/ify+9CIszCxddfT1u+cXt+Mo3voNj55+KiZOmICc3D1arFRq1RjkfG5qDyNZIoSmRSA4I2G6H43Es2dSG1p5e0RRu0qgxsagA86ZPgdOog5EsXp7NiLlTijG1OE8cNRSOoaq5C2vqWoRDZIlEImEUcZlAIBhAdXU1Xn7lVdz++9tx5RVX4hvf+CZeff1NuLPz8bVv34xf/ul+fPcnv8L8085EcVEpTAYL1BrNfuGZJEOl5pksUmujjxSaEonkwIAyhYHBISxetxGBSByRDC0Zez0On1KJygI31GRHdSQsTRoVJhRkYWpZMYx6PeKJOHq9Xmxo6kDPgF/UikokkoOP4ZpLWtiBOovLhx9+GF//2tfxVVp++ctfY+nKtZgw4zAxmOf7P/0dvnrjD3HsSaeLJnGjyUJnUQby8Hn29VrLaCym3OcYI4WmRCI5IGB7WdM+gOUb6+CLRqFV61GY6cTUijLYzQYy/2z4EyQ4k8i1GXHY5BKU5GaK5nWPfwhLNtVhHQ8K2tdzB4lEMqqkay7jtHT19uG5//wHF1x4CU488UT84pe/RE1jM44+/UL8+Pf34Jb/ux3nXXIVZs6Zi5zcfJjMFqVGkM+TFpj7CUmyinoqbHOt61jWu0qhKZFIDgiS9K+2qRmtPf30TU2CMoIJBXmYWpQjmsy1lAGoMni7GiadBmUkMkuz2Vk7T1OpQhsd19TZi3Bs/PzVcZ7EGRNPkxnljG4/yqQkkv0RTm/pJRaPooeEZdXGjXjqqadEk/iRhx+OH9/6U3T09OLCK6/DXx96Gn97+GlcdNlXUFFeLuYU51l71GqNGNCzX0MFb5aYaq0utWFskEJTIpEcEEQTSSysb4U34BcldJfJiGnlBch1maAV9ZZI/QWJTRXyLCbMLM5Dlt1CllCFwVAMDS1d8Hh8qb3GDpaT3GzV2t2Hd5fX4MUFG/DaqkYsbehG31BIZIISiWR0EYU6KtB1dHbj1Vdfxa23/hxXXHEFTjvtS7j9jjsQ15px9bdvxh8eeBZ/e4CE53XfQuWkqWQvNIiFIoo7oAOISDgs/GgqtZmyj6ZEIjkAGVm7sCfw0b0DQ2hu7yQVqYGKhKbbZsaEwjxY1ZubhTibiKSaiWwWA8qLs5BJQjMjQ41wUo0NHf1o7vWKfccSdii/qL4Tf/rvR/i/f/wXtz72LH7z6DP4+wtv4bOaVgTl8HeJZLcZaVe4SZwXr28IH3z4Ib55/bdw/HHH4JprrsU/nv038son49d33I27H3seP/jZb3DameehpLBA1FrGInHhguhAR6tRjeVYICk0JRLJ3qGvvx+PPvYYfvu736F60yaxbXdFJx/zSU0n2j0DVDDXQqVSIcdhR0VeFjT0nZvMGXYuwk3oDM8SlGc1Ic9mFd+5RrTbF0LfUHDMmrD5rAHKuBbXd+D+lz7C/z5ZhObuXgRCUbT0+/D+mo146PVP8emaOjEYQcpNiWTnSQtLrnkc8vvR2dWFRQsX4dZbf4o5s2fj3HPOxVtvvIFZRxyNO+5+CC++vQDf/9EvMWXmYbBYbYhGY8PnkYweUmhKJJK9gsViRUXlBHR0dmHpsuVk5KPo7e1FdfUm9PT07HQzFWcJQcpcNjY2o7mrR5TMnXo1Kovzke0yitpLbkpXZVCpXRyhoKZtLqcDkypKkcPN58SAbxBVDe3o9vjHROSxgG3o8uCfJCbfW7kWwWQGYlAjkIiLGYxCCRUWr6/C/a+8jwU1bYjQc0kkkm0jhCUtbCt42sfunl4sW74C/3z6afzyl7/CeeddgK9ccSU++PgTHHPS6fjDfU/hqZfex62/+QsOPeo4OkMGAv4hcR4xIGYsq/UOYsZVaPLLlCUFiUTC6HVanHDcsfjbX/+CSy6+CFqtFtWbavH7P/wB37/pJuEQOV07wZ/btR20va/fh7rWXqi4g34yhmynE4dUlIkmdD5q+NhURsJ/eYvNbEJpbhYsJj20GXH4g2FUt7Sjy+MVGdhoMzAUwoerarG8tgHxDC3diAZGnQ45VgvcJh30GQmEE0msbmzDG0urUNc9iKi0mRLJ52CbwOKyq7sHS5Yuw9/uvBPXXnstvvrVr+H22+/AhrpmnHDWpbjtzofx+7sfx7dvuhWHH3EkTCazmOM7FAiJcxzMRLnVZBzsy7gKTa7K5uayYCiEWCy248xDIpEc8HANgp6Elo5EJnPUvLn47W234aqrrkJ2drb4fcPGaqxeswZeL4m/VMbAdiNtO/jvp6trsa6+ASq9DapEHDl2M2aU5JB4FDtvrp2k7yw+Gf5rMWhQnOdCrtsNDW0PxyJo7PeiZcCPyCj3kwzHE9jQ2Ye3VqxBa7+HbiUGq1GHE2ZNxa1XX4wLTjwKdhKbfH++UAxLN9Zi5aYGEr+R1BkkkoMPTuac1rnAGSFh5Gcd0dePjdXVeOLJJ3HllVfivPPOx5133Y2hSBznX3kd/vjAP/GrO+7EBRddjAmTppC4NInjudXkYBeXW8N2UK/XpcvgY8K4CU1+uY2NjXj//fexeNEi1FKmwNXc/oBf6YuUyjQkEsnBCwvLosICnHrKKSgqKhLbli35DDfffAueffZZDA4OIkYZRr+nH4M+5XuMBOHKljZ0ejxif5Nejwn5WchyWaAXWxTSFmakrWHj6rRaUJztItGpRZx+6hvwUUbmQWwUZwni2tHewQBeXbQOGxtb6V7UMGpUmFmSi3OOmY0vHTYBZx8zByfMngmnyYg47dHY1YPlNS3ooeOkeZQcjLA4DAYD6OEuNZs24YMPP8LDjz6OG278Ds47/wL8+a93IUNvwde/9yM8+PSL+PP9T+Dciy9Dbl4eErEEopSG2belZNtwEVxHBX2ly8DYKc1xE5rBUJCMdx+GfD60trZi5Yrl+PSTj7FkyVLU1tSITCMUComIJUWnRKLAKeFgSg1s8EYuzFVXXYlHHn4Qp59+uqiZ8JENefKJf9C2R9
HS1gFPNIa+fi+iGVoq0EZJMBowvbQIBp1muIl8ZDiOPDcbwCyLEaVZLpjZcTGt89znzT0+DAZGryYxwrWZHX1YumETvIEgXT+JXKcVJx82E4dVFsGuU6Mi24Fjpk8i0euGBgn4w3HUtnahs88rXLJIJAc66ZpLrrVsb2/H8uUr8O//PIff3/5H/PgnP8X//fLX+O+LL8PszMLV19+EO+59FL/4/V9xzgWXIq+gCPFIDCF/8KAYKT4aRFIzA42dxFQYN6EZCYcQJiGpM1qQk1eI7JxcGI0mePr7sXbdOnyyYDHWrFlLGUe7yEhYdIqm9YMqm5VIUi54EknhAscXjMAzFITHH0aIjCiPjD7YUoRKpUZ5eQVKS0tF6dtAQrKsohLvvvsuGuvr0N0zgIbWdiQG+qGKBmE3G1FQmA9ujOewYokWZXHJJyPENjKubF14zWjQIstlg8OoF/42A+Ew2gYGMRCOjlpYB+hcy6ub0dTZKfpcGtQqTC7MxbwZk+B2mMRoeJtBjanluZhUlAe9Wkv3m4Gq5jYsr23GYDCsnEgiOcBgi8biMhAIiFHirAeee/55/PTnP8fV134Vf/zjn7F85WqUTpmNb9zyS/zyj/fh6zf+ECeecgbyC4qhJvvANZcxKnDKSqpdIy6EJhe+x1YKjpvQ5IeJU6m+J5CBpNGJoomHYNrM2SirnIicnBzEo2E0NNRj4aef4NNPP8WatWvRQUbZ6x0QopObyGQkkhzoRKgk3ur1Y2lDO95csQnPfLQSD76xCI++twxvrdiI5Y2d6PUOCcF5MDCydjNdC2kkoXnOWWfg1VdfwdHHHoPG3iF0NdchtPAFqJvXwES/F7nMonAbCASRINuRsQ3bweV4HofOgtRpNcMt/GkqtY/cfO6hY0djII5oNqfCwvqGVnq/CboXwG7UYmppAQrdFgjHS3RhHgWfazVhanEecu1G2i+KAcp815HQ7OgboPOI00kk+zWcj3N+HgwqrZzcpW75ipX434sv4c9/uRPXf+sG/OZ3f8DaqhocfvzJ+Omvb8fv7nwQ11z3bcyYOR0mk1GcI71IdhMKOo1ajQzVZts6Vqh/RaS+jyk8yqu1tY0MbgRGswMRKsMHohkwWWzIzsqC0+mGwWwWAwO432YXiUx2e9Lb249IJIwI+7diD/akvNUcOGMcMBLJ1iQScbzw/AvwDgzivHPPGR6sMloEwzFsbO/FywvX4B9vfoznP/qMxGUVlpLBXbKhBh+tqSax0omAPwxnjhs2nUb4iDxY4bCPUoa1eGMjPli2CnGDDfrgII449kScfNgUVK9aiaXLl8NMGVOmy6UcI/4qx6ZJ0nd/JIGNzZ2oa+8WU0G6THpMKS5AcZYLes2ehXGYCg8rq5vx8oLP0E+CU63VY2pRHs44chZ9ZpPAVEr8fEcajWLbNrV2oam7V4jkYCSCCSWFqMjLhFZ98L5vyf4Ni8Iw5eV9vX3YWL0Ja1avxhtvvIHXXn8TL730Ct7/4CPESPxMPfRInHvpV3Hx5dfihJNORnZekXBNJgbyUCFQMjpo1FosX/ABejvbcdFFF8Fus6V+GX32qtUKR6Lo9QbQ0hdCb1ANkysf+SWVKJswBZOmToPb7SaBGsCG9RuwfNlSLFu2HDU1Neju6cbQ0JCobpdIDgS4aXVpTRueensR/vX+Aiyvb0V3IIZwIgMkTeCnqN7tj2BNbR2eeP9TPPHaAqyqZz+LB2eJXghFWnxDITR29iCqtyOjbDbMs0/E1NISaMmybdhYhWeefgbvv/eeEHHsyqO/nwuuEc71hsUdj0x3k7DMtJpEIZYZGPShp9uDcGjPms85cw2FI9jQ0IauAT/CVMA26Y2YWFSIaYV5ojaVm815pDnfi44+c20WTMjPg91qQ0KtQb8/iOauPgwFx9+BO18vvUgku0K6xjFCArGruxsrqeD34osv4W9/+xtu+dFP8Ne77sXr73yIQEyF4049Cz/+1e347o9/g0uv+Cpmz54Fi9mESCgiutzJvH5siJIVVGm4m87YsveEJkVANvPcfBWnzDJKpf4ebxDtAxSxNDZYswtRVDYRxeUTkFdQCL1eL6rZV61ahYULFgp3J21tbcLlCfvSkv05JfsrgXAMy2rb8cQ7n+LFRcvQ1DeEGHQw6QzIttuR57TBaTZAr6H0otGjzePDv956H3e/9D42dnnGxN/jfgE9tycYQ21rBwJJFa3GYLOYUJLrht5kEnMY/+KXv6RMa7bI8No7O0TfL65F6erqGj5HBtkfs06DHKcdNpNRbOv2+dHqIdsS3bMBQfxqBgIhbGrvxEAwJGpmnCYNJpXkUEHaoijd1I78VUV/HA4rpk4oRYHbTgY6g+4hjprWTrKPA8LOjQfReAID/hA6+31op6XfHxbumaSFlewIzoO55pFdGbZ3dGD16jV47fU38MADD+LHt/4cv/jVb/DBJ4vhys7DuZdei1/84R585wc/x0lfOgsFJRUwGCn9EXJQ8DhBYaxR8/STYys190rTud3hhIkjFD0c90ti85Uu0fPfcDSWGvxA6zoTbLS/3WanzMMMIx2XTMSFwOTMYmBgAIFgUEwOLzoD0zl4+jkOuLEOPMnBxVg0nYcozq6q78Q/3lmA99dsINEZh1qtgYNK87NLc3HctAk4fGIJCl1WkM4ksRKlDJ++JOOoI/GiVmkxrYAdjhsOyvhe3e7BCx9/hl7fkHj+Sfm5OPeoOchxmWHQaFCQl4uCggJhF3p6+/DB++9j+bJlKCwqQmFhIVpaW1FXV48kCcDeQBwbSdD10vvl/uSF2ZmYRWHvthqFfdodWLCtberBCwtWoMszQIUFNSbkZ+LcYw5DUbZdEZfi7hTbJ9Y1KgTjSVQ1tqOxu1e4Z+F3z303uSlfS+cYK1jI+snw1rb34aM1m/Dx2jqsaOhAK4lNdoZvN+qgFV2XUgdIDnq4oBuJREVrQVNjoxjMs2jRYrz//nt49Y238cqrr6G9qxe5+cU45cxzcMnV38SXzjoHZeUThP/GtE9tyfii0Wqx5JN34entxgXnnw/bGDadj5vQ5I6/I4Uml1wUMagU6hUTq8BGnUv+SrV7nIRnAlFooDXZYXe64HQ4YLVYoNVqhMjk6eq6u7uFM3jfkF+ITi4RseBUU2YjBadkNBhtocnGtaajD//+aDneW7UBfRR3ucm3wGnF/JmTcc6xc3Dc9ErMqijGtLIi5GW6KU0k4BnwwheOIUZpht366FU6TCzJh57Sw8EW1T/c2IR3l63CUJRsRiKKuVMmYP7hh8Bp0Cp9H1MBwraAp5s8bM5hOPTQOSguLhHN5IsWL8a//vUvDFLB1pyZj+ZeD1p7+kkgxsQAoRkU9vluOzRc1bgbsP1atK4ery5eLmb8sej1dM4SnH7kDDhItHGzuSIwN5+fZWc4lhT9RbkwEYjwm05iSkkhJhXmwKhTnNuPNpzV9/sCWLaxEc9/uhL/W7AcSzc1oKqpjURvK7oH/CS6Lci2m8dU7Er2DziPHSBbuKm6GgsXLcJbb7+Fl15+GR99shBr1m3AUCiKvJJKHDv/N
Bn//857j//vtx4003aYe2dcsWPP74o/jxD7+P73/ve3L+OJqamrTem0cYBz7MejC+JBPjC3OlnkdLHQ6gw+PD8vXlaPbJxG2EyprxMMHz9NRk3fLyjNNPl/4iRtvZk08/jTPOOBMP/ee/O9XFcH08yPFZ5Wt0iQctWK/VzkLaA7MgVsbbM869FP/zh39h7rGnqN9aenC49eqLcNeffon2zo7BNh3ankYKkREyrnPL5iHlxXTQ8vxgwYjSebqE6ezqFILSonpEKvuKioWnuxtrttaiWkjegMy8WdwkZuOyUzCpKEelWsx4WmZx+0ZuW5WUmCCftGEnmIxdE4nPRbBwbdwGMmgh7Zc4KvhOiUhUpBCVPvpNFBJm5x7P8h7Oivq64XK365I2dU0HMaQR0zcndTTtdhssdIIevK4YUrkUvMYKL/91SV1Id0oKl2ot6gi8V2aqFKIM3ruHoM4pdS9rXB60+LhET+E53xkl70hAnG6JyZKJUFK7dVuFzIK7lWROHD8excX5KCgpw9iyYi2ThqZWOZoNwijfOZlgfqjv0fgo3VFJ3zvQh6a2DrR2yUSB8ZeMINU0yo7p5TknDvK5F+naJYLBcMD9LJcwjPOegMuXVH+g9XhJYT6u/drX8NgjD+Pvf/+7DPInIzo2Hu9/9DF+9/vf45prvoZf/OIXePvtt1HX0KDbjKneq7xzT98bxuiCTerBSbOnoTgjTaX43VJ331+1Dh+ur9pv4zgHSq70mLrBnGD95te/woIFC3Q7U6qPVNfWq/X68uXLgxNEAwdbnWRvEi9dSyT3wz0E0St9thpyCjjJNklUt88lfw+dvkdTLWmPt1hVBY3jwJgxY3H8yadj1SeL0es3+MTWrVtQX1slvxtCppFDBNwyrg9te1TJU0nncEFeN9BDW5idJanDhRFthazw3NHH43JLJ9eDPums/dIZ0oJ7U02zZLakWwhPpJCS3IQYFORkqUsj/UEyvtfj0z3GSWJs1nhEyqx9ELvdSRrLsaaOJMFlbdNAp7/HqGhsl5RKknTRN2SCkNoU3RZSfhDyx4zrD/Sh1dkDt1Re6aYlCjsXGncD8nn96JP7aNBDK3Ia2ig+K77BDoHaYGr1LffRl2asvL/X54SX+RBc2t8bUKLZ4u1Gs6tbOiM3uqUvIuGLj+wXIm2HNToCcfLOvkA3qmpq1Wcpd1/iEh13raEuYmycBencjz05CW5pqI1tbUKC6efT0KtkePSDRjI2EGFInfuFurp83XC6nOhjHlMvVLKAmo2GjHUYoHlp5OdnwSi3PX8/O24erNPc7WXWrFn48U9+jHvuvQe/k8GdDtkdjkS8/uZbcv0nuO3W2/DXv/5FSOc7qKiq1tlqmGwewJCynz0mD+Pyc5AULTVeirKhowtPvfURmijVDN62P8H6SYMJej8oLCyUlhChA+3GTZvw29/9TiaR27TP8nX7g3stH1z1kVt/DgQOxTa2c5oTklNl0mGMOz0jwytGHcy6zU/uyve1W+/APx9agIzMbLkagbdeeR63X38F7vn7n9Q1FtsOJ2qctA03hu45znNyB+6kd7BgxKd7XDqmq54ut0v19eiH7pn3V2JDxVYpb6MxREVFYGxWIgoyU5UmRAixol5Fe2cneqQAbDa77iCwd1sKfvoZhpOUnAy71QpPiH7dDgihkHgnOWxynSQvAgEhTp1CFjqd7egJkoadKotUVD/VBGS2QvKVlJgonT6JF7uBnTuCoeCvJhGmMQ0dynP/8EBkHFz0ISqEba8g76ZU2dfWgva2dok/paZGfgwM9CLJFquW/GxknW4f6hqadNeCvJxspMshGSW/SRnJqJqYZJfrOUgUkuXpcqGttRX9MhtkWiMj+nVJkS6hdLs+Qa+UdZc/gC5fQIjy0N7OLBM2rM/Pm9EG5hXBTik+Ng552Vnq//D2O76FP//lz/jF//w/nHzKaWoR+fjjT+Bbt38bd9z+LfzzH//Ahx9+iJbWFunYelXq+ul6F8ZoBUs9P8WOU+ZMRnJigkxE5UpEDJZt2IinP1gLj04+9z9YP82DvgWLC/Pwi1/8HPf++35MmDBeN5Gg8dAF552LF196SVcvzHp4wNfFyIPH6GVPwKXjgUAPO3X9HhUl+RDsYvu8XWqAciiD9docQ/Vcxumrr7sN//f7f2D2YXPVVzJXDh9/6D4sWviO+uyWBqT3D8f41O6k1fmQcOU7vcwcLBhZiaYMxiwwZmmTTK2anG6s2lKDDbWN6NVlU4nQQB9ShPDlZ6YjJyUR0ZLh1JUMSMPhEjStzJISue2k3C8NShPASjBYEfYMKouUd9BIxxJvuMNwup1CjMzZBCVufeC8hm54Itl56T6ykgo5fD6XkCfurNMvBMt4gmAqOVPxk4RyZhQbq+k3rMl5Q0icg2GZ54PSVvnUbSApQWUYkuZeCW8QoUk2wwh5dug1LsX3dfegoyeAhq5O+KTToREPVQGYvrxUKxzWeL3P6ySp70GczYrC/DxY6K5I7lFjn4hoiVOMEm97gl2Xg7msz12GuITOcuH2k1R/UEi+cavOCCGzXslbuqcydWTNPGAM6didub3PoOk20s5ZKh3mDwdMskmYgzp1VLOzMnHqSSfgRz/4Ln79q1/i9m9+A8ccfTSaW9pw73334pvfuh0//clPdT/08s1b0MGJlBBSSp0O+EH+EAD7Iko1Z40doxMrroW4BqLxysIP8e7aGng/NaHa/2AfFicTokS6hpN+hasOxxxzDE6UydGGjZvVMwQH3uaWVrR1dOgk6IDGPuxODhhI/9PPcpO+mCsuCVz9C/ZRTg4f4a5lJzBnYuNiMHH6bMw79kTNP4/Xi0Ufvos//t9PsPDtV9RGgv0yJY37tn8egN/FcdjoK9inEOp7W8bU4QLH8CATUbXAYPUYNgSZwPCDgy/JRndEPPyBfmyvrsF7n2xVA6BOuiwKDMAa2aeSgYnZySjlsnmwQPv6e3UbRr/fK51kNFKS7IiRimFIBwV7Wui8fzBnjeXpSAt3GTJ0m9rbu3QHG6McJN70DyksMjE+Ggk0GpJ7+iTrnN1cOudWlNKohYCqzmEwWOpDeYR4ef1+2CRsq90qM0lKA+XHodFlXHjsFK8dIMlMFtIdFReLTpcH3dKJ7OK2EOz8AqGrmld8Oa3WnVKBe7j3eL80HvmBP9mFZKelZCBO3tXd14tGIUM98p705CRYbbYg+ZUj+Enn7LR6zUlPVp3F1vZOOGVmZuyFbiSTKg4cgGloxE7P3xtAq8ulBlKU4Cnp3jmqguCFYDj7Asw/LqOpA+ch0NR/fmbuMUyyyYOgJHvy5Cm45pprhHD+H373m1/hG9/4BmbPmYOt2yvw29/+FnfccTv+9Kc/4cUXXsKadevR1t4m9d0fJp2jGCxdSjVvOO1oTCvOk/psTEBXVTXhzSUrsKXBqX3LaIRZP9k2SktK8N3vfAc/+N63Vfe9ob4e//znP/CLn/8ca9es1frHlQ1zr+wDpj4yjcFNNg5VaBlbHdIJGn1fHA4eA5N9CeYTBSs82K4Tk5Lw57v+g1//+V+YPv
twqfs92Fy+Cffe9Re8886bcl+3tp19AU9nswyP/ep2KiO3QK8FZJxmezN3I9zXMOwW6D5pZCbDI9IKWYjc13kg2ioETDI2IgptHh8+2bIF7c5WxET0Ij6SBdyLFEsUynJTkJrEfX8lghH9KtXr7GhDwOeCncvmFrswchaA3CA3sdvbo86PAesn/xjPxco1LsnHCWnqIGHydWu8+wZIh+R3Oeg3L0uIl75xoE9dmbhl5uEN9GBQFSj4SfdN9I9F5942iwU2SjQZR/O+XcGMl4ngd8bDTgtu+fT6hdhKuOYARlVHrSqhzw55B8kUf6WUtsXjQZfTrVtCemRc5A5tcULwc1KSkWSPR79Uvo6WVnR2uTUvMtJSBmfDg+EKO+Re6RFCru0paUhIsINbd3YJiaQagXRt4LbwbDhWSbduZymTDHmlvLcXHpJladCM+85gSvb98jkbFZc/enah88K8/VQ09iE0/ODBjon7Wc+bNw/XXXcdfvbTn+B//9//4Oqrr0Z6egbeeutt/P6Pf8T/+5//we//8Bc8/8KL2F5RoXrN7HjChHN0QctDynVScTq+ctJ8pMlkjXWpW/qD1z9Zj7eWr0MHvTqM4mILrZ88iLy8PFxz9TW4+KKLdE9+pnPZJ8vwwgsvYMuWzToYHxCIjpVu+hBsM1JeXMHhGBETEytjh6FmxpwwVLmGh7wcVJDMioyKxpiJU5GekS1tA3A4ZAyOicbqpR/C6+pSYVJtbY1OzPr2drVM3uOS8ZBtjLsbJaVnBS8PSFm59tkqnNm+WQ/oarKmuhLbNq5Ft/AJjt3DPSEeQYkmd42B+mFs6+4X3hGjhWaNIGHjknIfUq3xyE1JQJHNghSbVRsGCVtvbz88QlB8/dG6Tzif0zAlc/TguV7ZO9DpMncosghhovIvB3ZTZ45FzgLiG+lmKDtFyGhwlsyfu4W8dHn8hnNViYU5PyAZ9KmVcT/i6INTwjViumdgDNhFxFjipCLGCBHs112GWDE0dvrJt8oxRDwYvMrM1zR4JY4utwfNnm51Ts/rekgIOemZupzGb43NrUoEExMS1Fcpb+FSt/FGglJdw6iKBDo5yaHPdXR0Gj4k5UskDbZiY2CJ5TK7QTQZUI+UJX1vUtJr5O8ORMlzDGcQQ35XDL3G7+a10PPdhGktP1IwGzy3Pc3LzcURhx+OG66/Dn/8w+/wi5//DCefdKLWvzfeeBV3/utu/OpXv8YjjzyiLjnoLomuMDjQj2Scw9g1tCzlk7uWzZ9ahjkTSpGiNoUxaJM2+urHn+Dhd1eh0eWGvydg9CnGo6MOTIsJ9oHcUnX+/PnyWaTtvl4G07vvvht//vNfUFlZKXcNwOl0Sn106yRoNCIm1iD+hyKoE292EbS2NoUFOukJdx27BbZXSvLZ11JIkJdfiOtv/Q6+/aP/RWpmtvTHDXj0P/fh1z+9HR9/8NZgv8z2sLuS/76BAHw+g2hSvS7RQYNjgTxKKWpou9xzGL6qGZ/uHj+6ujqwYe0q/Ofef+KO6y/HE4/+By5pw5s3bVS+MpwYMaLJLPdKxtU31AtT71Cr5oKkGETIjCs5thd5QlZyk2wozMlVa+a4SCEozGQpgC6nS/UzuUSbmJRoWIgzRLMQzM/dKNihoFNyFiafTLTbESlkjsZK1NNkozTkmQapSrTZkSsVzPTfSSlmuzeA5k4hTpI2gndz15teKVjqWZBQ2IRo6jNMTjCquwsOTRzO6IKJBkEkmF1CRHpYkY236T/NjyGBs3CDlJi/ol/S1NbWDqfHBU+vNIigSDE6oh/F2QlITbCqS6OO9k6Na1ZWhuSJVe8xCL1JNjmw8H0DKrVMS0mWWTPdL3l0+ZwO7yMkfnHCLxPiIhETKXSXhI7bWEqFpo6iS97RJ8T30whKqk18bplKbIJlFwo2WmYFr+/8uF4xToMwy39/ghahaWnpOOWUU/DjH/0I9957D37w/e9j2uSJ2L59G/52579wx7e/i5///Bd49LHHsOyTT9DU3KTSCe6fb5RJGPsLrD2pcdH4+hknYHJpESyR0jalg19XXYd/PvMK/vDkW3h/bSXqWjvQE/i0VH00Qkl0sF2wjVxw/gV46qmn8P0f/AA5OTk6gD3z7LN4/IknUV1Tq0aGbHdfdgeuMPYN6K+aZURQ0KHb/Aq83RSIhLE3YP1mnnJ8p8FVako6zr3gUhw+7xgV/kgmo6uzDWtWfYJtW8t1/P88kLz2B/pVH5RhR8VZkJWdo+2Ofbrf41Qd2z2B0W659N6rJLKhrhqrVyzDk/99AD/99i34xrWX4YG7/oza6orBwXHLli3DvkoxYkSToMQqIzsXDqsVKTHRSLJYUZySiClFBZgxbhyKab0ZK4ROCB9BI5Oenj60d7TDJ4WRYLOqYrMpHf3ykEIJZjZpFPUojV1xgI5Op3SehpsS3kKiGSsVzG6J0+V9ozJEqmibxJRW8Ua3zFnjAHw9Abi9fsTG0ZLbIpVTslrC2dNoCw1S0sT30SqeBkUUtdMNE+PJdxrvpSGNhB6UavKvcUaJnczMZCBwSnxcbp9U7G64uawv1/lshiNB85zEvquzUy3vYy3xyExPR2R0nDYCEwap5UEyJ4OLnEbFxqnks1eIo9PpRg+luxJWVFQMouNsKr02JJpGdeNMraOrS3VQdoRs4At1RpjokPhQIhn6BH8h9dLrct8XLQkoKf1ULEYORsfA/DRAdzQF+fk4++yz8UMhnb/+9a/x7W/dhqnTpqGmtgZPPP64SpX+9te/4bnnnlN9zrq6OpWChvU59x84+MwZn4cLjp2LsXnZsEbJRC4iGu3+AF5asgZ/WvA2/vXaItS3u/djbdtzmPWTB5fRi4uKkJCQoG2ZW/xt2bxJ6l+NLh16vB6sXLUGtbW1Kgna3zBa1Y62daiAZD+0H6VvXylAPR/opRu6cB/xpcDsk36WLgsnTZ2Kr97wDZxy5nm6xXRLSzPefvUFvPTMY2hqqFMy6XG7lHTuqm8OSLtxu5z6G/Uxk1IzjR/k1h2T0t0rL/b/LgmrqnI7Fi18D4/+51788f9+hF/+6Bu47x+/x4olH+qOQylpGZgweRpKx4xT+4tjjztO7S2GEyNCNJmJnPHaY/owefxYnDx9MgoyUpEZ2Y+Z6Yk4btZ0JMdHIkmYaDaX04Xw6c7RwiadLpeSksiBAOwOhxA3+U3aDA8WgJY5/7IhBRvTnsIs/xgJ1CFkltbd9B/ZTf0q/i4HDVro+zfBEqsufUg8qf7jEcLX6eqCx+9VnUPGgJ2u6mfKM9RzpJ6MGT0Sx53Al++iAprQ9wdvYcVmZezxc3cdIbZykfmgREV+N0igUaQMkc+a3Q0d+HZ0dqHV14uu3giJmxEP/k1NTBYCb0FMn1R6tyHGT5LvcUI2ST6NcEMwYEgdeTVKiC3zi26RuB2fjzqkQjT5Gw237PGxmm+UaBpW+1HokzDpt8/fE9r4jE8uyX8hNL3GfSwHjZ18N9JsEMehsSalNmj1UMhTu/HK4YY5mJvn3PI0KzMTc484Al+96ir86pf/i1/93y9x7nnnIz0jE
ytWrsTd99yDH//4x/jjH/6Il195BWvXrZOOrkUmZ0F3SaMhYYcIWHIWGVS+ctRkXH3G8ZhUnA+71H/qK3dJXV+5dQs+WLlBiOYOg7kDEaybbLNsd2eeeSb+3//+L+bOPUJXmerr6tXY7Y9/+hM6pc82+31T2jnSoP/jXTb5gxwcFwb66N6IfW4E/DF0q2eMC2q4Gu4W9gnYx9IKnXYY+r2vH2PGTsS1N9+Ocy68DKlC6OhF5JMlH6G2ulLvHwoKqhgGERstvUVqvp6z7/Z4ZFLKdvM55UVBCgU2rc2NWLV8CZ58+AH88iffkeMOPHb/nVi26GN1PWix2pBfVIp5x56Aa279Ln7y2ztxyrmXIik5DYUFBaouM5xtdESIpg6iwfPouDgk5WQjMz/f0FuUzpnLrJGSqeysstJTESdkhHqHLKSOrk50OT2wOZKRJr/FCNkazA6eSNhfGsEMjoiIVp94jG9Xl1Otshg6O1W+h9LERLsFaQ6ruu8huOe509cNt8+nUkOCLo3cThciAv3GjkBCGiSAYKWRg+HtZqGa7+dBpe7Y2BjJl17VhVRpHYMxg9KskC8G+5Qz40dSDt7v9PhUN9LbK53/gJEAS1Q/CrLS4RBi2U0reRkUGU/6FTWd/Gp8d4VgGNxZIYV+QiVuPo8Lfi93HBhQd1EWKW8+b4YQLaTUFxWLTnlXjxxm5Ta0W3nIO/k+851D323mm143jp0qsfzOybxayUr90cYt15jewABVHnYOj+SX5T0aoe1GDpYDdXemTZ+GW2+5Gf/3/36BG2+4AWeccYb6WF21ejX+/vd/4Gc/+x/cf/8D+GjRYnXK3dzcrJJOU88ojOEH2+mF86bglgtOx9xxxchOtCKBW9xKG6EiPtte9wFeFGZ7YVqp/8ftLnktXwasb9x2m7YwqgwRW7Zuw0cfL0JNTbV+J0aqLrLdRHAZ7RAD7RpIepjLUVIONKDVMUwu6KqHcVsYw4TU9EyMmzgFjsQk6Xv71MjYZrNq+w8Fx2f6nqaPZTYJGkwn28gVDPT4fcpBQsG2w/6cuvr1dbVYt3oFXn3pOdz5p1/h/374DTxw15+wcc1KlVwmJCajZMw4zDr8KJx3yVX4wf/7PX7yy7/gkq9chbKyUrV1oW9dsz2bn8OBESGaBO1nmBDrgBcxEf3IlgzlUrg30Ie26ip0C8GhsU1aWqreP9AfYRiutLZL5vYjNTVF7rdLJ0bCprcoSKf2OnsYkMRJl6blKxuj1ZGoy93UWaA0leETxnsiYIuLF/LokMeYdSQwUCfkbV1udd/DykNS1+U09Cso6dPN+1mIcqhaZPC9g5/m+RAwLKPw+cmKGKNueqgfwq0v2WkMgkHIh86Z5MTszJW+yX0kHC3+AJySru4eklSZ7cqRYrMgOyURdks83EL26bCZboASJR9o5DAYK41HEBpfEkL+SiI0gDghPBZ7Anr7ubTPrTJ7VC/VKmQzir5Hg8/zCfrtYm56/D26y9HOA0/wfBf58Wkwh/hPOlC5n6oOLklnk8zgqmvrUNfQiE6ZMHDGaObjUPDdRgijF4y7eXAphluSnnPOOfjud7+Nn//0J7j22q/juOOPQ1KSAwsXLsQf/vAH/OnPf8F/H34Eb771NtatX4+2Nhp4UVVhdKf1YIBFOrsTphThtvNOwrWnzMXpsyZiemkhxmZnID0uCtaQpnQwgRPqo48+Wt106fK6YK1Mgh7493148823tK2xz2poaEBnZ+eQdj9M2Dc6VgcOpI/gahc3C+EkOzI6BlFqH2BUOq/Xs5t9axh7C+Y7l8T5aZVx9YijjkV2bqEUwRC6JcXQTSNBuuKRL9xuOiopJfibjGe64idFx3FYTkguuSz+ydKP8PKCp3D33/+A3/38u/jrL3+MN15egI6ODiSnpGHM+ImYf9zJuOiKr+Hmb/8EP/vt33Dz7T9UZ/RUzerp9stkJIC+QK9OCI2xcXgxYkRTIZlFqSDdopPYFOZlI1FYNa0XSe4yMzMQJ8SEJITSqFYarnS5dOtDOr+2cktDBsN/2ljMwyiovQHnfSoZDGa2RcgvpUfUc6GeJvVEg3fp7/SJmZpo1yVsvpPPdnm70drplorRrVJNt9cLj69b0hKrrn+0ocu9zGxys8GYhhbwrgpbwlYdUn1AiIa8ky4QmHaPm2QuVA+KhNmgxQaZkEO+U5E/ILMq7hHv8fvh6e6Fr086I106j0BeejqyU0kqKcV1qa5pSlIiLFIenIExfbuKmnGReRAp75DBMyZWJwoUJ5Kgs6Fx2T1ekh7L5Mu7iG5h5upeQ8hsl+QTxf4aY5WO8h6mVxO8a5iRCd7DvOC/HiGZnR2d2LJlO9av34jyzduwZWsFtldUqTrADqK5c9iaV5/zutEIk3RSkjRl8mTd7vL22+/Ad7/zbXzlsq9g2tTJaG1twysvv4x//uOfuPOf/8Rjjz+BRYsWqysO7nEd1uccHmjZyCfJ5uFjc/G1kw7H7ReeglvPOQlfPf5wjMlPC9bFgxtmGk874wx873vfw6yZM/U7696TTz6pnhRoGKrtN3jsS5BfahSCHkIOLXBcMs6ozmQYAxnlQVc24VY/cmC95li4q/qt+ps+v0oo9XtsvPALGUMFvJ1CE+p2Utdz+dKP8dyTD+Nff/kt/vLLn+DBO/+Ad155DtWVFVK+FpSMmYCjjjsFl159A2757k9xy3d+jMuuvh5zjzoGKSmpEodeFZ5pPIJ1gcKZSOEmI9EfjUgrZOLUOtbMbBnkaCiSk5WhCrQEE0x2TXZP1zqtre3SETVro0hNS0VikmH2b2SJ0gP5wr+hx57DiJIQtGDcYoQEpaUkIkbi45SCNgqHZJMDCPURI5EkZNQSYxi4cDcbX29Al4LdQvyom0n3PYG+gMzwLbDExkkmm3EzBqHdLVa9O6QSkDRyOZoErkc6DHXNIz8PpjyYFoK6oEb2RKhPtU5/D1x+n24D2SNEk7dRYpkh+ZomBDEi0AOvpJcEJFmIZryQZDNPzGLTuAyplMZPkUqmE2w2lYZyWzuTQFKSq5VM4srOj25eoul2RIgpG1kPG6HcsyNYI26fCzNC8qE6r/Kdeqs1dQ2orW9Q1QWqGPDddHpOPZhd6ccYMBN3YIKdRHR0DJITHZgspPMrl16K2269Fd+49WZcfPElKC0rRVNzM559dgH+8c+7cN99/8arr76KVatWqcsal0xY2BEe6Pkw2sAqzPZlt1pQnJmME6eX4aiZ42Ug2bE0djAidNDiOVetJk6ciOkzZgR/Y1uPwPLly1Ehk0C2S6opVVRUoL29XfuffQEuH3OlckdsDhFIMza2oDTIDe0DIqNNqdUAev1hY6DRAhJNbnVplkdEnBXxdntQgtmP1qYGvPXqC/j3XX/G7/7nh7jnL7/Gh++8jubGBjYu5OYXYvbceTj7wsuUXH7357/BlV+7EXOOmIe09EwdF9i+PmvsU2GQ8JnQNjtcGLHpHlm7JpjiY9XBHNDZlS4pCwJCTOobGlFfVydkoR7VtFwUspYuJDMzM10JIO/k7aQuxnNmBvFz7xoPwzFD
YQj8TgkqCR2tydtVqsnYKqVRy3O6/Emh8Uvwnf39Afi8bnS65BCSRT1Hbr9ICR8NeHYU5B7Gkc+RVAWfpz9KEnOG4hMyRaObwTyQ+0iEI2GYvPBg4fJe6mXSQT6dR7skz03n8rpsnpwAh5BKv8ujupUkityXnUsuDDEYuhEPHiHgtx0z50jYrFbEWiwSNz+8QrYlY3TyEEf9NEkD3xvo71O9rngJnySQ6TDylwHxkIFmyHt2QuhvEjl+YzjNTS26ZZ5VBnZKyseNLdOD22faZbD77Mb0WdcPPDCN7LyysrJw/PHH45qrv4offP97Qjxvw1lnnYmU5CR89PHH+Oedd+JPf/4z7r33Przy8itYs3YtGhob4ZXyNyxWPyf/wwjjS4C6arfccovUvz9h5qxZOibUSp+/YMEC3bCArl54jcKGQQlMGLsP6c4CQWENNzqRDhlR1kS5bgz1fX1hY6DRhEC3YRNBRMfHIz6Gq5aGp5eKLeX48y9/ileeexJNDbVqLc5l8SOPPkHI5eX4+s2347s/+zVuvuNHmH/s8UhJSdaJWt+gtfrng0Z6dE/IcWO4yeaIEc1BwxK2BOlIaNDS1NqmREN9TQpJqW9oQvmWrdi8rQKdXS6kpiajqCgfyUJ8mA8mcTIJhp4MEgXzcw8gj3BpejCTg+c0uiFJHJBK0NTcomyK7+NBx+5JtnjkpCYY1tSCXinXNm8PatudaBNiSldMUbExsDvsKhkl9Hkj0nsOPijxIomwSD5R59PrcakSMSWFO3KA1JCzF2MGw7Rw6Z9L+XQq3+HpRm8gOLuRMHNTHMjPSlOLs66uLrh93arDYVULcqO8KBX4LPAn82eTaFInhR0d9VTl5WoMZBWyHaqTr8RTBpxeedot5DwgnaJRoGaIITd/BvR2+cOG1dbWhu1V1VKn+lAgJLOstAg5OdnIzclBbm62Dm5mzgzFF7/pwALL3DzYYdHh9vHHH4evXXM1vvmN23DjjTfi2GOP1Q5m8eLFeOi/D+Guu/6F/z78KN58+21s3LBB83O49oUP49AG6yXrXlJikupvR0ZFqc5xXl6+6ocPsL9yu7F40WK8LJOgmpoaJZ5h7B7Yn1HdycizAURJr2eJM1Z3CO1rjd4zjP0IjuXkRH6PW73UEBxxk6zxsCen6pBPiSPHt5KyMsw79iRcePk1uPaWO3Dj7T/E12/5Dk479yIUFJVoWFzh3F2CSZDIUmfU2Exl+EfBESOa0Wp1KYka6EOEsHaSsYbGFs1MSiyLCvKRkZ4Ka0IiEhPsQhSyVBqVKp1QDGdjkvFcYtU2Igcbk0H/mEl7m1HynIRphmCEMqD6otxuiq57Wjs6dXlXC0PeSeLlCOppknTynoBc73D70NjUjM6ODh2k6VcyzmJK0hhP+csg9OxzwBrGZ4LvG4Sc8xKJGw2CaFXG5Wm+W3/Qe4wPE/zKZfMunx9Ofw+8PX3wS11k3sVF9iPDEYc0h03dOpl+GEnKaNnPEI0cDoa9qzgNgnHoV+MnK/WB5DulwVRKp2Vb/KDCsaHTyl0rSEwjpDPUfdcDMssOpkFnd8FXfh4YCx7cc7appVWX4ZOlzDLS0jXftWwkXdSlZb3RjpcFMASj2ep8X4EdUVJSEiZMmIDTTjsVN914k+5tffPNN+HII4+UtujBO++8i3vuvhd//dvf8OCD/8Hbb72NNWvWqOW6YRUpdWGXZR9GGHsP9gPUv7/ggvNxySUXqxERx4SKygq8+NKLMhlaopJN9Vfspeu03ZFySnuWCXh0zA71n0MF7Mu4KsG+nGmPkfGJxmdmD8eVNyNLwm15f4Gru+1tLViz8hN89P7bMrFyamnQh3dhWjKS0zKRlpamRkQXfOWruPEb38Nt3/0pLr3qWiWcBUXFurJJt0q009ibiRi3Zu3r7Vah1UiMfyNCNJkQEjQzPYGeANo6O9EhR7yQmrz8XOTm5shAOBbjxpaibEwpxskxSDIFLAgllhKIQYDku7YV/mHAXyazuORsfLJx0kqay8cDMVZpmD60dXCXIINyRQlRjo+KRLLdhkSrRa7J++U3Wvm1d3WpvhFjlGS1wi4ES/pRDXfH3y+AmUlM3JAKQFIUK7NTiyUefRFRcLm9OovZEa55ZuQQ/3GryVYaKwmh80q+UyLVnaEAAL3PSURBVAGY1uZp1ljkpibqFpH+3j74unslrhGwSZpiZUKgT8u9/LcTQuK08y9C2CWxlKKxrP0+n+qfkMSSGOtjEh6NpWgoxQGGs2xKPrv9lGQwNEodqZ+iAX42JCxmK4ksJbEsHxLczMw02NXZv1Ga+ko9E8jJoTbomDA7EhLO+Lh49exAfc4zzzpTSOeNuO0bt+FCGehLS0uFWLaoT84777xTD+4G88HChdi2bZvu0BVezgxjX4P1k5NUrmypKy/pe0879VRc9pXLdN91/t7Q0KjE8+2331br2i+E9HExUtd3uYxxEINtnP17v3rzkLyVsYyCE24oQBjGQOwM9WsYwwjWWx60paBKSH1dDVZ/sgivv/gMHrrvTvzzj7/Es48/pIa9LA+u3Npl/DzxpNPx1etvxW3f+xlu/s5PcMzJZ+v2l7Gxhj0Ly3Wf9MESRqjq4HDCGJFHAOQRZtYwY5ub22TW1Y/UtBSkJiUpQXEkJCJFzlOTk9RVRnSQZJrQvJU/Go5kkPyRg/eY53sIo8T0Y8f+MiTFEUpYEoRMksw0tbSgT5ccDPF2nHSKWSkOZCcnyvPUaeMhM26ZYbiENFGCx60yQ52g8h4a5+x2LDV9QyCX6IeSluzMBLfHo5LNHbsbGdJKfqXwrlsy3e3zooO6o0I2fUJKudBOQ8yc9DTkZ2WqUZPL44bP59dGQUksxekGrWYjMfLbeIMJXuNLBuWd+k52cizHqJhYlaR6/T0yeESrpb6Zci6rd7p9cq90gPLuXvoglfrAjpHh7la1Z2TkNrpQamxq0SU3h8OhExM6yKexlBlnIxVcxZdZ/l7M/A5m0LiMy5ZzZs/BpZdcjNu/9Q1861vfxBWXX45x48bp3vWvvva66nLee++9ePyxx/DOu+9i69Ztaj0cJpxhDAdIOjMyMnDCCcerM3jVWZN+mCodVVXV2l9Qwrl9+3adAHFHonBd3AHaDBirEP3ojohGjIylhupaBALcFlGyKpxbwwQZeziOEi5nF7Zt3oiFb7+KBU88hIfu/Qfu/fvv8e9//AHPPvoAVi5bpGNYYXEpjjzmeMw+fJ5K88+7+ApcdOV1GDNukq7IcdWOEup9DdpG0F5mJDAiRJOdAN3m8JNiY3Vb5HQpmczMzhLiFs2b9KDUjgRzaMdhEgahdEZhyj/eofcNuXe3EawQBhjyjnDYuaUlObTT6+zsgtvpDO7oEaFSuoykBGSnUE+T5IUEyfiNOkd2R4IefNaEEf+9jGcQJKqGQZBhuUrratM1AsHQWXcMUh8BbqDZ7vGhzeWDt0eIli4d0+VQlEozM5Idmpa+IGHljImGOmZDMcE8Ds0bQrOdJ3qrnEX0S2dGohqvepp0NEsjpEhhtfa4KMRqPgn5FaLZRUmsPBI
thJwKyT4hw4YbqZ3fu0swbnJQAkrdU9Yliv/TZMJCNw+hsTSqhnGFy+gk2GF8GpwgcGJVXFyM+UcdhUsuvVSX1a+/4XpdaudWg6vXrMHDDz+Mf/3rX7j//vuxYMHzes3wLhBGGPsWpjTI7Iso2aQbrzPPOEPrI/srLqv/5S9/xauvvKou8sIwQNd1hoGl9HvR0RiIjNE2zqz0qLeSvsF8DWPfgPlL9UCOZ3Sk/t47b+Dh+/+F+/75RyGWf8Kj/74Trz3/FFat+ATNzU1wJKfoHumXfPU6XP/N7+LGb35fyOaJOl5RfY3L4abK0nChX0gtiexI1IURG3rpNJwjP8lRZ0ujzlCzMtPULYu2AAEjQ2mikbWkkjtDf5erUXJDpBZACP3Z2wKR9/E9kRr6jndSxy8lPVV9SvZInCnVZOfGd3IVgnqaWakOFXXTAX1sdIyStDirXZ2+x6mYe0cKuLUiU6ax1vTuQXx5uxx8QvUgudODkCvOfvwyQ1U3R/xV/qsoXO7VN0ieu7w+dHh64OweQE+/5J3ENcUar07auWMJ8013jwn0SgW3qP/M4OsG85th7gzjmnHZuFuu6F9uR8kJRL8wSe78Eys3JVpiYIsV0i3nAWlATk+XvK9HLcRJROnjk4Y8lJ4yEZ/KGV43yzd4znDo55Rklvu0Z2cYRk28y4w/48dGxGv6dDCIMD4bNNSglT6NiI499hhcecUVurx+880346yzz1Kp8YYNG/Dfh/+Lu+66C/fd/wBefvllbNy4UZ3jq26ulM9wdpBhHHpgv0eCScJpoxs1aeuUuheXlGgfwvrGVZm333kH69au080+6ObnUIQ51rIPtMfGqM49Xd0QtDrn6lGYaH45mPlHQx63swsV27bgAyGXjwihpK/L+/7xRzz1yP1YKNdqqqt0jObWwYcdebSQyxtw8+0/wk13/BBXfP1mHHfymSgbP1GNaQnW5eHuP1WVQtJgTECGvy6MGNGkpE9Spoyfbm5obJOZkY5YSjOZZvmVBIyflNwpK2AGmEcQBh00YFCbL4nBAiXVDAlP3klCx/2muU1mc0u7SjU1WvKPDpmzkh3ITLTDEhOJZOn8KAX1M66RUbrErpUlKEVks+eydb/8zjcakkfz3QzWOOc185BvxlX+kYMhsWJQH5FSTYrTuUOQGgQRRsbpfRKC/kai6RIy1sPORa5zUTwjyYaslCTEC6ngrMbp9qqrJBIM6vMw7UaMjfwmmDdDwbuMq/wr5/JB6SLzgfGnb8sIEliLBfag/0/e5+kWgtwTQIxclxEEbu4rL/FQ/VF5bzA1nwn+3i3p0d1F5IkEqUs2q02dzvMVesg9evCPgIMOpahh7B7Y+VB6TpUE6nOedeaZuP6663Dbbbfhmq9dg3nz5ulk47XXXsNdd9+Dv/z1r7j33/fjJSGd69etR0NTk3qWCCOM4QAnRDNnzsBNN92IU04+RY2IuqW+rVmzFk88+RTWr18vdw3o0qSzq0Pr86ECCnFMokKJZkw0dfsNg8wBriweogT8y4JjLyc8VMFqb2vD+jUr8earL+Cxh+7FPX//Pe7+6+/x6L//ibdeeQ5VWzchNt6CMROm4LiTTsdFV12Pa2/7Lm781vdx7S2344zzLsb4SVNl0mQX4k+d2oNbrSvqF4Lg+bCip7sbtbW1hiTMmoKc7GzVx4wSoslqrySPDUHO2ScohQntHELPBbyXV/SqPrDz77sN89nQMOSTZ4xrlBCnji4nnC637sxDCSednJr3tnR0qI5QJpfZpSL6e7rV/VGGfI+htTVvknu58w1JpnzZebbCS/KVxJAUy5BOEqairvEeRfCdXGbu6uxSx+iUHtLBeowSec09/UdL7vb2LpTXNWNLU7uQO0qaIOQyAlOL8jC1tECNmXxOl1rLU0yfl5Oj1sl0NE0JLImxGf/PA+OtOqi8T/5Tp6S9o1O+CymWuHULw6tt6USL06sElC6fCjPSkJ6aBB+3iOzuQXJykhoisSEbFNfImp0QDJ/bZ9GQrKamXt9ZmJ+jW2ZqNINZy1PmH8lOC7cxFVJeWpiN7PRkI55h7BGYZ5zc0EdnaUmJSpMmTpyAouIS+H1+tU5fs3YdVixfhvJNm7B9e4XqN1OdwTDyYJ0K53sY+w6sT+xLYtnPyjn7ZfYjMTFxWLliOQpKxsIqE9BHH7hL+m8niuQ7+xfee7DWRbYz9o3vvv4S2lqakJOXh3Gz52HdovfR1dEGW4IdZ114mRL1ML4YWq+EA3BAcUq+lm9ci/ffeQNvvvwcPnz3DXz43ltYtXyJ6mK2t7ZovpaOm4h5R5+AE884ByeedjaOPOYEzJhzpBLLzOwccHc/Lo2bngH2B5iuRR99AGd7C84952wd94ezTYyIRNNo2DwbgMViRUFOFjLS0nR5eXCj+Z0+5O+uEs1r5hH6fV8iJDz6frRyiVxIEUGL3JbmRiUvdMhOopaXmYH89HQhfBZ4/D7UC6mpa+tQwyAzpIGBoCRTniH4l8lWIjlIjIy7aaCh1/UHul8n4RPINZI0vSqdJS3Ppdc0rAh1NsQQKJU15LK9co1O2ludbiGZ3egRssfnrbFWpCcnwyakgTsW0aiDk4ComBjVr6TKAEE9n91uAtpYjPhTCkbyGxMTrU6DubTgsFmRaDMkjrx3oK9brc2ZTupzcqmH/jQ5qzPb3We/O0LDbG/tkPB7kWC1wpGQYLw99FkjOiGQq2bgYewVjHYcoQM7J4pTpkzB6aedhhtvuA63f+tbuPKKyzFh/Hgh9q1459331FXS7//wB91z/ZPly9Hc0iJtJ7ijVbgswtiH0HopfdikiRNx8sknoqCgQK/TuJGdwcfvvCafnBAPoK1dJt5enz5zMKI/YPhXJqy2BCWfJijh1aX1MD4XZt3weNwo37AeLzz1GP722//BnX/6FR67/y61HP9k8Yeoq67QPM3NL8KpZ52Pm27/Ab7xvZ/jazffgbPOvRhzjjhKJjhjkEwBjozXHOP2J8E0waXzPu2HgxeGGSMm0SShqamtVXcLcbYkREbH6VI5mwDTStLBDmHUQOKjkjo5pRV1t8+vBkwkT5FR0Ur26AuUhkI+IXvOnn7UtTvR6Xary6ai7AwkC8Ei7WMYA1KwTCvljuRbJINmio1jxz9+I+E0ltuNmsCKb9YJVg66memQd/P21BTDat9sHLzPK+Rxe0Mz1tU2otHpR3d/hBK9/BQHZo8tQI580sca98RubmkVsmZHVnqaEFh2zBKCdE5GPAXBcD8Lxs/Be+SDS1jU16M+LreljJfJRU2bC5XNHejpH1CJaWZyIgrSpR4M9MHlcqlkIiklWT7pCklygYeGF/Juvc7G70VVTS26Jd8zMtKRIWTfkOgOxkLvYxg7STQLwhLNfQnmIzvPZJm45OTkoKy0FJMmT9KldkqW2D6qqqrUaGjlypXYtm27Wg5TTYN1m4R1pLZAC+PQAdWzXnjhBcRYEnD0Sadi3ISpKJa6mZGVp5Kklxc8hXdee0H7STq8Jjjwa59zgNdFksrWlia899arKmErHjsJxVNmYd2id9HZ1o
p4mx3nXHSFjhdh7IBZ7hxTmpoasHXTeiz+4E28/vICvP7Sc/j4vbewdtVyNNbXISCT5eSUNBSWlGHO3KOVYJ52zkU47pTTMX324cgT0mlPEIIvfeNom1DTJzhdAlZu34KFb74i7SGA8887d9glmiNGNP1+IZpCDsiSuCVWRFSskkumTRs4y2MYE7rHYMcjH5QsUjoXQ/1LIctdLo8UlEutHNuEwHidXbpcXtvlRn2nB75Av1o4pyU6VHIYR7F7ECSaQdqpZ3qEplnOVTbJy/LJpW7SLd5t5JV5Tp3DPrR1dKoPShrV0Ml9RNBVAXUdm4UUb6yuQ3lDG9r9Bl1NtsRhYkEWppblI81mQb+Q1cbGZiWs3Hee/hXpHsh4D8muBrdzHHcJoxyNM4mbdPRUNXBJXnErT4eQjnqJ6/aGFvjlN6YlUchnWW4mUm2x6GzvVL0hqiWow3cJxMirT4Pp6HK6UVlZraoKBVw2T+QWa0be8JnBuEgCwkRz+KHtVw5ut5oidT4vP09IZxmmCOEsKyuVepWu++iv37ARK1etwtq165R0trY2a/nEStuiA+JwuYSxL0BPHC88/wIsSRmYddhcJCQ4kJmTN6ibSNdIPrcL9vgYlI6fjHYhYO+8+ZqQtBjV99bdUg5QcJm3saEOC996XdNVWDoOk+fMxdpF76G9pRkxsXE4/9IrpL1RoHBog/0Nl7pZH5qaGrF00Ud48+Vn8a6Q9PffeBlLPnof61avQF11pfRfbiSlpGLqzNk4+sTTcdIZ5+LE087EUceehGkz56iLooTEJK07nMyMBoJp6pQyPh2dHdiwdhXee/NlvPfGK/jo3TcQNRDAiScch6OOOkoFTMPZ/44o0ayra9CCiIp3DBJNSR1pyg52MIrAqhIpRDMyIhrRMhjGSNy55NtO/UivV/f0JqmMEHJW4/Si2c292yPBlexEqxUFGanSmXGnHIKVj1qPnDnzu0GLuIxIcsmKaUh1KUXVHNE7CH4z4iJnwfyinqJTCBcJHd0EpckAHymNpl8e5Iy+tqUVKysbUd3uVv+ZDKAwldLMQhRnZ8AWE4Vurxst9TXq7zK9oBCpQvSiheHpbjl8L+NjRPaLIeEbdxoieZfEiwSWBIIEttXtxea6Zni6e1UlISk+EhOE9KYLQW6VuPZzliiElBLQoHBS0zkUzKemtjbpGJpVZ7C4qECdkGv+yME4s42b+UUi09rWKRf6UZKfFSaawwzmLTs4q9R/WgkX5Bdg/PjxSjoLi4pk0haL+vp6rF69BqtXrcT69RtVd7tb2hJ3clH1DXneDCuMMPYUPunPnlvwHFJkAjp11hHGUmWQZLI/S0vPRHHpWOQVFMkkx4JmIRlPP3wPVq/4BKnyW7aQUk5SadnOjmS0SaU+D2x7TSSaQiQ6hGhaJ8zBzJmzsOrDd9HW3KCSzAsv+6rqTh+KYJ8SHRutYws3V1mzYimefex+LHjyESVfq1csQeXWLWhurIfH7VHiPnnqdJx69oU45+IrccqZ52LmYUeidNwEqSf5MjGRMTPa8Jc9MEoIJuNMgun2eLFp0ya8+vyTeP6J/wrJfAWdUgfGjynGmWecjosuvADHHXOMjM9pev9wYkSJZm1dvTb4KIsh0WSjHxxKRuGgorET0kV6GB0VqazflmBTMkRfV8mJCeoH1JqShianR3Uzewci1PiF+6CnJ9qRKPdw1sTdGthrRUQYxi48KGck+TTB90VEyHcht+oOSe7lPSSvBEmtQj8i1FF5W3unXk9OSUasEC+G5vR3o7yuFWsq64T8+tXamvEZn52KwyaUIJ2EUu7r6uhEbUOTqgZQCuWwWOVdDJvvM3RDjTcLdrN8eBdvJREnIWRn7ZC4tUo8Nte3wOnr1nDjY6IxNj8HWQ4bvEJKeT/3mE9i3IKVPqR2DKK3v0+Xzbk0n5aeipzcHHWppHcG42jkqNQz+a4SzbZ2lXIUC7HNzUiR2z4dbhj7FibhZPnTbQdJZ2FhoRoQTZk8CflS3/g7DdE2bNyATz5Zhg0bNqKxoQG9fQHtvBkGB3zW/3CZhbG7oN76ggULkJyWhSkz5hgkIEgAWIs4kaGtALeqpTcK6nEWl4xBdn6R6trRErh843o19FDPC5RUsU86AOogSQZJ0ofvvqlEs2j8FEyYNhMrP3wb7UKoOUacc9HlaiR1KED7EC23Ad0PvLWpXrfWffPFp/DEf/+Nd994CauWLUJ1xTZ0dbZrX2MT8jimrAzHn3YuLrziWpx1/kU4fN5xKCodg5S0dJUGc+OU0dInMR6s3lTr49aWW8vX4/23X8fzTz4s9eA1NNZWYUxJIS447xxceuklOO3UU3D44YehpLhEN5ZhPzzcGFmiWVunjTwy3iENgpaChuRsEKOpIYfEhWcsCnY2cdJQEx0JSE5OFHKXIgXl0IHU4/ejQQhNl78PffKEtzugErWURP5uGO70SafVJ2l29gTgCvQJKRUyGSWzIfnNLGq+yzAEks8BY4DVfySg+qt8ys8qOZUBuUXILf17Mk72BLuECTR2dmFdRRU2Nwix6yaljdAtMQ8bX4KJRTmwC8mL7A+gqbEZTa3tSEtN0f3m47lsznfIO5Xg8puZDSH58Smwlof+LuSA/j3bJD8Yf0diIvxCwKua29EWdNZOo6OxuRnITE1Ct8cjxLFLdfYy0lKFZDA3NNVGeEFwsOB+x9XVtTqYFBTkISVZBgF5B+/UQ/4Y5yQ6JJoBNLe2aSdTnJeJ/KxUjVMYIwvmOY016IYmMzNTO/KpU6dixowZKCktlTsGUC0TiJWr12L58hVYu3atlHOV7iIWJaSTA6g6F5b+I1x6YXwevDJpXfDss0jOyMWU6bOl3zCmnkNhkk8KAtIzs5EvEyHWT5JPbhf4yaL3dXeXsrHj1ZiTfntZT1mPR2sfwnbSUleJhe++hY72NoyZMhNjJk/Hsg/fQ2dTnY5f1NEkmT5YwbIxJXRutwdV27dikaT/ndeew0vPPiGfL+muPBXbNgsZb9NVxYLiUsw67EicfOZ5OPW8r+Cs84Rczj8WY8dPknE+3bCBkPBGg8SSIDlkOn1eD2oqt2PZ4oV4+9UX8PbrL2LdiiWI7OvBhLGlOO2Uk3XXt7POPANzjzhC9ei5G5zpqWGkMKLGQNU1NVoBIi1UlA369ZJjMLkjmPA9gRkrRi8yckCJUGyM4QSXfsqo10g619juRLvLrY7RSfg6PH4qUyIpwaqukbqFXHZ4fNhY1YxtdU26FWNMnHRalNpwhiSflKCyLtN6nG/mO/W7Ek3jN80x+R/o7zN2LfJ4dYZKqSZ9lFa0dGDFlirUtLvQCxmcJXb5QujmTy5DYXoyYuV7t5QHpUedTheyszLUC0C0EGGD0LIS6yt24PPKZshvnEDQeXJ7u7EnsSMpCX2SvorGFjR1uJRocuvLkpwM5Ei8Bnq6dbtDVv709DSjEaj/0RDIO2g17xLiUVNbr3qoZSVFcNhlZi7hhYJP6iGNUZfOJR4k41w6DxPN/QtzEKDaAxXQs7INd0k0IOLyOg3bmpuasW7dWpU8rF23X
nU6qUOVlJyi/nfNGXi4HMPYFbw+L5559hmkZOZ8LtHcGTKxDg7enKAmJiahbNwElWLR8KO+pgrPPvYg3n/zVdjsXMnKlvtGZleVPQGJZk1VJT567210drQja+bRmDJpMsqXvIvm+lohTPE49+IrD0KiyfKTyWhMtPqMJrl8742X8PSjD+ANGvN88DbWr16J6qoKlfRS8JUqY978E07D2Zd+DedddDmOPv5krS/FJWXymyG5ZOmGSsT3J1g/1XJdOEWtcKn333oVT0udfPW5J1G+fjXSkxNx8gnH4vzzzlGp5bHHHCMT+ekoKS5W3XmSZbOOjzRGVkczKNGMiA8STbm+U5JHQ6NVJheMR/CT3ZRxykV0HoKBHVbZJJlRuvtClC7pdri9CAhZo/W1U8gkjXYaW1qxqbIOyzdXYemmrSiva8GmumbUNdSjw+lBf6QQzahIXeKmtFJdITEq+lZeY0Xncjov8o2RWuGoC0ldTe5KlJKaLO8MoKK6Buuq69DuDcizkbrl5IS8DMygEZA1HlGSxvaODjTXVSEg78rPzVUXQdpx8qVC8rQyhh5fhCH5Rj+eJMH+nh7YHA7ECBGuFQJc19bFm2ERsl6YmojSnHR1It/W2qaPplGyKp1hUKi7AxImdT+bGpvURyd9Mxbn56khSchUZSdI0oRo9qBVpb6BMNEcZWA5sD/gxIKSJG6OMG7sWBwx93BMnTZdVVVamptQWVmJjZs2YtmyT7Bp40Z1J2KxUnld2mAU6+r+6TzDGJ2g4dkzzy1AGiWaM3aXaO4A6xJ9INO4g4ZEnBhRBYQEtL6mEokyQSodMwGd7S1CWlqk/sap0GE0gDrQldu2KbHq6uzA1ElTUDJxChYvXoz2+iolG6ed+xXt7w8GmO2+t6cXDUKkP/7gXTx6/1145tEHsXjhu9i6eSOaGxvgcTv1XntCImYcfiQu/+p1uOxrN+GkM8/FRMkf7trjkLKOlbGHZIzD2Wggl2b6AoEAaurqsHThO3j6kX/LcT82rlmBBFscLjjvXFx/3bVqPX7YYYehrKwMGRkZ2qeqD2MJY3/3jxGSmSOSmx0dbVi0aKky8sikHETF2HZO/H7OiEGEEqbgOaVnxq42ZlbRYId+KY1rXGTmHiitHh8+WluON1ZuQoPTpxWV+pMxQiDp0odP9/X1o5fK6aRH8jj3GnfExyLVbkOBkKwpRdkYm5MCq82GxNgYRAuZNKhtv8Rhx/spdeztlZlNXT02lG+RSmVHQVkpOoVQvb9yAxZvqUNnN98D5CU7cOqsCThyUilSbdKQZMZXWVGlTmYdSSkYO3ECkqUB0khHYihHlPGK3S2TkCqkEmp5jNtalm/ZiobGFuQWFyJFyOzrS1fjjRWbhIQOIDY6AnPHl+KceTNg6+/B+rUbMBDoUd+MqWmpxnI4CbcMEnKmYff292LVmg1oampBTk4Wxk8YB6s0JH3/jigY8ZZrHBhcXg/WC7H3SXxOknfNnT5Www1jdIPuu9xut06ItmzZghUrVujWlw0NDdquaLAxRur7vHlHYvy4caoDSgMkYn93qmHsX7S2tuDSyy7H5JnzcNk1N6qK0b4Y5jix9/s9OiGnEci7b72GF556VJfWz73oCmRkZet9Zv0boaF1J3Bp/MN338Kff/VTVFduw5nXfVfI1Fl46I//i9WL3lcfx/949GWUFhUGnzjwwPxlmXJ1q7qyAtvK16tEr3zjOrQ1N8PtcuqmIcx/llOWlEvJmHGYOvMwGeumISe/AI5EehfgroQyxki57o+y+jQoSDLGJhqwdXY60VxfiTVr12DzpvWo3rIJyUkO3YZ1zqwZSip5Tvdy3JZ1NDvhH3EdTRXdUqIZHXRnYh4s6GAD3W8YGgc9F8IS/JQLg4f+k+tm9eQ9nD1wSTDQ44fH1QW/kL4+IW2BgUj0yKRaeKEubQ+wcvOfvI+ST19PAB1eP1o7O1Hb1Ig2IamU8lC3kxbtUUK2uLc3o6FxGaBiuvGdzsu5EwR1ESlVbXH7sHprBZpc3OozBjERAZSkJ6m1eV5KIuKF9NK/YQPdGnW5kZ6Wjoz0VMTRYEmILHcwMnREJXC+YA/Ap1T6G/zC99BnV0ScBVaHA42t7dje2ApPwMiHZKsFpTlpcEi8O9s7VO/S4bDLzNKmnTnDUWluMC5ejxc19Y3aiRTlZenSK+8jdiIXwXPDvVEAbRI28yks0TxwYC6v09FxQX4+pk+fhjmzZ2HixEm6DNQo7WTtmjV47733sGjJMtRWV6s/VqpXJElnHJZyHrqgjubTTz+D9JwCTN0LieZngWOXSi+pKtU/gOSUVOQLacnIyERWdp5ef+uVBbrvdbw1QQf/ka6DJBskJUs+eg+urk7kzj4a4ydNwrqP3kVDTaUasZxxzvnIyMxS4nygwFjyhW47vH7tarz6wtN45rEH8dqCJ/HR+29hi6SZqx8cczh2jps4GSefeS7O/co1OFsmAcefciYmT5upurg0BDOEGMYYvL/BOkKJOcuDq3VLlyzBi88+imceuVf1SX3OdsyZPgWXXXoxLrv8clxw/rmYe8RcFBUVqb4lpdSmTupoxcgSzXoSTSEQ5tJ5aCMcDYPCLuNgXgv5LUhIjX8kf9SojEC0XLdb4tQfJO/pkIGP1s6UWrKzY6WOFeIXJ5OW+JhYXSan03SSLhoJ0Xl5q7cHje1tQsg86mooOcmuyzi6LaR0bpGUNmo8KVGlJJV7ivfo8rmnpw9VHU5srm0UMtcvv/chSeIzrSQPU0rykcJlc6nMHUK86oSwcdmSOnLJdNGgFZXkmZ0P30HJ5p5L/mhExNgxHK+vG+0yK6OzekuCHR1CgmuaO+Dp7ZO4ASm2OEwszNblfHWH1OXUfdE5Q4uOZt4a+aoxkTS3dbShrrkVNCQqLCyQvLZqnhoTAX1pMG8MGETTXDoP62geaGA58eDgyWV0dqos9+nTpgvxnIlEmbxw5r916zYsXrIYb775JjaXl6vbEjpMpjUxNwIwES73QwNOtxfPPPMM8opKMUnIBSVWwwESmozMHGTnFuimFHSG/eE7r+uyLZewJ8+cpR4xWH9JBEaC2PFd27ZswrKPP1BDptnTZ6F04hSsXvYxGqq2K9E86ZyLkTWaiaaOrwa5pEoC23hlxXa8LKTygbv+jAVP/hdrli9BTVUFujra1b8178nIysFxZ16Ea27+Ni696hoccdSx6t8yRSYE1LfkdtejATpmadronzsa3u5ebFy3Gk88+gDu/P3/4P03XkR7Uz1OP/UU3HHHt3DDddfi2GOPkUn2RGRnZcEh/Z5JLs0+crRj5CWazJhdEc0DCYx3sDEwBZRrkljpcriQG7tVBjiZ+TLNCPiUTNqj+2GTjinVGofS7BxMF+KXkWBFb49XnovS5fReYV8Mtlvaf6fMzNo6u9Hf3YXMjDQ1OKITH+NdQgC5bC/xIFlkI6MeaIfTifq2dlR3uOEb4LJABFITbJg9rgjjctNhkXtBa/PWNjQKYaOeW25+rlrNk9BpuiR8Q45oEM89gjzPJ9S3qJx4/NzzvAsD0gFbZdD3
9vSq5XmHr0fTyS08xxVkIzM5Ab1+vxoEWWPkOh3HRwf9j8of0l1KqugKp11Io80Sj7zcHEM/k+/SOH86rqxfhh/NMNE8GMBy40DJAT4lJRmTJ0/Csccdj0mTp6CkqFDbXHVdPT766EMsXrQY69atg0/aIHfo4G804mMY4fI/uOEWovnss88gu6AEk6bOGDaiqXVJ+lRD2mbUzZlz52H2USdgwqQpMhFKVFczd/3jT3A7Owf3uZabh60OMg7bt5YPEs2io05B2bjxWLPoAyGa2/T3U8+5RD0/jDaiabbNvv4+lcZu2bgOr77yEh745+/x0L3/0C0fuTOP3+fVYcoh+VtQWIj5x52Er954B75+y7dx0imnqaRPDXkiolQINFx5vScw00aS2RuQ8bqlESuWLcKLC57Bnb/9hRousY6cfsrJ+MEPvoef/uQnOPHEE9WQh8SSfZ6hJ7znwp/RgJElmnV1Qor6EWWjc/GgK50DFAYdo4SRaTDcAZEM8Ru3eoyTQS2utxvWgX6UFBTgsKlTMH/aeMybVIY5E4owviALY4RkTSouQIYjAdyels8Fer0I9EeiXxqJ0+dBQ4d87wkgNckO7gAUJS+W2/T9fJmKzOUC9+5tEELV4PWrH82A9CGcTU/OT8PcCYXI5o4XEj/uaFRdW6/bOObm0IF5OmJ3UmSXQOnrk2nZgwbK+Gic9K9RsiR53MPcLXGKT0hARJwVVS1daHV5Na8s0bHIT3UgP13qg5Dl5pY29EmOOux2Jb/mFqUSEYlPP7Zu264W9smpKciSuHPbSS5hGdH8dFzNpXNusUn3RtwZKEw0D3yw/EyJAJfX8/NyMX3aNMybNw8zZ0xHsQw0LVLmJJrvfrAQC99fiM2bN6O+vknmWTLpsdm04zYRrg8HF9hHPPrYo8gvKhOiOXwSzV2BPV9aSooumxP+wABamhpUP7JwzCRkZWdL3ZSJfmO9Suj3Ndgmyjetx7JFC4W4dGH27HkoEqK5KoRonnTyqcjOK9jvRNOU7JFAUVjS3NyKlUs/xMvPPo5nHr4PTz/2Xyz98F3U11Yrf+COc/RgMnXmHBx36rk497Kv46Irr8Xxp56herKqoy1hMlwe+xuMgbpkk/6F8d++dQs+eOtlLHjmMTz98L/xiZQJt2I+68zTcN0NN+Kbt92MM844XY15aMjDfDmQpJafhxEmmvW6lBUZd4BLNKUSm1JMLuuqBFDSYtBMOZUaRqleX7cfnq4uJMZEoTg/EwWZachy2JEUFwt7dBQSLHFIku+luRmYXJqDjETpnGQgdPkC6A70Ktn09XSjsb0d9LSZk5mCWHmW3RnjoNJh+UeyS4ldu9sDv1xz+nrgDciMLy4Ks8cUYHJRPuyxkRjoo3SvHXUy4CbZY5EnM2zOlnZVDntkMMN46GGcs43zPGKgT7d641JmlMwwLXYHGts6Ud/hVJ1Tu8QvM8mB4sxU3Q7L2Wnsj07d1KTERDWiIkg4+4S0byzfrlbs3A0oTTocvk5prb6YX4KdS/C76d4oVEczL0w0DxponQt+csDi4J4jAzldJR0xbz7mzJmF1ORk1NRUY+WKFViyZLHuuU7S2dnVqcvr9D1r1N1wnThY0NnZgaeefgr5xWNGnGgSJHAm2eHGHlOmzlA3OukZ6Yi32PD0I/fhNz/9tjpWn3nEfCET1P2kjl6w//oSiI6Pwfo1q7B88YfwSN87c/6JKCgbiw2L30ddpUE0Tz79bCGahfuPaEpbM0hUpOrwr1q+FPf/8w+452+/wRsvLcDaVZ+gob4ePp9XySWXiSdNm4FLrr4JV9/8bVxwyeU4Yu5RyM+Xcc1u13GQeTcayCVBcki3hRyTq7ZtxYsLnsQ9f/0tFjz+H2xYswLjx5Tipuuvxw9++CN85dKLMO/IeSgpLtqJXBIHU580snJYNiTWBansOkIcsJBERNAtuzRs+WsQPtK9CNUfZKbGRkbBKuSKe3fT3UaPkK2YQB+iJQ8iaSIkz8cO9CK6rwfxElCOw4bZYwtw7KwpmJSbIkTUGETZeOiy6MONFSivatQspIU74xChZwH0SYP1xMbCJe+MjbUiXcijLbIbKbZY5KenIZH7SA9EwCcz/XYhmtxBINaeBAuljCahZKXe24qtyvak3EacmR8D8r6o6Hjp+Gzo43mvXwg3YJFOgzqqlN76u31wuTvgE1JtkQ7YJp0GOz/GjxJPrSsMVc7pc5RfGUWLdMr6JrnAd+24Uc55mB2OfgbPBX3ym2HBH8bBBJNo8lM7eemsczLTcfhhh+Gmm27C3/7+T/z4Rz+SDv1IOF1OvPTyS/jzX/6Kn/7s57jrzruwaNFidDll8tMXGCQJYRwc6JcJ7P4EexuVvkudjJD+2eNx45yLr8Jt3/uZsX2h9N29Pd3YtHYVmpvqtQ6b2Df1MNgnhqC7t0f+jlwdZzpC30YCvHTxR/j9//0UF58yD3dcfxneePl5tLU060YfTDZ9lRaXlOBr192Aex55Hvc+8gKuuuYGTBg3TtUPGN6OvNo5fSMJs4wYF0aH1vA1QuiffvhBfOPrX8F1XzkLj973D+RlZeAPv/sdVqxYjof+8wAuuOhCFOTnwmqxDpJLsw87GLF/rM7jbAe+jmaQo5PuKcmUS0o4peLxV1Y/zlKpj0FDl/7+PlgTbODe3wY5kkop/IyhRFMEKufcljEjJRHZaSmwREqFbe5AT98AeoRdUr+xud0pRC0GCXaLkK1oY9lYQuC2jqu2V2FrbT3iJAxKAm3ye1lOBqYU5yJVCFyPv1sdnVfVN2ocivPy1GiJnZ8RG/nLRsO4BeO329Db2ZnQko+Ps9ExXwZUUZu+RZkhdnlfs9uLrQ2t8El6IiKikZ5gwYSCXN2tiBaDlD5Spy4jIxUxVK+QcEg6W1ta1FKe5J3bZdqEwO+E0DgHz/ksJZpcku/1e1CUlYLczOHf1zWM/YvQfoX9jSPBrvutH3/88Zh31HwUFRbqhgWUbC5eshTvL/wQy5Yuk4GuBdxyMCpGOn2pI+rzN7RehXFAgOpBTzzxJCaOH4cxk6ZpnzRawNrEcWHqjDmYfeQ89Pf2qYP5xx76N/7zr78KiYrFpOkz0dsrk55AYFAHdHcRHRmNtauX6843XrcbE487XQ1i1qxcirqtm7V+zzuBeoylIyLRpDTZ5/eq2hz1Rp99+nHc97ff4MmH7lXpHj2I0ECYRj+5+QWYKmk/+awLcf2td+Cam7+DecefipSMdPWTScPa/SaFHQrpF0gye4W0t7S1Yr1MFN5+7UU8+sBdePqRB9BYU4Gpkyfitttuw29+/Rtcd/116jnDcJRvjI/KFw6R/mW/EM1IaxIiogzTlgMXEvcBpoB0yqCbrHzGT7wmH5LWASGYTqcLXU63Ws7qziaU6MLQg4yKMIxuomjtLQ8xfxJsVthtCTIY+lHV1qXhcxnd1d0Dt8sjZCtODXli4uJUN7SxowOL15ejuqEF8bFxQsLikGSJxbjcbJTKESvxaWtvU93MHmmwudmZSM9KR5zcy4puxDok/nsMPmd0hgzCJJu0V+JSN/dj9wn
RpYujru5+VDS1q+U5l88dMVCL+PSUJPTI4N/S2ibPRurSuMUSr+H09Qeko2rQrSRpBJKXlbmTjt1nxTvUvVFvoB8FuVkoyA4TzUMFWrdDjhiZzKSlpmLCxImYK4M8fXBSvzMgA3plZRVWrlyBxUuXY8PGTerDUyqQbjfIjRjMwZ7hhDG6QaL55FNPoWzMOIybPGv0kJMQkKD0SR8oFUq9inBHmmlCsvLyi+BITMEaIYsP3PMPFSYkp2WoLvLugDqaq1ctx6pPFsPncWPGcaeioKgYa5d+jNqtm1SF6Ijjz1Q9wGHJF+386drOg4rt2/Dem6/g0fv+jmcfuR9vvvw8Nki6aCnOdkRjzozMbBx72lk4/+KrcNX1t6gz+SPmHa1uiAiSy76AoaC2v8Gmz8knffw2NTZi2eKP8KSQymcfvg9L338Tfd0enHTC8bjt1ltx8y234OKLL9adeZKSDK8uoX3RodaPjLgxEK3Aom0O4SUHtjGQgpXFrDRmxQl+8kN395GDEk0e7FCSExPUgpx3qUVc8B+JGr/RwyafTbFbpNeIUfdAHq8LASGmMQMBNDi9qG7qQGQ/l6Yj0OL2YtW2WqzdXok2r08Ia4Rac9utNqQmO2CXMNydnaiuqROy61KDorycbKQkSuWXmSSJqsbAjP/eQDoXJdsSxo6wJOSIAfQIwSPRpvGRxWqFOxCBmuZ2uIV4civKJInP+IJspDnsGAj0orG5RUKgj7ok2O02Jet9MuutqqmFy+NTspiZlaGz4EF8RtwZD9O9EWfEhbkZYaJ5iIN1gp0+jc2ys7Mxc+ZMHHPM0TLwlqrkvKmpARs3rsfCDz7A++++i+rqammGcWpooMtbrGtyfKn2EsawYpBojh2PsolTlPuMZrAuUQ8xMzsPqemZOvHpbG/TnW3WrlqGgsJilfZR+kedRW5EYT43FNwSedXyJVizfKmSvWnHnKpbvG5fuRQVm9ercOPY409EKfdv3wdEU5fFgwfJc1NTEz7+4B08cPdf8cBdf8LCd95AbXWVMXGTfj0u3qLL4rNnz8FXb/wWrr3tBzjj3PN0T3Gb3aETOsZrNEwOtNoMMC59uiTeJePoJ8s/waMP3Yu77/wzlrz7qtoWXHjB+fjpz3+BW4VgHnfssSguLkaiI2FwghruK6SuSgUZkWaoOwMtWaqi/ajkfGkQcfL23V8SGM1gBpoW5+ayuPZucspdhapr67BtW4XeN660BNk5WcYMhzcE71Mdx4FIBAY4yzV+a5fZ3PbGDjz7zgeoaBOy2kNaKJ2J3O+guyRptJHR8bojkU/ujRFylxAbiZK0JExOs8Mi95nxCUgHZbdZUZCfA245yR0iqO9p/DpEIrvHYKwokTXKk8Y7GqZ8ev1+bJW01wjRdWRmwhsVjbdWlWNTQ6e6c6IPzUuPnYV5E4plNuLDuo3l6BBiOK6sGKXFRbqE2dDYgLXl2xAQcjprxmRkZWRK2PIus+ruKt7yG7cndHk82Lh5GzxuD46eMwnzZoxXKVUYYZgwu0BKKrZu24blnyzHylWrsL2iUl3ExMZEo7CoCIfPOQxTp03V7d2SuHewTBwpQdrdwYSv4bvkdsHetrUwvgi1tbW48KKLceq5l+K08y5EH11wBLuKAwUkjBw71OWb9Nv90r+//srz2LphDU48/Vw1jlFBgRAy1j1T4s4tFB+652949IF/ob2tFVf9/E84cv6xePHev+Ct558SohrAHXLtvAsuUEK7N2AdppV4D41dPW60NDejvHyTEtxVSxaio7VZ72Ea6E/UnuAQAp0uE7scHDbvOMyZd7y6JeLEja6KRoiC7BY4jpNY9nZ3w+P1qsuh7Vs2Yf2G9diyeokaLxUUFGCOEOUzzz4L06ZNV3d7LIMwofxsjKxEU8gGpXgxQYnmQdXZSlK0sgW/mmB6+6QzIOHxuL06y6Gkjku/vJfzNj2kgkdEciTiVZI0LttFq05mfEwkXEKUuvwys5Kb++U3f79c65XZe3cPenlRrpHgxUdHoCAtAcXZGbDJ82w0HAy5dVVRYT5ysrN0X3TebzYMvp8x3/uGwudCJg2aD3JN/lNHhw22y+VSi/qomHjUdLjQ3OVRiabwYpVmFmalwyZ543U50dHZhQS7XZcc2Fk1NjajuakFySnJyBayapWGrQMHX8s4s6PaRdxNq3Na2tO9UWFOBvKz0pTkhxGGCbPec+BLlwFx6tQpOOqoeZg5fRqKigrVkKNGJouLFi3CwoUfYM2atVInG3UnIpoExgbJ5hdJyjmwt3d2yvsGdAAOY3gwKNEcPxljJkzU5ecDDWopL/0ajUm5WQetqkkqu9rbZOyIQ0FRKZxS/7Zs2QyvxwWb1aZGJezrP1m6COtWfaL68UecdA5yhRhVrFupJJXhzj1qPiZMnqbh7Q50XJODLuZIDBtl4r9h7Sq889qLeO6ph7HgiYfw7qvPY9umteiRcd4icXEkJWP85Kk45oRTccq5X8F5l38dZ51/KabNmqV7ivfKpI5W2fsbZto4RtMoi/uir1+/Fu+98TLefOkZfPjem9iyfhUKcjJx1hln4Ju33aYGhqeffpruWEahhRlGGJ+NkZVoLjYkmjFpedKrWwwyctDAZD5BcGYUvMZGVd/YhC3bK3Tpo6SkSCppHuJksCHBNMHl52ht0HyWn9yrPFIlmzVNkn/rt2FlZSPahHSSpLH/1L3AhTRyyT0xPgbjpUHMmzIRUwuSESkNx+v1aCfFjoiuXGhgoysv5mvNKGtU97I8PqsKSXAkuk1CHLdsq4RLBtm4tEx8UlGLZdvq4OmPUuOpeWPycNkJRyAvyY5qySNKIJOTkzBhTKmS8lXrNqgh08SxpRhbWiydaXCQ1t2LOLjvIt4SriHR9Ep4W+FyunDEzIk4etYE1WENI4wvgrZDqb/0y7h9m7S9lSt1z/VN5eVKZrgzUWlpGaZNm4YpUyZjwoQJusTOSaQpYQodgPjMoo8/VhWa+fPnKzEIY9/jYJBofhZYn8whe92alXjiv/fBJWPrNTd9C9Nnz9X5/l1//T2ef/whdXr+zT/+G5NnzsHL/7kTrwoh7A/04vYf/S/Ov/SqL5RoGvVfCFhvDzo72lFVIW1g6UfqOL2uugrd3d0SRq/Wdepb2u0JyMrJw4zDZZJ2xDEoLStTUmlK/DXWwbjvbzBtJJd9kjaXy4ltFZXYtHIpVix+F11dLnAr5CPmzsMZp52CqdK+ud0xDbVoTGW2aYYRJpi7h5HX0WSlsyRJAXFv7YOpkIakRb6aUj1KOjio9EoeUKLR39ePpETD2z/liVQwJtgEudRN+iS5JPnDq/2Ilhlsityfm5kCm1T2ASGeUX3cdqtf73fERiHTYcOMknwcM2M8phZlyjVj33Va3Drktzg5N3TMJOTQts6X8PuXKQs+az5vnge/kyhzRu4VckyyFxlvgatHyGeXFz7uvSnpS7bEYXIJreOtCAgp50ydn4mSZsZte02tfi8rLdLdNgahhlSclYdIU0PA+mXqaFKimS/5V5idGl46D2O3wPrDg/pzWVlZmDFjBqZNNx
zC06iIS+0VlRVYsmQJ1q5fj5aWFrjcbm1OxlQxQlU/2OdxUPL5/Vi8eIncvxT5BfnqsNskpGHsOxwMEs3dQWp6BkrGTtJ+knqcyalpWL9qOd5//UXUVlUokTzilHOQkZ0rk6NN2LZmueRFH2YcdhSmTJ+5S4kmr5FAetxOtLa2YEv5Brz7xst47MG78eR//40VSxfpkjzBLV7TM7NQXDoWc+cdjUuu+jquuu42HH38qcjLyzNW7YLjwGiALvf3dMs45ERbWws2bdqIpQvfwZOP3I+3XngSnq523UXppptvwf/84mc4/9xzdnaeHkIyidGUttGO/bR0niDc4CAwBtpNMJVRURTPD6DT6Za86BbiF6eOyWOijSV0HuwOmT8E6zAlmtTHjKSfShmQrHExSEuyIyNRKr7cQJc96Yk2lOXmYPaYfMwZVyRkKhVWivMZmjzPZTpKMElIDfLKN4WA34de21uEhEPpLFPF99Exr9vrRXtHJwaEHPZHxaLDIw3e59OOzS7pKsnNRFqiXdI6oESTLpFofd/bF0BTQ7Mul7Pz4idVBIxXyR+qGnxG/A2rc2MLSiWaWWkozEkPE80wdhusv2ab4ScHnTFjxuCIuYdjyuQpek5pR3dvH1Ys/wQfL1qMtWvWSF9Xq4THoJsGWBdXy29vv/2O/Nalg1hiYuJg+GHsGxwqRJPEJz0jAxOnTEdiSqqmc83KZfjovbeESLXq5OZwIZrs/ys2rEalHFxRmzz7SMycNVv63h0SOa/0xe1Cvqpl4rR2zUq8/9YreP6ZJ/D84w9i0cJ31bl8VHQUkuQ9eQXFmDx9Fo485gSced4luOCyq3HCaeeidNxEIWRxQnB7lNTtT5jtlsIYruq1NDeqFfyaFcskPe9g8Ufv4ZMP3kR/jwezZ0zHVVdeiTvuuANXXXE5Jk0crzYMZt6YYYWx99g/S+epXDpnQR6Es3kzO1kv5ZTfgqfqL618eyXq6urV6pXLwFnp6YiOidYbuFBOMyCSNOO8X5fISTZZz+n4nLqV7ULC1m4sV7dBCdk5KCsqRLrDhnjpCOg4KUJmrVxS5zvZyfBZ+S/vYH7LEdpoGN993oj4Zn27/u2XuFDas7F8K7ro0iMhCSsrmrCmugF+mXVz//dz583E0ZNLdcvOmpoalG/ZjtzMdJVXNja1IDsjFZMmTNB8G8xQxnswv4ekQa4PXTqfO2M85s9kGJbgTWGEsWcIXS7jOQ+SxqbWdpRvWI8VK1di69atqK+vV13onJwcjJN6O2H8eCGkyXj7rbfw4osv6iToiiuuwGWXXYa0tDSjfYaxT3AwL51/HlgvI2VS/6f/+zFeXvCUECwvbv3dPVi/5H1sWvkJmiq3qkSTe4Jfe/Pt8NNdnrMTdbXV2LRpEzatWYYtG9aiubkJbpdMkvr7dRUsUeot3Q1Rcjl1xmyUCoHPLSxGksOhOqEca0jo9jdMQkh1NL/fjy7hHNWVlSjftBGb1q2Ep6sFVmmThTJezj3iCMyYPgMFxcVIS00S0s6RcwfCxHLfYv8sncfZ5c2GEu2hAqaUUk1uTcmlCe5EwvQnOBIGFYopcdT/EYY80JyIK6fSfwO6XNcug1pLc6s28pKiAhSkJ8Ei+RrNQZDST8pRJDwuW6v0j/80q0k09QX8YiD0fJ/A7NFDw+UWYYabI7/MLuOtdjS6/GjocKJXbo+O6Ed2UgKKM5KRYLeqTlBri8zI5Rn63+ROQbnZWcgQ4mlKfAfB+H9GGpgHoUvnOdkZKMjJUKf2YYSxNwjts3jOIz7eoltdjh07Focffjhmz5mjW2Fys4Y6IZzrN2zAyuXLsWbtGlTKwNckA3l7R4dOqGi5zq30uFVhaNhh7D0OFYnmrkCp40fvv4Mtm9ZjoK8Pc046B8nxMajcugldQSknHbhbHYlYu2oF3njpWTzy4N1q+LJ180b1ssB6aHckIV/I5MQpM3Dy6Wfhoiuvw2nnXqREMyMzS+stwX59hGRVnwnGgYZFXq8bbW3NqNi2BcuWLMabLy/AwjdeQHPddmRnpuFsGvN88xu4UiZ4c6SN5ublqg2AuSQeeoSxbzGyEs1FS8G9VmPT8jBw0BkDBTEkO/nNTCVljPTnWNfYiK3bK3VryZLCfBRKhbdZJD9YwSP6hGAq5dRlNy6fm+jvD6CxsRXbKqqkUfmQlZWJMWOk07BaEEWSqU/xjUFjIn2K9JKNZ8c3jeNwNSYSXTXSISjRNIgh002F68rqWvjjbVjf7MGq6ga4hQAypkeOKcD582egOD0F3o52rNtQrsvnrJ7UM500YSwyMzPVeGgwYaEYmh65b6hEc9bUcSrRTE6wBm8KI4x9D7NL7XK5UV5ejqVLlmDN6tXYsGE9mltbVdpCsL0XFxbhiisvx/nnn4/kpOSwzuY+gCnRPO38y3HK2eein8tCu+ozDjZIfeKmBP/7o9uVZJFg3/GXBzFhylS89coLeOYfv1avHllZMgmKikVbXZU+Rqkls4c+PIuLy5CRm4v8ghLMnHMESsdORILDIXW6X529j5Ylceqf+ru74erqUJ3R1uZG1FZXorm+Bm3NDcIzImUCl4eZM2bgsMOP0BUFbgASijChHDnsF4lmlPUgcdi+B2BKmV66N4qNi0WfkEzOvOnIlorGJJqGexRjudm0JiciI6IMn2pCliora9DZ2aVW2UVFBYZTc7lHG6D8M+im8c94o/zTbGZYcs5BcFgb2I7wDZIZjIn86evvU2OJNm83/HLe4fHrvrtMryM+HmPyMpFGwyW5uTOop8lBOy8vW2akGYiNpqfQHfmyUzp2kaah7o3yMlNRlJ2GeMn/MMIYLpiDIaU+lGyOnzARpaXFuvtQRcV29WmrKzvyjy6S2tvbdZmd+nZ2u12fDWPvwX71cW5BOX2OEKcxhwTJZJ2hHn639HPvvPEKKrdtUT3Dw046C0lp6WisqsCGZR9JP+iXMccFv9sJi82u0s3xU6ZrT330KWfjiq/fgKNPPAOHHXEksvPyVDCkuxhJnR0hmdQuwfbC93NFr7mpEZs2rMPyJR+p/ujaZR+joXor+nv9GFdWgtNPPQWXfuVSXHLJxZg7d662QW77abZL8whj5BAcscPYZ2AFDqnEPNupSktbZQeQm5OFTBlYAoE+1NTVo6G5BT6ZofWRYMo/LhGrfqU8wmUBZ1cXqiurVT+T20/m5+YgKcGu9zBQPjNUS2bwvfJO/sa5qOFMfRihJHDHYfZNEUKWKZmk8UNMRD9iJRpxUUK8g7JXP60BPR709NPKPlrJIDsD+rx00O+oDMQKDS+kwxuS3zsj5D6BDO96hBHGSIEDZILNoha4JEC9gYBOKOkRwp6QIIO9DY2NTXj99TfUiIgTozD2HbQ7GuYub79B+j32kZy4ODvaVA/xpeeexPbNG9EX6EWE1LmYaGPPfm5DHBPNfjUOeYXFOOzIo3HORVfgxtt/hNt//H+4+Iqv4+TTzsG4SVNV95L9dm/P/pNghpJBWoq3NDVgzYqleFnS9597/4GH7/s7Fr/7MqL7vJh/5OH45m234ve//S1+8IPv4
4ILzsfUKVPUtdJQS/Ew9g/2y9J5XFo+BqLihQaQHh2kGJKt/Gamlec0kOkQ8sgdg1rbO2Clq4j0NGSkpcjAZNdOBP0BXV6nBLOugXpdxi4RBXm5elC3k9qcJGrmEjUf28l9URB9cp0kkz7hh5tq7fR6yQcz3UxzXVMTNmytQEWnF+sa2lHdQXI5oC6Ozpg9ASfOnowESUB5+RZUVtVoJzl9ynjk5uWrmyTNV03gFzhdl/u4VZvLa+wMxKXzmVMNY6CUhLAxUBgjA1r2tra14pmnn8GC5xeoxIl62elpaXA4ElUdJDUlWSWZZWPGYPLkydoXhLH34NL5+RdchHMuvx4nnHKKupPbuVM6sME61S3ky9nZgfaWRtTV1qByywasXfkJNm3aAK9M2KNknM0qGYvv/OJ3KBkzFhtWrcYHrzytOvDjp8zE5KnTkJ1bKBOgOCGTASFz9Ie5g9yNNPheUhGVWlLf0u1GZ3sr2uRorK1EU30tGupq4XF2IDs7C+PHjccRRxyuW8iqMV1IvMPEcvRhvxDN+PQC9EdSQnWQV4iQrDXPmGJalCvF7u9DS2ub7u/d1NwKf08PkhMduixOQ5++Hj+83b3o6OyEx+tHWnICCgoKkZWZLh1EvLEMHgIapRsnxvUduUsqasSBbXC46b3xdiONrF6hb6ProlWbt2Jzcwcq24Rot3Siq8dwVH/spBKcc9QsRPX6UVG+VTqZdl0OGj+mWH0X6o5GAo3/F3Um8t6hOpphohnGSIO6ZOs2bMRrr7yiGyZw56G0tFSkpKTCkSCEU77zumnkZnbH4cFy7zFINK+4EcefcOJgnh6oMPV2Kdlra23RvcNraqpQuXWzEsyaqu1yvRW9QtBsMmHJzMpGbkEJJk+fjdPPuQgpQsRoXU41DRrycOceavEHegI71bf9kU+Guhh0ed7Z0Y5aIZNV27egqaEebS1NiImOlLGhT1f/yoQwjyktxaRJEyUdmSqhDePAwMgSzcVLQc0ka2bxoUE0idDsDQ4e1LfkGQ92Dm6ZgZJsNjQ1w+/za8dC63RarbJ4IqOjkJKUqD4gU1NJMqljGJRLapihRWi4NeJ7jdfxPuO9I5XfZmw0ZsG0muiRgbeytg4ba+pR2d6J8sZONLl86i9zekEmjp81GXGBHnTUN6o+Di0KszLSMWHcGCQl0lm7EbrqaTJvP2tAZr6FEE1nlxPTJo/D0bMmIj0xbAwUxsiAS48NDQ3olMlibm6u7hxEd2Y7/OWOTJs8lGASzfOvuBbHnnCqurs5EGHWDbcQxLrq7di0fjU2bViPbVvKhYjV6c4/JJ9EWnoGisvGYdzEKRg/aTLyCkuRmZ0Lm83Q+eXBcWU0WIkTtI6npJk6+/XVFVi/ZgXqqipQLSQ6sr9XjVyLi4UsC6ksKSlFVnaWql2ZPpDD7ebAwsgSzSXLdOk2PqMIA1EG0Rxu6dpoRGiWM/WkhnTizqX01tZ2dHf7dSmDoKUcnUSnpiarTqaxi1CQZIZCGx7D5WGEujPJ3E8ISSsX+PnN5fViW2U1NtY2YFu7Bw0dnXB6XMhNScb4wjykDQRgj4tVAyla7tLN0eRJ45GblaUhGBJNCWlQhLtrhC6dk2hOHFeK+bMmISctYX/nShhhhDFMMInmxVddj6OOO/mAIZokTxwbqHPp9bjR2tyAaiFfm9evQ7mQzM0b16rhGO+Li4uHw5GA7LwCFJaMxYSp0zBp6kzk5hXBZrepxTlJ5WiASQq5nWt3N+0NOtDW1oTqigps37oZTXVVaG+qF0JZjDFjx2Ly5CmYM2c2CgsK1FA2jAMf+41oUqJJyhB1KM5MhmY5s0AuUYeRTnQ9fp92NlHynR1KvBx0WzEQEWloJh6QeSYdn6SHZc6Ov7mlFZsrqlHZ1oFWtwctLh+SrYaBRH58FEpys5CdnoaKmjrUNzRibFkJxpWWqO6RwRIlwz6HaDKHSTTdITqa48aU6NJ5XkZimGiGEcZBigOJaJKE8eDKVldnu+oh1lRuV1c9WzdvQLWcN9bXqns4eiuhCyIll8VlKC4pRUnZGBSVjUNyShq43Wl/oG/UEExKUKk6whW7lkbpx2sq0dhQj9qqSkRSUNDfC4fNgqLiYpSVlmLq1KmSppJBv9ImQQ3jwMeIEs2PFy3RJSNbZjF6I7j1ouGS4VCDZjkbktAhPZVrpvyRCAgh43c6YJfWaiyzyTl5ldrBHGh5JulhIg2SyU+ZtUvH2tDYhPKqWlS0O+Hpj0DsQEASHY3xORmYXlqIFCGd2yurdEeh5NQUzJk6SfWLOHAIh2RAO4P5wnySUx4kmp4Qieb4saW6dJ6b7tA8J4yy4H/zShhhhHEgYyjRDJB4De0rdgXpAkZqPDJ21DF2sKmt2o71a1dhy8Z1qKmqQH1ttRrCUKcyMjIKScnJKB4zAWMnTkHZ2PHIyy/U/cuTU1JVb52EbH+7HzJBcsn4eNwu1bNcv261Si3rhTjHRgH5dFUncS8rK0FxUaF8z0dmVhZsNpvGP0wuD06MLNFcvAQxkbQ6L0BfVLw2/kOSaPJfMNeZeh4miTQ/aTDE7SgVwQ/tLA+k/GIiGV0lmqTVBtE04XG7sVWI5Duba+Ht7RHy2aPK31PHFOO4yRNgk060ubkZq9asV/WCuXNmITkpSV1AURJOX6OGa/ud84Sv4DGUaNLH2vzZk5CfsYNo9tINiIbFpaaQyIUxLKDqBwfZMMIYLijRPP8CnHPp1ThSiGZvL6ftX9y2SZLi42KU3A0n6NKO4+GWDauxavkyVGwtR2XFNrS3NusEnPrncfFxyM7Jw6RpszF+0jSMmTgJmfKdHklGY/th/+mTvra5sQHbtm7CJiGYjfV1cHa2ozA/BxMnTMDEiRMxbtw4FBUVq0pYTKzktaQ1TC4Pfoy4MZDp3qgvkl76I3albXhwI5jd/Mtjp/RLeyPX4TV2jUrC5X7+G5S4HUiNkmnVJDA1TK+xbC49uaaDxk7VTS145qOVqGpqRHdfv0xEgFljSnD2vDlIt1nh6/Zj9crVaGnrwOSJ49WtEzsoo+7QeTtDloFhSDXmN+4nbRJNXTovKzKWzjOTzdw0/MRpHCWXw0Rz2KGGbkFL0zDCGA6QaJ533vm46KrrcfjRJ+jOQPynncLngP0FLZlZR/cFQgmU2+NVIllbsQWbt2zG1k3rUFu5DVUV23VL4qioSCSmpCE7Kxv5xWUoGzcRY8dPQFHpeLUap64idRzVVdN+hjkxpxuizrY2NDbU6rJ4xdbNaG9rQV9vNxITrBhTVqbbspJglpWVqjsvPhsmloceRpZoLloqlQywZpYgEBFrzGa0C5DKx2jwRw74rIzaKxjfFQdL5QzN7mB6mUyzE9RUm9/Nk+BvIT/o11ENszwlvibRZLw1KdKR84rM3dHh9eG5D5Zj8cZydPkDiInow5isdJx//FGYmJsJ7gW0ZVM5Kqpr4UhMxEQhoampaQxFQzO21oySd7AWDckq+eNye1C+NYRozqKO5g6iGUYYYRxcMInmpdfdjnnzjxlxnUVzIsVJck1NLao2rdLtRxvrhJDV
VqFJJtV05UPCRVUgWouXjhmPcZOmIC+/AJnZeaqLSQ8FlG6y/9zfepcm+ea2j63NTUqUqyu2oq62WiWZmempyExLRUFBPoqKi1Ag6cjJyUFycrLmh0kuwyTz0MQIEs0OLF26VPdLtdC9UVS8VF6pgEEpEpdGTZcfSlJMsGKapEU++QuXl40l0zBGPbTs2EkGy0u+0qiJ5d0nZeruDWDJxko899Ey1Hb6tJiT46Jw4fyZOHr6RNilk+psacG6TZvhFNI4beJY6cTyEaV1JVhP5CFT5svqzLfxV7U693iwaYtBNMcGJZr5IRLNMMII4+ACiebZ55yLK2/+HuYeedSIkTSTUHV1dmDjhnVY+tH72Fy+Ea31VWhpaUa336/GMURqWhpmzz0aM+bMQ+m4sUhNyYAjKUl3jDKX7vc3uWRa1A2REF1KLjeuX4NPlixC5ZZNajmek5WBObNnYeLESSguKUZ2VhZSUlIMF15h9ZgwQjBiRLNdZnBLlyxVBecIaypgccBisSE6NhaxwYalg3+QAZhLyGbk+Bn6/VMzI5OMhjH6MEg2ec4yMiSb/XIakJNtze144v1lWFUtnXEfnbcDp04rw5lHTke2w4YBmUWv37AJFdU1KJDObdzECUi02SUMSktDJigChkv9Vk5Eokk0vaYfzS6MLS3C0aqjKUQzXFXCCOOgBInmmWedjWtu+yEOnztvWAkb+x9uK9rV1YnammqUr1uFDauXo6a6Cg211fB6XHqP1WZTP5Al4yZj6szDMXbcOBQUlwnhzDQ8isg9IzQUfyY4pjIOfZJffp9XreCrK7apW6XtW8rVn6clPgaTpf+dMX06xowZg6KiIt2Zh0v7fP5T43IYYQhGjGi63W6sXbcOlRWV6PD1w9kfB7s9ASlJdthsCYiNikZsfFzQTyQJg7GczsjxUHqidVj+MMpDK7SZjHBFH33g0jmJppJMli+/s2ONxICUd5PHhxcWrcf76zai09crt0XhsOJcnD53CibmZ4KblTbWN6hRUExUBKZPm4L09AwN2qgdrBWGuoXx1wD1nijRVKLpdGJMcSHmz5qIwqyUcDUJI4yDFCSap51+Br7+rR/jiLn7XqJJySU3k2hvbRICtgnlmzZg+9Yt6pKopbEene1tuq0jt3fMLSpFYXEpJk2eguIxE5FfVCjELEulfuyE6CN4f0ouVWop6VH3Sl1O1NdWYvP61ZKWCrQ0NSI2JgpZGRnIzs7ERCGYJSUlyM3JRbpcM3dqCyOML8KIEU0aXbjcLjQ3NaG2vhlLtzTB6fKqlTH3+eWRlJgAizTAuGghnTLLi4oynLUygmYkd8iuhiDMHEYnBicAxodx0ieHdLBCPCl79Mq1RRsr8ewHi1DT6UNfRDQKkxNwxmGTMX/qGNjlkd6ebry3eh26m1tQVlYss+kyY+9zOah+EclggxMTkk2CkxbDYbuxBWVRQZ7qaJbmpoerSxhhHITgcLZ9+3acd/4F+Po3f7zPJJqm254umbBu3rQeq5Z+iC0b16O+rgYdQiy5NzfJJ3ugDCFhUw+bjykzZks/NQ7JySlITEqC1e5AdDQNF4147i+Y5FLm83BLv7h961YsWfQBNq5eCVdXO2wyBh9x+Bw14ikSYpybk42kpGQ4HA5d2g9LLsPYU4wY0TRfQx0V+g5rc3qwubYTdXW1qG1q051wWHXjZJaUlJQIh92qsz6rxaqOas3l0dDqbZCKIFjxQ5MSbgijA1omcgzqaQYlmnL0yWUSzd7IaJTXt+LZdz/A6toOdA9EIk6Y45S8TFxw9HSMzc1CjNy5Zet2VMhhtVkxZdI4JEkHbhDNfg2ZBNNwCW+cG0vnO6zOC/OFaM6cgLL8MNEMI4yDERxntghxOuuss3DTd/8fjvgSOpommeru7kF9QwPWfLIIyz5+FxXbtqqfS7pnCwR6hTzGKHErGjMe8+Yfixmzj0B+yRjd/pHj2WggZaFx8Pt9qKmqxOqVS7Hsw/fQ2tIs+daHWdOn48QTT8DESZOQlZmphjzxFguig7qnYYSxtxgxohkKvpIv9XUbpLOhqQWb6zrQ1NSMxpZ21REhHAl2JAvpTLDbYRFyYZFKH0MDIq30XHpVCrNDymlcNjC0YZjJDDeYkcVgvhtlSmLJK6SERllFICBksdHtw5tLVuCdtVvQ5uvT3xyxEZg/eRKOnzkWGYk29HR3Y/OmcjS0dWJscT5Ss3IQL5OQuOgoxMXEaD2IlgcNp/bQXZS4l65BNJ0oLiwIE80wwjiIYRLNc845B9d/538xd+7c3Sea0imwW+CubE6XEw011dhWvg4b16zEli2b0dLUoEYwvT09iImJRUpaOnIKizFp8jRMnjYLhUIu0zLSpE+yqEHP/lwSJzhOMj/8Pd1oa2lB1bbNqNqyHps2blCDJbpNyslIxVFHHYWpUyYjTdJDfUvD2p05QcmlEVYYYXwZ7BeiORSUcnpJOrt9aGpuR3VzBxrlc1tdM3/UWaFNiGZychKSHAmwCuGMpbNXNSIyOgfCbNZKPPVi8JfQJIZbzgiCZJJ5LyUSJJr91MvUM7kqRWE4qY+AR+7bWFOPx95bhc0NLQj098nvA0iVTm9GaSHG5qXDGh+ruwnVNDboElSqzLgdCYmwWy1Ic1hgifv/7Z0HYFzFtYZ/aVerVe9dsuUm925cqMamhV5C74FA4EHCSwKkEAgQIAESEgMhL4QAIQkEEgi9Ggim2AbcK5a7ZPVedlVW+85/dq+QhU0olm3Z57OvdveWuXPnTvnnTPMiIcaLxFgvvG63vn1/cxM++aQYzY0NGFw4ANMmjsQwEZo9BxAZhrFv4AjN4487Hlfd8CtMnToNXZwr93Ngszg3X2srNm8sxkcL38eqZUt0bsiqynJdd5wjxikck5OTMWL0WIydNA0jxkxAwYBCXbnHy4Gt4ZHW9MOeKVaD4BK9+izi3/KSzVi57GMs/vADVJSV6XNMmjgBo0ePwpAhQ2QbivS0VPU/m8uJWS6NvmCvEJoO9Aprk+0dAbRJrXFrabn2r9ta2YCK2madUDcx3JczKSlREwc3j+fTpnWiDxQWMd0wATmPurPE9N+OG18SCc/uOTQdQnNoMohDlkc2oQcRiHSjzufH20tW4ZVF61DR2IIueacxEe2IcHkRHeVGjLsrND1Ihw8dEZKhigNdkRIHooDUuBikJCajQGroOamJGJCZgozkRHT5fNhQvEFnOygsyMOBE0eEhKZkxoZh7FuwDFm5Zi1OPuEE/PDnd2ozti7KsAM4UjooeUhNVQ0WL3wPc994BetWLNHWD7+/FZ0dnSIupcIreUVewUAcOusozDh0FgoKB+v8l263R8Ulp1Fzio49AZvtI91Sge8KorKyCos/mo95r7+E9Z+sRktzIyZMGI9jjjlGrbt5ubnapO/1RofKzXA+aALT6Ev2KqHZE3qLGYS/rR2NzT4sWbsR5ZU1ukJMQ0sbXJLAvTFepIngTJHNGxOL6GiPLshPFRP691nYaO80C3yGnkFhCe/ro6PNKTbDoq67n2Y4bHmIfwT2svRLmG+sqMYbH63
EgnWb4Wv3o71LhKmOVperaMHu6tSreRWHEkWJmx2g6Ax1cI/xRCM7IQYj8jMwcmAeUhPjUL2tDBGtocFAM8YXmdA0jH0UlhsrVqzAKaecgmtvvgsTJk+TckTyHdkfwTQvnzRm1NbWYf2a5Wq9XPju27qgCPtcctAhW8o4OJVzXQ4fNRbTZx6J0RMmIy0tDVEuT7co2yNFJ+8p96cf+FytUpGurizHutXLsPTD97Huk7WaL+bmZGHmzJk49JBDUFBQoIORaJRhvmei0tjd7LVC04Heow/bpXbJJa8aGhpQUsFm9VqUbitFfbMfESIwuCh/SnIiEuLjEOf1wktLp4hOndibCVMSH+fmZBrTZNb7qXmgZ1BYYvyaSFgyOFVoqrwPhbv8DVk05bfup/DvklMi0Ca/JZvH1uo6LFxZjAWflKGkvk6brPTNRLJpKvyOtGkdiJbXyy6doRfr0nu45I/XHYEMKSwGZ6UiNTYCyVIJGV+Yh0NMaBrGPgvLi+XLV+Ckk07Ej279jQrNSCkfOH3PtpItWLNyqTYlb1r/iU5JxOUf29v8mvWzi9bAIUMwbuI0jBwzDsNHjkF2Xr4OiOEKPV0sQHjiHoD5VWgaonbUVFeL3z/ByqWLsP6TlagoK0VqSjLGjBqFiRMnYuy4cRhQkI+ExETESFnI64gJTGNPsdcLzd5QdOiEsm0dqK+v1znTVhWXYGOV1EZFjEa6XUhPS0FGagri4+JVbHokA3F7okLCRp5WNUnvp+bOnkFhiXIXIuHa3YTOPpqfCs+ICKrEULh3dokIdblFcAIlIjaXrC3G2q2V2FpVjeZON6K7WtASjILb5UGMCEnm+/7OLnQG2iROBNAepBCVKoUUChFyP1YuEjyRyE3woKggFzPGjsSsCUMwMjcVUfZ+DWOfg8XZ4iVLcMrJp+DGu/+IwsGFWLVkEV575QUdNc5BMH6fD5znkuUIu1ylZ2Tg4JlHYMphx2DUiOGIT0hEFEeSS1lCgem4uzvh/SgM6Qc2zTc0NmPt6pWY99pzWPrxQnmOemTn5ODII47A4YcfhqEikFPT0nT8ArsEWJO4sTfR74SmA73tbK2+NmyrqMS6TdtQXFKFypp6dAW6tCaampKEtLRUxMfGyu9o7SztksSnCbDnkzM9Or+d704i7RlEPRNu76DbWaLmeTs7tj/ghJMGAQUihWYIWiW7ggF9j66I0HJnFJsUkXV+P1raO7C5tAwby2rhcQfR1iHvLlLea5IH9c0+bNxWhUB7E1xRcWhpbUBlcyeapBLiD+taNzoR6+pCnlQ8Zo0twmkzRmNi0QDERHtCJxiGsc/AfGTR4sUiNE/G+KmHYNmiBaiuKFeLJiujzIvcIsQ4bV6hiLPjTj0XBx46C2lcACIiiEBHKC/aYwJN7ss7Mx/k4KSNxWvx/jtvYd5br6GmslwXqjjssENxxpln6uo88XGh5R7pXxOXxt5KvxWaveFjsGmj1d+GreXV2FRaocKTorNDxEpMbAySkxJVeHJS+NiEeESLoOGk30zanSJ93JpAI3RwSqixIUzPhMvg4m/n87/xZc/fV5Hn5+jy0NeQ0NS/kqGyqTuCmTy7Usl+l6jPQHjVIP5uk3fSHU3DH2xMr65vxOb1xTolVkFGOuKyc1DX2IplGyqwYstWlIkQdQU74UGHvOsITBqSj8uOPRSzxxch3mtC0zD2NZhPUGieLEKzbNu27ryGOU9Obh7yh47EAQdMUwvmwMIh8Hi9CHR2qhDdE9B/FIj85HiEmqpKrF2xCKuXL8bKZYt1MniuIz550mQcMXsWxo8fr1MQ6QAgE5ZGP2GfEpohmOjYH7ML9Q1NqK6uxrqNpdhcUoZttS3aPJsQH5oqKTExXr4nIjaOk9KG1mqltU1FZsiZT3ESM+9jCftLImHW8/XIH8eqyTAPihhklhkK1l6Ck6bN7u+0gFJ8hsK/pd2Psq2l2CBbQCoT6dlZyBs2BG1tnVhSXIL3VhZjY2W1qNIOZMe7cdDoIpx9+FRMH1FoQtMw9kFYDiyYvwCnn3E6tonQjItP0Enbpx92JMZOmIzMnHxtXtbZK8Ln7wmcPpc+nw+btm7FsgXv4IP/vIGSrZt0XMHkKZNx1FFHiSg+AAMHDpCyKklzPUdUmrg0+hP7jNDcGXw8Ns02NjajqqoKa9asxfLNdahubEW0ZDiclzNDRGe0CI+EuASdGJ6TwhNNzJagvz6fF8W0j2akiEjaKB1RKbv1G/tthj7dso/fudoP10jX8T/yj53jN2/aiOKNJTr3Zo6IzYED8uGWAmbZpjL8Z9FylJSXYnB+AQ6bPB6zRhdgXEEG4mJjQg4bhrHP8KnQPAOHHH0STjnzAhQOHqCr3lJc7onizrknixIdmCN5WVV5hQ5Keu7f/8LKRfN13XAO5DnvvPNUYGZnZ2v542yG0Z/ZL4RmCCbWkOisrKgSYbIZxSXVqKxp0JHrXMM1jisRJSbKpwhOj0eXOnRTdIq40cFDTPA7C66dZQY9z99fM4wdhZlOdUQ7JufRDI0657Qi7MgQlN8ciU5oxaTwDCI0AbvjFoUm5F0yE25p9WHNuvWoqq1Dh8+HjLRkZOXmIRjlxeryKmzcugUZSckYV1iAA4YXYFRhtlk0DWMfhPn9e+++i/MvuAC3zXkYQ4YN3+k8mrsDDl5tl8pwS2srtkk+tG7lMqxa9hEWf7QASVLWDB4yGCeeeCJmz56NIYMH6zXM0/gcJjKNfYV9XmjuCD4yt/qGRjQ3NaGisgJbtm7FhgofmtsCOkcnLZ3JKclIiItHVGwMYrTDdWgOsh0m/Z1kCN2By4wj/FXZnzKQXlGMEpJPz41WTIYo+2/KXw0w3ccmchGckM9QM3voPMI5MyXiqgN8j7SGNjU1Y1t1NUpLy9Du86vlICY+FpHy3pp8bXBFuVGYn4fxIwZhbEG6CU3D2AdhfkChec655+Lme/6MYUVF3X0ZdxviB074XllVg62b1uPjBe+iZPMG1FWVITk5BQceOANTJk/GuPHjMXToUKlIh8uV/alMMPYr9luh6SRqfqeV0+9vR2NTC0qk1rm0uFRHKrZ0ebSpNkEyh+SEOMQnJiNJPiPdXIkoQvsNhuRSyB0GJH/xsztrC+9QceUIJLKfZCp8WieKhUR6WDLuLBx67NdBWXI4EN6lQ4NEeJLQO6TIDNA1+R6J9s5O1NXWo1IEZ119A1qaW0L3lC0mxouBBfkYP3IQigZkmtA0jH0Q5gshoXmeCM2HUDR8uKb/vieI9rY2VNdUY+2Kpfjo3TewYvkytLe3IScnF8cfdxy+8Y1jMGhoERLjvVJ2hJardNg9fjSMPcN+KTR3hhMUnJuzpqYGJdvK8ElZKzols/B3BnRlhZTUNB21HuON1uZ2d3ht2VA+EcosnABVSSQHnIEv1KWhM8KZCu/HC53PfQ0+tzxXdxST75Hh7zoCXb5/5qnD4cCPTs5zJ985I6acrJ3k+RkSoJHoZBM7v0v48x7aBC9X+Dva0FjXoGKzvZ2T/Dfqdfl5uZgwajBGWtO5YeyTMB8INZ1fqPNo9o
XQ1LmcAwF0dnSokGxqrMfWzZuwfOnHWPTBO2htbtSVeGbNmo1TTj0VkyZMgIfzOIs/TFAa+yMmNHvAoHAyAn7vDHSgvKwCtfWNKC8vw+aKejT5OkXgRMAdFS3C04u4uASdy4yrSrijPHC5KDrZ01ACV10S+EVCmeKKojMkOOUPg573cz73QRi5NIqFn9lpHu+Gj93zdw9oBWZYOXJUv4tb+ls+A/KNIc2gCw9Olz2hLxSgrAA0t7Rg3boNaGxu1iUoxw4PWTQTYkxoGsa+BvOHnkJzaFGRVkS/KtuVByIsfb5WnfSd5cGWjetFYG5A6aZ16OrswKBBhZgwfjymTZ+OSZMm6ej2nsLSRKaxv2JCcycwWJyMgTVYrinb2tKMquoabC0V0bmtGs2tfnRERiPSHa0WTnbuTtJ112N0EJH2JZTN6ZNIQja30P7t4GvQXfzjvJJe5/Rb2D1BnkYeR5+oZ4zjzh1EQdHycq7TiZ+zAPQMi5BVMyRcHUEaspzSJec+DGMKzVVri9Hc1IzBhQUoGjIAI0RopibYqHPD2NdgHkChecFFF+Fnd4nQHDbsKwlN5h3MTbjwB5eorKmuxMYNxVi1bDE2rFsDf0uTLnc8duwYTJs2DePHjUNObq7si9f+4U7+3rMcMYz9FROaXxIGl18ynqrKKp0uaePWMpTU+NDW0akrTrBZPT6eVs64UNO6OwqRtHJqZhcSRs6mdGdCfA0UVhRV/OTvKNn2AZwoxmftHd167+sOD4YBJSThvrDYDJ/Lv9zLT0fIq1PyGfojyO8WEZprizeo0GQfzcGDRGgWZiIr0YSmYexrOELzkku/jR/fcT+GfEmh6TSLd3S0oaG+FqVbtuKTtauw7OP5qCzdgsTEBIwYOQrHHXssZs48DBkZGZLvMPcJYaLSMD6LCc2vCIONGRKFDCeFLy+vQHl1HarqW+Hv6EKEKwpuj1fn5UzkdEkiOqOiPCI8Q+vnfiY7Ck/n86lskteia3fzZ4+zndfVO0Pb2f49SdhPzpPtPLuX8xx/8xL96ojt0H7OnRmS6dwbaj533NMQk+t1WiT5x988g0uN8v18IkKzobEJA0RoFg0ZiBEDM5FhQtMw9jm+itDk/Jp+vw9NjQ2or61GheTlZSUbUbJpHSq2lSA+Ph5FRUWYPm0aDjroIOTm5SEqvOyjCUvD+O+Y0PwaMOgcgdPW3q6DTurq6nQgUUVNA6rqmtHa1okojwee6BgRnYmIjY9DTDRFpxtcd51Nvg5BkUicP5Kiir8i4JK/IRHVTc/X5ezf0b69BfGb4zv1GSck1R09/KmTlIbO4nyaHE0eQiRjkGEQztB7R1U64ezqcZx/KTYZts0tzWrRbAoLzZFDB2K4CM00azo3jH2O/yY0mY9EREplNNCF1tZW1NVUYfPGDdiyZRO2bFqPtsYaOasLmenpGDNmDMaNG4dh4kZOTo62UPVku3zZMIydYkJzF8Kg5Nbe0YGWllZUV1WirLwctbW1qGvyo6WTa6p7EBcXExpExPXWPdHavB4aREThGnJLJyqPcKlg0syRr4kHd/C6uKc7y9vrMr8e/lWP9vbfdr7vRc8+msLnRdUdhA0LFPbR5GTujWGhOXRwyKKZlWRC0zD2NXoKzZ/+6gEMHjKkex5NGgT8fj8aGhtQunULiteuxrpVS9VymZ6WjsLCQl3ycerUAzBwwADJnxPgibLR4obxdTGh2YdwvXW/v01ETiNKS0tRtq0MlQ0+tX62dgAxsXGI8sYiLjYBcfGx8HDkumSKLnck+E8FGPWT/FGhSXoJKudbdza4r2aIfGbn2b/IM8p5FJpNbDoXoek0nQ9lH00KzeTY8ImGYewrOELz25ddjhvu/D2ycvJ14E5DfQ3qamqwaWOxLs6xdVMxoro6RFwOwvjxYzFz5ky1YHo8289GYQLTML4+JjR3Ez6fT5vWmyXTq62tx5ayKjQ2NcPfEUQwIlLXuo2MSVILZ0x8DKJd3BcjYik8gtF5TfKdr6w7++t5jFjGGELChIOwKDRXf1Lc3XRuFk3D2Hdh3vguLZqXXIqTzrlUV3mrLN0MX0sjgh1t6Oxsx5AhQzB48GBMmDABRUXDkZSYsJ2gNHFpGLsWE5q7GVo529o7UFNbh5qqKu3TWSsC1N/agqbOKLQH3SKQXDonZ3JSEjzRXh1IxBHtXKWIJkxtTpeN83LqajnyCh3paZlkGBWatGi2mtA0jP0EFmfvv/c+zjzrTMQnp6JwwADk5mRj5KjRGD92tDalZ2Vm6Mwgllcaxu7BhOYeghO3t7W1o0EEZnWTH3XVtaivr0d1XTMCvgb4O9lPswvu6DgkJKchPiEOsd4YXRGHtfQuySSZT0ZypLV8chANX2T3eHb90AP6U9mfMlYKTTadS/iu/mR9WGjmYUghpzfKQrY1nRvGPgeLs7Vr1uC+++9HWloaZhx0CCaOH4u09PTuQUEmMA1j92JCcw/jk+AvrWvBli0VKKttgLvDj4xEDzqaatHQ0IAmfxcCkR4EI1zwxiXC64mC2xurfYmiZQtNl0RBGXJP5KfzJUTPt7u/Cc3PWDTzUMiR5wMzkZueED7RMIx9CU5pxkGA6SI0Q8sDm7A0jD3Jp/M+GHuEaNkS3JGIjexCW3MjKkRsxiRnY9TYcRg3fgKmjB+J4QMykBIbhWBLDVpqStFQsVm2LTo1R31jo45w7+wMqJDs1pXyhd976ky1bjrbfsEOChh59k8nXDIMY18jLi4OWZmZukKPYRh7HrNo7gVw2g2fvw0lFbUorarDwPxs5GelItDRCZdoJR5rbGpCVWWFjpzkdEm+9g50QsRnVAwiPXGIi09EDJe+jIrSKTl6zs/pfOv5or9QHb8/WwIkWu/QopmfpysD5aUnhk80DMMwDKOvMKG5FxF6E6G5ONnc07PJh0uj6fyczc26ElFtfT2aGhrQIiK0yR9AZ4CrEXkQFZMQEpye0EpEoYnhnVV1QhZPx9UvJSP7m+ik0NxBH82BeXkYJkKzICPRzPmGYRiG0ceY0OyHcOlLn9+PVhFRLa2tqK9rQE11FaprG9Da3qGi0i2CMyo6Di5PNLwx8fB6PXC73PLGQ8OFegrO/0aXnB0p14WVcP8QnRSaO7BoDsjLxdCBWSjITER4gU/DMAzDMPoIE5r9GL46jkxvb2+HT0RnfUMjamprUVVZicamFrR1BIBINzyxiWrp5OTwHhGe0Z4ouCJ30n+JIlLc1SUcQ3u2+66Y0DQMwzAM4wtgQnOfISQ6OWVSU2Mj6urrdc11ztPZ2NwKak5aN92eOLi9cXBFeRAd7dXViNifM4IWyzBhralf2NjeJd/7pdDcQdO5CU3DMAzD2H2Y0NwH4SvtDAR0XV+KzhoRm/X1DWgQ8dnc6kN7Z1CEZhRiY0VwehMRjPIimv06ufylNq2HhaR8UGTqL4pOJ6r0F6FpFk3DMAzD2KOY0NzH4SCijs5OXXPd52tRsVlauk2b2Ds7OtAV4dYmdW9sLNrhle/xOmqdlk6XCEpGDkdohsyc4e97Oyo0ey9BmYcCEZpD8tMwIDMJU
Tb9iWEYhmH0KSY09xNCbzmIzs4ONDe3oK6hEdVVlaioqEBzU5NaQbtcHsTGJwKeRLjcFJvs0+lGZJTnU2tmt/IU9PsOBCh/9/zem97H+kK4itshi+b2Tef5IjQHZadgYBbXlY8Kn2wYhmEYRl9gQnM/hC+cI9fb2tp05Hp5ebmIzWYRn01o87XC39mFKG1aj9e+nP4I9uUU0RnN6eWBSB1IFJTIoz8lFsnWMxY5uvHzYtaOBOquhEJzB300KTQLs5NVaHo9nvDJhmEYhmH0BSY093P4+jlqvZPN6yo8W3UAUWV5Berq6ygnEWTzuogyj9eLQFSiNq+7oqPglmOc65O9OoPy2W31/AJwJDvlZbfE3NVik0JzB300TWgahmEYxu5ju8HExv4HhWJ0dLQu25aamoLc3FwMLyrCAVMPwISJk5CdlYmuQBvqq8vRUFWOQGM5onxVCDRUollEXFegU0WjEhaLlJvd+8I4+1SKUpRy43eyq0XmdmwvfmmF5QCnrmBf3tMwDMMwDGIWTWOHMFpw5Hp7m4jMhgbU1zeirKxUBxFRrEVERiIhKQWxIlCDkVHodCfovtA0SSLi+J8xiyJS3HKE5w5rNn0hNGnR3EnTeUFWCgqzkhAbbX00DcMwDKMvMaFp7BRGDVoeOXI9tBqRTwcSUXRu21aC+ro6+Hw+xMTFIyY2HtHRXAIzEZ1RIdEZuj40XZKjObv7dTqEheguF5sUmjtoOi/IzUV+drIIzWQTmoZhGIbRx5jQNL4QTjThJ0Wnv82PhoZGlJSUoLamFvUN9XIQiI1LQHxSMqJEeHZFxYdFpgtcxpJSMrSFRWVfiUyiQvOz0xtxHs38rNCocxOahmEYhtG3mNA0vhKMNtw4iKijowO1tbXYtHkztpVuQ5uI0IhIF5KTU5GYmKADiTpdMeiIjOWQdW0+p6U0RFhsOjji0+G/iVDn3N7nyf6dTW9kTeeGYRiGsXswoWl8bRzR2dbejvr6elRX16CyqgrVlRUqQjlVUkJiEhISEuGOTUAwKh6dXZHoCl9HkfjpVEm7UGjuoI+mNZ0bhmEYxu7DhKaxS2F0Yp9OX1u7TgTf1NSAkq0lOjE812KPiY1FUpKIPPn0Rkej3Z2I9i6XHAsiEOzSqZLU2rmDaBmQ7XPX8nHEZvjaHfbRzMlBRloCBuekICnOq+cZhmEYhtE3mNA0+oSe0SrQxYFEbdi2rQzF6z5BVWWl7o+JiUVyUhISEhN19LrflYi2QAR8gU54Il2IpBO9xON2v+UrpyrqHsneW2juyKIpQjMzIwmDcpKRFBOagN4wDMMwjL7BhKbR5zCK0UrJz1YRnHW1tdhWWqLCs7m5SQVisojNBBGdiYlJaIuMRldkrArUQFdYPPZGYi2nwmTzO9dkV9XpRGX+lu87E5q0aA7KSUGyWTQNwzAMo08xoWnsFpxo5gjOrmCXLnvZ2NSErZs3o6yiAr5Wn06RlCRiMz4hAbFxsXJ+JDq9GWjv6ICvvV0ccOn1Ki57xtywuOzJzvpoZqSK0MxO0qbzTwclGYZhGIaxqzGhaexRQqKT6663h5vW1+kSmJxCyev16oj15OQUJCWJ+IyPR5MrQcSpD/7ODkSJ6GSfThWYPYVm+LsKTY46X2dC0zAMwzD2BCY0jb0GRsWmFh8a6+tQUVkhwnMbWkQoEg4eysjKQWoS11qPQRui0NDCqZUCOkcn29FpJSWfztMJNMr1nxRv+GzTeXayCU3DMAzD6GNMaBp7DYyKTtM6YR9NTgq/ceNGtXI2NjaqpTMmJgaJiYm6xYjo7IpwobHTo6PdOW69q4tW0pAbJjQNwzAMY89hQtPY63AEp0OgqxN+fzt8rS3YtHkLSku3obGhHi63G56oKMTFxSEtIwupqSnwxCWh2deBhtZ2dAS60NDYgrXFnzad51FopnJ6o2QdDGRC0zAMwzD6DhOaRr+BMTUYFPHY1Izysm3Ysnkzmpub0dbWpmIzPj4BSclJSExIRExCEtqDbmytqMPSFavQKNcMKMgXoZmLhMQ47aOZkWBC0zAMwzD6EhOaRr+E0ZZN5RSdlRXlqCwvR3VNjYpOjydaBw6lpKUhUoRk8YbNaPJ3YvDQYcjOzUeky4XMJC9S401oGoZhGEZfYkLT6Jcw2joikd87OtpR39CI6uoqVFZWoampCX6fT4/75JPLX06eMhn5+fm6whCvdJvINAzDMIw+xYSmsc/AqMx/HDDEQURbt2xBeXm5fG8QoZmA8RMmIC8316yYhmEYhrGbMKFp7JM4orO1tRXlZeVoa/MjR0RmSkqqM/mRYRiGYRh9jAlNY59HRSfn2IyIQGRE98rohmEYhmH0MSY0DcMwDMMwjD7BzDuGYRiGYRhGn2AWTcMwDCGUE3IwGbtZAK7IyD4fOBa+pf7hAqpcZCAYCCDK7YLb5eIBwzCMfo0JTcMw+j0dHR3oEKUWGx0V3vPF4ICxLhF3rW2daPZ3oryuCeu21SLSFYnxg7IwMCMJHveub/hhn+G2zi7UN7ehscWHysZW1DS3o7KhFT5fG4blpmDq8DykJcSErzAMw+ifmNA0DKNfwpyrXkTaf1Zsxuqt9QhEtGFsQTaOmTgIbrdb50sNdnbqeZ1dQbS2d8InYrK5rR0VDc3wibisbW5FdWMT6poCqGsMYEtVA1aV1cIVEYnDx+biyuMmYkxBxheybIYNkypeO+Wmjc1+NPk7ECn7IiMj4BcxvLWmSYRlG6rknpUNflTUtKO2qQWltU2oamoXsdkGX3sHikRo/uibB+D4KUNE6Jpl0zCM/osJTcMw+iWBri7c/+JCPPjKSqwqb4Xb5cfEgkzccdHBaBeV2ewPoFVEXV2THy2dHSLu/Ghs6kCTrw0lIuxaRXBWN4nQ9PnQ3h6JrkBY0EXSghlEgjcCv7/scJx1yMidNmOzubtd3PEHgmjxdahIrGz2oaK+GfOWl6G+qQ0ucYtC0yfnra9sQIP4qby+CfVtHQh09rDAUsy6QtbTaPl6/amT8P2TJiMpNlr3GYZh9EdMaBqG0e9gtvX2yk34zr2vY32FD11RIgQlJ4tzR2D6iExtCm/1d8Lf1qVCsy0YQIuoz0BnKLvrzvVoqBSBx98RIlwp9IIUfPI93hOJB75DoTlCheKqklpsLK9GckIcUqI9qBCRWtXYgooan4jXgFomG3zt2FrvR01jM5ZuakCQbva4V8/clrfmzwhxG7J5XBGIlBP87V0YlBmLn5w+FWcfOgoxHreebxiG0R8xoWkYRr+CGVZXVxAXzXkB/5y3CX7uDFsCqeQiOkXcUbz1aO7u1nrM7vifApDIeS4ReKmxLqTHetHY1oaqlna0twFHjc3FLy44GJOHZmNlSTXmPL8Ei4q3ISk2Bkki/qp8baht8aG+JYCOjiDaAh3oECHbFgjfj/dXNSm/eLuAHJB9EeLXGI8LA5K9SE/wIjc9Adkp8chIikZraxuqG9sxblAajp86BAMzkns+hmEYRr/DhKZhGP0KZlgUmof+9O/4cF0tOrSpOwQ1WURnAFEeIDbKJToyAtGRUSImI5EU
H4XEWDe8crBAxF1CdBRSkuKQkuBBSowbKXFeNPv9qGryI9AZgSlDskRkZukAo/tf+Qh3/3MRSurbw+IxdLOgCNUIJweluBV/eV2ALxjRLW7Z6J6XHI0DR+QiLSEWmUkxyEj0Iic5Fsnx0SIwY5EW70VCjEeb3lv8nUgJ/7blUg3D6O+Y0DQMo1/BDCsogu6Hj87FH15cDV9YaFKSpXojcODIXGQlRSM/OUGnKMoWMen1uJAo4i4xJgqeKDcyk+MQI58JsdGIi3bDFRZ0zA67urpU4EWG3aWoveuZD/C7Z5eivKkjJCgFtyuIeK8b6SJQU0UYpqfGivvRSPRG4cFXV6GLg3jk1BgRuSdMysdN5xyk/S2TYz2I9bhNRBqGsV9gQtMwjH6Fk2WtLq3G8Tc9jc317egSMUf9NzotFvf/zyxtik6M8aqYixdx6ZaDkS5aOPVS0X/yJfT/v8L7fVRcjvteWIw3V6xHXnoSJg/JR0ZiDAakxYhwjEZCYixS4qIR44nSqZamfO9xdEVHaX/P+CgXLj6sCL+59DARvq4vdE/DMIx9BROahmH0S7ok65ry/T9h2WYfAu5QM/mUrES8dMtJSE1K2GV9G5lFciL1T7bVYlNFLZLjY1CQnox4rwcJXreK1wgVsfJFzt3W2Iqhl/4O7V0JCIoAjnOE5iWHwu2ygT2GYexffNq5yTAMo58R2d1Bsu+gVZRzWY4uSMexU4pw0IgCDEhPRGqCV5vhOWcnm94dXdsZwamSQtZMwzCM/R0TmoZh7Pdov09u8mdn0lX7bVJQ6vZ5ze4R6OL8mCY0DcMwTGgahrFvsSOhuJ2QlD9sdtetK6gTv7e1d2BDSRWeeHs5/v3BGlTUNeu5X40g3N6W0LychmEY+znWR9MwjH4JReLUHz6IJZv83X00J2sfzRORkhiPdsna2n1tel5jWyd8HUE0c57KVh+aW3worWlGTYMfJdXNKKlp0XXHaxpaERXlxhXHTcQFh49ESvyXW5WH2emWJj+GXXg/Ol1eHaFufTQNw9ifMaFpGEa/g5lWZ6ATw//nIWyqbkcwMlJbqocmuXHNKePRFogQ0diO6voWtHd2obSuGU0tHWjp4IpBXQiI+PT729HR0QW/HG+jZZPuiiORXV24aPYo/Pj0AzAkM0nv90VxhObQ8/4PgWgRleKeCU3DMPZnrOncMIz+hwi6FxdvRF19Z3jJyKA2dW9p6MQvnlqGXz6zHA+8thp//2ATnlq4GW+vrsLCTXVYXtqI4uoWbKprRXlrJ2raA2gR5zpdkTpCnBZIqtiuYJeKxq9KVEwzIsRPhmEY+zsmNA3D6FdQvlEDPvfBSvgD8oVCMzxBZpv85KTqVQ1+1LW0o9nfqWucd4joY4/JCMnyXHJxpPyOjggiOSoC6dFATmzoMzkyiAGpMZhQmI6MhBh188sTgc622M+OOo+w7NYwjP0Pazo3DKNfwQyL/S5PvPVfeGNxKdqj2EQdOiYZWuhTjruiIpHuBaKjPUiOiUZaYhQS5DMv1Y2UhHikJyUjOyEWaQnRyE2LRX1LB8pqWpCZ5MWIgnSkJcVqv88vQ6jpvA1Dz/sDAtEceY5Pm84vPQzuSC5IaRiGsf9gQtMwjH4FsyxmWnNeXIifPbIQzbQUiqCjJkwXzXnxN0ZjcE4aEj1uZKUkIM4TgcQYD+K8bniivYhxcXlJbpE6/yW/u2RjS3cg0KXfuTb6lxWZ5HOFpvXRNAxjP8TacgzD6FeE5rGMwAVHTERuZgCRAQ7joaaLwAARllcfPR4XHjoSp80owqEjczFlaA6G56fraj5ZiTFIivOqZTNOhKBXxCgnY+ea6FEiLvk7KjyC/WtjuathGIZlhYZh9D8oA5M8UYj3xup3B1opvVFuRItYpIB0i3ikiOyeaD183u4gosPm0TQMw9hvhKY2t+2lvQQcv3Vv4f37CntruBv9GxodewtH/g7t252ScscEXXveD4axv9G7PN3X6I/P1S+FJgO5q6vrC207OndveUmO33r7MRjety8kET5PIBDQT8MwjH2Nnnn3jra9pbzZ1/m0HA19OuXOvvQOnOfqb8/TLwcDbdu2De/Mmwe/z6+GCwa+A5vHnE/+Gzt2DFJSkvHgg3/CypUr8f0f/AAzpk+Hx+PR8/YUDPbyiip88MH7qKmpxqaNm1BeXo7U1FSMGDkCw4aPxNChQ5GWkgRPVFT3c/UnGhoa8Pjjj+Oll17C+Recj9NOOw2RNsWLsYvgEpJTf7D9ykBTdGWgk5CalKAWz90N0zUnbB91+V3wt6Ug6I60wUD7KHzXNbX1koe/h8rKSu220ZsYrxfHH38c4uMTwnuMviDQFcDmTZuwbNkKbN68Wbfm5mYUFg5EXl4exo6bgEGDByIhLh4ul6tflqfNLS144cWX8ZdHH8VZZ56Oc887F65+MotFvxOawWAXnnv+eZx37nnhPUBnZyfa2zt0pGiUiDInwfPzt7/7HeJiY3Hbbbdhk0TE22+7Heedfx6Skr7cih+7Gorjhx56CDfffDMqKioQExOjCYCvgzWWtrY2TJg4Geefdw7OPvtspKWl9avEwedYtmwZbrnlFhWa1/zvNfjxj36MxMTE8BmG8fXYe4WmM+qc0y7ZykD7KnzXb7zxBn7yk59g6dKlarzoLTYPOugg/PHBB5EvYqc/ipv+QnOrD/f89ne493e/RWNDPaKjo/VdsJylPogUQXbIIYfg8u9cjlmHHy7CP77flacbNmzAt7/9bSxZsgTfPOMM/OY39yA+9qvO9bt7cf1cCH/vF3RIpPH5/MjJzsHBBx8iCflgxIqQ5EsoGj4cZ511No455mjdP/uIIzDzsMNQUFCAsopKjBo1GmfIC8rNzflMhkD4MsmOIiAPfZF4+Xlu9IQJ4IknnsCCBQuQlJyCn//8Jlx66aX4xrHHYtTIUSgrK8fKFSswd+4bcLvdOPCgGfIZFb56e77oPb8oX8Y951zS+3yX+LdJapXJSck4+eSTMXLkiJ26uaufwdj3YZR58LVFKOfqQJKeGXdy46NxzuEjEOuNlt/hE3czDe0BzHnmQwSl4khPeKQCPLEwDUdPGrjDfMfonzDPeve991RsJiUn45JLL8OJJ56ImYfPxKzZszFr1iwcJ/k5870v04K2q/LfXQHd/zLufpnzv8xz/jcaGxrw7DNPawvhIYcciuuvuxYXnH++voPcvAKUbisVgbYYc996C4MHD8GQIYPVKLUjvoq/vmw47Qzn3qS3e8w7uI/TsJ0pOmfkiKLPbSHcVX7aFfTLpnPHy/yk5e/+++/HT396g4jMM3HLzbegYECBHmcg85xO9tWQjRZDx2zu9HXgb75ACth2cYv7WRtixsDzQtbS9u5zo6O98rn9y+U9eNw5j9cxEjtu9Eb93d6Gyy+7HE8//TTGjBmLJ5/8h5r4HT9TGB9z3HFYtXQJkiUTe/W11zB50qTt3KNfOzo6dON3+o/3pDDteR79xI3PyXP4TG3iV/YFdYs/o8M1cecZ6B5/x8TEfuZZifO8PI9u8Tfv55L7spmf92cPU85J2Nn
RKcfQbWl2wp3QL+KaWqPpFk/0Stj39r9h9IY5QEtbBw667mGsKPGji+lY4gwtmi/cfBLSk/ecRXNrow8jLv812jqSdVlLs2jumzAf+9Vdd+O+OXNw4IEzcOutt2L48OHho9sLBZ7LzRE3TlnBlizmi4zPLKOYD/IYr3XKEB7vDa9l3tsh+WtXVyg/Zb7Jjdc59+5ZNnBz9jOetjPPFdzhYw48xuu48T48xjLR7eY5oes/zceZ77vUD+1t7eon+oH+7nk/B6fs4DPyXDkBUe4oeKViyPsxHDhLBN1w4DU8ly0YvY85bNy0WTTAT/HPp57ET2+4AVdfdRVSUlL02q5gAC+9/Dpu+MmPsWbNahSNGIlnnv4nhg4Zup3/6C/nuXndjp6b8DwnXPhueD6fhzDsnfDnOdQnDKvPe5d8Lr777jAReH3Pa0LhRutshx6j0YllM93mNaF9bgTkN3UM783rQv7f83lOv7NoEgYqN1JWVobnn3seS0WQzZZa5JFHHamJ1znH5/dj+fLlWLRokb4Q9tdkwLO/5kcffQRXlEciShs+XLgQL7zwAt555x00NTUhJzdbzu/C8hUr8eorr2Du3LnYuHETEpOSkJqa0n1/vtCa2jqsWLkab0ptae4br2PpsmVobGxEXFycWlt3FLm2lW7DE//4h7i5EUceeQSOOeYYPd9xN16+T5syBX957DG9R1JiotbOeJyRjhF4a8lWzJ+/AK+I/96dN08S0Rr4/W2IkXvyvjyXkXjr1q348MMP5ZgfEZERWLp8BV568SV51v+grLwCaelp2tdj3bpP8Prrr+M1EbXLV6xAaloGUiW8HP9ropWIXFVVhVWrV+GD9z/Qe9Mq+/HHH2PT5s2aODIzMzXjKV6/Ae/LOewrQ6smI311dQ0+knO3bNmi76miolzDnGFMN0hGRkZ3YjWMHSJx8ckP1uLvc9fDz/gp1XzG7bQYN46bMRTpCd49En9YUDbUN+Gep9cg6JEKl/iT83OOyU/GMZML+02fKuO/09Lair/+5TEpRz7E8ccfj5kzZyIhgRWcUNnjbIR52+LFiyWvTUdFZSXefPNNybvnY+So0SK03GiRPHLNJ8VqkXvpxRf1XPbZj42NUTcdIcj8l+XTho0bMH/Bh+oO8366xbKEeS37+TOvJfMlb6ZbnYGg5Oep6g7jpL/Nh1defUPc2Yi4GC8SE0NdySiatpVt07Lx9TfewFtSpq1Zu1ZlFrs9ecRd+qFsW+icuvoGzauLi9dpuUHr7rpP1un9KfR6ihwKqVK5btXKVXouy1SWuxVVNSLQh+Hdd9+VcmUN2vw+ZGdna9jRr3X19Rp+PBYIdCI9I707XB2WLl2OZ597DvXin29+8zQcIGUn7x16B5EYMmQQNkh5xHK/pbkJU6dOlX1D5ByGB9Aq73L1mrV4Z947ePmllzQ8161fr8+dwOcWwUe3+AzFsn+R+McphxcsmI+XX3lVy8HGxiYkS5nJ/SxDX3n1VbwtYVheUYEUeS8s13uWpxomJSVYuWqVvksnXiyUcGG/3/z8fC0nqV3Wy7t6//33UV1bL26lSBhHibtleP+D+airrUWsuF1cXIzXXn9D3JmLTz75BF55tynJ27+HPYI8bL9FInxQBFTwG8ceG8zJyQn+bs6coLxg3e8cLy0tDZ5z7rlBEV/Bm2++OSgiKSg1ueC3vvWtYHJycvCHP7w2eOWVVwYnTZoczMrKCkqiCUqiCz7whz8E//Hkk8HDZx8ZzMvLC8pLDMbHJwQvvPiSYEtri7ovAjC4efPm4N133x084ICpwYKCAcFBgwYFJaEHBxYOCt7085uDIqj03J7QXxKZglOnTgtKwg3ec889QRGm3f4m/C7CMDhu4pSgRPLgkUcfE5TajO6XRBF8/Y25wVNOPTWYk5sbzM3NC0qEDCbJ8xQOGhy88aabg5KgQ25IeNz3+weCAwcWyjNfomFw+OGz9Hyv16vP9Ot7fhd8/Ikngscdf3ywsLAwKBE7KBmShNt5warq6m5/SSYUXLZ8WfCaa64JjhozVu87cOBADXs+R3p6evDGG2/U8+vr64NSh1G/f+uSS4IidtWNl15+NTht+ozg+AkTg3f88pfBSy65NDhixMigZEoa9qNHjw7Oe++DoCRAPd8wesPY2BnoCh75s8eCntPmBHH6fbpFnHFfMOX8e4OvLl7fHWd3NwG572sffhJ0nXCP+of+8sjnMTc8Fayobw6fZewLiEgLHn/CCcFoyUfvu+/+YG1dneaRzsY4yI152YEHHST5Y0bwmWeeCZ5x5plS9qQEDznssGBtfYPmlX/961+Dhx42M5hfUBAcPHhwMFvy1NTUNM2zRYSoO4xbzNf/7//+GDziyKPk3AFaNjEPTkpKCrrc7uDkyZODIlKkbOoK+vy+4OzZR2g+f8PPfhZsCJcx3Ogmy7m8/ILgCy+8oPtYdn740UfBiy6+WMqRQcGCAQOkHCsMikAKThJ3//b3v8mztAV9Pl/w4UceCQ4bNix45hlnBufMuTd46qmnib+HaHkioiZ4pjzj2rVrwyEVlOs6gu++954+z0Bxm2UFy6BcKb+OPPJI8Vt98Kijjtbrv/nN06Ws61Q/+f1twVdfeyM4bty4oAjD4B+kXOb+3jwh5RfLkTFjxwdffOllLZsdeD5/P/nU08EBAwaq/+7+9a+DIsr1GMveRx95NHiQvCMR03LOAPUby3yG518ee0z9z3NFYAevv/76oAjh4MUSTtddd11wypQp+jws6/jufiPl+b333Rc8fNYsDcNY0R58P7fdfrvqD8f/LS0twddefz14zjnnql5gOTpYnpFusfzNzMwMLl26VM+vra0N3nTTz9V/x594UvCT4vUar/7972c13E+UfY8++pfgaad9UzTI4KBUTtSNs88+O7hi5crue+4pPmtq60eI/0X1V2Hrli1ae0rngJnIT2uRhANtSrZuRYfUHOSlq6VPEooOVGHtT16OduT+7nevxrXXXiu1wTQ0tzTjzl/dCUlAOPigGfjFL36hfT59vlZ8/OFCrP1kvd67rq4eTz/zLO67/35EuiLxQ7n+jw/+CWeeeRbq6+rwwgvPY/Xq1WGfbA/9Vd9Qr83Xw4qKumtMPWHNJ11qLhJJUFFWpmZx1jgXL16CH/3oR3hdaoUjhg/HbbffhjvvuguHS426XGqjTzz+N7wmNSnWgmhuL5EaU1V1FV565RW8+trrkISN6667HpJR6DM98vDDuPvuX0MSiQ5OOuiQQ7QZ/K2339KaK6Ef1hV/giuuvAp//OMfZUcAl1x6Cebcey++d8012rzPml/BgIF6fk11tVpSWdvle2Hna1p7Nm/epFZojhB87LG/6nNff/11OPfcczXsJXPC++/N03dkGDtEIhJTCq0DtBa6IoI6T1uU7Iz3RCOeSz/2aOranfCuEcEuxEZHdvvJ44pATLwLXXvIT0bfwNlPmM+zi1FtfT0WLFiIt956u3urrKrUfFMEBYrXrYM7yo1//OMfmPfOO5g0aRLOOesseKIiMW/eO9rsznKKg1w5SJSzo0RI/v
+25ME8n61aLG+eeuqf+OUv78DyZUtxqOTTt99xhw54PXzWbERL3Pd4Y6UsovUqiNKSUslrt4HNrMzbaTllGcOya+WqNWhtbZH82Y3snGz1Z0lpCe66807886mnpGwowj33zJEycI6UgQdpa9ncuW+qxZB5c7nk4fUNDViwcAGefe5ZnSnlhp/dgKOOOlLL2Fel/GH+T3i/ZStW4n//9/t4/PG/IyY6Gt/+9mW4U+510003aVnmdkXpWIqOjnaUip9bpVwiTU2N+OD997B+/XqMGTNGW/56w9YzlnEVlRXIy82W8iZ1u7KU37mlJCeoBZPPKsJNu9SxyxxbRH/805+q5feoo4/WZ77r7rsxauRI1Qb33PNbbNy8WZ+jRq6jVZTXv/TSy1i1ajUuvPAiHV9B6yNHu//hgQfwshybefhsLWcnTJyoYfbKy6+olVqc0fdJ6+/3v/8DDT+2ArLp/+FHH4OIdvoaaekZyMjM1mfg9StXrtAWysSEeCQnJkhYdWJbWbk+u1QQ8PAjjyJP/HDDDT/F0fIctCpz4NCmjRvVjT1JvxaaFF7s5MvR5BxFLjUB7W/Sk2oRPC0trSJEU1E4cKCaoaUGCX9bmwqfjMx0SA0E55xzDq7+7lWSwIYyZUiEqtHpeG668SacfvrpOE+Oe70xkFqiZAglYL/Cjz7+CH/+80Py4hPwve9+F5d9+1JMmTwJEydNFHczROgFNDIzgvakM9CpCae5qUmbRfJycz9j2mbCYGSsr6vV5g76S5KLDhK6//77NKMZO3Ys2POBnZ7PPOMMfOeKKzBkyFA109NsLzVs+CXBNsrzst+GV0TdFd/5jorq//mfK3HQwQdrM/uG9cU44fjj8HNJ9Oeddx6+d/XViGU4SWKqEiFP/zPMfvOb32Lh/PkYNGiQClKezxF8HFFJv0otHYWDBtP3ImxrsEUymszMLO187Q2b/znlkV8STbtkKLNnz8J1IjIvuOACXHb55RgqwtclmaSvtfUzYWYYn8I+wcBVx07ArNE5GD8wGSOyk3Dw4DR8+8ihmFCYET5vzzBtVAFOmVGAsbnJmDogBbNGZOG8g4chI7F/jBA1vhgrlq/QQp5NsXeK+DvrzDNw+hmna3lx+unfxALJK2kY4PR1zB8rJV9+++23VXy8+urLuOyyy1BZWY07fvkrzV+/9a2LceONP8OUKVNEiE5G4eCheh27PHGb/8H7+NtfHxPBWacV8zvuuF2E6bk64GjwoELtszdh/HgtBwnFIQVKtDcaGbKP5YjDfBFvhN2UCkSc0H2Ko2effRaTJ0/GvffOwTFHH4Hx4t5EEcUUZ9qXXvJwlp0sW5hP+9vaRZwdgx+IMGY5dOGFF2r5wPPodxVnNTX41a9+hdWrVmoe/+Cf/iTlx89xlghtjqI+XMoQln+TpNxk/t/Y0IgtW7bqPUtLt6kgy87JwUknn6xitDfsSkADBkV/VlaWGp16Q3+wOxv9FBXlkTI3T7XC1pJtuPvXd8u7KceBBx4kfvszTjjhBJx6yikq/inWaLxhl7igVCBZaSgTsUhGjx6Na6+7Fldc8R0tNw+YOlX2hsZ/nHHmmfjB96/BJfJOZ88+UgeLsWsZjVvMv9jNgWJ2w4b1mDZtOv7++OP4jpTNB02fqn6l4YoDluLjYvQ3NcxG0Tnsz5qVnaPdI/yiRWpqqlScBzo7JBxnqp8vvvhi9U9RUZGUs6F+vHu6PO3XQrO5uUVEVaVG6kyJYFnar+PTR2LgbtiwURJmrQo1Z0ojWhlbJcJwjrPLJbGPHjVKI3ogENSIRZE3UmozzDgiwxZSvnhuFGCDCgtEwFZpf8b1xeuQI4mA5zBBPPnkk3hOEiv7TAwbOgQ54b4mPWlvl5qIRFYm7nHjxmnC6N2Pk36nYNy0aaP6jaKSwpX9VOa++ab2l5kltdip06ap+9wyJdMYJBkOOxYzQrPfCftgbpPaJ90/55yztT8oxTbhICB2rh4rmcmZkuhplaQ7jMyEGVZqepoknCAWfvghnn76GcTFxeKii7+FUyQh8lz2Rdm8abOO+uP5KtQlIZWUlmLt2k8Qz9pXcpKey/4rlVLrZAIZPnyEjtB0pv2g//ic8lWuSfhMeBiGA+MLOXxMIR7+32/g/u8chfuumIlH5PtPz5gp6Zr90/ZcxsopR+7/ztF46KojxF9H4IGrjsLJB46WzHb7fMDov1BMbN26RS1bFD9HHHEkTpD8jCLlONlOOfVUtQoyT1u6bLlWstl6c/G3LsHVV1+lgooi9M25c7FwwQLtI0mLGPvo/fvf/8YTf/8bNm1Yh9zcXBVnrKCzvGFL3EwRZpyij/dlWqDwpLGFeX5BXg4S4uO0/Fi9eq2WAWkiSgYOHKB+IRRbxeuL9XtaWgZiYuPUIPMnEYAsG8ZKmcTBNS++9BL++rfHtT9/qpRRQ0X4cKrABhHF7FfIvP+ww2bi+OOOQ0q47KD1lHm3R8Qc78dBoRz78MF78/Q3rZq05oby+lC5xY3XDB8xUsOIhgi2iFHUsc8qR4tPmDBBZ5Nxznfgc1ZVVathg61nuVKesBzrDY1S64rXq/CmcYflMoX3yy+9qIKcLW4U7onxobEN9F+hCGa2grKcpjCk0Ygtg5zzmoKWM8QccMDU7Z6BFuJpUiZPPWCKagU5IPspPyNQWFgo5We8CvYHHviDVFSWY6C4f8cdt6FwQIFez3vQis2BjWytjBQ/slWS/XrZZ5N9dofJeyCsnPC906+HHXaYVD7OFmEa8n+UhCMH+nqjPRKmOx5dvzvp16U5BeSW8AAUij0mhp4wEm7ZslkT6bCi4VqrIOuLizXCpYswm3HgDBWXhJkGBSBf+KzDZ2lkIkyY1RKZOYKaIoj3YvPvf/7zH+3Mu3DhhzqX2ve+9z3cdtvtIgZX45BDD5XM4AKtVfSGo/0YcemHcePGa1NDT+hvbkzgjExM/DNmzNCay9o1q9AkNTOK04MPPkjFohPRWXNhouB3HUkoz8EmjuqqSh3RPmXKZDXR8/k6pAa0fv0G9cPRIj5Z4+V1vO9WyUSYkbLjOgchdUlN7vXX39D5yQYMGKg1JrpBmJEx0+I7yMrKFLGbrpkqBSUH+jCTY8du+qehvk5r9cygmBCHDhnSXctmV4JaqfnyvoWFg9Q9w/g8GF9TE2IwbXAmZo7MR35GkmbQzrE9Ae/LLV7S9MQhWRgvfstJlYqT+ses9PsKakXbskXFEOdn/NmNN+Kuu+7Spudf33Un7rzzLu0SxXySg1GZJ+blF+CMs87ujpvMq59//oVu0cpWoqu/+1389IafaX5L4wIFJZuuObCDg0SSJd+fMWM6hg0d1u3Ops1bUF1Tq+VErg4eiVWjxCcqrPxISU3T65w8m03wLH8IB+FwphD+XrFihYrVJ596Cld853K1Uj7w+/u1if2kk05SiyINHCwnaQShOJ4+7QAVoA6lpSVqOeRAG57LLmvvzHtXm9kZHgfKszjGnJ7QbzSScLBtS0uzTlfIrglPPPEPKT9yVMyy7NoRHFjK7nMJ4h+WNQyHnu6zTGPZ8vHHi3TS8
6zsLBGRtAC78KyEP9/NxIkTMX78OL3OuZblfqeU1ZyDk2Ha5m9DeXmF7melYNLECSrkeD6ttuxmRoE3bvx4EfahLmS0KDY1t6oxjOGUkBCv2oHTFrLs/cY3viEVErbkhcpBNoVTPDI8Boow5SwsHABFi2trS6sOGKam4D1ZwVgv75hzbE854AAMknLTecdbt5ZI+JUhIyNTDWw9w2NP0K+FJms9tPixFkRrGmsqPeGoQL40jfiSMJkwtDYXFpq0pjFiOlP41EhibW5q1pd16KGH6D5CYbROxBQjbFZunoo89iGpkhoGE01eXq6+fEbW4084XhPonb/6FY4//tgdikgKthJJGHSXc6z1TBg8TmG3bt06/P73v9cIOEYynPETxmuGxMhFcctr8ns0IzCx0KS/RSIYw4Mj3HhvJtaKikpdZShX/M778B7c19jYIOfE6TE2bTt+WLBwoda2CyRj5EhF9oH5UBIpa07Tpk2V2mso4tI/FIiLFi3WBMCIzkyLmTATXUD8RLEesjRHSGZYo7UyvgP2t2GCV//IP/ZdYX8cr9erCcyEpvF5OHGVf3tuxDm2p/kq/uoMdMHf3qmfJkv3Tph/Uiyw1Spa8qvBIqry8/OQJUKI5Qm3jPQ0rVwzb+MIZebPw4pGYEBBfnc8oJWNTe/M43NyQmXImNGjtQ/9FVdeibtFuLJ5nYKNM31Q2LKcy5d8mXm8uiF5KWdcoYWRxoSMzCzNpxukzKuuqdJuWgMGDND+m1q2yD1Xr1mt+T/9MWXyZN1P9/mbZeQI8Qf7Jx544IE468wzccutt2oXrXFSDvEczrJCwcVVdyiSHOukEy7M/4uKhqnAaW7xoVHEK/1J6x37FjqVwZ7wehqKRkl5yPKN5faSpcu0expH5lPMc85IntcT3pNhWFpaquHEJuWeZQePU9DPe2ceVq1agS5JVxxvQUslw72ibJuGyVDRBxSUzjV8X5xlxrEijhhepO6wXOP5DGeKSfqHFse6+kY0inZgqyK76DnlfouUaWXbSkRvtGorKQ03a6XS0CTnsiseRak32qvn8r5vzZ2rYcXr2a+W7rMsZrlOAU7LKwU7+2qyvygrCIw7bJV1RCbPp3GJ5XuOlLEMkz1NvxWafCmsWTFi05zPRN5b1FHVU9xQvGRkpKsoZA2UTcmMzJy/MlZqKiwK6N76DRvV6sYpHNg3xUk87JdZJrWDGIlww0WUEUa2Dqk1UtCdfc65+POf/4yHH34Y9917r/aDpHjbUcIgTBQUurzPMEmQ9J8DLYTLly3H7bffrnN+MdO69JJLQwOdxC0mDn7y/m0S8Qn9WF1TjcWLPtYaJS2u48aNRVx8HDaKEC8vL9NmFqfvDlmzerWKVlp12f/FSZysea1bV6yZoGZQ4kcKTVooGZEdMa/hL9ezOYf+5EAg1mIJ97PpnM+Xnp6hfVh5Pq2ZFPMJCYnIzMrsTmAUpEzUrRImTPCpYSFrGPsbTb52LN9UiY+Ly1BW14Q2yWNMcO59sFLPaXdi4+K1oGfZs6M8i1202Fed+XXR0MFIiOtRRkme2C55H/PYw2cfgYekDHn00Ufx4B//D9dd+8NQEzMH9oi7bAWjRU0u0ftw0CvzVA4+efutt7FtW6la/JIS4yWfZjelBjSL2ORgUwpYR5iydY/9MNlvlO44LW4sj9jtjHn+bb+4Tf3xl7/8BXfddaf2V2RfTp5PEcTyk/l7Tm6uNlU7z62tW8XFoRZEEaAUmvRjV2eHPmtDQyNaWn26b0fQj+yfyjme2VXryaf+qW5w2kKK2h1BUVVVXS0ir0kNQPRnTxFOS+bbb/8Hf3roIR0UwyWpL7rwAhV8JDpskaSIZKWA0H8U3pzbmvtZrrH1kNqhuHi9npchYU0DCmGfVU4JReE3QEQmw8SBcYQGsWBXUMPLK/GkorJK9QcHJtFi6YQfjUKP/fUx/U7LrmPB5ZzbG6R85POwWwCnHKSlmKKXZTunbKROIPQ7/cF4wS4IvOeOuhLsbvqt0OSLYmSgyZoBSbHEmlVPtmzepC85JzdPhRYjFcUpm6MppIZLLYWJnDDybNm0QSNWtmQc6VIj1f3y4poam1WwMXJOmxrqk5GclKJikk0NrKFw9DszEzZL8Df7hjIR9IbusU8IrawUXKz9rd+wXpst/vPOOzpv5i233KqjEykML7302zjuuGO1tsX+LwUFA3QAD5+bIo+JniMLOQKdSz3SosjO1TOmz1B/l5eVawRlougpEjnHJftMcqQ3m1YoYLm/sqpaMy3O90drKzM6CuY4Edms5XHuNPZxZURmn83nnn9BmxWYYJwwY38fjiqPkRobrZZJSYkhsSrvi8/NfpzZWZ/2XW3vCGjGxbBnX5ye84kaxv5GcXkDHnxjOeY8+zGenb8Oq7dUo8XfrunTROeeh4NCNkr+XllBa1esdjFi2dIbvq/Nm7eo9Yp594SJE7qtToR96EaPHqOVew4KYZ7J4xRqm7ds1dHNKgDlXM6wwL7utLBx1hE2mXIuYwrC9957V8tDGhhYFjr3UDEqn5+sK5Yyolytnn9/4gk888wzKkA572LRcMnjpdwcOnQIOGk6uzstWrxY3eME7MzL2WzvjCBn0/M2cYcDoDiIiKvsObA5neUSn4tii2HDcRAZ6aF5kZctW6rzTTKv5zNyQCzLA4YT83ueM336dBVRnPf63Xn/0e4DJ51wvJZ/O4LGkm3hsoNlEMcKcOYSjrbmXJ0ceHTb7bfj/fff08n02drIcRGOVuDgKZZ978+fjw8++CAU9lK2Pf54qG8qW+RO++bpKti4yl1JyVY1DNF6rH0whfY2v+iMKg0Tjvp2RB+hGKTmoDAdIGU3/RgbE1r0hcKco8VD4xnWql8ZpwhbGJ3ymnFgq2gZlvsjR43Sfp7UHbRw0+8UpBTYDowjLGtp4aUu4nV7ujztt0KTEYvWSn4ykHe0liz7eVAA8kUki9hhTY+1HyYeWtEoFFn74XUUhYz4jPRcgaenGb1BaodbJMNgc3VosAu0qeR4EYDs5PvKSy/il7/8pa5QNOfeOdpPk7UhJjpe3xs2i1NgclDMgw/+SddfZ/+c66+7DjfdeKMkkFcxSiIU+3x+73vf7e5jwcxs0uRJOiqQNaVHpcb561//Wrbf4He/m6O1LU5rwMX2KawpxNmJmLUjRv4YieCEHY5ptWyShJGdHa4Fh8OOEZoj1RkurJXyvswkjzriCBWanPid0z3dLfd9TETxNMkYWItjwkhNS9Pnpbin1ZbNShSybJ5n/1FaAZzabs/+NuzisHWL1MDkXY4ZM1rD2TD2R5LjvJgxMhcxUW489OoK/PiRd3HbE/Px2JtL8eG6UtQ2taJDRIIJzj0HrYtl5WUqkjIkz+N0Ojtb8YkTonN0NvNTpynU2SjCzjz9NMl/k/Dhgvm49dZf4L777sO9996r09Xdf//vVfQQDkxhK1ttTTX+/czT2h/0l3f8SscJsAsWyysaVBLiQ5Y6irx4ESTM19+a+zruu/8+3H333Xj0kUe683UOVuJUORzkyuZYDjJimfHb3/4Wv/nN
b7Q843eOGH9RyjgaC9hdgFY0FZCZWXK/0AT1hINCWbamZ2SKQMtW4chmZ/ZDZD9LCq7f/fYe3CNu8znvuOMOFbEOFH/0B5uH2fpFo8kZZ5yJgYWhJmrnPj3hSHPtjtUZ0H6uLJNYPjnTJt11191YJYKdI8J//OMf49hjj1VDhsPZ55wj5VchNonQv+GnP1N/MZxYrlLEnXjSSTj5lFNECISswew2wPfF5nEHll/0A1sYuTR2Wlqo5ZBlIftJalcCWnilokAmTpykQpzl6T+f+qeGMcPivffex3HHHafnUMTSsEU3mlukfBSBmyDhMl5EMqFfODE+y12GrdOay/NZoaD4ZjcItig6xrQ9Sb8VmmotlEAdIIKK0wzk5+WHj4RggNPkzFUExop4YZ8ZDo6h+ZtTHbG/xECJYE7Nhp2nafnj1AxHzA6twEPoDps/MjMztF+hYxZnvwdGwvMvuED7aixevEhrl0899RSK1zHxhJq3eycO2iTYd4dCkDVQiluuAkCLJkeYTZ8xA9+69DJNKNeJ8GStxIE1VQ6g4fJas2fNBkeXc4Q7RwcyI7vwwgvwXRGmUyZP0fObmlvgifZqrZB+9HhCEY6ij95ijXTSxInbZRbMPPmMw0eM0L5HvCc3TtfBkW15Es4ffvQxlkitl5bWb33rW1obHizhxtofw5D9RzgLABPFoMJCdZuJKoL7JRxHyfuiKHXgShDMVDgqjwKbNcbe4WYY+wUS7QemJ+KbBw7DmMHp2FLTgic+2IAb/vY+bvzbu7jvhUV4/sN1WLalCpWNvlBfTlOduxWuvsZyY8CAAkyfPg3ZktftLL/iFHbsw8/8lPltz/PoBld7u/LKK1VEbty4QcuQJ554QqeR46ns/0eGiwDjdECcgifQFVRRwibS2Uccoc3fFBrss8/uUoQDRI79xjE6TRKtnGz9WrFqFc4/7zydCo9jFugWobGE4ukH3/++CjF2EWNL1cOPPCyfz2v3NC69zDNphSNjx43X8sEd9aklt76+Tu6foE3+OTmhUd3kCPHjty+7TMsPGns4Cv2vf/ubWmY57Z4Dy1rOeMKBSzQ2HHzwwVrGfB4sa2iIYRnEMFhXXKyrCNHIxP1HHDEbV/7PlfjFrbfqs1HEEr4Hbhxke/X3rtGR4rTmsvvb3Dff0jLq8iuuwNVXX43c7Czxm6QzKc9Z7o8SHTBy5Kjud0nrLwU9m9hHjhguYRmyRBIOuqXFm8YhZ7Dy4EEDcd5552Pq1Glq2XxD3g37mXIA1mGHHarvk0YwR5hSRFMssoJACzhhGc5nHzFiJAbLuRT0hH6iBZRWzNHiz4KCfC1b9zT9cq1zQtPxipUrtEbEQSh8kY6qJ4wYXH6L1kOuaUqxxZoMO89+LCIpKLWzY44+RiMjYa2QE+eyieDQQw6TRBSqfVIgcXmtBSIGKQ5nHjazO4Ix6DiIhRO9skbDQTzs4MuR1kXDh6v47T21AMUnhSX9wUjYE60NSaQsKhquTdW8D+/R836Elj82q6wIN63wnrmSsNlZmyZ+B5rWFy9ZIolAalETJuhIOfbD4dxnXPpra8kWycCGq7hjwqb7a9au0WYHZoInnXii1ErDFl/x64L5C7BqzVo9r3DAAM1k+cycUJg1amYkFImsxbODOsN24oSJar3kUqBsdqcInzxlqjbLcwJhQnHLyeHZpeHoo45Wweo8s2HsjzT62nD/y4sw58VlKG+UfCIgaV/yjrjoSAzOScDIgekYk5uOSYOTUZSfiYEZifCE05PRt9BYsWz5ch2IygGTbIrtaRBwYD65eMli7SpF697JJ52snw48zo1NssyP2YTKmUXY5YjdqjiP4hAREWxuJcwfmZ9zCUROHzRCxCv7IHIC9+ING9XyeNqpp6g1jO4yX6UgZVcoTuJOEcRycuWqlTrlUpG4TaHllDMs6yjUlso9mlpaVRRy9gQaF4bLtZyZhIM/P170sZSXHZgwYXy3IYGslbJjqYQLRevkyZOkHAyNK6Db/vYOXeyEfQ3ZYsZygs94yCEHaxcxntPQ2CTi9hERhbeoAelXd96FI0Uokp2VB3x+zhnNyeZp5HDg4Cdt6ZQyj+W2Mw6itzu8r08qDu+9Oy9spW4Wf8WLGByM8VJmcqoownKb9+CSz/Eiprm2Pad0Imy+53umBpg0cVK3xZhCkAYZDr4aMXxEtxGF0ErJpUM5t2ZsrFc1CsUuZ3x5V/zC0eKcipCVAI6P4HKg1C9HHnmUWja5GMB7773HLEFF8fCi0JgQPg+Xkl6ydKmKapa/NGjtafqt0KS3KXLki7zUkNWtN4wcFJw87kQwPiz38TqKo+79jntCz0E8tEDSHcI9Ok+nHHOiK6/jxkjOT5db3OQ/nhN2ozc9/d4TrgThXLszeK0D+5kyc2BtisKw9z15D9bCZGfoWNjtkJ9DzW/Ovt77CcOBOG4yHNhsxF9OlwN9FhHtITdC56s7YVecsAy5Hdqv54bvS3hPJkr5st07MYz9EaYczl07b/UW/PyJDzBvTRW6mLbCCZMtA25XEEnRHgzK8KIoLwPTirJx8Jh8jMpPRZTLyYNC5xu7lu3ysnA+trM8i3ljz/y093l0x4FlSEDOZzN8Tzd7XsM8mCPJeV9aDGkB5cDRSMn/f/Obe3CUiBOnAu/4k61/LJfYosccv2fe37vc5Pm8B/uN8r4csOKUAw47K3d7ljcsk5zywMFxm2VWyD+fdk9rbfXhsb8/jrtELLe2NGm3sauuvlr7pvYOs544z6hleg9C4Re6P7/znB254+znJ98VZ3zwetxaDjk4123/fD00Aq+VYwxPZ5/jpm6MJ/K753HnmL+9XSoN7k/DQo+FnsW5h95XzuWVfCbHbfpXfnzGL7qpSzuOc3uCfis0DcMwvi6hTLkn/B0uLHod+SJIlUs/g+zaE4auUDjqZ/g4YeWqU/bvCBbUZTVNuOPpBXhqwSb4O8MHehHRGSr00+OiMH1ELqYXpWNqURaKcjORkRgNZ54/Y9+Do53ZB3HO736r3cdo0ZwaHqzaX2D6o3X42eeew18f+6v27//mN7+Jn/z4R9r33+LuvoEJTcMwdjnMVNo6Aqhu8FE1hXbuBE794eRCtHiwph4IizAOfGH9nv3SugKitmR/u1Tke8vDLwPvx0mQ1XJDoRfoQlv3viDaOmkJceTml8MnXvS4I9DmCwlN+rMj0IH2sO7kM7V1hCwWtCvxuXYEhWabqMv/rC7BR5tr4A/swDe8lAEnhyLkGXhGfHQkxuQloWhgOibKNmpgCkbkZyMnJU5fgxXc+w6cIeRXd96py1KeeMKJuO66a7WfYH8Tmvc/8IAuZ8ymaK44d9FFF2m3AD6Hxdd9AxOahmHscjpFxK0pqcWcZxeF9FBo946RLMjJhdg0RQHIBkeKsNZOEWpyjAJQm4rkRIrD7lyLgyXCCy7s9LuDuIuISLlHQEUe78XTVAyKm+yKwvt0ONZDFnJyowi5V5Aqrdf3zxznteJelAjNDvE34d9AV4eIWDku0IrZTivkF4CCuLW9Ez4+745kLx3n7u5PekQ+xX2
3B8hJikNBZizG5KRh2ohUTBs5CEW5qYiywnufgKvtzJvHichX6cTr7OPX36aGo/x4+ZVX8OQ//qGj3mfPmqX9KomJzH0HE5qGYexyuJjBf1aW4JTbn6f+CgmxrwAFIK/fGRRjnC/w874rdINZXVgc9rSyqvNOoaY/+Ed+c1d39qg/Ql+d745buqvX98+h28kw7HPZjX4PX08n6dbnO/cpzqU9vMGfMS6KziiMzM/E1BHpOGLcIIzOS4E3NsZEZz+GfTo5MJR9HjnIxOk335+g/OCgJQ4y4qh5Dnza0XgLo39jQtMwjF1OuwjNt1dsxXG/eAE05qlFbn/QND104hdmO6Epm3O9ft8F4Ub35X9ksAup3ghMGJyJI8dn4ZSDx2NYdmjKFcMwjL7Cqg6GYfQt+5PV7Ks8qorJ8EZL63bfw+d8TSI4F1+UC4Oz4zF+cDqG5mUgIXbPT+RsGMa+j1k0DcPY5Wxn0RTBZJnM7kc1alsnolyROHB4Fo6fNgQzRmRgQEYyUuNj4I2OMkuDYRh9jglNwzB2OZ8Rmrspm+k2APbq2LmLDIMK+5vqACA6SssjP/Tv14SO0NsSVi65QaQriM5ABAJfRg7KtZHy7PFREUgTMTlpcBy+MWUUJg3JRmFWIpLiovU2NtDCMIzdhQlNwzB2ORwM9BaF5s3PoYtNwGEo0iIpokRRMedxiZhycIly41TjX4ZoT2giZEU+kmJjw79C7nKaoNjoTydf/iq45XKOT3C7osQ9FzxyT2+U3C78XF5PlKM34ZXz1D9fZtk3joanf10uuMUhLkvHSZw5ev3jdRX4aEMtfBx6/zlQ+HK4fITHjZkjMnDImAIcMCgTw3ITkZ+eLGGw55ehMwxj/8SEpmEYuxzOg7mhqgGPzV2u6/SKNgNXJvG4REBFiaASMeUWoRb96VLJKtBUMH1BYt1BdES4te+hgyfGC2/Xp7ObUwC6egjdr4JeTnc80SIuRWiKJ12RQXWbktkt4jIYEbI6OotAcpWvL4w+s/wRB3kvWhu5AkizvwNzXvwYf5tXjKa2HU+JpPNnysb5M4+YlI+TZozE2MJ05CbHIjnGAw9VMs+jZw3DMPYAJjT3cjinIFeAYEHBqR8+r8DgNBdcIpIWEa6p23vpMMPYXTBb4dREja1+jYeO1qOFMbQsWkj8fB0N6JJrOU15txv85L3C1sxdjaY92bbvccqb8nfIE45Xvi4MvxVbqnDD3z/Ai4u3dq84RBwxLpk30uNdOHbyAJx12BgMy0pCdkq8Cm8X/WoYuxnGW0ZPzlXrpBJdqpLpZgdxMiCVQq6bziOcounzyrfPQ/MbKf9YBnIJTmdJZmPvwITmXgxF5qZNm3Dvvfdi+IgRuPyyyz438WzZsgVPP/20JrbTTjsNhYWFltiMPU7PDMZi4xejXUT6P99bi9uf/BCryhq0P2hEeG7QtHg3xhak4qAxBfjGhIEYmJ6A9KQ4bXbncQtjY0/ABQ84H+ayZcuwZvUatLS2iuhzITc3FwcccAAGDx6sBhCiglS2ee/Ow18efQzHHXesrgr0VcsrlpXLly/Dv57+N8aNG4djjj4a8fFx4aPGnsaE5l4KX0t7Rxuu+M6VWLDwQzz854c0sYaPaiKurq5BRmYW4mK8urekpAS/vee3eOPNN3HdtdfhrLPOsMlvDaMfsrmyEbc8/gH+/l4x2to7VWQOyfVi9uTBmD1ukArNjMQ4JMVGq0XXKpTGnoTl1Tvz5uFPf/oTFsyfj/r6BhV/jJbsOnPdddfp0pKJiYnd568rLsbF3/qWlHMB/PtfT6kgJV8lLtM9Ctybbvq5tujdeOONGD9+fPiosacxFbIXwkTD7aknn8Jzzz2HU04+CWPHjdFjrSIw//HEP3D0Md/ALbfcipamRt1PMjMzMGToENTUVGPDhmI5tyV8xDCM/oA2O0ran7t0Pd5cuhGd/g6MKUjBzedOxxPXnYJbzzoEx00chKHZqUiJ82r/UxOZxp6E8fXDDz/Er+++G888/bQaQU497VT85Cc/wXnnnYfU1DSMGDlKm8YdAoFO/OnBP2HJokVSjt2C7OxsjcdfJy7n5OQgvyAfK1evxpYtW8N7jb0BE5p7CEdMstbnfO/JtvJKXHvtdcjNy8M555yDaE8okTY0NODfzz6LFVJ7Yz+Y2NjY7mvd7ihJsDlITEjEpk2bUVNbq/sNw+gnSFp+e8UmPDP/E+RnxuNXlx6Cf/zoBFxz4iSML8xEWrwXMR53WGCGrzGMPUhLSwteefU1LFi4EGnpGbjjjl/itl/8Apdffhl+/vOf49XXXsGhhxys/SYJy6uPPvwYjz7yCI4//gTMOuyQ7VredlQe9sQ53vMcCtQ4KfdS0jK0pW/DhvVqlDH2Dkxo7maYODqlNrdkyRI8/Ohfcf/9v8eDDz6IxfK7rb29OwH95u67UCtC8QRJiMOGDZN9QEVVNd555x18/PHH6JId/vYOLGV/mLVrRbAG1P3U1BTZUtWq2dTYpPsMw+gfVDf50dEFXHPCJDz5w+NxxTHjUZSTjDivx6yXxl4Jy6mSrVvQKoKzaNhQjBwxHCkpKWoEYVN5dla2Np9TTLJso3Hl9jtu18E7FKPst+nE63YpA5cvX475H8xHWVmZ7utJu5R5m7eWqKjdsHEDAuFyj8R6o5GSlIAolwuVFZVobm7W+xl7HhOauxmOsrvnnt/ijDPOwP9+7yrccMNP8YMf/AB/efQvUgNr1nM4qGfum3M18V1x5ZVaE2xr8+OJv/8Nl1xyCTZu3Aif1NZeeO5ZnHnGmbh3zr1yLkfyRmhijpQCibVMJjTDMPoPqfEezBxdgMPGDkF2WhK8HEFu1ktjL8YRj2TDhg0oKy9HIBASgCyTnM3h40WLMH/+fEyfPg3jJ0zoPsZBrO9/8AFmz56Na6+7FqWlpep2T7HY3t6Gt6RsvPiii/Hzm36O4nXF4SOhe4VGtwM1tTUifM2iubdgQnM3wgTzhwf+D/ffdx+qa2pw3vnn64jyK678Hxx6+CzEeEPN4LR2rl2zBiefcgpysrO6E5vP50NyaqqKyZiYGAwrGo6Jkybj0EMP6XbfwbnGMIz+g4vzdHJ6Fleo4tizgDaMvZG8/DyMHjMaiYlJOkvKnXfeiTfefAstrS3oCm4//yvLJI4xYLP2Mcccg6Qeg4Pq6mtxzTXXqOA89NBDMWFCaDAPjzldzGglpYVU9uKjRUvw8ZLlek5vOjo6trN2GnsWE5q7kdq6Brwx901s27YNR0it7X8lUZ177rn45R2345QTj4fH41GL57x339NEdaEIUa2lhYXlVVddhR9df73uGzVqtLg1F/9+5l9qHXXgdfJfr3G5vt6KKIZhGIaxM1jesHJ0xhlnquEkIzNTR52z7PrR9T/C2jVrVTg6cKDQG3PfgDcmFtNnzNByiuUZheETj/8Dn6xdi4IBA3HBhRci0uVGdXU1nnjiCTz3/AtobvWrG7GxMdokX1tdiYptW7cTsyz7uLG8dKZSMvY8JjR3IzROeDyhvips/uaAnbbwZOyO5cLna8OixUv0nImTJ3
fv5/x4Pn8rVq9apQIyryAfifFxep6TWEljY6NuWVLrY19NwzAMw+gLnHInNycH119/PX55xy8xadJkNDU14Q9/+AN++MNrsVLKLDalU5QuX7oMDfUNiBUhOHz4MC27uL++oQEvvvyq/p56wBQMGzoUARGoH370MS4U0XnfvXPQ2hyaYYUiMikpScVpo9zHFxagLEtpRe3s7EBeXp6e4/jP2LOY0NyNpCQn4eijjtSJ1Nnh+fwLzsfjUlurq6vTpgEmCs6ZV1ZWimFFRUhPSwtfKQlato6OTpSWbpMaXRxGjhi1ncAkTLDl5eXatyUhIR7R0Z9OJ2EYhmEYuxqWQdxYvl0gZdqfH3oIRx55pLbQvfLKy3j7rbfQ0tyi5VNtXS38fh9GjBiOKPen689WVlRgxfKlOmjo6KOPVvcoTpubGhETG4shQ4chNj5Bz21oaERZeSWiojyIi4tDlCdK3eaYhKqqKl0dLz4hER5xy9g7MKG5G2Hiufjii3DDDTdg+vTpOkrvO5dfjt/NmaMT3KpQFJFZJkJxyJBh24nIriCtlc0o3rBRReT0aQdsd5zQkslmeU5Ym5ObqyP/DMMwDKOvYXnEbeSokbjnnt9g0qRJWqYtXrwYTc1N+n316jVqeeRk6o41k1tNXQMa6utVaE6bNk3dY3nG+TljvF5kZ2UiSso19rtsbmlGi2xx8fFITU2HJ9xEXifX11TXIDM7B4MGDdLrjL0DE5q7ESYoJkTOi/n7Bx6QWt9RmrAe/NOfsHbdOklEXfD7fHpeV6AjfFUIWjw5gKhky2ZtOhg0eHD4yKdwZSCujpCRkYGiYVIDjI0JHzEMwzCMXYvIxG6x6JRv3Nh0TWtjbygeWZaxdY7wXNLa1KhN4XSDfTr5yeb3lStX6UTvObk5iIpyo6amDsuWr0RdbS0GDijA8KKhej3d3Lhhg3ZJGzdmtB6jG8begQnN3QQTAmtynCeMCYB9UA46+GDExydAkiZ8rS0IyjmS8sJXbA/7q5SVlqBZaoZsEuDEuD0TEufmLF5fLLXHJTrvJmuMTiI2DMMwjF2Nr9WHsrJtqKqu0llR/H6/CsR3/vMOKsortPl81OjROlpc2UmRlJCYoF29KDZXr1qjbq1es0bnjWYxF+gMwCdur1m9Cv95a66c145xUsaNGTtOy0Hed/mKVaisrMSUKVNQUFBg5d9ehAnN3URjUzPeeutt2d7CkqVL8dHHizB/wUK0S4LhJLf5BQN0kI/HGwOX2/2Z2hgTTaQc52dNdTX+/czTWC81uM5wJ+uqqmosXrJMaoMBTJaENkzctIRmGIZh9AUsd7h4yPXX/xi/uOUXePKpf+K5F17Aw488gh//5MfYsHGjlkUHHXwI4uPjtTyKi+NnpBpeHLg/KytLraAtLa2474EH8OLLr+D//vB/SBQB6na7MG/eu3jh+Rfw0EN/xsKFC3HggQfilJNPRkJ8yGrKLmML5n+AtLRUTJs6tXtNdWPvwITmbqKhvg5//etj+N411+A7V1wp2xVYOP99jBkzBlddfTUK8nK1z0pOZjoyMjJ1qUln0lvCY1lZmdr3hFNE/OLWWzFnzhyp5bVpU8OqVazpvYVJkybghOOP18RsGIZhGH3FtrIybUn757/+iR/+4Pv4zmWX6fKTNTU1OHTmTFz/ox9h/Lix3av/jBo5AtHRHtRLeegYU7g/vyC01HJmZgYWfPAervqfK1FWXqZl46mnnoZ33vkPrpBy88233sSMGTNw2eWX6/zRvJathK+//oaurjdT7jlq1Cjdb+w9RMjLto4MuwE2eb/33vuaGCqrauCVxJafn6c1s5EjRmgTA6GIPPuc8/D+e+9i7do1SEtN1UTD18RO0ExQtIR2ibjkygonnHCCjlq/97778PS//oUf/vCHuOiiC+FyhdaVNQzDMIxdDcskNlUvWrRIyqq18r1Km7RpvSwqKsLkKQdg4MABOljHKcOWLl2G0047DTExXrz//vvdlkcuqVxTW4eXXnheZ2ThQJ9p0w/CIQfPQHlFJV58/jls3rwFhYUDcfDBB6uBhuMbaBldsmQpbr7lZrl/NW666UYcMXtW97rqxt6BCc3dyM6C2ql98TgnbL/jjjulVnir9k9hfxNaMx16u8Ha3Ny5c3HbbbdjpNQWb775ZuTm5lqNzjAMw+hzPk9C9C6Hmlpacdyxx+rE7PPenYehQ4ZsV/59UZxraGS57/cP4Il/PInzzz0Hl1zyLWSkp+sxY+/B2ld3I0wcO9oc+N0V6cbhMw/DuHHjdFWFnsdJz+u41dbWYtHixcjPz8d5551vItMwDMPYbfQuk3puPaGQjI+Nwdlnn43hw4tQsmVr+EiIHV2/s43QveL161FashUnHHccTj31VG0BNPY+zKK5l8HXwdHp8+fPR3pauq4h6ySs3vDc+vp6rF69GolJSTqlkdMEbxiGYRh7C47UqKtvwJIlizF48GAMHDBgp+Xbf4MDYbk4Caf14yjzgvx83f9V3TP6DhOaexl8HUwoPV/L5yWc3q/PEplhGIaxN9K7XHPKu6/KFy0njT2LCU3DMAzDMAyjT7A+moZhGIZhGEafYELTMAzDMAzD6BNMaBqGYRiGYRh9gglNwzAMwzAMo08woWkYhmEYhmH0CSY0DcMwDMMwjD7BhKZhGIZhGIbRJ5jQNAzDMAzDMPoEE5qGYRiGYRhGn2BC0zAMwzAMw+gTTGgahmEYhmEYfYIJTcMwDMMwDKNPMKFpGIZhGIZh9AkmNA3DMAzDMIw+wYSmYRiGYRiG0SeY0DQMwzAMwzD6BBOahmEYhmEYRp9gQtMwDMMwDMPoE0xoGoZhGIZhGH2CCU3DMAzDMAyjTzChaRiGYRiGYfQJJjQNwzAMwzCMPsGEpmEYhmEYhtEnmNA0DMMwDMMw+gQTmoZhGIZhGEafYELTMAzDMAzD6BNMaBqGYRiGYRh9APD/VbbnCdq08ScAAAAASUVORK5CYII=) Fig : [Fourier Transform](https://towardsdatascience.com/understanding-audio-data-fourier-transform-fft-spectrogram-and-speech-recognition-a4072d228520) DFT vs FFT---Discrete Fourier Transform (DFT) is a transform like Fourier transform used with digitized signals. As the name suggests, it is the discrete version of the FT that views both the time domain and frequency domain as periodic. Fast Fourier Transform, or FFT, is a computational algorithm that reduces the computing time and complexity of large transforms. 
FFT is just an algorithm used for fast computation of the DFT.![IMAGE](Data/fft1.png) Fig : [DFT](https://en.wikipedia.org/wiki/Discrete_Fourier_transform) A DFT can be performed as O($N^2$) in time complexity, whereas FFT reduces the time complexity in the order of O (NlogN). DFT can be used in many digital processing systems across a variety of applications such as calculating a signal’s frequency spectrum, solving partial differential applications, detection of targets from radar echoes, correlation analysis, computing polynomial multiplication, spectral analysis, and more. FFT has been widely used for acoustic measurements in churches and concert halls. Other applications of FFT include spectral analysis in analog video measurements, large integer and polynomial multiplication, filtering algorithms, computing isotopic distributions, calculating Fourier series coefficients, calculating convolutions, generating low frequency noise, designing kinoforms, performing dense structured matrices, image processing, and more. ![IMAGE](Data/fft2.png) The inverse Fourier Transform will allow us the reverese the process by converting signal from frequency domain to the time domain. Taking audio data with Phyphox app. Using the phyphox app, I have recorded audio data from the sound of waterfall (from tap).From the audio data, I am going to denosie the audio signal using FFT. Phyphox uses the cellphones microphone to record the audio amplitudes. For this experiment I have recorded the sound of waterfall for 1 minute. The screen shot of the data visualization (generated by the app) is shown below : ![image](Data/amp.png) Importing necessary librariesFor the data visualization , I am going to use Pandas, more about pandas can be found [here](https://pandas.pydata.org/docs/index.html). I am going to use numpy and matplotlib to visualize the data. I am also going to use scipy to take the Fourer transform of the audio data . ###Code import numpy as np import pandas as pd # Import ploting tool import matplotlib.pyplot as plt # Linting tool %load_ext pycodestyle_magic %pycodestyle_on ###Output The pycodestyle_magic extension is already loaded. To reload it, use: %reload_ext pycodestyle_magic ###Markdown Loading audio data in jupyter notebook. From the phyphox app, we can export the audio data as a ".csv" file which we will use in the following analysis. ###Code # reading csv file df = pd.read_csv('Data/Amplitudes.csv', sep=',') # Reading csv file as dataframe and assigning them as data series. amplitude = df["Sound pressure level (dB)"] time = df["Time (s)"] ###Output _____no_output_____ ###Markdown Visualization of audio data in Time-domain If we plot the Amplitude vs time the visual information we get is here : ###Code plt.figure(figsize=[12, 6]) plt.rcParams["figure.dpi"] = 100 plt.plot(time, amplitude) plt.xlabel("Time (seconds)") plt.ylabel("Sound Pressure Level (dB)") plt.show() ###Output _____no_output_____ ###Markdown This is the same plot as it was shown by the phyphox app. 
Converting to frequency domain by Fast Fourier Transform ###Code # Necessary imports for FFT from scipy.fft import fft, fftfreq, rfft, rfftfreq # from scipy import signal # Number of samples in normalized_tone sampling_rate = round(len(amplitude)/time[len(time)-1]) N = sampling_rate * time[len(time)-1] amplitude_fft = fft(amplitude.to_numpy()) psd = amplitude_fft * np.conj(amplitude_fft) / len(time) # Getting frequency freqs = fftfreq(psd.shape[0], 1 / sampling_rate) plt.figure(figsize=[12, 6]) # Converting y axis to log scale plt.yscale('log') plt.ylabel("Power spectrul density") plt.xlabel("Frequency (Hz)") plt.plot(freqs, np.abs(psd), color='red') plt.show() ###Output _____no_output_____ ###Markdown Denoising the signalLests say I am going to neglect all signal with below PSD of 0.2 , so I will set a threshold at 0.2 so that it will filter out all the signals below this value. ###Code # Our target frequency is filtering_threshold filtering_threshold = 0.2 indices = psd > filtering_threshold # cleaning out the noise below the threshold power spectrul of frequencies psd_clean = psd * indices # ploting the clean signal PSD on top of noisy signal PSD plt.figure(figsize=[12, 6]) plt.yscale('log') plt.ylabel("Power spectrul density") plt.xlabel("Frequency (Hz)") plt.axhline(y=filtering_threshold, color='blue', linestyle='--', label="PSD threshold") plt.plot(freqs, np.abs(psd), color="red", label="Noisy PSD") plt.plot(freqs, np.abs(psd_clean), color='green', label="Clean PSD") plt.legend(loc="upper right") plt.show() ###Output _____no_output_____ ###Markdown Plot after cleaning out nosiy frequencies below threshold: ###Code # Plot of cleaned psd plt.title('Cleaned PSD') plt.plot(freqs, np.abs(psd_clean), color='green', label="Clean PSD") # Conversion to Log scale plt.yscale('log') plt.ylabel("Power spectrul density") plt.xlabel("Frequency (Hz)") ###Output _____no_output_____ ###Markdown Applying the Inverse FFT to get back to the original signal after cleaning out noise: ###Code # Imports from scipy.fft import ifft amplitude_fft_clean = amplitude * indices amplitude_clean = ifft(amplitude_fft_clean.to_numpy()) ###Output _____no_output_____ ###Markdown Comparing the cleaned Audio signal with noisy Audio signal: ###Code # ploting the noisy signal and clean signal fig, axs = plt.subplots(1, 2, figsize=(18, 6)) # Plot Noisy Signal axs[0].set_title('Noisy signal') axs[0].plot(np.abs(time), np.abs(amplitude), color="red", label="Noisy signal") axs[0].set_xlabel("Amplitude") axs[0].set_xlabel("Time (seconds)") axs[0].set_xlim([5, 30]) # Plot cleaned signal axs[1].set_title('Cleaned out signal') axs[1].plot(np.abs(time), np.abs(amplitude_clean), color='green', label="Clean signal") axs[1].set_ylim([-0.50, 0.50]) axs[1].set_xlim([5, 30]) axs[1].set_xlabel("Amplitude") axs[1].set_xlabel("Time (seconds)") ###Output _____no_output_____
scatterplot_practice.ipynb
###Markdown In this workspace, you'll make use of this data set describing various car attributes, such as fuel efficiency. The cars in this dataset represent about 3900 sedans tested by the EPA from 2013 to 2018. This dataset is a trimmed-down version of the data found [here](https://catalog.data.gov/dataset/fuel-economy-data). ###Code fuel_econ = pd.read_csv('./data/fuel_econ.csv') fuel_econ.head() ###Output _____no_output_____ ###Markdown **TO DO 1**: Let's look at the relationship between fuel mileage ratings for city vs. highway driving, as stored in the 'city' and 'highway' variables (in miles per gallon, or mpg). **Use a _scatter plot_ to depict the data.**1. What is the general relationship between these variables? 2. Are there any points that appear unusual against these trends? ###Code sb.regplot(data=fuel_econ, x='city', y='highway', scatter_kws={'alpha': 1/8}) plt.plot([10,60], [10,60]) plt.xlabel('City (mpg)') plt.ylabel('Highway (mpg)'); ###Output _____no_output_____ ###Markdown Expected Output ###Code # run this cell to check your work against ours scatterplot_solution_1() ###Output Most of the data falls in a large blob between 10 and 30 mpg city and 20 to 40 mpg highway. Some transparency is added via 'alpha' to show the concentration of data. Interestingly, for most cars highway mileage is clearly higher than city mileage, but for those cars with city mileage above about 30 mpg, the distinction is less pronounced. In fact, most cars above 45 mpg city have better city mileage than highway mileage, contrary to the main trend. It might be good to call out this trend by adding a diagonal line to the figure using the `plot` function. (See the solution file for that code!) ###Markdown **TO DO 2**: Let's look at the relationship between two other numeric variables. How does the engine size relate to a car's CO2 footprint? The 'displ' variable has the former (in liters), while the 'co2' variable has the latter (in grams per mile). **Use a heat map to depict the data.** How strong is this trend? ###Code plt.hist2d(data=fuel_econ, x='displ', y='co2', cmin=0.5, cmap='viridis_r') plt.colorbar() plt.xlabel('Displacement (l)') plt.ylabel('CO2 (gpm)'); fuel_econ[['displ', 'co2']].describe() bins_x = np.arange(0.6, 7+0.4, 0.4) bins_y = np.arange(29, 692+50, 50) plt.hist2d(data=fuel_econ, x='displ', y='co2', cmin=0.5, cmap='viridis_r', bins=[bins_x, bins_y]) plt.colorbar() plt.xlabel('Displacement (l)') plt.ylabel('CO2 (gpm)'); ###Output _____no_output_____ ###Markdown Expected Output ###Code # run this cell to check your work against ours scatterplot_solution_2() ###Output In the heat map, I've set up a color map that goes from light to dark, and made it so that any cells without count don't get colored in. The visualization shows that most cars fall in a line where larger engine sizes correlate with higher emissions. The trend is somewhat broken by those cars with the lowest emissions, which still have engine sizes shared by most cars (between 1 and 3 liters).
notebooks/opt-cts/opt-cts.ipynb
###Markdown OptimizationIn this notebook, we explore various algorithmsfor solving x* = argmin_{x in R^D} f(x), where f(x) is a differentiable cost function. TOC* [Automatic differentiation](AD)* [Second-order full-batch optimization](second)* [Stochastic gradient descent](SGD)* [TF2 tutorial by Mukesh Mithrakumar](https://nbviewer.jupyter.org/github/adhiraiyan/DeepLearningWithTF2.0/blob/master/notebooks/04.00-Numerical-Computation.ipynb) ###Code import sklearn import scipy import scipy.optimize import matplotlib.pyplot as plt from mpl_toolkits import mplot3d from mpl_toolkits.mplot3d import Axes3D import seaborn as sns import warnings warnings.filterwarnings('ignore') import itertools import time from functools import partial import os import numpy as np from scipy.special import logsumexp np.set_printoptions(precision=3) # We make some wrappers around random number generation # so it works even if we switch from numpy to JAX import numpy as onp # original numpy def set_seed(seed): onp.random.seed(seed) def randn(*args): return onp.random.randn(*args) def randperm(args): return onp.random.permutation(args) import torch import torchvision print("torch version {}".format(torch.__version__)) if torch.cuda.is_available(): print(torch.cuda.get_device_name(0)) print("current device {}".format(torch.cuda.current_device())) else: print("Torch cannot find GPU") def set_seed(seed): onp.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) use_cuda = torch.cuda.is_available() device = torch.device("cuda:0" if use_cuda else "cpu") #torch.backends.cudnn.benchmark = True # Tensorflow 2.0 try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf from tensorflow import keras print("tf version {}".format(tf.__version__)) if tf.test.is_gpu_available(): print(tf.test.gpu_device_name()) else: print("TF cannot find GPU") # JAX (https://github.com/google/jax) !pip install --upgrade -q https://storage.googleapis.com/jax-releases/cuda$(echo $CUDA_VERSION | sed -e 's/\.//' -e 's/\..*//')/jaxlib-$(pip search jaxlib | grep -oP '[0-9\.]+' | head -n 1)-cp36-none-linux_x86_64.whl !pip install --upgrade -q jax import jax import jax.numpy as np import numpy as onp from jax.scipy.special import logsumexp from jax import grad, hessian, jacfwd, jacrev, jit, vmap from jax.experimental import optimizers print("jax version {}".format(jax.__version__)) ###Output jax version 0.1.43 ###Markdown Automatic differentiation In this section we illustrate various AD libraries by using them to derive the gradient of the negative log likelihood for binary logistic regression applied to the Iris dataset. We compare to the manual numpy implementation. As a minor detail, we evaluate the gradient of the NLL of the test data with the parameters set to their training MLE, in order to get an interesting signal; using a random weight vector makes the dynamic range of the output harder to see. ###Code # Fit the model to a dataset, so we have an "interesting" parameter vector to use. import sklearn.datasets from sklearn.model_selection import train_test_split iris = sklearn.datasets.load_iris() X = iris["data"] y = (iris["target"] == 2).astype(onp.int) # 1 if Iris-Virginica, else 0' N, D = X.shape # 150, 4 X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42) from sklearn.linear_model import LogisticRegression # We set C to a large number to turn off regularization. # We don't fit the bias term to simplify the comparison below. 
log_reg = LogisticRegression(solver="lbfgs", C=1e5, fit_intercept=False) log_reg.fit(X_train, y_train) w_mle_sklearn = np.ravel(log_reg.coef_) w = w_mle_sklearn ## Compute gradient of loss "by hand" using numpy def BCE_with_logits(logits, targets): N = logits.shape[0] logits = logits.reshape(N,1) logits_plus = np.hstack([np.zeros((N,1)), logits]) # e^0=1 logits_minus = np.hstack([np.zeros((N,1)), -logits]) logp1 = -logsumexp(logits_minus, axis=1) logp0 = -logsumexp(logits_plus, axis=1) logprobs = logp1 * targets + logp0 * (1-targets) return -np.sum(logprobs)/N # Compute using numpy def sigmoid(x): return 0.5 * (np.tanh(x / 2.) + 1) def predict_logit(weights, inputs): return np.dot(inputs, weights) # Already vectorized def predict_prob(weights, inputs): return sigmoid(predict_logit(weights, inputs)) def NLL(weights, batch): X, y = batch logits = predict_logit(weights, X) return BCE_with_logits(logits, y) def NLL_grad(weights, batch): X, y = batch N = X.shape[0] mu = predict_prob(weights, X) g = np.sum(np.dot(np.diag(mu - y), X), axis=0)/N return g y_pred = predict_prob(w, X_test) loss = NLL(w, (X_test, y_test)) grad_np = NLL_grad(w, (X_test, y_test)) print("params {}".format(w)) #print("pred {}".format(y_pred)) print("loss {}".format(loss)) print("grad {}".format(grad_np)) ###Output params [-4.414 -9.111 6.539 12.686] loss 0.11824002861976624 grad [-0.235 -0.122 -0.198 -0.064] ###Markdown AD in JAX Below we use JAX to compute the gradient of the NLL for binary logistic regression.For some examples of using JAX to compute the gradients, Jacobians and Hessians of simple linear and quadratic functions,see [this notebook](https://github.com/probml/pyprobml/blob/master/notebooks/linear_algebra.ipynbAD-jax).More details on JAX's autodiff can be found in the official [autodiff cookbook](https://github.com/google/jax/blob/master/notebooks/autodiff_cookbook.ipynb). ###Code grad_jax = grad(NLL)(w, (X_test, y_test)) print("grad {}".format(grad_jax)) assert np.allclose(grad_np, grad_jax) ###Output grad [-0.235 -0.122 -0.198 -0.064] ###Markdown AD in Tensorflow We just wrap the relevant forward computations inside GradientTape(), and then call tape.gradient(objective, [variables]). ###Code w_tf = tf.Variable(np.reshape(w, (D,1))) x_test_tf = tf.convert_to_tensor(X_test, dtype=np.float64) y_test_tf = tf.convert_to_tensor(np.reshape(y_test, (-1,1)), dtype=np.float64) with tf.GradientTape() as tape: logits = tf.linalg.matmul(x_test_tf, w_tf) y_pred = tf.math.sigmoid(logits) loss_batch = tf.nn.sigmoid_cross_entropy_with_logits(y_test_tf, logits) loss_tf = tf.reduce_mean(loss_batch, axis=0) grad_tf = tape.gradient(loss_tf, [w_tf]) grad_tf = grad_tf[0][:,0].numpy() assert np.allclose(grad_np, grad_tf) print("params {}".format(w_tf)) #print("pred {}".format(y_pred)) print("loss {}".format(loss_tf)) print("grad {}".format(grad_tf)) ###Output WARNING: Logging before flag parsing goes to stderr. W0826 04:28:46.621946 140039241475968 deprecation.py:323] From /tensorflow-2.0.0b1/python3.6/tensorflow/python/ops/nn_impl.py:182: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where ###Markdown AD in PyTorch We just compute the objective, call backward() on it, and then lookup variable.grad. 
However, we have to specify the requires_grad=True attribute on the variable before computing the objective, so that Torch knows to record its values on its tape. ###Code w_torch = torch.Tensor(np.reshape(w, [D, 1])).to(device) w_torch.requires_grad_() x_test_tensor = torch.Tensor(X_test).to(device) y_test_tensor = torch.Tensor(y_test).to(device) y_pred = torch.sigmoid(torch.matmul(x_test_tensor, w_torch))[:,0] criterion = torch.nn.BCELoss(reduction='mean') loss_torch = criterion(y_pred, y_test_tensor) loss_torch.backward() grad_torch = w_torch.grad[:,0].cpu().numpy() assert np.allclose(grad_np, grad_torch) print("params {}".format(w_torch)) #print("pred {}".format(y_pred)) print("loss {}".format(loss_torch)) print("grad {}".format(grad_torch)) ###Output params tensor([[-4.4138], [-9.1106], [ 6.5387], [12.6857]], device='cuda:0', requires_grad=True) loss 0.11824004352092743 grad [-0.235 -0.122 -0.198 -0.064] ###Markdown Second-order, full-batch optimization The "gold standard" of optimization is second-order methods, that leverage Hessian information. Since the Hessian has O(D^2) parameters, such methods do not scale to high-dimensional problems. However, we can sometimes approximate the Hessian using low-rank or diagonal approximations. Below we illustrate the low-rank BFGS method, and the limited-memory version of BFGS, that uses O(D H) space and O(D^2) time per step, where H is the history length.In general, second-order methods also require exact (rather than noisy) gradients. In the context of ML, this means they are "full batch" methods, since computing the exact gradient requires evaluating the loss on all the datapoints. However, for small data problems, this is feasible (and advisable).Below we illustrate how to use LBFGS as implemented in various libraries.Other second-order optimizers have a similar API.We use the same binary logistic regression problem as above. ###Code # Repeat relevant code from AD section above, for convenience. # Dataset import sklearn.datasets from sklearn.model_selection import train_test_split iris = sklearn.datasets.load_iris() X = iris["data"] y = (iris["target"] == 2).astype(onp.int) # 1 if Iris-Virginica, else 0' N, D = X.shape # 150, 4 X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.33, random_state=42) # Sklearn estimate from sklearn.linear_model import LogisticRegression log_reg = LogisticRegression(solver="lbfgs", C=1e5, fit_intercept=False) log_reg.fit(X_train, y_train) w_mle_sklearn = np.ravel(log_reg.coef_) w = w_mle_sklearn # Define Model and binary cross entropy loss def BCE_with_logits(logits, targets): N = logits.shape[0] logits = logits.reshape(N,1) logits_plus = np.hstack([np.zeros((N,1)), logits]) # e^0=1 logits_minus = np.hstack([np.zeros((N,1)), -logits]) logp1 = -logsumexp(logits_minus, axis=1) logp0 = -logsumexp(logits_plus, axis=1) logprobs = logp1 * targets + logp0 * (1-targets) return -np.sum(logprobs)/N def sigmoid(x): return 0.5 * (np.tanh(x / 2.) 
+ 1) def predict_logit(weights, inputs): return np.dot(inputs, weights) # Already vectorized def predict_prob(weights, inputs): return sigmoid(predict_logit(weights, inputs)) def NLL(weights, batch): X, y = batch logits = predict_logit(weights, X) return BCE_with_logits(logits, y) ###Output _____no_output_____ ###Markdown Scipy versionWe show how to use the implementation from [scipy.optimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.htmlscipy.optimize.minimize) ###Code import scipy.optimize # We manually compute gradients, but could use Jax instead def NLL_grad(weights, batch): X, y = batch N = X.shape[0] mu = predict_prob(weights, X) g = np.sum(np.dot(np.diag(mu - y), X), axis=0)/N return g def training_loss(w): return NLL(w, (X_train, y_train)) def training_grad(w): return NLL_grad(w, (X_train, y_train)) set_seed(0) w_init = randn(D) options={'disp': None, 'maxfun': 1000, 'maxiter': 1000} method = 'BFGS' w_mle_scipy = scipy.optimize.minimize( training_loss, w_init, jac=training_grad, method=method, options=options).x print("parameters from sklearn {}".format(w_mle_sklearn)) print("parameters from scipy-bfgs {}".format(w_mle_scipy)) # Limited memory version requires that we work with 64bit, since implemented in Fortran. def training_loss2(w): l = NLL(w, (X_train, y_train)) return onp.float64(l) def training_grad2(w): g = NLL_grad(w, (X_train, y_train)) return onp.asarray(g, dtype=onp.float64) set_seed(0) w_init = randn(D) memory = 10 options={'disp': None, 'maxcor': memory, 'maxfun': 1000, 'maxiter': 1000} # The code also handles bound constraints, hence the name method = 'L-BFGS-B' w_mle_scipy = scipy.optimize.minimize(training_loss, w_init, jac=training_grad2, method=method).x print("parameters from sklearn {}".format(w_mle_sklearn)) print("parameters from scipy-lbfgs {}".format(w_mle_scipy)) ###Output parameters from sklearn [-4.414 -9.111 6.539 12.686] parameters from scipy-lbfgs [-4.415 -9.114 6.54 12.691] ###Markdown PyTorch versionWe show how to use the version from [PyTorch.optim.lbfgs](https://github.com/pytorch/pytorch/blob/master/torch/optim/lbfgs.py). ###Code # Put data into PyTorch format. import torch from torch.utils.data import DataLoader, TensorDataset N, D = X_train.shape x_train_tensor = torch.Tensor(X_train) y_train_tensor = torch.Tensor(y_train) data_set = TensorDataset(x_train_tensor, y_train_tensor) # Define model and loss. 
class Model(torch.nn.Module): def __init__(self): super(Model, self).__init__() self.linear = torch.nn.Linear(D, 1, bias=False) def forward(self, x): y_pred = torch.sigmoid(self.linear(x)) return y_pred set_seed(0) model = Model() criterion = torch.nn.BCELoss(reduction='mean') optimizer = torch.optim.LBFGS(model.parameters(), history_size=10) def closure(): optimizer.zero_grad() y_pred = model(x_train_tensor) loss = criterion(y_pred, y_train_tensor) #print('loss:', loss.item()) loss.backward() return loss max_iter = 10 for i in range(max_iter): loss = optimizer.step(closure) params = list(model.parameters()) w_torch_bfgs = params[0][0].detach().numpy() #(D,) vector print("parameters from sklearn {}".format(w_mle_sklearn)) print("parameters from torch-bfgs {}".format(w_torch_bfgs)) ###Output parameters from sklearn [-4.414 -9.111 6.539 12.686] parameters from torch-bfgs [-4.415 -9.114 6.54 12.691] ###Markdown TF version There is also a version of [LBFGS in TF](https://www.tensorflow.org/probability/api_docs/python/tfp/optimizer/lbfgs_minimize) Stochastic gradient descent In this section we illustrate how to implement SGD. We apply it to a simple convex problem, namely MLE for binary logistic regression on the small iris dataset, so we can compare to the exact batch methods we illustrated above. Numpy versionWe show a minimal implementation of SGD using vanilla numpy. For convenience, we use TFDS to create a stream of mini-batches. We compute gradients by hand, but can use any AD library. ###Code import tensorflow_datasets as tfds def make_batcher(batch_size, X, y): def get_batches(): # Convert numpy arrays to tfds ds = tf.data.Dataset.from_tensor_slices({"X": X, "y": y}) ds = ds.batch(batch_size) # convert tfds into an iterable of dict of NumPy arrays return tfds.as_numpy(ds) return get_batches batcher = make_batcher(2, X_train, y_train) for epoch in range(2): print('epoch {}'.format(epoch)) for batch in batcher(): x, y = batch["X"], batch["y"] #print(x.shape) def sgd(params, loss_fn, grad_loss_fn, get_batches_as_dict, max_epochs, lr): print_every = max(1, int(0.1*max_epochs)) for epoch in range(max_epochs): epoch_loss = 0.0 for batch_dict in get_batches_as_dict(): x, y = batch_dict["X"], batch_dict["y"] batch = (x, y) batch_grad = grad_loss_fn(params, batch) params = params - lr*batch_grad batch_loss = loss_fn(params, batch) # Average loss within this batch epoch_loss += batch_loss if epoch % print_every == 0: print('Epoch {}, Loss {}'.format(epoch, epoch_loss)) return params, set_seed(0) D = X_train.shape[1] w_init = onp.random.randn(D) def training_loss2(w): l = NLL(w, (X_train, y_train)) return onp.float64(l) def training_grad2(w): g = NLL_grad(w, (X_train, y_train)) return onp.asarray(g, dtype=onp.float64) max_epochs = 5 lr = 0.1 batch_size = 10 batcher = make_batcher(batch_size, X_train, y_train) w_mle_sgd = sgd(w_init, NLL, NLL_grad, batcher, max_epochs, lr) print(w_mle_sgd) ###Output Epoch 0, Loss 21.775604248046875 Epoch 1, Loss 3.2622179985046387 Epoch 2, Loss 3.1074540615081787 Epoch 3, Loss 2.9816956520080566 Epoch 4, Loss 2.875518798828125 (DeviceArray([-0.399, -0.919, 0.311, 2.174], dtype=float32),) ###Markdown Jax version JAX has a small optimization library focused on stochastic first-order optimizers. Every optimizer is modeled as an (`init_fun`, `update_fun`, `get_params`) triple of functions. 
The `init_fun` is used to initialize the optimizer state, which could include things like momentum variables, and the `update_fun` accepts a gradient and an optimizer state to produce a new optimizer state. The `get_params` function extracts the current iterate (i.e. the current parameters) from the optimizer state. The parameters being optimized can be ndarrays or arbitrarily-nested list/tuple/dict structures, so you can store your parameters however you’d like.Below we show how to reproduce our numpy code using this library. ###Code # Version that uses JAX optimization library #@jit def sgd_jax(params, loss_fn, get_batches, max_epochs, opt_init, opt_update, get_params): loss_history = [] opt_state = opt_init(params) #@jit def update(i, opt_state, batch): params = get_params(opt_state) g = grad(loss_fn)(params, batch) return opt_update(i, g, opt_state) print_every = max(1, int(0.1*max_epochs)) total_steps = 0 for epoch in range(max_epochs): epoch_loss = 0.0 for batch_dict in get_batches(): X, y = batch_dict["X"], batch_dict["y"] batch = (X, y) total_steps += 1 opt_state = update(total_steps, opt_state, batch) params = get_params(opt_state) train_loss = onp.float(loss_fn(params, batch)) loss_history.append(train_loss) if epoch % print_every == 0: print('Epoch {}, train NLL {}'.format(epoch, train_loss)) return params, loss_history b=list(batcher()) X, y = b[0]["X"], b[0]["y"] X.shape batch = (X, y) params= w_init onp.float(NLL(params, batch)) g = grad(NLL)(params, batch) # JAX with constant LR should match our minimal version of SGD schedule = optimizers.constant(step_size=lr) opt_init, opt_update, get_params = optimizers.sgd(step_size=schedule) w_mle_sgd2, history = sgd_jax(w_init, NLL, batcher, max_epochs, opt_init, opt_update, get_params) print(w_mle_sgd2) print(history) ###Output _____no_output_____
Iris_dataset_problem/encoding_categorical_variables.ipynb
###Markdown Imputing missing values ###Code URL = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data' df = pd.read_csv(URL, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'class']) # Randomly select 10 rows random_index = np.random.choice(df.index, replace=False, size=10) # Set the sepal_length values of the rows to be None df.loc[random_index, 'sepal_length'] = None df.isnull().any() # droping the missing value row from data frame print("Number of rows before deleting: %d" %(df.shape[0])) df2 = df.dropna() print("Number of rows before deleting: %d" %(df2.shape[0])) # imputing missing value using mean df.sepal_length = df.sepal_length.fillna(df.sepal_length.mean()) df.isnull().any() ###Output _____no_output_____
module_1/Movies_IMBD_v4.1.ipynb
###Markdown Предобработка ###Code answers = {} data['profit'] = data['revenue'] - data['budget'] data['genres_array'] = data['genres'].str.split('|') data['overview_len'] = data['overview'].str.split().str.len() data['cast_array'] = data['cast'].str.split('|') data['production_array'] = data['production_companies'].str.split('|') data['release_date'] = data['release_date'].apply(pd.to_datetime) def movie_title(row): clean_title = row['original_title'].to_string(index=False).strip(' ') clean_imdb_id = row['imdb_id'].to_string(index=False).strip(' ') return f'{clean_title} ({clean_imdb_id})' ###Output _____no_output_____ ###Markdown 1. У какого фильма из списка самый большой бюджет? ###Code row = data[data['budget'] == data['budget'].max()][['imdb_id', 'original_title']] answers['1'] = movie_title(row) print(answers['1']) ###Output Pirates of the Caribbean: On Stranger Tides (tt1298650) ###Markdown 2. Какой из фильмов самый длительный (в минутах)? ###Code row = data[data['runtime'] == data['runtime'].max()][['imdb_id', 'original_title']] answers['2'] = movie_title(row) print(answers['2']) ###Output Gods and Generals (tt0279111) ###Markdown 3. Какой из фильмов самый короткий (в минутах)? ###Code row = data[data['runtime'] == data['runtime'].min()][['imdb_id', 'original_title']] answers['3'] = movie_title(row) print(answers['3']) ###Output Winnie the Pooh (tt1449283) ###Markdown 4. Какова средняя длительность фильмов? ###Code answers['4'] = round(data['runtime'].mean()) print(answers['4']) ###Output 110 ###Markdown 5. Каково медианное значение длительности фильмов? ###Code answers['5'] = round(data['runtime'].median()) print(answers['5']) ###Output 107 ###Markdown 6. Какой самый прибыльный фильм? Внимание! Здесь и далее под «прибылью» или «убытками» понимается разность между сборами и бюджетом фильма. (прибыль = сборы - бюджет) в нашем датасете это будет (profit = revenue - budget) ###Code row = data[data['profit'] == data['profit'].max()][['imdb_id', 'original_title', 'profit']] answers['6'] = movie_title(row) print(answers['6']) ###Output Avatar (tt0499549) ###Markdown 7. Какой фильм самый убыточный? ###Code row = data[data['profit'] == data['profit'].min()][['imdb_id', 'original_title', 'profit']] answers['7'] = movie_title(row) print(answers['7']) ###Output The Lone Ranger (tt1210819) ###Markdown 8. У скольких фильмов из датасета объем сборов оказался выше бюджета? ###Code answers['8'] = data[data['profit'] > 0]['imdb_id'].count() print(answers['8']) ###Output 1478 ###Markdown 9. Какой фильм оказался самым кассовым в 2008 году? ###Code max_profit = data[data['release_year'] == 2008]['profit'].max() row = data[(data['release_year'] == 2008) & (data['profit'] == max_profit)][['imdb_id', 'original_title', 'profit']] answers['9'] = movie_title(row) print(answers['9']) ###Output The Dark Knight (tt0468569) ###Markdown 10. Самый убыточный фильм за период с 2012 по 2014 г. (включительно)? ###Code min_profit = data[data['release_year'].between(2012, 2014)]['profit'].min() row = data[(data['release_year'].between(2012, 2014)) & (data['profit'] == min_profit)] answers['10'] = movie_title(row) print(answers['10']) ###Output The Lone Ranger (tt1210819) ###Markdown 11. Какого жанра фильмов больше всего? ###Code genres = data.explode('genres_array').groupby('genres_array')['imdb_id'].count().sort_values(ascending=False) answers['11'] = genres.index[0] print(answers['11']) ###Output Drama ###Markdown 12. Фильмы какого жанра чаще всего становятся прибыльными? 
###Code with_profit = data[data['profit'] > 0] genres = with_profit.explode('genres_array').groupby('genres_array')['imdb_id'].count().sort_values(ascending=False) answers['12'] = genres.index[0] print(answers['12']) ###Output Drama ###Markdown 13. У какого режиссера самые большие суммарные кассовые сбооры? ###Code answers['13'] = data.groupby('director').sum()['revenue'].sort_values(ascending=False).index[0] print(answers['13']) ###Output Peter Jackson ###Markdown 14. Какой режисер снял больше всего фильмов в стиле Action? ###Code variants = ['Ridley Scott', 'Guy Ritchie', 'Robert Rodriguez', 'Quentin Tarantino', 'Tony Scott'] action_directors = data[data['director'].isin(variants)] action_directors = action_directors[action_directors['genres'].str.contains('Action')] \ .groupby('director')['imdb_id'] \ .count() \ .sort_values(ascending=False) answers['14'] = action_directors.index[0] print(answers['14']) ###Output Robert Rodriguez ###Markdown 15. Фильмы с каким актером принесли самые высокие кассовые сборы в 2012 году? ###Code data_2012 = data[data['release_year'] == 2012] revenue = data_2012.explode('cast_array').groupby('cast_array')['revenue'].sum().sort_values(ascending=False) answers['15'] = revenue.index[0] print(answers['15']) ###Output Chris Hemsworth ###Markdown 16. Какой актер снялся в большем количестве высокобюджетных фильмов? ###Code cast_budget = Counter() def count_cast_budget(row): for cast in row['cast_array']: cast_budget[cast] += 1 data[data['budget'] > data['budget'].median()].apply(count_cast_budget, axis='columns') answers['16'], _ = cast_budget.most_common(1)[0] print(answers['16']) high_budget = data[data['budget'] > data['budget'].median()] cast_hist = high_budget.explode('cast_array').groupby('cast_array')['imdb_id'].count().sort_values(ascending=False) answers['16'] = cast_hist.index[0] print(answers['16']) ###Output Matt Damon ###Markdown 17. В фильмах какого жанра больше всего снимался Nicolas Cage? ###Code cage = data[data['cast'].str.contains('Nicolas Cage')] genres = cage.explode('genres_array').groupby('genres_array')['imdb_id'].count().sort_values(ascending=False) answers['17'] = genres.index[0] print(answers['17']) ###Output Action ###Markdown 18. Самый убыточный фильм от Paramount Pictures ###Code paramount = data[data['production_companies'].str.contains('Paramount Pictures')] answers['18'] = movie_title(paramount.sort_values(by='profit').head(1)) print(answers['18']) ###Output K-19: The Widowmaker (tt0267626) ###Markdown 19. Какой год стал самым успешным по суммарным кассовым сборам? ###Code answers['19'] = data.groupby('release_year')['revenue'].sum().sort_values(ascending=False).index[0] print(answers['19']) ###Output 2015 ###Markdown 20. Какой самый прибыльный год для студии Warner Bros? ###Code warner = data[data['production_companies'].str.contains('Warner Bros')] answers['20'] = warner.groupby('release_year')['revenue'].sum().sort_values(ascending=False).index[0] print(answers['20']) ###Output 2014 ###Markdown 21. В каком месяце за все годы суммарно вышло больше всего фильмов? ###Code answers['21'] = data.groupby(by=data['release_date'].dt.month)['imdb_id'].count().sort_values(ascending=False).index[0] print(calendar.month_name[answers['21']]) ###Output September ###Markdown 22. Сколько суммарно вышло фильмов летом? (за июнь, июль, август) ###Code answers['22'] = data[data['release_date'].dt.month.between(6,8)]['imdb_id'].count() print(answers['22']) ###Output 450 ###Markdown 23. Для какого режиссера зима – самое продуктивное время года? 
###Code
directors = data[data['release_date'].dt.month.isin([12,1,2])].groupby('director')['imdb_id'].count().sort_values(ascending=False)
answers['23'] = directors.index[0]
print(answers['23'])
###Output
Peter Jackson
###Markdown
 24. Which studio gives its films the longest titles by character count?
###Code
title_len = {}

def title_count(row):
    for p in row['production_array']:
        title_len.setdefault(p, [])
        title_len[p].append(len(row['original_title']))

data.apply(title_count, axis='columns')

result = []
for p in title_len:
    result.append(
        (sum(title_len[p]) / len(title_len[p]), p)
    )
_, answers['24'] = sorted(result, reverse=True)[0]
print(answers['24'])

with_len = data.explode('production_array')[['production_array', 'original_title']]
with_len['original_title_len'] = with_len['original_title'].apply(len)
answers['24'] = with_len.groupby('production_array')['original_title_len'] \
    .mean() \
    .sort_values(ascending=False) \
    .index[0]
print(answers['24'])
###Output
Four By Two Productions
###Markdown
 25. Which studio's film overviews are, on average, the longest by word count?
###Code
answers['25'] = data.explode('production_array') \
    .groupby('production_array')['overview_len'] \
    .mean() \
    .sort_values(ascending=False) \
    .index[0]
print(answers['25'])
###Output
Midnight Picture Show
###Markdown
 26. Which films are in the top 1 percent by rating (vote_average)?
###Code
variants = [
    ['Inside Out', 'The Dark Knight', '12 Years a Slave'],
    ['BloodRayne', 'The Adventures of Rocky & Bullwinkle'],
    ['Batman Begins', 'The Lord of the Rings: The Return of the King', 'Upside Down'],
    ['300', 'Lucky Number Slevin', 'Kill Bill: Vol. 1'],
    ['Upside Down', 'Inside Out', 'Iron Man'],
]
best_99 = data[data['vote_average'] > data['vote_average'].quantile(.99)]['original_title']
best_set = set(best_99.unique())
answers['26'] = 'unknown'
for variant in variants:
    variant_set = set(variant)
    if variant_set.issubset(best_set):
        answers['26'] = ', '.join(variant)
        break
print(answers['26'])
###Output
Inside Out, The Dark Knight, 12 Years a Slave
###Markdown
 27. Which actors appear together in the same film most often?
###Code
pairs = Counter()

def count_pairs(row):
    for i, cast in enumerate(row['cast_array']):
        for cast2 in row['cast_array'][i:]:
            if cast != cast2:
                pairs[', '.join(sorted([cast, cast2]))] += 1

data.apply(count_pairs, axis='columns')
answers['27'], _ = pairs.most_common(1)[0]
print(answers['27'])
###Output
Daniel Radcliffe, Rupert Grint
###Markdown
 Submission
###Code
display(answers)

# and make sure nothing was missed
print(len(answers))
###Output
27
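###Markdown
 (Optional) Persist the answers to disk. A minimal sketch that was not part of the original notebook: the file name `answers.json` is an assumption, and each value is cast to `str` because NumPy scalars are not JSON-serializable.
###Code
import json

# Cast values to str so NumPy integers serialize cleanly;
# 'answers.json' is a hypothetical file name.
with open('answers.json', 'w', encoding='utf-8') as f:
    json.dump({k: str(v) for k, v in answers.items()}, f, ensure_ascii=False, indent=2)
###Output
_____no_output_____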
1.Neural-Network.ipynb
###Markdown
 Basics on how to build a simple Neural Network 0. Imports
###Code
import torch
import torch.nn as nn # Network modules
import torch.optim as optim # Gradient Descent, SGD, Adam, ...
import torch.nn.functional as F # Activation functions

# The DataLoader gives us easier dataset management,
# allowing us to create mini-batches and the like easily
from torch.utils.data import DataLoader

# Datasets from torchvision: https://pytorch.org/vision/stable/datasets.html
import torchvision.datasets as datasets

# Transformations to perform on our dataset (for data augmentation, for example)
import torchvision.transforms as transforms

# Already implemented & pre-trained models from torchvision: https://pytorch.org/vision/stable/models.html
import torchvision.models

from tqdm import tqdm # progress bar
###Output
_____no_output_____
###Markdown
 1. Create a Fully Connected Network Model of the neural network:
###Code
class NN(nn.Module):
    def __init__(self, input_size, num_classes):
        # call the initialization of nn.Module
        super(NN, self).__init__()
        # create here the NN modules that are going to be used
        self.fc1 = nn.Linear(input_size, 50)
        self.fc2 = nn.Linear(50, num_classes)

    def forward(self, x):
        # assemble the modules that participate in the forward pass
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
###Output
_____no_output_____
###Markdown
 To import the CNN:
###Code
from models.SimpleCNN import CNN

# # to make sure it runs correctly (should output torch.Size([64, 10])):
# model = CNN()
# x = torch.randn(64, 1, 28, 28)
# print(model(x).shape)
###Output
_____no_output_____
###Markdown
 2. Set Device
###Code
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
###Output
_____no_output_____
###Markdown
 3. Hyperparameters
###Code
INPUT_SIZE = 784
INPUT_CHANNELS = 1
NUM_CLASSES = 10
LEARNING_RATE = 0.001
BATCH_SIZE = 64
NUM_EPOCHS = 3

LOAD_MODEL = True
CHECKPOINT_NAME = "checkpoints/my_checkpoint.pth.tar"
###Output
_____no_output_____
###Markdown
 4. Load Data - `root`: where the dataset is going to be downloaded. - `train`: if True, download the training set; if False, download the test set. - `transform`: transformations to perform on the dataset (e.g. from NumPy arrays to Tensors so PyTorch can use them).
###Code
train_ds = datasets.MNIST(root='data', train=True, transform=transforms.ToTensor(), download=True)
test_ds = datasets.MNIST(root='data', train=False, transform=transforms.ToTensor(), download=True)
###Output
_____no_output_____
###Markdown
 ⚠️ Be careful not to shuffle the data if it has to follow a specific order, as in some NLP cases.
###Code
train_loader = DataLoader(dataset=train_ds, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(dataset=test_ds, batch_size=BATCH_SIZE, shuffle=True)
###Output
_____no_output_____
###Markdown
 5. Initialize network > To choose the model, just uncomment it and comment out the rest Simple neural network (NN):
###Code
# model = NN(input_size=INPUT_SIZE, num_classes=NUM_CLASSES).to(DEVICE)
###Output
_____no_output_____
###Markdown
 Convolutional neural network (CNN):
###Code
model = CNN(in_channels=INPUT_CHANNELS, num_classes=NUM_CLASSES).to(DEVICE)
###Output
_____no_output_____
###Markdown
 VGG16: Initial model summary:```VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ...
 (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(7, 7)) (classifier): Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): Linear(in_features=4096, out_features=4096, bias=True) (4): ReLU(inplace=True) (5): Dropout(p=0.5, inplace=False) (6): Linear(in_features=4096, out_features=1000, bias=True) ))``` We don't want the avgpool stage to perform any operation. Therefore, we're going to create an Identity module that leaves the input as it is:
###Code
class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x
###Output
_____no_output_____
###Markdown
 We also don't want the classifier part to output 1000 features, so we replace it with a small Sequential head whose final Linear layer outputs NUM_CLASSES (10):
###Code
def load_vgg16_model():
    model = torchvision.models.vgg16(pretrained=True)

    # We only want to perform backpropagation on the last layers. Therefore,
    # we freeze the gradients of all parameters defined up to this point.
    # This will make the training much faster, as only the newly
    # added layers will be trained!
    for param in model.parameters():
        param.requires_grad = False

    model.avgpool = Identity()
    # if we look at layer (28) of the summary, we can see that there are 512 output channels
    model.classifier = nn.Sequential(nn.Linear(512, 100),
                                     nn.ReLU(),
                                     nn.Linear(100, NUM_CLASSES))
    return model

# model = load_vgg16_model()
# print(model) # model summary
###Output
_____no_output_____
###Markdown
 Summary:```VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ... (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): Identity() (classifier): Sequential( (0): Linear(in_features=512, out_features=100, bias=True) (1): ReLU() (2): Linear(in_features=100, out_features=10, bias=True) ))``` 6. Loss & Optimizer
###Code
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
 7. Checkpoints & Model Loading
###Code
def save_checkpoint(state, filename="checkpoints/my_checkpoint.pth.tar"):
    print("=> Saving checkpoint")
    torch.save(state, filename)

def load_checkpoint(checkpoint):
    print("=> Loading checkpoint")
    model.load_state_dict(checkpoint['state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer'])
###Output
_____no_output_____
###Markdown
 8. Train Network
###Code
if LOAD_MODEL:
    try:
        load_checkpoint(torch.load(CHECKPOINT_NAME))
    except FileNotFoundError:
        raise FileNotFoundError("No previous checkpoints were found.")
    print("Checkpoint has been loaded correctly!")

def reshape_if_simple_nn(data):
    if isinstance(model, NN):
        # Get to the correct shape for the simple neural network: [64, 1, 28, 28] -> [64, 784]
        # - The Linear layer expects one value per input neuron, so
        #   we cannot feed it an array per neuron; we first have to
        #   flatten each sample into a single vector.
        data = data.reshape(data.shape[0], -1) # -1 flattens all the remaining dimensions
    return data

for epoch in range(NUM_EPOCHS):
    losses = []
    loop = tqdm(enumerate(train_loader), total=len(train_loader), leave=False)

    if epoch % 2 == 0: # save a checkpoint every two epochs
        checkpoint = {'state_dict': model.state_dict(), 'optimizer': optimizer.state_dict()}
        save_checkpoint(checkpoint, CHECKPOINT_NAME)

    for batch_idx, (data, targets) in loop:
        # Move data to CUDA if possible
        data = data.to(device=DEVICE)
        targets = targets.to(device=DEVICE)

        data = reshape_if_simple_nn(data)

        ### Forward ###
        scores = model(data)
        loss = criterion(scores, targets)
        losses.append(loss.item())

        ### Backward ###
        # For each batch, set all the gradients to 0 to avoid reusing gradients
        # from a previous batch and running into problems
        optimizer.zero_grad()
        loss.backward()

        # perform the optimization step
        optimizer.step()

        # update progress bar
        loop.set_description(f"Epoch [{epoch}/{NUM_EPOCHS}]")
        loop.set_postfix(loss = loss.item())

    mean_loss = sum(losses) / len(losses)
    print(f"Loss at epoch {epoch} was {mean_loss:.5f}")
###Output
=> Saving checkpoint
Loss at epoch 0 was 0.25962
Loss at epoch 1 was 0.07248
=> Saving checkpoint
Loss at epoch 2 was 0.05244
###Markdown
 9. Accuracy & Test
###Code
def check_accuracy(loader, model):
    dataset_type = "training" if loader.dataset.train else 'test'
    print(f"Checking accuracy on {dataset_type} data")

    num_correct = 0
    num_samples = 0
    model.eval() # disables dropout and similar layers during evaluation

    # with torch.no_grad() we avoid computing gradients in the calculations
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=DEVICE)
            y = y.to(device=DEVICE)

            x = reshape_if_simple_nn(x)

            scores = model(x)
            # Remember that the network's last layer outputs NUM_CLASSES scores per sample.
            # We want to take the greatest value, so just apply argmax
            predictions = scores.argmax(dim=1)

            # Remember, x, predictions & y are batches of 64 elements.
            # if we perform (predictions == y), we'll obtain a tensor like the following one:
            # tensor([True, False, True, True]).sum() = 3
            num_correct += (predictions == y).sum()
            num_samples += predictions.size(0)

    acc = (float(num_correct) / float(num_samples)) * 100
    print(f"Got {num_correct} / {num_samples} with accuracy {acc:.2f}")
    model.train() # revert model.eval()
    return acc

check_accuracy(train_loader, model)
check_accuracy(test_loader, model)
###Output
Checking accuracy on training data
Got 59258 / 60000 with accuracy 98.76
Checking accuracy on test data
Got 9854 / 10000 with accuracy 98.54
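###Markdown
 10. Single-example inference A minimal sketch, not in the original notebook: it assumes `model`, `test_ds`, `DEVICE` and `reshape_if_simple_nn` are defined as above, and simply runs the trained network on one test image.
###Code
model.eval()
with torch.no_grad():
    x, y = test_ds[0]                 # one (image, label) pair from the test set
    x = x.unsqueeze(0).to(DEVICE)     # add the batch dimension: [1, 1, 28, 28]
    x = reshape_if_simple_nn(x)       # flatten only if the simple NN is in use
    prediction = model(x).argmax(dim=1).item()
print(f"Predicted: {prediction}, true label: {y}")
model.train()
###Output
_____no_output_____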
EPM_datos_Uraba/daniel-new.ipynb
###Markdown
 Text analysis RepairCode First we extract the text and clean it. To do that, we remove Spanish stopwords and the tokens NAN, RAMAL and RAMALES.
###Code
es_stopwords = [str(x).upper() for x in stopwords.words("spanish")]

def remove_accents(input_str):
    nfkd_form = unicodedata.normalize('NFKD', input_str)
    return u"".join([c for c in nfkd_form if not unicodedata.combining(c)])

es_stopwords_na = [remove_accents(x) for x in es_stopwords]
es_stopwords_na.extend(["NAN", "RAMAL", "RAMALES"])

def clean_text(text):
    # Remove non-word characters using pattern = r"[^\w]" seen in class
    pattern = r"[^\w]"
    ret = re.sub(pattern, " ", text)
    ret = remove_accents(ret)
    for bad in es_stopwords_na:
        to_replace = " " + bad + " " if bad != "NAN" else bad
        ret = ret.replace(to_replace, " ")
    return ret

# Create clean column
df["RepairCodeStringClean"] = df["RepairCodeString"].apply(clean_text)

all_reviews_text = ' '.join(df["RepairCodeString"])
all_reviews_text = clean_text(all_reviews_text)
print(all_reviews_text)

# Get tokens
tokenized_words = nltk.word_tokenize(all_reviews_text)
# remove tokens shorter than 3 characters
tokenized_words = [each.strip() for each in tokenized_words if len(each.lower()) > 2]

word_freq = Counter(tokenized_words)
ten_pct = round(len(word_freq) * 0.1)
## Top 10%
word_freq.most_common(ten_pct)
## Similarly, bottom 10%
word_freq.most_common()[-ten_pct:-1]
df["RepairCodeStringClean"].apply(lambda x: np.nan if str(x).strip() == "" else x).dropna().head()
## First 5 repair codes n-grams
# first_5_revs = AllRCs[0:5]
# word_tokens = nltk.word_tokenize(''.join(first_5_revs))
# list(ngrams(word_tokens, 3)) # ngrams(word_tokens, n) gives the n-grams.
###Output
_____no_output_____
###Markdown
 N-Grams RepairCode
###Code
def top_k_ngrams(word_tokens, n, k):
    ## Getting them as n-grams
    n_gram_list = list(ngrams(word_tokens, n))

    ### Getting each n-gram as a separate string
    n_gram_strings = [' '.join(each) for each in n_gram_list]
    n_gram_counter = Counter(n_gram_strings)
    most_common_k = n_gram_counter.most_common(k)
    print(most_common_k)

top_k_ngrams(tokenized_words, 1, 10)
top_k_ngrams(tokenized_words, 2, 10)
top_k_ngrams(tokenized_words, 3, 10)
top_k_ngrams(tokenized_words, 4, 10)
# nltk.pos_tag(tokenized_words)
# import spaghetti as sgt
# sent1 = 'Mi colega me ayuda a programar cosas .'.split()
# sent2 = 'Está embarazada .'.split()
# test_sents = [sent1, sent2]
# # Default Spaghetti tagger.
# print (sgt.pos_tag(sent1))
# # Tag multiple sentences.
# print (sgt.pos_tag_sents(test_sents))
# spa_tagger = sgt.CESSTagger()
# # POS tagger trained on unigrams of the CESS corpus.
# spa_unigram_tagger = spa_tagger.uni
# print (spa_unigram_tagger.tag(sent1))
# # POS tagger trained on bigrams of the CESS corpus.
# spa_bigram_tagger = spa_tagger.bi
# print (spa_bigram_tagger.tag(sent2))
# print (spa_bigram_tagger.tag_sents(test_sents))

# # Now let's PoS tag everything ('tagger' and 'jar' must point to a Stanford POS model and its jar)
# etiquetador = StanfordPOSTagger(tagger, jar)
# etiquetas = etiquetador.tag(tokenized_words)
# etiquetas
###Output
_____no_output_____
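###Markdown
 TF-IDF sketch As an alternative to the PoS tagging above (which needs an external Stanford model), term importance can be estimated with TF-IDF. A minimal sketch, assuming `df["RepairCodeStringClean"]` is populated as above; scikit-learn is an extra dependency not imported earlier in this notebook.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer

# Fit TF-IDF over the cleaned repair codes (NaN rows replaced with empty strings).
vectorizer = TfidfVectorizer(max_features=1000)
tfidf = vectorizer.fit_transform(df["RepairCodeStringClean"].fillna(""))

# Rank terms by their mean TF-IDF weight across all documents.
mean_weights = tfidf.mean(axis=0).A1
terms = vectorizer.get_feature_names_out()
for weight, term in sorted(zip(mean_weights, terms), reverse=True)[:10]:
    print(f"{term}: {weight:.4f}")
###Output
_____no_output_____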