path | concatenated_notebook
---|---
source/notebooks/L3/reclassify.ipynb | ###Markdown
Data reclassification. Reclassifying data based on specific criteria is a common task when doing GIS analysis. The purpose of this lesson is to see how we can reclassify values based on some criteria, which can be anything, such as:```1. if travel time to my work is less than 30 minutes AND 2. the rent of the apartment is less than 1000 € per month ------------------------------------------------------ IF TRUE: ==> I go to view it and try to rent the apartment IF NOT TRUE: ==> I continue looking for something else```In this tutorial, we will use Travel Time Matrix data from Helsinki to classify some features of the data based on map classifiers that are commonly used e.g. when doing visualizations, and on our own self-made classifier where we determine how the data should be classified.1. use ready-made classifiers from the pysal module to classify travel times into multiple classes.2. use travel times and distances to find good locations to buy an apartment: with good public transport accessibility to the city center, but a bit further away from the city center where the prices are presumably lower.*Note, during this intensive course we won't be using the Corine2012 data.* Classifying data Classification based on common classifiers[Pysal](http://pysal.readthedocs.io/en/latest) is an extensive Python library with various functions and tools for spatial data analysis. It also includes the most common data classifiers that are used e.g. when visualizing data. Available map classifiers in the pysal module are ([see here for more details](http://pysal.readthedocs.io/en/latest/library/esda/mapclassify.html)): - Box_Plot - Equal_Interval - Fisher_Jenks - Fisher_Jenks_Sampled - HeadTail_Breaks - Jenks_Caspall - Jenks_Caspall_Forced - Jenks_Caspall_Sampled - Max_P_Classifier - Maximum_Breaks - Natural_Breaks - Quantiles - Percentiles - Std_Mean - User_Defined- First, we need to read our Travel Time data from Helsinki into memory from a GeoJSON file.
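As a minimal illustrative sketch (an addition to the original material, using made-up `travel_time` and `rent` values rather than the dataset), the decision logic above could be written in plain Python like this:

```python
# Hypothetical values for one apartment (not taken from the data)
travel_time = 25   # minutes from the apartment to work
rent = 950         # euros per month

if travel_time < 30 and rent < 1000:
    print("Go to view it and try to rent the apartment")
else:
    print("Continue looking for something else")
```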
###Code
import geopandas as gpd
fp = "L3_data/TravelTimes_to_5975375_RailwayStation_Helsinki.geojson"
# Read the GeoJSON file the same way as a Shapefile
acc = gpd.read_file(fp)
# Let's see what we have
print(acc.head(2))
###Output
car_m_d car_m_t car_r_d car_r_t from_id pt_m_d pt_m_t pt_m_tt \
0 15981 36 15988 41 6002702 14698 65 73
1 16190 34 16197 39 6002701 14661 64 73
pt_r_d pt_r_t pt_r_tt to_id walk_d walk_t GML_ID NAMEFIN \
0 14698 61 72 5975375 14456 207 27517366 Helsinki
1 14661 60 72 5975375 14419 206 27517366 Helsinki
NAMESWE NATCODE area \
0 Helsingfors 091 62499.999976
1 Helsingfors 091 62499.999977
geometry
0 POLYGON ((391000.0001349226 6667750.00004299, ...
1 POLYGON ((390750.0001349644 6668000.000042951,...
###Markdown
As we can see, there are plenty of different variables (see [the description here](http://blogs.helsinki.fi/accessibility/helsinki-region-travel-time-matrix-2015) for all attributes), but the ones we are interested in are the column `pt_r_tt`, which tells the time in minutes it takes to reach the city center from different parts of the city, and `walk_d`, which tells the network distance along roads to the city center (almost equal to the Euclidean distance).**The NoData values are represented by the value -1**. - Thus we need to remove the NoData values first.
###Code
# Include only data that is above or equal to 0
acc = acc.loc[acc['pt_r_tt'] >=0]
###Output
_____no_output_____
###Markdown
- Let's plot the data and see how it looks.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
# Plot using 9 classes and classify the values using "Fisher Jenks" classification
acc.plot(column="pt_r_tt", scheme="Fisher_Jenks", k=9, cmap="RdYlBu", linewidth=0, legend=True)
# Use tight layout
plt.tight_layout()
###Output
_____no_output_____
###Markdown
As we can see from this map, the travel times are lower in the south, where the city center is located, but there are also some areas of "good" accessibility elsewhere (where the color is red).- Let's also make a plot of the walking distances:
###Code
# Plot walking distance
acc.plot(column="walk_d", scheme="Fisher_Jenks", k=9, cmap="RdYlBu", linewidth=0, legend=True)
# Use tight layout
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Okay, from here we can see that the walking distances (along the road network) more or less resemble Euclidean distances.- Let's apply one of the `Pysal` classifiers to our data and classify the travel times by public transport into 9 classes- The classifier first needs to be initialized with the `make()` function, which takes the number of desired classes as an input parameter
###Code
import pysal as ps
# Define the number of classes
n_classes = 9
# Create a Natural Breaks classifier
classifier = ps.Natural_Breaks.make(k=n_classes)
###Output
_____no_output_____
###Markdown
- Now we can apply that classifier to our data by using the `apply` function
###Code
# Classify the data
classifications = acc[['pt_r_tt']].apply(classifier)
# Let's see what we have
classifications.head()
###Output
_____no_output_____
###Markdown
Okay, so now we have a DataFrame where our input column was classified into 9 different classes (bin numbers 0-8) based on [Natural Breaks classification](http://wiki-1-1930356585.us-east-1.elb.amazonaws.com/wiki/index.php/Jenks_Natural_Breaks_Classification).- Now we want to join that reclassification back into our original data, but let's first rename the column so that we recognize it later on:
###Code
# Rename the column so that we know that it was classified with natural breaks
classifications.columns = ['nb_pt_r_tt']
# Join with our original data (here the index is the key)
acc = acc.join(classifications)
# Let's see how our data looks like
acc.head()
###Output
_____no_output_____
###Markdown
Great, now we have those values in our accessibility GeoDataFrame. Let's visualize the results and see how they look.
###Code
# Plot
acc.plot(column="nb_pt_r_tt", linewidth=0, legend=True)
# Use tight layout
plt.tight_layout()
###Output
_____no_output_____
###Markdown
And here we go, now we have a map where we have used one of the common classifiers to classify our data into 9 classes. Creating a custom classifier**Multicriteria data classification**Let's create a function that classifies each row into one of two classes based on given threshold parameters: if a value is below its threshold, the output column gets the value 0, and if it is above, it gets the value 1. This kind of classification is often called a [binary classification](https://en.wikipedia.org/wiki/Binary_classification).First we need to create a function for our classification task. This function takes a single row of the GeoDataFrame as input, plus a few other parameters that we can use.It is also possible to build classifiers with multiple criteria easily in Pandas/Geopandas by extending this idea, so our function will classify the data based on two columns.- Let's call it `custom_classifier`; it takes two criteria into account:
###Code
def custom_classifier(row, src_col1, src_col2, threshold1, threshold2, output_col):
# 1. If the value in src_col1 is LOWER than the threshold1 value
# 2. AND the value in src_col2 is HIGHER than the threshold2 value, give value 1, otherwise give 0
if row[src_col1] < threshold1 and row[src_col2] > threshold2:
# Update the output column with value 1
row[output_col] = 1
# Otherwise update the output column with value 0
else:
row[output_col] = 0
# Return the updated row
return row
###Output
_____no_output_____
###Markdown
Now we have defined the function, and we can start using it.- Let's do our classification based on two criteria and find out grid cells where the **travel time is lower or equal to 20 minutes** but they are further away **than 4 km (4000 meters) from city center**.- Let's create an empty column for our classification results called `"suitable_area"`.
###Code
# Create column for the classification results
acc["suitable_area"] = None
# Use the function
acc = acc.apply(custom_classifier, src_col1='pt_r_tt',
src_col2='walk_d', threshold1=20, threshold2=4000,
output_col="suitable_area", axis=1)
# See the first rows
acc.head(2)
###Output
_____no_output_____
###Markdown
Okay, we have new values in the `suitable_area` column.- How many polygons are suitable for us? Let's find out by using a Pandas function called `value_counts()` that returns the count of each distinct value in our column.
###Code
# Get value counts
acc['suitable_area'].value_counts()
###Output
_____no_output_____
###Markdown
Okay, so there seem to be nine suitable locations where we can try to find an apartment to buy.- Let's see where they are located:
###Code
# Plot
acc.plot(column="suitable_area", linewidth=0);
# Use tight layout
plt.tight_layout()
###Output
_____no_output_____ |
001-Jupyter/001-Tutorials/005-Python4Maths/03.ipynb | ###Markdown
All of these Python notebooks are available at [https://gitlab.erc.monash.edu.au/andrease/Python4Maths.git] Data Structures In simple terms, a data structure is a collection or group of data organised in a particular structure. Lists Lists are the most commonly used data structure. Think of a list as a sequence of data enclosed in square brackets, with the items separated by commas. Each item can be accessed by calling its index value.Lists are declared by just equating a variable to '[ ]' or list().
###Code
a = []
type(a)
###Output
_____no_output_____
###Markdown
One can directly assign the sequence of data to a list x as shown.
###Code
x = ['apple', 'orange']
###Output
_____no_output_____
###Markdown
Indexing In Python, indexing starts from 0, as already seen for strings. Thus the list x, which has two elements, will have 'apple' at index 0 and 'orange' at index 1.
###Code
x[0]
###Output
_____no_output_____
###Markdown
Indexing can also be done in reverse order, so that the last element is accessed first. Here, indexing starts from -1. Thus index -1 will be 'orange' and index -2 will be 'apple'.
###Code
x[-1]
###Output
_____no_output_____
###Markdown
As you might have already guessed, x[0] = x[-2], x[1] = x[-1]. This concept can be extended to lists with many more elements.
###Code
y = ['carrot','potato']
###Output
_____no_output_____
###Markdown
Here we have declared two lists, x and y, each containing its own data. These two lists can in turn be put into another list, say z, whose elements are the two lists. A list inside a list is called a nested list, and this is how an array would be declared, as we will see later.
###Code
z = [x,y]
print( z )
###Output
_____no_output_____
###Markdown
Indexing in nested lists can be quite confusing if you do not understand how indexing works in python. So let us break it down and then arrive at a conclusion.Let us access the data 'apple' in the above nested list.First, at index 0 there is a list ['apple','orange'] and at index 1 there is another list ['carrot','potato']. Hence z[0] should give us the first list which contains 'apple' and 'orange'. From this list we can take the second element (index 1) to get 'orange'
###Code
print(z[0][1])
###Output
_____no_output_____
###Markdown
Lists do not have to be homogeneous. Each element can be of a different type:
###Code
["this is a valid list",2,3.6,(1+2j),["a","sublist"]]
###Output
_____no_output_____
###Markdown
Slicing Indexing is limited to accessing a single element; slicing, on the other hand, accesses a sequence of data inside the list. In other words, it "slices" the list.Slicing is done by giving the index values of the first element and of the element just past the last one required in the sliced list. It is written as parentlist[ a : b ], where a and b are index values from the parent list (the element at index b is not included). If a is not given, the slice starts from the first element; if b is not given, it runs to the end of the list.
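A few more illustrative slices (an added sketch, not part of the original notebook, using its own small list):

```python
num = [0,1,2,3,4,5,6,7,8,9]
print(num[:4])    # omitted start defaults to the beginning -> [0, 1, 2, 3]
print(num[7:])    # omitted end runs to the last element    -> [7, 8, 9]
print(num[-3:])   # negative indices count from the end     -> [7, 8, 9]
```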
###Code
num = [0,1,2,3,4,5,6,7,8,9]
print(num[0:4])
print(num[4:])
###Output
_____no_output_____
###Markdown
You can also slice a parent list with a fixed step length.
###Code
num[:9:3]
###Output
_____no_output_____
###Markdown
Built in List Functions To find the length of the list or the number of elements in a list, **len( )** is used.
###Code
len(num)
###Output
_____no_output_____
###Markdown
If the list consists of numeric elements, then **min( )** and **max( )** give the minimum and maximum values in the list. Similarly, **sum( )** gives the sum of all the elements.
###Code
print("min =",min(num)," max =",max(num)," total =",sum(num))
max(num)
###Output
_____no_output_____
###Markdown
Lists can be concatenated with the '+' operator. The resulting list will contain all the elements of the lists that were added; it will not be a nested list.
###Code
[1,2,3] + [5,4,7]
###Output
_____no_output_____
###Markdown
You might need to check whether a particular element is present in a predefined list. Consider the list below.
###Code
names = ['Earth','Air','Fire','Water']
###Output
_____no_output_____
###Markdown
Let's check whether 'Fire' and 'Space' are present in the list names. A conventional approach would be to use a for loop to iterate over the list together with an if condition. But in Python you can use the 'a in b' construct, which returns True if a is present in b and False if not.
###Code
'Fire' in names
'Space' in names
###Output
_____no_output_____
###Markdown
In a list with string elements, **max( )** and **min( )** are still applicable and return the first/last element in lexicographical order.
###Code
mlist = ['bzaa','ds','nc','az','z','klm']
print("max =",max(mlist))
print("min =",min(mlist))
###Output
_____no_output_____
###Markdown
Here the first character of each element is considered: 'z' has the highest ASCII value and is therefore returned, while 'a' has the lowest. But what if numbers are declared as strings?
###Code
nlist = ['1','94','93','1000']
print("max =",max(nlist))
print('min =',min(nlist))
###Output
_____no_output_____
###Markdown
Even if the numbers are declared as strings, the first character of each element is considered and the maximum and minimum values are returned accordingly. But if you want to find the **max( )** string element based on the length of the string, the `key` parameter can be used to specify the function that generates the value on which to compare. Hence finding the longest and shortest string in `mlist` can be done using the `len` function:
###Code
print('longest =',max(mlist, key=len))
print('shortest =',min(mlist, key=len))
###Output
_____no_output_____
###Markdown
Any other built-in or user defined function can be used.A string can be converted into a list by using the **list()** function, or more usefully using the **split()** method, which breaks strings up based on spaces.
###Code
print(list('hello world !'),'Hello World !!'.split())
###Output
_____no_output_____
###Markdown
**append( )** is used to add a single element at the end of the list.
###Code
lst = [1,1,4,8,7]
lst.append(1)
print(lst)
###Output
_____no_output_____
###Markdown
Appending a list to a list would create a sublist. If a nested list is not what is desired then the **extend( )** function can be used.
###Code
lst.extend([10,11,12])
print(lst)
###Output
_____no_output_____
###Markdown
**count( )** is used to count the number of occurrences of a particular element in the list.
###Code
lst.count(1)
###Output
_____no_output_____
###Markdown
**index( )** is used to find the index value of a particular element. Note that if there are multiple elements of the same value then the first index value of that element is returned.
###Code
lst.index(1)
###Output
_____no_output_____
###Markdown
**insert(x,y)** is used to insert an element y at a specified index value x. The **append( )** function only makes it possible to insert at the end.
###Code
lst.insert(5, 'name')
print(lst)
###Output
_____no_output_____
###Markdown
**insert(x,y)** inserts an element but does not replace one. If you want to replace an element with another, simply assign the new value to that particular index.
###Code
lst[5] = 'Python'
print(lst)
###Output
_____no_output_____
###Markdown
The **pop( )** function removes and returns the last element in the list. This is similar to the operation of a stack; hence it wouldn't be wrong to say that lists can be used as a stack.
###Code
lst.pop()
###Output
_____no_output_____
###Markdown
An index value can be specified to pop the element corresponding to that index value.
###Code
lst.pop(0)
###Output
_____no_output_____
###Markdown
**pop( )** is used to remove an element based on its index value, and the removed element can be assigned to a variable. One can also remove an element by specifying the element itself using the **remove( )** function.
###Code
lst.remove('Python')
print(lst)
###Output
_____no_output_____
###Markdown
An alternative to the **remove( )** function that works with an index value is **del**
###Code
del lst[1]
print(lst)
###Output
_____no_output_____
###Markdown
The order of the elements in the list can be reversed by using the **reverse()** function.
###Code
lst.reverse()
print(lst)
###Output
_____no_output_____
###Markdown
Note that if the list contained a nested list, that nested list would be treated as a single element of the parent list, so the elements inside it would not be reversed.Python offers the built-in method **sort( )** to arrange the elements in ascending order. Alternatively, **sorted()** can be used to construct a sorted copy of the list
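One common pitfall worth illustrating with a small added sketch (not from the original notebook): **sort( )** works in place and returns None, so its return value should not be assigned:

```python
data = [3, 1, 2]
result = data.sort()
print(result)             # None -- sort() modifies data in place
print(data)               # [1, 2, 3]
print(sorted([3, 1, 2]))  # sorted() returns a new sorted list instead
```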
###Code
lst.sort()
print(lst)
print(sorted([3,2,1])) # another way to sort
###Output
_____no_output_____
###Markdown
For descending order: by default the `reverse` argument is False; setting it to True arranges the elements in descending order.
###Code
lst.sort(reverse=True)
print(lst)
###Output
_____no_output_____
###Markdown
Similarly, for lists containing string elements, **sort( )** sorts the elements based on their ASCII values in ascending order, and in descending order when reverse=True is specified.
###Code
names.sort()
print(names)
names.sort(reverse=True)
print(names)
###Output
_____no_output_____
###Markdown
To sort based on length key=len should be specified as shown.
###Code
names.sort(key=len)
print(names)
print(sorted(names,key=len,reverse=True))
###Output
_____no_output_____
###Markdown
Copying a list Assignment of a list does not imply copying; it simply creates a second reference to the same list. Most new Python programmers get caught out by this initially. Consider the following:
###Code
lista= [2,1,4,3]
listb = lista
print(listb)
###Output
_____no_output_____
###Markdown
Here we have declared a list, lista = [2,1,4,3], and assigned it to listb, which appears to copy it. Now we perform some operations on lista.
###Code
lista.sort()
lista.pop()
lista.append(9)
print("A =",lista)
print("B =",listb)
###Output
_____no_output_____
###Markdown
listb has also changed, though no operation has been performed on it. This is because both names refer to the same list object in memory. So how do we fix this? If you recall, in slicing we saw that parentlist[a:b] returns a new list from the parent list with start index a and end index b, and if a and b are not given it defaults to the first and last elements. We use the same concept here: by slicing, we assign a copy of lista's data to listb.
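As an aside (an added sketch, not from the original notebook), the standard library offers other ways to get an independent copy; for nested lists, `copy.deepcopy` also copies the inner lists:

```python
import copy

lista = [2, 1, 4, 3]
listb = lista.copy()          # shallow copy, same as lista[:]
listc = list(lista)           # also a shallow copy
listd = copy.deepcopy(lista)  # deep copy, useful when the list contains nested lists

lista.append(9)
print(lista, listb, listc, listd)  # only lista gains the 9
```

The cell below sticks with the slice notation described above.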
###Code
lista = [2,1,4,3]
listb = lista[:] # make a copy by taking a slice from beginning to end
print("Starting with:")
print("A =",lista)
print("B =",listb)
lista.sort()
lista.pop()
lista.append(9)
print("Finished with:")
print("A =",lista)
print("B =",listb)
###Output
_____no_output_____
###Markdown
List comprehension. A very powerful concept in Python (one that also applies to tuples, sets and dictionaries, as we will see below) is the ability to define lists using a list comprehension (looping) expression. For example:
###Code
[i**2 for i in [1,2,3]]
###Output
_____no_output_____
###Markdown
As can be seen this constructs a new list by taking each element of the original `[1,2,3]` and squaring it. We can have multiple such implied loops to get for example:
###Code
[10*i+j for i in [1,2,3] for j in [5,7]]
###Output
_____no_output_____
###Markdown
Finally the looping can be filtered using an **if** expression with the **for** - **in** construct.
###Code
[10*i+j for i in [1,2,3] if i%2==1 for j in [4,5,7] if j >= i+4] # keep odd i and j larger than i+3 only
###Output
_____no_output_____
###Markdown
Tuples Tuples are similar to lists; the big difference is that the elements inside a list can be changed, while the elements of a tuple cannot. Think of a tuple as a fixed group of values that belong together and should not change. For a better understanding, recall the **divmod()** function.
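To make the immutability point concrete, here is a small illustrative sketch (an addition, not part of the original notebook):

```python
point = (1, 2, 3)
try:
    point[0] = 10          # tuples do not support item assignment
except TypeError as err:
    print("TypeError:", err)

coords = [1, 2, 3]
coords[0] = 10             # lists, in contrast, can be modified in place
print(coords)
```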
###Code
xyz = divmod(10,3)
print(xyz)
print(type(xyz))
###Output
_____no_output_____
###Markdown
Here the quotient has to be 3 and the remainder has to be 1. These values cannot change when 10 is divided by 3, hence divmod returns them in a tuple. To define an empty tuple, a variable is assigned to parentheses ( ) or tuple( ).
###Code
tup = ()
tup2 = tuple()
###Output
_____no_output_____
###Markdown
If you want to declare a one-element tuple directly, it can be done by adding a comma after the value.
###Code
27,
###Output
_____no_output_____
###Markdown
The integer 27 multiplied by 2 yields 54, but multiplying the tuple (27,) by 2 repeats the data twice.
###Code
2*(27,)
###Output
_____no_output_____
###Markdown
Values can be assigned while declaring a tuple: tuple( ) takes a list (or a string) as input and converts it into a tuple.
###Code
tup3 = tuple([1,2,3])
print(tup3)
tup4 = tuple('Hello')
print(tup4)
###Output
_____no_output_____
###Markdown
It follows the same indexing and slicing as Lists.
###Code
print(tup3[1])
tup5 = tup4[:3]
print(tup5)
###Output
_____no_output_____
###Markdown
Mapping one tuple to anotherTuples can be used on the left-hand side of assignments and are matched to the corresponding right-hand side elements, assuming they have the right length
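A classic use of this unpacking (a small added illustration, not from the original notebook) is swapping two variables without a temporary:

```python
first, second = 'spam', 'eggs'
first, second = second, first   # the right-hand tuple is built before assignment
print(first, second)            # eggs spam
```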
###Code
(a,b,c)= ('alpha','beta','gamma') # are optional
a,b,c= 'alpha','beta','gamma' # The same as the above
print(a,b,c)
a,b,c = ['Alpha','Beta','Gamma'] # can assign lists
print(a,b,c)
[a,b,c]=('this','is','ok') # even this is OK
print(a,b,c)
###Output
_____no_output_____
###Markdown
More complex nested unpackings of values are also possible
###Code
(w,(x,y),z)=(1,(2,3),4)
print(w,x,y,z)
(w,xy,z)=(1,(2,3),4)
print(w,xy,z) # notice that xy is now a tuple
###Output
_____no_output_____
###Markdown
Built In Tuple functions The **count()** function counts the number of occurrences of the specified element in the tuple.
###Code
d=tuple('a string with many "a"s')
d.count('a')
###Output
_____no_output_____
###Markdown
The **index()** function returns the index of the specified element. If the element occurs more than once, the index of its first occurrence is returned
###Code
d.index('a')
###Output
_____no_output_____
###Markdown
Sets Sets are mainly used to eliminate repeated values in a sequence/list. They are also used to perform standard set operations.Sets are declared as set(), which initializes an empty set. Alternatively, `set([sequence])` can be executed to declare a set with elements
###Code
set1 = set()
print(type(set1))
set0 = set([1,2,2,3,3,4])
set0 = {1,2,2,3,3,4} # equivalent to the above
print(set0)
###Output
_____no_output_____
###Markdown
The elements 2 and 3, which were repeated, are seen only once; in a set each element is distinct. However, be warned that **{}** is **NOT** a set, but a dictionary (see the next chapter of this tutorial)
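A quick illustrative comparison (an added sketch, not from the original notebook):

```python
print(type({}))       # <class 'dict'> -- empty braces give a dictionary
print(type(set()))    # <class 'set'>  -- the only way to write an empty set
print(type({1, 2}))   # <class 'set'>  -- braces with elements do form a set
```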
###Code
type({})
###Output
_____no_output_____
###Markdown
Built-in Functions
###Code
set1 = set([1,2,3])
set2 = set([2,3,4,5])
###Output
_____no_output_____
###Markdown
**union( )** function returns a set which contains all the elements of both sets without repetition.
###Code
set1.union(set2)
###Output
_____no_output_____
###Markdown
**add( )** will add a particular element to the set. Note that sets are unordered, so the newly added element can appear anywhere, not necessarily at the end.
###Code
set1.add(0)
set1
###Output
_____no_output_____
###Markdown
**intersection( )** function outputs a set which contains all the elements that are in both sets.
###Code
set1.intersection(set2)
###Output
_____no_output_____
###Markdown
**difference( )** function outputs a set which contains the elements that are in set1 but not in set2.
###Code
set1.difference(set2)
###Output
_____no_output_____
###Markdown
**symmetric_difference( )** function outputs a set which contains the elements that are in exactly one of the two sets.
###Code
set2.symmetric_difference(set1)
###Output
_____no_output_____
###Markdown
**issubset( ), isdisjoint( ), issuperset( )** are used to check whether set1/set2 is a subset of, disjoint from, or a superset of set2/set1, respectively.
###Code
set1.issubset(set2)
set2.isdisjoint(set1)
set2.issuperset(set1)
###Output
_____no_output_____
###Markdown
**pop( )** is used to remove an arbitrary element in the set
###Code
set1.pop()
print(set1)
###Output
_____no_output_____
###Markdown
**remove( )** function deletes the specified element from the set.
###Code
set1.remove(2)
set1
###Output
_____no_output_____
###Markdown
**clear( )** is used to clear all the elements and make that set an empty set.
###Code
set1.clear()
set1
###Output
_____no_output_____ |
_posts/numpy/randint/Randint.ipynb | ###Markdown
New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! ImportsThis tutorial imports [Plotly](https://plot.ly/python/getting-started/) and [Numpy](http://www.numpy.org/).
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
###Output
_____no_output_____
###Markdown
Randint`np.random.randint()` allows users to pick uniformly from the set of integers [`low`, `low + 1`, ..., `high - 1`] (the upper bound `high` is excluded).
###Code
import plotly.plotly as py
import plotly.graph_objs as go
num_of_points = 200
random_numbers = np.random.randint(0, 10, num_of_points)
trace1 = go.Scatter(
x=[j for j in range(num_of_points)],
y=random_numbers,
mode='markers',
marker = dict(
size=7,
color=random_numbers,
colorscale='Jet',
symbol='diamond'
),
name='Numbers sampled from 0 to 9'
)
py.iplot([trace1], filename='numpy-randint')
help(np.random.randint)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Randint.ipynb', 'numpy/randint/', 'Randint | plotly',
'How to sample numbers from a range of integers uniformly.',
title = 'Numpy Randint | plotly',
name = 'Randint',
has_thumbnail='true', thumbnail='thumbnail/numpy-random-image.jpg',
language='numpy', page_type='example_index',
display_as='numpy-random', order=3)
###Output
_____no_output_____ |
DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling-master/module1-join-and-reshape-data/LS_DS_121_Join_and_Reshape_Data.ipynb | ###Markdown
_Lambda School Data Science_ Join and Reshape datasetsObjectives- concatenate data with pandas- merge data with pandas- understand tidy data formatting- melt and pivot data with pandasLinks- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data) - Combine Data Sets: Standard Joins - Tidy Data - Reshaping Data- Python Data Science Handbook - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables Reference- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)- [Hadley Wickham's famous paper](http://vita.had.co.nz/papers/tidy-data.html) on Tidy Data Download dataWe’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!
###Code
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
%cd instacart_2017_05_01
!ls -lh *.csv
###Output
-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv
-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv
-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv
-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv
-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv
-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv
###Markdown
Join Datasets Goal: Reproduce this exampleThe first two orders for user id 1:
###Code
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'
example = Image(url=url, width=600)
display(example)
###Output
_____no_output_____
###Markdown
Load dataHere's a list of all six CSV filenames
###Code
!ls -lh *.csv
###Output
-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv
-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv
-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv
-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv
-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv
-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv
###Markdown
For each CSV- Load it with pandas- Look at the dataframe's shape- Look at its head (first rows)- `display(example)`- Which columns does it have in common with the example we want to reproduce? aisles
###Code
import pandas as pd
aisles = pd.read_csv('aisles.csv')
print(aisles.shape)
aisles.head()
###Output
(134, 2)
###Markdown
departments
###Code
departments = pd.read_csv("departments.csv")
print(departments.shape)
departments.head()
###Output
(21, 2)
###Markdown
order_products__prior
###Code
order_products__prior = pd.read_csv('order_products__prior.csv')
print(order_products__prior.shape)
order_products__prior.head()
###Output
(32434489, 4)
###Markdown
order_products__train
###Code
order_products__train = pd.read_csv('order_products__train.csv')
print(order_products__train.shape)
order_products__train.head()
###Output
(1384617, 4)
###Markdown
orders
###Code
orders = pd.read_csv('orders.csv')
print(orders.shape)
orders.head()
###Output
(3421083, 7)
###Markdown
products
###Code
products = pd.read_csv('products.csv')
print(products.shape)
products.head()
###Output
(49688, 4)
###Markdown
Concatenate order_products__prior and order_products__train
###Code
order_products = pd.concat([order_products__prior, order_products__train])
print(order_products.shape)
order_products.head()
assert (order_products__prior.shape[0] + order_products__train.shape[0]) == order_products.shape[0]
# Filter Order Products
#if this == that , do something
order_products[(order_products["order_id"] == 2539329) | (order_products["order_id"] == 2398795)]
###Output
_____no_output_____
###Markdown
Get a subset of orders — the first two orders for user id 1 From `orders` dataframe:- user_id- order_id- order_number- order_dow- order_hour_of_day Merge dataframes Merge the subset from `orders` with columns from `order_products`
###Code
display(example)
orders.shape
orders[orders["user_id"] == 1][orders["order_number"] <=2 ]
#Merge dataframes
condition = (orders["user_id"] == 1) & (orders["order_number"] <=2 )
columns = [
'user_id',
'order_id',
'order_number',
'order_dow',
'order_hour_of_day'
]
subset = orders.loc[condition, columns]
subset
columns = ["order_id", "add_to_cart_order", "product_id"]
print(order_products.shape)
order_products[columns].head()
###Output
(33819106, 4)
###Markdown
Merge with columns from `products`
###Code
merged = pd.merge(subset, order_products[columns], how="inner", on="order_id")
merged
final = pd.merge(merged, products[["product_id", "product_name"]], how="inner", on="product_id")
final
final = final.sort_values(by=["order_number", "add_to_cart_order"])
final
final.columns = [column.replace('_', ' ') for column in final]
final
display(example)
###Output
_____no_output_____
###Markdown
Reshape Datasets Why reshape data? Some libraries prefer data in different formats. For example, the Seaborn data visualization library often (but not always) prefers data in "Tidy" format.> "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.htmlorganizing-datasets) This format is alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:> - Each variable is a column- Each observation is a row> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot." Data science is often about putting square pegs in round holes. Here's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling! Hadley Wickham's Examples. From his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
index=['John Smith', 'Jane Doe', 'Mary Johnson'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
###Output
_____no_output_____
###Markdown
"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild. The table has two columns and three rows, and both rows and columns are labelled."
###Code
table1
###Output
_____no_output_____
###Markdown
"There are many ways to structure the same underlying data. Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different."
###Code
table2
###Output
_____no_output_____
###Markdown
"Table 3 reorganises Table 1 to make the values, variables and obserations more clear.Table 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable."| name | trt | result ||--------------|-----|--------|| John Smith | a | - || Jane Doe | a | 16 || Mary Johnson | a | 3 || John Smith | b | 2 || Jane Doe | b | 11 || Mary Johnson | b | 1 | Table 1 --> TidyWe can use the pandas `melt` function to reshape Table 1 into Tidy format.
###Code
table1.columns.to_list()
table1.index.to_list()
table1 = table1.reset_index()
table1
table1.index.to_list()
tidy = table1.melt(id_vars="index")
tidy = table1.melt(id_vars="index", value_vars=["treatmenta", "treatmentb"])
tidy
tidy = tidy.rename(columns={
'index': 'name',
'variable': 'trt',
'value': 'result'
})
tidy
tidy.trt = tidy.trt.str.replace('treatment', '')
tidy
###Output
_____no_output_____
###Markdown
Table 2 --> Tidy
###Code
table2.columns.to_list()
table2.index.to_list()
table2 = table2.reset_index()
table2
table2.index.to_list()
tidy2 = table2.melt(id_vars='index')
tidy2 = table2.melt(id_vars='index', value_vars=["John Smith", "Jane Doe", "Mary Johnson"])
tidy2
tidy2 = tidy2.rename(columns={
'index': 'trt',
'variable': 'name',
'value': 'result'
})
tidy2
tidy2.trt = tidy2.trt.str.replace('treatment', '')
tidy2
###Output
_____no_output_____
###Markdown
Tidy --> Table 1The `pivot_table` function is the inverse of `melt`.
###Code
tidy.pivot_table(index='name', columns='trt', values='result')
###Output
_____no_output_____
###Markdown
Tidy --> Table 2
###Code
tidy2.pivot_table(index='name', columns='trt', values='result')
###Output
_____no_output_____
###Markdown
Seaborn exampleThe rules can be simply stated:- Each variable is a column- Each observation is a rowA helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
###Code
sns.catplot(x='trt', y='result', col='name',
kind='bar', data=tidy, height=2);
###Output
_____no_output_____
###Markdown
Now with Instacart data
###Code
products = pd.read_csv('products.csv')
order_products = pd.concat([pd.read_csv('order_products__prior.csv'),
pd.read_csv('order_products__train.csv')])
orders = pd.read_csv('orders.csv')
###Output
_____no_output_____
###Markdown
Goal: Reproduce part of this exampleInstead of a plot with 50 products, we'll just do two — the first products from each list- Half And Half Ultra Pasteurized- Half Baked Frozen Yogurt
###Code
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'
example = Image(url=url, width=600)
display(example)
###Output
_____no_output_____
###Markdown
So, given a `product_name`, we need to calculate its `order_hour_of_day` pattern. Subset and Merge. One challenge of performing a merge on this data is that the `products` and `orders` datasets do not have any common columns to merge on. Because of this, we will have to use the `order_products` dataset to provide the linking columns for the merge.
###Code
product_names = ['Half Baked Frozen Yogurt', 'Half And Half Ultra Pasteurized']
products.columns.tolist()
orders.columns.tolist()
order_products.columns.tolist()
merged = (products[['product_id', 'product_name']]
.merge(order_products[['order_id', 'product_id']])
.merge(orders[['order_id', 'order_hour_of_day']]))
merged.head()
condition = ((merged['product_name']=='Half Baked Frozen Yogurt') | (merged['product_name']=='Half And Half Ultra Pasteurized'))
merged = merged[condition]
print(merged.shape)
merged.head()
product_names = ['Half Baked Frozen Yogurt', 'Half And Half Ultra Pasteurized']
condition = merged['product_name'].isin(product_names)
subset = merged[condition]
print(subset.shape)
subset.head()
###Output
(5978, 4)
###Markdown
4 ways to reshape and plot
###Code
froyo = subset[subset['product_name']=='Half Baked Frozen Yogurt']
cream = subset[subset['product_name']=='Half And Half Ultra Pasteurized']
###Output
_____no_output_____
###Markdown
1. value_counts
###Code
cream['order_hour_of_day'].value_counts(normalize=True).sort_index()
(cream['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot())
(froyo['order_hour_of_day']
.value_counts(normalize=True)
.sort_index()
.plot());
###Output
_____no_output_____
###Markdown
2. crosstab
###Code
(pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize='columns')).plot();
###Output
_____no_output_____
###Markdown
3. Pivot Table
###Code
subset.pivot_table(index='order_hour_of_day',
columns='product_name',
values='order_id',
aggfunc=len).plot();
###Output
_____no_output_____
###Markdown
4. melt
###Code
table = pd.crosstab(subset['order_hour_of_day'],
subset['product_name'],
normalize=True)
table
melted = (table
.reset_index()
.melt(id_vars='order_hour_of_day')
.rename(columns={
'order_hour_of_day': 'Hour of Day Ordered',
'product_name': 'Product',
'value': 'Percent of Orders by Product'
}))
melted
sns.relplot(x='Hour of Day Ordered',
y='Percent of Orders by Product',
hue='Product',
data=melted,
kind='line');
###Output
_____no_output_____ |
Desafio 7/modelo_enem.ipynb | ###Markdown
In 2016: On one day the students took the human sciences (CH) and natural sciences (CN) exams. On the other day, they took the languages, codes and their technologies (LC) exam, mathematics (MT), and the essay.
###Code
faltou = 0
presente = 1
eliminado = 2
dados_notas.isna().sum()
faltou = 0
presente = 1
eliminado = 2
dados_notas_filled = dados_notas.copy()
dados_notas_filled.loc[df_train['TP_PRESENCA_CN'].isin([faltou, eliminado]), 'NU_NOTA_CN'] = 0
dados_notas_filled.loc[df_train['TP_PRESENCA_CH'].isin([faltou, eliminado]), 'NU_NOTA_CH'] = 0
dados_notas_filled.loc[df_train['TP_PRESENCA_LC'].isin([faltou, eliminado]), 'NU_NOTA_LC'] = 0
dados_notas_filled.loc[df_train['TP_PRESENCA_MT'].isin([faltou, eliminado]), 'NU_NOTA_MT'] = 0
dados_notas_filled.loc[df_train['TP_STATUS_REDACAO'].isin([2,3,4,5,6,7,8,9]),'NU_NOTA_REDACAO'] = 0 # 1 = no problems (all other values represent reasons for a zero score)
dados_notas_filled.loc[df_train['TP_PRESENCA_LC'].isin([faltou, eliminado]),'NU_NOTA_REDACAO'] = 0 # zero if the person missed the other two exams on the same day (LC and MT)
dados_notas_filled.isna().sum()
df_test[['NU_NOTA_CN', 'NU_NOTA_CH', 'NU_NOTA_LC', 'NU_NOTA_REDACAO']].isna().sum()
dados_notas_test = df_test[['NU_NOTA_CN', 'NU_NOTA_CH', 'NU_NOTA_LC', 'NU_NOTA_REDACAO']]
dados_notas_test
dados_notas_test_filled = dados_notas_test.copy()
dados_notas_test_filled.loc[df_test['TP_PRESENCA_CN'].isin([faltou, eliminado]), 'NU_NOTA_CN'] = 0
dados_notas_test_filled.loc[df_test['TP_PRESENCA_CH'].isin([faltou, eliminado]), 'NU_NOTA_CH'] = 0
dados_notas_test_filled.loc[df_test['TP_PRESENCA_LC'].isin([faltou, eliminado]), 'NU_NOTA_LC'] = 0
dados_notas_test_filled.loc[df_test['TP_STATUS_REDACAO'].isin([2,3,4,5,6,7,8,9]),'NU_NOTA_REDACAO'] = 0 # 1 = no problems (all other values represent reasons for a zero score)
dados_notas_test_filled.loc[df_test['TP_PRESENCA_LC'].isin([faltou, eliminado]),'NU_NOTA_REDACAO'] = 0 # zero if the person missed the other two exams on the same day (LC and MT)
dados_notas_test_filled.isna().sum()
corr = dados_notas_filled.corr()
ax = plt.subplots(figsize=(11, 8))
sns.heatmap(corr, annot=True)
colunas_features = ['NU_NOTA_CN', 'NU_NOTA_CH', 'NU_NOTA_LC', 'NU_NOTA_REDACAO']
x_train = dados_notas_filled[colunas_features]
y_train = dados_notas_filled['NU_NOTA_MT']
x_test = dados_notas_test_filled[colunas_features]
###Output
_____no_output_____
###Markdown
Using Linear Regression
###Code
linear_regression = LinearRegression()
linear_regression.fit(x_train,y_train)
predicted = linear_regression.predict(x_test)
predicted
answer = pd.DataFrame({'NU_INSCRICAO': df_test['NU_INSCRICAO'], 'NU_NOTA_MT': predicted})
answer.head()
answer.to_csv('answer.csv')
###Output
_____no_output_____ |
Generative/imgFX_scatterCircles.ipynb | ###Markdown
Circle Scatter Diagram---- Author: Diego Inácio- GitHub: [github.com/diegoinacio](https://github.com/diegoinacio)- Notebook: [imgFX_scatterCircles.ipynb](https://github.com/diegoinacio/creative-coding-notebooks/blob/master/Generative/imgFX_scatterCircles.ipynb)---Image effect algorithm that scatters raster circles (with anti-aliasing).
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random, math, time
from PIL import Image as image
from _utils import *
def average(vec3):
'''
Returns the mean value between axes
'''
avr = int((vec3[0] + vec3[1] + vec3[2])/3.0)
return avr
def midHigh(img):
'''
Returns the median and the maximum value of the input image
'''
mid = [0.0, 0.0, 0.0]
high = [0.0, 0.0, 0.0]
for y in range(imy):
for x in range(imx):
pix = img.getpixel((x, y))
mid[0] += pix[0]
mid[1] += pix[1]
mid[2] += pix[2]
if average(pix) > average(high): high = pix
else: continue
mid[0] = int(mid[0]/(imx*imy))
mid[1] = int(mid[1]/(imx*imy))
mid[2] = int(mid[2]/(imx*imy))
return (tuple(mid), tuple(high))
def setRange(value, smin, smax, dmin, dmax):
'''
Linear interpolation
'''
value = float(value)
smin, smax = float(smin), float(smax)
dmin, dmax = float(dmin), float(dmax)
out = dmin + ((value - smin)/(smax - smin))*(dmax - dmin)
return int(out)
class Point:
def __init__(self, tx, ty, cd, lvl):
self.tx = tx
self.ty = ty
self.cd = tuple(cd)
rad = setRange(lvl, lod - 1, 0, minSize, maxSize)
self.rad = int(random.uniform(rad - rad*varSize, rad + rad*varSize))
def circle(self):
'''
Draw a circle
'''
r1 = self.rad
r2 = self.rad + antAlsg
for y in range(self.ty - r2, self.ty + r2):
for x in range(self.tx - r2, self.tx + r2):
try:
dx = math.pow(self.tx - x, 2.0)
dy = math.pow(self.ty - y, 2.0)
r = math.sqrt(dx + dy)
if r <= r1:
imgOut.putpixel((x, y), self.cd)
elif r > r2:
cdt = imgOut.getpixel((x, y))
imgOut.putpixel((x, y), cdt)
else:
cdt = imgOut.getpixel((x, y))
ca = (r2 - r)/(r2 - r1)
cr = int(self.cd[0]*ca + cdt[0]*(1 - ca))
cg = int(self.cd[1]*ca + cdt[1]*(1 - ca))
cb = int(self.cd[2]*ca + cdt[2]*(1 - ca))
imgOut.putpixel((x, y), (cr, cg, cb))
except:
continue
%%time
# parameters
lod = 8 # level of detail
minSamp = 0.001 # minimum probability
maxSamp = 0.01 # maximum probability
minSize = 8 # minimum size
maxSize = 32 # maximum size
varSize = 0.5 # size deviation
antAlsg = 1 # circle antialising level
# init
img = image.open('../_data/fruits.png')
imx = img.size[0]
imy = img.size[1]
imgIn = image.new('RGB', img.size)
imgIn.paste(img)
midPix, highPix = midHigh(imgIn)
highPixMax = max(highPix)
imgOut = image.new('RGB', img.size, midPix)
# execution
imgArr = np.asarray(imgIn)
imgArrM = imgArr.max(axis=2)
lpt = []
for lvl in range(lod):
mmin = int(lvl*highPixMax/lod)
mmax = int((lvl + 1)*highPixMax/lod)
sel = np.argwhere(np.logical_and(imgArrM > mmin,
imgArrM <= mmax))
sel = np.argwhere(imgArrM > mmin)
np.random.shuffle(sel)
lim = np.linspace(minSamp, maxSamp, lod)[lvl]
lim = int(lim*len(sel))
for py, px in sel[:lim]:
cd = imgArr[py, px]
lpt.append(Point(px, py, cd, lvl))
for point in lpt:
point.circle()
output = np.array([np.asarray(imgIn),
np.asarray(imgOut)])
panel(output, (2, 1))
###Output
_____no_output_____ |
data/visualise_features_and_partition_data.ipynb | ###Markdown
visualise and play with the data / features
###Code
df = pd.read_csv('CAE_dataset.csv')
# delete nan values
a = df.values
b = np.argwhere(np.isnan(a))
list_non_repeating = []
for e in b:
if e[0] not in list_non_repeating:
list_non_repeating.append(e[0])
print(len(list_non_repeating))
df = df.drop(list_non_repeating)
b = np.argwhere(np.isnan(df.values))
b
df.head(n=10)
###Output
_____no_output_____
###Markdown
plotting the first feature as a function of time
###Code
# all of the features
def get_features(df):
return [np.array(df.iloc[:,n]) for n in range(1,12)]
features = get_features(df)
#for i in range(len(features)): print(len(features[i]),end="\t")
# x is a list of start indexes of each person, [starting point index,pilot_id]
# returns x
def list_of_indexes(df):
pilot_id = np.array(df.iloc[:,-1])
x0 = pilot_id[0] # pilot id is the current
x=[[0,pilot_id[0]]]
for i in range(len(pilot_id)):
if pilot_id[i]!=x0:
x.append([i,pilot_id[i]])
x0 = pilot_id[i]
# find the number of ids that were counted twice
#len(x)
count=0
for i in range(len(x)):
for j in range(i+1,len(x)):
if x[i][1]==x[j][1]:
#print(i,":",x[i],"\n",j,":",x[j])
count+=1
print("count" ,count)
return x,count
x,count = list_of_indexes(df)
print(len(x))
print(count)
x[287]
# plot the first person
i = 27 # e.g. the 5'th pilot's data, the index
def disp_features(i):
#n=1
df = pd.read_csv('CAE_dataset.csv')
plt.figure(figsize=(16,16))
features = get_features(df)
for n in range(len(features)):
feature = features[n]
plt.subplot(4,3,n+1)
try: plt.plot(np.linspace(0,(x[i+1][0]-x[i][0])/10,x[i+1][0]-x[i][0]),feature[x[i][0]:x[i+1][0]])
except: plt.plot(np.linspace(0,x[i][0]/10,x[i][0]),feature[x[i][0]:])
if n==10: plt.title("0 or 1, defective pilot label")
else: plt.title("feature n="+str(n))
plt.xlabel("time (s)")
#plt.savefig("index_"+str(i)+".png")
plt.show()
# the following doesn't work
def disp_feat_start_index(start_index,df):
plt.figure(figsize=(16,16))
features = get_features(df)
for n in range(len(features)):
for n in range(len(features)):
feature = features[n]
plt.subplot(4,3,n+1)
y = [i[0] for i in x]
index_i = x.index(start_index)
try: plt.plot(np.linspace(0,x[index_i][0]/10, x[index_i+1][0]-x[index_i][0]), feature[x[index_i][0]])
except: print()
plt.show()
#disp_feat_start_index(647,df)
#x.index([0, 327])
x
disp_features(409)
disp_features(101)
###Output
_____no_output_____
###Markdown
Partition the data and pickle the result
###Code
# want all the features so that I can chunk each test run
def get_features_by_test_run(df):
features = np.transpose([np.array(df.iloc[:,n]) for n in range(1,13)])
indexes,count = list_of_indexes(df)
# split the feature rows wherever the pilot id changes and return one block per test run
starts = [i[0] for i in indexes] + [len(features)]
features_by_run = [features[starts[i]:starts[i+1]] for i in range(len(starts)-1)]
return features_by_run
np.transpose(get_features_by_test_run(df))[0]
###Output
count 26
###Markdown
partition the data
###Code
features = np.transpose([np.array(df.iloc[:,n]) for n in range(1,13)])
indexes,count = list_of_indexes(df)
indexes = [i[0] for i in indexes]
features_by_run = []
j=0
for i in range(len(features)):
if i==0: test_run = [features[i]]
elif features[i][-1]!= features[i-1][-1]:
features_by_run.append(test_run)
test_run = [features[i]]
else: test_run.append(features[i])
#if i%1000==0:#trace
# print(i)#trace
features_by_run.append(test_run)
len(features_by_run)
feat=features_by_run
print(len(feat),len(feat[4]))
#feat[0]
###Output
470 595
###Markdown
pickle the partitioned data
###Code
f = open("partitioned_features.pickle","wb")
pickle.dump(features_by_run,f)
f.close()
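# Added sketch (not part of the original notebook): the pickled partitions can
# later be reloaded with something like:
#   with open("partitioned_features.pickle", "rb") as f:
#       features_by_run = pickle.load(f)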
###Output
_____no_output_____
###Markdown
partition and pickle the data by label
###Code
defective_pilot = []
good_pilot = []
for i in features_by_run:
if i[0][-2]==0: good_pilot.append(i)
elif i[0][-2]==1: defective_pilot.append(i)
else: raise Exception
len(defective_pilot)
len(good_pilot)
f = open("partitioned_features_defective.pickle","wb")
pickle.dump(defective_pilot,f)
f.close()
f = open("partitioned_features_good.pickle","wb")
pickle.dump(good_pilot,f)
f.close()
###Output
_____no_output_____ |
Cuadernos/Suma_y_promedio_de_numeros.ipynb | ###Markdown
How do we calculate the average of x numbers?GitHub: [AlexisBautistaB](https://github.com/AlexisBautistaB)Twitter: [@BautistaBAlexis](https://twitter.com/BautistaBAlexis)Instagram: [@alexisby9](https://www.instagram.com/alexisby9/?hl=es-la) First we need to lay out our algorithm:* Ask the user for numbers until they want to stop.* Convert the input data from str to float.* Sum those numbers.* Average the numbers.* Report the results to the user. Python code
###Code
# Name of the list that will hold the numbers
numeros = []
# Give the user instructions
print("Introduce los números que quieras y cuando quieras parar escribe listo en minusculas")
# Infinite loop to keep asking for numbers
while True :
# Create a variable to store the user's response
resp = input("Número: ")
if resp == "listo" :
break
else :
# Convert from str to float
r = float(resp)
# Append the number converted to float at the end of the list
numeros.append(r)
# Create an accumulator variable for the sum of the numbers
sumaNumeros = 0
# Loop to access the values in the list
for nums in numeros :
# Add each number of the list to the sum
sumaNumeros += nums
# Formula for the average
promedio = sumaNumeros/len(numeros)
# Report the results
print("Tu suma de los numeros es:",sumaNumeros)
print("Tu promedio es este:",promedio)
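# Note (an added sketch, not in the original notebook): Python's built-ins give
# the same results more concisely:
#   print(sum(numeros))
#   print(sum(numeros) / len(numeros))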
###Output
_____no_output_____ |
FCP Review/Variables and Memory/02 - Dynamic vs Static Typing.ipynb | ###Markdown
Dynamic Typing Python is dynamically typed. This means that the type of a variable is simply the type of the object the variable name points to (references). The variable itself has no associated type.
###Code
a = "hello"
type(a)
a = 10
type(a)
def f(x):
return x**2
a = f
print(a(2))
a = lambda x: x**2
a(2)
type(a)
###Output
_____no_output_____ |
examples/local_iterated_svm.ipynb | ###Markdown
the_models[weird].fit_status_
###Code
#testfit_model = sklearn.svm.SVC(C=0.0001,kernel="linear",verbose=True)
testfit_model = sklearn.svm.LinearSVC(C=30.,dual=False)
dz[0]
inverse_iz = np.empty_like(iz[0])
inverse_iz[iz[0]] = np.arange(iz[0].shape[0])
inverse_iz
dz[0][inverse_iz]
iz[0]
testfit_model.fit(noisy_circles[iz[0]], c[iz[0]], sample_weight=dz[0])
def line(coef, intercept, x):
return x, (x*coef[0] + intercept)/(-coef[1])
testfit_model.coef_
testfit_model.coef_[0,1]/testfit_model.coef_[0,0]
testfit_model.intercept_
plt.scatter(noisy_circles[iz[0]][:,0], noisy_circles[iz[0]][:,1], c=c[iz[0]], s=30*kernel(dz[0]))
plt.plot(*line(testfit_model.coef_[0], testfit_model.intercept_, np.array([-1,1])))
changes = projected_xx - xx
#pts = np.array([[1.8, 0.1]])
pts = xx
plt.quiver(xx[:,0], xx[:,1], changes[:,0], changes[:,1], scale=10.)
plt.scatter(noisy_circles[:,0], noisy_circles[:,1],c=cmap(c))
plt.xlim(*data_ranges[0])
plt.ylim(*data_ranges[1])
plt.title("k = {:04d}".format(K))
span = np.linspace(0.05,0.05,10)
pts_params = linear_models.transform(pts, r=kernel.support_radius(), weighted=True, kernel=kernel)
#for i in range(pts.shape[0]):
# x,y = pts[i]
# plt.plot(span + x, (1-pts_params[i,0]*(span+x))/pts_params[i,1])
plt.savefig(os.path.join(project_dir, "k_{:04d}.png".format(K)))
plt.show()
###Output
_____no_output_____
###Markdown
```python
pts = xx
for i in range(100):
    print(i)
    fig = plt.figure()
    plt.scatter(pts[:,0], pts[:,1], s=2, c='r')
    plt.scatter(noisy_circles[:,0], noisy_circles[:,1], c=cmap(c))
    plt.xlim(*data_ranges[0])
    plt.ylim(*data_ranges[1])
    plt.title("iteration: {:05d}".format(i))
    plt.savefig(os.path.join(project_dir, "svc_circles_{:05d}.png".format(i)))
    plt.close(fig)
    params = linear_models.transform(pts, r=kernel.support_radius(), weighted=True, kernel=kernel)
    perps = np.stack((-params[:,1], params[:,0]), axis=-1)
    bases = perps[:,:2].reshape(pts.shape[0],1,2)/np.sqrt(np.sum(perps[:,:2]**2,axis=-1)).reshape(pts.shape[0],1,1)
    pts = local_models.utils.linear_project_pointwise_bases(pts, bases, np.stack((np.zeros(params[:,2].shape), -params[:,2]/params[:,1]), axis=-1))
    # TODO: this is to deal with the weird pts in the middle that go off to infty. What is going on there?
    pts = pts[np.linalg.norm(pts, axis=-1) < 200.]
    print(pts.shape)
```
###Code
def linear_reject_pointwise_bases(x, bases, mean=0):
x = x - mean #mean center everything
projection = local_models.utils.linear_project_pointwise_bases(x, bases)
rejection = x - projection
rejection = rejection + mean #re-add the mean in
return rejection
def scms(X, lm, kernel, iters=30, constraint_space=None, return_params=False, failure_delta=None):
#all_failures = []
print(X.shape)
if failure_delta is None:
failure_delta = np.average(lm.index.query(X, k=2)[0][:,1])*1e4
print("default failure delta: {}".format(failure_delta))
for i in range(iters):
print("scms iteration {:04d}".format(i))
X = np.copy(X)
Xrange = np.arange(X.shape[0])
params = lm.transform(X, r=kernel.support_radius(), weighted=True,
kernel=kernel)
normalized_params = params/np.sqrt(np.sum(params[:,:X.shape[1]]**2,axis=-1,keepdims=True))
normals = normalized_params[:,:X.shape[1]]
intercepts = normalized_params[:,X.shape[1]]
biggest_normal_component = np.argmax(normals, axis=1)
biggest_normal_component_indices = np.stack((Xrange, biggest_normal_component))
biggest_normal_component_indices = tuple(map(tuple, biggest_normal_component_indices))
plane_pt_component = -intercepts/normalized_params[biggest_normal_component_indices]
plane_pts = np.zeros(normals.shape)
plane_pts[biggest_normal_component_indices] = plane_pt_component
normals = normals.reshape(X.shape[0], 1, X.shape[1])
new_X = linear_reject_pointwise_bases(X, normals, plane_pts)
failures = np.sqrt(np.sum((new_X-X)**2, axis=1)) > failure_delta
successes = np.logical_not(failures)
X[successes] = new_X[successes]
if constraint_space is not None:
X[successes] = local_models.utils.linear_project_pointwise_bases(X[successes], constraint_space[0][successes], constraint_space[1][successes])
if return_params:
yield X, successes, normals
else:
yield X, successes
import traceback
import collections
def orthogonal_project_scms(X, lm, kernel, scms_iters=30, newtons_iters=30, alpha=1e-2, return_everything=False):
#1. do scms to get *a* point on the surface, y
#2. get the tangent plane at y
if return_everything:
everything = collections.defaultdict(list)
shifter = scms(X,lm,kernel,iters=scms_iters,return_params=True)
for y, successes, normals in shifter:
if return_everything:
everything[0].append((y, successes, normals))
X = X[successes]
y = y[successes]
normals = normals[successes]
#3. do scms while projecting along some convex combo of the line passing thru x and y, and
# the line passing through x and along the normal vector to the tangent plane in 2 to get y'
#4. y <- y'
#5. GOTO 2
for i in range(newtons_iters):
print("newtons method iteration: {:04d}".format(i))
print(X.shape, y.shape, normals.shape)
try:
Xy = y-X
normalized_Xy = (Xy)/np.sqrt(np.sum(Xy**2,axis=1,keepdims=True))
normalized_Xy = np.expand_dims(normalized_Xy, 1)
#print("shapes", normalized_Xy.shape, normals.shape)
surface_normal_aligned_Xy = normalized_Xy * np.sign(np.sum(normalized_Xy*normals, axis=-1, keepdims=True))
constraint_vec = surface_normal_aligned_Xy*(1-alpha) + normals*alpha
constraint_vec = constraint_vec/np.sqrt(np.sum(constraint_vec**2, axis=-1, keepdims=True))
print("constraint shape", constraint_vec.shape)
shifter = scms(X,lm,kernel,iters=scms_iters,return_params=True,constraint_space=(constraint_vec, X))
for y, successes, normals in shifter:
if return_everything:
everything[i+1].append((y, successes, normals))
X = X[successes]
y = y[successes]
normals = normals[successes]
except:
traceback.print_exc()
break
if return_everything:
return everything
return X, y, normals
test = orthogonal_project_scms(noisy_circles, linear_models, kernel, return_everything=True)
project_dir
scmsnewtondir = os.path.join(project_dir, "scms_orthogonal")
os.makedirs(scmsnewtondir, exist_ok=1)
print(noisy_circles.shape)
X = noisy_circles
targets = c
for newton in test:
print("plting newton {}".format(newton))
for i in range(len(test[newton])):
y, successes, normals = test[newton][i]
print(list(map(lambda z: z.shape, (X, y, successes, normals))))
fig = plt.figure()
plt.scatter(y[:,0], y[:,1], s=2, c='r')
for j in range(X.shape[0]):
plt.plot(np.stack((X[j,0],y[j,0])), np.stack((X[j,1],y[j,1])), c='k')
plt.scatter(X[:,0], X[:,1],c=cmap(targets))
plt.xlim(*data_ranges[0])
plt.ylim(*data_ranges[1])
plt.title("iteration: {:05d}.{:05d}".format(newton,i))
plt.savefig(os.path.join(scmsnewtondir, "svc_circles_{:05d}.png".format(newton*len(test[0]) + i)))
plt.close(fig)
X = X[successes]
targets = targets[successes]
normals
testX,testy,testnormals = test
plt.scatter(testX[:,0], testX[:,1])
plt.scatter(testy[:,0], testy[:,1])
###Output
_____no_output_____ |
01_supervised_learning/4_ModelEvaluationMetrics/.ipynb_checkpoints/Classification_Metrics_Solution-checkpoint.ipynb | ###Markdown
Our Mission
In this lesson you gained some insight into a number of techniques used to understand how well our model is performing. This notebook is aimed at giving you some practice with the metrics specifically related to classification problems. With that in mind, we will again be looking at the spam dataset from the earlier lessons.
First, run the cell below to prepare the data and instantiate a number of different models.
###Code
# Import our libraries
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
import tests as t
%matplotlib inline
# Read in our dataset
df = pd.read_table('smsspamcollection/SMSSpamCollection',
sep='\t',
header=None,
names=['label', 'sms_message'])
# Fix our response value
df['label'] = df.label.map({'ham':0, 'spam':1})
# Split our dataset into training and testing data
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
df['label'],
random_state=1)
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
# Instantiate a number of our models
naive_bayes = MultinomialNB()
bag_mod = BaggingClassifier(n_estimators=200)
rf_mod = RandomForestClassifier(n_estimators=200)
ada_mod = AdaBoostClassifier(n_estimators=300, learning_rate=0.2)
svm_mod = SVC()
###Output
_____no_output_____
###Markdown
> **Step 1**: Now, fit each of the above models to the appropriate data. Answer the following question to assure that you fit the models correctly.
###Code
# Fit each of the 4 models
# This might take some time to run
naive_bayes.fit(training_data, y_train)
bag_mod.fit(training_data, y_train)
rf_mod.fit(training_data, y_train)
ada_mod.fit(training_data, y_train)
svm_mod.fit(training_data, y_train)
# The models you fit above were fit on which data?
a = 'X_train'
b = 'X_test'
c = 'y_train'
d = 'y_test'
e = 'training_data'
f = 'testing_data'
# Change models_fit_on to only contain the correct string names
# of values that you passed to the above models
models_fit_on = {e, c} # update this to only contain correct letters
# Checks your solution - don't change this
t.test_one(models_fit_on)
###Output
That's right! You need to fit on both parts of the data pertaining to training data!
###Markdown
> **Step 2**: Now make predictions for each of your models on the data that will allow you to understand how well our model will extend to new data. Then correctly add the strings to the set in the following cell.
###Code
# Make predictions using each of your models
preds_nb = naive_bayes.predict(testing_data)
preds_bag = bag_mod.predict(testing_data)
preds_rf = rf_mod.predict(testing_data)
preds_ada = ada_mod.predict(testing_data)
preds_svm = svm_mod.predict(testing_data)
# Which data was used in the predict method to see how well your
# model would work on new data?
a = 'X_train'
b = 'X_test'
c = 'y_train'
d = 'y_test'
e = 'training_data'
f = 'testing_data'
# Change models_predict_on to only contain the correct string names
# of values that you passed to the above models
models_predict_on = {f} # update this to only contain correct letters
# Checks your solution - don't change this
t.test_two(models_predict_on)
###Output
That's right! To see how well our models perform in a new setting, you will want to predict on the test set of data.
###Markdown
Now that you have set up all your predictions, let's get to the topics addressed in this lesson - measuring how well each of your models performed. First, we will focus on how each metric was calculated for a single model, and then in the final part of this notebook, you will choose models that are best based on a particular metric.
You will be writing functions to calculate a number of metrics and then comparing the values to what you get from sklearn. This will help you build intuition for how each metric is calculated.
> **Step 3**: As an example of how this will work for the upcoming questions, run the cell below. Fill in the below function to calculate accuracy, and then compare your answer to the built-in to assure you are correct.
###Code
# accuracy is the total correct divided by the total to predict
def accuracy(actual, preds):
'''
INPUT
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the accuracy as a float
'''
return np.sum(preds == actual)/len(actual)
print(accuracy(y_test, preds_nb))
print(accuracy_score(y_test, preds_nb))
print("Since these match, we correctly calculated our metric!")
###Output
0.988513998564
0.988513998564
Since these match, we correctly calculated our metric!
###Markdown
> **Step 4**: Fill in the below function to calculate precision, and then compare your answer to the built in to assure you are correct.
###Code
# precision is the true positives over the predicted positive values
def precision(actual, preds):
'''
INPUT
(assumes positive = 1 and negative = 0)
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the precision as a float
'''
tp = len(np.intersect1d(np.where(preds==1), np.where(actual==1)))
pred_pos = (preds==1).sum()
return tp/(pred_pos)
print(precision(y_test, preds_nb))
print(precision_score(y_test, preds_nb))
print("If the above match, you got it!")
###Output
0.972067039106
0.972067039106
If the above match, you got it!
###Markdown
> **Step 5**: Fill in the below function to calculate recall, and then compare your answer to the built in to assure you are correct.
###Code
# recall is true positives over all actual positive values
def recall(actual, preds):
'''
INPUT
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the recall as a float
'''
tp = len(np.intersect1d(np.where(preds==1), np.where(actual==1)))
act_pos = (actual==1).sum()
return tp/act_pos
print(recall(y_test, preds_nb))
print(recall_score(y_test, preds_nb))
print("If the above match, you got it!")
###Output
0.940540540541
0.940540540541
If the above match, you got it!
###Markdown
> **Step 6**: Fill in the below function to calculate f1-score, and then compare your answer to the built in to assure you are correct.
###Code
# f1_score is 2*(precision*recall)/(precision+recall)
def f1(actual, preds):
'''
INPUT
preds - predictions as a numpy array or pandas series
actual - actual values as a numpy array or pandas series
OUTPUT:
returns the f1score as a float
'''
tp = len(np.intersect1d(np.where(preds==1), np.where(actual==1)))
pred_pos = (preds==1).sum()
prec = tp/(pred_pos)
act_pos = (actual==1).sum()
recall = tp/act_pos
return 2*(prec*recall)/(prec+recall)
print(f1(y_test, preds_nb))
print(f1_score(y_test, preds_nb))
print("If the above match, you got it!")
###Output
0.956043956044
0.956043956044
If the above match, you got it!
###Markdown
> **Step 7:** Now that you have calculated a number of different metrics, let's tie that to when we might use one versus another. Use the dictionary below to match a metric to each statement that identifies when you would want to use that metric.
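As a quick illustration of the imbalanced-classes statement in the dictionary below (a small made-up example, not part of the graded exercise): a model that always predicts the majority class can look very accurate while being useless at finding positives.

```
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# 95 negatives, 5 positives, and a "model" that always predicts the negative class
y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.zeros(100, dtype=int)

print(accuracy_score(y_true, y_pred))  # 0.95 - high accuracy
print(recall_score(y_true, y_pred))    # 0.0  - every positive case is missed
```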
###Code
# add the letter of the most appropriate metric to each statement
# in the dictionary
a = "recall"
b = "precision"
c = "accuracy"
d = 'f1-score'
seven_sol = {
'We have imbalanced classes, which metric do we definitely not want to use?': c,
'We really want to make sure the positive cases are all caught even if that means we identify some negatives as positives': a,
'When we identify something as positive, we want to be sure it is truly positive': b,
'We care equally about identifying positive and negative cases': d
}
t.sol_seven(seven_sol)
###Output
That's right! It isn't really necessary to memorize these in practice, but it is important to know they exist and know why might use one metric over another for a particular situation.
###Markdown
> **Step 8:** Given what you know about the metrics now, use this information to correctly match the appropriate model to when it would be best to use each in the dictionary below.
###Code
# use the answers you found to the previous question, then match the model that did best for each metric
a = "naive-bayes"
b = "bagging"
c = "random-forest"
d = 'ada-boost'
e = "svm"
eight_sol = {
'We have imbalanced classes, which metric do we definitely not want to use?': a,
'We really want to make sure the positive cases are all caught even if that means we identify some negatives as positives': a,
'When we identify something as positive, we want to be sure it is truly positive': c,
'We care equally about identifying positive and negative cases': a
}
t.sol_eight(eight_sol)
# cells for work
def print_metrics(y_true, preds, model_name=None):
'''
INPUT:
y_true - the y values that are actually true in the dataset (numpy array or pandas series)
preds - the predictions for those values from some model (numpy array or pandas series)
model_name - (str - optional) a name associated with the model if you would like to add it to the print statements
OUTPUT:
None - prints the accuracy, precision, recall, and F1 score
'''
if model_name == None:
print('Accuracy score: ', format(accuracy_score(y_true, preds)))
print('Precision score: ', format(precision_score(y_true, preds)))
print('Recall score: ', format(recall_score(y_true, preds)))
print('F1 score: ', format(f1_score(y_true, preds)))
print('\n\n')
else:
print('Accuracy score for ' + model_name + ' :' , format(accuracy_score(y_true, preds)))
print('Precision score ' + model_name + ' :', format(precision_score(y_true, preds)))
print('Recall score ' + model_name + ' :', format(recall_score(y_true, preds)))
print('F1 score ' + model_name + ' :', format(f1_score(y_true, preds)))
print('\n\n')
# Print Bagging scores
print_metrics(y_test, preds_bag, 'bagging')
# Print Random Forest scores
print_metrics(y_test, preds_rf, 'random forest')
# Print AdaBoost scores
print_metrics(y_test, preds_ada, 'adaboost')
# Naive Bayes Classifier scores
print_metrics(y_test, preds_nb, 'naive bayes')
# SVM Classifier scores
print_metrics(y_test, preds_svm, 'svm')
###Output
Accuracy score for bagging : 0.9734386216798278
Precision score bagging : 0.9065934065934066
Recall score bagging : 0.8918918918918919
F1 score bagging : 0.8991825613079019
Accuracy score for random forest : 0.9849246231155779
Precision score random forest : 1.0
Recall score random forest : 0.8864864864864865
F1 score random forest : 0.9398280802292264
Accuracy score for adaboost : 0.9770279971284996
Precision score adaboost : 0.9693251533742331
Recall score adaboost : 0.8540540540540541
F1 score adaboost : 0.9080459770114943
Accuracy score for naive bayes : 0.9885139985642498
Precision score naive bayes : 0.9720670391061452
Recall score naive bayes : 0.9405405405405406
F1 score naive bayes : 0.9560439560439562
Accuracy score for svm : 0.8671931083991385
Precision score svm : 0.0
Recall score svm : 0.0
F1 score svm : 0.0
###Markdown
As a final step in this workbook, let's take a look at the last three metrics you saw: f-beta scores, ROC curves, and AUC.
**For f-beta scores:** If you decide that you care more about precision, you should move beta closer to 0. If you decide you care more about recall, you should move beta towards infinity.
> **Step 9**: Using the fbeta_score works similarly to most of the other metrics in sklearn, but you also need to set beta as your weighting between precision and recall. Use the space below to show that you can use [fbeta in sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html) to replicate your f1-score from above. If in the future you want to use a different weighting, [this article](http://mlwiki.org/index.php/Precision_and_Recall) does an amazing job of explaining how you might adjust beta for different situations.
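Before Step 9, here is a small sketch (using the naive Bayes predictions from above) of how the beta value shifts the score between precision and recall:

```
from sklearn.metrics import fbeta_score

# beta < 1 weights precision more heavily; beta > 1 weights recall more heavily
print(fbeta_score(y_test, preds_nb, beta=0.5))  # closer to the precision score
print(fbeta_score(y_test, preds_nb, beta=2))    # closer to the recall score
```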
###Code
#import fbeta score
from sklearn.metrics import fbeta_score
#show that the results are the same for fbeta and f1_score
print(fbeta_score(y_test, preds_bag, beta=1))
print(f1_score(y_test, preds_bag))
###Output
0.899182561308
0.899182561308
###Markdown
> **Step 10:** Building ROC curves in Python is a pretty involved process to do on your own. I wrote the function below to assist with the process and make it easier for you to do so in the future as well. Try it out using one of the other classifiers you created above to see how it compares to the random forest model below.
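If you only need the AUC value and not the plot, sklearn can also compute it directly from predicted probabilities; a minimal sketch using the naive Bayes model fit above:

```
from sklearn.metrics import roc_auc_score

# Probability of the positive (spam) class for each test message
probs_nb = naive_bayes.predict_proba(testing_data)[:, 1]
print(roc_auc_score(y_test, probs_nb))
```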
###Code
# Function for calculating auc and roc
def build_roc_auc(model, X_train, X_test, y_train, y_test):
'''
INPUT:
    model - an instantiated sklearn classifier that implements predict_proba
    X_train, X_test - training and testing feature matrices
    y_train, y_test - training and testing labels
OUTPUT:
auc - returns auc as a float
prints the roc curve
'''
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn.metrics import roc_curve, auc, roc_auc_score
from scipy import interp
y_preds = model.fit(X_train, y_train).predict_proba(X_test)
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(len(y_test)):
fpr[i], tpr[i], _ = roc_curve(y_test, y_preds[:, 1])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_preds[:, 1].ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
plt.plot(fpr[2], tpr[2], color='darkorange',
lw=2, label='ROC curve (area = %0.2f)' % roc_auc[2])
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.show()
return roc_auc_score(y_test, np.round(y_preds[:, 1]))
# Finding roc and auc for the random forest model
build_roc_auc(rf_mod, training_data, testing_data, y_train, y_test)
# Your turn here - choose another classifier to see how it compares
build_roc_auc(naive_bayes, training_data, testing_data, y_train, y_test)
# The naive bayes classifier outperforms the random forest in terms of auc
###Output
_____no_output_____ |
notebooks/taxifare-collab.ipynb | ###Markdown
Step by Step
###Code
%%time
trainer = Trainer(nrows=500_000)
%%time
trainer.clean()
%%time
trainer.preproc(test_size=0.3)
%%time
trainer.fit(plot_history=True, verbose=1)
%%time
# evaluate on test set (by default the holdout from train/test/split)
trainer.evaluate()
###Output
_____no_output_____
###Markdown
All at once
###Code
%%time
# Instanciate trainer with number of rows to download and use
trainer = Trainer(nrows=1_000_000)
# clean data
trainer.clean()
# Preprocess data and create train/test/split
trainer.preproc(test_size=0.3)
# Fit neural network and show training performance
trainer.fit(plot_history=True, verbose=1)
# evaluate on test set (by default the holdout from train/test/split)
trainer.evaluate(X_test=None, y_test=None)
###Output
###### loading and cleaning....
clean 1.03
###### preprocessing....
###### shape of X_train_preproc, y_train: (592034, 467) (592034,)
preproc 45.12
###### fitting...
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 200) 93600
_________________________________________________________________
dense_1 (Dense) (None, 100) 20100
_________________________________________________________________
dense_2 (Dense) (None, 20) 2020
_________________________________________________________________
dense_3 (Dense) (None, 1) 21
=================================================================
Total params: 115,741
Trainable params: 115,741
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/500
6476/6476 [==============================] - 28s 4ms/step - loss: 26.0965 - mae: 2.2224 - val_loss: 23.9129 - val_mae: 2.1942
Epoch 2/500
6476/6476 [==============================] - 28s 4ms/step - loss: 20.4338 - mae: 2.0536 - val_loss: 21.9842 - val_mae: 2.1046
Epoch 3/500
6476/6476 [==============================] - 28s 4ms/step - loss: 19.2659 - mae: 1.9992 - val_loss: 21.2366 - val_mae: 1.9575
Epoch 4/500
6476/6476 [==============================] - 28s 4ms/step - loss: 18.5828 - mae: 1.9630 - val_loss: 20.6013 - val_mae: 1.9661
Epoch 5/500
6476/6476 [==============================] - 28s 4ms/step - loss: 18.1737 - mae: 1.9457 - val_loss: 20.4292 - val_mae: 1.9500
Epoch 6/500
6476/6476 [==============================] - 28s 4ms/step - loss: 17.9374 - mae: 1.9351 - val_loss: 20.1900 - val_mae: 1.9679
Epoch 7/500
6476/6476 [==============================] - 27s 4ms/step - loss: 17.7770 - mae: 1.9295 - val_loss: 20.1351 - val_mae: 1.9337
Epoch 8/500
6476/6476 [==============================] - 27s 4ms/step - loss: 17.6463 - mae: 1.9274 - val_loss: 20.0244 - val_mae: 1.9593
Epoch 9/500
6476/6476 [==============================] - 28s 4ms/step - loss: 17.4933 - mae: 1.9240 - val_loss: 20.2441 - val_mae: 1.9894
Epoch 10/500
6476/6476 [==============================] - 27s 4ms/step - loss: 17.4323 - mae: 1.9249 - val_loss: 19.9714 - val_mae: 1.9106
Epoch 11/500
6476/6476 [==============================] - 28s 4ms/step - loss: 17.3117 - mae: 1.9198 - val_loss: 20.0565 - val_mae: 1.9058
Epoch 12/500
6476/6476 [==============================] - 27s 4ms/step - loss: 17.2542 - mae: 1.9183 - val_loss: 19.8343 - val_mae: 1.9482
Epoch 13/500
6476/6476 [==============================] - 27s 4ms/step - loss: 17.1759 - mae: 1.9174 - val_loss: 20.0811 - val_mae: 2.0418
Epoch 14/500
6476/6476 [==============================] - 27s 4ms/step - loss: 17.1327 - mae: 1.9144 - val_loss: 19.7456 - val_mae: 1.9310
Epoch 15/500
6476/6476 [==============================] - 27s 4ms/step - loss: 17.0985 - mae: 1.9101 - val_loss: 19.7884 - val_mae: 1.9622
Epoch 16/500
6476/6476 [==============================] - 28s 4ms/step - loss: 17.0546 - mae: 1.9061 - val_loss: 19.7068 - val_mae: 1.9302
Epoch 17/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.9440 - mae: 1.9046 - val_loss: 19.5900 - val_mae: 1.9085
Epoch 18/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.9716 - mae: 1.9057 - val_loss: 19.8037 - val_mae: 1.9865
Epoch 19/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.9260 - mae: 1.9043 - val_loss: 19.8398 - val_mae: 1.9570
Epoch 20/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.8736 - mae: 1.9072 - val_loss: 20.0718 - val_mae: 2.0041
Epoch 21/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.8636 - mae: 1.9047 - val_loss: 19.5969 - val_mae: 1.9608
Epoch 22/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.7614 - mae: 1.8996 - val_loss: 19.5680 - val_mae: 1.9256
Epoch 23/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.7222 - mae: 1.8983 - val_loss: 19.4205 - val_mae: 1.9104
Epoch 24/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.6778 - mae: 1.8963 - val_loss: 19.8595 - val_mae: 2.0051
Epoch 25/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.6558 - mae: 1.8942 - val_loss: 19.6366 - val_mae: 1.9002
Epoch 26/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.6472 - mae: 1.8933 - val_loss: 19.5363 - val_mae: 1.9026
Epoch 27/500
6476/6476 [==============================] - 27s 4ms/step - loss: 16.6515 - mae: 1.8907 - val_loss: 19.5358 - val_mae: 1.9289
Epoch 28/500
6476/6476 [==============================] - 28s 4ms/step - loss: 16.6060 - mae: 1.8916 - val_loss: 19.4583 - val_mae: 1.9459
####### min val MAE 1.9001555442810059
####### epochs reached 28
|
AlgebraIIAHonors.ipynb | ###Markdown
Algebra II A Honors Project - Jamison Weitzel
A Jupyter Notebook to graph and find the roots of polynomials.
1) Ask the user to enter the degree of the polynomial.
###Code
# Imports used throughout this notebook (the widget slider below, plotting and roots later on)
from ipywidgets import interact
import ipywidgets as widgets
import numpy as np
import matplotlib.pyplot as plt

# This is the variable where I will keep the degree of the polynomial.
polynomial_degree = 1
#A function that can be called to change the degree
def set_degree(degree):
global polynomial_degree
polynomial_degree = degree
# This will create a slider that the user can use to set the degree.
interact(set_degree,degree=widgets.IntSlider(min=1,max=10,value=1))
###Output
_____no_output_____
###Markdown
2) Ask the user for the polynomial coefficients.
###Code
print(f"You've selected a polynomial of degree {polynomial_degree}.")
print(f"Please enter the coefficients.")
# A variable to hold the coefficient values.
coefficients = []
# A variable to display the exponent of the term.
expnt = polynomial_degree
# For the number of terms in the polynomial, collect the coefficient.
for x in range(polynomial_degree+1):
print(f'Input coefficient for x**{expnt} term and press "Enter"')
# Get the input value and add it to the list.
coefficients.append(float(input()))
    # Go to the next term by subtracting one from the exponent.
expnt -= 1
###Output
You've selected a polynomial of degree 3.
Please enter the coefficients.
Input coefficient for x**3 term and press "Enter"
1
Input coefficient for x**2 term and press "Enter"
1
Input coefficient for x**1 term and press "Enter"
-22
Input coefficient for x**0 term and press "Enter"
-40
###Markdown
Here is your polynomial.
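As an optional aside (not required for the project), NumPy can build the same polynomial directly from the coefficient list with `np.poly1d`, which gives a printable form, easy evaluation, and the roots used later on:

```
import numpy as np

# Build a polynomial object from the highest-degree coefficient down
p = np.poly1d(coefficients)
print(p)        # pretty-printed polynomial
print(p(2.0))   # evaluate at x = 2
print(p.roots)  # same values as np.roots(coefficients)
```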
###Code
poly = ''
expnt = polynomial_degree
for x in range(polynomial_degree+1):
poly += f'+ {coefficients[x]}x^{expnt} '
expnt -= 1
print(poly)
###Output
+ 1.0x^3 + 1.0x^2 + -22.0x^1 + -40.0x^0
###Markdown
3) Here is a graph of your polynomial!
###Code
# This function will calculate the value of the polynomial for the passed in value of x.
# https://stackoverflow.com/questions/37352098/ploting-a-polynomial-using-matplotlib-and-coeffiecients
def apply_function(x, coeffs):
exp = len(coeffs)-1
y = 0
for i in range(polynomial_degree+1):
y += coeffs[i]*x**exp
exp -= 1
return y
# This creates a set of numbers that will be used for values of x to pass to the function.
# The numbers will be between -5 and 5.
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html
x = np.linspace(-5, 5, 11)
# This will print the (x, y) coordinates.
for some_x in x:
print(f'({some_x}, {apply_function(some_x, coefficients)})')
# Draw a plot of the function.
# https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.plot.html
plt.plot(x, apply_function(x, coefficients))
plt.show()
###Output
(-5.0, -30.0)
(-4.0, 0.0)
(-3.0, 8.0)
(-2.0, 0.0)
(-1.0, -18.0)
(0.0, -40.0)
(1.0, -60.0)
(2.0, -72.0)
(3.0, -70.0)
(4.0, -48.0)
(5.0, 0.0)
###Markdown
4) Here are the roots of your polynomial!
###Code
# Use the numpy roots function to calculate the roots.
# https://docs.scipy.org/doc/numpy/reference/generated/numpy.roots.html
roots = np.roots(coefficients)
print(roots)
# remove the values that are not real
# https://stackoverflow.com/questions/28081247/print-real-roots-only-in-numpy
real_valued = roots.real[abs(roots.imag)<1e-5] # where I chose 1-e5 as a threshold
print('The real roots are:')
print(real_valued)
###Output
The real roots are:
[ 5. -4. -2.]
|
site/ja/hub/tutorials/image_enhancing.ipynb | ###Markdown
Copyright 2019 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); created by @[Adrish Dey](https://github.com/captain-pool) for [Google Summer of Code](https://summerofcode.withgoogle.com/) 2019.
###Code
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
###Output
_____no_output_____
###Markdown
Image Super-Resolution using ESRGAN
This Colab demonstrates how to use the TensorFlow Hub module for ESRGAN (Enhanced Super-Resolution Generative Adversarial Network) (*Xintao Wang et al.*) [[paper](https://arxiv.org/pdf/1809.00219.pdf)] [[code](https://github.com/captain-pool/GSOC/)]. It is used here for image enhancement (*restoring images that were downsampled with bicubic interpolation*). The model was trained on the DIV2K dataset (bicubically downsampled images) using image patches of size 128 x 128.
**Preparing the environment**
###Code
import os
import time
from PIL import Image
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
os.environ["TFHUB_DOWNLOAD_PROGRESS"] = "True"
!wget "https://user-images.githubusercontent.com/12981474/40157448-eff91f06-5953-11e8-9a37-f6b5693fa03f.png" -O original.png
# Declaring Constants
IMAGE_PATH = "original.png"
SAVED_MODEL_PATH = "https://tfhub.dev/captain-pool/esrgan-tf2/1"
###Output
_____no_output_____
###Markdown
**Defining helper functions**
###Code
def preprocess_image(image_path):
""" Loads image from path and preprocesses to make it model ready
Args:
image_path: Path to the image file
"""
hr_image = tf.image.decode_image(tf.io.read_file(image_path))
# If PNG, remove the alpha channel. The model only supports
# images with 3 color channels.
if hr_image.shape[-1] == 4:
hr_image = hr_image[...,:-1]
hr_size = (tf.convert_to_tensor(hr_image.shape[:-1]) // 4) * 4
hr_image = tf.image.crop_to_bounding_box(hr_image, 0, 0, hr_size[0], hr_size[1])
hr_image = tf.cast(hr_image, tf.float32)
return tf.expand_dims(hr_image, 0)
def save_image(image, filename):
"""
Saves unscaled Tensor Images.
Args:
image: 3D image tensor. [height, width, channels]
filename: Name of the file to save to.
"""
if not isinstance(image, Image.Image):
image = tf.clip_by_value(image, 0, 255)
image = Image.fromarray(tf.cast(image, tf.uint8).numpy())
image.save("%s.jpg" % filename)
print("Saved as %s.jpg" % filename)
%matplotlib inline
def plot_image(image, title=""):
"""
Plots images from image tensors.
Args:
image: 3D image tensor. [height, width, channels].
title: Title to display in the plot.
"""
image = np.asarray(image)
image = tf.clip_by_value(image, 0, 255)
image = Image.fromarray(tf.cast(image, tf.uint8).numpy())
plt.imshow(image)
plt.axis("off")
plt.title(title)
###Output
_____no_output_____
###Markdown
Performing super-resolution on an image loaded from a path
###Code
hr_image = preprocess_image(IMAGE_PATH)
# Plotting Original Resolution image
plot_image(tf.squeeze(hr_image), title="Original Image")
save_image(tf.squeeze(hr_image), filename="Original Image")
model = hub.load(SAVED_MODEL_PATH)
start = time.time()
fake_image = model(hr_image)
fake_image = tf.squeeze(fake_image)
print("Time Taken: %f" % (time.time() - start))
# Plotting Super Resolution Image
plot_image(tf.squeeze(fake_image), title="Super Resolution")
save_image(tf.squeeze(fake_image), filename="Super Resolution")
###Output
_____no_output_____
###Markdown
Evaluating the model's performance
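For reference, the PSNR value computed below with `tf.image.psnr` (using `max_val=255`) follows the standard definition for 8-bit images:

$$\mathrm{PSNR} = 10 \cdot \log_{10}\left(\frac{255^2}{\mathrm{MSE}}\right)$$

where MSE is the mean squared error between the super-resolved image and the original high-resolution image.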
###Code
!wget "https://lh4.googleusercontent.com/-Anmw5df4gj0/AAAAAAAAAAI/AAAAAAAAAAc/6HxU8XFLnQE/photo.jpg64" -O test.jpg
IMAGE_PATH = "test.jpg"
# Defining helper functions
def downscale_image(image):
"""
Scales down images using bicubic downsampling.
Args:
image: 3D or 4D tensor of preprocessed image
"""
image_size = []
if len(image.shape) == 3:
image_size = [image.shape[1], image.shape[0]]
else:
raise ValueError("Dimension mismatch. Can work only on single image.")
image = tf.squeeze(
tf.cast(
tf.clip_by_value(image, 0, 255), tf.uint8))
lr_image = np.asarray(
Image.fromarray(image.numpy())
.resize([image_size[0] // 4, image_size[1] // 4],
Image.BICUBIC))
lr_image = tf.expand_dims(lr_image, 0)
lr_image = tf.cast(lr_image, tf.float32)
return lr_image
hr_image = preprocess_image(IMAGE_PATH)
lr_image = downscale_image(tf.squeeze(hr_image))
# Plotting Low Resolution Image
plot_image(tf.squeeze(lr_image), title="Low Resolution")
model = hub.load(SAVED_MODEL_PATH)
start = time.time()
fake_image = model(lr_image)
fake_image = tf.squeeze(fake_image)
print("Time Taken: %f" % (time.time() - start))
plot_image(tf.squeeze(fake_image), title="Super Resolution")
# Calculating PSNR wrt Original Image
psnr = tf.image.psnr(
tf.clip_by_value(fake_image, 0, 255),
tf.clip_by_value(hr_image, 0, 255), max_val=255)
print("PSNR Achieved: %f" % psnr)
###Output
_____no_output_____
###Markdown
**Comparing outputs side by side**
###Code
plt.rcParams['figure.figsize'] = [15, 10]
fig, axes = plt.subplots(1, 3)
fig.tight_layout()
plt.subplot(131)
plot_image(tf.squeeze(hr_image), title="Original")
plt.subplot(132)
fig.tight_layout()
plot_image(tf.squeeze(lr_image), "x4 Bicubic")
plt.subplot(133)
fig.tight_layout()
plot_image(tf.squeeze(fake_image), "Super Resolution")
plt.savefig("ESRGAN_DIV2K.jpg", bbox_inches="tight")
print("PSNR: %f" % psnr)
###Output
_____no_output_____ |
batman_eda.ipynb | ###Markdown
Exploratory Data Analysis
Introduction
In this notebook, we'll use the NMF models that were created in the notebook where topic modeling was performed, as well as the original DataFrames that contain the corpus, to try to find some more obvious patterns with EDA after identifying the hidden patterns with machine learning techniques. We are going to look at the following:
* **Number of Lines Spoken** - look at counts of lines for the top 25 characters
* **Explore Selected Topics in Movies Between Directors** - look at selected topics and how often they're the dominant topic in all the scenes for each director
* **Visualize Common Terms Between Directors** - create Word Clouds
###Code
# data manipulation
import pandas as pd
import numpy as np
# files
import pickle
# topic modeling
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
# visualization
from matplotlib import pyplot as plt
import seaborn as sns
from wordcloud import WordCloud, STOPWORDS
import matplotlib.colors as mcolors
###Output
_____no_output_____
###Markdown
Number of Lines Spoken
###Code
df_character = pd.read_pickle('df_char.pkl')
# look at lines spoken by the top 25 characters
counts = df_character.character.value_counts()[:25]
plt.figure(figsize=(12, 8))
sns.barplot(counts.values, counts.index, palette='gist_gray')
###Output
/Users/willnobles/opt/anaconda3/envs/metis/lib/python3.8/site-packages/seaborn/_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Explore Selected Topics in Movies Between Directors
It would be interesting to select some of the topics from the NMF model and get counts of how often the dominant topic for each scene appears between Burton and Nolan films. The steps to accomplish this will be (a condensed sketch follows this list):
1. Create a DataFrame with just the selected topics.
2. Find the max value in each of the 825 rows.
3. Find the column (i.e. topic) that the max value belongs to.
4. Add back the column containing the topic information to the `doc_topic` DataFrame.
5. Get counts of the dominant topic for each director for each of the selected topics.
6. Get the percentages that the topic occurs in each director's films by dividing the counts by the total number of movie scenes for that director.
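A condensed sketch of steps 2-6 (assuming `df_doc_topic` already has one numeric column per topic plus the `director` column that gets added below):

```
# Dominant topic per scene, then its share of each director's scenes
selected_topics = [2, 3, 5, 6, 9]
dominant = df_doc_topic[selected_topics].idxmax(axis=1)
topic_share = (
    df_doc_topic.assign(dominant_topic=dominant)
    .groupby("director")["dominant_topic"]
    .value_counts(normalize=True)   # fraction of scenes, per director
)
print(topic_share)
```

The cells below do the same thing step by step, which makes the intermediate values easier to inspect.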
###Code
# read in relevant dataframes
df = pd.read_pickle('df.pkl')
df_nmf = pd.read_pickle('df_nmf.pkl')
df_doc_topic = pd.read_pickle('df_doc_topic.pkl')
# add columns to the document-topic matrix
df_doc_topic['dialogue_lemmatized'] = df['dialogue_lemmatized'].values
df_doc_topic['movie'] = df['movie'].values
df_doc_topic['director'] = df['director'].values
df_doc_topic.sort_values(by=0, ascending=False)
# create a dataframe with selected topics
df_selected_topics = df_doc_topic[[2, 3, 5, 6, 9]]
# get max value in each of 825 rows (i.e. all scenes)
dominant_topic = df_selected_topics.max(axis=1)
# get column of max value for each row
dominant_column = df_selected_topics.idxmax(axis=1)
# add as columns to dataframe
df_selected_topics['dominant_topic'] = dominant_topic
df_selected_topics['dominant_topic_column'] = dominant_column
df_selected_topics
# add dominant topic column to the original dataframe
df_doc_topic['dominant_topic'] = dominant_column
# look at total number of scenes for each director
df_doc_topic.groupby('director')['director'].value_counts()
# get counts of total number of scenes for each dominant topic
topic_counts = df_doc_topic.groupby('director')['dominant_topic'].value_counts()
# create a dataframe and format columns
df_topic_counts = topic_counts.to_frame().reset_index(level='director')
df_topic_counts.index.names = ['topic']
df_topic_counts.rename(columns={"dominant_topic":"topic_count"}, inplace=True)
df_topic_counts.reset_index(inplace=True)
# create a column for the number of scenes per topic
df_topic_counts['topic_percentage'] = df_topic_counts[df_topic_counts.director == 'Tim Burton'].topic_count / 221
df_topic_counts['topic_percentage_n'] = df_topic_counts[df_topic_counts.director == 'Christopher Nolan'].topic_count / 604
# add Nolan values to Burton column
for i in range(0, 5):
df_topic_counts.topic_percentage.iloc[i] = df_topic_counts.topic_percentage_n.iloc[i]
df_topic_counts.drop(columns='topic_percentage_n', inplace=True)
df_topic_counts.sort_values(by='topic', inplace=True)
df_topic_counts
plt.figure(figsize=(12, 8))
ax = sns.barplot(x="topic", y="topic_percentage", hue="director", data=df_topic_counts, palette="Greys")
ax.set_xticklabels(['Trust','Power','Control','Hero','Love'])
ax.set(xlabel=None)
ax.set(ylabel=None)
# look at topic 0 probability over all scenes in Batman
batman_topics = df_doc_topic[df_doc_topic['movie'] == 'Batman']
plt.plot(batman_topics.index, batman_topics[0])
plt.show()
###Output
_____no_output_____
###Markdown
Visualize Common Terms Between DirectorsLet's create Word Clouds to explore the terms used most frequently in the Batman movies directed by Tim Burton and Christopher Nolan.
###Code
# read in the dataframes
df_burton = pd.read_pickle('df_burton.pkl')
df_nolan = pd.read_pickle('df_nolan.pkl')
df_nmf_burton = pd.read_pickle('df_nmf_burton.pkl')
df_nmf_nolan = pd.read_pickle('df_nmf_nolan.pkl')
# re-add stop words
add_stop_words = ['im', 'know', 'dont', 'think', 'thought', 'got', 'ready', 'sir', 'hell', 'ill',
'oh', 'tell', 'youre', 'going', 'want', 'like', 'yes', 'just', 'hes', 'shes',
'took', 'theyre', 'wanna', 'looks', 'need', 'does', 'yeah', 'thats', 'come',
'gonna', 'gon', 'whered', 'didnt', 'did', 'coming', 'told', 'aint', 'little',
'okay', 'youve', 'trying', 'lets', 'ive', 'hed', 'mr', 'doing', 'let', 'came',
'whats', 'sure', 'stay', 'theres', 'doing', 'said', 'knows', 'ah', 'gotta', 'hey',
'weve', 'theyve', 'wheres', 'em', 'whatre', 'batman', 'gotham', 'dent', 'rachel',
'harvey', 'wayne', 'bruce', 'alfred', 'youll', 'yous', 'yup', 'ac', 'shouldnt',
'yknow', 'youd', 'youits', 'say', 'hi', 'ya', 'lot', 'gordon', 'isnt', 'wa']
stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)
# make word cloud for Burton movies
from PIL import Image
burton_mask = np.array(Image.open("batman-images/dk.png"))
def similar_color_func_red(word=None, font_size=None,
position=None, orientation=None,
font_path=None, random_state=None):
h = 350 # 0 - 360
s = 100 # 0 - 100
l = random_state.randint(0, 100) # 0 - 100
return "hsl({}, {}%, {}%)".format(h, s, l)
burton_cloud = WordCloud(stopwords=stop_words, background_color="white", max_words=1000, mask=burton_mask, max_font_size=256,
random_state=42, width=burton_mask.shape[1],
height=burton_mask.shape[0], color_func=similar_color_func_red).generate(" ".join(df_burton.dialogue_lemmatized))
plt.figure(figsize=[20,10])
plt.imshow(burton_cloud)
plt.axis('off')
plt.show()
# make word clouds for Nolan movies
nolan_mask = np.array(Image.open("batman-images/dk.png"))
def similar_color_func_blue(word=None, font_size=None,
position=None, orientation=None,
font_path=None, random_state=None):
h = 200 # 0 - 360
s = 100 # 0 - 100
l = random_state.randint(0, 100) # 0 - 100
return "hsl({}, {}%, {}%)".format(h, s, l)
nolan_cloud = WordCloud(stopwords=stop_words, background_color="white", max_words=1000, mask=nolan_mask, max_font_size=256,
random_state=42, width=nolan_mask.shape[1],
height=nolan_mask.shape[0], color_func=similar_color_func_blue).generate(" ".join(df_nolan.dialogue_lemmatized))
plt.figure(figsize=[20,10])
plt.imshow(nolan_cloud)
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
SimpleRankOrdersJupyter/ExploringAZeroAllocationTeam.ipynb | ###Markdown
Exploring data from a team that had zero allocation for a team
We found that the user interface allows the system to reach a state where the allocation for certain teams is zero. We are looking at the data log to see how team-mix, interaction-score and walking time are affected by this.
###Code
import sys
import os.path
sys.path.append("../CommonModules") # go to parent dir/CommonModules
import Learning2019GTL.Globals as Globals
import Learning2019GTL.DataConnector as DataConnector
data_map = Globals.FileNameMaps()
TEAM_MAP = dict([[v,k] for k,v in data_map.CSV_MAP.items()])
conditions = ['A - control (discuss conference) at beginning', 'B - strategy at beginning', 'D - strategy at mid']
t = DataConnector.Team()
t.getTeamByID(2)
print(t.getStringID())
###Output
MS1R1A-2
|
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-12.ipynb | ###Markdown
RadarCOVID-Report Data Extraction
###Code
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
###Output
_____no_output_____
###Markdown
Constants
###Code
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
###Output
_____no_output_____
###Markdown
Parameters
###Code
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
###Output
_____no_output_____
###Markdown
COVID-19 Cases
###Code
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
###Output
_____no_output_____
###Markdown
Extract API TEKs
###Code
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
###Output
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().drop(
###Markdown
Dump API TEKs
###Code
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
###Output
_____no_output_____
###Markdown
Load TEK Dumps
###Code
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
###Output
_____no_output_____
###Markdown
Daily New TEKs
###Code
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.DataFrame()
for upload_date in daily_extracted_teks_df.extraction_date.unique():
shared_teks_generation_to_upload_df = \
shared_teks_generation_to_upload_df.append(
compute_teks_by_generation_and_upload_date(date=upload_date))
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0
estimated_shared_diagnoses_df.head()
###Output
_____no_output_____
###Markdown
Hourly New TEKs
###Code
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
###Output
_____no_output_____
###Markdown
Official Statistics
###Code
import requests
import pandas.io.json
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = official_stats_df.append(previous_official_stats_df)
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
###Output
_____no_output_____
###Markdown
Data Merge
###Code
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
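    # Aggregate the daily summary over a rolling window of `days` days and recompute the
    # ratio columns from the windowed sums; COVID-19 cases are zeroed on days without
    # shared diagnoses so those days do not dilute the usage ratios.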
aggregated_result_summary_df = result_summary_df.copy()
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
###Output
_____no_output_____
###Markdown
Report Results
###Code
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns= [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
###Output
_____no_output_____
###Markdown
Daily Summary Table
###Code
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
###Output
_____no_output_____
###Markdown
Daily Summary Plots
###Code
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
###Output
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning:
The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.
layout[ax.rowNum, ax.colNum] = ax.get_visible()
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning:
The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.
layout[ax.rowNum, ax.colNum] = ax.get_visible()
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning:
The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.
if not layout[ax.rowNum + 1, ax.colNum]:
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning:
The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.
if not layout[ax.rowNum + 1, ax.colNum]:
###Markdown
Daily Generation to Upload Period Table
###Code
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
###Output
_____no_output_____
###Markdown
Hourly Summary Plots
###Code
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
###Output
_____no_output_____
###Markdown
Publish Results
###Code
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
###Output
_____no_output_____
###Markdown
Save Results
###Code
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
###Output
_____no_output_____
###Markdown
Publish Results as JSON
###Code
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
###Output
_____no_output_____
###Markdown
Publish on README
###Code
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
###Output
_____no_output_____
###Markdown
Publish on Twitter
###Code
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
###Output
_____no_output_____ |
v0.12.2/examples/notebooks/generated/statespace_varmax.ipynb | ###Markdown
VARMAX models
This is a brief introduction notebook to VARMAX models in statsmodels. The VARMAX model is generically specified as:
$$y_t = \nu + A_1 y_{t-1} + \dots + A_p y_{t-p} + B x_t + \epsilon_t + M_1 \epsilon_{t-1} + \dots + M_q \epsilon_{t-q}$$
where $y_t$ is a $\text{k\_endog} \times 1$ vector.
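For concreteness, writing $A_1 = [a_{ij}]$ and $B = (b_1, b_2)'$, a VARX(1) specification in two endogenous variables with a single exogenous regressor expands to:
$$\begin{bmatrix} y_{1,t} \\ y_{2,t} \end{bmatrix} = \begin{bmatrix} \nu_1 \\ \nu_2 \end{bmatrix} + \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} x_t + \begin{bmatrix} \epsilon_{1,t} \\ \epsilon_{2,t} \end{bmatrix}$$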
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
dta = sm.datasets.webuse('lutkepohl2', 'https://www.stata-press.com/data/r12/')
dta.index = dta.qtr
dta.index.freq = dta.index.inferred_freq
endog = dta.loc['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']]
###Output
_____no_output_____
###Markdown
Model specification
The `VARMAX` class in statsmodels allows estimation of VAR, VMA, and VARMA models (through the `order` argument), optionally with a constant term (via the `trend` argument). Exogenous regressors may also be included (as usual in statsmodels, by the `exog` argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the `measurement_error` argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the `error_cov_type` argument).
Example 1: VAR
Below is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is `maxiter=50`) in order for the likelihood estimation to converge. This is not unusual in VAR models, which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables.
###Code
exog = endog['dln_consump']
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='n', exog=exog)
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
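# A minimal sketch of the remaining constructor options described above; `mod_sketch`
# is a hypothetical name and the model is only constructed here, not fitted.
mod_sketch = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1, 0), trend='c',
                           exog=exog, measurement_error=True, error_cov_type='diagonal')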
###Output
Statespace Model Results
==================================================================================
Dep. Variable: ['dln_inv', 'dln_inc'] No. Observations: 75
Model: VARX(2) Log Likelihood 361.038
Date: Tue, 02 Feb 2021 AIC -696.076
Time: 06:54:22 BIC -665.949
Sample: 04-01-1960 HQIC -684.046
- 10-01-1978
Covariance Type: opg
===================================================================================
Ljung-Box (L1) (Q): 0.04, 10.15 Jarque-Bera (JB): 11.23, 2.37
Prob(Q): 0.84, 0.00 Prob(JB): 0.00, 0.31
Heteroskedasticity (H): 0.45, 0.40 Skew: 0.15, -0.38
Prob(H) (two-sided): 0.05, 0.03 Kurtosis: 4.87, 3.43
Results for equation dln_inv
====================================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------------
L1.dln_inv -0.2412 0.093 -2.593 0.010 -0.423 -0.059
L1.dln_inc 0.2947 0.449 0.657 0.511 -0.585 1.174
L2.dln_inv -0.1648 0.155 -1.061 0.288 -0.469 0.139
L2.dln_inc 0.0825 0.422 0.195 0.845 -0.745 0.910
beta.dln_consump 0.9479 0.640 1.482 0.138 -0.306 2.201
Results for equation dln_inc
====================================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------------
L1.dln_inv 0.0633 0.036 1.768 0.077 -0.007 0.133
L1.dln_inc 0.0841 0.107 0.783 0.434 -0.126 0.295
L2.dln_inv 0.0097 0.033 0.296 0.768 -0.055 0.074
L2.dln_inc 0.0339 0.134 0.253 0.801 -0.229 0.297
beta.dln_consump 0.7711 0.112 6.872 0.000 0.551 0.991
Error covariance matrix
============================================================================================
coef std err z P>|z| [0.025 0.975]
--------------------------------------------------------------------------------------------
sqrt.var.dln_inv 0.0434 0.004 12.289 0.000 0.036 0.050
sqrt.cov.dln_inv.dln_inc 4.755e-05 0.002 0.024 0.981 -0.004 0.004
sqrt.var.dln_inc 0.0109 0.001 11.220 0.000 0.009 0.013
============================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
###Markdown
From the estimated VAR model, we can plot the impulse response functions of the endogenous variables.
###Code
ax = res.impulse_responses(10, orthogonalized=True).plot(figsize=(13,3))
ax.set(xlabel='t', title='Responses to a shock to `dln_inv`');
###Output
_____no_output_____
###Markdown
Example 2: VMA
A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term.
###Code
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal')
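# Note: trend='c' (a constant) is the VARMAX default, so the intercept mentioned above is
# included without being passed explicitly; error_cov_type='diagonal' is what makes the
# innovations uncorrelated across equations.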
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
###Output
Statespace Model Results
==================================================================================
Dep. Variable: ['dln_inv', 'dln_inc'] No. Observations: 75
Model: VMA(2) Log Likelihood 353.886
+ intercept AIC -683.771
Date: Tue, 02 Feb 2021 BIC -655.961
Time: 06:54:27 HQIC -672.667
Sample: 04-01-1960
- 10-01-1978
Covariance Type: opg
===================================================================================
Ljung-Box (L1) (Q): 0.01, 0.07 Jarque-Bera (JB): 12.35, 12.99
Prob(Q): 0.93, 0.78 Prob(JB): 0.00, 0.00
Heteroskedasticity (H): 0.44, 0.81 Skew: 0.05, -0.48
Prob(H) (two-sided): 0.04, 0.60 Kurtosis: 4.99, 4.80
Results for equation dln_inv
=================================================================================
coef std err z P>|z| [0.025 0.975]
---------------------------------------------------------------------------------
intercept 0.0182 0.005 3.824 0.000 0.009 0.028
L1.e(dln_inv) -0.2620 0.106 -2.481 0.013 -0.469 -0.055
L1.e(dln_inc) 0.5405 0.633 0.854 0.393 -0.700 1.781
L2.e(dln_inv) 0.0298 0.148 0.201 0.841 -0.261 0.320
L2.e(dln_inc) 0.1630 0.477 0.341 0.733 -0.773 1.099
Results for equation dln_inc
=================================================================================
coef std err z P>|z| [0.025 0.975]
---------------------------------------------------------------------------------
intercept 0.0207 0.002 13.123 0.000 0.018 0.024
L1.e(dln_inv) 0.0489 0.041 1.178 0.239 -0.032 0.130
L1.e(dln_inc) -0.0806 0.139 -0.580 0.562 -0.353 0.192
L2.e(dln_inv) 0.0174 0.042 0.410 0.682 -0.066 0.101
L2.e(dln_inc) 0.1278 0.152 0.842 0.400 -0.170 0.425
Error covariance matrix
==================================================================================
coef std err z P>|z| [0.025 0.975]
----------------------------------------------------------------------------------
sigma2.dln_inv 0.0020 0.000 7.344 0.000 0.001 0.003
sigma2.dln_inc 0.0001 2.32e-05 5.834 0.000 9.01e-05 0.000
==================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
###Markdown
Caution: VARMA(p,q) specifications
Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with caution (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information.
###Code
mod = sm.tsa.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1))
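# Note: this VARMA(1,1) specification triggers the EstimationWarning shown in the output
# below, reflecting the identification caveat described above.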
res = mod.fit(maxiter=1000, disp=False)
print(res.summary())
###Output
/home/travis/build/statsmodels/statsmodels/statsmodels/tsa/statespace/varmax.py:163: EstimationWarning: Estimation of VARMA(p,q) models is not generically robust, due especially to identification issues.
EstimationWarning)
|
round01/12_viz-plt-plot.ipynb | ###Markdown
A = np.zeros((9, 21, 3), dtype=np.float32)
###Code
A = np.zeros((9, 21, 3), dtype=np.uint8)
plt.figure(figsize=(7,7))
plt.imshow(A);
plt.grid(True);
plt.yticks(range(A.shape[0]));
plt.xticks(range(A.shape[1]));
A[0,0] = gold
plt.figure(figsize=(7,7))
plt.imshow(A);
plt.grid(True);
plt.yticks(range(A.shape[0]));
plt.xticks(range(A.shape[1]));
A[0,1, :] = pink
A[0,2, ...] = green
A[0,3] = silver
A[0,4] = blue
plt.figure(figsize=(15,15))
plt.imshow(A);
plt.grid(True);
plt.yticks(range(A.shape[0]));
plt.xticks(range(A.shape[1]));
import sys
import numpy as np
#import pandas as pd
import datetime
import json
from array import *
import os
import math
from random import randrange
import random
#from keras.models import Sequential
#from keras.models import model_from_json
#from keras.layers import Dense, Activation
#from keras import optimizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.models import model_from_json
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras import optimizers
import tensorflow.keras as keras
import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.keras import backend as K
tf.disable_v2_behavior()
#Classes in GAME_SOCKET_DUMMY.py
class ObstacleInfo:
# initial energy for obstacles: Land (key = 0): -1, Forest(key = -1): 0 (random), Trap(key = -2): -10, Swamp (key = -3): -5
types = {0: -1, -1: 0, -2: -10, -3: -5}
def __init__(self):
self.type = 0
self.posx = 0
self.posy = 0
self.value = 0
class GoldInfo:
def __init__(self):
self.posx = 0
self.posy = 0
self.amount = 0
def loads(self, data):
golds = []
for gd in data:
g = GoldInfo()
g.posx = gd["posx"]
g.posy = gd["posy"]
g.amount = gd["amount"]
golds.append(g)
return golds
class PlayerInfo:
STATUS_PLAYING = 0
STATUS_ELIMINATED_WENT_OUT_MAP = 1
STATUS_ELIMINATED_OUT_OF_ENERGY = 2
STATUS_ELIMINATED_INVALID_ACTION = 3
STATUS_STOP_EMPTY_GOLD = 4
STATUS_STOP_END_STEP = 5
def __init__(self, id):
self.playerId = id
self.score = 0
self.energy = 0
self.posx = 0
self.posy = 0
self.lastAction = -1
self.status = PlayerInfo.STATUS_PLAYING
self.freeCount = 0
class GameInfo:
def __init__(self):
self.numberOfPlayers = 1
self.width = 0
self.height = 0
self.steps = 100
self.golds = []
self.obstacles = []
def loads(self, data):
m = GameInfo()
m.width = data["width"]
m.height = data["height"]
m.golds = GoldInfo().loads(data["golds"])
m.obstacles = data["obstacles"]
m.numberOfPlayers = data["numberOfPlayers"]
m.steps = data["steps"]
return m
class UserMatch:
def __init__(self):
self.playerId = 1
self.posx = 0
self.posy = 0
self.energy = 50
self.gameinfo = GameInfo()
def to_json(self):
return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4)
class StepState:
def __init__(self):
self.players = []
self.golds = []
self.changedObstacles = []
def to_json(self):
return json.dumps(self, default=lambda o: o.__dict__, sort_keys=True, indent=4)
#Main class in GAME_SOCKET_DUMMY.py
class GameSocket:
bog_energy_chain = {-5: -20, -20: -40, -40: -100, -100: -100}
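    # Swamp cells get harsher on each visit: the current penalty maps to the next one
    # (-5 -> -20 -> -40 -> -100, capped at -100) via add_changed_obstacle in go_to_pos.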
def __init__(self):
self.stepCount = 0
self.maxStep = 0
self.mapdir = "Maps" # where to load all pre-defined maps
self.mapid = ""
self.userMatch = UserMatch()
self.user = PlayerInfo(1)
self.stepState = StepState()
self.maps = {} # key: map file name, value: file content
self.map = [] # running map info: 0->Land, -1->Forest, -2->Trap, -3:Swamp, >0:Gold
self.energyOnMap = [] # self.energyOnMap[x][y]: <0, amount of energy which player will consume if it move into (x,y)
self.E = 50
self.resetFlag = True
self.craftUsers = [] # players that craft at current step - for calculating amount of gold
self.bots = []
self.craftMap = {} # cells that players craft at current step, key: x_y, value: number of players that craft at (x,y)
def init_bots(self):
self.bots = [Bot1(2), Bot2(3), Bot3(4)] # use bot1(id=2), bot2(id=3), bot3(id=4)
#for (bot) in self.bots: # at the beginning, all bots will have same position, energy as player
for bot in self.bots: # at the beginning, all bots will have same position, energy as player
bot.info.posx = self.user.posx
bot.info.posy = self.user.posy
bot.info.energy = self.user.energy
bot.info.lastAction = -1
bot.info.status = PlayerInfo.STATUS_PLAYING
bot.info.score = 0
self.stepState.players.append(bot.info)
self.userMatch.gameinfo.numberOfPlayers = len(self.stepState.players)
#print("numberOfPlayers: ", self.userMatch.gameinfo.numberOfPlayers)
    def reset(self, requests): # load new game by given request: [map id (filename), posx, posy, initial energy, max steps]
# load new map
self.reset_map(requests[0])
self.userMatch.posx = int(requests[1])
self.userMatch.posy = int(requests[2])
self.userMatch.energy = int(requests[3])
self.userMatch.gameinfo.steps = int(requests[4])
self.maxStep = self.userMatch.gameinfo.steps
# init data for players
self.user.posx = self.userMatch.posx # in
self.user.posy = self.userMatch.posy
self.user.energy = self.userMatch.energy
self.user.status = PlayerInfo.STATUS_PLAYING
self.user.score = 0
self.stepState.players = [self.user]
self.E = self.userMatch.energy
self.resetFlag = True
self.init_bots()
self.stepCount = 0
def reset_map(self, id): # load map info
self.mapId = id
self.map = json.loads(self.maps[self.mapId])
self.userMatch = self.map_info(self.map)
self.stepState.golds = self.userMatch.gameinfo.golds
self.map = json.loads(self.maps[self.mapId])
self.energyOnMap = json.loads(self.maps[self.mapId])
for x in range(len(self.map)):
for y in range(len(self.map[x])):
if self.map[x][y] > 0: # gold
self.energyOnMap[x][y] = -4
else: # obstacles
self.energyOnMap[x][y] = ObstacleInfo.types[self.map[x][y]]
def connect(self): # simulate player's connect request
print("Connected to server.")
for mapid in range(len(Maps)):
filename = "map" + str(mapid)
print("Found: " + filename)
self.maps[filename] = str(Maps[mapid])
def map_info(self, map): # get map info
# print(map)
userMatch = UserMatch()
userMatch.gameinfo.height = len(map)
userMatch.gameinfo.width = len(map[0])
i = 0
while i < len(map):
j = 0
while j < len(map[i]):
if map[i][j] > 0: # gold
g = GoldInfo()
g.posx = j
g.posy = i
g.amount = map[i][j]
userMatch.gameinfo.golds.append(g)
else: # obstacles
o = ObstacleInfo()
o.posx = j
o.posy = i
o.type = -map[i][j]
o.value = ObstacleInfo.types[map[i][j]]
userMatch.gameinfo.obstacles.append(o)
j += 1
i += 1
return userMatch
def receive(self): # send data to player (simulate player's receive request)
if self.resetFlag: # for the first time -> send game info
self.resetFlag = False
data = self.userMatch.to_json()
for (bot) in self.bots:
bot.new_game(data)
# print(data)
return data
else: # send step state
self.stepCount = self.stepCount + 1
if self.stepCount >= self.maxStep:
for player in self.stepState.players:
player.status = PlayerInfo.STATUS_STOP_END_STEP
data = self.stepState.to_json()
#for (bot) in self.bots: # update bots' state
for bot in self.bots: # update bots' state
bot.new_state(data)
# print(data)
return data
def send(self, message): # receive message from player (simulate send request from player)
if message.isnumeric(): # player send action
self.resetFlag = False
self.stepState.changedObstacles = []
action = int(message)
# print("Action = ", action)
self.user.lastAction = action
self.craftUsers = []
self.step_action(self.user, action)
for bot in self.bots:
if bot.info.status == PlayerInfo.STATUS_PLAYING:
action = bot.next_action()
bot.info.lastAction = action
# print("Bot Action: ", action)
self.step_action(bot.info, action)
self.action_5_craft()
for c in self.stepState.changedObstacles:
self.map[c["posy"]][c["posx"]] = -c["type"]
self.energyOnMap[c["posy"]][c["posx"]] = c["value"]
else: # reset game
requests = message.split(",")
print("Reset game: ", requests[:3], end='')
self.reset(requests)
def step_action(self, user, action):
switcher = {
0: self.action_0_left,
1: self.action_1_right,
2: self.action_2_up,
3: self.action_3_down,
4: self.action_4_free,
5: self.action_5_craft_pre
}
func = switcher.get(action, self.invalidAction)
func(user)
def action_5_craft_pre(self, user): # collect players who craft at current step
user.freeCount = 0
if self.map[user.posy][user.posx] <= 0: # craft at the non-gold cell
user.energy -= 10
if user.energy <= 0:
user.status = PlayerInfo.STATUS_ELIMINATED_OUT_OF_ENERGY
user.lastAction = 6 #eliminated
else:
user.energy -= 5
if user.energy > 0:
self.craftUsers.append(user)
key = str(user.posx) + "_" + str(user.posy)
if key in self.craftMap:
count = self.craftMap[key]
self.craftMap[key] = count + 1
else:
self.craftMap[key] = 1
else:
user.status = PlayerInfo.STATUS_ELIMINATED_OUT_OF_ENERGY
user.lastAction = 6 #eliminated
def action_0_left(self, user): # user go left
user.freeCount = 0
user.posx = user.posx - 1
if user.posx < 0:
user.status = PlayerInfo.STATUS_ELIMINATED_WENT_OUT_MAP
user.lastAction = 6 #eliminated
else:
self.go_to_pos(user)
def action_1_right(self, user): # user go right
user.freeCount = 0
user.posx = user.posx + 1
if user.posx >= self.userMatch.gameinfo.width:
user.status = PlayerInfo.STATUS_ELIMINATED_WENT_OUT_MAP
user.lastAction = 6 #eliminated
else:
self.go_to_pos(user)
def action_2_up(self, user): # user go up
user.freeCount = 0
user.posy = user.posy - 1
if user.posy < 0:
user.status = PlayerInfo.STATUS_ELIMINATED_WENT_OUT_MAP
user.lastAction = 6 #eliminated
else:
self.go_to_pos(user)
    def action_3_down(self, user): # user go down
user.freeCount = 0
user.posy = user.posy + 1
if user.posy >= self.userMatch.gameinfo.height:
user.status = PlayerInfo.STATUS_ELIMINATED_WENT_OUT_MAP
user.lastAction = 6 #eliminated
else:
self.go_to_pos(user)
def action_4_free(self, user): # user free
user.freeCount += 1
if user.freeCount == 1:
user.energy += int(self.E / 4)
elif user.freeCount == 2:
user.energy += int(self.E / 3)
elif user.freeCount == 3:
user.energy += int(self.E / 2)
else:
user.energy = self.E
if user.energy > self.E:
user.energy = self.E
def action_5_craft(self):
craftCount = len(self.craftUsers)
# print ("craftCount",craftCount)
if (craftCount > 0):
for user in self.craftUsers:
x = user.posx
y = user.posy
key = str(user.posx) + "_" + str(user.posy)
c = self.craftMap[key]
m = min(math.ceil(self.map[y][x] / c), 50)
user.score += m
# print ("user", user.playerId, m)
for user in self.craftUsers:
x = user.posx
y = user.posy
key = str(user.posx) + "_" + str(user.posy)
if key in self.craftMap:
c = self.craftMap[key]
del self.craftMap[key]
m = min(math.ceil(self.map[y][x] / c), 50)
self.map[y][x] -= m * c
if self.map[y][x] < 0:
self.map[y][x] = 0
self.energyOnMap[y][x] = ObstacleInfo.types[0]
for g in self.stepState.golds:
if g.posx == x and g.posy == y:
g.amount = self.map[y][x]
if g.amount == 0:
self.stepState.golds.remove(g)
self.add_changed_obstacle(x, y, 0, ObstacleInfo.types[0])
if len(self.stepState.golds) == 0:
for player in self.stepState.players:
player.status = PlayerInfo.STATUS_STOP_EMPTY_GOLD
break;
self.craftMap = {}
def invalidAction(self, user):
user.status = PlayerInfo.STATUS_ELIMINATED_INVALID_ACTION
user.lastAction = 6 #eliminated
def go_to_pos(self, user): # player move to cell(x,y)
if self.map[user.posy][user.posx] == -1:
user.energy -= randrange(16) + 5
elif self.map[user.posy][user.posx] == 0:
user.energy += self.energyOnMap[user.posy][user.posx]
elif self.map[user.posy][user.posx] == -2:
user.energy += self.energyOnMap[user.posy][user.posx]
self.add_changed_obstacle(user.posx, user.posy, 0, ObstacleInfo.types[0])
elif self.map[user.posy][user.posx] == -3:
user.energy += self.energyOnMap[user.posy][user.posx]
self.add_changed_obstacle(user.posx, user.posy, 3,
self.bog_energy_chain[self.energyOnMap[user.posy][user.posx]])
else:
user.energy -= 4
if user.energy <= 0:
user.status = PlayerInfo.STATUS_ELIMINATED_OUT_OF_ENERGY
user.lastAction = 6 #eliminated
def add_changed_obstacle(self, x, y, t, v):
added = False
for o in self.stepState.changedObstacles:
if o["posx"] == x and o["posy"] == y:
added = True
break
if added == False:
o = {}
o["posx"] = x
o["posy"] = y
o["type"] = t
o["value"] = v
self.stepState.changedObstacles.append(o)
def close(self):
print("Close socket.")
#Bots :bot1
class Bot1:
ACTION_GO_LEFT = 0
ACTION_GO_RIGHT = 1
ACTION_GO_UP = 2
ACTION_GO_DOWN = 3
ACTION_FREE = 4
ACTION_CRAFT = 5
def __init__(self, id):
self.state = State()
self.info = PlayerInfo(id)
def next_action(self):
if self.state.mapInfo.gold_amount(self.info.posx, self.info.posy) > 0:
if self.info.energy >= 6:
return self.ACTION_CRAFT
else:
return self.ACTION_FREE
if self.info.energy < 5:
return self.ACTION_FREE
else:
action = self.ACTION_GO_UP
if self.info.posy % 2 == 0:
if self.info.posx < self.state.mapInfo.max_x:
action = self.ACTION_GO_RIGHT
else:
if self.info.posx > 0:
action = self.ACTION_GO_LEFT
else:
action = self.ACTION_GO_DOWN
return action
def new_game(self, data):
try:
self.state.init_state(data)
except Exception as e:
import traceback
traceback.print_exc()
def new_state(self, data):
# action = self.next_action();
# self.socket.send(action)
try:
self.state.update_state(data)
except Exception as e:
import traceback
traceback.print_exc()
#Bots :bot2
class Bot2:
ACTION_GO_LEFT = 0
ACTION_GO_RIGHT = 1
ACTION_GO_UP = 2
ACTION_GO_DOWN = 3
ACTION_FREE = 4
ACTION_CRAFT = 5
def __init__(self, id):
self.state = State()
self.info = PlayerInfo(id)
def next_action(self):
if self.state.mapInfo.gold_amount(self.info.posx, self.info.posy) > 0:
if self.info.energy >= 6:
return self.ACTION_CRAFT
else:
return self.ACTION_FREE
if self.info.energy < 5:
return self.ACTION_FREE
else:
action = np.random.randint(0, 4)
return action
def new_game(self, data):
try:
self.state.init_state(data)
except Exception as e:
import traceback
traceback.print_exc()
def new_state(self, data):
# action = self.next_action();
# self.socket.send(action)
try:
self.state.update_state(data)
except Exception as e:
import traceback
traceback.print_exc()
#Bots :bot3
class Bot3:
ACTION_GO_LEFT = 0
ACTION_GO_RIGHT = 1
ACTION_GO_UP = 2
ACTION_GO_DOWN = 3
ACTION_FREE = 4
ACTION_CRAFT = 5
def __init__(self, id):
self.state = State()
self.info = PlayerInfo(id)
def next_action(self):
if self.state.mapInfo.gold_amount(self.info.posx, self.info.posy) > 0:
if self.info.energy >= 6:
return self.ACTION_CRAFT
else:
return self.ACTION_FREE
if self.info.energy < 5:
return self.ACTION_FREE
else:
action = self.ACTION_GO_LEFT
if self.info.posx % 2 == 0:
if self.info.posy < self.state.mapInfo.max_y:
action = self.ACTION_GO_DOWN
else:
if self.info.posy > 0:
action = self.ACTION_GO_UP
else:
action = self.ACTION_GO_RIGHT
return action
def new_game(self, data):
try:
self.state.init_state(data)
except Exception as e:
import traceback
traceback.print_exc()
def new_state(self, data):
# action = self.next_action();
# self.socket.send(action)
try:
self.state.update_state(data)
except Exception as e:
import traceback
traceback.print_exc()
#MinerState.py
def str_2_json(json_str):
    # Parse a JSON string (json.loads no longer accepts an "encoding" argument in Python 3.9+).
    return json.loads(json_str)
class MapInfo:
def __init__(self):
self.max_x = 0 #Width of the map
self.max_y = 0 #Height of the map
self.golds = [] #List of the golds in the map
self.obstacles = []
self.numberOfPlayers = 0
self.maxStep = 0 #The maximum number of step is set for this map
def init_map(self, gameInfo):
        #Initialize the map at the beginning of each episode
self.max_x = gameInfo["width"] - 1
self.max_y = gameInfo["height"] - 1
self.golds = gameInfo["golds"]
self.obstacles = gameInfo["obstacles"]
self.maxStep = gameInfo["steps"]
self.numberOfPlayers = gameInfo["numberOfPlayers"]
def update(self, golds, changedObstacles):
#Update the map after every step
self.golds = golds
for cob in changedObstacles:
newOb = True
for ob in self.obstacles:
if cob["posx"] == ob["posx"] and cob["posy"] == ob["posy"]:
newOb = False
#print("cell(", cob["posx"], ",", cob["posy"], ") change type from: ", ob["type"], " -> ",
# cob["type"], " / value: ", ob["value"], " -> ", cob["value"])
ob["type"] = cob["type"]
ob["value"] = cob["value"]
break
if newOb:
self.obstacles.append(cob)
#print("new obstacle: ", cob["posx"], ",", cob["posy"], ", type = ", cob["type"], ", value = ",
# cob["value"])
def get_min_x(self):
return min([cell["posx"] for cell in self.golds])
def get_max_x(self):
return max([cell["posx"] for cell in self.golds])
def get_min_y(self):
return min([cell["posy"] for cell in self.golds])
def get_max_y(self):
return max([cell["posy"] for cell in self.golds])
def is_row_has_gold(self, y):
return y in [cell["posy"] for cell in self.golds]
def is_column_has_gold(self, x):
return x in [cell["posx"] for cell in self.golds]
def gold_amount(self, x, y): #Get the amount of golds at cell (x,y)
for cell in self.golds:
if x == cell["posx"] and y == cell["posy"]:
return cell["amount"]
return 0
def get_obstacle(self, x, y): # Get the kind of the obstacle at cell(x,y)
for cell in self.obstacles:
if x == cell["posx"] and y == cell["posy"]:
return cell["type"]
return -1 # No obstacle at the cell (x,y)
class State:
STATUS_PLAYING = 0
STATUS_ELIMINATED_WENT_OUT_MAP = 1
STATUS_ELIMINATED_OUT_OF_ENERGY = 2
STATUS_ELIMINATED_INVALID_ACTION = 3
STATUS_STOP_EMPTY_GOLD = 4
STATUS_STOP_END_STEP = 5
def __init__(self):
self.end = False
self.score = 0
self.lastAction = None
self.id = 0
self.x = 0
self.y = 0
self.energy = 0
self.mapInfo = MapInfo()
self.players = []
self.stepCount = 0
self.status = State.STATUS_PLAYING
def init_state(self, data): #parse data from server into object
game_info = str_2_json(data)
self.end = False
self.score = 0
self.lastAction = None
self.id = game_info["playerId"]
self.x = game_info["posx"]
self.y = game_info["posy"]
self.energy = game_info["energy"]
self.mapInfo.init_map(game_info["gameinfo"])
self.stepCount = 0
self.status = State.STATUS_PLAYING
self.players = [{"playerId": 2, "posx": self.x, "posy": self.y},
{"playerId": 3, "posx": self.x, "posy": self.y},
{"playerId": 4, "posx": self.x, "posy": self.y}]
def update_state(self, data):
new_state = str_2_json(data)
for player in new_state["players"]:
if player["playerId"] == self.id:
self.x = player["posx"]
self.y = player["posy"]
self.energy = player["energy"]
self.score = player["score"]
self.lastAction = player["lastAction"]
self.status = player["status"]
self.mapInfo.update(new_state["golds"], new_state["changedObstacles"])
self.players = new_state["players"]
for i in range(len(self.players), 4, 1):
self.players.append({"playerId": i, "posx": self.x, "posy": self.y})
self.stepCount = self.stepCount + 1
#MinerEnv.py
TreeID = 1
TrapID = 2
SwampID = 3
class MinerEnv:
def __init__(self):
self.socket = GameSocket()
self.state = State()
self.score_pre = self.state.score#Storing the last score for designing the reward function
def start(self): #connect to server
self.socket.connect()
def end(self): #disconnect server
self.socket.close()
def send_map_info(self, request):#tell server which map to run
self.socket.send(request)
def reset(self): #start new game
try:
message = self.socket.receive() #receive game info from server
self.state.init_state(message) #init state
except Exception as e:
import traceback
traceback.print_exc()
def step(self, action): #step process
self.socket.send(action) #send action to server
try:
message = self.socket.receive() #receive new state from server
self.state.update_state(message) #update to local state
except Exception as e:
import traceback
traceback.print_exc()
# Functions are customized by client
def get_state(self):
# Building the map
#view = np.zeros([self.state.mapInfo.max_x + 1, self.state.mapInfo.max_y + 1], dtype=int)
view = np.zeros([self.state.mapInfo.max_y + 1, self.state.mapInfo.max_x + 1], dtype=int)
for x in range(self.state.mapInfo.max_x + 1):
for y in range(self.state.mapInfo.max_y + 1):
if self.state.mapInfo.get_obstacle(x, y) == TreeID: # Tree
view[y, x] = -TreeID
if self.state.mapInfo.get_obstacle(x, y) == TrapID: # Trap
view[y, x] = -TrapID
if self.state.mapInfo.get_obstacle(x, y) == SwampID: # Swamp
view[y, x] = -SwampID
if self.state.mapInfo.gold_amount(x, y) > 0:
view[y, x] = self.state.mapInfo.gold_amount(x, y)
DQNState = view.flatten().tolist() #Flattening the map matrix to a vector
# Add position and energy of agent to the DQNState
DQNState.append(self.state.x)
DQNState.append(self.state.y)
DQNState.append(self.state.energy)
#Add position of bots
for player in self.state.players:
if player["playerId"] != self.state.id:
DQNState.append(player["posx"])
DQNState.append(player["posy"])
#Convert the DQNState from list to array for training
DQNState = np.array(DQNState)
return DQNState
def get_reward(self):
# Calculate reward
reward = 0
score_action = self.state.score - self.score_pre
self.score_pre = self.state.score
if score_action > 0:
#If the DQN agent crafts golds, then it should obtain a positive reward (equal score_action)
#reward += score_action
reward += score_action*5
##If the DQN agent crashs into obstacels (Tree, Trap, Swamp), then it should be punished by a negative reward
#if self.state.mapInfo.get_obstacle(self.state.x, self.state.y) == TreeID: # Tree
# reward -= TreeID
#if self.state.mapInfo.get_obstacle(self.state.x, self.state.y) == TrapID: # Trap
# reward -= TrapID
if self.state.mapInfo.get_obstacle(self.state.x, self.state.y) == SwampID: # Swamp
reward -= SwampID
if self.state.lastAction == 4:
reward -= 40
        # If out of the map, then the DQN agent should be punished by a larger negative reward.
if self.state.status == State.STATUS_ELIMINATED_WENT_OUT_MAP:
#if self.state.stepCount < 50:
# reward += -5*(50 - self.state.stepCount)
reward += -50
        #Run out of energy, then the DQN agent should be punished by a larger negative reward.
if self.state.status == State.STATUS_ELIMINATED_OUT_OF_ENERGY:
if self.state.stepCount < 50:
reward += -(50 - self.state.stepCount)
if self.state.lastAction != 4:
# 4 is taking a rest
reward += -10
# control comes to here \implies our agent is not dead yet
if self.state.status == State.STATUS_PLAYING:
if self.state.energy >= 45 and self.state.lastAction == 4:
reward -= 30
# print ("reward",reward)
return reward
def check_terminate(self):
#Checking the status of the game
#it indicates the game ends or is playing
return self.state.status != State.STATUS_PLAYING
#Creating Maps
#This function creates the five maps in code instead of loading them from the local Maps folder
def CreateMaps():
map0 = [
[0, 0, -2, 100, 0, 0, -1, -1, -3, 0, 0, 0, -1, -1, 0, 0, -3, 0, -1, -1,0],
[-1,-1, -2, 0, 0, 0, -3, -1, 0, -2, 0, 0, 0, -1, 0, -1, 0, -2, -1, 0,0],
[0, 0, -1, 0, 0, 0, 0, -1, -1, -1, 0, 0, 100, 0, 0, 0, 0, 50, -2, 0,0],
[0, 0, 0, 0, -2, 0, 0, 0, 0, 0, 0, 0, -1, 50, -2, 0, 0, -1, -1, 0,0],
[-2, 0, 200, -2, -2, 300, 0, 0, -2, -2, 0, 0, -3, 0, -1, 0, 0, -3, -1, 0,0],
[0, -1, 0, 0, 0, 0, 0, -3, 0, 0, -1, -1, 0, 0, 0, 0, 0, 0, -2, 0,0],
[0, -1, -1, 0, 0, -1, -1, 0, 0, 700, -1, 0, 0, 0, -2, -1, -1, 0, 0, 0,100],
[0, 0, 0, 500, 0, 0, -1, 0, -2, -2, -1, -1, 0, 0, -2, 0, -3, 0, 0, -1,0],
[-1, -1, 0,-2 , 0, -1, -2, 0, 400, -2, -1, -1, 500, 0, -2, 0, -3, 100, 0, 0,0]
]
map1 = [
[0, 0, -2, 0, 0, 0, -1, -1, -3, 0, 0, 0, -1, -1, 0, 0, -3, 0, -1, -1,0],
[-1,-1, -2, 100, 0, 0, -3, -1, 0, -2, 100, 0, 0, -1, 0, -1, 0, -2, -1, 0,0],
[0, 0, -1, 0, 0, 0, 0, -1, -1, -1, 0, 0, 0, 0, 0, 0, 50, 0, -2, 0,0],
[0, 200, 0, 0, -2, 0, 0, 0, 0, 0, 0, 0, -1, 50, -2, 0, 0, -1, -1, 0,0],
[-2, 0, 0, -2, -2, 0, 0, 0, -2, -2, 0, 0, -3, 0, -1, 0, 0, -3, -1, 0,0],
[0, -1, 0, 0, 300, 0, 0, -3, 0, 0, -1, -1, 0, 0, 0, 0, 0, 0, -2, 0,0],
[500, -1, -1, 0, 0, -1, -1, 0, 700, 0, -1, 0, 0, 0, -2, -1, -1, 0, 0, 0,0],
[0, 0, 0, 0, 0, 0, -1, 0, -2, -2, -1, -1, 0, 0, -2, 0, -3, 100, 0, -1,0],
[-1, -1, 0,-2 , 0, -1, -2, 400, 0, -2, -1, -1, 0, 500, -2, 0, -3, 0, 0, 100,0]
]
map2 = [
[0, 0, -2, 0, 0, 0, -1, -1, -3, 0, 100, 0, -1, -1, 0, 0, -3, 0, -1, -1,0],
[-1,-1, -2, 0, 0, 0, -3, -1, 0, -2, 0, 0, 0, -1, 0, -1, 0, -2, -1, 0,0 ],
[0, 0, -1, 0, 0, 0, 100, -1, -1, -1, 0, 0, 50, 0, 0, 0, 50, 0, -2, 0,0],
[0, 200, 0, 0, -2, 0, 0, 0, 0, 0, 0, 0, -1, 0, -2, 0, 0, -1, -1, 0,0],
[-2, 0, 0, -2, -2, 0, 0, 0, -2, -2, 0, 0, -3, 0, -1, 0, 0, -3, -1, 0,0],
[0, -1, 0, 300, 0, 0, 0, -3, 0, 0, -1, -1, 0, 0, 0, 0, 0, 0, -2, 0,0],
[0, -1, -1, 0, 0, -1, -1, 700, 0, 0, -1, 0, 0, 0, -2, -1, -1, 0, 0, 0,0],
[0, 0, 0, 0, 0, 500, -1, 0, -2, -2, -1, -1, 0, 0, -2, 0, -3, 0, 700, -1,0],
[-1, -1, 0,-2 , 0, -1, -2, 400, 0, -2, -1, -1, 0, 500, -2, 0, -3, 0, 0, 100,0]
]
map3 = [
[0, 0, -2, 0, 0, 0, -1, -1, -3, 0, 0, 0, -1, -1, 0, 0, -3, 0, -1, -1,0],
[-1,-1, -2, 0, 0, 0, -3, -1, 0, -2, 0, 0, 100, -1, 0, -1, 0, -2, -1, 0,0],
[0, 0, -1, 0, 100, 0, 0, -1, -1, -1, 0, 0, 0, 0, 50, 0, 50, 0, -2, 0,0],
[0, 200, 0, 0, -2, 0, 0, 0, 0, 0, 0, 0, -1, 0, -2, 0, 0, -1, -1, 0,0],
[-2, 0, 0, -2, -2, 0, 0, 0, -2, -2, 0, 0, -3, 0, -1, 0, 0, -3, -1, 0,0],
[0, -1, 0, 0, 0, 0, 300, -3, 0, 700, -1, -1, 0, 0, 0, 0, 0, 0, -2, 0,0],
[0, -1, -1, 0, 0, -1, -1, 0, 0, 0, -1, 0, 0, 0, -2, -1, -1, 0, 0, 100,0],
[500, 0, 0, 0, 0, 0, -1, 0, -2, -2, -1, -1, 0, 0, -2, 0, -3, 0, 0, -1,0],
[-1, -1, 0,-2 , 0, -1, -2, 400, 0, -2, -1, -1, 0, 500, -2, 0, -3, 0, 0, 100,0]
]
map4 = [
[0, 0, -2, 0, 100, 0, -1, -1, -3, 0, 0, 0, -1, -1, 0, 0, -3, 0, -1, -1,0],
[-1,-1, -2, 0, 0, 0, -3, -1, 0, -2, 100, 0, 0, -1, 0, -1, 0, -2, -1, 0,0],
[0, 0, -1, 0, 0, 0, 0, -1, -1, -1, 0, 0, 0, 0, 50, 0, 0, 0, -2, 0,0],
[0, 200, 0, 0, -2, 0, 0, 0, 0, 0, 0, 0, -1, 0, -2, 0, 50, -1, -1, 0,0],
[-2, 0, 0, -2, -2, 0, 0, 0, -2, -2, 0, 0, -3, 0, -1, 0, 0, -3, -1, 0,0],
[0, -1, 0, 0, 300, 0, 0, -3, 0, 0, -1, -1, 0, 0, 0, 0, 0, 0, -2, 0,0],
[500, -1, -1, 0, 0, -1, -1, 0, 0, 700, -1, 0, 0, 0, -2, -1, -1, 0, 0, 100,0],
[0, 0, 0, 0, 0, 0, -1, 0, -2, -2, -1, -1, 0, 0, -2, 0, -3, 0, 0, -1,0],
[-1, -1, 0,-2 , 0, -1, -2, 400, 0, -2, -1, -1, 0, 500, -2, 0, -3, 0, 0, 100,0]
]
Maps = (map0,map1,map2,map3,map4)
return Maps
game_over_reason = (
"playing",
"went_out_map",
"out_of_energy",
"invalid_action",
"no_more_gold",
"no_more_step",
)
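# Note (assumption, for readability): these labels appear to line up with the State.STATUS_* codes,
# so game_over_reason[self.state.status] would give a human-readable end-of-game reason.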
###Output
_____no_output_____
###Markdown
Start Drawing
###Code
MAP_MAX_X = 21
MAP_MAX_Y = 9
Maps = CreateMaps()
minerEnv = MinerEnv()
minerEnv.start()
mapID = np.random.randint(0, 5)
mapID = 1
posID_x = np.random.randint(MAP_MAX_X)
posID_y = np.random.randint(MAP_MAX_Y)
request = ("map" + str(mapID) + "," + str(posID_x) + "," + str(posID_y) + ",50,100")
minerEnv.send_map_info(request)
minerEnv.reset()
s = minerEnv.get_state()
carte = s[:-9].reshape(MAP_MAX_Y, MAP_MAX_X)
carte
carte.shape
def a(c):
return [*c, 3]
a(carte.shape)
def b(c):
return (*c, 3)
b(carte.shape)
from constants import width, height, terrain_ids
s
numerical_image = s[:width*height].reshape((height, width))
numerical_image
image = np.zeros((height, width, 3), dtype=np.uint8)
numerical_image==-terrain_ids["forest"]
image[numerical_image==-terrain_ids["forest"]] = green
plt.figure(figsize=(7,7))
plt.imshow(image);
plt.grid(True);
plt.yticks(range(image.shape[0]));
plt.xticks(range(image.shape[1]));
#plt.imshow(image);
image[numerical_image==-terrain_ids["trap"]] = silver
plt.figure(figsize=(7,7))
plt.imshow(image);
plt.grid(True);
plt.yticks(range(image.shape[0]));
plt.xticks(range(image.shape[1]));
#plt.imshow(image);
image[numerical_image==-terrain_ids["swamp"]] = blue
imshow(image)
image[numerical_image>0] = gold
imshow(image)
imshow(render_state_image(s))
primitive = render_state_image(s)
test = np.kron(primitive, np.ones((3,3)))
test.shape
test.dtype
###Output
_____no_output_____
###Markdown
- Not only was the shape wrong, but also the dtype
###Code
test = np.array([np.kron(primitive[..., j], np.ones((3,3), dtype=np.uint8)) for j in range(3)])
test.shape, test.dtype
np.moveaxis(test, 0, -1).shape
imshow(np.moveaxis(test, 0, -1), figsize=(15,15))
###Output
_____no_output_____
###Markdown
np.kron() can do this for us directly, as long as the tiling block keeps a singleton channel axis and the uint8 dtype.
###Code
primitive = render_state_image(s)
test = np.kron(primitive, np.ones((3,3,1), dtype=np.uint8))
test.shape
imshow(test)
imshow(prettier_render(s))
bots_xy = s[n_px+3:].reshape((3,2))
bots_xy
s[189:]
agent_xy = s[n_px:n_px+2]
agent_xy
agent_xy.size
s[n_px+3:]
from viz_utils import *
imshow(prettier_render(s), figsize=(15,15))
###Output
primitive_3x.shape = (27, 63, 3)
coord = [ 7 16]
coord = [ 7 16]
coord = [ 7 16]
agent_xy_3x = [ 7 16]
|
001_Decision_Tree_PlayGolf_ID3.ipynb | ###Markdown
All the IPython Notebooks in this **Python Decision Tree and Random Forest** series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/Python_Decision_Tree_and_Random_Forest)** Decision TreeA Decision Tree is one of the popular and powerful machine learning algorithms that I have learned. It is a non-parametric supervised learning method that can be used for both classification and regression tasks. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. For a classification model, the target values are discrete in nature, whereas, for a regression model, the target values are represented by continuous values. Unlike the black box type of algorithms such as Neural Network, Decision Trees are comparably easier to understand because it shares internal decision-making logic (you will find details in the following session).Despite the fact that many data scientists believe it’s an old method and they may have some doubts of its accuracy due to an overfitting problem, the more recent tree-based models, for example, Random forest (bagging method), gradient boosting (boosting method) and XGBoost (boosting method) are built on the top of decision tree algorithm. Therefore, the concepts and algorithms behind Decision Trees are strongly worth understanding! There are *four* popular types of decision tree algorithms: 1. **ID3**2. **CART (Classification And Regression Trees)**3. **Chi-Square**4. **Reduction in Variance**In this class, we'll focus only on the classification trees and the explanation of **ID3**. **Example:**>You play golf every Sunday and you invite your best friend, Arthur to come with you every time. Arthur sometimes comes to join but sometimes not. For him, it depends on a number of factors for example, **Weather**, **Temperature**, **Humidity** and **Wind**. We'll use the dataset of last two week to predict whether or not Arthur will join you to play golf. An intuitive way to do this is through a Decision Tree. * **Root Node:** - The attribute that best classifies the training data, use this attribute at the root of the tree. - The first split which decides the entire population or sample data should further get divided into two or more homogeneous sets. * **Splitting:** It is a process of dividing a node into two or more *sub-nodes*.>**Question:** Base on which attribute (feature) to split? What is the best split?>**Answer:** Use the attribute with the highest **Information Gain** or **Gini Gain*** **Decision Node:** This node decides whether/when a *sub-node* splits into further sub-nodes or not.* **Leaf:** Terminal Node that predicts the outcome (categorical or continues value). The *coloured nodes*, i.e., *Yes* and *No* nodes, are the leaves. ID3 (Iterative Dichotomiser)ID3 decision tree algorithm uses **Information Gain** to decide the splitting points. In order to measure how much information we gain, we can use **Entropy** to calculate the homogeneity of a sample.>**Question:** What is **“Entropy”**? and What is its function?>**Answer:** It is a measure of the amount of uncertainty in a data set. Entropy controls how a Decision Tree decides to split the data. It actually affects how a Decision Tree draws its boundaries. We can summarize the ID3 algorithm as illustrated below:1. 
Compute the entropy for data-set **Entropy(s)** - Calculate **Entropy** (Amount of uncertainity in dataset):$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$ 2. For every attribute/feature: - Calculate entropy for all other values **Entropy(A)** - Take **Average Information Entropy** for the current attribute - Calculate **Average Information**: $$I(Attribute) = \sum\frac{p_{i}+n_{i}}{p+n}Entropy(A)$$ - Calculate **Gain** for the current attribute - Calculate **Information Gain**: (Difference in Entropy before and after splitting dataset on attribute A) $$Gain = Entropy(S) - I(Attribute)$$3. Pick the **Highest Gain Attribute**.4. **Repeat** until we get the tree we desired. 1. Calculate the Entropy for dataset Entropy(s)We need to calculate the entropy first. Decision column consists of 14 instances and includes two labels: **Yes** and **No**. There are 9 decisions labeled **Yes**, and 5 decisions labeled **No**. Calculate Entropy(S):$$Entropy(S) = (Yes)log_{2}(Yes) - (No)log_{2}(No)$$➡$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$➡$$Entropy(S) = \frac{-9}{9+5}log_{2}(\frac{9}{9+5}) - \frac{5}{9+5}log_{2}(\frac{5}{9+5})$$➡$$Entropy(S) = \frac{-9}{14}log_{2}(\frac{9}{14}) - \frac{5}{14}log_{2}(\frac{5}{14})$$➡$$Entropy(S) = 0.940$$ 2. Calculate Entropy for each Attribute of Dataset Entropy for each Attribute: (let say Outlook) Calculate Entropy for each Values, i.e for 'Sunny', 'Rainy' and 'Overcast'.| Outlook | PlayGolf | | Outlook | PlayGolf | | Outlook | PlayGolf ||:---------:|:--------:|:---:|:---------:|:--------:|:---:|:------------:|:--------:|| **Sunny** | **No**❌ | \| | **Rainy** | **Yes**✅| \| | **Overcast** | **Yes**✅|| **Sunny** | **No**❌ | \| | **Rainy** | **Yes**✅| \| | **Overcast** | **Yes**✅|| **Sunny** | **No**❌ | \| | **Rainy** | **No**❌ | \| | **Overcast** | **Yes**✅|| **Sunny** | **Yes**✅| \| | **Rainy** | **Yes**✅| \| | **Overcast** | **Yes**✅|| **Sunny** | **Yes**✅| \| | **Rainy** | **No**❌ | \| | | |1. Calculate Entropy(Outlook='Value'):$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$➡$$E(Outlook = Sunny) = \frac{-2}{5}log_{2}(\frac{2}{5}) - \frac{3}{5}log_{2}(\frac{3}{5}) = 0.971$$➡$$E(Outlook = Rainy) = \frac{-3}{5}log_{2}(\frac{3}{5}) - \frac{2}{5}log_{2}(\frac{2}{5}) = 0.971$$➡$$E(Outlook = Overcast) = -1 log_{2}(1) - 0 log_{2}(0) = 0$$| Outlook | Yes = $p$ | No = $n$ | Entropy ||:------------ |:-------:|:------:|:-------:|| **Sunny** |**2** | **3** | **0.971** || **Rainy** |**3** | **2** | **0.971** || **Overcast** |**4** | **0** | **0** |2. Calculate Average Information Entropy(Outlook='Value'):$$I(Outlook) = \frac{p_{Sunny}+n_{Sunny}}{p+n}Entropy(Outlook=Sunny) + $$➡$$\frac{p_{Rainy}+n_{Rainy}}{p+n}Entropy(Outlook=Rainy) + $$➡$$\frac{p_{Overcast}+n_{Overcast}}{p+n}Entropy(Outlook=Overcast)$$➡$$I(Outlook) = \frac{3+2}{9+5}*(0.971) + \frac{2+3}{9+5}*(0.971) + \frac{4+0}{9+5}*(0)$$$$I(Outlook) = 0.693$$3. 
Calculate Gain: Outlook$$Gain = Entropy(S) - I(Attribute)$$➡$$Entropy(S) = 0.940$$➡$$Gain(Outlook) = 0.940 - 0.693$$➡$$Gain(Outlook) = 0.247$$ Entropy for each Attribute: (let say Temperature)) Calculate Entropy for each Values, i.e for 'Hot', 'Mild' and 'Cool'.| Temperature | PlayGolf | | Temperature | PlayGolf | | Temperature | PlayGolf ||:-----------:|:--------:|:---:|:-----------:|:--------:|:---:|:-----------:|:--------:|| **Hot** | **No**❌ | \| | **Mild** | **Yes**✅ | \| | **Cool** | **Yes**✅ || **Hot** | **No**❌ | \| | **Mild** | **No**❌ | \| | **Cool** | **No**❌ || **Hot** | **Yes**✅ | \| | **Mild** | **Yes**✅ | \| | **Cool** | **Yes**✅ || **Hot** | **Yes**✅ | \| | **Mild** | **Yes**✅ | \| | **Cool** | **Yes**✅ || | | \| | **Mild** | **Yes**✅ | \| | | || | | \| | **Mild** | **No**❌ | \| | | |1. Calculate Entropy(Temperature='Value'):$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$➡$$E(Temperature = Hot) = \frac{-2}{4}log_{2}(\frac{2}{4}) - \frac{2}{6}log_{2}(\frac{2}{6}) = 1$$➡$$E(Temperature = Mild) = \frac{-4}{6}log_{2}(\frac{4}{6}) - \frac{2}{6}log_{2}(\frac{2}{5}) = 0.918$$➡$$E(Temperature = Cool) = \frac{-3}{4}log_{2}(\frac{-3}{4}) - \frac{-1}{4}log_{2}(\frac{-1}{4}) = 0.811$$| Temperature | Yes = $p$ | No = $n$ | Entropy ||:------------|:---------:|:--------:|:---------:|| **Hot** | **2** | **2** | **1** || **Mild** | **4** | **2** | **0.918** || **Cool** | **3** | **1** | **0.811** |2. Calculate Average Information Entropy(Temperature='Value'):$$I(Temperature) = \frac{p_{Hot}+n_{Hot}}{p+n}Entropy(Temperature=Hot) + $$➡$$\frac{p_{Mild}+n_{Mild}}{p+n}Entropy(Temperature=Mild) + $$➡$$\frac{p_{Cool}+n_{Cool}}{p+n}Entropy(Temperature=Cool)$$➡$$I(Temperature) = \frac{2+2}{9+5}*(1) + \frac{4+2}{9+5}*(0.918) + \frac{3+1}{9+5}*(0.811)$$➡$$I(Temperature) = 0.911$$3. Calculate Gain: Temperature$$Gain = Entropy(S) - I(Attribute)$$➡$$Entropy(S) = 0.940$$➡$$Gain(Temperature) = 0.940 - 0.911$$➡$$Gain(Temperature) = 0.029$$ Entropy for each Attribute: (let say Humidity)) Calculate Entropy for each Values, i.e for 'Normal' and 'High'.| Humidity | PlayGolf | | Humidity | PlayGolf | |:--------:|:--------:|:---:|:--------:|:--------:|| **Normal** | **Yes**✅ | \| | **High** | **No**❌ | | **Normal** | **No**❌ | \| | **High** | **No**❌ | | **Normal** | **Yes**✅ | \| | **High** | **Yes**✅ | | **Normal** | **Yes**✅ | \| | **High** | **Yes**✅ | | **Normal** | **Yes**✅ | \| | **High** | **No**❌ | | **Normal** | **Yes**✅ | \| | **High** | **Yes**✅ | | **Normal** | **Yes**✅ | \| | **High** | **No**❌ | 1. Calculate Entropy(Humidity='Value'):$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$➡$$E(Humidity = Normal) = \frac{-3}{7}log_{2}(\frac{3}{7}) - \frac{4}{7}log_{2}(\frac{4}{7}) = 0.985$$➡$$E(Humidity = High) = \frac{-6}{7}log_{2}(\frac{6}{7}) - \frac{1}{7}log_{2}(\frac{1}{7}) = 0.591$$| Humidity | Yes = $p$ | No = $n$ | Entropy ||:-----------|:---------:|:--------:|:---------:|| **Normal** | **3** | **4** | **0.985** || **High** | **6** | **1** | **0.591** |2. Calculate Average Information Entropy(Humidity='Value'):$$I(Humidity) = \frac{p_{Normal}+n_{Normal}}{p+n}Entropy(Humidity=Normal) + $$➡$$\frac{p_{High}+n_{High}}{p+n}Entropy(Humidity=High)$$➡$$I(Humidity) = \frac{3+4}{9+5}*(0.985) + \frac{6+1}{9+5}*(0.591) $$➡$$I(Humidity) = 0.788$$3. 
Calculate Gain: Humidity$$Gain = Entropy(S) - I(Attribute)$$➡$$Entropy(S) = 0.940$$➡$$Gain(Humidity) = 0.940 - 0.788$$➡$$Gain(Humidity) = 0.152$$ Entropy for each Attribute: (let say Wind)) Calculate Entropy for each Values, i.e for 'Weak' and 'Strong'.| Wind | PlayGolf | | Wind | PlayGolf | |:--------:|:--------:|:---:|:--------:|:--------:|| **Weak** | **No**❌ | \| | **Strong** | **No**❌ | | **Weak** | **Yes**✅ | \| | **Strong** | **No**❌ | | **Weak** | **Yes**✅ | \| | **Strong** | **Yes**✅ | | **Weak** | **Yes**✅ | \| | **Strong** | **Yes**✅ | | **Weak** | **No**❌ | \| | **Strong** | **Yes**✅ | | **Weak** | **Yes**✅ | \| | **Strong** | **No**❌ | | **Weak** | **Yes**✅ | \| | | | | **Weak** | **Yes**✅ | \| | | | 1. Calculate Entropy(Wind='Value'):$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$➡$$E(Wind = Normal) = \frac{-6}{8}log_{2}(\frac{6}{8}) - \frac{2}{8}log_{2}(\frac{2}{8}) = 0.811$$➡$$E(Wind = High) = \frac{-3}{6}log_{2}(\frac{3}{6}) - \frac{3}{6}log_{2}(\frac{3}{6}) = 1$$| Wind | Yes = $p$ | No = $n$ | Entropy ||:--------|:---------:|:--------:|:---------:|| **Weak** | **6** | **2** | **0.811** || **Strong** | **3** | **3** | **1** |2. Calculate Average Information Entropy(Wind='Value'):$$I(Wind) = \frac{p_{Weak}+n_{Weak}}{p+n}Entropy(Wind=Weak) + $$➡$$\frac{p_{Strong}+n_{Strong}}{p+n}Entropy(Wind=Strong)$$➡$$I(Wind) = \frac{6+2}{9+5}*(0.811) + \frac{3+3}{9+5}*(1) $$➡$$I(Wind) = 0.892$$3. Calculate Gain: Wind$$Gain = Entropy(S) - I(Attribute)$$➡$$Entropy(S) = 0.940$$➡$$Gain(Wind) = 0.940 - 0.892$$➡$$Gain(Wind) = 0.048$$ 3. Select Root Node of DatasetPick the highest Gain attribute.| Attributes | Gain | ||:----------------|:---------:|:---------:|| **Outlook** | **0.247** | ⬅️ Root node|| **Temperature** | **0.029** | || **Humidity** | **0.152** | || **Wind** | **0.048** | |As seen, **Outlook** factor on decision produces the highest score. That's why, outlook decision will appear in the root node of the tree. 4. Calculate Entropy for dataset when Outlook is Sunny Now, we need to test dataset for custom subsets of Outlook attribute.**Outlook = Overcast**| Outlook | Temperature | Humidity | Windy | PlayGolf | ||:-------:|:-----------:|:--------:|:-----:|:--------:|:--------:|| **Overcast** | **Hot** | **High** | **Weak** | **Yes** | ✅ || **Overcast** | **Cool** | **Normal** | **Strong** | **Yes** | ✅ || **Overcast** | **Mild** | **High** | **Weak** | **Yes** | ✅ || **Overcast** | **Hot** | **Normal** | **Strong** | **Yes** | ✅ |Basically, decision will always be **Yes** if outlook were overcast. We'll apply same principles to those sub-trees till we get the tree.Focus on the sub-trees for **Sunny** **Outlook**. 
We need to find the Gain scores for **Temperature**, **Humidity** and **Wind** attributes respectively.**Outlook = Sunny**| Outlook | Temperature | Humidity | Windy | PlayGolf | ||:-------:|:-----------:|:--------:|:-----:|:--------:|:--------:|| **Sunny** | **Hot** | **High** | **Weak** | **No** | ❌ || **Sunny** | **Hot** | **High** | **Strong** | **No** | ❌ | | **Sunny** | **Mild** | **High** | **Weak** | **No** | ❌ | | **Sunny** | **Cool** | **Normal** | **Weak** | **Yes** | ✅ | | **Sunny** | **Mild** | **Normal** | **Strong** | **Yes** | ✅ | $$p = 2, n = 3$$Calculate Entropy(S):$$Entropy(S) = (Yes)log_{2}(Yes) - (No)log_{3}(No)$$➡$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$➡$$Entropy(S) = \frac{-2}{2+3}log_{2}(\frac{2}{2+3}) - \frac{3}{2+3}log_{2}(\frac{3}{2+3})$$➡$$Entropy(S) = 0.971$$ 5. Calculate Entropy for each Attribute of Dataset when Outlook is Sunny Entropy for each Attribute: (let say Temperature) for Sunny Outlook Calculate Entropy for each Temperature, i.e for Cool', 'Hot' and 'Mild' for Sunny Outlook.| Outlook | Temperature | PlayGolf | ||:-------:|--------:|--------:|:--------:|| **Sunny** | **Cool** | **Yes** | ✅ || **Sunny** | **Hot** | **No** | ❌ | | **Sunny** | **Hot** | **No** | ❌ | | **Sunny** | **Mild** | **No** | ❌ | | **Sunny** | **Mild** | **Yes** | ✅ || Temperature | Yes = $p$ | No = $n$ | Entropy ||:--------|:---------:|:--------:|:--------:|| **Cool** | **1** | **0** | **0** || **Hot** | **0** | **2** | **0**|| **Mild** | **1** | **1** | **1**|1. Calculate Average Information Entropy(Outlook=Sunny|Temperature):$$I(Outlook=Sunny|Temperature) = 0.4$$2. Calculate Gain(Outlook=Sunny|Temperature):$$Gain(Outlook=Sunny|Temperature) = 0.571$$ Entropy for each Attribute: (let say Humidity) for Sunny Outlook Calculate Entropy for each Humidity, i.e for 'High' and 'Normal' for Sunny Outlook.| Outlook | Humidity | PlayGolf | ||:-------:|--------:|--------:|:--------:|| **Sunny** | **High** | **No** | ❌ || **Sunny** | **High** | **No** | ❌ | | **Sunny** | **High** | **No** | ❌ | | **Sunny** | **Normal** | **Yes** | ✅ | | **Sunny** | **Normal** | **Yes** | ✅ || Humidity| Yes = $p$ | No = $n$ | Entropy ||:--------|:---------:|:--------:|:-------:|| **High** | **0** | **3** | **0** || **Normal** | **2** | **0** | **0** |1. Calculate Average Information Entropy(Outlook=Sunny|Humidity):$$I(Outlook=Sunny|Humidity) = 0$$2. Calculate Gain(Outlook=Sunny|Humidity):$$Gain(Outlook=Sunny|Humidity) = 0.971$$ Entropy for each Attribute: (let say Windy) for Sunny Outlook Calculate Entropy for each Windy, i.e for 'Strong' and 'Weak' for Sunny Outlook.| Outlook | Wind | PlayGolf | ||:-------:|--------:|--------:|:--------:|| **Sunny** | **Strong** | **No** | ❌ || **Sunny** | **Strong** | **Yes** | ✅ | | **Sunny** | **Weak** | **No** | ❌ | | **Sunny** | **Weak** | **No** | ❌ | | **Sunny** | **Weak** | **Yes** | ✅ || Wind | Yes = $p$ | No = $n$ | Entropy ||:--------|:---------:|:--------:|:--------:|| **Strong** | **1** | **1** | **1** || **Weak** | **1** | **2** | **0.918**|1. Calculate Average Information Entropy(Outlook=Sunny|Wind):$$I(Outlook=Sunny|Windy) = 0.951$$2. Calculate Gain(Outlook=Sunny|Wind):$$Gain(Outlook=Sunny|Windy) = 0.020$$ 6. 
Select Root Node of Dataset for Sunny OutlookPick the highest gain attribute.| Attributes | Gain | ||:----------------|:---------:|:---------:|| **Humidity** | **0.971** | ⬅️ Root node|| **Wind** | **0.02** | || **Temperature** | **0.571** | |As seen, **Humidity** factor on decision produces the highest score. That's why, **Humidity** decision will appear in the next node of the Sunny. 7. Calculate Entropy for each Attribute of Dataset when Outlook is Rainy Now, we need to focus on **Rainy** **Outlook**.Focus on the sub-tree for **Rainy** **Outlook**. We need to find the Gain scores for **Temperature**, **Humidity** and **Wind** attributes respectively.**Outlook = Rainy**| Outlook | Temperature | Humidity | Wind | PlayGolf | ||:-------:|:-----------:|:--------:|:-----:|:--------:|:--------:|| **Rainy** | **Mild** | **High** | **Weak** | **Yes** | ✅ || **Rainy** | **Cool** | **Normal** | **Weak** | **Yes** | ✅ || **Rainy** | **Cool** | **Normal** | **Strong** | **No** | ❌ || **Rainy** | **Mild** | **Normal** | **Weak** | **Yes** | ✅ || **Rainy** | **Mild** | **High** | **Strong** | **No** | ❌ |$$p = 3, n = 2$$Calculate Entropy(S):$$Entropy(S) = (Yes)log_{2}(Yes) - (No)log_{3}(No)$$➡$$Entropy(S) = \frac{-p}{p+n}log_{2}(\frac{p}{p+n}) - \frac{n}{p+n}log_{2}(\frac{n}{p+n})$$➡$$Entropy(S) = \frac{-3}{2+3}log_{2}(\frac{3}{2+3}) - \frac{2}{2+3}log_{2}(\frac{2}{2+3})$$➡$$Entropy(S) = 0.971$$ Entropy for each Attribute: (let say Temperature) for Sunny Rainy Calculate Entropy for each Temperature, i.e for Cool', 'Hot' and 'Mild' for Sunny Rainy.| Outlook | Temperature | PlayGolf | ||:-------:|-------------:|---------:|:--------:|| **Rainy** | **Mild** | **Yes** | ✅ || **Rainy** | **Cool** | **Yes** | ✅ | | **Rainy** | **Cool** | **No** | ❌ | | **Rainy** | **Mild** | **Yes** | ✅ | | **Rainy** | **Mild** | **No** | ❌ || Temperature | Yes = $p$ | No = $n$ | Entropy ||:------------|:---------:|:--------:|:--------:|| **Cool** | **1** | **1** | **1** || **Mild** | **2** | **1** | **0.918**|1. Calculate Average Information Entropy(Outlook=Rainy|Temperature):$$I(Outlook=Rainy|Temperature) = 0.951$$2. Calculate Gain(Outlook=Rainy|Temperature):$$Gain(Outlook=Rainy|Temperature) = 0.02$$ Entropy for each Attribute: (let say Humidity) for Sunny Rainy Calculate Entropy for each Humidity, i.e for 'High' and 'Normal' for Sunny Rainy.| Outlook | Humidity | PlayGolf | ||:-------:|----------:|---------:|:--------:|| **Rainy** | **High** | **Yes** | ✅ || **Rainy** | **High** | **No** | ❌ | | **Rainy** | **Normal** | **Yes** | ✅ | | **Rainy** | **Normal** | **No** | ❌ | | **Rainy** | **Normal** | **Yes** | ✅ || Humidity| Yes = $p$ | No = $n$ | Entropy ||:--------|:---------:|:--------:|:-------:|| **High** | **1** | **1** | **1** || **Normal** | **2** | **1** | **0.918** |1. Calculate Average Information Entropy(Outlook=Rainy|Humidity):$$I(Outlook=Rainy|Humidity) = 0.951$$2. Calculate Gain(Outlook=Rainy|Humidity):$$Gain(Outlook=Rainy|Humidity)= 0.02$$ Entropy for each Attribute: (let say Windy) for Sunny Rainy Calculate Entropy for each Windy, i.e for 'Strong' and 'Weak' for Sunny Rainy.| Outlook | Wind | PlayGolf | ||:-------:|--------:|---------:|:--------:|| **Rainy** | **Strong** | **No** | ❌ || **Rainy** | **Strong** | **No** | ❌ | | **Rainy** | **Weak** | **Yes** | ✅ | | **Rainy** | **Weak** | **Yes** | ✅ | | **Rainy** | **Weak** | **Yes** | ✅ || Wind | Yes = $p$ | No = $n$ | Entropy ||:--------|:---------:|:--------:|:--------:|| **Strong** | **0** | **2** | **0** || **Weak** | **3** | **0** | **0**|1. 
Calculate Average Information Entropy(Outlook=Rainy|Wind):$$I(Outlook=Rainy|Windy) = 0$$2. Calculate Gain(Outlook=Rainy|Wind):$$Gain(Outlook=Rainy|Windy) = 0.971$$ 8. Select Root Node of Dataset for Rainy OutlookPick the highest gain attribute.| Attributes | Gain | ||:----------------|:---------:|:---------:|| **Humidity** | **0.02** | || **Windy** | **0.971** | ⬅️ Root node|| **Temperature** | **0.02** | |As seen, **Wind** factor on decision produces the highest score. That's why, **Wind** decision will appear in the next node of the Rainy. So, decision tree construction is over. We can use the following rules for decisioning. The same problem is solved with CART algorithm **[here](https://github.com/milaan9/Python_Decision_Tree_and_Random_Forest/blob/main/002_Decision_Tree_PlayGolf_CART.ipynb)** Building a Decision Tree Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and apply this knowledge to classify a new sample.
###Code
import pandas as pd
df = pd.read_csv("dataset/playgolf_data.csv")
print("\n Given Play Golf Dataset:\n\n", df)
###Output
Given Play Golf Dataset:
Outlook Temperature Humidity Wind PlayGolf
0 Sunny Hot High Weak No
1 Sunny Hot High Strong No
2 Overcast Hot High Weak Yes
3 Rainy Mild High Weak Yes
4 Rainy Cool Normal Weak Yes
5 Rainy Cool Normal Strong No
6 Overcast Cool Normal Strong Yes
7 Sunny Mild High Weak No
8 Sunny Cool Normal Weak Yes
9 Rainy Mild Normal Weak Yes
10 Sunny Mild Normal Strong Yes
11 Overcast Mild High Strong Yes
12 Overcast Hot Normal Weak Yes
13 Rainy Mild High Strong No
###Markdown
Predicting Attributes
###Code
t = df.keys()[-1]
print('Target Attribute is ➡ ', t)
# Get the attribute names from input dataset
attribute_names = list(df.keys())
#Remove the target attribute from the attribute names list
attribute_names.remove(t)
print('Predicting Attributes ➡ ', attribute_names)
###Output
Target Attribute is ➡ PlayGolf
Predicting Attributes ➡ ['Outlook', 'Temperature', 'Humidity', 'Wind']
###Markdown
Entropy of the Training Data Set
###Code
#Function to calculate the entropy of probaility of observations
# -p*log2*p
import math
def entropy(probs):
return sum( [-prob*math.log(prob, 2) for prob in probs])
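# Illustrative sanity check (added for clarity): the 9-Yes / 5-No PlayGolf split worked out above
# has Entropy(S) ≈ 0.940, so entropy() should reproduce that value.
assert abs(entropy([9/14, 5/14]) - 0.940) < 1e-3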
#Function to calculate the entropy of the given Datasets/List with respect to target attributes
def entropy_of_list(ls,value):
from collections import Counter
    # Total instances associated with the respective attribute
total_instances = len(ls) # = 14
print("---------------------------------------------------------")
print("\nTotal no of instances/records associated with '{0}' is ➡ {1}".format(value,total_instances))
    # Counter calculates the proportion of each class
cnt = Counter(x for x in ls)
print('\nTarget attribute class count(Yes/No)=',dict(cnt))
    # x is the count of YES/NO instances
probs = [x / total_instances for x in cnt.values()]
print("\nClasses➡", max(cnt), min(cnt))
print("\nProbabilities of Class 'p'='{0}' ➡ {1}".format(max(cnt),max(probs)))
print("Probabilities of Class 'n'='{0}' ➡ {1}".format(min(cnt),min(probs)))
# Call Entropy
return entropy(probs)
###Output
_____no_output_____
###Markdown
Information Gain of Attributes
###Code
def information_gain(df, split_attribute, target_attribute,battr):
print("\n\n----- Information Gain Calculation of",split_attribute,"----- ")
# group the data based on attribute values
df_split = df.groupby(split_attribute)
glist=[]
for gname,group in df_split:
print('Grouped Attribute Values \n',group)
print("---------------------------------------------------------")
glist.append(gname)
glist.reverse()
nobs = len(df.index) * 1.0
df_agg1=df_split.agg({target_attribute:lambda x:entropy_of_list(x, glist.pop())})
df_agg2=df_split.agg({target_attribute :lambda x:len(x)/nobs})
df_agg1.columns=['Entropy']
df_agg2.columns=['Proportion']
# Calculate Information Gain:
new_entropy = sum( df_agg1['Entropy'] * df_agg2['Proportion'])
if battr !='S':
old_entropy = entropy_of_list(df[target_attribute],'S-'+df.iloc[0][df.columns.get_loc(battr)])
else:
old_entropy = entropy_of_list(df[target_attribute],battr)
return old_entropy - new_entropy
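# For reference (values from the walkthrough above, left as comments so the cell output is unchanged):
# information_gain(df, 'Outlook', 'PlayGolf', 'S') should come out to roughly 0.247,
# which is why Outlook ends up as the root node of the full dataset.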
###Output
_____no_output_____
###Markdown
ID3 Algorithm
###Code
def id3(df, target_attribute, attribute_names, default_class=None,default_attr='S'):
from collections import Counter
cnt = Counter(x for x in df[target_attribute])# class of YES /NO
## First check: Is this split of the dataset homogeneous?
if len(cnt) == 1:
        return next(iter(cnt)) # Return the single class label present in this homogeneous split
## Second check: Is this split of the dataset empty? if yes, return a default value
elif df.empty or (not attribute_names):
return default_class # Return None for Empty Data Set
    ## Otherwise: This dataset is ready to be divided up!
else:
# Get Default Value for next recursive call of this function:
default_class = max(cnt.keys()) #No of YES and NO Class
# Compute the Information Gain of the attributes:
gainz=[]
for attr in attribute_names:
ig= information_gain(df, attr, target_attribute,default_attr)
gainz.append(ig)
print('\nInformation gain of','“',attr,'”','is ➡', ig)
print("=========================================================")
index_of_max = gainz.index(max(gainz)) # Index of Best Attribute
best_attr = attribute_names[index_of_max] # Choose Best Attribute to split on
print("\nList of Gain for arrtibutes:",attribute_names,"\nare:", gainz,"respectively.")
print("\nAttribute with the maximum gain is ➡", best_attr)
print("\nHence, the Root node will be ➡", best_attr)
print("=========================================================")
# Create an empty tree, to be populated in a moment
tree = {best_attr:{}} # Initiate the tree with best attribute as a node
remaining_attribute_names =[i for i in attribute_names if i != best_attr]
# Split dataset-On each split, recursively call this algorithm.Populate the empty tree with subtrees, which
# are the result of the recursive call
for attr_val, data_subset in df.groupby(best_attr):
subtree = id3(data_subset,target_attribute, remaining_attribute_names,default_class,best_attr)
tree[best_attr][attr_val] = subtree
return tree
###Output
_____no_output_____
###Markdown
Tree formation
###Code
#Function to calculate the entropy of the given Dataset with respect to target attributes
def entropy_dataset(a_list):
from collections import Counter
    # Counter calculates the proportion of each class
cnt = Counter(x for x in a_list)
num_instances = len(a_list)*1.0 # = 14
print("\nNumber of Instances of the Current Sub-Class is {0}".format(num_instances ))
    # x is the count of YES/NO instances
probs = [x / num_instances for x in cnt.values()]
print("\nClasses➡", "'p'=",max(cnt), "'n'=",min(cnt))
print("\nProbabilities of Class 'p'='{0}' ➡ {1}".format(max(cnt),max(probs)))
print("Probabilities of Class 'n'='{0}' ➡ {1}".format(min(cnt),min(probs)))
# Call Entropy
return entropy(probs)
# The initial entropy of the YES/NO attribute for our dataset.
print("Entropy calculation for input dataset:\n")
print(df['PlayGolf'])
total_entropy = entropy_dataset(df['PlayGolf'])
print("\nTotal Entropy(S) of PlayGolf Dataset➡", total_entropy)
print("=========================================================")
####################################################
from pprint import pprint
tree = id3(df,t,attribute_names)
print("\nThe Resultant Decision Tree is: ⤵\n")
pprint(tree)
attribute = next(iter(tree))
print("\nBest Attribute ➡",attribute)
print("Tree Keys ➡",tree[attribute].keys())
def classify(instance, tree,default=None): # Instance of Play Tennis with Predicted
attribute = next(iter(tree)) # Outlook/Humidity/Wind
    if instance[attribute] in tree[attribute].keys(): # Value of the attribute is in the set of tree keys
result = tree[attribute][instance[attribute]]
if isinstance(result, dict): # this is a tree, delve deeper
return classify(instance, result)
else:
return result # this is a label
else:
return default
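# Illustrative single-instance use (comments only; the batch prediction on the test CSV follows below):
# classify({'Outlook': 'Sunny', 'Humidity': 'Normal', 'Temperature': 'Hot', 'Wind': 'Weak'}, tree)
# should return 'Yes', matching the first row of the predictions printed below.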
df_new=pd.read_csv('dataset/playgolf_test.csv')
df_new['Predicted'] = df_new.apply(classify, axis=1, args=(tree,'?'))
print(df_new)
###Output
Outlook Temperature Humidity Wind PlayGolf Predicted
0 Sunny Hot Normal Weak ? Yes
1 Rainy Mild High Strong ? No
###Markdown
--- Building a Decision Tree using `scikit-learn`
###Code
pip install numpy
pip install pandas
# Importing the necessary module!
import numpy as np
import pandas as pd
# Importing data
df = pd.read_csv("dataset/playgolf_data.csv")
df
df.dtypes
df.info()
# Converting categorical variables into dummies/indicator variables
df_getdummy=pd.get_dummies(data=df, columns=['Temperature', 'Humidity', 'Outlook', 'Wind'])
df_getdummy
pip install sklearn
# Separating the training set and test set
from sklearn.model_selection import train_test_split
X = df_getdummy.drop('PlayGolf',axis=1)
y = df_getdummy['PlayGolf']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=101)
# importing Decision Tree Classifier via sklean
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier(criterion='entropy',max_depth=2)
dtree.fit(X_train,y_train)
predictions = dtree.predict(X_test)
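# Optional evaluation sketch (not in the original notebook; with such a tiny hold-out set the score is noisy):
# from sklearn.metrics import accuracy_score
# print(accuracy_score(y_test, predictions))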
pip install matplotlib
# visualising the decision tree diagram
from sklearn.tree import plot_tree
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(16,12))
a = plot_tree(dtree, feature_names=df_getdummy.columns, fontsize=12, filled=True,
class_names=['Not_Play', 'Play'])
###Output
_____no_output_____ |
src/local_notebooks/notebook_Aditya-Pulak.ipynb | ###Markdown
Basic Model Testing
###Code
import os
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
import torch.utils.data as td
import torchvision as tv
import pandas as pd
from PIL import Image
from matplotlib import pyplot as plt
#mod1 = torch.load('/datasets/home/27/827/ausant/ECE285 Project/MoDL_CenterNet/models/ctdet_coco_resdcn18.pth')
#for name in mod1['state_dict']:
# print(name)
###Output
_____no_output_____
###Markdown
Loading the Dataset
###Code
import os
import sys
sys.path.append(sys.path[0]+'/../lib') # Add library folder
#print(sys.path)
from opts import opts
from datasets.dataset_factory import get_dataset
from datasets.dataset.coco import COCO
from datasets.sample.ctdet import CTDetDataset
Dataset = get_dataset('coco', 'ctdet')
###Output
_____no_output_____
###Markdown
Create opt for passing to the constructor. Also pass a string with the training value
###Code
opt = type('', (), {})()
opt.data_dir = sys.path[0]+'/../../data/'
opt.task = 'ctdet'
split = 'train'
dataset = Dataset(opt,split)
all_Ids=dataset.coco.getImgIds()
print(len(all_Ids))
import skimage.io as io
img_dir='/datasets/home/30/230/psarangi/dataset_l/images/train2017/'
N=5
kld=np.zeros(10)
for iter in range(10):
imgIds_perm=np.random.permutation(len(all_Ids))
tmp=imgIds_perm[0:N].astype(int)
tmp2=[all_Ids[t] for t in tmp]
dataset.images=tmp2
dataset.num_samples=len(dataset.images)
sub_inst_cat=np.zeros(90)
for j in range(N):
sub_cat_lab=[]
#print(dataset.images[j],all_Ids[imgIds_perm[j]])
img = dataset.coco.loadImgs(dataset.images[j])[0]
#id_vec.append(img['id'])
f_name=img_dir
f_name+=img['file_name']
print(f_name)
I = io.imread(f_name)
#print(img['coco_url'])
#plt.figure()
#plt.imshow(I)
annIds = dataset.coco.getAnnIds(imgIds=img['id'])
anns = dataset.coco.loadAnns(annIds)
sub_cat_lab=[k['category_id'] for k in anns]
for jj in range(90):
t=np.where(np.asarray(sub_cat_lab)==jj)
sub_inst_cat[jj-1]+=t[0].shape[0]
#print(sub_inst_cat/np.sum(sub_inst_cat),np.sum(sub_inst_cat))
prob_sub=(sub_inst_cat+1)/np.sum(sub_inst_cat+1)
#print(np.log(prob1/(prob_sub+0.001)))
#kld[iter]=np.sum(prob1*np.log(prob1/prob_sub))
plt.plot(sub_inst_cat/(np.sum(sub_inst_cat)))
print(dataset.images)
#plt.show()
#plt.figure()
#print(kld)
#x=np.arange(90)
#print(x.shape,prob1[0,:].shape)
#plt.plot(x,prob1[0,:])
###Output
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000443880.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000496402.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000530187.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000497878.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000394784.jpg
[443880, 496402, 530187, 497878, 394784]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000337390.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000191501.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000522660.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000094271.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000144896.jpg
[337390, 191501, 522660, 94271, 144896]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000578652.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000493210.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000309292.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000204232.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000541429.jpg
[578652, 493210, 309292, 204232, 541429]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000460125.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000181852.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000186720.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000503539.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000164600.jpg
[460125, 181852, 186720, 503539, 164600]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000520515.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000352732.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000356671.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000101421.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000578962.jpg
[520515, 352732, 356671, 101421, 578962]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000136943.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000083477.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000370305.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000199919.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000036914.jpg
[136943, 83477, 370305, 199919, 36914]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000270123.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000357247.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000240889.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000485236.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000301147.jpg
[270123, 357247, 240889, 485236, 301147]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000162285.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000552001.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000516490.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000469139.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000539313.jpg
[162285, 552001, 516490, 469139, 539313]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000467579.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000539573.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000548198.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000479879.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000168409.jpg
[467579, 539573, 548198, 479879, 168409]
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000174507.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000443564.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000512561.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000559462.jpg
/datasets/home/30/230/psarangi/dataset_l/images/train2017/000000133229.jpg
[174507, 443564, 512561, 559462, 133229]
|
Ridge and Lasso.ipynb | ###Markdown
Linear Regression
###Code
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
lin_regressor=LinearRegression()
mse=cross_val_score(lin_regressor,X,y,scoring='neg_mean_squared_error',cv=5)
mean_mse=np.mean(mse)
print(mean_mse)
###Output
-26.70225907220975
###Markdown
Ridge Regression
###Code
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
ridge=Ridge()
parameters={'alpha':[1e-15,1e-10,1e-8,1e-3,1e-2,1,5,10,20,30,35,40,45,50,55,100]}
ridge_regressor=GridSearchCV(ridge,parameters,scoring='neg_mean_squared_error',cv=5)
ridge_regressor.fit(X,y)
print(ridge_regressor.best_params_)
print(ridge_regressor.best_score_)
###Output
{'alpha': 100}
-22.96774759693228
###Markdown
Lasso Regression
###Code
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
lasso=Lasso()
parameters={'alpha':[1e-15,1e-10,1e-8,1e-3,1e-2,1,5,10,20,30,35,40,45,50,55,100]}
lasso_regressor=GridSearchCV(lasso,parameters,scoring='neg_mean_squared_error',cv=5)
lasso_regressor.fit(X,y)
print(lasso_regressor.best_params_)
print(lasso_regressor.best_score_)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
prediction_lasso=lasso_regressor.predict(X_test)
prediction_ridge=ridge_regressor.predict(X_test)
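# Illustrative follow-up (assumption, not in the original notebook): the hold-out MSE of the two
# models could be compared with, e.g.:
# from sklearn.metrics import mean_squared_error
# print(mean_squared_error(y_test, prediction_ridge), mean_squared_error(y_test, prediction_lasso))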
import seaborn as sns
sns.distplot(y_test-prediction_lasso)
import seaborn as sns
sns.distplot(y_test-prediction_ridge)
###Output
_____no_output_____ |
u_net_for_image-based_food_segmentation.ipynb | ###Markdown
**Food recognition challenge**Recognizing food from images is an extremely useful tool for a variety of use cases. In particular, it would allow people to track their food intake by simply taking a picture of what they consume. Food tracking can be of personal interest, and can often be of medical relevance as well. Medical studies have for some time been interested in the food intake of study participants but had to rely on food frequency questionnaires that are known to be imprecise.Image-based food recognition has in the past few years made substantial progress thanks to advances in deep learning. But food recognition remains a difficult problem for a variety of reasons. Import and parameters
###Code
# Doc: https://segmentation-models.readthedocs.io/
!pip install -U segmentation-models
%env SM_FRAMEWORK = tf.keras
import os
import random
import numpy as np
import tensorflow as tf
import keras
import segmentation_models
from tqdm import tqdm
from pycocotools.coco import COCO
import pandas as pd
import cv2
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
''' Upload files from the drive '''
from google.colab import drive
drive.mount('/content/drive')
'''Image parameters'''
IMAGE_WIDTH = 128
IMAGE_HEIGHT = 128
IMAGE_CHANNELS = 3
CLASSES = 18 # 16 categories + background + other categories
###Output
_____no_output_____
###Markdown
Dataset reduction
###Code
'''Loading annotations file for the training set'''
annFile = '/content/drive/MyDrive/deep_learning_project/train/annotations.json'
coco_train = COCO(annFile)
'''Display COCO categories'''
cats = coco_train.loadCats(coco_train.getCatIds())
nms = [cat['name'] for cat in cats]
print('COCO categories: \n{}\n'.format(' '.join(nms)))
'''Getting all categories with respect to their total images and showing the 16 most frequent categories'''
no_images_per_category = {}
for n, i in enumerate(coco_train.getCatIds()):
imgIds = coco_train.getImgIds(catIds=i)
label = nms[n]
no_images_per_category[label] = len(imgIds)
img_info = pd.DataFrame(coco_train.loadImgs(coco_train.getImgIds()))
'''Most frequent categories'''
categories = pd.DataFrame(no_images_per_category.items()).sort_values(1).iloc[::-1][0][:30].tolist()[0:16]
print(categories)
'''Dict with most frequent categories, the ones we chose'''
category_channels = dict(zip(categories, range(1, len(categories) + 1)))
print(category_channels)
'''Extraction of COCO annotations for the selected categories'''
image_directory = '/content/drive/MyDrive/deep_learning_project/train_reduced/images/'
folder_cats = os.listdir(image_directory)
coco_imgs_train = []
for i, folder in tqdm(enumerate(folder_cats), total = len(folder_cats), position = 0, leave = True):
if not folder.startswith('.'):
images_train = os.listdir(image_directory + folder)
for image_name in images_train:
imgId = int(coco_train.getImgIds(imgIds = [image_name.split('.')[0]])[0].lstrip("0"))
coco_imgs_train.append(coco_train.loadImgs([imgId])[0])
TRAINING_SET_SIZE = len(coco_imgs_train)
###Output
100%|██████████| 16/16 [00:33<00:00, 2.09s/it]
###Markdown
Generators Training set generator
###Code
'''Creating masks splitted out in channels: each channel corresponds to one category'''
def read_resize_image(coco_img, path):
image = cv2.imread(path + coco_img['file_name'], cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (IMAGE_WIDTH, IMAGE_HEIGHT))
image = np.asarray(image)
return image
def generate_mask(coco_img, coco_annotations):
annIds = coco_annotations.getAnnIds(imgIds=coco_img['id'], iscrowd = None)
anns = coco_annotations.loadAnns(annIds)
mask = np.zeros((coco_img['height'], coco_img['width'], CLASSES), dtype=np.float32)
# Setting all pixels of the background channel to 1
mask[:,:,0] = np.ones((coco_img['height'], coco_img['width']), dtype=np.float32)
for ann in anns:
catName = [cat['name'] for cat in cats if cat['id'] == ann['category_id']][0]
if catName in category_channels:
mask[:,:,category_channels[catName]] = coco_annotations.annToMask(ann)
mask[:,:,0] -= mask[:,:,category_channels[catName]]
else:
mask[:,:,-1] += coco_annotations.annToMask(ann)
mask[:,:,0] -= mask[:,:,-1]
mask[mask < 0] = 0
mask[mask > 1] = 1
mask = (cv2.resize(mask, (IMAGE_WIDTH, IMAGE_HEIGHT)))
return mask
def dataset_generator(coco_imgs, path, coco_annotations, cats, category_channels, dataset_size, batch_size):
batch_features = np.zeros((batch_size, IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS))
batch_labels = np.zeros((batch_size, IMAGE_WIDTH, IMAGE_HEIGHT, CLASSES), dtype = np.float64)
c = 0
random.shuffle(coco_imgs)
while True:
for i in range(c, c + batch_size):
coco_img = coco_imgs[i]
batch_features[i - c] = read_resize_image(coco_img, path)
batch_labels[i - c] = generate_mask(coco_img, coco_annotations)
c = c + batch_size
if(c + batch_size >= dataset_size):
c = 0
random.shuffle(coco_imgs)
yield (batch_features, batch_labels)
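# Usage note: this is an infinite generator that keeps yielding (images, masks) batches and reshuffles
# the image list once a full pass has been consumed, so it can be plugged into Keras' fit() with
# steps_per_epoch = dataset_size // batch_size.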
###Output
_____no_output_____
###Markdown
Validation set generator
###Code
'''Loading annotations file for the validation set'''
annFile = '/content/drive/MyDrive/deep_learning_project/val/annotations.json'
coco_val = COCO(annFile)
'''Extraction of COCO annotations for the selected categories in the validation set'''
image_directory = '/content/drive/MyDrive/deep_learning_project/val/images/'
images_val = os.listdir(image_directory)
coco_imgs_val = []
for i, image in tqdm(enumerate(images_val), total = len(images_val), position = 0, leave = True):
imgId = int(coco_val.getImgIds(imgIds = [image.split('.')[0]])[0].lstrip("0"))
coco_img_val = coco_val.loadImgs([imgId])[0]
annIds = coco_val.getAnnIds(imgIds=coco_img_val['id'], iscrowd = None)
anns = coco_val.loadAnns(annIds)
for ann in anns:
catName = [cat['name'] for cat in cats if cat['id'] == ann['category_id']][0]
if catName in category_channels.keys():
coco_imgs_val.append(coco_val.loadImgs([imgId])[0])
break
VALIDATION_SET_SIZE = len(coco_imgs_val)
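'''Sketch (added for completeness): build the validation generator used as `gen_val` in the generator test below.
Its construction cell is not shown in this excerpt, and the batch size of 8 is an assumption.'''
gen_val = dataset_generator(coco_imgs_val, image_directory, coco_val, cats, category_channels, VALIDATION_SET_SIZE, 8)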
###Output
100%|██████████| 1269/1269 [00:00<00:00, 17663.42it/s]
###Markdown
Generator testDo not run before generator call. It's just a test!
###Code
''' Generator test: call an entire batch of the generator '''
gen_t = next(gen_val)
image_t = gen_t[0] # Image
mask_t = gen_t[1] # Mask
'''Shape check'''
print(image_t.shape, mask_t.shape)
''' Image check '''
plt.imshow(image_t[3].astype(np.uint8))
plt.show()
''' Mask check '''
for i in range(CLASSES):
plt.imshow(mask_t[3,:,:,i])
plt.show()
''' Type check '''
print(type(mask_t[0,0,0,0]))
print(np.max(mask_t[:,:,:,:]))
print(type(image_t[0,0,0,0]))
print(image_t[0,:,:,0])
###Output
_____no_output_____
###Markdown
Neural network structure
###Code
''' U-net parameters'''
FILTER = 16
###Output
_____no_output_____
###Markdown
Input layer
###Code
''' Input layer '''
inputs = tf.keras.layers.Input((IMAGE_WIDTH, IMAGE_HEIGHT, IMAGE_CHANNELS))
s = tf.keras.layers.Lambda(lambda x: x / 255)(inputs) # Normalization
###Output
_____no_output_____
###Markdown
Contractive path
###Code
''' Contractive path '''
### Layer 1
c1 = tf.keras.layers.Conv2D(FILTER, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(s)
c1 = tf.keras.layers.BatchNormalization()(c1) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c1 = tf.keras.layers.Activation('selu')(c1)
c1 = tf.keras.layers.Conv2D(FILTER, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c1)
p1 = tf.keras.layers.MaxPooling2D((2, 2))(c1)
### Layer 2
c2 = tf.keras.layers.Conv2D(FILTER*2, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(p1)
c2 = tf.keras.layers.BatchNormalization()(c2) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c2 = tf.keras.layers.Activation('selu')(c2)
c2 = tf.keras.layers.Conv2D(FILTER*2, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c2)
p2 = tf.keras.layers.MaxPooling2D((2, 2))(c2)
### Layer 3
c3 = tf.keras.layers.Conv2D(FILTER*4, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(p2)
c3 = tf.keras.layers.BatchNormalization()(c3) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c3 = tf.keras.layers.Activation('selu')(c3)
c3 = tf.keras.layers.Conv2D(FILTER*4, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c3)
p3 = tf.keras.layers.MaxPooling2D((2, 2))(c3)
### Layer 4
c4 = tf.keras.layers.Conv2D(FILTER*8, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(p3)
c4 = tf.keras.layers.BatchNormalization()(c4) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c4 = tf.keras.layers.Activation('selu')(c4)
c4 = tf.keras.layers.Conv2D(FILTER*8, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c4)
p4 = tf.keras.layers.MaxPooling2D((2, 2))(c4)
### Layer 5
c5 = tf.keras.layers.Conv2D(FILTER*16, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(p4)
c5 = tf.keras.layers.BatchNormalization()(c5) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c5 = tf.keras.layers.Activation('selu')(c5)
c5 = tf.keras.layers.Conv2D(FILTER*16, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c5)
###Output
_____no_output_____
###Markdown
Expansive path
###Code
''' Expansive path '''
# Layer 6
u6 = tf.keras.layers.Conv2DTranspose(FILTER*8, (2, 2),
strides = (2, 2),
padding = 'same')(c5)
u6 = tf.keras.layers.concatenate([u6, c4])
c6 = tf.keras.layers.Conv2D(FILTER*8, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(u6)
c6 = tf.keras.layers.BatchNormalization()(c6) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c6 = tf.keras.layers.Activation('selu')(c6)
c6 = tf.keras.layers.Conv2D(FILTER*8, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c6)
# Layer 7
u7 = tf.keras.layers.Conv2DTranspose(FILTER*4, (2, 2),
strides = (2, 2),
padding = 'same')(c6)
u7 = tf.keras.layers.concatenate([u7, c3])
c7 = tf.keras.layers.Conv2D(FILTER*4, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(u7)
c7 = tf.keras.layers.BatchNormalization()(c7) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c7 = tf.keras.layers.Activation('selu')(c7)
c7 = tf.keras.layers.Conv2D(FILTER*4, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c7)
# Layer 8
u8 = tf.keras.layers.Conv2DTranspose(FILTER*2, (2, 2),
strides = (2, 2),
padding = 'same')(c7)
u8 = tf.keras.layers.concatenate([u8, c2])
c8 = tf.keras.layers.Conv2D(FILTER*2, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(u8)
c8 = tf.keras.layers.BatchNormalization()(c8) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c8 = tf.keras.layers.Activation('selu')(c8)
c8 = tf.keras.layers.Conv2D(FILTER*2, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c8)
# Layer 9
u9 = tf.keras.layers.Conv2DTranspose(FILTER, (2, 2),
strides = (2, 2),
padding = 'same')(c8)
u9 = tf.keras.layers.concatenate([u9, c1], axis = 3)
c9 = tf.keras.layers.Conv2D(FILTER, (3, 3),
kernel_initializer = 'he_normal',
padding = 'same')(u9)
c9 = tf.keras.layers.BatchNormalization()(c9) # Batch normalization instead of dropout (note: I had to split the conv2D and ReLU)
c9 = tf.keras.layers.Activation('selu')(c9)
c9 = tf.keras.layers.Conv2D(FILTER, (3, 3),
activation = 'selu',
kernel_initializer = 'he_normal',
padding = 'same')(c9)
###Output
_____no_output_____
###Markdown
Output layer
###Code
''' Output layer '''
outputs = tf.keras.layers.Conv2D(CLASSES, (1, 1), activation = 'softmax')(c9)
''' Model building '''
model = tf.keras.Model(inputs = [inputs], outputs = [outputs])
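# A typical next step (illustrative only, not taken from this notebook) would be to compile the model,
# e.g. model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']),
# possibly swapping in the Dice/Focal losses provided by the segmentation_models package imported above.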
model.summary()
###Output
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
lambda (Lambda) (None, 128, 128, 3) 0 input_1[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 128, 128, 16) 448 lambda[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 128, 128, 16) 64 conv2d[0][0]
__________________________________________________________________________________________________
activation (Activation) (None, 128, 128, 16) 0 batch_normalization[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 16) 2320 activation[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 64, 64, 16) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 64, 32) 4640 max_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 64, 64, 32) 128 conv2d_2[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 64, 64, 32) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 32) 9248 activation_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 32, 32, 32) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 32, 32, 64) 18496 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 32, 32, 64) 256 conv2d_4[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 32, 32, 64) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 32, 32, 64) 36928 activation_2[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 16, 16, 64) 0 conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 16, 16, 128) 73856 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 16, 16, 128) 512 conv2d_6[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 16, 16, 128) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 16, 16, 128) 147584 activation_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 128) 0 conv2d_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 8, 8, 256) 295168 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 8, 8, 256) 1024 conv2d_8[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 8, 8, 256) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 8, 8, 256) 590080 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_transpose (Conv2DTranspo (None, 16, 16, 128) 131200 conv2d_9[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 16, 16, 256) 0 conv2d_transpose[0][0]
conv2d_7[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 16, 16, 128) 295040 concatenate[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 16, 16, 128) 512 conv2d_10[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 16, 16, 128) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 16, 16, 128) 147584 activation_5[0][0]
__________________________________________________________________________________________________
conv2d_transpose_1 (Conv2DTrans (None, 32, 32, 64) 32832 conv2d_11[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 32, 32, 128) 0 conv2d_transpose_1[0][0]
conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 32, 32, 64) 73792 concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 32, 32, 64) 256 conv2d_12[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 32, 32, 64) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 32, 32, 64) 36928 activation_6[0][0]
__________________________________________________________________________________________________
conv2d_transpose_2 (Conv2DTrans (None, 64, 64, 32) 8224 conv2d_13[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 64, 64, 64) 0 conv2d_transpose_2[0][0]
conv2d_3[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 64, 64, 32) 18464 concatenate_2[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 64, 64, 32) 128 conv2d_14[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 64, 64, 32) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 64, 64, 32) 9248 activation_7[0][0]
__________________________________________________________________________________________________
conv2d_transpose_3 (Conv2DTrans (None, 128, 128, 16) 2064 conv2d_15[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 128, 128, 32) 0 conv2d_transpose_3[0][0]
conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 128, 128, 16) 4624 concatenate_3[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 128, 128, 16) 64 conv2d_16[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 128, 128, 16) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 128, 128, 16) 2320 activation_8[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 128, 128, 18) 306 conv2d_17[0][0]
==================================================================================================
Total params: 1,944,338
Trainable params: 1,942,866
Non-trainable params: 1,472
__________________________________________________________________________________________________
###Markdown
Loss functions and metrics Smoothed Jaccard distance loss
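For reference, this is the smoothed formulation implemented in `jaccard_distance` below (the smoothing constant $s$, 100 by default, keeps the ratio well defined and the gradient non-degenerate when a class is absent from both masks): $$\hat{J} = \frac{\sum_i y_i\,\hat{y}_i + s}{\sum_i\left(y_i + \hat{y}_i\right) - \sum_i y_i\,\hat{y}_i + s},\qquad \mathcal{L} = \left(1 - \hat{J}\right)\, s$$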
###Code
def jaccard_distance(y_true, y_pred, smooth = 100):
""" Calculates mean of Jaccard distance as a loss function """
intersection = tf.reduce_sum(y_true * y_pred, axis = -1)
union = tf.reduce_sum(y_true + y_pred, axis = -1)
jac = (intersection + smooth) / (union - intersection + smooth)
jd = (1 - jac) * smooth
return tf.reduce_mean(jd)
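# Quick sanity check (illustrative only, shapes are arbitrary): identical one-hot masks
# should give a loss close to 0, while fully disjoint masks give a strictly larger value.
_y_check = tf.one_hot([[0, 1], [2, 3]], depth = 4)
print('loss(y, y) =', jaccard_distance(_y_check, _y_check).numpy()) # expected ~0.0
print('loss(y, 1-y) =', jaccard_distance(_y_check, 1.0 - _y_check).numpy()) # expected > 0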
''' Metrics '''
IoU_metric = segmentation_models.metrics.IOUScore()
F_metric = segmentation_models.metrics.FScore()
''' Losses '''
crossentropy_loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing = 0.5, name = 'categorical_crossentropy')
jaccard_loss = segmentation_models.losses.JaccardLoss()
dice_loss = segmentation_models.losses.DiceLoss()
focal_loss = segmentation_models.losses.CategoricalFocalLoss() # No line artifacts in the predictions, but the IoU remains low
# Dice + Focal loss
combined_loss = dice_loss + (1 * focal_loss)
###Output
_____no_output_____
###Markdown
Model Checkpoints
###Code
''' Model checkpoints: the model will be saved at each epoch, if there is an improvement '''
checkpointer = tf.keras.callbacks.ModelCheckpoint('/content/drive/MyDrive/deep_learning_project/trained_models/modella-128-16-smoothing_jaccard.h5',
                                                   monitor = 'loss', # Select which quantity to monitor; a metric or a validation quantity can also be used
verbose = 1,
save_best_only = False,
mode = 'auto',
save_freq = 'epoch' # Flag to save at each epoch
)
###Output
_____no_output_____
###Markdown
Neural network training
###Code
''' Generator initialization '''
path_train = '/content/drive/MyDrive/deep_learning_project/train/images/'
path_val = '/content/drive/MyDrive/deep_learning_project/val/images/'
BATCH_SIZE = 64
gen_train = dataset_generator(coco_imgs = coco_imgs_train,
path = path_train,
coco_annotations = coco_train,
cats = cats,
category_channels = category_channels,
dataset_size = TRAINING_SET_SIZE,
batch_size = BATCH_SIZE)
gen_val = dataset_generator(coco_imgs = coco_imgs_val,
path = path_val,
coco_annotations = coco_val,
cats = cats,
category_channels = category_channels,
dataset_size = VALIDATION_SET_SIZE,
batch_size = BATCH_SIZE)
''' Load previous model: after the first training, reload the saved weights and use them as the initial state for the next training run '''
model = tf.keras.models.load_model('/content/drive/MyDrive/deep_learning_project/trained_models/modella-128-16-smoothing_jaccard.h5',
                                   custom_objects = {'jaccard_distance': jaccard_distance}) # Remember to pass the custom objects (losses, metrics) that match the saved model
model.summary()
''' Learning rate scheduler '''
initial_learning_rate = 1.25e-4
lr_schedule1 = tf.keras.optimizers.schedules.ExponentialDecay(initial_learning_rate,
decay_steps = 100000,
decay_rate = 0.96,
staircase = True)
lr_schedule2 = tf.keras.optimizers.schedules.CosineDecay(initial_learning_rate,
decay_steps = 1000,
alpha = 0.15)
''' Before starting a new training it is possible to change the optimizer or the loss function '''
opt = tf.keras.optimizers.Adam(learning_rate = lr_schedule1,
beta_1 = 0.9,
beta_2 = 0.999,
epsilon = 1e-07,
amsgrad = True,
clipnorm = 1.0)
opt2 = tf.keras.optimizers.SGD(learning_rate = lr_schedule2,
momentum = 0.99,
nesterov = True,
clipnorm = 1.0)
model.compile(optimizer = opt,
              loss = [jaccard_distance],
metrics = 'accuracy')
''' Model training '''
EPOCHS = 20
records = model.fit(gen_train,
validation_data = gen_val,
steps_per_epoch = np.ceil(TRAINING_SET_SIZE / BATCH_SIZE),
validation_steps = np.ceil(VALIDATION_SET_SIZE / BATCH_SIZE),
epochs = EPOCHS,
verbose = 1,
callbacks = [checkpointer]
)
###Output
_____no_output_____
###Markdown
EvaluationFor a known ground-truth mask A and a proposed mask B, we first compute the IoU (Intersection over Union). IoU measures the overall overlap between the true region and the proposed region. We then count a **true detection** when there is at least half an overlap, namely when IoU > 0.5. Then we can define the following parameters:* Precision (IoU > 0.5);* Recall (IoU > 0.5).The final scoring parameters: * AP{IoU > 0.5}; * AR{IoU > 0.5};are computed by averaging over all the precision and recall values for all known annotations in the ground truth.Guide 1: https://www.jeremyjordan.me/evaluating-image-segmentation-models/Guide 2: https://towardsdatascience.com/metrics-to-evaluate-your-semantic-segmentation-model-6bcb99639aa2
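As a rough sketch of that protocol (illustration only — the evaluation cells below use `tf.keras.metrics`; `mask` and `pred` are assumed to be binary NumPy arrays of identical shape):
```
import numpy as np

def iou(mask, pred):
    # Intersection over Union of two binary masks
    inter = np.logical_and(mask, pred).sum()
    union = np.logical_or(mask, pred).sum()
    return inter / union if union > 0 else 0.0

def is_true_detection(mask, pred, threshold=0.5):
    # a proposal counts as a true detection when the masks overlap by at least half
    return iou(mask, pred) > threshold
```
AP{IoU > 0.5} and AR{IoU > 0.5} then follow by averaging the precision and recall values over all ground-truth annotations.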
###Code
'''Generator initialization'''
path_val = "/content/drive/MyDrive/deep_learning_project/val/images/"
gen_val = dataset_generator(coco_imgs = coco_imgs_val,
path = path_val,
coco_annotations = coco_val,
cats = cats,
category_channels = category_channels,
dataset_size = VALIDATION_SET_SIZE,
batch_size = VALIDATION_SET_SIZE)
model = tf.keras.models.load_model('/content/drive/MyDrive/deep_learning_project/trained_models/modella-128-16-focal_loss.h5',
custom_objects = {'focal_loss': focal_loss, 'iou_score': IoU_metric})
validation_set = next(gen_val)
images_val_set = validation_set[0] # Images
masks_val_set = validation_set[1] # Masks
plt.imshow(images_val_set[4].astype(np.uint8))
plt.show()
predictions = model.predict(images_val_set, verbose = 1)
def show_masks_threshold(prediction):
labels = list(category_channels.keys())
labels.insert(0, "background")
labels.append("other")
prediction_threshold = prediction.copy()
prediction_threshold[prediction_threshold >= 0.4] = 1.
prediction_threshold[prediction_threshold < 0.4] = 0.
for i in range(CLASSES):
if np.max(prediction_threshold[:,:,i]) != 0:
plt.imshow(prediction_threshold[:,:,i])
plt.title(labels[i])
plt.show()
'''Showing the predicted masks'''
show_masks_threshold(predictions[4,:,:,:])
def show_mask_overlapping(prediction):
labels = list(category_channels.keys())
labels.insert(0, "background")
labels.append("other")
prediction_threshold = prediction.copy()
prediction_threshold[prediction_threshold >= 0.4] = 1.
prediction_threshold[prediction_threshold < 0.4] = 0.
mask_plot = np.zeros((IMAGE_WIDTH, IMAGE_HEIGHT), dtype = np.float32)
'''Preparing the mask with overlapping'''
for i in range(CLASSES):
prediction_threshold[:,:,i] = prediction_threshold[:,:,i] * i
mask_plot += prediction_threshold[:,:,i]
mask_plot[mask_plot >= i] = i
values = np.array(np.unique(mask_plot), dtype=np.uint8)
plt.figure(figsize=(8,4))
im = plt.imshow(mask_plot, interpolation='none')
colors = [ im.cmap(im.norm(value)) for value in range(len(labels))]
patches = [ mpatches.Patch(color=colors[i], label=labels[i] ) for i in values ]
plt.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0. )
plt.axis('off')
plt.show()
show_mask_overlapping(predictions[4,:,:,:])
'''Computing the IoU, Recall and Precision metrics'''
def compute_results(mean_iou, recall, precision, mask, prediction):
mean_iou.update_state(mask, prediction)
recall.update_state(mask, prediction)
precision.update_state(mask, prediction)
return recall.result().numpy(), precision.result().numpy(), mean_iou.result().numpy()
mean_iou = tf.keras.metrics.MeanIoU(num_classes = 17)
recall = tf.keras.metrics.Recall()
precision = tf.keras.metrics.Precision()
mean_iou_results = []
recall_results = []
precision_results = []
threshold = 0.5
for i in range(VALIDATION_SET_SIZE):
mask = masks_val_set[i,:,:,:-1]
prediction = predictions[i,:,:,:-1]
recall_res, precision_res, mean_iou_res = compute_results(mean_iou, recall, precision, mask, prediction)
mean_iou_results.append(mean_iou_res)
mean_iou.reset_states()
if mean_iou_res >= threshold:
precision_results.append(precision_res)
precision.reset_states()
recall_results.append(recall_res)
recall.reset_states()
print('Mean precision: {}.'.format(np.average(precision_results)))
print('Mean recall: {}.'.format(np.average(recall_results)))
print('Calculated on {} samples, over {} total samples, that passed the IoU test.'.format(len(np.asarray(mean_iou_results)[np.asarray(mean_iou_results) >= threshold]), VALIDATION_SET_SIZE))
print(mean_iou_results)
print(np.max(mean_iou_results), np.min(mean_iou_results))
print(np.mean(mean_iou_results))
###Output
0.67396533
|
.ipynb_checkpoints/Prog6-NaiveBayesianDoc-checkpoint.ipynb | ###Markdown
Program 6 - Naive Bayesian (Doc)We use the Multinomial Naive Bayes classifier from the `scikit-learn` library.
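As a quick reminder of what the classifier computes (standard formulation, not specific to this program): given word counts $x_w$ in a document $d$ and Laplace smoothing $\alpha$, the multinomial model scores each class $c$ as $$P(c \mid d) \propto P(c)\prod_{w} P(w \mid c)^{x_w},\qquad P(w \mid c)=\frac{N_{wc}+\alpha}{N_c+\alpha\,|V|}$$ and predicts the highest-scoring class; `MultinomialNB` uses $\alpha = 1$ by default.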
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
###Output
_____no_output_____
###Markdown
Load the data set from local file with labels
###Code
msg = pd.read_csv('prog6_dataset.csv', names = ['message', 'label'])
print("Total instances in the data set:\n", msg.shape[0])
###Output
Total instances in the data set:
18
###Markdown
`x` contains the messages (feature); `y` contains the label numbers (target)
###Code
# here, msg['labelnum'] is important as that is the output that is used in mapping the NB classifier
msg['labelnum'] = msg.label.map({
'pos': 1,
'neg': 0
})
x = msg.message
y = msg.labelnum
x, y
###Output
_____no_output_____
###Markdown
First 5 msgs with labels are printed
###Code
x5, y5 = x[0:5], msg.label[0:5]
for x1, y1 in zip(x5, y5):
print(x1, ',', y1)
###Output
I love this sandwich , pos
This is an amazing place , pos
I feel very good about these beers , pos
This is my best work , pos
What an awesome view , pos
###Markdown
Split the data set into training and testing data
###Code
xtrain, xtest, ytrain, ytest = train_test_split(x, y)
print('Total training instances: ', xtrain.shape[0])
print('Total testing instances: ', xtest.shape[0])
###Output
Total training instances: 13
Total testing instances: 5
###Markdown
`CountVectorizer` is used for feature extractionThe output of the count vectorizer is a sparse matrix
###Code
count_vec = CountVectorizer()
xtrain_dtm = count_vec.fit_transform(xtrain) # sparse matrix
xtest_dtm = count_vec.transform(xtest)
print("Total features extracted using CountVectorizer: ", xtrain_dtm.shape[1])
print("Features for first 5 training instances are listed below\n")
df = pd.DataFrame(xtrain_dtm.toarray(), columns = count_vec.get_feature_names())
print(df[:5])
# print(xtrain_dtm) # same as above, but sparse matrix representation
###Output
Total features extracted using CountVectorizer: 47
Features for first 5 training instances are listed below
about am amazing an awesome bad beers best boss can ... tired \
0 0 0 0 0 0 0 0 0 0 0 ... 0
1 0 0 0 1 1 0 0 0 0 0 ... 0
2 0 0 0 0 0 0 0 0 0 0 ... 0
3 0 0 0 0 0 0 0 0 1 0 ... 0
4 0 0 0 0 0 0 0 0 0 0 ... 0
to tomorrow very view we what will with work
0 0 0 0 0 0 0 0 0 0
1 0 0 0 1 0 1 0 0 0
2 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0
4 0 1 0 0 1 0 1 0 0
[5 rows x 47 columns]
###Markdown
Training Naive Bayes Classifier
###Code
clf = MultinomialNB().fit(xtrain_dtm, ytrain)
predicted = clf.predict(xtest_dtm)
predicted
###Output
_____no_output_____
###Markdown
Classification results of testing samples are:
###Code
for doc, p in zip(xtest, predicted):
pred = 'pos' if p == 1 else 'neg'
print('%s -> %s' % (doc, pred))
###Output
I love this sandwich -> pos
I went to my enemy's house today -> neg
I am sick and tired of this place -> neg
What a great holiday -> pos
I do not like this restaurant -> neg
###Markdown
Metrics
###Code
print("Accuracy: ", metrics.accuracy_score(ytest, predicted))
print("Recall: ", metrics.recall_score(ytest, predicted))
print("Precision: ", metrics.precision_score(ytest, predicted))
###Output
Accuracy: 1.0
Recall: 1.0
Precision: 1.0
###Markdown
Confusion Matrix
###Code
print(metrics.confusion_matrix(ytest, predicted))
###Output
[[3 0]
[0 2]]
|
Lab 2 - Coding Vectors using NumPy and Matplotlib/.ipynb_checkpoints/LinAlg Lab 2-checkpoint.ipynb | ###Markdown
Lab 2 - Plotting Vectors using NumPy and MatPlotLib In this laboratory we will be discussing the basics of numerical and scientific programming by working with vectors using NumPy and MatPlotLib. ObjectivesAt the end of this activity you will be able to:1. Be familiar with the Python libraries for numerical and scientific programming.2. Visualize vectors through Python programming.3. Perform simple vector operations through code. Discussion NumPy NumPy, or Numerical Python, is mainly used for matrix and vector operations. It is capable of declaring, computing and representing matrices. Most Python scientific programming libraries use NumPy as their base. Representing Vectors Now that you know how to represent vectors in their component and matrix forms, we can hard-code them in Python. Let's say that you have the vectors: $$ A = 4\hat{x} + 3\hat{y} \\B = 2\hat{x} - 5\hat{y}$$ In which their matrix equivalent is: $$ A = \begin{bmatrix} 4 \\ 3\end{bmatrix} , B = \begin{bmatrix} 2 \\ -5\end{bmatrix} $$ We can then start doing numpy code with this by:
###Code
## Importing necessary libraries
import numpy as np ## 'np' here is short-hand name of the library (numpy) or a nickname.
A = np.array([4, 3])
B = np.array([2, -5])
print(A)
print(B)
###Output
[4 3]
[ 2 -5]
###Markdown
Describing vectors in NumPy Describing vectors is very important if we want to perform basic to advanced operations with them. The fundamental ways in describing vectors are knowing their shape, size and dimensions.
###Code
### Checking shapes
### The shape tells us how many elements there are along each row and column
A.shape
### Checking size
### The array/vector size tells us the total number of elements in the vector
A.size
### Checking dimensions
### The dimension or rank of a vector tells us how many dimensions the vector has.
A.ndim
###Output
_____no_output_____
###Markdown
Great! Now let's explore performing operations with these vectors. Addition The addition rule is simple: we just need to add the elements of the matrices according to their index. So in this case, if we add vector $A$ and vector $B$ we will have the resulting vector: $$R = 6\hat{x}-2\hat{y} \\ \\or \\ \\ R = \begin{bmatrix} 6 \\ -2\end{bmatrix} $$ So let's try to do that in NumPy in a number of ways:
###Code
R = np.add(A, B) ## this is the functional method using the numpy library
R
R = A + B ## this is the explicit method; NumPy arrays overload the '+' operator,
          ## so Python knows these variables need element-wise array addition.
R
###Output
_____no_output_____
###Markdown
Try for yourself! Try to implement subtraction, multiplication, and division with vectors $A$ and $B$!
###Code
### Try out your code here! Don't forget to take a screenshot or a selfie!
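## A possible solution (one of several ways to write it), shown here for reference:
sub = np.subtract(A, B) # or simply A - B
mul = np.multiply(A, B) # or simply A * B
div = np.divide(A, B)   # or simply A / B
print(sub, mul, div)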
###Output
_____no_output_____
###Markdown
Scaling Scaling or scalar multiplication takes a scalar value and performs multiplication with a vector. Let's take the example below: $$S = 5 \cdot A$$ We can do this in numpy through:
###Code
S = 5 * A
S
###Output
_____no_output_____
###Markdown
MatPlotLib MatPlotLib, or the MATLAB Plotting Library, is Python's take on MATLAB's plotting features. MatPlotLib can be used widely, from graphing values to visualizing several dimensions of data. Visualizing Data It's not enough just solving these vectors, so we might need to visualize them. So we'll use MatPlotLib for that. We'll need to import it first.
###Code
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
n = A.shape[0]
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1)
plt.show()
###Output
_____no_output_____ |
prediciones/Pred_wine_white.ipynb | ###Markdown
Decision Tree
###Code
#There is no standard among the parameters, but it does have hyperparameters, e.g. tree depth: max_depth
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier
# pipeline creation
pipeline_arbol= Pipeline([("arbol",DecisionTreeClassifier())])
hyper_arbol={"arbol__max_depth":[2,3,4,5,6,7,8]}
gs_arbol= GridSearchCV (pipeline_arbol,param_grid = hyper_arbol,cv=5,scoring ="roc_auc",verbose=3)
gs_arbol
gs_arbol.fit(df_white_train[nombres],df_white_train["quality"])
print("Puntuacion:"+ str (gs_arbol.best_score_))
print("Mejor Parametro:"+ str (gs_arbol.best_params_))
y_pred = gs_arbol.predict(df_white_test[nombres])
y_pred
from sklearn.metrics import accuracy_score, recall_score,precision_score
accuracy_score(df_white_test["quality"], y_pred)
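# Optional follow-up (a quick sketch): per-class precision/recall and the confusion
# matrix for the same held-out predictions, which give more detail than accuracy alone.
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(df_white_test["quality"], y_pred))
print(confusion_matrix(df_white_test["quality"], y_pred))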
###Output
_____no_output_____ |
PostgreSQL/13-1 Guest House - Easy.ipynb | ###Markdown
Guest House - EasyData for this assessment is available:- [guesthouse data in MySQL format](http://sqlzoo.net/guesthouse.sql)- [guesthouse data in Microsoft SQL Server format](http://sqlzoo.net/guesthouse-ms.sql)Background- Guests stay at a small hotel.- Each booking is recorded in the table booking, the date of the first night of the booking is stored here (we do not record the date the booking was made)- At the time of booking the room to be used is decided- There are several room types (single, double..)- The amount charged depends on the room type and the number of people staying and the number of nights- Guests may be charged extras (for breakfast or using the minibar)- **Database Description** | [Easy Problems](https://sqlzoo.net/wiki/Guest_House_Assessment_Easy) | [Medium Problems](https://sqlzoo.net/wiki/Guest_House_Assessment_Medium) | [Hard Problems](https://sqlzoo.net/wiki/Guest_House_Assessment_Hard)- [Guest House Assessment Sample Queries](https://sqlzoo.net/wiki/Guest_House_Assessment_Sample_Queries) Table bookingThe table booking contains an entry for every booking made at the hotel. A booking is made by one guest - even though more than one person may be staying we do not record the details of other guests in the same room. In normal operation the table includes both past and future bookings.```+----------+------------+-------+--------+---------+-------------------+------+------------+|booking_id|booking_date|room_no|guest_id|occupants|room_type_requested|nights|arrival_time|+----------+------------+-------+--------+---------+-------------------+------+------------+| 5001 | 2016-11-03 | 101 | 1027 | 1 | single | 7 | 13:00 || 5002 | 2016-11-03 | 102 | 1179 | 1 | double | 2 | 18:00 || 5003 | 2016-11-03 | 103 | 1106 | 2 | double | 2 | 21:00 || 5004 | 2016-11-03 | 104 | 1238 | 1 | double | 3 | 22:00 |+----------+------------+-------+--------+---------+-------------------+------+------------+``` Table roomRooms are either single, double, twin or family.```+-----+-----------+---------------+| id | room_type | max_occupancy |+-----+-----------+---------------+| 101 | single | 1 || 102 | double | 2 || 103 | double | 2 || 104 | double | 2 || 105 | family | 3 |+-----+-----------+---------------+``` Table rateRooms are charged per night, the amount charged depends on the "room type requested" value of the booking and the number of people staying:```+-----------+-----------+--------+| room_type | occupancy | amount |+-----------+-----------+--------+| double | 1 | 56.00 || double | 2 | 72.00 || family | 1 | 56.00 || family | 2 | 72.00 || family | 3 | 84.00 || single | 1 | 48.00 || twin | 1 | 50.00 || twin | 2 | 72.00 |+-----------+-----------+--------+```You can see that a double room with one person staying costs £56 while a double room with 2 people staying costs £72 per nightNote that the actual room assigned to the booking might not match the room required (a customer may ask for a single room but we actually assign her a double). In this case we charge at the "requirement rate".
###Code
# Prerequesites
import getpass
%load_ext sql
pwd = getpass.getpass()
# %sql mysql+pymysql://root:$pwd@localhost:3306/sqlzoo
%sql postgresql://postgres:$pwd@localhost/sqlzoo
%config SqlMagic.displaylimit = 20
###Output
····
###Markdown
1.Guest 1183. Give the booking_date and the number of nights for guest 1183.```+--------------+--------+| booking_date | nights |+--------------+--------+| 2016-11-27 | 5 |+--------------+--------+```
###Code
%%sql
SELECT booking_date, nights FROM booking
WHERE guest_id=1183;
###Output
* postgresql://postgres:***@localhost/sqlzoo
1 rows affected.
###Markdown
2.When do they get here? List the arrival time and the first and last names for all guests due to arrive on 2016-11-05, order the output by time of arrival.```+--------------+------------+-----------+| arrival_time | first_name | last_name |+--------------+------------+-----------+| 14:00 | Lisa | Nandy || 15:00 | Jack | Dromey || 16:00 | Mr Andrew | Tyrie || 21:00 | James | Heappey || 22:00 | Justin | Tomlinson |+--------------+------------+-----------+```
###Code
%%sql
SELECT arrival_time, first_name, last_name
FROM booking JOIN guest ON booking.guest_id=guest.id
WHERE booking_date='2016-11-05'
ORDER BY arrival_time;
###Output
* postgresql://postgres:***@localhost/sqlzoo
5 rows affected.
###Markdown
3.Look up daily rates. Give the daily rate that should be paid for bookings with ids 5152, 5165, 5154 and 5295. Include booking id, room type, number of occupants and the amount.```+------------+---------------------+-----------+--------+| booking_id | room_type_requested | occupants | amount |+------------+---------------------+-----------+--------+| 5152 | double | 2 | 72.00 || 5154 | double | 1 | 56.00 || 5295 | family | 3 | 84.00 |+------------+---------------------+-----------+--------+```
###Code
%%sql
SELECT booking.booking_id, room_type_requested, occupants, rate.amount
FROM booking INNER JOIN room ON booking.room_no=room.id
INNER JOIN rate ON (room.room_type=rate.room_type AND
booking.occupants=rate.occupancy)
WHERE booking.booking_id IN (5152,5165,5154,5295);
###Output
* postgresql://postgres:***@localhost/sqlzoo
3 rows affected.
###Markdown
4.Who’s in 101? Find who is staying in room 101 on 2016-12-03, include first name, last name and address.```+------------+-----------+-------------+| first_name | last_name | address |+------------+-----------+-------------+| Graham | Evans | Weaver Vale |+------------+-----------+-------------+```
###Code
%%sql
SELECT first_name, last_name, address
FROM booking JOIN guest ON booking.guest_id=guest.id
WHERE room_no=101 AND
      booking_date <= date '2016-12-03' AND date '2016-12-03' < booking_date + nights
      -- MySQL: booking_date <= '2016-12-03' AND '2016-12-03' < ADDDATE(booking_date, nights);
###Output
* postgresql://postgres:***@localhost/sqlzoo
1 rows affected.
###Markdown
5.How many bookings, how many nights? For guests 1185 and 1270 show the number of bookings made and the total number of nights. Your output should include the guest id and the total number of bookings and the total number of nights.```+----------+---------------+-------------+| guest_id | COUNT(nights) | SUM(nights) |+----------+---------------+-------------+| 1185 | 3 | 8 || 1270 | 2 | 3 |+----------+---------------+-------------+```
###Code
%%sql
SELECT guest_id, COUNT(nights) "COUNT(nights)", SUM(nights) "SUM(nights)"
FROM booking
WHERE guest_id IN (1185, 1270)
GROUP BY guest_id;
###Output
* postgresql://postgres:***@localhost/sqlzoo
2 rows affected.
|
notebooks/BreadBoard_DataRecord.ipynb | ###Markdown
Bread Board Construct record.py
###Code
import os
import numpy as np
import pandas as pd
from pyaspect.moment_tensor import MomentTensor
###Output
_____no_output_____
###Markdown
paths
###Code
data_in_dir = 'data/output/'
data_out_dir = data_in_dir
!ls {data_out_dir}/tmp/TestProjects/CGFR_Test
projects_fqp = os.path.join(data_out_dir,'tmp','TestProjects','CGFR_Test')
recip_project_fqp = os.path.join(projects_fqp,'ReciprocalGeometricTestProject')
fwd_project_fqp = os.path.join(projects_fqp,'ForwardGeometricTestProject')
!ls {recip_project_fqp}
print()
!ls {fwd_project_fqp}
###Output
_____no_output_____
###Markdown
Record Object
###Code
import os
import copy
import importlib
import numpy as np
import pandas as pd
from pyaspect.specfemio.headers import RecordHeader
#TODO this is the actual record. I need the header, then make record.py
class Record(RecordHeader):
class _TraceData(object):
class _XYZ(object):
def __init__(self,ex,ny,z):
self.ex = ex
self.ny = ny
self.z = z
def __str__(self):
out_str = f'Component E/X:\n{self.ex}\n\n'
out_str += f'Component N/Y:\n{self.ny}\n\n'
out_str += f'Component Z:\n{self.z}'
return out_str
def __repr__(self):
out_str = f'Component E/X:\n{self.ex.__repr__()}\n\n'
out_str += f'Component N/Y:\n{self.ny.__repr__()}\n\n'
out_str += f'Component Z:\n{self.z.__repr__()}'
return out_str
def __init__(self,data_df):
self.df_x = data_df['comp_EX']
self.df_y = data_df['comp_NY']
self.df_z = data_df['comp_Z']
def __getitem__(self,islice):
return self._XYZ(self.df_x.loc[islice],self.df_y.loc[islice],self.df_z.loc[islice])
def __str__(self):
out_str = f'Component E/X:\n{self.df_x}\n\n'
out_str += f'Component N/Y:\n{self.df_y}\n\n'
out_str += f'Component Z:\n{self.df_z}'
return out_str
def __repr__(self):
out_str = f'Component E/X:\n{self.df_x.__repr__()}\n\n'
out_str += f'Component N/Y:\n{self.df_y.__repr__()}\n\n'
out_str += f'Component Z:\n{self.df_z.__repr__()}'
return out_str
def __init__(self, rheader,dtype='b',data_df=None):
super(Record,self).__init__(name=rheader.name,
solutions_h=rheader.get_solutions_header_list(),
stations_h=rheader.get_stations_header_list(),
proj_id=rheader.proj_id,
rid=rheader.rid,
iter_id=rheader.iter_id,
is_reciprocal=rheader.is_reciprocal)
if not isinstance(data_df,pd.DataFrame):
self._load_data(dtype=dtype)
else:
self['data_df'] = data_df
def __str__(self):
out_str = f'{super(Record, self).__str__()}\n\n'
out_str += f'Data:\n {self.data_df}'
return out_str
def __repr__(self):
out_str = f'{super(Record, self).__repr__()}\n\n'
out_str += f'Data:\n {self.data_df.__repr__()}'
return out_str
def __getitem__(self, kslice):
if not isinstance(kslice, str):
dslice = super(Record, self)._get_df_slice_index(kslice,self.data_df,is_stations=True)
c_data_df = self.data_df.reset_index()[dslice]
c_rheader = super(Record, self).__getitem__(kslice)
return Record(rheader=c_rheader,data_df=c_data_df)
else:
return super(Record, self).__getitem__(kslice)
def _read_specfem_bin_trace(self,fpath,dtype=np.float32):
return np.fromfile(fpath, dtype=dtype)
def _load_data(self,dtype='b',sl=slice(None,None,None),scale=1.0,rfunc=None):
        if dtype != 'b' and rfunc is None:
raise Exception('can only read binary type data for the time being')
#FIXME: add read ascii
read_func = self._read_specfem_bin_trace
if rfunc != None:
read_func = rfunc
l_data = []
for eidx, edf in self.stations_df.groupby(level='eid'):
for sidx, sdf in edf.groupby(level='sid'):
for tidx, tdf in sdf.groupby(level='trid'):
for gidx, gdf in tdf.groupby(level='gid'):
fp_prefix = gdf.loc[(eidx,sidx,tidx,gidx),"data_fqdn"]
fp = os.path.join(projects_fqp,fp_prefix)
match_fp = fp + '.*X[XYZEN].sem*'
data_dict = {'eid':eidx,'sid':sidx,'trid':tidx,'gid':gidx}
for filepath in glob.glob(match_fp):
comp = filepath.split('.')[-2][-1]
if comp == 'X' or comp == 'E':
data_dict['comp_EX'] = scale*read_func(filepath)
elif comp == 'Y' or comp == 'N':
data_dict['comp_NY'] = scale*read_func(filepath)
elif comp == 'Z':
data_dict['comp_Z'] = scale*read_func(filepath)
else:
raise Exception(f'Could not find component: "{comp}"')
l_data.append(data_dict)
self['data_df'] = pd.DataFrame.from_records(l_data, index=self['default_stat_midx'])
@property
def data(self):
return self._TraceData(self.data_df)
@property
def data_df(self):
return self['data_df']
@property
def component_names(self):
return ['comp_EX','comp_NY','comp_Z']
###Output
_____no_output_____
###Markdown
Reciprocity: Read RecordHeader and instantiate RecordObject
###Code
import glob
from pyaspect.specfemio.read import _read_headers
recip_record_fqp = os.path.join(recip_project_fqp,'pyheader.project_record')
recip_record_h = _read_headers(recip_record_fqp)
recip_record_h.is_reciprocal = True #just a hack until updated
ne = recip_record_h.nevents
ns = recip_record_h.nsrc
print(f'ne:{ne}, ns:{ns}')
print(f'Recip Header:\n{recip_record_h.solutions_df.loc[pd.IndexSlice[:,1],:]}')
###Output
_____no_output_____
###Markdown
Instantiate Record, and test slicing and pandas operations with DataFrames
###Code
recip_record = Record(recip_record_h)
#print(recip_drecord['is_reciprocal'])
#print(recip_drecord.data_df.loc[(0,0,0),:])
#print(recip_drecord.data_df.loc[:,'comp_EX'])
#print(recip_drecord[0,0,0,:])
data = recip_record.data
#print(type(data[0,0,0,0].z))
#print(data)
print(pd.merge(recip_record.stations_df,recip_record.data_df,on=['eid','sid','trid','gid']))
###Output
_____no_output_____
###Markdown
trace spatial derivative function to add to the record.py module
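The helper below band-pass filters the traces recorded at the two perturbed receiver positions and forms a centred finite difference along one coordinate, roughly $$\frac{\partial u}{\partial x_k} \;\approx\; \frac{u\!\left(x_0 + \tfrac{h}{2}\hat{e}_k\right) - u\!\left(x_0 - \tfrac{h}{2}\hat{e}_k\right)}{h}$$ where $h$ is the spacing between the two flanking group positions.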
###Code
from scipy import signal
def calulate_spacial_derivative(tdf,eidx,sidx,tidx,g_p1,g_m1,sos,comp_key,coord_key):
gidx_0 = pd.IndexSlice[eidx,sidx,tidx,0]
gidx_p1 = pd.IndexSlice[eidx,sidx,tidx,g_p1]
gidx_m1 = pd.IndexSlice[eidx,sidx,tidx,g_m1]
df_0 = tdf.loc[gidx_0]
df_p1 = tdf.loc[gidx_p1]
df_m1 = tdf.loc[gidx_m1]
data_p1 = signal.sosfilt(sos, df_p1[comp_key].astype(np.float64))
data_m1 = signal.sosfilt(sos, df_m1[comp_key].astype(np.float64))
c_p1 = df_p1[coord_key]
c_m1 = df_m1[coord_key]
c_0 = df_0[coord_key]
delta = 0.5*(c_p1 - c_m1)
h = 2.0*np.abs(delta)
c = c_m1 + delta
assert h != 0
assert c_0-c == 0
h_scale = 1/h
mt_trace = h_scale*(data_p1 - data_m1)
return mt_trace
###Output
_____no_output_____
###Markdown
make reciprocal Green's functions: add to record.py module
###Code
import pandas as pd
from scipy import signal
def make_rgf_data_df(record,fl,fh,fs):
comp_dict = {'comp_EX':0,'comp_NY':1,'comp_Z':2}
coord_dict = {0:'lon_xc',1:'lat_yc',2:'depth'}
sos = signal.butter(3, [fl,fh], 'bp', fs=fs, output='sos')
l_rgf_traces = []
m_df = pd.merge(record.stations_df,record.data_df,on=['eid','sid','trid','gid'])
for eidx, edf in m_df.groupby(level='eid'):
for sidx, sdf in edf.groupby(level='sid'):
for tidx, tdf in sdf.groupby(level='trid'):
for comp_key in comp_dict.keys():
ie = tidx
ig = eidx
fi = sidx
for di in range(3):
rgf_dict = {'eid':tidx,'trid':eidx,'fid':sidx}
rgf_dict['cid'] = comp_dict[comp_key]
coord_key = coord_dict[di]
rgf_dict['did'] = di
ip1 = di+1 #coord + h
im1 = ip1 + 3 #coord - h
if di == 2:
tm1 = ip1
ip1 = im1
im1 = tm1
rgf_dict['data'] = calulate_spacial_derivative(m_df,
eidx,
sidx,
tidx,
ip1,
im1,
sos,
comp_key,
coord_key)
l_rgf_traces.append(rgf_dict)
return pd.DataFrame.from_records(l_rgf_traces, index=('eid','trid','cid','fid','did'))
###Output
_____no_output_____
###Markdown
create Reciprocal Green's Table (as DataFrame)
###Code
rgf_data_df = make_rgf_data_df(recip_record,1.0,10.0,1000)
rgf_data_df
#print(rgf_df)
print(rgf_data_df.loc[0,0,0,:,:])
###Output
_____no_output_____
###Markdown
Forward/CMTSolution: Read RecordHeader and instantiate RecordObject
###Code
fwd_record_fqp = os.path.join(fwd_project_fqp,'pyheader.project_record')
fwd_record_h = _read_headers(fwd_record_fqp)
fwd_record_h.is_reciprocal = False #just a hack until updated
ne = fwd_record_h.nevents
ns = fwd_record_h.nsrc
print(f'ne:{ne}, ns:{ns}')
print(f'Forward Record:\n{fwd_record_h.solutions_df.loc[(0,0),"date"]}')
###Output
_____no_output_____
###Markdown
Instantiate Forward Record
###Code
fwd_record = Record(fwd_record_h)
#print(fwd_record['is_reciprocal'])
#print(fwd_record.data_df.loc[(0,0,0),:])
#print(fwd_record.data_df.loc[:,'comp_EX'])
#print(fwd_record[0,0,0,:])
data = fwd_record.data
#print(type(data[0,0,0,0].z))
print(data)
###Output
_____no_output_____
###Markdown
Get moment tensors to compare with the forward data and also construct the combined reciprocal CMTs. These functions will not be part of the record.py module, but make_moment_tensor will be added to the utils.py module
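For reference, `make_moment_tensor` below simply packs the six independent components into the symmetric (r, t, p / up-south-east) matrix $$M = \begin{bmatrix} m_{rr} & m_{rt} & m_{rp} \\ m_{rt} & m_{tt} & m_{tp} \\ m_{rp} & m_{tp} & m_{pp} \end{bmatrix}$$ before handing it to `MomentTensor`.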
###Code
def make_moment_tensor(src_h):
mrr = src_h['mrr']
mtt = src_h['mtt']
mpp = src_h['mpp']
mrt = src_h['mrt']
mrp = src_h['mrp']
mtp = src_h['mtp']
h_matrix = np.array([[mrr,mrt,mrp],[mrt,mtt,mtp],[mrp,mtp,mpp]])
return MomentTensor(m_up_south_east=h_matrix)
#print(f'Forward Record Sources:\n{fwd_record_h.solutions_df}')
SrcHeader = fwd_record_h.solution_cls
d_fwd_src = {}
for eidx, edf in fwd_record_h.solutions_df.groupby(level='eid'):
for sidx, sdf in edf.groupby(level='sid'):
idx = pd.IndexSlice[eidx,sidx]
src = SrcHeader.from_series(fwd_record_h.solutions_df.loc[idx])
#print(src)
#mag = src.mw
#strike = src.strike
#dip = src.dip
#rake = src.rake
#mt = MomentTensor(mw=mag,strike=strike,dip=dip,rake=rake)
mt = make_moment_tensor(src)
print(mt)
d_fwd_src[eidx] = mt
#print(f'mt.aki_m6:\n{mt.aki_richards_m6()}')
#print(f'header.m6:\n{src.mt}\n')
for key in d_fwd_src:
print(d_fwd_src[key].m6_up_south_east())
###Output
_____no_output_____
###Markdown
Make Reciprocal CMT record from MomentTensors Function
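The function below synthesizes each CMT trace as a moment-tensor-weighted sum of the reciprocal Green's-function derivatives, i.e. schematically the representation-theorem sum (sign and axis conventions follow the weights coded below, not this sketch): $$u_n(\mathbf{x}_r, t) \;\approx\; \sum_{p,q} M_{pq}\,\partial_q G_{np}(\mathbf{x}_r, \mathbf{x}_s, t)$$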
###Code
def make_cmt_data_df_from_rgf(rgf_df,mt_dict):
comp_dict = {'comp_EX':0,'comp_NY':1,'comp_Z':2}
rgf_events = list(rgf_df.index.get_level_values('eid').unique())
#print(f'rgf_events: {rgf_events}')
mt_events = list(mt_dict.keys())
#print(f'mt_events: {mt_events}')
#print(f'all: {rgf_events == mt_events}')
if not rgf_events == mt_events:
raise Exception('RGF-events do not match MomentTensors-events')
l_recip_cmt_traces = []
for eidx, edf in rgf_df.groupby(level='eid'):
        mw = mt_dict[eidx].magnitude
        m0 = mt_dict[eidx].moment
        mt_arr = mt_dict[eidx].m6_up_south_east()
wzz = mt_arr[0] #mrr
wyy = mt_arr[1] #mtt
wxx = mt_arr[2] #mpp
wyz = -mt_arr[3] #mrt
wxz = mt_arr[4] #mrp
wxy = -mt_arr[5] #mtp
#print(f'Mw:{mw:.2f}, M0:{m0:.2f}, wzz:{wzz:.3f}, wyy:{wyy:.3f}, wee:{wxx:.3f}, wxy:{wxy:.3f}, wxz:{wxz:.3f}, wyz:{wyz:.3f}')
for tidx, tdf in edf.groupby(level='trid'):
d_recip_cmt = {'eid':eidx,'sid':eidx,'trid':tidx,'gid':0}
for comp_key in comp_dict.keys():
ic = comp_dict[comp_key]
composite_trace = wxx*1*rgf_df.loc[(eidx,tidx, 0,ic, 0),'data'] #Matrix: Mee
composite_trace += wyy*1*rgf_df.loc[(eidx,tidx, 1,ic, 1),'data'] #Matrix: Mnn
composite_trace += wzz*1*rgf_df.loc[(eidx,tidx, 2,ic, 2),'data'] #Matrix: Mzz
#Matrix: M1/Mxy
composite_trace += wxy*1*rgf_df.loc[(eidx,tidx, 1,ic, 0),'data']
composite_trace += wxy*1*rgf_df.loc[(eidx,tidx, 0,ic, 1),'data']
#Matrix: M2/Mxz
composite_trace += wxz*1*rgf_df.loc[(eidx,tidx, 0,ic, 2),'data']
composite_trace += wxz*1*rgf_df.loc[(eidx,tidx, 2,ic, 0),'data']
#Matrix: M3/Myz
composite_trace += wyz*1*rgf_df.loc[(eidx,tidx, 1,ic, 2),'data']
composite_trace += wyz*1*rgf_df.loc[(eidx,tidx, 2,ic, 1),'data']
d_recip_cmt[comp_key] = composite_trace
l_recip_cmt_traces.append(d_recip_cmt)
return pd.DataFrame.from_records(l_recip_cmt_traces, index=('eid','sid','trid','gid'))
###Output
_____no_output_____
###Markdown
Construct the Dataframe with the Reciprocal CMT Traces
###Code
rgf_cmt_data_df = make_cmt_data_df_from_rgf(rgf_data_df,d_fwd_src)
for eidx, edf in rgf_cmt_data_df.groupby(level='eid'):
print(eidx)
print(rgf_cmt_data_df.loc[pd.IndexSlice[0,0,:,0],:])
assert False
###Output
_____no_output_____
###Markdown
Construct the Reciprocal CMT RecordHeader and then a Record
###Code
import datetime
from pyaspect.specfemio.headers import CMTSolutionHeader as cmt_h
from pyaspect.specfemio.headers import StationHeader as stat_h
'''
hstr = f'PDE {date.year} {date.month} {date.day} {date.hour} {date.minute} {date.second}'
hstr += f' {lat_yc} {lon_xc} {depth/1000.0} {mt.magnitude} 0 srcid_{eid}'
'''
idx = pd.IndexSlice
solu_df = recip_record.solutions_df
stat_df = recip_record.stations_df
l_recip_cmtsolutions = []
l_recip_cmtstations = []
proj_id = recip_record.proj_id
for eidx, edf in rgf_cmt_data_df.groupby(level='eid'):
eid = eidx
print(f'eid: {eid}')
mt = d_fwd_src[eid]
for tidx, tdf in edf.groupby(level='trid'):
date = datetime.datetime.now()
lon_xc = solu_df.loc[(eidx,0),'lon_xc']
lat_yc = solu_df.loc[(eidx,0),'lat_yc']
depth = solu_df.loc[(eidx,0),'depth']
elevation = 0.
network = stat_df.loc[(tidx,0,eidx,0),'network']
stat_header = stat_h(name=f'Reciprocal-Station:{tidx}',
lat_yc=lat_yc,
lon_xc=lon_xc,
depth=depth,
elevation=elevation,
network=network,
proj_id=proj_id,
eid=eid,
sid=eid,
trid=tidx,
gid=0)
l_recip_cmtstations.append(stat_header)
cmt_lon_xc = stat_df.loc[(tidx,0,eidx,0),'lon_xc']
cmt_lat_yc = stat_df.loc[(tidx,0,eidx,0),'lat_yc']
cmt_depth = stat_df.loc[(tidx,0,eidx,0),'depth']
cmt_header = cmt_h(ename=f'Reciprocal-CMT:{eid}',
lat_yc=cmt_lat_yc,
lon_xc=cmt_lon_xc,
depth=cmt_depth,
tshift=0,
date=date,
hdur=0,
mt=mt,
proj_id=proj_id,
eid=eid,
sid=eid)
l_recip_cmtsolutions.append(cmt_header)
constructed_record = RecordHeader(name=f'Reciprocal of:{recip_record.name}',
solutions_h=l_recip_cmtsolutions,
stations_h=l_recip_cmtstations,
proj_id=proj_id,
rid=recip_record.rid,
iter_id=recip_record.iter_id,
is_reciprocal=False)
#sid0_df['eid','name'].apply((lambda x: x+1)())
constructed_record
###Output
_____no_output_____ |
Code/02_SVHN_YOLOv4_tiny_Darknet_Roboflow.ipynb | ###Markdown
02. SVHN YOLOv4-tiny Darknet Training on Google Colab by Roboflow Purpose:This notebook contains the steps necessary to retrain the Darknet YOLOv4-tiny model for digit detection using the images and annotation files created in the previous notebook (01_Preprocessing_SVHN.ipynb). It is a slight adaptation of the notebook provided by Roboflow located [here](https://blog.roboflow.com/train-yolov4-tiny-on-custom-data-lighting-fast-detection/). Thank you Roboflow. Before Running Notebook:1. Create a folder on your Google drive named SVHN1. Upload training and validation data to the SVHN folder created in the previous notebook as train.tar.gz and test.tar.gz.1. Upload obj.names containing the name of the classes (0 through 9), one per line, to the SVHN folder.1. Upload this notebook into Google Colab and run from there.
###Code
#Mount google drive with trainging and testing data
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
Configuring cuDNN on Colab for YOLOv4
###Code
# CUDA: Let's check that the Nvidia CUDA drivers are already pre-installed and which version it is.
!/usr/local/cuda/bin/nvcc --version
# We need to install the correct cuDNN according to this output
#using google drive to bring in a file
#if you have other means, by all means, use them friend!
import gdown
url = 'https://drive.google.com/uc?id=1NLOBQmV6QZpP_c7-693ug0X-bqBpeJLE'
output = '/usr/local/cudnn-10.1-linux-x64-v7.6.5.32 (1).tgz'
gdown.download(url, output, quiet=False)
#we are installing the cuDNN that we dropped in our google drive
%cd /usr/local/
!tar -xzvf "cudnn-10.1-linux-x64-v7.6.5.32 (1).tgz"
!chmod a+r /usr/local/cuda/include/cudnn.h
!cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
%cd /content/
# #copy cuDNN file from google drive
# %cp "/content/drive/MyDrive/cudnn-10.1-linux-x64-v8.0.5.39.tgz" /usr/local/
# #we are installing the cuDNN that we dropped in our google drive
# %cd /usr/local/
# !tar -xzvf "cudnn-10.1-linux-x64-v8.0.5.39.tgz"
# !chmod a+r /usr/local/cuda/include/cudnn.h
# !cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
# %cd /content/
#take a look at the kind of GPU we have
!nvidia-smi
# Change the number depending on what GPU is listed above, under NVIDIA-SMI > Name.
# Tesla K80: 30
# Tesla P100: 60
# Tesla T4: 75
%env compute_capability=75
###Output
env: compute_capability=75
###Markdown
Installing Darknet for YOLOv4 on Colab
###Code
%cd /content/
%rm -rf darknet
#we clone the fork of darknet maintained by roboflow
#small changes have been made to configure darknet for training
!git clone https://github.com/roboflow-ai/darknet.git
###Output
Cloning into 'darknet'...
remote: Enumerating objects: 13289, done.[K
remote: Total 13289 (delta 0), reused 0 (delta 0), pack-reused 13289[K
Receiving objects: 100% (13289/13289), 12.13 MiB | 24.74 MiB/s, done.
Resolving deltas: 100% (9106/9106), done.
###Markdown
**IMPORTANT! If you're not using a Tesla P100 GPU, edit the sed command below and replace the arch and code values with those matching your GPU. A list can be found [here](http://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/).**
###Code
#install environment from the Makefile
%cd /content/darknet/
# compute_30, sm_30 for Tesla K80
# compute_75, sm_75 for Tesla T4
!sed -i 's/ARCH= -gencode arch=compute_60,code=sm_60/ARCH= -gencode arch=compute_75,code=sm_75/g' Makefile
#install environment from the Makefile
#note if you are on Colab Pro this works on a P100 GPU
#if you are on Colab free, you may need to change the Makefile for the K80 GPU
#this goes for any GPU, you need to change the Makefile to inform darknet which GPU you are running on.
!sed -i 's/OPENCV=0/OPENCV=1/g' Makefile
!sed -i 's/GPU=0/GPU=1/g' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/g' Makefile
!sed -i "s/ARCH= -gencode arch=compute_60,code=sm_60/ARCH= -gencode arch=compute_${compute_capability},code=sm_${compute_capability}/g" Makefile
!make
#download the newly released yolov4-tiny weights
%cd /content/darknet
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.conv.29
###Output
_____no_output_____
###Markdown
Set up Custom Dataset for YOLOv4
###Code
#copy and extract training data
print('Copying files...')
%cp /content/drive/MyDrive/SVHN/train.tar.gz /content/darknet/data/
%cp /content/drive/MyDrive/SVHN/test.tar.gz /content/darknet/data/
%cd /content/darknet/data
print('Extracting training data...')
!tar -xf train.tar.gz
print('Extracting testing data...')
!tar -xf test.tar.gz
#Set up training file directories for custom dataset
%cd /content/darknet/
%cp /content/drive/MyDrive/SVHN/obj.names data/
%mkdir data/obj
#copy image and labels
print('Copying training images...')
%cp data/train/*.png data/obj/
print('Copying testing images...')
%cp data/test/*.png data/obj/
print('Copying training labels...')
%cp data/train/*.txt data/obj/
print('Copying testing labels...')
%cp data/test/*.txt data/obj/
print('Creating obj.data...')
with open('data/obj.data', 'w') as out:
    out.write('classes = 10\n')  # one class per digit (0-9) listed in obj.names
out.write('train = data/train.txt\n')
out.write('valid = data/valid.txt\n')
out.write('names = data/obj.names\n')
out.write('backup = backup/')
#write train file (just the image list)
import os
print('Creating train.txt...')
with open('data/train.txt', 'w') as out:
for img in [f for f in os.listdir('data/train/') if f.endswith('png')]:
out.write('data/obj/' + img + '\n')
#write the valid file (just the image list)
import os
print('Creating valid.txt...')
with open('data/valid.txt', 'w') as out:
for img in [f for f in os.listdir('data/test/') if f.endswith('png')]:
out.write('data/obj/' + img + '\n')
print('Complete.')
###Output
Creating obj.data...
Creating train.txt...
Creating valid.txt...
Complete.
###Markdown
Write Custom Training Config for YOLOv4
###Code
#we build config dynamically based on number of classes
#we build iteratively from base config files. This is the same file shape as cfg/yolo-obj.cfg
def file_len(fname):
with open(fname) as f:
for i, l in enumerate(f):
pass
return i + 1
num_classes = file_len('/content/drive/MyDrive/SVHN/obj.names')
#max_batches = num_classes*2000
max_batches = 34500
steps1 = .8 * max_batches
steps2 = .9 * max_batches
steps_str = str(steps1)+','+str(steps2)
num_filters = (num_classes + 5) * 3
print("writing config for a custom YOLOv4 detector detecting number of classes: " + str(num_classes))
#Instructions from the darknet repo
#change line max_batches to (classes*2000 but not less than number of training images, and not less than 6000), f.e. max_batches=6000 if you train for 3 classes
#change line steps to 80% and 90% of max_batches, f.e. steps=4800,5400
if os.path.exists('./cfg/custom-yolov4-tiny-detector.cfg'): os.remove('./cfg/custom-yolov4-tiny-detector.cfg')
#customize iPython writefile so we can write variables
from IPython.core.magic import register_line_cell_magic
@register_line_cell_magic
def writetemplate(line, cell):
with open(line, 'w') as f:
f.write(cell.format(**globals()))
%%writetemplate ./cfg/custom-yolov4-tiny-detector.cfg
[net]
# Testing
#batch=1
#subdivisions=1
# Training
batch=64
subdivisions=24
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.00261
burn_in=1000
max_batches = {max_batches}
policy=steps
steps={steps_str}
scales=.1,.1
[convolutional]
batch_normalize=1
filters=32
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[route]
layers=-1
groups=2
group_id=1
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[route]
layers = -1,-2
[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky
[route]
layers = -6,-1
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[route]
layers=-1
groups=2
group_id=1
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[route]
layers = -1,-2
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[route]
layers = -6,-1
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[route]
layers=-1
groups=2
group_id=1
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[route]
layers = -1,-2
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[route]
layers = -6,-1
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
##################################
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters={num_filters}
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes={num_classes}
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
nms_kind=greedynms
beta_nms=0.6
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 23
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters={num_filters}
activation=linear
[yolo]
mask = 1,2,3
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes={num_classes}
num=6
jitter=.3
scale_x_y = 1.05
cls_normalizer=1.0
iou_normalizer=0.07
iou_loss=ciou
ignore_thresh = .7
truth_thresh = 1
random=0
nms_kind=greedynms
beta_nms=0.6
#here is the file that was just written.
#you may consider adjusting certain things
#like the number of subdivisions 64 runs faster but Colab GPU may not be big enough
#if Colab GPU memory is too small, you will need to adjust subdivisions to 16
%cat cfg/custom-yolov4-tiny-detector.cfg
###Output
_____no_output_____
###Markdown
Train Custom YOLOv4 Detector
###Code
%cd /content/darknet/
!./darknet detector train data/obj.data cfg/custom-yolov4-tiny-detector.cfg yolov4-tiny.conv.29 -dont_show -map
#If you get CUDA out of memory adjust subdivisions above!
#adjust max batches down for shorter training above
###Output
_____no_output_____
###Markdown
 Infer Custom Objects with Saved YOLOv4 Weights
###Code
#define utility function
def imShow(path):
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
image = cv2.imread(path)
height, width = image.shape[:2]
resized_image = cv2.resize(image,(3*width, 3*height), interpolation = cv2.INTER_CUBIC)
fig = plt.gcf()
fig.set_size_inches(18, 10)
plt.axis("off")
#plt.rcParams['figure.figsize'] = [10, 5]
plt.imshow(cv2.cvtColor(resized_image, cv2.COLOR_BGR2RGB))
plt.show()
#check if weigths have saved yet
#backup houses the last weights for our detector
#(file yolo-obj_last.weights will be saved to the build\darknet\x64\backup\ for each 100 iterations)
#(file yolo-obj_xxxx.weights will be saved to the build\darknet\x64\backup\ for each 1000 iterations)
#After training is complete - get result yolo-obj_final.weights from path build\darknet\x64\bac
!ls backup
#if it is empty you haven't trained for long enough yet, you need to train for at least 100 iterations
#coco.names is hardcoded somewhere in the detector
%cp data/obj.names data/coco.names
#/test has images that we can test our detector on
test_images = [f for f in os.listdir('data/test') if f.endswith('.png')]
import random
img_path = "data/test/" + random.choice(test_images);
#test out our detector!
!./darknet detect cfg/custom-yolov4-tiny-detector.cfg backup/custom-yolov4-tiny-detector_best.weights {img_path} -dont-show
imShow('predictions.jpg')
###Output
CUDA-version: 10010 (11020), cuDNN: 7.6.5, GPU count: 1
OpenCV version: 3.2.0
compute_capability = 750, cudnn_half = 0
net.optimized_memory = 0
mini_batch = 1, batch = 24, time_steps = 1, train = 0
layer filters size/strd(dil) input output
0 conv 32 3 x 3/ 2 416 x 416 x 3 -> 208 x 208 x 32 0.075 BF
1 conv 64 3 x 3/ 2 208 x 208 x 32 -> 104 x 104 x 64 0.399 BF
2 conv 64 3 x 3/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.797 BF
3 route 2 1/2 -> 104 x 104 x 32
4 conv 32 3 x 3/ 1 104 x 104 x 32 -> 104 x 104 x 32 0.199 BF
5 conv 32 3 x 3/ 1 104 x 104 x 32 -> 104 x 104 x 32 0.199 BF
6 route 5 4 -> 104 x 104 x 64
7 conv 64 1 x 1/ 1 104 x 104 x 64 -> 104 x 104 x 64 0.089 BF
8 route 2 7 -> 104 x 104 x 128
9 max 2x 2/ 2 104 x 104 x 128 -> 52 x 52 x 128 0.001 BF
10 conv 128 3 x 3/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.797 BF
11 route 10 1/2 -> 52 x 52 x 64
12 conv 64 3 x 3/ 1 52 x 52 x 64 -> 52 x 52 x 64 0.199 BF
13 conv 64 3 x 3/ 1 52 x 52 x 64 -> 52 x 52 x 64 0.199 BF
14 route 13 12 -> 52 x 52 x 128
15 conv 128 1 x 1/ 1 52 x 52 x 128 -> 52 x 52 x 128 0.089 BF
16 route 10 15 -> 52 x 52 x 256
17 max 2x 2/ 2 52 x 52 x 256 -> 26 x 26 x 256 0.001 BF
18 conv 256 3 x 3/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.797 BF
19 route 18 1/2 -> 26 x 26 x 128
20 conv 128 3 x 3/ 1 26 x 26 x 128 -> 26 x 26 x 128 0.199 BF
21 conv 128 3 x 3/ 1 26 x 26 x 128 -> 26 x 26 x 128 0.199 BF
22 route 21 20 -> 26 x 26 x 256
23 conv 256 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 256 0.089 BF
24 route 18 23 -> 26 x 26 x 512
25 max 2x 2/ 2 26 x 26 x 512 -> 13 x 13 x 512 0.000 BF
26 conv 512 3 x 3/ 1 13 x 13 x 512 -> 13 x 13 x 512 0.797 BF
27 conv 256 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 256 0.044 BF
28 conv 512 3 x 3/ 1 13 x 13 x 256 -> 13 x 13 x 512 0.399 BF
29 conv 45 1 x 1/ 1 13 x 13 x 512 -> 13 x 13 x 45 0.008 BF
30 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
31 route 27 -> 13 x 13 x 256
32 conv 128 1 x 1/ 1 13 x 13 x 256 -> 13 x 13 x 128 0.011 BF
33 upsample 2x 13 x 13 x 128 -> 26 x 26 x 128
34 route 33 23 -> 26 x 26 x 384
35 conv 256 3 x 3/ 1 26 x 26 x 384 -> 26 x 26 x 256 1.196 BF
36 conv 45 1 x 1/ 1 26 x 26 x 256 -> 26 x 26 x 45 0.016 BF
37 yolo
[yolo] params: iou loss: ciou (4), iou_norm: 0.07, cls_norm: 1.00, scale_x_y: 1.05
nms_kind: greedynms (1), beta = 0.600000
Total BFLOPS 6.801
avg_outputs = 300864
Allocate additional workspace_size = 26.22 MB
Loading weights from backup/custom-yolov4-tiny-detector_best.weights...
seen 64, trained: 1603 K-images (25 Kilo-batches_64)
Done! Loaded 38 layers from weights-file
data/test/test_1200.png: Predicted in 5.259000 milli-seconds.
2: 74%
2: 75%
Unable to init server: Could not connect: Connection refused
(predictions:208214): Gtk-[1;33mWARNING[0m **: [34m23:47:56.822[0m: cannot open display:
###Markdown
Save Weights and Config
###Code
#%cp cfg/custom-yolov4-detector.cfg
%cp cfg/custom-yolov4-tiny-detector.cfg /content/drive/MyDrive/SVHN/
%cp backup/custom-yolov4-tiny-detector_last.weights /content/drive/MyDrive/SVHN/
%cp backup/custom-yolov4-tiny-detector_best.weights /content/drive/MyDrive/SVHN/
%cp backup/custom-yolov4-tiny-detector_final.weights /content/drive/MyDrive/SVHN/
###Output
_____no_output_____ |
mind_models/MIND textual features.ipynb | ###Markdown
1. Preprocessing
###Code
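# Imports assumed by the cells below (the notebook's setup cells are not shown here);
# `merged` -- the joined MIND behaviors/news dataframe -- plus the Keras layers and the
# pretrained `word2vec` vectors used further down are likewise assumed to be loaded earlier.
import collections
import math
import re
from collections import Counter

import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.utils import shuffle
from tqdm import tqdm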
# remove users with 5 or fewer interactions
print("Len before removal: ",len(merged))
_keys = merged["user"].value_counts()[merged["user"].value_counts() > 5].keys()
merged = merged[merged["user"].isin(_keys)]
print("Len after removal: ",len(merged))
user_enc = LabelEncoder()
article_enc = LabelEncoder()
merged["user_id"] = user_enc.fit_transform(merged["user"].values)
merged["article_id"] = article_enc.fit_transform(merged["news_id"].values)
def hyphen_to_underline(category):
"""
Convert hyphen to underline for the subcategories. So that Tfidf works correctly
"""
return category.replace("-","_")
merged["subcategory_cleaned"] = merged["sub_category"].apply(func = hyphen_to_underline)
category_enc = LabelEncoder()
subcategory_enc = LabelEncoder()
merged["subcategory_int"] = subcategory_enc.fit_transform(merged["subcategory_cleaned"].values)
merged["category_int"] = subcategory_enc.fit_transform(merged["category"].values)
users = merged["user_id"].unique()
userid_to_profile = collections.defaultdict(list)
for user_id in tqdm(users):
user_subcat = merged[merged["user_id"] == user_id]["subcategory_int"].values.tolist()
counter = Counter(user_subcat)
s = sorted(user_subcat, key=lambda x: (counter[x], x), reverse=True)
final_subcategories = []
for elem in s:
if elem not in final_subcategories:
final_subcategories.append(elem)
while len(final_subcategories) < 6:
final_subcategories.append(0)
userid_to_profile[user_id] = final_subcategories[:6]
profile_df = pd.DataFrame.from_dict(userid_to_profile, orient="index")
profile_df["user_id"] = profile_df.index
merged = merged.merge(profile_df, on="user_id")
merged = merged.rename(columns={0: "p0", 1: "p1", 2: "p2", 3: "p3", 4: "p4", 5: "p5"})
article_id_to_category_int = merged[["article_id", "category_int"]].set_index("article_id").to_dict()
article_id_to_category_int = article_id_to_category_int["category_int"]
article_id_to_subcategory_int = merged[["article_id", "subcategory_int"]].set_index("article_id").to_dict()
article_id_to_subcategory_int = article_id_to_subcategory_int["subcategory_int"]
###Output
_____no_output_____
###Markdown
1.1 Text preprocessing
###Code
import nltk
from nltk.corpus import stopwords
# Helper functions
def _removeNonAscii(s):
return "".join(i for i in s if ord(i)<128)
def make_lower_case(text):
return text.lower()
def remove_stop_words(text):
text = text.split()
stops = set(stopwords.words("english"))
text = [w for w in text if not w in stops]
text = " ".join(text)
return text
def remove_html(text):
html_pattern = re.compile('<.*?>')
return html_pattern.sub(r'', text)
def remove_punctuation(text):
text = re.sub(r'[^\w\s]', '', text)
return text
def text_to_list(text):
text = text.split(" ")
return text
def clean_title(df):
df["title_cleaned"] = df.title.apply(func = make_lower_case)
#df["title_cleaned"] = df.title_cleaned.apply(func = remove_stop_words)
df["title_cleaned"] = df.title_cleaned.apply(func = remove_punctuation)
df["title_cleaned"] = df.title_cleaned.apply(func = _removeNonAscii)
return df
def clean_abstract(df):
df["abstract_cleaned"] = df.abstract.apply(func = make_lower_case)
#df["abstract_cleaned"] = df.abstract_cleaned.apply(func = remove_stop_words)
df["abstract_cleaned"] = df.abstract_cleaned.apply(func = remove_punctuation)
df["abstract_cleaned"] = df.abstract_cleaned.apply(func = _removeNonAscii)
return df
merged = clean_title(merged)
merged = clean_abstract(merged)
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
MAXLEN_TITLE = 20
MAXLEN_ABSTRACT = 30
oov_tok = "<UNK>"
tokenizer = Tokenizer(oov_token=oov_tok)
tokenizer.fit_on_texts(merged["abstract_cleaned"].values)
temp = tokenizer.texts_to_sequences(merged["title_cleaned"].values)
temp = pad_sequences(temp, padding="post", maxlen=MAXLEN_TITLE)
merged["title_tokenized"] = temp.tolist()
temp = tokenizer.texts_to_sequences(merged["abstract_cleaned"].values)
temp = pad_sequences(temp, padding="post", maxlen=MAXLEN_ABSTRACT)
merged["abstract_tokenized"] = temp.tolist()
word_index = tokenizer.word_index
for i,w in enumerate(word_index):
print(w, word_index.get(w))
if i == 4:
break
VOCABSIZE = len(tokenizer.word_index)
VOCABSIZE
article_to_title = merged[["article_id", "title_tokenized"]].set_index("article_id").to_dict()["title_tokenized"]
article_to_category = merged[["article_id", "category_int"]].set_index("article_id").to_dict()["category_int"]
article_to_subcategory = merged[["article_id", "subcategory_int"]].set_index("article_id").to_dict()["subcategory_int"]
article_to_abstract = merged[["article_id", "abstract_tokenized"]].set_index("article_id").to_dict()["abstract_tokenized"]
###Output
_____no_output_____
###Markdown
2. Train test split
###Code
def train_test_split(df, user_id, article_id, have_timestamp, timestamp):
"""
params:
col_1: user_id
col_2: article_id
"""
df_test = df
if have_timestamp: # if df have timestamp; take last interacted article into test set
df_test = df_test.sort_values(timestamp).groupby(user_id).tail(1)
else:
df_test = df_test.sort_values(user_id).groupby(user_id).tail(1)
df_train = df.drop(index=df_test.index)
assert df_test.shape[0] + df_train.shape[0] == df.shape[0]
return df_train, df_test
df_train_true, df_test_true = train_test_split(merged, "user_id", "article_id", False, 0)
def get_userid_to_article_history(df):
userid_to_article_history = {}
for user_id in tqdm(df["user_id"].unique()):
click_history = df[df["user_id"] == user_id]["article_id"].values
if len(click_history) < 10:
while len(click_history) < 10:
click_history = np.append(click_history, 0)
if len(click_history) > 10:
click_history = click_history[:10]
userid_to_article_history[user_id] = click_history
return userid_to_article_history
userid_to_article_history = get_userid_to_article_history(df_train_true)
all_article_ids = merged["article_id"].unique()
def negative_sampling(train_df, all_article_ids, user_id, article_id):
"""
Negative sample training instance; for each positive instance, add 4 negative articles
    Returns a dataframe with user ids, click histories, profile slots p0-p5, article ids, category/subcategory ids, titles, abstracts and labels
"""
user_ids, user_click_history, articles, article_category, article_sub_category,titles,abstract, labels = [],[],[], [], [], [], [], []
p0, p1, p2, p3, p4, p5, p6, p7, p8, p9 = [], [], [], [], [], [], [], [], [], []
user_item_set = set(zip(train_df[user_id],
train_df[article_id]))
num_negatives = 4
for (u, i) in tqdm(user_item_set):
user_ids.append(u)
user_click_history.append(userid_to_article_history[u])
profile = np.array(userid_to_profile[u])
p0.append(profile[0])
p1.append(profile[1])
p2.append(profile[2])
p3.append(profile[3])
p4.append(profile[4])
p5.append(profile[5])
article_category.append(article_id_to_category_int[i])
article_sub_category.append(article_id_to_subcategory_int[i])
titles.append(article_to_title[i])
abstract.append(article_to_abstract[i])
for _ in range(num_negatives):
negative_item = np.random.choice(all_article_ids)
while (u, negative_item) in user_item_set:
negative_item = np.random.choice(all_article_ids)
user_ids.append(u)
user_click_history.append(userid_to_article_history[u])
p0.append(profile[0])
p1.append(profile[1])
p2.append(profile[2])
p3.append(profile[3])
p4.append(profile[4])
p5.append(profile[5])
article_category.append(article_id_to_category_int[negative_item])
article_sub_category.append(article_id_to_subcategory_int[negative_item])
titles.append(article_to_title[negative_item])
abstract.append(article_to_abstract[negative_item])
articles.append(negative_item)
labels.append(0)
articles.append(i)
labels.append(1)
user_ids, user_click_history, p0, p1, p2, p3, p4, p5, articles,article_category,article_sub_category,titles,abstract, labels = shuffle(user_ids,user_click_history, p0, p1, p2, p3, p4, p5, articles,article_category,article_sub_category,titles,abstract, labels, random_state=0)
return pd.DataFrame(list(zip(user_ids,user_click_history,p0, p1, p2, p3, p4, p5, articles,article_category,article_sub_category,titles,abstract, labels)), columns=["user_id","user_history","p0", "p1", "p2", "p3", "p4", "p5", "article_id","article_category","article_sub_category","titles","abstract", "labels"])
df_train = negative_sampling(df_train_true, all_article_ids, "user_id", "article_id")
def fix_dftrain(df, column, max_len, padding):
i = 0
for i in tqdm(range(max_len)):
df[column + "_" + str(i)] = df[column].apply(lambda x: x[i] if i < len(x) else padding)
#df.drop(column, axis=1, inplace=True)
return df
#df_train = fix_dftrain(df_train, "user_history", 10, 0)
#df_train.drop(columns=["user_history"], inplace=True)
#df_train.head()
# For each user; for each item the user has interacted with in the test set;
# Sample 99 items the user has not interacted with in the past and add the one test item
def negative_sample_testset(original_df, df_test, all_article_ids, user_id, article_id):
    test_user_item_set = set(zip(df_test[user_id], df_test[article_id]))
    user_interacted_items = original_df.groupby(user_id)[article_id].apply(list).to_dict()
users = []
p0, p1, p2, p3, p4, p5, p6, p7, p8, p9 = [], [], [], [], [], [], [], [], [], []
res_arr = []
article_category, article_sub_category = [], []
userid_to_true_item = {} # keep track of the real items
for (u,i) in tqdm(test_user_item_set):
interacted_items = user_interacted_items[u]
not_interacted_items = set(all_article_ids) - set(interacted_items)
selected_not_interacted = list(np.random.choice(list(not_interacted_items), 99))
test_items = [i]+selected_not_interacted
temp = []
profile = userid_to_profile[u]
for j in range(len(test_items)):
temp.append([u,
userid_to_article_history[u],
profile[0],
profile[1],
profile[2],
profile[3],
profile[4],
profile[5],
test_items[j],
article_id_to_category_int[test_items[j]],
article_id_to_subcategory_int[test_items[j]],
article_to_title[test_items[j]],
article_to_abstract[test_items[j]]
])
# user_click_history.append(userid_to_article_history[u])
res_arr.append(temp)
userid_to_true_item[u] = i
X_test = np.array(res_arr)
X_test = X_test.reshape(-1, X_test.shape[-1])
df_test = pd.DataFrame(X_test, columns=["user_id",
"click_history",
"p0",
"p1",
"p2",
"p3",
"p4",
"p5",
"article_id",
"category",
"sub_category",
"title",
"abstract"])
return X_test, df_test, userid_to_true_item
X_test, df_test, userid_to_true_item = negative_sample_testset(merged, df_test_true, merged["article_id"].unique(), "user_id", "article_id")
def fix_dftest(df, column, max_len, padding):
i = 0
for i in tqdm(range(max_len)):
df[column + "_" + str(i)] = df[column].apply(lambda x: x[i] if i < len(x) else padding)
#df.drop(column, axis=1, inplace=True)
return df
#df_test = fix_dftest(df_test, "click_history", 10, 0)
#df_test.drop(columns=["click_history"], inplace=True)
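# Leave-one-out ranking metrics:
#   Hit Ratio@K -> 1 if the held-out item appears in the top-K list, else 0
#   NDCG@K      -> log-discounted gain rewarding the held-out item for ranking near the top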
def getHitRatio(ranklist, gtItem):
for item in ranklist:
if item == gtItem:
return 1
return 0
def getNDCG(ranklist, gtItem):
for i in range(len(ranklist)):
item = ranklist[i]
if item == gtItem:
return math.log(2) / math.log(i+2)
return 0
###Output
_____no_output_____
###Markdown
4. Model
###Code
num_users = len(merged["user_id"].unique())
num_items = len(merged["article_id"].unique())
num_categories = len(merged["category_int"].unique())
num_sub_categories = len(merged["subcategory_int"].unique())
dims = 20
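# NeuMF-style architecture: a matrix-factorization (GMF) branch multiplies 20-d user
# and item embeddings element-wise, a second branch works on wider user/item/category
# embeddings, and the concatenated branches feed a sigmoid click-probability output.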
def get_model_neumfonefeat(num_users, num_items, dims, dense_layers=[128, 64, 32, 8]):
user_input = Input(shape=(1,), name="user")
item_input = Input(shape=(1,), name="item")
mf_user_emb = Embedding(output_dim=dims,
input_dim=num_users,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_user_emb")(user_input)
mf_item_emb = Embedding(output_dim=dims,
input_dim=num_items,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_item_emb")(item_input)
num_layers = len(dense_layers)
mlp_user_emb = Embedding(output_dim=int(dense_layers[0] / 2),
input_dim=num_users,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mlp_user_emb")(user_input)
mlp_item_emb = Embedding(output_dim=int(dense_layers[0] / 2),
input_dim=num_items,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mlp_user_item")(item_input)
# Matrix factorization
mf_user_vecs = Reshape([dims])(mf_user_emb)
mf_item_vecs = Reshape([dims])(mf_item_emb)
mf_vec = multiply([mf_user_vecs, mf_item_vecs])
#MLP
category_input = Input(shape=(1,), name="category_input")
item_category_emb = Embedding(input_dim=num_categories, output_dim=int(dense_layers[0] / 2), name="category_emd", embeddings_regularizer=regularizers.l2(0.001))(category_input)
item_category_flatten = Flatten()(item_category_emb)
user_flatten = Flatten()(mlp_user_emb)
item_flatten = Flatten()(mlp_item_emb)
wide_features = Concatenate()([item_category_flatten,user_flatten, item_flatten])
mlp_vector = Flatten()(wide_features)
for num_dense in dense_layers:
l = Dense(num_dense, activation="relu")
mlp_vector = l(mlp_vector)
mlp_vector = Dropout(0.2)(mlp_vector)
mlp_vec = Concatenate()([mlp_user_emb, mlp_item_emb])
mlp_vector = Flatten()(mlp_vec)
y = Concatenate()([mf_vec, mlp_vector])
y = Dense(1, activation="sigmoid", name="pred")(y)
model = Model(inputs=[user_input, item_input,category_input], outputs=y)
model.compile(
optimizer=Adam(0.01),
loss="binary_crossentropy",
metrics=["accuracy"],
)
return model
model_neumf_one_feat = get_model_neumfonefeat(num_users, num_items, dims)
###### Training ########
user_input = df_train.user_id.values
articles = df_train.article_id.values
category = df_train.article_category.values
labels = df_train.labels.values
epochs = 3
for epoch in range(epochs):
hist = model_neumf_one_feat.fit([user_input,articles,category], labels, validation_split=0.1, epochs=1, shuffle=True)
test_users = df_test.user_id.unique()[:100]
test_users = shuffle(test_users)
hits_ten,hits_five,ndcgs_ten,ndcgs_five = [], [], [], []
h_ten, h_five, n_ten, n_five = [], [], [], []
for user_id in tqdm(test_users):
user_df = df_test[df_test["user_id"] == user_id ]
users = np.array([user_id]*100).reshape(-1,1).astype(int)
items = user_df.article_id.values.reshape(-1,1).astype(int)
categories = user_df.category.values.reshape(-1,1).astype(int)
true_item = userid_to_true_item[user_id]
predictions = model_neumf_one_feat.predict([users, items,categories])
predicted_labels = np.squeeze(predictions)
top_ten_items = [items[k] for k in np.argsort(predicted_labels)[::-1][0:10].tolist()]
h_ten.append(getHitRatio(top_ten_items, true_item))
h_five.append(getHitRatio(top_ten_items[:5], true_item))
n_ten.append(getNDCG(top_ten_items, true_item))
n_five.append(getNDCG(top_ten_items[:5], true_item))
print(np.average(h_ten))
print(np.average(h_five))
print(np.average(n_ten))
print(np.average(n_five))
###Output
_____no_output_____
###Markdown
4.2 With title and abstract
###Code
num_users = len(merged["user_id"].unique())
num_items = len(merged["article_id"].unique())
num_categories = len(merged["category_int"].unique())
num_sub_categories = len(merged["subcategory_int"].unique())
dims = 20
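# Variant 4.2: adds the tokenized title as an extra input; a wide branch embeds the
# article category and the title tokens alongside the user/item MF branch.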
def get_model_neumfonefeat(num_users, num_items, dims, dense_layers=[128, 64, 32, 8]):
user_input = Input(shape=(1,), name="user")
item_input = Input(shape=(1,), name="item")
title_input = Input(shape=(MAXLEN_TITLE,), name="title")
category_input = Input(shape=(1,), name="category_input")
mf_user_emb = Embedding(output_dim=dims,
input_dim=num_users,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_user_emb")(user_input)
mf_item_emb = Embedding(output_dim=dims,
input_dim=num_items,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_item_emb")(item_input)
num_layers = len(dense_layers)
mlp_user_emb = Embedding(output_dim=int(dense_layers[0] / 2),
input_dim=num_users,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mlp_user_emb")(user_input)
mlp_item_emb = Embedding(output_dim=int(dense_layers[0] / 2),
input_dim=num_items,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mlp_user_item")(item_input)
title_emb = Embedding(output_dim=int(dense_layers[0] / 2),
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
input_dim=VOCABSIZE,
input_length=MAXLEN_TITLE,
name="title_em")(title_input)
item_category_emb = Embedding(input_dim=num_categories,
output_dim=int(dense_layers[0] / 2),
name="category_emd",
embeddings_regularizer=regularizers.l2(0.001))(category_input)
# Matrix factorization
mf_user_vecs = Reshape([dims])(mf_user_emb)
mf_item_vecs = Reshape([dims])(mf_item_emb)
mf_vec = multiply([mf_user_vecs, mf_item_vecs])
#MLP
item_category_flatten = Flatten()(item_category_emb)
user_flatten = Flatten()(mlp_user_emb)
item_flatten = Flatten()(mlp_item_emb)
title_flatten = Flatten()(title_emb)
wide_features = Concatenate()([item_category_flatten,title_flatten])
mlp_vector = Flatten()(wide_features)
for num_dense in dense_layers:
l = Dense(num_dense, activation="relu")
mlp_vector = l(mlp_vector)
mlp_vector = Dropout(0.2)(mlp_vector)
mlp_vec = Concatenate()([mlp_user_emb, mlp_item_emb])
mlp_vector = Flatten()(mlp_vec)
y = Concatenate()([mf_vec, mlp_vector])
y = Dense(1, activation="sigmoid", name="pred")(y)
model = Model(inputs=[user_input, item_input,category_input, title_input], outputs=y)
model.compile(
optimizer=Adam(0.01),
loss="binary_crossentropy",
metrics=["accuracy"],
)
return model
model_neumf_one_feat = get_model_neumfonefeat(num_users, num_items, dims)
###### Training ########
user_input = df_train.user_id.values
articles = df_train.article_id.values
category = df_train.article_category.values
titles = np.array([np.array(t) for t in df_train.titles.values])
labels = df_train.labels.values
epochs = 3
for epoch in range(epochs):
hist = model_neumf_one_feat.fit([user_input,articles,category, titles], labels, validation_split=0.1, epochs=1, shuffle=True)
test_users = df_test.user_id.unique()[:100]
test_users = shuffle(test_users)
hits_ten,hits_five,ndcgs_ten,ndcgs_five = [], [], [], []
h_ten, h_five, n_ten, n_five = [], [], [], []
for user_id in tqdm(test_users):
user_df = df_test[df_test["user_id"] == user_id ]
users = np.array([user_id]*100).reshape(-1,1).astype(int)
items = user_df.article_id.values.reshape(-1,1).astype(int)
categories = user_df.category.values.reshape(-1,1).astype(int)
titles = np.array([np.array(t) for t in user_df.title.values])
true_item = userid_to_true_item[user_id]
predictions = model_neumf_one_feat.predict([users, items,categories,titles])
predicted_labels = np.squeeze(predictions)
top_ten_items = [items[k] for k in np.argsort(predicted_labels)[::-1][0:10].tolist()]
h_ten.append(getHitRatio(top_ten_items, true_item))
h_five.append(getHitRatio(top_ten_items[:5], true_item))
n_ten.append(getNDCG(top_ten_items, true_item))
n_five.append(getNDCG(top_ten_items[:5], true_item))
print(np.average(h_ten))
print(np.average(h_five))
print(np.average(n_ten))
print(np.average(n_five))
###Output
_____no_output_____
###Markdown
4.3 With GRU
###Code
num_users = len(merged["user_id"].unique())
num_items = len(merged["article_id"].unique())
num_categories = len(merged["category_int"].unique())
num_sub_categories = len(merged["subcategory_int"].unique())
dims = 20
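# Variant 4.3: the title tokens are encoded with a bidirectional GRU (64 units per
# direction) and the encoding is concatenated with the user/item MF branch.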
def get_model_neumfonefeat(num_users, num_items, dims, dense_layers=[128, 64, 32, 8]):
user_input = Input(shape=(1,), name="user")
item_input = Input(shape=(1,), name="item")
title_input = Input(shape=(MAXLEN_TITLE,), name="title")
mf_user_emb = Embedding(output_dim=dims,
input_dim=num_users,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_user_emb")(user_input)
mf_item_emb = Embedding(output_dim=dims,
input_dim=num_items,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_item_emb")(item_input)
num_layers = len(dense_layers)
title_emb = Embedding(output_dim=int(dense_layers[0] / 2),
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
input_dim=VOCABSIZE,
input_length=MAXLEN_TITLE,
name="title_em")(title_input)
# Matrix factorization
mf_user_vecs = Reshape([dims])(mf_user_emb)
mf_item_vecs = Reshape([dims])(mf_item_emb)
mf_vec = multiply([mf_user_vecs, mf_item_vecs])
nlp_gru = layers.Bidirectional(layers.GRU(64))(title_emb)
nlp_gru = Dropout(0.5)(nlp_gru)
nlp_l = Dense(units=dense_layers[-1], activation="relu")(nlp_gru)
y = Concatenate()([mf_vec, nlp_l])
y = Dense(1, activation="sigmoid", name="pred")(y)
model = Model(inputs=[user_input, item_input, title_input], outputs=y)
model.compile(
optimizer=Adam(0.01),
loss="binary_crossentropy",
metrics=["accuracy"],
)
return model
model_neumf_one_feat = get_model_neumfonefeat(num_users, num_items, dims)
###### Training ########
train_loss = []
val_loss = []
user_input = df_train.user_id.values
articles = df_train.article_id.values
category = df_train.article_category.values
titles = np.array([np.array(t) for t in df_train.titles.values])
labels = df_train.labels.values
epochs = 3
for epoch in range(epochs):
hist = model_neumf_one_feat.fit([user_input,articles, titles], labels, validation_split=0.1, epochs=1, shuffle=True)
train_loss.append(hist.history["loss"])
val_loss.append(hist.history["val_loss"])
import matplotlib.pyplot as plt
sns.set_style("darkgrid")
plt.plot(train_loss)
plt.plot(val_loss)
plt.title('Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.savefig("arc1_loss.pdf")
plt.show()
test_users = df_test.user_id.unique()[:100]
test_users = shuffle(test_users)
hits_ten,hits_five,ndcgs_ten,ndcgs_five = [], [], [], []
h_ten, h_five, n_ten, n_five = [], [], [], []
for user_id in tqdm(test_users):
user_df = df_test[df_test["user_id"] == user_id ]
users = np.array([user_id]*100).reshape(-1,1).astype(int)
items = user_df.article_id.values.reshape(-1,1).astype(int)
categories = user_df.category.values.reshape(-1,1).astype(int)
titles = np.array([np.array(t) for t in user_df.title.values])
true_item = userid_to_true_item[user_id]
    predictions = model_neumf_one_feat.predict([users, items, titles])
predicted_labels = np.squeeze(predictions)
top_ten_items = [items[k] for k in np.argsort(predicted_labels)[::-1][0:10].tolist()]
h_ten.append(getHitRatio(top_ten_items, true_item))
h_five.append(getHitRatio(top_ten_items[:5], true_item))
n_ten.append(getNDCG(top_ten_items, true_item))
n_five.append(getNDCG(top_ten_items[:5], true_item))
print(np.average(h_ten))
print(np.average(h_five))
print(np.average(n_ten))
print(np.average(n_five))
###Output
_____no_output_____
###Markdown
4.4 GRU and pretrained
###Code
emb_mean = word2vec.vectors.mean()
emb_std = word2vec.vectors.std()
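# Build the embedding matrix for the tokenizer vocabulary: words found in the
# pretrained word2vec model keep their 300-d vectors, unknown words are sampled from
# a normal distribution matching word2vec's empirical mean and standard deviation.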
def pretrained_embedding(word_to_vec_map, word_to_index, emb_mean=emb_mean, emb_std=emb_std):
vocab_size = len(word_to_index) +1
dim = 300
emb_matrix = np.random.normal(emb_mean, emb_std, (vocab_size, dim))
for word, idx in word_to_index.items():
if word in word_to_vec_map:
emb_matrix[idx] = word_to_vec_map.get_vector(word)
return emb_matrix
w_2_i = {'<UNK>': 1, 'handsome': 2, 'cool': 3, 'shit': 4 }
em_matrix = pretrained_embedding(word2vec, w_2_i, emb_mean, emb_std)
em_matrix
emb_matrix = pretrained_embedding(word2vec, tokenizer.word_index)
num_users = len(merged["user_id"].unique())
num_items = len(merged["article_id"].unique())
num_categories = len(merged["category_int"].unique())
num_sub_categories = len(merged["subcategory_int"].unique())
dims = 20
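# Variant 4.4: same GRU title encoder as 4.3, but the title embedding layer is
# initialized with the 300-d pretrained word2vec matrix and frozen (trainable=False).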
def get_model_neumfonefeat(num_users, num_items, dims, emb_matrix=None):
dense_layers=[128, 64, 32, 8]
user_input = Input(shape=(1,), name="user")
item_input = Input(shape=(1,), name="item")
title_input = Input(shape=(MAXLEN_TITLE,), name="title")
mf_user_emb = Embedding(output_dim=dims,
input_dim=num_users,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_user_emb")(user_input)
mf_item_emb = Embedding(output_dim=dims,
input_dim=num_items,
input_length=1,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
name="mf_item_emb")(item_input)
num_layers = len(dense_layers)
title_emb = Embedding(output_dim=300,
embeddings_initializer='he_normal',
embeddings_regularizer=regularizers.l2(0.001),
input_dim=(VOCABSIZE +1),
input_length=MAXLEN_TITLE,
weights=[emb_matrix],
trainable=False,
name="title_em")(title_input)
# Matrix factorization
mf_user_vecs = Reshape([dims])(mf_user_emb)
mf_item_vecs = Reshape([dims])(mf_item_emb)
mf_vec = multiply([mf_user_vecs, mf_item_vecs])
# NLP
nlp_gru = layers.Bidirectional(layers.GRU(64))(title_emb)
nlp_gru = Dropout(0.5)(nlp_gru)
nlp_l = Dense(units=dense_layers[-1], activation="relu")(nlp_gru)
y = Concatenate()([mf_vec, nlp_l])
y = Dense(8, activation="relu")(y)
y = Dense(1, activation="sigmoid", name="pred")(y)
model = Model(inputs=[user_input, item_input, title_input], outputs=y)
model.compile(
optimizer=Adam(0.001),
loss="binary_crossentropy",
metrics=["accuracy"],
)
return model
model_neumf_one_feat = get_model_neumfonefeat(num_users, num_items, dims, emb_matrix)
###### Training ########
train_loss = []
val_loss = []
user_input = df_train.user_id.values
articles = df_train.article_id.values
category = df_train.article_category.values
titles = np.array([np.array(t) for t in df_train.titles.values])
labels = df_train.labels.values
epochs = 3
for epoch in range(epochs):
hist = model_neumf_one_feat.fit([user_input,articles, titles], labels, validation_split=0.1, epochs=1, shuffle=True, batch_size=256)
train_loss.append(hist.history["loss"])
val_loss.append(hist.history["val_loss"])
test_users = df_test.user_id.unique()[:100]
test_users = shuffle(test_users)
hits_ten,hits_five,ndcgs_ten,ndcgs_five = [], [], [], []
h_ten, h_five, n_ten, n_five = [], [], [], []
for user_id in tqdm(test_users):
user_df = df_test[df_test["user_id"] == user_id ]
users = np.array([user_id]*100).reshape(-1,1).astype(int)
items = user_df.article_id.values.reshape(-1,1).astype(int)
categories = user_df.category.values.reshape(-1,1).astype(int)
titles = np.array([np.array(t) for t in user_df.title.values])
true_item = userid_to_true_item[user_id]
predictions = model_neumf_one_feat.predict([users, items,titles])
predicted_labels = np.squeeze(predictions)
print(predicted_labels)
top_ten_items = [items[k] for k in np.argsort(predicted_labels)[::-1][0:10].tolist()]
h_ten.append(getHitRatio(top_ten_items, true_item))
h_five.append(getHitRatio(top_ten_items[:5], true_item))
n_ten.append(getNDCG(top_ten_items, true_item))
n_five.append(getNDCG(top_ten_items[:5], true_item))
print(np.average(h_ten))
print(np.average(h_five))
print(np.average(n_ten))
print(np.average(n_five))
import matplotlib.pyplot as plt
sns.set_style("darkgrid")
plt.plot(train_loss)
plt.plot(val_loss)
plt.title('Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['Train', 'Val'], loc='upper left')
plt.savefig("arc1_loss.pdf")
plt.show()
###Output
_____no_output_____ |
src/Fedavg.ipynb | ###Markdown
FedAvg Experiment
###Code
#@test {"skip": true}
!which python
import nest_asyncio
nest_asyncio.apply()
%load_ext tensorboard
from matplotlib import pyplot as plt
import sys
if not sys.warnoptions:
import warnings
warnings.simplefilter("ignore")
import collections
from IPython.display import display, HTML, IFrame
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff
print(tff.__version__)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
np.random.seed(0)
#def greetings():
# display(HTML('<b><font size="6" color="#ff00f4">Greetings, virtual tutorial participants!</font></b>'))
# return True
#l = tff.federated_computation(greetings)()
# Code for loading federated data from TFF repository
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(only_digits=False)
len(emnist_train.client_ids), len(emnist_test.client_ids)
# Let's look at the shape of our data
example_dataset = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[0])
#example_dataset
example_dataset.element_spec
# Let's select an example dataset from one of our simulated clients
example_dataset = emnist_train.create_tf_dataset_for_client(
emnist_train.client_ids[45])
# Your code to get an example element from one client:
example_element = next(iter(example_dataset))
example_element['label'].numpy()
plt.imshow(example_element['pixels'].numpy(), cmap='gray', aspect='equal')
plt.grid(False)
_ = plt.show()
###Output
_____no_output_____
###Markdown
**Exploring non-iid data**
###Code
## Example MNIST digits for one client
f = plt.figure(figsize=(20,4))
j = 0
for e in example_dataset.take(40):
plt.subplot(4, 10, j+1)
plt.imshow(e['pixels'].numpy(), cmap='gray', aspect='equal')
plt.axis('off')
j += 1
# Number of examples per label for a sample of clients
f = plt.figure(figsize=(12,7))
f.suptitle("Label Counts for a Sample of Clients")
for i in range(6):
ds = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[i])
k = collections.defaultdict(list)
for e in ds:
k[e['label'].numpy()].append(e['label'].numpy())
plt.subplot(2, 3, i+1)
plt.title("Client {}".format(i))
for j in range(62):
plt.hist(k[j], density=False, bins=[i for i in range(62)])
for i in range(2,5):
ds = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[i])
k = collections.defaultdict(list)
for e in ds:
k[e['label'].numpy()].append(e['pixels'].numpy())
f = plt.figure(i, figsize=(12,10))
f.suptitle("Client #{}'s Mean Image Per Label".format(i))
for j in range(20):
mn_img = np.mean(k[j],0)
plt.subplot(2, 10, j+1)
if (mn_img.size==1):
continue
plt.imshow(mn_img.reshape((28,28)))#,cmap='gray')
plt.axis('off')
###Output
_____no_output_____
###Markdown
Preprocessing the data
###Code
NUM_CLIENTS = 20
NUM_EPOCHS = 1
BATCH_SIZE = 20
SHUFFLE_BUFFER = 100
PREFETCH_BUFFER=10
def preprocess(dataset):
def batch_format_fn(element):
"""Flatten a batch `pixels` and return the features as an `OrderedDict`."""
return collections.OrderedDict(
x=tf.reshape(element['pixels'], [-1, 28,28,1]),
y=tf.reshape(element['label'], [-1, 1]))
return dataset.repeat(NUM_EPOCHS).shuffle(SHUFFLE_BUFFER).batch(
BATCH_SIZE).map(batch_format_fn).prefetch(PREFETCH_BUFFER)
preprocessed_example_dataset = preprocess(example_dataset)
sample_batch = tf.nest.map_structure(lambda x: x.numpy(),
next(iter(preprocessed_example_dataset)))
def make_federated_data(client_data, client_ids):
return [
preprocess(client_data.create_tf_dataset_for_client(x))
for x in client_ids
]
import random
shuffled_ids = emnist_train.client_ids.copy()
random.shuffle(shuffled_ids)
shuffled_ids_train = shuffled_ids[0:2500]
def create_keras_model():
data_format = 'channels_last'
initializer = tf.keras.initializers.RandomNormal(seed=0)
return tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(28, 28,1)),
tf.keras.layers.Conv2D(32,(3,3), activation='relu'),
tf.keras.layers.Conv2D(64,(3,3), activation='relu'),
tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2)),
tf.keras.layers.Dropout(rate=0.75),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu', kernel_initializer=initializer),
tf.keras.layers.Dropout(rate=0.5, seed=1),
tf.keras.layers.Dense(62, kernel_initializer=initializer),
tf.keras.layers.Softmax()
])
###Output
_____no_output_____
###Markdown
Centralized training
###Code
## Centralized training with keras ---------------------------------------------
# This is separate from the TFF tutorial, and demonstrates how to train a
# Keras model in a centralized fashion (contrasting training in a federated env)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 28,28,1).astype("float32") / 255
y_train = y_train.astype("float32")
mod = create_keras_model()
mod.compile(
optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]
)
h = mod.fit(
x_train,
y_train,
batch_size=64,
epochs=2
)
# ------------------------------------------------------------------------------
def model_fn():
# We _must_ create a new model here, and _not_ capture it from an external
# scope. TFF will call this within different graph contexts.
keras_model = create_keras_model()
return tff.learning.from_keras_model(
keras_model,
input_spec=preprocessed_example_dataset.element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
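# Federated Averaging: in each round the sampled clients train the model locally with
# SGD (lr=0.05) and the server applies the averaged update with SGD (lr=1.0, momentum=0.9).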
iterative_process = tff.learning.build_federated_averaging_process(
model_fn,
client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.05),
server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0, momentum=0.9))
state = iterative_process.initialize()
NUM_ROUNDS = 1000
import time
start = time.process_time()
weights = None
for round_num in range(0, NUM_ROUNDS):
sample_clients = np.random.choice(shuffled_ids_train, NUM_CLIENTS, replace=False)
federated_train_data = make_federated_data(emnist_train, sample_clients)
state, metrics = iterative_process.next(state, federated_train_data)
print('round {:2d}, metrics={}'.format(round_num, metrics['train']))
weights = state.model.trainable
print(time.process_time() - start)
import pickle
with open('global_model.pkl', 'wb') as f:
pickle.dump(weights,f)
f.close()
#@test {"skip": true}
import os
import shutil
logdir = "/tmp/logs/scalars/training/"
if os.path.exists(logdir):
shutil.rmtree(logdir)
# Your code to create a summary writer:
summary_writer = tf.summary.create_file_writer(logdir)
state = iterative_process.initialize()
#@test {"skip": true}
with summary_writer.as_default():
for round_num in range(1, NUM_ROUNDS):
state, metrics = iterative_process.next(state, federated_train_data)
print('round {:2d}, metrics={}'.format(round_num, metrics['train']))
for name, value in metrics['train'].items():
tf.summary.scalar(name, value, step=round_num)
#@test {"skip": true}
%tensorboard --logdir /tmp/logs/scalars/ --port=0
###Output
_____no_output_____
###Markdown
Evaluation
###Code
# Construct federated evaluation computation here:
evaluation = tff.learning.build_federated_evaluation(model_fn)
import random
shuffled_ids = emnist_test.client_ids.copy()
random.shuffle(shuffled_ids)
sample_clients = shuffled_ids_train[0:2500]
federated_test_data = make_federated_data(emnist_test, sample_clients)
len(federated_test_data), federated_test_data[0]
# Run evaluation on the test data here, using the federated model produced from
# training:
test_metrics = evaluation(state.model, federated_test_data)
str(test_metrics)
ckpt_manager = tff.simulation.FileCheckpointManager(root_dir="/src/training/")
ckpt_manager.save_checkpoint(tff.learning.framework.ServerState(state.model,state,None,None), round_num=1000)
###Output
_____no_output_____ |
Studying Materials/Course 4 Clustering and Retrieval/Week 2 Retrieval/0_nearest-neighbors-features-and-metrics_blank.ipynb | ###Markdown
Nearest Neighbors When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically* Decide on a notion of similarity* Find the documents that are most similar In the assignment you will* Gain intuition for different notions of similarity and practice finding similar documents. * Explore the tradeoffs with representing documents using raw word counts and TF-IDF* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. Import necessary packages As usual we need to first import the Python packages that we will need.The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html).
###Code
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
'''Check GraphLab Create version'''
from distutils.version import StrictVersion
assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'
###Output
_____no_output_____
###Markdown
Load Wikipedia dataset We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
###Code
wiki = graphlab.SFrame('people_wiki.gl')
wiki
###Output
_____no_output_____
###Markdown
Extract word count vectors As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in `wiki`.
###Code
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
###Output
_____no_output_____
###Markdown
Find nearest neighbors Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, again will we use a GraphLab Create implementation of nearest neighbor search.
###Code
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
###Output
_____no_output_____
###Markdown
Let's look at the top 10 nearest neighbors by performing the following query:
###Code
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
###Output
_____no_output_____
###Markdown
All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.* Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.* Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.* Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.* Andy Anstett is a former politician in Manitoba, Canada.Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
###Code
def top_words(name):
"""
Get a table of the most frequent words in the given person's wikipedia page.
"""
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
###Output
_____no_output_____
###Markdown
Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as **join**. The **join** operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See [the documentation](https://dato.com/products/create/docs/generated/graphlab.SFrame.join.html) for more details.For instance, running```obama_words.join(barrio_words, on='word')```will extract the rows from both tables that correspond to the common words.
###Code
combined_words = obama_words.join(barrio_words, on='word')
combined_words
###Output
_____no_output_____
###Markdown
Since both tables contained the column named `count`, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (`count`) is for Obama and the second (`count.1`) for Barrio.
###Code
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
###Output
_____no_output_____
###Markdown
**Note**. The **join** operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget `ascending=False` to display largest counts first.
###Code
combined_words.sort('Obama', ascending=False)
top_five_obama = list(combined_words['word'][:5])
def count_obama(row):
word_list = list(row['word_count'].keys())
for word in top_five_obama:
if word not in word_list:
return 0
return 1
check = wiki.apply(count_obama)
print('Number of individuals in wiki: {}'.format(len(wiki)))
print('Number of individuals with top five words in Obama entry: {}'.format(np.count_nonzero(check)))
###Output
Number of individuals in wiki: 59071
Number of individuals with top five words in Obama entry: 56066
###Markdown
**Quiz Question**. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words? Hint:* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function `has_top_words` to accomplish the task. - Convert the list of top 5 words into a set using the syntax```set(common_words)``` where `common_words` is a Python list. See [this link](https://docs.python.org/2/library/stdtypes.html#set) if you're curious about Python sets. - Extract the list of keys of the word count dictionary by calling the [`keys()` method](https://docs.python.org/2/library/stdtypes.html#dict.keys). - Convert the list of keys into a set as well. - Use [`issubset()` method](https://docs.python.org/2/library/stdtypes.html#set) to check if all 5 words are among the keys.* Now apply the `has_top_words` function on every row of the SFrame.* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
###Code
common_words = set(combined_words['word'][:5]) # YOUR CODE HERE
print(common_words)
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys()) # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return common_words.issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print(np.count_nonzero(wiki['has_top_words'])) # YOUR CODE HERE
###Output
set(['and', 'of', 'the', 'to', 'in'])
56066
###Markdown
**Checkpoint**. Check your `has_top_words` function on two random articles:
###Code
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
###Output
Output from your function: False
Correct output: False
Also check the length of unique_words. It should be 188
###Markdown
**Quiz Question**. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?Hint: To compute the Euclidean distance between two dictionaries, use `graphlab.toolkits.distances.euclidean`. Refer to [this link](https://dato.com/products/create/docs/generated/graphlab.toolkits.distances.euclidean.html) for usage.
###Code
obama_dict = wiki[wiki['name'] == 'Barack Obama']['word_count'][0]
bush_dict = wiki[wiki['name'] == 'George W. Bush']['word_count'][0]
biden_dict = wiki[wiki['name'] == 'Joe Biden']['word_count'][0]
obama_bush_distance = graphlab.toolkits.distances.euclidean(obama_dict, bush_dict)
obama_biden_distance = graphlab.toolkits.distances.euclidean(obama_dict, biden_dict)
bush_biden_distance = graphlab.toolkits.distances.euclidean(bush_dict, biden_dict)
print('Obama - Bush distance: {}'.format(obama_bush_distance))
print('Obama - Biden distance: {}'.format(obama_biden_distance))
print('Bush - Biden distance: {}'.format(bush_biden_distance))
###Output
Obama - Bush distance: 34.3947670438
Obama - Biden distance: 33.0756708171
Bush - Biden distance: 32.7566787083
###Markdown
**Quiz Question**. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
###Code
obama_words = wiki[wiki['name'] == 'Barack Obama'][['word_count']].stack('word_count', new_column_name=['word' , 'count'])
bush_words = wiki[wiki['name'] == 'George W. Bush'][['word_count']].stack('word_count', new_column_name=['word' , 'count'])
bush_obama_words = obama_words.join(bush_words, on='word')
bush_obama_words.rename({'count': 'obama', 'count.1': 'bush'})
bush_obama_words = bush_obama_words.sort('obama', ascending=False)
bush_obama_words[:10]
###Output
_____no_output_____
###Markdown
**Note.** Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words. TF-IDF to the rescue Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons. To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. **TF-IDF** (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:
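As a quick aside, the weighting idea can be sketched in a few lines of plain Python before handing things over to GraphLab (the numbers below are made up for illustration, and GraphLab's exact IDF variant may differ slightly):
```
import math

N = 59071     # hypothetical corpus size (number of articles)
tf = 40       # occurrences of a word in one article
df = 52000    # number of articles containing that word

# words that appear in most documents get a small IDF, so their weight stays low
tf_idf = tf * math.log(float(N) / df)
```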
###Code
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
###Output
_____no_output_____
###Markdown
Let's determine whether this list makes sense.* With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vector for Obama and Schilirio's pages. Notice that TF-IDF representation assigns a weight to each word. This weight captures relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
###Code
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
###Output
_____no_output_____
###Markdown
Using the **join** operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
###Code
obama_schiliro = obama_tf_idf.join(schiliro_tf_idf, on='word').rename({'weight': 'obama', 'weight.1': 'schriliro'}).sort('obama', ascending=False)
obama_schiliro
###Output
_____no_output_____
###Markdown
The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011. **Quiz Question**. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
###Code
common_words = set(obama_schiliro['word'][:5]) # YOUR CODE HERE
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys()) # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return common_words.issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
print('The number of individuals with all five top Obama tf-idf words in their entry:' , np.count_nonzero(wiki['has_top_words'])) # YOUR CODE HERE
###Output
('The number of individuals with all five top Obama tf-idf words in their entry:', 14)
###Markdown
Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words. Choosing metrics You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of `model_tf_idf`. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden. **Quiz Question**. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.
###Code
obama_tfidf = wiki[wiki['name'] == 'Barack Obama']['tf_idf'][0]
biden_tfidf = wiki[wiki['name'] == 'Joe Biden']['tf_idf'][0]
obama_biden_tfidf_distance = graphlab.toolkits.distances.euclidean(obama_tfidf, biden_tfidf)
import math
# manual cross-check of the GraphLab distance computation
print(math.sqrt(sum((obama_tfidf.get(word, 0) - biden_tfidf.get(word, 0))**2 for word in set(obama_tfidf) | set(biden_tfidf))))
print('The distance between Obama and Biden using tfidf is: {}'.format(obama_biden_tfidf_distance))
###Output
The distance between Obama and Biden using tfidf is: 123.29745601
###Markdown
The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
###Code
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
###Output
_____no_output_____
###Markdown
But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
###Code
def compute_length(row):
return len(row['text'].split(' '))
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
###Output
_____no_output_____
###Markdown
To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
###Code
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 300 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many of the Wikipedia articles are 300 words or more, and both Obama and Biden are over 300 words long.**Note**: For the interest of computation time, the dataset given here contains _excerpts_ of the articles rather than full text. For instance, the actual Wikipedia article about Obama is around 25000 words. Do not be surprised by the low numbers shown in the histogram. **Note:** Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them. To remove this bias, we turn to **cosine distances**:$$d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}$$Cosine distances let us compare word distributions of two articles of varying lengths.Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
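Spelled out for the dictionary-based TF-IDF vectors used in this notebook, the formula translates to the sketch below (`distance='cosine'` in the next cell computes the equivalent for us):
```
import numpy as np

def cosine_distance(x, y):
    # x and y are sparse vectors stored as {word: weight} dictionaries
    words = set(x) | set(y)
    xv = np.array([x.get(w, 0.0) for w in words])
    yv = np.array([y.get(w, 0.0) for w in words])
    return 1.0 - xv.dot(yv) / (np.linalg.norm(xv) * np.linalg.norm(yv))
```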
###Code
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
###Output
_____no_output_____
###Markdown
From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
###Code
plt.figure(figsize=(10.5,4.5))
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided. **Moral of the story**: In deciding the features and distance measures, check if they produce results that make sense for your particular application. Problem with cosine distances: tweets vs. long articles Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example. ```+--------------------------------------------------------+| +--------+ || One that shall not be named | Follow | || @username +--------+ || || Democratic governments control law in response to || popular act. || || 8:05 AM - 16 May 2016 || || Reply Retweet (1,332) Like (300) || |+--------------------------------------------------------+``` How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
###Code
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
###Output
_____no_output_____
###Markdown
Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
###Code
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
obama = wiki[wiki['name'] == 'Barack Obama']
obama
###Output
_____no_output_____
###Markdown
Now, compute the cosine distance between the Barack Obama article and this tweet:
###Code
obama_tf_idf = obama[0]['tf_idf']
tweet_array = []
obama_array = []
for word in set(obama_tf_idf.keys() + tweet_tf_idf.keys()):
    tweet_array.append(tweet_tf_idf.get(word, 0))
    obama_array.append(obama_tf_idf.get(word, 0))
obama_array = np.array(obama_array)
tweet_array = np.array(tweet_array)
# This quantity is the cosine *distance* (1 - cosine similarity)
cosine_distance = 1 - np.dot(tweet_array, obama_array) / (np.linalg.norm(obama_array) * np.linalg.norm(tweet_array))
print('Cosine distance between tweet and Obama wiki entry: {}'.format(cosine_distance))
# Cross-check with GraphLab's built-in cosine distance
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
###Output
_____no_output_____
###Markdown
Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:
###Code
model2_tf_idf.query(obama, label='name', k=10)
###Output
_____no_output_____ |
2QGetSave.ipynb | ###Markdown
Functions
###Code
#taus can be a list of numbers or a list of tuples, it should work for both
def getModuAll(adjDict,taus,pRandRewires,rewirings = 4000):
repetitions = len(adjDict)
lenTaus = len(taus)
lenPRand = len(pRandRewires)
Q = np.zeros((lenPRand,lenTaus,repetitions))
for rep in np.arange(repetitions):
for indT,tau in enumerate(taus):
for indP,p in enumerate(pRandRewires):
#load the specific rewired matrix
A = adjDict[rep+1][(p, tau, rewirings)][1]
#construct it so that igraph can read it
#make it undirected
g = ig.Graph(directed=False)
#make it weighted
g.es["weight"] = 1.0
g.add_vertices(len(A))
ix, jx = A.nonzero()
for i,j in zip(ix, jx):
if i<j:
g.add_edge(i, j, weight=A[i,j])
#calculate the clusters and their modularity score
clusters = g.community_multilevel(weights=g.es['weight'])
modularity_score = g.modularity(clusters.membership,weights=g.es['weight'])
#store it in the matrix
Q[indP,indT,rep] = modularity_score
return Q
def plotQSlices(varNormal,varLognormal,taus,pRandom):
ratio = 1
#Normal
#axis along repetitions
meanNormal = np.mean(varNormal,axis=2)
stdNormal = np.std(varNormal,axis=2)
seNormal = stdNormal/np.sqrt(varNormal.shape[2])
#LogNormal
#axis along repetitions
meanLognormal = np.mean(varLognormal,axis=2)
stdLognormal = np.std(varLognormal,axis=2)
seLognormal = stdLognormal/np.sqrt(varLognormal.shape[2])
ratio = 1
labels = [ 'Normal', 'Lognormal']
xLabel = 'tau'
ylabel = 'Modularity (Q)'
colorsPlot = [ 'orange', 'green']
shapePoint = ['-s','-v']
shapePointNoLine = ['s','v']
plt.rcParams['figure.figsize'] = [14, 6]
fig = plt.figure()
for ind, pR in enumerate(pRandom):
ax = fig.add_subplot(1, len(pRandom), ind+1)
#plt.subplot(len(pRandom),1,ind)
plt.xlabel(xLabel)
plt.ylabel(ylabel)
#plt.ylim((0, 0.8))
#plt.xlim((0,80))
ttl = 'p(random) = '+ str(pRandom[ind])
plt.title(ttl)
ax.errorbar(taus, meanNormal[ind,:], stdNormal[ind,:], mfc=colorsPlot[0], mec=colorsPlot[0], fmt=shapePoint[0], color=colorsPlot[0], label=labels[0])
ax.errorbar(taus, meanLognormal[ind,:], stdLognormal[ind,:], mfc=colorsPlot[1], mec=colorsPlot[1], fmt=shapePoint[1], color=colorsPlot[1], label=labels[1])
if ind == 0:
ax.legend(loc='upper right')
def plotQSlicesDiffTaus(varNormal,varLognormal,taus,pRandom):
ratio = 1
#Normal
#axis along repetitions
meanNormal = np.mean(varNormal,axis=2)
stdNormal = np.std(varNormal,axis=2)
seNormal = stdNormal/np.sqrt(varNormal.shape[2])
#LogNormal
#axis along repetitions
meanLognormal = np.mean(varLognormal,axis=2)
stdLognormal = np.std(varLognormal,axis=2)
seLognormal = stdLognormal/np.sqrt(varLognormal.shape[2])
ratio = 1
labels = [ 'Normal', 'Lognormal']
xLabel = 'tau'
ylabel = 'Modularity (Q)'
colorsPlot = [ 'orange', 'green']
shapePoint = ['-s','-v']
shapePointNoLine = ['s','v']
plt.rcParams['figure.figsize'] = [14, 6]
fig = plt.figure()
for ind, pR in enumerate(pRandom):
ax = fig.add_subplot(1, len(pRandom), ind+1)
#plt.subplot(len(pRandom),1,ind)
plt.xlabel(xLabel)
plt.ylabel(ylabel)
#plt.ylim((0, 0.8))
#plt.xlim((0,80))
ttl = 'p(random) = '+ str(pRandom[ind])
plt.title(ttl)
#print(taus['normal'])
#print(meanNormal[ind,:])
#print(stdNormal[ind,:])
ax.errorbar(taus['normal'], meanNormal[ind,:], stdNormal[ind,:], mfc=colorsPlot[0], mec=colorsPlot[0], fmt=shapePoint[0], color=colorsPlot[0], label=labels[0])
ax.errorbar(taus['lognormal'], meanLognormal[ind,:], stdLognormal[ind,:], mfc=colorsPlot[1], mec=colorsPlot[1], fmt=shapePoint[1], color=colorsPlot[1], label=labels[1])
if ind == 0:
ax.legend(loc='upper right')
#typical var1 is the variable for normal, var2 for lognormal,xVar is taus
def plotQSlicesGeneral(var1,var2,xVar,xLabel,yLabel,labels,ttl):
ratio = 1
#Normal
#axis along repetitions
mean1 = np.mean(var1,axis=1)
std1 = np.std(var1,axis=1)
se1 = std1/np.sqrt(var1.shape[1])
#LogNormal
#axis along repetitions
mean2 = np.mean(var2,axis=1)
std2 = np.std(var2,axis=1)
se2 = std2/np.sqrt(var2.shape[1])
ratio = 1
colorsPlot = [ 'orange', 'green']
shapePoint = ['-s','-v']
shapePointNoLine = ['s','v']
plt.rcParams['figure.figsize'] = [14, 6]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
plt.xlabel(xLabel)
plt.ylabel(yLabel)
#plt.ylim((0, 0.8))
#plt.xlim((0,80))
plt.title(ttl)
ax.errorbar(xVar, mean1, std1, mfc=colorsPlot[0], mec=colorsPlot[0], fmt=shapePoint[0], color=colorsPlot[0], label=labels[0])
ax.errorbar(xVar, mean2, std2, mfc=colorsPlot[1], mec=colorsPlot[1], fmt=shapePoint[1], color=colorsPlot[1], label=labels[1])
ax.legend(loc='upper right')
###Output
_____no_output_____
###Markdown
Load A, calculate Q, save Q
###Code
#parameters tested
#rewirings = 4000
#pRand = [0,0.2]
rewirings = 16000
taus= [1.5, 2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8]
pRand = [0.2]
directoryALoad ='data/ArandA/200Vertices/1Step/'
weightDist = 'lognormal'
filePathALoad = directoryALoad + 'ArandA_'+weightDist+'_p'+str(pRand[0])+'_rewir'+str(rewirings)+'.pckl'
#taus = np.arange(0,8.2,0.2)
#taus = np.arange(3.5,4.55,0.05)
#taus = np.arange(4.5,6.55,0.05)
#weightDist = 'lognormal'
###### Load Adjacency matrices
#directoryALoad ='data/ArandA/'
#filePathALoad = directoryALoad + 'ArandA_tauTransition_'+weightDist+'_'+str(rewirings)+'.pckl'
#filePathALoad = directoryALoad + 'ArandA_'+weightDist+'_'+str(rewirings)+'.pckl'
ArandA = hf.loadVar(filePathALoad)
#######Path to save modularity values
#directoryQSave ='data/ModularityValues/'
directoryQSave = directoryALoad
descr = 'Q_'+weightDist+'_p'+str(pRand[0])+'_rewir'+str(rewirings)
#descr = 'QTransition_'
filePathQSave = directoryQSave + descr+'.pckl'
#Calculate the modularity values
Q = getModuAll(ArandA,taus,pRand,rewirings)
hf.saveVarSimple((Q,taus), filePathQSave)
np.mean(Q,axis=2)
###Output
_____no_output_____
###Markdown
Load A, show A
###Code
rewirings = 36000
taus= [1.5, 2,2.5,3,3.5,4,4.5,5,5.5,6,6.5,7,7.5,8]
pRand = [0.2]
directoryALoad ='data/ArandA/300Vertices/1Step/'
weightDist = 'normal'
filePathALoad = directoryALoad + 'ArandA_'+weightDist+'_p'+str(pRand[0])+'_rewir'+str(rewirings)+'.pckl'
ArandA = hf.loadVar(filePathALoad)
import swnMetrics as swn
A = ArandA[1][0.2,8,36000][1]
AReord = swn.reorderA2Visualize(A)
plt.imshow(AReord, cmap='coolwarm')
###Output
_____no_output_____
###Markdown
Load A, calculate Q, save Q for the hist figure
###Code
#parameters tested
rewirings = 4000
pRand = [0,0.2]
weightDist = ['normal','lognormal']
###### Load Adjacency matrices
directoryALoad ='data/ArandA/1000iterationsHist/'
directoryQSave ='data/ModularityValues/1000iterationsHist/'
filePathALoad, filePathQSave = {}, {}
for p in pRand:
for wD in weightDist:
filePathALoad[(wD,p)] = directoryALoad + 'ArandA_tauTransProx_'+wD+'_p'+str(p)+'_rewir'+ str(rewirings)+'.pckl'
filePathQSave[(wD,p)] = directoryQSave + 'Q_'+wD+'_p'+str(p)+'.pckl'
taus = {}
taus['normal',0] = [4, 4.1, 4.2, 4.3, 4.4]
taus['normal',0.2] = [3.95, 4.05, 4.15, 4.25, 4.35]
taus['lognormal',0] = [5.6, 5.7, 5.8, 5.9, 6]
taus['lognormal',0.2] = [5.3, 5.4, 5.5, 5.6, 5.7]
Q = {}
for p in pRand:
for wD in weightDist:
ArandA = hf.loadVar(filePathALoad[wD,p])
Q[wD,p] = getModuAll(ArandA,taus[wD,p],[p])
hf.saveVarSimple((Q[wD,p],taus[wD,p]), filePathQSave[wD,p])
ArandA[1].keys()
taus[wD,p]
###Output
_____no_output_____
###Markdown
Load Q , plot Q
###Code
#parameters tested
rewirings = 4000
pRand = [0,0.2]
directoryQLoad ='data/ModularityValues/'
descr = 'QTransition_'
QLoadPath = {}
QLoadPath['normal'] = directoryQLoad + descr+'normal'+'.pckl'
QLoadPath['lognormal'] = directoryQLoad + descr+'lognormal'+'.pckl'
tausAll ={}
dictQ = {}
(dictQ['normal'],tausAll['normal']) = hf.loadVar(QLoadPath['normal'])
(dictQ['lognormal'],tausAll['lognormal']) = hf.loadVar(QLoadPath['lognormal'])
#plotQSlices(QNormal,QLognormal,taus,pRand)
plotQSlicesDiffTaus(dictQ['normal'],dictQ['lognormal'],tausAll,pRand)
###Output
_____no_output_____ |
bronze/Q88_Grovers_Search_One_Qubit_Representation_Solutions.ipynb | ###Markdown
$ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}1}}} $ Solutions for Grover's Search: One Qubit Representation _prepared by Abuzer Yakaryilmaz_ Task 1 Execute Grover's search algorithm for 5 steps where $ N = 16 $ and the first element is marked.Draw all quantum states on the unit circle during the execution.Print the angle of each state in degree (use $\sin^{-1}$), and check whether there is any pattern created by the oracle and inversion operators?Is there any pattern for each step of Grover's algorithm? Solution
###Code
def query(elements=[1],marked_elements=[0]):
for i in marked_elements:
elements[i] = -1 * elements[i]
return elements
def inversion (elements=[1]):
# summation of all values
summation = 0
for i in range(len(elements)):
summation += elements[i]
# mean of all values
mean = summation / len(elements)
# reflection over mean
for i in range(len(elements)):
value = elements[i]
new_value = mean - (elements[i]-mean)
elements[i] = new_value
return elements
from math import asin, pi
# initial values
iteration = 5
N = 16
marked_elements = [0]
k = len(marked_elements)
elements = []
states_on_unit_circle= []
# initial quantum state
for i in range(N):
elements.append(1/N**0.5)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,"0"])
print(elements)
# Execute Grover's search algorithm for $iteration steps
for step in range(iteration):
# query
elements = query(elements,marked_elements)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,str(step)+"'"])
# inversion
elements = inversion(elements)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,str(step+1)])
# draw all states
%run quantum.py
draw_qubit_grover()
for state in states_on_unit_circle:
draw_quantum_state(state[0],state[1],state[2])
show_plt()
# print the angles
print("angles in degree")
for state in states_on_unit_circle:
print(asin(state[1])/pi*180)
###Output
[0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25]
###Markdown
Observations The oracle operator is a reflection over the $x$-axis. The inversion operator is a reflection over the initial state. If the angle of the initial state is $ \theta $, then each step of Grover's algorithm is a rotation by angle $ 2 \theta $. Task 2 In Task 1, after which step is the probability of observing a marked element the highest? Solution As can be verified from the angles, after the third step the probability of observing a marked element is the highest. Task 3 We have a list of size $ N = 128 $. We iterate Grover's search algorithm for 10 steps. Visually determine (i.e., Tasks 1 & 2) the good number of iterations if the number of marked elements is 1, 2, 4, or 8. (The quantum state on the unit circle should be close to the $y$-axis.) Solution
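Before the simulation below, a quick analytic cross-check (an added sketch, not part of the original task): the rotation-by-$2\theta$ picture implies that a good step count is roughly $\pi/(4\theta) - 1/2$, where $\theta = \sin^{-1}\sqrt{k/N}$.
###Code
# Added sketch: analytic estimate of the good number of Grover iterations for N = 128
from math import asin, pi, sqrt
N = 128
for k in [1, 2, 4, 8]:
    theta = asin(sqrt(k / float(N)))   # angle of the initial state
    print(k, round(pi / (4 * theta) - 0.5))
###Output
_____no_output_____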
###Code
def query(elements=[1],marked_elements=[0]):
for i in marked_elements:
elements[i] = -1 * elements[i]
return elements
def inversion (elements=[1]):
# summation of all values
summation = 0
for i in range(len(elements)):
summation += elements[i]
# mean of all values
mean = summation / len(elements)
# reflection over mean
for i in range(len(elements)):
value = elements[i]
new_value = mean - (elements[i]-mean)
elements[i] = new_value
return elements
from math import asin, pi
# initial values
iteration = 10
N = 128
# try each case one by one
#marked_elements = [0]
marked_elements = [0,1]
#marked_elements = [0,1,2,3]
#marked_elements = [0,1,2,3,4,5,6,7]
k = len(marked_elements)
elements = []
states_on_unit_circle= []
# initial quantum state
for i in range(N):
elements.append(1/N**0.5)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,"0"])
# Execute Grover's search algorithm for $iteration steps
for step in range(iteration):
# query
elements = query(elements,marked_elements)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,str(step)+"''"])
# inversion
elements = inversion(elements)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,str(step+1)])
# draw all states
%run quantum.py
draw_qubit_grover()
for state in states_on_unit_circle:
draw_quantum_state(state[0],state[1],state[2])
show_plt()
# print the angles
print("angles in degree")
for state in states_on_unit_circle:
print(asin(state[1])/pi*180)
###Output
_____no_output_____
###Markdown
Observations The good number of iterations:- For $ k = 1 $, $ 8 $ iterations- For $ k = 2 $, $ 6 $ iterations- For $ k = 4 $, $ 4 $ iterations- For $ k = 8 $, $ 3 $ or $ 9 $ iterations Task 4 We have a list of size $ N = 256 $. We iterate Grover's search algorithm for 20 (or 10) steps. Visually determine (i.e., Tasks 1 & 2) the good number of iterations if the number of marked elements is 1, 2, 4, or 8. (The quantum state on the unit circle should be close to the $y$-axis.) Solution
###Code
def query(elements=[1],marked_elements=[0]):
for i in marked_elements:
elements[i] = -1 * elements[i]
return elements
def inversion (elements=[1]):
# summation of all values
summation = 0
for i in range(len(elements)):
summation += elements[i]
# mean of all values
mean = summation / len(elements)
# reflection over mean
for i in range(len(elements)):
value = elements[i]
new_value = mean - (elements[i]-mean)
elements[i] = new_value
return elements
from math import asin, pi
# initial values
iteration = 20
#iteration = 10
N = 256
# try each case one by one
marked_elements = [0]
#marked_elements = [0,1]
#marked_elements = [0,1,2,3]
#marked_elements = [0,1,2,3,4,5,6,7]
k = len(marked_elements)
elements = []
states_on_unit_circle= []
# initial quantum state
for i in range(N):
elements.append(1/N**0.5)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,"0"])
# Execute Grover's search algorithm for $iteration steps
for step in range(iteration):
# query
elements = query(elements,marked_elements)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,str(step)+"''"])
# inversion
elements = inversion(elements)
x = elements[N-1] * ((N-k)**0.5)
y = elements[0] * (k**0.5)
states_on_unit_circle.append([x,y,str(step+1)])
# draw all states
%run quantum.py
draw_qubit_grover()
for state in states_on_unit_circle:
draw_quantum_state(state[0],state[1],state[2])
show_plt()
# print the angles
print("angles in degree")
for state in states_on_unit_circle:
print(asin(state[1])/pi*180)
###Output
_____no_output_____ |
tvo_pricingNotebook.ipynb | ###Markdown
TVO pricing in the lognormal fSABR modelWe consider the pricing problem of target volatility options (TVO) in the lognormal fractional SABR model. The pricing formulas implemented in this notebook were developed and proved in the paper [Alos et al. (2018)](https://arxiv.org/pdf/1801.08215.pdf) available on arXiv. We give here for completeness some useful theoretical results and definitions. 1. The fractional SABR modelThe price of underlying asset $S_t$ and its instantaneous volatility $Y_t$ in the lognormal fSABR model are governed by the following SDE:$$\begin{array}{rcl}\dfrac{dS_t}{S_t} &=& Y_t \left(\rho dB_t + \bar\rho dW_t\right), \\\crY_t &=& Y_0 \exp\left(\nu B_t^H\right),\end{array}$$where $B_t$ and $W_t$ are independent Brownian motions defined on the filtered probability space $(\Omega, \mathcal{F}_t, \mathbb Q)$ satisfying the usual conditions, $\rho\in (-1,1)$ is the correlation, $\bar\rho := \sqrt{1 - \rho^2}$, and $B_t^H$ is the fractional Brownian motion driven by $B_t$. That is, $$B_t^H = \int_0^t K(t,s) dB_s,$$where $K$ is the Molchan-Golosov kernel$$K(t,s) = c_H (t-s)^{H-\frac{1}{2}}{ }_2F_1\left(H-\frac{1}{2},\frac{1}{2}-H,H+\frac{1}{2};1-\frac{t}{s}\right)\mathbf{1}_{[0,t]}(s),$$with $c_H=\left[\frac{2H\Gamma\left(\frac{3}{2}-H\right)}{\Gamma(2-2H)\Gamma\left(H+\frac{1}{2}\right)}\right]^{1/2}$, ${ }_2F_1$ is the Gauss hypergeometric function, and $\Gamma$ is the Euler Gamma function.For fixed $K > 0$, we define $X_t = \log\dfrac{S_t}K$. Then the fSABR model satisfies:$$\begin{array}{rcl}dX_t &=& Y_t \left(\rho dB_t + \bar\rho dW_t\right) - \frac{1}{2} Y_t^2 dt = Y_t d\tilde W_t - \frac{1}{2} Y_t^2 dt, \\\crY_t &=& Y_0 \exp\left(\nu B_t^H\right).\end{array}$$ 2. Target volatility options (TVO)A *target volatility* call struck at $K$ pays off at expiry $T$ the amount$$\dfrac{\bar\sigma}{\sqrt{\frac{1}{T}\int_0^T Y_t^2 dt}} \left( S_T - K \right)^+ = \dfrac{K \, \bar\sigma \sqrt T}{\sqrt{\int_0^T Y_t^2 dt}} \left( e^{X_T}- 1 \right)^+,$$where $\bar\sigma$ is the (preassigned) *target volatility* level. If at expiry the realized volatility is higher (lower) than the target volatility, the payoff is scale down (up) by the ratio between target volatility and realized volatility. For $t \leq T$, the price at time $t$ of a TV call struck at $K$ with expiry $T$ is hence given by the conditional expectation under the risk neutral probability $\mathbb Q$ as $$K\, \bar\sigma \sqrt T \, \mathbb{E}\left[\left. \dfrac{1}{\sqrt{\int_0^T Y_\tau^2 d\tau}} \left( e^{X_T}- 1\right)^+\right|\mathcal{F}_t\right]$$provided the expectation is finite. 3. TVO pricingWe employ for verification purposes three pricing methods: * Monte Carlo Simulations* Decomposition Formula Approximation (DFA), see equation (4.7) in [Alos et al. (2018)](https://arxiv.org/pdf/1801.08215.pdf)* Small Volatility of Volatility Expansion (SVVE), see equation (5.6) in [Alos et al. (2018)](https://arxiv.org/pdf/1801.08215.pdf) Observe that both formulas can be easily implemented numerically as they only require the use of special functions: the pdf $N'$ and cdf $N$ for a standard normal distribution, the Euler Gamma function $\Gamma$, the Gauss hypergeometric function ${ }_2F_1$, and the Beta function $\beta$. 3.1. Formula accuracyTo test the accuracy of DFA and SVVE, we produce sample paths for the lognormal fSABR price process and we use Monte Carlo techniques to calculate TVO prices. 
We first load the required modules, price via analytic formulas, and lastly we simulate sample paths for the fSABR process and use MC techniques for TVO pricing.
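As a side note, the Molchan-Golosov kernel $K(t,s)$ defined above can be evaluated directly with SciPy's special functions; the short sketch below is an added illustration and is not needed for the pricing code that follows.
###Code
# Added sketch: evaluate the Molchan-Golosov kernel K(t, s) for 0 < s < t
from scipy.special import hyp2f1, gamma
import numpy as np
def mg_kernel(t, s, H):
    cH = np.sqrt(2 * H * gamma(1.5 - H) / (gamma(2 - 2 * H) * gamma(H + 0.5)))
    return cH * (t - s) ** (H - 0.5) * hyp2f1(H - 0.5, 0.5 - H, H + 0.5, 1 - t / s)
print(mg_kernel(1.0, 0.5, 0.1))
###Output
_____no_output_____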
###Code
import numpy as np
import time
import utils
from fSABR import fSABR
from scipy.integrate import trapz
from matplotlib import pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
We consider the following values the parameters of the fSABR model and for the TVO contract:
###Code
T = 1.0
H = 0.1
nu = 0.05
S0 = 1.0
sig0 = 0.3 # Y0 in our notations
rho = -0.7
TV = 0.3
k = np.arange(-0.22, 0.22, 0.01)
K = np.exp(k)[np.newaxis,:]/S0
###Output
_____no_output_____
###Markdown
Pricing via DFA and SVVE:
###Code
""" TVO pricing via DFA (4.7) """
startTimer = time.time()
price1 = utils.tvoPrice_formulaAlternative_3(S0, sig0, K, T, TV, H, rho, nu)
endTimer = time.time()
t1 = endTimer - startTimer
""" TVO pricing via SVVE (5.6) """
startTimer = time.time()
price2 = utils.tvoPrice_formula(S0, sig0, K, T, TV, H, rho, nu)
endTimer = time.time()
t2 = endTimer - startTimer
###Output
_____no_output_____
###Markdown
Pricing via Monte Carlo with $N=50,000$ paths and $n=1,000$ time steps:
###Code
""" TVO pricing via MC """
n = 1000
N = 50000
a = H - 0.5
fSV = fSABR(n, N, T, a)
dW1 = fSV.dW1()
dW2 = fSV.dW2()
dB = fSV.dB(dW1, dW2, rho)
WH = fSV.WH(dW1)
sig = fSV.sig(WH, nu, sig0)
S = fSV.S(sig, dB, S0)
startTimer = time.time()
ST = S[:,-1][:,np.newaxis]
call_payoffs = np.maximum(ST - K,0)
RV = trapz(np.square(sig),fSV.t[0,:])[:,np.newaxis]/T
tvoCall_payoffs = TV * call_payoffs / np.sqrt(RV)
price3 = np.mean(tvoCall_payoffs, axis = 0)[:,np.newaxis]
endTimer = time.time()
t3 = endTimer - startTimer
###Output
_____no_output_____
###Markdown
Note that we used the following notations:* price1 - Decomposition Formula Approx. (DFA)* price2 - Small vol of vol expansion (SVVE)* price3 - MC simulationWe plot the results and compare:
###Code
plot, axes = plt.subplots()
axes.plot(np.transpose(K), np.transpose(price1), 'r')
axes.plot(np.transpose(K), np.transpose(price2), 'b')
axes.plot(np.transpose(K), price3, 'g--')
axes.set_xlabel(r'$K/S_0$', fontsize=12)
axes.set_ylabel(r'TVO Call Price', fontsize=12)
axes.legend(['Decomposition Formula Approximation',
'Small Vol of Vol Expansion','Monte Carlo Simulation'])
title = r'$T=%.2f,\ H=%.2f,\ \rho=%.2f,\ \nu=%.2f,\ \sigma_0=%.2f,\ \bar\sigma=%.2f$'
axes.set_title(title%(fSV.T, H, fSV.rho, fSV.nu, sig0 , TV), fontsize=12)
plt.grid(True)
###Output
_____no_output_____
###Markdown
Note that the formulas are a good approximation to the actual (MC) price of the TVO. We now calculate the relative error between the MC prices and the DFA and SVVE formulas.
###Code
# Concatenate arrays
err1 = np.divide(np.absolute(price3 - np.transpose(price1)), price3) * 100
err2 = np.divide(np.absolute(price3 - np.transpose(price2)), price3) * 100
tableTex = np.concatenate((np.transpose(K), price3), axis = 1)
tableTex = np.concatenate((tableTex, np.transpose(price1)), axis = 1)
tableTex = np.concatenate((tableTex, np.transpose(price2)), axis = 1)
tableTex = np.concatenate((tableTex, err1), axis = 1)
tableTex = np.concatenate((tableTex, err2), axis = 1)
import pandas as pd
df = pd.DataFrame(data=tableTex)
df.columns = ['K/S0', 'MC', 'DFA', 'SVVE', 'DFA rel.err.(%)', 'SVVE rel.err.(%)']
print(df)
# Computing times
print('Computing times in seconds:')
print('SVEE = %.5f s; '%t1, 'DFA = %.5f s; '%t2, 'MC = %.5f s; '%t3)
###Output
Computing times in seconds:
SVEE = 0.00236 s; DFA = 0.00457 s; MC = 1.22972 s;
###Markdown
3.2. Sensitivity to parameters In order to stress test our formulas, we compute the TVO price at-the-money via the analytic formulas DFA, SVVE, and Monte Carlo simulations for a broad range of parameters $(H,\nu,\rho)$. Namely, we consider $H\in(0,0.5),\ \nu\in(0,0.6),$ and $\rho\in(-1,1)$. First, we plot the TVO price as a function of two parameters while keeping the third fixed. Second, we compute and plot the relative error between our formulas and the prices obtained via Monte Carlo trials. Note that the relative error is small, and that the price surfaces are fairly smooth. We emphasize that the approximation formulas turn out to be highly accurate and robust to parameter variations. We load our prices from a pkl file, which was produced using the script tvo_pricingSensitivity.py. We consider here the SVVE formula - note that the code may be modified to account for DFA.
###Code
import pickle
from matplotlib import cm
import mpl_toolkits.mplot3d
from matplotlib.ticker import FormatStrFormatter
H = np.arange(0.05,0.49,0.05)
nu = np.arange(0.01,0.61,0.01)
rho = np.arange(-0.99,0.99,0.06)
with open('tvoCall_prices.pkl', 'rb') as f:
tvo_MC, tvo_SVVE, tvo_DFA = pickle.load(f)
tvo_formula = tvo_SVVE # change to tvo_DFA for DFA analysis
tvoCall_prices = tvo_MC
errFormula = np.divide(tvoCall_prices - tvo_formula, tvoCall_prices)
print(tvo_MC.shape)
###Output
(9, 60, 33)
###Markdown
We keep the Hurst parameter $H$ fixed:
###Code
"""
Keeping H fixed
coefH index ranges from 0 to 8 (9 values of H)
"""
coefH = 0
X, Y = np.meshgrid(rho,nu)
errFormulaH = errFormula[coefH,:,:]
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
surf = ax.plot_surface(X, Y, errFormulaH, cmap=cm.coolwarm, linewidth=0,
antialiased=False)
title = r'Relative Error for H = %.2f'%H[coefH]
ax.set_title(title)
ax.set_xlabel(r'$\rho$')
ax.set_ylabel(r'$\nu$')
ax.set_zlabel('Relative Error')
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
surf = ax.plot_surface(X, Y, tvo_formula[coefH,:,:], cmap=cm.coolwarm,
linewidth=0, antialiased=False)
title = r'TVO prices for H = %.2f'%H[coefH]
ax.set_title(title)
ax.set_xlabel(r'$\rho$')
ax.set_ylabel(r'$\nu$')
ax.set_zlabel('TVO Calls');
"""
Keeping \nu fixed
coefNu index ranges from 0 to 59 (60 values of nu)
"""
coefNu = 10
X, Y = np.meshgrid(rho,H)
errFormulaNu = errFormula[:,coefNu,:]
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_ticks(np.arange(0, 0.55, 0.1))
surf = ax.plot_surface(X, Y, errFormulaNu, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
title = r'Relative Error for $\nu$ = %.2f'%nu[coefNu]
ax.set_title(title)
ax.set_xlabel(r'$\rho$')
ax.set_ylabel(r'$H$')
ax.set_zlabel('Relative Error')
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_ticks(np.arange(0, 0.55, 0.1))
surf = ax.plot_surface(X, Y, tvo_formula[:,coefNu,:], cmap=cm.coolwarm,
linewidth=0, antialiased=False)
title = r'TVO prices for $\nu$ = %.2f'%nu[coefNu]
ax.set_title(title)
ax.set_xlabel(r'$\rho$')
ax.set_ylabel(r'H')
ax.set_zlabel('TVO Calls');
"""
Keeping \rho fixed
coefRho index ranges from 0 to 32 (33 values of rho)
"""
coefRho = 11
X, Y = np.meshgrid(nu,H)
errFormulaRho = errFormula[:,:,coefRho]
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_ticks(np.arange(0, 0.55, 0.1))
surf = ax.plot_surface(X, Y, errFormulaRho, cmap=cm.coolwarm, linewidth=0,
antialiased=False)
title = r'Relative Error for $\rho$ = %.2f'%rho[coefRho]
ax.set_title(title)
ax.set_xlabel(r'$\nu$')
ax.set_ylabel(r'$H$')
ax.set_zlabel('Relative Error')
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.xaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
ax.yaxis.set_ticks(np.arange(0, 0.55, 0.1))
surf = ax.plot_surface(X, Y, tvo_formula[:,:,coefRho], cmap=cm.coolwarm,
linewidth=0, antialiased=False)
title = r'TVO prices for $\rho$ = %.2f'%rho[coefRho]
ax.set_title(title)
ax.set_xlabel(r'$\nu$')
ax.set_ylabel(r'H')
ax.set_zlabel('TVO Calls')
###Output
_____no_output_____
###Markdown
3.3. fSABR path generation Sample paths for fractional Brownian Motions $\{ B^H(t_k),\ k=1,2,\dots,n\}$ using the Molchan--Golosov kernel are simulated. Here, we consider a partition $$\Pi:=\{0 = t_0<t_1<\cdots<t_n=T\}$$ of the interval $[0,T]$. We employ the hybrid scheme for Brownian semistationary processes given in the paper [Bennedsen et al. (2017)](https://arxiv.org/pdf/1507.03004.pdf), which is based on discretizing the stochastic integral representation of the process in the time domain, see also the code available on GitHub [here](https://github.com/ryanmccrickerd/rough_bergomi). Several test routines for fractional processes are also implemented: mean and variance as a function of time via Monte Carlo simulations, a chi-square test for fractional Gaussian noise, as well as the 2D correlation structure via sample paths. We notice that the sample paths have the required properties, that are specific to fBMs.
###Code
N = 50000
n = 1000
T = 1.0
H = 0.2
a = H - 0.5
rho = -0.7
nu = 0.1
S0 = 1.0
sig0 = 0.2
""" fSABR process paths """
fSV = fSABR(n, N, T, a)
dW1 = fSV.dW1()
dW2 = fSV.dW2()
dB = fSV.dB(dW1, dW2, rho)
WH = fSV.WH(dW1)
sig = fSV.sig(WH, nu, sig0)
S = fSV.S(sig, dB, S0)
""" Plotting some sample paths """
plt.title('fBM with H = %.1f'%H)
plt.xlabel('t')
plt.ylabel('$W^H_t$')
plt.plot(fSV.t[0,:], WH[0,:], 'g')
plt.grid(True); plt.show()
plt.title('Fractional SV')
plt.xlabel('t')
plt.ylabel('$\sigma_t$')
plt.plot(fSV.t[0,:], sig[0,:], 'g')
plt.grid(True); plt.show()
plt.title('Price process in fSABR')
plt.xlabel('t')
plt.ylabel('$S_t$')
plt.plot(fSV.t[0,:], S[0,:], 'g')
plt.grid(True); plt.show();
""" Check Statistical Properties of the fBM via MC """
eY1 = 0 * fSV.t # Known expectation
vY1 = fSV.t**(2*fSV.a + 1) # Known variance
eY2 = np.mean(WH, axis=0, keepdims=True) # Observed expectation
vY2 = np.var(WH, axis=0, keepdims=True) # Observed variance
plt.plot(fSV.t[0,:], eY1[0,:], 'r')
plt.plot(fSV.t[0,:], eY2[0,:], 'g')
plt.xlabel(r'$t$')
plt.ylabel(r'$E[W^H_t]$')
plt.title(r'Expected value of simulated fBM for N = %d paths'%N)
plt.grid(True); plt.show()
plt.plot(fSV.t[0,:], vY1[0,:], 'r')
plt.plot(fSV.t[0,:], vY2[0,:], 'g')
plt.xlabel(r'$t$')
plt.ylabel(r'$Var(W^H_t)$')
plt.title(r'Variance of simulated fBM for N = %d paths'%N)
plt.legend(['$t^{2H}$','Monte Carlo'])
plt.grid(True); plt.show()
""" Check the 2D covariance structure of the simulated fBM via MC """
from matplotlib.ticker import LinearLocator
# Make the data
X, Y = np.meshgrid(fSV.t, fSV.t)
Z = 0.5* (X**(2*H) + Y**(2*H) - (np.abs(X-Y))**(2*H))
# Compute covariance structure of simulated fBM via MC
Z2 = np.cov(WH, rowvar = False)
# Compute error
err = np.linalg.norm(Z-Z2)
errSurf = Z-Z2
# Plot covariance surface for verification
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm, linewidth=0,
antialiased=False)
ax.set_zlim(-1.01, 1.51)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.01f'))
title = r'Covariance function $\gamma(s,t)$'
ax.set_title(title, fontsize=16)
ax.set_xlabel(r't')
ax.set_ylabel(r's')
ax.set_zlabel('$\gamma$')
plt.show()
# Plot error surface
fig = plt.figure()
ax = fig.gca(projection='3d')
surf3 = ax.plot_surface(X, Y, errSurf,
cmap=cm.coolwarm, linewidth=0, antialiased=False)
ax.set_zlim(-0.05, 0.1)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
title = r'Absolute error surface'
ax.set_title(title, fontsize=16)
ax.set_xlabel(r't')
ax.set_ylabel(r's')
plt.show()
"""
Hypothesis testing
NULL hypotheses:
the covariances of the sample are in accordance with fractional
Gaussian noise for some specified Hurst parameter H
We use a chi-square test for fractional Gaussian noise
Test: reject NULL hypothesis when CN < chi2Test
"""
import scipy as sp
XH = np.diff(WH)
Gam = [[utils.covGamma(i-j,H) for i in range(n)] for j in range(n)]
L = np.linalg.cholesky(Gam)
ZH = (np.linalg.inv(L)).dot(np.transpose(XH))
CN = (np.linalg.norm(ZH, 2))**2 # Test statistic
alpha = 0.99 # Confidence level
chi2Test = sp.stats.chi2.ppf(alpha,n) # p value of the chi2Test
print('Reject null hypothesis: ', CN<chi2Test)
###Output
Reject null hypothesis: False
|
Guides/algorithms/Search.ipynb | ###Markdown
Search Algorithms
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Binary Search
###Code
arr = np.random.random(100)
arr = np.sort(arr)
# find the index of a known element in the sorted array
def binary_search(array, value):
    """Return the index of value in the sorted array, or -1 if it is not present."""
    lo, hi = 0, len(array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if array[mid] < value:
            lo = mid + 1
        elif array[mid] > value:
            hi = mid - 1
        else:
            return mid
    return -1
print(binary_search(arr,arr[10]))
###Output
10
|
_notebooks/2020-02-19-python_static_vs_classmethods.ipynb | ###Markdown
Static vs Classmethods> This post describes the use cases of the staticmethod and classmethod decorators in Python- toc: true- branch: master- badges: true- comments: true Classmethods can be used for constructor overloading. Staticmethods can be used as standalone functions that do not depend on any other functions in the same class.
###Code
from math import *
class Point():
def __init__(self, x, y):
self.x = x
self.y = y
@classmethod
def frompolar(cls, radius, angle):
"""The `cls` argument is the `Point` class itself"""
return cls(radius * cos(angle), radius * sin(angle))
@staticmethod
def angle(x, y):
"""this could be outside the class, but we put it here
just because we think it is logically related to the class."""
return atan(y/x)
p1 = Point(3, 2)
p2 = Point.frompolar(3, pi/4)
angle = Point.angle(3, 2)
p1.x
p2.x
###Output
_____no_output_____
###Markdown
Another consideration with respect to staticmethod vs classmethod comes up with inheritance. Say you have the following class:
###Code
class Foo(object):
@staticmethod
def bar():
return "In Foo"
###Output
_____no_output_____
###Markdown
and you then want to override bar() in a child class:
###Code
class Foo2(Foo):
    @staticmethod
    def bar():
        return "In Foo2"
###Output
_____no_output_____
###Markdown
This works, but note that now the bar() implementation in the child class (Foo2) can no longer take advantage of anything specific to that class. For example, say Foo2 had a method called magic() that you want to use in the Foo2 implementation of bar():
###Code
class Foo2(Foo):
@staticmethod
def bar():
return "In Foo2"
@staticmethod
def magic():
return "Something useful you'd like to use in bar, but now can't"
###Output
_____no_output_____
###Markdown
A workaround is to call Foo2().magic() inside bar(), but that reduces the flexibility to refactor Foo2: if the class is renamed, the call inside bar() must be changed as well. If bar() were a classmethod, this would not be a problem.
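For comparison, the static-method workaround mentioned above might look like the added sketch below (it assumes the Foo class defined earlier; note the hard-coded class name that makes refactoring brittle), followed by the classmethod version:
###Code
# Added sketch of the staticmethod workaround: bar() must name Foo2 explicitly
class Foo2(Foo):
    @staticmethod
    def bar():
        return "In Foo2 " + Foo2.magic()   # class name hard-coded here
    @staticmethod
    def magic():
        return "Something useful you'd like to use in bar"
print(Foo2.bar())
###Output
_____no_output_____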
###Code
class Foo(object):
@classmethod
def bar(cls):
return "In Foo"
class Foo2(Foo):
@classmethod
def bar(cls):
return "In Foo2 " + cls.magic()
@classmethod
def magic(cls):
return "MAGIC"
print (Foo2().bar())
###Output
In Foo2 MAGIC
|
notebooks/Archive/AT2_Classification_v1.ipynb | ###Markdown
1. Set up Environment
###Code
%pwd
%cd '/home/jovyan/work'
%load_ext autoreload
%autoreload 2
import os
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader, WeightedRandomSampler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
pd.options.display.max_rows = 10000
###Output
_____no_output_____
###Markdown
2. Load and Explore Data
###Code
df = pd.read_csv('data_files/raw/beer_reviews.csv')
df.head()
df.shape
df.info()
df.head()
###Output
_____no_output_____
###Markdown
3. Prepare Data
###Code
df_cleaned = df.copy()
###Output
_____no_output_____
###Markdown
Drop unused variables
###Code
df_cleaned = df_cleaned.drop(['brewery_id', 'review_time','review_profilename','beer_beerid','beer_name','beer_abv'], axis=1)
###Output
_____no_output_____
###Markdown
Create Categorical Variable Dictionary
###Code
arr_brewery_name = df_cleaned.brewery_name.unique()
arr_beer_style = df_cleaned.beer_style.unique()
lst_brewery_name = list(arr_brewery_name)
lst_beer_style = list(arr_beer_style)
cats_dict = {
'brewery_name': [lst_brewery_name],
'beer_style': [lst_beer_style]
}
###Output
_____no_output_____
###Markdown
Quantify NULL Values
###Code
df_cleaned.isnull().sum()
df_cleaned.dropna(how='any', inplace=True)
###Output
_____no_output_____
###Markdown
Transform Categorical column values with encoder
###Code
from sklearn.preprocessing import StandardScaler, OrdinalEncoder
for col, cats in cats_dict.items():
col_encoder = OrdinalEncoder(categories=cats)
df_cleaned[col] = col_encoder.fit_transform(df_cleaned[[col]])
num_cols = ['brewery_name','review_overall', 'review_aroma', 'review_appearance', 'review_palate', 'review_taste']
target_col = 'beer_style'
sc = StandardScaler()
df_cleaned[num_cols] = sc.fit_transform(df_cleaned[num_cols])
df_cleaned['beer_style'] = df_cleaned['beer_style'].astype(int)
X = df_cleaned.drop(columns=['beer_style'])  # keep the target out of the feature matrix
y = df_cleaned['beer_style']
###Output
_____no_output_____
###Markdown
Visualise Target Class Distribution
###Code
a4_dims = (25, 8)
fig, ax = pyplot.subplots(figsize=a4_dims)
sns.countplot(ax=ax, x='beer_style', data=df_cleaned)
###Output
_____no_output_____
###Markdown
Split Data
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# DataFrames/Series cannot be passed to torch.Tensor directly; convert via .values.
# Class labels are kept as a LongTensor, the type expected by nn.CrossEntropyLoss.
X_train = torch.Tensor(X_train.values)
X_test = torch.Tensor(X_test.values)
y_train = torch.LongTensor(y_train.values)
y_test = torch.LongTensor(y_test.values)
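# Added sketch (not in the original notebook): the class histogram above is imbalanced,
# and WeightedRandomSampler was imported but never used. One assumed way to wire it up:
from torch.utils.data import TensorDataset
class_counts = torch.bincount(y_train)
sample_weights = 1.0 / class_counts[y_train].float()
sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)
train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=1024, sampler=sampler)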
###Output
_____no_output_____ |
notebooks/nl-be/Input - Temperatuur.ipynb | ###Markdown
DS18B20 1-wire temperature sensor- VDD = 3v3- signal pin = GPIO4- 4.7 kOhm pull-up on the signal line Raspberry Pi installation + device ID detection:```> sudo modprobe w1-gpio> sudo modprobe w1-therm> cd /sys/bus/w1/devices/> ls```note: be careful with the Raspberry Pi 2 (because of the DeviceTree): add "dtoverlay=w1-gpio" to /boot/config.txt http://www.raspberrypi.org/forums/viewtopic.php?f=28&t=97314
###Code
# Read the sensor data via its file interface
temp_file = open("/sys/bus/w1/devices/28-011465166dff/w1_slave")
temp_tekst = temp_file.read()
temp_file.close()
# The temperature is found on the second line, in the tenth column
tweede_lijn = temp_tekst.split("\n")[1]
temperatuur_tekst = tweede_lijn.split(" ")[9]
# The first two characters are "t=", so we drop them and convert the rest to a number.
temperatuur = float(temperatuur_tekst[2:])
# Convert from millidegrees to degrees.
temperatuur = temperatuur / 1000
print("Gemeten temperatuur: {}".format(temperatuur))
###Output
_____no_output_____ |
MassLuminosityProject/SummerResearch/ValidatingSimplerModel_20160626.ipynb | ###Markdown
Validating Simpler Model In order to confirm our hypothesis that our process is not currently able to generate meaningful posteriors, we further simplify the model and investigate its performance. Contents:- [Model](Model)- [Results](Results)- [Discussion](Discussion) Model We fix the hyper-parameters $\alpha$ and infer the hyper-posterior of $S$. We make these changes and then run this new model over the cluster, generating 960 samples that we analyze below. Results
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
test = np.loadtxt('justStest.txt')
seed_S = 0.1570168038792813
plt.hist(test[:,0], bins=20, alpha=0.6, normed=True);
plt.title('$S$ Samples')
plt.xlabel('$S$')
plt.ylabel('Density')
plt.gca().axvline(seed_S, color='k', label='seed S')
plt.legend(loc=2);
plt.hist(test[:,1], bins=20, alpha=0.6, normed=True);
plt.title('Log-Likelihood Weights')
plt.xlabel('Log-Likelihood')
plt.ylabel('Density')
plt.hist(test[:,0], bins=20, alpha=0.6, normed=True, label='prior');
plt.hist(test[:,0], bins=20, alpha=0.6, \
weights=((test[:,1] - test[:,1].min()) / (test[:,1].max() - test[:,1].min())),\
normed=True,\
label='l-weighted'
);
plt.title('Log-Likelihood Weighted Samples')
plt.xlabel('$S$')
plt.ylabel('Weight')
plt.gca().axvline(seed_S, color='k', label='seed S')
plt.gca().axvline(test[:,0][np.argmax(test[:,1])], color='r', label='highest weight')
plt.legend(loc=2);
w = np.exp((test[:,1] - test[:,1].max()))
plt.hist(test[:,0], bins=20, alpha=0.6, normed=True, label='prior');
plt.hist(test[:,0], bins=20, alpha=0.6, \
weights=w,\
normed=True,\
label='L-weighted'
);
plt.title('Likelihood Weighted Samples')
plt.xlabel('$S$')
plt.ylabel('Weight')
plt.gca().axvline(seed_S, color='k', label='seed S')
plt.gca().axvline(test[:,0][np.argmax(test[:,1])], color='r', label='highest weight')
plt.legend(loc=2);
print np.min(w), np.max(w), np.sort(w)[-10:]
###Output
7.15851099999e-27 1.0 [ 2.78935071e-04 3.14961809e-04 8.49480085e-04 9.09198943e-04
2.15296258e-03 2.23480042e-03 4.00936170e-03 2.17792876e-01
3.51397493e-01 1.00000000e+00]
###Markdown
We see that a few samples dominate the weight contribution.
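One rough way to quantify this (an added sketch reusing the weights `w` from the cell above) is the effective sample size $(\sum_i w_i)^2 / \sum_i w_i^2$:
###Code
# Added sketch: effective sample size of the importance weights (w from the cell above)
ess = w.sum()**2 / (w**2).sum()
ess, len(w)
###Output
_____no_output_____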
###Code
np.sort(test[:,1])[::-1][:10]
###Output
_____no_output_____ |
7.17+hexiaojing.ipynb | ###Markdown
Mathematical Functions, Strings, and Objects This chapter introduces Python functions that perform common mathematical operations- A function is a group of statements that accomplishes a specific task; you can think of a function as one small piece of functionality, but in development note that a function should preferably not be longer than one screen- Python's built-in functions do not need to be imported Try out some of Python's built-in functions Python's math module provides many mathematical functions
###Code
import math
a = 10
res = math.fabs(a)
print(res)
x=eval(input('请输入一个值'))
res=math.fabs(2*x)
print(res)
x=eval(input('请输入一个值'))
res=abs(2*x)
print(res)
b=-2
res=math.ceil(b)
print(res)
b=2.5
res=math.ceil(b)
print(res)
b=2.5
res=math.floor(b)
print(res)
e=2
f=math.log(e)
print(f)
e=2
f=math.log(e,2)
print(f)
math.sqrt(4)
math.sin(math.radians(90))
###Output
_____no_output_____
###Markdown
The two mathematical constants pi and e can be accessed via math.pi and math.e
###Code
math.pi
math.e
###Output
_____no_output_____
###Markdown
EP:- Using the math library, write a program in which the user enters three vertices (x, y) and the three angles are returned- Note: Python trigonometric functions work in radians, so the results need to be converted to degrees
###Code
a=1
b=1
c=math.sqrt(2)
A=math.acos((math.pow(a,2)-math.pow(b,2)-math.pow(c,2))/(-2*b*c))
B=math.acos((math.pow(b,2)-math.pow(a,2)-math.pow(c,2))/(-2*a*c))
C=math.acos((math.pow(c,2)-math.pow(b,2)-math.pow(a,2))/(-2*a*b))
A=math.degrees(A)
B=math.degrees(B)
C=math.degrees(C)
print(A)
print(B)
print(C)
###Output
44.999999999999986
44.999999999999986
90.00000000000001
###Markdown
Strings and Characters- In Python, a string must be enclosed in single or double quotes; for multi-line strings you can use triple quotes (""")- When triple quotes are used, assigning the block to a variable makes it a string; otherwise it acts as a multi-line comment
###Code
a=""""""Joker
is
a
good
boy """"""
#三引号可以当文本使用(写作文一样)
#当你不给予变量的时候,三引号当注释用
###Output
_____no_output_____
###Markdown
ASCII and Unicode codes- - - Functions ord and chr- ord returns the ASCII code value- chr returns the character
###Code
ord('a')
chr(97)
'a'+'b'
###Output
_____no_output_____
###Markdown
EP:- Use ord and chr to perform simple email-address encryption
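One possible way to realize this exercise is sketched below (added; the address and shift value are made up).
###Code
# Added sketch: shift every character of an email address by a fixed offset
email = 'joker@example.com'      # hypothetical address
shift = 3
encrypted = ''.join(chr(ord(c) + shift) for c in email)
decrypted = ''.join(chr(ord(c) - shift) for c in encrypted)
print(encrypted)
print(decrypted)
###Output
_____no_output_____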
###Code
ord('e')
ord('u')
chr(ord('a')+29)
chr(ord('a')-35)
a=chr(ord('a')+10)
b=chr(ord('b')*2)
print(a+b)
a=chr(ord('a')+10)
b=chr(ord('b')*3)
print(a+b)
###Output
kĦ
###Markdown
Escape sequence \- Example: a = "He said, \"John's program is easy to read\""- Escaping removes a character's original (special) meaning- In general, escaping is only needed when the literal text clashes with the default syntax
###Code
# 1. The difference between triple quotes and single/double quotes:
#    triple-quoted text may span multiple lines, single/double quoted strings may not;
#    a triple-quoted block that is not assigned to a variable acts as a (multi-line) comment.
# 2. A single/double quoted string cannot directly contain the same kind of quote,
#    but it can contain the other kind.
# 3. If you really must, use the backslash escape character "\" to strip a character's special meaning.
###Output
_____no_output_____
###Markdown
Advanced print- parameter end: controls how the printed output ends- by default print ends with a newline Function str- casts a value to the string type- some other types will be covered later (list, set, tuple ...) String concatenation- use "+" directly- the join() function EP:- concatenate "Welcome", "to", "Python"- concatenate the int 100 with "joker is a bad man"- read a string from the console> given an input name, return a compliment for that person
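A possible solution sketch for the three small exercises above is added below (the compliment text is made up).
###Code
# Added sketch for the EP exercises above
print(' '.join(['Welcome', 'to', 'Python']))       # join with blanks
print(str(100) + ' joker is a bad man')            # an int must be converted with str() before "+"
name = input('Enter a name: ')
print(name + ', you are doing great!')
###Output
_____no_output_____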
###Code
import sys, time
for i in range(100):
print('{}/{}\r'.format(i,100),flush=True,end='')
time.sleep(1)
# progress bar: rewrites the same console line on each iteration
###Output
_____no_output_____
###Markdown
Case Study: Minimum Number of Coins- Develop a program that lets the user enter a total amount, a floating-point value expressed in dollars and cents, and returns the number of dollars, quarters, dimes, nickels, and pennies - A weak point of Python is that floating-point handling is not great; when processing data, NumPy types are usually used id and type- id shows the memory address and will be used in comparison statements- type shows the type of an element Other formatting statements: see the textbook Homework- 1
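(An added sketch for the coin case study above is given first, before the homework solutions; the amount is converted to an integer number of pennies to avoid floating-point error.)
###Code
# Added sketch: minimum number of coins for an amount entered in dollars
amount = eval(input('Enter an amount, e.g. 11.56: '))
remaining = int(round(amount * 100))               # work in pennies to avoid float error
dollars, remaining = divmod(remaining, 100)
quarters, remaining = divmod(remaining, 25)
dimes, remaining = divmod(remaining, 10)
nickels, pennies = divmod(remaining, 5)
print(dollars, quarters, dimes, nickels, pennies)
###Output
_____no_output_____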
###Code
import math
r=eval(input('请输入距离'))
s=2*r*math.sin(math.pi/5)
area=5*s*s/(4*math.tan(math.pi/5))
print(area)
###Output
请输入距离5.5
71.923649044821
###Markdown
- 2
###Code
(x1,y1)=eval(input('请输入第一个经度纬度'))
(x2,y2)=eval(input('请输入第二个经度纬度'))
radius=6371.01
x1=math.radians(x1)
x2=math.radians(x2)
y1=math.radians(y1)
y2=math.radians(y2)
d=math.fabs(radius*math.acos(math.sin(x1)*math.sin(x2)+math.cos(x1)*math.cos(x2)*math.cos(y1-y2)))
print(d)
###Output
请输入第一个经度纬度39.55,-116.25
请输入第二个经度纬度41.5,87.37
10691.79183231593
###Markdown
- 3
###Code
s=eval(input('请输入边长'))
area=(5*s**2)/(4*math.tan(math.pi/5))
print(area)
###Output
请输入边长5.5
52.044441367816255
###Markdown
- 4
###Code
s=eval(input('请输入边长'))
n=eval(input('请输入边数'))
area=(n*s**2)/(4*math.tan(math.pi/n))
print(area)
###Output
请输入边长6.5
请输入边数5
72.69017017488386
###Markdown
- 5
###Code
a=eval(input('请输入数字'))
b=chr(a)
print(b)
###Output
请输入数字69
E
###Markdown
- 6
###Code
a=input('请输入名字')
b=eval(input('请输入工作时间'))
c=eval(input('请输入每小时的报酬'))
d=eval(input('请输入联邦预扣税率'))
e=eval(input('请输入州预扣税率'))
print('employee name:',a)
print('hours worked',b)
print('pay rate','$',c)
print('gross pay','$',b*c)
print('deductions')
print('federal withholding','$',b*c*d)
print('state withholding','$',b*c*e)
print('total deduction','$',b*c*d+b*c*e)
print('net pay','$',b*c-b*c*d-b*c*e)
###Output
请输入名字Smith
请输入工作时间10
请输入每小时的报酬9.75
请输入联邦预扣税率0.20
请输入州预扣税率0.09
employee name: Smith
hours worked: 10
pay rate: $ 9.75
gross pay: $ 97.5
deductions:
federal withholding: $ 19.5
state withholding: $ 8.775
total deduction: $ 28.275
net pay: $ 69.225
###Markdown
- 7
###Code
x=eval(input('请输入一个数'))
a=str(x%100%10)
b=str(x%100//10)
c=str(x//100%10)
d=str(x//1000)
print(a+b+c+d)
###Output
请输入一个数3125
5213
|
docs/SCOPE2019.ipynb | ###Markdown
 *Mohammad Dehghani Ashkezari * *Ginger Armbrust**Raphael Hagen**Michael Denholtz* Table of Contents:* [Installation](installation)* [**Data Retrieval (selected methods)**](dataRetrieval) * [API](api) * [Catalog](catalog) * [Search Catalog](searchCatalog) * [Cruise Trajectory](cruiseTrajectory) * [Subset by Space-Time](spaceTime) * [Colocalize Along Cruise Track](matchCruise) * [Query](query)* [**Data Visulization**](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_data_vizualization.html) * [Histogram](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_histogram.htmlhistogram) * [Time Series](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_time_series.htmltimeseries) * [Regional Map, Contour Plot, 3D Surface Plot](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_rm_cp_3d.htmlrmcp3d) * [Section Map, Section Contour](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_section_map_contour.htmlsectionmapcontour) * [Depth Profile](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_depth_profile.htmldepthprofile) * [Cruise Track](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_cruise_track.htmlcruisetrackplot) * [Correlation Matrix Along Cruise Track](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_correlation_matrix_cruise_track.htmlcorrmatrixcruise) * [**Case Studies**](caseStudy) * [Attach Environmental Parameters to the SeaFlow Observations](caseStudy1) * [Inter-Annual Variability of Eddy Induced Temperature Anomaly](caseStudy2) API: Data Retrieval
###Code
# if using jupyter notebook: enable intellisense
%config IPCompleter.greedy=True
###Output
_____no_output_____
###Markdown
Table of Contents Installation pycmap can be installed using *pip*: `pip install pycmap`. In order to use pycmap, you will need to obtain an API key from the SimonsCMAP website: https://simonscmap.com. Note: You may install pycmap on cloud-based jupyter notebooks (such as [Colab](https://colab.research.google.com/)) by running the following command in a code-block: `!pip install pycmap`
###Code
# !pip install pycmap -q #uncomment to install pycmap on Colab
import pycmap
pycmap.__version__
###Output
_____no_output_____
###Markdown
Table of Contents [*API( )*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_api.html#pycmapapi) To retrieve data, we need to create an instance of the system's API and pass the API key. It is not necessary to pass the API key every time you run code locally, because it will be stored locally. The API class has other optional parameters to adjust its behavior. All parameters can be updated persistently at any point in the code. Register at https://simonscmap.com and get an API key, if you haven't already.
###Code
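# On first use, pass your API key, e.g. api = pycmap.API(token='<YOUR_API_KEY>');
# the key is then cached locally, so later calls can omit it (keyword name per the pycmap docs;
# adjust if your installed version differs).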
api = pycmap.API()
###Output
_____no_output_____
###Markdown
Table of Contents [*get_catalog()*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_catalog.htmlgetcatalog)Returns a dataframe containing the details of all variables at Simons CMAP database. This method requires no input.
###Code
api.get_catalog()
###Output
_____no_output_____
###Markdown
Table of Contents [*search_catalog(keywords)*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_search_catalog.htmlsearchcatalog)Returns a dataframe containing a subset of Simons CMAP catalog of variables. All variables at Simons CMAP catalog are annotated with a collection of semantically related keywords. This method takes the passed keywords and returns all of the variables annotated with similar keywords. The passed keywords should be separated by blank space. The search result is not sensitive to the order of keywords and is not case sensitive. The passed keywords can provide any 'hint' associated with the target variables. Below are a few examples: * the exact variable name (e.g. NO3), or its linguistic term (Nitrate) * methodology (model, satellite ...), instrument (CTD, seaflow), or disciplines (physics, biology ...) * the cruise official name (e.g. KOK1606), or unofficial cruise name (Falkor) * the name of data producer (e.g Penny Chisholm) or institution name (MIT) If you searched for a variable with semantically-related-keywords and did not get the correct results, please let us know. We can update the keywords at any point. Example:Returns a list of Nitrite measurements during the Falkor cruise, if exists.
###Code
api.search_catalog('nitrite falkor')
###Output
_____no_output_____
###Markdown
Table of Contents [*cruise_trajectory(cruiseName)*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_cruise_trajectory.html#cruise-traj) Returns a dataframe containing the trajectory of the specified cruise. Example: Returns the meso_scope cruise trajectory. The example below passes 'scope' as the cruise name. All cruises that have the term 'scope' in their name are returned, and the user is asked for a more specific name.
###Code
api.cruise_trajectory('scope')
###Output
_____no_output_____
###Markdown
Table of Contents [*cruise_variables(cruiseName)*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_cruise_variables.htmlcruisevars)Returns a dataframe containing all registered variables (at Simons CMAP) during the specified cruise. Example:Returns a list of measured variables during the *Diel* cruise (KM1513).
###Code
api.cruise_variables('diel')
###Output
_____no_output_____
###Markdown
Table of Contents [*space_time(table, variable, dt1, dt2, lat1, lat2, lon1, lon2, depth1, depth2)*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_subset_ST.htmlsubset-st)Returns a subset of data according to the specified space-time constraints (dt1, dt2, lat1, lat2, lon1, lon2, depth1, depth2).The results are ordered by time, lat, lon, and depth (if exists), respectively. > **Parameters:** >> **table: string**>> Table name (each dataset is stored in a table). A full list of table names can be found in [catalog](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_catalog.htmlgetcatalog).>> >> **variable: string**>> Variable short name which directly corresponds to a field name in the table. A subset of this variable is returned by this method according to the spatio-temporal cut parameters (below). Pass **'\*'** wild card to retrieve all fields in a table. A full list of variable short names can be found in [catalog](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_catalog.htmlgetcatalog).>> >> **dt1: string**>> Start date or datetime. This parameter sets the lower bound of the temporal cut. Example values: '2016-05-25' or '2017-12-10 17:25:00'>> >> **dt2: string**>> End date or datetime. This parameter sets the upper bound of the temporal cut. >> >> **lat1: float**>> Start latitude [degree N]. This parameter sets the lower bound of the meridional cut. Note latitude ranges from -90° to 90°.>> >> **lat2: float**>> End latitude [degree N]. This parameter sets the upper bound of the meridional cut. Note latitude ranges from -90° to 90°.>> >> **lon1: float**>> Start longitude [degree E]. This parameter sets the lower bound of the zonal cut. Note longitue ranges from -180° to 180°.>> >> **lon2: float**>> End longitude [degree E]. This parameter sets the upper bound of the zonal cut. Note longitue ranges from -180° to 180°.>> >> **depth1: float**>> Start depth [m]. This parameter sets the lower bound of the vertical cut. Note depth is a positive number (it is 0 at surface and grows towards ocean floor).>> >> **depth2: float**>> End depth [m]. This parameter sets the upper bound of the vertical cut. Note depth is a positive number (it is 0 at surface and grows towards ocean floor).>**Returns:** >> Pandas dataframe. Example:This example retrieves a subset of in-situ salinity measurements by [Argo floats](https://cmap.readthedocs.io/en/latest/catalog/datasets/Argo.htmlargo).
###Code
api.space_time(
table='tblArgoMerge_REP',
variable='argo_merge_salinity_adj',
dt1='2015-05-01',
dt2='2015-05-30',
lat1=28,
lat2=38,
lon1=-71,
lon2=-50,
depth1=0,
depth2=100
)
###Output
_____no_output_____
###Markdown
Table of Contents (see slides →) [*along_track(cruise, targetTables, targetVars, depth1, depth2, temporalTolerance, latTolerance, lonTolerance, depthTolerance)*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_match_cruise_track_datasets.htmlmatchcruise)This method colocalizes a cruise trajectory with the specified target variables. The matching results rely on the tolerance parameters because these parameters set the matching boundaries between the cruise trajectory and target datasets. Please note that the number of matching entries for each target variable might vary depending on the temporal and spatial resolutions of the target variable. In principle, if the cruise trajectory is fully covered by the target variable's spatio-temporal range, there should always be matching results if the tolerance parameters are larger than half of their corresponding spatial/temporal resolutions. Please explore the [catalog](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_catalog.htmlgetcatalog) to find appropriate target variables to colocalize with the desired cruise. This method returns a dataframe containing the cruise trajectory joined with the target variable(s). > **Parameters:** >> **cruise: string**>> The official cruise name. If applicable, you may also use cruise "nickname" ('Diel', 'Gradients_1' ...). A full list of cruise names can be retrieved using [cruise](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_catalog.htmlgetcatalog) method.>> >> **targetTables: list of string**>> Table names of the target datasets to be matched with the cruise trajectory. Notice cruise trajectory can be matched with multiple target datasets. A full list of table names can be found in [catalog](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_catalog.htmlgetcatalog).>> >> **targetVars: list of string**>> Variable short names to be matched with the cruise trajectory. A full list of variable short names can be found in [catalog](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_catalog.htmlgetcatalog).>> >> **depth1: float**>> Start depth [m]. This parameter sets the lower bound of the depth cut on the traget datasets. 'depth1' and 'depth2' allow matching a cruise trajectory (which is at the surface, hopefully!) with traget varaiables at lower depth. Note depth is a positive number (depth is 0 at surface and grows towards ocean floor).>> >> **depth2: float**>> End depth [m]. This parameter sets the upper bound of the depth cut on the traget datasets. Note depth is a positive number (depth is 0 at surface and grows towards ocean floor).>> >> **temporalTolerance: list of int**>> Temporal tolerance values between the cruise trajectory and target datasets. The size and order of values in this list should match those of targetTables. If only a single integer value is given, that would be applied to all target datasets. This parameter is in day units except when the target variable represents monthly climatology data in which case it is in month units. Notice fractional values are not supported in the current version.>> >> **latTolerance: list of float or int**>> Spatial tolerance values in meridional direction [deg] between the cruise trajectory and target datasets. The size and order of values in this list should match those of targetTables. If only a single float value is given, that would be applied to all target datasets. 
A "safe" value for this parameter can be slightly larger than half of the target variable's spatial resolution.>> >> **lonTolerance: list of float or int**>> Spatial tolerance values in the zonal direction [deg] between the cruise trajectory and target datasets. The size and order of values in this list should match those of targetTables. If only a single float value is given, it is applied to all target datasets. A "safe" value for this parameter can be slightly larger than half of the target variable's spatial resolution.>> >> **depthTolerance: list of float or int**>> Spatial tolerance values in the vertical direction [m] between the cruise trajectory and target datasets. The size and order of values in this list should match those of targetTables. If only a single float value is given, it is applied to all target datasets. >**Returns:** >> Pandas dataframe. Example:Colocalizes the Gradients_3 cruise with the prochloro_abundance and PO4_darwin_clim variables from the SeaFlow and Darwin Nutrient Climatology datasets, respectively.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pycmap
api = pycmap.API()
df = api.along_track(
cruise='gradients_3',
targetTables=['tblSeaFlow', 'tblDarwin_Nutrient_Climatology'],
targetVars=['prochloro_abundance', 'PO4_darwin_clim'],
depth1=0,
depth2=5,
temporalTolerance=[0, 0],
latTolerance=[0.01, 0.25],
lonTolerance=[0.01, 0.25],
depthTolerance=[5, 5]
)
################# Simple Plot #################
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
c1, c2 = 'firebrick', 'slateblue'
t1, t2 = 'tblSeaFlow', 'tblDarwin_Nutrient_Climatology'
v1, v2 = 'prochloro_abundance', 'PO4_darwin_clim'
ax1.plot(df['lat'], df[v1], 'o', color=c1, markeredgewidth=0, label='SeaFlow', alpha=0.2)
ax1.tick_params(axis='y', labelcolor='r')
ax1.set_ylabel(v1 + api.get_unit(t1, v1), color='r')
ax2.plot(df['lat'], df[v2], 'o', color=c2, markeredgewidth=0, label='Darwin', alpha=0.2)
ax2.tick_params(axis='y', labelcolor='b')
ax2.set_ylabel(v2 + api.get_unit(t2, v2), color='b')
ax1.set_xlabel('Latitude')
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Table of Contents [*query(query)*](https://cmap.readthedocs.io/en/latest/user_guide/API_ref/pycmap_api/pycmap_query.htmlquery)Simons CMAP datasets are hosted in a SQL database, and the pycmap package provides the user with a number of pre-developed methods to extract and retrieve subsets of the data. The rest of this documentation is dedicated to exploring and explaining these methods. In addition to the pre-developed methods, we intend to leave the database open to custom scan queries for interested users. This method takes a custom SQL query statement and returns the results in the form of a Pandas dataframe. The full list of table names and variable names (fields) can be obtained using the [get_catalog()](Catalog.ipynb) method. In fact, one may use this very method to retrieve the table and field names: `query('EXEC uspCatalog')`. A dataset is stored in a table and each table field represents a variable. All data tables have the following fields:* [time] [date or datetime] NOT NULL,* [lat] [float] NOT NULL,* [lon] [float] NOT NULL,* [depth] [float] NOT NULL, Note:Tables which represent a climatological dataset, such as 'tblDarwin_Nutrient_Climatology', will not have a 'time' field. Also, if a table represents a surface dataset, such as satellite products, there would be no 'depth' field. 'depth' is a positive number in meters; it is zero at the surface, growing towards the ocean's floor. 'lat' and 'lon' are in degrees, ranging from -90° to 90° and -180° to 180°, respectively.Please keep in mind that some of the datasets are massive in size (10s of TB), so avoid queries without a WHERE clause (`SELECT * FROM TABLENAME`). Always try to add some constraints on the time, lat, lon, and depth fields (see the basic examples below). Moreover, the database hosts a wide range of predefined stored procedures and functions to streamline nearly all CMAP data services. For instance, retrieving the catalog information is achieved with a single call of this procedure: *uspCatalog*. These predefined procedures can be called using the pycmap package (see the example below). Alternatively, one may use any SQL client to execute these procedures to retrieve and visualize data (examples: [Azure Data Studio](https://docs.microsoft.com/en-us/sql/azure-data-studio/download?view=sql-server-ver15), or [Plotly Falcon](https://plot.ly/free-sql-client-download/)). Using the predefined procedures, all CMAP data services are centralized at the database layer, which dramatically facilitates the process of developing apps with different programming languages (pycmap, web app, cmap4r, ...). Please note that you can improve the current procedures or add new procedures by contributing at the [CMAP database repository](https://github.com/simonscmap/DB). Below is a selected list of stored procedures and functions; their arguments will be described in more detail subsequently:* uspCatalog* uspSpaceTime* uspTimeSeries* uspDepthProfile* uspSectionMap* uspCruises* uspCruiseByName* uspCruiseBounds* uspWeekly* uspMonthly* uspQuarterly* uspAnnual* uspMatch* udfDatasetReferences* udfMetaData_NoRefHappy SQL Injection! Example:A sample stored procedure returning the list of all cruises hosted by Simons CMAP.
###Code
api.query('EXEC uspCruises')
###Output
_____no_output_____
###Markdown
Example:A sample query returning the timeseries of sea surface temperature (sst).
###Code
api.query(
'''
SELECT [time], AVG(lat) AS lat, AVG(lon) AS lon, AVG(sst) AS sst FROM tblsst_AVHRR_OI_NRT
WHERE
[time] BETWEEN '2016-06-01' AND '2016-10-01' AND
lat BETWEEN 23 AND 24 AND
lon BETWEEN -160 AND -158
GROUP BY [time]
ORDER BY [time]
'''
)
###Output
_____no_output_____
###Markdown
Study Cases Table of Contents (see slides →) Case Study 1: Attach Environmental Parameters to the SeaFlow Observations In this study, we take all SeaFlow cruises (approximately 35 cruises) and colocalize them with 50+ environmental variables. The idea is to identify the environmental variables that are most strongly correlated with the SeaFlow abundances. These variables then serve as predictors for machine learning algorithms that capture the SeaFlow variations. The trained models are then used to generate spatial maps of pico-phytoplankton (Prochlorococcus, Synechococcus, and pico-eukaryotes).
###Code
"""
Author: Mohammad Dehghani Ashkezari <[email protected]>
Date: 2019-08-13
Function: Colocalizes tens of variables along-track of cruises with underway Seaflow measurements.
"""
import os
import pycmap
from collections import namedtuple
import pandas as pd
def all_cruises(api):
"""
Returns a list of seaflow cruises, excluding the AMT cruises.
"""
cruises = api.cruises().Name
return list(cruises[~cruises.str.contains("AMT")])
def match_params():
"""
Creates a collection of variables (and their tolerances) to be colocalized along the cruise trajectory.
"""
Param = namedtuple('Param', ['table', 'variable', 'temporalTolerance', 'latTolerance', 'lonTolerance', 'depthTolerance'])
params = []
params.append(Param('tblSeaFlow', 'prochloro_abundance', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'prochloro_diameter', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'prochloro_carbon_content', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'prochloro_biomass', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'synecho_abundance', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'synecho_diameter', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'synecho_carbon_content', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'synecho_biomass', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'picoeuk_abundance', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'picoeuk_diameter', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'picoeuk_carbon_content', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'picoeuk_biomass', 0, 0.1, 0.1, 5))
params.append(Param('tblSeaFlow', 'total_biomass', 0, 0.1, 0.1, 5))
######## Ship Data (not calibrated)
params.append(Param('tblCruise_Salinity', 'salinity', 0, 0.1, 0.1, 5))
params.append(Param('tblCruise_Temperature', 'temperature', 0, 0.1, 0.1, 5))
######## satellite
params.append(Param('tblSST_AVHRR_OI_NRT', 'sst', 1, 0.25, 0.25, 5))
params.append(Param('tblSSS_NRT', 'sss', 1, 0.25, 0.25, 5))
params.append(Param('tblCHL_REP', 'chl', 4, 0.25, 0.25, 5))
params.append(Param('tblModis_AOD_REP', 'AOD', 15, 1, 1, 5))
params.append(Param('tblAltimetry_REP', 'sla', 1, 0.25, 0.25, 5))
params.append(Param('tblAltimetry_REP', 'adt', 1, 0.25, 0.25, 5))
params.append(Param('tblAltimetry_REP', 'ugos', 1, 0.25, 0.25, 5))
params.append(Param('tblAltimetry_REP', 'vgos', 1, 0.25, 0.25, 5))
######## model
params.append(Param('tblPisces_NRT', 'Fe', 4, 0.5, 0.5, 5))
params.append(Param('tblPisces_NRT', 'NO3', 4, 0.5, 0.5, 5))
params.append(Param('tblPisces_NRT', 'O2', 4, 0.5, 0.5, 5))
params.append(Param('tblPisces_NRT', 'PO4', 4, 0.5, 0.5, 5))
params.append(Param('tblPisces_NRT', 'Si', 4, 0.5, 0.5, 5))
params.append(Param('tblPisces_NRT', 'PP', 4, 0.5, 0.5, 5))
params.append(Param('tblPisces_NRT', 'CHL', 4, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'NH4_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'NO2_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'SiO2_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'DOC_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'DON_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'DOP_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'DOFe_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'PIC_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'ALK_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Nutrient_Climatology', 'FeT_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Plankton_Climatology', 'prokaryote_c01_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Plankton_Climatology', 'prokaryote_c02_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Plankton_Climatology', 'picoeukaryote_c03_darwin_clim', 0, 0.5, 0.5, 5))
params.append(Param('tblDarwin_Plankton_Climatology', 'picoeukaryote_c04_darwin_clim', 0, 0.5, 0.5, 5))
####### World Ocean Atlas (WOA)
params.append(Param('tblWOA_Climatology', 'density_WOA_clim', 0, .75, .75, 5))
params.append(Param('tblWOA_Climatology', 'nitrate_WOA_clim', 0, 0.75, 0.75, 5))
params.append(Param('tblWOA_Climatology', 'phosphate_WOA_clim', 0, 0.75, 0.75, 5))
params.append(Param('tblWOA_Climatology', 'silicate_WOA_clim', 0, 0.75, 0.75, 5))
params.append(Param('tblWOA_Climatology', 'oxygen_WOA_clim', 0, 0.75, 0.75, 5))
params.append(Param('tblWOA_Climatology', 'salinity_WOA_clim', 0, 0.75, 0.75, 5))
tables, variables, temporalTolerance, latTolerance, lonTolerance, depthTolerance = [], [], [], [], [], []
for i in range(len(params)):
tables.append(params[i].table)
variables.append(params[i].variable)
temporalTolerance.append(params[i].temporalTolerance)
latTolerance.append(params[i].latTolerance)
lonTolerance.append(params[i].lonTolerance)
depthTolerance.append(params[i].depthTolerance)
return tables, variables, temporalTolerance, latTolerance, lonTolerance, depthTolerance
def main():
api = pycmap.API()
cruises = all_cruises(api)
cruises = ['KOK1606'] # limiting to only one cruise (for presentation)
exportDir = './export/'
if not os.path.exists(exportDir): os.makedirs(exportDir)
tables, variables, temporalTolerance, latTolerance, lonTolerance, depthTolerance = match_params()
df = pd.DataFrame({})
for cruise in cruises:
print('\n********************************')
print('Preparing %s cruise...' % cruise)
print('********************************\n')
data = api.along_track(
cruise=cruise,
targetTables=tables,
targetVars=variables,
temporalTolerance=temporalTolerance,
latTolerance=latTolerance,
lonTolerance=lonTolerance,
depthTolerance=depthTolerance,
depth1=0,
depth2=5
)
if len(df) < 1:
df = data
else:
df = pd.concat([df, data], ignore_index=True)
data.to_csv('%s%s.csv' % (exportDir, cruise), index=False)
df.to_csv('%ssfMatch.csv' % exportDir, index=False)
return df
##############################
# #
# main #
# #
##############################
if __name__ == '__main__':
df = main()
###Output
_____no_output_____
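###Markdown
A possible follow-up (a sketch added here for illustration; it is not part of the original script): once `sfMatch.csv` has been exported, a quick way to see which matched variables correlate most strongly with, for example, the Prochlorococcus abundance is a plain pandas correlation. The column names follow the variable short names used above.
###Code
import pandas as pd

matched = pd.read_csv('./export/sfMatch.csv')
# correlation of every numeric column with prochloro_abundance, excluding the self-correlation
corr = matched.corr()['prochloro_abundance'].drop('prochloro_abundance')
# rank the matched variables by the strength of their correlation
print(corr.abs().sort_values(ascending=False).head(10))
###Output
_____no_output_____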
###Markdown
Table of Contents (see slides →) Case Study 2: Inter-Annual Variability of Eddy Induced Temperature Anomaly In this example, we iteratively retrieve daily eddy locations and colocalize them with satellite and model variables (SST, CHL, SLA, and NO3). To infer the eddy-induced effects we also compute an estimate of the local background; subtracting the background field from that of the eddy domain yields the eddy-induced effects. For demonstration purposes, the script below is limited to a small region within a one-day period (see the root of the script).
###Code
"""
Author: Mohammad Dehghani Ashkezari <[email protected]>
Date: 2019-11-01
Function: Colocalize (match) eddy data set with a number of satellite & model variables (e.g. SST, CHL, NO3, etc ...).
"""
import os
import pycmap
from collections import namedtuple
import pandas as pd
from datetime import datetime, timedelta, date
def sparse_dates(y1, y2, m1, m2, d1, d2):
    """
    Builds a list of datetimes for every (year, month, day) combination within the given ranges.
    """
dts = []
for y in range(y1, y2+1):
for m in range(m1, m2+1):
for d in range(d1, d2+1):
dts.append(datetime(y, m, d))
return dts
def eddy_time_range(api):
"""
    Returns a list of daily dates spanning the full time range of the eddy dataset.
"""
query = "SELECT min([time]) AS min_time, max([time]) max_time FROM tblMesoscale_Eddy"
df = api.query(query)
dt1 = datetime.strptime(df.loc[0, 'min_time'], '%Y-%m-%dT%H:%M:%S.000Z')
dt2 = datetime.strptime(df.loc[0, 'max_time'], '%Y-%m-%dT%H:%M:%S.000Z')
return [dt1 + timedelta(days=x) for x in range((dt2-dt1).days + 1)]
def daily_eddies(api, day, lat1, lat2, lon1, lon2):
"""
Returns eddies at a given date (day) delimited by the spatial parameters (lat1, lat2, lon1, lon2).
"""
query = """
SELECT * FROM tblMesoscale_Eddy
WHERE
[time]='%s'
AND
lat BETWEEN %f AND %f AND
lon BETWEEN %f AND %f
""" % (day, lat1, lat2, lon1, lon2)
return api.query(query)
def match_covariate(api, table, variable, dt1, dt2, lat, del_lat, lon, del_lon, depth, del_depth):
"""
    Returns the mean and standard deviation of the variable within the eddy domain and within the background field.
"""
def has_depth(table):
return table in ['tblPisces_NRT', 'tblDarwin_Nutrient', 'tblDarwin_Ecosystem', 'tblDarwin_Phytoplankton']
query = "SELECT AVG(%s) AS %s, STDEV(%s) AS %s FROM %s " % (variable, variable, variable, variable+'_std', table)
query += "WHERE [time] BETWEEN '%s' AND '%s' AND " % (dt1, dt2)
query += "[lat] BETWEEN %f AND %f AND " % (lat-del_lat, lat+del_lat)
query += "[lon] BETWEEN %f AND %f " % (lon-del_lon, lon+del_lon)
if has_depth(table):
query += " AND [depth] BETWEEN %f AND %f " % (depth-del_depth, depth+del_depth)
try:
signal = api.query(query)
except:
return None, None, None, None
outer, inner = 4, 2
query = "SELECT AVG(%s) AS %s, STDEV(%s) AS %s FROM %s " % (variable, variable+'_bkg', variable, variable+'_bkg_std', table)
query += "WHERE [time] BETWEEN '%s' AND '%s' AND " % (dt1, dt2)
query += "[lat] BETWEEN %f AND %f AND " % (lat-outer*del_lat, lat+outer*del_lat)
query += "[lat] NOT BETWEEN %f AND %f AND " % (lat-inner*del_lat, lat+inner*del_lat)
query += "[lon] BETWEEN %f AND %f AND " % (lon-outer*del_lon, lon+outer*del_lon)
query += "[lon] NOT BETWEEN %f AND %f " % (lon-inner*del_lon, lon+inner*del_lon)
if has_depth(table):
query += "AND [depth] BETWEEN %f AND %f " % (depth-del_depth, depth+del_depth)
try:
background = api.query(query)
except:
return None, None, None, None
sig, sig_bkg = None, None
try:
if len(signal)>0: sig, sig_bkg = signal.loc[0, variable], signal.loc[0, variable+'_std']
except:
sig, sig_bkg = None, None
bkg, bkg_std = None, None
try:
if len(background)>0: bkg, bkg_std = background.loc[0, variable+'_bkg'], background.loc[0, variable+'_bkg_std']
except:
bkg, bkg_std = None, None
return sig, sig_bkg, bkg, bkg_std
def match_params():
"""
    Prepares a list of variables (and their associated tolerances) to be colocalized with eddies.
"""
Param = namedtuple('Param', ['table', 'variable', 'temporalTolerance', 'latTolerance', 'lonTolerance', 'depthTolerance'])
params = []
######## satellite
params.append(Param('tblSST_AVHRR_OI_NRT', 'sst', 0, 0.5, 0.5, 5))
# params.append(Param('tblSSS_NRT', 'sss', 0, 0.5, 0.5, 5))
params.append(Param('tblCHL_REP', 'chl', 4, 0.5, 0.5, 5))
# params.append(Param('tblModis_AOD_REP', 'AOD', 15, 1, 1, 5))
params.append(Param('tblAltimetry_REP', 'sla', 0, 0.5, 0.5, 5))
######## model
params.append(Param('tblPisces_NRT', 'NO3', 4, 0.5, 0.5, 5))
# params.append(Param('tblPisces_NRT', 'Fe', 4, 0.5, 0.5, 5))
# params.append(Param('tblPisces_NRT', 'O2', 4, 0.5, 0.5, 5))
# params.append(Param('tblPisces_NRT', 'PO4', 4, 0.5, 0.5, 5))
# params.append(Param('tblPisces_NRT', 'Si', 4, 0.5, 0.5, 5))
# params.append(Param('tblPisces_NRT', 'PP', 4, 0.5, 0.5, 5))
# params.append(Param('tblPisces_NRT', 'CHL', 4, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Nutrient', 'PO4', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Nutrient', 'SiO2', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Nutrient', 'O2', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Ecosystem', 'phytoplankton', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Ecosystem', 'zooplankton', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Ecosystem', 'CHL', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Ecosystem', 'primary_production', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Phytoplankton', 'diatom', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Phytoplankton', 'coccolithophore', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Phytoplankton', 'picoeukaryote', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Phytoplankton', 'picoprokaryote', 2, 0.5, 0.5, 5))
# params.append(Param('tblDarwin_Phytoplankton', 'mixotrophic_dinoflagellate', 2, 0.5, 0.5, 5))
tables, variables, temporalTolerance, latTolerance, lonTolerance, depthTolerance = [], [], [], [], [], []
for i in range(len(params)):
tables.append(params[i].table)
variables.append(params[i].variable)
temporalTolerance.append(params[i].temporalTolerance)
latTolerance.append(params[i].latTolerance)
lonTolerance.append(params[i].lonTolerance)
depthTolerance.append(params[i].depthTolerance)
return tables, variables, temporalTolerance, latTolerance, lonTolerance, depthTolerance
def main(y1, y2, m1, m2, d1, d2, edd_lat1, edd_lat2, edd_lon1, edd_lon2):
"""
Instantiates the API class and using the 'match_covariate()' function colocalizes the retrieved eddies
with the specified variables.
"""
api = pycmap.API()
daysDir = './export/eddy/days/'
if not os.path.exists(daysDir): os.makedirs(daysDir)
days = sparse_dates(y1, y2, m1, m2, d1, d2)
tables, variables, temporalTolerance, latTolerance, lonTolerance, depthTolerance = match_params()
for day_ind, day in enumerate(days):
eddies = daily_eddies(api, str(day), edd_lat1, edd_lat2, edd_lon1, edd_lon2)
eddies['time'] = pd.to_datetime(eddies['time'])
for variable in variables:
eddies[variable] = None
eddies[variable+'_std'] = None
eddies[variable+'_bkg'] = None
eddies[variable+'_bkg_std'] = None
print('Day %s: %d / %d' % (str(day), day_ind+1, len(days)))
for e in range(len(eddies)):
print('\tEddy %d / %d' % (e+1, len(eddies)))
for i in range(len(variables)):
# print('\t\t%d. Matching %s' % (i+1, variables[i]))
dt1 = str(eddies.loc[e, 'time'] + timedelta(days=-temporalTolerance[i]))
dt2 = str(eddies.loc[e, 'time'] + timedelta(days=temporalTolerance[i]))
lat, del_lat = eddies.loc[e, 'lat'], latTolerance[i]
lon, del_lon = eddies.loc[e, 'lon'], lonTolerance[i]
depth, del_depth = 0, depthTolerance[i]
v, v_std, bkg, bkg_std = match_covariate(api, tables[i], variables[i], dt1, dt2, lat, del_lat, lon, del_lon, depth, del_depth)
eddies.loc[e, variables[i]] = v
eddies.loc[e, variables[i]+'_std'] = v_std
eddies.loc[e, variables[i]+'_bkg'] = bkg
eddies.loc[e, variables[i]+'_bkg_std'] = bkg_std
eddies.to_csv(daysDir+str(day.date())+'.csv', index=False)
return eddies
##############################
# #
# main #
# #
##############################
if __name__ == '__main__':
### time window
y1, y2 = 2014, 2014
m1, m2 = 1, 1
d1, d2 = 1, 1
### spatial range
edd_lat1, edd_lat2 = 20, 30
edd_lon1, edd_lon2 = -160, -150
eddies = main(y1, y2, m1, m2, d1, d2, edd_lat1, edd_lat2, edd_lon1, edd_lon2)
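    # --- Follow-up sketch (added for illustration; not part of the original script) ---
    # The eddy-induced effect for each matched variable is the in-eddy mean minus the
    # local background mean computed in match_covariate(); None values become NaN here.
    for v in ['sst', 'chl', 'sla', 'NO3']:
        eddies[v + '_anomaly'] = (pd.to_numeric(eddies[v], errors='coerce')
                                  - pd.to_numeric(eddies[v + '_bkg'], errors='coerce'))
    print(eddies[['lat', 'lon', 'sst', 'sst_bkg', 'sst_anomaly']].head())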
###Output
_____no_output_____ |
2018-09-16_matt_lecture_1_problem_set_1_2_cars_versus_chairs.ipynb | ###Markdown
How can I tell the difference between a car and a chair?So I can get to work on time.
###Code
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
# Just import everything
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/chairscars/" #custom imageset grabbed from Google Images
sz = 224
#make sure images are in their standard folders
os.listdir(PATH)
#This is a car
files = os.listdir(f'{PATH}valid/cars')[:5]
img = plt.imread(f'{PATH}valid/cars/{files[0]}')
plt.imshow(img);
#This is a chair
files = os.listdir(f'{PATH}valid/chairs')[:5]
img = plt.imread(f'{PATH}valid/chairs/{files[0]}')
plt.imshow(img);
# Uncomment the below if you need to reset your precomputed activations
#shutil.rmtree(f'{PATH}tmp', ignore_errors=True)
#Train this thing
arch=resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)
# this gives prediction for validation set. Predictions are in log scale
log_preds = learn.predict()
log_preds.shape
preds = np.argmax(log_preds, axis=1) # from log probabilities to 0 or 1
probs = np.exp(log_preds[:,1]) # pr(chair)
def rand_by_mask(mask): return np.random.choice(np.where(mask)[0], min(len(preds), 4), replace=False)
def rand_by_correct(is_correct): return rand_by_mask((preds == data.val_y)==is_correct)
def plots(ims, figsize=(12,6), rows=1, titles=None):
f = plt.figure(figsize=figsize)
for i in range(len(ims)):
sp = f.add_subplot(rows, len(ims)//rows, i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i])
def load_img_id(ds, idx): return np.array(PIL.Image.open(PATH+ds.fnames[idx]))
def plot_val_with_title(idxs, title):
imgs = [load_img_id(data.val_ds,x) for x in idxs]
title_probs = [probs[x] for x in idxs]
print(title)
return plots(imgs, rows=1, titles=title_probs, figsize=(16,8)) if len(imgs)>0 else print('Not Found.')
# 1. A few correct labels at random
plot_val_with_title(rand_by_correct(True), "Correctly classified")
def most_by_mask(mask, mult):
idxs = np.where(mask)[0]
return idxs[np.argsort(mult * probs[idxs])[:4]]
def most_by_correct(y, is_correct):
mult = -1 if (y==1)==is_correct else 1
return most_by_mask(((preds == data.val_y)==is_correct) & (data.val_y == y), mult)
plot_val_with_title(most_by_correct(1, True), "Most correct chairs")
plot_val_with_title(most_by_correct(0, True), "Most correct cars")
plot_val_with_title(most_by_correct(1, False), "Most incorrect chairs")
plot_val_with_title(most_by_correct(0,False), "Most incorrect cars")
most_uncertain = np.argsort(np.abs(probs -0.5))[:5]
plot_val_with_title(most_uncertain, "Most uncertain predictions")
###Output
Most uncertain predictions
|
vpc_2018/lab/Lab_VpC_FelixRojoLapalma_002.ipynb | ###Markdown
Final Lab*Felix Rojo Lapalma* Main taskIn this notebook, we will apply transfer learning techniques to finetune the [MobileNet](https://arxiv.org/pdf/1704.04861.pdf) CNN on the [Cifar-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. ProceduresIn general, the main steps that we will follow are:1. Load data, analyze and split in *training*/*validation*/*testing* sets.2. Load CNN and analyze architecture.3. Adapt this CNN to our problem.4. Setup data augmentation techniques.5. Add some keras callbacks.6. Setup optimization algorithm with its hyperparameters.7. Train model!8. Choose best model/snapshot.9. Evaluate final model on the *testing* set.
###Code
# load libs
import os
import matplotlib.pyplot as plt
from IPython.display import SVG
# https://keras.io/applications/#documentation-for-individual-models
from keras.applications.mobilenet import MobileNet
from keras.datasets import cifar10
from keras.models import Model
from keras.utils.vis_utils import model_to_dot
from keras.layers import Dense, GlobalAveragePooling2D,Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import plot_model, to_categorical
from sklearn.model_selection import train_test_split
import cv2
import numpy as np
import tensorflow as tf
###Output
Using TensorFlow backend.
###Markdown
cuda
###Code
cuda_flag=False
if cuda_flag:
# Setup one GPU for tensorflow (don't be greedy).
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
# The GPU id to use, "0", "1", etc.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
# Limit tensorflow gpu usage.
# Maybe you should comment this lines if you run tensorflow on CPU.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.3
sess = tf.Session(config=config)
###Output
_____no_output_____
###Markdown
1. Load data, analyze and split in *training*/*validation*/*testing* sets
###Code
# Cifar-10 class names
# We will create a dictionary for each type of label
# This is a mapping from the int class name to
# their corresponding string class name
LABELS = {
0: "airplane",
1: "automobile",
2: "bird",
3: "cat",
4: "deer",
5: "dog",
6: "frog",
7: "horse",
8: "ship",
9: "truck"
}
# Load dataset from keras
(x_train_data, y_train_data), (x_test_data, y_test_data) = cifar10.load_data()
############
# [COMPLETE]
# Add some prints here to see the loaded data dimensions
############
print("Cifar-10 x_train shape: {}".format(x_train_data.shape))
print("Cifar-10 y_train shape: {}".format(y_train_data.shape))
print("Cifar-10 x_test shape: {}".format(x_test_data.shape))
print("Cifar-10 y_test shape: {}".format(y_test_data.shape))
# from https://www.cs.toronto.edu/~kriz/cifar.html
# The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.
# The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks.
# Some constants
IMG_ROWS = 32
IMG_COLS = 32
NUM_CLASSES = 10
RANDOM_STATE = 2018
############
# [COMPLETE]
# Analyze the amount of images for each class
# Plot some images to explore how they look
############
from genlib import get_classes_distribution,plot_label_per_class
for y,yt in zip([y_train_data.flatten(),y_test_data.flatten()],['Train','Test']):
print('{:>15s}'.format(yt))
get_classes_distribution(y,LABELS)
plot_label_per_class(y,LABELS)
###Output
Train
airplane : 5000 or 10.00%
automobile : 5000 or 10.00%
bird : 5000 or 10.00%
cat : 5000 or 10.00%
deer : 5000 or 10.00%
dog : 5000 or 10.00%
frog : 5000 or 10.00%
horse : 5000 or 10.00%
ship : 5000 or 10.00%
truck : 5000 or 10.00%
###Markdown
Everything looks consistent with the documentation. Let's look at the images,
###Code
from genlib import sample_images_data,plot_sample_images
for xy,yt in zip([(x_train_data,y_train_data.flatten()),(x_test_data,y_test_data.flatten())],['Train','Test']):
print('{:>15s}'.format(yt))
train_sample_images, train_sample_labels = sample_images_data(*xy,LABELS)
plot_sample_images(train_sample_images, train_sample_labels,LABELS)
############
# [COMPLETE]
# Split training set in train/val sets
# Use the sampling method that you want
############
#init seed
np.random.seed(seed=RANDOM_STATE)
#
VAL_FRAC=0.2
TRAIN_FRAC=(1-VAL_FRAC)
TRAIN_SIZE_BFV=x_train_data.shape[0]
TRAIN_SAMPLES=int(TRAIN_FRAC*TRAIN_SIZE_BFV)
# Get Index
train_idxs = np.random.choice(np.arange(TRAIN_SIZE_BFV), size=TRAIN_SAMPLES, replace=False)
val_idx = np.setdiff1d(np.arange(TRAIN_SIZE_BFV), train_idxs)  # complement of the training indices
# Split
x_val_data = x_train_data[val_idx, :, :, :]
y_val_data = y_train_data[val_idx]
x_train_data = x_train_data[train_idxs, :, :, :]
y_train_data = y_train_data[train_idxs]
####
print("Cifar-10 x_train shape: {}".format(x_train_data.shape))
print("Cifar-10 y_train shape: {}".format(y_train_data.shape))
print("Cifar-10 x_val shape: {}".format(x_val_data.shape))
print("Cifar-10 y_val shape: {}".format(y_val_data.shape))
print("Cifar-10 x_test shape: {}".format(x_test_data.shape))
print("Cifar-10 y_test shape: {}".format(y_test_data.shape))
###Output
Cifar-10 x_train shape: (40000, 32, 32, 3)
Cifar-10 y_train shape: (40000, 1)
Cifar-10 x_val shape: (10000, 32, 32, 3)
Cifar-10 y_val shape: (10000, 1)
Cifar-10 x_test shape: (10000, 32, 32, 3)
Cifar-10 y_test shape: (10000, 1)
###Markdown
Let's check whether the Train and Validation sets remained balanced
###Code
for y,yt in zip([y_train_data.flatten(),y_val_data.flatten()],['Train','Validation']):
print('{:>15s}'.format(yt))
get_classes_distribution(y,LABELS)
plot_label_per_class(y,LABELS)
# In order to use the MobileNet CNN pre-trained on imagenet, we have
# to resize our images to have one of the following static square shapes: [(128, 128),
# (160, 160), (192, 192), or (224, 224)].
# If we try to resize the whole dataset it will not fit in memory, so we have to save all
# the images to disk, and then when loading those images, our datagenerator will resize them
# to the desired shape on-the-fly.
############
# [COMPLETE]
# Use the above function to save all your data, e.g.:
# save_to_disk(x_train, y_train, 'train', 'cifar10_images')
# save_to_disk(x_val, y_val, 'val', 'cifar10_images')
# save_to_disk(x_test, y_test, 'test', 'cifar10_images')
############
from genlib import save_to_disk
save_to_disk(x_train_data, y_train_data, 'train', output_dir='cifar10_images')
save_to_disk(x_val_data, y_val_data, 'val', output_dir='cifar10_images')
save_to_disk(x_test_data, y_test_data, 'test', output_dir='cifar10_images')
###Output
_____no_output_____
###Markdown
2. Load CNN and analyze architecture
###Code
#Model
NO_EPOCHS = 25
BATCH_SIZE = 32
NET_IMG_ROWS = 128
NET_IMG_COLS = 128
############
# [COMPLETE]
# Use the MobileNet class from Keras to load your base model, pre-trained on imagenet.
# We want to load the pre-trained weights, but without the classification layer.
# Check the notebook '3_transfer-learning' or https://keras.io/applications/#mobilenet to get more
# info about how to load this network properly.
############
#Note that this model only supports the data format 'channels_last' (height, width, channels).
#The default input size for this model is 224x224.
base_model = MobileNet(input_shape=(NET_IMG_ROWS, NET_IMG_COLS, 3), # Input image size
weights='imagenet', # Use imagenet pre-trained weights
include_top=False, # Drop classification layer
pooling='avg') # Global AVG pooling for the
# output feature vector
###Output
_____no_output_____
###Markdown
3. Adapt this CNN to our problem
###Code
############
# [COMPLETE]
# Having the CNN loaded, now we have to add some layers to adapt this network to our
# classification problem.
# We can choose to finetune just the newly added layers, some particular layers, or all the layers of the
# model. Play with different settings and compare the results.
############
# get the output feature vector from the base model
x = base_model.output
# let's add a fully-connected layer
x = Dense(1024, activation='relu')(x)
# Add Drop Out Layer
x=Dropout(0.25)(x)
# and a logistic layer
predictions = Dense(NUM_CLASSES, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# Initial Model Summary
model.summary()
model_png=False
if model_png:
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(model.layers):
print(i, layer.name)
# At this stage we do not intend to train all the layers, only the newly added ones
for layer in model.layers[:88]:
layer.trainable = False
for layer in model.layers[88:]:
layer.trainable = True
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 128, 128, 3) 0
_________________________________________________________________
conv1_pad (ZeroPadding2D) (None, 129, 129, 3) 0
_________________________________________________________________
conv1 (Conv2D) (None, 64, 64, 32) 864
_________________________________________________________________
conv1_bn (BatchNormalization (None, 64, 64, 32) 128
_________________________________________________________________
conv1_relu (ReLU) (None, 64, 64, 32) 0
_________________________________________________________________
conv_dw_1 (DepthwiseConv2D) (None, 64, 64, 32) 288
_________________________________________________________________
conv_dw_1_bn (BatchNormaliza (None, 64, 64, 32) 128
_________________________________________________________________
conv_dw_1_relu (ReLU) (None, 64, 64, 32) 0
_________________________________________________________________
conv_pw_1 (Conv2D) (None, 64, 64, 64) 2048
_________________________________________________________________
conv_pw_1_bn (BatchNormaliza (None, 64, 64, 64) 256
_________________________________________________________________
conv_pw_1_relu (ReLU) (None, 64, 64, 64) 0
_________________________________________________________________
conv_pad_2 (ZeroPadding2D) (None, 65, 65, 64) 0
_________________________________________________________________
conv_dw_2 (DepthwiseConv2D) (None, 32, 32, 64) 576
_________________________________________________________________
conv_dw_2_bn (BatchNormaliza (None, 32, 32, 64) 256
_________________________________________________________________
conv_dw_2_relu (ReLU) (None, 32, 32, 64) 0
_________________________________________________________________
conv_pw_2 (Conv2D) (None, 32, 32, 128) 8192
_________________________________________________________________
conv_pw_2_bn (BatchNormaliza (None, 32, 32, 128) 512
_________________________________________________________________
conv_pw_2_relu (ReLU) (None, 32, 32, 128) 0
_________________________________________________________________
conv_dw_3 (DepthwiseConv2D) (None, 32, 32, 128) 1152
_________________________________________________________________
conv_dw_3_bn (BatchNormaliza (None, 32, 32, 128) 512
_________________________________________________________________
conv_dw_3_relu (ReLU) (None, 32, 32, 128) 0
_________________________________________________________________
conv_pw_3 (Conv2D) (None, 32, 32, 128) 16384
_________________________________________________________________
conv_pw_3_bn (BatchNormaliza (None, 32, 32, 128) 512
_________________________________________________________________
conv_pw_3_relu (ReLU) (None, 32, 32, 128) 0
_________________________________________________________________
conv_pad_4 (ZeroPadding2D) (None, 33, 33, 128) 0
_________________________________________________________________
conv_dw_4 (DepthwiseConv2D) (None, 16, 16, 128) 1152
_________________________________________________________________
conv_dw_4_bn (BatchNormaliza (None, 16, 16, 128) 512
_________________________________________________________________
conv_dw_4_relu (ReLU) (None, 16, 16, 128) 0
_________________________________________________________________
conv_pw_4 (Conv2D) (None, 16, 16, 256) 32768
_________________________________________________________________
conv_pw_4_bn (BatchNormaliza (None, 16, 16, 256) 1024
_________________________________________________________________
conv_pw_4_relu (ReLU) (None, 16, 16, 256) 0
_________________________________________________________________
conv_dw_5 (DepthwiseConv2D) (None, 16, 16, 256) 2304
_________________________________________________________________
conv_dw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024
_________________________________________________________________
conv_dw_5_relu (ReLU) (None, 16, 16, 256) 0
_________________________________________________________________
conv_pw_5 (Conv2D) (None, 16, 16, 256) 65536
_________________________________________________________________
conv_pw_5_bn (BatchNormaliza (None, 16, 16, 256) 1024
_________________________________________________________________
conv_pw_5_relu (ReLU) (None, 16, 16, 256) 0
_________________________________________________________________
conv_pad_6 (ZeroPadding2D) (None, 17, 17, 256) 0
_________________________________________________________________
conv_dw_6 (DepthwiseConv2D) (None, 8, 8, 256) 2304
_________________________________________________________________
conv_dw_6_bn (BatchNormaliza (None, 8, 8, 256) 1024
_________________________________________________________________
conv_dw_6_relu (ReLU) (None, 8, 8, 256) 0
_________________________________________________________________
conv_pw_6 (Conv2D) (None, 8, 8, 512) 131072
_________________________________________________________________
conv_pw_6_bn (BatchNormaliza (None, 8, 8, 512) 2048
_________________________________________________________________
conv_pw_6_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_dw_7 (DepthwiseConv2D) (None, 8, 8, 512) 4608
_________________________________________________________________
conv_dw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048
_________________________________________________________________
conv_dw_7_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_pw_7 (Conv2D) (None, 8, 8, 512) 262144
_________________________________________________________________
conv_pw_7_bn (BatchNormaliza (None, 8, 8, 512) 2048
_________________________________________________________________
conv_pw_7_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_dw_8 (DepthwiseConv2D) (None, 8, 8, 512) 4608
_________________________________________________________________
conv_dw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048
_________________________________________________________________
conv_dw_8_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_pw_8 (Conv2D) (None, 8, 8, 512) 262144
_________________________________________________________________
conv_pw_8_bn (BatchNormaliza (None, 8, 8, 512) 2048
_________________________________________________________________
conv_pw_8_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_dw_9 (DepthwiseConv2D) (None, 8, 8, 512) 4608
_________________________________________________________________
conv_dw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048
_________________________________________________________________
conv_dw_9_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_pw_9 (Conv2D) (None, 8, 8, 512) 262144
_________________________________________________________________
conv_pw_9_bn (BatchNormaliza (None, 8, 8, 512) 2048
_________________________________________________________________
conv_pw_9_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_dw_10 (DepthwiseConv2D) (None, 8, 8, 512) 4608
_________________________________________________________________
conv_dw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048
_________________________________________________________________
conv_dw_10_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_pw_10 (Conv2D) (None, 8, 8, 512) 262144
_________________________________________________________________
conv_pw_10_bn (BatchNormaliz (None, 8, 8, 512) 2048
_________________________________________________________________
conv_pw_10_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_dw_11 (DepthwiseConv2D) (None, 8, 8, 512) 4608
_________________________________________________________________
conv_dw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048
_________________________________________________________________
conv_dw_11_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_pw_11 (Conv2D) (None, 8, 8, 512) 262144
_________________________________________________________________
conv_pw_11_bn (BatchNormaliz (None, 8, 8, 512) 2048
_________________________________________________________________
conv_pw_11_relu (ReLU) (None, 8, 8, 512) 0
_________________________________________________________________
conv_pad_12 (ZeroPadding2D) (None, 9, 9, 512) 0
_________________________________________________________________
conv_dw_12 (DepthwiseConv2D) (None, 4, 4, 512) 4608
_________________________________________________________________
conv_dw_12_bn (BatchNormaliz (None, 4, 4, 512) 2048
_________________________________________________________________
conv_dw_12_relu (ReLU) (None, 4, 4, 512) 0
_________________________________________________________________
conv_pw_12 (Conv2D) (None, 4, 4, 1024) 524288
_________________________________________________________________
conv_pw_12_bn (BatchNormaliz (None, 4, 4, 1024) 4096
_________________________________________________________________
conv_pw_12_relu (ReLU) (None, 4, 4, 1024) 0
_________________________________________________________________
conv_dw_13 (DepthwiseConv2D) (None, 4, 4, 1024) 9216
_________________________________________________________________
conv_dw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096
_________________________________________________________________
conv_dw_13_relu (ReLU) (None, 4, 4, 1024) 0
_________________________________________________________________
conv_pw_13 (Conv2D) (None, 4, 4, 1024) 1048576
_________________________________________________________________
conv_pw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096
_________________________________________________________________
conv_pw_13_relu (ReLU) (None, 4, 4, 1024) 0
_________________________________________________________________
global_average_pooling2d_1 ( (None, 1024) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 1049600
_________________________________________________________________
dropout_1 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 10250
=================================================================
Total params: 4,288,714
Trainable params: 1,059,850
Non-trainable params: 3,228,864
_________________________________________________________________
###Markdown
4. Setup data augmentation techniques
###Code
############
# [COMPLETE]
# Use data augmentation to train your model.
# Use the Keras ImageDataGenerator class for this purpose.
# Note: Given that we want to load our images from disk, instead of using
# ImageDataGenerator.flow method, we have to use ImageDataGenerator.flow_from_directory
# method in the following way:
# generator_train = dataget_train.flow_from_directory('resized_images/train',
# target_size=(128, 128), batch_size=32)
# generator_val = dataget_train.flow_from_directory('resized_images/val',
# target_size=(128, 128), batch_size=32)
# Note that we have to resize our images to finetune the MobileNet CNN, this is done using
# the target_size argument in flow_from_directory. Remember to set the target_size to one of
# the valid listed here: [(128, 128), (160, 160), (192, 192), or (224, 224)].
############
data_get=ImageDataGenerator()
generator_train = data_get.flow_from_directory(directory='cifar10_images/train',
target_size=(128, 128), batch_size=BATCH_SIZE)
generator_val = data_get.flow_from_directory(directory='cifar10_images/val',
target_size=(128, 128), batch_size=BATCH_SIZE)
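# --- Optional sketch (added for illustration; not used below). The same generators could be
# built from an augmented ImageDataGenerator; the transform ranges here are assumptions,
# not tuned values.
data_get_aug = ImageDataGenerator(rotation_range=15,
                                  width_shift_range=0.1,
                                  height_shift_range=0.1,
                                  horizontal_flip=True)
# e.g. generator_train = data_get_aug.flow_from_directory(directory='cifar10_images/train',
#                                                         target_size=(128, 128), batch_size=BATCH_SIZE)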
###Output
Found 40000 images belonging to 10 classes.
Found 10000 images belonging to 10 classes.
###Markdown
5. Add some keras callbacks
###Code
############
# [COMPLETE]
# Load and set some Keras callbacks here!
############
EXP_ID='experiment_002/'
from keras.callbacks import ModelCheckpoint, TensorBoard
if not os.path.exists(EXP_ID):
os.makedirs(EXP_ID)
callbacks = [
ModelCheckpoint(filepath=os.path.join(EXP_ID, 'weights.{epoch:02d}-{val_loss:.2f}.hdf5'),
monitor='val_loss',
verbose=1,
save_best_only=False,
save_weights_only=False,
mode='auto'),
TensorBoard(log_dir=os.path.join(EXP_ID, 'logs'),
write_graph=True,
write_images=False)
]
###Output
_____no_output_____
###Markdown
6. Setup optimization algorithm with their hyperparameters
###Code
############
# [COMPLETE]
# Choose some optimization algorithm and explore different hyperparameters.
# Compile your model.
############
from keras.optimizers import SGD
from keras.losses import categorical_crossentropy
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
loss='categorical_crossentropy',
metrics=['accuracy'])
#model.compile(loss=categorical_crossentropy,
# optimizer='adam',
# metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
7. Train model!
###Code
############
# [COMPLETE]
# Use fit_generator to train your model.
# e.g.:
# model.fit_generator(
# generator_train,
# epochs=50,
# validation_data=generator_val,
# steps_per_epoch=generator_train.n // 32,
# validation_steps=generator_val.n // 32)
############
model.fit_generator(generator_train,
epochs=NO_EPOCHS,
validation_data=generator_val,
steps_per_epoch=generator_train.n // BATCH_SIZE,
validation_steps=generator_val.n // BATCH_SIZE,
callbacks=callbacks)
###Output
Epoch 1/25
1250/1250 [==============================] - 5004s 4s/step - loss: 1.5383 - acc: 0.4734 - val_loss: 2.4793 - val_acc: 0.1521
Epoch 00001: saving model to experiment_002/weights.01-2.48.hdf5
Epoch 2/25
1250/1250 [==============================] - 5001s 4s/step - loss: 1.0014 - acc: 0.6584 - val_loss: 2.4961 - val_acc: 0.1635
Epoch 00002: saving model to experiment_002/weights.02-2.50.hdf5
Epoch 3/25
1250/1250 [==============================] - 5013s 4s/step - loss: 0.8758 - acc: 0.7000 - val_loss: 2.5085 - val_acc: 0.1712
Epoch 00003: saving model to experiment_002/weights.03-2.51.hdf5
Epoch 4/25
1250/1250 [==============================] - 5022s 4s/step - loss: 0.8147 - acc: 0.7207 - val_loss: 2.5849 - val_acc: 0.1679
Epoch 00004: saving model to experiment_002/weights.04-2.58.hdf5
Epoch 5/25
1250/1250 [==============================] - 5180s 4s/step - loss: 0.7736 - acc: 0.7317 - val_loss: 2.5298 - val_acc: 0.1741
Epoch 00005: saving model to experiment_002/weights.05-2.53.hdf5
Epoch 6/25
1250/1250 [==============================] - 5244s 4s/step - loss: 0.7384 - acc: 0.7424 - val_loss: 2.5776 - val_acc: 0.1698
Epoch 00006: saving model to experiment_002/weights.06-2.58.hdf5
Epoch 7/25
1250/1250 [==============================] - 5243s 4s/step - loss: 0.7177 - acc: 0.7512 - val_loss: 2.6164 - val_acc: 0.1713
Epoch 00007: saving model to experiment_002/weights.07-2.62.hdf5
Epoch 8/25
1250/1250 [==============================] - 5275s 4s/step - loss: 0.7019 - acc: 0.7584 - val_loss: 2.5811 - val_acc: 0.1747
Epoch 00008: saving model to experiment_002/weights.08-2.58.hdf5
Epoch 9/25
1250/1250 [==============================] - 5290s 4s/step - loss: 0.6847 - acc: 0.7616 - val_loss: 2.6260 - val_acc: 0.1737
Epoch 00009: saving model to experiment_002/weights.09-2.63.hdf5
Epoch 10/25
1250/1250 [==============================] - 5281s 4s/step - loss: 0.6741 - acc: 0.7644 - val_loss: 2.5844 - val_acc: 0.1747
Epoch 00010: saving model to experiment_002/weights.10-2.58.hdf5
Epoch 11/25
1250/1250 [==============================] - 5307s 4s/step - loss: 0.6558 - acc: 0.7721 - val_loss: 2.6508 - val_acc: 0.1695
Epoch 00011: saving model to experiment_002/weights.11-2.65.hdf5
Epoch 12/25
1250/1250 [==============================] - 5327s 4s/step - loss: 0.6495 - acc: 0.7766 - val_loss: 2.6879 - val_acc: 0.1722
Epoch 00012: saving model to experiment_002/weights.12-2.69.hdf5
Epoch 13/25
1250/1250 [==============================] - 5090s 4s/step - loss: 0.6410 - acc: 0.7801 - val_loss: 2.7010 - val_acc: 0.1737
Epoch 00013: saving model to experiment_002/weights.13-2.70.hdf5
Epoch 14/25
1/1250 [..............................] - ETA: 3:17:38 - loss: 0.3864 - acc: 0.8438
###Markdown
8. Choose best model/snapshot
###Code
############
# [COMPLETE]
# Analyze and compare your results. Choose the best model and snapshot,
# and justify your choice.
############
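# --- A minimal sketch added for illustration (assumption: checkpoints were saved by the
# ModelCheckpoint callback above as 'weights.{epoch:02d}-{val_loss:.2f}.hdf5' inside EXP_ID).
# Pick the snapshot with the lowest validation loss encoded in its filename and reload it.
import re
from keras.models import load_model

def best_checkpoint(exp_dir):
    # parse 'weights.<epoch>-<val_loss>.hdf5' and keep the file with the smallest val_loss
    pattern = re.compile(r'weights\.(\d+)-(\d+\.\d+)\.hdf5')
    candidates = []
    for fn in os.listdir(exp_dir):
        m = pattern.match(fn)
        if m:
            candidates.append((float(m.group(2)), os.path.join(exp_dir, fn)))
    return min(candidates)[1] if candidates else None

best_path = best_checkpoint(EXP_ID)
print('Best snapshot by validation loss:', best_path)
if best_path is not None:
    # depending on the Keras version, custom_objects may be required for the MobileNet layers
    model = load_model(best_path)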
###Output
_____no_output_____
###Markdown
9. Evaluate final model on the *testing* set
###Code
############
# [COMPLETE]
# Evaluate your model on the testing set.
############
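# --- A minimal sketch added for illustration (assumptions: the test images were saved to
# 'cifar10_images/test' earlier, and 'model' holds the snapshot chosen in the previous step).
generator_test = data_get.flow_from_directory(directory='cifar10_images/test',
                                              target_size=(128, 128),
                                              batch_size=BATCH_SIZE,
                                              shuffle=False)
test_loss, test_acc = model.evaluate_generator(generator_test,
                                               steps=generator_test.n // BATCH_SIZE)
print('Test loss: {:.4f} - Test accuracy: {:.4f}'.format(test_loss, test_acc))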
###Output
_____no_output_____ |
example/keyphrase-similarity/load-keyphrase-similarity.ipynb | ###Markdown
Keyphrase similarityFinetuning transformers to calculate similarity between sentences and keyphrases. This tutorial is available as an IPython notebook at [Malaya/example/keyphrase-similarity](https://github.com/huseinzol05/Malaya/tree/master/example/keyphrase-similarity). This module was trained on both standard and local (including social media) language structures, so it is safe to use for both.
###Code
import malaya
import numpy as np
###Output
_____no_output_____
###Markdown
List available Transformer models
###Code
malaya.keyword_extraction.available_transformer()
###Output
INFO:root:tested on 20% test set.
###Markdown
We trained on [Twitter Keyphrase Bahasa](https://github.com/huseinzol05/Malay-Dataset/tree/master/keyphrase/twitter-bahasa) and [Malaysia Entities](https://github.com/huseinzol05/Malay-Datasetmalaysia-entities).Example training set,
###Code
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/keyphrase/twitter-bahasa/topics.json
import json
with open('topics.json') as fopen:
topics = set(json.load(fopen).keys())
list_topics = list(topics)
len(list_topics)
import random
def get_data(data):
if len(set(data[1]) & topics) and random.random() > 0.2:
t = random.choice(data[1])
label = 1
else:
s = (set(data[1]) | set())
t = random.choice(list(topics - s))
label = 0
return data[0], t, label
data = ('Peguam dikuarantin, kes 1MDB ditangguh', ['najib razak'])
get_data(data)
###Output
_____no_output_____
###Markdown
Sometimes it will return a random topic from the corpus and assign the label `0`.
###Code
get_data(data)
###Output
_____no_output_____
###Markdown
Load transformer model```pythondef transformer(model: str = 'bert', quantized: bool = False, **kwargs): """ Load Transformer keyword similarity model. Parameters ---------- model : str, optional (default='bert') Model architecture supported. Allowed values: * ``'bert'`` - Google BERT BASE parameters. * ``'tiny-bert'`` - Google BERT TINY parameters. * ``'xlnet'`` - Google XLNET BASE parameters. * ``'alxlnet'`` - Malaya ALXLNET BASE parameters. quantized : bool, optional (default=False) if True, will load 8-bit quantized model. Quantized model not necessary faster, totally depends on the machine. Returns ------- result: model List of model classes: * if `bert` in model, will return `malaya.model.bert.KeyphraseBERT`. * if `xlnet` in model, will return `malaya.model.xlnet.KeyphraseXLNET`. """```
###Code
tiny_bert = malaya.keyword_extraction.transformer(model = 'tiny-bert')
alxlnet = malaya.keyword_extraction.transformer(model = 'alxlnet')
# !wget https://raw.githubusercontent.com/huseinzol05/Malay-Dataset/master/keyphrase/twitter-bahasa/testset-keyphrase.json
with open('testset-keyphrase.json') as fopen:
testset = json.load(fopen)
testset[:10]
###Output
_____no_output_____
###Markdown
predict batch of strings with probability```pythondef predict_proba(self, strings_left: List[str], strings_right: List[str]): """ calculate similarity for two different batch of texts. Parameters ---------- string_left : List[str] string_right : List[str] Returns ------- result : List[float] """```You need to give a list of left strings and a list of right strings.The first left string will be compared with the first right string, and so on.The similarity model only supports `predict_proba`.
###Code
texts, keyphrases, labels = [], [], []
for i in range(10):
texts.append(testset[i][0])
keyphrases.append(testset[i][1])
labels.append(testset[i][2])
np.around(tiny_bert.predict_proba(texts, keyphrases))
np.around(alxlnet.predict_proba(texts, keyphrases))
np.around(tiny_bert.predict_proba(texts, keyphrases)) == np.array(labels)
np.around(alxlnet.predict_proba(texts, keyphrases)) == np.array(labels)
###Output
_____no_output_____
###Markdown
VectorizeLet's say you want to visualize sentences in a lower dimension; you can use `model.vectorize`,```pythondef vectorize(self, strings: List[str]): """ Vectorize list of strings. Parameters ---------- strings : List[str] Returns ------- result: np.array """```
###Code
v_texts = tiny_bert.vectorize(texts)
v_keyphrases = tiny_bert.vectorize(keyphrases)
v_texts.shape, v_keyphrases.shape
from sklearn.metrics.pairwise import cosine_similarity
similarities = cosine_similarity(v_keyphrases, v_texts)
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
plt.figure(figsize = (7, 7))
g = sns.heatmap(
similarities,
cmap = 'Blues',
xticklabels = keyphrases,
yticklabels = texts,
annot = True,
)
plt.show()
v_texts = alxlnet.vectorize(texts)
v_keyphrases = alxlnet.vectorize(keyphrases)
v_texts.shape, v_keyphrases.shape
similarities = cosine_similarity(v_keyphrases, v_texts)
plt.figure(figsize = (7, 7))
g = sns.heatmap(
similarities,
cmap = 'Blues',
xticklabels = keyphrases,
yticklabels = texts,
annot = True,
)
plt.show()
text = 'Peguam dikuarantin, kes 1MDB ditangguh'
label = 'najib razak'
v = tiny_bert.vectorize([text, label])
cosine_similarity(v)
v = alxlnet.vectorize([text, label])
cosine_similarity(v)
###Output
_____no_output_____ |
test_multipanel_figs.ipynb | ###Markdown
Making multipanel plots with matplotlibFirst we import numpy and matplotlib
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Then we define an array of angles and their sines and cosines using numpy. This time we will use linspace
###Code
x = np.linspace(0, 2*np.pi, 100)
print(x[-1], 2*np.pi)
y = np.sin(x)
z = np.cos(x)
w = np.sin(4*x)
v = np.cos(4*x)
#call subplots to generate a multipanel fig. This means 1 row, 2 columns of fig
f, axarr = plt.subplots(1,2)
#treat axarr as an array, from left to right
#first panel
axarr[0].plot(x,y)
axarr[0].set_xlabel('x')
axarr[0].set_ylabel('sin(x)')
axarr[0].set_title(r'$\sin(x)$')
#second panel
axarr[1].plot(x,z)
axarr[1].set_xlabel('x')
axarr[1].set_ylabel('cos(x)')
axarr[1].set_title(r'$\cos(x)$')
#add more space b/w the figs.
f.subplots_adjust(wspace=0.4)
#fix the axis ratio
#here are two possible options
axarr[0].set_aspect('equal') # make the ratio of the ticks equal, a bit counter-intuitive
axarr[1].set_aspect(np.pi) # make a square by setting the aspect to be the ratio of the tick unit range
#adjust the size of the figure
fig = plt.figure(figsize=(6, 6))
plt.plot(x, y, label=r'$y = \sin(x)$') #add a label to line
plt.plot(x, z, label=r'$y = \cos(x)$')
plt.plot(x, w, label=r'$y = \sin(4x)$')
plt.plot(x, v, label=r'$y = \cos(4x)$')
plt.xlabel(r'$x$') #note set_xlabel vs. xlabel
plt.ylabel(r'$y(x)$')
plt.xlim([0,2*np.pi])
plt.ylim([-1.2,1.2])
plt.legend(loc=1, framealpha=0.95) #add legend with semitransparent frame
#fix the axis ratio
plt.gca().set_aspect(np.pi/1.2) #use gca to get curent axis
###Output
_____no_output_____ |
Detecting Named Entities using NLTK and spaCy.ipynb | ###Markdown
Detecting Named Entities using NLTK and spaCy Our two favorite imports 😛
###Code
import spacy, nltk
###Output
_____no_output_____
###Markdown
And good old pandas
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Read in the text
###Code
text = open("data/testfile.txt", "rb",).read()
###Output
_____no_output_____
###Markdown
Load spaCy
###Code
nlp = spacy.load("en")
###Output
_____no_output_____
###Markdown
Run spaCy pipeline
###Code
doc = nlp(unicode(text))
###Output
_____no_output_____
###Markdown
View named entitiesSee: https://spacy.io/docs/usage/entity-recognition
###Code
pd.DataFrame([(x.text, x.label_, ) for x in doc.ents], columns = ["Entity", "Entity Type"]).head()
###Output
_____no_output_____
###Markdown
View POS tags
###Code
df_pos = pd.DataFrame([(x.text, x.ent_type_, x.tag_, x.pos_) for x in doc], columns = ["Token", "Entity Type", "Tag", "Part of Speech"])
df_pos.head()
###Output
_____no_output_____
###Markdown
Identifying domain-specific termsOne way we could extract programming language names and such domain-specific terms is by looking for proper nouns. However, this would merely identify single-word terms; we would miss out terms such as _Ruby on Rails_. Also, we would still have to have a master list to compare and identify our terms of interest from the proper nouns list. The noun chunks list does not contain the term _Ruby on Rails_ either. Proper nouns
###Code
df_pos[df_pos["Part of Speech"] == "PROPN"]
###Output
_____no_output_____
###Markdown
Noun chunks
###Code
list(doc.noun_chunks)
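# (Added sketch, not in the original notebook.) One way to also catch multi-word
# terms such as "Ruby on Rails" is a PhraseMatcher built from a small master
# list -- assuming a spaCy version with the v2-style matcher API; the term list
# here is purely hypothetical.
from spacy.matcher import PhraseMatcher
terms = ["Ruby on Rails", "Python", "JavaScript"]
matcher = PhraseMatcher(nlp.vocab)
matcher.add("TECH_TERM", None, *[nlp(term) for term in terms])
[doc[start:end].text for match_id, start, end in matcher(doc)]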
###Output
_____no_output_____ |
set_parameters.ipynb | ###Markdown
Specify the pipeline parameters
###Code
import os
import json

# `os`, `json`, and the `sysparams` snapshot are assumed to come from an earlier
# setup cell in the original workflow; they are added here so this cell runs on
# its own. `sysparams` records the variable names that already exist, so that
# only the parameters defined below end up in the JSON file.
sysparams = set(vars().keys())

parameter_fn = 'parameters.json'
db_filename = "path/to/database/cosmos.sqlite"
workflow_name = 'CARE_test'
input_dir = "path/to/the/directory/with/data" # folder with the prepared input data
output_dir = "path/to/the/directory/to/save/results/" + workflow_name
data_dir = 'CARE_data'
npz_dir = 'datagen'
model_dir = 'CARE_models'
accuracy_dir = 'accuracy'
accuracy_fn = 'accuracy.csv'
n_jobs = 30
name_high='high_deconvolved'
name_low='low_raw'
name_train='train'
name_validation='validation'
name_test='test'
axes = 'ZYX'
save_training_history = False
patch_size = [(16, 32, 32),
(32, 64, 64)]
n_patches_per_image = [20]
train_epochs = 10
train_steps_per_epoch = [10]
train_batch_size = [8]
parameter_fn = os.path.abspath(parameter_fn)
args = vars().copy()
keys = [key for key in args.keys()
if not (key.startswith('_') or key in sysparams or key == 'sysparams')]
params = {key: args[key] for key in keys}
os.makedirs(os.path.dirname(parameter_fn), exist_ok=True)
with open(parameter_fn, 'w') as f:
json.dump(params, f, indent=4)
params
###Output
_____no_output_____
###Markdown
Running the workflowTo run locally, run `python cosmos_workflow.py -p parameters.json`To generate a bsub command to submit to an lsf cluster, specify the following parameters:
###Code
n_cores = 30
n_gpus = 2
queue = 'dgx'
max_mem_gb = 300
python_path = "'/path/to/your/python/environment/bin/:$PATH'"
parameter_fn = os.path.abspath(parameter_fn)
code_path = os.path.abspath('cosmos_workflow.py')
command = rf'python {code_path} -p {parameter_fn} -g {n_gpus} -c {n_cores}'
command = f'bsub -P CARE -J CARE -q {queue} -n {n_cores} -gpu "num={n_gpus}:mode=exclusive_process"'\
f' -R "rusage[mem={int(max_mem_gb/n_cores)}G]" "export PATH={python_path}; {command}"'
print(command)
###Output
bsub -P CARE -J CARE -q dgx -n 30 -gpu "num=2:mode=exclusive_process" -R "rusage[mem=10G]" "export PATH='/path/to/your/python/environment/bin/:$PATH'; python /research/sharedresources/cbi/common/Anna/codes/COSMOS-CARE/cosmos_workflow.py -p /research/sharedresources/cbi/common/Anna/codes/COSMOS-CARE/parameters.json -g 2 -c 30"
|
tutorials/01 Basic functionality.ipynb | ###Markdown
Wotan: Basic functionalityTo illustrate the easiest use case, we generate some noisy synthetic data which includes *signals* we hope to preserve:
###Code
import numpy as np
points = 1000
time = np.linspace(0, 15, points)
flux = 1 + ((np.sin(time) + time / 10 + time**1.5 / 100) / 1000)
noise = np.random.normal(0, 0.0001, points)
flux += noise
for i in range(points):
if i % 75 == 0:
flux[i:i+5] -= 0.0004 # Add some transits
flux[i+50:i+52] += 0.0002 # and flares
flux[400:500] = np.nan # a data gap
###Output
_____no_output_____
###Markdown
Use wotan to detrend. Without a method specified, the default (sliding biweight) is used.
###Code
from wotan import flatten
flatten_lc, trend_lc = flatten(time, flux, window_length=0.5, return_trend=True)
###Output
_____no_output_____
###Markdown
Plot the result:
###Code
import matplotlib.pyplot as plt
%matplotlib notebook
plt.scatter(time, flux, s=1, color='black')
plt.plot(time, trend_lc, linewidth=2, color='red')
plt.xlabel('Time (days)')
plt.ylabel('Raw flux')
plt.xlim(0, 15)
plt.ylim(0.999, 1.0035);
plt.show();
###Output
_____no_output_____
###Markdown
To show the detrended lightcurve, we could of course divide the raw flux by the red trend line. As wotan provides this directly as ``flatten_lc``, we can just plot that:
###Code
plt.close()
plt.scatter(time, flatten_lc, s=1, color='black')
plt.xlim(0, 15)
plt.ylim(0.999, 1.001)
plt.xlabel('Time (days)')
plt.ylabel('Detrended flux')
plt.show();
###Output
_____no_output_____
###Markdown
Using ``window_length`` to find the right balance between overfitting and underfittingSo far, we used ``window_length=0.5`` without justification. We can explore other window sizes:
###Code
flatten_lc1, trend_lc1 = flatten(time, flux, window_length=0.2, return_trend=True)
flatten_lc2, trend_lc2 = flatten(time, flux, window_length=0.5, return_trend=True)
flatten_lc3, trend_lc3 = flatten(time, flux, window_length=1, return_trend=True)
plt.close()
plt.scatter(time, flux, s=1, color='black')
plt.plot(time, trend_lc1, linewidth=2, color='blue') # overfit
plt.plot(time, trend_lc2, linewidth=2, color='red') # about right
plt.plot(time, trend_lc3, linewidth=2, color='orange') # underfit
plt.xlim(0, 2)
plt.ylim(0.9995, 1.0015)
plt.xlabel('Time (days)')
plt.ylabel('Raw flux')
plt.show();
###Output
_____no_output_____
###Markdown
- We can see that the shortest (blue) trend with ``window_length=0.2`` produces an *overfit*. This is visible near ``t=1.25`` where part of the transit signal is removed.- The red trend line (``window_length=0.5``) seems about right- The orange trend line (``window_length=1``) is probably an underfit, because it doesn't readily adjust near the beginning of the time series. Remove edgesIs the feature right at the start a signal that we want to keep? A visual examination is inconclusive. For the purpose of a blind transit search, it is (slightly) preferable to remove edges. We can do this with wotan:
###Code
flatten_lc1, trend_lc1 = flatten(time, flux, window_length=0.5, return_trend=True)
flatten_lc2, trend_lc2 = flatten(time, flux, window_length=0.5, edge_cutoff=0.5, return_trend=True)
plt.close()
plt.scatter(time, flux, s=1, color='black')
plt.plot(time, trend_lc1, linewidth=2, color='blue', linestyle='dashed')
plt.plot(time, trend_lc2, linewidth=2, color='red')
plt.xlim(0, 2)
plt.ylim(0.9995, 1.0015)
plt.xlabel('Time (days)')
plt.ylabel('Raw flux')
plt.show();
###Output
_____no_output_____
###Markdown
Note that we set ``edge_cutoff=0.5``, but only 0.25 are removed -- the maximum is half a window length, because for anything longer than that, the window is fully filled and thus the trend is optimal. Handling gaps in the dataIf there are large gaps in time, especially with corresponding flux level offsets, the detrending is often much improved when splitting the data into several sub-lightcurves and applying the filter to each segment individually. The default setting for a break is ``window_length/2``.The feature can be disabled with ``break_tolerance=0``. Then, the whole dataset is treated as one.Positive values, e.g., ``break_tolerance=0.1``, split the data into chunks if there are breaks longer than 0.1 days (which is the case here):
###Code
points = 1000
time = np.linspace(0, 15, points)
flux = 1 + ((np.sin(time) + time / 10 + time**1.5 / 100) / 1000)
noise = np.random.normal(0, 0.00005, points)
flux += noise
for i in range(points):
if i % 75 == 0:
flux[i:i+5] -= 0.0004 # Add some transits
flux[i+50:i+52] += 0.0002 # and flares
flux[425:475] = np.nan # a data gap
flux[475:] -= 0.002
flatten_lc1, trend_lc1 = flatten(time, flux, break_tolerance=0.1, window_length=1, method='hspline', return_trend=True)
flatten_lc2, trend_lc2 = flatten(time, flux, break_tolerance=0, window_length=1, method='hspline', return_trend=True)
plt.close()
plt.scatter(time, flux, s=1, color='black')
plt.plot(time, trend_lc2, linewidth=2, color='red')
plt.plot(time, trend_lc1, linewidth=2, color='blue', linestyle='dashed')
plt.xlim(2, 11)
plt.ylim(0.9982, 1.0012)
plt.xlabel('Time (days)')
plt.ylabel('Raw flux')
plt.show();
###Output
_____no_output_____ |
notebooks/demos/locate_recombinants_demo.ipynb | ###Markdown
Locating recombinant haplotypes - demoThis notebook has a demo of the function "locate_recombinants" from the hapclust_utils notebook. The function identifies haplotypes with evidence for recombination based on the 4-gamete test.
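As a rough illustration of the underlying idea (this sketch is not part of the demo and assumes a haplotype array coded 0/1 with shape variants × haplotypes): a pair of biallelic sites fails the 4-gamete test when all four combinations 00, 01, 10 and 11 are observed, which under the infinite-sites model implies recombination (or recurrent mutation).

```python
import numpy as np

def four_gamete_conflict(h, i, j):
    """True if sites i and j of haplotype array h show all four gametes."""
    gametes = {(a, b) for a, b in zip(h[i], h[j])}
    return len(gametes) == 4
```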
###Code
%run setup.ipynb
import hapclust
%matplotlib inline
callset = h5py.File('../data/ag1000g.phase1.AR3.1.haplotypes.specific_regions.2L_2358158_2431617.h5',
mode='r')
region_vgsc = SeqFeature('2L', 2358158, 2431617)
genotypes = allel.GenotypeArray(callset['2L/calldata/genotype'])
haplotypes = genotypes.to_haplotypes()
pos = allel.SortedIndex(callset['2L/variants/POS'])
loc = pos.locate_range(region_vgsc.start, region_vgsc.end)
h_vgsc = haplotypes[loc]
pos_995S = 2422651
pos_995F = 2422652
loc_995S = haplotypes[pos.locate_key(pos_995S)] == 1
loc_995F = haplotypes[pos.locate_key(pos_995F)] == 1
h_vgsc_995F = h_vgsc.compress(loc_995F, axis=1)
h_vgsc_995S = h_vgsc.compress(loc_995S, axis=1)
sample_ids = callset['2L']['samples'][:]
hap_ids = np.array(list(itertools.chain(*[[s + b'a', s + b'b'] for s in sample_ids])))
hap_ids_995F = hap_ids[loc_995F]
hap_ids_995S = hap_ids[loc_995S]
tbl_haplotypes = etl.fromtsv('../data/ag1000g.phase1.AR3.1.haplotypes.meta.txt')
hap_pops = np.array(tbl_haplotypes.values('population'))
hap_pops_995S = hap_pops[loc_995S]
hap_pops_995F = hap_pops[loc_995F]
# need to use named colors for graphviz
pop_colors = {
'AOM': 'brown',
'BFM': 'firebrick1',
'GWA': 'goldenrod1',
'GNS': 'cadetblue1',
'BFS': 'deepskyblue',
'CMS': 'dodgerblue3',
'UGS': 'palegreen',
'GAS': 'olivedrab',
'KES': 'grey47',
'colony': 'black'
}
hap_colors = np.array([pop_colors[p] for p in hap_pops])
hap_colors_995S = np.array([pop_colors[p] for p in hap_pops_995S])
hap_colors_995F = np.array([pop_colors[p] for p in hap_pops_995F])
tbl_variant_labels = (
etl
.frompickle('../data/tbl_variants_phase1.pkl')
.eq('num_alleles', 2)
.cut('POS', 'REF', 'ALT', 'AGAP004707-RA')
.convert('AGAP004707-RA', lambda v: v[1] if v[0] == 'NON_SYNONYMOUS_CODING' else '')
.addfield('label', lambda row: row['AGAP004707-RA'] if row['AGAP004707-RA'] else '%s:%s>%s' % (row.POS, row.REF, row.ALT))
)
tbl_variant_labels
pos2label = tbl_variant_labels.lookupone('POS', 'label')
pos2label[pos_995F]
pos2label_coding = tbl_variant_labels.lookupone('POS', 'AGAP004707-RA')
pos2label_coding[pos_995F]
variant_labels = np.array([pos2label.get(p, '') for p in pos], dtype=object)
variant_labels_vgsc = variant_labels[loc]
variant_labels_vgsc
variant_labels_coding = np.array([pos2label_coding.get(p, '') for p in pos], dtype=object)
variant_labels_coding_vgsc = variant_labels_coding[loc]
variant_labels_coding_vgsc
###Output
_____no_output_____
###Markdown
L995S
###Code
cut_height = 2
fig, ax_dend, ax_freq, cluster_spans_995S, leaf_obs_995S = hapclust.fig_haplotypes_clustered(
h_vgsc_995S, cut_height=cut_height, dpi=150,
highlight_clusters=5, label_clusters=5)
cluster_idx = 2
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995S[cluster_idx]
cluster_haps = h_vgsc_995S.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995S.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_coding_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn',
variant_labels=variant_labels_coding_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 9
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995S[cluster_idx]
cluster_haps = h_vgsc_995S.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995S.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_coding_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn',
variant_labels=variant_labels_coding_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 12
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995S[cluster_idx]
cluster_haps = h_vgsc_995S.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995S.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_coding_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn',
variant_labels=variant_labels_coding_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 9, 12
cluster_hap_indices = list()
for i in cluster_idx:
_, _, hix = cluster_spans_995S[i]
cluster_hap_indices.extend(hix)
cluster_haps = h_vgsc_995S.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995S.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 14
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995S[cluster_idx]
cluster_haps = h_vgsc_995S.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995S.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
###Output
224 895 [ 7 60 2 9] (1, 0) {24, 20}
895 1031 [ 1 8 1 68] (0, 0) {16}
895 1031 [ 1 8 1 68] (1, 0) {36}
found 2 solutions; min recombinant haplotypes: 3
###Markdown
L995F
###Code
cut_height = 4
fig, ax_dend, ax_freq, cluster_spans_995F, leaf_obs_995F = hapclust.fig_haplotypes_clustered(
h_vgsc_995F, cut_height=cut_height, dpi=150,
highlight_clusters=5, label_clusters=5)
cluster_idx = 4
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995F[cluster_idx]
cluster_haps = h_vgsc_995F.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995F.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 7
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995F[cluster_idx]
cluster_haps = h_vgsc_995F.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995F.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 8
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995F[cluster_idx]
cluster_haps = h_vgsc_995F.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995F.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
# look at clusters 7 and 8 together
cluster_idx = 7, 8
cluster_hap_indices = list()
for i in cluster_idx:
_, _, hix = cluster_spans_995F[i]
cluster_hap_indices.extend(hix)
cluster_haps = h_vgsc_995F.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995F.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 12
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995F[cluster_idx]
cluster_haps = h_vgsc_995F.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995F.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]))
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
cluster_idx = 16
dend_start, dend_stop, cluster_hap_indices = cluster_spans_995F[cluster_idx]
cluster_haps = h_vgsc_995F.take(cluster_hap_indices, axis=1)
cluster_hap_colors = hap_colors_995F.take(cluster_hap_indices)
hapclust.graph_haplotype_network(
cluster_haps, hap_colors=cluster_hap_colors, network_method='mjn',
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
idx_rec = hapclust.locate_recombinants(cluster_haps, debug=True)
print('found', len(idx_rec), 'solutions; min recombinant haplotypes:', len(idx_rec[0]), [len(s) for s in idx_rec])
idx_norec = [i for i in range(cluster_haps.shape[1]) if i not in idx_rec[0]]
cluster_haps_norec = cluster_haps.take(idx_norec, axis=1)
cluster_hap_colors_norec = cluster_hap_colors.take(idx_norec)
hapclust.graph_haplotype_network(
cluster_haps_norec, hap_colors=cluster_hap_colors_norec, network_method='mjn', max_dist=10,
variant_labels=variant_labels_vgsc, fontsize=6, show_node_labels='count')
###Output
6 1698 [ 4 1 442 28] (0, 1) {172}
6 1699 [ 4 1 391 79] (0, 1) {39}
147 1674 [ 1 1 364 109] (0, 0) {243}
147 1674 [ 1 1 364 109] (0, 1) {291}
147 1698 [ 1 1 445 28] (0, 0) {291}
147 1698 [ 1 1 445 28] (0, 1) {243}
224 1674 [358 109 7 1] (1, 1) {282}
1379 1674 [361 109 4 1] (1, 1) {474}
1379 1681 [467 3 2 3] (1, 0) {473, 474}
1379 1697 [418 52 4 1] (1, 1) {473}
1433 1695 [431 12 31 1] (1, 1) {87}
1698 1704 [431 15 28 1] (1, 1) {242}
found 4 solutions; min recombinant haplotypes: 8 [8, 8, 9, 9]
|
_build/jupyter_execute/Review/Mock Exam Questions.ipynb | ###Markdown
Mock Exam Questions **CS1302 Introduction to Computer Programming**___
###Code
%reload_ext mytutor
###Output
_____no_output_____
###Markdown
- Access the mock exam here: https://eq1302.cs.cityu.edu.hk/mod/quiz/view.php?id=226- The questions are slightly modified from E-Quiz exercises.- You may use this notebook to get additional hints, take notes, and record your answers.- This notebook does not contain all the questions, but you can see model answers to all the questions after attempting the mock exam. Dictionaries and Sets **Exercise (Concatenate two dictionaries with precedence)** Define a function `concat_two_dicts` that accepts two arguments of type `dict` such that `concat_two_dicts(a, b)` will return a new dictionary containing all the items in `a` and the items in `b` that have different keys than those in `a`. The input dictionaries should not be mutated.
###Code
def concat_two_dicts(a, b):
### BEGIN SOLUTION
return {**b, **a}
### END SOLUTION
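# (Added illustration, not from the original exam.) Later unpackings overwrite
# earlier ones, so unpacking `a` last gives its items precedence:
assert {**{'k': 'from b'}, **{'k': 'from a'}} == {'k': 'from a'}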
#tests
a={'x':10, 'z':30}; b={'y':20, 'z':40}
a_copy = a.copy(); b_copy = b.copy()
assert concat_two_dicts(a, b) == {'x': 10, 'z': 30, 'y': 20}
assert concat_two_dicts(b, a) == {'x': 10, 'z': 40, 'y': 20}
assert a == a_copy and b == b_copy
### BEGIN HIDDEN TESTS
a={'x':10, 'z':30}; b={'y':20}
a_copy = a.copy(); b_copy = b.copy()
assert concat_two_dicts(a, b) == {'x': 10, 'z': 30, 'y': 20}
assert concat_two_dicts(b, a) == {'x': 10, 'z': 30, 'y': 20}
assert a == a_copy and b == b_copy
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- `{**dict1,**dict2}` creates a new dictionary by unpacking the dictionaries `dict1` and `dict2`.- Values from `dict2` overwrite those from `dict1` on identical keys, because `dict2` is unpacked last; that is why the solution unpacks `b` first and `a` second. **Exercise (Count characters)** Define a function `count_characters` which- accepts a string and counts the number of occurrences of each character in the string, and - returns a dictionary that stores the results.
###Code
def count_characters(string):
### BEGIN SOLUTION
counts = {}
for char in string:
counts[char] = counts.get(char, 0) + 1
return counts
### END SOLUTION
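# (Added illustration, not from the original exam.) `get` with a default of 0
# lets the count be incremented even for a character that has not been seen yet:
demo = {}
demo['a'] = demo.get('a', 0) + 1
demo['a'] = demo.get('a', 0) + 1
assert demo == {'a': 2}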
# tests
assert count_characters('abcbabc') == {'a': 2, 'b': 3, 'c': 2}
assert count_characters('aababcccabc') == {'a': 4, 'b': 3, 'c': 4}
### BEGIN HIDDEN TESTS
assert count_characters('abcdefgabc') == {'a': 2, 'b': 2, 'c': 2, 'd': 1, 'e': 1, 'f': 1, 'g': 1}
assert count_characters('ab43cb324abc') == {'2': 1, '3': 2, '4': 2, 'a': 2, 'b': 3, 'c': 2}
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Create an empty dictionary `counts`.- Use a `for` loop to iterate over each character of `string` to count their numbers of occurrences.- The `get` method of `dict` can initialize the count of a new character before incrementing it. **Exercise (Count non-Fibonacci numbers)** Define a function `count_non_fibs` that - accepts a container as an argument, and - returns the number of items in the container that are not [fibonacci numbers](https://en.wikipedia.org/wiki/Fibonacci_number).
###Code
def count_non_fibs(container):
### BEGIN SOLUTION
def fib_sequence_inclusive(stop):
Fn, Fn1 = 0, 1
while Fn <= stop:
yield Fn
Fn, Fn1 = Fn1, Fn + Fn1
non_fibs = set(container)
non_fibs.difference_update(fib_sequence_inclusive(max(container)))
return len(non_fibs)
### END SOLUTION
# tests
assert count_non_fibs([0, 1, 2, 3, 5, 8]) == 0
assert count_non_fibs({13, 144, 99, 76, 1000}) == 3
### BEGIN HIDDEN TESTS
assert count_non_fibs({5, 8, 13, 21, 34, 100}) == 1
assert count_non_fibs({0.1, 0}) == 1
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Create a set of Fibonacci numbers up to the maximum of the items in the container.- Use `difference_update` method of `set` to create a set of items in the container but not in the set of Fibonacci numbers. **Exercise (Calculate total salaries)** Suppose `salary_dict` contains information about the name, salary, and working time about employees in a company. An example of `salary_dict` is as follows: ```Pythonsalary_dict = { 'emp1': {'name': 'John', 'salary': 15000, 'working_time': 20}, 'emp2': {'name': 'Tom', 'salary': 16000, 'working_time': 13}, 'emp3': {'name': 'Jack', 'salary': 15500, 'working_time': 15},}```Define a function `calculate_total` that accepts `salary_dict` as an argument, and returns a `dict` that uses the same keys in `salary_dict` but the total salaries as their values. The total salary of an employee is obtained by multiplying his/her salary and his/her working_time. E.g.,, for the `salary_dict` example above, `calculate_total(salary_dict)` should return```Python{'emp1': 300000, 'emp2': 208000, 'emp3': 232500}.```where the total salary of `emp1` is $15000 \times 20 = 300000$.
###Code
def calculate_total(salary_dict):
### BEGIN SOLUTION
return {
emp: record['salary'] * record['working_time']
for emp, record in salary_dict.items()
}
### END SOLUTION
# tests
salary_dict = {
'emp1': {'name': 'John', 'salary': 15000, 'working_time': 20},
'emp2': {'name': 'Tom', 'salary': 16000, 'working_time': 13},
'emp3': {'name': 'Jack', 'salary': 15500, 'working_time': 15},
}
assert calculate_total(salary_dict) == {'emp1': 300000, 'emp2': 208000, 'emp3': 232500}
### BEGIN HIDDEN TESTS
salary_dict = {
'emp1': {'name': 'John', 'salary': 15000, 'working_time': 20},
'emp2': {'name': 'Tom', 'salary': 16000, 'working_time': 13},
'emp3': {'name': 'Jack', 'salary': 15500, 'working_time': 15},
'emp4': {'name': 'Bob', 'salary': 20000, 'working_time': 10}
}
assert calculate_total(salary_dict) == {'emp1': 300000, 'emp2': 208000, 'emp3': 232500, 'emp4': 200000}
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Use `items` method of `dict` to return the list of key values pairs, and- use a dictionary comprehension to create the desired dictionary by iterating through the list of items. **Exercise (Delete items with value 0 in dictionary)** Define a function `zeros_removed` that - takes a dictionary as an argument,- mutates the dictionary to remove all the keys associated with values equal to `0`,- and return `True` if at least one key is removed else `False`.
###Code
def zeros_removed(d):
### BEGIN SOLUTION
to_delete = [k for k in d if d[k] == 0]
for k in to_delete:
del d[k]
return len(to_delete) > 0
## Memory-efficient but not computationally efficient
# def zeros_removed(d):
# has_deleted = False
# while True:
# for k in d:
# if d[k] == 0:
# del d[k]
# has_deleted = True
# break
# else: return has_deleted
### END SOLUTION
# tests
d = {'a':0, 'b':1, 'c':0, 'd':2}
assert zeros_removed(d) == True
assert zeros_removed(d) == False
assert d == {'b': 1, 'd': 2}
### BEGIN HIDDEN TESTS
d = {'a':0, 'b':1, 'c':0, 'd':2, 'e':0, 'f':'0'}
assert zeros_removed(d) == True
assert zeros_removed(d) == False
assert d == {'b': 1, 'd': 2, 'f':'0'}
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- The main issue is that, for any dicionary `d`,```Python for k in d: if d[k] == 0: del d[k]```raises the [`RuntimeError: dictionary changed size during iteration`](https://www.geeksforgeeks.org/python-delete-items-from-dictionary-while-iterating/). - One solution is to duplicate the list of keys, but this is memory inefficient especially when the list of keys is large.- Another solution is to record the list of keys to delete before the actual deletion. This is memory efficient if the list of keys to delete is small. **Exercise (Fuzzy search a set)** Define a function `search_fuzzy` that accepts two arguments `myset` and `word` such that- `myset` is a `set` of `str`s;- `word` is a `str`; and- `search_fuzzy(myset, word)` returns `True` if `word` is in `myset` by changing at most one character in `word`, and returns `False` otherwise.
###Code
def search_fuzzy(myset, word):
### BEGIN SOLUTION
for myword in myset:
if len(myword) == len(word) and len(
[True
for mychar, char in zip(myword, word) if mychar != char]) <= 1:
return True
return False
### END SOLUTION
# tests
assert search_fuzzy({'cat', 'dog'}, 'car') == True
assert search_fuzzy({'cat', 'dog'}, 'fox') == False
### BEGIN HIDDEN TESTS
myset = {'cat', 'dog', 'dolphin', 'rabbit', 'monkey', 'tiger'}
assert search_fuzzy(myset, 'lion') == False
assert search_fuzzy(myset, 'cat') == True
assert search_fuzzy(myset, 'cat ') == False
assert search_fuzzy(myset, 'fox') == False
assert search_fuzzy(myset, 'ccc') == False
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Iterate over each word in `myset`.- Check whether the length of the word is the same as that of the word in the arguments.- If the above check passes, use a list comprehension to check if the words differ by at most one character. **Exercise (Get keys by value)** Define a function `get_keys_by_value` that accepts two arguments `d` and `value` where `d` is a dictionary, and returns a set containing all the keys in `d` that have `value` as their value. If no key has the query value `value`, then return an empty set.
###Code
def get_keys_by_value(d, value):
### BEGIN SOLUTION
return {k for k in d if d[k] == value}
### END SOLUTION
# tests
d = {'Tom':'99', 'John':'88', 'Lucy':'100', 'Lily':'90', 'Jason':'89', 'Jack':'100'}
assert get_keys_by_value(d, '99') == {'Tom'}
### BEGIN HIDDEN TESTS
d = {'Tom':'99', 'John':'88', 'Lucy':'100', 'Lily':'90', 'Jason':'89', 'Jack':'100'}
assert get_keys_by_value(d, '100') == {'Jack', 'Lucy'}
d = {'Tom':'99', 'John':'88', 'Lucy':'100', 'Lily':'90', 'Jason':'89', 'Jack':'100'}
assert get_keys_by_value(d, '0') == set()
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Use a set comprehension to create the set of keys whose associated value is `value`. **Exercise (Count letters and digits)** Define a function `count_letters_and_digits` which - takes a string as an argument,- returns a dictionary that stores the number of letters and digits in the string using the keys 'LETTERS' and 'DIGITS' respectively.
###Code
def count_letters_and_digits(string):
### BEGIN SOLUTION
check = {'LETTERS': str.isalpha, 'DIGITS': str.isdigit}
counts = dict.fromkeys(check.keys(), 0)
for char in string:
for t in check:
if check[t](char):
counts[t] += 1
return counts
### END SOLUTION
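# (Added illustration, not from the original exam.) `dict.fromkeys` builds the
# zero-initialised counter used above in one call:
assert dict.fromkeys(['LETTERS', 'DIGITS'], 0) == {'LETTERS': 0, 'DIGITS': 0}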
assert count_letters_and_digits('hello world! 2020') == {'DIGITS': 4, 'LETTERS': 10}
assert count_letters_and_digits('I love CS1302') == {'DIGITS': 4, 'LETTERS': 7}
### BEGIN HIDDEN TESTS
assert count_letters_and_digits('Hi CityU see you in 2021') == {'DIGITS': 4, 'LETTERS': 15}
assert count_letters_and_digits('When a dog runs at you, whistle for him. (Philosopher Henry David Thoreau, 1817-1862)') == {'DIGITS': 8, 'LETTERS': 58}
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Use the class method `fromkeys` of `dict` to initialize the dictionary of counts. **Exercise (Dealers with lowest price)** Suppose `apple_price` is a list in which each element is a `dict` recording the dealer and the corresponding price, e.g., ```Pythonapple_price = [{'dealer': 'dealer_A', 'price': 6799}, {'dealer': 'dealer_B', 'price': 6749}, {'dealer': 'dealer_C', 'price': 6798}, {'dealer': 'dealer_D', 'price': 6749}]```Define a function `dealers_with_lowest_price` that takes `apple_price` as an argument, and returns the `set` of dealers providing the lowest price.
###Code
def dealers_with_lowest_price(apple_price):
### BEGIN SOLUTION
dealers = {}
lowest_price = None
for pricing in apple_price:
if lowest_price == None or lowest_price > pricing['price']:
lowest_price = pricing['price']
dealers.setdefault(pricing['price'], set()).add(pricing['dealer'])
return dealers[lowest_price]
## Shorter code that uses comprehension
# def dealers_with_lowest_price(apple_price):
# lowest_price = min(pricing['price'] for pricing in apple_price)
# return set(pricing['dealer'] for pricing in apple_price
# if pricing['price'] == lowest_price)
### END SOLUTION
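# (Added illustration, not from the original exam.) `setdefault` inserts an
# empty set the first time a price is seen and returns it, so dealers with the
# same price accumulate in one set:
demo = {}
demo.setdefault(6749, set()).add('dealer_B')
demo.setdefault(6749, set()).add('dealer_D')
assert demo == {6749: {'dealer_B', 'dealer_D'}}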
# tests
apple_price = [{'dealer': 'dealer_A', 'price': 6799},
{'dealer': 'dealer_B', 'price': 6749},
{'dealer': 'dealer_C', 'price': 6798},
{'dealer': 'dealer_D', 'price': 6749}]
assert dealers_with_lowest_price(apple_price) == {'dealer_B', 'dealer_D'}
### BEGIN HIDDEN TESTS
apple_price = [{'dealer': 'dealer_A', 'price': 6799},
{'dealer': 'dealer_B', 'price': 6799},
{'dealer': 'dealer_C', 'price': 6799},
{'dealer': 'dealer_D', 'price': 6799}]
assert dealers_with_lowest_price(apple_price) == {'dealer_A', 'dealer_B', 'dealer_C', 'dealer_D'}
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Use the class method `setdefault` of `dict` to create a dictionary that maps different prices to different sets of dealers.- Compute the lowest price at the same time.- Alternatively, use comprehension to find lowest price and then create the desired set of dealers with the lowest price. Lists and Tuples **Exercise** (Binary addition) Define a function `add_binary` that - accepts two arguments of type `str` which represent two non-negative binary numbers, and - returns the binary number in `str` equal to the sum of the two given binary numbers.
###Code
def add_binary(*binaries):
### BEGIN SOLUTION
def binary_to_decimal(binary):
return sum(2**i * int(b) for i, b in enumerate(reversed(binary)))
def decimal_to_binary(decimal):
return ((decimal_to_binary(decimal // 2) if decimal > 1 else '') +
str(decimal % 2)) if decimal else '0'
return decimal_to_binary(sum(binary_to_decimal(binary) for binary in binaries))
## Alternative 1 using recursion
# def add_binary(bin1, bin2, carry=False):
# if len(bin1) > len(bin2):
# return add_binary(bin2, bin1)
# if bin1 == '':
# return add_binary('1', bin2, False) if carry else bin2
# s = int(bin1[-1]) + int(bin2[-1]) + carry
# return add_binary(bin1[:-1], bin2[:-1], s > 1) + str(s % 2)
## Alternatve 2 using iteration
# def add_binary(a, b):
# answer = []
# n = max(len(a), len(b))
# # fill necessary '0' to the beginning to make a and b have the same length
# if len(a) < n: a = str('0' * (n -len(a))) + a
# if len(b) < n: b = str('0' * (n -len(b))) + b
# carry = 0
# for i in range(n-1, -1, -1):
# if a[i] == '1': carry += 1
# if b[i] == '1': carry += 1
# answer.insert(0, '1') if carry % 2 == 1 else answer.insert(0, '0')
# carry //= 2
# if carry == 1: answer.insert(0, '1')
# answer_str = ''.join(answer) # you can also use "answer_str = ''; for x in answer: answer_str += x"
# return answer_str
### END SOLUTION
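# (Added sanity check, not from the original exam.) Python's built-ins perform
# the same base conversions, which is handy for checking the helpers above:
assert int('101', 2) == 5 and bin(5)[2:] == '101'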
# tests
assert add_binary('0', '0') == '0'
assert add_binary('11', '11') == '110'
assert add_binary('101', '101') == '1010'
### BEGIN HIDDEN TESTS
assert add_binary('1111', '10') == '10001'
assert add_binary('111110000011','110000111') == '1000100001010'
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Use comprehension to convert the binary numbers to decimal numbers.- Use comprehension to convert the sum of the decimal numbers to a binary number.- Alternatively, perform bitwise addition using a recursion or iteration. **Exercise (Even-digit numbers)** Define a function `even_digit_numbers`, which finds all numbers between `lower_bound` and `upper_bound` such that each digit of the number is an even number. Please return the numbers as a list.
###Code
def even_digit_numbers(lower_bound, upper_bound):
### BEGIN SOLUTION
return [
x for x in range(lower_bound, upper_bound)
if not any(int(d) % 2 for d in str(x))
]
### END SOLUTION
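# (Added illustration, not from the original exam.) The inner generator yields a
# truthy value for every odd digit, so `any(...)` is False exactly when all
# digits are even:
assert not any(int(d) % 2 for d in str(2806))
assert any(int(d) % 2 for d in str(2801))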
# tests
assert even_digit_numbers(1999, 2001) == [2000]
assert even_digit_numbers(2805, 2821) == [2806,2808,2820]
### BEGIN HIDDEN TESTS
assert even_digit_numbers(1999, 2300) == [2000,2002,2004,2006,2008,2020,2022,2024,2026,2028,2040,2042,2044,2046,2048,2060,2062,2064,2066,2068,2080,2082,2084,2086,2088,2200,2202,2204,2206,2208,2220,2222,2224,2226,2228,2240,2242,2244,2246,2248,2260,2262,2264,2266,2268,2280,2282,2284,2286,2288]
assert even_digit_numbers(8801, 8833) == [8802,8804,8806,8808,8820,8822,8824,8826,8828]
assert even_digit_numbers(3662, 4001) == [4000]
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
- Use list comprehension to generate numbers between the bounds, and- use comprehension and the `any` function to filter out those numbers containing odd digits. **Exercise (Maximum subsequence sum)** Define a function `max_subsequence_sum` that - accepts as an argument a sequence of numbers, and - returns the maximum sum over nonempty contiguous subsequences. E.g., when `[-6, -4, 4, 1, -2, 2]` is given as the argument, the function returns `5` because the nonempty subsequence `[4, 1]` has the maximum sum `5`.
###Code
def max_subsequence_sum(a):
### BEGIN SOLUTION
## see https://en.wikipedia.org/wiki/Maximum_subarray_problem
t = s = 0
for x in a:
t = max(0, t + x)
s = max(s, t)
return s
## Alternative (less efficient) solution using list comprehension
# def max_subsequence_sum(a):
# return max(sum(a[i:j]) for i in range(len(a)) for j in range(i,len(a)+1))
### END SOLUTION
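# (Added trace, not from the original exam.) On [-6, -4, 4, 1, -2, 2], the tail
# sum t evolves as 0, 0, 4, 5, 3, 5 after each element, and s keeps the running
# maximum of t, finishing at 5 -- matching the first test below.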
# tests
assert max_subsequence_sum([-6, -4, 4, 1, -2, 2]) == 5
assert max_subsequence_sum([2.5, 1.4, -2.5, 1.4, 1.5, 1.6]) == 5.9
### BEGIN HIDDEN TESTS
seq = [-24.81, 25.74, 37.29, -8.77, 0.78, -15.33, 30.21, 34.94, -40.64, -20.06]
assert round(max_subsequence_sum(seq),2) == 104.86
### END HIDDEN TESTS
# test of efficiency
assert max_subsequence_sum([*range(1234567)]) == 762077221461
###Output
_____no_output_____
###Markdown
- For a list $[a_0,a_1,\dots]$, let $$t_k:=\max_{j\leq k} \sum_{i=j}^{k-1} a_i = \max\{t_{k-1}+a_{k-1},0\},$$ namely the maximum (possibly empty) tail sum of $[a_0,\dots,a_{k-1}]$, with $t_0=0$. - Then, the maximum subsequence sum of $[a_0,\dots,a_{k-1}]$ is $$s_k:=\max_{j\leq k} t_j.$$ **Exercise (Mergesort)** *For this question, do not use the `sort` method or `sorted` function.*Define a function called `merge` that- takes two sequences sorted in ascending order, and- returns a sorted list of items from the two sequences.Then, define a function called `mergesort` that- takes a sequence, and- returns a list of items from the sequence sorted in ascending order.The list should be constructed by - recursively calling `mergesort` on the first and second halves of the sequence individually, and - merging the sorted halves.
###Code
def merge(left,right):
### BEGIN SOLUTION
    if left and right:
        # make sure the largest remaining item sits at the end of `right`
        if left[-1] > right[-1]: left, right = right, left
        # merge everything that precedes that largest item, then append it
        return merge(left,right[:-1]) + [right[-1]]
    # one side is exhausted; the other (possibly empty) side is already sorted
    return list(left or right)
### END SOLUTION
def mergesort(seq):
### BEGIN SOLUTION
if len(seq) <= 1:
return list(seq)
i = len(seq)//2
return merge(mergesort(seq[:i]),mergesort(seq[i:]))
### END SOLUTION
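# (Added trace, not from the original exam.) mergesort([3, 2, 1]) splits into
# mergesort([3]) and mergesort([2, 1]); the halves come back as [3] and [1, 2],
# and merge([3], [1, 2]) interleaves them into [1, 2, 3].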
# tests
assert merge([1,3],[2,4]) == [1,2,3,4]
assert mergesort([3,2,1]) == [1,2,3]
### BEGIN HIDDEN TESTS
assert mergesort([3,5,2,4,2,1]) == [1,2,2,3,4,5]
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
More Functions **Exercise (Arithmetic geometric mean)** Define a function `arithmetic_geometric_mean_sequence` which- takes two floating point numbers `x` and `y` and - returns a generator that generates the tuple \\((a_n, g_n)\\) where$$\begin{aligned}a_0 &= x, g_0 = y \\a_n &= \frac{a_{n-1} + g_{n-1}}2 \quad \text{for }n>0\\g_n &= \sqrt{a_{n-1} g_{n-1}}\end{aligned}$$
###Code
def arithmetic_geometric_mean_sequence(x, y):
### BEGIN SOLUTION
a, g = x, y
while True:
yield a, g
a, g = (a + g)/2, (a*g)**0.5
### END SOLUTION
# tests
agm = arithmetic_geometric_mean_sequence(6,24)
assert [next(agm) for i in range(2)] == [(6, 24), (15.0, 12.0)]
### BEGIN HIDDEN TESTS
agm = arithmetic_geometric_mean_sequence(100,400)
for sol, ans in zip([next(agm) for i in range(5)], [(100, 400), (250.0, 200.0), (225.0, 223.60679774997897), (224.30339887498948, 224.30231718318308), (224.30285802908628, 224.30285802843423)]):
for a, b in zip(sol,ans):
assert round(a,5) == round(b,5)
### END HIDDEN TESTS
###Output
_____no_output_____ |
ipynb/14-Sai-khác-của-Biến-thiên-VN.ipynb | ###Markdown
Table of Contents1 Three Billboards in Southern Brazil2 The DID Estimator3 Non-Parallel Trends4 Key Ideas5 References Three Billboards in Southern BrazilI still remember my time working in marketing, and one of the most interesting parts of it was online marketing. Not only because it is very effective (it is), but also because it is very convenient to evaluate. With online marketing, you have ways of knowing which customers saw the ad, and you can follow them with cookies to see whether they visited the landing page you pointed them to. You can also use machine learning to find prospects that look like your existing customers and show the ads to them. In short, online marketing is very effective: you target exactly whom you want, and you can track whether they respond the way you hoped. But not everyone can be reached by online marketing. Sometimes you have to fall back on less precise methods, such as TV ads or billboards placed on the streets. Often, diversifying the marketing channels is something marketing departments aim for. But if online marketing is a specialised fishing rod for catching one particular kind of tuna, billboards and TV are big nets you throw over a school of fish in the hope of catching at least a few good ones. Another problem with billboards and TV ads, though, is that they are much harder to evaluate. Sure, you can measure purchases, or whatever metric you care about, before and after placing a billboard somewhere. If that metric increases, it could be evidence that the campaign is working. But how do you know the increase is not just a natural trend in the public's awareness of your product? In other words, how do you know the counterfactual \\(Y_0\\), what would have happened if you had not placed the billboards? One technique for answering these kinds of questions is Difference-in-Differences. Difference-in-Differences is commonly used to assess the effect of macro interventions, such as the effect of immigration on unemployment, the effect of changes in gun laws on crime rates, or simply the difference in user engagement due to a marketing campaign. In all of these cases, you have a period before and after the intervention, and you want to disentangle the effect of the intervention from the overall trend. As a concrete example, let's look at a question similar to one I once had to answer.To evaluate whether billboards were an effective marketing channel, we placed 3 billboards in the city of Porto Alegre, the capital of the state of Rio Grande do Sul. For those unfamiliar with Brazilian geography, the south of the country is one of its most developed regions, with low poverty rates compared to the rest. With that in mind, we decided to look at data from Florianopolis, the capital of the state of Santa Catarina, another state in the south. The idea was to use Florianopolis as a control sample to estimate the counterfactual \\(Y_0\\). The metric we were trying to boost with this campaign was deposits into savings accounts (this is not the real experiment, which is confidential, but the idea is very similar). We placed the billboards in Porto Alegre for the whole month of June. The data we got looks like this:
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
%matplotlib inline
style.use("fivethirtyeight")
data = pd.read_csv("data/billboard_impact.csv")
data.head()
###Output
_____no_output_____
###Markdown
`deposits` is our outcome variable. `POA` is a dummy indicator for the city of Porto Alegre. When it is 0, the sample comes from Florianopolis. `Jul` is a dummy for July (the post-intervention period). When it is 0, it refers to samples from May (before the intervention). The DID Estimator To avoid confusing time with treatment, I will use D to denote the treatment and T to denote time. Let \\(Y_D(T)\\) be the potential outcome for treatment D in period T. In an ideal world where we could observe the counterfactual, we would estimate the treatment effect as:$\hat{ATET} = E[Y_1(1) - Y_0(1)|D=1]$In words, the causal effect is the outcome of the treated group in the post-intervention period minus the outcome that same group would have had in that period if it had never been treated. Of course, we cannot measure it this way, because \\(Y_0(1)\\) is counterfactual.One way around it is to compare before and after.$\hat{ATET} = E[Y(1)|D=1] - E[Y(0)|D=1]$In this example, we want to compare the deposits from POA before and after the billboards were placed.
###Code
poa_before = data.query("poa==1 & jul==0")["deposits"].mean()
poa_after = data.query("poa==1 & jul==1")["deposits"].mean()
poa_after - poa_before
###Output
_____no_output_____
###Markdown
This estimator tells us that we should expect deposits to increase by R$ 41.04 after the intervention. But should we believe this result?Note that \\(E[Y(0)|D=1]=E[Y_0(0)|D=1]\\), so this estimate assumes that \\(E[Y_0(1)|D=1] = E[Y_0(0)|D=1]\\). It says that, in the counterfactual without the intervention, the outcome (of the treated group) in the later period would be exactly the same as in the earlier period. This is clearly not true if your outcome variable follows any kind of trend. For example, if deposits from POA are growing, then \\(E[Y_0(1)|D=1] > E[Y_0(0)|D=1]\\), meaning the later-period outcome is larger than the earlier one even in the absence of the intervention. Similarly, if there is a downward trend, \\(E[Y_0(1)|D=1] < E[Y_0(0)|D=1]\\). So this approach is not accurate. Another idea is to compare the treated group with a control group that did not receive the intervention:$\hat{ATET} = E[Y(1)|D=1] - E[Y(1)|D=0]$In our example, this means comparing the deposits from POA with the deposits from Florianopolis in the post-intervention period.
###Code
fl_after = data.query("poa==0 & jul==1")["deposits"].mean()
poa_after - fl_after
###Output
_____no_output_____
###Markdown
This estimator tells us that the marketing campaign backfired and that customers deposited R$ 119.10 less. Note that \\(E[Y(1)|D=0]=E[Y_0(1)|D=0]\\), so here we assume we can replace the unobserved counterfactual using \\(E[Y_0(1)|D=0] = E[Y_0(1)|D=1]\\). But this is only true if the two groups have similar baselines. For instance, if Florianopolis receives more deposits than Porto Alegre, this no longer holds, because \\(E[Y_0(1)|D=0] > E[Y_0(1)|D=1]\\). On the other hand, if deposits are lower in Florianopolis, we have \\(E[Y_0(1)|D=0] < E[Y_0(1)|D=1]\\). So this method does not help much either. To fix it, we can compare across both space and time. This is the idea of difference-in-differences. It works by replacing the missing counterfactual with:$E[Y_0(1)|D=1] = E[Y_1(0)|D=1] + (E[Y_0(1)|D=0] - E[Y_0(0)|D=0])$What it does is take the outcome of the treated group before the intervention and add to it a trend component estimated using the control group, \\(E[Y_0(1)|D=0] - E[Y_0(0)|D=0]\\). In other words, it says that the treated group, in the counterfactual absence of treatment, would look like the treated group before treatment plus a growth component equal to the growth of the control group. Importantly, this assumes that the trends of the treated and control groups are the same:$E[Y_0(1) − Y_0(0)|D=1] = E[Y_0(1) − Y_0(0)|D=0]$where the left-hand side is the counterfactual trend. Now we can plug this estimate of the counterfactual into the definition of the treatment effect \\(E[Y_1(1)|D=1] - E[Y_0(1)|D=1]\\):$\hat{ATET} = E[Y(1)|D=1] - (E[Y(0)|D=1] + (E[Y(1)|D=0] - E[Y(0)|D=0]))$Rearranging the terms, we get the classical Difference-in-Differences estimator.$\hat{ATET} = (E[Y(1)|D=1] - E[Y(1)|D=0]) - (E[Y(0)|D=1] - E[Y(0)|D=0])$It gets its name from the fact that it takes the difference between the differences of treated and control after and before the intervention. Now look at the following code:
###Code
fl_before = data.query("poa==0 & jul==0")["deposits"].mean()
diff_in_diff = (poa_after-poa_before)-(fl_after-fl_before)
diff_in_diff
###Output
_____no_output_____
###Markdown
Difference-in-differences tells us that we should expect deposits to increase by R$ 6.52 per customer. Notice that the assumption DID makes is much more plausible than those of the two previous estimators. It only assumes that the growth trends of the two cities are similar. It does not require them to have the same baseline level nor to be trend-free. To visualize difference-in-differences, we can project the growth trend of the control group onto the treated group to construct the counterfactual, i.e., the level of deposits we should expect in the absence of the intervention.
###Code
plt.figure(figsize=(10,5))
plt.plot(["T5", "T7"], [fl_before, fl_after], label="FL", lw=2)
plt.plot(["T5", "T7"], [poa_before, poa_after], label="POA", lw=2)
plt.plot(["T5", "T7"], [poa_before, poa_before+(fl_after-fl_before)],
label="Giả tưởng", lw=2, color="C2", ls="-.")
plt.legend();
###Output
_____no_output_____
###Markdown
Look at the small difference between the red line and the yellow dashed line. If you focus, you can see the small treatment effect on Porto Alegre. Now you might ask yourself, "How much can I trust this estimator? I demand to see standard errors!" That's fair, since estimators without them look silly. To get them, we will use a neat trick with regression. Specifically, we will estimate the following model:$Y_i = \beta_0 + \beta_1 POA_i + \beta_2 Jul_i + \beta_3 POA_i*Jul_i + e_i$Note that \\(\beta_0\\) is the baseline of the control group. In our case, it is the level of deposits in Florianopolis in May. If we "switch on" the treated-city dummy, we get \\(\beta_1\\). So \\(\beta_0 + \beta_1\\) is the baseline of Porto Alegre in May, before the intervention, and \\(\beta_1\\) is the difference between the Porto Alegre and Florianopolis baselines. If we "switch off" the POA dummy and "switch on" the July dummy, we get \\(\beta_0 + \beta_2\\), the outcome of Florianópolis in July, the post-intervention period. \\(\beta_2\\) is the trend of the control group, since we add it to the control baseline to obtain its post-period outcome. Recall that \\(\beta_1\\) is the difference between treated and control, and \\(\beta_2\\) is the change between the pre and post periods. Finally, if we switch on both dummies, we get \\(\beta_3\\). \\(\beta_0 + \beta_1 + \beta_2 + \beta_3\\) is the outcome of Porto Alegre after the intervention. So \\(\beta_3\\) is the change from May to July combined with the difference between Florianopolis and POA. In other words, it is the difference-in-differences estimator. If you don't believe me, check for yourself. And notice how we get standard errors.
###Code
smf.ols('deposits ~ poa*jul', data=data).fit().summary().tables[1]
###Output
_____no_output_____
###Markdown
Non-Parallel TrendsOne rather obvious problem with Difference-in-Differences is the case where the parallel-trends assumption does not hold. If the trend of the treated group differs from the trend of the control group, DID will be biased. This is a common problem with non-random data, where the decision to treat a particular region is based on its potential to respond well to the treatment, or where the treatment is targeted at regions that are underperforming. In our marketing example, we decided to test billboards in Porto Alegre not because we needed to measure the effect of billboards, but simply because marketing in Porto Alegre was doing poorly. Perhaps online marketing is just not working well in that city. In that case, the growth we would see in Porto Alegre without the billboards could be lower than the growth we observe in other cities. This would cause us to underestimate the effect of the billboards there.One way to check for this is to plot past trends. For example, suppose POA had a slight downward trend while Florianopolis was growing sharply. In that case, plotting the periods before the intervention would reveal these trends, and we would know that Difference-in-Differences is not a reliable estimator for this situation.
###Code
plt.figure(figsize=(10,5))
x = ["Jan", "Mar", "May", "Jul"]
plt.plot(x, [120, 150, fl_before, fl_after], label="FL", lw=2)
plt.plot(x, [60, 50, poa_before, poa_after], label="POA", lw=2)
plt.plot(["May", "Jul"], [poa_before, poa_before+(fl_after-fl_before)], label="Giả tưởng", lw=2, color="C2", ls="-.")
plt.legend();
###Output
_____no_output_____ |
synthesis/synpuf16.ipynb | ###Markdown
Synthesis Setup
###Code
import synpuf
import pandas as pd
###Output
/home/maxghenis/miniconda3/lib/python3.6/site-packages/rpy2/rinterface/__init__.py:145: RRuntimeWarning: During startup -
warnings.warn(x, RRuntimeWarning)
/home/maxghenis/miniconda3/lib/python3.6/site-packages/rpy2/rinterface/__init__.py:145: RRuntimeWarning: Warning message:
warnings.warn(x, RRuntimeWarning)
/home/maxghenis/miniconda3/lib/python3.6/site-packages/rpy2/rinterface/__init__.py:145: RRuntimeWarning: package 'RevoUtils' was built under R version 3.4.3
warnings.warn(x, RRuntimeWarning)
###Markdown
**UPDATE**
###Code
INFILE = '~/Downloads/puf/train90.csv'
OUTFILE = '~/Downloads/syntheses/synpuf16.csv'
###Output
_____no_output_____
###Markdown
Load
###Code
train = pd.read_csv(INFILE).drop(['RECID'] +
synpuf.get_puf_columns(seed=False, categorical=False),
axis=1)
###Output
_____no_output_____
###Markdown
Synthesize
###Code
%%time
synth = synpuf.synthesize_puf_rf(train,
seed_cols=synpuf.get_puf_columns(calculated=False),
trees=100)
###Output
Synthesizing feature 1 of 58: E02100...
Synthesizing feature 2 of 58: E58990...
Synthesizing feature 3 of 58: E17500...
Synthesizing feature 4 of 58: N24...
Synthesizing feature 5 of 58: E00300...
Synthesizing feature 6 of 58: e00600_minus_e00650...
Synthesizing feature 7 of 58: E19200...
Synthesizing feature 8 of 58: E03220...
Synthesizing feature 9 of 58: E00200...
###Markdown
Export
###Code
synth.to_csv(OUTFILE, index=False)
###Output
_____no_output_____ |
pipeline/eagle_eye.ipynb | ###Markdown
Eagle EyeIn this notebook, the relevant tests and calibration for the eagle-eye (bird's-eye) perspective transform are performed. The calibration is based on the images with straight lines: 
###Code
import numpy as np
import cv2
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib qt
from pathlib import Path
import os
from pipeline import undistort_image
straight_lines1_file = "../test_images/straight_lines1.jpg"
straight_lines2_file = "../test_images/straight_lines2.jpg"
straight_lines1 = mpimg.imread(straight_lines1_file)
straight_lines2 = mpimg.imread(straight_lines2_file)
# Undistorting image
straight_lines2 = undistort_image(straight_lines2)
straight_lines1 = undistort_image(straight_lines1)
def plot_corners():
plt.plot(293, 670, '.')
plt.plot(1018, 670, '.')
plt.plot(577, 466, '.')
plt.plot(707, 466, '.')
# Visualy selection of 4 points to make a rectangle
plt.figure(1)
pts = np.array(
[[293, 670],
[577, 463],
[704, 463],
[1018, 670]], np.int32)
pts = pts.reshape((-1,1,2))
cv2.polylines(straight_lines1,[pts],True,(0,255,255), 2)
plt.imshow(straight_lines1)
plot_corners()
plt.figure(2)
# Visualy selection of 4 points to make a rectangle
cv2.polylines(straight_lines2,[pts],True,(0,255,255), 2)
plt.imshow(straight_lines2)
plot_corners()
###Output
_____no_output_____
###Markdown
It was observed that these 4 points fit both images quite nicely, so we will use them as starting points to select the corners of the perspective transformation. I then tried to tune the points to produce the straightest lines simultaneously on both images. It was something of a trade-off, because making one image straighter bends the other one. Finally I arrived at the values shown below, which make both images look decent.
###Code
src = np.float32(
[[293, 670],
[1018, 670],
[577, 463],
[704, 463]])
dst = np.float32(
[[293, 670],
[1018, 670],
[293, 463],
[1018, 463]])
mtx_perspective = cv2.getPerspectiveTransform(src, dst)
mtx_inv_perspective = cv2.getPerspectiveTransform(dst, src)
img_size = (straight_lines1.shape[1],straight_lines1.shape[0])
straight_lines1_warped = cv2.warpPerspective(straight_lines1, mtx_perspective, img_size, flags=cv2.INTER_LINEAR)
straight_lines2_warped = cv2.warpPerspective(straight_lines2, mtx_perspective, img_size, flags=cv2.INTER_LINEAR)
output_dir = "../output_images/eagle_eye/"
Path(output_dir).mkdir(parents=True, exist_ok=True)
f1 = plt.figure(3)
plt.imshow(straight_lines1_warped)
f1.savefig(output_dir + "straight_lines1.jpg")
f2 = plt.figure(4)
plt.imshow(straight_lines2_warped)
f2.savefig(output_dir + "straight_lines2.jpg")
np.save("mtx_perspective", mtx_perspective)
np.save("mtx_inv_perspective", mtx_inv_perspective)
###Output
_____no_output_____
###Markdown
The two straight-line images in the eagle-eye view look as follows: 
###Code
# test pipeline implementation
from pipeline import eagle_eye, eagle_eye_inv
straight_lines1_warped_test = eagle_eye(straight_lines1)
plt.figure()
plt.imshow(straight_lines1_warped_test)
straight_lines1_unwarped_test = eagle_eye_inv(straight_lines1_warped_test)
plt.figure()
plt.imshow(straight_lines1_unwarped_test)
###Output
_____no_output_____ |
Tips/2016-04-30-Enum.ipynb | ###Markdown
Enumerations in PythonAn enumeration type can be thought of as a set of labels or a collection of constants, typically used to represent a specific finite set such as weekdays, months, or states. Python's built-in types do not include a dedicated enumeration type, but we can implement one in many ways, for example with a dictionary or a class:
###Code
WEEKDAY = {
'MON': 1,
'TUS': 2,
'WEN': 3,
'THU': 4,
'FRI': 5
}
class Color:
RED = 0
GREEN = 1
BLUE = 2
###Output
_____no_output_____
###Markdown
The two approaches above can be seen as simple implementations of an enumeration type. If such enumeration variables are only used within a local scope there is no problem, but the trouble is that they are both mutable, which means they can be modified elsewhere and thereby break their normal use:
###Code
WEEKDAY['MON'] = WEEKDAY['FRI']
print(WEEKDAY)
###Output
{'FRI': 5, 'TUS': 2, 'MON': 5, 'WEN': 3, 'THU': 4}
###Markdown
An enumeration defined through a class can even be instantiated, which makes it neither one thing nor the other:
###Code
c = Color()
print(c.RED)
Color.RED = 2
print(c.RED)
###Output
0
2
###Markdown
Of course we could also use an immutable type such as a tuple, but that loses the original intent of an enumeration and degrades the labels into meaningless variables:
###Code
COLOR = ('R', 'G', 'B')
print(COLOR[0], COLOR[1], COLOR[2])
###Output
R G B
###Markdown
To provide a better solution, Python added the [enum](https://github.com/rainyear/cpython/blob/master/Lib/enum.py) standard library in version 3.4 through [PEP 435](https://www.python.org/dev/peps/pep-0435); versions before 3.4 can install a compatible backport with `pip install enum34`. The `enum` module provides three tools, `Enum`/`IntEnum`/`unique`, and they are very simple to use: you define an enumeration type by subclassing `Enum` or `IntEnum`, where `IntEnum` requires that the enumeration members be (or be convertible to) integers, and `unique` can be used as a decorator to require that member values are not repeated:
###Code
from enum import Enum, IntEnum, unique
try:
@unique
class WEEKDAY(Enum):
MON = 1
TUS = 2
WEN = 3
THU = 4
FRI = 1
except ValueError as e:
print(e)
try:
class Color(IntEnum):
RED = 0
GREEN = 1
BLUE = 'b'
except ValueError as e:
print(e)
###Output
invalid literal for int() with base 10: 'b'
###Markdown
More interestingly, the members of an `Enum` are all singletons, and they can be neither instantiated nor modified:
###Code
class Color(Enum):
R = 0
G = 1
B = 2
try:
Color.R = 2
except AttributeError as e:
print(e)
###Output
Cannot reassign members.
###Markdown
Although they cannot be instantiated, enumeration members can be assigned to variables:
###Code
red = Color(0)
green = Color(1)
blue = Color(2)
print(red, green, blue)
###Output
Color.R Color.G Color.B
###Markdown
Comparisons also work:
###Code
print(red is Color.R)
print(red == Color.R)
print(red is blue)
print(green != Color.B)
print(red == 0) # not equal to any value that is not a member of this enum class
###Output
True
True
False
True
False
###Markdown
One last point: since enumeration members are themselves of the enumeration type, other members can also be reached through any member:
###Code
print(red.B)
print(red.B.G.R)
###Output
Color.B
Color.R
###Markdown
But use this feature with caution, as it may conflict with names in the member's original namespace:
###Code
print(red.name, ':', red.value)
class Attr(Enum):
name = 'NAME'
value = 'VALUE'
print(Attr.name.value, Attr.value.name)
###Output
R : 0
NAME value
|
day3/.ipynb_checkpoints/howto_work-checkpoint.ipynb | ###Markdown
Work habits and reproducible research - Source your code- Continuous Integration- Reproducible research- Unit Tests- Workflows- Acceleration: profiling, JIT Source your code
###Code
a = "important stuff"
assert(a == "important stuff"), "Ha! You just corrupted your data!"
def f(a):
a = "corrupted " + a
return
a = f(a)
###Output
_____no_output_____
###Markdown
**Why?**- Because you avoid errors. Because reproducibility. And because you must be a responsible programmer and researcher.- You can version your code with version control, track changes, assign error reports, write documentation, etc.- If the code is kept in Jupyter Notebooks, it should always be executed sequentially.- Did you know, Jupyter has a code editor, a terminal and a markdown editor, independent from the notebook?- Make your research presentation easily reproducible by offering the data, the code and the findings separately.- But, don't isolate the code from your findings! You can execute your source code from a notebook, that is fine!- The bits of notebook that you see on the internet are meant to demonstrate code, not do scientific research.- If you receive a notebook from a collaborator, always make sure you can run it and reproduce the findings in it (it may already be corrupted).**How**Python editors:- Simple text processors: Atom, Sublime, Geany, Notepad++ etc.- Spyder: good for basic scientific programming, IPython interpreter- PyCharm: refactoring, internal Jupyter, object browsing, etc.What matters:- Using the editor most appropriate to the complexity of the task.- Full-featured editors make it easier to write good code!- Syntax and style linting.- Code refactoring.- Git/svn integration.- Remote development.- Advanced debugging.- Unit testing.**Standards**- Source code can be one or several scripts; it should contain information on deployment, testing and documentation.- Some care should be given to design: module hierarchy, whether you will create classes, use of design patterns. - https://www.geeksforgeeks.org/python-design-patterns/- Python style guide. - https://www.python.org/dev/peps/pep-0008/**Quality of life, or simply good habits**- Software versioning milestones.- Continuous integration.- Reproducibility.- Workflows.- Containerization. **Versioning example**- https://www.python.org/dev/peps/pep-0440/- https://en.wikipedia.org/wiki/Software_versioning milestone x.x:- expected outcomes- tests```X.YaN = Alpha release, X.YbN = Beta release, X.YrcN = Release Candidate, X.Y = Final release``` Continuous Integration- Submit your code to GitHub often!- Make backups of your data and findings.- Set baselines on expected outcomes, and verify often that your code is tested against them.- Unit tests are one way to do this.- Notebook keeping helps (internal notebooks), especially if you can re-test.- Test your code more than once, and every time you make a modification.- Use workflows, virtual environments and containers. Reproducible research- The vast majority of published results today are not reproducible.- Let us admit this: if our research cannot be reproduced, we probably did something else.- Research findings do not only depend on your plotting skills.- For someone to be able to reproduce your results, several things must harmonize: - Open data access: - (on federated databases) - Open source access - on GitHub, GitLab, etc. - Open environment: - conda requirements and container script (or image) - Findings (paper AND notebooks) - public access **Development vs production**- They are separated, fully.- It is fine to demo your source code, or just parts of it, in a notebook during development.- Most notebooks you see on the web are in a development stage.- How does it impact reproducibility if development and production are not separated?- Bring forward the issue of having different projects using the same source code directory, or the same data directory. What should you do? 
Reproducible environments: containers and conda- **Docker usage is described in another notebook**- containers can isolate an environment even better than a package manager!- Problem: what if the old package versions cannot be maintained?- Problem: what if the older container instances cannot be spun up or even re-created?
###Code
# export an environment into a yaml file
conda env export -n environment_name > environment.yaml
# re-create an environment based on the file
conda env create -f environment.yaml
###Output
_____no_output_____
###Markdown
Unit testing- The unittest module can be used from the command line to run tests from modules, classes or even individual test methods (a minimal example is sketched at the end of this cell) - https://docs.python.org/3/library/unittest.html - https://www.geeksforgeeks.org/unit-testing-python-unittest/ - https://realpython.com/python-testing/- Some editors give special support for unit tests: - https://www.jetbrains.com/help/pycharm/testing-your-first-python-application.html#choose-test-runner Documentation- Docstrings convention - https://www.python.org/dev/peps/pep-0257/ - https://realpython.com/documenting-python-code/- https://readthedocs.org/ - simplifies software documentation by building, versioning, and hosting your docs automatically. Think of it as Continuous Documentation- Using Sphinx: - https://docs.readthedocs.io/en/latest/intro/getting-started-with-sphinx.html - https://www.sphinx-doc.org/en/master/- Other options exist, pydoc, etc. **Other development tools:**- debugger: allows you to follow your code step by step and investigate the program stack- profiler: shows memory usage, finding possible leaks and bottlenecks (see the acceleration notebook) Workflows**Snakemake**- https://snakemake.readthedocs.io/en/stable/tutorial/basics.html- https://snakemake.github.io/snakemake-workflow-catalog/```conda install -c bioconda snakemake; conda install graphviz```
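As a minimal sketch of the unit-testing idea mentioned above (the module and function names are made up for illustration, not part of the course material):
```
# test_mymodule.py
import unittest
from mymodule import count_adenines   # hypothetical function under test


class TestCountAdenines(unittest.TestCase):
    def test_simple_read(self):
        # a read with exactly two 'A' bases
        self.assertEqual(count_adenines("AACG"), 2)

    def test_empty_read(self):
        self.assertEqual(count_adenines(""), 0)


if __name__ == "__main__":
    unittest.main()
```
Such a file can be run with `python -m unittest test_mymodule` and wired into continuous integration. The Snakemake example in the next cell turns a similarly small task into a reproducible workflow.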
###Code
SAMPLES = ['ctl1', 'ctl2']
rule all:
input:
'merged.txt'
rule acounts:
input:
file='{sample}.fastq'
output:
'{sample}_counts.txt'
run:
with open(input.file, 'r') as f:
nc = [str(l.count('A')) for l in f if not l[0]=='@']
data = ', '.join(nc)+'\n'
with open(output[0], 'w') as f: f.write(data)
rule merge:
input:
counts=expand('{sample}_counts.txt',sample=SAMPLES)
output:
'merged.txt'
shell:
"""
for f in {input.counts}
do
cat $f >> {output}
done
"""
snakemake --dag merged.txt | dot -Tsvg > dag.svg
snakemake --name mylittleworkflow.txt
###Output
learning.ipynb scicomp.ipynb visualization.ipynb
networks.ipynb statistics.ipynb workflows.ipynb
###Markdown
**Nextflow**- https://www.nextflow.io/
###Code
#!/usr/bin/env nextflow
params.range = 100
/*
* A trivial Perl script producing a list of numbers pair
*/
process perlTask {
output:
stdout randNums
shell:
'''
#!/usr/bin/env perl
use strict;
use warnings;
my $count;
my $range = !{params.range};
for ($count = 0; $count < 10; $count++) {
print rand($range) . ', ' . rand($range) . "\n";
}
'''
}
/*
* A Python script task which parses the output of the previous script
*/
process pyTask {
echo true
input:
stdin randNums
'''
#!/usr/bin/env python
import sys
x = 0
y = 0
lines = 0
for line in sys.stdin:
items = line.strip().split(",")
x = x+ float(items[0])
y = y+ float(items[1])
lines = lines+1
print "avg: %s - %s" % ( x/lines, y/lines )
'''
}
###Output
_____no_output_____
###Markdown
Acceleration Speed: Profiling, IPython, JITThe Python standard library contains the cProfile module for determining the time that takes every Python function when running the code. The pstats module allows to read the profiling results. Third party profiling libraries include in particular line_profiler for profiling code line after line, and memory_profiler for profiling memory usage. All these tools are very powerful and extremely useful when optimizing some code, but they might not be very easy to use at first.
###Code
%%writefile script.py
import numpy as np
import numpy.random as rdn
# uncomment for line_profiler
# @profile
def test():
a = rdn.randn(100000)
b = np.repeat(a, 100)
test()
!python -m cProfile -o prof script.py
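# Added sketch (not part of the original cell): the profile file written above
# can be read back with the standard-library pstats module.
import pstats
stats = pstats.Stats('prof')
stats.sort_stats('cumulative').print_stats(5)  # show the 5 most expensive calls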
# install the profiling helpers (shell commands, run from the notebook with "!")
!pip install ipython
!pip install line-profiler
!pip install psutil
!pip install memory_profiler
%timeit?
%run -t slow_functions.py
%time {1 for i in range(10*1000000)}
%timeit -n 1000 10*1000000
def foo(n):
phrase = 'repeat me'
pmul = phrase * n
    pjoi = ''.join([phrase for x in range(n)])
pinc = ''
    for x in range(n):
pinc += phrase
del pmul, pjoi, pinc
%load_ext line_profiler  # needed so that the %lprun magic below is available
%lprun -f foo foo(100000)
###Output
_____no_output_____
###Markdown
- %time & %timeit: See how long a script takes to run (one time, or averaged over a bunch of runs).- %prun: See how long it took each function in a script to run.- %lprun: See how long it took each line in a function to run.- %mprun & %memit: See how much memory a script uses (line-by-line, or averaged over a bunch of runs). Numba Numba is an open source JIT (just in time) compiler that translates a subset of Python and NumPy code into fast machine code.- https://numba.pydata.org/```conda install numbaconda install cudatoolkit```
###Code
import numba
from numba import jit
import random
import numpy as np  # used by the logistic_regression example below
@jit(nopython=True)
def monte_carlo_pi(nsamples):
acc = 0
for i in range(nsamples):
x = random.random()
y = random.random()
if (x ** 2 + y ** 2) < 1.0:
acc += 1
return 4.0 * acc / nsamples
@numba.jit(nopython=True, parallel=True)
def logistic_regression(Y, X, w, iterations):
for i in range(iterations):
w -= np.dot(((1.0 /
(1.0 + np.exp(-Y * np.dot(X, w)))
- 1.0) * Y), X)
return w
###Output
_____no_output_____ |
notebooks/LDA_model-nature.ipynb | ###Markdown
Import all the required packages
###Code
## basic packages
import numpy as np
import re
import csv
import time
import pandas as pd
from itertools import product
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
##gensim
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
##spacy and nltk
import spacy
from nltk.corpus import stopwords
from spacy.lang.en.stop_words import STOP_WORDS
##vis
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis
pyLDAvis.enable_notebook()
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
###Output
_____no_output_____
###Markdown
load the metadata of podcast transcripts
###Code
global df, show_descriptions
meta_data = []
with open("../data/metadata.tsv") as csvfile:
csvreader = csv.reader(csvfile,delimiter="\t")
for row in csvreader:
meta_data.append(row)
df = pd.DataFrame(meta_data[1:],columns=meta_data[0])
show_filename_prefixes = df.show_filename_prefix
episode_filename_prefixes = df.episode_filename_prefix
shows = df.groupby(by=['show_name'])
show_names = shows.apply(lambda x: x.show_name.unique()[0])
nature_keywords = ["nature","photography","environment","ecosystem","wilderness","animals",\
"ocean","climate","landscape","waterfall","glacier","mountains","mountain",\
"coastal","geographic","birds","planter",\
"lakes","lake","volcano","earthquake","tsunami","flood","draught",\
"zoo","aquarium","desert","forest","forests","everglades",\
"cherry-blossom","spring","autumn","summer","winter","earth","planets",\
"marshland","frozen-lake","mammals","fish","reptiles","tornado","hurricane",\
"storm","thunder-storm","alaska","sahara","ghats","antarctica","arctic","pacific",\
"atlantic","garden","plants","himalayas","greenland","north-pole",\
"south-pole","greenhouse"]
genres_topics = nature_keywords
formats = ["monologue","interview","storytelling","repurposed",\
"bite-sized","co-host conversation","debate","narrative",\
"scripted","improvised"]
podcasts_genres_topics = {}
for k,show in enumerate(show_names):
keywords = show.lower().split(" ")
for word in keywords:
if word in genres_topics:
if (k,show) in podcasts_genres_topics:
if word not in podcasts_genres_topics[(k,show)]:
podcasts_genres_topics[(k,show)].append(word)
else:
podcasts_genres_topics[(k,show)] = [word]
podcasts = [item[1] for item in podcasts_genres_topics.keys()]
# keep the shows whose name contains at least one of the nature keywords
nature_category = [(key, val) for key, val in podcasts_genres_topics.items()
                   if any(word in val for word in nature_keywords)]
nlp = spacy.load("en_core_web_sm")
stops_nltk = set(stopwords.words("english"))
stops_spacy = STOP_WORDS.union({'ll', 've', 'pron','okay','oh','like','know','yea','yep','yes','no',\
"like","oh","yeah","okay","wow","podcast","rating","ratings","not",\
"support","anchor","podcasts","episode","http","https","5star","reviews",\
"review","instagram","tiktok","amazon","apple","twitter","goole",\
"facebook","send","voice message","message","voice","subscribe","follow",\
"sponsor","links","easiest","way","fuck","fucking","talk","discuss",\
"world","time","want","join","learn","week","things","stuff","find",\
"enjoy","welcome","share","talk","talking","people","gmail","help","today",\
"listen","best","stories","story","hope","tips","great","journey",\
"topics","email","questions","question","going","life","good","friends",\
"friend","guys","discussing","live","work","student","students","need",\
"hear","think","change","free","better","little","fucking","fuck","shit",\
"bitch","sex","easiest","way","currently","follow","follows","needs",\
"grow","stay","tuned","walk","understand","tell","tells","ask","helps",\
"feel","feels","look","looks","meet","relate","soon","quick","dude","girl",\
"girls","guy","literally","spotify","google","totally","played","young",\
"begin","began","create","month","year","date","day","terms","lose","list",\
"bought","brings","bring","buy","percent","rate","increase","words","value",\
"search","awesome","followers","finn","jake","mark","america","american",\
"speak","funny","hours","hour","honestly","states","united","franklin",\
"patrick","john","build","dave","excited","process","processes","based",\
"focus","star","mary","chris","taylor","gotta","liked","hair","adam","chat",\
"named","died","born","country","mother","father","children","tools",\
"countries","jordan","tommy","listeners","water","jason","lauren","alex",\
"laguna","jessica","kristen","examples","example","heidi","stephen","utiful",\
"everybody","sorry","came","come","meet","whoa","whoaa","yay","whoaw",\
"anybody","somebody","cool","watch","nice","shall"})
stops = stops_nltk.union(stops_spacy)
d = {}
for val in podcasts_genres_topics.values():
for word in nature_keywords:
if word in val:
if word in d:
d[word] += 1
else:
d[word] = 1
plt.figure(figsize=(10,8))
plt.bar(d.keys(),d.values())
plt.title('Distribution of podcast episodes related to nature/natural',fontsize=16)
plt.xlabel('Keyword',fontsize=16)
plt.ylabel('Keyword frequency',fontsize=16)
plt.xticks(rotation=90,fontsize=14)
plt.yticks(fontsize=14);
number_of_topics = [5,6,7,8,9,10,15]
df_parameters = list(product([2,3,4,5,6,7,8,9,10],[0.3,0.4,0.5,0.6,0.7,0.8,0.9]))
hyperparams = list(product(number_of_topics,df_parameters))
nature_cs = []
with open('/home1/sgmark/capstone-project/results/coherence_scores_nature_category.csv','r') as f:
reader = csv.reader(f)
for row in reader:
nature_cs.append([float(x) for x in row])
best_hp_setting = hyperparams[np.argmax([x[5] for x in nature_cs])]
###Output
_____no_output_____
###Markdown
The individual transcript location
###Code
# def file_location(show,episode):
# search_string = local_path + "/spotify-podcasts-2020" + "/podcasts-transcripts" \
# + "/" + show[0] \
# + "/" + show[1] \
# + "/" + "show_" + show \
# + "/"
# return search_string
###Output
_____no_output_____
###Markdown
load the transcripts
###Code
transcripts = {}
for podcast,genre in nature_category:
for i in shows.get_group(podcast[1])[['show_filename_prefix','episode_filename_prefix']].index:
show,episode = shows.get_group(podcast[1])[['show_filename_prefix','episode_filename_prefix']].loc[i]
s = show.split("_")[1]
try:
with open('/home1/sgmark/podcast_transcripts/'+s[0]+'/'+s[1]+'/'+show+'/'+episode+'.txt','r') as f:
transcripts[(show,episode)] = f.readlines()
f.close()
except Exception as e:
pass
keys = list(transcripts.keys())
# Cleaning & remove urls and links
def remove_stops(text,stops):
final = []
for word in text:
if (word not in stops) and (len(word)>3) and (not word.endswith('ing')) and (not word.endswith('ly')):
final.append(word)
return final
def clean_text(docs):
final = []
for doc in docs:
clean_doc = remove_stops(doc, stops)
final.extend(clean_doc)
return final
def lemmatization(text_data):
nlp = spacy.load("en_core_web_sm")
texts = []
for text in text_data:
doc = nlp(text)
lem_text = []
for token in doc:
if (token.pos_=="VERB") or (token.pos_=="ADV"):
pass
else:
lem_text.append(token.lemma_)
texts.append(lem_text)
return texts
###Output
_____no_output_____
###Markdown
tokenize/convert text into words
###Code
def normalize_docs(text_data):
final_texts = []
for text in text_data:
new_text = gensim.utils.simple_preprocess(text,deacc=True)
final_texts.append(new_text)
return final_texts
docs = []
for text in transcripts.values():
docs.append(' '.join(clean_text(normalize_docs(text))))
texts = lemmatization(docs)
texts = [remove_stops(text,stops) for text in texts]
###Output
_____no_output_____
###Markdown
Using bigrams
###Code
from gensim.models.phrases import Phrases
bigram = Phrases(texts, min_count=5)
for i in range(len(texts)):
for token in bigram[texts[i]]:
if '_' in token:
texts[i].append(token)
###Output
_____no_output_____
###Markdown
Construct a corpus of words as a bag of words
###Code
dictionary = corpora.Dictionary(texts)
dictionary.filter_extremes(no_below=best_hp_setting[1][0],no_above=best_hp_setting[1][1])
corpus = [dictionary.doc2bow(text) for text in texts]
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
# from itertools import product
# number_of_topics = [5,6,7,8,9,10,15]
# df_parameters = list(product([2,3,4,5,6,7,8,9,10],[0.3,0.4,0.5,0.6,0.7,0.8,0.9]))
# coherence_scores_umass = np.zeros((len(number_of_topics),len(df_parameters)))
# coherence_scores_uci = np.zeros((len(number_of_topics),len(df_parameters)))
# coherence_scores_npmi = np.zeros((len(number_of_topics),len(df_parameters)))
# j = 0
# for num in number_of_topics:
# i = 0
# for n,m in df_parameters:
# dictionary = corpora.Dictionary(texts)
# dictionary.filter_extremes(no_below=n,no_above=m)
# corpus = [dictionary.doc2bow(text) for text in texts]
# num_topics = num
# chunksize = 200
# passes = 20
# iterations = 500
# eval_every = None
# lda_model = gensim.models.ldamodel.LdaModel(corpus,
# id2word=dictionary,
# num_topics=num_topics,
# chunksize=chunksize,
# passes=passes,
# iterations=iterations,
# alpha='auto',
# eta='auto',
# random_state = 123,
# eval_every=eval_every)
# cm = CoherenceModel(lda_model, texts=texts,corpus=corpus, coherence= 'c_uci')
# coherence_scores_uci[j,i] = cm.get_coherence()
# cm = CoherenceModel(lda_model, texts=texts,corpus=corpus, coherence= 'c_npmi')
# coherence_scores_npmi[j,i] = cm.get_coherence()
# cm = CoherenceModel(lda_model, corpus=corpus, coherence= 'u_mass')
# coherence_scores_umass[j,i] = cm.get_coherence()
# with open("coherence_scores_nature_category.csv",'a') as f:
# writer = csv.writer(f)
# writer.writerow([num,n,m,coherence_scores_uci[j,i],coherence_scores_npmi[j,i],\
# coherence_scores_umass[j,i]])
# i += 1
# print(i)
# j += 1
# print(j)
###Output
_____no_output_____
###Markdown
Final model
###Code
%%time
import logging
logging.basicConfig(filename='nature_topics.log', encoding='utf-8',format='%(asctime)s : %(levelname)s : %(message)s', level=logging.DEBUG)
num_topics = best_hp_setting[0]
chunksize = 200
passes = 50
iterations = 500
eval_every = None
lda_model = gensim.models.ldamodel.LdaModel(corpus,
id2word=dictionary,
num_topics=num_topics,
chunksize=chunksize,
passes=passes,
iterations=iterations,
alpha='auto',
eta='auto',
random_state=123,
eval_every=eval_every)
top_topics = lda_model.top_topics(corpus,texts=texts,coherence='c_npmi') #, num_words=20)
# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics])/num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)
print(f'topic coherence scores: {[t[1] for t in top_topics]}')
###Output
_____no_output_____
###Markdown
Visualizing data
###Code
vis = pyLDAvis.gensim_models.prepare(lda_model,corpus,dictionary,mds="mmds",R=20)
pyLDAvis.save_json(vis,'nature_umass.json')
vis
# from pprint import pprint
# pprint(top_topics)
import pickle
pickle.dump(lda_model,open('../model/nature_episodes_lda_model_umass.pkl','wb'))
pickle.dump(dictionary,open('../model/nature_episodes_dictionary_umass.pkl','wb'))
pickle.dump(corpus,open('../model/nature_episodes_corpus_umass.pkl','wb'))
# pickle.dump(texts,open('../model/nature_episodes_texts.pkl','wb'))
import pickle
file = open('../model/nature_episodes_lda_model_umass.pkl','rb')
lda_model = pickle.load(file)
file.close()
file = open('../model/nature_episodes_corpus_umass.pkl','rb')
corpus = pickle.load(file)
file.close()
file = open('../model/nature_episodes_dictionary_umass.pkl','rb')
dictionary = pickle.load(file)
file.close()
file = open('../model/nature_episodes_texts.pkl','rb')
texts = pickle.load(file)
file.close()
def get_main_topic_df(model, bow, texts):
topic_list = []
percent_list = []
keyword_list = []
podcast_list = []
episode_list = []
duration_list = []
publisher_list = []
show_prefix_list = []
episode_prefix_list = []
descriptions_list = []
rss_link_list = []
for key,wc in zip(keys,bow):
show_prefix_list.append(key[0])
episode_prefix_list.append(key[1])
podcast_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].show_name.iloc[0])
episode_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].episode_name.iloc[0])
duration_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].duration.iloc[0])
publisher_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].publisher.iloc[0])
descriptions_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].episode_description.iloc[0])
rss_link_list.append(df[(df['show_filename_prefix'] == key[0])&(df['episode_filename_prefix'] == key[1])].rss_link.iloc[0])
topic, percent = sorted(model.get_document_topics(wc), key=lambda x: x[1], reverse=True)[0]
topic_list.append(topic)
percent_list.append(round(percent, 3))
keyword_list.append(' '.join(sorted([x[0] for x in model.show_topic(topic)])))
result_df = pd.concat([pd.Series(show_prefix_list, name='show_filename_prefix'),
pd.Series(episode_prefix_list, name='episode_filename_prefix'),
pd.Series(podcast_list, name='Podcast_name'),
pd.Series(episode_list, name='Episode_name'),
pd.Series(topic_list, name='Dominant_topic'),
pd.Series(percent_list, name='Percent'),
pd.Series(texts, name='Processed_text'),
pd.Series(keyword_list, name='Keywords'),
pd.Series(duration_list, name='Duration of the episode'),
pd.Series(publisher_list, name='Publisher of the show'),
pd.Series(descriptions_list, name='Description of the episode'),
pd.Series(rss_link_list, name='rss_link')], axis=1)
return result_df
main_topic_df = get_main_topic_df(lda_model,corpus,texts)
main_topic_df.to_pickle('../model/nature_topics_main_df_umass.pkl')
main_topic_df.to_csv('../model/main_df_csv/nature_topics_main_df_umass.csv')
topics_terms = {k:lda_model.show_topic(k,topn=30) for k in range(lda_model.num_topics)}
plt.figure(figsize=(10,8))
topics_groups = main_topic_df.groupby('Dominant_topic')
plt.bar(range(lda_model.num_topics),topics_groups.count()['Podcast_name'],width=0.5)
plt.title('Dominant topic frequency in the nature category of podcast episodes',fontsize=16)
plt.xlabel('Dominant Topic index',fontsize=16)
plt.ylabel('Number of episodes',fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14);
representatives = pd.DataFrame()
for k in topics_groups.groups.keys():
representatives = pd.concat([representatives,
topics_groups.get_group(k).sort_values(['Percent'], ascending=False).head(3)])
representatives.to_csv('../model/main_df_csv/nature_representatives_umass.csv')
# for k,words in enumerate(representatives.Keywords):
# print(f'topic {k}: {words}')
# print('Document: {} Dominant topic: {}\n'.format(representatives.index[2],
# representatives.loc[representatives.index[2]]['Dominant_topic']))
# print([sentence.strip() for sentence in transcripts[keys[representatives.index[2]]]])
num_topics = best_hp_setting[0]
def word_count_by_topic(topic=0):
d_lens = [len(d) for d in topics_groups.get_group(topic)['Processed_text']]
plt.figure(figsize=(10,8))
plt.hist(d_lens)
large = plt.gca().get_ylim()[1]
d_mean = round(np.mean(d_lens), 1)
d_median = np.median(d_lens)
plt.plot([d_mean, d_mean], [0,large], label='Mean = {}'.format(d_mean))
plt.plot([d_median, d_median], [0,large], label='Median = {}'.format(d_median))
plt.legend()
plt.xlabel('Document word count',fontsize=16)
plt.ylabel('Number of documents',fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
from ipywidgets import interact, IntSlider
slider = IntSlider(min=0, max=num_topics-1, step=1, value=0, description='Topic')
interact(word_count_by_topic, topic=slider);
lda_top_words_index = set()
for i in range(lda_model.num_topics):
lda_top_words_index = lda_top_words_index.union([k for (k,v) in lda_model.get_topic_terms(i)])
#print('Indices of top words: \n{}\n'.format(lda_top_words_index))
words_we_care_about = [{dictionary[tup[0]]: tup[1] for tup in lst if tup[0] in list(lda_top_words_index)}
for lst in corpus]
lda_top_words_df = pd.DataFrame(words_we_care_about).fillna(0).astype(int).sort_index(axis=1)
lda_top_words_df['Cluster'] = main_topic_df['Dominant_topic']
k=1
clusterwise_words_dist = lda_top_words_df.groupby('Cluster').get_group(k)
plt.figure(figsize=(30,8))
plt.bar(list(clusterwise_words_dist.sum()[:-1].transpose().index),\
list(clusterwise_words_dist.sum()[:-1].transpose()))
plt.title(f'Term frequencies of keywords of topic: {k}',fontsize=16)
plt.xlabel('Keywords in the topics',fontsize=16)
plt.ylabel('Word frequency',fontsize=16)
plt.xticks(rotation=90,fontsize=14)
plt.yticks(fontsize=14);
word_totals = {k:{y[1]:y[0] for y in x[0]} for k,x in enumerate(top_topics)}
import matplotlib.pyplot as plt
from ipywidgets import interact, IntSlider
from wordcloud import WordCloud
def show_wordcloud(topic=0):
cloud = WordCloud(background_color='white', colormap='viridis')
cloud.generate_from_frequencies(word_totals[topic])
plt.figure(figsize=(10,8))
plt.gca().imshow(cloud)
plt.axis('off')
plt.tight_layout()
slider = IntSlider(min=0, max=best_hp_setting[0]-1, step=1, value=0, description='Topic')
interact(show_wordcloud, topic=slider);
representatives
###Output
_____no_output_____ |
05_Machine_Learning_solutions/05_02_Machine_Learning-SDR.ipynb | ###Markdown
Website Ghani, Rayid, Frauke Kreuter, Julia Lane, Adrianne Bradford, Alex Engler, Nicolas Guetta Jeanrenaud, Graham Henke, Daniela Hochfellner, Clayton Hunter, Brian Kim, Avishek Kumar, Jonathan Morgan, and Ridhima Sodhi. _Citation to be updated on notebook export_ Machine Learning----- IntroductionIn this tutorial, we'll discuss how to formulate a research question in the machine learning framework; how to transform raw data into something that can be fed into a model; how to build, evaluate, compare, and select models; and how to reasonably and accurately interpret model results. You'll also get hands-on experience using the `scikit-learn` package in Python to model the data you're familiar with from previous tutorials. This tutorial is based on chapter 6 of [Big Data and Social Science](https://github.com/BigDataSocialScience/). Glossary of TermsThere are a number of terms specific to Machine Learning that you will find repeatedly in this notebook. - **Learning**: In Machine Learning, you'll hear about "learning a model." This is what you probably know as *fitting* or *estimating* a function, or *training* or *building* a model. These terms are all synonyms and are used interchangeably in the machine learning literature.- **Examples**: These are what you probably know as *data points* or *observations* or *rows*. - **Features**: These are what you probably know as *independent variables*, *attributes*, *predictors*, or *explanatory variables.*- **Underfitting**: This happens when a model is too simple and does not capture the structure of the data well enough.- **Overfitting**: This happens when a model is too complex or too sensitive to the noise in the data; this can result in poor generalization performance, or applicability of the model to new data. - **Regularization**: This is a general method to avoid overfitting by applying additional constraints to the model. For example, you can limit the number of features present in the final model, or require that the weight coefficients applied to the (standardized) features be small.- **Supervised learning** involves problems with one target or outcome variable (continuous or discrete) that we want to predict, or classify data into. Classification, prediction, and regression fall into this category. We call the set of explanatory variables $X$ **features**, and the outcome variable of interest $Y$ the **label**.- **Unsupervised learning** involves problems that do not have a specific outcome variable of interest, but rather we are looking to understand "natural" patterns or groupings in the data - looking to uncover some structure that we do not know about a priori. Clustering is the most common example of unsupervised learning; another example is principal components analysis (PCA). Python SetupBefore we begin, run the code cell below to initialize the libraries we'll be using in this assignment. We're already familiar with `numpy`, `pandas`, `matplotlib`, and `sqlalchemy` from previous tutorials. Here we'll also be using [`scikit-learn`](http://scikit-learn.org) to fit machine learning models.
###Code
# Basic data analysis and visualization tools
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# SQL Connection
from sqlalchemy import create_engine
# Machine learning tools
from sklearn.metrics import (confusion_matrix, accuracy_score, precision_score, recall_score,
precision_recall_curve,roc_curve, auc, classification_report)
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
GradientBoostingClassifier,
AdaBoostClassifier)
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
# Seaborn settings for nicer graphs
sns.set_style("white")
sns.set_context("poster", font_scale=1.25, rc={"lines.linewidth":1.25, "lines.markersize":8})
# Database connection
db_name = "appliedda"
hostname = "10.10.2.10"
connection_string = "postgresql://{}/{}".format(hostname, db_name)
conn = create_engine(connection_string)
###Output
_____no_output_____
###Markdown
The Machine Learning ProcessThe Machine Learning Process is as follows:- [**Understand the problem and goal.**](#problem-formulation) *This sounds obvious but is often nontrivial.* Problems typically start as vague descriptions of a goal - improving health outcomes, increasing graduation rates, understanding the effect of a variable *X* on an outcome *Y*, etc. It is really important to work with people who understand the domain being studied to dig deeper and define the problem more concretely. What is the analytical formulation of the metric that you are trying to optimize?- [**Formulate it as a machine learning problem.**](#problem-formulation) Is it a classification problem or a regression problem? Is the goal to build a model that generates a ranked list prioritized by risk, or is it to detect anomalies as new data come in? Knowing what kinds of tasks machine learning can solve will allow you to map the problem you are working on onto one or more machine learning settings and give you access to a suite of methods.- **Data exploration and preparation.** Next, you need to carefully explore the data you have. What additional data do you need or have access to? What variable will you use to match records for integrating different data sources? What variables exist in the data set? Are they continuous or categorical? What about missing values? Can you use the variables in their original form, or do you need to alter them in some way?- [**Feature engineering.**](#feature-generation) In machine learning language, what you might know as independent variables or predictors or factors or covariates are called "features." Creating good features is probably the most important step in the machine learning process. This involves doing transformations, creating interaction terms, or aggregating over datapoints or over time and space.- **Method selection.** Having formulated the problem and created your features, you now have a suite of methods to choose from. It would be great if there were a single method that always worked best for a specific type of problem. Typically, in machine learning, you take a variety of methods and try them, empirically validating which one is the best approach to your problem.- [**Evaluation.**](#evaluation) As you build a large number of possible models, you need a way to choose the best among them. We'll cover methodology to validate models on historical data and discuss a variety of evaluation metrics. The next step is to validate using a field trial or experiment.- [**Deployment.**](#deployment) Once you have selected the best model and validated it using historical data as well as a field trial, you are ready to put the model into practice. You still have to keep in mind that new data will be coming in, and the model might change over time. You're probably used to fitting models in physical or social science classes. In those cases, you probably had a hypothesis or theory about the underlying process that gave rise to your data, chose an appropriate model based on prior knowledge and fit it using least squares, and used the resulting parameter or coefficient estimates (or confidence intervals) for inference. This type of modeling is very useful for *interpretation*. In machine learning, our primary concern is *generalization*. This means that:- **We care less about the structure of the model and more about the performance.** This means that we'll try out a whole bunch of models at a time and choose the one that works best, rather than determining which model to use ahead of time. 
We can then choose to select a *suboptimal* model if we care about a specific model type. - **We don't (necessarily) want the model that best fits the data we've *already seen*,** but rather the model that will perform the best on *new data*. This means that we won't gauge our model's performance using the same data that we used to fit the model (e.g., sum of squared errors or $R^2$), and that "best fit" or accuracy will most often *not* determine the best model. - **We can include a lot of variables into the model.** This may sound like the complete opposite of what you've heard in the past, and it can be hard to swallow. But we will use different methods to deal with many of those concerns in the model fitting process by using a more automatic variable selection process. Problem FormulationFirst, turning something into a real objective function. What do you care about? Do you have data on that thing? What action can you take based on your findings? Do you risk introducing any bias based on the way you model something? Four Main Types of ML Tasks for Policy Problems- **Description**: [How can we identify and respond to the most urgent online government petitions?](https://dssg.uchicago.edu/project/improving-government-response-to-citizen-requests-online/)- **Prediction**: [Which students will struggle academically by third grade?](https://dssg.uchicago.edu/project/predicting-students-that-will-struggle-academically-by-third-grade/)- **Detection**: [Which police officers are likely to have an adverse interaction with the public?](https://dssg.uchicago.edu/project/expanding-our-early-intervention-system-for-adverse-police-interactions/)- **Behavior Change**: [How can we prevent juveniles from interacting with the criminal justice system?](https://dssg.uchicago.edu/project/preventing-juvenile-interactions-with-the-criminal-justice-system/) Our Machine Learning Problem> For SEH students receiving their PhD, who is most likely to go into academia, as measured by their employer in two years? This is an example of a *binary prediction classification problem*. In this case, we are trying to predict a binary outcome: whether a PhD recipient is in academia in two years or not. Note that the way the outcome is defined is somewhat arbitrary. Data Exploration and PreparationDuring the first classes, we explored the data, linked different data sources, and created new variables. A table was put together using similar techniques and written to the class schema. We will now implement our machine learning model on this dataset. A step-by-step description of how we created the table is provided in the "Data Preparation" notebook.1. **Creating labels**: Labels are the dependent variables, or *Y* variables, that we are trying to predict. In the machine learning framework, labels are often *binary*: true or false, encoded as 1 or 0. This outcome variable is named `label`.> Refer to the [05_01_ML_Data_Prep-SDR.ipynb](05_01_ML_Data_Prep-SDR.ipynb) notebook for how the labels were created.1. **Decide on features**: Our features are our independent variables or predictors. Good features make machine learning systems effective. The better the features, the easier it is to capture the structure of the data. You generate features using domain knowledge. In general, it is better to have more complex features and a simpler model rather than vice versa. 
Keeping the model simple makes it faster to train and easier to understand, rather than extensively searching for the "right" model and "right" set of parameters. Machine Learning Algorithms learn a solution to a problem from sample data. The set of features is the best representation of the sample data to learn a solution to a problem.1. **Feature engineering** is "the process of transforming raw data into features that better represent the underlying problem/data/structure to the predictive models, resulting in improved model accuracy on unseen data" (from [Discover Feature Engineering](http://machinelearningmastery.com/discover-feature-engineering-how-to-engineer-features-and-how-to-get-good-at-it/)). In text, for example, this might involve deriving traits of the text like word counts, verb counts, or topics to feed into a model rather than simply giving it the raw text. Examples of feature engineering are: - **Transformations**, such as log, square, and square root. - **Dummy (binary) variables**, sometimes known as *indicator variables*, often done by taking categorical variables (such as industry) which do not have a numeric value, and adding them to models as a binary value. - **Discretization**. Several methods require features to be discrete instead of continuous. This is often done by binning, which you can do by equal width, deciles, Fisher-Jenks, etc. - **Aggregation.** Aggregate features often constitute the majority of features for a given problem. These use different aggregation functions (*count, min, max, average, standard deviation, etc.*) which summarize several values into one feature, aggregating over varying windows of time and space. For example, we may want to calculate the *number* (and *min, max, mean, variance*, etc.) of crimes within an *m*-mile radius of an address in the past *t* months for varying values of *m* and *t*, and then use all of them as features (a small illustrative sketch of discretization and aggregation follows this cell).1. **Cleaning data**: To run the `scikit-learn` set of models we demonstrate in this notebook, your input dataset must have no missing values.1. **Imputing values to missing or irrelevant data**: Once the features are created, always check to make sure the values make sense. You might have some missing values, or impossible values for a given variable (negative values, major outliers). If you have missing values you should think hard about what makes the most sense for your problem; you may want to replace them with `0`, the median or mean of your data, or some other value.1. **Scaling features**: Certain models will have an issue with features on different scales. For example, an individual's age is typically a number between 0 and 100 while earnings can be a number between 0 and 1000000 (or higher). In order to circumvent this problem, we can scale our features to the same range (e.g. [0,1]).> Refer to the [05_01_ML_Data_Prep-SDR.ipynb](05_01_ML_Data_Prep-SDR.ipynb) notebook for how the features were created. Training and Test SetsIn the ML Data Prep notebook, we created one row for each individual who graduated in the 2012-2013 academic year and in the 2014-2015 academic year. We also created features for each person in our cohorts. For both the training and test sets, let's now combine the labels and features into analytical dataframes.
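As flagged above, here is a small, purely illustrative sketch of the discretization and aggregation ideas; the DataFrame and column names are hypothetical and not taken from the class data:
```
import pandas as pd

# hypothetical person-level table
people = pd.DataFrame({'person_id': [1, 2, 3, 4],
                       'age': [24, 37, 52, 68],
                       'earnings': [18000, 52000, 61000, 43000]})

# discretization: bin a continuous variable into categories
people['age_group'] = pd.cut(people['age'], bins=[0, 30, 50, 100],
                             labels=['young', 'middle', 'older'])

# aggregation: summarize several rows into one feature per group
print(people.groupby('age_group')['earnings'].agg(['count', 'mean', 'max']))
```
The real features for this class were built in the Data Preparation notebook; the next cells simply load them.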
###Code
# For the Training Set:
sql = '''
SELECT *
FROM ada_ncses_2019.sdr_ml_2013
'''
df_training = pd.read_sql(sql, conn)
# For the Training Set:
sql = '''
SELECT *
FROM ada_ncses_2019.sdr_ml_2015
'''
df_testing = pd.read_sql(sql, conn)
df_training.describe(include='all', percentiles=[.5, .9, .99])
df_testing.describe(include='all', percentiles=[.05,.5, .9, .99])
df_training.label.value_counts()
###Output
_____no_output_____
###Markdown
Before running any machine learning algorithms, we have to ensure there are no `NULL` (or `NaN`) values in the data. As you have heard before, __never remove observations with missing values without considering the data you are dropping__. One easy way to check if there are any missing values with `Pandas` is to use the `.info()` method, which returns a count of non-null values for each column in your DataFrame.
###Code
df_training.info()
df_training.replace('NA', "", inplace = True)
df_testing.replace('NA', "", inplace = True)
###Output
_____no_output_____
###Markdown
Formatting FeaturesOur features are already part of the dataset, but we need to do a little bit of data cleaning and manipulation to get them all into the format we want for our machine learning models. - All categorical variables must be binary. This means we need to make them into dummy variables. Fortunately, `pandas` has a function to make that easy.- All numerical variables should be scaled. This doesn't matter for some ML algorithms such as Decision Trees, but it does for others, such as K-Nearest Neighbors.- Missing values should be accounted for. We need to impute them or drop them, fully understanding how they affect the conclusions we can make.> Categorical variables that are already binary do not need to be made into dummy variables. Objects Used in SetupOur goal in this formatting step is to end up with:- `train` which contains the training data, with all categorical variables dummied and numerical variables scaled.- `test` which contains the testing data, with all categorical variables dummied and numerical variables scaled (according to the training data scaling) Creating dummy variablesCategorical variables need to be converted to a series of binary (0,1) values for `scikit-learn` to use them. We will use the `get_dummies` function from `pandas` to make our lives easier.
###Code
# only want to make object types dummy variables
cols_to_change = ['wtsurvy','yrscours','yrsdisst','yrsnotwrk']
for col in cols_to_change:
df_training[col] = pd.to_numeric(df_training[col], 'coerce')
df_testing[col] = pd.to_numeric(df_testing[col], 'coerce')
#confirm columns
df_training.info()
# Find the categorical features
feat_to_dummy = [c for c in df_training.columns if df_training[c].dtype == 'O']
feat_to_dummy.remove('drf_id')
print(feat_to_dummy)
# get dummy values
# Note: we are creating a new DataFrame called train and test.
# These are essentially cleaned versions of df_training and df_testing
train = pd.get_dummies(df_training[feat_to_dummy], columns = feat_to_dummy, drop_first = True)
test = pd.get_dummies(df_testing[feat_to_dummy], columns = feat_to_dummy, drop_first = True)
# Check to make sure that it created dummies
train.head()
print(train.columns.tolist())
print(test.columns.tolist())
# check if column list is the same for train and test dummy sets
sorted(train.columns.tolist()) == sorted(test.columns.tolist())
# add dummy columns to full training and testing dataframes
df_training = df_training.merge(train, left_index=True, right_index=True)
df_testing = df_testing.merge(test, left_index=True, right_index=True)
df_training = df_training.drop(feat_to_dummy, axis = 1)
df_testing = df_testing.drop(feat_to_dummy, axis = 1)
df_training.shape, df_testing.shape
df_training.head()
###Output
_____no_output_____
###Markdown
Checkpoint 2: Dumb Dummies What could you do if you have a dummy variable that exists in your testing set but doesn't exist in your training set? Why is this a problem? **Discuss with your group**. (One possible approach is sketched at the end of this cell.) Scaling valuesCertain models will have issues with numeric values on different scales. In your analysis cohort, the number of trips taken may vary from ten to 700 while the total expenditures may range from zero to thousands of dollars. Traditional regression methods, for example, tend to result in features (aka right-hand variables, Xs, etc) with small values having larger coefficients than features with large values. On the other hand, some models - like decision trees - are not generally affected by having variables on different scales. To easily use different models, we'll scale all of our continuous data to values between 0 and 1. We'll start by creating a scaler object using `MinMaxScaler`. We'll `fit` it with the training data and then use it to scale our testing data. This is because we want both the training and testing data to be scaled in the same way. Remember, we're essentially pretending we don't know what's in the testing data for now, so we only scale using the training set, then use that same scaling for any new data (i.e. for the testing data).
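One possible approach to Checkpoint 2, sketched with the dummied `train` and `test` DataFrames created above: re-align the test columns to the training columns, so that categories never seen in training become all-zero columns and test-only columns are dropped (the model has never learned a coefficient for them).
```
# align test to the training columns; anything missing is filled with 0
test_aligned = test.reindex(columns=train.columns, fill_value=0)
```
The next cell moves on to scaling the numeric features.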
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
num_cols = df_training.select_dtypes(include = ['float', 'int']).columns
# With few features it is relatively easy to hand code which columns to scale
# but we can also make our lives a bit easier by doing it programmatically
# get a list columns with values <0 and/or >1:
cols_to_scale = []
for i in num_cols:
if (df_training[i].min() < 0 or df_training[i].max() > 1):
cols_to_scale.append(i)
cols_to_scale.remove('wtsurvy') # Don't scale weights
print(cols_to_scale)
# add a '*_scl' version of the column for each of our "columns to scale"
# and replace our "sel_features" list
for c in cols_to_scale:
# create a new column name by adding '_scl' to the end of each column to scale
new_column_name = c+'_scl'
# fit MinMaxScaler to training set column
# reshape because scaler built for 2D arrays
scaler.fit(df_training[c].values.reshape(-1, 1))
# update training and testing datasets with new data
df_training[new_column_name] = scaler.transform(df_training[c].values.reshape(-1, 1))
df_testing[new_column_name] = scaler.transform(df_testing[c].values.reshape(-1, 1))
# now our selection features are all scaled between 0-1
df_training.describe().T
df_training.drop(cols_to_scale,axis = 1, inplace = True)
df_testing.drop(cols_to_scale,axis = 1, inplace = True)
df_training = df_training.fillna(0)
df_testing = df_testing.fillna(0)
df_training.describe()
# get the underlying numpy.array data for use in scikit-learn
X_train = df_training.iloc[:,3:].values
y_train = df_training['label'].values
X_test = df_testing.iloc[:,3:].values
y_test = df_testing['label'].values
###Output
_____no_output_____
###Markdown
Data DistributionLet's check how much data we have, and what the split is between positive (1) and negative (0) labels in our training dataset. It's good to know what the "baseline" is in our dataset, to be able to intelligently evaluate our performance.
###Code
print('Number of rows: {}'.format(df_training.shape[0]))
df_training['label'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Checkpoint 2: ML SetupTry adding a few more features and making sure they are in an appropriate format for the data. All categorical variables should be binary, and all numerical variables should be scaled. Model Understanding and EvaluationIn this phase, we will run the Machine Learning model on our training set. The training set's features will be used to predict the labels. Once our model is created using the training set, we will assess its quality by applying it to the test set and by comparing the *predicted values* to the *actual values* for each record in your testing data set. - **Performance Estimation**: How well will our model do once it is deployed and applied to new data? Running a Machine Learning ModelPython's [`scikit-learn`](http://scikit-learn.org/stable/) is a commonly used, well-documented Python library for machine learning. This library can help you split your data into training and test sets, fit models and use them to predict results on new data, and evaluate your results. We will start with the simplest [`LogisticRegression`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) model and see how well that does. To fit the model, we start by creating a model object, which we call `logit` here. You can think of this model object as containing all of the instructions necessary to fit that model. Then, we use the `.fit` method in order to give it the data, and all of the information about the model fit will be contained in `logit`.
###Code
# Let's fit a model
logit = LogisticRegression(penalty='l1', C=1, solver = 'liblinear')
logit.fit(X_train, y_train, sample_weight = df_training['wtsurvy'])
print(logit)
###Output
_____no_output_____
###Markdown
When we print the model results, we see different parameters we can adjust as we refine the model based on running it against test data (values such as `intercept_scaling`, `max_iter`, `penalty`, and `solver`). Example output: LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, max_iter=100, multi_class='ovr', penalty='l2', random_state=None, solver='liblinear', tol=0.0001, verbose=0) To adjust these parameters, one would alter the call that creates the `LogisticRegression()` model instance, passing it one or more of these parameters with a value other than the default. So, to re-fit the model with `max_iter` of 1000, `intercept_scaling` of 2, and `solver` of "lbfgs" (pulled from thin air as an example), you'd create your model as follows: model = LogisticRegression( max_iter = 1000, intercept_scaling = 2, solver = "lbfgs" ) The basic way to choose values for, or "tune," these parameters is the same as the way you choose a model: fit the model to your training data with a variety of parameters, and see which performs best on the test set (a small sketch of such a parameter sweep follows this cell). An obvious drawback is that you can also *overfit* to your test set; in this case, you can alter your method of cross-validation. Model Evaluation Machine learning models usually do not produce a prediction (0 or 1) directly. Rather, models produce a score between 0 and 1 (that can sometimes be interpreted as a probability), which is basically the model ranking all of the observations from *most likely* to *least likely* to have a label of 1. The 0-1 score is then turned into a 0 or 1 based on a threshold. If you use the sklearn method `.predict()` then the model will select a threshold for you (generally 0.5) - it is almost **never a good idea to let the model choose the threshold for you**. Instead, you should get the actual score and test different threshold values.
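As a small illustration of the parameter sweep described above, here is a sketch only, assuming the `X_train`, `y_train`, `X_test`, `y_test`, and `df_training` objects defined earlier; the grid of `C` values is arbitrary:
```
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score

# C is the inverse of the regularization strength: smaller C = stronger penalty
for C in [0.01, 0.1, 1, 10]:
    candidate = LogisticRegression(penalty='l1', C=C, solver='liblinear')
    candidate.fit(X_train, y_train, sample_weight=df_training['wtsurvy'])
    scores = candidate.predict_proba(X_test)[:, 1]
    print('C={}: precision at threshold 0.7 = {:.3f}'.format(
        C, precision_score(y_test, scores > 0.7)))
```
In practice you would compare candidate models on a validation set (or with cross-validation) rather than the final test set. Below, we return to the scores produced by the `logit` model fit above.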
###Code
# get the prediction scores
y_scores = logit.predict_proba(X_test)[:,1]
###Output
_____no_output_____
###Markdown
Look at the distribution of scores:
###Code
sns.distplot(y_scores, kde=False, rug=False)
df_testing['y_score'] = y_scores
# see our selected features and prediction score
df_testing.head()
###Output
_____no_output_____
###Markdown
Tools like `sklearn` often have a default threshold of 0.5, but a good threshold is selected based on the data, model and the specific problem you are solving. As a trial run, let's set a threshold to the value of 0.7.
###Code
# given the distribution of the scores, what threshold would you set?
selected_threshold = 0.7
# create a list of our predicted outcomes
predicted = y_scores > selected_threshold
# and our actual, or expected, outcomes
expected = y_test
###Output
_____no_output_____
###Markdown
Confusion MatrixOnce we have tuned our scores to 0 or 1 for classification, we create a *confusion matrix*, which has four cells: true negatives, true positives, false negatives, and false positives. Each data point belongs in one of these cells, because it has both a ground truth and a predicted label. If an example was predicted to be negative and is negative, it's a true negative. If an example was predicted to be positive and is positive, it's a true positive. If an example was predicted to be negative and is positive, it's a false negative. If an example was predicted to be positive and is negative, it's a false positive.
###Code
# Using the confusion_matrix function inside sklearn.metrics
conf_matrix = confusion_matrix(expected,predicted)
print(conf_matrix)
###Output
_____no_output_____
###Markdown
The count of true negatives is `conf_matrix[0,0]`, false negatives `conf_matrix[1,0]`, true positives `conf_matrix[1,1]`, and false_positives `conf_matrix[0,1]`. Accuracy is the ratio of the correct predictions (both positive and negative) to all predictions. $$ Accuracy = \frac{TP+TN}{TP+TN+FP+FN} $$
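To connect the formula to the matrix computed above (a quick check, assuming the `conf_matrix` from the previous cell):
###Code
# Accuracy computed directly from the confusion matrix cells
tn, fp, fn, tp = conf_matrix.ravel()
print("Accuracy from the matrix = " + str((tp + tn) / float(tp + tn + fp + fn)))
###Output
_____no_output_____
###Markdown
The same number comes from sklearn's `accuracy_score`: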
###Code
# generate an accuracy score by comparing expected to predicted.
accuracy = accuracy_score(expected, predicted)
print( "Accuracy = " + str( accuracy ) )
df_training['label'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Evaluation metricsWhat do we think about this accuracy? Good? Bad? Two metrics that are often more relevant than overall accuracy are **precision** and **recall**. Precision measures the accuracy of the classifier when it predicts an example to be positive. It is the ratio of correctly predicted positive examples to examples predicted to be positive. $$ Precision = \frac{TP}{TP+FP}$$Recall measures the accuracy of the classifier to find positive examples in the data. $$ Recall = \frac{TP}{TP+FN} $$By selecting different thresholds we can vary and tune the precision and recall of a given classifier. A conservative classifier (threshold 0.99) will classify a case as 1 only when it is *very sure*, leading to high precision. On the other end of the spectrum, a low threshold (e.g. 0.01) will lead to higher recall.
###Code
# precision_score and recall_score are from sklearn.metrics
precision = precision_score(expected, predicted)
recall = recall_score(expected, predicted)
print( "Precision = " + str( precision ) )
print( "Recall= " + str(recall))
###Output
_____no_output_____
###Markdown
If we care about our whole precision-recall space, we can optimize for a metric known as the **area under the curve (AUC-PR)**, which is the area under the precision-recall curve. The maximum AUC-PR is 1. Checkpoint 3: Evaluation with Different Thresholds Above, we set the threshold at an arbitrary value. Try a few different thresholds. What seems like a good threshold value based on precision and recall? Would you like to know any other information before making this decision? **Discuss with your group**. Plotting the Precision-Recall CurveIn order to see the tradeoff between precision and recall for different thresholds, we can use a visualization that shows both on the same graph. The function `plot_precision_recall` below does this by using the `precision_recall_curve()` function to get the values we want to plot and putting them together. We also print out the AUC for good measure.
###Code
def plot_precision_recall(y_true,y_score):
"""
Plot a precision recall curve
Parameters
----------
y_true: ls
ground truth labels
y_score: ls
score output from model
"""
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true,y_score)
plt.plot(recall_curve, precision_curve)
plt.xlabel('Recall')
plt.ylabel('Precision')
auc_val = auc(recall_curve,precision_curve)
print('AUC-PR: {0:1f}'.format(auc_val))
plt.show()
plt.clf()
plot_precision_recall(expected, y_scores)
###Output
_____no_output_____
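###Markdown
As a quick aid for Checkpoint 3 above, here is a minimal sketch that sweeps a few candidate thresholds and prints precision and recall for each; it assumes the `expected` labels and `y_scores` from the cells above, and the threshold values are arbitrary.
###Code
# Hedged sketch: compare several thresholds (values chosen only for illustration)
from sklearn.metrics import precision_score, recall_score

for thresh in [0.3, 0.5, 0.6, 0.7, 0.8, 0.9]:
    preds = y_scores > thresh
    print('threshold {:.2f}: precision {:.3f}, recall {:.3f}'.format(
        thresh, precision_score(expected, preds), recall_score(expected, preds)))
###Output
_____no_output_____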
###Markdown
Precision and Recall at k%If we only care about a specific part of the precision-recall curve, we can focus on more fine-grained metrics. For instance, say there is a special program for those most likely to need assistance within the next year, but that it can only cover *1% of our test set*. In that case, we would want to prioritize the 1% who are *most likely* to need assistance within the next year, and it wouldn't matter too much how accurate we were on the overall data.Let's say that, out of the approximately 300,000 observations, we can intervene on 1% of them, or the "top" 3000 in a year (where "top" means highest likelihood of needing intervention in the next year). We can then focus on optimizing our **precision at 1%**.
###Code
def plot_precision_recall_n(y_true, y_prob, model_name):
"""
y_true: ls
ls of ground truth labels
y_prob: ls
ls of predic proba from model
model_name: str
str of model name (e.g, LR_123)
"""
from sklearn.metrics import precision_recall_curve
y_score = y_prob
precision_curve, recall_curve, pr_thresholds = precision_recall_curve(y_true, y_score)
precision_curve = precision_curve[:-1]
recall_curve = recall_curve[:-1]
pct_above_per_thresh = []
number_scored = len(y_score)
for value in pr_thresholds:
num_above_thresh = len(y_score[y_score>=value])
pct_above_thresh = num_above_thresh / float(number_scored)
pct_above_per_thresh.append(pct_above_thresh)
pct_above_per_thresh = np.array(pct_above_per_thresh)
plt.clf()
fig, ax1 = plt.subplots()
ax1.plot(pct_above_per_thresh, precision_curve, 'b')
ax1.set_xlabel('percent of population')
ax1.set_ylabel('precision', color='b')
ax1.set_ylim(0,1.05)
ax2 = ax1.twinx()
ax2.plot(pct_above_per_thresh, recall_curve, 'r')
ax2.set_ylabel('recall', color='r')
ax2.set_ylim(0,1.05)
name = model_name
plt.title(name)
plt.show()
plt.clf()
def precision_at_k(y_true, y_scores,k):
threshold = np.sort(y_scores)[::-1][int(k*len(y_scores))]
y_pred = np.asarray([1 if i > threshold else 0 for i in y_scores ])
return precision_score(y_true, y_pred)
plot_precision_recall_n(expected,y_scores, 'LR')
p_at_10 = precision_at_k(expected,y_scores, 0.1)
print('Precision at 10%: {:.3f}'.format(p_at_10))
###Output
_____no_output_____
###Markdown
Feature UnderstandingNow that we have evaluated our model overall, let's look at the coefficients for each feature.
###Code
print("The coefficients for each of the features are ")
list(zip(df_training.columns[3:], logit.coef_[0]))
###Output
_____no_output_____
###Markdown
Assessing Model Against BaselinesIt is important to check our model against a reasonable **baseline** to know how well our model is doing. > Without any context, it's hard to tell what a good precision, recall, or accuracy is. It's important to establish what baselines we're comparing against. A good place to start is checking against a *random* baseline, assigning every example a label (positive or negative) completely at random. We can use the `random.uniform` function to generate a random score between 0 and 1 for each example, then use those scores as our "predicted" values to see how well we would have done if we had predicted randomly.
###Code
# We will choose to predict on 10% of the population
percent_of_pop = 0.1
# Use random.uniform from numpy to generate an array of randomly generated 0 and 1 values of equal length to the test set.
random_score = np.random.uniform(0,1, len(y_test))
# Calculate precision using random predictions
random_p_at_selected = precision_at_k(expected,random_score, percent_of_pop)
print(random_p_at_selected)
###Output
_____no_output_____
###Markdown
Another good practice is checking against an "expert" or rule of thumb baseline. > This is typically a very simple heuristic. What if we predicted everyone who graduated in certain fields to go into academia? You want to make sure your model outperforms these kinds of basic heuristics. Another good baseline to compare against is the "all label" (label is always 1). Our "model" in this case is that we always predict 1 and see how our measures perform with that.
###Code
all_predicted = np.array([1 for i in range(df_testing.shape[0])])
all_precision = precision_score(expected, all_predicted)
print(all_precision)
model_precision = precision_at_k(expected, y_scores, percent_of_pop)
sns.set_style("white")
sns.set_context("poster", font_scale=1.25, rc={"lines.linewidth":1.25, "lines.markersize":4})
fig, ax = plt.subplots(1, figsize=(10,8))
sns.barplot(['Random','All Academia', 'Our Model'],
# [random_p_at_1, none_precision, expert_precision, max_p_at_k],
[random_p_at_selected, all_precision, model_precision],
# palette=['#6F777D','#6F777D','#6F777D','#800000'])
palette=['#6F777D','#6F777D','#800000'])
sns.despine()
plt.ylim(0,1)
plt.ylabel('precision at {}%'.format(percent_of_pop*100));
###Output
_____no_output_____
###Markdown
Checkpoint 4: Running another modelLet's try running a different model, using decision trees this time. The `sklearn` package actually makes it quite easy to run alternative models. All you need to do is create that model object, then use the `fit` method using your training data, and you're all set! That is, we can use the code below:
###Code
# packages to display a tree in Jupyter notebooks
from io import StringIO  # sklearn.externals.six has been removed in newer sklearn; io.StringIO works the same here
from IPython.display import Image
from sklearn.tree import export_graphviz
import graphviz as gv
import pydotplus
tree = DecisionTreeClassifier(max_depth = 3)
tree.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Just like we fit our logistic regression model above and used that model object to get predicted scores, we can do the same with this Decision Tree Classifier model object. That is, we can get our predicted scores using this model using the following code:
###Code
tree_predicted = tree.predict_proba(X_test)[:,1]  # probability of the positive class (column 1), as with the logit model above
tree_predicted
# visualize the tree
# object to hold the graphviz data
dot_data = StringIO()
# create the visualization
export_graphviz(tree, out_file=dot_data, filled=True,
rounded=True, special_characters=True,
feature_names=df_training.iloc[:,3:].columns.values)
# convert to a graph from the data
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
# you can print out the graph to a pdf with this line:
# graph.write_pdf('./output/model_eval_tree1.pdf')
# or view it directly in notebook
Image(graph.create_png())
###Output
_____no_output_____
###Markdown
Using the predicted scores from the tree model, try calculating the precision and recall for varying values of the threshold. Use the functions created above to create precision-recall graphs. Do you think the tree model is performing better or worse than the logistic regression model? Machine Learning PipelineWhen working on machine learning projects, it is a good idea to structure your code as a modular **pipeline**, which contains all of the steps of your analysis, from the original data source to the results that you report, along with documentation. This has many advantages:- **Reproducibility**. It's important that your work be reproducible. This means that someone else should be ableto see what you did, follow the exact same process, and come up with the exact same results. It also means thatsomeone else can follow the steps you took and see what decisions you made, whether that person is a collaborator, a reviewer for a journal, or the agency you are working with. - **Ease of model evaluation and comparison**.- **Ability to make changes.** If you receive new data and want to go through the process again, or if there are updates to the data you used, you can easily substitute new data and reproduce the process without starting from scratch. Survey of AlgorithmsWe have only scratched the surface of what we can do with our model. We've only tried two classifiers (Logistic Regression and a Decision Tree), and there are plenty more classification algorithms in `sklearn`. Let's try them!
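Before running the survey, here is a minimal illustration of the pipeline idea described above using `sklearn.pipeline.Pipeline`; the scaler and classifier choices are arbitrary placeholders, and it assumes dense `X_train`, `y_train`, `X_test`, and `y_test` from the earlier cells.
###Code
# Hedged sketch of a modular sklearn pipeline (illustrative choices only)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression(solver='liblinear'))])
pipe.fit(X_train, y_train)
print('pipeline test-set accuracy:', pipe.score(X_test, y_test))
###Output
_____no_output_____
###Markdown
Now, on to the survey of classifiers.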
###Code
clfs = {'RF': RandomForestClassifier(n_estimators=1000, max_depth = 5, n_jobs=-1),
'ET': ExtraTreesClassifier(n_estimators=1000, n_jobs=-1),
'LR': LogisticRegression(penalty='l1', C=1e5, solver='liblinear'),
'SGD':SGDClassifier(loss='log'),
'GB': GradientBoostingClassifier(learning_rate=0.05, subsample=0.5, max_depth=6, random_state=17
, n_estimators=10),
'NB': GaussianNB(),
'DT': DecisionTreeClassifier(max_depth=10, min_samples_split=10)
}
sel_clfs = ['RF', 'ET', 'LR', 'SGD', 'GB', 'NB', 'DT']
# Get the selected features
sel_features = df_training.columns[3:]
max_p_at_k = 0
df_results = pd.DataFrame()
for clfNM in sel_clfs:
clf = clfs[clfNM]
clf.fit( X_train, y_train )
print(clf)
y_score = clf.predict_proba(X_test)[:,1]
predicted = np.array(y_score)
expected = np.array(y_test)
plot_precision_recall_n(expected,predicted, clfNM)
p_at_1 = precision_at_k(expected,y_score, 0.01)
p_at_5 = precision_at_k(expected,y_score,0.05)
p_at_10 = precision_at_k(expected,y_score,0.10)
p_at_30 = precision_at_k(expected,y_score,0.30)
fpr, tpr, thresholds = roc_curve(expected,y_score)
auc_val = auc(fpr,tpr)
df_results = df_results.append([{
'clfNM':clfNM,
'p_at_1':p_at_1,
'p_at_5':p_at_5,
'p_at_10':p_at_10,
'auc':auc_val,
'clf': clf
}])
#feature importances
if hasattr(clf, 'coef_'):
feature_import = dict(
zip(sel_features, clf.coef_.ravel()))
elif hasattr(clf, 'feature_importances_'):
feature_import = dict(
zip(sel_features, clf.feature_importances_))
print("FEATURE IMPORTANCES")
print(feature_import)
if max_p_at_k < p_at_5:
max_p_at_k = p_at_5
print('Precision at 5%: {:.2f}'.format(p_at_5))
# df_results.to_csv('output/modelrun.csv')
###Output
_____no_output_____
###Markdown
Let's view the best model at 5%
###Code
sns.set_style("white")
sns.set_context("poster", font_scale=1.25, rc={"lines.linewidth":1.25, "lines.markersize":8})
fig, ax = plt.subplots(1, figsize=(10,6))
sns.barplot(['Random','All Academia', 'Best Model'],
# [random_p_at_1, none_precision, expert_precision, max_p_at_k],
[random_p_at_selected, all_precision, max_p_at_k],
# palette=['#6F777D','#6F777D','#6F777D','#800000'])
palette=['#6F777D','#6F777D','#800000'])
sns.despine()
plt.ylim(0,1)
plt.ylabel('precision at 5%')
# view all saved evaluation metrics
df_results
###Output
_____no_output_____ |
Assignments/Assignment1/Assignment1.ipynb | ###Markdown
Assignment 1: Introduction to Python
###Code
from numpy import sqrt
###Output
_____no_output_____
###Markdown
From Computational Physics by NewmanExercise 2.1:A ball is dropped from a tower of height $h$ with an initial velocity of zero. Write a function that takes the height of the tower in meters as an argument and then calculates and returns the time it takes until the ball hits the ground (ignoring air resistance). Use $g = 10\ m/s^2$ You may find the following kinematic equation to be helpful:$$ x_f = x_0 + v_0 t + \frac{1}{2} a t^2 $$
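Setting $x_f = 0$, $x_0 = h$, $v_0 = 0$, and $a = -g$ in the equation above gives $0 = h - \frac{1}{2} g t^2$, so solving for the fall time yields$$ t = \sqrt{\frac{2h}{g}}, $$which, with $g = 10\ m/s^2$, is what the function below should return.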
###Code
def time_to_fall(h):
"""
Calculates the amount of time it takes a ball to fall from a tower of height h with intial velocity zero.
Parameters:
h (float) - the height of the tower in meters
Returns:
(float) time in seconds
"""
return sqrt((2*h)/10)
###Output
_____no_output_____
###Markdown
Below I've added `assert` statements. These statements are useful ways to test functionality. They are often referred to as unit tests because they test a single unit or function of code. These statements will produce an `AssertionError` if your function does not produce the expected result. Otherwise, they will run silently and produce no result.
###Code
assert(time_to_fall(0) == 0)
assert(time_to_fall(20) == 2)
###Output
_____no_output_____
###Markdown
ADD ONE MORE ASSERT STATEMENT BELOW
###Code
#TO DO: Add an assert statement in this cell!
assert(time_to_fall(500) == 10)
###Output
_____no_output_____ |
part_3_gain_customer_insights_from_Amazon_Aurora.ipynb | ###Markdown
Gain customer insights, Part 3. Inference from Amazon AuroraNow that we've created the ML model and an endpoint to serve up the inferences, we'd like to connect that endpoint to Amazon Aurora. That way we can request a prediction on whether this customer will churn at the same time that we retrieve information about this customer.In addition, we'll call Amazon Comprehend to Amazon Aurora. That way, we can also request an assessment of the customer's sentiment when they send a message to customer service.With both of these pieces of information in hand, we can then make an on-the-fly decision about whether to offer the customer an incentive program of some kind. Of course, the details of that incentive and the rules on when to offer it must come from Marketing.---- Table of contents1. [Connect to Aurora Database](Connect-to-Aurora-Database)2. [Customer sentiment: Query Amazon Comprehend from Amazon Aurora](Customer-sentiment:-Query-Amazon-Comprehend-from-Amazon-Aurora)3. [Prepare the database for inference](Prepare-the-database-for-inference)4. [Query the Amazon SageMaker endpoint from Amazon Aurora](Query-the-Amazon-SageMaker-endpoint-from-Amazon-Aurora)5. [Ready, Set, Go!](Ready,-Set,-Go!) Note that for simplicity we're using a predefined Amazon SageMaker endpoint_name here. The AWS CloudFormation template created this endpoint (together with an endpoint configuration), added it to an IAM role (this role authorizes the users of Aurora database to access AWS ML services), and assigned the Aurora Database cluster parameter group value 'aws_default_sagemaker_role' to this IAM role. This combination of settings gives Aurora permission to call the Amazon SageMaker endpoint.If you'd like to read further on this setup, documentation on how to create the policy and a role can be found [here](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-ml.htmlaurora-ml-sql-privileges). Details on how to create a custom database parameter group are described [here](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Authorizing.IAM.AddRoleToDBCluster.html). Connect to Aurora DatabaseIf the Python module `mysql.connector` is not installed, install it with pip.
###Code
import sys
# upgrade pip
!{sys.executable} -m pip install --upgrade pip
!{sys.executable} -m pip install mysql.connector
import mysql.connector as mysql
###Output
_____no_output_____
###Markdown
For this use case, we've created the S3 bucket and appropriate IAM roles for you during the launch of the AWS CloudFormation template. The bucket name was saved in a parameter file called "cloudformation_values.py" during creation of the notebook instance, along with the DB secret name and ML endpoint name.
###Code
# import installed module
import mysql.connector as mysql
import os
import pandas as pd
import numpy as np
import boto3
import json
import cloudformation_values as cfvalues
# get the session information
session = boto3.Session()
# extract the region and account id
region = cfvalues.REGION
# AWS Secrets stores our database credentials.
db_secret_name = cfvalues.DBSECRET
# Get the secret from AWS Secrets manager. Extract user, password, host.
from utilities import get_secret
get_secret_value_response = get_secret(db_secret_name, region)
creds = json.loads(get_secret_value_response['SecretString'])
db_user = creds['username']
db_password = creds['password']
# Writer endpoint
db_host = creds['host']
# This is the Amazon SageMaker preset endpoint_name created by the Cloud Formation
endpoint_name = cfvalues.ENDPOINT
print(endpoint_name)
# Define the database and table names
database_name = "telecom_customer_churn"
churn_table = "customers"
customer_msgs_table = "customer_message"
customer_churn_results = "customer_churn_results"
###Output
_____no_output_____
###Markdown
Connect to the database using the credentials retrieved above.
###Code
# create connection to the database
cnx = mysql.connect(user = db_user,
password = db_password,
host = db_host,
database = database_name)
dbcursor = cnx.cursor(buffered = True)
###Output
_____no_output_____
###Markdown
Customer sentiment: Query Amazon Comprehend from Amazon AuroraLet's first test that we can call Amazon Comprehend from our SQL query, and return the sentiment for a customer message. We'll use the messages we inserted into our "customer call history" table in the part 1 to test this capability.
###Code
sql = """SELECT message,
aws_comprehend_detect_sentiment(message, 'en') AS sentiment,
aws_comprehend_detect_sentiment_confidence(message, 'en') AS confidence
FROM {};""".format(customer_msgs_table)
dbcursor.execute(sql)
dbcursor.fetchall()
###Output
_____no_output_____
###Markdown
Here we can see the customer's sentiment, based on the text of their customer service contact text. We have an overall assessment, such as 'POSITIVE', and a numeric confidence. We can use the assessment and the score to make a decision on what to offer the customer. Prepare the database for inference Now we need to set up the Aurora database to call the Amazon SageMaker endpoint and pass the data it needs to return an inference. Our original data contained numeric variables as well as several categorical variables (such as `area_code` and `int_plan`) which are needed for prediction. During creation of the ML model, the categorical variables were converted to one-hot vectors. In the final model, we used only 1 of these values: `int_plan_no`. There are two ways to approach this problem:1. Add data transformation code to the endpoint. 2. Create functions in the SQL database that will represent one-hot encoded variables.Here we will demonstrate the second option._**Below, we've listed the features used by our final model. If this list has changed in content or in order for your run, you will need to modify the steps below so that they match your list.**_cols_used = ['acc_length', 'vmail_msg', 'day_mins', 'day_calls', 'eve_mins', 'night_mins', 'night_calls', 'int_calls', 'int_charge', 'cust_service_calls', 'int_plan_no']
###Code
cols_used = ['acc_length', 'vmail_msg', 'day_mins', 'day_calls', 'eve_mins', 'night_mins',
'night_calls', 'int_calls', 'int_charge', 'cust_service_calls', 'int_plan_no']
dbcursor.execute("DESCRIBE {churn_table};".format(churn_table=churn_table))
dbcursor.fetchall()
###Output
_____no_output_____
###Markdown
Create functions to perform one-hot encoding:
###Code
# one-hot encoding for int_plan
dbcursor.execute("DROP FUNCTION IF EXISTS IntPlanOneHot;")
sql = """CREATE FUNCTION IntPlanOneHot(int_plan varchar(2048))
RETURNS INT
BEGIN
DECLARE int_plan_no INT;
IF int_plan = 'no' THEN SET int_plan_no = 1;
ELSE SET int_plan_no = 0;
END IF;
RETURN int_plan_no;
END
;"""
dbcursor.execute(sql)
# one-hot encoding for area_code to generate area_code_510
# While this function is not used for this model run, we provide it as an additional demonstration,
# and in case a similar feature is used in a later model run
dbcursor.execute("DROP FUNCTION IF EXISTS AreaCode510;")
sql = """CREATE FUNCTION AreaCode510(area_code bigint(20))
RETURNS INT
BEGIN
DECLARE area_code_510 INT;
IF area_code = 510 THEN SET area_code_510 = 1;
ELSE SET area_code_510 = 0;
END IF;
RETURN area_code_510;
END
;"""
dbcursor.execute(sql)
# one-hot encoding for state to generate state_TX
# While this function is not used for this model run, we provide it as an additional demonstration,
# and in case a similar feature is used in a later model run
dbcursor.execute("DROP FUNCTION IF EXISTS stateTX;")
sql = """CREATE FUNCTION stateTX(state varchar(2048))
RETURNS INT
BEGIN
DECLARE state_TX INT;
IF state = 'TX' THEN SET state_TX = 1;
ELSE SET state_TX = 0;
END IF;
RETURN state_TX;
END
;"""
dbcursor.execute(sql)
###Output
_____no_output_____
###Markdown
Quick demonstration that the functions have been created and work correctly:
###Code
dbcursor.execute("""SELECT IntPlanOneHot(int_plan), AreaCode510(area_code), stateTX(state),
int_plan, area_code, state FROM {} LIMIT 5;""".format(churn_table))
dbcursor.fetchall()
###Output
_____no_output_____
###Markdown
Query the Amazon SageMaker endpoint from Amazon AuroraWe need to create a function that passes all the information needed by the Amazon SageMaker endpoint as described [here](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/mysql-ml.htmlaurora-ml-sql-privileges) (see section "Enabling Aurora Machine Learning"). Here we will create a function `will_churn` that will use the variables needed by the model. Notice that we are now including the columns created by one-hot encoding in the previous section. The endpoint name was declared in the beginning of the notebook.*If the list of columns printed below has changed in content or in order for your run, you will need to modify the steps below so that they match your list.*
###Code
print(cols_used)
# REMEMBER! to modify the columns in the SQL below to match the cols_used (if it doesn't already)
dbcursor.execute("DROP FUNCTION IF EXISTS will_churn;")
sql = """CREATE FUNCTION will_churn (
acc_length bigint(20),
vmail_msg bigint(20),
day_mins double,
day_calls bigint(20),
eve_mins double,
night_mins double,
night_calls bigint(20),
int_calls bigint(20),
int_charge double,
cust_service_calls bigint(20),
int_plan_no int)
RETURNS float
alias aws_sagemaker_invoke_endpoint
endpoint name '{endpoint}' ; """.format(endpoint = endpoint_name)
dbcursor.execute(sql)
cnx.commit()
###Output
_____no_output_____
###Markdown
Now we can call the function with data from our table and ask for the results of the prediction.
###Code
# To make it easier to carry across SQL without error, we'll capture the parameters in a function here.
will_churn_fn = """will_churn(acc_length, vmail_msg, day_mins, day_calls, eve_mins, night_mins,
night_calls, int_calls, int_charge, cust_service_calls, IntPlanOneHot(int_plan) )"""
sql = """SELECT state, area_code, phone, round(day_charge + eve_charge + night_charge + int_charge, 2),
int_plan, cust_service_calls,
round({will_churn_fn},3) will_churn FROM {customers}
LIMIT 5;""".format(will_churn_fn = will_churn_fn, customers = churn_table)
dbcursor.execute(sql)
dbcursor.fetchall()
###Output
_____no_output_____
###Markdown
If the previous command returns a list of entries, then the request from Amazon Aurora for predictions from the model worked!The last value in the tuple is the model's prediction score for whether this customer will churn.Currently the model returns a prediction value before assigning a cutoff (since we deployed it to return such a value). We could choose to convert this value to a Boolean True or False either via a modification to the SageMaker endpoint, or via an additional transformation on the DB side. However, in this case we'll leave it, so at some later time we can explore these values in greater detail. For example, there is likely a large range of "maybe churn", between "will" and will not churn". From a Marketing perspective, these are the customers we'd ideally likely to identify and target.Now let's add sentiment detection into the SQL request.
###Code
dbcursor.execute("""SELECT day_mins, cust_service_calls, int_plan,
round({will_churn_fn},3) will_churn,
aws_comprehend_detect_sentiment('You morons! You charged me extra again!', 'en') AS sentiment,
round(aws_comprehend_detect_sentiment_confidence('You morons! You charged me extra again!', 'en'),3)
AS confidence
FROM customers
WHERE area_code=415 AND phone='358-1921';""".format(will_churn_fn = will_churn_fn))
dbcursor.fetchall()
###Output
_____no_output_____
###Markdown
The values returned are:* day minutes* number of customer service calls* whether they have an International plan* the prediction score for whether this customer will churn, returned from the Amazon SageMaker model* the overall sentiment of the message, from Amazon Comprehend* the confidence in the message sentiment, from Amazon Comprehend Ready, Set, Go!Now we're finally ready to put all the pieces together in our campaign to prevent customer churn!We've received our first round of proposed incentives from Marketing. We've coded their rules into a function, suggest_incentive, shown below. After the function, we'll send it some test requests.
###Code
# Query the customers table, doing the one-hot encoding on the fly AND calling Comprehend on the incoming message
import random
import json
def suggest_incentive(day_mins, cust_service_calls, int_plan_no, will_churn, sentiment, confidence):
# Returns a suggestion of what to offer as a rebate to this customer, based on their churn characteristics and this interaction
if sentiment == 'POSITIVE' and confidence>0.5:
if will_churn < 0.5: # Basically happy customer
return "Sentiment POSITIVE and will_churn<0.5: No incentive."
else: # Good interaction, but at-risk-of churn; let's offer something
return "Sentiment POSITIVE and will_churn>0.5: $5 credit"
elif sentiment == 'NEGATIVE' and confidence>0.7:
if will_churn > 0.8: # oh-oh! High odds! Pull out all stops
return "Sentiment NEGATIVE and will_churn>0.8: $25 credit"
elif will_churn > 0.4: # Not so bad, but still need to offer something. But what?
if random.choice([0,1]) == 1:
return "Will_churn confidence > 0.4, experiment: $15 credit"
else:
return "Will_churn confidence > 0.4, experiment: $5 credit"
else: # Pretty happy customer, we'll trust it's just a blip
return "Will_churn confidence <= 0.4: No incentive."
elif cust_service_calls > 2 and not int_plan_no:
return "cust_service_calls > 4 and not int_plan_no: 1000 free minutes of international calls"
else:
return "NOT (cust_service_calls > 4 and not int_plan_no): No incentive."
return "No incentive."
def assess_and_recommend_incentive(area_code, phone, message):
sql = """SELECT day_mins, cust_service_calls, IntPlanOneHot(int_plan) as int_plan_no,
round({will_churn_fn},3) as will_churn,
aws_comprehend_detect_sentiment('{message}', 'en') AS sentiment,
round(aws_comprehend_detect_sentiment_confidence('{message}', 'en'),3)
AS confidence
FROM {customers}
WHERE area_code={area_code}
AND phone='{phone}';""".format(will_churn_fn = will_churn_fn,
customers = churn_table,
message = message,
area_code = area_code,
phone = phone)
dbcursor.execute(sql)
result = dbcursor.fetchone()
incentive = suggest_incentive(result[0], result[1], result[2], result[3], result[4].decode(), result[5])
ret = {"area_code": area_code,
"phone": phone,
"service_calls": result[1],
"international_plan": 1 - result[2],
"churn_prob": result[3],
"msg_sentiment": result[4].decode(),
"msg_confidence": result[5],
"incentive": incentive
}
return ret
print(assess_and_recommend_incentive(408, '375-9999' , "You morons! You charged me extra again!"), "\n")
print(assess_and_recommend_incentive(415, '358-1921', "How do I dial Morocco?"), "\n")
print(assess_and_recommend_incentive(415, '329-6603', "Thank you very much for resolving the issues with my account"), "\n")
###Output
_____no_output_____ |
wp6/analyse/sl_enhancer_analyses_pancreas_PANC-1.ipynb | ###Markdown
Analysis: comparison between pancreas cells and PANC-1 cells (cancer cell line of pancreas) PANC-1PANC-1 is a human pancreatic cancer cell line isolated from a pancreatic carcinoma of ductal cell origin. PANC-1 was derived from the tissue of a 56-year-old male. The cells can metastasize but have poor differentiation abilities. PANC-1 cells take 52 hours to double in population, have a modal chromosome number of 63, and show G6PD (deficiency of the enzyme glucose-6-phosphate dehydrogenase in humans, caused by a mutation of the G6PD gene on the X chromosome) of the slow mobility type. PANC-1 cells are known to have an epithelial morphology.
###Code
from tfcomb import CombObj
genome_path="../testdaten/hg19_masked.fa"
motif_path="../testdaten/HOCOMOCOv11_HUMAN_motifs.txt"
result_path="./results/"
###Output
_____no_output_____
###Markdown
Loading the saved market basket analysis objects (complete TF results) for pancreas cells and PANC-1 cells from their pkl files, and storing the results in CombObj objects.
###Code
pancreas_object= CombObj().from_pickle(f"{result_path}Pancreas_enhancers_complete.pkl")
pancreas_object.prefix = "pancreas"
PANC1_object = CombObj().from_pickle(f"{result_path}PANC-1_enhancers_complete.pkl")
PANC1_object.prefix = "PANC-1"
###Output
_____no_output_____
###Markdown
Showing the TF rules found in each cell line
###Code
print(f"pancreas: {pancreas_object}")
print(f"PANC1: {PANC1_object}")
###Output
pancreas: <CombObj: 95624 TFBS (401 unique names) | Market basket analysis: 157359 rules>
PANC1: <CombObj: 118977 TFBS (401 unique names) | Market basket analysis: 158469 rules>
###Markdown
Comparing the TF rules of the two cell line objects
###Code
compare_objpancreas_PANC1 = pancreas_object.compare(PANC1_object)
###Output
INFO: Calculating foldchange for contrast: pancreas / PANC-1
INFO: The calculated log2fc's are found in the rules table (<DiffCombObj>.rules)
###Markdown
Results Differential analysis of pancreas cells and PANC-1 cells The results of the differential analysis are found in compare_objpancreas_PANC1.rules. The table shows the TF rules whose occurrence differs most strongly between the two cell lines. Duplicates in the results are removed with simplify_rules.
###Code
compare_objpancreas_PANC1.simplify_rules()
compare_objpancreas_PANC1.rules
compare_objpancreas_PANC1.plot_heatmap()
selectionpancreas_PANC1 = compare_objpancreas_PANC1.select_rules()
selectionpancreas_PANC1.plot_network()
selectionpancreas_PANC1.rules.head(10)
#selectionpancreas_PANC1.rules.tail(10)
###Output
_____no_output_____ |
examples/tutorials/translations/português/Parte 12 - Treinar uma Rede Neural criptografada com dados criptografados.ipynb | ###Markdown
Part 12: Train an Encrypted Neural Network on Encrypted DataIn this tutorial, we will use all the techniques we have learned so far to train a neural network (and run prediction) while both the model and the data are encrypted.In particular, we will introduce our own custom Autograd algorithm that works on encrypted computations.Authors:- Andrew Trask - Twitter: [@iamtrask](https://twitter.com/iamtrask)- Jason Paumier - Github: [@Jasopaum](https://github.com/Jasopaum)- Théo Ryffel - Twitter: [@theoryffel](https://twitter.com/theoryffel)Translation:- Marcus Costa - Twitter: [@marcustpv](https://twitter.com/marcustpv) Step 1: Create workers and sample data
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import syft as sy
# Set up the initial configuration
hook = sy.TorchHook(torch)
alice = sy.VirtualWorker(id="alice", hook=hook)
bob = sy.VirtualWorker(id="bob", hook=hook)
james = sy.VirtualWorker(id="james", hook=hook)
# Toy dataset
data = torch.tensor([[0,0],[0,1],[1,0],[1,1.]])
target = torch.tensor([[0],[0],[1],[1.]])
# Toy model
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(2, 2)
self.fc2 = nn.Linear(2, 1)
def forward(self, x):
x = self.fc1(x)
x = F.relu(x)
x = self.fc2(x)
return x
model = Net()
###Output
_____no_output_____
###Markdown
Step 2: Encrypt the model and the dataEncryption here comes in two steps. Since Secure Multi-Party Computation only works on integers, in order to operate on decimal numbers (such as weights and activations) we need to encode all of our numbers using Fixed Precision, which gives us several bits of decimal precision. We do this with the call .fix_precision().We can then call .share() as in the other demonstrations, which encrypts all the values by secret-sharing them between Alice and Bob. Note that we also set requires_grad to True, which adds a special autograd method for encrypted data. Indeed, since Secure Multi-Party Computation does not work on floating-point values, we cannot use PyTorch's autograd. Therefore, we need to add a special node called AutogradTensor that computes the gradient graph for backpropagation. You can print any of these elements to check that it includes an AutogradTensor.
###Code
# We encrypt everything
data = data.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
target = target.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
model = model.fix_precision().share(bob, alice, crypto_provider=james, requires_grad=True)
print(data)
###Output
_____no_output_____
###Markdown
Step 3: TrainingAnd now we can train using basic tensor operations
###Code
opt = optim.SGD(params=model.parameters(),lr=0.1).fix_precision()
for iter in range(20):
# 1) Erase any previous gradient values (if they exist)
opt.zero_grad()
# 2) Make a prediction
pred = model(data)
# 3) Compute the loss
loss = ((pred - target)**2).sum()
# 4) Figure out which weights are increasing the loss
loss.backward()
# 5) Update those weights
opt.step()
# 6) Print the progress
print(loss.get().float_precision())
###Output
_____no_output_____ |
2_Delay_Airlines/.ipynb_checkpoints/Delayed_Airlines_Offline-checkpoint.ipynb | ###Markdown
IntroductionHello everyone,This notebook is an assignment from my CBD Robotics internship to apply what I have learned. It entails two main sections.***Cleaning data***, which includes dealing with missing data, outliers, scaling, and PCA.***Building and tuning classification models*** to get the best predictions.
###Code
import numpy as np
import pandas as pd
import scipy
import random
random.seed(10)
np.random.seed(11)
pd.set_option('display.max_columns', 500)
from scipy import stats
from scipy.stats import norm
import missingno as msno
import datetime
from pandas_profiling import ProfileReport
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder, Normalizer, MinMaxScaler
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.model_selection import train_test_split,cross_val_score, GridSearchCV, validation_curve, RandomizedSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, NMF
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression, Ridge, RidgeCV
# Ploting libs
from plotly.offline import iplot, plot
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import plotly.express as px
import plotly.figure_factory as ff
import plotly.io as pio
#pio.renderers.default = "notebook"
# As after installing vscode, renderer changed to vscode,
# which made graphs no more showed in jupyter.
from yellowbrick.regressor import ResidualsPlot
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
sns.set_palette('RdBu')
from sklearn.feature_selection import SelectKBest, f_classif, chi2, RFE, RFECV
from sklearn.metrics import roc_curve, auc, classification_report, confusion_matrix
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB,BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import xgboost as xgb
import lightgbm
from lightgbm import LGBMClassifier
###Output
_____no_output_____
###Markdown
Take a look at the Dataset
###Code
df = pd.read_csv('~/Documents/0_Delay_Airlines/DelayedFlights.csv') #, nrows=20000)
print('Observations : ', df.shape[0])
print('Features -- excluding the target: ', df.shape[1] - 1)
df.info()
df.head(15)
###Output
_____no_output_____
###Markdown
Comments* ***To drop***: Unnamed and Year. Unnamed column is a redundant index, and Year is constant.* ***The Target***: ArrDelay.
###Code
df.drop(['Unnamed: 0', 'Year'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Choosing Columns to be Features Not all of the columns in the dataset are suitable to be features.
###Code
df.drop(['DepTime', 'ArrTime',
'ActualElapsedTime', 'CRSElapsedTime',
'AirTime', 'DepDelay',
'TaxiIn', 'TaxiOut',
'CarrierDelay', 'WeatherDelay', 'NASDelay', 'SecurityDelay', 'LateAircraftDelay'], axis=1, inplace=True)
df.columns
###Output
_____no_output_____
###Markdown
Define: 'Target' -- Binary Encode of ArrDelay
###Code
# Binary Encode Function
## Threshold = 30 mins
def above_30_DELAYED(num):
if np.isnan(num):
return
elif num < 30:
return 0
elif num >= 30:
return 1
df['Target'] = df.ArrDelay.apply(above_30_DELAYED)
## Diverted == 1 then Target = 1
df.Target[df.Diverted==1] = 1
## Cancelled == 1 then Target = 1
df.Target[df.Cancelled==1] = 1
df.Target.value_counts().plot(kind='bar')
df.drop('ArrDelay', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Missing Data
###Code
# A Brief of Missing data
total_miss = df.isnull().sum().sort_values(ascending=False)
percent = total_miss / df.shape[0]
table = pd.concat([total_miss, percent], axis=1, keys=['Numbers', 'Percent'])
print(table.head(8))
# TailNum: drop missing
df.dropna(axis=0, subset=['TailNum'], inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Descriptive Statistic
###Code
# Numeric
df.describe(percentiles=[0.01, 0.25, 0.75, 0.99])
###Output
_____no_output_____
###Markdown
Comments on Numerics***Mistaken datatypes***: * FlightNum should be categorical. * Month, DayofMonth, DayOfWeek, CRSDepTime, and CRSArrTime are really date/time quantities, though keeping them numeric is still useful. * Cancelled is binary. ***Distance***: the only true numeric here.
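A hedged sketch of what fixing these datatypes could look like on a throwaway copy (not applied to `df`; the sections below handle the same issue with label encoding and cyclical features instead):
###Code
# Illustrative only: cast FlightNum to a categorical dtype and pull the hour out of CRSDepTime
demo = df[['FlightNum', 'CRSDepTime']].copy()
demo['FlightNum'] = demo['FlightNum'].astype('category')
demo['CRSDepHour'] = demo['CRSDepTime'] // 100
demo.dtypes
###Output
_____no_output_____
###Markdown
Now let's look at the categorical columns.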
###Code
# Categories
df.describe(include='O').sort_values(axis=1, by=['unique'], ascending=False)
###Output
_____no_output_____
###Markdown
Comments on CategoriesNothing much to say. We have more than 300 airports, 20 carriers, and 5366 aircraft with their respective tail numbers. With roughly 2,000,000 observations, 2 000 000 * 0.001 = 2000 features would still be manageable. Preprocessing Data for Tree-based Models
###Code
df_tree = df.copy()
df.Target.value_counts()
df.Target.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Bad news. We have a fairly skewed dataset.
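Since the classes are imbalanced, one hedged option (a sketch only, not used in the baseline runs below) is to weight classes inversely to their frequency, e.g. with `class_weight='balanced'`:
###Code
# Hedged sketch: class weighting for the imbalanced Target (illustrative only)
from sklearn.tree import DecisionTreeClassifier

print(df.Target.value_counts(normalize=True))
balanced_tree = DecisionTreeClassifier(class_weight='balanced', max_depth=5)
###Output
_____no_output_____
###Markdown
The baseline models below keep their default (unweighted) settings; LightGBM's `is_unbalance=True` used later plays a similar role.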
###Code
df_tree.head()
###Output
_____no_output_____
###Markdown
Label Encoding
###Code
# Label Encoding
to_encode = ['UniqueCarrier', 'FlightNum', 'TailNum',
'Origin', 'Dest', 'CancellationCode']
le = LabelEncoder()
for column in to_encode:
df_tree[column] = le.fit_transform(df_tree[column])
df_tree.head()
df_tree.shape
###Output
_____no_output_____
###Markdown
Preprocessing Data for Non-Tree-based Models
###Code
df_nontree = df.copy()
###Output
_____no_output_____
###Markdown
Normalization
###Code
df_nontree.sample(10)
df_nontree['Distance'] = MinMaxScaler().fit_transform(df_nontree['Distance'].values.reshape(-1, 1))
df_nontree.sample(10)
###Output
_____no_output_____
###Markdown
Cyclical Feature Engineering Month, day, and the scheduled departure/arrival times are periodic, so each is mapped below to a sine/cosine pair to preserve that circular structure for non-tree models.
###Code
df_nontree.head()
df_nontree['Month_sin'] = np.sin(df.Month * (2 * np.pi/12))
df_nontree['Month_cos'] = np.cos(df.Month * (2 * np.pi/12))
df_nontree['DayofMonth_sin'] = np.sin(df.DayofMonth * (2 * np.pi/31))
df_nontree['DayofMonth_cos'] = np.cos(df.DayofMonth * (2 * np.pi/31))
df_nontree['DayOfWeek_sin'] = np.sin(df.DayOfWeek* (2 * np.pi/7))
df_nontree['DayOfWeek_cos'] = np.cos(df.DayOfWeek* (2 * np.pi/7))
df_nontree['CRSDepTime_sin'] = np.sin(df.CRSDepTime* (2 * np.pi/2400))
df_nontree['CRSDepTime_cos'] = np.cos(df.CRSDepTime* (2 * np.pi/2400))
df_nontree['CRSArrTime_sin'] = np.sin(df.CRSArrTime* (2 * np.pi/2400))
df_nontree['CRSArrTime_cos'] = np.cos(df.CRSArrTime* (2 * np.pi/2400))
df_nontree.drop(['Month', 'DayofMonth', 'DayOfWeek',
'CRSDepTime', 'CRSArrTime'],
axis=1, inplace=True)
df_nontree.head()
###Output
_____no_output_____
###Markdown
One Hot Encoding
###Code
df_nontree.FlightNum.nunique()
df_nontree.TailNum.nunique()
df_nontree.drop(['FlightNum', 'TailNum'], axis=1, inplace=True)
# One hot encode
to_onehot = ['UniqueCarrier', 'Origin', 'Dest', 'CancellationCode']
#for column in to_onehot:
# df_nontree[column] = OneHotEncoder().fit_transform(df_nontree[column].values.reshape(-1, 1))
df_nontree = pd.get_dummies(df_nontree, columns=to_onehot, sparse=True)
df_nontree.sample(10)
###Output
_____no_output_____
###Markdown
Train Test Splitting
###Code
sub_tree = df_tree.sample(frac=0.05, random_state=10, axis=0)
sub_ntree = df_nontree.sample(frac=0.05, random_state=10, axis=0)
sub_tree.shape
sub_ntree.shape
# for-tree-based train_test_split
X_tree = sub_tree.drop('Target', axis=1).values
y_tree = sub_tree['Target'].values
X_train_tree, X_test_tree, y_train_tree, y_test_tree = train_test_split(X_tree, y_tree,
test_size=0.2,
random_state=11,
stratify=y_tree)
# for-NON-tree-based train_test_split
X_ntree = sub_ntree.drop('Target', axis=1).values
y_ntree = sub_ntree['Target'].values
X_train_ntree, X_test_ntree, y_train_ntree, y_test_ntree = train_test_split(X_ntree, y_ntree,
test_size=0.2,
random_state=11,
stratify=y_ntree)
###Output
_____no_output_____
###Markdown
Models Gini Score to Evaluate Performances
###Code
# Gini score = 2*auc -1
# to evaluate performances of models
def gini_coef(y_test, y_pred):
fpr, tpr, threshold = roc_curve(y_test, y_pred)
roc_auc = auc(fpr, tpr)
gini_coef = 2 * roc_auc - 1
print('Gini score: %5.4f' %(gini_coef))
return gini_coef
# Summary table to compare models performance
summary = {}
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
# Decision Tree Baseline
model = DecisionTreeClassifier()
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['DecisionTreeClassifier'] = model_gini
###Output
precision recall f1-score support
0.0 0.58 0.58 0.58 32335
1.0 0.48 0.48 0.48 25768
accuracy 0.54 58103
macro avg 0.53 0.53 0.53 58103
weighted avg 0.54 0.54 0.54 58103
Gini score: 0.0635
###Markdown
Random Forest
###Code
# Random Forest Baseline
model = RandomForestClassifier(max_depth=10, random_state=0)
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['RandomForestClassifier'] = model_gini
###Output
precision recall f1-score support
0.0 0.60 0.85 0.70 32335
1.0 0.60 0.28 0.38 25768
accuracy 0.60 58103
macro avg 0.60 0.56 0.54 58103
weighted avg 0.60 0.60 0.56 58103
Gini score: 0.1294
{'DecisionTreeClassifier': 0.06347067146284235, 'DecisionTreeClassifier Tuned': 0.05132304134087584, 'RandomForestClassifier': 0.12936212899852606}
###Markdown
Too bad to handle. :( Gradient Boosting
###Code
# Gradient Boosting Baseline
model = GradientBoostingClassifier()
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['GradientBoostingClassifier'] = model_gini
###Output
precision recall f1-score support
0.0 0.61 0.81 0.70 32335
1.0 0.59 0.34 0.43 25768
accuracy 0.60 58103
macro avg 0.60 0.58 0.56 58103
weighted avg 0.60 0.60 0.58 58103
Gini score: 0.1557
{'DecisionTreeClassifier': 0.06347067146284235, 'DecisionTreeClassifier Tuned': 0.08782813224083652, 'RandomForestClassifier': 0.12936212899852606, 'RandomForestClassifier Tuned': 0.16708207820498377, 'GradientBoostingClassifier': 0.1556645848502609}
###Markdown
LightGBM
###Code
# LightGBM Baseline
model = LGBMClassifier(metric='auc')
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['LGBMClassifier'] = model_gini
confusion_matrix(y_test_tree, predict, normalize='true')
# LightGBM Baseline - with NON tree
model = LGBMClassifier(metric='auc')
model.fit(X_train_ntree, y_train_ntree)
predict = model.predict(X_test_ntree)
print(classification_report(y_test_ntree, predict))
#gini_coef(y_train_ntree, predict)
model_gini = gini_coef(y_test_ntree, predict)
summary['LGBMClassifier'] = model_gini
confusion_matrix(y_test_tree, predict, normalize='true')
###Output
precision recall f1-score support
0.0 0.63 0.80 0.71 10778
1.0 0.62 0.41 0.50 8590
accuracy 0.63 19368
macro avg 0.63 0.61 0.60 19368
weighted avg 0.63 0.63 0.61 19368
Gini score: 0.2146
###Markdown
XGBoost
###Code
# XGBoost Baseline
model = xgb.XGBClassifier()
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['XGBoost'] = model_gini
###Output
precision recall f1-score support
0.0 0.64 0.75 0.69 10778
1.0 0.60 0.46 0.52 8590
accuracy 0.62 19368
macro avg 0.62 0.61 0.60 19368
weighted avg 0.62 0.62 0.61 19368
Gini score: 0.2131
###Markdown
Naive Bayes
###Code
# GaussianNB Baseline
model = GaussianNB()
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['GaussianNB'] = model_gini
# BernoulliNB Baseline
###Output
precision recall f1-score support
0.0 0.56 1.00 0.72 32335
1.0 0.92 0.01 0.02 25768
accuracy 0.56 58103
macro avg 0.74 0.51 0.37 58103
weighted avg 0.72 0.56 0.41 58103
Gini score: 0.0112
{'DecisionTreeClassifier': 0.06347067146284235, 'DecisionTreeClassifier Tuned': 0.08782813224083652, 'RandomForestClassifier': 0.12936212899852606, 'RandomForestClassifier Tuned': 0.16708207820498377, 'GradientBoostingClassifier': 0.1556645848502609, 'GradientBoostingClassifier Tuned': 0.16120749664177625, 'LGBMClassifier': 0.19051880521398568, 'LGBMClassifier Tuned': 0.1591612051670921, 'GaussianNB': 0.010245265445513851, 'BernoulliNB': 0.01122634306994641}
###Markdown
Logistic Regression
###Code
# LogisticRegression Baseline
model = LogisticRegression()
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['LogisticRegression'] = model_gini
# Logistic Regression Tuning
## Params
random_grid = {'solver' : ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
'penalty': ['l1', 'l2', 'elastic'],
'C' : [100, 10, 1.0, 0.1, 0.01],
'max_iter' : [100, 200, 300, 500]}
model = LogisticRegression()
# Tuning
tuner = RandomizedSearchCV(estimator=model,
param_distributions=random_grid,
n_iter=100,
cv=3,
random_state=0,
verbose=2,
n_jobs=-1)
tuner.fit(X_train_ntree, y_train_ntree)
predict = tuner.predict(X_test_ntree)
# Output to screen
tuner.best_params_
tuner_gini = gini_coef(y_test_ntree, predict)
# To final comparition
summary['LogisticRegression Tuned'] = tuner_gini
###Output
Fitting 3 folds for each of 100 candidates, totalling 300 fits
###Markdown
SVM
###Code
# SVC Baseline
model = SVC()
model.fit(X_train_tree, y_train_tree)
predict = model.predict(X_test_tree)
print(classification_report(y_test_tree, predict))
model_gini = gini_coef(y_test_tree, predict)
summary['SVC'] = model_gini
# SVC Tuning
## Params
random_grid = {'kernel' : ['poly', 'rbf', 'sigmoid'],
'max_iter' : [100, 200, 300, 500],
'gamma' : ['scale', 'auto'],
'C' : [100, 50, 10, 1.0, 0.1, 0.01]}
model = SVC()
## Tuning
tuner = RandomizedSearchCV(estimator=model,
param_distributions=random_grid,
n_iter=100,
cv=3,
random_state=0,
verbose=2,
n_jobs=-1)
tuner.fit(X_train_tree, y_train_tree)
predict = tuner.predict(X_test_tree)
## Output to screen
print(tuner.best_params_)
tuner_gini = gini_coef(y_test_tree, predict)
## To final comparition
summary['SVC Tuned'] = tuner_gini
###Output
Fitting 3 folds for each of 100 candidates, totalling 300 fits
###Markdown
Comparison of Performance
###Code
table = pd.DataFrame(summary, index=[0]).T
table.to_csv('ComparisionOfPerformances.csv', float_format='%5.4f')
table.columns = ['Gini score']
table.plot(kind='barh',
title='Models Performance')
###Output
_____no_output_____
###Markdown
Feature Selection To avoid errors caused by splitting data, I would use the original dataset in Feature Selection.
###Code
# Original dataset for Selection
X = df_tree.drop('Target', axis=1)
y = df_tree['Target']
# Select K Best
selector = SelectKBest(score_func=chi2, k=12)
X_new = selector.fit_transform(X, y)
pd.DataFrame(selector.scores_).plot(kind='barh')
for i, value in enumerate(list(X.columns)):
print((i, value))
###Output
(0, 'Month')
(1, 'DayofMonth')
(2, 'DayOfWeek')
(3, 'CRSDepTime')
(4, 'CRSArrTime')
(5, 'UniqueCarrier')
(6, 'FlightNum')
(7, 'TailNum')
(8, 'Origin')
(9, 'Dest')
(10, 'Distance')
(11, 'Cancelled')
(12, 'CancellationCode')
(13, 'Diverted')
###Markdown
WHAT THE HELL IS HAPPENING HERE??Let's see. Features No. 3, 4, and 10 (CRSDepTime, CRSArrTime, and Distance) are plausible. Yes, I agree, that is coherent.But look! No. 6, ***FlightNum***?? Does that mean flight names decide their fates?No. I think the label encoding badly impacted model performance here. So I will ***remove the feature FlightNum***.
###Code
# SelectKBest
# PCA
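# (Hedged sketch, not used downstream) PCA, listed above, is another way to reduce dimensionality:
# keep the components explaining ~95% of the variance of the scaled feature matrix.
pca = PCA(n_components=0.95, svd_solver='full')
X_pca = pca.fit_transform(StandardScaler().fit_transform(X_tree))
print('PCA kept', X_pca.shape[1], 'components')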
# RFE - Recursive feature elimination
estimator = LogisticRegression(solver='liblinear',
penalty='l1',
max_iter=100,
C=0.1)
selector = RFE(estimator, n_features_to_select=5, step=1)
selector = selector.fit(X_tree, y_tree)
selector.support_
selector.ranking_
# RFE - Recursive feature elimination
estimator = LogisticRegression(solver='liblinear',
penalty='l1',
max_iter=100,
C=0.1)
selector = RFECV(estimator, step=1,
cv=6,
n_jobs=-1)
selector = selector.fit(X_tree, y_tree)
selector.ranking_
for feature, support in zip(list(X.columns), selector.ranking_):
print((feature, support))
selector.grid_scores_
###Output
_____no_output_____
###Markdown
Best Parameters for Models Removing FlightNum from the datasets
###Code
df_tree = df.copy()
# Dataset for TREE-based model
df_tree.drop('FlightNum', axis=1, inplace=True)
sub_tree = df_tree.sample(frac=0.05, random_state=10, axis=0)
# for-tree-based train_test_split
X_tree = sub_tree.drop('Target', axis=1).values
y_tree = sub_tree['Target'].values
X_train_tree, X_test_tree, y_train_tree, y_test_tree = train_test_split(X_tree, y_tree,
test_size=0.2,
random_state=11,
stratify=y_tree)
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
# Decision Tree -- max_depths
max_depths = np.linspace(1, 32, 32, endpoint=True)
params = max_depths
model = DecisionTreeClassifier()
train_scores, test_scores = validation_curve(
model,
X = X_train_tree, y = y_train_tree,
param_name = 'max_depth',
param_range = params,
scoring='roc_auc',
cv = 3,
n_jobs=-1)
plt.plot(params, train_scores)
plt.plot(params, test_scores)
plt.title('AUC score vs max_depth')
plt.xlabel('max_depths')
plt.ylabel('AUC score')
###Output
_____no_output_____
###Markdown
Too much depth only leads to overfitting.
###Code
# Decision Tree -- min_samples_leafs
min_samples_leafs = np.linspace(0.1, 0.5, 5, endpoint=True)
params = min_samples_leafs
model = DecisionTreeClassifier()
train_scores, test_scores = validation_curve(
model,
X = X_train_tree, y = y_train_tree,
param_name = 'min_samples_leaf',
param_range = params,
scoring='roc_auc',
cv = 3,
n_jobs=-1)
plt.plot(params, train_scores)
plt.plot(params, test_scores)
plt.title('AUC score vs min_samples_leaf')
plt.xlabel('min_samples_leaf')
plt.ylabel('AUC score')
###Output
_____no_output_____
###Markdown
It seems that min_samples_leaf does not affect the Decision Tree very much.
###Code
# Decision Tree -- max_features
max_features = list(range(1,X_train_tree.shape[1]))
params = max_features
model = DecisionTreeClassifier()
train_scores, test_scores = validation_curve(
model,
X = X_train_tree, y = y_train_tree,
param_name = 'max_features',
param_range = params,
scoring='roc_auc',
cv = 3,
n_jobs=-1)
plt.plot(params, train_scores)
plt.plot(params, test_scores)
plt.title('AUC score vs max_features')
plt.xlabel('max_features')
plt.ylabel('AUC score')
# Decision Tree
params = {'max_depth' : np.linspace(1, 32, 32, endpoint=True),
'min_samples_leaf' : np.linspace(0.1, 0.5, 5, endpoint=True),
'max_features' : list(range(1,X_train_tree.shape[1]))}
model = DecisionTreeClassifier()
tuner = GridSearchCV(model, params,
scoring='roc_auc', cv=3,
n_jobs=-1)
tuner.fit(X_train_tree, y_train_tree)
predict = tuner.predict(X_test_tree)
[print(key, value) for key, value in tuner.best_params_.items()]
tuner_gini = gini_coef(y_test_tree, predict)
summary['DecisionTreeClassifier Tuned'] = tuner_gini
print(summary)
###Output
{'max_depth': 5.0, 'max_features': 13, 'min_samples_leaf': 0.1}
Gini score: 0.0513
{'DecisionTreeClassifier': 0.06347067146284235, 'DecisionTreeClassifier Tuned': 0.05132304134087584}
###Markdown
Random Forest
###Code
# Random Forest Tuning
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
random_grid = {'n_estimators': [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)],
'max_features': ['auto', 'sqrt'],
'max_depth': max_depth,
'min_samples_split': [2, 5, 10],
'min_samples_leaf': [1, 2, 4],
'bootstrap': [True, False]}
model = RandomForestClassifier()
tuner = RandomizedSearchCV(estimator=model,
param_distributions=random_grid,
n_iter=100,
cv=3,
random_state=0,
verbose=2,
n_jobs=-1)
tuner.fit(X_train_tree, y_train_tree)
predict = tuner.predict(X_test_tree)
[print(key, value) for key, value in tuner.best_params_.items()]
tuner_gini = gini_coef(y_test_tree, predict)
summary['RandomForestClassifier Tuned'] = tuner_gini
rf_tuned = RandomForestClassifier(n_estimators=1800,
min_samples_split=5,
min_samples_leaf=4,
max_features='auto',
                                  max_depth=100,
bootstrap=True)
###Output
_____no_output_____
###Markdown
LightGBM
###Code
# LightGBM Baseline - with NON tree
model = LGBMClassifier(metric='auc')
model.fit(X_train_ntree, y_train_ntree)
predict_train = model.predict(X_train_ntree)
predict_test = model.predict(X_test_ntree)
print(classification_report(y_test_ntree, predict_test))
gini_coef(y_train_ntree, predict_train)
gini_coef(y_test_ntree, predict_test)
#confusion_matrix(y_test_tree, predict, normalize='true')
# LightGBM Tuning -- NON TREE dataset
## Params
random_grid = {
# Complexity
'max_depth' : [-1, 10, 20, 30, 50, 100],
# At nodes
'min_data_in_leaf' : [2, 5, 10, 20, 30, 50],
# Resample
'bagging_fraction' : [.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1],
# Regularization
'lambda_l1' : [1, 2, 3, 4, 5],
'lambda_l2' : [0.1, 0.15],
# Histogram-based
'max_bin' : [63, 140, 255],
'boosting_type' : ['dart', 'gbdt', 'goss'],
'n_iter' : [10, 30, 100, 300, 500],
'learning_rate' : [0.5, 0.25, 0.1, 0.05, 0.01, 0.005, 0.001],
}
model = LGBMClassifier(objective='binary',
is_unbalance=True,
metric='auc',
seed=0 , n_jobs=-1)
# Tuning
tuner = RandomizedSearchCV(estimator=model,
param_distributions=random_grid,
n_iter=100,
cv=3,
random_state=0,
verbose=2,
n_jobs=-1)
tuner.fit(X_train_ntree, y_train_ntree)
predict = tuner.predict(X_test_ntree)
# Output to screen
[print(key, value) for key, value in tuner.best_params_.items()]
tuner_gini = gini_coef(y_test_ntree, predict)
# To final comparison
summary['LGBMClassifier Tuned'] = tuner_gini
###Output
Fitting 3 folds for each of 100 candidates, totalling 300 fits
###Markdown
XGBoost
###Code
# XGBoost Tuning
## Params
random_grid = {'eta' : [1, 0.5, 0.1, 0.03, 0.003],
'gamma' : [1, 2, 3, 4, 5],
'max_depth' : [8, 10, 12],
               'min_child_weight': [50, 100]}
model = xgb.XGBClassifier(nthread=-1)
# Tuning
tuner = RandomizedSearchCV(estimator=model,
param_distributions=random_grid,
n_iter=100,
cv=3,
random_state=0,
verbose=2,
n_jobs=-1)
tuner.fit(X_train_ntree, y_train_ntree)
predict = tuner.predict(X_test_ntree)
# Output to screen
[print(key, value) for key, value in tuner.best_params_.items()]
tuner_gini = gini_coef(y_test_ntree, predict)
# To final comparison
summary['GradientBoostingClassifier Tuned'] = tuner_gini
###Output
Fitting 3 folds for each of 100 candidates, totalling 300 fits
###Markdown
Logistic Regression
###Code
# Logistic Regression Tuning
## Params
random_grid = {'solver' : ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'],
               'penalty': ['l1', 'l2', 'elasticnet'],
'C' : [100, 10, 1.0, 0.1, 0.01],
'max_iter' : [100, 200, 300, 500]}
model = LogisticRegression()
# Tuning
tuner = RandomizedSearchCV(estimator=model,
param_distributions=random_grid,
n_iter=100,
cv=3,
random_state=0,
verbose=2,
n_jobs=-1)
tuner.fit(X_train_ntree, y_train_ntree)
predict = tuner.predict(X_test_ntree)
# Output to screen
tuner.best_params_
tuner_gini = gini_coef(y_test_ntree, predict)
# To final comparison
summary['LogisticRegression Tuned'] = tuner_gini
###Output
Fitting 3 folds for each of 100 candidates, totalling 300 fits
###Markdown
SVM
###Code
# SVC Tuning
## Params
random_grid = {'kernel' : ['poly', 'rbf', 'sigmoid'],
'max_iter' : [100, 200, 300, 500],
'gamma' : ['scale', 'auto'],
'C' : [100, 50, 10, 1.0, 0.1, 0.01]}
model = SVC()
## Tuning
tuner = RandomizedSearchCV(estimator=model,
param_distributions=random_grid,
n_iter=100,
cv=3,
random_state=0,
verbose=2,
n_jobs=-1)
tuner.fit(X_train_tree, y_train_tree)
predict = tuner.predict(X_test_tree)
## Output to screen
print(tuner.best_params_)
tuner_gini = gini_coef(y_test_tree, predict)
## To final comparison
summary['SVC Tuned'] = tuner_gini
###Output
Fitting 3 folds for each of 100 candidates, totalling 300 fits
|
MichaelHiggins/Astro480Assignment2.ipynb | ###Markdown
Table of Observable Galaxies on August 15th - August 30th
###Code
galaxy_table[mask]
###Output
_____no_output_____
###Markdown
Table of Observable Galaxies on August 15th - August 30th
###Code
galaxy_table_observable = galaxy_table[mask][0:10]
galaxy_table_observable = galaxy_table_observable.reset_index(drop=True)
galaxy_table_observable
observing_length = (astro_rise - astro_set).to(u.h)
start_time = astro_set
end_time = astro_rise
#time to observe without destroying the CCD with the sun
observing_range = [astro_set, astro_rise]
time_grid = time_grid_from_range(observing_range)
#plotting how long to view object based on airmass
def target(x):
target = FixedTarget.from_name(galaxy_table_observable["Name"][x])
return target
for i in range(len(galaxy_table_observable['Name'])):
plot_airmass(target(i), het, time_grid);
plt.legend(loc='upper center', bbox_to_anchor=(1.45, 0.8), shadow=True, ncol=1)
from astropy.coordinates import get_sun, get_body, get_moon
from astroplan import moon_illumination
moon_observing = get_body('moon',observing_time1)
moon_illumination(observing_time1)
print(moon_observing.ra.hms)
print(moon_observing.dec.dms)
###Output
hms_tuple(h=21.0, m=16.0, s=41.48403520171172)
dms_tuple(d=-18.0, m=-27.0, s=-12.57342903422682)
###Markdown
Looking one month ahead
###Code
observing_time2 = Time("2019-09-15 00:00:00")
astro_set2 = het.twilight_evening_astronomical(observing_time2, which='nearest')
astro_rise2 = het.twilight_morning_astronomical(observing_time2, which='next')
midnight_het2 = het.midnight(observing_time2, which='next')
def istargetup2(x):
target = FixedTarget.from_name(galaxy_table_observable["Name"][x])
visible = het.target_is_up(midnight_het2, target)
return visible
#mask that can be applied to the galaxy_table to yield which galaxies will be visible
mask2 = []
for i in range(len(galaxy_table_observable["Name"])):
mask2.append(istargetup2(i))
galaxy_table_observable[mask2]
###Output
_____no_output_____
###Markdown
This shows that they will all still be observable from the observatory a month later.
###Code
observing_length2 = (astro_rise2 - astro_set2).to(u.h)
start_time2= astro_set2
end_time2 = astro_rise2
#time to observe without destroying the CCD with the sun
observing_range2 = [astro_set2, astro_rise2]
time_grid2 = time_grid_from_range(observing_range2)
#plotting how long to view object based on airmass
def target(x):
target = FixedTarget.from_name(galaxy_table_observable["Name"][x])
return target
for i in range(len(galaxy_table_observable['Name'])):
plot_airmass(target(i), het, time_grid2);
plt.legend(loc='upper center', bbox_to_anchor=(1.45, 0.8), shadow=True, ncol=1)
moon_observing2 = get_body('moon',observing_time2)
moon_illumination(observing_time2)
print(moon_observing2.ra.hms)
print(moon_observing2.dec.dms)
###Output
hms_tuple(h=0.0, m=9.0, s=24.881550381235265)
dms_tuple(d=-4.0, m=-17.0, s=-22.532708388507956)
|
CustomDash-master/research/Data Wrangling.ipynb | ###Markdown
DROPPING
###Code
train_df.drop(['Bill Payment Aggregator', 'OTT Content App', 'Pincode', 'ID'], axis=1, inplace=True)
test_df.drop(['Bill Payment Aggregator', 'OTT Content App', 'Pincode', 'ID'], axis=1, inplace=True)
test_df.head()
###Output
_____no_output_____
###Markdown
Missing Values
###Code
train_df.isnull().sum()
test_df.isnull().sum()
###Output
_____no_output_____
###Markdown
International Usage & VAS Subscription
###Code
train_df['International Usage'].fillna('No', inplace=True)
test_df['International Usage'].fillna('No', inplace=True)
train_df['VAS Subscription'].fillna('None', inplace=True)
test_df['VAS Subscription'].fillna('None', inplace=True)
###Output
_____no_output_____
###Markdown
Dropna
###Code
nan_train_df = train_df.dropna()
nan_test_df = test_df.dropna()
nan_df = nan_train_df.append(nan_test_df, ignore_index=True)
nan_df.shape
###Output
_____no_output_____
###Markdown
Label Encoding
###Code
train_df.fillna('NaN', inplace=True)
le = LabelEncoder()
###Output
_____no_output_____
###Markdown
01 - Age
###Code
train_df['Age'] = le.fit_transform(train_df['Age'])
le_age_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
train_df['Age'].replace(le_age_mapping['NaN'], -1, inplace=True)
test_df['Age'].replace(le_age_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
02 - ARPU
###Code
train_df['ARPU'] = le.fit_transform(train_df['ARPU'])
le_arpu_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
le_arpu_mapping
test_df['ARPU'].replace(le_arpu_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
03 - Age on Network
###Code
train_df['Age on Network'] = le.fit_transform(train_df['Age on Network'])
le_aon_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
train_df['Age on Network'].replace(le_aon_mapping['NaN'], -1, inplace=True)
test_df['Age on Network'].replace(le_aon_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
04 - Brand Identifier
###Code
train_df['Brand Identifier'] = le.fit_transform(train_df['Brand Identifier'])
le_brand_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['Brand Identifier'].replace(le_brand_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
05 - Circle Name
###Code
train_df['Circle Name'] = le.fit_transform(train_df['Circle Name'])
le_circle_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['Circle Name'].replace(le_circle_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
06 - Connection Type
###Code
train_df['Connection Type'] = le.fit_transform(train_df['Connection Type'])
le_connection_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['Connection Type'].replace(le_connection_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
07 - Data Usage
###Code
train_df['Data Usage'] = le.fit_transform(train_df['Data Usage'])
le_data_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['Data Usage'].replace(le_data_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
08 - Gender
###Code
train_df['Gender'] = le.fit_transform(train_df['Gender'])
le_gender_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
train_df['Gender'].replace(le_gender_mapping['NaN'], -1, inplace=True)
test_df['Gender'].replace(le_gender_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
09 - Genre
###Code
train_df['Genre'] = le.fit_transform(train_df['Genre'])
le_genre_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['Genre'].replace(le_genre_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
10 - International Usage
###Code
train_df['International Usage'] = le.fit_transform(train_df['International Usage'])
le_international_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['International Usage'].replace(le_international_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
11 - Recharge
###Code
train_df['Recharge'] = le.fit_transform(train_df['Recharge'])
le_recharge_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
train_df['Recharge'].replace(le_recharge_mapping['NaN'], -1, inplace=True)
test_df['Recharge'].replace(le_recharge_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
12 - SMS Usage
###Code
train_df['SMS Usage'] = le.fit_transform(train_df['SMS Usage'])
le_sms_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['SMS Usage'].replace(le_sms_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
13 - VAS Subscription
###Code
train_df['VAS Subscription'] = le.fit_transform(train_df['VAS Subscription'])
le_vas_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['VAS Subscription'].replace(le_vas_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
14 - Voice Usage
###Code
train_df['Voice Usage'] = le.fit_transform(train_df['Voice Usage'])
le_voice_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['Voice Usage'].replace(le_voice_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
15 - Web/App
###Code
train_df['Web/App'] = le.fit_transform(train_df['Web/App'])
le_webapp_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['Web/App'].replace(le_webapp_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
16 - DND
###Code
train_df['DND'] = le.fit_transform(train_df['DND'])
le_dnd_mapping = dict(zip(le.classes_, le.transform(le.classes_)))
test_df['DND'].replace(le_dnd_mapping, inplace=True)
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_df.replace(np.nan, -1, inplace=True)
test_df['Recharge'] = test_df['Recharge'].astype('int')
test_df['Age'] = test_df['Age'].astype('int')
test_df['Age on Network'] = test_df['Age on Network'].astype('int')
test_df['Gender'] = test_df['Gender'].astype('int')
test_df.dtypes
test_df.drop([142, 427, 18066, 41997, 4725, 35398, 10893, 38518, 17787, 22955, 47284], inplace=True)
test_df['Data Usage'] = test_df['Data Usage'].astype('int')
test_df['VAS Subscription'] = test_df['VAS Subscription'].astype('int')
###Output
_____no_output_____
###Markdown
To The CSV
###Code
train_df.to_csv('E:\VIL Codefest\secret\VIL Confidential Information Dataset\Train_cleaned.csv', index=False)
test_df.to_csv('E:\VIL Codefest\secret\VIL Confidential Information Dataset\Test_cleaned.csv', index=False)
###Output
_____no_output_____ |
notebooks/authors MultinominalNB.ipynb | ###Markdown
Dataset: https://www.kaggle.com/gdberrio/spooky-authors-csv. Predict the author of a sentence (3 different authors).
###Code
from string import punctuation
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import MultinomialNB, GaussianNB, BernoulliNB
from nltk.corpus import stopwords
from nltk import wordpunct_tokenize
from nltk.stem import PorterStemmer
df = pd.read_csv("../datasets/authors.csv")
df.head()
df['author'].value_counts()
###Output
_____no_output_____
###Markdown
more or less equal counts
###Code
texts = df['text']
le = LabelEncoder().fit(df['author'])
authors = le.transform(df['author'])
np.bincount(authors)
stopwords = set(stopwords.words('english')) # remove not important words
stemmer = PorterStemmer() # stem the words so the vocabulary will be smaller
# removing punctuation, stopwords and stemming words to reduce the vocabulary
cleared_texts = []
for i in texts:
cleared_texts.append(' '.join([stemmer.stem(x) for x in wordpunct_tokenize(i.lower())
if x not in punctuation and x not in stopwords])) # removing punctiation and all to lowercase
print(texts[0])
print()
print(cleared_texts[0])
x_train, x_test, y_train, y_test = train_test_split(cleared_texts, authors,
stratify=authors, test_size=0.2)
cv = CountVectorizer(decode_error='ignore', ngram_range=(1, 1),
max_df=1.0, min_df=1)
cv.fit(x_train)
x_train_transformed = cv.transform(x_train)
x_test_transformed = cv.transform(x_test)
x_train_transformed.shape
clf = MultinomialNB() # default
clf.fit(x_train_transformed.toarray(), y_train)
clf.score(cv.transform(x_test).toarray(), y_test)
print(np.bincount(y_test))
print(np.bincount(clf.predict(x_test_transformed.toarray())))
# with different params
param_grid = {'alpha': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'fit_prior': [True, False]}
grid_search_multinomial = GridSearchCV(MultinomialNB(), param_grid=param_grid, cv=5)
grid_search_multinomial.fit(x_train_transformed.toarray(), y_train)
grid_search_multinomial.best_params_
grid_search_multinomial.score(x_test_transformed.toarray(), y_test)
from sklearn.metrics import log_loss # in kaggle competition this metric is used
predictions_proba = grid_search_multinomial.predict_proba(x_test_transformed.toarray())
log_loss(y_test, predictions_proba)
###Output
_____no_output_____
###Markdown
well, not great, not terrible
###Code
from sklearn.linear_model import LogisticRegression # let's try LR as well
param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000], 'n_jobs': [-1]}
grid_search_lr = GridSearchCV(LogisticRegression(solver='lbfgs'), param_grid=param_grid, cv=5)
grid_search_lr.fit(x_train_transformed, y_train)
grid_search_lr.best_params_
print(grid_search_lr.score(x_test_transformed.toarray(), y_test))
predictions_proba = grid_search_lr.predict_proba(x_test_transformed.toarray())
print(log_loss(y_test, predictions_proba))
###Output
_____no_output_____
###Markdown
Seems to be a bit better in terms of log loss. What can be tested as well: different preprocessing (n-grams, no removal of stopwords, no stemming, TF-IDF instead of TF) and different algorithms.
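A hedged sketch of one way to try those ideas together: wrap the vectorizer, the TF-IDF step, and the classifier in a Pipeline and grid-search over n-gram range and TF vs TF-IDF (the grid values below are illustrative, and `x_train`/`y_train` are the variables defined above):
```
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV

pipe = Pipeline([('cv', CountVectorizer()),
                 ('tfidf', TfidfTransformer()),
                 ('clf', MultinomialNB())])
param_grid = {'cv__ngram_range': [(1, 1), (1, 2)],   # unigrams vs uni+bigrams
              'tfidf__use_idf': [True, False],       # TF-IDF vs (normalized) TF
              'clf__alpha': [0.1, 1.0]}
search = GridSearchCV(pipe, param_grid, cv=5, scoring='neg_log_loss')
search.fit(x_train, y_train)
print(search.best_params_, search.best_score_)
```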
###Code
idf = TfidfTransformer().fit(x_train_transformed)
x_train_transformed_idf = idf.transform(x_train_transformed)
clf = MultinomialNB()
clf.fit(x_train_transformed_idf.toarray(), y_train)
x_test_transformed_idf = idf.transform(cv.transform(x_test)).toarray()
print(clf.score(x_test_transformed_idf, y_test))
predictions_proba = clf.predict_proba(x_test_transformed.toarray())
print(log_loss(y_test, predictions_proba))
###Output
_____no_output_____ |
AI/Training/Question_Answering.ipynb | ###Markdown
Install the required libraries
###Code
! pip install datasets transformers
###Output
_____no_output_____
###Markdown
Log in to the Hugging Face account
###Code
!huggingface-cli login
###Output
_____no_output_____
###Markdown
Install and set up git-lfs
###Code
!apt install git-lfs
!git config --global user.email "[email protected]"
!git config --global user.name "Muhammad Fadhil Arkan"
###Output
_____no_output_____
###Markdown
Fine-tuning a model on a question-answering task. Dataset and model name parameters
###Code
# This flag is the difference between SQUAD v1 or 2 (if you're using another dataset, it indicates if impossible
# answers are allowed or not).
squad_v2 = False
model_checkpoint = "distilbert-base-uncased"
batch_size = 16
###Output
_____no_output_____
###Markdown
Loading the dataset. Download the dataset
###Code
from datasets import load_dataset, load_metric
datasets = load_dataset("squad_v2" if squad_v2 else "squad")
###Output
_____no_output_____
###Markdown
For a custom dataset, see: (https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) Check the dataset
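As a minimal sketch (the file names here are hypothetical), a custom dataset in a SQuAD-like JSON format could be loaded by pointing `load_dataset` at local files instead of a hub dataset name:
```
from datasets import load_dataset

# Hypothetical local files; see the link above for the expected formats
custom_datasets = load_dataset('json',
                               data_files={'train': 'my_train.json',
                                           'validation': 'my_val.json'})
```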
###Code
datasets
###Output
_____no_output_____
###Markdown
visualisasi dataset
###Code
from datasets import ClassLabel, Sequence
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
#display(df)
for column, typ in dataset.features.items():
if isinstance(typ, ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):
df[column] = df[column].transform(lambda x: [typ.feature.names[i] for i in x])
display(HTML(df.to_html()))
show_random_elements(datasets["train"])
###Output
_____no_output_____
###Markdown
Preprocessing the training data. Download and initialize the tokenizer
###Code
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
###Output
_____no_output_____
###Markdown
Make sure the model used has a fast tokenizer. Check: https://huggingface.co/transformers/index.html#bigtable
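A quick hedged check, equivalent in spirit to the assert in the next cell, is the tokenizer's `is_fast` flag (using the `tokenizer` created above):
```
# True only for fast (Rust-backed) tokenizers, which provide the offset
# mappings needed by the preprocessing below
print(tokenizer.is_fast)
```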
###Code
import transformers
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)
###Output
_____no_output_____
###Markdown
Set the maximum feature length and the overlap length parameters
###Code
max_length = 384 # The maximum length of a feature (question and context)
doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed.
###Output
_____no_output_____
###Markdown
A precaution in case a model requires padding on a different side
###Code
pad_on_right = tokenizer.padding_side == "right"
###Output
_____no_output_____
###Markdown
preprocess dataset
###Code
def prepare_train_features(examples):
# Some of the questions have lots of whitespace on the left, which is not useful and will make the
# truncation of the context fail (the tokenized question will take a lots of space). So we remove that
# left whitespace
examples["question"] = [q.lstrip() for q in examples["question"]]
# Tokenize our examples with truncation and padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The offset mappings will give us a map from token to character position in the original context. This will
# help us compute the start_positions and end_positions.
offset_mapping = tokenized_examples.pop("offset_mapping")
# Let's label those examples!
tokenized_examples["start_positions"] = []
tokenized_examples["end_positions"] = []
for i, offsets in enumerate(offset_mapping):
# We will label impossible answers with the index of the CLS token.
input_ids = tokenized_examples["input_ids"][i]
cls_index = input_ids.index(tokenizer.cls_token_id)
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
answers = examples["answers"][sample_index]
# If no answers are given, set the cls_index as answer.
if len(answers["answer_start"]) == 0:
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Start/end character index of the answer in the text.
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != (1 if pad_on_right else 0):
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != (1 if pad_on_right else 0):
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).
if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Otherwise move the token_start_index and token_end_index to the two ends of the answer.
# Note: we could go after the last offset if the answer is the last word (edge case).
while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:
token_start_index += 1
tokenized_examples["start_positions"].append(token_start_index - 1)
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
tokenized_examples["end_positions"].append(token_end_index + 1)
return tokenized_examples
###Output
_____no_output_____
###Markdown
Apply the function to all of the data in the dataset
###Code
tokenized_datasets = datasets.map(prepare_train_features, batched=True, remove_columns=datasets["train"].column_names)
###Output
_____no_output_____
###Markdown
Fine-tuning the model. Download and initialize the model
###Code
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)
###Output
_____no_output_____
###Markdown
Set the training parameters
###Code
model_name = model_checkpoint.split("/")[-1]
args = TrainingArguments(
f"test-squad",
evaluation_strategy = "epoch",
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=3,
weight_decay=0.01,
push_to_hub=True,
push_to_hub_model_id=f"{model_name}-finetuned-squad",
)
###Output
_____no_output_____
###Markdown
Use a data collator to preprocess the input data
###Code
from transformers import default_data_collator
data_collator = default_data_collator
###Output
_____no_output_____
###Markdown
Initialize the trainer
###Code
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer
trainer = Trainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
)
###Output
_____no_output_____
###Markdown
Start the training process
###Code
trainer.train()
###Output
_____no_output_____
###Markdown
Save the model to local storage
###Code
trainer.save_model("test-squad-trained")
###Output
_____no_output_____
###Markdown
Evaluation. Predict outputs using the evaluation data
###Code
import torch
for batch in trainer.get_eval_dataloader():
break
batch = {k: v.to(trainer.args.device) for k, v in batch.items()}
with torch.no_grad():
output = trainer.model(**batch)
output.keys()
###Output
_____no_output_____
###Markdown
Check the dimensions of the model outputs (start and end logits)
###Code
output.start_logits.shape, output.end_logits.shape
output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1)
###Output
_____no_output_____
###Markdown
Preprocess the features from the validation dataset
###Code
def prepare_validation_features(examples):
# Some of the questions have lots of whitespace on the left, which is not useful and will make the
# truncation of the context fail (the tokenized question will take a lots of space). So we remove that
# left whitespace
examples["question"] = [q.lstrip() for q in examples["question"]]
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# We keep the example_id that gave us this feature and we will store the offset mappings.
tokenized_examples["example_id"] = []
for i in range(len(tokenized_examples["input_ids"])):
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
context_index = 1 if pad_on_right else 0
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(o if sequence_ids[k] == context_index else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
###Output
_____no_output_____
###Markdown
Apply the preprocessing to the dataset
###Code
validation_features = datasets["validation"].map(
prepare_validation_features,
batched=True,
remove_columns=datasets["validation"].column_names
)
###Output
_____no_output_____
###Markdown
Run the predictions
###Code
raw_predictions = trainer.predict(validation_features)
validation_features.set_format(type=validation_features.format["type"], columns=list(validation_features.features.keys()))
###Output
_____no_output_____
###Markdown
Determine the score and text of the predicted answers
###Code
import numpy as np

n_best_size = 20  # how many of the best start/end logits to consider
max_answer_length = 30
start_logits = output.start_logits[0].cpu().numpy()
end_logits = output.end_logits[0].cpu().numpy()
offset_mapping = validation_features[0]["offset_mapping"]
# The first feature comes from the first example. For the more general case, we will need to be match the example_id to
# an example index
context = datasets["validation"][0]["context"]
# Gather the indices the best start/end logits:
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
valid_answers = []
for start_index in start_indexes:
for end_index in end_indexes:
# Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
# to part of the input_ids that are not in the context.
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
if start_index <= end_index: # We need to refine that test to check the answer is inside the context
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
valid_answers = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[:n_best_size]
valid_answers
import collections
examples = datasets["validation"]
features = validation_features
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
###Output
_____no_output_____
###Markdown
Post-process the prediction results
###Code
from tqdm.auto import tqdm
def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):
all_start_logits, all_end_logits = raw_predictions
# Build a map example to its corresponding features.
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
# The dictionaries we have to fill.
predictions = collections.OrderedDict()
# Logging.
print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
# Let's loop over all the examples!
for example_index, example in enumerate(tqdm(examples)):
# Those are the indices of the features associated to the current example.
feature_indices = features_per_example[example_index]
min_null_score = None # Only used if squad_v2 is True.
valid_answers = []
context = example["context"]
# Looping through all the features associated to the current example.
for feature_index in feature_indices:
# We grab the predictions of the model for this feature.
start_logits = all_start_logits[feature_index]
end_logits = all_end_logits[feature_index]
# This is what will allow us to map some the positions in our logits to span of texts in the original
# context.
offset_mapping = features[feature_index]["offset_mapping"]
# Update minimum null prediction.
cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
feature_null_score = start_logits[cls_index] + end_logits[cls_index]
if min_null_score is None or min_null_score < feature_null_score:
min_null_score = feature_null_score
# Go through all possibilities for the `n_best_size` greater start and end logits.
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
# Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
# to part of the input_ids that are not in the context.
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
if len(valid_answers) > 0:
best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
else:
# In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
# failure.
best_answer = {"text": "", "score": 0.0}
# Let's pick our final answer: the best one or the null answer (only for squad_v2)
if not squad_v2:
predictions[example["id"]] = best_answer["text"]
else:
answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
predictions[example["id"]] = answer
return predictions
###Output
_____no_output_____
###Markdown
Apply the post-processing function to all of the predictions
###Code
final_predictions = postprocess_qa_predictions(datasets["validation"], validation_features, raw_predictions.predictions)
###Output
_____no_output_____
###Markdown
Initialize the metric
###Code
metric = load_metric("squad_v2" if squad_v2 else "squad")
###Output
_____no_output_____
###Markdown
Compute the F1 score
###Code
if squad_v2:
formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
else:
formatted_predictions = [{"id": k, "prediction_text": v} for k, v in final_predictions.items()]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in datasets["validation"]]
metric.compute(predictions=formatted_predictions, references=references)
###Output
_____no_output_____
###Markdown
Upload the model to the Hugging Face Hub
###Code
trainer.push_to_hub()
###Output
Saving model checkpoint to test-squad
Configuration saved in test-squad/config.json
Model weights saved in test-squad/pytorch_model.bin
tokenizer config file saved in test-squad/tokenizer_config.json
Special tokens file saved in test-squad/special_tokens_map.json
|
nlp/histone_doc2vec.ipynb | ###Markdown
###Code
!pip install -q biopython
from google.colab import drive
drive.mount('/content/drive', force_remount=False)
# module auto reload
%load_ext autoreload
%autoreload 2
# copy modules
!cp -r '/content/drive/My Drive/dna_NN_theory/reading_dna_scripts' .
!ls reading_dna_scripts
import re
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from itertools import product
from sklearn.model_selection import train_test_split
import sklearn.metrics as metrics
from sklearn.metrics import silhouette_score
import tensorflow as tf
from tensorflow import keras
from tensorflow.data import Dataset
from reading_dna_scripts.load import read_fasta
DIR = '/content/drive/My Drive/'
DATA_DIR = DIR + 'dna_NN_theory/histone/'
MODEL_DIR = DIR + 'dna_NN_theory/models/'
file = DIR + 'H3.fasta'
sequences, labels = read_fasta(file)
seq_len = len(sequences[0])
def n_gram(x, word_size=3):
arr_x = [c for c in x]
words = tf.strings.ngrams(arr_x, ngram_width=word_size, separator='').numpy()
words = list(pd.Series(words).apply(lambda b: b.decode('utf-8')))
return words
df = pd.DataFrame(sequences, columns=['sequence'])
df['ngram'] = df['sequence'].apply(n_gram)
df.head()
# sequences = list(df.ngram)
# sequences[0]
# split
SEED = 3264
test_size = 0.15
val_size = 0.15
split_options = dict(test_size=test_size, stratify=labels, random_state=SEED)
x_train_val, xtest, y_train_val, ytest = train_test_split(df, labels, **split_options)
# normalize val_size and update options
split_options.update(dict(test_size=val_size/(1-test_size), stratify=y_train_val))
xtrain, xval, ytrain, yval = train_test_split(x_train_val, y_train_val, **split_options)
del x_train_val, y_train_val
print('train size:', len(xtrain))
print('val size: ', len(xval))
print('test size: ', len(xtest))
print("The length of the sequence is", seq_len)
xtrain['label'] = ytrain
xval['label'] = yval
xtest['label'] = ytest
ytrain = np.array(ytrain)
yval = np.array(yval)
ytest = np.array(ytest)
# saving current model
DATE = '_20210311'
SUFFIX = "doc2vec_histone"
import multiprocessing
from tqdm import tqdm
from gensim.models import Doc2Vec
from gensim.models.doc2vec import Doc2Vec,TaggedDocument
from gensim.test.utils import get_tmpfile
xtrain_tagged = xtrain.apply(
lambda r: TaggedDocument(words=r["ngram"], tags=[r["label"]]), axis=1
)
xval_tagged = xval.apply(
lambda r: TaggedDocument(words=r["ngram"], tags=[r["label"]]), axis=1
)
xtest_tagged = xtest.apply(
lambda r: TaggedDocument(words=r["ngram"], tags=[r["label"]]), axis=1
)
tqdm.pandas(desc="progress-bar")
cores = multiprocessing.cpu_count()
def getVec(model, tagged_docs, epochs=20):
sents = tagged_docs.values
regressors = [model.infer_vector(doc.words, epochs=epochs) for doc in sents]
return np.array(regressors)
def doc2vec_training(embed_size_list=[50,100,150,200], figsize=(10,50), verbose=0):
num_model = len(embed_size_list)
# fig, axes = plt.subplots(num_model, 2, figsize=figsize)
counter = 0
model_list = []
hist_list = []
es_cb = keras.callbacks.EarlyStopping(patience=30, restore_best_weights=True)
for embed_size in embed_size_list:
start = time.time()
print("training doc2vec for embedding size =", embed_size)
model_dm = Doc2Vec(dm=1, vector_size=embed_size, negative=5, hs=0, \
min_count=2, sample=0, workers=cores)
if verbose == 1:
model_dm.build_vocab([x for x in tqdm(xtrain_tagged.values)])
else:
model_dm.build_vocab(xtrain_tagged.values)
for epoch in range(80):
if verbose == 1:
model_dm.train([x for x in tqdm(xtrain_tagged.values)], \
total_examples=len(xtrain_tagged.values), epochs=1)
else:
model_dm.train(xtrain_tagged.values, \
total_examples=len(xtrain_tagged.values), epochs=1)
model_dm.alpha -= 0.002
model_dm.min_alpha = model_dm.alpha
xtrain_vec = getVec(model_dm, xtrain_tagged)
xval_vec = getVec(model_dm, xval_tagged)
xtest_vec = getVec(model_dm, xtest_tagged)
# save the embedding to csv files
train_filename = "size" + str(embed_size) + "_train.csv"
val_filename = "size" + str(embed_size) + "_val.csv"
test_filename = "size" + str(embed_size) + "_test.csv"
np.savetxt(DATA_DIR + train_filename, xtrain_vec, delimiter=",")
np.savetxt(DATA_DIR + val_filename, xval_vec, delimiter=",")
np.savetxt(DATA_DIR + test_filename, xtest_vec, delimiter=",")
print("the shape for training vector is", xtrain_vec.shape, \
"the shape for val vector is", xval_vec.shape, \
"the shape for test vector is", xtest_vec.shape)
# xtrain_tsne = TSNE(n_components=2, metric="cosine").fit_transform(xtrain_vec)
# xval_tsne = TSNE(n_components=2, metric="cosine").fit_transform(xval_vec)
# xtest_tsne = TSNE(n_components=2, metric="cosine").fit_transform(xtest_vec)
# plotVec(axes[counter, 0], xtrain_tsne, ytrain, title="TSNE, training, embedding="+str(embed_size))
# plotVec(axes[counter, 0], xtrain_tsne, ytrain, title="TSNE, training, embedding="+str(embed_size))
# plotVec(axes[counter, 1], xtest_tsne, ytest, title="TSNE, test, embedding="+str(embed_size))
counter += 1
print("embedding size =", embed_size)
model = keras.Sequential([
keras.layers.Dense(128, activation="relu", input_shape=[embed_size]),
keras.layers.Dropout(0.2),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dropout(0.2),
keras.layers.Dense(32, activation="relu"),
keras.layers.Dropout(0.2),
keras.layers.Dense(16, activation="relu"),
keras.layers.Dropout(0.2),
keras.layers.Dense(1, activation="sigmoid")
])
model.compile(keras.optimizers.SGD(momentum=0.9), \
"binary_crossentropy", metrics=["accuracy"])
hist = model.fit(xtrain_vec, ytrain, \
epochs=1000, callbacks=[es_cb], validation_data=(xval_vec, yval ))
train_loss, train_acc = model.evaluate(xtrain_vec, ytrain)
val_loss, val_acc = model.evaluate(xval_vec, yval)
test_loss, test_acc = model.evaluate(xtest_vec, ytest)
print("Evaluation on training set: loss", train_loss, \
"accuracy", train_acc)
print("Evaluation on val set: loss", val_loss, \
"accuracy", val_acc)
print("Evaluation on test set: loss", test_loss, \
"accuracy", test_acc)
model_list.append(model)
# fname = get_tmpfile(MODEL_DIR+"histone_doc2vec_"+embed_size)
# model.save(fname)
model.save(MODEL_DIR+SUFFIX+str(embed_size)+"_"+DATE+".h5")
hist_list.append(hist)
save_hist(hist, SUFFIX + "_size" + str(embed_size) )
end = time.time()
print("running time in ", end - start, "seconds")
print("\n\n")
return model_list, hist_list
def save_hist(hist, suffix):
filename = DIR + 'dna_NN_theory/histone/' + suffix + "_history.csv"
hist_df = pd.DataFrame(hist.history)
with open(filename, mode='w') as f:
hist_df.to_csv(f)
def save_prediction(res, suffix=""):
i = 0
for ds in ['train', 'val', 'test']:
filename = DIR + 'dna_NN_theory/histone/' + suffix + "_" + ds + "_prediction.csv"
df = pd.DataFrame()
df[ds] = res[i]
i += 1
df[ds+'_pred'] = res[i]
i += 1
with open(filename, mode='w') as f:
df.to_csv(f)
### ref: https://stackoverflow.com/questions/25009284/how-to-plot-roc-curve-in-python
def plot_ROC(label, pred, title="ROC"):
fpr, tpr, threshold = metrics.roc_curve(label, pred)
roc_auc = metrics.auc(fpr, tpr)
plt.title(title)
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
### ref: https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/
def plot_recall_precision(label, pred, title="RP"):
precision, recall, thresholds = metrics.precision_recall_curve(label, pred)
no_skill = np.sum(label) / len(label)
plt.plot([0, 1], [no_skill, no_skill], linestyle='--', label='random')
plt.plot(recall, precision, marker='.', label='model')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
plt.show()
def eval_model(model, ds, ds_name="Training"):
loss, acc = model.evaluate(ds, verbose=0)
print("{} Dataset: loss = {} and acccuracy = {}%".format(ds_name, np.round(loss, 3), np.round(acc*100, 2)))
embed_size_list = [50,100,150,200]
num_model = len(embed_size_list)
model_list, hist_list = doc2vec_training(embed_size_list, figsize=(10,20))
# for i in range(len(embed_size_list)):
# s = embed_size_list[i]
# print("embedding size is {}".format(s))
# train_filename = "doc2vec_viridae_size" + str(s) + "_train.csv"
# test_filename = "doc2vec_viridae_size" + str(s) + "_test.csv"
# training_vectors = pd.read_csv(data_path + train_filename, header=None)
# test_vectors = pd.read_csv(data_path + test_filename, header=None)
# ## evaluation for whole embedding
# train_sil = silhouette_score(training_vectors, ytrain, metric='cosine')
# test_sil = silhouette_score(test_vectors, ytest, metric='cosine')
# print("Whole embedding, silhouette score for training/test embedding is {}/{}".format(train_sil, test_sil))
# for i in range(len(embed_size_list)):
# s = embed_size_list[i]
# filename = data_path + "doc2vec_histone_size" + str(s) + "_history.csv"
# hist_df = pd.DataFrame(hist_list[i].history)
# with open(filename, mode='w') as f:
# hist_df.to_csv(f)
for i in range(len(embed_size_list)):
s = embed_size_list[i]
model = tf.keras.models.load_model(MODEL_DIR + "doc2vec_histone" + str(s) + "_" + DATE + ".h5")
train_filename = "size" + str(s) + "_train.csv"
val_filename = "size" + str(s) + "_val.csv"
test_filename = "size" + str(s) + "_test.csv"
xtrain_vec = pd.read_table(DATA_DIR + train_filename, delimiter=",", header=None)
xval_vec = pd.read_table(DATA_DIR + val_filename, delimiter=",", header=None)
xtest_vec = pd.read_table(DATA_DIR + test_filename, delimiter=",", header=None)
ytrain_pred = model.predict(xtrain_vec)
yval_pred = model.predict(xval_vec)
ytest_pred = model.predict(xtest_vec)
res = [ytrain, ytrain_pred, yval, yval_pred, ytest, ytest_pred]
save_prediction(res, SUFFIX + "_size" + str(s) )
plot_ROC(ytrain, ytrain_pred, title='ROC on histone training')
plot_recall_precision(ytrain, ytrain_pred, title='precision/recall on histone training')
# recovered_hist = []
fig, axes = plt.subplots(num_model, 2, figsize=(10, 15))
for i in range(num_model):
s = embed_size_list[i]
    filename = DATA_DIR + "doc2vec_histone_size" + str(s) + "_history.csv"
    hist_df = pd.read_csv(filename)
ax1 = axes[i, 0]
ax2 = axes[i, 1]
ax1.plot(hist_df['loss'], label='training')
ax1.plot(hist_df['val_loss'], label='test')
ax1.set_ylim(0.1, 1.2)
ax1.set_title('model loss, embedding = '+str(embed_size_list[i]))
ax1.set_xlabel('epoch')
ax1.set_ylabel('loss')
ax1.legend(['train', 'test'], loc='upper left')
ax2.plot(hist_df['accuracy'], label='train')
ax2.plot(hist_df['val_accuracy'], label='test')
ax2.set_ylim(0.5, 1.0)
ax2.set_title('model accuracy, embedding = '+str(embed_size_list[i]))
ax2.set_xlabel('epoch')
ax2.set_ylabel('accuracy')
ax2.legend(['train', 'test'], loc='upper left')
fig.tight_layout()
###Output
_____no_output_____ |
notebooks/7.0-hwant-skew-distributions-gamma.ipynb | ###Markdown
Gamma dist
###Code
a = 2
gamma_series = pd.Series(fn.gen_rv(gamma, args=[a], size = 100))
gamma_series.plot.kde()
###Output
_____no_output_____
###Markdown
Test for normality
###Code
fig = qqplot(gamma_series, fit=True, line='45')
plt.show()
st.shapiro_wilks_(gamma_series)
st.jarque_bera_(gamma_series)
###Output
Statistics=22.759, p=0.000, skew=1.087, kurt=3.858
Sample does not look Gaussian (reject H0)
###Markdown
Individual control chart
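For reference, the chart below follows the usual individuals-chart convention, assuming `cp.estimate_sigma_from_MR` applies the standard d2 = 1.128 divisor for moving ranges of size 2:

$$\hat{\sigma} = \frac{\overline{MR}}{1.128}, \qquad UCL = \bar{x} + 3\hat{\sigma}, \qquad LCL = \bar{x} - 3\hat{\sigma}$$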
###Code
in_control_mean = gamma_series.mean()
MR = cp.calculate_MR(gamma_series)
in_control_sigma = cp.estimate_sigma_from_MR(MR)
in_control_mean, in_control_sigma
x_ind_params0 = cp.x_ind_params(x = gamma_series, sigma = in_control_sigma, center=in_control_mean)
x_ind_params0 = x_ind_params0.reset_index()
pf.plot_control_chart(
data=x_ind_params0,
index='index',
obs='obs',
UCL='UCL',
center='Center',
LCL='LCL',
drawstyle='steps-mid',
title='Individual Control Chart for Gamma Distribution',
ylab='x',
xlab=None,
all_dates=False,
rot=0)
(x_ind_params0['obs'] > x_ind_params0['UCL']).sum() / len(x_ind_params0['obs']) + \
(x_ind_params0['obs'] < x_ind_params0['LCL']).sum() / len(x_ind_params0['obs'])
###Output
_____no_output_____
###Markdown
Transform gamma dist
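For reference, the Box-Cox transform fitted below has the form

$$y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0 \\ \ln y, & \lambda = 0 \end{cases}$$

and the Yeo-Johnson variant used in the PowerTransformer extends the same idea to zero and negative values; the fitted $\lambda$ is what the cells below report.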
###Code
# Test boxcox
from scipy.stats import boxcox
from scipy.special import inv_boxcox
boxcox(gamma_series)[1]
pt = PowerTransformer(method='yeo-johnson', standardize=False, copy=True)
# pt = PowerTransformer(method='box-cox', standardize=False, copy=True)
pt_fitted = pt.fit(gamma_series.values.reshape(-1, 1))
gamma_series_transformed = pd.Series(pt_fitted.transform(gamma_series.values.reshape(-1, 1)).flatten())
lambda_bc = pt_fitted.lambdas_
lambda_bc
gamma_series_transformed.plot.kde()
###Output
_____no_output_____
###Markdown
Test for normality
###Code
fig = qqplot(gamma_series_transformed, fit=True, line='45')
plt.show()
st.shapiro_wilks_(gamma_series_transformed)
st.jarque_bera_(gamma_series_transformed)
###Output
Statistics=1.575, p=0.455, skew=0.014, kurt=2.386
Sample looks Gaussian (fail to reject H0)
###Markdown
Individual control chart (transformed)
###Code
in_control_mean = gamma_series_transformed.mean()
MR = cp.calculate_MR(gamma_series_transformed)
in_control_sigma = cp.estimate_sigma_from_MR(MR)
# in_control_sigma = gamma_series_transformed.std()
in_control_mean, in_control_sigma
x_ind_params = cp.x_ind_params(x = gamma_series_transformed, sigma = in_control_sigma, center=in_control_mean)
x_ind_params = x_ind_params.reset_index()
pf.plot_control_chart(
data=x_ind_params,
index='index',
obs='obs',
UCL='UCL',
center='Center',
LCL='LCL',
drawstyle='steps-mid',
title='Individual Control Chart for Transformed Distribution',
ylab='x',
xlab=None,
all_dates=False,
rot=0)
(x_ind_params['obs'] > x_ind_params['UCL']).sum() / len(x_ind_params['obs']) + \
(x_ind_params['obs'] < x_ind_params['LCL']).sum() / len(x_ind_params['obs'])
###Output
_____no_output_____
###Markdown
Back transform to original
###Code
x_ind_params2 = x_ind_params.copy()
x_ind_params2['obs'] = pt_fitted.inverse_transform(x_ind_params2['obs'].values.reshape(-1, 1))
x_ind_params2['UCL'] = pt_fitted.inverse_transform(x_ind_params2['UCL'].values.reshape(-1, 1))
x_ind_params2['Center'] = pt_fitted.inverse_transform(x_ind_params2['Center'].values.reshape(-1, 1))
x_ind_params2['LCL'] = pt_fitted.inverse_transform(x_ind_params2['LCL'].values.reshape(-1, 1))
pf.plot_control_chart(
data=x_ind_params2,
index='index',
obs='obs',
UCL='UCL',
center='Center',
LCL='LCL',
drawstyle='steps-mid',
title='Individual Control Chart for Gamma Distribution (Adjusted Control Limits)',
ylab='x',
xlab=None,
all_dates=False,
rot=0)
(x_ind_params2['obs'] > x_ind_params2['UCL']).sum() / len(x_ind_params2['obs']) + \
(x_ind_params2['obs'] < x_ind_params2['LCL']).sum() / len(x_ind_params2['obs'])
###Output
_____no_output_____
###Markdown
non-parametric method (sample quantiles)
###Code
alpha = (1- 0.997)/2
x_ind_params_q = x_ind_params0.copy()
x_ind_params_q['UCL'] = gamma_series.quantile(q=1-alpha)
x_ind_params_q['Center'] = gamma_series.quantile(q=0.5)
x_ind_params_q['LCL'] = gamma_series.quantile(q=alpha)
pf.plot_control_chart(
data=x_ind_params_q,
index='index',
obs='obs',
UCL='UCL',
center='Center',
LCL='LCL',
drawstyle='steps-mid',
title='Individual Control Chart for Gamma Distribution (non-parametric)',
ylab='x',
xlab=None,
all_dates=False,
rot=0)
(x_ind_params_q['obs'] > x_ind_params_q['UCL']).sum() / len(x_ind_params_q['obs']) + \
(x_ind_params_q['obs'] < x_ind_params_q['LCL']).sum() / len(x_ind_params_q['obs'])
###Output
_____no_output_____
###Markdown
non-parametric method (dist quantiles)
###Code
x_ind_params_qd = x_ind_params0.copy()
x_ind_params_qd['UCL'] = gamma.ppf(1-alpha, a)
x_ind_params_qd['Center'] = gamma.ppf(0.5, a)
x_ind_params_qd['LCL'] = gamma.ppf(alpha, a)
pf.plot_control_chart(
data=x_ind_params_q,
index='index',
obs='obs',
UCL='UCL',
center='Center',
LCL='LCL',
drawstyle='steps-mid',
title='Individual Control Chart for Gamma Distribution (non-parametric)',
ylab='x',
xlab=None,
all_dates=False,
rot=0)
(x_ind_params_qd['obs'] > x_ind_params_qd['UCL']).sum() / len(x_ind_params_qd['obs']) + \
(x_ind_params_qd['obs'] < x_ind_params_qd['LCL']).sum() / len(x_ind_params_qd['obs'])
###Output
_____no_output_____ |
codigo/notebook-tests.ipynb | ###Markdown
plotly
###Code
import plotly
import chart_studio
import plotly.graph_objects as go
import plotly.io as pio
df['created_at'].value_counts().sort_index()
data = [go.Scatter(x=df['created_at'].unique(), y=df['created_at'].value_counts().sort_index() )]
fig = go.Figure(data)
fig.show()
DF=pd.to_datetime(df['created_at'])
DF.head()
DF = DF.dt.floor('min')
DF.head()
unique_time=pd.to_datetime(DF.unique())
unique_time.__len__()
max_date=unique_time.max()
max_date
direction = './OutputStreaming_20191026-191509.csv'
import time
def getDf2plot(direction):
df = pd.read_csv(direction)
DF = pd.to_datetime(df['created_at']).dt.floor('min')
max_date = DF.max()
DF = pd.to_datetime(DF.loc[DF < max_date])
DF = DF.sort_index().value_counts()
data = {'date': DF.index, 'freq': DF.values}
data = pd.DataFrame(data).sort_values('date')
return data
data = getDf2plot(direction)
DF.sort_values().value_counts()
df.sort_values('created_at')
df[df['text'].str.contains('terminar')]
key_words = get_keywords()
df[df['text'].str.contains(key_words[0])].index
KWdic = {key_words[i]: df[df['text'].str.contains(key_words[i])].index for i in range(len(key_words))}
def get_KWdic(df):
'''
    returns a dictionary with the indices of the df rows that contain each keyword
'''
return {key_words[i]: df[df['text'].str.contains(key_words[i])].index for i in range(len(key_words))}
get_KWdic(df)
from utils import read_mongo
df = read_mongo('dbTweets', 'tweets_chile')
df.head()
from utils import parse_tweet, read_mongo
import pandas as pd
csv_1 = pd.read_csv('OutputStreaming_20191027-141746.csv')
csv_2 = pd.read_csv('OutputStreaming_20191030-210028.csv')
csv_1
csv_2
df = read_mongo('dbTweets', 'tweets_chile')
import numpy as np
from utils_app import get_username_list
politicos = get_username_list('data/Politicos-Twitter.csv')
np.where(df['screenName'] == politicos[9])[0]
###Output
_____no_output_____ |
Calculo_de_Rentabilidade.ipynb | ###Markdown
APP RENDA FIXA (Fixed Income App). Importing the Libraries
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Obtaining the Requested Data.
###Code
df = pd.read_csv('http://dados.cvm.gov.br/dados/FI/DOC/INF_DIARIO/DADOS/inf_diario_fi_202005.csv', sep=';')
df2 = pd.read_csv('http://dados.cvm.gov.br/dados/FI/DOC/INF_DIARIO/DADOS/inf_diario_fi_202006.csv', sep=';')
###Output
_____no_output_____
###Markdown
Displaying the Data in a DataFrame. Data for May
###Code
dadosmaio = df.loc[df['CNPJ_FUNDO'] == '28.504.479/0001-12'] # Creating a new DataFrame with the data for the requested CNPJ
dadosmaio # Displaying the DataFrame
###Output
_____no_output_____
###Markdown
Data for June
###Code
dadosjunho = df2.loc[df2['CNPJ_FUNDO'] == '28.504.479/0001-12'] # Creating a new DataFrame with the data for the requested CNPJ
dadosjunho # Displaying the DataFrame
###Output
_____no_output_____
###Markdown
Calculating the Return
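The absolute return computed below is just the percentage change in the quota value between the two dates:

$$\text{return} = \left(\frac{VL\_QUOTA_{\text{June}}}{VL\_QUOTA_{\text{May}}} - 1\right)\times 100$$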
###Code
calc = ((dadosjunho['VL_QUOTA'][250966] / dadosmaio['VL_QUOTA'][240002])-1)*100 # calculating the return
print("Rentabilidade Absoluta: {:.2f}".format(calc)) # Displaying the result to the user
###Output
_____no_output_____ |
ml/notebooks/AttentionTesting.ipynb | ###Markdown
To do:
- [X] handle unk words better - currently initializing to the average of all word embeddings like suggested [here](https://stackoverflow.com/questions/49239941/what-is-unk-in-the-pretrained-glove-vector-files-e-g-glove-6b-50d-txt) (see the sketch after this list)
- [X] make work with batches
- [ ] use different, better embeddings
- [X] use better tokenizer, like spacy or some huggingface transformer model
- [ ] train and save a good model
- [ ] visualize attentions
- [ ] make work with other datasets
- [ ] convert to .py script that runs with input file that determines which data and parameters to use
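A minimal sketch of that UNK initialization (assuming `embeddings` is a NumPy array of shape `(vocab_size, dim)` loaded from the pretrained GloVe file, which is not shown in this notebook):
```
import numpy as np

# Average of all pretrained word vectors, used as the <unk> vector
unk_vector = embeddings.mean(axis=0)
embeddings = np.vstack([embeddings, unk_vector])  # append <unk> as the last row
unk_index = embeddings.shape[0] - 1
```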
###Code
input_tensor_batch = input_tensors[0]
target_tensor_batch = target_tensors[0]
target_tensor = torch.stack(target_tensor_batch).reshape(len(input_tensor_batch))
input_tensor = torch.stack(input_tensor_batch)
input_tensor = input_tensor.reshape(len(input_tensor_batch), input_tensor.shape[2])
output, attention = model(input_tensor, return_attn=True)
train_df.iloc[0]
input_tensors[0][0]
attention[0].shape
tokenizer(train_df['text'].iloc[0])[attention[0].sum(axis=0).argmax().item()]
attn = attention[0].sum(axis=0).cpu().detach().numpy()
plt.matshow([attn], cmap='Reds')
sent = "It is raining."
tokens = tokenizer(sent)
output, attns = sent_pred(sent, model, vocab_dict, tokenizer, max_length, device, batch_size)
softmax = torch.nn.Softmax(dim=0)
pred_scores = softmax(torch.tensor(output[0]))
print(f"humorous : {pred_scores[0]*100:.2f}%, not : {pred_scores[1]*100:.2f}%")
attn, _ = attns[0].max(axis=0)
attn = attn.cpu().detach().numpy()
print(tokens)
plt.matshow([attn[:len(tokens)]], cmap='Reds');
y = attn[:len(tokens)]
x = np.arange(len(tokens))
labels = tokens
fig, ax = plt.subplots(figsize=(10, 7))
plt.plot(x, y, 'bo')
ax.set_ylabel("Attention")
ax.set_xlabel("Token Position")
for i, txt in enumerate(labels):
ax.annotate(txt, (x[i], y[i]), xytext=(x[i] + 0.015, y[i] + 0.015))
plt.savefig('Attention_funny.png')
###Output
_____no_output_____ |
examples/.ipynb_checkpoints/Compiler Demonstration-checkpoint.ipynb | ###Markdown
1) Imports
###Code
import sys
sys.path.insert(1, '/mnt/c/Users/conno/Documents/GitHub/MISTIQS/src')
from Heisenberg import Heisenberg
from ds_compiler import ds_compile
###Output
_____no_output_____
###Markdown
2) Create Heisenberg object
This object solely takes in your input file, so it contains the information about the system you are simulating and your preferences, including backend choice, compilation method, and others.
###Code
#First, create the Heisenberg object using the parameters specified in the input file. This defines the system to simulate
#and allows for the generation of circuits to simulate the time evolution of this system.
test_object=Heisenberg("TFIM_input_file.txt")
test_object.compile="n" #return uncompiled circuits for now
#Because we are working in the IBM backend in this example, run the connect_IBM() method of the object to connect to IBM's
#backend. This is required for both compilation and circuit execution, if desired.
#First time user of IBM's Quantum Experience API? Run the line below
# test_object.connect_IBM(api_key="insert your IBM Quantum Experience API key here")
#If you already run IBM Quantum Experience API jobs, run the following instead:
test_object.connect_IBM()
###Output
/home/cpowers/miniconda3/envs/py38/lib/python3.8/site-packages/qiskit/providers/ibmq/ibmqfactory.py:192: UserWarning: Timestamps in IBMQ backend properties, jobs, and job results are all now in local time instead of UTC.
warnings.warn('Timestamps in IBMQ backend properties, jobs, and job results '
###Markdown
3) Generate Quantum Circuits for Quantum Simulation of Your Physical System
Note: any warning messages about gate error values are due to qiskit's noise model building, not MISTIQS
###Code
test_object.generate_circuits()
uncompiled_circuits=test_object.return_circuits()
###Output
Generating timestep 0 circuit
Generating timestep 1 circuit
Generating timestep 2 circuit
Generating timestep 3 circuit
Generating timestep 4 circuit
Generating timestep 5 circuit
Generating timestep 6 circuit
Generating timestep 7 circuit
Generating timestep 8 circuit
Generating timestep 9 circuit
Generating timestep 10 circuit
Generating timestep 11 circuit
Generating timestep 12 circuit
Generating timestep 13 circuit
Generating timestep 14 circuit
Generating timestep 15 circuit
Generating timestep 16 circuit
Generating timestep 17 circuit
Generating timestep 18 circuit
Generating timestep 19 circuit
Generating timestep 20 circuit
Generating timestep 21 circuit
Generating timestep 22 circuit
Generating timestep 23 circuit
Generating timestep 24 circuit
Generating timestep 25 circuit
Generating timestep 26 circuit
Generating timestep 27 circuit
Generating timestep 28 circuit
Generating timestep 29 circuit
Generating timestep 30 circuit
Creating IBM quantum circuit objects...
IBM quantum circuit objects created
###Markdown
4) Run the circuits through the domain-specific quantum compiler
###Code
compiled_circuits=[]
for circuit in uncompiled_circuits:
compiled_circuits.append(ds_compile(circuit,'ibm'))
###Output
_____no_output_____
###Markdown
Now, let's compare this to the same circuits run through IBM's compiler
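The per-circuit bookkeeping in the cells below boils down to scanning each circuit's QASM string; a compact, hypothetical helper using the same counting convention (2 RX + 3 RZ per `u3`, 1 RX + 3 RZ per `u2`, 1 RZ per `u1`) might look like this:

```
def count_gates(qasm_str):
    # Hypothetical helper; mirrors the counting convention used in the cells below.
    counts = {"rx": 0, "rz": 0, "cx": 0}
    for line in qasm_str.split(";"):
        if "u3" in line:
            counts["rx"] += 2
            counts["rz"] += 3
        elif "u2" in line:
            counts["rx"] += 1
            counts["rz"] += 3
        elif "u1" in line:
            counts["rz"] += 1
        elif "cx" in line:
            counts["cx"] += 1
    return counts
```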
###Code
test_object.compile="y" #allow IBM compiler to transpile circuits
test_object.auto_ds_compile="n"
test_object.generate_circuits()
ibm_circuits=test_object.return_circuits()
import numpy as np
rxcount=0
rzcount=0
cxcount=0
rxplot=np.zeros(test_object.steps+1)
rzplot=np.zeros(test_object.steps+1)   # separate arrays, not views of rxplot
cxplot=np.zeros(test_object.steps+1)
totalplot=np.zeros(test_object.steps+1)
i=0
for circuit in ibm_circuits:
rxtemp=0
rztemp=0
cxtemp=0
total_gates=0
data=circuit.qasm().split(";")
for line in data:
if "u3" in line:
rxcount+=2
rxtemp+=2
rzcount+=3
rztemp+=3
total_gates+=5
elif "u2" in line:
rxcount+=1
rxtemp+=1
rzcount+=3
rztemp+=3
total_gates+=4
elif "u1" in line:
rzcount+=1
rztemp+=1
total_gates+=1
elif "cx" in line:
cxcount+=1
cxtemp+=1
total_gates+=1
rxplot[i]=rxtemp
rzplot[i]=rztemp
cxplot[i]=cxtemp
totalplot[i]=total_gates
i+=1
rxcount_ds=0
rzcount_ds=0
cxcount_ds=0
rxplot_ds=np.zeros(test_object.steps+1)
rzplot_ds=np.zeros(test_object.steps+1)   # separate arrays, not views of rxplot_ds
cxplot_ds=np.zeros(test_object.steps+1)
totalplot_ds=np.zeros(test_object.steps+1)
i=0
for circuit in compiled_circuits:
rxtemp_ds=0
rztemp_ds=0
cxtemp_ds=0
totalgates=0
data=circuit.qasm().split(";")
for line in data:
if "rx" in line:
rxcount_ds+=1
rxtemp_ds+=1
totalgates+=1
elif "rz" in line:
rzcount_ds+=1
rztemp_ds+=1
totalgates+=1
elif "cx" in line:
cxcount_ds+=1
cxtemp_ds+=1
totalgates+=1
rxplot_ds[i]=rxtemp_ds
rzplot_ds[i]=rztemp_ds
cxplot_ds[i]=cxtemp_ds
    totalplot_ds[i]=totalgates
i+=1
print("The IBM-compiled circuits contain {} RX gates, {} RZ gates, and {} CX gates\n".format(rxcount,rzcount,cxcount))
print("The DS-compiled circuits contain {} RX gates, {} RZ gates, and {} CX gates,\nfor a reduction by {}%, {}%, and {}%, respectively.".format(rxcount_ds,rzcount_ds,cxcount_ds, round(100*((rxcount-rxcount_ds)/rxcount),1),round(100*((rzcount-rzcount_ds)/rzcount),1),round(100*((cxcount-cxcount_ds)/cxcount),1)))
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
totaldiff = 100*((rxcount+rzcount+cxcount)-(rxcount_ds+rzcount_ds+cxcount_ds))/(rxcount+rzcount+cxcount)
print(totaldiff)
###Code
fig,ax=plt.subplots()
plt.plot(totalplot,'k--')
plt.plot(totalplot_ds,'b')
plt.xlabel("Simulation Timestep",fontsize=14)
plt.ylabel("Total Gate Count",fontsize=14)
plt.legend(["IBM Native Compiler","DS Compiler"])
plt.tight_layout()
every_nth = 2
for n, label in enumerate(ax.xaxis.get_ticklabels()):
if (n+1) % every_nth != 0:
label.set_visible(False)
every_nth = 2
for n, label in enumerate(ax.yaxis.get_ticklabels()):
if (n+1) % every_nth != 0:
label.set_visible(False)
###Output
_____no_output_____ |
notebooks/match_visualization.ipynb | ###Markdown
Match Visualization
In this notebook, we visualize player data by animating the players using matplotlib.
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
import matplotlib
from IPython import display
from itertools import count
import matplotlib.lines as mlines
import os
from utils import remove_missing_raw_rows
#freekick
mid = 20157
#
#mid = 20183
raw_df = pd.read_csv('../data/processed/{}_raw.csv'.format(mid))
try:
event_df = pd.read_csv('../data/event/{}_event.csv'.format(mid))
hasball_df = pd.read_csv('../data/hasball/{}_hasball.csv'.format(mid))
except FileNotFoundError:
event_df = None
hasball_df = None
event_names = pd.read_csv('../doc/event_definitions_en.csv')
event_names = pd.Series(data=event_names['NAME'].values, index=event_names['ID'].values)
max_name_len = max(len(name) for _, name in event_names.iteritems())
event_counts = np.zeros(4)
event_file_list = os.listdir('../data/event')
for filename in event_file_list:
df = pd.read_csv('../data/event/{}'.format(filename))
event_df[event_df['eventId'] == 93]
###Output
_____no_output_____
###Markdown
Visualization
###Code
%matplotlib inline
font = {#'family' : 'normal',
#'weight' : 'bold',
'size' : 30}
matplotlib.rc('font', **font)
# freekick
# 20157
#out_file = 'freekick_visualization.pdf'
#h, m, s = 2, 91, 5 # (half, minute, second)
# corner
# 20157
#out_file = 'corner_visualization.pdf'
#h, m, s = 2, 47, 0 # (half, minute, second)
# goal
# 20157
#out_file = 'goal_visualization.pdf'
h, m, s = 2, 78, 10 # (half, minute, second)
# penalty
# 20157
#h, m, s = 2, 77, 59 # (half, minute, second)
#out_file = 'penalty_visualization.pdf'
width, height = 105, 68
plt.figure(figsize=(10, 10*height/width))
for i in count():
beg_sec = 60*m + s
tot_sec = beg_sec + i
minute = tot_sec//60
second = tot_sec%60
event_name = 'No Event'
if hasball_df is not None:
hasball = hasball_df[
(hasball_df['half'] == h) &
(hasball_df['minute'] == minute) &
(hasball_df['second'] == second)]
teamPoss = hasball['teamPoss'].iloc[0]
if teamPoss == 1:
event_name = 'Home Possession'
elif teamPoss == 0:
event_name = 'Away Possession'
elif event_df is not None:
event = event_df[
(event_df['half'] == h) &
(event_df['minute'] == minute) &
(event_df['second'] == second)]
event_name = event_names[event['eventId'].iloc[0]] if not event.empty \
else 'Game Stop'
sec = raw_df[
(raw_df['half'] == h) &
(raw_df['minute'] == minute) &
(raw_df['second'] == second)]
home = sec[sec['teamType'] == 1][['x', 'y']].values
away = sec[sec['teamType'] == 2][['x', 'y']].values
ref = sec[sec['teamType'] == 3][['x', 'y']].values
plt.xlim([0, width])
plt.ylim([0, height])
plt.xlabel('Distance (m)')
plt.ylabel('Distance (m)')
size = 350
ax = plt.gca()
mid_circ = plt.Circle((52.5, 34), 9.15, color='black', fill=False, linewidth=0.6, zorder=1)
left_pen_circ = plt.Circle((11, 34), 0.3, color='black', fill=True, linewidth=0.6, zorder=1)
right_pen_circ = plt.Circle((94, 34), 0.3, color='black', fill=True, linewidth=0.6, zorder=1)
# middle line
ax.add_line(mlines.Line2D((52, 52), (0, 68), color='black', linewidth=0.6, zorder=1))
# circles
ax.add_artist(mid_circ)
ax.add_artist(left_pen_circ)
ax.add_artist(right_pen_circ)
# left part
ax.add_line(mlines.Line2D((0, 16.5), (13.84, 13.84), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((0, 16.5), (54.16, 54.16), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((16.5, 16.5), (13.84, 54.16), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((0, 5.5), (24.84, 24.84), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((0, 5.5), (43.16, 43.16), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((5.5, 5.5), (24.84, 43.16), color='black', linewidth=0.6, zorder=1))
# right part
ax.add_line(mlines.Line2D((88.5, 105), (13.84, 13.84), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((88.5, 105), (54.16, 54.16), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((88.5, 88.5), (13.84, 54.16), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((99.5, 105), (24.84, 24.84), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((99.5, 105), (43.16, 43.16), color='black', linewidth=0.6, zorder=1))
ax.add_line(mlines.Line2D((99.5, 99.5), (24.84, 43.16), color='black', linewidth=0.6, zorder=1))
plt.scatter(*home.T, label='Home', s=size, zorder=2)
plt.scatter(*away.T, label='Away', s=size, zorder=2)
plt.scatter(*ref.T, label='Ref', c=[[0, 0.9, 0]], s=size, zorder=2)
plt.legend(loc=3) # lower left
#plt.legend(loc=4) # lower right
display.clear_output(wait=True)
display.display(plt.gcf())
time.sleep(0.3)
plt.gcf().clear()
#plt.savefig(out_file, format='pdf', bbox_inches='tight');
#break
###Output
_____no_output_____ |
05-combine_linguistic_and_cluster_features.ipynb | ###Markdown
Data processing
###Code
data = pd.read_csv('data_leave_one_out.csv')
data = data.rename(columns = {'intent':'Intent','query':'Questions'})
data = data.dropna().drop_duplicates()
data['Questions'] = data['Questions'].apply(lambda x: x.lower())
data['Questions'] = data['Questions'].apply(lambda x: re.sub(r'(!|\.|,|\(|\)|\[|\]|\\|\?|\$|#|%|\*)', '', x))
data['Intent'] = data['Intent'].astype(str)
data.head(2)
cluster_cols =['Promotions', 'Card Promotions',
'Open Account', 'OCBC Singapore Account', 'OCBC Securities Account ',
'OCBC Malaysia Account', 'NISP Account', 'Card Cancellation',
'Cancel Credit or Debit Card', 'Cancel ATM Card',
'Speak with a customer service officer', 'Request for sponsorship',
'Repricing', 'Auto Loan', 'Home Loan', 'Service Fee',
'Token Replacement', 'Token Activation', 'Hardware Token', 'Onetoken',
'Late fee waiver', 'Credit card interest waiver',
'Request for account fee waiver', 'Uplift suspension on accounts',
'Loan Enquiry', 'Home and property loans', 'Personal Loans',
'365 Card Application', 'Paying a cancelled credit card',
'How to close my account', 'Credit card transaction dispute',
'Change credit card limit', 'Increase credit card limit',
'Decrease credit card limit', 'Credit card application rejection',
'Rebates', 'How to redeem rewards', '360 Account interest dispute',
'Statement Request', 'Passbook savings account statement',
'Credit card statement', 'Debit card statement',
'Investment account statement', 'Update details']
len(cluster_cols)
cluster_cols =[
'Statement request',
'Passbook savings accounts',
'Card statements',
'Credit card statement',
'Debit card statement',
'Investment account statement',
'Home loan account statement',
'360 Account interest dispute',
'Change of billing cycle',
'Token Activation',
'Student Loan',
'Tuition fee loan',
'Education loan',
'Study loan',
'Car loan full settlement',
'Home loan repayment',
'Cancel Fund Transfer',
'Cancel credit card transaction',
'Credit Refund',
'Account opening for foreigners',
'Mobile Banking Issues',
'Account Fraud',
'Dormant Account Activation',
'CRS Enquiries',
'SRS Contribution',
'Dispute status',
'Give a compliment',
'File a complaint',
'Funds Transfer Status',
'Telegraphic transfer Status',
'Make a telegraphic transfer',
'Unable to log into internet banking',
'Card application status',
'Supplementary card application',
'Access codes for banking services',
'Interest or Late fee waiver',
'Annual Fee Waiver',
'SMS Alerts',
'Reset PIN',
'Unsuccessful card transaction',
'Card Renewal',
'Card activation for overseas use',
'Replace Card',
'Lost or compromised cards',
'Damaged or Faulty card',
'Promotions',
'Card Promotions',
'Open Account',
'Open OCBC Singapore Account',
'Open OCBC Securities Account ',
'Open OCBC Malaysia Account',
'Open NISP Account',
'Card Cancellation',
'Cancel Credit or Debit Card',
'Cancel ATM Card',
'Speak with a customer service officer',
'Request for sponsorship',
'Repricing',
'Reprice home loan',
'Service Fee',
'Token Replacement',
'Request for account fee waiver',
'Uplift suspension on accounts',
'Loan Enquiry',
'Card Application',
'Apply for credit or debit cards',
'Apply for ATM card',
'Paying a cancelled credit card',
'How to close my account',
'Card dispute',
'Change credit card limit',
'Increase credit card limit',
'Decrease credit card limit',
'Credit card application rejection',
'Rebates',
'How to redeem rewards',
'Update details']
len(cluster_cols)
###Output
_____no_output_____
###Markdown
Get top3 clusters
###Code
data['clusters_top3'] = data.apply(lambda x: np.argsort(x[cluster_cols].values)[:3].tolist(), axis=1)
intents = cluster_cols # get all tickers
intent2index = {v: i for (i, v) in enumerate(intents)}
data['target'] = data['Intent'].apply(lambda x: intent2index[x])
top_clusters_cols = pd.DataFrame(data['clusters_top3'].values.tolist(),columns = ['clusters_1','clusters_2','clusters_3']).reset_index(drop=True)
data = data.reset_index(drop=True)
data = pd.concat([data,top_clusters_cols], axis=1)
data.drop(columns = 'clusters_top3', inplace=True)
data.drop(columns = cluster_cols, inplace=True)
data.head()
# check cluster method accuracy - top 1
data[(data['clusters_1'] == data['target'])].shape[0] / data.shape[0]
# top 2 accuracy
data["exists"] = data.drop(data.columns[[0,1,2,3,6]], 1).isin(data["target"]).any(1)
sum(data['exists'])/ data.shape[0]
# top 3 accuracy
data["exists"] = data.drop(data.columns[[0,1,2,3]], 1).isin(data["target"]).any(1)
sum(data['exists'])/ data.shape[0]
###Output
_____no_output_____
###Markdown
Load NLP model and get stop word list
###Code
# load spacy model
import spacy
nlp = spacy.load("en_core_web_sm")
from spacy.lang.en.stop_words import STOP_WORDS
stop_words = list(STOP_WORDS)
###Output
_____no_output_____
###Markdown
Get keyword list
###Code
keywords = []
for intent in list(set(data['Intent'])):
keywords.extend(intent.strip().split(' '))
keyword_list = list(set(keywords))
keyword_list = [i.lower() for i in keyword_list if i.lower() not in stop_words]
keyword_list.append('nsip')
keyword_list_lemma = []
text = nlp(' '.join([w for w in keyword_list]))
for token in text:
keyword_list_lemma.append(token.lemma_)
print(keyword_list_lemma)
###Output
['interest', 'statement', 'apply', 'home', 'tuition', 'pay', 'issue', 'service', 'dispute', 'internet', 'unable', 'transfer', 'reprice', 'pin', 'home', 'enquiry', 'damage', 'card', 'service', 'rebate', 'singapore', 'rejection', 'officer', 'ocbc', 'compliment', 'update', 'use', 'atm', 'customer', 'open', 'study', 'credit', 'activation', 'sponsorship', 'application', 'malaysia', 'compromise', 'suspension', 'loan', 'passbook', 'request', 'telegraphic', 'enquiries', 'statement', 'investment', 'debit', 'dispute', 'statement', 'fee', 'cancellation', 'transfer', 'close', 'mobile', 'log', 'annual', 'billing', 'banking', 'repayment', 'waiver', 'cancel', 'reward', 'status', 'student', 'increase', 'contribution', 'application', 'speak', 'telegraphic', 'card', 'access', 'account', 'token', 'waiver', 'card', 'opening', 'file', 'code', 'foreigner', 'refund', 'education', 'supplementary', 'cancel', 'uplift', 'interest', 'replace', 'security', 'banking', 'decrease', 'lose', 'srs', 'savings', 'car', 'late', 'overseas', 'limit', 'loan', 'nisp', 'fund', 'activation', 'settlement', 'crs', 'detail', 'redeem', 'debit', 'request', 'fee', 'status', 'account', 'transaction', 'service', '360', 'alert', 'promotion', 'reprice', 'renewal', 'cycle', 'reset', 'faulty', 'change', 'credit', 'sms', 'account', 'fund', 'replacement', 'dormant', 'complaint', 'unsuccessful', 'fraud', 'nsip']
###Markdown
Get linguistic features
###Code
data['lemma'] = data['Questions'].apply(lambda x:' '.join([token.lemma_ for token in nlp(x) if token.lemma_ not in stop_words]))
data['keyword'] = data['lemma'].apply(lambda x: list(set([token.lemma_ for token in nlp(x) if token.lemma_ in keyword_list_lemma])))
data['noun'] = data['Questions'].apply(lambda x: list(set([token.lemma_ for token in nlp(x) if token.pos_ in ['NOUN','PROPN'] and token.lemma_ not in stop_words])))
data['verb'] = data['Questions'].apply(lambda x: list(set([token.lemma_ for token in nlp(x) if token.pos_ in ['VERB'] and token.lemma_ not in stop_words])))
data['adj'] = data['Questions'].apply(lambda x: list(set([token.lemma_ for token in nlp(x) if token.pos_ in ['ADJ'] and token.lemma_ not in stop_words])))
data['noun'] = data['noun'].apply(lambda x: ' '.join([w for w in x]))
data['verb'] = data['verb'].apply(lambda x: ' '.join([w for w in x]))
data['adj'] = data['adj'].apply(lambda x: ' '.join([w for w in x]))
data['keyword'] = data['keyword'].apply(lambda x: ' '.join([w for w in x]))
data[data.lemma == 'want open account']
###Output
_____no_output_____
###Markdown
K-fold Cross validation results
tf-idf + linguistic + cluster
###Code
# combine model score
countvector_cols = ['lemma', 'keyword', 'noun', 'verb']
top_clusters_cols = ['clusters_1', 'clusters_2', 'clusters_3']
feature_cols = countvector_cols + top_clusters_cols
# StratifiedKFold cross validation
skf = StratifiedKFold(n_splits=5)
skf.get_n_splits(data[feature_cols], data['target'])
print(skf)
cv_scores_top1 = []
cv_scores_top2 = []
cv_scores_top3 = []
final_result = pd.DataFrame()
for train_index, test_index in skf.split(data[feature_cols], data['target']):
# get train, test data for each chunk
X_train, X_test = data.loc[train_index,feature_cols], data.loc[test_index,feature_cols]
y_train, y_test = data.loc[train_index,'target'], data.loc[test_index,'target']
v_lemma = TfidfVectorizer()
x_train_lemma = v_lemma.fit_transform(X_train['lemma'])
x_test_lemma = v_lemma.transform(X_test['lemma'])
vocab_lemma = dict(v_lemma.vocabulary_)
v_keyword = TfidfVectorizer()
x_train_keyword = v_keyword.fit_transform(X_train['keyword'])
x_test_keyword = v_keyword.transform(X_test['keyword'])
vocab_keyword = dict(v_keyword.vocabulary_)
v_noun = TfidfVectorizer()
x_train_noun = v_noun.fit_transform(X_train['noun'])
x_test_noun = v_noun.transform(X_test['noun'])
vocab_noun = dict(v_noun.vocabulary_)
v_verb = TfidfVectorizer()
x_train_verb = v_verb.fit_transform(X_train['verb'])
x_test_verb = v_verb.transform(X_test['verb'])
vocab_verb = dict(v_verb.vocabulary_)
# v_adj = TfidfVectorizer()
# x_train_adj = v_adj.fit_transform(X_train['adj'])
# x_test_adj = v_adj.transform(X_test['adj'])
# vocab_adj = dict(v_adj.vocabulary_)
# combine all features
x_train_combined = hstack((x_train_lemma,x_train_keyword,x_train_noun,x_train_verb,X_train[top_clusters_cols].values),format='csr')
x_train_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()+top_clusters_cols
x_test_combined = hstack((x_test_lemma,x_test_keyword,x_test_noun,x_test_verb,X_test[top_clusters_cols].values),format='csr')
x_test_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()+top_clusters_cols
x_train_combined = pd.DataFrame(x_train_combined.toarray())
x_train_combined.columns = x_train_combined_columns
x_test_combined = pd.DataFrame(x_test_combined.toarray())
x_test_combined.columns = x_test_combined_columns
# build classifier
clf = RandomForestClassifier(max_depth=50, n_estimators=1000)
clf.fit(x_train_combined, y_train)
probs = clf.predict_proba(x_test_combined)
best_3 = pd.DataFrame(np.argsort(probs, axis=1)[:,-3:],columns=['top3','top2','top1'])
best_3['top1'] = clf.classes_[best_3['top1']]
best_3['top2'] = clf.classes_[best_3['top2']]
best_3['top3'] = clf.classes_[best_3['top3']]
result = pd.concat([best_3.reset_index(drop=True),pd.DataFrame(y_test).reset_index(drop=True), X_test[feature_cols].reset_index(drop=True)], axis=1)
final_result = pd.concat([final_result, result])
score_1 = result[result['top1'] == result['target']].shape[0] / result.shape[0]
score_2 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])].shape[0] / result.shape[0]
score_3 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])| (result['top3'] == result['target'])].shape[0] / result.shape[0]
cv_scores_top1.append(score_1)
cv_scores_top2.append(score_2)
cv_scores_top3.append(score_3)
print(np.mean(np.array((cv_scores_top1))), np.std(np.array((cv_scores_top1))))
print(np.mean(np.array((cv_scores_top2))), np.std(np.array((cv_scores_top2))))
print(np.mean(np.array((cv_scores_top3))), np.std(np.array((cv_scores_top3))))
###Output
StratifiedKFold(n_splits=5, random_state=None, shuffle=False)
###Markdown
Error analysis
###Code
intent_cols = ['target','top1','top2','top3', 'clusters_1',
'clusters_2', 'clusters_3']
for col in intent_cols:
final_result[col] = final_result[col].apply(lambda x: [intent for intent, index in intent2index.items() if index == x])
final_result[col] = final_result[col].apply(lambda x: x[0])
final_result[final_result['target'] != final_result['top1']]
###Output
_____no_output_____
###Markdown
tf-idf + linguistic
###Code
# combine model score
countvector_cols = ['lemma', 'keyword', 'noun', 'verb']
top_clusters_cols = ['clusters_1', 'clusters_2', 'clusters_3']
feature_cols = countvector_cols + top_clusters_cols
# StratifiedKFold cross validation
skf = StratifiedKFold(n_splits=5)
skf.get_n_splits(data[feature_cols], data['target'])
print(skf)
cv_scores_top1 = []
cv_scores_top2 = []
cv_scores_top3 = []
final_result = pd.DataFrame()
for train_index, test_index in skf.split(data[feature_cols], data['target']):
# get train, test data for each chunk
X_train, X_test = data.loc[train_index,feature_cols], data.loc[test_index,feature_cols]
y_train, y_test = data.loc[train_index,'target'], data.loc[test_index,'target']
v_lemma = TfidfVectorizer()
x_train_lemma = v_lemma.fit_transform(X_train['lemma'])
x_test_lemma = v_lemma.transform(X_test['lemma'])
vocab_lemma = dict(v_lemma.vocabulary_)
v_keyword = TfidfVectorizer()
x_train_keyword = v_keyword.fit_transform(X_train['keyword'])
x_test_keyword = v_keyword.transform(X_test['keyword'])
vocab_keyword = dict(v_keyword.vocabulary_)
v_noun = TfidfVectorizer()
x_train_noun = v_noun.fit_transform(X_train['noun'])
x_test_noun = v_noun.transform(X_test['noun'])
vocab_noun = dict(v_noun.vocabulary_)
v_verb = TfidfVectorizer()
x_train_verb = v_verb.fit_transform(X_train['verb'])
x_test_verb = v_verb.transform(X_test['verb'])
vocab_verb = dict(v_verb.vocabulary_)
# combine all features
x_train_combined = hstack((x_train_lemma,x_train_keyword,x_train_noun,x_train_verb),format='csr')
x_train_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()
x_test_combined = hstack((x_test_lemma,x_test_keyword,x_test_noun,x_test_verb),format='csr')
x_test_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()
x_train_combined = pd.DataFrame(x_train_combined.toarray())
x_train_combined.columns = x_train_combined_columns
x_test_combined = pd.DataFrame(x_test_combined.toarray())
x_test_combined.columns = x_test_combined_columns
# build classifier
clf = RandomForestClassifier(max_depth=50, n_estimators=800)
clf.fit(x_train_combined, y_train)
probs = clf.predict_proba(x_test_combined)
best_3 = pd.DataFrame(np.argsort(probs, axis=1)[:,-3:],columns=['top3','top2','top1'])
best_3['top1'] = clf.classes_[best_3['top1']]
best_3['top2'] = clf.classes_[best_3['top2']]
best_3['top3'] = clf.classes_[best_3['top3']]
result = pd.concat([best_3.reset_index(drop=True),pd.DataFrame(y_test).reset_index(drop=True), X_test[feature_cols].reset_index(drop=True)], axis=1)
final_result = pd.concat([final_result, result])
score_1 = result[result['top1'] == result['target']].shape[0] / result.shape[0]
score_2 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])].shape[0] / result.shape[0]
score_3 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])| (result['top3'] == result['target'])].shape[0] / result.shape[0]
cv_scores_top1.append(score_1)
cv_scores_top2.append(score_2)
cv_scores_top3.append(score_3)
print(np.mean(np.array((cv_scores_top1))), np.std(np.array((cv_scores_top1))))
print(np.mean(np.array((cv_scores_top2))), np.std(np.array((cv_scores_top2))))
print(np.mean(np.array((cv_scores_top3))), np.std(np.array((cv_scores_top3))))
###Output
StratifiedKFold(n_splits=5, random_state=None, shuffle=False)
###Markdown
Leave one out cross validation results
tf-idf + linguistic
###Code
from sklearn.model_selection import LeaveOneOut
# combine model score
countvector_cols = ['lemma', 'keyword', 'noun', 'verb']
top_clusters_cols = ['clusters_1', 'clusters_2', 'clusters_3']
feature_cols = countvector_cols + top_clusters_cols
# LeaveOneOut cross validation
loo = LeaveOneOut()
loo.get_n_splits(data[feature_cols], data['target'])
print(loo)
cv_scores_top1 = []
cv_scores_top2 = []
cv_scores_top3 = []
final_result = pd.DataFrame()
for train_index, test_index in loo.split(data[feature_cols], data['target']):
# print("TEST:", test_index)
# get train, test data for each chunk
X_train, X_test = data.loc[train_index,feature_cols], data.loc[test_index,feature_cols]
y_train, y_test = data.loc[train_index,'target'], data.loc[test_index,'target']
v_lemma = TfidfVectorizer()
x_train_lemma = v_lemma.fit_transform(X_train['lemma'])
x_test_lemma = v_lemma.transform(X_test['lemma'])
vocab_lemma = dict(v_lemma.vocabulary_)
v_keyword = TfidfVectorizer()
x_train_keyword = v_keyword.fit_transform(X_train['keyword'])
x_test_keyword = v_keyword.transform(X_test['keyword'])
vocab_keyword = dict(v_keyword.vocabulary_)
v_noun = TfidfVectorizer()
x_train_noun = v_noun.fit_transform(X_train['noun'])
x_test_noun = v_noun.transform(X_test['noun'])
vocab_noun = dict(v_noun.vocabulary_)
v_verb = TfidfVectorizer()
x_train_verb = v_verb.fit_transform(X_train['verb'])
x_test_verb = v_verb.transform(X_test['verb'])
vocab_verb = dict(v_verb.vocabulary_)
# combine all features
x_train_combined = hstack((x_train_lemma,x_train_keyword,x_train_noun,x_train_verb),format='csr')
x_train_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()
x_test_combined = hstack((x_test_lemma,x_test_keyword,x_test_noun,x_test_verb),format='csr')
x_test_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()
x_train_combined = pd.DataFrame(x_train_combined.toarray())
x_train_combined.columns = x_train_combined_columns
x_test_combined = pd.DataFrame(x_test_combined.toarray())
x_test_combined.columns = x_test_combined_columns
# build classifier
clf = RandomForestClassifier(max_depth=50, n_estimators=1000)
clf.fit(x_train_combined, y_train)
probs = clf.predict_proba(x_test_combined)
best_3 = pd.DataFrame(np.argsort(probs, axis=1)[:,-3:],columns=['top3','top2','top1'])
best_3['top1'] = clf.classes_[best_3['top1']]
best_3['top2'] = clf.classes_[best_3['top2']]
best_3['top3'] = clf.classes_[best_3['top3']]
result = pd.concat([best_3.reset_index(drop=True),pd.DataFrame(y_test).reset_index(drop=True), X_test[feature_cols].reset_index(drop=True)], axis=1)
final_result = pd.concat([final_result, result])
score_1 = result[result['top1'] == result['target']].shape[0] / result.shape[0]
score_2 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])].shape[0] / result.shape[0]
score_3 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])| (result['top3'] == result['target'])].shape[0] / result.shape[0]
cv_scores_top1.append(score_1)
cv_scores_top2.append(score_2)
cv_scores_top3.append(score_3)
print(np.mean(np.array((cv_scores_top1))), np.std(np.array((cv_scores_top1))))
print(np.mean(np.array((cv_scores_top2))), np.std(np.array((cv_scores_top2))))
print(np.mean(np.array((cv_scores_top3))), np.std(np.array((cv_scores_top3))))
###Output
LeaveOneOut()
TEST: [0]
TEST: [1]
TEST: [2]
TEST: [3]
TEST: [4]
TEST: [5]
TEST: [6]
TEST: [7]
TEST: [8]
TEST: [9]
TEST: [10]
TEST: [11]
TEST: [12]
TEST: [13]
TEST: [14]
TEST: [15]
TEST: [16]
TEST: [17]
TEST: [18]
TEST: [19]
TEST: [20]
TEST: [21]
TEST: [22]
TEST: [23]
TEST: [24]
TEST: [25]
TEST: [26]
TEST: [27]
TEST: [28]
TEST: [29]
TEST: [30]
TEST: [31]
TEST: [32]
TEST: [33]
TEST: [34]
TEST: [35]
TEST: [36]
TEST: [37]
TEST: [38]
TEST: [39]
TEST: [40]
TEST: [41]
TEST: [42]
TEST: [43]
TEST: [44]
TEST: [45]
TEST: [46]
TEST: [47]
TEST: [48]
TEST: [49]
TEST: [50]
TEST: [51]
TEST: [52]
TEST: [53]
TEST: [54]
TEST: [55]
TEST: [56]
TEST: [57]
TEST: [58]
TEST: [59]
TEST: [60]
TEST: [61]
TEST: [62]
TEST: [63]
TEST: [64]
TEST: [65]
TEST: [66]
TEST: [67]
TEST: [68]
TEST: [69]
TEST: [70]
TEST: [71]
TEST: [72]
TEST: [73]
TEST: [74]
TEST: [75]
TEST: [76]
TEST: [77]
TEST: [78]
TEST: [79]
TEST: [80]
TEST: [81]
TEST: [82]
TEST: [83]
TEST: [84]
TEST: [85]
TEST: [86]
TEST: [87]
TEST: [88]
TEST: [89]
TEST: [90]
TEST: [91]
TEST: [92]
TEST: [93]
TEST: [94]
TEST: [95]
TEST: [96]
TEST: [97]
TEST: [98]
TEST: [99]
TEST: [100]
TEST: [101]
TEST: [102]
TEST: [103]
TEST: [104]
TEST: [105]
TEST: [106]
TEST: [107]
TEST: [108]
TEST: [109]
TEST: [110]
TEST: [111]
TEST: [112]
TEST: [113]
TEST: [114]
TEST: [115]
TEST: [116]
TEST: [117]
TEST: [118]
TEST: [119]
TEST: [120]
TEST: [121]
TEST: [122]
TEST: [123]
TEST: [124]
TEST: [125]
TEST: [126]
TEST: [127]
TEST: [128]
TEST: [129]
TEST: [130]
TEST: [131]
TEST: [132]
TEST: [133]
TEST: [134]
TEST: [135]
TEST: [136]
TEST: [137]
TEST: [138]
TEST: [139]
TEST: [140]
TEST: [141]
TEST: [142]
TEST: [143]
TEST: [144]
TEST: [145]
TEST: [146]
TEST: [147]
TEST: [148]
TEST: [149]
TEST: [150]
TEST: [151]
TEST: [152]
TEST: [153]
TEST: [154]
TEST: [155]
TEST: [156]
TEST: [157]
TEST: [158]
TEST: [159]
TEST: [160]
TEST: [161]
TEST: [162]
TEST: [163]
TEST: [164]
TEST: [165]
TEST: [166]
TEST: [167]
TEST: [168]
TEST: [169]
TEST: [170]
TEST: [171]
TEST: [172]
TEST: [173]
TEST: [174]
TEST: [175]
TEST: [176]
TEST: [177]
TEST: [178]
TEST: [179]
TEST: [180]
TEST: [181]
TEST: [182]
TEST: [183]
TEST: [184]
TEST: [185]
TEST: [186]
TEST: [187]
TEST: [188]
TEST: [189]
TEST: [190]
TEST: [191]
TEST: [192]
TEST: [193]
TEST: [194]
TEST: [195]
TEST: [196]
TEST: [197]
TEST: [198]
TEST: [199]
TEST: [200]
TEST: [201]
TEST: [202]
TEST: [203]
TEST: [204]
TEST: [205]
TEST: [206]
TEST: [207]
TEST: [208]
TEST: [209]
TEST: [210]
TEST: [211]
TEST: [212]
TEST: [213]
TEST: [214]
TEST: [215]
TEST: [216]
TEST: [217]
TEST: [218]
TEST: [219]
TEST: [220]
TEST: [221]
TEST: [222]
TEST: [223]
TEST: [224]
TEST: [225]
TEST: [226]
TEST: [227]
TEST: [228]
TEST: [229]
TEST: [230]
TEST: [231]
TEST: [232]
TEST: [233]
TEST: [234]
TEST: [235]
TEST: [236]
TEST: [237]
TEST: [238]
TEST: [239]
TEST: [240]
TEST: [241]
TEST: [242]
TEST: [243]
TEST: [244]
TEST: [245]
TEST: [246]
TEST: [247]
TEST: [248]
TEST: [249]
TEST: [250]
TEST: [251]
TEST: [252]
TEST: [253]
TEST: [254]
TEST: [255]
TEST: [256]
TEST: [257]
TEST: [258]
TEST: [259]
TEST: [260]
TEST: [261]
TEST: [262]
TEST: [263]
TEST: [264]
TEST: [265]
TEST: [266]
TEST: [267]
TEST: [268]
TEST: [269]
TEST: [270]
TEST: [271]
TEST: [272]
TEST: [273]
TEST: [274]
TEST: [275]
TEST: [276]
TEST: [277]
TEST: [278]
TEST: [279]
TEST: [280]
TEST: [281]
TEST: [282]
TEST: [283]
TEST: [284]
TEST: [285]
TEST: [286]
TEST: [287]
TEST: [288]
TEST: [289]
TEST: [290]
TEST: [291]
TEST: [292]
TEST: [293]
TEST: [294]
TEST: [295]
TEST: [296]
TEST: [297]
TEST: [298]
TEST: [299]
TEST: [300]
TEST: [301]
TEST: [302]
TEST: [303]
TEST: [304]
TEST: [305]
TEST: [306]
TEST: [307]
TEST: [308]
TEST: [309]
TEST: [310]
TEST: [311]
TEST: [312]
TEST: [313]
TEST: [314]
TEST: [315]
TEST: [316]
TEST: [317]
TEST: [318]
TEST: [319]
TEST: [320]
TEST: [321]
TEST: [322]
TEST: [323]
TEST: [324]
TEST: [325]
TEST: [326]
TEST: [327]
TEST: [328]
TEST: [329]
TEST: [330]
TEST: [331]
TEST: [332]
TEST: [333]
TEST: [334]
TEST: [335]
TEST: [336]
TEST: [337]
TEST: [338]
TEST: [339]
TEST: [340]
TEST: [341]
TEST: [342]
TEST: [343]
TEST: [344]
TEST: [345]
TEST: [346]
TEST: [347]
TEST: [348]
TEST: [349]
TEST: [350]
TEST: [351]
TEST: [352]
TEST: [353]
TEST: [354]
TEST: [355]
TEST: [356]
TEST: [357]
TEST: [358]
TEST: [359]
TEST: [360]
TEST: [361]
TEST: [362]
TEST: [363]
TEST: [364]
TEST: [365]
TEST: [366]
TEST: [367]
TEST: [368]
TEST: [369]
TEST: [370]
TEST: [371]
TEST: [372]
TEST: [373]
TEST: [374]
TEST: [375]
TEST: [376]
TEST: [377]
TEST: [378]
TEST: [379]
TEST: [380]
TEST: [381]
TEST: [382]
TEST: [383]
TEST: [384]
TEST: [385]
TEST: [386]
TEST: [387]
TEST: [388]
TEST: [389]
TEST: [390]
TEST: [391]
TEST: [392]
TEST: [393]
TEST: [394]
TEST: [395]
TEST: [396]
TEST: [397]
TEST: [398]
TEST: [399]
TEST: [400]
TEST: [401]
TEST: [402]
TEST: [403]
TEST: [404]
TEST: [405]
TEST: [406]
TEST: [407]
TEST: [408]
TEST: [409]
TEST: [410]
TEST: [411]
TEST: [412]
TEST: [413]
TEST: [414]
TEST: [415]
TEST: [416]
TEST: [417]
TEST: [418]
TEST: [419]
TEST: [420]
TEST: [421]
TEST: [422]
TEST: [423]
TEST: [424]
TEST: [425]
TEST: [426]
TEST: [427]
TEST: [428]
TEST: [429]
TEST: [430]
TEST: [431]
TEST: [432]
TEST: [433]
TEST: [434]
TEST: [435]
TEST: [436]
TEST: [437]
TEST: [438]
TEST: [439]
TEST: [440]
TEST: [441]
TEST: [442]
TEST: [443]
TEST: [444]
TEST: [445]
TEST: [446]
TEST: [447]
TEST: [448]
TEST: [449]
TEST: [450]
TEST: [451]
TEST: [452]
TEST: [453]
TEST: [454]
TEST: [455]
TEST: [456]
TEST: [457]
TEST: [458]
TEST: [459]
TEST: [460]
TEST: [461]
TEST: [462]
TEST: [463]
TEST: [464]
TEST: [465]
TEST: [466]
TEST: [467]
TEST: [468]
TEST: [469]
TEST: [470]
TEST: [471]
TEST: [472]
TEST: [473]
TEST: [474]
TEST: [475]
TEST: [476]
TEST: [477]
TEST: [478]
TEST: [479]
TEST: [480]
TEST: [481]
TEST: [482]
TEST: [483]
TEST: [484]
TEST: [485]
TEST: [486]
TEST: [487]
TEST: [488]
TEST: [489]
TEST: [490]
TEST: [491]
TEST: [492]
TEST: [493]
TEST: [494]
TEST: [495]
TEST: [496]
TEST: [497]
TEST: [498]
TEST: [499]
TEST: [500]
TEST: [501]
TEST: [502]
TEST: [503]
TEST: [504]
TEST: [505]
TEST: [506]
TEST: [507]
TEST: [508]
TEST: [509]
TEST: [510]
TEST: [511]
TEST: [512]
TEST: [513]
TEST: [514]
TEST: [515]
TEST: [516]
TEST: [517]
TEST: [518]
TEST: [519]
TEST: [520]
TEST: [521]
TEST: [522]
TEST: [523]
TEST: [524]
TEST: [525]
TEST: [526]
TEST: [527]
TEST: [528]
TEST: [529]
TEST: [530]
TEST: [531]
TEST: [532]
TEST: [533]
TEST: [534]
TEST: [535]
TEST: [536]
TEST: [537]
TEST: [538]
TEST: [539]
TEST: [540]
TEST: [541]
TEST: [542]
TEST: [543]
TEST: [544]
TEST: [545]
TEST: [546]
TEST: [547]
TEST: [548]
TEST: [549]
TEST: [550]
TEST: [551]
TEST: [552]
TEST: [553]
TEST: [554]
TEST: [555]
TEST: [556]
TEST: [557]
TEST: [558]
TEST: [559]
TEST: [560]
TEST: [561]
TEST: [562]
TEST: [563]
TEST: [564]
TEST: [565]
TEST: [566]
TEST: [567]
TEST: [568]
TEST: [569]
TEST: [570]
TEST: [571]
TEST: [572]
TEST: [573]
TEST: [574]
TEST: [575]
TEST: [576]
TEST: [577]
TEST: [578]
TEST: [579]
TEST: [580]
TEST: [581]
TEST: [582]
TEST: [583]
TEST: [584]
TEST: [585]
TEST: [586]
TEST: [587]
TEST: [588]
TEST: [589]
TEST: [590]
TEST: [591]
TEST: [592]
TEST: [593]
TEST: [594]
TEST: [595]
TEST: [596]
TEST: [597]
TEST: [598]
TEST: [599]
TEST: [600]
TEST: [601]
TEST: [602]
TEST: [603]
TEST: [604]
TEST: [605]
TEST: [606]
TEST: [607]
TEST: [608]
TEST: [609]
TEST: [610]
TEST: [611]
TEST: [612]
TEST: [613]
TEST: [614]
TEST: [615]
TEST: [616]
TEST: [617]
TEST: [618]
TEST: [619]
TEST: [620]
TEST: [621]
TEST: [622]
TEST: [623]
TEST: [624]
TEST: [625]
TEST: [626]
TEST: [627]
TEST: [628]
TEST: [629]
TEST: [630]
TEST: [631]
TEST: [632]
TEST: [633]
TEST: [634]
TEST: [635]
TEST: [636]
TEST: [637]
TEST: [638]
TEST: [639]
TEST: [640]
0.7722308892355694 0.41939282653141685
0.890795631825273 0.31189545387242446
0.9344773790951638 0.24744576588536996
###Markdown
tf-idf + linguistic + cluster
###Code
from sklearn.model_selection import LeaveOneOut
# combine model score
countvector_cols = ['lemma', 'keyword', 'noun', 'verb']
top_clusters_cols = ['clusters_1', 'clusters_2', 'clusters_3']
feature_cols = countvector_cols + top_clusters_cols
# LeaveOneOut cross validation
loo = LeaveOneOut()
loo.get_n_splits(data[feature_cols], data['target'])
print(loo)
cv_scores_top1 = []
cv_scores_top2 = []
cv_scores_top3 = []
final_result = pd.DataFrame()
for train_index, test_index in loo.split(data[feature_cols], data['target']):
print("TEST:", test_index)
# get train, test data for each chunk
X_train, X_test = data.loc[train_index,feature_cols], data.loc[test_index,feature_cols]
y_train, y_test = data.loc[train_index,'target'], data.loc[test_index,'target']
v_lemma = TfidfVectorizer()
x_train_lemma = v_lemma.fit_transform(X_train['lemma'])
x_test_lemma = v_lemma.transform(X_test['lemma'])
vocab_lemma = dict(v_lemma.vocabulary_)
v_keyword = TfidfVectorizer()
x_train_keyword = v_keyword.fit_transform(X_train['keyword'])
x_test_keyword = v_keyword.transform(X_test['keyword'])
vocab_keyword = dict(v_keyword.vocabulary_)
v_noun = TfidfVectorizer()
x_train_noun = v_noun.fit_transform(X_train['noun'])
x_test_noun = v_noun.transform(X_test['noun'])
vocab_noun = dict(v_noun.vocabulary_)
v_verb = TfidfVectorizer()
x_train_verb = v_verb.fit_transform(X_train['verb'])
x_test_verb = v_verb.transform(X_test['verb'])
vocab_verb = dict(v_verb.vocabulary_)
# combine all features
x_train_combined = hstack((x_train_lemma,x_train_keyword,x_train_noun,x_train_verb,X_train[top_clusters_cols].values),format='csr')
x_train_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()+top_clusters_cols
x_test_combined = hstack((x_test_lemma,x_test_keyword,x_test_noun,x_test_verb,X_test[top_clusters_cols].values),format='csr')
x_test_combined_columns= v_lemma.get_feature_names()+v_keyword.get_feature_names()+v_noun.get_feature_names()+v_verb.get_feature_names()+top_clusters_cols
x_train_combined = pd.DataFrame(x_train_combined.toarray())
x_train_combined.columns = x_train_combined_columns
x_test_combined = pd.DataFrame(x_test_combined.toarray())
x_test_combined.columns = x_test_combined_columns
# build classifier
clf = RandomForestClassifier(max_depth=50, n_estimators=1000)
clf.fit(x_train_combined, y_train)
probs = clf.predict_proba(x_test_combined)
best_3 = pd.DataFrame(np.argsort(probs, axis=1)[:,-3:],columns=['top3','top2','top1'])
best_3['top1'] = clf.classes_[best_3['top1']]
best_3['top2'] = clf.classes_[best_3['top2']]
best_3['top3'] = clf.classes_[best_3['top3']]
result = pd.concat([best_3.reset_index(drop=True),pd.DataFrame(y_test).reset_index(drop=True), X_test[feature_cols].reset_index(drop=True)], axis=1)
final_result = pd.concat([final_result, result])
score_1 = result[result['top1'] == result['target']].shape[0] / result.shape[0]
score_2 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])].shape[0] / result.shape[0]
score_3 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])| (result['top3'] == result['target'])].shape[0] / result.shape[0]
cv_scores_top1.append(score_1)
cv_scores_top2.append(score_2)
cv_scores_top3.append(score_3)
print(np.mean(np.array((cv_scores_top1))), np.std(np.array((cv_scores_top1))))
print(np.mean(np.array((cv_scores_top2))), np.std(np.array((cv_scores_top2))))
print(np.mean(np.array((cv_scores_top3))), np.std(np.array((cv_scores_top3))))
###Output
LeaveOneOut()
TEST: [0]
TEST: [1]
TEST: [2]
TEST: [3]
TEST: [4]
TEST: [5]
TEST: [6]
TEST: [7]
TEST: [8]
TEST: [9]
TEST: [10]
TEST: [11]
TEST: [12]
TEST: [13]
TEST: [14]
TEST: [15]
TEST: [16]
TEST: [17]
TEST: [18]
TEST: [19]
TEST: [20]
TEST: [21]
TEST: [22]
TEST: [23]
TEST: [24]
TEST: [25]
TEST: [26]
TEST: [27]
TEST: [28]
TEST: [29]
TEST: [30]
TEST: [31]
TEST: [32]
TEST: [33]
TEST: [34]
TEST: [35]
TEST: [36]
TEST: [37]
TEST: [38]
TEST: [39]
TEST: [40]
TEST: [41]
TEST: [42]
TEST: [43]
TEST: [44]
TEST: [45]
TEST: [46]
TEST: [47]
TEST: [48]
TEST: [49]
TEST: [50]
TEST: [51]
TEST: [52]
TEST: [53]
TEST: [54]
TEST: [55]
TEST: [56]
TEST: [57]
TEST: [58]
TEST: [59]
TEST: [60]
TEST: [61]
TEST: [62]
TEST: [63]
TEST: [64]
TEST: [65]
TEST: [66]
TEST: [67]
TEST: [68]
TEST: [69]
TEST: [70]
TEST: [71]
TEST: [72]
TEST: [73]
TEST: [74]
TEST: [75]
TEST: [76]
TEST: [77]
TEST: [78]
TEST: [79]
TEST: [80]
TEST: [81]
TEST: [82]
TEST: [83]
TEST: [84]
TEST: [85]
TEST: [86]
TEST: [87]
TEST: [88]
TEST: [89]
TEST: [90]
TEST: [91]
TEST: [92]
TEST: [93]
TEST: [94]
TEST: [95]
TEST: [96]
TEST: [97]
TEST: [98]
TEST: [99]
TEST: [100]
TEST: [101]
TEST: [102]
TEST: [103]
TEST: [104]
TEST: [105]
TEST: [106]
TEST: [107]
TEST: [108]
TEST: [109]
TEST: [110]
TEST: [111]
TEST: [112]
TEST: [113]
TEST: [114]
TEST: [115]
TEST: [116]
TEST: [117]
TEST: [118]
TEST: [119]
TEST: [120]
TEST: [121]
TEST: [122]
TEST: [123]
TEST: [124]
TEST: [125]
TEST: [126]
TEST: [127]
TEST: [128]
TEST: [129]
TEST: [130]
TEST: [131]
TEST: [132]
TEST: [133]
TEST: [134]
TEST: [135]
TEST: [136]
TEST: [137]
TEST: [138]
TEST: [139]
TEST: [140]
TEST: [141]
TEST: [142]
TEST: [143]
TEST: [144]
TEST: [145]
TEST: [146]
TEST: [147]
TEST: [148]
TEST: [149]
TEST: [150]
TEST: [151]
TEST: [152]
TEST: [153]
TEST: [154]
TEST: [155]
TEST: [156]
TEST: [157]
TEST: [158]
TEST: [159]
TEST: [160]
TEST: [161]
TEST: [162]
TEST: [163]
TEST: [164]
TEST: [165]
TEST: [166]
TEST: [167]
TEST: [168]
TEST: [169]
TEST: [170]
TEST: [171]
TEST: [172]
TEST: [173]
TEST: [174]
TEST: [175]
TEST: [176]
TEST: [177]
TEST: [178]
TEST: [179]
TEST: [180]
TEST: [181]
TEST: [182]
TEST: [183]
TEST: [184]
TEST: [185]
TEST: [186]
TEST: [187]
TEST: [188]
TEST: [189]
TEST: [190]
TEST: [191]
TEST: [192]
TEST: [193]
TEST: [194]
TEST: [195]
TEST: [196]
TEST: [197]
TEST: [198]
TEST: [199]
TEST: [200]
TEST: [201]
TEST: [202]
TEST: [203]
TEST: [204]
TEST: [205]
TEST: [206]
TEST: [207]
TEST: [208]
TEST: [209]
TEST: [210]
TEST: [211]
TEST: [212]
TEST: [213]
TEST: [214]
TEST: [215]
TEST: [216]
TEST: [217]
TEST: [218]
TEST: [219]
TEST: [220]
TEST: [221]
TEST: [222]
TEST: [223]
TEST: [224]
TEST: [225]
TEST: [226]
TEST: [227]
TEST: [228]
TEST: [229]
TEST: [230]
TEST: [231]
TEST: [232]
TEST: [233]
TEST: [234]
TEST: [235]
TEST: [236]
TEST: [237]
TEST: [238]
TEST: [239]
TEST: [240]
TEST: [241]
TEST: [242]
TEST: [243]
TEST: [244]
TEST: [245]
TEST: [246]
TEST: [247]
TEST: [248]
TEST: [249]
TEST: [250]
TEST: [251]
TEST: [252]
TEST: [253]
TEST: [254]
TEST: [255]
TEST: [256]
TEST: [257]
TEST: [258]
TEST: [259]
TEST: [260]
TEST: [261]
TEST: [262]
TEST: [263]
TEST: [264]
TEST: [265]
TEST: [266]
TEST: [267]
TEST: [268]
TEST: [269]
TEST: [270]
TEST: [271]
TEST: [272]
TEST: [273]
TEST: [274]
TEST: [275]
TEST: [276]
TEST: [277]
TEST: [278]
TEST: [279]
TEST: [280]
TEST: [281]
TEST: [282]
TEST: [283]
TEST: [284]
TEST: [285]
TEST: [286]
TEST: [287]
TEST: [288]
TEST: [289]
TEST: [290]
TEST: [291]
TEST: [292]
TEST: [293]
TEST: [294]
TEST: [295]
TEST: [296]
TEST: [297]
TEST: [298]
TEST: [299]
TEST: [300]
TEST: [301]
TEST: [302]
TEST: [303]
TEST: [304]
TEST: [305]
TEST: [306]
TEST: [307]
TEST: [308]
TEST: [309]
TEST: [310]
TEST: [311]
TEST: [312]
TEST: [313]
TEST: [314]
TEST: [315]
TEST: [316]
TEST: [317]
TEST: [318]
TEST: [319]
TEST: [320]
TEST: [321]
TEST: [322]
TEST: [323]
TEST: [324]
TEST: [325]
TEST: [326]
TEST: [327]
TEST: [328]
TEST: [329]
TEST: [330]
TEST: [331]
TEST: [332]
TEST: [333]
TEST: [334]
TEST: [335]
TEST: [336]
TEST: [337]
TEST: [338]
TEST: [339]
TEST: [340]
TEST: [341]
TEST: [342]
TEST: [343]
TEST: [344]
TEST: [345]
TEST: [346]
TEST: [347]
TEST: [348]
TEST: [349]
TEST: [350]
TEST: [351]
TEST: [352]
TEST: [353]
TEST: [354]
TEST: [355]
TEST: [356]
TEST: [357]
TEST: [358]
TEST: [359]
TEST: [360]
TEST: [361]
TEST: [362]
TEST: [363]
TEST: [364]
TEST: [365]
TEST: [366]
TEST: [367]
TEST: [368]
TEST: [369]
TEST: [370]
TEST: [371]
TEST: [372]
TEST: [373]
TEST: [374]
TEST: [375]
TEST: [376]
TEST: [377]
TEST: [378]
TEST: [379]
TEST: [380]
TEST: [381]
TEST: [382]
TEST: [383]
TEST: [384]
TEST: [385]
TEST: [386]
TEST: [387]
TEST: [388]
TEST: [389]
TEST: [390]
TEST: [391]
TEST: [392]
TEST: [393]
TEST: [394]
TEST: [395]
TEST: [396]
TEST: [397]
TEST: [398]
TEST: [399]
TEST: [400]
TEST: [401]
TEST: [402]
TEST: [403]
TEST: [404]
TEST: [405]
TEST: [406]
TEST: [407]
TEST: [408]
TEST: [409]
TEST: [410]
TEST: [411]
TEST: [412]
TEST: [413]
TEST: [414]
TEST: [415]
TEST: [416]
TEST: [417]
TEST: [418]
TEST: [419]
TEST: [420]
TEST: [421]
TEST: [422]
TEST: [423]
TEST: [424]
TEST: [425]
TEST: [426]
TEST: [427]
TEST: [428]
TEST: [429]
TEST: [430]
TEST: [431]
TEST: [432]
TEST: [433]
TEST: [434]
TEST: [435]
TEST: [436]
TEST: [437]
TEST: [438]
TEST: [439]
TEST: [440]
TEST: [441]
TEST: [442]
TEST: [443]
TEST: [444]
TEST: [445]
TEST: [446]
TEST: [447]
TEST: [448]
TEST: [449]
TEST: [450]
TEST: [451]
TEST: [452]
TEST: [453]
TEST: [454]
TEST: [455]
TEST: [456]
TEST: [457]
TEST: [458]
TEST: [459]
TEST: [460]
TEST: [461]
TEST: [462]
TEST: [463]
TEST: [464]
TEST: [465]
TEST: [466]
TEST: [467]
TEST: [468]
TEST: [469]
TEST: [470]
TEST: [471]
TEST: [472]
TEST: [473]
TEST: [474]
TEST: [475]
TEST: [476]
TEST: [477]
TEST: [478]
TEST: [479]
TEST: [480]
TEST: [481]
TEST: [482]
TEST: [483]
TEST: [484]
TEST: [485]
TEST: [486]
TEST: [487]
TEST: [488]
TEST: [489]
TEST: [490]
TEST: [491]
TEST: [492]
TEST: [493]
TEST: [494]
TEST: [495]
TEST: [496]
TEST: [497]
TEST: [498]
TEST: [499]
TEST: [500]
TEST: [501]
TEST: [502]
TEST: [503]
TEST: [504]
TEST: [505]
TEST: [506]
TEST: [507]
TEST: [508]
TEST: [509]
TEST: [510]
TEST: [511]
TEST: [512]
TEST: [513]
TEST: [514]
TEST: [515]
TEST: [516]
TEST: [517]
TEST: [518]
TEST: [519]
TEST: [520]
TEST: [521]
TEST: [522]
TEST: [523]
TEST: [524]
TEST: [525]
TEST: [526]
TEST: [527]
TEST: [528]
TEST: [529]
TEST: [530]
TEST: [531]
TEST: [532]
TEST: [533]
TEST: [534]
TEST: [535]
TEST: [536]
TEST: [537]
TEST: [538]
TEST: [539]
TEST: [540]
TEST: [541]
TEST: [542]
TEST: [543]
TEST: [544]
TEST: [545]
TEST: [546]
TEST: [547]
TEST: [548]
TEST: [549]
TEST: [550]
TEST: [551]
TEST: [552]
TEST: [553]
TEST: [554]
TEST: [555]
TEST: [556]
TEST: [557]
TEST: [558]
TEST: [559]
TEST: [560]
TEST: [561]
TEST: [562]
TEST: [563]
TEST: [564]
TEST: [565]
TEST: [566]
TEST: [567]
TEST: [568]
TEST: [569]
TEST: [570]
TEST: [571]
TEST: [572]
TEST: [573]
TEST: [574]
TEST: [575]
TEST: [576]
TEST: [577]
TEST: [578]
TEST: [579]
TEST: [580]
TEST: [581]
TEST: [582]
TEST: [583]
TEST: [584]
TEST: [585]
TEST: [586]
TEST: [587]
TEST: [588]
TEST: [589]
TEST: [590]
TEST: [591]
TEST: [592]
TEST: [593]
TEST: [594]
TEST: [595]
TEST: [596]
TEST: [597]
TEST: [598]
TEST: [599]
TEST: [600]
TEST: [601]
TEST: [602]
TEST: [603]
TEST: [604]
TEST: [605]
TEST: [606]
TEST: [607]
TEST: [608]
TEST: [609]
TEST: [610]
TEST: [611]
TEST: [612]
TEST: [613]
TEST: [614]
TEST: [615]
TEST: [616]
TEST: [617]
TEST: [618]
TEST: [619]
TEST: [620]
TEST: [621]
TEST: [622]
TEST: [623]
TEST: [624]
TEST: [625]
TEST: [626]
TEST: [627]
TEST: [628]
TEST: [629]
TEST: [630]
TEST: [631]
TEST: [632]
TEST: [633]
TEST: [634]
TEST: [635]
TEST: [636]
TEST: [637]
TEST: [638]
TEST: [639]
TEST: [640]
0.7956318252730109 0.40323916462286735
0.9017160686427457 0.2976981696185195
0.9391575663026521 0.2390410675158805
###Markdown
Baseline model - tf-idf
###Code
# combine model score
feature_cols = ['Questions']
# StratifiedKFold cross validation
skf = StratifiedKFold(n_splits=5)
skf.get_n_splits(data[feature_cols], data['target'])
print(skf)
cv_scores = []
cv_scores_top1 = []
cv_scores_top2 = []
cv_scores_top3 = []
for train_index, test_index in skf.split(data[feature_cols], data['target']):
# get train, test data for each chunk
X_train, X_test = data.loc[train_index,feature_cols], data.loc[test_index,feature_cols]
y_train, y_test = data.loc[train_index,'target'], data.loc[test_index,'target']
v = TfidfVectorizer(ngram_range=(1, 1))
x_train = v.fit_transform(X_train['Questions'])
x_test = v.transform(X_test['Questions'])
x_train = pd.DataFrame(x_train.toarray())
x_test = pd.DataFrame(x_test.toarray())
x_train.columns = v.get_feature_names()
x_test.columns = v.get_feature_names()
# build classifier
clf = RandomForestClassifier(max_depth=25, n_estimators=300)
clf.fit(x_train, y_train)
score = clf.score(x_test, y_test)
cv_scores.append(score)
probs = clf.predict_proba(x_test)
best_3 = pd.DataFrame(np.argsort(probs, axis=1)[:,-3:],columns=['top3','top2','top1'])
best_3['top1'] = clf.classes_[best_3['top1']]
best_3['top2'] = clf.classes_[best_3['top2']]
best_3['top3'] = clf.classes_[best_3['top3']]
result = pd.concat([best_3.reset_index(drop=True),pd.DataFrame(y_test).reset_index(drop=True)], axis=1)
score_1 = result[result['top1'] == result['target']].shape[0] / result.shape[0]
score_2 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])].shape[0] / result.shape[0]
score_3 = result[(result['top1'] == result['target']) | (result['top2'] == result['target'])| (result['top3'] == result['target'])].shape[0] / result.shape[0]
cv_scores_top1.append(score_1)
cv_scores_top2.append(score_2)
cv_scores_top3.append(score_3)
print(np.mean(np.array((cv_scores_top1))), np.std(np.array((cv_scores_top1))))
print(np.mean(np.array((cv_scores_top2))), np.std(np.array((cv_scores_top2))))
print(np.mean(np.array((cv_scores_top3))), np.std(np.array((cv_scores_top3))))
###Output
StratifiedKFold(n_splits=5, random_state=None, shuffle=False)
|
vmc_ho.ipynb | ###Markdown
Variational Monte Carlo: Harmonic Oscillator
###Code
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
try: plt.style.use('./notebook.mplstyle')
except: pass
red,blue,green = '#e85c47','#4173b2','#7dcca4'
###Output
_____no_output_____
###Markdown
Hamiltonian

\begin{equation}
\hat{H} = -\frac{1}{2} \frac{d^2}{dx^2} + \frac{1}{2} x^2
\end{equation}

where $\hbar = \omega = m = 1$.

Local Energy and Transition Probability

\begin{equation}
E_L^\alpha (x) = \alpha + x^2\left(\frac{1}{2} - 2\alpha^2\right)
\end{equation}

\begin{equation}
\frac{\pi(x^\prime)}{\pi(x)} = \mathrm{e}^{-2\alpha({x^\prime}^2 - x^2)}
\end{equation}
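For reference, both formulas follow from the Gaussian trial wavefunction $\psi_\alpha(x) \propto \mathrm{e}^{-\alpha x^2}$ assumed here, with sampling density $\pi(x) = |\psi_\alpha(x)|^2$:

\begin{equation}
\frac{\hat{H}\psi_\alpha}{\psi_\alpha} = -\frac{1}{2}\left(4\alpha^2 x^2 - 2\alpha\right) + \frac{1}{2}x^2 = \alpha + x^2\left(\frac{1}{2} - 2\alpha^2\right),
\qquad
\frac{\pi(x^\prime)}{\pi(x)} = \mathrm{e}^{-2\alpha\left({x^\prime}^2 - x^2\right)}.
\end{equation}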
###Code
def EL(x,α):
return α + x**2*(0.5-2*α**2)
def transition_probability(x,x̄,α):
return np.exp(-2*α*(x̄**2-x**2))
def vmc(num_walkers,num_MC_steps,num_equil_steps,α,δ=1.0):
# initilaize walkers
walkers = -0.5 + np.random.rand(num_walkers)
# initialize energy and number of accepted updates
estimator = {'E':np.zeros(num_MC_steps-num_equil_steps)}
num_accepted = 0
for step in range(num_MC_steps):
# generate new walker positions
new_walkers = np.random.normal(loc=walkers, scale=δ, size=num_walkers)
# test new walkers
for i in range(num_walkers):
if np.random.random() < transition_probability(walkers[i],new_walkers[i],α):
num_accepted += 1
walkers[i] = new_walkers[i]
# measure energy
if step >= num_equil_steps:
measure = step-num_equil_steps
estimator['E'][measure] = EL(walkers[i],α)
# output the acceptance ratio
print('accept: %4.2f' % (num_accepted/(num_MC_steps*num_walkers)))
return estimator
###Output
_____no_output_____
###Markdown
Perform the VMC Simulation
###Code
α = 0.45
num_walkers = 100
num_MC_steps = 20000
num_equil_steps = 5000
np.random.seed(1173)
estimator = vmc(num_walkers,num_MC_steps,num_equil_steps,α)
###Output
accept: 0.62
###Markdown
Compute the average energy and standard error
###Code
from scipy.stats import sem
Ē,ΔĒ = np.average(estimator['E']),sem(estimator['E'])
print('Ē = %f ± %f' % (Ē,ΔĒ))
###Output
Ē = 0.501036 ± 0.000566
###Markdown
Do a brute-force minimization search over the variational parameter $\alpha$
###Code
Ēmin = []
ΔĒmin = []
α = np.array([0.45, 0.475, 0.5, 0.525, 0.55])
for cα in α:
estimator = vmc(num_walkers,num_MC_steps,num_equil_steps,cα)
Ē,ΔĒ = np.average(estimator['E']),sem(estimator['E'])
Ēmin.append(Ē)
ΔĒmin.append(ΔĒ)
print('%5.3f \t %7.5f ± %f' % (cα,Ē,ΔĒ))
###Output
accept: 0.62
0.450 0.50387 ± 0.000635
accept: 0.62
0.475 0.50012 ± 0.000291
accept: 0.61
0.500 0.50000 ± 0.000000
accept: 0.60
0.525 0.50151 ± 0.000280
accept: 0.59
0.550 0.50239 ± 0.000543
###Markdown
Compare VMC with the exact variational energy

\begin{equation}
E_v = \frac{\alpha}{2} + \frac{1}{8\alpha}
\end{equation}
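This expression is obtained by averaging the local energy over the sampling density $\pi(x) \propto \mathrm{e}^{-2\alpha x^2}$, for which $\langle x^2 \rangle = 1/(4\alpha)$:

\begin{equation}
E_v = \langle E_L^\alpha \rangle = \alpha + \left(\frac{1}{2} - 2\alpha^2\right)\frac{1}{4\alpha} = \frac{\alpha}{2} + \frac{1}{8\alpha},
\end{equation}

which is minimized at $\alpha = 1/2$, where it reproduces the exact ground-state energy $E_0 = 1/2$.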
###Code
cα = np.linspace(α[0],α[-1],1000)
plt.plot(cα,0.5*cα + 1/(8*cα), '-', linewidth=1, color=green, zorder=-10,
label=r'$\frac{\alpha}{2} + \frac{1}{8\alpha}$')
plt.errorbar(α,Ēmin,yerr=ΔĒmin, linestyle='None', marker='o', elinewidth=1.0,
markersize=6, markerfacecolor=blue, markeredgecolor=blue, ecolor=blue, label='VMC')
plt.xlabel(r'$\alpha$')
plt.ylabel('E');
plt.xlim(0.44,0.56)
plt.legend(loc='upper center')
###Output
_____no_output_____ |
11.MC_Dropout_Interpretability.ipynb | ###Markdown
I. Monte Carlo Dropout

1. Test Monte Carlo Dropout

If the number of stochastic forward passes per observation is large enough (>100 in our case), the test-set accuracy obtained with MC dropout prediction will be close to that of the normal (deterministic) prediction.
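As a rough illustration of the idea behind the `MC_Dropout` helpers used below (this sketch is not the package's implementation; all names here are placeholders), MC dropout prediction keeps the dropout layers active at test time and averages the softmax outputs of T stochastic forward passes:

```
import torch
import torch.nn.functional as F

def mc_dropout_predict_sketch(model, x, T=100):
    # Average softmax outputs over T stochastic forward passes with dropout kept on.
    model.train()  # keep dropout active at "test" time
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
    return probs.mean(dim=0)  # shape: (batch, num_classes)
```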
###Code
ModelTrainer.validate(model, nn.CrossEntropyLoss(), mnist_data_loader.testloader)
MC_Dropout.mc_dropout_validate(model, nn.CrossEntropyLoss(), mnist_data_loader.testloader, 1)
MC_Dropout.mc_dropout_validate(model, nn.CrossEntropyLoss(), mnist_data_loader.testloader, 100)
###Output
1 Stochastic forward passes. Average MC Dropout Test loss: 0.304. MC Dropout Accuracy: 91.00%
100 Stochastic forward passes. Average MC Dropout Test loss: 0.069. MC Dropout Accuracy: 97.92%
###Markdown
II. Uncertainty Example - Rotated Images
###Code
img = mnist_data_loader.get_image_for_class(1).cpu()
rotate_imgs = []
for i in range(0,-63,-7):
rotate_img = mnist_data_loader.rotate_image(1,i)
rotate_imgs.append(rotate_img)
MnistVisualizer.cat_images(rotate_imgs, 10, 8)
###Output
/media/storage/dl_env/bdluam-p4-interpretability-of-nn/implementation/nn_interpretability/visualization/mnist_visualizer.py:83: FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.
imgs_stack = np.hstack(image.squeeze(0).cpu().numpy().reshape(28, 28) for image in images)
###Markdown
1. Prediction without MC dropout
###Code
cat_size = torch.Tensor(10,1,28,28)
score, cls = MC_Dropout.predict_class(model, torch.cat(rotate_imgs, out=cat_size), top_k=2)
MnistVisualizer.result_for_each_rotation(rotate_imgs, cls, score, 2, 10, 20)
###Output
_____no_output_____
###Markdown
2. MC dropout prediction of the model
###Code
import matplotlib.pyplot as plt
from matplotlib.offsetbox import (TextArea, DrawingArea, OffsetImage, AnnotationBbox)
cls_logits, cls_prob = MC_Dropout.mc_dropout_predict(model, torch.cat(rotate_imgs, out=cat_size), 500)
plt.figure(figsize=(8, 5))
plt.scatter(np.tile(np.arange(1, 10), cls_logits.shape[0]), cls_logits[:, :, 1].flatten(), \
c='g', marker='_', linewidth=None, alpha=0.2, label='1');
plt.scatter(np.tile(np.arange(1, 10), cls_logits.shape[0]), cls_logits[:, :, 2].flatten(), \
c='b', marker='_', linewidth=None, alpha=0.2, label='2');
plt.scatter(np.tile(np.arange(1, 10), cls_logits.shape[0]), cls_logits[:, :, 8].flatten(), \
c='r', marker='_', linewidth=None, alpha=0.2, label='8');
plt.title('Class logits scatter');
plt.legend(framealpha=0.7);
plt.tight_layout();
MnistVisualizer.cat_images(rotate_imgs, 8, 1)
plt.figure(figsize=(8, 5))
plt.scatter(np.tile(np.arange(1, 10), cls_prob.shape[0]), cls_prob[:, :, 1].flatten(), \
c='g', marker='_', linewidth=None, alpha=0.2, label='1');
plt.scatter(np.tile(np.arange(1, 10), cls_prob.shape[0]), cls_prob[:, :, 2].flatten(), \
c='b', marker='_', linewidth=None, alpha=0.2, label='2');
plt.scatter(np.tile(np.arange(1, 10), cls_prob.shape[0]), cls_prob[:, :, 8].flatten(), \
c='r', marker='_', linewidth=None, alpha=0.2, label='8');
plt.title('Softmax output scatter');
plt.legend(framealpha=0.7);
plt.tight_layout();
MnistVisualizer.cat_images(rotate_imgs, 8, 1)
###Output
_____no_output_____
###Markdown
3. Use LRP to interpret uncertaintyCheck out how we use our **nn_interpretability** API with just two lines: calling `LRPMix()` and `interpretor.interpret()`. 3.1 Interpret with composite LRPWe first observe 1000 predictions. For each prediction we perform an LRP backward pass down to the bottom layer, starting from the **predicted class**. Heatmaps belonging to the same predicted class are summed together. From the following images we see that LRP fails to tell us the uncertainty of the model's predictions.
###Code
import torch.nn.functional as F
from nn_interpretability.interpretation.lrp.lrp_ab import LRPAlphaBeta
from nn_interpretability.interpretation.lrp.lrp_composite import LRPMix
import matplotlib.pyplot as plt
def uncertain_prediction_lrp_mix(model, image, T=1000):
model.train()
endpoint = torch.zeros_like(image[0], device=MC_Dropout.device).repeat(10,1,1,1)
image = image.to(MC_Dropout.device)
times = torch.zeros(10)
for _ in range(T):
        # Construct LRPMix
interpretor = LRPMix(model, 'predicted', None, 1, 0, 0)
heatmap = interpretor.interpret(image)
predicted = interpretor.get_class()
endpoint[predicted] += heatmap[0]
times[predicted] += 1
return endpoint.detach().cpu().numpy(), times
out, predict_times = uncertain_prediction_lrp_mix(model, rotate_imgs[3], 1000)
MnistVisualizer.dropout_heatmap_for_each_class(out, predict_times)
###Output
_____no_output_____
###Markdown
Another approach is to observe 1000 predictions for each class. Different from the above method, this time we perform the LRP backward pass starting from an **assigned class**. We sum the resulting heatmaps from all 1000 predictions even if the predicted class is not the same as the assigned class. This approach shows that for the classes our model tends to predict, the resulting heatmap contains more red pixels. In other words, the more heatmaps consisting mainly of red pixels we have, the more uncertain our prediction is.
###Code
def uncertain_all_lrp_mix(model, image, T=1000):
model.train()
endpoint = torch.zeros_like(image[0], device=MC_Dropout.device).repeat(10,1,1,1)
image = image.to(MC_Dropout.device)
times = torch.zeros(10)
for _ in range(T):
for i in range(10):
            # Construct LRPMix
interpretor = LRPMix(model, i, None, 1, 0, 0)
heatmap = interpretor.interpret(image)
predicted = interpretor.get_class()
endpoint[i] += heatmap[0]
if predicted == i: times[i] += 1
return endpoint.detach().cpu().numpy(), times
out, predict_times = uncertain_all_lrp_mix(model, rotate_imgs[1], 1000)
MnistVisualizer.dropout_heatmap_for_each_class(out, predict_times)
###Output
_____no_output_____
###Markdown
3.2 Interpret with LRP-α1β0 (Deep Taylor Decomposition)
###Code
from nn_interpretability.interpretation.lrp.lrp_ab import LRPAlphaBeta
def uncertain_all_lrp_ab(model, image, T=1000):
model.train()
endpoint = torch.zeros_like(image[0], device=MC_Dropout.device).repeat(10,1,1,1)
image = image.to(MC_Dropout.device)
times = torch.zeros(10)
for _ in range(T):
for i in range(10):
            # Construct LRPAlphaBeta
interpretor = LRPAlphaBeta(model, i, None, 1, 0, 0)
heatmap = interpretor.interpret(image)
predicted = interpretor.get_class()
endpoint[i] += heatmap[0]
if predicted == i: times[i] += 1
return endpoint.detach().cpu().numpy(), times
out, predict_times = uncertain_all_lrp_ab(model, rotate_imgs[3], 1000)
MnistVisualizer.dropout_heatmap_for_each_class(out, predict_times)
###Output
_____no_output_____
###Markdown
3.3 Interpret with DeepLIFT RevealCancelNow we try to introduce uncertainty to the `DeepLIFT` method and, more specifically, the `RevealCancel` rule. To do so, we choose an image and execute `DeepLIFT` 1000 times for it. In each iteration we add random noise to the image. If the noisy image is classified as class C, we add the result to the collection of results for class C. At the end, we display the composite result for each class. A deep blue image corresponds to a class for which no classifications occurred.
###Code
from nn_interpretability.interpretation.deeplift.deeplift import DeepLIFT, DeepLIFTRules
def uncertain_all_deeplift(model, image, T=1000):
model.train()
    # NOTE: the target class `i` comes from the notebook scope (left over from the loops in the previous cells)
    interpretor = DeepLIFT(model, i, None, DeepLIFTRules.RevealCancel)
device = MC_Dropout.device
endpoint = torch.zeros_like(image[0]).repeat(10,1,1,1).to(device)
image = image.to(device)
times = torch.zeros(10)
for j in range(T):
        noisy_img = image + torch.randn_like(image) * 0.08  # add Gaussian noise to the input
result = interpretor.interpret(noisy_img)
predicted = interpretor.last_prediction()
endpoint[predicted] += result[0]
times[predicted] += 1
return endpoint.detach().cpu().numpy(), times
img = mnist_data_loader.get_image_for_class(1)
out, predict_times = uncertain_all_deeplift(model, img, 1000)
MnistVisualizer.dropout_heatmap_for_each_class(out, predict_times)
###Output
_____no_output_____ |
Graduate Admission/.ipynb_checkpoints/Model-Width-EDA-checkpoint.ipynb | ###Markdown
`GRE Score` = GRE test score `TOEFL Score` = TOEFL score `University Rating` = university ranking `SOP` = quality of the Statement of Purpose `LOR` = quality of the Letter of Recommendation `CGPA` = Cumulative Grade Point Average (GPA) `Research` = has research experience `Chance of Admit` = probability of admission
###Code
from sklearn.model_selection import GridSearchCV
from jcopml.tuning import grid_search_params as gsp
from sklearn.model_selection import RandomizedSearchCV
from jcopml.tuning import random_search_params as rsp
from jcopml.tuning.skopt import BayesSearchCV
from jcopml.tuning import bayes_search_params as bsp
from skopt import BayesSearchCV
from jcopml.plot import plot_residual
###Output
_____no_output_____
###Markdown
EDA
###Code
df.head()
num_feature = ['gre_score','toefl_score','sop','lor','cgpa']
cat_feature = ['university_rating','research']
print("numerical feature : ", num_feature)
print("categorical feature : ", cat_feature)
import matplotlib.pyplot as plt
import seaborn as sns
fig, ax = plt.subplots(nrows=5)
fig.set_size_inches(12,10)
index=0
for num_ftr in num_feature:
sns.distplot(df[num_ftr], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
fig, ax = plt.subplots(nrows=5)
fig.set_size_inches(12,10)
index=0
for num_ftr in num_feature:
sns.histplot(data=df, x=num_ftr, bins=11, ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
fig, ax = plt.subplots(nrows=5)
fig.set_size_inches(12,10)
index=0
for num_ftr in num_feature:
sns.scatterplot(x=df[num_ftr], y=df["chance_of_admit"], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
###Output
C:\Users\WIN10\miniconda3\envs\jcopml\lib\site-packages\ipykernel_launcher.py:9: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
if __name__ == '__main__':
###Markdown
We can see that there are 3 features with a strong positive correlation
###Code
fig, ax = plt.subplots(nrows=2)
fig.set_size_inches(12,5)
index=0
for num_ftr in ["sop","lor"]:
sns.scatterplot(y=df[num_ftr], x=df["chance_of_admit"], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
###Output
C:\Users\WIN10\miniconda3\envs\jcopml\lib\site-packages\ipykernel_launcher.py:9: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
if __name__ == '__main__':
###Markdown
Viewed the other way around, it turns out that chance_of_admit is positively correlated with sop and lor
###Code
fig, ax = plt.subplots(nrows=2)
fig.set_size_inches(12,10)
index=0
for cat_ftr in cat_feature:
sns.countplot(x=cat_ftr, data=df, ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
fig, ax = plt.subplots(nrows=2)
fig.set_size_inches(12,5)
index=0
for cat_ftr in cat_feature:
sns.scatterplot(y=df[cat_ftr], x=df["chance_of_admit"], ax=ax[index])
index+=1
fig.tight_layout()
fig.show()
###Output
C:\Users\WIN10\miniconda3\envs\jcopml\lib\site-packages\ipykernel_launcher.py:9: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
if __name__ == '__main__':
###Markdown
Korelasi
###Code
plot_correlation_matrix(df, "chance_of_admit", num_feature)
plot_correlation_ratio(df, catvar=cat_feature, numvar=["chance_of_admit"])
###Output
_____no_output_____
###Markdown
`Numerical`: based on the correlations, there are 3 features with strong correlation. The `Categorical` features, on the other hand, are not very strong; the research feature in particular is only weakly correlated.
###Code
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
X = df[feature]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
from sklearn.ensemble import RandomForestRegressor
preprocessor = ColumnTransformer([
('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), ["gre_score","toefl_score","cgpa"]),
('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['cgpa']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), one)
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['cgpa','university_rating']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
    ('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), ['cgpa']),
    ('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['toefl_score','cgpa','university_rating']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
    ('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), ['toefl_score', 'cgpa']),
    ('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
feature = ["gre_score", "toefl_score","cgpa", "university_rating"]
one = ['gre_score','toefl_score','cgpa','university_rating']
X = df[one]
y = df.chance_of_admit
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
preprocessor = ColumnTransformer([
    ('numeric', num_pipe(transform="yeo-johnson", scaling="standard"), ['gre_score', 'toefl_score', 'cgpa']),
    ('categoric', cat_pipe(encoder='ordinal'), ["university_rating"])
])
pipeline = Pipeline([
('prep', preprocessor),
('algo', RandomForestRegressor(n_jobs=-1, random_state=42))
])
model = RandomizedSearchCV(pipeline, rsp.rf_params, cv=3, n_iter=100, n_jobs=-1, verbose=1, random_state=42)
model.fit(X_train, y_train)
print(model.best_params_)
print(model.score(X_train, y_train), model.best_score_, model.score(X_test, y_test))
###Output
C:\Users\WIN10\miniconda3\envs\jcopml\lib\site-packages\jcopml\pipeline\_pipeline.py:65: UserWarning: Transformer has default standardization, so the scaling argument is neglected
warn("Transformer has default standardization, so the scaling argument is neglected")
###Markdown
The model with the best test score is obtained when the selected features are `cgpa` and `university rating`
###Code
for ftr in ["gre_score", "toefl_score", "cgpa"]:
    print(ftr)
###Output
_____no_output_____ |
notebooks/vitalaperiodic.ipynb | ###Markdown
vitalAperiodicThe vitalAperiodic table provides invasive vital sign data which is interfaced into eCareManager at irregular intervals. Unlike most tables in eICU-CRD, vitalAperiodic does not use an entity-attribute-value model, but rather has an individual column to capture each data element. Columns available include:* Blood pressures: nonInvasiveSystolic, nonInvasiveDiastolic, nonInvasiveMean* Cardiac output measures: cardiacOutput, cardiacInput* Systemic circulation measures: svr, svri, pvr, pvri* Pulmonary pressures: pulmonary artery occlusion pressure (paop)
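Because the table is already in this wide, one-column-per-signal layout, each measurement can be used directly, whereas an entity-attribute-value table would first need a pivot. A small illustrative sketch with made-up values (not the actual eICU schema):
```python
import pandas as pd

# wide layout (like vitalaperiodic): one column per measurement
wide = pd.DataFrame({
    'observationoffset': [10, 25],
    'noninvasivesystolic': [120, 118],
    'noninvasivediastolic': [80, 76],
})
print(wide['noninvasivesystolic'].mean())

# an entity-attribute-value layout would need a pivot before the same analysis
eav = pd.DataFrame({
    'observationoffset': [10, 10, 25, 25],
    'attribute': ['systolic', 'diastolic', 'systolic', 'diastolic'],
    'value': [120, 80, 118, 76],
})
print(eav.pivot(index='observationoffset', columns='attribute', values='value'))
```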
###Code
# Import libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import psycopg2
import getpass
import pdvega
# for configuring connection
from configobj import ConfigObj
import os
%matplotlib inline
# Create a database connection using settings from config file
config='../db/config.ini'
# connection info
conn_info = dict()
if os.path.isfile(config):
config = ConfigObj(config)
conn_info["sqluser"] = config['username']
conn_info["sqlpass"] = config['password']
conn_info["sqlhost"] = config['host']
conn_info["sqlport"] = config['port']
conn_info["dbname"] = config['dbname']
conn_info["schema_name"] = config['schema_name']
else:
conn_info["sqluser"] = 'postgres'
conn_info["sqlpass"] = ''
conn_info["sqlhost"] = 'localhost'
conn_info["sqlport"] = 5432
conn_info["dbname"] = 'eicu'
conn_info["schema_name"] = 'public,eicu_crd'
# Connect to the eICU database
print('Database: {}'.format(conn_info['dbname']))
print('Username: {}'.format(conn_info["sqluser"]))
if conn_info["sqlpass"] == '':
# try connecting without password, i.e. peer or OS authentication
try:
        if (conn_info["sqlhost"] == 'localhost') & (str(conn_info["sqlport"]) == '5432'):
con = psycopg2.connect(dbname=conn_info["dbname"],
user=conn_info["sqluser"])
else:
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"])
except:
conn_info["sqlpass"] = getpass.getpass('Password: ')
con = psycopg2.connect(dbname=conn_info["dbname"],
host=conn_info["sqlhost"],
port=conn_info["sqlport"],
user=conn_info["sqluser"],
password=conn_info["sqlpass"])
query_schema = 'set search_path to ' + conn_info['schema_name'] + ';'
###Output
Database: eicu
Username: alistairewj
###Markdown
Examine a single patient
###Code
patientunitstayid = 145467
query = query_schema + """
select *
from vitalaperiodic
where patientunitstayid = {}
order by observationoffset
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df.set_index('observationoffset', inplace=True)
df.sort_index(inplace=True)
df.head()
# list of columns to plot
vitals = ['noninvasivesystolic', 'noninvasivediastolic', 'noninvasivemean',
#'paop',
'cardiacoutput', 'cardiacinput',
'pvr', 'pvri']
# we exclude 'svr', 'svri' from the plot as their scale is too high
# we exclude 'paop' as it's all none
df[vitals].vgplot.line()
###Output
_____no_output_____
###Markdown
Hospitals with data available
###Code
query = query_schema + """
with t as
(
select distinct patientunitstayid
from vitalaperiodic
)
select
pt.hospitalid
, count(distinct pt.patientunitstayid) as number_of_patients
, count(distinct t.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join t
on pt.patientunitstayid = t.patientunitstayid
group by pt.hospitalid
""".format(patientunitstayid)
df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)
df[['data completion']].vgplot.hist(bins=10,
var_name='Number of hospitals',
value_name='Percent of patients with data')
###Output
_____no_output_____ |
notebooks/1.0-tg-initial-data-exploration.ipynb | ###Markdown
Anticipate the electricity consumption needs of buildings=============================================================Explanation of the variables:[City of Seattle](https://data.seattle.gov/dataset/2015-Building-Energy-Benchmarking/h7rm-fz6m) Importing the libraries
###Code
import os
import matplotlib.pyplot as plt
import missingno as msno
import numpy as np
import pandas as pd
import folium
from folium.plugins import FastMarkerCluster
from IPython.display import display
import ipywidgets as widgets
from ipywidgets import interact
import seaborn as sns
from src.utils.univar import UnivariateAnalysis
from src.utils.bivar import BivariateAnalysis
###Output
_____no_output_____
###Markdown
Loading the data
###Code
data = dict()
data_dir = os.path.abspath('../data/raw')
for file in os.listdir(data_dir):
if file.endswith('.csv'):
key = file.split('.')[0].replace('_', '-')
data[key] = pd.read_csv(os.path.join(data_dir, file))
# tired of writing the full name...
# bad code but I won't rewrite all cells...
for key in data.keys():
print(key)
year = int(key.split('-')[0])
exec(f"_{year} = '{year}-building-energy-benchmarking'")
from IPython.display import display
for i, df in zip(range(len(data.values())), data.values()):
with open(f'../reports/headers_{i}.html', 'w') as f:
f.write(df.head(1).T.to_html())
col_2015 = data[_2015].columns.values.tolist()
col_2016 = data[_2016].columns.values.tolist()
print(set(col_2016) - set(col_2015))
print(set(col_2015) - set(col_2016))
to_drop = [
'Zip Codes',
'City Council Districts',
'SPD Beats',
'2010 Census Tracts',
'Seattle Police Department Micro Community Policing Plan Areas',
]
data[_2015].drop(to_drop, axis=1, inplace=True)
columns = {'GHGEmissions(MetricTonsCO2e)': 'TotalGHGEmissions',
'GHGEmissionsIntensity(kgCO2e/ft2)': 'GHGEmissionsIntensity',
'Comment': 'Comments'}
data[_2015].rename(columns=columns, inplace=True)
location = data[_2015]['Location']
data[_2015]['Location'] = location.apply(eval)
data[_2015]['Latitude'] = location.apply(lambda x: float(x['latitude']))
data[_2015]['Longitude'] = location.apply(lambda x: float(x['longitude']))
address_2015 = data[_2015]['Location'].apply(lambda x: x['human_address'])
address_2015 = address_2015.map(eval)
for field in ['Address', 'State', 'City']:
data[_2015][field] = address_2015.apply(lambda x: x[field.lower()])
data[_2015]['ZipCode'] = address_2015.apply(lambda x: x['zip'])
col_2015 = data[_2015].columns.values.tolist()
col_2016 = data[_2016].columns.values.tolist()
print(set(col_2016) - set(col_2015))
print(set(col_2015) - set(col_2016))
data['2018-Building-Energy-Benchmarking'].rename(columns={'BuildingName': 'PropertyName'}, inplace=True)
data = pd.concat(data, sort=False)
data.rename({"2015-building-energy-benchmarking": 2015,
"2016-building-energy-benchmarking": 2016,
"2017-Building-Energy-Benchmarking": 2017,
"2018-Building-Energy-Benchmarking": 2018}, inplace=True)
data
cat_2017 = data.loc[2017, 'LargestPropertyUseType'].drop_duplicates().values
cat_2018 = data.loc[2018, 'LargestPropertyUseType'].drop_duplicates().values
mapper = dict.fromkeys(cat_2017)
for k in mapper.keys():
for cat in cat_2018:
if str(cat).startswith(str(k)):
mapper[k] = cat
for col in ['LargestPropertyUseType',
'SecondLargestPropertyUseType',
'ThirdLargestPropertyUseType']:
data[col] = data[col].apply(lambda x: mapper.get(x) if mapper.get(x) else x)
data.drop(['Location',
'DataYear',
'ComplianceIssue'], axis=1, inplace=True)
def strip_all_string(x):
if type(x) == str:
return x.capitalize().strip()
else:
return x
for col in data.columns:
data[col] = data[col].apply(strip_all_string)
###Output
_____no_output_____
###Markdown
Fixing the data types
###Code
categorical_fields = ['BuildingType', 'PrimaryPropertyType', 'Neighborhood',
'LargestPropertyUseType', 'SecondLargestPropertyUseType',
'ThirdLargestPropertyUseType']
for col in categorical_fields:
data[col] = data[col].astype('category')
for col in data.columns:
print(f"col : {col} dtype : {data[col].dtype}")
data.describe().T
data.dtypes.to_latex('../reports/latex-report/includes/variables.tex')
data['ZipCode'] = data['ZipCode'].map(float)
data.index.names = ['year', 'idx']
# data = data.loc[[2015, 2016]]
###Output
_____no_output_____
###Markdown
Location of the buildings
###Code
year_widget = widgets.Dropdown(options=[2015, 2016, 2017, 2018])
usage_type = data['LargestPropertyUseType'].sort_values()
usage_type = usage_type.drop_duplicates().tolist()
usage_type.insert(0, 'ALL')
usage_type.remove(np.nan)
usage_widget = widgets.Dropdown(options=usage_type)
@interact
def make_map(year=year_widget, usage=usage_widget):
location = data.loc[year][['Latitude', 'Longitude']].dropna().mean(axis=0).values
data_map = data.loc[year][['Latitude',
'Longitude',
'LargestPropertyUseType']].dropna()
if usage != 'ALL':
data_map = data_map[data_map['LargestPropertyUseType'] == usage]
m = folium.Map(location=location,
tiles='cartodbpositron',
zoom_start=11)
    # FastMarkerCluster expects a sequence of [lat, lon] pairs
    mc = FastMarkerCluster(data_map[['Latitude', 'Longitude']].values.tolist())
mc.add_to(m)
display(m)
###Output
_____no_output_____
###Markdown
Univariate analyses
###Code
data.columns = data.columns.map(lambda x: x.replace('(', '_'))
data.columns = data.columns.map(lambda x: x.replace(')', ''))
data.columns = data.columns.map(lambda x: x.replace('/', '_'))
dtypes = data.columns.map(lambda x: data[x].dtype.name)
opt = ['BuildingType',
'PrimaryPropertyType',
'Neighborhood',
'YearBuilt',
'NumberofBuildings',
'NumberofFloors',
'PropertyGFATotal',
'PropertyGFAParking',
'PropertyGFABuilding_s',
'LargestPropertyUseType',
'SecondLargestPropertyUseType',
'ThirdLargestPropertyUseType',
'ENERGYSTARScore',
'LargestPropertyUseTypeGFA',
'SecondLargestPropertyUseTypeGFA',
'ThirdLargestPropertyUseTypeGFA',
'SiteEUI_kBtu_sf',
'SiteEUIWN_kBtu_sf',
'SiteEnergyUse_kBtu',
'SiteEnergyUseWN_kBtu',
'SourceEUI_kBtu_sf',
'SourceEUIWN_kBtu_sf',
'TotalGHGEmissions',
'GHGEmissionsIntensity',
'SteamUse_kBtu',
'Electricity_kBtu',
'NaturalGas_kBtu']
variable_widget = widgets.Dropdown(options=opt)
y_widget = widgets.Dropdown(options=['ALL', 2015, 2016, 2017, 2018])
@interact
def univariate_analysis(var=variable_widget, year=y_widget):
if year == "ALL":
univar = UnivariateAnalysis(data)
else:
univar = UnivariateAnalysis(data.loc[year])
univar.make_analysis(var, orient='h', figsize=(8, 12))
###Output
_____no_output_____
###Markdown
Bivariate analyses Categorical vs Continuous
###Code
dtypes = list(map(lambda x: data[x].dtype.name, data.columns))
names_dtypes = zip(data.columns.values.tolist(), dtypes)
names_dtypes = [(x, y) for x, y in names_dtypes]
opt_1 = [x for x, y in names_dtypes if y in ['float64', 'int64']]
opt_2 = [x for x, y in names_dtypes if y == 'category']
outcome_variable = widgets.Dropdown(options=opt_1)
group = widgets.Dropdown(options=opt_2)
years = widgets.Dropdown(options=['ALL', 2015, 2016, 2017, 2018])
save = widgets.Checkbox(description="Save report")
@interact
def anova(outcome_variable=outcome_variable, group=group, year=years, save=save):
bivar = BivariateAnalysis(data)
if year != 'ALL':
bivar = BivariateAnalysis(data.loc[year])
bivar.anova(outcome_variable=outcome_variable,
group=group,
orient='h',
figsize=(8,12),
label_rotation=0)
if save:
pass
###Output
_____no_output_____
###Markdown
Categorical vs Categorical
###Code
dtypes = list(map(lambda x: data[x].dtype.name, data.columns))
names_dtypes = zip(data.columns.values.tolist(), dtypes)
names_dtypes = [(x, y) for x, y in names_dtypes]
variables = [x for x, y in names_dtypes if y in ['category']]
var_1 = widgets.Dropdown(options=variables)
var_2 = widgets.Dropdown(options=variables)
years_2 = widgets.Dropdown(options=['ALL', 2015, 2016])
@interact
def chi2_test(var_1=var_1, var_2=var_2, year=years_2):
variables = (var_1, var_2)
bivar = BivariateAnalysis(data)
if year != 'ALL':
bivar = BivariateAnalysis(data.loc[year])
bivar.chi_square_contingency(variables)
###Output
_____no_output_____
###Markdown
Continuous vs Continuous
###Code
dtypes = list(map(lambda x: data[x].dtype.name, data.columns))
names_dtypes = zip(data.columns.values.tolist(), dtypes)
names_dtypes = [(x, y) for x, y in names_dtypes]
variables = [x for x, y in names_dtypes if y in ['int64', 'float64']]
var_3 = widgets.Dropdown(options=variables)
var_4 = widgets.Dropdown(options=variables)
years_3 = widgets.Dropdown(options=['ALL', 2015, 2016])
@interact
def regression(x=var_3, y=var_4, year=years_3):
variables = (x, y)
bivar = BivariateAnalysis(data)
if year != 'ALL':
bivar = BivariateAnalysis(data.loc[year])
bivar.regression(variables=variables)
###Output
_____no_output_____
###Markdown
First conclusions---------------------------------- The variables related to the target (SiteEnergyUse) are: * the building floor area * the usage type of the floor areas It is preferable to predict the consumption normalized for 30-year weather conditions (`SiteEnergyUseWN_kBtu`) rather than the raw consumption (`SiteEnergyUse_kBtu`). Applying a log transform to the floor areas and the consumption values seems to give a better correlation (see the quick check sketched at the end of this notebook). Check point
###Code
data.to_pickle('../data/interim/full_data.pickle')
msno.matrix(data)
###Output
_____no_output_____ |
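###Markdown
Quick check of the log-transform remark aboveA rough sketch of the kind of check behind the conclusion on log transforms (column names as used in this notebook; rows with missing or non-positive values are dropped so the logarithm is defined):
```python
import numpy as np

cols = ['PropertyGFATotal', 'SiteEnergyUseWN_kBtu']
sub = data[cols].dropna()
sub = sub[(sub > 0).all(axis=1)]

print('raw correlation :', sub.corr().iloc[0, 1])
print('log correlation :', np.log(sub).corr().iloc[0, 1])
```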
06-cyclists_FSI_reduced_features_with_address.ipynb | ###Markdown
General cleaning until this point, remapping after
###Code
cleaned['isCitySpeed'] = cleaned.apply(lambda row: (row.Crash_Speed_Limit in ['60 km/h', '0 - 50 km/h']), axis=1)
cleaned['Crash_Lighting_Condition'] = cleaned['Crash_Lighting_Condition'].replace(['Darkness - Lighted', 'Darkness - Not lighted'], 'Darkness')
cleaned['combined_street'] = cleaned.apply(lambda row: ("%s - %s - %s" % (row.Loc_Suburb, row.Crash_Street, row.Crash_Street_Intersecting) ), axis=1)
cleaned['isClear'] = cleaned.apply(lambda row: (row.Crash_Atmospheric_Condition in ['Clear']), axis=1)
cleaned.columns
cleaned.corr()
cleaned.tail()
shuffle(cleaned).to_csv(os.path.join(data_path,'cyclist_any_address_reduced.csv'),
index=False,
columns=['Crash_Month', 'Crash_Day_Of_Week', 'Crash_Hour',
'combined_street', 'isCitySpeed',
'Crash_Road_Surface_Condition', 'Crash_Atmospheric_Condition',
'isClear',
'Crash_Road_Horiz_Align', 'Crash_Road_Vert_Align',
'Cyclist_FSI'
])
###Output
_____no_output_____ |
notebooks/intro-neural-networks/gradient-descent/GradientDescent.ipynb | ###Markdown
Implementing the Gradient Descent AlgorithmIn this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
###Output
_____no_output_____
###Markdown
Reading and plotting the data
###Code
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
###Output
_____no_output_____
###Markdown
TODO: Implementing the basic functionsHere is your turn to shine. Implement the following formulas, as explained in the text.- Sigmoid activation function$$\sigma(x) = \frac{1}{1+e^{-x}}$$- Output (prediction) formula$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$- Error function$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$- The function that updates the weights$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$$$ b \longrightarrow b + \alpha (y - \hat{y})$$
###Code
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
pass
# Output (prediction) formula
def output_formula(features, weights, bias):
pass
# Error (log-loss) formula
def error_formula(y, output):
pass
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
pass
###Output
_____no_output_____
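###Markdown
For reference, one possible way to fill in the functions above is a direct transcription of the formulas (a sketch; try the exercise yourself before peeking; `x`, `features` and `weights` are NumPy arrays):
```python
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def output_formula(features, weights, bias):
    return sigmoid(np.dot(features, weights) + bias)

def error_formula(y, output):
    return -y * np.log(output) - (1 - y) * np.log(1 - output)

def update_weights(x, y, weights, bias, learnrate):
    output = output_formula(x, weights, bias)
    d_error = y - output
    weights += learnrate * d_error * x
    bias += learnrate * d_error
    return weights, bias
```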
###Markdown
Training functionThis function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
###Code
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
# Converting the output (float) to boolean as it is a binary classification
# e.g. 0.95 --> True (= 1), 0.31 --> False (= 0)
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
###Output
_____no_output_____
###Markdown
Time to train the algorithm!When we run the function, we'll obtain the following:- 10 updates with the current training loss and accuracy- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.- A plot of the error function. Notice how it decreases as we go through more epochs.
###Code
train(X, y, epochs, learnrate, True)
###Output
_____no_output_____ |
Ch06_CNN/6-3tf.ipynb | ###Markdown
http://preview.d2l.ai/d2l-en/master/chapter_convolutional-neural-networks/padding-and-strides.html
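For the examples below, recall that with an $n_h \times n_w$ input, a $k_h \times k_w$ kernel, a total of $p_h$ and $p_w$ padded rows/columns, and strides $s_h \times s_w$, the output shape is $\lfloor (n_h - k_h + p_h + s_h)/s_h \rfloor \times \lfloor (n_w - k_w + p_w + s_w)/s_w \rfloor$; with `padding='same'` and stride 1, TensorFlow pads so that the output keeps the input's height and width.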
###Code
import tensorflow as tf
# We define a convenience function to calculate the convolutional layer. This
# function initializes the convolutional layer weights and performs
# corresponding dimensionality elevations and reductions on the input and
# output
def comp_conv2d(conv2d, X):
# Here (1, 1) indicates that the batch size and the number of channels
# are both 1
X = tf.reshape(X, (1, ) + X.shape + (1, ))
Y = conv2d(X)
# Exclude the first two dimensions that do not interest us: examples and
# channels
return tf.reshape(Y, Y.shape[1:3])
# Note that here 1 row or column is padded on either side, so a total of 2
# rows or columns are added
conv2d = tf.keras.layers.Conv2D(1, kernel_size=3, padding='same')
X = tf.random.uniform(shape=(8, 8))
comp_conv2d(conv2d, X).shape
# Here, we use a convolution kernel with a height of 5 and a width of 3 and no
# padding ('valid'), so the output loses 4 rows and 2 columns relative to the
# input
conv2d = tf.keras.layers.Conv2D(1, kernel_size=(5, 3), padding='valid')
comp_conv2d(conv2d, X).shape
conv2d = tf.keras.layers.Conv2D(1, kernel_size=3, padding='same', strides=2)
comp_conv2d(conv2d, X).shape
conv2d = tf.keras.layers.Conv2D(1, kernel_size=(3,5), padding='valid', strides=(3, 4))
comp_conv2d(conv2d, X).shape
###Output
_____no_output_____ |
Jupyter/betaplane_eq_cart.ipynb | ###Markdown
Coriolis term for horizontal velocity ($w \to 0 $) Vertical component of Earth's rotation vector
###Code
from sympy.abc import beta
f = (f1 + beta * R.y) * R.k
f
vort = curl(u)
vort_r = vort.dot(R.k)
div = divergence(u_h).doit()
corio = -f.cross(u)
corio
###Output
_____no_output_____
###Markdown
Coriolis term in vorticity equation $\nabla \times (-f \hat{e}_z \times \bf{u})$ is given by
###Code
curl(corio).dot(R.k).expand()
###Output
_____no_output_____
###Markdown
Divergence in cartesian coordinates is given by
###Code
div.expand()
###Output
_____no_output_____
###Markdown
$u_3 = 0$ in the above equation because it is the 2D divergence --- which can be separated from the expression for $\nabla \times (-f \hat{e}_z \times \bf{u})$
###Code
(- f * div).expand() - beta * u2
###Output
_____no_output_____
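###Markdown
As a hand check of the sympy expression above (with $f = f_0 + \beta y$, so $\partial_x f = 0$ and $\partial_y f = \beta$): since $-f\hat{e}_z\times\mathbf{u} = (f u_2,\, -f u_1,\, 0)$, the vertical component of its curl is$$\partial_x(-f u_1) - \partial_y(f u_2) = -f\left(\partial_x u_1 + \partial_y u_2\right) - \beta u_2 = -f\,\nabla\cdot\mathbf{u}_h - \beta u_2,$$in agreement with the result returned by sympy.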
###Markdown
Therefore the linearized vorticity equation can be written as (probably wrong):$$\partial_t \zeta = -\beta v - (f_0 + \beta y) \nabla . \mathbf{u}$$ Coriolis term in divergence equation $\nabla . (-f \hat{e}_z \times \bf{u})$ is given by
###Code
divergence(corio).expand()
###Output
_____no_output_____
###Markdown
Vertical component of vorticity (in the $\hat{e}_z$ direction), $\zeta$, in Cartesian coordinates is given by:
###Code
vort_r.doit()
-beta * u1 + (f * vort_r).expand().doit()
###Output
_____no_output_____ |
20. Adv OOPs Concepts/.ipynb_checkpoints/04. others methods __str__ & __repr__-checkpoint.ipynb | ###Markdown
https://www.python.org/download/releases/2.2/descrintro/__new__ https://www.youtube.com/watch?v=5cvM-crlDvg https://rszalski.github.io/magicmethods/ str() and repr() **str() is used for creating output for the end user, while repr() is mainly used for debugging and development. repr's goal is to be unambiguous and str's is to be readable.**
###Code
class Base:
c = 0
def __init__(self, a, b):
self.c = str(a + b)
@classmethod
def fromlist(cls, ls):
return cls(ls[0], ls[1])
import datetime
# a = Base.fromlist([4, 5])
a = datetime.datetime.today()
b = str(a)
str(a)
str(b)
repr(a)
repr(b)
print('{}'.format(str(a)))
print('{}'.format(str(b)))
print('{}'.format(repr(a)))
print('{}'.format(repr(b)))
###Output
datetime.datetime(2019, 7, 2, 22, 8, 5, 366978)
'2019-07-02 22:08:05.366978'
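###Markdown
Note the convention illustrated above: where practical, `repr` returns a string that could be passed to `eval` to rebuild an equivalent object, while `str` favours readability. A quick sketch (reusing the `datetime` import from the cell above):
```python
a = datetime.datetime.today()
b = eval(repr(a))    # repr is unambiguous enough to reconstruct the object
print(a == b)        # True
```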
###Markdown
use in class
###Code
class Base:
def __init__(self, a, b):
self.a=a
self.b=b
def __str__(self):
return f"Rational {self.a+self.b}"
def __repr__(self):
return f"{self.a}+i{ self.b}"
b = Base(40, 50)
str(b)
repr(b)
###Output
_____no_output_____ |
Getting started with TensorFlow 2/.ipynb_checkpoints/Week 4 Programming Assignment-COMPLETED-checkpoint.ipynb | ###Markdown
Programming Assignment Saving and loading models, with application to the EuroSat dataset InstructionsIn this notebook, you will create a neural network that classifies land uses and land covers from satellite imagery. You will save your model using Tensorflow's callbacks and reload it later. You will also load in a pre-trained neural network classifier and compare performance with it. Some code cells are provided for you in the notebook. You should avoid editing provided code, and make sure to execute the cells in order to avoid unexpected errors. Some cells begin with the line: ` GRADED CELL `Don't move or edit this first line - this is what the automatic grader looks for to recognise graded cells. These cells require you to write your own code to complete them, and are automatically graded when you submit the notebook. Don't edit the function name or signature provided in these cells, otherwise the automatic grader might not function properly. Inside these graded cells, you can use any functions or classes that are imported below, but make sure you don't use any variables that are outside the scope of the function. How to submitComplete all the tasks you are asked for in the worksheet. When you have finished and are happy with your code, press the **Submit Assignment** button at the top of this notebook. Let's get started!We'll start running some imports, and loading the dataset. Do not edit the existing imports in the following cell. If you would like to make further Tensorflow imports, you should add them here.
###Code
#### PACKAGE IMPORTS ####
# Run this cell first to import all required packages. Do not make any imports elsewhere in the notebook
import tensorflow as tf
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
import os
import numpy as np
import pandas as pd
# If you would like to make further imports from tensorflow, add them here
###Output
_____no_output_____
###Markdown
 The EuroSAT datasetIn this assignment, you will use the [EuroSAT dataset](https://github.com/phelber/EuroSAT). It consists of 27000 labelled Sentinel-2 satellite images of different land uses: residential, industrial, highway, river, forest, pasture, herbaceous vegetation, annual crop, permanent crop and sea/lake. For a reference, see the following papers:- Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. Patrick Helber, Benjamin Bischke, Andreas Dengel, Damian Borth. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2019.- Introducing EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. Patrick Helber, Benjamin Bischke, Andreas Dengel. 2018 IEEE International Geoscience and Remote Sensing Symposium, 2018.Your goal is to construct a neural network that classifies a satellite image into one of these 10 classes, as well as applying some of the saving and loading techniques you have learned in the previous sessions. Import the dataThe dataset you will train your model on is a subset of the total data, with 4000 training images and 1000 testing images, with roughly equal numbers of each class. The code to import the data is provided below.
###Code
# Run this cell to import the Eurosat data
def load_eurosat_data():
data_dir = 'data/'
x_train = np.load(os.path.join(data_dir, 'x_train.npy'))
y_train = np.load(os.path.join(data_dir, 'y_train.npy'))
x_test = np.load(os.path.join(data_dir, 'x_test.npy'))
y_test = np.load(os.path.join(data_dir, 'y_test.npy'))
return (x_train, y_train), (x_test, y_test)
(x_train, y_train), (x_test, y_test) = load_eurosat_data()
x_train = x_train / 255.0
x_test = x_test / 255.0
###Output
_____no_output_____
###Markdown
Build the neural network model You can now construct a model to fit to the data. Using the Sequential API, build your model according to the following specifications:* The model should use the input_shape in the function argument to set the input size in the first layer.* The first layer should be a Conv2D layer with 16 filters, a 3x3 kernel size, a ReLU activation function and 'SAME' padding. Name this layer 'conv_1'.* The second layer should also be a Conv2D layer with 8 filters, a 3x3 kernel size, a ReLU activation function and 'SAME' padding. Name this layer 'conv_2'.* The third layer should be a MaxPooling2D layer with a pooling window size of 8x8. Name this layer 'pool_1'.* The fourth layer should be a Flatten layer, named 'flatten'.* The fifth layer should be a Dense layer with 32 units, a ReLU activation. Name this layer 'dense_1'.* The sixth and final layer should be a Dense layer with 10 units and softmax activation. Name this layer 'dense_2'.In total, the network should have 6 layers.
###Code
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_new_model(input_shape):
"""
This function should build a Sequential model according to the above specification. Ensure the
weights are initialised by providing the input_shape argument in the first layer, given by the
function argument.
Your function should also compile the model with the Adam optimiser, sparse categorical cross
entropy loss function, and a single accuracy metric.
"""
# The model should use the input_shape in the function argument to set the input size in the first layer.
# The first layer should be a Conv2D layer with 16 filters, a 3x3 kernel size,
# a ReLU activation function and 'SAME' padding. Name this layer 'conv_1'.
# The second layer should also be a Conv2D layer with 8 filters, a 3x3 kernel size,
# a ReLU activation function and 'SAME' padding. Name this layer 'conv_2'.
# The third layer should be a MaxPooling2D layer with a pooling window size of 8x8. Name this layer 'pool_1'.
# The fourth layer should be a Flatten layer, named 'flatten'.
# The fifth layer should be a Dense layer with 32 units, a ReLU activation. Name this layer 'dense_1'.
# The sixth and final layer should be a Dense layer with 10 units and softmax activation. Name this layer 'dense_2'.
model = tf.keras.Sequential([
Conv2D(16, (3,3), activation = 'relu', padding = 'SAME', name = 'conv_1', input_shape = input_shape),
        # DON'T FORGET THE INPUT_SHAPE IN THE FIRST LAYER
Conv2D(8, (3,3), activation = 'relu', padding = 'SAME', name = 'conv_2'),
MaxPooling2D((8,8), name = 'pool_1'),
Flatten(name = 'flatten'),
Dense(32, activation = 'relu', name = 'dense_1'),
Dense(10, activation = 'softmax', name = 'dense_2')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
###Output
_____no_output_____
###Markdown
Compile and evaluate the model
###Code
# Run your function to create the model
model = get_new_model(x_train[0].shape)
# Run this cell to define a function to evaluate a model's test accuracy
def get_test_accuracy(model, x_test, y_test):
"""Test model classification accuracy"""
test_loss, test_acc = model.evaluate(x=x_test, y=y_test, verbose=0)
print('accuracy: {acc:0.3f}'.format(acc=test_acc))
# Print the model summary and calculate its initialised test accuracy
model.summary()
get_test_accuracy(model, x_test, y_test)
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv_1 (Conv2D) (None, 64, 64, 16) 448
_________________________________________________________________
conv_2 (Conv2D) (None, 64, 64, 8) 1160
_________________________________________________________________
pool_1 (MaxPooling2D) (None, 8, 8, 8) 0
_________________________________________________________________
flatten (Flatten) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 32) 16416
_________________________________________________________________
dense_2 (Dense) (None, 10) 330
=================================================================
Total params: 18,354
Trainable params: 18,354
Non-trainable params: 0
_________________________________________________________________
accuracy: 0.120
###Markdown
Create checkpoints to save model during training, with a criterionYou will now create three callbacks:- `checkpoint_every_epoch`: checkpoint that saves the model weights every epoch during training- `checkpoint_best_only`: checkpoint that saves only the weights with the highest validation accuracy. Use the testing data as the validation data.- `early_stopping`: early stopping object that ends training if the validation accuracy has not improved in 3 epochs.
###Code
#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function names or arguments.
def get_checkpoint_every_epoch():
"""
This function should return a ModelCheckpoint object that:
- saves the weights only at the end of every epoch
- saves into a directory called 'checkpoints_every_epoch' inside the current working directory
- generates filenames in that directory like 'checkpoint_XXX' where
XXX is the epoch number formatted to have three digits, e.g. 001, 002, 003, etc.
"""
checkpoint = ModelCheckpoint(filepath = 'checkpoints_every_epoch/checkpoint_{epoch:03d}',
# DON'T NEED TO PUT './' in front!
save_weights_only = True)
return checkpoint
def get_checkpoint_best_only():
"""
This function should return a ModelCheckpoint object that:
- saves only the weights that generate the highest validation (testing) accuracy
- saves into a directory called 'checkpoints_best_only' inside the current working directory
- generates a file called 'checkpoints_best_only/checkpoint'
"""
checkpoint = ModelCheckpoint('checkpoints_best_only/checkpoint', # DON'T NEED TO PUT './' in front!
monitor = 'val_accuracy',
save_best_only = True,
save_weights_only = True,
mode = 'max')
return checkpoint
#### GRADED CELL ####
# Complete the following function.
# Make sure to not change the function name or arguments.
def get_early_stopping():
"""
This function should return an EarlyStopping callback that stops training when
the validation (testing) accuracy has not improved in the last 3 epochs.
HINT: use the EarlyStopping callback with the correct 'monitor' and 'patience'
"""
    early_stopping = tf.keras.callbacks.EarlyStopping(patience = 3, monitor = 'val_accuracy')
return early_stopping
# Run this cell to create the callbacks
checkpoint_every_epoch = get_checkpoint_every_epoch()
checkpoint_best_only = get_checkpoint_best_only()
early_stopping = get_early_stopping()
###Output
_____no_output_____
###Markdown
Train model using the callbacksNow, you will train the model using the three callbacks you created. If you created the callbacks correctly, three things should happen:- At the end of every epoch, the model weights are saved into a directory called `checkpoints_every_epoch`- At the end of every epoch, the model weights are saved into a directory called `checkpoints_best_only` **only** if those weights lead to the highest test accuracy- Training stops when the testing accuracy has not improved in three epochs.You should then have two directories:- A directory called `checkpoints_every_epoch` containing filenames that include `checkpoint_001`, `checkpoint_002`, etc with the `001`, `002` corresponding to the epoch- A directory called `checkpoints_best_only` containing filenames that include `checkpoint`, which contain only the weights leading to the highest testing accuracy
###Code
# Train model using the callbacks you just created
callbacks = [checkpoint_every_epoch, checkpoint_best_only, early_stopping]
model.fit(x_train, y_train, epochs=50, validation_data=(x_test, y_test), callbacks=callbacks)
###Output
Train on 4000 samples, validate on 1000 samples
Epoch 1/50
4000/4000 [==============================] - 78s 19ms/sample - loss: 0.4758 - accuracy: 0.8255 - val_loss: 0.7164 - val_accuracy: 0.7670
Epoch 2/50
4000/4000 [==============================] - 76s 19ms/sample - loss: 0.4618 - accuracy: 0.8365 - val_loss: 0.7334 - val_accuracy: 0.7480
Epoch 3/50
4000/4000 [==============================] - 76s 19ms/sample - loss: 0.4542 - accuracy: 0.8370 - val_loss: 0.7528 - val_accuracy: 0.7470
Epoch 4/50
4000/4000 [==============================] - 77s 19ms/sample - loss: 0.4671 - accuracy: 0.8322 - val_loss: 0.7822 - val_accuracy: 0.7390
###Markdown
Create new instance of model and load on both sets of weightsNow you will use the weights you just saved in a fresh model. You should create two functions, both of which take a freshly instantiated model instance:- `model_last_epoch` should contain the weights from the latest saved epoch- `model_best_epoch` should contain the weights from the saved epoch with the highest testing accuracy_Hint: use the_ `tf.train.latest_checkpoint` _function to get the filename of the latest saved checkpoint file. Check the docs_ [_here_](https://www.tensorflow.org/api_docs/python/tf/train/latest_checkpoint).
###Code
#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function name or arguments.
def get_model_last_epoch(model):
"""
This function should create a new instance of the CNN you created earlier,
load on the weights from the last training epoch, and return this model.
"""
latest_ = tf.train.latest_checkpoint('checkpoints_every_epoch')
model.load_weights(latest_)
return model
def get_model_best_epoch(model):
"""
This function should create a new instance of the CNN you created earlier, load
on the weights leading to the highest validation accuracy, and return this model.
"""
latest_ = tf.train.latest_checkpoint('checkpoints_best_only')
model.load_weights(latest_)
return model
# Run this cell to create two models: one with the weights from the last training
# epoch, and one with the weights leading to the highest validation (testing) accuracy.
# Verify that the second has a higher validation (testing) accuarcy.
model_last_epoch = get_model_last_epoch(get_new_model(x_train[0].shape))
model_best_epoch = get_model_best_epoch(get_new_model(x_train[0].shape))
print('Model with last epoch weights:')
get_test_accuracy(model_last_epoch, x_test, y_test)
print('')
print('Model with best epoch weights:')
get_test_accuracy(model_best_epoch, x_test, y_test)
###Output
Model with last epoch weights:
accuracy: 0.739
Model with best epoch weights:
accuracy: 0.767
###Markdown
Load, from scratch, a model trained on the EuroSat dataset.In your workspace, you will find another model trained on the `EuroSAT` dataset in `.h5` format. This model is trained on a larger subset of the EuroSAT dataset and has a more complex architecture. The path to the model is `models/EuroSatNet.h5`. See how its testing accuracy compares to your model!
###Code
#### GRADED CELL ####
# Complete the following functions.
# Make sure to not change the function name or arguments.
def get_model_eurosatnet():
"""
This function should return the pretrained EuroSatNet.h5 model.
"""
model = tf.keras.models.load_model('models/EuroSatNet.h5')
return model
# Run this cell to print a summary of the EuroSatNet model, along with its validation accuracy.
model_eurosatnet = get_model_eurosatnet()
model_eurosatnet.summary()
get_test_accuracy(model_eurosatnet, x_test, y_test)
###Output
Model: "sequential_21"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv_1 (Conv2D) (None, 64, 64, 16) 448
_________________________________________________________________
conv_2 (Conv2D) (None, 64, 64, 16) 6416
_________________________________________________________________
pool_1 (MaxPooling2D) (None, 32, 32, 16) 0
_________________________________________________________________
conv_3 (Conv2D) (None, 32, 32, 16) 2320
_________________________________________________________________
conv_4 (Conv2D) (None, 32, 32, 16) 6416
_________________________________________________________________
pool_2 (MaxPooling2D) (None, 16, 16, 16) 0
_________________________________________________________________
conv_5 (Conv2D) (None, 16, 16, 16) 2320
_________________________________________________________________
conv_6 (Conv2D) (None, 16, 16, 16) 6416
_________________________________________________________________
pool_3 (MaxPooling2D) (None, 8, 8, 16) 0
_________________________________________________________________
conv_7 (Conv2D) (None, 8, 8, 16) 2320
_________________________________________________________________
conv_8 (Conv2D) (None, 8, 8, 16) 6416
_________________________________________________________________
pool_4 (MaxPooling2D) (None, 4, 4, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 256) 0
_________________________________________________________________
dense_1 (Dense) (None, 32) 8224
_________________________________________________________________
dense_2 (Dense) (None, 10) 330
=================================================================
Total params: 41,626
Trainable params: 41,626
Non-trainable params: 0
_________________________________________________________________
accuracy: 0.810
|
R/06 JOIN.ipynb | ###Markdown
The JOIN operation JOIN and UEFA EURO 2012This tutorial introduces `JOIN` which allows you to use data from two or more tables. The tables contain all matches and goals from UEFA EURO 2012 Football Championship in Poland and Ukraine.The data is available (mysql format) at
###Code
library(tidyverse)
library(DBI)
library(getPass)
drv <- switch(Sys.info()['sysname'],
Windows="PostgreSQL Unicode(x64)",
Darwin="/usr/local/lib/psqlodbcw.so",
Linux="PostgreSQL")
con <- dbConnect(
odbc::odbc(),
driver = drv,
Server = "localhost",
Database = "sqlzoo",
UID = "postgres",
PWD = getPass("Password?"),
Port = 5432
)
options(repr.matrix.max.rows=20)
###Output
-- Attaching packages --------------------------------------- tidyverse 1.3.0 --
v ggplot2 3.3.0     v purrr   0.3.4
v tibble  3.0.1     v dplyr   0.8.5
v tidyr   1.0.2     v stringr 1.4.0
v readr   1.3.1     v forcats 0.5.0
-- Conflicts ------------------------------------------ tidyverse_conflicts() --
x dplyr::filter() masks stats::filter()
x dplyr::lag()    masks stats::lag()
###Markdown
1.The first example shows the goal scored by a player with the last name 'Bender'. The `*` says to list all the columns in the table - a shorter way of saying `matchid, teamid, player, gtime`**Modify it to show the matchid and player name for all goals scored by Germany. To identify German players, check for: `teamid = 'GER'`**
###Code
game <- dbReadTable(con, 'game')
goal <- dbReadTable(con, 'goal')
eteam <- dbReadTable(con, 'eteam')
goal %>%
filter(teamid=='GER') %>%
select(matchid, player)
###Output
_____no_output_____
###Markdown
2.From the previous query you can see that Lars Bender scored a goal in game 1012. Now we want to know what teams were playing in that match.Notice in the that the column `matchid `in the `goal` table corresponds to the `id` column in the `game` table. We can look up information about game 1012 by finding that row in the `game` table.**Show id, stadium, team1, team2 for just game 1012**
###Code
game %>%
filter(id==1012) %>%
select(id, stadium, team1, team2)
###Output
_____no_output_____
###Markdown
3.You can combine the two steps into a single query with a JOIN.
```sql
SELECT * FROM game JOIN goal ON (id=matchid)
```
The **FROM** clause says to merge data from the goal table with that from the game table. The **ON** says how to figure out which rows in **game** go with which rows in **goal** - the **matchid** from **goal** must match **id** from **game**. (If we wanted to be more clear/specific we could say `ON (game.id=goal.matchid)`The code below shows the player (from the goal) and stadium name (from the game table) for every goal scored.**Modify it to show the player, teamid, stadium and mdate for every German goal.**
###Code
game %>%
inner_join(goal, by=c(id="matchid")) %>%
filter(teamid=='GER') %>%
select(player, teamid, stadium, mdate)
###Output
_____no_output_____
###Markdown
4.Use the same `JOIN` as in the previous question.**Show the team1, team2 and player for every goal scored by a player called Mario `player LIKE 'Mario%'`**
###Code
game %>%
inner_join(goal, by=c(id='matchid')) %>%
filter(str_starts(player, 'Mario')) %>%
select(team1, team2, player)
###Output
_____no_output_____
###Markdown
5.The table `eteam` gives details of every national team including the coach. You can `JOIN` `goal` to `eteam` using the phrase `goal JOIN eteam on teamid=id`**Show `player, teamid, coach, gtime` for all goals scored in the first 10 minutes `gtime<=10`**
###Code
goal %>%
inner_join(eteam, by=c(teamid="id")) %>%
filter(gtime<=10) %>%
select(player, teamid, coach, gtime)
###Output
_____no_output_____
###Markdown
6. To `JOIN` `game` with `eteam` you could use either `game JOIN eteam ON (team1=eteam.id)` or `game JOIN eteam ON (team2=eteam.id)`. Notice that because `id` is a column name in both `game` and `eteam`, you must specify `eteam.id` instead of just `id`.

**List the dates of the matches and the name of the team in which 'Fernando Santos' was the team1 coach.**
###Code
game %>%
inner_join(eteam, by=c(team1="id")) %>%
filter(coach=='Fernando Santos') %>%
select(mdate, teamname)
###Output
_____no_output_____
###Markdown
7. **List the player for every goal scored in a game where the stadium was 'National Stadium, Warsaw'**
###Code
goal %>%
inner_join(game, by=c(matchid="id")) %>%
filter(stadium=='National Stadium, Warsaw') %>%
select(player)
###Output
_____no_output_____
###Markdown
8. More difficult questions

The example query shows all goals scored in the Germany-Greece quarterfinal.

**Instead show the name of all players who scored a goal against Germany.**

> __HINT__
> Select goals scored only by non-German players in matches where GER was the id of either **team1** or **team2**.
> You can use `teamid!='GER'` to prevent listing German players.
> You can use `DISTINCT` to stop players being listed twice.
###Code
game %>%
inner_join(goal, by=c(id="matchid")) %>%
filter((team1=='GER' | team2=='GER') &
teamid != 'GER') %>%
select(player) %>%
distinct
###Output
_____no_output_____
###Markdown
9. Show teamname and the total number of goals scored.

> __COUNT and GROUP BY__
> You should COUNT(*) in the SELECT line and GROUP BY teamname
###Code
eteam %>%
inner_join(goal, by=c(id="teamid")) %>%
group_by(teamname) %>%
summarise(goals=n())
###Output
_____no_output_____
###Markdown
10. **Show the stadium and the number of goals scored in each stadium.**
###Code
game %>%
inner_join(goal, by=c(id="matchid")) %>%
group_by(stadium) %>%
summarise(goals=n())
###Output
_____no_output_____
###Markdown
11. **For every match involving 'POL', show the matchid, date and the number of goals scored.**
###Code
game %>%
inner_join(goal, by=c(id="matchid")) %>%
filter(team1=='POL' | team2=='POL') %>%
select(id, mdate) %>%
group_by(id, mdate) %>%
summarise(goals=n())
###Output
_____no_output_____
###Markdown
12. **For every match where 'GER' scored, show matchid, match date and the number of goals scored by 'GER'.**
###Code
game %>%
inner_join(goal, by=c(id="matchid")) %>%
filter(teamid=='GER') %>%
group_by(id, mdate) %>%
summarise(goals=n())
###Output
_____no_output_____
###Markdown
13. List every match with the goals scored by each team as shown. This will use "CASE WHEN", which has not been explained in any previous exercises.

mdate | team1 | score1 | team2 | score2
------|-------|--------|-------|-------
1 July 2012 | ESP | 4 | ITA | 0
10 June 2012 | ESP | 1 | ITA | 1
10 June 2012 | IRL | 1 | CRO | 3
... | ... | ... | ... | ...

Notice in the query given every goal is listed. If it was a team1 goal then a 1 appears in score1, otherwise there is a 0. You could SUM this column to get a count of the goals scored by team1.

**Sort your result by mdate, matchid, team1 and team2.**
###Code
game %>%
left_join(goal, by=c(id="matchid")) %>%
mutate(goal1=if_else(is.na(teamid), FALSE, team1==teamid),
goal2=if_else(is.na(teamid), FALSE, team2==teamid)) %>%
group_by(id, mdate, team1, team2) %>%
summarise(score1=sum(goal1),
score2=sum(goal2)) %>%
arrange(mdate, id) %>%
ungroup() %>%
select(mdate, team1, score1, team2, score2)
dbDisconnect(con)
###Output
_____no_output_____ |
dev/_downloads/c9c419cf3fcf654a7859fae774bdb8f7/plot_simulate_evoked_data.ipynb | ###Markdown
Generate simulated evoked data
###Code
# Author: Daniel Strohmeier <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.time_frequency import fit_iir_model_raw
from mne.viz import plot_sparse_source_estimates
from mne.simulation import simulate_sparse_stc, simulate_evoked
print(__doc__)
###Output
_____no_output_____
###Markdown
Load real data as templates
###Code
data_path = sample.data_path()
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')
proj = mne.read_proj(data_path + '/MEG/sample/sample_audvis_ecg-proj.fif')
raw.info['projs'] += proj
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # mark bad channels
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fwd_fname)
fwd = mne.pick_types_forward(fwd, meg=True, eeg=True, exclude=raw.info['bads'])
cov = mne.read_cov(cov_fname)
info = mne.io.read_info(ave_fname)
label_names = ['Aud-lh', 'Aud-rh']
labels = [mne.read_label(data_path + '/MEG/sample/labels/%s.label' % ln)
for ln in label_names]
###Output
_____no_output_____
###Markdown
Generate source time courses from 2 dipoles and the corresponding evoked data
###Code
times = np.arange(300, dtype=float) / raw.info['sfreq'] - 0.1  # np.float is deprecated in recent NumPy
rng = np.random.RandomState(42)
def data_fun(times):
"""Function to generate random source time courses"""
return (50e-9 * np.sin(30. * times) *
np.exp(- (times - 0.15 + 0.05 * rng.randn(1)) ** 2 / 0.01))
stc = simulate_sparse_stc(fwd['src'], n_dipoles=2, times=times,
random_state=42, labels=labels, data_fun=data_fun)
###Output
_____no_output_____
###Markdown
Generate noisy evoked data
###Code
picks = mne.pick_types(raw.info, meg=True, exclude='bads')
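# Fit an autoregressive (IIR) model to a segment of the ongoing raw data; its coefficients
# are reused below so that the simulated noise has a realistic spectral profile.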
iir_filter = fit_iir_model_raw(raw, order=5, picks=picks, tmin=60, tmax=180)[1]
nave = 100 # simulate average of 100 epochs
evoked = simulate_evoked(fwd, stc, info, cov, nave=nave, use_cps=True,
iir_filter=iir_filter)
###Output
_____no_output_____
###Markdown
Plot
###Code
plot_sparse_source_estimates(fwd['src'], stc, bgcolor=(1, 1, 1),
opacity=0.5, high_resolution=True)
plt.figure()
plt.psd(evoked.data[0])
evoked.plot(time_unit='s')
###Output
_____no_output_____ |
Friedman test/TC/01_LogisticRegression.ipynb | ###Markdown
LogisticRegression
###Code
import os
import warnings
import numpy as np
import pandas as pd
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
import optuna
import shap
import time
import json
import config as cfg
from category_encoders import WOEEncoder
from mlxtend.feature_selection import SequentialFeatureSelector
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import RandomForestClassifier
from imblearn.under_sampling import RandomUnderSampler
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from optuna.visualization import plot_optimization_history
from optuna.visualization import plot_param_importances
from sklearn.model_selection import (
RepeatedStratifiedKFold,
StratifiedKFold,
cross_validate,
cross_val_score
)
cv_dev = StratifiedKFold(n_splits=cfg.N_SPLITS, shuffle=True, random_state=cfg.SEED)
cv_test = RepeatedStratifiedKFold(n_splits=cfg.N_SPLITS, n_repeats=cfg.N_REPEATS, random_state=cfg.SEED)
np.set_printoptions(formatter={"float": lambda x: "{0:0.4f}".format(x)})
pd.set_option("display.max_columns", None)
warnings.filterwarnings("ignore")
sns.set_context("paper", font_scale=1.4)
sns.set_style("darkgrid")
MODEL_NAME = 'LogisticRegression'
# Load data
X_train = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "X_train.csv"))
X_test = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "X_test.csv"))
y_train = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "y_train.csv"))
y_test = pd.read_csv(os.path.join("Data", "data_preprocessed_binned", "y_test.csv"))
X_train
###Output
_____no_output_____
###Markdown
Test performance
###Code
fig, axs = plt.subplots(1, 2, figsize=(16, 6))
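# WOEEncoder replaces each category with its weight of evidence (the log ratio of event vs.
# non-event rates within that category), a standard encoding choice in credit scoring.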
model = Pipeline([("encoder", WOEEncoder()), ("lr", LogisticRegression(random_state=cfg.SEED))])
model.fit(X = X_train, y = np.ravel(y_train))
# Calculate metrics
preds = model.predict_proba(X_test)[::,1]
test_gini = metrics.roc_auc_score(y_test, preds)*2-1
test_ap = metrics.average_precision_score(y_test, preds)
print(f"test_gini:\t {test_gini:.4}")
print(f"test_ap:\t {test_ap:.4}")
# ROC
test_auc = metrics.roc_auc_score(y_test, preds)
fpr, tpr, _ = metrics.roc_curve(y_test, preds)
lw = 2
axs[0].plot(fpr, tpr, lw=lw, label="ROC curve (GINI = %0.3f)" % test_gini)
axs[0].plot([0, 1], [0, 1], color="red", lw=lw, linestyle="--")
axs[0].set_xlim([-0.05, 1.0])
axs[0].set_ylim([0.0, 1.05])
axs[0].set_xlabel("False Positive Rate")
axs[0].set_ylabel("True Positive Rate")
axs[0].legend(loc="lower right")
# PR
precision, recall, _ = metrics.precision_recall_curve(y_test, preds)
lw = 2
axs[1].plot(recall, precision, lw=lw, label="PR curve (AP = %0.3f)" % test_ap)
axs[1].set_xlabel("Recall")
axs[1].set_ylabel("Precision")
axs[1].legend(loc="lower right")
plt.savefig(os.path.join("Graphs", f"ROC_PRC_{MODEL_NAME}.png"), facecolor="w", dpi=100, bbox_inches = "tight")
# Cross-validation GINI
scores_gini = cross_validate(
model, X_train, np.ravel(y_train), scoring="roc_auc", cv=cv_test, return_train_score=True, n_jobs=-1
)
mean_train_gini = (scores_gini["train_score"]*2-1).mean()
mean_test_gini = (scores_gini["test_score"]*2-1).mean()
std_test_gini = (scores_gini["test_score"]*2-1).std()
print(f"mean_train_gini:\t {mean_train_gini:.4}")
print(f"mean_dev_gini:\t\t {mean_test_gini:.4} (+-{std_test_gini:.1})")
###Output
mean_train_gini: 0.4881
mean_dev_gini: 0.4819 (+-0.02)
###Markdown
Model analysis
###Code
rus = RandomUnderSampler(sampling_strategy=cfg.SAMPLING_STRATEGY)
X_sub, y_sub = rus.fit_resample(X_test, y_test)
print(y_sub.mean())
preds = model.predict_proba(X_sub)[::,1]
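# Map predicted probabilities to a credit-score-like scale via the log-odds.
# (Assumption: 28.85 and 765.75 are the notebook's chosen scaling/offset constants,
# in the spirit of a points-to-double-the-odds calibration.)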
preds_calibrated = pd.DataFrame(np.round(28.85*np.log(preds/(1-preds))+765.75), columns=["preds_calibrated"])
fig, axs = plt.subplots(1, 1, figsize=(10,7))
palette ={0: "C0", 1: "C1"}
sns.histplot(data=preds_calibrated, x="preds_calibrated", hue=y_sub['BAD'], palette=palette, ax=axs, bins='auto')
plt.savefig(os.path.join("Graphs", f"Score_distr_{MODEL_NAME}.png"), facecolor="w", dpi=100, bbox_inches = "tight")
# Logistic regression coefficients
coefs = pd.DataFrame(
zip(X_train.columns, model["lr"].coef_[0]), columns=["Variable", "Coef"]
)
coefs_sorted = coefs.reindex(coefs["Coef"].abs().sort_values(ascending=False).index)
coefs_sorted
# Save results for final summary
results = {
"test_gini": test_gini,
"test_ap": test_ap,
"optimization_time": 0,
"fpr": fpr.tolist(),
"tpr": tpr.tolist(),
"precision": precision.tolist(),
"recall": recall.tolist(),
"mean_train_gini": scores_gini["train_score"].tolist(),
"mean_test_gini": scores_gini["test_score"].tolist(),
}
with open(os.path.join("Results", f"Results_{MODEL_NAME}.json"), 'w') as fp:
json.dump(results, fp)
###Output
_____no_output_____ |
notebooks/data/wikivoyage/feature-engineering/sampling-weight.ipynb | ###Markdown
Sorting place results

The goal is to determine a nice sequence of place results for the end user.
###Code
%load_ext autoreload
%autoreload 2
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import sys
sys.path.append('../../../../')
data_dir = '../../../../data/wikivoyage/'
# folder where data should live for flask API
api_dir = '../../../../api/data/'
input_path = data_dir + 'processed/wikivoyage_destinations.csv'
output_path1 = data_dir + 'enriched/wikivoyage_destinations.csv'
output_path2 = api_dir + 'wikivoyage_destinations.csv'
from stairway.utils.utils import add_normalized_column
###Output
_____no_output_____
###Markdown
Read data
###Code
df = pd.read_csv(input_path)
df.head()
df['nr_tokens'].describe()
df.columns
columns = ['country', 'id', 'name']
df[columns].head()
###Output
_____no_output_____
###Markdown
Remove destinations with no tokens

This has to be done for resampling: otherwise there will be observations with weight 0, which means they will never get sampled, and you thus cannot 'sort' the *entire* data set because some observations are never drawn.
###Code
df = df.loc[lambda df: df['nr_tokens'] > 0]
###Output
_____no_output_____
###Markdown
Biased sorting

In order to get some randomness while making sure the more important destinations get oversampled, we use `nr_tokens` as a weight in the sampling method. For now, let's first have a look at the overall distribution of `nr_tokens` in our data. It is strongly skewed towards destinations with very few tokens:
###Code
(
df
# .loc[lambda df: df['country'] == 'Netherlands']
.assign(nr_tokens_bins = lambda df: pd.cut(df['nr_tokens'], bins = list(range(0, 10000, 500)) + [99999]))
['nr_tokens_bins']
.value_counts()
.sort_index()
.plot(kind='bar')
);
###Output
_____no_output_____
###Markdown
You can imagine that you don't want to randomly sample this way: it would mean mostly showing very unknown destinations to the user. Let's compare 3 different ways of sampling:

1. without weights (so fully random)
2. weighting by `nr_tokens`
3. weighting by `nr_tokens` to the power `X`

The stronger the weighting, the more places with a larger number of tokens are drawn.
###Code
n_results = 16 # number of fetched results per API call
power_factor = 1.5 # nr of times to the power of nr_tokens for sampling bigger documents
fig, axes = plt.subplots(nrows=8, ncols=3, figsize=(16, 8*4))
df_bins = (
df
.assign(nr_tokens_bins = lambda df: pd.cut(df['nr_tokens'], bins = list(range(0, 10000, 500)) + [99999]))
.assign(nr_tokens_powered = lambda df: df['nr_tokens']**power_factor)
)
for i, row in enumerate(axes):
for weights, ax in zip(['random', 'nr_tokens', 'nr_tokens^{}'.format(power_factor)], row):
n = (i+1)*n_results
# depending on weights type, sample differently
if weights == 'random':
df_plot = df_bins.sample(frac=1, random_state=1234)
elif weights == 'nr_tokens':
df_plot = df_bins.sample(frac=1, random_state=1234, weights='nr_tokens')
else:
df_plot = df_bins.sample(frac=1, random_state=1234, weights='nr_tokens_powered')
# plot
(
df_plot
.head(n)
['nr_tokens_bins']
.value_counts()
.sort_index()
.plot(kind='bar', ax=ax)
)
# prettify plot
if i < 7:
ax.get_xaxis().set_ticks([])
ax.set_title('{} - {} obs'.format(weights, n))
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
A power factor of 1.5 seems nice. Raising the power even more would deplete the places with the most observations very quickly. For the user this means they would first get all the well-known destinations, and only then the rest. The aim of our app is to surprise and inspire, so we also want to show more lesser-known destinations.

Write to CSV

Add the sampling weight feature and write the final data set to be used by the frontend.
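For reference, a minimal sketch of what such a helper could look like, assuming it simply raises `nr_tokens` to the chosen power (the actual `add_sample_weight` in the `stairway` package may differ):

```python
import pandas as pd

def add_sample_weight_sketch(df: pd.DataFrame, power_factor: float = 1.5) -> pd.DataFrame:
    """Hypothetical helper: weight each destination by nr_tokens ** power_factor."""
    return df.assign(sample_weight=df["nr_tokens"] ** power_factor)

# usage sketch: df.sample(frac=1, weights="sample_weight", random_state=1234)
```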
###Code
from stairway.wikivoyage.feature_engineering import add_sample_weight
power_factor = 1.5
output_df = (
df
# add the feature
.pipe(add_sample_weight)
# other hygiene
.drop(columns=['nr_tokens', 'ispartof', 'parentid'])
.set_index('id', drop=False)
# need to do this to convert numpy int and float to native data types
.astype('object')
)
output_df.head()
# write 'approved' file to the data and api folders
# output_df.to_csv(output_path1, index=False)
# output_df.to_csv(output_path2, index=False)
###Output
_____no_output_____
###Markdown
Sorting based on profiles

We want to allow the user to sort based on profiles like 'Nature', 'Culture', 'Beach'. To do this, we have identified which features are part of each profile. For the sorting, we then select the features in scope and sum their BM25 scores to get the final sorting score.

The question is: do these BM25 scores bias towards smaller destinations? If yes, do we want to apply some kind of weighting with the number of tokens, as demonstrated above?

Imports and data
###Code
file_name = 'wikivoyage_destinations.csv'
features_file_name = 'wikivoyage_features.csv'
features_types = 'wikivoyage_features_types.csv'
df_places = pd.read_csv(data_dir + 'enriched/' + file_name).set_index("id", drop=False)
df_features = pd.read_csv(api_dir + features_file_name).set_index("id")
df_feature_types = pd.read_csv(api_dir + features_types)
###Output
_____no_output_____
###Markdown
Do a sort
###Code
from api.resources.utils.features import add_sorting_weight_by_profiles, sort_places_by_profiles
profiles = ['nature']
sort_places_by_profiles(df_places, profiles, df_features, df_feature_types).head()
###Output
_____no_output_____
###Markdown
Visualize
###Code
n_results = 16 # number of fetched results per API call
profiles = ['nature', 'city', 'culture', 'active', 'beach']
fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(16, 8*4))
df_bins = (
df_places
.assign(nr_tokens_bins = lambda df: pd.cut(df['nr_tokens'], bins = list(range(0, 10000, 500)) + [99999]))
)
i = 0
for profile, row in zip(profiles, axes):
i += 1
for j, ax in enumerate(row):
n = (j+1)*n_results
# depending on profile, sort differently
df_sorted = df_bins.pipe(sort_places_by_profiles, [profile], df_features, df_feature_types)
# plot
(
df_sorted
.head(n)
['nr_tokens_bins']
.value_counts()
.sort_index()
.plot(kind='bar', ax=ax)
)
# prettify plot
if i < len(profiles):
ax.get_xaxis().set_ticks([])
ax.set_title('{} - {} obs'.format(profile, n))
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
This confirms our hypothesis that sorting on BM25 weights heavily skews the top results towards destinations with a small number of tokens. So let's experiment a little, and scale the profile score with the number of tokens:
###Code
n_results = 16 # number of fetched results per API call
power_factor = 1.5
profiles = ['nature', 'city', 'culture', 'active', 'beach']
fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(16, 8*4))
df_bins = (
df_places
.assign(nr_tokens_bins = lambda df: pd.cut(df['nr_tokens'], bins = list(range(0, 10000, 500)) + [99999]))
.assign(nr_tokens_powered = lambda df: df['nr_tokens']**power_factor)
)
i = 0
for profile, row in zip(profiles, axes):
i += 1
for j, ax in enumerate(row):
n = (j+1)*n_results
# depending on profile, sort differently
df_sorted = (
df_bins
.pipe(add_sorting_weight_by_profiles, [profile], df_features, df_feature_types)
.assign(weight = lambda df: df['nr_tokens'] * df['profile_weight'])
.sort_values('weight', ascending=False)
)
# plot
(
df_sorted
.head(n)
['nr_tokens_bins']
.value_counts()
.sort_index()
.plot(kind='bar', ax=ax)
)
# prettify plot
if i < len(profiles):
ax.get_xaxis().set_ticks([])
ax.set_title('{} - {} obs'.format(profile, n))
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
That helps, although we seem to be overshooting a bit, and now we are not even using the power factor. This could be because `nr_tokens` is several orders of magnitude larger than `profile_weight`. Let's therefore try normalizing first and then adding both.
###Code
n_results = 16 # number of fetched results per API call
power_factor = 1.5
profiles = ['nature', 'city', 'culture', 'active', 'beach']
fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(16, 8*4))
df_bins = (
df_places
.assign(nr_tokens_bins = lambda df: pd.cut(df['nr_tokens'], bins = list(range(0, 10000, 500)) + [99999]))
.assign(nr_tokens_norm = lambda df: (df['nr_tokens'] - df['nr_tokens'].min()) / (df['nr_tokens'].max()
- df['nr_tokens'].min()))
)
i = 0
for profile, row in zip(profiles, axes):
i += 1
for j, ax in enumerate(row):
n = (j+1)*n_results
# depending on profile, sort differently
df_sorted = (
df_bins
.pipe(add_sorting_weight_by_profiles, [profile], df_features, df_feature_types)
.assign(profile_weight_norm = lambda df: (df['profile_weight'] - df['profile_weight'].min()) /
(df['profile_weight'].max() - df['profile_weight'].min()))
.assign(weight = lambda df: df['nr_tokens_norm'] + df['profile_weight_norm'])
.sort_values('weight', ascending=False)
)
# plot
(
df_sorted
.head(n)
['nr_tokens_bins']
.value_counts()
.sort_index()
.plot(kind='bar', ax=ax)
)
# prettify plot
if i < len(profiles):
ax.get_xaxis().set_ticks([])
ax.set_title('{} - {} obs'.format(profile, n))
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Better :) Now give the profile scores an even higher weight by multiplying that score before adding it. Tuning it a bit suggests that `multiplication_factor = 2` is too high: it favors too many of the unknown destinations. Even more important, though, is that the right multiplication factor varies quite a bit per profile. We will need to spend time to make something more robust.
###Code
n_results = 16 # number of fetched results per API call
multiplication_factor = 1.5
profiles = ['nature', 'city', 'culture', 'active', 'beach']
fig, axes = plt.subplots(nrows=5, ncols=3, figsize=(16, 8*4))
df_bins = (
df_places
.assign(nr_tokens_bins = lambda df: pd.cut(df['nr_tokens'], bins = list(range(0, 10000, 500)) + [99999]))
.pipe(add_normalized_column, 'nr_tokens')
)
i = 0
for profile, row in zip(profiles, axes):
i += 1
for j, ax in enumerate(row):
n = (j+1)*n_results
# depending on profile, sort differently
df_sorted = (
df_bins
.pipe(add_sorting_weight_by_profiles, [profile], df_features, df_feature_types)
.pipe(add_normalized_column, 'profile_weight')
.assign(weight = lambda df: df['nr_tokens_norm'] + (df['profile_weight_norm']*multiplication_factor))
.sort_values('weight', ascending=False)
)
# plot
(
df_sorted
.head(n)
['nr_tokens_bins']
.value_counts()
.sort_index()
.plot(kind='bar', ax=ax)
)
# prettify plot
if i < len(profiles):
ax.get_xaxis().set_ticks([])
ax.set_title('{} - {} obs'.format(profile, n))
fig.tight_layout()
plt.show()
df_sorted.head()
###Output
_____no_output_____ |
final notebook.ipynb | ###Markdown
Missing values

To get a better understanding of our dataset, we plot a heatmap which shows where the missing values are in our data. This way we can see the holes in our dataset better.
###Code
plt.figure(figsize=(30, 10))
sns.heatmap(train_df.isnull(), yticklabels= False , cbar = False)
percentage = pd.DataFrame((train_df.isnull().sum()/train_df.shape[0]).sort_values(ascending = False))
percentage.columns = ['percentage']
percentage.head(10)
###Output
_____no_output_____
###Markdown
As we can see, some columns have a high percentage of missing values. We will deal with them later, before training models.

Label class distribution
###Code
train_df['TARGET'].value_counts().plot(kind = 'pie' ,autopct='%1.0f%%')
###Output
_____no_output_____
###Markdown
As we can see, the target class is unbalanced, which makes training harder. We will take this into account when we train models on our data.

Types of columns
###Code
train_df.dtypes.value_counts()
###Output
_____no_output_____
###Markdown
So we have 16 categorical columns that we need to encode. Let's see in detail how many categories each column has.
###Code
for col in train_df.select_dtypes(include=['object']) :
print('column %s has %s unique categories' % (col,len(train_df[col].unique())))
###Output
column NAME_CONTRACT_TYPE has 2 unique categories
column CODE_GENDER has 3 unique categories
column FLAG_OWN_CAR has 2 unique categories
column FLAG_OWN_REALTY has 2 unique categories
column NAME_TYPE_SUITE has 8 unique categories
column NAME_INCOME_TYPE has 8 unique categories
column NAME_EDUCATION_TYPE has 5 unique categories
column NAME_FAMILY_STATUS has 6 unique categories
column NAME_HOUSING_TYPE has 6 unique categories
column OCCUPATION_TYPE has 19 unique categories
column WEEKDAY_APPR_PROCESS_START has 7 unique categories
column ORGANIZATION_TYPE has 58 unique categories
column FONDKAPREMONT_MODE has 5 unique categories
column HOUSETYPE_MODE has 4 unique categories
column WALLSMATERIAL_MODE has 8 unique categories
column EMERGENCYSTATE_MODE has 3 unique categories
###Markdown
We have 16 categorical columns, each with between 2 and 58 distinct values. We use one-hot encoding to transform them into numerical values.

One-hot-encoding
###Code
# Write a function for one hot encoding to handle categorical features
def one_hot_encoding(df) :
for col in list(df.columns) :
if df[col].dtype == 'object' :
df = pd.concat([df, pd.get_dummies(df[col], prefix=col)], axis=1)
df = df.drop(columns = col) #remove the categorical column after hot encoding
return(df)
train_df = one_hot_encoding(train_df)
test_df = one_hot_encoding(test_df)
print('size of train_df after one-hot-encoding is: ', train_df.shape)
print('size of test_df after one-hot-encoding is: ', test_df.shape)
###Output
size of train_df after one-hot-encoding is: (307511, 246)
size of test_df after one-hot-encoding is: (48744, 242)
###Markdown
As we can see, we have 3 more columns (besides the target) in the train dataframe compared to the test dataframe, so we should align them.

Alignment
###Code
target_label = train_df['TARGET'] #saving target column to add it afterwards since it will disappear after alignment
train_df, test_df = train_df.align(test_df, join = 'inner', axis = 1)
train_df['TARGET'] = target_label #add target column to train_df
print('size of train_df after alignment is: ', train_df.shape)
print('size of test_df after alignment is: ', test_df.shape)
###Output
size of train_df after alignment is: (307511, 243)
size of test_df after alignment is: (48744, 242)
###Markdown
Now the train and test data frames are aligned.

Data correlation

In this step we calculate the Pearson correlation coefficient between each column and the target column, to get a basic understanding of which columns are more related to the target.
###Code
corr = train_df.corr()['TARGET'].sort_values()
print(corr.tail(15)) #to get most positively correlated features
print(corr.head(15)) #to get most negatively correlated features
###Output
DAYS_REGISTRATION 0.041975
OCCUPATION_TYPE_Laborers 0.043019
FLAG_DOCUMENT_3 0.044346
REG_CITY_NOT_LIVE_CITY 0.044395
FLAG_EMP_PHONE 0.045982
NAME_EDUCATION_TYPE_Secondary / secondary special 0.049824
REG_CITY_NOT_WORK_CITY 0.050994
DAYS_ID_PUBLISH 0.051457
CODE_GENDER_M 0.054713
DAYS_LAST_PHONE_CHANGE 0.055218
NAME_INCOME_TYPE_Working 0.057481
REGION_RATING_CLIENT 0.058899
REGION_RATING_CLIENT_W_CITY 0.060893
DAYS_BIRTH 0.078239
TARGET 1.000000
Name: TARGET, dtype: float64
EXT_SOURCE_3 -0.178919
EXT_SOURCE_2 -0.160472
EXT_SOURCE_1 -0.155317
NAME_EDUCATION_TYPE_Higher education -0.056593
CODE_GENDER_F -0.054704
NAME_INCOME_TYPE_Pensioner -0.046209
ORGANIZATION_TYPE_XNA -0.045987
DAYS_EMPLOYED -0.044932
FLOORSMAX_AVG -0.044003
FLOORSMAX_MEDI -0.043768
FLOORSMAX_MODE -0.043226
EMERGENCYSTATE_MODE_No -0.042201
HOUSETYPE_MODE_block of flats -0.040594
AMT_GOODS_PRICE -0.039645
REGION_POPULATION_RELATIVE -0.037227
Name: TARGET, dtype: float64
###Markdown
As we can see, age, external sources, gender, education, income type and region are more related to the target (although none of them has a very high correlation). Let's do some more analysis on these factors to get to know our data better.

Age
###Code
plt.hist(-train_df['DAYS_BIRTH'] / 365, edgecolor = 'k', bins = 10)
plt.title('Age of Client')
plt.xlabel('Age (years)')
plt.ylabel('Count')
###Output
_____no_output_____
###Markdown
As we can see, most clients are between 30 and 45. Let's see how age changes customers' behavior in paying back loans.
###Code
Age = train_df[['DAYS_BIRTH','TARGET']].copy(deep=True)
warnings.filterwarnings('ignore')
imputer = SimpleImputer(strategy = "median")
imputer.fit(Age)
Age.loc[:] = imputer.transform(Age)
#change Age from days to years
Age.loc[Age['TARGET']==0 ,'paid'] = -Age.loc[Age['TARGET']==0,'DAYS_BIRTH']/365
Age.loc[Age['TARGET']==1 ,'not_paid'] = -Age.loc[Age['TARGET']==1,'DAYS_BIRTH']/365
fig = plt.figure(figsize=(10, 6))
plt.subplot(1, 2, 1)
plt.hist(Age['paid'],edgecolor = 'k', bins = 10)
plt.title('paid_loans')
plt.xlabel('Age (years)')
plt.ylabel('Count')
plt.subplot(1, 2, 2)
plt.hist(Age['not_paid'],edgecolor = 'k', bins = 10)
plt.title('not_paid loans')
plt.xlabel('Age (years)')
plt.ylabel('Count')
plt.subplots_adjust(wspace = .5)
plt.show()
###Output
_____no_output_____
###Markdown
As we can see in the not_paid loans subplot, as the age of customers increases, the likelihood that they will pay the loan back increases.

Education level
###Code
train_df2 = pd.read_csv("application_train.csv")
edu = train_df2[['NAME_EDUCATION_TYPE','TARGET']].copy(deep=True)
edu = edu.dropna(how='any',axis=0)
fig = plt.figure(figsize=(10, 10))
edu.groupby(['NAME_EDUCATION_TYPE','TARGET']).size().unstack().plot(kind='bar',stacked=True)
plt.xlabel('Education level')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____
###Markdown
Gender
###Code
gender = train_df2[['CODE_GENDER','TARGET']].copy(deep=True)
gender = gender.replace('XNA', np.nan)
gender = gender.dropna(how='any',axis=0)
fig = plt.figure(figsize=(10, 10))
gender.groupby(['CODE_GENDER','TARGET']).size().unstack().plot(kind='bar',stacked=True)
plt.xlabel('Gender')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____
###Markdown
There are almost twice as many women clients as men, while men show higher risk.

Family Status
###Code
FStatus = train_df2[['NAME_FAMILY_STATUS','TARGET']].copy(deep=True)
FStatus = FStatus.dropna(how='any',axis=0)
fig = plt.figure(figsize=(10, 10))
FStatus.groupby(['NAME_FAMILY_STATUS','TARGET']).size().unstack().plot(kind='bar',stacked=True)
plt.xlabel('family status')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____
###Markdown
While the majority of clients are married, customers with an unmarried or single status are less risky.

Feature engineering

Adding some useful features
###Code
train_df['DAYS_EMPLOYED'].replace({365243: np.nan}, inplace = True) # Deleting outsiders
train_df['CREDIT_DIV_ANNUITY'] = train_df['AMT_CREDIT']/train_df['AMT_ANNUITY']
train_df['ANNUITY_INCOME_PERCENT'] = train_df['AMT_ANNUITY'] / train_df['AMT_INCOME_TOTAL']
train_df['BIRTH_DIV_EMPLOYED'] = train_df['DAYS_BIRTH']/train_df['DAYS_EMPLOYED']
train_df['DAYREG_DIV_DAYPUB'] = train_df['DAYS_REGISTRATION']/train_df['DAYS_ID_PUBLISH']
train_df['CREDIT_MINUS_GOOD'] = train_df['AMT_CREDIT']/train_df['AMT_GOODS_PRICE']
train_df['INCOME_CHILD'] = train_df['AMT_INCOME_TOTAL']/train_df['CNT_CHILDREN']
train_df['INCOME_DIV_FAM'] = train_df['AMT_INCOME_TOTAL']/train_df['CNT_FAM_MEMBERS']
test_df['CREDIT_DIV_ANNUITY'] = test_df['AMT_CREDIT']/test_df['AMT_ANNUITY']
test_df['ANNUITY_INCOME_PERCENT'] = test_df['AMT_ANNUITY'] / test_df['AMT_INCOME_TOTAL']
test_df['BIRTH_DIV_EMPLOYED'] = test_df['DAYS_BIRTH']/test_df['DAYS_EMPLOYED']
test_df['DAYREG_DIV_DAYPUB'] = test_df['DAYS_REGISTRATION']/test_df['DAYS_ID_PUBLISH']
test_df['CREDIT_MINUS_GOOD'] = test_df['AMT_CREDIT']/test_df['AMT_GOODS_PRICE']
test_df['INCOME_CHILD'] = test_df['AMT_INCOME_TOTAL']/test_df['CNT_CHILDREN']
test_df['INCOME_DIV_FAM'] = test_df['AMT_INCOME_TOTAL']/test_df['CNT_FAM_MEMBERS']
###Output
_____no_output_____
###Markdown
Adding polynomial features

In this step we try to create new features from the available important ones. One way to do that is to use the polynomial method and create features that are powers and interactions of the existing features. Since polynomial features may not always improve our model, we create a separate data frame with these features and train with and without them. We choose 5 important features (based on their correlation with the target): EXT_SOURCE_1, EXT_SOURCE_2, EXT_SOURCE_3, DAYS_BIRTH, CODE_GENDER_F, and use the PolynomialFeatures class from Scikit-Learn with degree 3.
###Code
important_features = train_df[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'DAYS_BIRTH','CODE_GENDER_F' ]]
important_features_test = test_df[['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'DAYS_BIRTH','CODE_GENDER_F' ]]
imputer = SimpleImputer(strategy = 'median') #replacing null values with the median of that column
important_features = imputer.fit_transform(important_features)
important_features_test = imputer.fit_transform(important_features_test)
polynom = PolynomialFeatures(degree = 3)
poly_features = polynom.fit_transform(important_features) #applying PolynomialFeatures to train set
poly_features_test = polynom.fit_transform(important_features_test ) #applying PolynomialFeatures to test set
print(poly_features.shape)
###Output
(307511, 56)
###Markdown
Now we have 56 polynomial features derived from the 5 original important features. Next we calculate the correlation between these polynomial features and the target to see how they are related to the label.
###Code
# We create a data frame from all polynomial features that we created in previous step and then calculate correlations
poly_features = pd.DataFrame(poly_features , columns = polynom.get_feature_names(['EXT_SOURCE_1', 'EXT_SOURCE_2',
'EXT_SOURCE_3', 'DAYS_BIRTH', 'CODE_GENDER_F']))
poly_features_test = pd.DataFrame(poly_features_test , columns = polynom.get_feature_names(['EXT_SOURCE_1', 'EXT_SOURCE_2',
'EXT_SOURCE_3', 'DAYS_BIRTH', 'CODE_GENDER_F']))
poly_features.head()
poly_features = poly_features.drop('1' , axis = 1) # The first featur with degree 0 is useless so we drop it
poly_features_test = poly_features_test.drop('1' , axis = 1)
poly_features ['TARGET'] = train_df['TARGET']
corr = poly_features.corr()['TARGET'].sort_values()
print(corr.tail(15)) #to get most positively correlated features
print(corr.head(15)) #to get most negatively correlated features
poly_features.shape
###Output
_____no_output_____
###Markdown
As we can see, some of these polynomial features have a higher correlation with the target class. Now we create new train and test data sets and add these polynomial features. Later, for training, we also use these datasets with polynomial features to see whether they improve our model.
###Code
poly_features['SK_ID_CURR'] = train_df['SK_ID_CURR'] #adding Id column so we can merge these datasets later
poly_features = poly_features.drop('TARGET', axis = 1)
poly_features_test['SK_ID_CURR'] = test_df['SK_ID_CURR']
poly_train = train_df.merge(poly_features, on = 'SK_ID_CURR', how = 'left')
poly_test = test_df.merge(poly_features_test, on = 'SK_ID_CURR', how = 'left')
poly_train.head()
###Output
_____no_output_____
###Markdown
Adding data from other tables

bureau_balance table
###Code
bureau_balance = pd.read_csv("bureau_balance.csv")
bureau_balance.head()
bureau_balance["STATUS"].unique()
###Output
_____no_output_____
###Markdown
The meaning of the different values of the STATUS column is as follows:

- C - closed, that is, repaid credit
- X - unknown status
- 0 - current loan, no delinquency
- 1 - 1-30 days overdue, 2 - 31-60 days overdue, and so on up to status 5 - the loan is sold to a third party or written off

We can use this STATUS column to define a risk factor by allocating a value to each status and then calculating their sum for each SK_ID_BUREAU.
###Code
bureau_balance['STATUS'] = bureau_balance['STATUS'].map({'C' : 0 , '0' : 0 , 'X' : .1 , '1' : 1 ,
'2' : 2 , '3' : 3 , '4' :4 , '5' : 5})
# Allocate .1 for X because in this case the status is unknown and it's not reasonable to map a high risk to it
bureau_balance_final = bureau_balance.groupby('SK_ID_BUREAU', as_index=False)['STATUS'].sum()
bureau_balance_final = bureau_balance_final.rename(columns = {'STATUS' : 'BB_RISK'})
bureau_balance_final.head()
###Output
_____no_output_____
###Markdown
bureau table

In this step, we first add the risk value that we calculated in the previous step to the bureau table and fill null values with 0.
###Code
bureau = pd.read_csv("bureau.csv")
bureau= bureau.merge(bureau_balance_final, on = 'SK_ID_BUREAU', how = 'left')
bureau['BB_RISK'] = bureau['BB_RISK'].fillna(0)
bureau = one_hot_encoding(bureau)
bureau.head()
###Output
_____no_output_____
###Markdown
Now we compute the mean of the bureau table features for each SK_ID_CURR, as well as the number of previous loans each customer has taken.
###Code
bureau_mean = bureau.groupby('SK_ID_CURR').mean()
previous_loans = bureau.groupby('SK_ID_CURR', as_index=False)['SK_ID_BUREAU'].count() #number of previous loans for each customer
previous_loans = previous_loans.rename(columns = {"SK_ID_CURR" : "SK_ID_CURR", "SK_ID_BUREAU" : "PLoan_num"})
bureau_mean= bureau_mean.merge(previous_loans, on = 'SK_ID_CURR', how = 'left')
bureau_mean = bureau_mean.drop(columns = "SK_ID_BUREAU" )
bureau_mean.head()
###Output
_____no_output_____
###Markdown
Now we define a potentially useful new variable from the existing ones: how often the customer took loans in the past. Was it on a regular basis or within a short period? Each can have a different interpretation.
###Code
frequency = bureau[['SK_ID_CURR', 'SK_ID_BUREAU', 'DAYS_CREDIT']].groupby(by=['SK_ID_CURR'])
frequency1 = frequency.apply(lambda x: x.sort_values(['DAYS_CREDIT'], ascending=False)).reset_index(drop=True)
frequency1['Loan_FRQ'] = frequency1.groupby(by=['SK_ID_CURR'])['DAYS_CREDIT'].diff()
###Output
_____no_output_____
###Markdown
Now we need to find the mean of Loan_FRQ for each SK_ID_CURR. First, I drop null values (because when we calculate diff, the diff value for the first bureau record of each SK_ID_CURR is NaN) and then calculate mean values for each SK_ID_CURR.
###Code
frequency1 = frequency1.dropna(subset = ['Loan_FRQ'])
frequency1 = frequency1.groupby('SK_ID_CURR', as_index=False)['Loan_FRQ'].mean()
# Now we should merge frequency1 and bureau_mean database
bureau_mean= bureau_mean.merge(frequency1, on = 'SK_ID_CURR', how = 'left')
#we have null values in Loan_FRQ column if there was just 1 previous loan
#fill null values of this column with the value of DAYS_CREDIT column
bureau_mean["Loan_FRQ"] = np.where(bureau_mean["Loan_FRQ"].isnull(), bureau_mean['DAYS_CREDIT'], bureau_mean["Loan_FRQ"])
bureau_mean["Loan_FRQ"] = bureau_mean["Loan_FRQ"].abs()
bureau_mean.head(10)
# Now we fill null values with the value of median
imputer = SimpleImputer(strategy = "median")
imputer.fit(bureau_mean)
bureau_mean.loc[:] = imputer.transform(bureau_mean)
bureau_mean.columns = ['BUR_' + col for col in bureau_mean.columns]
bureau_mean = bureau_mean.rename(columns = {'BUR_SK_ID_CURR' : 'SK_ID_CURR'})
###Output
_____no_output_____
###Markdown
POS_CASH_balance table
###Code
pos_cash = pd.read_csv("POS_CASH_balance.csv")
pos_cash.head()
pos_cash = one_hot_encoding(pos_cash)
pos_count = pos_cash[[ 'SK_ID_PREV', 'SK_ID_CURR']].groupby(by = 'SK_ID_CURR').count()
pos_count = pos_count.rename(columns= {'SK_ID_CURR' : 'SK_ID_CURR', 'SK_ID_PREV' : 'prev_pos_count'})
pos_avg = pos_cash.groupby('SK_ID_CURR').mean()
pos_avg = pos_avg.merge(pos_count, how='left', on='SK_ID_CURR')
pos_avg = pos_avg.drop('SK_ID_PREV', axis = 1)
pos_avg.head()
# changing column names to avoid any problem when we want to merge these tables with train and test
pos_avg.columns = ['POS_' + col for col in pos_avg.columns]
pos_avg = pos_avg.rename(columns = {'POS_SK_ID_CURR' : 'SK_ID_CURR'})
###Output
_____no_output_____
###Markdown
installments_payments table
###Code
ins_pay = pd.read_csv("installments_payments.csv")
ins_pay = one_hot_encoding(ins_pay)
ins_pay.head()
ins_count = ins_pay[['SK_ID_CURR', 'SK_ID_PREV']].groupby('SK_ID_CURR').count()
ins_count = ins_count.rename(columns = {'SK_ID_CURR' : 'SK_ID_CURR' , 'SK_ID_PREV' : 'ins_count'})
ins_avg = ins_pay.groupby('SK_ID_CURR').mean()
ins_avg = ins_avg.merge(ins_count, how='left', on='SK_ID_CURR')
ins_avg = ins_avg.drop('SK_ID_PREV', axis = 1)
ins_avg.head()
###Output
_____no_output_____
###Markdown
Adding new features
###Code
# Percentage and difference paid in each installment (amount paid and installment value)
ins_avg['PAYMENT_PERC'] = ins_avg['AMT_PAYMENT'] / ins_avg['AMT_INSTALMENT']
ins_avg['PAYMENT_DIFF'] = ins_avg['AMT_INSTALMENT'] - ins_avg['AMT_PAYMENT']
# Days past due and days before due (no negative values)
ins_avg['DPD'] = ins_avg['DAYS_ENTRY_PAYMENT'] - ins_avg['DAYS_INSTALMENT']
ins_avg['DBD'] = ins_avg['DAYS_INSTALMENT'] - ins_avg['DAYS_ENTRY_PAYMENT']
ins_avg['DPD'] = ins_avg['DPD'].apply(lambda x: x if x > 0 else 0)
ins_avg['DBD'] = ins_avg['DBD'].apply(lambda x: x if x > 0 else 0)
ins_avg.head()
#analyze null values
ins_avg.isnull().sum()
###Output
_____no_output_____
###Markdown
There are just 9 rows with null values in DAYS_ENTRY_PAYMENT and AMT_PAYMENT columns (also in new columns that we created from them). We fill these null values with the mean of the column
###Code
ins_avg = ins_avg.fillna(ins_avg.mean())
ins_avg.head()
#changing columns name
ins_avg.columns = ['ins_' + col for col in ins_avg.columns]
ins_avg = ins_avg.rename(columns = {'ins_SK_ID_CURR' : 'SK_ID_CURR'})
###Output
_____no_output_____
###Markdown
credit_card_balance table
###Code
cc = pd.read_csv('credit_card_balance.csv')
cc.head()
# handling categorical features
cc = one_hot_encoding(cc)
# filling null values with median
imputer = SimpleImputer(strategy = "median")
imputer.fit(cc)
cc.loc[:] = imputer.transform(cc)
# Adding number of credit cards
cc_count = cc[['SK_ID_CURR', 'SK_ID_PREV']].groupby('SK_ID_CURR').count()
cc_count = cc_count.rename(columns = {'SK_ID_CURR' : 'SK_ID_CURR' , 'SK_ID_PREV' : 'cc_count'})
# calculating the mean of each feature for each customer
cc_avg = cc.groupby('SK_ID_CURR').mean()
cc_avg = cc_avg.merge(cc_count, how='left', on='SK_ID_CURR')
cc_avg = cc_avg.drop('SK_ID_PREV', axis = 1)
cc_avg.head()
#changing columns name
cc_avg.columns = ['ins_' + col for col in cc_avg.columns]
cc_avg = cc_avg.rename(columns = {'ins_SK_ID_CURR' : 'SK_ID_CURR'})
###Output
_____no_output_____
###Markdown
previous_application table
###Code
pre_app = pd.read_csv('previous_application.csv')
pre_app.head()
# handling categorical features
pre_app = one_hot_encoding(pre_app)
# Adding number of credit cards
pre_count = pre_app[['SK_ID_CURR', 'SK_ID_PREV']].groupby('SK_ID_CURR').count()
pre_count = pre_count.rename(columns = {'SK_ID_CURR' : 'SK_ID_CURR' , 'SK_ID_PREV' : 'pre_app_count'})
# calculating the mean of each feature for each customer
app_avg = pre_app.groupby('SK_ID_CURR').mean()
app_avg = app_avg.merge(pre_count, how='left', on='SK_ID_CURR')
app_avg = app_avg.drop('SK_ID_PREV', axis = 1)
app_avg.head()
# filling null values with median
imputer = SimpleImputer(strategy = "median")
imputer.fit(app_avg)
app_avg.loc[:] = imputer.transform(app_avg)
#changing columns name
app_avg.columns = ['app_' + col for col in app_avg.columns]
app_avg = app_avg.rename(columns = {'app_SK_ID_CURR' : 'SK_ID_CURR'})
###Output
_____no_output_____
###Markdown
Merging tables with the train and test set

In this step we merge the datasets that we created in the previous steps with the train and test datasets. Since not all applicants have previous applications or loans, we fill null values of the columns from these new datasets with 0.
###Code
def merge_dataset(df1,df2,key):
df2_cols = list(df2.columns)
df1 = df1.merge(df2, how='left', on= key)
df1[df2_cols] = df1[df2_cols].fillna(0)
return df1
# Adding Bureau table
train_df = merge_dataset(train_df, bureau_mean, 'SK_ID_CURR')
test_df = merge_dataset(test_df, bureau_mean, 'SK_ID_CURR' )
# Adding POS_CASH_balance table
train_df = merge_dataset(train_df, pos_avg , 'SK_ID_CURR')
test_df = merge_dataset(test_df, pos_avg , 'SK_ID_CURR' )
# Adding installments_payments table
train_df = merge_dataset(train_df, ins_avg , 'SK_ID_CURR')
test_df = merge_dataset(test_df, ins_avg , 'SK_ID_CURR' )
# Adding credit_card_balance table
train_df = merge_dataset(train_df, cc_avg , 'SK_ID_CURR')
test_df = merge_dataset(test_df, cc_avg , 'SK_ID_CURR' )
# Adding previous_application table
train_df = merge_dataset(train_df, app_avg , 'SK_ID_CURR')
test_df = merge_dataset(test_df, app_avg , 'SK_ID_CURR' )
# Adding some new useful features
train_df['INTEREST'] = train_df['app_CNT_PAYMENT']*train_df['AMT_ANNUITY'] - train_df['AMT_CREDIT']
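# Rough annualized rate via the constant-ratio approximation: 2 * periods_per_year * interest
# / (principal * (n_payments + 1)); assumes app_CNT_PAYMENT counts monthly installments.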
train_df['INTEREST_RATE'] = 2*12*train_df['INTEREST']/(train_df['AMT_CREDIT']*(train_df['app_CNT_PAYMENT']+1))
train_df['INTEREST_SHARE'] = train_df['INTEREST']/train_df['AMT_CREDIT']
test_df['INTEREST'] = test_df['app_CNT_PAYMENT']*test_df['AMT_ANNUITY'] - test_df['AMT_CREDIT']
test_df['INTEREST_RATE'] = 2*12*test_df['INTEREST']/(test_df['AMT_CREDIT']*(test_df['app_CNT_PAYMENT']+1))
test_df['INTEREST_SHARE'] = test_df['INTEREST']/test_df['AMT_CREDIT']
train_df.head()
#train_df.to_csv('processed.csv', encoding='utf-8', index=False)
#test_df.to_csv('processed_test.csv' , encoding='utf-8', index=False)
###Output
_____no_output_____
###Markdown
Modeling
###Code
train_df = pd.read_csv('processed.csv')
test_df = pd.read_csv('processed_test.csv')
test_df.head()
###Output
_____no_output_____
###Markdown
Checking for NaN, inf or -inf values and substituting them with the mean of each numeric column
###Code
# Substituting inf and -inf values with nan
train_df = train_df.replace([np.inf, -np.inf], np.nan)
# Filling the Nan values in the new numeric columns with the mean
for column in list(train_df.columns):
if train_df[column].dtypes == 'float64':
train_df[column] = train_df[column].fillna(train_df[column].mean())
# Checking if there are still any problems into the dataframe
train_df.isnull().any().any()
###Output
_____no_output_____
###Markdown
---
###Code
# Selecting the 'SK_ID_CURR' column for future use
client_names = train_df[['SK_ID_CURR']]
client_names
# Splitting dataframe in features and target variable
feature_cols = list(train_df.columns)
y = train_df.TARGET.values # Target variable
train_df = train_df[feature_cols].drop(['TARGET'], axis = 1)
train_df = train_df.drop(['SK_ID_CURR'], axis = 1) # Features
###Output
_____no_output_____
###Markdown
Dividing the data into train, val and test datasets Now that we have defined the initial dataframe of features and the Target variable array, we can divide our dataset into training, validation and testing sets, and then select suitable methods for binary classification in order to develop our statistical model.
###Code
# Splitting the dataset
X_train, X_temp, y_train, y_temp = train_test_split(train_df, y, stratify = y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, stratify = y_temp, test_size=0.5, random_state=42)
print('Shape of X_train:',X_train.shape)
print('Shape of X_val:',X_val.shape)
print('Shape of X_test:',X_test.shape)
###Output
Shape of X_train: (215257, 506)
Shape of X_val: (46127, 506)
Shape of X_test: (46127, 506)
###Markdown
As we can see, there are over 500 columns in our dataframes. So many features can considerably slow down our models and generate too much noise, making it even harder to find the correct probability that each client will or won't pay back the loan. For these reasons, we decided to select the best features with LightGBM.
###Code
model_sel = lgb.LGBMClassifier(boosting_type='gbdt', max_depth=7, learning_rate=0.01, n_estimators= 2000,
class_weight='balanced', subsample=0.9, colsample_bytree= 0.8, n_jobs=-1)
train_features, valid_features, train_y, valid_y = train_test_split(X_train, y_train, test_size = 0.15, random_state = 42)
model_sel.fit(train_features, train_y, early_stopping_rounds=100, eval_set = [(valid_features, valid_y)], eval_metric = 'auc', verbose = 200)
get_feat = pd.DataFrame(sorted(zip(model_sel.feature_importances_, train_df.columns)), columns=['Value','Feature'])
features_sorted = get_feat.sort_values(by="Value", ascending=False)
features_sel = list(features_sorted[features_sorted['Value']>=50]['Feature'])
print(features_sel, len(features_sel))
# Selecting the best 150 features out of 202
best_features = features_sel[0:150]
# Defining new dataframes with only the selected features
X_train_sel = X_train[features_sel]
X_val_sel = X_val[features_sel]
X_test_sel = X_test[features_sel]
X_train_best = X_train_sel[best_features]
X_test_best = X_test_sel[best_features]
X_val_best = X_val_sel[best_features]
# Feature Scaling
sc = StandardScaler()
X_train_sel = sc.fit_transform(X_train_sel)
X_train_sel = sc.transform(X_train_sel)
X_test_sel = sc.fit_transform(X_test_sel)
X_test_sel = sc.transform(X_test_sel)
X_val_sel = sc.fit_transform(X_val_sel)
X_val_sel = sc.transform(X_val_sel)
X_train_best = sc.fit_transform(X_train_best)
X_train_best = sc.transform(X_train_best)
X_test_best = sc.fit_transform(X_test_best)
X_test_best = sc.transform(X_test_best)
X_val_best = sc.fit_transform(X_val_best)
X_val_best = sc.transform(X_val_best)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
aucs = {}
# inizialize the model (using the default parameters)
logistic = LogisticRegression(max_iter = 4000) # It doesn't converge for lower values
# fit the model with data
logistic.fit(X_train_best,y_train)
# Predicting the target values for X_test
y_pred = logistic.predict(X_test_best)
###Output
C:\Users\Notebook HP\Anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
###Markdown
Defining the confusion matrix for logistic regression and plotting it as a heatmap
###Code
def heat_conf(conf_matrix):
fig = plt.figure(figsize=(8,8))
# plotting the heatmap
class_names=[0,1] # name of classes
fig, ax = plt.subplots()
tick_marks = np.arange(len(class_names))
plt.xticks(tick_marks, class_names)
plt.yticks(tick_marks, class_names)
sns.heatmap(pd.DataFrame(conf_matrix), annot=True, cmap="YlGnBu" ,fmt='g')
ax.xaxis.set_label_position("top")
ax.set_ylim(len(conf_matrix)+1, -1)
plt.tight_layout()
plt.title('Confusion matrix', y=1.1)
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
conf_matrix = metrics.confusion_matrix(y_test, y_pred)
heat_conf(conf_matrix)
###Output
_____no_output_____
###Markdown
Let's see the Accuracy, Precision and Recall values:
###Code
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print("Precision:",metrics.precision_score(y_test, y_pred, pos_label = 1))
print("Recall:",metrics.recall_score(y_test, y_pred, pos_label = 1))
###Output
Accuracy: 0.9192446939970083
Precision: 0.4909090909090909
Recall: 0.007250268528464017
###Markdown
Now we can plot the ROC curve and calculate the AUC for logistic regression.
###Code
fig = plt.figure(figsize=(7,6))
y_pred_proba = logistic.predict_proba(X_test_best)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba, pos_label = 1)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
plt.show()
aucs['Logistic Regression'] = auc
auc
###Output
_____no_output_____
###Markdown
Random Forest
###Code
# Create a random forest classifier
clf = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1, max_depth = 10)
# Train the classifier
clf.fit(X_train_sel, y_train)
# Print the name and gini importance of each feature
for feature in zip(features_sel, clf.feature_importances_):
print(feature)
# Create a selector object that will use the random forest classifier to identify
# features that have an importance of more than 0.005
sfm = SelectFromModel(clf, threshold=0.005)
# Train the selector
sfm.fit(X_train_sel, y_train)
# Print the names of the most important features
for feature_list_index in sfm.get_support(indices=True):
print(features_sel[feature_list_index])
# Transform the data to create a new dataset containing only the most important features
X_important_train = sfm.transform(X_train_sel)
X_important_test = sfm.transform(X_test_sel)
# Create a new random forest classifier for the most important features
clf_important = RandomForestClassifier(n_estimators=200, random_state=0, n_jobs=-1)
# Train the new classifier on the new dataset containing the most important features
clf_important.fit(X_important_train, y_train)
# Apply The Full Featured Classifier To The Test Data
y_pred = clf.predict(X_test_sel)
# View The Accuracy, Precision and Recall Of our model with all features
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print("Precision:",metrics.precision_score(y_test, y_pred, pos_label = 1))
print("Recall:",metrics.recall_score(y_test, y_pred, pos_label = 1))
conf_matrix = metrics.confusion_matrix(y_test, y_pred)
heat_conf(conf_matrix)
fig = plt.figure(figsize=(7,6))
y_pred_proba = clf.predict_proba(X_test_sel)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba, pos_label = 1)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
plt.show()
aucs['Random Forest'] = auc
auc
# Apply The Full Featured Classifier To The Test Data
y_important_pred = clf_important.predict(X_important_test)
# View the Accuracy, Precision and Recall of our model with selected features
print("Accuracy:",metrics.accuracy_score(y_test, y_important_pred))
print("Precision:",metrics.precision_score(y_test, y_important_pred, pos_label = 1))
print("Recall:",metrics.recall_score(y_test, y_important_pred, pos_label = 1))
conf_matrix = metrics.confusion_matrix(y_test, y_important_pred)
heat_conf(conf_matrix)
fig = plt.figure(figsize=(7,6))
y_pred_proba = clf_important.predict_proba(X_important_test)[::,1]
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba, pos_label = 1)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
plt.show()
aucs['Random Forest selected'] = auc
auc
###Output
_____no_output_____
###Markdown
Light gbm
###Code
# Defining again the dataframes, without scaling
X_train_sel = X_train[features_sel]
X_val_sel = X_val[features_sel]
X_test_sel = X_test[features_sel]
###Output
_____no_output_____
###Markdown
Since the dataset is unbalanced, with over 90% of target values equal to 0, we need to add weights to give more importance to the target value 1 when it is found. (We tried downsampling, but it didn't give better results.)
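The explicit loop below works fine; for reference, an equivalent vectorized version (a sketch assuming the same 1/10 weighting) would be:

```python
import numpy as np

# weight 1 for the majority class (0) and 10 for the minority class (1), as in the loop below
tar_weight = np.where(y_train == 0, 1, 10)
```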
###Code
tar_weight = np.ones((len(X_train_sel),), dtype=int)
for i in range(len(X_train_sel)):
if y_train[i]== 0:
tar_weight[i]=1
else:
tar_weight[i]=10
# lgbm format
train = lgb.Dataset(X_train_sel, label = y_train, weight= tar_weight )
valid = lgb.Dataset(X_val_sel, label = y_val)
###Output
_____no_output_____
###Markdown
Cross Validation to find the best max depth:
###Code
cross = []
max_D = [2,3,5,10] # Possible values of max_depth parameter
for i in max_D:
params = {'boosting_type': 'gbdt',
'max_depth' : i,
'objective': 'binary',
'nthread': 5,
'num_leaves': 32,
'learning_rate': 0.05,
'max_bin': 512,
'subsample_for_bin': 200,
'subsample': 0.7,
'subsample_freq': 1,
'colsample_bytree': 0.8,
'reg_alpha': 20,
'reg_lambda': 20,
'min_split_gain': 0.5,
'min_child_weight': 1,
'min_child_samples': 10,
'scale_pos_weight': 1,
'num_class' : 1,
'metric' : 'auc'
}
lgbm = lgb.train(params,
train,
2500,
valid_sets=valid,
early_stopping_rounds= 100,
verbose_eval= 10
)
y_prob = lgbm.predict(X_val_sel)
cross.append(roc_auc_score(y_val,y_prob))
best = max_D[np.argmax(cross)]
print('The best max depth is ', best )
params = {'boosting_type': 'gbdt',
'max_depth' : 5,
'objective': 'binary',
'nthread': 5,
'num_leaves': 32,
'learning_rate': 0.05,
'max_bin': 512,
'subsample_for_bin': 200,
'subsample': 0.7,
'subsample_freq': 1,
'colsample_bytree': 0.8,
'reg_alpha': 20,
'reg_lambda': 20,
'min_split_gain': 0.5,
'min_child_weight': 1,
'min_child_samples': 10,
'scale_pos_weight': 1,
'num_class' : 1,
'metric' : 'auc'
}
lgbm = lgb.train(params,
train,
2500,
valid_sets=valid,
early_stopping_rounds= 100,
verbose_eval= 10
)
y_pred_prob = lgbm.predict(X_test_sel)
y_pred_prob # Probabilities relative to clients
# Giving each predicted probability a target value
y_pred = np.ones((len(X_test_sel),), dtype=int)
for i in range(len(y_pred_prob)):
if y_pred_prob[i]<=0.5:
y_pred[i]=0
else:
y_pred[i]=1
conf_matrix = metrics.confusion_matrix(y_test, y_pred)
heat_conf(conf_matrix)
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
print("Precision:",metrics.precision_score(y_test, y_pred, pos_label = 1))
print("Recall:",metrics.recall_score(y_test, y_pred, pos_label = 1))
fig = plt.figure(figsize=(7,6))
y_pred_proba = lgbm.predict(X_test_sel)
fpr, tpr, _ = metrics.roc_curve(y_test, y_pred_proba, pos_label = 1)
auc = metrics.roc_auc_score(y_test, y_pred_proba)
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
aucs['Light GBM'] = auc
auc
best_method = max(aucs.items(), key=operator.itemgetter(1))[0]
best_method
###Output
_____no_output_____ |
notebooks/Time_series_analysis.ipynb | ###Markdown
This notebook was written to showcase a typical data analytics pipeline for a data-driven building performance analysis. Import necessary packages for the analysis
###Code
import pandas as pd # package for handling tables of data
import matplotlib.pyplot as plt # package for generating plots
import seaborn as sns # package similar to matplotlib with additional functionalities
import os
from datetime import date, timedelta # package for handling date-time formats
import numpy as np
# This code below makes the plot appear inside the browser.
%matplotlib inline
import mpld3
mpld3.enable_notebook()
plt.rcParams['figure.figsize'] = (13, 7)
repos_path = "C:/Users/MANOJ/the-building-data-genome-project/"
###Output
_____no_output_____
###Markdown
We load the consumption of each building as a time series dataframe using pandas. Data from each building was recorded at different time ranges.
###Code
time_series = pd.read_csv(os.path.join(repos_path,"data/raw/temp_open_utc.csv"), index_col="timestamp", parse_dates=True)
time_series.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 40940 entries, 2010-01-01 08:00:00 to 2016-01-01 06:00:00
Columns: 507 entries, PrimClass_Jolie to PrimClass_Ulysses
dtypes: float64(507)
memory usage: 158.7 MB
###Markdown
The 507 columns indicate that we have consumption data for 507 buildings. The DatetimeIndex with 40940 entries means there are that many rows of timestamps, ranging from 2010 to 2016 at a 1-hour interval.
###Code
time_series.head(5)
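# Optional sanity check: the gaps between consecutive timestamps should be mostly 1 hour
time_series.index.to_series().diff().value_counts().head()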
###Output
_____no_output_____
###Markdown
The metadata contains information on each building, such as: start date, end date, heating type, industry, number of floors, occupants, space usage (office, lab, classroom), area, time zone, age of the building, and the corresponding weather file.
###Code
metadata = pd.read_csv(os.path.join(repos_path,"data/raw/meta_open.csv"), index_col='uid', parse_dates=["datastart","dataend"], dayfirst=True)
metadata.info()
metadata.tail(5)
###Output
_____no_output_____
###Markdown
Out of the 507 buildings available in the dataframe, we choose one to visualize the consumption profile and the correlation of consumption with temperature and humidity.
###Code
building_selected = 'Office_Elizabeth'
start = metadata.loc[building_selected]['datastart']
end = metadata.loc[building_selected]['dataend']
time_series[building_selected][start:end].plot()
plt.xlabel('Time Stamp')
plt.ylabel('Consumption')
weather_file = metadata.loc[building_selected, 'newweatherfilename']
weather = pd.read_csv(os.path.join(repos_path,"data/external/weather/",weather_file),index_col="timestamp", parse_dates=True)
start_date = '2012-02-01'
end_date = '2012-02-03'
temperature = weather[['TemperatureC']].resample('H').mean()
temperature = temperature[start_date:end_date]
humidity = weather[['Humidity']].resample('H').mean()
humidity = humidity[start_date:end_date]
office = time_series[[building_selected]][start_date:end_date]
# Function for normalize
def normalize(df):
return (df-df.min())/(df.max()-df.min())
frame = normalize(office.join(temperature.join(humidity)))
frame.plot()
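# The correlation of consumption with temperature and humidity mentioned above,
# computed on the normalized frame built in this cell
frame.corr()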
###Output
_____no_output_____ |
StatsForDataAnalysis/stat.hi2_test.ipynb | ###Markdown
Pearson's chi-squared goodness-of-fit test ($\chi^2$)
###Code
import numpy as np
import pandas as pd
from scipy import stats
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Completed fertility

Completed fertility is the number of children born to a woman by the end of the conventional reproductive age (45 years). The number of children is known for 1878 women over 45 who took part in a sociological survey of Swiss residents. This variable is a typical count, so it makes sense to try to describe it with a Poisson distribution.

* **sample** - an integer vector of length $n$ giving the number of children of each surveyed woman
* **hypothesis $H_0$** - the variable in question follows a Poisson distribution
###Code
fin = open('fertility.txt', 'r')
data = list(map(lambda x: int(x.strip()), fin.readlines()))
# Note: the data read from the file above is replaced here with a synthetic 0/1 sample
data = []
for i in range(0, 67, 1):
data.append(1)
for i in range(67, 100, 1):
data.append(0)
#data
pylab.bar(range(len(np.bincount(data))), np.bincount(data), color = 'b')
pylab.legend()
l = np.mean(data)
l
###Output
_____no_output_____
###Markdown
Goodness-of-fit test
###Code
observed_frequences = np.bincount(data)
observed_frequences
expected_frequences = [len(data)*stats.poisson.pmf(x, l) for x in range(min(data), max(data) + 1)]
expected_frequences
pylab.bar(range(len(expected_frequences)), expected_frequences, color = 'b', label = 'poisson_distr')
pylab.legend()
###Output
_____no_output_____
###Markdown
The chi-square test statistic: $$\chi^2=\sum_{i=1}^K \frac{\left(n_i- np_i\right)^2}{np_i}$$ Under the null hypothesis this statistic follows a chi-square distribution with $K-1-m$ degrees of freedom, where $m$ is the number of distribution parameters estimated from the sample.
###Code
stats.chisquare(observed_frequences, expected_frequences, ddof = 1)
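# Manual computation of the statistic from the formula above, as a cross-check
# (assumes observed_frequences and expected_frequences have equal length, as built above)
sum((o - e) ** 2 / e for o, e in zip(observed_frequences, expected_frequences))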
###Output
_____no_output_____ |
Video_Lecture_NBs/NB_04_OOP.ipynb | ###Markdown
Object Oriented Programming (OOP): Creating a Financial Instrument Class. An example class: pandas.DataFrame. Goal: handling and manipulating any tabular data (efficiently)
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
__Instantiation__
###Code
df = pd.read_csv("titanic.csv")
df
type(df)
###Output
_____no_output_____
###Markdown
__Attributes__
###Code
df.columns
df.shape
###Output
_____no_output_____
###Markdown
__Methods__
###Code
df.info()
df.sort_values(by = "age", ascending = False)
###Output
_____no_output_____
###Markdown
The FinancialInstrument Class live in action (Part 1). Goal: analyzing financial instruments (e.g. stocks) efficiently
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import yfinance as yf
plt.style.use("seaborn")
class FinancialInstrument():
''' Class for analyzing Financial Instruments like stocks.
Attributes
==========
ticker: str
ticker symbol to work with
start: str
start date for data retrieval
end: str
end date for data retrieval
Methods
=======
get_data:
retrieves daily price data (from yahoo finance) and prepares the data
log_returns:
calculates log returns
plot_prices:
creates a price chart
plot_returns:
plots log returns either as time series ("ts") or histogram ("hist")
set_ticker:
sets a new ticker
mean_return:
calculates mean return
std_returns:
calculates the standard deviation of returns (risk)
annualized_perf:
calculates annualized return and risk
'''
def __init__(self, ticker, start, end):
self.ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self.ticker,
self.start, self.end)
def get_data(self):
''' retrieves (from yahoo finance) and prepares the data
'''
raw = yf.download(self.ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
'''calculates log returns
'''
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
''' creates a price chart
'''
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self.ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
''' plots log returns either as time series ("ts") or histogram ("hist")
'''
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self.ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self.ticker), fontsize = 15)
def set_ticker(self, ticker = None):
'''sets a new ticker
'''
if ticker is not None:
self.ticker = ticker
self.get_data()
self.log_returns()
def mean_return(self, freq = None):
'''calculates mean return
'''
if freq is None:
return self.data.log_returns.mean()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.mean()
def std_returns(self, freq = None):
'''calculates the standard deviation of returns (risk)
'''
if freq is None:
return self.data.log_returns.std()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.std()
def annualized_perf(self):
'''calculates annualized return and risk
'''
mean_return = round(self.data.log_returns.mean() * 252, 3)
risk = round(self.data.log_returns.std() * np.sqrt(252), 3)
print("Return: {} | Risk: {}".format(mean_return, risk))
###Output
_____no_output_____
###Markdown
__Instantiation__
###Code
stock = FinancialInstrument(ticker = "AAPL", start = "2015-01-01",
end = "2019-12-31" ) # instantiation
stock
type(stock)
###Output
_____no_output_____
###Markdown
__Attributes__
###Code
#stock.
stock.ticker
stock.start
stock.end
stock.data
###Output
_____no_output_____
###Markdown
__Methods__
###Code
stock.plot_prices()
stock.plot_returns()
stock.plot_returns(kind = "hist")
###Output
_____no_output_____
###Markdown
The FinancialInstrument Class live in action (Part 2) __More Methods__
###Code
stock.mean_return()
stock.data.log_returns.mean()
stock.mean_return(freq = "w")
stock.std_returns()
stock.std_returns(freq = "w")
stock.annualized_perf()
stock.set_ticker("GE")
stock.ticker
stock.plot_prices()
stock.annualized_perf()
###Output
_____no_output_____
###Markdown
Building the FinancialInstrument Class from scratch: Instantiation
###Code
class FinancialInstrument():
pass
stock = FinancialInstrument() # instantiation
stock
class FinancialInstrument():
def __init__(self, ticker, start, end):
self.ticker = ticker
self.start = start
self.end = end
stock = FinancialInstrument("AAPL", "2015-01-01", "2019-12-31") # instantiation
stock
stock.ticker
stock.end
stock.start
###Output
_____no_output_____
###Markdown
The method get_data()
###Code
yf.download("AAPL", "2015-01-01", "2019-12-31").Close.to_frame()
raw = yf.download("AAPL", "2015-01-01", "2019-12-31").Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
raw
class FinancialInstrument():
def __init__(self, ticker, start, end):
self.ticker = ticker
self.start = start
self.end = end
self.get_data()
def get_data(self):
raw = yf.download(self.ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
stock = FinancialInstrument("AAPL", "2015-01-01", "2019-12-31")
stock.ticker
stock.data
###Output
_____no_output_____
###Markdown
The method log_returns()
###Code
stock.data
class FinancialInstrument():
def __init__(self, ticker, start, end):
self.ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def get_data(self):
raw = yf.download(self.ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
stock = FinancialInstrument("AAPL", "2015-01-01", "2019-12-31")
stock.data
stock.log_returns()
###Output
_____no_output_____
###Markdown
(String) Representation
###Code
stock
print(stock)
class FinancialInstrument():
def __init__(self, ticker, start, end):
self.ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self.ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self.ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
stock = FinancialInstrument("AAPL", "2015-01-01", "2019-12-31")
stock
print(stock)
###Output
_____no_output_____
###Markdown
The methods plot_prices() and plot_returns()
###Code
stock
stock.data
stock.data.price.plot()
plt.show()
stock.data.log_returns.plot()
plt.show()
stock.data.log_returns.hist(bins = 100)
plt.show()
class FinancialInstrument():
def __init__(self, ticker, start, end):
self.ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self.ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self.ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self.ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self.ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self.ticker), fontsize = 15)
stock = FinancialInstrument("aapl", "2015-01-01", "2019-12-31")
stock.plot_prices()
stock.plot_returns()
stock.plot_returns(kind = "hist")
###Output
_____no_output_____
###Markdown
Encapsulation
###Code
stock
stock.plot_prices()
stock.ticker
stock.ticker = "GE"
stock.ticker
stock.plot_prices()
class FinancialInstrument():
def __init__(self, ticker, start, end):
self._ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self._ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self._ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self._ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self._ticker), fontsize = 15)
stock = FinancialInstrument("aapl", "2015-01-01", "2019-12-31")
stock
stock.ticker
#stock.
stock._ticker
###Output
_____no_output_____
###Markdown
The method set_ticker()
###Code
class FinancialInstrument():
def __init__(self, ticker, start, end):
self._ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self._ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self._ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self._ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self._ticker), fontsize = 15)
def set_ticker(self, ticker = None):
if ticker is not None:
self._ticker = ticker
self.get_data()
self.log_returns()
stock = FinancialInstrument("aapl", "2015-01-01", "2019-12-31")
stock.plot_prices()
stock.set_ticker("GE")
stock.plot_prices()
###Output
_____no_output_____
###Markdown
Adding more methods and performance metrics
###Code
class FinancialInstrument():
def __init__(self, ticker, start, end):
self._ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self._ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self._ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self._ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self._ticker), fontsize = 15)
def set_ticker(self, ticker = None):
if ticker is not None:
self._ticker = ticker
self.get_data()
self.log_returns()
def mean_return(self, freq = None):
if freq is None:
return self.data.log_returns.mean()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.mean()
def std_returns(self, freq = None):
if freq is None:
return self.data.log_returns.std()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.std()
def annualized_perf(self):
mean_return = round(self.data.log_returns.mean() * 252, 3)
risk = round(self.data.log_returns.std() * np.sqrt(252), 3)
print("Return: {} | Risk: {}".format(mean_return, risk))
stock = FinancialInstrument("aapl", "2015-01-01", "2019-12-31")
stock.mean_return()
stock.mean_return("w")
stock.std_returns()
stock.std_returns("a")
stock.annualized_perf()
###Output
_____no_output_____
###Markdown
Inheritance
###Code
class FinancialInstrumentBase(): # Parent
def __init__(self, ticker, start, end):
self._ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self._ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self._ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self._ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self._ticker), fontsize = 15)
def set_ticker(self, ticker = None):
if ticker is not None:
self._ticker = ticker
self.get_data()
self.log_returns()
class RiskReturn(FinancialInstrumentBase): # Child
def __repr__(self):
return "RiskReturn(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def mean_return(self, freq = None):
if freq is None:
return self.data.log_returns.mean()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.mean()
def std_returns(self, freq = None):
if freq is None:
return self.data.log_returns.std()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.std()
def annualized_perf(self):
mean_return = round(self.data.log_returns.mean() * 252, 3)
risk = round(self.data.log_returns.std() * np.sqrt(252), 3)
print("Return: {} | Risk: {}".format(mean_return, risk))
stock = RiskReturn("aapl", "2015-01-01", "2019-12-31")
stock.annualized_perf()
stock.data
stock.plot_prices()
stock.set_ticker("ge")
stock
stock.mean_return("w")
###Output
_____no_output_____
###Markdown
Inheritance and the super() Function
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import yfinance as yf
plt.style.use("seaborn")
class FinancialInstrumentBase(): # Parent
def __init__(self, ticker, start, end):
self._ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self._ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self._ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self._ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self._ticker), fontsize = 15)
def set_ticker(self, ticker = None):
if ticker is not None:
self._ticker = ticker
self.get_data()
self.log_returns()
class RiskReturn(FinancialInstrumentBase): # Child
def __init__(self, ticker, start, end, freq = None):
self.freq = freq
super().__init__(ticker, start, end)
def __repr__(self):
return "RiskReturn(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def mean_return(self):
if self.freq is None:
return self.data.log_returns.mean()
else:
resampled_price = self.data.price.resample(self.freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.mean()
def std_returns(self):
if self.freq is None:
return self.data.log_returns.std()
else:
resampled_price = self.data.price.resample(self.freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.std()
def annualized_perf(self):
mean_return = round(self.data.log_returns.mean() * 252, 3)
risk = round(self.data.log_returns.std() * np.sqrt(252), 3)
print("Return: {} | Risk: {}".format(mean_return, risk))
stock = RiskReturn("aapl", "2015-01-01", "2019-12-31", freq = "w")
stock.freq
stock._ticker
stock.data
stock.plot_prices()
stock.mean_return()
stock.annualized_perf()
###Output
_____no_output_____
###Markdown
Docstrings
###Code
class FinancialInstrument():
''' Class to analyze Financial Instruments like stocks
'''
def __init__(self, ticker, start, end):
self._ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def get_data(self):
raw = yf.download(self._ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self._ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
''' plots log returns either as time series ("ts") or as histogram ("hist")
'''
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self._ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self._ticker), fontsize = 15)
def set_ticker(self, ticker = None):
if ticker is not None:
self._ticker = ticker
self.get_data()
self.log_returns()
def mean_return(self, freq = None):
if freq is None:
return self.data.log_returns.mean()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.mean()
def std_returns(self, freq = None):
if freq is None:
return self.data.log_returns.std()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.std()
def annualized_perf(self):
mean_return = round(self.data.log_returns.mean() * 252, 3)
risk = round(self.data.log_returns.std() * np.sqrt(252), 3)
print("Return: {} | Risk: {}".format(mean_return, risk))
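# With the docstring in place, the class documentation can be inspected directly:
help(FinancialInstrument)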
###Output
_____no_output_____
###Markdown
__Final Version__
###Code
class FinancialInstrument():
''' Class for analyzing Financial Instruments like stocks.
Attributes
==========
ticker: str
ticker symbol to work with
start: str
start date for data retrieval
end: str
end date for data retrieval
Methods
=======
get_data:
retrieves daily price data (from yahoo finance) and prepares the data
log_returns:
calculates log returns
plot_prices:
creates a price chart
plot_returns:
plots log returns either as time series ("ts") or histogram ("hist")
set_ticker:
sets a new ticker
mean_return:
calculates mean return
std_returns:
calculates the standard deviation of returns (risk)
annualized_perf:
calculates annualized return and risk
'''
def __init__(self, ticker, start, end):
self._ticker = ticker
self.start = start
self.end = end
self.get_data()
self.log_returns()
def __repr__(self):
return "FinancialInstrument(ticker = {}, start = {}, end = {})".format(self._ticker,
self.start, self.end)
def get_data(self):
''' retrieves (from yahoo finance) and prepares the data
'''
raw = yf.download(self._ticker, self.start, self.end).Close.to_frame()
raw.rename(columns = {"Close":"price"}, inplace = True)
self.data = raw
def log_returns(self):
'''calculates log returns
'''
self.data["log_returns"] = np.log(self.data.price/self.data.price.shift(1))
def plot_prices(self):
''' creates a price chart
'''
self.data.price.plot(figsize = (12, 8))
plt.title("Price Chart: {}".format(self._ticker), fontsize = 15)
def plot_returns(self, kind = "ts"):
''' plots log returns either as time series ("ts") or histogram ("hist")
'''
if kind == "ts":
self.data.log_returns.plot(figsize = (12, 8))
plt.title("Returns: {}".format(self._ticker), fontsize = 15)
elif kind == "hist":
self.data.log_returns.hist(figsize = (12, 8), bins = int(np.sqrt(len(self.data))))
plt.title("Frequency of Returns: {}".format(self._ticker), fontsize = 15)
def set_ticker(self, ticker = None):
'''sets a new ticker
'''
if ticker is not None:
self._ticker = ticker
self.get_data()
self.log_returns()
def mean_return(self, freq = None):
'''calculates mean return
'''
if freq is None:
return self.data.log_returns.mean()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.mean()
def std_returns(self, freq = None):
'''calculates the standard deviation of returns (risk)
'''
if freq is None:
return self.data.log_returns.std()
else:
resampled_price = self.data.price.resample(freq).last()
resampled_returns = np.log(resampled_price / resampled_price.shift(1))
return resampled_returns.std()
def annualized_perf(self):
'''calculates annualized return and risk
'''
mean_return = round(self.data.log_returns.mean() * 252, 3)
risk = round(self.data.log_returns.std() * np.sqrt(252), 3)
print("Return: {} | Risk: {}".format(mean_return, risk))
###Output
_____no_output_____ |
Machine_Learning_Workshop.ipynb | ###Markdown
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
###Output
_____no_output_____
###Markdown
Welcome to Ivan's Machine Learning Workshop. Module 1: Image Classification. Step 1: Import the required TensorFlow modules
###Code
# TensorFlow and tf.keras
import tensorflow as tf #main tensorflow library
import numpy as np #for math
import matplotlib.pyplot as plt #for graphs
print(tf.__version__) #show the tensorflow version
###Output
_____no_output_____
###Markdown
Step 2: Import the image data. The data can be obtained here: https://www.tensorflow.org/datasets/catalog/overview
###Code
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
Step 3: Normalize the image data. Usually, image data comes as grayscale values from 0 to 255 (sometimes as RGB colour channels instead). We need to normalize the data before feeding it into the neural network by rescaling each value to a number between 0 and 1.
###Code
plt.figure()
plt.title("Before Normalize")
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
train_images = train_images / 255.0
test_images = test_images / 255.0
plt.figure()
plt.title("After Normalize")
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
###Output
_____no_output_____
###Markdown
Step 4: Build the network
###Code
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)), #Flatten the image to 1D, input shape is 28x28, so output is array of 784 numbers
tf.keras.layers.Dense(128, activation='relu'), #First layer 128 neurons
tf.keras.layers.Dense(10), #Last layer must have same number of neurons as categories
tf.keras.layers.Softmax() #Normalize the output probabilities
])
###Output
_____no_output_____
###Markdown
Step 5: Compile the model. Here you specify the "settings" for the training process, i.e. which optimizer to use, what the loss function is, and which metrics to track. Here, "adam" is used as the optimizer. For more info, read this: https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/ For the loss function, we use the cross-entropy loss; the loss function is the value we want to minimize during training. The *from_logits* setting tells the loss whether the model outputs raw, unnormalized scores (logits) or already-normalized probabilities; since our model ends with a Softmax layer, its outputs are probabilities, so we set *from_logits* to False.
###Code
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), # the model outputs probabilities (Softmax layer), not raw logits
              metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Step 6: Start training!
###Code
model.fit(train_images, train_labels, epochs=10)
###Output
_____no_output_____
###Markdown
Step 7: Evaluate the model
###Code
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('\nTest accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
Step 8: Test the model!
###Code
predictions = model.predict(test_images)
#Set i to a test-image index to check its prediction
i = 66
#Print out to see
print(predictions[i])
#Plots the graph
fig, ax = plt.subplots(1,1)
ax.bar(range(10), predictions[i], color="#777777")
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt','Sneaker', 'Bag', 'Ankle boot']
plt.xticks(range(10))
ax.set_xticklabels(class_names, rotation='vertical', fontsize=18)
plt.show()
#Show the image
plt.imshow(test_images[i])
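# Predicted class index and name for image i, compared with the true label
pred_idx = np.argmax(predictions[i])
print("Predicted:", class_names[pred_idx], "| Actual:", class_names[test_labels[i]])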
###Output
_____no_output_____ |
examples/finetune_emot.ipynb | ###Markdown
Finetuning Emot. Emot is an Emotion Recognition dataset with 5 possible labels: `sadness`, `anger`, `love`, `fear`, `happy`
###Code
import os, sys
sys.path.append('../')
os.chdir('../')
import random
import numpy as np
import pandas as pd
import torch
from torch import optim
import torch.nn.functional as F
from tqdm import tqdm
from transformers import BertForSequenceClassification, BertConfig, BertTokenizer
from nltk.tokenize import TweetTokenizer
from utils.forward_fn import forward_sequence_classification
from utils.metrics import document_sentiment_metrics_fn
from utils.data_utils import EmotionDetectionDataset, EmotionDetectionDataLoader
###
# common functions
###
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
def count_param(module, trainable=False):
if trainable:
return sum(p.numel() for p in module.parameters() if p.requires_grad)
else:
return sum(p.numel() for p in module.parameters())
def get_lr(optimizer):
for param_group in optimizer.param_groups:
return param_group['lr']
def metrics_to_string(metric_dict):
string_list = []
for key, value in metric_dict.items():
string_list.append('{}:{:.2f}'.format(key, value))
return ' '.join(string_list)
# Set random seed
set_seed(26092020)
###Output
_____no_output_____
###Markdown
Load Model
###Code
# Load Tokenizer and Config
tokenizer = BertTokenizer.from_pretrained('indobenchmark/indobert-base-p1')
config = BertConfig.from_pretrained('indobenchmark/indobert-base-p1')
config.num_labels = EmotionDetectionDataset.NUM_LABELS
# Instantiate model
model = BertForSequenceClassification.from_pretrained('indobenchmark/indobert-base-p1', config=config)
model
count_param(model)
###Output
_____no_output_____
###Markdown
Prepare Dataset
###Code
train_dataset_path = './dataset/emot_emotion-twitter/train_preprocess.csv'
valid_dataset_path = './dataset/emot_emotion-twitter/valid_preprocess.csv'
test_dataset_path = './dataset/emot_emotion-twitter/test_preprocess_masked_label.csv'
train_dataset = EmotionDetectionDataset(train_dataset_path, tokenizer, lowercase=True)
valid_dataset = EmotionDetectionDataset(valid_dataset_path, tokenizer, lowercase=True)
test_dataset = EmotionDetectionDataset(test_dataset_path, tokenizer, lowercase=True)
train_loader = EmotionDetectionDataLoader(dataset=train_dataset, max_seq_len=512, batch_size=32, num_workers=16, shuffle=True)
valid_loader = EmotionDetectionDataLoader(dataset=valid_dataset, max_seq_len=512, batch_size=32, num_workers=16, shuffle=False)
test_loader = EmotionDetectionDataLoader(dataset=test_dataset, max_seq_len=512, batch_size=32, num_workers=16, shuffle=False)
w2i, i2w = EmotionDetectionDataset.LABEL2INDEX, EmotionDetectionDataset.INDEX2LABEL
print(w2i)
print(i2w)
###Output
{'sadness': 0, 'anger': 1, 'love': 2, 'fear': 3, 'happy': 4}
{0: 'sadness', 1: 'anger', 2: 'love', 3: 'fear', 4: 'happy'}
###Markdown
Test model on sample sentences
###Code
text = 'Bahagia hatiku melihat pernikahan putri sulungku yang cantik jelita'
subwords = tokenizer.encode(text)
subwords = torch.LongTensor(subwords).view(1, -1).to(model.device)
logits = model(subwords)[0]
label = torch.topk(logits, k=1, dim=-1)[1].squeeze().item()
print(f'Text: {text} | Label : {i2w[label]} ({F.softmax(logits, dim=-1).squeeze()[label] * 100:.3f}%)')
text = 'Budi pergi ke pondok indah mall membeli cakwe'
subwords = tokenizer.encode(text)
subwords = torch.LongTensor(subwords).view(1, -1).to(model.device)
logits = model(subwords)[0]
label = torch.topk(logits, k=1, dim=-1)[1].squeeze().item()
print(f'Text: {text} | Label : {i2w[label]} ({F.softmax(logits, dim=-1).squeeze()[label] * 100:.3f}%)')
text = 'Dasar anak sialan!! Kurang ajar!!'
subwords = tokenizer.encode(text)
subwords = torch.LongTensor(subwords).view(1, -1).to(model.device)
logits = model(subwords)[0]
label = torch.topk(logits, k=1, dim=-1)[1].squeeze().item()
print(f'Text: {text} | Label : {i2w[label]} ({F.softmax(logits, dim=-1).squeeze()[label] * 100:.3f}%)')
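# Optional helper sketch that reuses the objects defined above (tokenizer, model, i2w)
# to avoid repeating the prediction boilerplate for each sample sentence
def predict_emotion(text):
    subwords = torch.LongTensor(tokenizer.encode(text)).view(1, -1).to(model.device)
    logits = model(subwords)[0]
    label = torch.topk(logits, k=1, dim=-1)[1].squeeze().item()
    return i2w[label], F.softmax(logits, dim=-1).squeeze()[label].item()
print(predict_emotion('Budi pergi ke pondok indah mall membeli cakwe'))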
###Output
Text: Dasar anak sialan!! Kurang ajar!! | Label : sadness (25.323%)
###Markdown
Fine Tuning & Evaluation
###Code
optimizer = optim.Adam(model.parameters(), lr=3e-6)
model = model.cuda()
# Train
n_epochs = 5
for epoch in range(n_epochs):
model.train()
torch.set_grad_enabled(True)
total_train_loss = 0
list_hyp, list_label = [], []
train_pbar = tqdm(train_loader, leave=True, total=len(train_loader))
for i, batch_data in enumerate(train_pbar):
# Forward model
loss, batch_hyp, batch_label = forward_sequence_classification(model, batch_data[:-1], i2w=i2w, device='cuda')
# Update model
optimizer.zero_grad()
loss.backward()
optimizer.step()
tr_loss = loss.item()
total_train_loss = total_train_loss + tr_loss
# Calculate metrics
list_hyp += batch_hyp
list_label += batch_label
train_pbar.set_description("(Epoch {}) TRAIN LOSS:{:.4f} LR:{:.8f}".format((epoch+1),
total_train_loss/(i+1), get_lr(optimizer)))
# Calculate train metric
metrics = document_sentiment_metrics_fn(list_hyp, list_label)
print("(Epoch {}) TRAIN LOSS:{:.4f} {} LR:{:.8f}".format((epoch+1),
total_train_loss/(i+1), metrics_to_string(metrics), get_lr(optimizer)))
# Evaluate on validation
model.eval()
torch.set_grad_enabled(False)
total_loss, total_correct, total_labels = 0, 0, 0
list_hyp, list_label = [], []
pbar = tqdm(valid_loader, leave=True, total=len(valid_loader))
for i, batch_data in enumerate(pbar):
batch_seq = batch_data[-1]
loss, batch_hyp, batch_label = forward_sequence_classification(model, batch_data[:-1], i2w=i2w, device='cuda')
# Calculate total loss
valid_loss = loss.item()
total_loss = total_loss + valid_loss
# Calculate evaluation metrics
list_hyp += batch_hyp
list_label += batch_label
metrics = document_sentiment_metrics_fn(list_hyp, list_label)
pbar.set_description("VALID LOSS:{:.4f} {}".format(total_loss/(i+1), metrics_to_string(metrics)))
metrics = document_sentiment_metrics_fn(list_hyp, list_label)
print("(Epoch {}) VALID LOSS:{:.4f} {}".format((epoch+1),
total_loss/(i+1), metrics_to_string(metrics)))
# Evaluate on test
model.eval()
torch.set_grad_enabled(False)
total_loss, total_correct, total_labels = 0, 0, 0
list_hyp, list_label = [], []
pbar = tqdm(test_loader, leave=True, total=len(test_loader))
for i, batch_data in enumerate(pbar):
_, batch_hyp, _ = forward_sequence_classification(model, batch_data[:-1], i2w=i2w, device='cuda')
list_hyp += batch_hyp
# Save prediction
df = pd.DataFrame({'label':list_hyp}).reset_index()
df.to_csv('pred.txt', index=False)
print(df)
###Output
100%|██████████| 14/14 [00:01<00:00, 10.57it/s]
###Markdown
Test fine-tuned model on sample sentences
###Code
text = 'Bahagia hatiku melihat pernikahan putri sulungku yang cantik jelita'
subwords = tokenizer.encode(text)
subwords = torch.LongTensor(subwords).view(1, -1).to(model.device)
logits = model(subwords)[0]
label = torch.topk(logits, k=1, dim=-1)[1].squeeze().item()
print(f'Text: {text} | Label : {i2w[label]} ({F.softmax(logits, dim=-1).squeeze()[label] * 100:.3f}%)')
text = 'Budi pergi ke pondok indah mall membeli cakwe'
subwords = tokenizer.encode(text)
subwords = torch.LongTensor(subwords).view(1, -1).to(model.device)
logits = model(subwords)[0]
label = torch.topk(logits, k=1, dim=-1)[1].squeeze().item()
print(f'Text: {text} | Label : {i2w[label]} ({F.softmax(logits, dim=-1).squeeze()[label] * 100:.3f}%)')
text = 'Dasar anak sialan!! Kurang ajar!!'
subwords = tokenizer.encode(text)
subwords = torch.LongTensor(subwords).view(1, -1).to(model.device)
logits = model(subwords)[0]
label = torch.topk(logits, k=1, dim=-1)[1].squeeze().item()
print(f'Text: {text} | Label : {i2w[label]} ({F.softmax(logits, dim=-1).squeeze()[label] * 100:.3f}%)')
###Output
Text: Dasar anak sialan!! Kurang ajar!! | Label : anger (91.915%)
|
Workbook_BasicGates.ipynb | ###Markdown
Basic Gates Kata Workbook**What is this workbook?** A workbook is a collection of problems, accompanied by solutions to them. The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required.Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck or for reinforcement. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.This workbook describes the solutions to the problems offered in the [Basic Gates Kata](./BasicGates.ipynb). Since the tasks are offered as programming problems, the explanations also cover some elements of Q that might be non-obvious for a novitiate. **What you should know for this workbook**You should be familiar with the following concepts and associated techniques **prior to** beginning work on the Basic Gates Quantum Kata.1. [Complex numbers](../tutorials/ComplexArithmetic/ComplexArithmetic.ipynb).2. Basic linear algebra (multiplying column vectors by matrices), per the first part of [this tutorial](../tutorials/LinearAlgebra/LinearAlgebra.ipynb).3. [The concept of qubit and its properties](../tutorials/Qubit/Qubit.ipynb).4. [Single-qubit gates](../tutorials/SingleQubitGates/SingleQubitGates.ipynb).You can also consult the [complete Quantum Katas learning path](https://github.com/microsoft/QuantumKataslearning-path). Part 1. Single-Qubit Gates Task 1.1. State flip: $|0\rangle$ to $|1\rangle$ and vice versa**Input:** A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.**Goal:** Change the state of the qubit to $\alpha |1\rangle + \beta |0\rangle$.**Example:**If the qubit is in state $|0\rangle$, change its state to $|1\rangle$.If the qubit is in state $|1\rangle$, change its state to $|0\rangle$. Solution We can recognise that the Pauli X gate will change the state $|0\rangle$ to $|1\rangle$ and vice versa, and $\alpha |0\rangle + \beta |1\rangle$ to $\alpha |1\rangle + \beta |0\rangle$.As a reminder, the Pauli X gate is defined by the following matrix: $$X = \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}$$We can see how it affects, for example, the basis state $|0\rangle$: $$X|0\rangle= \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}\begin{bmatrix} 1\\ 0\end{bmatrix}=\begin{bmatrix} 0 \cdot 1 + 1 \cdot 0\\ 1 \cdot 1 + 0 \cdot 0\end{bmatrix}=\begin{bmatrix} 0\\ 1\end{bmatrix}=|1\rangle$$ Similarly, we can consider the effect of the X gate on the superposition state $|\psi\rangle = 0.6|0\rangle + 0.8|1\rangle$: $$X|\psi\rangle= \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}\begin{bmatrix} 0.6\\ 0.8\end{bmatrix}=\begin{bmatrix} 0 \cdot 0.6 + 1 \cdot 0.8\\ 1 \cdot 0.6 + 0 \cdot 0.8\end{bmatrix}=\begin{bmatrix} 0.8\\ 0.6\end{bmatrix}= 0.8|0\rangle + 0.6|1\rangle$$
###Code
%kata T101_StateFlip
operation StateFlip (q : Qubit) : Unit is Adj+Ctl {
X(q);
}
###Output
_____no_output_____
###Markdown
[Return to Task 1.1 of the Basic Gates kata.](./BasicGates.ipynbTask-1.1.-State-flip:-$|0\rangle$-to-$|1\rangle$-and-vice-versa) Task 1.2. Basis change: $|0\rangle$ to $|+\rangle$ and $|1\rangle$ to $|-\rangle$ (and vice versa)**Input**: A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.**Goal**: Change the state of the qubit as follows:* If the qubit is in state $|0\rangle$, change its state to $|+\rangle = \frac{1}{\sqrt{2}} \big(|0\rangle + |1\rangle\big)$.* If the qubit is in state $|1\rangle$, change its state to $|-\rangle = \frac{1}{\sqrt{2}} \big(|0\rangle - |1\rangle\big)$.* If the qubit is in superposition, change its state according to the effect on basis vectors. Solution We can recognize that the Hadamard gate changes states $|0\rangle$ and $|1\rangle$ to $|+\rangle$ and $|-\rangle$, respectively, and vice versa.As a reminder, the Hadamard gate is defined by the following matrix: $$\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1 \\1 & -1\end{bmatrix}$$For example, we can work out $H|1\rangle$ as follows: $$H|1\rangle=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0\\ 1\\ \end{bmatrix}=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 \cdot 0 + 1 \cdot 1 \\ 1 \cdot 0 + (-1) \cdot 1 \end{bmatrix}= \frac{1}{\sqrt{2}}\begin{bmatrix} 1\\ -1 \end{bmatrix}= \frac{1}{\sqrt{2}} \big(|0\rangle - |1\rangle\big) = |-\rangle$$ Similarly, we can consider the effect of the Hadamard gate on the superposition state $|\psi\rangle = 0.6|0\rangle + 0.8|1\rangle$ (rounding the numbers to 4 decimal places): $$H|\psi⟩ = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} \alpha\\ \beta\\ \end{bmatrix} =\frac{1}{\sqrt{2}}\begin{bmatrix} \alpha + \beta\\ \alpha - \beta\\ \end{bmatrix}= 0.7071\begin{bmatrix} 1.4\\ -0.2\\ \end{bmatrix}= \begin{bmatrix} 0.98994\\ -0.14142\\ \end{bmatrix}= 0.9899|0\rangle - 0.1414|1\rangle $$
###Code
%kata T102_BasisChange
operation BasisChange (q : Qubit) : Unit is Adj+Ctl {
H(q);
}
###Output
_____no_output_____
###Markdown
[Return to Task 1.2 of the Basic Gates kata](./BasicGates.ipynbTask-1.2.-Basis-change:-$|0\rangle$-to-$|+\rangle$-and-$|1\rangle$-to-$|-\rangle$-(and-vice-versa)). Task 1.3. Sign flip: $|+\rangle$ to $|-\rangle$ and vice versa.**Input**: A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.**Goal** : Change the qubit state to $\alpha |0\rangle - \beta |1\rangle$ (i.e. flip the sign of the $|1\rangle$ component of the superposition). Solution The action of the Pauli Z gate is exactly what is required by this question.This gate leaves the sign of the $|0\rangle$ component of the superposition unchanged but flips the sign of the $|1\rangle$ component of the superposition.As a reminder, the Pauli Z gate is defined by the following matrix: $$Z = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix} $$ Let's see its effect on the only computational basis state that it changes, $|1\rangle$:$$Z|1\rangle = \begin{bmatrix} 1 & 0\\ 0 & -1 \end{bmatrix} \begin{bmatrix} 0\\ 1\\ \end{bmatrix}=\begin{bmatrix} 1 \cdot 0 + 0 \cdot1\\ 0 \cdot 1 + -1 \cdot 1\\ \end{bmatrix}= \begin{bmatrix} 0\\ -1\\ \end{bmatrix}= -\begin{bmatrix} 0\\ 1\\ \end{bmatrix}= -|1\rangle$$ In general applying the Z gate to a single qubit superposition state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$ gives $$Z|\psi\rangle = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} \alpha\\ \beta\\ \end{bmatrix} =\begin{bmatrix} 1\cdot\alpha + 0\cdot\beta\\ 0\cdot\alpha + -1\cdot\beta\\ \end{bmatrix} = \begin{bmatrix} \alpha\\ -\beta\\ \end{bmatrix} = \alpha |0\rangle -\beta |1\rangle$$
###Code
%kata T103_SignFlip
operation SignFlip (q : Qubit) : Unit is Adj+Ctl {
Z(q);
}
###Output
_____no_output_____
###Markdown
[Return to Task 1.3 of the Basic Gates kata](./BasicGates.ipynbTask-1.3.-Sign-flip:-$|+\rangle$--to-$|-\rangle$--and-vice-versa.). Task 1.4. Amplitude change: $|0\rangle$ to $\cos{α} |0\rangle + \sin{α} |1\rangle$.**Inputs:**1. Angle α, in radians, represented as Double.2. A qubit in state $|\psi\rangle = \beta |0\rangle + \gamma |1\rangle$.**Goal:** Change the state of the qubit as follows:- If the qubit is in state $|0\rangle$, change its state to $\cos{α} |0\rangle + \sin{α} |1\rangle$.- If the qubit is in state $|1\rangle$, change its state to $-\sin{α} |0\rangle + \cos{α} |1\rangle$.- If the qubit is in superposition, change its state according to the effect on basis vectors. Solution We can recognise that we need to use one of the rotation gates Rx, Ry, and Rz (named because they "rotate" the qubit state in the three dimensional space visualized as the Bloch sphere about the x, y, and z axes, respectively), since they involve angle parameters. Of these three gates, only Ry rotates the basis states $|0\rangle$ and $|1\rangle$ to have real amplitudes (the other two gates introduce complex coefficients).As a reminder, $$R_{y}(\theta) =\begin{bmatrix} \cos \frac{\theta}{2} & -\sin \frac{\theta}{2}\\ \sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix}$$ Let's see its effect on the $|0\rangle$ state: $$R_y(\theta)|0\rangle =\begin{bmatrix} \cos \frac{\theta}{2} & -\sin \frac{\theta}{2}\\ \sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix}\begin{bmatrix} 1\\ 0\\\end{bmatrix}=\begin{bmatrix} \cos \frac{\theta}{2}\cdot1 - \sin \frac{\theta}{2}\cdot0\\ \sin \frac{\theta}{2}\cdot1 + \cos \frac{\theta}{2}\cdot0 \end{bmatrix}=\begin{bmatrix} \cos \frac{\theta}{2}\\ \sin \frac{\theta}{2} \end{bmatrix}= \cos\frac{\theta}{2} |0\rangle + \sin\frac{\theta}{2} |1\rangle$$ Recall that when applying a gate, you can tell what its matrix does to the basis states by looking at its columns: the first column of the matrix is the state into which it will transform the $|0\rangle$ state, and the second column is the state into which it will transfrom the $|1\rangle$ state. In the example used by the testing harness we are given $\beta = 0.6, \gamma = 0.8$ and $\alpha = 1.0471975511965976 = \frac{\pi}{3}$. Since $\cos \frac{\pi}{3} = 0.5$ and $\sin \frac{\pi}{3} = 0.8660$, working to 4 decimal places, we can compute: $$R_{y}(\theta) |\psi\rangle= \begin{bmatrix} \cos \frac{\theta}{2} & -\sin \frac{\theta}{2}\\ \sin \frac{\theta}{2} & \cos \frac{\theta}{2} \end{bmatrix} \begin{bmatrix} \beta\\ \gamma \end{bmatrix}=\begin{bmatrix} cos \frac{\theta}{2}\cdot\beta - sin \frac{\theta}{2}\cdot\gamma\\ sin \frac{\theta}{2}\cdot\beta +cos \frac{\theta}{2}\cdot\gamma \end{bmatrix}= \begin{bmatrix} 0.6\cdot\cos \frac{\pi}{3} -0.8\cdot\sin \frac{\pi}{3}\\ 0.6\cdot\sin \frac{\pi}{3} +0.8\cdot\cos \frac{\pi}{3} \end{bmatrix}= \begin{bmatrix} 0.3 - 0.6928\\ 0.5196 + 0.4 \end{bmatrix}= \begin{bmatrix} -0.3928\\ 0.9196 \end{bmatrix}$$ Notice that we used $\frac{\theta}{2} = \alpha$; this means that in the Q code we need to pass the angle $\theta = 2\alpha$.
###Code
%kata T104_AmplitudeChange
operation AmplitudeChange (alpha : Double, q : Qubit) : Unit is Adj+Ctl {
Ry(2.0 * alpha, q);
}
###Output
_____no_output_____
###Markdown
[Return to Task 1.4 of the Basic Gates kata](./BasicGates.ipynbTask-1.4.-Amplitude-change:-$|0\rangle$-to-$\cos{α}-|0\rangle-+-\sin{α}-|1\rangle$.). Task 1.5. Phase flip**Input:** A qubit in state $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$.**Goal:** Change the qubit state to $\alpha |0\rangle + \color{red}i\beta |1\rangle$ (add a relative phase $i$ to $|1\rangle$ component of the superposition). SolutionWe can recognise that the S gate performs this particular relative phase addition to the $|1\rangle$ basis state. As a reminder, $$S = \begin{bmatrix} 1 & 0\\ 0 & i \end{bmatrix} $$ Let's see the effect of this gate on the general superposition $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$. $$ \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix} \begin{bmatrix} \alpha\\ \beta\\ \end{bmatrix} =\begin{bmatrix} 1\cdot\alpha + 0\cdot\beta\\ 0\cdot\alpha + i\cdot\beta \end{bmatrix} = \begin{bmatrix} \alpha\\ i\beta\\ \end{bmatrix} $$ It is therefore easy to see that when $|\psi\rangle = 0.6|0\rangle + 0.8|1\rangle, S|\psi\rangle = 0.6|0\rangle + 0.8i|1\rangle$.
###Code
%kata T105_PhaseFlip
operation PhaseFlip (q : Qubit) : Unit is Adj+Ctl {
S(q);
}
###Output
_____no_output_____
###Markdown
Solution 2. See the next task, Phase Change, for an explanation of using the R1 gate to implement the same transformation:
###Code
%kata T105_PhaseFlip
open Microsoft.Quantum.Math;
operation PhaseFlip (q : Qubit) : Unit is Adj+Ctl {
R1(0.5 * PI(), q);
}
###Output
_____no_output_____
###Markdown
[Return to Task 1.5 of the Basic Gates kata](./BasicGates.ipynbTask-1.5.-Phase-flip). Task 1.6. Phase change**Inputs:**1. Angle α, in radians, represented as Double.2. A qubit in state $|\psi\rangle = \beta |0\rangle + \gamma |1\rangle$.**Goal:** Change the state of the qubit as follows:- If the qubit is in state $|0\rangle$, don't change its state.- If the qubit is in state $|1\rangle$, change its state to $e^{i\alpha} |1\rangle$.- If the qubit is in superposition, change its state according to the effect on basis vectors: $\beta |0\rangle + \color{red}{e^{i\alpha}} \gamma |1\rangle$. SolutionWe know that: $$R1(\alpha)= \begin{bmatrix} 1 & 0\\ 0 & \color{red}{e^{i\alpha}} \end{bmatrix} $$ So we have: $$R1(\beta |0\rangle + \gamma |1\rangle) = \begin{bmatrix} 1 & 0 \\ 0 & \color{red}{e^{i\alpha}} \end{bmatrix} \begin{bmatrix} \beta\\ \gamma\\ \end{bmatrix} =\begin{bmatrix} 1.\beta + 0.\gamma\\ 0.\beta + \color{red}{e^{i\alpha}}\gamma \end{bmatrix} = \begin{bmatrix} \beta\\ \color{red}{e^{i\alpha}}\gamma \end{bmatrix} = \beta |0\rangle + \color{red}{e^{i\alpha}} \gamma |1\rangle$$ > Note that the results produced by the test harness can be unexpected.If you run the kata several times and and examine the output, you'll notice that success is signalled even though the corresponding amplitudes of the desired and actual states look very different.>> So what's going on? The full state simulator used in these tests performs the computations "up to a global phase", that is, sometimes the resulting state acquires a global phase that doesn't affect the computations or the measurement outcomes, but shows up in DumpMachine output. (You can read more about the global phase in the [Qubit tutorial](../tutorials/Qubit/Qubit.ipynbRelative-and-Global-Phase).)>> For example, in one run you can get the desired state $(0.6000 + 0000i)|0\rangle + (-0.1389 +0.7878i)|1\rangle$ and the actual state $(-0.1042 + 0.5909i)|0\rangle + (-0.7518 -0.2736i)|1\rangle$.You can verify that the ratios of amplitudes of the respective basis states are equal: $\frac{-0.1042 + 0.5909i}{0.6} = -0.173667 +0.984833 i = \frac{-0.7518 -0.2736i}{-0.1389 +0.7878i}$, so the global phase acquired by the state is (-0.173667 +0.984833 i). You can also check that the absolute value of this multiplier is approximately 1, so it doesn't impact the measurement probabilities.>> The testing harness for this and the rest of the tasks checks that your solution implements the required transformation exactly, without introducing any global phase, so it shows up only in the helper output and does not affect the verification of your solution.
###Code
%kata T106_PhaseChange
operation PhaseChange (alpha : Double, q : Qubit) : Unit is Adj+Ctl {
R1(alpha, q);
}
###Output
_____no_output_____
###Markdown
Suppose now that $\alpha = \frac{\pi}{2}$.Then $e^{i\alpha}= \cos\frac{\pi}{2} + i\sin\frac{\pi}{2}$.And, since $\cos\frac{\pi}{2}= 0$ and $\sin\frac{\pi}{2} = 1$, then we have that $\cos\frac{\pi}{2} + i \sin\frac{\pi}{2} = i$, and $R1(\frac{\pi}{2}) = S$, which we used in the second solution to task 1.5, above. [Return to Task 1.6 of the Basic Gates kata](./BasicGates.ipynbTask-1.6.-Phase-Change). Task 1.7. Global phase change**Input:** A qubit in state $|\psi\rangle = \beta |0\rangle + \gamma |1\rangle$.**Goal**: Change the state of the qubit to $- \beta |0\rangle - \gamma |1\rangle$.> Note: this change on its own is not observable - there is no experiment you can do on a standalone qubit to figure out whether it acquired the global phase or not. > However, you can use a controlled version of this operation to observe the global phase it introduces. > This is used in later katas as part of more complicated tasks. SolutionWe recognise that a global phase change can be accomplished by using the R rotation gate with the PauliI (identity) gate.As a reminder, the R gate is defined as $R_{\mu}(\theta) = \exp(\frac{\theta}{2}i\cdot\sigma_{\mu})$, wehere $\sigma_{\mu}$ is one of the Pauli gates I, X, Y or Z. > Note that a global phase is not detectable and has no physical meaning - it disappers when you take a measurement of the state. > You can read more about this in the [Single-qubit measurements tutorial](../tutorials/SingleQubitSystemMeasurements/SingleQubitSystemMeasurements.ipynbMeasurements-in-arbitrary-orthogonal-bases). For the problem at hand, we'll use the rotation gate $R_{\mu}(\theta) = \exp(\frac{\theta}{2}i\cdot\sigma_{\mu})$ with $\sigma_{\mu} = I$. $R(PauliI, 2\pi) = \exp(\frac{2\pi}{2} iI) = \exp(i\pi) I = (\cos\pi + i\sin\pi) I$ and, since $\cos\pi = -1$ and $\sin\pi = 0$, we have that $R(PauliI, 2\pi) = -I$: $$R(\beta |0\rangle + \gamma |1\rangle) = -1\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \beta\\ \gamma\\ \end{bmatrix}= \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} \beta\\ \gamma\\ \end{bmatrix} = \begin{bmatrix} -1\cdot\beta + 0\cdot\gamma\\ 0\cdot\beta + -1\cdot\gamma \\ \end{bmatrix}=\begin{bmatrix} -\beta\\ -\gamma\\ \end{bmatrix}=- \beta |0\rangle - \gamma |1\rangle$$ The test harness for this test shows the result of applying the *controlled* variant of your solution to be able to detect the phase change.
###Code
%kata T107_GlobalPhaseChange
open Microsoft.Quantum.Math;
operation GlobalPhaseChange (q : Qubit) : Unit is Adj+Ctl {
R(PauliI, 2.0 * PI(), q);
}
###Output
_____no_output_____ |
scraping_wikipedia/wikipedia-api-query.ipynb | ###Markdown
The Wikipedia API: The Basics* by [R. Stuart Geiger](http://stuartgeiger.com), released [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/) The API An API is an Application Programming Interface, which is a standardized way for programs to communicate and share data with each other. Wikipedia runs on an open source platform called MediaWiki, as do many other wikis. You can use the API to do almost anything that you can do with the browser. You want to use the API (rather than just downloading the full text of the HTML page as if you were a web browser) for a few reasons: it uses fewer resources (for you and Wikipedia), it is standardized, and it is very well supported in many different programming languages. API resources* [The main API documentation](https://www.mediawiki.org/wiki/API:Main_page)* [The properties modules](https://www.mediawiki.org/wiki/API:Properties)* [Client code for many languages](https://www.mediawiki.org/wiki/API:Client_code)* [Etiquette and usage limits](https://www.mediawiki.org/wiki/API:Etiquette) -- most libraries will rate limit for you* [pywikibot main manual](https://www.mediawiki.org/wiki/Manual:Pywikibot) and [library docs](http://pywikibot.readthedocs.org/en/latest/pywikibot/) The wikipedia libraryThis is the simplest, no-hassle library for querying Wikipedia articles, but it has fewer features. You should use this if you want to get the text of articles.
###Code
!pip install wikipedia
import wikipedia
###Output
_____no_output_____
###Markdown
In this example, we will get the page for Berkeley, California and count the most commonly used words in the article. I'm using nltk, which is a nice library for natural language processing (although it is probably overkill for this).
###Code
bky = wikipedia.page("Berkeley, California")
bky
bk_split = bky.content.split()
bk_split[:10]
!pip install nltk
import nltk
fdist1 = nltk.FreqDist(bk_split)
fdist1.most_common(10)
###Output
_____no_output_____
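###Markdown
The raw counts above are dominated by common function words; one possible refinement (a sketch, not part of the original walkthrough) is to drop nltk's English stopwords and non-alphabetic tokens before counting.
###Code
# A sketch: filter out English stopwords and punctuation-only tokens first.
nltk.download('stopwords')
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
filtered = [w.lower() for w in bk_split if w.isalpha() and w.lower() not in stop_words]
nltk.FreqDist(filtered).most_common(10)
###Output
_____no_output_____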
###Markdown
There are many functions in a Wikipedia page object. We can also get all the Wikipedia articles that are linked from a page, all the URL links in the page, or all the geographical coordinates in the page. There was a study about which domains were most popular in Wikipedia articles.
###Code
print(bky.references[:10])
print(bky.links[:10])
###Output
_____no_output_____
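###Markdown
As a quick illustration of that idea (a sketch, not taken from the study mentioned above): tally which domains appear most often among this article's external references.
###Code
# A sketch: count the domains of the article's reference URLs.
from urllib.parse import urlparse
from collections import Counter
domains = Counter(urlparse(url).netloc for url in bky.references)
domains.most_common(10)
###Output
_____no_output_____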
###Markdown
Querying using pywikibotpywikibot is one of the most well-developed and widely used libraries for querying the Wikipedia API. It does need a configuration script (user-config.py) in the directory where you are running the python script. It is often used by bots that edit, so there are many features that are not available unless you login with a Wikipedia account. Register an account on WikipediaIf you don't have one, [register an account on Wikipedia](https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Main+Page&type=signup). Then modify the string below so that the usernames line reads u'YourUserName'. You are not inputting your password, because you are not logging in with this account. This is just so that there is a place to contact you if your script goes out of control. This is not required to use pywikibot, but it is part of the rules for accessing Wikipedia's API. In this tutorial, I'm not going to tell you how to set up OAuth so that you can login and edit. But if you are interested in this, I'd love to talk to you about it.**Note: you can edit pages with pywikibot (even when not logged in), but please don't! You have to get approval from Wikipedia's bot approval group, or else your IP address is likely to be banned. **
###Code
user_config="""
family = 'wikipedia'
mylang = 'en'
usernames['wikipedia']['en'] = u'REPLACE THIS WITH YOUR USERNAME'
"""
f = open('user-config.py', 'w')
f.write(user_config)
f.close()
!pip install pywikibot
import pywikibot
site = pywikibot.Site()
bky_page = pywikibot.Page(site, "Berkeley, California")
bky_page
# page text with all the wikimarkup and templates
bky_page_text = bky_page.text
# page text expanded to HTML
bky_page.expand_text()
# All the geographical coordinates linked in a page (may have multiple per article)
bky_page.coordinates()
###Output
_____no_output_____
###Markdown
GeneratorsGenerators are a way of querying for a kind of page, and then iterating through those pages. Generators are frequently used with categories, but you can also use a generator for things like a search, or all pages linking to a page.
###Code
from pywikibot import pagegenerators
cat = pywikibot.Category(site,'Category:Cities in Alameda County, California')
gen = cat.members()
gen
# create an empty list
coord_d = []
for page in gen:
print(page.title(), page.coordinates())
pc = page.coordinates()
for coord in pc:
# If the page is not a category
if(page.isCategory()==False):
coord_d.append({'label':page.title(), 'latitude':coord.lat, 'longitude':coord.lon})
coord_d[:3]
import pandas as pd
coord_df = pd.DataFrame(coord_d)
coord_df
###Output
_____no_output_____
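###Markdown
For a quick visual check (a sketch; it assumes matplotlib is installed), the collected coordinates can be scattered on a simple longitude/latitude plot.
###Code
# A sketch: scatter the collected city coordinates (longitude vs. latitude).
import matplotlib.pyplot as plt
plt.scatter(coord_df['longitude'], coord_df['latitude'])
for _, row in coord_df.iterrows():
    plt.annotate(row['label'], (row['longitude'], row['latitude']), fontsize=8)
plt.xlabel('longitude')
plt.ylabel('latitude')
plt.show()
###Output
_____no_output_____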
###Markdown
SubcategoriesPages are only members of the direct category they are in. If a page is in a category, and that category is a member of another category, then it will not be shown through the members() function. The basic rule is that if you're on a category's Wikipedia page (like http://enwp.org/Category:Universities_and_colleges_in_California), the members are only the items that are blue links on that page. So you have to iterate through the category to recursively access subcategory members. This exercise is left to the readers. :) (A rough sketch follows the next code cell.)Note: Many Wikipedia categories aren't necessarily restricted to the kind of entity that is mentioned in the category name. So "Category:Universities and colleges in California" contains a subcategory "Category:People by university or college in California" that has people associated with each university. So you have to be careful when recursively going through subcategories, or else you might end up with different kinds of entities.
###Code
bay_cat = pywikibot.Category(site,'Category:Universities and colleges in California')
bay_gen = bay_cat.members()
for page in bay_gen:
print(page.title(), page.isCategory(), page.coordinates())
###Output
_____no_output_____
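###Markdown
As mentioned above, recursing into subcategories is left as an exercise; here is one rough sketch that uses only the pywikibot calls already shown (members(), isCategory(), title()). The max_depth guard is an illustrative safety limit, and remember that subcategories may contain different kinds of entities.
###Code
# A sketch of a recursive category walk: collect article pages from a category
# and its subcategories, down to a maximum depth.
def collect_category_articles(category, max_depth=1, depth=0):
    pages = []
    for member in category.members():
        if member.isCategory():
            if depth < max_depth:
                subcat = pywikibot.Category(site, member.title())
                pages.extend(collect_category_articles(subcat, max_depth, depth + 1))
        else:
            pages.append(member)
    return pages

uc_cat = pywikibot.Category(site, 'Category:Universities and colleges in California')
for page in collect_category_articles(uc_cat, max_depth=1)[:10]:
    print(page.title())
###Output
_____no_output_____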
###Markdown
Other interesting information from pages Backlinks are all the pages that link to a page. Note: this can get very, very long with even mildly popular articles.
###Code
telegraph_page = pywikibot.Page(site, u"Telegraph Avenue")
telegraph_backlinks = telegraph_page.backlinks
telegraph_backlinks()
for bl_page in telegraph_backlinks():
if(bl_page.namespace()==1):
print(bl_page.title())
###Output
_____no_output_____
###Markdown
Who has contributed to a page, and how many times have they edited?
###Code
telegraph_page.contributors()
###Output
_____no_output_____
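###Markdown
A small follow-up (a sketch): contributors() returns a mapping from usernames to their edit counts, so sorting it shows the most active editors of the page.
###Code
# A sketch: sort the contributor -> edit count mapping to find the most active editors.
contribs = telegraph_page.contributors()
for user, count in sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(user, count)
###Output
_____no_output_____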
###Markdown
Templates are all the extensions to wikimarkup that give you things like citations, tables, infoboxes, etc. You can iterate over all the templates in a page. TemplatesWikipedia articles are filled with templates, which are kinds of scripts written in wikimarkup. Everything you see in a Wikipedia article that isn't a markdown-like feature (bolding, links, lists, images) is presented through a template. One of the most important templates are infoboxes, which are on the right-hand side of articles.But templates are complicated and very difficult to parse -- which is why [Wikidata](https://wikidata.org) is such a big deal! However, it is possible to parse the same kind of template with pywikibot's textlib parser. For infoboxes, there are different kinds of infoboxes based on what the article's topic is an instance of. So cities, towns, and other similar articles use "infobox settlement" -- which you can see by getting the first part of the article's wikitext.
###Code
bky_page = pywikibot.Page(site, "Berkeley, California")
bky_page.text
###Output
_____no_output_____
###Markdown
If you go to the raw wikitext on Wikipedia ([by clicking the edit button](https://en.wikipedia.org/w/index.php?title=Berkeley,_California&action=edit)), you can see that this is a little more ordered. We use the textlib module from pywikibot, which has a function that parses an article's wikitext into a list of templates. Each item in the list is a pair of the template name and an OrderedDict mapping its parameters to values.
###Code
from pywikibot import textlib
import pandas as pd
bky_templates = textlib.extract_templates_and_params_regex(bky_page.text)
bky_templates[:5]
###Output
_____no_output_____
###Markdown
We iterate through all the templates on the page until we find the "Infobox settlement" template, then keep its parameters.
###Code
for template in bky_templates:
if(template[0]=="Infobox settlement"):
infobox = template[1]
infobox.keys()
print(infobox['elevation_ft'])
print(infobox['area_total_sq_mi'])
print(infobox['utc_offset_DST'])
print(infobox['population_total'])
###Output
_____no_output_____
###Markdown
However, sometimes parameters contain templates, such as citations or references.
###Code
print(infobox['government_type'])
print(infobox['website'])
###Output
_____no_output_____
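###Markdown
One way to dig into such nested values (a sketch, reusing the same textlib parser; it assumes the website parameter holds something like a {{URL|...}} template, as settlement infoboxes often do) is to parse the parameter value itself.
###Code
# A sketch: run the same parser on a single parameter value to pull out any
# templates nested inside it, along with their own parameters.
nested = textlib.extract_templates_and_params_regex(infobox['website'])
for name, params in nested:
    print(name, dict(params))
###Output
_____no_output_____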
###Markdown
Putting it all togetherThis script gets data about all the cities in the Bay Area -- only traversing through this category, because all the pages are direct members of this category, with no subcategories.
###Code
bay_cat = pywikibot.Category(site,'Category:Cities_in_the_San_Francisco_Bay_Area')
bay_gen = bay_cat.members()
for page in bay_gen:
# If the page is not a category
if(page.isCategory()==False):
print(page.title())
page_templates = textlib.extract_templates_and_params_regex(page.text)
for template in page_templates:
if(template[0]=="Infobox settlement"):
infobox = template[1]
if 'elevation_ft' in infobox:
print(" Elevation (ft): ", infobox['elevation_ft'])
if 'population_total' in infobox:
print(" Population: ", infobox['population_total'])
if 'area_total_sq_mi' in infobox:
print(" Area (sq mi): ", infobox['area_total_sq_mi'])
###Output
_____no_output_____
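###Markdown
A variation on the loop above (a sketch): collect the same infobox fields into a pandas DataFrame instead of printing them, which makes further analysis easier.
###Code
# A sketch: gather elevation, population and area for each city into a DataFrame.
rows = []
for page in pywikibot.Category(site, 'Category:Cities_in_the_San_Francisco_Bay_Area').members():
    if page.isCategory():
        continue
    for name, params in textlib.extract_templates_and_params_regex(page.text):
        if name == "Infobox settlement":
            rows.append({'city': page.title(),
                         'elevation_ft': params.get('elevation_ft'),
                         'population_total': params.get('population_total'),
                         'area_total_sq_mi': params.get('area_total_sq_mi')})
            break

cities_df = pd.DataFrame(rows)
cities_df.head()
###Output
_____no_output_____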
###Markdown
This is a script for Katy, getting data about U.S. Nuclear power plants. Wikipedia articles on nuclear power plants have many subcategories:* https://en.wikipedia.org/wiki/Category:Nuclear_power_stations_by_country * https://en.wikipedia.org/wiki/Category:Nuclear_power_stations_in_the_United_States * https://en.wikipedia.org/wiki/Category:Nuclear_power_stations_in_the_United_States_by_state * https://en.wikipedia.org/wiki/Category:Nuclear_power_plants_in_California * https://en.wikipedia.org/wiki/Diablo_Canyon_Power_Plant * https://en.wikipedia.org/wiki/Rancho_Seco_Nuclear_Generating_Station * etc... * https://en.wikipedia.org/wiki/Category:Nuclear_power_plants_in_New_York * etc... * etc... * etc... So we are going to begin with the Category:Nuclear power stations in the United States by state and just go one subcategory down. There is probably a more elegant way of doing this with recursion and functions....
###Code
power_cat = pywikibot.Category(site,'Category:Nuclear power stations in the United States by state')
power_gen = power_cat.members()
for page in power_gen:
print(page.title())
# If the page is not a category
if(page.isCategory()==False):
print("\n",page.title(),"\n")
page_templates = textlib.extract_templates_and_params_regex(page.text)
for template in page_templates:
if(template[0]=="Infobox power station"):
infobox = template[1]
if 'ps_units_operational' in infobox:
print(" Units operational:", infobox['ps_units_operational'])
if 'owner' in infobox:
print(" Owner:", infobox['owner'])
else:
for subpage in pywikibot.Category(site,page.title()).members():
print("\n",subpage.title())
subpage_templates = textlib.extract_templates_and_params_regex(subpage.text)
for template in subpage_templates:
if(template[0]=="Infobox power station"):
infobox = template[1]
if 'ps_units_operational' in infobox:
print(" Units operational:", infobox['ps_units_operational'])
if 'owner' in infobox:
print(" Owner:", infobox['owner'])
###Output
_____no_output_____ |
notebooks/medical-federated-learning-program/network-operators/02-prepare-datasets-BreastCancerDataset.ipynb | ###Markdown
Breast Cancer DatasetInvasive Ductal Carcinoma (IDC) is the most common subtype of all breast cancers. To assign an aggressiveness grade to a whole mount sample, pathologists typically focus on the regions which contain the IDC. As a result, one of the common pre-processing steps for automatic aggressiveness grading is to delineate the exact regions of IDC inside of a whole mount slide.The original dataset consisted of 162 whole mount slide images of Breast Cancer (BCa) specimens scanned at 40x. From that, 277,524 patches of size 50 x 50 were extracted.The original data was collected from 279 patients, but we have modified this number to 1000 different patients, i.e. the images have been distributed randomly across 1000 unique patient ids. This has been done to demonstrate that through our library PySyft we can perform multiple queries while preserving the privacy of a large group of people at scale.Further, we have created 100 unique subsets from the dataset and distributed them across the participants so that each one can act as an individual Data Owner.
###Code
import os
from tqdm import tqdm
import pandas as pd
import numpy as np
import shutil
import uuid
import json
from PIL import Image
def calc_label_frequency(labels):
from collections import defaultdict
labels = labels.flatten() if hasattr(labels, "flatten") else labels
freq = defaultdict(int)
for label in labels:
freq[label] += 1
return dict(freq)
def check_data_distribution(data_df):
data_distribution = {}
for patient_id in data_df["patient_ids"].unique():
temp = data_df[data_df["patient_ids"] == patient_id]
data_distribution[patient_id] = temp.shape[0]
print(data_distribution)
print(min(data_distribution.values()), max(data_distribution.values()))
def generate_random_ids(num=1000):
patient_ids = set()
while(len(patient_ids) < num):
patient_id = np.random.randint(1000, 1000000)
patient_ids.add(patient_id)
return patient_ids
def read_data_from_disk(data_path = "archive/"):
data_list_dict = []
for patient_dir in tqdm(os.listdir(data_path)):
if patient_dir == "IDC_regular_ps50_idx5":
continue
patient_id = int(patient_dir)
patient_dir = data_path + patient_dir + "/"
label_dirs = os.listdir(patient_dir)
for label in label_dirs:
image_file_path = patient_dir + label + "/"
image_files = os.listdir(image_file_path)
for image_name in tqdm(image_files):
data_list_dict.append(
{
"patient_ids": patient_id,
"labels": label,
"image_paths": image_file_path + image_name,
}
)
data_df = pd.DataFrame(data_list_dict)
return data_df
# Create data subset directory
data_subset_folder = "BreastCancerDataset/subsets"
if os.path.exists(data_subset_folder):
print("Data subset directory already Exists. Clearing existing one.")
shutil.rmtree(data_subset_folder)
os.makedirs(data_subset_folder)
print("Data subset directory created.")
TOTAL_PARTICIPANTS = 100
def create_data_subsets(data_df, patient_ids):
data_subsets_map = {}
data_df.sort_values("patient_ids", inplace=True, ignore_index=True)
start = 0
for participation_number in tqdm(range(1, TOTAL_PARTICIPANTS+1)):
# Calculate start and end index based on your participant number
batch_size = data_df.shape[0] // TOTAL_PARTICIPANTS
start_idx = (participation_number - 1) * batch_size
end_idx = start_idx + batch_size
# Slice the dataframe according
subset = data_df[start_idx:end_idx]
# Reset index of the subset
subset.reset_index(inplace=True, drop=True)
        # Each participant draws from its own block of 10 of the 1000 patient ids
        start = (participation_number - 1) * 10
        patient_id_list = [np.random.choice(patient_ids[start: start + 10]) for i in range(subset.shape[0])]
subset["patient_ids"] = patient_id_list
print("Reading Images as array.....")
images_as_array = []
for image_filepath in subset["image_paths"]:
img = np.asarray(Image.open(image_filepath))
images_as_array.append(img)
del subset["image_paths"]
subset["images"] = images_as_array
print("Done storing Images as array.")
subset_filename = f"BreastCancerDataset-{uuid.uuid4().hex[:TOTAL_PARTICIPANTS]}.pkl"
subset_path = f"{data_subset_folder}/{subset_filename}"
subset.to_pickle(subset_path)
data_subsets_map[participation_number] = subset_filename
    print("Data subsets Created Successfully !!!")
    return data_subsets_map
data_df = read_data_from_disk()
random_patient_ids = generate_random_ids(1000)
data_df.head()
calc_label_frequency(data_df.labels)
len(random_patient_ids)
data_subset_map = create_data_subsets(data_df, list(random_patient_ids))
data_subset_map
with open("BreastCancerDataset/dataset.json", "w") as fp:
json.dump(data_subset_map, fp)
assert len(os.listdir(data_subset_folder)) == TOTAL_PARTICIPANTS, "Subsets are less than TOTAL PARTICIPANTS"
###Output
_____no_output_____ |
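###Markdown
As a quick sanity check (a sketch, not part of the original preparation script), one of the saved subsets can be loaded back and inspected.
###Code
# A sketch: load one saved subset back and inspect its shape, labels and patient ids.
sample_file = os.listdir(data_subset_folder)[0]
sample_subset = pd.read_pickle(f"{data_subset_folder}/{sample_file}")
print(sample_subset.shape)
print(sample_subset["labels"].value_counts())
print(sample_subset["patient_ids"].nunique(), "unique patient ids in this subset")
###Output
_____no_output_____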
3. Machine_Learning_Classification/week2_programming assignment 1.ipynb | ###Markdown
Implementing logistic regression from scratch The goal of this assignment is to implement your own logistic regression classifier. You will:Extract features from Amazon product reviews.Convert an SFrame into a NumPy array.Implement the link function for logistic regression.Write a function to compute the derivative of the log likelihood function with respect to a single coefficient.Implement gradient ascent.Given a set of coefficients, predict sentiments.Compute classification accuracy for the logistic regression model.
###Code
import pandas as pd
import numpy as np
products = pd.read_csv('/Users/April/Downloads/amazon_baby_subset.csv')
products.head()
###Output
_____no_output_____
###Markdown
Let us quickly explore more of this dataset. The name column indicates the name of the product. Try listing the name of the first 10 products in the dataset.After that, try counting the number of positive and negative reviews.
###Code
products[:10]
products = products[products['rating'] != 3]
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products[:10]
###Output
_____no_output_____
###Markdown
Apply text cleaning on the review data In this section, we will perform some simple feature cleaning using data frames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of the 193 most frequent words into the JSON file named important_words.json. Load the words into a list important_words.
###Code
import json
with open('/Users/April/Desktop/datasci_course_materials-master/assignment1/important words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
###Output
['baby', 'one', 'great', 'love', 'use', 'would', 'like', 'easy', 'little', 'seat', 'old', 'well', 'get', 'also', 'really', 'son', 'time', 'bought', 'product', 'good', 'daughter', 'much', 'loves', 'stroller', 'put', 'months', 'car', 'still', 'back', 'used', 'recommend', 'first', 'even', 'perfect', 'nice', 'bag', 'two', 'using', 'got', 'fit', 'around', 'diaper', 'enough', 'month', 'price', 'go', 'could', 'soft', 'since', 'buy', 'room', 'works', 'made', 'child', 'keep', 'size', 'small', 'need', 'year', 'big', 'make', 'take', 'easily', 'think', 'crib', 'clean', 'way', 'quality', 'thing', 'better', 'without', 'set', 'new', 'every', 'cute', 'best', 'bottles', 'work', 'purchased', 'right', 'lot', 'side', 'happy', 'comfortable', 'toy', 'able', 'kids', 'bit', 'night', 'long', 'fits', 'see', 'us', 'another', 'play', 'day', 'money', 'monitor', 'tried', 'thought', 'never', 'item', 'hard', 'plastic', 'however', 'disappointed', 'reviews', 'something', 'going', 'pump', 'bottle', 'cup', 'waste', 'return', 'amazon', 'different', 'top', 'want', 'problem', 'know', 'water', 'try', 'received', 'sure', 'times', 'chair', 'find', 'hold', 'gate', 'open', 'bottom', 'away', 'actually', 'cheap', 'worked', 'getting', 'ordered', 'came', 'milk', 'bad', 'part', 'worth', 'found', 'cover', 'many', 'design', 'looking', 'weeks', 'say', 'wanted', 'look', 'place', 'purchase', 'looks', 'second', 'piece', 'box', 'pretty', 'trying', 'difficult', 'together', 'though', 'give', 'started', 'anything', 'last', 'company', 'come', 'returned', 'maybe', 'took', 'broke', 'makes', 'stay', 'instead', 'idea', 'head', 'said', 'less', 'went', 'working', 'high', 'unit', 'seems', 'picture', 'completely', 'wish', 'buying', 'babies', 'won', 'tub', 'almost', 'either']
###Markdown
Let us perform 2 simple data transformations:Remove punctuationCompute word counts (only for important_words)We start with the first item as follows:If your tool supports it, fill n/a values in the review column with empty strings. The n/a values indicate empty reviews. For instance, Pandas' fillna() method lets you replace all N/A's in the review column as follows:
###Code
products = products.fillna({'review':''}) # fill in N/A's in the review column
###Output
_____no_output_____
###Markdown
Write a function remove_punctuation that takes a line of text and removes all punctuation from that text. The function should be analogous to the following Python code:
###Code
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
###Output
_____no_output_____
###Markdown
Apply the remove_punctuation function on every element of the review column and assign the result to the new column review_clean.
###Code
products['review_clean'] = products['review'].apply(remove_punctuation)
###Output
_____no_output_____
###Markdown
Now we proceed with the second item. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text. Note: There are several ways of doing this. One way is to create an anonymous function that counts the occurrence of a particular word and apply it to every element in the review_clean column. Repeat this step for every word in important_words. Your code should be analogous to the following:
###Code
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
###Output
_____no_output_____
###Markdown
After 4 and 5, the data frame products should contain one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews. Now, write some code to compute the number of product reviews that contain the word perfect. First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in the column perfect) is >= 1.Sum the number of 1s in the column contains_perfect.
###Code
products['contains_perfect'] = products['perfect'].apply(lambda pf: 1 if pf >=1 else 0)
sum(products['contains_perfect'] == 1)
###Output
_____no_output_____
###Markdown
Convert data frame to multi-dimensional array It is now time to convert our data frame to a multi-dimensional array. Look for a package that provides highly optimized matrix operations. In the case of Python, NumPy is a good choice.Write a function that extracts columns from a data frame and converts them into a multi-dimensional array. We plan to use them throughout the course, so make sure to get this function right. The function should accept three parameters:dataframe: a data frame to be convertedfeatures: a list of strings, containing the names of the columns that are used as features.label: a string, containing the name of the single column that is used as class labels.The function should return two values:one 2D array for featuresone 1D array for class labelsThe function should do the following:Prepend a new column constant to dataframe and fill it with 1's. This column takes account of the intercept term. Make sure that the constant column appears first in the data frame.Prepend a string 'constant' to the list features. Make sure the string 'constant' appears first in the list.Extract columns in dataframe whose names appear in the list features.Convert the extracted columns into a 2D array using a function in the data frame library. If you are using Pandas, you would use the as_matrix() function.Extract the single column in dataframe whose name corresponds to the string label.Convert the column into a 1D array.Return the 2D array and the 1D array.
###Code
def get_numpy_data(dataframe, features, label):
dataframe['constant'] = 1
features = ['constant'] + features
features_frame = dataframe[features]
feature_matrix = features_frame.as_matrix()
label_sarray = dataframe[label]
label_array = label_sarray.as_matrix()
    return(feature_matrix, label_array)  # as_matrix() already returns NumPy arrays, so no extra conversion is needed
###Output
_____no_output_____
###Markdown
Using the function written in 8, extract two arrays feature_matrix and sentiment. The 2D array feature_matrix would contain the content of the columns given by the list important_words. The 1D array sentiment would contain the content of the column sentiment. Quiz Question: How many features are there in the feature_matrix?Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model?
###Code
feature_matrix, label_array = get_numpy_data(products, important_words, 'sentiment')
feature_matrix.shape
###Output
_____no_output_____
###Markdown
Estimating conditional probability with link function The link function for logistic regression is $$P(y_i = +1 \mid x_i, \mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(x_i))},$$ where the feature vector $h(x_i)$ represents the word counts of important_words in the review $x_i$. Write a function named predict_probability that implements the link function.Take two parameters: feature_matrix and coefficients.First compute the dot product of feature_matrix and coefficients.Then compute the link function P(y = +1 | x,w).Return the predictions given by the link function.Your code should be analogous to the following Python function:
###Code
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
# YOUR CODE HERE
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
# YOUR CODE HERE
predictions = 1/(1+np.exp(-score))
# return predictions
return predictions
###Output
_____no_output_____
###Markdown
Aside. How the link function works with matrix algebraSince the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(x_i)$: $$[\text{feature\_matrix}] = \begin{bmatrix} h(x_1)^T \\ h(x_2)^T \\ \vdots \\ h(x_N)^T \end{bmatrix} = \begin{bmatrix} h_0(x_1) & h_1(x_1) & \cdots & h_D(x_1) \\ h_0(x_2) & h_1(x_2) & \cdots & h_D(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ h_0(x_N) & h_1(x_N) & \cdots & h_D(x_N) \end{bmatrix}$$ By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(x_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$: $$[\text{score}] = [\text{feature\_matrix}]\,\mathbf{w} = \begin{bmatrix} h(x_1)^T \\ h(x_2)^T \\ \vdots \\ h(x_N)^T \end{bmatrix} \mathbf{w} = \begin{bmatrix} h(x_1)^T\mathbf{w} \\ h(x_2)^T\mathbf{w} \\ \vdots \\ h(x_N)^T\mathbf{w} \end{bmatrix} = \begin{bmatrix} \mathbf{w}^T h(x_1) \\ \mathbf{w}^T h(x_2) \\ \vdots \\ \mathbf{w}^T h(x_N) \end{bmatrix}$$ CheckpointJust to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
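The original checkpoint values are not reproduced here; as a stand-in, here is a minimal sanity check with made-up numbers (the dummy arrays below are illustrative, not the course's official checkpoint).
###Code
# A sketch: compare predict_probability against a direct computation on dummy data.
dummy_feature_matrix = np.array([[1., 2., 3.], [1., -1., -1.]])
dummy_coefficients = np.array([1., 3., -1.])
correct_predictions = 1. / (1. + np.exp(-np.dot(dummy_feature_matrix, dummy_coefficients)))
print 'predictions         :', predict_probability(dummy_feature_matrix, dummy_coefficients)
print 'correct predictions :', correct_predictions
###Output
_____no_output_____
###Markdown
Compute derivative of log likelihood with respect to a single coefficient. Recall that the derivative of the log likelihood with respect to a single coefficient $w_j$ is $$\frac{\partial\ell\ell}{\partial w_j} = \sum_{i=1}^N h_j(x_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 \mid x_i, \mathbf{w})\right),$$ which is the dot product of the errors and the $j$-th feature column, exactly what the function below computes.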
###Code
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(errors, feature)
# Return the derivative
return derivative
###Output
_____no_output_____
###Markdown
In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log-likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log-likelihood instead of the likelihood to assess the algorithm.The log-likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation): $$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( \big(\mathbf{1}[y_i = +1] - 1\big)\,\mathbf{w}^T h(x_i) - \ln\big(1 + \exp(-\mathbf{w}^T h(x_i))\big) \Big)$$
###Code
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores)))
return lp
###Output
_____no_output_____
###Markdown
Taking gradient steps Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.Write a function logistic_regression to fit a logistic regression model using gradient ascent.The function accepts the following parameters:feature_matrix: 2D array of featuressentiment: 1D array of class labelsinitial_coefficients: 1D array containing initial values of coefficientsstep_size: a parameter controlling the size of the gradient stepsmax_iter: number of iterations to run gradient ascentThe function returns the last set of coefficients after performing gradient ascent.The function carries out the following steps:Initialize vector coefficients to initial_coefficients.Predict the class probability P(y_i = +1 | x_i,w) using your predict_probability function and save it to variable predictions.Compute indicator value for (y_i = +1) by comparing sentiment against +1. Save it to variable indicator.Compute the errors as difference between indicator and predictions. Save the errors to variable errors.For each j-th coefficient, compute the per-coefficient derivative by calling feature_derivative with the j-th column of feature_matrix. Then increment the j-th coefficient by (step_size*derivative).Once in a while, insert code to print out the log likelihood.Repeat steps 2-6 for max_iter times.
###Code
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_1,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]
# compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = np.dot(errors, feature_matrix[:,j])
# add the step size times the derivative to the current coefficient
# YOUR CODE HERE
            coefficients[j] += step_size * derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
###Output
_____no_output_____
###Markdown
Now, let us run the logistic regression solver with the parameters below:
###Code
feature_matrix = feature_matrix
sentiment = label_array
initial_coefficients = np.zeros(194)
step_size = 1e-7
max_iter = 301
variable_coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter)
###Output
iteration 0: log likelihood of observed labels = -36780.91768478
iteration 1: log likelihood of observed labels = -36780.92075691
iteration 2: log likelihood of observed labels = -36780.92075238
iteration 3: log likelihood of observed labels = -36780.92075240
iteration 4: log likelihood of observed labels = -36780.92075240
iteration 5: log likelihood of observed labels = -36780.92075240
iteration 6: log likelihood of observed labels = -36780.92075240
iteration 7: log likelihood of observed labels = -36780.92075240
iteration 8: log likelihood of observed labels = -36780.92075240
iteration 9: log likelihood of observed labels = -36780.92075240
iteration 10: log likelihood of observed labels = -36780.92075240
iteration 11: log likelihood of observed labels = -36780.92075240
iteration 12: log likelihood of observed labels = -36780.92075240
iteration 13: log likelihood of observed labels = -36780.92075240
iteration 14: log likelihood of observed labels = -36780.92075240
iteration 15: log likelihood of observed labels = -36780.92075240
iteration 20: log likelihood of observed labels = -36780.92075240
iteration 30: log likelihood of observed labels = -36780.92075240
iteration 40: log likelihood of observed labels = -36780.92075240
iteration 50: log likelihood of observed labels = -36780.92075240
iteration 60: log likelihood of observed labels = -36780.92075240
iteration 70: log likelihood of observed labels = -36780.92075240
iteration 80: log likelihood of observed labels = -36780.92075240
iteration 90: log likelihood of observed labels = -36780.92075240
iteration 100: log likelihood of observed labels = -36780.92075240
iteration 200: log likelihood of observed labels = -36780.92075240
iteration 300: log likelihood of observed labels = -36780.92075240
###Markdown
Predicting sentiments Recall from lecture that class predictions for a data point $x$ can be computed from the coefficients $\mathbf{w}$ using the following formula: $$\hat{y}_i = \begin{cases} +1 & \text{if } \mathbf{w}^T h(x_i) > 0 \\ -1 & \text{if } \mathbf{w}^T h(x_i) \leq 0 \end{cases}$$ Now, we write some code to compute class predictions. We do this in two steps:First compute the scores using feature_matrix and coefficients using a dot product.Then apply threshold 0 on the scores to compute the class predictions. Refer to the formula above.
###Code
scores_new = np.dot(feature_matrix, variable_coefficients)
predicted_sentiment = np.array([+1 if s > 0 else -1 for s in scores_new])
sum(predicted_sentiment == +1)
###Output
_____no_output_____
###Markdown
Measuring accuracy We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows: $$\text{accuracy} = \frac{\#\,\text{correctly classified data points}}{\#\,\text{total data points}}$$
###Code
float(sum(predicted_sentiment == sentiment))/len(sentiment)
###Output
_____no_output_____
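###Markdown
As an optional follow-up (a sketch, not part of the original assignment), the accuracy can be broken down into the four outcome counts.
###Code
# A sketch: break the overall accuracy down into the four outcome counts.
true_pos  = np.sum((predicted_sentiment == +1) & (sentiment == +1))
false_pos = np.sum((predicted_sentiment == +1) & (sentiment == -1))
true_neg  = np.sum((predicted_sentiment == -1) & (sentiment == -1))
false_neg = np.sum((predicted_sentiment == -1) & (sentiment == +1))
print 'TP:', true_pos, ' FP:', false_pos, ' TN:', true_neg, ' FN:', false_neg
###Output
_____no_output_____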
###Markdown
Which words contribute most to positive & negative sentiments Recall that in the earlier assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:Treat each coefficient as a tuple, i.e. (word, coefficient_value). The intercept has no corresponding word, so throw it out.Sort all the (word, coefficient_value) tuples by coefficient_value in descending order. Save the sorted list of tuples to word_coefficient_tuples.
###Code
coefficients = list(variable_coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
word_coefficient_tuples[:10]
word_coefficient_tuples_negative = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=False)
word_coefficient_tuples_negative[:10]
###Output
_____no_output_____ |