path | concatenated_notebook |
---|---|
notebook/PXC8-25GB-042020.ipynb | ###Markdown
Percona XtraDB Cluster 8.0 in an IO-bound workload

Percona XtraDB Cluster 8.0 is on the final stretch before its GA release, and we have pre-release packages available for testing: https://www.percona.com/blog/2020/03/19/help-drive-the-future-of-percona-xtradb-cluster/

I want to see how Percona XtraDB Cluster 8.0 performs in CPU-bound and IO-bound scenarios, as in my previous posts about MySQL Group Replication: https://docs.google.com/document/d/1vHYl8GaMnYeXsJWvg_k9FH1NmznxH9N4vY8HOcubfa4/edit

In this blog I want to evaluate the scaling capabilities of Percona XtraDB Cluster 8.0 in both cases: as we increase the number of nodes and as we increase the number of user connections. The version I used is Percona-XtraDB-Cluster-8.0.18, available from https://www.percona.com/downloads/Percona-XtraDB-Cluster-80/Percona-XtraDB-Cluster-8.0.18-9.1.rc/binary/tarball/Percona-XtraDB-Cluster_8.0.18.9_Linux.x86_64.bionic.tar.gz

For this testing I deployed multiple bare metal servers, with each node and each client dedicated to an individual server, all connected by a 10Gb network. I used both 3-node and 5-node Percona XtraDB Cluster setups.

Hardware specifications:
```
System       | Supermicro; SYS-F619P2-RTN; v0123456789 (Other)
Platform     | Linux
Release      | Ubuntu 18.04.4 LTS (bionic)
Kernel       | 5.3.0-42-generic
Architecture | CPU = 64-bit, OS = 64-bit
Threading    | NPTL 2.27
SELinux      | No SELinux detected
Virtualized  | No virtualization detected
# Processor
Processors   | physical = 2, cores = 40, virtual = 80, hyperthreading = yes
Models       | 80xIntel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
Caches       | 80x28160 KB
# Memory
Total        | 187.6G
```

For the benchmark I used sysbench-tpcc 1000W and prepared the database as:
```
./tpcc.lua --mysql-host=172.16.0.11 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest --time=300 --threads=64 --report-interval=1 --tables=10 --scale=100 --db-driver=mysql --use_fk=0 --force_pk=1 --trx_level=RC prepare
```

The configs, scripts, and raw results are available on our GitHub: https://github.com/Percona-Lab-results/PXC-8-March2020

The workload is “IO bound”: the data (about 100GB) does not fit into the innodb_buffer_pool (25GB), so there are intensive read/write IO operations.
###Code
library(IRdisplay)
display_html(
'<script>
code_show=false;
function code_toggle() {
if (code_show){
$(\'div.input\').hide();
} else {
$(\'div.input\').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()">
<input type="submit" value="Click here to toggle on/off the raw code.">
</form>'
)
library (RCurl)
library(ggplot2)
library(repr)
loadd <- function (cluster,nodes) {
threads <- c(1,2,4,8,16,64,128,256)
res = data.frame()
for (val in threads) {
urldown=paste0("https://raw.githubusercontent.com/Percona-Lab-results/PXC-8-March2020/master/res-tpcc-",cluster,"-",nodes,"nodes-1writer-BP25G-1","/res_thr",val,".txt")
download <- getURL(urldown)
dl<-strsplit(download, split='\n')
data <- read.csv (text = grep("^[0-9]", dl[[1]], value = TRUE), header=F)
data$threads=val
data$nodes=nodes
data$cluster=cluster
if(nrow(res)<1){
res<-data
}else{
res<-rbind(data,res)
}
}
return(res)
}
r1 <- loadd("GR8","3")
r2 <- loadd("GR8","5")
r3 <- loadd("PXC8","3")
r4 <- loadd("PXC8","5")
results<-rbind(r1,r2,r3,r4)
theme_set(theme_light())
theme_replace(axis.text.x=element_text(size = rel(2)))
theme_replace(axis.text.y=element_text(size = rel(2)))
theme_replace(axis.title.x=element_text(size = rel(1.5)))
theme_replace(axis.title.y=element_text(size = rel(1.5), angle = 90))
theme_replace(legend.title=element_text(size = rel(1.5)))
theme_replace(legend.text=element_text(size = rel(1.5)))
theme_replace(plot.title=element_text(size = rel(2)))
theme_replace(strip.text.x=element_text(size = rel(2)))
theme_replace(strip.text.y=element_text(size = rel(2)))
###Output
_____no_output_____
###Markdown
Results

Let's review the results.

3 nodes

First, let's take a look at how performance changes when we increase the number of user threads from 1 to 256 on 3 nodes.
###Code
m <- ggplot(data = subset(results,cluster=="PXC8" & V1>900 & nodes==3),
aes(x=as.factor(threads), y=V3, color=as.factor(nodes)))
options(repr.plot.width=18, repr.plot.height=10)
m + geom_boxplot()+
ylab("Throughput, tps (more is better)")+
xlab("Threads")+
scale_colour_discrete(name="Nodes")
###Output
_____no_output_____
###Markdown
Percona XtraDB Cluster 3 nodes - individual scales

To see the density of the results in more detail, let's draw the chart with individual scales for each set of threads:
###Code
m <- ggplot(data = subset(results,cluster=="PXC8" & V1>900 & nodes==3),
aes(x=as.factor(threads), y=V3, color=as.factor(nodes)))
options(repr.plot.width=18, repr.plot.height=10)
m + geom_boxplot()+
ylab("Throughput, tps (more is better)")+
xlab("Threads")+
scale_colour_discrete(name="Nodes")+facet_wrap( ~ threads,scales="free")+expand_limits(y=0)
###Output
_____no_output_____
###Markdown
Percona XtraDB Cluster 3 nodes, timeline for 64 threads
###Code
m <- ggplot(data = subset(results,cluster=="PXC8" & V1>900 & nodes==3 & threads==64),
aes(x=V1, y=V3))
options(repr.plot.width=18, repr.plot.height=10)
m + geom_line(size=1)+geom_point(size=2)+
ylab("Throughput, tps (more is better)")+
xlab("----------- time, sec -------->")+
labs(title="Timeline, throughput variation, 1 sec resolution, 64 threads")+
expand_limits(y=0)
###Output
_____no_output_____
###Markdown
3 nodes vs 5 nodes

Now let's review the performance with 5 nodes (compared to 3 nodes).
###Code
m <- ggplot(data = subset(results,cluster=="PXC8" & V1>900),
aes(x=as.factor(threads), y=V3, color=as.factor(nodes)))
options(repr.plot.width=18, repr.plot.height=10)
m + geom_boxplot()+
ylab("Throughput, tps (more is better)")+
xlab("Threads")+
scale_colour_discrete(name="Nodes")+facet_wrap( ~ threads,scales="free")+expand_limits(y=0)
###Output
_____no_output_____
###Markdown
Percona XtraDB Cluster, timeline for 64 threads: 3 nodes vs 5 nodes
###Code
m <- ggplot(data = subset(results,cluster=="PXC8" & V1>900 & threads==64),
aes(x=V1, y=V3,color=as.factor(nodes)))
options(repr.plot.width=18, repr.plot.height=10)
m + geom_line(size=1)+geom_point(size=2)+
ylab("Throughput, tps (more is better)")+
xlab("----------- time, sec -------->")+
labs(title="Timeline, throughput variation, 1 sec resolution, 64 threads")+
expand_limits(y=0)+
scale_colour_discrete(name="Nodes")
###Output
_____no_output_____
###Markdown
Percona XtraDB Cluster vs Group Replication

Now that we have results for both Percona XtraDB Cluster and Group Replication, we can compare how they perform under identical workloads. In this case we compare the 3-node setups:
###Code
m <- ggplot(data = subset(results,nodes=="3" & V1>900),
aes(x=as.factor(threads), y=V3, color=as.factor(cluster)))
options(repr.plot.width=18, repr.plot.height=10)
m + geom_boxplot()+
ylab("Throughput, tps (more is better)")+
xlab("Threads")+
scale_colour_discrete(name="Nodes")+facet_wrap( ~ threads,scales="free")+expand_limits(y=0)
cv <- function(x, na.rm = FALSE) {
  sd(x, na.rm = na.rm)/mean(x, na.rm = na.rm)
}
# ddply() comes from the plyr package, which is not loaded above
library(plyr)
vars<-ddply(subset(results, (V1>900)), .(threads,cluster,nodes),
            function(Df) c(x1 = cv(Df$V3)
                           )
            )
###Output
_____no_output_____
###Markdown
We can see that Percona XtraDB Cluster consistently shows better average throughput, with much less variance compared to Group Replication.

Coefficient of variation

Let's calculate the coefficient of variation for both Percona XtraDB Cluster and Group Replication, for the 3-node and 5-node setups: https://en.wikipedia.org/wiki/Coefficient_of_variation

I want to use the coefficient of variation instead of the standard deviation because the standard deviation operates in absolute values, which makes it hard to compare, say, 1 thread against 128 threads. The coefficient of variation expresses variation as a percentage (standard deviation relative to the average value): a smaller value corresponds to less variance, and less variance is better.
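For reference, the statistic computed by the `cv()` function above and plotted below is simply

$$\mathrm{CV} = \frac{\sigma}{\mu} \times 100\%,$$

i.e., the standard deviation of the per-second throughput divided by its mean, expressed as a percentage.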
###Code
ggplot(data=vars, aes(x=as.factor(threads), y=x1*100, fill=as.factor(cluster) ) ) +
geom_bar(stat="identity", position=position_dodge(), colour="black")+
geom_text(aes(label = format(x1*100, digits=2, nsmall=2)), color="black", size = 6, vjust=1.1, position=position_dodge(width = 1))+
facet_grid(nodes~.,scales="free")+
ylab("Coefficient of variation, %, less is better")+
xlab("Threads")+
scale_fill_discrete(name="Cluster")
###Output
_____no_output_____ |
Section 2/2.4.ipynb | ###Markdown
To understand the utility of kernel PCA, particularly for clustering, observe that while N points cannot, in general, be linearly separated in d < N dimensions, they can almost always be linearly separated in d ≥ N dimensions; kernel PCA implicitly works in such a higher-dimensional feature space. Kernel PCA has been demonstrated to be useful for novelty detection and image de-noising.

In short: KPCA reduces dimensions while also making the data (more nearly) linearly separable, whereas plain PCA only reduces dimensions.

Source: https://en.wikipedia.org/wiki/Kernel_principal_component_analysis
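As a quick illustration of that point (this sketch is an addition to the notebook and assumes scikit-learn is available; the toy dataset and parameter values are arbitrary choices, not from the original), kernel PCA with an RBF kernel can take two concentric circles, which no straight line separates in the original 2-D space, and map them so that a single component largely separates the two classes:

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Two concentric circles: not linearly separable in the original 2-D space
X_circ, y_circ = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# RBF kernel PCA; gamma=10 is a guess that works reasonably for this toy data
kpca_rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10)
X_circ_kpca = kpca_rbf.fit_transform(X_circ)

# After the transform, the first component should roughly separate the classes
print("inner circle, component 1 mean:", X_circ_kpca[y_circ == 1, 0].mean())
print("outer circle, component 1 mean:", X_circ_kpca[y_circ == 0, 0].mean())
```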
###Code
# Imports needed by this cell (not shown earlier in this excerpt)
import time
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import KernelPCA

data.shape
time_start = time.time()
kpca = KernelPCA(kernel="poly", n_components=2)
kpca_data = data.head(5000).copy()  # work on a copy to avoid SettingWithCopyWarning
kpca_result = kpca.fit_transform(kpca_data)
print('KPCA done! Time elapsed: {} seconds'.format(time.time()-time_start))
kpca_data['kpca-one'] = kpca_result[:,0]
kpca_data['kpca-two'] = kpca_result[:,1]
plt.figure(figsize=(16,10))
sns.scatterplot(
    x="kpca-one", y="kpca-two",
    palette=sns.color_palette("hls", 10),
    data=kpca_data,
    legend="full",
    alpha=0.3
)
###Output
_____no_output_____ |
activity_2.ipynb | ###Markdown
Introduction to Python for Data Science

Activity 2: Intro to Data Visualisation

Materials made by Jeffrey Lo, for the Python Workshop on 12 September, 2018.

In this section, we will introduce Pandas and Seaborn, which build on top of NumPy and Matplotlib, respectively. We will take a practical approach by using a real dataset from Kaggle (a data science competition platform). I have simplified and cleaned the dataset included in the ZIP folder, originally downloaded from Kaggle, for the purposes of this introductory workshop.

Bike Sharing Dataset

More info here: https://www.kaggle.com/marklvl/bike-sharing-dataset/home

This dataset contains the daily count of rental bikes between the years 2011 and 2012 in the Capital Bikeshare system in Washington, DC, with the corresponding weather and seasonal information.

Importing Modules/Packages

Recall from Activity 1 that we need to import our extra modules and packages first. We will use NumPy, Pandas, Matplotlib and Seaborn. The last line is a technicality, telling Python to show the graphs in-line.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Pandas

We will start by using some of the functionality of Pandas. You can think of Pandas as providing some of the functionality of Excel, but in code.

Importing data

`pd.read_csv` will read the CSV file and store the dataset in the variable we named `data`. The file has to be in the same directory (folder) as this notebook; otherwise you need to include the subdirectory path and so on.

N.b. if it's an Excel file, you can use `pd.read_excel`.
###Code
data = pd.read_csv("bike_sharing_data_by_day_simplified.csv")
###Output
_____no_output_____
###Markdown
Getting to know your dataset

Below, you can see that a Pandas DataFrame resembles an Excel worksheet, except that you cannot edit it by hand (you would need to do this via code). We call `.head()` to show the first 5 rows.
###Code
data.head()
###Output
_____no_output_____
###Markdown
**Activities:**
- You can use `.tail()` to show the last 5 rows.
- You can specify the number of rows to display by passing a parameter, e.g. `.head(12)`.

You can also call `data.shape` to output the number of rows and columns, respectively.

N.b. in machine learning, rows are called instances and columns are called features.
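For instance, the variations mentioned above look like this (a small added illustration; output omitted here):

```python
data.tail()      # last 5 rows
data.head(12)    # first 12 rows
```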
###Code
data.shape #number of rows, columns
###Output
_____no_output_____
###Markdown
Descriptive Statistics

Descriptive statistics provide simple but useful summaries of the data. In practice, it is essentially impossible to look at every single data row, so descriptive statistics are mainly used in exploratory data analysis (EDA) in the data science process.

You can start by looking at the count of values in each column - that will indicate whether or not there is missing data. In this dataset, we know that there are 731 rows and every column has 731 values, thus there is no missing data.

We can also use descriptive statistics to look at outliers and the distribution of the data (min, 25%, median, 75%, max). But is there a better way to look for outliers than reading this table?
###Code
data.describe().round(1)
###Output
_____no_output_____
###Markdown
Distribution

As with most statistical analyses, another way to get to know your data is to look at its distribution visually. For simplicity, we will just look at the response variable (in this case `total_count`, the daily count of bike riders). The plot below agrees with the descriptive statistics of `total_count` (above) - the minimum is ~0, the mean is ~4500, and the max is ~8000+.
###Code
sns.distplot(data['total_count'], kde=None)
plt.show()
###Output
_____no_output_____
###Markdown
**Activity:**
- Try showing the distribution of casual riders over 2011-12.

Time Series Plot (Matplotlib)

Since this particular dataset is time-series based, we should clearly start by looking at the general trend in the number of bike shares over 2011-12. We can use the `.plot` function to plot the data from a column when the data is a time series.
###Code
plt.plot(data['total_count'])
plt.show()
###Output
_____no_output_____
###Markdown
**Activities:**
- Oftentimes it is best practice to label your graph by calling the `.title`, `.xlabel`, and `.ylabel` functions from Matplotlib. Try adding this line: `plt.title("Time Series Plot of Daily Bike Riders")`
- We can add `fig = plt.figure(figsize=(8,4))` to store the figure and change the figure size, along with any other parameters from the documentation [here](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.figure.html).
- For aesthetic purposes, it is common to remove the top and right axes by adding a new line: `sns.despine()`.

Seaborn - More Visualisations

There are many types of plots available, as you can see from [Seaborn's Example gallery](https://seaborn.pydata.org/examples/index.html). Here are just some of them:
- Barcharts (`sns.barplot`)
- Boxplots (`sns.boxplot`)
- Distribution plots (`sns.distplot`)
- Regression plots (`sns.regplot`)
- ... even Violin plots (`sns.violinplot`)

Barplots
###Code
fig = plt.figure(figsize=(8,4))
sns.barplot(data=data, x='month', y='total_count')
plt.title("Average Daily Bike Shares (by Month)")
plt.show()
###Output
_____no_output_____
###Markdown
**Activities:**
- There are many parameters for each function... simply Google `sns.barplot` for the Seaborn documentation. Open the [parameter list for barplots](https://seaborn.pydata.org/generated/seaborn.barplot.html).
- When parameters are not specified, all the possible parameters are set (by Seaborn) to their defaults or to none.
- For barplots (and some other plot types), one useful parameter is `hue`, where you can specify an additional feature to be included in addition to `x` and `y`. Try adding `hue='year'` within `sns.barplot()`.
- The black vertical lines are the confidence intervals. We probably don't need them here; you can turn them off by adding `ci=None` within `sns.barplot()`.

Boxplots and more

What if you want to visualise the total bike shares against weekday? Would a barchart (barplot) work well here... or not? Try changing the code above and see!

If we use a barplot, we can only find the average daily bike shares by day of week. That is, it is essentially just 7 numbers plotted on a graph. Boxplots do more than that... they show the distribution, or variability, of the data for each weekday, as below. You can see that Wednesdays have lower variability than other days, such as Thursdays and Sundays.
###Code
fig = plt.figure(figsize=(8,4))
sns.boxplot(data=data, x='weekday', y='total_count')
plt.title("Daily Bike Shares (by Weekday)")
plt.show()
###Output
_____no_output_____
###Markdown
**Activities:**
- Again, you can check what other parameters you can use... here is the [documentation for boxplots](https://seaborn.pydata.org/generated/seaborn.boxplot.html).
- For boxplots, the middle horizontal line is the 50% quantile (the median), not the mean. Thus, it is useful to show the mean by adding the parameter `showmeans=True`.
- What happens when you change from `sns.boxplot` to `sns.swarmplot`... what extra information does this show? What about `sns.violinplot`?
- Try plotting weather against total_count by replacing `weekday` with `weather` in the code above.

Regression and Scatterplots

You may recall regression plots from intro stats: a regression plot is basically a scatterplot with a line of best fit (for simple linear regression). Seaborn can handle this as well!
###Code
fig = plt.figure(figsize=(8,4))
sns.regplot(data=data, x='temperature', y='total_count')
sns.despine()
plt.title("Temperature vs Daily Bike Shares (2011-12)")
###Output
_____no_output_____ |
cwpk-52-interactive-web-widgets-2.ipynb | ###Markdown
CWPK \52: Distributed Interactions via Web Widgets - II
=======================================
A Design for a Working Ecosystem of Existing Applications
--------------------------
In our last installment of the [*Cooking with Python and KBpedia*](https://www.mkbergman.com/cooking-with-python-and-kbpedia/) series, I discussed the rationale for using distributed Web widgets as one means to bring the use and management of a knowledge graph into closer alignment with existing content tasks. The idea is to embed small Web controls either as plug-ins, as enhancements to existing application Web interfaces, or simply as accompanying Web pages. Under this design, ease-of-use and immediacy are paramount in order to facilitate the capture of new knowledge if and where it arises.

More complicated knowledge graph tasks like initial builds, bulk changes, logic testing, and the like are therefore kept as separate tasks. For our immediate purposes, knowledge workers should be able to issue suggestions for new concepts, or for modifications or deletions, as they are discovered. These suggested modifications may be made to a working version of the knowledge graph that operates in parallel to a public (or staff-wide) release version. The basic concept behind this and the previous installment is that we would like to have the ability to use simple widgets -- embedded on any Web page for the applications we use -- as a means of capturing and communicating immediate new information of relevance to our knowledge graphs. In a distributed manner, while working with production versions, we may want to communicate these updates to an interim version in the queue for review and promotion via a governance pipeline that might lead to a new production version.

On some periodic basis, this modified working version could be inspected and tested by managers assigned the responsibility to vet new public versions. There is much governance and policy that may guide these workflows, which may also be captured by their own ontologies, but that is a topic separate from the widget infrastructure needed to make this vision operational.

While there are widgets available for analysis and visualization purposes, which we will touch upon in later installments, the ones we will emphasize in today's **CWPK** installment are focused on [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) (*create - read - update - delete*) activities. CRUD is the means by which we manage our knowledge graphs in the immediate sense. We will be looking to develop these widgets in a manner directly useful to distributed applications.

Getting Started

Remember from our last installment that we are basing our approach to code examples on the [Jupyter Notebook](https://en.wikipedia.org/wiki/Project_Jupyter#Jupyter_Notebook) [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/) package. That package does not come pre-installed, so we need to install it in our system using conda:

    conda install -c conda-forge ipywidgets

Once done, we fire up our Notebook and begin with our standard start-up, only now including a new [autoFill module](https://github.com/germanesosa/ipywidget-autocomplete), which I explain next. This [module](https://github.com/germanesosa/ipywidget-autocomplete) is an example of an ipywidgets extension. Place this new autoFill.py page as a new module page within your [Spyder](https://en.wikipedia.org/wiki/Spyder_(software)) *cowpoke* project. Make sure to copy that file into the project before beginning the startup:
###Code
from cowpoke.__main__ import *
from cowpoke.config import *
from cowpoke.autoFill import *
from owlready2 import *
###Output
_____no_output_____
###Markdown
Some Basic Controls (Widgets)

As noted, our basic controls for this installment revolve around CRUD, though we will not address them in that order. Before we can start manipulating individual objects, we need ways to discover what is already in the KBpedia (or your own) knowledge graph. Search and [auto-completion](https://en.wikipedia.org/wiki/Autocomplete) are two essential tools for this job, particularly given that KBpedia has more than 58,000 concepts and 5,000 different properties.

Basic Search

Recall that in [**CWPK 23**](https://www.mkbergman.com/2356/cwpk-23-text-searching-kbpedia/) we covered owlready2's basic search function and parameters. You may use the interactive commands shown in that documentation to search by type, subclasses, IRI, etc.

Auto-complete Helper

We list the auto-completion helper next because it is leveraged by virtually every widget. Auto-completion is used in a text entry box: as characters are typed, a dropdown list shows the items that match the search string entered so far. It is a useful tool for discovering what exists in a system as well as for obtaining the exact spelling and capitalization, which may be necessary for a match. When the right item appears in the dropdown, you click it to make your current selection.

The added utility we found for this is the [ipywidget-autocomplete](https://github.com/germanesosa/ipywidget-autocomplete) module, which we have added as our own module to *cowpoke*. It provides the underlying 'autofill' code that we import in the actual routine we use within the Notebook. Here is an example of that Notebook code, with explanations that follow below:
###Code
import cowpoke.autoFill as af
from ipywidgets import * # Note 1
def open(value):
show.value=value.new
strlist = [] # Note 2
listing = list(kb.classes()) # Note 3
for item in listing: # Note 4
item = str(item)
item = item.replace('rc.', '')
strlist.append(item)
autofill = af.autoFill(strlist,callback=open)
show = HTML("Begin typing substring characters to see 'auto-complete'!")
display(HBox([autofill,show]))
###Output
_____no_output_____
###Markdown
Besides the 'autofill' component, we are also importing some of the basic ipywidgets code **(1)**. In this instance, we want to auto-complete on the 58 K concepts in KBpedia **(3)**, which we have to convert from a listing of classes to a listing of strings **(2)** and **(4)**. We can just as easily auto-complete on properties by changing one line **(3)**:

    listing = list(kb.properties())

or, of course, other specifications may be entered for the boundaries of our auto-complete.

Converting the owlready2 listing of classes to strings is done by looping over all items in the list, converting each item to a string, replacing the 'rc.' prefix, and then appending the new string item to a new strlist **(4)**. The 'callback' option relates the characters typed into the text box to this string listing. This particular expression will find string matches at any position (not just at the beginning) and is case insensitive.

When an item matching your interest appears in the dropdown list, pick it. The selected value can then be retrieved with the following statement:
###Code
show.value
###Output
_____no_output_____
###Markdown
Read Item

Let's use a basic *record* format to show how individual property values may be obtained for a given reference concept (RC). We could obviously expand this example to include any of the possible RC property values, but we will use the most prominent ones here:
###Code
r_val = show.value
r_val = getattr(rc, r_val) # Note 1
a_val = str(r_val.prefLabel) # Note 2
b_val = r_val.altLabel
b_items = ''
for index, item in enumerate(b_val):                 # Note 3
    item = str(item)
    if index == 0:                                   # compare with the integer index (was '[0]', which never matches)
        b_items = item
    else:
        b_items = b_items + '||' + item
b_val = b_items
c_val = str(r_val.definition)
d_val = r_val.subclasses()
d_items = ''
for index, item in enumerate(d_val):                 # Note 3
    item = str(item)
    if index == 0:                                   # compare with the integer index (was '[0]', which never matches)
        d_items = item
    else:
        d_items = d_items + '||' + item
d_val = d_items
e_val = str(r_val.wikidata_q_id)
f_val = str(r_val.wikipedia_id)
g_val = str(r_val.schema_org_id)
# Note 4
a = widgets.Text(style={'description_width': '100px'}, layout=Layout(width='760px'),
description='Preferred Label:', value = a_val)
b = widgets.Text(style={'description_width': '100px'}, layout=Layout(width='760px'),
description='AltLabel(s):', value = b_val)
c = widgets.Textarea(style={'description_width': '100px'}, layout=Layout(width='760px'),
description='Definition:', value = c_val)
d = widgets.Textarea(style={'description_width': '100px'}, layout=Layout(width='760px'),
description='subClasses:', value = d_val)
e = widgets.Text(style={'description_width': '100px'}, layout=Layout(width='760px'),
description='Q ID:', value = e_val)
f = widgets.Text(style={'description_width': '100px'}, layout=Layout(width='760px'),
description='Wikipedia ID:', value = f_val)
g = widgets.Text(style={'description_width': '100px'}, layout=Layout(width='760px'),
description='schema.org ID:', value = g_val)
def f_out(a, b, c, d, e, f, g):
print(''.format(a, b, c, d, e, f, g))
out = widgets.interactive_output(f_out, {'a': a, 'b': b, 'c': c, 'd': d, 'e': e, 'f': f, 'g': g})
widgets.HBox([widgets.VBox([a, b, c, d, e, f, g]), out]) # Note 5
###Output
_____no_output_____
###Markdown
We begin by grabbing the show.value result **(1)** that came from picking an item from the auto-complete list. We then retrieve the individual attribute values **(2)** for that resource, some of which we need to iterate over **(3)** because they return multiple items in a list. For display purposes, we need to convert all retrieved property values to strings.

We can also style our widgets **(4)**. 'Layout' applies to the entire widget, and 'style' applies to selected elements. Other settings are possible, depending on the widget; we can inspect them with the following statement (the available keys vary by widget type):
###Code
text = Text(description='text box')
print(text.style.keys)
###Output
_____no_output_____
###Markdown
Then, after defining a simple callback procedure, we invoke the control on the Notebook page **(5)**.

We could do more to clean up the interim output values (such as removing brackets and quotes, as we have done elsewhere), and we can get fancier with grid layouts and such. In these regards, the Notebook widgets tend to work like, and have parameter settings similar to, other HTML widgets, though the number of examples and the degree of control are not as extensive as in other widget libraries. Again, though, since so much of this would need to be tailored to other Web frameworks and environments in an actual deployment, we can simply state for now that quite a bit of control is available, depending on the language, for bringing your knowledge graph information local to your current applications.

Modify (Update) Item

Though strings in Python are what is known as 'immutable' (unchanging), it is possible to update or modify them via the string.replace('old', 'new') option, which returns a new string. For entire strings, the 'old' value may be brought into a text box via a variable name as shown above, altered, and then captured as show.value on a click event. The basic format of this approach can be patterned as follows (changing properties as appropriate):
###Code
from ipywidgets import widgets # Run the code cell to start
old_text = widgets.Text(style={'description_width': '100px'}, layout=Layout(width='760px'), description = 'Value to modify:', value = 'This is the old input.')
new_text = widgets.Text(description = 'New value:', value = 'This is the NEW value.')
def bind_old_to_new(sender):
old_text.value = new_text.value
old_text.description = new_text.description
old_text.on_submit(bind_old_to_new)
old_text # Click on the text box to invoke
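# (Added note, not part of the original post:) after typing into the box and
# pressing Enter, the widget's current contents can be read back with:
# old_text.value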
###Output
_____no_output_____ |
07-Extra-Content/Big-Data-Google-Colab/day-1/06-Stu_Pyspark_Dataframes_Basics/Unsolved/demographics.ipynb | ###Markdown
Install Java, Spark, and Findspark
###Code
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://www-us.apache.org/dist/spark/spark-2.3.2/spark-2.3.2-bin-hadoop2.7.tgz
!tar xf spark-2.3.2-bin-hadoop2.7.tgz
!pip install -q findspark
###Output
_____no_output_____
###Markdown
Set Environmental Variables
###Code
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.2-bin-hadoop2.7"
#### Start a Spark Session
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("demographics").getOrCreate()
### Load the demographics.csv file, have Spark infer the data types
from pyspark import SparkFiles
url = "https://s3.amazonaws.com/dataviz-curriculum/day_1//demographics.csv"
spark.sparkContext.addFile(url)
df = spark.read.csv(SparkFiles.get("demographics.csv"), sep=",", header=True, inferSchema=True)
df.show()
### Print the column names
# Print out the first 10 rows
# Select the age, height_meter, and weight_kg columns and use describe to show the summary statistics
# Print the schema to see the types
# Rename the Salary column to `Salary (1k)` and show only this new column
# Create a new column called `Salary` where the values are the `Salary (1k)` * 1000
# Show the columns `Salary` and `Salary (1k)`
###Output
_____no_output_____ |
lectures/21-power_perm/power_curve.ipynb | ###Markdown
Power Curve

This is an **interactive notebook**! If you run the cells below, it'll give you some sliders that you can adjust to see the effect that the p-value threshold (the x-axis on the plot), the effect size, and the sample size have on the power of a simple t-test.
###Code
%matplotlib notebook
import numpy as np
from matplotlib import pyplot as plt
import scipy.stats
from ipywidgets import interact
import ipywidgets as widgets
num_tests = 3000
p_thresholds = np.logspace(-5, np.log10(0.05), 40)
#p_thresholds = np.logspace(-5, 0, 40)
power_line = plt.semilogx(p_thresholds, np.linspace(0, 1, 40), '.-')
plt.xlabel('p-value', size=14)
plt.ylabel('power', size=14)
def update(effect_size, n):
arr = np.random.randn(n, num_tests) + effect_size
t,p = scipy.stats.ttest_1samp(arr, 0)
power = np.array([(p<p_t).mean() for p_t in p_thresholds])
power_line[0].set_ydata(power)
effect_size_slider = widgets.FloatSlider(value=0.3, min=0.1, max=1.5, step=0.1)
n_slider = widgets.IntSlider(value=100, min=10, max=250, step=10)
interact(update, effect_size=effect_size_slider, n=n_slider);
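# Optional cross-check (an addition; assumes the statsmodels package is installed):
# the analytic power of a one-sample t-test should roughly match the simulated
# curve at a given threshold, e.g. for effect size 0.3, n=100, alpha=0.05:
# from statsmodels.stats.power import TTestPower
# print(TTestPower().power(effect_size=0.3, nobs=100, alpha=0.05))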
###Output
_____no_output_____ |
TF for AI/Exercise2_Question_Blank.ipynb | ###Markdown
Exercise 2

In the course you learned how to do classification using Fashion MNIST, a data set containing items of clothing. There's another, similar dataset called MNIST which has items of handwriting -- the digits 0 through 9.

Write an MNIST classifier that trains to 99% accuracy or above, and does so without a fixed number of epochs -- i.e., you should stop training once you reach that level of accuracy.

Some notes:
1. It should succeed in less than 10 epochs, so it is okay to change epochs to 10, but nothing larger.
2. When it reaches 99% or greater it should print out the string "Reached 99% accuracy so cancelling training!"
3. If you add any additional variables, make sure you use the same names as the ones used in the class.

I've started the code for you below -- how would you finish it?
###Code
# YOUR CODE SHOULD START HERE
# YOUR CODE SHOULD END HERE
import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
# YOUR CODE SHOULD START HERE
# YOUR CODE SHOULD END HERE
model = tf.keras.models.Sequential([
# YOUR CODE SHOULD START HERE
# YOUR CODE SHOULD END HERE
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# YOUR CODE SHOULD START HERE
# YOUR CODE SHOULD END HERE
###Output
_____no_output_____ |
pytorch/autograd_tutorial1.ipynb | ###Markdown
Autograd
========

Autograd is now a core torch package for automatic differentiation. It uses a tape-based system: in the forward phase, the autograd tape will remember all the operations it executed, and in the backward phase, it will replay the operations.

Variable
--------

In autograd, we introduce a ``Variable`` class, which is a very thin wrapper around a ``Tensor``. You can access the raw tensor through the ``.data`` attribute, and after computing the backward pass, a gradient w.r.t. this variable is accumulated into the ``.grad`` attribute.

.. figure:: /_static/img/Variable.png
   :alt: Variable

   Variable

There's one more class which is very important for the autograd implementation - a ``Function``. ``Variable`` and ``Function`` are interconnected and build up an acyclic graph that encodes a complete history of computation. Each variable has a ``.grad_fn`` attribute that references the ``Function`` that created that ``Variable`` (except for Variables created by the user - these have ``None`` as ``.grad_fn``).

If you want to compute the derivatives, you can call ``.backward()`` on a ``Variable``. If the ``Variable`` is a scalar (i.e. it holds a one-element tensor), you don't need to specify any arguments to ``backward()``; however, if it has more elements, you need to specify a ``grad_output`` argument that is a tensor of matching shape.
###Code
import torch
from torch.autograd import Variable
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x) # notice the "Variable containing" line
print(x.data)
print(x.grad)
print(x.grad_fn) # we've created x ourselves
###Output
_____no_output_____
###Markdown
Do an operation of x:
###Code
y = x + 2
print(y)
###Output
_____no_output_____
###Markdown
y was created as a result of an operation,so it has a grad_fn
###Code
print(y.grad_fn)
###Output
_____no_output_____
###Markdown
More operations on y:
###Code
z = y * y * 3
out = z.mean()
print(z, out)
###Output
_____no_output_____
###Markdown
Gradients---------let's backprop now and print gradients d(out)/dx
###Code
out.backward()
print(x.grad)
###Output
_____no_output_____
###Markdown
By default, gradient computation flushes all the internal bufferscontained in the graph, so if you even want to do the backward on somepart of the graph twice, you need to pass in ``retain_variables = True``during the first pass.
###Code
x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
y.backward(torch.ones(2, 2), retain_graph=True)
# the retain_graph flag will prevent the internal buffers from being freed
print(x.grad)
z = y * y
print(z)
###Output
_____no_output_____
###Markdown
just backprop random gradients
###Code
gradient = torch.randn(2, 2)
# this would fail if we didn't specify
# that we want to retain variables
y.backward(gradient)
print(x.grad)
###Output
_____no_output_____ |
Sagemaker/Mini-Projects/IMDB Sentiment Analysis - XGBoost (Hyperparameter Tuning).ipynb | ###Markdown
Sentiment Analysis Using XGBoost in SageMaker

_Deep Learning Nanodegree Program | Deployment_

---

In this example of using Amazon's SageMaker service we will construct a random tree model to predict the sentiment of a movie review. You may have seen a version of this example in a previous lesson, although there it would have been done using the sklearn package. Instead, we will be using the XGBoost package as it is provided to us by Amazon.

Instructions

Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.

> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can typically be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.

Step 1: Downloading the data

The dataset we are going to use is very popular among researchers in Natural Language Processing, usually referred to as the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/). It consists of movie reviews from the website [imdb.com](http://www.imdb.com/), each labeled as either '**pos**itive', if the reviewer enjoyed the film, or '**neg**ative' otherwise.

> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.

We begin by using some Jupyter Notebook magic to download and extract the dataset.
###Code
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
###Output
mkdir: cannot create directory ‘../data’: File exists
--2020-10-14 12:28:41-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
Resolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10
Connecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 84125825 (80M) [application/x-gzip]
Saving to: ‘../data/aclImdb_v1.tar.gz’
../data/aclImdb_v1. 100%[===================>] 80.23M 24.2MB/s in 4.5s
2020-10-14 12:28:46 (17.7 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]
###Markdown
Step 2: Preparing the data

The data we have downloaded is split into various files, each of which contains a single review. It will be much easier going forward if we combine these individual files into two large files, one for training and one for testing.
###Code
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
# Return a unified training data, test data, training labels, test labets
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
train_X[100]
###Output
_____no_output_____
###Markdown
Step 3: Processing the data

Now that we have our training and testing datasets merged and ready to use, we need to start processing the raw data into something usable by our machine learning algorithm. To begin with, we remove any HTML formatting that may appear in the reviews and perform some standard natural language processing in order to homogenize the data.
###Code
import nltk
nltk.download("stopwords")
from nltk.corpus import stopwords
from nltk.stem.porter import *
stemmer = PorterStemmer()
import re
from bs4 import BeautifulSoup
def review_to_words(review):
text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case
words = text.split() # Split string into words
words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
words = [PorterStemmer().stem(w) for w in words] # stem
return words
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
"""Convert each review to words; read from cache if available."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = pickle.load(f)
print("Read preprocessed data from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Preprocess training and test data to obtain words for each review
#words_train = list(map(review_to_words, data_train))
#words_test = list(map(review_to_words, data_test))
print('Processing training words.')
words_train = [review_to_words(review) for review in data_train]
print('Processing testing words.')
words_test = [review_to_words(review) for review in data_test]
# Write to cache file for future runs
if cache_file is not None:
cache_data = dict(words_train=words_train, words_test=words_test,
labels_train=labels_train, labels_test=labels_test)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
pickle.dump(cache_data, f)
print("Wrote preprocessed data to cache file:", cache_file)
else:
# Unpack data loaded from cache file
words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
###Output
Read preprocessed data from cache file: preprocessed_data.pkl
###Markdown
Extract Bag-of-Words features

For the model we will be implementing, rather than using the reviews directly, we are going to transform each review into a Bag-of-Words feature representation. Keep in mind that 'in the wild' we will only have access to the training set, so our transformer can only use the training set to construct a representation.
###Code
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.externals import joblib
# joblib is an enhanced version of pickle that is more efficient for storing NumPy arrays
def extract_BoW_features(words_train, words_test, vocabulary_size=5000,
cache_dir=cache_dir, cache_file="bow_features.pkl"):
"""Extract Bag-of-Words for a given set of documents, already preprocessed into words."""
# If cache_file is not None, try to read from it first
cache_data = None
if cache_file is not None:
try:
with open(os.path.join(cache_dir, cache_file), "rb") as f:
cache_data = joblib.load(f)
print("Read features from cache file:", cache_file)
except:
pass # unable to read from cache, but that's okay
# If cache is missing, then do the heavy lifting
if cache_data is None:
# Fit a vectorizer to training documents and use it to transform them
# NOTE: Training documents have already been preprocessed and tokenized into words;
# pass in dummy functions to skip those steps, e.g. preprocessor=lambda x: x
vectorizer = CountVectorizer(max_features=vocabulary_size,
preprocessor=lambda x: x, tokenizer=lambda x: x) # already preprocessed
features_train = vectorizer.fit_transform(words_train).toarray()
# Apply the same vectorizer to transform the test documents (ignore unknown words)
features_test = vectorizer.transform(words_test).toarray()
# NOTE: Remember to convert the features using .toarray() for a compact representation
# Write to cache file for future runs (store vocabulary as well)
if cache_file is not None:
vocabulary = vectorizer.vocabulary_
cache_data = dict(features_train=features_train, features_test=features_test,
vocabulary=vocabulary)
with open(os.path.join(cache_dir, cache_file), "wb") as f:
joblib.dump(cache_data, f)
print("Wrote features to cache file:", cache_file)
else:
# Unpack data loaded from cache file
features_train, features_test, vocabulary = (cache_data['features_train'],
cache_data['features_test'], cache_data['vocabulary'])
# Return both the extracted features as well as the vocabulary
return features_train, features_test, vocabulary
# Extract Bag of Words features for both training and test datasets
train_X, test_X, vocabulary = extract_BoW_features(train_X, test_X)
###Output
Read features from cache file: bow_features.pkl
###Markdown
Step 4: Classification using XGBoost

Now that we have created the feature representation of our training (and testing) data, it is time to start setting up and using the XGBoost classifier provided by SageMaker.

Writing the dataset

The XGBoost classifier that we will be using requires the dataset to be written to a file and stored using Amazon S3. To do this, we will start by splitting the training dataset into two parts: the data we will train the model with and a validation set. Then, we will write those datasets to files and upload them to S3. In addition, we will write the test set input to a file and upload that file to S3. This is so that we can use SageMaker's Batch Transform functionality to test our model once we've fit it.
###Code
import pandas as pd
val_X = pd.DataFrame(train_X[:10000])
train_X = pd.DataFrame(train_X[10000:])
val_y = pd.DataFrame(train_y[:10000])
train_y = pd.DataFrame(train_y[10000:])
test_y = pd.DataFrame(test_y)
test_X = pd.DataFrame(test_X)
###Output
_____no_output_____
###Markdown
The documentation for the XGBoost algorithm in SageMaker requires that the saved datasets contain no headers or index, and that for the training and validation data the label occurs first in each sample.

For more information about this and other algorithms, the SageMaker developer documentation can be found on __[Amazon's website.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
# First we make sure that the local directory in which we'd like to store the training and validation csv files exists.
data_dir = '../data/xgboost'
if not os.path.exists(data_dir):
os.makedirs(data_dir)
# First, save the test data to test.csv in the data_dir directory. Note that we do not save the associated ground truth
# labels, instead we will use them later to compare with our model output.
pd.DataFrame(test_X).to_csv(os.path.join(data_dir, 'test.csv'), header=False, index=False)
pd.concat([val_y, val_X], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)
pd.concat([train_y, train_X], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
# To save a bit of memory we can set text_X, train_X, val_X, train_y and val_y to None.
train_X = val_X = train_y = val_y = None
###Output
_____no_output_____
###Markdown
Uploading Training / Validation files to S3

Amazon's S3 service allows us to store files that can be accessed both by the built-in training models, such as the XGBoost model we will be using, and by custom models, such as the one we will see a little later.

For this, and most other tasks we will be doing using SageMaker, there are two methods we could use. The first is to use the low-level functionality of SageMaker, which requires knowing each of the objects involved in the SageMaker environment. The second is to use the high-level functionality, in which certain choices have been made on the user's behalf. The low-level approach benefits from allowing the user a great deal of flexibility, while the high-level approach makes development much quicker. For our purposes we will opt to use the high-level approach, although using the low-level approach is certainly an option.

Recall the method `upload_data()`, which is a member of the object representing our current SageMaker session. What this method does is upload the data to the default bucket (which is created if it does not exist) into the path described by the key_prefix variable. To see this for yourself, once you have uploaded the data files, go to the S3 console and look to see where the files have been uploaded.

For additional resources, see the __[SageMaker API documentation](http://sagemaker.readthedocs.io/en/latest/)__ and the __[SageMaker Developer Guide.](https://docs.aws.amazon.com/sagemaker/latest/dg/)__
###Code
import sagemaker
session = sagemaker.Session() # Store the current SageMaker session
# S3 prefix (which folder will we use)
prefix = 'sentiment-xgboost'
test_location = session.upload_data(os.path.join(data_dir, 'test.csv'), key_prefix=prefix)
val_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)
train_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)
###Output
_____no_output_____
###Markdown
(TODO) Creating a hypertuned XGBoost model

Now that the data has been uploaded, it is time to create the XGBoost model. As in the Boston Housing notebook, the first step is to create an estimator object which will be used as the *base* of your hyperparameter tuning job.
###Code
from sagemaker import get_execution_role
# Our current execution role is require when creating the model as the training
# and inference code will need to access the model artifacts.
role = get_execution_role()
# We need to retrieve the location of the container which is provided by Amazon for using XGBoost.
# As a matter of convenience, the training and inference code both use the same container.
from sagemaker.amazon.amazon_estimator import get_image_uri
container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')
# TODO: Create a SageMaker estimator using the container location determined in the previous cell.
# It is recommended that you use a single training instance of type ml.m4.xlarge. It is also
# recommended that you use 's3://{}/{}/output'.format(session.default_bucket(), prefix) as the
# output path.
xgb = sagemaker.estimator.Estimator(container,
role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),
sagemaker_session=session)
# TODO: Set the XGBoost hyperparameters in the xgb object. Don't forget that in this case we have a binary
# label so we should be using the 'binary:logistic' objective.
xgb.set_hyperparameters(max_depth=5,
eta=0.2,
gamma=4,
min_child_weight=6,
subsample=0.8,
objective='binary:logistic',
early_stopping_rounds=10,
num_round=200)
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
(TODO) Create the hyperparameter tuner

Now that the base estimator has been set up, we need to construct a hyperparameter tuner object which we will use to request that SageMaker construct a hyperparameter tuning job.

**Note:** Training a single sentiment analysis XGBoost model takes longer than training a Boston Housing XGBoost model, so if you don't want the hyperparameter tuning job to take too long, make sure not to set the total number of models (jobs) too high.
###Code
# First, make sure to import the relevant objects used to construct the tuner
from sagemaker.tuner import IntegerParameter, ContinuousParameter, HyperparameterTuner
# TODO: Create the hyperparameter tuner object
xgb_hyperparameter_tuner = HyperparameterTuner(estimator = xgb,
objective_metric_name = 'validation:rmse',
objective_type = 'Minimize',
max_jobs = 5,
max_parallel_jobs = 3,
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 12),
'eta' : ContinuousParameter(0.05, 0.5),
'min_child_weight': IntegerParameter(2, 8),
'subsample': ContinuousParameter(0.5, 0.9),
'gamma': ContinuousParameter(0, 10)
}
)
###Output
_____no_output_____
###Markdown
Fit the hyperparameter tuner

Now that the hyperparameter tuner object has been constructed, it is time to fit the various models and find the best performing model.
###Code
s3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='text/csv')
s3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='text/csv')
xgb_hyperparameter_tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
_____no_output_____
###Markdown
Remember that the tuning job is constructed and run in the background so if we want to see the progress of our training job we need to call the `wait()` method.
###Code
xgb_hyperparameter_tuner.wait()
###Output
.................................................................................................................................................................................................................!
###Markdown
(TODO) Testing the model

Now that we've run our hyperparameter tuning job, it's time to see how well the best performing model actually performs. To do this we will use SageMaker's Batch Transform functionality. Batch Transform is a convenient way to perform inference on a large dataset in a way that is not realtime. That is, we don't necessarily need to use our model's results immediately; instead we can perform inference on a large number of samples. An example of this in industry might be generating an end-of-month report. This method of inference can also be useful to us as it means we can perform inference on our entire test set.

Remember that in order to create a transformer object to perform the batch transform job, we need a trained estimator object. We can do that using the `attach()` method, creating an estimator object which is attached to the best training job.
###Code
# TODO: Create a new estimator object attached to the best training job found during hyperparameter tuning
xgb_attached = sagemaker.estimator.Estimator.attach(xgb_hyperparameter_tuner.best_training_job())
###Output
Parameter image_name will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Now that we have an estimator object attached to the correct training job, we can proceed as we normally would and create a transformer object.
###Code
# TODO: Create a transformer object from the attached estimator. Using an instance count of 1 and an instance type of ml.m4.xlarge
# should be more than enough.
xgb_transformer = xgb_attached.transformer(instance_count = 1, instance_type = 'ml.m4.xlarge')
###Output
Parameter image will be renamed to image_uri in SageMaker Python SDK v2.
###Markdown
Next we actually perform the transform job. When doing so we need to make sure to specify the type of data we are sending so that it is serialized correctly in the background. In our case we are providing our model with csv data so we specify `text/csv`. Also, if the test data that we have provided is too large to process all at once then we need to specify how the data file should be split up. Since each line is a single entry in our data set we tell SageMaker that it can split the input on each line.
###Code
# TODO: Start the transform job. Make sure to specify the content type and the split type of the test data.
xgb_transformer.transform(test_location, content_type='text/csv', split_type='Line')
###Output
_____no_output_____
###Markdown
Currently the transform job is running but it is doing so in the background. Since we wish to wait until the transform job is done and we would like a bit of feedback we can run the `wait()` method.
###Code
xgb_transformer.wait()
###Output
................................[32m2020-10-14T12:35:53.903:[sagemaker logs]: MaxConcurrentTransforms=4, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34m[2020-10-14:12:35:51:INFO] No GPUs detected (normal if no gpus installed)[0m
[34m[2020-10-14:12:35:51:INFO] No GPUs detected (normal if no gpus installed)[0m
[34m[2020-10-14:12:35:51:INFO] nginx config: [0m
[34mworker_processes auto;[0m
[34mdaemon off;[0m
[34mpid /tmp/nginx.pid;[0m
[34merror_log /dev/stderr;
[0m
[34mworker_rlimit_nofile 4096;
[0m
[34mevents {
worker_connections 2048;[0m
[34m}
[0m
[35m[2020-10-14:12:35:51:INFO] No GPUs detected (normal if no gpus installed)[0m
[35m[2020-10-14:12:35:51:INFO] No GPUs detected (normal if no gpus installed)[0m
[35m[2020-10-14:12:35:51:INFO] nginx config: [0m
[35mworker_processes auto;[0m
[35mdaemon off;[0m
[35mpid /tmp/nginx.pid;[0m
[35merror_log /dev/stderr;
[0m
[35mworker_rlimit_nofile 4096;
[0m
[35mevents {
worker_connections 2048;[0m
[35m}
[0m
[34mhttp {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /dev/stdout combined;
upstream gunicorn {
server unix:/tmp/gunicorn.sock;
}
server {
listen 8080 deferred;
client_max_body_size 0;
keepalive_timeout 3;
location ~ ^/(ping|invocations|execution-parameters) {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_read_timeout 60s;
proxy_pass http://gunicorn;
}
location / {
return 404 "{}";
}
}[0m
[34m}
[0m
[34m2020/10/14 12:35:51 [crit] 19#19: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:51 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[34m2020/10/14 12:35:51 [crit] 19#19: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:51 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14 12:35:51 +0000] [17] [INFO] Starting gunicorn 19.10.0[0m
[34m[2020-10-14 12:35:51 +0000] [17] [INFO] Listening at: unix:/tmp/gunicorn.sock (17)[0m
[34m[2020-10-14 12:35:51 +0000] [17] [INFO] Using worker: gevent[0m
[34m[2020-10-14 12:35:51 +0000] [24] [INFO] Booting worker with pid: 24[0m
[34m[2020-10-14 12:35:51 +0000] [25] [INFO] Booting worker with pid: 25[0m
[35mhttp {
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /dev/stdout combined;
upstream gunicorn {
server unix:/tmp/gunicorn.sock;
}
server {
listen 8080 deferred;
client_max_body_size 0;
keepalive_timeout 3;
location ~ ^/(ping|invocations|execution-parameters) {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_read_timeout 60s;
proxy_pass http://gunicorn;
}
location / {
return 404 "{}";
}
}[0m
[35m}
[0m
[35m2020/10/14 12:35:51 [crit] 19#19: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:51 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[35m2020/10/14 12:35:51 [crit] 19#19: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:51 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[35m[2020-10-14 12:35:51 +0000] [17] [INFO] Starting gunicorn 19.10.0[0m
[35m[2020-10-14 12:35:51 +0000] [17] [INFO] Listening at: unix:/tmp/gunicorn.sock (17)[0m
[35m[2020-10-14 12:35:51 +0000] [17] [INFO] Using worker: gevent[0m
[35m[2020-10-14 12:35:51 +0000] [24] [INFO] Booting worker with pid: 24[0m
[35m[2020-10-14 12:35:51 +0000] [25] [INFO] Booting worker with pid: 25[0m
[34m[2020-10-14 12:35:52 +0000] [29] [INFO] Booting worker with pid: 29[0m
[34m[2020-10-14 12:35:52 +0000] [33] [INFO] Booting worker with pid: 33[0m
[34m[2020-10-14:12:35:53:INFO] No GPUs detected (normal if no gpus installed)[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:53 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:53 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14:12:35:54:INFO] No GPUs detected (normal if no gpus installed)[0m
[34m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:35:54:INFO] No GPUs detected (normal if no gpus installed)[0m
[34m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:35:54:INFO] No GPUs detected (normal if no gpus installed)[0m
[34m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14 12:35:52 +0000] [29] [INFO] Booting worker with pid: 29[0m
[35m[2020-10-14 12:35:52 +0000] [33] [INFO] Booting worker with pid: 33[0m
[35m[2020-10-14:12:35:53:INFO] No GPUs detected (normal if no gpus installed)[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:53 +0000] "GET /ping HTTP/1.1" 200 0 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:53 +0000] "GET /execution-parameters HTTP/1.1" 200 84 "-" "Go-http-client/1.1"[0m
[35m[2020-10-14:12:35:54:INFO] No GPUs detected (normal if no gpus installed)[0m
[35m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:35:54:INFO] No GPUs detected (normal if no gpus installed)[0m
[35m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:35:54:INFO] No GPUs detected (normal if no gpus installed)[0m
[35m[2020-10-14:12:35:54:INFO] Determined delimiter of CSV input is ','[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12281 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12237 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12287 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12281 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12237 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12287 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14:12:35:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:35:57:INFO] Determined delimiter of CSV input is ','[0m
[34m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12259 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14:12:35:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:35:57:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:35:57:INFO] Determined delimiter of CSV input is ','[0m
[35m169.254.255.130 - - [14/Oct/2020:12:35:57 +0000] "POST /invocations HTTP/1.1" 200 12259 "-" "Go-http-client/1.1"[0m
[35m[2020-10-14:12:35:57:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:35:58:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:35:58:INFO] Determined delimiter of CSV input is ','[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:00 +0000] "POST /invocations HTTP/1.1" 200 12258 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:00 +0000] "POST /invocations HTTP/1.1" 200 12255 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:00 +0000] "POST /invocations HTTP/1.1" 200 12244 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:00 +0000] "POST /invocations HTTP/1.1" 200 12258 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:00 +0000] "POST /invocations HTTP/1.1" 200 12255 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:00 +0000] "POST /invocations HTTP/1.1" 200 12244 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:01 +0000] "POST /invocations HTTP/1.1" 200 12304 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:01 +0000] "POST /invocations HTTP/1.1" 200 12304 "-" "Go-http-client/1.1"[0m
[35m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:36:01:INFO] Determined delimiter of CSV input is ','[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12227 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12227 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12268 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12264 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
[34m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12241 "-" "Go-http-client/1.1"[0m
[34m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
[34m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12268 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12264 "-" "Go-http-client/1.1"[0m
[35m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
[35m169.254.255.130 - - [14/Oct/2020:12:36:04 +0000] "POST /invocations HTTP/1.1" 200 12241 "-" "Go-http-client/1.1"[0m
[35m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
[35m[2020-10-14:12:36:04:INFO] Determined delimiter of CSV input is ','[0m
###Markdown
Now the transform job has executed and the result, the estimated sentiment of each review, has been saved on S3. Since we would rather work on this file locally we can perform a bit of notebook magic to copy the file to the `data_dir`.
###Code
!aws s3 cp --recursive $xgb_transformer.output_path $data_dir
###Output
Completed 256.0 KiB/476.0 KiB (2.8 MiB/s) with 1 file(s) remaining
Completed 476.0 KiB/476.0 KiB (4.7 MiB/s) with 1 file(s) remaining
download: s3://sagemaker-us-east-1-136004397992/sagemaker-xgboost-201014-1149-004-3b451-2020-10-14-12-30-46-903/test.csv.out to ../data/xgboost/test.csv.out
###Markdown
The last step is now to read in the output from our model, convert the output to something a little more usable, in this case we want the sentiment to be either `1` (positive) or `0` (negative), and then compare to the ground truth labels.
###Code
predictions = pd.read_csv(os.path.join(data_dir, 'test.csv.out'), header=None)
predictions = [round(num) for num in predictions.squeeze().values]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
###Output
_____no_output_____
###Markdown
Optional: Clean upThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.
###Code
# First we will remove all of the files contained in the data_dir directory
!rm $data_dir/*
# And then we delete the directory itself
!rmdir $data_dir
# Similarly we will remove the files in the cache_dir directory and the directory itself
!rm $cache_dir/*
!rmdir $cache_dir
###Output
_____no_output_____ |
4. Cliques, Triangles and Graph Structures (Student).ipynb | ###Markdown
Cliques, Triangles and SquaresLet's pose a problem: If A knows B and B knows C, would it be probable that A knows C as well? In a graph involving just these three individuals, it may look as such:
###Code
G = nx.Graph()
G.add_nodes_from(['a', 'b', 'c'])
G.add_edges_from([('a','b'), ('b', 'c')])
nx.draw(G, with_labels=True)
###Output
_____no_output_____
###Markdown
Let's think of another problem: If A knows B, B knows C, C knows D and D knows A, is it likely that A knows C and B knows D? What would this look like?
###Code
G.add_node('d')
G.add_edge('c', 'd')
G.add_edge('d', 'a')
nx.draw(G, with_labels=True)
###Output
_____no_output_____
###Markdown
The set of relationships involving A, B and C, if closed, involves a triangle in the graph. The set of relationships that also includes D forms a square.You may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other variables, closing triangles is one of the core ideas behind the system. If A knows B and B knows C, then A probably knows C as well. If all of the triangles in the two small-scale networks were closed, then the graph would have represented **cliques**, in which everybody within that subgraph knows one another.In this section, we will attempt to answer the following questions:1. Can we identify cliques?2. Can we identify *potential* cliques that aren't currently present in the network?3. Can we model the probability that two unconnected individuals know one another? Load DataAs usual, let's start by loading some network data. This time round, we have a [physician trust](http://konect.uni-koblenz.de/networks/moreno_innovation) network, but slightly modified such that it is undirected rather than directed.> This directed network captures innovation spread among 246 physicians in four towns in Illinois: Peoria, Bloomington, Quincy and Galesburg. The data was collected in 1966. A node represents a physician and an edge between two physicians shows that the left physician told that the right physician is his friend or that he turns to the right physician if he needs advice or is interested in a discussion. There always only exists one edge between two nodes even if more than one of the listed conditions are true.
###Code
# Load the network.
G = cf.load_physicians_network()
# Make a Circos plot of the graph
import numpy as np
from circos import CircosPlot
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
###Output
_____no_output_____
###Markdown
QuestionWhat can you infer about the structure of the graph from the Circos plot? CliquesIn a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not.The core idea is that if a node is present in a triangle, then its neighbors' neighbors' neighbors should include itself.
###Code
# Example code that shouldn't be too hard to follow.
def in_triangle(G, node):
neighbors1 = G.neighbors(node)
neighbors2 = []
for n in neighbors1:
neighbors = G.neighbors(n)
if node in neighbors2:
neighbors2.remove(node)
neighbors2.extend(G.neighbors(n))
neighbors3 = []
for n in neighbors2:
neighbors = G.neighbors(n)
neighbors3.extend(G.neighbors(n))
return node in neighbors3
in_triangle(G, 3)
###Output
_____no_output_____
###Markdown
In reality, NetworkX already has a function that *counts* the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
###Code
nx.triangles(G, 3)
###Output
_____no_output_____
###Markdown
ExerciseCan you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with?Hint: If the neighbor of my neighbor is also my neighbor, then the three of us are in a triangle relationship.Hint: Python Sets may be of great use for this problem. https://docs.python.org/2/library/stdtypes.html#set Verify your answer by drawing out the subgraph composed of those nodes.
###Code
# Possible answer
def get_triangles(G, node):
neighbors = set(G.neighbors(node))
triangle_nodes = set()
"""
Fill in the rest of the code below.
"""
return triangle_nodes
# Verify your answer with the following function call. Should return:
# {1, 2, 3, 6, 23}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
# Compare for yourself that those are the only triangles that node 3 is involved in.
neighbors3 = G.neighbors(3)
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
###Output
_____no_output_____
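###Markdown
A possible way to complete the exercise above (a sketch, not the official answer): any neighbor of the node that shares an edge with another of the node's neighbors closes a triangle with them. The helper name `get_triangles_sketch` is introduced here only to avoid clashing with the exercise function.
###Code
# A possible implementation sketch for the exercise above (not the official answer).
def get_triangles_sketch(G, node):
    neighbors = set(G.neighbors(node))
    triangle_nodes = set([node])
    for nbr in neighbors:
        # Neighbors of `node` that are also adjacent to `nbr` close a
        # triangle with `node` and `nbr`.
        shared = neighbors & set(G.neighbors(nbr))
        if shared:
            triangle_nodes.add(nbr)
            triangle_nodes.update(shared)
    return triangle_nodes
get_triangles_sketch(G, 3)
###Output
_____no_output_____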
###Markdown
Friend Recommendation: Open TrianglesNow that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph. ExerciseCan you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one? Hint: You may still want to stick with set operations. Suppose we have the A-B-C triangle. If there are neighbors of C that are also neighbors of B, then those neighbors are in a triangle with B and C; consequently, if there are nodes for which C's neighbors do not overlap with B's neighbors, then those nodes are in an open triangle. The final implementation should include some conditions, and probably won't be as simple as described above.
###Code
# Fill in your code here.
def get_open_triangles(G, node):
"""
There are many ways to represent this. One may choose to represent only the nodes involved
in an open triangle; this is not the approach taken here.
    Rather, we have code that explicitly enumerates every open triangle present.
"""
open_triangle_nodes = []
neighbors = set(G.neighbors(node))
for n in neighbors:
return open_triangle_nodes
# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
# fig = plt.figure(i)
# nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
###Output
_____no_output_____
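###Markdown
One possible sketch for the open-triangle exercise above (not the official answer); it only covers the case where the given node sits in the centre of the open triangle, and uses `itertools.combinations` to enumerate pairs of neighbors.
###Code
# A possible sketch for the exercise above (centre-node case only, not the official answer).
import itertools
def get_open_triangles_sketch(G, node):
    open_triangles = []
    neighbors = set(G.neighbors(node))
    # Two neighbors of `node` that are not connected to each other form an
    # open triangle with `node` in the centre.
    for nbr1, nbr2 in itertools.combinations(neighbors, 2):
        if not G.has_edge(nbr1, nbr2):
            open_triangles.append((nbr1, node, nbr2))
    return open_triangles
len(get_open_triangles_sketch(G, 3))
###Output
_____no_output_____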
###Markdown
If you remember the previous section on hubs and paths, you will note that node 19 was involved in a lot of open triangles.Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here. CliquesWe have figured out how to find triangles. Now, let's find out what **cliques** are present in the network. Recall: what is the definition of a clique?- NetworkX has a [clique-finding](https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.clique.find_cliques.html) algorithm implemented.- This algorithm finds all maximally-sized cliques for a given node.- Note that maximal cliques of size `n` include all cliques of `size < n`
###Code
list(nx.find_cliques(G))
###Output
_____no_output_____
###Markdown
ExerciseThis should allow us to find all n-sized maximal cliques. Try writing a function `maximal_cliques_of_size(size, G)` that implements this.
###Code
def maximal_cliques_of_size(size, G):
    return ______________________
maximal_cliques_of_size(2, G)
###Output
_____no_output_____
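###Markdown
A possible way to fill in the exercise above (a sketch, not the official answer): filter the output of `nx.find_cliques` by length.
###Code
# A possible implementation sketch for the exercise above (not the official answer).
def maximal_cliques_of_size_sketch(size, G):
    # `nx.find_cliques` yields maximal cliques; keep only those of the requested size.
    return [clique for clique in nx.find_cliques(G) if len(clique) == size]
len(maximal_cliques_of_size_sketch(2, G))
###Output
_____no_output_____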
###Markdown
Connected ComponentsFrom [Wikipedia](https://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29):> In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph.NetworkX also implements a [function](https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.algorithms.components.connected.connected_component_subgraphs.html) that identifies connected component subgraphs.Remember how based on the Circos plot above, we had this hypothesis that the physician trust network may be divided into subgraphs. Let's check that, and see if we can redraw the Circos visualization.
###Code
ccsubgraphs = list(nx.connected_component_subgraphs(G))
len(ccsubgraphs)
###Output
_____no_output_____
###Markdown
ExercisePlay a bit with the Circos API. Can you colour the nodes by their subgraph identifier?
###Code
# Start by labelling each node in the master graph G by some number
# that represents the subgraph that contains the node.
for i, g in enumerate(_____________):
# Fill in code below.
# Then, pass in a list of nodecolors that correspond to the node order.
node_cmap = {0: 'red', 1:'blue', 2: 'green', 3:'yellow'}
nodecolor = [__________________________________________]
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/physicians.png', dpi=300)
###Output
_____no_output_____
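###Markdown
One way the colouring exercise above could be filled in (a sketch, not the official answer). It reuses `ccsubgraphs` and the `node_cmap` palette from the exercise, stores the component index as a node attribute using the networkx 1.x `G.node` dictionary (consistent with the `connected_component_subgraphs` call above), and assumes there are no more components than colours in the palette.
###Code
# A possible sketch for the colouring exercise above (not the official answer).
# Label every node with the index of the connected-component subgraph it belongs to.
for i, g in enumerate(ccsubgraphs):
    for n in g.nodes():
        G.node[n]['subgraph'] = i
# Map each subgraph index to a colour; assumes len(ccsubgraphs) <= len(node_cmap).
node_cmap = {0: 'red', 1: 'blue', 2: 'green', 3: 'yellow'}
nodecolor = [node_cmap[G.node[n]['subgraph']] for n in sorted(G.nodes())]
nodes = sorted(G.nodes())
edges = G.edges()
fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig,
               edgeprops=dict(alpha=0.1), nodecolor=nodecolor)
c.draw()
###Output
_____no_output_____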
###Markdown
Cliques, Triangles and SquaresLet's pose a problem: If A knows B and B knows C, would it be probable that A knows C as well? In a graph involving just these three individuals, it may look as such:
###Code
G = nx.Graph()
G.add_nodes_from(['a', 'b', 'c'])
G.add_edges_from([('a','b'), ('b', 'c')])
nx.draw(G, with_labels=True)
###Output
_____no_output_____
###Markdown
Let's think of another problem: If A knows B, B knows C, C knows D and D knows A, is it likely that A knows C and B knows D? What would this look like?
###Code
G.add_node('d')
G.add_edge('c', 'd')
G.add_edge('d', 'a')
nx.draw(G, with_labels=True)
###Output
_____no_output_____
###Markdown
The set of relationships involving A, B and C, if closed, involves a triangle in the graph. The set of relationships that also includes D forms a square.You may have observed that social networks (LinkedIn, Facebook, Twitter etc.) have friend recommendation systems. How exactly do they work? Apart from analyzing other variables, closing triangles is one of the core ideas behind the system. If A knows B and B knows C, then A probably knows C as well. If all of the triangles in the two small-scale networks were closed, then the graph would have represented **cliques**, in which everybody within that subgraph knows one another.In this section, we will attempt to answer the following questions:1. Can we identify cliques?2. Can we identify *potential* cliques that aren't currently present in the network?3. Can we model the probability that two unconnected individuals know one another? Load DataAs usual, let's start by loading some network data. This time round, we have a [physician trust](http://konect.uni-koblenz.de/networks/moreno_innovation) network, but slightly modified such that it is undirected rather than directed.> This directed network captures innovation spread among 246 physicians in four towns in Illinois: Peoria, Bloomington, Quincy and Galesburg. The data was collected in 1966. A node represents a physician and an edge between two physicians shows that the left physician told that the right physician is his friend or that he turns to the right physician if he needs advice or is interested in a discussion. There always only exists one edge between two nodes even if more than one of the listed conditions are true.
###Code
# Load the network.
G = cf.load_physicians_network()
# Make a Circos plot of the graph
import numpy as np
from circos import CircosPlot
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
###Output
_____no_output_____
###Markdown
QuestionWhat can you infer about the structure of the graph from the Circos plot?My answer: The structure is interesting. The graph looks like the physician trust network is comprised of discrete subnetworks. CliquesIn a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not.The core idea is that if a node is present in a triangle, then its neighbors' neighbors' neighbors should include itself.
###Code
# Example code that shouldn't be too hard to follow.
def in_triangle(G, node):
neighbors1 = G.neighbors(node)
neighbors2 = []
for n in neighbors1:
neighbors = G.neighbors(n)
if node in neighbors2:
neighbors2.remove(node)
neighbors2.extend(G.neighbors(n))
neighbors3 = []
for n in neighbors2:
neighbors = G.neighbors(n)
neighbors3.extend(G.neighbors(n))
return node in neighbors3
in_triangle(G, 3)
###Output
_____no_output_____
###Markdown
In reality, NetworkX already has a function that *counts* the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
###Code
nx.triangles(G, 3)
###Output
_____no_output_____
###Markdown
ExerciseCan you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with?Hint: If the neighbor of my neighbor is also my neighbor, then the three of us are in a triangle relationship.Hint: Python Sets may be of great use for this problem. https://docs.python.org/2/library/stdtypes.html#set Verify your answer by drawing out the subgraph composed of those nodes.
###Code
# Possible answer
def get_triangles(G, node):
neighbors = set(G.neighbors(node))
triangle_nodes = set()
"""
Fill in the rest of the code below.
"""
return triangle_nodes
# Verify your answer with the following function call. Should return:
# {1, 2, 3, 6, 23}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
# Compare for yourself that those are the only triangles that node 3 is involved in.
neighbors3 = G.neighbors(3)
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
###Output
_____no_output_____
###Markdown
Friend Recommendation: Open TrianglesNow that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph. ExerciseCan you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one? Hint: You may still want to stick with set operations. Suppose we have the A-B-C triangle. If there are neighbors of C that are also neighbors of B, then those neighbors are in a triangle with B and C; consequently, if there are nodes for which C's neighbors do not overlap with B's neighbors, then those nodes are in an open triangle. The final implementation should include some conditions, and probably won't be as simple as described above.
###Code
# Possible Answer, credit Justin Zabilansky (MIT) for help on this.
def get_open_triangles(G, node):
"""
There are many ways to represent this. One may choose to represent only the nodes involved
in an open triangle; this is not the approach taken here.
    Rather, we have code that explicitly enumerates every open triangle present.
"""
open_triangle_nodes = []
neighbors = set(G.neighbors(node))
for n in neighbors:
return open_triangle_nodes
# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
# fig = plt.figure(i)
# nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
###Output
_____no_output_____
###Markdown
If you remember the previous section on hubs and paths, you will note that node 19 was involved in a lot of open triangles.Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here. CliquesWe have figured out how to find triangles. Now, let's find out what **cliques** are present in the network. Recall: what is the definition of a clique?- NetworkX has a [clique-finding](https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.clique.find_cliques.html) algorithm implemented.- This algorithm finds all maximally-sized cliques for a given node.- Note that maximal cliques of size `n` include all cliques of `size < n`
###Code
list(nx.find_cliques(G))
###Output
_____no_output_____
###Markdown
ExerciseThis should allow us to find all n-sized maximal cliques. Try writing a function `maximal_cliques_of_size(size, G)` that implements this.
###Code
def maximal_cliques_of_size(size, G):
    return ______________________
maximal_cliques_of_size(2, G)
###Output
_____no_output_____
###Markdown
Connected ComponentsFrom [Wikipedia](https://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29):> In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph.NetworkX also implements a [function](https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.algorithms.components.connected.connected_component_subgraphs.html) that identifies connected component subgraphs.Remember how based on the Circos plot above, we had this hypothesis that the physician trust network may be divided into subgraphs. Let's check that, and see if we can redraw the Circos visualization.
###Code
ccsubgraphs = list(nx.connected_component_subgraphs(G))
len(ccsubgraphs)
###Output
_____no_output_____
###Markdown
ExercisePlay a bit with the Circos API. Can you colour the nodes by their subgraph identifier?
###Code
# Start by labelling each node in the master graph G by some number
# that represents the subgraph that contains the node.
for i, g in enumerate(_____________):
# Then, pass in a list of nodecolors that correspond to the node order.
node_cmap = {0: 'red', 1:'blue', 2: 'green', 3:'yellow'}
nodecolor = [__________________________________________]
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/physicians.png', dpi=300)
###Output
_____no_output_____
###Markdown
Load DataAs usual, let's start by loading some network data. This time round, we have a [physician trust](http://konect.uni-koblenz.de/networks/moreno_innovation) network, but slightly modified such that it is undirected rather than directed.> This directed network captures innovation spread among 246 physicians in four towns in Illinois: Peoria, Bloomington, Quincy and Galesburg. The data was collected in 1966. A node represents a physician and an edge between two physicians shows that the left physician told that the right physician is his friend or that he turns to the right physician if he needs advice or is interested in a discussion. There always only exists one edge between two nodes even if more than one of the listed conditions are true.
###Code
# Load the network.
G = cf.load_physicians_network()
# Make a Circos plot of the graph
import numpy as np
from circos import CircosPlot
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
nodecolor = plt.cm.viridis(np.arange(len(nodes)) / len(nodes))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
###Output
_____no_output_____
###Markdown
QuestionWhat can you infer about the structure of the graph from the Circos plot? Structures in a GraphWe can leverage what we have learned in the previous notebook to identify special structures in a graph. In a network, cliques are one of these special structures. CliquesIn a social network, cliques are groups of people in which everybody knows everybody. Triangles are a simple example of cliques. Let's try implementing a simple algorithm that finds out whether a node is present in a triangle or not.The core idea is that if a node is present in a triangle, then its neighbors' neighbors' neighbors should include itself.
###Code
# Example code.
import itertools  # needed below for itertools.combinations (it may already be imported earlier in the full notebook)
def in_triangle(G, node):
"""
Returns whether a given node is present in a triangle relationship or not.
"""
# We first assume that the node is not present in a triangle.
is_in_triangle = False
# Then, iterate over every pair of the node's neighbors.
for nbr1, nbr2 in itertools.combinations(G.neighbors(node), 2):
# Check to see if there is an edge between the node's neighbors.
# If there is an edge, then the given node is present in a triangle.
if G.has_edge(nbr1, nbr2):
is_in_triangle = True
# We break because any triangle that is present automatically
# satisfies the problem requirements.
break
return is_in_triangle
in_triangle(G, 3)
###Output
_____no_output_____
###Markdown
In reality, NetworkX already has a function that *counts* the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
###Code
nx.triangles(G, 3)
###Output
_____no_output_____
###Markdown
ExerciseCan you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with? Do not return the triplets, but the `set`/`list` of nodes.**Possible Implementation:** If my neighbor's neighbor's neighbor includes myself, then we are in a triangle relationship.**Possible Implementation:** If I check every pair of my neighbors, any pair that are also connected in the graph are in a triangle relationship with me.Hint: Python's [`itertools`](https://docs.python.org/3/library/itertools.html) module has a `combinations` function that may be useful.Hint: NetworkX graphs have a `.has_edge(node1, node2)` function that checks whether an edge exists between two nodes.Verify your answer by drawing out the subgraph composed of those nodes.
###Code
# Possible answer
def get_triangles(G, node):
neighbors = set(G.neighbors(node))
triangle_nodes = set()
"""
Fill in the rest of the code below.
"""
triangle_nodes.add(node)
is_in_triangle = False
# Then, iterate over every pair of the node's neighbors.
for nbr1, nbr2 in itertools.combinations(neighbors, 2):
# Check to see if there is an edge between the node's neighbors.
# If there is an edge, then the given node is present in a triangle.
if G.has_edge(nbr1, nbr2):
# We break because any triangle that is present automatically
# satisfies the problem requirements.
triangle_nodes.add(nbr1)
triangle_nodes.add(nbr2)
return triangle_nodes
# Verify your answer with the following function call. Should return something of the form:
# {3, 9, 11, 41, 42, 67}
get_triangles(G, 3)
# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)
# Compare for yourself that those are the only triangles that node 3 is involved in.
neighbors3 = G.neighbors(3)
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
###Output
_____no_output_____
###Markdown
Friend Recommendation: Open TrianglesNow that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph. What are the two general scenarios for finding open triangles that a given node is involved in?1. The given node is the centre node.1. The given node is one of the termini nodes. ExerciseCan you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one?Note: For this exercise, only consider the case when the node of interest is the centre node.**Possible Implementation:** Check every pair of my neighbors, and if they are not connected to one another, then we are in an open triangle relationship.
###Code
# Fill in your code here.
def get_open_triangles(G, node):
"""
There are many ways to represent this. One may choose to represent only the nodes involved
in an open triangle; this is not the approach taken here.
    Rather, we have code that explicitly enumerates every open triangle present.
"""
open_triangle_nodes = []
neighbors = set(G.neighbors(node))
    # Iterate over every pair of the node's neighbors.
    for nbr1, nbr2 in itertools.combinations(neighbors, 2):
        # If the two neighbors are NOT connected to each other, then
        # (nbr1, node, nbr2) form an open triangle centred on the node.
        if not G.has_edge(nbr1, nbr2):
            open_triangle_nodes.append([nbr1, node, nbr2])
return open_triangle_nodes
# Draw out each of the open-triangle triplets for node 2.
nodes = get_open_triangles(G, 2)
for i, triplet in enumerate(nodes):
fig = plt.figure(i)
nx.draw(G.subgraph(triplet), with_labels=True)
print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
###Output
[[1, 3, 67], [1, 3, 101], [1, 3, 9], [1, 3, 41], [1, 3, 42], [1, 3, 11], [1, 3, 112], [1, 3, 91], [67, 3, 101], [67, 3, 9], [67, 3, 41], [67, 3, 11], [67, 3, 112], [67, 3, 91], [101, 3, 9], [101, 3, 41], [101, 3, 42], [101, 3, 11], [101, 3, 112], [101, 3, 91], [9, 3, 42], [9, 3, 112], [9, 3, 91], [41, 3, 42], [41, 3, 11], [41, 3, 112], [41, 3, 91], [42, 3, 11], [42, 3, 112], [42, 3, 91], [11, 3, 112], [11, 3, 91], [112, 3, 91]]
###Markdown
Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here. CliquesWe have figured out how to find triangles. Now, let's find out what **cliques** are present in the network. Recall: what is the definition of a clique?- NetworkX has a [clique-finding](https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.clique.find_cliques.html) algorithm implemented.- This algorithm finds all maximally-sized cliques for a given node.- Note that maximal cliques of size `n` include all cliques of `size < n`
###Code
list(nx.find_cliques(G))
###Output
_____no_output_____
###Markdown
ExerciseThis should allow us to find all n-sized maximal cliques. Try writing a function `maximal_cliques_of_size(size, G)` that implements this.
###Code
def maximal_cliques_of_size(size, G):
    return ______________________
maximal_cliques_of_size(2, G)
###Output
_____no_output_____
###Markdown
Connected ComponentsFrom [Wikipedia](https://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29):> In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph.NetworkX also implements a [function](https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.algorithms.components.connected.connected_component_subgraphs.html) that identifies connected component subgraphs.Remember how based on the Circos plot above, we had this hypothesis that the physician trust network may be divided into subgraphs. Let's check that, and see if we can redraw the Circos visualization.
###Code
ccsubgraphs = list(nx.connected_component_subgraphs(G))
len(ccsubgraphs)
###Output
_____no_output_____
###Markdown
ExercisePlay a bit with the Circos API. Can you colour the nodes by their subgraph identifier?
###Code
# Start by labelling each node in the master graph G by some number
# that represents the subgraph that contains the node.
for i, g in enumerate(_____________):
# Fill in code below.
# Then, pass in a list of nodecolors that correspond to the node order.
# Feel free to change the colours around!
node_cmap = {0: 'red', 1:'blue', 2: 'green', 3:'yellow'}
nodecolor = [__________________________________________]
nodes = sorted(G.nodes())
edges = G.edges()
edgeprops = dict(alpha=0.1)
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
c = CircosPlot(nodes, edges, radius=10, ax=ax, fig=fig, edgeprops=edgeprops, nodecolor=nodecolor)
c.draw()
plt.savefig('images/physicians.png', dpi=300)
###Output
_____no_output_____ |
EjerciciosSimplexDual/EjerciciosSimplex.ipynb | ###Markdown
Exercises
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Simplex Problem 1a
###Code
df_a1 = pd.DataFrame({'z': [1, 0, 0],
'x1': [-6, 2, 1],
'x2': [-8, 1, 3],
'x3': [-5, 1, 1],
'x4': [-9, 3, 2],
's1': [0, 1, 0],
's2': [0, 0, 1],
'RHS':[0, 5, 3]},
columns=['z','x1','x2','x3','x4','s1','s2','RHS'],
index=['z', 's1', 's2'])
df_a1
# Working copy
wip1 = df_a1.copy()
# Iteration 1
wip1.iloc[2] /= 2
wip1.iloc[0] += wip1.iloc[2] * 9
wip1.iloc[1] += wip1.iloc[2] * -3
# Swap the pivot variables
a = list(wip1.columns)
a[4] = 's2'
wip1.columns = a
a = list(wip1.index)
a[2] = 'x4'
wip1.index = a
# Show
wip1
# Iteration 2
wip1.iloc[1] *= 2
wip1.iloc[0] += wip1.iloc[1] * 1.5
wip1.iloc[2] += wip1.iloc[1] * -0.5
# Swap the pivot variables
a = list(wip1.columns)
a[1] = 's1'
wip1.columns = a
a = list(wip1.index)
a[1] = 'x1'
wip1.index = a
# Show
wip1
# Iteration 3
wip1.iloc[1] *= ((1/7) * -1)
wip1.iloc[0] += wip1.iloc[1] * 5
wip1.iloc[2] += wip1.iloc[1] * -5
# Swap the pivot variables
a = list(wip1.columns)
a[2] = 'x1'
wip1.columns = a
a = list(wip1.index)
a[1] = 'x2'
wip1.index = a
# Show
wip1
# Iteration 4
wip1.iloc[1] *= 7
wip1.iloc[0] += wip1.iloc[1] * (9/7)
wip1.iloc[0] = np.around(wip1.iloc[0])
wip1.iloc[2] += wip1.iloc[1] * -(2/7)
wip1.iloc[2] = np.around(wip1.iloc[2])
# Swap the pivot variables
a = list(wip1.columns)
a[3] = 'x2'
wip1.columns = a
a = list(wip1.index)
a[1] = 'x3'
wip1.index = a
# Show
wip1
# Iteration 5
wip1.iloc[1] *= -1
wip1.iloc[0] += wip1.iloc[1] * 2
wip1.iloc[2] += wip1.iloc[1] * -1
# Swap the pivot variables
a = list(wip1.columns)
a[1] = 'x3'
wip1.columns = a
a = list(wip1.index)
a[1] = 's1'
wip1.index = a
# Show
wip1
# Iteration 6
wip1.iloc[1] *= 1
wip1.iloc[0] += wip1.iloc[1] * 2
wip1.iloc[2] += wip1.iloc[1] * -1
# Swap the pivot variables
a = list(wip1.columns)
a[1] = 'x3'
wip1.columns = a
a = list(wip1.index)
a[1] = 's1'
wip1.index = a
# Show
wip1
###Output
_____no_output_____
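###Markdown
As an optional cross-check (not part of the original exercise), the LP read off the initial tableau above can be handed to scipy's `linprog`, assuming scipy is available: maximize 6x1 + 8x2 + 5x3 + 9x4 subject to 2x1 + x2 + x3 + 3x4 <= 5 and x1 + 3x2 + x3 + 2x4 <= 3, with non-negative variables.
###Code
# Optional cross-check of Problem 1a with scipy (not part of the original manual solution).
from scipy.optimize import linprog
res = linprog(c=[-6, -8, -5, -9],                 # negated objective: maximize -> minimize
              A_ub=[[2, 1, 1, 3], [1, 3, 1, 2]],  # constraint rows from the initial tableau
              b_ub=[5, 3],
              bounds=[(0, None)] * 4)
print(res.x, -res.fun)
###Output
_____no_output_____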
###Markdown
Infinite loop ^ Problem 1b
###Code
df_b1 = pd.DataFrame({'z': [1, 0, 0, 0],
'x1': [-2, 0, 1, 1],
'x2': [-3, 2, 1, 2],
'x3': [-4, 3, 2, 3],
's1': [0, 1, 0, 0],
's2': [0, 0, 1, 0],
's3': [0, 0, 0, 1],
'RHS':[0, 5, 4, 7]},
columns=['z','x1','x2','x3','s1','s2', 's3', 'RHS'],
index=['z', 's1', 's2', 's3'])
df_b1
wip2 = df_b1.copy()
# Iteration 1
wip2.iloc[1] *= (1/3)
wip2.iloc[0] += wip2.iloc[1] * 4
wip2.iloc[2] += wip2.iloc[1] * -2
wip2.iloc[3] += wip2.iloc[1] * -3
# Swap the pivot variables
a = list(wip2.columns)
a[3] = 's1'
wip2.columns = a
a = list(wip2.index)
a[1] = 'x3'
wip2.index = a
# Show
wip2
# Iteration 2
wip2.iloc[2] *= 1
wip2.iloc[0] += wip2.iloc[2] * 2
wip2.iloc[3] += wip2.iloc[2] * -1
# Swap the pivot variables
a = list(wip2.columns)
a[1] = 's2'
wip2.columns = a
a = list(wip2.index)
a[2] = 'x1'
wip2.index = a
# Show
wip2
# Iteration 3
wip2.iloc[1] *= (3/2)
wip2.iloc[0] += wip2.iloc[1] * 1
wip2.iloc[0] = np.around(wip2.iloc[0], decimals=1)
wip2.iloc[2] += wip2.iloc[1] * (1/3)
wip2.iloc[2] = np.around(wip2.iloc[2], decimals=1)
wip2.iloc[3] += wip2.iloc[1] * -(1/3)
wip2.iloc[3] = np.around(wip2.iloc[3], decimals=1)
# Swap the pivot variables
a = list(wip2.columns)
a[2] = 'x3'
wip2.columns = a
a = list(wip2.index)
a[1] = 'x2'
wip2.index = a
# Show
wip2
###Output
_____no_output_____ |
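###Markdown
The row operations above are repeated by hand for every pivot; a small helper like the following sketch (not part of the original exercises) could perform one pivot step on any tableau stored as a DataFrame. It only updates the numbers; the relabelling of basic variables done manually above is left out.
###Code
# Generic pivot helper (a sketch, not part of the original exercises).
def pivot(df, row, col):
    out = df.copy().astype(float)
    # Scale the pivot row so the pivot element becomes 1.
    out.loc[row] = out.loc[row] / out.loc[row, col]
    # Eliminate the pivot column from every other row.
    for r in out.index:
        if r != row:
            out.loc[r] = out.loc[r] - out.loc[r, col] * out.loc[row]
    return out
# Example: the first pivot of Problem 1b (entering x3, leaving s1).
pivot(df_b1, 's1', 'x3')
###Output
_____no_output_____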
exploring-nyc-film-permits/exploring-nyc-film-permits.ipynb | ###Markdown
Project 2 - Data Characterization About the data: The obtained data shows film permits granted for New York City. Permits are generally required when asserting the exclusive use of city property, like a sidewalk, a street, or a park. I found this data through the suggestions for project 2 ideas on Blackboard. My story:Growing up I watched a lot of American movies and TV shows. Many of these have shown New York City. After I came to the USA I visited many of the places in New York City (NYC) myself and visualized the movies and shows I had watched as a kid. I did not get to see an actual film shoot though. So, when I saw this data, the data scientist in me thought I should figure out when movies actually shoot in NYC. The following questions came to my mind:1. Can this data tell me the popular time of day for film shoots? * The answer to the first question is that the most popular time of day for shooting is between 5 AM and midday. * Theater "shoots" are an outlier when events per hour of day are analyzed, and a lot of them seem to happen in hour "zero", or midnight. However, this is not an issue from the perspective of analysis, as this could be reasonable and not an anomaly: a lot of theater shows start in the evening and can run up to midnight. 2. Can this data tell me the popular day of the week when shooting activities occur? * Weekday-wise permit counts and the normalized value of the permit count show that weekends are outliers when shoots per day are considered. * We were able to conclude from the number of shoots per day that weekdays are fairly well balanced in matters of shooting activities.3. Can it tell me popular months of the year for film shoots? * So, the answer to our third question is that TV shoots happen in phases, mostly in the Fall months but some in the Spring months as well. Movie shoots start around Spring, peak around Summer and pick up again a bit in the Fall.4. Winter in New York City is very beautiful due to all the snow, but are shoots really happening in the harsh winter conditions of NYC? * The graph for the normalized value of the total number of permits per month answers our fourth question: winter is really a bad time to shoot in New York City, as the number of events goes down, but there still are a non-zero number of shooting activities happening. This is especially true for TV shows.5. I know some Bollywood movies have shot in Staten Island because of a large Indian community in that area, but is it a popular location in general? * The graph of the normalized value of the total number of permits per borough and type of activity shows that Staten Island is NOT in fact a popular shooting location.6. I like a lot of web series and watch YouTube stars like Casey Neistat who film in New York City. Given the popularity of YouTube in recent times, are web shoots rising in the city? * After filtering out some top "shooting" categories we were able to see a clear rising trend of WEB shoot activity in New York City!7. Which locations in New York City are popular for movie shoots? * WEST 48th STREET, New York City, New York is near Times Square. Intuitively this seems to be a reasonable location to be considered popular. 
Data properties and access information:* Download [link](https://data.cityofnewyork.us/api/views/tg4x-b46p/rows.csv?accessType=DOWNLOAD) for data source.* Data available through [NYC Open Data site](https://data.cityofnewyork.us/City-Government/Film-Permits/tg4x-b46p).* Downloaded file named: "Film_Permits.csv".* There is no cost to accessing this data.* Accessing this data does not require creation of an account.* Accessing this data does not violate any laws.* This data does not appear to have been previously analyzed based on a Google search.* A preliminary survey of the data indicates there are 40,682 rows, 14 columns, and the file size is 15.4 MB.
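###Markdown
The file can also be loaded straight from the download link above rather than from a local copy (a quick sketch; it needs internet access from the notebook and should otherwise be equivalent to reading the downloaded Film_Permits.csv).
###Code
# Optional: read the data directly from the NYC Open Data download link above
# (requires internet access; otherwise use the local Film_Permits.csv as below).
import pandas as pd
permits_from_url = pd.read_csv('https://data.cityofnewyork.us/api/views/tg4x-b46p/rows.csv?accessType=DOWNLOAD')
permits_from_url.shape
###Output
_____no_output_____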
###Code
!pip install geopy
!pip install humanize
!pip install folium
import numpy as np
import pandas as pd
import time
import datetime
from datetime import datetime
import calendar
import chardet
import missingno as msno
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
import os
import random
import re
from geopy.geocoders import Nominatim
import json
import humanize
import folium
import warnings
warnings.filterwarnings("ignore")
start_time = time.time()
print('Pandas',pd.__version__)
print('Matplotlib',matplotlib.__version__)
print('Seaborn',sns.__version__)
print('File Size In MB : ',(os.path.getsize('Film_Permits.csv')/1048576),' MB')
NYC = 'New York City'
###Output
Collecting geopy
Using cached https://files.pythonhosted.org/packages/75/3e/80bc987e1635ba9e7455b95e233b296c17f3d3bf3d4760fa67cdfc840e84/geopy-1.19.0-py2.py3-none-any.whl
Collecting geographiclib<2,>=1.49 (from geopy)
Installing collected packages: geographiclib, geopy
Successfully installed geographiclib-1.49 geopy-1.19.0
Collecting humanize
Installing collected packages: humanize
Successfully installed humanize-0.5.1
Collecting folium
Using cached https://files.pythonhosted.org/packages/43/77/0287320dc4fd86ae8847bab6c34b5ec370e836a79c7b0c16680a3d9fd770/folium-0.8.3-py2.py3-none-any.whl
Requirement already satisfied: six in /opt/conda/lib/python3.6/site-packages (from folium) (1.11.0)
Requirement already satisfied: requests in /opt/conda/lib/python3.6/site-packages (from folium) (2.20.1)
Requirement already satisfied: numpy in /opt/conda/lib/python3.6/site-packages (from folium) (1.13.3)
Requirement already satisfied: jinja2 in /opt/conda/lib/python3.6/site-packages (from folium) (2.10)
Collecting branca>=0.3.0 (from folium)
Using cached https://files.pythonhosted.org/packages/63/36/1c93318e9653f4e414a2e0c3b98fc898b4970e939afeedeee6075dd3b703/branca-0.3.1-py3-none-any.whl
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests->folium) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests->folium) (2018.11.29)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests->folium) (1.23)
Requirement already satisfied: idna<2.8,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests->folium) (2.7)
Requirement already satisfied: MarkupSafe>=0.23 in /opt/conda/lib/python3.6/site-packages (from jinja2->folium) (1.1.0)
Installing collected packages: branca, folium
Successfully installed branca-0.3.1 folium-0.8.3
Pandas 0.23.4
Matplotlib 2.2.2
Seaborn 0.9.0
File Size In MB : 15.404823303222656 MB
###Markdown
Exploring data**Encoding check for the input CSV file to ensure data is in the right format**
###Code
with open('Film_Permits.csv','rb') as fraw:
file_content = fraw.read()
chardet.detect(file_content)
###Output
_____no_output_____
###Markdown
**Character encoding of the CSV file is ASCII and the confidence level is 1 (100%).**Exploring file contents from the CSV:
###Code
!head -n 3 Film_Permits.csv
###Output
EventID,EventType,StartDateTime,EndDateTime,EnteredOn,EventAgency,ParkingHeld,Borough,CommunityBoard(s),PolicePrecinct(s),Category,SubCategoryName,Country,ZipCode(s)
455604,Shooting Permit,12/11/2018 08:00:00 AM,12/11/2018 11:59:00 PM,12/07/2018 11:00:12 PM,"Mayor's Office of Film, Theatre & Broadcasting","STANHOPE STREET between WILSON AVENUE and MYRTLE AVENUE, WILSON AVENUE between MELROSE STREET and GEORGE STREET, MELROSE STREET between WILSON AVENUE and KNICKERBOCKER AVENUE, WILSON AVENUE between STOCKHOLM STREET and STANHOPE STREET",Brooklyn,4,83,Film,Feature,United States of America,"11221, 11237"
455593,Shooting Permit,12/11/2018 07:00:00 AM,12/11/2018 09:00:00 PM,12/07/2018 05:57:34 PM,"Mayor's Office of Film, Theatre & Broadcasting","STARR AVENUE between BORDEN AVENUE and VAN DAM STREET, REVIEW AVENUE between BORDEN AVENUE and VAN DAM STREET",Queens,2,108,Television,Episodic series,United States of America,11101
###Markdown
**Next, I will extract data from the CSV file and insert into a dataframe for processing**
###Code
pd.options.display.max_rows = 40
start_time_before_load = time.time()
film_permits_df = pd.read_csv("Film_Permits.csv")
print('Time taken to load the data : ',time.time() - start_time_before_load,'seconds')
film_permits_df.shape
###Output
Time taken to load the data : 0.3617970943450928 seconds
###Markdown
The CSV/dataframe contains 40,682 rows and 14 columns. Let us explore the data a bit using head(), tail(), info(), and describe().
###Code
film_permits_df.head()
film_permits_df.tail()
film_permits_df.info()
film_permits_df.describe()
film_permits_df.describe(include='all')
film_permits_df.describe(include='object')
###Output
_____no_output_____
###Markdown
**Next, I will explore the column metadata...*** What are the data types for the columns in our data?* How many unique entries are there in each column where type is object?* Below I will explore the first five rows of each column where type is object. * Why am I exploring unique entries for objects? * Because there could possibly be categorical data or datetime data in an object column. * After finishing the data exploration I will transform these object type columns with categorical data into 'category' type and object type columns with datetime data into 'datetime' type.
###Code
first_n_entries=5
print('Total rows in the dataframe:', film_permits_df.shape[0])
for col, col_type in film_permits_df.dtypes.iteritems():
if(col_type=='object'):
print(col, 'has', film_permits_df[col].nunique(), 'unique entries')
print('First', first_n_entries, 'entries are')
print(film_permits_df[col][0:first_n_entries])
print('')
###Output
Total rows in the dataframe: 40682
EventType has 4 unique entries
First 5 entries are
0 Shooting Permit
1 Shooting Permit
2 Shooting Permit
3 Shooting Permit
4 Shooting Permit
Name: EventType, dtype: object
StartDateTime has 16151 unique entries
First 5 entries are
0 12/11/2018 08:00:00 AM
1 12/11/2018 07:00:00 AM
2 12/11/2018 09:00:00 AM
3 12/10/2018 07:00:00 AM
4 12/11/2018 06:00:00 AM
Name: StartDateTime, dtype: object
EndDateTime has 19635 unique entries
First 5 entries are
0 12/11/2018 11:59:00 PM
1 12/11/2018 09:00:00 PM
2 12/11/2018 11:00:00 PM
3 12/10/2018 08:00:00 PM
4 12/11/2018 11:00:00 PM
Name: EndDateTime, dtype: object
EnteredOn has 40470 unique entries
First 5 entries are
0 12/07/2018 11:00:12 PM
1 12/07/2018 05:57:34 PM
2 12/07/2018 04:45:33 PM
3 12/07/2018 04:20:34 PM
4 12/07/2018 04:17:03 PM
Name: EnteredOn, dtype: object
EventAgency has 1 unique entries
First 5 entries are
0 Mayor's Office of Film, Theatre & Broadcasting
1 Mayor's Office of Film, Theatre & Broadcasting
2 Mayor's Office of Film, Theatre & Broadcasting
3 Mayor's Office of Film, Theatre & Broadcasting
4 Mayor's Office of Film, Theatre & Broadcasting
Name: EventAgency, dtype: object
ParkingHeld has 24944 unique entries
First 5 entries are
0 STANHOPE STREET between WILSON AVENUE and MYRT...
1 STARR AVENUE between BORDEN AVENUE and VAN DAM...
2 WEST 13 STREET between 7 AVENUE and 6 AVENUE...
3 NORTH HENRY STREET between GREENPOINT AVENUE a...
4 FULTON STREET between GREENWICH STREET and CHU...
Name: ParkingHeld, dtype: object
Borough has 5 unique entries
First 5 entries are
0 Brooklyn
1 Queens
2 Brooklyn
3 Brooklyn
4 Manhattan
Name: Borough, dtype: object
CommunityBoard(s) has 668 unique entries
First 5 entries are
0 4
1 2
2 1, 2
3 1
4 1
Name: CommunityBoard(s), dtype: object
PolicePrecinct(s) has 1923 unique entries
First 5 entries are
0 83
1 108
2 6, 90
3 94
4 1
Name: PolicePrecinct(s), dtype: object
Category has 9 unique entries
First 5 entries are
0 Film
1 Television
2 Television
3 Television
4 Commercial
Name: Category, dtype: object
SubCategoryName has 29 unique entries
First 5 entries are
0 Feature
1 Episodic series
2 Episodic series
3 Episodic series
4 Commercial
Name: SubCategoryName, dtype: object
Country has 9 unique entries
First 5 entries are
0 United States of America
1 United States of America
2 United States of America
3 United States of America
4 United States of America
Name: Country, dtype: object
ZipCode(s) has 3528 unique entries
First 5 entries are
0 11221, 11237
1 11101
2 10011, 11211, 11249
3 11222
4 10048
Name: ZipCode(s), dtype: object
###Markdown
* In the data set, there are thirteen object type columns: EventType, StartDateTime, EndDateTime, EnteredOn, EventAgency, ParkingHeld, Borough, CommunityBoard(s), PolicePrecinct(s), Category, SubCategoryName, Country and ZipCode(s). Data Type Transformation* Now, I will count the frequency of these unique values per column and print the frequency of the top five most frequent elements.* I will check whether a column with object data type has categorical data or not.* I will check whether a column with object data type has datetime data or not.* If and when necessary, I will perform some transformations on the data.
###Code
for this_column in film_permits_df.columns:
print('====', this_column, 'has', film_permits_df[this_column].nunique(), 'unique entries ====')
print(film_permits_df[this_column].value_counts().head(5))
print('')
###Output
==== EventID has 40682 unique entries ====
66602 1
126487 1
125565 1
50657 1
179741 1
Name: EventID, dtype: int64
==== EventType has 4 unique entries ====
Shooting Permit 35774
Theater Load in and Load Outs 3380
Rigging Permit 1028
DCAS Prep/Shoot/Wrap Permit 500
Name: EventType, dtype: int64
==== StartDateTime has 16151 unique entries ====
11/13/2018 06:00:00 AM 24
12/01/2014 06:00:00 AM 22
10/06/2014 06:00:00 AM 20
11/19/2018 06:00:00 AM 20
10/24/2018 06:00:00 AM 20
Name: StartDateTime, dtype: int64
==== EndDateTime has 19635 unique entries ====
08/04/2014 09:00:00 PM 14
09/22/2015 10:00:00 PM 14
08/31/2015 09:00:00 PM 14
11/18/2015 10:00:00 PM 14
10/05/2015 09:00:00 PM 13
Name: EndDateTime, dtype: int64
==== EnteredOn has 40470 unique entries ====
01/30/2018 12:43:07 PM 6
06/12/2012 06:58:12 PM 5
05/28/2018 09:52:30 AM 5
10/03/2018 01:48:16 PM 4
07/03/2018 12:45:41 PM 4
Name: EnteredOn, dtype: int64
==== EventAgency has 1 unique entries ====
Mayor's Office of Film, Theatre & Broadcasting 40682
Name: EventAgency, dtype: int64
==== ParkingHeld has 24944 unique entries ====
WEST 48 STREET between 6 AVENUE and 7 AVENUE 820
AMSTERDAM AVENUE between WEST 73 STREET and WEST 75 STREET, BROADWAY between WEST 74 STREET and WEST 75 STREET, WEST 75 STREET between AMSTERDAM AVENUE and BROADWAY 412
WEST 55 STREET between 11 AVENUE and 12 AVENUE 382
NORTH HENRY STREET between GREENPOINT AVENUE and MESEROLE AVENUE 259
WEST 44 STREET between BROADWAY and 6 AVENUE 224
Name: ParkingHeld, dtype: int64
==== Borough has 5 unique entries ====
Manhattan 20537
Brooklyn 12263
Queens 6277
Bronx 1100
Staten Island 505
Name: Borough, dtype: int64
==== CommunityBoard(s) has 668 unique entries ====
1 8827
2 6824
5 4739
4 2887
7 1615
Name: CommunityBoard(s), dtype: int64
==== PolicePrecinct(s) has 1923 unique entries ====
94 4228
18 3337
108 2809
14 1939
1 1529
Name: PolicePrecinct(s), dtype: int64
==== Category has 9 unique entries ====
Television 21475
Film 7322
Theater 3735
Commercial 3471
Still Photography 2627
Name: Category, dtype: int64
==== SubCategoryName has 29 unique entries ====
Episodic series 11750
Feature 5810
Not Applicable 5808
Cable-episodic 4315
Theater 3735
Name: SubCategoryName, dtype: int64
==== Country has 9 unique entries ====
United States of America 40635
United Kingdom 10
Japan 8
France 7
Panama 7
Name: Country, dtype: int64
==== ZipCode(s) has 3528 unique entries ====
11222 3686
11101 2563
10036 1723
10019 1607
10023 993
Name: ZipCode(s), dtype: int64
###Markdown
* After exploring the data I observed that EventType, EventAgency, Borough, Category, SubCategoryName and Country columns contain categorical data.* I will transform these columns into 'category' data type.* Also StartDateTime, EndDateTime, EnteredOn columns contain datetime data.* I will transform the above three columns into 'datetime' data type.
###Code
"""
Next, I transform the object data type for EventType to 'category' data type
"""
film_permits_df['EventType'] = film_permits_df['EventType'].astype('category')
film_permits_df['EventType'].dtype
"""
Next, I transform the object data type for EventAgency to 'category' data type
"""
film_permits_df['EventAgency'] = film_permits_df['EventAgency'].astype('category')
film_permits_df['EventAgency'].dtype
"""
Next, I transform the object data type for Borough to 'category' data type
"""
film_permits_df['Borough'] = film_permits_df['Borough'].astype('category')
film_permits_df['Borough'].dtype
"""
Next, I transform the object data type for Category to 'category' data type
"""
film_permits_df['Category'] = film_permits_df['Category'].astype('category')
film_permits_df['Category'].dtype
"""
Next, I transform the object data type for SubCategoryName to 'category' data type
"""
film_permits_df['SubCategoryName'] = film_permits_df['SubCategoryName'].astype('category')
film_permits_df['SubCategoryName'].dtype
"""
Next, I transform the object data type for Country to 'category' data type
"""
film_permits_df['Country'] = film_permits_df['Country'].astype('category')
film_permits_df['Country'].dtype
def get_date(d1):
return datetime.strptime(d1,"%m/%d/%Y %I:%M:%S %p").strftime('%m/%d/%Y %H:%M:%S')
"""
Next, I transform the object data type for StartDateTime to 'datetime' data type
"""
film_permits_df['StartDateTime']=film_permits_df['StartDateTime'].astype(str)
film_permits_df['StartDateTime']=film_permits_df['StartDateTime'].apply(get_date)
film_permits_df['StartDateTime']=pd.to_datetime(
film_permits_df['StartDateTime'],
format='%m/%d/%Y %H:%M:%S')
"""
Next, I transform the object data type for EndDateTime to 'datetime' data type
"""
film_permits_df['EndDateTime']=film_permits_df['EndDateTime'].astype(str)
film_permits_df['EndDateTime']=film_permits_df['EndDateTime'].apply(get_date)
film_permits_df['EndDateTime']=pd.to_datetime(
film_permits_df['EndDateTime'],
format='%m/%d/%Y %H:%M:%S')
"""
Next, I transform the object data type for EnteredOn to 'datetime' data type
"""
film_permits_df['EnteredOn']=film_permits_df['EnteredOn'].astype(str)
film_permits_df['EnteredOn']=film_permits_df['EnteredOn'].apply(get_date)
film_permits_df['EnteredOn']=pd.to_datetime(
film_permits_df['EnteredOn'],
format='%m/%d/%Y %H:%M:%S')
###Output
_____no_output_____
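###Markdown
As a side note, the same conversions can be written more compactly. The sketch below (assuming the same film_permits_df) loops over the column names instead of repeating the astype/to_datetime calls, and lets pd.to_datetime parse the 12-hour timestamps directly, so the strptime/strftime round-trip in get_date is not strictly needed.
###Code
# Compact alternative to the cell above (a sketch of the same conversions).
categorical_cols = ['EventType', 'EventAgency', 'Borough', 'Category', 'SubCategoryName', 'Country']
datetime_cols = ['StartDateTime', 'EndDateTime', 'EnteredOn']
for col in categorical_cols:
    film_permits_df[col] = film_permits_df[col].astype('category')
for col in datetime_cols:
    # pd.to_datetime understands the AM/PM timestamp format directly
    film_permits_df[col] = pd.to_datetime(film_permits_df[col], format='%m/%d/%Y %I:%M:%S %p')
film_permits_df.dtypes
###Output
_____no_output_____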
###Markdown
Let us look at the data types of columns after transformation
###Code
film_permits_df.dtypes
###Output
_____no_output_____
###Markdown
Now the dataframe has...* Four object type columns: ParkingHeld, CommunityBoard(s), PolicePrecinct(s) and ZipCode(s)* Three datetime type columns: StartDateTime, EndDateTime and EnteredOn* Six categorical columns: EventType, EventAgency, Borough, Category, SubCategoryName and Country* One numerical column: EventID with data type int64. Data clean up, Missing data detection and Fill up (in the plots below, black = filled; white = empty)
###Code
"""
Searching for missing data in sample set of 300 randomly selected data points
"""
_=msno.matrix(film_permits_df.sample(300))
plt.xlabel('Features in data',fontsize=16)
plt.ylabel('Gaps in data',fontsize=16)
plt.show()
"""
Searching for missing data in sample set of 3000 randomly selected data points
"""
_=msno.matrix(film_permits_df.sample(3000))
plt.xlabel('Features in data',fontsize=16)
plt.ylabel('Gaps in data',fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Data Clean up: The data looks fairly clean to me from the graphs above, but just to make sure, I will perform the following tasks:* Drop all rows and columns where the entire row or column is NaN.* Drop columns with duplicate data or with 50% missing values.* Drop columns where all rows have the same value. * Such columns have no data variety and nothing useful to contribute to my data analysis.
###Code
print('Shape of data frame before Cleanup :',film_permits_df.shape)
print('Drop all rows and columns where entire row or column is NaN.')
film_permits_df.dropna(how='all',axis=0,inplace=True) # rows
film_permits_df.dropna(how='all',axis=1,inplace=True) # columns
print('Drop columns with duplicate data or with 50% missing value.')
half_count = len(film_permits_df)*.5
film_permits_df = film_permits_df.dropna(thresh=half_count, axis=1)
film_permits_df = film_permits_df.drop_duplicates()
print('Drop columns where all rows have the same value.')
for this_column in film_permits_df.columns:
if (film_permits_df[this_column].nunique()==1):
unique_entry=film_permits_df.iloc[0][this_column]
print('Drop column ',this_column,' where all rows have the same value : ', unique_entry)
film_permits_df.drop([this_column],axis=1,inplace=True)
print('Shape of data frame after cleanup :',film_permits_df.shape)
###Output
Shape of data frame before Cleanup : (40682, 14)
Drop all rows and columns where entire row or column is NaN.
Drop columns with duplicate data or with 50% missing value.
Drop columns where all rows have the same value.
Drop column EventAgency where all rows have the same value : Mayor's Office of Film, Theatre & Broadcasting
Shape of data frame after cleanup : (40682, 13)
###Markdown
Through the above process I was able to conclude that in my dataset...* There are no rows or columns where the entire row or column is NaN.* There are no columns with duplicate data or with 50% missing values.* There is one column, EventAgency, where all rows have the same value. - Hence, I will be dropping the column EventAgency as it has no data variety and nothing useful to contribute to my data analysis. Missing data detection and fill up using random sampling in a meaningful way **(that is, get data from the same borough)**
###Code
film_permits_df.head().T
"""
Counting null data per column
"""
film_permits_df.isnull().sum()
"""
Percentage of missing data per column
"""
(film_permits_df.isnull().sum()/len(film_permits_df)).sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
We were able to find that the ZipCode(s), PolicePrecinct(s), CommunityBoard(s) columns have some missing data. **Filling up missing data through sampling of data in the same boroughs**
###Code
print("Data index for missing ZipCode(s)",list(film_permits_df[film_permits_df['ZipCode(s)'].isnull()].index))
print("Data index for missing CommunityBoard(s)",list(film_permits_df[film_permits_df['CommunityBoard(s)'].isnull()].index))
print("Data index for missing PolicePrecinct(s)",list(film_permits_df[film_permits_df['PolicePrecinct(s)'].isnull()].index))
'''
Viewing the missing data
'''
film_permits_df.iloc[[1138, 6038, 17714, 20833, 23054, 26856, 39837]]
'''
Borough based sampling for ZipCode(s), PolicePrecinct(s), CommunityBoard(s) data
'''
zipcode_smapling_dict={}
communityboard_smapling_dict={}
policeprecinc_smapling_dict={}
null_index=list(film_permits_df[film_permits_df['ZipCode(s)'].isnull()].index)
print(null_index)
for indx in null_index:
print('index :',indx)
this_borough=film_permits_df.iloc[indx]['Borough']
print(this_borough)
sample_zipcode=random.choice(list(film_permits_df[(film_permits_df['Borough']==this_borough)
& (film_permits_df['ZipCode(s)'].notnull())]['ZipCode(s)']))
sample_communityboard=random.choice(list(film_permits_df[(film_permits_df['Borough']==this_borough)
& (film_permits_df['CommunityBoard(s)'].notnull())]['CommunityBoard(s)']))
sample_policeprecinct=random.choice(list(film_permits_df[(film_permits_df['Borough']==this_borough)
& (film_permits_df['PolicePrecinct(s)'].notnull())]['PolicePrecinct(s)']))
zipcode_smapling_dict[indx]=sample_zipcode
communityboard_smapling_dict[indx]=sample_communityboard
policeprecinc_smapling_dict[indx]=sample_policeprecinct
print(zipcode_smapling_dict)
print(communityboard_smapling_dict)
print(policeprecinc_smapling_dict)
'''
Filling up the missing values with sampled data
'''
film_permits_df['ZipCode(s)'].fillna(zipcode_smapling_dict,inplace=True)
film_permits_df['CommunityBoard(s)'].fillna(communityboard_smapling_dict,inplace=True)
film_permits_df['PolicePrecinct(s)'].fillna(policeprecinc_smapling_dict,inplace=True)
'''
Checking filled up data
'''
film_permits_df.iloc[[1138, 6038, 17714, 20833, 23054, 26856, 39837]]
film_permits_df.isnull().sum()
###Output
_____no_output_____
###Markdown
**Missing data have been filled up successfully for the ZipCode(s), PolicePrecinct(s), CommunityBoard(s) columns** Start of data analysis - Visualization and Exploratory Data Analysis ***... for Film Permit data in New York City*** Let's ask our data some questions about film permits in New York City.* How many types of "shooting" activities are happening in New York City? * What kind of "shooting" activities are these?
###Code
print("There are",film_permits_df['Category'].nunique(),
"kinds of \"shooting\" activities happening in",NYC)
for shoot_category in film_permits_df['Category'].unique():
print(shoot_category)
###Output
There are 9 kinds of "shooting" activities happening in New York City
Film
Television
Commercial
WEB
Theater
Still Photography
Documentary
Student
Music Video
###Markdown
* How many permits for each category of "shooting" activity have been granted in New York City?
###Code
film_permits_df['Category'].value_counts()
plt.figure(figsize=(15,10))
sns.countplot(x='Category',data=film_permits_df,order=film_permits_df['Category'].value_counts().index)
plt.title("Number of permits granted in each category of \"shooting\" activity in New York",fontsize=20)
plt.xlabel("Category",fontsize=16)
plt.ylabel("Number of permits",fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
* How many kinds of events are being granted permits in New York City? * What are these event categories?
###Code
print("There are",film_permits_df['EventType'].nunique(),
"kinds of events that are being granted permits in",NYC)
for permit_category in film_permits_df['EventType'].unique():
print(permit_category)
###Output
There are 4 kinds of events that are being granted permits in New York City
Shooting Permit
Rigging Permit
Theater Load in and Load Outs
DCAS Prep/Shoot/Wrap Permit
###Markdown
* How many permits have been granted per category of event?
###Code
film_permits_df['EventType'].value_counts()
plt.figure(figsize=(15,10))
sns.countplot(x='EventType',data=film_permits_df,order=film_permits_df['EventType'].value_counts().index)
plt.title("Number of permits granted per event type in New York",fontsize=20)
plt.xlabel("Event type",fontsize=16)
plt.ylabel("Number of permits",fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
* Do all boroughs in New York City see some "shooting" activity? * Which boroughs are shoot permits being granted for?
###Code
if film_permits_df['Borough'].nunique() == 5:
print("Yes, shoot permits are being granted for:")
else:
print("No, shoot permits are being granted for:")
for boroughs in film_permits_df['Borough'].unique():
print(boroughs)
###Output
Yes, shoot permits are being granted for:
Brooklyn
Queens
Manhattan
Bronx
Staten Island
###Markdown
* How many "shooting" activities are happening in each borough?
###Code
film_permits_df['Borough'].value_counts()
###Output
_____no_output_____
###Markdown
I assume that a lot of foreign movies are shot in New York City. It's not just movies from Hollywood/USA. * Is that assumption true? * Which countries are shooting movies in New York?
###Code
if film_permits_df['Country'].nunique() == 1 and film_permits_df['Country'].unique() == 'United States of America':
print("No, it is not true. Only US based shoots are happening in",NYC)
else:
print("Yes, it is true. All the following countries come to shoot in",NYC)
for countries in film_permits_df['Country'].unique():
print(countries)
###Output
Yes, it is true. All the following countries come to shoot in New York City
United States of America
France
Australia
Canada
United Kingdom
Panama
Netherlands
Japan
Germany
###Markdown
How many shoots are happening per country?
###Code
film_permits_df['Country'].value_counts()
###Output
_____no_output_____
###Markdown
**Method defined to compute the normalized value for a series.** The formula for normalization [used](https://www.statisticshowto.datasciencecentral.com/normalized/) is as follows: $\mathbf{X_{new}} = \frac{X - X_{min}}{X_{max} - X_{min}}$
###Code
'''
This method will return the value normalized between 0 and 1, for a number in a series
given the number, maximum value and minimum value in the series
'''
def compute_norm(number, max_val, min_val):
return (number - min_val)/(max_val - min_val)
'''
This method will take a series and return a df with the normalized values for that series.
Created as we will reuse this a number of times.
'''
def get_normalized_value_df(series_to_process, category_col_name, count_col_name):
column_list = []
column_list.append(category_col_name)
column_list.append(count_col_name)
series_to_df = pd.DataFrame(list(series_to_process.items()), columns=column_list)
normalized_value_list = []
for num in np.array(series_to_df[count_col_name]):
normalized_value_list.append(compute_norm(number=float(num),
max_val=float(series_to_process.nlargest(1)),
min_val=float(series_to_process.nsmallest(1))
)
)
series_to_df['norm_'+count_col_name] = normalized_value_list
return series_to_df
###Output
_____no_output_____
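###Markdown
A quick sanity check of the helper defined above, using illustrative values only: the midpoint of a 0-to-10 range should normalize to 0.5.
###Code
# Hypothetical values, just to confirm the normalization formula behaves as expected.
compute_norm(number=5.0, max_val=10.0, min_val=0.0)
###Output
_____no_output_____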
###Markdown
Processing date time to extract year, month, hour, day of event
###Code
'''
Computing the number of shooting permits per year
'''
film_permits_df['Year'] = film_permits_df['StartDateTime'].apply(lambda time: time.year)
film_permits_df['Month'] = (film_permits_df['StartDateTime'].dt.month).apply(lambda x : calendar.month_abbr[x])
film_permits_df['Hour'] = film_permits_df['StartDateTime'].apply(lambda time: time.hour)
film_permits_df['Year'].value_counts()
'''
Computing the number of shooting permits per month
'''
months=['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
film_permits_df['Year'] = film_permits_df['StartDateTime'].apply(lambda time: time.year)
film_permits_df['Hour'] = film_permits_df['StartDateTime'].apply(lambda time: time.hour)
film_permits_df['Month'] = pd.Categorical(
film_permits_df['Month'],
categories=months,
ordered=True)
film_permits_df['Month'].value_counts()
'''
Computing the number of shooting permits per weekday
'''
weekdays=['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday']
film_permits_df["Weekday"] = film_permits_df['StartDateTime'].dt.day_name()  # day_name() replaces the deprecated weekday_name attribute
film_permits_df['Weekday'] = pd.Categorical(
film_permits_df['Weekday'],
categories=['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday', 'Sunday'],
ordered=True)
film_permits_df['Weekday'].value_counts()
###Output
_____no_output_____
###Markdown
**Extracting the top five categories of shooting activity for processing**
###Code
top_category = film_permits_df['Category'].value_counts().head(5).index.values
top_category
top_category_df = film_permits_df[(film_permits_df['Category']=='Television')|(film_permits_df['Category']=='Film')
|(film_permits_df['Category']=='Theater')|(film_permits_df['Category']=='Commercial')
|(film_permits_df['Category']=='Still Photography')]
top_category_pivot_df=top_category_df.pivot_table(values='EventID', index='Month', columns='Year', aggfunc=np.size)
###Output
_____no_output_____
###Markdown
Next, we move on to the important questions we wanted to answer. First on the list, we have:* "Can this data tell me the popular time of day for film shoots?"* "Can this data tell me the popular day of the week when shooting activities occur?"* To answer the first question, let's find out the hour of events for the top five categories of shooting activity
###Code
top_category_hour_pivot = top_category_df.pivot_table(values='EventID',
index='Category',
columns=top_category_df['StartDateTime'].dt.hour,
aggfunc=np.size)
top_category_df.groupby([top_category_df['StartDateTime'].dt.hour,
'Category',])['EventID'].count().unstack().plot(marker='o',figsize=(15,10))
plt.title('Number of permits at hours of the day for top five category',fontsize=20)
plt.ylabel('Number of permits',fontsize=16)
plt.xlabel('Hours of the day',fontsize=16)
plt.xticks(np.arange(24))
plt.show()
'''
Computing the normalized value of total number of shooting permits per hour of day
We are computing normalized values to determine the outlier hours for shooting activities.
'''
hourly_permits_df = get_normalized_value_df(
series_to_process=film_permits_df['StartDateTime'].dt.hour.value_counts(),
category_col_name='hour',count_col_name='permit_count')
hourly_permits_df.plot.bar(x='hour', y='norm_permit_count', figsize=(15,10))
plt.setp(plt.gca().get_xticklabels(), rotation=90, fontsize=12)
plt.setp(plt.gca().get_yticklabels(), fontsize=12)
plt.xlabel('Hour of day',fontsize=16)
plt.ylabel('Normalized number of permits',fontsize=16)
plt.title('Normalized value of total permits per hour of a day',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
From the above two graphs we can see that:* **The answer to the first question is that the most popular time of day for shooting is between 5 AM and mid-day.*** **The outlier for hour zero is due to a lot of theater shows ending at mid-night. See the purple line above.** * To answer the second question, let's find out the weekly trend for permits acquired per weekday in the top five categories of shooting activities
###Code
top_category_df.groupby(['Weekday','Category',])['EventID'].count().unstack().plot(marker='o',figsize=(15,10))
plt.title('Weekly trend for permits acquired in top-five category of shooting activities',fontsize=20)
plt.xticks(np.arange(7),weekdays)
plt.xlabel('Week Day',fontsize=16)
plt.ylabel('Number of permits',fontsize=16)
plt.show()
'''
Computing the normalized value of number of shooting permits per weekday
We are computing normalized values to detect if weekends are outliers for number of shooting activities.
'''
weekday_df = get_normalized_value_df(series_to_process=film_permits_df['Weekday'].value_counts(),
category_col_name='weekday',count_col_name='permit_count')
weekday_df.plot.bar(x='weekday', y='norm_permit_count', figsize=(15,10))
plt.setp(plt.gca().get_xticklabels(), rotation=90, fontsize=12)
plt.setp(plt.gca().get_yticklabels(), fontsize=12)
plt.xlabel('Week Day',fontsize=16)
plt.ylabel('Normalized of permits',fontsize=16)
plt.title('Normalized value of total number of permits per weekday',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
* **From the above two graphs of the weekday-wise number of permits and the normalized value of the number of permits, we can now answer the second question.*** **We can conclude that apart from the weekend, every day is fairly well balanced in matters of shooting activities.** Next, we look at our data to find out: * "Can it tell me the popular months of the year for film shoots?"* "Winter in New York City is very beautiful due to all the snow, but are the shoots really happening in the harsh winter conditions of NYC?"* To answer the third question, let's find out the monthly trend for permits acquired per month in the top five categories of shooting activities
###Code
top_category_df.groupby(['Month','Category',])['EventID'].count().unstack().plot(marker='o',figsize=(15,10))
plt.title('Number of permits per month for top five category of shooting activity',fontsize=20)
plt.xticks(np.arange(12),months)
plt.xlabel('Month',fontsize=16)
plt.ylabel('Number of permits',fontsize=16)
plt.show()
'''
Computing the normalized value of total number of shooting permits per month
We are computing normalized values to detect if Winter months are outliers for number of shooting activities.
'''
month_df = get_normalized_value_df(series_to_process=film_permits_df['Month'].value_counts(),
category_col_name='month',count_col_name='permit_count')
month_df.plot.bar(x='month', y='norm_permit_count', figsize=(15,10))
plt.setp(plt.gca().get_xticklabels(), rotation=90, fontsize=12)
plt.setp(plt.gca().get_yticklabels(), fontsize=12)
plt.xlabel('Month',fontsize=16)
plt.ylabel('Normalized number of permits',fontsize=16)
plt.title('Normalized value of total permits per month',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
From the above two graphs of month-wise number of shooting permits in each category and normalized value of total shooting permits per month we can see that:* Winter is generally a bad time for shooting.* From my knowledge "of watching too many TV shows", I know that they generally follow a fall shooting schedule with a fall finale and then resume shooting in spring with a season finale. This schedule is clearly visible in this graph if you look at the red line.* New York winters are cold. Naturally it would logically and logistically be easy to film movies during summer. We can see that pattern when we look at the orange line.* Fall is still a good enough time to shoot outdoors in New York. More so because of fall colors that brings out the [beauty of nature](https://www.timeout.com/newyork/things-to-do/where-to-see-the-best-fall-foliage-in-nyc) in New York.* **So, the answer to our third question is TV shoots happen in phases. Mostly in Fall but some in Spring. Movie shoots happen starting around Spring, peaking around summer and again a bit in the Fall.*** **The graph for normalized value of total permits per month answers our fourth question that winter is really a bad time to shoot in New York City as the number of events go down but there still are a non-zero number of shooting activities happening. This is especially true for TV shows.** From the permit data I would like to next find out the answer to: "I know some Bollywood movies have shot in Staten Island because of a large Indian community in that area but is it a popular location in general?"
###Code
'''
Computing the normalized value of number of shooting permits per borough and event combo
We are computing normalized values to detect if Staten Island is an outlier for number of shooting activities.
'''
borough_df = get_normalized_value_df(
series_to_process=film_permits_df.groupby(['Borough','EventType'])['EventID'].count(),
category_col_name='borough_and_event',
count_col_name='permit_count')
borough_df.plot.bar(x='borough_and_event', y='norm_permit_count', figsize=(15,10))
plt.setp(plt.gca().get_xticklabels(), rotation=90, fontsize=12)
plt.setp(plt.gca().get_yticklabels(), fontsize=12)
plt.xlabel('Borough and Event combination',fontsize=16)
plt.ylabel('Normalized number of permits',fontsize=16)
plt.title('Normalized value of total permits per borough and event combination',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
* From the graph above we can clearly see that shooting permits are most common in Manhattan and Brooklyn. * Staten Island has the lowest number among the five boroughs.* **Which means that we have answered our fifth question. Staten Island is NOT, in fact, a popular shooting location.** Next, we take a look at some of the less popular events that acquire shooting permits in New York City. We would like to find out the answer to: "I like a lot of web series and watch Youtube stars like Casey Neistat who films in New York City. Given the popularity of Youtube in recent times, are web shoots rising in the city?" * If we look at the year-wise number of permits for each category of shooting activity, it is difficult to make out web shooting activities, as they are sort of an outlier when compared to movies or TV shoots.
###Code
film_permits_df.groupby(['Year','Category'])['EventID'].count().unstack().plot(kind='bar',figsize=(15,10))
plt.title('Year wise number of permits for each category of shooting activity',fontsize=20)
plt.setp(plt.gca().get_xticklabels(), rotation=0, fontsize=12)
plt.xlabel('Year',fontsize=16)
plt.ylabel('Number of permits',fontsize=16)
plt.show()
'''
Computing the normalized value of number of shooting permits per borough and event combo
'''
year_permit_df = get_normalized_value_df(
series_to_process=film_permits_df.groupby(['Category','Year'])['EventID'].count(),
category_col_name='category_year',
count_col_name='permit_count')
year_permit_df.plot.bar(x='category_year', y='norm_permit_count', figsize=(15,10))
plt.setp(plt.gca().get_xticklabels(), rotation=90, fontsize=12)
plt.setp(plt.gca().get_yticklabels(), fontsize=12)
plt.xlabel('Category and Year',fontsize=16)
plt.ylabel('Normalized number of permits',fontsize=16)
plt.title('Normalized value of total permits per category over the years',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
* So we look at the data that is "NOT IN" the popular shooting activity category.
###Code
web_df = film_permits_df[~film_permits_df['Category'].isin(top_category)]
web_df.groupby(['Year','Category'])['EventID'].count().unstack().plot(kind='bar',figsize=(15,10))
plt.title('Year wise number of permits for each low popularity shooting activity category',fontsize=20)
plt.setp(plt.gca().get_xticklabels(), rotation=0, fontsize=12)
plt.xlabel('Year',fontsize=16)
plt.ylabel('Number of permits',fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
* No further normalization is required in this case, as we are just looking for an up or down trend and not detecting outliers or comparing numerical values.* **From the above graph we can see a clear rising trend of WEB shoot activity in New York City!** Lastly, we seek the answer for "Which locations in New York City are popular for movie shoots?"We determine this using the areas where parking was held for a shooting. Assumption being people don't want to walk too far to shoot their movies/shows.**Top ten parking held locations for shooting activities** Remove multiple whitespaces learned from this [SO link](https://stackoverflow.com/questions/2077897/substitute-multiple-whitespace-with-single-whitespace-in-python)Using [GeoPy](https://github.com/geopy/geopy) to extract lat long from street address
###Code
geolocator = Nominatim(user_agent="nyc-film-permits-eda")  # a user_agent is required by Nominatim's usage policy
street_address_list = []
lat_long_list = []
parking_series = film_permits_df['ParkingHeld'].value_counts().head(10)
parking_df = pd.DataFrame(list(parking_series.items()), columns=['ParkingHeld','permit_count'])
for street_info in parking_df['ParkingHeld']:
street_address = street_info.split('between')[0].strip()
found_numbers = re.search(r'\d+', street_address)
if found_numbers is not None:
indices = list(found_numbers.span())
street_number = street_address[indices[0]:indices[1]]
street_parts = street_address.split(street_number)
street_address = street_parts[0] + humanize.ordinal(street_number) + street_parts[1] + ', New York City, New York'
else:
street_address = street_address + ', New York City, New York'
location_dict = geolocator.geocode(street_address).raw
latitude = float(location_dict['lat'])
longitude = float(location_dict['lon'])
street_address_list.append(street_address)
lat_long_list.append([latitude,longitude])
new_df = pd.DataFrame({'ParkingHeld':street_address_list})
parking_df.update(new_df)
parking_df['lat_long'] = lat_long_list
parking_df
parking_df.plot.bar(x='ParkingHeld', y='permit_count', figsize=(15,10))
plt.setp(plt.gca().get_xticklabels(), rotation=90, fontsize=12)
plt.setp(plt.gca().get_yticklabels(), fontsize=12)
plt.xlabel('Top ten shooting locations',fontsize=16)
plt.ylabel('Number of permits',fontsize=16)
plt.title('Number of permits for top ten shooting locations',fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
Using the [Folium library](https://python-visualization.github.io/folium/quickstart.html) let's take a look at where the popular shooting locations are in New York City!
###Code
folium_map = folium.Map(location=parking_df.iloc[0]['lat_long'],
zoom_start=11,
tiles='Stamen Terrain')
for curr_loc in list(parking_df.index):
folium.Marker(location=parking_df.iloc[curr_loc]['lat_long'],
popup=parking_df.iloc[curr_loc]['ParkingHeld']
).add_to(folium_map)
folium_map.add_child(folium.ClickForMarker(popup='Waypoint'))
folium_map
###Output
_____no_output_____
###Markdown
* The top 10 filming locations can be seen in the graph and map above.* **WEST 48th STREET, New York City, New York** is near Times Square. Intuitively this seems to be a reasonable location to be considered popular.
###Code
print('Total Time taken:',time.time() - start_time,'seconds')
###Output
Total Time taken: 23.470895767211914 seconds
|
notebooks/by_coin/polkadot_notebook_from_CSV.ipynb | ###Markdown
Load and inspect data
###Code
dot_df = pd.read_csv(Path('../../resources/prices/coin_Polkadot.csv'), index_col='SNo')
dot_df
dot_df['Date'] = pd.to_datetime(dot_df['Date']).dt.date
dot_df['Date'] = pd.to_datetime(dot_df['Date'])
dot_df['Spread'] = dot_df.High - dot_df.Low
dot_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 191 entries, 1 to 191
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Name 191 non-null object
1 Symbol 191 non-null object
2 Date 191 non-null datetime64[ns]
3 High 191 non-null float64
4 Low 191 non-null float64
5 Open 191 non-null float64
6 Close 191 non-null float64
7 Volume 191 non-null float64
8 Marketcap 191 non-null float64
9 Spread 191 non-null float64
dtypes: datetime64[ns](1), float64(7), object(2)
memory usage: 16.4+ KB
###Markdown
Plot the closing value of Polkadot over time
###Code
import matplotlib.dates as mdates
fig, ax = plt.subplots(figsize=(12,8))
# sns.lineplot(y = dot_df.Close.values, x=dot_df.Date_mpl.values, alpha=0.8, color=color[3])
sns.lineplot(y = dot_df.Close.values, x=dot_df.Date.values, alpha=0.8, color=color[3])
ax.xaxis.set_major_locator(mdates.AutoDateLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y.%m.%d'))
# fig.autofmt_xdate()
plt.xlabel('Date', fontsize=12)
plt.ylabel('Price in USD', fontsize=12)
plt.title("Closing price distribution of DOT", fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Candlestick chart
###Code
import matplotlib.ticker as mticker
# from matplotlib.finance import candlestick_ohlc
import mplfinance as mpf
dot_df['Date_mpl'] = dot_df['Date'].apply(lambda x: mdates.date2num(x))
temp_dot_df = dot_df.copy(deep=False)
temp_dot_df = temp_dot_df.set_index(['Date'])
temp_dot_df = temp_dot_df.drop(['Name', 'Symbol', 'Marketcap','Spread'], axis=1)
temp_dot_df
mpf.plot(temp_dot_df.loc['2020-9-1':], type='candle', mav=(5,10), volume=True)
###Output
_____no_output_____
###Markdown
Price prediction
###Code
from fbprophet import Prophet
INPUT_FILE = "coin_Polkadot.csv"
price_predict_df = pd.read_csv("../../resources/prices/" + INPUT_FILE, parse_dates=['Date'], usecols=["Date", "Close"])
price_predict_df.columns = ["ds", "y"]
price_predict_df = price_predict_df[price_predict_df['ds']>'2020-9-1']
m = Prophet(changepoint_prior_scale=.7)
m.fit(price_predict_df);
future = m.make_future_dataframe(periods=7)
forecast = m.predict(future)
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
m.plot(forecast)
m.plot_components(forecast)
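# Rough sanity check (sketch): compare the in-sample fit against the observed closing
# prices; `forecast` and `price_predict_df` align on the 'ds' column, so an inner merge
# keeps only the dates that were actually observed.
merged = forecast[['ds', 'yhat']].merge(price_predict_df, on='ds')
in_sample_mae = (merged['yhat'] - merged['y']).abs().mean()
print('in-sample MAE:', in_sample_mae)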
###Output
_____no_output_____ |
Week 1/Programming Assignment - Predicting sentiment from product reviews.ipynb | ###Markdown
Predicting sentiment from product reviews
###Code
#Libraries Import
import json
import string
import numpy as np
import pandas as pd
pd.set_option("Chained_Assignment",None)
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import CountVectorizer
#read dataframe
dataframe=pd.read_csv("amazon_baby.csv")
dataframe.head()
dataframe.info()
#contains null values for name, reviews
#replace null values with empty string
dataframe = dataframe.fillna({'review':''})
#remove punctuations
def remove_punctuation(text):
translator = str.maketrans('', '', string.punctuation)
return text.translate(translator)
dataframe["review_without_punctuation"] = dataframe['review'].apply(lambda x : remove_punctuation(x))
dataframe=dataframe[["name","review_without_punctuation","rating"]]
#ignore all reviews with rating = 3, since they tend to have a neutral sentiment
dataframe=dataframe[dataframe["rating"]!=3].reset_index(drop=True)
# reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2
#or lower are negative. For the sentiment column, we use +1 for the positive class label and -1
#for the negative class label
dataframe['sentiment'] = dataframe['rating'].apply(lambda rating : +1 if rating > 3 else -1)
#test-train data
with open('module-2-assignment-test-idx.json') as test_data_file:
test_data_idx = json.load(test_data_file)
with open('module-2-assignment-train-idx.json') as train_data_file:
train_data_idx = json.load(train_data_file)
train_data = dataframe.iloc[train_data_idx]
test_data = dataframe.iloc[test_data_idx]
#Build the word count vector for each review_without_punctuations
vectorizer = CountVectorizer(token_pattern=r'\b\w+\b')
train_matrix = vectorizer.fit_transform(train_data['review_without_punctuation'])
test_matrix = vectorizer.transform(test_data['review_without_punctuation'])
#Logistic model fit
sentiment_model = LogisticRegression(solver='liblinear',n_jobs=1)
sentiment_model.fit(train_matrix, train_data['sentiment'])
###Output
_____no_output_____
###Markdown
QUIZ: Predicting sentiment from product reviews. Question 1: How many weights are greater than or equal to 0? __Ans__:
###Code
np.sum(sentiment_model.coef_ >= 0)
###Output
_____no_output_____
###Markdown
Question 2: Of the three data points in sample_test_data, which one has the lowest probability of being classified as a positive review?
###Code
sample_test_data = test_data.iloc[10:13]
sample_test_matrix = vectorizer.transform(sample_test_data['review_without_punctuation'])
print(sentiment_model.classes_)
print(sentiment_model.predict_proba(sample_test_matrix))
###Output
[-1 1]
[[3.67713366e-03 9.96322866e-01]
[9.59664165e-01 4.03358355e-02]
[9.99970284e-01 2.97164132e-05]]
###Markdown
__Ans__: Third. Question 3: Which of the following products are represented in the 20 most positive reviews? __Ans__: Third
###Code
test_data["postive_review_probability"]=[x[1] for x in np.asarray(sentiment_model.predict_proba(test_matrix))]
top_20=list(test_data.sort_values("postive_review_probability",ascending=False)[:20]["name"])
options_list=["Snuza Portable Baby Movement Monitor","MamaDoo Kids Foldable Play Yard Mattress Topper, Blue","Britax Decathlon Convertible Car Seat, Tiffany","Safety 1st Exchangeable Tip 3 in 1 Thermometer"]
[x for x in options_list if x in top_20]
###Output
_____no_output_____
###Markdown
Question 4: Which of the following products are represented in the 20 most negative reviews? __Ans__:
###Code
test_data["postive_review_probability"]=[x[0] for x in np.asarray(sentiment_model.predict_proba(test_matrix))]
top_20=list(test_data.sort_values("postive_review_probability",ascending=False)[:20]["name"])
options_list=["The First Years True Choice P400 Premium Digital Monitor, 2 Parent Unit","JP Lizzy Chocolate Ice Classic Tote Set","Peg-Perego Tatamia High Chair, White Latte","Safety 1st High-Def Digital Monitor"]
[x for x in options_list if x in top_20]
###Output
_____no_output_____
###Markdown
Question 5: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76). __Ans__:
###Code
def get_classification_accuracy(model, data, true_labels):
pred_y=model.predict(data)
correct=np.sum(pred_y==true_labels)
accuracy=round(correct/len(true_labels),2)
return accuracy
get_classification_accuracy(sentiment_model,test_matrix,test_data["sentiment"])
###Output
_____no_output_____
###Markdown
Question 6: Does a higher accuracy value on the training_data always imply that the classifier is better? __Ans__: No, higher accuracy on training data does not necessarily imply that the classifier is better. Question 7: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model? __Ans__:
###Code
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
vectorizer_word_subset = CountVectorizer(vocabulary=significant_words) # limit to 20 significant words
train_matrix_sub = vectorizer_word_subset.fit_transform(train_data['review_without_punctuation'])
test_matrix_sub = vectorizer_word_subset.transform(test_data['review_without_punctuation'])
#Logistic model fit
simple_model = LogisticRegression(solver='liblinear',n_jobs=1)
simple_model.fit(train_matrix_sub, train_data['sentiment'])
simple_model_coefficient = pd.DataFrame({'word':significant_words,'simple_model_coefficient':simple_model.coef_.flatten()}).sort_values(['simple_model_coefficient'], ascending=False).reset_index(drop=True)
len(simple_model_coefficient[simple_model_coefficient["simple_model_coefficient"]>0])
###Output
_____no_output_____
###Markdown
Question 8: Are the positive words in the simple_model also positive words in the sentiment_model? __Ans__: No
###Code
simple_model_coefficient=simple_model_coefficient.set_index("word",drop=True)
sentiment_model_coefficient = pd.DataFrame({'word':list(vectorizer.vocabulary_),'sentimental_model_coefficient':sentiment_model.coef_.flatten()}).sort_values(['sentimental_model_coefficient'], ascending=False).reset_index(drop=True)
sentiment_model_coefficient=sentiment_model_coefficient[sentiment_model_coefficient["word"].isin(significant_words)].set_index("word",drop=True)
simple_model_coefficient.join(sentiment_model_coefficient,on="word",how="left")
###Output
_____no_output_____
###Markdown
Question 9: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set? __Ans__: Sentiment Model
###Code
print("Sentiment Model: ",get_classification_accuracy(sentiment_model,train_matrix,train_data["sentiment"]))
print("Simple Model: ",get_classification_accuracy(simple_model,train_matrix_sub,train_data["sentiment"]))
###Output
Sentiment Model: 0.97
Simple Model: 0.87
###Markdown
Question 10: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set? __Ans__: Sentiment Model
###Code
print("Sentiment Model: ",get_classification_accuracy(sentiment_model,test_matrix,test_data["sentiment"]))
print("Simple Model: ",get_classification_accuracy(simple_model,test_matrix_sub,test_data["sentiment"]))
###Output
Sentiment Model: 0.93
Simple Model: 0.87
###Markdown
Question 11: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76). __Ans__:
###Code
#Find Majority Class
freq=pd.crosstab(test_data["sentiment"],columns=["count"]).reset_index()
freq
#Majority class=1
baseline_model=round(freq[freq["sentiment"]==1]["count"].values[0]/freq["count"].sum(),2)
print("Baseline Model: ", baseline_model)
###Output
Baseline Model: 0.84
|
benchmarks/profet/emukit/notebooks/Emukit-tutorial-bayesian-optimization-context-variables.ipynb | ###Markdown
Bayesian optimization with context variables. In this notebook we are going to see how to use Emukit to solve optimization problems in which certain variables are fixed during the optimization phase. These are called context variables [[1](#references)]. This is useful when some of the variables in the optimization are controllable/known factors. An example is the optimization of the movement of a robot when conditions of the environment change (but the change is known).
###Code
from emukit.test_functions import branin_function
from emukit.core import ParameterSpace, ContinuousParameter, DiscreteParameter
from emukit.core.initial_designs import RandomDesign
from GPy.models import GPRegression
from emukit.model_wrappers import GPyModelWrapper
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.core.loop import FixedIterationsStoppingCondition
###Output
_____no_output_____
###Markdown
Loading the problem and the loop
###Code
f, parameter_space = branin_function()
###Output
_____no_output_____
###Markdown
Now we define the domain of the function to optimize. We build the model:
###Code
design = RandomDesign(parameter_space) # Collect random points
X = design.get_samples(10)
Y = f(X)
model_gpy = GPRegression(X,Y) # Train and wrap the model in Emukit
model_emukit = GPyModelWrapper(model_gpy)
###Output
_____no_output_____
###Markdown
And prepare the optimization object to run the loop.
###Code
expected_improvement = ExpectedImprovement(model = model_emukit)
bayesopt_loop = BayesianOptimizationLoop(model = model_emukit,
space = parameter_space,
acquisition = expected_improvement,
batch_size = 1)
###Output
_____no_output_____
###Markdown
Now, we set the number of iterations to run to 10.
###Code
max_iter = 10
###Output
_____no_output_____
###Markdown
Running the optimization by setting a context variable. To set a context, we just need to create a dictionary with the variables to fix and pass it to the Bayesian optimization object when running the optimization. Note that every time we run new iterations, we can set other variables to be the context. We run 3 sequences of 10 iterations each with different values of the context.
###Code
bayesopt_loop.run_loop(f, max_iter, context={'x1':0.3}) # we set x1 as the context variable
bayesopt_loop.run_loop(f, max_iter, context={'x2':0.1}) # we set x2 as the context variable
bayesopt_loop.run_loop(f, max_iter) # no context
###Output
Optimization restart 1/1, f = 55.82239092951151
Optimization restart 1/1, f = 61.3846273395993
Optimization restart 1/1, f = 65.9026092044098
Optimization restart 1/1, f = 70.10888667806952
Optimization restart 1/1, f = 74.1328973094729
Optimization restart 1/1, f = 77.99175645392145
Optimization restart 1/1, f = 52.29992606115588
Optimization restart 1/1, f = 57.2135370696718
Optimization restart 1/1, f = 61.71269586013616
Optimization restart 1/1, f = 64.61711212283623
Optimization restart 1/1, f = 67.32871572150273
Optimization restart 1/1, f = 67.32871572150273
Optimization restart 1/1, f = 72.46948949054092
Optimization restart 1/1, f = 76.3396222448238
Optimization restart 1/1, f = 80.13694597576568
Optimization restart 1/1, f = 82.78050118466332
Optimization restart 1/1, f = 85.53554907845636
Optimization restart 1/1, f = 87.66997139826
Optimization restart 1/1, f = 82.51513223264337
Optimization restart 1/1, f = 74.56925204657252
Optimization restart 1/1, f = 66.47734698335717
Optimization restart 1/1, f = 58.330733958274834
Optimization restart 1/1, f = 58.330733958274834
Optimization restart 1/1, f = 64.60067656071124
Optimization restart 1/1, f = 70.20437602983576
Optimization restart 1/1, f = 75.80428915860006
Optimization restart 1/1, f = 80.95909118096986
Optimization restart 1/1, f = 85.034596272839
Optimization restart 1/1, f = 89.12568615232692
Optimization restart 1/1, f = 92.23693057747008
Optimization restart 1/1, f = 97.40863200244975
Optimization restart 1/1, f = 102.46737266911367
Optimization restart 1/1, f = 106.5758265632924
###Markdown
We can now inspect the collected points.
###Code
bayesopt_loop.loop_state.X
###Output
_____no_output_____ |
examples/Tutorial_cytosim.ipynb | ###Markdown
Simularium Conversion Tutorial : CytoSim Data
###Code
from IPython.display import Image
import numpy as np
from simulariumio.cytosim import CytosimConverter, CytosimData, CytosimObjectInfo
from simulariumio import MetaData, DisplayData, DISPLAY_TYPE, ModelMetaData, InputFileData
###Output
_____no_output_____
###Markdown
This notebook provides example python code for converting your own simulation trajectories into the format consumed by the Simularium Viewer. It creates a .simularium JSON file which you can drag and drop onto the viewer like this:  *** Prepare your spatial data The Simularium `CytosimConverter` consumes spatiotemporal data from CytoSim. We're working to improve performance for the Cytosim converter, and also working with the Cytosim authors to add the ability to output Simularium files directly from Cytosim. For now, if you find the conversion process is too slow, you can try to record less data from Cytosim, for example record at a larger timestep by adjusting `nb_frames` in the `run` block of your Cytosim `config.cym` file (https://gitlab.com/f.nedelec/cytosim/-/blob/8feaf45297c3f5180d24889909e3a5251a7adb1a/doc/tutorials/tuto_introduction.md).To see how to generate the Cytosim output .txt files you need, check Cytosim documentation here: https://gitlab.com/f.nedelec/cytosim/-/blob/8feaf45297c3f5180d24889909e3a5251a7adb1a/doc/sim/report.md* for Fibers, use the command `./report fiber:points > fiber_points.txt`, which will create `fiber_points.txt`* for Solids, use the command `./report solid > solids.txt`, which will create `solids.txt`* for Singles, use the command `./report single:position > singles.txt`, which will create `singles.txt`* for Couples, use the command `./report couple:state > couples.txt`, which will create `couples.txt` * in some versions of Cytosim, state is not a reporting option. In this case you can use `./report couple:[name of your couple] > couples_[name of your couple].txt` and provide a filepath for each type of couple in your data. If this is necessary, you should also check the position XYZ columns in your `couples.txt` file and override **position_indices** if they aren't at \[2, 3, 4\]The converter requires a `CytosimData` object as parameter ([see documentation](https://allen-cell-animated.github.io/simulariumio/simulariumio.cytosim.htmlsimulariumio.cytosim.cytosim_data.CytosimData)).If you'd like to specify PDB or OBJ files or color for rendering an agent type, add a `DisplayData` object for that agent type, as shown below ([see documentation](https://allen-cell-animated.github.io/simulariumio/simulariumio.data_objects.htmlmodule-simulariumio.data_objects.display_data)).
###Code
box_size = 2.
example_data = CytosimData(
meta_data=MetaData(
box_size=np.array([box_size, box_size, box_size]),
scale_factor=100.0,
trajectory_title="Some parameter set",
model_meta_data=ModelMetaData(
title="Some agent-based model",
version="8.1",
authors="A Modeler",
description=(
"An agent-based model run with some parameter set"
),
doi="10.1016/j.bpj.2016.02.002",
source_code_url="https://github.com/allen-cell-animated/simulariumio",
source_code_license_url="https://github.com/allen-cell-animated/simulariumio/blob/main/LICENSE",
input_data_url="https://allencell.org/path/to/native/engine/input/files",
raw_output_data_url="https://allencell.org/path/to/native/engine/output/files",
),
),
object_info={
"fibers" : CytosimObjectInfo(
cytosim_file=InputFileData(
file_path="../simulariumio/tests/data/cytosim/aster_pull3D_couples_actin_solid/fiber_points.txt",
),
display_data={
1 : DisplayData(
name="microtubule"
),
2 : DisplayData(
name="actin"
)
}
),
"solids" : CytosimObjectInfo(
cytosim_file=InputFileData(
file_path="../simulariumio/tests/data/cytosim/aster_pull3D_couples_actin_solid/solids.txt",
),
display_data={
1 : DisplayData(
name="aster",
radius=0.1
),
2 : DisplayData(
name="vesicle",
radius=0.1
)
}
),
"singles" : CytosimObjectInfo(
cytosim_file=InputFileData(
file_path="../simulariumio/tests/data/cytosim/aster_pull3D_couples_actin_solid/singles.txt",
),
display_data={
1 : DisplayData(
name="dynein",
radius=0.01,
display_type=DISPLAY_TYPE.PDB,
url="https://files.rcsb.org/download/3VKH.pdb",
color="#f4ac1a",
),
2 : DisplayData(
name="kinesin",
radius=0.01,
display_type=DISPLAY_TYPE.PDB,
url="https://files.rcsb.org/download/3KIN.pdb",
color="#0080ff",
)
}
),
"couples" : CytosimObjectInfo(
cytosim_file=InputFileData(
file_path="../simulariumio/tests/data/cytosim/aster_pull3D_couples_actin_solid/couples.txt",
),
display_data={
1 : DisplayData(
name="motor complex",
radius=0.02,
color="#bf95d4",
)
},
position_indices=[3, 4, 5]
)
},
)
###Output
_____no_output_____
###Markdown
Convert and save as .simularium JSON file. Once your data is shaped like in the `example_data` object, you can use the converter to generate the file at the given path:
###Code
CytosimConverter(example_data).write_JSON("example_cytosim")
###Output
Reading Cytosim Data -------------
Writing JSON -------------
Converting Trajectory Data -------------
saved to example_cytosim.simularium
|
Lesson2/Exercise21.ipynb | ###Markdown
Exercise 10 : Stop words removal Remove stop words (she, on, the, am, is, not) from the sentence: She sells seashells on the seashore
###Code
from nltk import word_tokenize
sentence = "She sells seashells on the seashore"
custom_stop_word_list = ['she', 'on', 'the', 'am', 'is', 'not']
' '.join([word for word in word_tokenize(sentence) if word.lower() not in custom_stop_word_list])
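# Alternative sketch (assumes NLTK's stopword corpus has been downloaded via nltk.download('stopwords')):
# use the built-in English stop word list instead of the hand-written one above.
from nltk.corpus import stopwords
nltk_stop_words = set(stopwords.words('english'))
' '.join([word for word in word_tokenize(sentence) if word.lower() not in nltk_stop_words])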
###Output
_____no_output_____ |
housing_1.ipynb | ###Markdown
###Code
from keras.datasets import boston_housing
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
train_data.shape
test_data.shape
train_targets
from keras import models
from keras import layers
def build_model():
model = models.Sequential()
model.add(layers.Dense(64, activation='relu',input_shape=(train_data.shape[1],)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
return model
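# K-fold cross-validation: with only ~400 training samples a single validation
# split would give a noisy estimate, so we train and evaluate k models on
# rotating validation partitions and average their scores.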
import numpy as np
k = 4
num_val_samples = len(train_data) // k
num_epochs = 100
all_scores = []
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate([train_data[:i * num_val_samples],train_data[(i + 1) * num_val_samples:]],axis=0)
partial_train_targets = np.concatenate([train_targets[:i * num_val_samples],train_targets[(i + 1) * num_val_samples:]],axis=0)
model = build_model()
model.fit(partial_train_data, partial_train_targets,epochs=num_epochs, batch_size=1, verbose=0)
val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)
all_scores.append(val_mae)
all_scores
np.mean(all_scores)
num_epochs = 100
all_mae_histories = []
for i in range(k):
print('processing fold #', i)
val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate(
[train_data[:i * num_val_samples],
train_data[(i + 1) * num_val_samples:]],
axis=0)
partial_train_targets = np.concatenate(
[train_targets[:i * num_val_samples],
train_targets[(i + 1) * num_val_samples:]],
axis=0)
model = build_model()
history = model.fit(partial_train_data, partial_train_targets,validation_data=(val_data, val_targets),epochs=num_epochs, batch_size=1, verbose=0)
mae_history = history.history['val_mean_absolute_error']
all_mae_histories.append(mae_history)
average_mae_history = [np.mean([x[i] for x in all_mae_histories]) for i in range(num_epochs)]
import matplotlib.pyplot as plt
plt.plot(range(1, len(average_mae_history) + 1), average_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
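# Smooth the noisy per-epoch validation MAE with an exponential moving average
# so the overall trend is easier to read.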
def smooth_curve(points, factor=0.9):
smoothed_points = []
for point in points:
if smoothed_points:
previous = smoothed_points[-1]
smoothed_points.append(previous * factor + point * (1 - factor))
else:
smoothed_points.append(point)
return smoothed_points
smooth_mae_history = smooth_curve(average_mae_history[10:])
plt.plot(range(1, len(smooth_mae_history) + 1), smooth_mae_history)
plt.xlabel('Epochs')
plt.ylabel('Validation MAE')
plt.show()
###Output
_____no_output_____ |
docs/circuit-examples/qubit-couplers/TwoTransmonsDirectCoupling.ipynb | ###Markdown
Direct Coupler (transmon-transmon)
###Code
%load_ext autoreload
%autoreload 2
import qiskit_metal as metal
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, Headings
import pyEPR as epr
###Output
_____no_output_____
###Markdown
Create the design in Metal
Set up a design of a given dimension. Dimensions will be respected in the design rendering. Note that the design size extends from the origin into the first quadrant.
###Code
design = designs.DesignPlanar({}, True)
design.chips.main.size['size_x'] = '2mm'
design.chips.main.size['size_y'] = '2mm'
gui = MetalGUI(design)
###Output
_____no_output_____
###Markdown
Create two transmons, each with one readout connection pad, on the chip previously defined, and connect them with a straight coupler.
###Code
from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket
from qiskit_metal.qlibrary.interconnects.straight_path import RouteStraight
# Just FYI, RoutePathFinder is another QComponent that could have been used instead of RouteStraight.
# Be aware of the options for TransmonPocket
TransmonPocket.get_template_options(design)
RouteStraight.get_template_options(design)
q1 = TransmonPocket(design, 'Q1', options = dict(
pad_width = '425 um',
pocket_height = '650um',
connection_pads=dict(
readout = dict(loc_W=+1,loc_H=+1, pad_width='200um')
)))
q2 = TransmonPocket(design, 'Q2', options = dict(
pos_x = '1.0 mm',
pad_width = '425 um',
pocket_height = '650um',
connection_pads=dict(
readout = dict(loc_W=-1,loc_H=+1, pad_width='200um')
)))
bus = RouteStraight(design, 'coupler', Dict(
pin_inputs=Dict(
start_pin=Dict(component='Q1', pin='readout'),
end_pin=Dict(component='Q2', pin='readout')), ))
gui.rebuild()
gui.autoscale()
# Get a list of all the qcomponents in QDesign and then zoom on them.
all_component_names = design.components.keys()
gui.zoom_on_components(all_component_names)
#Save screenshot as a .png formatted file.
gui.screenshot()
# Screenshot the canvas only as a .png formatted file.
gui.figure.savefig('shot.png')
from IPython.display import Image, display
_disp_ops = dict(width=500)
display(Image('shot.png', **_disp_ops))
# Closing the Qiskit Metal GUI
gui.main_window.close()
###Output
_____no_output_____ |
Assignment1/Assignment_1b.ipynb | ###Markdown
Scale the image by a factor of 2
###Code
def nearestNeighborInterpolation(np_image: np.ndarray, new_size: int) -> np.ndarray:
"""
Nearest neighbor interpolation.
:param np_image: numpy array of image
:param new_size: new size of image
:return: nearest neighbor interpolated image
"""
new_image = np.zeros((new_size, new_size), dtype=np.uint8)
for i in range(new_size):
for j in range(new_size):
new_image[i][j] = np_image[int(i/new_size*np_image.shape[0])][int(j/new_size*np_image.shape[1])]
return new_image
cameraman_2x = nearestNeighborInterpolation(cameraman_img, cameraman_img.shape[0]*2)
plt.imshow(cameraman_2x)
###Output
_____no_output_____
###Markdown
Rotate the Image
###Code
def rotateImageBy90(np_image: np.ndarray) -> np.ndarray:
"""
Rotate an image by 90 degrees.
:param np_image: numpy array of image
:return: rotated image
"""
rotated_image = np.zeros((np_image.shape[1], np_image.shape[0]), dtype=np.uint8)
for i in range(np_image.shape[1]):
for j in range(np_image.shape[0]):
rotated_image[i][j] = np_image[j][np_image.shape[1]-1-i]
return rotated_image
def rotateImage(np_image: np.ndarray, angle: int) -> np.ndarray:
"""
Rotate an image by a given angle.
:param np_image: numpy array of image
:param angle: angle to rotate by multiples of 90 degrees
:return: rotated image
"""
stage = angle//90
new_image = np_image.copy()
while stage > 0:
new_image = rotateImageBy90(new_image)
stage -= 1
return new_image
###Output
_____no_output_____
###Markdown
Rotate by 90 degrees
###Code
cameraman_90 = rotateImage(cameraman_img, 90)
plt.imshow(cameraman_90)
###Output
_____no_output_____
###Markdown
Rotate by 180 degrees
###Code
cameraman_180 = rotateImage(cameraman_img, 180)
plt.imshow(cameraman_180)
###Output
_____no_output_____
###Markdown
Horizontal shear
###Code
def shearImage(np_image: np.ndarray) -> np.ndarray:
    """
    Apply a horizontal shear to an image.
    :param np_image: numpy array of image
    :return: sheared image
    """
    fact = 10  # shear angle factor; row i is shifted by int(i*fact/45) pixels
    new_image = np.zeros((np_image.shape[0], np_image.shape[1]), dtype=np.uint8)
    for i in range(np_image.shape[0]):
        for j in range(np_image.shape[1]):
            nj = j + int(i * fact / 45)
            if nj < np_image.shape[1]:  # stay inside the column bounds
                new_image[i][j] = np_image[i][nj]
    return new_image
cameraman_shear = shearImage(cameraman_img)
plt.imshow(cameraman_shear)
###Output
_____no_output_____ |
main/.ipynb_checkpoints/TFIDF-checkpoint.ipynb | ###Markdown
Wine Points Prediction **Note:** a sample is used because the Jupyter notebook kernel runs out of memory and dies when the analysis is performed on the full dataset. As a result, predictions depend on the randomly drawn sample, and they vary by roughly $\pm1\%$ between runs.
###Code
wine_df = pd.read_csv("wine.csv")
str_cols = ['description', 'price', 'title', 'variety', 'country', 'designation', 'province', 'winery']
reviews = wine_df.sample(20000)[['points'] + str_cols].reset_index()
reviews = reviews.drop(['index'], axis=1)
reviews.head()
###Output
_____no_output_____
###Markdown
We first have to convert the categorical features we are going to use into numerical variables. This gives the features numeric meaning for the different forms of analysis we will run in order to predict the points given to a bottle of wine.
###Code
# assign numerical values to string columns
factorized_wine = reviews[str_cols].drop(['description'], axis=1).copy()
for col in str_cols[2:]:
factorized_wine[col] = pd.factorize(reviews[col])[0]
factorized_wine.head()
###Output
_____no_output_____
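###Markdown
 To make the factorization step above concrete, the next cell is a small illustrative sketch (on a made-up toy series, not the wine data) of what `pd.factorize` returns: an integer code for each row plus the array of unique values those codes index into.
###Code
# toy example only; the values below are assumptions for illustration
toy = pd.Series(['red', 'white', 'red', 'rose'])
codes, uniques = pd.factorize(toy)
print(codes)    # integer code per row, e.g. [0 1 0 2]
print(uniques)  # the distinct values the codes refer to
###Output
_____no_output_____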
###Markdown
Now we take the variables we just factorized, along with the price of the wine, as our X values; our y value is what we are trying to predict, which in this case is the points awarded to a bottle of wine.
###Code
X = factorized_wine.to_numpy('int64')
y = reviews['points'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
###Output
_____no_output_____
###Markdown
Below we try several different forms of prediction to see which one produces the best result. For each model we need to determine how accurate its estimates are. We do this using `score()`, which returns the coefficient of determination of the prediction ($r^2$), in other words the fraction of the observed variation in y that can be explained by the regression model. We also compute the root mean squared error (RMSE) of the model. Linear regression
###Code
from sklearn import linear_model
model = linear_model.LinearRegression()
model.fit(X_train, y_train)
pred = model.predict(X_test)
print('r2 score:', model.score(X_test,y_test))
print('rmse score:', mean_squared_error(y_test, pred, squared=False))
###Output
r2 score: 0.025621093791445948
rmse score: 3.004001678134892
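###Markdown
 As a quick sanity check on these two metrics, the cell below is a minimal sketch (on a few made-up numbers, not the wine data) showing that sklearn's `r2_score` and `mean_squared_error(..., squared=False)` match the textbook definitions $r^2 = 1 - SS_{res}/SS_{tot}$ and $RMSE = \sqrt{\frac{1}{n}\sum(y-\hat{y})^2}$.
###Code
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error
# made-up "true" and predicted point values, purely for illustration
y_true = np.array([88., 90., 92., 95.])
y_hat = np.array([89., 90., 91., 93.])
ss_res = np.sum((y_true - y_hat) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print('manual r2   :', 1 - ss_res / ss_tot)
print('sklearn r2  :', r2_score(y_true, y_hat))
print('manual rmse :', np.sqrt(np.mean((y_true - y_hat) ** 2)))
print('sklearn rmse:', mean_squared_error(y_true, y_hat, squared=False))
###Output
_____no_output_____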
###Markdown
As you can see, this isn't the best prediction model, so let's try some other methods and see what we get. Linear discriminant analysis
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda_model = LinearDiscriminantAnalysis()
lda_model.fit(X_train, y_train)
pred = lda_model.predict(X_test)
print('r2 score:', lda_model.score(X_test,y_test))
print('rmse score:', mean_squared_error(y_test, pred, squared=False))
###Output
r2 score: 0.132
rmse score: 3.1166648841349627
###Markdown
The results from this method are not good either, so on to the next one. Classification tree
###Code
from sklearn import tree
dt_model = tree.DecisionTreeClassifier()
dt_model.fit(X_train, y_train)
pred = dt_model.predict(X_test)
print('r2 score:', dt_model.score(X_test,y_test))
print('rmse score:', mean_squared_error(y_test, pred, squared=False))
###Output
r2 score: 0.1468
rmse score: 3.2819506394825626
###Markdown
The methods we have tried so far, including this one, are getting us nowhere and show very little sign of improvement, so let's pivot in a different direction and try to predict the points based on the description of the wine. Incorporating description
###Code
reviews.head()
###Output
_____no_output_____
###Markdown
Because we are focusing on the description (review) of the wine, here is an example of one.
###Code
reviews['description'][5]
###Output
_____no_output_____
###Markdown
We remove punctuation and other special characters and convert everything to lower case, as capitalization is not significant here.
###Code
descriptions = []
for descrip in reviews['description']:
line = re.sub(r'\W', ' ', str(descrip))
line = line.lower()
descriptions.append(line)
len(descriptions)
###Output
_____no_output_____
###Markdown
Here we use `TfidfVectorizer`. To understand it, term frequency-inverse document frequency (TF-IDF) must be explained first. TF-IDF is a measure that evaluates how relevant a word is to a document inside a collection of documents. TF-IDF can be defined as follows:
$ \text{Term Frequency (TF)} = \frac{\text{Frequency of a word}}{\text{Total number of words in document}} $
$ \text{Inverse Document Frequency (IDF)} = \log{\frac{\text{Total number of documents}}{\text{Number of documents that contain the word}}} $
$ \text{TF-IDF} = \text{TF} \cdot \text{IDF} $
In turn, what `TfidfVectorizer` gives us is a matrix of feature values that we can use as predictors. The parameters passed to `TfidfVectorizer` are max_features, min_df, max_df, and stop_words.
- max_features keeps only the top n terms by frequency across the corpus.
- min_df drops terms that appear in fewer than the given number of documents (7 here).
- max_df causes the vectorizer to ignore terms with a document frequency strictly higher than the given threshold; because a float is passed, we ignore words that appear in more than 80% of documents.
- stop_words lets us pass in a set of stop words, i.e. words that add little to no meaning to a sentence, such as "i", "our", "him", and "her".
Following this we fit and transform the data, then split it into training and testing sets.
###Code
y = reviews['points'].values
vec = TfidfVectorizer(max_features=2500, min_df=7, max_df=0.8, stop_words=stopwords.words('english'))
X = vec.fit_transform(descriptions).toarray()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 12)
###Output
_____no_output_____
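###Markdown
 To connect the formulas above to the code, the cell below is a minimal sketch that computes the plain TF-IDF weight of one word on a made-up three-document corpus (not the wine reviews). Note that scikit-learn's `TfidfVectorizer` additionally smooths the IDF term and L2-normalises each row by default, so its exact numbers will differ slightly from this unsmoothed version.
###Code
import numpy as np
# assumed toy corpus, purely for illustration
toy_docs = ['full bodied red wine', 'crisp dry white wine', 'oaky red blend']
word = 'red'
tf = [doc.split().count(word) / len(doc.split()) for doc in toy_docs]       # TF per document
idf = np.log(len(toy_docs) / sum(word in doc.split() for doc in toy_docs))  # IDF of the word
print([round(t * idf, 3) for t in tf])                                      # TF-IDF per document
###Output
_____no_output_____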
###Markdown
Now that we've split the data, we use `RandomForestRegressor()` to make our prediction. Being a random forest algorithm, it averages the predictions of the decision trees it builds as its estimators.
###Code
rfr = RandomForestRegressor()
rfr.fit(X_train, y_train)
pred = rfr.predict(X_test)
###Output
_____no_output_____
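###Markdown
 As a quick, hedged illustration of the averaging described above: a fitted `RandomForestRegressor` exposes its individual trees through `estimators_`, and averaging their per-tree predictions reproduces `rfr.predict`. This is only a sketch and reuses `rfr`, `X_test` and `pred` from the cells above.
###Code
import numpy as np
# average the per-tree predictions and compare with the forest's own prediction
per_tree_preds = np.stack([tree.predict(X_test) for tree in rfr.estimators_])
manual_pred = per_tree_preds.mean(axis=0)
print(np.allclose(manual_pred, pred))
###Output
_____no_output_____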
###Markdown
Now we check to see how good our model is at predicting the points for a bottle of wine
###Code
print('r2 score:', rfr.score(X_test, y_test))
print('rmse score:', mean_squared_error(y_test, pred, squared=False))
cvs = cross_val_score(rfr, X_test, y_test, cv=10)
cvs.mean()
###Output
_____no_output_____
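###Markdown
 Before interpreting these scores, it can help to look at the raw errors themselves. The short sketch below (reusing `pred` and `y_test` from the cells above) summarises how many points off the predictions typically are.
###Code
import numpy as np
abs_err = np.abs(pred - y_test)
print('mean abs error       :', abs_err.mean())
print('median abs error     :', np.median(abs_err))
print('share within 2 points:', np.mean(abs_err <= 2))
###Output
_____no_output_____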
###Markdown
This is based solely on the description of the wine. As you can see, this is a large improvement in both the score and the RMSE over any of the methods performed above. However, it is still not the best, for several reasons. The first is the $r^2$ score, i.e. how good our model is at making predictions: there is still a large portion of the data that is not being accurately predicted. The other issue concerns what happens when the model does fail: since the RMSE is fairly high, this could be read as failing rather spectacularly when we do fail. However, given that the context of this problem is predicting arbitrary integer point values for bottles of wine, failing spectacularly is not necessarily what is occurring; the RMSE tells us that with each incorrect prediction we are about 2.1 points off. It is still less than ideal, though. Below we see if we can improve upon these shortcomings. Combining features Next we combine the features obtained from `TfidfVectorizer` with the features we just factorized, in their respective rows.
###Code
wine_X = factorized_wine.to_numpy('int64')
X = np.concatenate((wine_X,X),axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 12)
rfr_fac = RandomForestRegressor()
rfr_fac.fit(X_train, y_train)
fac_pred = rfr_fac.predict(X_test)
###Output
_____no_output_____
###Markdown
Next we perform the same actions as above to determine the accuracy of the prediction. That is, we use `score()`, perform a 10-fold cross-validation, and take the mean of the scores.
###Code
print('r2 score:', rfr_fac.score(X_test, y_test))
print('rmse score:', mean_squared_error(y_test, fac_pred, squared=False))
fac_cvs = cross_val_score(rfr_fac, X_test, y_test, cv=10)
fac_cvs.mean()
###Output
_____no_output_____ |
C) RoadMap 3 - Torch Main 3 - Linear Algebraic Operations.ipynb | ###Markdown
RoadMap 3 - Linear Algebraic Operations
1. torch.addbmm - Performs a batch matrix-matrix product of matrices stored in batch1 and batch2, with a reduced add step (all matrix multiplications get accumulated along the first dimension). mat is added to the final result.
2. torch.addmm - Performs a matrix multiplication of the matrices mat1 and mat2. The matrix mat is added to the final result.
3. torch.addmv - Performs a matrix-vector product of the matrix mat and the vector vec. The vector tensor is added to the final result.
4. torch.addr - Performs the outer-product of vectors vec1 and vec2 and adds it to the matrix mat.
5. torch.bmm - Performs a batch matrix-matrix product of matrices stored in batch1 and batch2.
6. torch.eig - Computes the eigenvalues and eigenvectors of a real square matrix.
7. torch.ger - Outer product of vec1 and vec2. If vec1 is a vector of size n and vec2 is a vector of size m, then out must be a matrix of size (n×m).
8. torch.inverse - Takes the inverse of the square matrix input.
9.1 torch.det - Calculates determinant of a 2D square tensor.
9.2 torch.logdet - Calculates log-determinant of a 2D square tensor.
10.1 torch.matmul - Matrix product of two tensors.
10.2 torch.mm - Performs a matrix multiplication of the matrices mat1 and mat2.
11. torch.mv - Performs a matrix-vector product of the matrix mat and the vector vec.
12. torch.pinverse - Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor.
13. torch.cholesky - Computes the Cholesky decomposition of a symmetric positive-definite matrix A.
14. torch.qr - Computes the QR decomposition of a matrix input, and returns matrices Q and R such that input=QR, with Q being an orthogonal matrix and R being an upper triangular matrix.
15. torch.svd - U, S, V = torch.svd(A) returns the singular value decomposition of a real matrix A of size (n x m) such that A=US(V.t)
###Code
import os
import sys
import torch
import numpy as np
import torch.nn as nn
from torchvision import transforms, datasets
from PIL import Image
import cv2
import matplotlib.pyplot as plt
import torchvision
# FUNCTIONAL modules - Implementing each module as functions
import torch.nn.functional as F
###Output
_____no_output_____
###Markdown
Extra Blog Resources
1. https://towardsdatascience.com/linear-algebra-for-deep-learning-f21d7e7d7f23
2. https://towardsdatascience.com/linear-algebra-essentials-with-numpy-part-1-af4a867ac5ca
3. https://towardsdatascience.com/linear-algebra-for-deep-learning-506c19c0d6fa

Matrix-matrix product batch
1. torch.addbmm - Performs a batch matrix-matrix product of matrices stored in batch1 and batch2, with a reduced add step (all matrix multiplications get accumulated along the first dimension). mat is added to the final result.
 - beta (Number, optional) – multiplier for mat (β)
 - mat (Tensor) – matrix to be added
 - alpha (Number, optional) – multiplier for batch1 @ batch2 (α)
 - batch1 (Tensor) – the first batch of matrices to be multiplied
 - batch2 (Tensor) – the second batch of matrices to be multiplied
 - out (Tensor, optional) – the output tensor

$$out = \beta\ mat + \alpha\ (\sum_{i=0}^{b} batch1_i \mathbin{@} batch2_i)$$
###Code
M = torch.randn(3, 5)
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)
out = torch.addbmm(M, batch1, batch2)
print("out = ", out)
###Output
out = tensor([[ 4.4034, 10.0449, -1.6211, -0.1874, 1.8084],
[-13.6686, 7.2830, -24.0032, 9.1567, -2.6577],
[ 3.6453, 1.3986, 2.3641, 2.6715, 4.4922]])
###Markdown
Matrix-matrix product
2. torch.addmm - Performs a matrix multiplication of the matrices mat1 and mat2. The matrix mat is added to the final result.
 - beta (Number, optional) – multiplier for mat (β)
 - mat (Tensor) – matrix to be added
 - alpha (Number, optional) – multiplier for mat1 @ mat2 (α)
 - mat1 (Tensor) – the first matrix to be multiplied
 - mat2 (Tensor) – the second matrix to be multiplied
 - out (Tensor, optional) – the output tensor

$$out = \beta\ mat + \alpha\ (mat1 \mathbin{@} mat2)$$
###Code
M = torch.randn(2, 3)
mat1 = torch.randn(2, 3)
mat2 = torch.randn(3, 3)
out = torch.addmm(M, mat1, mat2)
print("out = ", out)
###Output
out = tensor([[ 2.8556, -1.9296, -3.7283],
[ 0.7363, -0.2024, -0.5542]])
###Markdown
Matrix-vector product
3. torch.addmv - Performs a matrix-vector product of the matrix mat and the vector vec. The vector tensor is added to the final result.
 - beta (Number, optional) – multiplier for tensor (β)
 - tensor (Tensor) – vector to be added
 - alpha (Number, optional) – multiplier for mat @ vec (α)
 - mat (Tensor) – matrix to be multiplied
 - vec (Tensor) – vector to be multiplied
 - out (Tensor, optional) – the output tensor

$$out = \beta\ tensor + \alpha\ (mat \mathbin{@} vec)$$
###Code
M = torch.randn(2)
mat = torch.randn(2, 3)
vec = torch.randn(3)
out = torch.addmv(M, mat, vec)
print("out = ", out)
###Output
out = tensor([1.3346, 0.5646])
###Markdown
Vector-vector product
4. torch.addr - Performs the outer-product of vectors vec1 and vec2 and adds it to the matrix mat.
 - beta (Number, optional) – multiplier for mat (β)
 - mat (Tensor) – matrix to be added
 - alpha (Number, optional) – multiplier for vec1 ⊗ vec2 (α)
 - vec1 (Tensor) – the first vector of the outer product
 - vec2 (Tensor) – the second vector of the outer product
 - out (Tensor, optional) – the output tensor

$$out = \beta\ mat + \alpha\ (vec1 \otimes vec2)$$
###Code
vec1 = torch.arange(1., 4.)
vec2 = torch.arange(1., 3.)
M = torch.zeros(3, 2)
out = torch.addr(M, vec1, vec2)
print("out = ", out)
###Output
out = tensor([[1., 2.],
[2., 4.],
[3., 6.]])
###Markdown
Matrix-matrix product (without any addition)
5. torch.bmm - Performs a batch matrix-matrix product of matrices stored in batch1 and batch2.
 - batch1 (Tensor) – the first batch of matrices to be multiplied
 - batch2 (Tensor) – the second batch of matrices to be multiplied
 - out (Tensor, optional) – the output tensor

$$out_i = batch1_i \mathbin{@} batch2_i$$
###Code
batch1 = torch.randn(10, 3, 4)
batch2 = torch.randn(10, 4, 5)
out = torch.bmm(batch1, batch2)
print("out = ", out)
print("out.size = ", out.size())
# Find eigen values
'''
6. torch.eig(a, eigenvectors=False, out=None) - Computes the eigenvalues and eigenvectors of a real square matrix.
- a (Tensor) – the square matrix for which the eigenvalues and eigenvectors will be computed
- eigenvectors (bool) – True to compute both eigenvalues and eigenvectors; otherwise,
only eigenvalues will be computed
- out (tuple, optional) – the output tensors
'''
x_in = torch.randn(2, 2)
eigen_values, eigen_vectors = torch.eig(x_in, True)
print("x_in = ", x_in)
print("eigen_values = ", eigen_values)
print("eigen_vectors = ", eigen_vectors)
# LAPACK based outer product
'''
7. torch.ger - Outer product of vec1 and vec2. If vec1 is a vector of size n and vec2 is a
vector of size m, then out must be a matrix of size (n×m).
- vec1 (Tensor) – 1-D input vector
- vec2 (Tensor) – 1-D input vector
- out (Tensor, optional) – optional output matrix
'''
v1 = torch.arange(1., 5.)
v2 = torch.arange(1., 4.)
x_out = torch.ger(v1, v2)
print("v1 = ", v1)
print("v2 = ", v2)
print("x_out = ", x_out)
# Inverse of matrix
'''
8. torch.inverse - Takes the inverse of the square matrix input.
'''
x = torch.rand(4, 4)
x_inverse = torch.inverse(x)
print("x = ", x)
print("x_inverse = ", x_inverse)
# Determinant of matrix
'''
9.1 torch.det - Calculates determinant of a 2D square tensor.
'''
'''
9.2 torch.logdet - Calculates log-determinant of a 2D square tensor
'''
x = torch.rand(4, 4)
det = torch.det(x)
logdet = torch.logdet(x)
print("x = ", x)
print("Determinant of x = ", det)
print("Log Determinant of x = ", logdet)
# Matrix product
'''
10.1 torch.matmul - Matrix product of two tensors.
- tensor1 (Tensor) – the first tensor to be multiplied
- tensor2 (Tensor) – the second tensor to be multiplied
- out (Tensor, optional) – the output tensor
'''
'''
10.2 torch.mm - Performs a matrix multiplication of the matrices mat1 and mat2. (2D tensors only)
- mat1 (Tensor) – the first matrix to be multiplied
- mat2 (Tensor) – the second matrix to be multiplied
- out (Tensor, optional) – the output tensor
'''
x1 = torch.randn(2, 2)
x2 = torch.randn(2, 2)
x_out = torch.matmul(x1, x2)
x_out_size = x_out.size()
print("Using torch.matmul")
print("x1 = ", x1)
print("x2 = ", x2)
print("x_out = ", x_out)
print("Output size = ", x_out_size)
print("\n")
x_out = torch.mm(x1, x2)
x_out_size = x_out.size()
print("Using torch.mm")
print("x1 = ", x1)
print("x2 = ", x2)
print("x_out = ", x_out)
print("Output size = ", x_out_size)
print("\n")
# Matrix-Vector multiplication
'''
11. torch.mv - Performs a matrix-vector product of the matrix mat and the vector vec
- mat (Tensor) – matrix to be multiplied
- vec (Tensor) – vector to be multiplied
- out (Tensor, optional) – the output tensor
'''
mat = torch.randn(2, 3)
vec = torch.randn(3)
out = torch.mv(mat, vec)
print("out = ", out)
# Moore - Penrose Inverse
'''
12. torch.pinverse - Calculates the pseudo-inverse (also known as the Moore-Penrose inverse) of a 2D tensor.
- input (Tensor) – The input 2D tensor of dimensions m×n
- rcond (float) – A floating point value to determine the cutoff for small singular values. Default: 1e-15
'''
x_in = torch.randn(3, 5)
x_out = torch.pinverse(x_in)
print("x_out = ", x_out)
# Cholesky decomposition
'''
13. torch.cholesky - Computes the Cholesky decomposition of a symmetric positive-definite matrix A.
- a (Tensor) – the input 2-D tensor, a symmetric positive-definite matrix
- upper (bool, optional) – flag that indicates whether to return the upper or lower triangular matrix
- out (Tensor, optional) – the output matrix
'''
'''
Use - The Cholesky decomposition is mainly used for the numerical solution of linear equations
'''
a = torch.randn(3, 3)
a = torch.mm(a, a.t()) # make symmetric positive definite
u = torch.cholesky(a)
print("u = ", u)
# QR decomposition
'''
14. torch.qr - Computes the QR decomposition of a matrix input, and returns matrices Q and R such that input=QR,
with Q being an orthogonal matrix and R being an upper triangular matrix.
'''
x_in = torch.randn(3, 3)
q, r = torch.qr(x_in)
print("q = ", q)
print("r = ", r)
# Singular Value Decomposition (SVD)
'''
15. torch.svd - U, S, V = torch.svd(A) returns the singular value decomposition of a real matrix A of size (n x m)
such that A=US(V.t)
'''
a = torch.tensor([[8.79, 6.11, -9.15, 9.57, -3.49, 9.84],
[9.93, 6.91, -7.93, 1.64, 4.02, 0.15],
[9.83, 5.04, 4.86, 8.83, 9.80, -8.99],
[5.45, -0.27, 4.85, 0.74, 10.00, -6.02],
[3.16, 7.98, 3.01, 5.80, 4.27, -5.31]]).t()
u, s, v = torch.svd(a)
###Output
_____no_output_____ |
notebooks/Figure - Mock streams.ipynb | ###Markdown
Mock streams
###Code
# FOR PAPER
style = {
"linestyle": "none",
"marker": 'o',
"markersize": 2,
"alpha": 0.1,
"color": "#555555"
}
line_style = {
"marker": None,
"linestyle": '--',
"color": 'k',
"linewidth": 1.5,
"dashes": (3,2)
}
data_style = dict(marker='o', ms=4, ls='none', ecolor='#333333', alpha=0.75, color='k')
data_b_style = data_style.copy()
data_b_style['marker'] = 's'
# contour stuff
levels = np.array([-2,-1,0,1]) - 0.1
for k,name_group in enumerate([all_names[:5],all_names[5:]]):
fig,allaxes = pl.subplots(3, 5, figsize=(9,6.5), sharex='col', sharey='row')
for i,name in enumerate(name_group):
axes = allaxes[:,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot, streams, prog, _ = get_potential_stream_prog(name)
stream = streams[:,best_ix]
model_c,model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# cut to just points in the window
ix = (model_c.l > 0*u.deg) & (model_c.l < 11*u.deg) & (model_c.b > 25.5*u.deg) & (model_c.b < 34.5*u.deg)
# density in sky coords
sky_ix = ix & (model_c.distance > 4.5*u.kpc) & (model_c.distance < 10*u.kpc)
grid,log_dens = surface_density(model_c[sky_ix], bandwidth=0.2)
cs = axes[0].contourf(grid[:,0].reshape(log_dens.shape), grid[:,1].reshape(log_dens.shape), log_dens,
levels=levels, cmap='magma_r')
axes[1].plot(model_c.l[ix], model_c.distance[ix], rasterized=True, **style)
axes[2].plot(model_c.l[ix], galactic.decompose(model_v[2][ix]), rasterized=True, **style)
# l,b
axes[0].plot([3.8,3.8], [31.5,32.5], **line_style)
axes[0].plot([5.85,5.85], [30.,31], **line_style)
axes[0].set_aspect('equal')
# l,d
axes[1].plot([3.8,3.8], [8.7,9.3], **line_style)
axes[1].plot([5.85,5.85], [7.2,7.8], **line_style)
# l,vr
axes[2].plot([3.8,3.8], [276,292], **line_style)
axes[2].plot([5.85,5.85], [284,300], **line_style)
# the data
# _tmp = data_style.copy(); _tmp.pop('ecolor')
# axes[0].plot(ophdata_fit.coord.l.degree, ophdata_fit.coord.b.degree, **_tmp)
_tmp = data_b_style.copy(); _tmp.pop('ecolor')
axes[0].plot(ophdata_fan.coord.l.degree, ophdata_fan.coord.b.degree, **_tmp)
# axes[1].errorbar(ophdata_fit.coord.l.degree, ophdata_fit.coord.distance.to(u.kpc).value,
# ophdata_fit.coord_err['distance'].to(u.kpc).value, **data_style)
axes[1].errorbar(ophdata_fan.coord.l.degree, ophdata_fan.coord.distance.to(u.kpc).value,
ophdata_fan.coord_err['distance'].to(u.kpc).value, **data_b_style)
# axes[2].errorbar(ophdata_fit.coord.l.degree, ophdata_fit.veloc['vr'].to(u.km/u.s).value,
# ophdata_fit.veloc_err['vr'].to(u.km/u.s).value, **data_style)
axes[2].errorbar(ophdata_fan.coord.l.degree, ophdata_fan.veloc['vr'].to(u.km/u.s).value,
ophdata_fan.veloc_err['vr'].to(u.km/u.s).value, **data_b_style)
# Axis limits
axes[0].set_xlim(9.,2)
axes[0].set_ylim(26.5, 33.5)
axes[1].set_ylim(5.3, 9.7)
axes[2].set_ylim(225, 325)
# Text
axes[0].set_title(name_map[name], fontsize=20)
axes[2].set_xlabel("$l$ [deg]", fontsize=18)
if i == 0:
axes[0].set_ylabel("$b$ [deg]", fontsize=18)
axes[1].set_ylabel(r"$d_\odot$ [kpc]", fontsize=18)
axes[2].set_ylabel(r"$v_r$ [${\rm km}\,{\rm s}^{-1}$]", fontsize=18)
fig.tight_layout()
# fig.savefig(os.path.join(plotpath, "mockstream{}.pdf".format(k)), rasterized=True, dpi=400)
# fig.savefig(os.path.join(plotpath, "mockstream{}.png".format(k)), dpi=400)
###Output
_____no_output_____
###Markdown
--- Color points by release time
###Code
import matplotlib.colors as colors
def truncate_colormap(cmap, minval=0.0, maxval=1.0, n=100):
new_cmap = colors.LinearSegmentedColormap.from_list(
'trunc({n},{a:.2f},{b:.2f})'.format(n=cmap.name, a=minval, b=maxval),
cmap(np.linspace(minval, maxval, n))[::-1])
return new_cmap
x = np.linspace(0, 1, 128)
pl.scatter(x,x,c=x,
cmap=truncate_colormap(pl.get_cmap('magma'), 0., .9))
custom_cmap = truncate_colormap(pl.get_cmap('magma'), 0., 0.9)
scatter_kw = dict(cmap=custom_cmap, alpha=0.8, s=4, vmin=-1, vmax=0, rasterized=True)
data_b_style = dict(marker='s', ms=4, ls='none', ecolor='#31a354', alpha=1, color='#31a354')
line_style = {
"marker": None,
"linestyle": '-',
"color": '#888888',
"linewidth": 1.5
}
for k,name_group in enumerate([all_names[:5],all_names[5:]]):
fig,allaxes = pl.subplots(3, 5, figsize=(10,6.5), sharex='col', sharey='row')
for i,name in enumerate(name_group):
axes = allaxes[:,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot, streams, prog, release_t = get_potential_stream_prog(name)
release_t = release_t/1000.
stream = streams[:,best_ix]
model_c,_model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# remove progenitor orbit
model_c = model_c[1:]
model_v = []
for v in _model_v:
model_v.append(v[1:])
# cut to just points in the window
ix = (model_c.l > 0*u.deg) & (model_c.l < 11*u.deg) & (model_c.b > 25.5*u.deg) & (model_c.b < 34.5*u.deg)
# plot model points
cs = axes[0].scatter(model_c.l[ix], model_c.b[ix], c=release_t[ix], **scatter_kw)
axes[1].scatter(model_c.l[ix], model_c.distance[ix], c=release_t[ix], **scatter_kw)
axes[2].scatter(model_c.l[ix], galactic.decompose(model_v[2][ix]), c=release_t[ix], **scatter_kw)
# l,b
axes[0].plot([3.8,3.8], [31.5,32.5], **line_style)
axes[0].plot([5.85,5.85], [30.,31], **line_style)
axes[0].set_aspect('equal')
# l,d
axes[1].plot([3.8,3.8], [8.7,9.3], **line_style)
axes[1].plot([5.85,5.85], [7.2,7.8], **line_style)
# l,vr
axes[2].plot([3.8,3.8], [276,292], **line_style)
axes[2].plot([5.85,5.85], [284,300], **line_style)
# the data
# _tmp = data_style.copy(); _tmp.pop('ecolor')
# axes[0].plot(ophdata_fit.coord.l.degree, ophdata_fit.coord.b.degree, **_tmp)
_tmp = data_b_style.copy(); _tmp.pop('ecolor')
axes[0].plot(ophdata_fan.coord.l.degree, ophdata_fan.coord.b.degree, **_tmp)
# axes[1].errorbar(ophdata_fit.coord.l.degree, ophdata_fit.coord.distance.to(u.kpc).value,
# ophdata_fit.coord_err['distance'].to(u.kpc).value, **data_style)
axes[1].errorbar(ophdata_fan.coord.l.degree, ophdata_fan.coord.distance.to(u.kpc).value,
ophdata_fan.coord_err['distance'].to(u.kpc).value, **data_b_style)
# axes[2].errorbar(ophdata_fit.coord.l.degree, ophdata_fit.veloc['vr'].to(u.km/u.s).value,
# ophdata_fit.veloc_err['vr'].to(u.km/u.s).value, **data_style)
axes[2].errorbar(ophdata_fan.coord.l.degree, ophdata_fan.veloc['vr'].to(u.km/u.s).value,
ophdata_fan.veloc_err['vr'].to(u.km/u.s).value, **data_b_style)
# Axis limits
axes[0].set_xlim(9.,2)
axes[0].set_ylim(26.5, 33.5)
axes[1].set_ylim(5.3, 9.7)
axes[2].set_ylim(225, 325)
# Text
axes[0].set_title(name_map[name], fontsize=20)
axes[2].set_xlabel("$l$ [deg]", fontsize=18)
if i == 0:
axes[0].set_ylabel("$b$ [deg]", fontsize=18)
axes[1].set_ylabel(r"$d_\odot$ [kpc]", fontsize=18)
axes[2].set_ylabel(r"$v_r$ [${\rm km}\,{\rm s}^{-1}$]", fontsize=18)
axes[0].set_yticks([26,28,30,32,34])
fig.tight_layout()
fig.subplots_adjust(right=0.85)
cbar_ax = fig.add_axes([0.87, 0.125, 0.02, 0.8])
fig.colorbar(cs, cax=cbar_ax)
cbar_ax.axes.set_ylabel('Stripping time [Gyr]')
fig.savefig(os.path.join(plotpath, "mockstream{}.pdf".format(k)), rasterized=True, dpi=400)
fig.savefig(os.path.join(plotpath, "mockstream{}.png".format(k)), dpi=400)
for k,name_group in enumerate([all_names[:5],all_names[5:]]):
fig,allaxes = pl.subplots(2, 5, figsize=(10,5.2), sharex='col', sharey='row')
for i,name in enumerate(name_group):
axes = allaxes[:,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot, streams, prog, release_t = get_potential_stream_prog(name)
release_t = release_t/1000.
stream = streams[:,best_ix]
model_c,_model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# remove progenitor orbit
model_c = model_c[1:]
model_v = []
for v in _model_v:
model_v.append(v[1:])
# cut to just points in the window
ix = (model_c.l > 0*u.deg) & (model_c.l < 11*u.deg) & (model_c.b > 25.5*u.deg) & (model_c.b < 34.5*u.deg)
# plot model points
axes[0].scatter(model_c.l[ix], galactic.decompose(model_v[0][ix]), c=release_t[ix], **scatter_kw)
axes[1].scatter(model_c.l[ix], galactic.decompose(model_v[1][ix]), c=release_t[ix], **scatter_kw)
# Axis limits
axes[0].set_xlim(9.,2)
axes[0].set_ylim(-12,-2)
axes[1].set_ylim(-2,8)
# Text
axes[0].set_title(name_map[name], fontsize=20)
axes[1].set_xlabel("$l$ [deg]", fontsize=18)
if i == 0:
axes[0].set_ylabel(r"$\mu_l$ [${\rm mas}\,{\rm yr}^{-1}$]", fontsize=18)
axes[1].set_ylabel(r"$\mu_b$ [${\rm mas}\,{\rm yr}^{-1}$]", fontsize=18)
fig.tight_layout()
# fig.savefig(os.path.join(plotpath, "mockstream-pm{}.pdf".format(k)), rasterized=True, dpi=400)
# fig.savefig(os.path.join(plotpath, "mockstream-pm{}.png".format(k)), dpi=400)
###Output
_____no_output_____
###Markdown
---
###Code
# contour stuff
levels = np.array([-2,-1,0,1]) - 0.1
fig,allaxes = pl.subplots(2, 5, figsize=(9,4.7), sharex=True, sharey=True)
for k,name_group in enumerate([all_names[:5],all_names[5:]]):
for i,name in enumerate(name_group):
ax = allaxes[k,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot, streams, prog, _ = get_potential_stream_prog(name)
stream = streams[:,best_ix]
model_c,model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# cut to just points in the window
ix = (model_c.l > 0*u.deg) & (model_c.l < 11*u.deg) & (model_c.b > 25.5*u.deg) & (model_c.b < 34.5*u.deg)
# density in sky coords
sky_ix = ix & (model_c.distance > 4.5*u.kpc) & (model_c.distance < 10*u.kpc)
grid,log_dens = surface_density(model_c[sky_ix], bandwidth=0.2)
cs = ax.contourf(grid[:,0].reshape(log_dens.shape), grid[:,1].reshape(log_dens.shape), log_dens,
levels=levels, cmap='magma_r')
# l,b
ax.plot([3.8,3.8], [31.5,32.5], **line_style)
ax.plot([5.85,5.85], [30.,31], **line_style)
# ax.set_aspect('equal')
# the data
# _tmp = data_style.copy(); _tmp.pop('ecolor')
# axes[0].plot(ophdata_fit.coord.l.degree, ophdata_fit.coord.b.degree, **_tmp)
_tmp = data_b_style.copy(); _tmp.pop('ecolor')
ax.plot(ophdata_fan.coord.l.degree, ophdata_fan.coord.b.degree, **_tmp)
# Axis limits
ax.set_xlim(9.,2)
ax.set_ylim(26.5, 33.5)
# Text
ax.set_title(name_map[name], fontsize=20)
if i == 2 and k == 1:
ax.set_xlabel("$l$ [deg]", fontsize=18)
if i == 0:
ax.set_ylabel("$b$ [deg]", fontsize=18)
fig.tight_layout()
fig.savefig(os.path.join(plotpath, "mockstream-density.pdf"), rasterized=True, dpi=400)
fig.savefig(os.path.join(plotpath, "mockstream-density.png"), dpi=400)
###Output
_____no_output_____
###Markdown
---
###Code
fig,allaxes = pl.subplots(2, 5, figsize=(9,5), sharex=True, sharey=True)
for k,name_group in enumerate([all_names[:5],all_names[5:]]):
for i,name in enumerate(name_group):
ax = allaxes[k,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot, streams, prog, _ = get_potential_stream_prog(name)
stream = streams[:,best_ix]
model_c,model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# ix = distance_ix(model_c.l, model_c.distance) & (model_c.l > 0) & (model_c.l < 10*u.deg) | True
ix = np.ones(model_c.l.size).astype(bool)
ax.plot(stream.pos[0][ix], stream.pos[2][ix], rasterized=True, **style)
lbd_corners = np.vstack(map(np.ravel, np.meshgrid([1.5,9.5],[26.,33.],[5.3,9.7])))
corners_g = coord.Galactic(l=lbd_corners[0]*u.degree, b=lbd_corners[1]*u.degree, distance=lbd_corners[2]*u.kpc)
corners = corners_g.transform_to(galactocentric_frame).cartesian.xyz.value
# corners = corners.reshape(3,2,2,2)[...,0].reshape(3,4)
# only pick 4 of the points to plot
corners = corners[:,corners[1] > 0.5][[0,2]]
# compute centroid
cent = np.mean(corners, axis=1)
# sort by polar angle
corner_list = corners.T.tolist()
corner_list.sort(key=lambda p: np.arctan2(p[1]-cent[1],p[0]-cent[0]))
# plot polyline
poly = mpl.patches.Polygon(corner_list, closed=True, fill=False, color='#2166AC', linestyle='-')
ax.add_patch(poly)
# ax.plot(corners[0], corners[2], ls='none', marker='o', markersize=4)
# ax.plot(-galactocentric_frame.galcen_distance, 0., marker='o', color='y', markersize=8)
ax.text(-galactocentric_frame.galcen_distance.value, 0., r"$\odot$", fontsize=18, ha='center', va='center')
# Text
ax.set_title(name_map[name], fontsize=20)
if k == 1:
ax.set_xlabel("$x$ [kpc]", fontsize=18)
# break
# break
# Axis limits
ax.set_xlim(-10,5)
ax.set_ylim(-7.5,7.5)
ax.xaxis.set_ticks([-8,-4,0,4])
ax.yaxis.set_ticks([-4,0,4])
allaxes[0,0].set_ylabel("$z$ [kpc]", fontsize=18)
allaxes[1,0].set_ylabel("$z$ [kpc]", fontsize=18)
fig.tight_layout()
fig.savefig(os.path.join(plotpath, "mockstream-xyz.pdf"), dpi=400)
# fig.savefig(os.path.join(plotpath, "mockstream-xyz.png"), dpi=400)
import astropy.units as u
(300*u.km/u.s / (4*u.kpc)).to(u.mas/u.yr, equivalencies=u.dimensionless_angles())
from gala.observation import distance
distance(12.5 - (-2.5))
###Output
_____no_output_____
###Markdown
--- Normalize the total number of stars in "dense" part of stream
###Code
def normalize_to_total_number(name, noph_stars=500): # from Bernard paper
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot, streams, prog, _ = get_potential_stream_prog(name)
stream = streams[:-1000,best_ix] # cut off at time of disruption, -250 Myr (2 stars, 0.5 Myr release)
# stream = streams[:,best_ix]
model_c,model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
ix = (model_c.distance > 4.5*u.kpc) & (model_c.distance < 15*u.kpc)
# how many points are between dense part of stream?
ix2 = ix & (model_c.l > 3.8*u.deg) & (model_c.l < 5.85*u.deg) & (model_c.b > 29.5*u.deg) & (model_c.b < 32.5*u.deg)
print(ix2.sum())
every = int(round(ix2.sum() / float(noph_stars)))
return model_c[ix]#[::every]
xbounds = (2, 9)
ybounds = (27, 34)
area = (xbounds[1]-xbounds[0]) * (ybounds[1]-ybounds[0])
static_gal = normalize_to_total_number("static_mw")
static_lb = np.vstack((static_gal.l.degree, static_gal.b.degree))
bar_gal = normalize_to_total_number("barred_mw_8")
bar_lb = np.vstack((bar_gal.l.degree, bar_gal.b.degree))
from scipy.ndimage import gaussian_filter
# ddeg = (6*u.arcmin).to(u.degree).value
ddeg = (10*u.arcmin).to(u.degree).value
xbins = np.arange(xbounds[0], xbounds[1]+ddeg, ddeg)
ybins = np.arange(ybounds[0], ybounds[1]+ddeg, ddeg)
H_static,_,_ = np.histogram2d(static_lb[0], static_lb[1], bins=(xbins, ybins))
H_bar,_,_ = np.histogram2d(bar_lb[0], bar_lb[1], bins=(xbins, ybins))
bg_density = 1500 # stars / deg^2
n_bg = int(bg_density*area)
# bg_l = np.random.uniform(xbounds[0], xbounds[1], size=int(bg_density*area))
# bg_b = np.random.uniform(ybounds[0], ybounds[1], size=int(bg_density*area))
# bg_im = np.random.poisson(size=H_static.shape)
# bg_im = bg_im * float(n_bg) / bg_im.sum()
bg_im = np.random.poisson(lam=42, size=H_static.shape)
bg_im.sum(), n_bg
((1500 / u.degree**2) * (10*u.arcmin)**2).decompose()
static_im = H_static + bg_im
bar_im = H_bar + bg_im
_ = pl.hist(bg_im.ravel())
# H = gaussian_filter(H, sigma=ddeg)
from scipy.stats import scoreatpercentile
_flat = static_im.ravel()
pl.hist(static_im.ravel(), bins=np.linspace(0, 1.5*scoreatpercentile(_flat,75),32), alpha=0.5);
pl.hist(bar_im.ravel(), bins=np.linspace(0, 1.5*scoreatpercentile(_flat,75),32), alpha=0.5);
vmin = scoreatpercentile(_flat,2)
vmax = scoreatpercentile(_flat,98)
imshow_kw = dict(extent=[xbins.min(), xbins.max(), ybins.min(), ybins.max()],
origin='bottom', cmap='Greys', vmin=vmin, vmax=vmax, interpolation='nearest')
line_style = {
"marker": None,
"linestyle": '--',
"color": 'k',
"linewidth": 1.5,
"dashes": (3,2)
}
# H,_,_ = np.histogram2d(all_static[0], all_static[1], bins=(xbins, ybins))
# H = gaussian_filter(H, sigma=ddeg)
pl.imshow(static_im.T, **imshow_kw)
pl.plot([3.8,3.8], [31.5,32.5], **line_style)
pl.plot([5.85,5.85], [30.,31], **line_style)
pl.xlim(xbins.max(), xbins.min())
pl.ylim(ybins.min(), ybins.max())
pl.xlabel("$l$ [deg]")
pl.ylabel("$b$ [deg]")
pl.title("static")
# pl.tight_layout()
pl.gca().set_aspect('equal')
# H,_,_ = np.histogram2d(all_bar[0], all_bar[1], bins=(xbins, ybins))
# H = gaussian_filter(H, sigma=ddeg)
pl.imshow(bar_im.T, **imshow_kw)
pl.plot([3.8,3.8], [31.5,32.5], **line_style)
pl.plot([5.85,5.85], [30.,31], **line_style)
pl.xlim(xbins.max(), xbins.min())
pl.ylim(ybins.min(), ybins.max())
pl.xlabel("$l$ [deg]")
pl.ylabel("$b$ [deg]")
pl.title("bar8")
pl.tight_layout()
_density_plot_cache = dict()
line_style = {
"marker": None,
"linestyle": '-',
"color": 'k',
"linewidth": 2,
}
fig,allaxes = pl.subplots(2, 5, figsize=(9,5), sharex=True, sharey=True)
vmin = vmax = None
for i,name in enumerate(all_names):
print(name)
ax = allaxes.flat[i]
if name not in _density_plot_cache:
gal = normalize_to_total_number(name)
lb = np.vstack((gal.l.degree, gal.b.degree))
H,_,_ = np.histogram2d(lb[0], lb[1], bins=(xbins, ybins))
im = H + bg_im
_density_plot_cache[name] = im.T
im = _density_plot_cache[name]
if vmin is None:
_flat = im.ravel()
vmin = scoreatpercentile(_flat,2)
vmax = scoreatpercentile(_flat,98)
imshow_kw = dict(extent=[xbins.min(), xbins.max(), ybins.min(), ybins.max()],
origin='bottom', cmap='Greys', vmin=vmin, vmax=vmax, interpolation='nearest')
ax.imshow(im, **imshow_kw)
ax.plot([3.8,3.8], [31.5,32.5], **line_style)
ax.plot([5.85,5.85], [30.,31], **line_style)
# Text
ax.set_title(name_map[name], fontsize=20)
if i > 4:
ax.set_xlabel("$l$ [deg]", fontsize=18)
ax.set(adjustable='box-forced', aspect='equal')
ax.set_xlim(xbins.max(), xbins.min())
ax.set_ylim(ybins.min(), ybins.max())
allaxes[0,0].set_ylabel("$b$ [deg]", fontsize=18)
allaxes[1,0].set_ylabel("$b$ [deg]", fontsize=18)
fig.tight_layout()
fig.savefig(os.path.join(plotpath, "densitymaps.pdf"))
fig.savefig(os.path.join(plotpath, "densitymaps.png"), dpi=400)
###Output
_____no_output_____
###Markdown
--- For CfA talk
###Code
pick = [1,8,9]
style = {
"linestyle": "none",
"marker": 'o',
"markersize": 2,
"alpha": 0.3,
"color": "#555555"
}
line_style = {
"marker": None,
"linestyle": '--',
"color": 'k',
"linewidth": 1.5,
"dashes": (3,2)
}
data_style = dict(marker='o', ms=4, ls='none', ecolor='#333333', alpha=0.75, color='k')
data_b_style = data_style.copy()
data_b_style['marker'] = 's'
# contour stuff
levels = np.array([-2,-1,0,1]) - 0.1
fig,allaxes = pl.subplots(2, 4, figsize=(12,6), sharex='col', sharey='row')
for i,name in enumerate(['static_mw']+['barred_mw_'+str(j) for j in pick]):
axes = allaxes[:,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot = op.load_potential(name)
Omega = (pot.parameters['bar']['Omega']/u.Myr).to(u.km/u.s/u.kpc).value
if i == 0:
title = 'no bar'
else:
title = r'$\Omega_p={:d}\,{{\rm km/s/kpc}}$'.format(int(Omega))
pot, streams, prog, _ = get_potential_stream_prog(name)
stream = streams[:,best_ix]
model_c,model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# cut to just points in the window
ix = (model_c.l > 0*u.deg) & (model_c.l < 11*u.deg) & (model_c.b > 25.5*u.deg) & (model_c.b < 34.5*u.deg)
# density in sky coords
sky_ix = ix & (model_c.distance > 4.5*u.kpc) & (model_c.distance < 10*u.kpc)
grid,log_dens = surface_density(model_c[sky_ix], bandwidth=0.2)
cs = axes[0].contourf(grid[:,0].reshape(log_dens.shape), grid[:,1].reshape(log_dens.shape), log_dens,
levels=levels, cmap='magma_r')
axes[1].plot(model_c.l[ix], galactic.decompose(model_v[2][ix]), rasterized=True, **style)
# l,b
axes[0].plot([3.8,3.8], [31.5,32.5], **line_style)
axes[0].plot([5.85,5.85], [30.,31], **line_style)
# l,vr
axes[1].plot([3.8,3.8], [276,292], **line_style)
axes[1].plot([5.85,5.85], [284,300], **line_style)
# the data
_tmp = data_b_style.copy(); _tmp.pop('ecolor')
axes[0].plot(ophdata_fan.coord.l.degree, ophdata_fan.coord.b.degree, **_tmp)
axes[1].errorbar(ophdata_fan.coord.l.degree, ophdata_fan.veloc['vr'].to(u.km/u.s).value,
ophdata_fan.veloc_err['vr'].to(u.km/u.s).value, **data_b_style)
# Axis limits
axes[0].set_xlim(9.,2)
axes[0].set_ylim(26.5, 33.5)
axes[1].set_ylim(225, 325)
# Text
axes[0].set_title(title, fontsize=18)
axes[1].set_xlabel("$l$ [deg]", fontsize=18)
if i == 0:
axes[0].set_ylabel("$b$ [deg]", fontsize=18)
axes[1].set_ylabel(r"$v_r$ [${\rm km}\,{\rm s}^{-1}$]", fontsize=18)
fig.tight_layout()
_density_plot_cache2 = dict()
line_style = {
"marker": None,
"linestyle": '-',
"color": 'k',
"linewidth": 2,
}
fig,allaxes = pl.subplots(1, 4, figsize=(11,3), sharex=True, sharey=True)
vmin = vmax = None
for i,name in enumerate(['static_mw']+['barred_mw_'+str(j) for j in pick]):
print(name)
ax = allaxes.flat[i]
pot = op.load_potential(name)
Omega = (pot.parameters['bar']['Omega']/u.Myr).to(u.km/u.s/u.kpc).value
if i == 0:
title = 'no bar'
else:
title = r'$\Omega_p={:d}\,{{\rm km/s/kpc}}$'.format(int(Omega))
if name not in _density_plot_cache2:
gal = normalize_to_total_number(name)
lb = np.vstack((gal.l.degree, gal.b.degree))
H,_,_ = np.histogram2d(lb[0], lb[1], bins=(xbins, ybins))
im = H + bg_im
_density_plot_cache2[name] = im.T
im = _density_plot_cache2[name]
if vmin is None:
_flat = im.ravel()
vmin = scoreatpercentile(_flat,2)
vmax = scoreatpercentile(_flat,98)
imshow_kw = dict(extent=[xbins.min(), xbins.max(), ybins.min(), ybins.max()],
origin='bottom', cmap='Greys', vmin=vmin, vmax=vmax, interpolation='nearest')
ax.imshow(im, **imshow_kw)
ax.plot([3.8,3.8], [31.5,32.5], **line_style)
ax.plot([5.85,5.85], [30.,31], **line_style)
# Text
ax.set_title(title, fontsize=18)
ax.set_xlabel("$l$ [deg]", fontsize=18)
ax.set(adjustable='box-forced', aspect='equal')
ax.set_xlim(xbins.max(), xbins.min())
ax.set_ylim(ybins.min(), ybins.max())
allaxes[0].set_ylabel("$b$ [deg]", fontsize=18)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
---
###Code
style = {
"linestyle": "none",
"marker": 'o',
"markersize": 2,
"alpha": 0.3,
"color": "#555555"
}
line_style = {
"marker": None,
"linestyle": '--',
"color": 'k',
"linewidth": 1.5,
"dashes": (3,2)
}
data_style = dict(marker='o', ms=6, ls='none', alpha=0.9, color='#2166AC')
data_b_style = data_style.copy()
data_b_style['marker'] = 's'
# contour stuff
levels = np.array([-2,-1,0,1]) - 0.1
fig,allaxes = pl.subplots(2, 4, figsize=(12,6), sharex='col', sharey='row')
for i,name in enumerate(['static_mw']+['barred_mw_'+str(j) for j in pick]):
axes = allaxes[:,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot = op.load_potential(name)
Omega = (pot.parameters['bar']['Omega']/u.Myr).to(u.km/u.s/u.kpc).value
if i == 0:
title = 'no bar'
else:
title = r'$\Omega_p={:d}\,{{\rm km/s/kpc}}$'.format(int(Omega))
pot, streams, prog, _ = get_potential_stream_prog(name)
stream = streams[:,best_ix]
model_c,model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# cut to just points in the window
ix = (model_c.l > 0*u.deg) & (model_c.l < 11*u.deg) & (model_c.b > 25.5*u.deg) & (model_c.b < 34.5*u.deg)
# density in sky coords
# sky_ix = ix & (model_c.distance > 4.5*u.kpc) & (model_c.distance < 10*u.kpc)
sky_ix = ix
grid,log_dens = surface_density(model_c[sky_ix], bandwidth=0.2)
cs = axes[0].contourf(grid[:,0].reshape(log_dens.shape), grid[:,1].reshape(log_dens.shape), log_dens,
levels=levels, cmap='magma_r')
axes[1].plot(model_c.l[ix], galactic.decompose(model_v[2][ix]), rasterized=True, **style)
# l,b
axes[0].plot([3.8,3.8], [31.5,32.5], **line_style)
axes[0].plot([5.85,5.85], [30.,31], **line_style)
# l,vr
axes[1].plot([3.8,3.8], [276,292], **line_style)
axes[1].plot([5.85,5.85], [284,300], **line_style)
# the data
axes[0].plot(ophdata_fan.coord.l.degree, ophdata_fan.coord.b.degree, **data_b_style)
axes[1].plot(ophdata_fan.coord.l.degree, ophdata_fan.veloc['vr'].to(u.km/u.s).value, **data_b_style)
# Axis limits
axes[0].set_xlim(9.,2)
axes[0].set_ylim(26.5, 33.5)
axes[1].set_ylim(200, 350)
# Text
axes[0].set_title(title, fontsize=18)
axes[1].set_xlabel("$l$ [deg]", fontsize=18)
if i == 0:
axes[0].set_ylabel("$b$ [deg]", fontsize=18)
axes[1].set_ylabel(r"$v_{\rm los}$ [${\rm km}\,{\rm s}^{-1}$]", fontsize=18)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
MOVE TO A "talk figures" NOTEBOOK
###Code
style = {
"marker": '.',
"alpha": 0.75,
"cmap": custom_cmap,
"vmin": -1000, "vmax": 0.
}
line_style = {
"marker": None,
"linestyle": '--',
"color": 'k',
"linewidth": 1.5,
"dashes": (3,2)
}
data_style = dict(marker='o', ms=6, ls='none', alpha=0.9, color='#2166AC')
data_b_style = data_style.copy()
data_b_style['marker'] = 's'
# contour stuff
levels = np.array([-2,-1,0,1]) - 0.1
fig,allaxes = pl.subplots(2, 4, figsize=(12,6), sharex='col', sharey='row')
for i,name in enumerate(['static_mw']+['barred_mw_'+str(j) for j in pick]):
axes = allaxes[:,i]
# path to file to cache the likelihoods
cache_file = os.path.join(RESULTSPATH, name, "mockstream", "ln_likelihoods.npy")
lls = np.load(cache_file)
best_ix = lls.sum(axis=1).argmax()
pot = op.load_potential(name)
Omega = (pot.parameters['bar']['Omega']/u.Myr).to(u.km/u.s/u.kpc).value
if i == 0:
title = 'no bar'
else:
title = r'$\Omega_p={:d}\,{{\rm km/s/kpc}}$'.format(int(Omega))
pot, streams, prog, release_t = get_potential_stream_prog(name)
stream = streams[:,best_ix]
model_c,_model_v = stream.to_frame(coord.Galactic, galactocentric_frame=galactocentric_frame, vcirc=vcirc, vlsr=vlsr)
# remove prog orbit
model_c = model_c[1:]
model_v = []
for v in _model_v:
model_v.append(v[1:])
# cut to just points in the window
ix = (model_c.l > -2*u.deg) & (model_c.l < 14*u.deg) & (model_c.b > 23.5*u.deg) & (model_c.b < 36.5*u.deg)
axes[0].scatter(model_c.l[ix], model_c.b[ix], rasterized=True, c=release_t[ix], **style)
axes[1].scatter(model_c.l[ix], galactic.decompose(model_v[2][ix]), rasterized=True,
c=release_t[ix], **style)
# l,b
axes[0].plot([3.8,3.8], [31.5,32.5], **line_style)
axes[0].plot([5.85,5.85], [30.,31], **line_style)
# l,vr
axes[1].plot([3.8,3.8], [276,292], **line_style)
axes[1].plot([5.85,5.85], [284,300], **line_style)
# the data
axes[0].plot(ophdata_fan.coord.l.degree, ophdata_fan.coord.b.degree, **data_b_style)
axes[1].plot(ophdata_fan.coord.l.degree, ophdata_fan.veloc['vr'].to(u.km/u.s).value, **data_b_style)
# Axis limits
axes[0].set_xlim(9.,2)
axes[0].set_ylim(26.5, 33.5)
axes[1].set_ylim(200, 350)
# Text
axes[0].set_title(title, fontsize=18)
axes[1].set_xlabel("$l$ [deg]", fontsize=18)
if i == 0:
axes[0].set_ylabel("$b$ [deg]", fontsize=18)
axes[1].set_ylabel(r"$v_{\rm los}$ [${\rm km}\,{\rm s}^{-1}$]", fontsize=18)
fig.tight_layout()
from ophiuchus.experiments import LyapunovGrid
bins = np.linspace(0.3,1.1,12)*u.Gyr
fig,axes = pl.subplots(1, 4, figsize=(12,3.5), sharex='col', sharey='row')
for i,name in enumerate(['static_mw']+['barred_mw_'+str(j) for j in pick]):
pot = op.load_potential(name)
Omega = (pot.parameters['bar']['Omega']/u.Myr).to(u.km/u.s/u.kpc).value
if i == 0:
title = 'no bar'
else:
title = r'$\Omega_p={:d}\,{{\rm km/s/kpc}}$'.format(int(Omega))
gr = LyapunovGrid.from_config(os.path.join(RESULTSPATH,name,"lyapunov"),
os.path.join(RESULTSPATH,"global_lyapunov.cfg"),
potential_name=name)
d = gr.read_cache()
ftmle = (d['mle_avg']*1/u.Myr)
lyap_time = (1/ftmle).to(u.Myr)
axes[i].hist(lyap_time, bins=bins.to(u.Myr))
axes[i].set_xlim(200,1300)
axes[i].set_ylim(0,60)
axes[i].xaxis.set_ticks([300,600,900,1200])
axes[i].yaxis.set_ticks([0,20,40,60])
axes[i].set_xlabel(r"$t_\lambda$ [Myr]", fontsize=18)
axes[i].set_title(title, fontsize=18)
axes[0].set_ylabel('$N$')
fig.tight_layout()
###Output
_____no_output_____ |
03 ML0101EN-Reg-Polynomial-Regression-Co2-py-v1.ipynb | ###Markdown
Polynomial Regression

About this Notebook
In this notebook, we learn how to use scikit-learn for polynomial regression. We download a dataset related to fuel consumption and carbon dioxide emissions of cars. Then we split our data into training and test sets, create a model using the training set, evaluate the model using the test set, and finally use the model to predict an unknown value.

Table of contents
- Downloading Data
- Polynomial regression
- Evaluation
- Practice

Importing Needed packages
###Code
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Downloading Data
To download the data, we will use !wget to download it from IBM Object Storage.
###Code
!wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
###Output
--2020-08-07 09:43:38-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’
FuelConsumption.csv 100%[===================>] 70.93K --.-KB/s in 0.04s
2020-08-07 09:43:38 (1.79 MB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
###Markdown
__Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)

Understanding the Data
`FuelConsumption.csv`:
We have downloaded a fuel consumption dataset, **`FuelConsumption.csv`**, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. [Dataset source](http://open.canada.ca/data/en/dataset/98f1a129-f628-4ce4-b24d-6f16bf24dd64)
- **MODELYEAR** e.g. 2014
- **MAKE** e.g. Acura
- **MODEL** e.g. ILX
- **VEHICLE CLASS** e.g. SUV
- **ENGINE SIZE** e.g. 4.7
- **CYLINDERS** e.g. 6
- **TRANSMISSION** e.g. A6
- **FUEL CONSUMPTION in CITY (L/100 km)** e.g. 9.9
- **FUEL CONSUMPTION in HWY (L/100 km)** e.g. 8.9
- **FUEL CONSUMPTION COMB (L/100 km)** e.g. 9.2
- **CO2 EMISSIONS (g/km)** e.g. 182 --> low --> 0

Reading the data in
###Code
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
###Output
_____no_output_____
###Markdown
Let's select some features that we want to use for regression.
###Code
#selecting our columns of interest
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
###Output
_____no_output_____
###Markdown
Let's plot Emission values with respect to Engine size:
###Code
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
###Output
_____no_output_____
###Markdown
Creating train and test dataset
Train/test split involves splitting the dataset into training and testing sets, which are mutually exclusive. After that, you train with the training set and test with the testing set.
###Code
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
###Output
_____no_output_____
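###Markdown
 The random boolean mask above gives an approximate 80/20 split. As a hedged alternative sketch, scikit-learn's `train_test_split` does the same job in one call; the 0.2 test fraction and `random_state=42` below are just illustrative choices.
###Code
from sklearn.model_selection import train_test_split
train_alt, test_alt = train_test_split(cdf, test_size=0.2, random_state=42)
print(len(train_alt), len(test_alt))
###Output
_____no_output_____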
###Markdown
Polynomial regression
Sometimes the trend of the data is not really linear and looks curvy. In this case we can use polynomial regression methods. In fact, many different regressions exist that can be used to fit whatever the dataset looks like, such as quadratic, cubic, and so on, up to arbitrarily high degrees. In essence, we can call all of these polynomial regression, where the relationship between the independent variable x and the dependent variable y is modeled as an nth-degree polynomial in x. Let's say you want a polynomial regression (we will make a degree-2 polynomial):

$y = b + \theta_1 x + \theta_2 x^2$

Now the question is: how can we fit our data to this equation when we only have x values, such as __Engine Size__? Well, we can create a few additional features: 1, $x$, and $x^2$. The __PolynomialFeatures()__ function in the scikit-learn library derives a new feature set from the original feature set. That is, a matrix is generated consisting of all polynomial combinations of the features with degree less than or equal to the specified degree. For example, let's say the original feature set has only one feature, _ENGINESIZE_. If we select the degree of the polynomial to be 2, then it generates 3 features: degree=0, degree=1 and degree=2:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=2)
train_x_poly = poly.fit_transform(train_x)
#train_x_poly
###Output
_____no_output_____
###Markdown
**fit_transform** takes our x values and outputs our data raised from the power of 0 to the power of 2 (since we set the degree of our polynomial to 2).$\begin{bmatrix} v_1\\ v_2\\ \vdots\\ v_n\end{bmatrix}$$\longrightarrow$$\begin{bmatrix} 1 & v_1 & v_1^2\\ 1 & v_2 & v_2^2\\ \vdots & \vdots & \vdots\\ 1 & v_n & v_n^2\end{bmatrix}$ in our example $\begin{bmatrix} 2.\\ 2.4\\ 1.5\\ \vdots\end{bmatrix}$$\longrightarrow$$\begin{bmatrix} 1 & 2. & 4.\\ 1 & 2.4 & 5.76\\ 1 & 1.5 & 2.25\\ \vdots & \vdots & \vdots\\\end{bmatrix}$ It looks like a feature set for multiple linear regression analysis, right? Yes, it does. Indeed, polynomial regression is a special case of linear regression; the main idea is how you select your features. Just consider replacing $x$ with $x_1$, $x_1^2$ with $x_2$, and so on. Then the degree-2 equation turns into: $y = b + \theta_1 x_1 + \theta_2 x_2$ Now we can treat it as a linear regression problem. Therefore, polynomial regression is considered a special case of traditional multiple linear regression, and you can use the same mechanism as linear regression to solve such problems. So we can use the __LinearRegression()__ function to solve it:
###Code
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
###Output
Coefficients: [[ 0. 51.38793947 -1.66339345]]
Intercept: [105.41794938]
###Markdown
As mentioned before, __Coefficient__ and __Intercept__ are the parameters of the fitted curve. Given that it is a typical multiple linear regression with 3 parameters, and knowing that the parameters are the intercept and coefficients of the hyperplane, sklearn has estimated them from our new feature set. Let's plot it:
###Code
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1) #arrange from 0 to 10 in interval of 0.1
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
###Output
_____no_output_____
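###Markdown
Before evaluating, it is worth checking that the curve we just plotted from the coefficients really matches what the model predicts. A small sanity check, re-using the train_x, train_x_poly and clf objects defined above:
###Code
# Sanity check: the explicit quadratic formula should match clf.predict on the polynomial features
manual = clf.intercept_[0] + clf.coef_[0][1]*train_x[:, 0] + clf.coef_[0][2]*np.power(train_x[:, 0], 2)
print(np.allclose(manual, clf.predict(train_x_poly).ravel()))
###Output
_____no_output_____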
###Markdown
Evaluation
###Code
from sklearn.metrics import r2_score
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y_ , test_y) )
###Output
Mean absolute error: 23.26
Residual sum of squares (MSE): 935.18
R2-score: 0.69
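###Markdown
The same metrics can also be computed with scikit-learn's helper functions. By convention these take the true values first and the predictions second; note that the r2_score call above passes them in the opposite order, which in general gives a different value. A sketch using the arrays computed above:
###Code
# Equivalent metrics via sklearn.metrics (y_true first, y_pred second by convention)
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
print("Mean absolute error: %.2f" % mean_absolute_error(test_y, test_y_))
print("Residual sum of squares (MSE): %.2f" % mean_squared_error(test_y, test_y_))
print("R2-score: %.2f" % r2_score(test_y, test_y_))
###Output
_____no_output_____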
###Markdown
Practice Try to use a polynomial regression with the dataset, but this time with degree three (cubic). Does it result in better accuracy?
###Code
# The cubic fit gives a very slightly better result
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
poly = PolynomialFeatures(degree=3)
train_x_poly = poly.fit_transform(train_x)
train_x_poly
clf = linear_model.LinearRegression()
train_y_ = clf.fit(train_x_poly, train_y)
# The coefficients
print ('Coefficients: ', clf.coef_)
print ('Intercept: ',clf.intercept_)
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
XX = np.arange(0.0, 10.0, 0.1) #arrange from 0 to 10 in interval of 0.1
yy = clf.intercept_[0]+ clf.coef_[0][1]*XX+ clf.coef_[0][2]*np.power(XX, 2)+ clf.coef_[0][3]*np.power(XX, 3)
plt.plot(XX, yy, '-r' )
plt.xlabel("Engine size")
plt.ylabel("Emission")
from sklearn.metrics import r2_score
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_x_poly = poly.fit_transform(test_x)
test_y_ = clf.predict(test_x_poly)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y_ , test_y) )
###Output
Coefficients: [[ 0. 33.48192189 3.24974737 -0.40587362]]
Intercept: [124.52839587]
Mean absolute error: 23.20
Residual sum of squares (MSE): 932.29
R2-score: 0.69
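###Markdown
To answer the practice question more systematically, one option is to loop over a few degrees and compare the R2 scores on the held-out test set. This is only a sketch (the exact numbers depend on the random train/test split above):
###Code
# Compare polynomial degrees on the held-out test set (a sketch; numbers depend on the random split)
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
from sklearn.metrics import r2_score
tr_x = np.asanyarray(train[['ENGINESIZE']])
tr_y = np.asanyarray(train[['CO2EMISSIONS']])
te_x = np.asanyarray(test[['ENGINESIZE']])
te_y = np.asanyarray(test[['CO2EMISSIONS']])
for degree in [1, 2, 3, 4]:
    poly_d = PolynomialFeatures(degree=degree)
    model_d = linear_model.LinearRegression()
    model_d.fit(poly_d.fit_transform(tr_x), tr_y)
    pred_d = model_d.predict(poly_d.transform(te_x))
    print("degree %d: R2 = %.3f" % (degree, r2_score(te_y, pred_d)))
###Output
_____no_output_____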
|
Chapter 02/Hello Featuretools.ipynb | ###Markdown
Install
###Code
#!pip install featuretools
# For conda
# conda install -c conda-forge featuretools
import featuretools as ft
###Output
_____no_output_____
###Markdown
Toy dataset
###Code
data = ft.demo.load_mock_customer()
data.keys()
###Output
_____no_output_____
###Markdown
Entities * customers: unique customers who had sessions * sessions: unique sessions and associated attributes * transactions: list of events in this session * products: list of products involved in the transactions Creating an EntitySet
###Code
es = ft.demo.load_mock_customer(return_entityset=True)
es
es["customers"].variables
###Output
_____no_output_____
###Markdown
Generating features
###Code
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_entity="customers", #table or entity where feature will be added
agg_primitives=["count"],
trans_primitives=["month"],
max_depth=1)
feature_defs
feature_matrix
###Output
_____no_output_____
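###Markdown
featuretools ships with many more aggregation and transform primitives than the two used above. If the installed version provides it, list_primitives() returns them as a DataFrame (a sketch for browsing only):
###Code
# Browse the available primitives (assumes ft.list_primitives() is available in this version)
primitives = ft.list_primitives()
primitives.head(10)
###Output
_____no_output_____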
###Markdown
Creating deep features
###Code
# Deep features are built by stacking primitives.
# The max_depth parameter limits how many primitives can be stacked.
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_entity="customers",
agg_primitives=["mean", "sum", "mode"],
trans_primitives=["month", "hour"],
max_depth=2)
feature_defs
feature_matrix
###Output
_____no_output_____
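###Markdown
The feature matrix is an ordinary pandas DataFrame indexed by customer, so it can be handed to any scikit-learn model once the categorical columns (for example the MODE(...) features) are encoded. A minimal sketch, where the one-hot encoding step is illustrative and not part of the original example:
###Code
# A sketch: one-hot encode the categorical columns of the feature matrix for downstream modelling
import pandas as pd
fm_numeric = pd.get_dummies(feature_matrix)
fm_numeric.head()
###Output
_____no_output_____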
###Markdown
Creating an EntitySet from scratch
###Code
data = ft.demo.load_mock_customer()
data['products']
data["transactions"]
data["sessions"]
data["customers"]
transactions_df = data["transactions"].merge(data["sessions"]).merge(data["customers"])
transactions_df.head()
products_df = data["products"]
products_df
#STEP 1. Initialize an EntitySet
es = ft.EntitySet(id="customer_data")
#STEP 2. Add entitites
es = es.entity_from_dataframe(entity_id="transactions",
dataframe=transactions_df,
index="transaction_id",
time_index="transaction_time", #when the data in each row became known.
variable_types={"product_id": ft.variable_types.Categorical,
"zip_code": ft.variable_types.ZIPCode})
es.plot()
# Add Products
es = es.entity_from_dataframe(entity_id="products",
dataframe=products_df,
index="product_id")
es.plot()
###Output
_____no_output_____
###Markdown
Adding a relationship
###Code
# STEP 3 - Add relationship
# Parent entity comes first and child entity comes after
new_relationship = ft.Relationship(es["products"]["product_id"],
es["transactions"]["product_id"])
es = es.add_relationship(new_relationship)
es.plot()
#Time to build the feature matrix
feature_matrix, feature_defs = ft.dfs(entityset=es,
target_entity="products")
feature_defs
feature_matrix
###Output
_____no_output_____ |
doc/pub/LogReg/ipynb/.ipynb_checkpoints/LogReg-checkpoint.ipynb | ###Markdown
Data Analysis and Machine Learning: Logistic Regression **Morten Hjorth-Jensen**, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University Date: **Sep 26, 2018** Copyright 1999-2018, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license Logistic Regression In linear regression our main interest was centered on learning the coefficients of a functional fit (say a polynomial) in order to be able to predict the response of a continuous variable on some unseen data. The fit to the continuous variable $y_i$ is based on some independent variables $\hat{x}_i$. Linear regression resulted in analytical expressions (in terms of matrices to invert) for several quantities, ranging from the variance and thereby the confidence intervals of the parameters $\hat{\beta}$ to the mean squared error. If we can invert the product of the design matrices, linear regression then gives a simple recipe for fitting our data. Classification problems, however, are concerned with outcomes taking the form of discrete variables (i.e. categories). We may for example, on the basis of DNA sequencing for a number of patients, like to find out which mutations are important for a certain disease; or based on scans of various patients' brains, figure out if there is a tumor or not; or given a specific physical system, we'd like to identify its state, say whether it is an ordered or disordered system (a typical situation in solid state physics); or classify the status of a patient, whether she/he has a stroke or not, and many other similar situations. The most common situation we encounter when we apply logistic regression is that of two possible outcomes, normally denoted as a binary outcome, true or false, positive or negative, success or failure etc. Optimization and Deep learning Logistic regression will also serve as our stepping stone towards neural network algorithms and supervised deep learning. For logistic learning, the minimization of the cost function leads to a non-linear equation in the parameters $\hat{\beta}$. The optimization of the problem therefore calls for minimization algorithms. This forms the bottleneck of all machine learning algorithms, namely how to find reliable minima of a multi-variable function. This leads us to the family of gradient descent methods. The latter are the workhorses of basically all modern machine learning algorithms. We note also that many of the topics discussed here for logistic regression are also commonly used in modern supervised Deep Learning models, as we will see later. Basics We consider the case where the dependent variables, also called the responses or the outcomes, $y_i$ are discrete and only take values from $k=0,\dots,K-1$ (i.e. $K$ classes). The goal is to predict the output classes from the design matrix $\hat{X}\in\mathbb{R}^{n\times p}$ made of $n$ samples, each of which carries $p$ features or predictors. The primary goal is to identify the classes to which new unseen samples belong. Let us specialize to the case of two classes only, with outputs $y_i=0$ and $y_i=1$. Our outcomes could represent the status of a credit card user who could default or not on her/his credit card debt. That is $$y_i = \begin{bmatrix} 0 & \mathrm{no}\\ 1 & \mathrm{yes} \end{bmatrix}.$$ Linear classifier Before moving to the logistic model, let us try to use our linear regression model to classify these two outcomes.
We could for example fit a linear model to the default case if $y_i > 0.5$ and the no default case $y_i \leq 0.5$. We would then have our weighted linear combination, namely $$\begin{equation}\hat{y} = \hat{X}^T\hat{\beta} + \hat{\epsilon},\label{_auto1} \tag{1}\end{equation}$$ where $\hat{y}$ is a vector representing the possible outcomes, $\hat{X}$ is our $n\times p$ design matrix and $\hat{\beta}$ represents our estimators/predictors. Some selected properties The main problem with our function is that it takes values on the entire real axis. In the case of logistic regression, however, the labels $y_i$ are discrete variables. One simple way to get a discrete output is to have sign functions that map the output of a linear regressor to values $\{0,1\}$, $f(s_i)=sign(s_i)=1$ if $s_i\ge 0$ and 0 otherwise. We will encounter this model in our first demonstration of neural networks. Historically it is called the "perceptron" model in the machine learning literature. This model is extremely simple. However, in many cases it is more favorable to use a "soft" classifier that outputs the probability of a given category. This leads us to the logistic function. The code for plotting the perceptron can be seen here. This is nothing but the standard [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function). The logistic function The perceptron is an example of a "hard classification" model. We will encounter this model when we discuss neural networks as well. Each datapoint is deterministically assigned to a category (i.e. $y_i=0$ or $y_i=1$). In many cases, it is favorable to have a "soft" classifier that outputs the probability of a given category rather than a single value. For example, given $x_i$, the classifier outputs the probability of being in a category $k$. Logistic regression is the most common example of a so-called soft classifier. In logistic regression, the probability that a data point $x_i$ belongs to a category $y_i=\{0,1\}$ is given by the so-called logit function (or Sigmoid), which is meant to represent the likelihood of a given event, $$p(t) = \frac{1}{1+\exp{(-t)}}=\frac{\exp{t}}{1+\exp{t}}.$$ Note that $1-p(t)= p(-t)$. The following code plots the logistic function. Two parameters We assume now that we have two classes with $y_i$ either $0$ or $1$. Furthermore we assume also that we have only two parameters $\beta$ in our fitting of the Sigmoid function, that is we define probabilities $$\begin{align*}p(y_i=1|x_i,\hat{\beta}) &= \frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}},\nonumber\\p(y_i=0|x_i,\hat{\beta}) &= 1 - p(y_i=1|x_i,\hat{\beta}),\end{align*}$$ where $\hat{\beta}$ are the weights we wish to extract from data, in our case $\beta_0$ and $\beta_1$. Note that we used $$p(y_i=0\vert x_i, \hat{\beta}) = 1-p(y_i=1\vert x_i, \hat{\beta}).$$ Maximum likelihood In order to define the total likelihood for all possible outcomes from a dataset $\mathcal{D}=\{(y_i,x_i)\}$, with the binary labels $y_i\in\{0,1\}$ and where the data points are drawn independently, we use the so-called [Maximum Likelihood Estimation](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation) (MLE) principle. We aim thus at maximizing the probability of seeing the observed data.
We can then approximate the likelihood in terms of the product of the individual probabilities of a specific outcome $y_i$, that is $$\begin{align*}P(\mathcal{D}|\hat{\beta})& = \prod_{i=1}^n \left[p(y_i=1|x_i,\hat{\beta})\right]^{y_i}\left[1-p(y_i=1|x_i,\hat{\beta})\right]^{1-y_i}\nonumber \\\end{align*}$$ from which we obtain the log-likelihood and our **cost/loss** function $$\mathcal{C}(\hat{\beta}) = \sum_{i=1}^n \left( y_i\log{p(y_i=1|x_i,\hat{\beta})} + (1-y_i)\log\left[1-p(y_i=1|x_i,\hat{\beta})\right]\right).$$ The cost function rewritten Reordering the logarithms, we can rewrite the **cost/loss** function as $$\mathcal{C}(\hat{\beta}) = \sum_{i=1}^n \left(y_i(\beta_0+\beta_1x_i) -\log{(1+\exp{(\beta_0+\beta_1x_i)})}\right).$$ The maximum likelihood estimator is defined as the set of parameters that maximize the log-likelihood, where we maximize with respect to $\beta$. Since the cost (error) function is just the negative log-likelihood, for logistic regression we have that $$\mathcal{C}(\hat{\beta})=-\sum_{i=1}^n \left(y_i(\beta_0+\beta_1x_i) -\log{(1+\exp{(\beta_0+\beta_1x_i)})}\right).$$ This equation is known in statistics as the **cross entropy**. Finally, we note that just as in linear regression, in practice we often supplement the cross-entropy with additional regularization terms, usually $L_1$ and $L_2$ regularization as we did for Ridge and Lasso regression. Minimizing the cross entropy The cross entropy is a convex function of the weights $\hat{\beta}$ and, therefore, any local minimizer is a global minimizer. Minimizing this cost function with respect to the two parameters $\beta_0$ and $\beta_1$ we obtain $$\frac{\partial \mathcal{C}(\hat{\beta})}{\partial \beta_0} = -\sum_{i=1}^n \left(y_i -\frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}}\right),$$ and $$\frac{\partial \mathcal{C}(\hat{\beta})}{\partial \beta_1} = -\sum_{i=1}^n \left(y_ix_i -x_i\frac{\exp{(\beta_0+\beta_1x_i)}}{1+\exp{(\beta_0+\beta_1x_i)}}\right).$$ A more compact expression Let us now define a vector $\hat{y}$ with $n$ elements $y_i$, an $n\times p$ matrix $\hat{X}$ which contains the $x_i$ values and a vector $\hat{p}$ of fitted probabilities $p(y_i\vert x_i,\hat{\beta})$. We can rewrite the first derivative of the cost function in a more compact form as $$\frac{\partial \mathcal{C}(\hat{\beta})}{\partial \hat{\beta}} = -\hat{X}^T\left(\hat{y}-\hat{p}\right).$$ If we in addition define a diagonal matrix $\hat{W}$ with elements $p(y_i\vert x_i,\hat{\beta})(1-p(y_i\vert x_i,\hat{\beta}))$, we can obtain a compact expression of the second derivative as $$\frac{\partial^2 \mathcal{C}(\hat{\beta})}{\partial \hat{\beta}\partial \hat{\beta}^T} = \hat{X}^T\hat{W}\hat{X}.$$ Extending to more predictors Including more classes Optimizing the cost function Newton's method and gradient descent methods A **scikit-learn** example
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
['data', 'target_names', 'feature_names', 'target', 'DESCR']
X = iris["data"][:, 3:] # petal width
y = (iris["target"] == 2).astype(int)  # 1 if Iris-Virginica, else 0
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression()
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
plt.plot(X_new, y_proba[:, 1], "g-", label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", label="Not Iris-Virginica")
plt.show()
###Output
_____no_output_____
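###Markdown
To make the compact derivative $-\hat{X}^T(\hat{y}-\hat{p})$ derived above concrete, here is a minimal gradient-descent sketch on the same iris data. The step size and number of iterations are arbitrary choices, and since the scikit-learn fit above applies regularization by default, the two sets of parameters will agree only roughly:
###Code
# Minimal gradient-descent sketch for the cross entropy, using the gradient -X^T (y - p)
import numpy as np
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))
Xb = np.c_[np.ones((len(X), 1)), X]   # design matrix with a column of ones for the intercept
beta = np.zeros(Xb.shape[1])
eta = 0.1                             # step size (arbitrary choice)
for _ in range(10000):
    p = sigmoid(Xb @ beta)
    beta -= eta * (-Xb.T @ (y - p)) / len(y)
print("gradient descent beta:", beta)
print("sklearn intercept and coef:", log_reg.intercept_, log_reg.coef_)
###Output
_____no_output_____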
###Markdown
A simple classification problem
###Code
import numpy as np
from sklearn import datasets, linear_model
import matplotlib.pyplot as plt
def generate_data():
np.random.seed(0)
X, y = datasets.make_moons(200, noise=0.20)
return X, y
def visualize(X, y, clf):
# plt.scatter(X[:, 0], X[:, 1], s=40, c=y, cmap=plt.cm.Spectral)
# plt.show()
plot_decision_boundary(lambda x: clf.predict(x), X, y)
plt.title("Logistic Regression")
def plot_decision_boundary(pred_func, X, y):
# Set min and max values and give it some padding
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
# Generate a grid of points with distance h between them
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Predict the function value for the whole gid
Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
# Plot the contour and training examples
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)
plt.show()
def classify(X, y):
clf = linear_model.LogisticRegressionCV()
clf.fit(X, y)
return clf
def main():
X, y = generate_data()
# visualize(X, y)
clf = classify(X, y)
visualize(X, y, clf)
if __name__ == "__main__":
main()
###Output
_____no_output_____ |
01. Visualization/NASA_Turbofan_Engine_Data_Visualization 3-1.ipynb | ###Markdown
NASA Turbofan Engine Dataset Visualization 3-1 From here on, we write code that merges the data previously split by engine and by sensor back together per sensor, so that for each sensor the data from engines 1 through 100 can be shown in a single chart.
###Code
import pandas as pd
import numpy as np
from pathlib import Path
from pathlib import PurePath
from pandas import DataFrame
import xlwings as xw
import xlsxwriter
###Output
_____no_output_____
###Markdown
Code to merge the data split by engine and by sensor (engines 1 to 100) back together per sensor. Previously, all the different kinds of sensor data for each engine were drawn in one time-series graph; this time we extract only the engine number (Unit_Number), the flight time (Time), and the data from a single sensor. Later, based on this, we will build a table in which the engines are joined around one particular sensor. Before that, we first extract only the 'Time' (flight cycle) and 'Fan Inlet Temp' data of engine 4 from the FD001 train set. The required code is shown below. FD001 train set per-sensor data merge
###Code
# 01. Fan Inlet Temp 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 01. Fan Inlet Temp.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/01. Fan Inlet Temp/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 02. LPC Outlet Temp 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 02. LPC Outlet Temp.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/02. LPC Outlet Temp/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 03. HPC Outlet Temp 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 03. HPC Outlet Temp.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/03. HPC Outlet Temp/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 04. LPT Outlet Temp 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 04. LPT Outlet Temp.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/04. LPT Outlet Temp/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 05. Fan Inlet Press 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 05. Fan Inlet Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/05. Fan Inlet Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 06. Bypass Duct Press 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 06. Bypass Duct Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/06. Bypass Duct Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 07. Total HPC Outlet Press 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 07. Total HPC Outlet Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/07. Total HPC Outlet Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 08. Physical Fan Speed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 08. Physical Fan Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/08. Physical Fan Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 09. Physical Core Speed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 09. Physical Core Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/09. Physical Core Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 10. Engine Press Ratio 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 10. Engine Press Ratio.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/10. Engine Press Ratio/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 11. Static HPC Outlet Press 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 11. Static HPC Outlet Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/11. Static HPC Outlet Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 12. Fuel Flow Ratio to Ps30 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 12. Fuel Flow Ratio to Ps30.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/12. Fuel Flow Ratio to Ps30/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 13. Corrected Fan Speed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 13. Corrected Fan Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/13. Corrected Fan Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 14. Corrected Corr Speed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 14. Corrected Corr Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/14. Corrected Corr Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 15. Bypass Ratio 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 15. Bypass Ratio.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/15. Bypass Ratio/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 16. Burner Fuel-Air ratio 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 16. Burner Fuel-Air ratio.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/16. Burner Fuel-Air ratio/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 17. Bleed Enthalpy 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 17. Bleed Enthalpy.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/17. Bleed Enthalpy/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 18. Demanded Fan Speed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 18. Demanded Fan Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/18. Demanded Fan Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 19. Demanded Corrected Fan Speed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 19. Demanded Corrected Fan Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/19. Demanded Corrected Fan Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 20. HPT Collant Bleed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 20. HPT Collant Bleed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/20. HPT Collant Bleed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
# 21. LPT_Coolant_Bleed 데이터 병합
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 21. LPT_Coolant_Bleed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/21. LPT_Coolant_Bleed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close() # 센서별 데이터 병합 종료
print('생성 파일 : ', excel_file) # 생성한 파일 이름 출력
###Output
생성 파일 : C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/FD001 train 21. LPT_Coolant_Bleed.xlsx
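###Markdown
The 21 blocks above are identical except for the sensor name, so the same merge can be written as a single loop over the sensor names. This is only a sketch and assumes the same folder layout, 100 engines and 400 cycles as the code above:
###Code
# A sketch: the per-sensor merge above rewritten as one loop over the sensor names
sensor_names = ['01. Fan Inlet Temp', '02. LPC Outlet Temp', '03. HPC Outlet Temp', '04. LPT Outlet Temp',
                '05. Fan Inlet Press', '06. Bypass Duct Press', '07. Total HPC Outlet Press',
                '08. Physical Fan Speed', '09. Physical Core Speed', '10. Engine Press Ratio',
                '11. Static HPC Outlet Press', '12. Fuel Flow Ratio to Ps30', '13. Corrected Fan Speed',
                '14. Corrected Corr Speed', '15. Bypass Ratio', '16. Burner Fuel-Air ratio',
                '17. Bleed Enthalpy', '18. Demanded Fan Speed', '19. Demanded Corrected Fan Speed',
                '20. HPT Collant Bleed', '21. LPT_Coolant_Bleed']
out_folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
src_folder = 'C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/'
for sensor in sensor_names:
    workbook = xlsxwriter.Workbook(out_folder + 'FD001 train ' + sensor + '.xlsx')
    worksheet = workbook.add_worksheet()
    worksheet.write(0, 0, 'Time (Flight Cycle)')
    for j in range(100):                  # engine numbers as column headers
        worksheet.write(0, j + 1, str(j + 1) + '번 엔진')
    for i in range(400):                  # flight cycles on the x axis
        worksheet.write(i + 1, 0, i + 1)
    for j in range(100):                  # formula links into the per-engine files
        for i in range(400):
            ref = "='" + src_folder + sensor + "/[" + str(j + 1) + "엔진.xlsx]Sheet1'!$C" + str(i + 2)
            worksheet.write(i + 1, j + 1, ref)
    workbook.close()
    print('FD001 train ' + sensor + ' merged.')
###Output
_____no_output_____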
###Markdown
Opening the generated Excel file shows the following.
###Code
example = pd.read_excel('C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/FD001 train 01. Fan Inlet Temp.xlsx')
example
# 02. LPC Outlet Temp
# 03. HPC Outlet Temp
# 04. LPT Outlet Temp
# 05. Fan Inlet Press
# 06. Bypass Duct Press
# 07. Total HPC Outlet Press
# 08. Physical Fan Speed
# 09. Physical Core Speed
# 10. Engine Press Ratio
# 11. Static HPC Outlet Press
# 12. Fuel Flow Ratio to Ps30
# 13. Corrected Fan Speed
# 14. Corrected Corr Speed
# 15. Bypass Ratio
# 16. Burner Fuel-Air ratio
# 17. Bleed Enthalpy
# 18. Demanded Fan Speed
# 19. Demanded Corrected Fan Speed
# 20. HPT Collant Bleed
# 21. LPT_Coolant_Bleed
# 02. LPC Outlet Temp
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 02. LPC Outlet Temp.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/02. LPC Outlet Temp/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 02. LPC Outlet Temp 파일 병합이 완료되었습니다.')
# 03. HPC Outlet Temp
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 03. HPC Outlet Temp.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/03. HPC Outlet Temp/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 03. HPC Outlet Temp 파일 병합이 완료되었습니다.')
# 04. LPT Outlet Temp
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 04. LPT Outlet Temp.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/04. LPT Outlet Temp/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 04. LPT Outlet Temp 파일 병합이 완료되었습니다.')
# 05. Fan Inlet Press
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 05. Fan Inlet Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/05. Fan Inlet Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 05. Fan Inlet Press 파일 병합이 완료되었습니다.')
# 06. Bypass Duct Press
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 06. Bypass Duct Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/06. Bypass Duct Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 06. Bypass Duct Press 파일 병합이 완료되었습니다.')
# 07. Total HPC Outlet Press
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 07. Total HPC Outlet Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/07. Total HPC Outlet Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 07. Total HPC Outlet Press 파일 병합이 완료되었습니다.')
# 08. Physical Fan Speed
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 08. Physical Fan Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/08. Physical Fan Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 08. Physical Fan Speed 파일 병합이 완료되었습니다.')
# 09. Physical Core Speed
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 09. Physical Core Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/09. Physical Core Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 09. Physical Core Speed 파일 병합이 완료되었습니다.')
# 10. Engine Press Ratio
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 10. Engine Press Ratio.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/10. Engine Press Ratio/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 10. Engine Press Ratio 파일 병합이 완료되었습니다.')
# 11. Static HPC Outlet Press
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 11. Static HPC Outlet Press.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/11. Static HPC Outlet Press/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 11. Static HPC Outlet Press 파일 병합이 완료되었습니다.')
# 12. Fuel Flow Ratio to Ps30
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 12. Fuel Flow Ratio to Ps30.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/12. Fuel Flow Ratio to Ps30/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 12. Fuel Flow Ratio to Ps30 파일 병합이 완료되었습니다.')
# 13. Corrected Fan Speed
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 13. Corrected Fan Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/13. Corrected Fan Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 13. Corrected Fan Speed 파일 병합이 완료되었습니다.')
# 14. Corrected Corr Speed
# 1. 엑셀 파일 저장 경로
folder = 'C:/Users/jhj\Desktop/NASA_Turbofan_Engine_excel_data/센서별 병합 데이터 모음/train/FD001/'
excel_file = folder + 'FD001 train 14. Corrected Corr Speed.xlsx'
workbook = xlsxwriter.Workbook(excel_file) # 워크북 객체 생성
worksheet = workbook.add_worksheet() # 워크시트 생성
worksheet.write(0,0,'Time (Flight Cycle)')
# 2. 엔진 번호 입력
for i in range(100):
Engine_Number = str(i+1) + '번 엔진'
worksheet.write(0,i+1,Engine_Number)
# 3. x축에 해당하는 'Time (Cycle)'을 입력
for i in range(400):
worksheet.write(i+1,0,i+1)
# 4. 센서별 & 엔진별 분할된 데이터를 불러와서 병합
for j in range(100):
for i in range(400):
direct = "='C:/Users/jhj/Desktop/NASA_Turbofan_Engine_excel_data/센서별 & 엔진별 분할 데이터 모음/train/FD001/14. Corrected Corr Speed/[" + str(j+1) + "엔진.xlsx]Sheet1'!$C"
cell = str(i+2)
data = direct + cell
worksheet.write(i+1,j+1,data)
workbook.close()
print('FD001 train 14. Corrected Corr Speed 파일 병합이 완료되었습니다.')
###Output
_____no_output_____ |
perception analysis/individual preference.ipynb | ###Markdown
R² and user statements
###Code
g = sns.jointplot(
data=df_results,
x="accuracy_R2", y="accuracy",
xlim=(0,1), ylim=(0,1),
kind="reg"
)
g.set_axis_labels('accuracy R²', 'reported accuracy importance')
g = sns.jointplot(
data=df_results,
x="s_accuracy_R2", y="s_accuracy",
xlim=(0,1), ylim=(0,1),
kind="reg"
)
g.set_axis_labels('group accuracy R²', 'reported group accuracy importance')
g = sns.jointplot(
data=df_results,
x="abs_cv_R2", y="genderParity",
xlim=(0,1), ylim=(0,1),
kind="reg"
)
g.set_axis_labels('absolute group parity R²', 'reported group parity importance')
sns.set_style("white")
g = sns.jointplot(
data=df_results,
x="ordering_utility_R2", y="accuracy",
xlim=(0,1), ylim=(0,1),
kind="reg"
)
g.set_axis_labels('ordering utility R²', 'reported accuracy importance')
sns.set_style("white")
g = sns.jointplot(
data=df_results,
x="rND_R2", y="genderParity",
xlim=(0,1), ylim=(0,1),
kind="reg"
)
g.set_axis_labels('absolute group parity R²', 'reported group parity importance')
###Output
_____no_output_____
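###Markdown
The joint plots above can be summarised with a rank correlation between each fitted R² and the corresponding stated importance. A sketch using scipy (nan_policy='omit' skips participants without a fitted R²):
###Code
# Rank correlation between fitted R² and the corresponding stated importance (a sketch)
from scipy.stats import spearmanr
for r2_col, stated_col in [('accuracy_R2', 'accuracy'),
                           ('s_accuracy_R2', 's_accuracy'),
                           ('abs_cv_R2', 'genderParity')]:
    rho, p = spearmanr(df_results[r2_col], df_results[stated_col], nan_policy='omit')
    print(f"{r2_col} vs {stated_col}: rho={rho:.2f}, p={p:.3f}")
###Output
_____no_output_____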
###Markdown
Differences between R² and user statement by R²
###Code
df_results['diff_accuracy'] = df_results.accuracy - df_results.accuracy_R2
df_results['diff_s_accuracy'] = df_results.s_accuracy - df_results.s_accuracy_R2
df_results['diff_cv'] = df_results.genderParity - df_results.abs_cv_R2
df_results['diff_ordering_utility'] = df_results.accuracy - df_results.ordering_utility_R2
df_results['diff_rND'] = df_results.genderParity - df_results.rND_R2
df_results.head()
g = sns.regplot(data=df_results, x='accuracy_R2', y='diff_accuracy')
g.set(ylim=(-1, 1), xlim=(0, 1))
sns.despine()
g = sns.regplot(data=df_results, x='s_accuracy_R2', y='diff_s_accuracy')
g.set(ylim=(-1, 1), xlim=(0, 1))
sns.despine()
g = sns.regplot(data=df_results, x='abs_cv_R2', y='diff_cv')
g.set(ylim=(-1, 1), xlim=(0, 1))
sns.despine()
g = sns.regplot(data=df_results, x='ordering_utility_R2', y='diff_ordering_utility')
g.set(ylim=(-1, 1), xlim=(0, 1))
sns.despine()
g = sns.regplot(data=df_results, x='rND_R2', y='diff_rND')
g.set(ylim=(-1, 1), xlim=(0, 1))
sns.despine()
###Output
_____no_output_____
###Markdown
R² distributions by gender
###Code
g = sns.violinplot(data=df_results[df_results.gender != 'other'], y='gender', x='accuracy_R2',
scale='count', inner='quartile', bw=0.3)
g.set(xlim=(0, 1), xlabel='accuracy R²')
sns.despine(left=True)
g = sns.violinplot(data=df_results[df_results.gender != 'other'], y='gender', x='s_accuracy_R2',
scale='count', inner='quartile', bw=0.3)
g.set(xlim=(0, 1), xlabel='group-cond. accuracy R²')
sns.despine(left=True)
g = sns.violinplot(data=df_results[df_results.gender != 'other'], y='gender', x='abs_cv_R2',
scale='count', inner='quartile', bw=0.3)
g.set(xlim=(0, 1), xlabel='abs. gender parity R²')
sns.despine(left=True)
g = sns.violinplot(data=df_results[df_results.gender != 'other'], y='gender', x='ordering_utility_R2',
scale='count', inner='quartile', bw=0.3)
g.set(xlim=(0, 1), xlabel='ordering utility R²')
sns.despine(left=True)
g = sns.violinplot(data=df_results[df_results.gender != 'other'], y='gender', x='rND_R2',
scale='count', inner='quartile', bw=0.3)
g.set(xlim=(0, 1), xlabel='abs. gender parity R²')
sns.despine(left=True)
###Output
_____no_output_____
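###Markdown
Beyond the visual comparison, a nonparametric test can check whether the R² distributions differ between the two gender groups. This is a sketch; the 'male' and 'female' labels are an assumption about how the gender column is coded, and dropna removes participants without a fitted R²:
###Code
# Mann-Whitney U test for gender differences in the fitted R² values (a sketch)
# NOTE: assumes the gender column uses the labels 'male' and 'female'
from scipy.stats import mannwhitneyu
for col in ['accuracy_R2', 's_accuracy_R2', 'abs_cv_R2']:
    group_a = df_results.loc[df_results.gender == 'male', col].dropna()
    group_b = df_results.loc[df_results.gender == 'female', col].dropna()
    stat, p = mannwhitneyu(group_a, group_b, alternative='two-sided')
    print(f"{col}: U={stat:.1f}, p={p:.3f}")
###Output
_____no_output_____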
###Markdown
Models predicting R² from demographics
###Code
cat_cols = [
'language',
'age',
'edu',
'gender',
]
y_cols = [
'accuracy',
's_accuracy',
'genderParity',
]
r2_cols = [
'accuracy_R2',
's_accuracy_R2',
'abs_cv_R2',
'ordering_utility_R2',
'rND_R2'
]
num_cols = [
'believe',
'confidence',
'fear',
'political',
'religious',
'screenHeight',
'screenWidth',
'will',
'agreeableness',
'conscientiousness',
'extraversion',
'neuroticism',
'openness',
]
df_users = pd.DataFrame(columns=cat_cols+num_cols+y_cols+r2_cols)
for i in range(len(df_results)):
user = df_results.iloc[i]
user2 = df[df['user._id'] == user.id].iloc[0]
new_row = {'accuracy': user['accuracy'],
's_accuracy': user['s_accuracy'],
'genderParity': user['genderParity'],
'abs_cv_R2': user['abs_cv_R2'],
'accuracy_R2': user['accuracy_R2'],
's_accuracy_R2': user['s_accuracy_R2'],
'ordering_utility_R2': user['ordering_utility_R2'],
'rND_R2': user['rND_R2'],
'believe': user2['user.believe'],
'confidence': user2['user.confidence'],
'fear': user2['user.fear'],
'political': user2['user.political'],
'religious': user2['user.religious'],
'screenHeight': user2['user.screenHeight'],
'screenWidth': user2['user.screenWidth'],
'will': user2['user.will'],
'agreeableness': user2['user.agreeableness'],
'conscientiousness': user2['user.conscientiousness'],
'extraversion': user2['user.extraversion'],
'neuroticism': user2['user.neuroticism'],
'openness': user2['user.openness'],
'language': user2['user.language'],
'age': user2['user.age'],
'edu': user2['user.edu'],
'gender': user2['user.gender'],
}
df_users = df_users.append(new_row, ignore_index=True)
df_users
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
# split for genderParity model
df_acc = df_users.copy()
df_acc = df_acc.dropna(subset=['abs_cv_R2'])
df_acc = df_acc.reset_index(drop=True)
y = df_acc.abs_cv_R2
est = KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='quantile')
est.fit(y.to_numpy().reshape(-1, 1))
y = est.transform(y.to_numpy().reshape(-1, 1)).ravel()
X = df_acc[num_cols + cat_cols]
# build the genderParity model
categorical_pipe = Pipeline([
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
numerical_pipe = Pipeline([
('imputer', SimpleImputer(strategy='mean'))
])
preprocessing = ColumnTransformer(
[('cat', categorical_pipe, cat_cols),
('num', numerical_pipe, num_cols)])
rf = Pipeline([
('preprocess', preprocessing),
('classifier', RandomForestClassifier(n_jobs=-1, n_estimators=100))
])
print("abs_cv_R2 model - RF test accuracy: %0.3f" % np.mean(cross_val_score(rf, X, y, cv=10)))
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
rf.fit(X_train, y_train)
ohe = (rf.named_steps['preprocess']
.named_transformers_['cat']
.named_steps['onehot'])
feature_names = ohe.get_feature_names(input_features=cat_cols)
feature_names = np.r_[feature_names, num_cols] # num_cols
tree_feature_importances = (
rf.named_steps['classifier'].feature_importances_)
sorted_idx = tree_feature_importances.argsort()
y_ticks = np.arange(0, len(feature_names))
fig, ax = plt.subplots(figsize=(10, 8))
ax.barh(y_ticks, tree_feature_importances[sorted_idx])
ax.set_yticklabels(feature_names[sorted_idx])
ax.set_yticks(y_ticks)
ax.set_title("abs_cv_R2 model - Feature Importances")
fig.tight_layout()
plt.show()
result = permutation_importance(rf, X, y, n_repeats=10, n_jobs=-1)
sorted_idx = result.importances_mean.argsort()
fig, ax = plt.subplots(figsize=(10, 5))
ax.boxplot(result.importances[sorted_idx].T, vert=False, labels=X.columns[sorted_idx])
ax.set_title("abs_cv_R2 model - Permutation Importances")
fig.tight_layout()
plt.show()
# split for accuracy model
df_acc = df_users.copy()
df_acc = df_acc.dropna(subset=['accuracy_R2'])
df_acc = df_acc.reset_index(drop=True)
y = df_acc.accuracy_R2
est = KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='quantile')
est.fit(y.to_numpy().reshape(-1, 1))
y = est.transform(y.to_numpy().reshape(-1, 1)).ravel()
X = df_acc[num_cols + cat_cols]
# build the accuracy model
categorical_pipe = Pipeline([
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
numerical_pipe = Pipeline([
('imputer', SimpleImputer(strategy='mean'))
])
preprocessing = ColumnTransformer(
[('cat', categorical_pipe, cat_cols),
('num', numerical_pipe, num_cols)])
rf = Pipeline([
('preprocess', preprocessing),
('classifier', RandomForestClassifier(n_jobs=-1, n_estimators=100))
])
print("accuracy_R2 model - RF test accuracy: %0.3f" % np.mean(cross_val_score(rf, X, y, cv=10)))
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
rf.fit(X_train, y_train)
ohe = (rf.named_steps['preprocess']
.named_transformers_['cat']
.named_steps['onehot'])
feature_names = ohe.get_feature_names(input_features=cat_cols)
feature_names = np.r_[feature_names, num_cols] # num_cols
tree_feature_importances = (
rf.named_steps['classifier'].feature_importances_)
sorted_idx = tree_feature_importances.argsort()
y_ticks = np.arange(0, len(feature_names))
fig, ax = plt.subplots(figsize=(10, 8))
ax.barh(y_ticks, tree_feature_importances[sorted_idx])
ax.set_yticklabels(feature_names[sorted_idx])
ax.set_yticks(y_ticks)
ax.set_title("accuracy_R2 model - Feature Importances")
fig.tight_layout()
plt.show()
result = permutation_importance(rf, X, y, n_repeats=10, n_jobs=-1)
sorted_idx = result.importances_mean.argsort()
fig, ax = plt.subplots(figsize=(10, 5))
ax.boxplot(result.importances[sorted_idx].T, vert=False, labels=X.columns[sorted_idx])
ax.set_title("abs_cv_R2 model - Permutation Importances")
fig.tight_layout()
plt.show()
# split for s_accuracy model
df_acc = df_users.copy()
df_acc = df_acc.dropna(subset=['s_accuracy_R2'])
df_acc = df_acc.reset_index(drop=True)
y = df_acc.s_accuracy_R2
est = KBinsDiscretizer(n_bins=5, encode='ordinal', strategy='quantile')
est.fit(y.to_numpy().reshape(-1, 1))
y = est.transform(y.to_numpy().reshape(-1, 1)).ravel()
X = df_acc[num_cols + cat_cols]
# build the s_accuracy model
categorical_pipe = Pipeline([
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
numerical_pipe = Pipeline([
('imputer', SimpleImputer(strategy='mean'))
])
preprocessing = ColumnTransformer(
[('cat', categorical_pipe, cat_cols),
('num', numerical_pipe, num_cols)])
rf = Pipeline([
('preprocess', preprocessing),
('classifier', RandomForestClassifier(n_jobs=-1, n_estimators=100))
])
print("s_accuracy_R2 model - RF test accuracy: %0.3f" % np.mean(cross_val_score(rf, X, y, cv=10)))
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)
rf.fit(X_train, y_train)
ohe = (rf.named_steps['preprocess']
.named_transformers_['cat']
.named_steps['onehot'])
feature_names = ohe.get_feature_names(input_features=cat_cols)
feature_names = np.r_[feature_names, num_cols] # num_cols
tree_feature_importances = (
rf.named_steps['classifier'].feature_importances_)
sorted_idx = tree_feature_importances.argsort()
y_ticks = np.arange(0, len(feature_names))
fig, ax = plt.subplots(figsize=(10, 8))
ax.barh(y_ticks, tree_feature_importances[sorted_idx])
ax.set_yticklabels(feature_names[sorted_idx])
ax.set_yticks(y_ticks)
ax.set_title("s_accuracy_R2 model - Feature Importances")
fig.tight_layout()
plt.show()
result = permutation_importance(rf, X, y, n_repeats=10, n_jobs=-1)
sorted_idx = result.importances_mean.argsort()
fig, ax = plt.subplots(figsize=(10, 5))
ax.boxplot(result.importances[sorted_idx].T, vert=False, labels=X.columns[sorted_idx])
ax.set_title("s_accuracy_R2 model - Permutation Importances")
fig.tight_layout()
plt.show()
###Output
_____no_output_____ |
01 Beginner Level Task/Task 2-Stock Market Prediction And Forecasting Using Stacked LSTM/Beginner Level Task 2-Stock Market Prediction And Forecasting Using Stacked LSTM.ipynb | ###Markdown
Author : Loka Akash Reddy LGM VIP - Data Science September-2021 Task 2 : Stock Market Prediction And Forecasting Using Stacked LSTM Dataset Link : https://raw.githubusercontent.com/mwitiderrick/stockprice/master/NSE-TATAGLOBAL.csv Importing required Libraries
###Code
#Importing Libraries
%time
import tensorflow as tf # This code has been tested with Tensorflow
import tensorflow_datasets as tfds
from tensorflow import keras
import math
import pandas as pd
import pandas_datareader as web
import numpy as np
import datetime as dt
import urllib.request, json
import os
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from keras.models import Sequential
from keras.layers import Dense,LSTM
import nltk
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import subjectivity
from nltk.sentiment import SentimentAnalyzer
from nltk.sentiment.util import *
from sklearn import preprocessing, metrics
from sklearn.preprocessing import MinMaxScaler
plt.style.use('fivethirtyeight')
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Importing dataset
###Code
# df = web.DataReader('AAPL', data_source='yahoo', start='2012-01-01', end='2019-12-17')  # example Yahoo Finance load; overwritten by the CSV read below, so kept commented out
df = pd.read_csv("NSE-TATAGLOBAL.csv")
df
###Output
_____no_output_____
###Markdown
Inspecting Data
###Code
df.head()
df.shape
df.info()
df.describe()
df.isnull().sum()
df.dropna(inplace = True, how = 'all')
df.isnull().sum()
# number of records in the dataset
len(df)
# checking for null values in the dataset
df.isna().any()
df['Date'] = pd.to_datetime(df['Date']).dt.normalize()
# filtering the important columns required
df = df.filter(['Date', 'Close', 'Open', 'High', 'Low', 'Volume'])
# setting column 'Date' as the index column
df.set_index('Date', inplace = True)
# sorting the data according to the index, i.e. Date
df = df.sort_index(ascending = True, axis = 0)
df
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.plot(df['Open'],'b')
plt.title("Open Price")
plt.ylabel("Price")
plt.subplot(1,2,2)
plt.plot(df['Close'],'r')
plt.title("Close Price")
plt.ylabel("Price")
plt.show()
###Output
_____no_output_____
###Markdown
200 Exponential Moving Average (EMA) and 200 Simple Moving Average (SMA)
###Code
df.ewm(span=200).mean()['Close'].plot(figsize = (15,15), label = '200 EMA')
df.rolling(window=200).mean()['Close'].plot(figsize=(15,15), label = '200 SMA')
df['Close'].plot(label = 'Close')
plt.legend()
plt.ylabel('Price')
plt.show()
training_orig = df.loc[:, ['Close']]
training_orig
training_orig['Close'].plot()
# setting figure size
plt.figure(figsize=(16,10))
# plotting close price
df['Close'].plot()
# setting plot tittle x and y labels
plt.title("Close Price")
plt.xlabel('Date')
plt.ylabel('Close Price ($)')
plt.figure(figsize=(16,8))
plt.title('Close Price History')
plt.plot(df['Close'])
plt.xlabel('Date', fontsize = 18)
plt.ylabel('Close Price USD($)', fontsize = 18)
plt.show()
data = df.filter(['Close'])
dataset = data.values
training_data_len = math.ceil(len(dataset) * .8)
#scale the data
scaler=MinMaxScaler(feature_range=(0,1))
scaled_data=scaler.fit_transform(dataset)
scaled_data
# Create training data set
train_data = scaled_data[0:training_data_len,:]
#split the data into x_train and y_train data sets
x_train = []
y_train = []
for i in range(60,len(train_data)):
x_train.append(train_data[i-60:i,0])
y_train.append(train_data[i,0])
if i <= 60:
print(x_train)
print(y_train)
print()
#convert train data to numpy arrays
x_train, y_train = np.array(x_train),np.array(y_train)
#Reshape
x_train.shape
x_train = np.reshape(x_train,(x_train.shape[0], x_train.shape[1],1))
x_train.shape
# calculating 7-day moving average
df.rolling(7).mean().head(20)
#setting figure size
plt.figure(figsize = (16,10))
# plotting the close price and a 30-day rolling mean of the close price
df['Close'].plot()
df.rolling(window=30).mean()['Close'].plot()
# displaying df
df
# calculating data_to_use
percentage_of_data = 1.0
data_to_use = int(percentage_of_data*(len(df)-1))
# using 80% of data for training
train_end = int(data_to_use*0.8)
total_data = len(df)
start = total_data - data_to_use
# printing number of records in the training and test datasets
print("Number of records in Training Data:", train_end)
print("Number of records in Test Data:", total_data - train_end)
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range=(0,1))
training = sc.fit_transform(np.array(training_orig['Close']).reshape(-1,1))
print(training.shape)
train_size = int(len(training)*0.65)
train = training[0:train_size]
test = training[train_size:]
print(train.shape)
print(test.shape)
test_plot = np.empty_like(training)
test_plot[: , :] = np.nan
test_plot[len(train): , :] = test
plt.figure(figsize = (20,20))
plt.plot(train, 'red', label = 'train')
plt.plot(test_plot, 'blue', label='test')
plt.legend(loc='upper left')
def helper(dataset, timestep):
x=[]
y=[]
for i in range(len(dataset)-timestep-1):
x.append(dataset[i:(i+timestep),0])
y.append(dataset[i+timestep, 0])
return np.array(x), np.array(y)
# use a 60-step window so the model's input shape matches the 60-step test windows built below
x_train, y_train = helper(train,60)
x_test, y_test = helper(test,60)
x_train.shape, y_train.shape
x_test.shape, y_test.shape
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
x_test = x_test.reshape(x_test.shape[0], x_test.shape[1], 1)
x_test.shape
# build LSTM Model
model=Sequential()
model.add(LSTM(50,return_sequences=True,input_shape=(x_train.shape[1],1)))
model.add(LSTM(50,return_sequences=False))
model.add(Dense(25))
model.add(Dense(1))
#compile the model
model.compile(optimizer='adam', loss='mean_squared_error')
#train the model
model.fit(x_train,y_train,batch_size=1,epochs=1)
# compile the model (note: the model was already compiled above, so this is redundant)
model.compile(optimizer='adam', loss='mean_squared_error')
#create the testing dataset
#create a new array containing the scaled values from index training_data_len - 60 through the end of the dataset
test_data = scaled_data[training_data_len - 60:,:]
# create the data sets x_test and y_test
x_test=[]
y_test=dataset[training_data_len:,:]
for i in range(60,len(test_data)):
x_test.append(test_data[i-60:i,0])
#convert the data to a numpy array
x_test=np.array(x_test)
#reshape
x_test=np.reshape(x_test,(x_test.shape[0], x_test.shape[1],1))
#get the model's predicted price values
predictions=model.predict(x_test)
predictions=scaler.inverse_transform(predictions)
#Get the root mean squared error(RMSE)
rmse=np.sqrt( np.mean((predictions - y_test)**2))
rmse
train=data[:training_data_len]
valid=data[training_data_len:]
valid['Predictions']=predictions
#visualize the data
plt.figure(figsize=(16,8))
plt.title('Model')
plt.xlabel('Date', fontsize=18)
plt.ylabel('Close Price USD($)', fontsize=18)
plt.plot(train['Close'])
plt.plot(valid[['Close', 'Predictions']])
plt.legend(['Train', 'Val', 'Predictions'],loc='lower right')
plt.show()
#show the valid and predicted prices
valid
###Output
_____no_output_____ |
exploratory_data_analysis/SVM_Regression-Copy1.ipynb | ###Markdown
Cleaning the data
###Code
dataframe = dataframe.replace('?',np.NAN)
dict1 = dataframe.isnull().sum().to_dict()
non_zero = []
for a in dict1.keys():
if dict1[a] > 100:
# print a
# print dict1[a]
non_zero.append(a)
# print non_zero
for elem in non_zero:
del dataframe[elem]
# Perhaps its better to remove this row.
# No reason in removing whole column.
dataframe= dataframe.dropna()
cols = list(dataframe.columns.values)
###Output
_____no_output_____
###Markdown
Performing Regression
###Code
cols = [ x for x in cols if x not in ['fold', 'state', 'community', 'communityname', 'county'
,'ViolentCrimesPerPop']]
# cols = ['numbUrban', 'NumInShelters']
print len(cols)
# print cols[0]
cols1 = cols
all_errors = []
X = dataframe[list(cols1)].values
total_val = len(dataframe['ViolentCrimesPerPop'].values)
percent = 2/float(3)
edge_val = int(total_val*percent)
Y = np.asarray(dataframe['ViolentCrimesPerPop'].values)
for model in models:
model.fit(X[:edge_val], Y[:edge_val])
y_predict = [model.predict(X[edge_val:]) for model in models]
error = [0] * len(models)
for i in xrange(edge_val, total_val):
for k in xrange(len(models)):
print k
error[k] += (float(Y[i]) - y_predict[k][i-edge_val])**2
# print "Error of Estimates"
error = [er/float(total_val- edge_val) for er in error]
all_errors.append(error)
# print error
print "Im done"
arr = np.array(all_errors)
# print np.min(arr, axis=0)
print error
best_f= np.argmin(arr, axis=0)
print "SVM ranked"
print arr[:,0].argsort()
print "Bayesian ranked"
print arr[:,2].argsort()
print best_f
bf = [cols[b1] for b1 in best_f]
print bf
# print np.min(arr,axis=1)
models_chosen = np.argmin(arr,axis=1)
# print models_chosen
fm = {}
for best in models_chosen:
if best in fm:
fm[best] +=1
else:
fm[best] = 1
print fm
cols[44]
cols[50]
###Output
_____no_output_____ |
remove_reads_from_another_fastq.ipynb | ###Markdown
Save `cleaned` KS162_CrePos_g5_rep3
###Code
%%bash
comm -2 -3 \
<(gzip -dc /data/reddylab/Keith/collab/200924_Gemberling/data/chip_seq/processed_raw_reads/Siklenka_6683_201201A5/KS162_CrePos_g5_rep3.R1.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
<(gzip -dc /data/reddylab/Keith/encode4_duke/data/atac_seq/processed_raw_reads/Keith_6683_201201A5/KS136_Th17ASTARRInput_rep1.R1.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
| tr '\t' '\n' \
| gzip -c > /data/reddylab/Alex/tmp/cleaned.KS162_CrePos_g5_rep3.R1.fastq.gz
comm -2 -3 \
<(gzip -dc /data/reddylab/Keith/collab/200924_Gemberling/data/chip_seq/processed_raw_reads/Siklenka_6683_201201A5/KS162_CrePos_g5_rep3.R2.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
<(gzip -dc /data/reddylab/Keith/encode4_duke/data/atac_seq/processed_raw_reads/Keith_6683_201201A5/KS136_Th17ASTARRInput_rep1.R2.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
| tr '\t' '\n' \
| gzip -c > /data/reddylab/Alex/tmp/cleaned.KS162_CrePos_g5_rep3.R2.fastq.gz
###Output
_____no_output_____
###Markdown
Save `cleaned` KS136_Th17ASTARRInput_rep1
###Code
%%bash
comm -1 -3 \
<(gzip -dc /data/reddylab/Keith/collab/200924_Gemberling/data/chip_seq/processed_raw_reads/Siklenka_6683_201201A5/KS162_CrePos_g5_rep3.R1.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
<(gzip -dc /data/reddylab/Keith/encode4_duke/data/atac_seq/processed_raw_reads/Keith_6683_201201A5/KS136_Th17ASTARRInput_rep1.R1.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
| tr '\t' '\n' \
| gzip -c > /data/reddylab/Alex/tmp/cleaned.KS136_Th17ASTARRInput_rep1.R1.fastq.gz
comm -1 -3 \
<(gzip -dc /data/reddylab/Keith/collab/200924_Gemberling/data/chip_seq/processed_raw_reads/Siklenka_6683_201201A5/KS162_CrePos_g5_rep3.R2.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
<(gzip -dc /data/reddylab/Keith/encode4_duke/data/atac_seq/processed_raw_reads/Keith_6683_201201A5/KS136_Th17ASTARRInput_rep1.R2.fastq.gz \
| awk 'NR%4==1{res=$1}NR%4>1{res=res"\t"$1}NR%4==0{res=res"\t"$1; print res}' \
| sort -k1,1) \
| tr '\t' '\n' \
| gzip -c > /data/reddylab/Alex/tmp/cleaned.KS136_Th17ASTARRInput_rep1.R2.fastq.gz
###Output
_____no_output_____ |
AI-Workflow-Enterprise-Model-Deployment/Model_Deployment-case-study_V3/m5-case-study-soln.ipynb | ###Markdown
CASE STUDY - Deploying a recommenderWe have seen the movie lens data on a toy dataset; now let's try something a little bigger. You have some choices.* [MovieLens Downloads](https://grouplens.org/datasets/movielens/latest/)If your resources are limited (you're working on a computer with a limited amount of memory)> continue to use the sample_movielens_ratings.csvIf you have a computer with at least 8GB of RAM> download the ml-latest-small.zipIf you have the computational resources (access to a Spark cluster or a high-memory machine)> download the ml-latest.zipThe two important pages for documentation are below.* [Spark MLlib collaborative filtering docs](https://spark.apache.org/docs/latest/ml-collaborative-filtering.html) * [Spark ALS docs](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS)
###Code
import os
import shutil
import pandas as pd
import numpy as np
import pyspark as ps
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.recommendation import ALS
from pyspark.sql import Row
from pyspark.sql.types import DoubleType
## ensure the spark context is available
spark = (ps.sql.SparkSession.builder
.appName("sandbox")
.getOrCreate()
)
sc = spark.sparkContext
print(spark.version)
## note that this solution uses ml-latest.zip
data_dir = os.path.join("..","data","ml-latest")
ratings_file = os.path.join(data_dir,"ratings.csv")
movies_file = os.path.join(data_dir,"movies.csv")
if not os.path.exists(ratings_file):
print("ERROR make sure the path to the ratings file is correct")
## load the data
df = spark.read.format("csv").options(header="true",inferSchema="true").load(ratings_file)
df.show(n=4)
movies_df = pd.read_csv(movies_file)
movies_df.rename(columns={"movieId": "movie_id"},inplace=True)
movies_df.head(5)
###Output
_____no_output_____
###Markdown
QUESTION 1Explore the movie lens data a little and summarize it
###Code
## YOUR CODE HERE (summarize the data)
df = df.withColumnRenamed("movieID", "movie_id")
df = df.withColumnRenamed("userID", "user_id")
df.describe().show()
print('Unique users: {}'.format(df.select('user_id').distinct().count()))
print('Unique movies: {}'.format(df.select('movie_id').distinct().count()))
print('Movies with Rating > 2: {}'.format(df.filter('rating > 2').select('movie_id').distinct().count()))
print('Movies with Rating > 3: {}'.format(df.filter('rating > 3').select('movie_id').distinct().count()))
print('Movies with Rating > 4: {}'.format(df.filter('rating > 4').select('movie_id').distinct().count()))
###Output
+-------+------------------+-----------------+------------------+--------------------+
|summary| user_id| movie_id| rating| timestamp|
+-------+------------------+-----------------+------------------+--------------------+
| count| 27753444| 27753444| 27753444| 27753444|
| mean|141942.01557064414|18487.99983414671|3.5304452124932677|1.1931218549319258E9|
| stddev| 81707.40009148757|35102.62524746828| 1.066352750231982|2.1604822852233925E8|
| min| 1| 1| 0.5| 789652004|
| max| 283228| 193886| 5.0| 1537945149|
+-------+------------------+-----------------+------------------+--------------------+
Unique users: 283228
Unique movies: 53889
Movies with Rating > 2: 50735
Movies with Rating > 3: 43107
Movies with Rating > 4: 29374
###Markdown
QUESTION 2Find the ten most popular movies---that is, the ten movies with the highest average rating>Hint: you may want to subset the movie matrix to only consider movies with a minimum number of ratings
###Code
## YOUR CODE HERE
## get the top rated movies with more than 100 ratings
movie_counts = df.groupBy("movie_id").count()
top_rated = df.groupBy("movie_id").avg('rating')
top_rated = top_rated.withColumnRenamed("movie_id", "movie_id_2")
top_movies = top_rated.join(movie_counts, top_rated.movie_id_2 == movie_counts.movie_id)
top_movies = top_movies.filter('count>100').orderBy('avg(rating)',ascending=False).drop("movie_id_2")
top_movies = top_movies.toPandas()
## add the movie titles to data frame
movie_ids = top_movies['movie_id'].values
inds = [np.where(movies_df['movie_id'].values==mid)[0][0] for mid in movie_ids]
top_movies["title"] = movies_df['title'].values[inds]
top_movies.head(10)
###Output
_____no_output_____
###Markdown
QUESTION 3Compare at least 5 different values for the ``regParam``. Use the ``ALS.trainImplicit()`` method and compare it to the ``.fit()`` method. See the [Spark ALS docs](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.recommendation.ALS) for example usage.
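As a complement, a minimal sketch of the RDD-based route the question mentions is shown below; it is an illustration only (the `rank`, `iterations`, `lambda_`, and `alpha` values here are assumptions, not tuned settings), while the solution cell that follows uses the DataFrame API's `implicitPrefs` flag as the modern equivalent:
```python
# Hedged sketch: implicit-feedback ALS via the RDD-based pyspark.mllib API
from pyspark.mllib.recommendation import ALS as MLlibALS, Rating

# Convert the ratings DataFrame into an RDD of Rating(user, product, rating)
ratings_rdd = df.rdd.map(lambda row: Rating(int(row.user_id), int(row.movie_id), float(row.rating)))

# Train an implicit-preference model (illustrative hyperparameters)
implicit_model = MLlibALS.trainImplicit(ratings_rdd, rank=10, iterations=5, lambda_=0.1, alpha=0.01)
```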
###Code
## YOUR CODE HERE
(training, test) = df.randomSplit([0.8, 0.2])
def train_model(reg_param,implicit_prefs=False):
als = ALS(maxIter=5, regParam=reg_param, userCol="user_id",
itemCol="movie_id", ratingCol="rating",
coldStartStrategy="drop",implicitPrefs=implicit_prefs)
model = als.fit(training)
predictions = model.transform(test)
evaluator = RegressionEvaluator(metricName="rmse", labelCol="rating",
predictionCol="prediction")
rmse = evaluator.evaluate(predictions)
print("regParam={}, RMSE={}".format(reg_param,np.round(rmse,2)))
for reg_param in [0.01, 0.05, 0.1, 0.15, 0.25]:
train_model(reg_param)
###Output
regParam=0.01, RMSE=0.84
regParam=0.05, RMSE=0.83
regParam=0.1, RMSE=0.83
regParam=0.15, RMSE=0.84
regParam=0.25, RMSE=0.87
###Markdown
QUESTION 4With your best regParam try using the `implicitPrefs` flag.>Note that the results here make sense because the data are `explicit` ratings
###Code
## YOUR CODE HERE
train_model(0.1, implicit_prefs=True)
###Output
regParam=0.1, RMSE=3.26
###Markdown
QUESTION 5Use model persistence to save your finalized model
###Code
## YOUR CODE HERE
## re-train using the whole data set
print("...training")
als = ALS(maxIter=5, regParam=0.1, userCol="user_id",
itemCol="movie_id", ratingCol="rating",
coldStartStrategy="drop")
model = als.fit(df)
## save the model for furture use
save_dir = "saved-recommender"
if os.path.isdir(save_dir):
print("...overwritting saved model")
shutil.rmtree(save_dir)
## save the top-ten movies
print("...saving top-movies")
top_movies[:10000].to_csv("top-movies.csv",index=False)
## save model
model.save(save_dir)
print("done.")
###Output
...training
...overwriting saved model
...saving top-movies
done.
###Markdown
QUESTION 6Use ``spark-submit`` to load the model and demonstrate that you can load the model and interface with it.
###Code
## YOUR CODE HERE
## see recommender-submit.py
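## A minimal sketch of what a recommender-submit.py script could contain
## (an illustration/assumption, not the actual file): create a SparkSession,
## load the ALS model saved above, and print a few recommendations.
## Note: recommendForAllUsers() requires Spark >= 2.2.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALSModel
spark_session = SparkSession.builder.appName("load-recommender").getOrCreate()
loaded_model = ALSModel.load("saved-recommender")
loaded_model.recommendForAllUsers(10).show(5, truncate=False)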
###Output
_____no_output_____ |
notebooks/ACLUM Legislation.ipynb | ###Markdown
To set `aclum_urls`:
- Open 
- Click "See more" until it goes away
- In Dev Tools Console, copy output from:
```javascript
Array.from(document.querySelectorAll('.listing-page-item .field-title-field a')).map(a => a.href)
```
###Code
aclum_urls = [
'https://www.aclum.org/en/legislation/end-debt-based-incarceration-license-suspensions',
'https://www.aclum.org/en/legislation/face-surveillance-regulation',
'https://www.aclum.org/en/legislation/work-family-mobility',
'https://www.aclum.org/en/legislation/safe-communities',
'https://www.aclum.org/en/legislation/votes-act',
'https://www.aclum.org/en/legislation/remote-access-open-meetings',
'https://www.aclum.org/en/legislation/treatment-not-imprisonment-0',
'https://www.aclum.org/en/legislation/fix-massachusetts-civil-rights-act',
'https://www.aclum.org/en/legislation/massachusetts-information-privacy-act',
'https://www.aclum.org/en/legislation/artificial-intelligence-commission-0',
'https://www.aclum.org/en/legislation/automated-license-plate-readers',
'https://www.aclum.org/en/legislation/reduce-reincarceration-technical-violations-parole',
'https://www.aclum.org/en/legislation/no-cost-prison-phone-calls',
'https://www.aclum.org/en/legislation/prevent-imposition-mandatory-minimums-based-juvenile-records',
'https://www.aclum.org/en/legislation/qualified-immunity-reform',
'https://www.aclum.org/en/legislation/ending-life-without-parole',
'https://www.aclum.org/en/legislation/raise-age',
'https://www.aclum.org/en/legislation/access-justice-0',
'https://www.aclum.org/en/legislation/medication-opioid-use-disorder-all-correctional-facilities',
'https://www.aclum.org/en/legislation/treatment-non-carceral-settings-people-not-accused-crimes',
'https://www.aclum.org/en/legislation/alternatives-community-emergency-services',
'https://www.aclum.org/en/legislation/full-spectrum-pregnancy-care',
'https://www.aclum.org/en/legislation/access-emergency-contraception',
'https://www.aclum.org/en/legislation/healthy-and-safety-sex-workers',
'https://www.aclum.org/en/legislation/election-participation-eligible-incarcerated-voters',
'https://www.aclum.org/en/legislation/emergency-paid-sick-time',
'https://www.aclum.org/en/legislation/common-start-0',
'https://www.aclum.org/en/legislation/right-counsel-evictions'
]
import json
import re
import requests
from bs4 import BeautifulSoup
from tqdm.notebook import tqdm
def get_soup(url):
response = requests.get(url)
response.raise_for_status()
return BeautifulSoup(response.text, "lxml")
def select_string(soup, selector):
try:
return ' '.join(soup.select_one(selector).stripped_strings)
except AttributeError:
return ""
def parse_bills(soup):
bill_text = select_string(soup, '.field-legislation-bill')
return [
{
"session": 192,
"number": b.replace('.', '')
}
for b in re.findall(r'([HS]\.\d+)', bill_text)
]
def parse_legislation(url, soup):
return {
"id": url.split("/")[-1],
"url": url,
"title": select_string(soup, 'h1'),
"description": select_string(soup, '.field-body p:not(.alt)'),
"bills": parse_bills(soup),
}
aclum_legislation = [
parse_legislation(url, get_soup(url))
for url in tqdm(aclum_urls)
]
aclum_legislation[0]
for legislation in aclum_legislation:
print(legislation['url'])
print(legislation['title'])
print(legislation['description'])
print()
with open('../dist/legislation.json', 'w') as f:
json.dump(aclum_legislation, f, indent=4)
###Output
_____no_output_____ |
docs/samples/ML Toolbox/Image Classification/Flower/Service End to End.ipynb | ###Markdown
Order of magnitude faster training for image classification: Part II _Transfer learning using Inception Package - Cloud Run Experience_This notebook continues to codify the capabilities discussed in this [blog post](http://localhost:8081/). In a nutshell, it uses the pre-trained inception model as a starting point and then uses transfer learning to train it further on additional, customer-specific images. For explanation, simple flower images are used. Compared to training from scratch, the time and costs are drastically reduced.This notebook does preprocessing, training and prediction by calling the CloudML API instead of running them in the Datalab container. The purpose of local work is to do some initial prototyping and debugging on small scale data - often by taking a suitable (say 0.1 - 1%) sample of the full data. The same basic steps can then be repeated with much larger datasets in the cloud. Setup First run the following steps only if you are running Datalab from your local desktop or laptop (not running Datalab from a GCE VM):1. Make sure you have a GCP project which is enabled for Machine Learning API and Dataflow API.2. Run "%datalab project set --project [project-id]" to set the default project in Datalab.If you run Datalab from a GCE VM, then make sure the project of the GCE VM is enabled for Machine Learning API and Dataflow API.
###Code
import mltoolbox.image.classification as model
from google.datalab.ml import *
bucket = 'gs://' + datalab_project_id() + '-lab'
preprocess_dir = bucket + '/flowerpreprocessedcloud'
model_dir = bucket + '/flowermodelcloud'
staging_dir = bucket + '/staging'
!gsutil mb $bucket
###Output
_____no_output_____
###Markdown
PreprocessPreprocessing uses a Dataflow pipeline to convert the image format, resize images, and run the converted image through a pre-trained model to get the features or embeddings. You can also do this step using alternate technologies like Spark or plain Python code if you like. The %%ml preprocess command simplifies this task. Check out the parameters shown using --usage flag first and then run the command. If you hit "PERMISSION_DENIED" when running the following cell, you need to enable Cloud DataFlow API (url is shown in error message). The DataFlow job usually takes about 20 min to complete.
###Code
train_set = CsvDataSet('gs://cloud-datalab/sampledata/flower/train1000.csv', schema='image_url:STRING,label:STRING')
preprocess_job = model.preprocess_async(train_set, preprocess_dir, cloud={'num_workers': 10})
preprocess_job.wait() # Alternatively, you can query the job status by train_job.state. The wait() call blocks the notebook execution.
###Output
/usr/local/lib/python2.7/dist-packages/apache_beam/coders/typecoders.py:136: UserWarning: Using fallback coder for typehint: Any.
warnings.warn('Using fallback coder for typehint: %r.' % typehint)
###Markdown
TrainNote that the command remains the same as that in the "local" version.
###Code
train_job = model.train_async(preprocess_dir, 30, 1000, model_dir, cloud=CloudTrainingConfig('us-central1', 'BASIC'))
train_job.wait() # Alternatively, you can query the job status by train_job.state. The wait() call blocks the notebook execution.
###Output
_____no_output_____
###Markdown
Check your job status by running (replace the job id with the one shown above):```Job('image_classification_train_170307_002934').describe()``` TensorBoard works with a GCS path too. Note that the data usually shows up about a minute after TensorBoard starts with a GCS path.
###Code
tb_id = TensorBoard.start(model_dir)
###Output
_____no_output_____
###Markdown
PredictDeploy the model and run online predictions. The deployment takes about 2 ~ 5 minutes.
###Code
Models().create('flower')
ModelVersions('flower').deploy('beta1', model_dir)
###Output
Waiting for operation "projects/bradley-playground/operations/create_flower_beta1-1488494327528"
Done.
###Markdown
Online prediction is currently in alpha; if the first call fails, retrying helps ensure a warm start.
###Code
images = [
'gs://cloud-ml-data/img/flower_photos/daisy/15207766_fc2f1d692c_n.jpg',
'gs://cloud-ml-data/img/flower_photos/tulips/6876631336_54bf150990.jpg'
]
# set resize=True to avoid sending large data in prediction request.
model.predict('flower.beta1', images, resize=True, cloud=True)
###Output
Predicting...
###Markdown
Batch Predict
###Code
import google.datalab.bigquery as bq
bq.Dataset('flower').create()
eval_set = CsvDataSet('gs://cloud-datalab/sampledata/flower/eval670.csv', schema='image_url:STRING,label:STRING')
batch_predict_job = model.batch_predict_async(eval_set, model_dir, output_bq_table='flower.eval_results_full',
cloud={'temp_location': staging_dir})
batch_predict_job.wait()
%%bq query --name wrong_prediction
SELECT * FROM flower.eval_results_full WHERE target != predicted
wrong_prediction.execute().result()
ConfusionMatrix.from_bigquery('flower.eval_results_full').plot()
%%bq query --name accuracy
SELECT
target,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END) as correct,
COUNT(*) as total,
SUM(CASE WHEN target=predicted THEN 1 ELSE 0 END)/COUNT(*) as accuracy
FROM
flower.eval_results_full
GROUP BY
target
accuracy.execute().result()
%%bq query --name logloss
SELECT feature, AVG(-logloss) as logloss, count(*) as count FROM
(
SELECT feature, CASE WHEN correct=1 THEN LOG(prob) ELSE LOG(1-prob) END as logloss
FROM
(
SELECT
target as feature,
CASE WHEN target=predicted THEN 1 ELSE 0 END as correct,
target_prob as prob
FROM flower.eval_results_full))
GROUP BY feature
FeatureSliceView().plot(logloss)
###Output
_____no_output_____
###Markdown
Clean up
###Code
ModelVersions('flower').delete('beta1')
Models().delete('flower')
!gsutil -m rm -r {preprocess_dir}
!gsutil -m rm -r {model_dir}
###Output
_____no_output_____ |
FUNCOES PARAMETRO E LAMBDA/LAMBDA EXPRESSIONS.ipynb | ###Markdown
LAMBDA FUNCTIONSFunctions consisting of a single expression; they can also be called anonymous functions.Lambda functions have no name, they are ASSIGNED DIRECTLY TO A VARIABLE (one of the ways to use them).A variable *name* is assigned the reserved word "lambda", the parameter followed by a colon, and the expression, as in the example below.function_name = *lambda* parameter: expression**Executes 1 line of code and returns the value**
###Code
def functionTeste(num):
return 2 * num
functionTeste(10)
###Output
_____no_output_____
###Markdown
Using LAMBDAfunction_name = *lambda* parameter: expression
###Code
minha_funcao2 = lambda num: num * 2
minha_funcao2(5) # Here the variable is called directly with an argument and used as a function
###Output
_____no_output_____
###Markdown
USING LAMBDA
###Code
#Example using a regular function
imposto = 0.3
def preco_imposto(preco):
return preco * ( 1 + 0.3)
#Using lambda
preco_imposto2 = lambda preco: preco * (1 + imposto)
preco_imposto(2)
preco_imposto2(2)
###Output
_____no_output_____
###Markdown
Main Application of LAMBDAThe main use is passing a function into a method that already exists, for example map(), sort(), etc. That is, methods that accept functions as parameters.
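The code cell below illustrates `map()`; as an extra (made-up) example, a lambda can also be passed as the `key` of `sorted()`:
```python
# Sort (product, price) pairs by price using a lambda as the key function
products = [("notebook", 2450), ("iphone", 4500), ("tablet", 1000)]
print(sorted(products, key=lambda item: item[1]))
# [('tablet', 1000), ('notebook', 2450), ('iphone', 4500)]
```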
###Code
preco_tecnologia = {'notebook asus': 2450, 'iphone': 4500, 'samsung galaxy': 3000, 'tv samsung': 1000, 'ps5': 3000, 'tablet': 1000, 'notebook dell': 3000, 'ipad': 3000, 'tv philco': 800, 'notebook hp': 1700}
calcular_imposto1 = lambda valor: valor * 1.3
preco_cImposto = list(map(calcular_imposto1, preco_tecnologia.values()))
print(preco_cImposto)
###Output
[3185.0, 5850.0, 3900.0, 1300.0, 3900.0, 1300.0, 3900.0, 3900.0, 1040.0, 2210.0]
###Markdown
Function filter()The filter() function filters an iterable.Instead of keeping all the values, we keep only those that satisfy a given condition.The function passed to filter() returns **True** or **False**; it performs a comparison
###Code
preco_tecnologia2 = {'notebook asus': 2450, 'iphone': 4500, 'samsung galaxy': 3000, 'tv samsung': 1000, 'ps5': 3000, 'tablet': 1000, 'notebook dell': 3000, 'ipad': 3000, 'tv philco': 800, 'notebook hp': 1700}
maior2k = lambda valor2: valor2 > 2000
produtos_acima_2k = dict(list(filter(lambda item: item[1] > 2000, preco_tecnologia2.items())))
print(produtos_acima_2k)
###Output
{'notebook asus': 2450, 'iphone': 4500, 'samsung galaxy': 3000, 'ps5': 3000, 'notebook dell': 3000, 'ipad': 3000}
###Markdown
LAMBDA expressions to generate functionsBesides being applied as parameters of other methods, they can also be used as function generators
###Code
def calc_imposto(impo):
return lambda preco: preco * (1 + impo)
calcular_produto = calc_imposto(0.1)
calcular_servico = calc_imposto(0.15)
calcular_royalties = calc_imposto(0.25)
print(calcular_produto(100))
print(calcular_servico(90))
print(calcular_royalties(80))
###Output
110.00000000000001
103.49999999999999
100.0
|
Lecture Notebooks/Econ126_Class_16-Incomplete.ipynb | ###Markdown
Class 16: Introduction to New-Keynesian Business Cycle ModelingIn this notebook, we will briefly explore US macroeconomic data suggesting that, contrary to the assumptions of most RBC models, there is in fact a relationship between real and nominal quantities over the business cycle. Then we will use `linearsolve` to compute impulse responses of output, inflation, and the nominal interest rate to a monetary policy shock in the New-Keynesian model. DataThe file `business_cycle_data_actual_trend_cycle.csv`, available at https://github.com/letsgoexploring/econ126/raw/master/Data/Csv/business_cycle_data_actual_trend_cycle.csv, contains actual and trend data for real GDP per capita, real consumption per capita, real investment per capita, real physical capital per capita, TFP, hours per capita, the real money supply (M2), (nominal) interest rate on 3-month T-bills, the PCE inflation rate, and the unemployment rate; each at quarterly frequency. The GDP, consumption, investment, capital, and money supply data are in terms of 2012 dollars. Hours is measured as an index with the value in October 2012 set to 100.
###Code
# Read business_cycle_data_actual_trend_cycle.csv into a Pandas DataFrame with the first column set as the index and parse_dates=True
data = pd.read_csv('https://github.com/letsgoexploring/econ126/raw/master/Data/Csv/business_cycle_data_actual_trend_cycle.csv',index_col=0,parse_dates=True)
# Print the last five rows of the data
data.tail()
###Output
_____no_output_____
###Markdown
Exercise: GDP and InflationConstruct a plot of the cyclical components of GDP and inflation.
###Code
# Construct plot
plt.plot(data['pce_inflation_cycle']*100,alpha=0.75,lw=3,label='Inflation')
plt.plot(data['gdp_cycle']*100,c='r',alpha=0.5,label='GDP')
plt.grid()
plt.ylabel('Percent')
plt.title('GDP and Inflation')
# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____
###Markdown
Exercise: GDP and the 3-Month T-Bill RateConstruct a plot of the cyclical components of GDP and the 3-month T-bill rate.
###Code
# Construct plot
plt.plot(data['t_bill_3mo_cycle']*100,alpha=0.75,lw=3,label='3-Month T-Bill Rate')
plt.plot(data['gdp_cycle']*100,c='r',alpha=0.5,label='GDP')
plt.grid()
plt.ylabel('Percent')
plt.title('GDP and 3-Month T-Bill Rate')
# Place legend to right of figure. PROVIDED
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____
###Markdown
Correlations Between GDP, Inflation, and 3-Month T-Bill RateCompute the coefficients of corrrelation between GDP, inflation, and the 3-month T-bill rate.
###Code
data[['gdp_cycle','pce_inflation_cycle','t_bill_3mo_cycle']].corr()
###Output
_____no_output_____
###Markdown
Strong (but not perfect!) correlations between GDP and inflation and GDP and the T-bill rate suggest link between nominal and real quantities over the business cycle that should be exaplined by business cycle theory. The New-Keynesian ModelThe most basic version of the New-Keynesian Model can be expressed as:\begin{align}y_t & = E_t y_{t+1} - \left( r_{t} - \bar{r}\right) + g_t\\i_{t} & = r_{t} + E_t \pi_{t+1}\\i_{t} & = \bar{r} + \pi^T + \phi_{\pi}\big(\pi_t - \pi^T\big) + \phi_{y}\big(y_t - \bar{y}\big) + v_t\\\pi_t -\pi^T & = \beta \left( E_t\pi_{t+1} - \pi^T\right) + \kappa (y_t -\bar{y})+ u_t,\end{align}where: $y_t$ is (log) output, $r_t$ is the real interest rate, $i_t$ is the nominal interest rate, $\pi_t$ is the rate of inflation between periods $t-1$ and $t$, $\bar{r}$ is the long-run average real interest rate or the *natural rate of interest*, $\beta$ is the household's subjective discount factor, and $\pi^T$ is the central bank's inflation target. The coeffieints $\phi_{\pi}$ and $\phi_{y}$ reflect the degree of intensity to which the central bank *endogenously* adjusts the nominal interest rate in response to movements in inflation and output.The variables $g_t$, $u_t$, and $v_t$ represent exogenous shocks to aggregate demand, inflation, and monetary policy. They follow AR(1) processes:\begin{align}g_{t+1} & = \rho_g g_{t} + \epsilon^g_{t+1}\\u_{t+1} & = \rho_u u_{t} + \epsilon^u_{t+1}\\v_{t+1} & = \rho_v v_{t} + \epsilon^v_{t+1}.\end{align}The goal is to compute impulse responses in the model to a one percent exogenous increase in the nominal interest rate. We will use the following parameterization:| $\bar{y}$ | $\beta$ | $\bar{r}$ | $\kappa$ | $\pi^T$ | $\phi_{\pi}$ | $\phi_y$ | $\rho_g$ | $\rho_u$ | $\rho_v$ | |-----------|---------|--------------|----------|---------|--------------|----------|----------|----------|---------|| 0 | 0.995 | $-\log\beta$ | 0.1 | 0.02/4 | 1.5 | 0.5/4 | 0.5 | 0.5 | 0.5 |
###Code
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series
parameters = pd.Series()
parameters['y_bar'] = 0
parameters['beta'] = 0.995
parameters['r_bar'] = -np.log(parameters.beta)
parameters['kappa'] = 0.1
parameters['pi_T'] = 0.02/4
parameters['phi_pi'] = 1.5
parameters['phi_y'] = 0.5/4
parameters['rho_g'] = 0.5
parameters['rho_u'] = 0.5
parameters['rho_v'] = 0.5
# Print the model's parameters
print(parameters)
# Create variable called 'var_names' that stores the variable names in a list with state variables ordered first
var_names = ['g','u','v','y','pi','i','r']
# Create variable called 'shock_names' that stores an exogenous shock name for each state variable.
shock_names = ['e_g','e_u','e_v']
# Define a function that evaluates the equilibrium conditions of the model solved for zero. PROVIDED
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters. PROVIDED
p = parameters
# Current variables. PROVIDED
cur = variables_current
# Forward variables. PROVIDED
fwd = variables_forward
# IS equation
is_equation = fwd.y - (cur.r -p.r_bar) + cur.g - cur.y
# Fisher_equation
fisher_equation = cur.r + fwd.pi - cur.i
# Monetary policy
monetary_policy = p.r_bar + p.pi_T + p.phi_pi*(cur.pi - p.pi_T) + p.phi_y*cur.y + cur.v - cur.i
# Phillips curve
phillips_curve = p.beta*(fwd.pi- p.pi_T) + p.kappa*cur.y + cur.u - (cur.pi-p.pi_T)
# Demand process
demand_process = p.rho_g*cur.g - fwd.g
# Monetary policy process
monetary_policy_process = p.rho_v*cur.v - fwd.v
# Inflation process
inflation_process = p.rho_u*cur.u - fwd.u
# Stack equilibrium conditions into a numpy array
return np.array([
is_equation,
fisher_equation,
monetary_policy,
phillips_curve,
demand_process,
monetary_policy_process,
inflation_process
])
# Initialize the model into a variable named 'nk_model'
nk_model = ls.model(equations = equilibrium_equations,
n_states=3,
var_names=var_names,
shock_names=shock_names,
parameters = parameters)
# Compute the steady state numerically using .compute_ss() method of nk_model
guess = [0,0,0,0,0.01,0.01,0.01]
nk_model.compute_ss(guess)
# Print the computed steady state
print(nk_model.ss)
# Find the log-linear approximation around the non-stochastic steady state and solve using .approximate_and_solve() method of nk_model
# set argument 'log_linear' to False because the model is already log-linear.
nk_model.approximate_and_solve(log_linear=False)
###Output
_____no_output_____
###Markdown
Impulse ResponsesCompute a 21 period impulse response of the model's variables to a 0.01/4 unit shock to the exogenous component of monetary policy ($v_t$) in period 5.
###Code
# Compute impulse responses
# Print the first 10 rows of the computed impulse responses to the monetary policy shock
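# A possible completion (an assumption about linearsolve's API: .impulse(T, t0, shocks)
# computes the IRFs and stores them in the .irs dict keyed by shock name; the shock
# list is ordered like shock_names = ['e_g','e_u','e_v']):
nk_model.impulse(T=21, t0=5, shocks=[0, 0, 0.01/4])
print(nk_model.irs['e_v'].head(10))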
###Output
_____no_output_____
###Markdown
Plot the computed impulse responses of the nominal interest rate, the real interest rate, output, and inflation. Express inflation and interest rates in *annualized* (e.g., multiplied by 4) terms.
###Code
# Create figure. PROVIDED
fig = plt.figure(figsize=(12,8))
# Create upper-left axis. PROVIDED
ax1 = fig.add_subplot(2,2,1)
# Create upper-right axis. PROVIDED
ax2 = fig.add_subplot(2,2,2)
# Create lower-left axis. PROVIDED
ax3 = fig.add_subplot(2,2,3)
# Create lower-right axis. PROVIDED
ax4 = fig.add_subplot(2,2,4)
# Set axis 1 ylabel
ax1.set_ylabel('% dev from steady state')
# Set axis 2 ylabel
ax2.set_ylabel('% dev from steady state')
# Set axis 3 ylabel
ax3.set_ylabel('% dev from steady state')
# Set axis 4 ylabel
ax4.set_ylabel('% dev from steady state')
# Set axis 1 limits
ax1.set_ylim([-0.2,0.8])
# Set axis 2 limits
ax2.set_ylim([-0.2,0.8])
# Set axis 3 limits
ax3.set_ylim([-0.4,0.1])
# Set axis 4 limits
ax4.set_ylim([-0.4,0.1])
# Plot the nominal interest rate, real interest rate, output, and inflation
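# A possible completion (assumes the IRFs computed above are stored in nk_model.irs['e_v']
# as deviations from steady state, with columns named after the model variables);
# interest rates and inflation are annualized by multiplying by 4 and expressed in percent:
irf = nk_model.irs['e_v']
ax1.plot(irf['i']*4*100, lw=4, alpha=0.75)
ax1.set_title('Nominal Interest Rate')
ax2.plot(irf['r']*4*100, lw=4, alpha=0.75)
ax2.set_title('Real Interest Rate')
ax3.plot(irf['y']*100, lw=4, alpha=0.75)
ax3.set_title('Output')
ax4.plot(irf['pi']*4*100, lw=4, alpha=0.75)
ax4.set_title('Inflation')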
###Output
_____no_output_____ |
experiments/tl_1v2/oracle.run1-oracle.run2/trials/17/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed ParametersThese are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:oracle.run1-oracle.run2",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 10000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 10000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run2_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run2_",
},
],
"dataset_seed": 154325,
"seed": 154325,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assesment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
3_0.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/sharathsrini/Extended-Kalman-Filter-for-Sensor-Fusion/blob/master/3_0.ipynb)
###Code
############PROGRAM STARTS HERE ######################
import numpy as np
import math as MT
from math import floor
import matplotlib.pyplot as plt
import time
###CONSTANTS
max_angle = 0.785398 #45Deg
min_angle = -0.785398 #-45Deg
free_space=0
locked_space=1
### HYPER PARAMETERS
NUMBERS_OF_STEERS=4
STEER_OFFSET=5.0*np.pi/180
LENGTH=4.0
NUM_THETA_CELLS =60
### GRID MAKING
grid_x_m = 50
grid_y_m = 50
### FOR CELL DIVISION
coll_cell_side = 0.5
grid_on_x = np.int( np.ceil(grid_x_m/coll_cell_side) )
grid_on_y = np.int( np.ceil(grid_y_m/coll_cell_side) )
### FIT ZEROS
GRID_TEST = np.zeros((grid_on_x,grid_on_y),np.int)
### INITIALIZE COST_MAPS AND ASTAR CLOSE MAPS
closed_A_star=np.array([[free_space for x in range(grid_on_x)] for y in range(grid_on_y)])
cost_map = np.array([[-1 for x in range(grid_on_x)] for y in range(grid_on_y)])
policy_map = [[' ' for x in range(grid_on_x)] for y in range(grid_on_y)]
### MOTION MATRIX FOR ASTAR
motion_mat=np.array([[1,0],[-1,0],[0,-1],[0,1]])
policy_mat=['>',]
### STATE CLASS
class state:
def __init__(self,x,y,theta,g,f,h,steer):
self.x=x
self.y=y
self.theta=theta
self.g=g
self.f=f
self.h=h
self.steer=steer
## GOAL NODE
class goal:
def __init__(self, x, y):
self.x = x
self.y = y
### INPUT VEHICLE CO-ORDINATES
class vehicle_points():
def __init__(self,input_co_ordinates,center):
self.input_co_ordinates=input_co_ordinates
self.center=center
### PATH CLASS FOR TRACKING
class path():
def __init__(self,closed,came_from,final):
self.closed=closed
self.came_from=came_from
self.final=final
### AUGMENT DELTA +/- GIVEN OFFSET
def delta_augmentation(delta, numbers, offset):
delta_list = []
delta_list.append(delta)
delta_calc_add=delta_calc_sub = delta
for i in range(0 ,numbers):
delta_calc_add += offset
delta_calc_sub -= offset
if delta_calc_add < max_angle:
delta_list.append(delta_calc_add)
if delta_calc_sub > min_angle:
delta_list.append(delta_calc_sub)
return delta_list
### NEW STATE TRANSITIONS
def new_state_transition(current_state,goal,speed):
next_states = []
delta_angles = delta_augmentation( delta=current_state.steer, numbers=NUMBERS_OF_STEERS,offset=STEER_OFFSET)
DT=1.0/speed
for delta in delta_angles:
omega = (speed / LENGTH) * np.tan(delta)
theta2 = normalize_theta(current_state.theta + (omega * DT))
dX = speed * np.cos(theta2) * DT
dY = speed * np.sin(theta2) * DT
#i=i+1
#print(i,[SPEED,np.cos(theta2),DT,omega,theta2,dX,dY])
x2 = current_state.x + dX
y2 = current_state.y + dY
g2 = current_state.g + np.sqrt(dX*dX + dY*dY)
arc_cost=arc_heuristic(goal.x-x2,goal.y-y2,theta2)
#print(arc_cost)
h2 = euclidean_distance([x2,y2],[goal.x,goal.y])+arc_cost
if(cost_map[idx(x2)][idx(y2)]==-1):
h2+=100
else:
h2+=cost_map[idx(x2)][idx(y2)]
f2 = g2 + h2
new_state=state(x2,y2,theta2,g2,f2,h2,delta)
#jj=np.arctan2(goal.y-y2,goal.x-x2)
#print(['X: ',x2,'Y: ',y2,'ang_goal',normalize_theta(jj)*180/np.pi,'taken_angle',theta2*180/np.pi,'cost:',arc_cost])
next_states.append(new_state)
return next_states
### TRANSFORM VEHICLE CO-ORDINATES
def transform_vehicle_co_ordinates(vehicle_point_object, next_state, angle_of_rotation):
displaced_matrix = np.array([next_state[0]-vehicle_point_object.center[0],next_state[1]-vehicle_point_object.center[1]])
transformed_matrix=np.add(vehicle_point_object.input_co_ordinates,displaced_matrix)
return vehicle_points(rotate_vehicle_co_ordinates(vehicle_points(transformed_matrix,next_state),angle_of_rotation),next_state)
### ROTATE VEHICLE CO-ORDINATES
def rotate_vehicle_co_ordinates(vehicle_point_object,angle_of_rotation):
rotation_matrix = np.array([[np.cos(angle_of_rotation), np.sin(angle_of_rotation)],
[-np.sin(angle_of_rotation), np.cos(angle_of_rotation)]])
return np.add(vehicle_point_object.center,np.matmul(np.subtract(vehicle_point_object.input_co_ordinates,vehicle_point_object.center), rotation_matrix))
### CHECK VEHICLE IN SAFE POSITION
def is_vehicle_in_safe_position(vehicle_point_object,grid):
for point in vehicle_point_object.input_co_ordinates:
if(is_within_grid( idx(point[0]),idx(point[1])) and
(grid[idx(point[0])][idx(point[1])]==0)):
continue
else:
return False
return True
### CHK A STAR VEHICLE:
def A_vehicle_is_safe(vehicle_point_A,add_value,grid):
vp=vehicle_point_A.input_co_ordinates+add_value
for point in vp:
if(is_within_grid( idx(point[0]),idx(point[1])) and
(grid[idx(point[0])][idx(point[1])]==0)):
continue
else:
#print('False',add_value)
return False
#('True',add_value)
return True
### EUCLIDEAN DISTANCE
def euclidean_distance(start_point,end_point):
return np.round(np.sqrt((end_point[0]-start_point[0])**2 +(end_point[1]-start_point[1])**2),4)
### ARC HEURISTIC
def arc_heuristic(x,y,theta_to_be_taken):
ang_rad=normalize_theta(np.arctan2(y,x))
diff=np.pi-abs(abs(theta_to_be_taken-ang_rad)-np.pi)
return diff
### NORMALIZE THETA
def normalize_theta(theta):
if( theta<0 ):
theta +=( 2*np.pi )
elif( theta>2*np.pi ):
theta %=( 2*np.pi)
return theta
### THETA TO STACK NUMBER
def theta_to_stack_number(theta):
new = (theta+2*np.pi)%(2*np.pi)
    stack_number = round(new * NUM_THETA_CELLS / (2 * np.pi)) % NUM_THETA_CELLS  # map angle in [0, 2*pi) onto one of NUM_THETA_CELLS heading bins
return int(stack_number)
### FLOOR VALUE
def idx(value):
return int(MT.floor(value))
### CHECK WITHIN GRID
def is_within_grid(x,y):
return (x>=0 and x<grid_on_x and y>=0 and y<grid_on_y)
### IS_GOAL_REACHED
def is_goal_reached(start,goal):
result=False
if( idx(start[0]) == idx(goal[0]) and idx(start[1])==idx(goal[1])):
result=True
return result
### A_STAR SEARCH
def A_Star(current_state,goal,grid):
vehicle_point_A=vehicle_points(np.array([[0,2],[0,1],[0,-1],[0,-2],[1,0],[2,0],[-1,0],[-2,0]]),[0,0])
print("STARTED A*")
open_list = []
open_list.append(current_state )
is_goal_attained=False
cost=0
heu=0
closed_A_star[current_state.x][current_state.y]=1
cost_map[current_state.x][current_state.y]=cost
while(len(open_list)>0):
open_list.sort(key=lambda state_srt : float(state_srt.f))
old_state=open_list.pop(0)
if(goal.x==old_state.x and goal.y==old_state.y):
is_goal_attained=True
print("GOAL REACHED BY A*")
return is_goal_attained
node=np.array([old_state.x,old_state.y])
for move in motion_mat:
nxt_node=node+move
if( is_within_grid(nxt_node[0],nxt_node[1])):
if(grid[nxt_node[0]][nxt_node[1]]==0 and closed_A_star[nxt_node[0]][nxt_node[1]]==0):
if(A_vehicle_is_safe(vehicle_point_A,np.array([nxt_node]),grid)):
g2=old_state.g+1
heu=euclidean_distance([nxt_node[0],nxt_node[1]],[goal.x,goal.y])
new_state=state(nxt_node[0],nxt_node[1],0,g2,g2+heu,heu,0)
open_list.append(new_state)
closed_A_star[nxt_node[0]][nxt_node[1]]=1
cost_map[nxt_node[0]][nxt_node[1]]=g2
#plt.plot([node[0],nxt_node[0]],[node[1],nxt_node[1]])
return is_goal_attained
### SEARCH ALGORITHM
def Hybrid_A_Star(grid,current_state,goal,vehicle_point_object,speed):
print("STARTED HYBRID A*")
start_time = time.time()
closed = np.array([[[free_space for x in range(grid_on_x)] for y in range(grid_on_y)] for cell in range(NUM_THETA_CELLS)])
came_from = [[[free_space for x in range(grid_on_x)] for y in range(grid_on_y)] for cell in range(NUM_THETA_CELLS)]
is_goal_attained=False
stack_number=theta_to_stack_number(current_state.theta)
closed[stack_number][idx(current_state.x)][idx(current_state.y)]=1
came_from[stack_number][idx(current_state.x)][idx(current_state.y)]=current_state
total_closed=1
opened=[current_state]
while (len(opened)>0):
opened.sort(key=lambda state_srt : float(state_srt.f))
state_now=opened.pop(0)
#print([state_now.x,state_now.y,state_now.theta*np.pi/180])
if(is_goal_reached([idx(state_now.x),idx(state_now.y)],[idx(goal.x),idx(goal.y)])):
is_goal_attained=True
print('GOAL REACHED BY HYBRID A*')
ret_path=path(closed,came_from,state_now)
end_time = time.time()
print(end_time - start_time)
return (is_goal_attained,ret_path)
for evry_state in new_state_transition(state_now,goal,speed):
#print('Before',[evry_state.x,evry_state.y,evry_state.theta*np.pi/180])
if(not is_within_grid(idx(evry_state.x),idx(evry_state.y))):
continue
stack_num=theta_to_stack_number(evry_state.theta)
#print([stack_num,idx(evry_state.x),idx(evry_state.y)])
if closed[stack_num][idx(evry_state.x)][idx(evry_state.y)]==0 and grid[idx(evry_state.x)][idx(evry_state.y)]==0:
new_vehicle_point_obj = transform_vehicle_co_ordinates(vehicle_point_object,[evry_state.x,evry_state.y],evry_state.theta)
#print(new_vehicle_point_obj.input_co_ordinates)
if(is_vehicle_in_safe_position(new_vehicle_point_obj,grid)):
opened.append(evry_state)
closed[stack_num][idx(evry_state.x)][idx(evry_state.y)]=1
came_from[stack_num][idx(evry_state.x)][idx(evry_state.y)]=state_now
total_closed+= 1
#print('After',[evry_state.x,evry_state.y,evry_state.theta*np.pi/180])
#plt.plot([state_now.x,evry_state.x],[state_now.y,evry_state.y])
#closed[stack_num][idx(evry_state.x)][idx(evry_state.y)]=1
#print('-------------')
print('No Valid path')
ret_path=path(closed,came_from,evry_state)
return (is_goal_attained,ret_path)
### RECONSTRUCT PATH
def reconstruct_path(came_from, start, final):
path = [(final)]
stack = theta_to_stack_number(final.theta)
current = came_from[stack][idx(final.x)][idx(final.y)]
stack = theta_to_stack_number(current.theta)
while [idx(current.x), idx(current.y)] != [idx(start[0]), idx(start[1])] :
path.append(current)
current = came_from[stack][idx(current.x)][idx(current.y)]
stack = theta_to_stack_number(current.theta)
return path
###DISPLAY PATH
def show_path(path, start, goal,vehicle_pt_obj_act):
X=[start[0]]
Y=[start[1]]
Theta=[]
path.reverse()
X += [p.x for p in path]
Y += [p.y for p in path]
Theta+=[p.theta for p in path]
for i in range(len(X)-1):
Xj=[]
Yj=[]
vehicle_pt_obj_now=transform_vehicle_co_ordinates(vehicle_pt_obj_act,[X[i],Y[i]], Theta[i])
rev=vehicle_pt_obj_now.input_co_ordinates
revI=rev[:4]
revL=rev[4:]
revF=np.concatenate([revI,revL[::-1]])
l=np.append(revF,[revF[0]],axis=0)
#print(l)
for i in l:
Xj.append(i[0])
Yj.append(i[1])
plt.plot(Xj,Yj)
print([p.steer*180/np.pi for p in path])
plt.plot(X,Y, color='black')
plt.scatter([start[0]], [start[1]], color='blue')
plt.scatter([goal[0]], [goal[1]], color='red')
plt.show()
### PUT OBSTACLES:
def put_obstacles(X_list,Y_list,grid):
if(len(X_list)>0):
for i in X_list:
x_XO=[]
x_YO=[]
for k in range(i[1],i[2]):
x_XO.append(i[0])
x_YO.append(k)
grid[i[0]][k]=1
plt.scatter(x_XO,x_YO)
if(len(Y_list)>0):
for i in Y_list:
y_XO=[]
y_YO=[]
for k in range(i[1],i[2]):
y_XO.append(i[0])
y_YO.append(k)
grid[k][i[0]]=1
plt.scatter(y_YO,y_XO)
import numpy as np
import matplotlib.pyplot as plt
import math
# Vehicle parameter
W = 2.5 #[m] width of vehicle
LF = 3.7 #[m] distance from rear to vehicle front end of vehicle
LB = 1.0 #[m] distance from rear to vehicle back end of vehicle
TR = 0.5 # Tyre radius [m] for plot
TW = 1.2 # Tyre width [m] for plot
MAX_STEER = 0.6 #[rad] maximum steering angle
WB = 2.7 #[m] wheel base: rear to front steer
def plot_car(x, y, yaw, steer, return_car=False):
car_color = "-k"
LENGTH = LB+LF
car_OutLine = np.array([[-LB, (LENGTH - LB), (LENGTH - LB), (-LB), (-LB)],
[W / 2, W / 2, - W / 2, - W / 2, W / 2]])
rr_wheel = np.array([[TR, - TR, - TR, TR, TR],
[-W / 12.0 + TW, - W / 12.0 + TW, W / 12.0 + TW, W / 12.0 + TW, - W / 12.0 + TW]])
rl_wheel = np.array([[TR, - TR, - TR, TR, TR],
[-W / 12.0 - TW, - W / 12.0 - TW, W / 12.0 - TW, W / 12.0 - TW, - W / 12.0 - TW]])
fr_wheel = np.array([[TR, - TR, - TR, TR, TR],
[- W / 12.0 + TW, - W / 12.0 + TW, W / 12.0 + TW, W / 12.0 + TW, - W / 12.0 + TW]])
fl_wheel = np.array([[TR, - TR, - TR, TR, TR],
[-W / 12.0 - TW, - W / 12.0 - TW, W / 12.0 - TW, W / 12.0 - TW, - W / 12.0 - TW]])
Rot1 = np.array([[math.cos(yaw), math.sin(yaw)],
[-math.sin(yaw), math.cos(yaw)]])
Rot2 = np.array([[math.cos(steer), -math.sin(steer)],
[math.sin(steer), math.cos(steer)]])
fr_wheel = np.dot(fr_wheel.T, Rot2).T
fl_wheel = np.dot(fl_wheel.T, Rot2).T
fr_wheel[0,:] += WB
fl_wheel[0,:] += WB
fr_wheel = np.dot(fr_wheel.T, Rot1).T
fl_wheel = np.dot(fl_wheel.T, Rot1).T
car_OutLine = np.dot(car_OutLine.T, Rot1)
rr_wheel = np.dot(rr_wheel.T, Rot1).T
rl_wheel = np.dot(rl_wheel.T, Rot1).T
car_OutLine = car_OutLine.T
car_OutLine[0,:] += x
car_OutLine[1,:] += y
fr_wheel[0, :] += x
fr_wheel[1, :] += y
rr_wheel[0, :] += x
rr_wheel[1, :] += y
fl_wheel[0, :] += x
fl_wheel[1, :] += y
rl_wheel[0, :] += x
rl_wheel[1, :] += y
    if return_car == False:
plt.plot(x, y, "*")
plt.plot(fr_wheel[0, :], fr_wheel[1, :], car_color)
plt.plot(rr_wheel[0, :], rr_wheel[1, :], car_color)
plt.plot(fl_wheel[0, :], fl_wheel[1, :], car_color)
plt.plot(rl_wheel[0, :], rl_wheel[1, :], car_color)
plt.plot(car_OutLine[0, :], car_OutLine[1, :], car_color)
else:
return car_OutLine[0, :], car_OutLine[1, :]
plot_car(0.0, 0.0, np.pi, 0.0, return_car=False)
def show_animation(path, start, goal,vehicle_pt_obj_act):
x =[]
y = []
yaw = []
steer = []
path.reverse()
x += [p.x for p in path]
y += [p.y for p in path]
yaw += [p.theta for p in path]
steer =[p.steer*180/np.pi for p in path]
print(type(steer[0]))
for ii in range(0,len(x),5):
plt.cla()
plt.plot(x, y, "-r", label="Hybrid A* path")
plot_car(x[ii], y[ii], yaw[ii], steer[ii])
plt.grid(True)
plt.axis("equal")
plt.pause(0.01)
START=[40,45]
SPEED=60
goal_node = goal( 4,3)
present_heading=(np.pi/2)
vehicle_pt_obj_actual = vehicle_points( np.array([[0.5,0.5],[0.5,1.5],[0.5,2.5],[0.5,3.5],[1.5,0.5],[1.5,1.5],[1.5,2.5],[1.5,3.5]]),[0,2] )
vehicle_pt_obj=transform_vehicle_co_ordinates(vehicle_pt_obj_actual,START,present_heading)
#print(vehicle_pt_obj.input_co_ordinates)
current_state = state(vehicle_pt_obj.center[0], vehicle_pt_obj.center[1], present_heading, 0.0, 0.0, 0.0,0.0)
put_obstacles([[15,0,30],[26,0,30],[27,0,25],[60,15,35]],[],GRID_TEST)
if(A_Star(state(goal_node.x,goal_node.y,0,0,0,0,0),goal(START[0],START[1]),GRID_TEST)):
process_further,ret_val=Hybrid_A_Star(GRID_TEST,current_state,goal_node,vehicle_pt_obj,SPEED)
if(process_further):
show_animation(reconstruct_path(ret_val.came_from,START,ret_val.final),START,[goal_node.x,goal_node.y],vehicle_pt_obj_actual)
else:
print("GOAL CANT BE REACHED!!")
else:
print("GOAL CANT BE REACHED!!")
###Output
STARTED A*
GOAL REACHED BY A*
STARTED HYBRID A*
GOAL REACHED BY HYBRID A*
0.895781040192
<type 'float'>
|
Task 03-Music Recommendation System/Music Recommendation System.ipynb | ###Markdown
 ***Virtual Internship Program*** ***Data Science Tasks*** ***Author: SARAVANAVEL*** ***BEGINNER LEVEL TASK*** Task 3 - Music recommender system. A music recommender system can suggest songs to users based on their listening patterns. Dataset link: https://www.kaggle.com/c/kkbox-music-recommendation-challenge/data Importing packages
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
import numpy as np
import time
import Recommenders as Recommenders
###Output
_____no_output_____
###Markdown
Load music data
###Code
members = pd.read_csv(r'c:\LGMVIP\members.csv',parse_dates=["registration_init_time","expiration_date"])
members.head()
df_train = pd.read_csv(r'c:\LGMVIP\train.csv')
df_train.head()
df_songs = pd.read_csv(r'c:\LGMVIP\songs.csv')
df_songs.head()
df_songs_extra = pd.read_csv(r'c:\LGMVIP\song_extra_info.csv')
df_songs_extra.head()
df_test = pd.read_csv(r'c:\LGMVIP\test.csv')
df_test.head()
###Output
_____no_output_____
###Markdown
Creating a new dataset
###Code
res = df_train.merge(df_songs[['song_id','song_length','genre_ids','artist_name','language']], on=['song_id'], how='left')
res.head()
train = res.merge(df_songs_extra,on=['song_id'],how = 'left')
train.head()
song_id = train.loc[:,["name","target"]]
song1 = song_id.groupby(["name"],as_index=False).count().rename(columns = {"target":"listen_count"})
song1.head()
dataset=train.merge(song1,on=['name'],how= 'left')
df=pd.DataFrame(dataset)
df.drop(columns=['source_system_tab','source_screen_name','source_type','target','isrc'],axis=1,inplace=True)
df=df.rename(columns={'msno':'user_id'})
###Output
_____no_output_____
###Markdown
Loading our new dataset
###Code
df.head()
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
df.shape
#checking null values
df.isnull().sum()
#filling null values
df['song_length'].fillna('0',inplace=True)
df['genre_ids'].fillna('0',inplace=True)
df['artist_name'].fillna('none',inplace=True)
df['language'].fillna('0',inplace=True)
df['name'].fillna('none',inplace=True)
df['listen_count'].fillna('0',inplace=True)
#Rechecking null values
df.isnull().sum()
print("Total no of songs:",len(df))
###Output
Total no of songs: 7377418
###Markdown
Create a subset of the dataset
###Code
df = df.head(10000)
#Merge song title and artist_name columns to make a new column
df['song'] = df['name'].map(str) + " - " + df['artist_name']
###Output
_____no_output_____
###Markdown
Showing the most popular songs in the dataset. The column listen_count denotes the number of times a song has been listened to. Using this column, we'll build a dataframe of the most popular songs:
###Code
song_gr = df.groupby(['song']).agg({'listen_count': 'count'}).reset_index()
grouped_sum = song_gr['listen_count'].sum()
song_gr['percentage'] = song_gr['listen_count'].div(grouped_sum)*100
song_gr.sort_values(['listen_count', 'song'], ascending = [0,1])
###Output
_____no_output_____
###Markdown
Count number of unique users in the dataset
###Code
users = df['user_id'].unique()
print("The no. of unique users:", len(users))
###Output
The no. of unique users: 1622
###Markdown
Now, we define a training dataframe which will be used to create a song recommender. Count the number of unique songs in the dataset
###Code
###Fill in the code here
songs = df['song'].unique()
len(songs)
###Output
_____no_output_____
###Markdown
Create a song recommender
###Code
train_data, test_data = train_test_split(df, test_size = 0.20, random_state=0)
print(train.head(5))
###Output
msno \
0 FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=
1 Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=
2 Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=
3 Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=
4 FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=
song_id source_system_tab \
0 BBzumQNXUHKdEBOB7mAJuzok+IJA1c2Ryg/yzTF6tik= explore
1 bhp/MpSNoqoxOIB+/l8WPqu6jldth4DIpCm3ayXnJqM= my library
2 JNWfrrC7zNN7BdMpsISKa4Mw+xVJYNnxXh3/Epw7QgY= my library
3 2A87tzfnJTSWqD7gIZHisolhe4DMdzkbd6LzO1KHjNs= my library
4 3qm6XTZ6MOCU11x8FIVbAGH5l5uMkT3/ZalWG1oo2Gc= explore
source_screen_name source_type target song_length genre_ids \
0 Explore online-playlist 1 206471.0 359
1 Local playlist more local-playlist 1 284584.0 1259
2 Local playlist more local-playlist 1 225396.0 1259
3 Local playlist more local-playlist 1 255512.0 1019
4 Explore online-playlist 1 187802.0 1011
artist_name language name \
0 Bastille 52.0 Good Grief
1 Various Artists 52.0 Lords of Cardboard
2 Nas 52.0 Hip Hop Is Dead(Album Version (Edited))
3 Soundway -1.0 Disco Africa
4 Brett Young 52.0 Sleep Without You
isrc
0 GBUM71602854
1 US3C69910183
2 USUM70618761
3 GBUQH1000063
4 QM3E21606003
###Markdown
Creating Popularity based Music Recommendation. Using the popularity_recommender class we made in the Recommenders.py package, we create the lists given below (a rough sketch of the underlying idea follows the output):
###Code
pm = Recommenders.popularity_recommender_py() #create an instance of the class
pm.create(train_data, 'user_id', 'song')
user_id1 = users[5] #Recommended songs list for a user
pm.recommend(user_id1)
user_id2 = users[45]
pm.recommend(user_id2)
###Output
_____no_output_____
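###Markdown
The popularity_recommender_py class lives in the external Recommenders.py file, so its internals are not shown here. As a rough, hypothetical sketch only (not the actual implementation), a popularity-based recommender boils down to ranking songs by how many users interacted with them and returning the same top-N list to every user. The helper name sketch_popularity_recommendations below is made up for illustration and assumes a dataframe with user_id and song columns, like train_data above.
###Code
# Hypothetical sketch of a popularity-based recommender (NOT the Recommenders.py code).
# Assumes a dataframe with 'user_id' and 'song' columns, like train_data above.
def sketch_popularity_recommendations(data, item_col='song', top_n=10):
    # count how many user interactions each song has
    counts = (data.groupby(item_col)['user_id']
                  .count()
                  .sort_values(ascending=False)
                  .reset_index(name='score'))
    counts['rank'] = range(1, len(counts) + 1)
    # the same ranking is returned regardless of which user asks
    return counts.head(top_n)
sketch_popularity_recommendations(train_data)
###Output
_____no_output_____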
###Markdown
Build a song recommender with personalization. We now create an item similarity based collaborative filtering model that allows us to make personalized recommendations to each user (a simplified sketch of the underlying co-occurrence idea is shown after the next cell). Creating Similarity based Music Recommendation in Python:
###Code
is_model = Recommenders.item_similarity_recommender_py()
is_model.create(train_data, 'user_id', 'song')
###Output
_____no_output_____
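###Markdown
The item_similarity_recommender_py internals are also hidden in Recommenders.py. As a simplified, hypothetical sketch of the idea only (not the actual class), item-item collaborative filtering can be approximated by a Jaccard co-occurrence similarity between the sets of users who listened to each song; sketch_similar_songs below is an illustrative helper name, not part of the package.
###Code
# Illustrative sketch of item-item similarity via Jaccard overlap of listener sets.
# This is NOT the Recommenders.py implementation, only the co-occurrence idea behind it.
def sketch_similar_songs(data, song, top_n=5):
    # set of users who listened to each song
    users_by_song = data.groupby('song')['user_id'].apply(set)
    if song not in users_by_song.index:
        return []
    base_users = users_by_song[song]
    scores = {}
    for other, users in users_by_song.items():
        if other == song:
            continue
        union = len(base_users | users)
        if union:
            # Jaccard similarity: shared listeners / all listeners of either song
            scores[other] = len(base_users & users) / union
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
sketch_similar_songs(train_data, train_data['song'].iloc[0])
###Output
_____no_output_____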
###Markdown
Use the personalized model to make some song recommendations
###Code
#Print the songs for the user in training data
user_id1 = users[1]
#Fill in the code here
user_items1 = is_model.get_user_items(user_id1)
print("------------------------------------------------------------------------------------")
print("Songs played by the first user %s:" % user_id1)
print("------------------------------------------------------------------------------------")
for user_item in user_items1:
    print(user_item)
print("----------------------------------------------------------------------")
print("Similar songs recommended for the first user:")
print("----------------------------------------------------------------------")
#Recommend songs for the user using personalized model
is_model.recommend(user_id1)
user_id2 = users[7]
#Fill in the code here
user_items2 = is_model.get_user_items(user_id2)
print("------------------------------------------------------------------------------------")
print("Songs played by second user %s:" % user_id2)
print("------------------------------------------------------------------------------------")
for user_item in user_items2:
print(user_item)
print("----------------------------------------------------------------------")
print("Similar songs recommended for the second user:")
print("----------------------------------------------------------------------")
#Recommend songs for the user using personalized model
is_model.recommend(user_id2)
###Output
------------------------------------------------------------------------------------
Songs played by second user Vgeu+u3vXE0FhQtG/Vr3I/U3V0TX/jzQAEBhi3S3qi0=:
------------------------------------------------------------------------------------
親愛陌生人【土豆網偶像劇[歡迎愛光臨]片頭曲】 - 丁噹 (Della)
道聽塗說 (Remembering you) - 林芯儀 (Shennio Lin)
愛磁場 - 張韶涵 (Angela Chang)
宇宙小姐 - S.H.E
愛旅行的人 - 張韶涵 (Angela Chang)
敢愛敢當 - 丁噹 (Della)
九號球 - [逆轉勝] 五月天∕怪獸 原聲原創紀 ([Second Chance] Soundtrack & Autobiography of Mayday Monster)
刺情 - 張韶涵 (Angela Chang)
Baby - Justin Bieber
Never Forget You - 張韶涵 (Angela Chang)
安靜了 - S.H.E
----------------------------------------------------------------------
Similar songs recommended for the second user:
----------------------------------------------------------------------
No. of unique songs for the user: 11
no. of unique songs in the training set: 4722
Non zero values in cooccurence_matrix :238
###Markdown
The popularity-based recommendation lists are the same for both users, but the similarity-based recommendations differ from user to user. We can also apply the model to find songs similar to any song in the dataset
###Code
is_model.get_similar_items(['U Smile - Justin Bieber'])
###Output
no. of unique songs in the training set: 4722
Non zero values in cooccurence_matrix :0
|
files/3.1-reading-data-with-pandas.ipynb | ###Markdown
Reading in data with pandas * What is Pandas? * How do I read files using Pandas? What is Pandas? A Python library -> a set of functions and data structures. pandas is for data analysis: * Reading data stored in CSV files (other file formats can be read as well) * Slicing and subsetting data in Dataframes (tables!) * Dealing with missing data * Reshaping data (long -> wide, wide -> long) * Inserting and deleting columns from data structures * Aggregating data using data grouping facilities using the split-apply-combine paradigm (a tiny stand-alone illustration follows the import below) * Joining of datasets (after they have been loaded into Dataframes)
###Code
import pandas as pd
###Output
_____no_output_____
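###Markdown
Most of the capabilities listed above (grouping, aggregating, joining) are not exercised in this lesson, so the next cell is a tiny stand-alone illustration of the split-apply-combine idea on made-up data; it does not use the SN7577 survey file.
###Code
# Tiny self-contained split-apply-combine example on made-up data (not SN7577)
example = pd.DataFrame({
    'group': ['a', 'a', 'b', 'b', 'b'],
    'value': [1, 2, 3, 4, 5],
})
# split by 'group', apply aggregations, combine into a summary table
example.groupby('group')['value'].agg(['count', 'mean', 'sum'])
###Output
_____no_output_____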
###Markdown
Load in data
###Code
# Press 'Tab' key to use autocompletion
df = pd.read_csv("SN7577.tab", sep='\t')
# If you forget the sep argument
df_without_sep = pd.read_csv("SN7577.tab")
# How to read in the first 10 lines
df_first_10 = pd.read_csv("SN7577.tab", nrows=10, sep='\t')
df_first_10
# How to read a subset of the columns?
df_subset = pd.read_csv("SN7577.tab", usecols=["Q1", "Q2"], sep='\t')
df_subset2 = pd.read_csv("SN7577.tab", usecols=list(range(0,9)), sep='\t')
list(range(0,9))
df_subset2
###Output
_____no_output_____
###Markdown
Getting information about a Dataframe
###Code
# df is a shorthand for DataFrame
df = pd.read_csv("SN7577.tab", sep='\t')
print(type(df))
df.head()
# What is the tail method?
df.tail()
df.tail(2) # the last 2 rows
# How many rows?
len(df)
# How many rows and columns
df.shape # property (without brackets)
# How many 'cells' in the table
df.size
# Column names
df.columns
# What are the data types?
df.dtypes
labels = ['Q1 is bla bla', 'Q2 is something else']
###Output
_____no_output_____
###Markdown
Exercise: print all columns. When we asked for the column names and their data types, the output was abridged, i.e. we didn't get the values for all of the columns. Can you write a small piece of code which will print all of the column names on separate lines?
###Code
df.head()
for column in df.columns:
print(column)
###Output
Q1
Q2
Q3
Q4
Q5ai
Q5aii
Q5aiii
Q5aiv
Q5av
Q5avi
Q5avii
Q5aviii
Q5aix
Q5ax
Q5axi
Q5axii
Q5axiii
Q5axiv
Q5axv
Q5bi
Q5bii
Q5biii
Q5biv
Q5bv
Q5bvi
Q5bvii
Q5bviii
Q5bix
Q5bx
Q5bxi
Q5bxii
Q5bxiii
Q5bxiv
Q5bxv
Q6
Q7a
Q7b
Q8
Q9
Q10a
Q10b
Q10c
Q10d
Q11a
Q11b
Q12a
Q12b
Q13i
Q13ii
Q13iii
Q13iv
Q14
Q15
Q16a
Q16b
Q16c
Q16d
Q16e
Q16f
Q16g
Q16h
Q17a
Q17b
Q17c
Q17d
Q17e
Q17f
Q17g
Q18ai
Q18aii
Q18aiii
Q18aiv
Q18av
Q18avi
Q18avii
Q18aviii
Q18aix
Q18bi
Q18bii
Q18biii
Q18biv
Q18bv
Q18bvi
Q18bvii
Q18bviii
Q18bix
Q19a
Q19b
Q19c
Q19d
access1
access2
access3
access4
access5
access6
access7
web1
web2
web3
web4
web5
web6
web7
web8
web9
web10
web11
web12
web13
web14
web15
web16
web17
web18
dbroad
intten
netfq
daily1
daily2
daily3
daily4
daily5
daily6
daily7
daily8
daily9
daily10
daily11
daily12
daily13
daily14
daily15
daily16
daily17
daily18
daily19
daily20
daily21
daily22
daily23
daily24
daily25
sunday1
sunday2
sunday3
sunday4
sunday5
sunday6
sunday7
sunday8
sunday9
sunday10
sunday11
sunday12
sunday13
sunday14
sunday15
sunday16
sunday17
sunday18
sunday19
sunday20
press1
press2
broadsheet1
broadsheet2
broadsheet3
popular1
popular2
popular3
popular4
popular5
sex
age
agegroups
numage
class
sgrade
work
gor
qual
ethnic
ethnicity
party
cie
wrkcie
income
tenure
tennet
lstage
maritl
numhhd
numkid
numkid2
numkid31
numkid32
numkid33
numkid34
numkid35
numkid36
wts
|
Notebooks/.ipynb_checkpoints/Gradient Boost-checkpoint.ipynb | ###Markdown
Scalers
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
import joblib
#pt = PowerTransformer(method='yeo-johnson')
#X[num_columns] = pt.fit_transform(X[num_columns])
X
column_transformer = ColumnTransformer(
[('num', StandardScaler(), num_columns),
('obj', OneHotEncoder(), obj_columns)],
remainder='passthrough')
trans_X = column_transformer.fit_transform(X)
joblib.dump(column_transformer, '../Models/Column_Transformer.pkl')
#joblib.dump(pt, '../Models/Power_Transformer.pkl')
#trans_X = trans_X.toarray()
y = np.asarray(y)
test_x = trans_X[:1000,]
test_y = y[:1000,]
trans_X = trans_X[1000:,]
y = y[1000:,]
test_y.shape
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(trans_X, y, random_state=16, test_size=0.2)
###Output
_____no_output_____
###Markdown
Gradient Boost Grid Search
###Code
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import make_scorer, mean_squared_log_error, mean_absolute_percentage_error
from keras import backend as K
def root_mean_squared_log_error(y_true, y_pred):
return np.sqrt(np.mean(np.square(np.log(1+y_pred) - np.log(1+y_true))))
gb = GradientBoostingRegressor(random_state=42)
scoring = {'MSLE': make_scorer(mean_squared_log_error),
'MAPE': make_scorer(mean_absolute_percentage_error)}
random_grid = {
"loss":['squared_error', 'absolute_error', 'huber'],
"learning_rate": [0.001, 0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2],
"min_samples_split": np.linspace(1, 200, 10, dtype=int),
"min_samples_leaf": np.linspace(0.1, 0.5, 12),
"max_depth":[3,5,8,10,12],
"max_features":["log2","sqrt"],
"criterion": ["friedman_mse", "mae"],
"subsample":[0.5, 0.618, 0.8, 0.85, 0.9, 0.95, 1.0],
"n_estimators":[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
}
gb_random = RandomizedSearchCV(
estimator = gb,
param_distributions = random_grid,
n_iter = 100,
cv = 5,
verbose=2,
random_state=42,
n_jobs = -1)
gb_random.fit(X_train, y_train)
params = gb_random.best_params_
gb = GradientBoostingRegressor(**params)
gb.fit(X_train, y_train)
predictions = gb.predict(X_test)
root_mean_squared_log_error(y_test, predictions)
predictions_test_x = gb.predict(test_x)
root_mean_squared_log_error(test_y, predictions_test_x)
joblib.dump(gb, '../Models/Gradient_Boost_Model.h5')
###Output
_____no_output_____ |
docs/source/notebooks/Custom_Networks.ipynb | ###Markdown
Creating Custom Networks for Multi-Class Classification. This tutorial demonstrates how to define, train, and use different models for multi-class classification. We will reuse most of the code from the [Logistic Regression](Logistic_Regression.html) tutorial, so if you haven't gone through that, consider reviewing it first. Note that this tutorial includes a demonstration of how to build and train a simple convolutional neural network, and running this colab on CPU may take some time. Therefore, we recommend running this colab on GPU (select ``GPU`` in the menu ``Runtime`` -> ``Change runtime type`` -> ``Hardware accelerator`` if the hardware accelerator is not set to GPU). Import Modules. We start by importing the modules we will use in our code.
###Code
%pip --quiet install objax
import os
import numpy as np
import tensorflow_datasets as tfds
import objax
from objax.util import EasyDict
from objax.zoo.dnnet import DNNet
###Output
_____no_output_____
###Markdown
Load the data. Next, we will load the "[MNIST](http://yann.lecun.com/exdb/mnist/)" dataset from [TensorFlow DataSets](https://www.tensorflow.org/datasets/api_docs/python/tfds). This dataset contains handwritten digits (i.e., numbers between 0 and 9), and the task is to correctly identify each handwritten digit. The ``prepare`` method pads 2 pixels on the left, right, top, and bottom of each image to resize it to 32 x 32 pixels. While MNIST images are grayscale, the ``prepare`` method expands each image to three color channels to demonstrate the process of working with color images. The same method also rescales each pixel value to [-1, 1] and converts the image to (N, C, H, W) format.
###Code
# Data: train has 60000 images - test has 10000 images
# Each image is resized and converted to 32 x 32 x 3
DATA_DIR = os.path.join(os.environ['HOME'], 'TFDS')
data = tfds.as_numpy(tfds.load(name='mnist', batch_size=-1, data_dir=DATA_DIR))
def prepare(x):
"""Pads 2 pixels to the left, right, top, and bottom of each image, scales pixel value to [-1, 1], and converts to NCHW format."""
s = x.shape
x_pad = np.zeros((s[0], 32, 32, 1))
x_pad[:, 2:-2, 2:-2, :] = x
return objax.util.image.nchw(
np.concatenate([x_pad.astype('f') * (1 / 127.5) - 1] * 3, axis=-1))
train = EasyDict(image=prepare(data['train']['image']), label=data['train']['label'])
test = EasyDict(image=prepare(data['test']['image']), label=data['test']['label'])
ndim = train.image.shape[-1]
del data
###Output
_____no_output_____
###Markdown
Deep Neural Network Model. Objax offers many predefined models that we can use for classification. One example is the ``objax.zoo.DNNet`` model, which comprises multiple fully connected layers with configurable sizes and activation functions (a hand-rolled approximation is sketched after the next cell).
###Code
dnn_layer_sizes = 3072, 128, 10
dnn_model = DNNet(dnn_layer_sizes, objax.functional.leaky_relu)
###Output
_____no_output_____
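###Markdown
For reference, the cell below builds a roughly equivalent stack by hand with ``objax.nn.Sequential``. This is only an approximation of what ``DNNet`` assembles internally (alternating ``objax.nn.Linear`` layers and the chosen activation), not a copy of its source; ``manual_dnn`` is just an illustrative name.
###Code
# Hand-rolled approximation of DNNet((3072, 128, 10), leaky_relu):
# Linear(3072 -> 128), leaky_relu, Linear(128 -> 10).
manual_dnn = objax.nn.Sequential([
    objax.nn.Linear(3072, 128),
    objax.functional.leaky_relu,
    objax.nn.Linear(128, 10),
])
print(manual_dnn.vars())
###Output
_____no_output_____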
###Markdown
Custom Model Definition. Alternatively, we can define a new model customized to our machine learning task. We demonstrate this process by defining a convolutional network (ConvNet) from scratch. We use ``objax.nn.Sequential`` to compose convolution (``objax.nn.Conv2D``), batch normalization (``objax.nn.BatchNorm2D``), ReLU (``objax.functional.relu``), max pooling (``objax.functional.max_pool_2d``), spatial averaging (a mean over the H and W axes), and linear (``objax.nn.Linear``) layers. Since the [batch normalization layer](https://arxiv.org/abs/1502.03167) behaves differently at training and at prediction time, we pass a ``training`` flag to the ``__call__`` method of the ``ConvNet`` class. The model itself always returns logits; at prediction time we apply a softmax outside the model to obtain probabilities (a quick shape sanity check is shown after the variable listing below).
###Code
class ConvNet(objax.Module):
"""ConvNet implementation."""
def __init__(self, nin, nclass):
"""Define 3 blocks of conv-bn-relu-conv-bn-relu followed by linear layer."""
self.conv_block1 = objax.nn.Sequential([objax.nn.Conv2D(nin, 16, 3, use_bias=False),
objax.nn.BatchNorm2D(16),
objax.functional.relu,
objax.nn.Conv2D(16, 16, 3, use_bias=False),
objax.nn.BatchNorm2D(16),
objax.functional.relu])
self.conv_block2 = objax.nn.Sequential([objax.nn.Conv2D(16, 32, 3, use_bias=False),
objax.nn.BatchNorm2D(32),
objax.functional.relu,
objax.nn.Conv2D(32, 32, 3, use_bias=False),
objax.nn.BatchNorm2D(32),
objax.functional.relu])
self.conv_block3 = objax.nn.Sequential([objax.nn.Conv2D(32, 64, 3, use_bias=False),
objax.nn.BatchNorm2D(64),
objax.functional.relu,
objax.nn.Conv2D(64, 64, 3, use_bias=False),
objax.nn.BatchNorm2D(64),
objax.functional.relu])
self.linear = objax.nn.Linear(64, nclass)
def __call__(self, x, training):
x = self.conv_block1(x, training=training)
x = objax.functional.max_pool_2d(x, size=2, strides=2)
x = self.conv_block2(x, training=training)
x = objax.functional.max_pool_2d(x, size=2, strides=2)
x = self.conv_block3(x, training=training)
x = x.mean((2, 3))
x = self.linear(x)
return x
cnn_model = ConvNet(nin=3, nclass=10)
print(cnn_model.vars())
###Output
(ConvNet).conv_block1(Sequential)[0](Conv2D).w 432 (3, 3, 3, 16)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[3](Conv2D).w 2304 (3, 3, 16, 16)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block2(Sequential)[0](Conv2D).w 4608 (3, 3, 16, 32)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[3](Conv2D).w 9216 (3, 3, 32, 32)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block3(Sequential)[0](Conv2D).w 18432 (3, 3, 32, 64)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[3](Conv2D).w 36864 (3, 3, 64, 64)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).linear(Linear).b 10 (10,)
(ConvNet).linear(Linear).w 640 (64, 10)
+Total(32) 73402
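###Markdown
Before training, a quick sanity check (purely illustrative) can confirm the output shape: a dummy batch in NCHW format is pushed through the network with ``training=False`` so the BatchNorm layers use their running statistics. The ``dummy_batch`` array below is made up for this check only.
###Code
# Illustrative shape check with a dummy batch of 4 images (3 channels, 32x32 pixels).
dummy_batch = np.zeros((4, 3, 32, 32), dtype=np.float32)
logits = cnn_model(dummy_batch, training=False)
assert logits.shape == (4, 10) # one logit per class for each image in the batch
###Output
_____no_output_____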
###Markdown
Model Training and Evaluation. The ``train_model`` method combines all the parts: defining the loss function, gradient descent, the training loop, and evaluation. It takes the ``model`` as a parameter so it can be reused with the two models we defined earlier. Unlike the Logistic Regression tutorial, we use ``objax.functional.loss.cross_entropy_logits_sparse`` because we perform multi-class classification. The optimizer, gradient descent operation, and training loop remain the same. The ``DNNet`` model expects flattened images, whereas ``ConvNet`` expects images in (C, H, W) format; the ``flatten_image`` method prepares images accordingly before passing them to the model. When using the model for inference, we apply ``objax.functional.softmax`` to compute the probability distribution from the model's logits.
###Code
# Settings
lr = 0.03 # learning rate
batch = 128
epochs = 100
# Train loop
def train_model(model):
def predict(model, x):
""""""
return objax.functional.softmax(model(x, training=False))
def flatten_image(x):
"""Flatten the image before passing it to the DNN."""
if isinstance(model, DNNet):
return objax.functional.flatten(x)
else:
return x
opt = objax.optimizer.Momentum(model.vars())
# Cross Entropy Loss
def loss(x, label):
return objax.functional.loss.cross_entropy_logits_sparse(model(x, training=True), label).mean()
gv = objax.GradValues(loss, model.vars())
def train_op(x, label):
g, v = gv(x, label) # returns gradients, loss
opt(lr, g)
return v
train_op = objax.Jit(train_op, gv.vars() + opt.vars())
for epoch in range(epochs):
avg_loss = 0
# randomly shuffle training data
shuffle_idx = np.random.permutation(train.image.shape[0])
for it in range(0, train.image.shape[0], batch):
sel = shuffle_idx[it: it + batch]
avg_loss += float(train_op(flatten_image(train.image[sel]), train.label[sel])[0]) * len(sel)
avg_loss /= it + len(sel)
# Eval
accuracy = 0
for it in range(0, test.image.shape[0], batch):
x, y = test.image[it: it + batch], test.label[it: it + batch]
accuracy += (np.argmax(predict(model, flatten_image(x)), axis=1) == y).sum()
accuracy /= test.image.shape[0]
print('Epoch %04d Loss %.2f Accuracy %.2f' % (epoch + 1, avg_loss, 100 * accuracy))
###Output
_____no_output_____
###Markdown
Training the DNN Model
###Code
train_model(dnn_model)
###Output
Epoch 0001 Loss 2.39 Accuracy 56.31
Epoch 0002 Loss 1.24 Accuracy 74.19
Epoch 0003 Loss 0.75 Accuracy 84.91
Epoch 0004 Loss 0.56 Accuracy 83.10
Epoch 0005 Loss 0.48 Accuracy 86.42
Epoch 0006 Loss 0.43 Accuracy 89.00
Epoch 0007 Loss 0.41 Accuracy 89.62
Epoch 0008 Loss 0.39 Accuracy 89.82
Epoch 0009 Loss 0.37 Accuracy 90.04
Epoch 0010 Loss 0.36 Accuracy 89.55
Epoch 0011 Loss 0.35 Accuracy 90.53
Epoch 0012 Loss 0.35 Accuracy 90.64
Epoch 0013 Loss 0.34 Accuracy 90.85
Epoch 0014 Loss 0.33 Accuracy 90.87
Epoch 0015 Loss 0.33 Accuracy 91.02
Epoch 0016 Loss 0.32 Accuracy 91.35
Epoch 0017 Loss 0.32 Accuracy 91.35
Epoch 0018 Loss 0.31 Accuracy 91.50
Epoch 0019 Loss 0.31 Accuracy 91.57
Epoch 0020 Loss 0.31 Accuracy 91.75
Epoch 0021 Loss 0.30 Accuracy 91.47
Epoch 0022 Loss 0.30 Accuracy 88.14
Epoch 0023 Loss 0.30 Accuracy 91.82
Epoch 0024 Loss 0.30 Accuracy 91.92
Epoch 0025 Loss 0.29 Accuracy 92.03
Epoch 0026 Loss 0.29 Accuracy 92.04
Epoch 0027 Loss 0.29 Accuracy 92.11
Epoch 0028 Loss 0.29 Accuracy 92.11
Epoch 0029 Loss 0.29 Accuracy 92.18
Epoch 0030 Loss 0.28 Accuracy 92.24
Epoch 0031 Loss 0.28 Accuracy 92.36
Epoch 0032 Loss 0.28 Accuracy 92.17
Epoch 0033 Loss 0.28 Accuracy 92.42
Epoch 0034 Loss 0.28 Accuracy 92.42
Epoch 0035 Loss 0.27 Accuracy 92.47
Epoch 0036 Loss 0.27 Accuracy 92.50
Epoch 0037 Loss 0.27 Accuracy 92.49
Epoch 0038 Loss 0.27 Accuracy 92.58
Epoch 0039 Loss 0.26 Accuracy 92.56
Epoch 0040 Loss 0.26 Accuracy 92.56
Epoch 0041 Loss 0.26 Accuracy 92.77
Epoch 0042 Loss 0.26 Accuracy 92.72
Epoch 0043 Loss 0.26 Accuracy 92.80
Epoch 0044 Loss 0.25 Accuracy 92.85
Epoch 0045 Loss 0.25 Accuracy 92.90
Epoch 0046 Loss 0.25 Accuracy 92.96
Epoch 0047 Loss 0.25 Accuracy 93.00
Epoch 0048 Loss 0.25 Accuracy 92.82
Epoch 0049 Loss 0.25 Accuracy 93.18
Epoch 0050 Loss 0.24 Accuracy 93.09
Epoch 0051 Loss 0.24 Accuracy 92.94
Epoch 0052 Loss 0.24 Accuracy 93.20
Epoch 0053 Loss 0.24 Accuracy 93.26
Epoch 0054 Loss 0.23 Accuracy 93.21
Epoch 0055 Loss 0.24 Accuracy 93.42
Epoch 0056 Loss 0.23 Accuracy 93.35
Epoch 0057 Loss 0.23 Accuracy 93.36
Epoch 0058 Loss 0.23 Accuracy 93.56
Epoch 0059 Loss 0.23 Accuracy 93.54
Epoch 0060 Loss 0.22 Accuracy 93.39
Epoch 0061 Loss 0.23 Accuracy 93.56
Epoch 0062 Loss 0.22 Accuracy 93.74
Epoch 0063 Loss 0.22 Accuracy 93.68
Epoch 0064 Loss 0.22 Accuracy 93.72
Epoch 0065 Loss 0.22 Accuracy 93.76
Epoch 0066 Loss 0.22 Accuracy 93.87
Epoch 0067 Loss 0.21 Accuracy 93.89
Epoch 0068 Loss 0.21 Accuracy 93.96
Epoch 0069 Loss 0.21 Accuracy 93.90
Epoch 0070 Loss 0.21 Accuracy 93.99
Epoch 0071 Loss 0.21 Accuracy 94.02
Epoch 0072 Loss 0.21 Accuracy 93.86
Epoch 0073 Loss 0.21 Accuracy 94.06
Epoch 0074 Loss 0.21 Accuracy 94.14
Epoch 0075 Loss 0.20 Accuracy 94.31
Epoch 0076 Loss 0.20 Accuracy 94.14
Epoch 0077 Loss 0.20 Accuracy 94.15
Epoch 0078 Loss 0.20 Accuracy 94.10
Epoch 0079 Loss 0.20 Accuracy 94.16
Epoch 0080 Loss 0.20 Accuracy 94.28
Epoch 0081 Loss 0.20 Accuracy 94.30
Epoch 0082 Loss 0.20 Accuracy 94.28
Epoch 0083 Loss 0.19 Accuracy 94.37
Epoch 0084 Loss 0.19 Accuracy 94.33
Epoch 0085 Loss 0.19 Accuracy 94.31
Epoch 0086 Loss 0.19 Accuracy 94.25
Epoch 0087 Loss 0.19 Accuracy 94.37
Epoch 0088 Loss 0.19 Accuracy 94.38
Epoch 0089 Loss 0.19 Accuracy 94.35
Epoch 0090 Loss 0.19 Accuracy 94.38
Epoch 0091 Loss 0.19 Accuracy 94.41
Epoch 0092 Loss 0.19 Accuracy 94.46
Epoch 0093 Loss 0.19 Accuracy 94.53
Epoch 0094 Loss 0.18 Accuracy 94.47
Epoch 0095 Loss 0.18 Accuracy 94.54
Epoch 0096 Loss 0.18 Accuracy 94.65
Epoch 0097 Loss 0.18 Accuracy 94.56
Epoch 0098 Loss 0.18 Accuracy 94.60
Epoch 0099 Loss 0.18 Accuracy 94.63
Epoch 0100 Loss 0.18 Accuracy 94.46
###Markdown
Training the ConvNet Model
###Code
train_model(cnn_model)
###Output
Epoch 0001 Loss 0.27 Accuracy 27.08
Epoch 0002 Loss 0.05 Accuracy 41.07
Epoch 0003 Loss 0.03 Accuracy 67.77
Epoch 0004 Loss 0.03 Accuracy 73.31
Epoch 0005 Loss 0.02 Accuracy 90.30
Epoch 0006 Loss 0.02 Accuracy 93.10
Epoch 0007 Loss 0.02 Accuracy 95.98
Epoch 0008 Loss 0.01 Accuracy 98.77
Epoch 0009 Loss 0.01 Accuracy 96.58
Epoch 0010 Loss 0.01 Accuracy 99.12
Epoch 0011 Loss 0.01 Accuracy 98.88
Epoch 0012 Loss 0.01 Accuracy 98.64
Epoch 0013 Loss 0.01 Accuracy 98.66
Epoch 0014 Loss 0.00 Accuracy 98.38
Epoch 0015 Loss 0.00 Accuracy 99.15
Epoch 0016 Loss 0.00 Accuracy 97.50
Epoch 0017 Loss 0.00 Accuracy 98.98
Epoch 0018 Loss 0.00 Accuracy 98.94
Epoch 0019 Loss 0.00 Accuracy 98.56
Epoch 0020 Loss 0.00 Accuracy 99.06
Epoch 0021 Loss 0.00 Accuracy 99.26
Epoch 0022 Loss 0.00 Accuracy 99.30
Epoch 0023 Loss 0.00 Accuracy 99.18
Epoch 0024 Loss 0.00 Accuracy 99.49
Epoch 0025 Loss 0.00 Accuracy 99.34
Epoch 0026 Loss 0.00 Accuracy 99.24
Epoch 0027 Loss 0.00 Accuracy 99.38
Epoch 0028 Loss 0.00 Accuracy 99.43
Epoch 0029 Loss 0.00 Accuracy 99.40
Epoch 0030 Loss 0.00 Accuracy 99.50
Epoch 0031 Loss 0.00 Accuracy 99.44
Epoch 0032 Loss 0.00 Accuracy 99.52
Epoch 0033 Loss 0.00 Accuracy 99.46
Epoch 0034 Loss 0.00 Accuracy 99.39
Epoch 0035 Loss 0.00 Accuracy 99.22
Epoch 0036 Loss 0.00 Accuracy 99.26
Epoch 0037 Loss 0.00 Accuracy 99.47
Epoch 0038 Loss 0.00 Accuracy 99.18
Epoch 0039 Loss 0.00 Accuracy 99.39
Epoch 0040 Loss 0.00 Accuracy 99.44
Epoch 0041 Loss 0.00 Accuracy 99.43
Epoch 0042 Loss 0.00 Accuracy 99.50
Epoch 0043 Loss 0.00 Accuracy 99.50
Epoch 0044 Loss 0.00 Accuracy 99.53
Epoch 0045 Loss 0.00 Accuracy 99.51
Epoch 0046 Loss 0.00 Accuracy 99.49
Epoch 0047 Loss 0.00 Accuracy 99.46
Epoch 0048 Loss 0.00 Accuracy 99.46
Epoch 0049 Loss 0.00 Accuracy 99.35
Epoch 0050 Loss 0.00 Accuracy 99.50
Epoch 0051 Loss 0.00 Accuracy 99.48
Epoch 0052 Loss 0.00 Accuracy 99.48
Epoch 0053 Loss 0.00 Accuracy 99.48
Epoch 0054 Loss 0.00 Accuracy 99.46
Epoch 0055 Loss 0.00 Accuracy 99.48
Epoch 0056 Loss 0.00 Accuracy 99.50
Epoch 0057 Loss 0.00 Accuracy 99.41
Epoch 0058 Loss 0.00 Accuracy 99.49
Epoch 0059 Loss 0.00 Accuracy 99.48
Epoch 0060 Loss 0.00 Accuracy 99.47
Epoch 0061 Loss 0.00 Accuracy 99.52
Epoch 0062 Loss 0.00 Accuracy 99.49
Epoch 0063 Loss 0.00 Accuracy 99.48
Epoch 0064 Loss 0.00 Accuracy 99.51
Epoch 0065 Loss 0.00 Accuracy 99.46
Epoch 0066 Loss 0.00 Accuracy 99.51
Epoch 0067 Loss 0.00 Accuracy 99.49
Epoch 0068 Loss 0.00 Accuracy 99.52
Epoch 0069 Loss 0.00 Accuracy 99.49
Epoch 0070 Loss 0.00 Accuracy 99.51
Epoch 0071 Loss 0.00 Accuracy 99.51
Epoch 0072 Loss 0.00 Accuracy 99.52
Epoch 0073 Loss 0.00 Accuracy 99.43
Epoch 0074 Loss 0.00 Accuracy 99.53
Epoch 0075 Loss 0.00 Accuracy 99.47
Epoch 0076 Loss 0.00 Accuracy 99.51
Epoch 0077 Loss 0.00 Accuracy 99.55
Epoch 0078 Loss 0.00 Accuracy 99.52
Epoch 0079 Loss 0.00 Accuracy 99.52
Epoch 0080 Loss 0.00 Accuracy 98.78
Epoch 0081 Loss 0.00 Accuracy 99.16
Epoch 0082 Loss 0.00 Accuracy 99.40
Epoch 0083 Loss 0.00 Accuracy 99.35
Epoch 0084 Loss 0.00 Accuracy 99.32
Epoch 0085 Loss 0.00 Accuracy 99.49
Epoch 0086 Loss 0.00 Accuracy 99.49
Epoch 0087 Loss 0.00 Accuracy 99.56
Epoch 0088 Loss 0.00 Accuracy 99.48
Epoch 0089 Loss 0.00 Accuracy 99.48
Epoch 0090 Loss 0.00 Accuracy 99.51
Epoch 0091 Loss 0.00 Accuracy 99.45
Epoch 0092 Loss 0.00 Accuracy 99.52
Epoch 0093 Loss 0.00 Accuracy 99.52
Epoch 0094 Loss 0.00 Accuracy 99.51
Epoch 0095 Loss 0.00 Accuracy 99.51
Epoch 0096 Loss 0.00 Accuracy 99.48
Epoch 0097 Loss 0.00 Accuracy 99.51
Epoch 0098 Loss 0.00 Accuracy 99.53
Epoch 0099 Loss 0.00 Accuracy 99.50
Epoch 0100 Loss 0.00 Accuracy 99.53
###Markdown
Training with the PyTorch data processing API. One of the pain points for ML researchers/practitioners when building a new ML model is the data processing. Here, we demonstrate how to use the data processing API of [PyTorch](https://pytorch.org/) to train a model with Objax. Different deep learning libraries come with different data processing APIs; depending on your preference, you can choose one and easily combine it with Objax. As before, we prepare the `MNIST` dataset and apply the same data preprocessing.
###Code
import torch
from torchvision import datasets, transforms
transform=transforms.Compose([
transforms.Pad((2,2,2,2), 0),
transforms.ToTensor(),
transforms.Lambda(lambda x: np.concatenate([x] * 3, axis=0)),
transforms.Lambda(lambda x: x * 2 - 1)
])
train_dataset = datasets.MNIST(os.environ['HOME'], train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(os.environ['HOME'], train=False, download=True, transform=transform)
# Define data loader
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch)
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /root/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
We replace the data processing pipeline of the train and test loops with `train_loader` and `test_loader`, and that's it!
###Code
# Train loop
def train_model_with_torch_data_api(model):
def predict(model, x):
""""""
return objax.functional.softmax(model(x, training=False))
def flatten_image(x):
"""Flatten the image before passing it to the DNN."""
if isinstance(model, DNNet):
return objax.functional.flatten(x)
else:
return x
opt = objax.optimizer.Momentum(model.vars())
# Cross Entropy Loss
def loss(x, label):
return objax.functional.loss.cross_entropy_logits_sparse(model(x, training=True), label).mean()
gv = objax.GradValues(loss, model.vars())
def train_op(x, label):
g, v = gv(x, label) # returns gradients, loss
opt(lr, g)
return v
train_op = objax.Jit(train_op, gv.vars() + opt.vars())
for epoch in range(epochs):
avg_loss = 0
tot_data = 0
for _, (img, label) in enumerate(train_loader):
avg_loss += float(train_op(flatten_image(img.numpy()), label.numpy())[0]) * len(img)
tot_data += len(img)
avg_loss /= tot_data
# Eval
accuracy = 0
tot_data = 0
for _, (img, label) in enumerate(test_loader):
accuracy += (np.argmax(predict(model, flatten_image(img.numpy())), axis=1) == label.numpy()).sum()
tot_data += len(img)
accuracy /= tot_data
print('Epoch %04d Loss %.2f Accuracy %.2f' % (epoch + 1, avg_loss, 100 * accuracy))
###Output
###Markdown
Training the DNN Model with PyTorch data API
###Code
dnn_layer_sizes = 3072, 128, 10
dnn_model = DNNet(dnn_layer_sizes, objax.functional.leaky_relu)
train_model_with_torch_data_api(dnn_model)
###Output
Epoch 0001 Loss 2.57 Accuracy 34.35
Epoch 0002 Loss 1.93 Accuracy 58.51
Epoch 0003 Loss 1.32 Accuracy 68.46
Epoch 0004 Loss 0.83 Accuracy 80.95
Epoch 0005 Loss 0.62 Accuracy 84.74
Epoch 0006 Loss 0.53 Accuracy 86.53
Epoch 0007 Loss 0.48 Accuracy 84.18
Epoch 0008 Loss 0.45 Accuracy 88.42
Epoch 0009 Loss 0.42 Accuracy 87.34
Epoch 0010 Loss 0.40 Accuracy 89.29
Epoch 0011 Loss 0.39 Accuracy 89.31
Epoch 0012 Loss 0.38 Accuracy 89.86
Epoch 0013 Loss 0.37 Accuracy 89.91
Epoch 0014 Loss 0.36 Accuracy 86.94
Epoch 0015 Loss 0.36 Accuracy 89.89
Epoch 0016 Loss 0.35 Accuracy 90.12
Epoch 0017 Loss 0.34 Accuracy 90.40
Epoch 0018 Loss 0.34 Accuracy 90.31
Epoch 0019 Loss 0.34 Accuracy 90.79
Epoch 0020 Loss 0.33 Accuracy 90.71
Epoch 0021 Loss 0.33 Accuracy 90.70
Epoch 0022 Loss 0.33 Accuracy 90.69
Epoch 0023 Loss 0.33 Accuracy 90.91
Epoch 0024 Loss 0.32 Accuracy 90.92
Epoch 0025 Loss 0.32 Accuracy 91.06
Epoch 0026 Loss 0.32 Accuracy 91.19
Epoch 0027 Loss 0.32 Accuracy 91.31
Epoch 0028 Loss 0.31 Accuracy 91.31
Epoch 0029 Loss 0.31 Accuracy 91.20
Epoch 0030 Loss 0.31 Accuracy 91.31
Epoch 0031 Loss 0.31 Accuracy 91.36
Epoch 0032 Loss 0.31 Accuracy 91.42
Epoch 0033 Loss 0.30 Accuracy 91.27
Epoch 0034 Loss 0.31 Accuracy 91.47
Epoch 0035 Loss 0.30 Accuracy 91.57
Epoch 0036 Loss 0.30 Accuracy 91.44
Epoch 0037 Loss 0.30 Accuracy 91.55
Epoch 0038 Loss 0.30 Accuracy 91.56
Epoch 0039 Loss 0.29 Accuracy 91.75
Epoch 0040 Loss 0.29 Accuracy 91.69
Epoch 0041 Loss 0.29 Accuracy 91.60
Epoch 0042 Loss 0.29 Accuracy 91.77
Epoch 0043 Loss 0.29 Accuracy 91.76
Epoch 0044 Loss 0.29 Accuracy 91.84
Epoch 0045 Loss 0.28 Accuracy 92.05
Epoch 0046 Loss 0.28 Accuracy 91.78
Epoch 0047 Loss 0.28 Accuracy 92.01
Epoch 0048 Loss 0.28 Accuracy 91.95
Epoch 0049 Loss 0.28 Accuracy 90.11
Epoch 0050 Loss 0.28 Accuracy 92.14
Epoch 0051 Loss 0.28 Accuracy 92.03
Epoch 0052 Loss 0.27 Accuracy 92.29
Epoch 0053 Loss 0.27 Accuracy 92.17
Epoch 0054 Loss 0.27 Accuracy 92.12
Epoch 0055 Loss 0.27 Accuracy 92.34
Epoch 0056 Loss 0.27 Accuracy 92.32
Epoch 0057 Loss 0.27 Accuracy 92.47
Epoch 0058 Loss 0.27 Accuracy 92.38
Epoch 0059 Loss 0.27 Accuracy 92.39
Epoch 0060 Loss 0.26 Accuracy 92.51
Epoch 0061 Loss 0.27 Accuracy 92.50
Epoch 0062 Loss 0.26 Accuracy 92.46
Epoch 0063 Loss 0.26 Accuracy 92.65
Epoch 0064 Loss 0.26 Accuracy 92.57
Epoch 0065 Loss 0.26 Accuracy 92.63
Epoch 0066 Loss 0.26 Accuracy 92.75
Epoch 0067 Loss 0.26 Accuracy 92.57
Epoch 0068 Loss 0.26 Accuracy 92.88
Epoch 0069 Loss 0.25 Accuracy 92.53
Epoch 0070 Loss 0.25 Accuracy 92.80
Epoch 0071 Loss 0.25 Accuracy 92.71
Epoch 0072 Loss 0.25 Accuracy 92.75
Epoch 0073 Loss 0.25 Accuracy 92.84
Epoch 0074 Loss 0.25 Accuracy 92.71
Epoch 0075 Loss 0.25 Accuracy 92.95
Epoch 0076 Loss 0.25 Accuracy 92.82
Epoch 0077 Loss 0.25 Accuracy 92.90
Epoch 0078 Loss 0.25 Accuracy 92.87
Epoch 0079 Loss 0.25 Accuracy 89.55
Epoch 0080 Loss 0.25 Accuracy 92.86
Epoch 0081 Loss 0.24 Accuracy 92.99
Epoch 0082 Loss 0.24 Accuracy 93.03
Epoch 0083 Loss 0.24 Accuracy 93.03
Epoch 0084 Loss 0.24 Accuracy 93.01
Epoch 0085 Loss 0.24 Accuracy 93.13
Epoch 0086 Loss 0.24 Accuracy 93.17
Epoch 0087 Loss 0.24 Accuracy 92.87
Epoch 0088 Loss 0.24 Accuracy 92.93
Epoch 0089 Loss 0.24 Accuracy 93.16
Epoch 0090 Loss 0.24 Accuracy 93.38
Epoch 0091 Loss 0.24 Accuracy 92.98
Epoch 0092 Loss 0.24 Accuracy 93.30
Epoch 0093 Loss 0.23 Accuracy 93.09
Epoch 0094 Loss 0.23 Accuracy 93.19
Epoch 0095 Loss 0.23 Accuracy 93.25
Epoch 0096 Loss 0.23 Accuracy 93.22
Epoch 0097 Loss 0.23 Accuracy 93.28
Epoch 0098 Loss 0.23 Accuracy 93.39
Epoch 0099 Loss 0.23 Accuracy 93.25
Epoch 0100 Loss 0.23 Accuracy 93.30
###Markdown
Training the ConvNet Model with PyTorch data API
###Code
cnn_model = ConvNet(nin=3, nclass=10)
print(cnn_model.vars())
train_model_with_torch_data_api(cnn_model)
###Output
(ConvNet).conv_block1(Sequential)[0](Conv2D).w 432 (3, 3, 3, 16)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[3](Conv2D).w 2304 (3, 3, 16, 16)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block2(Sequential)[0](Conv2D).w 4608 (3, 3, 16, 32)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[3](Conv2D).w 9216 (3, 3, 32, 32)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block3(Sequential)[0](Conv2D).w 18432 (3, 3, 32, 64)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[3](Conv2D).w 36864 (3, 3, 64, 64)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).linear(Linear).b 10 (10,)
(ConvNet).linear(Linear).w 640 (64, 10)
+Total(32) 73402
Epoch 0001 Loss 0.26 Accuracy 24.18
Epoch 0002 Loss 0.05 Accuracy 37.53
Epoch 0003 Loss 0.03 Accuracy 42.17
Epoch 0004 Loss 0.03 Accuracy 73.50
Epoch 0005 Loss 0.02 Accuracy 80.33
Epoch 0006 Loss 0.02 Accuracy 83.28
Epoch 0007 Loss 0.02 Accuracy 90.87
Epoch 0008 Loss 0.01 Accuracy 98.77
Epoch 0009 Loss 0.01 Accuracy 98.42
Epoch 0010 Loss 0.01 Accuracy 98.16
Epoch 0011 Loss 0.01 Accuracy 98.74
Epoch 0012 Loss 0.01 Accuracy 95.05
Epoch 0013 Loss 0.01 Accuracy 98.89
Epoch 0014 Loss 0.00 Accuracy 98.70
Epoch 0015 Loss 0.00 Accuracy 99.01
Epoch 0016 Loss 0.00 Accuracy 98.97
Epoch 0017 Loss 0.00 Accuracy 98.79
Epoch 0018 Loss 0.00 Accuracy 98.37
Epoch 0019 Loss 0.00 Accuracy 99.19
Epoch 0020 Loss 0.00 Accuracy 99.22
Epoch 0021 Loss 0.00 Accuracy 98.43
Epoch 0022 Loss 0.00 Accuracy 99.02
Epoch 0023 Loss 0.00 Accuracy 99.38
Epoch 0024 Loss 0.00 Accuracy 99.42
Epoch 0025 Loss 0.00 Accuracy 99.45
Epoch 0026 Loss 0.00 Accuracy 99.35
Epoch 0027 Loss 0.00 Accuracy 99.42
Epoch 0028 Loss 0.00 Accuracy 99.42
Epoch 0029 Loss 0.00 Accuracy 99.14
Epoch 0030 Loss 0.00 Accuracy 99.33
Epoch 0031 Loss 0.00 Accuracy 99.36
Epoch 0032 Loss 0.00 Accuracy 99.18
Epoch 0033 Loss 0.00 Accuracy 99.43
Epoch 0034 Loss 0.00 Accuracy 99.47
Epoch 0035 Loss 0.00 Accuracy 99.49
Epoch 0036 Loss 0.00 Accuracy 99.53
Epoch 0037 Loss 0.00 Accuracy 99.38
Epoch 0038 Loss 0.00 Accuracy 99.39
Epoch 0039 Loss 0.00 Accuracy 99.49
Epoch 0040 Loss 0.00 Accuracy 99.49
Epoch 0041 Loss 0.00 Accuracy 99.47
Epoch 0042 Loss 0.00 Accuracy 99.54
Epoch 0043 Loss 0.00 Accuracy 99.35
Epoch 0044 Loss 0.00 Accuracy 99.45
Epoch 0045 Loss 0.00 Accuracy 99.47
Epoch 0046 Loss 0.00 Accuracy 99.53
Epoch 0047 Loss 0.00 Accuracy 99.50
Epoch 0048 Loss 0.00 Accuracy 99.52
Epoch 0049 Loss 0.00 Accuracy 99.51
Epoch 0050 Loss 0.00 Accuracy 99.49
Epoch 0051 Loss 0.00 Accuracy 99.45
Epoch 0052 Loss 0.00 Accuracy 99.48
Epoch 0053 Loss 0.00 Accuracy 99.50
Epoch 0054 Loss 0.00 Accuracy 99.46
Epoch 0055 Loss 0.00 Accuracy 99.50
Epoch 0056 Loss 0.00 Accuracy 99.48
Epoch 0057 Loss 0.00 Accuracy 99.46
Epoch 0058 Loss 0.00 Accuracy 99.44
Epoch 0059 Loss 0.00 Accuracy 99.46
Epoch 0060 Loss 0.00 Accuracy 99.26
Epoch 0061 Loss 0.00 Accuracy 93.99
Epoch 0062 Loss 0.00 Accuracy 97.80
Epoch 0063 Loss 0.00 Accuracy 80.26
Epoch 0064 Loss 0.00 Accuracy 99.20
Epoch 0065 Loss 0.00 Accuracy 99.38
Epoch 0066 Loss 0.00 Accuracy 99.44
Epoch 0067 Loss 0.00 Accuracy 99.51
Epoch 0068 Loss 0.00 Accuracy 99.45
Epoch 0069 Loss 0.00 Accuracy 99.42
Epoch 0070 Loss 0.00 Accuracy 99.50
Epoch 0071 Loss 0.00 Accuracy 99.52
Epoch 0072 Loss 0.00 Accuracy 99.44
Epoch 0073 Loss 0.00 Accuracy 99.41
Epoch 0074 Loss 0.00 Accuracy 99.46
Epoch 0075 Loss 0.00 Accuracy 99.42
Epoch 0076 Loss 0.00 Accuracy 99.49
Epoch 0077 Loss 0.00 Accuracy 99.50
Epoch 0078 Loss 0.00 Accuracy 99.56
Epoch 0079 Loss 0.00 Accuracy 99.52
Epoch 0080 Loss 0.00 Accuracy 99.42
Epoch 0081 Loss 0.00 Accuracy 99.49
Epoch 0082 Loss 0.00 Accuracy 99.48
Epoch 0083 Loss 0.00 Accuracy 99.44
Epoch 0084 Loss 0.00 Accuracy 99.49
Epoch 0085 Loss 0.00 Accuracy 99.53
Epoch 0086 Loss 0.00 Accuracy 99.52
Epoch 0087 Loss 0.00 Accuracy 99.52
Epoch 0088 Loss 0.00 Accuracy 99.50
Epoch 0089 Loss 0.00 Accuracy 99.51
Epoch 0090 Loss 0.00 Accuracy 99.50
Epoch 0091 Loss 0.00 Accuracy 99.49
Epoch 0092 Loss 0.00 Accuracy 99.52
Epoch 0093 Loss 0.00 Accuracy 99.50
Epoch 0094 Loss 0.00 Accuracy 99.50
Epoch 0095 Loss 0.00 Accuracy 99.55
Epoch 0096 Loss 0.00 Accuracy 99.48
Epoch 0097 Loss 0.00 Accuracy 99.51
Epoch 0098 Loss 0.00 Accuracy 99.52
Epoch 0099 Loss 0.00 Accuracy 99.49
Epoch 0100 Loss 0.00 Accuracy 99.52
###Markdown
Creating Custom Networks for Multi-Class ClassificationThis tutorial demonstrates how to define, train, and use different models for multi-class classification. We will reuse most of the code from the [Logistic Regression](Logistic_Regression.html) tutorial, so if you haven't gone through that, consider reviewing it first.Note that this tutorial includes a demonstration of how to build and train a simple convolutional neural network, and running this colab on CPU may take some time. Therefore, we recommend running this colab on GPU (select ``GPU`` under ``Runtime`` -> ``Change runtime type`` -> ``Hardware accelerator`` if the hardware accelerator is not set to GPU). Import ModulesWe start by importing the modules we will use in our code.
###Code
%pip --quiet install objax
import os
import numpy as np
import tensorflow_datasets as tfds
import objax
from objax.util import EasyDict
from objax.zoo.dnnet import DNNet
###Output
_____no_output_____
###Markdown
Load the dataNext, we will load the "[MNIST](http://yann.lecun.com/exdb/mnist/)" dataset from [TensorFlow DataSets](https://www.tensorflow.org/datasets/api_docs/python/tfds). This dataset contains handwritten digits (i.e., numbers between 0 and 9), and the task is to correctly identify each handwritten digit. The ``prepare`` method pads 2 pixels to the left, right, top, and bottom of each image to resize it to 32 x 32 pixels. While MNIST images are grayscale, the ``prepare`` method expands each image to three color channels to demonstrate the process of working with color images. The same method also rescales each pixel value to [-1, 1] and converts the images to (N, C, H, W) format.
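As a quick standalone illustration of the rescaling and layout change described above (a hypothetical sketch on a dummy array, independent of the tutorial code below):
```
import numpy as np
dummy = np.random.randint(0, 256, size=(2, 32, 32, 1)).astype('f')  # two fake grayscale images in NHWC layout
scaled = dummy * (1 / 127.5) - 1                                     # pixel values now lie in [-1, 1]
nchw = np.transpose(np.repeat(scaled, 3, axis=-1), (0, 3, 1, 2))     # replicate the channel, then NHWC -> NCHW
print(nchw.shape)                                                    # (2, 3, 32, 32)
```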
###Code
# Data: train has 60000 images - test has 10000 images
# Each image is resized and converted to 32 x 32 x 3
DATA_DIR = os.path.join(os.environ['HOME'], 'TFDS')
data = tfds.as_numpy(tfds.load(name='mnist', batch_size=-1, data_dir=DATA_DIR))
def prepare(x):
"""Pads 2 pixels to the left, right, top, and bottom of each image, scales pixel value to [-1, 1], and converts to NCHW format."""
s = x.shape
x_pad = np.zeros((s[0], 32, 32, 1))
x_pad[:, 2:-2, 2:-2, :] = x
return objax.util.image.nchw(
np.concatenate([x_pad.astype('f') * (1 / 127.5) - 1] * 3, axis=-1))
train = EasyDict(image=prepare(data['train']['image']), label=data['train']['label'])
test = EasyDict(image=prepare(data['test']['image']), label=data['test']['label'])
ndim = train.image.shape[-1]
del data
###Output
_____no_output_____
###Markdown
Deep Neural Network ModelObjax offers many predefined models that we can use for classification. One example is the ``objax.zoo.DNNet`` model, comprising multiple fully connected layers with configurable sizes and activation functions. The first layer size used below is 3072 because each flattened 32 x 32 x 3 input image contains 32 * 32 * 3 = 3072 values.
###Code
dnn_layer_sizes = 3072, 128, 10
dnn_model = DNNet(dnn_layer_sizes, objax.functional.leaky_relu)
###Output
_____no_output_____
###Markdown
Custom Model DefinitionAlternatively, we can define a new model customized to our machine learning task. We demonstrate this process by defining a convolutional network (ConvNet) from scratch. We use ``objax.nn.Sequential`` to compose multiple layers of convolution (``objax.nn.Conv2D``), batch normalization (``objax.nn.BatchNorm2D``), ReLU (``objax.functional.relu``), Max Pooling (``objax.functional.max_pool_2d``), Average Pooling (a mean over the spatial dimensions), and Linear (``objax.nn.Linear``) layers.Since the [batch normalization layer](https://arxiv.org/abs/1502.03167) behaves differently at training and at prediction time, we pass a ``training`` flag to the ``__call__`` method of the ``ConvNet`` class. The model always returns logits; at prediction time we apply softmax outside the model (in the ``predict`` helper defined later) to obtain class probabilities.
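As a quick way to see the flag in action (a hypothetical check you could run after the next cells define ``ConvNet`` and ``cnn_model``):
```
dummy = np.zeros((4, 3, 32, 32), dtype='float32')  # a fake batch of four 32 x 32 RGB images
logits = cnn_model(dummy, training=False)          # BatchNorm2D uses its running statistics here
print(logits.shape)                                 # expected: (4, 10), one logit vector per image
```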
###Code
class ConvNet(objax.Module):
"""ConvNet implementation."""
def __init__(self, nin, nclass):
"""Define 3 blocks of conv-bn-relu-conv-bn-relu followed by linear layer."""
self.conv_block1 = objax.nn.Sequential([objax.nn.Conv2D(nin, 16, 3, use_bias=False),
objax.nn.BatchNorm2D(16),
objax.functional.relu,
objax.nn.Conv2D(16, 16, 3, use_bias=False),
objax.nn.BatchNorm2D(16),
objax.functional.relu])
self.conv_block2 = objax.nn.Sequential([objax.nn.Conv2D(16, 32, 3, use_bias=False),
objax.nn.BatchNorm2D(32),
objax.functional.relu,
objax.nn.Conv2D(32, 32, 3, use_bias=False),
objax.nn.BatchNorm2D(32),
objax.functional.relu])
self.conv_block3 = objax.nn.Sequential([objax.nn.Conv2D(32, 64, 3, use_bias=False),
objax.nn.BatchNorm2D(64),
objax.functional.relu,
objax.nn.Conv2D(64, 64, 3, use_bias=False),
objax.nn.BatchNorm2D(64),
objax.functional.relu])
self.linear = objax.nn.Linear(64, nclass)
def __call__(self, x, training):
x = self.conv_block1(x, training=training)
x = objax.functional.max_pool_2d(x, size=2, strides=2)
x = self.conv_block2(x, training=training)
x = objax.functional.max_pool_2d(x, size=2, strides=2)
x = self.conv_block3(x, training=training)
x = x.mean((2, 3))
x = self.linear(x)
return x
cnn_model = ConvNet(nin=3, nclass=10)
print(cnn_model.vars())
###Output
(ConvNet).conv_block1(Sequential)[0](Conv2D).w 432 (3, 3, 3, 16)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[3](Conv2D).w 2304 (3, 3, 16, 16)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block2(Sequential)[0](Conv2D).w 4608 (3, 3, 16, 32)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[3](Conv2D).w 9216 (3, 3, 32, 32)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block3(Sequential)[0](Conv2D).w 18432 (3, 3, 32, 64)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[3](Conv2D).w 36864 (3, 3, 64, 64)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).linear(Linear).b 10 (10,)
(ConvNet).linear(Linear).w 640 (64, 10)
+Total(32) 73402
###Markdown
Model Training and EvaluationThe ``train_model`` method combines all the parts: defining the loss function, gradient descent, the training loop, and evaluation. It takes the ``model`` as a parameter so it can be reused with the two models we defined earlier.Unlike the Logistic Regression tutorial, we use ``objax.functional.loss.cross_entropy_logits_sparse`` because we perform multi-class classification. The optimizer, gradient descent operation, and training loop remain the same. The ``DNNet`` model expects flattened images, whereas ``ConvNet`` expects images in (C, H, W) format; the ``flatten_image`` method prepares images accordingly before passing them to the model. When using the model for inference, we apply ``objax.functional.softmax`` to compute the probability distribution from the model's logits.
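As a small standalone illustration of the sparse cross-entropy loss used below (dummy values, not part of the tutorial's training code):
```
import jax.numpy as jn
logits = jn.array([[2.0, 0.5, -1.0]])  # unnormalized scores for one example with 3 classes
labels = jn.array([0])                 # integer class indices; no one-hot encoding needed
print(objax.functional.loss.cross_entropy_logits_sparse(logits, labels))  # per-example losses, hence the .mean() below
```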
###Code
# Settings
lr = 0.03 # learning rate
batch = 128
epochs = 100
# Train loop
def train_model(model):
    def predict(model, x):
        """Return softmax class probabilities for a batch of inputs."""
return objax.functional.softmax(model(x, training=False))
def flatten_image(x):
"""Flatten the image before passing it to the DNN."""
if isinstance(model, DNNet):
return objax.functional.flatten(x)
else:
return x
opt = objax.optimizer.Momentum(model.vars())
# Cross Entropy Loss
def loss(x, label):
return objax.functional.loss.cross_entropy_logits_sparse(model(x, training=True), label).mean()
gv = objax.GradValues(loss, model.vars())
def train_op(x, label):
g, v = gv(x, label) # returns gradients, loss
opt(lr, g)
return v
train_op = objax.Jit(train_op, gv.vars() + opt.vars())
for epoch in range(epochs):
avg_loss = 0
# randomly shuffle training data
shuffle_idx = np.random.permutation(train.image.shape[0])
for it in range(0, train.image.shape[0], batch):
sel = shuffle_idx[it: it + batch]
avg_loss += float(train_op(flatten_image(train.image[sel]), train.label[sel])[0]) * len(sel)
avg_loss /= it + len(sel)
# Eval
accuracy = 0
for it in range(0, test.image.shape[0], batch):
x, y = test.image[it: it + batch], test.label[it: it + batch]
accuracy += (np.argmax(predict(model, flatten_image(x)), axis=1) == y).sum()
accuracy /= test.image.shape[0]
print('Epoch %04d Loss %.2f Accuracy %.2f' % (epoch + 1, avg_loss, 100 * accuracy))
###Output
_____no_output_____
###Markdown
Training the DNN Model
###Code
train_model(dnn_model)
###Output
Epoch 0001 Loss 2.39 Accuracy 56.31
Epoch 0002 Loss 1.24 Accuracy 74.19
Epoch 0003 Loss 0.75 Accuracy 84.91
Epoch 0004 Loss 0.56 Accuracy 83.10
Epoch 0005 Loss 0.48 Accuracy 86.42
Epoch 0006 Loss 0.43 Accuracy 89.00
Epoch 0007 Loss 0.41 Accuracy 89.62
Epoch 0008 Loss 0.39 Accuracy 89.82
Epoch 0009 Loss 0.37 Accuracy 90.04
Epoch 0010 Loss 0.36 Accuracy 89.55
Epoch 0011 Loss 0.35 Accuracy 90.53
Epoch 0012 Loss 0.35 Accuracy 90.64
Epoch 0013 Loss 0.34 Accuracy 90.85
Epoch 0014 Loss 0.33 Accuracy 90.87
Epoch 0015 Loss 0.33 Accuracy 91.02
Epoch 0016 Loss 0.32 Accuracy 91.35
Epoch 0017 Loss 0.32 Accuracy 91.35
Epoch 0018 Loss 0.31 Accuracy 91.50
Epoch 0019 Loss 0.31 Accuracy 91.57
Epoch 0020 Loss 0.31 Accuracy 91.75
Epoch 0021 Loss 0.30 Accuracy 91.47
Epoch 0022 Loss 0.30 Accuracy 88.14
Epoch 0023 Loss 0.30 Accuracy 91.82
Epoch 0024 Loss 0.30 Accuracy 91.92
Epoch 0025 Loss 0.29 Accuracy 92.03
Epoch 0026 Loss 0.29 Accuracy 92.04
Epoch 0027 Loss 0.29 Accuracy 92.11
Epoch 0028 Loss 0.29 Accuracy 92.11
Epoch 0029 Loss 0.29 Accuracy 92.18
Epoch 0030 Loss 0.28 Accuracy 92.24
Epoch 0031 Loss 0.28 Accuracy 92.36
Epoch 0032 Loss 0.28 Accuracy 92.17
Epoch 0033 Loss 0.28 Accuracy 92.42
Epoch 0034 Loss 0.28 Accuracy 92.42
Epoch 0035 Loss 0.27 Accuracy 92.47
Epoch 0036 Loss 0.27 Accuracy 92.50
Epoch 0037 Loss 0.27 Accuracy 92.49
Epoch 0038 Loss 0.27 Accuracy 92.58
Epoch 0039 Loss 0.26 Accuracy 92.56
Epoch 0040 Loss 0.26 Accuracy 92.56
Epoch 0041 Loss 0.26 Accuracy 92.77
Epoch 0042 Loss 0.26 Accuracy 92.72
Epoch 0043 Loss 0.26 Accuracy 92.80
Epoch 0044 Loss 0.25 Accuracy 92.85
Epoch 0045 Loss 0.25 Accuracy 92.90
Epoch 0046 Loss 0.25 Accuracy 92.96
Epoch 0047 Loss 0.25 Accuracy 93.00
Epoch 0048 Loss 0.25 Accuracy 92.82
Epoch 0049 Loss 0.25 Accuracy 93.18
Epoch 0050 Loss 0.24 Accuracy 93.09
Epoch 0051 Loss 0.24 Accuracy 92.94
Epoch 0052 Loss 0.24 Accuracy 93.20
Epoch 0053 Loss 0.24 Accuracy 93.26
Epoch 0054 Loss 0.23 Accuracy 93.21
Epoch 0055 Loss 0.24 Accuracy 93.42
Epoch 0056 Loss 0.23 Accuracy 93.35
Epoch 0057 Loss 0.23 Accuracy 93.36
Epoch 0058 Loss 0.23 Accuracy 93.56
Epoch 0059 Loss 0.23 Accuracy 93.54
Epoch 0060 Loss 0.22 Accuracy 93.39
Epoch 0061 Loss 0.23 Accuracy 93.56
Epoch 0062 Loss 0.22 Accuracy 93.74
Epoch 0063 Loss 0.22 Accuracy 93.68
Epoch 0064 Loss 0.22 Accuracy 93.72
Epoch 0065 Loss 0.22 Accuracy 93.76
Epoch 0066 Loss 0.22 Accuracy 93.87
Epoch 0067 Loss 0.21 Accuracy 93.89
Epoch 0068 Loss 0.21 Accuracy 93.96
Epoch 0069 Loss 0.21 Accuracy 93.90
Epoch 0070 Loss 0.21 Accuracy 93.99
Epoch 0071 Loss 0.21 Accuracy 94.02
Epoch 0072 Loss 0.21 Accuracy 93.86
Epoch 0073 Loss 0.21 Accuracy 94.06
Epoch 0074 Loss 0.21 Accuracy 94.14
Epoch 0075 Loss 0.20 Accuracy 94.31
Epoch 0076 Loss 0.20 Accuracy 94.14
Epoch 0077 Loss 0.20 Accuracy 94.15
Epoch 0078 Loss 0.20 Accuracy 94.10
Epoch 0079 Loss 0.20 Accuracy 94.16
Epoch 0080 Loss 0.20 Accuracy 94.28
Epoch 0081 Loss 0.20 Accuracy 94.30
Epoch 0082 Loss 0.20 Accuracy 94.28
Epoch 0083 Loss 0.19 Accuracy 94.37
Epoch 0084 Loss 0.19 Accuracy 94.33
Epoch 0085 Loss 0.19 Accuracy 94.31
Epoch 0086 Loss 0.19 Accuracy 94.25
Epoch 0087 Loss 0.19 Accuracy 94.37
Epoch 0088 Loss 0.19 Accuracy 94.38
Epoch 0089 Loss 0.19 Accuracy 94.35
Epoch 0090 Loss 0.19 Accuracy 94.38
Epoch 0091 Loss 0.19 Accuracy 94.41
Epoch 0092 Loss 0.19 Accuracy 94.46
Epoch 0093 Loss 0.19 Accuracy 94.53
Epoch 0094 Loss 0.18 Accuracy 94.47
Epoch 0095 Loss 0.18 Accuracy 94.54
Epoch 0096 Loss 0.18 Accuracy 94.65
Epoch 0097 Loss 0.18 Accuracy 94.56
Epoch 0098 Loss 0.18 Accuracy 94.60
Epoch 0099 Loss 0.18 Accuracy 94.63
Epoch 0100 Loss 0.18 Accuracy 94.46
###Markdown
Training the ConvNet Model
###Code
train_model(cnn_model)
###Output
Epoch 0001 Loss 0.27 Accuracy 27.08
Epoch 0002 Loss 0.05 Accuracy 41.07
Epoch 0003 Loss 0.03 Accuracy 67.77
Epoch 0004 Loss 0.03 Accuracy 73.31
Epoch 0005 Loss 0.02 Accuracy 90.30
Epoch 0006 Loss 0.02 Accuracy 93.10
Epoch 0007 Loss 0.02 Accuracy 95.98
Epoch 0008 Loss 0.01 Accuracy 98.77
Epoch 0009 Loss 0.01 Accuracy 96.58
Epoch 0010 Loss 0.01 Accuracy 99.12
Epoch 0011 Loss 0.01 Accuracy 98.88
Epoch 0012 Loss 0.01 Accuracy 98.64
Epoch 0013 Loss 0.01 Accuracy 98.66
Epoch 0014 Loss 0.00 Accuracy 98.38
Epoch 0015 Loss 0.00 Accuracy 99.15
Epoch 0016 Loss 0.00 Accuracy 97.50
Epoch 0017 Loss 0.00 Accuracy 98.98
Epoch 0018 Loss 0.00 Accuracy 98.94
Epoch 0019 Loss 0.00 Accuracy 98.56
Epoch 0020 Loss 0.00 Accuracy 99.06
Epoch 0021 Loss 0.00 Accuracy 99.26
Epoch 0022 Loss 0.00 Accuracy 99.30
Epoch 0023 Loss 0.00 Accuracy 99.18
Epoch 0024 Loss 0.00 Accuracy 99.49
Epoch 0025 Loss 0.00 Accuracy 99.34
Epoch 0026 Loss 0.00 Accuracy 99.24
Epoch 0027 Loss 0.00 Accuracy 99.38
Epoch 0028 Loss 0.00 Accuracy 99.43
Epoch 0029 Loss 0.00 Accuracy 99.40
Epoch 0030 Loss 0.00 Accuracy 99.50
Epoch 0031 Loss 0.00 Accuracy 99.44
Epoch 0032 Loss 0.00 Accuracy 99.52
Epoch 0033 Loss 0.00 Accuracy 99.46
Epoch 0034 Loss 0.00 Accuracy 99.39
Epoch 0035 Loss 0.00 Accuracy 99.22
Epoch 0036 Loss 0.00 Accuracy 99.26
Epoch 0037 Loss 0.00 Accuracy 99.47
Epoch 0038 Loss 0.00 Accuracy 99.18
Epoch 0039 Loss 0.00 Accuracy 99.39
Epoch 0040 Loss 0.00 Accuracy 99.44
Epoch 0041 Loss 0.00 Accuracy 99.43
Epoch 0042 Loss 0.00 Accuracy 99.50
Epoch 0043 Loss 0.00 Accuracy 99.50
Epoch 0044 Loss 0.00 Accuracy 99.53
Epoch 0045 Loss 0.00 Accuracy 99.51
Epoch 0046 Loss 0.00 Accuracy 99.49
Epoch 0047 Loss 0.00 Accuracy 99.46
Epoch 0048 Loss 0.00 Accuracy 99.46
Epoch 0049 Loss 0.00 Accuracy 99.35
Epoch 0050 Loss 0.00 Accuracy 99.50
Epoch 0051 Loss 0.00 Accuracy 99.48
Epoch 0052 Loss 0.00 Accuracy 99.48
Epoch 0053 Loss 0.00 Accuracy 99.48
Epoch 0054 Loss 0.00 Accuracy 99.46
Epoch 0055 Loss 0.00 Accuracy 99.48
Epoch 0056 Loss 0.00 Accuracy 99.50
Epoch 0057 Loss 0.00 Accuracy 99.41
Epoch 0058 Loss 0.00 Accuracy 99.49
Epoch 0059 Loss 0.00 Accuracy 99.48
Epoch 0060 Loss 0.00 Accuracy 99.47
Epoch 0061 Loss 0.00 Accuracy 99.52
Epoch 0062 Loss 0.00 Accuracy 99.49
Epoch 0063 Loss 0.00 Accuracy 99.48
Epoch 0064 Loss 0.00 Accuracy 99.51
Epoch 0065 Loss 0.00 Accuracy 99.46
Epoch 0066 Loss 0.00 Accuracy 99.51
Epoch 0067 Loss 0.00 Accuracy 99.49
Epoch 0068 Loss 0.00 Accuracy 99.52
Epoch 0069 Loss 0.00 Accuracy 99.49
Epoch 0070 Loss 0.00 Accuracy 99.51
Epoch 0071 Loss 0.00 Accuracy 99.51
Epoch 0072 Loss 0.00 Accuracy 99.52
Epoch 0073 Loss 0.00 Accuracy 99.43
Epoch 0074 Loss 0.00 Accuracy 99.53
Epoch 0075 Loss 0.00 Accuracy 99.47
Epoch 0076 Loss 0.00 Accuracy 99.51
Epoch 0077 Loss 0.00 Accuracy 99.55
Epoch 0078 Loss 0.00 Accuracy 99.52
Epoch 0079 Loss 0.00 Accuracy 99.52
Epoch 0080 Loss 0.00 Accuracy 98.78
Epoch 0081 Loss 0.00 Accuracy 99.16
Epoch 0082 Loss 0.00 Accuracy 99.40
Epoch 0083 Loss 0.00 Accuracy 99.35
Epoch 0084 Loss 0.00 Accuracy 99.32
Epoch 0085 Loss 0.00 Accuracy 99.49
Epoch 0086 Loss 0.00 Accuracy 99.49
Epoch 0087 Loss 0.00 Accuracy 99.56
Epoch 0088 Loss 0.00 Accuracy 99.48
Epoch 0089 Loss 0.00 Accuracy 99.48
Epoch 0090 Loss 0.00 Accuracy 99.51
Epoch 0091 Loss 0.00 Accuracy 99.45
Epoch 0092 Loss 0.00 Accuracy 99.52
Epoch 0093 Loss 0.00 Accuracy 99.52
Epoch 0094 Loss 0.00 Accuracy 99.51
Epoch 0095 Loss 0.00 Accuracy 99.51
Epoch 0096 Loss 0.00 Accuracy 99.48
Epoch 0097 Loss 0.00 Accuracy 99.51
Epoch 0098 Loss 0.00 Accuracy 99.53
Epoch 0099 Loss 0.00 Accuracy 99.50
Epoch 0100 Loss 0.00 Accuracy 99.53
###Markdown
Training with PyTorch data processing APIOne of the pain points for ML researchers/practitioners when building a new ML model is the data processing. Here, we demonstrate how to use the data processing API of [PyTorch](https://pytorch.org/) to train a model with Objax. Different deep learning libraries come with different data processing APIs, and depending on your preference, you can choose an API and easily combine it with Objax.Similarly, we prepare the `MNIST` dataset and apply the same data preprocessing.
###Code
import torch
from torchvision import datasets, transforms
transform=transforms.Compose([
transforms.Pad((2,2,2,2), 0),
transforms.ToTensor(),
transforms.Lambda(lambda x: np.concatenate([x] * 3, axis=0)),
transforms.Lambda(lambda x: x * 2 - 1)
])
train_dataset = datasets.MNIST(os.environ['HOME'], train=True, download=True, transform=transform)
test_dataset = datasets.MNIST(os.environ['HOME'], train=False, download=True, transform=transform)
# Define data loader
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch)
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to /root/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
We replace the data processing pipeline of the train and test loops with `train_loader` and `test_loader`, and that's it!
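If you want to sanity-check what the loaders yield before training, you can peek at a single batch (an optional check using the `train_loader` defined above):
```
img, label = next(iter(train_loader))
print(img.shape, label.shape)  # expected: torch.Size([128, 3, 32, 32]) torch.Size([128])
```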
###Code
# Train loop
def train_model_with_torch_data_api(model):
    def predict(model, x):
        """Return softmax class probabilities for a batch of inputs."""
return objax.functional.softmax(model(x, training=False))
def flatten_image(x):
"""Flatten the image before passing it to the DNN."""
if isinstance(model, DNNet):
return objax.functional.flatten(x)
else:
return x
opt = objax.optimizer.Momentum(model.vars())
# Cross Entropy Loss
def loss(x, label):
return objax.functional.loss.cross_entropy_logits_sparse(model(x, training=True), label).mean()
gv = objax.GradValues(loss, model.vars())
def train_op(x, label):
g, v = gv(x, label) # returns gradients, loss
opt(lr, g)
return v
train_op = objax.Jit(train_op, gv.vars() + opt.vars())
for epoch in range(epochs):
avg_loss = 0
tot_data = 0
for _, (img, label) in enumerate(train_loader):
avg_loss += float(train_op(flatten_image(img.numpy()), label.numpy())[0]) * len(img)
tot_data += len(img)
avg_loss /= tot_data
# Eval
accuracy = 0
tot_data = 0
for _, (img, label) in enumerate(test_loader):
accuracy += (np.argmax(predict(model, flatten_image(img.numpy())), axis=1) == label.numpy()).sum()
tot_data += len(img)
accuracy /= tot_data
print('Epoch %04d Loss %.2f Accuracy %.2f' % (epoch + 1, avg_loss, 100 * accuracy))
###Output
###Markdown
Training the DNN Model with PyTorch data API
###Code
dnn_layer_sizes = 3072, 128, 10
dnn_model = DNNet(dnn_layer_sizes, objax.functional.leaky_relu)
train_model_with_torch_data_api(dnn_model)
###Output
Epoch 0001 Loss 2.57 Accuracy 34.35
Epoch 0002 Loss 1.93 Accuracy 58.51
Epoch 0003 Loss 1.32 Accuracy 68.46
Epoch 0004 Loss 0.83 Accuracy 80.95
Epoch 0005 Loss 0.62 Accuracy 84.74
Epoch 0006 Loss 0.53 Accuracy 86.53
Epoch 0007 Loss 0.48 Accuracy 84.18
Epoch 0008 Loss 0.45 Accuracy 88.42
Epoch 0009 Loss 0.42 Accuracy 87.34
Epoch 0010 Loss 0.40 Accuracy 89.29
Epoch 0011 Loss 0.39 Accuracy 89.31
Epoch 0012 Loss 0.38 Accuracy 89.86
Epoch 0013 Loss 0.37 Accuracy 89.91
Epoch 0014 Loss 0.36 Accuracy 86.94
Epoch 0015 Loss 0.36 Accuracy 89.89
Epoch 0016 Loss 0.35 Accuracy 90.12
Epoch 0017 Loss 0.34 Accuracy 90.40
Epoch 0018 Loss 0.34 Accuracy 90.31
Epoch 0019 Loss 0.34 Accuracy 90.79
Epoch 0020 Loss 0.33 Accuracy 90.71
Epoch 0021 Loss 0.33 Accuracy 90.70
Epoch 0022 Loss 0.33 Accuracy 90.69
Epoch 0023 Loss 0.33 Accuracy 90.91
Epoch 0024 Loss 0.32 Accuracy 90.92
Epoch 0025 Loss 0.32 Accuracy 91.06
Epoch 0026 Loss 0.32 Accuracy 91.19
Epoch 0027 Loss 0.32 Accuracy 91.31
Epoch 0028 Loss 0.31 Accuracy 91.31
Epoch 0029 Loss 0.31 Accuracy 91.20
Epoch 0030 Loss 0.31 Accuracy 91.31
Epoch 0031 Loss 0.31 Accuracy 91.36
Epoch 0032 Loss 0.31 Accuracy 91.42
Epoch 0033 Loss 0.30 Accuracy 91.27
Epoch 0034 Loss 0.31 Accuracy 91.47
Epoch 0035 Loss 0.30 Accuracy 91.57
Epoch 0036 Loss 0.30 Accuracy 91.44
Epoch 0037 Loss 0.30 Accuracy 91.55
Epoch 0038 Loss 0.30 Accuracy 91.56
Epoch 0039 Loss 0.29 Accuracy 91.75
Epoch 0040 Loss 0.29 Accuracy 91.69
Epoch 0041 Loss 0.29 Accuracy 91.60
Epoch 0042 Loss 0.29 Accuracy 91.77
Epoch 0043 Loss 0.29 Accuracy 91.76
Epoch 0044 Loss 0.29 Accuracy 91.84
Epoch 0045 Loss 0.28 Accuracy 92.05
Epoch 0046 Loss 0.28 Accuracy 91.78
Epoch 0047 Loss 0.28 Accuracy 92.01
Epoch 0048 Loss 0.28 Accuracy 91.95
Epoch 0049 Loss 0.28 Accuracy 90.11
Epoch 0050 Loss 0.28 Accuracy 92.14
Epoch 0051 Loss 0.28 Accuracy 92.03
Epoch 0052 Loss 0.27 Accuracy 92.29
Epoch 0053 Loss 0.27 Accuracy 92.17
Epoch 0054 Loss 0.27 Accuracy 92.12
Epoch 0055 Loss 0.27 Accuracy 92.34
Epoch 0056 Loss 0.27 Accuracy 92.32
Epoch 0057 Loss 0.27 Accuracy 92.47
Epoch 0058 Loss 0.27 Accuracy 92.38
Epoch 0059 Loss 0.27 Accuracy 92.39
Epoch 0060 Loss 0.26 Accuracy 92.51
Epoch 0061 Loss 0.27 Accuracy 92.50
Epoch 0062 Loss 0.26 Accuracy 92.46
Epoch 0063 Loss 0.26 Accuracy 92.65
Epoch 0064 Loss 0.26 Accuracy 92.57
Epoch 0065 Loss 0.26 Accuracy 92.63
Epoch 0066 Loss 0.26 Accuracy 92.75
Epoch 0067 Loss 0.26 Accuracy 92.57
Epoch 0068 Loss 0.26 Accuracy 92.88
Epoch 0069 Loss 0.25 Accuracy 92.53
Epoch 0070 Loss 0.25 Accuracy 92.80
Epoch 0071 Loss 0.25 Accuracy 92.71
Epoch 0072 Loss 0.25 Accuracy 92.75
Epoch 0073 Loss 0.25 Accuracy 92.84
Epoch 0074 Loss 0.25 Accuracy 92.71
Epoch 0075 Loss 0.25 Accuracy 92.95
Epoch 0076 Loss 0.25 Accuracy 92.82
Epoch 0077 Loss 0.25 Accuracy 92.90
Epoch 0078 Loss 0.25 Accuracy 92.87
Epoch 0079 Loss 0.25 Accuracy 89.55
Epoch 0080 Loss 0.25 Accuracy 92.86
Epoch 0081 Loss 0.24 Accuracy 92.99
Epoch 0082 Loss 0.24 Accuracy 93.03
Epoch 0083 Loss 0.24 Accuracy 93.03
Epoch 0084 Loss 0.24 Accuracy 93.01
Epoch 0085 Loss 0.24 Accuracy 93.13
Epoch 0086 Loss 0.24 Accuracy 93.17
Epoch 0087 Loss 0.24 Accuracy 92.87
Epoch 0088 Loss 0.24 Accuracy 92.93
Epoch 0089 Loss 0.24 Accuracy 93.16
Epoch 0090 Loss 0.24 Accuracy 93.38
Epoch 0091 Loss 0.24 Accuracy 92.98
Epoch 0092 Loss 0.24 Accuracy 93.30
Epoch 0093 Loss 0.23 Accuracy 93.09
Epoch 0094 Loss 0.23 Accuracy 93.19
Epoch 0095 Loss 0.23 Accuracy 93.25
Epoch 0096 Loss 0.23 Accuracy 93.22
Epoch 0097 Loss 0.23 Accuracy 93.28
Epoch 0098 Loss 0.23 Accuracy 93.39
Epoch 0099 Loss 0.23 Accuracy 93.25
Epoch 0100 Loss 0.23 Accuracy 93.30
###Markdown
Training the ConvNet Model with PyTorch data API
###Code
cnn_model = ConvNet(nin=3, nclass=10)
print(cnn_model.vars())
train_model_with_torch_data_api(cnn_model)
###Output
(ConvNet).conv_block1(Sequential)[0](Conv2D).w 432 (3, 3, 3, 16)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[1](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[3](Conv2D).w 2304 (3, 3, 16, 16)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_mean 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).running_var 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).beta 16 (1, 16, 1, 1)
(ConvNet).conv_block1(Sequential)[4](BatchNorm2D).gamma 16 (1, 16, 1, 1)
(ConvNet).conv_block2(Sequential)[0](Conv2D).w 4608 (3, 3, 16, 32)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[1](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[3](Conv2D).w 9216 (3, 3, 32, 32)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_mean 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).running_var 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).beta 32 (1, 32, 1, 1)
(ConvNet).conv_block2(Sequential)[4](BatchNorm2D).gamma 32 (1, 32, 1, 1)
(ConvNet).conv_block3(Sequential)[0](Conv2D).w 18432 (3, 3, 32, 64)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[1](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[3](Conv2D).w 36864 (3, 3, 64, 64)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_mean 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).running_var 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).beta 64 (1, 64, 1, 1)
(ConvNet).conv_block3(Sequential)[4](BatchNorm2D).gamma 64 (1, 64, 1, 1)
(ConvNet).linear(Linear).b 10 (10,)
(ConvNet).linear(Linear).w 640 (64, 10)
+Total(32) 73402
Epoch 0001 Loss 0.26 Accuracy 24.18
Epoch 0002 Loss 0.05 Accuracy 37.53
Epoch 0003 Loss 0.03 Accuracy 42.17
Epoch 0004 Loss 0.03 Accuracy 73.50
Epoch 0005 Loss 0.02 Accuracy 80.33
Epoch 0006 Loss 0.02 Accuracy 83.28
Epoch 0007 Loss 0.02 Accuracy 90.87
Epoch 0008 Loss 0.01 Accuracy 98.77
Epoch 0009 Loss 0.01 Accuracy 98.42
Epoch 0010 Loss 0.01 Accuracy 98.16
Epoch 0011 Loss 0.01 Accuracy 98.74
Epoch 0012 Loss 0.01 Accuracy 95.05
Epoch 0013 Loss 0.01 Accuracy 98.89
Epoch 0014 Loss 0.00 Accuracy 98.70
Epoch 0015 Loss 0.00 Accuracy 99.01
Epoch 0016 Loss 0.00 Accuracy 98.97
Epoch 0017 Loss 0.00 Accuracy 98.79
Epoch 0018 Loss 0.00 Accuracy 98.37
Epoch 0019 Loss 0.00 Accuracy 99.19
Epoch 0020 Loss 0.00 Accuracy 99.22
Epoch 0021 Loss 0.00 Accuracy 98.43
Epoch 0022 Loss 0.00 Accuracy 99.02
Epoch 0023 Loss 0.00 Accuracy 99.38
Epoch 0024 Loss 0.00 Accuracy 99.42
Epoch 0025 Loss 0.00 Accuracy 99.45
Epoch 0026 Loss 0.00 Accuracy 99.35
Epoch 0027 Loss 0.00 Accuracy 99.42
Epoch 0028 Loss 0.00 Accuracy 99.42
Epoch 0029 Loss 0.00 Accuracy 99.14
Epoch 0030 Loss 0.00 Accuracy 99.33
Epoch 0031 Loss 0.00 Accuracy 99.36
Epoch 0032 Loss 0.00 Accuracy 99.18
Epoch 0033 Loss 0.00 Accuracy 99.43
Epoch 0034 Loss 0.00 Accuracy 99.47
Epoch 0035 Loss 0.00 Accuracy 99.49
Epoch 0036 Loss 0.00 Accuracy 99.53
Epoch 0037 Loss 0.00 Accuracy 99.38
Epoch 0038 Loss 0.00 Accuracy 99.39
Epoch 0039 Loss 0.00 Accuracy 99.49
Epoch 0040 Loss 0.00 Accuracy 99.49
Epoch 0041 Loss 0.00 Accuracy 99.47
Epoch 0042 Loss 0.00 Accuracy 99.54
Epoch 0043 Loss 0.00 Accuracy 99.35
Epoch 0044 Loss 0.00 Accuracy 99.45
Epoch 0045 Loss 0.00 Accuracy 99.47
Epoch 0046 Loss 0.00 Accuracy 99.53
Epoch 0047 Loss 0.00 Accuracy 99.50
Epoch 0048 Loss 0.00 Accuracy 99.52
Epoch 0049 Loss 0.00 Accuracy 99.51
Epoch 0050 Loss 0.00 Accuracy 99.49
Epoch 0051 Loss 0.00 Accuracy 99.45
Epoch 0052 Loss 0.00 Accuracy 99.48
Epoch 0053 Loss 0.00 Accuracy 99.50
Epoch 0054 Loss 0.00 Accuracy 99.46
Epoch 0055 Loss 0.00 Accuracy 99.50
Epoch 0056 Loss 0.00 Accuracy 99.48
Epoch 0057 Loss 0.00 Accuracy 99.46
Epoch 0058 Loss 0.00 Accuracy 99.44
Epoch 0059 Loss 0.00 Accuracy 99.46
Epoch 0060 Loss 0.00 Accuracy 99.26
Epoch 0061 Loss 0.00 Accuracy 93.99
Epoch 0062 Loss 0.00 Accuracy 97.80
Epoch 0063 Loss 0.00 Accuracy 80.26
Epoch 0064 Loss 0.00 Accuracy 99.20
Epoch 0065 Loss 0.00 Accuracy 99.38
Epoch 0066 Loss 0.00 Accuracy 99.44
Epoch 0067 Loss 0.00 Accuracy 99.51
Epoch 0068 Loss 0.00 Accuracy 99.45
Epoch 0069 Loss 0.00 Accuracy 99.42
Epoch 0070 Loss 0.00 Accuracy 99.50
Epoch 0071 Loss 0.00 Accuracy 99.52
Epoch 0072 Loss 0.00 Accuracy 99.44
Epoch 0073 Loss 0.00 Accuracy 99.41
Epoch 0074 Loss 0.00 Accuracy 99.46
Epoch 0075 Loss 0.00 Accuracy 99.42
Epoch 0076 Loss 0.00 Accuracy 99.49
Epoch 0077 Loss 0.00 Accuracy 99.50
Epoch 0078 Loss 0.00 Accuracy 99.56
Epoch 0079 Loss 0.00 Accuracy 99.52
Epoch 0080 Loss 0.00 Accuracy 99.42
Epoch 0081 Loss 0.00 Accuracy 99.49
Epoch 0082 Loss 0.00 Accuracy 99.48
Epoch 0083 Loss 0.00 Accuracy 99.44
Epoch 0084 Loss 0.00 Accuracy 99.49
Epoch 0085 Loss 0.00 Accuracy 99.53
Epoch 0086 Loss 0.00 Accuracy 99.52
Epoch 0087 Loss 0.00 Accuracy 99.52
Epoch 0088 Loss 0.00 Accuracy 99.50
Epoch 0089 Loss 0.00 Accuracy 99.51
Epoch 0090 Loss 0.00 Accuracy 99.50
Epoch 0091 Loss 0.00 Accuracy 99.49
Epoch 0092 Loss 0.00 Accuracy 99.52
Epoch 0093 Loss 0.00 Accuracy 99.50
Epoch 0094 Loss 0.00 Accuracy 99.50
Epoch 0095 Loss 0.00 Accuracy 99.55
Epoch 0096 Loss 0.00 Accuracy 99.48
Epoch 0097 Loss 0.00 Accuracy 99.51
Epoch 0098 Loss 0.00 Accuracy 99.52
Epoch 0099 Loss 0.00 Accuracy 99.49
Epoch 0100 Loss 0.00 Accuracy 99.52
|
K-Cap_2021/regressions/data.ipynb | ###Markdown
--- Dimensionality Reduction
###Code
from sklearn.decomposition import PCA, TruncatedSVD
svd = TruncatedSVD(n_components=100)
svdmat = svd.fit_transform(mat.T)
4598195/(871445*94464), 871445*94464
svdmat.shape, mat.shape, mat.size, (svdmat > 0.00001).sum()/svdmat.size, mat.size/(mat.shape[0]*mat.shape[1])
len(tfidf.get_feature_names())
np.savetxt("svd_mat_0.15.tsv", svdmat, delimiter="\t")
from umap import UMAP
umap = UMAP(
n_neighbors=15, min_dist=0.1, n_components=2, metric='euclidean',
verbose=True
)
orig_n = mat.T.shape[0]
rand_inds = np.random.randint(orig_n, size=orig_n//10)
umat = umap.fit_transform(mat.T)
###Output
_____no_output_____ |
The+Stroop+effect.ipynb | ###Markdown
The Stroop effect **Independent variable** - the two conditions - congruent words, and incongruent words.**Dependent variable** - reaction time in each of the two conditions.**Hypotheses:**Null hypothesis, $H_0$ - the reaction time of the congruent words condition is greater than or equal to the incongruent words condition: $\mu_{congruent} \ge \mu_{incongruent}$Alternative hypothesis, $H_A$ - the reaction time of the congruent words condition is less than the incongruent words condition: $\mu_{congruent} < \mu_{incongruent}$Where: $\mu_{congruent}$ - mean of congruent reaction times in seconds, $\mu_{incongruent}$ - mean of incongruent reaction times in seconds.**Recommended statistical test** - dependent samples t-test, one-tailed, negative.**Justification for the hypotheses and test selection** - to show that the reaction time of the incongruent words condition is significantly longer than the congruent words condition. This would be in alignment with previous experimental findings/results - http://psychclassics.yorku.ca/Stroop/; https://en.wikipedia.org/wiki/Stroop_effect#Stroop_test. My Stroop effect time (https://faculty.washington.edu/chudler/java/ready.html) - congruent = 13.287 secs, incongruent = 27.269 secs. Reading in and previewing the data
###Code
import pandas as pd
stroop_data = pd.read_csv("stroopdata.csv")
stroop_data.head(5)
stroop_data.shape
###Output
_____no_output_____
###Markdown
Descriptive statistics
###Code
stroop_data.describe()
###Output
_____no_output_____
###Markdown
From above:* Mean - * Congruent = 14.05 secs * Incongruent = 22.02 secs* Standard deviation - * Congruent = 3.56 secs * Incongruent = 4.80 secs* Median - * Congruent = 14.36 secs * Incongruent = 21.02 secs Data visualizations
###Code
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
fig = plt.figure(figsize=(10,4))
ax1 = fig.add_subplot(1,2,1)
ax1.set_xlim(5,40)
ax1.set_ylim(0,4)
sns.distplot(stroop_data["Congruent"], bins=10, kde=False, color=(95/255,158/255,209/255))
ax2 = fig.add_subplot(1,2,2)
ax2.set_xlim(5,40)
ax2.set_ylim(0,4)
sns.distplot(stroop_data["Incongruent"], bins=15, kde=False, color=(200/255, 82/255, 0/255))
sns.despine(left=True, bottom=True)
ax1.set_title("Congruent task - Histogram")
ax1.set_xlabel("time (secs)")
ax1.set_yticks([0,1,2,3,4])
ax2.set_title("Incongruent task - Histogram")
ax2.set_xlabel("time (secs)")
ax2.set_yticks([0,1,2,3,4])
plt.show()
sns.set_style("white")
fig = plt.figure(figsize=(10,4))
ax1 = fig.add_subplot(1,2,1)
sns.kdeplot(stroop_data["Congruent"], shade=True, legend=False, color=(95/255,158/255,209/255))
ax2 = fig.add_subplot(1,2,2)
sns.kdeplot(stroop_data["Incongruent"], shade=True, legend=False, color=(200/255, 82/255, 0/255))
sns.despine(left=True, bottom=True)
ax1.set_title("Congruent task - Kernel density plot")
ax1.set_xlabel("time (secs)")
ax2.set_title("Incongruent task - Kernel density plot")
ax2.set_xlabel("time (secs)")
plt.show()
sns.set_style("white")
fig, ax = plt.subplots(figsize=(5,4))
sns.boxplot(data=stroop_data, orient="h", palette=[(95/255,158/255,209/255),(200/255, 82/255, 0/255)])
sns.despine(left=True, bottom=True)
ax.set_title("Congruent vs Incongruent task - Box plot")
ax.set_xlabel("time (secs)")
plt.show()
###Output
_____no_output_____
###Markdown
**Observations:**1. Distribution of congruent and incongruent times are dense around the median and less spread out.2. It is easily noticeable that the incongruent times are considerably higher than congruent times. Dependent samples t-test $\bar x_c = 14.05$ secs$\bar x_i = 22.02$ secs$n = 24$$df = 23$
###Code
stroop_data["Congruent - Incongruent"] = stroop_data["Congruent"] - stroop_data["Incongruent"]
stroop_data["Congruent - Incongruent"].std()
###Output
_____no_output_____
###Markdown
$s_{c-i} = 4.86$ secs
###Code
import math
from scipy import stats
mean_diff = stroop_data["Congruent"].mean() - stroop_data["Incongruent"].mean()
std_err = stroop_data["Congruent - Incongruent"].std() / math.sqrt(stroop_data["Congruent"].count())
ttest_res = stats.ttest_rel(stroop_data["Congruent"], stroop_data["Incongruent"])
t_stat, p_value = ttest_res  # unpack so t_stat is available for the effect-size cell below
print(mean_diff, std_err, ttest_res)
###Output
-7.964791666666665 0.9930286347783406 Ttest_relResult(statistic=-8.020706944109957, pvalue=4.1030005857111781e-08)
###Markdown
$\bar x_c - \bar x_i = -7.96$ secs; S.E. = 0.99 secs; $t_{stat} = -8.021$; P-value = 0.00000004; t(23) = -8.021, p = .00, one-tailed; $\alpha = 0.01$, i.e. confidence level = 99%; $t_{critical} = -2.500$.**Decision: Reject $H_0$ i.e. reject "the reaction time of the congruent words condition is greater than or equal to the incongruent words condition"****This means that the reaction time of the congruent words condition is significantly less than the incongruent words condition. This outcome was expected based on previous experimental results.**
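For reference, the t statistic follows directly from the summary values above: $t = \frac{\bar x_c - \bar x_i}{s_{c-i}/\sqrt{n}} = \frac{-7.96}{4.86/\sqrt{24}} \approx -8.02$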
###Code
margin_err = 2.807*std_err
print("Confidence interval on the mean difference; 99% CI = (", round(mean_diff-margin_err,2),",",round(mean_diff+margin_err,2),")")
print("d =", round(mean_diff/stroop_data["Congruent - Incongruent"].std(),2))
print("r^2", "=", round(t_stat**2/(t_stat**2+(stroop_data["Congruent"].count()-1)),2), end="")
###Output
r^2 = 0.74 |
sandbox_script.ipynb | ###Markdown
Test metric notes: WER from jiwer feels a bit weird, so we just implement our own MER (mixed error rate) and CER (character error rate).
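As a quick worked check of the MER definition using the first example pair in the cells below: the prediction tokenizes to [我, 爱, 你, anak, gembala] and the reference to [我, 爱, 你, agum], so the edit distance is 2 (one substitution, one deletion) over 4 reference tokens, giving MER = 2/4 = 0.5.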
###Code
import editdistance
import jieba
from itertools import chain
def tokenize_for_mer(text):
tokens = list(filter(lambda tok: len(tok.strip()) > 0, jieba.lcut(text)))
tokens = [[tok] if tok.isascii() else list(tok) for tok in tokens]
return list(chain(*tokens))
def tokenize_for_cer(text):
tokens = list(filter(lambda tok: len(tok.strip()) > 0, list(text)))
return tokens
pred = tokenize_for_mer('我爱你 anak gembala')
ref = tokenize_for_mer('我爱你 agum')
editdistance.distance(pred, ref) / len(ref)
pred = tokenize_for_cer('我爱你 anak gembala')
ref = tokenize_for_cer('我爱你 agum')
editdistance.distance(pred, ref) / len(ref)
###Output
_____no_output_____
###Markdown
Code starts here
###Code
# Assumed imports for this cell (a sketch; ModelArguments and DataTrainingArguments are custom
# dataclasses assumed to be defined elsewhere in this project)
import logging
import os
import re
import sys
from dataclasses import dataclass
from itertools import chain
from typing import Dict, List, Optional, Union
import editdistance
import jieba
import numpy as np
import pandas as pd
import torch
import torchaudio
import transformers
from datasets import Dataset, DatasetDict, load_metric
from transformers import (HfArgumentParser, Trainer, TrainingArguments,
                          Wav2Vec2ForCTC, Wav2Vec2Processor, set_seed)
from transformers.trainer_utils import get_last_checkpoint, is_main_process
logger = logging.getLogger(__name__)
#####
# Common Functions
#####
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "ʻ", "ˆ"]
def remove_special_characters(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
return batch
def extract_all_chars(batch):
all_text = " ".join(batch["sentence"])
vocab = list(set(all_text))
return {"vocab": [vocab], "all_text": [all_text]}
#####
# Data Loading Function
#####
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["audio_path"])
batch["speech_sample"] = speech_array[0].numpy()
batch["sampling_rate"] = sampling_rate
return batch
def load_dataset(manifest_file, num_proc):
batches = {"path": [], "text": [], "target_sample_rate": []}
base_path = '/'.join(manifest_file.split('/')[:-1])
manifest_df = pd.read_csv(manifest_file)
manifest_df = manifest_df.rename({'text': 'target_text'}, axis=1)
manifest_df['audio_path'] = manifest_df['audio_path'].apply(lambda path: f'{base_path}/{path}')
batches = Dataset.from_pandas(manifest_df)
batches = batches.map(speech_file_to_array_fn, num_proc=num_proc)
return batches
#####
# Data Collator
#####
@dataclass
class DataCollatorCTCWithPadding:
"""
Data collator that will dynamically pad the inputs received.
Args:
processor (:class:`~transformers.Wav2Vec2Processor`)
            The processor used for processing the data.
padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`):
Select a strategy to pad the returned sequences (according to the model's padding side and padding index)
among:
* :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
sequence if provided).
* :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the
maximum acceptable input length for the model if that argument is not provided.
* :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of
different lengths).
max_length (:obj:`int`, `optional`):
Maximum length of the ``input_values`` of the returned list and optionally padding length (see above).
max_length_labels (:obj:`int`, `optional`):
Maximum length of the ``labels`` returned list and optionally padding length (see above).
pad_to_multiple_of (:obj:`int`, `optional`):
If set will pad the sequence to a multiple of the provided value.
This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >=
7.5 (Volta).
"""
processor: Wav2Vec2Processor
padding: Union[bool, str] = True
max_length: Optional[int] = None
max_length_labels: Optional[int] = None
pad_to_multiple_of: Optional[int] = None
pad_to_multiple_of_labels: Optional[int] = None
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # split inputs and labels since they have to be of different lengths and need
# different padding methods
input_features = [{"input_values": feature["input_values"]} for feature in features]
label_features = [{"input_ids": feature["labels"]} for feature in features]
batch = self.processor.pad(
input_features,
padding=self.padding,
max_length=self.max_length,
pad_to_multiple_of=self.pad_to_multiple_of,
return_tensors="pt",
)
with self.processor.as_target_processor():
labels_batch = self.processor.pad(
label_features,
padding=self.padding,
max_length=self.max_length_labels,
pad_to_multiple_of=self.pad_to_multiple_of_labels,
return_tensors="pt",
)
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
batch["labels"] = labels
return batch
#####
# Compute Metric Function
#####
wer_metric = load_metric("wer")
cer_metric = load_metric("cer")
def tokenize_for_mer(text):
tokens = list(filter(lambda tok: len(tok.strip()) > 0, jieba.lcut(text)))
tokens = [[tok] if tok.isascii() else list(tok) for tok in tokens]
return list(chain(*tokens))
def tokenize_for_cer(text):
tokens = list(filter(lambda tok: len(tok.strip()) > 0, list(text)))
return tokens
def compute_metrics(pred):
pred_logits = pred.predictions
pred_ids = np.argmax(pred_logits, axis=-1)
pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
pred_strs = processor.batch_decode(pred_ids)
# we do not want to group tokens when computing the metrics
label_strs = processor.batch_decode(pred.label_ids, group_tokens=False)
mixed_distance, mixed_tokens = 0, 0
char_distance, char_tokens = 0, 0
    for pred_str, label_str in zip(pred_strs, label_strs):
# Calculate
m_pred = tokenize_for_mer(pred_str)
m_ref = tokenize_for_mer(label_str)
mixed_distance += editdistance.distance(m_pred, m_ref)
mixed_tokens += len(m_ref)
c_pred = tokenize_for_cer(pred_str)
c_ref = tokenize_for_cer(label_str)
char_distance += editdistance.distance(c_pred, c_ref)
char_tokens += len(c_ref)
mer = mixed_distance / mixed_tokens
cer = char_distance / char_tokens
# wer = wer_metric.compute(predictions=pred_str, references=label_str)
# cer = cer_metric.compute(predictions=pred_str, references=label_str)
return {"mer": mer, "cer": cer}
#####
# Main Functions
#####
def run(model_args, data_args, training_args):
###
# Prepare Dataset
###
raw_datasets = DatasetDict()
raw_datasets["train"] = load_dataset(data_args.train_manifest_path, data_args.num_proc)
raw_datasets["valid"] = load_dataset(data_args.valid_manifest_path, data_args.num_proc)
raw_datasets["test"] = load_dataset(data_args.test_manifest_path, data_args.num_proc)
###
# Prepare Processor & Model
###
processor = Wav2Vec2Processor.from_pretrained(model_args.model_name_or_path)
model = Wav2Vec2ForCTC.from_pretrained(model_args.model_name_or_path)
model.cuda()
###
# Preprocessing datasets
###
# Remove ignorable characters
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
def remove_special_characters(batch):
if chars_to_ignore_regex is not None:
batch["target_text"] = re.sub(chars_to_ignore_regex, "", batch[data_args.text_column_name]).lower() + " "
else:
batch["target_text"] = batch[data_args.text_column_name].lower() + " "
return batch
with training_args.main_process_first(desc="dataset map special characters removal"):
raw_datasets = raw_datasets.map(
remove_special_characters,
remove_columns=[data_args.text_column_name],
desc="remove special characters from datasets",
)
# Preprocess audio sample and label text
def prepare_dataset(batch):
# Preprocess audio
batch["input_values"] = processor(batch["speech_sample"]).input_values[0]
# Preprocess text
with processor.as_target_processor():
batch["labels"] = processor(batch["target_text"]).input_ids
return batch
with training_args.main_process_first(desc="dataset map preprocessing"):
vectorized_datasets = raw_datasets.map(
prepare_dataset,
remove_columns=raw_datasets["train"].column_names,
num_proc=data_args.preprocessing_num_workers,
desc="preprocess datasets",
)
if data_args.preprocessing_only:
logger.info(f"Data preprocessing finished. Files cached at {vectorized_datasets.cache_files}")
return
###
# Prepare Data Collator and Trainer
###
# Instantiate custom data collator
data_collator = DataCollatorCTCWithPadding(processor=processor)
# Initialize Trainer
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=vectorized_datasets["train"] if training_args.do_train else None,
eval_dataset=vectorized_datasets["valid"] if training_args.do_eval else None,
tokenizer=processor.feature_extractor,
)
###
# Training Phase
###
if training_args.do_train:
# use last checkpoint if exist
if last_checkpoint is not None:
checkpoint = last_checkpoint
elif os.path.isdir(model_args.model_name_or_path):
checkpoint = model_args.model_name_or_path
else:
checkpoint = None
# Save the feature_extractor and the tokenizer
if is_main_process(training_args.local_rank):
processor.save_pretrained(training_args.output_dir)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
trainer.save_model()
metrics = train_result.metrics
max_train_samples = (
data_args.max_train_samples
if data_args.max_train_samples is not None
else len(vectorized_datasets["train"])
)
metrics["train_samples"] = min(max_train_samples, len(vectorized_datasets["train"]))
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
###
# Evaluation Phase
###
results = {}
if training_args.do_eval:
logger.info("*** Evaluate ***")
metrics = trainer.evaluate(eval_dataset=vectorized_datasets["test"])
metrics["eval_samples"] = len(vectorized_datasets["test"])
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
# Write model card and (optionally) push to hub
kwargs = {
"finetuned_from": model_args.model_name_or_path,
"tasks": "speech-recognition",
"tags": ["automatic-speech-recognition", "ASCEND"],
"dataset_args": f"Config: na",
"dataset": f"ASCEND",
"language": 'zh-en'
}
if training_args.push_to_hub:
trainer.push_to_hub(**kwargs)
else:
trainer.create_model_card(**kwargs)
return results
def main():
###
# Parsing & Initialization
###
# Parse argument
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))
if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
# Set random seed
set_seed(training_args.seed)
# Detect last checkpoint
last_checkpoint = None
if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:
last_checkpoint = get_last_checkpoint(training_args.output_dir)
if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:
raise ValueError(
f"Output directory ({training_args.output_dir}) already exists and is not empty. "
"Use --overwrite_output_dir to overcome."
)
elif last_checkpoint is not None:
logger.info(
f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change "
"the `--output_dir` or add `--overwrite_output_dir` to train from scratch."
)
###
# Prepare logger
###
# Init logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
handlers=[logging.StreamHandler(sys.stdout)],
)
logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN)
# Log on each process the small summary:
logger.warning(
f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}"
f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}"
)
# Set the verbosity to info of the Transformers logger (on main process only):
if is_main_process(training_args.local_rank):
transformers.utils.logging.set_verbosity_info()
logger.info("Training/evaluation parameters %s", training_args)
###
# RUN RUN RUN!!!
###
run(model_args, data_args, training_args)
###Output
_____no_output_____ |
Python basics practice/Python 3 (26)/All In - Exercise_Py3.ipynb | ###Markdown
All in - Conditional Statements, Functions, and Loops You are provided with the 'nums' list. Complete the code in the cell that follows. Use a while loop to count the number of values lower than 20. *Hint: This exercise is similar to what we did in the video lecture. You might prefer using the x[item] structure for indicating the value of an element from the list.*
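One way to sketch such a loop before looking at the solution cell below (a hedged example that uses the stated cutoff of 20 and an explicit bounds check, relying on the provided list being sorted):
```
count = 0
i = 0
while i < len(nums) and nums[i] < 20:
    count = count + 1
    i = i + 1
print(count)
```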
###Code
nums = [1,12,24,31,51,70,100]
nums
i=0
count=0
while nums[i]<=50:
count=count+1
i=i+1
print(count)
###Output
4
|
functional_keras.ipynb | ###Markdown
THERE ARE 2 WAYS TO BUILD MODELS: 1. the Sequential API (two variants) 2. the Functional API
###Code
# Sequential
from keras.models import Sequential
from keras.layers import Dense, Activation
model = Sequential([
Dense(32, input_shape= (784,)),
Activation('relu'),
Dense(10),
Activation('softmax'),
])
# Sequantial with .add() method
model = Sequential()
model.add(Dense(32,input_dim=784))
model.add(Activation('relu'))
# Functional api
from keras.models import Model  # Model ties an input tensor to an output tensor, producing a trainable network
from keras.layers import Input
# This returns a tensor
inputs = Input(shape=(784,))
# a layer instance is callable on a tensor, and returns a tensor
x = Dense(64, activation = 'relu')(inputs)
x = Dense(64, activation = 'relu')(x)
predictions = Dense(10, activation = 'softmax')(x)
# This creates a model that includes
# the Input layer and three Dense layers
model = Model(inputs = inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
loss ='categorical_crossentropy',
metrics=['accuracy'])
# model.fit(data,labels)
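# --- Added sketch (not in the original notebook): one way to train the functional model above.
# Random arrays stand in for a real dataset; their shapes simply match the Input(784,) and
# Dense(10, softmax) layers defined above, so treat this as an illustration only.
import numpy as np
from keras.utils import to_categorical
data = np.random.random((1000, 784))
labels = to_categorical(np.random.randint(10, size=(1000,)), num_classes=10)
model.fit(data, labels, epochs=2, batch_size=32)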
x = [1,2,3,4]
x[:-1]
###Output
_____no_output_____ |
tutorials/Observatory_usecase6_observationdata.ipynb | ###Markdown
A Notebook to analyze downloaded gridded climate time-series data (Case study: the Sauk-Suiattle Watershed )<img src= "http://www.sauk-suiattle.com/images/Elliott.jpg"style="float:left;width:150px;padding:20px"> This data is compiled to digitally observe the Sauk-Suiattle Watershed, powered by HydroShare. Use this Jupyter Notebook to: Migrate data sets from prior data download events,Compute daily, monthly, and annual temperature and precipitation statistics, Visualize precipitation results relative to the forcing data, Visualize the time-series trends among the gridded cells using different Gridded data products. A Watershed Dynamics Model by the Watershed Dynamics Research Group in the Civil and Environmental Engineering Department at the University of Washington 1. HydroShare Setup and PreparationTo run this notebook, we must import several libaries. These are listed in order of 1) Python standard libraries, 2) hs_utils library provides functions for interacting with HydroShare, including resource querying, dowloading and creation, and 3) the observatory_gridded_hydromet library that is downloaded with this notebook.
###Code
#conda install -c conda-forge basemap-data-hires --yes
# data processing
import os
import pandas as pd, numpy as np, dask, json
import ogh
import geopandas as gpd
# data migration library
from utilities import hydroshare
# plotting and shape libraries
%matplotlib inline
# silencing warning
import warnings
warnings.filterwarnings("ignore")
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
# spatial plotting
import fiona
import shapely.ops
from shapely.geometry import MultiPolygon, shape, point, box, Polygon
from mpl_toolkits.basemap import Basemap
# # spatial plotting
# import fiona
# import shapely.ops
# from shapely.geometry import MultiPolygon, shape, point, box, Polygon
# from descartes import PolygonPatch
# from matplotlib.collections import PatchCollection
# from mpl_toolkits.basemap import Basemap
# initialize ogh_meta
meta_file = dict(ogh.ogh_meta())
sorted(meta_file.keys())
sorted(meta_file['dailymet_livneh2013'].keys())
###Output
_____no_output_____
###Markdown
Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.
###Code
notebookdir = os.getcwd()
hs=hydroshare.hydroshare()
homedir = hs.getContentPath(os.environ["HS_RES_ID"])
os.chdir(homedir)
print('Data will be loaded from and save to:'+homedir)
###Output
_____no_output_____
###Markdown
If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, you can migrate this data to the HydroShare iRods server as a Generic Resource. 2. Get list of gridded climate points for the watershedThis example uses a shapefile with the watershed boundary of the Sauk-Suiattle Basin, which is stored in HydroShare at the following url: https://www.hydroshare.org/resource/c532e0578e974201a0bc40a37ef2d284/. The data for our processing routines can be retrieved using the getResourceFromHydroShare function by passing in the global identifier from the url above. In the next cell, we download this resource from HydroShare, and identify that the points in this resource are available for downloading gridded hydrometeorology data, based on the point shapefile at https://www.hydroshare.org/resource/ef2d82bf960144b4bfb1bae6242bcc7f/, which is for the extent of North America and includes the average elevation for each 1/16 degree grid cell. The file must include columns with station numbers, latitude, longitude, and elevation. The header of these columns must be FID, LAT, LONG_, and ELEV or RASTERVALU, respectively. The station numbers will be used for the remainder of the code to uniquely reference data from each climate station, as well as to identify minimum, maximum, and average elevation of all of the climate stations. The webserice is currently set to a URL for the smallest geographic overlapping extent - e.g. WRF for Columbia River Basin (to use a limit using data from a FTP service, treatgeoself() would need to be edited in observatory_gridded_hydrometeorology utility).
###Code
"""
Sauk
"""
# Watershed extent
hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')
sauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']
###Output
_____no_output_____
###Markdown
Summarize the file availability from each watershed mapping file
###Code
# map the mappingfiles from usecase1
mappingfile1 = os.path.join(homedir,'Sauk_mappingfile.csv')
mappingfile2 = os.path.join(homedir,'Elwha_mappingfile.csv')
mappingfile3 = os.path.join(homedir,'RioSalado_mappingfile.csv')
t1 = ogh.mappingfileSummary(listofmappingfiles = [mappingfile1, mappingfile2, mappingfile3],
listofwatershednames = ['Sauk-Suiattle river','Elwha river','Upper Rio Salado'],
meta_file=meta_file)
t1
###Output
_____no_output_____
###Markdown
3. Compare Hydrometeorology This section performs computations and generates plots of the Livneh 2013, Livneh 2016, and WRF 2014 temperature and precipitation data in order to compare them with each other and observations. The generated plots are automatically downloaded and saved as .png files in the "plots" folder of the user's home directory and inline in the notebook.
###Code
# Livneh et al., 2013
dr1 = meta_file['dailymet_livneh2013']
# Salathe et al., 2014
dr2 = meta_file['dailywrf_salathe2014']
# define overlapping time window
dr = ogh.overlappingDates(date_set1=tuple([dr1['start_date'], dr1['end_date']]),
date_set2=tuple([dr2['start_date'], dr2['end_date']]))
dr
###Output
_____no_output_____
###Markdown
INPUT: gridded meteorology from Jupyter Hub foldersData frames for each set of data are stored in a dictionary. The inputs to gridclim_dict() include the folder location and name of the hydrometeorology data, the file start and end, the analysis start and end, and the elevation band to be included in the analsyis (max and min elevation). Create a dictionary of climate variables for the long-term mean (ltm) using the default elevation option of calculating a high, mid, and low elevation average. The dictionary here is initialized with the Livneh et al., 2013 dataset with a dictionary output 'ltm_3bands', which is used as an input to the second time we run gridclim_dict(), to add the Salathe et al., 2014 data to the same dictionary.
###Code
%%time
ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1,
metadata=meta_file,
dataset='dailymet_livneh2013',
file_start_date=dr1['start_date'],
file_end_date=dr1['end_date'],
file_time_step=dr1['temporal_resolution'],
subset_start_date=dr[0],
subset_end_date=dr[1])
%%time
ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1,
metadata=meta_file,
dataset='dailyvic_livneh2013',
file_start_date=dr1['start_date'],
file_end_date=dr1['end_date'],
file_time_step=dr1['temporal_resolution'],
subset_start_date=dr[0],
subset_end_date=dr[1],
df_dict=ltm_3bands)
sorted(ltm_3bands.keys())
meta = meta_file['dailyvic_livneh2013']['variable_info']
meta_df = pd.DataFrame.from_dict(meta).T
meta_df.loc[['BASEFLOW','RUNOFF'],:]
ltm_3bands['STREAMFLOW_dailyvic_livneh2013']=ltm_3bands['BASEFLOW_dailyvic_livneh2013']+ltm_3bands['RUNOFF_dailyvic_livneh2013']
ltm_3bands['STREAMFLOW_dailyvic_livneh2013']
"""
Sauk-Suiattle
"""
# Watershed extent
hs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')
sauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']
"""
Elwha
"""
# Watershed extent
hs.getResourceFromHydroShare('4aff8b10bc424250b3d7bac2188391e8', )
elwha = hs.content["elwha_ws_bnd_wgs84.shp"]
"""
Upper Rio Salado
"""
# Watershed extent
hs.getResourceFromHydroShare('5c041d95ceb64dce8eb85d2a7db88ed7')
riosalado = hs.content['UpperRioSalado_delineatedBoundary.shp']
# generate surface area for each gridded cell
def computegcSurfaceArea(shapefile, spatial_resolution, vardf):
"""
Data-driven computation of gridded cell surface area using the list of gridded cells centroids
shapefile: (dir) the path to the study site shapefile for selecting the UTM boundary
spatial_resolution: (float) the spatial resolution in degree coordinate reference system e.g., 1/16
vardf: (dataframe) input dataframe that contains FID, LAT and LONG references for each gridded cell centroid
return: (mean surface area in meters-squared, standard deviation in surface area)
"""
# ensure projection into WGS84 longlat values
ogh.reprojShapefile(shapefile)
# generate the figure axis
fig = plt.figure(figsize=(2,2), dpi=500)
ax1 = plt.subplot2grid((1,1),(0,0))
# calculate bounding box based on the watershed shapefile
watershed = gpd.read_file(shapefile)
watershed['watershed']='watershed'
watershed = watershed.dissolve(by='watershed')
# extract area centroid, bounding box info, and dimension shape
lon0, lat0 = np.array(watershed.centroid.iloc[0])
minx, miny, maxx, maxy = watershed.bounds.iloc[0]
# generate traverse mercatur projection
m = Basemap(projection='tmerc', resolution='h', ax=ax1, lat_0=lat0, lon_0=lon0,
llcrnrlon=minx, llcrnrlat=miny, urcrnrlon=maxx, urcrnrlat=maxy)
# generate gridded cell bounding boxes
midpt_dist=spatial_resolution/2
cat=vardf.T.reset_index(level=[1,2]).rename(columns={'level_1':'LAT','level_2':'LONG_'})
geometry = cat.apply(lambda x:
shapely.ops.transform(m, box(x['LONG_']-midpt_dist, x['LAT']-midpt_dist,
x['LONG_']+midpt_dist, x['LAT']+midpt_dist)), axis=1)
# compute gridded cell area
gc_area = geometry.apply(lambda x: x.area)
plt.gcf().clear()
    return(gc_area.mean(), gc_area.std())
gcSA = computegcSurfaceArea(shapefile=sauk, spatial_resolution=1/16, vardf=ltm_3bands['STREAMFLOW_dailyvic_livneh2013'])
gcSA
# convert mm/s to m/s
df_dict = ltm_3bands
objname = 'STREAMFLOW_dailyvic_livneh2013'
dataset = objname.split('_',1)[1]
gridcell_area = gcSA[0]  # mean gridded-cell surface area (m^2) returned by computegcSurfaceArea above
exceedance = 10
# convert mmps to mps
mmps = df_dict[objname]
mps = mmps*0.001
# multiply streamflow (mps) with grid cell surface area (m2) to produce volumetric streamflow (cms)
cms = mps.multiply(gridcell_area)
# convert m^3/s to cfs; multiply with (3.28084)^3
cfs = cms.multiply((3.28084)**3)
# output to df_dict
df_dict['cfs_'+objname] = cfs
# time-group by month-yearly streamflow volumetric values
monthly_cfs = cfs.groupby(pd.TimeGrouper('M')).sum()
monthly_cfs.index = pd.Series(monthly_cfs.index).apply(lambda x: x.strftime('%Y-%m'))
# output to df_dict
df_dict['monthly_cfs_'+objname] = monthly_cfs
# prepare for Exceedance computations
row_indices = pd.Series(monthly_cfs.index).map(lambda x: pd.datetime.strptime(x, '%Y-%m').month)
months = range(1,13)
Exceed = pd.DataFrame()
# for each month
for eachmonth in months:
month_index = row_indices[row_indices==eachmonth].index
month_res = monthly_cfs.iloc[month_index,:].reset_index(drop=True)
# generate gridded-cell-specific 10% exceedance probability values
exceed = pd.DataFrame(month_res.apply(lambda x: np.percentile(x, 0.90), axis=0)).T
# append to dataframe
Exceed = pd.concat([Exceed, exceed])
# set index to month order
Exceed = Exceed.set_index(np.array(months))
# output to df_dict
df_dict['EXCEED{0}_{1}'.format(exceedance,dataset)] = Exceed
# return(df_dict)
def monthlyExceedence_cfs (df_dict,
daily_streamflow_dfname,
gridcell_area,
exceedance=10,
start_date=None,
end_date=None):
"""
streamflow_df: (dataframe) streamflow values in cu. ft per second (row_index: date, col_index: gridcell ID)
daily_baseflow_df: (dataframe) groundwater flow in cu. ft per second (row_index: date, col_index: gridcell ID)
daily_surfacerunoff_df: (dateframe) surface flow in cu. ft per second (row_index: date, col_index: gridcell ID)
start_date: (datetime) start date
end_date: (datetime) end date
"""
#daily_baseflow_df=None, #'BASEFLOW_dailyvic_livneh2013',
#daily_surfacerunoff_df=None, #'RUNOFF_dailyvic_livneh2013',
## aggregate each daily streamflow value to a month-yearly sum
mmps = df_dict[daily_streamflow_dfname]
## subset daily streamflow_df to that index range
if isinstance(start_date, type(None)):
startyear=0
if isinstance(end_date, type(None)):
endyear=len(mmps)-1
mmps = mmps.iloc[start_date:end_date,:]
# convert mmps to mps
mps = mmps*0.001
# multiply streamflow (mps) with grid cell surface area (m2) to produce volumetric streamflow (cms)
    cms = mps.multiply(gridcell_area)
# convert m^3/s to cfs; multiply with (3.28084)^3
cfs = cms.multiply((3.28084)**3)
# output to df_dict
    df_dict['cfs_'+daily_streamflow_dfname] = cfs
# time-group by month-yearly streamflow volumetric values
monthly_cfs = cfs.groupby(pd.TimeGrouper('M')).sum()
monthly_cfs.index = pd.Series(monthly_cfs.index).apply(lambda x: x.strftime('%Y-%m'))
# output to df_dict
    df_dict['monthly_cfs_'+daily_streamflow_dfname] = monthly_cfs
    monthly_streamflow_df = cfs.groupby(pd.TimeGrouper("M")).sum()
    # loop through each station
    for eachcol in monthly_streamflow_df.columns:
station_moyr = dask.delayed(monthly_streamflow_df.loc[:,eachcol])
station_moyr
df_dict = ltm_3bands
objname = 'STREAMFLOW_dailyvic_livneh2013'
dataset = objname.split('_',1)[1]
gridcell_area = gcSA[0]  # mean gridded-cell surface area (m^2) computed earlier
exceedance = 10
ltm_3bands['STREAMFLOW_dailyvic_livneh2013']=ltm_3bands['BASEFLOW_dailyvic_livneh2013']+ltm_3bands['RUNOFF_dailyvic_livneh2013']
# function [Qex] = monthlyExceedence_cfs(file,startyear,endyear)
# % Load data from specified file
# data = load(file);
# Y=data(:,1);
# MO=data(:,2);
# D=data(:,3);
# t = datenum(Y,MO,D);
# %%%
# % startyear=data(1,1);
# % endyear=data(length(data),1);
# Qnode = data(:,4);
# % Time control indices that cover selected period
# d_start = datenum(startyear,10,01,23,0,0); % 1 hour early to catch record for that day
# d_end = datenum(endyear,09,30,24,0,0);
# idx = find(t>=d_start & t<=d_end);
# exds=(1:19)./20; % Exceedence probabilities from 0.05 to 0.95
# mos=[10,11,12,1:9];
# Qex(1:19,1:12)=0 ; % initialize array
# for imo=1:12;
# mo=mos(imo);
# [Y, M, D, H, MI, S] = datevec(t(idx));
# ind=find(M==mo); % find all flow values in that month in the period identified
# Q1=Qnode(idx(ind)); % input is in cfs
# for iex=1:19
# Qex(iex,imo)=quantile(Q1,1-exds(iex));
# end
# end
# end
ltm_3bands['cfs_STREAMFLOW_dailyvic_livneh2013']
ltm_3bands['monthly_cfs_STREAMFLOW_dailyvic_livneh2013']
ltm_3bands['EXCEED10_dailyvic_livneh2013']
# loop through each month to compute the 10% Exceedance Probability
for eachmonth in range(1,13):
monthlabel = pd.datetime.strptime(str(eachmonth), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['EXCEED10_dailyvic_livneh2013'],
vardf_dateindex=eachmonth,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath='sauk{0}exceed10.png'.format(monthlabel.strftime('%b')),
plottitle='Sauk {0} 10% Exceedance Probability'.format(monthlabel.strftime('%B')),
colorbar_label='cubic feet per second',
cmap='seismic_r')
###Output
_____no_output_____
###Markdown
4. Visualize monthly precipitation spatially using Livneh et al., 2013 meteorology data. Apply different plotting options: the time-index option, the Basemap image option, the colormap option, and the projection option.
###Code
%%time
month=3
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='Demographics/USA_Social_Vulnerability_Index',
cmap='gray_r')
month=6
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='ESRI_StreetMap_World_2D',
cmap='gray_r')
month=9
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='ESRI_Imagery_World_2D',
cmap='gray_r')
month=12
monthlabel = pd.datetime.strptime(str(month), '%m')
ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
outfilepath=os.path.join(homedir, 'SaukPrecip{0}.png'.format(monthlabel.strftime('%b'))),
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+ monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
spatial_resolution=1/16, margin=0.5, epsg=3857,
basemap_image='Elevation/World_Hillshade',
cmap='seismic_r')
###Output
_____no_output_____
###Markdown
Visualize monthly precipitation difference between different gridded data products
###Code
for month in [3, 6, 9, 12]:
monthlabel = pd.datetime.strptime(str(month), '%m')
outfile='SaukLivnehPrecip{0}.png'.format(monthlabel.strftime('%b'))
ax1 = ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailymet_livneh2013'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
basemap_image='ESRI_Imagery_World_2D',
cmap='seismic_r',
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
outfilepath=os.path.join(homedir, outfile))
###Output
_____no_output_____
###Markdown
comparison to WRF data from Salathe et al., 2014
###Code
%%time
ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile1,
metadata=meta_file,
dataset='dailywrf_salathe2014',
colvar=None,
file_start_date=dr2['start_date'],
file_end_date=dr2['end_date'],
file_time_step=dr2['temporal_resolution'],
subset_start_date=dr[0],
subset_end_date=dr[1],
df_dict=ltm_3bands)
for month in [3, 6, 9, 12]:
monthlabel = pd.datetime.strptime(str(month), '%m')
outfile='SaukSalathePrecip{0}.png'.format(monthlabel.strftime('%b'))
ax1 = ogh.renderValuesInPoints(vardf=ltm_3bands['month_PRECIP_dailywrf_salathe2014'],
vardf_dateindex=month,
shapefile=sauk.replace('.shp','_2.shp'),
basemap_image='ESRI_Imagery_World_2D',
cmap='seismic_r',
plottitle='Sauk-Suiattle watershed'+'\nPrecipitation in '+monthlabel.strftime('%B'),
colorbar_label='Average monthly precipitation (meters)',
outfilepath=os.path.join(homedir, outfile))
def plot_meanTmin(dictionary, loc_name, start_date, end_date):
# Plot 1: Monthly temperature analysis of Livneh data
    if ('meanmonth_temp_min_liv2013_met_daily' not in dictionary.keys()) and ('meanmonth_temp_min_wrf2014_met_daily' not in dictionary.keys()):
pass
# generate month indices
wy_index=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
wy_numbers=[10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9]
month_strings=[ 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sept']
# initiate the plot object
fig, ax=plt.subplots(1,1,figsize=(10, 6))
if 'meanmonth_temp_min_liv2013_met_daily' in dictionary.keys():
# Liv2013
plt.plot(wy_index, dictionary['meanmonth_temp_min_liv2013_met_daily'][wy_numbers],'r-', linewidth=1, label='Liv Temp min')
if 'meanmonth_temp_min_wrf2014_met_daily' in dictionary.keys():
# WRF2014
plt.plot(wy_index, dictionary['meanmonth_temp_min_wrf2014_met_daily'][wy_numbers],'b-',linewidth=1, label='WRF Temp min')
if 'meanmonth_temp_min_livneh2013_wrf2014bc_met_daily' in dictionary.keys():
# WRF2014
plt.plot(wy_index, dictionary['meanmonth_temp_min_livneh2013_wrf2014bc_met_daily'][wy_numbers],'g-',linewidth=1, label='WRFbc Temp min')
# add reference line at y=0
plt.plot([1, 12],[0, 0], 'k-',linewidth=1)
plt.ylabel('Temp (C)',fontsize=14)
plt.xlabel('Month',fontsize=14)
plt.xlim(1,12);
plt.xticks(wy_index, month_strings);
plt.tick_params(labelsize=12)
plt.legend(loc='best')
plt.grid(which='both')
plt.title(str(loc_name)+'\nMinimum Temperature\n Years: '+str(start_date.year)+'-'+str(end_date.year)+'; Elevation: '+str(dictionary['analysis_elev_min'])+'-'+str(dictionary['analysis_elev_max'])+'m', fontsize=16)
plt.savefig('monthly_Tmin'+str(loc_name)+'.png')
plt.show()
###Output
_____no_output_____
###Markdown
6. Compare gridded model to point observations Read in SNOTEL data - assess available data If you want to plot observed snotel point precipitation or temperature with the gridded climate data, set to 'Y' Give name of Snotel file and name to be used in figure legends. File format: Daily SNOTEL Data Report - Historic - By individual SNOTEL site, standard sensors (https://www.wcc.nrcs.usda.gov/snow/snotel-data.html)
###Code
# Sauk
SNOTEL_file = os.path.join(homedir,'ThunderBasinSNOTEL.txt')
SNOTEL_station_name='Thunder Creek'
SNOTEL_file_use_colsnames = ['Date','Air Temperature Maximum (degF)', 'Air Temperature Minimum (degF)','Air Temperature Average (degF)','Precipitation Increment (in)']
SNOTEL_station_elev=int(4320/3.281) # meters
SNOTEL_obs_daily = ogh.read_daily_snotel(file_name=SNOTEL_file,
usecols=SNOTEL_file_use_colsnames,
delimiter=',',
header=58)
# generate the start and stop date
SNOTEL_obs_start_date=SNOTEL_obs_daily.index[0]
SNOTEL_obs_end_date=SNOTEL_obs_daily.index[-1]
# peek
SNOTEL_obs_daily.head(5)
###Output
_____no_output_____
###Markdown
Read in COOP station data - assess available datahttps://www.ncdc.noaa.gov/
###Code
COOP_file=os.path.join(homedir, 'USC00455678.csv') # Sauk
COOP_station_name='Mt Vernon'
COOP_file_use_colsnames = ['DATE','PRCP','TMAX', 'TMIN','TOBS']
COOP_station_elev=int(4.3) # meters
COOP_obs_daily = ogh.read_daily_coop(file_name=COOP_file,
usecols=COOP_file_use_colsnames,
delimiter=',',
header=0)
# generate the start and stop date
COOP_obs_start_date=COOP_obs_daily.index[0]
COOP_obs_end_date=COOP_obs_daily.index[-1]
# peek
COOP_obs_daily.head(5)
#initiate new dictionary with original data
ltm_0to3000 = ogh.gridclim_dict(metadata=meta_file,
mappingfile=mappingfile1,
dataset='dailymet_livneh2013',
file_start_date=dr1['start_date'],
file_end_date=dr1['end_date'],
subset_start_date=dr[0],
subset_end_date=dr[1])
ltm_0to3000 = ogh.gridclim_dict(metadata=meta_file,
mappingfile=mappingfile1,
dataset='dailywrf_salathe2014',
file_start_date=dr2['start_date'],
file_end_date=dr2['end_date'],
subset_start_date=dr[0],
subset_end_date=dr[1],
df_dict=ltm_0to3000)
sorted(ltm_0to3000.keys())
# read in the mappingfile
mappingfile = mappingfile1
mapdf = pd.read_csv(mappingfile)
# select station by first FID
firstStation = ogh.findStationCode(mappingfile=mappingfile, colvar='FID', colvalue=0)
# select station by elevation
maxElevStation = ogh.findStationCode(mappingfile=mappingfile, colvar='ELEV', colvalue=mapdf.loc[:,'ELEV'].max())
medElevStation = ogh.findStationCode(mappingfile=mappingfile, colvar='ELEV', colvalue=mapdf.loc[:,'ELEV'].median())
minElevStation = ogh.findStationCode(mappingfile=mappingfile, colvar='ELEV', colvalue=mapdf.loc[:,'ELEV'].min())
# print(firstStation, mapdf.iloc[0].ELEV)
# print(maxElevStation, mapdf.loc[:,'ELEV'].max())
# print(medElevStation, mapdf.loc[:,'ELEV'].median())
# print(minElevStation, mapdf.loc[:,'ELEV'].min())
# let's compare monthly averages for TMAX using livneh, salathe, and the salathe-corrected livneh
comp = ['month_TMAX_dailymet_livneh2013',
'month_TMAX_dailywrf_salathe2014']
obj = dict()
for eachkey in ltm_0to3000.keys():
if eachkey in comp:
obj[eachkey] = ltm_0to3000[eachkey]
panel_obj = pd.Panel.from_dict(obj)
panel_obj
comp = ['meanmonth_TMAX_dailymet_livneh2013',
'meanmonth_TMAX_dailywrf_salathe2014']
obj = dict()
for eachkey in ltm_0to3000.keys():
if eachkey in comp:
obj[eachkey] = ltm_0to3000[eachkey]
df_obj = pd.DataFrame.from_dict(obj)
df_obj
t_res, var, dataset, pub = each.rsplit('_',3)
print(t_res, var, dataset, pub)
ylab_var = meta_file['_'.join([dataset, pub])]['variable_info'][var]['desc']
ylab_unit = meta_file['_'.join([dataset, pub])]['variable_info'][var]['units']
print('{0} {1} ({2})'.format(t_res, ylab_var, ylab_unit))
%%time
comp = [['meanmonth_TMAX_dailymet_livneh2013','meanmonth_TMAX_dailywrf_salathe2014'],
['meanmonth_PRECIP_dailymet_livneh2013','meanmonth_PRECIP_dailywrf_salathe2014']]
wy_numbers=[10, 11, 12, 1, 2, 3, 4, 5, 6, 7, 8, 9]
month_strings=[ 'Oct', 'Nov', 'Dec', 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep']
fig = plt.figure(figsize=(20,5), dpi=500)
ax1 = plt.subplot2grid((2, 2), (0, 0), colspan=1)
ax2 = plt.subplot2grid((2, 2), (1, 0), colspan=1)
# monthly
for eachsumm in df_obj.columns:
ax1.plot(df_obj[eachsumm])
ax1.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=2, fontsize=10)
plt.show()
df_obj[each].index.apply(lambda x: x+2)
fig, ax = plt.subplots()
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
lws=[3, 10, 3, 3]
styles=['b--','go-','y--','ro-']
for col, style, lw in zip(comp, styles, lws):
panel_obj.xs(key=(minElevStation[0][0], minElevStation[0][1], minElevStation[0][2]), axis=2)[col].plot(style=style, lw=lw, ax=ax, legend=True)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=2)
fig.show()
fig, ax = plt.subplots()
lws=[3, 10, 3, 3]
styles=['b--','go-','y--','ro-']
for col, style, lw in zip(comp, styles, lws):
panel_obj.xs(key=(maxElevStation[0][0], maxElevStation[0][1], maxElevStation[0][2]),
axis=2)[col].plot(style=style, lw=lw, ax=ax, legend=True)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05), fancybox=True, shadow=True, ncol=2)
fig.show()
###Output
_____no_output_____
###Markdown
Set up VIC dictionary (as an example) to compare to available data
###Code
vic_dr1 = meta_file['dailyvic_livneh2013']['date_range']
vic_dr2 = meta_file['dailyvic_livneh2015']['date_range']
vic_dr = ogh.overlappingDates(tuple([vic_dr1['start'], vic_dr1['end']]),
tuple([vic_dr2['start'], vic_dr2['end']]))
vic_ltm_3bands = ogh.gridclim_dict(mappingfile=mappingfile,
metadata=meta_file,
dataset='dailyvic_livneh2013',
file_start_date=vic_dr1['start'],
file_end_date=vic_dr1['end'],
file_time_step=vic_dr1['time_step'],
subset_start_date=vic_dr[0],
subset_end_date=vic_dr[1])
sorted(vic_ltm_3bands.keys())
###Output
_____no_output_____
###Markdown
10. Save the results back into HydroShareUsing the `hs_utils` library, the results of the Geoprocessing steps above can be saved back into HydroShare. First, define all of the required metadata for resource creation, i.e. *title*, *abstract*, *keywords*, *content files*. In addition, we must define the type of resource that will be created, in this case *genericresource*. ***Note:*** Make sure you save the notebook at this point, so that all notebook changes will be saved into the new HydroShare resource.
###Code
#execute this cell to list the content of the directory
!ls -lt
###Output
_____no_output_____
###Markdown
Create list of files to save to HydroShare. Verify location and names.
###Code
ThisNotebook='Observatory_Sauk_TreatGeoSelf.ipynb' #check name for consistency
climate2013_tar = 'livneh2013.tar.gz'
climate2015_tar = 'livneh2015.tar.gz'
wrf_tar = 'salathe2014.tar.gz'
mappingfile = 'Sauk_mappingfile.csv'
# the archive names above must be defined before calling tar, so the {variable} substitutions resolve
!tar -zcf {climate2013_tar} livneh2013
!tar -zcf {climate2015_tar} livneh2015
!tar -zcf {wrf_tar} salathe2014
files=[ThisNotebook, mappingfile, climate2013_tar, climate2015_tar, wrf_tar]
# for each file downloaded onto the server folder, move to a new HydroShare Generic Resource
title = 'Results from testing out the TreatGeoSelf utility'
abstract = 'This the output from the TreatGeoSelf utility integration notebook.'
keywords = ['Sauk', 'climate', 'Landlab','hydromet','watershed']
rtype = 'genericresource'
# create the new resource
resource_id = hs.createHydroShareResource(abstract,
title,
keywords=keywords,
resource_type=rtype,
content_files=files,
public=False)
###Output
_____no_output_____ |
Random_Forest_Regression.ipynb | ###Markdown
Random Forest RegressionA [random forest](https://en.wikipedia.org/wiki/Random_forest) is a meta estimator that fits a number of classifying [decision trees](https://en.wikipedia.org/wiki/Decision_tree_learning) on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size, but the samples are drawn with replacement (this can be changed by the user).Generally, Decision Tree and Random Forest models are used for classification tasks. However, the idea of Random Forest as a regularizing meta-estimator over a single decision tree is best demonstrated by applying them to regression problems. This way it can be shown that, **in the presence of random noise, a single decision tree is prone to overfitting and learns spurious correlations while a properly constructed Random Forest model is more immune to such overfitting.** Create some synthetic data using scikit-learn's built-in regression generator Import libraries
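To make the idea of "fit trees on bootstrap sub-samples, then average" concrete, here is a minimal hand-rolled sketch (an illustration only, not scikit-learn's actual implementation, which additionally randomizes the features considered at each split):
```
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tiny_forest_predict(X_train, y_train, X_test, n_trees=50, seed=0):
    """Average predictions of trees fit on bootstrap samples (drawn with replacement)."""
    rng = np.random.RandomState(seed)
    preds = []
    for _ in range(n_trees):
        idx = rng.randint(0, len(X_train), size=len(X_train))  # bootstrap: same size, with replacement
        tree = DecisionTreeRegressor(max_depth=5).fit(X_train[idx], y_train[idx])
        preds.append(tree.predict(X_test))
    return np.mean(preds, axis=0)  # averaging over trees reduces the variance of any single tree
```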
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import numpy as np
# Import make_regression method to generate artificial data samples
from sklearn.datasets import make_regression
###Output
_____no_output_____
###Markdown
What is the *make_regression* method?It is a convenient method/function from the scikit-learn stable for generating a random regression problem. The input set can either be well conditioned (by default) or have a low rank-fat tail singular profile.The output is generated by applying a (potentially biased) random linear regression model with `n_informative` nonzero regressors to the previously generated input, plus Gaussian centered noise with an adjustable scale.
###Code
n_samples = 100 # Number of samples
n_features = 6 # Number of features
n_informative = 3 # Number of informative features i.e. actual features which influence the output
X, y,coef = make_regression(n_samples=n_samples, n_features=n_features, n_informative=n_informative,
random_state=None, shuffle=False,noise=20,coef=True)
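# --- Added check (illustrative): the `coef` array returned above makes the point about
# `n_informative` concrete: only that many of the n_features coefficients should be non-zero.
print(np.round(coef, 2))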
###Output
_____no_output_____
###Markdown
Make a data frame and create basic visualizations Data Frame
###Code
df1 = pd.DataFrame(data=X,columns=['X'+str(i) for i in range(1,n_features+1)])
df2=pd.DataFrame(data=y,columns=['y'])
df=pd.concat([df1,df2],axis=1)
df.head(10)
###Output
_____no_output_____
###Markdown
Scatter plots
###Code
with plt.style.context(('seaborn-dark')):
for i,col in enumerate(df.columns[:-1]):
plt.figure(figsize=(6,4))
plt.grid(True)
plt.xlabel('Feature:'+col,fontsize=12)
plt.ylabel('Output: y',fontsize=12)
plt.scatter(df[col],df['y'],c='red',s=50,alpha=0.6)
###Output
_____no_output_____
###Markdown
It is clear from the scatter plots that some of the features influence the output while the others don't. This is the result of choosing a particular *n_informative* in the *make_regression* method Histograms of the feature space
###Code
with plt.style.context(('fivethirtyeight')):
for i,col in enumerate(df.columns[:-1]):
plt.figure(figsize=(6,4))
plt.grid(True)
plt.xlabel('Feature:'+col,fontsize=12)
plt.ylabel('Output: y',fontsize=12)
plt.hist(df[col],alpha=0.6,facecolor='g')
###Output
_____no_output_____
###Markdown
How will a Decision Tree regressor do?Every run will generate a different result, but on most occasions **the single decision tree regressor is likely to learn spurious features**, i.e. it will assign small but non-zero importance to features which are not true regressors.
###Code
from sklearn import tree
tree_model = tree.DecisionTreeRegressor(max_depth=5,random_state=None)
tree_model.fit(X,y)
print("Relative importance of the features: ",tree_model.feature_importances_)
with plt.style.context('dark_background'):
plt.figure(figsize=(10,7))
plt.grid(True)
plt.yticks(range(n_features+1,1,-1),df.columns[:-1],fontsize=20)
plt.xlabel("Relative (normalized) importance of parameters",fontsize=15)
plt.ylabel("Features\n",fontsize=20)
plt.barh(range(n_features+1,1,-1),width=tree_model.feature_importances_,height=0.5)
###Output
Relative importance of the features: [ 0.06896493 0.35741588 0.54578154 0.0230699 0. 0.00476775]
###Markdown
Print the $R^2$ score of the Decision Tree regression modelEven though the $R^2$ score is pretty high, the model is slightly flawed because it has assigned importance to regressors which have no true significance.
###Code
print("Regression coefficient:",tree_model.score(X,y))
###Output
Regression coefficient: 0.95695111153
###Markdown
Random Forest Regressor
###Code
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(max_depth=5, random_state=None,max_features='auto',max_leaf_nodes=5,n_estimators=100)
model.fit(X, y)
###Output
_____no_output_____
###Markdown
Print the relative importance of the features
###Code
print("Relative importance of the features: ",model.feature_importances_)
with plt.style.context('dark_background'):
plt.figure(figsize=(10,7))
plt.grid(True)
plt.yticks(range(n_features+1,1,-1),df.columns[:-1],fontsize=20)
plt.xlabel("Relative (normalized) importance of parameters",fontsize=15)
plt.ylabel("Features\n",fontsize=20)
plt.barh(range(n_features+1,1,-1),width=model.feature_importances_,height=0.5)
###Output
Relative importance of the features: [ 0.03456204 0.46959355 0.49500136 0. 0. 0.00084305]
###Markdown
Print the $R^2$ score of the Random Forest regression model
###Code
print("Regression coefficient:",model.score(X,y))
###Output
Regression coefficient: 0.811794737752
###Markdown
Benchmark to statsmodel (ordinary least-square solution by exact analytic method)[Statsmodel is a Python module](http://www.statsmodels.org/dev/index.html) that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration.
###Code
import statsmodels.api as sm
Xs=sm.add_constant(X)
stat_model = sm.OLS(y,Xs)
stat_result = stat_model.fit()
print(stat_result.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.971
Model: OLS Adj. R-squared: 0.969
Method: Least Squares F-statistic: 521.4
Date: Thu, 04 Jan 2018 Prob (F-statistic): 2.79e-69
Time: 23:56:18 Log-Likelihood: -434.92
No. Observations: 100 AIC: 883.8
Df Residuals: 93 BIC: 902.1
Df Model: 6
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 0.9153 2.056 0.445 0.657 -3.167 4.998
x1 38.2702 1.900 20.142 0.000 34.497 42.043
x2 69.6705 1.888 36.908 0.000 65.922 73.419
x3 75.1667 2.057 36.546 0.000 71.082 79.251
x4 -0.3881 2.073 -0.187 0.852 -4.505 3.729
x5 -1.3036 2.038 -0.640 0.524 -5.352 2.744
x6 -0.2271 2.112 -0.108 0.915 -4.420 3.966
==============================================================================
Omnibus: 0.366 Durbin-Watson: 1.855
Prob(Omnibus): 0.833 Jarque-Bera (JB): 0.531
Skew: 0.044 Prob(JB): 0.767
Kurtosis: 2.654 Cond. No. 1.47
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Make arrays of regression coefficients estimated by the models
###Code
rf_coef=np.array(coef)
stat_coef=np.array(stat_result.params[1:])
###Output
_____no_output_____
###Markdown
Show the true regression coefficients (as returned by the generator function) and the OLS model's estimated coefficients side by side
###Code
df_coef = pd.DataFrame(data=[rf_coef,stat_coef],columns=df.columns[:-1],index=['True Regressors', 'OLS method estimation'])
df_coef
###Output
_____no_output_____
###Markdown
Show the relative importance of regressors side by sideFor Random Forest Model, show the relative importance of features as determined by the meta-estimator. For the OLS model, show normalized t-statistic values.**It will be clear that although the RandomForest regressor identifies the important regressors correctly, it does not assign the same level of relative importance to them as done by OLS method t-statistic**
###Code
df_importance = pd.DataFrame(data=[model.feature_importances_,stat_result.tvalues[1:]/sum(stat_result.tvalues[1:])],
columns=df.columns[:-1],
index=['RF Regressor relative importance', 'OLS method normalized t-statistic'])
df_importance
###Output
_____no_output_____
###Markdown
Bangalore House Price Prediction With Random Forest Regression
###Code
import pandas as pd
path = r"https://drive.google.com/uc?export=download&id=1xxDtrZKfuWQfl-6KA9XEd_eatitNPnkB"
df = pd.read_csv(path)
df.head()
df.tail()
X=df.drop("price",axis=1)
y=df["price"]
print("shape of X:",X.shape)
print("shape of y:",y.shape)
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=51)
print("shape of X_train:",X_train.shape)
print("shape of X_test:",X_test.shape)
print("shape of y_train:",y_train.shape)
print("shape of y_test:",y_test.shape)
###Output
shape of X_train: (5696, 107)
shape of X_test: (1424, 107)
shape of y_train: (5696,)
shape of y_test: (1424,)
###Markdown
Random Forest Regression Model
###Code
from sklearn.ensemble import RandomForestRegressor
rfre=RandomForestRegressor()
rfre.fit(X_train,y_train)
rfre.score(X_test,y_test)
rfre_100=RandomForestRegressor(n_estimators=500)
rfre_100.fit(X_train,y_train)
rfre_100.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Predict the Value of House
###Code
X_test.iloc[-1,:]
rfre.predict([X_test.iloc[-1,:]])
y_test.iloc[-1]
rfre.predict(X_test)
y_test
###Output
_____no_output_____ |
nbs/8.0_interpretability.i.ipynb | ###Markdown
Interpretability Notebook>> Leyli Jan 2021> In this notebook you will find the Prototypes and Criticism functionality for interpreting higher dimensional spaces.
###Code
# ! pip install -e . <----- Install in the console
#! pip install XXX
import numpy as np
import pandas as pd
#Export
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
m = np.zeros(5)
print(m)
m
###Output
_____no_output_____ |
k_armed_testbed.ipynb | ###Markdown
K armed bandit testbed Figures 2.2 and 2.3 and Exercise 2.5, from 'Reinforcement Learning: An Introduction' by Sutton & Barto.
###Code
import numpy as np
from dataclasses import dataclass
from multiprocessing import Pool, cpu_count
from tqdm.notebook import tqdm
###Output
_____no_output_____
###Markdown
Simulation
###Code
@dataclass
class Simulation():
qa: np.ndarray
A: np.ndarray
R: np.ndarray
name: str = ''
greedy_policy = lambda qa, eps, K: qa.argmax() if np.random.rand() > eps else np.random.randint(K)
get_reward = lambda q, a: np.random.normal(loc = q[a])
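# Note on the two helpers above (comment added for clarity): greedy_policy picks argmax(Qa)
# with probability 1 - eps and a uniformly random arm otherwise (epsilon-greedy), while
# get_reward draws from a unit-variance normal centred on the arm's true value q[a],
# matching the 10-armed testbed described by Sutton & Barto.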
def play_one_game(args):
eps, K, tmax, qa, Qa, alpha, qa_type = args
# initialize registers to zeros
A = np.zeros(tmax, dtype = np.uint);
R = np.zeros_like(A, dtype = np.float)
# initialize current estimates of action values and action counters
use_sampleavg = alpha == 'sampleaverage'
if use_sampleavg: Na = np.zeros(K, dtype = np.uint)
for t in range(tmax):
# update true action values - if not constant
if qa_type == 'randomwalk':
qa += np.random.normal(scale = 0.01, size = K)
# choose action and compute reward
a = greedy_policy(Qa, eps, K)
r = get_reward(qa, a)
# update action values estimates
if use_sampleavg:
Na[a] += 1
Qa[a] += (r - Qa[a]) / Na[a]
else:
Qa[a] += (r - Qa[a]) * alpha
# save to register
A[t], R[t] = a, r
return qa, A, R
def simulate_kbandits(K, N, tmax, **kwargs):
"""K-armed bandits game simulation.
Args:
K (int): number of bandits arms.
        N (int): number of episodes/repetitions.
        tmax (int): number of epochs per episode.
        eps (float): probability of choosing a random (exploratory) action. Defaults to 0.
        name (str): name of the simulation. Defaults to an empty string.
        stepsize (str or float): stepsize for the update of action values. Defaults to "sampleaverage".
actval_type (str): type of true unknown action values, either "constant" or "randomwalk". Defaults to "constant".
actval_init (np.array): initial conditions for the estimate of action values. Defaults to all zeros.
Returns:
Simulation: `Simulation` class object containing the true action values `qa`,
the actions taken `A`, the rewards received `R`,
and the estimated action values `Qa` by sample-average.
"""
# arguments validation
assert K > 0, 'number of bandits arms must be positive'
assert N > 0, 'number of repetitions must be positive'
assert tmax > 0, 'number of episodes length must be positive'
alpha = kwargs.get('stepsize', 'sampleaverage')
eps = kwargs.get('eps', 0.0)
qa_type = kwargs.get('actval_type', 'constant')
Qa_init = kwargs.get('actval_init', np.zeros(K, dtype = np.float))
assert alpha == 'sampleaverage' or isinstance(alpha, float), 'stepsize must either be a float or "sampleaverage"'
assert 0.0 <= eps <= 1.0, 'eps must be between 0 and 1'
assert qa_type in ['constant', 'randomwalk'], 'invalid action value type'
assert isinstance(Qa_init, np.ndarray) and Qa_init.shape == (K,), 'Initial conditions for action values is either not an array or has length != K'
# initialize registers to zeros
A = np.zeros((N, tmax), dtype = np.uint);
R = np.zeros_like(A, dtype = np.float)
# initialize true and estimated action values per action per each game
if qa_type == 'constant':
qa = np.random.normal(size = (N, K))
else:
qa = np.zeros((N, K), dtype = np.float)
# play N games
args = [(eps, K, tmax, qa[n, :], Qa_init.copy(), alpha, qa_type) for n in range(N)]
pool = Pool(cpu_count())
for n, o in enumerate(tqdm(pool.imap(play_one_game, args), total = N, desc = 'playing N times')):
qa[n, :], A[n, :], R[n, :] = o
pass
pool.close()
pool.join()
return Simulation(qa, A, R, kwargs.get('name', ''))
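# --- Added usage sketch (not part of the original experiments): a tiny run that shows the
# call signature of simulate_kbandits; flip the flag to try it before the full runs below.
if False:
    demo = simulate_kbandits(K=10, N=4, tmax=50, eps=0.1, name='smoke test')
    print('demo mean reward:', demo.R.mean())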
###Output
_____no_output_____
###Markdown
Plotting
###Code
import plotly.graph_objects as go
import matplotlib.pyplot as plt
PLOT_INTERACTIVE = False
def plot_true_action_values(sim, n):
x = np.arange(1, sim.qa.shape[1] + 1)
y = sim.qa[n, :]
if PLOT_INTERACTIVE:
fig = go.Figure()
fig.add_trace(go.Scatter(
x = x, y = y, mode = 'markers',
error_y = {
'type': 'constant',
'value': 1,
'color': 'purple'
},
marker = dict(color = 'purple', size = 15)))
fig.update_xaxes(title_text = 'Action')
fig.update_yaxes(title_text = f'Reward distribution (+/- std) for game {n}')
fig.show()
else:
_, ax = plt.subplots(figsize = (17, 7))
ax.set_title(f'Reward distribution (+/- std) for game {n}')
ax.errorbar(x, y, 1, fmt = 'o', ms = 20)
ax.set_xticks(list(range(1, len(y) + 1)))
ax.set_xlabel('Action')
ax.set_ylabel('Reward')
def plot_mean_rewards(*sims):
if PLOT_INTERACTIVE:
fig = go.Figure()
for sim in sims:
fig.add_trace(go.Scatter(y = sim.R.mean(axis = 0), name = sim.name))
fig.update_xaxes(title_text = 'Steps')
fig.update_yaxes(title_text = 'Mean reward')
fig.show()
else:
_, ax = plt.subplots(figsize = (17, 7))
for sim in sims:
ax.plot(sim.R.mean(axis = 0), label = sim.name)
ax.set_title('Mean reward per step')
ax.set_xlabel('Steps')
ax.set_ylabel('Mean reward')
ax.legend()
def plot_optimal_actions(*sims):
if PLOT_INTERACTIVE:
fig = go.Figure()
for sim in sims:
qa_opt = sim.qa.argmax(axis = 1)
act_is_opt = (sim.A.transpose() == qa_opt).transpose() # for some reasoning, can only broadcast-compare on second dimension
fig.add_trace(go.Scatter(y = act_is_opt.mean(axis = 0), name = sim.name))
fig.update_xaxes(title_text = 'Steps')
fig.update_yaxes(title_text = '% Optimal action', range = (0, 1), tickformat = ',.0%')
fig.show()
else:
_, ax = plt.subplots(figsize = (17, 7))
for sim in sims:
qa_opt = sim.qa.argmax(axis = 1)
act_is_opt = (sim.A.transpose() == qa_opt).transpose() # for some reasoning, can only broadcast-compare on second dimension
ax.plot(act_is_opt.mean(axis = 0), label = sim.name)
ax.set_title('Optimal decisions per step')
ax.set_xlabel('Steps')
ax.set_ylabel('% Optimal action')
ax.set_ylim(0, 1)
vals = ax.get_yticks()
ax.set_yticklabels(['{:,.0%}'.format(x) for x in vals])
ax.legend()
###Output
_____no_output_____
###Markdown
Main
###Code
# Figure 2.2
K = 10
N = 2000
tmax = 1000
greedy = simulate_kbandits(K, N, tmax, eps = 0, name = 'e = 0')
eps_greedy_01 = simulate_kbandits(K, N, tmax, eps = 0.1, name = 'e = 0.1')
eps_greedy_001 = simulate_kbandits(K, N, tmax, eps = 0.01, name = 'e = 0.01')
plot_true_action_values(greedy, 27)
plot_mean_rewards(eps_greedy_01, eps_greedy_001, greedy)
plot_optimal_actions(eps_greedy_01, eps_greedy_001, greedy)
# Exercise 2.5
K = 10
N = 1000
tmax = 10000
sample_avg = simulate_kbandits(K, N, tmax, eps = 0.1, actval_type = 'randomwalk', name = 'sample avg')
const_size = simulate_kbandits(K, N, tmax, eps = 0.1, actval_type = 'randomwalk', stepsize = 0.1, name = 'a = 0.1')
plot_mean_rewards(sample_avg, const_size)
plot_optimal_actions(sample_avg, const_size)
# Figure 2.3
K = 10
N = 2000
tmax = 1000
greed_no_zero_init = simulate_kbandits(K, N, tmax, eps = 0, stepsize = 0.1, actval_init = np.ones(K) * 5.0, name = 'Q1 = 5, e = 0, a = 0.1')
greed_zero_init = simulate_kbandits(K, N, tmax, eps = 0.1, stepsize = 0.1, actval_init = np.zeros(K), name = 'Q1 = 0, e = 0.1, a = 0.1')
plot_mean_rewards(greed_no_zero_init, greed_zero_init)
plot_optimal_actions(greed_no_zero_init, greed_zero_init)
###Output
_____no_output_____ |
notebooks/5_Machine_learning_classification.ipynb | ###Markdown
Very quick machine learningThis notebook goes with We're going to go over a very simple machine learning exercise. We're using the data from the [2016 SEG machine learning contest](https://github.com/seg/2016-ml-contest). This exercise previously appeared as [an Agile blog post](http://ageo.co/xlines04).
###Code
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
###Output
_____no_output_____
###Markdown
Read the data [Pandas](http://pandas.pydata.org/) is really convenient for this sort of data.
###Code
import pandas as pd
uid = "1WZsd3AqH9dEOOabZNjlu1M-a8RlzTyu9BYc2cw8g6J8"
uri = f"https://docs.google.com/spreadsheets/d/{uid}/export?format=csv"
df = pd.read_csv(uri)
df.head()
###Output
_____no_output_____
###Markdown
A word about the data. This dataset is not, strictly speaking, open data. It has been shared by the Kansas Geological Survey for the purposes of the contest. That's why I'm not copying the data into this repository, but instead reading it from the web. We are working on making an open access version of this dataset. In the meantime, I'd appreciate it if you didn't replicate the data anywhere. Thanks! Inspect the dataFirst, we need to see what we have.
###Code
df.describe()
facies_dict = {1:'sandstone', 2:'c_siltstone', 3:'f_siltstone', 4:'marine_silt_shale',
5:'mudstone', 6:'wackestone', 7:'dolomite', 8:'packstone', 9:'bafflestone'}
df["Facies"] = df["Facies Code"].replace(facies_dict)
df.groupby('Facies').count()
features = ['GR', 'ILD', 'DeltaPHI', 'PHIND', 'PE']
sns.pairplot(df, vars=features, hue='Facies')
fig, axs = plt.subplots(ncols=5, figsize=(15, 3))
for ax, feature in zip(axs, features):
sns.distplot(df[feature], ax=ax)
###Output
_____no_output_____
###Markdown
Label and feature engineering
###Code
fig, axs = plt.subplots(nrows=5, figsize=(15, 10))
for ax, feature in zip(axs, features):
for facies in df.Facies.unique():
sns.kdeplot(df.loc[df.Facies==facies][feature], ax=ax, label=facies)
ax.legend('')
sns.distplot(df.ILD)
sns.distplot(np.log10(df.ILD))
df['log_ILD'] = np.log10(df.ILD)
###Output
_____no_output_____
###Markdown
Get the feature vectors, `X`
###Code
features = ['GR', 'log_ILD', 'DeltaPHI', 'PHIND', 'PE']
###Output
_____no_output_____
###Markdown
Now we'll load the data we want. First the feature vectors, `X`. We'll just get the logs, which are in columns 4 to 8:
###Code
X = df[features].values
X.shape
X
###Output
_____no_output_____
###Markdown
Get the label vector, `y`
###Code
y = df.Facies.values
y
y.shape
plt.figure(figsize=(15,2))
plt.fill_between(np.arange(y.size), y, -1)
###Output
_____no_output_____
###Markdown
We have data! Almost ready to train, we just have to get our test / train subsets sorted. Extracting some training data
###Code
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y)
X_train.shape, y_train.shape, X_val.shape, y_val.shape
###Output
_____no_output_____
###Markdown
**Optional exercise:** Use [the docs for `train_test_split`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) to set the size of the test set, and also to set a random seed for the splitting. Now the fun can really begin. Training
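For example (the split fraction and seed below are just one reasonable choice, not part of the original exercise):
```
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
```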
###Code
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier()
clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict and evaluate
###Code
y_pred = clf.predict(X_val)
###Output
_____no_output_____
###Markdown
How did we do? A quick score:
###Code
from sklearn.metrics import accuracy_score
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
A better score:
###Code
from sklearn.metrics import f1_score
f1_score(y_val, y_pred, average='weighted')
###Output
_____no_output_____
###Markdown
We can also get a quick score, without using `predict`, but it's not always clear what this score represents.
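For scikit-learn classifiers, `score` returns mean accuracy on the given data, so it should agree with the `accuracy_score` computed above:
```
print(clf.score(X_val, y_val) == accuracy_score(y_val, clf.predict(X_val)))  # expect True
```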
###Code
clf.score(X_val, y_val)
###Output
_____no_output_____
###Markdown
Model tuning and model selection **Optional exercise:** Let's change the hyperparameters of the model. E.g. try changing the `n_neighbors` argument.
###Code
clf = KNeighborsClassifier(n_neighbors=15, weights='distance')  # one possible choice; substitute your own hyperparameters
clf.fit(X_train, y_train)
clf.score(X_val, y_val)
###Output
_____no_output_____
###Markdown
**Optional exercise:** Try another classifier.
###Code
from sklearn.svm import SVC
clf = SVC(gamma='auto')
clf.fit(X_train, y_train)
clf.score(X_val, y_val)
###Output
_____no_output_____
###Markdown
**Optional exercise:** Let's look at another classifier. Try changing some hyperparameters, eg `verbose`, `n_estimators`, `n_jobs`, and `random_state`.
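For instance (the values below are illustrative, not a recommendation):
```
from sklearn.ensemble import ExtraTreesClassifier
clf = ExtraTreesClassifier(n_estimators=200, n_jobs=-1, random_state=42, verbose=1)
clf.fit(X_train, y_train)
clf.score(X_val, y_val)
```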
###Code
from sklearn.ensemble import ExtraTreesClassifier
clf = ExtraTreesClassifier(n_estimators=100)
clf.fit(X_train, y_train)
clf.score(X_val, y_val)
###Output
_____no_output_____
###Markdown
All models have the same API (but not the same hyperparameters), so it's very easy to try lots of models. More in-depth evaluationThe confusion matrix, showing exactly what kinds of mistakes (type 1 and type 2 errors) we're making:
###Code
from sklearn.metrics import confusion_matrix
confusion_matrix(y_val, y_pred)
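# --- Added view (illustrative): row-normalising the confusion matrix turns raw counts into
# per-facies recall, which makes the error pattern easier to read than absolute counts.
cm = confusion_matrix(y_val, y_pred)
np.round(cm / cm.sum(axis=1, keepdims=True), 2)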
###Output
_____no_output_____
###Markdown
Finally, the classification report shows precision and recall for each facies (roughly, 1 minus the type 1 and type 2 error rates, respectively), along with the combined F1 score:
###Code
from sklearn.metrics import classification_report
print(classification_report(y_val, y_pred))
###Output
_____no_output_____ |
notebooks/affiliation_scores.ipynb | ###Markdown
Weisfeiler-Lehman RDF
###Code
RANDOM_STATE = 42
depth_values = [1, 2, 3]
iteration_values = [0, 2, 4, 6]
C_values = [0.001, 0.01, 0.1, 1., 10., 100.]
results = OrderedDict()
for d in depth_values:
for it in iteration_values:
wlrdf_graph = wlkernel.WLRDFGraph(triples, instances, max_depth=d)
kernel_matrix = wlkernel.wlrdf_kernel_matrix(wlrdf_graph, instances, iterations=it)
kernel_matrix = wlkernel.kernel_matrix_normalization(kernel_matrix)
results[(d, it)] = [0, 0, 0]
for c in C_values:
classifier = svm.SVC(C=c, kernel='precomputed', class_weight='balanced', random_state=RANDOM_STATE)
scores = cross_validate(classifier, kernel_matrix, y, cv=10, scoring=('accuracy', 'f1_macro'))
acc_mean = scores['test_accuracy'].mean()
f1_mean = scores['test_f1_macro'].mean()
if acc_mean > results[(d, it)][0]:
results[(d, it)] = [acc_mean, f1_mean, c]
fn = 'wlrdf_affiliation_results_with_normalization'
df_res = pd.DataFrame(index=list(results.keys()))
df_res['accuracy'] = [t[0] for t in results.values()]
df_res['f1'] = [t[1] for t in results.values()]
df_res['C'] = [t[2] for t in results.values()]
df_res = df_res.set_index(pd.MultiIndex.from_tuples(df_res.index, names=['depth', 'iterations']))
df_res.to_csv(f'../results/{fn}.csv')
df_res_test = pd.read_csv(f'../results/{fn}.csv', index_col=['depth', 'iterations'])
df_res_test.to_html(f'../results/{fn}.html')
df_res_test
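# --- Added note (assumption, not verified against the wlkernel source): kernel_matrix_normalization
# above most likely applies the standard cosine normalisation K'[i, j] = K[i, j] / sqrt(K[i, i] * K[j, j]).
# A standalone sketch of that operation, for reference only:
import numpy as np
def cosine_normalise(K):
    K = np.asarray(K, dtype=float)
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)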
###Output
_____no_output_____
###Markdown
Weisfeiler-Lehman
###Code
RANDOM_STATE = 42
depth_values = [1, 2, 3]
iteration_values = [0, 2, 4, 6]
C_values = [0.001, 0.01, 0.1, 1., 10., 100.]
results = OrderedDict()
for d in depth_values:
for it in iteration_values:
wl_graphs = [wlkernel.WLGraph(triples, instance, max_depth=d) for instance in instances]
kernel_matrix = wlkernel.wl_kernel_matrix(wl_graphs, iterations=it)
kernel_matrix = wlkernel.kernel_matrix_normalization(kernel_matrix)
results[(d, it)] = [0, 0, 0]
for c in C_values:
classifier = svm.SVC(C=c, kernel='precomputed', class_weight='balanced', random_state=RANDOM_STATE)
scores = cross_validate(classifier, kernel_matrix, y, cv=10, scoring=('accuracy', 'f1_macro'))
acc_mean = scores['test_accuracy'].mean()
f1_mean = scores['test_f1_macro'].mean()
if acc_mean > results[(d, it)][0]:
results[(d, it)] = [acc_mean, f1_mean, c]
fn = 'wl_affiliation_results_with_normalization'
df_res = pd.DataFrame(index=list(results.keys()))
df_res['accuracy'] = [t[0] for t in results.values()]
df_res['f1'] = [t[1] for t in results.values()]
df_res['C'] = [t[2] for t in results.values()]
df_res = df_res.set_index(pd.MultiIndex.from_tuples(df_res.index, names=['depth', 'iterations']))
df_res.to_csv(f'../results/{fn}.csv')
df_res_test = pd.read_csv(f'../results/{fn}.csv', index_col=['depth', 'iterations'])
df_res_test.to_html(f'../results/{fn}.html')
df_res_test
###Output
_____no_output_____ |
cnn/cifar10-augmentation/cifar10_augmentation-gpu.ipynb | ###Markdown
Convolutional Neural Networks---In this notebook, we train a CNN on augmented images from the CIFAR-10 database. 1. Load CIFAR-10 Database
###Code
import keras
from keras.datasets import cifar10
# load the pre-shuffled train and test data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
###Output
Using TensorFlow backend.
###Markdown
2. Visualize the First 24 Training Images
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(20,5))
for i in range(36):
ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(x_train[i]))
###Output
_____no_output_____
###Markdown
3. Rescale the Images by Dividing Every Pixel in Every Image by 255
###Code
# rescale [0,255] --> [0,1]
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
###Output
_____no_output_____
###Markdown
4. Break Dataset into Training, Testing, and Validation Sets
###Code
from keras.utils import np_utils
# break training set into training and validation sets
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
# one-hot encode the labels
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
y_valid = keras.utils.to_categorical(y_valid, num_classes)
# print shape of training set
print('x_train shape:', x_train.shape)
# print number of training, validation, and test images
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print(x_valid.shape[0], 'validation samples')
###Output
x_train shape: (45000, 32, 32, 3)
45000 train samples
10000 test samples
5000 validation samples
###Markdown
5. Create and Configure Augmented Image Generator. Reference documentation: https://keras.io/preprocessing/image/
**Arguments**
- featurewise_center: Boolean. Set input mean to 0 over the dataset, feature-wise.
- samplewise_center: Boolean. Set each sample mean to 0.
- featurewise_std_normalization: Boolean. Divide inputs by std of the dataset, feature-wise.
- samplewise_std_normalization: Boolean. Divide each input by its std.
- zca_epsilon: epsilon for ZCA whitening. Default is 1e-6.
- zca_whitening: Boolean. Apply ZCA whitening.
- rotation_range: Int. Degree range for random rotations.
- width_shift_range: Float (fraction of total width). Range for random horizontal shifts.
- height_shift_range: Float (fraction of total height). Range for random vertical shifts.
- shear_range: Float. Shear intensity (shear angle in counter-clockwise direction as radians).
- zoom_range: Float or [lower, upper]. Range for random zoom. If a float, [lower, upper] = [1-zoom_range, 1+zoom_range].
- channel_shift_range: Float. Range for random channel shifts.
- fill_mode: One of {"constant", "nearest", "reflect" or "wrap"}. Default is 'nearest'. Points outside the boundaries of the input are filled according to the given mode:
  - 'constant': kkkkkkkk|abcd|kkkkkkkk (cval=k)
  - 'nearest': aaaaaaaa|abcd|dddddddd
  - 'reflect': abcddcba|abcd|dcbaabcd
  - 'wrap': abcdabcd|abcd|abcdabcd
- cval: Float or Int. Value used for points outside the boundaries when fill_mode = "constant".
- horizontal_flip: Boolean. Randomly flip inputs horizontally.
- vertical_flip: Boolean. Randomly flip inputs vertically.
- rescale: rescaling factor. Defaults to None. If None or 0, no rescaling is applied, otherwise we multiply the data by the value provided (before applying any other transformation).
- preprocessing_function: function that will be applied to each input. The function will run after the image is resized and augmented. The function should take one argument: one image (Numpy tensor with rank 3), and should output a Numpy tensor with the same shape.
- data_format: One of {"channels_first", "channels_last"}. "channels_last" mode means that the images should have shape (samples, height, width, channels), "channels_first" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "channels_last".
- validation_split: Float. Fraction of images reserved for validation (strictly between 0 and 1).
**example:** datagen = ImageDataGenerator( featurewise_center=True, featurewise_std_normalization=True, rotation_range=20, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True)
###Code
from keras.preprocessing.image import ImageDataGenerator
from datetime import timedelta
from time import time
start = time()
featurewise_center = False # original set up: None, Set input mean to 0 over the dataset, feature-wise.
featurewise_std_normalization=False # original set up: None, Set each sample mean to 0.
rotation_range=10 # original set up: None, degree range for random rotation
width_shift_range=0.1 # original set up: 0.1, randomly shift images horizontally (% of total width)
height_shift_range=0.1 # original set up: 0.1, randomly shift images vertically (% of total height)
horizontal_flip=True # original set up: True, randomly flip images horizontally
zoom_range=0.2 # Range for random zoom
# create and configure augmented image generator
datagen_train = ImageDataGenerator(
featurewise_center=featurewise_center,
featurewise_std_normalization=featurewise_std_normalization,
rotation_range=rotation_range,
width_shift_range=width_shift_range,
height_shift_range=height_shift_range,
horizontal_flip=horizontal_flip,
zoom_range=zoom_range)
# create and configure augmented image generator
datagen_valid = ImageDataGenerator(
featurewise_center=featurewise_center,
featurewise_std_normalization=featurewise_std_normalization,
rotation_range=rotation_range,
width_shift_range=width_shift_range,
height_shift_range=height_shift_range,
horizontal_flip=horizontal_flip,
zoom_range=zoom_range)
# fit augmented image generator on data
datagen_train.fit(x_train)
datagen_valid.fit(x_valid)
print("Image pre-processing finished. Total time elapsed: {}".format(timedelta(seconds=time() - start)))
###Output
Image pre-processing finished. Total time elapsed: 0:00:00.232618
###Markdown
6. Visualize Original and Augmented Images
###Code
import matplotlib.pyplot as plt
# take subset of training data
x_train_subset = x_train[:12]
# visualize subset of training data
fig = plt.figure(figsize=(20,2))
for i in range(0, len(x_train_subset)):
ax = fig.add_subplot(1, 12, i+1)
ax.imshow(x_train_subset[i])
fig.suptitle('Subset of Original Training Images', fontsize=20)
plt.show()
# visualize augmented images
fig = plt.figure(figsize=(20,2))
for x_batch in datagen_train.flow(x_train_subset, batch_size=12):
for i in range(0, 12):
ax = fig.add_subplot(1, 12, i+1)
ax.imshow(x_batch[i])
fig.suptitle('Augmented Images', fontsize=20)
plt.show()
break;
###Output
_____no_output_____
###Markdown
7. Define the Model Architecture
###Code
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu',
input_shape=(32, 32, 3)))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_13 (Conv2D) (None, 32, 32, 16) 208
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 16, 16, 16) 0
_________________________________________________________________
conv2d_14 (Conv2D) (None, 16, 16, 32) 2080
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 8, 8, 32) 0
_________________________________________________________________
conv2d_15 (Conv2D) (None, 8, 8, 64) 8256
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 4, 4, 64) 0
_________________________________________________________________
dropout_9 (Dropout) (None, 4, 4, 64) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 1024) 0
_________________________________________________________________
dense_9 (Dense) (None, 500) 512500
_________________________________________________________________
dropout_10 (Dropout) (None, 500) 0
_________________________________________________________________
dense_10 (Dense) (None, 10) 5010
=================================================================
Total params: 528,054.0
Trainable params: 528,054.0
Non-trainable params: 0.0
_________________________________________________________________
###Markdown
8. Compile the Model
###Code
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
9. Train the Model
###Code
from keras.callbacks import ModelCheckpoint
batch_size = 32
epochs = 10 #100
start = time()
# train the model
# benchmark on cpu: 47s per epoch
# val accuracy evolution on first 10 epochs: 46% to 66%
checkpointer = ModelCheckpoint(filepath='aug_model_gpu.weights.best.hdf5', verbose=1,
save_best_only=True)
hist = model.fit_generator(datagen_train.flow(x_train, y_train, batch_size=batch_size),
steps_per_epoch=x_train.shape[0] // batch_size,
epochs=epochs, verbose=2, callbacks=[checkpointer],
validation_data=datagen_valid.flow(x_valid, y_valid, batch_size=batch_size),
validation_steps=x_valid.shape[0] // batch_size)
print("training finished. Total time elapsed: {}".format(timedelta(seconds=time() - start)))
def plotHistory(hist):
e = [x+1 for x in hist.epoch]
val_loss_history = hist.history['val_loss']
minValLossVal = min(val_loss_history)
minValLossIdx = val_loss_history.index(minValLossVal)+1
# summarize history for accuracy
plt.plot(e, hist.history['acc'])
plt.plot(e, hist.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.grid()
plt.show()
# summarize history for loss
plt.plot(e, hist.history['loss'])
plt.plot(e, hist.history['val_loss'])
plt.plot(minValLossIdx, minValLossVal, 'or')
plt.annotate(
"epoch {}: {:.4f}".format(minValLossIdx, minValLossVal),
xy=(minValLossIdx, minValLossVal), xytext=(0, 20),
textcoords='offset points', ha='right', va='bottom',
bbox=dict(boxstyle='round,pad=0.5', fc='red', alpha=0.5),
arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0'))
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.grid()
plt.show()
plotHistory(hist)
###Output
_____no_output_____
###Markdown
10. Load the Model with the Best Validation Accuracy
###Code
# load the weights that yielded the best validation accuracy
model.load_weights('aug_model_gpu.weights.best.hdf5')
###Output
_____no_output_____
###Markdown
11. Calculate Classification Accuracy on Test Set
###Code
# evaluate and print test accuracy
# benchmark with original image preprocessing: 0.581
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
###Output
Test accuracy: 0.6674
|
notebooks/AI_for_good_Accessing_satellite_data_apply_Kmeans.ipynb | ###Markdown
A simple example that retrieves a time series of a satellite image set and visualizes it. Note: While the image set corresponds to a time series progression, it is not clear what the time stamp of a particular image is
###Code
## Define your image collection
collection = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')
## Define time range and filter the data
collection_time = collection.filterDate('2000-01-01', '2018-01-01') #YYYY-MM-DD
## Select location based on location of tile
path = collection_time.filter(ee.Filter.eq('WRS_PATH', 198))
pathrow = path.filter(ee.Filter.eq('WRS_ROW', 24))
## Select imagery with less then 5% of image covered by clouds
clouds = pathrow.filter(ee.Filter.lt('CLOUD_COVER', 10))
## Select bands
bands = clouds.select(['B4', 'B5', 'B6'])
## Retrieve a list of the images
collectionList = bands.toList(bands.size()) # Converts the image collection to a list accessible via index
collectionSize = collectionList.size().getInfo()
# Get the information about the dates of this collection
collectionDates = [ee.Image(collectionList.get(indx)).getInfo()['properties']['DATE_ACQUIRED'] for indx in range(collectionSize)]
print('='*100)
print('The number of items in this collection: ', collectionSize)
print()
print('Dates in the collection:')
print(collectionDates)
print('='*100)
# Define the region of interest
ROI = ee.Geometry.Rectangle([5.727906, 51.993435,
5.588144, 51.944356])
# Choose a specific image from the collection List
indx = 0
image = ee.Image(collectionList.get(indx))
parameters = {'min': 0,
'max': 0.5,
'bands': ['B4', 'B5', 'B6'],
'region': ROI }
# Plot the satellite image
Image(url = image.getThumbUrl(parameters))
###Output
====================================================================================================
The number of items in this collection: 16
Dates in the collection:
['2013-09-30', '2014-03-09', '2014-07-31', '2014-09-17', '2014-10-03', '2015-01-07', '2015-03-12', '2015-08-03', '2015-11-23', '2015-12-09', '2016-02-27', '2016-03-14', '2016-07-20', '2016-11-25', '2016-12-27', '2017-04-02']
====================================================================================================
###Markdown
Having visualized the image, we now show how to convert it into a Numpy array so that we can manipulate the image
###Code
# Reference
# https://gis.stackexchange.com/questions/350771/earth-engine-simplest-way-to-move-from-ee-image-to-array-for-use-in-sklearn
# Now that we have a specific image from the list, we convert it to a numpy array
indx = 0
# Retrieve the specific image
image = ee.Image(collectionList.get(indx))#
info = image.getInfo()
print(info.keys())#['DATE_ACQUIRED'])
#print(info['properties'])
# Define an area of interest.
aoi = ee.Geometry.Polygon([[[5.588144,51.993435], [5.727906, 51.993435],[5.727906, 51.944356],[5.588144, 51.944356]]], None, False)
# Get 2-d pixel array for AOI - returns feature with 2-D pixel array as property per band.
band_arrs = image.sampleRectangle(region=aoi)
# Get individual band arrays.
band_arr_b4 = band_arrs.get('B4')
band_arr_b5 = band_arrs.get('B5')
band_arr_b6 = band_arrs.get('B6')
# Transfer the arrays from server to client and cast as np array.
np_arr_b4 = np.array(band_arr_b4.getInfo())
np_arr_b5 = np.array(band_arr_b5.getInfo())
np_arr_b6 = np.array(band_arr_b6.getInfo())
# Expand the dimensions of the arrays
np_arr_b4 = np.expand_dims(np_arr_b4, 2)
np_arr_b5 = np.expand_dims(np_arr_b5, 2)
np_arr_b6 = np.expand_dims(np_arr_b6, 2)
# Stack the individual bands to make a 3-D array.
rgb_img = np.concatenate((np_arr_b4, np_arr_b5, np_arr_b6), 2)
print(rgb_img.shape)
# Plot the array
plt.imshow(rgb_img)
plt.show()
indx = 0
# Load a pre-computed Landsat composite for input.
input = ee.Image(collectionList.get(indx))
# Define a region in which to generate a sample of the input.
region = ee.Geometry.Polygon([[[5.588144,51.993435], [5.727906, 51.993435],[5.727906, 51.944356],[5.588144, 51.944356]]], None, False)
#Map.addLayer(ee.Image().paint(region, 0, 2), {}, 'region')
# Make the training dataset.
training = input.sample(**{
'region': region,
'scale': 30,
'numPixels': 5000
})
# Instantiate the clusterer and train it.
clusterer = ee.Clusterer.wekaKMeans(6).train(training)
# Cluster the input using the trained clusterer.
result = input.cluster(clusterer)
print(result.getInfo()['bands'])
# Define an area of interest.
aoi = ee.Geometry.Polygon([[[5.588144,51.993435], [5.727906, 51.993435],[5.727906, 51.944356],[5.588144, 51.944356]]], None, False)
# Get 2-d pixel array for AOI - returns feature with 2-D pixel array as property per band.
band_arrs = result.sampleRectangle(region=aoi)
data = np.array(band_arrs.get('cluster').getInfo())
print(data.shape)
#plt.plot(data)
plt.imshow(data)
plt.plot()
plt.show()
plt.clf()
plt.imshow(rgb_img)
plt.show()
print(np.sum(data.flatten()==1 ))
###Output
(194, 327)
|
examples/matplotlib/ring_gear_plot.ipynb | ###Markdown
Ring Gear profile
###Code
module = 1.0
n_teeth = 19
pressure_angle = 20.0
fig, ax = plt.subplots(figsize=(8, 8))
ax.set_aspect('equal')
gear = RingGear(module, n_teeth, width=5.0, rim_width=1.5,
pressure_angle=pressure_angle)
pts = gear.gear_points()
x, y = pts[:, 0], pts[:, 1]
ax.plot(x, y, label='gear profile', color='C0')
circles = (
(gear.r0, 'pitch circle'),
(gear.ra, 'addendum circle'),
(gear.rb, 'base circle'),
(gear.rd, 'dedendum circle'),
)
t = np.linspace(0.0, np.pi * 2.0, 200)
for r, lbl in circles:
cx = np.cos(t) * r
cy = np.sin(t) * r
ax.plot(cx, cy, label=lbl, linestyle='--', linewidth=0.8)
cx = np.cos(t) * gear.rim_r
cy = np.sin(t) * gear.rim_r
ax.plot(cx, cy, color='C0')
plt.legend(loc='center')
plt.show()
###Output
_____no_output_____
###Markdown
Planetary Gearset
###Code
animate = False # Set to True to animate
module = 1.0
sun_teeth = 32
planet_teeth = 16
ring_teeth = sun_teeth + planet_teeth * 2
n_planets = 4
ring = RingGear(module, ring_teeth, 5.0, 5.0)
sun = SpurGear(module, sun_teeth, 5.0)
planet = SpurGear(module, planet_teeth, 5.0)
orbit_r = sun.r0 + planet.r0
planet_a = np.pi * 2.0 / n_planets
ratio = 1.0 / ((planet.z / sun.z) * 2.0)
c_ratio = 1.0 / (ring.z / sun.z + 1.0)
fig, ax = plt.subplots(figsize=(8, 8))
ax.set_aspect('equal')
def plot_gears(frame):
ax.clear()
pts = ring.gear_points()
x, y = pts[:, 0], pts[:, 1]
ax.plot(x, y)
w = np.pi * 2.0 / 200.0 * frame
w1 = w * ratio
w2 = w * c_ratio
pts = sun.gear_points()
x, y = pts[:, 0], pts[:, 1]
x, y = rotate_xy(x, y, sun.tau / 2.0 + w)
ax.plot(x, y)
for i in range(n_planets):
pts = planet.gear_points()
x, y = pts[:, 0], pts[:, 1]
x, y = rotate_xy(x, y, -w1)
cx = np.cos(i * planet_a + w2) * orbit_r
cy = np.sin(i * planet_a + w2) * orbit_r
x += cx
y += cy
ax.plot(x, y, color='C2')
if animate:
anim = animation.FuncAnimation(fig, plot_gears, 400, interval=25)
else:
plot_gears(0)
plt.show()
###Output
_____no_output_____ |
Macro/pyautogui_SSAM_trj_to_csv.ipynb | ###Markdown
Install
###Code
#!pip install pyperclip
#!pip install pyautogui
###Output
_____no_output_____
###Markdown
Imports
###Code
import pyperclip #> Copy & paste Korean sentence
import pyautogui as ptg #> Macro
import os
import numpy as np
import time
###Output
_____no_output_____
###Markdown
Settings Get SSAM Button coordinates* **`ptg.position`** : It returns the current screen coordinates of the mouse cursor.
###Code
ptg.position()
###Output
_____no_output_____
###Markdown
Set coordinates of SSAM Buttons
###Code
configuration_tab = (1968, 42) # SSAM configuration Tab
add_button = (2034, 155) # add Button
file_write = (2813, 742) # Input window of file name(파일이름 입력창)
open_button = (3348, 778) # open button
analyze_button = (2014, 474) # analyze button
analyze_complete = (2926, 569) # Close 'analyze complete' window
summary_tab = (2103, 47) # Summary Tab
export_csv_button = (2089, 608) # Export to csv file button
save_csv_button = (3322, 775) # Save to CSV button
select_file = (2343, 106) # File name list
delete_button = (2025, 187) # Delete button
###Output
_____no_output_____
###Markdown
Set .trj File Names
###Code
length = [str(600 + i * 50) + 'm' for i in range(9)]
num = []
for i in range(1, 57):
if i < 10:
num.append('00' + str(i))
else:
num.append('0' + str(i))
length
f = []
for l in length:
for n in num:
file_name = f'낙동jc_allpc_{l}_{n}.trj' #> .trj file names are automately created
f.append(file_name)
len(f)
f[29]
###Output
_____no_output_____
###Markdown
RUN! Automatically Create SSAM Result Files
###Code
for i in range(len(f)):
# Click : Configuration Tab
ptg.click(x=configuration_tab[0], y=configuration_tab[1], clicks=1, button='left')
# Click : ADD Button
ptg.click(x=add_button[0], y=add_button[1], clicks=1, button='left')
time.sleep(1)
# Click : File name input window
ptg.click(x=file_write[0], y=file_write[1], clicks=1, button='left')
# Go to C:
ptg.typewrite('C:', 0.1)
ptg.press('enter')
#> Click : File name input window
ptg.click(x=file_write[0], y=file_write[1], clicks=1, button='left')
ptg.keyDown('backspace')
ptg.keyUp('backspace')
# Go to E: - where .trj files were saved in.
ptg.typewrite('E:', 0.1)
ptg.press('enter')
# Click : File name input window
ptg.click(x=file_write[0], y=file_write[1], clicks=1, button='left')
ptg.keyDown('backspace')
ptg.keyUp('backspace')
# Go to the folder : Put your folder name('[엇갈림구간] VISSIM traj 파일' ) and press enter
pyperclip.copy('[엇갈림구간] VISSIM traj 파일')
ptg.hotkey("ctrl", "v")
ptg.press('enter')
# Click : File name input window
ptg.click(x=file_write[0], y=file_write[1], clicks=1, button='left')
ptg.keyDown('backspace')
ptg.keyUp('backspace')
# Go to the folder : Put your folder name('A_1_낙동JC') and press enter
pyperclip.copy('A_1_낙동JC')
ptg.hotkey("ctrl", "v")
ptg.press('enter')
# Click : File name input window
ptg.click(x=file_write[0], y=file_write[1], clicks=1, button='left')
ptg.keyDown('backspace')
ptg.keyUp('backspace')
# Put your file name
pyperclip.copy(f[i])
ptg.hotkey("ctrl", "v")
#> Click : Open button
ptg.click(x=open_button[0], y=open_button[1], clicks=1, button='left')
time.sleep(1) #> Time Sleep : It depends on your computer environment. After some trial and error, you may have to put in other stable values.
# Click : Analyze button
ptg.click(x=analyze_button[0], y=analyze_button[1], clicks=1, button='left')
time.sleep(15) #> Time Sleep : It depends on your computer environment. After some trial and error, you may have to put in other stable values.
# Click : Analysis complete Window
#> Time Sleep : It depends on your computer environment. After some trial and error, you may have to put in other stable values.
ptg.click(x=analyze_complete[0], y=analyze_complete[1], clicks=3, button='left')
time.sleep(1)
ptg.click(x=analyze_complete[0], y=analyze_complete[1], clicks=3, button='left')
time.sleep(1)
#> Click : Summary tab
ptg.click(x=summary_tab[0], y=summary_tab[1], clicks=1, button='left')
# Click : Export to csv file button
ptg.click(x=export_csv_button[0], y=export_csv_button[1], clicks=1, button='left')
time.sleep(1)
#> Click : Save to csv button
ptg.click(x=save_csv_button[0], y=save_csv_button[1], clicks=1, button='left')
time.sleep(1)
#> Click : Configuration tab
ptg.click(x=configuration_tab[0], y=configuration_tab[1], clicks=1, button='left')
#> Click : Select file
ptg.click(x=select_file[0], y=select_file[1], clicks=1, button='left')
#> Click : Delete button
ptg.click(x=delete_button[0], y=delete_button[1], clicks=3, button='left')
time.sleep(1)
###Output
_____no_output_____ |
01-tle-fitting-example.ipynb | ###Markdown
DescriptionThis Notebook does the following:* Propagate a spacecraft using a numerical propagator, with the following perturbation forces included: * Gravity field (EIGEN6S with degree 64, order 64) * Atmospheric drag (NRLMSISE00 at average solar activity) * Solar radiation pressure * Moon attraction * Sun attraction* Fit a Two-Line Elements set on the spacecraft states obtained by the propagation Parameters Spacecraft properties
###Code
sc_mass = 400.0 # kg
sc_cross_section = 0.3 # m2
cd_drag_coeff = 2.0
cr_radiation_pressure = 1.0
###Output
_____no_output_____
###Markdown
* The start date has an influence on the solar activity and therefore on the drag* The duration has an influence on the mean elements being fitted to the Keplerian elements
###Code
from datetime import datetime
date_start = datetime(2019, 1, 1)
fitting_duration_d = 1 # days
###Output
_____no_output_____
###Markdown
Keplerian elements
###Code
import numpy as np
a = 7000.0e3 # meters
e = 0.001
i = float(np.deg2rad(98.0)) # Conversion to Python float is required for Orekit
pa = float(np.deg2rad(42.0))
raan = float(np.deg2rad(42.0))
ma = float(np.deg2rad(42.0)) # Mean anomaly
###Output
_____no_output_____
###Markdown
Satellite information
###Code
satellite_number = 99999
classification = 'X'
launch_year = 2018
launch_number = 42
launch_piece = 'F'
ephemeris_type = 0
element_number = 999
revolution_number = 100
###Output
_____no_output_____
###Markdown
Numerical propagator parameters
###Code
dt = 60.0 # s, period at which the spacecraft states are saved to fit the TLE
prop_min_step = 0.001 # s
prop_max_step = 300.0 # s
prop_position_error = 10.0 # m
# Estimator parameters
estimator_position_scale = 1.0 # m
estimator_convergence_thres = 1e-3
estimator_max_iterations = 25
estimator_max_evaluations = 35
###Output
_____no_output_____
###Markdown
Setting up Orekit Creating VM and loading data zip
###Code
import orekit
orekit.initVM()
from orekit.pyhelpers import setup_orekit_curdir
setup_orekit_curdir()
###Output
_____no_output_____
###Markdown
Setting up frames
###Code
from org.orekit.frames import FramesFactory, ITRFVersion
from org.orekit.utils import IERSConventions
gcrf = FramesFactory.getGCRF()
teme = FramesFactory.getTEME()
itrf = FramesFactory.getITRF(IERSConventions.IERS_2010, False)
from org.orekit.models.earth import ReferenceEllipsoid
wgs84_ellipsoid = ReferenceEllipsoid.getWgs84(itrf)
from org.orekit.bodies import CelestialBodyFactory
moon = CelestialBodyFactory.getMoon()
sun = CelestialBodyFactory.getSun()
###Output
_____no_output_____
###Markdown
Creating the Keplerian orbit
###Code
from org.orekit.orbits import KeplerianOrbit, PositionAngle
from org.orekit.utils import Constants as orekit_constants
from orekit.pyhelpers import datetime_to_absolutedate
date_start_orekit = datetime_to_absolutedate(date_start)
keplerian_orbit = KeplerianOrbit(a, e, i, pa, raan, ma, PositionAngle.MEAN,
gcrf, date_start_orekit, orekit_constants.EIGEN5C_EARTH_MU)
###Output
_____no_output_____
###Markdown
Creating the initial TLE from the Keplerian elements. The mean elements should differ from the Keplerian elements, but they will be fitted later.
###Code
from org.orekit.propagation.analytical.tle import TLE
mean_motion = float(np.sqrt(orekit_constants.EIGEN5C_EARTH_MU / np.power(a, 3)))
mean_motion_first_derivative = 0.0
mean_motion_second_derivative = 0.0
b_star_first_guess = 1e-5 # Does not play any role, because it is a free parameter when fitting the TLE
tle_first_guess = TLE(satellite_number,
classification,
launch_year,
launch_number,
launch_piece,
ephemeris_type,
element_number,
date_start_orekit,
mean_motion,
mean_motion_first_derivative,
mean_motion_second_derivative,
e,
i,
pa,
raan,
ma,
revolution_number,
b_star_first_guess)
print(tle_first_guess)
###Output
1 99999X 18042F 19001.00000000 .00000000 00000-0 10000-4 0 9996
2 99999 98.0000 42.0000 0010000 42.0000 42.0000 14.82366875 1004
###Markdown
Setting up the numerical propagator
###Code
from org.orekit.attitudes import NadirPointing
nadir_pointing = NadirPointing(gcrf, wgs84_ellipsoid)
from org.orekit.propagation.conversion import DormandPrince853IntegratorBuilder
integrator_builder = DormandPrince853IntegratorBuilder(prop_min_step, prop_max_step, prop_position_error)
from org.orekit.propagation.conversion import NumericalPropagatorBuilder
propagator_builder = NumericalPropagatorBuilder(keplerian_orbit,
integrator_builder, PositionAngle.MEAN, estimator_position_scale)
propagator_builder.setMass(sc_mass)
propagator_builder.setAttitudeProvider(nadir_pointing)
# Earth gravity field with degree 64 and order 64
from org.orekit.forces.gravity.potential import GravityFieldFactory
gravity_provider = GravityFieldFactory.getConstantNormalizedProvider(64, 64)
from org.orekit.forces.gravity import HolmesFeatherstoneAttractionModel
gravity_attraction_model = HolmesFeatherstoneAttractionModel(itrf, gravity_provider)
propagator_builder.addForceModel(gravity_attraction_model)
# Moon and Sun perturbations
from org.orekit.forces.gravity import ThirdBodyAttraction
moon_3dbodyattraction = ThirdBodyAttraction(moon)
propagator_builder.addForceModel(moon_3dbodyattraction)
sun_3dbodyattraction = ThirdBodyAttraction(sun)
propagator_builder.addForceModel(sun_3dbodyattraction)
# Solar radiation pressure
from org.orekit.forces.radiation import IsotropicRadiationSingleCoefficient
isotropic_radiation_single_coeff = IsotropicRadiationSingleCoefficient(sc_cross_section, cr_radiation_pressure)
from org.orekit.forces.radiation import SolarRadiationPressure
solar_radiation_pressure = SolarRadiationPressure(sun, wgs84_ellipsoid.getEquatorialRadius(),
isotropic_radiation_single_coeff)
propagator_builder.addForceModel(solar_radiation_pressure)
# Atmospheric drag
from org.orekit.forces.drag.atmosphere.data import MarshallSolarActivityFutureEstimation
msafe = MarshallSolarActivityFutureEstimation(
'(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\p{Digit}\p{Digit}\p{Digit}\p{Digit}F10\.(?:txt|TXT)',
MarshallSolarActivityFutureEstimation.StrengthLevel.AVERAGE)
from org.orekit.data import DataProvidersManager
DM = DataProvidersManager.getInstance()
DM.feed(msafe.getSupportedNames(), msafe) # Feeding the F10.7 bulletins to Orekit's data manager
from org.orekit.forces.drag.atmosphere import NRLMSISE00
atmosphere = NRLMSISE00(msafe, sun, wgs84_ellipsoid)
from org.orekit.forces.drag import IsotropicDrag
isotropic_drag = IsotropicDrag(sc_cross_section, cd_drag_coeff)
from org.orekit.forces.drag import DragForce
drag_force = DragForce(atmosphere, isotropic_drag)
propagator_builder.addForceModel(drag_force)
propagator = propagator_builder.buildPropagator([a, e, i, pa, raan, ma])
###Output
_____no_output_____
###Markdown
Propagating and fitting TLE Propagating
###Code
from org.orekit.propagation import SpacecraftState
initial_state = SpacecraftState(keplerian_orbit, sc_mass)
propagator.resetInitialState(initial_state)
propagator.setEphemerisMode()
date_end_orekit = date_start_orekit.shiftedBy(fitting_duration_d * 86400.0)
state_end = propagator.propagate(date_end_orekit)
###Output
_____no_output_____
###Markdown
Getting the generated ephemeris and saving all intermediate spacecraft states
###Code
from java.util import ArrayList
states_list = ArrayList()
bounded_propagator = propagator.getGeneratedEphemeris()
date_current = date_start_orekit
while date_current.compareTo(date_end_orekit) <= 0:
spacecraft_state = bounded_propagator.propagate(date_current)
states_list.add(spacecraft_state)
date_current = date_current.shiftedBy(dt)
###Output
_____no_output_____
###Markdown
Fitting the TLE, based on a great example by RomaricH on the Orekit forum: https://forum.orekit.org/t/generation-of-tle/265/4
###Code
from org.orekit.propagation.conversion import TLEPropagatorBuilder, FiniteDifferencePropagatorConverter
from org.orekit.propagation.analytical.tle import TLEPropagator
threshold = 1.0 # "absolute threshold for optimization algorithm", but no idea about its impact
tle_builder = TLEPropagatorBuilder(tle_first_guess, PositionAngle.MEAN, 1.0)
fitter = FiniteDifferencePropagatorConverter(tle_builder, threshold, 1000)
fitter.convert(states_list, False, 'BSTAR') # Setting BSTAR as free parameter
tle_propagator = TLEPropagator.cast_(fitter.getAdaptedPropagator())
tle_fitted = tle_propagator.getTLE()
###Output
_____no_output_____
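Before comparing the element sets, an optional sanity check on the fit quality is to compare positions from the fitted TLE propagator against the numerical ephemeris at an epoch inside the fitting window. This is only a sketch; it assumes the standard Orekit `getPVCoordinates(date, frame)` interface available on both propagators built above:
```
date_mid = date_start_orekit.shiftedBy(0.5 * fitting_duration_d * 86400.0)
pos_tle = tle_propagator.getPVCoordinates(date_mid, gcrf).getPosition()
pos_num = bounded_propagator.getPVCoordinates(date_mid, gcrf).getPosition()
# Norm of the position difference, in meters
print('TLE vs numerical position difference at mid-interval: {:.1f} m'.format(
    pos_tle.subtract(pos_num).getNorm()))
```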
###Markdown
Now let's compare the initial guess with the fitted TLE:
###Code
print(tle_first_guess)
print('')
print(tle_fitted)
###Output
1 99999X 18042F 19001.00000000 .00000000 00000-0 10000-4 0 9996
2 99999 98.0000 42.0000 0010000 42.0000 42.0000 14.82366875 1004
1 99999X 18042F 19001.00000000 .00000000 00000-0 86025-3 0 9995
2 99999 97.9236 42.2567 0016635 52.1798 31.8862 14.78537902 1007
|
MonkeyNetPyTorch.ipynb | ###Markdown
###Code
import os
os.environ['KAGGLE_USERNAME'] = "theroyakash"
os.environ['KAGGLE_KEY'] = "👀"
!kaggle datasets download -d slothkong/10-monkey-species
!unzip /content/10-monkey-species.zip
import torch
import torch.nn as nn
from PIL import Image
import torch.utils.data as data
import torchvision
from torchvision import transforms, models
from tqdm.auto import tqdm
import numpy as np
from matplotlib import pyplot as plt
# Set seed for recreating results
import random
torch.manual_seed(123)
np.random.seed(123)
random.seed(123)
label2name = {
'n0':'alouatta_palliata',
'n1':'erythrocebus_patas',
'n2':'cacajao_calvus',
'n3':'macaca_fuscata',
'n4':'cebuella_pygmea',
'n5':'cebus_capucinus',
'n6':'mico_argentatus',
'n7':'saimiri_sciureus',
'n8':'aotus_nigriceps',
'n9':'trachypithecus_johnii',
}
name2id = {
'alouatta_palliata':0,
'erythrocebus_patas':1,
'cacajao_calvus':2,
'macaca_fuscata':3,
'cebuella_pygmea':4,
'cebus_capucinus':5,
'mico_argentatus':6,
'saimiri_sciureus':7,
'aotus_nigriceps':8,
'trachypithecus_johnii':9,
}
class Transform():
"""
Image Transformer class.
Args:
- ``resize``: Image size after resizing
- ``mean``: R,G,B avg of each channel
- ``std``: Standard deviation of each channel
"""
def __init__(self, resize, mean, std):
self.transform = {
'train': transforms.Compose([
transforms.RandomResizedCrop(resize, scale=(0.5, 1.0)),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize(mean, std=std)
]),
'validation': transforms.Compose([
transforms.Resize(resize),
transforms.CenterCrop(resize),
transforms.ToTensor(),
transforms.Normalize(mean, std)
])
}
def __call__(self, image, phase='train'):
return self.transform[str(phase)](image)
import glob
def pathlist(phase):
root = '/content/'
target = os.path.join(root+phase+'/**/**/*.jpg')
paths = []
for path in glob.glob(target):
paths.append(path)
return paths
train_list = pathlist(phase='training')
validation_list = pathlist(phase='validation')
class Dataset(torch.utils.data.Dataset):
"""
Dataset class.
Args:
- ``path_list``: List of paths for the data
- ``transform``: Image Transform Object
- ``phase``: Traning or Validation
"""
def __init__(self, path_list, transform, phase):
self.path_list = path_list
self.transform = transform
self.phase = phase
def __len__(self):
return len(self.path_list)
def __getitem__(self, idx):
image_path = self.path_list[idx]
image = Image.open(image_path)
# Pre-processing:
transformed_image = self.transform(image, self.phase)
# Get Label of the Image
array = image_path.split('/')
label = array[-2]
name = label2name[label]
# Transform Label to Number
label_number = name2id[name]
return transformed_image, label_number
image_size = 224
mean = (0.485,0.456,0.406)
std = (0.229,0.224,0.225)
# Training and Validation Dataset Generator from Dataset Class
training_dataset = Dataset(path_list=train_list, transform=Transform(image_size, mean, std), phase='train')
validation_dataset = Dataset(validation_list, transform=Transform(image_size, mean, std), phase='validation')
batch_size = 64
training_dataloader = torch.utils.data.DataLoader(training_dataset, batch_size=batch_size, shuffle=True)
validation_dataloader = torch.utils.data.DataLoader(validation_dataset, batch_size=batch_size, shuffle=True)
dataloaders = {
'train': training_dataloader,
'validation': validation_dataloader
}
network = torchvision.models.vgg16(pretrained=True)
network.classifier[6] = nn.Linear(in_features=4096, out_features=10)
network.train()
criterion = nn.CrossEntropyLoss()
# Fine tuning
param_to_update_1 = []
param_to_update_2 = []
param_to_update_3 = []
# learn parameter list
update_param_names_1 = ['features']
update_param_names_2 = ['classifier.0.weight','classifier.0.bias','classifier.3.weight','classifier.3.bias']
update_param_names_3 = ['classifier.6.weight','classifier.6.bias']
for name,param in network.named_parameters():
print(name)
if update_param_names_1[0] in name:
param.requires_grad = True
param_to_update_1.append(param)
elif name in update_param_names_2:
param.requires_grad = True
param_to_update_2.append(param)
elif name in update_param_names_3:
param.requires_grad = True
param_to_update_3.append(param)
else:
param.requires_grad = False
optimizer = torch.optim.SGD([
{'params':param_to_update_1,'lr':1e-4},
{'params':param_to_update_2,'lr':5e-4},
{'params':param_to_update_3,'lr':1e-3},
],momentum=0.9)
def train_model(net,dataloaders_dict,criterion,optimizer,num_epochs):
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Using device: {device} for deep learning')
network.to(device)
history_train_loss = []
history_train_acc = []
history_val_loss = []
history_val_acc = []
torch.backends.cudnn.benchmark = True
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch+1,num_epochs))
print('-------------------------------------')
for phase in ['train','validation']:
if phase == 'train':
network.train()
else:
network.eval()
epoch_loss = 0.0
epoch_corrects = 0
# pick mini batch from dataloader
for inputs,labels in tqdm(dataloaders_dict[phase]):
inputs = inputs.to(device)
labels = labels.to(device)
# init optimizer
optimizer.zero_grad()
# calculate forward
with torch.set_grad_enabled(phase=='train'):
outputs = network(inputs)
loss = criterion(outputs,labels) # calculate loss
_,preds = torch.max(outputs,1) # predict label
# backward (train only)
if phase == 'train':
loss.backward()
optimizer.step()
# update loss sum
epoch_loss += loss.item() * inputs.size(0)
# correct answer count
epoch_corrects += torch.sum(preds == labels.data)
# show loss and correct answer rate per epoch
epoch_loss = epoch_loss / len(dataloaders_dict[phase].dataset)
epoch_acc = epoch_corrects.double() / len(dataloaders_dict[phase].dataset)
print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase,epoch_loss,epoch_acc))
if phase == 'train':
history_train_loss.append(epoch_loss)
history_train_acc.append(epoch_acc)
else:
history_val_loss.append(epoch_loss)
history_val_acc.append(epoch_acc)
return history_train_loss,history_train_acc,history_val_loss,history_val_acc
num_epochs=10
train_loss,train_acc,val_loss,val_acc = train_model(network,
dataloaders,
criterion,
optimizer,
num_epochs=num_epochs)
###Output
_____no_output_____ |
Session02/IntroToDataProductsAndTasks.ipynb | ###Markdown
Session 02: Intro to the Science Pipelines Software and Data productsOwner(s): **Yusra AlSayyad** ([@yalsayyad](https://github.com/LSSTScienceCollaborations/StackClub/issues/new?body=@yalsayyad))Last Verified to Run: **2020-05-14?**Verified Stack Release: **w_2020_19**Now that you have brought yourself to the data via the Science Platform (Lesson 1), you can retrieve that data and rerun elements of the Science Pipelines. Today we'll cover:* What are the **Science Pipelines**?* What is this **Stack**?* What are the **data products**?* How to rerun an element of the pipelines, a **Task**, with a different configuration. We'll only quickly inspect the images and catalogs. Next week's lesson will be dedicated to learning more sophisticated methods for exploring the data. 1. Overview of the Science Pipelines and the stack (presentation)The stack is an implementation of the science pipelines and its corresponding library. 2. The Data ProductsData products include both catalogs and images.Instead of operating directly on files and directories, we interact with on-disk data products via an abstraction layer called the data Butler. The butler operates on data repositories. DM regularly tests the science pipelines on precursor data from HSC, DECam, and simulated data generated by DESC that we call "DC2 ImSim". These Data Release Production (DRP) outputs can be found in `/datasets`. This notebook will be using HSC Gen 2 repo.*Jargon Watch: Gen2/Gen3 - We're in the process of building a brand new Butler, which we are calling the 3rd Generation Butler, or Gen3 for short.* | generation | Class || ------------- | ------------- || Gen 2 | `lsst.daf.persistence.Butler` || Gen 3 | `lsst.daf.butler.Butler` |A Gen 2 Butler is the first stack object we are going to instantiate, with a path to a directory that is a repo.
###Code
# What version of the Stack am I using?
! echo $HOSTNAME
! eups list lsst_distrib -s
import os
REPO = '/datasets/hsc/repo/rerun/RC/w_2020_19/DM-24822'
from lsst.daf.persistence import Butler
butler = Butler(REPO)
HSC_REGISTRY_COLUMNS = ['taiObs', 'expId', 'pointing', 'dataType', 'visit', 'dateObs', 'frameId', 'filter', 'field', 'pa', 'expTime', 'ccdTemp', 'ccd', 'proposal', 'config', 'autoguider']
butler.queryMetadata('calexp', HSC_REGISTRY_COLUMNS, dataId={'filter': 'HSC-I', 'visit': 30504, 'ccd': 50})
###Output
_____no_output_____
###Markdown
**Common error messages** when instantiating a Butler:1) ```PermissionError: [Errno 13] Permission denied: '/datasets/hsc/repo/rerun/RC/w_2020_19/DM-248222'```- Translation: This directory does not exist. Confirm with `os.path.exists(REPO)`2) `RuntimeError: No default mapper could be established from inputs`:- Translation: This directory exists, but is not a data repo. Does `REPO` have a file called `repositoryCfg.yaml` in it? Nope? It's not a data repo. Use `os.listdir` to see what's in your directory*Next we'll look at 3 types of data products:** Images* Catalogs: lsst.afw.table* Catalogs: parquet/pyArrow DataFrames 2.1 Images
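If you hit either of these, a quick check along the following lines (using the `REPO` variable defined above and the standard library `os` module) tells you which case you are in:
```
import os
print(os.path.exists(REPO))                      # False -> case 1: the path does not exist or is not readable
print('repositoryCfg.yaml' in os.listdir(REPO))  # False -> case 2: the directory exists but is not a data repo
```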
###Code
VISIT = 34464
CCD = 81
exposure = butler.get('calexp', visit=int(VISIT), ccd=CCD)
###Output
_____no_output_____
###Markdown
**Common error messages** when getting data:1) `'HscMapper' object has no attribute 'map_calExp'`- You're asking for a data product that doesn't exist. In this example, I asked for a 'calExp' with a capital E, which is not a thing. Double check your spelling in: https://github.com/lsst/obs_base/blob/master/policy/exposures.yaml for images or https://github.com/lsst/obs_base/blob/master/policy/datasets.yaml for catalogs or models.2) `NoResults: No locations for get: datasetType:calexp dataId:DataId(initialdata={'visit': 34464, 'ccd': 105}, tag=set())`:- This file doesn't exist. If you don't believe the Butler, add "_filename" to the data product you want, and you'll get back the filename you can lookup. For example: butler.get('calexp_filename', visit=VISIT, ccd=105)
###Code
butler.get('calexp_filename', visit=VISIT, ccd=105)
###Output
_____no_output_____
###Markdown
Rare error message: Did you try that and now it says it can't find the filename? `NoResults: No locations for get: datasetType:calexp_filename dataId:DataId(initialdata={'visit': 34464, 'ccd': 81}, tag=set())` Sqlalchemy doesn't handle data types well. Force your visit or ccd numbers to be integers like `butler.get('calexp', visit=int(34464), ...` Q: If I can get the filename from the butler, why can't I just read it in manually like I do with other FITS files and tables?A: Because in operations, the data will not be on a shared GPFS disk like you're reading from now. We guarantee `butler.get` to work the same regardless of the backend storage. Exposure ObjectsThe data that the butler just fetched for us is an `Exposure` object. It composes a `maskedImage` which has 3 `Image` objects: an `image`, a `mask`, and a `variance`. These are pointers/views!
###Code
exposure
exposure.maskedImage.image
exposure.maskedImage.mask
exposure.maskedImage.variance
# These shortcuts work too.
exposure.image
exposure.variance
exposure.mask
# each image also has an array property e.g.
exposure.image.array
###Output
_____no_output_____
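Because these accessors return views rather than copies, writing through `exposure.image.array` changes the pixels of the exposure itself. A minimal sketch (the pixel index is arbitrary, and the value is restored afterwards):
```
original_value = float(exposure.maskedImage.image.array[100, 100])
exposure.image.array[100, 100] += 1.0
# ~1.0: both accessors point at the same underlying pixels
print(exposure.maskedImage.image.array[100, 100] - original_value)
exposure.image.array[100, 100] = original_value  # undo the change
```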
###Markdown
The exposures also include a WCS Object, a PSF Object and ExposureInfo. These can be accessed via the following methods.
###Code
wcs = exposure.getWcs()
psf = exposure.getPsf()
photoCalib = exposure.getPhotoCalib()
expInfo = exposure.getInfo()
visitInfo = expInfo.getVisitInfo()
###Output
_____no_output_____
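These objects can be used directly. A hedged sketch of typical calls (the method names assume the usual `SkyWcs`, `PhotoCalib`, and `Psf` interfaces in this Stack version; check with tab-complete if they differ):
```
print(wcs.pixelToSky(1000.0, 1000.0))             # sky position (ICRS) of a pixel coordinate
print(photoCalib.instFluxToMagnitude(1000.0))     # calibrated magnitude for an instFlux of 1000 counts
print(psf.computeShape().getDeterminantRadius())  # PSF size in pixels at the default position
```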
###Markdown
- [x] **Exercise:** Use tab-complete or '?exposure' to explore the Exposure. Explore the details in this visitInfo. What was the exposure time? What was the observation date? Exploring the other methods of the Exposure object, what are the dimensions of the image?
###Code
?exposure
# visitInfo.
# exposure.
###Output
_____no_output_____
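One possible way to answer these questions, assuming the standard `VisitInfo` and `Exposure` accessors:
```
print(visitInfo.getExposureTime())  # exposure time in seconds
print(visitInfo.getDate())          # observation date/time
print(exposure.getDimensions())     # image dimensions in pixels
print(exposure.getBBox())           # bounding box, including the XY0 offset
```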
###Markdown
For more documentation on Exposure objects:* https://pipelines.lsst.io/modules/lsst.afw.image/indexing-conventions.htmlFor another notebook on Exposure objects:* https://github.com/LSSTScienceCollaborations/StackClub/blob/master/Basics/Calexp_guided_tour.ipynbSession 3 will introduce more sophisticated image display tools, and go into detail on the `Display` objects in the stack, but let's take a quick look at this image:
###Code
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import lsst.afw.display as afw_display
%matplotlib inline
matplotlib.rcParams["figure.figsize"] = (6, 4)
matplotlib.rcParams["font.size"] = 12
matplotlib.rcParams["figure.dpi"] = 120
# Let's smooth it first just for fun.
from skimage.filters import gaussian
exposure.image.array[:] = gaussian(exposure.image.array, sigma=5)
# and display it
display = afw_display.Display(frame=1, backend='matplotlib')
display.scale("linear", "zscale")
display.mtv(exposure)
###Output
_____no_output_____
###Markdown
From the colorbar, you can tell that the background has been subtracted. The first step of the pipeline, `processCcd`, takes a `postISRCCD` as input. Before detection and measurement, it estimates a naive background and subtracts it. Indeed, we store the `calexp` with the background subtracted, and the model that was subtracted from the original `postISRCCD` as the `calexpBackground`.*Jargon Watch: ISR - Instrument Signature Removal (ISR) encapsulates all the steps you normally associate with reducing astronomical imaging data (bias subtraction, flat fielding, crosstalk correction, cosmic ray removal, etc.)*There's a full focal plane background estimation step that produces a delta on the `calexpBackground`, which is called `skyCorr`. Let's quickly plot these:
###Code
# Fetch background models from the butler
background = butler.get('calexpBackground', visit=VISIT, ccd=CCD)
skyCorr = butler.get('skyCorr', visit=VISIT, ccd=CCD)
# call "getImage" to evaluate the model on a pixel grid
plt.subplot(121)
plt.imshow(background.getImage().array, origin='lower', cmap='gray')
plt.title("Local Polynomial Bkgd")
plt.subplot(122)
plt.imshow(background.getImage().array - skyCorr.getImage().array, origin='lower', cmap='gray')
plt.title("SkyCorr Bkgd")
exposure = butler.get('calexp', visit=VISIT, ccd=CCD)
background = butler.get('calexpBackground', visit=VISIT, ccd=CCD)
# create a view to the masked image
mi = exposure.maskedImage
# add the background image to that view.
mi += background.getImage()
display1 = afw_display.Display(frame=1, backend='matplotlib')
display1.scale("linear", "zscale")
exposure.image.array[:] = gaussian(exposure.image.array, sigma=5)
display1.mtv(exposure)
###Output
_____no_output_____
###Markdown
It is good to get in the habit of performing mathematical operations on maskedImages instead of images, because that scales the variance plane appropriately. For example, when you multiply a `MaskedImage` by 2, it multiplies the `Image` by 2 and the `Variance` by 4. **Exercise 2.1)** Coadds have dataIds defined by their SkyMap. Fetch the `deepCoadd` with `tract=9813`, `patch='3,3'` and `filter='HSC-I'` from the same repo. Bonus: a `deepCoadd_calexp` has had an additional aggressive background model applied, called a `deepCoadd_calexp_background`. Confirm that `deepCoadd_calexp` + `deepCoadd_calexp_background` = `deepCoadd`.
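As a quick illustration of that scaling behaviour (a minimal sketch that works on a copy, so the exposure used in the exercise solution below is untouched; the pixel index is arbitrary):
```
mi = exposure.maskedImage.clone()
var_before = float(mi.variance.array[100, 100])
mi *= 2.0  # Image scaled by 2, Variance by 4, Mask unchanged
print(mi.variance.array[100, 100] / var_before)  # expect ~4
```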
###Code
deepCoadd = butler.get('deepCoadd', tract=9813, patch='3,3', filter='HSC-I')
deepCoadd_calexp_background = butler.get('deepCoadd_calexp_background', tract=9813, patch='3,3', filter='HSC-I')
deepCoadd_calexp = butler.get('deepCoadd_calexp', tract=9813, patch='3,3', filter='HSC-I')
mi = deepCoadd_calexp.maskedImage
mi += deepCoadd_calexp_background.getImage()
deepCoadd.image.array - deepCoadd_calexp.image.array
###Output
_____no_output_____
###Markdown
2.2 Catalogs (lsst.afw.table format)afwTables are for passing to tasks. The pipeline needed a C++-readable table format, so we wrote one. If you want to pass a catalog to a Task, it'll probably take one of these. They are:* Row stores, and* the column names are oriented for softwareThe source table immediately output by processCcd is called `src`
###Code
src = butler.get('src', visit=VISIT, ccd=CCD)
src
###Output
_____no_output_____
###Markdown
The returned object, `src`, is a `lsst.afw.table.SourceCatalog` object.
###Code
src.getSchema()
###Output
_____no_output_____
###Markdown
Inspecting the schema reveals that instFluxes are in uncalibrated units of counts. coord_ra/coord_dec are in units of radians. `lsst.afw.table.SourceCatalog`s have their own API. However if you are *just* going to use it for analysis, you can convert it to an AstroPy table or a pandas DataFrame:
###Code
src.asAstropy()
df = src.asAstropy().to_pandas()
df.tail()
###Output
_____no_output_____
###Markdown
2.3 Catalogs (Parquet/PyArrow DataFrame format)* Output data product ready for analysis* Column store* Full visit and full tract options* Column namesThe parquet outputs have been transformed to database-specified units. Fluxes are in nanojanskys, coordinates are in degrees. These will match what you get via the Portal and the Catalog Access tool Simon showed last week.**NOTE:** The `sourceTable` is a relatively new addition to the Stack, and probably requires a stack version more recent than v19.
###Code
parq = butler.get('sourceTable', visit=VISIT, ccd=CCD)
###Output
_____no_output_____
###Markdown
The `ParquetTable` is just a light wrapper around a `pyarrow.parquet.ParquetFile`. You can get a parquet table from the butler, but it doesn't fetch any columns until you ask for them. It's a column store, which means that it can read only one column at a time. This is great for analysis when you want to plot two-million-element arrays: in a row store you'd have to read the whole ten-million-row table just for the two columns you wanted. But don't even try to loop through rows! If you want a whole row, use the `afwTable`. Last I checked, there is also a processing step that consolidates the per-ccd Source Tables into a single full-visit table, which can then be fetched with just a visit dataId, e.g. `parq = butler.get('sourceTable', visit=VISIT)`
###Code
parq
# inspect the columns with:
parq.columns
###Output
_____no_output_____
###Markdown
Note that the column names are different. Now fetch just the columns you want. For example:
###Code
df = parq.toDataFrame(columns=['ra', 'decl', 'PsFlux', 'PsFluxErr', 'sky_source',
'PixelFlags_bad', 'PixelFlags_sat', 'PixelFlags_saturated'])
###Output
_____no_output_____
###Markdown
**Exercise:** Using this DataFrame `df`, make a histogram of `PsFlux` for sky sources using this parquet source table. If `sky_source == True` then the source was not a normal detection, but rather a randomly placed centroid used to measure properties of blank sky. The distribution should be centered at 0. **Exercise:** A parquet `objectTable_tract` contains deep coadd measurements for a 1.5 sq. deg. tract. **Make an r-i vs. g-r plot** of stars with an r-band SNR > 100. Use `refExtendedness` == 0 to select for stars; it means that the galaxy model flux was similar to the PSF flux. By the looks of your plot, what do you think about using refExtendedness for star/galaxy separation?
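A minimal sketch of the first exercise, assuming the `df` fetched above, that `sky_source` is a boolean (or 0/1) flag, and `matplotlib.pyplot` imported as `plt` earlier in this notebook:
```
sky = df[df['sky_source'].astype(bool)]
plt.hist(sky['PsFlux'].dropna(), bins=50)
plt.xlabel('PsFlux [nJy]')
plt.ylabel('number of sky sources')
plt.title('Sky-source PsFlux (should be centred on zero)')
plt.show()
```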
###Code
# butler = Butler('/datasets/hsc/repo/rerun/DM-23243/OBJECT/DEEP')
# parq = butler.get('objectTable_tract', tract=9813)
# parq.columns
###Output
_____no_output_____
###Markdown
Tasks**TL;DR If you remember one thing about tasks, it's this: go to http://pipelines.lsst.io, then click on lsst.pipe.base**On the landing page for the lsst.pipe.base documentation https://pipelines.lsst.io/modules/lsst.pipe.base/index.html, you'll see a number of tutorials on how to use Tasks and how to create one.CmdlineTask extends Task with commandline driver utils for use with Gen2 Butlers, and will be deprecated soon. However, not all the links under "CommandlineTask" will become obsolete. For example, Retargeting subtasks of command-line tasks will live on.Read: https://pipelines.lsst.io/modules/lsst.pipe.base/task-framework-overview.htmlWhat is a Task?Tasks implement astronomical data processing functionality. They are:* **Configurable:** Modify a task's behavior by changing its configuration. Automatically apply camera-specific modifications* **Hierarchical:** Tasks can call other tasks as subtasks* **Extensible:** Replace ("retarget") any subtask with a variant. Write your own subclass of a task.
###Code
# Edited highlights of ${PIPE_TASKS_DIR}/example/exampleStatsTask.py
import sys
import numpy as np
from lsst.geom import Box2I, Point2I, Extent2I
from lsst.afw.image import MaskedImageF
from lsst.pipe.tasks.exampleStatsTasks import ExampleSimpleStatsTask, ExampleSigmaClippedStatsTask
# Load a MaskedImageF -- an image containing floats
# together with a mask and a per-pixel variance.
WIDTH = 40
HEIGHT = 20
maskedImage = MaskedImageF(Box2I(Point2I(10, 20),
Extent2I(WIDTH, HEIGHT)))
x = np.random.normal(10, 20, size=WIDTH*HEIGHT)
# Because we are shoving it into an ImageF and numpy defaults
# to double precision
X = x.reshape(HEIGHT, WIDTH).astype(np.float32)
im = maskedImage.image
im.array = X
# We initialize the Task once but can call it many times.
task = ExampleSimpleStatsTask()
# Simply call the .run() method with the MaskedImageF.
# Most Tasks have a .run() method. Look there first.
result = task.run(maskedImage)
# And print the result.
print(result)
###Output
Struct(mean=10.747188781565054; meanErr=0.7108371746222903; stdDev=20.1055114597963; stdDevErr=0.5023235396455004)
###Markdown
Using a Task with configurationNow we are going to instantiate Tasks with two different configs. Configs must be set *before* instantiating the task. Do not change the config of an already-instantiated Task object. It will not do what you think it's doing. In fact, during commandline processing, the `Task` drivers such as `CmdLineTask` freeze the configs before running the Task. When you're running them from notebooks, they are not frozen, hence this warning.
###Code
# Edited highlights of ${PIPE_TASKS_DIR}/example/exampleStatsTask.py
config1 = ExampleSigmaClippedStatsTask.ConfigClass(numSigmaClip=1)
config2 = ExampleSigmaClippedStatsTask.ConfigClass()
config2.numSigmaClip = 3
task1 = ExampleSigmaClippedStatsTask(config=config1)
task2 = ExampleSigmaClippedStatsTask(config=config2)
print(task1.run(maskedImage).mean)
print(task2.run(maskedImage).mean)
# Example of what not to do
# -------------------------
# task1 = ExampleSigmaClippedStatsTask(config=config1)
# print(task1.run(maskedImage).mean)
# DO NOT EVER DO THIS!
# task1.config.numSigmaClip = 3 <--- bad bad bad
# print(task1.run(maskedImage).mean)
###Output
10.687153584309756
10.822356706360981
###Markdown
Background Subtraction and Task ConfigurationThe following example of reconfiguring a task is one step in an introduction to `processCcd`: https://github.com/lsst-sqre/notebook-demo/blob/master/AAS_2019_tutorial/intro-process-ccd.ipynb`processCcd`, our basic source extractor run as the first step step in the pipeline, will be covered in more detail in Session 4.
###Code
from lsst.meas.algorithms import SubtractBackgroundTask
# Add the background back in so that we can remodel it (like we did above)
postISRCCD = butler.get("calexp", visit=30502, ccd=CCD)
bkgd = butler.get("calexpBackground", visit=30502, ccd=CCD)
mi = exposure.maskedImage
mi += bkgd.getImage()
# Execute this cell to get fun & terrible results!
bkgConfig = SubtractBackgroundTask.ConfigClass()
bkgConfig.useApprox = False
bkgConfig.binSize = 20
###Output
_____no_output_____
###Markdown
The `config` object here is an instance of a class that inherits from `lsst.pex.config.Config` and contains a set of `lsst.pex.config.Field` objects that define the options that can be modified. Each `Field` behaves more or less like a Python `property`, and you can get information on all of the fields in a config object by using either the built-in `help` or IPython's `?` syntax:
###Code
help(bkgConfig)
SubtractBackgroundTask.ConfigClass.algorithm?
bkgTask = SubtractBackgroundTask(config=bkgConfig)
bkgResult = bkgTask.run(exposure)
display1 = afw_display.Display(frame=1, backend='matplotlib')
display1.scale("linear", min=-0.5, max=10)
display1.mtv(exposure[700:1400,1800:2400])
###Output
_____no_output_____ |
Deep_Learning_Classifier_model.ipynb | ###Markdown
Import libraries & dataset
###Code
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
import time
import statistics as stats
import requests
import pickle
import json
from sklearn.svm import LinearSVC
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
import tensorflow as tf
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import classification_report
from sklearn.base import clone
from sklearn.externals.joblib import dump, load
from mlxtend.feature_selection import ExhaustiveFeatureSelector as EFS
from imblearn.over_sampling import SMOTE, SMOTENC, ADASYN, BorderlineSMOTE
from imblearn.under_sampling import NearMiss, RandomUnderSampler
df = pd.read_csv('http://puma.swstats.info/files/kickstarter_with_trends.csv', index_col="ID")
df.columns
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/externals/joblib/__init__.py:15: FutureWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.
warnings.warn(msg, category=FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/externals/six.py:31: FutureWarning: The module is deprecated in version 0.21 and will be removed in version 0.23 since we've dropped support for Python 2.7. Please rely on the official version of six (https://pypi.org/project/six/).
"(https://pypi.org/project/six/).", FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.neighbors.base module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.neighbors. Anything that cannot be imported from sklearn.neighbors is now part of the private API.
warnings.warn(message, FutureWarning)
###Markdown
Prepare for data cleaning
###Code
link = 'https://3l7z4wecia.execute-api.us-east-1.amazonaws.com/default/api-dynamodb/'
get_categories = {
"operation": "list",
"table": "categories",
}
categories = requests.post(link, json=get_categories)
categories = categories.json()['Items']
categories_proper = dict()
for item in categories:
categories_proper[item['name']] = item['id'] # map NAME to ID
get_main_categories = {
"operation": "list",
"table": "maincategories",
}
main_categories = requests.post(link, json=get_main_categories)
main_categories = main_categories.json()['Items']
main_categories_proper = dict()
for item in main_categories:
main_categories_proper[item['name']] = item['id'] # map NAME to ID
get_countries = {
"operation": "list",
"table": "countries",
}
countries = requests.post(link, json=get_countries)
countries = countries.json()['Items']
countries_proper = dict()
for item in countries:
countries_proper[item['name']] = item['id'] # map NAME to ID
###Output
_____no_output_____
###Markdown
Clean & prepare data: * Calculate campaign length * Delete all incomplete data (like country == N,0") * Delete all Kickstarter projects with a state other than 'failed' or 'successful' * Cast all non-numerical features to numerical types and drop all empty rows * Use Label Encoding or One-Hot Encoding
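For readers new to the two encodings, here is a minimal toy sketch (synthetic data, not the Kickstarter frame) of the difference between them:
```python
import pandas as pd

# Toy column standing in for a categorical feature such as main_category
toy = pd.DataFrame({'main_category': ['Games', 'Music', 'Games', 'Film']})

# Label encoding: every category name is mapped to a single integer id
label_map = {'Games': 0, 'Music': 1, 'Film': 2}
toy['main_category_label'] = toy['main_category'].map(label_map)

# One-hot encoding: one binary indicator column per category
onehot = pd.get_dummies(toy['main_category'], prefix='main_category')
print(toy.join(onehot))
```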
###Code
df_clean = df.copy()
indexes = df_clean[df_clean['country'] == 'N,0"'].index
df_clean.drop(indexes, inplace=True)
# drop live & undefined states
indexes = df_clean[(df_clean['state'] == 'live') | (df_clean['state'] == 'undefined')].index
df_clean.drop(indexes, inplace=True)
df_clean['campaign_length'] = pd.to_timedelta((pd.to_datetime(df_clean['deadline']) - pd.to_datetime(df_clean['launched'])), unit='days').dt.days
# df_clean = df_clean[(df_clean['usd_goal_real'] >= 10) & (df_clean['campaign_length'] >= 7)] # drop all with lower goal than 10$ and shorter than week
##########################################################
# """ Label Encoding - if you want to run this, just comment lines with quotation marks
jsons = dict()
map_dict = {
'category': categories_proper,
'main_category': main_categories_proper,
'country': countries_proper,
}
for key, val in map_dict.items():
df_clean[key] = df_clean[key].map(val)
json.dump(jsons, open('categories.json', 'w'))
df_clean.drop(['tokenized_name', 'currency', 'name'], inplace=True, axis=1)
df_clean.dropna(inplace=True)
# """
##########################################################
###########################################################
""" One-Hot Encoding - if you want to run this, just comment lines with quotation marks
column_transformer = ColumnTransformer([('encoder', OneHotEncoder(), ['category', 'main_category', 'currency', 'country'])], sparse_threshold=0, n_jobs=-1)
onehot = pd.DataFrame(column_transformer.fit_transform(df_clean)).set_index(df_clean.index)
new_cols_encoding = [col.replace('encoder__x0_', '').replace('encoder__x1_', '').replace('encoder__x2_', '').replace('encoder__x3_', '') for col in column_transformer.get_feature_names()]
onehot.columns = new_cols_encoding
df_clean = pd.concat([df_clean, onehot], axis=1)
df_clean.drop(['category', 'main_category', 'currency', 'country', 'tokenized_name'], inplace=True, axis=1)
df_clean = df_clean.loc[:,~df_clean.columns.duplicated()]
"""
##########################################################
df_xd = df_clean[~df_clean['state'].str.contains('successful')].index
df_clean.loc[df_clean['state'].str.contains('successful'), 'state'] = 1
df_clean.loc[df_xd, 'state'] = 0
df_clean['state'] = df_clean['state'].astype(int)
df_clean
###Output
_____no_output_____
###Markdown
Check features correlation. We consider two features dependent if abs(correlation) > 0.5.
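Beyond eyeballing the heatmap below, the pairs above that threshold can also be listed programmatically. A small sketch, assuming the `df_clean` frame from the previous cell:
```python
import numpy as np

corr = df_clean.corr()
# Keep only the upper triangle so each pair appears once and the diagonal is dropped
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
strong_pairs = upper.stack()                      # (feature_a, feature_b) -> correlation
strong_pairs = strong_pairs[strong_pairs.abs() > .5]
print(strong_pairs.sort_values(ascending=False))
```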
###Code
corr = df_clean.corr()
plt.matshow(corr)
plt.show()
corr[(corr > .5) | (corr < -.5)]
###Output
_____no_output_____
###Markdown
Delete unnecessary features. We delete duplicate features (like the converted goal value) and the ones that a user won't be able to provide up front, like backers.
###Code
df_shortened = df_clean.copy()
df_shortened.drop(['pledged', 'backers', 'usd pledged', 'deadline', 'launched', 'usd_pledged_real', 'goal'], axis=1, inplace=True)
df_shortened
###Output
_____no_output_____
###Markdown
Split data. Split the data into training and test sets, with 10% held out for testing (roughly 30k rows, which is enough); the cell below also carves a further validation split out of the training portion.
###Code
X = df_shortened.drop('state', axis=1)
y = df_shortened['state']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1, random_state=2137) # 90%:10%
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=.1, random_state=2137) # 81%:9% -> 90%
X_train
###Output
_____no_output_____
###Markdown
Data Over/Undersampling
###Code
print(pd.Series(y_train).value_counts())  # class balance before resampling
def sample_data(sampler, X_train, y_train, cols):
start = time.time()
X_train_new, y_train_new = sampler.fit_sample(X_train, y_train)
X_train_new = pd.DataFrame(X_train_new)
X_train_new.columns = cols
print(f"{type(sampler).__name__} done in {round(time.time() - start, 2)} seconds")
return {
'x': X_train_new,
'y': y_train_new,
}
train_data = sample_data(SMOTENC([0, 1, 2], n_jobs=-1), X_train, y_train, X_train.columns)
test_data = { 'x': X_test, 'y': y_test }
val_data = { 'x': X_val, 'y': y_val }
print(pd.Series(train_data['y']).value_counts())
print(pd.Series(test_data['y']).value_counts())
print(pd.Series(val_data['y']).value_counts())
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function safe_indexing is deprecated; safe_indexing is deprecated in version 0.22 and will be removed in version 0.24.
warnings.warn(msg, category=FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function safe_indexing is deprecated; safe_indexing is deprecated in version 0.22 and will be removed in version 0.24.
warnings.warn(msg, category=FutureWarning)
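###Markdown
The `sample_data` helper above accepts any imbalanced-learn sampler, so the under-samplers imported at the top of the notebook can be swapped in for comparison. A sketch, assuming the `X_train`/`y_train` split is still in memory (under-samplers shrink the majority class instead of synthesising minority rows, so they need no categorical column indices):
```python
# Hypothetical comparison runs; random_state and version are illustrative choices
train_data_rus = sample_data(RandomUnderSampler(random_state=2137), X_train, y_train, X_train.columns)
train_data_nm = sample_data(NearMiss(version=1, n_jobs=-1), X_train, y_train, X_train.columns)

print(pd.Series(train_data_rus['y']).value_counts())
print(pd.Series(train_data_nm['y']).value_counts())
```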
###Markdown
(Optional) Delete all irrelevant features. Drop the irrelevant features, but keep at most 5.
###Code
""" If you want to use this cell, just comment lines with quotation marks at the beginning
logistic = LogisticRegression(C=1, penalty="l2", max_iter=1000).fit(X_train, y_train)
model = SelectFromModel(logistic, prefit=True, max_features=5)
X_new = model.transform(X_train)
selected_features = pd.DataFrame(model.inverse_transform(X_new), index=X_train.index, columns=X_train.columns)
selected_columns = selected_features.columns[selected_features.var() != 0]
X_train = X_train[selected_columns]
X_test = X_test[selected_columns]
selected_features
"""
###Output
_____no_output_____
###Markdown
Standardization & min-max scaling. Standardization rescales with the mean and standard deviation, (x - mean) / std; min-max scaling rescales to the feature range, (x - min) / (max - min).
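The cell below applies `StandardScaler` to the non-categorical columns. If you prefer the min-max variant mentioned above, the same fit-on-train / transform-everything pattern works with `MinMaxScaler`; a minimal sketch, assuming the `train_data`/`test_data` dicts produced by the sampling step:
```python
from sklearn.preprocessing import MinMaxScaler

num_cols = [c for c in train_data['x'].columns if c not in ('category', 'main_category', 'country')]
mm_scaler = MinMaxScaler()
mm_scaler.fit(train_data['x'][num_cols])            # fit on the training split only

X_train_mm = train_data['x'].copy()
X_train_mm[num_cols] = mm_scaler.transform(X_train_mm[num_cols])
X_test_mm = test_data['x'].copy()
X_test_mm[num_cols] = mm_scaler.transform(X_test_mm[num_cols])
```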
###Code
def standarize(X_train, X_test, X_val):
cols = X_train.columns
indexes_x_train = X_train.index
indexes_x_test = X_test.index
indexes_x_val = X_val.index
X_train_categorical = X_train[['category', 'main_category', 'country']]
X_test_categorical = X_test[['category', 'main_category', 'country']]
X_val_categorical = X_val[['category', 'main_category', 'country']]
scaler = StandardScaler()
scaler.fit(X_train.drop(['category', 'main_category', 'country'], axis=1))
X_train = pd.concat([X_train_categorical, pd.DataFrame(scaler.transform(X_train.drop(['category', 'main_category', 'country'], axis=1))).set_index(indexes_x_train)], axis=1)
X_test = pd.concat([X_test_categorical, pd.DataFrame(scaler.transform(X_test.drop(['category', 'main_category', 'country'], axis=1))).set_index(indexes_x_test)], axis=1)
X_val = pd.concat([X_val_categorical, pd.DataFrame(scaler.transform(X_val.drop(['category', 'main_category', 'country'], axis=1))).set_index(indexes_x_val)], axis=1)
X_train.columns = cols
X_test.columns = cols
X_val.columns = cols
return X_train, X_test, X_val, scaler
train_data['x'], test_data['x'], val_data['x'], standarizer = standarize(train_data['x'], test_data['x'], val_data['x'])
test_data['x']
###Output
_____no_output_____
###Markdown
Load Standardizer (Scaler) from Web Server
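The download step is not shown in this notebook; a minimal sketch with `requests` and `joblib`, where the URL and filename are placeholders rather than a real endpoint:
```python
import io
import joblib
import requests

SCALER_URL = 'http://example.com/files/standarizer.joblib'  # placeholder URL, replace with your own
resp = requests.get(SCALER_URL)
resp.raise_for_status()
remote_standarizer = joblib.load(io.BytesIO(resp.content))  # joblib.load accepts a file-like object
# remote_standarizer.transform(...) can now be used exactly like the `standarizer` fitted above
```
Deep Learning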
###Code
! pip install -q tensorflow-model-optimization
from tensorflow_model_optimization.sparsity import keras as sparsity
l = tf.keras.layers
batch_size = 1024
epochs = 500
end_step = np.ceil(1.0 * train_data['x'].shape[0] / batch_size).astype(np.int32) * epochs
pruning_params = {
'pruning_schedule': sparsity.PolynomialDecay(initial_sparsity=0.01,
final_sparsity=0.2,
begin_step=round(end_step/epochs/2),
end_step=end_step,
frequency=end_step/epochs)
}
tf.random.set_seed(2137)
pruned_model = tf.keras.Sequential([
sparsity.prune_low_magnitude(
tf.keras.layers.Dense(12, input_dim=train_data['x'].shape[1], activation='selu'),
**pruning_params),
l.BatchNormalization(),
sparsity.prune_low_magnitude(
tf.keras.layers.Dense(12, activation='relu'),**pruning_params),
l.Flatten(),
sparsity.prune_low_magnitude(
tf.keras.layers.Dense(12*train_data['x'].shape[1], activation='selu'),**pruning_params),
l.Dropout(0.001),
sparsity.prune_low_magnitude(tf.keras.layers.Dense(1, activation='sigmoid'),
**pruning_params)
])
pruned_model.summary()
pruned_model.compile(
loss=tf.keras.losses.binary_crossentropy,
optimizer='Adam',
metrics=['accuracy'])
# Add a pruning step callback to peg the pruning step to the optimizer's
# step. Also add a callback to add pruning summaries to tensorboard
logdir = './pruning_logs'  # TensorBoard log directory for the pruning summaries below
callbacks = [
sparsity.UpdatePruningStep(),
sparsity.PruningSummaries(log_dir=logdir, profile_batch=0)
]
pruned_model.fit(train_data['x'],train_data['y'],
batch_size=batch_size,
epochs=epochs,
verbose=1,
callbacks=callbacks,
validation_data=(val_data['x'],val_data['y']))
score = pruned_model.evaluate(test_data['x'],test_data['y'], verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# Save the trained network model
pruned_model.save('Model_Sieci_Glebokiego_Uczenia')
# Load the network model back from disk
siec = tf.keras.models.load_model('Model_Sieci_Glebokiego_Uczenia')
import seaborn as sns
y_pred=siec.predict_classes(train_data['x'])
con_mat = tf.math.confusion_matrix(labels=train_data['y'], predictions=y_pred).numpy()
con_mat_norm = np.around(con_mat.astype('float') / con_mat.sum(axis=1)[:, np.newaxis], decimals=2)
con_mat_df = pd.DataFrame(con_mat_norm)
figure = plt.figure(figsize=(8, 8))
sns.heatmap(con_mat_df, annot=True,cmap=plt.cm.Blues)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
###Output
_____no_output_____ |
Capstone WEEK 3-1: lab_jupyter_launch_site_location.ipynb | ###Markdown
**Launch Sites Locations Analysis with Folium** Estimated time needed: **40** minutes The launch success rate may depend on many factors such as payload mass, orbit type, and so on. It may also depend on the location and proximities of a launch site, i.e., the initial position of rocket trajectories. Finding an optimal location for building a launch site certainly involves many factors, and hopefully we can discover some of them by analyzing the existing launch site locations. In the previous exploratory data analysis labs, you visualized the SpaceX launch dataset using `matplotlib` and `seaborn` and discovered some preliminary correlations between the launch site and success rates. In this lab, you will be performing more interactive visual analytics using `Folium`. Objectives This lab contains the following tasks:* **TASK 1:** Mark all launch sites on a map* **TASK 2:** Mark the success/failed launches for each site on the map* **TASK 3:** Calculate the distances between a launch site and its proximities. After completing the above tasks, you should be able to find some geographical patterns about launch sites. Let's first import the required Python packages for this lab:
###Code
!pip3 install folium
!pip3 install wget
import folium
import wget
import pandas as pd
# Import folium MarkerCluster plugin
from folium.plugins import MarkerCluster
# Import folium MousePosition plugin
from folium.plugins import MousePosition
# Import folium DivIcon plugin
from folium.features import DivIcon
###Output
_____no_output_____
###Markdown
If you need to refresh your memory about folium, you may download and refer to this previous folium lab: [Generating Maps with Python](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/labs/module\_3/DV0101EN-3-5-1-Generating-Maps-in-Python-py-v2.0.ipynb) Task 1: Mark all launch sites on a map First, let's try to add each site's location on a map using site's latitude and longitude coordinates The following dataset with the name `spacex_launch_geo.csv` is an augmented dataset with latitude and longitude added for each site.
###Code
# Download and read the `spacex_launch_geo.csv`
spacex_csv_file = wget.download('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/spacex_launch_geo.csv')
spacex_df=pd.read_csv(spacex_csv_file)
###Output
_____no_output_____
###Markdown
Now, you can take a look at the coordinates for each site.
###Code
# Select relevant sub-columns: `Launch Site`, `Lat(Latitude)`, `Long(Longitude)`, `class`
spacex_df = spacex_df[['Launch Site', 'Lat', 'Long', 'class']]
launch_sites_df = spacex_df.groupby(['Launch Site'], as_index=False).first()
launch_sites_df = launch_sites_df[['Launch Site', 'Lat', 'Long']]
launch_sites_df
###Output
_____no_output_____
###Markdown
The coordinates above are just plain numbers that cannot give you any intuitive insight into where those launch sites are. If you are very good at geography, you can interpret those numbers directly in your mind. If not, that's fine too. Let's visualize those locations by pinning them on a map. We first need to create a folium `Map` object, with an initial center location at the NASA Johnson Space Center in Houston, Texas.
###Code
# Start location is NASA Johnson Space Center
nasa_coordinate = [29.559684888503615, -95.0830971930759]
site_map = folium.Map(location=nasa_coordinate, zoom_start=10)
###Output
_____no_output_____
###Markdown
We could use `folium.Circle` to add a highlighted circle area with a text label on a specific coordinate. For example,
###Code
# Create a highlighted circle at NASA Johnson Space Center's coordinate with a popup label showing its name
circle = folium.Circle(nasa_coordinate, radius=1000, color='#d35400', fill=True).add_child(folium.Popup('NASA Johnson Space Center'))
# Create a marker at NASA Johnson Space Center's coordinate with a text icon showing its name
marker = folium.map.Marker(
nasa_coordinate,
# Create an icon as a text label
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % 'NASA JSC',
)
)
site_map.add_child(circle)
site_map.add_child(marker)
###Output
_____no_output_____
###Markdown
and you should find a small yellow circle near the city of Houston and you can zoom-in to see a larger circle. Now, let's add a circle for each launch site in data frame `launch_sites` *TODO:* Create and add `folium.Circle` and `folium.Marker` for each launch site on the site map An example of folium.Circle: `folium.Circle(coordinate, radius=1000, color='000000', fill=True).add_child(folium.Popup(...))` An example of folium.Marker: `folium.map.Marker(coordinate, icon=DivIcon(icon_size=(20,20),icon_anchor=(0,0), html='%s' % 'label', ))`
###Code
# Initial the map
site_map = folium.Map(location=nasa_coordinate, zoom_start=5)
# For each launch site, add a Circle object based on its coordinate (Lat, Long) values. In addition, add Launch site name as a popup label
for lat, long, label in zip(launch_sites_df['Lat'], launch_sites_df['Long'], launch_sites_df['Launch Site']):
circle=folium.Circle([lat,long],radius=1000,color='#d35400', fill=True).add_child(folium.Popup(str(label)))
marker=folium.Marker([lat,long],
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size:20; color:#d35400;">'+label+'</div><br/>',
) )
site_map.add_child(circle)
site_map.add_child(marker)
site_map
###Output
_____no_output_____
###Markdown
The generated map with marked launch sites should look similar to the following: Now, you can explore the map by zoom-in/out the marked areas, and try to answer the following questions:* Are all launch sites in proximity to the Equator line?* Are all launch sites in very close proximity to the coast?Also please try to explain your findings. Task 2: Mark the success/failed launches for each site on the map Next, let's try to enhance the map by adding the launch outcomes for each site, and see which sites have high success rates.Recall that data frame spacex_df has detailed launch records, and the `class` column indicates if this launch was successful or not
###Code
spacex_df.tail(10)
spacex_df
###Output
_____no_output_____
###Markdown
Next, let's create markers for all launch records. If a launch was successful `(class=1)`, then we use a green marker, and if a launch failed, we use a red marker `(class=0)`. Note that a launch only happens at one of the four launch sites, which means many launch records will have the exact same coordinate. Marker clusters can be a good way to simplify a map containing many markers having the same coordinate. Let's first create a `MarkerCluster` object
###Code
marker_cluster = MarkerCluster()
###Output
_____no_output_____
###Markdown
*TODO:* Create a new column in `launch_sites` dataframe called `marker_color` to store the marker colors based on the `class` value
###Code
launch_sites_df['marker_color'] = ''
# Apply a function to check the value of `class` column
# If class=1, marker_color value will be green
# If class=0, marker_color value will be red
# Function to assign color to launch outcome
def assign_marker_color(launch_outcome):
if launch_outcome == 1:
return 'green'
else:
return 'red'
spacex_df['marker_color'] = spacex_df['class'].apply(assign_marker_color)
spacex_df.tail(10)
###Output
_____no_output_____
###Markdown
*TODO:* For each launch result in `spacex_df` data frame, add a `folium.Marker` to `marker_cluster`
###Code
# Add marker_cluster to current site_map
site_map.add_child(marker_cluster)
# for each row in spacex_df data frame
# create a Marker object with its coordinate
# and customize the Marker's icon property to indicate if this launch was successed or failed,
# e.g., icon=folium.Icon(color='white', icon_color=row['marker_color'])
for index, row in spacex_df.iterrows():
# TODO: Create and add a Marker cluster to the site map
marker= folium.Marker((row['Lat'],row['Long']),
icon=folium.Icon(color = 'white',icon_color=row['marker_color']))
marker_cluster.add_child(marker)
site_map
###Output
_____no_output_____
###Markdown
Your updated map may look like the following screenshots: From the color-labeled markers in marker clusters, you should be able to easily identify which launch sites have relatively high success rates. TASK 3: Calculate the distances between a launch site and its proximities Next, we need to explore and analyze the proximities of launch sites. Let's first add a `MousePosition` on the map to get the coordinate (Lat, Long) for a mouse-over point on the map. As such, while you are exploring the map, you can easily find the coordinates of any points of interest (such as a railway)
###Code
# Add Mouse Position to get the coordinate (Lat, Long) for a mouse over on the map
formatter = "function(num) {return L.Util.formatNum(num, 5);};"
mouse_position = MousePosition(
position='topright',
separator=' Long: ',
empty_string='NaN',
lng_first=False,
num_digits=20,
prefix='Lat:',
lat_formatter=formatter,
lng_formatter=formatter,
)
site_map.add_child(mouse_position)
site_map
###Output
_____no_output_____
###Markdown
Now zoom in to a launch site and explore its proximity to see if you can easily find any railway, highway, coastline, etc. Move your mouse to these points and mark down their coordinates (shown on the top-right) in order to calculate the distance to the launch site. You can calculate the distance between two points on the map based on their `Lat` and `Long` values using the following method:
###Code
from math import sin, cos, sqrt, atan2, radians
def calculate_distance(lat1, lon1, lat2, lon2):
# approximate radius of earth in km
R = 6373.0
lat1 = radians(lat1)
lon1 = radians(lon1)
lat2 = radians(lat2)
lon2 = radians(lon2)
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
c = 2 * atan2(sqrt(a), sqrt(1 - a))
distance = R * c
return distance
###Output
_____no_output_____
###Markdown
*TODO:* Mark down a point on the closest coastline using MousePosition and calculate the distance between the coastline point and the launch site.
###Code
# find coordinate of the closet coastline
# e.g.,: Lat: 28.56367 Lon: -80.57163
# distance_coastline = calculate_distance(launch_site_lat, launch_site_lon, coastline_lat, coastline_lon)
launch_site_lat = 28.563197
launch_site_lon = -80.576820
coastline_lat = 28.5631
coastline_lon = -80.56807
distance_coastline = calculate_distance(launch_site_lat, launch_site_lon, coastline_lat, coastline_lon)
distance_coastline
###Output
_____no_output_____
###Markdown
*TODO:* After obtained its coordinate, create a `folium.Marker` to show the distance
###Code
# Create and add a folium.Marker on your selected closest coastline point on the map
# Display the distance between coastline point and launch site using the icon property
# for example
# distance_marker = folium.Marker(
# coordinate,
# icon=DivIcon(
# icon_size=(20,20),
# icon_anchor=(0,0),
# html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance),
# )
# )
distance_coastline = calculate_distance(launch_site_lat, launch_site_lon, coastline_lat, coastline_lon)
coordinate_coastline = [coastline_lat, coastline_lon]
distance_marker = folium.Marker(
coordinate_coastline,
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance_coastline),
)
)
###Output
_____no_output_____
###Markdown
*TODO:* Draw a `PolyLine` between a launch site to the selected coastline point
###Code
# Create a `folium.PolyLine` object using the coastline coordinates and launch site coordinate
#lines=folium.PolyLine(locations=coordinates, weight=1)
coordinates_coastline = [[launch_site_lat, launch_site_lon], [coastline_lat, coastline_lon]]
lines=folium.PolyLine(locations=coordinates_coastline, weight=1)
site_map.add_child(lines)
###Output
_____no_output_____
###Markdown
Your updated map with distance line should look like the following screenshot: *TODO:* Similarly, you can draw a line between a launch site and its closest city, railway, highway, etc. You need to use `MousePosition` to find their coordinates on the map first. A railway map symbol may look like this: A highway map symbol may look like this: A city map symbol may look like this:
###Code
# Create a marker with distance to a closest city, railway, highway, etc.
# Draw a line between the marker to the launch site
# Create a marker with distance to a closest city
# Draw a line between the marker to the launch site
launch_site_lat = 28.563197
launch_site_lon = -80.576820
Titusville_lat = 28.61105
Titusville_lon = -80.81028
distance_Titusville = calculate_distance(launch_site_lat, launch_site_lon, Titusville_lat, Titusville_lon)
coordinate_Titusville = [Titusville_lat, Titusville_lon]
distance_marker = folium.Marker(
coordinate_Titusville,
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance_Titusville),
)
)
coordinates_Titusville = [[launch_site_lat, launch_site_lon], [Titusville_lat, Titusville_lon]]
lines=folium.PolyLine(locations=coordinates_Titusville, weight=1)
site_map.add_child(lines)
# Create a marker with distance to a closest city, railway, highway, etc.
# Draw a line between the marker to the launch site
# Create a marker with distance to a closest highway
# Draw a line between the marker to the launch site
launch_site_lat = 28.563197
launch_site_lon = -80.576820
SamuelCphilips_lat = 28.5634
SamuelCphilips_lon = -80.57086
distance_SamuelCphilips = calculate_distance(launch_site_lat, launch_site_lon, SamuelCphilips_lat, SamuelCphilips_lon)
coordinate_SamuelCphilips = [SamuelCphilips_lat,SamuelCphilips_lon]
distance_marker = folium.Marker(
coordinate_SamuelCphilips,
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance_SamuelCphilips),
)
)
coordinates_SamuelCphilips = [[launch_site_lat, launch_site_lon], [SamuelCphilips_lat, SamuelCphilips_lon]]
lines=folium.PolyLine(locations=coordinates_SamuelCphilips, weight=1)
site_map.add_child(lines)
# Create a marker with distance to a closest city, railway, highway, etc.
# Draw a line between the marker to the launch site
# Create a marker with distance to a closest railway
# Draw a line between the marker to the launch site
launch_site_lat = 28.563197
launch_site_lon = -80.576820
nasaRailway_lat = 28.5724
nasaRailway_lon = -80.58525
distance_nasaRailway = calculate_distance(launch_site_lat, launch_site_lon, nasaRailway_lat, nasaRailway_lon)
coordinate_nasaRailway = [nasaRailway_lat, nasaRailway_lon]
distance_marker = folium.Marker(
coordinate_nasaRailway,
icon=DivIcon(
icon_size=(20,20),
icon_anchor=(0,0),
html='<div style="font-size: 12; color:#d35400;"><b>%s</b></div>' % "{:10.2f} KM".format(distance_nasaRailway),
)
)
coordinates_nasaRailway = [[launch_site_lat, launch_site_lon], [nasaRailway_lat, nasaRailway_lon]]
lines=folium.PolyLine(locations=coordinates_nasaRailway, weight=1)
site_map.add_child(lines)
###Output
_____no_output_____ |
Kopie_von_Kopie_von_batch_process_4_neural_style_tf_shareable.ipynb | ###Markdown
Style Transfer Batch Processor. Style transfer is the process of applying one texture (the Style image) to the content of another image (the Content image). This notebook is based on [derrick neural style playground](https://github.com/dvschultz/ai/blob/master/neural_style_tf.ipynb). Only use this once you understand the playground! This notebook is meant to speed up the process. I run this notebook across a few accounts for processing. Questions? twitter: @errthangisalive Set up our Runtime: Colab needs to know we need to use a GPU-powered machine in order to do style transfers. At the top of this page, click on the `Runtime` tab, then select `Change runtime type`. In the modal that pops up, select `GPU` under the `Hardware accelerator` options. We then need to make sure we're using the latest version of Tensorflow 1, otherwise we get some annoying messages.
###Code
#install TF 1.15 to avoid some annoying warning messages
# Restart runtime using 'Runtime' -> 'Restart runtime...'
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
!nvidia-smi
###Output
_____no_output_____
###Markdown
Install the neural-style-tf library. We're going to work with the library called Neural Style. This is a version I've customized to do a couple of things that I think are helpful for artists. In the next cell, type `Shift+Return` to run the code
###Code
#import some image display tools
from IPython.display import Image, display
#install the library in colab
!git clone https://github.com/dvschultz/neural-style-tf
#change into that directory
%cd neural-style-tf/
#install the library dependencies (it's likely Colab already has them installed, but let's be sure)
!pip install -r requirements.txt
#install the VGG19 pre-trained model
!wget http://www.vlfeat.org/matconvnet/models/imagenet-vgg-verydeep-19.mat
###Output
_____no_output_____
###Markdown
2 ways to do Batch Processing: 1. Optical flow, this process takes a long time but I think gives better results. 2. Batch style transfer, this process is faster. We'll do everything from Google Drive, so let's connect that first.
###Code
from google.colab import drive
drive.mount('/content/drive')
cd /content/neural-style-tf/
###Output
_____no_output_____
###Markdown
Optical flow for video in 1, 2, 3 steps: 1. first we create image sequences from the video with ffmpeg; 2. then we generate optical flow files; 3. then we combine the files into a video with ffmpeg
###Code
#1 create image sequences from video (tip: use a google drive path)
!ffmpeg -i inputsomefilename.mp4 outputsomefilename%04d.png
#2 generate opt-flow frame by frame (tip: use a google drive path)
cd /content/neural-style-tf/video_input/
!bash ./make-opt-flow.sh "inputsomefilename%04d.png" "./outputsomefoldername"
#3 style transfers frames by frame use (tip: use a google drive path)
cd /content/neural-style-tf/
!python neural_style.py --video --video_input_dir ./outputsomefoldername --style_imgs yourstyleimage.png --max_size 1024 --frame_iterations 500 --style_scale 0.1 --content_weight 1e0 --start_frame 1 --end_frame 99 --first_frame_iterations 400 --verbose --content_frame_frmt somefilename{}.png --init_img_type random --video_output_dir /myDrivefoldersomewhere
#4 create video from image sequences (tip: use a google drive path)
!ffmpeg -i inputsomefilename%04d.png -vcodec mpeg4 outputsomefilename.mp4
###Output
_____no_output_____
###Markdown
Batch style transfer: this is fairly easy, a batch process over a folder of images. First you must edit neural_style.py and replace this line (648): ``` img_path = os.path.join(out_dir, args.img_output_dir+'-'+str(args.max_iterations)+'.png')``` with: ``` img_path = os.path.join(out_dir, str(args.content_img.split(".")[0])+'.png')```
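If you would rather not edit the file by hand, the same one-line swap can be scripted. A sketch, assuming the repo was cloned to the path used earlier in this notebook and that line 648 still matches the text quoted above:
```python
from pathlib import Path

path = Path('/content/neural-style-tf/neural_style.py')
old = "img_path = os.path.join(out_dir, args.img_output_dir+'-'+str(args.max_iterations)+'.png')"
new = """img_path = os.path.join(out_dir, str(args.content_img.split(".")[0])+'.png')"""

src = path.read_text()
assert old in src, 'expected line not found, check the neural_style.py version'
path.write_text(src.replace(old, new))
```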
###Code
#(tip: use a google drive path)
import os
for files in os.listdir("/content/drive/MyDrive/input4"):
!python neural_style.py --content_img_dir /content/drive/MyDrive/input4 --content_img $files --style_imgs f2.jpg --max_size 500 --max_iterations 500 --style_scale 0.75 --content_weight 1e0 --img_output_dir /content/drive/MyDrive/output
###Output
_____no_output_____ |
.ipynb_checkpoints/IexFinder-checkpoint.ipynb | ###Markdown
Symbolic Calculation of Extreme Values Zoufiné Lauer-Baré ([LinkedIn](https://www.linkedin.com/in/zoufine-lauer-bare-14677a77), [OrcidID](https://orcid.org/0000-0002-7083-6909)). The ```exFinder``` package provides methods that calculate the extreme values of multivariate functions $$f:\mathbb{R}^2\to\mathbb{R}$$ symbolically. It is based mathematically on ```SymPy``` and ```NumPy```.
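For intuition, this is roughly what such a routine has to do under the hood: solve $\nabla f = 0$ and classify each critical point with the Hessian. A hand-rolled sketch with SymPy (not the `exFinder` implementation itself):
```python
import sympy as sym

x, y = sym.symbols('x y', real=True)

def find_extrema(f):
    # Critical points: both partial derivatives vanish
    grad = [sym.diff(f, v) for v in (x, y)]
    critical_points = sym.solve(grad, (x, y), dict=True)
    H = sym.hessian(f, (x, y))
    for cp in critical_points:
        Hc = H.subs(cp)
        det, fxx = Hc.det(), Hc[0, 0]
        if det > 0 and fxx > 0:
            kind = 'local minimum'
        elif det > 0 and fxx < 0:
            kind = 'local maximum'
        elif det < 0:
            kind = 'saddle point'
        else:
            kind = 'second-derivative test inconclusive (det = 0)'
        print(cp, '->', kind)

find_extrema(sym.exp(-(x**2 + y**2)))   # (0, 0) is a local maximum
```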
###Code
import sympy as sym
# from exfinder import exFinder  # assumed import path for the exFinder helper used below
x, y = sym.symbols('x y')
f = (x-2)**4+(x-2*y)**2
exFinder(f)
f=y**2*(x-1)+x**2*(x+1)
exFinder(f)
f = sym.exp(-(x**2+y**2))
exFinder(f)
###Output
_____no_output_____ |
repayment_simulations.ipynb | ###Markdown
Repayment of varying total amounts. Assumptions: - The same annual income progression - starting with 27k and increasing 5% each year - Constant inflation rate of 2% - Varying total amounts to pay off - 30k, 40k, 50k
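The cells in this section call a few helpers defined earlier in the notebook (`simulate_repayment`, `get_monthly_discount_factor`, `calculate_present_value`, `months_till_clearance`). For readers jumping in here, the sketch below is a reconstruction of the discounting helpers that is consistent with how they are used; it is an assumption, not the notebook's exact code:
```python
import numpy as np

def get_monthly_discount_factor(annual_rpis):
    """Cumulative month-by-month discount factors implied by the annual RPI path."""
    factors, cumulative = [], 1.0
    for annual_rpi in annual_rpis:
        monthly_rate = (1 + annual_rpi / 100) ** (1 / 12)
        for _ in range(12):
            cumulative *= monthly_rate
            factors.append(cumulative)
    return factors

def calculate_present_value(monthly_repayments, annual_rpis):
    """Deflate each monthly repayment back to today's money and sum."""
    factors = get_monthly_discount_factor(annual_rpis)[:len(monthly_repayments)]
    return np.sum(np.array(monthly_repayments) / np.array(factors))

def months_till_clearance(balance):
    """Months until the outstanding balance first reaches zero (or the full horizon if it never does)."""
    balance = np.asarray(balance)
    cleared = np.flatnonzero(balance <= 0)
    return int(cleared[0]) + 1 if cleared.size else len(balance)
```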
###Code
annual_rpis = [2 for _ in range(30)]
annual_salaries = [27000 * 1.05 ** i for i in range(30)]
total_debts = [30000, 40000, 50000]
repayments0, interest_added_0, balance_0 = simulate_repayment(loan_outstanding=total_debts[0], annual_rpis=annual_rpis, annual_salaries=annual_salaries)
repayments1, interest_added_1, balance_1 = simulate_repayment(loan_outstanding=total_debts[1], annual_rpis=annual_rpis, annual_salaries=annual_salaries)
repayments2, interest_added_2, balance_2 = simulate_repayment(loan_outstanding=total_debts[2], annual_rpis=annual_rpis, annual_salaries=annual_salaries)
fig, ax = plt.subplots(2, 1, figsize=(12, 12))
x_months = np.arange(1, len(balance_0)+1) / 12
ax[0].plot(x_months, balance_0, label="debt balance - 30k debt")
ax[0].plot(x_months, balance_1, label="debt balance - 40k debt")
ax[0].plot(x_months, balance_2, label="debt balance - 50k debt")
ax[0].grid()
ax[0].set_xlabel("Years of repayment")
ax[0].set_ylabel("Outstanding balance")
ax[0].legend()
ax[1].plot(x_months, repayments0, label="monthly repayment - 30k debt")
ax[1].plot(x_months, repayments1, label="monthly repayment - 40k debt")
ax[1].plot(x_months, repayments2, label="monthly repayment - 50k debt")
ax[1].plot(x_months, interest_added_0, label="monthly interest added - 30k debt")
ax[1].plot(x_months, interest_added_1, label="monthly interest added - 40k debt")
ax[1].plot(x_months, interest_added_2, label="monthly interest added - 50k debt")
ax[1].set_xlabel("Years of repayment")
ax[1].set_ylabel("Interest added / Monthly repayment")
ax[1].grid()
ax[1].legend()
plt.show()
pv_repayments0 = calculate_present_value(repayments0, annual_rpis)
pv_repayments1 = calculate_present_value(repayments1, annual_rpis)
pv_repayments2 = calculate_present_value(repayments2, annual_rpis)
years_repayments0 = months_till_clearance(balance_0) / 12
years_repayments1 = months_till_clearance(balance_1) / 12
years_repayments2 = months_till_clearance(balance_2) / 12
print(f"Present Value of repayments - 30k debt: {pv_repayments0} in {np.round(years_repayments0, 1)} years")
print(f"Present Value of repayments - 40k debt: {pv_repayments1} in {np.round(years_repayments1, 1)} years")
print(f"Present Value of repayments - 50k debt: {pv_repayments2} in {np.round(years_repayments2, 1)} years")
###Output
_____no_output_____
###Markdown
Repayment with varying inflation rates. Assumptions: - The same annual income progression - starting with 27k and increasing 5% each year - Constant amount to pay off - 30k - Varying average inflation rates 2%, 3%, 4%
###Code
annual_rpi_2 = [2 for _ in range(30)]
annual_rpi_3 = [3 for _ in range(30)]
annual_rpi_4 = [4 for _ in range(30)]
annual_salaries = [27000 * 1.05 ** i for i in range(30)]
total_debt = 30000
repayments0, interest_added_0, balance_0 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpi_2, annual_salaries=annual_salaries)
repayments1, interest_added_1, balance_1 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpi_3, annual_salaries=annual_salaries)
repayments2, interest_added_2, balance_2 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpi_4, annual_salaries=annual_salaries)
fig, ax = plt.subplots(2, 1, figsize=(12, 12))
x_months = np.arange(1, len(balance_0)+1) / 12
ax[0].plot(x_months, balance_0, label=r"debt balance - 2% rpi")
ax[0].plot(x_months, balance_1, label=r"debt balance - 3% rpi")
ax[0].plot(x_months, balance_2, label=r"debt balance - 4% rpi")
ax[0].set_xlabel("Years of repayment")
ax[0].set_ylabel("Outstanding balance")
ax[0].grid()
ax[0].legend()
ax[1].plot(x_months, repayments0, label=r"monthly repayment - 2% rpi")
ax[1].plot(x_months, repayments1, label=r"monthly repayment - 3% rpi")
ax[1].plot(x_months, repayments2, label=r"monthly repayment - 4% rpi")
ax[1].plot(x_months, interest_added_0, label=r"monthly interest added - 2% rpi")
ax[1].plot(x_months, interest_added_1, label=r"monthly interest added - 3% rpi")
ax[1].plot(x_months, interest_added_2, label=r"monthly interest added - 4% rpi")
ax[1].set_xlabel("Years of repayment")
ax[1].set_ylabel("Interest added / Monthly repayment")
ax[1].grid()
ax[1].legend()
plt.show()
pv_repayments0 = np.sum(np.array(repayments0) / np.array(get_monthly_discount_factor(annual_rpi_2)[:len(repayments0)]))
pv_repayments1 = np.sum(np.array(repayments1) / np.array(get_monthly_discount_factor(annual_rpi_3)[:len(repayments1)]))
pv_repayments2 = np.sum(np.array(repayments2) / np.array(get_monthly_discount_factor(annual_rpi_4)[:len(repayments2)]))
years_repayments0 = months_till_clearance(balance_0) / 12
years_repayments1 = months_till_clearance(balance_1) / 12
years_repayments2 = months_till_clearance(balance_2) / 12
print(f"Present Value of repayments - 2% rpi: {pv_repayments0} in {np.round(years_repayments0, 1)} years")
print(f"Present Value of repayments - 3% rpi: {pv_repayments1} in {np.round(years_repayments1, 1)} years")
print(f"Present Value of repayments - 4% rpi: {pv_repayments2} in {np.round(years_repayments2, 1)} years")
###Output
_____no_output_____
###Markdown
Repayment with varying salary progressions. Assumptions: - Constant amount to pay off - 30k - Constant average inflation rate - 2% - Varying annual income progression - 23k with 3%, 27k with 5%, 50k with 15%
###Code
total_debt = 30000
annual_rpis = [2 for _ in range(30)]
annual_salaries_1 = [23000 * 1.03 ** i for i in range(30)]
annual_salaries_2 = [27000 * 1.05 ** i for i in range(30)]
annual_salaries_3 = [50000 * 1.15 ** i for i in range(30)]
repayments0, interest_added_0, balance_0 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries_1)
repayments1, interest_added_1, balance_1 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries_2)
repayments2, interest_added_2, balance_2 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries_3)
fig, ax = plt.subplots(2, 1, figsize=(12, 12))
x_months = np.arange(1, len(balance_0)+1) / 12
ax[0].plot(x_months, balance_0, label=r"debt balance - 23k salary")
ax[0].plot(x_months, balance_1, label=r"debt balance - 27k salary")
ax[0].plot(x_months, balance_2, label=r"debt balance - 50k salary")
ax[0].set_xlabel("Years of repayment")
ax[0].set_ylabel("Outstanding balance")
ax[0].grid()
ax[0].legend()
ax[1].plot(x_months, repayments0, label=r"monthly repayment - 23k salary")
ax[1].plot(x_months, repayments1, label=r"monthly repayment - 27k salary")
ax[1].plot(x_months, repayments2, label=r"monthly repayment - 50k salary")
ax[1].plot(x_months, interest_added_0, label=r"monthly interest added - 23k salary")
ax[1].plot(x_months, interest_added_1, label=r"monthly interest added - 27k salary")
ax[1].plot(x_months, interest_added_2, label=r"monthly interest added - 50k salary")
ax[1].set_xlabel("Years of repayment")
ax[1].set_ylabel("Interest added / Monthly repayment")
ax[1].grid()
ax[1].legend()
plt.show()
pv_repayments0 = np.sum(np.array(repayments0) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments0)]))
pv_repayments1 = np.sum(np.array(repayments1) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments1)]))
pv_repayments2 = np.sum(np.array(repayments2) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments2)]))
years_repayments0 = months_till_clearance(balance_0) / 12
years_repayments1 = months_till_clearance(balance_1) / 12
years_repayments2 = months_till_clearance(balance_2) / 12
print(f"Present Value of repayments - 23k salary: {pv_repayments0} in {np.round(years_repayments0, 1)} years")
print(f"Present Value of repayments - 27k salary: {pv_repayments1} in {np.round(years_repayments1, 1)} years")
print(f"Present Value of repayments - 50k salary: {pv_repayments2} in {np.round(years_repayments2, 1)} years")
###Output
_____no_output_____
###Markdown
Repayment with discretionary repayments. Assumptions: - Constant amount to pay off - 30k - Constant average inflation rate - 2% - Constant annual income progression - 40k with 5% - Varying amounts of discretionary repayments - 0, 5k, 10k
###Code
total_debt = 30000
annual_rpis = [2 for _ in range(30)]
annual_salaries = [40000 * 1.05 ** i for i in range(30)]
repayments0, interest_added_0, balance_0 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries)
repayments1, interest_added_1, balance_1 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries, discretionary_repayments={0: 5000})
repayments2, interest_added_2, balance_2 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries, discretionary_repayments={0: 10000})
fig, ax = plt.subplots(2, 1, figsize=(12, 12))
x_months = np.arange(1, len(balance_0)+1) / 12
ax[0].plot(x_months, balance_0, label=r"debt balance - 0k repayment")
ax[0].plot(x_months, balance_1, label=r"debt balance - 5k repayment")
ax[0].plot(x_months, balance_2, label=r"debt balance - 10k repayment")
ax[0].set_xlabel("Years of repayment")
ax[0].set_ylabel("Outstanding balance")
ax[0].grid()
ax[0].legend()
ax[1].plot(x_months, repayments0, label=r"monthly repayment - 0k repayment")
ax[1].plot(x_months, repayments1, label=r"monthly repayment - 5k repayment")
ax[1].plot(x_months, repayments2, label=r"monthly repayment - 10k repayment")
ax[1].plot(x_months, interest_added_0, label=r"monthly interest added - 0k repayment")
ax[1].plot(x_months, interest_added_1, label=r"monthly interest added - 5k repayment")
ax[1].plot(x_months, interest_added_2, label=r"monthly interest added - 10k repayment")
ax[1].set_xlabel("Years of repayment")
ax[1].set_ylabel("Interest added / Monthly repayment")
ax[1].grid()
ax[1].legend()
ax[1].set_ylim(0, 500)
plt.show()
pv_repayments0 = np.sum(np.array(repayments0) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments0)]))
pv_repayments1 = np.sum(np.array(repayments1) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments1)]))
pv_repayments2 = np.sum(np.array(repayments2) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments2)]))
years_repayments0 = months_till_clearance(balance_0) / 12
years_repayments1 = months_till_clearance(balance_1) / 12
years_repayments2 = months_till_clearance(balance_2) / 12
print(f"Present Value of repayments - 0k repayment: {pv_repayments0} in {np.round(years_repayments0, 1)} years")
print(f"Present Value of repayments - 5k repayment: {pv_repayments1} in {np.round(years_repayments1, 1)} years")
print(f"Present Value of repayments - 10k repayment: {pv_repayments2} in {np.round(years_repayments2, 1)} years")
###Output
_____no_output_____
###Markdown
Calculating the real (inflation-adjusted) rate of return of each repayment _investment_
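The annualised figure below treats each lump-sum repayment as an investment whose payoff is the reduction in the present value of the remaining repayments:
$$ r_{\text{annual}} = \left(1 + \frac{PV_{\text{without lump sum}} - PV_{\text{with lump sum}}}{\text{lump sum}}\right)^{1/\text{years}} - 1 $$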
###Code
investment_5k_return = (1 + (pv_repayments0 - pv_repayments1) / 5000) ** (1 / (years_repayments0))
investment_10k_return = (1 + (pv_repayments0 - pv_repayments2) / 10000) ** (1 / (years_repayments0))
print(f"Annualised rate of return on debt repayment: {np.round((investment_5k_return - 1) * 100, 2)}%")
print(f"Annualised rate of return on debt repayment: {np.round((investment_10k_return - 1) * 100, 2)}%")
total_debt = 45000
annual_rpis = [2 for _ in range(30)]
annual_salaries = [27000 * 1.05 ** i for i in range(30)]
repayments0, interest_added_0, balance_0 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries)
repayments1, interest_added_1, balance_1 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries, discretionary_repayments={0: 5000})
repayments2, interest_added_2, balance_2 = simulate_repayment(loan_outstanding=total_debt, annual_rpis=annual_rpis, annual_salaries=annual_salaries, discretionary_repayments={0: 10000})
fig, ax = plt.subplots(2, 1, figsize=(12, 12))
x_months = np.arange(1, len(balance_0)+1) / 12
ax[0].plot(x_months, balance_0, label=r"debt balance - 0k repayment")
ax[0].plot(x_months, balance_1, label=r"debt balance - 5k repayment")
ax[0].plot(x_months, balance_2, label=r"debt balance - 10k repayment")
ax[0].set_xlabel("Years of repayment")
ax[0].set_ylabel("Outstanding balance")
ax[0].grid()
ax[0].legend()
ax[1].plot(x_months, repayments0, label=r"monthly repayment - 0k repayment")
ax[1].plot(x_months, repayments1, label=r"monthly repayment - 5k repayment")
ax[1].plot(x_months, repayments2, label=r"monthly repayment - 10k repayment")
ax[1].plot(x_months, interest_added_0, label=r"monthly interest added - 0k repayment")
ax[1].plot(x_months, interest_added_1, label=r"monthly interest added - 5k repayment")
ax[1].plot(x_months, interest_added_2, label=r"monthly interest added - 10k repayment")
ax[1].set_xlabel("Years of repayment")
ax[1].set_ylabel("Interest added / Monthly repayment")
ax[1].grid()
ax[1].legend()
ax[1].set_ylim(0, 700)
plt.show()
pv_repayments0 = np.sum(np.array(repayments0) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments0)]))
pv_repayments1 = np.sum(np.array(repayments1) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments1)]))
pv_repayments2 = np.sum(np.array(repayments2) / np.array(get_monthly_discount_factor(annual_rpis)[:len(repayments2)]))
years_repayments0 = months_till_clearance(balance_0) / 12
years_repayments1 = months_till_clearance(balance_1) / 12
years_repayments2 = months_till_clearance(balance_2) / 12
print(f"Present Value of repayments - 0k repayment: {pv_repayments0} in {np.round(years_repayments0, 1)} years")
print(f"Present Value of repayments - 5k repayment: {pv_repayments1} in {np.round(years_repayments1, 1)} years")
print(f"Present Value of repayments - 10k repayment: {pv_repayments2} in {np.round(years_repayments2, 1)} years")
investment_5k_return = (1 + (pv_repayments0 - pv_repayments1) / 5000) ** (1 / (years_repayments0))
investment_10k_return = (1 + (pv_repayments0 - pv_repayments2) / 10000) ** (1 / (years_repayments0))
print(f"Annualised rate of return on debt repayment: {np.round((investment_5k_return - 1) * 100, 2)}%")
print(f"Annualised rate of return on debt repayment: {np.round((investment_10k_return - 1) * 100, 2)}%")
###Output
Annualised rate of return on debt repayment: -2.11%
Annualised rate of return on debt repayment: 0.62%
|
labs/Lab_03_DL Keras Intro Convolutional Models.ipynb | ###Markdown
Keras Intro: Convolutional Models. Keras Documentation: https://keras.io In this notebook we explore how to use Keras to implement convolutional models. Machine learning on images
###Code
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
MNIST
###Code
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
X_test.shape
plt.imshow(X_train[0], cmap='gray')
X_train_flat = X_train.reshape(-1, 28*28)
X_test_flat = X_test.reshape(-1, 28*28)
X_train_flat.shape
X_train_sc = X_train_flat.astype('float32') / 255.0
X_test_sc = X_test_flat.astype('float32') / 255.0
from keras.utils.np_utils import to_categorical
y_train_cat = to_categorical(y_train)
y_test_cat = to_categorical(y_test)
y_train[0]
y_train_cat[0]
y_train_cat.shape
y_test_cat.shape
###Output
_____no_output_____
###Markdown
Fully connected on images
###Code
from keras.models import Sequential
from keras.layers import Dense
import keras.backend as K
K.clear_session()
model = Sequential()
model.add(Dense(512, input_dim=28*28, activation='relu'))
model.add(Dense(256, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
h = model.fit(X_train_sc, y_train_cat, batch_size=128, epochs=10, verbose=1, validation_split=0.3)
plt.plot(h.history['acc'])
plt.plot(h.history['val_acc'])
plt.legend(['Training', 'Validation'])
plt.title('Accuracy')
plt.xlabel('Epochs')
test_accuracy = model.evaluate(X_test_sc, y_test_cat)[1]
test_accuracy
###Output
_____no_output_____
###Markdown
Convolutional layers
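Before fitting, it is worth tracing the tensor shapes by hand: a 3x3 convolution with the default 'valid' padding turns the 28x28 input into 26x26 (28 - 3 + 1), the 32 filters give 26x26x32, the 2x2 max-pool halves that to 13x13x32, and `Flatten` therefore feeds 13*13*32 = 5408 values into the 128-unit dense layer, which is what `model.summary()` below should report.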
###Code
from keras.layers import Conv2D
from keras.layers import MaxPool2D
from keras.layers import Flatten, Activation
X_train_t = X_train_sc.reshape(-1, 28, 28, 1)
X_test_t = X_test_sc.reshape(-1, 28, 28, 1)
X_train_t.shape
K.clear_session()
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(28, 28, 1)))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.summary()
model.fit(X_train_t, y_train_cat, batch_size=128,
epochs=2, verbose=1, validation_split=0.3)
model.evaluate(X_test_t, y_test_cat)
###Output
_____no_output_____ |
devel/skimage_vs_matplotlib.ipynb | ###Markdown
Ignore label 0 (default)
###Code
data1, data2 = make_data([10,10])
show_data(data1,data2)
print(skimage.metrics.adapted_rand_error(data1, data2))
data1, data2 = make_data([11,10])
show_data(data1,data2)
print(skimage.metrics.adapted_rand_error(data1, data2))
###Output
(0.26315789473684215, 0.7368421052631579, 0.7368421052631579)
###Markdown
Use all labels
###Code
data1, data2 = make_data([10,10])
show_data(data1,data2)
print(skimage.metrics.adapted_rand_error(data1, data2, ignore_labels=()))
data1, data2 = make_data([11,10])
show_data(data1,data2)
print(skimage.metrics.adapted_rand_error(data1, data2, ignore_labels=()))
###Output
(0.44881889763779526, 0.5343511450381679, 0.5691056910569106)
(0.43578447983734325, 0.5471574104502136, 0.5823714585519413)
###Markdown
Check label misplacement
###Code
data1, data2 = make_data([10,10])
data2 = data1.copy()
data2[4,4]=1
show_data(data1,data2)
print(skimage.metrics.adapted_rand_error(data1, data2, ignore_labels=()))
data1, data2 = make_data([10,10])
data2 = data1.copy()
data2[4,4]=0
show_data(data1,data2)
print(skimage.metrics.adapted_rand_error(data1, data2, ignore_labels=()))
###Output
(0.01572656741455225, 0.9953350296861747, 0.9734549979261717)
|
notebook/.ipynb_checkpoints/3 - EDA_feat_engineering-checkpoint.ipynb | ###Markdown
3. Exploratory Data Analysis (EDA) and Feature Engineering
###Code
# libs:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns',60)
# Statistical tests
import scipy
from scipy.stats import stats
import statsmodels.api as sm
from feature_engine.categorical_encoders import CountFrequencyCategoricalEncoder
from sklearn.preprocessing import MinMaxScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
# Clustering
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors
from kneed import KneeLocator
# feature selection:
from sklearn.feature_selection import f_classif, chi2
from statsmodels.stats.outliers_influence import variance_inflation_factor
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Constants
###Code
DATA_INPUT_CLEANED = '../data/DATA_train_cleaned.csv'
DATA_OUTPUT_TRAIN_ENG = '../data/DATA_train_eng.csv'
df_train = pd.read_csv(DATA_INPUT_CLEANED)
df_train.head()
df_train = df_train.drop('Unnamed: 0', axis=1)
df_train.shape
###Output
_____no_output_____
###Markdown
**Dataset:** Training set with 4209 observations, of which 849 are properties in the Baixada Santista. Cities: Santos, São Vicente, Guarujá, Praia Grande, Mongaguá, Itanhaém, Peruíbe and Bertioga. 3.1 Exploratory Data Analysis (EDA) Classification of variables
###Code
df_train.dtypes
# Transformação de Object para dateTime
def para_datetime(df):
df['Check-In'] = pd.to_datetime(df_train['Check-In'], format ='%Y-%m-%d')
df['Check-Out'] = pd.to_datetime(df_train['Check-Out'], format ='%Y-%m-%d')
return df
df_train = para_datetime(df_train)
col_num = ['Número Hóspedes', 'Número Quartos', 'Número Camas', 'Número Banheiros', 'Avaliação', 'Número Comentários', 'Preço com taxas']
col_cat = ['Academia', 'Ar-condicionado', 'Cozinha', 'Elevador', 'Estacionamento gratuito', 'Máquina de Lavar', 'Permitido animais',
'Piscina', 'Secadora', 'Self check-In', 'Vista para as águas', 'Vista para o mar', 'Wi-Fi', 'Café da Manhã',
'Entrada/saída para esquis', 'Jacuzzi', 'Lareira interna','Lava-louças', 'Novo preço mais baixo', 'Raridade', 'Localização']
col_date = ['Check-In', 'Check-Out']
Target = ['Preço/Noite']
# Número de ID
print(f'Total de ID únicos: {len(df_train["ID"].unique())}')
# Transformar tudo para letras maíusculas.
df_train['Localização'] = df_train['Localização'].apply(lambda x: x.upper())
###Output
_____no_output_____
###Markdown
Histograms of the numeric and categorical variables
###Code
col_cat.remove('Localização')
for colunas in col_num + col_cat + Target:
plt.figure(figsize=(6, 4))
plt.hist(df_train[colunas])
plt.title(f'Histograma de {colunas}', fontsize=12, fontweight='bold')
plt.xlabel(f'Valores de {colunas}')
plt.ylabel(f'Frequência')
#plt.savefig(f'../img/hist_{colunas}.png')
plt.show()
###Output
_____no_output_____
###Markdown
**Considerations:** **Number of guests:** most frequent between 4 and 6 - right-skewed distribution. **Number of bedrooms:** 1 is the most frequent, followed by 2 and 3 - right-skewed distribution. **Number of beds:** 2 is by far the most frequent, then 1 and 3 - right-skewed distribution. **Number of bathrooms:** 1 is the most frequent, distantly followed by 2. **Rating:** left-skewed distribution, with 5 being the most frequent score, followed by 4.75. **Number of comments:** right-skewed, most frequent around 25. **Price with fees:** right-skewed distribution, with total prices (two nights) most frequently between 500 and 1000 reais. **Price/Night:** right-skewed distribution, most frequently in the 100 to 200 reais range. **Gym:** most do not have one. Ratio ≈ 4000/250. **Air conditioning:** roughly tied between having and not having. **Kitchen:** most have one. Ratio ≈ 2700/1500. **Elevator:** most do not have one. Ratio ≈ 3000/1400. **Free parking:** slightly more have it. Ratio ≈ 2400/1800. **Washing machine:** most do not have one. Ratio ≈ 3700/500. **Pets allowed:** most do not allow them. Ratio ≈ 3500/800. **Pool:** most do not have one. Ratio ≈ 3000/1200. **Dryer:** the vast majority do not have one. Ratio ≈ 4000/100. **Self check-in:** most do not have it. Ratio ≈ 3300/1000. **Waterfront view:** most do not have it. Ratio ≈ 4000/100. **Sea view:** most do not have it. Ratio ≈ 3400/800. **Wi-Fi:** most have it. Ratio ≈ 3600/500. **Breakfast offered:** absent in 100% of the observations. **Ski-in/ski-out:** absent in 100% of the observations. **Jacuzzi:** absent in practically 100% of the observations. **Indoor fireplace:** absent in practically 100% of the observations. **Dishwasher:** absent in practically 100% of the observations. **'New lower price' tag:** roughly split. **'Rare find' tag:** roughly split. X-ray - the most frequent listing in the dataset: a property located in Santos with capacity for 4 to 6 guests, 1 bedroom, 2 beds, 1 bathroom, a rating above 4.75 (scale 1-5), at most 25 comments on the listing, and a price/night between 100 and 200 reais.
###Code
# Correção entre as variáveis numéricas
correlacao_num = df_train[col_num + Target].corr()['Preço/Noite'].sort_values(ascending=False)
correlacao_num
# Matriz de Correlação
correlacao = df_train[col_num + Target].corr()
plt.figure(figsize=(8,6))
sns.heatmap(correlacao, cmap="YlGnBu", annot=True)
plt.savefig(f'../img/matriz_correlacao.png')
plt.show()
# Statistically (Spearman correlation test)
colunas = ['Número Hóspedes', 'Número Quartos', 'Número Camas', 'Número Banheiros', 'Avaliação', 'Número Comentários']
correlation_features = []
for col in colunas:
cor, p = stats.spearmanr(df_train[col],df_train[Target])
if p <= 0.5:
correlation_features.append(col)
print(f'p-value: {p}, correlation: {cor}')
print(f'Existe correlação entre {col} e {Target}.')
print('--'*30)
else:
print(f'p-value: {p}, correlation: {cor}')
print(f'Não há correlação entre {col} e {Target}.')
print('--'*30)
###Output
p-value: 6.623876109521824e-188, correlation: 0.4288234209127937
Existe correlação entre Número Hóspedes e ['Preço/Noite'].
------------------------------------------------------------
p-value: 5.19620215557542e-226, correlation: 0.46604714774207756
Existe correlação entre Número Quartos e ['Preço/Noite'].
------------------------------------------------------------
p-value: 5.751401084000197e-79, correlation: 0.28409140272357275
Existe correlação entre Número Camas e ['Preço/Noite'].
------------------------------------------------------------
p-value: 5.559706477507899e-225, correlation: 0.4651001524381476
Existe correlação entre Número Banheiros e ['Preço/Noite'].
------------------------------------------------------------
p-value: 6.461844436581712e-12, correlation: 0.10561393105023931
Existe correlação entre Avaliação e ['Preço/Noite'].
------------------------------------------------------------
p-value: 3.035135545539631e-11, correlation: -0.10218483438753917
Existe correlação entre Número Comentários e ['Preço/Noite'].
------------------------------------------------------------
###Markdown
**Considerations:** **Correlation matrix:** positive correlations with number of bathrooms, number of bedrooms and number of guests - 0.59, 0.58 and 0.52 respectively. So the price/night increases with the number of bathrooms, bedrooms and guests, which suggests that the bigger the property, the more expensive its nightly rate. Low correlation with rating and number of comments, the latter being negative (as the price drops, the number of comments rises). **Correlation between the features:** high positive correlation between the number of guests and the number of bedrooms, beds and bathrooms, which supports the argument that the larger these counts, the higher the nightly rate (Preço/Noite). Hypotheses
###Code
atributos = ['Número Hóspedes', 'Número Quartos', 'Número Camas', 'Número Banheiros']
###Output
_____no_output_____
###Markdown
H1
###Code
# H1: Do listings with more of these attributes (guests, bedrooms, beds, bathrooms) have a higher Preço/Noite?
for colunas in atributos:
plt.figure(figsize=(10,8))
sns.boxplot(x=df_train[colunas],
y= 'Preço/Noite',
data=df_train,
showfliers=False)
plt.title(f'Box Plot {colunas}')
plt.xlabel(colunas)
plt.ylabel('Preço/Noite')
#plt.savefig(f'../img/box_plot_atributos_{colunas}.png')
plt.show()
###Output
_____no_output_____
###Markdown
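As a numeric complement to the box plots above, a minimal sketch (using the `atributos` list and `df_train` defined earlier) prints the median Preço/Noite per level of each attribute:

```python
# Hypothetical sketch: median Preço/Noite per level of each attribute
for col in atributos:
    print(df_train.groupby(col)['Preço/Noite'].median())
    print('--' * 30)
```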
**Considerations:** the growth trend is visible in the plots: the larger these attributes, the higher the Preço/Noite.

H2
###Code
# H2: Do listings with amenities have a higher Preço/Noite?
for colunas in col_cat:
plt.figure(figsize=(6,6))
sns.boxplot(x=df_train[colunas],
y='Preço/Noite',
data=df_train,
showfliers=False)
plt.title(f'Box Plot {colunas}')
plt.xlabel(colunas)
plt.ylabel('Preço/Noite')
#plt.savefig(f'../img/box_plot_comodidades_{colunas}.png')
plt.show()
###Output
_____no_output_____
###Markdown
Tests of hypotheses H1 and H2:
###Code
def teste_t(df, features, target='0', alpha=0.05):
"""
Teste T-Student
df = DataFrame
    features = list of columns to be tested
target = 'Target'
alpha = significance index. Default is 0.05
"""
import scipy
for colunas in features:
true_target = df.loc[df[colunas] == 1, [target]]
false_target = df.loc[df[colunas]==0, [target]]
teste_T_result = scipy.stats.ttest_ind(true_target, false_target, equal_var=False)
if teste_T_result[1] < alpha:
print(f'{colunas}: ')
print(f'{teste_T_result}')
print(f'Não')
print('-'*30)
else:
print(f'{colunas}: ')
print(f'{teste_T_result}')
print(f'Sim - O preço é maior')
print('-'*30)
# H1: Do listings with more of these attributes (guests, bedrooms, beds, bathrooms) have a higher Preço/Noite?
teste_t(df_train, atributos, target='Preço/Noite')
###Output
Número Hóspedes:
Ttest_indResult(statistic=array([nan]), pvalue=array([nan]))
Sim - O preço é maior
------------------------------
Número Quartos:
Ttest_indResult(statistic=array([nan]), pvalue=array([nan]))
Sim - O preço é maior
------------------------------
Número Camas:
Ttest_indResult(statistic=array([0.20198042]), pvalue=array([0.84178474]))
Sim - O preço é maior
------------------------------
Número Banheiros:
Ttest_indResult(statistic=array([-0.41358328]), pvalue=array([0.67920837]))
Sim - O preço é maior
------------------------------
###Markdown
**Considerations:** it is statistically supported by the Student's t-test that the larger the property, the higher its price.
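For count-valued attributes, another way to set up a two-sample comparison (shown here only as a sketch, not as the approach used above) is to split the listings at the attribute's median and compare Preço/Noite between the two groups:

```python
# Hypothetical sketch: median-split t-test for a count-valued attribute
from scipy import stats

attr = 'Número Quartos'  # illustrative choice
median_value = df_train[attr].median()
high = df_train.loc[df_train[attr] > median_value, 'Preço/Noite']
low = df_train.loc[df_train[attr] <= median_value, 'Preço/Noite']
stat, p_value = stats.ttest_ind(high, low, equal_var=False)
print(f'{attr}: statistic={stat:.2f}, p-value={p_value:.4f}')
```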
###Code
# H2: Do listings with amenities have a higher Preço/Noite?
teste_t(df_train, col_cat, target='Preço/Noite')
###Output
Academia:
Ttest_indResult(statistic=array([-2.04011699]), pvalue=array([0.04266947]))
Não
------------------------------
Ar-condicionado:
Ttest_indResult(statistic=array([1.9791777]), pvalue=array([0.04786125]))
Não
------------------------------
Cozinha:
Ttest_indResult(statistic=array([4.1373094]), pvalue=array([3.59416842e-05]))
Não
------------------------------
Elevador:
Ttest_indResult(statistic=array([-17.06749817]), pvalue=array([5.08097563e-63]))
Não
------------------------------
Estacionamento gratuito:
Ttest_indResult(statistic=array([-0.90894358]), pvalue=array([0.36343889]))
Sim - O preço é maior
------------------------------
Máquina de Lavar:
Ttest_indResult(statistic=array([-1.24409889]), pvalue=array([0.21396324]))
Sim - O preço é maior
------------------------------
Permitido animais:
Ttest_indResult(statistic=array([-11.28474957]), pvalue=array([5.30645242e-28]))
Não
------------------------------
Piscina:
Ttest_indResult(statistic=array([19.40450262]), pvalue=array([3.35987321e-75]))
Não
------------------------------
Secadora:
Ttest_indResult(statistic=array([1.91078605]), pvalue=array([0.05745774]))
Sim - O preço é maior
------------------------------
Self check-In:
Ttest_indResult(statistic=array([-9.72963491]), pvalue=array([5.59752902e-22]))
Não
------------------------------
Vista para as águas:
Ttest_indResult(statistic=array([-1.15786276]), pvalue=array([0.24810592]))
Sim - O preço é maior
------------------------------
Vista para o mar:
Ttest_indResult(statistic=array([-8.80590041]), pvalue=array([3.70878473e-18]))
Não
------------------------------
Wi-Fi:
Ttest_indResult(statistic=array([6.87587315]), pvalue=array([1.29321065e-11]))
Não
------------------------------
Café da Manhã:
Ttest_indResult(statistic=array([-5.77656846]), pvalue=array([6.15574438e-06]))
Não
------------------------------
Entrada/saída para esquis:
Ttest_indResult(statistic=array([-0.26163455]), pvalue=array([0.81050166]))
Sim - O preço é maior
------------------------------
Jacuzzi:
Ttest_indResult(statistic=array([7.11528731]), pvalue=array([1.36042142e-09]))
Não
------------------------------
Lareira interna:
Ttest_indResult(statistic=array([5.75807653]), pvalue=array([0.00027015]))
Não
------------------------------
Lava-louças:
Ttest_indResult(statistic=array([2.18167626]), pvalue=array([0.04971726]))
Não
------------------------------
Novo preço mais baixo:
Ttest_indResult(statistic=array([nan]), pvalue=array([nan]))
Sim - O preço é maior
------------------------------
Raridade:
Ttest_indResult(statistic=array([nan]), pvalue=array([nan]))
Sim - O preço é maior
------------------------------
###Markdown
**Test considerations:**

According to the Student's t-test, gym, air conditioning, kitchen, elevator, pets allowed, pool, self check-in, sea view, Wi-Fi, breakfast, jacuzzi, indoor fireplace and dishwasher are not associated with a higher nightly rate (Preço/Noite).

Free parking, washing machine, dryer, waterfront view, ski-in/ski-out, the "New lower price" tag and the "Rarity" tag are associated with a higher Preço/Noite according to the test.

**Note:** although pool, Wi-Fi, jacuzzi and dishwasher graphically appeared to have a higher Preço/Noite, they were not statistically significant in the t-test. Let us look at why waterfront view shows a higher price than sea view. Graphically, for sea view the box with the larger size and longer tail is the one with the lower price, while for waterfront view the box sizes and tails are similar. In this case the ideal would be to combine the two features and test again, to check whether facing the water influences Preço/Noite.

H3 - Location
###Code
len(df_train['Localização'].unique())
# let's look at the distribution:
plt.figure(figsize=(12,8))
df_train['Localização'].hist()
plt.show()
df_train['Localização'].value_counts()
###Output
_____no_output_____
###Markdown
To make this easier, we will convert this categorical variable into its frequency. **This way the points of highest concentration will stand out.**
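For reference, an equivalent manual frequency encoding can be done with plain pandas. This is only a sketch, equivalent in spirit to the `CountFrequencyCategoricalEncoder` used in the next cell:

```python
# Hypothetical sketch: manual frequency encoding of 'Localização' with plain pandas
import matplotlib.pyplot as plt

freq = df_train['Localização'].value_counts(normalize=True)
df_freq = df_train.copy()
df_freq['Localização'] = df_freq['Localização'].map(freq)
df_freq['Localização'].hist()
plt.show()
```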
###Code
# Using the frequency encoder transformer.
cfce = CountFrequencyCategoricalEncoder(encoding_method='frequency', variables=['Localização'])
df_train_transf = cfce.fit_transform(df_train)
plt.figure(figsize=(8,6))
df_train_transf['Localização'].hist()
plt.show()
###Output
_____no_output_____
###Markdown
Now its distribution is much easier to see. There is one location with a much higher concentration and then a few other locations; one could even argue there are two modes.
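To see those concentration levels as numbers, a quick sketch (using the original categorical column in `df_train`):

```python
# Hypothetical sketch: share of listings in the most frequent locations
print(df_train['Localização'].value_counts(normalize=True).head(10))
```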
###Code
# H3: Is there a correlation between location and Preço/Noite?
local = ['Localização', 'Preço/Noite']
correlacao_local = df_train_transf[local].corr()['Preço/Noite']
correlacao_local
# let's test it statistically:
cor, p = stats.spearmanr(df_train_transf['Localização'],df_train_transf['Preço/Noite'])
if p <= 0.05:
print(f'p-value: {p}, correlation: {cor}')
print(f'Existe correlação.')
else:
print(f'p-value: {p}, correlation: {cor}')
print(f'Não há correlação.')
###Output
p-value: 4.3741112668780545e-08, correlation: -0.08426624393023412
Existe correlação.
###Markdown
**Considerations:** with 95% confidence, Preço/Noite tends to be lower in locations with a higher concentration of available listings; in other words, the larger the supply, the lower the price.

**Feature engineering:** create a column flagging whether the location has a high supply of listings.

Behaviour of Preço/Noite over the collection period
###Code
df_train.pivot_table(values='Preço/Noite', columns='Check-In',aggfunc='mean').T.plot()
plt.show()
# First, build a dataframe grouped by Check-In with the mean, standard deviation, median, min and max.
df_preco_medio = df_train.groupby('Check-In').agg(Mean = ('Preço/Noite', 'mean'),
Desvio = ('Preço/Noite', 'std'),
Median = ('Preço/Noite', 'median'),
Min = ('Preço/Noite', 'min'),
Max = ('Preço/Noite', 'max')).reset_index()
df_preco_medio
plt.figure(figsize=(16, 4))
#faz o plot da linha
plt.plot(df_preco_medio['Check-In'],
df_preco_medio['Mean'],
'o-',
label='Média')
plt.plot(df_preco_medio['Check-In'],
df_preco_medio['Desvio'],
'o--',
label='Desvio')
# Adiciona títulos
plt.title(
'Gráfico da média e desvio-padrão Preço/Noite no período',
fontsize=15,
fontweight='bold')
plt.xlabel('Check-In')
plt.ylabel('Valor de Preço/Noite (R$)')
plt.legend()
plt.savefig(f'../img/media_preço_noite_por_periodo.png')
plt.show()
###Output
_____no_output_____
###Markdown
**Considerations:** there is a sharp drop on 02/04, which corresponds to the Good Friday holiday. We cannot be certain, since data from other periods is missing, but when check-in falls on the holiday itself the price appears, a priori, to be lower. Since this is a coastal region, prices may also be higher in the warmer months of the year (summer: January, February and March).

List of holidays
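Before bringing in the holiday list, note that the seasonality point above could be checked, once more months are collected, with a sketch along these lines (assuming `Check-In` is a datetime column, as it is used later in this notebook):

```python
# Hypothetical sketch: average Preço/Noite per check-in month
monthly_mean = df_train.groupby(df_train['Check-In'].dt.month)['Preço/Noite'].mean()
print(monthly_mean)
```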
###Code
# Importar lista de feriados
df_feriados = pd.read_csv('../data/feriados_nacionais_2021.csv', sep=';')
df_feriados
def transform_holiday_table(df):
df.replace({'feriado nacional' : '1', 'ponto facultativo': '1'}, inplace=True)
df.rename(columns={'status': 'É_feriado'}, inplace=True)
df.rename(columns={'data': 'Check-In' }, inplace=True)
    df['Check-In'] = pd.to_datetime(df['Check-In'], format ='%Y-%m-%d')
return df
df_feriados = transform_holiday_table(df_feriados)
df_feriados
# Vamos juntar as duas tabelas Preço Médio e Feriados
df_preco_medio = df_preco_medio.merge(df_feriados, left_on='Check-In', right_on='Check-In', how='left')
df_preco_medio = df_preco_medio.fillna(0)
df_preco_medio
###Output
_____no_output_____
###Markdown
**Considerations:** when the check-in date falls on a holiday, the average price drops.

**Feature engineering:** create columns indicating whether there is a holiday on the check-in day and in the check-in week.

Fee amount
###Code
def create_taxa(df):
df['Diária com taxas'] = df['Preço com taxas'] / 2
df['Taxa'] = df['Diária com taxas'] - df['Preço/Noite']
# Tratando Valores Negativos
df['Taxa'] = df['Taxa'].mask(df['Taxa'].lt(0)).ffill().fillna(0).convert_dtypes()
return df
# Criando coluna temporária diária com taxas.
df_train['Diária com taxas'] = df_train['Preço com taxas'] / 2
# Calculo da Taxa
df_train['Taxa'] = df_train['Diária com taxas'] - df_train['Preço/Noite']
# We notice that negative values appear, which do not make sense.
df_train.loc[df_train['Taxa'] < 0]
# Transformação dos valores negativos em 0
df_train['Taxa'] = df_train['Taxa'].mask(df_train['Taxa'].lt(0)).ffill().fillna(0).convert_dtypes()
df_train['Taxa'].value_counts()
# Observando a distribuição da variável criada.
df_train['Taxa'].hist()
plt.show()
###Output
_____no_output_____
###Markdown
**Conclusions:** despite the transformations applied, the resulting values are inconsistent and do not make sense. Drop the columns Preço com taxas, Diária com taxas and Taxa.

Are there clusters?
###Code
# Vamos usar o DBSCAN
X = df_train.drop({'ID', 'Check-In', 'Check-Out'}, axis=1)
cfce = CountFrequencyCategoricalEncoder(encoding_method='frequency', variables=['Localização'])
pipe = Pipeline(steps=[('scaler', MinMaxScaler())])
X = cfce.fit_transform(X)
X = pipe.fit_transform(X)
# DBSCAN
# método para tunar o eps
# trabalhando com o número de ouro 10 vizinhos
nearest_neighbors = NearestNeighbors(n_neighbors=11)
neighbors = nearest_neighbors.fit(X)
distances, indices = neighbors.kneighbors(X)
distances = np.sort(distances[:,10], axis=0)
# Escolha do EPS
i = np.arange(len(distances))
knee = KneeLocator(i, distances, S=1, curve='convex', direction='increasing', interp_method='polynomial')
fig = plt.figure(figsize=(5, 5))
knee.plot_knee()
plt.xlabel("Points")
plt.ylabel("Distance")
plt.show()
print(distances[knee.knee])
# using the eps value suggested by the knee plot.
db = DBSCAN(eps=distances[knee.knee], min_samples=11).fit(X)
labels = db.labels_
fig = plt.figure(figsize=(10, 10))
sns.scatterplot(X[:,0], X[:,1], hue=["cluster={}".format(x) for x in labels])
plt.show()
###Output
_____no_output_____
###Markdown
**Conclusion:** there are no clear groups, but outliers were identified. We will create a column flagging the outliers.

3.2 Feature Engineering

**Actions:**
* Combine the waterfront-view and sea-view variables
* Create a column flagging whether the location has a high supply of available listings
* Create a column indicating whether the check-in day is a holiday (with a negative weight)
* Create a column indicating whether there is a holiday in the check-in week
* Create an is-outlier column
* Split the Check-In column into day, month and year

1. Combine the waterfront-view and sea-view variables into a single variable.
###Code
# Agrupando as duas variáveis.
df_train['Vista'] = df_train['Vista para as águas'] + df_train['Vista para o mar']
df_train.drop({'Vista para as águas', 'Vista para o mar'}, axis=1, inplace=True)
# 0 não tem esse atributo
# 1 tem uma ou outra vista.
# 2 Tem as duas vistas.
df_train['Vista'].value_counts()
# Vamos testar:
features = ['Vista']
teste_t(df_train, features , target='Preço/Noite')
def create_vista(df):
"""
Create variable called Vista
df= Dataframe
"""
    df['Vista'] = df['Vista para as águas'] + df['Vista para o mar']
return df
###Output
_____no_output_____
###Markdown
The earlier question was whether the two variables combined would perform better in the t-test; however, the statistic did not improve.

2. Create a column flagging whether the location has a high supply of available listings
###Code
def tranform_frequency(df, variables='Localização'):
"""
Transform categorical variable into frequency
df = dataset
    variable = name of the variable to be transformed
"""
from feature_engine.categorical_encoders import CountFrequencyCategoricalEncoder
cfce = CountFrequencyCategoricalEncoder(encoding_method='frequency', variables=[variables])
df = cfce.fit_transform(df)
return df
df_train = tranform_frequency(df_train, variables='Localização')
df_train['Localização'].max()
def eng_create_demand(df):
"""
    Create a new column called Demanda flagging the most frequent location
df= dataset
"""
df['Demanda'] = df['Localização']
df['Demanda'] = [1 if i == df['Localização'].max() else 0 for i in df['Demanda']]
return df
df_train = eng_create_demand(df_train)
df_train['Demanda'].value_counts()
###Output
_____no_output_____
###Markdown
3. Create a column indicating whether the check-in day is a holiday
###Code
df_feriados = pd.read_csv('../data/feriados_nacionais_2021.csv', sep=';')
def eng_create_is_holiday(df , df_feriados):
"""
Create new column called É feriado.
df = Dataframe
    df_feriados = DataFrame containing a list of national holidays
"""
#import da tabela feriado
df_feriados = df_feriados.drop('evento', axis=1)
df_feriados.replace({'feriado nacional' : '1', 'ponto facultativo': '1'}, inplace=True)
df_feriados.rename(columns={'status': 'É_feriado'}, inplace=True)
df_feriados.rename(columns={'data': 'Check-In' }, inplace=True)
df_feriados['Check-In'] = pd.to_datetime(df_feriados['Check-In'], format ='%Y-%m-%d')
# Vamos juntar as duas tabelas Preço Médio e Feriados
df = df.merge(df_feriados, left_on='Check-In', right_on='Check-In', how='left')
#preenche os nulos com 0
df = df.fillna(0)
return df
df_train = eng_create_is_holiday(df_train, df_feriados)
df_train['É_feriado'].value_counts()
###Output
_____no_output_____
###Markdown
4. Identify the outliers
###Code
labels
df_train['É outilier'] = labels
df_train['É outilier'].value_counts()
def eng_is_outlier(df, n_neighbors=11 ):
"""
    Create column is outlier from DBSCAN model
df = DataFrame
n_neighbors = default is 11
"""
#libs
from feature_engine.categorical_encoders import CountFrequencyCategoricalEncoder
from sklearn.preprocessing import MinMaxScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors
from kneed import KneeLocator
X = df.drop({'ID', 'Check-In', 'Check-Out'}, axis=1)
cfce = CountFrequencyCategoricalEncoder(encoding_method='frequency', variables=['É Feriado'])
pipe = Pipeline(steps=[('scaler', MinMaxScaler())])
X = cfce.fit_transform(X)
X = pipe.fit_transform(X)
nearest_neighbors = NearestNeighbors(n_neighbors=11)
neighbors = nearest_neighbors.fit(X)
distances, indices = neighbors.kneighbors(X)
distances = np.sort(distances[:,10], axis=0)
i = np.arange(len(distances))
knee = KneeLocator(i, distances, S=1, curve='convex', direction='increasing', interp_method='polynomial')
db = DBSCAN(eps=distances[knee.knee], min_samples=11).fit(X)
labels = db.labels_
df['É outilier'] = labels
return df
#df_train = eng_is_outlier(df_train)
###Output
_____no_output_____
###Markdown
5. Split the Check-In column into day, month and year.
###Code
df_train['Check-In'].dt.year
df_train['Check-In'].dt.weekday
###Output
_____no_output_____
###Markdown
The year and day-of-week columns are constant, so we will handle them in the split function and process.
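A quick way to confirm which date components are constant before deciding what to keep, as a sketch (assuming `Check-In` is a datetime column):

```python
# Hypothetical sketch: how many distinct values each date component has
for part in ['year', 'month', 'day', 'weekday']:
    values = getattr(df_train['Check-In'].dt, part)
    print(part, '->', values.nunique(), 'distinct value(s)')
```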
###Code
def create_dates(df, date='Check-In'):
"""
Split date into year, month, day and day of year
df = DataFrame
date = put date column. Default is 'Check-In'
In week, Monday is 0 and Sunday is 6.
"""
df['Mes'] = df[date].dt.month
df['Dia'] = df[date].dt.day
df['Semana_do_ano'] = df[date].dt.week
return df
df_train = create_dates(df_train)
df_train.head()
###Output
_____no_output_____
###Markdown
6. Create a column indicating whether there is a holiday in the check-in week
###Code
# Tabela de feriados
df_feriados
def eng_create_holiday_week(df , df_feriados):
"""
Create new column called Semana de feriado.
df = Dataframe
    df_feriados = DataFrame containing a list of national holidays
"""
#import da tabela feriado
df_feriados = df_feriados.drop({'evento', 'status'}, axis=1)
df_feriados.rename(columns={'data': 'Check-In' }, inplace=True)
df_feriados['Check-In'] = pd.to_datetime(df_feriados['Check-In'], format ='%Y-%m-%d')
df_feriados['Semana de Feriado'] = df_feriados['Check-In'].dt.week
# Vamos juntar as duas tabelas Preço Médio e Feriados
df = df.merge(df_feriados, left_on='Check-In', right_on='Check-In', how='left')
#preenche os nulos com 0
df = df.fillna(int(0))
return df
df_train = eng_create_holiday_week(df_train, df_feriados)
df_train['Semana de Feriado'].value_counts()
df_train.head()
###Output
_____no_output_____
###Markdown
Export
###Code
df_train.to_csv(DATA_OUTPUT_TRAIN_ENG)
###Output
_____no_output_____ |
Models and Tags Analysis notebooks/Logistic regression model.ipynb | ###Markdown
Model Analysis with Logistic regression
###Code
import pandas as pd
import numpy as np
import re
from tqdm import tqdm
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from wordcloud import WordCloud
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem.snowball import SnowballStemmer
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from sklearn.metrics import f1_score,precision_score,recall_score
%%time
## sample_500k is a sample from main dataset
preprocess_data = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Stack overflow Tag /preprocessed_3title_100k.csv")
preprocess_data.head()
preprocess_data.head()
def text_splitter(text):
return text.split()
# binary='true' will give a binary vectorizer
tag_vectorizer = CountVectorizer(tokenizer = text_splitter, binary=True)
multi_label_y = tag_vectorizer.fit_transform(preprocess_data['tags'].values.astype(str))
# make sum column wise
tag_column_sum = multi_label_y.sum(axis=0).tolist()[0]
# To select n number of top tags
def select_top_tags(n):
# To get sotred list (means: tags appear in maximum number of questions come first)
# top 10: [3711, 15246, 22934, 15324, 1054, 15713, 3720, 24481, 14905, 1897]
sorted_tags = sorted(range(len(tag_column_sum)), key=lambda i: tag_column_sum[i], reverse=True)
# With this line of code we get tags in our columns which are come in most of the questions
# we will get shape: (999999, n)
multi_label_n_y = multi_label_y[:,sorted_tags[:n]]
return multi_label_n_y
#
def questions_covered_fn(n):
multi_label_n_y = select_top_tags(n)
    # Row-wise sum of the binary label matrix:
    # counts how many of the selected top tags each question has
row_sum_array = multi_label_n_y.sum(axis=1)
# Counts the number of non-zero values in the array
return (np.count_nonzero(row_sum_array==0))
# With this code we check what percentage of questions is covered as the number of top tags grows
# We start from 500 because we consider the top 500 tags the most important and cannot skip them
questions_covered=[]
total_tags=multi_label_y.shape[1]
total_qs=preprocess_data.shape[0]
for i in range(500, total_tags, 100):
questions_covered.append(np.round(((total_qs-questions_covered_fn(i))/total_qs)*100,3))
multi_label_n_y = select_top_tags(500)
print("number of questions that are not covered :", questions_covered_fn(5500),"out of ", total_qs)
print("Number of tags in sample :", multi_label_y.shape[1])
print("number of tags taken :", multi_label_n_y.shape[1],"-->",round((multi_label_n_y.shape[1]/multi_label_y.shape[1]),3)*100,"%")
total_size=preprocess_data.shape[0]
train_size=int(0.80*total_size)
x_train=preprocess_data.head(train_size)
x_test=preprocess_data.tail(total_size - train_size)
y_train = multi_label_n_y[0:train_size,:]
y_test = multi_label_n_y[train_size:total_size,:]
# To get new features with tfidf technique get 200000 features with upto 3-grams
vectorizer = TfidfVectorizer(min_df=0.00009, max_features=200000, smooth_idf=True, norm="l2", tokenizer = text_splitter, sublinear_tf=False, ngram_range=(1,3))
# Apply this vectorizer only on question data column
x_train_multi_label = vectorizer.fit_transform(x_train['question'])
x_test_multi_label = vectorizer.transform(x_test['question'])
# Now check data shapes after featurization
print("Dimensions of train data X:",x_train_multi_label.shape, "Y :",y_train.shape)
print("Dimensions of test data X:",x_test_multi_label.shape,"Y:",y_test.shape)
from joblib import dump
dump(vectorizer, '/content/drive/MyDrive/Colab Notebooks/Stack overflow Tag /stackoverflow_tfidf_vectorizer_logistic_regression_100k.pkl')
classifier = OneVsRestClassifier(LogisticRegression(penalty='l1',solver='liblinear'), n_jobs=-1)
import time
start = time.time()
classifier.fit(x_train_multi_label, y_train)
print("Time it takes to run this :",(time.time()-start)/60,"minutes")
dump(classifier, '/content/drive/MyDrive/Colab Notebooks/Stack overflow Tag /stackoverflow_model_logistic_regression_100k.pkl')
predictions = classifier.predict(x_test_multi_label)
print("accuracy :",metrics.accuracy_score(y_test,predictions))
print("macro f1 score :",metrics.f1_score(y_test, predictions, average = 'macro'))
print("micro f1 scoore :",metrics.f1_score(y_test, predictions, average = 'micro'))
print("hamming loss :",metrics.hamming_loss(y_test,predictions))
report = metrics.classification_report(y_test, predictions, output_dict=True)
report_df = pd.DataFrame(report).transpose()
report_df.to_csv("/content/report_3title_100k.csv")
###Output
_____no_output_____ |
AQI/models/7. Implementing ANN.ipynb | ###Markdown
Implementing ANN

Authors:
- Nooruddin Shaikh
- Milind Sai
- Saurabh Jejurkar
- Kartik Bhargav
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import metrics
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LeakyReLU,PReLU,ELU
from keras.layers import Dropout
import tensorflow as tf
data = pd.read_csv("Data/final_data.csv")
data.head()
#Splitting Data
X = data.iloc[:, :-1] #Independent features
y = data.iloc[:, -1] #Dependent feature
#Train Test Splitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
NN_model = Sequential()
# The Input Layer :
NN_model.add(Dense(128, kernel_initializer='normal',input_dim = X_train.shape[1], activation='relu'))
# The Hidden Layers :
NN_model.add(Dense(256, kernel_initializer='normal',activation='relu'))
NN_model.add(Dense(256, kernel_initializer='normal',activation='relu'))
NN_model.add(Dense(256, kernel_initializer='normal',activation='relu'))
# The Output Layer :
NN_model.add(Dense(1, kernel_initializer='normal',activation='linear'))
# Compile the network :
NN_model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['mean_absolute_error'])
NN_model.summary()
# Fitting the ANN to the Training set
model_history = NN_model.fit(X_train, y_train,validation_split=0.33, batch_size = 10, epochs = 100)
prediction = NN_model.predict(X_test)
y_test
sns.distplot(y_test.values.reshape(-1,1)-prediction)
plt.scatter(y_test,prediction)
print('MAE:', metrics.mean_absolute_error(y_test, prediction))
print('MSE:', metrics.mean_squared_error(y_test, prediction))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
from sklearn.metrics import r2_score
r2_score(y_test, prediction)
###Output
_____no_output_____
###Markdown
Prediction
###Code
X_test.head()
y_test.head()
print(NN_model.predict([[38.82,26.56,0.82,10.25,20.06]]))
print(NN_model.predict([[63.58,40.25,0.23,27.84,50.72]]))
print(NN_model.predict([[62.33,2.60,0.59,7.46,29.58]]))
print(NN_model.predict([[118.43,84.21,0.89,37.55,39.59]]))
print(NN_model.predict([[37.67,37.32,1.06,7.06,34.92]]))
###Output
[[87.93328]]
[[138.57399]]
[[128.83249]]
[[279.8932]]
[[88.93133]]
|
additional/examples/data_preporation.ipynb | ###Markdown
###Code
%matplotlib inline
plt.figure(figsize=(16, 8))
plt.plot(df_stock["Open"])
plt.plot(df_stock["High"])
plt.plot(df_stock["Low"])
plt.plot(df_stock["Close"])
plt.title('История цен акций NVIDIA')
plt.ylabel('Price (USD)')
plt.xlabel('Days')
plt.legend(['Open','High','Low','Close'], loc='upper left')
plt.show()
%matplotlib inline
tone_columns = [e for e in df_stock.columns if 'tone' in e]
plt.figure(figsize=(16, 8))
for tc in tone_columns:
plt.plot(df_stock[tc])
plt.title('История тональности главных продуктов NVIDIA')
plt.ylabel('Tone')
plt.xlabel('Days')
plt.legend(tone_columns, loc='upper left')
plt.show()
%matplotlib inline
news_count_columns = [e for e in df_stock.columns if 'count' in e]
plt.figure(figsize=(16, 8))
for nc in news_count_columns:
plt.plot(df_stock[nc])
plt.title('История количества новостей главных продуктов NVIDIA')
plt.ylabel('Number')
plt.xlabel('Days')
plt.legend(news_count_columns, loc='upper left')
plt.show()
###Output
_____no_output_____ |
nbs/datasets/prepare-conala.ipynb | ###Markdown
© 2020 Nokia. Licensed under the BSD 3 Clause license. SPDX-License-Identifier: BSD-3-Clause

Prepare Conala snippet collection and evaluation data
###Code
from pathlib import Path
import json
from collections import defaultdict
from codesearch.data import load_jsonl, save_jsonl
corpus_url = "http://www.phontron.com/download/conala-corpus-v1.1.zip"
conala_dir = Path("conala-corpus")
conala_train_fn = conala_dir/"conala-train.json"
conala_test_fn = conala_dir/"conala-test.json"
conala_mined_fn = conala_dir/"conala-mined.jsonl"
conala_snippets_fn = "conala-curated-snippets.jsonl"
conala_retrieval_test_fn = "conala-test-curated-0.5.jsonl"
if not conala_train_fn.exists():
!wget $corpus_url
!unzip conala-corpus-v1.1.zip
conala_mined = load_jsonl(conala_mined_fn)
###Output
_____no_output_____
###Markdown
The mined dataset seems too noisy to incorporate into the snippet collection:
###Code
!sed -n '10000,10009p;10010q' $conala_mined_fn
with open(conala_train_fn) as f:
conala_train = json.load(f)
with open(conala_test_fn) as f:
conala_test = json.load(f)
conala_all = conala_train + conala_test
conala_all[:2], len(conala_all), len(conala_train), len(conala_test)
for s in conala_all:
if s["rewritten_intent"] == "Convert the first row of numpy matrix `a` to a list":
print(s)
question_ids = {r["question_id"] for r in conala_all}
intents = set(r["intent"] for r in conala_all)
len(question_ids), len(conala_all), len(intents)
id2snippet = defaultdict(list)
for r in conala_all:
id2snippet[r["question_id"]].append(r)
for r in conala_all:
if not r["intent"]:
print(r)
if r["intent"].lower() == (r["rewritten_intent"] or "").lower():
print(r)
import random
random.seed(42)
snippets = []
eval_records = []
for question_id in id2snippet:
snippets_ = [r for r in id2snippet[question_id] if r["rewritten_intent"]]
if not snippets_: continue
for i, record in enumerate(snippets_):
snippet_record = {
"id": f'{record["question_id"]}-{i}',
"code": record["snippet"],
"description": record["rewritten_intent"],
"language": "python",
"attribution": f"https://stackoverflow.com/questions/{record['question_id']}"
}
snippets.append(snippet_record)
# occasionally snippets from the same question have a slightly different intent
# to avoid similar queries, we create only one query per question
query = random.choice(snippets_)["intent"]
if any(query.lower() == r["description"].lower() for r in snippets[-len(snippets_):] ):
print(f"filtering query {query}")
continue
relevant_ids = [r["id"] for r in snippets[-len(snippets_):] ]
eval_records.append({"query": query, "relevant_ids": relevant_ids})
snippets[:2], len(snippets), eval_records[:2], len(eval_records)
id2snippet_ = {r["id"]: r for r in snippets}
for i, eval_record in enumerate(eval_records):
print(f"Query: {eval_record['query']}")
print(f"Relevant descriptions: {[id2snippet_[id]['description'] for id in eval_record['relevant_ids']]}")
if i == 10:
break
from codesearch.text_preprocessing import compute_overlap
compute_overlap("this is a test", "test test")
overlaps = []
filtered_eval_records = []
for r in eval_records:
query = r["query"]
descriptions = [id2snippet_[id]['description'] for id in r['relevant_ids']]
overlap = max(compute_overlap(query, d)[1] for d in descriptions)
overlaps.append(overlap)
if overlap < 0.5 :
filtered_eval_records.append(r)
filtered_eval_records[:2], len(filtered_eval_records)
save_jsonl(conala_snippets_fn, snippets)
save_jsonl(conala_retrieval_test_fn, filtered_eval_records)
###Output
_____no_output_____ |
notebooks/NLUsentimentPerformanceEval.ipynb | ###Markdown
Notebook for testing performance of NLU Sentiment

[Watson Developer Cloud](https://www.ibm.com/watsondevelopercloud) is a platform of cognitive services that leverage machine learning techniques to help partners and clients solve a variety of business problems. It is critical to understand that training a machine learning solution is an iterative process where it is important to continually improve the solution by providing new examples and measuring the performance of the trained solution. In this notebook, we show how you can compute important Machine Learning metrics (accuracy, precision, recall, confusion_matrix) to judge the performance of your solution. For more details on these various metrics, please consult the **[Is Your Chatbot Ready for Prime-Time?](https://developer.ibm.com/dwblog/2016/chatbot-cognitive-performance-metrics-accuracy-precision-recall-confusion-matrix/)** blog. The notebook assumes you have already created a Watson [Natural Language Understanding](https://www.ibm.com/watson/services/natural-language-understanding/) instance. To leverage this notebook, you need to provide the following information:

* Credentials for your NLU instance (username and password)
* csv file with your text utterances and corresponding sentiment labels (positive, negative, neutral)
* results csv file to write the results to
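For reference, a hypothetical example of the expected test csv layout (utterance first, label last, no header row); the file name and rows below are only illustrative:

```python
# Hypothetical example of the expected test csv layout: utterance first, label last, no header row
import pandas as pd

sample = pd.DataFrame({
    'text': ['it was great talking to your agent',
             'the wait time was far too long',
             'I called to update my address'],
    'label': ['positive', 'negative', 'neutral']})
sample.to_csv('example_test.csv', header=False, index=False)
```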
###Code
# Only run this cell if you don't have pandas_ml or watson_developer_cloud installed
!pip install pandas_ml
# You can specify the latest verion of watson_developer_cloud (1.0.0 as of November 20, 2017)
#!pip install -I watson-developer-cloud==1.0.0
## install latest watson developer cloud Python SDK
!pip install --upgrade watson-developer-cloud
#Import utilities
import json
import sys
import codecs
import unicodecsv as csv
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt
import pandas_ml
from pandas_ml import ConfusionMatrix
from watson_developer_cloud import NaturalLanguageUnderstandingV1
from watson_developer_cloud.natural_language_understanding.features import (v1 as Features)
###Output
_____no_output_____
###Markdown
Provide the path to the parms file which includes credentials to access your NLU service as well as the input test csv file and the output csv file to write the results to.
###Code
# Sample parms file data
#{
# "url": "https://gateway.watsonplatform.net/natural-language-understanding/api/v1",
# "user":"YOUR_NLU_USERNAME",
# "password": "YOUR_NLU_PASSWORD",
# "test_csv_file": "COMPLETE_PATH_TO_YOUR_TEST_CSV_FILE",
# "results_csv_file": "COMPLETE PATH TO RESULTS FILE (any file you can write to)",
# "confmatrix_csv_file": "COMPLETE PATH TO CONFUSION MATRIX FILE (any file you can write to)"
#}
# Provide complete path to the file which includes all required parms
# A sample parms file is included (example_parms.json)
nluParmsFile = 'COMPLETE PATH TO YOUR PARMS FILE'
parms = ''
with open(nluParmsFile) as parmFile:
parms = json.load(parmFile)
url=parms['url']
user=parms['user']
password=parms['password']
test_csv_file=parms['test_csv_file']
results_csv_file=parms['results_csv_file']
confmatrix_csv_file=parms['confmatrix_csv_file']
json.dumps(parms)
# Create an object for your NLU instance
natural_language_understanding = NaturalLanguageUnderstandingV1(
version = '2017-02-27',
username = user,
password = password
)
###Output
_____no_output_____
###Markdown
Define useful methods to return sentiment of text using NLU.
###Code
# Given a text string and a pointer to NLU instance, get back NLU sentiment response
def getNLUresponse(nlu_instance,features,string):
nlu_analysis = natural_language_understanding.analyze(features, string, return_analyzed_text=True)
return nlu_analysis
# Process multiple text utterances (provided via csv file) in batch. Effectively, read the csv file and for each text
# utterance, get NLU sentiment. Aggregate and return results.
def batchNLU(nlu_instance,features,csvfile):
test_labels=[]
nlupredict_labels=[]
nlupredict_score =[]
text=[]
i=0
print ('reading csv file: ', csvfile)
with open(csvfile, 'rb') as csvfile:
# For better handling of utf8 encoded text
csvReader = csv.reader(csvfile, encoding="utf-8-sig")
for row in csvReader:
print(row)
# Assume input text is 2 column csv file, first column is text
# and second column is the label/class/intent
# Sometimes, the text string includes commas which may split
# the text across multiple colmns. The following code handles that.
if len(row) > 2:
qelements = row[0:len(row)-1]
utterance = ",".join(qelements)
test_labels.append(row[len(row)-1])
else:
utterance = row[0]
test_labels.append(row[1])
utterance = utterance.replace('\r', ' ')
print ('i: ', i, ' testing row: ', utterance)
nlu_response = getNLUresponse(nlu_instance,features,utterance)
if nlu_response['sentiment']:
nlupredict_labels.append(nlu_response['sentiment']['document']['label'])
nlupredict_score.append(nlu_response['sentiment']['document']['score'])
else:
nlupredict_labels.append('')
nlupredict_score.append(0)
text.append(utterance)
i = i+1
if(i%250 == 0):
print("")
print("Processed ", i, " records")
if(i%10 == 0):
sys.stdout.write('.')
print("")
print("Finished processing ", i, " records")
return test_labels, nlupredict_labels, nlupredict_score, text
# Plot confusion matrix as an image
def plot_conf_matrix(conf_matrix):
plt.figure()
plt.imshow(conf_matrix)
plt.show()
# Print confusion matrix to a csv file
def confmatrix2csv(conf_matrix,labels,csvfile):
with open(csvfile, 'wb') as csvfile:
csvWriter = csv.writer(csvfile)
row=list(labels)
row.insert(0,"")
csvWriter.writerow(row)
for i in range(conf_matrix.shape[0]):
row=list(conf_matrix[i])
row.insert(0,labels[i])
csvWriter.writerow(row)
# This is an optional step to quickly test response from NLU for a given utterance
testQ='it was great talking to your agent '
features = {"sentiment":{}}
results = getNLUresponse(natural_language_understanding,features,testQ)
print(json.dumps(results, indent=2))
###Output
_____no_output_____
###Markdown
Call NLU on the specified csv file and collect results.
###Code
features = {"sentiment":{}}
test_labels,nlupredict_labels,nlupredict_score,text=batchNLU(natural_language_understanding,features,test_csv_file)
# print results to csv file including original text, the correct label,
# the predicted label and the score reported by NLU.
csvfileOut=results_csv_file
with open(csvfileOut, 'wb') as csvOut:
outrow=['text','true label','NLU Predicted label','Score']
csvWriter = csv.writer(csvOut,dialect='excel')
csvWriter.writerow(outrow)
for i in range(len(text)):
outrow=[text[i],test_labels[i],nlupredict_labels[i],str(nlupredict_score[i])]
csvWriter.writerow(outrow)
# Compute confusion matrix
labels=list(set(test_labels))
nlu_confusion_matrix = confusion_matrix(test_labels, nlupredict_labels, labels)
nluConfMatrix = ConfusionMatrix(test_labels, nlupredict_labels)
# Print out confusion matrix with labels to csv file
confmatrix2csv(nlu_confusion_matrix,labels,confmatrix_csv_file)
%matplotlib inline
nluConfMatrix.plot()
# Compute accuracy of classification
acc=accuracy_score(test_labels, nlupredict_labels)
print('Classification Accuracy: ', acc)
# print precision, recall and f1-scores for the different classes
print(classification_report(test_labels, nlupredict_labels, labels=labels))
#Optional if you would like each of these metrics separately
#[precision,recall,fscore,support]=precision_recall_fscore_support(test_labels, nlupredict_labels, labels=labels)
#print("precision: ", precision)
#print("recall: ", recall)
#print("f1 score: ", fscore)
#print("support: ", support)
###Output
_____no_output_____ |
chapter25.ipynb | ###Markdown
Chapter 25. Introduction to the Various Connectivity Analyses
###Code
import numpy as np
import matplotlib.pyplot as plt
from numpy.fft import fft, ifft
###Output
_____no_output_____
###Markdown
25.3
###Code
s_rate = 1000
rand_sig1 = np.random.randn(3*s_rate)
rand_sig2 = np.random.randn(3*s_rate)
# filter @ 5Hz
f = 5 # freq of wavelet in Hz
time = np.arange(-1, 1 + 1/s_rate, 1/s_rate) # time for wavelet from -1 to 1 in secs
s = 6/(2*np.pi*f) # stdev of Gaussian
wavelet = np.exp(2*np.pi*1j*f*time) * np.exp(-time**2 / (2*s**2))
# fft params
n_wavelet = len(wavelet)
n_data = len(rand_sig1)
n_convolution = n_data+n_wavelet-1
half_of_wavelet_size = len(wavelet)//2
# fft of wavelet and eeg data
convolution_result_fft = ifft(fft(wavelet, n_convolution) * fft(rand_sig1, n_convolution))*np.sqrt(s)/10
filtsig1 = convolution_result_fft[half_of_wavelet_size:-half_of_wavelet_size].real
anglesig1 = np.angle(convolution_result_fft[half_of_wavelet_size:-half_of_wavelet_size])
convolution_result_fft = ifft(fft(wavelet, n_convolution) * fft(rand_sig2, n_convolution))*np.sqrt(s)/10
filtsig2 = convolution_result_fft[half_of_wavelet_size:-half_of_wavelet_size].real
anglesig2 = np.angle(convolution_result_fft[half_of_wavelet_size:-half_of_wavelet_size])
fig, ax = plt.subplots(2,1,sharex='all', sharey='all')
ax[0].plot(rand_sig1)
ax[0].plot(filtsig1)
ax[1].plot(rand_sig2)
ax[1].plot(filtsig2)
correlations = np.zeros((5, int(1000/f)))
# use a simpler correlation-like measure instead of np.corrcoef:
# the dot product of the two vectors divided by the product of their norms (cosine similarity)
corr = lambda x,y: x@y/(np.linalg.norm(x)*np.linalg.norm(y))
for i in range(1, int(1000/f)):
# corr of unfiltered random sig
correlations[0, i] = corr(rand_sig1[:-i], rand_sig1[i:])
# corr of filtered sig
correlations[1, i] = corr(filtsig1[:-i], filtsig1[i:])
# phase clustering
correlations[2, i] = np.abs(np.mean(np.exp(1j * np.angle(anglesig1[:-i] -anglesig1[i:]))))
# diff of correlations of filtered signal
correlations[3, i] = corr(filtsig2[:-i], filtsig2[i:]) - correlations[1,i]
# difference of phase clusterings
correlations[4, i] = np.abs(np.mean(np.exp(1j * np.angle(anglesig2[:-i] -anglesig2[i:])))) - correlations[2,i]
plt.plot(correlations.T)
plt.legend(['unfilt','power corr','ipsc','corr diffs','psc diffs'])
###Output
_____no_output_____ |
for-the-homework-of-hyl.ipynb | ###Markdown
Of the 117 brands in total, most of them (91) have fewer than 10 recorded installs, so for now we (somewhat loosely) decided to group them as `Others`. That is, below we will group `BRANDS` with `COUNT <= 10` into `Others`.
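A small sketch of that grouping step, assuming `ans` is the brand/count table used below with columns `BRANDS` and `COUNT`:

```python
# Hypothetical sketch: collapse rarely-installed brands into an 'Others' bucket
ans_grouped = ans.copy()
ans_grouped.loc[ans_grouped['COUNT'] <= 10, 'BRANDS'] = 'Others'
ans_grouped = ans_grouped.groupby('BRANDS', as_index=False)['COUNT'].sum()
print(ans_grouped.sort_values('COUNT', ascending=False).head())
```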
###Code
ans[(ans["COUNT"]>30)]
import matplotlib.pyplot as plt
# plt.figure(dpi=200,figsize=(2,1))
plt.plot(ans["BRANDS"],ans["COUNT"],color='r',marker='o',linestyle='dashed')
plt.axis([0,10,10,3000])
plt.annotate('Most Popular Phone \n used to play the game',xy=(0,2600),xytext=(3,2500),
fontsize=16,
# arrowprops=dict(facecolor='black',shrink=0.1)
arrowprops=dict(arrowstyle='<-')
)
plt.xticks(fontsize=8)
# plt.legend()
plt.savefig('line-chart')
plt.show()
# assign the data
samsung = phone_brands.BRANDS.value_counts()['SAMSUNG']
oppo = phone_brands.BRANDS.value_counts()['OPPO']
huawei = phone_brands.BRANDS.value_counts()['HUAWEI']
xiaomi = phone_brands.BRANDS.value_counts()['XIAOMI']
asus = phone_brands.BRANDS.value_counts()['ASUS']
htc = phone_brands.BRANDS.value_counts()['HTC']
sony = phone_brands.BRANDS.value_counts()['SONY']
vivo = phone_brands.BRANDS.value_counts()['VIVO']
lge = phone_brands.BRANDS.value_counts()['LGE']
names = ['SAMSUNG', 'OPPO', 'HUAWEI', 'XIAOMI', 'ASUS', 'HTC', 'SONY', 'VIVO', 'LGE']
size = [samsung, oppo, huawei, xiaomi, asus, htc, sony, vivo, lge]
#explode = (0.2, 0, 0)
# create a pie chart
plt.pie(x=size,
labels=names,
# colors=['red', 'darkorange', 'silver',],
colors=['#74d2e7','#48a9c5','#0085ad','#8db9ca','#4298b5', '#005670', '#00205b', '#008eaa'],
autopct='%1.2f%%',
pctdistance=0.6,
textprops=dict(fontweight='bold'),
wedgeprops={'linewidth':7, 'edgecolor':'white'})
# create circle for the center of the plot to make the pie look like a donut
my_circle = plt.Circle((0,0), 0.6, color='white')
# plot the donut chart
fig = plt.gcf()
fig.set_size_inches(8,8)
fig.gca().add_artist(my_circle)
plt.title('\nTop 8 (COUNT >= 100) Phone Brands', fontsize=14, fontweight='bold')
plt.savefig('Pie-chart')
plt.show()
# plt.savefig('Pie-chart')
###Output
_____no_output_____
###Markdown
**To Do List:**
- [ ] **Nightingale rose chart**
- [x] **Line chart**
- [x] **Pie chart**

https://www.kaggle.com/alafun/for-the-homework-of-hyl
###Code
# plt.rcParams['font.sans-serif'] = ['SimHei']
product_type=["SWE08UF","SWE18UF","SWE25F","SWE40UF","SWE60E","SWE70E","SWE80E9","SWE100E","SWE150E","SWE155E","SWE205E","SWE215E","SWE365E-3","SWE385E","SWE470E","SWE500E","SWE580E","SWE900E"]
sales_num=[int(abs(i)*100) for i in np.random.randn(len(product_type))]
sn=pd.DataFrame({"产品类型":product_type,"销量":sales_num}).sort_values("销量",ascending=False)
N=len(product_type)
theta=np.linspace(0+(100/180)*np.pi,2*np.pi+(100/180)*np.pi,N,endpoint=False)
width=2*np.pi/N
colors=plt.cm.viridis([i for i in np.random.random(N)])
plt.figure(figsize=(15,15))
plt.subplot(111,projection="polar")
bars=plt.bar(theta,sn["销量"],width=width,bottom=30,color=colors)
for r,s,t,bar in zip(theta,sn["销量"],sn["产品类型"],bars):
plt.text(r,s+45,t+"\n"+str(s)+' ',ha="center",va="baseline",fontsize=14)
plt.axis("off")
# plt.savefig(".jpg")
###Output
_____no_output_____ |
model/model-keras.ipynb | ###Markdown
Model built with raw TensorFlow

This model is an attempt to drop the HuggingFace transformers library and use pure TensorFlow code. The goal is to get a single source of truth and to use Google's models directly.

Adapted from:
###Code
# Base
import os
import shutil
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Tensorflow
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
import tensorflowjs as tfjs
#from official.nlp import optimization # to create AdamW optimizer
# Data
from datasets import load_dataset
# Custom
from helper_functions import load_sst_dataset, plot_model_history, save_ts_model
###Output
_____no_output_____
###Markdown
Configuration
###Code
# Set TensorFlow to log only the errors
tf.get_logger().setLevel('ERROR')
# Force the use of the CPU instead of the GPU if running out of GPU memory
device = '/CPU:0' # input '/CPU:0' to use the CPU or '/GPU:0' for the GPU
# Model to be used
bert_model_name = 'small_bert/bert_en_uncased_L-2_H-128_A-2'
tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/2'
tfhub_handle_preprocess = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
# Tokenizing parameters
max_length = 60 # Max length of an input
# Training parameters
epochs = 1 # 1 is enough for code testing
###Output
_____no_output_____
###Markdown
Load the dataset
###Code
text_train, Y1, Y2, text_test, Y1_test, Y2_test = load_sst_dataset()
###Output
WARNING:datasets.builder:Reusing dataset sst (C:\Users\thiba\.cache\huggingface\datasets\sst\default\1.0.0\b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff)
100%|██████████| 3/3 [00:00<00:00, 1000.23it/s]
###Markdown
Load Bert
###Code
# For preprocessing
bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)
# Bert itself
bert_model = hub.KerasLayer(tfhub_handle_encoder)
###Output
_____no_output_____
###Markdown
Build the model
###Code
def build_model():
# Input
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
# Preprocessing
preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing')
# Encoder
encoder_inputs = preprocessing_layer(text_input)
encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder')
# Encoder's output
outputs = encoder(encoder_inputs)
net = outputs['pooled_output']
net = tf.keras.layers.Dropout(0.1)(net)
# Classifier
regression = tf.keras.layers.Dense(1, name='regression', activation=None)(net)
classifier = tf.keras.layers.Dense(1, name='classifier', activation='sigmoid')(net)
# Final output
outputs = {'regression': regression, 'classifier': classifier}
# Return the model
return tf.keras.Model(text_input, outputs)
# Build the model
model = build_model()
# Loss function used
loss = tf.keras.losses.BinaryCrossentropy(from_logits=False)
# Metric for results evaluation
metrics = tf.metrics.BinaryAccuracy()
# Define the optimizer
optimizer = tf.keras.optimizers.Adam(
learning_rate=5e-05,
epsilon=1e-08,
decay=0.01,
clipnorm=1.0)
# Compile the model
model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
# Training input
x = {'text': tf.convert_to_tensor(text_train)}
# Training output
y = {'classifier': Y2, 'regression':Y1}
# doc: https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
history = model.fit(
x=x,
y=y,
validation_split=0.2,
batch_size=64,
epochs=epochs,
)
# Test input
x_test = {'text': tf.convert_to_tensor(text_test)}
# Test output
y_test = {'classifier': Y2_test, 'regression':Y1_test}
model_eval = model.evaluate(
x=x_test,
y=y_test,
)
###Output
104/104 [==============================] - 26s 254ms/step - loss: 1.4230 - classifier_loss: 0.7068 - regression_loss: 0.7161 - classifier_binary_accuracy: 0.4953 - regression_binary_accuracy: 0.0033
###Markdown
Save the model with model.save

Documentation: ```save_format```
- tf: TensorFlow SavedModel
- h5: HDF5
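When the HDF5 file written below is reloaded later, Keras has to be told how to rebuild the TF-Hub layers. A minimal sketch, using the path from the next cells and the `tf` and `hub` imports from above:

```python
# Hypothetical sketch: reload the HDF5 model saved below; TF-Hub layers are passed as custom objects
reloaded = tf.keras.models.load_model(
    './formats/tf_hdf5/model.h5',
    custom_objects={'KerasLayer': hub.KerasLayer})
reloaded.summary()
```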
###Code
# Save to Tensorflow SavedModel
model.save("./formats/tf_savedmodel",save_format='tf')
# Save to HDF5
model.save('./formats/tf_hdf5/model.h5',save_format='h5')
###Output
_____no_output_____
###Markdown
Convert the model with tensorflowjs_converter

Documentation: ```--input_format```
- tf_saved_model: SavedModel
- tfjs_layers_model: TensorFlow.js JSON format
- keras: Keras HDF5

```--output_format```
- tfjs_layers_model
- tfjs_graph_model
- keras
###Code
# Keras HDF5 --> tfjs_layers_model
!tensorflowjs_converter --input_format keras --output_format tfjs_layers_model ./formats/tf_hdf5/model.h5 ./formats/tfjs_layers_model_from_keras_hdf5
# Keras HDF5 --> tfjs_graph_model
!tensorflowjs_converter --input_format keras --output_format tfjs_graph_model ./formats/tf_hdf5/model.h5 ./formats/tfjs_graph_model_from_keras_hdf5
# tf_saved_model --> tfjs_layers_model
!tensorflowjs_converter --input_format tf_saved_model --output_format=tfjs_layers_model ./formats/tf_savedmodel ./formats/tfjs_layers_model_from_tf_saved_model
# tf_saved_model --> tfjs_graph_model
!tensorflowjs_converter --input_format tf_saved_model --output_format=tfjs_graph_model ./formats/tf_savedmodel ./formats/tfjs_graph_model_from_tf_saved_model
###Output
2021-10-17 21:13:22.786947: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-10-17 21:13:23.483860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1789 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1
Traceback (most recent call last):
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\framework\ops.py", line 3962, in _get_op_def
return self._op_def_cache[type]
KeyError: 'CaseFoldUTF8'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\saved_model\load.py", line 902, in load_internal
loader = loader_cls(object_graph_proto, saved_model_proto, export_dir,
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\saved_model\load.py", line 137, in __init__
function_deserialization.load_function_def_library(
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\saved_model\function_deserialization.py", line 388, in load_function_def_library
func_graph = function_def_lib.function_def_to_graph(copy)
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\framework\function_def_to_graph.py", line 63, in function_def_to_graph
graph_def, nested_to_flat_tensor_name = function_def_to_graph_def(
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\framework\function_def_to_graph.py", line 228, in function_def_to_graph_def
op_def = default_graph._get_op_def(node_def.op) # pylint: disable=protected-access
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\framework\ops.py", line 3966, in _get_op_def
pywrap_tf_session.TF_GraphGetOpDef(self._c_graph, compat.as_bytes(type),
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'CaseFoldUTF8' in binary running on IDEAPAD. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\thiba\anaconda3\envs\bert\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\thiba\anaconda3\envs\bert\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\thiba\anaconda3\envs\bert\Scripts\tensorflowjs_converter.exe\__main__.py", line 7, in <module>
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflowjs\converters\converter.py", line 813, in pip_main
main([' '.join(sys.argv[1:])])
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflowjs\converters\converter.py", line 817, in main
convert(argv[0].split(' '))
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflowjs\converters\converter.py", line 803, in convert
_dispatch_converter(input_format, output_format, args, quantization_dtype_map,
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflowjs\converters\converter.py", line 523, in _dispatch_converter
tf_saved_model_conversion_v2.convert_tf_saved_model(
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 599, in convert_tf_saved_model
model = _load_model(saved_model_dir, saved_model_tags)
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 536, in _load_model
model = load(saved_model_dir, saved_model_tags)
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\saved_model\load.py", line 864, in load
result = load_internal(export_dir, tags, options)["root"]
File "C:\Users\thiba\anaconda3\envs\bert\lib\site-packages\tensorflow\python\saved_model\load.py", line 905, in load_internal
raise FileNotFoundError(
FileNotFoundError: Op type not registered 'CaseFoldUTF8' in binary running on IDEAPAD. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
If trying to load on a different device from the computational device, consider using setting the `experimental_io_device` option on tf.saved_model.LoadOptions to the io_device such as '/job:localhost'.
###Markdown
Export the model directly with the tfjs converters. As suggested in: Where is the API documentation of tfjs.converters? From our test, it saves the model in the tfjs_layers_model format by default.
###Code
# Save directly
tfjs.converters.save_keras_model(model, './formats/tfjs-direct')
###Output
_____no_output_____ |
machine_learning_nanodegree/modulo-02/projeto/boston_housing_PT.ipynb | ###Markdown
Machine Learning Engineer Nanodegree

Model Evaluation and Validation

Project 1: Predicting Boston Housing Prices

Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some code templates are provided for you, and you will need to implement additional functionality to complete this project successfully. You will not need to modify the included code beyond what is being asked. Sections that start with **'Implementação'** (Implementation) in the header indicate that the following code block will require you to provide additional functionality. Instructions are given for each section, and the specifics of the implementation are marked in the code block with a 'TODO' comment. Do not forget to read the instructions carefully!

Besides the implemented code, there are questions related to the project and your implementation that you must answer. Each section where there is a question for you to answer is preceded by **'Questão X'** (Question X) in the header. Read each question carefully and give complete answers in the text box that follows, which contains **'Resposta: '** (Answer). The submitted project will be evaluated based on the answers to each of the questions and on the implementation you provide.

>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell to enter edit mode.

Before you begin

Make sure your version of scikit-learn is the one that should be used in this notebook. Run the cell below to check whether your version is the right one. If you do not want to downgrade, you need to pay attention to the differences mentioned throughout the code.
###Code
import sklearn
print 'The scikit-learn version is ', sklearn.__version__
if sklearn.__version__ >= '0.18':
print "Você precisa fazer downgrade do scikit-learn ou ficar atento as diferenças nas versões citadas."
print "Pode ser feito executando:\n"
print "pip install scikit-learn==0.17"
else:
print "Tudo certo!"
###Output
The scikit-learn version is 0.19.1
Você precisa fazer downgrade do scikit-learn ou ficar atento as diferenças nas versões citadas.
Pode ser feito executando:
pip install scikit-learn==0.17
###Markdown
Ambiente de análiseAntes de qualquer análise, para auxiliar na reprodutibilidade por outras pessoas, eu gosto de utilizar uma função que mostra de modo detalhado o meu **ambiente de análise** (hardware, software, versões do Python e packages utilizados). A próxima célula define essa função e mostra o meu ambiente.**ATENÇÃO:** eu fiz essa função para poder reportar meu ambiente de trabalho, em Linux. Não me preocupei em preparar a função para funcionar corretamente em Windows ou Mac, portanto, **a função não deve funcionar em Windows ou Mac!**
###Code
def ambienteDaAnalise():
"""
Abrantes Araújo Silva Filho
[email protected]
Imprime informações a respeito do ambiente de análise utilizado,
incluindo detalhes do hardware, software, versões do Python e
packages utilizados.
Pré-requisitos: utiliza os packages subprocess, sys, pip, platform,
psutil e math
Limitações: essa função foi pensada para sistemas operacionais
Linux, não testada em Windows ou Mac. Na verdade nem me
preocupei em preparar a função para funcionar corretamente
em Windows ou Mac, portanto provavelmente ela NÃO FUNCIONARÁ
nesses sistemas operacionais!
Inputs: nenhum
Outputs: print de informações
Retorno: nenhum
"""
import os
import subprocess
import sys
import pip
import platform
import psutil
import math
data = subprocess.check_output(['date', '+%Y-%m-%d, %H:%M (UTC %:z), %A']).replace('\n', '')
nomeComputador = platform.node()
arquiteturaComputador = platform.machine()
processador = platform.processor()
MHz = psutil.cpu_freq()[2] / 1000.0
coresFisicos = psutil.cpu_count(logical = False)
coresLogicos = psutil.cpu_count()
memoriaFisica = math.ceil(psutil.virtual_memory()[0] / 1024 / 1024 / 1024.0)
plataforma = platform.system()
plataforma2 = sys.platform
osName = os.name
versaoSO = platform.version()
release = platform.release()
versaoPython = platform.python_version()
implementacaoPython = platform.python_implementation()
buildPython = platform.python_build()[0] + ', ' + platform.python_build()[1]
interpreterPython = sys.version.replace('\n', '')
pacotes = sorted(["%s==%s" % (i.key, i.version) \
for i in pip.get_installed_distributions()])
print('|-----------------------------------------------------------|')
print('| HARDWARE |')
print('|-----------------------------------------------------------|')
print('Host: ' + nomeComputador)
print('Arquitetura: ' + arquiteturaComputador)
print('Processador: ' + processador + ' de ' + str(MHz) + ' MHz (' \
+ str(coresFisicos) + ' cores físicos e ' + str(coresLogicos) + \
' cores lógicos)')
print('Memória física: ' + str(memoriaFisica) + ' GB')
print('|-----------------------------------------------------------|')
print('| SOFTWARE |')
print('|-----------------------------------------------------------|')
print('Plataforma: ' + plataforma + ' (' + plataforma2 + ', ' + osName \
+ ')')
print('Versão: ' + versaoSO)
print('Release: ' + release)
print('|-----------------------------------------------------------|')
print('| PYTHON |')
print('|-----------------------------------------------------------|')
print('Python: ' + versaoPython)
print('Implementação: ' + implementacaoPython)
print('Build: ' + buildPython)
print('Interpreter: ' + interpreterPython)
print('|-----------------------------------------------------------|')
print('| PACKAGES |')
print('|-----------------------------------------------------------|')
for pacote in pacotes:
print(pacote)
print('|-----------------------------------------------------------|')
print('| DATA |')
print('|-----------------------------------------------------------|')
print('A data atual do sistema é: ' + data)
ambienteDaAnalise()
###Output
|-----------------------------------------------------------|
| HARDWARE |
|-----------------------------------------------------------|
Host: analytics01
Arquitetura: x86_64
Processador: x86_64 de 3.3 MHz (2 cores físicos e 4 cores lógicos)
Memória física: 8.0 GB
|-----------------------------------------------------------|
| SOFTWARE |
|-----------------------------------------------------------|
Plataforma: Linux (linux2, posix)
Versão: #29~16.04.2-Ubuntu SMP Tue Jan 9 22:00:44 UTC 2018
Release: 4.13.0-26-generic
|-----------------------------------------------------------|
| PYTHON |
|-----------------------------------------------------------|
Python: 2.7.14
Implementação: CPython
Build: default, Dec 7 2017 17:05:42
Interpreter: 2.7.14 |Anaconda, Inc.| (default, Dec 7 2017, 17:05:42) [GCC 7.2.0]
|-----------------------------------------------------------|
| PACKAGES |
|-----------------------------------------------------------|
alabaster==0.7.10
asn1crypto==0.24.0
astroid==1.6.0
babel==2.5.1
backports-abc==0.5
backports.functools-lru-cache==1.4
backports.shutil-get-terminal-size==1.0.0
backports.ssl-match-hostname==3.5.0.1
beautifulsoup4==4.6.0
bleach==2.1.2
certifi==2017.11.5
cffi==1.11.2
chardet==3.0.4
cloudpickle==0.5.2
configparser==3.5.0
cryptography==2.1.4
cycler==0.10.0
decorator==4.1.2
docopt==0.6.2
docutils==0.14
entrypoints==0.2.3
enum34==1.1.6
functools32==3.2.3.post2
html5lib==1.0.1
idna==2.6
imagesize==0.7.1
ipaddress==1.0.19
ipykernel==4.7.0
ipython-genutils==0.2.0
ipython==5.4.1
ipywidgets==7.1.0
isort==4.2.15
jedi==0.11.0
jinja2==2.10
jsonschema==2.6.0
jupyter-client==5.2.1
jupyter-console==5.2.0
jupyter-core==4.4.0
lazy-object-proxy==1.3.1
markupsafe==1.0
matplotlib==2.1.1
mccabe==0.6.1
mistune==0.8.1
nbconvert==5.3.1
nbformat==4.4.0
notebook==5.2.2
numpy==1.14.0
numpydoc==0.7.0
pandas==0.22.0
pandocfilters==1.4.2
parso==0.1.1
pathlib2==2.3.0
patsy==0.4.1
pexpect==4.3.1
pickleshare==0.7.4
pip==9.0.1
prompt-toolkit==1.0.15
psutil==5.4.3
ptyprocess==0.5.2
pycodestyle==2.3.1
pycparser==2.18
pyflakes==1.6.0
pygments==2.2.0
pylint==1.8.1
pyopenssl==17.5.0
pyparsing==2.2.0
pysocks==1.6.7
python-dateutil==2.6.1
pytz==2017.3
pyzmq==16.0.3
qtawesome==0.4.4
qtconsole==4.3.1
qtpy==1.3.1
requests==2.18.4
rope==0.10.7
scandir==1.6
scikit-learn==0.19.1
scipy==1.0.0
seaborn==0.8.1
setuptools==36.5.0.post20170921
simplegeneric==0.8.1
singledispatch==3.4.0.3
six==1.11.0
snowballstemmer==1.2.1
sphinx==1.6.6
sphinxcontrib-websupport==1.0.1
spyder==3.2.6
statsmodels==0.8.0
stemgraphic==0.3.7
subprocess32==3.2.7
terminado==0.8.1
testpath==0.3.1
tornado==4.5.3
traitlets==4.3.2
typing==3.6.2
unicodecsv==0.14.1
urllib3==1.22
wcwidth==0.1.7
webencodings==0.5.1
wheel==0.30.0
widgetsnbextension==3.1.0
wrapt==1.10.11
|-----------------------------------------------------------|
| DATA |
|-----------------------------------------------------------|
A data atual do sistema é: 2018-01-15, 22:52 (UTC -02:00), segunda
###Markdown
ComeçandoNeste projeto, você irá avaliar o desempenho e o poder de estimativa de um modelo que foi treinado e testado em dados coletados dos imóveis dos subúrbios de Boston, Massachusetts. Um modelo preparado para esses dados e visto como *bem ajustado* pode ser então utilizado para certas estimativas sobre um imóvel – em particular, seu valor monetário. Esse modelo seria de grande valor para alguém como um agente mobiliário, que poderia fazer uso dessas informações diariamente.O conjunto de dados para este projeto se origina do [repositório de Machine Learning da UCI](https://archive.ics.uci.edu/ml/datasets/Housing). Os dados de imóveis de Boston foram coletados em 1978 e cada uma das 489 entradas representa dados agregados sobre 14 atributos para imóveis de vários subúrbios de Boston. Para o propósito deste projeto, os passos de pré-processamento a seguir foram feitos para esse conjunto de dados:- 16 observações de dados possuem um valor `'MEDV'` de 50.0. Essas observações provavelmente contêm **valores ausentes ou censurados** e foram removidas.- 1 observação de dados tem um valor `'RM'` de 8.78. Essa observação pode ser considerada **valor atípico (outlier)** e foi removida.- Os atributos `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` são essenciais. O resto dos **atributos irrelevantes** foram excluídos.- O atributo `'MEDV'` foi **escalonado multiplicativamente** para considerar 35 anos de inflação de mercado.** Execute a célula de código abaixo para carregar o conjunto dos dados dos imóveis de Boston, além de algumas bibliotecas de Python necessárias para este projeto. Você vai saber que o conjunto de dados carregou com sucesso se o seu tamanho for reportado. **
###Code
# Importar as bibliotecas necessárias para este projeto
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
import matplotlib.pyplot as plt
from sklearn.cross_validation import ShuffleSplit
# Formatação mais bonita para os notebooks
%matplotlib inline
# Executar o conjunto de dados de imóveis de Boston
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Êxito
print "O conjunto de dados de imóveis de Boston tem {} pontos com {} variáveis em cada.".format(*data.shape)
###Output
O conjunto de dados de imóveis de Boston tem 489 pontos com 4 variáveis em cada.
###Markdown
Explorando os DadosNa primeira seção deste projeto, você fará uma rápida investigação sobre os dados de imóveis de Boston e fornecerá suas observações. Familiarizar-se com os dados durante o processo de exploração é uma prática fundamental que ajuda você a entender melhor e justificar seus resultados.Dado que o objetivo principal deste projeto é construir um modelo de trabalho que tem a capacidade de estimar valores dos imóveis, vamos precisar separar os conjuntos de dados em **atributos** e **variável alvo**. O **atributos**, `'RM'`, `'LSTAT'` e `'PTRATIO'`, nos dão informações quantitativas sobre cada ponto de dado. A **variável alvo**, `'MEDV'`, será a variável que procuramos estimar. Eles são armazenados em `features` e ` prices`, respectivamente. Minha Análise Exploratória dos DadosAntes de continuar a responder as questões deste projeto, incluí aqui uma breve análise exploratória para me ajudar a compreender melhor o dataset:
###Code
# CONHECIMENTO OS DADOS:
# Primeiras e útlimas observações
print(data.head())
print('')
print(data.tail())
# Alguma variável tem NA?
data.isna().apply(np.sum)
# OK, nenhuma variável tem NA. Vamos a uma descrição básica das variáveis:
data.describe()
# Range das variáveis:
data.max() - data.min()
# Vamos ter uma noção visual da distribuição das variáveis:
data.hist('MEDV')
plt.show()
data.boxplot('MEDV')
plt.show()
data.hist('RM')
plt.show()
data.boxplot('RM')
plt.show()
data.hist('LSTAT')
plt.show()
data.boxplot('LSTAT')
plt.show()
data.hist('PTRATIO')
plt.show()
data.boxplot('PTRATIO')
plt.show()
###Output
_____no_output_____
###Markdown
Implementação: Calcular EstatísticasPara a sua primeira implementação de código, você vai calcular estatísticas descritivas sobre preços dos imóveis de Boston. Dado que o `numpy` já foi importado para você, use essa biblioteca para executar os cálculos necessários. Essas estatísticas serão extremamente importantes depois para analisar várias estimativas resultantes do modelo construído.Na célula de código abaixo, você precisará implementar o seguinte:- Calcular o mínimo, o máximo, a média, a mediana e o desvio padrão do `'MEDV'`, que está armazenado em `prices`. - Armazenar cada cálculo em sua respectiva variável.
###Code
# TODO: Preço mínimo dos dados
minimum_price = np.min(prices)
# TODO: Preço máximo dos dados
maximum_price = np.max(prices)
# TODO: Preço médio dos dados
mean_price = np.mean(prices)
# TODO: Preço mediano dos dados
median_price = np.median(prices)
# TODO: Desvio padrão do preço dos dados
# Atenção: por padrão o Numpy utiliza N graus de liberdade no denominador do cálculo
# da variância (e, portanto, do desvio padrão). Para igualar o comportamento do cálculo
# do desvio padrão do Numpy com o do Pandas e do R, que utilizam N-1 graus de liberdade
# (Correção de Bessel), utilizei o parâmetro "ddof" abaixo:
std_price = np.std(prices, ddof = 1)
# Mostrar as estatísticas calculadas
print "Estatísticas para os dados dos imóveis de Boston:\n"
print "Preço mínimo: ${:,.2f}".format(minimum_price)
print "Preço máximo: ${:,.2f}".format(maximum_price)
print "Preço médio: ${:,.2f}".format(mean_price)
print "Preço mediano: ${:,.2f}".format(median_price)
print "Desvio padrão dos preços: ${:,.2f}".format(std_price)
###Output
Estatísticas para os dados dos imóveis de Boston:
Preço mínimo: $105,000.00
Preço máximo: $1,024,800.00
Preço médio: $454,342.94
Preço mediano: $438,900.00
Desvio padrão dos preços: $165,340.28
###Markdown
Questão 1 - Observação de AtributosPara lembrar, estamos utilizando três atributos do conjunto de dados dos imóveis de Boston: `'RM'`, `'LSTAT'` e `'PTRATIO'`. Para cada observação de dados (vizinhança):- `'RM'` é o número médio de cômodos entre os imóveis na vizinhança.- `'LSTAT'` é a porcentagem de proprietários na vizinhança considerados de "classe baixa" (proletariado).- `'PTRATIO'` é a razão de estudantes para professores nas escolas de ensino fundamental e médio na vizinhança.**Usando a sua intuição, para cada um dos atributos acima, você acha que um aumento no seu valor poderia levar a um _aumento_ no valor do `'MEDV'` ou uma _diminuição_ do valor do `'MEDV'`? Justifique sua opinião para cada uma das opções.** **Dica:** Você pode tentar responder pensando em perguntas como:* Você espera que um imóvel que tem um valor `'RM'` de 6 custe mais ou menos que um imóvel com valor `'RM'` de 7?* Você espera que um imóvel em um bairro que tem um valor `'LSTAT'` de 15 custe mais ou menos que em um bairro com valor `'LSTAD'` de 20?* Você espera que um imóvel em um bairro que tem um valor `'PTRATIO'` de 10 custe mais ou menos que em um bairro com `'PTRATIO'` de 15? **Resposta:** Intuitivamente, pelo senso comum, eu espero que:* `'RM'` tenha correlação positiva com `'MEDV'`, ou seja, quando `'RM'` aumenta, `'MEDV'` também aumenta. * Arrazoado: se o número médio de cômodos na vizinhança aumenta, possivelmente estamos um um local ou bairro com casas de maior tamanho, para pessoas com maior poder aquisitivo; assim, eu espero que o valor das casas também aumente.* `'LSTAT'` tenha correlação negativa com `'MEDV'`, ou seja, quando `'LSTAT'` aumenta, `'MEDV'` diminui. * Arrazoado: se a porcentagem de proprietários na vizinhança considerados de classe baixa aumenta, possivelmente estamos em um local ou bairro com casas de preços mais baixos; assim, eu espero que o valor das casas diminua quando o número de vizinhos de classe baixa aumente.* `'PTRATIO'` tenha correlação negativa com `'MEDV'`, ou seja, quando `'PTRATION'` diminui, `'MEDV'` aumenta. * Arrazoado: se a razão de estudantes para professores nas escolas de ensino fundamental e médio na vizinhança diminui, possivelmente estamos em um local ou bairro onde os pais podem pagar por uma educação mais exclusiva para seus filhos; assim eu espero que quando os pais tem um poder aquisitivo maior, a razão estudante/professor diminui e o valor das casas aumente.Vou conferir esses meus pressupostos visualmente, com scatters plots na célula abaixo:
###Code
# Scatter Matrix das variáveis, e diagonal com density plots
pd.plotting.scatter_matrix(data, alpha = 0.2, diagonal = 'kde', figsize = (10,10))
plt.show()
data.plot.scatter('RM', 'MEDV')
plt.show()
data.plot.scatter('LSTAT', 'MEDV')
plt.show()
data.plot.scatter('PTRATIO', 'MEDV')
plt.show()
###Output
_____no_output_____
###Markdown
Conforme os gráficos acima parecem sugerir, as duas primeiras suposições entre as relações de `'MEDV'` com as variáveis `'RM'` e `'LSTAT'` estavam corretas.A relação entre `'PTRATIO'` e `'MEDV'` é mais difícil de estabelecer com certeza, parece uma correlação negativa, mas é difícil dizer pelo scatter plot. Vou fazer um boxplot comparando o `'MEDV'` com valores arredondados ("discretizados" com a função round) da variável `'PTRATIO'`:
###Code
# Adiciona variável temporária ao dataframe, discretizando a PTRATIO como uma string de inteiros (com a função round):
data['temp'] = features['PTRATIO'].apply(round).apply(int).apply(str)
# Faz o boxplot da MEDV por PTRATIO discretizada:
data.boxplot('MEDV', by = 'temp', figsize=(10,10))
plt.show()
# Remove variável temporária:
data = data.drop('temp', axis = 1)
###Output
_____no_output_____
###Markdown
Mesmo com o boxplot acima ainda é difícil estabelecer qualquer correlação conclusiva entre `'PTRATIO'` e `'MEDV'`. O que o gráfico parece indicar é que para `'PTRATIO'` de 13, o valor mediano de `'MEDV'` é bem alto (ao redor de 700.000), e que para `'PTRATIO'` a partir de 20, o valor mediano de `'MEDV'` tende a ser mais baixo (até 400.000). Mas como a variação dos preços em cada faixa é muito grande, nada pode ser dito com certeza. Para uma idéia melhor da relação entre essas variáveis, podemos utilizar um cálculo de correlação ou um modelo de regressão linear simples (célula abaixo).
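Antes da regressão da célula abaixo, segue um esboço mínimo (apenas ilustrativo, não executado neste notebook) do cálculo de correlação mencionado acima, usando o próprio pandas sobre o `data` já carregado:
```python
# Esboço ilustrativo: correlação de Pearson entre PTRATIO e MEDV.
corr_ptratio_medv = data['PTRATIO'].corr(data['MEDV'])
print('Correlação de Pearson PTRATIO x MEDV: {:.3f}'.format(corr_ptratio_medv))
```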
###Code
# Import da função de regressão linear
from sklearn.linear_model import LinearRegression
# Variável dependente
y = prices
# Variável independente
x = data['PTRATIO'].values.reshape(-1, 1)
# Ajuste do modelo
modelo = LinearRegression()
modelo.fit(x, y)
print(modelo.coef_)
###Output
[-40647.21475514]
###Markdown
Pela regressão linear simples entre `'PTRATIO'` e `'MEDV'` existe uma correlação negativa entre essas variáveis. O modelo simples (sem considerar outras variáveis de confundimento ou interação) mostra que a cada aumento unitário da `'PTRARIO'` o valor de `'MEDV'` tende a diminuir cerca de 40 mil dólares. ---- Desenvolvendo um ModeloNa segunda seção deste projeto, você vai desenvolver ferramentas e técnicas necessárias para um modelo que faz estimativas. Ser capaz de fazer avaliações precisas do desempenho de cada modelo através do uso dessas ferramentas e técnicas ajuda a reforçar a confiança que você tem em suas estimativas. Implementação: Definir uma Métrica de DesempenhoÉ difícil medir a qualidade de um modelo dado sem quantificar seu desempenho durante o treinamento e teste. Isso é geralmente feito utilizando algum tipo de métrica de desempenho, através do cálculo de algum tipo de erro, qualidade de ajuste, ou qualquer outra medida útil. Para este projeto, você irá calcular o [*coeficiente de determinação*](https://pt.wikipedia.org/wiki/R%C2%B2), R2, para quantificar o desempenho do seu modelo. O coeficiente de determinação é uma estatística útil no campo de análise de regressão uma vez que descreve o quão "bom" é a capacidade do modelo em fazer estimativas. Os valores para R2 têm um alcance de 0 a 1, que captura a porcentagem da correlação ao quadrado entre a estimativa e o valor atual da **variável alvo**. Um modelo R2 de valor 0 sempre falha ao estimar a variável alvo, enquanto que um modelo R2 de valor 1, estima perfeitamente a variável alvo. Qualquer valor entre 0 e 1 indica qual a porcentagem da variável alvo (ao utilizar o modelo) que pode ser explicada pelos **atributos**. *Um modelo pode dar também um R2 negativo, que indica que o modelo não é melhor do que aquele que estima ingenuamente a média da variável alvo.*Para a função ‘performance_metric’ na célula de código abaixo, você irá precisar implementar o seguinte:- Utilizar o `r2_score` do `sklearn.metrics` para executar um cálculo de desempenho entre `y_true` e `y_predict`.- Atribuir a pontuação do desempenho para a variável `score`.
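Para deixar mais concreta a definição de R² usada a seguir, um esboço mínimo (apenas ilustrativo) do cálculo "manual" do coeficiente de determinação, equivalente ao que o `r2_score` faz:
```python
# Esboço ilustrativo: R^2 = 1 - SS_res / SS_tot, calculado diretamente com numpy.
import numpy as np

def r2_manual(y_true, y_predict):
    y_true = np.asarray(y_true, dtype=float)
    y_predict = np.asarray(y_predict, dtype=float)
    ss_res = np.sum((y_true - y_predict) ** 2)        # soma dos quadrados dos resíduos
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # soma total dos quadrados
    return 1.0 - ss_res / ss_tot

print(r2_manual([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3]))  # aproximadamente 0.923
```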
###Code
# TODO: Importar 'r2_score'
def performance_metric(y_true, y_predict):
""" Calcular e retornar a pontuação de desempenho entre
valores reais e estimados baseado na métrica escolhida. """
# Importa a função r2_score:
from sklearn.metrics import r2_score
# TODO: Calcular a pontuação de desempenho entre 'y_true' e 'y_predict'
score = r2_score(y_true, y_predict)
# Devolver a pontuação
return score
###Output
_____no_output_____
###Markdown
Questão 2 - Qualidade do AjusteAdmita que um conjunto de dados que contém cinco observações de dados e um modelo fez a seguinte estimativa para a variável alvo:| Valores Reais | Estimativa || :-------------: | :--------: || 3.0 | 2.5 || -0.5 | 0.0 || 2.0 | 2.1 || 7.0 | 7.8 || 4.2 | 5.3 |** Executar a célula de código abaixo para usar a função `performance_metric’ e calcular o coeficiente de determinação desse modelo. **
###Code
# Calcular o desempenho deste modelo
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "O coeficiente de determinação, R^2, do modelo é {:.3f}.".format(score)
###Output
O coeficiente de determinação, R^2, do modelo é 0.923.
###Markdown
* Você consideraria que esse modelo foi capaz de capturar a variação da variável alvo com sucesso? Por que ou por que não?** Dica: *** R2 score com valor 0 significa que a variável dependente não pode ser estimada pela variável independente.* R2 score com valor 1 significa que a variável dependente pode ser estimada pela variável independente.* R2 score com valor entre 0 e 1 significa quanto a variável dependente pode ser estimada pela variável independente.* R2 score com valor 0.40 significa que 40 porcento da variância em Y é estimável por X. **Resposta:** Sim, pois 92% da variância da variável dependente pôde ser explicada pelo modelo com as variáveis independentes. Implementação: Misturar e Separar os DadosSua próxima implementação exige que você pegue o conjunto de dados de imóveis de Boston e divida os dados em subconjuntos de treinamento e de teste. Geralmente os dados são também misturados em uma ordem aleatória ao criar os subconjuntos de treinamento e de teste para remover qualquer viés (ou erro sistemático) na ordenação do conjunto de dados.Para a célula de código abaixo, você vai precisar implementar o seguinte:- Utilize `train_test_split` do `sklearn.cross_validation` para misturar e dividir os dados de `features` e `prices` em conjuntos de treinamento e teste. (se estiver com a versão do scikit-learn > 0.18, utilizar o `sklearn.model_selection`. Leia mais [aqui](http://scikit-learn.org/0.19/modules/generated/sklearn.cross_validation.train_test_split.html)) - Divida os dados em 80% treinamento e 20% teste. - Mude o `random_state` do `train_test_split` para um valor de sua escolha. Isso garante resultados consistentes.- Atribuir a divisão de treinamento e teste para X_train`, `X_test`, `y_train` e `y_test`.
###Code
# TODO: Importar 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Misturar e separar os dados em conjuntos de treinamento e teste
X_train, X_test, y_train, y_test = train_test_split(features, prices,
test_size = 0.2,
random_state = 1974)
# Êxito
print "Separação entre treino e teste feita com êxito."
###Output
Separação entre treino e teste feita com êxito.
###Markdown
Questão 3 - Treinamento e Teste* Qual o benefício de separar o conjunto de dados em alguma relação de subconjuntos de treinamento e de teste para um algoritmo de aprendizagem?**Dica:** O que pode dar errado se não houver uma maneira de testar seu modelo? **Resposta:** se não separarmos uma proporção dos dados para servir de teste final para o algoritmo de aprendizagem escolhido, não conseguiremos saber se o modelo definido é bom de verdade, não conseguiremos estimar seu desempenho e sua capacidade de generalização.O objetivo do conjunto de teste é estimar a qualidade do modelo em um conjunto de dados em uma situação próxima daquela em que ele irá operar, com dados não vistos na fase de treinamento. Se não temos esses dados de treinamento, não conseguiremos saber como o modelo se comportará com novos dados e assim não saberemos se o modelo é bom e/ou generalizável.Esse modelo que foi treinado com todos os dados teria um escore muito bom nos dados de treinamento, mas seria muito ruim na predição de dados que ele não conhece. Essa situação é chamada de **overfitting** e significa que o modelo treinado não é generalizável.Dividir o conjunto de dados em dados de treinamento e dados de teste garante que o modelo deverá aprender com um subconjunto de dados e que será testado com um novo conjunto de dados que o modelo não conhece e os resultados da predição nesse novo conjunto de dados poderão nos confirmar se o modelo é bom ou não. ---- Analisando o Modelo de DesempenhoNa terceira parte deste projeto, você verá o desempenho em aprendizagem e teste de vários modelos em diversos subconjuntos de dados de treinamento. Além disso, você irá investigar um algoritmo em particular com um parâmetro `'max_depth'` (profundidade máxima) crescente, em todo o conjunto de treinamento, para observar como a complexidade do modelo afeta o desempenho. Plotar o desempenho do seu modelo baseado em critérios diversos pode ser benéfico no processo de análise, por exemplo: para visualizar algum comportamento que pode não ter sido aparente nos resultados sozinhos. Curvas de AprendizagemA célula de código seguinte produz quatro gráficos para um modelo de árvore de decisão com diferentes níveis de profundidade máxima. Cada gráfico visualiza a curva de aprendizagem do modelo para ambos treinamento e teste, assim que o tamanho do conjunto treinamento aumenta. Note que a região sombreada da curva de aprendizagem denota a incerteza daquela curva (medida como o desvio padrão). O modelo é pontuado em ambos os conjuntos treinamento e teste utilizando R2, o coeficiente de determinação. **Execute a célula de código abaixo e utilizar esses gráficos para responder as questões a seguir.**
###Code
# Criar curvas de aprendizagem para tamanhos de conjunto de treinamento variável e profundidades máximas
vs.ModelLearning(features, prices)
###Output
_____no_output_____
###Markdown
Questão 4 - Compreendendo os Dados* Escolha qualquer um dos gráficos acima e mencione a profundidade máxima escolhida.* O que acontece com a pontuação da curva de treinamento se mais pontos de treinamento são adicionados? E o que acontece com a curva de teste?* Ter mais pontos de treinamento beneficia o modelo?**Dica:** As curvas de aprendizagem convergem para uma pontuação em particular? Geralmente, quanto mais dados você tem, melhor. Mas, se sua curva de treinamento e teste estão convergindo com um desempenho abaixo do benchmark, o que seria necessário? Pense sobre os prós e contras de adicionar mais pontos de treinamento baseado na convergência das curvas de treinamento e teste. **Resposta:*** *Primeira pergunta*: pelos quatro gráficos de Learning Curves apresentados, eu escolheria como o melhor modelo a árvore de decisão com `max_depth = 3`, já que esse modelo apresentou um bom resultado (menos erros) tanto no conjunto de treinamento quanto no conjunto de teste. * Também podemos ver que a característica de convergência das curvas de treinamento e de teste em um nível de erro baixo (conforme indicado pelo alto score) é característica de modelos mais próximos ao ideal, sem overfitting nem underfitting. A característica de convergência das curvas nos modelos mais próximos ao ideal é a seguinte: * Curva de Treinamento: começa com poucos erros (escore alto) e, à medida em que mais pontos de treinamento são adicionados, a curva começa a apresentar mais erros (e o escore diminui um pouco), mas mesmo assim ainda apresenta escore bom (poucos erros) * Curva e Teste: começa com muito erros (escore baixo) e, à medida em que mais pontos de teste são adicionados, a curta começa a apresentam menos erros (e o escore aumenta bastante) * Característica da Convergência: as curvas de treinamento e teste tendem a convergir em um nível baixo de erro (escore alto) * *Segunda pergunta*: quando mais pontos de treinamento são adicionados, as características das Learning Curves variará conforme o ajuste do modelo, se mais próximo do ideal, se está com underfitting ou com overfitting. 
Em geral: * Modelos mais próximos do ideal: * Curva de Treinamento: começa com poucos erros (escore alto) e, à medida em que mais pontos de treinamento são adicionados, a curva começa a apresentar mais erros (e o escore diminui um pouco), mas mesmo assim ainda apresenta escore bom (poucos erros) * Curva e Teste: começa com muito erros (escore baixo) e, à medida em que mais pontos de teste são adicionados, a curta começa a apresentam menos erros (e o escore aumenta bastante) * Característica da Convergência: as curvas de treinamento e teste tendem a convergir em um nível baixo de erro (escore alto) * Modelos com underfitting: * Curva de Treinamento: começa com poucos erros (escore alto) e, à medida em que mais pontos de treinamento são adicionados, a curva começa a apresentar mais erros (e o escore diminui bastante), passando a apresentar um escore ruim (em um nível mais alto de erros) * Curva e Teste: começa com muito erros (escore baixo) e, à medida em que mais pontos de teste são adicionados, a curta começa a apresentam um pouco menos de erro, mas o escore continua baixo * Característica da Convergência: as curvas de treinamento e teste tendem a convergir em um nível alto de erro (escore baixo) * Modelos com overfitting: * Curva de Treinamento: começa com poucos erros (escore alto) e, à medida em que mais pontos de treinamento são adicionados, os erros aumentam muito pouco ou quase nada, mantendo um escore bem alto * Curva e Teste: começa com muito erros (escore baixo) e, à medida em que mais pontos de teste são adicionados, a curta começa a apresentar um pouco menos de erro, mas o escore continua baixo * Característica da Convergência: como a curva de treinamento apresenta escore muito alto e a curva de teste apresenta escore muito baixo as curvas não tendem à convergência, ao contrário: tendem a ficar separadas (quanto mais separadas, maior o overfitting) * *Terceira pergunta*: sim, ter mais pontos de treinamento beneficia o modelo que pode "aprender" a partir de uma base com maior variabilidade e, assim, ser capaz de ser mais generalizável. O limite é que não podemos utilizar *todos* os pontos do dataset para o treinamento pois isso poderia levar a uma situação de overfitting e diminuir a capacidade preditiva para novos dados. Curvas de ComplexidadeA célula de código a seguir produz um gráfico para um modelo de árvore de decisão que foi treinada e validada nos dados de treinamento utilizando profundidades máximas diferentes. O gráfico produz duas curvas de complexidade – uma para o treinamento e uma para a validação. Como a **curva de aprendizagem**, a área sombreada de ambas as curvas de complexidade denota uma incerteza nessas curvas, e o modelo pontuou em ambos os conjuntos de treinamento e validação utilizando a função `performance_metric`. ** Execute a célula de código abaixo e utilize o gráfico para responder as duas questões a seguir. **
###Code
vs.ModelComplexity(X_train, y_train)
###Output
_____no_output_____
###Markdown
Questão 5 - Equilíbrio entre viés e variância* Quando o modelo é treinado com o profundidade máxima 1, será que o modelo sofre mais de viés (erro sistemático) ou variância (erro aleatório)?* E o que acontece quando o modelo é treinado com profundidade máxima 10? Quais pistas visuais existem no gráfico para justificar suas conclusões?**Dica:** Como você sabe que um modelo está experimentando viés alto ou variância alta? Viés alto é um sinal de *underfitting* (o modelo não é complexo o suficiente para aprender os dados) e alta variância é um sinal de *overfitting* (o modelo está "decorando" os dados e não consegue generalizar bem o problema). Pense em modelos (com profundidade de 1 e 10, por exemplo) e qual deles está alinhado com qual parte do equilíbrio. **Respostas:*** *Primeira pergunta*: o modelo com `max_depth = 1` está sofrendo mais de viés (erro sistemático) pois o modelo não é complexo o suficiente para captar toda as características que podem ser aprendidas com os dados e, assim, sistematicamente, ele não consegue fazer uma boa predição. Esse modelo sofre de underfitting.* *Segunda pergunta*: o modelo com `max_depth = 10` está sofrendo mais de variância (erro aleatório) pois o modelo foi tão complexo que, na verdade, ele não "aprendeu" realmente com os dados, ele praticamente "decorou" os dados e se ajustou perfeitamente ao dataset e, assim, ele não consegue fazer uma boa predição pois ele não é generalizável o suficiente para captar a variabilidade que pode existir em novos dados. Esse modelo sofre de overfitting. Visualmente o Complexity Model Graph mostra que, a partir de `max_depth = 5` o escore na curva de treinamento é cada vez melhor, aproximando-se de 1.0, e o escore na curva de validação começa a diminuir fazendo com que as curvas se distanciem. Isso é característico de um modelo com overfitting: bons escores no treinamento, mas escores baixos no teste. O gráfico mostra esse distanciamento das curvas de modo bem claro. Questão 6 - Modelo Ótimo de Melhor Suposição* Qual profundidade máxima (`'max_depth'`) você acredita que resulta em um modelo que melhor generaliza um dado desconhecido?* Que intuição te levou a essa resposta?**Dica: ** Olhe no gráfico acima e veja o desempenho de validação para várias profundidades atribuidas ao modelo. Ele melhora conforme a profundidade fica maior? Em qual ponto nós temos nosso melhor desempenho de validação sem supercomplicar nosso modelo? E lembre-se, de acordo com a [Navalha de Occam](https://pt.wikipedia.org/wiki/Navalha_de_Occam), sempre devemos optar pelo mais simples ao complexo se ele conseguir definir bem o problema. **Resposta:**Eu escolheria como melhor o modelo com `max_depth = 3` pois apresenta um número baixo de erros tanto no conjunto de treinamento quanto no conjunto de teste, conforme pode-se observer pelo Model Complexity Graph acima.O modelo com `max_depth = 4` também seria um bom candidato com nível de erro um pouco mais baixo que o modelo com `max_depth = 3`, mas eu não escolheria o modelo com profundidade 4 pois o ganho em relação ao modelo com profundidade 3 é pequeno e, em contrapartida, a complexidade do modelo aumentaria muito. ----- Avaliando o Desempenho do ModeloNesta parte final do projeto, você irá construir um modelo e fazer uma estimativa de acordo com o conjunto de atributos do cliente utilizando um modelo otimizado a partir de `fit_model`. 
Questão 7 - Busca em Matriz* O que é a técnica de busca em matriz (*grid search*)?* Como ela pode ser aplicada para otimizar um algoritmo de aprendizagem?** Dica: ** Quando explicar a técnica de busca em matriz, tenha certeza que você explicou o motivo dela ser usada, o que a 'matriz' significa nesse caso e qual o objetivo da técnica. Para ter uma resposta mais sólida, você pode também dar exemplo de um parâmetro em um modelo que pode ser otimizado usando essa técnica. **RESPOSTA DA PRIMEIRA PERGUNTA:**Antes de explicar o que é a *grid search* temos que diferenciar o que são os **parâmetros** e os **hiperparâmetros** de um algoritmo de aprendizagem.Os **parâmetros** de um modelo de aprendizagem são obtidos a partir do treinamento do modelo, por exemplo: em um modelo de regressão linear ou de regressão logística, os parâmetros estimados são os valores para cada coeficiante do modelo. Vejamos na prática um exemplo. Um modelo de regressão linear poderia nos fornecer o sequinte resultado (valores fictícios):\begin{equation}MEDV = 100000 + 200000 \times RM - 40000 \times LSTAT - 5000 \times PTRATIO\end{equation}Na regressão linear acima, os **parâmetros** são os valores estimados para cada coeficiente do modelo. Eles são determinados a partir do próprio treinamento do modelo.Os **hiperparâmetros** são outra coisa: eles são parâmetros que devem ser *definidos ANTES que o processo de treinamento e aprendizagem do modelo comece*, são uma espécie de "pressupostos" que nosso modelo utilizará para proceder com o treinamento.Nem todos os algoritmos utilizam hiperparâmetros, por exemplo: regressão linear simples não utiliza nenhum hiperparâmetro, mas SVM utiliza 2 hiperparâmetros principais (o Kernel e o Gamma).Note que como os **hiperparâmetros** devem ser definidos ANTES do aprendizado do modelo, uma tarefa importante é *determinar a melhor combinação de hiperparâmetros* para que o modelo tenha boa capacidade preditiva. Isso é chamado de **otimização de hiperparâmetros** e é o processo de escolher o melhor conjunto de hiperparâmetros para o algoritmo de aprendizagem em uso.E uma das técnicas de otimização de hiperparâmetros é a **GRID SEARCH**, que consiste em uma busca manual e exaustiva em um subconjunto de hiperparâmetros do algoritmo de aprendizagem (uma matriz de hiperparâmetros) guiada por alguma métrica de performance (tipicamente determinada por cross-validation ou por validação em um conjunto de dados de validação separado do conjunto de dados de teste).Um exemplo concreto ajudará a esclarecer o conceito de Grid Search: considere que vamos treinam um algoritmo de SVM. Esse algoritmo tem 2 hiperparâmetros principais, o *Kernel* e o *Gamma* (existem outros hiperparâmetros para a SVM, mas vamos considerar inicialmente somente esses dois). O *Kernel* pode ser escolhido entre as opções `rbf`, `linear`, `poly`, `sigmoid`, e o *Gamma* é um valor numérico real positivo (para os Kernels rbf, poly ou sigmoid).Como podemos então estabelecer a melhor combinação entre esses hiperparâmetros, sendo que o Kernel é um valor nominal e o Gamme é um valor real positivo? Bem, aqui usamos a GRID SEARCH e construímos uma matriz com os valores possíveis do Kernel e com alguns valores escolhidos do Gamma (geralmente em escala logarítmica para abranger uma grande quantidade de valores). 
Nossa matriz poderia se parecer com a seguinte:| Gamma \ Kernel | rbf | poly | sigmoid ||----------------|:---:|:----:|:-------:|| 0,1 | | | || 1 | | | || 10 | | | || 100 | | | |Agora treinaremos nosso modelo de SVM para cada combinação de Kernel/Gamma definida em nossa matriz e utilizamos alguma métrica de avaliação (obtida, por exemplo, por cross-validation) como o F1 Score e verificamos qual combinação foi a de melhor desempenho (no exemplo fictício abaixo, a melhor combinação foi com Kernel rfb e gamma de 100):| Gamma \ Kernel | rbf | poly | sigmoid ||----------------|---------:|---------:|---------:|| 0,1 | F1 = 0,2 | F1 = 0,4 | F1 = 0,4 || 1 | F1 = 0,3 | F1 = 0,5 | F1 = 0,3 || 10 | F1 = 0,6 | F1 = 0,6 | F1 = 0,2 || 100 | **F1 = 0,8** | F1 = 0,6 | F1 = 0,2 |Note que o processo da GRID SEARCH pode se tornar bastante complexo se o número de hiperparâmetros a serem definidos é maior do que 2. Com 3 hiperparâmetros, teríamos uma matriz tridimensional para testar (o produto cartesiano entre o conjunto de todos os valores possíveis dos hiperparâmetros) e com 4 hiperparâmetros teríamos uma matriz de quatro dimensões para testar. Quanto maior o número de hiperparâmetros, mais complexa fica a GRID SEARCH.**RESPOSTA DA SEGUNDA PERGUNTA:**Conforme já demonstrado na resposta anterior, a GRID SEARCH é uma das técnicas que podem ser utilizadas na otimização de hiperparâmetros para nos ajudar a decidir qual a combinação ótima desses hiperparâmetros que levam ao melhor algoritmo de aprendizagem. Questão 8 - Validação Cruzada* O que é a técnica de treinamento de validação-cruzada k-fold?* Quais benefícios essa técnica proporciona para busca em matriz ao otimizar um modelo?**Dica:** Lembre-se de expllicar o que significa o 'k' da validação-cruzada k-fold, como a base de dados é dividida e quantas vezes ela é executada.Assim como há um raciocínio por trás de utilizar um conjunto de teste, o que poderia dar errado ao utilizar busca em matriz sem um conjunto de validação cruzada? Você pode utilizar a [documentação](http://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation) para basear sua resposta. **RESPOSTA DA PRIMEIRA PERGUNTA:**Uma das principais questões que o cientista de dados tem que responder é determinar se o algoritmo de treinamento e o modelo resultante é bom, ou seja, se é capaz de prever bem a variável resposta, sendo generalizável para novos dados. Para esse treinamento e avaliação, algumas situações são possíveis:* *__Usar todos os dados para treinamento__*: uma possibilidade é usar todos os dados para treinamento do modelo. Isso é absolutamente ineficaz e errado pois simplesmente não poderemos determinar se o modelo é bom. Se todos os dados foram utilizados para treinamento, como podemos saber se o modelo é generalizável? Não temos mais nenhum dado para testar e esse modelo provavelmente sofrerá de muito overfitting.* *__Usar um conjunto de treinamento e um conjunto de teste__*: dividir os dados em dois conjuntos separados (treinamento e teste) já permite que o modelo treinado seja avaliado para determinar se é bom. A idéia aqui é treinar o modelo e estimar os parâmetros com os dados de treinamento, e testar a qualidade do modelo em um conjunto de dados de teste que segue a mesma distribuição de probabilidade dos dados de treinamento (na prática os dados são divididos aleatoriamente em dados de treinamento e de teste, garantindo assim que os dados de teste sejam representativos dos dados de treinamento). 
Mas essa abordagem também tem um problema: se quisermos otimizar os hiperparâmetros dos algoritmos em uma Grid Search, por exemplo, *__não podemos utilizar os dados de teste__*, pois o conhecimento dos resultados com os dados de teste poderia influenciar a escolha dos hiperparâmetros e assim gerar overfitting e diminuir a capacidade e generalização do modelo. Os dados de teste devem ser utilizados somente no final de todo o processo, para determinar a qualidade geral do modelo: não podem ser utilizados durante o processo de treinamento em si.* *__Usar um conjunto de treinamento, um conjunto de validação e um conjunto de teste__*: dividindo os dados em três conjuntos separados (mas com a mesma ditribuição de probabilidade do conjunto de treinamento) podemos treinar o modelo com o conjunto de dados de treinamento, validar os modelos e otimizar os hiperparâmetros com o conjunto de dados de valiação e, após a escolha do modelo final, testar a qualidade do modelo com o conjunto de dados de teste. Essa é a melhor abordagem: usamos conjuntos de dados separados para o treinamento, validação e teste. Entretanto, essa abordagem apresenta uma desvantagem: ao separarmos o conjunto de teste dos dados, já estamos diminuindo a quantidade de dados disponíveis para o treinamento do modelo; e ao separarmos mais um conjunto de dados, para a validação, estaremos diminuindo drasticamente ainda mais os dados disponíveis para o treinamento e isso pode causar um efeito indesejável: os resultados do modelo podem variar muito dependendo da escolha aleatória dos conjuntos de treinamento e validação. É para resolver esse problema que utilizamos a **Cross-Validation**.A **Cross-Validation** é uma técnica de validação para verificar como os resultados de um modelo são generalizáveis para outros conjuntos de dados, ou seja, é uma abordagem para avaliar a qualidade de nosso modelo, *sem utilizar os dados de teste durante a validação*.De modo geral o conceito de cross-validation envolve particionar os dados de treinamento em subconjuntos complementares, realizar o treinamento do modelo em um subconjunto (que passa a ser considerado como o conjunto de treinamento), e validar o modelo obtido com o outro subconjunto (que passa a ser considerado como o conjunto de validação). Além disso, para reduzir a variabilidade, a maioria dos métodos de cross-validation é realizada usando uma forma de "rodízio", onde múltiplas rodadas de cross-validation são realizadas e, a cada rodada, o modelo é treinado e validado utilizando diferentes partições dos dados. O resultado final da validação é dado por alguma forma de combinação dos resultados de cada rodada (por exemplo, a média) para estimar a qualidade do modelo preditivo treinado.Em resumo, técnicas de cross-validation combinam métricas de avaliação do modelo para obter uma informação mais precisa a respeito da performance preditiva do modelo.Existem diversas técnicas para a realização da cross-validation, tais como:* Cross-validation exaustiva: são métodos que realizam o treinamento e a validação em todas as maneiras possíveis de dividir os dados em conjuntos de treinamento e validação. As mais comuns são: * Leave-p-out cross-validation * Leave-one-out corss-validation* Cross-validation não-exaustiva: são métodos que realizam o treinamento e a validação em algumas das maneiras possíveis de dividir os dados em conjuntos de treinamento e validação. 
As mais comuns são: * K-Fold Cross-Validation * Holdout * Repeated random sub-samplingA **K-Fold Cross-Validation** é uma técnica de validação cruzada não exaustiva que divide **aleatoriamente** o conjunto de dados de treinamento (sendo que o conjunto de dados de teste já foi separado previamente) em **K** grupos de mesmo tamanho. Desses K grupos, um é escolhido para ser considerado como conjunto de validação, e o restante dos **K - 1** grupos são utilizados no treinamento do modelo. Esse procedimento de validação é repetido **K** vezes, cada vez utilizando um grupo diferente como dados de validação. Esses **K** resultados diferentes são combinados e a média deles é utilizada como estimativa da qualidade do modelo.O número exato de K grupos é sujeito a debate, sendo comuns valores de 4, 5 ou 10 grupos.Como um exemplo, vamos considerar um dataset com 100 observações. Separamos 20% dos dados para teste, e ficamos com 80 observações para treinamento. Dessas 80 observações, faremos uma K-Fold Cross-Validation usando `K = 4`, ou seja, dividiremos as 80 observações em 4 grupos de igual tamanho (20 observações em cada grupo). Chamaremos esses grupos de A, B, C e D. O processo de K-Fold Cross-validation é então repedito K vezes (em nosso exemplo, 4 vezes) e, em cada repetição, um grupo diferente é utilizado como validação, e o restante dos grupos (`K - 1`, 3 em nosso exemplo) será utilizado para treinamento:| **Rodada** | **Grupo de Validação** | **Grupos de Treinamento** ||:----------:|:----------------------:|:-------------------------:|| 1 | A | B, C, D || 2 | B | A, C, D || 3 | C | A, B, D || 4 | D | A, B, C |A média dos K resultados das métricas de avaliação desse modelo será considerada como a avaliação do modelo.**RESPOSTA DA SEGUNDA PERGUNTA:**A K-Fold Cross-Validation nos permite validar os modelos e otimizar os hiperparâmetros através de algum processo de otimização, como o Grid Search, sem que os dados de teste tenham qualquer influência no processo de treinamento e validação. Especificamente para a Grid Search, o maior benefício da K-Fold Cross Validation é que todos os dados são utilizados para treinamento e validação (K - 1 vezes como treinamento, e 1 vez como validação), nos dando uma maior segurança que não estamos superestimando ou subestimando a qualidade do modelo (o que terima maior chance de ocorrer se estivéssemos usando somente 1 grupo de validação fixo). Implementação: Ajustar um ModeloNa sua última implementação, você vai precisar unir tudo o que foi aprendido e treinar um modelo utilizando o **algoritmo de árvore de decisão**. Para garantir que você está produzindo um modelo otimizado, você treinará o modelo utilizando busca em matriz para otimizar o parâmetro de profundidade máxima (`'max_depth'`) para uma árvore de decisão. Esse parâmetro pode ser entendido como o número de perguntas que o algoritmo de árvore de decisão pode fazer sobre os dados antes de fazer uma estimativa. Árvores de decisão são parte de uma classe de algoritmos chamados *algoritmos de aprendizagem supervisionada*.Além disso, você verá que a implementação está usando o `ShuffleSplit()` como alternativa para a validação cruzada (veja a variável `cv_sets`). Ela não é a técnica que você descreveu na **Questão 8**, mas ela é tão útil quanto. O `ShuffleSplit()` abaixo irá criar 10 (`n_splits`) conjuntos misturados e 20% (`test_size`) dos dados serão utilizados para validação. 
Enquanto estiver trabalhando na sua implementação, pense nas diferenças e semelhanças com a validação k-fold.** Fique atento que o `ShuffleSplit` tem diferentes parâmetros nas versões 0.17 e 0.18/0.19 do scikit-learn.*** [Versão 0.17](http://scikit-learn.org/0.17/modules/generated/sklearn.cross_validation.ShuffleSplit.htmlsklearn.cross_validation.ShuffleSplit) - `ShuffleSplit(n, n_iter=10, test_size=0.1, train_size=None, indices=None, random_state=None, n_iterations=None)`* [Versão 0.18](http://scikit-learn.org/0.18/modules/generated/sklearn.model_selection.ShuffleSplit.htmlsklearn.model_selection.ShuffleSplit) - `ShuffleSplit(n_splits=10, test_size=’default’, train_size=None, random_state=None)`Para a função `fit_model` na célula de código abaixo, você vai precisar implementar o seguinte:- Utilize o [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) do `sklearn.tree` para gerar um objeto regressor de árvore de decisão. - Atribua esse objeto à variável `'regressor'`.- Gere um dicionário para `'max_depth'` com os valores de 1 a 10 e atribua isso para a variável `'params'`.- Utilize o [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) do `sklearn.metrics` para gerar um objeto de função de pontuação. - Passe a função `performance_metric` como um parâmetro para esse objeto. - Atribua a função de pontuação à variável `'scoring_fnc'`.- Utilize o [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.grid_search.GridSearchCV.html) do `sklearn.grid_search` para gerar um objeto de busca por matriz. - Passe as variáveis `'regressor'`, `'params'`, `'scoring_fnc'` and `'cv_sets'` como parâmetros para o objeto. - Atribua o objeto `GridSearchCV` para a variável `'grid'`.
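Como a implementação abaixo usa o `ShuffleSplit`, deixo aqui um esboço mínimo (apenas ilustrativo, não executado) de como seria a validação cruzada K-Fold descrita na Questão 8, reutilizando `X_train`, `y_train` e a função `performance_metric` já definidas neste notebook:
```python
# Esboço ilustrativo: K-Fold com K = 4 sobre os dados de treinamento,
# avaliando uma árvore de decisão com a mesma métrica R^2.
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer

kfold = KFold(n_splits=4, shuffle=True, random_state=1974)
arvore = DecisionTreeRegressor(max_depth=4, random_state=1974)
scores = cross_val_score(arvore, X_train, y_train,
                         scoring=make_scorer(performance_metric), cv=kfold)
print('R^2 médio nas {} dobras: {:.3f}'.format(kfold.get_n_splits(), scores.mean()))
```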
###Code
# TODO: Importar 'make_scorer', 'DecisionTreeRegressor' e 'GridSearchCV'
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV
def fit_model(X, y):
""" Desempenhar busca em matriz sobre o parâmetro the 'max_depth' para uma
árvore de decisão de regressão treinada nos dados de entrada [X, y]. """
# Gerar conjuntos de validação-cruzada para o treinamento de dados
# sklearn versão 0.17: ShuffleSplit(n, n_iter=10, test_size=0.1, train_size=None, random_state=None)
# sklearn versão 0.18: ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None)
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Gerar uma árvore de decisão de regressão de objeto
regressor = DecisionTreeRegressor(random_state = 1974)
# TODO: Gerar um dicionário para o parâmetro 'max_depth' com um alcance de 1 a 10
params = {'max_depth': range(1,11)}
# TODO: Transformar 'performance_metric' em uma função de pontuação utilizando 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Gerar o objeto de busca em matriz
grid = GridSearchCV(estimator = regressor,
param_grid = params,
scoring = scoring_fnc,
cv = cv_sets)
# Ajustar o objeto de busca em matriz com os dados para calcular o modelo ótimo
grid = grid.fit(X, y)
# Devolver o modelo ótimo depois de realizar o ajuste dos dados
return grid.best_estimator_
###Output
_____no_output_____
###Markdown
Fazendo EstimativasUma vez que o modelo foi treinado em conjunto de dados atribuído, ele agora pode ser utilizado para fazer estimativas em novos conjuntos de entrada de dados. No caso do *regressor da árvore de decisão*, o modelo aprendeu *quais são as melhores perguntas sobre a entrada de dados*, e pode responder com uma estimativa para a **variável alvo**. Você pode utilizar essas estimativas para conseguir informações sobre os dados dos quais o valor da variável alvo é desconhecida – por exemplo, os dados dos quais o modelo não foi treinado. Questão 9 - Modelo Ótimo* Qual profundidade máxima do modelo ótimo? Como esse resultado se compara com a sua suposição na **Questão 6**? ** Executar a célula de código abaixo para ajustar o regressor da árvore de decisão com os dados de treinamento e gerar um modelo ótimo. **
###Code
# Ajustar os dados de treinamento para o modelo utilizando busca em matriz
reg = fit_model(X_train, y_train)
# Produzir valores para 'max_depth'
print "O parâmetro 'max_depth' é {} para o modelo ótimo.".format(reg.get_params()['max_depth'])
###Output
O parâmetro 'max_depth' é 4 para o modelo ótimo.
###Markdown
**Dica: ** A resposta vem da saída do código acima.**Resposta:**Através do uso da Grid Search e Cross-Validation, o melhor modelo de árvore de decisão é o modelo com `max_depth = 4`. Na Questão 6 eu tinha apontado que o modelo com profundidade 3 seria melhor pois o ganho na qualidade do modelo com 4 níveis de profundidade talvez não superasse a desvantagem de usar um modelo mais complexo. Questão 10 - Estimando Preços de VendaImagine que você era um corretor imobiliário na região de Boston ansioso para utilizar esse modelo que ajuda os imóveis que seus clientes desejam vender. Você coletou as seguintes informações de três dos seus clientes:| Atributos | Cliente 1 | Cliente 2 | Cliente 3 || :---: | :---: | :---: | :---: || Número total de cômodos em um imóvel | 5 cômodos | 4 cômodos | 8 cômodos || Nível de pobreza da vizinhança (em %) | 17% | 32% | 3% || Razão estudante:professor das escolas próximas | 15-to-1 | 22-to-1 | 12-to-1 |* Qual valor você sugeriria para cada um dos seus clientes para a venda de suas casas?* Esses preços parecem razoáveis dados os valores para cada atributo?* **Dica:** Utilize as estatísticas que você calculou na seção **Explorando Dados** para ajudar a justificar sua resposta. Dos três clientes, o Cliente 3 tem a maior casa, no melhor bairro de escolas públicas e menor inídice de pobreza; Cliente 2 tem a menor casa, em um bairro com índice de pobreza relativamente alto e sem as melhores escolas públicas.** Execute a célula de códigos abaixo para que seu modelo otimizado faça estimativas para o imóvel de cada um dos clientes.**
###Code
# Gerar uma matriz para os dados do cliente
client_data = [[5, 17, 15], # Cliente 1
[4, 32, 22], # Cliente 2
[8, 3, 12]] # Cliente 3
# Mostrar estimativas
for i, price in enumerate(reg.predict(client_data)):
print "Preço estimado para a casa do cliente {}: ${:,.2f}".format(i+1, price)
# Características das casas com valores MUITO próximos aos do primeiro cliente (mais ou menos 1/30 std):
data[(data['MEDV'] > 414184.162 - std_price/30) & (data['MEDV'] < 414184.62 + std_price/30)].sort_values('MEDV')
# Características das casas com valores próximos aos do segundo cliente (mais ou menos 1/10 std):
data[(data['MEDV'] > 230452.17 - std_price/10) & (data['MEDV'] < 230452.17 + std_price/10)].sort_values('MEDV')
# Características das casas com valores BEM próximos aos do terceiro cliente (mais ou menos 1/15 std):
data[(data['MEDV'] > 896962.50 - std_price/15) & (data['MEDV'] < 896962.50 + std_price/15)].sort_values('MEDV')
###Output
_____no_output_____
###Markdown
**Resposta:**Eu apontaria para cada cliente os valores que o modelo de árvore de decisão calculou, pois parecem ser bem adequados às características dos clientes:* O cliente 3 é o que tem a maior casa, no melhor bairro de escolas públicas e menor inídice de pobreza. O valor previsto pelo modelo é bem parecido com as características de casas com valores próximos.* O cliente 2 tem a menor casa, em um bairro com índice de pobreza relativamente alto e sem as melhores escolas públicas. O valor previsto pelo modelo é semelhante com as características de casas com valores próximos mas, à primeira impressão parece que o modelo talvez tenha superestimado o valor pois esse cliente está em situação mais desfavorável do que os clientes de casas com valores semelhantes.* O cliente 1 é um cliente intermediário. O valor previsto pelo modelo é bem parecido com as características de casas com valores próximos. Como essa casa tem um valor intermediário, muito mais casas estão nessa faixa de valores, quando comparadas com as casas nas faixas de valores inferiores (cliente 2) ou superiores (cliente 3) SensibilidadeUm modelo ótimo não é necessariamente um modelo robusto. Às vezes, um modelo é muito complexo ou muito simples para generalizar os novos dados. Às vezes, o modelo pode utilizar um algoritmo de aprendizagem que não é apropriado para a estrutura de dados especificado. Outras vezes, os próprios dados podem ter informação excessiva ou exemplos insuficientes para permitir que o modelo apreenda a variável alvo – ou seja, o modelo não pode ser ajustado.** Execute a célula de código abaixo para rodar a função `fit_model` dez vezes com diferentes conjuntos de treinamento e teste para ver como as estimativas para um cliente específico mudam se os dados foram treinados.**
###Code
vs.PredictTrials(features, prices, fit_model, client_data)
###Output
Trial 1: $391,183.33
Trial 2: $419,700.00
Trial 3: $415,800.00
Trial 4: $420,622.22
Trial 5: $418,377.27
Trial 6: $411,931.58
Trial 7: $399,663.16
Trial 8: $407,232.00
Trial 9: $351,577.61
Trial 10: $413,700.00
Range in prices: $69,044.61
###Markdown
Question 11 - Applicability

* In a few sentences, discuss whether or not the model built should be used in a real-world setting.

**Hint:** Look at the values computed above. Some questions to answer:
* How relevant is data collected in 1978 today? Does inflation matter?
* Are the features present sufficient to describe a home?
* Is the model robust enough to make consistent predictions?
* Can data collected in an urban city like Boston be applied to a rural town?
* Is it fair to judge the price of a single home based on the characteristics of the whole neighborhood?

**Answer:** The trained model **could not** be used in real-world scenarios today, because of a number of important limitations:
* The data is from 1978 and, even corrected for inflation, may not reflect the current structure of housing prices, since several major external factors have changed home prices significantly, for example the 2008 housing bubble in the US.
* The only attribute of the home itself included in the model is the number of rooms. Other attributes that could matter were left out, for example: floor area, number of storeys, construction type (wood, masonry, etc.), parking spaces, leisure areas, swimming pool, number of bathrooms and en-suites, year of construction, and others.
* Some characteristics of the neighborhood and the location of the home that might matter were also left out, for example: nearby parks and leisure areas, or location in an upscale part of town.
* As a result of the simplicity of the model and the small number of observations used for training (and likely underfitting in real scenarios), it cannot make very consistent estimates.

> **Note**: Once you have completed all of the code and answered all of the questions above, you can finalize your work by exporting the iPython Notebook as an HTML document. You can do this from the menu above by navigating to
* **File -> Download as -> HTML (.html)**
> Include the generated document along with this notebook in your submission.
###Code
print('Projeto do Módulo 2 do Programa Nanodegree Engenheiro de Machine Learning')
print('Aluno: Abrantes Araújo Silva Filho')
###Output
Projeto do Módulo 2 do Programa Nanodegree Engenheiro de Machine Learning
Aluno: Abrantes Araújo Silva Filho
|
11_MarkovAnalysis/11_09_TimeDependentSolution.ipynb | ###Markdown
11.9 Time dependent solution

In this notebook, the numerical computation of the Kolmogorov formula is presented at different time steps. The Kolmogorov formula is:

$$P(t) = P(0) \cdot \exp(t\,A)$$

In the following, the *system* depicted above is considered with:
- State 2: Normal operating state
- State 1: Degraded operating state
- State 0: Failure state

$\lambda_{i,j}$ is the transition rate from state $i$ to state $j$. Therefore $\lambda_{2,1}$ and $\lambda_{1,0}$ should be considered as degradation rates, while $\lambda_{0,2}$ is a renewing rate. The initial state is state 2.

Import
###Code
import numpy as np
from scipy.linalg import expm
%matplotlib notebook
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Parameters
###Code
# Transition rates (1/h)
lambda_21 = 1e-3
lambda_10 = 1e-3
lambda_02 = 1e-2
# Time (h)
t_start = 0
t_end = 24*365*4
t_nstep = 10000
# Initial state
state0 = 2
###Output
_____no_output_____
###Markdown
Equation variables
###Code
# matrix A
A = np.array([
[-lambda_02, 0, lambda_02],
[lambda_10, -lambda_10, 0],
[0, lambda_21, -lambda_21]])
# initial system state
P0 = np.zeros((3, ))
P0[state0] = 1
# time vector
t = np.linspace(t_start, t_end, t_nstep)
###Output
_____no_output_____
###Markdown
Numerical computation
###Code
P = np.zeros((3, t_nstep))
for it in range(t_nstep):
P[:, it] = P0@expm(t[it]*A)
###Output
_____no_output_____
###Markdown
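Before plotting, a quick sanity check on the long-run behaviour can be useful: the stationary distribution $\pi$ satisfies $\pi A = 0$ with $\sum_i \pi_i = 1$, so it can be read off the null space of $A^T$. A minimal sketch reusing the matrix `A` defined above (assumes a SciPy version that provides `scipy.linalg.null_space`):

```python
from scipy.linalg import null_space

v = null_space(A.T)[:, 0]   # basis vector of the null space of A^T, i.e. pi @ A = 0
pi = v / v.sum()            # normalise to a probability vector
print(pi)                   # the curves plotted below should converge to these values
```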
Illustration
###Code
fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(1, 1, 1)
vColor = ['r', 'b', 'g']
for jd in range(3):
ax.plot(t, P[jd, :], color=vColor[jd], label='State {:d}'.format(jd))
ax.set_xlabel('Time [h]')
ax.set_ylabel('State probability [-]')
ax.legend()
ax.grid(True)
###Output
_____no_output_____ |
documentation/baseFiles/Untitled2.ipynb | ###Markdown
Exploring Pathlib
###Code
from pathlib import Path
# get home folder
homePath = Path(str(Path.home()))
homePath.as_posix()
homePath.name
# use join path to descend
homePath.joinpath('csBehaviorData').joinpath('cad').exists()
cadPath = homePath.joinpath('csBehaviorData').joinpath('cad')
cadPathGlob = sorted(cadPath.rglob("*.csv"))
cadPathGlob
cadPathGlob[0].as_posix()
cadPath.joinpath('testText.txt').touch()  # Path.touch() takes no filename argument, so join the name onto the path first
txtPath = cadPath.joinpath('qText.txt')
txtPath.touch()
a=['kl','lk',1]
tempNoteString = ''
for v in a:
    if not isinstance(v, str):  # convert non-string items (e.g. the int 1) to strings
        v = '{}'.format(v)
tempNoteString = tempNoteString + v + '\n'
txtPath.write_text(tempNoteString)
tempNoteString
txtPath.write_text("\n Hi there \n" + txtPath.read_text())
csDataPath = homePath.joinpath('csBehaviorData')  # assumed target folder; csDataPath was not defined earlier in this notebook
csDataPath.exists()
if csDataPath.exists()==0:
csDataPath.mkdir()
homePath.joinpath('cad').exists()
import time
import datetime
tdStamp=datetime.datetime.now().strftime("%Y_%m_%d_%H:%M:%S")
type(tdStamp)
###Output
_____no_output_____ |
Content-Based Filtering.ipynb | ###Markdown
Content-Based Filtering

Content-based filtering recommends items whose descriptions are similar, using the cosine similarity between their TF-IDF vectors:

$$\text{cosine}(A, B) = \frac{A \cdot B}{\|A\|\,\|B\|}$$
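As a small numeric illustration of the formula (a stand-alone sketch with made-up vectors, independent of the dataset used below):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([2.0, 4.0, 0.0])   # same direction as a
c = np.array([0.0, 0.0, 3.0])   # orthogonal to a

def cosine(x, y):
    return x @ y / (np.linalg.norm(x) * np.linalg.norm(y))

print(cosine(a, b))   # 1.0 -> identical direction, maximally similar
print(cosine(a, c))   # 0.0 -> no similarity
```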
###Code
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel
###Output
_____no_output_____
###Markdown
Wrangle the data
###Code
data = pd.read_csv("system-data.csv")
tf = TfidfVectorizer(analyzer='word', ngram_range=(1,3), min_df=0, stop_words='english')
tf_matrix = tf.fit_transform(data['description'])
###Output
_____no_output_____
###Markdown
Calculate cosine similarities
###Code
def cosine_calculate(data):
cosine_similarties = linear_kernel(tf_matrix, tf_matrix)
results = {}
for index, row in data.iterrows():
similar_indices = cosine_similarties[index].argsort()[:-100:-1]
similar_items = [(cosine_similarties[index][i], data['id'][i]) for i in similar_indices]
results[row['id']] = similar_items[1:]
return results
def item(id):
return data.loc[data['id'] == id]['description'].tolist()[0].split(' - ')[0]
def recommend_item(item_id, num):
print("Recommending " + str(num) + " products similar to " + item(item_id) + "...")
print("-------")
results = cosine_calculate(data)
recs = results[item_id][:num]
for rec in recs:
print("Recommended: " + item(rec[1]) + " (score:" + str(rec[0]) + ")")
###Output
_____no_output_____
###Markdown
Now recommend an Item
###Code
recommend_item(item_id=11, num=3)
recommend_item(item_id=5, num=5)
recommend_item(item_id=42, num=2)
###Output
Recommending 2 products similar to Freewheeler...
-------
Recommended: Freewheeler max (score:0.6896990390184957)
Recommended: Mlc wheelie (score:0.21279952106418648)
|
notebooks/chapter11_image/06_speech_py2.ipynb | ###Markdown
> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python. 11.6. Applying digital filters to speech sounds > **PYTHON 2 VERSION** You need the pydub package: you can install it with `pip install pydub`. (https://github.com/jiaaro/pydub/) This package requires the open-source multimedia library FFmpeg for the decompression of MP3 files. (http://www.ffmpeg.org) 1. Let's import the packages.
###Code
import urllib
import urllib2
import cStringIO
import numpy as np
import scipy.signal as sg
import pydub
import matplotlib.pyplot as plt
from IPython.display import Audio, display
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. We create a Python function to generate a sound from an English sentence. This function uses Google's Text-To-Speech (TTS) API. We retrieve the sound in mp3 format, and we convert it to the Wave format with pydub. Finally, we retrieve the raw sound data by removing the wave header with NumPy.
###Code
def speak(sentence):
url = "http://translate.google.com/translate_tts?tl=en&q=" + \
urllib.quote_plus(sentence)
req = urllib2.Request(url, headers={'User-Agent': ''})
mp3 = urllib2.urlopen(req).read()
# We convert the mp3 bytes to wav.
audio = pydub.AudioSegment.from_mp3(cStringIO.StringIO(mp3))
wave = audio.export(cStringIO.StringIO(), format='wav')
wave.reset()
wave = wave.read()
# We get the raw data by removing the 24 first bytes
# of the header.
x = np.frombuffer(wave, np.int16)[24:] / 2.**15
return x, audio.frame_rate
###Output
_____no_output_____
###Markdown
3. We create a function that plays a sound (represented by a NumPy vector) in the notebook, using IPython's `Audio` class.
###Code
def play(x, fr, autoplay=False):
display(Audio(x, rate=fr, autoplay=autoplay))
###Output
_____no_output_____
###Markdown
4. Let's play the sound "Hello world". We also display the waveform with matplotlib.
###Code
x, fr = speak("Hello world")
play(x, fr)
plt.figure(figsize=(6,3));
t = np.linspace(0., len(x)/fr, len(x))
plt.plot(t, x, lw=1);
###Output
_____no_output_____
###Markdown
5. Now, we will hear the effect of a Butterworth low-pass filter applied to this sound (500 Hz cutoff frequency).
###Code
b, a = sg.butter(4, 500./(fr/2.), 'low')
x_fil = sg.filtfilt(b, a, x)
play(x_fil, fr)
plt.figure(figsize=(6,3));
plt.plot(t, x, lw=1);
plt.plot(t, x_fil, lw=1);
###Output
_____no_output_____
###Markdown
We hear a muffled voice. 6. And now with a high-pass filter (1000 Hz cutoff frequency).
###Code
b, a = sg.butter(4, 1000./(fr/2.), 'high')
x_fil = sg.filtfilt(b, a, x)
play(x_fil, fr)
plt.figure(figsize=(6,3));
plt.plot(t, x, lw=1);
plt.plot(t, x_fil, lw=1);
###Output
_____no_output_____
###Markdown
It sounds like a phone call. 7. Finally, we can create a simple widget to quickly test the effect of a high-pass filter with an arbitrary cutoff frequency.
###Code
from IPython.html import widgets
@widgets.interact(t=(100., 5000., 100.))
def highpass(t):
b, a = sg.butter(4, t/(fr/2.), 'high')
x_fil = sg.filtfilt(b, a, x)
play(x_fil, fr, autoplay=True)
###Output
_____no_output_____ |
PCA(Principle_component_analysis).ipynb | ###Markdown
Inference: the first principal component explains 76.86% of the total variance (i.e. it captures 76.86% of the information in the data), the second component explains 13.11%, and so on.
###Code
# calculating cumulative variance
var_c = np.cumsum(np.round(var,decimals = 4)*100)
var_c
###Output
_____no_output_____
###Markdown
Inference: using only the first principal component retains 76.87% of the variance, using the first two retains 89.98% of the total variance, and so on.
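A common follow-up is to keep the smallest number of components whose cumulative variance crosses a chosen threshold. A sketch using the `var_c` array computed above (the 90% cut-off is an illustrative choice, not a value from the original analysis):

```python
threshold = 90                                   # percent, illustrative
n_keep = int(np.argmax(var_c >= threshold)) + 1  # first component count reaching the threshold
print(n_keep)
```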
###Code
# Now we will view the PCA components which are created by PCA internally, pca.components_ will display all the components to us
pca.components_
# Now visualizing the variance plot for PCA components obtained
plt.plot(var_c, color = 'red' )
###Output
_____no_output_____
###Markdown
Inference: the plot shows the cumulative explained variance, 76.8% after the first component, 89.98% after the first two, and so on.
###Code
# Getting first PCA values
a = pca_values[:, 0:1]
# getting second PCA values
b = pca_values[:,1:2]
plt.scatter(x= a, y=b)
# we will join this pca values with the university names of our original data.
final_data = pd.concat([pd.DataFrame(pca_values[:,0:2], columns =['pc1','pc2']) ,univ[['Univ']]], axis = 1)
final_data
plt.figure(figsize = (10,8))
sns.scatterplot(data = final_data, x = 'pc1', y = 'pc2', hue = 'Univ')
###Output
_____no_output_____ |
ha_regression.ipynb | ###Markdown
Regression Analysis: Seasonal Effects with Sklearn Linear Regression

In this notebook, you will build a SKLearn linear regression model to predict Yen futures ("settle") returns with *lagged* Yen futures returns.
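In equation form, the model fitted below is a one-lag regression on percentage returns (standard notation):

$$r_t = \beta_0 + \beta_1\, r_{t-1} + \varepsilon_t$$

where $r_t$ is the percentage change of the Settle price on day $t$ and $r_{t-1}$ is the lagged return.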
###Code
# Futures contract on the Yen-dollar exchange rate:
# This is the continuous chain of the futures contracts that are 1 month to expiration
yen_futures = pd.read_csv(
Path("yen.csv"), index_col="Date", infer_datetime_format=True, parse_dates=True
)
yen_futures.head()
# Trim the dataset to begin on January 1st, 1990
yen_futures = yen_futures.loc["1990-01-01":, :]
yen_futures.head()
test = yen_futures.copy()
test['Return'] = test['Settle'].pct_change()*100
# df.replace([np.inf, -np.inf], np.nan, inplace=True).dropna(subset=["col1", "col2"], how="all")
test.replace([np.inf, -np.inf], np.nan, inplace=True)
test.dropna(inplace = True)
test.head(5)
###Output
_____no_output_____
###Markdown
Data Preparation Returns
###Code
# Create a series using "Settle" price percentage returns, drop any nan"s, and check the results:
# (Make sure to multiply the pct_change() results by 100)
# In this case, you may have to replace inf, -inf values with np.nan"s
# YOUR CODE HERE!
test = yen_futures.copy()
yen_futures['Return'] = yen_futures['Settle'].pct_change()*100
# df.replace([np.inf, -np.inf], np.nan, inplace=True).dropna(subset=["col1", "col2"], how="all")
yen_futures.replace([np.inf, -np.inf], np.nan, inplace=True)
yen_futures.dropna(inplace = True)
yen_futures.tail(5)
###Output
_____no_output_____
###Markdown
Lagged Returns
###Code
# Create a lagged return using the shift function
# YOUR CODE HERE!
yen_futures ['Lagged_Return'] = yen_futures ['Return'].shift()
yen_futures.tail(5)
###Output
_____no_output_____
###Markdown
Train Test Split

Author's Note: using the cell below produces results that are *different* from the results in the starter notebook.
###Code
# Create a train/test split for the data using 2018-2019 for testing and the rest for training
train = yen_futures[:'2017'].copy()
test = yen_futures['2018':].copy()
train.head(5)
# Create four dataframes:
# X_train (training set using just the independent variables), X_test (test set of of just the independent variables)
# Y_train (training set using just the "y" variable, i.e., "Futures Return"), Y_test (test set of just the "y" variable):
# YOUR CODE HERE!
# remove nas
train.dropna(inplace=True)
test.dropna(inplace = True)
# set up train and test
X_train = train['Lagged_Return'].to_frame()
X_test = test['Lagged_Return'].to_frame()
Y_train = train['Return']
Y_test = test['Return']
# set maximum number of lines
pd.set_option('display.max_rows', 10)
X_train
# Create a Linear Regression model and fit it to the training data
from sklearn.linear_model import LinearRegression
# Fit a SKLearn linear regression using just the training set (X_train, Y_train):
# YOUR CODE HERE!
model = LinearRegression()
model.fit(X_train, Y_train)
# Create a Linear Regression model and fit it to the training data
from sklearn.linear_model import LinearRegression
# Fit a SKLearn linear regression using just the training set (X_train, Y_train):
# YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Make predictions using the Testing Data

Note: We want to evaluate the model using data that it has never seen before, in this case: X_test.
###Code
# Make a prediction of "y" values using just the test dataset
# YOUR CODE HERE!
prediction = model.predict(X_test)
prediction[0:5]
# Assemble actual y data (Y_test) with predicted y data (from just above) into two columns in a dataframe:
# YOUR CODE HERE!
Results = Y_test.to_frame()
Results["Predicted Return"] = prediction
# # Plot the first 20 predictions vs the true values
# # YOUR CODE HERE!
# Results[:20].plot(subplots=True)
# Plot the first 20 predictions vs the true values
# YOUR CODE HERE!
import matplotlib.pyplot as plt
Results[:20].plot(subplots=True)
plt.savefig(Path("./Images/Regression_Prediction.png"))
###Output
_____no_output_____
###Markdown
Out-of-Sample Performance

Evaluate the model using "out-of-sample" data (X_test and y_test).
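For reference, the metric computed below is the root-mean-squared error (the standard definition):

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left(y_t - \hat{y}_t\right)^2}$$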
###Code
from sklearn.metrics import mean_squared_error
# Calculate the mean_squared_error (MSE) on actual versus predicted test "y"
# YOUR CODE HERE!
mse = mean_squared_error(
Results["Return"],
Results["Predicted Return"]
)
# Using that mean-squared-error, calculate the root-mean-squared error (RMSE):
# YOUR CODE HERE!
rmse = np.sqrt(mse)
print(f"Out-of-Sample Root Mean Squared Error (RMSE): {rmse}")
###Output
Out-of-Sample Root Mean Squared Error (RMSE): 0.4154832784856737
###Markdown
In-Sample Performance

Evaluate the model using in-sample data (X_train and y_train).
###Code
# Construct a dataframe using just the "y" training data:
# YOUR CODE HERE!
in_sample_results = Y_train.to_frame()
# Add a column of "in-sample" predictions to that dataframe:
# YOUR CODE HERE!
in_sample_results["In-sample Predictions"] = model.predict(X_train)
# Calculate in-sample mean_squared_error (for comparison to out-of-sample)
# YOUR CODE HERE!
in_sample_mse = mean_squared_error(
in_sample_results["Return"],
in_sample_results["In-sample Predictions"]
)
# Calculate in-sample root mean_squared_error (for comparison to out-of-sample)
# YOUR CODE HERE!
in_sample_rmse = np.sqrt(in_sample_mse)
print(f"In-sample Root Mean Squared Error (RMSE): {in_sample_rmse}")
###Output
In-sample Root Mean Squared Error (RMSE): 0.5963660785073426
|
i18n/locales/ja/widgets-index.ipynb | ###Markdown
Widget Demonstrations

As well as providing practical code to experiment with, this textbook also provides many widgets that help explain specific concepts. They are indexed here. Run each cell to interact with the widget.

**Note:** You need to press "Try" in the bottom-left corner of the code cells, or view this page in the [IBM Quantum Experience](https://quantum-computing.ibm.com/jupyter/user/qiskit-textbook/content/widgets-index.ipynb), to enable the interactivity.

Interactive Code

The most important interactive element of this textbook is the ability to change and experiment with the code. This is possible directly on the textbook's web pages, but you can also view them as Jupyter notebooks, add cells, and save your changes. Interactive Python code can also use widgets via [ipywidgets](https://ipywidgets.readthedocs.io/en/latest/). Below, some of the widgets provided by the Qiskit Textbook are introduced.
###Code
# Click "try" and then "run" to see the output
print("This is code works!")
###Output
This code works!
###Markdown
Gate Demo

This widget shows the effect of several gates on a qubit, visualized on the Q-sphere. It is used extensively in [Single Qubit Gates](./ch-states/single-qubit-gates.html).
###Code
from qiskit_textbook.widgets import gate_demo
gate_demo()
###Output
_____no_output_____
###Markdown
Binary Demonstration

This simple widget lets you play with binary numbers. It can be found in [The Atoms of Computation](./ch-states/atoms-computation.html).
###Code
from qiskit_textbook.widgets import binary_widget
binary_widget(nbits=5)
###Output
_____no_output_____
###Markdown
Scalable Circuit Widget

When working with circuits like the ones in the [Quantum Fourier Transform chapter](./ch-algorithms/quantum-fourier-transform.html), it is often useful to see how they scale with different numbers of qubits. If a function takes a circuit (QuantumCircuit) and a number of qubits (int) as input, you can use the widget below to see how it scales.
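Once the cell below defining `qft` has been run, the same function can also be applied to a circuit of fixed size outside the widget. A quick sketch (assumes Qiskit is installed, as in the rest of this notebook):

```python
from qiskit import QuantumCircuit

qc = qft(QuantumCircuit(3), 3)   # build a 3-qubit QFT with the function defined below
print(qc.draw())
```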
###Code
from qiskit_textbook.widgets import scalable_circuit
from numpy import pi
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cu1(pi/2**(n-qubit), qubit, n)
    # At the end of the function, call the same function again on the remaining qubits (n was reduced by one earlier in the function)
qft_rotations(circuit, n)
def swap_qubits(circuit, n):
"""Reverse the order of qubits"""
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qft(circuit, n):
"""QFT on the first n qubits in circuit"""
qft_rotations(circuit, n)
swap_qubits(circuit, n)
return circuit
scalable_circuit(qft)
###Output
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:11: DeprecationWarning: The QuantumCircuit.cu1 method is deprecated as of 0.16.0. It will be removed no earlier than 3 months after the release date. You should use the QuantumCircuit.cp method instead, which acts identically.
# This is added back by InteractiveShellApp.init_path()
###Markdown
Bernstein-Vazirani Widget

Through this widget you can follow the mathematics of an instance of the [Bernstein-Vazirani algorithm](./ch-algorithms/bernstein-vazirani.html). Press the buttons to apply the different steps of the algorithm. The first argument sets the number of qubits and the second sets the hidden binary string; change them and re-run the cell. You can also reveal the contents of the oracle by re-running the cell with `hide_oracle=False`.
###Code
from qiskit_textbook.widgets import bv_widget
bv_widget(2, "11", hide_oracle=True)
###Output
_____no_output_____
###Markdown
Deutsch-Jozsa Widget

As with the Bernstein-Vazirani widget, the Deutsch-Jozsa widget lets you follow the mathematics of an instance of the [Deutsch-Jozsa algorithm](./ch-algorithms/deutsch-josza.html). Press the buttons to apply the different steps of the algorithm. "case" can be "balanced" or "constant", and "size" can be "small" or "large". Re-run the cell to get a randomly chosen oracle. You can also reveal the contents of the oracle by re-running the cell with `hide_oracle=False`.
###Code
from qiskit_textbook.widgets import dj_widget
dj_widget(size="large", case="balanced", hide_oracle=True)
###Output
_____no_output_____ |
notebooks/08_UMAP_gridsearch.ipynb | ###Markdown
Calculate UMAP embeddings with different parameters
###Code
import os
import pandas as pd
import sys
import numpy as np
from pandas.core.common import flatten
import pickle
import umap
from pathlib import Path
import librosa
import numba
import math
from scipy.signal import butter, lfilter
from preprocessing_functions import calc_zscore, pad_spectro, create_padded_data
from preprocessing_functions import preprocess_spec_numba,preprocess_spec_numba_fl, pad_transform_spectro
from spectrogramming_functions import generate_mel_spectrogram, generate_stretched_spectrogram
wd = os.getcwd()
DF = os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "processed", "df_focal_reduced.pkl")
OUT_COORDS = os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "interim", "parameter_search", "grid_search")
###Output
_____no_output_____
###Markdown
Generate dataframe for grid search
###Code
spec_df = pd.read_pickle(DF)
spec_df.shape
# Bandpass filters for calculating audio intensity
LOWCUT = 300.0
HIGHCUT = 3000.0
# Function that calculates intensity score from
# amplitude audio data
# Input: 1D numeric numpy array (audio data)
# Output: Float (Intensity)
def calc_audio_intense_score(audio):
res = 10*math.log((np.mean(audio**2)),10)
return res
# Butter bandpass filter implementation:
# from https://scipy-cookbook.readthedocs.io/items/ButterworthBandpass.html
def butter_bandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
b, a = butter(order, [low, high], btype='band')
return b, a
def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
b, a = butter_bandpass(lowcut, highcut, fs, order=order)
y = lfilter(b, a, data)
return y
from spectrogramming_functions import generate_spectrogram
# Using the band-pass filtered signal!
raw_audios = spec_df['raw_audio']
srs = spec_df['samplerate_hz']
audio_filtered = [butter_bandpass_filter(audio, LOWCUT, HIGHCUT, sr, order=6) for audio,sr in zip(raw_audios, srs)]
spec_df['raw_audio_filtered'] = audio_filtered
# Spectrogramming parameters
FFT_WIN = 0.03 # FFT_WIN*samplerate = length of fft/n_fft (number of audio frames that go in one fft)
FFT_HOP = FFT_WIN/8 # FFT_HOP*samplerate = n of audio frames between successive ffts
N_MELS = 40 # number of mel bins
WINDOW = 'hann' # each frame of audio is windowed by a window function (its length can also be
# determined and is then padded with zeros to match n_fft. we use window_length = length of fft
FMAX = 4000
N_MFCC = 13
MAX_DURATION = 0.5
raw_specs = spec_df.apply(lambda row: generate_spectrogram(row['raw_audio'],
row['samplerate_hz'],
WINDOW,
FFT_WIN,
FFT_HOP),
axis=1)
spec_df['raw_specs'] = raw_specs
raw_specs_filtered = spec_df.apply(lambda row: generate_spectrogram(row['raw_audio_filtered'],
row['samplerate_hz'],
WINDOW,
FFT_WIN,
FFT_HOP),
axis=1)
spec_df['raw_specs_filtered'] = raw_specs_filtered
raw_specs_stretched = spec_df.apply(lambda row: generate_stretched_spectrogram(row['raw_audio'],
row['samplerate_hz'],
row['duration_s'],
WINDOW,
FFT_WIN,
FFT_HOP,
MAX_DURATION),
axis=1)
spec_df['raw_specs_stretched'] = raw_specs_stretched
raw_specs_filtered_stretched = spec_df.apply(lambda row: generate_stretched_spectrogram(row['raw_audio_filtered'],
row['samplerate_hz'],
row['duration_s'],
WINDOW,
FFT_WIN,
FFT_HOP,
MAX_DURATION),
axis=1)
spec_df['raw_specs_filtered_stretched'] = raw_specs_filtered_stretched
spec_df.to_pickle(os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "processed", "df_grid.pkl"))
###Output
_____no_output_____
###Markdown
Gridsearch
###Code
@numba.njit()
def unpack_specs(a,b):
"""
Function that unpacks two specs that have been transformed into
    a 1D array with preprocessing_functions.pad_transform_spectro and
restores their original 2D shape
Parameters
----------
a,b : 1D numpy arrays (numeric)
Returns
-------
spec_s, spec_l : 2D numpy arrays (numeric)
the restored specs
Example
-------
    >>> a = np.concatenate(([2., 3.], np.arange(6.)))   # a packed 2x3 spec
    >>> b = np.concatenate(([2., 2.], np.zeros(4)))     # a packed 2x2 spec
    >>> small, large = unpack_specs(a, b)
    >>> small.shape, large.shape
    ((2, 2), (2, 3))
"""
a_shape0 = int(a[0])
a_shape1 = int(a[1])
b_shape0 = int(b[0])
b_shape1 = int(b[1])
spec_a= np.reshape(a[2:(a_shape0*a_shape1)+2], (a_shape0, a_shape1))
spec_b= np.reshape(b[2:(b_shape0*b_shape1)+2], (b_shape0, b_shape1))
len_a = a_shape1
len_b = b_shape1
# find bigger spec
spec_s = spec_a
spec_l = spec_b
if len_a>len_b:
spec_s = spec_b
spec_l = spec_a
return spec_s, spec_l
spec_df = pd.read_pickle(os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "processed", "df_grid.pkl"))
DEF_PREPROCESS_TYPE = 'zs'
DEF_METRIC_TYPE = 'euclidean'
DEF_DURATION_METHOD = 'pad'
DEF_MIN_DIST = 0
DEF_SPREAD = 1
DEF_N_NEIGHBORS = 15
DEF_N_COMPS = 3
DEF_DENOISE = 'no'
DEF_N_MELS = 40
DEF_F_UNIT = 'dB'
# Spectrogramming parameters
FFT_WIN = 0.03 # FFT_WIN*samplerate = length of fft/n_fft (number of audio frames that go in one fft)
FFT_HOP = FFT_WIN/8 # FFT_HOP*samplerate = n of audio frames between successive ffts
N_MELS = 40 # number of mel bins
WINDOW = 'hann' # each frame of audio is windowed by a window function (its length can also be
# determined and is then padded with zeros to match n_fft. we use window_length = length of fft
FMAX = 4000
N_MFCC = 13
MAX_DURATION = 0.5
MIN_OVERLAP = 0.9
from scipy.spatial.distance import correlation
from scipy.spatial.distance import cosine
def get_param_string():
param_combi = "_".join([str(x) for x in [preprocess_type, metric_type, duration_method,
min_dist, spread, n_neighbors, n_comps, input_type,
denoise, n_mels, f_unit, bp_filtered]])
return param_combi
def calc_umap(data, outname, metric=DEF_METRIC_TYPE, min_dist=DEF_MIN_DIST, spread=DEF_SPREAD, n_neighbors=DEF_N_NEIGHBORS, n_comps=DEF_N_COMPS,n = 1):
for i in range(n):
reducer = umap.UMAP(n_components = n_comps,
min_dist=min_dist,
spread=spread,
n_neighbors=n_neighbors,
metric=metric,
random_state=2204)
embedding = reducer.fit_transform(data)
print("\r"+outname, end="")
np.savetxt(outname+'_'+str(i)+'.csv', embedding, delimiter=";")
# *************** DO THE FULL GRIDSEARCH ***************
n_neighbors = DEF_N_NEIGHBORS
spread = DEF_SPREAD
min_dist = DEF_MIN_DIST
n_comps = 3
input_type = "specs"
FMAX = 4000
for duration_method in ['pad', 'pairwise-pad', 'stretched', 'timeshift-pad', 'timeshift-overlap', 'overlap']:
for preprocess_type in ['no', 'zs', 'zs-fl-ce']:
for metric_type in ['cosine', 'euclidean', 'correlation', 'manhattan']:
for denoise in ['yes', 'no']:
for n_mels in [0, 10, 20, 30, 40, 50]:
for f_unit in ['dB', 'magnitude']:
for bp_filtered in ['yes', 'no']:
outname = os.path.join(os.path.sep, OUT_COORDS, get_param_string())
#print("\r"+outname, end="")
try:
# select dataframe column
if bp_filtered=='yes':
if duration_method=='stretched':
specs = spec_df.raw_specs_filtered_stretched.copy()
else:
specs = spec_df.raw_specs_filtered.copy()
else:
if duration_method=='stretched':
specs = spec_df.raw_specs_stretched.copy()
else:
specs = spec_df.raw_specs.copy()
# Mel transform
if n_mels>0:
srs = spec_df.samplerate_hz.copy()
specs = [librosa.feature.melspectrogram(S=s, sr=sr, n_mels=n_mels, fmax=FMAX) for s, sr in zip(specs, srs)]
else:
# Problem: Because of the varying samplerate, the non-mel-transformed spectrograms are not all of the same height!!
# So if I'm using them, I need to cut the ones with the larger frequency range.
n_bins = [s.shape[0] for s in specs]
min_bins = np.min(n_bins)
specs = [s[0:min_bins,:] for s in specs]
# Spectrogram intensity unit
if f_unit=="dB":
specs = [librosa.power_to_db(s, ref=np.max) for s in specs]
# Denoising
if denoise=='yes':
specs = [(s-np.median(s, axis=0)) for s in specs]
# Pre-processing
if preprocess_type=='zs':
specs = [calc_zscore(s) for s in specs]
elif preprocess_type=='zs-fl-ce':
specs = [calc_zscore(s) for s in specs]
specs = [np.where(s<0,0,s) for s in specs]
specs = [np.where(s>3,3,s) for s in specs]
# Duration method
if duration_method in ['pad', 'stretched']:
data = create_padded_data(specs)
calc_umap(data, outname=outname, metric = metric_type)
else:
n_bins = specs[0].shape[0]
maxlen = np.max([spec.shape[1] for spec in specs]) * n_bins + 2
trans_specs = [pad_transform_spectro(spec, maxlen) for spec in specs]
data = np.asarray(trans_specs)
# set spec_dist depending on metric_type!!
if metric_type=='euclidean':
@numba.njit()
def spec_dist(a,b,size):
dist = np.sqrt((np.sum(np.subtract(a,b)*np.subtract(a,b)))) / np.sqrt(size)
return dist
elif metric_type=='manhattan':
@numba.njit()
def spec_dist(a,b,size):
dist = (np.sum(np.abs(np.subtract(a,b)))) / size
return dist
elif metric_type=='cosine':
@numba.njit()
def spec_dist(a,b,size):
# turn into unit vectors by dividing each vector field by magnitude of vector
dot_product = np.sum(a*b)
a_magnitude = np.sqrt(np.sum(a*a))
b_magnitude = np.sqrt(np.sum(b*b))
dist = 1 - dot_product/(a_magnitude*b_magnitude)
return dist
elif metric_type=='correlation':
@numba.njit()
def spec_dist(a,b,size):
a_meandiff = a - np.mean(a)
b_meandiff = b - np.mean(b)
dot_product = np.sum(a_meandiff*b_meandiff)
a_meandiff_magnitude = np.sqrt(np.sum(a_meandiff*a_meandiff))
b_meandiff_magnitude = np.sqrt(np.sum(b_meandiff*b_meandiff))
dist = 1 - dot_product/(a_meandiff_magnitude * b_meandiff_magnitude)
return dist
# for duration_method in ['pad', 'pairwise-pad', 'stretch', 'timeshift-pad', 'timeshift-overlap', 'overlap']:
if duration_method=='pairwise-pad':
@numba.njit()
def calc_pairwise_pad(a, b):
spec_s, spec_l = unpack_specs(a,b)
n_padding = int(spec_l.shape[1] - spec_s.shape[1])
n_bins = spec_s.shape[0]
# pad the smaller spec (if necessary)
if n_padding!=0:
pad = np.full((n_bins, n_padding), 0.0)
spec_s_padded = np.concatenate((spec_s, pad), axis=1)
spec_s_padded = spec_s_padded.astype(np.float64)
else:
spec_s_padded = spec_s.astype(np.float64)
# compute distance
spec_s_padded = np.reshape(spec_s_padded, (-1)).astype(np.float64)
spec_l = np.reshape(spec_l, (-1)).astype(np.float64)
size = spec_l.shape[0]
dist = spec_dist(spec_s_padded, spec_l, size)
return dist
calc_umap(data, outname=outname, metric = calc_pairwise_pad)
elif duration_method=='timeshift-pad':
@numba.njit()
def calc_timeshift_pad(a,b):
spec_s, spec_l = unpack_specs(a,b)
len_s = spec_s.shape[1]
len_l = spec_l.shape[1]
nfreq = spec_s.shape[0]
# define start position
min_overlap_frames = int(MIN_OVERLAP * len_s)
start_timeline = min_overlap_frames-len_s
max_timeline = len_l - min_overlap_frames
n_of_calculations = int((((max_timeline+1-start_timeline)+(max_timeline+1-start_timeline))/2) +1)
distances = np.full((n_of_calculations),999.)
count=0
for timeline_p in range(start_timeline, max_timeline+1,2):
#print("timeline: ", timeline_p)
# mismatch on left side
if timeline_p < 0:
len_overlap = len_s - abs(timeline_p)
pad_s = np.full((nfreq, (len_l-len_overlap)),0.)
pad_l = np.full((nfreq, (len_s-len_overlap)),0.)
s_config = np.append(spec_s, pad_s, axis=1).astype(np.float64)
l_config = np.append(pad_l, spec_l, axis=1).astype(np.float64)
# mismatch on right side
elif timeline_p > (len_l-len_s):
len_overlap = len_l - timeline_p
pad_s = np.full((nfreq, (len_l-len_overlap)),0.)
pad_l = np.full((nfreq, (len_s-len_overlap)),0.)
s_config = np.append(pad_s, spec_s, axis=1).astype(np.float64)
l_config = np.append(spec_l, pad_l, axis=1).astype(np.float64)
# no mismatch on either side
else:
len_overlap = len_s
start_col_l = timeline_p
end_col_l = start_col_l + len_overlap
pad_s_left = np.full((nfreq, start_col_l),0.)
pad_s_right = np.full((nfreq, (len_l - end_col_l)),0.)
l_config = spec_l.astype(np.float64)
s_config = np.append(pad_s_left, spec_s, axis=1).astype(np.float64)
s_config = np.append(s_config, pad_s_right, axis=1).astype(np.float64)
size = s_config.shape[0]*s_config.shape[1]
distances[count] = spec_dist(s_config, l_config, size)
count = count + 1
min_dist = np.min(distances)
return min_dist
calc_umap(data, outname=outname, metric = calc_timeshift_pad)
elif duration_method=='timeshift-overlap':
@numba.njit()
def calc_timeshift(a,b):
spec_s, spec_l = unpack_specs(a,b)
len_l = spec_l.shape[1]
len_s = spec_s.shape[1]
# define start position
min_overlap_frames = int(MIN_OVERLAP * len_s)
start_timeline = min_overlap_frames-len_s
max_timeline = len_l - min_overlap_frames
n_of_calculations = (max_timeline+1-start_timeline)+(max_timeline+1-start_timeline)
distances = np.full((n_of_calculations),999.)
count=0
for timeline_p in range(start_timeline, max_timeline+1):
# mismatch on left side
if timeline_p < 0:
start_col_l = 0
len_overlap = len_s - abs(timeline_p)
end_col_l = start_col_l + len_overlap
end_col_s = len_s # until the end
start_col_s = end_col_s - len_overlap
# mismatch on right side
elif timeline_p > (len_l-len_s):
start_col_l = timeline_p
len_overlap = len_l - timeline_p
end_col_l = len_l
start_col_s = 0
end_col_s = start_col_s + len_overlap
# no mismatch on either side
else:
start_col_l = timeline_p
len_overlap = len_s
end_col_l = start_col_l + len_overlap
start_col_s = 0
end_col_s = len_s # until the end
s_s = spec_s[:,start_col_s:end_col_s].astype(np.float64)
s_l = spec_l[:,start_col_l:end_col_l].astype(np.float64)
size = s_s.shape[0]*s_s.shape[1]
distances[count] = spec_dist(s_s, s_l, size)
count = count + 1
min_dist = np.min(distances)
return min_dist
calc_umap(data, outname=outname, metric = calc_timeshift)
elif duration_method=='overlap':
@numba.njit()
def calc_overlap_only(a,b):
spec_s, spec_l = unpack_specs(a,b)
#only use overlap section from longer spec
spec_l = spec_l[:spec_s.shape[0],:spec_s.shape[1]]
spec_s = spec_s.astype(np.float64)
spec_l = spec_l.astype(np.float64)
size = spec_s.shape[1]*spec_s.shape[0]
dist = spec_dist(spec_s, spec_l, size)
return dist
calc_umap(data, outname=outname, metric = calc_overlap_only)
except:
print("FAILED: ", get_param_string())
break
###Output
_____no_output_____
###Markdown
Check results
###Code
n_neighbors = DEF_N_NEIGHBORS
spread = DEF_SPREAD
min_dist = DEF_MIN_DIST
n_comps = 3
input_type = "specs"
FMAX = 4000
expected_files = []
for duration_method in ['pad', 'pairwise-pad', 'stretched', 'timeshift-pad', 'timeshift-overlap', 'overlap']:
for preprocess_type in ['no', 'zs', 'zs-fl-ce']:
for metric_type in ['cosine', 'euclidean', 'correlation', 'manhattan']:
for denoise in ['yes', 'no']:
for n_mels in [0, 10, 20, 30, 40, 50]:
for f_unit in ['dB', 'magnitude']:
for bp_filtered in ['yes', 'no']:
expected_files.append(get_param_string()+'_0.csv')
print('Expected: ',len(expected_files))
all_embedding_files = list(sorted(os.listdir(OUT_COORDS)))
print('Observed: ',len(all_embedding_files))
missing_in_observed = [x for x in expected_files if x not in all_embedding_files]
print(len(missing_in_observed))
missing_in_observed
def get_params_from_filename(embedding_file):
embedding_params_string = embedding_file.replace('.csv', '')
embedding_params_list = embedding_params_string.split('_')
return embedding_params_list
for f in missing_in_observed[:1]:
print(f)
preprocess_type, metric_type, duration_method,min_dist, spread, n_neighbors, n_comps, input_type, denoise, n_mels, f_unit, bp_filtered, n_repeat = get_params_from_filename(f)
min_dist = int(min_dist)
spread = int(spread)
n_neighbors = int(n_neighbors)
n_comps = int(n_comps)
n_mels = int(n_mels)
if bp_filtered=='yes':
if duration_method=='stretched':
specs = spec_df.raw_specs_filtered_stretched.copy()
else:
specs = spec_df.raw_specs_filtered.copy()
else:
if duration_method=='stretched':
specs = spec_df.raw_specs_stretched.copy()
else:
specs = spec_df.raw_specs.copy()
if n_mels>0:
srs = spec_df.samplerate_hz.copy()
specs = [librosa.feature.melspectrogram(S=s, sr=sr, n_mels=n_mels, fmax=FMAX) for s, sr in zip(specs, srs)]
else:
# Problem: Because of the varying samplerate, the non-mel-transformed spectrograms are not all of the same height!!
    # So if I'm using them, I need to cut the ones with the larger frequency range.
n_bins = [s.shape[0] for s in specs]
min_bins = np.min(n_bins)
specs = [s[0:min_bins,:] for s in specs]
if denoise=='yes':
specs = [(s-np.median(s, axis=0)) for s in specs]
#if preprocess_type=='zs':
# specs = [calc_zscore(s) for s in specs]
if preprocess_type=='zs-fl-ce':
specs = [calc_zscore(s) for s in specs]
specs = [np.where(s<0,0,s) for s in specs]
specs = [np.where(s>3,3,s) for s in specs]
x=specs[0]
pd.DataFrame(x)
np.min(x)
np.max(x)
import matplotlib.pyplot as plt
n, bins, pathc = plt.hist(x)
specs_z = [calc_zscore(s) for s in specs]
x_z = specs_z[0]
pd.DataFrame(x_z)
n, bins, pathc = plt.hist(x_z)
specs = [np.where(s<0,0,s) for s in specs_z]
specs = [np.where(s>3,3,s) for s in specs_z]
f = specs[0]
np.min(f)
def calc_umap(data, outname, metric=DEF_METRIC_TYPE, min_dist=DEF_MIN_DIST, spread=DEF_SPREAD, n_neighbors=DEF_N_NEIGHBORS, n_comps=DEF_N_COMPS,n = 1):
for i in range(n):
reducer = umap.UMAP(n_components = n_comps,
min_dist=min_dist,
spread=spread,
n_neighbors=n_neighbors,
metric=metric,
random_state=1)
embedding = reducer.fit_transform(data)
print(outname)
np.savetxt(outname+'_'+str(i)+'.csv', embedding, delimiter=";")
# Re-do missing
for f in missing_in_observed:
preprocess_type, metric_type, duration_method,min_dist, spread, n_neighbors, n_comps, input_type, denoise, n_mels, f_unit, bp_filtered, n_repeat = get_params_from_filename(f)
min_dist = int(min_dist)
spread = int(spread)
n_neighbors = int(n_neighbors)
n_comps = int(n_comps)
n_mels = int(n_mels)
outname = os.path.join(os.path.sep, OUT_COORDS, get_param_string())
try:
# select dataframe column
#print("SELECT")
if bp_filtered=='yes':
if duration_method=='stretched':
specs = spec_df.raw_specs_filtered_stretched.copy()
else:
specs = spec_df.raw_specs_filtered.copy()
else:
if duration_method=='stretched':
specs = spec_df.raw_specs_stretched.copy()
else:
specs = spec_df.raw_specs.copy()
except:
print("FAILED SELECT: ", get_param_string())
try: # Mel transform
#print("n_mels")
if n_mels>0:
#print("LAGGA")
srs = spec_df.samplerate_hz.copy()
specs = [librosa.feature.melspectrogram(S=s, sr=sr, n_mels=n_mels, fmax=FMAX) for s, sr in zip(specs, srs)]
else:
# Problem: Because of the varying samplerate, the non-mel-transformed spectrograms are not all of the same height!!
        # So if I'm using them, I need to cut the ones with the larger frequency range.
#print("SMALLE")
n_bins = [s.shape[0] for s in specs]
min_bins = np.min(n_bins)
specs = [s[0:min_bins,:] for s in specs]
#print("NEMELS")
except:
print("FAILED MEL: ", get_param_string())
try:
# Spectrogram intensity unit
#print("UNIT")
if f_unit=="dB":
specs = [librosa.power_to_db(s, ref=np.max) for s in specs]
except:
print("FAILED INTENSITY UNIT: ", get_param_string())
# Denoising
if denoise=='yes':
specs = [(s-np.median(s, axis=0)) for s in specs]
try:
#print("OPREPRO")# Pre-processing
if preprocess_type=='zs':
specs = [calc_zscore(s) for s in specs]
elif preprocess_type=='zs-fl-ce':
specs = [calc_zscore(s) for s in specs]
specs = [np.where(s<0,0,s) for s in specs]
specs = [np.where(s>3,3,s) for s in specs]
except:
print("FAILED PREPROCESS: ", get_param_string())
try:
#print("trnasform")
n_bins = specs[0].shape[0]
maxlen = np.max([spec.shape[1] for spec in specs]) * n_bins + 2
trans_specs = [pad_transform_spectro(spec, maxlen) for spec in specs]
data = np.asarray(trans_specs)
except:
print("FAILED DATA PREP: ", get_param_string())
# set spec_dist depending on metric_type!!
if metric_type=='euclidean':
@numba.njit()
def spec_dist(a,b,size):
dist = np.sqrt((np.sum(np.subtract(a,b)*np.subtract(a,b)))) / np.sqrt(size)
return dist
elif metric_type=='manhattan':
@numba.njit()
def spec_dist(a,b,size):
dist = (np.sum(np.abs(np.subtract(a,b)))) / size
return dist
elif metric_type=='cosine':
@numba.njit()
def spec_dist(a,b,size):
# turn into unit vectors by dividing each vector field by magnitude of vector
dot_product = np.sum(a*b)
a_magnitude = np.sqrt(np.sum(a*a))
b_magnitude = np.sqrt(np.sum(b*b))
if (a_magnitude*b_magnitude)==0:
dist=0
else:
dist = 1 - dot_product/(a_magnitude*b_magnitude)
return dist
elif metric_type=='correlation':
@numba.njit()
def spec_dist(a,b,size):
a_meandiff = a - np.mean(a)
b_meandiff = b - np.mean(b)
dot_product = np.sum(a_meandiff*b_meandiff)
a_meandiff_magnitude = np.sqrt(np.sum(a_meandiff*a_meandiff))
b_meandiff_magnitude = np.sqrt(np.sum(b_meandiff*b_meandiff))
if (a_meandiff_magnitude * b_meandiff_magnitude)==0:
dist = 0
else:
dist = 1 - dot_product/(a_meandiff_magnitude * b_meandiff_magnitude)
return dist
if duration_method=='timeshift-overlap':
#print("CHEFDDF")
@numba.njit()
def calc_timeshift(a,b):
spec_s, spec_l = unpack_specs(a,b)
len_l = spec_l.shape[1]
len_s = spec_s.shape[1]
# define start position
min_overlap_frames = int(MIN_OVERLAP * len_s)
start_timeline = min_overlap_frames-len_s
max_timeline = len_l - min_overlap_frames
n_of_calculations = (max_timeline+1-start_timeline)+(max_timeline+1-start_timeline)
distances = np.full((n_of_calculations),999.)
count=0
for timeline_p in range(start_timeline, max_timeline+1):
# mismatch on left side
if timeline_p < 0:
start_col_l = 0
len_overlap = len_s - abs(timeline_p)
end_col_l = start_col_l + len_overlap
end_col_s = len_s # until the end
start_col_s = end_col_s - len_overlap
# mismatch on right side
elif timeline_p > (len_l-len_s):
start_col_l = timeline_p
len_overlap = len_l - timeline_p
end_col_l = len_l
start_col_s = 0
end_col_s = start_col_s + len_overlap
# no mismatch on either side
else:
start_col_l = timeline_p
len_overlap = len_s
end_col_l = start_col_l + len_overlap
start_col_s = 0
end_col_s = len_s # until the end
s_s = spec_s[:,start_col_s:end_col_s].astype(np.float64)
s_l = spec_l[:,start_col_l:end_col_l].astype(np.float64)
size = s_s.shape[0]*s_s.shape[1]
distances[count] = spec_dist(s_s, s_l, size)
count = count + 1
min_dist = np.min(distances)
return min_dist
try:
calc_umap(data, outname=outname, metric = calc_timeshift)
except:
print("FAILED UMAP: ", get_param_string())
###Output
/home/mthomas/anaconda3/envs/meerkat_umap_env_3/lib/python3.7/site-packages/umap/umap_.py:1728: UserWarning: custom distance metric does not return gradient; inverse_transform will be unavailable. To enable using inverse_transform method method, define a distance function that returns a tuple of (distance [float], gradient [np.array])
"custom distance metric does not return gradient; inverse_transform will be unavailable. "
|
Python/numpy-pandas/05_correlation_coefficient_pearson.ipynb | ###Markdown
Correlation Coefficient

Index
1. Variance
1. Covariance
1. Correlation coefficient
1. Coefficient of determination
1. Correlation analysis of Premier League data
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Generate sample data
###Code
data1 = np.array([80, 85, 100, 90, 95])
data2 = np.array([70, 80, 100, 95, 95])
###Output
_____no_output_____
###Markdown
2. Variance
- Describes the degree of dispersion of a single variable
- The mean of the squared deviations

$ variance = \frac{\sum_{i=1}^n{(x_i-\bar{x})^2}}{n}, \quad (\bar{x}: \text{mean}) $
###Code
# variance code
def variance(data):
avg = np.average(data)
var = 0
for num in data:
var += (num - avg) ** 2
return var / len(data)
variance(data1), variance(data2)
np.var(data1), np.var(data2)
variance(data1) ** 0.5, variance(data2) ** 0.5
np.std(data1), np.std(data2)
###Output
_____no_output_____
###Markdown
Performance comparison between the plain Python function and the numpy function
###Code
p_data1 = np.random.randint(60, 100, int(1E5))
p_data2 = np.random.randint(60, 100, int(1E5))
p_data2[-5:]
# plain Python function
%%time
variance(p_data1), variance(p_data2)
# numpy
%%time
np.var(p_data1), np.var(p_data2)
###Output
Wall time: 2 ms
###Markdown
3. Covariance
- Describes the degree to which two random variables vary together
- The mean of the products of the deviations
- It can show the direction of the relationship, but it is limited in expressing its strength
- Its value varies strongly with the scale of the sample data, which is a drawback

$ covariance = \frac{\sum_{i=1}^{n}{(x_i-\bar{x})(y_i-\bar{y})}}{n}, \quad (\bar{x}: \text{mean of } x,\ \bar{y}: \text{mean of } y) $
###Code
# covariance function
def covariance(data1, data2):
data1_avg = np.average(data1)
data2_avg = np.average(data2)
cov = 0
for num1, num2 in list(zip(data1, data2)):
cov += ((num1 - data1_avg) * (num2 - data2_avg))
    return cov / (len(data1) - 1)  # n - 1: sample covariance, matching np.cov's default
data1 = np.array([80, 85, 100, 90, 95])
data2 = np.array([70, 80, 100, 95, 95])
np.cov(data1, data2)[0, 1]
covariance(data1, data2)
data3 = np.array([80, 85, 100, 90, 95])
data4 = np.array([100, 90, 70, 90, 80])
np.cov(data3, data4)[0, 1]
covariance(data3, data4)
data5 = np.array([800, 850, 1000, 900, 950])
data6 = np.array([1000, 900, 700, 900, 800])
np.cov(data5, data6)[0, 1]
covariance(data5, data6)
###Output
_____no_output_____
###Markdown
4. Correlation coefficient
- Created to overcome the limitations of covariance
- Takes values between -1 and 1; the closer to 0, the weaker the relationship
- Dividing the covariance by the square root of the product of the variances of x and y removes the dependence on the scale of x and y
- https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.corrcoef.html

$ correlation\ coefficient = \frac{\text{covariance}}{\sqrt{\mathrm{var}(x) \cdot \mathrm{var}(y)}} $

The final correlation coefficient:

$ r = \frac{\sum(x-\bar{x})(y-\bar{y})}{\sqrt{{\sum(x-\bar{x})^2}\cdot{\sum(y-\bar{y})^2}}} $
###Code
# correlation coefficient function
def corrcoef_func(data1, data2):
data1_avg = np.average(data1)
data2_avg = np.average(data2)
cov = 0
var1 = 0
var2 = 0
    # covariance
for num1, num2 in list(zip(data1, data2)):
cov += (num1 - data1_avg) * (num2 - data2_avg)
    covariance = cov / len(data1)  # len(data1 - 1) in the original equals len(data1); the scale cancels in the ratio below
    # variances of data1 and data2
for num1, num2 in list(zip(data1, data2)):
var1 += (num1 - data1_avg) ** 2
var2 += (num2 - data2_avg) ** 2
variance1 = var1 / len(data1)
variance2 = var2 / len(data1)
return covariance / ((variance1 * variance2) ** 0.5)
round(corrcoef_func(data1, data2), 2), \
round(corrcoef_func(data3, data4), 2), \
round(corrcoef_func(data5, data6), 2)
round(np.corrcoef(data1, data2)[0, 1], 2), \
round(np.corrcoef(data3, data4)[0, 1], 2), \
round(np.corrcoef(data5, data6)[0, 1], 2)
###Output
_____no_output_____
###Markdown
5. Coefficient of determination (R-squared)
- The degree to which y can be predicted from x
- The square of the correlation coefficient (which also makes the value non-negative)
- The larger the value, the more accurately y can be predicted through regression analysis
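As a quick illustration with made-up numbers (not part of the Premier League data used later): for an exactly linear relationship the correlation is 1, so R-squared is also 1.

```python
x = np.arange(10)
y = 3 * x + 2                          # exact linear relationship
r = np.corrcoef(x, y)[0, 1]
print(round(r, 2), round(r ** 2, 2))   # 1.0 1.0
```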
###Code
round(np.corrcoef(data1, data2)[0, 1] ** 2, 2), \
round(np.corrcoef(data1, data4)[0, 1] ** 2, 2)
###Output
_____no_output_____
###Markdown
6. Correlation analysis of Premier League data
- In the 2016 Premier League results, which of goals scored and goals conceded had more influence on points?
###Code
import pandas as pd
df = pd.read_csv("datas/premierleague.csv")
df.tail()
# goals scored (gf)
gf = np.array(df["gf"])
gf
# goals conceded (ga)
ga = np.array(df["ga"])
ga
# points
points = np.array(df["points"])
points
data1, data2 = np.corrcoef(gf, points)[0, 1] ** 2, np.corrcoef(ga, points)[0, 1] ** 2
data1, data2
round(data1, 2), round(data2, 2)
###Output
_____no_output_____ |
_notebooks/2021-12-02-advent-of-code-2.ipynb | ###Markdown
Advent of Code 2021 - Day 2
> My solution and thoughts
- toc: false
- badges: false
- comments: false
- categories: [advent-of-code]

Day 2! Technically, I wrote this the same day as day 1, and on the third day release... but you can forgive that, right?

Part 1

Anyway, we're on to day 2! This challenge centers around calculating positions. So our input is a list of directions, and values to apply to those directions. First things first, we're going to want to split those up into something meaningful. Personally, I love a good `namedtuple`:
###Code
from collections import namedtuple
Movement = namedtuple('Movement', ['direction', 'value'])
###Output
_____no_output_____
###Markdown
So each instance in the input list can now be instantiated as a `Movement`, so let's read in that data!
###Code
with open('../assets/inputs/aoc/2021/day_2.txt', 'r') as data_file:
lines = data_file.readlines()
movements = [Movement(line.strip().split(' ')[0], int(line.strip().split(' ')[1])) for line in lines]
display(movements[:3])
display(movements[997:])
###Output
_____no_output_____
###Markdown
Ok! We have some data, and we've got it formatted in a way that seems reasonable. Essentially, we need to calculate the horizontal and vertical position that the list of movements leaves us in. We will subtract from the vertical when we go up, add when we go down, and add to the horizontal when we move forward. This is a pretty easy set of conditions:
###Code
horizontal_position = 0
vertical_position = 0
for movement in movements:
if movement.direction == 'up':
vertical_position -= movement.value
elif movement.direction == 'down':
vertical_position += movement.value
else:
horizontal_position += movement.value
display(horizontal_position * vertical_position)
###Output
_____no_output_____
###Markdown
Since the puzzle asks us to give the answer as the product of our tracked values, we do. Success!

But what about that code? Does it look... a little gross? I think it does. Namely, we're doing almost the exact same thing for every branch. Can we clean that up? One approach that I like is to declare the operations in a dictionary:
###Code
import operator as ops
movement_operation = {
'up': ops.sub,
'down': ops.add,
'forward': ops.add
}
###Output
_____no_output_____
###Markdown
Now we can re-write a bit:
###Code
horizontal_position = 0
vertical_position = 0
for movement in movements:
op = movement_operation[movement.direction]
if movement.direction in ['up', 'down']:
vertical_position = op(vertical_position, movement.value)
else:
horizontal_position = op(horizontal_position, movement.value)
display(horizontal_position * vertical_position)
###Output
_____no_output_____
###Markdown
Is this better? It's a little harder to read, and we have now encoded that `vertical_position` has to be the thing that both 'up' and 'down' deal with. If we were to need to change 'down' to deal with some other value, this is harder to break apart. On the other hand, the operations are now lifted. If I need to change the operation that is performed for anything, I can do so in one place and be sure that it propagates to all the usage points. That's pretty cool.

Part 2

Now we introduce the idea of "aim". This is fun if you're a person who has played _entirely_ too much [Subnautica](http://subnauticagame.com/) because the idea makes sense. In this story, you're piloting a sub-marine vehicle. So the aim is basically where the nose of the craft is pointed. Adjusting 'up' and 'down' now affects aim, and only 'forward' changes the actual position values. Pretty easy:
###Code
aim = 0
horizontal_position = 0
vertical_position = 0
for movement in movements:
op = movement_operation[movement.direction]
if movement.direction in ['up', 'down']:
aim = op(aim, movement.value)
else:
horizontal_position = op(horizontal_position, movement.value)
vertical_position = op(vertical_position, (movement.value * aim))
display(horizontal_position * vertical_position)
###Output
_____no_output_____ |
PCA&Boruta/PCA/BA888_PCA.ipynb | ###Markdown
###Code
## upload files and import library
from google.colab import files
uploaded = files.upload()
import pandas as pd
import numpy as np
train = pd.read_csv('train_data.csv',index_col=0)
test = pd.read_csv('test_data.csv',index_col=0)
## based on Kiki's feature selection results and data preparation process
from sklearn.preprocessing import StandardScaler
features = ['Effect_of_forex_changes_on_cash','EPS_Diluted','Revenue_Growth','Gross_Profit_Growth','Net_cash_flow_/_Change_in_cash','EPS','Financing_Cash_Flow',
'Weighted_Average_Shares_Growth','EPS_Diluted_Growth','Book_Value_per_Share_Growth','Operating_Cash_Flow_growth','Receivables_growth',
'Weighted_Average_Shs_Out','EPS_Growth','Weighted_Average_Shares_Diluted_Growth','SG&A_Expenses_Growth','Operating_Cash_Flow','Retained_earnings_(deficit)',
'Operating_Income_Growth','Operating_Cash_Flow_per_Share']
# Separating out the features
x_train = train.loc[:, features].values
x_test= test.loc[:,features].values
# transfer into dataframe
x_train = pd.DataFrame(data=x_train,columns=[features])
x_test = pd.DataFrame(data=x_test,columns=[features])
# transfer to numeric for X-train and X-test
cols = x_train.select_dtypes(exclude=['float']).columns
cols2 = x_test.select_dtypes(exclude=['float']).columns
x_train[cols] = x_train[cols].apply(pd.to_numeric, downcast='float', errors='coerce')
x_test[cols2] = x_test[cols2].apply(pd.to_numeric, downcast='float', errors='coerce')
# Separating out the target
y_train = train.loc[:,['Class']].values
y_test = test.loc[:,['Class']].values
# transfer into dataframe
y_train= pd.DataFrame(data=y_train,columns=["Class"])
y_test= pd.DataFrame(data=y_test,columns=["Class"])
# transfer to numeric for y-train
y_train['Class'] =y_train['Class'].apply(pd.to_numeric, downcast='float', errors='coerce')
# replace x NAs with 0
x_train.fillna(0, inplace=True)
x_test.fillna(0, inplace=True)
# drop NAs in y
y_train[pd.isnull(y_train).any(axis=1)]
y_train.dropna(inplace = True)
# drop corresponding x rows
x_train.drop(x_train.index[[7928,12726,12727,17688,17689]],inplace=True)
# winsorize the x-train
from scipy.stats.mstats import winsorize
# remove outliers 5% higher than the max and 5% lower than the min using for loop and winsorize
for col in x_train:
x_train[col] = winsorize(x_train[col], limits=[0.05,0.05])
# Standardizing the features
from sklearn.preprocessing import StandardScaler
x_train = StandardScaler().fit_transform(x_train)
x_test = StandardScaler().fit_transform(x_test)
from sklearn.decomposition import PCA
pca = PCA(n_components=6)
principalComponents = pca.fit_transform(x_train)
principalDf = pd.DataFrame(data = principalComponents, columns = ['principal component 1', 'principal component 2','principal component 3','principal component 4','principal component 5','principal component 6'])
finalDf = pd.concat([principalDf, y_train.reindex(principalDf.index)], axis=1, sort=False)
pca.explained_variance_ratio_
# sns.pairplot(data=finalDf, hue="Class", height=8,corner=True)
# sns.set_context("notebook", font_scale=2)
# import KMeans
from sklearn.cluster import KMeans
# Convert DataFrame to matrix
mat = principalDf.values
# Using sklearn, change clusters accordingly
km = KMeans(n_clusters=2)
km.fit(mat)
# Get cluster assignment labels
labels = km.labels_
labels
finalDf['Group']=labels
finalDf
finalDf['Match'] = np.where(finalDf['Group'] == finalDf['Class'], 1,0)
# The variable total adds up the total number of matches.
Total = finalDf['Match'].sum()
print (Total/17685)
###Output
0.5014984450098954
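###Markdown
Note that k-means cluster labels are arbitrary: cluster 0 is not guaranteed to correspond to `Class = 0`, so an agreement near 0.5 can mean either no real structure or simply flipped labels. A quick label-flip-insensitive check, sketched here with the `finalDf` built above (a permutation-invariant score such as the adjusted Rand index would be a more general alternative):

```python
agreement = (finalDf['Group'] == finalDf['Class']).mean()
print(max(agreement, 1 - agreement))
```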
|
Predicting the artist from text.ipynb | ###Markdown
Predicting the artist from text using Web Scraping and a Naive Bayes classifier

__1) The Goal:__ Build a text classification model to predict the artist from a piece of text.

__2) Get the Data:__ Download HTML pages, get a list of song urls, extract lyrics from song urls.

__3) Split the Data:__ As usual.

__4) Exploratory Data Analysis (EDA):__ No need for EDA here.

__5)-9) Feature Engineering (FE), Train Model, Optimize Hyperparameters/Cross-Validation:__ Convert text to numbers by applying the Bag Of Words method; build and train a Naive Bayes classifier; balance out the dataset (a short sketch of this step is shown below).

__10) Calculate Test Score__

__11) Deploy and Monitor:__ Write a command-line interface

------------------

Getting the data: Web Scraping
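Before the scraping itself, here is a minimal sketch of the Bag of Words + Naive Bayes step from 5)-9) above, so the end goal is clear. The `corpus` and `labels` below are hypothetical placeholders; the rest of the notebook builds the real corpus by scraping lyrics pages:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# hypothetical stand-ins for the scraped lyrics and their artists
corpus = ["hello world hello again", "goodbye blue sky goodbye"]
labels = ["artist-a", "artist-b"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(corpus, labels)
print(model.predict(["hello hello world"]))   # -> ['artist-a']
```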
###Code
import requests
from bs4 import BeautifulSoup
from bs4 import SoupStrainer
###Output
_____no_output_____
###Markdown
Step 1. Get the raw HTML text from the target website
###Code
artist = 'michael-jackson'
type(requests.get('https://www.metrolyrics.com/' + artist + '-alpage-1.html'))  # .text is a string
with open('mj_page1.html', 'w') as file:
file.write(requests.get('https://www.metrolyrics.com/' + artist + '-alpage-1.html').text)
with open('mj_page1.html', 'r') as file:
soup = BeautifulSoup(markup=file)
pages = []
for link in soup.find_all(class_ = 'pages')[0].find_all('a'):
pages.append(link.get('href'))
print(pages)
###Output
['http://www.metrolyrics.com/michael-jackson-alpage-1.html', 'http://www.metrolyrics.com/michael-jackson-alpage-2.html', 'http://www.metrolyrics.com/michael-jackson-alpage-3.html', 'http://www.metrolyrics.com/michael-jackson-alpage-4.html', 'http://www.metrolyrics.com/michael-jackson-alpage-5.html', 'http://www.metrolyrics.com/michael-jackson-alpage-6.html']
###Markdown
Takes a long time. Don't run!

for i in range(len(pages)):
    with open('mj_page' + str(i+1) + '.html', 'w') as file:
        file.write(requests.get(pages[i]).text)
###Code
mj_links = []
for i in range(len(pages)):
with open('mj_page'+str(i+1)+'.html', 'r') as file:
soup = BeautifulSoup(markup=file)
for div in soup.find_all('div', attrs={'class':'module', 'id': 'popular'}):
for td in div.find_all('td'):
if td.a is not None:
mj_links.append(td.a.get('href'))
print(len(mj_links))
print(len(pages))
###Output
402
6
###Markdown
Takes a long time. Don't run!

mj_lyrics = []
for link in mj_links:
    soup = BeautifulSoup(markup=requests.get(link).text)
    lyrics_section = soup.find(attrs={'class': 'module', 'id':'popular'})
    lyrics_chunk = []
    for verse in soup.find_all('p', class_='verse'):
        for string in verse.stripped_strings:
            lyrics_chunk.append(string)
    mj_lyrics = mj_lyrics+lyrics_chunk.append((' '.join(lyrics_chunk), 'michael jackson'))
with open('mj_lyrics.txt', 'w') as file:
    for i in range(len(mj_lyrics)):
        file.write(mj_lyrics[i] + '\n')
###Code
# Run this to use mj_lyrics !
with open('mj_lyrics.txt', 'r') as f:
    mj_lyrics = [line.rstrip() for line in f]
len(mj_links), len(mj_lyrics)
given = 'aaaabbbcca'
result = []
i = 0
count = 0
for l in given:
if l == given[i]:
count = count + 1
else:
i = i + count
new_count = 0
count = new_count + 1
result.append((l,count))
tuple_result = []
for i in range((len(result))):
if (i+1) < len(result):
if result[i][0] == result[i+1][0]:
continue
else:
tuple_result.append(result[i])
continue
else:
tuple_result.append(result[len(result)-1])
print(tuple_result)
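# The same run-length encoding, written with itertools.groupby (sketch):
from itertools import groupby
print([(ch, len(list(grp))) for ch, grp in groupby(given)])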
# Note: this parses the URL strings themselves, not the downloaded pages,
# which is why the output below only shows the URLs wrapped in <p> tags.
page_soups = [BeautifulSoup(markup=markup) for markup in pages]
print(page_soups)
###Output
[<html><body><p>http://www.metrolyrics.com/michael-jackson-alpage-1.html</p></body></html>, <html><body><p>http://www.metrolyrics.com/michael-jackson-alpage-2.html</p></body></html>, <html><body><p>http://www.metrolyrics.com/michael-jackson-alpage-3.html</p></body></html>, <html><body><p>http://www.metrolyrics.com/michael-jackson-alpage-4.html</p></body></html>, <html><body><p>http://www.metrolyrics.com/michael-jackson-alpage-5.html</p></body></html>, <html><body><p>http://www.metrolyrics.com/michael-jackson-alpage-6.html</p></body></html>]
###Markdown
mj_1page = requests.get('https://www.metrolyrics.com/michael-jackson-alpage-1.html')with open('file.html', 'w') as file: file.write(mj_1page.text) mj_1page.text[:100]
###Code
### Step 2. Convert the raw HTML string to a BeautifulSoup-object, so that we can parse the data.
###Output
_____no_output_____
###Markdown
mj_1page_soup = BeautifulSoup(mj_1page.text, 'html.parser')mj_1page_soup.find_all(class_ = 'pages')
###Code
### Step 3. Use the BeautifulSoup object to parse the HTML document tree down to the tag that contains the data you want.
- There are multiple ways to get to the solution!
- `.find()` always returns the first instance of your "query"
- `.find_all()` returns a list-like object (called a "ResultSet") that contains the matching results.
- `.text` returns the actual part of the tag that is outside of the **< angled brackets >** (i.e. the text)
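# A tiny, self-contained illustration of the three (sketch):
demo = BeautifulSoup('<div><a href="x.html">first</a><a href="y.html">second</a></div>', 'html.parser')
print(demo.find('a'))            # first matching tag
print(len(demo.find_all('a')))   # ResultSet with every match -> 2
print(demo.find('a').text)       # the text outside the angled brackets -> 'first'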
###Output
_____no_output_____
###Markdown
tag_list = []for tag in mj_1page_soup.find_all(class_ = 'pages'): print(tag.text) tag_list.append(tag.text)tag_list pages = []for link in mj_1page_soup.find_all(class_ = 'pages')[0].find_all('a'): pages.append(link.get('href')) print(len(pages),pages)
###Code
-------------------------------
### So you've spent a lot of time writing useful code...Now what?
###Output
_____no_output_____
###Markdown
artist = 'michael-jackson'url = 'https://www.metrolyrics.com/' + artist + '-alpage-1.html'response = requests.get(url)soup = BeautifulSoup(markup=response.text) mj_links = []for div in soup.find_all('div', attrs={'class':'module', 'id': 'popular'}): print('here') for td in div.find_all('td'): if td.a is not None: mj_links.append(td.a.get('href')) mj_lyrics = []for li in mj_links[:10]: response = requests.get(li) soup2 = BeautifulSoup(markup=response.text) lyrics_section = soup2.find(attrs={'class': 'module', 'id':'popular'}) lyrics_chunk = [] for verse in soup2.find_all('p', class_='verse'): lyrics_chunk.append(verse.text) mj_lyrics.append((' '.join(lyrics_chunk), 'michael jackson'))
###Code
### Saving the list of all the Michael Jackson lyrics as a binary data stream.
###Output
_____no_output_____
###Markdown
import picklewith open('mj_lyrics_binary.txt', 'wb') as filehandle: pickle.dump(mj_lyrics, filehandle)
###Code
### Reading the list of all Michael Jackson lyrics from the binary data file.
###Output
_____no_output_____
###Markdown
with open('mj_lyrics_binary.txt', 'rb') as filehandle: test_binary = pickle.load(filehandle) type(test_binary), len(test_binary) type(mj_lyrics), len(mj_lyrics)
###Code
### Saving and reading the list of all the Michael Jackson lyrics as JavaScript Object Notation (JSON).
###Output
_____no_output_____
###Markdown
import jsonwith open('mj_lyrics_json.txt', 'w') as filehandle: json.dump(mj_lyrics, filehandle) with open('mj_lyrics_json.txt', 'r') as filehandle: test_json = json.load(filehandle)type(test_json), len(test_json)
###Code
----------------
---------------
###Output
_____no_output_____
###Markdown
artist = 'mariah-carey'url = 'https://www.metrolyrics.com/' + artist + '-alpage-1.html'response = requests.get(url)soup = BeautifulSoup(markup=response.text) mc_links = []for div in soup.find_all('div', attrs={'class':'module', 'id': 'popular'}): print('here') for td in div.find_all('td'): if td.a is not None: mc_links.append(td.a.get('href')) mc_lyrics = []for li in mc_links[:10]: response = requests.get(li) soup2 = BeautifulSoup(markup=response.text) lyrics_section = soup2.find(attrs={'class': 'module', 'id':'popular'}) lyrics_chunk = [] for verse in soup2.find_all('p', class_='verse'): lyrics_chunk.append(verse.text) mc_lyrics.append((' '.join(lyrics_chunk), 'mariah carey')) type(mc_lyrics), len(mc_lyrics)
###Code
### Saving the list of all the Mariah Carey lyrics as JavaScript Object Notation (JSON).
###Output
_____no_output_____
###Markdown
with open('mc_lyrics_json.txt', 'w') as filehandle: json.dump(mc_lyrics, filehandle)
###Code
----------------
---------------
### Clean up text
###Output
_____no_output_____
###Markdown
mc_lyrics_str = ' '.join(map(str, mc_lyrics))mc_lyrics_string_3=mc_lyrics_str.replace('("','')mc_lyrics_string_2=mc_lyrics_string_3.replace('", ', ' ')mc_lyrics_string_1=mc_lyrics_string_2.replace(" 'mariah carey')", ". ")mc_lyrics_string=mc_lyrics_string_1.replace(r"\n", ". ")print(mc_lyrics_string)
###Code
import spacy
nlp = spacy.load('en_core_web_md')
with open('mj_lyrics_json.txt', 'r') as filehandle:
mj_json = json.load(filehandle)
#with open('mc_lyrics_json.txt', 'r') as filehandle:
# mc_json = json.load(filehandle)
text_original = [i[0] for i in mj_json]
text = nlp(text_original[0]) # when we make it, it's already tokenized
for token in text:
print(token)
###Output
Sadness
had
been
close
as
my
next
of
kin
Then
Happy
came
one
day
,
chased
my
blues
away
My
life
began
when
Happy
smiled
Sweet
,
like
candy
to
a
child
Stay
here
and
love
me
just
a
while
Let
Sadness
see
what
Happy
does
Let
Happy
be
where
Sadness
was
Happy
,
that
's
you
,
you
made
my
life
brand
new
Lost
as
a
little
lamb
was
I
,
till
you
came
in
My
life
began
when
happy
smiled
Sweet
,
like
candy
to
a
child
Stay
here
and
love
me
just
a
while
Let
Sadness
see
what
Happy
does
Let
Happy
be
where
Sadness
was
(
Till
now
)
Where
have
I
been
,
what
lifetime
was
I
in
?
Suspended
between
time
and
space
Lonely
until
,
Happy
came
smiling
up
at
me
Sadness
had
no
choice
but
to
flee
I
said
a
prayer
so
silently
Let
Sadness
see
what
Happy
does
Let
Happy
be
where
Sadness
was
till
now
Till
now
Happy
,
yeah
,
yeah
,
happy
La
,
la
,
la
,
la
,
la
,
la
,
la
,
la
Yeah
happy
,
ooh
,
happy
La
,
la
,
la
,
la
,
la
,
la
,
la
,
la
Happy
,
oh
yeah
happy
La
,
la
,
la
,
la
,
la
,
la
,
la
,
la
Happy
###Markdown
Linguistic Features
###Code
# Tokenization, stop words
li = []
for token in text:
if token.is_stop == True:
li.append(token)
print('Tokens that are stop words: {}'.format(li))
li = []
for token in text:
if token.is_stop == True:
if token.lemma_ not in li:
li.append(token.lemma_)
print('Tokens that are stop words: {}'.format(li))
text_original[2]
def spacy_cleaner(document):
tokenize_doc = nlp(document)
new_doc = []
for token in tokenize_doc:
if not token.is_stop and token.is_alpha:
new_doc.append(token.lemma_)
return new_doc
# Clean every Michael Jackson song with the spaCy helper above
with open('mj_lyrics_json.txt', 'r') as filehandle:
    mj_json = json.load(filehandle)
text_original = [i[0] for i in mj_json]
cleaned_songs = [spacy_cleaner(song) for song in text_original]
print(cleaned_songs[1])
# POS-tagging
for token in text[:15]:
print(token, token.pos_)
spacy.explain('CCONJ')
# Named Entity Recognition (NER)
text2 = nlp("My name is Ada Lovelace, I am in New York until Christmas and work at Google since May 2020.")
text2
spacy.displacy.render(text2, style = 'ent')
# Dependencies
spacy.displacy.render(text2, style='dep')
###Output
_____no_output_____
###Markdown
Word embeddings
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
things = pd.DataFrame({'size': [60, 65, 15, 90, 92, 45, 70, 50, 21],
'roundness': [76, 11, 94, 99, 96, 8, 18, 15, 56]},
index=['Apple', 'Banana', 'Blueberry',
'Melon', 'Football', 'Pen', 'Shoe', 'Spoon', 'Dice'])
things
plt.figure(figsize=(8,6))
plt.scatter(things['size'], things['roundness'])
plt.xlabel('size'), plt.ylabel('roundness')
for i, txt in enumerate(things.index):
plt.annotate(txt, (things.iloc[i][0], things.iloc[i][1]+2), fontsize=12)
###Output
_____no_output_____
###Markdown
Word vectors in SpacySpacy can compare two word(vector)s regarding how similar they are. For this, you need a model of at least medium size, e.g. `en_core_web_md`.
###Code
pd.DataFrame(nlp('cat').vector)
word1 = nlp('cat')
word2 = nlp('dog')
round(word1.similarity(word2), 2)
word1 = nlp('hound')
round(word1.similarity(word2), 2)
word1 = nlp('genius')
word2 = nlp('man')
round(word1.similarity(word2), 2)
word1 = nlp('genius')
word2 = nlp('woman')
round(word1.similarity(word2), 2)
def vec(s):
return nlp.vocab[s].vector.reshape(1, -1)
new_queen = vec('king') - vec('man') + vec('woman')
from sklearn.metrics.pairwise import cosine_similarity
cosine_similarity(new_queen, vec('queen'))
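# For reference, cosine similarity is just the normalised dot product
# cos(a, b) = a.b / (|a||b|); a quick manual check (sketch):
import numpy as np
a, b = new_queen.ravel(), vec('queen').ravel()
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))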
###Output
_____no_output_____
###Markdown
---------------- "Bag of Words" Step 1. Construct a Text Corpus- This is basically what you end up with after all your scraping and cleaning.
###Code
#for i in mj_lyrics_string:
# print(i[0])
with open('mj_lyrics_json.txt', 'r') as filehandle:
mj_json = json.load(filehandle)
with open('mc_lyrics_json.txt', 'r') as filehandle:
mc_json = json.load(filehandle)
mj_corpus = [i[0] for i in mj_json]
mc_corpus = [i[0] for i in mc_json]
CORPUS = mj_corpus + mc_corpus
###Output
_____no_output_____
###Markdown
len(CORPUS)CORPUS[19]new = CORPUSnew_print = []for i in range(len(new)): (.replace('("','')) new_print.append(new[i].replace(r"\n", ". "))new_print[0]
###Code
LABELS = ['michael jackson'] * len(mj_corpus) + ['mariah carey'] * len(mc_corpus) # align labels with CORPUS
###Output
_____no_output_____
###Markdown
Step 2. Vectorize the text input using the "Bag of Words" technique.
###Code
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(stop_words='english')
vec = cv.fit_transform(CORPUS)
pd.DataFrame(vec.todense(), columns = cv.get_feature_names(), index=LABELS).head()
#cv.get_stop_words()
###Output
_____no_output_____
###Markdown
Step 3. Apply Tf-Idf Transformation (Normalization)* TF - Term Frequency (% count of a word $w$ in doc $d$)* IDF - Inverse Document Frequency
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tf = TfidfTransformer()
vec2 = tf.fit_transform(vec)
pd.DataFrame(vec2.todense(), columns = cv.get_feature_names(), index = LABELS).head()
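# Optional sanity check (sketch): reproduce the tf-idf values by hand using
# sklearn's default formula, idf(t) = ln((1 + n_docs) / (1 + df(t))) + 1,
# tfidf = count * idf, then L2-normalise each row.
import numpy as np
counts = vec.toarray().astype(float)
n_docs = counts.shape[0]
idf = np.log((1 + n_docs) / (1 + (counts > 0).sum(axis=0))) + 1
manual = counts * idf
manual = manual / np.linalg.norm(manual, axis=1, keepdims=True)
print(np.allclose(manual, vec2.toarray())) # expect True with sklearn's default settings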
###Output
_____no_output_____
###Markdown
Step 4. Put everything together in a pipeline
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words = 'english')
X = vectorizer.fit_transform(CORPUS).todense()
from sklearn.ensemble import RandomForestClassifier
m = RandomForestClassifier()
m.fit(X, LABELS)
m.score(X, LABELS)
#m.predict(['yellow submarine'])
from sklearn.pipeline import make_pipeline
pipeline = make_pipeline(TfidfVectorizer(stop_words='english'), RandomForestClassifier(max_depth=10))
pipeline.fit(CORPUS, LABELS)
pipeline.predict_proba(['3002']) #mariah carey is the second one
pipeline.predict_proba(['addicted'])
pipeline.predict_proba(['bad'])
pipeline.predict_proba(['Alexandra']) # same default prediction whether it's chocolate or Alexandra
# up sampeling, down sampeling
###Output
_____no_output_____
###Markdown
-------------------------- The Naive Bayes Classifier A simple, probabilistic classification model built on top of Bayes' Theorem. --- Naive Bayes in Scikit-Learn
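In its multinomial form the decision rule is just $\hat{y} = \arg\max_y P(y)\prod_i P(w_i \mid y)$: pick the artist whose prior and word probabilities best explain the text, treating every word as conditionally independent (the "naive" part). `ComplementNB`, used below, estimates the word statistics from the complement of each class, which tends to work better on imbalanced corpora.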
###Code
from sklearn.naive_bayes import ComplementNB
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(stop_words='english')
vec = cv.fit_transform(CORPUS)
pd.DataFrame(vec.todense(), columns = cv.get_feature_names(), index=LABELS).head()
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
import numpy as np
pipeline = make_pipeline(TfidfVectorizer(stop_words='english'), ComplementNB(alpha=0.5))
pipeline.fit(CORPUS, LABELS)
pipeline.predict_proba(['100']) # first is mariah carey
pipeline.predict_proba(['2000'])
pipeline.predict_proba(['tensor tarragons rule']) # ???
###Output
_____no_output_____
###Markdown
---
###Code
fe = pipeline.named_steps['tfidfvectorizer']
nb_model = pipeline.named_steps['complementnb']
df = pd.DataFrame(np.exp(nb_model.feature_log_prob_), columns=fe.get_feature_names(), index=['Michael Jackson', 'Mariah Carey']).T
df.head()
df['diff'] = df['Michael Jackson'] - df['Mariah Carey']
df['diff'].sort_values().plot.bar(figsize=(30, 8), fontsize=25);
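# Sketch of the command-line interface from step 11 of the project outline.
# Assumes the fitted `pipeline` has been pickled to 'artist_model.pkl'
# (the file name and script layout here are hypothetical).
#
# import argparse, pickle
#
# def main():
#     parser = argparse.ArgumentParser(description='Guess the artist for a piece of text')
#     parser.add_argument('text', help='lyrics snippet to classify')
#     args = parser.parse_args()
#     with open('artist_model.pkl', 'rb') as f:
#         model = pickle.load(f)
#     print(model.predict([args.text])[0])
#
# if __name__ == '__main__':
#     main()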
###Output
_____no_output_____ |
colab_notebooks/ntbk_04b_labeled_train_bert_gold_labels.ipynb | ###Markdown
Dive into Abusive Language with SnorkelAuthor: BingYune Chen Updated: 2021-07-18---------- Train BERT using Ground Truth labelsWe just completed the following step to work with our BERT model:1. Fine-tuned BERT model using Sentiment140 to generalize on Twitter data**We will now train our BERT model from Step 1 using ground truth labels for X_train to compare against Snorkel labels for X_train.**
###Code
# Imports and setup for Google Colab
# Mount Google Drive
from google.colab import drive ## module to use Google Drive with Python
drive.mount('/content/drive') ## mount to access contents
# Install python libraries
! pip install --upgrade tensorflow --quiet
! pip install snorkel --quiet
! pip install tensorboard==1.15.0 --quiet
! pip install transformers --quiet
# Imports for data and plotting
import numpy as np
import pandas as pd
import pickle
import os
import re
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set(font_scale=1.5, style='whitegrid')
plt.rcParams["font.family"] = "sans serif"
# Style configuration
COLORS = [
'#0059ff',
'#fdbd28',
'#28D9AA',
'#EE5149',
'#060F41',
'#788995',
'#FF69B4',
'#7F00FF',
]
GREY = '#788995'
DARK_GREY = '#060F41'
BLUE = '#0059ff'
DBLUE = '#060F41'
GOLD = '#fdbd28'
GREEN = '#28D9AA'
RED = '#EE5149'
BLACK = '#000000'
WHITE = '#FFFFFF'
LINEWIDTH = 5
LINESPACING = 1.25
FS_SUPTITLE = 30
FS_CAPTION = 24
FS_LABEL = 24
FS_FOOTNOTE = 20
# Imports for snorkel analysis and multi-task learning
from snorkel.labeling.model import LabelModel
from snorkel.labeling import filter_unlabeled_dataframe
# Imports for bert language model
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight
from sklearn import metrics
import transformers
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
from torch.utils.data import TensorDataset, DataLoader
from torch.utils.data import RandomSampler, SequentialSampler
import time
import datetime
import random
# Access notebook directory
# Define paths
LOAD_MODEL = '../models/'
LOAD_LFS = '../archive_git_lfs/models/'
LOAD_DATA = '../data/processed/'
SAVE_MODEL = '../models/'
SAVE_DATA = '../data/published/'
SAVE_FIG = '../assets/'
# Define files for training
TRAIN_FILE = 'df_train.pkl' ## update
TRAIN_LFS_FILE = 'lf_train_final.pkl'
# Define files for validation and testing
VALID_FILE = 'df_valid.pkl'
VALID_LFS_FILE = 'lf_valid_final.pkl'
TEST_FILE = 'df_test.pkl'
TEST_LFS_FILE = 'lf_test_final.pkl'
# Define model names to save and load
MODEL_FILE1 = 'bert_tokens_{}_v1.pkl'.format(TRAIN_FILE[:-4])
MODEL_FILE2 = 'bert_tokens_df_valid.pkl'
MODEL_FILE3 = 'bert_tokens_df_test.pkl'
MODEL_FILE4 = 'bert_trainstats_{}_v1.pt'.format(TRAIN_FILE[:-4])
# Define current version of BERT model to load
BERT_PRE = 'model_bert_ts140_dict.pt' ## update
## fine-tuned on Sentiment140
# Define new version of BERT model to save
BERT_POST_DICT = 'model_bert_{}_dict_v1.pt'.format(TRAIN_FILE[:-4])
BERT_POST_FULL = 'model_bert_{}_full_v1.pt'.format(TRAIN_FILE[:-4])
BERT_POST_CHECKPOINT = 'model_bert_{}_v1_'.format(TRAIN_FILE[:-4])
# Define figure names
FIG_FILE1 = 'lm-conf-matrix-{}-v1.png'.format(TRAIN_FILE[:-4])
FIG_FILE2 = 'bert-train-val-loss-{}-v1.png'.format(TRAIN_FILE[:-4])
FIG_FILE3 = 'bert-conf-matrix-{}-v1.png'.format(TRAIN_FILE[:-4])
FIG_FILE4 = 'bert-auc-roc-pre-rec-{}-v1.png'.format(TRAIN_FILE[:-4])
# Load labeled dataset for training
df_train = pd.read_pickle(os.path.join(LOAD_DATA, TRAIN_FILE))
df_train.reset_index(drop=True, inplace=True)
df_valid = pd.read_pickle(os.path.join(LOAD_DATA, VALID_FILE))
df_valid.reset_index(drop=True, inplace=True)
df_test = pd.read_pickle(os.path.join(LOAD_DATA, TEST_FILE))
df_test.reset_index(drop=True, inplace=True)
pd.set_option('display.max_colwidth', 500)
# Load matrix of votes obtained from all the labeling functions
l_train = pd.read_pickle(os.path.join(LOAD_MODEL, TRAIN_LFS_FILE))
l_valid = pd.read_pickle(os.path.join(LOAD_MODEL, VALID_LFS_FILE))
l_test = pd.read_pickle(os.path.join(LOAD_MODEL, TEST_LFS_FILE))
###Output
_____no_output_____
###Markdown
Snorkel Label Model `Majority Vote` is the simplest function Snorkel uses to generate a class label, based on the most voted label from all of the labeling functions. However, in order to take advantage of Snorkel's full functionality, `Label Model` predicts new class labels by learning to estimate the accuracy and correlations of the labeling functions based on their conflicts and overlaps. We will use the `Label Model` function to generate a single confidence-weighted label given a matrix of predictions.
###Code
# Apply LabelModel, no gold labels for training
label_model_train = LabelModel(
cardinality=2, ## number of classes
verbose=True
)
label_model_train.fit(L_train=l_train,
n_epochs=2000,
log_freq=100,
seed=42
)
# Get our predictions for the unlabeled dataset
probs_train = label_model_train.predict(L=l_train)
# Calculate Label Model F1 score using validation data
label_model_f1 = label_model_train.score(
L=l_valid,
Y=df_valid.label,
tie_break_policy='random',
metrics=['f1']
)['f1']
print(f"{'Label Model F1:': <25} {label_model_f1:.2f}")
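# Optional baseline (sketch): compare the Label Model against a simple majority
# vote. Assumes MajorityLabelVoter is importable from the same module as
# LabelModel (its location can differ slightly between Snorkel versions).
from snorkel.labeling.model import MajorityLabelVoter
majority_model = MajorityLabelVoter(cardinality=2)
majority_f1 = majority_model.score(
    L=l_valid,
    Y=df_valid.label,
    tie_break_policy='random',
    metrics=['f1']
)['f1']
print(f"{'Majority Vote F1:': <25} {majority_f1:.2f}")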
# Filter out unlabeled data points
df_train_filtered, probs_train_filtered = filter_unlabeled_dataframe(
X=df_train,
y=probs_train,
L=l_train
)
print('Shape of X_train: {} Shape of X_train_probs: {}'.format(
df_train_filtered.shape,
probs_train_filtered.shape
)
) # Confirm 100% coverage from labeling functions
# Explore conflicts between gold labels and LM labels
df_train_filtered['lm_label'] = probs_train_filtered
df_train_filtered.loc[
df_train_filtered['lm_label'] != df_train_filtered['label'], :].head(50)
# Evaluate performance of Label Model
# Code adapted from Mauro Di Pietro
# >>> https://towardsdatascience.com/text-classification-with-nlp-tf-idf-vs-word2vec-vs-bert-41ff868d1794
classes = np.unique(df_train_filtered.label)
y_test_array = pd.get_dummies(df_train_filtered.label, drop_first=False).values
# Print Accuracy, Precision, Recall
accuracy = metrics.accuracy_score(
df_train_filtered.label, df_train_filtered.lm_label
)
auc = metrics.roc_auc_score(df_train_filtered.label, df_train_filtered.lm_label)
print("Accuracy:", round(accuracy,2))
print("ROC-AUC:", round(auc,2))
print("Detail:")
print(metrics.classification_report(
df_train_filtered.label, df_train_filtered.lm_label
)
)
# Plot confusion matrix
cm = metrics.confusion_matrix(
df_train_filtered.label, df_train_filtered.lm_label
)
fig, ax = plt.subplots(figsize=(7.5, 7.5))
sns.heatmap(cm, annot=True, fmt='d', ax=ax, cmap=plt.cm.Blues,
cbar=False)
ax.set(xlabel="Predicted Label", ylabel="True Label", xticklabels=classes,
yticklabels=classes, title="Confusion Matrix for Label Model")
plt.yticks(rotation=0)
plt.savefig(os.path.join(SAVE_FIG, FIG_FILE1),
bbox_inches='tight'
)
plt.show(); ## F1 of 88%
###Output
Accuracy: 0.85
ROC-AUC: 0.87
Detail:
precision recall f1-score support
0 0.68 0.92 0.78 8377
1 0.96 0.82 0.88 19583
accuracy 0.85 27960
macro avg 0.82 0.87 0.83 27960
weighted avg 0.88 0.85 0.85 27960
###Markdown
Train BERT ModelThe output of the Snorkel `LabelModel` is a set of labels that can be used with most popular supervised learning models such as Logistic Regression, XGBoost, and neural networks.
###Code
# Create BERT tokenizer (original BERT of 110M parameters)
# BERT tokenizer can handle punctuation, simleys, etc.
# Previously replaced mentions and urls with special tokens (#has_url, #has_mention)
bert_token = transformers.BertTokenizerFast.from_pretrained(
'bert-base-uncased',
do_lower_case=True)
# Create helper function for text parsing
def bert_encode(tweet_df, tokenizer):
## add '[CLS]' token as prefix to flag start of text
## append '[SEP]' token to flag end of text
## append '[PAD]' token to fill uneven text
bert_tokens = tokenizer.batch_encode_plus(
tweet_df['tweet'].to_list(),
padding='max_length',
truncation=True,
max_length=30
)
## convert list to tensors
input_word_ids = torch.tensor(bert_tokens['input_ids'])
input_masks = torch.tensor(bert_tokens['attention_mask'])
input_type_ids = torch.tensor(bert_tokens['token_type_ids'])
inputs = {
'input_word_ids': input_word_ids,
'input_masks': input_masks,
'input_type_ids': input_type_ids
}
return inputs
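# Quick sanity check (sketch): encode a single tweet and look at the special
# tokens ([CLS], [SEP], [PAD]) BERT expects. Assumes df_train has the same
# 'tweet' column used by bert_encode above.
sample = bert_token.batch_encode_plus(
    df_train['tweet'].head(1).to_list(),
    padding='max_length',
    truncation=True,
    max_length=30
)
print(bert_token.convert_ids_to_tokens(sample['input_ids'][0]))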
# Apply BERT encoding for training
'''
X_train = bert_encode(df_train_filtered, bert_token)
with open(MODEL_FILE1, 'wb') as file:
pickle.dump(X_train, file)
'''
X_train = pd.read_pickle(os.path.join(LOAD_LFS, MODEL_FILE1))
y_train = df_train.label.values ## use original training labels
y_train_lm = probs_train_filtered ## use noisy labels from Snorkel LM
'''
# Apply BERT encoding for validation
X_valid = bert_encode(df_valid, bert_token)
with open(MODEL_FILE2, 'wb') as file:
pickle.dump(X_valid, file)
# Apply BERT encoding for testing
X_test = bert_encode(df_test, bert_token)
with open(MODEL_FILE3, 'wb') as file:
pickle.dump(X_test, file)
'''
# Load BERT encodes for validation and testing
X_valid = pd.read_pickle(os.path.join(LOAD_MODEL, MODEL_FILE2))
y_valid = df_valid.label.values
X_test = pd.read_pickle(os.path.join(LOAD_MODEL, MODEL_FILE3))
y_test = df_test.label.values
# Define helper functions to calculate accuracy
# Code adapted from Chris McCormick
# >>> https://mccormickml.com/2019/07/22/BERT-fine-tuning/#32-required-formatting
# Calculate accuracy for predictions vs labels
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
# Return time in seconds as string hh:mm:ss
def format_time(elapsed):
## round to the nearest second
elapsed_rounded = int(round((elapsed)))
    ## format as hh:mm:ss
return str(datetime.timedelta(seconds=elapsed_rounded))
# Redefine BERT model for additional fine-tuning
nlp_bert = transformers.BertForSequenceClassification.from_pretrained(
    'bert-base-uncased', ## use 12-layer BERT model, uncased vocab
    num_labels=2, ## binary classification
output_attentions = False, ## model returns attentions weights
output_hidden_states = False, ## model returns all hidden-states
)
nlp_bert.cuda()
# Load saved BERT model
nlp_bert.load_state_dict(torch.load(os.path.join(LOAD_LFS, BERT_PRE)))
# Get all of the model's parameters as a list of tuples
# Code adapted from Chris McCormick
# >>> https://mccormickml.com/2019/07/22/BERT-fine-tuning/#32-required-formatting
params = list(nlp_bert.named_parameters())
print('The BERT model has {:} different named parameters.\n'.format(
len(params))
)
print('==== Embedding Layer ====\n')
for p in params[0:5]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== First Transformer ====\n')
for p in params[5:21]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
print('\n==== Output Layer ====\n')
for p in params[-4:]:
print("{:<55} {:>12}".format(p[0], str(tuple(p[1].size()))))
# Create data loaders for train and test set
# Pass batches of train data and test data as input to the model during training
batch_size = 32 ## define batch size, authors recommend 16 or 32 for fine-tuning
# Wrap tensors for train
X_train_data = TensorDataset(
X_train['input_word_ids'],
X_train['input_masks'],
torch.tensor(y_train) ## use ground truth labels
)
# Make sampler for sampling the data during training
X_train_sampler = RandomSampler(X_train_data)
# Make dataLoader for train set
X_train_dataloader = DataLoader(
X_train_data,
sampler=X_train_sampler,
batch_size=batch_size
)
# Wrap tensors for valid
X_valid_data = TensorDataset(
X_valid['input_word_ids'],
X_valid['input_masks'],
torch.tensor(y_valid)
)
# Make sampler for sampling the data during training
X_valid_sampler = SequentialSampler(X_valid_data)
# Make dataLoader for validation set
X_valid_dataloader = DataLoader(
X_valid_data,
sampler=X_valid_sampler,
batch_size=batch_size
)
# Wrap tensors for test
X_test_data = TensorDataset(
X_test['input_word_ids'],
X_test['input_masks'],
torch.tensor(y_test)
)
# Make sampler for predicting data after training
X_test_sampler = SequentialSampler(X_test_data)
# Make dataLoader for testing set
X_test_dataloader = DataLoader(
X_test_data,
sampler=X_test_sampler,
batch_size=batch_size
)
# Define optimizer
optimizer = transformers.AdamW(nlp_bert.parameters(),
lr=2e-5, ## learning rate
                               eps=1e-8 # small number to prevent division by zero
)
# Check the class weights for imbalanced classes
class_weights = compute_class_weight(
'balanced',
np.unique(y_train), # use ground truth labels
y_train
)
print("Class Weights:", class_weights)
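# For reference, 'balanced' weights follow n_samples / (n_classes * bincount(y));
# a quick manual check against the values printed above (sketch):
print("Manual check:", len(y_train) / (2 * np.bincount(y_train)))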
# Update model architecture to handle imbalanced classes
weights= torch.tensor(class_weights, dtype=torch.float) ## convert to tensor
weights = weights.to('cuda') ## push to GPU
cross_entropy = torch.nn.NLLLoss(weight=weights) ## weighted loss (note: not used below; the BERT model returns its own cross-entropy loss)
# Define learning rate scheduler
epochs = 4 ## define number of training epochs
total_steps = len(X_train_dataloader) * epochs ## batches X epochs
scheduler = transformers.get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=0,
num_training_steps=total_steps
)
# Run training loop
# Code adapted from Chris McCormick
# >>> https://mccormickml.com/2019/07/22/BERT-fine-tuning/#32-required-formatting
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# Store quantities such as training and validation loss,
# validation accuracy, and timings
training_stats = []
# Measure the total training time for the whole run
total_t0 = time.time()
# Loop each epoch...
for epoch_i in range(0, epochs):
## perform one full pass over the training set
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
## measure how long the training epoch takes.
t0 = time.time()
## reset the total loss for this epoch.
total_train_loss = 0
## put the model into training mode
nlp_bert.train()
## loop each batch of training data...
for step, batch in enumerate(X_train_dataloader):
## update progress every 40 batches
if step % 40 == 0 and not step == 0:
## calculate elapsed time in minutes
elapsed = format_time(time.time() - t0)
## report progress
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(
step, len(X_train_dataloader), elapsed))
## unpack this training batch from our dataloader
## NOTE: `model.to(device) is in-place operation
## batch (Tensor) to GPU is not in-place operation
b_input_ids = batch[0].to('cuda') ## input ids
b_input_mask = batch[1].to('cuda') ## attention masks
b_labels = batch[2].to('cuda') ## labels
## Clear previously calculated gradients before performing backward pass
nlp_bert.zero_grad()
## perform a forward pass (evaluate the model on this training batch)
loss_logit_dict = nlp_bert(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels
)
loss = loss_logit_dict['loss']
logits = loss_logit_dict['logits']
## accumulate the training loss over all of the batches
## calculate the average loss at the end
total_train_loss += loss.item()
## perform a backward pass to calculate the gradients
loss.backward()
## clip the norm of the gradients to 1.0
## prevent the "exploding gradients" problem
torch.nn.utils.clip_grad_norm_(nlp_bert.parameters(), 1.0)
## update parameters and take a step using the computed gradient
optimizer.step() ## dictates update rule based on gradients, lr, etc.
## update the learning rate
scheduler.step()
## save checkpoint
'''
print("")
print("Saving Checkpoint...")
torch.save({'epoch': epoch_i,
'model_state_dict': nlp_bert.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss
},
os.path.join(
SAVE_MODEL,
BERT_POST_CHECKPOINT + 'epoch{}_chkpt.pt'.format(epoch_i)
)
)
'''
## calculate the average loss over all of the batches
avg_train_loss = total_train_loss / len(X_train_dataloader)
## measure how long this epoch took
training_time = format_time(time.time() - t0)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epoch took: {:}".format(training_time))
## start validation
print("")
print("Running Validation...")
t0 = time.time()
## put the model in evaluation mode--the dropout layers behave differently
nlp_bert.eval()
## define tracking variables
total_eval_accuracy = 0
total_eval_loss = 0
nb_eval_steps = 0
## evaluate data for one epoch
for batch in X_valid_dataloader:
## unpack this training batch from our dataloader
b_input_ids = batch[0].to('cuda') ## input ids
b_input_mask = batch[1].to('cuda') ## attention masks
b_labels = batch[2].to('cuda') ## labels
## tell pytorch not to bother with constructing the compute graph during
# the forward pass, since this is only needed for backprop (training)
with torch.no_grad():
## check forward pass, calculate logit predictions
loss_logit_dict = nlp_bert(b_input_ids,
token_type_ids=None,
## segment ids for multiple sentences
attention_mask=b_input_mask,
labels=b_labels
)
loss = loss_logit_dict['loss']
logits = loss_logit_dict['logits']
## accumulate the validation loss
total_eval_loss += loss.item()
## move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
## calculate the accuracy for this batch of tweets
total_eval_accuracy += flat_accuracy(logits, label_ids)
## report the final accuracy for this validation run
avg_val_accuracy = total_eval_accuracy / len(X_valid_dataloader)
print(" Accuracy: {0:.2f}".format(avg_val_accuracy))
## calculate the average loss over all of the batches
avg_val_loss = total_eval_loss / len(X_valid_dataloader)
## measure how long the validation run took
validation_time = format_time(time.time() - t0)
print(" Validation Loss: {0:.2f}".format(avg_val_loss))
print(" Validation took: {:}".format(validation_time))
## record all statistics from this epoch
training_stats.append(
{
'epoch': epoch_i + 1,
'Training Loss': avg_train_loss,
'Valid. Loss': avg_val_loss,
'Valid. Accur.': avg_val_accuracy,
'Training Time': training_time,
'Validation Time': validation_time
}
)
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(
format_time(time.time()-total_t0)
)
)
# Display floats with two decimal places
pd.set_option('precision', 2)
# Create a DataFrame from our training statistics
df_stats = pd.DataFrame(data=training_stats)
# Use the 'epoch' as the row index
df_stats = df_stats.set_index('epoch')
'''
# A hack to force the column headers to wrap
df = df.style.set_table_styles(
[dict(selector="th", props=[('max-width', '70px')])]
)
'''
# Save training stats
'''
with open(MODEL_FILE4, 'wb') as file:
pickle.dump(df_stats, file)
'''
# Display the table
df_stats
# Increase the plot size and font size
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(15, 15/1.6))
# Plot the learning curve
plt.plot(df_stats['Training Loss'], 'b-o', label="Training")
plt.plot(df_stats['Valid. Loss'], 'g-o', label="Validation")
# Label the plot
plt.title("Training & Validation Loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.xticks([1, 2, 3, 4])
plt.savefig(os.path.join(SAVE_FIG, FIG_FILE2),
bbox_inches='tight'
)
plt.show();
# Overfitting if training loss << validation loss
# Underfitting if training loss >> validation loss
# Just right if training loss ~ validation loss
# Redefine BERT model for additional fine-tuning with epochs = 2
nlp_bert = transformers.BertForSequenceClassification.from_pretrained(
    'bert-base-uncased', ## use 12-layer BERT model, uncased vocab
    num_labels=2, ## binary classification
output_attentions = False, ## model returns attentions weights
output_hidden_states = False, ## model returns all hidden-states
)
nlp_bert.cuda()
# Load saved BERT model
nlp_bert.load_state_dict(torch.load(os.path.join(LOAD_LFS, BERT_PRE)))
# Re-create the optimizer for the freshly loaded model
# (the previous optimizer still references the old model's parameters)
optimizer = transformers.AdamW(nlp_bert.parameters(),
                               lr=2e-5, ## learning rate
                               eps=1e-8 # small number to prevent division by zero
                               )
# Define learning rate scheduler
epochs = 2 ## define number of training epochs
total_steps = len(X_train_dataloader) * epochs ## batches X epochs
scheduler = transformers.get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,
    num_training_steps=total_steps
)
# Run training loop
# Code adapted from Chris McCormick
# >>> https://mccormickml.com/2019/07/22/BERT-fine-tuning/#32-required-formatting
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
# Store quantities such as training and validation loss,
# validation accuracy, and timings
training_stats = []
# Measure the total training time for the whole run
total_t0 = time.time()
# Loop each epoch...
for epoch_i in range(0, epochs):
## perform one full pass over the training set
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
## measure how long the training epoch takes.
t0 = time.time()
## reset the total loss for this epoch.
total_train_loss = 0
## put the model into training mode
nlp_bert.train()
## loop each batch of training data...
for step, batch in enumerate(X_train_dataloader):
## update progress every 40 batches
if step % 40 == 0 and not step == 0:
## calculate elapsed time in minutes
elapsed = format_time(time.time() - t0)
## report progress
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(
step, len(X_train_dataloader), elapsed))
## unpack this training batch from our dataloader
## NOTE: `model.to(device) is in-place operation
## batch (Tensor) to GPU is not in-place operation
b_input_ids = batch[0].to('cuda') ## input ids
b_input_mask = batch[1].to('cuda') ## attention masks
b_labels = batch[2].to('cuda') ## labels
## Clear previously calculated gradients before performing backward pass
nlp_bert.zero_grad()
## perform a forward pass (evaluate the model on this training batch)
loss_logit_dict = nlp_bert(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels
)
loss = loss_logit_dict['loss']
logits = loss_logit_dict['logits']
## accumulate the training loss over all of the batches
## calculate the average loss at the end
total_train_loss += loss.item()
## perform a backward pass to calculate the gradients
loss.backward()
## clip the norm of the gradients to 1.0
## prevent the "exploding gradients" problem
torch.nn.utils.clip_grad_norm_(nlp_bert.parameters(), 1.0)
## update parameters and take a step using the computed gradient
optimizer.step() ## dictates update rule based on gradients, lr, etc.
## update the learning rate
scheduler.step()
## save checkpoint
print("")
print("Saving Checkpoint...")
torch.save({'epoch': epoch_i,
'model_state_dict': nlp_bert.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss
},
os.path.join(
SAVE_MODEL,
BERT_POST_CHECKPOINT + 'epoch{}_chkpt.pt'.format(epoch_i)
)
)
## calculate the average loss over all of the batches
avg_train_loss = total_train_loss / len(X_train_dataloader)
## measure how long this epoch took
training_time = format_time(time.time() - t0)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epoch took: {:}".format(training_time))
## start validation
print("")
print("Running Validation...")
t0 = time.time()
## put the model in evaluation mode--the dropout layers behave differently
nlp_bert.eval()
## define tracking variables
total_eval_accuracy = 0
total_eval_loss = 0
nb_eval_steps = 0
## evaluate data for one epoch
for batch in X_valid_dataloader:
## unpack this training batch from our dataloader
b_input_ids = batch[0].to('cuda') ## input ids
b_input_mask = batch[1].to('cuda') ## attention masks
b_labels = batch[2].to('cuda') ## labels
## tell pytorch not to bother with constructing the compute graph during
# the forward pass, since this is only needed for backprop (training)
with torch.no_grad():
## check forward pass, calculate logit predictions
loss_logit_dict = nlp_bert(b_input_ids,
token_type_ids=None,
## segment ids for multiple sentences
attention_mask=b_input_mask,
labels=b_labels
)
loss = loss_logit_dict['loss']
logits = loss_logit_dict['logits']
## accumulate the validation loss
total_eval_loss += loss.item()
## move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
## calculate the accuracy for this batch of tweets
total_eval_accuracy += flat_accuracy(logits, label_ids)
## report the final accuracy for this validation run
avg_val_accuracy = total_eval_accuracy / len(X_valid_dataloader)
print(" Accuracy: {0:.2f}".format(avg_val_accuracy))
## calculate the average loss over all of the batches
avg_val_loss = total_eval_loss / len(X_valid_dataloader)
## measure how long the validation run took
validation_time = format_time(time.time() - t0)
print(" Validation Loss: {0:.2f}".format(avg_val_loss))
print(" Validation took: {:}".format(validation_time))
## record all statistics from this epoch
training_stats.append(
{
'epoch': epoch_i + 1,
'Training Loss': avg_train_loss,
'Valid. Loss': avg_val_loss,
'Valid. Accur.': avg_val_accuracy,
'Training Time': training_time,
'Validation Time': validation_time
}
)
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(
format_time(time.time()-total_t0)
)
)
# Save current state with best parameters
'''
torch.save(nlp_bert.state_dict(),
os.path.join(SAVE_MODEL, BERT_POST_DICT)
)
# Save full model
torch.save(nlp_bert, os.path.join(SAVE_MODEL, BERT_POST_FULL))
'''
# Re-create the BERT model and load the fine-tuned weights for evaluation
nlp_bert = transformers.BertForSequenceClassification.from_pretrained(
    'bert-base-uncased', ## use 12-layer BERT model, uncased vocab
    num_labels=2, ## binary classification
output_attentions = False, ## model returns attentions weights
output_hidden_states = False, ## model returns all hidden-states
)
nlp_bert.cuda()
# Load saved BERT model
nlp_bert.load_state_dict(torch.load(os.path.join(LOAD_LFS, BERT_POST_DICT)))
# Prediction on test set
# Code adapted from Chris McCormick
# >>> https://mccormickml.com/2019/07/22/BERT-fine-tuning/#32-required-formatting
print('Predicting labels for {:,} test sentences...'.format(
len(X_test['input_word_ids'])))
# Put model in evaluation mode
nlp_bert.eval()
# Tracking variables
predictions , true_labels = [], []
# Predict
for batch in X_test_dataloader:
# Add batch to GPU
batch = tuple(t.to('cuda') for t in batch)
# Unpack the inputs from our dataloader
b_input_ids, b_input_mask, b_labels = batch
# Telling the model not to compute or store gradients, saving memory and
# speeding up prediction
with torch.no_grad():
# Forward pass, calculate logit predictions
outputs = nlp_bert(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask)
logits = outputs[0]
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Store predictions and true labels
predictions.append(logits)
true_labels.append(label_ids)
print(' DONE.')
# Flatten arrays into single list
y_test_true = [item for sublist in true_labels for item in sublist]
y_test_preds = [np.argmax(item) for sublist in
predictions for item in sublist]
# Convert logit to odds to probability
preds_prob = np.array(
[[np.exp(a) / (1 + np.exp(a)), np.exp(b) / (1 + np.exp(b))]
for sublist in predictions for a,b in sublist]
)
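# Note: the per-logit sigmoid above does not make the two class probabilities
# sum to 1. An alternative sketch that converts the logits with a softmax:
logits_all = np.concatenate(predictions, axis=0)
exp_logits = np.exp(logits_all - logits_all.max(axis=1, keepdims=True))
softmax_prob = exp_logits / exp_logits.sum(axis=1, keepdims=True)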
# Evaluate performance of BERT
# Code adapted from Mauro Di Pietro
# >>> https://towardsdatascience.com/text-classification-with-nlp-tf-idf-vs-word2vec-vs-bert-41ff868d1794
classes = np.unique(y_test_true)
y_test_array = pd.get_dummies(y_test_true, drop_first=False).values
# Print Accuracy, Precision, Recall
accuracy = metrics.accuracy_score(y_test_true, y_test_preds)
auc = metrics.roc_auc_score(y_test_true, y_test_preds)
print("Accuracy:", round(accuracy,2))
print("ROC-AUC:", round(auc,2))
print("Detail:")
print(metrics.classification_report(y_test_true, y_test_preds))
# Plot confusion matrix
cm = metrics.confusion_matrix(y_test_true, y_test_preds)
fig, ax = plt.subplots(figsize=(7.5, 7.5))
sns.heatmap(cm, annot=True, fmt='d', ax=ax, cmap=plt.cm.Blues,
cbar=False)
ax.set(xlabel="Predicted Label", ylabel="True Label", xticklabels=classes,
yticklabels=classes, title="Confusion Matrix for BERT")
plt.yticks(rotation=0)
plt.savefig(os.path.join(SAVE_FIG, FIG_FILE3),
bbox_inches='tight'
)
plt.show(); ## F1 score of 93%
# Evaluate performance of BERT model
# Code adapted from Mauro Di Pietro
# >>> https://towardsdatascience.com/text-classification-with-nlp-tf-idf-vs-word2vec-vs-bert-41ff868d1794
# Plot ROC
classes_name = ['NO_ABUSE', 'ABUSE ']
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(15, 15/1.6))
fpr, tpr, thresholds = metrics.roc_curve(
y_test_array[:,0], preds_prob[:,0]
)
ax1.plot(fpr, tpr, color=COLORS[0], lw=LINEWIDTH, alpha=1.0,
label='BERT w/o Snorkel (area={0:0.2f})'.format(
             metrics.auc(fpr, tpr)
)
)
ax1.plot([0,1], [0,1], color=RED, linewidth=LINEWIDTH, linestyle='--')
ax1.set_adjustable('box')
ax1.set(xlim=[-0.05,1.05], ylim=[0.0,1.05], aspect=1,
title="Receiver Operating Characteristic"
)
ax1.set_xlabel('False Positive Rate', color=DARK_GREY, fontsize=20,
alpha=0.6, labelpad=16
)
ax1.set_ylabel('True Positive Rate (Recall)', color=DARK_GREY, fontsize=20,
alpha=0.6, labelpad=16
)
ax1.tick_params(axis='both', labelsize=16, length=8, colors=GREY)
ax1.legend(loc="lower right", frameon=False, fontsize='small')
ax1.grid(b=True, color=GREY, alpha=0.1, linewidth=3)
for spine in ['top', 'right', 'left', 'bottom']:
ax1.spines[spine].set_visible(False)
# Plot precision-recall curve
precision, recall, thresholds = metrics.precision_recall_curve(
y_test_array[:,0], preds_prob[:,0])
ax2.plot(recall, precision, color=COLORS[0], lw=LINEWIDTH, alpha=1.0,
label='BERT w/o Snorkel (area={0:0.2f})'.format(
metrics.auc(recall, precision)
)
)
ax2.set(xlim=[-0.05,1.05], ylim=[0.0,1.05], aspect=1,
title="Precision-Recall Curve"
)
ax2.set_xlabel('True Positive Rate (Recall)', color=DARK_GREY, fontsize=20,
alpha=0.6, labelpad=16
)
ax2.set_ylabel('Precision', color=DARK_GREY, fontsize=20,
alpha=0.6, labelpad=16
)
ax2.tick_params(axis='both', labelsize=16, length=8, colors=GREY)
ax2.legend(loc="lower right", frameon=False, fontsize='small')
ax2.set_adjustable('box')
ax2.grid(b=True, color=GREY, alpha=0.1, linewidth=3)
for spine in ['top', 'right', 'left', 'bottom']:
ax2.spines[spine].set_visible(False)
plt.savefig(os.path.join(SAVE_FIG, FIG_FILE4),
bbox_inches='tight',
facecolor=fig.get_facecolor(),
edgecolor='none'
)
plt.show();
###Output
_____no_output_____ |
jupyter_notebooks/2.2_stats_b_cleaning.ipynb | ###Markdown
Batter Stats from Data Collection Cleaning---*By Ihza Gonzales*This notebook aims to clean the data that was collected in a previous notebook. Cleaning includes merging the different seasons of each player, checking for null values, and changing the date as the index. Import Libraries---
###Code
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
###Output
_____no_output_____
###Markdown
Functions Implemented---
###Code
def merge_years (mlbid, first, last):
"""
Function reads data and merges the data of player for all years
Saves merged dataframe
"""
base_path = '../data/og_players_bat/'
    #This string will be used to specify the player
player_name = first + '-' + last + '-' + str(mlbid) + '-2021'
#Full path to file
file_path = base_path + player_name + '.csv'
try:
df = pd.read_csv(file_path)
years = ['2018', '2019', '2020']
for year in years:
try:
#This string will be used to specifiy the player
player_name = first + '-' + last + '-' + str(mlbid) + '-' + year
#Full path to file
file_path = base_path + player_name + '.csv'
df_2 = pd.read_csv(file_path)
df = df.append(df_2, ignore_index = True)
except:
pass
df.to_csv(f'../data/clean_players_bat/{first}-{last}-{mlbid}.csv', index = False)
except FileNotFoundError:
pass
def clean_data(mlbid, first, last):
"""
Function to read in and clean data.
Since all the csv files are similar, this puts
all files in similar formats.
Deletes the "unnamed: 0" column and removes rows with the montly totals.
Save new file with the clean data.
"""
base_path = '../data/clean_players_bat/'
    #This string will be used to specify the player
player_name = first + '-' + last + '-' + str(mlbid)
#Full path to file
file_path = base_path + player_name + '.csv'
try:
#read in csv file of stats
df = pd.read_csv(file_path)
#drop unnamed: 0 column
df.drop(columns = ['Unnamed: 0'], inplace = True)
#check for null values
total_nulls = df.isnull().sum().sum()
if total_nulls == 0:
#Only want rows with dates not the total of each month
months = ['March', 'April', 'May', 'June', 'July', 'August', 'September', 'October']
for month in months:
df = df[df['date'] != month]
df.reset_index(drop=True, inplace = True)
#Sort rows by date then set it as index
df["date"] = pd.to_datetime(df["date"])
df = df.sort_values(by="date")
#Save Clean Dataframe
df.to_csv(f'../data/clean_players_bat/{first}-{last}-{mlbid}.csv')
else:
print(f'{first} {last} has null values')
except FileNotFoundError:
pass
def convert_date (mlbid, first, last):
"""
Function converts date to datetime and sets it as index.
Returns the updated csv file
"""
base_path = '../data/clean_players_bat/'
    #This string will be used to specify the player
player_name = first + '-' + last + '-' + str(mlbid)
#Full path to file
file_path = base_path + player_name + '.csv'
try:
#read in csv file of stats
df = pd.read_csv(file_path)
#drop unnamed: 0 column
df.drop(columns = ['Unnamed: 0'], inplace = True)
#Set data as index and remove date column
df.set_index(pd.DatetimeIndex(df['date']), inplace=True)
df.drop(columns = ['date'], inplace = True)
#Save Clean Dataframe
df.to_csv(f'../data/clean_players_bat/{first}-{last}-{mlbid}.csv', index_label = False)
except (FileNotFoundError, KeyError):
print(f'{first} {last}')
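# Example of processing a single player outside the batch loop below
# (sketch; the id and name here are hypothetical placeholders):
# merge_years(123456, 'first', 'last')
# clean_data(123456, 'first', 'last')
# convert_date(123456, 'first', 'last')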
###Output
_____no_output_____
###Markdown
Import the File with Active Batters---
###Code
players = pd.read_csv('../data/mlb_players_bat.csv').drop('Unnamed: 0', axis = 1)
players.head()
###Output
_____no_output_____
###Markdown
Merging, Cleaning, and Converting All Player Files---
###Code
for index, row in players.iterrows():
mlbid = row['MLBID']
first = row['FIRSTNAME']
last = row['LASTNAME']
merge_years(mlbid, first, last)
clean_data(mlbid, first, last)
convert_date(mlbid, first, last)
print ('Finished')
# Copied from https://stackoverflow.com/questions/16476924/how-to-iterate-over-rows-in-a-dataframe-in-pandas
###Output
Finished
|
python3/notebooks/television/keras-linear-reg-functional-api.ipynb | ###Markdown
http://www.stat.yale.edu/Courses/1997-98/101/linreg.htm
###Code
import pandas as pd
import numpy as np
import keras
from matplotlib import pyplot as plt
from keras.layers import Input, Dense
from keras.models import Model
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer
%matplotlib inline
df = pd.read_csv("data.csv")
df.head()
df.columns,df.dtypes
# df["People per Television"] = df["People per Television"].as_numeric
df["People per Television"] = pd.to_numeric(df["People per Television"],errors='coerce')
df = df.dropna()
df.head()
# x = ppl/television
# y = ppl/doctor
x = df["People per Television"].values.reshape(-1,1).astype(np.float64)
y = df["People per Physician"].values.reshape(-1,1).astype(np.float64)
x.shape,y.shape
sc = StandardScaler()
x_ = sc.fit_transform(x)
y_ = sc.fit_transform(y)
inputs = Input(shape=(1,))
preds = Dense(1,activation='linear')(inputs)
model = Model(inputs=inputs,outputs=preds)
sgd=keras.optimizers.SGD()
model.compile(optimizer=sgd ,loss='mse')
model.fit(x_,y_, batch_size=1, verbose=1, epochs=10, shuffle=False)
plt.scatter(x_,y_,color='black')
plt.plot(x_,model.predict(x_), color='blue', linewidth=3)
# min-max 0,1
sc = MinMaxScaler(feature_range=(0,1))
x_ = sc.fit_transform(x)
y_ = sc.fit_transform(y)
inputs = Input(shape=(1,))
preds = Dense(1,activation='linear')(inputs)
model = Model(inputs=inputs,outputs=preds)
sgd=keras.optimizers.SGD()
model.compile(optimizer=sgd ,loss='mse')
model.fit(x_,y_, batch_size=1, epochs=10, verbose=1, shuffle=False)
plt.scatter(x_,y_,color='black')
plt.plot(x_,model.predict(x_), color='blue', linewidth=3)
# min-max -1,1
sc = MinMaxScaler(feature_range=(-1,1))
x_ = sc.fit_transform(x)
y_ = sc.fit_transform(y)
inputs = Input(shape=(1,))
preds = Dense(1,activation='linear')(inputs)
model = Model(inputs=inputs,outputs=preds)
sgd=keras.optimizers.SGD()
model.compile(optimizer=sgd ,loss='mse')
model.fit(x_,y_, batch_size=1, verbose=1, epochs=10, shuffle=False)
plt.scatter(x_,y_,color='black')
plt.plot(x_,model.predict(x_), color='blue', linewidth=3)
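# The single Dense unit is just y ~ w*x + b; a quick look at the fitted
# slope and intercept (sketch):
w, b = model.get_weights()
print("slope:", w.ravel()[0], "intercept:", b[0])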
###Output
Epoch 1/10
38/38 [==============================] - 0s - loss: 0.1739
Epoch 2/10
38/38 [==============================] - 0s - loss: 0.0847
Epoch 3/10
38/38 [==============================] - 0s - loss: 0.0775
Epoch 4/10
38/38 [==============================] - 0s - loss: 0.0762
Epoch 5/10
38/38 [==============================] - 0s - loss: 0.0755
Epoch 6/10
38/38 [==============================] - 0s - loss: 0.0750
Epoch 7/10
38/38 [==============================] - 0s - loss: 0.0745
Epoch 8/10
38/38 [==============================] - 0s - loss: 0.0741
Epoch 9/10
38/38 [==============================] - 0s - loss: 0.0738
Epoch 10/10
38/38 [==============================] - 0s - loss: 0.0736
|
notebooks/review_of_fundamentals_a.ipynb | ###Markdown
Table of Contents1 Review of fundamentals1.1 Simple operations1.2 Conditionals1.3 Lists Review of fundamentalsIn this notebook, you will guess the output of simple Python statements. You should be able to correctly predict most (if not all) of them. If you get something wrong, figure out why! Simple operations
###Code
3 + 10
3 = 10
3 == 10
3 ** 10
'3 + 10'
3 * '10'
a*10
'a' * 10
int('3') + int('10')
int('hello world')
10 / 5
10 / 4
float(10 / 4)
float(10)/4
type("True")
type(True)
###Output
_____no_output_____
###Markdown
Conditionals
###Code
a=3
if (a==3):
print "it's a three!"
a=3
if a==3:
print "it's a four!"
a=3
if a=4:
print "it's a four!"
a=3
if a<10:
print "we're here"
elif a<100:
print "and also here"
a=3
if a<10:
print "we're here"
if a<100:
print "and also here"
a = "True"
if a:
print "we're in the 'if'"
else:
print "we're in the else"
a = "False"
if a:
print "we're in the 'if'"
else:
print "we're in the 'else'"
###Output
we're in the 'if'
###Markdown
If you were surprised by that, think about the difference between `False` and `"False"`
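A quick check makes the point concrete, since any non-empty string is truthy:
bool("False")   # -> True, so the string still takes the if-branch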
###Code
a = 5
b = 10
if a and b:
print "a is", a
print "b is", b
c = 20
if c or d:
print "c is", c
print "d is", d
c = 20
if c and d:
print "c is", c
print "d is", d
###Output
_____no_output_____
###Markdown
Lists
###Code
animals= ['dog', 'cat', 'panda']
if panda in animals:
print "a hit!"
animals= ['dog', 'cat', 'panda']
if "panda" or "giraffe" in animals:
print "a hit!"
if ["dog", "cat"] in animals:
print "we're here"
some_nums = range(1,10)
print some_nums
print some_nums[0]
animals= ['dog', 'cat', 'panda']
print animals[-1]
animals= ['dog', 'cat', 'panda']
print animals.index('cat')
animals= ['dog', 'cat', 'panda']
more_animals = animals+['giraffe']
print more_animals
animals= ['dog', 'cat', 'panda']
more_animals = animals.append('giraffe')
print more_animals
###Output
None
###Markdown
The above is a tricky one! The issue is that append() does not return a value. It simply appends
###Code
animals= ['dog', 'cat', 'panda']
animals.append("giraffe")
print animals
animals= ['dog', 'cat', 'panda']
for num,animal in enumerate(animals):
print "Number", num+1, "is", animals
animals= ['dog', 'cat', 'panda']
for num,animal in enumerate(animals):
print "Number", num, "is", animal
print "\nWe have", len(animals), "animals in our list."
animals= ['dog', 'cat', 'panda']
while animals:
print animals.pop()
print "\nWe have", len(animals), "animals in our list."
###Output
panda
cat
dog
We have 0 animals in our list.
|
notebooks/dev_summit_2019/Step 5 - Spatially Assign Work.ipynb | ###Markdown
Spatially Assign Work In this example, assignments will be assigned to specific workers based on the city district they fall in. A layer in ArcGIS Online representing the city districts in Palm Springs will be used.* Note: This example requires having Arcpy or Shapely installed in the Python environment. Import ArcGIS API for PythonImport the `arcgis` library and some modules within it.
###Code
from datetime import datetime
from arcgis.gis import GIS
from arcgis.geometry import Geometry
from arcgis.mapping import WebMap
from arcgis.apps import workforce
###Output
_____no_output_____
###Markdown
Connect to organization and Get the ProjectLet's connect to ArcGIS Online and get the Project with assignments.
###Code
gis = GIS("https://arcgis.com", "workforce_scripts")
item = gis.content.get("c765482bd0b9479b9104368da54df90d")
project = workforce.Project(item)
###Output
_____no_output_____
###Markdown
Get Layer of City DistrictsLet's get the layer representing city districts and display it.
###Code
districts_layer = gis.content.get("8a79535e0dc04410b5564c0e45428a2c").layers[0]
districts_map = gis.map("Palm Springs, CA", zoomlevel=10)
districts_map.add_layer(districts_layer)
districts_map
###Output
_____no_output_____
###Markdown
Add Assignments to the Map
###Code
districts_map.add_layer(project.assignments_layer)
###Output
_____no_output_____
###Markdown
Create a spatially enabled dataframe of the districts
###Code
districts_df = districts_layer.query(as_df=True)
###Output
_____no_output_____
###Markdown
Get all of the unassigned assignments
###Code
assignments = project.assignments.search("status=0")
###Output
_____no_output_____
###Markdown
Assign Assignments Based on Which District They Intersect Let's fetch the districts layer and query to get all of the districts. Then, for each unassigned assignment intersect the assignment with all districts to determine which district it falls in. Assignments in district 10 should be assigned to James. Assignments in district 9 should be assigned to Aaron. Finally update all of the assignments using "batch_update".
###Code
aaron = project.workers.get(user_id="aaron_nitro")
james = project.workers.get(user_id="james_Nitro")
for assignment in assignments:
contains = districts_df["SHAPE"].geom.contains(Geometry(assignment.geometry))
containers = districts_df[contains]
if not containers.empty:
district = containers['ID'].iloc[0]
if district == 10:
assignment.worker = james
assignment.status = "assigned"
assignment.assigned_date = datetime.utcnow()
elif district == 9:
assignment.worker = aaron
assignment.status = "assigned"
assignment.assigned_date = datetime.utcnow()
assignments = project.assignments.batch_update(assignments)
###Output
_____no_output_____
###Markdown
Verify Assignments are Assigned
###Code
webmap = gis.map("Palm Springs", zoomlevel=11)
webmap.add_layer(project.assignments_layer)
webmap
###Output
_____no_output_____ |
Projects/2018-World-cup-predictions pro.ipynb | ###Markdown
Data exploration & visualization
###Code
# Summarize each feature: dtype and number of null values
def data_info(data):
total_null = data.isnull().sum()
tt=pd.DataFrame(total_null,columns=["n_null"])
types = []
for col in data.columns:
dtype = str(data[col].dtype)
types.append(dtype)
tt['Types'] = types
    if not data.isna().sum().any():
print("data has no null values")
else:
print("data has null values")
return(np.transpose(tt.sort_values(by="n_null",ascending=False)))
data_info(World_cup)
data_info(results)
results.describe().transpose()
results.describe(include='O').transpose()
# Determine the winner of each match in the historical results
winner=[]
for i in results.index:
if results["home_score"][i]>results["away_score"][i]:
winner.append(results["home_team"][i])
elif results["home_score"][i]==results["away_score"][i]:
winner.append("draw")
else:
winner.append(results["away_team"][i])
results["winner"]=winner
#adding goal difference col
results["goal_difference"]= np.absolute(results["home_score"]- results["away_score"])
results.tail()
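# Optional vectorized equivalent of the winner loop above (a sketch only; the result is kept
# in a standalone array and is not used further below).
winner_conditions = [results["home_score"] > results["away_score"],
                     results["home_score"] == results["away_score"]]
winner_choices = [results["home_team"], "draw"]
winner_vectorized = np.select(winner_conditions, winner_choices, default=results["away_team"])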
#all 32 country that will participate
World_cup_teams= World_cup.Team.unique()
World_cup_teams
World_cup_home=results[results["home_team"].isin(World_cup_teams)]
World_cup_away=results[results["away_team"].isin(World_cup_teams)]
df_teams=pd.concat((World_cup_home,World_cup_away))
df_teams.drop_duplicates()
year=[]
for date in df_teams["date"]:
year.append(int(date[:4]))
df_teams["year"]=year
df_teams_1930=df_teams[df_teams["year"] >= 1930]
df_teams_1930
df_teams_1930=df_teams_1930.drop(["date","tournament","city","country","year","home_score","away_score","goal_difference"],axis=1)
df_teams_1930
df_teams_1930=df_teams_1930.reset_index(drop=True)
df_teams_1930.loc[df_teams_1930.home_team == df_teams_1930.winner ,"winner"]=1
df_teams_1930.loc[df_teams_1930.away_team == df_teams_1930.winner ,"winner"]=2
df_teams_1930.loc[df_teams_1930.winner == "draw" ,"winner"]=0
df_teams_1930
final=pd.get_dummies(df_teams_1930,prefix=["home_team","away_team"],columns=["home_team","away_team"])
x=final.drop("winner",axis=1)
y=final["winner"]
y=y.astype('int')
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3)
y_train
logreg=LogisticRegression()
logreg.fit(x_train,y_train.values.ravel())
print(logreg.score(x_train,y_train))
print(logreg.score(x_test,y_test))
decision_tree = DecisionTreeClassifier()
decision_tree.fit(x_train, y_train.values.ravel())
Y_pred = decision_tree.predict(x_test)
acc_decision_tree = round(decision_tree.score(x_train, y_train) * 100, 2)
acc_decision_tree
fifa_rankings=pd.read_csv("../datasets/FIFA-2018-World-cup/fifa_rankings.csv")
fixtures=pd.read_csv("../datasets/FIFA-2018-World-cup/fixtures.csv")
pred_set=[]
fixtures.insert(1,"home_ranking",fixtures["Home Team"].map(fifa_rankings.set_index("Team")["Position"]))
fixtures.insert(2,"away_ranking",fixtures["Away Team"].map(fifa_rankings.set_index("Team")["Position"]))
fixtures
for index,row in fixtures.iterrows():
if row["home_ranking"]>row["away_ranking"]:
pred_set.append({"Home Team":row["Home Team"],"Away Team":row["Away Team"],"winner":None})
else:
pred_set.append({"Home Team":row["Away Team"],"Away Team":row["Home Team"],"winner":None})
pred_set_df=pd.DataFrame(pred_set)
#used later
backup_pred_set=pred_set_df
# use the same dummy-column prefixes as the training data ("home_team"/"away_team")
# so that pred_set_df lines up with the columns of `final`
pred_set_df=pd.get_dummies(pred_set_df,prefix=["home_team","away_team"],columns=["Home Team","Away Team"])
final.shape
#teams not participate in world cup
missing_cols=set(final.columns) -set(pred_set_df.columns)
for c in missing_cols:
pred_set_df[c]=0
pred_set_df= pred_set_df[final.columns]
#pred_set_df
pred_set_df=pred_set_df.drop("winner",axis=1)
#group matches
predictions = logreg.predict(pred_set_df)
for i in range(fixtures.shape[0]):
print(backup_pred_set.iloc[i, 1] + " and " + backup_pred_set.iloc[i, 0])
    # label encoding from training: 1 = home team wins, 2 = away team wins, 0 = draw
    if predictions[i] == 2:
        print("Winner: " + backup_pred_set.iloc[i, 1])
    elif predictions[i] == 1:
        print("Winner: " + backup_pred_set.iloc[i, 0])
    elif predictions[i] == 0:
        print("Draw")
    # class probabilities follow logreg.classes_ = [0 (draw), 1 (home win), 2 (away win)]
    ''' print('Probability of ' + backup_pred_set.iloc[i, 1] + ' winning: ', '%.3f'%(logreg.predict_proba(pred_set_df)[i][2]))
    print('Probability of Draw: ', '%.3f'%(logreg.predict_proba(pred_set_df)[i][0]))
    print('Probability of ' + backup_pred_set.iloc[i, 0] + ' winning: ', '%.3f'%(logreg.predict_proba(pred_set_df)[i][1]))
    print("") '''
###Output
Russia and Saudi Arabia
Draw
Uruguay and Egypt
Draw
Iran and Morocco
Draw
Portugal and Spain
Draw
France and Australia
Draw
Argentina and Iceland
Draw
Peru and Denmark
Draw
Croatia and Nigeria
Draw
Costa Rica and Serbia
Draw
Germany and Mexico
Draw
Brazil and Switzerland
Draw
Sweden and Korea Republic
Draw
Belgium and Panama
Draw
England and Tunisia
Draw
Colombia and Japan
Draw
Poland and Senegal
Draw
Egypt and Russia
Draw
Portugal and Morocco
Draw
Uruguay and Saudi Arabia
Draw
Spain and Iran
Draw
Denmark and Australia
Draw
France and Peru
Draw
Argentina and Croatia
Draw
Brazil and Costa Rica
Draw
Iceland and Nigeria
Draw
Switzerland and Serbia
Draw
Belgium and Tunisia
Draw
Mexico and Korea Republic
Draw
Germany and Sweden
Draw
England and Panama
Draw
Senegal and Japan
Draw
Poland and Colombia
Draw
Uruguay and Russia
Draw
Egypt and Saudi Arabia
Draw
Portugal and Iran
Draw
Spain and Morocco
Draw
France and Denmark
Draw
Peru and Australia
Draw
Argentina and Nigeria
Draw
Croatia and Iceland
Draw
Mexico and Sweden
Draw
Germany and Korea Republic
Draw
Brazil and Serbia
Draw
Switzerland and Costa Rica
Draw
Poland and Japan
Draw
Colombia and Senegal
Draw
Tunisia and Panama
Draw
Belgium and England
Draw
Winner Group A and Runner-up Group B
Draw
Winner Group C and Runner-up Group D
Draw
Winner Group B and Runner-up Group A
Draw
Winner Group D and Runner-up Group C
Draw
Winner Group E and Runner-up Group F
Draw
Winner Group G and Runner-up Group H
Draw
Winner Group F and Runner-up Group E
Draw
Winner Group H and Runner-up Group G
Draw
To be announced and To be announced
Draw
To be announced and To be announced
Draw
To be announced and To be announced
Draw
To be announced and To be announced
Draw
To be announced and To be announced
Draw
To be announced and To be announced
Draw
To be announced and To be announced
Draw
To be announced and To be announced
Draw
|
ScientificComputingRobertJohansson/Lecture-5-Sympy.ipynb | ###Markdown
Sympy - Symbolic algebra in Python J.R. Johansson (jrjohansson at gmail.com)The latest version of this [IPython notebook](http://ipython.org/notebook.html) lecture is available at [http://github.com/jrjohansson/scientific-python-lectures](http://github.com/jrjohansson/scientific-python-lectures).The other notebooks in this lecture series are indexed at [http://jrjohansson.github.io](http://jrjohansson.github.io).
###Code
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Introduction There are two notable Computer Algebra Systems (CAS) for Python:* [SymPy](http://sympy.org/en/index.html) - A python module that can be used in any Python program, or in an IPython session, that provides powerful CAS features. * [Sage](http://www.sagemath.org/) - Sage is a full-featured and very powerful CAS enviroment that aims to provide an open source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language.Sage is in some aspects more powerful than SymPy, but both offer very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the IPython notebook. In this lecture we will therefore look at how to use SymPy with IPython notebooks. If you are interested in an open source CAS environment I also recommend to read more about Sage.To get started using SymPy in a Python program or notebook, import the module `sympy`:
###Code
from sympy import *
###Output
_____no_output_____
###Markdown
To get nice-looking $\LaTeX$ formatted output run:
###Code
init_printing()
# or with older versions of sympy/ipython, load the IPython extension
#%load_ext sympy.interactive.ipythonprinting
# or
#%load_ext sympyprinting
###Output
_____no_output_____
###Markdown
Symbolic variables In SymPy we need to create symbols for the variables we want to work with. We can create a new symbol using the `Symbol` class:
###Code
x = Symbol('x')
(pi + x)**2
# alternative way of defining symbols
a, b, c = symbols("a, b, c")
type(a)
###Output
_____no_output_____
###Markdown
We can add assumptions to symbols when we create them:
###Code
x = Symbol('x', real=True)
x.is_imaginary
x = Symbol('x', positive=True)
x > 0
###Output
_____no_output_____
###Markdown
Complex numbers The imaginary unit is denoted `I` in Sympy.
###Code
1+1*I
I**2
(x * I + 1)**2
###Output
_____no_output_____
###Markdown
Rational numbers There are three different numerical types in SymPy: `Real`, `Rational`, `Integer`:
###Code
r1 = Rational(4,5)
r2 = Rational(5,4)
r1
r1+r2
r1/r2
###Output
_____no_output_____
###Markdown
Numerical evaluation SymPy uses a library for arbitrary precision as its numerical backend, and has predefined SymPy expressions for a number of mathematical constants, such as: `pi`, `e`, `oo` for infinity. To evaluate an expression numerically we can use the `evalf` function (or `N`). It takes an argument `n` which specifies the number of significant digits.
###Code
pi.evalf(n=50)
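# The other predefined constants mentioned above can be evaluated the same way, e.g.
# E (the base of the natural logarithm); `oo` represents infinity.
E.evalf(n=50)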
y = (x + pi)**2
N(y, 5) # same as evalf
###Output
_____no_output_____
###Markdown
When we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy we do that using the `subs` function:
###Code
y.subs(x, 1.5)
N(y.subs(x, 1.5))
###Output
_____no_output_____
###Markdown
The `subs` function can of course also be used to substitute Symbols and expressions:
###Code
y.subs(x, a+pi)
###Output
_____no_output_____
###Markdown
We can also combine numerical evaluation of expressions with NumPy arrays:
###Code
import numpy
x_vec = numpy.arange(0, 10, 0.1)
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
fig, ax = plt.subplots()
ax.plot(x_vec, y_vec);
###Output
_____no_output_____
###Markdown
However, this kind of numerical evaluation can be very slow, and there is a much more efficient way to do it: Use the function `lambdify` to "compile" a Sympy expression into a function that is much more efficient to evaluate numerically:
###Code
f = lambdify([x], (x + pi)**2, 'numpy') # the first argument is a list of variables that
# f will be a function of: in this case only x -> f(x)
y_vec = f(x_vec) # now we can directly pass a numpy array and f(x) is efficiently evaluated
###Output
_____no_output_____
###Markdown
The speedup when using "lambdified" functions instead of direct numerical evaluation can be significant, often several orders of magnitude. Even in this simple example we get a significant speed up:
###Code
%%timeit
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
%%timeit
y_vec = f(x_vec)
###Output
The slowest run took 22.35 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 3.02 µs per loop
###Markdown
Algebraic manipulations One of the main uses of a CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for doing these basic operations in SymPy are demonstrated in this section. Expand and factor The first step in an algebraic manipulation is often to expand a product:
###Code
(x+1)*(x+2)*(x+3)
expand((x+1)*(x+2)*(x+3))
###Output
_____no_output_____
###Markdown
The `expand` function takes a number of keyword arguments with which we can tell the function what kind of expansions we want to have performed. For example, to expand trigonometric expressions, use the `trig=True` keyword argument:
###Code
sin(a+b)
expand(sin(a+b), trig=True)
###Output
_____no_output_____
###Markdown
See `help(expand)` for a detailed explanation of the various types of expansions the `expand` function can perform. The opposite of a product expansion is of course factoring. To factor an expression in SymPy, use the `factor` function:
###Code
factor(x**3 + 6 * x**2 + 11*x + 6)
###Output
_____no_output_____
###Markdown
Simplify The `simplify` function tries to simplify an expression into a nice looking expression, using various techniques. More specific alternatives to the `simplify` function also exist: `trigsimp`, `powsimp`, `logcombine`, etc. The basic usage of these functions is as follows:
###Code
# simplify expands a product
simplify((x+1)*(x+2)*(x+3))
# simplify uses trigonometric identities
simplify(sin(a)**2 + cos(a)**2)
simplify(cos(x)/sin(x))
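# Quick sketches of the more specific helpers mentioned in the text:
trigsimp(2 * sin(a) * cos(a))   # uses trigonometric identities -> sin(2*a)
powsimp(x**a * x**b)            # combines powers with equal bases -> x**(a + b)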
###Output
_____no_output_____
###Markdown
apart and together To manipulate symbolic expressions of fractions, we can use the `apart` and `together` functions:
###Code
f1 = 1/((a+1)*(a+2))
f1
apart(f1)
f2 = 1/(a+2) + 1/(a+3)
f2
together(f2)
###Output
_____no_output_____
###Markdown
Simplify usually combines fractions but does not factor:
###Code
simplify(f2)
###Output
_____no_output_____
###Markdown
Calculus In addition to algebraic manipulations, the other main use of a CAS is to do calculus, like derivatives and integrals of algebraic expressions. Differentiation Differentiation is usually simple. Use the `diff` function. The first argument is the expression to take the derivative of, and the second argument is the symbol by which to take the derivative:
###Code
y
diff(y**2, x)
###Output
_____no_output_____
###Markdown
For higher order derivatives we can do:
###Code
diff(y**2, x, x)
diff(y**2, x, 2) # same as above
###Output
_____no_output_____
###Markdown
To calculate the derivative of a multivariate expression, we can do:
###Code
x, y, z = symbols("x,y,z")
f = sin(x*y) + cos(y*z)
###Output
_____no_output_____
###Markdown
$\frac{d^3f}{dxdy^2}$
###Code
diff(f, x, 1, y, 2)
###Output
_____no_output_____
###Markdown
Integration Integration is done in a similar fashion:
###Code
f
integrate(f, x)
###Output
_____no_output_____
###Markdown
By providing limits for the integration variable we can evaluate definite integrals:
###Code
integrate(f, (x, -1, 1))
###Output
_____no_output_____
###Markdown
and also improper integrals
###Code
integrate(exp(-x**2), (x, -oo, oo))
###Output
_____no_output_____
###Markdown
Remember, `oo` is the SymPy notation for infinity. Sums and products We can evaluate sums and products using the `Sum` and `Product` functions:
###Code
n = Symbol("n")
Sum(1/n**2, (n, 1, 10))
Sum(1/n**2, (n,1, 10)).evalf()
Sum(1/n**2, (n, 1, oo)).evalf()
###Output
_____no_output_____
###Markdown
Products work much the same way:
###Code
Product(n, (n, 1, 10)) # 10!
###Output
_____no_output_____
###Markdown
Limits Limits can be evaluated using the `limit` function. For example,
###Code
limit(sin(x)/x, x, 0)
###Output
_____no_output_____
###Markdown
We can use `limit` to check the result of differentiation done with the `diff` function:
###Code
f
diff(f, x)
###Output
_____no_output_____
###Markdown
$\displaystyle \frac{\mathrm{d}f(x,y)}{\mathrm{d}x} = \frac{f(x+h,y)-f(x,y)}{h}$
###Code
h = Symbol("h")
limit((f.subs(x, x+h) - f)/h, h, 0)
###Output
_____no_output_____
###Markdown
OK! We can change the direction from which we approach the limiting point using the `dir` keyword argument:
###Code
limit(1/x, x, 0, dir="+")
limit(1/x, x, 0, dir="-")
###Output
_____no_output_____
###Markdown
Series Series expansion is also one of the most useful features of a CAS. In SymPy we can perform a series expansion of an expression using the `series` function:
###Code
series(exp(x), x)
###Output
_____no_output_____
###Markdown
By default it expands the expression around $x=0$, but we can expand around any value of $x$ by explicitly including a value in the function call:
###Code
series(exp(x), x, 1)
###Output
_____no_output_____
###Markdown
And we can explicitly define to which order the series expansion should be carried out:
###Code
series(exp(x), x, 1, 10)
###Output
_____no_output_____
###Markdown
The series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different order:
###Code
s1 = cos(x).series(x, 0, 5)
s1
s2 = sin(x).series(x, 0, 2)
s2
expand(s1 * s2)
###Output
_____no_output_____
###Markdown
If we want to get rid of the order information we can use the `removeO` method:
###Code
expand(s1.removeO() * s2.removeO())
###Output
_____no_output_____
###Markdown
But note that this is not the correct expansion of $\cos(x)\sin(x)$ to $5$th order:
###Code
(cos(x)*sin(x)).series(x, 0, 6)
###Output
_____no_output_____
###Markdown
Linear algebra Matrices Matrices are defined using the `Matrix` class:
###Code
m11, m12, m21, m22 = symbols("m11, m12, m21, m22")
b1, b2 = symbols("b1, b2")
A = Matrix([[m11, m12],[m21, m22]])
A
b = Matrix([[b1], [b2]])
b
###Output
_____no_output_____
###Markdown
With `Matrix` class instances we can do the usual matrix algebra operations:
###Code
A**2
A * b
###Output
_____no_output_____
###Markdown
And calculate determinants and inverses, and the like:
###Code
A.det()
A.inv()
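# A quick sketch of using the matrix machinery to solve the linear system A*x = b symbolically:
A.LUsolve(b)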
###Output
_____no_output_____
###Markdown
Solving equations For solving equations and systems of equations we can use the `solve` function:
###Code
solve(x**2 - 1, x)
solve(x**4 - x**2 - 1, x)
###Output
_____no_output_____
###Markdown
System of equations:
###Code
solve([x + y - 1, x - y - 1], [x,y])
###Output
_____no_output_____
###Markdown
In terms of other symbolic expressions:
###Code
solve([x + y - a, x - y - c], [x,y])
###Output
_____no_output_____
###Markdown
Further reading * http://sympy.org/en/index.html - The SymPy projects web page.* https://github.com/sympy/sympy - The source code of SymPy.* http://live.sympy.org - Online version of SymPy for testing and demonstrations. Versions
###Code
%reload_ext version_information
%version_information numpy, matplotlib, sympy
###Output
_____no_output_____ |
morse-2.ipynb | ###Markdown
alphabet permutations
###Code
'''
A .- | . -
B -... | -.. . | -. .. | -. . . | - . .. | - . . . | - .. . | - ...
C -.-. | -.- . | -. -. | -. - . | - .-. | - .- . | - . -. | - . - .
D -.. | -. . | - .. | - . .
E .
F ..-. | ..- . | .. -. | . . -. | . .-. | . . - .
G --. | -- . | - -. | - - .
H .... | ... . | .. .. | . ... | .. . . | . . .. | . . . .
I .. | . .
J .--- | .-- - | .- -- | .- - - | . --- | . -- - | . - -- | . - - -
K -.- | -. - | - .- | - . -
L .-.. | .-. . | .- .. | .- . . | . -.. | . -. . | . - .. | . - . .
M -- | - -
N -. | - .
O --- | -- - | - -- | - - -
P .--. | .-- . | .- -. | .- - . | . --. | . -- . | . - -. | . - - .
Q --.- | --. - | -- .- | -- . - | - -.- | - -. - | - - .- | - - . -
R .-. | .- . | . -.
S ... | .. . | . ..
T -
U ..- | .. - | . .- | . . -
V ...- | ... - | .. .- | .. . - | . ..- | . .. - | . . .- | . . . -
W .-- | .- - | . -- | . - -
X -..- | -.. - | -. .- | -. . - | - ..- | - .. - | - . .- | - . . -
Y -.-- | -.- - | -. -- | -. - - | - .-- | - .- - | - . -- | - . - -
Z --.. | --. . | -- .. | -- . . | - -.. | - -. . | - - .. | - - . .
'''
###Output
_____no_output_____
###Markdown
build sentence
###Code
morse_sentence = '......-...-..---.-----.-..-..-..'
###Output
_____no_output_____
###Markdown
build words
###Code
words = ['HELL', 'HELLO', 'WORLD', 'OWORLD', 'TEST']
def encode_word(word):
encoded_word = ''.join([alphabet_map[letter] for letter in word])
return encoded_word
word_translator = {encode_word(word):word for word in words}
word_translator
encoded_word_translator_map = {}
def generate_word_translator(word, sep=''):
letters = []
for letter in word: letters.append(alphabet_map[letter])
encoded_word = sep.join(letters)
encoded_word_translator_map[encoded_word] = word
[generate_word_translator(w) for w in words]
encoded_word_translator_map
encoded_word_counts_map = {}
def generate_word_count(word, sep=''):
letters = []
for letter in word: letters.append(alphabet_map[letter])
encoded_word = sep.join(letters)
if encoded_word in encoded_word_counts_map: encoded_word_counts_map[encoded_word] += 1
else: encoded_word_counts_map[encoded_word] = 1
[generate_word_count(w) for w in words]
encoded_word_counts_map
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
encoded_word_sizes_map
morse_words = list(encoded_word_counts_map.keys())
morse_words
###Output
_____no_output_____
###Markdown
decode words
###Code
def decode_words_test(level, current_sentence, remaining):
if not remaining:
print(f'== {level} {current_sentence}')
return 1
nb_options = 0
for word, duplicates in encoded_word_counts_map.items():
if remaining.startswith(word):
print(f'{level} {current_sentence} + {word}, {duplicates}')
new_sentence = current_sentence.copy()
new_sentence.append(word)
i = encoded_word_sizes_map[word]
new_remaining = str(remaining[i:])
nb_options += decode_words_test(level+1, new_sentence, new_remaining) * duplicates
return nb_options
morse_sentence = '......-...-..---.-----.-..-..-..' # HELLOWORLD
words = ['HELL', 'HELLO', 'WORLD', 'OWORLD', 'TEST']
encoded_word_counts_map = {}
[generate_word_count(w) for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_words_test(0, [], morse_sentence)
print(res)
assert res == 2 # HELLO WORLD, HELL OWORLD
morse_sentence = '......-...-..---' # HELLO
words = ['HELL', 'HELLO', 'WORLD', 'OWORLD', 'TEST', 'HE', 'EH', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w) for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_words_test(0, [], morse_sentence)
print(res)
assert res == 4 # HELLO, HELL O, HE L L O, EH L L O
morse_sentence = '......-...-..---' # HELLO
words = ['HELL', 'TEST']
encoded_word_counts_map = {}
[generate_word_count(w) for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_words_test(0, [], morse_sentence)
print(res)
assert res == 0 # no match
morse_sentence = '......-..' # HEL fix HE L hide HEL
words = ['HEL', 'HE', 'EH', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w) for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_words_test(0, [], morse_sentence)
print(res)
assert res == 3 # HEL, HE L, EH L
morse_sentence = '......-..' # HEL fix HE L hide HEL
words = ['HEL', 'HE', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w) for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_words_test(0, [], morse_sentence)
print(res)
assert res == 2 # HEL, HE L
###Output
_____no_output_____
###Markdown
decode letters
###Code
def decode_letters_test(level, current_sentence, remaining):
def translate(word): return ''.join([reverse_alphabet_map[l] for l in word.split('|')])
words = [translate(word) for word in current_sentence]
key = ' '.join(words)
if not remaining:
print(f'{key} == {level} {current_sentence}')
return 1
nb_options = 0
for encoded_letter in morse_alphabet:
if remaining.startswith(encoded_letter):
print(f'{key} {level} {current_sentence} + {encoded_letter}')
new_sentence = current_sentence.copy()
new_sentence.append(encoded_letter)
i = len(encoded_letter)
new_remaining = str(remaining[i:])
nb_options += decode_letters_test(level+1, new_sentence, new_remaining)
return nb_options
morse_sentence = '....' # E . I .. S ... H ....
words = ['EEEE', 'II', 'SE', 'ES', 'H', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w) for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters_test(0, [], morse_sentence)
print(res)
assert res == 8 # EEEE, EEI, EIE, ES, H, IEE, II, SE
###Output
_____no_output_____
###Markdown
decode list of letters
###Code
# confusion HEL HE L
def decode_words_old(level, current_sentence, current_word, remaining):
if not remaining:
print(f'====> {level} {current_sentence}')
return 1 if not current_word else 0
nb_options = 0
new_letter = remaining[0]
new_word = current_word.copy()
new_word.append(new_letter)
word = '|'.join(new_word)
if word in encoded_word_counts_map:
print(f'{level} {current_sentence} -> found {word}')
new_sentence = current_sentence.copy()
new_sentence.append(word)
new_remaining = remaining.copy()
nb_options += decode_words(level+1, new_sentence, [], new_remaining[1:]) * encoded_word_counts_map[word]
else:
print(f'{level} {current_sentence} + {new_word}')
new_remaining = remaining.copy()
nb_options += decode_words_old(level+1, current_sentence, new_word, new_remaining[1:])
return nb_options
def decode_words(level, current_sentence, current_word, remaining):
if not remaining:
print(f'====> {level} {current_sentence}')
return 1 if not current_word else 0
nb_options = 0
new_letter = remaining[0]
new_word = current_word.copy()
new_word.append(new_letter)
word = '|'.join(new_word)
if word in encoded_word_counts_map:
print(f'{level} {current_sentence} -> found {word}')
new_sentence = current_sentence.copy()
new_sentence.append(word)
new_remaining = remaining.copy()
nb_options += decode_words(level+1, new_sentence, [], new_remaining[1:]) * encoded_word_counts_map[word]
print(f'{level} {current_sentence} + {new_word}')
new_remaining = remaining.copy()
nb_options += decode_words(level+1, current_sentence, new_word, new_remaining[1:])
return nb_options
morse_sentence = '....' # E . I .. S ... H ....
words = ['EEEE', 'EIE', 'SE', 'ES', 'H', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
morse_sentence_letters = ['.', '..', '.'] # EIE
res = decode_words(0, [], [], morse_sentence_letters)
print(res)
assert res == 1 # EIE
morse_sentence = '....' # E . I .. S ... H ....
words = ['EIE', 'SE', 'ES', 'H', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
morse_sentence_letters = ['.', '...'] # ES
res = decode_words(0, [], [], morse_sentence_letters)
print(res)
assert res == 1 # ES
morse_sentence = '....' # E . I .. S ... H ....
words = ['S', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
morse_sentence_letters = ['.', '...'] # ES
res = decode_words(0, [], [], morse_sentence_letters)
print(res)
assert res == 0 # no match
morse_sentence = '......-..' # HEL
words = ['HEL', 'HE', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
morse_sentence_letters = ['....', '.', '.-..'] # ES
res = decode_words(0, [], [], morse_sentence_letters)
print(res)
assert res == 2 # HEL, HE L
###Output
_____no_output_____
###Markdown
mix
###Code
def decode_words(level, current_sentence, current_word, remaining):
def translate(word): return ''.join([reverse_alphabet_map[l] for l in word.split('|')])
words = [translate(word) for word in current_sentence]
keys = ' '.join(words)
keyw = ''.join([reverse_alphabet_map[l] for l in current_word])
key = f'{keys}+{keyw}'
if not remaining:
if not current_word: print(f'{key} ==== {level} {current_sentence} {current_word}')
else: print(f'{key} ====X {level} {current_sentence} {current_word}')
return 1 if not current_word else 0
nb_options = 0
new_letter = remaining[0]
new_word = current_word.copy()
new_word.append(new_letter)
word = '|'.join(new_word)
if word in encoded_word_counts_map:
new_sentence = current_sentence.copy()
new_sentence.append(word)
new_remaining = remaining.copy()
#print(f'{key} {level} {new_sentence} [] rem={new_remaining[1:]}' )
nb_options += decode_words(level+1, new_sentence, [], new_remaining[1:]) * encoded_word_counts_map[word]
            # no `else` here: with an else branch the search would find HE L and stop before reaching HEL
new_remaining = remaining.copy()
#print(f'{key} {level} {current_sentence} {new_word} rem={new_remaining[1:]}' )
nb_options += decode_words(level+1, current_sentence, new_word, new_remaining[1:])
return nb_options
def decode_letters(level, current_sentence, remaining):
if not remaining:
#if current_sentence: print(f'==> {level} {current_sentence}')
return decode_words(0, [], [], current_sentence) if current_sentence else 0
nb_options = 0
for encoded_letter in morse_alphabet:
if remaining.startswith(encoded_letter):
#print(f'{level} {current_sentence} + {encoded_letter}')
new_sentence = current_sentence.copy()
new_sentence.append(encoded_letter)
i = len(encoded_letter)
new_remaining = str(remaining[i:])
nb_options += decode_letters(level+1, new_sentence, new_remaining)
return nb_options
morse_sentence = '....' # E . I .. S ... H ....
words = ['EIE', 'SE', 'ES', 'H', 'L', 'O']
encoded_word_translator_map
[generate_word_translator(w) for w in words]
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters(0, [], morse_sentence)
print(res)
assert res == 4 # EIE, ES, H, SE
morse_sentence = '....' # E . I .. S ... H ....
words = ['S', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters(0, [], morse_sentence)
print(res)
assert res == 0 # no match
morse_sentence = '.....' # confusion EH/HE
words = ['HEL', 'HE', 'EH', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters(0, [], morse_sentence)
print(res)
assert res == 2 # HE, EH
morse_sentence = '......-..' # HEL single option
words = ['HEL', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters(0, [], morse_sentence)
print(res)
assert res == 1 # HEL
morse_sentence = '......-..' # HEL no EH/HE confusion
words = ['HEL', 'HE', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters(0, [], morse_sentence)
print(res)
assert res == 2 # HEL, HE L -- without the fix the search stops when HE L is found and never reaches HEL
morse_sentence = '......-..' # HEL with confusion EH/HE
words = ['HEL', 'HE', 'EH', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters(0, [], morse_sentence)
print(res)
assert res == 3 # HEL, HE L, EH L
morse_sentence = '......-...-..---' # HELLO with confusion EH/HE
words = ['HELL', 'HELLO', 'WORLD', 'OWORLD', 'TEST', 'HE', 'EH', 'L', 'O']
encoded_word_counts_map = {}
[generate_word_count(w,'|') for w in words]
encoded_word_sizes_map = {word:len(word) for word in encoded_word_counts_map.keys()}
res = decode_letters(0, [], morse_sentence)
print(res)
assert res == 4 # HELLO, HELL O, HE L L O, EH L L O
print('ok')
###Output
_____no_output_____
###Markdown
finalized solution
###Code
alphabet = '''A .- B -... C -.-. D -..
E . F ..-. G --. H ....
I .. J .--- K -.- L .-..
M -- N -. O --- P .--.
Q --.- R .-. S ... T -
U ..- V ...- W .-- X -..-
Y -.-- Z --..'''
def generate_alphabet_map():
key = None
alphabet_map = {}
for value in alphabet.split():
if not key is None:
alphabet_map[key] = value
key = None
else:
key = value
return alphabet_map
alphabet_map = generate_alphabet_map()
morse_alphabet = alphabet_map.values()
#morse_alphabet
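# Quick sanity check of the generated map (a sketch): 'SOS' should encode to '...---...'
assert ''.join(alphabet_map[c] for c in 'SOS') == '...---...'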
encoded_word_list = []
def generate_word_list(word, sep=''):
letters = []
for letter in word: letters.append(alphabet_map[letter])
encoded_word = sep.join(letters)
encoded_word_list.append(encoded_word)
def decode_words(current_word, remaining):
if not remaining:
return 1 if not current_word else 0
nb_options = 0
new_letter = remaining[0]
new_word = current_word.copy()
new_word.append(new_letter)
word = '|'.join(new_word)
if word in encoded_word_list:
new_remaining = remaining.copy()
nb_options += decode_words([], new_remaining[1:])
new_remaining = remaining.copy()
nb_options += decode_words(new_word, new_remaining[1:])
return nb_options
def decode_letters(current_sentence, remaining):
if not remaining:
return decode_words([], current_sentence) if current_sentence else 0
nb_options = 0
for encoded_letter in morse_alphabet:
if remaining.startswith(encoded_letter):
new_sentence = current_sentence.copy()
new_sentence.append(encoded_letter)
i = len(encoded_letter)
new_remaining = str(remaining[i:])
nb_options += decode_letters(new_sentence, new_remaining)
return nb_options
morse_sentence = '....' # E . I .. S ... H ....
words = ['EIE', 'SE', 'ES', 'H', 'L', 'O']
encoded_word_list = []
[generate_word_list(w,'|') for w in words]
print(encoded_word_list)
res = decode_letters([], morse_sentence)
print(res)
assert res == 4 # EIE, ES, H, SE
morse_sentence = '....' # E . I .. S ... H ....
words = ['S', 'L', 'O']
encoded_word_list = []
[generate_word_list(w,'|') for w in words]
print(encoded_word_list)
res = decode_letters([], morse_sentence)
print(res)
assert res == 0 # no match
morse_sentence = '.....' # confusion EH/HE
words = ['HEL', 'HE', 'EH', 'O']
encoded_word_list = []
[generate_word_list(w,'|') for w in words]
print(encoded_word_list)
res = decode_letters([], morse_sentence)
print(res)
assert res == 2 # HE, EH
morse_sentence = '......-..' # HEL single option
words = ['HEL', 'O']
encoded_word_list = []
[generate_word_list(w,'|') for w in words]
print(encoded_word_list)
res = decode_letters([], morse_sentence)
print(res)
assert res == 1 # HEL
morse_sentence = '......-..' # HEL no EH/HE confusion
words = ['HEL', 'HE', 'L', 'O']
encoded_word_list = []
[generate_word_list(w,'|') for w in words]
print(encoded_word_list)
res = decode_letters([], morse_sentence)
print(res)
assert res == 2 # HEL, HE L -- without the fix the search stops when HE L is found and never reaches HEL
morse_sentence = '......-..' # HEL with confusion EH/HE
words = ['HEL', 'HE', 'EH', 'L', 'O']
encoded_word_list = []
[generate_word_list(w,'|') for w in words]
print(encoded_word_list)
res = decode_letters([], morse_sentence)
print(res)
assert res == 3 # HEL, HE L, EH L
morse_sentence = '......-...-..---' # HELLO with confusion EH/HE
words = ['HELL', 'HELLO', 'WORLD', 'OWORLD', 'TEST', 'HE', 'EH', 'L', 'O']
encoded_word_list = []
[generate_word_list(w,'|') for w in words]
print(encoded_word_list)
res = decode_letters([], morse_sentence)
print(res)
assert res == 4 # HELLO, HELL O, HE L L O, EH L L O
###Output
['....|.|.-..|.-..', '....|.|.-..|.-..|---', '.--|---|.-.|.-..|-..', '---|.--|---|.-.|.-..|-..', '-|.|...|-', '....|.', '.|....', '.-..', '---']
4
|
Decorators/@classmethod.ipynb | ###Markdown
Introduction **What?** @classmethod Class decorators - `@property` The Pythonic way to introduce attributes is to make them public, and not introduce getters and setters to retrieve or change them. - `@classmethod` To add an additional constructor to the class. - `@staticmethod` To attach functions to classes so they are not misused in the wrong places. @classmethod - `@classmethod` is bound to the class rather than to an object. - Class methods can be called by both the class and its objects. - These methods can be called with a class **or** with an object. Example
###Code
from datetime import date
class Person:
def __init__(self, name, age):
"""
This will create the object by name and age
"""
self.name = name
self.age = age
"""
This will create the object by name and year
"""
@classmethod
def fromBirthYear(cls, name, year):
return cls(name, date.today().year - year)
def display(self):
print("Name : ", self.name, "Age : ", self.age)
person = Person('mayank', 21)
person.display()
person1 = Person.fromBirthYear('mayank', 2000)
person1.display()
# called by the class
person1 = Person('just something', 1000)
# called by the object
person2 = person1.fromBirthYear('mayank', 2000)
person2.display()
###Output
Name : mayank Age : 21
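###Markdown
The introduction above also mentions `@staticmethod`. Below is a minimal sketch (not part of the original example; the class and method names are illustrative) of a static method, which needs access to neither the instance nor the class:
###Code
class PersonUtil:
    @staticmethod
    def is_adult(age):
        # a plain utility attached to the class; no access to self or cls
        return age >= 18
adult_check = PersonUtil.is_adult(21)  # True
###Output
_____no_output_____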
|
processing/.ipynb_checkpoints/Ensemble_Features-checkpoint.ipynb | ###Markdown
**This module is for matching the original features with the results obtained from the three methods.** The different datasets will be generated here:
###Code
import pandas as pd
# ori_df = pd.read_csv('../dataset/Dataset.csv')
ori_df = pd.read_pickle('../dataset/Dataset.pkl')
# ori_df.to_pickle('../dataset/Dataset.pkl')
ori_df
###Output
_____no_output_____
###Markdown
1. Dataset for feature-selector
###Code
df_features_all = pd.read_pickle('../dataset/Feature_Selector/selected_features.pkl')
df_features_all = df_features_all.iloc[:,2:]
df_features_all = df_features_all.reset_index()
df_features_all['label'] = ori_df['label']
df_features_all
# save the result to dataset
df_features_all.to_pickle('../dataset/training_data/features_selected.pkl')
###Output
_____no_output_____
###Markdown
2. Dataset for TF_IDF in total
###Code
# load all tf-idf features
df_tf_idf_all = pd.read_pickle('../dataset/TF_IDF_Sen/features_ran_all.pkl')
# load tf-idf featuers without some of normal features
df_tf_idf_part = pd.read_pickle('../dataset/TF_IDF_Sen/features_ran_delete_nor.pkl')
# define the module to match features
def match_features(features_all, extracted_features):
# create the dict with {index: features}
features_dict = {}
features_dict = {feature.split(":")[0]:feature.split(":")[1] for feature in features_all}
indexes = []
# create the indexes list to save the matched index
for index, feature in features_dict.items():
if feature in extracted_features:
indexes.append(index)
# add the label column
indexes.append('label')
return indexes
###Output
_____no_output_____
###Markdown
load all features in order to match from them
###Code
with open('../dataset/Dynfeatures.txt', encoding='utf-8') as fr:
features_all = fr.read().splitlines()
features_all
tf_idf_all = df_tf_idf_all['Features']
tf_idf_all.values
###Output
_____no_output_____
###Markdown
create the indexes list for all tf_idf sentences
###Code
tf_idf_all_indexes = match_features(features_all = features_all, extracted_features = tf_idf_all.values)
tf_idf_all_indexes
ori_df[tf_idf_all_indexes]
# save the result to dataset
ori_df[tf_idf_all_indexes].to_pickle('../dataset/TF_IDF_Sen/features_ran_all_indexes.pkl')
###Output
_____no_output_____
###Markdown
3. Dataset for TF_IDF after deleting some of normal features
###Code
df_tf_idf_part
tf_idf_part = df_tf_idf_part['Features']
tf_idf_part.values
###Output
_____no_output_____
###Markdown
create the indexes list for part tf_idf sentences
###Code
tf_idf_part_indexes = match_features(features_all = features_all, extracted_features = tf_idf_part.values)
tf_idf_part_indexes
# ori_df[tf_idf_part_indexes]
# save the result to dataset
ori_df[tf_idf_part_indexes].to_pickle('../dataset/TF_IDF_Sen/features_ran_part_indexes.pkl')
###Output
_____no_output_____
###Markdown
4. Dataset for unique 3-Gram features
###Code
thr_Gram_df = pd.read_pickle('../dataset/N-Gram/unique_fd_three.pkl')
thr_Gram_df
features_all
# define the module to match string (substring) features
def match_strings(features_all, extracted_strings):
    # create the dict with {index: feature}
    features_dict = {feature.split(":")[0]: feature.split(":")[1] for feature in features_all}
    indexes = []
    # keep an index when its feature description contains any of the extracted strings
    # (so two n-grams inside the same string only count that string once)
    for index, feature in features_dict.items():
        if any(sub in feature for sub in extracted_strings):
            indexes.append(index)
    # add the label column
    indexes.append('label')
    return indexes
# if two 3-grams exist in one string, count it once only
# join the n-gram word tuples into strings (assumes each entry in the 'features' column is an iterable of words)
extracted_strings = []
for features in thr_Gram_df['features']:
    extracted_strings.append(' '.join(features))
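# Hypothetical next step (a sketch; the output file name is assumed): match the joined
# n-gram strings against the feature descriptions and save the selected columns,
# mirroring what was done with match_features above.
# thr_gram_indexes = match_strings(features_all, extracted_strings)
# ori_df[thr_gram_indexes].to_pickle('../dataset/N-Gram/features_three_gram_indexes.pkl')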
fullstring = "S\\tack\\Abuse"
substring = "tack"
if fullstring.find(substring) != -1:
print("Found!")
else:
print("Not found!")
###Output
Found!
###Markdown
5. Dataset for unique 4-Gram features
###Code
four_Gram_df = pd.read_pickle('../dataset/N-Gram/unique_fd_four.pkl')
four_Gram_df
# if two 4-grams exist in one string, count it once only
###Output
_____no_output_____
###Markdown
6. Dataset for combine feature-selector + TF_IDF + unique 4-Gram and delete the duplicate features
###Code
# read all the separate dataset and delete the label column
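# A possible sketch (paths taken from the sections above; the 4-gram dataset is not built yet):
# concatenate the separate datasets on columns, drop duplicated feature columns and keep a single label.
# df_fs = pd.read_pickle('../dataset/training_data/features_selected.pkl')
# df_tfidf = pd.read_pickle('../dataset/TF_IDF_Sen/features_ran_all_indexes.pkl')
# combined = pd.concat([df_fs.drop(columns='label'), df_tfidf.drop(columns='label')], axis=1)
# combined = combined.loc[:, ~combined.columns.duplicated()]
# combined['label'] = ori_df['label']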
###Output
_____no_output_____ |
Classification Predict.ipynb | ###Markdown
Classification Predict Problem statement Background Many companies are built around lessening one's environmental impact or carbon footprint. They offer products and services that are environmentally friendly and sustainable, in line with their values and ideals. They would like to determine how people perceive climate change and whether or not they believe it is a real threat. Problem Statement Create a Machine Learning model that is able to classify whether or not a person believes in climate change, based on their novel tweet data. Importing the libraries
###Code
# Analysis Libraries
import pandas as pd
import numpy as np
from collections import Counter
# Visualisation Libraries
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
from PIL import Image
# Language Processsing Libraries
import nltk
nltk.download(['punkt','stopwords'])
nltk.download('vader_lexicon')
nltk.download('popular')
from sklearn.utils import resample
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.tokenize import word_tokenize, TreebankWordTokenizer
import re
import string
from nltk import SnowballStemmer
import spacy
# ML Libraries
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC,SVC
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import classification_report,confusion_matrix,accuracy_score,recall_score,precision_score
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import metrics
from nltk import SnowballStemmer
# Code for hiding seaborn warnings
import warnings
warnings.filterwarnings("ignore")
###Output
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package vader_lexicon to /root/nltk_data...
[nltk_data] Package vader_lexicon is already up-to-date!
[nltk_data] Downloading collection 'popular'
[nltk_data] |
[nltk_data] | Downloading package cmudict to /root/nltk_data...
[nltk_data] | Package cmudict is already up-to-date!
[nltk_data] | Downloading package gazetteers to /root/nltk_data...
[nltk_data] | Package gazetteers is already up-to-date!
[nltk_data] | Downloading package genesis to /root/nltk_data...
[nltk_data] | Package genesis is already up-to-date!
[nltk_data] | Downloading package gutenberg to /root/nltk_data...
[nltk_data] | Package gutenberg is already up-to-date!
[nltk_data] | Downloading package inaugural to /root/nltk_data...
[nltk_data] | Package inaugural is already up-to-date!
[nltk_data] | Downloading package movie_reviews to
[nltk_data] | /root/nltk_data...
[nltk_data] | Package movie_reviews is already up-to-date!
[nltk_data] | Downloading package names to /root/nltk_data...
[nltk_data] | Package names is already up-to-date!
[nltk_data] | Downloading package shakespeare to /root/nltk_data...
[nltk_data] | Package shakespeare is already up-to-date!
[nltk_data] | Downloading package stopwords to /root/nltk_data...
[nltk_data] | Package stopwords is already up-to-date!
[nltk_data] | Downloading package treebank to /root/nltk_data...
[nltk_data] | Package treebank is already up-to-date!
[nltk_data] | Downloading package twitter_samples to
[nltk_data] | /root/nltk_data...
[nltk_data] | Package twitter_samples is already up-to-date!
[nltk_data] | Downloading package omw to /root/nltk_data...
[nltk_data] | Package omw is already up-to-date!
[nltk_data] | Downloading package wordnet to /root/nltk_data...
[nltk_data] | Package wordnet is already up-to-date!
[nltk_data] | Downloading package wordnet_ic to /root/nltk_data...
[nltk_data] | Package wordnet_ic is already up-to-date!
[nltk_data] | Downloading package words to /root/nltk_data...
[nltk_data] | Package words is already up-to-date!
[nltk_data] | Downloading package maxent_ne_chunker to
[nltk_data] | /root/nltk_data...
[nltk_data] | Package maxent_ne_chunker is already up-to-date!
[nltk_data] | Downloading package punkt to /root/nltk_data...
[nltk_data] | Package punkt is already up-to-date!
[nltk_data] | Downloading package snowball_data to
[nltk_data] | /root/nltk_data...
[nltk_data] | Package snowball_data is already up-to-date!
[nltk_data] | Downloading package averaged_perceptron_tagger to
[nltk_data] | /root/nltk_data...
[nltk_data] | Package averaged_perceptron_tagger is already up-
[nltk_data] | to-date!
[nltk_data] |
[nltk_data] Done downloading collection popular
###Markdown
Load Data X_df DataFrame contains three columns; sentiment, message and tweetid
###Code
X_df = pd.read_csv('train.csv')
X_df.head(5)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis A check for uniqueness indicates that the message column has over 1000 duplicates due to repeated tweetids. This should be removed as to not train the model on repeated data.
###Code
# Inspect structure of dataset
X_df.info()
# Generate dataframe to indicate unique values
number_of_unique=[X_df[i].nunique() for i in X_df.columns] # Number of unique values per column
column_names=[i for i in X_df.columns]
unique_zip=list(zip(column_names,number_of_unique))
unique_df=pd.DataFrame(unique_zip,columns=['Column_Feature','Unique_Values'])
unique_df
# A function to remove duplicate rows from the message column
def delete_dup(df):
df=df.copy()
    df = df.drop_duplicates(subset='message') # messages specified as the subset to evaluate
return df
X_df=delete_dup(X_df)
# Recheck for duplicates
number_of_unique=[X_df[i].nunique() for i in X_df.columns]
unique_df=pd.DataFrame(unique_zip,columns=['Column_Feature','Unique_Values'])
unique_df
###Output
_____no_output_____
###Markdown
In order to give our graphs more context, we have added the text version of the sentiment to the train DataFrame. These new columns will be deleted as they do not assist in the actual classification problem.
###Code
# A function to add the text version of 'sentiment'. This is just for graphing purposes
# and should be droped.
def add_text_sent(df):
# Copy the input DataFrame
out_df = df.copy()
sentiment_text = []
# Loop though the sentiments and assign the text version.
# Pro: 1, News: 2, Neutral: 0, Anti: -1
for sent in df['sentiment']:
if sent == 1:
sentiment_text.append('Pro')
elif sent == 2:
sentiment_text.append('News')
elif sent == 0:
sentiment_text.append('Neutral')
elif sent == -1:
sentiment_text.append('Anti')
out_df['sentiment_text'] = sentiment_text
out_df.drop(['message', 'tweetid'], axis = 1, inplace = True)
return out_df
# Function to arrange the DataFrame to show percentage of classes
def class_table(df):
out_df = df.groupby(['sentiment_text']).count()
class_perc = [round(100 * x / len(df), 1) for x in out_df['sentiment']]
out_df['% of Total Classes'] = class_perc
return out_df
# Create a new DataFrame for graphing purposes. Show the sentiment classes as a
# percentage.
new_X_df = add_text_sent(X_df)
new_X_df_t = class_table(new_X_df)
new_X_df_t
###Output
_____no_output_____
###Markdown
The class labels of the training data are not balanced. As shown in the table above, most of the tweets (50.8%) are classified as `Pro`, which in the context of the `Problem Statement` means they believe in man-made climate change. The other classes (`News`, `Neutral` and `Anti`) account for `24.9%`, `15.8%` and `8.6%` respectively. This presents a challenge in that the models developed might have a bias in classifying tweets. To further illustrate this distribution, a bar graph of the count of each sentiment class is shown below:
###Code
# Show the ditribution of the classes as a graph
f, ax = plt.subplots(figsize=(10, 8))
sns.set(style="whitegrid")
ax = sns.countplot(x="sentiment_text", data=new_X_df)
plt.title('Message Count', fontsize =20)
plt.show()
###Output
_____no_output_____
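###Markdown
One way to mitigate this imbalance (a sketch only; it is not applied in the rest of this notebook) is to upsample the minority classes with `resample` from `sklearn.utils`, which is already imported above:
###Code
# Sketch: upsample every class to the size of the majority class.
# The balanced frame is kept in a separate variable and is not used further below.
majority_size = X_df['sentiment'].value_counts().max()
balanced_df = pd.concat([
    resample(group, replace=True, n_samples=majority_size, random_state=42)
    for _, group in X_df.groupby('sentiment')
])
###Output
_____no_output_____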
###Markdown
Length of Tweets per Sentiment Class
###Code
# Add a column of length of tweets
new_X_df['message_length'] = X_df['message'].str.len()
new_X_df.head()
# Display the boxplot of the length of tweets.
plt.figure(figsize=(12.8,6))
sns.boxplot(data=new_X_df, x='sentiment_text', y='message_length');
###Output
_____no_output_____
###Markdown
The Tweets seem to be around `125` characters long on average. This is shown by both the mean of the classes in the boxplots above and the distribution density of the classes below. All the classes have a distribution density centred around `125` characters. The length of `Neutral` Tweets seems to have a wider range and `Pro` Tweets have a shorter range. The `Pro` Tweets also seem to have a lot of `outliers` by this measurement.
###Code
# Plot of distribution of scores for building categories
plt.figure(figsize=(12.8,6))
# Density plot of Energy Star scores
sns.kdeplot(new_X_df[new_X_df['sentiment_text'] == 'Pro']['message_length'], label = 'Pro', shade = False, alpha = 0.8);
sns.kdeplot(new_X_df[new_X_df['sentiment_text'] == 'News']['message_length'], label = 'News', shade = False, alpha = 0.8);
sns.kdeplot(new_X_df[new_X_df['sentiment_text'] == 'Neutral']['message_length'], label = 'Neutral', shade = False, alpha = 0.8);
sns.kdeplot(new_X_df[new_X_df['sentiment_text'] == 'Anti']['message_length'], label = 'Anti', shade = False, alpha = 0.8);
# label the plot
plt.xlabel('message length (char)', size = 15); plt.ylabel('Density', size = 15);
plt.title('Density Plot of Message Length by Sentiment Class ', size = 20);
###Output
_____no_output_____
###Markdown
Data Preprocessing Text data needs to be pre-processed before modelling. Steps taken include: 1. Removing/replacing certain text and characters 2. Tokenization and lemmatization of the text **Clean Text Data**
###Code
# Function to remove/replace unwanted text such as characters,URLs etc
def clean(text):
text=text.replace("'",'')
text=text.replace(".",' ')
text=text.replace(" ",'')
text=text.replace(",",' ')
text=text.replace("_",' ')
text=text.replace("!",' ')
text=text.replace("RT",'retweet') #Replace RT(Retweet) with relay
text=text.replace(r'\d+','')
text=re.sub('((www\.[^\s]+)|(https?://[^\s]+)|(https?//[^\s]+))','weblink',text)
text=re.sub('((co/[^\s]+)|(co?://[^\s]+)|(co?//[^\s]+))','',text)
text=text.lower() # Lowercase tweet
text =text.lstrip('\'"') # Remove extra white space
return text
###Output
_____no_output_____
###Markdown
Punctuation at the beginning and end of words can be removed
###Code
#Function 3
def rm_punc(text):
clean_text=[]
for i in str(text).split():
rm=i.strip('\'"?,.:_/<>!')
clean_text.append(rm)
return ' '.join(clean_text)
X_df['message']=X_df['message'].apply(clean)
X_df['message']=X_df['message'].apply(rm_punc)
X_df.head(5)
###Output
_____no_output_____
###Markdown
Furthermore, the @ and # symbols must be dealt with by replacing @ with the word "at" and # with the word "tag":
###Code
# Function replaces the @ symbol with the word at
def at(text):
return ' '.join(re.sub("(@+)","at ",text).split())
# Function replaces the # symbol with the word tag
def hashtag(text):
return ' '.join(re.sub("(#+)"," tag ",text).split())
# Remove hashtags and replace @
X_df['message']=X_df['message'].apply(at)
X_df['message']=X_df['message'].apply(hashtag)
X_df.head(5)
###Output
_____no_output_____
###Markdown
**Tokenize and Lemmatization** Split tweets into individual words via tokenization in the tokens column.
###Code
# Tokenise each tweet messge
tokeniser = TreebankWordTokenizer()
X_df['tokens'] = X_df['message'].apply(tokeniser.tokenize)
X_df.head(5)
###Output
_____no_output_____
###Markdown
Perform Lemmatization of tokens to group together the different inflected forms of a word so they can be analysed as a single item
###Code
# Function performs lemmatization in the tokens column
def lemma(text):
lemma = WordNetLemmatizer()
return [lemma.lemmatize(i) for i in text]
###Output
_____no_output_____
###Markdown
Generate Lemmatization column
###Code
X_df['lemma'] =X_df['tokens'].apply(lemma)
X_df.head(5)
###Output
_____no_output_____
###Markdown
Lastly, a new column derived from the lemma column can be generated in order to train the model(s).
###Code
# Insert new clean message column
X_df['clean message'] = X_df['lemma'].apply(lambda i: ' '.join(i))
X_df.head(5)
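# Optional sketch (not applied above): the stopwords corpus downloaded earlier could be used
# to drop common words from the clean messages before modelling, e.g.:
# from nltk.corpus import stopwords
# stop_words = set(stopwords.words('english'))
# X_df['clean message'] = X_df['clean message'].apply(
#     lambda msg: ' '.join(w for w in msg.split() if w not in stop_words))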
###Output
_____no_output_____
###Markdown
Text Analysis The words most popular amongst the sentiment groups can be represented visually. This provides insights into the nature of the Tweet messages for each class **Word Clouds** The strings generated for each sentiment class will provide the information used to generate the wordclouds for each class
###Code
# Create copy of X_df to generate word cloud DataFrame
word_df=X_df.copy()
# Remove small words that will clutter word cloud and have no significant meaning
def remove_small(text):
output=[]
for i in text.split():
if len(i)>3:
output.append(i)
else:
pass
return ' '.join(output)
word_df['clean message']=word_df['clean message'].apply(remove_small)
# Create and generate a word cloud image:
# Display the generated image:
fig, axs = plt.subplots(2, 2, figsize=(18,10))
# Anti class word cloud
anti_wordcloud = WordCloud(width=1800, height = 1200,background_color="white").generate(' '.join(i for i in word_df[word_df['sentiment']==-1]['clean message']))
axs[0, 0].imshow(anti_wordcloud, interpolation='bilinear')
axs[0, 0].set_title('Anti Tweets')
axs[0, 0].axis('off')
# Neutral cloud word cloud
neutral_wordcloud = WordCloud(width=1800, height = 1200,background_color="white").generate(' '.join(i for i in word_df[word_df['sentiment']==0]['clean message']))
axs[0, 1].imshow(neutral_wordcloud, interpolation='bilinear')
axs[0, 1].set_title('Neutral Tweets')
axs[0, 1].axis('off')
# Positive class word cloud
positive_wordcloud = WordCloud(width=1800, height = 1200).generate(' '.join(i for i in word_df[word_df['sentiment']==1]['clean message']))
axs[1, 0].imshow(positive_wordcloud, interpolation='bilinear')
axs[1, 0].set_title('Positive Tweets')
axs[1, 0].axis('off')
# News class word cloud
news_wordcloud = WordCloud(width=1800, height = 1200).generate(' '.join(i for i in word_df[word_df['sentiment']==2]['clean message']))
axs[1, 1].imshow(news_wordcloud, interpolation='bilinear')
axs[1, 1].set_title('News Tweets')
axs[1, 1].axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
From the word clouds it is clear that all groups speak about climate change and global warming, as expected. The word clouds of each group however contain information that echoes their sentiment. 1. The Anti cloud contains words such as fake, man made and scam 2. The Positive cloud contains words such as real, believe, change and talk climate 3. The Neutral cloud contains a mixture of words that could lean either to anti or positive 4. The News cloud contains words and names such as Trump, Scott Pruitt (Administrator of the United States Environmental Protection Agency) and change policy, typical of news reporting.
###Code
# Spacy will be used to generate entities
nlp = spacy.load('en_core_web_sm')
# A new dataframe NER_df is created for the following visualisations
NER_df=pd.DataFrame(X_df['clean message'])
# Function generates docs to get Name Entity Recognitions
def doc(text):
doc=nlp(text)
return doc
# Create a new column containing the nlp transformed text
NER_df['doc']=NER_df['clean message'].apply(doc)
#Functions below extract Persons and organisations from the input parameter text. If entity is not found 'None' is populated in cell
def person(doc):
    # return the first PERSON entity found; 'None' if the doc has no PERSON entity
    for ent in doc.ents:
        if ent.label_=='PERSON':
            return (ent.text)
    return ('None')
def org(doc):
    # return the first ORG entity found; 'None' if the doc has no ORG entity
    for ent in doc.ents:
        if ent.label_=='ORG':
            return (ent.text)
    return ('None')
# Generate new columns 'persons' and 'organisation'
NER_df['persons']=NER_df['doc'].apply(person)
NER_df['organisation']=NER_df['doc'].apply(org)
###Output
_____no_output_____
###Markdown
Below are the plots for the top persons and organisations tweeted about
###Code
# Retrive all the PERSON labels from the NER_df and generate a new dataframe person_df for analysis
persons=[i for i in NER_df['persons']]
person_counts = Counter(persons).most_common(20)
person_df=pd.DataFrame(person_counts,columns=['persons name','count'])
person_df.drop([0,1,7,8,13,15,16],axis=0,inplace=True) # rows removed due to 'None' entries, incorrect classification or different entry of a same entity (repetition)
# Plot top persons tweeted
f, ax = plt.subplots(figsize=(30, 10))
sns.set(style='white',font_scale=1.2)
sns.barplot(x=person_df[person_df['count'] <1000].iloc[:,0],y=person_df[person_df['count'] <1000].iloc[:,1])
plt.xlabel('Persons Name')
plt.ylabel('Mentions')
plt.show()
###Output
_____no_output_____
###Markdown
As seen in the plot above, the people most tweeted about are prominent USA politicians such as Donald Trump and Al Gore; Leonardo DiCaprio is also a popular figure tweeted about regarding climate change, although there is a misclassification of DOE (Department of Energy).
###Code
# Retrive all the ORG labels from the NER_df and generate a new dataframe org_df for analysis
org=[i for i in NER_df['organisation']]
org_counts = Counter(org).most_common(15)
org_df=pd.DataFrame(org_counts,columns=['organisation name','count'])
org_df.drop([0,1,3,8,12],axis=0,inplace=True) # rows removed due to 'None' entries, incorrect classification or different entry of a same entity (repetition)
# Plot top organisations tweeted
f, ax = plt.subplots(figsize=(30, 10))
sns.set(style='white',font_scale=2)
org_bar=sns.barplot(x=org_df[org_df['count'] <1000].iloc[:,0],y=org_df[org_df['count'] <1000].iloc[:,1])
plt.xlabel('Organisation Name')
plt.ylabel('Mentions')
plt.show()
###Output
_____no_output_____
###Markdown
There are many organisations referenced in tweets, but the EPA (United States Environmental Protection Agency) is mentioned the most, followed by well-known organisations such as CNN (news), the UN (intergovernmental organization) and the DOE (Department of Energy).
Model Training and Testing
The preparation of the data for modelling requires splitting the features and labels. In this instance, only the sentiment and clean message columns are required from X_df.
**Train/Test Split**
X and y training features are split from X_df.
###Code
# Feature and label split
X=X_df['clean message']
y=X_df['sentiment']
###Output
_____no_output_____
###Markdown
25% of data allocated to testing, with random state set to 42
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
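# Note: the sentiment classes are imbalanced (class 1 dominates the dataset), so a
# stratified split preserves the class proportions in train and test; a possible variant:
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)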
###Output
_____no_output_____
###Markdown
**SVC Classification with TfidfVectorization**
A parameter search is performed using GridSearch. (This takes a while to run; the best parameters found are hard-coded into the pipeline below. Uncomment to run.)
###Code
#pipeline = Pipeline([('tfidf', TfidfVectorizer()),('clf', SVC())])
#parameters = {
# 'tfidf__max_df': (0.25, 0.5, 0.75),'tfidf__ngram_range': [(1, 1), (1, 2), (1, 3)],
# 'tfidf__max_features':(500,2500,5000),'clf__C':(0.1,1,10),'clf__gamma':(1,0.1,0.001)}
#svc = GridSearchCV(pipeline, parameters, cv=2, n_jobs=2, verbose=3)
#svc.fit(X_train, y_train)
#svc.best_params_
#svc_predictions = svc.predict(X_test)
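# A cheaper alternative when the full grid is too slow is a randomized search over the
# same parameter space (sketch, assuming sklearn's RandomizedSearchCV; like the grid
# search above it is left commented out):
# from sklearn.model_selection import RandomizedSearchCV
# svc_rs = RandomizedSearchCV(pipeline, parameters, n_iter=10, cv=2, n_jobs=2, verbose=3)
# svc_rs.fit(X_train, y_train)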
###Output
_____no_output_____
###Markdown
From the GridSearch, the ideal parameters for SVC classification combined with TfidfVectorization are C=10, gamma=1, max_df=0.25, max_features=5000, ngram_range=(1,1).
Fitting the model with the optimized parameters
###Code
#Pipeline
svc = Pipeline(
[('tfidf', TfidfVectorizer(analyzer='word', max_df=0.75,max_features=5000,ngram_range=(1,1)))
,('clf', SVC(C=10,gamma=1))])
# Train model
model=svc.fit(X_train, y_train)
# Form a prediction set
predictions = model.predict(X_test)
# Print Results
#Confusion matrix
confusion = 'Confusion Matrix'.center(100, '*')
print(confusion)
matrix=confusion_matrix(y_test,predictions)
print(confusion_matrix(y_test,predictions))
print('')
#Classification report
report='Classification Report'.center(100,'*')
print(report)
print('')
print(classification_report(y_test,predictions))
print('')
#Model Performance
performance='Performance Metrics'.center(100,'*')
print(performance)
print('The model accuracy is :',accuracy_score(y_test,predictions))
print('The model recall is :',recall_score(y_test, predictions,average='weighted'))
F1 = 2 * (precision_score(y_test,predictions,average='weighted') * recall_score(y_test, predictions,average='weighted')) / (precision_score(y_test,predictions,average='weighted') + recall_score(y_test, predictions,average='weighted'))
print('The model F1score is : ',F1)
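# Note: sklearn's f1_score(y_test, predictions, average='weighted') computes the weighted
# average of per-class F1 scores, which is the more standard weighted F1; it can differ
# slightly from the harmonic mean of weighted precision and weighted recall computed above.
# A possible drop-in (assuming sklearn.metrics.f1_score is imported):
# print('The model F1score is : ', f1_score(y_test, predictions, average='weighted'))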
###Output
******************************************Confusion Matrix******************************************
[[ 105 40 154 12]
[ 23 184 311 42]
[ 19 77 1570 129]
[ 2 7 188 695]]
***************************************Classification Report****************************************
precision recall f1-score support
-1 0.70 0.34 0.46 311
0 0.60 0.33 0.42 560
1 0.71 0.87 0.78 1795
2 0.79 0.78 0.79 892
accuracy 0.72 3558
macro avg 0.70 0.58 0.61 3558
weighted avg 0.71 0.72 0.70 3558
****************************************Performance Metrics*****************************************
The model accuracy is : 0.7178189994378864
The model recall is : 0.7178189994378864
The model F1score is : 0.7140773273071888
###Markdown
**Save model as pickle**
###Code
import pickle
model_save_path = "SVC.pkl"
with open(model_save_path,'wb') as file:
pickle.dump(svc,file)
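# An alternative for persisting sklearn objects: joblib is commonly used for models that
# contain large numpy arrays and may be more efficient than plain pickle, e.g.:
# import joblib
# joblib.dump(svc, "SVC.joblib")   # load later with joblib.load("SVC.joblib")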
###Output
_____no_output_____
###Markdown
Model Prediction on test.csv
Perform data cleaning as done before on the test dataframe from test.csv
###Code
# Import test.csv
test=pd.read_csv('test.csv')
# Text cleaning
test['message']=test['message'].apply(clean) #clean data
test['message']=test['message'].apply(rm_punc) #remove punctuation
test['message']=test['message'].apply(at) #replace @
test['message']=test['message'].apply(hashtag) #remove #
# Tokenize messages
tokeniser = TreebankWordTokenizer()
test['tokens'] = test['message'].apply(tokeniser.tokenize)
# Lemmatize tokens column
test['lemma'] = test['tokens'].apply(lemma)
# Generate clean message column
test['clean message'] = test['lemma'].apply(lambda i: ' '.join(i))
test.head(5)
#Drop columns not needed for predictions
drop_list=['message','tokens','lemma']
test.drop(drop_list,axis=1,inplace=True)
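# The cleaning steps above duplicate the training-time preprocessing. Wrapping them in a
# single helper keeps train and test preprocessing in sync; a minimal sketch, assuming
# clean, rm_punc, at, hashtag, lemma and tokeniser are the objects defined earlier:
# def preprocess(df):
#     df['message'] = df['message'].apply(clean).apply(rm_punc).apply(at).apply(hashtag)
#     df['tokens'] = df['message'].apply(tokeniser.tokenize)
#     df['lemma'] = df['tokens'].apply(lemma)
#     df['clean message'] = df['lemma'].apply(' '.join)
#     return df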
###Output
_____no_output_____
###Markdown
Final Dataframe format for kaggle prediction
###Code
test.head(5)
###Output
_____no_output_____
###Markdown
Load model pickle to make predictions
###Code
model_load_path = "SVC.pkl"
with open(model_load_path,'rb') as file:
pickle_rick = pickle.load(file)
# Perfom predictions on test set
kaggle_predictions = pickle_rick.predict(test['clean message'])
kaggle_predictions = pd.DataFrame(kaggle_predictions)
kaggle_predictions.rename(columns={0: "sentiment"}, inplace=True)
kaggle_predictions["tweetid"] = test['tweetid']
cols = ['tweetid','sentiment']
kaggle_predictions = kaggle_predictions[cols]
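# An equivalent, more compact construction of the submission frame (assumption: the
# submission expects the columns tweetid, sentiment in that order):
# kaggle_predictions = pd.DataFrame({'tweetid': test['tweetid'],
#                                    'sentiment': pickle_rick.predict(test['clean message'])})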
###Output
_____no_output_____
###Markdown
Write predictions to upload_kaggle_pred.csv
###Code
kaggle_predictions.to_csv(path_or_buf='/content/upload_kaggle_pred.csv',index=False)
###Output
_____no_output_____