path
stringlengths 7
265
| concatenated_notebook
stringlengths 46
17M
|
---|---|
machine-learning-intro/exercise/model-validation.ipynb | ###Markdown
**[Introduction to Machine Learning Home Page](https://www.kaggle.com/learn/intro-to-machine-learning)**--- RecapYou've built a model. In this exercise you will test how good your model is.Run the cell below to set up your coding environment where the previous exercise left off.
###Code
# load data
import pandas as pd
iowa_file_path = '../input/home-data-for-ml-course/train.csv'
home_data = pd.read_csv(iowa_file_path)
# select the prediction target
y = home_data['SalePrice']
# choose features
feature_columns = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']
X = home_data[feature_columns]
# specify model
from sklearn.tree import DecisionTreeRegressor
iowa_model = DecisionTreeRegressor()
# fit model
iowa_model.fit(X, y)
print("First in-sample predictions:", iowa_model.predict(X.head()))
print("Actual target values for those homes:", y.head().tolist())
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.machine_learning.ex4 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
Exercises Step 1: Split Your DataUse the `train_test_split` function to split up your data.Give it the argument `random_state=1` so the `check` functions know what to expect when verifying your code.Recall, your features are loaded in the DataFrame **X** and your target is loaded in **y**.
###Code
# import the train_test_split function and uncomment
from sklearn.model_selection import train_test_split
# split data to train and validation sets
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)
# Check your answer
step_1.check()
# The lines below will show you a hint or the solution.
# step_1.hint()
# step_1.solution()
###Output
_____no_output_____
###Markdown
Step 2: Specify and Fit the ModelCreate a `DecisionTreeRegressor` model and fit it to the relevant data.Set `random_state` to 1 again when creating the model.
###Code
# You imported DecisionTreeRegressor in your last exercise
# and that code has been copied to the setup code above. So, no need to
# import it again
# specify the model
iowa_model = DecisionTreeRegressor(random_state=1)
# fit iowa_model with the training data.
iowa_model.fit(train_X, train_y)
# Check your answer
step_2.check()
# step_2.hint()
# step_2.solution()
###Output
_____no_output_____
###Markdown
Step 3: Make Predictions with Validation data
###Code
# Predict with all validation observations
val_predictions = iowa_model.predict(val_X)
# Check your answer
step_3.check()
# step_3.hint()
# step_3.solution()
###Output
_____no_output_____
###Markdown
Inspect your predictions and actual values from validation data.
###Code
# print the top few validation predictions
print('Prediction:', val_predictions[:5])
# print the top few actual prices from validation data
print('Actual:', val_y.head().tolist())
###Output
_____no_output_____
###Markdown
What do you notice that is different from what you saw with in-sample predictions (which are printed after the top code cell in this page).Do you remember why validation predictions differ from in-sample (or training) predictions? This is an important idea from the last lesson. Step 4: Calculate the Mean Absolute Error in Validation Data
###Code
# mae in validation data
from sklearn.metrics import mean_absolute_error
val_mae = mean_absolute_error(val_y, val_predictions)
# uncomment following line to see the validation_mae
print(val_mae)
# Check your answer
step_4.check()
# step_4.hint()
# step_4.solution()
###Output
_____no_output_____ |
pandas/pandas_IO.ipynb | ###Markdown
Reading and savinghttps://youtu.be/9Z7wvippeko?list=PLQVvvaa0QuDc-3szzjeP6N6b0aDrrKyL-- data https://www.quandl.com/data/ZILL/Z77006_MLP-Zillow-Home-Value-Index-ZIP-Median-List-Price-Houston- mans http://pandas.pydata.org/pandas-docs/stable/io.html
###Code
import pandas as pd;
df = pd.read_csv("./data/ZILL-Z77006_MLP.csv");
print(df.head());
df.set_index("Date", inplace=True);
print(df.head());
df.to_csv("./data/z77006_mlp_new.csv");
df = pd.read_csv("./data/z77006_mlp_new.csv", index_col=0);
df.columns = ["Austin_HPI"];
print(df.head());
df.to_csv("./data/z77006_mlp_no_header.csv", header=False);
df = pd.read_csv("./data/z77006_mlp_no_header.csv", names=["Date", "Austin_HPI"])
print(df.head());
df = pd.read_csv("./data/z77006_mlp_no_header.csv", names=["Date", "Austin_HPI"], index_col=0)
print(df.head());
df.to_html("./data/z77006_mlp_no_header.html"); #to html <table>
df.rename(columns={"Austin_HPI":"77006_HPI"}, inplace=True); # Have you thinked that this should be a function?
print(df.head());
###Output
77006_HPI
Date
2015-02-28 620000.0
2015-01-31 620000.0
2014-12-31 629000.0
2014-11-30 632500.0
2014-10-31 619900.0
|
notebooks/connectfour-withMinmaxStatistics.ipynb | ###Markdown
Tournament Minmax vs Random
###Code
his = plt.hist(outcomes,bins=3)
offset = -.18
plt.title("1st Tournament, draws and wins, depth="+str(difficulty))
#plt.xlabel("left: o wins, middle: draw, right: x wins")
plt.ylabel("# Games")
axes = plt.gca()
axes.set_ylim([1,len(outcomes)+1]) # y axis should include all 2000 games
#axes.set_xlim([0,0])
axes.set_xticks(his[1][1:]+offset)
axes.set_xticklabels( ('Random', 'Minmax', 'draw') )
outcomes.count(0), outcomes.count(1), outcomes.count(2)
his = plt.hist(mvcntrlist,bins=max(mvcntrlist))
plt.title("1st Tournament, # of moves, depth="+str(difficulty))
#plt.xlabel("left: o wins, middle: draw, right: x wins")
plt.ylabel("# Games")
plt.xlabel('# of moves')
axes = plt.gca()
his = plt.hist(nodeslist,bins=max(nodeslist))
plt.title("1st Tournament, # of nodes , depth="+str(difficulty))
#plt.xlabel("left: o wins, middle: draw, right: x wins")
plt.ylabel("# Games")
plt.xlabel('# of nodes')
axes = plt.gca()
###Output
_____no_output_____
###Markdown
cProfilewith command: python -m cProfile -s cumtime connectfour-withMinmax.py Depth 2: 11008374 function calls (10559783 primitive calls) in 14.654 seconds10845027 function calls (10398214 primitive calls) in 12.555 seconds8587527 function calls (8188635 primitive calls) in 7.127 seconds Depth 3:3336653 function calls (3330229 primitive calls) in 4.693 seconds10010685 function calls (9583101 primitive calls) in 9.372 seconds Depth 4:19532712 function calls (18898637 primitive calls) in 24.994 seconds Depth 569043885 function calls (67314936 primitive calls) in 118.056 seconds Time the Building of a minmax treewith a timer method
###Code
import time
def timing(f):
def wrap(*args):
time1 = time.time()
ret = f(*args)
time2 = time.time()
print '%s function took %0.3f ms' % (f.func_name, (time2-time1)*1000.0)
return ret
return wrap
testgame=np.array([[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0],
[0,0,0,0,0,0,0]])
m = Minmax()
@timing
def do_work(dif):
return m.minmax(testgame, 1, dif, -np.inf, np.inf)
for i in range(1,7):
do_work(i)
t=[0,1.235,34.304,94.698,288.690,2281.134,3482.528]
plt.plot(t)
plt.title('Time per Depth for building Minmax tree')
plt.xlabel('depth')
plt.ylabel('time in ms')
plt.xlim(1,7)
###Output
_____no_output_____ |
Module - Dates and Times.ipynb | ###Markdown
UNCLASSIFIEDTranscribed from FOIA Doc ID: 6689695https://archive.org/details/comp3321 (U) Introduction (U) There are many great built-in tools for date and time manipulation in Python. They are spread over a few different modules, which is a little annoying.(U) That being said, the **datetime** module is very complete and the most useful, so we will concentrate on that one that most. The other one that has some nice methods is the **time** module. (U) `time` Module(U) The **time** module is handy for its time accessor functions and the `sleep` command. It is most useful when you want quick access to the time but don't need to _manipulate_ it.
###Code
import time
time.time() # "epoch" time (seconds since Jan. 1st, 1970)
time.gmtime()
time.localtime()
time.gmtime() == time.localtime()
time.asctime() # will take an optional timestamp
time.strftime('%c') # many formatting options here
time.strptime('Tue Nov 19 07:04:38 2013')
###Output
_____no_output_____
###Markdown
(U) The last method you might use from the time module is `sleep`. (Doesn't this seem out of place?)
###Code
time.sleep(5) # argument is a number of seconds
print("I'm awake!")
###Output
_____no_output_____
###Markdown
(U) `datetime` Module (U) The **datetime** module is a more robust module for dates and times that is object-oriented and has a notion of datetime arithmetic.(U) There are 5 basic types that comes with the datetime module. These are: 1. `date`: a type to store the date (year, month, day) using the current Gregorian calendar, 2. `time`: a type to store the time (hour, minute, second, microsecond, tzinfo-all idealized, with no notion of leap seconds), 3. `datetime`: a type to store both date and time together, 4. `timedelta`: a type to store the duration or difference between two date, time, or datetime instances, and 5. `tzinfo`: a base class for storing and using time zone information (we will not look at this). (U) `date` type
###Code
import datetime
datetime.date(2013, 11, 19)
datetime.date.today()
now_epoch = time.time()
print(now_epoch)
datetime.date.fromtimestamp(now_epoch)
today = datetime.date.today()
today.day
today.month
today.timetuple()
today.weekday() # 0 ordered starting from Monday
today.isoweekday() # 1 ordered starting from Monday
print(today)
today.strftime('%m/%d/%Y')
###Output
_____no_output_____
###Markdown
(U) `time` type
###Code
t = datetime.time(8, 30, 50, 0)
t.hour = 9
t.hour
t.minute
t.second
print(t)
print(t.replace(hour=12))
print(t)
###Output
_____no_output_____
###Markdown
(U) `datetime` type
###Code
dt = datetime.datetime(2013, 11, 19, 8, 30, 50, 400)
print(dt)
datetime.datetime.fromtimestamp(now_epoch)
now = datetime.datetime.now()
###Output
_____no_output_____
###Markdown
(U) We can break apart the `datetime` object into `date` and `time` objects. We can also combine `date` and `time` objects into one `datetime` object.
###Code
now.date()
print(now.date())
print(now.time())
day = datetime.date(2011, 12, 30)
t = datetime.time(2, 30, 38)
day
t
dt = datetime.datetime.combine(day,t)
print(dt)
###Output
_____no_output_____
###Markdown
(U) timedelta type (U) The best part of the **datetime** module is date arithmetic. What do you get when you subtract two dates?
###Code
day1 = datetime.datetime(2013, 10, 30)
day2 = datetime.datetime(2013, 9, 20)
day1 - day2
print(day1 - day2)
print(day2 - day1)
print(day1 + day2) # Of course not...that doesn't make sense :)
###Output
_____no_output_____
###Markdown
(U) The `timedelta` type is a measure of duration between two time events. So, if we subtract two `datetime` objects (or `date` or `time`) as we did above, we get a `timedelta` object. The properties of a `timedelta` object are (`days`, `seconds`, `microseconds`, `milliseconds`, `minutes`, `hours`, `weeks`). They are all optional and set to 0 by default. A `timedelta` object can take these values as arguments but converts and normalizes the data into days, seconds, and microseconds.
###Code
from datetime import timedelta
day = timedelta(days=1)
day
now = datetime.datetime.now()
now + day
now - day
now + 300 * day
now - 175 * day
year = timedelta(days=365.25)
another_year = timedelta(weeks=40, days=84, hours=23, minutes=50, seconds=600)
year == another_year
year.total_seconds()
ten_years = 10 * year
ten_years
###Output
_____no_output_____
###Markdown
(U) Conversions (U) It's easy to get confused when attempting to convert back and forth between strings, numbers, `time`s, and `datetime`s. When you need to do it, the best course of action is probably to open up an interactive session, fiddle around until you have what you need, then capture that in a well named function. Still, some pointers may be helpful. (U) Objects of types `time` and `datetime` provide `strptime` and `strftime` methods for reading times and dates from strings (a.k.a. ***p***arsing) and converting to strings (a.k.a. ***f***ormatting), respectively. These methods employ a [syntax](https://docs.python.org/3/library/datetime.htmlstrftime-strptime-behavior) that is shared across many programming languages. Missing pages!I've done my best to fill in the blanks here. The source material above this point was starting to discuss how to convert dates and times to various formats using strftime and how to read those formats back into datetime, time, and date objects using strptime.The source material picks back up after the gap with some examples of converting between timezones. I'm not sure what else might have been missing between those two subjects. `strftime()` and `strptime()` Format CodesThe following is a list of all the format codes accepted by the `strftime()` and `strptime()` methods. This table is found in the python documentation [here](https://docs.python.org/3/library/datetime.htmlstrftime-and-strptime-format-codes).DirectiveMeaningExampleNotes%aWeekday as locale’sabbreviated name.Sun, Mon, …, Sat(en_US);So, Mo, …, Sa(de_DE)(1)%AWeekday as locale’s full name.Sunday, Monday, …,Saturday (en_US);Sonntag, Montag, …,Samstag (de_DE)(1)%wWeekday as a decimal number,where 0 is Sunday and 6 isSaturday.0, 1, …, 6%dDay of the month as azero-padded decimal number.01, 02, …, 31(9)%bMonth as locale’s abbreviatedname.Jan, Feb, …, Dec(en_US);Jan, Feb, …, Dez(de_DE)(1)%BMonth as locale’s full name.January, February,…, December (en_US);Januar, Februar, …,Dezember (de_DE)(1)%mMonth as a zero-paddeddecimal number.01, 02, …, 12(9)%yYear without century as azero-padded decimal number.00, 01, …, 99(9)%YYear with century as a decimalnumber.0001, 0002, …, 2013,2014, …, 9998, 9999(2)%HHour (24-hour clock) as azero-padded decimal number.00, 01, …, 23(9)%IHour (12-hour clock) as azero-padded decimal number.01, 02, …, 12(9)%pLocale’s equivalent of eitherAM or PM.AM, PM (en_US);am, pm (de_DE)(1),(3)%MMinute as a zero-paddeddecimal number.00, 01, …, 59(9)%SSecond as a zero-paddeddecimal number.00, 01, …, 59(4),(9)%fMicrosecond as a decimalnumber, zero-padded on theleft.000000, 000001, …,999999(5)%zUTC offset in the form±HHMM[SS[.ffffff]] (emptystring if the object isnaive).(empty), +0000,-0400, +1030,+063415,-030712.345216(6)%ZTime zone name (empty stringif the object is naive).(empty), UTC, EST, CST%jDay of the year as azero-padded decimal number.001, 002, …, 366(9)%UWeek number of the year(Sunday as the first day ofthe week) as a zero paddeddecimal number. All days in anew year preceding the firstSunday are considered to be inweek 0.00, 01, …, 53(7),(9)%WWeek number of the year(Monday as the first day ofthe week) as a decimal number.All days in a new yearpreceding the first Mondayare considered to be inweek 0.00, 01, …, 53(7),(9)%cLocale’s appropriate date andtime representation.Tue Aug 16 21:30:001988 (en_US);Di 16 Aug 21:30:001988 (de_DE)(1)%xLocale’s appropriate daterepresentation.08/16/88 (None);08/16/1988 (en_US);16.08.1988 (de_DE)(1)%XLocale’s appropriate timerepresentation.21:30:00 (en_US);21:30:00 (de_DE)(1)%%A literal '%' character.%
###Code
# Date and time in a format fairly close to ISO8601
now.strftime("%Y-%m-%dT%H:%M:%S.%f")
# The same result can be obtained by converting a datetime object to a string.
str(now)
# In this example we insert a '-' between the '%' and the 'I'
# This tells python not to display the leading '0'
# On windows you might need a '#' there instead of a '-'
now.strftime("%-I:%M %p %A %B %d, %Y")
###Output
_____no_output_____
###Markdown
Parsing dates and times from stringsThis can be hit or miss if your source data isn't formatted consistently, but this can be very helpful.
###Code
datetime.datetime.strptime("1981-12-25 22:31:07.337215", "%Y-%m-%d %H:%M:%S.%f")
datetime.datetime.strptime('8:54 PM Saturday April 18, 2020', "%I:%M %p %A %B %d, %Y")
###Output
_____no_output_____
###Markdown
TimezonesSo far we've been using `datetime` and `time` objects as naive times, meaning they are not timezone aware. Without timezone information attached it can be difficult to tell whether the time you're working with is local to you or if it's the time in some other part of the world. You also have no way of knowing if 3:00 AM in London is the same as your current local time of 9:00 PM.We can use the **pytz** module to localize times and add timezone information to `datetime` and `time` objects. With timezone information attached python can tell that the current UTC time is equal to the current MDT time.
###Code
import pytz # Import the pytz module
###Output
_____no_output_____
###Markdown
First let's get the current UTC time.
###Code
utc_now = datetime.datetime.utcnow()
print(utc_now.tzinfo)
print(utc_now)
###Output
_____no_output_____
###Markdown
This datetime object is **naive** which means it doesn't have timezone information attached. Let's fix that.
###Code
utc_now = pytz.utc.localize(utc_now)
print(utc_now.tzinfo)
print(utc_now)
###Output
_____no_output_____
###Markdown
Now there's an offset of +00:00 attached to the datetime stored in utc_now so it's time zone aware.It's a good idea to always work with times using utc and convert to other timezones as needed.Let's create a timezone object that can calculate Mountain Daylight Time or Mountain Standard Time depending on the time of year.
###Code
utah_tz = pytz.timezone('MST7MDT')
###Output
_____no_output_____
###Markdown
Now let's localize the current UTC time to get the current Salt Lake City time.
###Code
utah_now = utc_now.astimezone(utah_tz)
print(utah_now.tzinfo)
print(utah_now)
###Output
_____no_output_____
###Markdown
We can compare the UTC time and the Utah time and see that they're equal even though they have different times because they are both timezone aware.
###Code
utah_now == utc_now
###Output
_____no_output_____
###Markdown
We can also automatically get the correct offset for dates that do and do not fall within Daylight Savings Time
###Code
jan1 = datetime.datetime(2020, 1, 1, 12, 0).astimezone(utah_tz)
print(jan1.tzinfo)
print(jan1)
jun1 = datetime.datetime(2020, 6, 1, 12, 0).astimezone(utah_tz)
print(jun1.tzinfo)
print(jun1)
###Output
_____no_output_____
###Markdown
Now that we have a Utah timezone object we can use that to localize the current Utah time to other timezones, but it's probably a better idea to localize UTC time to whatever timezone you need. end missing material
###Code
kabul_tz = pytz.timezone('Asia/Kabul') # Kabul is an interesting timezone because it's 30 minutes out of sync
kabul_dt = utah_tz.localize(datetime.datetime.now()).astimezone(kabul_tz) # Probably unnecessary
print(kabul_dt) # We can now print this time
print(utah_tz.normalize(kabul_dt)) # and we can convert it directly to other timezones by normalizing it
###Output
_____no_output_____
###Markdown
(U) To get a list of all timezones:
###Code
pytz.all_timezones
###Output
_____no_output_____
###Markdown
(U) The arrow package (U) The arrow package is a third party package useful for manipulating dates and times in a non-naive way (in this case, meaning that it deals with multiple timezones very seamlessly). Examples below are based on examples from "20 Python Libraries You Aren't Using (But Should)" on Safari.
###Code
import ipydeps
ipydeps.pip('arrow')
import arrow
t0 = arrow.now()
print (t0)
t1 = arrow.utcnow()
print(t1)
difference = (t0 - t1).total_seconds()
print('Total difference: %.2f seconds' % difference)
t0 = arrow.now()
t0
t0.date()
t0.time()
t0.timestamp
t0.year
t0.month
t0.day
t0.datetime
t1.datetime
t0 = arrow.now()
t0.humanize()
t0.humanize()
t0 = t0.replace(hour=3,minute=10)
t0.humanize()
###Output
_____no_output_____
###Markdown
(U) The parsedate module (U) The parsedate package is a third party package that does a very good job of parsing dates and times from "messy" input formats. It also does a good job of calculating relative times from human-style input. Examples below are from "20 Python Libraries You Aren't Using (But Should)" on Safari.
###Code
ipydeps.pip('parsedatetime')
import parsedatetime as pdt
cal = pdt.Calendar()
examples = [
"2016-07-16",
"2016/07/16",
"2016-7-16",
"2016/7/16",
"07-16-2016",
"7-16-2016",
"7-16-16",
"7/16/16",
]
# print the header
print('{:30s}{:>30s}'.format('Input', 'Result'))
print('=' * 60)
# Loop through the examples list and show that parseDT successfully
# parses out the date/time based on the messy values
for e in examples:
dt, result = cal.parseDT(e)
print('{:<30s}{:>30}'.format('"' + e + '"', str(dt.date())))
help(datetime.datetime.ctime)
examples = [
"19 November 1975",
"19 November 75",
"19 Nov 75",
"tomorrow",
"yesterday",
"10 minutes from now",
"the first of January, 2001",
"3 days ago",
"in four days' time",
"two weeks from now",
"three months ago",
"2 weeks and 3 days in the future",
]
# print the time right now for reference
print('Now: {}'.format(datetime.datetime.now().ctime()), end='\n\n')
# print the header
print('{:40s}{:>30s}'.format('Input', 'Result'))
print('=' * 70)
# Loop through the examples list to show how parseDT can
# successfully determine the date/time based on both messy
# input and messy relative time offset inputs
for e in examples:
dt, result = cal.parseDT(e)
print('{:<40s}{:>30}'.format('"' + e + '"', str(dt)))
###Output
_____no_output_____
###Markdown
Added material `dateutil` moduleGiven the exercise below regarding calculating the number of days between Easter and Christmas in a given year, I thought I would throw in a mention of the **dateutil** module which happens to include a helpful function for calculating the date that Easter falls on. This module may have been mentioned in the missing material since they included an exercise that it would be relevant for. Of all of the holidays which don't fall on regular dates, Easter is probably the hardest to calculate because it follows some very complex rules.`dateutil` also provides several other useful tools. You can read the documentation for the module [here](https://dateutil.readthedocs.io/en/stable/). Easter
###Code
from dateutil.easter import easter
easter(1999)
easter(2020, method=2)
###Output
_____no_output_____
###Markdown
There's an optional `method` parameter that lets you see the dates for easter by different calculation methods. The default is option `3` EASTER_WESTERN but you can also choose `2` for EASTER_ORTHODOX (Russia), or `1` for EASTER_JULIAN (Pre-Gregorian Calendar).If you're curious about how the date for Easter in a given year is calculated, [here](https://www.tondering.dk/claus/cal/easter.php) is a page that discusses it. _Warning: Crucifixion imagery prominently displayed there._ Long story short, calculating Easter is a pain in the butt and someone already did it for us. Parser
###Code
from dateutil.parser import parse
###Output
_____no_output_____
###Markdown
`parse` will attempt to figure out a valid date from an input string. It generally does a very good job at applying some basic rules to determine the input format. For example, the American date format of %m/%d/%Y is obvious in a date like 05/30/2020. Other dates are more ambiguous. 12/05/2020 Might be December 5th 2020 or it might be May 12th if the date is in the European format. 31/12/2020 must be a European date because there can't be more than 12 months so `parse` can automatically figure that out. Generally though `parse` will default to preferring the American format for ambiguous dates unless you tell it otherwise using the optional kword arguments `dayfirst` or `yearfirst` to override that behavior for ambiguous dates.There is also a `fuzzy` kwarg that will allow `parse` to accept some plain English strings and attempt to parse them for dates.
###Code
parse("12/11/10") # Ambiguous date so American Month/Day/Year is preferred.
parse("12/11/10", yearfirst=True) # Specify that year comes first. Remaining columns default to month then day
parse("12/11/10", dayfirst=True) # Changes default to European style dates
parse("12/11/10", yearfirst=True, dayfirst=True) # Force Year/Day/Month... No one uses this format.
# If dayfirst isn't specified and the first column exceeds 12 it will
# automatically switch to European format.
parse("31/12/69")
# This is also the highest 2 digit year that will be interpretted as 2000s.
# 70-99 will be interpretted as 1900s. It's a good idea to use 4 digit years,
# but we wouldn't be using this function if we had control of the format.
# the fuzzy kwarg will look through longer strings for things that look like dates.
parse("Frank was born at 5:54PM on the 17th of February in 1992", fuzzy=True)
###Output
_____no_output_____
###Markdown
Optional Datetime Exercises Use the functions and methods you learned about in the Dates and Times Lesson to answer the following questions: (U) How long before Christmas? 247 days
###Code
christmas = datetime.date(2020, 12, 25)
today = datetime.date.today()
christmas - today
###Output
_____no_output_____
###Markdown
(U) How many seconds since you were born? 1211074028.943365
###Code
birthday = datetime.datetime(1981, 12, 6, 8, 57, 30)
now = datetime.datetime.now()
duration = now - birthday
total_seconds = duration.total_seconds()
print(total_seconds)
###Output
_____no_output_____
###Markdown
(U) What is the average number of days between Easter and Christmas for the years 2000 - 2999? days=260, seconds=41904
###Code
from dateutil.easter import easter
days = []
for year in range(2000, 3000):
estr = easter(year)
xmas = datetime.date(year, 12, 25)
duration = xmas - estr
days.append(duration)
total = datetime.timedelta(days=0)
for dur in days:
total += dur
total / 1000
###Output
_____no_output_____
###Markdown
(U) What day of the week does Christmas fall on this year? Friday
###Code
christmas.strftime('%A')
###Output
_____no_output_____
###Markdown
(U) You get an email with a POSIX timestamp of 1435074325. The email is from the leader of a Zendian extremist group and says that there will be an attack on the Zendian capitol in 14 hours. In Zendian local time, when will the attack occur? (Assume Zendia is in the same time zone as Kabul)2015-06-23 20:15:25 Zendian local time
###Code
import pytz
zendia_tz = pytz.timezone('Asia/Kabul')
datetime.datetime.fromtimestamp(1435074325, zendia_tz)
help(datetime.datetime.fromtimestamp)
###Output
_____no_output_____ |
notebooks/sentiment-pipeline-sklearn-4.ipynb | ###Markdown
How to build a social media sentiment analysis pipeline with scikit-learn*This is Part 4 of 5 in a series on building a sentiment analysis pipeline using scikit-learn. You can find Part 5 [here](./sentiment-pipeline-sklearn-5.ipynb), and the introduction [here](./sentiment-pipeline-sklearn-1.ipynb).**Jump to:* * *[**Part 1 - Introduction and requirements**](./sentiment-pipeline-sklearn-1.ipynb)** *[**Part 2 - Building a basic pipeline**](./sentiment-pipeline-sklearn-2.ipynb)** *[**Part 3 - Adding a custom function to a pipeline**](./sentiment-pipeline-sklearn-3.ipynb)** *[**Part 5 - Hyperparameter tuning in pipelines with GridSearchCV**](./sentiment-pipeline-sklearn-5.ipynb)* Part 4 - Adding a custom feature to a pipeline with FeatureUnionNow that we know how to add a function to a pipeline, let's learn how to add the *output* of a function as an additional feature for our classifier. Setup
###Code
%%time
from fetch_twitter_data import fetch_the_data
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
df = fetch_the_data()
X, y = df.text, df.sentiment
X_train, X_test, y_train, y_test = train_test_split(X, y)
tokenizer = nltk.casual.TweetTokenizer(preserve_case=False, reduce_len=True)
count_vect = CountVectorizer(tokenizer=tokenizer.tokenize)
classifier = LogisticRegression()
###Output
got 92 posts from page 1...
got 88 posts from page 2...
got 88 posts from page 3...
got 91 posts from page 4...
got 87 posts from page 5...
got 89 posts from page 6...
got 95 posts from page 7...
got 93 posts from page 8...
got 86 posts from page 9...
got 90 posts from page 10...
got all pages - 899 posts in total
CPU times: user 710 ms, sys: 193 ms, total: 903 ms
Wall time: 6.24 s
###Markdown
The functionFor this example, we'll append the length of the post to the output of the count vectorizer, the thinking being that longer posts could be more likely to be polarized (such as someone going on a rant).
###Code
def get_tweet_length(text):
return len(text)
###Output
_____no_output_____
###Markdown
Adding new featuresscikit-learn has a nice [FeatureUnion class](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.FeatureUnion.html) that enables you to essentially concatenate more feature columns to the output of the count vectorizer. This is useful for adding "meta" features.It's pretty silly, but to add a feature in a `FeatureUnion`, it has to come back as a numpy array of `dim(rows, num_cols)`. For our purposes in this example, we're only bringing back a single column, so we have to reshape the output to `dim(rows, 1)`. Gotta love it. So first, we'll define a method to reshape the output of a function into something acceptable for `FeatureUnion`. After that, we'll build our function that will wrap a function to be easily pipeline-able.
###Code
import numpy as np
def reshape_a_feature_column(series):
return np.reshape(np.asarray(series), (len(series), 1))
def pipelinize_feature(function, active=True):
def list_comprehend_a_function(list_or_series, active=True):
if active:
processed = [function(i) for i in list_or_series]
processed = reshape_a_feature_column(processed)
return processed
# This is incredibly stupid and hacky, but we need it to do a grid search with activation/deactivation.
# If a feature is deactivated, we're going to just return a column of zeroes.
# Zeroes shouldn't affect the regression, but other values may.
# If you really want brownie points, consider pulling out that feature column later in the pipeline.
else:
return reshape_a_feature_column(np.zeros(len(list_or_series)))
###Output
_____no_output_____
###Markdown
Adding the function and testing
###Code
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn_helpers import pipelinize, genericize_mentions, train_test_and_evaluate
sentiment_pipeline = Pipeline([
('genericize_mentions', pipelinize(genericize_mentions, active=True)),
('features', FeatureUnion([
('vectorizer', count_vect),
('post_length', pipelinize_feature(get_tweet_length, active=True))
])),
('classifier', classifier)
])
sentiment_pipeline, confusion_matrix = train_test_and_evaluate(sentiment_pipeline, X_train, y_train, X_test, y_test)
###Output
null accuracy: 38.67%
accuracy score: 62.22%
model is 23.56% more accurate than null accuracy
---------------------------------------------------------------------------
Confusion Matrix
Predicted negative neutral positive __all__
Actual
negative 28 19 17 64
neutral 10 44 20 74
positive 5 14 68 87
__all__ 43 77 105 225
---------------------------------------------------------------------------
Classification Report
precision recall F1_score support
Classes
negative 0.651163 0.4375 0.523364 64
neutral 0.571429 0.594595 0.582781 74
positive 0.647619 0.781609 0.708333 87
__avg / total__ 0.623569 0.622222 0.614427 225
|
REINFORCE_exercise.ipynb | ###Markdown
Policy-Gradients with the REINFORCE algorithm**Background**:In this practical we will train an agent using the REINFORCE algorithm to learn to balance a pole in the OpenAI gym [Cartpole environment](https://gym.openai.com/envs/CartPole-v1).**Learning objectives**:* Understand the policy-gradient approach to directly training a parameterised policy to maximise expected future rewards.* Understand how the policy-gradient theorem allows us to improve the policy using on-policy state = env.reset().**What is expected of you**: * Go through the explanation, keeping the above learning objectives in mind. * Fill in the missing code ("IMPLEMENT-ME") and train a model to solve the Cartpole-v1 environment in OpenAI gym (you solve it when reward=500). A Simple Policy-Gradient Cartpole Example IntroductionWe have seen in your course that there are many different approaches to training RL agents. In this practical we will take a look at REINFORCE - a simple policy-based method. REINFORCE (and policy-based methods in general) directly optimise a parametrised policy in order to maximise future rewards.We will try to learn a policy $\pi_\theta(a | s)$ which outputs a distribution over the possible actions $a$, given the current state $s$ of the environment. The goal is find a set of parameters $\theta$ to maximise the expected discounted return:\begin{align}J(\theta) = \mathbb{E}_{\tau \sim p_\theta} \left[\sum_{t=0}^T \gamma^t r(s_t, a_t)\right],\end{align}where $\tau$ is a trajectory sampled from $p_\theta$. The **policy-gradient** theorem gives us the derivative of this objective function:\begin{align} \nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim p_\theta} \left[\left(\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t|s_t)\right) \left(\sum_{t=0}^T \gamma^t r(s_t, a_t) \right) \right].\end{align}**NOTE**: * We have a policy $\pi_\theta(a|s)$ which tells the agent which action $a$ to take, given the state $s$, and it is parameterised in terms of parameters $\theta$.* Our goal is to maximise $J(\theta)$ by **choosing actions from this policy** that lead to high future rewards.* We'll use gradient-based optimisation to update the policy parameters $\theta$. We therefore want the gradient of our objective wrt the policy parameters.* We use the policy-gradient theorem to find a expression for the gradient. This is an expectation over trajectories from our policy and the environment.* Since we can now sample trajectories $(s_0, a_0, R_0, s_1, a_1, R_1, \ldots)$ using our policy $\pi_\theta$, we can approximate this gradient using **[Monte-Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_integration)** methods.This algorithm is called **Monte-Carlo REINFORCE**, and is one type of policy-gradient algorithm. Let's use this to solve the Cartpole environment!**Monte-Carlo REINFORCE**:for each episode:1. sample a trajectory $\tau$ using the policy $\pi_\theta$.2. compute $\nabla_\theta J(\theta) \approx \left(\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t|s_t)\right) \left(\sum_{t=0}^T \gamma^t r(s_t, a_t) \right)$.3. update policy parameters $\theta \leftarrow \theta + \alpha \nabla_\theta J(\theta)$
###Code
# import various packages
from collections import deque
import numpy as np
import matplotlib.pyplot as plt
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
# use gpu if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# configure matplotlib
%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 10.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
The EnvironmentCartpole is a standard benchmark in reinforcement learning and is a good sandbox for trying things out. The goal is to balance a pendulum on top of a moving cart. We have 2 actions - either push the cart to the left or push to the right. The state space consists of the cart's position and velocity and the pendulum's angle and angular velocity. Let's create the environment and take a look at the state and action spaces:
###Code
env = gym.make('CartPole-v1')
print('action space:', env.action_space)
print('observation space:', env.observation_space)
###Output
action space: Discrete(2)
observation space: Box(4,)
###Markdown
Here we can see that there are 2 discrete actions and a continuous state space. Taking a few stepsTo get a better feel for the environment, we will use a random policy to genrate a short trajectory.
###Code
SUB = str.maketrans("t0123456789+", "ₜ₀₁₂₃₄₅₆₇₈₉₊")
print('-'*115)
state = env.reset()
for i in range (5):
# sample random action
action = env.action_space.sample()
# take action in the environment
new_state, reward, done, _ = env.step(action)
print('Step t=', i+1, ': (', 'st, at , rt, st+1'.translate(SUB),')')
print('(', state, ',', action, ',', reward, ',', new_state, ')')
print('-'*115)
state = new_state
###Output
-------------------------------------------------------------------------------------------------------------------
Step t= 1 : ( sₜ, aₜ , rₜ, sₜ₊₁ )
( [-0.04343247 -0.01849495 0.02620421 -0.01618236] , 0 , 1.0 , [-0.04380237 -0.21398271 0.02588056 0.28465177] )
-------------------------------------------------------------------------------------------------------------------
Step t= 2 : ( sₜ, aₜ , rₜ, sₜ₊₁ )
( [-0.04380237 -0.21398271 0.02588056 0.28465177] , 1 , 1.0 , [-0.04808202 -0.01923926 0.0315736 0.00024245] )
-------------------------------------------------------------------------------------------------------------------
Step t= 3 : ( sₜ, aₜ , rₜ, sₜ₊₁ )
( [-0.04808202 -0.01923926 0.0315736 0.00024245] , 1 , 1.0 , [-0.04846681 0.17541599 0.03157844 -0.2823138 ] )
-------------------------------------------------------------------------------------------------------------------
Step t= 4 : ( sₜ, aₜ , rₜ, sₜ₊₁ )
( [-0.04846681 0.17541599 0.03157844 -0.2823138 ] , 0 , 1.0 , [-0.04495849 -0.02014182 0.02593217 0.02015919] )
-------------------------------------------------------------------------------------------------------------------
Step t= 5 : ( sₜ, aₜ , rₜ, sₜ₊₁ )
( [-0.04495849 -0.02014182 0.02593217 0.02015919] , 1 , 1.0 , [-0.04536132 0.17459882 0.02633535 -0.26423036] )
-------------------------------------------------------------------------------------------------------------------
###Markdown
Watching a random policy agent playLet's also see how a random policy performs in this enviroment:
###Code
env_1 = gym.make('CartPole-v1')
state = env_1.reset()
for t in range(200):
# sample a random action
action = env_1.action_space.sample()
env_1.render()
state, reward, done, _ = env_1.step(action)
env_1.close()
###Output
[33mWARN: You are calling 'step()' even though this environment has already returned done = True. You should always call 'reset()' once you receive 'done = True' -- any further steps are undefined behavior.[0m
###Markdown
Not very good! The pole only stayed up for a few time steps... Now let's improve things using REINFORCE. The PolicyWe begin by parameterising the policy $\pi_\theta(a | s)$ as a simple neural network which takes the state (a vector of 4 elements provided by `gym`) as input, and produces a Categorical distribution over the possible actions as output. Simple enough. Refer to [torch.nn](https://pytorch.org/docs/stable/nn.html)
###Code
class Policy(nn.Module):
def __init__(self):
super(Policy, self).__init__()
# Define neural network layers. Refer to nn.Linear (https://pytorch.org/docs/stable/nn.html#torch.nn.Linear)
# We are going to use a neural network with one hidden layer of size 16.
# The first layer should have an input size of env.observation_space.shape and an output size of 16
self.fc1 = nn.Linear(4, 16)
# The second layer should have an input size of 16 and an output size of env.action_space.n
self.fc2 = nn.Linear(16, 2)
def forward(self, x):
# IMPLEMENT-ME
# Implement the forward pass
# apply a ReLU activation after the first linear layer
x = F.relu(self.fc1(x))
# apply the second linear layer (without an activation).
# the outputs of the second layer will act as the log probabilities for the Categorial distribution.
x = self.fc2(x)
return Categorical(logits=x)
###Output
_____no_output_____
###Markdown
Selecting actions with our policyFor a given state our policy returns a pytorch `Categorial` object. We can sample from this distribution by calling it's `sample` method and we can find the log probability of an action using `log_prob`:
###Code
policy = Policy().to(device)
state = env.reset()
# convert state (a numpy array) to a torch tensor
state = torch.from_numpy(state).float().to(device)
dist = policy(state)
action = dist.sample()
print("Sampled action: ", action.item())
print("Log probability of action: ", dist.log_prob(action).item())
print(dist.log_prob(action))
print(dist.log_prob(action).unsqueeze(0))
###Output
Sampled action: 1
Log probability of action: -0.8268687725067139
tensor(-0.8269, grad_fn=<SqueezeBackward1>)
tensor([-0.8269], grad_fn=<UnsqueezeBackward0>)
###Markdown
Computing the returnGiven a sequence of rewards $(r_0, \ldots, r_T)$ we want to calculate the return $\sum_{t=0}^T \gamma^t r(s_t, a_t)$.
###Code
def compute_returns(rewards, gamma):
# IMPLEMENT-ME
# compute the return using the above equation
returns = 0
for t in range(len(rewards)):
returns += gamma**t * rewards[t]
return returns
###Output
_____no_output_____
###Markdown
REINFORCENow its time to implement the algorithm**Monte-Carlo REINFORCE**:for each episode:1. sample a trajectory $\tau$ using the policy $\pi_\theta$.2. compute $\nabla_\theta J(\theta) \approx \left(\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t|s_t)\right) \left(\sum_{t=0}^T \gamma^t r(s_t, a_t) \right)$.3. update policy parameters $\theta \leftarrow \theta + \alpha \nabla_\theta J(\theta)$ Hyperameters
###Code
learning_rate = 1e-2
number_episodes = 1500
max_episode_length = 1000
gamma = 1.0
def reinforce(seed, verbose=True):
# set random seeds (for reproducibility)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
env.seed(seed)
# instantiate the policy and optimizer
policy = Policy().to(device)
optimizer = optim.Adam(policy.parameters(), lr=learning_rate)
scores = []
scores_deque = deque(maxlen=100)
for episode in range(1, number_episodes+1):
#################################################################
# 1. Collect trajectories using our policy and save the rewards #
# and the log probability of each action taken. #
#################################################################
log_probs = []
rewards = []
state = env.reset()
for t in range(max_episode_length):
# IMPLEMENT-ME: get the distribution over actions for state
torch_state = torch.from_numpy(state).float().to(device)
dist = policy(torch_state )
# IMPLEMENT-ME: sample an action from the distribution
action = dist.sample()
# IMPLEMENT-ME: compute the log probability
log_prob = dist.log_prob(action).unsqueeze(0)
# IMPLEMENT-ME: take a step in the environment
state, reward, done, _ = env.step(action.item())
# save the reward and log probability
rewards.append(reward)
log_probs.append(log_prob.unsqueeze(0))
if done:
break
# for reporting save the score
scores.append(sum(rewards))
scores_deque.append(sum(rewards))
#################################################################
# 2. evaluate the policy gradient #
#################################################################
# IMPLEMENT-ME: calculate the discounted return of the trajectory
returns = compute_returns(rewards, gamma)
log_probs = torch.cat(log_probs)
# IMPLEMENT-ME: multiply the log probabilities by the returns and sum (see the policy-gradient theorem)
# Remember to multiply the result by -1 because we want to maximise the returns
policy_loss = -torch.sum(log_probs * returns)
#################################################################
# 3. update the policy parameters (gradient descent) #
#################################################################
optimizer.zero_grad()
policy_loss.backward()
optimizer.step()
# report the score to check that we're making progress
if episode % 50 == 0 and verbose:
print('Episode {}\tAverage Score: {:.2f}'.format(episode, np.mean(scores_deque)))
if np.mean(scores_deque) >= 495.0 and verbose:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(episode, np.mean(scores_deque)))
break
return policy, scores
policy, scores = reinforce(42)
###Output
Episode 50 Average Score: 27.78
Episode 100 Average Score: 32.39
Episode 150 Average Score: 42.82
Episode 200 Average Score: 46.35
Episode 250 Average Score: 51.75
Episode 300 Average Score: 58.77
Episode 350 Average Score: 67.28
Episode 400 Average Score: 78.60
Episode 450 Average Score: 79.20
Episode 500 Average Score: 82.28
Episode 550 Average Score: 101.07
Episode 600 Average Score: 111.77
Episode 650 Average Score: 115.17
Episode 700 Average Score: 252.52
Episode 750 Average Score: 345.24
Episode 800 Average Score: 304.47
Episode 850 Average Score: 383.91
Episode 900 Average Score: 407.94
Episode 950 Average Score: 248.61
Episode 1000 Average Score: 156.76
Episode 1050 Average Score: 255.69
Episode 1100 Average Score: 422.78
Environment solved in 1126 episodes! Average Score: 495.12
###Markdown
Seeing our learned policy in actionLet's watch our learned policy balance the pole!
###Code
env = gym.make('CartPole-v1')
state = env.reset()
for t in range(2000):
dist = policy(torch.from_numpy(state).float().to(device))
action = dist.sample()
env.render()
state, reward, done, _ = env.step(action.item())
if done:
break
env.close()
###Output
_____no_output_____
###Markdown
Plotting the resultsFinally, let's plot the learning curve.
###Code
def moving_average(a, n) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret / n
fig = plt.figure()
ax = fig.add_subplot(111)
x = np.arange(1, len(scores)+1)
ax.plot(x, scores, label='Score')
m_average = moving_average(scores, 50)
ax.plot(x, m_average, label='Moving Average (w=100)', linestyle='--')
plt.legend()
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.title('REINFORCE learning curve - CartPole-v1')
plt.show()
###Output
_____no_output_____
###Markdown
Here is what your graph should look like. We can see that at the end of training our policy consistantly (more or less) recieves returns of 500. Investigating the variance of REINFORCEWe noted in class that REINFORCE is a high variance algorithm. We can investigate the variance by running multiple trials and averaging the results.
###Code
np.random.seed(53)
seeds = np.random.randint(1000, size=5)
all_scores = []
for seed in seeds:
print("started training with seed: ", seed)
_, scores = reinforce(int(seed), verbose=False)
print("completed training with seed: ", seed)
all_scores.append(scores)
smoothed_scores = [moving_average(s, 50) for s in all_scores]
smoothed_scores = np.array(smoothed_scores)
mean = smoothed_scores.mean(axis=0)
std = smoothed_scores.std(axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
x = np.arange(1, len(mean)+1)
ax.plot(x, mean, '-', color='blue')
ax.fill_between(x, mean - std, mean + std, color='blue', alpha=0.2)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.title('REINFORCE averaged over 5 seeds')
plt.show()
###Output
_____no_output_____
###Markdown
Here is what your graph should look like. Reducing the variance of REINFORCE In class we saw a couple of tricks to reduce the variance of REINFORCE and improve its performance. Firstly, future actions should not change past decision. Present actions only impact the future. Therefore, we can change our objective function to reflect this:\begin{align} \nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim p_\theta} \left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t|s_t) \sum_{t'= t}^T \gamma^{t'- t} r(s_{t'}, a_{t'})\right].\end{align}We can also reduce variance by subtracing a state dependent baseline to get:\begin{align} \nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim p_\theta} \left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t|s_t) \sum_{t'= t}^T \left( \gamma^{t'- t} r(s_{t'}, a_{t'}) - b(s_{t'}) \right)\right].\end{align}For our baseline we'll use the average of the returns over the trajectory. As a final trick we normalise the returns by dividing by the standard deviation.
###Code
def compute_returns_baseline(rewards, gamma):
r = 0
returns = []
for step in reversed(range(len(rewards))):
r = rewards[step] + gamma * r
returns.insert(0, r)
returns = np.array(returns)
# IMPLEMENT-ME: normalize the returns by subtracting the mean and dividing by the standard deviation
#mean = np.mean(returns, 0)
#std =
returns = (returns -returns.mean()) / returns.std()
return returns
def reinforce_baseline(seed, verbose=True):
# set random seeds (for reproducibility)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
env.seed(seed)
# instantiate the policy and optimizer
policy = Policy().to(device)
optimizer = optim.Adam(policy.parameters(), lr=learning_rate)
scores = []
scores_deque = deque(maxlen=100)
for episode in range(1, number_episodes+1):
#################################################################
# 1. Collect trajectories using our policy and save the rewards #
# and the log probability of each action taken. #
#################################################################
log_probs = []
rewards = []
state = env.reset()
for t in range(max_episode_length):
# IMPLEMENT-ME: get the distribution over actions for state
dist = policy(torch.from_numpy(state).float().to(device))
# IMPLEMENT-ME: sample an action from the distribution
action = dist.sample()
# IMPLEMENT-ME: compute the log probability
log_prob = dist.log_prob(action)
# IMPLEMENT-ME: take a step in the environment
state, reward, done, _ = env.step(action.item())
# save the reward and log probability
rewards.append(reward)
log_probs.append(log_prob.unsqueeze(0))
if done:
break
# for reporting save the score
scores.append(sum(rewards))
scores_deque.append(sum(rewards))
#################################################################
# 2. evaluate the policy gradient (with variance reduction) #
#################################################################
# calculate the discounted return of the trajectory
returns = compute_returns_baseline(rewards, gamma)
returns = torch.from_numpy(returns).float().to(device)
log_probs = torch.cat(log_probs)
# IMPLEMENT-ME: multiply the log probabilities by the returns and sum (see the policy-gradient theorem)
# Remember to multiply the result by -1 because we want to maximise the returns
policy_loss = -torch.sum(log_probs * returns)
#################################################################
# 3. update the policy parameters (gradient descent) #
#################################################################
optimizer.zero_grad()
policy_loss.backward()
optimizer.step()
# report the score to check that we're making progress
if episode % 50 == 0 and verbose:
print('Episode {}\tAverage Score: {:.2f}'.format(episode, np.mean(scores_deque)))
if np.mean(scores_deque) >= 495.0 and verbose:
print('Environment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(episode, np.mean(scores_deque)))
break
return policy, scores
###Output
_____no_output_____
###Markdown
Let's see if these changes give us any improvement.
###Code
np.random.seed(53)
seeds = np.random.randint(1000, size=5)
all_scores_baseline = []
for seed in seeds:
print("started training with seed: ", seed)
_, scores = reinforce_baseline(int(seed), verbose=False)
print("completed training with seed: ", seed)
all_scores_baseline.append(scores)
###Output
started training with seed: 537
completed training with seed: 537
started training with seed: 797
completed training with seed: 797
started training with seed: 885
completed training with seed: 885
started training with seed: 421
completed training with seed: 421
started training with seed: 763
completed training with seed: 763
###Markdown
Comparing the methodsFinally we'll compare the performance of the two methods.
###Code
smoothed_scores_baseline = [moving_average(s, 50) for s in all_scores_baseline]
smoothed_scores_baseline = np.array(smoothed_scores_baseline)
mean_baseline = smoothed_scores_baseline.mean(axis=0)
std_baseline = smoothed_scores_baseline.std(axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
x = np.arange(1, len(mean_baseline)+1)
ax.plot(x, mean_baseline, '-', color='green', label='Variance reduced REINFORCE')
ax.plot(x, mean, '-', color='blue', label='REINFORCE')
ax.fill_between(x, mean_baseline - std_baseline, mean_baseline + std_baseline, color='green', alpha=0.2)
ax.fill_between(x, mean - std, mean + std, color='blue', alpha=0.2)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.legend()
plt.title('Comparison of REINFORCE and Variance reduced REINFORCE (averaged over 5 seeds)')
plt.show()
###Output
_____no_output_____ |
ExploringTekore.ipynb | ###Markdown
3 things:- begin listing processing steps for each song- script to compile all tracks under a category- search genres, date range, duration range I'm starting with a look at one categoryMany playlists can be tagged with one category label, so out of anticipation that dance/electronic is going to be pretty likely to sound the same, let's get a track list for the first playlist under the "Dance/Electronic" label
###Code
edm_dance_list = spotify.category_playlists('edm_dance',limit=20)
edm_dance_list, edm_dance_list.items
# This is the object representation of a particular list of playlists, ".items" is most useful here
mint = edm_dance_list.items[0]
mint # Not guaranteed to be "mint" playlist because the category order shifted above
###Output
_____no_output_____
###Markdown
Let's get the first 2 songs' track data
###Code
print("This just shows us the link to the playlist and how many songs are in it")
print(mint.tracks)
print("---\n","playlist_items is the class that gets us track ID's")
mint_tracks = spotify.playlist_items(mint.id).items
mint_tracks
song1 = mint_tracks[0].track
song2 = mint_tracks[1].track
song1
# audio_analysis and audio_features
song1_analysis = spotify.track_audio_analysis(song1.id)
song2_analysis = spotify.track_audio_analysis(song2.id)
song1_features = spotify.track_audio_features(song1.id)
song2_features = spotify.track_audio_features(song2.id)
song1_analysis, song1_features
###Output
_____no_output_____
###Markdown
Combine all the track's data and compare correlation
###Code
song1song2_compare = pd.DataFrame([[vars(song1), vars(song1_features)],
[vars(song2), vars(song1_features)]])
song1song2_compare.head()
###Output
_____no_output_____
###Markdown
New activity: Condense an entire mood category
###Code
track_analysis = spotify.track_audio_analysis(track.track.id)
track_features = spotify.track_audio_features(track.track.id)
# for section in track_analysis.sections:
# print(f'Key:{section.key} Conf:{section.key_confidence}')
track_analysis.
# everything needs expanding
# don't need bars, beats, meta
import numpy as np
import time
alb_cols = ['album_lead_artist','album_name','release_date','total_tracks'] # these don't map exactly to the names they have in the object but it's what I want the DF columns to be
art_cols = ['track_artists','genres', 'artist_pop']
keep_cols = ['id','duration_ms','explicit','name','popularity','track_number']
feature_cols = ['acousticness', 'danceability', 'duration_ms', 'energy', 'instrumentalness', 'key', 'liveness', 'loudness', 'mode', 'speechiness', 'tempo', 'time_signature', 'valence']
df_colnames = ['cat_idx'] + alb_cols + art_cols + keep_cols + feature_cols
#question marks: episode/track, disc_number, restrictions, track_number
#album question marks: album_group, album_type
def basic_track_data(full_track, full_playlist_track=False):
'''
Using object type FullPlaylistTrack, this function returns a row of data with columns matching what the final DataFrame of training set is going to look like.
A track of this type can be retrieved by calling (with Tekore):
paging_obj = spotify.playlist_items(playlist_placeholder.id) #playlist paging object
paging_obj.items[track_index] #returns FullPlaylistTrack object type
We're not using any playlist-specific fields here, ~~~"so it should work the same with a regular FullTrack object"~~~ this is wrong.
It does not, so the default input is now FullTrack object, and if full_playlist_track is True, it converts
'''
if full_playlist_track:
full_track = full_track.track
# expand album field, retrieve album data labeled by alb_expand_cols
alb_data = []
alb_data.append(full_track.album.artists[0].name)
alb_data.append(full_track.album.name)
alb_data.append(full_track.album.release_date)
alb_data.append(full_track.album.total_tracks)
# expand artists field, list all artists as string separated by newlines
artists_str = ''
for artist in full_track.artists:
artists_str = artists_str + artist.name + '\n'
# lookup lead artist, list genres as string, separated by newlines
genres_str = ''
lead_artist = spotify.artist(full_track.artists[0].id)
for genre in lead_artist.genres:
genres_str = genres_str + genre + '\n'
# record lead artist popularity and save
artist_pop = lead_artist.popularity
art_data = [artists_str, genres_str, artist_pop]
# generate data as labeled by keep_cols
track_data = [full_track.__dict__[col] for col in keep_cols]
# return single row containing all track data
return alb_data + art_data + track_data
def audio_data(full_playlist_track): # don't use this
features_obj = spotify.track_audio_features(full_playlist_track.track.id)
#analysis_obj = spotify.track_audio_analysis(full_playlist_track.track.id)
#Will this be too many API calls? we'll find out. (it is possible to batch audio_features request but not audio_analysis)
features_data = [features_obj.__dict__[fcol] for fcol in feature_cols]
#Skipping analysis for now
return features_data #+ analysis_data
#[df_colnames, *[basic_track_data(track)+ audio_data(track)]]
#training_songs = pd.DataFrame([], columns=df_colnames)
#training_songs = pd.read_csv('training_data.csv') #Use this loading if adding to existing version of training_songs
sleep_min = 2
sleep_max = 5
start_time = time.time()
request_count = 0
starting_idx = 31
with spotify_ch.chunked():
# Loop thru categories here
for i,cat in tqdm(enumerate(all_cats_master[starting_idx:]), desc='categories', leave=False):#currently picking up from index 14
i=i+starting_idx
# Loop thru playlists/subcategories here
cat_playlists = spotify.category_playlists(cat[0],limit=10)
cat_playlist_total = min(10, cat_playlists.total)
#print(f'Accessing {cat_playlist_total} playlists')
for p in tqdm(cat_playlists.items, total=cat_playlist_total, desc='playlists', leave=False):
# Loop thru and collect tracks
cat_playlist_tracks_pg = spotify.playlist_items(p.id) #could return "as_tracks"
track_total = cat_playlist_tracks_pg.total
cat_playlist_track_ids = [track.track.id if track.track!=None else None for track in spotify.all_items(cat_playlist_tracks_pg)]
#Remove None's from ids list
try:
while cat_playlist_track_ids.index(None):
cat_playlist_track_ids.pop(cat_playlist_track_ids.index(None))
except:
pass
cat_playlist_tracks = spotify_ch.tracks(cat_playlist_track_ids)
cat_playlist_tracks_feat = spotify_ch.tracks_audio_features(cat_playlist_track_ids)
#cat_playlist_tracks.__dir__()
#cat_playlist_tracks.__sizeof__() # this returns 48 instead of 300, not sure why
#print(f' Collecting {track_total} songs in playlist')
for j,track in enumerate(cat_playlist_tracks): # iterating like this reliably counts all 300
request_count+=1
if request_count%5==0:
time.sleep(np.random.uniform(sleep_min, sleep_max))
# if track.track == None:
# continue
features_data = [cat_playlist_tracks_feat[j].__dict__[fcol] for fcol in feature_cols]
training_songs = training_songs.append(dict(zip(df_colnames,[i]+basic_track_data(track)+features_data)), ignore_index=True)
print(training_songs.shape)
training_songs
for cat in all_cats_master[19:]:
try:
spotify.category(cat[0]).name
except:
print(cat[1],"doesn't exist")
drop_idx = training_songs.cat_idx==31
training_songs.drop(training_songs.index[drop_idx],inplace=True)
ts_backup = training_songs #---uncomment to store current version
#training_songs = ts_backup #---uncomment to reset DF to last backup
for i in range(training_songs['cat_idx'].max()+2):
print(i,(training_songs['cat_idx']==i).sum())
training_songs.to_csv('training_data.csv')
###Output
_____no_output_____
###Markdown
New New activity: make weird searches
###Code
spotify.search('popularity:54',types='track')
same_as_song1 = spotify.track('2dh6Pnl5egc1FrQS6EsW4n')
# this is a cool thing that I don't actually need: track_keys = [*same_as_song1.__dict__.keys()]
for key, value in same_as_song1.__dict__.items():
try:
result = spotify.search(f'{key}:{value}',types=('track',),limit=20)
if result[0].total>0: print(result)
except:
pass
else:
print('^---',key,', ',value,'---^')
###Output
^--- id , 2dh6Pnl5egc1FrQS6EsW4n ---^
^--- href , https://api.spotify.com/v1/tracks/2dh6Pnl5egc1FrQS6EsW4n ---^
(FullTrackPaging with fields:
href = 'https://api.spotify.com/v1/search?query=type%3Atrack&type=tra...'
items = [20 x FullTrack(album, artists, available_markets, ...)]
limit = 20
next = 'https://api.spotify.com/v1/search?query=type%3Atrack&type=tra...'
offset = 0
previous = None
total = 283,)
^--- type , track ---^
^--- uri , spotify:track:2dh6Pnl5egc1FrQS6EsW4n ---^
^--- artists , ModelList with items: [
SimpleArtist(external_urls, href, id, name, type, uri)
SimpleArtist(external_urls, href, id, name, type, uri)
] ---^
(FullTrackPaging with fields:
href = 'https://api.spotify.com/v1/search?query=disc_number%3A1&type=...'
items = [20 x FullTrack(album, artists, available_markets, ...)]
limit = 20
next = 'https://api.spotify.com/v1/search?query=disc_number%3A1&type=...'
offset = 0
previous = None
total = 99,)
^--- disc_number , 1 ---^
^--- duration_ms , 182400 ---^
(FullTrackPaging with fields:
href = 'https://api.spotify.com/v1/search?query=explicit%3AFalse&type...'
items = [9 x FullTrack(album, artists, available_markets, ...)]
limit = 20
next = None
offset = 0
previous = None
total = 9,)
^--- explicit , False ---^
^--- external_urls , {'spotify': 'https://open.spotify.com/track/2dh6Pnl5egc1FrQS6EsW4n'} ---^
^--- name , 2AM (feat. Carla Monroe) ---^
^--- preview_url , https://p.scdn.co/mp3-preview/fd4257f22530a43d582131e83c194090b9734986?cid=340bdfadbf1b48d684111081b0be204d ---^
(FullTrackPaging with fields:
href = 'https://api.spotify.com/v1/search?query=track_number%3A1&type...'
items = [20 x FullTrack(album, artists, available_markets, ...)]
limit = 20
next = 'https://api.spotify.com/v1/search?query=track_number%3A1&type...'
offset = 0
previous = None
total = 111,)
^--- track_number , 1 ---^
(FullTrackPaging with fields:
href = 'https://api.spotify.com/v1/search?query=is_local%3AFalse&type...'
items = [1 x FullTrack(album, artists, available_markets, ...)]
limit = 20
next = None
offset = 0
previous = None
total = 1,)
^--- is_local , False ---^
^--- album , SimpleAlbum with fields:
album_group = None
album_type = <AlbumType.single: 'single'>
artists = [2 x SimpleArtist(external_urls, href, id, name, type, uri)]
available_markets = [90 x str]
external_urls = {'spotify'}
href = 'https://api.spotify.com/v1/albums/4CAvGuvYg9frLJFbPPHLmB'
id = '4CAvGuvYg9frLJFbPPHLmB'
images = [3 x Image(height, url, width)]
is_playable = None
name = '2AM (feat. Carla Monroe)'
release_date = '2020-07-24'
release_date_precision = <ReleaseDatePrecision.day: 'day'>
total_tracks = 1
type = 'album'
uri = 'spotify:album:4CAvGuvYg9frLJFbPPHLmB' ---^
^--- external_ids , {'isrc': 'GBARL2000691'} ---^
^--- popularity , 54 ---^
^--- available_markets , ModelList with items: [
'AD'
'AE'
'AL'
'AR'
'AT'
'AU'
'BA'
'BE'
'BG'
'BH'
'BO'
'BR'
'BY'
'CH'
'CL'
'CO'
'CR'
'CY'
'CZ'
'DE'
'DK'
'DO'
'DZ'
'EC'
'EE'
'EG'
'ES'
'FI'
'FR'
'GB'
'GR'
'GT'
'HK'
'HN'
'HR'
'HU'
'ID'
'IE'
'IL'
'IN'
'IS'
'IT'
'JO'
'JP'
'KW'
'KZ'
'LB'
'LI'
'LT'
'LU'
'LV'
'MA'
'MC'
'MD'
'ME'
'MK'
'MT'
'MX'
'MY'
'NI'
'NL'
'NO'
'NZ'
'OM'
'PA'
'PE'
'PH'
'PL'
'PS'
'PT'
'PY'
'QA'
'RO'
'RS'
'RU'
'SA'
'SE'
'SG'
'SI'
'SK'
'SV'
'TH'
'TN'
'TR'
'TW'
'UA'
'UY'
'VN'
'XK'
'ZA'
] ---^
^--- linked_from , None ---^
^--- is_playable , None ---^
^--- restrictions , None ---^
###Markdown
Does searching for multiple artists the goofy way actually return results?
###Code
spotify.search('genre:"rock"', types=('tracks',), limit=20)
for hot_explicits in spotify.search('explicit:True')[0].items:
name = hot_explicits.name
artist1 = hot_explicits.artists[0].name
pop = hot_explicits.popularity
#year = hot_explicits.year can't do year?
print(name+' by '+artist1)
print('Popularity: ',pop,'\n---')
#print('Released ',year)
for hipster in spotify.search('tag:hipster')[0].items:
name = hipster.name
artist1 = hipster.artists[0].name
print(name+' by '+artist1)
print('Popularity: ',pop,'\n---')
###Output
Valse mélancolique in F-Sharp Minor, Op. 332 "Le regret" (Previously attributed to Chopin) by Charles Mayer
Popularity: 5
---
Holz, Holz geigt die Quartette so (Attrib. Beethoven as WoO 204) by Karl Holz
Popularity: 5
---
Soft pink noise putting down baby by Colic Remedies Pink Noise
Popularity: 5
---
ASMR deep sleep by Easy Falling Asleep ASMR
Popularity: 5
---
Calm down fussy baby by Colic Remedies Pink Noise
Popularity: 5
---
Better night shusher by Pink Noise for Better Night
Popularity: 5
---
Healing ASMR sounds by Easy Falling Asleep ASMR
Popularity: 5
---
Brown noise airplane sound by Brown Noise Enjoyable Sleep
Popularity: 5
---
Baby stops crying by Brown Noise Enjoyable Sleep
Popularity: 5
---
Fall asleep easily by Easy Falling Asleep ASMR
Popularity: 5
---
White noise sleep aid all night by Soothing Airplane Sound All Night
Popularity: 5
---
Soft colic remedies by Colic Remedies Pink Noise
Popularity: 5
---
Airplane sound chill out by Soothing Airplane Sound All Night
Popularity: 5
---
Comfy and easy night by Brown Noise Enjoyable Sleep
Popularity: 5
---
Calm down fussy infant by Brown Noise Enjoyable Sleep
Popularity: 5
---
Anxiety relief looped sounds by Easy Falling Asleep ASMR
Popularity: 5
---
Healing ASMR sounds by Easy Falling Asleep ASMR
Popularity: 5
---
Comfy and easy night by Brown Noise Enjoyable Sleep
Popularity: 5
---
Baby goes to sleep by Pink Noise for Better Night
Popularity: 5
---
Pink noise for sleepless night by Pink Noise for Better Night
Popularity: 5
---
###Markdown
I need to chunk
###Code
spotify_ch = tk.Spotify(app_token,chunked_on=True)
#spotify_ch.chunked_on = False
with spotify_ch.chunked():
tracks_chunk = spotify_ch.tracks(list(training_songs['id'][:100]))
tracks_chunk_feat = spotify_ch.tracks_audio_features(list(training_songs['id'][:100]))
#spotify.tracks(list(training_songs['id'][:100]))
# cat_playlist_track_ids = [track.track.id for track in spotify.all_items(cat_playlist_tracks)]
# [track.track.id for track in spotify.all_items(spotify.playlist_items(p.id))]
for track in tracks_chunk:
basic_track_data(track)
###Output
_____no_output_____ |
pyacs_docs/pyacs/downloads/00168f0b6fb780847440089d972ff281/getting_started.ipynb | ###Markdown
This notebook shows some basic time series analysis* author: [email protected]* date: 07/05/2020 Load all time series as a Sgts (Geodetic Time Series Super Class)
###Code
# import
from pyacs.gts.Sgts import Sgts
# directory where the time series are
ts_dir = '../data/pos'
# load the time series
ts = Sgts(ts_dir=ts_dir, verbose=False)
# print available Geodetic Time Series as a list
ts.lcode()
# plot time series for YAKI
ts.YAKI.plot()
# detrend YAKI using least-squares and plot it
ts.YAKI.detrend().plot()
# detrend and remove seasonal terms using least-squares
ts.YAKI.detrend_seasonal().plot()
# see plot in a separate interactive QT window
%matplotlib qt
ts.YAKI.detrend().plot()
# find outliers using a sliding window of 3 days, excluding residuals larger than 3 times the median
help(ts.YAKI.find_outliers_sliding_window)
#ts.YAKI.find_outliers_sliding_window(threshold=3,window_len=3).plot()
# includes the Up component in the search of outliers, detrend and plot
ts.YAKI.find_outliers_sliding_window(threshold=3,window_len=3, component='ENU').detrend().plot()
# same but not actually removes the outliers (before they were only flagged) and save the clean time series
cln_YAKI = ts.YAKI.find_outliers_sliding_window(threshold=3,window_len=3, component='ENU').remove_outliers()
# see the result
cln_YAKI.detrend().plot()
# there are two obvious offsets
# Let's try to detect it, with the default values
cln_YAKI.find_offsets_edge_filter().plot()
# Not perfect, but still useful
# See how then get estimated
cln_YAKI.find_offsets_edge_filter().plot().detrend().plot().info()
# My own implementation of MIDAS velocity
cln_YAKI.find_offsets_edge_filter().detrend_median().plot().info()
#ts.YAKI.plot(superimposed=ts.YAKI.vondrak(fc=10))
#ts.YAKI.find_outliers_vondrak(fc=10,threshold=3).plot()
###Output
_____no_output_____ |
notebooks/output_analysis.ipynb | ###Markdown
Output analysis
###Code
import os
import sys
sys.path.append('..')
import src.config as config
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
acc = pd.read_csv(config.ACCURACIES)
loss = pd.read_csv(config.LOSSES)
train_acc = acc.train.values
val_acc = acc.val.values
loss_train = loss.train.values
loss_valid = loss.val.values
plt.figure(1)
plt.subplot(121)
plt.plot(loss_train, label = "Train loss")
plt.plot(loss_valid, label = "Valid loss")
plt.title("Losses")
plt.subplot(122)
plt.plot(train_acc, label = "Train accuracy")
plt.plot(val_acc, label = "Valid accuracy")
plt.title("Accuracies")
###Output
_____no_output_____ |
_portfolio/uaphv/uaphv.ipynb | ###Markdown
Modelling geographical and built environment’s attributes as predictors of human vulnerability during tsunami evacuations: a multi-case study and paths to improvement I've been working with [Dr. Jorge León](https://www.researchgate.net/profile/Jorge-Leon-12) since 2019 on projects related to earthquakes, tsunamis and wildfires since a couple of years, motivated by his Architecture and Urbanism background. However, my job has been to implement mathematical and statistical tools for predictions and analysis.Now, I would like to show you my contribution to a one of these projects, where using tsunami evacuation simulations we implemented a multivariate regression model for prediction the tsunami death ratio analyzing built environment and geographical attributes of coastal locations. After this, we have applied an explaantion model in order to get the feature importance of each metric. These results could lead to spatial planning guidelines for developing new urban areas into expodes territories or retrofitting existing ones, with the final aim of enhancing evacuation and therefore increasing resilience. Prediction Model
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_validate, RepeatedKFold, KFold, GroupShuffleSplit
from sklearn.metrics import mean_squared_error, r2_score
shap.initjs()
%matplotlib inline
###Output
_____no_output_____
###Markdown
Honestly, I don't want to talk about _Exploratory Data Analysis_ or _Data Processing_ since there are a billion of other blog posts, tutorials and courses talking about it. The raw data for each case study was a raster shapefile representing the evacuation territory, where each entry represent a urban cell (micro-scale analysis) that you could think it as a _portion of a street_ and included urban and geographical data fields, included the tsunami death ratio. These files were readed using geopandas, merged and then I dropped independt variables with a large correlation. We are going to work with 530,091 urban cells from 7 case of studies (cities), tsunami death ratio as a target variable and other 11 as predictors. * Mean travel time* Sea distance* Shelter distance* Elevation* Total route length* Estimated arrival time (ETA) of the maximum flood* Maximum flood* Betweenness* Closeness* Straightness* Pedestrian directness ratio (PDR)
###Code
data = pd.read_csv("data.csv")
data.head()
###Output
_____no_output_____
###Markdown
After trying with some linear models we realized this data don't fit well on it, so we decided to use Random Forest, since it gave us good metrics values and it is a model easy to understand, as a couple of indendent trees predicting the same target.Let's split our data in train and test sets.
###Code
split_random_state = 42 # Reproducibility
X = data.drop(columns=["city", "death_ratio"]) # Independent of each case of study
y = data["death_ratio"]
X_train, X_test, y_train, y_test = train_test_split(
X,
y,
test_size=0.2,
random_state=split_random_state,
stratify=data["city"].values
)
###Output
_____no_output_____
###Markdown
And now our Random Forest model.
###Code
model = RandomForestRegressor(
n_estimators=10,
max_depth=15,
max_features=None,
n_jobs=-1,
random_state=42,
)
_ = model.fit(X_train, y_train)
model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
With no much effort we got a really good score. Cross-Validation We have to make sure that our predictor model was out of luck, so we are going to run a Cross Validation analysis to get different metrics (R2 and Negative Root Mean Square Error) and train/test partitions.
###Code
cv_model = cross_validate(
model,
X,
y,
scoring=["r2", "neg_root_mean_squared_error"],
cv=RepeatedKFold(n_splits=5, n_repeats=6, random_state=42),
return_train_score=True,
return_estimator=True,
n_jobs=-1
)
coefs = pd.DataFrame(
[est.feature_importances_ for est in cv_model['estimator']],
columns=X.columns
)
cv_scores = pd.DataFrame(
{
"Train": cv_model['train_r2'],
"Test": cv_model['test_r2']
}
)
plt.figure(figsize=(10, 9))
sns.boxplot(data=cv_scores, orient='v', color='cyan', saturation=0.5)
plt.ylabel('Score')
plt.title('R2 Score for train and test sets')
# plt.savefig("r2_score.png", dpi=300)
plt.show()
cv_scores = pd.DataFrame(
{
"Train": cv_model['train_neg_root_mean_squared_error'],
"Test": cv_model['test_neg_root_mean_squared_error']
}
)
plt.figure(figsize=(10, 9))
sns.boxplot(data=cv_scores, orient='v', color='cyan', saturation=0.5)
plt.ylabel('Score')
plt.title('NRMSE for train and test sets')
# plt.savefig("nrmse_score.png", dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, it wasn't lucky, the model learn very well (not perfect) but is enough for our goal: feature importance Explanation Model SHAP Values This is a unified framework for explaining model predictions. This tool is motivated by the idea that model interpretability is as important as model accuracy, since some modern models act as black boxes due to its complexity.There are three main points that make everyone loves this approach:1. Interpreting a prediction model is a model itself. In particular, it beloong to the class of additive feature attribution methods.2. Game theory results guaranteeing a unique solution.3. This new method is better aligned with human intuition
###Code
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
import h5py
with h5py.File("shap_values.h5", 'r') as hf:
shap_values = hf['shap_values'][:]
shap.summary_plot(shap_values, X, show=False, plot_size=(12, 8))
# plt.savefig("shap_summary.png")
plt.show()
###Output
_____no_output_____
###Markdown
The last visualization gives us a lot of insights related to the model and how each feature contribute to the prediction._Keep it simple_.* Each point correspond to a cell's data field value, then each row of this plot shows us around 530,000 points.* The color represents the (standarized) value of the each observation.* Their horizontal place corresponds to the impact on the model, since we are predicting the tsunami death ratio then if a point has been plotted on the right side that means it increases the death ratio prediction.For example, the maximum flood is one of the most important feature of the model, since a lot of the observations have a large absolute SHAP value. In order to interprete this notice the righ side points are mostly red, so that means that cells with a large maximum flood value incresing the death ratio prediction.
###Code
fig, ax = plt.subplots(figsize=(12, 8))
shap.dependence_plot(
ind="max_flood",
shap_values=shap_values,
features=X,
interaction_index=None,
alpha=0.1,
title="SHAP dependence plot of maximum flood feature",
ax=ax
)
fig.tight_layout()
# fig.savefig("shap_dependence_maximum_flood.png", dpi=300)
fig.show()
###Output
_____no_output_____ |
A1/Softmax_code.ipynb | ###Markdown
Data Preparation
###Code
## Training and testing
train_sents = list(emergingE.tagged_sents('wnut17train.conll'))
valid_sents = list(emergingE.tagged_sents('emerging.dev.conll'))
test_sents = list(emergingE.tagged_sents('emerging.test.conll'))
print(train_sents[0])
#each tuple contains token, syntactic tag, ner label
# functions of sentence representations for sequence labelling
def sent2labels(sent):
return [label for token, label in sent]
def sent2tokens(sent):
return [token for token, label in sent]
# sentence representations for sequence labelling
train_sent_tokens = [sent2tokens(s) for s in train_sents]
train_labels = [sent2labels(s) for s in train_sents]
train_id_2_label = list(set([label for sent in train_labels for label in sent]))
train_label_2_id = {label:i for i, label in enumerate(train_id_2_label)}
print("Number of unique labels in training data:", len(train_id_2_label))
def convert_labels_to_inds(sent_labels, label_2_id):
return [label_2_id[label] for label in sent_labels]
train_label_inds = [convert_labels_to_inds(sent_labels, train_label_2_id) for sent_labels in train_labels]
test_sent_tokens = [sent2tokens(s) for s in test_sents]
test_labels = [sent2labels(s) for s in test_sents]
### Test set contains label such as (B-corporation,B-person,B-location),
### so we have to separate them into (B-corporation), (B-person) and (B-location)
### if not we will encounter key error
test_labels_sep = list(([label for sent in test_labels for label in sent]))
test_labels_sep = ",".join(test_labels_sep)
test_labels_sep = test_labels_sep.split(",")
###
test_label_inds = [convert_labels_to_inds(test_labels_sep, train_label_2_id) for s in test_labels_sep]
window_size = 2
# converting tokenized sentence lists to vocabulary indices
id_2_word = list(set([token for sent in train_sent_tokens for token in sent])) + ["<pad>", "<unk>"]
word_2_id = {w:i for i,w in enumerate(id_2_word)}
def convert_tokens_to_inds(sentence, word_2_id):
return [word_2_id.get(t, word_2_id["<unk>"]) for t in sentence]
# padding for windows
def pad_sentence_for_window(sentence, window_size, pad_token="<pad>"):
return [pad_token]*window_size + sentence + [pad_token]*window_size
import pprint
pp = pprint.PrettyPrinter()
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from functools import partial
# Batching sentences together with a DataLoader
def my_collate(data, window_size, word_2_id):
"""
For some chunk of sentences and labels
-add winow padding
-pad for lengths using pad_sequence
-convert our labels to one-hots
-return padded inputs, one-hot labels, and lengths
"""
x_s, y_s = zip(*data)
# deal with input sentences as we've seen
window_padded = [convert_tokens_to_inds(pad_sentence_for_window(sentence, window_size), word_2_id)
for sentence in x_s]
# append zeros to each list of token ids in batch so that they are all the same length
padded = nn.utils.rnn.pad_sequence([torch.LongTensor(t) for t in window_padded], batch_first=True)
# convert labels to one-hots
labels = []
lengths = []
for y in y_s:
lengths.append(len(y))
one_hot = torch.zeros(len(y), len(train_id_2_label))
y = torch.tensor(y)
y = y.unsqueeze(1)
label = one_hot.scatter_(1, y, 1)
labels.append(label)
padded_labels = nn.utils.rnn.pad_sequence(labels, batch_first=True)
return padded.long(), padded_labels, torch.LongTensor(lengths)
batch_size = 4
# Shuffle True is good practice for train loaders.
# Use functools.partial to construct a partially populated collate function
train_loader = DataLoader(list(zip(train_sent_tokens, train_label_inds)),
batch_size=batch_size, shuffle=True,
collate_fn=partial(my_collate, window_size=2, word_2_id=word_2_id))
for batched_input, batched_labels, batch_lengths in train_loader:
pp.pprint(("inputs", batched_input, batched_input.size()))
pp.pprint(("labels", batched_labels, batched_labels.size()))
pp.pprint(batch_lengths)
break
class SoftmaxWordWindowClassifier(nn.Module):
"""
A one-layer, binary word-window classifier.
"""
def __init__(self, config, vocab_size, pad_idx=0):
super(SoftmaxWordWindowClassifier, self).__init__()
"""
Instance variables.
"""
self.window_size = 2*config["half_window"]+1
self.embed_dim = config["embed_dim"]
self.hidden_dim = config["hidden_dim"]
self.num_classes = config["num_classes"]
self.freeze_embeddings = config["freeze_embeddings"]
"""
Embedding layer
-model holds an embedding for each layer in our vocab
-sets aside a special index in the embedding matrix for padding vector (of zeros)
-by default, embeddings are parameters (so gradients pass through them)
"""
self.embed_layer = nn.Embedding(vocab_size, self.embed_dim, padding_idx=pad_idx)
if self.freeze_embeddings:
self.embed_layer.weight.requires_grad = False
"""
Hidden layer
-we want to map embedded word windows of dim (window_size+1)*self.embed_dim to a hidden layer.
-nn.Sequential allows you to efficiently specify sequentially structured models
-first the linear transformation is evoked on the embedded word windows
-next the nonlinear transformation tanh is evoked.
"""
self.hidden_layer = nn.Sequential(nn.Linear(self.window_size*self.embed_dim,
self.hidden_dim),
nn.Tanh())
"""
Output layer
-we want to map elements of the output layer (of size self.hidden dim) to a number of classes.
"""
self.output_layer = nn.Linear(self.hidden_dim, self.num_classes)
"""
Softmax
-The final step of the softmax classifier: mapping final hidden layer to class scores.
-pytorch has both logsoftmax and softmax functions (and many others)
-since our loss is the negative LOG likelihood, we use logsoftmax
-technically you can take the softmax, and take the log but PyTorch's implementation
is optimized to avoid numerical underflow issues.
"""
self.log_softmax = nn.LogSoftmax(dim=2)
def forward(self, inputs):
"""
Let B:= batch_size
L:= window-padded sentence length
D:= self.embed_dim
S:= self.window_size
H:= self.hidden_dim
inputs: a (B, L) tensor of token indices
"""
B, L = inputs.size()
"""
Reshaping.
Takes in a (B, L) LongTensor
Outputs a (B, L~, S) LongTensor
"""
# Fist, get our word windows for each word in our input.
token_windows = inputs.unfold(1, self.window_size, 1)
_, adjusted_length, _ = token_windows.size()
# Good idea to do internal tensor-size sanity checks, at the least in comments!
assert token_windows.size() == (B, adjusted_length, self.window_size)
"""
Embedding.
Takes in a torch.LongTensor of size (B, L~, S)
Outputs a (B, L~, S, D) FloatTensor.
"""
embedded_windows = self.embed_layer(token_windows)
"""
Reshaping.
Takes in a (B, L~, S, D) FloatTensor.
Resizes it into a (B, L~, S*D) FloatTensor.
-1 argument "infers" what the last dimension should be based on leftover axes.
"""
embedded_windows = embedded_windows.view(B, adjusted_length, -1)
"""
Layer 1.
Takes in a (B, L~, S*D) FloatTensor.
Resizes it into a (B, L~, H) FloatTensor
"""
layer_1 = self.hidden_layer(embedded_windows)
"""
Layer 2
Takes in a (B, L~, H) FloatTensor.
Resizes it into a (B, L~, 2) FloatTensor.
"""
output = self.output_layer(layer_1)
"""
Softmax.
Takes in a (B, L~, 2) FloatTensor of unnormalized class scores.
Outputs a (B, L~, 2) FloatTensor of (log-)normalized class scores.
"""
output = self.log_softmax(output)
return output
def loss_function(outputs, labels, lengths):
"""Computes negative LL loss on a batch of model predictions."""
B, L, num_classes = outputs.size()
num_elems = lengths.sum().float()
# get only the values with non-zero labels
loss = outputs*labels
# rescale average
return -loss.sum() / num_elems
def train_epoch(loss_function, optimizer, model, train_data):
## For each batch, we must reset the gradients
## stored by the model.
total_loss = 0
for batch, labels, lengths in train_data:
# clear gradients
optimizer.zero_grad()
# evoke model in training mode on batch
outputs = model.forward(batch)
# compute loss w.r.t batch
loss = loss_function(outputs, labels, lengths)
# pass gradients back, startiing on loss value
loss.backward()
# update parameters
optimizer.step()
total_loss += loss.item()
# return the total to keep track of how you did this time around
return total_loss
config = {"batch_size": 4,
"half_window": 2,
"embed_dim": 25,
"hidden_dim": 25,
"num_classes": 13,
"freeze_embeddings": False,
}
learning_rate = 0.0002
num_epochs = 100
model = SoftmaxWordWindowClassifier(config, len(word_2_id))
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for epoch in range(num_epochs):
epoch_loss = train_epoch(loss_function, optimizer, model, train_loader)
print(epoch, epoch_loss)
###Output
0 1984.0911266803741
1 1792.0715091228485
2 1607.2546564340591
3 1428.6124905347824
4 1261.417154788971
5 1111.3024247288704
6 978.8777705430984
7 868.2715724110603
8 777.5037584900856
9 699.8801460266113
10 634.8877540230751
11 585.3708375096321
12 544.7903020083904
13 506.13670694828033
14 477.1411560624838
15 454.2796929627657
16 431.78946666419506
17 413.5574513077736
18 401.9932784512639
19 385.9289086908102
20 376.22236704826355
21 366.0073767527938
22 358.61624309420586
23 349.6384425610304
24 341.78234745562077
25 336.6677959486842
26 332.50128585100174
27 326.40398567169905
28 321.82110530883074
29 319.4336385950446
30 316.42031425237656
31 313.945821352303
32 309.1180630326271
33 307.0945975407958
34 301.8980015181005
35 302.4247559569776
36 300.71974082291126
37 298.57036846131086
38 297.4774141609669
39 295.9250449799001
40 293.87659879401326
41 294.1891141496599
42 291.8307018019259
43 290.55560522153974
44 290.0176186785102
45 288.4694458581507
46 290.69103196263313
47 285.3020005635917
48 286.0992246977985
49 285.42470322176814
50 283.05480482801795
51 283.4681849963963
52 281.798903927207
53 282.16640519723296
54 282.98973498120904
55 280.56557754054666
56 281.3960764147341
57 277.1977654211223
58 280.02986623346806
59 279.2754381671548
60 278.31241361796856
61 277.60159132257104
62 278.5124855078757
63 277.9728636853397
64 276.955265507102
65 277.472096554935
66 276.6942363381386
67 274.4087910167873
68 274.8529318533838
69 273.7909520752728
70 276.37995108217
71 274.16476432979107
72 274.90685304254293
73 275.01163578778505
74 276.03534434363246
75 271.83678248152137
76 272.1792895644903
77 270.33009542152286
78 274.4534385763109
79 272.5692033097148
80 274.4494830034673
81 272.5119951181114
82 271.70500018820167
83 272.76815248280764
84 271.77649813890457
85 272.6036176905036
86 272.6538703478873
87 270.2446316257119
88 272.3693539649248
89 270.57217145711184
90 271.4938544332981
91 272.02028929814696
92 269.07329988107085
93 272.4935442470014
94 270.30595829337835
95 271.57193425670266
96 271.51481158286333
97 270.6403106711805
98 269.64140623807907
99 268.69167917594314
###Markdown
Evaluation
###Code
test_loader = DataLoader(list(zip(test_sent_tokens, test_label_inds)),
batch_size=batch_size, shuffle=False,
collate_fn=partial(my_collate, window_size=2, word_2_id=word_2_id))
test_outputs = []
for test_instance, labs, _ in test_loader:
outputs_full = model.forward(test_instance)
outputs = torch.argmax(outputs_full, dim=2)
for i in range(outputs.size(0)):
test_outputs.append(outputs[i].tolist())
y_test = test_labels
y_pred = []
for test, pred in zip(test_labels, test_outputs):
y_pred.append([train_id_2_label[id] for id in pred[:len(test)]])
assert len(y_pred) == len(y_test), '{} vs. {}'.format(len(y_pred), len(y_test))
for i, pred, test in zip(list(range(len(y_pred))), y_pred, y_test):
assert len(pred) == len(test), '{}: {} vs. {}'.format(i, len(pred), len(test))
# evaluate CRF model
from sklearn_crfsuite import metrics
metrics.flat_f1_score(y_test, y_pred, average='weighted', labels=train_id_2_label)
###Output
C:\Users\brand\Anaconda3\lib\site-packages\sklearn\metrics\classification.py:1437: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
|
marching-squares-demo/Marching Squares demo.ipynb | ###Markdown
Marching SquaresMarching Squares is a computer graphics algorithm that computes contours for a 2D scalar field, that is, a rectangular array of individual numerical values (“Marching squares”, 2019). It can be used to generate the contours of a kernel density map (Lu et al., 2020). Material- [Wikipedia page](https://en.wikipedia.org/wiki/Marching_squares) (“Marching squares”, 2019).- ["Isosurfaces: Geometry, Topology, and Algorithms" book](http://web.cse.ohio-state.edu/~wenger.4/publications/) (Wenger, 2013). - [*scikit-image* (or *skimage*) library](https://scikit-image.org/) (van der Walt et al., 2014).- [`skimage.measure.find_contours` function](https://scikit-image.org/docs/stable/api/skimage.measure.htmlfind-contours) (van der Walt et al., 2014). ReferencesWikipedia contributors. (2019, December 30). Marching squares. In *Wikipedia, The Free Encyclopedia*.Retrieved 12:13, February 26, 2020, from https://en.wikipedia.org/w/index.php?title=Marching_squares&oldid=933213691Lu, M., Wang, S., Lanir, J., Fish, N., Yue, Y., Cohen-Or, D., & Huang, H. (2020). Winglets: Visualizing Association with Uncertainty in Multi-class Scatterplots. *IEEE Transactions on Visualization and Computer Graphics, 26*(1), 770–779. https://doi.org/10.1109/TVCG.2019.2934811Wenger, R. (2013).*Isosurfaces*. New York: A K Peters/CRC Press.https://doi.org/10.1201/b15025van der Walt, S., Schönberger, J.L., Nunez-Iglesias, J., Boulogne, F., Warner, J.D., Yager, N., Gouillart, E., Yu, T., & scikit-image contributors. (2014).scikit-image: image processing in Python. *PeerJ, 2*, e453.https://doi.org/10.7717/peerj.453
###Code
import numpy as np
from skimage import measure
import matplotlib.pyplot as plt
print(f"NumPy: {np.__version__}")
# Generate dummy data (an image).
a = np.zeros((10, 10))
a[1, 8] = 1
for r in range(1,5):
for c in range(1,5):
if (r != 4 or c != 4):
a[r, c] = 1
a[r + 4, c + 4] = 1
# L-R = same y, different x.
# T-B = same x, different y.
a
# The origin in the graphs below is in the bottom-left corner,
# instead of the top-left corner as usual for images:
# https://matplotlib.org/api/_as_gen/matplotlib.pyplot.imshow.html
# This option for the origin was made so that the axes are ordered
# in the same way as is usual for a scatterplot, for example.
np.flip(a, axis=0) # Flip the image vertically.
# Find contours at a constant value of 0.50.
THRESHOLD = 0.50 # a.k.a. isovalue
contours = measure.find_contours(a, THRESHOLD)
# contours
x = [value for value in range(0, len(a))] * len(a)
# x
y = [item for sublist in [[value] * len(a) for value in range(0, len(a))] for item in sublist]
# y
a_flat = a.flatten()
# a_flat
clrs = ['green' if (x <= THRESHOLD) else 'none' for x in a_flat]
# clrs
# Display the image.
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(a, cmap=plt.cm.gray_r, origin="lower")
plt.show()
# Display the image and the binary index.
# The filled points are below the threshold.
# The unfilled points are above the threshold.
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(a, cmap=plt.cm.gray_r, origin="lower")
plt.scatter(x, y, c=clrs, edgecolors="green")
plt.show()
# Display the image, the binary index, and the contours.
# Every 2x2 block of pixels (or dots) forms a cell.
# Marching Squares generates a contour line for each cell
# according to the following lookup table:
# https://en.wikipedia.org/wiki/Marching_squares#Basic_algorithm
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(a, cmap=plt.cm.gray_r, origin="lower")
plt.scatter(x, y, c=clrs, edgecolors="green")
for n, contour in enumerate(contours):
print(f"Coordinates (contour {n + 1})\nx: {contour[:, 1]}\ny: {contour[:, 0]}\n")
ax.plot(contour[:, 1], contour[:, 0], linewidth=2, zorder=1)
plt.show()
###Output
_____no_output_____
###Markdown
Linear InterpolationConsidering, for example, the cell in the bottom-left corner, the exact position of the contour line (more specifically, the beginning and the end of the contour line) along the edges of this cell was computed with the following (abbreviated) algorithm (Wenger, 2013):**Input**: - Two points or vertices of a cell (*p* and *q*).- The scalar values associated with the aforementioned points according to the dataset or image (*sp* and *sq*).- The threshold or isovalue (*t*).- **Note**: The coordinates of the points can be checked on the graph axes. **Output**:- A point lying on [*p*, *q*] (*r*).- **Note**: This point will correspond to the beginning or end of a contour line. **Linear interpolation algorithm**:```pythondef linear_interpolation(p, sp, q, sq, t): alpha = (t - sp) / (sq - sp) dimensions = len(p) r = np.zeros(dimensions) for d in range(dimensions): r[d] = (1 - alpha) * p[d] + alpha * q[d] return(r)```
###Code
def linear_interpolation(p, sp, q, sq, t, verbose=True):
assert (sp != sq), "This algorithm requires sp != sq."
assert ((t >= sp and t <= sq) or (t <= sp and t >= sq)), "This algorithm requires sp <= t <= sq or sp >= t >= sq."
assert (len(p) == len(q)), "Both points must have the same dimension."
alpha = (t - sp) / (sq - sp)
dimensions = len(p)
r = np.zeros(dimensions)
for d in range(dimensions):
r[d] = (1 - alpha) * p[d] + alpha * q[d]
if(verbose):
print(f"Coordinates:\nx = {r[0]}\ny = {r[1]}")
return(r)
# Considering, for example, the points or vertices (0, 1) and (1, 1):
p = np.array([0, 1])
q = np.array([1, 1])
sp = a[tuple(p)]
sq = a[tuple(q)]
print(f"p: {p}\nq: {q}\nsp: {sp}\nsq: {sq}")
r = linear_interpolation(p, sp, q, sq, THRESHOLD)
# Display the image, the binary index, the contours,
# and a linearly interpolated point (red dot).
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(a, cmap=plt.cm.gray_r, origin="lower")
plt.scatter(x, y, c=clrs, edgecolors="green")
for n, contour in enumerate(contours):
print(f"Coordinates (contour {n + 1})\nx: {contour[:, 1]}\ny: {contour[:, 0]}\n")
ax.plot(contour[:, 1], contour[:, 0], linewidth=2, zorder=1)
plt.scatter(r[0], r[1], c="red", edgecolors="red", zorder=2)
plt.show()
# The choice of the threshold or isovalue is important.
# This value affects linear interpolation and can yield less accurate contours.
# Check the warning in the notes:
# https://scikit-image.org/docs/dev/api/skimage.measure.html#find-contours
THRESHOLD_2 = 0.25
contours_2 = measure.find_contours(a, THRESHOLD_2)
clrs_2 = ['green' if (x <= THRESHOLD_2) else 'none' for x in a_flat ]
# The same coordinates and values as in the previous example,
# but now with a different (lower) threshold.
r_2 = linear_interpolation(p, sp, q, sq, THRESHOLD_2)
# Display the image, the binary index, the contours,
# the previous linearly interpolated point (gray dot),
# and a new linearly interpolated point (red dot).
fig, ax = plt.subplots(figsize=(10, 10))
ax.imshow(a, cmap=plt.cm.gray_r, origin="lower")
plt.scatter(x, y, c=clrs_2, edgecolors="green")
for n, contour in enumerate(contours_2):
print(f"Coordinates (contour {n + 1})\nx: {contour[:, 1]}\ny: {contour[:, 0]}\n")
ax.plot(contour[:, 1], contour[:, 0], linewidth=2, zorder=1)
plt.scatter([r[0], r_2[0]], [r[1], r_2[1]], c=["silver", "red"], edgecolors=["silver", "red"], zorder=2)
plt.show()
###Output
_____no_output_____ |
notebooks/Classification of Lab Samples.ipynb | ###Markdown
Classification of Lab Samples
###Code
import init
from common import constants as cn
from common.trinary_data import TrinaryData
from common.data_provider import DataProvider
from common_python.plots import util_plots
from common_python.classifier import classifier_ensemble
from common_python.classifier import classifier_collection
from common_python.classifier.classifier_ensemble_random_forest import ClassifierEnsembleRandomForest
from common import transform_data
import collections
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn
from sklearn.decomposition import PCA
from sklearn import svm
from sklearn.model_selection import cross_val_score
%matplotlib inline
data = TrinaryData()
data.df_X.head()
df_sampleAM = transform_data.trinaryReadsDF(
csv_file="AM_MDM_Mtb_transcripts_DEseq.csv")
df_sampleAW = transform_data.trinaryReadsDF(
"AW_plus_v_AW_neg_Mtb_transcripts_DEseq.csv")
df_sampleAM = df_sampleAM.T
df_sampleAM.head()
df_sampleAW = df_sampleAW.T
df_sampleAW.head()
###Output
_____no_output_____
###Markdown
Classification ValidationsClassify T2-T25 and see if result is same as original class
###Code
provider = DataProvider(is_normalize=False)
provider.do()
def getTimeSample(time_index):
df0 = provider.dfs_cenered_adjusted_read_count[0]
num = len(provider.dfs_cenered_adjusted_read_count)
ser = pd.Series(np.repeat(0, len(df0.index)), index=df0.index)
for idx in range(num):
ser += provider.dfs_cenered_adjusted_read_count[idx][time_index]
df = pd.DataFrame(ser/num)
df_result = transform_data.trinaryReadsDF(df_sample=df)
return df_result.T
getTimeSample(2).head()
provider.dfs_cenered_adjusted_read_count[1].head()
svm_ensemble = classifier_ensemble.ClassifierEnsemble(
classifier_ensemble.ClassifierDescriptorSVM(), filter_high_rank=15, size=30)
df_X = data.df_X.copy()
df_X.columns = data.features
svm_ensemble.fit(df_X, data.ser_y)
if False:
data.ser_y
df_expected = pd.DataFrame(data.ser_y)
df_expected.columns = ["columns"]
df_expected["values"] = 1.0
dff_expected = df_expected.pivot(columns="columns", values="values")
dff_expected = dff_expected.fillna(0)
dff_expected.index = [int(v[1:]) for v in dff_expected.index]
dff_expected
df_diff = (dff_expected - dfff)
df_diff = df_diff.applymap(lambda v: np.abs(v))
df_diff.sum().sum()/len(df_diff)
###Output
_____no_output_____
###Markdown
Classification of Lab Samples
###Code
svm_ensemble = classifier_ensemble.ClassifierEnsemble(
classifier_ensemble.ClassifierDescriptorSVM(), filter_high_rank=15, size=30)
df_X = data.df_X.copy()
df_X.columns = data.features
svm_ensemble.fit(df_X, data.ser_y)
svm_ensemble.predict(df_sampleAM)
svm_ensemble.predict(df_sampleAW)
###Output
_____no_output_____
###Markdown
Comparisons With Random Forest
###Code
dummy_columns = [v+"-" for v in data.features]
truncated_columns = [f[0:f.index("-")] for f in dummy_columns]
len(truncated_columns)
df_X = data.df_X.copy()
df_X.columns = truncated_columns
clf = ClassifierEnsembleRandomForest(size=150, filter_high_rank=30)
clf.fit(df_X, data.ser_y)
df_sampleAM[truncated_columns].head()
clf.predict(df_sampleAM[clf.columns])
,Resuscitation,Transition,StageII,Stage1a,Stage1b
AM_D20_1,0.0,0.3,0.0,0.0,0.7
AM_D20_3,0.0,0.3,0.0,0.0,0.7
AM_D20_4,0.0,0.0,0.0,0.0,1.0
AM_D20_5,0.0,0.3,0.0,0.0,0.7
MDM_D20_1,0.0,0.3,0.0,0.0,0.7
MDM_D20_3,0.0,0.3,0.0,0.0,0.7
MDM_D20_4,0.0,1.0,0.0,0.0,0.0
MDM_D20_5,0.0,0.3,0.0,0.0,0.7
clf.predict(df_sampleAW[truncated_columns])
###Output
_____no_output_____
###Markdown
Random Forest Validations
###Code
set(clf.columns).difference(data.features)
df_X = data.df_X.copy()
df_X.columns = data.features
df_X[clf.columns]
def calcCVR(num_features, num_classifiers, num_iterations=10):
clf = ClassifierEnsembleRandomForest(
filter_high_rank=num_features, size=num_classifiers)
return classifier_collection.ClassifierCollection.crossValidateByState(
clf, df_X, data.ser_y, num_iterations)
calcCVR(30, 100)
###Output
_____no_output_____ |
Difussion2D_1.0.ipynb | ###Markdown
###Code
pip install --upgrade git+https://github.com/adantra/nangs.git
# autoreload nangs
%reload_ext autoreload
%autoreload 2
%matplotlib inline
#imports
import numpy as np
import matplotlib.pyplot as plt
import torch
import pandas as pd
import nangs
from nangs import *
device = "cuda" if torch.cuda.is_available() else "cpu"
nangs.__version__, torch.__version__
device
def perm(y):
return 1+100*torch.exp(-(y-0.5)**2/0.1)
alpha=1
class Difussion2d(PDE):
def computePDELoss(self, inputs, outputs):
p = outputs[:, 0]
#y = torch.tensor(inputs[:,1])
grads = self.computeGrads(p, inputs)
dpdx, dpdy, dpdt = grads[:, 0], grads[:, 1], grads[:, 2]
# second order derivatives
dp2dx2 = self.computeGrads(dpdx, inputs)[:, 0]
dp2dy2 = self.computeGrads(dpdy, inputs)[:, 1]
return {'pde':alpha*dpdt - (dp2dx2 + dp2dy2)}
class NeumannX(Neumann):
def computeBocoLoss(self, inputs, outputs):
p = outputs[:, 0]
grads = self.computeGrads(p, inputs)
dpdx = grads[:, 0]
return {'gradX': dpdx}
class NeumannY(Neumann):
def computeBocoLoss(self, inputs, outputs):
p = outputs[:, 0]
grads = self.computeGrads(p, inputs)
dpdy = grads[:, 1]
return {'gradX': dpdy}
class PBC1(Neumann):
def computeBocoLoss(self, inputs, outputs):
pbc = 0
p = outputs[:, 0]
return {'PBC': 10*(p-pbc)}
class PBC0(Neumann):
def computeBocoLoss(self, inputs, outputs):
pbc = 1
p = outputs[:, 0]
return {'PBC': 10*(p-pbc)}
pde = Difussion2d(inputs=('x', 'y', 't'), outputs='p')
# mesh
x = np.linspace(0,1,50)
y = np.linspace(0,1,50)
t = np.linspace(0,0.5,50)
mesh = Mesh({'x': x, 'y': y, 't':t}, device=device)
pde.set_mesh(mesh)
from nangs import Dirichlet
t0 = np.array([0])
_x, _y = np.meshgrid(x, y)
#p0 = np.sin(2*np.pi*_x)*np.sin(2*np.pi*_y)
p0=np.zeros((len(x),len(y)))
p0[:]=1
p0[:,0]=0
#p0[47:53,47:53]=1
p0.shape
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(_x, _y, p0.reshape((_y.shape[0],_x.shape[1])), cmap=cm.coolwarm, linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
initial_condition = Dirichlet({'x': x, 'y': y,'t':t0}, {'p': p0.reshape(-1)}, device=device, name="initial")
pde.add_boco(initial_condition)
#p0=np.ones(len(y))
#p1=np.zeros(len(y))
left = PBC1({'x': np.array([0]), 'y': y,'t':t}, name='left', device=device)
#left = Dirichlet({'x': x, 'y': y}, {'p': p0.reshape(-1)}, device=device, name="left")
pde.add_boco(left)
#right = PBC0({'x': np.array([1]), 'y': y, 't':t}, name='right', device=device)
#right = Dirichlet({'x': x[-1], 'y': y}, {'p': p1.reshape(-1)}, device=device, name="right")
right = NeumannX({'x': x[-1], 'y': y,'t':t}, name='right', device=device)
pde.add_boco(right)
top = NeumannY({'x': x, 'y': y[-1],'t':t}, name='top', device=device)
pde.add_boco(top)
bottom = NeumannY({'x': x, 'y': y[-1],'t':t}, name='bottom', device=device)
pde.add_boco(bottom)
from nangs import MLP
BATCH_SIZE = 1024
LR = 1e-2
EPOCHS = 50
NUM_LAYERS = 3
NUM_HIDDEN = 50
mlp = MLP(len(pde.inputs), len(pde.outputs), NUM_LAYERS, NUM_HIDDEN).to(device)
optimizer = torch.optim.Adam(mlp.parameters())
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=LR, pct_start=0.1, total_steps=EPOCHS)
EPOCHS = 300
optimizer = torch.optim.Adam(mlp.parameters())
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=LR, pct_start=0.1, total_steps=EPOCHS)
pde.compile(mlp, optimizer, scheduler)
%time hist = pde.solve(EPOCHS, BATCH_SIZE)
te = 0.01
xe = np.linspace(0,1.,10)
ye = np.linspace(0,1,10)
_x, _y, _t = np.meshgrid(xe, ye, te)
eval_mesh = Mesh({'x': xe, 'y': ye, 't':te}, device=device)
p = pde.eval(eval_mesh)
p = p.cpu().numpy()
p5 = p.reshape((_y.shape[0],_x.shape[1]))
plt.plot(xe,p5[:,0],marker="o")
p5.shape
def computep(te):
#te = 0.01
xe = np.linspace(0,1.5,20)
ye = np.linspace(0,1,20)
_x, _y, _t = np.meshgrid(xe, ye, te)
eval_mesh = Mesh({'x': xe, 'y': ye, 't':te}, device=device)
p = pde.eval(eval_mesh)
p = p.cpu().numpy()
p5 = p.reshape((_y.shape[0],_x.shape[1]))
plt.plot(xe,p5[0],marker="o")
plt.plot(xe,p5[-1],marker="o")
for te in [0.01,0.1,0.2,0.5]:
computep(te)
plt.plot(p5[0])
p0.shape
p1.shape
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
fig = plt.figure()
ax = fig.gca(projection='3d')
#surf = ax.plot_surface(_x, _y, p1.reshape((_y.shape[0],_x.shape[1])), cmap=cm.coolwarm, linewidth=0, antialiased=False)
surf = ax.plot_surface(_x, _y, p5.reshape((_y.shape[0],_x.shape[1])), cmap=cm.coolwarm, linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
p1 = p.reshape((_y.shape[0],_x.shape[1]))
#plt.plot(p00[0],color='b')
#plt.plot(p1[0],color='r')
#plt.plot(p2[0])
#plt.plot(p3[0])
#plt.plot(p4[0])
plt.plot(p5[0],marker="o")
plt.plot(p5[0],marker="o")
p3[0][-1]
ax.plot_surface(_x, _y, p.reshape((_y.shape[0],_x.shape[1])), cmap=cm.coolwarm, linewidth=0, antialiased=False)
p1.shape
_x.shape
from copy import deepcopy
mlp2=deepcopy(mlp)
mlp2
###Output
_____no_output_____ |
DeepLearning/03.ImageRelatedNeuralNetworks/Image-Related NNs.ipynb | ###Markdown
Image-Related Neural Networks Live Demos
###Code
cnn = Sequential([
Input((224, 224, 3)),
Conv2D(filters = 128, kernel_size = 3, padding = "same", activation = "relu"),
Conv2D(filters = 64, kernel_size = 3, padding = "same", activation = "relu"),
Conv2D(filters = 32, kernel_size = 3, padding = "same", activation = "relu"),
])
cnn.summary()
cnn_max_pooling = Sequential([
Input((224, 224, 3)),
Conv2D(filters = 128, kernel_size = 3, padding = "same", activation = "relu"),
MaxPool2D(),
Conv2D(filters = 64, kernel_size = 3, padding = "same", activation = "relu"),
MaxPool2D(),
Conv2D(filters = 32, kernel_size = 3, padding = "same", activation = "relu"),
MaxPool2D(),
Flatten(),
Dense(50, activation = "relu"),
Dropout(0.1),
Dense(25, activation = "relu"),
Dropout(0.05),
Dense(10, activation = "softmax")
])
cnn_max_pooling.summary()
cnn_max_pooling.save("model.sav")
###Output
WARNING:tensorflow:From C:\Users\Yordan\Anaconda3\envs\tf2\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1781: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: model.sav\assets
|
course1_intro/2. Intro to CV.ipynb | ###Markdown
**Sequential**: That defines a SEQUENCE of layers in the neural network**Flatten**: Remember earlier where our images were a square, when you printed them out? Flatten just takes that square and turns it into a 1 dimensional set.**Dense**: Adds a layer of neuronsEach layer of neurons need an **activation function** to tell them what to do. There's lots of options, but just use these for now. **Relu** effectively means "If X>0 return X, else return 0" -- so what it does it it only passes values 0 or greater to the next layer in the network.**Softmax** takes a set of values, and effectively picks the biggest one, so, for example, if the output of the last layer looks like [0.1, 0.1, 0.05, 0.1, 9.5, 0.1, 0.05, 0.05, 0.05], it saves you from fishing through it looking for the biggest value, and turns it into [0,0,0,0,1,0,0,0,0] -- The goal is to save a lot of coding!
###Code
model.compile(optimizer = tf.optimizers.Adam(),
loss="sparse_categorical_crossentropy",
metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
###Output
313/313 [==============================] - 0s 327us/step - loss: 0.3470 - accuracy: 0.8758
###Markdown
Exercises Part 1
###Code
classifications = model.predict(test_images)
print(classifications[0]) # prob for each category
print(np.argmax(classifications[0])) # predicted label
print(test_labels[0]) # true label
###Output
[7.9467400e-06 1.2366503e-06 1.4769585e-05 7.8836802e-06 3.4343229e-05 2.5804216e-02 3.1887823e-05 3.3108994e-01 2.5759658e-04 6.4275026e-01]
9
9
###Markdown
Part 2
###Code
model1 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model1.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model1.fit(training_images, training_labels, epochs=5)
model1.evaluate(test_images, test_labels)
model2 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1024, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model2.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model2.fit(training_images, training_labels, epochs=5)
model2.evaluate(test_images, test_labels)
classifications = model2.predict(test_images)
print(classifications[0])
print(test_labels[0])
###Output
[1.3590454e-08 1.5013335e-08 3.8980774e-09 4.3385268e-10 1.1502455e-08 3.2536718e-01 3.8077804e-08 2.6089303e-02 1.4549121e-08 6.4854348e-01]
9
###Markdown
Part 3 Add another dense layer. It does improve the performance
###Code
model3 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model3.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model3.fit(training_images, training_labels, epochs=5)
model3.evaluate(test_images, test_labels)
###Output
Epoch 1/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.4683 - accuracy: 0.8296
Epoch 2/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3558 - accuracy: 0.8690
Epoch 3/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.3182 - accuracy: 0.8826
Epoch 4/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.2964 - accuracy: 0.8900
Epoch 5/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.2780 - accuracy: 0.8959
313/313 [==============================] - 0s 481us/step - loss: 0.3341 - accuracy: 0.8843
###Markdown
Part 4Change epochs
###Code
model4 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model4.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model4.fit(training_images, training_labels, epochs=30)
model4.evaluate(test_images, test_labels)
###Output
Epoch 1/30
1/1875 [..............................] - ETA: 0s - loss: 2.4183 - accuracy: 0.0938WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0000s vs `on_train_batch_end` time: 0.0027s). Check your callbacks.
1875/1875 [==============================] - 1s 437us/step - loss: 0.5030 - accuracy: 0.8250
Epoch 2/30
1875/1875 [==============================] - 1s 435us/step - loss: 0.3793 - accuracy: 0.8630
Epoch 3/30
1875/1875 [==============================] - 1s 437us/step - loss: 0.3410 - accuracy: 0.8763
Epoch 4/30
1875/1875 [==============================] - 1s 438us/step - loss: 0.3142 - accuracy: 0.8839
Epoch 5/30
1875/1875 [==============================] - 1s 436us/step - loss: 0.2968 - accuracy: 0.8905
Epoch 6/30
1875/1875 [==============================] - 1s 437us/step - loss: 0.2814 - accuracy: 0.8965
Epoch 7/30
1875/1875 [==============================] - 1s 437us/step - loss: 0.2704 - accuracy: 0.8990
Epoch 8/30
1875/1875 [==============================] - 1s 441us/step - loss: 0.2590 - accuracy: 0.9035
Epoch 9/30
1875/1875 [==============================] - 1s 445us/step - loss: 0.2485 - accuracy: 0.9073
Epoch 10/30
1875/1875 [==============================] - 1s 441us/step - loss: 0.2405 - accuracy: 0.9085
Epoch 11/30
1875/1875 [==============================] - 1s 430us/step - loss: 0.2342 - accuracy: 0.9124
Epoch 12/30
1875/1875 [==============================] - 1s 432us/step - loss: 0.2246 - accuracy: 0.9156
Epoch 13/30
1875/1875 [==============================] - 1s 431us/step - loss: 0.2180 - accuracy: 0.9182
Epoch 14/30
1875/1875 [==============================] - 1s 440us/step - loss: 0.2112 - accuracy: 0.9209
Epoch 15/30
1875/1875 [==============================] - 1s 435us/step - loss: 0.2065 - accuracy: 0.9232
Epoch 16/30
1875/1875 [==============================] - 1s 431us/step - loss: 0.1985 - accuracy: 0.9250
Epoch 17/30
1875/1875 [==============================] - 1s 436us/step - loss: 0.1944 - accuracy: 0.9265
Epoch 18/30
1875/1875 [==============================] - 1s 435us/step - loss: 0.1901 - accuracy: 0.9273
Epoch 19/30
1875/1875 [==============================] - 1s 432us/step - loss: 0.1835 - accuracy: 0.9301
Epoch 20/30
1875/1875 [==============================] - 1s 432us/step - loss: 0.1799 - accuracy: 0.9333
Epoch 21/30
1875/1875 [==============================] - 1s 432us/step - loss: 0.1745 - accuracy: 0.9345
Epoch 22/30
1875/1875 [==============================] - 1s 441us/step - loss: 0.1718 - accuracy: 0.9358
Epoch 23/30
1875/1875 [==============================] - 1s 439us/step - loss: 0.1685 - accuracy: 0.9361
Epoch 24/30
1875/1875 [==============================] - 1s 434us/step - loss: 0.1617 - accuracy: 0.9392
Epoch 25/30
1875/1875 [==============================] - 1s 436us/step - loss: 0.1609 - accuracy: 0.9391
Epoch 26/30
1875/1875 [==============================] - 1s 433us/step - loss: 0.1577 - accuracy: 0.9402
Epoch 27/30
1875/1875 [==============================] - 1s 433us/step - loss: 0.1534 - accuracy: 0.9421
Epoch 28/30
1875/1875 [==============================] - 1s 434us/step - loss: 0.1503 - accuracy: 0.9431
Epoch 29/30
1875/1875 [==============================] - 1s 436us/step - loss: 0.1461 - accuracy: 0.9442
Epoch 30/30
1875/1875 [==============================] - 1s 435us/step - loss: 0.1454 - accuracy: 0.9450
313/313 [==============================] - 0s 340us/step - loss: 0.3994 - accuracy: 0.8886
###Markdown
Part 7Remove scaling
###Code
mnist = tf.keras.datasets.mnist
(training_images, training_labels) , (test_images, test_labels) = mnist.load_data()
model3 = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)])
model3.compile(optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
model3.fit(training_images, training_labels, epochs=5)
model3.evaluate(test_images, test_labels)
###Output
Epoch 1/5
1875/1875 [==============================] - 3s 2ms/step - loss: 1.4646 - accuracy: 0.9038
Epoch 2/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.2063 - accuracy: 0.9482
Epoch 3/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.1604 - accuracy: 0.9570
Epoch 4/5
1875/1875 [==============================] - 3s 1ms/step - loss: 0.1565 - accuracy: 0.9588
Epoch 5/5
1875/1875 [==============================] - 3s 2ms/step - loss: 0.1309 - accuracy: 0.9653
313/313 [==============================] - 0s 468us/step - loss: 0.1214 - accuracy: 0.9711
###Markdown
Part 8: callbacks
###Code
import tensorflow as tf
print(tf.__version__)
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>0.85):
print("\nReached 60% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images/255.0
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5, callbacks=[callbacks])
###Output
2.3.0
Epoch 1/5
1875/1875 [==============================] - 2s 1ms/step - loss: 0.4779 - accuracy: 0.8303
Epoch 2/5
1855/1875 [============================>.] - ETA: 0s - loss: 0.3596 - accuracy: 0.8704
Reached 60% accuracy so cancelling training!
1875/1875 [==============================] - 2s 1ms/step - loss: 0.3594 - accuracy: 0.8705
###Markdown
Exercise project
###Code
# YOUR CODE SHOULD START HERE
# YOUR CODE SHOULD END HERE
import tensorflow as tf
mnist = tf.keras.datasets.mnist
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>0.99):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
(x_train, y_train),(x_test, y_test) = mnist.load_data()
# YOUR CODE SHOULD START HERE
x_train = x_train/255.0
x_test = x_test/255.0
# YOUR CODE SHOULD END HERE
model = tf.keras.models.Sequential([
# YOUR CODE SHOULD START HERE
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# YOUR CODE SHOULD END HERE
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# YOUR CODE SHOULD START HERE
model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
model.evaluate(x_test, y_test)
# YOUR CODE SHOULD END HERE
###Output
313/313 [==============================] - 0s 357us/step - loss: 0.0753 - accuracy: 0.9771
|
Data103_(Chapter2_distance).ipynb | ###Markdown
###Code
import numpy as np #package สำหรับทำงานกับข้อมูลที่เป็นตัวเลข (ในรูปแบบ matrix)
###Output
_____no_output_____
###Markdown
Numpy Array Create numpy array
###Code
np_a = np.array([[1,2,3],[4,5,6]]) # np.array(list), list = [[1,2,3],[4,5,6]]
np_a
np_a.shape #มิติข้อมูล
np_b = np.array([[1, 4], [2, 5], [3,6]])
print(np_b.shape)
np_b
###Output
(3, 2)
###Markdown
คำสั่งสร้าง matrix เริ้มต้น zeros, ones
###Code
np.zeros((3,2)) # zeros , identity การบวก อะไรมา+ก็จะได้ตัวนัั้น
np.ones((3,2)) #ones, identity การคูณ อะไรมาคูณก็จะได้ตัวนัั้น
###Output
_____no_output_____
###Markdown
คำสั่งสร้าง matrix เริ้มต้นด้วย random
###Code
np.random.randn(3,2) #สรุ่มเลขมาสร้าง, randn: normal distribution
np.random.randint(0,10,(3,2)) #randint: int
###Output
_____no_output_____
###Markdown
Indexing
###Code
np_b[0,1] #การชี้สมาชิกใน numpy matrix ใช้ ชื่อ matrix[แถว,หลัก]
###Output
_____no_output_____
###Markdown
Transpose matrixLink: https://numpy.org/doc/stable/reference/generated/numpy.matrix.transpose.html
###Code
#Transpose
np_b.T
###Output
_____no_output_____
###Markdown
matrix multiplication หลักตัวหน้า = แถวตัวหลัง
###Code
np_a
np_b
np.dot(np_b,np_a) #สลับตัวผลลัพธ์ไม่เหมือนกัน
np.dot(np_a,np_b) #สลับตัวผลลัพธ์ไม่เหมือนกัน
###Output
_____no_output_____
###Markdown
Useful numpy function *** summation $\sum$
###Code
np_a = np.array([[1,2,3],[4,5,6]])
np_a
np.sum(np_a) #sum all
#sum เเนวแถว
np.sum(np_a, axis=0)
# เเนวคอลัมน์
np.sum(np_a, axis=1)
np.sum(np_a)
## วนลูปเอง
print(np_a.shape) #เเสดงค่า มิติข้อมูลมีกี่เเถวกี่คอมลัมน์
sum_all = 0
for row in range(np_a.shape[0]): #ลูป row
for col in range(np_a.shape[1]): #ลูป col
sum_all = sum_all + np_a[row,col]
sum_all
## วนลูปเอง
# sum เเนวแถว
print(np.sum(np_a, axis=0))
sum_vert = np.zeros((np_a.shape[1]))#สร้าง matrix 0 มิติ 3x1
for col in range(np_a.shape[1]):
for row in range(np_a.shape[0]):
sum_vert[col] = sum_vert[col] + np_a[row,col]
sum_vert
## วนลูปเอง
# sum เเนวคอลัมน์
print(np.sum(np_a, axis=1))
sum_vert = np.zeros((np_a.shape[0]))#สร้าง matrix 0 มิติ 3x1
for row in range(np_a.shape[0]):
for col in range(np_a.shape[1]): # เปลี่ยน row,col วนอะไรก่อนก็ได้
sum_vert[row] = sum_vert[row] + np_a[row,col]
sum_vert
## วนลูปเอง
#เขียนโคดให้มันง่ายขึ้น
n_rows, n_cols = np_a.shape
sum_vert = np.zeros((np_a.shape[0]))
for row in range(n_rows):
for col in range(n_cols):
sum_vert[row] = sum_vert[row] + np_a[row,col]
sum_vert
###Output
_____no_output_____
###Markdown
Array Slicing : เลือกเเสดง
###Code
np_a
np_a[:,1] #เลือกเอาข้อมูลเฉพาะเเถวที่ 2 ตำเเหน่งเเถว array ที่ 1
np_a[1,:]#เลือกเอาข้อมูลเฉพาะคอลัมน์ที่ 2 ตำเเหน่งคอลัมน์ array ที่ 1
###Output
_____no_output_____
###Markdown
Average $\mu$ , $\bar{x}$
###Code
np_a = np.array([[1,2,3],[4,5,6]])
np_a
np.mean(np_a) #mean all
np.mean(np_a,axis = 0) #mean เเนวแถว
np.mean(np_a,axis = 1) #mean เเนวคอลัมน์
###Output
_____no_output_____
###Markdown
Distance Matrix
###Code
data1 = np.array([[1,2], [3,5], [2,0], [4,5]])
data1
from matplotlib import pyplot as plt
plt.plot(data1[:,0], data1[:,1],'o') #แกน X: data1[:,0] ,Y: data1[:,1]
###Output
_____no_output_____
###Markdown
Euclidean Distance $\sqrt{(x_1 - x_2)^2 - (y_1 - y_2)^2} $
###Code
"#หาระยางระหว่าง จุด จุด
print(data1[0,0], data1[0,1])
print(data1[1,0], data1[1,1])
dist_p1_p2 = np.sqrt(np.power(data1[0, 0]-data1[1, 0], 2)+np.power(data1[0, 1]-data1[1, 1], 2))
dist_p1_p2
# 2 จุดทั้วไป 2 dimentions
def eud_dist(pt1, pt2): # 2 dims
return np.sqrt(np.power(pt1[0] - pt2[0], 2)+np.power(pt1[1] - pt2[1], 2))
eud_dist(data1[0, :], data1[1, :])
# 2 จุดทั้วไป n dimentions
def eud_dist_n(pt1, pt2): # n dims
sum =0
for i in range(len(pt1)):
sum = sum + np.power(pt1[i] - pt2[i], 2)
return np.sqrt(sum)
eud_dist_n(data1[1, :], data1[3, :])
###Output
_____no_output_____
###Markdown
Quiz 9 เขียนฟังก์ชัน คำนวณ Manhatton Distance${|x_1 - x_2|+|y_1 - y_2|}$
###Code
def ManhDistN(pt1, pt2):
sum =0
for i in range(len(pt1)):
sum = sum + abs(pt1[i] - pt2[i])
return sum
ManhDistN(data1[0, :], data1[1, :])
###Output
_____no_output_____ |
previsions-methodes-lissage.ipynb | ###Markdown
First notebook
###Code
print('Hello World')
###Output
Hello World
|
Flask/Python Web Development with Flask.ipynb | ###Markdown
Python Web Development with Flask  http://flask.pocoo.org/ Web Server"Web server" can refer to hardware or software, or both of them working together.On the hardware side, a web server is a computer that stores web server software and a website's component files (e.g. HTML documents, images, CSS stylesheets, and JavaScript files). It is connected to the Internet and supports physical data interchange with other devices connected to the web.On the software side, a web server includes several parts that control how web users access hosted files, at minimum an HTTP server. An HTTP server is a piece of software that understands URLs (web addresses) and HTTP (the protocol your browser uses to view webpages). It can be accessed through the domain names (like mozilla.org) of websites it stores, and delivers their content to the end-user's device.[From Mozilla](https://developer.mozilla.org/en-US/docs/Learn/Common_questions/What_is_a_web_server)Then there is HTTPS which is an extension of HTTP but provides authentication/security/integrity, protecting from Man-In-the-Middle attacks. HTTPS is pretty much mandated these days. Static Files vs Dynamic Files Getting started with Flask Create an environmentCreate a project folder and a venv folder within:`mkdir flaskappcd myprojectpython -m venv venv` Activate the environmentBefore you work on your project, activate the corresponding environment:On MacOS/Linux from your project directory run `. venv/bin/activate`On Windows:`venv\Scripts\activate`Your shell prompt will change to show the name of the activated environment.deactivate with deactivate :) SetupOnce we have (venv) prompt that is active venv`$ pip install Flask` Enviroment variable but we are not going to use this$ FLASK_APP=hflask.py flask run Instead just using`python hflask.py` will work for demonstration purposes
###Code
%%writefile hflask.py
from flask import Flask
app = Flask(__name__)
@app.route('/') # remember decorators? :)
def hello():
return 'Sup!'
if __name__ == '__main__':
app.run()
## localhost:5000 , stop it with Ctrl-C in the terminal
%%writefile hflask.py
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return 'Sup!'
@app.route('/admin')
def admin():
return 'Super Secret Admin Page'
if __name__ == '__main__':
app.run()
### How about adding some HTML ?
%%writefile hflask.py
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return 'Sup!'
@app.route('/admin')
def admin():
return 'Super Secret Admin Page'
@app.route('/about')
def about():
user = {'username': 'Valdis'}
return '''
<html>
<head>
<title>Home Page - Microblog</title>
</head>
<body>
<h1>Hello, ''' + user['username'] + '''!</h1>
</body>
</html>'''
if __name__ == '__main__':
app.run()
###Output
Overwriting hflask.py
###Markdown
What are problems with the above approach?* unwieldy code* inline HTML is generally hard to view especially for large pages VS Code -> Tasks -> Configure Default Build Task Replace Task json with one below This will allow you to run current .py file with Ctrl-Shift-B (Option-Shift-B on MAc ??)
###Code
{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
// this goes in your .vscode folder under tasks.json (one already exists there)
//"command": "python ${relativeFile}" opens your currently active file with python (or at least tries to..)
// you activate the build command with Ctrl-Shift-B in VS Code
"version": "2.0.0",
"tasks": [
{
"label": "Run Python active file",
"type": "shell",
"command": "python ${relativeFile}",
"group": {
"kind": "build",
"isDefault": true
}
}
]
}
###Output
_____no_output_____
###Markdown
Using Templates
###Code
from flask import Flask
from flask import render_template
app = Flask(__name__)
@app.route('/')
def hello():
return 'Sup!'
@app.route('/admin')
def admin():
return 'Super Secret Admin Page'
@app.route('/about')
def about():
user = {'username': 'Valdis'}
return '''
<html>
<head>
<title>Home Page - Microblog</title>
</head>
<body>
<h1>Hello, ''' + user['username'] + '''!</h1>
</body>
</html>'''
@app.route('/hello/')
@app.route('/hello/<name>')
def bighello(name=None):
return render_template('hello.html', name=name)
if __name__ == '__main__':
app.run()
%%writefile hello.html
<!doctype html>
<html>
<head>
<title>Hello from Flask</title>
<link rel="stylesheet" href="{{ url_for('static', filename='style.css') }}">
</head>
<body>
{% if name %}
<h1>Hello {{ name }}!</h1>
{% else %}
<h1>Hello, World!</h1>
{% endif %}
<hr>
<p>Just a normal paragraph</p>
</body>
</html>
## Place hello.html in yourproject/templates subdirectory (mkdir templates if one is not made)
%%writefile style.css
h1 { font-family: serif; color: #377ba8; margin: 1rem 0; }
a { color: #377ba8; }
hr { border: none; border-top: 2px solid darkblue; }
#place style.css in yourproject/static directory (make static directory if one is not already made)
###Output
_____no_output_____ |
5-Data-Operation-in-Pandas.ipynb | ###Markdown
Pandas中的数据操作 通用函数操作
###Code
import pandas as pd
import numpy as np
rng = np.random.RandomState(21)
series = pd.Series( rng.randint(0,400,4) )
series
columns = ['one','two','three','four','five']
index = [1,3,5]
df = pd.DataFrame(rng.randint(0,200,(3,5)),index = index, columns = columns)
df
np.random.randint(2,41,(3,2))
np.exp(series)
df
np.sin(df*np.pi/4)
###Output
_____no_output_____
###Markdown
Index 的对齐 Series 中的index 对齐
###Code
area = pd.Series({'Alaska': 1723337,
'Texas': 695662,
'California': 423967},
name='area')
population = pd.Series({'California': 38332521,
'Texas': 26448193,
'New York': 19651127},
name='population')
population/area
area.index|population.index
###Output
_____no_output_____
###Markdown
只有对应索引相加,其他索引位置无法直接计算
###Code
A = pd.Series([2,4,5,2],index = [0,1,2,4])
B = pd.Series([1,3,4,2],index = [3,5,6,0])
A + B # 按照索引链接起来形成长链
A.add(B,fill_value = 1) # 没有匹配的索引值 则只保持原来的值,后自动添加+1 = fill_value
###Output
_____no_output_____
###Markdown
DataFrame 中的Index 对齐
###Code
materiA = pd.DataFrame(rng.randint(100,200,(4,2)),index = {'A','B','C','D'} ,columns = list('AB') )
materiA
materiB = pd.DataFrame(rng.randint(100,200,(5,3)),index = {'A','B','C','D','E'} ,columns = list('BCS') )
materiB
materiB + materiA
materiA.stack()
materiA['A']
materiA.stack().mean() #
materiA['B']
np.sum(materiA['B']+ materiA['A'])/8
fill = materiA.stack().mean()
materiA.add(materiB,fill_value = fill) # B列得到materiA和materiB的计算
materiA
materiB
###Output
_____no_output_____
###Markdown
DataFrame 和Series 之间的运算。
###Code
A = rng.randint(10,size = (3,5))
A
A - A[0]
df = pd.DataFrame(A, columns = list('HEXLO'))
df
df.iloc[0]
###Output
_____no_output_____
###Markdown
对应行数相减
###Code
df - df.iloc[0]
###Output
_____no_output_____
###Markdown
按照列数计算 每一列均减去指定的一列数据值
###Code
df.subtract(df['E'],axis = 0)
df
###Output
_____no_output_____
###Markdown
对部分数据进行计算
###Code
halfrow = df.iloc[0,::2] # 第1行,列数步长 = 2 取值
halfrow
df - halfrow #
df
###Output
_____no_output_____ |
Basics-Tutorial/Local-then-Global-Variables.ipynb | ###Markdown
Create a TensorFlow cluster with one worker node and one ps node.
###Code
cluster_spec = tf.train.ClusterSpec({'worker' : ['localhost:2223'], 'ps' : ['localhost:2222']})
server = tf.train.Server(cluster_spec,job_name='worker')
###Output
_____no_output_____
###Markdown
**Now launch run all the cells in the parameter server notebook** Create variables locally then makes global copy. One worker scenario
###Code
tf.reset_default_graph()
#create local graph like normal specifying the local device
with tf.device('/job:worker/task:0'):
a = tf.Variable([0.],name='a',collections=[tf.GraphKeys.LOCAL_VARIABLES])
b = tf.constant([100.])
loss = tf.abs(a-b)
optimizer = tf.train.GradientDescentOptimizer(.1)
grads,local_vars = zip(*optimizer.compute_gradients(loss,var_list=tf.local_variables()))
local_update = optimizer.apply_gradients(zip(grads,local_vars))
init_local = tf.local_variables_initializer()
#create the globabl copies on the ps
with tf.device('/job:ps/task:0'):
for v in tf.local_variables():
v_g = tf.get_variable('g/'+v.op.name,
shape = v.shape,
dtype = v.dtype,
trainable=True,
collections=[tf.GraphKeys.GLOBAL_VARIABLES,tf.GraphKeys.TRAINABLE_VARIABLES])
#gloabl updates
with tf.device('/job:worker/task:0'):
#this needs to be updated. Clearly not robust for any graph more complext
global_vars = tf.global_variables()
global_update = optimizer.apply_gradients(zip(grads,global_vars))
#create init op on the chief node
with tf.device('/job:worker/task:0'):
init_global = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
View the device placement of ops and variables
###Code
a_global = tf.global_variables()[0]
print(a.device)
print(b.device)
print(loss.device)
print(local_update.device)
print(global_update.device)
print(init_global.device)
print(init_local.device)
print(a_global.device)
###Output
/job:worker/task:0
/job:worker/task:0
/job:worker/task:0
/job:worker/task:0
/job:ps/task:0
/job:ps/task:0
/job:worker/task:0
/job:ps/task:0
###Markdown
Now, let's view the states of local and global variables as we do local then global updates
###Code
sess = tf.Session(target=server.target)
sess.run([init_local,init_global])
sess.run([a,a_global])
sess.run(local_update)
sess.run([a,a_global])
###Output
_____no_output_____
###Markdown
Notice that the state of the global variable hasn't changed
###Code
sess.run(global_update)
sess.run([a,a_global])
###Output
_____no_output_____ |
problems/oscillator_train_test.ipynb | ###Markdown
Imports
###Code
# Scinet modules
from scinet import *
import scinet.ed_oscillator as osc
#Other modules
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import numpy as np
###Output
_____no_output_____
###Markdown
Helper Functions (copied from oscillator example)
###Code
def osc_eqn(A_0, delta_0, b, kappa, t):
return np.real(A_0 * np.exp(-b / 2. * t) * np.exp(1 / 2. * np.sqrt(b**2 - 4 * kappa + 0.j) * t + 1.j * delta_0))
def gen_input(A_0, delta_0, b, kappa, tt_predicted):
tt_in = np.linspace(0, 5, 50)
in1 = np.array([osc_eqn(A_0, delta_0, b, kappa, tt_in) for _ in tt_predicted])
in2 = np.reshape(tt_predicted, (-1, 1))
out = in2 #dummy filler
return [in1, in2, out]
def pendulum_prediction(net, b, kappa):
tt_given = np.linspace(0, 10, 250)
tt_predicted = np.linspace(0, 10, 250)
a_given = osc_eqn(1, 0, b, kappa, tt_given)
a_precicted = net.run(gen_input(1, 0, b, kappa, tt_predicted), net.output).ravel()
fig = plt.figure(figsize=(3.4, 2.1))
ax = fig.add_subplot(111)
ax.plot(tt_given, a_given, color=orange_color, label='True time evolution')
ax.plot(tt_predicted, a_precicted, '--', color=blue_color, label='Predicted time evolution')
ax.set_xlabel(r'$t$ [$s$]')
ax.set_ylabel(r'$x$ [$m$]')
handles, labels = ax.get_legend_handles_labels()
lgd=ax.legend(handles, labels,loc='upper center', bbox_to_anchor=(0.6, 1.3), shadow=True, ncol=1)
fig.tight_layout()
return fig
def osc_representation_plot(net, b_range, kappa_range, step_num=100, eval_time=7.5):
bb = np.linspace(*b_range, num=step_num)
kk = np.linspace(*kappa_range, num=step_num)
B, K = np.meshgrid(bb, kk)
out = np.array([net.run(gen_input(1, 0, b, kappa, [eval_time]), net.mu)[0] for b, kappa in zip(np.ravel(B), np.ravel(K))])
fig = plt.figure(figsize=(net.latent_size*3.9, 2.1))
for i in range(net.latent_size):
zs = out[:, i]
ax = fig.add_subplot('1{}{}'.format(net.latent_size, i + 1), projection='3d')
Z = np.reshape(zs, B.shape)
surf = ax.plot_surface(B, K, Z, rstride=1, cstride=1, cmap=cm.inferno, linewidth=0)
ax.set_xlabel(r'$b$ [$kg/s$]')
ax.set_ylabel(r'$\kappa$ [$kg/s^2$]')
ax.set_zlabel('Latent activation {}'.format(i + 1))
if (i==2):
ax.set_zlim(-1,1) #Fix the scale for the third plot, where the activation is close to zero
ax.set_zticks([-1,-0.5,0,0.5,1])
fig.tight_layout()
return fig
blue_color='#000cff'
orange_color='#ff7700'
###Output
_____no_output_____
###Markdown
Input Variables
###Code
observation_size = 50
latent_size = 3
question_size = 1
answer_size = 1
dev_percent = 10
num_examples = 50000
###Output
_____no_output_____
###Markdown
Data creation and loading
###Code
osc.oscillator_data(num_examples, fileName='oscillator_example');
td, vd, ts, vs, proj = dl.load(dev_percent, 'oscillator_example')
###Output
_____no_output_____
###Markdown
Create and train neural network
###Code
# Create network object
net = nn.Network(observation_size, latent_size, question_size, answer_size)
# Print initial reconstruction loss (depends on initialization)
print(net.run(vd, net.recon_loss)) #default
print(net.run(vd, net.kl_loss))
# Train
net.train(50, 256, 0.001, td, vd)
# Check progress. It is recommended to use Tensorboard instead for this.
print(net.run(vd, net.recon_loss)) #default
print(net.run(vd, net.kl_loss))
# Plot prediction
b_damper = 0.6
k_spring = 10.0
pendulum_prediction(net, b_damper, k_spring);
%matplotlib tk
osc_representation_plot(net, [0.5, 1], [5, 10]);
###Output
_____no_output_____ |
content/notebook/.ipynb_checkpoints/intro_to_python-checkpoint.ipynb | ###Markdown
https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks
###Code
for 索引 in range(6):
print(索引)
from IPython.lib import passwd
passw
字串資料 = "這是一個字串資料"
print("字串資料的長度為:", len(字串資料))
print("-"*30)
位址 = 0
for 索引 in 字串資料:
print(索引)
print("--- 以上為列印索引 ---")
print(字串資料[位址])
print("--- 以上為列印字串資料數列 ---")
位址 = 位址 + 1
# 本程式在 demo
# print()
# def
# range()
# list()
# list 內容變化
# reversed()
# zip()
# if elif else
def 列印星號(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數-1 for x in 數列1]
反數列2 = reversed(數列1)
集合 = zip(數列2, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2):
if 數 == 索引[0] or 數 == 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號2(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數 for x in 數列1]
數列3 = [x+1 for x in 數列1]
反數列2 = reversed(數列2)
集合 = zip(數列3, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2):
if 數 == 索引[0] or 數 == 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號3(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數-1 for x in 數列1]
反數列2 = reversed(數列1)
集合 = zip(數列2, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2-1):
if 數 <= 索引[0] and 數 >= 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號4(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數 for x in 數列1]
數列3 = [x+1 for x in 數列1]
反數列2 = reversed(數列2)
集合 = zip(數列3, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2+1):
if 數 >= 索引[0] and 數 <= 索引[1]:
print("2", end="")
else:
print("1", end="")
print()
def 列印菱形(輸入變數):
列印星號3(輸入變數)
列印星號4(輸入變數-1)
列印菱形(11)
# 請將本程式改為菱形中間也要列印星號
def 增量列印(行數):
# 基本的 print() 函式列印
# 只要將 4 改為 11, 就可以重複列印 10 行
#for 列印行數 in range(1, 4):
for 列印行數 in range(1, 行數+1):
print("Welcome to Python3 ", end="")
for 列印個數 in range(列印行數):
print("*", end="")
# 完成多星號列印後, 必須額外要求跳行, 否則各行會列在同一行
print()
增量列印(10)
def 菱形(n):
數列1 = [x+n for x in range(0, n)]
數列2 = list(range(n, 0, -1))
數列3 = zip(數列1, 數列2)
for i in 數列3:
for j in range(2*n):
if j == i[0] or j == i[1]:
print("*", end="")
else:https://120.113.76.47:8888/notebooks/intro_to_python.ipynb#
print(" ", end="")
print()
數列4 = [x for x in range(2, n+1)]
數列5 = [x+n-2 for x in range(n, 0, -1)]
數列6 = zip(數列4, 數列5)
for i in 數列6:
for j in range(2*n):
if j == i[0] or j == i[1]:
print("*", end="")
else:
print(" ", end="")
print()
n = 20
菱形(n)
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
try:
import numpy as np
except:
exit()
from deap import benchmarks
def bohachevsky_arg0(sol):
return benchmarks.bohachevsky(sol)[0]
fig = plt.figure()
ax = Axes3D(fig, azim = -29, elev = 50)
# ax = Axes3D(fig)
X = np.arange(-15, 15, 0.5)
Y = np.arange(-15, 15, 0.5)
X, Y = np.meshgrid(X, Y)
Z = np.zeros(X.shape)
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Z[i,j] = bohachevsky_arg0((X[i,j],Y[i,j]))
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, norm=LogNorm(), cmap=cm.jet, linewidth=0.2)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
try:
import numpy as np
except:
exit()
from deap import benchmarks
def untuple(sol):
return benchmarks.himmelblau(sol)[0]
fig = plt.figure()
ax = Axes3D(fig, azim = -29, elev = 49)
X = np.arange(-6, 6, 0.1)
Y = np.arange(-6, 6, 0.1)
X, Y = np.meshgrid(X, Y)
Z = np.array(list(map(untuple, zip(X,Y))))
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, norm=LogNorm(), cmap=cm.jet, linewidth=0.2)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
###Output
_____no_output_____
###Markdown
https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks
###Code
for 索引 in range(6):
print(索引)
from IPython.lib import passwd
passw
字串資料 = "這是一個字串資料"
print("字串資料的長度為:", len(字串資料))
print("-"*30)
位址 = 0
for 索引 in 字串資料:
print(索引)
print("--- 以上為列印索引 ---")
print(字串資料[位址])
print("--- 以上為列印字串資料數列 ---")
位址 = 位址 + 1
# 本程式在 demo
# print()
# def
# range()
# list()
# list 內容變化
# reversed()
# zip()
# if elif else
def 列印星號(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數-1 for x in 數列1]
反數列2 = reversed(數列1)
集合 = zip(數列2, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2):
if 數 == 索引[0] or 數 == 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號2(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數 for x in 數列1]
數列3 = [x+1 for x in 數列1]
反數列2 = reversed(數列2)
集合 = zip(數列3, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2):
if 數 == 索引[0] or 數 == 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號3(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數-1 for x in 數列1]
反數列2 = reversed(數列1)
集合 = zip(數列2, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2-1):
if 數 <= 索引[0] and 數 >= 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號4(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數 for x in 數列1]
數列3 = [x+1 for x in 數列1]
反數列2 = reversed(數列2)
集合 = zip(數列3, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2+1):
if 數 >= 索引[0] and 數 <= 索引[1]:
print("2", end="")
else:
print("1", end="")
print()
def 列印菱形(輸入變數):
列印星號3(輸入變數)
列印星號4(輸入變數-1)
列印菱形(11)
# 請將本程式改為菱形中間也要列印星號
def 增量列印(行數):
# 基本的 print() 函式列印
# 只要將 4 改為 11, 就可以重複列印 10 行
#for 列印行數 in range(1, 4):
for 列印行數 in range(1, 行數+1):
print("Welcome to Python3 ", end="")
for 列印個數 in range(列印行數):
print("*", end="")
# 完成多星號列印後, 必須額外要求跳行, 否則各行會列在同一行
print()
增量列印(10)
def 菱形(n):
數列1 = [x+n for x in range(0, n)]
數列2 = list(range(n, 0, -1))
數列3 = zip(數列1, 數列2)
for i in 數列3:
for j in range(2*n):
if j == i[0] or j == i[1]:
print("*", end="")
else:https://120.113.76.47:8888/notebooks/intro_to_python.ipynb#
print(" ", end="")
print()
數列4 = [x for x in range(2, n+1)]
數列5 = [x+n-2 for x in range(n, 0, -1)]
數列6 = zip(數列4, 數列5)
for i in 數列6:
for j in range(2*n):
if j == i[0] or j == i[1]:
print("*", end="")
else:
print(" ", end="")
print()
n = 20
菱形(n)
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
try:
import numpy as np
except:
exit()
from deap import benchmarks
def bohachevsky_arg0(sol):
return benchmarks.bohachevsky(sol)[0]
fig = plt.figure()
ax = Axes3D(fig, azim = -29, elev = 50)
# ax = Axes3D(fig)
X = np.arange(-15, 15, 0.5)
Y = np.arange(-15, 15, 0.5)
X, Y = np.meshgrid(X, Y)
Z = np.zeros(X.shape)
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Z[i,j] = bohachevsky_arg0((X[i,j],Y[i,j]))
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, norm=LogNorm(), cmap=cm.jet, linewidth=0.2)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
try:
import numpy as np
except:
exit()
from deap import benchmarks
def untuple(sol):
return benchmarks.himmelblau(sol)[0]
fig = plt.figure()
ax = Axes3D(fig, azim = -29, elev = 49)
X = np.arange(-6, 6, 0.1)
Y = np.arange(-6, 6, 0.1)
X, Y = np.meshgrid(X, Y)
Z = np.array(list(map(untuple, zip(X,Y))))
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, norm=LogNorm(), cmap=cm.jet, linewidth=0.2)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
###Output
_____no_output_____ |
Random_Forest_(Ejemplo_2).ipynb | ###Markdown
###Code
#Carga de las librerías
import urllib.request
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
#Descargamos los datos desde internet!
file_name = 'dataR2.csv'
def download_file(file_name):
print('Descargando el dataset')
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00451/dataR2.csv'
urllib.request.urlretrieve(url, file_name)
download_file(file_name)
#Ahora si cargamos los datos en jupyter!
data = pd.read_csv(file_name)
data.head()
#Analicemos brevemente el dataset
data.shape
data.info()
data.isnull().sum()
###Output
_____no_output_____
###Markdown
**Insights**:* El dataset se compone de 10 columnas de las cuales 9 corresponden a variables independientes, que usaremos para predecir el target. * Classification es la variable a predecir. Todas las variables son numéricas, ya sea enteras o reales y no tiene valores nulos.
###Code
#Separamos en X e y
X = data.drop(["Classification"], axis=1)
y = data["Classification"]
X
y
y.unique()
#Separamos en train y test!
(X_train, X_test,y_train, y_test) = train_test_split(X,y,stratify=y,test_size=0.30,random_state=42)
#Creamos un arbol de decisión sencillo y lo fiteamos
tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)
X_test
y_test_pred = tree.predict(X_test) #Prediccion en Test
y_test_pred
y_test
from sklearn.metrics import accuracy_score
#Calculo el accuracy en Test
test_accuracy = accuracy_score(y_test, y_test_pred)
print('% de aciertos sobre el set de evaluación:',test_accuracy)
#Creamos un random forest!
model = RandomForestClassifier(random_state=42, n_estimators=100,
class_weight="balanced", max_features="log2")
model.fit(X_train, y_train)
y_test_pred = model.predict(X_test) #Prediccion en Test
y_test_pred
#Calculo el accuracy en Test
test_accuracy = accuracy_score(y_test, y_test_pred)
print('% de aciertos sobre el set de evaluación:',test_accuracy)
###Output
% de aciertos sobre el set de evaluación: 0.8285714285714286
|
segmentation-NMs/3_nm_seg_inference.ipynb | ###Markdown
3 - Model loading and NM size/yield analysisIn this notebook we will:1. Load an image for analysis.2. Load our previously-trained model.3. Use model to label the SEM image.4. Perform post-processing on model output to learn about our NM characteristics Note: A GPU instance is not necessary for this notebook as we will only be performing inference which is not as computationally-expensive as training. Install detectron2Again, we will be using Facebook's [detectron2](https://github.com/facebookresearch/detectron2) library to run the interence on our images to let's install it.
###Code
# install dependencies: (use cu101 because colab has CUDA 10.1)
!pip install -U torch==1.5 torchvision==0.6 -f https://download.pytorch.org/whl/cu101/torch_stable.html
!pip install cython pyyaml==5.1
!pip install -U 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
import torch, torchvision
print(torch.__version__, torch.cuda.is_available())
!gcc --version
# install detectron2:
!pip install detectron2==0.1.2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/index.html
# Setup detectron2 logger
import detectron2
from detectron2.utils.logger import setup_logger
setup_logger('logs')
# import some common libraries
import numpy as np
import pandas as pd
import seaborn as sns
import os, cv2, random, tifffile, json, datetime, time, urllib
from glob import glob
from google.colab.patches import cv2_imshow
from PIL import Image
from pathlib import Path
from matplotlib import pyplot as plt
plt.rc('axes', axisbelow=True)
# import some common detectron2 utilities
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2.utils.visualizer import Visualizer
from detectron2.data import DatasetCatalog, MetadataCatalog
# Define the classes in our dataset
class_dict = {'slit': '1',
'nanomembrane': '2',
'parasitic': '3',
'bottom_nucleus': '4',
'side_nucleus': '5',
'nanowire': '6',
'overgrowth': '7'}
# Define a dict that maps class number back to class name too
inv_class_dict = dict(map(reversed, class_dict.items()))
# Define some paths/constants that will be useful later
desired_mag = 50000 # Used to filter the TIFF input files
root = Path('./DeepSEM/segmentation-NMs/')
dataset_dir = root.joinpath('datasets')
output_dir = root.joinpath('output')
models_dir = root.joinpath('trained_models')
github_url = 'https://github.com/Martin09/' + str(root).replace('/','/trunk/')
imgs_zip = dataset_dir.joinpath('Nick_NMs_allrawimgs.zip')
imgs_dir = dataset_dir.joinpath(imgs_zip.stem)
imgs_google_drive_id = '1M2_0GLScsNY53w8hU2xJdXtisESfkOqI'
test_dir = imgs_dir.joinpath('test')
train_dir = imgs_dir.joinpath('train')
dataset_root_name = 'nm_masks'
train_name = dataset_root_name + '_train'
test_name = dataset_root_name + '_test'
model_path = models_dir.joinpath('nm_seg_it20k_loss0.028.yaml')
weights_path = models_dir.joinpath('nm_seg_it20k_loss0.028.pth')
weights_google_drive_id = '1btMy-EyU2sTSSPQk8kf663DYgO-sD3kR'
###Output
_____no_output_____
###Markdown
3.1 - Unpack and load our images
###Code
# # Optional: Save everything to your own GoogleDrive
# from google.colab import drive
# drive.mount('/content/gdrive/')
# %cd "/content/gdrive/My Drive/path/to/save/location"
# Clone just the relevant folder from the DeepSEM repo
!rm -rf $root
!apt install subversion
!svn checkout $github_url $root
# # Alternative: Clone whole DeepSEM repository
# !rm -rf DeepSEM # Remove folder if it already exists
# !git clone https://github.com/Martin09/DeepSEM
###Output
_____no_output_____
###Markdown
For simplicity, I will use our previous training images for inference. However these could be replaced with any similar un-labelled SEM images.
###Code
# Check if .zip file exists, if not, download it from Google Drive
if imgs_zip.exists():
print('Dataset already exists. Skipping download!')
else:
print('Dataset does not exist... Downloading!')
!gdown --id $imgs_google_drive_id -O $imgs_zip
# Unzip raw dataset
!rm -rf $imgs_dir
!unzip -o $imgs_zip -d $imgs_dir
###Output
_____no_output_____
###Markdown
Now we will sort the input files which have many different magnifications into images that only have the desired magnification (50k in this case).
###Code
in_files = list(imgs_dir.rglob('*.tif'))
images = []
# Start to loop over all TIFF files
for file in in_files:
# Open each file using the TiffFile library
with tifffile.TiffFile(file) as tif:
# Extract magnification data
mag = tif.sem_metadata['ap_mag'][1]
if type(mag) is str: # Apply correction for "k" ex: mag = "50 k"
mag = float(mag.split(' ')[0]) * 1000
else:
mag = float(mag)
# Only filter the images that have the magnification that we are interested in
if not mag == desired_mag:
continue
images.append(file)
###Output
_____no_output_____
###Markdown
Load a random image and show it.
###Code
im_path = random.sample(images,1)[0]
im = cv2.imread(str(im_path), cv2.IMREAD_GRAYSCALE)
print(im.shape)
cv2_imshow(im)
###Output
_____no_output_____
###Markdown
Do some pre-processing to get it ready to feed into our model.
###Code
# Model expects an RGB image, so copy the greyscale data into other 2 channels
im_RGB = np.repeat(im[:, :, np.newaxis], 3, axis=2)
print(im_RGB.shape)
cv2_imshow(im_RGB)
# For use later
img_h = im_RGB.shape[0]
img_w = im_RGB.shape[1]
###Output
_____no_output_____
###Markdown
3.2 - Load our modelNow we will load a trained model and use it to label the above image. First we load a default config with `get_cfg()` and we then overwrite some of its parameters with our saved YAML configuration file. One important point is that we need to have `cfg.MODEL.WEIGHTS` set to point to the weights file. As this file can be quite big (>300MB) and since Github isn't designed to host big binary files, I have saved the weights for this model on my Google Drive instead. However, if you have your weights saved locally (ex: on your Google Drive), you can skip this download.
###Code
# Check if .zip file exists, if not, download it from Google Drive
if weights_path.exists():
print('Dataset already exists. Skipping download!')
else:
print('Dataset does not exist... Downloading!')
!gdown --id $weights_google_drive_id -O $weights_path
###Output
_____no_output_____
###Markdown
Now we can go ahead with the rest of the configuration of the model.
###Code
cfg = get_cfg()
cfg.merge_from_file(model_path)
cfg.MODEL.WEIGHTS = str(weights_path)
cfg.MODEL.DEVICE = 'cpu' # CPU is enough for inference, no need for GPU
# If we have a lot of objects to detect, need to set higher # of proposals here:
cfg.MODEL.RPN.POST_NMS_TOPK_TEST = 1000
cfg.MODEL.RPN.PRE_NMS_TOPK_TEST = 1000
cfg.TEST.DETECTIONS_PER_IMAGE = 200
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # Set the testing threshold for this model
cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.2 # Non-max supression threshold
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(class_dict) # We have three classification classes
# Setting allowed input sizes (avoid scaling)
cfg.INPUT.MIN_SIZE_TEST = 0
cfg.INPUT.MAX_SIZE_TEST = 99999
# A bit of a hacky way to be able to use the DefaultPredictor:
# Register a "fake" dataset to then set the 'thing_classes' metadata
# (there is probably a better way to do this...)
cfg.DATASETS.TEST = ('placeholder')
DatasetCatalog.clear()
DatasetCatalog.register("placeholder", lambda _: None)
MetadataCatalog.get("placeholder").set(thing_classes=list(class_dict))
predictor = DefaultPredictor(cfg)
outputs = predictor(im_RGB)
print('Number of detected objects = {}'.format(len(outputs["instances"])))
# Verify outputs manually
# outputs["instances"].pred_classes
# outputs["instances"].pred_boxes
# outputs["instances"].scores
# We can use Visualizer to draw the predictions on the image.
v = Visualizer(im_RGB[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TEST[0]), scale=1.5)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(v.get_image()[:, :, ::-1])
###Output
_____no_output_____
###Markdown
3.4 - Post-processing model outputHowever, just getting the output from the model isn't enough. Now we have to do bit more work to post-process the output and extract things like nanomembrane yield, sizes and other interesting data!First lets divide up the output of the neural net for further processing:
###Code
cl = np.array(outputs["instances"].pred_classes.cpu()) # Classes
s = np.array(outputs["instances"].scores.cpu()) # Prediction scores
b = np.array([x.numpy() for x in outputs["instances"].pred_boxes]) # Bounding boxes
c = np.array(outputs["instances"].pred_boxes.get_centers()) # Bounding box centres
m = np.array([x.numpy() for x in outputs["instances"].pred_masks]) # Segmentation masks
###Output
_____no_output_____
###Markdown
Now we can loop over all the possible classes and display images with segmentation masks of each class individually.
###Code
for clas in range(len(class_dict)):
i_filt = list(np.argwhere(cl==clas).flatten()) # Choose only the indixes with specific class
print(f"{inv_class_dict[str(clas+1)]}:")
# We can use Visualizer to draw the predictions on the image.
v = Visualizer(im_RGB[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TEST[0]), scale=1.0)
v = v.draw_instance_predictions(outputs["instances"][[i_filt]].to("cpu"))
cv2_imshow(v.get_image()[:, :, ::-1])
###Output
_____no_output_____
###Markdown
Now, before we can start to mess around with dimensional analysis we first need to extract the pixel size from the raw TIF image:
###Code
with tifffile.TiffFile(im_path) as tif:
# Extract magnification data
mag = tif.sem_metadata['ap_mag'][1]
if type(mag) is str: # Apply correction for "k" ex: mag = "50 k"
mag = float(mag.split(' ')[0]) * 1000
else:
mag = float(mag)
# Extract pixel size data
pixel_size = float(tif.sem_metadata['ap_pixel_size'][1]) # nm
if 'µm' in tif.sem_metadata['ap_pixel_size'][2]: # Correction for um
pixel_size *= 1000
# Extract tilt data
tilt = tif.sem_metadata['ap_stage_at_t'][1] # degrees tilt
pixel_size_x = pixel_size # nm
pixel_size_y = pixel_size / np.cos(np.deg2rad(tilt)) # nm
###Output
_____no_output_____
###Markdown
Let's put the output into a handy Pandas Dataframe before any more processing:
###Code
# Define data structure
data = { 'class':[],
'class_num':[],
'score':[],
'bbox':[],
'bbox_centre':[],
'height':[],
'width':[],
'mask':[],
'area':[],
'area_bbox':[]}
# Loop over all objects
for i in range(len(outputs["instances"])):
data['class'].append(inv_class_dict[str(cl[i]+1)])
data['class_num'].append(cl[i])
data['score'].append(s[i])
data['bbox'].append(b[i])
data['bbox_centre'].append(c[i])
h = (b[i,3] - b[i,1]) * pixel_size_y
data['height'].append(h)
w = (b[i,2] - b[i,0]) * pixel_size_x
data['width'].append(w)
data['area_bbox'].append(w*h)
data['mask'].append(m[i])
data['area'].append(m[i].astype(int).sum() * pixel_size_x * pixel_size_y)
df = pd.DataFrame.from_dict(data)
###Output
_____no_output_____
###Markdown
Let's plot a simple bar graph of the number of objects in each class.
###Code
fig_size = (8,6)
fig = plt.figure(figsize=fig_size)
counts = df['class'].value_counts()
total = sum(counts)
values = [c/total*100 for c in counts]
labels = list(counts.index)
sns.barplot(x=labels, y=values) # height should be three times width);
for i, v in enumerate(values):
plt.text(i, v + np.max(values)*0.01, f'{v:.1f}% ({counts[i]:.0f})', color='k', ha='center')
plt.ylim([0, np.max(values)*(1.1)])
plt.title('Growth Structure Yield (count)')
plt.ylabel('Yield (%)')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Let's start with slit length/width.
###Code
df_slits = df[df['class']=='slit']
print(f"Mean slit width: {df_slits['height'].mean():.0f} +/- {df_slits['height'].std():.0f} nm")
print(f"Mean slit length: {df_slits['width'].mean():.0f} +/- {df_slits['width'].std():.0f} nm")
###Output
_____no_output_____
###Markdown
Can use the slit bounding boxes to plot a comparison of total area fraction:
###Code
# Calculate slit area using the bounding box height (slit width) and image width
# -> Avoids problems of weirdly-shaped slit segmentation masks
slit_area = 0
for h in df_slits['height']:
slit_area += h * (img_w * pixel_size_x)
print(f"slit area (bbox): {slit_area:.0f} nm2")
class_areas = []
# Loop over all classes
for clas in range(len(class_dict)):
df_filt = df[df['class_num']==clas]
# There isn't any object with this class in the current image
if df_filt.size == 0:
print(f"{inv_class_dict[str(clas+1)]} area: 0 nm2")
class_areas.append(0)
continue
# Stack all masks in this class together
overall_mask = df_filt['mask'].sum()
# This approach avoids double counting pixels if masks overlap with eachother
overall_mask = overall_mask.astype(bool).astype(int)
# Calculate area
area = overall_mask.sum() * pixel_size_x * pixel_size_y
print(f"{inv_class_dict[str(clas+1)]} area: {area:.0f} nm2")
# Add area to the areas list
class_areas.append(area)
# Remove the "slit" class entry of the areas array since we will use the bbox value from above
class_areas = class_areas[1:]
###Output
_____no_output_____
###Markdown
Now we can plot the total area of each class:
###Code
fig = plt.figure(figsize=fig_size)
values = [area/slit_area*100 for area in class_areas]
labels = [inv_class_dict[str(clas+1)] for clas in range(1, len(class_dict))]
sns.barplot(x=labels, y=values)
for i, v in enumerate(values):
plt.text(i, v + np.max(values)*0.01, f'{v:.1f}%\n({class_areas[i]:.0f} nm$^2$)', color='k', ha='center')
plt.ylim([0, np.max(values)*(1.1)])
plt.title('Growth Structure Yield (% of slit area)')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Then we can also look at nanomembrane dimensions:
###Code
df_nms = df[df['class']=='nanomembrane']
print(f"Mean nanomembrane width: {df_nms['height'].mean():.0f} +/- {df_nms['height'].std():.0f} nm")
print(f"Mean nanomembrane length: {df_nms['width'].mean():.0f} +/- {df_nms['width'].std():.0f} nm")
print(f"Max nanomembrane length: {df_nms['width'].max():.0f} nm")
###Output
_____no_output_____
###Markdown
If we want to compare the dimensions of all classes we can define a handy plotting function that we then use to plot different values:
###Code
# Define a helper function for easy box plotting
def make_box_plot(dat, x, x_lab):
# Create figure
fig = plt.figure(figsize=fig_size)
# Plot the orbital period with horizontal boxes
ax = sns.boxplot(x=x, y='class', data=dat, whis=[0, 100], palette="vlag")
ax.set(xlim=(0, dat[x].max()*1.2))
ax.xaxis.grid(True)
# Add in points to show each observation
g = sns.swarmplot(x=x, y='class', data=dat, size=5, color=".3", linewidth=0)
# Tweak the visual presentation
plt.xlabel(x_lab)
plt.ylabel("")
title = x_lab.split(" ")[0]
plt.title(f"{title} Distributions")
# Filter dataframe to exclude the slits, plot the data
df_no_slit = df[df['class']!='slit']
make_box_plot(df_no_slit,'height','Width (nm)');
# Filter dataframe to exclude the slits, plot the data
df_no_slit = df[df['class']!='slit']
make_box_plot(df_no_slit,'width','Length (nm)')
# Filter dataframe to exclude the slits, plot the data
df_no_slit = df[df['class']!='slit']
make_box_plot(df_no_slit,'area','Area (nm$^2$)')
# Plot a scatter plot of growth structure length vs width to try and see some trends
fig = plt.figure(figsize=fig_size)
ax = sns.scatterplot(x="width", y="height", hue="class", style="class", s=50, data=df_no_slit)
ax.grid()
ax.set(xlabel='Length (nm)')
ax.set(ylabel='Width (nm)')
ax.set(title='Length vs Width Plot')
# Add a horizontal line to represent median slit width
ax.axhline(df_slits['height'].median(), ls='--', color='r')
ax.text(0, df_slits['height'].median()*1.02, "Median Slit Width", color='r');
###Output
_____no_output_____ |
Atlantic-Vineyard/postpro/compare_unstable_AMR_SOWFA.ipynb | ###Markdown
Load the SOWFA reference data
###Code
# WRF Forcing data
# SOWFA profile directories
SOWFAdir = '../SOWFA-WRFforcing/winter-unstable/drivingData/'
Tfile = SOWFAdir+'/givenSourceT'
Ufile = SOWFAdir+'/givenSourceU_component_rotated'
tfluxfile = SOWFAdir+'/surfaceTemperatureFluxTable'
# Load the SOWFA data
zT = sowfa.readsection(Tfile, 'sourceHeightsTemperature')
sowfatime, sowfatemp = sowfa.readsection(Tfile, 'sourceTableTemperature',
splitfirstcol=True)
zMom = sowfa.readsection(Ufile, 'sourceHeightsMomentum')
t1, sowfa_momu = sowfa.readsection(Ufile, 'sourceTableMomentumX',
splitfirstcol=True)
t2, sowfa_momv = sowfa.readsection(Ufile, 'sourceTableMomentumY',
splitfirstcol=True)
t3, sowfa_tflux = sowfa.readplainfile(tfluxfile, splitfirstcol=True)
t2, sowfa_WS = sowfa.readsection(SOWFAdir+'/givenSourceU_speed_dir_rotated', 'sourceTableMomentumX',
splitfirstcol=True)
print("Loaded SOWFA profiles")
###Output
Loaded SOWFA profiles
###Markdown
Load the SOWFA results
###Code
SOWFAresultdir = '../SOWFA-results'
SOWFA_velocity = sowfa.readSOWFAresult(SOWFAresultdir+'/SOWFA_Vineyard-Winter-Unstable_timeAveraged_velocity.dat')
SOWFA_Restress = sowfa.readSOWFAresult(SOWFAresultdir+'/SOWFA_Vineyard-Winter-Unstable_timeAveraged_reynolds_stresses.dat')
SOWFA_Tflux = sowfa.readSOWFAresult(SOWFAresultdir+'/SOWFA_Vineyard-Winter-Unstable_timeAveraged_temperature_flux.dat')
SOWFA_temperature = sowfa.readSOWFAresult(SOWFAresultdir+'/SOWFA_Vineyard-Winter-Unstable_timeAveraged_temperature.dat')
###Output
_____no_output_____
###Markdown
Plot instantaneous horizontal velocity vs WRF
###Code
fig, axs = plt.subplots(1,len(comparetimes),figsize=(2.5*len(comparetimes),5), facecolor='w', dpi=150, sharey=True)
for it, time in enumerate(comparetimes):
ax=axs[it]
for case in comparedirs:
loaddir = case['dir']
prefix = case['prefix']
ls = case['ls']
print(loaddir)
profiledat=np.loadtxt(rootdir+'/'+loaddir+'/'+profiledir+'/'+prefix+'_velocity_%06i.dat'%time)
ax.plot(profiledat[:,4], profiledat[:,0], ls=ls, color=case['c'], label=case['label'])
# Plot the SOWFA data
stime = int(time/300)
SOWFA_Uhoriz = np.sqrt(sowfa_momu[stime,:]**2 + sowfa_momv[stime,:]**2)
ax.plot(SOWFA_Uhoriz, zMom, 'k.', label='SOWFA WRF')
#ax.plot(sowfa_WS[stime,:], zMom, 'k.', label='SOWFA WRF')
ax.set_title('Time: %0.1f hr'%(time/3600.0))
ax.set_xlim([0,10])
ax.set_ylim([0,400])
ax.set_xlabel('Uhoriz [m/s]')
ax.grid(ls=':', lw=0.5)
axs[-1].legend()
plt.suptitle('Horizontal velocity')
###Output
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
###Markdown
Plot instantaneous temperature vs WRF
###Code
fig, axs = plt.subplots(1,len(comparetimes),figsize=(2.5*len(comparetimes),5), facecolor='w', dpi=150, sharey=True)
for it, time in enumerate(comparetimes):
ax=axs[it]
for case in comparedirs:
loaddir = case['dir']
prefix = case['prefix']
ls = case['ls']
print(loaddir)
profiledat=np.loadtxt(rootdir+'/'+loaddir+'/'+profiledir+'/'+prefix+'_temperature_%06i.dat'%time)
ax.plot(profiledat[:,1], profiledat[:,0], ls=ls, color=case['c'], label=case['label'])
# Plot SOWFA
stime = int(time/300)
ax.plot(sowfatemp[stime,:], zMom, 'k.', label='SOWFA WRF')
ax.set_title('Time: %0.1f hr'%(time/3600.0))
ax.set_xlim([275, 278])
ax.set_ylim([0, 400])
ax.set_xlabel('T [K]')
ax.grid(ls=':', lw=0.5)
#ax.set_xlim([0,12.5])
axs[-1].legend()
plt.suptitle('Temperature profiles')
###Output
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
###Markdown
Plot hourly averaged velocity
###Code
fig, axs = plt.subplots(1,len(comparetimes),figsize=(2.5*len(comparetimes),5), facecolor='w', dpi=150, sharey=True)
for it, time in enumerate(comparetimes):
indx = int(time/3600)
ax=axs[it]
for case in comparedirs:
loaddir = case['dir']
prefix = case['prefix']
ls = case['ls']
try:
profiledat=np.loadtxt(rootdir+'/'+loaddir+'/'+profiledir+'/'+prefix+'_velocity_avg_%i_to_%i.dat'%(indx-1, indx))
umag = np.sqrt(profiledat[:,1]**2 + profiledat[:,2]**2)
ax.plot(umag, profiledat[:,0], ls=ls, color=case['c'], label=case['label'])
except:
pass
# Plot SOWFA result
if (indx <= SOWFA_velocity.shape[0]):
umag = np.sqrt(SOWFA_velocity[indx-1, :, 1]**2 + SOWFA_velocity[indx-1, :, 2]**2)
ax.plot(umag, SOWFA_velocity[indx-1, :, 0], 'k.', label='SOWFA', markersize=2)
ax.set_title('Avg: %i - %i hr'%(indx-1, indx))
ax.set_ylim([0,600])
ax.set_xlim([0, 10])
ax.grid(ls=':')
ax.set_xlabel('Velocity [$m/s$]')
axs[-1].legend()
plt.suptitle('Velocity')
###Output
_____no_output_____
###Markdown
Plot hourly averaged temperature
###Code
fig, axs = plt.subplots(1,len(comparetimes),figsize=(2.5*len(comparetimes),5), facecolor='w', dpi=150, sharey=True)
for it, time in enumerate(comparetimes):
indx = int(time/3600)
ax=axs[it]
for case in comparedirs:
loaddir = case['dir']
prefix = case['prefix']
ls = case['ls']
try:
profiledat=np.loadtxt(rootdir+'/'+loaddir+'/'+profiledir+'/'+prefix+'_temperature_avg_%i_to_%i.dat'%(indx-1, indx))
ax.plot(profiledat[:,1], profiledat[:,0], ls=ls, color=case['c'], label=case['label'])
except:
pass
# Plot SOWFA result
if (indx <= SOWFA_temperature.shape[0]):
ax.plot(SOWFA_temperature[indx-1, :, 1], SOWFA_temperature[indx-1, :, 0], 'k.', label='SOWFA', markersize=2)
ax.set_title('Avg: %i - %i hr'%(indx-1, indx))
ax.set_ylim([0,600])
ax.set_xlim([275.25, 280])
ax.grid(ls=':')
ax.set_xlabel('Velocity [$m/s$]')
axs[-1].legend()
plt.suptitle('Temperature')
###Output
_____no_output_____
###Markdown
Plot Hourly Averaged TKE
###Code
fig, axs = plt.subplots(1,len(comparetimes),figsize=(2.5*len(comparetimes),5), facecolor='w', dpi=150, sharey=True)
for it, time in enumerate(comparetimes):
indx = int(time/3600)
ax=axs[it]
for case in comparedirs:
loaddir = case['dir']
prefix = case['prefix']
ls = case['ls']
try:
profiledat=np.loadtxt(rootdir+'/'+loaddir+'/'+profiledir+'/'+prefix+'_reynoldsstresses_avg_%i_to_%i.dat'%(indx-1, indx))
TKE = 0.5*(profiledat[:,1]+profiledat[:,4]+profiledat[:,6])
TKE = profiledat[:,7]
ax.plot(TKE, profiledat[:,0], ls=ls, color=case['c'], label=case['label'])
except:
pass
# Plot SOWFA result
if (indx <= SOWFA_Restress.shape[0]):
ax.plot(SOWFA_Restress[indx-1, :, 7], SOWFA_Restress[indx-1, :, 0], 'k-', label='SOWFA')
ax.set_title('Avg: %i - %i hr'%(indx-1, indx))
ax.set_ylim([0,600])
ax.set_xlim([0, 0.55])
ax.grid(ls=':')
ax.set_xlabel('TKE [$m^2/s^2$]')
axs[-1].legend()
plt.suptitle('Turbulent Kinetic Energy')
###Output
_____no_output_____
###Markdown
Plot hourly averaged temperature flux
###Code
fig, axs = plt.subplots(1,len(comparetimes),figsize=(2.5*len(comparetimes),5), facecolor='w', dpi=150, sharey=True)
for it, time in enumerate(comparetimes):
indx = int(time/3600)
ax=axs[it]
maxscale=0
for case in comparedirs:
loaddir = case['dir']
prefix = case['prefix']
ls = case['ls']
print(loaddir)
try:
#profiledat=np.loadtxt(rootdir+'/'+loaddir+'/'+profiledir+'/'+prefix+'_temperaturefluxes_%06i.dat'%time)
profiledat=np.loadtxt(rootdir+'/'+loaddir+'/'+profiledir+'/'+prefix+'_temperaturefluxes_avg_%i_to_%i.dat'%(indx-1, indx))
ax.plot(profiledat[:,3], profiledat[:,0], ls=ls, color=case['c'], label=case['label'])
if np.max(profiledat[:,3])>maxscale: maxscale=np.max(profiledat[:,3])
except:
pass
# Plot SOWFA result
if (indx <= SOWFA_Tflux.shape[0]):
ax.plot(SOWFA_Tflux[indx-1, :, 3], SOWFA_Tflux[indx-1, :, 0], 'k-', label='SOWFA')
ax.set_title('Avg: %i - %i hr'%(indx-1, indx))
ax.set_ylim([0,600])
if maxscale>0.02:
ax.set_xlim([-maxscale, maxscale])
else:
ax.set_xlim([-0.011, 0.011])
ax.set_xlim([-0.011, 0.025])
ax.grid(ls=':')
ax.set_xlabel('<wT> [K-m/s]')
axs[-1].legend()
plt.suptitle('Vertical temperature flux')
###Output
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
AMRWind_unstable_testrotated
AMRWind_unstable_bigdomain2
|
Barcode Generation.ipynb | ###Markdown
 If you are finding the best AI solution and projects using python then connect with team moai at www.moai.blog and see how worlds change in every second with AI. if you are started with us then got best Computer Vision and Deep Learning contents which easier than you think. We created the moai website to show you what I believe is the best possible way to get your start. Follow me and I promise you’ll find your way in Computer Vision, Deep Learning field. If you are thinking about why moai? Then come and feel it. Like, share and subscribe to our YouTube channel "Moai Vision".website: www.moai.blog Follow us onGithub: https://github.com/moai-visionKaggle: https://www.kaggle.com/moaivisionFacebook: https://www.facebook.com/moaivisionInstagram: https://www.instagram.com/moai_vision "moai of today is the future of tomorrow." -Team Moai Project 1: Create 10000+ Barcodes per minute using python Required Package Url: https://pypi.org/project/python-barcode/>pip install PyQRCode Import Modules
###Code
import barcode
from barcode.writer import ImageWriter
from tqdm.auto import tqdm
from random import randint
###Output
_____no_output_____
###Markdown
Random Digit Generator
###Code
def digit_generator(n):
start=10**(n-1)
end=(10**n)-1
return randint(start,end)
digit_generator(10)#Test digit generator
###Output
_____no_output_____
###Markdown
Bar Code Generator
###Code
def bargenerator(data,image):
Type=barcode.get_barcode_class('code128')
code=Type(str(data),writer=ImageWriter())
code.save("barcodes/"+image)
bargenerator(123456789,"test") #Test bargenerator
###Output
_____no_output_____
###Markdown
Create List For Barcode Creation
###Code
number_list=[]
for i in tqdm(range(10000)):
value=digit_generator(16)
number_list.append(value)
number_list[-10:] #print first 10 values
len(number_list) #length of values
###Output
_____no_output_____
###Markdown
Barcode Generation From Number List
###Code
counter=0
for code in tqdm(number_list):
text='bar'+str(counter)
bargenerator(code,text)
counter+=1
###Output
_____no_output_____ |
Final_Experiments/UnderstandPolyAccLayer.ipynb | ###Markdown
Install libraries & version checkCode pipeline from the PNAS 2020 paper by Jiawei Zhuang et al.
###Code
# %%capture
# !pip install -U numpy==1.18.5
# !pip install h5py==2.10.0
'Comment above cell and restart runtime'
'Upload 3 arrays for OOA analysis'
'Check numpys version BEFORE and AFTER runtime restart'
import numpy as np
import matplotlib.pyplot as plt
print(np.__version__)
###Output
1.18.5
###Markdown
Setup
###Code
%%capture
# !git clone https://github.com/aditya5252/Multiprocessor_Advection_.git
!pip install git+https://github.com/JiaweiZhuang/data-driven-pdes@fix-beam
%tensorflow_version 1.x
import os
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import choice
import pandas as pd
import tensorflow as tf
tf.enable_eager_execution()
%matplotlib inline
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['font.size'] = 14
from google.colab import files # colab-specific utilities; comment out when running locally
tf.enable_eager_execution()
tf.__version__, tf.keras.__version__
import xarray
from datadrivenpdes.core import grids
from datadrivenpdes.core import integrate
from datadrivenpdes.core import models
from datadrivenpdes.core import tensor_ops
from datadrivenpdes.advection import equations as advection_equations
from datadrivenpdes.pipelines import model_utils
from datadrivenpdes.core import geometry
from datadrivenpdes.core import polynomials
# tf.keras.backend.set_floatx('float32')
import copy
def CD2_data(linear,init,c,ntime,N_x,delt,delx):
data_ls=[]
u=init
if linear == True:
for step in range(ntime):
data_ls.append(u) # At t = 0 ,step = 0 At t = ntime-1 , step = ntime-1
unew=u.copy()
for i in range(1,N_x-1):
unew[i]=u[i] + delt*( -c*(u[i+1]-u[i-1])/(2*delx) ) ## B.D. model
unew[0]=u[0] + delt*( -c*(u[1]-u[N_x-2])/(2*delx) )
unew[N_x-1]=unew[0]
u=unew
elif linear == False:
pass
data_sol=np.stack(data_ls)
return data_sol
def upwind_data(linear,init,c,ntime,N_x,delt,delx):
data_ls=[]
u=init
if ((linear == True) and (c>0)): ## Apply B.D. with preiodic boundary conditions
for step in range(ntime):
data_ls.append(u) # At t = 0 ,step = 0 At t = ntime-1 , step = ntime-1
unew=u.copy()
for i in range(1,N_x-1):
unew[i]=u[i] + delt*( -c*(u[i]-u[i-1])/delx ) ## B.D. model
unew[0]=u[0] + delt*( -c*(u[0]-u[N_x-2])/delx )
unew[N_x-1]=unew[0]
u=unew
elif ((linear == True) and (c<=0)): ## Apply F.D. with preiodic boundary conditions
for step in range(ntime):
data_ls.append(u) # At t = 0 ,step = 0 At t = ntime-1 , step = ntime-1
unew=u.copy()
for i in range(1,N_x-1):
unew[i]=u[i] + delt*( -c*(u[i+1]-u[i])/delx ) ## F.D. model
unew[0]=u[0] + delt*( -c*(u[1]-u[0])/delx )
unew[N_x-1]=unew[0]
u=unew
else:
print(c)
for step in range(ntime):
data_ls.append(u) # At t = 0 ,step = 0 At t = ntime-1 , step = ntime-1
unew=u.copy()
for i in range(1,N_x-1):
if u[i]>0:
unew[i]=u[i] + delt*( -u[i]*(u[i]-u[i-1])/delx)
else:
unew[i]=u[i] + delt*( -u[i]*(u[i+1]-u[i])/delx)
if u[0]>0:
unew[0]=u[0] + delt*( -u[0]*(u[0]-u[N_x-2])/delx)
else:
unew[0]=u[0] + delt*( -u[0]*(u[1]-u[0])/delx )
unew[N_x-1]=unew[0]
u=unew
data_sol=np.stack(data_ls)
return data_sol
def ic(A,K,PHI,x_mesh):
u=np.zeros_like(x_mesh)
for A1,k1 in zip(A,K):
for phi1 in PHI:
u+= A1*np.sin(k1*x_mesh + phi1)
return u
def solution_data(A,K,PHI,x_mesh,ntime,delt):
# data_ls=[ u_ana[i]+= amp[k1]*exp(-kappa[k1]*kappa[k1]*nu*tEnd)*sin(kappa[k1]*(x[i]-cx*tEnd)+phi[k2]) for i in range(ntime)]
data_ls=[]
for step in range(ntime):
u=np.zeros_like(x_mesh)
for A1,k1 in zip(A,K):
for phi1 in PHI:
u+= A1*np.sin(k1*(x_mesh-step*delt) + phi1)
data_ls.append(u)
data_sol=np.stack(data_ls)
return data_sol
'Find dt for Advection-1d equation'
def _dx_dt(data,adv_coff):
dx=2*np.pi/(data.shape[1])
return dx,dx*0.08/adv_coff
'Plot time propagation of dataset'
def plot_time_prop(data,t0,t1,t2,tr='UnTrained'):
plt.plot(data[t0],label=f'Max_{t0}={data[t0].max()}')
plt.plot(data[t1],label=f'Max_{t1}={data[t1].max()}')
plt.plot(data[t2],label=f'Max_{t2}={data[t2].max()}')
plt.ylabel('Concentration')
plt.xlabel('N_x')
plt.title(tr+'Model Predictions')
plt.legend()
plt.show()
'Create initial_state dictionary from dataset'
def create_init_state_from_2d_data(data,adv_coff):
c_init=data[0][np.newaxis,:,np.newaxis]
initial_state_obj = {
'concentration': c_init.astype(np.float32), # tensorflow code expects float32
'x_velocity': adv_coff*np.ones(c_init.shape, np.float32) * 1.0,
'y_velocity': np.zeros(c_init.shape, np.float32)
}
for k, v in initial_state_obj.items():
print(k, v.shape) # (sample, x, y)
return initial_state_obj
def create_init_state_from_Burger_init(c_data):
c_init=c_data[np.newaxis,:,np.newaxis]
initial_state_obj = {
'concentration': c_init.astype(np.float32), # tensorflow code expects float32
'x_velocity': c_init.astype(np.float32),
'y_velocity': np.zeros(c_init.shape, np.float32)}
for k, v in initial_state_obj.items():
print(k, v.shape) # (sample, x, y)
return initial_state_obj
'Create xarray DatArray from integrated dictionary'
def wrap_as_xarray(integrated):
dr = xarray.DataArray(
integrated['concentration'].numpy().squeeze(-1),
dims = ('time', 'sample', 'x'),
coords = {'time': time_steps, 'x': x_coarse.squeeze()}
)
return dr
def plotOOA(m,c,ls,err_ls):
plt.plot(np.log(ls),-m*np.log(ls)+c,'r',label=f'{m}order accurate')
plt.plot(np.log(ls),np.log(err_ls),'b',label='Log-Error')
plt.xlabel('LogNx')
plt.ylabel('LogError')
plt.legend()
plt.title('Order of Accuracy Plot')
plt.show()
def delay_(max_delay,prob_dist):
allowed_delays=np.arange(0.,max_delay)
delay_chosen=choice(allowed_delays,p=prob_dist)
return delay_chosen
def modify_data(sub_data,DAsync=None):
one_arr=np.ones_like(sub_data)
boundary_arr=np.zeros_like(sub_data)
boundary_arr[:,0]=1.
boundary_arr[:,-1]=1.
if (DAsync==0):
delay_arr=np.zeros_like(sub_data)
elif (DAsync==1):
delay_arr=np.zeros_like(sub_data)
for i in range(delay_arr.shape[0]):
delay_arr[i,0]=delay_(nlevels,prob_set)
delay_arr[i,-1]=delay_(nlevels,prob_set)
del_arr = delay_arr + boundary_arr + one_arr
sub_data_modified=np.multiply(del_arr,sub_data)
return sub_data_modified
# This data-generation code is a bit involved, mostly because we use multi-step loss function.
# To produce large training data in parallel, refer to the create_training_data.py script in source code.
def reference_solution(initial_state_fine, fine_grid, coarse_grid,
coarse_time_steps=256):
'What does this function do'
'Runs high-accuracy model at high-resolution'
'smaller dx, => More Nx => More Nt'
'Subsample with subsampling_factor=Resamplingfactor '
'High accuracy data achieved on a coarse grid'
'So essentially obtain coarse-grained, HIGH-ACCURACY, GROUND TRUTH data'
'Return dict of items'
'For my simple use-case , Resamplingfactor = 1 '
'Hence, given sync_data dataset(128 x 32)'
'sync_data dataset itself is taken as the ground truth'
'Hence we do not need this function to obtain Ground truth data '
# use high-order traditional scheme as reference model
equation = advection_equations.VanLeerAdvection(cfl_safety_factor=0.08)
key_defs = equation.key_definitions
# reference model runs at high resolution
model = models.FiniteDifferenceModel(equation, fine_grid)
# need 8x more time steps for 8x higher resolution to satisfy CFL
coarse_ratio = fine_grid.size_x // coarse_grid.size_x
steps = np.arange(0, coarse_time_steps*coarse_ratio+1, coarse_ratio)
# solve advection at high resolution
integrated_fine = integrate.integrate_steps(model, initial_state_fine, steps)
# regrid to coarse resolution
integrated_coarse = tensor_ops.regrid(
integrated_fine, key_defs, fine_grid, coarse_grid)
return integrated_coarse
def ground_dict_from_data(data):
conc_ground=tf.convert_to_tensor(data[:,np.newaxis,:,np.newaxis], dtype=tf.float32, dtype_hint=None, name=None)
ground_soln_dict = {
'concentration': conc_ground, # tensorflow code expects float32
'x_velocity': tf.ones_like(conc_ground, dtype=None, name=None) * 1.0,
'y_velocity': tf.zeros_like(conc_ground, dtype=None, name=None)
}
for k, v in ground_soln_dict.items():
print(k, v.shape) # (sample, x, y)
return ground_soln_dict
def make_train_data(integrated_coarse, coarse_time_steps=256, example_time_steps=4):
# we need to re-format data so that single-step input maps to multi-step output
# remove the last several time steps, as training input
train_input = {k: v[:-example_time_steps] for k, v in integrated_coarse.items()}
# merge time and sample dimension as required by model
n_time, n_sample, n_x, n_y = train_input['concentration'].shape
for k in train_input:
train_input[k] = tf.reshape(train_input[k], [n_sample * n_time, n_x, n_y])
print('\n train_input shape:')
for k, v in train_input.items():
print(k, v.shape) # (merged_sample, x, y)
# pick the shifted time series, as training output
output_list = []
for shift in range(1, example_time_steps+1):
# output time series, starting from each single time step
output_slice = integrated_coarse['concentration'][shift:coarse_time_steps - example_time_steps + shift + 1]
# merge time and sample dimension as required by training
n_time, n_sample, n_x, n_y = output_slice.shape
output_slice = tf.reshape(output_slice, [n_sample * n_time, n_x, n_y])
output_list.append(output_slice)
train_output = tf.stack(output_list, axis=1) # concat along shift_time dimension, after sample dimension
print('\n train_output shape:', train_output.shape) # (merged_sample, shift_time, x, y)
# sanity check on shapes
assert train_output.shape[0] == train_input['concentration'].shape[0] # merged_sample
assert train_output.shape[2] == train_input['concentration'].shape[1] # x
assert train_output.shape[3] == train_input['concentration'].shape[2] # y
assert train_output.shape[1] == example_time_steps
return train_input, train_output
###Output
_____no_output_____
###Markdown
Pol Acc layer Create Custom Poly Acc layer
###Code
from typing import Any, Iterator, Optional, Sequence, Tuple
import enum
class Method(enum.Enum):
"""Discretization method."""
FINITE_DIFFERENCE = 1
FINITE_VOLUME = 2
class PolynomialAccuracy(tf.keras.layers.Layer):
def __init__(self,stencils: Sequence[np.ndarray],method: Method,derivative_orders: Sequence[int],accuracy_order: int = 1,
bias_accuracy_order: Optional[int] = 1,grid_step: float = None,bias: np.ndarray = None,dtype: Any = np.float32,):
A, b = polynomials.constraints(stencils, method, derivative_orders, accuracy_order, grid_step)
if bias is None:
bias_grid = polynomials.coefficients(stencils, method, derivative_orders,bias_accuracy_order,grid_step)
bias = bias_grid.ravel()
norm = np.linalg.norm(np.dot(A, bias) - b)
if norm > 1e-8:
raise ValueError('invalid bias, not in nullspace')
# https://en.wikipedia.org/wiki/Kernel_(linear_algebra)#Nonhomogeneous_systems_of_linear_equations
_, _, v = np.linalg.svd(A)
input_size = A.shape[1] - A.shape[0]
if not input_size:
raise ValueError( # pylint: disable=g-doc-exception
'there is only one valid solution accurate to this order')
# nullspace from the SVD is always normalized such that its singular values
# are 1 or 0, which means it's actually independent of the grid spacing.
nullspace = v[-input_size:]
# nullspace /= (grid_step**np.array(derivative_orders)).prod()
self.input_size = input_size
self.output_size = b.size
self.nullspace = tf.convert_to_tensor(nullspace, dtype)
self.bias = tf.convert_to_tensor(bias, dtype)
super().__init__(trainable=False, dtype=dtype)
def compute_output_shape(self, input_shape):
return input_shape[:-1] + (self.output_size,)
def call(self, x: tf.Tensor) -> tf.Tensor:
# TODO(geraschenko): explore projecting out the nullspace from full rank
# inputs instead.
return self.bias + tf.tensordot(x, self.nullspace, axes=[-1, 0])
###Output
_____no_output_____
###Markdown
Constraint
###Code
b
b.size
sten=[np.array([-1.,0. ,1.])]
deriv=1
for acc in range(1,3):
print(f'This is constrained acc order = {acc}')
print(f'No. of rows is {deriv}(deriv)+{acc}(acc) = {deriv+acc}')
A,b=polynomials.constraints(accuracy_order=acc,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[1],stencils=sten,grid_step=0.25)
print(f'A is {A}\n')
print(f'b is {b}\n \n')
sten=[np.array([-2,-1.,0. ,1.,2])]
deriv=1
for acc in range(1,5):
print(f'This is constrained acc order = {acc}')
print(f'No. of rows is {deriv}(deriv)+{acc}(acc) = {deriv+acc}')
A,b=polynomials.constraints(accuracy_order=acc,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[1],stencils=sten,grid_step=0.25)
print(f'A is {A}\n')
print(f'b is {b}\n \n')
sten=[np.array([-1.,0. ,1.])]
deriv=2
for acc in range(1,3):
print(f'This is constrained acc order = {acc}')
print(f'No. of rows is {deriv}(deriv)+{acc}(acc) = {deriv+acc}')
A,b=polynomials.constraints(accuracy_order=acc,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[deriv],stencils=sten,grid_step=0.25)
print(f'A is {A}\n')
print(f'b is {b}\n \n')
###Output
This is constrained acc order = 1
No. of rows is 2(deriv)+1(acc) = 3
A is [[-1. 0. 1.]
[ 1. 0. 1.]
[ 1. 1. 1.]]
b is [0. 2. 0.]
This is constrained acc order = 2
No. of rows is 2(deriv)+2(acc) = 4
A is [[-1. 0. 1.]
[ 1. 0. 1.]
[ 1. 1. 1.]]
b is [0. 2. 0.]
###Markdown
Coefficient function
###Code
sten=[np.array([-2,-1.,0. ,1.,2])]
for gd in [0.25,0.5,1.5]:
coeff = polynomials.coefficients(sten, polynomials.Method.FINITE_DIFFERENCE,[2],grid_step=gd)
print(coeff)
p=np.array([2])
2**p
a=tf.convert_to_tensor(np.arange(12).reshape(3,4))
print(a)
b=tf.convert_to_tensor(np.arange(20).reshape(4,5))
print(b)
print(a@b)
print(tf.tensordot(a,b,axes=[-1, 0]))
a=tf.random.normal([3,4])
print(a)
b=tf.convert_to_tensor(np.arange(20.).reshape(4,5),dtype=tf.float32)
print(b)
print(a@b)
print(tf.tensordot(a,b,axes=[-1, 0]))
sten=[np.array([-2,-1.,0. ,1.,2])]
deriv=1
for acc in range(1,5):
print(f'This is constrained acc order = {acc}')
print(f'No. of rows is {deriv}(deriv)+{acc}(acc) = {deriv+acc}')
A,b=polynomials.constraints(accuracy_order=acc,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[1],stencils=sten,grid_step=0.25)
coeff = polynomials.coefficients(sten, polynomials.Method.FINITE_DIFFERENCE,[deriv],grid_step=0.25)
print(f'coeff is {coeff} \n')
print(f'A is {A}\n')
print(f'b is {b}\n')
print(A@coeff -b)
print('\n \n')
sten=[np.array([-1.,0. ,1.])]
for acc in range(1,3):
print('This is constrained acc order')
A,b=polynomials.constraints(accuracy_order=acc,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[1],stencils=sten,grid_step=0.25)
# coeff = polynomials.coefficients(sten, polynomials.Method.FINITE_DIFFERENCE,[1],grid_step=0.25)
# print(f'coeff are {np.squeeze(coeff)}\n')
# norm = np.linalg.norm(np.dot(A, coeff.flatten()) - b)
# print(f'Norm is {norm}')
print(f'A is {A}\n')
print(f'b is {b}\n')
# coff_solve=np.linalg.solve(A,b)
# print(f'x=AinvB=x={coff_solve}\n')
# _, _, v = np.linalg.svd(A)
# print(f'v is {v}\n')
if 1.2:
print('Hello')
polynomials.coefficients(sten, polynomials.Method.FINITE_DIFFERENCE,[1],grid_step=0.25)
bias_grid = coefficients(stencils, method, derivative_orders,bias_accuracy_order, grid_step)
x=tf.random.normal([1,64,1,3])
sten=[np.array([-1.,0. ,1.]),np.array([-1.,0. ,1.])]
fd=polynomials.Method.FINITE_DIFFERENCE
deriv=[1,0]
poly_layer=polynomials.PolynomialAccuracy(stencils=sten,method=fd,derivative_orders=deriv,accuracy_order=1,grid_step=0.5)
poly_layer(x)
###Output
_____no_output_____
###Markdown
Pol Acc
###Code
A = np.matrix([[1,3], [1,2], [1, -1], [2,1]])
rank = np.linalg.matrix_rank(A)
# U, s, V = np.linalg.svd(A, full_matrices = True)
U, s, V = np.linalg.svd(A, full_matrices = False)
print(f"rank is {rank}\n")
print(f'U shape is is {U.shape}\n')
print(f'Sigma shape is {s.shape}\n')
print(f'V shape is {V.shape}\n')
# t_U_A = np.transpose(U)
# nrow = t_U_A.shape[0]
# left_null_A = t_U_A[rank:nrow,:]
# left_null_A
# np.dot((left_null_A[0,:] + left_null_A[0,:]), A)
not 5
not 0
snew=np.zeros((4,2))
snew[0,0]=s[0]
snew[1,1]=s[1]
snew
U@snew@V
# Right null:
B = np.transpose(A)
rank = np.linalg.matrix_rank(B)
U, s, V = np.linalg.svd(B, full_matrices = True)
t_V_B = np.transpose(V)
ncols = t_V_B.shape[1]
right_null_B = t_V_B[:,rank:ncols]
right_null_B
np.dot(B, (right_null_B[:,0] + right_null_B[:,1]))
FINITE_DIFF = polynomials.Method.FINITE_DIFFERENCE
stencil=[-1, 0, 1]
coeff = polynomials.coefficients([np.array(stencil)], FINITE_DIFF,[1])
coeff
# a, b = polynomials.constraints([np.array([-0.5, 0.5])],method,derivative_orders=[0],accuracy_order=accuracy_order,grid_step=1.0)
sten=[np.array([-2,-1.,0. ,1.,2.])]
A,b=polynomials.constraints(accuracy_order=2,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[1],stencils=sten,grid_step=0.25)
coeff = polynomials.coefficients(sten, FINITE_DIFF,[1],grid_step=0.25)
print(f'coeff are {np.squeeze(coeff)}\n')
norm = np.linalg.norm(np.dot(A, coeff.flatten()) - b)
print(f'Norm is {norm}')
print(f'A is {A}\n')
print(f'b is {b}\n')
coff_solve=np.linalg.solve(A,b)
print(f'x=AinvB=x={coff_solve}\n')
_, _, v = np.linalg.svd(A)
print(f'v is {v}\n')
sten=[np.array([-1.,0.,1.]),np.array([0])]
A,b=polynomials.constraints(accuracy_order=2,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[1,0], stencils= sten )
coeff = polynomials.coefficients(sten, FINITE_DIFF,derivative_orders=[1,0],grid_step=0.25)
_, _, v = np.linalg.svd(A)
# print(f'v is {v}\n')
print(f'coeff are {np.squeeze(coeff)}')
print(f'A is {A}')
print(f'b is {b}')
norm = np.linalg.norm(np.dot(A, coeff.flatten()) - b)
print(f'Norm is {norm}')
# coff_solve=np.linalg.solve(A,b)
# print(f'x=AinvB=x={coff_solve}')
coeff.ravel()
coeff
2**coeff
sten=[np.array([-0.5 ,0.5]),np.array([-0.5 ,0.5])]
A,b=polynomials.constraints(accuracy_order=2,method=polynomials.Method.FINITE_DIFFERENCE,derivative_orders=[0,0], stencils= sten )
coeff = polynomials.coefficients(sten, FINITE_DIFF,derivative_orders=[0,0])
print(f'coeff are {np.squeeze(coeff)}')
print(f'A is {A}')
print(f'b is {b}')
norm = np.linalg.norm(np.dot(A, coeff.flatten()) - b)
print(f'Norm is {norm}')
np.dot(A, np.squeeze(coeff))
A.shape
coeff.flatten().shape
print(A@(coeff.flatten()))
print(A.dot([4 / 10, 1 / 10, 1 / 10, 4 / 10]))
[1,2,3,4]*2
###Output
_____no_output_____
###Markdown
Results Define & Initialize NN model
###Code
res=2**6
numPE=1
grid_length = 2*np.pi
fine_grid_resolution = res
# 1d domain, so only 1 point along y dimension
fine_grid = grids.Grid(
size_x=fine_grid_resolution, size_y=1,
step=grid_length/fine_grid_resolution
)
x_fine, _ = fine_grid.get_mesh()
print(x_fine.shape)
CFL,u0,tend=0.08,1.,15.
dx=grid_length/len(x_fine)
dt=dx*CFL/abs(u0)
N_t=int(tend//dt)
time_steps=np.arange(N_t)
initS=[[1],[1],[0]]
data_ana=solution_data(initS[0],initS[1],initS[2],x_fine[:,0],N_t,dt)
'Create initial state from data'
initial_state=create_init_state_from_2d_data(data_ana,u0)
# model1=models.conv2d_stack(num_outputs=10,num_layers=5,filters=32, kernel_size=5,activation='relu')
model_nn = models.PseudoLinearModel(advection_equations.FiniteDifferenceAdvection(0.08), fine_grid,
num_time_steps=10,stencil_size=3, kernel_size=(3,1), num_layers=5, filters=16,constrained_accuracy_order=1,
learned_keys = {'concentration_x', 'concentration_y'}, activation='relu',)
integrated_UT1 = integrate.integrate_steps(model_nn, initial_state, time_steps)
model_nn(initial_state).shape
print('Input to model_nn is dict initial_state \n with keys conc, x-vel y-vel')
print(initial_state['concentration'].shape)
print('The _apply_model method outputs \n delc/dex array \n & \n delc/dey array')
print(model_nn._apply_model(initial_state).keys())
print(list(model_nn._apply_model(initial_state).values())[0].shape)
print(list(model_nn._apply_model(initial_state).values())[1].shape)
print(integrated_UT1['concentration'].shape)
'First analyze method _apply_model of PseudoLinearMode Class'
'Analyze core_model_func i.e. conv_2d_stack'
from typing import (
Any, Dict, List, Optional, Mapping, Set, TypeVar, Tuple, Union,
)
T = TypeVar('T')
def sorted_values(x: Dict[Any, T]) -> List[T]:
"""Returns the sorted values of a dictionary."""
return [x[k] for k in sorted(x)]
def stack_dict(state: Dict[Any, tf.Tensor]) -> tf.Tensor:
"""Stack a dict of tensors along its last axis."""
return tf.stack(sorted_values(state), axis=-1)
def conv2d_stack(num_outputs, num_layers=5, filters=32, kernel_size=5,
activation='relu', **kwargs):
"""Create a sequence of Conv2DPeriodic layers."""
model = tf.keras.Sequential()
model.add(tf.keras.layers.Lambda(stack_dict))
for _ in range(num_layers - 1):
layer = models.Conv2DPeriodic(
filters, kernel_size, activation=activation, **kwargs)
model.add(layer)
model.add(models.Conv2DPeriodic(num_outputs, kernel_size, **kwargs))
return model
'Check function sorted_values'
dc={'ab':456,'xy':1,'rt':1234}
print(dc)
print(sorted_values(dc))
'Check function stack_dict'
stack_ls=stack_dict(initial_state)
print(stack_dict(initial_state).shape)
'''Stacks initial_state concen,x-vel,y-vel in alphabetical order
along last axis'''
stack_dict(initial_state)[...,2].shape
'Check function entire conv2d_stack'
for i in range(20):
coreModel = conv2d_stack(num_outputs = 3, num_layers=5, filters=32, kernel_size=3,
activation='relu')
print(coreModel(initial_state).shape)
def conv2d_stack(num_outputs, num_layers=5, filters=32, kernel_size=5,
activation='relu', **kwargs):
"""Create a sequence of Conv2DPeriodic layers."""
model = tf.keras.Sequential()
model.add(tf.keras.layers.Lambda(stack_dict))
for _ in range(num_layers - 1):
layer = models.Conv2DPeriodic(
filters, kernel_size, activation=activation, **kwargs)
model.add(layer)
model.add(models.Conv2DPeriodic(num_outputs, kernel_size, **kwargs))
return model
m1 = tf.keras.Sequential()
print(type(m1(initial_state)))
m1.add(tf.keras.layers.Lambda(stack_dict))
print(type(m1(initial_state)))
print(m1(initial_state).shape)
# layer1=models.Conv2DPeriodic(filters=9, kernel_size=(3.1), activation='relu')
layer1= tf.keras.layers.Conv2D(filters=9, kernel_size=(7,1), activation='relu')
m1.add(layer1)
print(m1(initial_state).shape)
'Analyze Conv2dPeriodic class '
in_sh=stack_ls.shape
print(in_sh)
print(in_sh[:-1]+(9,)) # For 9 filters used in convolution
stack_ls.shape
stack_ls[0,:,0,2]
# tensor_ops.pad_periodic_2d(stack_ls,))
padded_=tensor_ops._pad_periodic_by_axis(stack_ls, [1, 1],1)
print(padded_.shape)
padded_[0,:,0,2]
###Output
Input to model_nn is dict initial_state
with keys conc, x-vel y-vel
(1, 64, 1)
The _apply_model method outputs
delc/dex array
&
delc/dey array
dict_keys(['concentration_x', 'concentration_y'])
(1, 64, 1)
(1, 64, 1)
(1909, 1, 64, 1)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
(1, 64, 1, 3)
<class 'dict'>
<class 'tensorflow.python.framework.ops.EagerTensor'>
(1, 64, 1, 3)
(1, 58, 1, 9)
(1, 64, 1, 3)
(1, 64, 1, 9)
(1, 66, 1, 3)
###Markdown
Post convolution
###Code
print(model_nn.learned_keys)
print(model_nn.fixed_keys)
l,f=(models.normalize_learned_and_fixed_keys(model_nn.learned_keys, model_nn.fixed_keys,advection_equations.FiniteDifferenceAdvection(0.08)))
print(l)
print(f)
coreModel = conv2d_stack(num_outputs = 12, num_layers=5, filters=32, kernel_size=3,activation='relu')
print(coreModel(initial_state).shape)
for stenc in [11,5,7,9,3]:
coreModel = conv2d_stack(num_outputs = 12, num_layers=5, filters=32, kernel_size=3,activation='relu')
out_layers = models.build_output_layers(equation=advection_equations.FiniteDifferenceAdvection(0.08), grid=fine_grid, learned_keys=model_nn.learned_keys, stencil_size=stenc,
initial_accuracy_order=1,constrained_accuracy_order=1, layer_cls=models.VaryingCoefficientsLayer,predict_permutations=True)
print(type(out_layers))
for key in out_layers:
print(key)
for key in out_layers:
print(out_layers[key].kernel_size)
size_splits = [out_layers[key].kernel_size for key in out_layers]
size_splits
size_splits
net=coreModel(initial_state)
net.shape
heads = tf.split(net, size_splits, axis=-1)
print(type(heads))
print(len(heads))
print(heads[0].shape)
print(heads[1].shape)
out_layers
out_layers.items()
for key,layer in out_layers.items():
print(key)
print(layer)
print('\n')
for head in heads:
print(head.shape)
for (key, layer), head in zip(out_layers.items(), heads):
print(key)
print(layer)
print(head.shape)
print('\n')
for (key, layer), head in zip(out_layers.items(), heads):
print(key)
print(layer.input_key)
print(head.shape)
print('\n')
result = {}
for (key, layer), head in zip(out_layers.items(), heads):
input_tensor = initial_state[layer.input_key]
'Input tensor of shape 1,64,1'
'polyAcc layer takes Input : '
'(1, 64, 1, 118)-delc/delx '
'(1, 64, 1 )-c '
'result[concx] = gives delc/delx'
'result[concy] = gives delc/dely'
result[key] = layer([head, input_tensor])
print(type(result))
print(len(result))
for key,tens in result.items():
print(key)
print(tens.shape)
print('\n')
###Output
{'concentration_x', 'concentration_y'}
{'y_velocity', 'x_velocity', 'concentration'}
{'concentration_x', 'concentration_y'}
{'y_velocity', 'x_velocity', 'concentration'}
###Markdown
Create standard schemes data
###Code
datu=upwind_data(linear=True,init=initial_state['concentration'].squeeze(),c=u0,ntime=N_t,N_x=len(x_fine),delt=dt,delx=dx)
datc=CD2_data(linear=True,init=initial_state['concentration'].squeeze(),c=u0,ntime=N_t,N_x=len(x_fine),delt=dt,delx=dx)
###Output
_____no_output_____
###Markdown
At t = tend Compare 3 schemes
###Code
def compare_t(nt):
plt.rcParams.update({'font.size': 18})
plt.figure(figsize=(8,8))
plt.plot(x_fine,datu[nt],'g',label='Upwind predictions')
plt.plot(x_fine,datc[nt],'r',label='CD2 predictions')
plt.plot(x_fine,integrated_T1['concentration'].numpy().squeeze()[nt],'m',label='Neural Network predictions',linestyle='dashed',linewidth=6, markersize=12)
plt.plot(x_fine,data_ana[nt],'b',label='True Solution')
plt.plot(x_fine,-np.ones_like(x_fine),'k')
plt.plot(x_fine,np.ones_like(x_fine),'k')
plt.xlabel('x')
plt.ylabel('Concentration')
plt.legend()
plt.grid()
# plt.title(f'Concentration plot at time step N_t = {nt}',y=1.08)
plt.title(f'Concentration plot at time step N_t = {nt}')
# plt.tight_layout()
plt.show()
compare_t(N_t-1)
# compare_t(30)
# compare_t(1000)
# compare_t(1500)
###Output
_____no_output_____
###Markdown
Compare Amplitude progression with time for 3 schemes
###Code
ampUp_avg = np.stack([np.mean(np.abs(datu[i])) for i in range(N_t)])
ampCD_avg = np.stack([np.mean(np.abs(datc[i])) for i in range(N_t)])
ampNN_avg = np.stack([np.mean(np.abs(integrated_T1['concentration'].numpy().squeeze()[i])) for i in range(N_t)])
ampGround_avg = np.stack([np.mean(np.abs(data_ana[i])) for i in range(N_t)])
plt.rcParams.update({'font.size': 18})
plt.figure(figsize=(8,8))
plt.plot(np.arange(N_t),ampUp_avg,'g',label='Upwind Scheme')
plt.plot(np.arange(N_t),ampCD_avg,'r',label='CD2 Scheme')
plt.plot(np.arange(N_t),ampNN_avg,'k',label='Neural Network')
markers_on = np.arange(0,1900,100).tolist()
plt.plot(np.arange(N_t),ampGround_avg,'D',markevery=markers_on,label='Ground Truth')
plt.ylabel('Mean Absolute Amplitude')
plt.xlabel('N_t = Time step')
plt.legend()
plt.title('Mean Amplitude with time')
plt.show()
plt.rcParams.update({'font.size': 18})
plt.figure(figsize=(8,8))
# plt.plot(np.arange(N_t),ampUp_avg,'g',label='Upwind Scheme')
# plt.plot(np.arange(N_t),ampCD_avg,'r',label='CD2 Scheme')
plt.plot(np.arange(N_t),ampNN_avg,'g',label='Neural Network')
plt.plot(np.arange(N_t),ampGround_avg,'.',label='Ground Truth')
plt.ylabel('Mean Absolute Amplitude')
plt.xlabel('N_t = Time step')
plt.legend()
plt.title('Mean Amplitude with time')
plt.show()
###Output
_____no_output_____
###Markdown
Compare Order of Accuracy
###Code
import numpy as np
import matplotlib.pyplot as plt
ne=np.loadtxt('nn')
ce=np.loadtxt('cd')
ue=np.loadtxt('up')
nl=2**(np.arange(5,9))
def OOA(x,y,m,c,mylabel='Neural Network Error'):
plt.rcParams.update({'font.size': 20})
plt.figure(figsize=(8,6))
plt.plot(np.log(x),np.log(y),'o-',label=mylabel,linestyle='-.',linewidth=3, markersize=12)
plt.plot(np.log(x),-m*np.log(x)+c,'o-',label=f'Line with slope = -{m}',linestyle='-',linewidth=3, markersize=12)
plt.ylabel('log(err_MAE)')
plt.xlabel('log(N_x)')
plt.grid()
plt.title(' log-log accuracy order')
plt.legend()
plt.show()
def OOA_loop(x,y,m_ls,c_ls,mylabel='Neural Network'):
plt.rcParams.update({'font.size': 20})
plt.figure(figsize=(8,6))
plt.plot(np.log(x),np.log(y),'o-',label=mylabel+' Error',linestyle='-.',linewidth=3, markersize=12)
for i in range(len(m_ls)):
plt.plot(np.log(x),-m_ls[i]*np.log(x)+c_ls[i],'o-',label=f'Line with slope = -{m_ls[i]}',linestyle='-',linewidth=3, markersize=12)
plt.ylabel('log(err_MAE)')
plt.xlabel('log(N_x)')
plt.grid()
plt.title(f'{mylabel} log-log accuracy order')
plt.legend()
plt.show()
OOA(nl,ne,2,0,'Neural Network Error')
OOA(nl,ue,1,0,'Upwind Scheme Error')
OOA(nl,ce,1,0,'CD2 Scheme Error')
OOA_loop(x=nl,y=ue,m_ls=[1,2,3],c_ls=[2,5,7],mylabel='Upwind Scheme')
OOA_loop(x=nl,y=ce,m_ls=[1,2,3],c_ls=[2,5,7],mylabel='CD2 Scheme')
OOA_loop(x=nl,y=ne,m_ls=[1,2,3],c_ls=[0.5,1.5,3],mylabel='Neural Network')
###Output
_____no_output_____
###Markdown
Other ideas
###Code
'N_x other than just powers of 2'
'''Mean solution amplitude with time
MAE with time
MAE with grid points increase'''
###Output
_____no_output_____ |
Pandas_challenege/PyCitySchools/PyCitySchool.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
school_complete = "Resources/schools_complete.csv"
student_complete= "Resources/students_complete.csv"
# Read School and Student Data File and store into Pandas Data Frames
school_data_df = pd.read_csv(school_complete)
student_data_df = pd.read_csv(student_complete)
# Combine the data into a single dataset
school_student_combined = pd.merge(school_data_df, student_data_df, how="left", on=["school_name", "school_name"])
school_student_combined
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the overall passing rate (overall average score), i.e. (avg. math score + avg. reading score)/2* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
total_schools = len(school_student_combined["School ID"].unique())
total_schools
total_students = len(school_student_combined["Student ID"].unique())
total_students
total_budget = school_data_df["budget"].sum()
total_budget
average_math = student_data_df["math_score"].mean()
average_math
average_reading = student_data_df["reading_score"].mean()
average_reading
overall_average_score = (average_math + average_reading)/2
overall_average_score
number_passing_math = student_data_df.loc[(student_data_df["math_score"]>=70)].count()["student_name"]
percent_passing_math = (number_passing_math)/(total_students)*100
print(percent_passing_math)
number_passing_reading = student_data_df.loc[(student_data_df["reading_score"]>=70)].count()["student_name"]
percent_passing_reading = (number_passing_reading)/(total_students)*100
percent_passing_reading
#Creat a dataframe to hold the above results
district_summary = pd.DataFrame({
"Total School":total_schools,
"Total Student":total_students,
"Total Budget":total_budget,
"Average Math Score":average_math,
"Average Reading Score":average_reading,
"Overall Average Score":overall_average_score,
"Percentage Passing Math":percent_passing_math,
"Percentage Passing Reading":percent_passing_reading
},index = [0])
district_summary
#give the displayed data cleaner formatting
district_summary["Percentage Passing Math"] = district_summary["Percentage Passing Math"].map("{:,.2f}%".format)
district_summary["Percentage Passing Reading"] = district_summary["Percentage Passing Reading"].map("{:,.2f}%".format)
district_summary["Overall Average Score"] = district_summary["Overall Average Score"].map("{:,.2f}%".format)
district_summary["Total Budget"] = district_summary["Total Budget"].map("{:,.2f}%".format)
district_summary["Total Student"] = district_summary["Total Student"].map("{:,.2f}%".format)
district_summary
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two) * Create a dataframe to hold the above results
###Code
#school Summary
by_school_name = school_student_combined.groupby(['school_name'])
print(by_school_name)
#school types
school_types = school_data_df.set_index('school_name')['type']
# total students by school
student_per_school = by_school_name['Student ID'].count()
# school budget
school_budget = school_data_df.set_index('school_name')['budget']
#per student budget
student_budget = school_data_df.set_index('school_name')['budget']/school_data_df.set_index('school_name')['size']
#avg scores by school
avgerage_math_score = by_school_name['math_score'].mean()
avgerage_reading_score = by_school_name['reading_score'].mean()
# % passing scores
passing_math = school_student_combined[school_student_combined['math_score'] >= 70].groupby('school_name')['Student ID'].count()/student_per_school
passing_reading = school_student_combined[school_student_combined['reading_score'] >= 70].groupby('school_name')['Student ID'].count()/student_per_school
overall_passing = school_student_combined[(school_student_combined['reading_score'] >= 70) & (school_student_combined['math_score'] >= 70)].groupby('school_name')['Student ID'].count()/student_per_school
school_summary = pd.DataFrame({
"School Type": school_types,
"Total Students": student_per_school,
"Student Budget": student_budget,
"Total School Budget": school_budget,
"Average Math Score": avgerage_math_score,
"Average Reading Score": avgerage_reading_score,
"Passing Math": passing_math,
"Passing Reading": passing_reading,
"Overall Passing Rate": overall_passing
})
school_summary
###Output
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x0000024FCCC01908>
###Markdown
Top Performing Schools (By Passing Rate) * Sort and display the top five schools in overall passing rate
###Code
sort_schools = school_summary.sort_values(["Overall Passing Rate"],ascending = False)# in"school_name"(by = "overall_passing_percent")
sort_schools.head(5)
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By Passing Rate) * Sort and display the five worst-performing schools
###Code
sort_schools = school_summary.sort_values(["Overall Passing Rate"],ascending = True)# in"school_name"(by = "overall_passing_percent")
sort_schools.head(5)
###Output
_____no_output_____
###Markdown
Math Scores by Grade * Create a table that lists the average Reading Score for students of each grade level (9th, 10th, 11th, 12th) at each school. * Create a pandas series for each grade. Hint: use a conditional statement. * Group each series by school * Combine the series into a dataframe * Optional: give the displayed data cleaner formatting
###Code
nine_grade_series = student_data_df.loc[student_data_df['grade'] =='9th'] #grade=True
ten_grade_series = student_data_df.loc[student_data_df['grade'] =='10th'] #grade=True
eleven_grade_series = student_data_df.loc[student_data_df['grade'] =='11th'] #grade=True
tweleve_grade_series = student_data_df.loc[student_data_df['grade'] =='12th'] #grade=True
nine_grade_by_school = (nine_grade_series.groupby(["school_name"]).mean()["math_score"])
ten_grade_by_school = (ten_grade_series.groupby(["school_name"]).mean()["math_score"])
eleven_grade_by_school = (eleven_grade_series.groupby(["school_name"]).mean()["math_score"])
tweleve_grade_by_school = (tweleve_grade_series.groupby(["school_name"]).mean()["math_score"])
series_df = pd.DataFrame({
"9th Grade Series": nine_grade_by_school,
"10th Gared Series": ten_grade_by_school,
"11th Grade Series": eleven_grade_by_school,
"12th Grade Series": tweleve_grade_by_school
})
# series_df = series_df[[
# "9th Grade Series",
# "10th Gared Series",
# "11th Grade Series",
# "12th Grade Series"]]
# series_df.index.name = None
series_df
###Output
_____no_output_____
###Markdown
Reading Score by Grade * Perform the same operations as above for reading scores
###Code
nine_grade_series = student_data_df.loc[student_data_df['grade'] =='9th'] #grade=True
ten_grade_series = student_data_df.loc[student_data_df['grade'] =='10th'] #grade=True
eleven_grade_series = student_data_df.loc[student_data_df['grade'] =='11th'] #grade=True
tweleve_grade_series = student_data_df.loc[student_data_df['grade'] =='12th'] #grade=True
nine_grade_by_school = (nine_grade_series.groupby(["school_name"]).mean()["reading_score"])
ten_grade_by_school = (ten_grade_series.groupby(["school_name"]).mean()["reading_score"])
eleven_grade_by_school = (eleven_grade_series.groupby(["school_name"]).mean()["reading_score"])
tweleve_grade_by_school = (tweleve_grade_series.groupby(["school_name"]).mean()["reading_score"])
series_df = pd.DataFrame({
"9th Grade Series": nine_grade_by_school,
"10th Gared Series": ten_grade_by_school,
"11th Grade Series": eleven_grade_by_school,
"12th Grade Series": tweleve_grade_by_school
})
# series_df = series_df[["9th Grade Series",
# "10th Gared Series",
# "11th Grade Series",
# "12th Grade Series"]]
# series_df.index.name = None
series_df
###Output
_____no_output_____
###Markdown
Scores by School Spending * Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following: * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * Overall Passing Rate (Average of the above two)
###Code
# Sample bins. Feel free to create your own bins.
spending_bins = [0, 585, 615, 645, 675]
group_names = ["<$585", "$585-615", "$615-645", "$645-675"]
school_spending_summary = school_summary.copy()
school_spending_summary["Spending Ranges (Student Budget)"] = pd.cut(school_summary["Student Budget"], spending_bins, labels=group_names)
school_spending_grouped = school_spending_summary.groupby("Spending Ranges (Student Budget)").mean()
school_spending_grouped
###Output
_____no_output_____
###Markdown
Scores by School Size * Perform the same operations as above, based on school size.
###Code
# Sample bins. Feel free to create your own bins.
size_bins = [0, 1000, 2000, 5000]
group_names = ["Small (<1000)", "Medium (1000-2000)", "Large (2000-5000)"]
school_summary["Total Students"] = pd.cut(school_summary["Total Students"], size_bins, labels=group_names)
school_summary
#group by size
school_size_grouped = school_summary.groupby("Total Students").mean()
school_size_grouped
###Output
_____no_output_____
###Markdown
Scores by School Type * Perform the same operations as above, based on school type.
###Code
school_type = pd.DataFrame()
school_type["Average Math Score"] = (school_summary.groupby(["School Type"]).mean()["Average Math Score"])
school_type["Average Reading Score"] = school_summary.groupby(["School Type"]).mean()["Average Reading Score"]
school_type["Average Passing Math"] = school_summary.groupby(["School Type"]).mean()["Passing Math"]
school_type["Average Passing Reading"] = school_summary.groupby(["School Type"]).mean()["Passing Reading"]
school_type["Overall Passing"] = school_summary.groupby(["School Type"]).mean()["Overall Passing Rate"]
school_type = pd. DataFrame({
"Average Math Score : School Math Score",
"Average Reading Score : Average Reading Score",
"Average Passing Math : Passing Math",
"Average Passing Reading : Passing Reading",
"Overall Passing : Overall Passing Rate"
})
school_type
###Output
_____no_output_____ |
analysis/Azure Public Dataset - Trace Analysis.ipynb | ###Markdown
Read File: vmtable.csv and deployment.csv
###Code
data_path = 'trace_data/vmtable/vmtable.csv'
headers=['vmid','subscriptionid','deploymentid','vmcreated', 'vmdeleted', 'maxcpu', 'avgcpu', 'p95maxcpu', 'vmcategory', 'vmcorecount', 'vmmemory']
trace_dataframe = pd.read_csv(data_path, header=None, index_col=False,names=headers,delimiter=',')
deployment_data_path = 'trace_data/deployment/deployment.csv'
deployment_headers=['deploymentid','deploymentsize']
deployment_trace_dataframe = pd.read_csv(deployment_data_path, header=None, index_col=False,names=deployment_headers,delimiter=',')
#Compute VM Lifetime based on VM Created and VM Deleted timestamps and transform to Hour
trace_dataframe['lifetime'] = np.maximum((trace_dataframe['vmdeleted'] - trace_dataframe['vmcreated']),300)/ 3600
trace_dataframe['corehour'] = trace_dataframe['lifetime'] * trace_dataframe['vmcorecount']
trace_dataframe.head()
###Output
_____no_output_____
###Markdown
General Statistics
###Code
vm_count = trace_dataframe.shape[0]
subscription_count = trace_dataframe['subscriptionid'].unique().shape[0]
deployment_count = trace_dataframe['deploymentid'].unique().shape[0]
total_vm_hour_available = trace_dataframe['lifetime'].sum()
total_core_hour_available = trace_dataframe['corehour'].sum()
print("Total Number of Virtual Machines in the Dataset: %d" % vm_count)
print("Total Number of Subscriptions in the Dataset: %d" % subscription_count)
print("Total Number of Deployments in the Dataset: %d" % deployment_count)
print("Total VM Hours Available in the Dataset: %f" % total_vm_hour_available)
print("Total Core Hours Available in the Dataset: %f" % total_core_hour_available)
###Output
Total Number of Virtual Machines in the Dataset: 2013767
Total Number of Subscriptions in the Dataset: 5958
Total Number of Deployments in the Dataset: 35941
Total VM Hours Available in the Dataset: 104371713.416676
Total Core Hours Available in the Dataset: 237815104.749834
###Markdown
Read SOSP Paper datasets
###Code
lifetime_sosp = pd.read_csv('sosp_data/lifetime.txt', header=0, delimiter='\t')
cpu_sosp = pd.read_csv('sosp_data/cpu.txt', header=0, delimiter='\t')
memory_sosp = pd.Series.from_csv('sosp_data/memory.txt', header=0, sep='\t')
core_sosp = pd.Series.from_csv('sosp_data/cores.txt', header=0, sep='\t')
category_sosp = pd.Series.from_csv('sosp_data/category.txt', header=0, sep='\t')
deployment_sosp = pd.read_csv('sosp_data/deployment.txt', header=0, delimiter='\t')
###Output
_____no_output_____
###Markdown
Plot Functions
###Code
TraceLegend = "Azure Public Dataset"
PaperLegend = "SOSP Paper"
def CPUPlot(df, sosp):
counts_AVG = pd.DataFrame(df.groupby('avgcpu').size().rename('Freq')).reset_index()
counts_P95 = pd.DataFrame(df.groupby('p95maxcpu').size().rename('Freq')).reset_index()
counts_AVG = counts_AVG.rename(columns={'avgcpu': 'Bucket'})
counts_P95 = counts_P95.rename(columns={'p95maxcpu': 'Bucket'})
counts_AVG['cum'] = counts_AVG['Freq'].cumsum() / counts_AVG['Freq'].sum() * 100
counts_P95['cum'] = counts_P95['Freq'].cumsum() / counts_P95['Freq'].sum() * 100
ax = counts_AVG.plot(x='Bucket', y='cum',linestyle='--', title="VM CPU Utilization",logx=False, legend=True, ylim=(0,100), yticks=range(0,110,20))
sosp.plot(x='bucket', y='avg', linestyle='-', logx=False, ax=ax)
counts_P95.plot(x='Bucket', y='cum', linestyle='--', logx=False, color='b', ax=ax)
sosp.plot(x='bucket', y='p95', linestyle='-', logx=False, color='g', ax=ax)
ax.text(9, 85, 'Average', size=11, weight='bold')
ax.text(80, 30, 'P95 Max', size=11, weight='bold')
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('CPU Utilization (%)')
ax.set_ylabel('CDF')
ax.legend([TraceLegend, PaperLegend], loc='best');
ax.minorticks_off()
def LifetimePlot(df, sosp):
counts_lifetime = pd.DataFrame(df.groupby('lifetime').size().rename('Freq')).reset_index()
counts_lifetime = counts_lifetime.rename(columns={'lifetime': 'bucket'})
counts_lifetime['cum'] = counts_lifetime['Freq'].cumsum() / counts_lifetime['Freq'].sum() * 100
ax = counts_lifetime[0:2500].plot(x='bucket', y='cum',linestyle='--', title="VM Lifetime",logx=False, legend=True, ylim=(0,100), yticks=range(0,110,10))
sosp[0:2500].plot(x='bucket', y='value', linestyle='-', logx=False, ax=ax)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('Lifetime (Hours)')
ax.set_ylabel('CDF')
ax.legend([TraceLegend, PaperLegend], loc='best');
def MemoryPlot(df, sosp):
dataset = (df['vmmemory'].value_counts(normalize=True)*100).sort_index().to_frame().T
paper = sosp.to_frame().T
frames = [dataset, paper]
result = pd.concat(frames)
ax = result.plot.bar(stacked=True, ylim=(0,100), title='VM Memory Distribution', width=0.5, align='center')
ax.set_xticklabels([TraceLegend, PaperLegend], rotation=0)
ax.legend(title='Memory Size (GB)', loc='upper center', bbox_to_anchor=(0.5, -0.1), ncol=4);
ax.set_ylabel('% of VMs')
def CorePlot(df, sosp):
dataset = (df['vmcorecount'].value_counts(normalize=True)*100).sort_index().to_frame().T
paper = sosp.to_frame().T
frames = [dataset, paper]
result = pd.concat(frames)
ax = result.plot.bar(stacked=True, ylim=(0,100), title='VM Cores Distribution', width=0.5, align='center')
ax.set_xticklabels([TraceLegend, PaperLegend], rotation=0)
ax.set_ylabel('% of VMs')
ax.legend(title='Core Count', loc='upper center', bbox_to_anchor=(0.5, -0.1), ncol=5);
def CategoryPlot(df, sosp):
dataset = pd.DataFrame(df.groupby('vmcategory')['corehour'].sum().rename('corehour'))
dataset = dataset.rename(columns={'vmcategory': 'Bucket'})
dataset['cum'] = dataset['corehour']/dataset['corehour'].sum() * 100
dataset= dataset.drop('corehour', 1)
dataset = dataset.sort_index().T
paper = sosp.to_frame().T
frames = [dataset, paper]
result = pd.concat(frames)
ax = result.plot.bar(stacked=True, title='VM Category Distribution', color=['lightskyblue', 'orange', '0.75'])
ax.set_ylabel('% of core hours')
ax.set_xticklabels([TraceLegend, PaperLegend], rotation=0)
ax.legend(["Delay-insensitive", "Interactive", "Unknown"], loc='upper center', title='Categories', bbox_to_anchor=(0.5, -0.10), ncol=3, fontsize=10.5);
def DeploymentPlot(df,sosp):
counts_deployment = pd.DataFrame(df.groupby('deploymentsize').size().rename('Freq')).reset_index()
counts_deployment = counts_deployment.rename(columns={'deploymentsize': 'bucket'})
counts_deployment.to_csv('deployment.txt', sep='\t', index=False)
counts_deployment['cum'] = counts_deployment['Freq'].cumsum() / counts_deployment['Freq'].sum() * 100
ax = counts_deployment[0:50].plot(x='bucket', y='cum',linestyle='--', title="Deployment Size",logx=False, legend=True, ylim=(0,100), yticks=range(0,110,20))
sosp[0:50].plot(x='bucket', y='value', linestyle='-', logx=False, ax=ax)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
ax.set_xlabel('# VMs')
ax.set_ylabel('CDF')
ax.legend([TraceLegend, PaperLegend], loc='best');
CPUPlot(trace_dataframe, cpu_sosp)
LifetimePlot(trace_dataframe, lifetime_sosp)
DeploymentPlot(deployment_trace_dataframe, deployment_sosp)
MemoryPlot(trace_dataframe, memory_sosp)
CorePlot(trace_dataframe, core_sosp)
CategoryPlot(trace_dataframe, category_sosp)
###Output
_____no_output_____ |
ipynb/Fxxking_Date_and_Time_in_Python.ipynb | ###Markdown
Fxxking Date and Time in Python* date string / datetime / timestamp / timezone / timedelta* time / datetime / pytz / dateutil
###Code
import time
import datetime
import pytz
import dateutil.parser
###Output
_____no_output_____
###Markdown
1. From ISO-8601 Date String to Datetime / Parsing2. From ISO-8601 Date String to Datetime UTC3. From Datetime UTC (Now) to Timestamp4. From Naive Date String to Datetime with Timezone5. From Naive Date String to Timestamp UTC6. From Naive Datetime to ISO-8601 Date String7. From Timestamp to Datetime UTC8. From UTC to Another Timezone9. Add and Substract Time with Timedelta 1. From ISO-8601 Date String to Datetime / Parsing
###Code
format = '%Y-%m-%dT%H:%M:%S%z'
datestring = '2018-04-27T16:44:44-0200'
date = datetime.datetime.strptime(datestring, format)
print(date)
display(date)
###Output
2018-04-27 16:44:44-02:00
###Markdown
2. From ISO-8601 Date String to Datetime UTC
###Code
format = '%Y-%m-%dT%H:%M:%S%z'
datestring = '2018-04-27T16:44:44-0200'
date = dateutil.parser.parse(datestring)
print(date)
display(date)
date = date.replace(tzinfo=pytz.utc) - date.utcoffset()
print(date)
display(date)
###Output
2018-04-27 16:44:44-02:00
###Markdown
3. From Datetime UTC (Now) to Timestamp
###Code
now = datetime.datetime.utcnow().timestamp()
print(now)
display(now)
###Output
1524870835.538517
###Markdown
4. From Naive Date String to Datetime with Timezone
###Code
#naive datetime
date = datetime.datetime.strptime('04/12/2018', '%d/%m/%Y')
date
#add proper timezone for the date
date = pytz.timezone('America/Los_Angeles').localize(date)
date
###Output
_____no_output_____
###Markdown
5. From Naive Date String to Timestamp UTC
###Code
#convert to UTC timezone
date = date.astimezone(pytz.UTC)
date
timestamp = time.mktime(date.timetuple())
timestamp
###Output
_____no_output_____
###Markdown
6. From Naive Datetime to ISO-8601 Date String
###Code
date = datetime.datetime.strptime('28/04/2018 18:23:45', '%d/%m/%Y %H:%M:%S')
date = pytz.timezone('America/Los_Angeles').localize(date)
date = date.isoformat()
date
date = datetime.datetime.strptime('28/04/2018 18:23:45', '%d/%m/%Y %H:%M:%S')
date = date.strftime('%d/%m/%Y %H:%M:%S')
date
###Output
_____no_output_____
###Markdown
7. From Timestamp to Datetime UTC
###Code
date = datetime.datetime.utcfromtimestamp(1524792702)
date = pytz.UTC.localize(date)
date
###Output
_____no_output_____
###Markdown
8. From UTC to Another Timezone
###Code
date = date.astimezone(pytz.timezone('America/Los_Angeles'))
date
###Output
_____no_output_____
###Markdown
9. Add and Substract Time with Timedelta
###Code
date = datetime.datetime(2018, 4, 28, 18, 20, 22)
date = date.astimezone(pytz.UTC)
date = date + datetime.timedelta(days=1)
date = date.astimezone(pytz.timezone('America/Los_Angeles'))
date
###Output
_____no_output_____ |
reports/2/hw2.ipynb | ###Markdown
Task 1> Use PCA to investigate the dimensionality of $𝑋_{train}$ and plot the first 16 PCA modes as 16 ×16 images
###Code
# Analyzing the first 100 singular values
pca = PCA()
pca.fit(X_train)
fig, ax = plt.subplots(1, 1, figsize=(8, 8))
ax.plot(np.log(pca.singular_values_[0:100]))
ax.set_xlabel('index $j$')
ax.set_ylabel('$\log(\sigma_j)$')
plot_digits(pca.components_, 4, 'First 16 PCA modes')
###Output
_____no_output_____
###Markdown
Task 2> How many PCA modes do you need to keep in order to approximate $𝑋_{train}$ up to 60%,80% and 90% in the Frobenius norm? Do you need the entire 16 ×16 image for each data point?
###Code
# define function to get the num of modes needed for keeping specific percent of Frobenius norm
F_norm = np.sqrt(sum(pca.singular_values_**2))
def find_num_of_modes(percent: float) -> int:
for _ in range(100):
approx_F_norm = np.sqrt(sum(pca.singular_values_[:_]**2))
if approx_F_norm >= F_norm * percent:
return _
percents = [0.6, 0.8, 0.9]
print('number of modes needed respectively:',
*list(map(find_num_of_modes, percents)))
###Output
number of modes needed respectively: 3 7 14
###Markdown
From this, we can see that not the entire 16$\times$16 image is needed for each data point. Only the first few or several modes are needed for approximation. Task 3> Train a classifier to distinguish the digits 1 and 8 via the following steps:* First, you need to write a function that extracts the features and labels of the digits 1 and 8 fromthe training data set. Let us call these 𝑋(1,8) and 𝑌(1,8).* Then project 𝑋(1,8) on the first 16 PCA modes of $𝑋_{train}$ computed in step 1, this should give you a matrix $𝐴_{train}$ which has 16 columns corresponding to the PCA coefficients of each feature and 455 rows corresponding to the total number of 1’s and 8’s in the training set. * Assign label −1 to the images of the digit 1 and label +1 to the images of the digit 8. This should result in a vector $𝑏_{train}$ ∈{−1,+1}$^{455}$.* Use Ridge regression or least squares to train a predictor for the vector $b_{train}$ by linearly combining the columns of $A_{train}$.* Report the training mean squared error (MSE) of your classifier$$MSE_{train}(1,8) = \frac{1}{\text{length of }𝑏_{train}}×‖𝐴_{train} \hat{𝛽} −𝑏_{train}‖_2^2$$* Report the testing MSE of your classifier $$MSE_{test}(1,8) = \frac{1}{\text{length of }𝑏_{test}}×‖𝐴_{test} \hat{𝛽} −𝑏_{test}‖_2^2$$ where you need to construct analogously the matrix $𝐴_{test}$ and $𝑏_{test}$ corresponding to the digits 1 and 8 from the test data set.
###Code
# perform 16 modes PCA
pca_16 = PCA(n_components=16)
# define fuction to extract relevant features with labels n1 ,n2
# return A for 16 modes PCA transforming A_train
# return b for b_train of labeling -1 to 1 and 1 to 8
def extract_relavent_features(n1: int, n2: int, X: np.array,
Y: np.array) -> Tuple[np.array, list]:
N = len(X)
index = [i for i in range(N) if Y[i] == n1 or Y[i] == n2]
A = pca_16.fit_transform(X)[index]
b = list(map(lambda x: -1 if Y[index][x] == n1 else 1, range(len(index))))
return A, b
# get the A_train, b_train as well as the A_test, b_test
A_train, b_train = extract_relavent_features(1, 8, X_train, Y_train)
A_test, b_test = extract_relavent_features(1, 8, X_test, Y_test)
# two Ridge regression for different alphas, 1 and 0.02
reg1 = Ridge(alpha=1)
reg1.fit(A_train, b_train)
reg2 = Ridge(alpha=0.02)
reg2.fit(A_train, b_train)
# calculating MSE for train and test set for alpha = 1
b = np.insert(reg1.coef_, 0, reg1.intercept_)
MSE_train = 1/len(b_train) * \
np.linalg.norm((np.dot(A_train, b[1:])+b[0]-b_train), 2)**2
MSE_test = 1/len(b_test) * \
np.linalg.norm((np.dot(A_test, b[1:])+b[0]-b_test), 2)**2
print(
f'For alpha = 1, the MSE for train set is {MSE_train} and {MSE_test} for test set'
)
# calculating MSE for train and test set for alpha = 0.02
b = np.insert(reg2.coef_, 0, reg2.intercept_)
MSE_train = 1/len(b_train) * \
np.linalg.norm((np.dot(A_train, b[1:])+b[0]-b_train), 2)**2
MSE_test = 1/len(b_test) * \
np.linalg.norm((np.dot(A_test, b[1:])+b[0]-b_test), 2)**2
print(
f'For alpha = 0.02, the MSE for train set is {MSE_train} and {MSE_test} for test set'
)
###Output
For alpha = 0.02, the MSE for train set is 0.07459894314853126 and 0.13507635517360442 for test set
###Markdown
We see that two different alphas give results that are very close. The smaller alpha =0.02 gives a slightly smaller MSE. So for the next part, we use alpha =0.02. Task 4> Use your code from step 3 to train classifiers for the pairs of digits (3,8) and (2,7) and report the training and test MSE’s. Can you explain the performance variations?
###Code
# for number 3 and 8
A_train, b_train = extract_relavent_features(3, 8, X_train, Y_train)
A_test, b_test = extract_relavent_features(3, 8, X_test, Y_test)
reg = Ridge(alpha=0.02)
reg.fit(A_train, b_train)
# calculating MSE for train and test set for alpha = 0.02
b = np.insert(reg.coef_, 0, reg.intercept_)
MSE_train = 1/len(b_train) * \
np.linalg.norm((np.dot(A_train, b[1:])+b[0]-b_train), 2)**2
MSE_test = 1/len(b_test) * \
np.linalg.norm((np.dot(A_test, b[1:])+b[0]-b_test), 2)**2
print(
f'For alpha = 0.02, the MSE for train set is {MSE_train} and {MSE_test} for test set'
)
# for number 2 and 7
A_train, b_train = extract_relavent_features(2, 7, X_train, Y_train)
A_test, b_test = extract_relavent_features(2, 7, X_test, Y_test)
reg = Ridge(alpha=0.02)
reg.fit(A_train, b_train)
# calculating MSE for train and test set for alpha = 0.02
b = np.insert(reg.coef_, 0, reg.intercept_)
MSE_train = 1/len(b_train) * \
np.linalg.norm((np.dot(A_train, b[1:])+b[0]-b_train), 2)**2
MSE_test = 1/len(b_test) * \
np.linalg.norm((np.dot(A_test, b[1:])+b[0]-b_test), 2)**2
print(
f'For alpha = 0.02, the MSE for train set is {MSE_train} and {MSE_test} for test set'
)
###Output
For alpha = 0.02, the MSE for train set is 0.09177852736527603 and 0.3742354809625476 for test set
|
Assignment_Day_5.ipynb | ###Markdown
- DAY 5 - **ASSIGNMENT - 1**---Write a Program to identify sub-list [ 1,1,15 ] is there in the given list in the same order, if yes print "it's a Match", if no then print "it's Gone" in function.>Example:>> Listy = [ 1,5,6,4,1,2,3,5 ]>> Listy = [ 1,5,6,4,1,2,3,6 ]
###Code
def sub_lists(mainlist , listFind ):
# store all the sublists
sublist = [[]]
for i in range( len(mainlist) -2 ):
for j in range( i + 1, len(mainlist) -1 ):
for k in range( j + 1, len(mainlist) ):
sub = [ mainlist[ i ], mainlist[ j ], mainlist[ k ] ]
sublist.append(sub)
if listFind in sublist:
print( "it's a Match" )
else:
print( "it's a Gone" )
return
sublistFind = [1, 1, 5]
x = [ 1,5,6,4,1,2,3,5 ]
y = [ 1,5,6,4,1,2,3,6 ]
sub_lists( mainlist = x, listFind = sublistFind )
sub_lists( mainlist = y, listFind = sublistFind )
###Output
it's a Match
it's a Gone
###Markdown
**ASSIGNMENT - 2**---Make a function for Prime numbers and use Filter to filter out all the prime numbers from 1-2500.
###Code
def printPrime( num ):
if num < 2:
return False
for lp in range(2,num):
if (num%lp ==0):
return False
return True
primeRange = filter(printPrime, range(1,2500) )
for p in primeRange:
print( p )
###Output
2
3
5
7
11
13
17
19
23
29
31
37
41
43
47
53
59
61
67
71
73
79
83
89
97
101
103
107
109
113
127
131
137
139
149
151
157
163
167
173
179
181
191
193
197
199
211
223
227
229
233
239
241
251
257
263
269
271
277
281
283
293
307
311
313
317
331
337
347
349
353
359
367
373
379
383
389
397
401
409
419
421
431
433
439
443
449
457
461
463
467
479
487
491
499
503
509
521
523
541
547
557
563
569
571
577
587
593
599
601
607
613
617
619
631
641
643
647
653
659
661
673
677
683
691
701
709
719
727
733
739
743
751
757
761
769
773
787
797
809
811
821
823
827
829
839
853
857
859
863
877
881
883
887
907
911
919
929
937
941
947
953
967
971
977
983
991
997
1009
1013
1019
1021
1031
1033
1039
1049
1051
1061
1063
1069
1087
1091
1093
1097
1103
1109
1117
1123
1129
1151
1153
1163
1171
1181
1187
1193
1201
1213
1217
1223
1229
1231
1237
1249
1259
1277
1279
1283
1289
1291
1297
1301
1303
1307
1319
1321
1327
1361
1367
1373
1381
1399
1409
1423
1427
1429
1433
1439
1447
1451
1453
1459
1471
1481
1483
1487
1489
1493
1499
1511
1523
1531
1543
1549
1553
1559
1567
1571
1579
1583
1597
1601
1607
1609
1613
1619
1621
1627
1637
1657
1663
1667
1669
1693
1697
1699
1709
1721
1723
1733
1741
1747
1753
1759
1777
1783
1787
1789
1801
1811
1823
1831
1847
1861
1867
1871
1873
1877
1879
1889
1901
1907
1913
1931
1933
1949
1951
1973
1979
1987
1993
1997
1999
2003
2011
2017
2027
2029
2039
2053
2063
2069
2081
2083
2087
2089
2099
2111
2113
2129
2131
2137
2141
2143
2153
2161
2179
2203
2207
2213
2221
2237
2239
2243
2251
2267
2269
2273
2281
2287
2293
2297
2309
2311
2333
2339
2341
2347
2351
2357
2371
2377
2381
2383
2389
2393
2399
2411
2417
2423
2437
2441
2447
2459
2467
2473
2477
###Markdown
**ASSIGNMENT - 3**---Make a Lambda function for capitalizing each words in the whole sentence passed using arguments. Then map all the sentences in a list with the created lambda function.
###Code
capitalizeWords = lambda words : (word.capitalize() for word in words)
capString = lambda string : ' '.join( capitalizeWords( string.split() ) )
#capString("is this okay")
listStrings = ['i am vishnu', 'who loves embedded', 'and python too']
final_list = list( map( capString , listStrings ) )
print( final_list )
###Output
['I Am Vishnu', 'Who Loves Embedded', 'And Python Too']
|
2021/Python-Maths/chapter3.ipynb | ###Markdown
Describing Data With StatisticsIn this chapter, we’ll use Python to explorestatistics so we can study, describe, andbetter understand sets of data. After look-ing at some basic statistical measures—the mean,median, mode, and range—we’ll move on to somemore advanced measures, such as variance and stan-dard deviation.
###Code
shortlist = [1, 2, 3]
sum(shortlist)
# len(shortlist)
###Output
_____no_output_____
###Markdown
Calculating Mean
###Code
def calculate_mean(numbers):
s = sum(numbers)
N = len(numbers)
# Calculate mean
mean = s/N
return mean
if __name__ == '__main__':
donations = [100, 60, 70, 900, 100, 200, 500, 500, 503, 600, 1000, 1200]
mean = calculate_mean(donations)
N = len(donations)
print('Mean donation over the last {0} days is {1}'.format(N, mean))
samplelist = [4, 1, 3]
samplelist.sort()
samplelist
###Output
_____no_output_____
###Markdown
Calculating Median
###Code
def calculate_median(numbers):
N = len(numbers)
numbers.sort()
# Find Median
if N % 2 == 0:
# if N is even
m1 = N/2
m2 = (N/2) + 1
# Convert to integer, match position
m1 = int(m1) - 1
m2 = int(m2) - 1
median = (numbers[m1] + numbers[m2])/2
else:
m = (N+1)/2
# Convert to integer, match position
m = int(m) - 1
median = numbers[m]
return median
if __name__ == '__main__':
donations = [100, 60, 70, 900, 100, 200, 500, 500, 503, 600, 1000, 1200]
median = calculate_median(donations)
N = len(donations)
print('Median donation over the last {0} days is {1}'.format(N, median))
###Output
Median donation over the last 12 days is 500.0
|
trainRandomHoldOut.ipynb | ###Markdown
Train a Low Rank Matrix Factorization Modelauthor: Andrew E. Davidson, [email protected] this notebook to train models using different hyper parameters. Hyper Parameter options:- numFeatures: the number of features to learn- filterPercent: removes genes that are missing more than filterPercent of values + low rank matrix factorization works well with sparse data. It only trains on observed values. How ever it is a good idea to remove genes that are missing a high percentage of values if for no other reason than the model will train much faster- holdOutPercent + enables you to control how much data you wish to train on + split() is called a second time to break the holdout set into a validation and test set of equal size output:Trained results, along with clean versions of our gene essentiality data are written to the same directory as the raw data file- *X*.csv: our low dimension matrix with shape numGenes x numLearned Features- *Theta*.csv: our low dimension matrix with shape cellLines x numLearned Features- *Y*.csv: a clean, filtered, tidy version of the gene essentiality data- *R*.csv: a knockout matrix with values = 1 for observed values else 0- *RTRain*, *Rvalidation*, *RTest*,knockout matrix you can use to select data sets from Ysample files```$ ls -1 data/n_19_geneFilterPercent_0.25_holdOutPercent_0.4/D2_Achilles_gene_dep_scores_RTEST_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_RTRAIN_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_RTRUTH_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_RVALIDATE_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_Theta_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_X_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_Y_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_cellLineNames_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csvD2_Achilles_gene_dep_scores_geneNames_n_19_geneFilterPercent_0.25_holdOutPercent_0.4.csv```This notebook does not evaluate model performace.
###Code
from datetime import datetime
import logging
from setupLogging import setupLogging
configFilePath = setupLogging( default_path='src/test/logging.test.ini.json')
logger = logging.getLogger("notebook")
logger.info("using logging configuration file:{}".format(configFilePath))
import numpy as np
from DEMETER2.lowRankMatrixFactorizationEasyOfUse \
import LowRankMatrixFactorizationEasyOfUse as LrmfEoU
###Output
[INFO <ipython-input-1-dae33e64cf83>:6 - <module>()] using logging configuration file:src/test/logging.test.ini.json
###Markdown
filterPercent Hyper Parameter Tunning. Based on explore.ipynb setting filtePercent = 0.01 will remove 5604 genes. This suggests model only works or able to impute missing values in dense matrix. Setting filterPercentTunning = 0.25 removes 5562 genes and suggest that model works with sparce matrix. Becareful about the "optics" of setting the hyper parmeter. In reality I do not think it makes a difference
###Code
np.random.seed(42)
dataDir = "data/"
dataFileName = "D2_Achilles_gene_dep_scores.tsv"
numFeatures = 19
geneFilterPercent = 0.25
holdOutPercent = 0.40
#randomize=True
#easyOfUse = LrmfEoU(dataDir, dataFileName, numFeatures, geneFilterPercent, holdOutPercent, randomize, tag="_randomized")
easyOfUse = LrmfEoU(dataDir, dataFileName, numFeatures, geneFilterPercent, holdOutPercent)
start = datetime.now()
dateTimeFmt = "%H:%M:%S"
startTime = start.strftime(dateTimeFmt)
%time resultsDict = easyOfUse.runTrainingPipeLine()
end = datetime.now()
endTime = end.strftime(dateTimeFmt)
duration = end - start
durationTime = str(duration)
fmt = "data is up: numFeatures:{} holdOut:{} run time: {}"
msg = fmt.format(numFeatures, holdOutPercent, durationTime)
cmdFmt = 'tell application \\"Messages\\" to send \\"{}\\" to buddy \\"Andy Davidson\\"'
cmd = cmdFmt.format(msg)
print(cmd)
# apple iMessage
#! osascript -e 'tell application "Messages" to send "data is up " to buddy "Andy Davidson"'
! osascript -e "$cmd"
storageLocation = easyOfUse.saveAll(resultsDict)
print("model and tidy data sets have been stored to {}".format(storageLocation))
# clean tidy version of demeter data
Y, R, cellLines, geneNames, = resultsDict["DEMETER2"]
# trained model
# scipy.optimize.OptimizeResult
X, Theta, optimizeResult = resultsDict["LowRankMatrixFactorizationModel"]
# knockout logical filters. Use to select Y Train, Validations, and Test values
RTrain, RValidation, RTest = resultsDict["filters"]
easyOfUse.dipslayOptimizedResults(optimizeResult)
###Output
success:False
status:2 message:Desired error not necessarily achieved due to precision loss.
final cost: 25905.072273820377
number of iterations: 186
Number of evaluations of the objective functions : 399
Number of evaluations of the objective functions and of its gradient : 387
|
Assignment_2_solutions.ipynb | ###Markdown
Solutions - Assignment 2 Exercise 0A cardioid is a specific mathematical curve. It comes up in things like directional antenna design, or the response of a microphone, that depends on the direction the microphone is pointing to. In particular, a directional microphone picks up sound in front very well, but does not pcik up sounds from the back side. You can look up this curve in various places -- even Wikipedia. In polar coordinates, it given by$$ r = 1 - \sin\theta.$$To plot, we just give the x,y coordinates from the polar form:$$ x = r\cos\theta = (1-\sin\theta)\cos\theta,$$$$ y = r\sin\theta = (1-\sin\theta)\sin\theta.$$The rest is just coding.
###Code
from matplotlib.pyplot import *
from numpy import *
theta = linspace(0,2*pi,100)
x = (1-sin(theta))*cos(theta)
y = (1-sin(theta))*sin(theta)
plot(x,y);
###Output
_____no_output_____
###Markdown
Polar formIt is worth noting that PyPlot knows how to plot in polar form, which some people did directly. It is worth knowing about this. See the following code.
###Code
theta = linspace(0,2*pi,100)
r = 1-sin(theta)
polar(theta, r);
###Output
_____no_output_____
###Markdown
Extra workIf you want a curve that looks more like a heart, you can lookup in Worlfram's MathWorld for some other heart-shaped curves. Here is a nice one:$$ x = 16\sin^3\theta,$$$$ y = 13\cos\theta-5\cos(2\theta)-2\cos(3\theta)-\cos(4\theta).$$
###Code
theta = linspace(0,2*pi,100)
x = 16*sin(theta)**3
y = 13*cos(theta)-5*cos(2*theta)-2*cos(3*theta)-cos(4*theta)
plot(x,y);
###Output
_____no_output_____
###Markdown
Exercise 1.Note the 4 parameters are T,R,k and m. (Period of the orbit, mean radius of the orbit, spring constant and mass.) Physical units are time, distance and mass, so we have 4-3=1 constant from Buckingham Pi.Figuring out the units of k is important. From Newton's law, and the Hooke's law, we have Force = mass times acceleration = -k times distance.This tells us the units of k are $mass/sec^2$. So a dimensionless combination is $kT^2/m =$ constant.From this we get $$ T = c\sqrt{m/k}.$$Does this make sense? Well, a heavy mass on a spring will move more slowly, so the period of orbit T should increase. So T increases as m increases, so that part makes sense. A stronger spring (k bigger) will move things more quickly, so T decreases as k increases. So yes, this makes sense. Does this mean the earth is not connected to the sun? Well, it is evidence that it is not. The earth and moon both go around the sun at about the same rate, and yet the moon is much lighter than the earth. Maybe the moon has a weaker spring attaching it to the sun, or maybe the moon only follows the earth and is not attached to the sun at all. But that would be weird!NOTE: A few people used T,R,m and force F as "parameters." But force is not really a parameter, it is a variable, that changes with time. (In fact F = k times distances says the force changes as the distance changes with time.) By making the substitution F = kR, they got the "right answer" but this is blurring the distinction between the mean radius of the orbit (which is a constant) and the distance between the sun and early (which can vary as a function of time.) Exercise 2This is an exercise in expressing an systems of ODEs as a first order system, then formating some code to call up odeint (from SciPy), our numerical ode solver.From Newton's law (mass times accelerations equals force) and Hooke's law (force is a constant times the displacement of the spring), we have$$\begin{eqnarray}m\frac{d^2x}{dt^2} &=& - k x, \\m\frac{d^2y}{dt^2} &=& - k y. \end{eqnarray}$$To write this as a first order system, we define $\dot{x} = dx/dt, \dot{y} = dy/dt$ and then have$$ \frac{d}{dt}\left[\begin{array}{c}x \\ \dot{x} \\ y \\ \dot{y} \end{array}\right] = \left[\begin{array}{c}\dot{x} \\\frac{-kx}{m}\\\dot{y} \\\frac{-ky}{m}\end{array}\right].$$In our code, we will use a 4-vector $X = [x,\dot{x},y,\dot{y}]$ and access the components as necessary. The RHS function just compute the derivative from the matrix formula above. To plot the result in the x-y plane, be sure you plot x and y values, not x and its derivative. This corresponts to the results matrix X[:,0] and X[:,2].
###Code
from numpy import *
from matplotlib.pyplot import *
from scipy.integrate import odeint
def RHS(X,t,m,k):
return X[1],-k*X[0]/m,X[3],-k*X[2]/m
X_init = array([1,1,1,0]) # Initial values for position and velocities
t_arr = linspace(0,10,1000)
m = 1 #kg
k = 1 #N/m
X = odeint(RHS,X_init,t_arr, args=(m,k))
plot(X[:,0],X[:,2]);
xlabel("x-axis"); ylabel("y-axis");
###Output
_____no_output_____
###Markdown
Exercise 3The pipe example.**Part 1:** Show there are 3 physical units, so by Buckingham Pi, there are 5-3 = 2 independent, unitless quantities $\Pi_1, \Pi_2.$ Find them. (Hint: one might be the Reynolds number. Always a good choice.)**Part 2:** Since there are two $\Pi's$, we can expect to "solve the problem" in the form$$\Pi_2 = F(\Pi_1)$$for some unspecified function $F$. Use this approach to to find the constant pressure gradient $P'$ as a product of powers of $D,\rho,U$ and $F(\Pi_1).$ SolutionRemember, the Buckingham Pi theorem has two parts. First, is that you can find a certain number of dimensionless parameter. And second, if there is a formula relating input parameters to an output result (like mass and spring constants determining the period of oscillation), then there is an equivalent formula using only dimensionless parameters. In this problem, there are 5 parameters and 3 physical units, so we expect 5-3 = 2 dimensionless parameters. Since we expect that the pressure drop in the pipe can be determined by the other 4 parameters (eg a high velocity means there is a lot of pressure pushing the fluid), we expect to be able to solve one dimensionless parameter in terms of the other. That is, we expect a formula of the form$$ \Pi_2 = F(\Pi_1)$$,where $\Pi_1, \Pi_2$ are the dimensionless parameters. **Part 1**:The 5 physical parameters given have the following units- $D$, the diameter of the pipe (eg meters)- $\rho$, the density of the fluid (eg grams per centimeters cubed)- $\mu$, the dynamic viscosity of the fluid (grams per centimeter*second -- look it up!)- $U$, the average velocity of the fluid (meters per second)- $P' = dP/dx$, the gradient of the pressure Maybe we have to work a bit for $dP/dx.$ Pressure is a force per unit area. With dx, this differential is in units of force per unit volume. So this is a mass times an acceleration, divided by a length cubed. Net result is, the units are a mass divided by (time squared, length squared). Anyhow, so we have 3 basic physical units (distance, time, mass) and our matrix of the powers is| D |$\rho$ | $\mu$ | U| P' | || --- | --- | --- | --- | --- | --- || 1 | -3 | -1 | 1 | -2 | length| 0 | 1 | 1 | 0 | 1 | mass| 0 | 0 | -1 | -1 | -2 | timeYou might notice this matrix is basically in row-echelon form, so it is eary to read off the kernel vectors. Looking at the fourth column, if we put in a 1 at the fourth element of a column vector, and a zero at the fifth element, we can easily solve this matrix equation by back-substitution:$$\left[\begin{array}{rrrrr}1 & -3 & -1 & 1 & -2 \\0 & 1 & 1 & 0 & 1 \\0 & 0 & -1 &-1 & -2\end{array}\right]\left[\begin{array}{r}x_1 \\x_2 \\x_3 \\1 \\0\end{array}\right] = 0$$to find this vector$$\left[\begin{array}{r}1 \\1 \\-1 \\1 \\0\end{array}\right]$$in the kernel. Of course, reading off the powers, we see the first dimensionless parameter is $$ \Pi_1 = \frac{\rho DU}{\mu}$$which is just Reynolds number. (NOTE: This has been corrected!)To get another vector in the kernel, we COULD put the fifth element equation to 1, and solve $$\left[\begin{array}{rrrrr}1 & -3 & -1 & 1 & -2 \\0 & 1 & 1 & 0 & 1 \\0 & 0 & -1 &-1 & -2\end{array}\right]\left[\begin{array}{r}x_1 \\x_2 \\x_3 \\0 \\1\end{array}\right] = 0$$to find this vector$$\left[\begin{array}{r}3 \\1 \\-2 \\0 \\1\end{array}\right]$$in the kernel. This corresponds to the dimensionless parameter$$ \frac{D^3 \rho P'}{\mu^2}.$$But that is not what we want!! The problem said to solve for P' in terms of U, D, $\rho$ and the Reynold's number. So instead, we should solve $$\left[\begin{array}{rrrrr}1 & -3 & -1 & 1 & -2 \\0 & 1 & 1 & 0 & 1 \\0 & 0 & -1 &-1 & -2\end{array}\right]\left[\begin{array}{r}x_1 \\x_2 \\0 \\x_4 \\1\end{array}\right] = 0.$$ Notice the non-zero entries in the vector correspond to the variables, D, $\rho$, U, P', which are the ones we want. (See the statement of the problem, part 2.)Again, solve by back-substitution and we get $$\left[\begin{array}{r}1 \\-1 \\0 \\-2 \\1\end{array}\right]$$in the kernel. This corresonds to the dimensionless parameter$$\Pi_2 = \frac{DP'}{\rho U^2}.$$ Part 2Now we write$$\Pi_2 = F(\Pi_1)$$where $F$ is some unknown function. This gives us$$\frac{D P'}{\rho U^2} = F(Re),$$where $Re$ is the Reynolds numbers. Solving for P', we have$$ P' = \frac{\rho U^2}{D}F(Re).$$This says, for instance, if you keep the Reynolds number fixed, then the pressure drop increases like the square of average velocity. Exercise 4Find an expansion for the roots of the cubic$$\epsilon x^3 + x -1 = 0$$for small values of $\epsilon.$ Find three (non-zero) terms in each expansion.NOTE: YOU HAVE ALL THIS TOOLS IN PYTHON, LIKE GRAPHING. USE THEM!
###Code
## Plot this for small epsilson >0. Notice only one root.
eps = .01
x =linspace(-10,10,1000)
y = eps*x**3 + x - 1
plot(x,y,x,0*x);
## Plot this for small epsilson < 0. Notice three roots.
eps = -.01
x =linspace(-12,12,1000)
y = eps*x**3 + x - 1
plot(x,y,x,0*x);
###Output
_____no_output_____ |
Visualizations/Santiago Puertas - Analyze Scripts.ipynb | ###Markdown
Visualizations - DWELING
###Code
%reset
# from __future__ import division
import numpy as np
from sklearn import preprocessing
import math
from matplotlib import pyplot as plt
from matplotlib import patches as patches
import pandas as pd
from numpy import *
%matplotlib inline
filename = 'data.xlsx'
data = pd.read_excel(filename)
data.info()
# max prod
x = data['max_prod_date']
y = data['max_prod']
# Set up the axes with gridspec
fig = plt.figure(figsize=(10, 10))
grid = plt.GridSpec(4, 4, hspace=0.2, wspace=0.2)
main_ax = fig.add_subplot(grid[:-1, 1:])
y_hist = fig.add_subplot(grid[:-1, 0], xticklabels=[], sharey=main_ax)
x_hist = fig.add_subplot(grid[-1, 1:], yticklabels=[], sharex=main_ax)
# scatter points on the main axes
main_ax.plot(x, y, 'ok', markersize=6, alpha=0.5)
main_ax.set_title('\n\nmax prod:')
# histogram on the attached axes
x_hist.hist(x, 40, histtype='stepfilled',
orientation='vertical', color='green')
x_hist.invert_yaxis()
y_hist.hist(y, 40, histtype='stepfilled',
orientation='horizontal', color='green')
y_hist.invert_xaxis()
#________________________
# max cons
x = data['max_cons_date']
y = data['max_cons']
# Set up the axes with gridspec
fig = plt.figure(figsize=(10, 10))
grid = plt.GridSpec(4, 4, hspace=0.2, wspace=0.2)
main_ax = fig.add_subplot(grid[:-1, 1:])
y_hist = fig.add_subplot(grid[:-1, 0], xticklabels=[], sharey=main_ax)
x_hist = fig.add_subplot(grid[-1, 1:], yticklabels=[], sharex=main_ax)
# scatter points on the main axes
main_ax.plot(x, y, 'ok', markersize=6, alpha=0.5)
main_ax.set_title('\n\nmax cons:')
# histogram on the attached axes
x_hist.hist(x, 40, histtype='stepfilled',
orientation='vertical', color='green')
x_hist.invert_yaxis()
y_hist.hist(y, 40, histtype='stepfilled',
orientation='horizontal', color='green')
y_hist.invert_xaxis()
#__________________________
#relation between solar panels and production
x = data['solar_panels']
y = data['max_prod']
# Set up the axes with gridspec
fig = plt.figure(figsize=(10, 10))
grid = plt.GridSpec(4, 4, hspace=0.2, wspace=0.2)
main_ax = fig.add_subplot(grid[:-1, 1:])
y_hist = fig.add_subplot(grid[:-1, 0], xticklabels=[], sharey=main_ax)
x_hist = fig.add_subplot(grid[-1, 1:], yticklabels=[], sharex=main_ax)
# scatter points on the main axes
main_ax.plot(x, y, 'ok', markersize=6, alpha=0.5)
main_ax.set_title('\n\nrelation between solar panels and production:')
# histogram on the attached axes
x_hist.hist(x, 40, histtype='stepfilled',
orientation='vertical', color='green')
x_hist.invert_yaxis()
y_hist.hist(y, 40, histtype='stepfilled',
orientation='horizontal', color='green')
y_hist.invert_xaxis()
#___________________________
#relation between people and production
x = data['people']
y = data['max_prod']
# Set up the axes with gridspec
fig = plt.figure(figsize=(10, 10))
grid = plt.GridSpec(4, 4, hspace=0.2, wspace=0.2)
main_ax = fig.add_subplot(grid[:-1, 1:])
y_hist = fig.add_subplot(grid[:-1, 0], xticklabels=[], sharey=main_ax)
x_hist = fig.add_subplot(grid[-1, 1:], yticklabels=[], sharex=main_ax)
# scatter points on the main axes
main_ax.plot(x, y, 'ok', markersize=6, alpha=0.5)
main_ax.set_title('\n\nrelation between people and production:')
# histogram on the attached axes
x_hist.hist(x, 40, histtype='stepfilled',
orientation='vertical', color='green')
x_hist.invert_yaxis()
y_hist.hist(y, 40, histtype='stepfilled',
orientation='horizontal', color='green')
y_hist.invert_xaxis()
#_________________________________
#relation between people and cons
x = data['people']
y = data['max_cons']
# Set up the axes with gridspec
fig = plt.figure(figsize=(10, 10))
grid = plt.GridSpec(4, 4, hspace=0.2, wspace=0.2)
main_ax = fig.add_subplot(grid[:-1, 1:])
y_hist = fig.add_subplot(grid[:-1, 0], xticklabels=[], sharey=main_ax)
x_hist = fig.add_subplot(grid[-1, 1:], yticklabels=[], sharex=main_ax)
# scatter points on the main axes
main_ax.plot(x, y, 'ok', markersize=6, alpha=0.5)
main_ax.set_title('\n\nrelation between people and cons:')
# histogram on the attached axes
x_hist.hist(x, 40, histtype='stepfilled',
orientation='vertical', color='green')
x_hist.invert_yaxis()
y_hist.hist(y, 40, histtype='stepfilled',
orientation='horizontal', color='green')
y_hist.invert_xaxis()
###Output
_____no_output_____ |
examples/howto/howto.ipynb | ###Markdown
How to use [](https://colab.research.google.com/github/simaki/epymetheus/blob/master/examples/howto/howto.ipynb)
###Code
# !pip install pandas matplotlib seaborn
# !pip install epymetheus
import pandas as pd
from pandas.tseries.offsets import DateOffset
import matplotlib.pyplot as plt
import seaborn
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
seaborn.set_style('ticks')
import epymetheus
from epymetheus import Trade, Strategy
epymetheus?
###Output
_____no_output_____
###Markdown
Let's construct your own strategy by subclassing `TradeStrategy`.
###Code
class SimpleTrendFollower(Strategy):
"""
A simple trend-following strategy.
Buy stocks for a month with the highest percentile of one month returns.
Sell stocks for a month with the lowest percentile of one month returns.
Parameters
----------
- percentile : float
The threshold to buy or sell.
E.g. If 0.1, buy stocks with returns of highest 10%.
- l_price : float
- s_price : float
- take
- stop
- hold
"""
def __init__(self, percentile, l_price, s_price, take, stop, hold=DateOffset(months=3)):
self.percentile = percentile
self.l_price = l_price
self.s_price = s_price
self.take = take
self.stop = stop
self.hold = hold
@staticmethod
def sorted_assets(universe, open_date):
"""
Return list of asset sorted according to one-month returns.
Sort is ascending (poor-return first).
Returns
-------
list
"""
prices_now = universe.prices.loc[open_date]
prices_bef = universe.prices.loc[open_date - DateOffset(months=1)]
onemonth_returns = prices_now - prices_bef
return list(onemonth_returns.sort_values().index)
def logic(self, universe):
n_trade = int(universe.n_assets * self.percentile)
date_range = pd.date_range(universe.bars[0], universe.bars[-1], freq='BM')
for open_date in date_range[1:]:
assets = self.sorted_assets(universe, open_date)
for asset_l, asset_s in zip(assets[-n_trade:][::-1], assets[:n_trade]):
lot_l = +self.l_price / universe.prices.at[open_date, asset_l]
lot_s = -self.s_price / universe.prices.at[open_date, asset_s]
yield Trade(
asset=[asset_l, asset_s],
lot=[lot_l, lot_s],
open_bar=open_date,
shut_bar=open_date + self.hold,
take=self.take,
stop=self.stop,
)
strategy = SimpleTrendFollower(
percentile=0.2, l_price=10000, s_price=1000, take=1000, stop=-1000,
)
print(strategy.name)
print(strategy.description)
###Output
A simple trend-following strategy.
Buy stocks for a month with the highest percentile of one month returns.
Sell stocks for a month with the lowest percentile of one month returns.
Parameters
----------
- percentile : float
The threshold to buy or sell.
E.g. If 0.1, buy stocks with returns of highest 10%.
- l_price : float
- s_price : float
- take
- stop
- hold
###Markdown
The strategy can be readily applied to any `Universe`.
###Code
from epymetheus.datasets import fetch_usstocks
universe = fetch_usstocks(n_assets=10)
universe.prices
strategy.run(universe)
###Output
Running ...
Generating 478 trades (2019-12-31 00:00:00) ... Done. (Runtime : 0.29 sec)
Executing 478 trades ... Done. (Runtime : 0.42 sec)
Done. (Runtime : 0.71 sec)
###Markdown
Now the result is stored as the attributes of `strategy`.You can plot the wealth right away:
###Code
df_wealth = strategy.wealth.to_dataframe()
df_wealth[-5:]
plt.figure(figsize=(24, 4))
plt.plot(df_wealth, linewidth=1)
plt.title('Wealth / USD')
plt.savefig('wealth.png', bbox_inches="tight", pad_inches=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
You can also inspect the exposure as:
###Code
from epymetheus.metrics import Exposure
net_exposure = strategy.evaluate(Exposure())
abs_exposure = strategy.evaluate(Exposure(net=False))
series_net_exposure = pd.Series(net_exposure, index=strategy.universe.bars)
series_abs_exposure = pd.Series(abs_exposure, index=strategy.universe.bars)
plt.figure(figsize=(24, 8))
plt.subplot(2,1,1)
plt.plot(series_net_exposure)
plt.axhline(0, ls='--', color='k')
plt.title('Net exposure')
plt.subplot(2,1,2)
plt.plot(series_abs_exposure)
plt.axhline(0, ls='--', color='k')
plt.title('Abs exposure')
plt.savefig('exposure.png', bbox_inches="tight", pad_inches=0.1)
plt.show()
from epymetheus.metrics import Drawdown, MaxDrawdown, SharpeRatio
drawdown = strategy.evaluate(Drawdown())
series_drawdown = pd.Series(drawdown, index=strategy.universe.bars)
plt.figure(figsize=(24, 4))
plt.plot(series_drawdown, linewidth=1)
plt.title('Drawdown / USD')
plt.savefig('drawdown.png', bbox_inches="tight", pad_inches=0.1)
plt.show()
max_drawdown = strategy.evaluate(MaxDrawdown())
print(max_drawdown)
sharpe_ratio = strategy.evaluate(SharpeRatio())
print(sharpe_ratio)
###Output
0.03908739015050315
###Markdown
Profit-loss distribution can be accessed by:
###Code
df_history = strategy.history.to_dataframe()
df_history
###Output
_____no_output_____
###Markdown
Profit-loss distribution
###Code
pnls = df_history.groupby('trade_id').aggregate('sum').pnl
plt.figure(figsize=(24, 4))
plt.hist(pnls, bins=100)
plt.axvline(0, ls='--', color='k')
plt.title('Profit-loss distribution')
plt.xlabel('Profit-loss')
plt.ylabel('Number of trades')
plt.savefig('pnl.png', bbox_inches="tight", pad_inches=0.1)
plt.show()
###Output
_____no_output_____
###Markdown
Detailed trade history can be viewed as:
###Code
df_history = strategy.history.to_dataframe()
with open('history.md', 'w') as f:
f.write(df_history.head().to_markdown())
df_history.head()
###Output
_____no_output_____
###Markdown
How to use[](https://colab.research.google.com/github/simaki/cointanalysis/blob/master/examples/howto/howto.ipynb)
###Code
# !pip install pandas pandas_datareader matplotlib numpy
# !pip install cointanalysis
import pandas as pd
from pandas_datareader.data import DataReader
import matplotlib.pyplot as plt
import numpy as np
from cointanalysis import CointAnalysis
def fetch_etf(ticker):
return DataReader(ticker, 'yahoo', '2012-01-01', '2018-12-31')['Adj Close']
###Output
_____no_output_____
###Markdown
Let us see how the main class `CointAnalysis` works using two ETFs, [HYG](https://www.bloomberg.com/quote/HYG:US) and [BKLN](https://www.bloomberg.com/quote/BKLN:US), as examples.
###Code
hyg = fetch_etf('HYG')
bkln = fetch_etf('BKLN')
###Output
_____no_output_____
###Markdown
Since they are both connected with liabilities of low-rated companies, these prices behave quite similarly.
###Code
plt.figure(figsize=(16, 4))
plt.title('HYG and BKLN')
hyg_norm = 100 * hyg / hyg[0]
bkln_norm = 100 * bkln / bkln[0]
plt.plot(hyg_norm, label='HYG (2012-01-01 = 100)', linewidth=1)
plt.plot(bkln_norm, label='BKLN (2012-01-01 = 100)', linewidth=1)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Cointegration test The method `test` carries out a cointegration test.The following code gives p-value for null-hypothesis that there is no cointegration.
###Code
coint = CointAnalysis()
X = np.stack([hyg, bkln], axis=1)
coint.test(X).pvalue_
###Output
_____no_output_____
###Markdown
The test have rejected the null-hypothesis by p-value of 0.55%, which implies cointegration. Get spread The method `fit` finds the cointegration equation.
###Code
coint.fit(X)
print(f'coef: {coint.coef_}')
print(f'mean: {coint.mean_}')
print(f'std: {coint.std_}')
###Output
coef: [-0.18105882 1. ]
mean: 6.969916306145498
std: 0.1528907317256263
###Markdown
This means that spread "-0.18 HYG + BKLN" has the mean 6.97 and standard deviation 0.15. In fact, the prices adjusted with these parameters clarifies the similarities of these ETFs:
###Code
plt.figure(figsize=(16, 4))
hyg_adj = (-coint.coef_[0]) * hyg + coint.mean_
plt.title('HYG and BKLN')
plt.plot(hyg_adj, label=f'{-coint.coef_[0]:.2f} * HYG + {coint.mean_:.2f}', linewidth=1)
plt.plot(bkln, label='BKLN', linewidth=1)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The time-series of spread is obtained by applying the method `transform` subsequently.The mean and the standard deviation are automatically adjusted (unless you pass parameters asking not to).The method `fit_transform` carries out `fit` and `transform` at once.
###Code
spread = coint.transform(X)
series_spread = pd.Series(spread, index=hyg.index)
plt.figure(figsize=(16, 4))
plt.title('Spread between HYG and BKLN')
plt.plot(series_spread, linewidth=1)
plt.show()
###Output
_____no_output_____ |
Linear_Transformation_demo.ipynb | ###Markdown
###Code
import numpy as np
A = np.array([[4, 10, 8], [10, 26, 26], [8, 26, 61]])
print(A)
B = np.array([[44], [128], [214]])
print(B)
#AA^-1X = BA^-1
## Solving for the inverse of A
A_inv = np.linalg.inv(A)
print(A_inv)
## Solving for BA^-1
X = np.dot(A_inv, B) ## Alternative: A_inv @ B
print(X)
###Output
[[ 25.27777778 -11.16666667 1.44444444]
[-11.16666667 5. -0.66666667]
[ 1.44444444 -0.66666667 0.11111111]]
[[-8.]
[ 6.]
[ 2.]]
|
test/comparison/Offspec_comparison.ipynb | ###Markdown
Old QuickNXS v1 version
###Code
_, dk, qz, off_spec = load_data('/SNS/users/m2d/git/reflectivity_ui/test/comparison/v1/REF_M_30806+30807+30808_OffSpecSmooth_Off_Off.dat')
plot_heatmap(dk, qz, np.log(off_spec), x_title='', y_title='', surface=False,
x_log=False, y_log=False)
###Output
File: /SNS/users/m2d/git/reflectivity_ui/test/comparison/v1/REF_M_30806+30807+30808_OffSpecSmooth_Off_Off.dat
Binning: 84 282 [23688]
Scale = 3.602153
Const value = 0.01210843
Width = 0
###Markdown
QuickNXS v2 output
###Code
_, dk, qz, off_spec = load_data('/SNS/users/m2d/git/reflectivity_ui/test/comparison/v2/REF_M_30806+30807+30808_OffSpecSmooth_Off_Off.dat')
plot_heatmap(dk, qz, np.log(off_spec), x_title='', y_title='', surface=False,
x_log=False, y_log=False)
###Output
File: /SNS/users/m2d/git/reflectivity_ui/test/comparison/v2/REF_M_30806+30807+30808_OffSpecSmooth_Off_Off.dat
Binning: 84 282 [23688]
Scale = 3.48616
Const value = 0.0121084
Width = 0
|
tutorials/kaggle/machine-learning/random-forests.ipynb | ###Markdown
*This tutorial is part of the series [Learn Machine Learning](https://www.kaggle.com/learn/machine-learning). At the end of this step, you will be able to use your first sophisticated machine learning model, the Random Forest.* IntroductionDecision trees leave you with a difficult decision. A deep tree with lots of leaves will overfit because each prediction is coming from historical data from only the few houses at its leaf. But a shallow tree with few leaves will perform poorly because it fails to capture as many distinctions in the raw data.Even today's most sophisticated modeling techniques face this tension between underfitting and overfitting. But, many models have clever ideas that can lead to better performance. We'll look at the **random forest** as an example.The random forest uses many trees, and it makes a prediction by averaging the predictions of each component tree. It generally has much better predictive accuracy than a single decision tree and it works well with default parameters. If you keep modeling, you can learn more models with even better performance, but many of those are sensitive to getting the right parameters. ExampleYou've already seen the code to load the data a few times. At the end of data-loading, we have the following variables:- train_X- val_X- train_y- val_y
###Code
import pandas as pd
# Load data
melbourne_file_path = './data/melbourne-housing-snapshot/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
# Filter rows with missing values
melbourne_data = melbourne_data.dropna(axis=0)
# Choose target and predictors
y = melbourne_data.Price
melbourne_predictors = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea',
'YearBuilt', 'Lattitude', 'Longtitude']
X = melbourne_data[melbourne_predictors]
from sklearn.model_selection import train_test_split
# split data into training and validation data, for both predictors and target
# The split is based on a random number generator. Supplying a numeric value to
# the random_state argument guarantees we get the same split every time we
# run this script.
train_X, val_X, train_y, val_y = train_test_split(X, y,random_state = 0)
###Output
_____no_output_____
###Markdown
We build a RandomForest similarly to how we built a decision tree in scikit-learn.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
forest_model = RandomForestRegressor()
forest_model.fit(train_X, train_y)
melb_preds = forest_model.predict(val_X)
print(mean_absolute_error(val_y, melb_preds))
###Output
205947.41056595655
###Markdown
Conclusion There is likely room for further improvement, but this is a big improvement over the best decision tree error of 250,000. There are parameters which allow you to change the performance of the Random Forest much as we changed the maximum depth of the single decision tree. But one of the best features of Random Forest models is that they generally work reasonably even without this tuning.You'll soon learn the XGBoost model, which provides better performance when tuned well with the right parameters (but which requires some skill to get the right model parameters). Your TurnRun the RandomForestRegressor on your data. You should see a big improvement over your best Decision Tree models. Continue You will see more big improvements in your models as soon as you start the Intermediate track in*Learn Machine Learning* . But you now have a model that's a good starting point to compete in a machine learning competition!Follow **[these steps](https://www.kaggle.com/dansbecker/submitting-from-a-kernel/)** to make submissions for your current model. Then you can watch your progress in subsequent steps as you climb up the leaderboard with your continually improving models.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
data = pd.read_csv('./data/house-prices-advanced-regression-techniques/train.csv')
X = data[["LotArea", "YearBuilt", "1stFlrSF", "2ndFlrSF", "FullBath", "BedroomAbvGr", "TotRmsAbvGrd"]]
y = data['SalePrice']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
rmf_reg = RandomForestRegressor(random_state=42)
rmf_reg.fit(X_train, y_train)
y_pred = rmf_reg.predict(X_test)
mean_absolute_error(y_test, y_pred)
###Output
_____no_output_____ |
examples/2_Models.ipynb | ###Markdown
ModelsThis notebook will demonstrate the basic features of AutoMPC for system ID modeling and model evaluation. Set-UpAs before, we begin by importing autompc.
###Code
import autompc as ampc
###Output
Loading AutoMPC...
Finished loading AutoMPC
###Markdown
To perform system identification, we need a dataset of trajectories to work with. We will use the cartpole system, available from the `benchmarks` package, to generate our dataset.
###Code
from autompc.benchmarks import CartpoleSwingupBenchmark
benchmark = CartpoleSwingupBenchmark()
system = benchmark.system
trajs = benchmark.gen_trajs(seed=100, n_trajs=500, traj_len=200)
###Output
_____no_output_____
###Markdown
ModelsAutoMPC provides a variety of sytem ID models which can be used to learn the sytem dynamics. Here, we will use an MLP model, but for a complete list see [here](https://autompc.readthedocs.io/en/latest/source/sysid.htmlsupported-system-id-models).There are two ways to create a model: we can either instantiate the model class directly and pass the hyperparameter values to the constructor, or we can use a factory class. Here we will use the first method, but for more information on using model factories, see [4. Factories and Pipelines](https://github.com/williamedwards/autompc/tree/main/examples). (**Note:** This will take several minutes to run depending on your hardware).
###Code
from autompc.sysid import MLP
model = MLP(system, n_hidden_layers=2, hidden_size_1=128, hidden_size_2=128, n_train_iters=50,
nonlintype="relu")
model.train(trajs)
###Output
MLP Using Cuda
hidden_sizes= [128, 128]
100%|██████████| 50/50 [02:49<00:00, 3.40s/it]
###Markdown
Now that we have trained our model, we can use it to make predictions. Let's try predicting the next state from one of our training trajectories. We first compute the model state at a certain point in the trajectory
###Code
traj = trajs[0]
model_state = model.traj_to_state(traj[:100])
###Output
_____no_output_____
###Markdown
The model state contains the information the model needs to predict the next time step. `model_state[:system.obs_dim]` is always equal to the most recent observation. For the MLP, that's actually all there is to the model state, but some models require a larger state. We can see the dimension of the model state by running
###Code
model.state_dim
###Output
_____no_output_____
###Markdown
We can also check other properties of the model, such as whether it is differentiable and whether it is linear.
###Code
print("Model is Differentiable? ", model.is_diff)
print("Model is Linear? ", model.is_linear)
###Output
Model is Differentiable? True
Model is Linear? False
###Markdown
For comparison, consider the ARX model. We observe that, unlike the MLP model, the ARX model state size is larger than `system.obs_dim` since the model state includes the history of several observations. Make sure to use the `traj_to_state` method to properly derive the model state. We can also observe that the ARX model is linear, which means that it is suitable for use with LQR control.
###Code
from autompc.sysid import ARX
model_arx = ARX(system, history=4)
model_arx.train(trajs)
model_state_arx = model_arx.traj_to_state(traj[:100])
model_arx.state_dim
model_arx.is_linear
###Output
_____no_output_____
###Markdown
We can use our current model state, and the control to make a prediction of the new model state
###Code
pred_state = model.pred(model_state, traj[99].ctrl)
pred_state
###Output
_____no_output_____
###Markdown
Compare this to the true observation
###Code
traj[100].obs
###Output
_____no_output_____
###Markdown
We can use the true observation to update our model state
###Code
new_model_state = model.update_state(model_state, traj[99].ctrl, traj[100].obs)
new_model_state
###Output
_____no_output_____
###Markdown
For differentiable models, we can also get the Jacobian of themodel prediction
###Code
pred_state, state_jac, ctrl_jac = model.pred_diff(model_state, traj[99].ctrl)
state_jac
###Output
_____no_output_____
###Markdown
Graphing Model AccuracyLet's train another, much smaller MLP model
###Code
from autompc.sysid import MLP
model2 = MLP(system, n_hidden_layers=1, hidden_size_1=32, n_train_iters=50,
nonlintype="relu")
model2.train(trajs)
###Output
MLP Using Cuda
hidden_sizes= [32]
100%|██████████| 50/50 [01:47<00:00, 2.15s/it]
###Markdown
Now, we'd like to compare this to our original model. One convenient way to do this is by graphing the model prediction horizon over various prediction horizons. AutoMPC provides tools for easily constructing this graph. (**Note:** This may take a few minutes to run)
###Code
import matplotlib.pyplot as plt
from autompc.graphs.kstep_graph import KstepPredAccGraph
graph = KstepPredAccGraph(system, trajs, kmax=20, metric="rmse")
graph.add_model(model, "Large MLP")
graph.add_model(model2, "Small MLP")
fig = plt.figure()
ax = fig.gca()
graph(fig, ax)
ax.set_title("Comparison of MLP models")
plt.show()
###Output
/usr/lib/python3/dist-packages/pyparsing.py:1745: FutureWarning: Possible set intersection at position 3
self.re = re.compile( self.reString )
|
Assignment 1 Day 4(letsupgrad).ipynb | ###Markdown
Assignment 1 Day 4
###Code
#print the first ArmStrng number in the range of 1042000 to 702648265 and
#exit the loop as soon you encounter the first armstrong number
#use while loop
# Program to check Armstrong numbers between 1042000 to 702648265
print(' the first Armstrong number between 1042000 to 702648265 is as follows')
for num in range(1042000,702648266):
# order of number
order = len(str(num))
# initialize sum
sum = 0
temp = num
while temp > 0:
digit = temp % 10
sum += digit ** order
temp //= 10
if num == sum:
print(num)
break
###Output
the first Armstrong number between 1042000 to 702648265 is as follows
1741725
|
notebooks/keras-20ng.ipynb | ###Markdown
20 Newsgroups text classification with pre-trained word embeddingsIn this notebook, we'll use pre-trained [GloVe word embeddings](http://nlp.stanford.edu/projects/glove/) for text classification using Keras (version $\ge$ 2 is required). This notebook is largely based on the blog post [Using pre-trained word embeddings in a Keras model](https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html) by François Chollet.**Note that using a GPU with this notebook is highly recommended.**First, the needed imports. Keras tells us which backend (Theano, Tensorflow, CNTK) it will be using.
###Code
%matplotlib inline
from keras.preprocessing import sequence, text
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.layers import Embedding
from keras.layers import Conv1D, MaxPooling1D, GlobalMaxPooling1D
from keras.layers import LSTM, CuDNNLSTM
from keras.utils import to_categorical
from distutils.version import LooseVersion as LV
from keras import __version__
from keras import backend as K
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
print('Using Keras version:', __version__, 'backend:', K.backend())
assert(LV(__version__) >= LV("2.0.0"))
###Output
_____no_output_____
###Markdown
If we are using TensorFlow as the backend, we can use TensorBoard to visualize our progress during training.
###Code
if K.backend() == "tensorflow":
import tensorflow as tf
from keras.callbacks import TensorBoard
import os, datetime
logdir = os.path.join(os.getcwd(), "logs",
"20ng-"+datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
print('TensorBoard log directory:', logdir)
os.makedirs(logdir)
callbacks = [TensorBoard(log_dir=logdir)]
else:
callbacks = None
###Output
_____no_output_____
###Markdown
GloVe word embeddingsLet's begin by loading a datafile containing pre-trained word embeddings from [Pouta Object Storage](https://research.csc.fi/pouta-object-storage). The datafile contains 100-dimensional embeddings for 400,000 English words.
###Code
!wget -nc https://object.pouta.csc.fi/swift/v1/AUTH_dac/mldata/glove6b100dtxt.zip
!unzip -u glove6b100dtxt.zip
GLOVE_DIR = "."
#GLOVE_DIR = "/home/cloud-user/machine-learning-scripts/notebooks"
#GLOVE_DIR = "/home/cloud-user/glove.6B"
print('Indexing word vectors.')
embeddings_index = {}
with open(os.path.join(GLOVE_DIR, 'glove.6B.100d.txt')) as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
print('Found %s word vectors.' % len(embeddings_index))
print('Examples of embeddings:')
for w in ['some', 'random', 'words']:
print(w, embeddings_index[w])
###Output
_____no_output_____
###Markdown
20 Newsgroups data setNext we'll load the [20 Newsgroups](http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.html) data set. The dataset contains 20000 messages collected from 20 different Usenet newsgroups (1000 messages from each group):|[]()|[]()|[]()|[]()|| --- | --- |--- | --- || alt.atheism | soc.religion.christian | comp.windows.x | sci.crypt | | talk.politics.guns | comp.sys.ibm.pc.hardware | rec.autos | sci.electronics | | talk.politics.mideast | comp.graphics | rec.motorcycles | sci.space | | talk.politics.misc | comp.os.ms-windows.misc | rec.sport.baseball | sci.med | | talk.religion.misc | comp.sys.mac.hardware | rec.sport.hockey | misc.forsale |
###Code
!wget -nc https://object.pouta.csc.fi/swift/v1/AUTH_dac/mldata/news20.tar.gz
!tar -x --skip-old-files -f news20.tar.gz
TEXT_DATA_DIR = "./20_newsgroup"
#TEXT_DATA_DIR = "/home/cloud-user/20_newsgroup"
print('Processing text dataset')
texts = [] # list of text samples
labels_index = {} # dictionary mapping label name to numeric id
labels = [] # list of label ids
for name in sorted(os.listdir(TEXT_DATA_DIR)):
path = os.path.join(TEXT_DATA_DIR, name)
if os.path.isdir(path):
label_id = len(labels_index)
labels_index[name] = label_id
for fname in sorted(os.listdir(path)):
if fname.isdigit():
fpath = os.path.join(path, fname)
args = {} if sys.version_info < (3,) else {'encoding': 'latin-1'}
with open(fpath, **args) as f:
t = f.read()
i = t.find('\n\n') # skip header
if 0 < i:
t = t[i:]
texts.append(t)
labels.append(label_id)
print('Found %s texts.' % len(texts))
###Output
_____no_output_____
###Markdown
First message and its label:
###Code
print(texts[0])
print('label:', labels[0], labels_index)
###Output
_____no_output_____
###Markdown
Vectorize the text samples into a 2D integer tensor.
###Code
MAX_NUM_WORDS = 10000
MAX_SEQUENCE_LENGTH = 1000
tokenizer = text.Tokenizer(num_words=MAX_NUM_WORDS)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = sequence.pad_sequences(sequences, maxlen=MAX_SEQUENCE_LENGTH)
labels = to_categorical(np.asarray(labels))
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)
###Output
_____no_output_____
###Markdown
Split the data into a training set and a validation set
###Code
VALIDATION_SET, TEST_SET = 1000, 4000
x_train, x_test, y_train, y_test = train_test_split(data, labels,
test_size=TEST_SET,
shuffle=True, random_state=42)
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train,
test_size=VALIDATION_SET,
shuffle=False)
print('Shape of training data tensor:', x_train.shape)
print('Shape of training label tensor:', y_train.shape)
print('Shape of validation data tensor:', x_val.shape)
print('Shape of validation label tensor:', y_val.shape)
print('Shape of test data tensor:', x_test.shape)
print('Shape of test label tensor:', y_test.shape)
###Output
_____no_output_____
###Markdown
Prepare the embedding matrix:
###Code
print('Preparing embedding matrix.')
num_words = min(MAX_NUM_WORDS, len(word_index) + 1)
embedding_dim = 100
embedding_matrix = np.zeros((num_words, embedding_dim))
for word, i in word_index.items():
if i >= MAX_NUM_WORDS:
continue
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
print('Shape of embedding matrix:', embedding_matrix.shape)
###Output
_____no_output_____
###Markdown
1-D CNN Initialization
###Code
print('Build model...')
model = Sequential()
model.add(Embedding(num_words,
embedding_dim,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False))
#model.add(Dropout(0.2))
model.add(Conv1D(128, 5, activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(128, 5, activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(128, 5, activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(128, activation='relu'))
model.add(Dense(20, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
print(model.summary())
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
###Output
_____no_output_____
###Markdown
Learning
###Code
%%time
epochs = 10
batch_size=128
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_val, y_val),
verbose=2, callbacks=callbacks)
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['loss'], label='training')
plt.plot(history.epoch,history.history['val_loss'], label='validation')
plt.title('loss')
plt.legend(loc='best')
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['acc'], label='training')
plt.plot(history.epoch,history.history['val_acc'], label='validation')
plt.title('accuracy')
plt.legend(loc='best');
###Output
_____no_output_____
###Markdown
InferenceWe evaluate the model using the test set. If accuracy on the test set is notably worse than with the training set, the model has likely overfitted to the training samples.
###Code
%%time
scores = model.evaluate(x_test, y_test, verbose=2)
print("Test set %s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
###Output
_____no_output_____
###Markdown
We can also look at classification accuracies separately for each newsgroup, and compute a confusion matrix to see which newsgroups get mixed the most:
###Code
predictions = model.predict(x_test)
cm=confusion_matrix(np.argmax(y_test, axis=1), np.argmax(predictions, axis=1),
labels=list(range(20)))
print('Classification accuracy for each newsgroup:'); print()
labels = [l[0] for l in sorted(labels_index.items(), key=lambda x: x[1])]
for i,j in enumerate(cm.diagonal()/cm.sum(axis=1)): print("%s: %.4f" % (labels[i].ljust(26), j))
print()
print('Confusion matrix (rows: true newsgroup; columns: predicted newsgroup):'); print()
np.set_printoptions(linewidth=9999)
print(cm); print()
plt.figure(figsize=(10,10))
plt.imshow(cm, cmap="gray", interpolation="none")
plt.title('Confusion matrix (rows: true newsgroup; columns: predicted newsgroup)')
tick_marks = np.arange(len(labels))
plt.xticks(tick_marks, labels, rotation=90)
plt.yticks(tick_marks, labels);
###Output
_____no_output_____
###Markdown
LSTM Initialization
###Code
print('Build model...')
model = Sequential()
model.add(Embedding(num_words,
embedding_dim,
weights=[embedding_matrix],
input_length=MAX_SEQUENCE_LENGTH,
trainable=False))
model.add(Dropout(0.5))
model.add(CuDNNLSTM(128, return_sequences=True))
model.add(CuDNNLSTM(128))
model.add(Dense(128, activation='relu'))
model.add(Dense(20, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
print(model.summary())
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
###Output
_____no_output_____
###Markdown
Learning
###Code
%%time
epochs = 10
batch_size=128
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_val, y_val),
verbose=2, callbacks=callbacks)
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['loss'], label='training')
plt.plot(history.epoch,history.history['val_loss'], label='validation')
plt.title('loss')
plt.legend(loc='best')
plt.figure(figsize=(5,3))
plt.plot(history.epoch,history.history['acc'], label='training')
plt.plot(history.epoch,history.history['val_acc'], label='validation')
plt.title('accuracy')
plt.legend(loc='best');
###Output
_____no_output_____
###Markdown
Inference
###Code
%%time
scores = model.evaluate(x_test, y_test, verbose=2)
print("Test set %s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
predictions = model.predict(x_test)
cm=confusion_matrix(np.argmax(y_test, axis=1), np.argmax(predictions, axis=1),
labels=list(range(20)))
print('Classification accuracy for each newsgroup:'); print()
labels = [l[0] for l in sorted(labels_index.items(), key=lambda x: x[1])]
for i,j in enumerate(cm.diagonal()/cm.sum(axis=1)): print("%s: %.4f" % (labels[i].ljust(26), j))
print()
print('Confusion matrix (rows: true newsgroup; columns: predicted newsgroup):'); print()
np.set_printoptions(linewidth=9999)
print(cm); print()
plt.figure(figsize=(10,10))
plt.imshow(cm, cmap="gray", interpolation="none")
plt.title('Confusion matrix (rows: true newsgroup; columns: predicted newsgroup)')
tick_marks = np.arange(len(labels))
plt.xticks(tick_marks, labels, rotation=90)
plt.yticks(tick_marks, labels);
###Output
_____no_output_____ |
docs/Dimension.ipynb | ###Markdown
Dimension Dimension's dict The Dimension class is based on a single dictionnary, stored as a json file "`dimension.txt`":
###Code
for key, value in physipy.quantity.dimension.SI_UNIT_SYMBOL.items():
print(f"{key: >5} : {value: <3}")
###Output
L : m
M : kg
T : s
I : A
theta : K
N : mol
J : cd
RAD : rad
SR : sr
###Markdown
Construction The Dimension object is basically a dictionnary that stores the dimensions' name and power. A dimension can be created different ways. The values associated can be int, float, or fractions.Fraction (actually, anything that supports addition, subtraction, multiplication, "minus" notation, and can be parsed by sympy). If possible, the values are casted into integers. - from None to create dimensionless
###Code
dimensionless = physipy.Dimension(None)
print(dimensionless)
print(repr(dimensionless))
dimensionless
###Output
no-dimension
<Dimension : {'L': 0, 'M': 0, 'T': 0, 'I': 0, 'theta': 0, 'N': 0, 'J': 0, 'RAD': 0, 'SR': 0}>
###Markdown
- from a string of a single dimension
###Code
a_length_dimension = physipy.Dimension("L")
print(a_length_dimension)
print(repr(a_length_dimension))
a_length_dimension
Dimension({"L":Fraction(1/2)})
###Output
_____no_output_____
###Markdown
- from a string of a single dimension's SI unit symbol
###Code
a_length_dimension = physipy.Dimension("m")
print(a_length_dimension)
print(repr(a_length_dimension))
a_length_dimension
###Output
L
<Dimension : {'L': 1, 'M': 0, 'T': 0, 'I': 0, 'theta': 0, 'N': 0, 'J': 0, 'RAD': 0, 'SR': 0}>
###Markdown
- form a dict of dimension symbols
###Code
a_speed_dimension = physipy.Dimension({"L": 1, "T":-1})
print(a_speed_dimension)
print(repr(a_speed_dimension))
a_speed_dimension
###Output
L/T
<Dimension : {'L': 1, 'M': 0, 'T': -1, 'I': 0, 'theta': 0, 'N': 0, 'J': 0, 'RAD': 0, 'SR': 0}>
###Markdown
- from a string of a product-ratio of dimension symbols
###Code
complex_dim = physipy.Dimension("L**2/T**3*theta**(-1/2)")
print(complex_dim)
print(repr(complex_dim))
complex_dim
###Output
L**2/(T**3*sqrt(theta))
<Dimension : {'L': 2, 'M': 0, 'T': -3, 'I': 0, 'theta': -1/2, 'N': 0, 'J': 0, 'RAD': 0, 'SR': 0}>
###Markdown
- from a string of a product-ratio of dimension's SI unit symbols
###Code
complex_dim = physipy.Dimension("m**2/s**3*K**-1")
print(complex_dim)
print(repr(complex_dim))
complex_dim
###Output
L**2/(T**3*theta)
<Dimension : {'L': 2, 'M': 0, 'T': -3, 'I': 0, 'theta': -1, 'N': 0, 'J': 0, 'RAD': 0, 'SR': 0}>
###Markdown
Operations on Dimension : mul, div, powDimension implements the following : - multiplication with another Dimension - division by another Dimension - pow by a number : this can be int, float, fractions.Fraction Dimensions can be multiplied and divided together as expected :
###Code
product_dim = a_length_dimension * a_speed_dimension
print(product_dim)
product_dim
div_dim = a_length_dimension / a_speed_dimension
print(div_dim)
div_dim
###Output
T
###Markdown
The inverse of a dimension can be computed by computing the division from 1, and the inverse method
###Code
1/a_speed_dimension
a_speed_dimension.inverse()
###Output
_____no_output_____
###Markdown
Computing the power :
###Code
a_speed_dimension**2
a_speed_dimension**(1/2)
a_speed_dimension**Fraction(1/2) * a_length_dimension**Fraction(10/3)
###Output
_____no_output_____
###Markdown
Not implemented operations - addition and substraction by anything - multiplication by anything that is not a Dimension - division by anaything that is not a Dimension or 1
###Code
# a_speed_dimension + a_speed_dimension --> NotImplemented
# a_speed_dimension / 1 --> TypeError: A dimension can only be divided by another dimension, not 1.
# a_speed_dimension * 1 --> TypeError: A dimension can only be multiplied by another dimension, not 1
###Output
_____no_output_____
###Markdown
Printing and display : str, repr, latex You can display a dimension many different ways : - with the standard repr format : `repr()` - as a latex form : `_repr_latex_` - in terms of dimension symbol : `str` - in terms of corresponding SI unit (returns a string) : `str_SI_unit()`Note that Dimension implements `__format__`, which is directly applied to its string representation.
###Code
print(complex_dim.__repr__())
print(complex_dim._repr_latex_())
print(complex_dim.__str__())
print(complex_dim.str_SI_unit())
###Output
<Dimension : {'L': 2, 'M': 0, 'T': -3, 'I': 0, 'theta': -1, 'N': 0, 'J': 0, 'RAD': 0, 'SR': 0}>
$\frac{L^{2}}{T^{3} \theta}$
L**2/(T**3*theta)
m**2/(K*s**3)
###Markdown
In a notebook, the latex form is automaticaly called and rendered :
###Code
complex_dim
###Output
_____no_output_____
###Markdown
Introspection : siunit_dict, dimensionality A dict containing the SI unit symbol as keys can be accessed :
###Code
a_speed_dimension.siunit_dict()
###Output
_____no_output_____
###Markdown
A high-level "dimensionality" can be accessed :
###Code
a_speed_dimension.dimensionality
###Output
_____no_output_____
###Markdown
The available dimensionality are stored in a dict :
###Code
from physipy.quantity.dimension import DIMENSIONALITY
for k, v in DIMENSIONALITY.items():
print(f"{k: >20} : {v: >20} : {v.str_SI_unit(): >20}")
###Output
length : L : m
mass : M : kg
time : T : s
electric_current : I : A
temperature : theta : K
amount_of_substance : N : mol
luminous_intensity : J : cd
plane_angle : RAD : rad
solid_angle : SR : sr
area : L**2 : m**2
volume : L**3 : m**3
speed : L/T : m/s
acceleration : L/T**2 : m/s**2
force : L*M/T**2 : kg*m/s**2
energy : L**2*M/T**2 : kg*m**2/s**2
power : L**2*M/T**3 : kg*m**2/s**3
capacitance : I**2*T**4/(L**2*M) : A**2*s**4/(kg*m**2)
voltage : L**2*M/(I*T**3) : kg*m**2/(A*s**3)
|
Week3/11. Neural Networks with MNIST_seungju.ipynb | ###Markdown
11. Neural Networks with MNIST
###Code
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
11.1 Prepare MNIST Data
###Code
train_data = dsets.MNIST(root='data/',
train=True,
transform=transforms.ToTensor(),
download=True)
test_data = dsets.MNIST(root='data/',
train=False,
transform=transforms.ToTensor(),
download=True)
fig = plt.figure(figsize = (15, 5))
ax1 = fig.add_subplot(1, 3, 1)
ax2 = fig.add_subplot(1, 3, 2)
ax3 = fig.add_subplot(1, 3, 3)
ax1.set_title(train_data.targets[0].item())
ax1.imshow(train_data.data[0,:,:].numpy(), cmap='gray')
ax2.set_title(train_data.targets[1].item())
ax2.imshow(train_data.data[1,:,:].numpy(), cmap='gray')
ax3.set_title(train_data.targets[2].item())
ax3.imshow(train_data.data[2,:,:].numpy(), cmap='gray')
###Output
_____no_output_____
###Markdown
11.2 Make Batch Loader
###Code
batch_size = 100
train_loader = DataLoader(dataset=train_data,
batch_size=batch_size,
shuffle=True)
batch_images, batch_labels = iter(train_loader).next()
print(batch_labels.numpy(), ", ", len(batch_labels.numpy()))
###Output
[2 8 7 9 2 2 8 4 1 0 6 6 9 3 1 0 3 9 8 7 8 5 7 9 8 7 1 1 3 3 7 0 2 4 4 4 7
1 8 8 1 2 4 1 7 5 9 7 2 4 4 6 7 5 1 7 3 8 6 4 2 6 6 3 8 1 9 6 9 1 0 3 2 3
2 4 7 9 4 6 1 2 2 4 1 5 4 3 6 5 8 2 6 9 9 2 7 1 6 5] , 100
###Markdown
11.3 Define Model
###Code
model = nn.Sequential(
nn.Linear(784, 512),
nn.ReLU(),
nn.Linear(512, 10)
)
loss = nn.CrossEntropyLoss()
# def cross_entropy(input, target, weight=None, size_average=True, ignore_index=-100, reduce=True):
# Args:
# input: Variable :math:`(N, C)` where `C = number of classes`
# target: Variable :math:`(N)` where each value is
# `0 <= targets[i] <= C-1`
# weight (Tensor, optional): a manual rescaling weight given to each
optimizer = optim.SGD(model.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
11.4 Train Model
###Code
num_epochs = 5
for epoch in range(num_epochs):
total_batch = len(train_data) // batch_size
for i, (batch_images, batch_labels) in enumerate(train_loader):
X = batch_images.view(-1, 28 * 28)
Y = batch_labels
pre = model(X)
cost = loss(pre, Y)
optimizer.zero_grad()
cost.backward()
optimizer.step()
if (i+1) % 300 == 0:
print('Epoch [%d/%d], lter [%d/%d], Loss: %.4f'
%(epoch+1, num_epochs, i+1, total_batch, cost.item()))
print("Learning Finished!")
###Output
Epoch [1/5], lter [300/600], Loss: 2.2339
Epoch [1/5], lter [600/600], Loss: 2.1724
Epoch [2/5], lter [300/600], Loss: 2.1140
Epoch [2/5], lter [600/600], Loss: 2.0447
Epoch [3/5], lter [300/600], Loss: 1.9354
Epoch [3/5], lter [600/600], Loss: 1.7694
Epoch [4/5], lter [300/600], Loss: 1.7353
Epoch [4/5], lter [600/600], Loss: 1.5700
Epoch [5/5], lter [300/600], Loss: 1.5151
Epoch [5/5], lter [600/600], Loss: 1.4449
Learning Finished!
###Markdown
11.5 Test Model
###Code
correct = 0
total = 0
for images, labels in test_data:
images = images.view(-1, 28 * 28)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += 1
correct += (predicted == labels).sum()
print('Accuracy of test images: %f %%' % (100 * float(correct) / total))
r = random.randint(0, len(test_data)-1)
X_single_data = test_data.data[r:r + 1].view(-1,28*28).float()
Y_single_data = test_data.targets[r:r + 1]
single_pre = model(X_single_data)
plt.imshow(X_single_data.data.view(28,28).numpy(), cmap='gray')
print('Label : ', Y_single_data.data.view(1).numpy())
print('Prediction : ', torch.max(single_pre.data, 1)[1].numpy())
###Output
Label : [7]
Prediction : [9]
###Markdown
11.6 Black Box
###Code
blackbox = torch.rand(X_single_data.size())
blackbox_pre = model(blackbox)
plt.imshow(blackbox.data.view(28,28).numpy(), cmap='gray')
print('Prediction : ', torch.max(blackbox_pre.data, 1)[1].numpy())
###Output
Prediction : [0]
###Markdown
추가 Xavier Initialization 추가해서 MNIST 돌려봄. 참고링크: * https://github.com/deeplearningzerotoall/PyTorch/blob/master/lab-09_3_mnist_nn_xavier.ipynb
###Code
linear1 = nn.Linear(784, 512, bias=True)
linear2 = nn.Linear(512, 10, bias=True)
relu = nn.ReLU()
nn.init.xavier_uniform_(linear1.weight)
nn.init.xavier_uniform_(linear2.weight)
model = nn.Sequential(linear1, relu, linear2)
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
# train
num_epochs = 5
for epoch in range(num_epochs):
total_batch = len(train_data) // batch_size
for i, (batch_images, batch_labels) in enumerate(train_loader):
X = batch_images.view(-1, 28 * 28)
Y = batch_labels
pre = model(X)
cost = loss(pre, Y)
optimizer.zero_grad()
cost.backward()
optimizer.step()
if (i+1) % 300 == 0:
print('Epoch [%d/%d], lter [%d/%d], Loss: %.4f'
%(epoch+1, num_epochs, i+1, total_batch, cost.item()))
print("Learning Finished!")
correct = 0
total = 0
for images, labels in test_data:
images = images.view(-1, 28 * 28)
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += 1
correct += (predicted == labels).sum()
print('Accuracy of test images: %f %%' % (100 * float(correct) / total))
###Output
Accuracy of test images: 91.710000 %
|
scripts/explore_package/check_math.ipynb | ###Markdown
Check mathStefan/Yuzhao HengSince Sun. Dec. 5th, 2021 Inverse transformationIt might help to run ICP with one order of (source, target), but the other order produces clearer visualizationsHow to visualize in reverse? Setup
###Code
from scripts.util import *
from scripts.robo_pose_estimator import *
pcr_kuka = get_kuka_pointcloud()
pts_hsr = eg_hsr_scan()
###Output
_____no_output_____
###Markdown
Check For one without thetaJust invert the x and y translations, looks good
###Code
lbs = Cluster.cluster(pts_hsr, approach='hierarchical', distance_threshold=1)
d_clusters = {lb: pts_hsr[np.where(lbs == lb)] for lb in np.unique(lbs)}
pts_cls = d_clusters[11]
x, y = config('heuristics.pose_guess.good_no_rotation')
tsf, states = visualize(
pts_cls, pcr_kuka,
title='HSR locates KUKA, from the real cluster, good translation estimate, reversed',
init_tsf=tsl_n_angle2tsf([-x, -y]),
)
# tsf_rev = np.linalg.inv(tsf) # Looks like this works
# states_rev = [(tgt, src, np.linalg.inv(tsf)) for (src, tgt, tsf, err) in states] # This doesn't
# plot_icp_result(pcr_kuka, pts_cls, tsf_rev, init_tsf=tsl_n_angle2tsf([x, y]))
###Output
ic| 'Initial guess': 'Initial guess'
init_tsf: array([[ 1. , -0. , -2.5],
[ 0. , 1. , 0.5],
[ 0. , 0. , 1. ]])
ic| tsf_: array([[ 0.99738895, -0.07221688, -2.52595887],
[ 0.07221688, 0.99738895, 0.5194569 ],
[ 0. , 0. , 1. ]])
tsl: array([-2.52595887, 0.5194569 ])
degrees(theta): 4.141327379332864
###Markdown
With theta
###Code
cand = (2.4765866676985326, -0.7653859276855487, -0.09541848029809685) # Taken from RANSAC output
tsf1 = np.linalg.inv(tsl_n_angle2tsf(cand))
tsf2 = tsl_n_angle2tsf([-e for e in cand])
ic(tsf1, tsf2)
for label, tsf in zip(['Matrix inverse', 'Heuristic'], [tsf1, tsf1]):
visualize(
pts_cls, pcr_kuka,
title=label,
init_tsf=tsf,
)
# plot_icp_result(pcr_kuka, pts_cls, tsf2, init_tsf=tsf1))
###Output
ic| tsf1: array([[ 0.99545111, -0.09527375, -2.53824214],
[ 0.09527375, 0.99545111, 0.52595056],
[ 0. , 0. , 1. ]])
tsf2: array([[ 0.99545111, -0.09527375, -2.47658667],
[ 0.09527375, 0.99545111, 0.76538593],
[ 0. , 0. , 1. ]])
ic| 'Initial guess': 'Initial guess'
init_tsf: array([[ 0.99545111, -0.09527375, -2.53824214],
[ 0.09527375, 0.99545111, 0.52595056],
[ 0. , 0. , 1. ]])
ic| tsf_: array([[ 0.99682352, -0.07964209, -2.52500609],
[ 0.07964209, 0.99682352, 0.51523587],
[ 0. , 0. , 1. ]])
tsl: array([-2.52500609, 0.51523587])
degrees(theta): 4.5679932186838865
|
Listings_EDA.ipynb | ###Markdown
Business Understanding1. How do the number of listings and prices vary in the Seattle's neighbourhoods?2. Do the room or property types change the price?3. Are there extra fees and how do they vary?
###Code
# Import neccessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pickle as pkl
from folium.plugins import HeatMap
import numpy as np
%matplotlib inline
#If you have not yet installed ipynb,uncomment the line below first
#!pip install ipynb
# Import customarized functions from my_functions notebook
from ipynb.fs.full.my_functions import cat_counts
from ipynb.fs.full.my_functions import stats_description
###Output
_____no_output_____
###Markdown
Data Understanding
###Code
# import data
df = pd.read_csv('Seattle_data/listings.csv')
df.head()
# Summary of the data
df.info()
# Number of rows and columns in the dataset
cat_df =df.select_dtypes(include=['object']).copy()
num_rows = df.shape[0] #Provide the number of rows in the dataset
print ("number of rows = {}".format(num_rows))
#Provide the number of rows in the dataset
num_cols = df.shape[1]
print ("number of columns = {}".format(num_cols))
#Columns without missing values
no_nulls =set(df.columns[df.isnull().mean()==0])
no_nulls = pd.DataFrame(no_nulls,columns=['Column_without_NANs']).shape[0]
print ("number of columns without NaNs = {}".format(no_nulls))
#Columns with only missing values
All_nulls =df.columns[df.isna().all()].tolist()
All_nulls = pd.DataFrame(All_nulls,columns=['Columns_only_NANs'])
All_nulls
#Columns with more than half of the values missing.
no_nulls =set(df.columns[df.isnull().mean()>0.5])
no_nulls = pd.DataFrame(no_nulls,columns=['Column_with_50%_NAN'])
no_nulls
#Columns with more than 75% of the values missing.
no_nulls =set(df.columns[df.isnull().mean()>0.75])
no_nulls = pd.DataFrame(no_nulls,columns=['Column_with_75%_NAN'])
no_nulls
# Categorical columns.
cat_vars = df.select_dtypes(include=['object']).copy().columns
cat_vars
# Costs and scores related features mislabelled as categorical values
miscategorized_vars = df[['price','security_deposit','cleaning_fee','extra_people','host_response_rate','host_acceptance_rate']]
miscategorized_vars.head()
###Output
_____no_output_____
###Markdown
* We observe that some of the numeric variables are miscategorized as categorical and we shall have to change them to numeric.
###Code
# numeric columns.
num_vars = df.select_dtypes(include=['int','float']).copy().columns
num_vars
###Output
_____no_output_____
###Markdown
Data Preparation * Based on the results above, the licence column has no values and this serves no purpose in this study.* Columns like square_feet which have more than 75% of missing values will be removed.* The monthly_price column has more than 50% of missing values and will also be removed. Removing the monthly price will not affect our analysis we have the daily price of all the listings.* Miscategorized numeric values will also be converted back.* To help with the data processing I have created a clean_data function which I describe in detail below.
###Code
def clean_data(df):
'''
Function that will help in data preprocessing and cleaning
INPUT
df - pandas dataframe
OUTPUT
clean_df - A dataframe with mssing values removed or interpolated
This function cleans df using the following steps;
1. Drop all the columns with missing values - Licence column was the only.
2. Drop columns with 50% or more missing values and column with "None" (experiences_offered)
3. Remove dollar signs from cost related variables and convert them to numerics
4. Remove the percentage sign from score related variables and convert them to numerics
5. Missing values related to costs are filled with the mean of the column
6. Missing values related to scores are filled with the mode of the column.
This is because ratings have specific values for example 1 - 10 and taking the mean may give fractional ratings.
'''
# Drop all the columns with Nan
df =df.dropna(how='all',axis=1)
#df =df.dropna(subset=['host_total_listings_count'],axis=0)
df = df.drop(['square_feet','monthly_price','weekly_price','experiences_offered'],axis=1)
#Remove dollar signs
fees = df[['price','security_deposit','cleaning_fee','extra_people']].columns
for col in fees:
df[col] = df[col].astype(str).str[1:]
df[col] = pd.to_numeric(df[col],errors='coerce').astype('float')
# Remove percentage signs
rates = df[['host_response_rate','host_acceptance_rate']].columns
for col in rates:
df[col] = df[col].astype(str).str.replace('%','').astype('float')
# Fill numeric columns with the mean
num_df = df[['security_deposit','cleaning_fee','price']].columns
for col in num_df:
df[col].fillna((df[col].mean()), inplace=True)
# Fill numeric columns with the median
num_df2 = df[['host_response_rate','review_scores_rating','review_scores_rating','reviews_per_month',
'review_scores_cleanliness','host_acceptance_rate','review_scores_cleanliness','bathrooms','reviews_per_month',
'review_scores_communication','review_scores_location','bedrooms','host_listings_count','beds','review_scores_communication',
'review_scores_value','review_scores_accuracy','review_scores_checkin','host_total_listings_count','review_scores_rating',
'review_scores_location','review_scores_checkin','review_scores_accuracy','review_scores_cleanliness','review_scores_value',
'review_scores_communication','review_scores_location','review_scores_value']].columns
for col in num_df2:
df[col].fillna((df[col].median()), inplace=True)
return df
clean_lisitings = clean_data(df)
#I would wish to use this preprocessed data in the price prediction model.
# For this reason i use the pandas pickle object to save and load preprocessed data.
clean_lisitings.to_pickle('Seattle_data/clean_lisitings.pkl')
###Output
_____no_output_____
###Markdown
Review the cleaned listings data
###Code
df = pd.read_pickle('Seattle_data/clean_lisitings.pkl')
df.info()
df.describe().T
#Columns with only missing values
All_nulls =df.columns[df.isna().all()].tolist()
All_nulls = pd.DataFrame(All_nulls,columns=['Columns_only_NANs'])
All_nulls
###Output
_____no_output_____
###Markdown
* No column has missing values
###Code
#Columns with more than half of the values missing.
no_nulls =set(df.columns[df.isnull().mean()>= 0.5])
no_nulls = pd.DataFrame(no_nulls,columns=['Column_with_50%_NAN'])
no_nulls
###Output
_____no_output_____
###Markdown
* No columns with 50% or more missing values Modelling* To address my business questions, I create these 2 functions to help in my analysis.* Cat_counts fucntion performs value counts on categorical columns then then calculate the percentage shares.* Stats_description functions groups the the dataframe by a categorical colummn and calculate the statistical features of a numeric column of it.* The cat_counts and Stats_description functions are saved in the my_functions notebooks. How many listings are in every neighbourhood in Seattle * To answer this questions, I count the number of listings in every neighbourhood in Seattle.* From the counts , I calculate the percentage share of every neighbourhood in propotional to the entire district.* I employ the cat_counts function from above.
###Code
# The counts and shares of listings in the different neighbourhoods.
num_listings = cat_counts(df,"neighbourhood_group_cleansed")
num_listings
###Output
_____no_output_____
###Markdown
Evaluation* We see that the number of listings vary from neighbourhood to another* Capitol hill and Downtown account for 15% and 14% of Seattle's listings respectively.* Interbay and Seward Park account for only 0.2% and 1.1% of Seattles's listings respectively. How do the listings prices vary in the different neighbourhoods?* To answer this question, I calculate the calculate the statistical features of every neighbourhood.* I use the stats_description function which groups the listings dataframe by the neighbourhood.* The price statistics are calculated and presented in the table below.* The results are also presented in the plots that follow below
###Code
# Listings' prices in the different neighbourhoods.
stats_description(df,"neighbourhood_group_cleansed",'price')
# Visualise the neighbbourhood counts, mean prices and distribution of the prices
#Countplot
plt.figure(figsize=(10,8))
ax=sns.countplot(x= "neighbourhood_group_cleansed",data=df,palette='Set1')
ax.set_xticklabels(ax.get_xticklabels(), rotation=90);
plt.title('The Listings in Seattle per neighbourhood')
plt.xlabel('Neighbourhood')
#Barplot
plt.figure(figsize=(10, 8))
ax = sns.barplot(data = df, x = "neighbourhood_group_cleansed", y = 'price',estimator = np.mean,palette='prism')
ax.set_xticklabels(ax.get_xticklabels(), rotation=90)
plt.xlabel('Neighbourhood')
#ax.set_ylim(0, 450);
plt.title('The average prices of the neighbourhoods in Seattle');
#Boxplot
plt.figure(figsize=(10, 8))
ax = sns.boxplot(data = df, x = "neighbourhood_group_cleansed", y = 'price', fliersize = 0.1,palette='Set3')
ax.set_xticklabels(ax.get_xticklabels(), rotation=90)
ax.set_ylim(0, 450);
plt.title('The distribution of the prices in the neigbourhoods in Seattle')
plt.xlabel('Neighbourhood');
###Output
_____no_output_____
###Markdown
Evaluations* Capitol Hill and Interbay have the highest(15%) and lowest(0.2%) listings in Seattle respectively* On average the prices are highest in Magnolia (178) and lowest in Delridge(83)* In general the prices in Seattle heighbourhoods are unevenly distributed.* The prices in Seattle are most spread out in Magnolia as seen from the std = 150* The prices in Seattle are least spread out in Northgate as seen from the std = 38 Does the property type affect the price?* To answer this question, I count the number and percentage shares of property types in Seattle.* Then I calculate the price statistical features of each property type.* I present the results in the tables and graphs that follow.* The evaluation is also presented after the visuals.
###Code
# The number and percentage share of the different property types
cat_counts(df,"property_type")
# The prices of the different propert types in Seattle
stats_description(df,"property_type",'price')
#Viualixe the distribution of the property prices in Seattle
#Box plots
plt.figure(figsize=(10, 8))
ax = sns.boxplot(data = df, x = "property_type", y = 'price', fliersize = 0.1,palette='prism')
ax.set_xticklabels(ax.get_xticklabels(), rotation=90)
ax.set_ylim(0, 750);
plt.title('The distribution of the property prices in Seattle');
###Output
_____no_output_____
###Markdown
Results* Aproximately 90% of the properties listed in Seattle are apartments and houses i.e. 45% and 44.7% respectively* Yurts,dorms and charlet are the 3 least popular properties in Seattle.* On avergae, a boat is the most highly priced property in Seattle with an average price 282. The prices for boats also have the highest spread (std = 281)* The appartment prices range from 20 to 999, having the lowest(min) and highest(max)prices among the properties in Seattle.* The dorm prices are the least spread (std=2) and this is because they also have the lowest counts in the data.The property type one chooses will directly affect the price. Does the room type affect the price?* To answer this question, I count the number and percentage shares of room types in Seattle.* Then I calculate and compare the price statistical features of each room type.* I present the results in the tables and graphs that follow.* The evaluation is also presented after the visuals.
###Code
#Count and share of room types in Seattle.
room_type_count = cat_counts(df,"room_type")
room_type_count
#Room type prices in Seattle.
stats_description(df,"room_type",'price')
# Visualize the percentage share
labels = room_type_count['room_type']
sizes = room_type_count['Count']
colors = ['pink', 'yellowgreen','lightskyblue']
explode = (0.1, 0, 0) # explode 1st slice
# Pie-chart
plt.figure(figsize=(12,10))
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=140)
plt.title('Percentage of the room types in Seattle');
###Output
_____no_output_____
###Markdown
Results* Entire home/apartment take the highest share of room types 66.5% in Seattle* Shared room have the lowest share 3.1% in Seattle* Entire home/apartment charged the highest price on average. Are there extra fees and how do they vary?* Yes there are other costs like the cleaning fees, security deposits and extra person fees.* To understand how these fees vary, I calulate these fees based on the categorical variables.* First I calculate the cleaning fees depending on the room type and property type.* Secondly I calculate the security deposit in relation to the cancellation policy.
###Code
# Cleaning fee in relation to the property type in Seattle.
stats_description(df,"property_type",'cleaning_fee')
###Output
_____no_output_____
###Markdown
Results:* On average a boat is the most expensive property to clean in Seattle with average cleaning price = 87.* On average a yurt is the least expendive property to clean in Seattle with average cleaning price = 25. This is expected based on the kind of property it is.* The highest cleaning fee paid is 300 for the property type house. I can't explain why this house's cleaning cost was this high but maybe the answer can be found in the data.
###Code
# Cleaning fee in relation to the room type in Seattle.
stats_description(df,"room_type",'cleaning_fee')
plt.figure(figsize=(10, 8))
ax = sns.barplot(data = df, x = "room_type", y = 'cleaning_fee',estimator = np.mean, palette='rocket')
ax.set_xticklabels(ax.get_xticklabels(), rotation=90)
#ax.set_ylim(0, 450);
plt.title('The average cleaning fees for the room types in Seattle');
###Output
_____no_output_____
###Markdown
Results* Entire home/apartment are charged the highest cleaning fee (71.73)* Cleaning fees for Entire home/apartment have the highest spread (45.5)* The lowest price one can party for cleaning any room type in Seattle is 5 dollars (min price).* One pays slightly more for cleaning a shared room than a private room (on average 3.5 difference)
###Code
# The number and percentage share of cancellation policies
cancellation_policy_count = cat_counts(df,"cancellation_policy")
cancellation_policy_count
# Security deposit according to the cancellation policy.
stats_description(df,"cancellation_policy",'security_deposit')
# Bar plot
plt.figure(figsize=(10, 8))
sns.barplot(x='cancellation_policy',y='security_deposit',data=df,palette = "Set2_r");
plt.title('The average security deposit based on the cancellation policy');
###Output
_____no_output_____ |
notebooks/12_spindles-SO_coupling.ipynb | ###Markdown
Phase-Amplitude CouplingThis notebook demonstrates how to use YASA to calculate phase-amplitude coupling (PAC) between two frequency bands.Please make sure to install the latest version of YASA first by typing the following line in your terminal or command prompt:`pip install --upgrade yasa`If you are not familiar with PAC methods, I highly recommend reading the two papers below:* Tort, A. B. L., Komorowski, R., Eichenbaum, H., & Kopell, N. (2010). Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. *Journal of Neurophysiology, 104(2), 1195–1210.* https://doi.org/10.1152/jn.00106.2010* Aru, J., Aru, J., Priesemann, V., Wibral, M., Lana, L., Pipa, G., Singer, W., & Vicente, R. (2015). Untangling cross-frequency coupling in neuroscience. *Current Opinion in Neurobiology, 31, 51–61.* https://doi.org/10.1016/j.conb.2014.08.002
###Code
import mne
import yasa
import numpy as np
import pandas as pd
import pingouin as pg
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style='ticks', font_scale=1.2)
# Load EEG data
f = np.load('data_full_6hrs_100Hz_Cz+Fz+Pz.npz')
data, ch_names = f['data'], f['chan']
sf = 100.
times = np.arange(data.size) / sf
# Keep only Cz
data = data[0, :].astype(np.float64)
print(data.shape, np.round(data[0:5], 3))
# Load the hypnogram data
hypno = np.loadtxt('data_full_6hrs_100Hz_hypno_30s.txt').astype(int)
print(hypno.shape, 'Unique values =', np.unique(hypno))
# Upsample to data
hypno = yasa.hypno_upsample_to_data(hypno=hypno, sf_hypno=1/30, data=data, sf_data=sf)
###Output
30-Sep-20 09:49:45 | WARNING | Hypnogram is SHORTER than data by 10.58 seconds. Padding hypnogram with last value to match data.size.
###Markdown
Event-locked analysesOne PAC approach that has been used in several recent publications is to first detect slow-waves, and then calculate the PAC based on epochs that are centered around the negative peak of the slow-waves.For example, this is from the Methods section of [Winer et al., J Neuro 2019](https://doi.org/10.1523/JNEUROSCI.0503-19.2019):> *For event-locked cross-frequency analyses (Dvorak and Fenton, 2014; Staresina et al.,2015; Helfrich et al., 2018), the normalized SO trough-locked data were first filtered into the SO component (0.1–1.25 Hz), and then the instantaneous phase angle was extracted after applying a Hilbert transform. Then the same trials were filtered between 12 and 16 Hz, and the instantaneous amplitude was extracted from the Hilbert transform. Only the time range from 2 to 2 s was considered, to avoid filter edge artifacts. For every subject, channel, and epoch, the maximal sleep spindle amplitude and corresponding SO phase angle were detected. The mean circular direction and resultant vector length across all NREM events were determined using the CircStat toolbox(Berens, 2009).*YASA provides some convenient tools to automatize these steps. Specifically, the ``coupling`` argument of the [yasa.sw_detect](https://raphaelvallat.com/yasa/build/html/generated/yasa.sw_detect.html) allows us to replicate these exact same steps:
###Code
sw = yasa.sw_detect(data, sf, ch_names=["Cz"], hypno=hypno, include=(2, 3), coupling=True, freq_sp=(12, 16))
events = sw.summary()
events.round(3)
###Output
The history saving thread hit an unexpected error (OperationalError('database is locked')).History will not be written to the database.
###Markdown
As explained in the documentation of the [yasa.sw_detect](https://raphaelvallat.com/yasa/build/html/generated/yasa.sw_detect.html), we get some additional columns in the output dataframe:1) The ``SigmaPeak`` column contains the timestamp (in seconds from the beginning of the recording) where the sigma-filtered (12 - 16 Hz, see ``freq_sp``) amplitude is at its maximal. This is calculated separately for each detected slow-wave, using a **4-seconds epoch centered around the negative peak of the slow-wave**.2) The ``PhaseAtSigmaPeak`` column contains the phase (in radians) of the slow-wave filtered signal (0.3 - 2 Hz, see ``freq_sw``) at ``SigmaPeak``. Using the [Pingouin package](https://pingouin-stats.org/), we can then easily extract and visualize the direction and strength of coupling across all channels:
###Code
import pingouin as pg
pg.plot_circmean(events['PhaseAtSigmaPeak'])
print('Circular mean: %.3f rad' % pg.circ_mean(events['PhaseAtSigmaPeak']))
print('Vector length: %.3f' % pg.circ_r(events['PhaseAtSigmaPeak']))
###Output
Circular mean: -0.178 rad
Vector length: 0.167
###Markdown
3) The ``ndPAC`` columns contains the normalized mean vector length (also called the normalized direct PAC, see [Ozkurt 2012](https://doi.org/10.1109/TBME.2012.2194783)). Note that ``ndPAC`` should be highly correlated the vector length of the ``PhaseAtSigmaPeak``. It may be more accurate though since it is calculated on the entire 4-seconds epoch.For more details, please refer to:* Özkurt, T. E. (2012). Statistically Reliable and Fast Direct Estimation of Phase-Amplitude Cross-Frequency Coupling. IEEE Transactions on Biomedical Engineering, 59(7), 1943–1950. https://doi.org/10.1109/TBME.2012.2194783
###Code
# Distribution of ndPAC value:
events['ndPAC'].hist();
# This should be close to the vector length that we calculated above
events['ndPAC'].mean()
###Output
_____no_output_____
###Markdown
************************* Data-driven PACHere, rather than focusing on event-locked coupling (e.g. based on slow-waves detection), we'll simply use epochs of 15-seconds of N2 sleep to estimate PAC across a range of phase and amplitude frequencies.To calculate formal phase-amplitude coupling, we'll be using the [tensorpac](https://etiennecmb.github.io/tensorpac/) package. Make sure to install it using: `pip install -U tensorpac`.
###Code
# Segment N2 sleep into 15-seconds non-overlapping epochs
_, data_N2 = yasa.sliding_window(data[hypno == 2], sf, window=15)
# We end up with 636 epochs of 15-seconds
data_N2.shape
# First, let's define our array of frequencies for phase and amplitude
f_pha = np.arange(0.125, 4.25, 0.25) # Frequency for phase
f_amp = np.arange(7.25, 25.5, 0.5) # Frequency for amplitude
f_pha, f_amp
###Output
_____no_output_____
###Markdown
Let's now calculate the comodulogram. Please refer to the [main API of tensorpac](https://etiennecmb.github.io/tensorpac/generated/tensorpac.Pac.htmltensorpac.Pac) for more details
###Code
from tensorpac import Pac
sns.set(font_scale=1.1, style='white')
# Define a PAC object
p = Pac(idpac=(1, 0, 0), f_pha=f_pha, f_amp=f_amp, verbose='WARNING')
# Filter the data and extract the PAC values
xpac = p.filterfit(sf, data_N2)
# Plot the comodulogram
p.comodulogram(xpac.mean(-1), title=str(p), vmin=0, plotas='imshow');
# Extract PAC values into a DataFrame
df_pac = pd.DataFrame(xpac.mean(-1), columns=p.xvec, index=p.yvec)
df_pac.columns.name = 'FreqPhase'
df_pac.index.name = 'FreqAmplitude'
df_pac.head(20).round(2)
###Output
_____no_output_____
###Markdown
From the Pandas DataFrame above, we see that the maximal coupling (mean vector length) across all epochs is between the 0.25 Hz frequency for phase and 12.5 hz frequency for amplitude.As a last example, we now change the ``idpac`` argument to calculate the Modulation Index instead of the Mean Vector Length.
###Code
# Define a PAC object
p = Pac(idpac=(2, 0, 0), f_pha=f_pha, f_amp=f_amp, verbose='WARNING')
# Filter the data and extract the PAC values
xpac = p.filterfit(sf, data_N2)
# Plot the comodulogram
p.comodulogram(xpac.mean(-1), title=str(p), plotas='imshow');
df_pac = pd.DataFrame(xpac.mean(-1), columns=p.xvec, index=p.yvec)
df_pac.columns.name = 'FreqPhase'
df_pac.index.name = 'FreqAmplitude'
df_pac.round(3)
###Output
_____no_output_____
###Markdown
Phase-Amplitude CouplingThis notebook demonstrates how to use YASA to calculate phase-amplitude coupling (PAC) between two frequency bands.Please make sure to install the latest version of YASA first by typing the following line in your terminal or command prompt:`pip install --upgrade yasa`If you are not familiar with PAC methods, I highly recommend reading the two papers below:* Tort, A. B. L., Komorowski, R., Eichenbaum, H., & Kopell, N. (2010). Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. *Journal of Neurophysiology, 104(2), 1195–1210.* https://doi.org/10.1152/jn.00106.2010* Aru, J., Aru, J., Priesemann, V., Wibral, M., Lana, L., Pipa, G., Singer, W., & Vicente, R. (2015). Untangling cross-frequency coupling in neuroscience. *Current Opinion in Neurobiology, 31, 51–61.* https://doi.org/10.1016/j.conb.2014.08.002
###Code
import mne
import yasa
import numpy as np
import pandas as pd
import pingouin as pg
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style='ticks', font_scale=1.2)
# Load EEG data
f = np.load('data_full_6hrs_100Hz_Cz+Fz+Pz.npz')
data, ch_names = f['data'], f['chan']
sf = 100.
times = np.arange(data.size) / sf
# Keep only Cz
data = data[0, :].astype(np.float64)
print(data.shape, np.round(data[0:5], 3))
# Load the hypnogram data
hypno = np.loadtxt('data_full_6hrs_100Hz_hypno_30s.txt').astype(int)
print(hypno.shape, 'Unique values =', np.unique(hypno))
# Upsample to data
hypno = yasa.hypno_upsample_to_data(hypno=hypno, sf_hypno=1/30, data=data, sf_data=sf)
###Output
09-May-20 10:22:17 | WARNING | Hypnogram is SHORTER than data by 10.58 seconds. Padding hypnogram with last value to match data.size.
###Markdown
Event-locked analysesOne PAC approach that has been used in several recent publications is to first detect slow-waves, and then calculate the PAC based on epochs that are centered around the negative peak of the slow-waves.For example, this is from the Methods section of [Winer et al., J Neuro 2019](https://doi.org/10.1523/JNEUROSCI.0503-19.2019):> *For event-locked cross-frequency analyses (Dvorak and Fenton, 2014; Staresina et al.,2015; Helfrich et al., 2018), the normalized SO trough-locked data were first filtered into the SO component (0.1–1.25 Hz), and then the instantaneous phase angle was extracted after applying a Hilbert transform. Then the same trials were filtered between 12 and 16 Hz, and the instantaneous amplitude was extracted from the Hilbert transform. Only the time range from 2 to 2 s was considered, to avoid filter edge artifacts. For every subject, channel, and epoch, the maximal sleep spindle amplitude and corresponding SO phase angle were detected. The mean circular direction and resultant vector length across all NREM events were determined using the CircStat toolbox(Berens, 2009).*YASA provides some convenient tools to automatize these steps. Specifically, the ``coupling`` argument of the [yasa.sw_detect](https://raphaelvallat.com/yasa/build/html/generated/yasa.sw_detect.html) allows us to replicate these exact same steps:
###Code
sw = yasa.sw_detect(data, sf, ch_names=["Cz"], hypno=hypno, include=(2, 3), coupling=True, freq_sp=(12, 16))
events = sw.summary()
events.round(3)
###Output
_____no_output_____
###Markdown
As explained in the documentation of the [yasa.sw_detect](https://raphaelvallat.com/yasa/build/html/generated/yasa.sw_detect.html), we get two coupling metrics:1) The ``PhaseAtSigmaPeak`` column contains the phase (in radians) of the slow-wave filtered signal (0.3 - 2 Hz, see ``freq_sw``) where the sigma-filtered (12 - 16 Hz, see ``freq_sp``) amplitude is at its maximal. This is calculated separately for each detected slow-wave, using a **4-seconds epoch centered around the negative peak of the slow-wave**. Using the [Pingouin package](https://pingouin-stats.org/), we can then easily extract and visualize the direction and strength of coupling across all channels:
###Code
import pingouin as pg
pg.plot_circmean(events['PhaseAtSigmaPeak'])
print('Circular mean: %.3f rad' % pg.circ_mean(events['PhaseAtSigmaPeak']))
print('Vector length: %.3f' % pg.circ_r(events['PhaseAtSigmaPeak']))
###Output
Circular mean: -0.178 rad
Vector length: 0.167
###Markdown
2) The ``ndPAC`` columns contains the normalized mean vector length (also called the normalized direct PAC, see [Ozkurt 2012](https://doi.org/10.1109/TBME.2012.2194783)). Note that ``ndPAC`` should be highly correlated the vector length of the ``PhaseAtSigmaPeak``. It may be more accurate though since it is calculated on the entire 4-seconds epoch.For more details, please refer to:* Özkurt, T. E. (2012). Statistically Reliable and Fast Direct Estimation of Phase-Amplitude Cross-Frequency Coupling. IEEE Transactions on Biomedical Engineering, 59(7), 1943–1950. https://doi.org/10.1109/TBME.2012.2194783
###Code
# Distribution of ndPAC value:
events['ndPAC'].hist();
# This should be close to the vector length that we calculated above
events['ndPAC'].mean()
###Output
_____no_output_____
###Markdown
************************* Data-driven PACHere, rather than focusing on event-locked coupling (e.g. based on slow-waves detection), we'll simply use epochs of 15-seconds of N2 sleep to estimate PAC across a range of phase and amplitude frequencies.To calculate formal phase-amplitude coupling, we'll be using the [tensorpac](https://etiennecmb.github.io/tensorpac/) package. Make sure to install it using: `pip install -U tensorpac`.
###Code
# Segment N2 sleep into 15-seconds non-overlapping epochs
_, data_N2 = yasa.sliding_window(data[hypno == 2], sf, window=15)
# We end up with 636 epochs of 15-seconds
data_N2.shape
# First, let's define our array of frequencies for phase and amplitude
f_pha = np.arange(0.125, 4.25, 0.25) # Frequency for phase
f_amp = np.arange(7.25, 25.5, 0.5) # Frequency for amplitude
f_pha, f_amp
###Output
_____no_output_____
###Markdown
Let's now calculate the comodulogram. Please refer to the [main API of tensorpac](https://etiennecmb.github.io/tensorpac/generated/tensorpac.Pac.htmltensorpac.Pac) for more details
###Code
from tensorpac import Pac
sns.set(font_scale=1.1, style='white')
# Define a PAC object
p = Pac(idpac=(1, 0, 0), f_pha=f_pha, f_amp=f_amp, verbose='WARNING')
# Filter the data and extract the PAC values
xpac = p.filterfit(sf, data_N2)
# Plot the comodulogram
p.comodulogram(xpac.mean(-1), title=str(p), vmin=0, plotas='imshow');
# Extract PAC values into a DataFrame
df_pac = pd.DataFrame(xpac.mean(-1), columns=p.xvec, index=p.yvec)
df_pac.columns.name = 'FreqPhase'
df_pac.index.name = 'FreqAmplitude'
df_pac.head(20).round(2)
###Output
_____no_output_____
###Markdown
From the Pandas DataFrame above, we see that the maximal coupling (mean vector length) across all epochs is between the 0.25 Hz frequency for phase and 12.5 hz frequency for amplitude.As a last example, we now change the ``idpac`` argument to calculate the Modulation Index instead of the Mean Vector Length.
###Code
# Define a PAC object
p = Pac(idpac=(2, 0, 0), f_pha=f_pha, f_amp=f_amp, verbose='WARNING')
# Filter the data and extract the PAC values
xpac = p.filterfit(sf, data_N2)
# Plot the comodulogram
p.comodulogram(xpac.mean(-1), title=str(p), plotas='imshow');
df_pac = pd.DataFrame(xpac.mean(-1), columns=p.xvec, index=p.yvec)
df_pac.columns.name = 'FreqPhase'
df_pac.index.name = 'FreqAmplitude'
df_pac.round(3)
###Output
_____no_output_____ |
post/pipeFlow/processPlotChannelFlow.ipynb | ###Markdown
Post processing for pipeFlow case* Compute ODT mean and rms velocity profiles. Plot results versus DNS.* Values are in wall units (y+, u+).* Scaling is done in the input file (not explicitly here).* Plots are shown here, and saved as PDF files.* Stats data is saved in ../../data//post/ **Change ```caseN``` below**
###Code
caseN = 'pipe' # changeme
import numpy as np
import glob as gb
import yaml
import sys
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import os
from scipy.interpolate import interp1d
#--------------------------------------------------------------------------------------------
if not os.path.exists("../../data/"+caseN+"/post") :
os.mkdir("../../data/"+caseN+"/post")
#-----------------------------------
with open("../../data/"+caseN+"/input/input.yaml") as ifile :
yml = yaml.load(ifile, Loader=yaml.FullLoader)
kvisc = yml["params"]["kvisc0"]
dxmin = yml["params"]["dxmin"]
delta = yml["params"]["domainLength"] * 0.5
Retau = 1.0/kvisc
flist = sorted(gb.glob('../../data/'+caseN+'/data/data_00000/dmp_*.dat'))
nunif = int(1/dxmin) # uniform grid (using smallest grid size)
nunif2 = int(nunif/2) # half as many (for ploting to domain center)
nfiles = len(flist)
yu = np.linspace(-delta,delta,nunif)
um = np.zeros(nunif) # mean velocity
u2m = np.zeros(nunif) # mean square velocity (for rms)
vm = np.zeros(nunif)
v2m = np.zeros(nunif)
wm = np.zeros(nunif)
w2m = np.zeros(nunif)
for ifile in flist :
data = np.loadtxt(ifile);
y = data[:,0]
u = data[:,2]
v = data[:,3]
w = data[:,4]
uu = interp1d(y, u, fill_value='extrapolate')(yu) # interpolate to uniform grid
vv = interp1d(y, v, fill_value='extrapolate')(yu)
ww = interp1d(y, w, fill_value='extrapolate')(yu)
um += uu # update mean profiles
vm += vv
wm += ww
u2m += uu*uu
v2m += vv*vv
w2m += ww*ww
um /= nfiles
vm /= nfiles
wm /= nfiles
um = 0.5*(um[:nunif2] + np.flipud(um[nunif2:])) # mirror data (symmetric)
vm = 0.5*(vm[:nunif2] + np.flipud(vm[nunif2:]))
wm = 0.5*(wm[:nunif2] + np.flipud(wm[nunif2:]))
u2m /= nfiles
v2m /= nfiles
w2m /= nfiles
u2m = 0.5*(u2m[:nunif2] + np.flipud(u2m[nunif2:]))
v2m = 0.5*(v2m[:nunif2] + np.flipud(v2m[nunif2:]))
w2m = 0.5*(w2m[:nunif2] + np.flipud(w2m[nunif2:]))
urms = np.sqrt(u2m - um*um)
vrms = np.sqrt(v2m - vm*vm)
wrms = np.sqrt(w2m - wm*wm)
yu += delta # domain center is at 0; shift so left side is zero
yu = yu[:nunif2] # plotting to domain center
dudy = (um[1]-um[0])/(yu[1]-yu[0])
utau = np.sqrt(kvisc * np.abs(dudy))
RetauOdt = utau * delta / kvisc
yu *= utau/kvisc # scale y --> y+ (note: utau should be unity)
odt_data = np.vstack([yu,um,vm,wm,urms,vrms,wrms]).T
fname = "../../data/"+caseN+"/post/ODTstat.dat"
np.savetxt(fname, odt_data, header="y+, u+_mean, v+_mean, w+_mean, u+_rms, v+_rms, w+_rms", fmt='%12.5E')
print("Nominal Retau: ", Retau)
print("Actual Retau: ", RetauOdt)
#--------------------------------------------------------------------------------------------
print(f"MAKING PLOT OF MEAN U PROFILE: ODT vs DNS in ../../data/{caseN}/post/u_mean.pdf" )
dns = np.loadtxt("DNS_raw/550_Re_1.dat", comments='%')
odt = np.loadtxt("../../data/"+caseN+"/post/ODTstat.dat")
matplotlib.rcParams.update({'font.size':20, 'figure.autolayout': True}) #, 'font.weight':'bold'})
fig, ax = plt.subplots()
y_odt = odt[:,0]
y_dns = dns[:,1]
u_odt = odt[:,1]
u_dns = dns[:,2]
ax.semilogx(y_odt, u_odt, 'k-', label=r'ODT')
ax.semilogx(y_dns, u_dns, 'k--', label=r'DNS')
ax.set_xlabel(r'$(R-r)^+$') #, fontsize=22)
ax.set_ylabel(r'$u^+$') #, fontsize=22)
ax.legend(loc='upper left', frameon=False, fontsize=16)
ax.set_ylim([0, 30])
ax.set_xlim([1, 1000])
plt.savefig(f"../../data/{caseN}/post/u_mean.pdf")
#--------------------------------------------------------------------------------------------
print(f"MAKING PLOT OF RMS VEL PROFILES: ODT vs DNS in ../../data/{caseN}/post/u_rms.pdf" )
dns = np.loadtxt("DNS_raw/550_Re_1.dat", comments='%')
odt = np.loadtxt("../../data/"+caseN+"/post/ODTstat.dat")
matplotlib.rcParams.update({'font.size':20, 'figure.autolayout': True}) #, 'font.weight':'bold'})
fig, ax = plt.subplots()
y_odt = odt[:,0]
y_dns = dns[:,1]
urms_odt = odt[:,4]
vrms_odt = odt[:,5]
wrms_odt = odt[:,6]
urms_dns = dns[:,6]**0.5
vrms_dns = dns[:,4]**0.5
wrms_dns = dns[:,5]**0.5
ax.plot(y_odt, urms_odt, 'k-', label=r'$u_{rms}^+$')
ax.plot(y_odt, vrms_odt, 'b--', label=r'$v_{rms}$, $u_{t,rms}$')
ax.plot(y_odt, wrms_odt, 'r:', label=r'$w_{rms}$, $u_{r,rms}$')
ax.plot(-y_dns, urms_dns, 'k-', label='')
ax.plot(-y_dns, vrms_dns, 'b--', label='')
ax.plot(-y_dns, wrms_dns, 'r:', label='')
ax.plot([0,0], [0,3], 'k-', linewidth=0.5, color='gray')
ax.arrow( 30,0.2, 50, 0, head_width=0.05, head_length=10, color='gray')
ax.arrow(-30,0.2, -50, 0, head_width=0.05, head_length=10, color='gray')
ax.text( 30,0.3,"ODT", fontsize=14, color='gray')
ax.text(-80,0.3,"DNS", fontsize=14, color='gray')
ax.set_xlabel(r'$(R-r)^+$') #, fontsize=22)
ax.set_ylabel(r'$u_{i,rms}^+$')
ax.legend(loc='upper right', frameon=False, fontsize=16)
ax.set_xlim([-300, 300])
ax.set_ylim([0, 3])
plt.savefig(f"../../data/{caseN}/post/u_rms.pdf")
###Output
Nominal Retau: 550.0000055
Actual Retau: 547.1360079432962
MAKING PLOT OF MEAN U PROFILE: ODT vs DNS in ../../data/pipe/post/u_mean.pdf
MAKING PLOT OF RMS VEL PROFILES: ODT vs DNS in ../../data/pipe/post/u_rms.pdf
|
4-Training_Experiment_Random_Forest.ipynb | ###Markdown
Baseline Experiment 2: Training Random Forest ClassifierUsing Grid Search with 5-fold Cross-Validation, with a 70/30 train/test split.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import joblib
import sklearn
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix, plot_confusion_matrix
from sklearn.utils import class_weight # For balanced class weighted classification training
# For reproducible results
RANDOM_STATE_SEED = 420
df_dataset = pd.read_csv("processed_friday_dataset.csv")
df_dataset
df_dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1364431 entries, 0 to 1364430
Data columns (total 73 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Flow Duration 1364431 non-null int64
1 Tot Fwd Pkts 1364431 non-null int64
2 Tot Bwd Pkts 1364431 non-null int64
3 TotLen Fwd Pkts 1364431 non-null int64
4 TotLen Bwd Pkts 1364431 non-null float64
5 Fwd Pkt Len Max 1364431 non-null int64
6 Fwd Pkt Len Min 1364431 non-null int64
7 Fwd Pkt Len Mean 1364431 non-null float64
8 Fwd Pkt Len Std 1364431 non-null float64
9 Bwd Pkt Len Max 1364431 non-null int64
10 Bwd Pkt Len Min 1364431 non-null int64
11 Bwd Pkt Len Mean 1364431 non-null float64
12 Bwd Pkt Len Std 1364431 non-null float64
13 Flow IAT Mean 1364431 non-null float64
14 Flow IAT Std 1364431 non-null float64
15 Flow IAT Max 1364431 non-null float64
16 Flow IAT Min 1364431 non-null float64
17 Fwd IAT Tot 1364431 non-null float64
18 Fwd IAT Mean 1364431 non-null float64
19 Fwd IAT Std 1364431 non-null float64
20 Fwd IAT Max 1364431 non-null float64
21 Fwd IAT Min 1364431 non-null float64
22 Bwd IAT Tot 1364431 non-null float64
23 Bwd IAT Mean 1364431 non-null float64
24 Bwd IAT Std 1364431 non-null float64
25 Bwd IAT Max 1364431 non-null float64
26 Bwd IAT Min 1364431 non-null float64
27 Fwd Header Len 1364431 non-null int64
28 Bwd Header Len 1364431 non-null int64
29 Fwd Pkts/s 1364431 non-null float64
30 Bwd Pkts/s 1364431 non-null float64
31 Pkt Len Min 1364431 non-null int64
32 Pkt Len Max 1364431 non-null int64
33 Pkt Len Mean 1364431 non-null float64
34 Pkt Len Std 1364431 non-null float64
35 Pkt Len Var 1364431 non-null float64
36 FIN Flag Cnt 1364431 non-null int64
37 SYN Flag Cnt 1364431 non-null int64
38 RST Flag Cnt 1364431 non-null int64
39 PSH Flag Cnt 1364431 non-null int64
40 ACK Flag Cnt 1364431 non-null int64
41 URG Flag Cnt 1364431 non-null int64
42 CWE Flag Count 1364431 non-null int64
43 ECE Flag Cnt 1364431 non-null int64
44 Down/Up Ratio 1364431 non-null int64
45 Pkt Size Avg 1364431 non-null float64
46 Fwd Seg Size Avg 1364431 non-null float64
47 Bwd Seg Size Avg 1364431 non-null float64
48 Fwd Byts/b Avg 1364431 non-null int64
49 Fwd Pkts/b Avg 1364431 non-null int64
50 Fwd Blk Rate Avg 1364431 non-null int64
51 Bwd Byts/b Avg 1364431 non-null int64
52 Bwd Pkts/b Avg 1364431 non-null int64
53 Bwd Blk Rate Avg 1364431 non-null int64
54 Subflow Fwd Pkts 1364431 non-null int64
55 Subflow Fwd Byts 1364431 non-null int64
56 Subflow Bwd Pkts 1364431 non-null int64
57 Subflow Bwd Byts 1364431 non-null int64
58 Init Fwd Win Byts 1364431 non-null int64
59 Init Bwd Win Byts 1364431 non-null int64
60 Fwd Act Data Pkts 1364431 non-null int64
61 Fwd Seg Size Min 1364431 non-null int64
62 Active Mean 1364431 non-null float64
63 Active Std 1364431 non-null float64
64 Active Max 1364431 non-null float64
65 Active Min 1364431 non-null float64
66 Idle Mean 1364431 non-null float64
67 Idle Std 1364431 non-null float64
68 Idle Max 1364431 non-null float64
69 Idle Min 1364431 non-null float64
70 Protocol_17 1364431 non-null int64
71 Protocol_6 1364431 non-null int64
72 Label 1364431 non-null int64
dtypes: float64(35), int64(38)
memory usage: 759.9 MB
###Markdown
1- Making an 70/30 train/test split
###Code
train, test = train_test_split(df_dataset, test_size=0.3, random_state=RANDOM_STATE_SEED)
train
test
###Output
_____no_output_____
###Markdown
MinMax Scaling of numerical attributes
###Code
numerical_cols = train.columns[:-3]
numerical_cols
min_max_scaler = MinMaxScaler().fit(train[numerical_cols])
train[numerical_cols] = min_max_scaler.transform(train[numerical_cols])
train
test[numerical_cols] = min_max_scaler.transform(test[numerical_cols])
test
###Output
_____no_output_____
###Markdown
2- Checking label distribution
###Code
print("Full dataset:\n")
print("Benign: " + str(df_dataset["Label"].value_counts()[[0]].sum()))
print("Malicious: " + str(df_dataset["Label"].value_counts()[[1]].sum()))
print("---------------")
print("Training set:\n")
print("Benign: " + str(train["Label"].value_counts()[[0]].sum()))
print("Malicious: " + str(train["Label"].value_counts()[[1]].sum()))
print("---------------")
print("Test set:\n")
print("Benign: " + str(test["Label"].value_counts()[[0]].sum()))
print("Malicious: " + str(test["Label"].value_counts()[[1]].sum()))
###Output
Full dataset:
Benign: 1074342
Malicious: 290089
---------------
Training set:
Benign: 751849
Malicious: 203252
---------------
Test set:
Benign: 322493
Malicious: 86837
###Markdown
3- Splitting to X_train, y_train, X_test, y_test
###Code
y_train = np.array(train.pop("Label")) # pop removes "Label" from the dataframe
X_train = train.values
print(type(X_train))
print(type(y_train))
print(X_train.shape)
print(y_train.shape)
y_test = np.array(test.pop("Label")) # pop removes "Label" from the dataframe
X_test = test.values
print(type(X_test))
print(type(y_test))
print(X_test.shape)
print(y_test.shape)
###Output
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
(409330, 72)
(409330,)
###Markdown
4- Fitting Random Forest model
###Code
# Calculating class weights for balanced class weighted classifier training
class_weights = class_weight.compute_class_weight(
class_weight='balanced',
classes=np.unique(y_train),
y=y_train
)
print(class_weights)
# Must be in dict format for scikitlearn
class_weights = {
0: class_weights[0],
1: class_weights[1]
}
print(class_weights)
model = RandomForestClassifier(
n_estimators=100,
criterion='gini',
max_depth=None,
min_samples_split=2,
min_samples_leaf=1,
min_weight_fraction_leaf=0.0,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
bootstrap=True,
oob_score=False,
n_jobs=None,
random_state=None,
verbose=0,
warm_start=False,
class_weight=class_weights,
ccp_alpha=0.0,
max_samples=None
)
hyperparameters = {
'n_estimators': [50, 75, 100, 125, 150]
}
clf = GridSearchCV(
estimator=model,
param_grid=hyperparameters,
cv=5,
verbose=1,
n_jobs=-1 # Use all available CPU cores
)
clf.fit(X=X_train, y=y_train)
###Output
Fitting 5 folds for each of 5 candidates, totalling 25 fits
###Markdown
5- Extracting best performing model in the 5-fold cross-validation Grid Search
###Code
print("Accuracy score on Validation set: \n")
print(clf.best_score_ )
print("---------------")
print("Best performing hyperparameters on Validation set: ")
print(clf.best_params_)
print("---------------")
print(clf.best_estimator_)
model = clf.best_estimator_
model
###Output
_____no_output_____
###Markdown
6- Evaluating on Test set
###Code
predictions = model.predict(X_test)
predictions
###Output
_____no_output_____
###Markdown
6.1 Accuracy on Test set
###Code
print(accuracy_score(y_test, predictions))
###Output
0.9990374514450443
###Markdown
6.2 Confusion matrix
###Code
cm = confusion_matrix(y_test, predictions)
print(cm)
plot_confusion_matrix(model, X_test, y_test, cmap="cividis")
###Output
/home/tamer/anaconda3/envs/cic-ids-2018/lib/python3.7/site-packages/sklearn/utils/deprecation.py:87: FutureWarning: Function plot_confusion_matrix is deprecated; Function `plot_confusion_matrix` is deprecated in 1.0 and will be removed in 1.2. Use one of the class methods: ConfusionMatrixDisplay.from_predictions or ConfusionMatrixDisplay.from_estimator.
warnings.warn(msg, category=FutureWarning)
###Markdown
6.3 Classification report
###Code
print(classification_report(y_test, predictions, digits=5))
###Output
precision recall f1-score support
0 0.99990 0.99887 0.99939 322493
1 0.99584 0.99964 0.99774 86837
accuracy 0.99904 409330
macro avg 0.99787 0.99926 0.99856 409330
weighted avg 0.99904 0.99904 0.99904 409330
###Markdown
7- Saving model
###Code
joblib.dump(model, "trained_models/random-forest-classifier.pkl")
###Output
_____no_output_____
###Markdown
8- Testing loading model
###Code
model = joblib.load("trained_models/random-forest-classifier.pkl")
model
###Output
_____no_output_____ |
machinelearning/sklearn-freecodecamp/machine-learning-sklearn-freecodecamp.ipynb | ###Markdown
[Scikit-Learn Course - Machine Learning in Python Tutorial - freeCodeCamp.org](https://www.youtube.com/watch?v=pqNCD_5r0IU)
[Source Code](https://github.com/DL-Academy/MachineLearningSKLearn)
[Scikit-learn website](https://github.com/DL-Academy/MachineLearningSKLearn)
[UCI Machine Learning Repository - Center for ML and Intelligent Systems](https://archive.ics.uci.edu/ml/datasets/car+evaluation)
[GIMP download](https://www.gimp.org/downloads/) Installing SKlearn [0:22]
```
pip install -U scikit-learn
``` Plot a Graph [3:37]
###Code
import matplotlib.pyplot as plt
x = [i for i in range(10)]
print(x)
y = [2 * i for i in range(10)]
print(y)
plt.plot(x, y)
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.scatter(x, y)
###Output
_____no_output_____
###Markdown
Features and Labels_1[7:33] - Features, attributes, independent variables, input, X
- Labels, dependent variables, output, y
- Dimension: number of columns
- Instance: number of rows
- [Data Set](https://archive.ics.uci.edu/ml/datasets/car+evaluation) Save and Open a Model [11:45] Classification [13:47] Train Test Split [17:28]
###Code
from sklearn import datasets
import numpy as np
from sklearn.model_selection import train_test_split
iris = datasets.load_iris()
# Split it in features and labels
X = iris.data
y = iris.target
print(X.shape)
print(y.shape)
print(X, y)
# Hours of study vs good/bad grades
# 10 different students
# Train with 8 students
# Predict with the remaining 2
# Level of accuracy
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
###Output
(120, 4)
(30, 4)
(120,)
(30,)
###Markdown
[What is KNN? [25:31]](https://scikit-learn.org/stable/modules/neighbors.html)
- KNN: K Nearest Neighbors Classification KNN Example [33:48]
###Code
import numpy as np
import pandas as pd
from sklearn import neighbors, metrics
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
data = pd.read_csv('https://raw.githubusercontent.com/davidzheng66/python3/master/machinelearning/sklearn-freecodecamp/car.data')
print(data.head())
X = data[[
'buying',
'maint',
'safety'
]].values
y = data[['class']]
print(X, '\n', y)
# Converting the data
Le = LabelEncoder()
print(len(X[0]))
print(X)
for i in range(len(X[0])):
X[:, i] = Le.fit_transform(X[:, i])
print(X)
# le1 = LabelEncoder()
# print(le1.fit([1,2,2,6]))
# print(le1.classes_)
# print(le1.transform([1,1,2,6]))
# print(le1.fit_transform([1,1,2,6]))
# print(le1.inverse_transform([0,0,1,2]))
# y
label_mapping = {
'unacc':0,
'acc':1,
'good':2,
'vgood':3
}
y['class'] = y['class'].map(label_mapping)
y = np.array(y)
print(y)
print(X.shape, y.shape)
# Create model
knn = neighbors.KNeighborsClassifier(n_neighbors=25, weights='uniform')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X)
print(X_train)
knn.fit(X_train, y_train)
prediction = knn.predict(X_test)
print('predictions: ', prediction)
accuracy = metrics.accuracy_score(y_test, prediction)
print('accuracy: ', accuracy)
print(knn)
a = 1727
print('actual value: ', y[a])
print('predicted value: ', knn.predict(X)[a])
###Output
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=None, n_neighbors=25, p=2,
weights='uniform')
actual value: [3]
predicted value: 3
###Markdown
SVM Explained [43:54]
- [Support Vector Machine](https://scikit-learn.org/stable/modules/svm.html) SVM Example [51:11]
###Code
from sklearn import datasets
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import svm
from sklearn.metrics import accuracy_score
iris = datasets.load_iris()
# Split it in features and labels
X = iris.data
y = iris.target
# print(X.shape)
# print(y.shape)
# print(X, y)
# Hours of study vs good/bad grades
# 10 different students
# Train with 8 students
# Predict with the remaining 2
# Level of accuracy
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print(X_train.shape)
print(X_test.shape)
print(y_train.shape)
print(y_test.shape)
model = svm.SVC()
model.fit(X_train, y_train)
print(model)
predictions = model.predict(X_test)
accuracy = metrics.accuracy_score(y_test, predictions)
print('predictions: ', predictions)
print('actual: ', y_test)
print('accuracy: ', accuracy)
classes = ['Iris Setosa', 'Iris Versicolour', 'Iris Virginica']
for i in range(len(predictions)):
print(classes[predictions[i]])
###Output
(120, 4)
(30, 4)
(120,)
(30,)
SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
predictions: [2 1 1 0 0 0 2 0 1 2 0 2 1 0 1 1 0 1 2 0 0 2 2 2 0 0 2 0 0 0]
actual: [1 1 1 0 0 0 2 0 1 2 0 2 1 0 1 1 0 1 2 0 0 2 2 2 0 0 2 0 0 0]
accuracy: 0.9666666666666667
Iris Virginica
Iris Versicolour
Iris Versicolour
Iris Setosa
Iris Setosa
Iris Setosa
Iris Virginica
Iris Setosa
Iris Versicolour
Iris Virginica
Iris Setosa
Iris Virginica
Iris Versicolour
Iris Setosa
Iris Versicolour
Iris Versicolour
Iris Setosa
Iris Versicolour
Iris Virginica
Iris Setosa
Iris Setosa
Iris Virginica
Iris Virginica
Iris Virginica
Iris Setosa
Iris Setosa
Iris Virginica
Iris Setosa
Iris Setosa
Iris Setosa
###Markdown
[Linear Regression](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/)
###Code
from sklearn import datasets
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from matplotlib import pyplot as plt
boston = datasets.load_boston()
#featres / labels
X = boston.data
y = boston.target
print("X")
print(X)
print(X.shape)
print("y")
print(y)
print(y.shape)
#algorithm
l_reg = linear_model.LinearRegression()
plt.scatter(X.T[5], y)
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
#train
model = l_reg.fit(X_train, y_train)
predictions = model.predict(X_test)
print("predictions: ", predictions)
print("R^2: ", l_reg.score(X, y))
print("coeff: ", l_reg.coef_)
print("intercept: ", l_reg.intercept_)
###Output
X
[[6.3200e-03 1.8000e+01 2.3100e+00 ... 1.5300e+01 3.9690e+02 4.9800e+00]
[2.7310e-02 0.0000e+00 7.0700e+00 ... 1.7800e+01 3.9690e+02 9.1400e+00]
[2.7290e-02 0.0000e+00 7.0700e+00 ... 1.7800e+01 3.9283e+02 4.0300e+00]
...
[6.0760e-02 0.0000e+00 1.1930e+01 ... 2.1000e+01 3.9690e+02 5.6400e+00]
[1.0959e-01 0.0000e+00 1.1930e+01 ... 2.1000e+01 3.9345e+02 6.4800e+00]
[4.7410e-02 0.0000e+00 1.1930e+01 ... 2.1000e+01 3.9690e+02 7.8800e+00]]
(506, 13)
y
[24. 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 15. 18.9 21.7 20.4
18.2 19.9 23.1 17.5 20.2 18.2 13.6 19.6 15.2 14.5 15.6 13.9 16.6 14.8
18.4 21. 12.7 14.5 13.2 13.1 13.5 18.9 20. 21. 24.7 30.8 34.9 26.6
25.3 24.7 21.2 19.3 20. 16.6 14.4 19.4 19.7 20.5 25. 23.4 18.9 35.4
24.7 31.6 23.3 19.6 18.7 16. 22.2 25. 33. 23.5 19.4 22. 17.4 20.9
24.2 21.7 22.8 23.4 24.1 21.4 20. 20.8 21.2 20.3 28. 23.9 24.8 22.9
23.9 26.6 22.5 22.2 23.6 28.7 22.6 22. 22.9 25. 20.6 28.4 21.4 38.7
43.8 33.2 27.5 26.5 18.6 19.3 20.1 19.5 19.5 20.4 19.8 19.4 21.7 22.8
18.8 18.7 18.5 18.3 21.2 19.2 20.4 19.3 22. 20.3 20.5 17.3 18.8 21.4
15.7 16.2 18. 14.3 19.2 19.6 23. 18.4 15.6 18.1 17.4 17.1 13.3 17.8
14. 14.4 13.4 15.6 11.8 13.8 15.6 14.6 17.8 15.4 21.5 19.6 15.3 19.4
17. 15.6 13.1 41.3 24.3 23.3 27. 50. 50. 50. 22.7 25. 50. 23.8
23.8 22.3 17.4 19.1 23.1 23.6 22.6 29.4 23.2 24.6 29.9 37.2 39.8 36.2
37.9 32.5 26.4 29.6 50. 32. 29.8 34.9 37. 30.5 36.4 31.1 29.1 50.
33.3 30.3 34.6 34.9 32.9 24.1 42.3 48.5 50. 22.6 24.4 22.5 24.4 20.
21.7 19.3 22.4 28.1 23.7 25. 23.3 28.7 21.5 23. 26.7 21.7 27.5 30.1
44.8 50. 37.6 31.6 46.7 31.5 24.3 31.7 41.7 48.3 29. 24. 25.1 31.5
23.7 23.3 22. 20.1 22.2 23.7 17.6 18.5 24.3 20.5 24.5 26.2 24.4 24.8
29.6 42.8 21.9 20.9 44. 50. 36. 30.1 33.8 43.1 48.8 31. 36.5 22.8
30.7 50. 43.5 20.7 21.1 25.2 24.4 35.2 32.4 32. 33.2 33.1 29.1 35.1
45.4 35.4 46. 50. 32.2 22. 20.1 23.2 22.3 24.8 28.5 37.3 27.9 23.9
21.7 28.6 27.1 20.3 22.5 29. 24.8 22. 26.4 33.1 36.1 28.4 33.4 28.2
22.8 20.3 16.1 22.1 19.4 21.6 23.8 16.2 17.8 19.8 23.1 21. 23.8 23.1
20.4 18.5 25. 24.6 23. 22.2 19.3 22.6 19.8 17.1 19.4 22.2 20.7 21.1
19.5 18.5 20.6 19. 18.7 32.7 16.5 23.9 31.2 17.5 17.2 23.1 24.5 26.6
22.9 24.1 18.6 30.1 18.2 20.6 17.8 21.7 22.7 22.6 25. 19.9 20.8 16.8
21.9 27.5 21.9 23.1 50. 50. 50. 50. 50. 13.8 13.8 15. 13.9 13.3
13.1 10.2 10.4 10.9 11.3 12.3 8.8 7.2 10.5 7.4 10.2 11.5 15.1 23.2
9.7 13.8 12.7 13.1 12.5 8.5 5. 6.3 5.6 7.2 12.1 8.3 8.5 5.
11.9 27.9 17.2 27.5 15. 17.2 17.9 16.3 7. 7.2 7.5 10.4 8.8 8.4
16.7 14.2 20.8 13.4 11.7 8.3 10.2 10.9 11. 9.5 14.5 14.1 16.1 14.3
11.7 13.4 9.6 8.7 8.4 12.8 10.5 17.1 18.4 15.4 10.8 11.8 14.9 12.6
14.1 13. 13.4 15.2 16.1 17.8 14.9 14.1 12.7 13.5 14.9 20. 16.4 17.7
19.5 20.2 21.4 19.9 19. 19.1 19.1 20.1 19.9 19.6 23.2 29.8 13.8 13.3
16.7 12. 14.6 21.4 23. 23.7 25. 21.8 20.6 21.2 19.1 20.6 15.2 7.
8.1 13.6 20.1 21.8 24.5 23.1 19.7 18.3 21.2 17.5 16.8 22.4 20.6 23.9
22. 11.9]
(506,)
###Markdown
Logistic vs linear regression [1:07:49] Kmeans and the math beind it [1:23:12] KMeans Example [1:31:08]
###Code
from sklearn.datasets import load_breast_cancer
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import scale
import pandas as pd
bc = load_breast_cancer()
print(bc)
X = scale(bc.data)
print(X)
y = bc.target
print(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60)
model = KMeans(n_clusters=2, random_state=0)
model.fit(X_train)
labels = model.labels_
print('labels: ', labels)
predictions = model.predict(X_test)
print('predictions: ', predictions)
print('accuracy: ', accuracy_score(y_test, predictions))
print('actual: ', y_test)
print(pd.crosstab(y_train, labels))
from sklearn import metrics
# Commented out IPython magic to ensure Python compatibility.
def bench_k_means(estimator, name, data):
estimator.fit(data)
print('%-9s\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
% (name, estimator.inertia_,
metrics.homogeneity_score(y, estimator.labels_),
metrics.completeness_score(y, estimator.labels_),
metrics.v_measure_score(y, estimator.labels_),
metrics.adjusted_rand_score(y, estimator.labels_),
metrics.adjusted_mutual_info_score(y, estimator.labels_),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean')))
bench_k_means(model, "1", X)
print(pd.crosstab(y_train, labels))
###Output
1 11595 0.544 0.565 0.555 0.671 0.554 0.345
col_0 0 1
row_0
0 32 156
1 308 13
###Markdown
Neural Network [1:42:02] Overfitting and Underfitting [1:56:03]
- Perfect: low variance on both training and testing data
- Overfitting: Good for training data, but not good for testing data (due to too complicated, too much input data, high variance on testing data, and activation function)
- Underfitting: Good for neither training data nor testing data (due to too simple, too few input data and activation function) Cost/Loss Function and Gradient Descent [2:03:05] Backpropagation [2:18:16] CNN - Convolutional Neural Network [2:26:24] [Handwritten Digits Recognizer [2:31:46]](https://the-dl-academy.teachable.com/courses)
###Code
!pip install mnist
from PIL import Image
import numpy as np
import mnist
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.externals import joblib
from sklearn.metrics import confusion_matrix
from sklearn import svm
X_train = mnist.train_images()
y_train = mnist.train_labels()
X_test = mnist.test_images()
y_test = mnist.test_labels()
print('X_train', X_train)
print('X shape', X_train.shape)
print('y_train', y_train)
print('y shape', y_train.shape)
print(X_train[0])
X_train = X_train.reshape((-1, 28*28))
X_test = X_test.reshape((-1, 28*28))
print(X_train.shape)
X_train = np.array(X_train/256)
X_test = np.array(X_test/256)
print(X_train[0])
print(X_train.shape)
print(y_train.shape)
clf = MLPClassifier(solver='adam', activation='relu', hidden_layer_sizes=(64, 64))
print(clf)
clf.fit(X_train, y_train)
# acc = clf.score(X_test, y_test)
# print('accuracy: ', acc)
# filename = 'hd_recognition.sav'
# joblib.dump(filename)
predictions = clf.predict(X_test)
print('y_test.shape: ', y_test.shape)
print('predictions.shape: ', predictions.shape)
acc = confusion_matrix(y_test, predictions)
print('acc:\n', acc)
def accuracy(confusion_matrix):
diagonal_sum = confusion_matrix.trace()
sum_of_all_elements = confusion_matrix.sum()
return diagonal_sum / sum_of_all_elements
print(accuracy(acc))
zero = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
one = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
two = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
three = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
four = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 255, 0, 0, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
five = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 0, 0, 0, 0, 0, 0, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
zero = np.array(zero)/256
p0 = clf.predict([zero])
print(p0)
one = np.array(one)/256
p1 = clf.predict([one])
print(p1)
two = np.array(two)/256
p2 = clf.predict([two])
print(p2)
three = np.array(three)/256
p3 = clf.predict([three])
print(p3)
four = np.array(four)/256
p4 = clf.predict([four])
print(p4)
five = np.array(five)/256
p5 = clf.predict([five])
print(p5)
# from PIL import Image
# img = Image.open('https://github.com/davidzheng66/python3/blob/master/machinelearning/sklearn-freecodecamp/five1.png')
# data = list(img.getdata())
# for i in range(len(data)):
# data[i] = 255 - data[i]
# print(data)
###Output
_____no_output_____ |
Lab8/Lab08_01_Solution.ipynb | ###Markdown
Try this on full spam1.csv file and unigram matching Spam classification using Count Vectorization and TFIDf Vectorization
###Code
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import LabelEncoder
stopwords = ["a", "about", "above", "after", "again", "against", "all", "am",
"an", "and", "any", "are", "as", "at", "be", "because", "been", "before",
"being", "below", "between", "both", "but", "by", "could", "did", "do",
"does", "doing", "down", "during", "each", "few", "for", "from", "further",
"had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here",
"here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i",
"i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its",
"itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on",
"once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out",
"over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so",
"some", "such", "than", "that", "that's", "the", "their", "theirs", "them",
"themselves", "then", "there", "there's", "these", "they", "they'd",
"they'll", "they're", "they've", "this", "those", "through", "to", "too",
"under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're",
"we've", "were", "what", "what's", "when", "when's", "where", "where's",
"which", "while", "who", "who's", "whom", "why", "why's", "with", "would",
"you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself",
"yourselves"]
from sklearn import preprocessing
import numpy as np
datasets = pd.read_csv('/content/drive/MyDrive/ML/spam1.csv', encoding="ISO-8859-1")
datasets["v1"]=np.where(datasets["v1"]=='spam',1,0)
print("\nData :\n",datasets.head(20))
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(datasets["v2"], datasets["v1"],random_state=77)
X_test.shape
###Output
_____no_output_____
###Markdown
CountVectorizerPerform count vectorization on training data.
###Code
vectorizer = CountVectorizer(ngram_range = (1, 1),stop_words=stopwords)
vectorizer.fit(X_train)
X_train_vectorized = vectorizer.fit_transform(X_train)
print(X_train_vectorized.toarray())
from google.colab import drive
drive.mount('/content/drive')
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
model=MultinomialNB()
model.fit(X_train_vectorized,y_train)
from sklearn.metrics import accuracy_score, precision_score, recall_score
predictions = model.predict(vectorizer.transform(X_test))
print("Accuracy: ", accuracy_score(y_test, predictions))
print("\nPrecision: ", precision_score(y_test, predictions))
print("\nRecall: ", recall_score(y_test, predictions))
###Output
Accuracy: 0.9844961240310077
Precision: 1.0
Recall: 0.9574468085106383
###Markdown
Apply the Decision Tree Classifier on vectorized data.
###Code
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(X_train_vectorized, y_train)
from sklearn.metrics import accuracy_score, precision_score, recall_score
predictions = clf.predict(vectorizer.transform(X_test))
print("Accuracy: ", accuracy_score(y_test, predictions))
print("\nPrecision: ", precision_score(y_test, predictions))
print("\nRecall: ", recall_score(y_test, predictions))
###Output
Accuracy: 0.9534883720930233
Precision: 1.0
Recall: 0.8723404255319149
###Markdown
TFIDF VectorizerPerform IFIDF vectorization on training data.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(ngram_range = (1, 1),stop_words = stopwords)
vectorizer.fit(X_train)
X_train_tfidf_vectorized = vectorizer.transform(X_train)
print(X_train_tfidf_vectorized.toarray())
###Output
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Apply the Naive Bayes on IFIDF vectorized data.
###Code
model = MultinomialNB()
model.fit(X_train_tfidf_vectorized, y_train)
predictions = model.predict(vectorizer.transform(X_test))
print("Accuracy: ", accuracy_score(y_test, predictions))
print("\nPrecision: ", precision_score(y_test, predictions))
print("\nRecall: ", recall_score(y_test, predictions))
###Output
Accuracy: 0.9457364341085271
Precision: 1.0
Recall: 0.851063829787234
###Markdown
Apply the Decision Tree Classifier on IFIDF vectorized data.
###Code
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(X_train_tfidf_vectorized, y_train)
predictions = clf.predict(vectorizer.transform(X_test))
print("Accuracy: ", accuracy_score(y_test, predictions))
print("\nPrecision: ", precision_score(y_test, predictions))
print("\nRecall: ", recall_score(y_test, predictions))
###Output
Accuracy: 0.9147286821705426
Precision: 0.9285714285714286
Recall: 0.8297872340425532
|
notebooks/Adjusted_Rand_Index.ipynb | ###Markdown
Understanding the Rand indexSee [Wikipedia](https://en.wikipedia.org/wiki/Rand_index) for a simple explanationThe ARi will give a score from close to zero for random assignments to 1 for a perfect assignment, regardless of the number of clusters used.
###Code
import numpy as np
import pandas as pd
y_true = np.random.randint(0, 3, 100)
y_pred_1 = np.random.randint(0, 3, 100)
y_pred_2 = y_true
y_pred_3 = np.where(np.random.rand(100) < 0.8, y_true, y_pred_1)
y = pd.DataFrame(np.c_[y_true, y_pred_1, y_pred_2, y_pred_3],
columns='Y Y1 Y2 Y3'.split())
y
from sklearn.metrics import adjusted_rand_score
for pred in ['Y1', 'Y2', 'Y3']:
print(f'ARI(Y, {pred}) = {adjusted_rand_score(y.Y, y[pred]):.3}')
###Output
_____no_output_____ |
Package_Demos/Image_Manipulation.ipynb | ###Markdown
PIL from PillowPIL stands for Python Imaging Library
###Code
## Import Module
from PIL import Image
## Execption Hanlding of file import
try:
img = Image.open("data/barr.png")
except IOError:
pass
###Output
_____no_output_____
###Markdown
Retrieve the Image Size
###Code
width, height = img.size
print(width, height)
###Output
(720, 540)
###Markdown
Rotating an Image
###Code
## Rotate the Image
img_rotated = img.rotate(180)
## Save the rotated Image
img_rotated.save("data/rotated_barr.png")
###Output
_____no_output_____
###Markdown
Transposing an Image
###Code
## Transpose
transposed_img = img.transpose(Image.FLIP_LEFT_RIGHT)
## Save Transposed Image
transposed_img.save("data/transposed_barr.png")
###Output
_____no_output_____ |
notebooks/Python-in-2-days/D1_L2_IPython/.ipynb_checkpoints/02-Help-And-Documentation-checkpoint.ipynb | ###Markdown
Help and Documentation in IPython If you read no other notebook in this section, read this one: I find the tools discussed here to be the most transformative contributions of IPython to my daily workflow.When a technologically-minded person is asked to help a friend, family member, or colleague with a computer problem, most of the time it's less a matter of knowing the answer as much as knowing how to quickly find an unknown answer.In data science it's the same: searchable web resources such as online documentation, mailing-list threads, and StackOverflow answers contain a wealth of information, even (especially?) if it is a topic you've found yourself searching before.Being an effective practitioner of data science is less about memorizing the tool or command you should use for every possible situation, and more about learning to effectively find the information you don't know, whether through a web search engine or another means.One of the most useful functions of IPython/Jupyter is to shorten the gap between the user and the type of documentation and search that will help them do their work effectively.While web searches still play a role in answering complicated questions, an amazing amount of information can be found through IPython alone.Some examples of the questions IPython can help answer in a few keystrokes:- How do I call this function? What arguments and options does it have?- What does the source code of this Python object look like?- What is in this package I imported? What attributes or methods does this object have?Here we'll discuss IPython's tools to quickly access this information, namely the ``?`` character to explore documentation, the ``??`` characters to explore source code, and the Tab key for auto-completion. Accessing Documentation with ``?``The Python language and its data science ecosystem is built with the user in mind, and one big part of that is access to documentation.Every Python object contains the reference to a string, known as a *doc string*, which in most cases will contain a concise summary of the object and how to use it.Python has a built-in ``help()`` function that can access this information and prints the results.For example, to see the documentation of the built-in ``len`` function, you can do the following:```ipythonIn [1]: help(len)Help on built-in function len in module builtins:len(...) len(object) -> integer Return the number of items of a sequence or mapping.```Depending on your interpreter, this information may be displayed as inline text, or in some separate pop-up window.
###Code
help(len)
###Output
Help on built-in function len in module builtins:
len(obj, /)
Return the number of items in a container.
###Markdown
Because finding help on an object is so common and useful, IPython introduces the ``?`` character as a shorthand for accessing this documentation and other relevant information:```ipythonIn [2]: len?Type: builtin_function_or_methodString form: Namespace: Python builtinDocstring:len(object) -> integerReturn the number of items of a sequence or mapping.```
###Code
L = [1,2,3]
L?
###Output
_____no_output_____
###Markdown
This notation works for just about anything, including object methods:```ipythonIn [3]: L = [1, 2, 3]In [4]: L.insert?Type: builtin_function_or_methodString form: Docstring: L.insert(index, object) -- insert object before index```or even objects themselves, with the documentation from their type:```ipythonIn [5]: L?Type: listString form: [1, 2, 3]Length: 3Docstring:list() -> new empty listlist(iterable) -> new list initialized from iterable's items``` Importantly, this will even work for functions or other objects you create yourself!Here we'll define a small function with a docstring:```ipythonIn [6]: def square(a): ....: """Return the square of a.""" ....: return a ** 2 ....:```Note that to create a docstring for our function, we simply placed a string literal in the first line.Because doc strings are usually multiple lines, by convention we used Python's triple-quote notation for multi-line strings. Now we'll use the ``?`` mark to find this doc string:```ipythonIn [7]: square?Type: functionString form: Definition: square(a)Docstring: Return the square of a.```This quick access to documentation via docstrings is one reason you should get in the habit of always adding such inline documentation to the code you write! Accessing Source Code with ``??``Because the Python language is so easily readable, another level of insight can usually be gained by reading the source code of the object you're curious about.IPython provides a shortcut to the source code with the double question mark (``??``):```ipythonIn [8]: square??Type: functionString form: Definition: square(a)Source:def square(a): "Return the square of a" return a ** 2```For simple functions like this, the double question-mark can give quick insight into the under-the-hood details.
###Code
def square(a):
"""Return square of number."""
return a**2
square??
###Output
_____no_output_____
###Markdown
If you play with this much, you'll notice that sometimes the ``??`` suffix doesn't display any source code: this is generally because the object in question is not implemented in Python, but in C or some other compiled extension language.If this is the case, the ``??`` suffix gives the same output as the ``?`` suffix.You'll find this particularly with many of Python's built-in objects and types, for example ``len`` from above:```ipythonIn [9]: len??Type: builtin_function_or_methodString form: Namespace: Python builtinDocstring:len(object) -> integerReturn the number of items of a sequence or mapping.```Using ``?`` and/or ``??`` gives a powerful and quick interface for finding information about what any Python function or module does. Exploring Modules with Tab-CompletionIPython's other useful interface is the use of the tab key for auto-completion and exploration of the contents of objects, modules, and name-spaces.In the examples that follow, we'll use ```` to indicate when the Tab key should be pressed. Tab-completion of object contentsEvery Python object has various attributes and methods associated with it.Like with the ``help`` function discussed before, Python has a built-in ``dir`` function that returns a list of these, but the tab-completion interface is much easier to use in practice.To see a list of all available attributes of an object, you can type the name of the object followed by a period ("``.``") character and the Tab key:```ipythonIn [10]: L.L.append L.copy L.extend L.insert L.remove L.sort L.clear L.count L.index L.pop L.reverse ```To narrow-down the list, you can type the first character or several characters of the name, and the Tab key will find the matching attributes and methods:```ipythonIn [10]: L.cL.clear L.copy L.count In [10]: L.coL.copy L.count ```If there is only a single option, pressing the Tab key will complete the line for you.For example, the following will instantly be replaced with ``L.count``:```ipythonIn [10]: L.cou```Though Python has no strictly-enforced distinction between public/external attributes and private/internal attributes, by convention a preceding underscore is used to denote such methods.For clarity, these private methods and special methods are omitted from the list by default, but it's possible to list them by explicitly typing the underscore:```ipythonIn [10]: L._L.__add__ L.__gt__ L.__reduce__L.__class__ L.__hash__ L.__reduce_ex__```For brevity, we've only shown the first couple lines of the output.Most of these are Python's special double-underscore methods (often nicknamed "dunder" methods). Tab completion when importingTab completion is also useful when importing objects from packages.Here we'll use it to find all possible imports in the ``itertools`` package that start with ``co``:```In [10]: from itertools import cocombinations compresscombinations_with_replacement count```Similarly, you can use tab-completion to see which imports are available on your system (this will change depending on which third-party scripts and modules are visible to your Python session):```In [10]: import Display all 399 possibilities? (y or n)Crypto dis py_compileCython distutils pyclbr... ... ...difflib pwd zmqIn [10]: import hhashlib hmac http heapq html husl ```(Note that for brevity, I did not print here all 399 importable packages and modules on my system.) Beyond tab completion: wildcard matchingTab completion is useful if you know the first few characters of the object or attribute you're looking for, but is little help if you'd like to match characters at the middle or end of the word.For this use-case, IPython provides a means of wildcard matching for names using the ``*`` character.For example, we can use this to list every object in the namespace that ends with ``Warning``:```ipythonIn [10]: *Warning?BytesWarning RuntimeWarningDeprecationWarning SyntaxWarningFutureWarning UnicodeWarningImportWarning UserWarningPendingDeprecationWarning WarningResourceWarning```Notice that the ``*`` character matches any string, including the empty string.Similarly, suppose we are looking for a string method that contains the word ``find`` somewhere in its name.We can search for it this way:```ipythonIn [10]: str.*find*?str.findstr.rfind```I find this type of flexible wildcard search can be very useful for finding a particular command when getting to know a new package or reacquainting myself with a familiar one.
###Code
*Warning?
###Output
_____no_output_____ |
_notebooks/2021-02-24-reproducibility.ipynb | ###Markdown
Reproducibility in Deep Learning> do you want to check your ideas in DL? you need Reproducibility (PyTorch, TF2.X)- toc: true- branch: master- badges: true,- comments: true- image: images/reproducibility.jpg- author: Sajjad Ayoubi- categories: [tips] Reproducibility ?!- deep learning training processes are stochastic in nature,During development of a model, sometimes it is useful to be able to obtain reproducible results from run to run in order to determine if a change in performance is due to an actual model or data modification, also for comparing different things and evaluate new tricks and ideaswe need to train our neural nets in a deterministic way- In the process of training a neural network, there are multiple stages where randomness is used, for example - random initialization of weights of the network before the training starts. - regularization, dropout, which involves randomly dropping nodes in the network while training. - optimization process like SGD or Adam also include random initializations.- we will see that how can we use Frameworks in a deterministic way- note in deterministic training you are a bit slow than stochastic PyTorch- Mnist classification with Reproducibility> from PyTorch Team: Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds, also Deterministic operations are often slower than nondeterministic operations - the following works with all models (maybe not LSTMs I didn’t check that)
###Code
import numpy as np
import random, os
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
###Output
_____no_output_____
###Markdown
- create dataloder
###Code
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0), (255))])
train_ds = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
# if you set augmentations set worker_init_fn=(random.seed(0)) and num_workers=0 in dataloder
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)
###Output
_____no_output_____
###Markdown
- the following works with all models
###Code
def torch_seed(seed=0):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.cuda.manual_seed_all(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
def train(reproducibility=True, n_run=2, device='cuda'):
for n in range(n_run):
print('run number: ', n+1)
# set seed before create your model
if reproducibility:
torch_seed(seed=0)
# compile model
model = nn.Sequential(nn.Flatten(), nn.Linear(28*28, 128), nn.GELU(), nn.Linear(128, 10)).to(device)
loss_fn = nn.CrossEntropyLoss().to(device)
optimizer = optim.AdamW(model.parameters(), lr=0.005, weight_decay=0.0)
# training loop
loss_avg = 0.0
for i, data in enumerate(train_dl):
inputs, labels = data
optimizer.zero_grad()
outputs = model(inputs.to(device))
loss = loss_fn(outputs, labels.to(device))
loss_avg = (loss_avg * i + loss) / (i+1)
loss.backward()
optimizer.step()
if i%850==0:
print('[%d, %4d] loss: %.4f' %(i+1, len(train_dl), loss_avg))
train(reproducibility=False)
train(reproducibility=True)
###Output
run number: 1
[1, 1875] loss: 2.2983
[851, 1875] loss: 0.8051
[1701, 1875] loss: 0.5927
run number: 2
[1, 1875] loss: 2.2983
[851, 1875] loss: 0.8051
[1701, 1875] loss: 0.5927
###Markdown
- if you check your new ideas like me- you have to always see how much is overhead of your implementation- in pytorch for giving acutual time we use `synchronize`
###Code
%%timeit
# stay in GPUs until it done
torch.cuda.synchronize()
###Output
_____no_output_____
###Markdown
Keras & TF 2.X- Mnist classification with Reproducibility> from Keras Team: when running on a GPU, some operations have non-deterministic outputs, in particular tf.reduce_sum(). This is due to the fact that GPUs run many operations in parallel, so the order of execution is not always guaranteed. Due to the limited precision of floats, even adding several numbers together may give slightly different results depending on the order in which you add them. You can try to avoid the non-deterministic operations, but some may be created automatically by TensorFlow to compute the gradients, so it is much simpler to just run the code on the CPU. For this, you can set the CUDA_VISIBLE_DEVICES environment variable to an empty string - they said Keras REPRODUCIBILITY works just on CPUs- but we need GPUs- after a week seach I found a possible way on GPUs - based on this work [TensorFlow Determinism](https://github.com/NVIDIA/framework-determinism) from `NVIDIA` - now we can run Keras with REPRODUCIBILITY on GPUs :)- Note: it works just for `TF >= 2.3` - also it works fine with `tf.data` - but you have to watch out (especially prefetch) - let's check this out
###Code
import random, os
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def tf_seed(seed=0):
os.environ['PYTHONHASHSEED'] = str(seed)
# For working on GPUs from "TensorFlow Determinism"
os.environ["TF_DETERMINISTIC_OPS"] = str(seed)
np.random.seed(seed)
random.seed(seed)
tf.random.set_seed(seed)
def train(reproducibility=True, n_run=2):
for n in range(n_run):
print('run number: ', n+1)
# set seed before create your model
if reproducibility:
tf_seed(seed=0)
# compile model
model = tf.keras.models.Sequential([Flatten(input_shape=(28, 28)), Dense(128, activation='gelu'), Dense(10)])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam', loss=loss_fn)
# training
model.fit(x_train, y_train, epochs=1)
train(reproducibility=False)
train(reproducibility=True)
###Output
run number: 1
1875/1875 [==============================] - 3s 1ms/step - loss: 0.4124
run number: 2
1875/1875 [==============================] - 3s 1ms/step - loss: 0.4124
###Markdown
- if you want run it on CPUs see this
###Code
def tf_seed(seed=0):
os.environ['PYTHONHASHSEED'] = str(seed)
# if your machine has GPUs use following to off it
os.environ['CUDA_VISBLE_DEVICE'] = ''
np.random.seed(seed)
random.seed(seed)
python_random.seed(seed)
tf.random.set_seed(seed)
###Output
_____no_output_____ |
02-NumPy/Numpy Exercises.ipynb | ###Markdown
___ ___*Copyright Pierian Data 2017**For more information, visit us at www.pieriandata.com* NumPy ExercisesNow that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.** IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output! ** Import NumPy as np
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create an array of 10 zeros
###Code
# CODE HERE
np.zeros(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 ones
###Code
# CODE HERE
np.ones(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 fives
###Code
# CODE HERE
np.zeros(10) + 5
###Output
_____no_output_____
###Markdown
Create an array of the integers from 10 to 50
###Code
# CODE HERE
np.arange(10,51)
###Output
_____no_output_____
###Markdown
Create an array of all the even integers from 10 to 50
###Code
# CODE HERE
np.arange(10,51,2)
###Output
_____no_output_____
###Markdown
Create a 3x3 matrix with values ranging from 0 to 8
###Code
# CODE HERE
np.arange(9).reshape(3,3)
###Output
_____no_output_____
###Markdown
Create a 3x3 identity matrix
###Code
# CODE HERE
np.eye(3)
###Output
_____no_output_____
###Markdown
Use NumPy to generate a random number between 0 and 1
###Code
# CODE HERE
np.random.rand(1)
###Output
_____no_output_____
###Markdown
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
###Code
# CODE HERE
np.random.randn(25)
###Output
_____no_output_____
###Markdown
Create the following matrix:
###Code
np.linspace(0, 1, 101)[1:].reshape(10,10)
###Output
_____no_output_____
###Markdown
Create an array of 20 linearly spaced points between 0 and 1:
###Code
np.linspace(0, 1, 20)
###Output
_____no_output_____
###Markdown
Numpy Indexing and SelectionNow you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
###Code
# HERE IS THE GIVEN MATRIX CALLED MAT
# USE IT FOR THE FOLLOWING TASKS
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:, 1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3,4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[0:3, 1].reshape(3,1)
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3:]
###Output
_____no_output_____
###Markdown
Now do the following Get the sum of all the values in mat
###Code
# CODE HERE
sum(sum(mat))
###Output
_____no_output_____
###Markdown
Get the standard deviation of the values in mat
###Code
# CODE HERE
np.std(mat)
###Output
_____no_output_____
###Markdown
Get the sum of all the columns in mat
###Code
# CODE HERE
sum(mat)
###Output
_____no_output_____
###Markdown
Bonus QuestionWe worked a lot with random data with numpy, but is there a way we can insure that we always get the same random numbers? [Click Here for a Hint](https://www.google.com/search?q=numpy+random+seed&rlz=1C1CHBF_enUS747US747&oq=numpy+random+seed&aqs=chrome..69i57j69i60j0l4.2087j0j7&sourceid=chrome&ie=UTF-8)
###Code
np.random.seed(3)
np.random.random()
###Output
_____no_output_____
###Markdown
NumPy ExercisesNow that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.** IMPORTANT NOTE! Make sure you don't run the cells directly above the example output shown, otherwise you will end up writing over the example output! ** Import NumPy as np
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create an array of 10 zeros
###Code
np.zeros(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 ones
###Code
np.ones(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 fives
###Code
np.ones(10) * 5
###Output
_____no_output_____
###Markdown
Create an array of the integers from 10 to 50
###Code
np.arange(10,51)
###Output
_____no_output_____
###Markdown
Create an array of all the even integers from 10 to 50
###Code
np.arange(10,51,2)
###Output
_____no_output_____
###Markdown
Create a 3x3 matrix with values ranging from 0 to 8
###Code
np.arange(9).reshape(3,3)
###Output
_____no_output_____
###Markdown
Create a 3x3 identity matrix
###Code
np.eye(3)
###Output
_____no_output_____
###Markdown
Use NumPy to generate a random number between 0 and 1
###Code
np.random.rand(1)
###Output
_____no_output_____
###Markdown
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
###Code
np.random.randn(25)
###Output
_____no_output_____
###Markdown
Create the following matrix:
###Code
np.linspace(0.01,1,100).reshape(10,10)
np.arange(1,101).reshape(10,10)/100
###Output
_____no_output_____
###Markdown
Create an array of 20 linearly spaced points between 0 and 1:
###Code
np.linspace(0,1,20)
###Output
_____no_output_____
###Markdown
Numpy Indexing and SelectionNow you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
###Code
# HERE IS THE GIVEN MATRIX CALLED MAT
# USE IT FOR THE FOLLOWING TASKS
mat = np.arange(1,26).reshape(5,5)
mat
mat[2:,1:]
mat[3,4]
mat[:3,1].reshape(3,1)
mat[:3,1:2] # per non dover fare reshape
mat[4]
mat[3:5]
###Output
_____no_output_____
###Markdown
Now do the following Get the sum of all the values in mat
###Code
mat.sum()
###Output
_____no_output_____
###Markdown
Get the standard deviation of the values in mat
###Code
mat.std()
###Output
_____no_output_____
###Markdown
Get the sum of all the columns in mat
###Code
# 0=row ma somma per colonna in quanto per sommare sulle colonne si guarda riga per riga
mat.sum(axis=0)
###Output
_____no_output_____
###Markdown
Bonus QuestionWe worked a lot with random data with numpy, but is there a way we can insure that we always get the same random numbers? [Click Here for a Hint](https://www.google.com/search?q=numpy+random+seed&rlz=1C1CHBF_enUS747US747&oq=numpy+random+seed&aqs=chrome..69i57j69i60j0l4.2087j0j7&sourceid=chrome&ie=UTF-8)
###Code
# per avere sempre gli stessi rand seed deve essere richiamato insieme e rand (nella stessa cella)
np.random.seed(0)
np.random.rand(4)
###Output
_____no_output_____ |
environment/environment.ipynb | ###Markdown
Copyright 2021 The Google Research Authors.Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License.
###Code
"""Ecosystem gym environment."""
import collections
import itertools
from absl import flags
from absl import logging
import numpy as np
from recsim import choice_model
from recsim import document
from recsim.simulator import environment
from recsim.simulator import recsim_gym
from recs_ecosystem_creator_rl.environment import creator
from recs_ecosystem_creator_rl.environment import user
FLAGS = flags.FLAGS
NUM_USERS = 10
NUM_CREATORS = 20
ENV_CONFIG = {
# Hyperparameters for environment.
'resample_documents': True,
'topic_dim': 10,
'choice_features': dict(),
'sampling_space': 'unit ball',
'num_candidates': 5,
# Hyperparameters for users.
'user_quality_sensitivity': [0.5] * NUM_USERS,
'user_topic_influence': [0.2] * NUM_USERS,
'observation_noise_std': [0.1] * NUM_USERS,
'user_initial_satisfaction': [10] * NUM_USERS,
'user_satisfaction_decay': [1.0] * NUM_USERS,
'user_viability_threshold': [0] * NUM_USERS,
'user_model_seed': list(range(NUM_USERS)),
'num_users': NUM_USERS,
'slate_size': 2,
# Hyperparameters for creators and documents.
'num_creators': NUM_CREATORS,
'creator_initial_satisfaction': [10] * NUM_CREATORS,
'creator_viability_threshold': [0] * NUM_CREATORS,
'creator_no_recommendation_penalty': [1] * NUM_CREATORS,
'creator_new_document_margin': [0.5] * NUM_CREATORS,
'creator_recommendation_reward': [1] * NUM_CREATORS,
'creator_user_click_reward': [1.0] * NUM_CREATORS,
'creator_satisfaction_decay': [1.0] * NUM_CREATORS,
'doc_quality_std': [0.1] * NUM_CREATORS,
'doc_quality_mean_bound': [0.8] * NUM_CREATORS,
'creator_initial_num_docs': [2] * NUM_CREATORS,
'creator_topic_influence': [0.1] * NUM_CREATORS,
'creator_is_saturation': [True] * NUM_CREATORS,
'copy_varied_property': None
}
class EcosystemEnvironment(environment.MultiUserEnvironment):
"""Class to represent an ecosystem environment with multiple users and multiple creators.
Attributes:
_document_sampler: A sampler to sample documents.
num_users: Number of viable users on the platform.
num_creator: Number of viable creators on the platform.
_slate_size: Number of recommended documents in a slate for a given user.
user_model: A list of UserModel objects representing each viable user on the
platform.
_candidate_set: A dictionary of current document candidates provided to the
agent to generate recommendation slate for a given user. Key=document.id,
value=document object.
_current_documents: Generated from _candidate_set. An ordered dictionary
with key=document.id, value=document.observable_features.
"""
def __init__(self, *args, **kwargs):
super(EcosystemEnvironment, self).__init__(*args, **kwargs)
if not isinstance(self._document_sampler, creator.DocumentSampler):
raise TypeError('The document sampler must have type DocumentSampler.')
logging.info('Multi user environment created.')
def _do_resample_documents(self):
"""Resample documents without replacement."""
if self._num_candidates > self.num_documents:
raise ValueError(
f'Cannot sample {self._num_candidates} from {self.num_documents} documents.'
)
self._candidate_set = document.CandidateSet()
for doc in self._document_sampler.sample_document(
size=self._num_candidates):
self._candidate_set.add_document(doc)
def reset(self):
"""Resets the environment and return the first observation.
Returns:
user_obs: An array of floats representing observations of the user's
current state.
doc_obs: An OrderedDict of document observations keyed by document ids.
"""
self._document_sampler.reset_creator()
self.user_terminates = dict()
user_obs = dict()
for user_model in self.user_model:
user_model.reset()
self.user_terminates[user_model.get_user_id()] = False
user_obs[user_model.get_user_id()] = user_model.create_observation()
if self._resample_documents:
self._do_resample_documents()
self._current_documents = collections.OrderedDict(
self._candidate_set.create_observation())
creator_obs = dict()
for creator_id, creator_model in self._document_sampler.viable_creators.items(
):
creator_obs[creator_id] = creator_model.create_observation()
return user_obs, creator_obs, self._current_documents
@property
def num_users(self):
return len(self.user_model) - np.sum(list(self.user_terminates.values()))
@property
def num_creators(self):
return self._document_sampler.num_viable_creators
@property
def num_documents(self):
return self._document_sampler.num_documents
@property
def topic_documents(self):
return self._document_sampler.topic_documents
def step(self, slates):
"""Executes the action, returns next state observation and reward.
Args:
slates: A list of slates, where each slate is an integer array of size
slate_size, where each element is an index into the set of
current_documents presented.
Returns:
user_obs: A list of gym observation representing all users' next state.
doc_obs: A list of observations of the documents.
responses: A list of AbstractResponse objects for each item in the slate.
done: A boolean indicating whether the episode has terminated. An episode
is terminated whenever there is no user or creator left.
"""
assert (len(slates) == self.num_users
), 'Received unexpected number of slates: expecting %s, got %s' % (
self._slate_size, len(slates))
for user_id in slates:
assert (len(slates[user_id]) <= self._slate_size
), 'Slate for user %s is too large : expecting size %s, got %s' % (
user_id, self._slate_size, len(slates[user_id]))
all_documents = dict() # Accumulate documents served to each user.
all_responses = dict(
) # Accumulate each user's responses to served documents.
for user_model in self.user_model:
if not user_model.is_terminal():
user_id = user_model.get_user_id()
# Get the documents associated with the slate.
doc_ids = list(self._current_documents) # pytype: disable=attribute-error
mapped_slate = [doc_ids[x] for x in slates[user_id]]
documents = self._candidate_set.get_documents(mapped_slate)
# Acquire user response and update user states.
responses = user_model.update_state(documents)
all_documents[user_id] = documents
all_responses[user_id] = responses
def flatten(list_):
return list(itertools.chain(*list_))
# Update the creators' state.
creator_response = self._document_sampler.update_state(
flatten(list(all_documents.values())),
flatten(list(all_responses.values())))
# Obtain next user state observation.
self.user_terminates = {
u_model.get_user_id(): u_model.is_terminal()
for u_model in self.user_model
}
all_user_obs = dict()
for user_model in self.user_model:
if not user_model.is_terminal():
all_user_obs[user_model.get_user_id()] = user_model.create_observation()
# Obtain next creator state observation.
all_creator_obs = dict()
for creator_id, creator_model in self._document_sampler.viable_creators.items(
):
all_creator_obs[creator_id] = creator_model.create_observation()
# Check if reaches a terminal state and return.
# Terminal if there is no user or creator on the platform.
done = self.num_users <= 0 or self.num_creators <= 0
# Optionally, recreate the candidate set to simulate candidate
# generators for the next query.
if self.num_creators > 0:
if self._resample_documents:
# Resample the candidate set from document corpus for the next time
# step recommendation.
# Candidate set is provided to the agent to select documents that will
# be recommended to the user.
self._do_resample_documents()
# Create observation of candidate set.
self._current_documents = collections.OrderedDict(
self._candidate_set.create_observation())
else:
self._current_documents = collections.OrderedDict()
return (all_user_obs, all_creator_obs, self._current_documents,
all_responses, self.user_terminates, creator_response, done)
def aggregate_multi_user_reward(responses):
def _generate_single_user_reward(responses):
for response in responses:
if response['click']:
return response['reward']
return -1
return np.sum([
_generate_single_user_reward(response)
for _, response in responses.items()
])
def _assert_consistent_env_configs(env_config):
"""Raises ValueError if the env_config values are not consistent."""
# User hparams should have the length num_users.
if len(env_config['user_initial_satisfaction']) != env_config['num_users']:
raise ValueError(
'Length of `user_initial_satisfaction` should be the same as number of users.'
)
if len(env_config['user_satisfaction_decay']) != env_config['num_users']:
raise ValueError(
'Length of `user_satisfaction_decay` should be the same as number of users.'
)
if len(env_config['user_viability_threshold']) != env_config['num_users']:
raise ValueError(
'Length of `user_viability_threshold` should be the same as number of users.'
)
if len(env_config['user_quality_sensitivity']) != env_config['num_users']:
raise ValueError(
'Length of `user_quality_sensitivity` should be the same as number of users.'
)
if len(env_config['user_topic_influence']) != env_config['num_users']:
raise ValueError(
'Length of `user_topic_influence` should be the same as number of users.'
)
if len(env_config['observation_noise_std']) != env_config['num_users']:
raise ValueError(
'Length of `observation_noise_std` should be the same as number of users.'
)
if len(env_config['user_model_seed']) != env_config['num_users']:
raise ValueError(
'Length of `user_model_seed` should be the same as number of users.')
# Creator hparams should have the length num_creators.
if len(
env_config['creator_initial_satisfaction']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_initial_satisfaction` should be the same as number of creators.'
)
if len(
env_config['creator_viability_threshold']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_viability_threshold` should be the same as number of creators.'
)
if len(
env_config['creator_new_document_margin']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_new_document_margin` should be the same as number of creators.'
)
if len(env_config['creator_no_recommendation_penalty']
) != env_config['num_creators']:
raise ValueError(
'Length of `creator_no_recommendation_penalty` should be the same as number of creators.'
)
if len(env_config['creator_recommendation_reward']
) != env_config['num_creators']:
raise ValueError(
'Length of `creator_recommendation_reward` should be the same as number of creators.'
)
if len(env_config['creator_user_click_reward']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_user_click_reward` should be the same as number of creators.'
)
if len(
env_config['creator_satisfaction_decay']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_satisfaction_decay` should be the same as number of creators.'
)
if len(env_config['doc_quality_std']) != env_config['num_creators']:
raise ValueError(
'Length of `doc_quality_std` should be the same as number of creators.')
if len(env_config['doc_quality_mean_bound']) != env_config['num_creators']:
raise ValueError(
'Length of `doc_quality_mean_bound` should be the same as number of creators.'
)
if len(env_config['creator_initial_num_docs']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_initial_num_docs` should be the same as number of creators.'
)
if len(env_config['creator_is_saturation']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_is_saturation` should be the same as number of creators.'
)
if len(env_config['creator_topic_influence']) != env_config['num_creators']:
raise ValueError(
'Length of `creator_topic_influence` should be the same as number of creators.'
)
class EcosystemGymEnv(recsim_gym.RecSimGymEnv):
"""Class to wrap recommender ecosystem to gym.Env."""
def reset(self):
user_obs, creator_obs, doc_obs = self._environment.reset()
return dict(
user=user_obs,
creator=creator_obs,
doc=doc_obs,
total_doc_number=self.num_documents)
def step(self, action):
(user_obs, creator_obs, doc_obs, user_response, user_terminate,
creator_response, done) = self._environment.step(action)
obs = dict(
user=user_obs,
creator=creator_obs,
doc=doc_obs,
user_terminate=user_terminate,
total_doc_number=self.num_documents,
user_response=user_response,
creator_response=creator_response)
# Extract rewards from responses.
reward = self._reward_aggregator(user_response)
info = self.extract_env_info()
#logging.info(
# 'Environment steps with aggregated %f reward. There are %d viable users, %d viable creators and %d viable documents on the platform.',
# reward, self.num_users, self.num_creators, self.num_documents)
#print(self.topic_documents)
return obs, reward, done, info
@property
def num_creators(self):
return self._environment.num_creators
@property
def num_users(self):
return self._environment.num_users
@property
def num_documents(self):
return self._environment.num_documents
@property
def topic_documents(self):
return self._environment.topic_documents
def create_gym_environment(env_config):
"""Return a RecSimGymEnv."""
_assert_consistent_env_configs(env_config)
# Build a user model.
def _choice_model_ctor():
# env_config['choice_feature'] decides the mass of no-click option when the
# user chooses one document from the recommended slate. It is a dictionary
# with key='no_click_mass' and value=logit of no-click. If empty,
# the MultinomialLogitChoice Model sets the logit of no-click to be -inf,
# and thus the user has to click one document from the recommended slate.
return choice_model.MultinomialLogitChoiceModel(
env_config['choice_features'])
user_model = []
for user_id in range(env_config['num_users']):
user_sampler = user.UserSampler(
user_id=user_id,
user_ctor=user.UserState,
quality_sensitivity=env_config['user_quality_sensitivity'][user_id],
topic_influence=env_config['user_topic_influence'][user_id],
topic_dim=env_config['topic_dim'],
observation_noise_std=env_config['observation_noise_std'][user_id],
initial_satisfaction=env_config['user_initial_satisfaction'][user_id],
satisfaction_decay=env_config['user_satisfaction_decay'][user_id],
viability_threshold=env_config['user_viability_threshold'][user_id],
sampling_space=env_config['sampling_space'],
seed=env_config['user_model_seed'][user_id],
)
user_model.append(
user.UserModel(
slate_size=env_config['slate_size'],
user_sampler=user_sampler,
response_model_ctor=user.ResponseModel,
choice_model_ctor=_choice_model_ctor,
))
# Build a document sampler.
document_sampler = creator.DocumentSampler(
doc_ctor=creator.Document,
creator_ctor=creator.Creator,
topic_dim=env_config['topic_dim'],
num_creators=env_config['num_creators'],
initial_satisfaction=env_config['creator_initial_satisfaction'],
viability_threshold=env_config['creator_viability_threshold'],
new_document_margin=env_config['creator_new_document_margin'],
no_recommendation_penalty=env_config['creator_no_recommendation_penalty'],
recommendation_reward=env_config['creator_recommendation_reward'],
user_click_reward=env_config['creator_user_click_reward'],
satisfaction_decay=env_config['creator_satisfaction_decay'],
doc_quality_std=env_config['doc_quality_std'],
doc_quality_mean_bound=env_config['doc_quality_mean_bound'],
initial_num_docs=env_config['creator_initial_num_docs'],
topic_influence=env_config['creator_topic_influence'],
is_saturation=env_config['creator_is_saturation'],
sampling_space=env_config['sampling_space'],
copy_varied_property=env_config['copy_varied_property'])
# Build a simulation environment.
env = EcosystemEnvironment(
user_model=user_model,
document_sampler=document_sampler,
num_candidates=env_config['num_candidates'],
slate_size=env_config['slate_size'],
resample_documents=env_config['resample_documents'],
)
return EcosystemGymEnv(env, aggregate_multi_user_reward)
###Output
_____no_output_____ |
Meetups/2015USTC/USTCTalk2/QuDynamics_25_12_15.ipynb | ###Markdown
Hello World in Julia!
###Code
println("你好世界")
###Output
_____no_output_____
###Markdown
Functions, Method Dispatch, Types !
###Code
# function to compute the volume of sphere
function volume_of_sphere(radius)
return 4/3*pi*radius^3
end
volume_of_sphere(1)
# function to compute if three given numbers form a triangle
function sides_of_triangle(a, b, c)
if a+b>c && b+c>a && c+a>b && a>0 && b>0 && c>0
return "Triangle possible with $a, $b, $c"
else
return "Triangle not possible with $a, $b, $c"
end
end
sides_of_triangle(1,1,1)
function even_odd(a)
if a%2 == 0
println("even")
else
println("odd")
end
end
even_odd(2.1)
function even_odd_1(a::Int64)
if a%2 == 0
println("even")
else
println("odd")
end
end
even_odd_1(2.1)
type Complex
real::Float64
imag::Float64
Complex(real, imag) = new(real, imag)
end
Complex(2,2)
# Methods of + for Complex type !
+(a1::Complex, a2::Complex) = Complex(a1.real+a2.real, a1.imag+a2.imag)
*(a1::Complex, a2::Complex) = Complex(a1.real*a2.real - a1.imag*a2.imag, a1.imag*a2.real + a1.real*a2.imag)
x1 = Complex(2,2); x2 = Complex(3,3);
x1+x2, x1*x2
using QuBase
using QuDynamics
using Gadfly
h = sigmax # hamiltonian of the system
init_state = statevec(1, FiniteBasis(2))
t = 0.:0.1:2*pi
qprop = QuPropagator(h, init_state, t, QuODE45())
psi_final = Array(Float64,0)
time = Array(Float64, 0)
for (t, psi) in qprop
# println(t, real(psi[1])
# plot(t, real(psi[1]), "ro")
push!(psi_final, psi[1])
push!(time, t)
end
plot(x=time, y=psi_final)
qprop = QuStateEvolution(h, init_state, t, QuODE23s())
typeof(qprop.eq)
qprop.init_state
qprop_1 = QuStateEvolution(h, init_state*init_state', t, QuODE23s())
# observe the type of `eq`
typeof(qprop_1.eq)
# Alternative ways, though resulting in the same structure !
qprop_eq = QuStateEvolution(QuSchrodingerEq(sigmax), init_state, t, QuODE78())
qprop_1_eq = QuStateEvolution(QuLiouvillevonNeumannEq(QuDynamics.liouvillian_op(sigmax)), init_state*init_state', t, QuODE78())
qlme = QuStateEvolution(sigmax, [lowerop(2)], init_state*init_state', t, QuODE23s())
for (t, psi) in qlme
println("t : $t, psi : $psi")
end
qlme2 = QuLindbladMasterEq(sigmax, [lowerop(2)])
qlme2.lindblad
qlme2.collapse_ops
qlme2.hamiltonian
###Output
_____no_output_____ |
notebooks/rxn_mass_signatures-ModelSeed.ipynb | ###Markdown
reaction mass delta signatures- Examples using ModelSeed universal model- Minghao Gong, 04-14-2022
###Code
!pip install --upgrade metDataModel
!pip install --upgrade numpy
!pip install --upgrade mass2chem
!pip install cobra
import sys
sys.path.append("/Users/gongm/Documents/projects/mass2chem/")
import cobra
from metDataModel.core import Compound, Reaction, Pathway
from mass2chem.formula import *
###Output
_____no_output_____
###Markdown
Switching to MG's JSON modelafter parsing and formula-mass calculation.Example of model conversion - ?
###Code
import json
from operator import itemgetter
# JSON models are in JMS repo
M = json.load(open('../output/ModelSeed/Universal_ModelSeed.json'))
M.keys()
[print(x) for x in [
M['meta_data'],
len(M['list_of_reactions']),
M['list_of_reactions'][222],
len(M['list_of_compounds']),
M['list_of_compounds'][1000], ]
]
M['list_of_compounds'][300: 302]
# index metabolites, and add calculated charged_formula_mass via mass2chem.formula.calculate_formula_mass
cpdDict = {}
for C in M['list_of_compounds']:
if C['neutral_mono_mass']:
cpdDict[C['id']] = C
else:
print(C['id'], C['charged_formula'])
len(cpdDict)
cpdDict.get('cpd00783', None)
M['list_of_reactions'][891]
def get_delta(tuple1, tuple2):
# tuple1,2 are (mass, formula)
# get diff of mass and formulas. tuple2 as from products.
# return (mdiff, formulaDiff, tuple1[1], tuple2[1])
F1, F2 = tuple1[1], tuple2[1]
mdiff = tuple2[0] - tuple1[0]
if tuple1[0] <= tuple2[0]:
F2, F1 = tuple1[1], tuple2[1]
# F1 is the larger
F1dict, F2dict = parse_chemformula_dict(F1), parse_chemformula_dict(F2)
# invert F2 and calculate differential formula
for k,v in F2dict.items():
F2dict[k] = -v
formulaDiff = add_formula_dict( F1dict, F2dict )
if formulaDiff:
formulaDiff = dict_to_hill_formula(formulaDiff)
return (mdiff, formulaDiff, tuple1[1], tuple2[1])
# get useful rxns
# signature_mass_diff is the mass shift btw all products and all reactants
good = []
for R in M['list_of_reactions']:
if R['products'] and R['reactants']:
for C1 in R['reactants']:
for C2 in R['products']:
if C1 != C2:
D1 = cpdDict.get(C1, None)
D2 = cpdDict.get(C2, None)
if D1 and D2:
diff = get_delta((D1['neutral_mono_mass'], D1['neutral_formula']),
(D2['neutral_mono_mass'], D2['neutral_formula'])
)
good.append((R['id'], diff))
R['signature_mass_diff'] = diff[0]
R['signature_formula_diff'] = diff[1]
print(len(good))
good[40:60]
def add_formula_dict2(dict1, dict2):
'''
Addition of two formulae as dictionaries.
This allows calculating formula after a reaction, as dict2 can contain substraction of elements.
Not as good as using real chemical structures, just a simple approximation.
'''
new, result = {}, {}
for k in set(dict1.keys()).union( set(dict2.keys()) ):
if k in dict1 and k in dict2:
new[k] = dict1[k] + dict2[k]
elif k in dict1:
new[k] = dict1[k]
else:
new[k] = dict2[k]
for k,v in new.items():
if v != 0:
result[k] = v
return result
good2 = []
for line in good:
rid, DD = line
if DD[0] != 0:
F1, F2 = DD[2:4]
F1dict, F2dict = parse_chemformula_dict(F1), parse_chemformula_dict(F2)
for k,v in F2dict.items():
F2dict[k] = -v
formula_d = add_formula_dict2(F1dict, F2dict)
good2.append(
[rid] + [x for x in DD] + [str(formula_d)]
)
print(len(good2))
good2[88:93]
# combine +/- values
all_mds = [str(round(abs(x[1]),4)) for x in good2]
print(len(all_mds), len(set(all_mds)))
# so there's 5556 mz_diffs. because all mass were calculated from same source, bin in 0.0002 okay
# make a dict of mz_diff to reaction IDs
mzdiff_dict = {}
for k in set(all_mds):
mzdiff_dict[k] = []
for x in good2:
if abs(abs(x[1])-float(k)) < 0.0002:
mzdiff_dict[k].append(x[0])
list(mzdiff_dict.items())[88:90]
freq = list(zip(mzdiff_dict.keys(), [len(set(v)) for k,v in mzdiff_dict.items()]))
freq.sort(reverse=True, key=itemgetter(1))
from matplotlib import pyplot as plt
freq[500]
plt.plot([x[1] for x in freq[10:500]] )
freq[:40]
freq[199], mzdiff_dict[freq[199][0]]
fil = [x for x in freq if 2 < float(x[0]) < 100]
print(len(fil))
print(len([x for x in fil if x[1] > 5]))
###Output
2084
474
###Markdown
With 14035 mass diff, 2084 are within (2, 100) amuand 474 occurs more than 5 times
###Code
s = json.JSONEncoder().encode( mzdiff_dict )
with open("ModelSeed_universal_signatures_dict.json", "w") as O:
O.write(s)
###Output
_____no_output_____ |
0_ltpy_Intro_to_Python_and_Jupyter.ipynb | ###Markdown
11 - Atmospheric Composition Data - Overview and data access >> Optional: Introduction to Python and Project Jupyter Project Jupyter "Project Jupyter exists to develop open-source software, open-standards, and services for interactive computing across dozens of programming languages." Project Jupyter offers different tools to facilitate interactive computing, either with a web-based application (`Jupyter Notebooks`), an interactive development environment (`JupyterLab`) or via a `JupyterHub` that brings interactive computing to groups of users. * **Jupyter Notebook** is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.* **JupyterLab 1.0: Jupyter’s Next-Generation Notebook Interface** JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data.* **JupyterHub** JupyterHub brings the power of notebooks to groups of users. It gives users access to computational environments and resources without burdening the users with installation and maintenance tasks. Users - including students, researchers, and data scientists - can get their work done in their own workspaces on shared resources which can be managed efficiently by system administrators. Why Jupyter Notebooks? * Started with Python support, now **support of over 40 programming languages, including Python, R, Julia, ...*** Notebooks can **easily be shared via GitHub, NBViewer, etc.*** **Code, data and visualizations are combined in one place*** A great tool for **teaching*** **JupyterHub allows you to access an environment ready to code** Installation Installing Jupyter using Anaconda Anaconda comes with the Jupyter Notebook installed. You just have to download Anaconda and following the installation instructions. Once installed, the jupyter notebook can be started with:
###Code
jupyter notebook
###Output
_____no_output_____
###Markdown
Installing Jupyter with pip Experienced Python users may want to install Jupyter using Python's package manager `pip`.With `Python3` you do:
###Code
python3 -m pip install --upgrade pip
python3 -m pip install jupyter
###Output
_____no_output_____
###Markdown
In order to run the notebook, you run the same command as with Anaconda at the Terminal :
###Code
jupyter notebook
###Output
_____no_output_____
###Markdown
Jupyter notebooks UI * Notebook dashboard * Create new notebook* Notebook editor (UI) * Menu * Toolbar * Notebook area and cells* Cell types * Code * Markdown* Edit (green) vs. Command mode (blue) Notebook editor User Interface (UI) Shortcuts Get an overview of the shortcuts by hitting `H` or go to `Help/Keyboard shortcuts` Most useful shortcuts * `Esc` - switch to command mode* `B` - insert below* `A` - insert above* `M` - Change current cell to Markdown* `Y` - Change current cell to code* `DD` - Delete cell* `Enter` - go back to edit mode* `Esc + F` - Find and replace on your code* `Shift + Down / Upwards` - Select multiple cells* `Shift + M` - Merge multiple cells Cell magics Magic commands can make your life a lot easier, as you only have one command instead of an entire function or multiple lines of code.> Go to an [extensive overview of magic commands]() Some of the handy ones **Overview of available magic commands**
###Code
%lsmagic
###Output
_____no_output_____
###Markdown
**See and set environment variables**
###Code
%env
###Output
_____no_output_____
###Markdown
**Install and list libraries**
###Code
!pip install numpy
!pip list | grep pandas
###Output
_____no_output_____
###Markdown
**Write cell content to a Python file**
###Code
%%writefile hello_world.py
print('Hello World')
###Output
_____no_output_____
###Markdown
**Load a Python file**
###Code
%pycat hello_world.py
###Output
_____no_output_____
###Markdown
**Get the time of cell execution**
###Code
%%time
tmpList = []
for i in range(100):
tmpList.append(i+i)
print(tmpList)
###Output
_____no_output_____
###Markdown
**Show matplotlib plots inline**
###Code
%matplotlib inline
###Output
_____no_output_____ |
28_ICA_Sound Analysis/Independent Component Analysis Lab.ipynb | ###Markdown
Independent Component Analysis LabIn this notebook, we'll use Independent Component Analysis to retrieve original signals from three observations each of which contains a different mix of the original signals. This is the same problem explained in the ICA video. DatasetLet's begin by looking at the dataset we have. We have three WAVE files, each of which is a mix, as we've mentioned. If you haven't worked with audio files in python before, that's okay, they basically boil down to being lists of floats.Let's begin by loading our first audio file, **[ICA mix 1.wav](ICA mix 1.wav)** [click to listen to the file]:
###Code
import numpy as np
import wave
# Read the wave file
mix_1_wave = wave.open('ICA mix 1.wav','r')
###Output
_____no_output_____
###Markdown
Let's peak at the parameters of the wave file to learn more about it
###Code
mix_1_wave.getparams()
###Output
_____no_output_____
###Markdown
So this file has only channel (so it's mono sound). It has a frame rate of 44100, which means each second of sound is represented by 44100 integers (integers because the file is in the common PCM 16-bit format). The file has a total of 264515 integers/frames, which means its length in seconds is:
###Code
264515/44100
###Output
_____no_output_____
###Markdown
Let's extract the frames of the wave file, which will be a part of the dataset we'll run ICA against:
###Code
# Extract Raw Audio from Wav File
signal_1_raw = mix_1_wave.readframes(-1)
signal_1 = np.fromstring(signal_1_raw, 'Int16')
###Output
C:\Anaconda\lib\site-packages\ipykernel_launcher.py:3: DeprecationWarning: Numeric-style type codes are deprecated and will result in an error in the future.
This is separate from the ipykernel package so we can avoid doing imports until
C:\Anaconda\lib\site-packages\ipykernel_launcher.py:3: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
This is separate from the ipykernel package so we can avoid doing imports until
###Markdown
signal_1 is now a list of ints representing the sound contained in the first file.
###Code
'length: ', len(signal_1) , 'first 100 elements: ',signal_1[:100]
###Output
_____no_output_____
###Markdown
If we plot this array as a line graph, we'll get the familiar wave form representation:
###Code
import matplotlib.pyplot as plt
fs = mix_1_wave.getframerate()
timing = np.linspace(0, len(signal_1)/fs, num=len(signal_1))
plt.figure(figsize=(12,2))
plt.title('Recording 1')
plt.plot(timing,signal_1, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
###Output
_____no_output_____
###Markdown
In the same way, we can now load the other two wave files, **[ICA mix 2.wav](ICA mix 2.wav)** and **[ICA mix 3.wav](ICA mix 3.wav)**
###Code
mix_2_wave = wave.open('ICA mix 2.wav','r')
#Extract Raw Audio from Wav File
signal_raw_2 = mix_2_wave.readframes(-1)
signal_2 = np.fromstring(signal_raw_2, 'Int16')
mix_3_wave = wave.open('ICA mix 3.wav','r')
#Extract Raw Audio from Wav File
signal_raw_3 = mix_3_wave.readframes(-1)
signal_3 = np.fromstring(signal_raw_3, 'Int16')
plt.figure(figsize=(12,2))
plt.title('Recording 2')
plt.plot(timing,signal_2, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
plt.figure(figsize=(12,2))
plt.title('Recording 3')
plt.plot(timing,signal_3, c="#3ABFE7")
plt.ylim(-35000, 35000)
plt.show()
###Output
C:\Anaconda\lib\site-packages\ipykernel_launcher.py:6: DeprecationWarning: Numeric-style type codes are deprecated and will result in an error in the future.
C:\Anaconda\lib\site-packages\ipykernel_launcher.py:6: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
C:\Anaconda\lib\site-packages\ipykernel_launcher.py:13: DeprecationWarning: Numeric-style type codes are deprecated and will result in an error in the future.
del sys.path[0]
C:\Anaconda\lib\site-packages\ipykernel_launcher.py:13: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
del sys.path[0]
###Markdown
Now that we've read all three files, we're ready to [zip](https://docs.python.org/3/library/functions.htmlzip) them to create our dataset.* Create dataset ```X``` by zipping signal_1, signal_2, and signal_3 into a single list
###Code
X = list(zip(signal_1, signal_2, signal_3))
# Let's peak at what X looks like
X[:10]
###Output
_____no_output_____
###Markdown
We are now ready to run ICA to try to retrieve the original signals.* Import sklearn's [FastICA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FastICA.html) module* Initialize FastICA look for three components* Run the FastICA algorithm using fit_transform on dataset X
###Code
# TODO: Import FastICA
from sklearn.decomposition import FastICA
# TODO: Initialize FastICA with n_components=3
ica = FastICA(n_components=3)
# TODO: Run the FastICA algorithm using fit_transform on dataset X
ica_result = ica.fit_transform(X)
###Output
_____no_output_____
###Markdown
```ica_result``` now contains the result of FastICA, which we hope are the original signals. It's in the shape:
###Code
ica_result.shape
###Output
_____no_output_____
###Markdown
Let's split into separate signals and look at them
###Code
result_signal_1 = ica_result[:,0]
result_signal_2 = ica_result[:,1]
result_signal_3 = ica_result[:,2]
###Output
_____no_output_____
###Markdown
Let's plot to see how the wave forms look
###Code
# Plot Independent Component #1
plt.figure(figsize=(12,2))
plt.title('Independent Component #1')
plt.plot(result_signal_1, c="#df8efd")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #2
plt.figure(figsize=(12,2))
plt.title('Independent Component #2')
plt.plot(result_signal_2, c="#87de72")
plt.ylim(-0.010, 0.010)
plt.show()
# Plot Independent Component #3
plt.figure(figsize=(12,2))
plt.title('Independent Component #3')
plt.plot(result_signal_3, c="#f65e97")
plt.ylim(-0.010, 0.010)
plt.show()
###Output
_____no_output_____
###Markdown
Do some of these look like musical wave forms? The best way to confirm the result is to listen to resulting files. So let's save as wave files and verify. But before we do that, we'll have to:* convert them to integer (so we can save as PCM 16-bit Wave files), otherwise only some media players would be able to play them and others won't* Map the values to the appropriate range for int16 audio. That range is between -32768 and +32767. A basic mapping can be done by multiplying by 32767.* The sounds will be a little faint, we can increase the volume by multiplying by a value like 100
###Code
from scipy.io import wavfile
# Convert to int, map the appropriate range, and increase the volume a little bit
result_signal_1_int = np.int16(result_signal_1*32767*100)
result_signal_2_int = np.int16(result_signal_2*32767*100)
result_signal_3_int = np.int16(result_signal_3*32767*100)
# Write wave files
wavfile.write("result_signal_1.wav", fs, result_signal_1_int)
wavfile.write("result_signal_2.wav", fs, result_signal_2_int)
wavfile.write("result_signal_3.wav", fs, result_signal_3_int)
###Output
_____no_output_____ |
notebooks/0.1-exploring-master-groups.ipynb | ###Markdown
Restrict Data to Metro Areas
###Code
groups_df.dropna(subset=[ 'MSA_CODE' ], inplace=True)
metro_groups_df = groups_df[ groups_df['MSA_NAME'].str.contains('Metro') ]
groups_by_metro = metro_groups_df.groupby([ 'MSA_CODE', 'MSA_NAME' ], as_index=False)
###Output
_____no_output_____
###Markdown
Explore Metro Areas with most number of groups
###Code
groups_by_metro_count = groups_by_metro.count()
top_metros = groups_by_metro_count.nlargest(30, 'id')
top_metros['MSA_NAME'] = top_metros["MSA_NAME"].apply(lambda x: x.split(',')[0])
plt.subplots(figsize=(12,10))
ax = sns.barplot(y='MSA_NAME', x='id', data=top_metros)
ax.set_xlabel('Number of Groups')
ax.set_ylabel('MSA Name')
ax.axes.set_title('Top 30 Metro Areas with Meetup Groups')
print()
###Output
###Markdown
Explore attributes of top-5 Metro Areas with most number of groups
###Code
group_count_lower_bound = groups_by_metro.count().nlargest(5, 'id')['id'].min()
sample_groups = groups_by_metro.filter(lambda x: x['id'].count() >= group_count_lower_bound)
###Output
_____no_output_____
###Markdown
Explore top-5 groups in each of the sample metro areas ordered by number of past events
###Code
sample_groups_sorted_by_events = sample_groups.sort_values([ 'MSA_CODE', 'past_event_count' ], ascending=False)
sample_groups_sorted_by_events.groupby([ 'MSA_NAME' ]).apply(lambda x: x.head(5))
###Output
_____no_output_____
###Markdown
Explore top-5 groups ineach of the sample metro areas order by number of members
###Code
sample_groups_sorted_by_members = sample_groups.sort_values([ 'MSA_CODE', 'members' ], ascending=False)
sample_groups_sorted_by_members.groupby([ 'MSA_NAME']).apply(lambda x: x.head(5))
###Output
_____no_output_____
###Markdown
Explore top-5 categories in each Metro Area
###Code
sample_groups_category_count = sample_groups.groupby(['MSA_CODE', 'MSA_NAME', 'category.name']).count().reset_index()
sample_groups_category_count.rename(columns={ 'id': 'count'}, inplace=True)
group_category_counts_by_msa = sample_groups_category_count[ ['MSA_NAME', 'count', 'category.name']].groupby('MSA_NAME')
group_category_counts_by_msa.apply(lambda x: x.sort_values('count', ascending=False).head(5)).drop('MSA_NAME', axis=1)
###Output
_____no_output_____
###Markdown
Exploring distribution of categories among groups in the sample Metro Areas
###Code
sample_groups_category_count['MSA_NAME'] = sample_groups_category_count['MSA_NAME'].apply(lambda x: x.split(',')[0])
g = sns.factorplot(x='MSA_NAME', y='count', kind='bar', hue='category.name', size=12,
palette=sns.color_palette('Set1', 10), data=sample_groups_category_count)
g.set_xlabels('MSA Name')
g.set_ylabels('Number of Groups')
g.set_xticklabels(rotation=25)
plt.title('Most Popular Categories', fontsize=20)
print()
###Output
|
Db2_11.1_Features/Db2_11.1_Compatibility_Features.ipynb | ###Markdown
Db2 Compatibility FeaturesUpdated: 2019-10-03 Moving from one database vendor to another can sometimes be difficult due to syntax differences between data types, functions, and language elements. Db2 already has a high degree of compatibility with Oracle PLSQL along with some of the Oracle data types. Db2 11 introduces some additional data type and function compatibility that will reduce some of the migration effort required when porting from other systems. There are some specific features within Db2 that are targeted at Netezza SQL and that is discussed in a separate section.
###Code
%run ../db2.ipynb
%run ../connection.ipynb
###Output
_____no_output_____
###Markdown
We populate the database with the `EMPLOYEE` and `DEPARTMENT` tables so that we can run the various examples.
###Code
%sql -sampledata
###Output
_____no_output_____
###Markdown
Table of Contents* [Outer Join Operator](outer)* [CHAR datatype size increase](char)* [Binary Data Type](binary)* [Boolean Data Type](boolean)* [Synonyms for Data Types](synonyms)* [Function Synonymns](function)* [Netezza Compatibility](netezza)* [Select Enhancements](select)* [Hexadecimal Functions](hexadecimal)* [Table Creation with Data](create) Outer Join OperatorDb2 allows the use of the Oracle outer-join operator when Oracle compatibility is turned on within a database. In Db2 11, the outer join operator is available by default and does not require the DBA to turn on Oracle compatibility.Db2 supports standard join syntax for LEFT and RIGHT OUTER JOINS.However, there is proprietary syntax used by Oracle employing a keyword: "(+)"to mark the "null-producing" column reference that precedes it in animplicit join notation. That is (+) appears in the WHERE clause andrefers to a column of the inner table in a left outer join.For instance:```Python SELECT * FROM T1, T2WHERE T1.C1 = T2.C2 (+)```Is the same as:```PythonSELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.C1 = T2.C2```In this example, we get list of departments and their employees, aswell as the names of departments who have no employees.This example uses the standard Db2 syntax.
###Code
%%sql -a
SELECT DEPTNAME, LASTNAME FROM
DEPARTMENT D LEFT OUTER JOIN EMPLOYEE E
ON D.DEPTNO = E.WORKDEPT
###Output
_____no_output_____
###Markdown
This example works in the same manner as the last one, but usesthe "+" sign syntax. The format is a lot simpler to remember than OUTER JOINsyntax, but it is not part of the SQL standard.
###Code
%%sql
SELECT DEPTNAME, LASTNAME FROM
DEPARTMENT D, EMPLOYEE E
WHERE D.DEPTNO = E.WORKDEPT (+)
###Output
_____no_output_____
###Markdown
[Back to Top](top) CHAR Datatype Size Increase The CHAR datatype was limited to 254 characters in prior releases of Db2. In Db2 11, the limit has been increasedto 255 characters to bring it in line with other SQL implementations.First we drop the table if it already exists.
###Code
%%sql -q
DROP TABLE LONGER_CHAR;
CREATE TABLE LONGER_CHAR
(
NAME CHAR(255)
);
###Output
_____no_output_____
###Markdown
[Back to Top](top) Binary Data TypesDb2 11 introduces two new binary data types: BINARY and VARBINARY. These two data types can contain any combination of characters or binary values and are not affected by the codepage of the server that the values are stored on.A BINARY data type is fixed and can have a maximum length of 255 bytes, while a VARBINARY column can contain up to 32672 bytes. Each of these data types is compatible with columns created with the FOR BIT DATA keyword.The BINARY data type will reduce the amount of conversion required from other data bases. Although binary data was supported with the FOR BIT DATA clause on a character column, it required manual DDL changes when migrating a table definition.This example shows the creation of the three types of binary data types.
###Code
%%sql -q
DROP TABLE HEXEY;
CREATE TABLE HEXEY
(
AUDIO_SHORT BINARY(255),
AUDIO_LONG VARBINARY(1024),
AUDIO_CHAR VARCHAR(255) FOR BIT DATA
);
###Output
_____no_output_____
###Markdown
Inserting data into a binary column can be done through the use of BINARY functions, or the use of X'xxxx' modifiers when using the VALUE clause. For fixed strings you use the X'00' format to specify a binary value and BX'00' for variable length binary strings. For instance, the following SQL will insert data into the previous table that was created.
###Code
%%sql
INSERT INTO HEXEY VALUES
(BINARY('Hello there'),
BX'2433A5D5C1',
VARCHAR_BIT_FORMAT(HEX('Hello there')));
SELECT * FROM HEXEY;
###Output
_____no_output_____
###Markdown
Handling binary data with a FOR BIT DATA column was sometimes tedious, so the BINARY columns will make coding a little simpler. You can compare and assign values between any of these types of columns. The next SQL statement will update the AUDIO_CHAR column with the contents of the AUDIO_SHORT column. Then the SQL will test to make sure they are the same value.
###Code
%%sql
UPDATE HEXEY
SET AUDIO_CHAR = AUDIO_SHORT
###Output
_____no_output_____
###Markdown
We should have one record that is equal.
###Code
%%sql
SELECT COUNT(*) FROM HEXEY WHERE
AUDIO_SHORT = AUDIO_CHAR
###Output
_____no_output_____
###Markdown
[Back to Top](top) Boolean Data TypeThe boolean data type (true/false) has been available in SQLPL and PL/SQL scripts for some time. However,the boolean data type could not be used in a table definition. Db2 11 FP1 now allows you to use thisdata type in a table definition and use TRUE/FALSE clauses to compare values.This simple table will be used to demonstrate how BOOLEAN types can be used.
###Code
%%sql -q
DROP TABLE TRUEFALSE;
CREATE TABLE TRUEFALSE (
EXAMPLE INT NOT NULL,
STATE BOOLEAN NOT NULL
);
###Output
_____no_output_____
###Markdown
The keywords for a true value are TRUE, 'true', 't', 'yes', 'y', 'on', and '1'. For false the values areFALSE, 'false', 'f', 'no', 'n', and '0'.
###Code
%%sql
INSERT INTO TRUEFALSE VALUES
(1, TRUE),
(2, FALSE),
(3, 0),
(4, 't'),
(5, 'no')
###Output
_____no_output_____
###Markdown
Now we can check to see what has been inserted into the table.
###Code
%sql SELECT * FROM TRUEFALSE
###Output
_____no_output_____
###Markdown
Retrieving the data in a SELECT statement will return an integer value for display purposes.1 is true and 0 is false (binary 1 and 0).Comparison operators with BOOLEAN data types will use TRUE, FALSE, 1 or 0 or any of the supported binary values. You have the choice of using the equal (=) operator or the IS or IS NOT syntax as shown in the following SQL.
###Code
%%sql
SELECT * FROM TRUEFALSE
WHERE STATE = TRUE OR STATE = 1 OR STATE = 'on' OR STATE IS TRUE
###Output
_____no_output_____
###Markdown
[Back to Top](top) Synonym Data typesDb2 has the standard data types that most developers are familiar with, like CHAR, INTEGER, and DECIMAL. There are other SQL implementations that use different names for these data types, so Db2 11 now allows these data types as syonomys for the base types.These data types are:|Type |Db2 Equivalent|:----- |:-------------|INT2 |SMALLINT|INT4 |INTEGER|INT8 |BIGINT|FLOAT4 |REAL|FLOAT8 |FLOATThe following SQL will create a table with all of these data types.
###Code
%%sql -q
DROP TABLE SYNONYM_EMPLOYEE;
CREATE TABLE SYNONYM_EMPLOYEE
(
NAME VARCHAR(20),
SALARY INT4,
BONUS INT2,
COMMISSION INT8,
COMMISSION_RATE FLOAT4,
BONUS_RATE FLOAT8
);
###Output
_____no_output_____
###Markdown
When you create a table with these other data types, Db2 does not use these "types" in the catalog. What Db2 will do is use the Db2 type instead of these synonym types. What this means is that if you describe the contents of a table, you will see the Db2 types displayed, not these synonym types.
###Code
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME='SYNONYM_EMPLOYEE' AND TBCREATOR=CURRENT USER
###Output
_____no_output_____
###Markdown
[Back to Top](top) Function Name CompatibilityDb2 has a wealth of built-in functions that are equivalent to competitive functions, but with a different name. InDb2 11, these alternate function names are mapped to the Db2 function so that there is no re-write of the functionname required. This first SQL statement generates some data required for the statistical functions. Generate Linear DataThis command generates X,Y coordinate pairs in the xycoord table that are based on thefunction y = 2x + 5. Note that the table creation uses Common Table Expressionsand recursion to generate the data!
###Code
%%sql -q
DROP TABLE XYCOORDS;
CREATE TABLE XYCOORDS
(
X INT,
Y INT
);
INSERT INTO XYCOORDS
WITH TEMP1(X) AS
(
VALUES (0)
UNION ALL
SELECT X+1 FROM TEMP1 WHERE X < 10
)
SELECT X, 2*X + 5
FROM TEMP1;
###Output
_____no_output_____
###Markdown
COVAR_POP is an alias for COVARIANCE
###Code
%%sql
SELECT 'COVAR_POP', COVAR_POP(X,Y) FROM XYCOORDS
UNION ALL
SELECT 'COVARIANCE', COVARIANCE(X,Y) FROM XYCOORDS
###Output
_____no_output_____
###Markdown
VAR_POP is an alias for VARIANCE
###Code
%%sql
SELECT 'STDDEV_POP', STDDEV_POP(X) FROM XYCOORDS
UNION ALL
SELECT 'STDDEV', STDDEV(X) FROM XYCOORDS
###Output
_____no_output_____
###Markdown
VAR_SAMP is an alias for VARIANCE_SAMP
###Code
%%sql
SELECT 'VAR_SAMP', VAR_SAMP(X) FROM XYCOORDS
UNION ALL
SELECT 'VARIANCE_SAMP', VARIANCE_SAMP(X) FROM XYCOORDS
###Output
_____no_output_____
###Markdown
ISNULL, NOTNULL is an alias for IS NULL, IS NOT NULL
###Code
%%sql
WITH EMP(LASTNAME, WORKDEPT) AS
(
VALUES ('George','A01'),
('Fred',NULL),
('Katrina','B01'),
('Bob',NULL)
)
SELECT * FROM EMP WHERE
WORKDEPT ISNULL
###Output
_____no_output_____
###Markdown
LOG is an alias for LN
###Code
%%sql
VALUES ('LOG',LOG(10))
UNION ALL
VALUES ('LN', LN(10))
###Output
_____no_output_____
###Markdown
RANDOM is an alias for RANDNotice that the random number that is generated for the two calls results in a different value! This behavior is the not the same with timestamps, where the value is calculated once during the execution of the SQL.
###Code
%%sql
VALUES ('RANDOM', RANDOM())
UNION ALL
VALUES ('RAND', RAND())
###Output
_____no_output_____
###Markdown
STRPOS is an alias for POSSTR
###Code
%%sql
VALUES ('POSSTR',POSSTR('Hello There','There'))
UNION ALL
VALUES ('STRPOS',STRPOS('Hello There','There'))
###Output
_____no_output_____
###Markdown
STRLEFT is an alias for LEFT
###Code
%%sql
VALUES ('LEFT',LEFT('Hello There',5))
UNION ALL
VALUES ('STRLEFT',STRLEFT('Hello There',5))
###Output
_____no_output_____
###Markdown
STRRIGHT is an alias for RIGHT
###Code
%%sql
VALUES ('RIGHT',RIGHT('Hello There',5))
UNION ALL
VALUES ('STRRIGHT',STRRIGHT('Hello There',5))
###Output
_____no_output_____
###Markdown
Additional SynonymsThere are a couple of additional keywords that are synonyms for existing Db2 functions. The list below includes onlythose features that were introduced in Db2 11.|Keyword | Db2 Equivalent|:------------| :-----------------------------|BPCHAR | VARCHAR (for casting function)|DISTRIBUTE ON| DISTRIBUTE BY [Back to Top](top) Netezza CompatibilityDb2 provides features that enable applications that were written for a Netezza Performance Server (NPS) database to use a Db2 database without having to be rewritten.The SQL_COMPAT global variable is used to activate the following optional NPS compatibility features: - Double-dot notation - When operating in NPS compatibility mode, you can use double-dot notation to specify a database object. - TRANSLATE parameter syntax - The syntax of the TRANSLATE parameter depends on whether NPS compatibility mode is being used. - Operators - Which symbols are used to represent operators in expressions depends on whether NPS compatibility mode is being used. - Grouping by SELECT clause columns - When operating in NPS compatibility mode, you can specify the ordinal position or exposed name of a SELECT clause column when grouping the results of a query. - Routines written in NZPLSQL - When operating in NPS compatibility mode, the NZPLSQL language can be used in addition to the SQL PL language. Special CharactersA quick review of Db2 special characters. Before we change the behavior of Db2, we need to understandwhat some of the special characters do. The following SQL shows how some of the special characterswork. Note that the HASH/POUND sign () has no meaning in Db2.
###Code
%%sql
WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
(
VALUES
(' | ','OR ', '2 | 3 ', 2 | 3),
(' & ','AND ', '2 & 3 ', 2 & 3),
(' ^ ','XOR ', '2 ^ 3 ', 2 ^ 3),
(' ~ ','COMPLEMENT', '~2 ', ~2),
(' # ','NONE ', ' ',0)
)
SELECT * FROM SPECIAL
###Output
_____no_output_____
###Markdown
If we turn on NPS compatibility, you see a couple of special characters change behavior. Specifically the ^ operator becomes a "power" operator, and the becomes an XOR operator.
###Code
%%sql
SET SQL_COMPAT = 'NPS';
WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
(
VALUES
(' | ','OR ', '2 | 3 ', 2 | 3),
(' & ','AND ', '2 & 3 ', 2 & 3),
(' ^ ','POWER ', '2 ^ 3 ', 2 ^ 3),
(' ~ ','COMPLIMENT', '~2 ', ~2),
(' # ','XOR ', '2 # 3 ', 2 # 3)
)
SELECT * FROM SPECIAL;
###Output
_____no_output_____
###Markdown
GROUP BY Ordinal LocationThe GROUP BY command behavior also changes in NPS mode. The following SQL statement groups resultsusing the default Db2 syntax:
###Code
%%sql
SET SQL_COMPAT='DB2';
SELECT WORKDEPT,INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY WORKDEPT;
###Output
_____no_output_____
###Markdown
If you try using the ordinal location (similar to an ORDER BY clause), you willget an error message.
###Code
%%sql
SELECT WORKDEPT, INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY 1;
###Output
_____no_output_____
###Markdown
If NPS compatibility is turned on then then you use the GROUP BY clause with an ordinal location.
###Code
%%sql
SET SQL_COMPAT='NPS';
SELECT WORKDEPT, INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY 1;
###Output
_____no_output_____
###Markdown
TRANSLATE FunctionThe translate function syntax in Db2 is: ```PythonTRANSLATE(expression, to_string, from_string, padding)```The TRANSLATE function returns a value in which one or more characters in a string expression might have been converted to other characters. The function converts all the characters in char-string-expin from-string-exp to the corresponding characters in to-string-exp or, if no corresponding characters exist, to the pad character specified by padding.If no parameters are given to the function, the original string is converted to uppercase.In NPS mode, the translate syntax is: ```PythonTRANSLATE(expression, from_string, to_string)```If a character is found in the from string, and there is no corresponding character in the to string, it is removed. If it was using Db2 syntax, the padding character would be used instead.Note: If ORACLE compatibility is ON then the behavior of TRANSLATE is identical to NPS mode.This first example will uppercase the string.
###Code
%%sql
SET SQL_COMPAT = 'NPS';
VALUES TRANSLATE('Hello');
###Output
_____no_output_____
###Markdown
In this example, the letter 'o' will be replaced with an '1'.
###Code
%sql VALUES TRANSLATE('Hello','o','1')
###Output
_____no_output_____
###Markdown
Note that you could replace more than one character by expanding both the "to" and "from" strings. Thisexample will replace the letter "e" with an "2" as well as "o" with "1".
###Code
%sql VALUES TRANSLATE('Hello','oe','12')
###Output
_____no_output_____
###Markdown
Translate will also remove a character if it is not in the "to" list.
###Code
%sql VALUES TRANSLATE('Hello','oel','12')
###Output
_____no_output_____
###Markdown
Reset the behavior back to Db2 mode.
###Code
%sql SET SQL_COMPAT='DB2'
###Output
_____no_output_____
###Markdown
[Back to Top](top) SELECT EnhancementsDb2 has the ability to limit the amount of data retrieved on a SELECT statementthrough the use of the FETCH FIRST n ROWS ONLY clause. In Db2 11, the ability to offset the rows before fetching was added to the FETCH FIRST clause. Simple SQL with Fetch First ClauseThe FETCH first clause can be used in a variety of locations in a SELECT clause. Thisfirst example fetches only 10 rows from the EMPLOYEE table.
###Code
%%sql
SELECT LASTNAME FROM EMPLOYEE
FETCH FIRST 5 ROWS ONLY
###Output
_____no_output_____
###Markdown
You can also add ORDER BY and GROUP BY clauses in the SELECT statement. Note thatDb2 still needs to process all of the records and do the ORDER/GROUP BY workbefore limiting the answer set. So you are not getting the first 5 rows "sorted". You are actually getting the entire answer set sorted before retrieving just 5 rows.
###Code
%%sql
SELECT LASTNAME FROM EMPLOYEE
ORDER BY LASTNAME
FETCH FIRST 5 ROWS ONLY
###Output
_____no_output_____
###Markdown
Here is an example with the GROUP BY statement. This first SQL statement gives us the totalanswer set - the count of employees by WORKDEPT.
###Code
%%sql
SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY WORKDEPT
###Output
_____no_output_____
###Markdown
Adding the FETCH FIRST clause only reduces the rows returned, not the rows thatare used to compute the GROUPing result.
###Code
%%sql
SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY WORKDEPT
FETCH FIRST 5 ROWS ONLY
###Output
_____no_output_____
###Markdown
OFFSET ExtensionThe FETCH FIRST n ROWS ONLY clause can also include an OFFSET keyword. The OFFSET keyword allows you to retrieve the answer set after skipping "n" number of rows. The syntax of the OFFSETkeyword is:```PythonOFFSET n ROWS FETCH FIRST x ROWS ONLY```The OFFSET n ROWS must precede the FETCH FIRST x ROWS ONLY clause. The OFFSET clause can be used to scroll down an answer set without having to hold a cursor. For instance, you could have the first SELECT call request 10 rows by just using the FETCH FIRST clause. After that you couldrequest the first 10 rows be skipped before retrieving the next 10 rows. The one thing you must be aware of is that that answer set could change between calls if you usethis technique of a "moving" window. If rows are updated or added after your initial query you mayget different results. This is due to the way that Db2 adds rows to a table. If there is a DELETE and thenan INSERT, the INSERTed row may end up in the empty slot. There is no guarantee of the order of retrieval. Forthis reason you are better off using an ORDER by to force the ordering although this too won't always prevent rows changing positions.Here are the first 10 rows of the employee table (not ordered).
###Code
%%sql
SELECT LASTNAME FROM EMPLOYEE
FETCH FIRST 10 ROWS ONLY
###Output
_____no_output_____
###Markdown
You can specify a zero offset to begin from the beginning.
###Code
%%sql
SELECT LASTNAME FROM EMPLOYEE
OFFSET 0 ROWS
FETCH FIRST 10 ROWS ONLY
###Output
_____no_output_____
###Markdown
Now we can move the answer set ahead by 5 rows and get the remaining5 rows in the answer set.
###Code
%%sql
SELECT LASTNAME FROM EMPLOYEE
OFFSET 5 ROWS
FETCH FIRST 5 ROWS ONLY
###Output
_____no_output_____
###Markdown
FETCH FIRST and OFFSET in SUBSELECTsThe FETCH FIRST/OFFSET clause is not limited to regular SELECT statements. You can also limit the number of rows that are used in a subselect. In this case you are limiting the amount ofdata that Db2 will scan when determining the answer set.For instance, say you wanted to find the names of the employees who make more than theaverage salary of the 3rd highest paid department. (By the way, there are multiple ways to do this, but this is one approach).The first step is to determine what the average salary is of all departments.
###Code
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC;
###Output
_____no_output_____
###Markdown
We only want one record from this list (the third one), so we can use the FETCH FIRST clause withan OFFSET to get the value we want (Note: we need to skip 2 rows to get to the 3rd one).
###Code
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
OFFSET 2 ROWS FETCH FIRST 1 ROWS ONLY
###Output
_____no_output_____
###Markdown
And here is the list of employees that make more than the average salary of the 3rd highest department in the company.
###Code
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE
SALARY > (
SELECT AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
OFFSET 2 ROWS FETCH FIRST 1 ROW ONLY
)
ORDER BY SALARY
###Output
_____no_output_____
###Markdown
Alternate Syntax for FETCH FIRSTThe FETCH FIRST n ROWS ONLY and OFFSET clause can also be specified using a simpler LIMIT/OFFSET syntax.The LIMIT clause and the equivalent FETCH FIRST syntax are shown below.|Syntax |Equivalent|:-----------------|:-----------------------------|LIMIT x |FETCH FIRST x ROWS ONLY|LIMIT x OFFSET y |OFFSET y ROWS FETCH FIRST x ROWS ONLY|LIMIT y,x |OFFSET y ROWS FETCH FIRST x ROWS ONLYThe previous examples are rewritten using the LIMIT clause.We can use the LIMIT clause with an OFFSET to get the value we want from the table.
###Code
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
LIMIT 1 OFFSET 2
###Output
_____no_output_____
###Markdown
Here is the list of employees that make more than the average salary of the 3rd highest department in the company. Note that the LIMIT clause specifies only the offset (LIMIT x) or the offset and limit (LIMIT y,x) when you do not use the LIMIT keyword. One would think that LIMIT x OFFSET y would translate into LIMIT x,y but that is not the case. Don't try to figure out the SQL standards reasoning behind the syntax!
###Code
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE
SALARY > (
SELECT AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
LIMIT 2,1
)
ORDER BY SALARY
###Output
_____no_output_____
###Markdown
[Back to Top](top) Hexadecimal FunctionsA number of new HEX manipulation functions have been added to Db2 11. There are a class of functionsthat manipulate different size integers (SMALL, INTEGER, BIGINT) using NOT, OR, AND, and XOR. In addition tothese functions, there are a number of functions that display and convert values into hexadecimal values. INTN FunctionsThe INTN functions are bitwise functions that operate on the "two's complement" representation of the integer value of the input arguments and return the result as a corresponding base 10 integer value.The function names all include the size of the integers that are being manipulated: - N = 2 (Smallint), 4 (Integer), 8 (Bigint)There are four functions:- INTNAND - Performs a bitwise AND operation, 1 only if the corresponding bits in both arguments are 1- INTNOR - Performs a bitwise OR operation, 1 unless the corresponding bits in both arguments are zero- INTNXOR Performs a bitwise exclusive OR operation, 1 unless the corresponding bits in both arguments are the same- INTNNOT - Performs a bitwise NOT operation, opposite of the corresponding bit in the argumentSix variables will be created to use in the examples. The X/Y values will be set to X=1 (01) and Y=3 (11) and different sizes to show how the functions work.
###Code
%%sql -q
DROP VARIABLE XINT2;
DROP VARIABLE YINT2;
DROP VARIABLE XINT4;
DROP VARIABLE YINT4;
DROP VARIABLE XINT8;
DROP VARIABLE YINT8;
CREATE VARIABLE XINT2 INT2 DEFAULT(1);
CREATE VARIABLE YINT2 INT2 DEFAULT(3);
CREATE VARIABLE XINT4 INT4 DEFAULT(1);
CREATE VARIABLE YINT4 INT4 DEFAULT(3);
CREATE VARIABLE XINT8 INT8 DEFAULT(1);
CREATE VARIABLE YINT8 INT8 DEFAULT(3);
###Output
_____no_output_____
###Markdown
This example will show the four functions used against SMALLINT (INT2) data types.
###Code
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT2AND(X,Y)',XINT2,YINT2,INT2AND(XINT2,YINT2)),
('INT2OR(X,Y) ',XINT2,YINT2,INT2OR(XINT2,YINT2)),
('INT2XOR(X,Y)',XINT2,YINT2,INT2XOR(XINT2,YINT2)),
('INT2NOT(X) ',XINT2,YINT2,INT2NOT(XINT2))
)
SELECT * FROM LOGIC
###Output
_____no_output_____
###Markdown
This example will use the 4 byte (INT4) data type.
###Code
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT4AND(X,Y)',XINT4,YINT4,INT4AND(XINT4,YINT4)),
('INT4OR(X,Y) ',XINT4,YINT4,INT4OR(XINT4,YINT4)),
('INT4XOR(X,Y)',XINT4,YINT4,INT4XOR(XINT4,YINT4)),
('INT4NOT(X) ',XINT4,YINT4,INT4NOT(XINT4))
)
SELECT * FROM LOGIC
###Output
_____no_output_____
###Markdown
Finally, the INT8 data type is used in the SQL. Note that you can mix and match the INT2, INT4, and INT8 valuesin these functions but you may get truncation if the value is too big.
###Code
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT8AND(X,Y)',XINT8,YINT8,INT8AND(XINT8,YINT8)),
('INT8OR(X,Y) ',XINT8,YINT8,INT8OR(XINT8,YINT8)),
('INT8XOR(X,Y)',XINT8,YINT8,INT8XOR(XINT8,YINT8)),
('INT8NOT(X) ',XINT8,YINT8,INT8NOT(XINT8))
)
SELECT * FROM LOGIC
###Output
_____no_output_____
###Markdown
TO_HEX FunctionThe TO_HEX function converts a numeric expression into a character hexadecimal representation. For example, thenumeric value 255 represents x'FF'. The value returned from this function is a VARCHAR value and its length depends on the size of the number you supply.
###Code
%sql VALUES TO_HEX(255)
###Output
_____no_output_____
###Markdown
RAWTOHEX FunctionThe RAWTOHEX function returns a hexadecimal representation of a value as a character string. Theresult is a character string itself.
###Code
%sql VALUES RAWTOHEX('Hello')
###Output
_____no_output_____
###Markdown
The string "00" converts to a hex representation of x'3030' which is 12336 in Decimal.So the TO_HEX function would convert this back to the HEX representation.
###Code
%sql VALUES TO_HEX(12336)
###Output
_____no_output_____
###Markdown
The string that is returned by the RAWTOHEX function should be the same.
###Code
%sql VALUES RAWTOHEX('00');
###Output
_____no_output_____
###Markdown
[Back to Top](top) Table Creation ExtensionsThe CREATE TABLE statement can now use a SELECT clause to generate the definition and LOAD the dataat the same time. Create Table SyntaxThe syntax of the CREATE table statement has been extended with the AS (SELECT ...) WITH DATA clause:```PythonCREATE TABLE AS (SELECT ...) [ WITH DATA | DEFINITION ONLY ]```The table definition will be generated based on the SQL statement that you specify. The column namesare derived from the columns that are in the SELECT list and can only be changed by specifying the columns namesas part of the table name: EMP(X,Y,Z,...) AS (...).For example, the following SQL will fail because a column list was not provided:
###Code
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE) DEFINITION ONLY;
###Output
_____no_output_____
###Markdown
You can name a column in the SELECT list or place it in the table definition.
###Code
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) DEFINITION ONLY;
###Output
_____no_output_____
###Markdown
You can check the SYSTEM catalog to see the table definition.
###Code
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME='AS_EMP' AND TBCREATOR=CURRENT USER
###Output
_____no_output_____
###Markdown
The DEFINITION ONLY clause will create the table but not load any data into it. Adding the WITH DATA clause will do an INSERT of rows into the newly created table. If you have a large amount of data to load into the table you may be better off creating the table with DEFINITION ONLY and then using LOADor other methods to load the data into the table.
###Code
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) WITH DATA;
###Output
_____no_output_____
###Markdown
The SELECT statement can be very sophisticated. It can do any type of calculation or limit thedata to a subset of information.
###Code
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(LAST,PAY) AS
(
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE WORKDEPT='D11'
FETCH FIRST 3 ROWS ONLY
) WITH DATA;
###Output
_____no_output_____
###Markdown
You can also use the OFFSET clause as part of the FETCH FIRST ONLY to get chunks of data from theoriginal table.
###Code
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(DEPARTMENT, LASTNAME) AS
(SELECT WORKDEPT, LASTNAME FROM EMPLOYEE
OFFSET 5 ROWS
FETCH FIRST 10 ROWS ONLY
) WITH DATA;
SELECT * FROM AS_EMP;
###Output
_____no_output_____ |
tutorials/data_management/upload_and_manage_data_and_metadata/chapter.ipynb | ###Markdown
Upload & Manage Data & Metadata Upload specific files When you have specific files you want to upload, you can upload them all into a dataset using this script:
###Code
import dtlpy as dl
if dl.token_expired():
dl.login()
project = dl.projects.get(project_name='project_name')
dataset = project.datasets.get(dataset_name='dataset_name')
dataset.items.upload(local_path=[r'C:/home/project/images/John Morris.jpg',
r'C:/home/project/images/John Benton.jpg',
r'C:/home/project/images/Liu Jinli.jpg'],
remote_path='/folder_name') # Remote path is optional, images will go to the main directory by default
###Output
_____no_output_____
###Markdown
Upload all files in a folder If you want to upload all files from a folder, you can do that by just specifying the folder name:
###Code
import dtlpy as dl
if dl.token_expired():
dl.login()
project = dl.projects.get(project_name='project_name')
dataset = project.datasets.get(dataset_name='dataset_name')
dataset.items.upload(local_path=r'C:/home/project/images',
remote_path='/folder_name') # Remote path is optional, images will go to the main directory by default
###Output
_____no_output_____
###Markdown
Upload items from URL link You can provide Dataloop with the link to the item, and not necessarily the item itself.
###Code
dataset = project.datasets.get(dataset_name='dataset_name')
url_path = 'http://ww.some_website/beautiful_flower.jpg'
# Create link
link = dl.UrlLink(ref=url_path, mimetype='image', name='file_name.jpg')
# Upload link
item = dataset.items.upload(local_path=link)
###Output
_____no_output_____
###Markdown
You can open an item uploaded to Dataloop by opening it in a viewer.
###Code
show
item.open_in_web()
###Output
_____no_output_____
###Markdown
Additional upload options Additional upload options include using buffer, pillow, openCV, and NdArray - see our complete documentation for code examples. Upload Items and Annotations Metadata You can upload items as a table using a pandas data frame that will let you upload items with info (annotations, metadata such as confidence, filename, etc.) attached to it.
###Code
import pandas
import dtlpy as dl
dataset = dl.datasets.get(dataset_id='id') # Get dataset
to_upload = list()
# First item and info attached:
to_upload.append({'local_path': r"E:\TypesExamples\000000000064.jpg", # Item file path
'local_annotations_path': r"E:\TypesExamples\000000000776.json", # Annotations file path
'remote_path': "/first", # Dataset folder to upload the item to
'remote_name': 'f.jpg', # Dataset folder name
'item_metadata': {'user': {'dummy': 'fir'}}}) # Added user metadata
# Second item and info attached:
to_upload.append({'local_path': r"E:\TypesExamples\000000000776.jpg", # Item file path
'local_annotations_path': r"E:\TypesExamples\000000000776.json", # Annotations file path
'remote_path': "/second", # Dataset folder to upload the item to
'remote_name': 's.jpg', # Dataset folder name
'item_metadata': {'user': {'dummy': 'sec'}}}) # Added user metadata
df = pandas.DataFrame(to_upload) # Make data into table
items = dataset.items.upload(local_path=df,
overwrite=True) # Upload table to platform
###Output
_____no_output_____ |
NycAutoCollisions2015.ipynb | ###Markdown
###Code
!pip install streamlit as st
!pip install yfinance as yfin
import streamlit as st
import pandas as pd
import numpy as np
import pydeck as pdk
import plotly.express as px
DATE_TIME = "date/time"
DATA_URL = pd.read_csv('https://github.com/RCML2016/NycAutoCollisions_2015/blob/master/NycAutoCollisions.zip?raw=true',compression='zip',sep=',')
st.title("Motor Vehicle Collisions in New York City")
st.markdown("This application is a Streamlit dashboard that can be used "
"to analyze motor vehicle collisions in NYC 🗽💥🚗")
@st.cache(persist=True)
def load_data(nrows):
data = pd.read_csv(DATA_URL, nrows=nrows, parse_dates=[['CRASH_DATE', 'CRASH_TIME']])
data.dropna(subset=['LATITUDE', 'LONGITUDE'], inplace=True)
lowercase = lambda x: str(x).lower()
data.rename(lowercase, axis="columns", inplace=True)
data.rename(columns={"crash_date_crash_time": "date/time"}, inplace=True)
#data = data[['date/time', 'latitude', 'longitude']]
return data
data = load_data(15000)
data[['latitude','longitude']].to_csv('lat_long.csv', index=False)
st.header("Where are the most people injured in NYC?")
injured_people = st.slider("Number of persons injured in vehicle collisions", 0, 19)
st.map(data.query("injured_persons >= @injured_people")[["latitude", "longitude"]].dropna(how="any"))
st.header("How many collisions occur during a given time of day?")
hour = st.slider("Hour to look at", 0, 23)
original_data = data
data = data[data[DATE_TIME].dt.hour == hour]
st.markdown("Vehicle collisions between %i:00 and %i:00" % (hour, (hour + 1) % 24))
midpoint = (np.average(data["latitude"]), np.average(data["longitude"]))
st.write(pdk.Deck(
map_style="mapbox://styles/mapbox/light-v9",
initial_view_state={
"latitude": midpoint[0],
"longitude": midpoint[1],
"zoom": 11,
"pitch": 50,
},
layers=[
pdk.Layer(
"HexagonLayer",
data=data[['date/time', 'latitude', 'longitude']],
get_position=["longitude", "latitude"],
auto_highlight=True,
radius=100,
extruded=True,
pickable=True,
elevation_scale=4,
elevation_range=[0, 1000],
),
],
))
if st.checkbox("Show raw data", False):
st.subheader("Raw data by minute between %i:00 and %i:00" % (hour, (hour + 1) % 24))
st.write(data)
st.subheader("Breakdown by minute between %i:00 and %i:00" % (hour, (hour + 1) % 24))
filtered = data[
(data[DATE_TIME].dt.hour >= hour) & (data[DATE_TIME].dt.hour < (hour + 1))
]
hist = np.histogram(filtered[DATE_TIME].dt.minute, bins=60, range=(0, 60))[0]
chart_data = pd.DataFrame({"minute": range(60), "crashes": hist})
fig = px.bar(chart_data, x='minute', y='crashes', hover_data=['minute', 'crashes'], height=400)
st.write(fig)
st.header("Top 5 dangerous streets by affected class")
select = st.selectbox('Affected class', ['Pedestrians', 'Cyclists', 'Motorists'])
if select == 'Pedestrians':
st.write(original_data.query("injured_pedestrians >= 1")[["on_street_name", "injured_pedestrians"]].sort_values(by=['injured_pedestrians'], ascending=False).dropna(how="any")[:5])
elif select == 'Cyclists':
st.write(original_data.query("injured_cyclists >= 1")[["on_street_name", "injured_cyclists"]].sort_values(by=['injured_cyclists'], ascending=False).dropna(how="any")[:5])
else:
st.write(original_data.query("injured_motorists >= 1")[["on_street_name", "injured_motorists"]].sort_values(by=['injured_motorists'], ascending=False).dropna(how="any")[:5])
###Output
Requirement already satisfied: streamlit in /usr/local/lib/python3.6/dist-packages (0.67.0)
Requirement already satisfied: as in /usr/local/lib/python3.6/dist-packages (0.1)
Requirement already satisfied: st in /usr/local/lib/python3.6/dist-packages (0.0.8)
Requirement already satisfied: pandas>=0.21.0 in /usr/local/lib/python3.6/dist-packages (from streamlit) (1.0.5)
Requirement already satisfied: watchdog in /usr/local/lib/python3.6/dist-packages (from streamlit) (0.10.3)
Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.6/dist-packages (from streamlit) (7.1.2)
Requirement already satisfied: python-dateutil in /usr/local/lib/python3.6/dist-packages (from streamlit) (2.8.1)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from streamlit) (2.23.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from streamlit) (1.18.5)
Requirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.6/dist-packages (from streamlit) (3.12.4)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from streamlit) (20.4)
Requirement already satisfied: altair>=3.2.0 in /usr/local/lib/python3.6/dist-packages (from streamlit) (4.1.0)
Requirement already satisfied: enum-compat in /usr/local/lib/python3.6/dist-packages (from streamlit) (0.0.3)
Requirement already satisfied: pydeck>=0.1.dev5 in /usr/local/lib/python3.6/dist-packages (from streamlit) (0.5.0b1)
Requirement already satisfied: toml in /usr/local/lib/python3.6/dist-packages (from streamlit) (0.10.1)
Requirement already satisfied: base58 in /usr/local/lib/python3.6/dist-packages (from streamlit) (2.0.1)
Requirement already satisfied: botocore>=1.13.44 in /usr/local/lib/python3.6/dist-packages (from streamlit) (1.17.59)
Requirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from streamlit) (1.14.59)
Requirement already satisfied: validators in /usr/local/lib/python3.6/dist-packages (from streamlit) (0.18.1)
Requirement already satisfied: blinker in /usr/local/lib/python3.6/dist-packages (from streamlit) (1.4)
Requirement already satisfied: pyarrow in /usr/local/lib/python3.6/dist-packages (from streamlit) (0.14.1)
Requirement already satisfied: cachetools>=4.0 in /usr/local/lib/python3.6/dist-packages (from streamlit) (4.1.1)
Requirement already satisfied: tzlocal in /usr/local/lib/python3.6/dist-packages (from streamlit) (1.5.1)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.6/dist-packages (from streamlit) (7.0.0)
Requirement already satisfied: tornado>=5.0 in /usr/local/lib/python3.6/dist-packages (from streamlit) (5.1.1)
Requirement already satisfied: astor in /usr/local/lib/python3.6/dist-packages (from streamlit) (0.8.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.21.0->streamlit) (2018.9)
Requirement already satisfied: pathtools>=0.1.1 in /usr/local/lib/python3.6/dist-packages (from watchdog->streamlit) (0.1.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil->streamlit) (1.15.0)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->streamlit) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->streamlit) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->streamlit) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->streamlit) (2020.6.20)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.0->streamlit) (50.3.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->streamlit) (2.4.7)
Requirement already satisfied: jsonschema in /usr/local/lib/python3.6/dist-packages (from altair>=3.2.0->streamlit) (2.6.0)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from altair>=3.2.0->streamlit) (2.11.2)
Requirement already satisfied: toolz in /usr/local/lib/python3.6/dist-packages (from altair>=3.2.0->streamlit) (0.10.0)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.6/dist-packages (from altair>=3.2.0->streamlit) (0.3)
Requirement already satisfied: ipykernel>=5.1.2; python_version >= "3.4" in /usr/local/lib/python3.6/dist-packages (from pydeck>=0.1.dev5->streamlit) (5.3.4)
Requirement already satisfied: ipywidgets>=7.0.0 in /usr/local/lib/python3.6/dist-packages (from pydeck>=0.1.dev5->streamlit) (7.5.1)
Requirement already satisfied: traitlets>=4.3.2 in /usr/local/lib/python3.6/dist-packages (from pydeck>=0.1.dev5->streamlit) (4.3.3)
Requirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore>=1.13.44->streamlit) (0.15.2)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from botocore>=1.13.44->streamlit) (0.10.0)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3->streamlit) (0.3.3)
Requirement already satisfied: decorator>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from validators->streamlit) (4.4.2)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->altair>=3.2.0->streamlit) (1.1.1)
Requirement already satisfied: jupyter-client in /usr/local/lib/python3.6/dist-packages (from ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (5.3.5)
Requirement already satisfied: ipython>=5.0.0 in /usr/local/lib/python3.6/dist-packages (from ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (5.5.0)
Requirement already satisfied: nbformat>=4.2.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (5.0.7)
Requirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (3.5.1)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.3.2->pydeck>=0.1.dev5->streamlit) (0.2.0)
Requirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.6/dist-packages (from jupyter-client->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (19.0.2)
Requirement already satisfied: jupyter-core>=4.6.0 in /usr/local/lib/python3.6/dist-packages (from jupyter-client->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (4.6.3)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython>=5.0.0->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (0.8.1)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython>=5.0.0->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (0.7.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython>=5.0.0->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (4.8.0)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython>=5.0.0->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (2.6.1)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython>=5.0.0->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (1.0.18)
Requirement already satisfied: notebook>=4.4.1 in /usr/local/lib/python3.6/dist-packages (from widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (5.3.1)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != "win32"->ipython>=5.0.0->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (0.6.0)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython>=5.0.0->ipykernel>=5.1.2; python_version >= "3.4"->pydeck>=0.1.dev5->streamlit) (0.2.5)
Requirement already satisfied: terminado>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (0.8.3)
Requirement already satisfied: nbconvert in /usr/local/lib/python3.6/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (5.6.1)
Requirement already satisfied: Send2Trash in /usr/local/lib/python3.6/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (1.5.0)
Requirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (0.4.4)
Requirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (3.1.5)
Requirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (0.6.0)
Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (0.8.4)
Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (1.4.2)
Requirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->pydeck>=0.1.dev5->streamlit) (0.5.1)
Collecting yfinance
Using cached https://files.pythonhosted.org/packages/c2/31/8b374a12b90def92a4e27d0fc595fc43635f395984e36a075244d98bd265/yfinance-0.1.54.tar.gz
Requirement already satisfied: as in /usr/local/lib/python3.6/dist-packages (0.1)
[31mERROR: Could not find a version that satisfies the requirement yfin (from versions: none)[0m
[31mERROR: No matching distribution found for yfin[0m
|
autoencoder/autoencoder_test_data_anomaly_detection.ipynb | ###Markdown
Example of using autoencoders to predict typical and anomaly data
###Code
import numpy as np
import keras as K
import matplotlib.pyplot as plt
from datetime import datetime
import os
from astropy.io import fits
np.random.seed(1)
# open fits file
hdu = fits.open('/home/sbenzvi/test_spectra_autoencoder.fits')
# grab x data
x_data = hdu['NORM'].data
norm_x = (x_data - np.min(x_data)) /(np.max(x_data)-np.min(x_data))
# split training set and test set
x_train = norm_x[-9000:]
x_test = norm_x[:1000]
# define autoencoder 1000-500-250-500-1000 CNN, tanh activation between nodes.
# Note have to batch-normalize if using relu
my_init = K.initializers.glorot_uniform(seed=1)
autoenc = K.models.Sequential()
autoenc.add(K.layers.Dense(input_dim=1000, units=500,
activation='tanh', kernel_initializer=my_init))
autoenc.add(K.layers.Dense(units=250,
activation='tanh', kernel_initializer=my_init))
autoenc.add(K.layers.Dense(units=500,
activation='tanh', kernel_initializer=my_init))
autoenc.add(K.layers.Dense(units=1000,
activation='tanh', kernel_initializer=my_init))
# Compile with Adam Optimizer
simple_adam = K.optimizers.adam()
autoenc.compile(loss='mean_squared_error',
optimizer=simple_adam)
# create checkpoint directory
ct = datetime.now().strftime("%m-%d_%H:%M:%S")
basedir = f'/scratch/dgandhi/desi/time-domain-bkup/models/autoencoder/run({ct})/'
path= basedir+'weights.Ep{epoch:02d}-ValLoss{val_loss:.2f}.hdf5'
os.makedirs(basedir)
checkpoint = K.callbacks.ModelCheckpoint(path, monitor='val_loss', verbose=1,
save_best_only=True, mode='max',)
# start training
max_epochs = 100
h = autoenc.fit(x_train, x_train, batch_size=30, validation_data=(x_test, x_test),
epochs=max_epochs, verbose=1, callbacks=[checkpoint])
###Output
Train on 9000 samples, validate on 1000 samples
Epoch 1/100
9000/9000 [==============================] - 7s 735us/step - loss: 0.0197 - val_loss: 0.0166
Epoch 00001: val_loss improved from -inf to 0.01659, saving model to /scratch/dgandhi/desi/time-domain-bkup/models/autoencoder/run(06-26_15:50:56)/weights.Ep01-ValLoss0.02.hdf5
Epoch 2/100
9000/9000 [==============================] - 6s 644us/step - loss: 0.0163 - val_loss: 0.0166
Epoch 00002: val_loss improved from 0.01659 to 0.01660, saving model to /scratch/dgandhi/desi/time-domain-bkup/models/autoencoder/run(06-26_15:50:56)/weights.Ep02-ValLoss0.02.hdf5
Epoch 3/100
9000/9000 [==============================] - 6s 642us/step - loss: 0.0163 - val_loss: 0.0165
Epoch 00003: val_loss did not improve from 0.01660
Epoch 4/100
9000/9000 [==============================] - 6s 641us/step - loss: 0.0162 - val_loss: 0.0163
Epoch 00004: val_loss did not improve from 0.01660
Epoch 5/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0159 - val_loss: 0.0159
Epoch 00005: val_loss did not improve from 0.01660
Epoch 6/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0155 - val_loss: 0.0155
Epoch 00006: val_loss did not improve from 0.01660
Epoch 7/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0149 - val_loss: 0.0150
Epoch 00007: val_loss did not improve from 0.01660
Epoch 8/100
9000/9000 [==============================] - 6s 644us/step - loss: 0.0144 - val_loss: 0.0146
Epoch 00008: val_loss did not improve from 0.01660
Epoch 9/100
9000/9000 [==============================] - 6s 643us/step - loss: 0.0138 - val_loss: 0.0140
Epoch 00009: val_loss did not improve from 0.01660
Epoch 10/100
9000/9000 [==============================] - 6s 656us/step - loss: 0.0133 - val_loss: 0.0137
Epoch 00010: val_loss did not improve from 0.01660
Epoch 11/100
9000/9000 [==============================] - 6s 649us/step - loss: 0.0129 - val_loss: 0.0134
Epoch 00011: val_loss did not improve from 0.01660
Epoch 12/100
9000/9000 [==============================] - 6s 662us/step - loss: 0.0125 - val_loss: 0.0130
Epoch 00012: val_loss did not improve from 0.01660
Epoch 13/100
9000/9000 [==============================] - 6s 646us/step - loss: 0.0122 - val_loss: 0.0128
Epoch 00013: val_loss did not improve from 0.01660
Epoch 14/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0119 - val_loss: 0.0125
Epoch 00014: val_loss did not improve from 0.01660
Epoch 15/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0117 - val_loss: 0.0124
Epoch 00015: val_loss did not improve from 0.01660
Epoch 16/100
9000/9000 [==============================] - 6s 660us/step - loss: 0.0115 - val_loss: 0.0122
Epoch 00016: val_loss did not improve from 0.01660
Epoch 17/100
9000/9000 [==============================] - 6s 666us/step - loss: 0.0113 - val_loss: 0.0120
Epoch 00017: val_loss did not improve from 0.01660
Epoch 18/100
9000/9000 [==============================] - 6s 655us/step - loss: 0.0111 - val_loss: 0.0119
Epoch 00018: val_loss did not improve from 0.01660
Epoch 19/100
9000/9000 [==============================] - 6s 651us/step - loss: 0.0110 - val_loss: 0.0118
Epoch 00019: val_loss did not improve from 0.01660
Epoch 20/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0109 - val_loss: 0.0117
Epoch 00020: val_loss did not improve from 0.01660
Epoch 21/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0108 - val_loss: 0.0117
Epoch 00021: val_loss did not improve from 0.01660
Epoch 22/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0107 - val_loss: 0.0115
Epoch 00022: val_loss did not improve from 0.01660
Epoch 23/100
9000/9000 [==============================] - 6s 649us/step - loss: 0.0106 - val_loss: 0.0115
Epoch 00023: val_loss did not improve from 0.01660
Epoch 24/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0105 - val_loss: 0.0113
Epoch 00024: val_loss did not improve from 0.01660
Epoch 25/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0105 - val_loss: 0.0113
Epoch 00025: val_loss did not improve from 0.01660
Epoch 26/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0104 - val_loss: 0.0112
Epoch 00026: val_loss did not improve from 0.01660
Epoch 27/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0104 - val_loss: 0.0112
Epoch 00027: val_loss did not improve from 0.01660
Epoch 28/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0103 - val_loss: 0.0111
Epoch 00028: val_loss did not improve from 0.01660
Epoch 29/100
9000/9000 [==============================] - 6s 644us/step - loss: 0.0102 - val_loss: 0.0111
Epoch 00029: val_loss did not improve from 0.01660
Epoch 30/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0102 - val_loss: 0.0110
Epoch 00030: val_loss did not improve from 0.01660
Epoch 31/100
9000/9000 [==============================] - 6s 656us/step - loss: 0.0102 - val_loss: 0.0110
Epoch 00031: val_loss did not improve from 0.01660
Epoch 32/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0101 - val_loss: 0.0111
Epoch 00032: val_loss did not improve from 0.01660
Epoch 33/100
9000/9000 [==============================] - 6s 662us/step - loss: 0.0101 - val_loss: 0.0110
Epoch 00033: val_loss did not improve from 0.01660
Epoch 34/100
9000/9000 [==============================] - 6s 650us/step - loss: 0.0101 - val_loss: 0.0110
Epoch 00034: val_loss did not improve from 0.01660
Epoch 35/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0100 - val_loss: 0.0109
Epoch 00035: val_loss did not improve from 0.01660
Epoch 36/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0100 - val_loss: 0.0110
Epoch 00036: val_loss did not improve from 0.01660
Epoch 37/100
9000/9000 [==============================] - 6s 646us/step - loss: 0.0100 - val_loss: 0.0109
Epoch 00037: val_loss did not improve from 0.01660
Epoch 38/100
9000/9000 [==============================] - 6s 649us/step - loss: 0.0100 - val_loss: 0.0109
Epoch 00038: val_loss did not improve from 0.01660
Epoch 39/100
9000/9000 [==============================] - 6s 664us/step - loss: 0.0100 - val_loss: 0.0109
Epoch 00039: val_loss did not improve from 0.01660
Epoch 40/100
9000/9000 [==============================] - 6s 673us/step - loss: 0.0100 - val_loss: 0.0109
Epoch 00040: val_loss did not improve from 0.01660
Epoch 41/100
9000/9000 [==============================] - 6s 646us/step - loss: 0.0099 - val_loss: 0.0109
Epoch 00041: val_loss did not improve from 0.01660
Epoch 42/100
9000/9000 [==============================] - 6s 644us/step - loss: 0.0099 - val_loss: 0.0108
Epoch 00042: val_loss did not improve from 0.01660
Epoch 43/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0099 - val_loss: 0.0109
Epoch 00043: val_loss did not improve from 0.01660
Epoch 44/100
9000/9000 [==============================] - 6s 645us/step - loss: 0.0099 - val_loss: 0.0108
Epoch 00044: val_loss did not improve from 0.01660
Epoch 45/100
9000/9000 [==============================] - 6s 644us/step - loss: 0.0099 - val_loss: 0.0108
Epoch 00045: val_loss did not improve from 0.01660
Epoch 46/100
9000/9000 [==============================] - 6s 644us/step - loss: 0.0099 - val_loss: 0.0108
Epoch 00046: val_loss did not improve from 0.01660
Epoch 47/100
9000/9000 [==============================] - 6s 644us/step - loss: 0.0099 - val_loss: 0.0108
Epoch 00047: val_loss did not improve from 0.01660
Epoch 48/100
9000/9000 [==============================] - 6s 643us/step - loss: 0.0099 - val_loss: 0.0108
Epoch 00048: val_loss did not improve from 0.01660
Epoch 49/100
9000/9000 [==============================] - 6s 646us/step - loss: 0.0099 - val_loss: 0.0107
Epoch 00049: val_loss did not improve from 0.01660
Epoch 50/100
9000/9000 [==============================] - 6s 647us/step - loss: 0.0099 - val_loss: 0.0108
Epoch 00050: val_loss did not improve from 0.01660
Epoch 51/100
###Markdown
Function of Loss over Epochs
###Code
plt.subplot(1,2,1)
plt.title('Loss over Epochs')
plt.plot(h.history['loss'])
plt.subplot(1,2,2)
plt.title('Val Loss over Epochs')
plt.plot(h.history['val_loss'])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Example of Predicting a Typical Data Vector
###Code
# Predict validation set again
x_predict = autoenc.predict(x=x_test, batch_size=30)
plt.plot(x_test[19], label='Input')
plt.plot(x_predict[19], label='Prediction')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Example of Predicting an Anomaly Vector
###Code
# Get anomaly data
x_anamolies = hdu['ANMNOISY'].data
norm_x_anom = (x_anamolies - np.min(x_anamolies)) /(np.max(x_anamolies)-np.min(x_anamolies))
# Predict anomalies
predict_x_anom = autoenc.predict(norm_x_anom, batch_size=30)
plt.subplot(2,1,1)
plt.plot(norm_x_anom[1], label='Input')
plt.plot(predict_x_anom[1], label='Prediction')
plt.legend()
plt.subplot(2,1,2)
plt.plot(x_anamolies[0], label='un-normalized Input')
plt.legend()
plt.show()
###Output
_____no_output_____ |
we_rate_dogs.ipynb | ###Markdown
We Rate Dogs: Data Cleaning & AnalysisRichard Gathering Data
###Code
#Import packages used.
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import json
import requests
import tweepy
from tweepy import OAuthHandler
from timeit import default_timer as timer
# Load file from folder
tweets = pd.read_csv('twitter-archive-enhanced.csv')
# Assign URL to response using requests
url = ' https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv'
response = requests.get(url)
# Download to current folder
open(url.split('/')[-1], 'wb').write(response.content)
predictions = pd.read_csv(url.split('/')[-1], sep='\t')
#Twitter API
# Keys are hidden. You can obtain your own from twitter or just use the 'tweet_json.txt' file
api_key = 'HIDDEN'
api_secret = 'HIDDEN'
access_token = 'HIDDEN'
access_secret = 'HIDDEN'
auth = tweepy.OAuthHandler(api_key, api_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth, wait_on_rate_limit= True, wait_on_rate_limit_notify=True)
# Query Twitter's API for JSON data for each tweet ID in the Twitter archive
tweet_ids = list(tweets.tweet_id)
twitter_query = {}
fails = {}
# Appending dictionaries to twitter_query
start = timer() #Timing how long the loop takes
for id in tweet_ids:
try:
tweet = api.get_status(id, tweet_mode='extended')
twitter_query[id] = tweet._json
except tweepy.TweepError as error:
fails[id] = error
pass
end = timer()
print(end - start)
# Loading JSON into file
with open('tweet_json.txt', 'w') as file:
json.dump(twitter_query , file)
with open('tweet_json.txt') as json_file:
data = json.load(json_file)
# Create Dictionary twitter_data from file
twitter_data = []
for id in data.keys():
retweets = data[id]['retweet_count']
likes = data[id]['favorite_count']
twitter_data.append({'tweet_id' : id,
"retweets" : retweets,
"likes" : likes})
# Turn into pandas df
twitter_data = pd.DataFrame(twitter_data)
###Output
_____no_output_____
###Markdown
Assessing Data
###Code
# Retweets are creating extra ratings.
tweets[tweets.retweeted_status_id.notnull()]
# Null Values, Incorrect Dtypes.
tweets.info()
# Notice the ratings under 10 (Does not comply with rating standards)
tweets.rating_numerator.value_counts()
# Notice the denominators above 10
tweets.rating_denominator.value_counts()
# Some dogs have 'None' and 'a' as a name
tweets.name.value_counts()
predictions.info()
# Wierd dog predictions: paper_towel, orange
predictions.p1.value_counts
#doggo,floofer,pupper,puppo columns are redundant
tweets.head(3)
#Multiple prediction columns
predictions.head()
###Output
_____no_output_____
###Markdown
Cleaning Data
###Code
# Creating copies of the dataframes.
tweets_clean = tweets.copy()
predictions_clean = predictions.copy()
twitter_clean = twitter_data.copy()
# Removing columns where in_reply_to_status_id has values
tweets_clean = tweets_clean[tweets_clean['in_reply_to_status_id'].isnull()]
# Dropped unwanted columns to remove null values from dataframe
tweets_clean = tweets_clean.drop(['in_reply_to_status_id','in_reply_to_user_id',
'retweeted_status_id','retweeted_status_user_id',
'retweeted_status_timestamp', 'expanded_urls'],axis = 1)
# Changed incorrect datatypes
tweets_clean['tweet_id'] = tweets_clean['tweet_id'].astype(str)
predictions_clean['tweet_id'] =predictions_clean['tweet_id'].astype(str)
tweets_clean['timestamp'] = pd.to_datetime(tweets_clean['timestamp'])
# Removed rows with denominator not 10
tweets_clean = tweets_clean.query('rating_denominator == 10')
# Removed rows with numerator less than 10
tweets_clean = tweets_clean.query('rating_numerator >= 10')
# Removed dogs with names 'None' and 'a'
tweets_clean = tweets_clean[(tweets_clean['name'] != 'None') & (tweets_clean['name'] != 'a')]
# Dropped more columns from dataframes
tweets_clean = tweets_clean.drop(['source', 'text'],axis = 1)
predictions_clean = predictions_clean.drop('img_num', axis = 1)
# Create prediction_count, prediction, confidence, and prediction_correct columns
# to replace the previous columns to make the dataframe look nicer
melt1 = predictions_clean.melt(id_vars=['tweet_id','jpg_url'],value_vars=['p1','p2','p3'],
var_name = "prediction_count",value_name = "predictions")
melt2 = predictions_clean.melt(id_vars=['tweet_id','jpg_url'], value_vars=['p1_conf','p2_conf','p3_conf'],
value_name = "confidence")
melt3 = predictions_clean.melt(id_vars=['tweet_id','jpg_url'], value_vars=['p1_dog','p2_dog','p3_dog'],
value_name = "prediction_correct")
predictions_clean = pd.concat([melt1,melt2["confidence"],melt3['prediction_correct']],axis=1)
# Loop to remove rows that contain certain key-words in the predictions column
wack_predictions = ["orange", "banana", "traffic_light","paper_towel","spatula","Cardigan",
"academic_gown",'trench_coat','cheeseburger','paintbrush']
for nonsense in wack_predictions:
predictions_clean = predictions_clean[predictions_clean['predictions'] != nonsense]
# Remove doggo, floofer, pupper, puppo columns and create one stage column that contains all the information
tweets_cleanish = tweets_clean.melt(id_vars=['tweet_id','timestamp','rating_numerator','rating_denominator','name'],
value_name = 'stage')
tweets_cleanish = tweets_cleanish[tweets_cleanish['stage'] != 'None']
tweets_clean = tweets_clean.merge(tweets_cleanish[['tweet_id','stage']],
how = 'left').drop(['doggo','floofer','pupper','puppo'], axis = 1)
tweets_clean['stage'] = tweets_clean['stage'].fillna("None")
# Merge tweets_clean and twitter_clean dataframes
tweets_clean = tweets_clean.merge(twitter_clean, on = ['tweet_id'], how = 'inner')
# Saving cleaned files for later
tweets_clean.to_csv('twitter_archive_master.csv', index = False)
predictions_clean.to_csv('image_predictions_clean.csv', index = False)
###Output
_____no_output_____
###Markdown
Analysis
###Code
# Read the files that were saved earlier
twitter = pd.read_csv('twitter_archive_master.csv')
predict = pd.read_csv('image_predictions_clean.csv')
# Creating values low_rating and high_rating
twitter['rating'] = twitter.rating_numerator/twitter.rating_denominator
mean = twitter['rating'].mean()
low_rating = twitter[twitter['rating']< mean]
high_rating = twitter[twitter['rating'] >= mean]
# Plot bar graph
plt.bar([1,2],[low_rating.retweets.mean(),high_rating.retweets.mean()], color = 'orange')
plt.title("Average Amount of Retweets")
plt.ylabel("Number of Retweets")
plt.xlabel("Average Rating");
# Location for the plots
location = [1,2,3,4,5]
# Creating df with average of likes grouped by stage
avg_likes = twitter.groupby('stage').likes.mean().sort_values(ascending = False).reset_index()
likes = avg_likes['likes']
stage = avg_likes['stage']
# Plot bar graph
plt.bar(x = location,height = likes,tick_label = stage,color = 'pink')
plt.title('Average Amount of Likes')
plt.ylabel('Likes')
plt.xlabel('Dog Stage');
# Merging master dataframe with predictions dataframe
df_merge = twitter.merge(predict)
# Create new df with probability of correct predictions grouped by stage
prob_correct = df_merge.groupby(['stage']).prediction_correct.mean().sort_values(ascending = False).reset_index()
correct = prob_correct['prediction_correct']
group = prob_correct['stage']
# Plot bar chart using the columns from prob_correct
plt.bar(location, height = correct, tick_label = group, color = 'black')
plt.title('Average Correct Predictions')
plt.ylabel('Percentage of Correct Predictions')
plt.xlabel('Dog Stage');
###Output
_____no_output_____ |
notebooks/rlExperiments.ipynb | ###Markdown
General StatsGathering some general statistics about states from random actions
###Code
state_dict = False
state_dict, rand_game_df = fl.run_n_games(20000, fl.rand_strategy, state_dict)
fl.print_game_stats(rand_game_df)
fl.plot_winning_percentage(rand_game_df)
###Output
_____no_output_____
###Markdown
Q-Learning via table
###Code
state_dict = False
state_dict, q_table_game_df = fl.run_n_games(1000, fl.FrozenLakeQTable(), state_dict)
fl.print_game_stats(q_table_game_df)
fl.plot_winning_percentage(q_table_game_df, gap = 10)
fl.plot_game_length(q_table_game_df, window = 100)
# run experiments
gamma_options = [0.95, 0.9, 0.8, 0.5]
learning_rate_options = [0.1, 0.5, 0.9, 1]
damper_options = [0.01, 0.001, 0.0001]
game_dfs = []
for gamma in gamma_options:
for learning_rate in learning_rate_options:
for damper in damper_options:
for exp_run in [0, 1, 2, 3, 4]:
_, q_table_game_df = fl.run_n_games(2000, fl.FrozenLakeQTable(gamma, learning_rate, damper), False, use_tqdm = False)
q_table_game_df["gamma"] = gamma
q_table_game_df["learning_rate"] = learning_rate
q_table_game_df["damper"] = damper
q_table_game_df["exp_run"] = exp_run
game_dfs.append(q_table_game_df)
exp_game_df = pd.concat(game_dfs)
exp_game_gb = exp_game_df[["gamma", "learning_rate", "damper", "exp_run", "end_game", "reward"]]\
.groupby(["gamma", "learning_rate", "damper", "exp_run"], as_index = False)\
.agg(sum)
exp_game_gb["win_pcnt"] = 100.0 * exp_game_gb.reward / exp_game_gb.end_game
exp_game_stats = exp_game_gb.groupby(["gamma", "learning_rate", "damper"]).mean()
exp_game_stats.sort_values("win_pcnt", ascending = False)
###Output
_____no_output_____
###Markdown
Q learning with neural networks
###Code
flnn = fl.FrozenLakeNN()
state_dict = False
state_dict, q_nn_game_df = fl.run_n_games(2000, flnn, state_dict)
flnn.stop_sess()
fl.print_game_stats(q_nn_game_df)
fl.plot_winning_percentage(q_nn_game_df, gap = 10)
fl.plot_game_length(q_nn_game_df, window = 100)
###Output
_____no_output_____ |
mimic-iv/projects/notebooks/05b_export_raw_data.ipynb | ###Markdown
Exports raw data from mimic-iv databaseOnly the following care unit patients are exported:- Coronary Care unit (CCU)- Cardiac Vascular Intensive Care unit (CVICU)
###Code
from projects.utils import *
from projects.common import *
from typing import Tuple
from tqdm import tqdm
# from tqdm.notebook import tqdm
from multiprocessing import Pool, RLock
from configobj import ConfigObj
import numpy as np
import getpass
import json
import math
import os
import psycopg2
import pandas as pd
import time
import matplotlib.pyplot as plt
%matplotlib inline
def connect_db():
db_dir = os.path.abspath('') + "/../../../db"
return connect_to_database(db_dir)
connect_db()
# def merge_lab_df(df1: pd.DataFrame, df2: pd.DataFrame):
# target_cols = ['subject_id', 'hadm_id', 'charttime', 'specimen_id']
# df1 = df1.sort_values(target_cols)
# df2 = df2.sort_values(target_cols)
# df2 = df2.loc[:, ~df2.columns.isin(target_cols)]
# return pd.concat([df1, df2], axis=1)
def get_db_table_as_df_ext(_schema_type: str, _table: str,
_chunk: int = 10000):
"""Wrapper for generating dataframe from the db. """
(query_schema_core,
query_schema_hosp,
query_schema_icu,
query_schema_derived,
conn) = connect_db()
if _schema_type == 'core':
_query_schema = query_schema_core
if _schema_type == 'hosp':
_query_schema = query_schema_hosp
if _schema_type == 'icu':
_query_schema = query_schema_icu
if _schema_type == 'derived':
_query_schema = query_schema_derived
else:
_query_schema = None
df_iter, num_entries = get_database_table_as_dataframe(
conn, _query_schema, _table, _chunk_size=_chunk*MP_NUM_PROCESSES)
num_entries = math.ceil(num_entries / (_chunk*MP_NUM_PROCESSES))
return df_iter, num_entries
def get_info_save_path(data_dir: str, stay_id: int):
return os.path.join(data_dir, 'info_'+str(stay_id)+'.dsv')
def get_data_save_path(data_dir: str, stay_id: int):
return os.path.join(data_dir, 'data_'+str(stay_id)+'.dsv')
def save_data_dsv_ext(save_path: str, data: dict) -> None:
save_dsv(save_path, pd.DataFrame(data))
def create_dummy_files_func(export_dir, _custom_icustays_list, pid):
for icustay_id in tqdm(_custom_icustays_list):
# if icustay_id != 39060235:
# continue
save_path = get_data_save_path(export_dir, icustay_id)
assert not os.path.exists(save_path)
save_data_dsv_ext(save_path, pd.DataFrame(DataTable().data))
def create_dummy_files(export_dir: str, _custom_icustays_list: list):
""" Create empty dummy .dsv files."""
parallel_processing(create_dummy_files_func, MP_NUM_PROCESSES,
export_dir, _custom_icustays_list)
print("Created dummy .dsv files.")
def split_df(df: pd.DataFrame, num_processes: int = 8):
interval = math.ceil(len(df)/num_processes)
dfs = [df.iloc[interval*i:interval*(i+1)]
for i in range((num_processes-1))]
dfs += [df.iloc[interval*(num_processes-1):]]
return dfs
def parallel_processing_ext(_func,
_df_iter,
_num_entries: int,
_custom_icustays_list: list):
"""Wrapper for parallel processing. Sorts the dataframe based on
`sort_list` before running the `func`.
TODO: df should be splitted up based on `stay_id`, i.e. where all
the `stay_id` are assigned to the same process. If not may cause reading
error because this id determines which dsv file to read from. The current
hack is to create a large enough df chunk so that this error situation
will not occur.
"""
sort_list = ['subject_id', 'hadm_id', 'stay_id',
'charttime', 'starttime', 'endtime', ]
# mem_flag, df_mem = False, pd.DataFrame()
for df in tqdm(_df_iter, total=_num_entries):
if 'stay_id' in df.columns.tolist():
df = df[df.stay_id.isin(_custom_icustays_list)]
# df_mem = pd.concat([df_mem, df]) if mem_flag else df
# uc = df_mem.nunique(axis=1)
# if uc < MP_NUM_PROCESSES:
# mem_flag = True
# continue
# else:
# df = df_mem
# mem_flag, df_mem = False, pd.DataFrame()
df = df.sort_values(
by=[i for i in sort_list if i in df.columns.tolist()])
dfs = split_df(df, MP_NUM_PROCESSES)
parallel_processing(_func, MP_NUM_PROCESSES, dfs)
def export_func_factory(append_func, export_dir: str):
global export_func
def export_func(dfs, pid):
stay_id_mem = -1
subject_id_mem = -1
hadm_id_mem = -1
stay_ids_dt = {} # dict of DataTable
stay_ids_path = {} # dict of paths
save_path = None
save_flag = -1
df = dfs[0]
it = InfoTable()
dt = DataTable()
for df_i in df.iterrows():
df_row = df_i[1]
if 'stay_id' in df.columns.tolist():
save_flag = 0
stay_id = df_row['stay_id']
if stay_id_mem == -1:
# Initial loading of data
stay_id_mem = df_row['stay_id']
save_path = get_data_save_path(export_dir, stay_id)
dt.data = load_data_dsv(save_path)
elif df_row['stay_id'] != stay_id_mem:
# Only load data when the id changes.
# Saves the previous data first before loading.
save_path = get_data_save_path(export_dir, stay_id_mem)
save_data_dsv_ext(save_path, dt.data)
stay_id_mem = df_row['stay_id']
save_path = get_data_save_path(export_dir, stay_id)
dt.data = load_data_dsv(save_path)
dt = append_func(dt, df_row, df.columns.tolist())
else:
# elif 'hadm_id' not in df.columns.tolist():
save_flag = 1
sub_id = str(df_row['subject_id'])
if sub_id not in custom_icustays_dict:
continue
if subject_id_mem == -1:
# Initial loading of data
subject_id_mem = sub_id
for _, stay_ids in custom_icustays_dict[sub_id].items():
for stay_id in stay_ids:
save_path = get_data_save_path(export_dir, stay_id)
_dt = DataTable()
_dt.data = load_data_dsv(save_path)
stay_ids_dt[stay_id] = _dt
stay_ids_path[stay_id] = save_path
elif subject_id_mem != sub_id:
# Only load data when the id changes.
# Saves the previous data first before loading.
subject_id_mem = sub_id
for stay_id, stay_id_dt in stay_ids_dt.items():
save_data_dsv_ext(
stay_ids_path[stay_id], stay_id_dt.data)
stay_ids_dt, stay_ids_path = {}, {}
for _, stay_ids in custom_icustays_dict[sub_id].items():
for stay_id in stay_ids:
save_path = get_data_save_path(export_dir, stay_id)
_dt = DataTable()
_dt.data = load_data_dsv(save_path)
stay_ids_dt[stay_id] = _dt
stay_ids_path[stay_id] = save_path
for _, stay_ids in custom_icustays_dict[sub_id].items():
for stay_id in stay_ids:
info_save_path = get_info_save_path(
STRUCTURED_EXPORT_DIR, stay_id)
it.data = load_info_dsv(info_save_path)
icu_intime = it.data['value'][13]
icu_outtime = it.data['value'][14]
if pd.Timestamp(str(icu_intime)) <= df_row['charttime'] <= pd.Timestamp(str(icu_outtime)):
stay_ids_dt[stay_id] = append_func(
stay_ids_dt[stay_id], df_row, df.columns.tolist())
# else:
# save_flag = 2
# sub_id = str(df_row['subject_id'])
# hadm_id = str(df_row['hadm_id'])
# if sub_id not in custom_icustays_dict:
# continue
# if hadm_id not in custom_icustays_dict[sub_id]:
# continue
# stay_ids = custom_icustays_dict[sub_id][hadm_id]
# if subject_id_mem == -1 and hadm_id_mem == -1:
# # Initial loading of data
# subject_id_mem = sub_id
# hadm_id_mem = hadm_id
# for stay_id in stay_ids:
# save_path = get_data_save_path(export_dir, stay_id)
# _dt = DataTable()
# _dt.data = load_data_dsv(save_path)
# stay_ids_dt.append(_dt)
# stay_ids_path.append(save_path)
# elif subject_id_mem != sub_id or hadm_id_mem != hadm_id:
# # Only load data when the id changes.
# # Saves the previous data first before loading.
# subject_id_mem = sub_id
# hadm_id_mem = hadm_id
# for stay_id_dt, stay_id_path in zip(stay_ids_dt,
# stay_ids_path):
# save_data_dsv_ext(stay_id_path, stay_id_dt.data)
# stay_ids_dt, stay_ids_path = [], []
# for stay_id in stay_ids:
# save_path = get_data_save_path(export_dir, stay_id)
# _dt = DataTable()
# _dt.data = load_data_dsv(save_path)
# stay_ids_dt.append(_dt)
# stay_ids_path.append(save_path)
# for idx, stay_id_dt in enumerate(stay_ids_dt):
# info_save_path = get_info_save_path(
# STRUCTURED_EXPORT_DIR, stay_id)
# it.data = load_info_dsv(info_save_path)
# icu_intime = it.data['value'][13]
# icu_outtime = it.data['value'][14]
# if icu_intime <= df_row['charttime'] <= icu_outtime:
# stay_ids_dt[idx] = append_func(
# stay_id_dt, df_row, df.columns.tolist())
if save_flag == 0:
# Saves the final data.
save_data_dsv_ext(save_path, dt.data)
elif save_flag == 1 or save_flag == 2:
# Saves the final data.
for stay_id, stay_id_dt in stay_ids_dt.items():
save_data_dsv_ext(stay_ids_path[stay_id], stay_id_dt.data)
stay_ids_dt, stay_ids_path = {}, {}
return export_func
# # Save labevents_info table from db.
# (_, _, _, query_schema_derived, conn) = connect_db()
# df = get_database_table_as_dataframe(
# conn, query_schema_derived, 'labevents_info')
# df = df.sort_values('itemid')
# df.to_csv("../../../"+LAB_INFO_PATH, na_rep='', sep='\t', index=False)
###Output
_____no_output_____
###Markdown
Prepare the required mappings.The custom dict and list are created from `05a_export_raw_info.ipynb` .
###Code
# labitems = pd.read_csv("../../../"+LAB_ITEM_PATH, sep='\t', header=0)
# labitems.fillna('None')
d_derived = pd.read_csv("../../../"+DERIVED_ITEM_PATH, sep='\t', header=0)
d_derived = d_derived.fillna('None')
print('d_derived', d_derived.columns.to_list())
d_items = pd.read_csv("../../../"+CHART_ITEM_PATH, sep='\t', header=0)
d_items = d_items.fillna('None')
print('d_items', d_items.columns.to_list())
d_labinfos = pd.read_csv("../../../"+LAB_INFO_PATH, sep='\t', header=0)
d_labinfos = d_labinfos.fillna('None')
print('d_labinfos', d_labinfos.columns.to_list())
d_labitems = pd.read_csv("../../../"+LAB_ITEM_PATH, sep='\t', header=0)
d_labitems = d_labitems.fillna('None')
print('d_labitems', d_labitems.columns.to_list())
with open("../../../" + TMP_CUSTOM_LIST, 'r') as f:
custom_icustays_list = json.load(f)
with open("../../../" + TMP_CUSTOM_DICT, 'r') as f:
custom_icustays_dict = json.load(f)
def create_mappings(_id_mapping: dict):
id_mapping = {}
unit_mapping = {}
low_mapping = {}
high_mapping = {}
cat_mapping = {}
for k, v in _id_mapping.items():
id_mapping[v] = k
if k//100000 == 1:
unit_mapping[v] = d_derived[d_derived['uid']
== k]['units'].values[0]
low_mapping[v] = None # TODO ADD THIS IN THE DERIVED TABLE?
high_mapping[v] = None # TODO ADD THIS IN THE DERIVED TABLE?
cat_mapping[v] = d_derived[d_derived['uid']
== k]['category'].values[0]
elif k//200000 == 1:
unit_mapping[v] = d_items[d_items['uid']
== k]['unitname'].values[0]
low_mapping[v] = d_items[d_items['uid']
== k]['lownormalvalue'].values[0]
high_mapping[v] = d_items[d_items['uid']
== k]['highnormalvalue'].values[0]
cat_mapping[v] = d_items[d_items['uid'] == k]['category'].values[0]
elif k//500000 == 1:
cat_mapping[v] = d_labitems[d_labitems['uid']
== k]['category'].values[0]
unit_mapping[v] = None # From db table
low_mapping[v] = None # From db table
high_mapping[v] = None # From db table
else:
unit_mapping[v] = None
low_mapping[v] = None
high_mapping[v] = None
cat_mapping[v] = None
return id_mapping, unit_mapping, low_mapping, high_mapping, cat_mapping
###Output
d_derived ['uid', 'label', 'units', 'category', 'notes']
d_items ['uid', 'itemid', 'label', 'abbreviation', 'linksto', 'category', 'unitname', 'param_type', 'lownormalvalue', 'highnormalvalue']
d_labinfos ['itemid', 'valueuom', 'valueuom_count', 'ref_range_lower', 'ref_range_lower_count', 'ref_range_upper', 'ref_range_upper_count']
d_labitems ['uid', 'itemid', 'label', 'fluid', 'category', 'loinc_code']
###Markdown
Export dataCurrently the unit is taken from the original tables. A better solution is to include them in the concepts.
###Code
count = 0
###Output
_____no_output_____
###Markdown
Height
###Code
def append_func(dt, df_row, cols):
dt.append(
uid=100001,
value=df_row['height'],
unit='cm',
category='General',
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# [subject_id, stay_id, charttime, height]
df_iter, num_entries = get_db_table_as_df_ext('derived', 'height')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added height entries.")
###Output
100%|██████████| 2694/2694 [00:04<00:00, 663.22it/s]
100%|██████████| 2694/2694 [00:04<00:00, 663.30it/s]
100%|██████████| 2688/2688 [00:04<00:00, 665.86it/s]
100%|██████████| 2694/2694 [00:04<00:00, 660.80it/s]
100%|██████████| 2694/2694 [00:04<00:00, 663.02it/s]
100%|██████████| 2694/2694 [00:04<00:00, 659.59it/s]
100%|██████████| 2694/2694 [00:04<00:00, 657.87it/s]
100%|██████████| 2694/2694 [00:04<00:00, 658.87it/s]
###Markdown
Weight
###Code
def append_func(dt, df_row, col):
dt.append(
uid=100002,
value=df_row['weight'],
unit='kg',
category='General, ' + df_row['weight_type'],
starttime=df_row['starttime'],
endtime=df_row['endtime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['stay_id', 'starttime', 'endtime', 'weight', 'weight_type']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'weight_durations')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added weight entries.")
###Output
100%|██████████| 2694/2694 [00:03<00:00, 769.12it/s]
100%|██████████| 2688/2688 [00:03<00:00, 767.91it/s]
100%|██████████| 2694/2694 [00:03<00:00, 762.93it/s]
100%|██████████| 2694/2694 [00:03<00:00, 763.55it/s]
100%|██████████| 2694/2694 [00:03<00:00, 758.29it/s]
100%|██████████| 2694/2694 [00:03<00:00, 762.92it/s]
100%|██████████| 2694/2694 [00:03<00:00, 755.93it/s]
100%|██████████| 2694/2694 [00:03<00:00, 753.58it/s]
###Markdown
Chemistry
###Code
id_mapping = {
550862: 'albumin',
550930: 'globulin',
550976: 'total_protein',
550868: 'aniongap',
550882: 'bicarbonate',
551006: 'bun',
550893: 'calcium',
550902: 'chloride',
550912: 'creatinine',
550931: 'glucose',
550983: 'sodium',
550971: 'potassium',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=df_row[col+'_unit'],
lower_range=df_row[col+'_lower'],
upper_range=df_row[col+'_upper'],
category=cat_mapping[col],
specimen_id=df_row['specimen_id'],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen_id', 'albumin', 'globulin', 'total_protein', 'aniongap', 'bicarbonate', 'bun', 'calcium', 'chloride', 'creatinine', 'glucose', 'sodium', 'potassium']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'chemistry')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added chemistry (lab) entries.")
###Output
100%|██████████| 2694/2694 [00:03<00:00, 879.77it/s]
100%|██████████| 2688/2688 [00:03<00:00, 879.77it/s]
100%|██████████| 2694/2694 [00:03<00:00, 871.83it/s]
100%|██████████| 2694/2694 [00:03<00:00, 868.25it/s]
100%|██████████| 2694/2694 [00:03<00:00, 870.16it/s]
100%|██████████| 2694/2694 [00:03<00:00, 865.58it/s]
100%|██████████| 2694/2694 [00:03<00:00, 859.84it/s]
100%|██████████| 2694/2694 [00:03<00:00, 857.58it/s]
###Markdown
Blood Gas
###Code
id_mapping = {
552028: 'specimen',
550801: 'aado2',
550802: 'baseexcess',
550803: 'bicarbonate',
550804: 'totalco2',
550805: 'carboxyhemoglobin',
550806: 'chloride',
550808: 'calcium',
550809: 'glucose',
550810: 'hematocrit',
550811: 'hemoglobin',
550813: 'lactate',
550814: 'methemoglobin',
550816: 'fio2',
550817: 'so2',
550818: 'pco2',
550820: 'ph',
550821: 'po2',
550822: 'potassium',
550824: 'sodium',
550825: 'temperature',
223835: 'fio2_chartevents',
100038: 'pao2fio2ratio', # nounit
100039: 'aado2_calc',
100040: 'specimen_pred', # nounit
100041: 'specimen_prob',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
if id_mapping[col]//100000 == 1:
unit = unit_mapping[col]
lower_range = low_mapping[col]
upper_range = high_mapping[col]
if id_mapping[col] == 100039:
unit = df_row['aado2_unit'],
lower_range = df_row['aado2_lower'],
upper_range = df_row['aado2_upper'],
elif id_mapping[col]//100000 == 2:
unit = unit_mapping[col]
lower_range = low_mapping[col]
upper_range = high_mapping[col]
else:
unit = df_row[col+'_unit'],
lower_range = df_row[col+'_lower'],
upper_range = df_row[col+'_upper'],
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit,
lower_range=lower_range,
upper_range=upper_range,
category=cat_mapping[col],
starttime=df_row['charttime'],
)
return dt
# count += 1
# create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen', 'specimen_pred', 'specimen_prob', 'so2', 'po2', 'pco2', 'fio2_chartevents', 'fio2', 'aado2', 'aado2_calc', 'pao2fio2ratio', 'ph', 'baseexcess', 'bicarbonate', 'totalco2', 'hematocrit', 'hemoglobin', 'carboxyhemoglobin', 'methemoglobin', 'chloride', 'calcium', 'temperature', 'potassium', 'sodium', 'lactate', 'glucose']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'bg')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added bg (lab) entries.")
###Output
Database: mimiciv
Username: mimiciv
>>>>> Connected to DB <<<<<
Getting bg data
###Markdown
Blood Differential
###Code
# impute absolute count if percentage & WBC is available
id_mapping = {
551146: 'basophils',
552069: 'basophils_abs',
551200: 'eosinophils',
551254: 'monocytes',
551256: 'neutrophils',
552075: 'neutrophils_abs',
551143: 'atypical_lymphocytes',
551144: 'bands',
552135: 'immature_granulocytes',
551251: 'metamyelocytes',
551257: 'nrbc',
100003: 'wbc', # TODO: May need to split due to category.
100004: 'lymphocytes',
100005: 'eosinophils_abs',
100006: 'lymphocytes_abs',
100007: 'monocytes_abs',
# 51300: 'wbc',
# 51301: 'wbc',
# 51755: 'wbc',
# [51244, 51245]: lymphocytes
# [52073, 51199]: eosinophils_abs
# [51133, 52769]: lymphocytes_abs
# [52074, 51253]: monocytes_abs
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
if id_mapping[col]//100000 == 1:
unit = unit_mapping[col]
lower_range = low_mapping[col]
upper_range = high_mapping[col]
if id_mapping[col] == 100003:
lower_range = df_row['wbc_lower'],
upper_range = df_row['wbc_upper'],
elif id_mapping[col] == 100004:
lower_range = df_row['lymphocytes_lower'],
upper_range = df_row['lymphocytes_upper'],
elif id_mapping[col] == 100005:
lower_range = df_row['eosinophils_abs_lower'],
upper_range = df_row['eosinophils_abs_upper'],
elif id_mapping[col] == 100006:
lower_range = df_row['lymphocytes_abs_lower'],
upper_range = df_row['lymphocytes_abs_upper'],
elif id_mapping[col] == 100007:
lower_range = df_row['monocytes_abs_lower'],
upper_range = df_row['monocytes_abs_upper'],
else:
unit = df_row[col+'_unit'],
lower_range = df_row[col+'_lower'],
upper_range = df_row[col+'_upper'],
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit,
lower_range=lower_range,
upper_range=upper_range,
category=cat_mapping[col],
specimen_id=df_row['specimen_id'],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen_id', 'wbc', 'basophils_abs', 'eosinophils_abs', 'lymphocytes_abs', 'monocytes_abs', 'neutrophils_abs', 'basophils', 'eosinophils', 'lymphocytes', 'monocytes', 'neutrophils', 'atypical_lymphocytes', 'bands', 'immature_granulocytes', 'metamyelocytes', 'nrbc']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'blood_differential')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added blood_differential (lab) entries.")
###Output
100%|██████████| 2694/2694 [00:03<00:00, 836.01it/s]
100%|██████████| 2694/2694 [00:03<00:00, 833.60it/s]
100%|██████████| 2694/2694 [00:03<00:00, 831.48it/s]
100%|██████████| 2694/2694 [00:03<00:00, 833.92it/s]
100%|██████████| 2694/2694 [00:03<00:00, 832.28it/s]
100%|██████████| 2694/2694 [00:03<00:00, 834.78it/s]
100%|██████████| 2688/2688 [00:03<00:00, 833.34it/s]
100%|██████████| 2694/2694 [00:03<00:00, 828.49it/s]
###Markdown
Cardiac Marker
###Code
id_mapping = {
551002: 'troponin_i',
551003: 'troponin_t',
550911: 'ck_mb',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=df_row[col+'_unit'],
lower_range=df_row[col+'_lower'],
upper_range=df_row[col+'_upper'],
category=cat_mapping[col],
specimen_id=df_row['specimen_id'],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen_id', 'troponin_i', 'troponin_t', 'ck_mb']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'cardiac_marker')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added cardiac_marker (lab) entries.")
# hadm_list, sub_list = [], []
# for chunk in tqdm(df_iter):
# num_entries += len(chunk)
# hadm_list += chunk['hadm_id'].tolist()
# sub_list += chunk['subject_id'].tolist()
###Output
100%|██████████| 2694/2694 [00:02<00:00, 911.99it/s]
100%|██████████| 2694/2694 [00:03<00:00, 897.87it/s]
100%|██████████| 2688/2688 [00:03<00:00, 891.70it/s]
100%|██████████| 2694/2694 [00:03<00:00, 889.30it/s]
100%|██████████| 2694/2694 [00:03<00:00, 880.63it/s]
100%|██████████| 2694/2694 [00:03<00:00, 875.25it/s]
100%|██████████| 2694/2694 [00:03<00:00, 884.70it/s]
100%|██████████| 2694/2694 [00:03<00:00, 863.45it/s]
###Markdown
Coagulation
###Code
id_mapping = {
551196: 'd_dimer',
551214: 'fibrinogen',
551297: 'thrombin',
551237: 'inr',
551274: 'pt',
551275: 'ptt',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=df_row[col+'_unit'],
lower_range=df_row[col+'_lower'],
upper_range=df_row[col+'_upper'],
category=cat_mapping[col],
specimen_id=df_row['specimen_id'],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen_id', 'd_dimer', 'fibrinogen', 'thrombin', 'inr', 'pt', 'ptt']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'coagulation')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added coagulation (lab) entries.")
###Output
100%|██████████| 2688/2688 [00:04<00:00, 648.06it/s]
100%|██████████| 2694/2694 [00:04<00:00, 643.84it/s]
100%|██████████| 2694/2694 [00:04<00:00, 641.45it/s]
100%|██████████| 2694/2694 [00:04<00:00, 638.88it/s]
100%|██████████| 2694/2694 [00:04<00:00, 638.27it/s]
100%|██████████| 2694/2694 [00:04<00:00, 636.93it/s]
100%|██████████| 2694/2694 [00:04<00:00, 635.02it/s]
100%|██████████| 2694/2694 [00:04<00:00, 622.59it/s]
###Markdown
Complete blood count
###Code
id_mapping = {
551221: 'hematocrit',
551222: 'hemoglobin',
551248: 'mch',
551249: 'mchc',
551250: 'mcv',
551265: 'platelet',
551279: 'rbc',
551277: 'rdw',
552159: 'rdwsd',
# 551301: 'wbc', # present in blood_differential
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=df_row[col+'_unit'],
lower_range=df_row[col+'_lower'],
upper_range=df_row[col+'_upper'],
category=cat_mapping[col],
specimen_id=df_row['specimen_id'],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen_id', 'hematocrit', 'hemoglobin', 'mch', 'mchc', 'mcv', 'platelet', 'rbc', 'rdw', 'rdwsd', 'wbc']
df_iter, num_entries = get_db_table_as_df_ext(
'derived', 'complete_blood_count')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added complete_blood_count (lab) entries.")
###Output
100%|██████████| 2694/2694 [00:03<00:00, 810.41it/s]
100%|██████████| 2694/2694 [00:03<00:00, 790.47it/s]
100%|██████████| 2688/2688 [00:03<00:00, 792.33it/s]
100%|██████████| 2694/2694 [00:03<00:00, 788.04it/s]
100%|██████████| 2694/2694 [00:03<00:00, 788.10it/s]
100%|██████████| 2694/2694 [00:03<00:00, 781.35it/s]
100%|██████████| 2694/2694 [00:03<00:00, 785.62it/s]
100%|██████████| 2694/2694 [00:03<00:00, 778.55it/s]
###Markdown
Enzyme
###Code
id_mapping = {
550861: 'alt',
550863: 'alp',
550878: 'ast',
550867: 'amylase',
550885: 'bilirubin_total',
550884: 'bilirubin_indirect',
550883: 'bilirubin_direct',
550910: 'ck_cpk',
550911: 'ck_mb',
550927: 'ggt',
550954: 'ld_ldh',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=df_row[col+'_unit'],
lower_range=df_row[col+'_lower'],
upper_range=df_row[col+'_upper'],
category=cat_mapping[col],
specimen_id=df_row['specimen_id'],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen_id', 'alt', 'alp', 'ast', 'amylase', 'bilirubin_total', 'bilirubin_direct', 'bilirubin_indirect', 'ck_cpk', 'ck_mb', 'ggt', 'ld_ldh']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'enzyme')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added enzyme (lab) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 1009.40it/s]
100%|██████████| 2694/2694 [00:02<00:00, 993.42it/s]
100%|██████████| 2694/2694 [00:02<00:00, 987.48it/s]
100%|██████████| 2694/2694 [00:02<00:00, 989.73it/s]
100%|██████████| 2688/2688 [00:02<00:00, 986.48it/s]
100%|██████████| 2694/2694 [00:02<00:00, 980.74it/s]
100%|██████████| 2694/2694 [00:02<00:00, 970.26it/s]
100%|██████████| 2694/2694 [00:02<00:00, 966.74it/s]
###Markdown
Inflamation
###Code
id_mapping = {
550889: 'crp',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=df_row[col+'_unit'],
lower_range=df_row[col+'_lower'],
upper_range=df_row[col+'_upper'],
category=cat_mapping[col],
specimen_id=df_row['specimen_id'],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'charttime', 'specimen_id', 'crp']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'inflammation')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added inflammation (lab) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 1012.64it/s]
100%|██████████| 2694/2694 [00:02<00:00, 988.76it/s]
100%|██████████| 2694/2694 [00:02<00:00, 987.17it/s]
100%|██████████| 2694/2694 [00:02<00:00, 983.03it/s]
100%|██████████| 2694/2694 [00:02<00:00, 986.62it/s]
100%|██████████| 2688/2688 [00:02<00:00, 981.79it/s]
100%|██████████| 2694/2694 [00:02<00:00, 973.35it/s]
100%|██████████| 2694/2694 [00:02<00:00, 966.15it/s]
###Markdown
O2 delivery
###Code
id_mapping = {
227287: 'o2_flow_additional',
100012: 'o2_flow',
100008: 'o2_delivery_device_1',
100009: 'o2_delivery_device_2',
100010: 'o2_delivery_device_3',
100011: 'o2_delivery_device_4',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit_mapping[col],
lower_range=low_mapping[col],
upper_range=high_mapping[col],
category=cat_mapping[col],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'stay_id', 'charttime', 'o2_flow', 'o2_flow_additional', 'o2_delivery_device_1', 'o2_delivery_device_2', 'o2_delivery_device_3', 'o2_delivery_device_4']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'oxygen_delivery')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added oxygen_delivery (chart) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 937.72it/s]
100%|██████████| 2694/2694 [00:02<00:00, 931.84it/s]
100%|██████████| 2694/2694 [00:02<00:00, 932.92it/s]
100%|██████████| 2694/2694 [00:02<00:00, 930.77it/s]
100%|██████████| 2694/2694 [00:02<00:00, 926.89it/s]
100%|██████████| 2688/2688 [00:02<00:00, 921.38it/s]
100%|██████████| 2694/2694 [00:02<00:00, 918.96it/s]
100%|██████████| 2694/2694 [00:02<00:00, 908.19it/s]
###Markdown
Rhythm
###Code
id_mapping = {
220048: 'heart_rhythm',
224650: 'ectopy_type',
224651: 'ectopy_frequency',
226479: 'ectopy_type_secondary',
226480: 'ectopy_frequency_secondary',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit_mapping[col],
lower_range=low_mapping[col],
upper_range=high_mapping[col],
category=cat_mapping[col],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'charttime', 'heart_rhythm', 'ectopy_type', 'ectopy_frequency', 'ectopy_type_secondary', 'ectopy_frequency_secondary']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'rhythm')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added rhythm (chart) entries.")
###Output
100%|██████████| 2688/2688 [00:02<00:00, 1003.54it/s]
100%|██████████| 2694/2694 [00:02<00:00, 970.44it/s]
100%|██████████| 2694/2694 [00:02<00:00, 976.15it/s]
100%|██████████| 2694/2694 [00:02<00:00, 970.84it/s]
100%|██████████| 2694/2694 [00:02<00:00, 975.84it/s]
100%|██████████| 2694/2694 [00:02<00:00, 966.39it/s]
100%|██████████| 2694/2694 [00:02<00:00, 968.59it/s]
100%|██████████| 2694/2694 [00:02<00:00, 964.01it/s]
###Markdown
Urine Output
###Code
def append_func(dt, df_row, cols):
dt.append(
uid=100013,
value=df_row['urineoutput'],
unit='mL',
category='Output',
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['stay_id', 'charttime', 'urineoutput']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'urine_output')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added urine_output (chart) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 1000.78it/s]
100%|██████████| 2688/2688 [00:02<00:00, 986.44it/s]
100%|██████████| 2694/2694 [00:02<00:00, 985.12it/s]
100%|██████████| 2694/2694 [00:02<00:00, 984.18it/s]
100%|██████████| 2694/2694 [00:02<00:00, 970.00it/s]
100%|██████████| 2694/2694 [00:02<00:00, 978.69it/s]
100%|██████████| 2694/2694 [00:02<00:00, 963.08it/s]
100%|██████████| 2694/2694 [00:02<00:00, 959.63it/s]
###Markdown
Urine Output Rate
###Code
# -- attempt to calculate urine output per hour
# -- rate/hour is the interpretable measure of kidney function
# -- though it is difficult to estimate from aperiodic point measures
# -- first we get the earliest heart rate documented for the stay
id_mapping = {
# 100013: 'uo', present in previous table.
100014: 'urineoutput_6hr', # output within 6hr (floor)
100015: 'urineoutput_12hr',
100016: 'urineoutput_24hr',
100017: 'uo_mlkghr_6hr', # (urineoutput_6hr/weight/uo_tm_6hr)
100018: 'uo_mlkghr_12hr',
100019: 'uo_mlkghr_24hr',
100020: 'uo_tm_6hr', # time from last uo measurement within 6hr (floor)
100021: 'uo_tm_12hr',
100022: 'uo_tm_24hr',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit_mapping[col],
lower_range=low_mapping[col],
upper_range=high_mapping[col],
category=cat_mapping[col],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['stay_id', 'charttime', 'weight', 'uo', 'urineoutput_6hr', 'urineoutput_12hr', 'urineoutput_24hr', 'uo_mlkghr_6hr', 'uo_mlkghr_12hr', 'uo_mlkghr_24hr', 'uo_tm_6hr', 'uo_tm_12hr', 'uo_tm_24hr']
df_iter, num_entries = get_db_table_as_df_ext(
'derived', 'urine_output_rate', _chunk=30000)
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added urine_output_rate (derived) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 987.26it/s]
100%|██████████| 2694/2694 [00:02<00:00, 966.47it/s]
100%|██████████| 2694/2694 [00:02<00:00, 966.99it/s]
100%|██████████| 2694/2694 [00:02<00:00, 961.49it/s]
100%|██████████| 2694/2694 [00:02<00:00, 960.83it/s]
100%|██████████| 2694/2694 [00:02<00:00, 965.67it/s]
100%|██████████| 2688/2688 [00:02<00:00, 950.25it/s]
100%|██████████| 2694/2694 [00:02<00:00, 941.93it/s]
###Markdown
Vent settings
###Code
id_mapping = {
224688: 'respiratory_rate_set',
224690: 'respiratory_rate_total',
224689: 'respiratory_rate_spontaneous',
224687: 'minute_volume',
224684: 'tidal_volume_set',
224685: 'tidal_volume_observed',
224686: 'tidal_volume_spontaneous',
224696: 'plateau_pressure',
100023: 'peep',
# 223835: 'fio2', # same as fio2_chartevents
223849: 'ventilator_mode',
229314: 'ventilator_mode_hamilton',
223848: 'ventilator_type',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit_mapping[col],
lower_range=low_mapping[col],
upper_range=high_mapping[col],
category=cat_mapping[col],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'stay_id', 'charttime', 'respiratory_rate_set', 'respiratory_rate_total', 'respiratory_rate_spontaneous', 'minute_volume', 'tidal_volume_set', 'tidal_volume_observed', 'tidal_volume_spontaneous', 'plateau_pressure', 'peep', 'fio2', 'ventilator_mode', 'ventilator_mode_hamilton', 'ventilator_type']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'ventilator_setting')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added ventilator_setting (chart) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 1022.10it/s]
100%|██████████| 2688/2688 [00:02<00:00, 1013.40it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1008.24it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1005.54it/s]
100%|██████████| 2694/2694 [00:02<00:00, 994.89it/s]
100%|██████████| 2694/2694 [00:02<00:00, 998.71it/s]
100%|██████████| 2694/2694 [00:02<00:00, 987.40it/s]
100%|██████████| 2694/2694 [00:02<00:00, 987.34it/s]
###Markdown
Vital Signs
###Code
id_mapping = {
220045: 'heart_rate',
100024: 'sbp',
100025: 'dbp',
100026: 'mbp',
220179: 'sbp_ni',
220180: 'dbp_ni',
220181: 'mbp_ni',
100027: 'resp_rate',
100028: 'temperature',
224642: 'temperature_site',
220277: 'spo2',
100029: 'glucose_chartevents',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'stay_id', 'charttime', 'heart_rate', 'sbp', 'dbp', 'mbp', 'sbp_ni', 'dbp_ni', 'mbp_ni', 'resp_rate', 'temperature', 'temperature_site', 'spo2', 'glucose']
df_iter, num_entries = get_db_table_as_df_ext(
'derived', 'vitalsign', _chunk=100000)
def func(dfs, pid):
stay_id_mem = -1
save_path = None
df = dfs[0]
dt = DataTable()
for df_i in df.iterrows():
df_row = df_i[1]
if stay_id_mem == -1:
stay_id_mem = df_row['stay_id']
save_path = get_data_save_path(STRUCTURED_EXPORT_DIR+str(count),
df_row['stay_id'])
dt.data = load_data_dsv(save_path)
elif df_row['stay_id'] != stay_id_mem:
save_path = get_data_save_path(STRUCTURED_EXPORT_DIR+str(count),
stay_id_mem)
save_data_dsv_ext(save_path, dt.data)
stay_id_mem = df_row['stay_id']
save_path = get_data_save_path(STRUCTURED_EXPORT_DIR+str(count),
df_row['stay_id'])
dt.data = load_data_dsv(save_path)
for col in df.columns.tolist():
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit_mapping[col],
lower_range=low_mapping[col],
upper_range=high_mapping[col],
category=cat_mapping[col],
starttime=df_row['charttime'],
)
save_data_dsv_ext(save_path, dt.data)
# parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
sort_list = ['subject_id', 'hadm_id', 'stay_id',
'charttime', 'starttime', 'endtime', ]
# mem_flag, df_mem = False, pd.DataFrame()
for df in tqdm(df_iter, total=num_entries):
if 'stay_id' in df.columns.tolist():
df = df[df.stay_id.isin(custom_icustays_list)]
# df_mem = pd.concat([df_mem, df]) if mem_flag else df
# uc = df_mem.nunique(axis=1)
# if uc < MP_NUM_PROCESSES:
# mem_flag = True
# continue
# else:
# df = df_mem
# mem_flag, df_mem = False, pd.DataFrame()
df = df.sort_values(
by=[i for i in sort_list if i in df.columns.tolist()])
dfs = split_df(df, MP_NUM_PROCESSES)
parallel_processing(func, MP_NUM_PROCESSES, dfs)
print("Added vitalsign (chart) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 1052.50it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1045.58it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1034.73it/s]
100%|██████████| 2688/2688 [00:02<00:00, 1041.94it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1033.84it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1036.97it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1037.08it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1026.84it/s]
###Markdown
Antibiotics
###Code
# (query_schema_core,
# query_schema_hosp,
# query_schema_icu,
# query_schema_derived,
# conn) = connect_db()
# # ['subject_id', 'hadm_id', 'stay_id', 'antibiotic', 'route', 'starttime', 'stoptime']
# df = get_database_table_as_dataframe(conn, query_schema_derived, 'antibiotic')
# df = df[df.stay_id.isin(custom_icustays_list)]
# def func(dfs, pid):
#
#
# df = dfs[0]
# it = InfoTable()
# dt = DataTable()
# for df_i in df.iterrows():
# df_row = df_i[1]
# dt.data = load_data_dsv(STRUCTURED_EXPORT_DIR, df_row['stay_id'])
# dt.append(
# uid=100030,
# value=df_row['antibiotic'],
# category=df_row['route'],
# starttime=df_row['starttime'],
# endtime=df_row['stoptime'],
# )
# save_data_dsv(STRUCTURED_EXPORT_DIR,
# df_row['stay_id'], pd.DataFrame(dt.data))
# dfs = split_df(df, MP_NUM_PROCESSES)
# parallel_processing(func, MP_NUM_PROCESSES, dfs)
# print("Added antibiotic (hosp.prescriptions) entries.")
###Output
_____no_output_____
###Markdown
Medications
###Code
med_ids = [
220995, # Sodium Bicarbonate 8.4%
221794, # Furosemide (Lasix) **
228340, # Furosemide (Lasix) 250/50 **
# 100037, # Furosemide (Lasix)
221986, # Milrinone
229068, # Protamine sulfate
229639, # Bumetanide (Bumex)
221653, # Dobutamine
221662, # Dopamine
221289, # Epinephrine
229617, # Epinephrine. ~145 entries only
# 100036, # Epinephrine
221906, # Norepinephrine
221749, # Phenylephrine
222315, # Vasopressin
]
id_mapping = {
221794: 100037,
228340: 100037,
221289: 100036,
229617: 100036,
}
def append_func(dt, df_row, df_cols):
uid = df_row['itemid']
dt.append(
uid=id_mapping[uid] if uid in id_mapping else uid,
value=df_row['amount'],
unit=df_row['amountuom'],
rate=df_row['rate'],
rate_unit=df_row['rateuom'],
category='Medication',
starttime=df_row['starttime'],
endtime=df_row['endtime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
(query_schema_core,
query_schema_hosp,
query_schema_icu,
query_schema_derived,
conn) = connect_db()
df_iter, num_entries = get_database_table_as_dataframe(
conn, query_schema_icu, 'inputevents',
_filter_col='itemid',
_filter_col_val=tuple(med_ids),
_chunk_size=10000*MP_NUM_PROCESSES)
num_entries = math.ceil(num_entries / (10000*MP_NUM_PROCESSES))
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added (medication) entries.")
###Output
100%|██████████| 2694/2694 [00:02<00:00, 1079.15it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1074.99it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1064.84it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1062.09it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1065.17it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1058.13it/s]
100%|██████████| 2688/2688 [00:02<00:00, 1051.37it/s]
100%|██████████| 2694/2694 [00:02<00:00, 1049.97it/s]
###Markdown
KDIGO
###Code
id_mapping = {
100031: 'creat_low_past_48hr',
100032: 'creat_low_past_7day',
100033: 'aki_stage_creat',
100034: 'aki_stage_uo',
100035: 'aki_stage',
}
(id_mapping,
unit_mapping,
low_mapping,
high_mapping,
cat_mapping) = create_mappings(id_mapping)
def append_func(dt, df_row, df_cols):
for col in df_cols:
if col in id_mapping:
dt.append(
uid=id_mapping[col],
value=df_row[col],
unit=unit_mapping[col],
lower_range=low_mapping[col],
upper_range=high_mapping[col],
category=cat_mapping[col],
starttime=df_row['charttime'],
)
return dt
count += 1
create_dummy_files(STRUCTURED_EXPORT_DIR+str(count), custom_icustays_list)
# ['subject_id', 'hadm_id', 'stay_id', 'charttime', 'creat_low_past_7day', 'creat_low_past_48hr', 'creat', 'aki_stage_creat', 'uo_rt_6hr', 'uo_rt_12hr', 'uo_rt_24hr', 'aki_stage_uo', 'aki_stage']
df_iter, num_entries = get_db_table_as_df_ext('derived', 'kdigo_stages')
func = export_func_factory(append_func, STRUCTURED_EXPORT_DIR+str(count))
parallel_processing_ext(func, df_iter, num_entries, custom_icustays_list)
print("Added kdigo_stages (derived) entries.")
###Output
100%|██████████| 2694/2694 [00:03<00:00, 680.95it/s]
100%|██████████| 2694/2694 [00:03<00:00, 679.40it/s]
100%|██████████| 2694/2694 [00:03<00:00, 675.48it/s]
100%|██████████| 2694/2694 [00:03<00:00, 678.52it/s]
100%|██████████| 2688/2688 [00:03<00:00, 676.85it/s]
100%|██████████| 2694/2694 [00:03<00:00, 675.83it/s]
100%|██████████| 2694/2694 [00:03<00:00, 677.34it/s]
100%|██████████| 2694/2694 [00:04<00:00, 672.69it/s]
###Markdown
Merge folders
###Code
def func(files, pid):
parent_dir, main_folder = os.path.split(STRUCTURED_EXPORT_DIR)
folders = [os.path.join(parent_dir, i)
for i in os.listdir(parent_dir) if i != main_folder]
for f in tqdm(files):
main_path = os.path.join(STRUCTURED_EXPORT_DIR, f)
df = pd.read_csv(main_path, sep='$')
for folder in folders:
path = os.path.join(folder, f)
df = pd.concat([df, pd.read_csv(path, sep='$')])
sort_list = ['starttime', 'uid']
df = df.sort_values(by=sort_list)
df.to_csv(main_path, na_rep='', sep='$', index=False)
create_dummy_files(STRUCTURED_EXPORT_DIR, custom_icustays_list)
data_files = [i for i in os.listdir(STRUCTURED_EXPORT_DIR) if 'data' in i]
parallel_processing(func, MP_NUM_PROCESSES, data_files)
###Output
100%|██████████| 2688/2688 [00:04<00:00, 585.70it/s]
100%|██████████| 2694/2694 [00:04<00:00, 580.19it/s]
100%|██████████| 2694/2694 [00:04<00:00, 580.80it/s]
100%|██████████| 2694/2694 [00:04<00:00, 580.82it/s]
100%|██████████| 2694/2694 [00:04<00:00, 579.57it/s]
100%|██████████| 2694/2694 [00:04<00:00, 574.86it/s]
100%|██████████| 2694/2694 [00:04<00:00, 578.59it/s]
100%|██████████| 2694/2694 [00:04<00:00, 572.77it/s]
|
20.01.1400_GridSearchCV/ER_LAD_30sets_def.ipynb | ###Markdown
Expectation Reflection + Least Absolute DeviationsIn the following, we demonstrate how to apply Least Absolute Deviations (LAD) for classification task such as medical diagnosis.We import the necessary packages to the Jupyter notebook:
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split,KFold
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,\
recall_score,roc_curve,auc
from scipy import linalg
from scipy.special import erf as sperf
import expectation_reflection as ER
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import MinMaxScaler
from function import split_train_test,make_data_balance
np.random.seed(1)
###Output
_____no_output_____
###Markdown
First of all, the processed data are imported.
###Code
data_list = np.loadtxt('data_list.txt',dtype='str')
#data_list = ['29parkinson','30paradox2','31renal','32patientcare','33svr','34newt','35pcos']
print(data_list)
def read_data(data_id):
data_name = data_list[data_id]
print('data_name:',data_name)
Xy = np.loadtxt('../data/%s/data_processed.dat'%data_name)
X = Xy[:,:-1]
y = Xy[:,-1]
#print(np.unique(y,return_counts=True))
X,y = make_data_balance(X,y)
print(np.unique(y,return_counts=True))
X, y = shuffle(X, y, random_state=1)
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,random_state = 1)
sc = MinMaxScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
return X_train,X_test,y_train,y_test
def infer_LAD(x, y, regu=0.1,tol=1e-6, max_iter=1000):
## 2019.12.26: Jungmin's code
weights_limit = sperf(1e-10)*1e10
s_sample, s_pred = x.shape
s_sample, s_target = y.shape
mu = np.zeros(x.shape[1])
w_sol = 0.0*(np.random.rand(s_pred,s_target) - 0.5)
b_sol = np.random.rand(1,s_target) - 0.5
# print(weights.shape)
for index in range(s_target):
error, old_error = np.inf, 0
weights = np.ones((s_sample, 1))
cov = np.cov(np.hstack((x,y[:,index][:,None])), rowvar=False, \
ddof=0, aweights=weights.reshape(s_sample))
cov_xx, cov_xy = cov[:s_pred,:s_pred],cov[:s_pred,s_pred:(s_pred+1)]
# print(cov.shape, cov_xx.shape, cov_xy.shape)
counter = 0
while np.abs(error-old_error) > tol and counter < max_iter:
counter += 1
old_error = np.mean(np.abs(b_sol[0,index] + x.dot(w_sol[:,index]) - y[:,index]))
# old_error = np.mean(np.abs(b_sol[0,index] + x_test.dot(w_sol[:,index]) - y_test[:,index]))
# print(w_sol[:,index].shape, npl.solve(cov_xx, cov_xy).reshape(s_pred).shape)
# 2019.12.26: Tai - added regularization
sigma_w = np.std(w_sol[:,index])
w_eq_0 = np.abs(w_sol[:,index]) < 1e-10
mu[w_eq_0] = 2./np.sqrt(np.pi)
mu[~w_eq_0] = sigma_w*sperf(w_sol[:,index][~w_eq_0]/sigma_w)/w_sol[:,index][~w_eq_0]
w_sol[:,index] = np.linalg.solve(cov_xx + regu*np.diag(mu),cov_xy).reshape(s_pred)
b_sol[0,index] = np.mean(y[:,index]-x.dot(w_sol[:,index]))
weights = (b_sol[0,index] + x.dot(w_sol[:,index]) - y[:,index])
sigma = np.std(weights)
error = np.mean(np.abs(weights))
# error = np.mean(np.abs(b_sol[0,index] + x_test.dot(w_sol[:,index]) - y_test[:,index]))
weights_eq_0 = np.abs(weights) < 1e-10
weights[weights_eq_0] = weights_limit
weights[~weights_eq_0] = sigma*sperf(weights[~weights_eq_0]/sigma)/weights[~weights_eq_0]
#weights = sigma*sperf(weights/sigma)/weights
cov = np.cov(np.hstack((x,y[:,index][:,None])), rowvar=False, \
ddof=0, aweights=weights.reshape(s_sample))
cov_xx, cov_xy = cov[:s_pred,:s_pred],cov[:s_pred,s_pred:(s_pred+1)]
# print(old_error,error)
#return b_sol,w_sol
return b_sol[0][0],w_sol[:,0] # for only one target case
def fit_LAD(x,y,niter_max,l2):
# convert 0, 1 to -1, 1
y1 = 2*y - 1.
#print(niter_max)
n = x.shape[1]
x_av = np.mean(x,axis=0)
dx = x - x_av
c = np.cov(dx,rowvar=False,bias=True)
# 2019.07.16:
c += l2*np.identity(n) / (2*len(y))
c_inv = linalg.pinvh(c)
# initial values
h0 = 0.
w = np.random.normal(0.0,1./np.sqrt(n),size=(n))
cost = np.full(niter_max,100.)
for iloop in range(niter_max):
h = h0 + x.dot(w)
y1_model = np.tanh(h/2.)
# stopping criterion
p = 1/(1+np.exp(-h))
cost[iloop] = ((p-y)**2).mean()
#h_test = h0 + x_test.dot(w)
#p_test = 1/(1+np.exp(-h_test))
#cost[iloop] = ((p_test-y_test)**2).mean()
if iloop>0 and cost[iloop] >= cost[iloop-1]: break
# update local field
t = h!=0
h[t] *= y1[t]/y1_model[t]
h[~t] = 2*y1[~t]
# find w from h
#h_av = h.mean()
#dh = h - h_av
#dhdx = dh[:,np.newaxis]*dx[:,:]
#dhdx_av = dhdx.mean(axis=0)
#w = c_inv.dot(dhdx_av)
#h0 = h_av - x_av.dot(w)
# 2019.12.26:
h0,w = infer_LAD(x,h[:,np.newaxis],l2)
return h0,w
def measure_performance(X_train,X_test,y_train,y_test):
n = X_train.shape[1]
l2 = [0.0001,0.001,0.01,0.1,1.,10.,100.]
#l2 = [0.0001,0.001,0.01,0.1,1.,10.]
nl2 = len(l2)
# cross validation
kf = 4
kfold = KFold(n_splits=kf,shuffle=False,random_state=1)
h01 = np.zeros(kf)
w1 = np.zeros((kf,n))
cost1 = np.zeros(kf)
h0 = np.zeros(nl2)
w = np.zeros((nl2,n))
cost = np.zeros(nl2)
for il2 in range(len(l2)):
for i,(train_index,val_index) in enumerate(kfold.split(y_train)):
X_train1, X_val = X_train[train_index], X_train[val_index]
y_train1, y_val = y_train[train_index], y_train[val_index]
#h01[i],w1[i,:] = ER.fit(X_train1,y_train1,niter_max=100,l2=l2[il2])
#h01[i],w1[i,:] = ER.fit_LAD(X_train1,y_train1,niter_max=100,l2=l2[il2])
h01[i],w1[i,:] = fit_LAD(X_train1,y_train1,niter_max=100,l2=l2[il2])
y_val_pred,p_val_pred = ER.predict(X_val,h01[i],w1[i])
cost1[i] = ((p_val_pred - y_val)**2).mean()
h0[il2] = h01.mean(axis=0)
w[il2,:] = w1.mean(axis=0)
cost[il2] = cost1.mean()
# optimal value of l2:
il2_opt = np.argmin(cost)
print('optimal l2:',l2[il2_opt])
# performance:
y_test_pred,p_test_pred = ER.predict(X_test,h0[il2_opt],w[il2_opt,:])
fp,tp,thresholds = roc_curve(y_test, p_test_pred, drop_intermediate=False)
roc_auc = auc(fp,tp)
#print('AUC:', roc_auc)
acc = accuracy_score(y_test,y_test_pred)
#print('Accuracy:', acc)
precision = precision_score(y_test,y_test_pred)
#print('Precision:',precision)
recall = recall_score(y_test,y_test_pred)
#print('Recall:',recall)
return acc,roc_auc,precision,recall
n_data = len(data_list)
roc_auc = np.zeros(n_data) ; acc = np.zeros(n_data)
precision = np.zeros(n_data) ; recall = np.zeros(n_data)
for data_id in range(n_data):
X_train,X_test,y_train,y_test = read_data(data_id)
acc[data_id],roc_auc[data_id],precision[data_id],recall[data_id] =\
measure_performance(X_train,X_test,y_train,y_test)
print(data_id,acc[data_id],roc_auc[data_id])
print('acc_mean:',acc.mean())
print('roc_mean:',roc_auc.mean())
print('precision:',precision.mean())
print('recall:',recall.mean())
#np.savetxt('ER_LAD_result_30sets.dat',(roc_auc,acc,precision,recall),fmt='%f')
###Output
_____no_output_____ |
fflabs/nlp-preprocessing/NLP-preprocessing.ipynb | ###Markdown
Natural Language PreprocessingThe goal of this notebook is to provide a brief overview of common steps taken during natural language preprocessing (NLP). When dealing with text data, the first major hurdle is figuring out how go from a collection of strings to a format that statistical and maching learning models can understand. This resource is meant get you started thinking about how to process your data, not to provide a formal pipeline. Preprocessing follows a general series of steps, each requiring decisions that can substantially impact the final outcome of analyses if not considered carefully. Below we will be emphasizing how different sources of text require different approaches for preprocessing and modeling. As you approach your own data, ***think about the implications of each decision*** on the outcome of your analysis.**Note:** Please send along any errata or comments you may have. Outline---1. [**Requirements**](recs)2. [**Importing Data**](data)3. [**Preprocessing Text**](preprocessing) - [Tokenization](tokens) - [Stop Words](stops) - [Stemming and Lemmatization](stemlemm) - [Vectorization](vecs) 4. [**Next Steps**](next) RequirementsThis notebook requires several commonly used Python packages for data analysis and Natural Language Processing (NLP):Pandas: Data structures and analysisNLTK: Natural Language Toolkitscikit-learn: Machine Learning and data processesing DataAs a working example, we will be exploring data from New York Times op-eds articles. While a rich and appropriate data set, keep in mind that the examples below will likely be quite different than the data you are working with.In NY Times op-ed repository, there is a subset of 947 op-ed articles. To begin we will start by looking at one article to illustrate the steps of preprocessing. Later we will suggest some potential future directions for exploring the dataset in full.
###Code
import pandas as pd
# read subset of data from csv file into panadas dataframe
df = pd.read_csv('data_files/1_100.csv')
# get rid of any missing text data
df = df[pd.notnull(df['full_text'])]
# for now, chosing one article to illustrate preprocessing
article = df['full_text'][939]
###Output
_____no_output_____
###Markdown
Let's take a peek at the raw text of this article to see what we are dealing with! Right off the bat you can see that we have a mixture of uppercase and lowercase words, punctuation, and some character encoding (e.g., "\xe2\x80\x9cIf" below).
###Code
# NY Times
article[:500]
###Output
_____no_output_____
###Markdown
Preprocessing Text When working with text data, the goal is to process (remove, filter, and combine) the text in such a way that informative text is preserve and munged into a form that models can better understand. After looking at our raw text, we know that there are a number of textual attributes that we will need to address before we can ultimately represent our text as quantified features. A common first step is to handle [string encoding](http://kunststube.net/encoding/) and formatting issues. Often it is easy to address the character encoding and mixed capitalization using Python's built-in functions. For our NY Times example, we will convert everything to UTF-8 encoding and convert all letters to lowercase.
###Code
print(article[:500].decode('utf-8').lower())
###Output
americans work some of the longest hours in the western world, and many struggle to achieve a healthy balance between work and life. as a result, there is an understandable tendency to assume that the problem we face is one of quantity: we simply do not have enough free time. “if i could just get a few more hours off work each week,” you might think, “i would be happier.” this may be true. but the situation, i believe, is more complicated than that. as i discovered in a study that i publ
###Markdown
1. TokenizationIn order to process text, it must be deconstructed into its constituent elements through a process termed tokenization. Often, the tokens yielded from this process are simply individual words in a document. In certain cases, it can be useful to tokenize stranger objects like emoji or parts of html (or other code).A simplistic way to tokenize text relies on white space, such as in nltk.tokenize.WhitespaceTokenizer. Relying on white space, however, does not take punctuation into account, and depending on this some tokens will include punctuation and will require further preprocessing (e.g. 'account,'). Depending on your data, the punctuation may provide meaningful information, so you will want to think about whether it should be preserved or if it can be removed.Tokenization is particularly challenging in the biomedical field, where many phrases contain substantial punctuation (parentheses, hyphens, etc.) that can't necessarily be ignored. Additionally, negation detection can be critical in this context which can provide an additional preprocessing challenge.NLTK contains many built-in modules for tokenization, such as nltk.tokenize.WhitespaceTokenizer and nltk.tokenize.RegexpTokenizer.See also:[The Art of Tokenization](https://www.ibm.com/developerworks/community/blogs/nlp/entry/tokenization?lang=en)[Negation's Not Solved: Generalizability Versus Optimizability in Clinical Natural Language Processing](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4231086/) Example: Whitespace TokenizationHere we apply the Whitespace Tokenizer on our example article. Notice that we are again decoding characters (such as quotation marks) and using all lowercase characters. Because we used white space as the marker between tokens, we still have punctuation attached to some tokens (e.g. __'life.'__ and __'\u201cif'__)
###Code
from nltk.tokenize import WhitespaceTokenizer
ws_tokenizer = WhitespaceTokenizer()
# tokenize example document
nyt_ws_tokens = ws_tokenizer.tokenize(article.decode('utf-8').lower())
print nyt_ws_tokens[:75]
###Output
[u'americans', u'work', u'some', u'of', u'the', u'longest', u'hours', u'in', u'the', u'western', u'world,', u'and', u'many', u'struggle', u'to', u'achieve', u'a', u'healthy', u'balance', u'between', u'work', u'and', u'life.', u'as', u'a', u'result,', u'there', u'is', u'an', u'understandable', u'tendency', u'to', u'assume', u'that', u'the', u'problem', u'we', u'face', u'is', u'one', u'of', u'quantity:', u'we', u'simply', u'do', u'not', u'have', u'enough', u'free', u'time.', u'\u201cif', u'i', u'could', u'just', u'get', u'a', u'few', u'more', u'hours', u'off', u'work', u'each', u'week,\u201d', u'you', u'might', u'think,', u'\u201ci', u'would', u'be', u'happier.\u201d', u'this', u'may', u'be', u'true.', u'but']
###Markdown
Example: Regular Expression TokenizationBy applying the regular expression tokenizer we can more highly tune our tokenizer to yield the types of tokens useful for our data. Here we return a list of word tokens without punctuation.
###Code
from nltk.tokenize import RegexpTokenizer
re_tokenizer = RegexpTokenizer(r'\w+')
nyt_re_tokens = re_tokenizer.tokenize(article.decode('utf-8').lower())
print nyt_re_tokens[:100]
###Output
[u'americans', u'work', u'some', u'of', u'the', u'longest', u'hours', u'in', u'the', u'western', u'world', u'and', u'many', u'struggle', u'to', u'achieve', u'a', u'healthy', u'balance', u'between', u'work', u'and', u'life', u'as', u'a', u'result', u'there', u'is', u'an', u'understandable', u'tendency', u'to', u'assume', u'that', u'the', u'problem', u'we', u'face', u'is', u'one', u'of', u'quantity', u'we', u'simply', u'do', u'not', u'have', u'enough', u'free', u'time', u'if', u'i', u'could', u'just', u'get', u'a', u'few', u'more', u'hours', u'off', u'work', u'each', u'week', u'you', u'might', u'think', u'i', u'would', u'be', u'happier', u'this', u'may', u'be', u'true', u'but', u'the', u'situation', u'i', u'believe', u'is', u'more', u'complicated', u'than', u'that', u'as', u'i', u'discovered', u'in', u'a', u'study', u'that', u'i', u'published', u'with', u'my', u'colleague', u'chaeyoon', u'lim', u'in', u'the']
###Markdown
***Critical thoughts:*** Decisions about tokens can be difficult. In general its best to start with common sense, intuition, and domain knowledge to start, and iterate based on overall model performance.2. Stop WordsDepending on the application, many words provide little value when building an NLP model. Moreover, they may provide a source of "distraction" for models since model capacity is used to understand words with low information content. Accordingly, these are termed stop words. Examples of stop words include pronouns, articles, prepositions and conjunctions, but there are many other words, or non meaningful tokens, that you may wish to remove. Stop words can be determined and handled in many different ways, including:Using a list of words determined a priori - either a standard list from the NLTK package or one modified from such a list based on domain knowledge of a particular subjectSorting the terms by collection frequency (the total number of times each term appears in the document collection), and then to taking the most frequent terms as a stop list based on semantic content.Using no defined stop list at all, and dealing with text data in a purely statistical manner. In general, search engines do not use stop lists.As you work with your text, you may decide to iterate on this process. When in doubt, it is often a fruitful strategy to try the above bullets in order. See also: [Stop Words](http://nlp.stanford.edu/IR-book/html/htmledition/dropping-common-terms-stop-words-1.html) Example: Stopword CorpusFor this example, we will use the english stopword corpus from NLTK.
###Code
from nltk.corpus import stopwords
# here you can see the words included in the stop words corpus
print stopwords.words('english')
###Output
[u'i', u'me', u'my', u'myself', u'we', u'our', u'ours', u'ourselves', u'you', u'your', u'yours', u'yourself', u'yourselves', u'he', u'him', u'his', u'himself', u'she', u'her', u'hers', u'herself', u'it', u'its', u'itself', u'they', u'them', u'their', u'theirs', u'themselves', u'what', u'which', u'who', u'whom', u'this', u'that', u'these', u'those', u'am', u'is', u'are', u'was', u'were', u'be', u'been', u'being', u'have', u'has', u'had', u'having', u'do', u'does', u'did', u'doing', u'a', u'an', u'the', u'and', u'but', u'if', u'or', u'because', u'as', u'until', u'while', u'of', u'at', u'by', u'for', u'with', u'about', u'against', u'between', u'into', u'through', u'during', u'before', u'after', u'above', u'below', u'to', u'from', u'up', u'down', u'in', u'out', u'on', u'off', u'over', u'under', u'again', u'further', u'then', u'once', u'here', u'there', u'when', u'where', u'why', u'how', u'all', u'any', u'both', u'each', u'few', u'more', u'most', u'other', u'some', u'such', u'no', u'nor', u'not', u'only', u'own', u'same', u'so', u'than', u'too', u'very', u's', u't', u'can', u'will', u'just', u'don', u'should', u'now', u'd', u'll', u'm', u'o', u're', u've', u'y', u'ain', u'aren', u'couldn', u'didn', u'doesn', u'hadn', u'hasn', u'haven', u'isn', u'ma', u'mightn', u'mustn', u'needn', u'shan', u'shouldn', u'wasn', u'weren', u'won', u'wouldn']
###Markdown
Let's remove the stop words and compare to our original list of tokens from our regular expression tokenizer.
###Code
cleaned_tokens = []
stop_words = set(stopwords.words('english'))
for token in nyt_re_tokens:
if token not in stop_words:
cleaned_tokens.append(token)
print 'Number of tokens before removing stop words: %d' % len(nyt_re_tokens)
print 'Number of tokens after removing stop words: %d' % len(cleaned_tokens)
###Output
Number of tokens before removing stop words: 825
Number of tokens after removing stop words: 405
###Markdown
You can see that by removing stop words, we now have less than half the number of tokens as our original list. Taking a peek at the cleaned tokens, we can see that a lot of the information that makes sentences human-readable has been lost, but the key nouns, verbs, adjectives, and adverbs remain.
###Code
print cleaned_tokens[:50]
###Output
[u'americans', u'work', u'longest', u'hours', u'western', u'world', u'many', u'struggle', u'achieve', u'healthy', u'balance', u'work', u'life', u'result', u'understandable', u'tendency', u'assume', u'problem', u'face', u'one', u'quantity', u'simply', u'enough', u'free', u'time', u'could', u'get', u'hours', u'work', u'week', u'might', u'think', u'would', u'happier', u'may', u'true', u'situation', u'believe', u'complicated', u'discovered', u'study', u'published', u'colleague', u'chaeyoon', u'lim', u'journal', u'sociological', u'science', u'shortage', u'free']
###Markdown
***Critical thoughts:*** You may notice from looking at this sample, however, that a potentially meaningful word has been removed: __'not'__. This stopword corpus includes the words 'no', 'nor', and 'not' and so by removing these words we have removed negation. 3. Stemming and LemmatizationThe overarching goal of stemming and lemmatization is to reduce differential forms of a word to a common base form. By performing stemming and lemmitzation, the count occurrences of words are can be very informative when further processing the data (such as the vectorization, see below). In deciding how to reduce the differential forms of words, you will want to consider how much information you will need to retain for your application. For instance, in many cases markers of tense and plurality are not informative, and so removing these markers will allow you to reduce the number of features. In other cases, retaining these variations results in better understanding of the underlying content. **Stemming** is the process of representing the word as its root word while removing inflection. For example, the stem of the word 'explained' is 'explain'. By passing this word through a stemming function you would remove the tense inflection. There are multiple approaches to stemming: [Porter stemming](http://snowball.tartarus.org/algorithms/porter/stemmer.html), [Porter2 (snowball) stemming](http://snowball.tartarus.org/algorithms/english/stemmer.html), and [Lancaster stemming](http://www.nltk.org/_modules/nltk/stem/lancaster.html). You can read more in depth about these approaches.
###Code
from nltk.stem.porter import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.lancaster import LancasterStemmer
porter = PorterStemmer()
snowball = SnowballStemmer('english')
lancaster = LancasterStemmer()
print 'Porter Stem of "explanation": %s' % porter.stem('explanation')
print 'Porter2 (Snowball) Stem of "explanation": %s' %snowball.stem('explanation')
print 'Lancaster Stem of "explanation": %s' %lancaster.stem('explanation')
###Output
Porter Stem of "explanation": explan
Porter2 (Snowball) Stem of "explanation": explan
Lancaster Stem of "explanation": expl
###Markdown
While stemming is a heuristic process that selectively removes the end of words, lemmatization is a more sophisticated process that can account for variables such as part-of-speech, meaning, and context within a document or neighboring sentences.
###Code
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print lemmatizer.lemmatize('explanation')
###Output
explanation
###Markdown
In this example, lemmatization retains a bit more information than stemming. Within stemming, the Lancaster method is more aggressive than Porter and Snowball. Remember that this step allows us to reduce words to a common base form so that we can reduce our feature space and perform counting of occurrences. It will depend on your data and your application as to how much information you need to retain.As a good starting point, see also: [Stemming and lemmatization](http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html) Example: Stemming and LemmatizationTo illustrate the difference between stemming and lemmatization, we will apply both methods to our articles.
###Code
stemmed_tokens = []
lemmatized_tokens = []
for token in cleaned_tokens:
stemmed_tokens.append(snowball.stem(token))
lemmatized_tokens.append(lemmatizer.lemmatize(token))
###Output
_____no_output_____
###Markdown
Let's take a look at a sample of our __stemmed__ tokens
###Code
print stemmed_tokens[:50]
###Output
[u'american', u'work', u'longest', u'hour', u'western', u'world', u'mani', u'struggl', u'achiev', u'healthi', u'balanc', u'work', u'life', u'result', u'understand', u'tendenc', u'assum', u'problem', u'face', u'one', u'quantiti', u'simpli', u'enough', u'free', u'time', u'could', u'get', u'hour', u'work', u'week', u'might', u'think', u'would', u'happier', u'may', u'true', u'situat', u'believ', u'complic', u'discov', u'studi', u'publish', u'colleagu', u'chaeyoon', u'lim', u'journal', u'sociolog', u'scienc', u'shortag', u'free']
###Markdown
In contrast, here are the same tokens in their __lemmatized__ form
###Code
print lemmatized_tokens[:50]
###Output
[u'american', u'work', u'longest', u'hour', u'western', u'world', u'many', u'struggle', u'achieve', u'healthy', u'balance', u'work', u'life', u'result', u'understandable', u'tendency', u'assume', u'problem', u'face', u'one', u'quantity', u'simply', u'enough', u'free', u'time', u'could', u'get', u'hour', u'work', u'week', u'might', u'think', u'would', u'happier', u'may', u'true', u'situation', u'believe', u'complicated', u'discovered', u'study', u'published', u'colleague', u'chaeyoon', u'lim', u'journal', u'sociological', u'science', u'shortage', u'free']
###Markdown
Looking at the above, it is clear different strategies for generating tokens might retain different information. Moreover, given the transformations stemming and lemmatization apply there will be a different amount of tokens retained in the overall vocabularity.***Critical thoughts:*** It's best to apply intuition and domain knowledge to get a feel for which strategy(ies) to begin with. In short, it's usually a good idea to optimize for smaller numbers of unique tokens and greater interpretibility as long as it doesn't disagree with common sense and (sometimes more importantly) overall performance. 4. Vectorization Often in natural language processing we want to represent our text as a quantitative set of features for subsequent analysis. We can refer to this as vectorization. One way to generate features from text is to count the occurrences words. This apporoach is often referred to as a bag of words approach. For the example of our article, we can represent the document as a vector of counts for each token. We can do the same for the other articles, and in the end we would have a set of vectors - with each vector representing an article. These vectors could then be used in the next phase of analysis (e.g. classification, document clustering, ...). When we apply a count vectorizer to our corpus of articles, the output will be a matrix with the number of rows corresponding to the number of articles and the number of columns corresponding to the number of unique tokens across (across articles). You can imagine that if we have many articles in a corpus of varied content, the number of unique tokens could get quite large. Some of our preprocessing steps address this issue. In particular, the stemming/lemmatization step reduces the number of unique versions of a word that appear in the corpus. Additionally it is possible to reduce the number of features by removing words that appear least frequently, or by removing words that are common to each article and therefore may not be informative for subsequent analysis. Count Vectorization of ArticleFor this example we will use the stemmed tokens from our article. We will need to join the tokens together to represent one article.Check out the documentation for [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) in scikit-learn. You will see that there are a number of parameters that you can specify - including the maximum number of features. Depending on your data, you may choose to restrict the number of features by removing words that appear with least frequency (and this number may be set by cross-validation).**Example:**
###Code
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
# stem our example article
stemmed_article = ' '.join(wd for wd in stemmed_tokens)
# performe a count-based vectorization of the document
article_vect = vectorizer.fit_transform([stemmed_article])
###Output
_____no_output_____
###Markdown
As shown below, we can see that the five most frequently occuring words in this article, titled "You Don't Need More Free Time," are __time, work, weekend, people,__ and __well__:
###Code
freqs = [(word, article_vect.getcol(idx).sum()) for word, idx in vectorizer.vocabulary_.items()]
print 'top 5 words for op-ed titled "%s"' % df['title'][939]
print sorted (freqs, key = lambda x: -x[1])[0:5]
###Output
top 5 words for op-ed titled "You Don’t Need More Free Time"
[(u'time', 19), (u'work', 18), (u'weekend', 12), (u'peopl', 11), (u'well', 7)]
###Markdown
Now you can imagine that we could apply this count vectorizer to all of our articles. We could then use the word count vectors in a number of subsequent analyses (e.g. exploring the topics appearing across the corpus). Term Frequency - Inverse Document Frequency (tf-idf) VectorizationWe have mentioned that you may want to limit the number of features in your vector, and that one way to do this would be to only take the tokens that occur most frequently. Imagine again the above example of trying to differentiate between supporting and opposing documents in a political context. If the documents are all related to the same political initiative, then very likely there will be words related to the intitiative that appear in both documents and thus have high frequency counts. If we cap the number of features by frequency, these words would likely be included, but will they be the most informative when trying to differentiate documents?For many such cases we may want to use a vectorization approach called **term frequency - inverse document frequency (tf-idf)**. [Tf-idf](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) allows us to weight words by their importance by considering how often a word appears in a given document and throughout the corpus. That is, if a word occurs frequently in a (preprocessed) document it should be important, yet if it also occurs frequently accross many documents it is less informative and differentiating.In our example, the name of the inititative would likely appear numerous times in each document for both opposing and supporting positions. Because the name occurs across all documents, this word would be down weighted in importance. For a more in depth read, these posts go into a bit more depth about text vectorization: [tf-idf part 1](http://blog.christianperone.com/2011/09/machine-learning-text-feature-extraction-tf-idf-part-i/) and [tf-idf part 2](http://blog.christianperone.com/2011/10/machine-learning-text-feature-extraction-tf-idf-part-ii/).**Example:**To utilize tf-idf, we will add in additional articles from our dataset. We will need to preprocess the text from these articles and then we can use [TfidfVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) on our stemmed tokens.To perform tf-idf tranformations, we first need occurence vectors for all our articles using (like the above) count vectorizer. From here, we could use scikit-learn's [TfidfTransformer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.htmlsklearn.feature_extraction.text.TfidfTransformer) to transform our matrix into a tf-idf matrix.For a more complete example, consider a preprocessing pipeline where we first tokenize using regexp, remove standard stop words, perform stemming, and finally convert to tf-idf vectors:
###Code
def preprocess_article_content(text_df):
"""
Simple preprocessing pipeline which uses RegExp, sets basic token requirements, and removes stop words.
"""
print 'preprocessing article text...'
# tokenizer, stops, and stemmer
tokenizer = RegexpTokenizer(r'\w+')
stop_words = set(stopwords.words('english')) # can add more stop words to this set
stemmer = SnowballStemmer('english')
# process articles
article_list = []
for row, article in enumerate(text_df['full_text']):
cleaned_tokens = []
tokens = tokenizer.tokenize(article.decode('utf-8').lower())
for token in tokens:
if token not in stop_words:
if len(token) > 0 and len(token) < 20: # removes non words
if not token[0].isdigit() and not token[-1].isdigit(): # removes numbers
stemmed_tokens = stemmer.stem(token)
cleaned_tokens.append(stemmed_tokens)
# add process article
article_list.append(' '.join(wd for wd in cleaned_tokens))
# echo results and return
print 'preprocessed content for %d articles' % len(article_list)
return article_list
# process articles
processed_article_list = preprocess_article_content(df)
# vectorize the articles and compute count matrix
from sklearn.feature_extraction.text import TfidfVectorizer
tf_vectorizer = TfidfVectorizer()
tfidf_article_matrix = tf_vectorizer.fit_transform(processed_article_list)
print tfidf_article_matrix.shape
###Output
preprocessing article text...
preprocessed content for 947 articles
(947, 19702)
|
Modulo2/Notebooks_Clase/Actividad 5/Actividad_5_RegLinealMultiple.ipynb | ###Markdown
Actividad 5: Aprendizaje Supervizado Reg. Líneal ___ Nombre: Considere el dataset "Fish.csv". Este conjunto de datos es un registro de 7 especies diferentes de peces qué son comunmente vendidos en un mercado. > Utilizando las librerías de sklearn, obtenga un modelo predictivo de regresión múltiple para predecir el peso estimado de los peces.> Predecir el peso de nuevas muestras.> Elegir tres variable y graficar.
###Code
import pandas as pd
fish = pd.read_csv('Data/Fish.csv')
fish.head()
###Output
_____no_output_____ |
4. Sequence Models/Week 1/Building a Recurrent Neural Network - Step by Step/.ipynb_checkpoints/Building a Recurrent Neural Network - Step by Step - v3-checkpoint.ipynb | ###Markdown
Building your Recurrent Neural Network - Step by StepWelcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirection RNN can take context from both the past and the future. **Notation**:- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input.- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step. - Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ timestep of example $i$. - Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Let's first import all the packages that you will need during this assignment.
###Code
import numpy as np
from rnn_utils import *
###Output
_____no_output_____
###Markdown
1 - Forward propagation for the basic Recurrent Neural NetworkLater this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$. **Figure 1**: Basic RNN model Here's how you can implement an RNN: **Steps**:1. Implement the calculations needed for one time-step of the RNN.2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time. Let's go! 1.1 - RNN cellA Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell. **Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$ **Exercise**: Implement the RNN-cell described in Figure (2).**Instructions**:1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: `softmax`.3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache4. Return $a^{\langle t \rangle}$ , $y^{\langle t \rangle}$ and cacheWe will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
###Code
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh( np.matmul(Wax,xt) + np.matmul(Waa,a_prev) + ba )
# compute output of the current cell using the formula given above
yt_pred = softmax( np.matmul(Wya,a_next) + by )
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
###Output
a_next[4] = [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978
-0.18887155 0.99815551 0.6531151 0.82872037]
a_next.shape = (5, 10)
yt_pred[1] = [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212
0.36920224 0.9966312 0.9982559 0.17746526]
yt_pred.shape = (2, 10)
###Markdown
**Expected Output**: **a_next[4]**: [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037] **a_next.shape**: (5, 10) **yt[1]**: [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526] **yt.shape**: (2, 10) 1.2 - RNN forward pass You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step. **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. **Exercise**: Code the forward propagation of the RNN described in Figure (3).**Instructions**:1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.2. Initialize the "next" hidden state as $a_0$ (initial hidden state).3. Start looping over each time step, your incremental index is $t$ : - Update the "next" hidden state and the cache by running `rnn_cell_forward` - Store the "next" hidden state in $a$ ($t^{th}$ position) - Store the prediction in y - Add the cache to the list of caches4. Return $a$, $y$ and caches
###Code
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and parameters["Wya"]
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
a = np.zeros( (n_a, m, T_x), dtype=np.float )
y_pred = np.zeros( (n_y, m, T_x), dtype=np.float )
# Initialize a_next (≈1 line)
a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
###Output
a[4][1] = [-0.99999375 0.77911235 -0.99861469 -0.99833267]
a.shape = (5, 10, 4)
y_pred[1][3] = [ 0.79560373 0.86224861 0.11118257 0.81515947]
y_pred.shape = (2, 10, 4)
caches[1][1][3] = [-1.1425182 -0.34934272 -0.20889423 0.58662319]
len(caches) = 2
###Markdown
**Expected Output**: **a[4][1]**: [-0.99999375 0.77911235 -0.99861469 -0.99833267] **a.shape**: (5, 10, 4) **y[1][3]**: [ 0.79560373 0.86224861 0.11118257 0.81515947] **y.shape**: (2, 10, 4) **cache[1][1][3]**: [-1.1425182 -0.34934272 -0.20889423 0.58662319] **len(cache)**: 2 Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$). In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. 2 - Long Short-Term Memory (LSTM) networkThis following figure shows the operations of an LSTM-cell. **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps. About the gates - Forget gateFor the sake of this illustration, let's assume we are reading words in a piece of text, and want use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate let's us do this: $$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information. - Update gateOnce we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate: $$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\{t\}}] + b_u)\tag{2} $$ Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$. - Updating the cell To update the new subject we need to create a new vector of numbers that we can add to our previous cell state. The equation we use is: $$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$Finally, the new cell state is: $$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$ - Output gateTo decide which outputs we will use, we will use the following two formulas: $$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$ $$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$Where in equation 5 you decide what to output using a sigmoid function and in equation 6 you multiply that by the $\tanh$ of the previous state. 2.1 - LSTM cell**Exercise**: Implement the LSTM cell described in the Figure (3).**Instructions**:1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$2. Compute all the formulas 1-6. You can use `sigmoid()` (provided) and `np.tanh()`.3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
###Code
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilde),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
concat = np.concatenate((a_prev, xt), axis=0)
#concat[: n_a, :] = None
#concat[n_a :, :] = None
# Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
ft = sigmoid( np.matmul( Wf, concat ) + bf )
it = sigmoid( np.matmul( Wi, concat ) + bi )
cct = np.tanh( np.matmul( Wc, concat ) + bc )
c_next = ft*c_prev + it*cct
ot = sigmoid( np.matmul( Wo, concat )+ bo )
a_next = ot*np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(np.matmul(Wy,a_next)+by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
###Output
a_next[4] = [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482
0.76566531 0.34631421 -0.00215674 0.43827275]
a_next.shape = (5, 10)
c_next[2] = [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942
0.76449811 -0.0981561 -0.74348425 -0.26810932]
c_next.shape = (5, 10)
yt[1] = [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381
0.00943007 0.12666353 0.39380172 0.07828381]
yt.shape = (2, 10)
cache[1][3] = [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874
0.07651101 -1.03752894 1.41219977 -0.37647422]
len(cache) = 10
###Markdown
**Expected Output**: **a_next[4]**: [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275] **a_next.shape**: (5, 10) **c_next[2]**: [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932] **c_next.shape**: (5, 10) **yt[1]**: [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381] **yt.shape**: (2, 10) **cache[1][3]**: [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422] **len(cache)**: 10 2.2 - Forward pass for LSTMNow that you have implemented one step of an LSTM, you can now iterate this over this using a for-loop to process a sequence of $T_x$ inputs. **Figure 5**: LSTM over multiple time-steps. **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. **Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
###Code
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (4).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the update gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the output gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of x and parameters['Wy'] (≈2 lines)
n_x, m, T_x = x.shpae
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = None
c = None
y = None
# Initialize a_next and c_next (≈2 lines)
a_next = None
c_next = None
# loop over all time-steps
for t in range(None):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
a_next, c_next, yt, cache = None
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = None
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = None
# Save the value of the next cell state (≈1 line)
c[:,:,t] = None
# Append the cache into caches (≈1 line)
None
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
###Output
_____no_output_____
###Markdown
**Expected Output**: **a[4][3][6]** = 0.172117767533 **a.shape** = (5, 10, 7) **y[1][4][3]** = 0.95087346185 **y.shape** = (2, 10, 7) **caches[1][1][1]** = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165] **c[1][2][1]** = -0.855544916718 **len(caches)** = 2 Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The rest of this notebook is optional, and will not be graded. 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. 3.1 - Basic RNN backward passWe will start by computing the backward pass for the basic RNN-cell. **Figure 6**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain-rule from calculus. The chain-rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. Deriving the one step backward functions: To compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand. The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $ \text{sech}(x)^2 = 1 - \tanh(x)^2$Similarly for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$. The final two equations also follow same rule and are derived using the $\tanh$ derivative. Note that the arrangement is done in a way to get the same dimensions to match.
###Code
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_cell_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
dtanh = None
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = None
dWax = None
# compute the gradient with respect to Waa (≈2 lines)
da_prev = None
dWaa = None
# compute the gradient with respect to b (≈1 line)
dba = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
b = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = -0.460564103059 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = 0.0842968653807 **gradients["da_prev"].shape** = (5, 10) **gradients["dWax"][3][1]** = 0.393081873922 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = -0.28483955787 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [ 0.80517166] **gradients["dba"].shape** = (5, 1) Backward pass through the RNNComputing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.**Instructions**:Implement the `rnn_backward` function. Initialize the return variables with zeros first and then loop through all the time steps while calling the `rnn_cell_backward` at each time timestep, update the other variables accordingly.
###Code
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = None
(a1, a0, x1, parameters) = None
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈6 lines)
dx = None
dWax = None
dWaa = None
dba = None
da0 = None
da_prevt = None
# Loop through all the time steps
for t in reversed(range(None)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = None
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
dx[:, :, t] = None
dWax += None
dWaa += None
dba += None
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dx"][1][2]** = [-2.07101689 -0.59255627 0.02466855 0.01483317] **gradients["dx"].shape** = (3, 10, 4) **gradients["da0"][2][3]** = -0.314942375127 **gradients["da0"].shape** = (5, 10) **gradients["dWax"][3][1]** = 11.2641044965 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = 2.30333312658 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [-0.74747722] **gradients["dba"].shape** = (5, 1) 3.2 - LSTM backward pass 3.2.1 One Step backwardThe LSTM backward pass is slighltly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) 3.2.2 gate derivatives$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$$$d\tilde c^{\langle t \rangle} = dc_{next}*\Gamma_u^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * i_t * da_{next} * \tilde c^{\langle t \rangle} * (1-\tanh(\tilde c)^2) \tag{8}$$$$d\Gamma_u^{\langle t \rangle} = dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$$$d\Gamma_f^{\langle t \rangle} = dc_{next}*\tilde c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$ 3.2.3 parameter derivatives $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal (axis= 1) axis on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should have the `keep_dims = True` option.Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$Here, the weights for equations 13 are the first n_a, (i.e. $W_f = W_f[:n_a,:]$ etc...)$$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$where the weights for equation 15 are from n_a to the end, (i.e. $W_f = W_f[n_a:,:]$ etc...)**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
###Code
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m, T_x)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = None
n_a, m = None
# Compute gates related derivatives, you can find their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dot = None
dcct = None
dit = None
dft = None
# Code equations (7) to (10) (≈4 lines)
dit = None
dft = None
dot = None
dcct = None
# Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
da_prev = None
dc_prev = None
dxt = None
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = 3.23055911511 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.0639621419711 **gradients["da_prev"].shape** = (5, 10) **gradients["dc_prev"][2][3]** = 0.797522038797 **gradients["dc_prev"].shape** = (5, 10) **gradients["dWf"][3][1]** = -0.147954838164 **gradients["dWf"].shape** = (5, 8) **gradients["dWi"][1][2]** = 1.05749805523 **gradients["dWi"].shape** = (5, 8) **gradients["dWc"][3][1]** = 2.30456216369 **gradients["dWc"].shape** = (5, 8) **gradients["dWo"][1][2]** = 0.331311595289 **gradients["dWo"].shape** = (5, 8) **gradients["dbf"][4]** = [ 0.18864637] **gradients["dbf"].shape** = (5, 1) **gradients["dbi"][4]** = [-0.40142491] **gradients["dbi"].shape** = (5, 1) **gradients["dbc"][4]** = [ 0.25587763] **gradients["dbc"].shape** = (5, 1) **gradients["dbo"][4]** = [ 0.13893342] **gradients["dbo"].shape** = (5, 1) 3.3 Backward pass through the LSTM RNNThis part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally return a dictionary with the new gradients. **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step call `lstm_cell_backward` and update the your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored.
###Code
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the save gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = None
n_x, m = None
# initialize the gradients with the right sizes (≈12 lines)
dx = None
da0 = None
da_prevt = None
dc_prevt = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# loop back over the whole sequence
for t in reversed(range(None)):
# Compute all gradients using lstm_cell_backward
gradients = None
# Store or add the gradient to the parameters' previous step's gradient
dx[:,:,t] = None
dWf = None
dWi = None
dWc = None
dWo = None
dbf = None
dbi = None
dbc = None
dbo = None
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = None
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____ |
_build/jupyter_execute/Proj/StudentProjects/SarahThayer.ipynb | ###Markdown
Predict Position of Player with Seasonal StatsAuthor: Sarah ThayerCourse Project, UC Irvine, Math 10, W22 IntroductionMy project is exploring two Kaggle Datasets of NBA stats to predict their listed position. Two datasets are needed. One contains the players and their listed positions. The second contains season stats, height, and weight. We. merge the two for a complete dataset. Main portion of the project(You can either have all one section or divide into multiple sections)
###Code
import numpy as np
import pandas as pd
import altair as alt
###Output
_____no_output_____
###Markdown
Load First Dataset NBA Stats [found here](https://www.kaggle.com/mcamli/nba17-18?select=nba.csv). The columns we are interested in extracting are `Player`, `Team`, `Pos`, `Age`.Display shape of the dataframe to confirm we have enough data points.
###Code
df_position = pd.read_csv("nba.csv", sep = ',' ,encoding = 'latin-1')
df_position = df_position.loc[:,df_position.columns.isin(['Player', 'Tm', 'Pos', 'Age']) ]
df_position.head()
df_position.shape
###Output
_____no_output_____
###Markdown
Clean First DatasetRename column `TM` to `Team`.NBA Players have unique player id in `Player` column. Remove player ID to view names. ( i.e."Alex Abrines\abrinal01" remove the unique player id after the "\\")NBA players that have been traded mid-season appear twice. Drop `Player` duplicates from the dataframe and check shape.
###Code
df_position = df_position.rename(columns={'Tm' : 'Team'})
df_position['Player'] = df_position['Player'].map(lambda x: x.split('\\')[0])
df_pos_unique = df_position[~df_position.duplicated(subset=['Player'])]
df_pos_unique.shape
df_pos_unique.head()
###Output
_____no_output_____
###Markdown
Load Second DatasetNBA stats [found here](https://www.kaggle.com/jacobbaruch/basketball-players-stats-per-season-49-leagues).Large dataset of 20 years of stats form 49 differrent leagues. Parse dataframe for the the relevant data in the NBA league during the 2017 - 2018 regular season. Then our new dataframe contains height and weight.
###Code
df = pd.read_csv("players_stats_by_season_full_details.csv", encoding='latin-1' )
df= df[(df["League"] == 'NBA') & (df["Season"] == '2017 - 2018') & (df["Stage"] == 'Regular_Season')]
df_hw = df.loc[:,~df.columns.isin(['Rk', 'League', 'Season', 'Stage', 'birth_month', 'birth_date', 'height', 'weight',
'nationality', 'high_school', 'draft_round', 'draft_pick', 'draft_team'])]
df_hw.shape
###Output
_____no_output_____
###Markdown
Clean Second DatasetDrop duplicates from dataframe with `Player` from the dataframe containing height and weight.
###Code
df_hw_unique = df_hw[~df_hw.duplicated(subset=['Player'])]
df_hw_unique.shape
###Output
_____no_output_____
###Markdown
Prepare Merged Data- Merge First and Second Dataset - Encode the NBA listed positions Confirm it's the same player by matching name and team.
###Code
df_merged = pd.merge(df_pos_unique,df_hw_unique, on = ['Player','Team'])
###Output
_____no_output_____
###Markdown
One-Hot EncodingEncode the positions (`strings`) into numbers (`ints`).
###Code
enc = {'PG' : 1, 'SG' : 2, 'SF': 3, 'PF': 4, 'C':5}
df_merged["Pos_enc"] = df_merged["Pos"].map(enc)
df_merged
###Output
_____no_output_____
###Markdown
Find Best ModelFeature Selection: Data has player name, team, position, height, weight, and 20+ seasonal stats. Not all features are relevant to predicting NBA Position. Test with different varations of features. Iterate through `combinations()` of k columns in `cols`. Combinations and estimating the counts of the training trials can be found here: ["...the number of k-element subsets (or k-combinations) of an n-element set"](https://en.wikipedia.org/wiki/Binomial_coefficient).KNN Optimization: On each combination of possible training features, iterate through range of ints for possible `n_neighbors`.Log Results: Create results dictionary to store `features`, `n_neighbors`, `log_loss`, and `classifier` for each training iteration. Iterate through results dictionary to find smallest `log_loss` along with features used, `n_neighbors` used, and the classifier object.```results = { trial_num: { 'features': [], 'n_neighbors':, 'log_loss': 'classifier': }}```
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import log_loss
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import warnings
warnings.filterwarnings('ignore')
from itertools import combinations
cols = ['Pos_enc','FGM', 'FGA', '3PM','3PA','FTM', 'FTA','TOV','PF','ORB','DRB',
'REB','AST','STL', "BLK",'PTS','height_cm','weight_kg'
]
trial_num = 0 # count of training attempts
loss_min = False # found log_loss minimum
n_search = True # searching for ideal n neighbors
results = {} # dictionary of results per training attempt
found_clf = False
for count in range(12, 18):
print(f"Testing: {len(cols)} Choose {count}")
for tup in combinations(cols,count): # iterate through combination of columns
for i in range(3,6): # iterate through options of n neighbors
if n_search:
X = df_merged[list(tup)]
y = df_merged["Pos"]
scaler = StandardScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
X_scaled_train, X_scaled_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
clf = KNeighborsClassifier(n_neighbors=i)
clf.fit(X_scaled_train,y_train)
X["pred"] = clf.predict(X_scaled)
probs = clf.predict_proba(X_scaled_test)
loss = log_loss( y_true=y_test ,y_pred=probs , labels= clf.classes_)
results[trial_num] = {
'features': list(tup) ,
'n_neighbors': i,
'log_loss': loss,
'classifier': clf
}
trial_num+=1
if loss < .7:
n_search = False
loss_min = True
found_clf = True
print(f"Found ideal n_neighbors")
break
if (n_search == False) or (loss<.6):
loss_min = True
print('Found combination of features')
break
if loss_min:
print('Return classifier')
break
if not found_clf:
print(f"Couldn't find accurate classifier. Continue to find best results.")
###Output
Testing: 18 Choose 12
Found ideal n_neighbors
Found combination of features
eturn classifier
###Markdown
Return Best ResultsFind the training iteration with the best `log_loss`. Return the classifier and print the features selected, neighbors used, and corresponding `log_loss`.
###Code
min_log_loss = results[0]['log_loss']
for key in results:
# key = trial number
iter_features = results[key]['features']
iter_n_neighbors = results[key]['n_neighbors']
iter_log_loss = results[key]['log_loss']
if iter_log_loss < min_log_loss:
min_log_loss = iter_log_loss
min_key=key
print(f"Total Attempts: {len(results)}")
print(f"Best log_loss: {results[min_key]['log_loss']}")
print(f"Best features: {results[min_key]['features']}")
print(f"Number of features: {len(results[min_key]['features'])}")
print(f"Ideal n_neighbors: {results[min_key]['n_neighbors']}")
print(f"Best classifier: {results[min_key]['classifier']}")
###Output
Total Attempts: 54609
Best log_loss: 0.6832883413753273
Best features: ['3PM', '3PA', 'FTM', 'FTA', 'TOV', 'ORB', 'DRB', 'REB', 'AST', 'PTS', 'height_cm', 'weight_kg']
Number of features: 12
Ideal n_neighbors: 5
Best classifier: KNeighborsClassifier()
###Markdown
Predict position of NBA players on entire dataset.Access best classifier in `results` dict by `min_key`.
###Code
X = df_merged[results[min_key]['features']]
scaler = StandardScaler()
scaler.fit(X)
X_scaled = scaler.transform(X)
clf = results[min_key]['classifier']
df_merged['Preds'] = clf.predict(X)
###Output
_____no_output_____
###Markdown
Vizualize ResultsDisplay all the Centers. Our predicted values show good results at identifying Centers.Look at a true Point Gaurd. Chris Paul is a good example of a Point Gaurd.Look at Lebron James. In 2018, for Clevland, Kaggle has him listed has a Power Foward.
###Code
df_merged[df_merged['Pos_enc']==5]
df_merged[df_merged['Player']=='Chris Paul']
df_merged[df_merged['Player']=='LeBron James']
###Output
_____no_output_____
###Markdown
Evaluate PerformanceEvalute the performance of classifier using the `log_loss` metric.
###Code
X_scaled_train, X_scaled_test, y_train, y_test = train_test_split(X,y,test_size=0.2)
probs = clf.predict_proba(X_scaled_test)
log_loss(y_true=y_test ,y_pred=probs , labels= clf.classes_)
loss = log_loss(y_true=y_test ,y_pred=probs , labels= clf.classes_)
loss
###Output
_____no_output_____
###Markdown
1st row mostly Pink some Light Blue. We're good at determining Point Guards, sometimes Small Forwards trigger false positive.5th row mostly Blue with some Yellow.Makes sense. Great at determining Centers. Sometimes Power Forwards trigger false positive.
###Code
chart = alt.Chart(df_merged).mark_circle().encode(
x=alt.X('height_cm:O', axis=alt.Axis(title='Height in cm')),
y=alt.X('Pos_enc:O', axis=alt.Axis(title='Encoded')),
color = alt.Color("Preds", title = "Positions"),
).properties(
title = f"Predicted NBA Positions",
)
chart
###Output
_____no_output_____ |
token_cnn_experiments.ipynb | ###Markdown
* https://pytorch.org/tutorials/beginner/data_loading_tutorial.html* https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html* https://pytorch.org/tutorials/beginner/saving_loading_models.html
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import imgaug
import numpy as np
import os
import pandas as pd
import time
import pickle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import cv2
from tqdm.notebook import tqdm
from IPython.display import display, HTML
from torch.utils.data.sampler import SubsetRandomSampler
from pathlib import Path
import json
IMAGE_SIZE = 96
###Output
_____no_output_____
###Markdown
After unpickling, `train_set` and `test_set` will be lists, where each element is a dictionary that has keys `features` and `labels`. `features` will be a 1D numpy array of 1's and 0's, with size `box_size * box_size` where `box_size` is the size of the image. `label` will be a one-hot-encoded array. Generating Dataset
###Code
class MathTokensDataset(Dataset):
"""
Dataset containing math tokens extracted from the CROHME 2011, 2012, and 2013 datasets.
"""
def __init__(self, pickle_file, image_size, transform=None):
"""
Args:
pickle_file (string): Path to dataset pickle file.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
with open(pickle_file, 'rb') as f:
self.df_data = pd.DataFrame(pickle.load(f))
# Reshape features to 3D tensor.
self.df_data['features'] = self.df_data['features'].apply(lambda vec: vec.reshape(1, image_size, image_size))
# # Convert one-hot labels to numbers (PyTorch expects this).
# self.df_data['label'] = self.df_data['label'].apply(lambda ohe_vec: np.argmax(ohe_vec))
self.transform = transform
def __len__(self):
return len(self.df_data)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
sample = {
'features': self.df_data.iloc[idx]['features'],
'label': self.df_data.iloc[idx]['label']
}
if self.transform:
sample = self.transform(sample)
return sample
class BaselineTokenCNN(nn.Module):
def __init__(self, num_classes):
super(BaselineTokenCNN, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=7)
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv2 = nn.Conv2d(in_channels=4, out_channels=8, kernel_size=5)
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.conv3 = nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3)
self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
self.fc1 = nn.Linear(16 * 9 * 9, 600)
self.fc2 = nn.Linear(600, 200)
self.fc3 = nn.Linear(200, num_classes)
def forward(self, x):
x = x.float()
x = self.pool1(F.relu(self.conv1(x)))
x = self.pool2(F.relu(self.conv2(x)))
x = self.pool3(F.relu(self.conv3(x)))
x = x.view(-1, 16 * 9 * 9)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# Set device to GPU if available.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device
###Output
_____no_output_____
###Markdown
Mods:1. Changing Optimizers2. Changing NN structure3. ???
###Code
#### 1. Optimizers to try: ###
#we can add more but just wanted to see
optimizer_dict = {"adam": optim.Adam,
"sgd": optim.SGD,
"adamW": optim.AdamW}
optimizer_params_dict = {"adam": {"lr": 0.001,
"weight_decay": 0},
"sgd": {"lr": 0.001,
"momentum": 0.9},
"adamW": {"lr": 0.001,
"weight_decay": 0.01 }}
class Experiment():
def __init__(self, experiment_name, optimizer_class, train_set, val_split, test_set, classes, batch_size, save_dir):
#get runtime:
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#get name for save files:
self.experiment_name = experiment_name
#make CNN
self.net = BaselineTokenCNN(num_classes=len(classes))
self.net.to(device) # Send to GPU.
#make loss
self.criterion = nn.CrossEntropyLoss()
#get optimizer and params:
optimizer = optimizer_dict[optimizer_class]
optimizer_params = optimizer_params_dict[optimizer_class]
#add in the parameters:
optimizer_params["params"] = self.net.parameters()
# print(optimizer_params)
#add in parameters to optimizer:
self.optimizer = optimizer([optimizer_params])
#keep track of train_history
self.train_loss_history = []
print("Model created with optimizer {}".format(optimizer_class))
self.init_dataloaders(train_set, val_split, test_set, batch_size)
print(f'{len(classes)} classes.')
self.history = {
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': []
}
self.save_dir = save_dir
# Save the experiment settings.
exp_dir = os.path.join(self.save_dir, self.experiment_name)
Path(exp_dir).mkdir(parents=True, exist_ok=True)
settings = {
'optimizer': self.optimizer.state_dict(),
'batch_size': batch_size,
'val_split': val_split
}
settings_path = os.path.join(self.save_dir, self.experiment_name, 'settings.json' )
with open(settings_path, 'w') as f:
json.dump(settings, f)
print(f'Initialized experiment \'{self.experiment_name}\'')
def init_dataloaders(self, train_set, val_split, test_set, batch_size):
if val_split is None or val_split == 0:
self.train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=0)
else:
# Split the training set into train/validation.
# Creating data indices for training and validation splits:
num_train = len(train_set)
indices = np.arange(num_train)
split = int(np.floor(val_split * num_train)) # Index to split at.
# Uncomment the line below if you want the train/val split to be different every time.
# np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
# Create PyTorch data samplers and loaders.
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
self.train_loader = torch.utils.data.DataLoader(train_set,
batch_size=batch_size,
sampler=train_sampler)
self.val_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, sampler=val_sampler)
self.test_loader = DataLoader(test_set, batch_size=256, shuffle=False, num_workers=0)
print(f'{len(train_indices)} training examples.')
print(f'{len(val_indices)} validation examples.')
print(f'{len(test_set)} test examples.')
def train(self, max_epochs, patience):
best_val_loss = np.inf
no_up = 0
for epoch in tqdm(range(max_epochs), desc='Max Epochs'):
for i, data in tqdm(enumerate(self.train_loader), total=len(self.train_loader), desc='Training Batches', leave=False):
# Get the inputs and send to GPU if available.
features = data['features'].to(self.device)
labels = data['label'].to(self.device)
# zero the parameter gradients
self.optimizer.zero_grad()
# forward + backward + optimize
outputs = self.net(features)
loss = self.criterion(outputs, labels)
loss.backward()
self.optimizer.step()
train_loss, train_acc = self.evaluate(self.train_loader, tqdm_desc='Eval. Train')
val_loss, val_acc = self.evaluate(self.val_loader, tqdm_desc='Eval. Val')
# Save statistics to history.
self.history['train_loss'].append(train_loss)
self.history['train_acc'].append(train_acc)
self.history['val_loss'].append(val_loss)
self.history['val_acc'].append(val_acc)
if val_loss < best_val_loss:
best_val_loss = val_loss
self.save_checkpoint(epoch, val_loss)
no_up = 0
else:
no_up += 1
if no_up == patience:
self.save_checkpoint(epoch, val_loss)
print(f'Stopping after {epoch} epochs.')
print(f'Early stopping condition met, validation loss did not decrease for {patience} epochs.')
break
def evaluate(self, dataloader, tqdm_desc=''):
num_correct = 0
num_total = 0
total_loss = 0
with torch.no_grad():
for data in tqdm(dataloader, desc=tqdm_desc, leave=False):
# Get the inputs and send to GPU if available.
features = data['features'].to(self.device)
labels = data['label'].to(self.device)
# Get the predictions / loss.
outputs = self.net(features)
_, predicted = torch.max(outputs.data, dim=1)
loss = self.criterion(outputs, labels)
# Update correct/total counts.
num_correct += (predicted == labels).sum().item()
num_total += labels.size()[0]
# Update total loss.
total_loss += loss.item()
acc = num_correct / num_total * 100
avg_loss = total_loss / len(dataloader)
return avg_loss, acc
def train_loss(self):
# TODO: Is this correct? Should we really be averaging the train loss over all epochs?
loss = np.mean(self.train_loss_history)
print(f"Loss of the network on train set: {loss}")
return loss
def test_accuracy(self, classes, test_loader):
self.num_classes = len(classes)
self.total_counts = np.zeros(self.num_classes)
self.correct_counts = np.zeros(self.num_classes)
self.predictions = []
# print(total_counts)
# print(correct_counts)
self.num_correct = 0
self.num_total_examples = 0
with torch.no_grad():
for test_data in tqdm(test_loader):
test_features = test_data['features'].to(self.device)
labels = test_data['label'].to(self.device)
outputs = self.net(test_features)
_, predicted = torch.max(outputs.data, dim=1)
self.predictions.append(predicted)
for p, l in zip(labels, predicted):
self.total_counts[l] += 1
if p == l:
self.correct_counts[p] += 1
self.num_total_examples += labels.size(0)
self.num_correct += (predicted == labels).sum().item()
self.test_accuracy = self.num_correct / self.num_total_examples * 100
print(f'Accuracy of the network on test set: {self.test_accuracy}%')
def save_checkpoint(self, epoch, val_loss):
checkpoint_dir = os.path.join(self.save_dir, self.experiment_name, 'checkpoints')
Path(checkpoint_dir).mkdir(parents=True, exist_ok=True)
path = os.path.join(checkpoint_dir, f'epoch={epoch}_valLoss={np.round(val_loss, 4)}.pt')
torch.save(self.net.state_dict(), path)
def save_history(self):
history_path = os.path.join(self.save_dir, self.experiment_name, 'history.csv' )
pd.DataFrame(self.history).to_csv(history_path)
def save_test_performance(self):
test_loss, test_acc = self.evaluate(self.test_loader, tqdm_desc='Eval. Test')
print(f'Test accuracy = {test_acc}%')
test_perf_path = os.path.join(self.save_dir, self.experiment_name, 'test.json' )
with open(test_perf_path, 'w') as f:
json.dump({'test_loss': test_loss, 'test_acc': test_acc}, f)
def run_experiment(name, tokens_dataset_folder):
prefix = os.path.join(os.getcwd(), 'data', 'tokens', tokens_dataset_folder)
train_path = os.path.join(prefix, 'train.pickle')
test_path = os.path.join(prefix, 'test.pickle')
int_to_token_path = os.path.join(prefix, 'int_to_token.pickle')
train_set = MathTokensDataset(train_path, IMAGE_SIZE)
test_set = MathTokensDataset(test_path, IMAGE_SIZE)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True, num_workers=0)
test_loader = DataLoader(test_set, batch_size=256, shuffle=False, num_workers=0)
with open(int_to_token_path, 'rb') as f:
int_to_token = pickle.load(f)
classes = list(int_to_token.values())
exp = Experiment(experiment_name=name,
optimizer_class='adamW',
train_set=train_set,
val_split=0.2,
test_set=test_set,
classes=classes,
batch_size=256,
save_dir=os.path.join(os.getcwd(), 'experiments', 'token_cnn'))
exp.train(max_epochs=100, patience=3)
exp.save_history()
exp.save_test_performance()
run_experiment(name='t=5', tokens_dataset_folder='b=96_train=2011,2013_test=2012_c=all_t=5')
run_experiment(name='t=3,5,7', tokens_dataset_folder='b=96_train=2011,2013_test=2012_c=all_t=3,5,7')
run_experiment(name='t=1,3,5,7,9', tokens_dataset_folder='b=96_train=2011,2013_test=2012_c=all_t=1,3,5,7,9')
import gc
run_experiment(name='t=5_rotate', tokens_dataset_folder='b=96_train=2011,2013_test=2012_c=all_t=5_rotate')
gc.collect()
run_experiment(name='t=5_shear', tokens_dataset_folder='b=96_train=2011,2013_test=2012_c=all_t=5_shear')
gc.collect()
run_experiment(name='t=3,5,7_rotate', tokens_dataset_folder='b=96_train=2011,2013_test=2012_c=all_t=3,5,7_rotate')
gc.collect()
run_experiment(name='t=3,5,7_shear', tokens_dataset_folder='b=96_train=2011,2013_test=2012_c=all_t=3,5,7_shear')
gc.collect()
###Output
Model created with optimizer adamW
446285 training examples.
111571 validation examples.
50121 test examples.
101 classes.
Initialized experiment 't=3,5,7_shear'
|
Note.ipynb | ###Markdown
- Possible issue: std might be too small because dividing too big num_values.- The mean/std calculated from the training set was: [-1.4, 6e-5]. - Note that the std is much smaller than test set's.
###Code
from torchvision import transforms
test_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(-1.3956809560512295, 0.00033731180361859254),
])
import FSDnoisy18k.dataset.FSDnoisy18k
import yaml
args = yaml.load(open('config/params_supervised_lineval.yaml'))
testset = FSDnoisy18k.dataset.FSDnoisy18k.FSDnoisy18k(args, mode='test', hdf5_path='work/FSDnoisy18k/data_hdf5/')
testset.transform = test_transform
testset[0]
x = testset[0][0]
x.mean(), x.std(), x.max(), x.min()
###Output
_____no_output_____ |
110 - Surfaces/370 - Classical Surfaces.ipynb | ###Markdown
Classical SurfacesIn this section, we introduce several classes of surface that can assistus in meeting some of the demanding dictates of realizing a form,and that possess desirable surface properties. Each of these are constructed by subjecting anumber of initial curves to some form of transformation. Many classicalsurfaces may be described in terms of the lower-level curves thatcomprise them. As such, in this section we may expect to see surfaceparameterization functions that ***do not define coordinates in spacedirectly***, but rather rely on the evaluation of a subsidiary curve or setof curves.We focus on just three categories of classical surface:* rotational* translational * ruled Rotational SurfacesRotational surfaces are those that are constructed by rotating a given guide curve about an axis. A generalized rotational surface can be defined as one with a parameterization function that relies upon three additional inputs: `crv`, the curve to be rotated; `axis`, a Ray describing the axis of rotation; and `rot_ival`, an Interval of allowable rotation angles. In the parameterization function below, a Point is constructed by first evaluating `crv`, and then subjecting the result to a rotation transformation.
###Code
"""
Rotational Surface
The parameterization function for a rotational surface may be expressed as the plotting of a Point
on a given Curve crv at parameter v, and the transformation of this Point by an Xform defined by an
axis Ray and the evaluation of an Interval of allowable rotation angles by parameter u.
"""
def func(u,v):
pt = crv.eval(v)
angle = rot_ival.eval(u)
xf = Xform.rotation( axis=axis.vec, angle=angle, center=axis.spt )
return pt*xf
###Output
_____no_output_____
###Markdown
###Code
"""
Rotational Surface
Rotational surfaces are encapsulated by the subclass RotationalSurface in a Decod.es extension.
Expressing this class of Surfaces as its own subclass allows for certain properties to be defined
in more specific ways.
"""
from decodes.extensions.classical_surfaces import *
surf = RotationalSurface(crv, axis = axis, dom_u = rot_ival )
# a Curve along an edge of the Surface
srf.isocurve( v_val = 0.0 )
# an Arc through the middle of the Surface
srf.isocurve( u_val = rot_ival.eval(0.5) )
###Output
_____no_output_____
###Markdown
Translational SurfacesTranslational surfaces are constructed by translating a curve - either linearly, or along another curve. The simplest translational surface is the former: an ***extrusion surface***,which results from a curve translated along a direction described bya line segment. Every point on such a surface is a result of translatinga point on the curve by the vector of the segment.
###Code
"""
Extrusion Surface
Constructed by translating a curve along the vector of a given Segment.
"""
origin = line.spt
def func(u,v):
vec = line.eval(u) - origin
return crv.eval(v) + vec
###Output
_____no_output_____
###Markdown
This construction can be extended to the general case where themoving curve, called a ***generator***, is swept, not along a segment, butinstead along another curve called the ***directrix***. If the origin denotesthe intersection of the two, then any point can be constructed bytranslating a point on the generator by a vector determined by thedirectrix.
###Code
"""
Translational Surface
Constructed by translating a generator Curve along a directrix Curve
"""
def func(u,v):
vec = directrix.eval(u) - origin
return generator.eval(v) + vec
###Output
_____no_output_____
###Markdown
Gallery of Translational SurfacesTranslational surfaces require two input curves and a point of intersection. As implied by their name, the boundaries of this class of surface are translates of the two given curves, which makes it quite easy to envision the overall surface by defining two curves that align with the boundary edges and which meet at a point. The first two examples below are constructed in this way. An oft-used class of mathematical surfaces are the quadrics, many of which we have already seen such as the cone, sphere, cylinder as well as their elliptic and parabolic counterparts. A number of these quadrics can be constructed as translational surfaces, as evidenced by the second pair of examples below. Parabolic Sine Surface
###Code
"""
Parabolic Sine Surface
"""
def func_gen(t):
return Point( t, 0, sin(t) )
def func_dir(t):
return Point( 0, 2*t, -t*t )
generator = Curve( func_gen, Interval(0,4.5*pi) )
directrix = Curve( func_dir, Interval(-2,2) )
srf = TranslationalSurface( generator, directrix, Point() )
###Output
_____no_output_____
###Markdown
Skew Cosine Surface
###Code
"""
Skew Cosine Surface
Constructs a hat-like Surface given parameters for the height hei, length len, and skew amount skw.
"""
def func_gen(t):
xf_rot = Xform.rotation( angle = skw*pi/4 )
return xf_rot*Point(t, 0, (hei/2)*(1-cos(2*pi*t/len)))
def func_dir(t):
xf_rot = Xform.rotation( angle = -skw*pi/4 )
return xf_rot*Point(0, t, (hei/2)*(1-cos(2*pi*t/len)))
generator = Curve( func_gen, Interval(0, len) )
directrix = Curve( func_dir, Interval(0, len) )
srf = TranslationalSurface( generator, directrix, Point() )
###Output
_____no_output_____
###Markdown
Elliptic Paraboloid
###Code
"""
Elliptic Paraboloid
Constructs a Surface given parameters for the length len, width wid, and height hei.
"""
def func_gen(t):
return Point( len*t, 0, hei*t*t )
def func_dir(t):
return Point( 0, wid*t, hei*t*t )
generator = Curve( func_gen, Interval(-1,1) )
directrix = Curve( func_dir, Interval(-1,1) )
srf = TranslationalSurface( generator, directrix, Point() )
###Output
_____no_output_____
###Markdown
Hyperbolic Paraboloid
###Code
"""
Hyperbolic Paraboloid
Constructs a Surface given parameters for the length len, width wid, and height hei. Although the
parameterization and the boundary conditions differ, this surface is identical to that constructed
as a ruled surface by connecting two Segments.
"""
def func_gen(t):
return Point( len*t, 0, hei*t*t )
def func_dir(t):
return Point( 0, wid*t, -hei*t*t )
generator = Curve( func_gen, Interval(-1,1) )
directrix = Curve( func_dir, Interval(-1,1) )
srf = TranslationalSurface( generator, directrix, Point() )
###Output
_____no_output_____
###Markdown
Ruled SurfacesA ruled surface is one that can be generated by moving a straight linein space along some set of guide curves. The straight lines are calledthe ***rulings*** of the surface. The two main ways of constructing such asurface reflect the two descriptions of a line: one that is defined by apoint and direction, and the other by its endpoints. Base-Director ConstructionThe first construction uses a directrix or base curve to describe the path of a point on the line, and a director curve which gives the direction of each ruling in the form of a unit vector at each point.
###Code
"""
Base-Director Construction of a Ruled Surface
Construction by moving a Segment of fixed length along a base curve with direction given by director
"""
def func(u,v):
return base_crv.eval(u) + director_crv.eval(u)*v
###Output
_____no_output_____
###Markdown
Point-Pair ConstructionHere, two given input curves areconnected by corresponding points, which together determine theendpoints of each ruling.
###Code
"""
Point-Pair Construction of a Ruled Surface
Construction by connecting corresponding points along two curves
"""
def func(u,v):
ruling = Segment(crv_a.eval(u), crv_b.eval(u))
return ruling.eval(v)
###Output
_____no_output_____
###Markdown
Gallery of Ruled SurfacesThe range of forms in this remarkable class of surfaces belies the simplicity of its construction. Two constructions are demonstrated below. The first pair of Surfaces constructs rulings by two Curves: a base curve `crv_base` to describe the path of a point, and a director curve `crv_dirc` which gives the direction of each ruling. The second pair of Surfaces shown below constructs rulings more directly, by connecting matching pairs of points on two defined Curves, `crv_a` and `crv_b`. Note the alternative construction of the hyperbolic paraboloid, which may also be described as a translational surface. Conoid
###Code
"""
Conoid
Given a desired height hei, width wid, and integer number of turns trns, a Conoid Surface is
defined with a base Curve of a vertical line Segment, a director Curve of a unit circle.
"""
crv_base = Segment(Point(), Point(0, 0, hei))
crv_dirc = Curve.circle( ival = Interval(0, trns*two_pi) )
def func(u,v):
return crv_base.eval(u) + crv_dirc.eval(u)*v
surf = Surface( func, Interval(), Interval(0,wid) )
###Output
_____no_output_____
###Markdown
Mobius Band
###Code
"""
Mobius Band
Given a base radius rad, and a width wid, a Mobius Band is constructed with a base Curve of a
circle and a director Curve that resembles a spherical bow-tie.
"""
def func_dirc(t):
return Point( cos(t/2)*cos(t), cos(t/2)*sin(t), sin(t/2) )
crv_base = Curve.circle(rad = rad)
crv_dirc = Curve(func_dirc, Interval.twopi())
def func(u,v):
return crv_base.eval(u) + crv_dirc.eval(u)*v
surf = Surface( func, Interval(), Interval(-v1,v1) )
###Output
_____no_output_____
###Markdown
Torqued Ellipse
###Code
"""
Torqued Ellipse
Constructs a ruled surface between two perpendicular-facing ellipses given parameters for the length
len, width wid, and height hei of each. Note that the center of ellipse B is shifted. Inspired by the
Richard Serra sculpture series with same name.
"""
def func_a(t):
return Point( len*cos(t), wid*sin(t) )
def func_b(t):
return Point( wid*cos(t)-0.5, len*sin(t), hei )
crv_a = Curve(func_a, Interval(0, 1.9*pi))
crv_b = Curve(func_b, Interval(.1*pi, 2*pi))
def func(u,v):
return Segment( crv_a.eval(u), crv_b.eval(u) ).eval(v)
surf = Surface(func)
###Output
_____no_output_____
###Markdown
Hyperbolic Paraboloid
###Code
"""
Hyperbolic Paraboloid
Demonstrates the construction of a hyperbolic paraboloid as a ruled surface by connecting points on
two line Segments. Although the parameterization and the boundary conditions differ, this surface is
identical to that constructed via translation.
"""
crv_a = Segment(Point(len, 0, hei), Point(0, wid, -hei))
crv_b = Segment(Point(0, -wid, -hei), Point(-len, 0, hei))
def func(u,v):
return Segment( crv_a.eval(u), crv_b.eval(u) ).eval(v)
surf = Surface(func)
###Output
_____no_output_____
###Markdown
Protean Classical Surfaces
###Code
"""
Hyperboloid
Constructed by connecting points on two circles
"""
#circle in plane
crv_a = Curve.circle(Point(), rad))
#circle at height with shifted startpoint
def circle_twist(t):
x = rad*cos(t+twist)
y = rad*sin(t+twist)
#height expressed in terms of length and twist
z = sqrt(length*length-4*(sin(twist/2))**2)
return Point(x,y,z)
crv_b = Curve(circle_twist, Interval.twopi()))
###Output
_____no_output_____ |
docs/_build/.doctrees/nbsphinx/contents/Covalent_chains.ipynb | ###Markdown
Covalent chains and blocks How to get covalent chainsLets load first of all a molecular system to work with in this section:
###Code
molecular_system = msm.demo_systems.files['1tcd.mmtf']
molecular_system = msm.convert(molecular_system)
msm.info(molecular_system)
###Output
_____no_output_____
###Markdown
MolSysMT includes a method to get all covalent chains found in the molecular system given by a sequence of atom names. To illustrate how the method `molsysmt.covalent_chains` works lets extract all segments of atoms C, N, CA an C covalently bound in this order (C-N-CA-C):
###Code
covalent_chains =msm.covalent_chains(molecular_system, chain=['atom_name=="C"', 'atom_name=="N"',
'atom_name=="CA"', 'atom_name=="C"'],
selection="component_index==0")
covalent_chains.shape
###Output
_____no_output_____
###Markdown
The output is a numpy array 2-ranked where the dimension of the first axe or rank is the number of chains found in the system, and the second rank has dimension 4 (since it chain was chosen to have 4 atoms):
###Code
covalent_chains
###Output
_____no_output_____
###Markdown
Lets check that the name of the atoms in any of the obtained chains is correct:
###Code
msm.get(molecular_system, selection=covalent_chains[0], name=True)
###Output
_____no_output_____
###Markdown
The atom name specified at each place does not need to be unique, we can introduce variants at any position defining the covalent chain. Lets see for instance how to get all 4 atoms covalent chains where the first three atoms are C-N-CA, in this order, and the fourth atom can either be C or CB:
###Code
covalent_chains =msm.covalent_chains(molecular_system, chain=['atom_name=="C"', 'atom_name=="N"',
'atom_name=="CA"', 'atom_name==["C", "CB"]'],
selection="component_index==0")
###Output
_____no_output_____
###Markdown
The covalent chains defining the $\phi$, $\psi$, $\omega$ and , $\xi_1$ dihedral angles are obtained as follows:
###Code
# Covalent chains defining all phi dihedral angles in the molecular system
phi_chains = msm.covalent_chains(molecular_system, chain=['atom_name=="C"', 'atom_name=="N"',
'atom_name=="CA"', 'atom_name=="C"'])
# Covalent chains defining all psi dihedral angles in the molecular system
psi_chains = msm.covalent_chains(molecular_system, chain=['atom_name=="N"', 'atom_name=="CA"',
'atom_name=="C"', 'atom_name=="N"'])
# Covalent chains defining all omega dihedral angles in the molecular system
omega_chains = msm.covalent_chains(molecular_system, chain=['atom_name==["CA","CH3"]', 'atom_name=="C"',
'atom_name=="N"', 'atom_name==["CA", "CH3"]'])
# Covalent chains defining all chi1 dihedral angles in the molecular system
chi1_chains = msm.covalent_chains(molecular_system, chain=['atom_name=="N"', 'atom_name=="CA"',
'atom_name=="CB"', 'atom_name=="CG"'])
###Output
_____no_output_____
###Markdown
How to get the atoms quartets defining the dihedral anglesMolSysMT includes a method to obtain the sets of atoms quartets defining all dihedral angles present in the system given their names. There is no need then to remember the atom names defining the angle $\phi$, $\psi$, $\omega$, or any of the $\chi$ angles. Lets see how this method works over one of the demo molecular systems:
###Code
molecular_system = msm.demo_systems.files['1tcd.mmtf']
molecular_system = msm.convert(molecular_system)
###Output
_____no_output_____
###Markdown
The quartets defining the angles $\phi$, $\psi$ or $\omega$ over the whole system can be obtained as follows:
###Code
phi_chains = msm.covalent_dihedral_quartets(molecular_system, dihedral_angle='phi')
print(phi_chains)
###Output
[[ 2 9 10 11]
[ 11 16 17 18]
[ 18 25 26 27]
...
[3789 3796 3797 3798]
[3798 3801 3802 3803]
[3803 3808 3809 3810]]
###Markdown
The search of these quartets can be limited to a specific selection. Lets see how to get the quartes of the $\psi$ angles in residues 10 to 15:
###Code
psi_chains = msm.covalent_dihedral_quartets(molecular_system, dihedral_angle='psi',
selection='10<=group_index<=15')
print(psi_chains)
###Output
[[ 77 78 79 86]
[ 86 87 88 92]
[ 92 93 94 100]
[100 101 102 104]
[104 105 106 110]]
###Markdown
Atoms chains defining $\chi$ angles can be also extracted. Lets get, for instance, all $\chi_{5}$ in the system:
###Code
chi5_chains = msm.covalent_dihedral_quartets(molecular_system, dihedral_angle='chi5')
###Output
_____no_output_____
###Markdown
There's a high number of ARG residues in our system. ARG is the only amino-acide with a $\chi_{5}$ dihedral angle.
###Code
print(chi5_chains.shape[0])
n_args = msm.get(molecular_system, target='group', selection='group_name=="ARG"', n_groups=True)
print(n_args)
phi_psi_chains = msm.covalent_dihedral_quartets(molecular_system, dihedral_angle='phi-psi')
print(phi_psi_chains.shape)
msm.get(molecular_system, selection=phi_psi_chains[0], name=True)
###Output
_____no_output_____
###Markdown
If all dihedral angles needs to be considered, the value 'all' for the input argument `dihedral_angle` returns all atoms quartets for any $\phi$, $\psi$, $\omega$, $\chi_{1}$, $\chi_{2}$, $\chi_{3}$, $\chi_{4}$ and $\chi_{5}$ angle:
###Code
all_angles_chains = msm.covalent_dihedral_quartets(molecular_system, dihedral_angle='all')
print(all_angles_chains.shape)
###Output
(2362, 4)
###Markdown
In the following tables a summary of the dihedral angle definitions are included in this document for future reference. The corresponding string taken by the input argument `dihedral_angle` is written down between parentesis next to each greek letter naming the angle: $\phi$ (`phi`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || all but PRO | C-N-CA-C | C cis to C | [-180, 180) || PRO | C-N-CA-C | C cis to C | ~-90 | $\psi$ (`psi`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || all | N-CA-C-N | N cis to N | [-180, 180) | $\omega$ (`omega`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || all | CA-C-N-CA | CA cis to CA | ~180 || all | CH3-C-N-CA | CA cis to CA | ~180 || all | CA-C-N-CH3 | CA cis to CA | ~180 | $\chi_{1}$ (`chi1`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || ARG | N-CA-CB-CG | CG cis to N | [-180, 180) || ASN | N-CA-CB-CG | CG cis to N | [-180, 180) || ASP | N-CA-CB-CG | CG cis to N | [-180, 180) || CYS | N-CA-CB-SG | SG cis to N | [-180, 180) || GLN | N-CA-CB-CG | CG cis to N | [-180, 180) || GLU | N-CA-CB-CG | CG cis to N | [-180, 180) || HIS | N-CA-CB-CG | CG cis to N | [-180, 180) || ILE | N-CA-CB-CG1 | CG1 cis to N | [-180°, 180) || LEU | N-CA-CB-CG | CG cis to N | [-180, 180) || LYS | N-CA-CB-CG | CG cis to N | [-180, 180) || MET | N-CA-CB-CG | CG cis to N | [-180, 180) || PHE | N-CA-CB-CG | CG cis to N | [-180, 180) || PRO | N-CA-CB-CG | CG cis to N | CA-CB is part of ring || SER | N-CA-CB-OG | OG cis to N | [-180, 180) || THR | N-CA-CB-OG1 | OG1 cis to N | [-180, 180) || TRP | N-CA-CB-CG | CG cis to N | [-180, 180) || TYR | N-CA-CB-CG | CG cis to N | [-180, 180) || VAL | N-CA-CB-CG1 | CG1 cis to N | [-180, 180) | $\chi_{2}$ (`chi2`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || ARG | CA-CB-CG-CD | CD cis to CA | [-180, 180) || ASN | CA-CB-CG-OD1 | OD1 cis to CA | [-180, 180) || ASP | CA-CB-CG-OD | OD1 cis to CA | [-180, 180) || GLN | CA-CB-CG-CD | CD cis to CA | [-180, 180) || GLU | CA-CB-CG-CD | CD cis to CA | [-180, 180) || HIS | CA-CB-CG-ND1 | ND1 cis to CA | [-180, 180) || ILE | CA-CB-CG1-CD | CD cis to CA | [-180, 180) || LEU | CA-CB-CG-CD1 | CD1 cis to CA | [-180, 180) || LYS | CA-CB-CG-CD | CD cis to CA | [-180, 180) || MET | CA-CB-CG-SD | SD cis to CA | [-180, 180) || PHE | CA-CB-CG-CD | CD1 cis to CA | [-180, 180) || PRO | CA-CB-CG-CD | CD cis to CA | CB-CG is part of ring || TRP | CA-CB-CG-CD1 | CD1 cis to CA | [-180, 180) || TYR | CA-CB-CG-CD1 | CD1 cis to CA | [-180, 180) | $\chi_{3}$ (`chi3`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || ARG | CB-CG-CD-NE | NE cis to CB | [-180, 180) || GLN | CB-CG-CD-OE1 | OE1 cis to CB | [-180, 180) || GLU | CB-CG-CD-OE1 | OE1 cis to CB | [-180, 180) || LYS | CB-CG-CD-CE | CE cis to CB | [-180, 180) || MET | CB-CG-SD-CE | CE cis to CB | [-180, 180) | $\chi_{4}$ (`chi4`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || ARG | CG-CD-NE-CZ | CZ cis to CG | [-180, 180) || LYS | CG-CD-CE-NZ | NZ cis to CG | [-180, 180) | $\chi_{5}$ (`chi5`)| Residue | Atoms | Zero value | Range (degrees)|| :---: | :---: | :---: | :---: || ARG | CD-NE-CZ-NH1 | NH1 cis to CD | [-180, 180) | Every dihedral angle is defined in a peptide by three vectors delimited by four consecutive covalently bonded atoms. The vector in the middle defines the orthogonal plane where rotations are defined by the projection of vectors first and third, this way two blocks of atoms change its relative positions: all atoms covalently bonded before and after the second vector in the polymer. Or explained in other words, removing the second vector two sets of covalently bonded atoms are defined. Each of these two atoms sets move in unison when the dihedral angle changes. MolSysMT includes the input argument `with_blocks` for the method `molsysmt.covalent_dihedral_quartets` to return these atoms sets together with the quartets. Lets see how it works with an example:
###Code
molecular_system = msm.demo_systems.metenkephalin()
phi_chains, phi_blocks = msm.covalent_dihedral_quartets(molecular_system, dihedral_angle='phi',
with_blocks=True)
###Output
_____no_output_____
###Markdown
Lets for instance have a look to the quartet defining the 3-th $\phi$ angle:
###Code
view = msm.view(molecular_system, viewer='NGLView')
selection_quartet = msm.select(molecular_system, selection=phi_chains[2], to_syntaxis='NGLView')
view.clear()
view.add_licorice(color='white')
view.add_ball_and_stick(selection_quartet, color='orange')
view
phi_blocks[2]
###Output
_____no_output_____
###Markdown
Lets show in blue and red the two blocks of atoms defined by this 4-th $\phi$ dihedral angle.
###Code
view = msm.view(molecular_system, viewer='NGLView')
selection_quartet = msm.select(molecular_system, selection=phi_chains[2], to_syntaxis='NGLView')
selection_block_0 = msm.select(molecular_system, selection=list(phi_blocks[2][0]), to_syntaxis='NGLView')
selection_block_1 = msm.select(molecular_system, selection=list(phi_blocks[2][1]), to_syntaxis='NGLView')
view.add_licorice(color='white')
view.add_ball_and_stick(selection_quartet, color='orange')
view.add_ball_and_stick(selection_block_0, color='red')
view.add_ball_and_stick(selection_block_1, color='blue')
view
###Output
_____no_output_____
###Markdown
How to get covalent blocksIn addition to getting the covalent chains, MolSysMT provides with a third method, `molsysmt.covalent_blocks`, to obtain the sets of atoms covalently bonded. In order to illustrate the results given by this method, lets load first of all a molecular system to work with it:
###Code
molecular_system = msm.demo_systems.metenkephalin()
###Output
_____no_output_____
###Markdown
With the molecular system as the only input argument, the output corresponds to the list of sets of atoms covalently bonded.
###Code
blocks = msm.covalent_blocks(molecular_system)
print(blocks)
###Output
[{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71}]
###Markdown
This way the method does not offer new information. The result is nothing but the definition of the components in the system. However, using the input argument `remove_bonds` the method turns into a more interesting tool. Lets remove a couple of bonds to see the effect:
###Code
msm.get(molecular_system, target='atom', selection='atom_name==["C", "N"]', inner_bonded_atoms=True)
blocks = msm.covalent_blocks(molecular_system, remove_bonds=[[19,21],[33,35]])
print(blocks)
###Output
[{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}
{32, 33, 34, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31}
{35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71}]
###Markdown
The output can also be return as a numpy array:
###Code
blocks = msm.covalent_blocks(molecular_system, remove_bonds=[[19,21],[33,35]], output_form='array')
###Output
_____no_output_____
###Markdown
In this case an array is returned with the index of the block each atom belongs to (0-based):
###Code
print(blocks)
###Output
[0 0 0 ... 2 2 2]
|
SD212/lab_session/lab7.ipynb | ###Markdown
SD212: Graph mining Lab 7: Soft ClusteringYou will learn how to apply soft clustering methods to graphs and to compare them to classic clustering methods. You **must** be connected to Telecom ParisTech network! Getting started
###Code
from connector import Connector
import warnings
warnings.filterwarnings('ignore')
base_url = 'http://137.194.192.13:8234'
lab_id = 'lab7'
###Output
_____no_output_____
###Markdown
To do* Enter your login and test it using question 0
###Code
# Enter your login (last name followed by first letter of first name)
login = 'zhuf'
connector = Connector(base_url, lab_id, login)
get_question = connector.get_question
post_answer = connector.post_answer
post_text = connector.post_text
get_question(0)
###Output
Welcome!
###Markdown
Import
###Code
import networkx as nx
###Output
_____no_output_____
###Markdown
The documentation is available [here](https://networkx.readthedocs.io/en/stable/).
###Code
%pylab inline
import numpy as np
colors = ['b','g','r','c','m','gold','orange','gray','k','w']
###Output
_____no_output_____
###Markdown
1. Soft modularityConsider an undirected, weighted graph of $n$ nodes and $m$ edges with adjacency matrix $A$.We use $w_i$ to denote the weighted degree of node $i$,$w_i=\sum_{j\in V} A_{ij}$,and $w$ the total weight of the graph $w = \sum_{i\in V} w_i$.Given a clustering $C$, the classic modularity is defined as:$$Q(C) = \frac{1}{w} \sum_{i,j\in V} \left( A_{ij} - \frac{w_i w_j}{w} \right) \delta_{C(i),C(j)}.$$Given a membership matrix $\mathbf{p}\in [0,1]^{n\times K}$, the soft modularity is defined as:$$Q(\mathbf{p}) = \frac{1}{w} \sum_{i,j\in V} \left(A_{ij} - \frac{w_i w_j}{w}\right){\mathbf{p}_{i\cdot}}^T \mathbf{p}_{j\cdot}.$$ Toy graphConsider the following graph:
###Code
edgelist = [(0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 4), (3, 5), (3, 6), (4, 6), (5, 6)]
G = nx.Graph()
G.add_edges_from(edgelist)
pos = nx.spring_layout(G, center=(.5, .5), scale=.5)
###Output
_____no_output_____
###Markdown
We compute the adjacency matrix of the graph
###Code
nodes = list(G.nodes())
A = nx.adj_matrix(G, nodes)
A.toarray()
###Output
_____no_output_____
###Markdown
Consider the following clustering:
###Code
C = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}
figure()
node_color = [colors[C[u] % len(colors)] for u in G.nodes()]
nx.draw(G, pos, node_color=node_color, edge_color='gray')
###Output
_____no_output_____
###Markdown
To do* Complete the function `modularity` below and apply it to the clustering defined above* Answer question 1 **(0.5 point)**
###Code
print(A[0,1])
def modularity(A, C, gamma):
# A: adjacency matrix of a weighted graph
# C: clustering (dictionary)
# gamma: resolution parameter
# Returns: Q, modularity (at resolution gamma)
Q = 0
w = np.sum(A)
_sum = 0.0
for i in range(A.shape[0]):
for j in range(A.shape[1]):
if( C[i] == C[j]):
_sum = _sum + (A[i,j] - np.sum(A[i]) * np.sum(A[j]) /w)*gamma
Q = _sum/w
return Q
print(modularity(A, C, 1.))
get_question(1)
answer = 0.154999999
post_answer(1, answer)
###Output
This is correct!
###Markdown
Consider the following membership matrix:
###Code
p = np.zeros((7, 2))
p[(0, 0)] = 1.
p[(1, 0)] = 1.
p[(2, 0)] = 1
p[(3, 0)] = .5
p[(3, 1)] = .5
p[(4, 1)] = 1.
p[(5, 1)] = 1.
p[(6, 1)] = 1.
print(p)
def plot_communities(graph, pos, membership, figsize=(4, 4)):
fig = plt.figure(figsize=figsize)
ax = plt.axes([0, 0, 1, 1])
ax.set_aspect('equal')
nx.draw_networkx_edges(graph, pos, ax=ax)
plt.xlim(-0.1, 1.1)
plt.ylim(-0.1, 1.1)
plt.axis('off')
trans = ax.transData.transform
trans2 = fig.transFigure.inverted().transform
pie_size = 0.05
p2 = pie_size / 2.0
for node in graph:
xx, yy = trans(pos[node]) # figure coordinates
xa, ya = trans2((xx, yy)) # axes coordinates
a = plt.axes([xa - p2, ya - p2, pie_size, pie_size])
a.set_aspect('equal')
fractions = membership[node]
a.pie(fractions)
plt.show()
plot_communities(G, pos, p)
###Output
_____no_output_____
###Markdown
To do* Complete the function `soft_modularity` below and apply it to the membership matrix defined above* Answer question 2 (**0.5 point**)* Answer question 3 (**open answer**)
###Code
def soft_modularity(A, p, gamma):
# A: adjacency matrix of a weighted graph
# p: membership matrix
# gamma: resolution parameter
# Returns: Q, modularity (at resolution gamma)
Q = 0
# to be completed
# A: adjacency matrix of a weighted graph
# C: clustering (dictionary)
# gamma: resolution parameter
# Returns: Q, modularity (at resolution gamma)
Q = 0
w = np.sum(A)
_sum = 0.0
for i in range(A.shape[0]):
for j in range(A.shape[1]):
p_v = p[i].T.dot(p[j])
_sum = _sum + (A[i,j] - np.sum(A[i]) * np.sum(A[j]) /w) * p_v
Q = _sum/w
return Q
p
p[0].T.dot(p[0])
b
print(soft_modularity(A, p, 1.))
get_question(2)
answer = 0.2
post_answer(2, answer)
get_question(3)
post_text(3)
###Output
_____no_output_____
###Markdown
2. Multiple runs of Louvain We define the Louvain algorithm that optimizes the classic modularity function.
###Code
def maximize(G, gamma, random_seed):
# G: networkx graph (undirected, weighted)
# gamma: float (resolution)
# Returns: dictionary (cluster of each node)
# node weights
node_weight = {u: 0 for u in G.nodes()}
# sum of edge weights
w = 0
for (u,v) in G.edges():
# self-loops are counted twice
node_weight[u] += G[u][v]['weight']
node_weight[v] += G[u][v]['weight']
w += G[u][v]['weight']
# init the clustering
C = {u:u for u in G.nodes()}
# cluster weights
cluster_weight = {u:node_weight[u] for u in G.nodes()}
# node-cluster weights
node_cluster_weight = {u:{v:G[u][v]['weight'] for v in G.neighbors(u) if v != u} for u in G.nodes()}
increase = True
while increase:
increase = False
nodes = list(G.nodes())
random.seed(random_seed)
random.shuffle(nodes)
for u in nodes:
k = C[u]
# target cluster, to be modified
l_max = k
Delta_Q_max = 0
for l in node_cluster_weight[u]:
Delta_Q = node_cluster_weight[u][l]
if k in node_cluster_weight[u]:
Delta_Q -= node_cluster_weight[u][k]
Delta_Q -= gamma * node_weight[u] / (2 * w) * \
(cluster_weight[l] - cluster_weight[k] + node_weight[u])
if Delta_Q > Delta_Q_max:
Delta_Q_max = Delta_Q
l_max = l
l = l_max
if l != k:
increase = True
# move u from cluster k to cluster l
C[u] = l
cluster_weight[k] -= node_weight[u]
cluster_weight[l] += node_weight[u]
for v in G.neighbors(u):
if v != u:
node_cluster_weight[v][k] -= G[u][v]['weight']
if node_cluster_weight[v][k] <= 0:
node_cluster_weight[v].pop(k)
if l not in node_cluster_weight[v]:
node_cluster_weight[v][l] = 0
node_cluster_weight[v][l] += G[u][v]['weight']
return C
def aggregate(G,C):
# G: networkx graph
# C: dictionary (clustering)
# Returns: networkx graph (aggregate graph)
H = nx.Graph()
for u in G.nodes():
H.add_node(C[u])
for (u,v) in G.edges():
if H.has_edge(C[u],C[v]):
H[C[u]][C[v]]['weight'] += G[u][v]['weight']
else:
H.add_edge(C[u],C[v])
H[C[u]][C[v]]['weight'] = G[u][v]['weight']
return H
def louvain(G, gamma=1, random_seed=0):
# G: networkx graph (undirected, weighted)
# gamma: float (resolution)
# Returns a dictionary (cluster of each node)
if nx.get_edge_attributes(G,'weight') == {}:
for (u,v) in G.edges():
G[u][v]['weight'] = 1
C = maximize(G, gamma, random_seed)
n = len(C)
k = len(set(C.values()))
while k < n:
H = aggregate(G,C)
C_new = maximize(H, gamma, random_seed)
C = {u: C_new[C[u]] for u in G.nodes()}
n = k
k = len(set(C.values()))
# reindex cluster values
cluster_values = list(set(C.values()))
reindex = {c:i for i,c in enumerate(cluster_values)}
C = {u:reindex[C[u]] for u in C}
return C
###Output
_____no_output_____
###Markdown
This implementation of `louvain` takes a parameter called `random_seed` that controls the order in which the nodes are considered in the `maximize` sub-routine.Different values of `random_seed` might produce different results.
###Code
C = louvain(G, random_seed=0)
figure()
node_color = [colors[C[u] % len(colors)] for u in G.nodes()]
nx.draw(G, pos, node_color=node_color, edge_color='gray')
C = louvain(G, random_seed=3)
figure()
node_color = [colors[C[u] % len(colors)] for u in G.nodes()]
nx.draw(G, pos, node_color=node_color, edge_color='gray')
###Output
_____no_output_____
###Markdown
We now consider the graph of [Les Misérables](https://fr.wikipedia.org/wiki/Les_Misérables) (co-occurence of characters in chapters of the novel of Victor Hugo), than can be downloaded [here](http://perso.telecom-paristech.fr/~bonald/graphs/lab6.zip). To do* Apply the Louvain algorithm to the Misérables graph for values of the `random_seed` parameter ranging from 0 to 99 (included)* Answer question 4 (**1 point**)* Answer question 5 (**open answer**)
###Code
G = nx.read_graphml("miserables.graphml", node_type = int)
names = nx.get_node_attributes(G, 'name')
pos = nx.spring_layout(G)
C = louvain(G, gamma=1, random_seed=0)
for x in G.nodes(True):
print(x)
figure()
node_color = [colors[C[u] % len(colors)] for u in G.nodes()]
nx.draw(G, pos, labels = names, font_size = 8, node_size = 100, edge_color = 'gray',node_color = node_color)
get_question(4)
answer = 1
v_sum = 0
for n in range(100):
C = louvain(G, gamma=1, random_seed=n)
if(C[50] == C[11]):
v_sum = v_sum + 1
print(v_sum)
post_answer(4, 0.7)
get_question(5)
post_text(5)
###Output
_____no_output_____
###Markdown
3. Projection onto the probability simplex The algorithm to optimize the soft modularity repeats for different values of $i$ the following steps:- $\hat{\mathbf{p}}_{i\cdot} \leftarrow \alpha \mathbf{p}_{i\cdot}+ t \sum_{j\in V}\left(A_{ij} - \gamma \frac{w_i w_j}{w} \right) \mathbf{p}_{j\cdot}$- $\mathbf{p}_{i\cdot} \leftarrow \pi_{\mathcal{Y}}(\hat{\mathbf{p}}_{i\cdot})$.where $\pi_{\mathcal{Y}}$ is the projection onto the probability simplex $\mathcal{Y} = \{ \mathbf{q}\in\mathbb{R}^n : \mathbf{q} \geq 0, \mathbf{1}^T \mathbf{q} = 1 \}$. The algorithm to perform the projection is the following:- Sort the vector $\mathbf{v}$ into $\mathbf{\mu}$:$\mu_1\geq \mu_2\geq \dots \geq \mu_n$.- Find $\rho =\max\left\{ j=1,\ldots,n,\:\mu_j-\frac{1}{j}\left( \sum_{r=1}^j \mu_r -1\right)>0\right\}$.- Define $\theta = \frac{1}{\rho}\left(\sum_{i=1}^\rho \mu_i-1 \right)$.- Finally return $\pi_{\mathcal{Y}} \left(\mathbf{v}\right) = (\mathbf{v} - \theta\mathbf{1})_+$, where $(\mathbf{x})_+$ denotes the soft-thresholding operation $[(\mathbf{x})_+]_k = \max(x_k, 0)$. To do* Complete the function `project` below to perform the projection onto $\mathcal{Y}$* Answer question 6 (**1 point**)
###Code
v = [.5, .4, .3, .4, .3, .1]
u = sorted(v, reverse=True)
rho = [u[i] - 1/(i+1)*(np.sum(u[:i+1]) - 1) for i in range(len(v))]
print('rho', rho)
rho = np.argmax(rho) + 1
print(rho)
theta = 1/5 * (np.sum(u[:5]) - 1)
print(theta)
print(v - theta * np.ones(len(v)))
def project(v):
# v: vector
# Returns: w, projection onto the probability simplex
u = sorted(v, reverse=True)
rho = [u[i] - 1/(i+1)*(np.sum(u[:i+1]) - 1) for i in range(len(v))]
print("rho", rho)
index = 0
for i in range(len(rho)):
if rho[i]>0:
index = i+1
rho = index
theta = 1/rho * (np.sum(u[:rho]) - 1)
pro = v - theta * np.ones(len(v))
return [ max(x,0) for x in pro]
###Output
_____no_output_____
###Markdown
We consider the vector $v$ defined as
###Code
v = [.5, .4, .3, .4, .3, .1]
pro = project(v)
print(pro)
get_question(6)
answer = 0.12
post_answer(6, answer)
###Output
This is correct!
###Markdown
4. Soft clustering algorithm We use the following algorithm to optimize the soft modularity.- **Initialization:** * $C\leftarrow$ result of Louvain * $\forall i,k$, $p_{ik} \leftarrow \begin{cases} 1 & C(i)=k\\ 0 & \text{otherwise}\end{cases}$ * $\forall k$, $\bar{\mathbf{p}}_k \leftarrow \sum_{i} \frac{w_i}{w} p_{ik}$.- **One epoch:** For each node $i\in V$, * $\forall k$, $\hat{p}_{ik}\leftarrow \alpha p_{ik} + t\sum_{j\sim i}A_{ij}(p_{jk} - \gamma \bar{p}_k)$ * $\mathbf{p}_{i\cdot}^+ \leftarrow \mathrm{project}(\hat{\mathbf{p}}_{i\cdot})$ * $\bar{\mathbf{p}} \leftarrow \bar{\mathbf{p}} + (w_i/w) (\mathbf{p}_{i\cdot}^+ - \mathbf{p}_{i\cdot})$ and $\mathbf{p}_{i\cdot}\leftarrow \mathbf{p}_{i\cdot}^+$. Toy graph
###Code
edgelist = [(0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 4), (3, 5), (3, 6), (4, 6), (5, 6)]
G = nx.Graph()
G.add_edges_from(edgelist)
pos = nx.spring_layout(G, center=(.5, .5), scale=.5)
nodes = list(G.nodes())
A = nx.adj_matrix(G, nodes)
###Output
_____no_output_____
###Markdown
To do* Complete the function `soft_maximize` below to implement the soft modularity optimization algorithm* Answer question 7 (**1 point**)
###Code
def soft_maximize(A, gamma=1., alpha=1., t=1., n_epochs=1):
# G: networkx graph
# gamma: resolution parameter
# alpha: bias parameter
# t: learning rate
# n_epochs: number of epochs
# Returns: membership matrix
n = A.shape[0]
w_vec = np.array([np.sum(A[i]) for i in range(n)])
w = np.sum(w_vec)
# Initialization
G= nx.from_scipy_sparse_matrix(A)
C = louvain(G, gamma)
K = max(C.values()) + 1
p = np.zeros((n, K))
p_bar = np.zeros((K))
t_pri = 2*t / w
# TO COMPLETE: initialization of p and p_bar
# Optimization
print(soft_modularity(A, p, gamma))
for epoch in range(n_epochs):
print("Epoch {}".format(epoch))
for i in range(n):
p_i_hat = np.zeros_like(p_bar)
print(p_bar.shape)
# TO COMPLETE: calculation of p_i_hat
for k in range(K):
item = 0
for j in G.neighbors(i):
item = item + A[i,j]*(p[j,k] - p_bar[k])
p_i_hat[k] = p[i,k] + t_prim * item
p_i_plus = project(p_i_hat)
p_bar = p_bar + (w_vec[i] / w) * (p_i_plus - p[i, :])
p[i, :] = p_i_plus
print(soft_modularity(A, p, gamma))
return p
p = soft_maximize(A, gamma=1., alpha=1., t=1., n_epochs=10)
plot_communities(G, pos, p)
get_question(7)
answer = 0.19948
post_answer(7, answer)
###Output
This is correct!
###Markdown
Openflights We now consider the graph of [OpenFlights](https://openflights.org) (number of daily flights between airports), than can be downloaded [here](http://perso.telecom-paristech.fr/~bonald/graphs/lab6.zip).
###Code
G = nx.read_graphml("openflights.graphml", node_type = int)
G.remove_edges_from(nx.selfloop_edges(G))
G = G.subgraph([node for node in G if G.degree(node) > 20])
giant_component = list(nx.connected_components(G))[0]
G = G.subgraph(giant_component)
# Get names
names = nx.get_node_attributes(G, 'name')
###Output
_____no_output_____
###Markdown
We define the adjacency matrix of the graph.**Warning:** the $i^{th}$ row of $A$ corresponds to the $i^{th}$ element of the list `nodes` and not to the node with index $i$ in the object `G`. The same remark applies to columns of $A$.
###Code
nodes = list(G.nodes())
A = nx.adj_matrix(G, nodes)
###Output
_____no_output_____
###Markdown
We define the size of a *soft* cluster $k$ as follows$$S_k = \sum_{i} p_{ik}$$and we define its weight as$$W_k = \sum_{i} w_{ik} p_{ik} = \bar{p}_k w.$$ To do* Apply the soft modularity optimization algorithm to the OpenFlights graph with parameters `gamma=1.5`, `alpha=0.`, `t=.5` and `n_epochs=3`* Print the size and the weight of each cluster.* Print the Top 10 nodes in terms of degree $w_i$ in each clustes.* List all the nodes that have a positive probability to belong to more than one cluster* Find all the clusters to which 'Lisbon Portela Airport' has a probability to belong. Print the Top 10 nodes in terms of degree $w_i$ in these clusters.* Answer question 8 and 9 (**0.5 point**)* Answer question 10 (**open answer**)
###Code
get_question(8)
answer = 0.023809523809523808
post_answer(8, answer)
get_question(9)
post_answer(9, 'Berlin-Schönefeld International Airport')
get_question(10)
post_text(10)
###Output
_____no_output_____ |
app/notebooks/learning/shot_detection_folds.ipynb | ###Markdown
Construct five folds
###Code
# Load up all manually annotated shots
shots_qs = Shot.objects.filter(labeler__name__contains='manual')
shots = VideoIntervalCollection.from_django_qs(shots_qs)
video_ids = sorted(list(shots.get_allintervals().keys()))
random.seed(0)
# randomly shuffle video IDs
random.shuffle(video_ids)
# construct five folds
total_shots = shots_qs.count()
folds = []
num_shots_in_folds = 0
cur_fold = []
for video_id in video_ids:
if num_shots_in_folds + shots.get_intervallist(video_id).size() > (len(folds) + 1) * total_shots / 5:
folds.append(cur_fold)
cur_fold = []
num_shots_in_folds += shots.get_intervallist(video_id).size()
cur_fold.append(video_id)
folds.append(cur_fold)
# store folds
with open('/app/data/shot_detection_folds.pkl', 'wb') as f:
pickle.dump(folds, f)
# or load folds from disk
with open('/app/data/shot_detection_folds.pkl', 'rb') as f:
folds = pickle.load(f)
# store shot intervals in pickle file
with open('/app/data/manually_annotated_shots.pkl', 'wb') as f:
pickle.dump({
video_id: [
(interval.start, interval.end, interval.payload)
for interval in shots.get_intervallist(video_id).get_intervals()
]
for video_id in shots.get_allintervals()
}, f)
###Output
_____no_output_____
###Markdown
Heuristic Evaluation
###Code
clips = shots.dilate(1).coalesce().dilate(-1)
cinematic_shots_qs = Shot.objects.filter(cinematic=True, video_id__in=video_ids).all()
cinematic_shots = VideoIntervalCollection.from_django_qs(
cinematic_shots_qs,
progress = True
).filter_against(clips, predicate=overlaps())
cinematic_shot_boundaries = cinematic_shots.map(lambda i: (i.start, i.start, i.payload)).set_union(
cinematic_shots.map(lambda i: (i.end + 1, i.end + 1, i.payload))
).coalesce()
gt_shot_boundaries = shots.map(lambda i: (i.start, i.start, i.payload)).set_union(
shots.map(lambda i: (i.end + 1, i.end + 1, i.payload))
).coalesce()
for fold in folds:
tp = 0
fp = 0
fn = 0
for video_id in fold:
cine_sb = cinematic_shot_boundaries.get_intervallist(video_id)
gt_sb = gt_shot_boundaries.get_intervallist(video_id)
accurate_sb = cine_sb.filter_against(gt_sb, predicate=overlaps())
inaccurate_sb = cine_sb.minus(accurate_sb)
found_human_sb = gt_sb.filter_against(cine_sb, predicate=overlaps())
missed_human_sb = gt_sb.minus(found_human_sb)
tp += accurate_sb.size()
fp += inaccurate_sb.size()
fn += missed_human_sb.size()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print('Precision: {}, {} out of {}'.format(
precision,
tp,
tp + fp
))
print('Recall: {}, {} out of {}'.format(
recall,
tp,
tp + fn
))
print('F1: {}'.format(2 * precision * recall / (precision + recall)))
print()
###Output
_____no_output_____
###Markdown
Heuristics:* Fold 1: * Precision: 0.8512396694214877, 103 out of 121 * Recall: 0.8373983739837398, 103 out of 123 * F1: 0.8442622950819672* Fold 2: * Precision: 0.948051948051948, 73 out of 77 * Recall: 0.7448979591836735, 73 out of 98 * F1: 0.8342857142857143* Fold 3: * Precision: 0.8829787234042553, 166 out of 188 * Recall: 0.9431818181818182, 166 out of 176 * F1: 0.9120879120879122* Fold 4: * Precision: 0.8571428571428571, 78 out of 91 * Recall: 0.7878787878787878, 78 out of 99 * F1: 0.8210526315789474* Fold 5: * Precision: 0.9090909090909091, 110 out of 121 * Recall: 0.8396946564885496, 110 out of 131 * F1: 0.873015873015873Average F1: .857
###Code
# Heuristic, window version
stride = 8
window_size = 16
clips_window = shots.dilate(1).coalesce().dilate(-1).map(
lambda intrvl: (
intrvl.start - stride - ((intrvl.start - stride) % stride),
intrvl.end + stride - ((intrvl.end - stride) % stride),
intrvl.payload
)
).dilate(1).coalesce().dilate(-1)
items_intrvls = {}
for video_id in clips_window.get_allintervals():
items_intrvls[video_id] = []
for intrvl in clips_window.get_intervallist(video_id).get_intervals():
items_intrvls[video_id] += [
(f, f + window_size, 0)
for f in range(intrvl.start, intrvl.end - stride, stride)
]
items_col = VideoIntervalCollection(items_intrvls)
items_w_gt_boundaries = items_col.filter_against(
gt_shot_boundaries,
predicate=during_inv()
).map(
lambda intrvl: (intrvl.start, intrvl.end, 2)
)
items_w_gt_labels = items_col.minus(
items_w_gt_boundaries, predicate=equal()
).set_union(items_w_gt_boundaries)
items_w_cinematic_boundaries = items_col.filter_against(
cinematic_shot_boundaries,
predicate=during_inv()
).map(
lambda intrvl: (intrvl.start, intrvl.end, 2)
)
items_w_cinematic_labels = items_col.minus(
items_w_cinematic_boundaries, predicate=equal()
).set_union(items_w_cinematic_boundaries)
for fold in folds:
tp = 0
tn = 0
fp = 0
fn = 0
for video_id in fold:
cine_items = items_w_cinematic_labels.get_intervallist(video_id)
gt_items = items_w_gt_labels.get_intervallist(video_id)
for cine_item, gt_item in zip(cine_items.get_intervals(), gt_items.get_intervals()):
if cine_item.payload == gt_item.payload:
if cine_item.payload == 2:
tp += 1
else:
tn += 1
else:
if cine_item.payload == 2:
fp += 1
else:
fn += 1
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print('Precision: {}, {} out of {}'.format(
precision,
tp,
tp + fp
))
print('Recall: {}, {} out of {}'.format(
recall,
tp,
tp + fn
))
print('F1: {}'.format(2 * precision * recall / (precision + recall)))
print('TP: {} TN: {} FP: {} FN: {}'.format(tp, tn, fp, fn))
print()
###Output
_____no_output_____
###Markdown
```Precision: 0.8916666666666667, 214 out of 240Recall: 0.8629032258064516, 214 out of 248F1: 0.8770491803278689TP: 214 TN: 1321 FP: 26 FN: 34Precision: 0.972027972027972, 139 out of 143Recall: 0.7473118279569892, 139 out of 186F1: 0.844984802431611TP: 139 TN: 328 FP: 4 FN: 47Precision: 0.8919667590027701, 322 out of 361Recall: 0.9817073170731707, 322 out of 328F1: 0.9346879535558781TP: 322 TN: 2297 FP: 39 FN: 6Precision: 0.8802395209580839, 147 out of 167Recall: 0.8032786885245902, 147 out of 183F1: 0.84TP: 147 TN: 900 FP: 20 FN: 36Precision: 0.9184549356223176, 214 out of 233Recall: 0.852589641434263, 214 out of 251F1: 0.8842975206611571TP: 214 TN: 1148 FP: 19 FN: 37Average F1: 0.876``` DeepSBD Evaluation
###Code
# helper functions for deepsbd testing
def calculate_accuracy(outputs, targets):
batch_size = targets.size(0)
_, pred = outputs.topk(1, 1, True)
pred = pred.t()
correct = pred.eq(targets.view(1, -1))
n_correct_elems = correct.float().sum().item()
return n_correct_elems / batch_size
def prf1_array(pos_label, neg_label, gt, preds):
tp = 0.
fp = 0.
tn = 0.
fn = 0.
for truth, pred in zip(gt, preds):
if truth == pred:
if pred == pos_label:
tp += 1.
else:
tn += 1.
else:
if pred == pos_label:
fp += 1.
else:
fn += 1.
precision = tp / (tp + fp) if tp + fp != 0 else 0
recall = tp / (tp + fn) if tp + fn != 0 else 0
f1 = 2 * precision * recall / (precision + recall) if precision + recall != 0 else 0
return (precision, recall, f1, tp, tn, fp, fn)
def get_label(res_tensor):
res_numpy=res_tensor.data.cpu().numpy()
labels=[]
for row in res_numpy:
labels.append(np.argmax(row))
return labels
def test_deepsbd(model, dataloader):
preds = []
labels = []
outputs = []
for clip_tensor, l, _ in tqdm(dataloader):
o = model(clip_tensor.to(device))
preds += get_label(o)
labels += l.data.numpy().tolist()
outputs += o.cpu().data.numpy().tolist()
preds = [2 if p == 2 else 0 for p in preds]
precision, recall, f1, tp, tn, fp, fn = prf1_array(2, 0, labels, preds)
print("Precision: {}, Recall: {}, F1: {}".format(precision, recall, f1))
print("TP: {}, TN: {}, FP: {}, FN: {}".format(tp, tn, fp, fn))
return preds, labels, outputs
# Load DeepSBD datasets for each fold
deepsbd_datasets = []
for fold in folds:
shots_in_fold_qs = Shot.objects.filter(
labeler__name__contains='manual',
video_id__in = fold
)
shots_in_fold = VideoIntervalCollection.from_django_qs(shots_in_fold_qs)
data = movies_deepsbd_data.DeepSBDDataset(shots_in_fold, verbose=True)
deepsbd_datasets.append(data)
# dataset to hold multiple folds
class DeepSBDTrainDataset(Dataset):
def __init__(self, datasets):
self.datasets = datasets
def __len__(self):
return sum(len(d) for d in self.datasets)
def __getitem__(self, idx):
for d in self.datasets:
if idx < len(d):
return d[idx]
else:
idx -= len(d)
return None
def weights_for_balanced_classes(self):
labels = [
item[3]
for d in self.datasets
for item in d.items
]
class_counts = {}
for l in labels:
if l not in class_counts:
class_counts[l] = 1
else:
class_counts[l] += 1
weights_per_class = {
l: len(labels) / class_counts[l]
for l in class_counts
}
return [
weights_per_class[l]
for l in labels
]
# models
deepsbd_alexnet_model = deepsbd_alexnet.deepSBD()
deepsbd_resnet_model = deepsbd_resnet.resnet18(num_classes=3,
sample_size=128,
sample_duration=16)
# alexnet deepSBD pre-trained on ClipShots
alexnet_state_dict = torch.load('models/ClipShots-DeepSBD-Alexnet-final.pth')['state_dict']
new_state_dict = OrderedDict()
for k, v in alexnet_state_dict.items():
name = k[7:]
new_state_dict[name] = v
deepsbd_alexnet_model.load_state_dict(new_state_dict)
# deepsbd_alexnet_model = deepsbd_alexnet_model.to(device)
# deepsbd_alexnet_model = deepsbd_alexnet_model.eval()
# resnet deepSBD pre-trained on ClipShots
resnet_state_dict = torch.load('models/ClipShots-DeepSBD-Resnet-18-final.pth')['state_dict']
new_state_dict = OrderedDict()
for k, v in resnet_state_dict.items():
name = k[7:]
new_state_dict[name] = v
deepsbd_resnet_model.load_state_dict(new_state_dict)
# deepsbd_resnet_model = deepsbd_resnet_model.to(device)
deepsbd_resnet_model = deepsbd_resnet_model.train()
# resnet deepSBD pre-trained on Kinetics
deepsbd_resnet_model_no_clipshots = deepsbd_resnet.resnet18(
num_classes=3,
sample_size=128,
sample_duration=16
)
deepsbd_resnet_model_no_clipshots.load_weights('models/resnet-18-kinetics.pth')
# alexnet deepSBD
deepsbd_alexnet_model_no_clipshots = deepsbd_alexnet.deepSBD()
deepsbd_resnet_model_no_clipshots = deepsbd_resnet_model_no_clipshots.to(device).train()
training_dataset_fold1 = DeepSBDTrainDataset(deepsbd_datasets[:4])
fold1_weights = torch.DoubleTensor(training_dataset_fold1.weights_for_balanced_classes())
fold1_sampler = torch.utils.data.sampler.WeightedRandomSampler(fold1_weights, len(fold1_weights))
training_dataloader_fold1 = DataLoader(
training_dataset_fold1,
num_workers=0,
shuffle=False,
batch_size=16,
sampler=fold1_sampler
)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(deepsbd_resnet_model.parameters(),
lr=.001, momentum=.9, weight_decay=1e-3)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min',patience=60000)
def train_epoch(epoch, training_dataloader, model, criterion, optimizer, scheduler):
iter_len = len(training_dataloader)
training_iter = iter(training_dataloader)
for i in range(iter_len):
clip_tensor, targets, _ = next(training_iter)
outputs = model(clip_tensor.to(device))
targets = targets.to(device)
loss = criterion(outputs, targets)
acc = calculate_accuracy(outputs, targets)
preds = get_label(outputs)
preds = [2 if p == 2 else 0 for p in preds]
precision, recall, f1, tp, tn, fp, fn = prf1_array(
2, 0, targets.cpu().data.numpy().tolist(), preds)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('Epoch: [{0}][{1}/{2}]\t'
'Loss_conf {loss_c:.4f}\t'
'acc {acc:.4f}\t'
'pre {pre:.4f}\t'
'rec {rec:.4f}\t'
'f1 {f1: .4f}\t'
'TP {tp} '
'TN {tn} '
'FP {fp} '
'FN {fn} '
.format(
epoch, i + 1, iter_len, loss_c=loss.item(), acc=acc,
pre=precision, rec=recall, f1=f1,
tp=tp, tn=tn, fp=fp, fn=fn))
save_file_path = os.path.join(
'/app/notebooks/learning/models/deepsbd_resnet_clipshots_pretrain_train_on_folds',
'fold5_{}_epoch.pth'.format(epoch)
)
states = {
'epoch': epoch + 1,
'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict()
}
torch.save(states, save_file_path)
state = torch.load('models/deepsbd_resnet_train_on_folds/fold4_4_epoch.pth')
deepsbd_resnet_model_no_clipshots.load_state_dict(state['state_dict'])
for i in range(5):
train_epoch(i, training_dataloader_fold1, deepsbd_resnet_model, criterion, optimizer, scheduler)
# specialize pre-trained model
# test models on splits
model = deepsbd_resnet_model.to(device).eval()
per_fold_preds_labels_outputs = []
for fold_dataset in deepsbd_datasets:
dataloader = DataLoader(fold_dataset, batch_size=8, shuffle=False, num_workers=0)
preds, labels, outputs = test_deepsbd(model, dataloader)
per_fold_preds_labels_outputs.append((preds, labels, outputs))
# test models on splits
model = deepsbd_alexnet_model.to(device).eval()
per_fold_preds_labels_outputs_alexnet = []
for fold_dataset in deepsbd_datasets:
dataloader = DataLoader(fold_dataset, batch_size=8, shuffle=False, num_workers=0)
preds, labels, outputs = test_deepsbd(model, dataloader)
per_fold_preds_labels_outputs.append((preds, labels, outputs))
model = deepsbd_resnet_model.eval()
per_fold_preds_labels_outputs_fold_training_only = []
for fold_dataset in deepsbd_datasets[4:]:
dataloader = DataLoader(fold_dataset, batch_size=8, shuffle=False, num_workers=0)
preds, labels, outputs = test_deepsbd(model, dataloader)
per_fold_preds_labels_outputs_fold_training_only.append((preds, labels, outputs))
model.load_weights('models/resnet-18-kinetics.pth')
per_fold_preds_labels_outputs_fold_training_only = []
for fold_dataset in deepsbd_datasets[:1]:
dataloader = DataLoader(fold_dataset, batch_size=8, shuffle=False, num_workers=0)
preds, labels, outputs = test_deepsbd(model, dataloader)
per_fold_preds_labels_outputs_fold_training_only.append((preds, labels, outputs))
###Output
_____no_output_____
###Markdown
DeepSBD, ResNet18 backbone trained on ClipShots:* Fold 1 * Precision: 0.8636363636363636, Recall: 0.9620253164556962, F1: 0.9101796407185629 * TP: 228.0, TN: 1322.0, FP: 36.0, FN: 9.0* Fold 2 * Precision: 0.8934010152284264, Recall: 0.9617486338797814, F1: 0.9263157894736842 * TP: 176.0, TN: 314.0, FP: 21.0, FN: 7.0* Fold 3 * Precision: 0.7666666666666667, Recall: 0.8263473053892215, F1: 0.7953890489913544 * TP: 276.0, TN: 2246.0, FP: 84.0, FN: 58.0* Fold 4 * Precision: 0.8960396039603961, Recall: 1.0, F1: 0.9451697127937337 * TP: 181.0, TN: 901.0, FP: 21.0, FN: 0.0* Fold 5 * Precision: 0.8571428571428571, Recall: 0.9831932773109243, F1: 0.9158512720156555 * TP: 234.0, TN: 1141.0, FP: 39.0, FN: 4.0Average F1: .898DeepSBD, AlexNet backbone trained on ClipShots:* Fold 1 * Precision: 0.8507462686567164, Recall: 0.9620253164556962, F1: 0.902970297029703 * TP: 228.0, TN: 1318.0, FP: 40.0, FN: 9.0* Fold 2 * Precision: 0.912568306010929, Recall: 0.912568306010929, F1: 0.912568306010929 * TP: 167.0, TN: 319.0, FP: 16.0, FN: 16.0* Fold 3 * Precision: 0.7818696883852692, Recall: 0.8263473053892215, F1: 0.8034934497816594 * TP: 276.0, TN: 2253.0, FP: 77.0, FN: 58.0* Fold 4 * Precision: 0.9782608695652174, Recall: 0.994475138121547, F1: 0.9863013698630136 * TP: 180.0, TN: 918.0, FP: 4.0, FN: 1.0* Fold 5 * Precision: 0.8669201520912547, Recall: 0.957983193277311, F1: 0.9101796407185628 * TP: 228.0, TN: 1145.0, FP: 35.0, FN: 10.0 Average F1: .903 DeepSBD, ResNet18 backbone trained on folds only:* Fold 1 * Precision: 0.7737226277372263, Recall: 0.8945147679324894, F1: 0.8297455968688846 * TP: 212.0, TN: 1296.0, FP: 62.0, FN: 25.0* Fold 2 * Precision: 0.8165680473372781, Recall: 0.7540983606557377, F1: 0.7840909090909091 * TP: 138.0, TN: 304.0, FP: 31.0, FN: 45.0* Fold 3 * Precision: 0.7407407407407407, Recall: 0.718562874251497, F1: 0.7294832826747719 * TP: 240.0, TN: 2246.0, FP: 84.0, FN: 94.0* Fold 4 * Precision: 0.7990196078431373, Recall: 0.9005524861878453, F1: 0.8467532467532468 * TP: 163.0, TN: 881.0, FP: 41.0, FN: 18.0* Fold 5 * Precision: 0.8057851239669421, Recall: 0.819327731092437, F1: 0.8125 * TP: 195.0, TN: 1133.0, FP: 47.0, FN: 43.0 Average F1: .801DeepSBD, ResNet18 backbone pre-trained on ClipShots, and then trained on folds:* Fold 1 * Precision: 0.7482758620689656, Recall: 0.9156118143459916, F1: 0.823529411764706 * TP: 217.0, TN: 1285.0, FP: 73.0, FN: 20.0* Fold 2 * Precision: 0.8685714285714285, Recall: 0.8306010928961749, F1: 0.8491620111731845 * TP: 152.0, TN: 312.0, FP: 23.0, FN: 31.0* Fold 3 * Precision: 0.8092105263157895, Recall: 0.7365269461077845, F1: 0.7711598746081504 * TP: 246.0, TN: 2272.0, FP: 58.0, FN: 88.0* Fold 4 * Precision: 0.9344262295081968, Recall: 0.9447513812154696, F1: 0.9395604395604397 * TP: 171.0, TN: 910.0, FP: 12.0, FN: 10.0* Fold 5 * Precision: 0.8771186440677966, Recall: 0.8697478991596639, F1: 0.8734177215189872 * TP: 207.0, TN: 1151.0, FP: 29.0, FN: 31.0 Average F1: .851 Weak Labels K folds
###Code
# Load DeepSBD datasets for each fold
deepsbd_datasets_logits = []
for fold in folds:
shots_in_fold_qs = Shot.objects.filter(
labeler__name__contains='manual',
video_id__in = fold
)
shots_in_fold = VideoIntervalCollection.from_django_qs(shots_in_fold_qs)
data = movies_deepsbd_data.DeepSBDDataset(shots_in_fold, verbose=True, preload=True, logits=True)
deepsbd_datasets_logits.append(data)
deepsbd_datasets_logits[0].items
# load weak labels
with open('/app/data/shot_detection_weak_labels/noisy_labels_all_windows.npy', 'rb') as f:
weak_labels_windows = np.load(f)
weak_labels_windows[:10]
weak_labels_windows[0][0][0]
weak_labels_collected = collect(
weak_labels_windows,
lambda row: row[0][0]
)
weak_labels_col = VideoIntervalCollection({
video_id: [
(row[0][1] ,row[0][2], row[1])
for row in weak_labels_collected[video_id]
]
for video_id in tqdm(list(weak_labels_collected.keys()))
})
def weak_payload_to_logits(weak_payload):
return (weak_payload[1], 0., weak_payload[0])
deepsbd_datasets_weak = []
for dataset in deepsbd_datasets_logits:
items_collected = collect(
dataset.items,
lambda item: item[0]
)
items_col = VideoIntervalCollection({
video_id: [
(item[1], item[2], item[3])
for item in items_collected[video_id]
]
for video_id in items_collected
})
new_items = weak_labels_col.join(
items_col,
predicate=equal(),
working_window=1,
merge_op = lambda weak, item: [weak]
)
dataset.items = [
(video_id, intrvl.start, intrvl.end, weak_payload_to_logits(intrvl.payload))
for video_id in sorted(list(new_items.get_allintervals().keys()))
for intrvl in new_items.get_intervallist(video_id).get_intervals()
]
deepsbd_datasets_weak.append(dataset)
# dataset to hold multiple folds for weak data
class DeepSBDWeakTrainDataset(Dataset):
def __init__(self, datasets):
self.datasets = datasets
def __len__(self):
return sum(len(d) for d in self.datasets)
def __getitem__(self, idx):
for d in self.datasets:
if idx < len(d):
return d[idx]
else:
idx -= len(d)
return None
def weights_for_balanced_classes(self):
labels = [
np.argmax(item[3])
for d in self.datasets
for item in d.items
]
class_counts = [
0
for i in range(len(self.datasets[0].items[0]))
]
for l in labels:
class_counts[l] += 1
weights_per_class = {
i: len(labels) / l if l != 0 else 0
for i, l in enumerate(class_counts)
}
return [
weights_per_class[l]
for l in labels
]
# resnet deepSBD pre-trained on Kinetics
deepsbd_resnet_model_no_clipshots = deepsbd_resnet.resnet18(
num_classes=3,
sample_size=128,
sample_duration=16
)
deepsbd_resnet_model_no_clipshots.load_weights('models/resnet-18-kinetics.pth')
deepsbd_resnet_model_no_clipshots = deepsbd_resnet_model_no_clipshots.to(device).train()
training_dataset_fold1 = DeepSBDWeakTrainDataset(deepsbd_datasets_weak[1:])
fold1_weights = torch.DoubleTensor(training_dataset_fold1.weights_for_balanced_classes())
fold1_sampler = torch.utils.data.sampler.WeightedRandomSampler(fold1_weights, len(fold1_weights))
training_dataloader_fold1 = DataLoader(
training_dataset_fold1,
num_workers=0,
shuffle=False,
batch_size=16,
sampler=fold1_sampler
)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(deepsbd_resnet_model_no_clipshots.parameters(),
lr=.001, momentum=.9, weight_decay=1e-3)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min',patience=60000)
# helper functions for deepsbd testing
def calculate_accuracy_logits(outputs, targets):
batch_size = targets.size(0)
_, pred = outputs.topk(1, 1, True)
pred = pred.t()
_, target_preds = targets.topk(1, 1, True)
correct = pred.eq(target_preds.view(1, -1))
n_correct_elems = correct.float().sum().item()
return n_correct_elems / batch_size
def prf1_array(pos_label, neg_label, gt, preds):
tp = 0.
fp = 0.
tn = 0.
fn = 0.
for truth, pred in zip(gt, preds):
if truth == pred:
if pred == pos_label:
tp += 1.
else:
tn += 1.
else:
if pred == pos_label:
fp += 1.
else:
fn += 1.
precision = tp / (tp + fp) if tp + fp != 0 else 0
recall = tp / (tp + fn) if tp + fn != 0 else 0
f1 = 2 * precision * recall / (precision + recall) if precision + recall != 0 else 0
return (precision, recall, f1, tp, tn, fp, fn)
def get_label(res_tensor):
res_numpy=res_tensor.data.cpu().numpy()
labels=[]
for row in res_numpy:
labels.append(np.argmax(row))
return labels
def test_deepsbd(model, dataloader):
preds = []
labels = []
outputs = []
for clip_tensor, l, _ in tqdm(dataloader):
o = model(clip_tensor.to(device))
l = torch.transpose(torch.stack(l).to(device), 0, 1).float()
preds += get_label(o)
labels += get_label(l)
outputs += o.cpu().data.numpy().tolist()
preds = [2 if p == 2 else 0 for p in preds]
precision, recall, f1, tp, tn, fp, fn = prf1_array(2, 0, labels, preds)
print("Precision: {}, Recall: {}, F1: {}".format(precision, recall, f1))
print("TP: {}, TN: {}, FP: {}, FN: {}".format(tp, tn, fp, fn))
return preds, labels, outputs
def train_epoch(epoch, training_dataloader, model, criterion, optimizer, scheduler, fold_num=1):
iter_len = len(training_dataloader)
training_iter = iter(training_dataloader)
for i in range(iter_len):
clip_tensor, targets, _ = next(training_iter)
outputs = model(clip_tensor.to(device))
targets = torch.transpose(torch.stack(targets).to(device), 0, 1).float()
loss = criterion(outputs, targets)
acc = calculate_accuracy_logits(outputs, targets)
preds = get_label(outputs)
preds = [2 if p == 2 else 0 for p in preds]
target_preds = get_label(targets)
precision, recall, f1, tp, tn, fp, fn = prf1_array(
2, 0, target_preds, preds)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('Epoch: [{0}][{1}/{2}]\t'
'Loss_conf {loss_c:.4f}\t'
'acc {acc:.4f}\t'
'pre {pre:.4f}\t'
'rec {rec:.4f}\t'
'f1 {f1: .4f}\t'
'TP {tp} '
'TN {tn} '
'FP {fp} '
'FN {fn} '
.format(
epoch, i + 1, iter_len, loss_c=loss.item(), acc=acc,
pre=precision, rec=recall, f1=f1,
tp=tp, tn=tn, fp=fp, fn=fn))
save_file_path = os.path.join(
'/app/notebooks/learning/models/deepsbd_resnet_clipshots_pretrain_train_on_folds_weak',
'fold{}_{}_epoch.pth'.format(fold_num, epoch)
)
states = {
'epoch': epoch + 1,
'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict()
}
torch.save(states, save_file_path)
# train K folds
for i in range(5):
training_datasets = DeepSBDWeakTrainDataset(
deepsbd_datasets_weak[:i] + deepsbd_datasets_weak[i+1:])
fold_weights = torch.DoubleTensor(training_datasets.weights_for_balanced_classes())
fold_sampler = torch.utils.data.sampler.WeightedRandomSampler(fold_weights, len(fold_weights))
training_dataloader = DataLoader(
training_datasets,
num_workers=0,
shuffle=False,
batch_size=16,
sampler=fold_sampler
)
criterion = nn.BCEWithLogitsLoss()
# reset model
deepsbd_resnet_model_no_clipshots.load_weights('models/resnet-18-kinetics.pth')
optimizer = optim.SGD(deepsbd_resnet_model_no_clipshots.parameters(),
lr=.001, momentum=.9, weight_decay=1e-3)
scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, 'min',patience=60000)
for epoch in range(5):
train_epoch(
epoch, training_dataloader,
deepsbd_resnet_model_no_clipshots,
criterion, optimizer, scheduler, fold_num = i + 1)
per_fold_preds_labels_outputs_fold_training_only = []
for i in range(0, 5):
# load
weights = torch.load(os.path.join(
'models/deepsbd_resnet_clipshots_pretrain_train_on_folds_weak',
'fold{}_{}_epoch.pth'.format(i + 1, 4)))['state_dict']
deepsbd_resnet_model_no_clipshots.load_state_dict(weights)
deepsbd_resnet_model_no_clipshots = deepsbd_resnet_model_no_clipshots.eval()
test_dataset = deepsbd_datasets_weak[i]
dataloader = DataLoader(test_dataset, batch_size=8, shuffle=False, num_workers=0)
preds, labels, outputs = test_deepsbd(deepsbd_resnet_model_no_clipshots, dataloader)
per_fold_preds_labels_outputs_fold_training_only.append((preds, labels, outputs))
###Output
_____no_output_____
###Markdown
```Precision: 0.7669491525423728, Recall: 0.8190045248868778, F1: 0.7921225382932167TP: 181.0, TN: 1319.0, FP: 55.0, FN: 40.0Precision: 0.45294117647058824, Recall: 0.8369565217391305, F1: 0.5877862595419847TP: 77.0, TN: 333.0, FP: 93.0, FN: 15.0Precision: 0.7121771217712177, Recall: 0.6225806451612903, F1: 0.6643717728055077TP: 193.0, TN: 2276.0, FP: 78.0, FN: 117.0Precision: 0.7078651685393258, Recall: 0.7455621301775148, F1: 0.7262247838616714TP: 126.0, TN: 882.0, FP: 52.0, FN: 43.0Precision: 0.7053140096618358, Recall: 0.7564766839378239, F1: 0.73TP: 146.0, TN: 1164.0, FP: 61.0, FN: 47.0Average F1: 0.70``` Whole movies
###Code
# same as above, except train on whole movies
###Output
_____no_output_____
###Markdown
100 movies
###Code
# train on 100 movies
###Output
_____no_output_____
###Markdown
All movies
###Code
# train on all movies
###Output
_____no_output_____
###Markdown
DSM Evaluation
###Code
# adaptive filtering
# dataloaders
# model
# load pre-loaded model
# train from scratch
# specialize pre-trained model
###Output
_____no_output_____ |
old_versions/2018.11.02_ref/1main_time_series-v4.ipynb | ###Markdown
2018.10.27: Multiple states: Time series
###Code
import sys,os
import numpy as np
from scipy import linalg
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
%matplotlib inline
# setting parameter:
np.random.seed(1)
n = 10 # number of positions
m = 3 # number of values at each position
l = 2*((n*m)**2) # number of samples
g = 1.
def itab(n,m):
i1 = np.zeros(n)
i2 = np.zeros(n)
for i in range(n):
i1[i] = i*m
i2[i] = (i+1)*m
return i1.astype(int),i2.astype(int)
i1tab,i2tab = itab(n,m)
# generate coupling matrix w0:
def generate_coupling(n,m,g):
nm = n*m
w = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,:] -= w[i1:i2,:].mean(axis=0)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[:,i1:i2] -= w[:,i1:i2].mean(axis=1)[:,np.newaxis]
return w
w0 = generate_coupling(n,m,g)
"""
plt.figure(figsize=(3,3))
plt.title('actual coupling matrix')
plt.imshow(w0,cmap='rainbow',origin='lower')
plt.xlabel('j')
plt.ylabel('i')
plt.clim(-0.3,0.3)
plt.colorbar(fraction=0.045, pad=0.05,ticks=[-0.3,0,0.3])
plt.show()
"""
# 2018.10.27: generate sequences: time series
def generate_sequences_MCMC(w,n,m,l):
#print(i1tab,i2tab)
# initial s (categorical variables)
s_ini = np.random.randint(0,m,size=(l,n)) # integer values
#print(s_ini)
# onehot encoder
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s_ini).toarray()
#print(s)
ntrial = 100
for t in range(l-1):
h = np.sum(s[t,:]*w[:,:],axis=1)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
k = np.random.randint(0,m)
for itrial in range(ntrial):
k2 = np.random.randint(0,m)
while k2 == k:
k2 = np.random.randint(0,m)
if np.exp(h[i1+k2]- h[i1+k]) > np.random.rand():
k = k2
s[t+1,i1:i2] = 0.
s[t+1,i1+k] = 1.
return s
s = generate_sequences_MCMC(w0,n,m,l)
print(s[:5])
def fit_increment(s,n,m):
l = s.shape[0]
s_av = np.mean(s[:-1],axis=0)
ds = s[:-1] - s_av
c = np.cov(ds,rowvar=False,bias=True)
#print(c)
c_inv = linalg.pinv(c,rcond=1e-15)
#print(c_inv)
nm = n*m
wini = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
#print(w)
nloop = 100
w_infer = np.zeros((nm,nm))
for i in range(n):
#print(i)
i1,i2 = i1tab[i],i2tab[i]
#s1 = np.copy(s[1:,i1:i2])
w = wini[i1:i2,:]
h = s[1:,i1:i2]
for iloop in range(nloop):
h_av = h.mean(axis=0)
dh = h - h_av
dhds = dh[:,:,np.newaxis]*ds[:,np.newaxis,:]
dhds_av = dhds.mean(axis=0)
w = np.dot(dhds_av,c_inv)
h = np.dot(s[:-1],w.T)
p = np.exp(h)
p_sum = p.sum(axis=1)
for k in range(m):
p[:,k] = p[:,k]/p_sum[:]
h += s[1:,i1:i2] - p
#h += s[1:,i1:i2]/p
#cost = ((s[1:,i1:i2]-p)**2).mean(axis=0)
#print(i,iloop,cost)
#w = w - w.mean(axis=0)
w_infer[i1:i2,:] = w
return w_infer
w1 = fit_increment(s,n,m)
plt.scatter(w0,w1)
plt.plot([-0.3,0.3],[-0.3,0.3],'r--')
mse = ((w0-w1)**2).mean()
print(mse)
###Output
0.0027510545633005483
|
docs/titanic_example.ipynb | ###Markdown
Basic use of Keras-batchflow on Titanic dataBelow example shows the most basic use of keras batchflow for predicting survival in Titanic disaster. A well known [Titanic dataset](https://www.kaggle.com/c/titanic/data) from [Kaggle](https://www.kaggle.com) is used in this exampleThis dataset has a mixture of both categorical and numeric variables which will highlight the features of keras-batchflow better. Data pre-processing
###Code
import pandas as pd
import numpy as np
data = pd.read_csv('../data/titanic/train.csv')
data.shape
###Output
_____no_output_____
###Markdown
There are only 891 datapoints in training dataset
###Code
data.head()
###Output
_____no_output_____
###Markdown
Imagine after exploratory analysis and model finding, only few columns were selected as features: **Pclass, Sex, Age, and Embarked**. Lets see if there are any NAs to fill:
###Code
data[['Pclass', 'Sex', 'Age', 'Embarked', 'Survived']].isna().apply(sum)
###Output
_____no_output_____
###Markdown
Lets fill those NAs:
###Code
data['Age'] = data['Age'].fillna(0)
data['Embarked'] = data['Embarked'].fillna('')
###Output
_____no_output_____
###Markdown
The outcome column `Survived` is ordinal categorical too, but it is presented as 0 and 1 and does not require any conversion for the purpose of binary classification. For making example more generic, I will convert this outcome to text labels Yes and No
###Code
data.loc[data.Survived == 1, 'Survived'] = 'Yes'
data.loc[data.Survived == 0, 'Survived'] = 'No'
###Output
_____no_output_____
###Markdown
Batch generator I would like to build a simple neural network using embedding for all categorical values, which will predict if a passenger would survive. When building such a model, I will need to provide number of levels of each categorical feature in embedding layers declarations. Keras-batchflow provides some automation helping determining this parameter for each feature and therefore, I will build a generator first.To build a batchflow generator you will first need to define your encoders, that will map categorical value to its integer repredentation. I will use sklearn LabelEncoder for this purpose.
###Code
from sklearn.preprocessing import LabelEncoder
class_enc = LabelEncoder().fit(data['Pclass'])
sex_enc = LabelEncoder().fit(data['Sex'])
embarked_enc = LabelEncoder().fit(data['Embarked'].astype(str))
surv_enc = LabelEncoder().fit(data['Survived'])
###Output
_____no_output_____
###Markdown
Split train and validation data
###Code
from sklearn.model_selection import train_test_split
train_data, test_data = train_test_split(data, train_size=.85, random_state=0)
###Output
_____no_output_____
###Markdown
Now, I can define a batch generator. I will be using a basic class `BatchGenerator`
###Code
from keras_batchflow.batch_generator import BatchGenerator
x_structure = [
('Pclass', class_enc),
('Sex', sex_enc),
('Embarked', embarked_enc),
# None below means no encoding will be applied and values will be passed unchanged
('Age', None),
]
y_structure = ('Survived', surv_enc)
bg_train = BatchGenerator(train_data,
x_structure=x_structure,
y_structure=y_structure,
shuffle = True,
batch_size=8)
bg_test = BatchGenerator(test_data,
x_structure=x_structure,
y_structure=y_structure,
shuffle = True,
batch_size=8)
###Output
Using TensorFlow backend.
###Markdown
I can now check the first batch it generates
###Code
bg_test[0]
###Output
_____no_output_____
###Markdown
It is exactly what keras will expect: - the batch is tuple (X, y)- X is a list of numpy arrays - this is how Keras expects multiple inputs to be passed- y is a single numpy array. Before I jump into building a keras model. I'd like to show a helper functions of keras-batchflow for automated model creation
###Code
bg_train.shape
bg_train.metadata
###Output
_____no_output_____
###Markdown
Keras model
###Code
from keras.layers import Input, Embedding, Dense, Concatenate, Lambda, Dropout
from keras.models import Model
import keras.backend as K
metadata_x, metadata_y = bg_train.metadata
# define categorical and numeric inputs from X metadata
inputs = [Input(shape=m['shape'], dtype=m['dtype']) for m in metadata_x]
# Define embeddings for categorical features (where n_classes not None) and connect them to inputs
embs = [Embedding(m['n_classes'], 10)(inp) for m, inp in zip(metadata_x, inputs) if m['n_classes'] is not None]
# Collapse unnecessary dimension after emmbedding layers (None, 1, 10) -> (None, 10)
embs = [Lambda(lambda x: K.squeeze(x, axis=1))(emb) for emb in embs]
# separate numeric inputs
num_inps = [inp for m, inp in zip(metadata_x, inputs) if m['n_classes'] is None]
# convert data type to standard keras float datatype
num_x = [Lambda(lambda x: K.cast(x, 'float32'))(ni) for ni in num_inps]
# merge all inputs
x = Concatenate()(embs + num_x)
x = Dropout(.3)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(.3)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(.3)(x)
survived = Dense(2, activation='softmax')(x)
model = Model(inputs, survived)
###Output
WARNING:tensorflow:From /home/max/Code/opensource/batchflow/venv/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
###Markdown
I have added quite significant dropout to avoid overfitting as titanic dataset is quite small for neural networks.
###Code
model.summary()
###Output
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 1) 0
__________________________________________________________________________________________________
input_2 (InputLayer) (None, 1) 0
__________________________________________________________________________________________________
input_3 (InputLayer) (None, 1) 0
__________________________________________________________________________________________________
embedding_1 (Embedding) (None, 1, 10) 30 input_1[0][0]
__________________________________________________________________________________________________
embedding_2 (Embedding) (None, 1, 10) 20 input_2[0][0]
__________________________________________________________________________________________________
embedding_3 (Embedding) (None, 1, 10) 40 input_3[0][0]
__________________________________________________________________________________________________
input_4 (InputLayer) (None, 1) 0
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 10) 0 embedding_1[0][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, 10) 0 embedding_2[0][0]
__________________________________________________________________________________________________
lambda_3 (Lambda) (None, 10) 0 embedding_3[0][0]
__________________________________________________________________________________________________
lambda_4 (Lambda) (None, 1) 0 input_4[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 31) 0 lambda_1[0][0]
lambda_2[0][0]
lambda_3[0][0]
lambda_4[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 31) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 64) 2048 dropout_1[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 64) 0 dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 32) 2080 dropout_2[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 32) 0 dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 2) 66 dropout_3[0][0]
==================================================================================================
Total params: 4,284
Trainable params: 4,284
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
I can now compile and train the model
###Code
model.compile('adam', 'sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(bg_train, validation_data=bg_test, epochs=20)
###Output
WARNING:tensorflow:From /home/max/Code/opensource/batchflow/venv/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
Epoch 1/20
95/95 [==============================] - 1s 9ms/step - loss: 1.5311 - accuracy: 0.6341 - val_loss: 0.6441 - val_accuracy: 0.6119
Epoch 2/20
95/95 [==============================] - 1s 6ms/step - loss: 1.0748 - accuracy: 0.6407 - val_loss: 0.6085 - val_accuracy: 0.6940
Epoch 3/20
95/95 [==============================] - 1s 6ms/step - loss: 0.8298 - accuracy: 0.6301 - val_loss: 0.6158 - val_accuracy: 0.7313
Epoch 4/20
95/95 [==============================] - 1s 6ms/step - loss: 0.6777 - accuracy: 0.6658 - val_loss: 0.6798 - val_accuracy: 0.5299
Epoch 5/20
95/95 [==============================] - 1s 6ms/step - loss: 0.6309 - accuracy: 0.6896 - val_loss: 0.7950 - val_accuracy: 0.8060
Epoch 6/20
95/95 [==============================] - 1s 6ms/step - loss: 0.6218 - accuracy: 0.6777 - val_loss: 0.6781 - val_accuracy: 0.6194
Epoch 7/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5868 - accuracy: 0.7067 - val_loss: 0.4251 - val_accuracy: 0.7985
Epoch 8/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5782 - accuracy: 0.7054 - val_loss: 0.5653 - val_accuracy: 0.7836
Epoch 9/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5942 - accuracy: 0.6843 - val_loss: 0.7265 - val_accuracy: 0.7985
Epoch 10/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5785 - accuracy: 0.7133 - val_loss: 0.8065 - val_accuracy: 0.8134
Epoch 11/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5595 - accuracy: 0.7133 - val_loss: 0.3281 - val_accuracy: 0.7836
Epoch 12/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5611 - accuracy: 0.7464 - val_loss: 0.4726 - val_accuracy: 0.7985
Epoch 13/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5306 - accuracy: 0.7517 - val_loss: 0.3634 - val_accuracy: 0.7910
Epoch 14/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5062 - accuracy: 0.7688 - val_loss: 0.6800 - val_accuracy: 0.8060
Epoch 15/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5155 - accuracy: 0.7649 - val_loss: 0.4450 - val_accuracy: 0.8060
Epoch 16/20
95/95 [==============================] - 1s 6ms/step - loss: 0.4949 - accuracy: 0.7820 - val_loss: 0.4729 - val_accuracy: 0.7836
Epoch 17/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5027 - accuracy: 0.7926 - val_loss: 0.6071 - val_accuracy: 0.7836
Epoch 18/20
95/95 [==============================] - 1s 6ms/step - loss: 0.5295 - accuracy: 0.7688 - val_loss: 0.7283 - val_accuracy: 0.8134
Epoch 19/20
95/95 [==============================] - 1s 6ms/step - loss: 0.4824 - accuracy: 0.7834 - val_loss: 0.6828 - val_accuracy: 0.8134
Epoch 20/20
95/95 [==============================] - 1s 6ms/step - loss: 0.4916 - accuracy: 0.7768 - val_loss: 0.3909 - val_accuracy: 0.8134
###Markdown
The model is trained. The next question is how to use it to predict labels in new data? Predicting using the keras-batchflowPredicting using the same structures is really simple: once new data are in the same format as your training data, you will just need to define a batch generator for predictions using same `x_structure` that you used aboveI will continue my example to show how it works:
###Code
unlabelled_data = pd.read_csv('../data/titanic/test.csv')
unlabelled_data.shape
###Output
_____no_output_____
###Markdown
Check if I need to fill any NAs
###Code
unlabelled_data.head()
unlabelled_data[['Pclass', 'Sex', 'Age', 'Embarked']].isna().apply(sum)
unlabelled_data['Age'] = unlabelled_data['Age'].fillna(0)
###Output
_____no_output_____
###Markdown
I can define a batch generator for predicting using the same structure I used in definition of batch generators used for training. The key is to set `shuffle=False`, drop `y_structure` and provide new unlabelled data
###Code
bg_predict = BatchGenerator(unlabelled_data,
x_structure=x_structure,
shuffle = False,
batch_size=8)
pred = model.predict_generator(bg_predict, verbose=1)
pred.shape
###Output
_____no_output_____
###Markdown
Outputs are one-hot-encoded so I need to convert it to indicies:
###Code
pred = pred.argmax(axis=1)
pred
###Output
_____no_output_____
###Markdown
These are predictions in encoded format. In most cases they need to be converted back to labels. Decoding predictions back to labelsMost of sklearn encoders have function `inverse_transform` for this purpose. In this example, when the model returns only one prediction variable I could use this function from the encoder `surv_enc` directly, but imagine I had more than one prediction, each of which used its own encoder. Wouldn't it be convenient to have `inverse_transform` at a batch generator level?Batch generator's `inverse_transform` returns a dataframe with predictions converted to labels and with correct column names:
###Code
bg_train.inverse_transform(pred)
###Output
_____no_output_____
###Markdown
All I need to do is concatenate it with the unlabelled data
###Code
pd.concat([unlabelled_data, bg_train.inverse_transform(pred)], axis=1).head()
###Output
_____no_output_____ |
.ipynb_checkpoints/Data_analysis-checkpoint.ipynb | ###Markdown
A total of 2000 virtual trials are run to evaluate the performance of a static head component and a rotational head component. In order to test the obstacle avoidance algorithm is as many different scenerios as possible, each trial was run in a randomly generated labyrinth of changing size. The overall obstacle density was kept constant which means that the number of obstacles scalled quadratically with the size of the labyrinth. In consequence, the number of deadends in the larger labyrinths increased. Trials in which the robot is able to escape the labyrinth are considered a success. If it hits a deadend, the robot did not collide with anything but also cannot find a way out of the labyrinth. A time out occurs when the robot does not get stuck in a deadend but takes too long to escape the labyrinth. The successrate is the main metric in the comparision of the rotatinal to the static head design. Comparing the successrate of all trials, the rotational head design performs slightly better with escaping 48% of all trials. The static head version 42% of all trials. Noticeable is that this performance difference increases in larger Labyrinths. The rotational head design performce more than twice as well as the static head design for labyrithn radii above 10 simulation unit lenghts. The pie chart shows that the rotational head design takes the big majority of all escapes at the large scale labyrinth simulation for high obstacle density.
###Code
series =[]
index = []
for i,df in enumerate([hd,sd]):
blocks = df['Blocks'][1]
radius = df['Radius'][1]
# df = df[df['Radius'] <= 7]
print(len(df))
# df = df[df['Radius'] > 3]
f = len(df[df['Deadend']==True])
s = len(df[df['Deadend']==False])
tot = s+f
t_frac = round(len(df[df['Timeout']==True])/tot,3)
s_frac = round(s/tot,3)
f_frac = round(f/tot,3)
series.append([s_frac-t_frac,f_frac,t_frac])
if df['Turned Head'].mean() > 1:
index.append('Turning Head '.format(radius,blocks))
else:
index.append('Static Head'.format(radius,blocks))
data = pd.DataFrame(series,columns=['Escaped','Hit Deadend','Timed Out'],index = index)
print(data.head())
plt.figure()
data.plot(kind='bar')
# plt.xlabel('Outcome')
plt.ylabel('Probability of Outcome')
plt.title('Labyrith Simulation: r = 7, obstacles = 98, n = 113')
plt.show()
hblocks = hd['Blocks'].copy()
hradius = hd['Radius'].copy()
sblocks = sd['Blocks'].copy()
sradius = sd['Radius'].copy()
hd_rb = hd.loc[:,['Radius','Blocks']].copy()
uni_radius = hd_rb['Radius'].unique().tolist()
uni_blocks = hd_rb['Blocks'].unique().tolist()
uni_radius.sort()
uni_blocks.sort()
print(uni_blocks)
print(uni_radius)
area = [(2*r)**2 - (2*1)**2 for r in uni_radius]
print(area)
obstacle_density = [b/a for b,a in zip(uni_blocks,area)]
print(obstacle_density)
hd['Turned Head'].mean()
hsuccess = hd[(hd['Deadend']==False) & (hd['Timeout']==False)]['Distance Travelled'].copy()
ssuccess = sd[(sd['Deadend']==False) & (sd['Timeout']==False)]['Distance Travelled'].copy()
print(hsuccess.min())
print(ssuccess.min())
print(hsuccess.mean())
print(ssuccess.mean())
print(hsuccess.max())
print(ssuccess.max())
# plt.figure()
# plt.hold(True)
# hsuccess.plot()
# # ssuccess.plot()
# # plt.legend("Rotational Head", "Static Head")
# plt.show()
hblocks.value_counts()
hr = hradius.value_counts(False)
sr = sradius.value_counts(False)
sr.name = 'Static Head Design'
hr.name = 'Rotational Head Design'
trails = pd.concat([hr,sr],axis=1)
plt.figure()
trails.plot(kind='bar')
plt.title("Destribution of Labyrinth Sizes")
plt.ylabel("Number of Trials")
plt.xlabel("Radius [Unit Length]")
plt.show()
trails.head(10)
hfailed = hd[hd['Deadend']==True]
sfailed = sd[sd['Deadend']==True]
# Progress before failure
hfailed['Radial Distance'].mean()/hfailed['Radius'].mean()
# Progress before failure
sfailed['Radial Distance'].mean()/sfailed['Radius'].mean()
hhard = hd[hd['Radius']>8]
shard = sd[sd['Radius']>8]
len(hhard[hhard['Deadend']==False])/len(hhard)
len(shard[shard['Deadend']==False])/len(shard)
sd['Radius'].value_counts().tolist()
sd['Radius'].value_counts()
sd['Radius'].unique()
sd_freq = sd['Radius'].value_counts()
hd_new = pd.DataFrame(columns=hd.columns.tolist())
for i in sd['Radius'].unique():
# print(i,sd_freq[i])
# print(sd_freq[i],temp)
df = hd[hd['Radius']==i].copy()
df.index = range(len(df.index))
to_drop = np.arange(sd_freq[i],len(df)).tolist()
print(len(df),len(to_drop))
df = df.drop(df.index[to_drop])
hd_new=pd.concat([hd_new,df],axis=0)
print(hd_new['Radius'].value_counts())
hd_new.head()
hd_new.shape
sd.shape
hd.index[3]
hr = hd_new['Radius'].value_counts(False)
sr = sradius.value_counts(False)
sr.name = 'Static Head Design'
hr.name = 'Rotational Head Design'
trails = pd.concat([hr,sr],axis=1)
plt.figure()
trails.plot(kind='bar')
plt.title("Destribution of Labyrinth Sizes")
plt.ylabel("Number of Trials")
plt.xlabel("Labyrinth Radius [Unit Length]")
plt.show()
series =[]
index = []
for i,df in enumerate([hd_new,sd]):
blocks = df['Blocks'][1]
radius = df['Radius'][1]
df = df[df['Radius'] >= 10]
print(len(df))
# df = df[df['Radius'] > 3]
f = len(df[df['Deadend']==True])
s = len(df[df['Deadend']==False])
tot = s+f
t_frac = round(len(df[df['Timeout']==True])/tot,3)
s_frac = round(s/tot,3)
f_frac = round(f/tot,3)
series.append([s_frac-t_frac,f_frac,t_frac])
if df['Turned Head'].mean() > 1:
index.append('Turning Head '.format(radius,blocks))
else:
index.append('Static Head'.format(radius,blocks))
data = pd.DataFrame(series,columns=['Escaped','Hit Deadend','Timed Out'],index = index)
print(data.head())
plt.figure()
data.plot(kind='bar')
# plt.xlabel('Outcome')
plt.ylabel('Probability of Outcome')
plt.title('Labyrith Simulation: Overall Performance')
plt.show()
pie_data = data['Escaped'].copy()
old_sum = pie_data.sum()
for i in range(len(pie_data)):
pie_data[i]=pie_data[i]/old_sum
# pie_data['Static Head']=pie_data['Static Head']/pie_data.sum()
# pie_data['Turning Head']=pie_data['Turning Head']/pie_data.sum()
pie_data.name = ""
pie_data.head()
plt.figure()
pie_data.plot(kind='pie')
plt.title('Relative Successrate in large scale Labyrinth')
plt.show()
scoll.head()
scoll[scoll['Collided']==True].describe()
len(scoll[scoll['Collided']==True])/len(scoll)
series =[]
index = []
radii = scoll['Radius'].unique().tolist()
radii.sort()
for i in radii:
print(i)
scoll_s = scoll[scoll['Radius']==i]
value = len(scoll_s[scoll_s['Collided']==True])/len(scoll_s)
index.append(str(i))
series.append((1-value,value))
print(index)
print(len(series))
total = len(scoll[scoll['Collided']==True])/len(scoll)
series.append((1-total,total))
index.append("Total")
scoll_data = pd.DataFrame(series,columns=['Collision Free','Collided'],index = index)
print(scoll_data.head())
plt.figure()
scoll_data.plot(kind='bar')
plt.ylabel("Probability of Event")
plt.xlabel("Labyrinth Radius [meter]")
plt.title("Collisions for Obstacle Density of 0.5 n = 1000")
plt.show()
scoll['Radius'].unique().tolist()
len(scoll[scoll['Collided']==True])/len(scoll)
###Output
_____no_output_____ |
Versions/PyCitySchools-SchoolsSumComplete.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import os
from matplotlib import pyplot as plt
#import numpy as np
# File to Load (Remember to Change These)
# school_data_to_load = "Resources/schools_complete.csv"
# student_data_to_load = "Resources/students_complete.csv"
schools_data = "Resources/schools_complete.csv"
students_data = "Resources/students_complete.csv"
# Read the data file with the pandas library, Source File to Read
# Not every CSV requires an encoding, but be aware this can come up
# schools_data_df=pd.read_csv('resources/schools_complete.csv')
schools_data_df=pd.read_csv(schools_data, encoding="ISO-8859-1")
# display the header
# schools_data_df.head()
# verify counts - clean the data
# schools_data_df.count()
# display the contents of the data frame : Schools_data.csv
schools_data_df
# Read the data file with the pandas library, Source File to Read
students_data_df=pd.read_csv(students_data, encoding='iso-8859-1')
# display the header
# students_data_df.head()
# verify counts - clean the data
# students_data_df.count()
# display the contents of the data frame : Students_data.csv
students_data_df
# Combine the data into a single dataset.
# school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
pycityschools_df=pd.merge(schools_data_df,students_data_df,how='left', on=['school_name','school_name'])
# display the contents of the data frame
pycityschools_df
#Save remade data frame to a new csv file, pycityschools_df to /output/pycityschools_combined.csv
pycityschools_df.to_csv('Output/pycityschools_combined.csv', encoding="utf-8", index="true",header="true")
# verify counts - clean the data
pycityschools_df.count()
# Display a statistical overview of the data frame
students_data_df.describe()
# Display data types
pycityschools_df.dtypes
# Collecting a list of all columns within the DataFrame
pycityschools_df.columns
###Output
_____no_output_____
###Markdown
District Summary* Calculate the total number of schools* Calculate the total number of students* Calculate the total budget* Calculate the average math score * Calculate the average reading score* Calculate the percentage of students with a passing math score (70 or greater)* Calculate the percentage of students with a passing reading score (70 or greater)* Calculate the percentage of students who passed math **and** reading (% Overall Passing)* Create a dataframe to hold the above results* Optional: give the displayed data cleaner formatting
###Code
# Calculate the number of unique schools in the DataFrame
school_total = len(pycityschools_df["school_name"].unique())
school_total
# Calculate the number of unique students in the DataFrame
student_total = pycityschools_df["student_name"].count()
student_total
# Calculate the total budget in the DataFrame
budget_total = schools_data_df["budget"].sum()
budget_total
# Calculate the average math score in the DataFrame
average_math_score=round(pycityschools_df["math_score"].mean(), 6)
average_math_score
# Calculate the average reading score in the DataFrame
average_reading_score=round(pycityschools_df["reading_score"].mean(), 5)
average_reading_score
# Students with passing math (greater that 70%)
passing_math_df=students_data_df.loc[pycityschools_df['math_score']>=70, :]
passing_math_df.head()
# Calculate the percentage passing math in the DataFrame - #"% Passing Math":[percentage_passing_math]
students_passing_math=(passing_math_df['Student ID'].count()/student_total)*100
students_passing_math_total=round(students_passing_math, 6)
students_passing_math_total
# Students with passing reading (greater that 70%)
passing_reading_df=students_data_df.loc[students_data_df['reading_score']>=70, :]
passing_reading_df.head()
# Calculate the percentage passing reading in the DataFrame - #"% Passing Reading":[percentage_passing_reading]
students_passing_reading=(passing_reading_df["Student ID"].count()/student_total)*100
students_passing_reading_total=round(students_passing_reading, 6)
students_passing_reading_total
# Create data frames for math and reading, then merge the two to determine % passing overall - #"% Overall Passing":[percentage_passing_overall]
percent_overall_passing=(students_passing_math_total+students_passing_reading_total)/2
percent_overall_passing
#total_student_count=students_data_df['student_name'].count()
#percent_overall_passing=students_data_df[(students_data_df.loc['math_score']>=70) & (students_data_df.loc['reading_score']>=70)]['student_name'].count()/student_total
#percent_overall_passing
#CHECK THIS VALUE... ???
# Creating a summary DataFrame using above values
district_summary_df=pd.DataFrame({"Total Schools": [school_total],
"Total Students": [student_total],
"Total Budget": [budget_total],
"Average Math Score": [average_math_score],
"Average Reading Score": [average_reading_score],
"% Passing Math": [students_passing_math],
"% Passing Reading": [students_passing_reading],
"% Overall Passing Rate": [percent_overall_passing],
})
district_summary_df
# Use Map to format all the columns
# district_summary_df=district_summary_df["Total Budget"].map("${:,}".format)
# district_summary_df.head()
district_summary_df.style.format({"Total Budget": "${:,.2f}"})
#"Average Reading Score": "{:.1f}",
#"Average Math Score": "{:.1f}",
#"% Passing Math": "{:.1%}",
#"% Passing Reading": "{:.0%}",
#"% Overall Passing Rate": "{:.1%}"})
# # # Groupby students passing - get total students for each group and add total...
# # # percentage_passing_overall=["percentage_passing_math"+"Percentage_passing_reading"].sum()
# # # Calculate the number of unique authors in the DataFrame
# # # district_summary_df=pycityschools_df({"Schools Count": [school_count],"Student Count": student_count})
grouped_passing_math_df = passing_math_df.groupby(['Student ID'])
grouped_passing_math_df.head()
grouped_passing_reading_df = passing_reading_df.groupby(['Student ID'])
grouped_passing_reading_df.head()
converted_pycityschools2_df=pycityschools_df.copy()
converted_pycityschools2_df['Student ID']=converted_pycityschools2_df.loc[:, 'Student ID'].astype(float)
# display the contents of the data frame
converted_pycityschools2_df
#converted_pycityschools2_df=pd.merge(grouped_passing_math_df,grouped_passing_reading_df,how='inner', on=['Student ID','Student ID'])
# display the contents of the data frame
#converted_pycityschools2_df
###Output
_____no_output_____
###Markdown
School Summary * Create an overview table that summarizes key metrics about each school, including: * School Name * School Type * Total Students * Total School Budget * Per Student Budget * Average Math Score * Average Reading Score * % Passing Math * % Passing Reading * % Overall Passing (The percentage of students that passed math **and** reading.) * Create a dataframe to hold the above results
###Code
# Finding how many students each school has
# school_count = len(pycityschools_df["school_name"].unique())
pycityschools_df['school_name'].value_counts()
# Finding how many schools exist in the list
schools_unique_df=pycityschools_df["school_name"].unique()
schools_unique_df
# Finding the names of the students
students_unique_df=pycityschools_df['student_name'].unique()
students_unique_df
# Calculate the number of unique students in the DataFrame
# student_count = len(pycityschools_df["student_name"].unique())
student_counts = pycityschools_df["student_name"].value_counts()
student_counts
# Average by School Math Score
school_average_math_score = students_data_df.groupby('school_name')['math_score']. mean()
school_average_math_score
# Average by School Reading Score
school_average_reading_score = students_data_df.groupby('school_name')['reading_score']. mean()
school_average_reading_score
# Create dataframe
school_average_math_score_df = pd.DataFrame({'school_name':school_average_math_score.index,'school_average_math_score':school_average_math_score.values})
school_average_math_score_df
# Create dataframe
school_average_reading_score_df = pd.DataFrame({'school_name':school_average_reading_score.index, 'school_average_reading_score':school_average_reading_score.values})
school_average_reading_score_df
# Count by school of students passing Math
school_count_passing_math = passing_math_df.groupby('school_name')['math_score'].count()
school_count_passing_math
school_count_passing_math_df = pd.DataFrame({'school_name':school_count_passing_math.index ,'school_count_passing_math':school_count_passing_math.values})
school_count_passing_math_df
# Count by school of students passing Reading
school_count_passing_reading = passing_reading_df.groupby('school_name')['reading_score'].count()
school_count_passing_reading
school_count_passing_reading_df = pd.DataFrame({'school_name':school_count_passing_reading.index ,'school_count_passing_reading':school_count_passing_math.values})
school_count_passing_reading_df
# Join Schools with their Average Math Score
schools_join_average_math_df = pd.merge(schools_data_df, school_average_math_score_df, on="school_name", how="outer")
schools_join_average_math_df.head()
# Join Schools with their Average Reading Score
schools_join_average_reading_df = pd.merge(schools_join_average_math_df, school_average_reading_score_df, on="school_name", how="outer")
schools_join_average_reading_df.head()
# Join Schools Count of Students Passing Math
schools_join_count_math_df = pd.merge(schools_join_average_reading_df, school_count_passing_math_df, on="school_name", how="outer")
schools_join_count_math_df.head()
# Join Schools Count of Students Passing Reading
schools_join_count_reading_df = pd.merge(schools_join_count_math_df, school_count_passing_reading_df, on="school_name", how="outer")
schools_join_count_reading_df.head()
# Naming Convention Change - Duplicate and rename merged datasets to new name
# (schools_join_count_reading_df to schools_merged_df)
schools_merged_df = schools_join_count_reading_df
schools_merged_df
# By School Calculate Percent of Students Passing Math
schools_merged_df['percent_passing_math'] = (schools_merged_df["school_count_passing_math"]/ schools_merged_df["size"]) * 100
schools_merged_df
# By School Calculate Percent of Students Passing Reading
schools_merged_df['percent_passing_reading'] = (schools_merged_df["school_count_passing_reading"]/ schools_merged_df["size"]) * 100
schools_merged_df
# By School Calculate Overall Passing Rate
schools_merged_df['overall_passing_rate'] = (schools_merged_df["percent_passing_math"] + schools_merged_df["percent_passing_math"])/ 2
schools_merged_df.head()
# By School Calculate Per Student Budget
schools_merged_df['per_student_budget'] = (schools_merged_df["budget"] / schools_merged_df["size"])
schools_merged_df.head()
# Naming Convention Change - Duplicate and rename merged datasets to new name
# (schools_merged_df to schools_summary_raw_df)
schools_summary_raw_df = schools_merged_df
schools_summary_raw_df
# Review data counts
schools_summary_raw_df.count()
# Rename columns in dataframe
schools_header_rename_df = schools_summary_raw_df.rename(columns={"school_name":"School Name",
"type":"School Type",
"size": "Total Students",
"budget": "Total School Budget",
"per_student_budget": "Per Student Budget",
"average_math_score": "Average Math Score",
"average_reading_score": "Average Reading Score",
"percent_passing_math": "% Passing Math",
"percent_passing_reading": "% Passing Reading",
"overall_passing_rate": "% Overall Passing Rate"})
schools_header_rename_df.head()
# Reformate columns in dataframe
schools_header_rename_df["Total School Budget"] = schools_header_rename_df["Total School Budget"].map("${:,.2f}".format)
schools_header_rename_df["Per Student Budget"] = schools_header_rename_df["Per Student Budget"].map("${:,.2f}".format)
schools_header_rename_df.head()
# Naming Convention Change - Duplicate and rename merged datasets to new name
# (schools_header_rename_df to schools_summary_df)
schools_summary_df = schools_header_rename_df
schools_summary_df
# Create the Final Dataframe - presentation
#schools_summary_df=pd.DataFrame({"School Name": [school_name],
"School Type": [type],
"Total Students": [size],
"Total School Budget": [budget],
"Per Student Budget": [per_student_budget],
"Average Math Score": [average_math_score],
"Average Reading Score": [average_reading_score],
"% Passing Math": [percent_passing_math],
"% Passing Reading": [percent_passing_reading],
"% Overall Passing Rate": [overall_passing_rate],
})
#schools_summary_df
# Create the Final Dataframe - presentation
#schools_summary_df=pd.DataFrame({"School Name": [school_name],
"School Type": [type],
"Total Students": [size],
"Total School Budget": [budget],
"Per Student Budget": [per_student_budget],
"Average Math Score": [average_math_score],
"Average Reading Score": [average_reading_score],
"% Passing Math": [percent_passing_math],
"% Passing Reading": [percent_passing_reading],
"% Overall Passing Rate": [overall_passing_rate],
})
#schools_summary_df
#Creating final dataframe
#school_summary_df = school_rename_df.loc[:, ['School Name', 'School Type', 'Total Students','Total School Budget','Per Student Budget','Average Reading Score','Average Math Score','% Passing Math','% Passing Reading','% Overall Passing Rate']]
#school_summary_df
###Output
_____no_output_____
###Markdown
Top Performing Schools (By % Overall Passing) * Sort and display the top five performing schools by % overall passing.
###Code
# Top Performing Schools By Pass Rate
top_performing_schools_df = schools_summary_df.sort_values("% Overall Passing Rate",ascending=False)
top_performing_schools_df
# Top Performing Schools By Pass Rate - TOP FIVE
top_performing_schools_new_index = top_performing_schools_df.reset_index(drop=True)
top_performing_schools_new_index.head()
###Output
_____no_output_____
###Markdown
Bottom Performing Schools (By % Overall Passing) * Sort and display the five worst-performing schools by % overall passing.
###Code
bottom_performing_schools_df = schools_summary_df.sort_values("% Overall Passing Rate",ascending=True)
bottom_performing_schools_df
bottom_performing_schools_new_index = bottom_performing_schools_df.reset_index(drop=True)
bottom_performing_schools_new_index.head()
###Output
_____no_output_____ |
DataCamp/Word Frequency in Moby Dick/notebook.ipynb | ###Markdown
1. Tools for text processing What are the most frequent words in Herman Melville's novel, Moby Dick, and how often do they occur?In this notebook, we'll scrape the novel Moby Dick from the website Project Gutenberg (which contains a large corpus of books) using the Python package requests. Then we'll extract words from this web data using BeautifulSoup. Finally, we'll dive into analyzing the distribution of words using the Natural Language ToolKit (nltk). The Data Science pipeline we'll build in this notebook can be used to visualize the word frequency distributions of any novel that you can find on Project Gutenberg. The natural language processing tools used here apply to much of the data that data scientists encounter as a vast proportion of the world's data is unstructured data and includes a great deal of text.Let's start by loading in the three main Python packages we are going to use.
###Code
# Importing requests, BeautifulSoup and nltk
from bs4 import BeautifulSoup
import requests
import nltk
###Output
_____no_output_____
###Markdown
2. Request Moby DickTo analyze Moby Dick, we need to get the contents of Moby Dick from somewhere. Luckily, the text is freely available online at Project Gutenberg as an HTML file: https://www.gutenberg.org/files/2701/2701-h/2701-h.htm .Note that HTML stands for Hypertext Markup Language and is the standard markup language for the web.To fetch the HTML file with Moby Dick we're going to use the request package to make a GET request for the website, which means we're getting data from it. This is what you're doing through a browser when visiting a webpage, but now we're getting the requested page directly into Python instead.
###Code
# Getting the Moby Dick HTML
url = 'https://s3.amazonaws.com/assets.datacamp.com/production/project_147/datasets/2701-h.htm'
r = requests.get(url)
# Setting the correct text encoding of the HTML page
r.encoding = 'utf-8'
# Extracting the HTML from the request object
html = r.text
# Printing the first 2000 characters in html
html[:2000]
###Output
_____no_output_____
###Markdown
3. Get the text from the HTMLThis HTML is not quite what we want. However, it does contain what we want: the text of Moby Dick. What we need to do now is wrangle this HTML to extract the text of the novel. For this we'll use the package BeautifulSoup.Firstly, a word on the name of the package: Beautiful Soup? In web development, the term "tag soup" refers to structurally or syntactically incorrect HTML code written for a web page. What Beautiful Soup does best is to make tag soup beautiful again and to extract information from it with ease! In fact, the main object created and queried when using this package is called BeautifulSoup. After creating the soup, we can use its .get_text() method to extract the text.
###Code
# Creating a BeautifulSoup object from the HTML
soup = BeautifulSoup(html)
# Getting the text out of the soup
text = soup.get_text()
# Printing out text between characters 32000 and 34000
text[32000:34000]
###Output
_____no_output_____
###Markdown
4. Extract the wordsWe now have the text of the novel! There is some unwanted stuff at the start and some unwanted stuff at the end. We could remove it, but this content is so much smaller in amount than the text of Moby Dick that, to a first approximation, it is okay to leave it in.Now that we have the text of interest, it's time to count how many times each word appears, and for this we'll use nltk – the Natural Language Toolkit. We'll start by tokenizing the text, that is, remove everything that isn't a word (whitespace, punctuation, etc.) and then split the text into a list of words.
###Code
# Creating a tokenizer
tokenizer = nltk.tokenize.RegexpTokenizer('\w+')
# Tokenizing the text
tokens = tokenizer.tokenize(text)
# Printing out the first 8 words / tokens
tokens[:8]
###Output
_____no_output_____
###Markdown
5. Make the words lowercaseOK! We're nearly there. Note that in the above 'Or' has a capital 'O' and that in other places it may not, but both 'Or' and 'or' should be counted as the same word. For this reason, we should build a list of all words in Moby Dick in which all capital letters have been made lower case.
###Code
# A new list to hold the lowercased words
words = []
# Looping through the tokens and make them lower case
for word in tokens:
words.append(word.lower())
# Printing out the first 8 words / tokens
words[:8]
###Output
_____no_output_____
###Markdown
6. Load in stop wordsIt is common practice to remove words that appear a lot in the English language such as 'the', 'of' and 'a' because they're not so interesting. Such words are known as stop words. The package nltk includes a good list of stop words in English that we can use.
###Code
# Getting the English stop words from nltk
sw = nltk.corpus.stopwords.words('english')
# Printing out the first eight stop words
sw[:8]
###Output
_____no_output_____
###Markdown
7. Remove stop words in Moby DickWe now want to create a new list with all words in Moby Dick, except those that are stop words (that is, those words listed in sw). One way to get this list is to loop over all elements of words and add each word to a new list if they are not in sw.
###Code
# A new list to hold Moby Dick with No Stop words
words_ns = []
# Appending to words_ns all words that are in words but not in sw
for word in words:
if word not in sw:
words_ns.append(word)
# Printing the first 5 words_ns to check that stop words are gone
words_ns[:5]
###Output
_____no_output_____
###Markdown
8. We have the answerOur original question was: What are the most frequent words in Herman Melville's novel Moby Dick and how often do they occur?We are now ready to answer that! Let's create a word frequency distribution plot using nltk.
###Code
# This command display figures inline
%matplotlib inline
# Creating the word frequency distribution
freqdist = nltk.FreqDist(words_ns)
# Plotting the word frequency distribution
freqdist.plot(20)
###Output
_____no_output_____
###Markdown
9. The most common wordNice! The frequency distribution plot above is the answer to our question. The natural language processing skills we used in this notebook are also applicable to much of the data that Data Scientists encounter as the vast proportion of the world's data is unstructured data and includes a great deal of text. So, what word turned out to (not surprisingly) be the most common word in Moby Dick?
###Code
# What's the most common word in Moby Dick?
most_common_word = 'whale'
###Output
_____no_output_____ |
muki-batch.ipynb | ###Markdown
讀取 Miku
###Code
img_count = 0
def showimg(img):
muki_pr = np.zeros((500,500,3))
l =img.tolist()
count = 0
for x in range(500):
for y in range(500):
muki_pr[y][x] = l[count]
count += 1
plt.imshow(muki_pr)
def saveimg(fname,img):
muki_pr = np.zeros((500,500,3))
l =img.tolist()
count = 0
for x in range(500):
for y in range(500):
muki_pr[y][x] = l[count]
count += 1
plt.imsave(fname,muki_pr)
def read_muki():
img_data = np.random.randn(250000,1)
xy_data = []
import random
f = open('./muki.txt','rb')
count = 0
for line in f:
y,x,c = line.split()
xy_data.append([float(x),float(y)])
x = (float(x) )*100. + 250
y = (float(y) )*100. + 250
c = float(c)
img_data[count] = c
count = count + 1
return np.matrix(xy_data),img_data
xy_data,img_data = read_muki()
showimg(img_data)
print xy_data[:10]
print img_data[:10]
###Output
[[-2.5 -2.5 ]
[-2.49 -2.5 ]
[-2.48 -2.5 ]
[-2.47 -2.5 ]
[-2.46 -2.5 ]
[-2.45 -2.5 ]
[-2.44 -2.5 ]
[-2.43 -2.5 ]
[-2.42 -2.5 ]
[-2.41 -2.5 ]]
[[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]
[ 0.]]
###Markdown
Muki NN
###Code
batch_size = 500
hidden_size = 128
x = T.matrix(name='x',dtype='float32') # size =2
y = T.matrix(name='x',dtype='float32') # size =1
w1 = theano.shared(np.random.randn(hidden_size,2))
b1 = theano.shared(np.random.randn(hidden_size))
w2 = theano.shared(np.random.randn(hidden_size,hidden_size))
b2 = theano.shared(np.random.randn(hidden_size))
w3 = theano.shared(np.random.randn(1,hidden_size))
b3 = theano.shared(np.random.randn(1))
###Output
_____no_output_____
###Markdown
第一層
###Code
z1 = T.dot(w1,x) + b1.dimshuffle(0,'x')
a1 = 1/(1+T.exp(-z1))
fa1 = theano.function(inputs=[x],outputs=[a1],allow_input_downcast=True)
l1_o= fa1(np.random.randn(2,batch_size))
l1_o= fa1(xy_data[:500].T)
###Output
_____no_output_____
###Markdown
第二層
###Code
z2 = T.dot(w2,a1) + b2.dimshuffle(0,'x')
a2 = 1/(1+T.exp(-z2))
fa2 = theano.function(inputs=[x],outputs=[a2],allow_input_downcast=True)
l2_o = fa2(np.random.randn(2,batch_size))
l2_o= fa2(xy_data[:500].T)
print l2_o[0].shape
###Output
(128, 500)
###Markdown
第三層
###Code
z3 = T.dot(w3,a2) + b3.dimshuffle(0,'x')
a3 = 1/(1+T.exp(-z3))
fa3 = theano.function(inputs=[x],outputs=[a3],allow_input_downcast=True)
l3_o = fa3(np.random.randn(2,batch_size))
print l3_o[0].shape
###Output
(1, 500)
###Markdown
定義 Cost Function & Update Function
###Code
y_hat = T.matrix('reference',dtype='float32')
cost = T.sum((a3-y_hat)**2)/batch_size
dw1,dw2,dw3,db1,db2,db3 = T.grad(cost,[w1,w2,w3,b1,b2,b3])
def Myupdates(ps,gs):
from itertools import izip
r = 0.5
pu = [ (p,p-r*g) for p,g in izip(ps,gs) ]
return pu
train = theano.function(inputs=[x,y_hat],
outputs=[a3,cost],
updates=Myupdates([w1,w2,b1,b2,w3,b3],[dw1,dw2,db1,db2,dw3,db3]),
allow_input_downcast=True,)
###Output
_____no_output_____
###Markdown
Training
###Code
def training(xy_data,img_data):
for ii in range(1000):
for i in range(500):
start = i * 500
end = start + 500
img_predict,cost_predict = train(xy_data[start:end].T,img_data[start:end].T)
if ii % 5 == 0:
saveimg('./imgs/muki_'+ str(ii) +'.png', fa3(xy_data.T)[0].T)
print cost_predict,
training(xy_data,img_data)
###Output
1.95025705114e-06 2.17305620986e-06 1.68649787039e-06 1.25956621256e-06 8.63419856337e-07 5.36193522996e-07 3.2976588041e-07 2.27709812058e-07 1.71146762211e-07 1.29372600136e-07 9.70224315787e-08 7.19332561151e-08 5.33969605359e-08 3.94842035864e-08 2.93452456106e-08 2.2706561381e-08 2.77263643607e-08 1.30487686622e-08 8.1193881343e-09 5.54503369581e-09 3.94160746659e-09 3.10275060885e-09 5.92385737653e-09 9.13232364591e-09 4.58839133446e-09 3.17988513431e-09 2.76569571208e-09 2.25529108265e-09 1.7455524496e-09 1.31412305514e-09 1.41116310656e-09 1.31992483818e-09 8.81222642023e-10 7.35196671871e-10 6.32639109534e-10 5.32096145923e-10 4.43074278112e-10 3.74625767394e-10 3.21590162479e-10 2.73204757981e-10 2.35410635127e-10 1.99438092939e-10 1.67679160499e-10 1.42884842354e-10 1.24156837642e-10 1.09978463444e-10 9.91420297121e-11 9.13505727662e-11 8.60160060876e-11 1.10311324466e-10 8.93050013613e-11 9.53464509221e-11 8.93464127494e-11 8.68752848509e-11 8.80314791815e-11 9.09302438081e-11 9.43821751585e-11 1.006757707e-10 1.06115684556e-10 1.09694339866e-10 1.15184647821e-10 1.15649008284e-10 1.15678310065e-10 1.17818512101e-10 1.19401784286e-10 1.13258216983e-10 1.05631479552e-10 9.50369949039e-11 1.02879618396e-10 1.06795047806e-10 1.0124258255e-10 8.91088721864e-11 7.73957122434e-11 6.60683167607e-11 5.59853670848e-11 4.37812628393e-11 4.02549828525e-11 3.58036273314e-11 3.07817798521e-11 2.63314102086e-11 2.31080847455e-11 1.25327917149e-11 1.76673262779e-11 9.26497858183e-12 8.85621868407e-12 7.24487742639e-12 8.2896376938e-12 5.40520636064e-12 4.91300606204e-12 4.50769087509e-12 4.1458895534e-12 3.77746955242e-12 3.43940021978e-12 3.15018242923e-12 2.91026045085e-12 2.7097280334e-12 2.54152926608e-12 2.39996934458e-12 2.2811745914e-12 2.18206595837e-12 2.10328924908e-12 2.05381927416e-12 2.02094968856e-12 1.99903344736e-12 1.98470063999e-12 1.83290831246e-12 2.56865929558e-12 2.82790663396e-12 3.14376857904e-12 3.5734332868e-12 4.13327952673e-12 4.96873676006e-12 4.77497685854e-12 4.55949957169e-12 4.38245291551e-12 4.14678190289e-12 3.83399600819e-12 3.49639379699e-12 3.15853477661e-12 2.84224044911e-12 2.57621326874e-12 2.36725091608e-12 2.1477914967e-12 1.89143033597e-12 1.63907855757e-12 1.44764739085e-12 1.28687490713e-12 1.10856326012e-12 8.78323198627e-13 8.5617207778e-13 7.49165340071e-13 7.62113992184e-13 6.71634466925e-13 6.11920998971e-13 5.55016254135e-13 5.41213869246e-13 5.369961195e-13 3.33006300972e-13 2.93327339163e-13 2.42148416716e-13 2.19112416678e-13 1.98699548152e-13 1.77762147213e-13 1.58796552585e-13 1.42384297288e-13 1.27821697072e-13 1.14340938442e-13 1.0302356366e-13 9.29591257081e-14 8.11500792161e-14 7.36262467113e-14 6.32214786944e-14 5.78258738786e-14 5.01781362873e-14 4.76052901356e-14 3.98582727245e-14 3.61235260179e-14 3.20668650204e-14 2.85090273186e-14 2.55343378458e-14 2.29935215202e-14 2.0535075335e-14 1.89042073371e-14 1.85258971057e-14 1.8223441451e-14 1.75695925312e-14 1.72878720488e-14 1.55598893929e-14 1.37061647661e-14 1.13736663246e-14 9.97301773964e-15 9.0176895208e-15 8.29876980108e-15 8.27241891133e-15 8.00205146465e-15 8.67978050338e-15 9.21804349662e-15 9.82673029994e-15 1.02159440686e-14 1.05340290615e-14 1.06215945706e-14 1.1244842184e-14 1.20118924553e-14 1.31531253478e-14 1.4285059418e-14 1.52560817768e-14 1.59637127817e-14 1.5880109304e-14 1.73540672413e-14 1.79297049892e-14 1.91041866105e-14 1.96645474826e-14 2.03584485069e-14 2.09366862685e-14 2.15542648634e-14 2.19433228871e-14 2.08956559345e-14 2.16223153821e-14 2.20247023117e-14 2.23498176894e-14 2.26798967022e-14 2.2799654732e-14 2.22682823301e-14 2.20722695491e-14 2.27837271196e-14 2.26157101515e-14 2.33114383339e-14 2.36668970315e-14 2.4045032276e-14 2.39650848953e-14 2.55767567469e-14 2.58763139021e-14 2.64399160151e-14 2.68671562108e-14 2.72740807888e-14 2.76039814847e-14 2.78694175546e-14 2.80678437128e-14 2.81907370351e-14 2.82410033656e-14 2.81471913977e-14 2.7920552562e-14 2.77142249832e-14 2.70694844939e-14 2.50095192072e-14 2.49001348445e-14 2.41766144819e-14 2.36571259004e-14 2.26059671091e-14 2.36268503272e-14 2.34327709756e-14 2.32659935645e-14 2.34411588463e-14 2.33830958388e-14 2.35924852403e-14 2.35964404943e-14 2.34162408635e-14 2.31809225321e-14 2.28353876832e-14 2.24236241753e-14 2.19599816091e-14 2.14381490951e-14 2.08366344375e-14 2.03617186938e-14 1.91000612502e-14 1.91467379212e-14 1.86294394827e-14 1.86094717453e-14 1.8190422199e-14 1.74727693137e-14 1.89880270929e-14 1.78260128536e-14 1.90147357033e-14 1.80687969569e-14 1.89512911629e-14 1.80167905466e-14 1.83264748167e-14 1.73108999667e-14 1.72774677776e-14 1.67812320871e-14 1.67488708502e-14 1.64810616945e-14 1.63320780995e-14 1.60563189048e-14 1.58620854592e-14 1.55987331349e-14 1.53971208361e-14 1.51605353554e-14 1.49693439852e-14 1.47612249343e-14 1.45858636098e-14 1.44031144017e-14 1.42437933616e-14 1.40819560817e-14 1.39366424207e-14 1.37910009643e-14 1.3656693622e-14 1.3522769063e-14 1.33963422808e-14 1.32704071571e-14 1.31493784446e-14 1.30289372805e-14 1.2911910053e-14 1.27957328865e-14 1.26822279404e-14 1.25698797024e-14 1.24598216131e-14 1.23511199949e-14 1.22444316457e-14 1.21391348269e-14 1.20355648162e-14 1.19332628215e-14 1.18323494502e-14 1.17324457474e-14 1.16335263891e-14 1.15352455251e-14 1.14374762136e-14 1.13398825389e-14 1.12422706458e-14 1.11443142624e-14 1.10457951615e-14 1.09464299369e-14 1.08460498381e-14 1.07445007965e-14 1.06417910738e-14 1.0538034644e-14 1.04335438959e-14 1.03287075848e-14 1.02238547575e-14 1.01187966555e-14 1.00122199654e-14 9.90105827829e-15 9.78265600385e-15 9.65986164949e-15 9.53756004375e-15 9.4172503172e-15 9.29608630975e-15 9.17128314674e-15 9.04576890869e-15 8.9230605582e-15 8.80590833816e-15 8.69497768644e-15 8.5908914885e-15 8.49331646154e-15 8.40252988955e-15 8.31810873181e-15 8.24040534815e-15 8.16904182645e-15 8.10452579244e-15 8.04650305396e-15 7.99566978071e-15 7.9516060453e-15 7.91519722515e-15 7.88574104179e-15 7.86426256164e-15 7.84935430812e-15 7.84210507842e-15 7.83963979855e-15 7.84309336017e-15 7.84701055847e-15 7.85381970188e-15 7.85553158535e-15 7.86034434464e-15 7.85585510958e-15 7.86042043964e-15 7.84745555371e-15 7.85736768886e-15 7.8268354216e-15 7.85688531584e-15 7.7795355103e-15 7.87992206881e-15 7.6599910083e-15 7.87584045056e-15 7.35223508545e-15 8.12396387821e-15 6.71139602254e-15 1.01394686524e-14 6.14490295894e-15 8.44599271496e-15 6.7624466742e-15 1.03189677135e-14 5.97007859707e-15 8.39301139192e-15 6.68637000848e-15 1.13441721613e-14 6.05963312124e-15 8.11571577072e-15 7.66952460659e-15 9.12447865413e-15 6.56989429567e-15 1.08988942619e-14 6.30767199593e-15 9.78763897165e-15 6.56200309353e-15 1.08071118349e-14 6.45583913672e-15 1.0315729377e-14 6.7544724585e-15 1.07139633946e-14 6.45609365854e-15 1.04276781295e-14 6.40317053458e-15 1.06244623785e-14 6.28862150753e-15 1.02121724629e-14 6.22931716014e-15 1.0631448154e-14 6.12055864863e-15 9.87938743706e-15 6.09554699183e-15 1.16393110355e-14 6.1610713843e-15 7.19079529763e-15 1.36057806196e-14 4.94135043407e-15 1.69119375505e-14 7.07242351335e-15 6.39465846591e-15 7.73653066817e-15 1.19098614696e-14 5.26820636625e-15 1.62017703874e-14 9.55585689106e-15 6.04058303925e-15 6.89960142585e-15 1.44860163044e-14 8.25777170616e-15 6.81242118934e-15 1.45441843716e-14 9.11038176271e-15 7.58531168187e-15 6.97428727591e-15 1.44403242667e-14 9.00265338706e-15 7.79561242953e-15 6.39449865188e-15 1.24369251215e-14 7.51355940381e-15 5.66200722799e-15 1.21409705706e-14 8.00300087746e-15 7.39655662539e-15 5.67815882012e-15 5.52079450138e-15 9.98093696534e-15 4.39249367615e-15 9.17508802606e-15 6.03073708388e-15 4.54000093168e-15 4.78120241903e-15 8.83847901912e-15 5.48217394108e-15 4.61029248881e-15 3.77257002408e-15 7.79676176898e-15 4.75218454592e-15 3.58920267031e-15 3.79759426458e-15 6.65840852925e-15 3.44204161234e-15 3.03721091058e-15 4.71713721863e-15 2.76075237308e-15 4.13096527867e-15 2.7630847221e-15 5.16824771652e-15 3.68730145207e-15 3.92052080709e-15 4.17061496706e-15 4.1242678537e-15 4.02543349441e-15 4.02791059199e-15 4.03003790218e-15 4.07630774042e-15 4.11050717478e-15 4.16209480039e-15 4.19712093218e-15 4.23612098503e-15 4.25926327438e-15 4.28150047928e-15 4.29076910789e-15 4.29792258556e-15 4.29513793839e-15 4.29022052648e-15 4.27778078422e-15 4.2634403341e-15 4.24329436112e-15 4.22145934572e-15 4.19498251468e-15 4.166973305e-15 4.13511056886e-15 4.10184901791e-15 4.06529876084e-15 4.02751116118e-15 3.98690079733e-15 3.9453028801e-15 3.90134986927e-15 3.85680004386e-15 3.81041975169e-15 3.76397776137e-15 3.71625935949e-15 3.66908673998e-15 3.62112116123e-15 3.57428295527e-15 3.52696834445e-15 3.48129160406e-15 3.43524496666e-15 3.39130031367e-15 3.34686871272e-15 3.30502215824e-15 3.26232161491e-15 3.22279960951e-15 3.18174070648e-15 3.14469670284e-15 3.10498853141e-15 3.07058406169e-15 3.03171027673e-15 3.00022903457e-15 2.96137334495e-15 2.93339397125e-15 2.89326819164e-15 2.86994158057e-15 2.82643467471e-15 2.80993197578e-15 2.75947019201e-15 2.75357002146e-15 2.69030631455e-15 2.70020018954e-15 2.61732415825e-15 2.64402789259e-15 2.54821528382e-15 2.57727798597e-15 2.50023784747e-15 2.52382279109e-15 2.43248947631e-15 2.45426054292e-15 2.39138188759e-15 2.41085550751e-15 2.30776149011e-15 2.31745620326e-15 2.31203271604e-15 2.30005399635e-15 2.20757920412e-15 2.23368279036e-15 2.15554836007e-15 2.16587286152e-15 2.13411929988e-15 2.1468753328e-15 2.03020744878e-15 1.99098629907e-15 2.08286629855e-15 1.99463764837e-15 2.00555922663e-15 1.93257949197e-15 1.92194854954e-15 1.94925323145e-15 1.90765850705e-15 1.88335418778e-15 1.85957820665e-15 1.84244464589e-15 1.81641696036e-15 1.80925517748e-15 1.76821308677e-15 1.78287391611e-15 1.69982222557e-15 1.67244167285e-15 1.74779763694e-15 1.65200061881e-15 1.60504587384e-15 1.67629435968e-15 1.62937873813e-15 1.63823273135e-15 1.5399342967e-15 1.36046031949e-15 7.09636618315e-16 8.71699717001e-16 8.90736209066e-16 9.45008158813e-16 7.49180519109e-16 6.12428856229e-16 5.56684801216e-16 5.47780245362e-16 4.61053199638e-16 4.17109781982e-16 3.80298931872e-16 3.51444528515e-16 3.28541000905e-16 3.09984432828e-16 2.94984327796e-16 2.82800055193e-16 2.72905139889e-16 2.64871140829e-16 2.58354819377e-16 2.53080373855e-16 2.4882711804e-16 2.45420363748e-16 2.42723263408e-16 2.4062958164e-16 2.3905717081e-16 2.37942206498e-16 2.37234262805e-16 2.36892263926e-16 2.36881304522e-16 2.3717026412e-16 2.37730096314e-16 2.38532676672e-16 2.39550084369e-16 2.40754211097e-16 2.42116619897e-16 2.43608573399e-16 2.45201197687e-16 2.46865726792e-16 2.48573794691e-16 2.50297749037e-16 2.52010966359e-16 2.53688142937e-16 2.55305549732e-16 2.56841247857e-16 2.58275252444e-16 2.59589651028e-16 2.60768673169e-16 2.61798722243e-16 2.62668370641e-16 2.63368328349e-16 2.63891388633e-16 2.64232359909e-16 2.64387985341e-16 2.64356854581e-16 2.64139313889e-16 2.63737368465e-16 2.63154581219e-16 2.62395964474e-16 2.61467859101e-16 2.60377792209e-16 2.59134313266e-16 2.57746792331e-16 2.56225186199e-16 2.5457977251e-16 2.52820864464e-16 2.509585299e-16 2.49002341448e-16 2.46961182268e-16 2.44843125793e-16 2.42655396731e-16 2.40404403147e-16 2.38095824501e-16 2.35734734662e-16 2.33325736933e-16 2.3087309792e-16 2.283808646e-16 2.25852959881e-16 2.23293253547e-16 2.20705610148e-16 2.18093917548e-16 2.15462099075e-16 2.12814116647e-16 2.10153965709e-16 2.07485669459e-16 2.04813270937e-16 2.02140828526e-16 1.99472413523e-16 1.9681211388e-16 1.94164042611e-16 1.91532354615e-16 1.8892127272e-16 1.86335129352e-16 1.83778426771e-16 1.81255929791e-16 1.78772810127e-16 1.76334877001e-16 1.73948966666e-16 1.71623630653e-16 1.69370431568e-16 1.67206559406e-16 1.65160612322e-16 1.63286993345e-16 1.61708638147e-16 1.60787071451e-16 1.62515905362e-16 1.49258535435e-16 1.32319012067e-16 1.3051626386e-16 1.27020905742e-16 1.2259435759e-16 1.18510673703e-16 1.15036701524e-16 1.1220466502e-16 1.0996220513e-16 1.0821031637e-16 1.06846947622e-16 1.05786836427e-16 1.04965693811e-16 1.04337472748e-16 1.03869730639e-16 1.03539507413e-16 1.03330262162e-16 1.03229614314e-16 1.03227553628e-16 1.03314970457e-16 1.03482506124e-16 1.03719791565e-16 1.04015113197e-16 1.04355482922e-16 1.04727022222e-16 1.05115526978e-16 1.05507071608e-16 1.05888536995e-16 1.06247996263e-16 1.06574945927e-16 1.06860402645e-16 1.07096901165e-16 1.07278430155e-16 1.07400332465e-16 1.07459182293e-16 1.07452666503e-16 1.07379454749e-16 1.07239083546e-16 1.0703184724e-16 1.06758698698e-16 1.06421161947e-16 1.06021250982e-16 1.05561396369e-16 1.05044375819e-16 1.04473247721e-16 1.03851286954e-16 1.0318192217e-16 1.02468676676e-16 1.01715112029e-16 1.00924777001e-16 1.00101163466e-16 9.92476693457e-17 9.83675727556e-17 9.746401724e-17 9.65400118615e-17 9.55984452125e-17 9.46421267425e-17 9.36738479003e-17 9.2696485942e-17 9.17131618036e-17 9.07274827216e-17 8.97438947721e-17 8.8768129587e-17 8.78075846617e-17 8.68712343071e-17 8.59685457648e-17 8.51073567927e-17 8.42917195687e-17 8.35211266834e-17 8.27914721311e-17 8.20968309098e-17 8.14310220613e-17 8.07885334365e-17 8.01648722485e-17 7.95565642699e-17 7.89609946541e-17 7.83762097206e-17 7.78007355516e-17 7.72334317704e-17 7.66733808306e-17 7.61198068203e-17 7.55720194023e-17 7.50293754188e-17 7.44912563551e-17 7.39570569405e-17 7.3426181985e-17 7.2898048724e-17 7.23720951155e-17 7.18477869456e-17 7.13246281102e-17 7.08021693138e-17 7.02800155705e-17 6.97578328614e-17 6.92353525429e-17 6.87123744972e-17 6.81887680424e-17 6.76644719206e-17 6.71394926483e-17 6.6613901522e-17 6.60878305321e-17 6.55614678972e-17 6.50350518166e-17 6.45088646699e-17 6.39832261453e-17 6.34584862209e-17 6.29350181754e-17 6.24132113158e-17 6.18934643951e-17 6.13761789572e-17 6.08617530918e-17 6.03505759932e-17 5.98430230184e-17 5.93394510162e-17 5.88401948041e-17 5.83455641332e-17 5.78558412185e-17 5.73712788328e-17 5.68920991628e-17 5.64184930281e-17 5.59506194262e-17 5.54886058636e-17 5.50325485511e-17 5.45825131615e-17 5.41385358075e-17 5.37006240942e-17 5.32687585914e-17 5.28428940631e-17 5.24229611994e-17 5.20088686698e-17 5.16005050802e-17 5.11977414266e-17 5.08004336697e-17 5.0408425842e-17 5.00215533819e-17 4.96396469547e-17 4.92625361076e-17 4.8890053376e-17 4.85220379193e-17 4.81583395014e-17 4.77988213109e-17 4.74433626882e-17 4.70918608077e-17 4.67442320202e-17 4.64004124172e-17 4.60603577508e-17 4.57240429612e-17 4.53914613802e-17 4.50626234796e-17 4.47375551697e-17 4.44162957485e-17 4.40988950524e-17 4.37854098428e-17 4.3475899294e-17 4.31704195918e-17 4.28690172906e-17 4.25717218886e-17 4.22785381259e-17 4.19894390704e-17 4.17043605866e-17 4.14231998616e-17 4.11458189992e-17 4.08720541572e-17 4.06017293243e-17 4.03346710475e-17 4.00707216594e-17 3.98097494242e-17 3.95516553941e-17 3.92963766819e-17 3.90438822318e-17 3.879415467e-17 3.85471581951e-17 3.83027993927e-17 3.80608926099e-17 3.78211393236e-17 3.75831234781e-17 3.73463214225e-17 3.71101214411e-17 3.68738489168e-17 3.66367935974e-17 3.63982362056e-17 3.61574728264e-17 3.59138356467e-17 3.56667091886e-17 3.54155422023e-17 3.5159855043e-17 3.48992432725e-17 3.46333778759e-17 3.43620030308e-17 3.40849315608e-17 3.38020388424e-17 3.3513255769e-17 3.32185603495e-17 3.29179693039e-17 3.26115291983e-17 3.22993076015e-17 3.19813847099e-17 3.16578454348e-17 3.13287724452e-17 3.09942403179e-17 3.06543108796e-17 3.03090299051e-17 2.99584251849e-17 2.96025059952e-17 2.92412636403e-17 2.88746731685e-17 2.85026956257e-17 2.81252812505e-17 2.77423726079e-17 2.73539082218e-17 2.69598262181e-17 2.65600681722e-17 2.61545832535e-17 2.57433326952e-17 2.53262949396e-17 2.49034714147e-17 2.44748928e-17 2.40406263034e-17 2.36007832924e-17 2.31555276616e-17 2.27050843433e-17 2.22497481934e-17 2.17898926442e-17 2.13259785531e-17 2.08585619878e-17 2.03883015841e-17 1.99159645288e-17 1.94424309541e-17 1.89686959114e-17 1.84958687451e-17 1.80251689231e-17 1.75579178071e-17 1.70955261241e-17 1.66394762627e-17 1.61912997779e-17 1.57525502546e-17 1.53247719952e-17 1.4909465895e-17 1.45080537811e-17 1.41218432618e-17 1.37519949191e-17 1.33994938673e-17 1.30651274934e-17 1.27494706882e-17 1.24528792565e-17 1.21754915773e-17 1.19172378912e-17 1.16778563112e-17 1.14569136564e-17 1.12538298745e-17 1.10679038749e-17 1.08983397026e-17 1.07442714321e-17 1.06047860969e-17 1.04789439335e-17 1.03657956381e-17 1.02643967632e-17 1.01738191289e-17 1.00931599692e-17 1.00215486211e-17 9.95815159214e-18 9.90217624293e-18 9.85287320715e-18 9.80953811743e-18 9.77151261257e-18 9.73818488355e-18 9.70898984998e-18 9.68340886838e-18 9.66096913847e-18 9.64124268025e-18 9.62384494487e-18 9.60843298032e-18 9.59470319942e-18 9.58238883309e-18 9.57125702312e-18 9.56110571657e-18 9.55176043978e-18 9.54307102555e-18 9.53490842008e-18 9.52716155048e-18 9.51973452185e-18 9.51254394003e-18 9.505516653e-18 9.49858768466e-18 9.49169859069e-18 9.48479605729e-18 9.47783076393e-18 9.47075664485e-18 9.46353020732e-18 9.45611019375e-18 9.44845731288e-18 9.44053419296e-18 9.43230536707e-18 9.4237374034e-18 9.4147990323e-18 9.40546141382e-18 9.39569830471e-18 9.38548640891e-18 9.37480568223e-18 9.36363965893e-18 9.35197593984e-18 9.339806724e-18 9.32712944755e-18 9.31394766863e-18 9.30027229348e-18 9.28612312298e-18 9.27153097598e-18 9.25654088684e-18 9.24121620888e-18 9.22564483486e-18 9.20994810118e-18 9.19429366242e-18 9.17891433021e-18 9.16413499136e-18 9.15040905073e-18 9.13836019513e-18 9.12880235325e-18 9.12263550466e-18 9.12029930666e-18 9.12015823977e-18 9.11740342791e-18 9.11007256975e-18 9.09810230603e-18 9.07863662361e-18 9.0527366177e-18 9.02378592771e-18 8.99421692951e-18 8.96526470256e-18 8.93749818615e-18 8.91108767515e-18 8.88590814675e-18 8.86164073836e-18 8.83789680642e-18 8.81432469128e-18 8.79066910392e-18 8.76678304755e-18 8.74260896746e-18 8.71814817471e-18 8.69343075518e-18 8.66849209499e-18 8.64335702112e-18 8.6180308583e-18
###Markdown
Training - Random Suffule
###Code
all_data = zip(xy_data,img_data)
shuffle(all_data)
temp_xy = []
temp_data = []
for row in all_data:
temp_xy.append(row[0].tolist()[0])
temp_data.append(row[1])
s_data = np.matrix(temp_data)
s_xy = np.matrix(temp_xy)
###Output
_____no_output_____
###Markdown
Testing
###Code
showimg(fa2(xy_data.T)[0].T)
###Output
|
Analysis/Lichenibacterium_genes.ipynb | ###Markdown
Gene location Nikitin Pavel, Lomonosov Moscow State University04.2021
###Code
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-36000) + '..' + str(c[1]-36000))
#m_9
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=516, end=1178, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=1191, end=3644, strand=-1, color="#ccccff",
label="GspD"),
GraphicFeature(start=3629, end=5296, strand=+1, color="#ccccff",
label="type II/IV secretion system protein"),
GraphicFeature(start=5355, end=6548, strand=+1, color="#ccccff",
label="type II secretion system F family protein"),
GraphicFeature(start=6561, end=7043, strand=+1, color="#ccccff",
label="GspG"),
GraphicFeature(start=7015, end=7494, strand=+1, color="#ffcccc",
label="HP")
]
record = GraphicRecord(sequence_length=8000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/m_9.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-412000) + '..' + str(c[1]-412000))
#m_1
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=14, end=1426, strand=+1, color="#ffcccc",
label="fumC"),
GraphicFeature(start=1480, end=2457, strand=-1, color="#ccccff",
label="type II secretion system F family protein"),
GraphicFeature(start=2478, end=3467, strand=-1, color="#ccccff",
label="type II secretion system F family protein"),
GraphicFeature(start=3467, end=4894, strand=-1, color="#ffcccc",
label="CpaF family protein"),
GraphicFeature(start=5069, end=5710, strand=-1, color="#ffcccc",
label="Isochorismatase family protein"),
GraphicFeature(start=5882, end=7234, strand=-1, color="#ffcccc",
label="PLP-dependent aminotransferase family protein"),
GraphicFeature(start=7424, end=8284, strand=+1, color="#ffcccc",
label="DMT family transporter"),
GraphicFeature(start=8328, end=10202, strand=-1, color="#ffcccc",
label="DUF3623 family protein"),
GraphicFeature(start=10505, end=11758, strand=-1, color="#ffcccc",
label="CtpF"),
GraphicFeature(start=11755, end=12519, strand=-1, color="#ffcccc",
label="CpaD"),
GraphicFeature(start=12573, end=14105, strand=-1, color="#ccccff",
label="type II and III secretion system protein family protein"),
GraphicFeature(start=14102, end=14914, strand=-1, color="#ffcccc",
label="CpaB")
]
record = GraphicRecord(sequence_length=15000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/m_1.png", dpi=300)
#r_17
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=244, end=888, strand=-1, color="#cffccc",
label="type II secretion system protein"),
GraphicFeature(start=875, end=1390, strand=-1, color="#ffcccc",
label="prepilin-type N-terminal cleavage/methylation"),
GraphicFeature(start=1365, end=1784, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=1801, end=2280, strand=-1, color="#cffccc",
label="GspG"),
GraphicFeature(start=2298, end=3491, strand=-1, color="#cffccc",
label="type II secretion system F family protein"),
GraphicFeature(start=3532, end=5214, strand=-1, color="#ffcccc",
label="type II/IV secretion system protein"),
GraphicFeature(start=5358, end=7688, strand=+1, color="#cffccc",
label="GspD"),
GraphicFeature(start=7701, end=8438, strand=+1, color="#ffcccc",
label="HP")
]
record = GraphicRecord(sequence_length=8700, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_17.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-20000) + '..' + str(c[1]-20000))
#r_11
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=629, end=1093, strand=+1, color="#ffcccc",
label="copper chaperone PCu(A)C"),
GraphicFeature(start=1477, end=1896, strand=+1, color="#cffccc",
label="GspG"),
GraphicFeature(start=1893, end=2570, strand=+1, color="#ffcccc",
label="SCO family protein"),
GraphicFeature(start=2598, end=5345, strand=+1, color="#ffcccc",
label="DUF1929 domain-containing protein"),
GraphicFeature(start=5659, end=7866, strand=+1, color="#cffccc",
label="GspD"),
GraphicFeature(start=7863, end=8321, strand=+1, color="#ffcccc",
label="HP"),
GraphicFeature(start=8477, end=9427, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=9539, end=11377, strand=+1, color="#ffcccc",
label="type II/IV secretion system protein"),
GraphicFeature(start=11377, end=12633, strand=+1, color="#cffccc",
label="type II secretion system F family protein"),
GraphicFeature(start=12630, end=13070, strand=+1, color="#ffcccc",
label="prepilin-type N-terminal cleavage/methylation domain-containing protein")
]
record = GraphicRecord(sequence_length=13500, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_11.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-183000) + '..' + str(c[1]-183000))
#r_1
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=534, end=1946, strand=+1, color="#ffcccc",
label="fumC"),
GraphicFeature(start=2093, end=3070, strand=-1, color="#cffccc",
label="type II secretion system F family protein"),
GraphicFeature(start=3153, end=4112, strand=-1, color="#ffcccc",
label="pilus assembly protein")
]
record = GraphicRecord(sequence_length=4500, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_1_1.png", dpi=300)
#r_33
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=55, end=768, strand=+1, color="#ffcccc",
label="HP"),
GraphicFeature(start=768, end=1823, strand=+1, color="#ffd700",
label="TssG "),
GraphicFeature(start=2002, end=2325, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=2460, end=3080, strand=-1, color="#ffcccc",
label="N-acetyltransferase"),
GraphicFeature(start=3080, end=3526, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=3523, end=4011, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=4330, end=5025, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=5107, end=5583, strand=-1, color="#ffd700",
label="Hcp"),
GraphicFeature(start=5589, end=8270, strand=-1, color="#ffd700",
label="TssH"),
GraphicFeature(start=8430, end=9497, strand=+1, color="#ffd700",
label="TssA"),
GraphicFeature(start=9534, end=10073, strand=+1, color="#ffd700",
label="type VI secretion system contractile sheath small subunit"),
GraphicFeature(start=10076, end=11572, strand=+1, color="#ffd700",
label="type VI secretion system contractile sheath large subunit"),
GraphicFeature(start=11583, end=12965, strand=+1, color="#ffd700",
label="type VI secretion system contractile sheath large subunit"),
GraphicFeature(start=13072, end=13905, strand=+1, color="#ffd700",
label="SciE type virulence protein"),
GraphicFeature(start=13767, end=14363, strand=+1, color="#ffd700",
label="TssE"),
GraphicFeature(start=14370, end=16190, strand=+1, color="#ffd700",
label="TssF"),
GraphicFeature(start=16304, end=16555, strand=-1, color="#ffcccc",
label="HP")
]
record = GraphicRecord(sequence_length=17000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_33.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-22000) + '..' + str(c[1]-22000))
#r_32
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=220, end=975, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=1475, end=3121, strand=-1, color="#ffd700",
label="type VI secretion protein"),
GraphicFeature(start=3259, end=3591, strand=-1, color="#ffcccc",
label="HP")
]
record = GraphicRecord(sequence_length=4000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_32.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-52000) + '..' + str(c[1]-52000))
#r_27
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=506, end=1801, strand=+1, color="#ffcccc",
label="FHA domain-containing protein", linewidth=0.5),
GraphicFeature(start=1809, end=3146, strand=+1, color="#ffd700",
label="TssK", linewidth=0.5),
GraphicFeature(start=3150, end=4490, strand=+1, color="#ffd700",
label="TssL", linewidth=0.5),
GraphicFeature(start=4487, end=7183, strand=+1, color="#ffd700",
label="TssM", linewidth=0.5),
GraphicFeature(start=6922, end=8610, strand=+1, color="#ffd700",
label="TagF", linewidth=0.5),
GraphicFeature(start=8607, end=9059, strand=+1, color="#ffcccc",
label="HP")
]
record = GraphicRecord(sequence_length=9500, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_27.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-58000) + '..' + str(c[1]-58000))
#r_3
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=640, end=1110, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=1121, end=2785, strand=-1, color="#ffd700",
label="type VI secretion protein"),
GraphicFeature(start=3495, end=4709, strand=-1, color="#ffcccc",
label="helix-turn-helix transcriptional regulator")
]
record = GraphicRecord(sequence_length=5000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_3.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-82000) + '..' + str(c[1]-82000))
#r_1
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=422, end=1996, strand=-1, color="#ffcccc",
label="DHA2 family efflux MFS transporter permease subunit"),
GraphicFeature(start=2010, end=3320, strand=-1, color="#ffd700",
label="HlyD family secretion protein"),
GraphicFeature(start=3623, end=4219, strand=+1, color="#ffcccc",
label="TetR family transcriptional regulator, pseudogene")
]
record = GraphicRecord(sequence_length=5000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_1_2.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-8000) + '..' + str(c[1]-8000))
#r_9
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=114, end=1916, strand=-1, color="#ffcccc",
label="glycoside hydrolase family 15 protein"),
GraphicFeature(start=2021, end=3292, strand=-1, color="#ffd700",
label="HlyD family secretion protein"),
GraphicFeature(start=3351, end=3551, strand=-1, color="#ffcccc",
label="DUF1656 domain-containing protein")
]
record = GraphicRecord(sequence_length=4000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_9.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-119000) + '..' + str(c[1]-119000))
#r_15
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=185, end=874, strand=-1, color="#ffcccc",
label="TetR/AcrR family transcriptional regulator"),
GraphicFeature(start=1014, end=2261, strand=+1, color="#ffd700",
label="HlyD family secretion protein"),
GraphicFeature(start=2271, end=3830, strand=+1, color="#ffcccc",
label="DHA2 family efflux MFS transporter permease subunit")
]
record = GraphicRecord(sequence_length=4000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_15.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-0) + '..' + str(c[1]-0))
#r_16
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=367, end=2400, strand=+1, color="#ffcccc",
label="M23 family metallopeptidase"),
GraphicFeature(start=2503, end=3393, strand=-1, color="#ffd700",
label="HlyD family secretion protein"),
GraphicFeature(start=3432, end=3641, strand=-1, color="#ffcccc",
label="DUF1656 domain-containing protein")
]
record = GraphicRecord(sequence_length=4000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_16.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-30000) + '..' + str(c[1]-30000))
#r_21
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=476, end=2074, strand=-1, color="#ffcccc",
label="DHA2 family efflux MFS transporter permease subunit"),
GraphicFeature(start=2071, end=3072, strand=-1, color="#ffd700",
label="HlyD family secretion protein"),
GraphicFeature(start=3692, end=4360, strand=+1, color="#ffcccc",
label="TetR/AcrR family transcriptional regulator"),
GraphicFeature(start=4374, end=5162, strand=-1, color="#ffcccc",
label="aldolase"),
GraphicFeature(start=5189, end=5893, strand=-1, color="#ffcccc",
label="haloacid dehalogenase type II"),
GraphicFeature(start=5960, end=6844, strand=+1, color="#ffcccc",
label="LysR family transcriptional regulator"),
GraphicFeature(start=7424, end=7696, strand=-1, color="#ffcccc",
label="minE"),
GraphicFeature(start=7693, end=8508, strand=-1, color="#ffcccc",
label="minD"),
GraphicFeature(start=8538, end=9287, strand=-1, color="#ffcccc",
label="minC"),
GraphicFeature(start=9523, end=10944, strand=+1, color="#ffcccc",
label="methyltransferase domain-containing protein"),
GraphicFeature(start=11166, end=12077, strand=-1, color="#ffcccc",
label="RpoH"),
GraphicFeature(start=12469, end=12633, strand=-1, color="#ffcccc",
label="DUF465 domain-containing protein"),
GraphicFeature(start=12859, end=14154, strand=-1, color="#ffcccc",
label="murA"),
GraphicFeature(start=14210, end=14395, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=14594, end=15283, strand=+1, color="#ffcccc",
label="HP"),
GraphicFeature(start=15530, end=15931, strand=+1, color="#ffcccc",
label="RusA family crossover junction endodeoxyribonuclease"),
GraphicFeature(start=15928, end=16611, strand=+1, color="#ffcccc",
label="HP"),
GraphicFeature(start=16676, end=18427, strand=-1, color="#ffcccc",
label="HP"),
GraphicFeature(start=18536, end=20167, strand=-1, color="#ffcccc",
label="DHA2 family efflux MFS transporter permease subunit"),
GraphicFeature(start=20160, end=21452, strand=-1, color="#ffd700",
label="HlyD family secretion protein"),
GraphicFeature(start=21445, end=21912, strand=-1, color="#ffcccc",
label="MarR family transcriptional regulator")
]
record = GraphicRecord(sequence_length=22000, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_21.png", dpi=300)
a = input()
b = a.split('..')
c = [int(i) for i in b]
print(str(c[0]-4500) + '..' + str(c[1]-4500))
#r_29
from dna_features_viewer import GraphicFeature, GraphicRecord
features=[
GraphicFeature(start=362, end=1921, strand=-1, color="#ffcccc",
label="DHA2 family efflux MFS transporter permease subunit"),
GraphicFeature(start=1937, end=3265, strand=-1, color="#ffd700",
label="HlyD family secretion protein"),
GraphicFeature(start=3348, end=4094, strand=+1, color="#ffcccc",
label="TetR/AcrR family transcriptional regulator")
]
record = GraphicRecord(sequence_length=4500, features=features)
ax, _ = record.plot(figure_width=15)
#ax.figure.savefig("./pics/r_29.png", dpi=300)
###Output
_____no_output_____ |
Week5/July9_regression_pfeats.ipynb | ###Markdown
Regression on polynomial features dataset
###Code
import os, sys
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns ; sns.set()
from google.colab import drive
drive.mount('/content/drive')
sys.path.append("/content/drive/MyDrive/GSOC-NMR-project/Work/Notebooks")
from auxillary_functions import *
from polynomial_featextract import poly_featextract
# import raw data and params.txt file
datadir_path = "/content/drive/MyDrive/GSOC-NMR-project/Work/Data/2021-06-21_classify_datagen_all_funcs"
rawdata = load_data(datadir_path)
params = load_params(datadir_path)
ker_integrals = load_wlist(datadir_path) # load wlist.txt file
# Stencil type : {'0' : 'Gaussian', '1' : 'Power Law', '2' : 'RKKY'}
print(rawdata.shape)
offset = 150
shifted_data, center = get_window(rawdata,2/3,width=offset)
print("The Echo pulse occurs at timestep:",center)
# Rescaled data
rscl_data = shifted_data / np.max(shifted_data,axis=1,keepdims=True)
pd.options.mode.chained_assignment = None
y_classes = params[['αx','αz','ξ']]
y_classes['len_scale'] = np.sqrt(ker_integrals.values.flatten()/(params['αx'].values+1))
y_classes.head()
polyfeats = poly_featextract(rscl_data, [4,5,10], [3,3,3], as_df=True)
polyfeats.head()
from sklearn.preprocessing import PowerTransformer
def plots_transform(df, var, transformer):
plt.figure(figsize=(13,5))
plt.subplot(121)
sns.kdeplot(df[var])
plt.title('before ' + str(transformer).split('(')[0])
plt.subplot(122)
p1 = transformer.fit_transform(df[[var]]).flatten()
sns.kdeplot(p1)
plt.title('after ' + str(transformer).split('(')[0])
for col_no in range(0, polyfeats.shape[1], 5):
col = polyfeats.columns[col_no]
plots_transform(polyfeats, col, PowerTransformer(method='yeo-johnson'))
###Output
_____no_output_____
###Markdown
Regression before classification
###Code
from sklearn.model_selection import RepeatedKFold, GridSearchCV, train_test_split, cross_val_score
from sklearn.feature_selection import SelectKBest, mutual_info_regression, SelectFromModel
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
X_train, X_test, y_train, y_test = train_test_split(polyfeats,
y_classes,
stratify=params['stencil_type'],
test_size=0.2, random_state=1)
###Output
_____no_output_____
###Markdown
Mutual Information based feature selection
###Code
# feature selection
def select_features(X_train, y_train, X_test):
"""configure to select all features, learn relationship from training data,
transform train input data, transform test input data"""
fs = SelectKBest(score_func=mutual_info_regression, k='all')
fs.fit(X_train, y_train)
#X_train_fs = fs.transform(X_train)
#X_test_fs = fs.transform(X_test)
return fs
MI = pd.DataFrame(polyfeats.columns.tolist(),columns=['Feature'])
for col in y_classes.columns :
fs = select_features(X_train, y_train[col], X_test)
MI[f"MI-{col}"] = fs.scores_
MI.head()
MI['MI-αx'].plot.barh()
fig, axes = plt.subplots(4,1,sharey=True,sharex=True,figsize=(18,12))
for col,ax in zip(MI.columns[1:],axes):
MI[col].plot.bar(ax=ax, alpha=0.7)
ax.set(title=f"{col}")
ax.axhline(y=MI[col].median(),color='k',ls='--',alpha=0.3,label='Median')
ax.axhline(y=MI[col].mean(),color='r',ls='--',alpha=0.3,label='Mean')
ax.legend()
plt.xticks(np.arange(76),MI['Feature'].values.tolist(),rotation=70)
plt.suptitle("Bar Chart of the Polynomial Features (x) vs. the Mutual Information Feature Importance (y)", fontsize=18)
plt.tight_layout(rect=[0, 0.03, 1, 0.95])
plt.show()
feats_mean = []
for num_col in MI.select_dtypes(include=np.number).columns.tolist():
df_new = MI[['Feature',num_col]][MI[num_col] > MI[num_col].mean()]
feats_mean.append(df_new)
print(num_col, df_new.shape)
###Output
MI-αx (37, 2)
MI-αz (31, 2)
MI-ξ (37, 2)
MI-len_scale (32, 2)
###Markdown
Feature selection based on RF feature importance $\alpha _x$ prediction
###Code
rf = RandomForestRegressor(n_estimators=40, oob_score=True, n_jobs=-1)
?cross_val_score
### Base model
results = list()
cv = RepeatedKFold(n_splits=5, n_repeats=3, random_state=1)
scores = cross_val_score(rf, rscl_data, y_classes['αx'],
scoring='neg_mean_absolute_error', cv=cv,
verbose=1, n_jobs=-1)
results.append(scores)
plt.title("Base RF Model nMAE")
plt.boxplot(results)
plt.show()
feat_ax = MI[['Feature','MI-αx']].sort_values("MI-αx",ascending=False)
feat_ax
feat_axlist = feat_ax['Feature'].tolist()
from sklearn.tree import DecisionTreeRegressor
dt = DecisionTreeRegressor()
base_dt_results = []
dt_scores = cross_val_score(dt, polyfeats, y_classes['αx'],
scoring='neg_mean_absolute_error', cv=cv,
verbose=1, n_jobs=-1)
base_dt_results.append(dt_scores)
print(base_dt_results)
plt.title("Base DT Model nMAE")
plt.boxplot(base_dt_results)
plt.show()
dt_results = []
for i in range(len(feat_axlist)//3, len(feat_axlist), 2):
dt = DecisionTreeRegressor()
dt_scores_feats = cross_val_score(dt, polyfeats[feat_axlist[:i]], y_classes['αx'],
scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
dt_results.append(dt_scores_feats)
print(f"{i} features, Mean_score:{np.mean(dt_scores_feats)}, std_dev:{np.std(dt_scores_feats)}")
plt.boxplot(dt_results[0])
# plot model performance for comparison
plt.figure(figsize=(16,7))
plt.boxplot(dt_results, showmeans=True)
plt.plot(np.arange(1,27),[np.mean(i) for i in dt_results],'k--', alpha=0.4)
plt.xticks(np.arange(1,27), range(len(feat_axlist)//3, len(feat_axlist), 2))
plt.xlabel("No of Features", fontsize=14)
plt.ylabel("nMAE", fontsize=14)
plt.title("Boxplot with Decision Tree model for each number of selected features Using Mutual Information", fontsize=14)
plt.show()
# Per feature increase in nMAE
mae_values = [np.mean(i) for i in dt_results]
plt.plot([mae_values[j] - mae_values[j-1] for j in range(len(mae_values)-1)])
###Output
_____no_output_____ |
dl/dive-into-dl/chapter03-DL-basics/3.11_underfit-overfit.ipynb | ###Markdown
3.11 模型选择、欠拟合和过拟合 3.11.4 多项式函数拟合实验
###Code
import torch
import numpy as np
import sys
import os, sys
sys.path.append("..")
import d2lzh_pytorch.utils as d2l
np.random.seed(666)
cur_path = os.path.abspath(os.path.dirname('__file__'))
data_path = cur_path.replace('dl\dive-into-dl\chapter03-dl-basics', 'data\\')
np.random.seed(666)
torch.manual_seed(666)
###Output
_____no_output_____
###Markdown
3.11.4.1 生成数据集$$y = 1.2x - 3.4x^2 + 5.6x^3 + 5 + \varepsilon$$
###Code
n_train, n_test, true_w, true_b = 100, 100, [1.2, -3.4, 5.6], 5
features = torch.randn((n_train + n_test, 1))
# x ** 2 + x ** 3 + 1
poly_features = torch.cat((features, torch.pow(features, 2), torch.pow(features, 3)), 1)
# 此时labels就是公式中的y
labels = (true_w[0] * poly_features[:, 0] + true_w[1] * poly_features[:, 1]
+ true_w[2] * poly_features[:, 2] + true_b)
# 加入noise
labels += torch.tensor(np.random.normal(0, 0.01, size=labels.size()), dtype=torch.float)
features[:2], poly_features[:2], labels[:2]
###Output
_____no_output_____
###Markdown
3.11.4.2 定义、训练和测试模型
###Code
def semilogy(x_vals, y_vals, x_label, y_label, x2_vals=None,
y2_vals=None, legend=None, figsize=(3.5, 2.5)):
d2l.set_figsize(figsize)
d2l.plt.xlabel(x_label)
d2l.plt.ylabel(y_label)
d2l.plt.semilogy(x_vals, y_vals)
if x2_vals and y2_vals:
d2l.plt.semilogy(x2_vals, y2_vals, linestyle=':')
d2l.plt.legend(legend)
num_epochs, loss = 100, torch.nn.MSELoss()
def fit_and_plot(train_features, test_features, train_labels, test_labels):
# pytorch中已经将Linear初始化
net = torch.nn.Linear(train_features.shape[-1], 1)
batch_size = min(10, train_labels.shape[0])
dataset = torch.utils.data.TensorDataset(train_features, train_labels)
train_iter = torch.utils.data.DataLoader(dataset, batch_size, shuffle=True)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
train_ls, test_ls = [], []
for _ in range(num_epochs):
for X, y in train_iter:
l = loss(net(X), y.view(-1, 1))
optimizer.zero_grad()
l.backward()
optimizer.step()
train_labels = train_labels.view(-1, 1)
test_labels = test_labels.view(-1, 1)
train_ls.append(loss(net(train_features), train_labels).item())
test_ls.append(loss(net(test_features), test_labels).item())
print('final epoch: train loss', train_ls[-1], 'test loss', test_ls[-1])
semilogy(range(1, num_epochs+1), train_ls, 'epochs', 'loss', range(1, num_epochs+1), test_ls, ['train', 'test'])
print('weight', net.weight.data, '\nbias', net.bias.data)
###Output
_____no_output_____
###Markdown
3.11.4.3 三阶多项式函数拟合
###Code
fit_and_plot(poly_features[:n_train, :], poly_features[n_train:, :], labels[:n_train], labels[n_train:])
###Output
final epoch: train loss 0.00880697462707758 test loss 0.14995457231998444
weight tensor([[ 1.3870, -3.2880, 5.5023]])
bias tensor([4.9354])
###Markdown
3.11.4.4 线性函数拟合(欠拟合)
###Code
fit_and_plot(features[:n_train, :], features[n_train:, :], labels[: n_train], labels[n_train:])
###Output
final epoch: train loss 26.610502243041992 test loss 401.5584411621094
weight tensor([[10.7544]])
bias tensor([3.9016])
###Markdown
3.11.4.5 训练样本不足(过拟合)
###Code
fit_and_plot(poly_features[0:2, :], poly_features[n_train:, :], labels[0:2], labels[n_train:])
###Output
final epoch: train loss 0.5019018650054932 test loss 314.0302734375
weight tensor([[2.5113, 1.2379, 1.9215]])
bias tensor([1.9971])
|
doc/examples/How_to_make_predictions_using_trained_model.ipynb | ###Markdown
Guide to use trained model for predictionsThis guide explain how to use a trained model to make predictions of new data set *Note: This notebook is binder ready, please uncomment the following cell to install ``HARDy`` in the binder environment*
###Code
#!: $(pip install ../../)
###Output
_____no_output_____
###Markdown
1. Preparing transformation configuration file Just like training, predictions using HARDy requires a transformation configuration .yaml file. In this example, only best performing transformation $log(q)$ vs. $d(I(q))/d(q)$ is considered. The configuration file is shown in image below: The instructions for building transformation configuration file are available at How to write configuration files Extracting new data into the binder directory
###Code
!tar -xzf new_data_set.tar.gz
###Output
_____no_output_____
###Markdown
2. Importing required modules
###Code
import hardy
import os
###Output
Using TensorFlow backend.
###Markdown
3. Preparing the new data set to be fed into trained model Applying transformations to the data - Defining the location for transformation configuration
###Code
transformation_config_path = './scattering_tform_config.yaml'
###Output
_____no_output_____
###Markdown
- Collecting the filenames of new data set having only .csv file extension
###Code
new_data_file_list = [item for item in os.listdir('./new_data_set/') if item.endswith('.csv')]
###Output
_____no_output_____
###Markdown
- Loading transformation information from the configuration file
###Code
tform_command_list, tform_command_dict = hardy.arbitrage.import_tform_config(transformation_config_path)
###Output
Successfully Loaded 1 Transforms to Try!
###Markdown
- Defining variables for transforming images
###Code
run_name = 'log_q_der_I'
new_datapath = './new_data_set/'
classes = ['sphere', 'cylinder', 'core-shell', 'ellipsoid']
project_name = 'new_data_set'
scale = 0.2
target_size = (100, 100)
###Output
_____no_output_____
###Markdown
Please note that the order of classes must be same as used for training of Machine Learning model. Moreover, the scale and target_size must also be the same as used for training. - Using data wrapper function to generate the rgb images
###Code
hardy.data_wrapper(run_name=run_name, raw_datapath=new_datapath, tform_command_dict=tform_command_dict,
classes='d', scale=0.2)
###Output
Loaded 44 of 44 Files at rate of 370 Files per Second
Success! About 0.0 Minutes...
Making rgb Images from Data... Success in 2.12seconds!
That Took 2.29 Sec !
###Markdown
Please note that the value for classes used in data_wrapper module is a string rather than classes. This is being done to use same module for different functionalites i-e for training and predictions. Since, the new data set doesn't have the labels, the class 'd' is used as assumed class - The data_rapper will apply the numerical and visual transformations and pickle the data into the new_datapath folder - To load the transformed data, following code is used:
###Code
transformed_data = hardy.handling.pickled_data_loader(new_datapath, run_name)
###Output
_____no_output_____
###Markdown
- The data now needs to be converted into iterator acceptable to tensorflow. This can be done by
###Code
new_data_set = hardy.to_catalogue.test_set(image_list=transformed_data, target_size=target_size,
classes=classes, color_mode='rgb',
iterator_mode='arrays', batch_size=len(new_data_file_list),
training=False)
###Output
The number of unique labels was found to be 1, expected 4
###Markdown
The argument for training is kept false, to avoid tagging classes in data set. During training, it is kept as true so that model can seek validation of predicted outcomes The data is now ready to be used for predictions 4. Making predictions - Loading the model
###Code
trained_model = hardy.cnn.save_load_model('./best_model.h5', load=True)
###Output
_____no_output_____
###Markdown
* Making predictions
###Code
hardy.reporting.model_predictions(trained_model, new_data_set, classes, transformed_data)
###Output
_____no_output_____ |
tobias/new_features.ipynb | ###Markdown
- Creating the structure
###Code
df = orders.copy()
df['weekpair'] = (df.time.dt.dayofyear + 1) // 14 - 13
npairs = df.weekpair.nunique()
n_items = items['itemID'].nunique()
print('total number of items:', n_items)
print('expected number of instances:', n_items * npairs)
mi = pd.MultiIndex.from_product([range(-npairs, 0), items['itemID']], names=['weekpair', 'itemID'])
data_temp = pd.DataFrame(index = mi)
data_temp = data_temp.join(df.groupby(['weekpair', 'itemID'])[['order']].sum(), how = 'left')
data_temp.fillna(0, inplace = True)
data_temp.groupby('itemID').count().min()
# data_temp
###Output
_____no_output_____
###Markdown
- Creating features
###Code
items.head(2)
# features = [
# ('itemID', 'item'),
# ('manufacturer', 'manuf'),
# ('category1', 'cat1'),
# ('category2', 'cat2'),
# ('category3', 'cat3')
# ]
# for f, n in features:
# if f not in data.columns:
# print('ops', f)
# features = [('itemID', 'item')]
# # f, name = ('manufacturer', 'manuf')
# for f, name in features:
# print(f)
# temp = data.groupby([f, 'weekpair'])[['order']].sum()
# shifted = temp.groupby(f)[['order']].shift(1)
# new_feature_block = pd.DataFrame()
# for n in range(3):
# rolled = shifted.groupby(f, as_index = False)['order'].rolling(2 ** n).mean()
# new_feature_block['%s_%d' % (name, 2 ** n)] = rolled.reset_index(0, drop = True) # rolling has a weird index behavior...
# data = pd.merge(data, new_feature_block.reset_index(), on = [f, 'weekpair'])
def gbagg(data, group_cols, targeted_cols, out_names, function, as_index = False):
X = data.values
col = {c : i for i, c in enumerate(data.columns)}
# values that are going to calculated
new_feat = []
# numbers of the columns
gcols = [col[c] for c in group_cols]
tcols = [col[c] for c in targeted_cols]
interval = None
a = None
i = 0
while i < len(X):
a = X[i, gcols]
# find the whole interval of this group
j = i
while j < len(X):
if (X[j, gcols] != a).any():
break
j += 1
interval = X[i:j, tcols]
# apply function on interval, save in new feature
output = function(interval)
new_feat.append(output)
# go to next group
i = j
idx = data.groupby(group_cols).size().index # this is actually fast...
out_df = pd.DataFrame(new_feat, columns = out_names, index = idx)
if not as_index:
out_df.reset_index(inplace = True)
return out_df
def gbtransf(data, group_cols, targeted_cols, out_names, function, params = dict()):
X = data.values
col = {c : i for i, c in enumerate(data.columns)}
# values that are going to calculated
new_feat = np.zeros((len(data), len(out_names)))
# numbers of the columns
gcols = [col[c] for c in group_cols]
tcols = [col[c] for c in targeted_cols]
interval = None
a = None
i = 0
while i < len(X):
a = X[i, gcols]
# find the whole interval of this group
j = i
while j < len(X):
if (X[j, gcols] != a).any():
break
j += 1
interval = X[i:j, tcols]
# apply function on interval, save in new feature
output = function(interval, **params)
new_feat[i:j] = output
# go to next group
i = j
out_df = pd.DataFrame(new_feat, columns = out_names, index = data.index)
return out_df
def shift_and_2n_window(x, ws):
# out = pd.DataFrame(x)
# out = out.shift()
# out = out.rolling(2 ** n).mean()
shifted = np.zeros_like(x) # output
shifted[1:] = x[:-1] # shift
out = np.zeros_like(x, dtype = float)
# rolling mean
total = shifted[:ws].sum()
out[ws - 1] = total / ws
for i in range(ws, len(out)):
total = total - shifted[i - ws] + shifted[i]
out[i] = total / ws
out[:ws] = np.NaN # maybe ws -1 should be NaN as well for receiving one NaN value when ws > 1
# out[0] = np.NaN # this is always NaN for a shift of 1
return out
data = data_temp.reset_index()
data = pd.merge(data, items, on = 'itemID')
data.sort_values(['itemID', 'weekpair'], inplace = True)
# gbtransf(data, ['itemID', 'weekpair'], ['order'], ['out'], lambda x : np.ones_like(x))
shift_and_2n_window(np.array([1 , 2, 3, 4, 5, 6]), 2 ** 1)
features = [('itemID', 'item')]
for f, name in features:
print(f)
new_feature_block = pd.DataFrame()
for n in range(3):
new_f = gbtransf(data, ['itemID'], ['order'], ['out'], shift_and_2n_window, {'ws' : 2 ** n})
new_feature_block['%s_%d' % (name, 2 ** n)] = new_f['out']
# data = pd.merge(data, new_feature_block.reset_index(), on = [f, 'weekpair'])
data = pd.concat([data, new_feature_block], axis = 1)
data.count() # the larger the window, more NaN are expected
def dist2firstvalue(x):
out = np.zeros_like(x, dtype = float)
first = np.NaN
for i in range(len(x)):
out[i] = first
if x[i] != 0:
first = i
break
if i == len(x) - 1:
return out
for j in range(int(first), len(x)):
out[j] = j - first
return out
dist2firstvalue(np.array([0 , 0, 0, 0])), dist2firstvalue(np.array([0 , 0, 3, 0, 5, 6]))
def dist2firstvalueLeak(x):
out = np.zeros_like(x, dtype = float)
for i in range(len(x)):
if x[i] != 0:
out[i] = 1
break
# else:
# out[i] = -9999
return out
# def dist2firstvalueLeakAdding(x):
# out = np.zeros_like(x, dtype = float)
# d = 1
# for i in range(len(x)):
# if x[i] != 0:
# out[i] = d
# break
# d += 1
# for j in range(i + 1, len(x)):
# out[j] = d
# d += 1
# return out
dist2firstvalueLeak(np.array([0 , 0, 3, 0, 5, 6]))
# dist2firstvalue(np.array([0 , 0, 0, 0]))
# def dist2lastpeak(x):
# out = np.zeros_like(x, dtype = float)
# peak = np.NaN
# peak_val = 0
# for i in range(0, len(x)):
# out[i] = i - peak
# if x[i] > peak_val:
# peak = i
# peak_val = x[i]
# return out
# dist2lastpeak(np.array([0 , 0, 3, 0, 5, 6]))
data.sort_values(['itemID', 'weekpair'], inplace = True)
data['dist2firstvalueLeak'] = gbtransf(data, ['itemID'], ['order'], ['out'], dist2firstvalueLeak)['out']
# data['dist2lastpeak'] = gbtransf(data, ['itemID'], ['order'], ['out'], dist2lastpeak)['out']
data.groupby("weekpair")["dist2firstvalueLeak"].sum().to_dict()
# del data["leak_cat3"]
the_cat = "manufacturer"
sla = data.groupby(["weekpair", the_cat])["dist2firstvalueLeak"].sum().reset_index()
sla = sla.rename(columns={"dist2firstvalueLeak" : "leak_cat3"})
# acho que mudou nada... opa, mudou sim
data = pd.merge(data, sla, on = ["weekpair", the_cat])
# the_cat = "brand"
# sla = data.groupby(["weekpair", the_cat])["dist2firstvalueLeak"].sum().reset_index()
# sla = sla.rename(columns={"dist2firstvalueLeak" : "leak_cat4"})
# del data["leak_cat4"]
# acho que mudou nada... opa, mudou sim
# data = pd.merge(data, sla, on = ["weekpair", the_cat])
data["total_new"] = data["weekpair"].map(data.groupby("weekpair")["dist2firstvalueLeak"].sum().to_dict())
data.fillna(0, inplace=True)
# checking if we got what we wanted
data.query('itemID == 1')
# data["ratio_manufnew"] = data["total_new"] * data["leak_cat3"]
# del data["ratio_manufnew"]
# data['weekswithtrans'] = data.groupby('itemID')['order'].apply(lambda x : (x > 0).cumsum()) / (data['weekpair'] + 14)
data.head()
###Output
_____no_output_____
###Markdown
- Split Data
###Code
weights = infos.set_index('itemID')['simulationPrice'].to_dict()
# filtered_data = data
filtered_data = data.query("dist2firstvalueLeak == 1")
len(data), len(filtered_data)
# filtered_data.pop("itemID");
sub_week = -1
train = filtered_data.query('-13 <= weekpair <= (@sub_week - 2)').reset_index(drop = True)
full_train = filtered_data.query('-13 <= weekpair <= (@sub_week - 1)').reset_index(drop = True)
val = filtered_data.query('weekpair == (@sub_week - 1)').reset_index(drop = True)
sub = filtered_data.query('weekpair == (@sub_week)').reset_index(drop = True)
len(train), len(val), len(sub)
y_train = train.pop('order').values
y_full_train = full_train.pop('order').values
y_val = val.pop('order').values
y_sub = sub.pop('order').values
w_train = train['itemID'].map(weights)
w_full_train = full_train['itemID'].map(weights)
w_val = val['itemID'].map(weights)
w_sub = sub['itemID'].map(weights)
train.pop("itemID")
full_train.pop("itemID")
val.pop("itemID")
sub.pop("itemID")
X_train = train.values
X_full_train = full_train.values
X_val = val.values
X_sub = sub.values
###Output
_____no_output_____
###Markdown
- Min Expected Error
###Code
def evaluate(prediction, target, simulationPrice):
return np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulationPrice)
# max expected rmse
from sklearn.metrics import mean_squared_error as mse
# pred = data.loc[1:12].groupby('itemID')['order'].mean().sort_index()
# target_week = data.loc[13:, 'order'].reset_index(level = 0, drop = True).sort_index()
# mse(target_week, pred) ** .5
###Output
_____no_output_____
###Markdown
- XGBoost
###Code
import xgboost as xgb
xgb.__version__
# custom objective
def gradient(prediction, dtrain):
y = dtrain.get_label()
# prediction.astype(int)
# prediction = np.minimum(prediction.astype(int), 1)
return -2 * (prediction - np.maximum(prediction - y, 0) * 1.6) * (1 - (prediction > y) * 1.6)
def hessian(prediction, dtrain):
y = dtrain.get_label()
# prediction.prediction(int)
# prediction = np.minimum(prediction.astype(int), 1)
return -2 * (1 - (prediction > y) * 1.6) ** 2
def objective(prediction, dtrain):
w = dtrain.get_weight()
grad = gradient(prediction, dtrain) * w
hess = hessian(prediction, dtrain) * w
return grad, hess
# custom feval
def feval(prediction, dtrain):
prediction = prediction.astype(int)
# predt = np.minimum(predt.astype(int), 1)
target = dtrain.get_label()
simulationPrice = dtrain.get_weight()
return 'feval', np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulationPrice)
missing = 0
dtrain = xgb.DMatrix(X_train, y_train, w_train, missing = missing)
dfulltrain = xgb.DMatrix(X_full_train, y_full_train, w_full_train, missing = missing)
dval = xgb.DMatrix(X_val, y_val, w_val, missing = missing)
dsub = xgb.DMatrix(X_sub, y_sub, w_sub, missing = missing)
# specify parameters via map
param = {
'max_depth':10,
'eta':0.005,
'objective':'reg:squarederror',
'disable_default_eval_metric': 1,
"min_child_weight" : 3,
# 'tree_method' : 'gpu_hist',
}
num_round = 400
bst = xgb.train(param, dtrain,
num_round,
early_stopping_rounds = 10,
evals = [(dtrain, 'train'), (dval, 'val')],
# obj = objective,
feval = feval,
maximize = True,
)
prediction = bst.predict(dsub, ntree_limit=bst.best_ntree_limit).astype(int)
evaluate(prediction, y_sub, w_sub)
# retrain!
bst_sub = xgb.train(param, dfulltrain,
num_boost_round = bst.best_ntree_limit,
# obj = objective,
feval = feval, maximize = True,
evals = [(dfulltrain, 'ftrain')],
verbose_eval = False,
)
bst_sub.best_ntree_limit
prediction = bst_sub.predict(dsub, ntree_limit=bst_sub.best_ntree_limit).astype(int)
evaluate(prediction, y_sub, w_sub)
# some other things below
# max possible score
evaluate(y_sub, y_sub, w_sub)
# using previous weekpair
evaluate(y_val, y_sub, w_sub)
submission = items[['itemID']].copy()
submission['demandPrediction'] = bst.predict(dsub, ntree_limit=bst.best_ntree_limit).astype(int)
submission.to_csv('../../submissions/sub_inclass_03.csv', sep = '|', index=False)
# submission.head()
###Output
_____no_output_____
###Markdown
- LGBM
###Code
def feval_lgbm(prediction, dtrain):
prediction = prediction.astype(int)
target = dtrain.get_label()
simulationPrice = dtrain.get_weight()
return 'feval', np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulationPrice), True
data.columns
list(data.columns).index("dist2firstvalueLeak")
import lightgbm as lgb
params = {
"objective" : 'regression_l1',
# "metric" :"rmse",
"learning_rate" : 0.05,
'verbosity': 2,
# 'max_depth': 6,
# 'num_leaves': 15,
"min_data_in_leaf":1500
}
# https://lightgbm.readthedocs.io/en/latest/Parameters.html
ds_params = {
# 'categorical_feature' : [3, 4, 5, 7, list(data.columns).index("dist2firstvalueLeak"),],
}
lgbtrain = lgb.Dataset(X_train, label = y_train, weight=w_train, **ds_params)
lgbfulltrain = lgb.Dataset(X_full_train, label = y_full_train, weight=w_full_train, **ds_params)
lgbvalid = lgb.Dataset(X_val, label = y_val, weight=w_val, **ds_params)
lgbsubmis = lgb.Dataset(X_sub, label = y_sub, weight=w_sub, **ds_params)
num_round = 1000
lgb_model = lgb.train(params,
lgbtrain,
num_round,
valid_sets = [lgbtrain, lgbvalid],
valid_names = ['train', 'val'],
verbose_eval=5,
early_stopping_rounds=5,
feval = feval_lgbm,
# fobj = objective,
)
prediction = lgb_model.predict(X_sub, num_iteration=lgb_model.best_iteration).astype(int)
evaluate(prediction, y_sub, w_sub)
# retrain!
lgb_model_sub = lgb.train(params,
lgbfulltrain,
lgb_model.best_iteration,
valid_sets = [lgbfulltrain],
valid_names = ['train'],
verbose_eval=5,
early_stopping_rounds=None,
feval = feval_lgbm,
# fobj = objective,
)
prediction = lgb_model_sub.predict(X_sub, num_iteration=80).astype(int)
evaluate(prediction, y_sub, w_sub)
###Output
_____no_output_____
###Markdown
- CatBoost
###Code
from catboost import CatBoost, CatBoostRegressor, Pool
smthing =0
class feval_cat(object):
def get_final_error(self, error, weight):
# return error / (weight + 1e-38)
return error
def is_max_optimal(self):
return True
def evaluate(self, approxes, target, simulationPrice):
# global smthing
# smthing = [approxes, target, simulationPrice]
prediction = np.array(approxes[0]).astype(int)
target = np.array(target).astype(int)
simulationPrice = np.array(simulationPrice)
score = np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulationPrice)
# print('score', score)
# print(approxes, type(target), type(simulationPrice))
return score, 0
ds_params = {
# 'cat_features' : [8, 9, 10],
}
train_pool = Pool(X_train, label = y_train, weight = w_train, **ds_params)
trainfull_pool = Pool(X_full_train, label = y_full_train, weight = w_full_train, **ds_params)
val_pool = Pool(X_val, label = y_val, weight = w_val, **ds_params)
sub_pool = Pool(X_sub, label = y_sub, weight = w_sub, **ds_params)
model = CatBoostRegressor(
# iterations = 2,
depth=7,
learning_rate=0.1,
loss_function='MAE',
early_stopping_rounds=5,
eval_metric = feval_cat(),
thread_count=-1,
)
model.fit(
train_pool,
eval_set=[train_pool, val_pool],
# logging_level='Verbose', # you can uncomment this for text output
);
prediction = model.predict(X_sub, ntree_end = model.best_iteration_).astype(int)
evaluate(prediction, y_sub, w_sub)
# retrain!
model.best_iteration_
{**model.get_params(), "iterations" : model.best_iteration_}
cat_sub = CatBoostRegressor(**{**model.get_params(), "iterations" : model.best_iteration_})
cat_sub.fit(
trainfull_pool,
eval_set=[trainfull_pool],
# logging_level='Verbose', # you can uncomment this for text output
);
prediction = cat_sub.predict(X_sub, ntree_end = cat_sub.best_iteration_).astype(int)
evaluate(prediction, y_sub, w_sub)
###Output
_____no_output_____
###Markdown
- Ensemble
###Code
cat_w = 1
lgb_w = 1
xgb_w = 1
ensemble = model.predict(X_sub, ntree_end = model.best_iteration_) * cat_w
ensemble += lgb_model.predict(X_sub, num_iteration=lgb_model.best_iteration) * lgb_w
ensemble += bst.predict(dsub, ntree_limit=bst.best_ntree_limit) * xgb_w
ensemble = ensemble / (cat_w + lgb_w + xgb_w)
evaluate(ensemble.astype(int), y_sub, w_sub)
###Output
_____no_output_____
###Markdown
- Linear Regression
###Code
from sklearn.linear_model import LinearRegression
# from sklearn.metrics import
lr = LinearRegression()
lr.fit(X_train, y_train, w_train)
print('train', evaluate(lr.predict(X_train), y_train, w_train))
print('test', evaluate(lr.predict(X_val), y_val, w_val))
print('sub', evaluate(lr.predict(X_sub), y_sub, w_sub))
###Output
train -25798195.590995364
test -2082818.3608716822
sub -2279245.550001612
###Markdown
###Code
# fazer feature que pega o percentil de quando o item deu de dinheiro dentro de uma categoria
# fazer features que pega dist de atual até o pico mais alto
# dist do maior pico pro segundo
# min(dist(terceiro, primeiro), dist(terceiro, segundo))
###Output
_____no_output_____ |
MyWays_Assignment/Sentence_Clustering_code.ipynb | ###Markdown
###Code
###Output
_____no_output_____
###Markdown
myWay Assignment
###Code
###Output
_____no_output_____
###Markdown
import libraries to access the data from kaggle
###Code
from google.colab import files
files.upload()
!pip install -q kaggle
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
#Downlaod dataset
!kaggle datasets download -d paultimothymooney/mueller-report
!unzip mueller-report.zip
###Output
Archive: mueller-report.zip
inflating: mueller_report.csv
inflating: muellerreport.pdf
###Markdown
Read CSV file
###Code
import pandas as pd
data = pd.read_csv('mueller_report.csv')
print(data.shape)
data.head()
df = data.copy()
df[df['page']==1]
app = []
for i in range(len(df[df['page']==2])):
app+=[list(df[df['page']==2]['text'])[i]]
app
print('total unique pages OR total number of pages : ' , data['page'].nunique())
data['page'].value_counts()
data_page= data.groupby('page')['text'].apply(lambda text: ''.join(text.to_string(index=False))).str.replace('(\\n)', '').reset_index()
print(data_page.shape)
data_page.head()
# from nltk.tokenize import sent_tokenize
# text = data_page['text'][0]
# print(len(sent_tokenize(text)), sent_tokenize(text))
data_page['text'][0].strip()
# Transform string data and remove punctuation
import string
punc = string.punctuation
print(punc)
import re
match ='''!"#$%&'()*+-/;<=>?@[\]^_`{|}~'''
print(match)
data_page['text'] = data_page.text.apply(lambda x: x.lower()) # make all the character lower form
data_page['text'] = data_page.text.apply(lambda x: ''.join([c for c in x if c not in punc]))
data_page['text'][0].strip()
#stopwords and tokenizations
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
#making the stopword set from basic english and the given list of stopwords'
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('punkt')
stopset = set(w.upper() for w in stopwords.words('english'))
#adding more stopwords from text file of stopwords
import glob
path = "StopWords*.txt" # Additional stopwords
glob.glob(path)
for filename in glob.glob(path):
with open(filename, 'r') as f:
text = f.read()
text = re.sub(r"\s+\|\s+[\w]*" , "", text)
stopset.update(text.upper().split())
data_page['text'] = data_page.text.apply(lambda x: word_tokenize(x))
data_page['text'] = data_page.text.apply(lambda x: [w for w in x if w not in stopset])
data_page['text'] = data_page.text.apply(lambda x: [w for w in x if(len(str(w))>3)])
data_page['text'] = data_page.text.apply(lambda x: ' '.join(x))
data_page['text'][0]
import re
def clean_text(text):
text = re.sub(r"\n", " ", text)
text = re.sub(r"nan", " ", text)
text = re.sub(r"name", " ", text)
text = re.sub(r"dtype", " ", text)
text = re.sub(r"object", " ", text)
text = re.sub(r"i'm", "i am ", text)
text = re.sub(r"\'re", " are ", text)
text = re.sub(r"\'d", " would ", text)
text = re.sub(r"\'ll", " will ", text)
text = re.sub(r"\'scuse", " excuse ", text)
text = re.sub('\W', ' ', text)
text = re.sub('\s+', ' ', text)
text = text.strip(' ')
return text
data_page['text'] = data_page['text'].map(lambda com : clean_text(com))
data_page['text'][0]
data_page['split'] = data_page['text'].map(lambda com : com.split())
for i in range(2):
print(data_page['split'][i])
from gensim.models import Word2Vec
sentences = list(data_page['split'])
m = Word2Vec(sentences , size=50 , min_count = 1 , sg = 1)
def vectorizer(sent , m):
vec = []
numw = 0
for w in sent:
try:
if(numw==0):
vec = m[w]
else:
vec = np.add(vec , m[w])
numw+=1
except:
pass
return np.asarray(vec)/numw
l =[]
for i in sentences:
l.append(vectorizer(i,m))
X = np.array(l)
import matplotlib.pyplot as plt
wcss = []
for i in range(1,10):
kmeans = KMeans(n_clusters = i , init ='k-means++' , random_state=42)
kmeans.fit(X)
wcss.append(kmeans.inertia_)
plt.plot(range(1,10),wcss)
plt.title('THE ELBOW CURVE')
plt.xlabel('NUmber of Clusters')
plt.ylabel('WCSS')
plt.grid('on')
plt.show()
###Output
_____no_output_____
###Markdown
we'll take k=3
###Code
n_clusters = 3
clf = KMeans(n_clusters = n_clusters,
max_iter=100 ,
init='k-means++',
n_init=1)
labels = clf.fit_predict(X)
print(len(set(labels)) , labels)
liss = []
for index , sentence in enumerate(sentences):
# print(index , str(labels[index]) , " : " + str(sentence))
liss.append(labels[index])
print(len(liss), type(liss[0]))
liss[:10]
data_page.drop('split' , axis=1 , inplace=True)
data_page.head()
data = pd.read_csv('mueller_report.csv')
data_page= data.groupby('page')['text'].apply(lambda text: ''.join(text.to_string(index=False))).str.replace('(\\n)', '').reset_index()
data_page['text'] = data_page['text'].map(lambda com : re.sub('\s+' ,' ', com.strip()))
data_page['Class']= liss
# print(data_page['text'].nunique())
data_page.head(30)
data_page.to_csv('Output.csv' , index =False)
# for i in range(len(data_page)):
# print(i, list(data_page['text'])[i])
# from sklearn.feature_extraction.text import TfidfVectorizer
# from sklearn.cluster import KMeans
# import numpy as np
# import pandas as pd
# vectorizer = TfidfVectorizer(stop_words='english')
# X = vectorizer.fit_transform(list(data_page['text']))
# true_k = 3
# model = KMeans(n_clusters=true_k, init='k-means++', max_iter=100, n_init=1)
# model.fit(X)
# order_centroids = model.cluster_centers_.argsort()[:, ::-1]
# terms = vectorizer.get_feature_names()
# for i in range(true_k):
# print("Cluster %d:" % i),
# for ind in order_centroids[i, :10]:
# print("%s" % terms[ind])
###Output
_____no_output_____ |
notebook/20161115_importData.ipynb | ###Markdown
Import Data* Walk data directories* load images (PIL)* resample, flatten, & metadata tag* Output
###Code
%matplotlib inline
from tqdm import tqdm_notebook as tqdm
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import glob
import PIL
import os
NX, NY = 128,128 # max width and height to resample
INPATTERN = '/Users/ajmendez/Downloads/dilbert2012/*.jpg'
OUTFORMAT = '/Users/ajmendez/data/dilbert/2012/{basename}'
from itertools import product
list(product(['a', 'b'], ['c', 'd', 'e']))
def convertWeekday(image, outpattern):
xmin,xmax = [5,199]
ymin,ymax = [3,197]
for j, xoffset in enumerate([0, 218, 436]):
img = image.crop((xoffset+xmin, ymin, xoffset+xmax-xmin, ymax-ymin))
img.thumbnail((NX,NY), PIL.Image.ANTIALIAS)
img.convert('L').save(outpattern.format(j=j))
def convertWeekend(image, outpattern):
xmin,xmax = [9,145]
ymin,ymax = [1,137]
xoffsets = [0, 160, 322, 486]
yoffsets = [0, 149]
# Remember people read left to right -- top to bottom
for j,(yoffset, xoffset) in enumerate(product(yoffsets, xoffsets)):
img = image.crop((xoffset + xmin,
yoffset + ymin,
xoffset + xmax - xmin,
yoffset + ymax - ymin))
img.thumbnail((NX,NY), PIL.Image.ANTIALIAS)
img.convert('L').save(outpattern.format(j=j))
for i,filename in enumerate(tqdm(sorted(glob.glob(INPATTERN)))):
# outpattern is used to output the frames
basename = os.path.basename(filename).replace('-colour.jpg', '.{j}.jpg')
dirname = os.path.dirname(OUTFORMAT)
outpattern = OUTFORMAT.format(basename=basename)
if os.path.exists(outpattern.format(j=1)):
continue
if not os.path.exists(dirname):
os.makedirs(dirname)
if os.path.getsize(filename) == 0:
print('Check on: {}'.format(filename))
continue
# img = plt.imread(filename)
with PIL.Image.open(filename) as img:
if abs(img.height-288) < 2:
# Sunday... FUNDAY
convertWeekend(img, outpattern)
elif abs(img.height-199) < 2:
# Weekday... workday
convertWeekday(img, outpattern)
else:
raise ValueError('Find out where you should be cropping: {}x{}'.format(img.height, img.width))
###Output
_____no_output_____
###Markdown
Figure out Crop Locations
###Code
filename = '/Users/ajmendez/Downloads/dilbert2012/dilbert20120102-colour.jpg'
img = plt.imread(filename)
img.shape
plt.figure(figsize=(12,12))
# plt.imshow(img[3:196,3:201,:], interpolation='nearest')
xoffset = 0
xoffset = 218
# xoffset = 436
xmin,xmax = [5,199]
ymin,ymax = [2,196]
plt.imshow(img[:,xoffset:xoffset+210,:], interpolation='nearest', alpha=0.5)
print(xmax-xmin, ymax-ymin)
plt.axvline(xmin, lw=0.5, color='r')
plt.axvline(xmax, lw=0.5, color='r')
plt.axhline(ymin, lw=0.5, color='r')
plt.axhline(ymax, lw=0.5, color='r')
filename = '/Users/ajmendez/Downloads/dilbert2012/dilbert20120101-colour.jpg'
img = plt.imread(filename)
img.shape
plt.figure(figsize=(12,12))
# plt.imshow(img[3:196,3:201,:], interpolation='nearest')
xoffset = 0
xoffset = 160
xoffset = 322
xoffset = 486
yoffset = 0
# yoffset = 149
xmin,xmax = [9,145]
ymin,ymax = [1,137]
plt.imshow(img[yoffset:yoffset+200,xoffset:xoffset+210,:], interpolation='nearest', alpha=0.5)
print(xmax-xmin, ymax-ymin)
plt.axvline(xmin, lw=0.5, color='r')
plt.axvline(xmax, lw=0.5, color='r')
plt.axhline(ymin, lw=0.5, color='r')
plt.axhline(ymax, lw=0.5, color='r')
###Output
_____no_output_____ |
qualtrics_subscale.ipynb | ###Markdown
Demographics
###Code
frames['demog'].head()
demog_subscales = frames['demog'].iloc[:,[0,10,11,12,13,15,17]]
demog_subscales.head()
###Output
_____no_output_____
###Markdown
Sleep and Stress relative scoring
###Code
demog_subscales.iloc[:,2] = demog_subscales.apply(numeralize_sleep,axis=1)
demog_subscales.head()
###Output
_____no_output_____
###Markdown
Sleep raw text response caveatsAbove, we're trying to convert text responses into parsable float values. Most are of predictable form, with participants typing out a 'range' of hours, but few include special characters and other text, which makes the otherwise workable method of splitting-and-averaging problematic.
###Code
demog_subscales['Relative Stress'] = demog_subscales.apply(score_relative_stress,axis=1)
demog_subscales['Relative Sleep'] = demog_subscales.apply(score_relative_sleep,axis=1)
demog_subscales.head()
###Output
C:\Users\ia406477\AppData\Local\Continuum\anaconda2\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
###Markdown
ZIP & SES
###Code
demog_subscales['ZIP'] = demog_subscales.iloc[:,6].apply(str)
demog_subscales.head()
###Output
C:\Users\ia406477\AppData\Local\Continuum\anaconda2\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
###Markdown
Financial Difficulty
###Code
demog_subscales['Financial Difficulty'] = demog_subscales.iloc[:,5].notna().apply(int)
demog_subscales.head()
###Output
C:\Users\ia406477\AppData\Local\Continuum\anaconda2\lib\site-packages\ipykernel_launcher.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""Entry point for launching an IPython kernel.
###Markdown
PANAS
###Code
frames['panas'].head()
pas_columns = [1,3,5,9,10,12,14,16,17,19]
nas_columns = [2,4,6,7,8,11,13,15,18,20]
frames['panas']['PAS'] = frames['panas'].iloc[:,pas_columns].sum(1)
frames['panas']['NAS'] = frames['panas'].iloc[:,nas_columns].sum(1)
frames['panas'].head()
###Output
_____no_output_____
###Markdown
BISBAS
###Code
frames['bisbas'].head()
# Reverse-score all data
bisbas_reversed = 5 - frames['bisbas']
# Revert select columns (SSID, #2, #22) to original value
bisbas_reversed.iloc[:,[0,2,22]] = 5 - bisbas_reversed.iloc[:,[0,2,22]]
bisbas_reversed.head()
bas_drive_columns = [3,9,12,21]
bas_funseeking_columns = [5,10,15,20]
bas_rewardresp_columns = [4,7,14,18,23]
bis_columns = [2,8,13,16,19,22,24]
filler_columns = [1,6,11,17]
frames['bisbas']['BAS Drive'] = bisbas_reversed.iloc[:,bas_drive_columns].sum(1)
frames['bisbas']['BAS Fun Seeking'] = bisbas_reversed.iloc[:,bas_funseeking_columns].sum(1)
frames['bisbas']['BAS Reward Responsiveness'] = bisbas_reversed.iloc[:,bas_rewardresp_columns].sum(1)
frames['bisbas']['BIS'] = bisbas_reversed.iloc[:,bis_columns].sum(1)
frames['bisbas'].head()
###Output
_____no_output_____
###Markdown
GDMS
###Code
frames['gdms'].head()
intuitive_dms_cols = [1,7,14,20,24]
dependent_dms_cols = [2,10,15,19,25]
rational_dms_cols = [3,6,11,17,21]
spontaneous_dms_cols = [4,8,13,18,23]
avoidant_dms_cols = [5,9,12,16,22]
frames['gdms']['Intuitive DMS'] = frames['gdms'].iloc[:,intuitive_dms_cols].sum(1)
frames['gdms']['Dependent DMS'] = frames['gdms'].iloc[:,dependent_dms_cols].sum(1)
frames['gdms']['Rational DMS'] = frames['gdms'].iloc[:,rational_dms_cols].sum(1)
frames['gdms']['Spontaneous DMS'] = frames['gdms'].iloc[:,spontaneous_dms_cols].sum(1)
frames['gdms']['Avoidant DMS'] = frames['gdms'].iloc[:,avoidant_dms_cols].sum(1)
frames['gdms'].head()
###Output
_____no_output_____
###Markdown
Financial Literacy
###Code
frames['financ'].head()
frames['financ']['Financial Literacy'] = frames['financ'].apply(score_financial_literacy,axis=1)
frames['financ'].head()
###Output
_____no_output_____
###Markdown
Final Subscales
###Code
demog_subscales = demog_subscales[['Participant Number:','ZIP','Relative Sleep','Relative Stress','Financial Difficulty']]
demog_subscales.head()
panas_subscales = frames['panas'][['Participant Number:','PAS','NAS']]
panas_subscales.head()
bisbas_subscales = frames['bisbas'][['Participant Number:','BAS Drive','BAS Fun Seeking','BAS Reward Responsiveness','BIS']]
bisbas_subscales.head()
gdms_subscales = frames['gdms'][['Participant Number:','Intuitive DMS','Rational DMS','Dependent DMS','Spontaneous DMS','Avoidant DMS']]
gdms_subscales.head()
financ_subscale = frames['financ'][['Participant Number:','Financial Literacy']]
financ_subscale.head()
final_output = pd.merge(demog_subscales,panas_subscales,'outer')
final_output = final_output.merge(bisbas_subscales,'outer')
final_output = final_output.merge(gdms_subscales,'outer')
final_output = final_output.merge(financ_subscale,'outer')
final_output = final_output.rename(columns={'Participant Number:':'ssid'})
final_output.head()
fpath = os.path.join(output_dir,'all_subscales.csv')
final_output.to_csv(fpath,index=False)
###Output
_____no_output_____ |
02-python-201/labs/APIs/05_Web_Crawling.ipynb | ###Markdown
Web Crawling- scrapy- BeautifulSoup- selenium...
###Code
# importamos la librería
#!pip install scrapy
###Output
Collecting scrapy
Downloading Scrapy-2.4.1-py2.py3-none-any.whl (239 kB)
[K |████████████████████████████████| 239 kB 8.5 MB/s eta 0:00:01
[?25hRequirement already satisfied: cryptography>=2.0 in /opt/conda/lib/python3.8/site-packages (from scrapy) (3.2.1)
Requirement already satisfied: lxml>=3.5.0 in /opt/conda/lib/python3.8/site-packages (from scrapy) (4.6.2)
Requirement already satisfied: pyOpenSSL>=16.2.0 in /opt/conda/lib/python3.8/site-packages (from scrapy) (20.0.0)
Requirement already satisfied: six>=1.4.1 in /opt/conda/lib/python3.8/site-packages (from cryptography>=2.0->scrapy) (1.15.0)
Requirement already satisfied: cffi!=1.11.3,>=1.8 in /opt/conda/lib/python3.8/site-packages (from cryptography>=2.0->scrapy) (1.14.4)
Requirement already satisfied: pycparser in /opt/conda/lib/python3.8/site-packages (from cffi!=1.11.3,>=1.8->cryptography>=2.0->scrapy) (2.20)
Collecting cssselect>=0.9.1
Downloading cssselect-1.1.0-py2.py3-none-any.whl (16 kB)
Collecting itemadapter>=0.1.0
Downloading itemadapter-0.2.0-py3-none-any.whl (9.3 kB)
Collecting itemloaders>=1.0.1
Downloading itemloaders-1.0.4-py3-none-any.whl (11 kB)
Collecting jmespath>=0.9.5
Downloading jmespath-0.10.0-py2.py3-none-any.whl (24 kB)
Collecting parsel>=1.5.0
Downloading parsel-1.6.0-py2.py3-none-any.whl (13 kB)
Requirement already satisfied: lxml>=3.5.0 in /opt/conda/lib/python3.8/site-packages (from scrapy) (4.6.2)
Requirement already satisfied: six>=1.4.1 in /opt/conda/lib/python3.8/site-packages (from cryptography>=2.0->scrapy) (1.15.0)
Collecting protego>=0.1.15
Downloading Protego-0.1.16.tar.gz (3.2 MB)
[K |████████████████████████████████| 3.2 MB 7.7 MB/s eta 0:00:01
[?25hRequirement already satisfied: six>=1.4.1 in /opt/conda/lib/python3.8/site-packages (from cryptography>=2.0->scrapy) (1.15.0)
Collecting PyDispatcher>=2.0.5
Downloading PyDispatcher-2.0.5.zip (47 kB)
[K |████████████████████████████████| 47 kB 322 kB/s eta 0:00:01
[?25hRequirement already satisfied: six>=1.4.1 in /opt/conda/lib/python3.8/site-packages (from cryptography>=2.0->scrapy) (1.15.0)
Requirement already satisfied: cryptography>=2.0 in /opt/conda/lib/python3.8/site-packages (from scrapy) (3.2.1)
Collecting queuelib>=1.4.2
Downloading queuelib-1.5.0-py2.py3-none-any.whl (13 kB)
Collecting service-identity>=16.0.0
Downloading service_identity-18.1.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: attrs>=16.0.0 in /opt/conda/lib/python3.8/site-packages (from service-identity>=16.0.0->scrapy) (20.3.0)
Requirement already satisfied: cryptography>=2.0 in /opt/conda/lib/python3.8/site-packages (from scrapy) (3.2.1)
Collecting pyasn1
Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
[K |████████████████████████████████| 77 kB 456 kB/s eta 0:00:011
[?25hCollecting pyasn1-modules
Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
[K |████████████████████████████████| 155 kB 8.5 MB/s eta 0:00:01
[?25hCollecting Twisted>=17.9.0
Downloading Twisted-20.3.0.tar.bz2 (3.1 MB)
[K |████████████████████████████████| 3.1 MB 5.0 MB/s eta 0:00:01
[?25hRequirement already satisfied: attrs>=16.0.0 in /opt/conda/lib/python3.8/site-packages (from service-identity>=16.0.0->scrapy) (20.3.0)
Collecting Automat>=0.3.0
Downloading Automat-20.2.0-py2.py3-none-any.whl (31 kB)
Requirement already satisfied: attrs>=16.0.0 in /opt/conda/lib/python3.8/site-packages (from service-identity>=16.0.0->scrapy) (20.3.0)
Requirement already satisfied: six>=1.4.1 in /opt/conda/lib/python3.8/site-packages (from cryptography>=2.0->scrapy) (1.15.0)
Collecting constantly>=15.1
Downloading constantly-15.1.0-py2.py3-none-any.whl (7.9 kB)
Collecting hyperlink>=17.1.1
Downloading hyperlink-20.0.1-py2.py3-none-any.whl (48 kB)
[K |████████████████████████████████| 48 kB 613 kB/s eta 0:00:01
[?25hRequirement already satisfied: idna>=2.5 in /opt/conda/lib/python3.8/site-packages (from hyperlink>=17.1.1->Twisted>=17.9.0->scrapy) (2.10)
Collecting incremental>=16.10.1
Using cached incremental-17.5.0-py2.py3-none-any.whl (16 kB)
Collecting PyHamcrest!=1.10.0,>=1.9.0
Downloading PyHamcrest-2.0.2-py3-none-any.whl (52 kB)
[K |████████████████████████████████| 52 kB 98 kB/s s eta 0:00:01
[?25hCollecting w3lib>=1.17.0
Downloading w3lib-1.22.0-py2.py3-none-any.whl (20 kB)
Requirement already satisfied: six>=1.4.1 in /opt/conda/lib/python3.8/site-packages (from cryptography>=2.0->scrapy) (1.15.0)
Collecting zope.interface>=4.1.3
Downloading zope.interface-5.2.0-cp38-cp38-manylinux2010_x86_64.whl (244 kB)
[K |████████████████████████████████| 244 kB 13.6 MB/s eta 0:00:01
[?25hRequirement already satisfied: setuptools in /opt/conda/lib/python3.8/site-packages (from zope.interface>=4.1.3->scrapy) (49.6.0.post20201009)
Building wheels for collected packages: protego, PyDispatcher, Twisted
Building wheel for protego (setup.py) ... [?25ldone
[?25h Created wheel for protego: filename=Protego-0.1.16-py3-none-any.whl size=7765 sha256=1978c599941e18ba76fc6d4a50413e6db8623942afd740dba458c729c6fbc3f1
Stored in directory: /home/jovyan/.cache/pip/wheels/91/64/36/bd0d11306cb22a78c7f53d603c7eb74ebb6c211703bc40b686
Building wheel for PyDispatcher (setup.py) ... [?25ldone
[?25h Created wheel for PyDispatcher: filename=PyDispatcher-2.0.5-py3-none-any.whl size=11517 sha256=cd4f719ab030530bf64d4ffa10ae601bb7d92cfcce96fe2b7a7d83ab9755e959
Stored in directory: /home/jovyan/.cache/pip/wheels/3c/31/7f/d7d7b5f0b9bad841ed856138ff0c5ee2bf2e04dbeb413097c8
Building wheel for Twisted (setup.py) ... [?25ldone
[?25h Created wheel for Twisted: filename=Twisted-20.3.0-cp38-cp38-linux_x86_64.whl size=3085546 sha256=fbf714326081c9e490a8508f38833af987f2bec662190362ab91807742e3cee4
Stored in directory: /home/jovyan/.cache/pip/wheels/f2/36/1b/99fe6d339e1559e421556c69ad7bc8c869145e86a756c403f4
Successfully built protego PyDispatcher Twisted
Installing collected packages: w3lib, pyasn1, cssselect, zope.interface, PyHamcrest, pyasn1-modules, parsel, jmespath, itemadapter, incremental, hyperlink, constantly, Automat, Twisted, service-identity, queuelib, PyDispatcher, protego, itemloaders, scrapy
Successfully installed Automat-20.2.0 PyDispatcher-2.0.5 PyHamcrest-2.0.2 Twisted-20.3.0 constantly-15.1.0 cssselect-1.1.0 hyperlink-20.0.1 incremental-17.5.0 itemadapter-0.2.0 itemloaders-1.0.4 jmespath-0.10.0 parsel-1.6.0 protego-0.1.16 pyasn1-0.4.8 pyasn1-modules-0.2.8 queuelib-1.5.0 scrapy-2.4.1 service-identity-18.1.0 w3lib-1.22.0 zope.interface-5.2.0
###Markdown
``` title="Diseño y Creación Digitales" href="/es/grados/diseño-creacion-digital/presentacion" class="card-absolute-link"> ``` ***``` title="Multimedia" href="/es/grados/multimedia/presentacion" class="card-absolute-link"> ``` Existe una forma de extraer estos contenidos a través de **XPATH** . Entonces si quiero sacar todos los elementos de la `class` necesitamos a través de esta sintaxis:```//a[@class="card-absolute-link"]/@title```
###Code
# importamos scrapy
import scrapy
from scrapy.crawler import CrawlerProcess
# creamos primero el bicho "araña"
class uoc_spider(scrapy.Spider):
# Asignamos un nombre al bicho
name = "uoc_spider"
# Indicamos la url que queremos analizar
start_urls = [
"https://estudios.uoc.edu/es/grados"
]
# Definimos el analizador
def parse(self, response):
# Extraemos el título del grado
for grado in response.xpath('//a[@class="card-absolute-link"]'):
yield {
'title': grado.extract()
}
###Output
_____no_output_____
###Markdown
La idea es ir saltando de USER_AGENT con diferentes elementos en Lista `[AGENT]`
###Code
# una vez definida la araña con la class ahora lanzamos
if __name__ == "__main__":
# Creamos un crawler.
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
'DOWNLOAD_HANDLERS': {'s3': None},
'LOG_ENABLED': True
})
# Inicializamos el crawler con nuestra araña.
process.crawl(uoc_spider)
# Lanzamos la araña.
process.start()
###Output
2020-12-18 19:55:18 [scrapy.utils.log] INFO: Scrapy 2.4.1 started (bot: scrapybot)
2020-12-18 19:55:18 [scrapy.utils.log] INFO: Versions: lxml 4.6.2.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.6 | packaged by conda-forge | (default, Nov 27 2020, 19:31:52) - [GCC 9.3.0], pyOpenSSL 20.0.0 (OpenSSL 1.1.1h 22 Sep 2020), cryptography 3.2.1, Platform Linux-4.19.76-linuxkit-x86_64-with-glibc2.10
2020-12-18 19:55:18 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2020-12-18 19:55:18 [scrapy.crawler] INFO: Overridden settings:
{'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}
2020-12-18 19:55:18 [scrapy.extensions.telnet] INFO: Telnet Password: b1b718a16919d19d
2020-12-18 19:55:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2020-12-18 19:55:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-12-18 19:55:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-12-18 19:55:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-12-18 19:55:18 [scrapy.core.engine] INFO: Spider opened
2020-12-18 19:55:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-12-18 19:55:18 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-12-18 19:55:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://estudios.uoc.edu/es/grados> (referer: None)
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Ciencias Sociales" href="/es/grados/ciencias-sociales/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Ciències Socials", "PC02145", "20.42", "Graus", "Arts i Humanitats", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Historia, Geografia e Historia del Arte (interuniversitario: UOC, UdL)" href="/es/grados/historia-geografia-arte/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Història, Geografia i Història de l%27Art (interuniversitari: UOC, UdL)", "PC02094", "20.42", "Graus", "Arts i Humanitats", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Humanidades" href="/es/grados/humanidades/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Humanitats", "PC02147", "20.42", "Graus", "Arts i Humanitats", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Lengua y Literatura Catalanas" href="/es/grados/lengua-literatura-catalanas/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Llengua i Literatura Catalanes", "PC02135", "20.42", "Graus", "Arts i Humanitats", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Traducción, Interpretación y Lenguas Aplicadas (interuniversitario: UVic-UCC, UOC)" href="/es/grados/traduccion-interpretacion-lenguas-aplicadas/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Traducció, Interpretació i Llengües Aplicades (interuniversitari: UVic-UCC, UOC)", "PC02120", "51.21", "Graus", "Arts i Humanitats", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Antropología y Evolución Humana (interuniversitario: URV, UOC)" href="/es/grados/antropologia-evolucion-humana/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Antropologia i Evolució Humana (interuniversitari: URV, UOC)", "PC02115", "20.42", "Graus", "Arts i Humanitats", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Artes" href="/es/grados/artes/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Arts", "PC01461", "20.42", "Graus", "Arts i Humanitats", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Logopedia (interuniversitario: Uvic-UCC, UOC)" href="/es/grados/logopedia-uvic-ucc/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Logopèdia (interuniversitari: Uvic-UCC, UOC)", "PC02109", "72.0", "Graus", "Ciències de la Salut", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Comunicación" href="/es/grados/comunicacion/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Comunicació", "PC02090", "20.42", "Graus", "Comunicació i Informació", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Diseño y Creación Digitales" href="/es/grados/dise%C3%B1o-creacion-digital/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Disseny i Creació Digitals", "PC01819", "22.8", "Graus", "Comunicació i Informació", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Criminología" href="/es/grados/criminologia/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Criminologia", "PC02100", "20.42", "Graus", "Dret i Ciències Polítiques", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Gestión y Administración Pública (interuniversitario: UOC, UB)" href="/es/grados/gestion-administracion-publica/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Gestió i Administració Pública (interuniversitari: UOC, UB)", "PC02103", "20.42", "Graus", "Dret i Ciències Polítiques", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Derecho" href="/es/grados/derecho/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Dret", "PC02101", "20.42", "Graus", "Dret i Ciències Polítiques", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Doble titulación de Derecho y de Administración y Dirección de Empresas" href="/es/grados/derecho-administracion-direccion-empresas-doble-titulacion/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Doble titulació de Dret i d%27Administració i Direcció d%27Empreses", "PC04309", "0.0", "Graus", "Dret i Ciències Polítiques", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Relaciones Internacionales" href="/es/grados/relaciones-internacionales/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Relacions Internacionals", "PC02137", "20.42", "Graus", "Dret i Ciències Polítiques", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Multimedia" href="/es/grados/multimedia/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Multimèdia", "PC02136", "22.8", "Graus", "Disseny, Creació i Multimèdia", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Artes" href="/es/grados/artes/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Arts", "PC01461", "20.42", "Graus", "Disseny, Creació i Multimèdia", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Diseño y Creación Digitales" href="/es/grados/dise%C3%B1o-creacion-digital/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Disseny i Creació Digitals", "PC01819", "22.8", "Graus", "Disseny, Creació i Multimèdia", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Administración y Dirección de Empresas" href="/es/grados/administracion-direccion-empresas/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Administració i Direcció d%27Empreses", "PC02089", "20.42", "Graus", "Economia i Empresa", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Doble titulación de Administración y Dirección de Empresas y de Turismo" href="/es/grados/administracion-direccion-empresas-turismo-doble-titulacion/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Doble titulació d%27Administració i Direcció d%27Empreses i de Turisme", "PC04308", "0.0", "Graus", "Economia i Empresa", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Economía" href="/es/grados/economia/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Economia", "PC02128", "20.42", "Graus", "Economia i Empresa", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Marketing e investigación de mercados" href="/es/grados/marketing-investigacion-mercados/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Màrqueting i Investigació de Mercats", "PC02130", "20.42", "Graus", "Economia i Empresa", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Relaciones Laborales y Ocupación" href="/es/grados/relaciones-laborales-ocupacion/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Relacions Laborals i Ocupació", "PC02132", "20.42", "Graus", "Economia i Empresa", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Ciencia de Datos Aplicada /Applied Data Science" href="/es/grados/data-science/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Ciència de Dades Aplicada /Applied Data Science", "PC01367", "22.8", "Graus", "Informàtica, Multimèdia i Telecomunicació", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Doble titulación de Ingeniería Informática y de Administración y Dirección de Empresas" href="/es/grados/informatica-ade-doble-titulacion/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Doble titulació d%27Enginyeria Informàtica i d%27Administració i Direcció d%27Empreses", "PC04312", "0.0", "Graus", "Informàtica, Multimèdia i Telecomunicació", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Ingeniería Informática" href="/es/grados/ingenieria-informatica/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Enginyeria Informàtica", "PC02093", "22.8", "Graus", "Informàtica, Multimèdia i Telecomunicació", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Ingeniería de Tecnologías y Servicios de Telecomunicación" href="/es/grados/tecnologias-telecomunicacion/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Enginyeria de Tecnologies i Serveis de Telecomunicació", "PC02117", "22.8", "Graus", "Informàtica, Multimèdia i Telecomunicació", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Multimedia" href="/es/grados/multimedia/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Multimèdia", "PC02136", "22.8", "Graus", "Informàtica, Multimèdia i Telecomunicació", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Educación Social" href="/es/grados/educacion-social/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Educació Social", "PC02113", "20.42", "Graus", "Psicologia i Ciències de l\' educaci>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Psicología" href="/es/grados/psicologia/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Psicologia", "PC02111", "20.42", "Graus", "Psicologia i Ciències de l\' educaci>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.scraper] DEBUG: Scraped from <200 https://estudios.uoc.edu/es/grados>
{'title': '<a title="Turismo" href="/es/grados/turismo/presentacion" class="card-absolute-link" onclick=\'_euocuc402llistatarea_WAR_euocentrega3portlet_INSTANCE_kDk3_enviarPushClick("Turisme", "PC02104", "20.42", "Graus", "Turisme", "Castellà");\'>\xa0</a>'}
2020-12-18 19:55:20 [scrapy.core.engine] INFO: Closing spider (finished)
2020-12-18 19:55:20 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 245,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 78885,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'elapsed_time_seconds': 2.12663,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 12, 18, 19, 55, 20, 920983),
'item_scraped_count': 31,
'log_count/DEBUG': 32,
'log_count/INFO': 10,
'memusage/max': 76251136,
'memusage/startup': 76251136,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2020, 12, 18, 19, 55, 18, 794353)}
2020-12-18 19:55:20 [scrapy.core.engine] INFO: Spider closed (finished)
|
nbs/00_Recursion.ipynb | ###Markdown
Recursion Recursion - Part I Factorial using Recursion
###Code
#exports
def factorial(n):
"""
Get the factorial of the number `n` using recursion.
Eg: factorial(5)
120
"""
if n == 0 or n == 1:
return 1
else:
return n * factorial(n-1)
# Test the factorial function
test_eq(factorial(5), 120)
# Time
%timeit factorial(1000)
%memit factorial(1000)
###Output
peak memory: 100.48 MiB, increment: 0.00 MiB
###Markdown
Sun of `n` Natural Numbers using Recursion
###Code
#exports
def sum_n(n):
"""
Sum of first n natural numbers
Eg: sum_n(10)
55
"""
if n == 0:
return 0
else:
return n + sum_n(n-1)
# Test the sum_n function
test_eq(sum_n(10), 55)
test_eq(sum_n(0), 0)
###Output
_____no_output_____
###Markdown
Power of `x` raised to `n` using Recursion
###Code
#exports
def pow_xn(x, n):
"""
Get the power of x to the nth power
Eg: pow_xn(2, 3)
8
"""
if n == 0:
return 1
else:
return x * pow_xn(x, n-1)
# Test the pow_xn function
test_eq(pow_xn(2, 3), 8)
test_eq(pow_xn(2, 0), 1)
test_eq(pow_xn(0, 0), 1)
test_eq(pow_xn(0, 12), 0)
test_eq(pow_xn(3, 4), 81)
###Output
_____no_output_____
###Markdown
Print `1 to n` and `n to 1` using Recursion
###Code
#exports
def print_1_to_n(n):
if n == 0:
return
print_1_to_n(n-1)
print(n, end=' ')
print_1_to_n(9)
print_1_to_n(0)
#exports
def print_n_to_1(n):
if n == 0:
return
print(n, end=' ')
print_n_to_1(n-1)
print_n_to_1(9)
###Output
9 8 7 6 5 4 3 2 1
###Markdown
Fibonacci Number using Recursion
###Code
#exports
def fib(n):
"""
Get the nth Fibonacci number
"""
if n == 1 or n == 2:
return 1
fib_n_1 = fib(n-1)
fib_n_2 = fib(n-2)
return fib(n-1) + fib(n-2)
test_eq(fib(1), 1)
test_eq(fib(2), 1)
test_eq(fib(10), 55)
###Output
_____no_output_____
###Markdown
Check if List is Sorted using Recursion
###Code
#exports
def isListSorted(array:list):
"""
Check if the List is sorted using recursion
"""
l = len(array)
if l == 0 or l == 1:
return True
if array[0] > array[1]:
return False
return isListSorted(array[1:])
a = list(range(1, 1000))
test_eq(isListSorted(a), True)
b = list(range(1, 100000))
b[444] = -1
test_eq(isListSorted(b), False)
c = []
test_eq(isListSorted(c), True)
%timeit isListSorted(b)
%memit isListSorted(b)
#exports
def isSortedBetter(array:list, n:int=0):
"""
Check if Array is sorted using Recursion
without creating copy of array.
"""
if len(array) == n or len(array)-1 == n:
return True
if array[n] > array[n+1]:
return False
return isSortedBetter(array, n+1)
a = list(range(1, 1000))
test_eq(isSortedBetter(a), True)
b = list(range(1, 100000))
b[444] = -1
test_eq(isSortedBetter(b), False)
c = []
test_eq(isSortedBetter(c), True)
%timeit isSortedBetter(b)
%memit isSortedBetter(b)
###Output
peak memory: 101.51 MiB, increment: 0.00 MiB
###Markdown
Sum of Array using Recursion
###Code
#exports
def ArraySum(array:list, n:int=0):
"""
Sum of all the elements in the List using Recursion
"""
if n == len(array):
return 0
return array[n] + ArraySum(array,n+1)
test_eq(ArraySum([7,4,11,9,-3], 0), 28)
test_eq(ArraySum([], 0), 0)
###Output
_____no_output_____
###Markdown
Check element in Array using Recursion
###Code
#exports
def checkElementInArray(array:list, x:int, n:int=0):
"""
Check if the given number `x` is present in the
array or not recursively
"""
if n == len(array):
return -1
if array[n] == x:
return n
return checkElementInArray(array,x,n+1)
test_eq( checkElementInArray([1,2,3,4,5,4,3,5,7,9,6,5,4,3,5,6,7,1,7,6,9,8], 8), 21)
test_eq( checkElementInArray([1,2,3,4,5,4,3,5,7,9,6,5,4,3,5,6,7,1,7,6,9,8], 12), -1)
%timeit checkElementInArray(range(1000), 999)
%memit checkElementInArray(range(1000), 999)
#exports
def checkElementInArrayCopy(array:list, x:int):
"""
Check if element `x` is present in the array
using Recursion and making copy in recursion
call.
"""
if len(array) == 0:
return -1
if array[0] == x:
return 0
smallList = checkElementInArrayCopy(array[1:], x)
if smallList == -1:
return -1
else:
return smallList + 1
test_eq( checkElementInArrayCopy([1,2,3,4,5,4,3,5,7,9,6,5,4,3,5,6,7,1,7,6,9,8], 8), 21)
test_eq( checkElementInArrayCopy([1,2,3,4,5,4,3,5,7,9,6,5,4,3,5,6,7,1,7,6,9,8], 12), -1)
%timeit checkElementInArrayCopy(range(1000), 999)
%memit checkElementInArrayCopy(range(1000), 999)
###Output
peak memory: 103.04 MiB, increment: 0.00 MiB
###Markdown
Get Index of Element in Array Present at Last.
###Code
#exports
def checkElementInArrayLast(array:list, x:int, n:int=0):
"""
Check for the element `x` in the list `array`. If
there are multiple `x` present, give index of last
occurence using Recursion and starting from index=0
"""
if len(array) == n:
return -1
lastIndex = checkElementInArrayLast(array, x, n+1)
if lastIndex == -1:
if array[n] == x:
return n
else:
return -1
else:
return lastIndex
arr = list(range(1000))
arr[10] = 999
arr[456] = 999
test_eq(checkElementInArrayLast(arr, 999), 999)
arr[999] = 10
test_eq(checkElementInArrayLast(arr, 999), 456)
###Output
_____no_output_____
###Markdown
Recursion - Part II Replace character `` with `` in given string
###Code
#exports
def replaceCharAwithCharB(string:str, charA:str, charB:str, n:int = 0):
"""
Given a `string` replace all the `charA` present with the
`charB`
"""
if len(string) == 0 or len(string) == n:
return string
if string[n] == charA:
string = string[:n] + charB + string[n+1:]
return replaceCharAwithCharB(string, charA, charB, n+1)
#exports
def replaceCharAwithCharB_v2(string:str, charA:str, charB:str):
"""
Given a `string` replace all the `charA` present with the
`charB`
"""
if len(string) == 0:
return string
smallerOutput = replaceCharAwithCharB(string[1:], charA, charB)
if string[0] == charA:
string = charB + smallerOutput
else:
string = string[0] + smallerOutput
return string
replaceCharAwithCharB('tatia', 'a', 'b')
###Output
_____no_output_____ |
IB-commissions.ipynb | ###Markdown
If you found any errors, please, share them [email protected]
###Code
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Fixed A fixed amount per share or a set percent of trade value, and includes all IB commissions, exchange and most regulatory fees with the exception of the transaction fees, which are passed through on all stock sales
###Code
def get_commission_fixed(n, p, sale=False):
min_per_order = 1
max_per_order = 0.01 * n * p
commission = n * 0.005
commission = np.maximum(commission, min_per_order)
commission = np.minimum(commission, max_per_order)
trans_fee = 0.0000221 * n * p * sale
finra_activity = np.minimum(0.000119 * n, 5.95)
return commission + trans_fee + finra_activity
###Output
_____no_output_____
###Markdown
Если рассчитанная максимальная плата за ордер будет меньше установленной минимальной, то будет взиматься максимальная. Например, за сделку на покупку 10 акций по цене \\$0.20 будет взиматься $0.02 (10 акций * 0.005 за акцию = 0.05 комиссии, минимум 1.00 за ордер, установленный лимит в 10 акций * 0.20 * 1% = 0.02). 100 акций по цене USD 25 за акцию = USD 1.001,000 акций по цене USD 25 за акцию = USD 5.001,000 акций по цене USD 0.25 за акцию = USD 2.50
###Code
# check against examples
print(round(get_commission_fixed(10, .20), 10))
print(get_commission_fixed(100, 25))
print(get_commission_fixed(1000, 25))
print(get_commission_fixed(1000, .25))
###Output
0.02
1.0
5.0
2.5
###Markdown
Computations do not include- Advisors fees and specialist fees (assuming you are not subscribed) - VAT cost (same for Fixed and Tiered)- Partial execution mechanics (it is not clear, when and how this occurs) - Account for the Commission-free ETFs - Modified order mechanics Unclear: - footnote 1 in the [Fixed pricing](https://www.interactivebrokers.co.uk/en/index.php?f=39753&p=stocks1) states: `Plus the applicable exchange, ECN and/or specialist fees based on execution venue.`, but at the very top of the pricing page, as well as in the notes they say `All exchange fees are included.`. Tiered Assmptions: - volume per month < 300 000 shares- orders are executed with the priciest Remove Liquidity strategy (i.e. against an existing bid or offer on an order book)- Island ECN is used for computation of exchange fees (one of the first ECN's and quite a popular one)- `IB Tiered Commissions` in the `Transaction Fees` section of the [Tiered pricing page](https://www.interactivebrokers.co.uk/en/index.php?f=39753&p=stocks2) do not include any other exchange, regulatory or clearing fees
###Code
def _compute_ladder_commission(n, verbose=False):
per_share_usd_cutoffs = [0.0035, 0.002, 0.0015, 0.001, 0.0005]
n_cutoffs = [300_000, 3_000_000, 20_000_000, 100_000_000, np.inf]
costs = {}
n_processed = 0
for per_share_usd, cutoff in zip(per_share_usd_cutoffs, n_cutoffs):
n_per_tier = min(cutoff - n_processed, n - n_processed)
costs[cutoff] = n_per_tier * per_share_usd
n_processed += n_per_tier
total_cost = sum(costs.values())
if verbose:
return total_cost, costs
else:
return total_cost
def get_commission_tiered(n, p, sale=False, verbose=False, add_liquid=False):
min_per_order = 0.35
max_per_order = 0.01 * n * p
commission = _compute_ladder_commission(n)
commission = np.maximum(commission, min_per_order)
commission = np.minimum(commission, max_per_order)
# clearing fees
clear_fee = np.minimum(0.00020 * n, 0.005 * n * p)
# exchange fees for NASDAQ / Island
# for NYSE commission is almost the same
if p < 1:
if add_liquid:
exch_fee = 0
else:
exch_fee = n * p * 0.0030
else:
if add_liquid:
exch_fee = - 0.0021 * n
else:
exch_fee = 0.0030 * n
# transaction fees
trans_fee = 0.0000221 * n * p * sale
nyse_pass_through = commission * 0.000175
finra_pass_through = commission * 0.00056
finra_activity = np.minimum(0.000119 * n, 5.95)
total_commission = commission + clear_fee + exch_fee + trans_fee + nyse_pass_through + finra_pass_through + finra_activity
if verbose:
res = [
("IBKR commission", commission),
("Exchange fee", exch_fee),
("Clearing fee", clear_fee),
("Transaction fee", trans_fee),
("NYSE pass-through", nyse_pass_through),
("Finra pass-through", finra_pass_through),
("Finra activity", finra_activity),
("TOTAL", total_commission)
]
from tabulate import tabulate
print(tabulate(res))
return total_commission
# check pass https://www.interactivebrokers.co.uk/en/index.php?f=40909
get_commission_tiered(30_000_000 , 5, sale=True, add_liquid=True, verbose=True)
# check pass https://www.interactivebrokers.co.uk/en/index.php?f=40909
get_commission_fixed(30_000_000 , 5, sale=True)
# check pass https://www.interactivebrokers.co.uk/en/index.php?f=40909
get_commission_tiered(30_000_000 , 5, sale=True, add_liquid=False, verbose=True)
# check pass https://www.interactivebrokers.co.uk/en/index.php?f=40909
get_commission_fixed(200000, 5, sale=True)
# check pass https://www.interactivebrokers.co.uk/en/index.php?f=40909
get_commission_tiered(200000, 5, sale=True, verbose=True)
###Output
------------------ ---------
IBKR commission 700
Exchange fee 600
Clearing fee 40
Transaction fee 22.1
NYSE pass-through 0.1225
Finra pass-through 0.392
Finra activity 5.95
TOTAL 1368.56
------------------ ---------
###Markdown
Unclear: - (unrelated to computations, see Assumptions) Orders where the commission cap is applied do not count towards the monthly volume tiers- (related to computations) exchange fees Island: `NASDAQ < $1.00 per share, Remove liquidity cost = Trade Value * 0.0030` - is the total cost `trade_value * 0.0030` or ` trade_value * 0.0030 * n` ? Assuming the former. Cannot check, since their examples do not include`p < $1` Note* using SmartRouting, clients should be aware that IB may route the order to an exchange with a better quoted price but with substantially higher fees. In particular, clients should understand the ECN charges for removing liquidity when sending marketable orders for low priced stocks (under USD 2.50).* Comparison Tiered vs Fixed: - a rebate for executing a trade in a Regulation NMS stock at a market-maker, dark pool, or with a liquidity provider in the IB ATS, IB will pass the full amount of that rebate to Tiered-commission customers as a venue rebate ant to Fixed-commission customers as a commission discount. - one of the main difference between Fixed and Tiered is the ability to gain from the Add Liquidity strategy of placing an order (for Add Liquidity the commission [may be even negative](https://www.interactivebrokers.co.uk/en/index.php?f=40909&nhf=T)).
###Code
plt.figure(figsize=(30, 3.5), dpi=200)
prices = [0.1, 1, 25, 50, 100, 500, 1000]
for i, p in enumerate(prices):
plt.subplot(1, len(prices), i+1)
n = np.arange(1, 300)
plt.title(f"$ {p} per share")
plt.plot(n, get_commission_fixed(n, p), label="Fixed, purchase")
plt.plot(n, get_commission_tiered(n, p), label="Tiered, purchase, remove liquid")
plt.plot(n, get_commission_tiered(n, p, add_liquid=True), label="Tiered, purchase, add liquid")
plt.plot(n, get_commission_fixed(n, p, sale=True), label="Fixed, sale")
plt.plot(n, get_commission_tiered(n, p, sale=True), label="Tiered, sale, remove liquid")
plt.plot(n, get_commission_tiered(n, p, sale=True, add_liquid=True), label="Tiered, sale, add liquid")
if i == 0:
plt.ylabel("Commission, $")
plt.xlabel("Number of shares traded")
if i == len(prices) - 1:
plt.legend(bbox_to_anchor=(1.1, 0.5))
plt.grid()
plt.tight_layout()
plt.savefig("C:/Users/Dell/Downloads/ib_commision.png")
plt.figure(figsize=(30, 3.5), dpi=200)
prices = [0.1, 1, 25, 50, 100, 300, 1000]
for i, p in enumerate(prices):
plt.subplot(1, len(prices), i+1)
n = np.arange(1, 300)
plt.title(f"$ {p} per share")
plt.plot(n, get_commission_fixed(n, p) - get_commission_tiered(n, p),
label="Fixed - Tiered, purchase, remove_liquidity")
plt.plot(n, get_commission_fixed(n, p) - get_commission_tiered(n, p, add_liquid=True),
label="Fixed - Tiered, purchase, add liquidity")
plt.plot(n, get_commission_fixed(n, p, sale=True) - get_commission_tiered(n, p, sale=True),
label="Fixed - Tiered, sale, remove liquidity")
plt.plot(n, get_commission_fixed(n, p, sale=True) - get_commission_tiered(n, p, sale=True, add_liquid=True),
label="Fixed - Tiered, sale, add liquidity")
if i == 0:
plt.ylabel("Difference in commissions, $")
plt.xlabel("Number of shares traded")
if i == len(prices) - 1:
plt.legend(bbox_to_anchor=(1.1, 0.5))
plt.grid()
plt.tight_layout()
plt.savefig("C:/Users/Dell/Downloads/ib_commision_diff.png")
###Output
_____no_output_____ |
house_factor_analysis.ipynb | ###Markdown
線性回歸預測房價 (有用factor analysis)
###Code
import numpy as np
import pandas as pd
housedatas=pd.read_csv("housedataset/kc_house_data.csv", encoding="utf-8")
housedatas.head(3)
len(housedatas)
mostfeatures=['price','bedrooms', 'bathrooms', 'sqft_living', 'floors', 'condition', 'grade',
'waterfront', 'view','yr_built','lat','long']
housedata=housedatas[mostfeatures]
housedata.head(3)
housedata=housedata.astype('float')
housedata.head(3)
housedata['label']='middle'
housedata.loc[housedata['price']>=600000, 'label'] = 'high'
housedata.loc[housedata['price']<=350000, 'label'] = 'low'
housedata_1=housedata[housedata['label'].str.contains('high')]
housedata_2=housedata[housedata['label'].str.contains('middle')]
housedata_3=housedata[housedata['label'].str.contains('low')]
print(len(housedata),len(housedata_1),len(housedata_2),len(housedata_3))
housedata2=housedata.drop(columns=['price','label'])
from sklearn import model_selection
X = np.array(housedata2)
Y = np.array(housedata['label'])
validation_size = 0.20
seed = 20
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
print(len(X_train),len(X_validation))
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.svm import SVC
SVC_model = SVC(kernel='linear')
SVC_model.fit(X_train,Y_train)
predictions = SVC_model.predict(X_validation)
print(SVC_model.score(X_train,Y_train))
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
from sklearn.decomposition import FactorAnalysis
transformer = FactorAnalysis(n_components=10, random_state=20)
house_transformed = transformer.fit_transform(housedata2)
house_transformed.shape
X = house_transformed
Y = np.array(housedata['label'])
validation_size = 0.20
seed = 20
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
SVC_model = SVC(kernel='linear')
SVC_model.fit(X_train,Y_train)
predictions = SVC_model.predict(X_validation)
print(SVC_model.score(X_train,Y_train))
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
housedata3=housedata.drop(columns=['price','waterfront','view','label'])
housedata3.head(10)
X = np.array(housedata3)
Y = np.array(housedata['label'])
validation_size = 0.20
seed = 20
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
SVC_model = SVC(kernel='linear')
SVC_model.fit(X_train,Y_train)
predictions = SVC_model.predict(X_validation)
print(SVC_model.score(X_train,Y_train))
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
###Output
0.6950260266049739
0.710617626648161
[[ 970 20 288]
[ 23 1076 257]
[ 298 365 1026]]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.