covid-19_vulnerability_mapping_SA/model.ipynb
###Markdown
South African COVID-19 Vulnerability Map

The 2011 census gives us valuable information for determining who might be most vulnerable to COVID-19 in South Africa. However, the data is nearly 10 years old, and we expect that some key indicators will have changed in that time. Building an up-to-date map showing where the most vulnerable are located will be a key step in responding to the disease. A mapping effort like this requires bringing together many different inputs and tools. For this competition, we're starting small: can we infer important risk factors from more readily available data?

The task is to predict the percentage of households that fall into a particularly vulnerable bracket - large households who must leave their homes to fetch water - using 2011 South African census data. Solving this challenge will show that with machine learning it is possible to use easy-to-measure stats to identify areas most at risk even in years when census data is not collected.
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import KFold, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
import lightgbm as lgbm
import xgboost as xgb
import warnings
warnings.filterwarnings('ignore')
# Load the data
train = pd.read_csv('./raw_data/Train.csv')
test = pd.read_csv('./raw_data/Test.csv')
sub = pd.read_csv('./raw_data/samplesubmission.csv')
train.head()
def check_missing_data(data: pd.DataFrame) -> pd.DataFrame:
    """Checks a given dataframe for missing values and
    types of the data features.
    """
    total = data.isnull().sum()
    percent = data.isnull().sum() / data.isnull().count() * 100
    tt = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
    types = []
    for col in data.columns:
        dtype = str(data[col].dtype)
        types.append(dtype)
    tt['Types'] = types
    return np.transpose(tt)
check_missing_data(train)
train.describe()
# some Data cleaning and feature engineering
train = train[train['total_households']<=17500]
#train = train[train.index!=1094]
train['Individualsperhouse'] = train['total_individuals'] / train['total_households']
test['Individualsperhouse'] = test['total_individuals'] / test['total_households']
train['total_households_lt5000'] = train['total_households'].apply(lambda x: 1 if 2500 < x <= 5000 else 0)
test['total_households_lt5000'] = test['total_households'].apply(lambda x: 1 if 2500 < x <= 5000 else 0)
corr = train.corr()
fig = plt.figure(figsize = (9, 6))
sns.heatmap(corr, vmax = .8, square = True)
plt.show()
(corr
 .target_pct_vunerable
 .drop("target_pct_vunerable")  # can't compare the variable under study to itself
 .sort_values(ascending=False)
 .plot
 .barh(figsize=(9,7)))
plt.title("correlation bar_hist")
sns.distplot(train.target_pct_vunerable, bins=100)
def rmse(y, x):
    return np.sqrt(mean_squared_error(x, y))
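# note: the argument order in rmse() does not matter, since MSE is symmetric in its two inputs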
drop_cols = ['target_pct_vunerable', 'ward', 'dw_11', 'dw_12','lan_13']
y = train.target_pct_vunerable
X = train.drop(drop_cols, axis=1)
#X = StandardScaler().fit_transform(X)
ids = test['ward']
test = test.drop(drop_cols[1:], axis=1)
#tt = test[use_cols]
lgb_params = {
    'metric': 'rmse',
    'learning_rate': 0.03,
    'max_depth': 6,
    'num_leaves': 50,
    'objective': 'regression',
    'feature_fraction': 0.5,
    'bagging_fraction': 0.5,
    'max_bin': 1000}
# split data
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
train_data = lgbm.Dataset(X_train, label=y_train)
test_data = lgbm.Dataset(X_val, label=y_val)
lgb_model = lgbm.train(lgb_params, train_data, valid_sets=[train_data, test_data],
                       num_boost_round=9000, early_stopping_rounds=500)  # 0.03 lr
#lgb_df = lgbm.Dataset(X, y)
#lgb_model = lgbm.train(lgb_params, lgb_df, num_boost_round=5000)
lgbm.plot_importance(lgb_model, height=0.8, figsize=(9,12))
xgb_model = xgb.XGBRegressor(n_estimators=2000, learning_rate=0.05, n_jobs=-1)
xgb_model.fit(X_train, y_train)
val_pred = xgb_model.predict(X_val)
error = rmse(y_val, val_pred)
error
xgb.plot_importance(xgb_model, height=4.0)
# Feature selection
thresholds = np.sort(xgb_model.feature_importances_)
for thresh in thresholds:
    # select features using threshold
    selection = SelectFromModel(xgb_model, threshold=thresh, prefit=True)
    select_X_train = selection.transform(X_train)
    # train model
    s_model = xgb.XGBRegressor(n_estimators=2000, learning_rate=0.05, n_jobs=-1)
    s_model.fit(select_X_train, y_train)
    # eval model
    select_X_val = selection.transform(X_val)
    y_pred = s_model.predict(select_X_val)
    val_preds = [round(value) for value in y_pred]
    score = rmse(y_val, val_preds)
    print("Thresh=%.3f, n=%d, rmse: %.2f" % (thresh, select_X_train.shape[1], score))
# Random-Forest with CV
kf = KFold(n_splits=5, shuffle=False)
scores = []
for train_idx, val_idx in kf.split(X):  # renamed to avoid shadowing the `train` dataframe
    model = RandomForestRegressor(n_estimators=200, max_depth=5, n_jobs=-1,
                                  random_state=42)
    model.fit(X.iloc[train_idx], y.iloc[train_idx])
    root_mse = rmse(y.iloc[val_idx], model.predict(X.iloc[val_idx]))
    scores.append(root_mse)
    print(root_mse)
print("Average score in 5-fold CV:", np.mean(scores))
predictions = xgb_model.predict(test)
preds = lgb_model.predict(test)
sub['ward'] = ids
sub['target_pct_vunerable'] = preds
sub.head()
#%mkdir submissions
#sub.to_csv(f'./submissions/sub{np.round(np.mean(scores), 4)}.csv', index=False)
sub.to_csv('./submissions/lgbm_sub.csv', index=False)
#sub.to_csv('./submissions/xgb_sub.csv', index=False)
###Output
_____no_output_____
notebooks/XGBoost_new.ipynb
###Markdown
XGBoost
###Code
import xgboost as xgb
group = train.groupby('queried_record_id').size().values
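# NB: `group` tells XGBRanker how many consecutive rows belong to each query;
# this assumes the training rows are sorted by `queried_record_id`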
ranker = xgb.XGBRanker()
ranker.fit(train.drop(['queried_record_id', 'target', 'linked_id_idx', 'linked_id'], axis=1), train['target'], group=group)
###Output
_____no_output_____
###Markdown
Test
###Code
test = pd.read_csv("../dataset/expanded/test_xgb.csv")
test['editdistance'] = compute_editdistance(test, validation=False)
email_pop = pd.read_csv("../dataset/original/feature/email_popularity.csv")
linked_id_pop = pd.read_csv("../dataset/original/feature/linked_id_popularity.csv")
name_pop = pd.read_csv("../dataset/original/feature/name_popularity.csv")
nonnull_addr = pd.read_csv("../dataset/original/feature/number_of_non_null_address.csv")
nonnull_email = pd.read_csv("../dataset/original/feature/number_of_non_null_email.csv")
nonnull_phone = pd.read_csv("../dataset/original/feature/number_of_non_null_phone.csv")
phone_pop = pd.read_csv("../dataset/original/feature/phone_popularity.csv")
name_length = pd.read_csv("../dataset/original/feature/test_name_length.csv")
test = test.merge(email_pop, how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
test = test.merge(linked_id_pop, how='left', left_on='predicted_record_id', right_on='linked_id').drop('linked_id', axis=1).rename(columns={'popularity':'linked_id_popularity'})
test
test = test.merge(name_pop, how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
test = test.merge(nonnull_addr, how='left', left_on='predicted_record_id', right_on='linked_id').drop('linked_id', axis=1)
test = test.merge(nonnull_email, how='left', left_on='predicted_record_id', right_on='linked_id').drop('linked_id', axis=1)
test = test.merge(nonnull_phone, how='left', left_on='predicted_record_id', right_on='linked_id').drop('linked_id', axis=1)
test = test.merge(phone_pop, how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
test = test.merge(name_length, how='left', left_on='queried_record_id', right_on='record_id').drop('record_id', axis=1)
test = test.fillna(0)
test
test['linked_id_popularity'] = test.linked_id_popularity.astype(int)
test['null_address'] = test.null_address.astype(int)
test['null_email'] = test.null_email.astype(int)
test['null_phone'] = test.null_phone.astype(int)
predictions = ranker.predict(test.drop(['queried_record_id', 'linked_id_idx'], axis=1))
test['predictions'] = predictions
df_predictions = test[['queried_record_id', 'predicted_record_id', 'predictions']]
rec_pred = []
for (r, p) in zip(df_predictions.predicted_record_id, df_predictions.predictions):
    rec_pred.append((r, p))
rec_pred
df_predictions['rec_pred'] = rec_pred
group_queried = df_predictions[['queried_record_id', 'rec_pred']].groupby('queried_record_id').apply(lambda x: list(x['rec_pred']))
df_predictions = pd.DataFrame(group_queried).reset_index().rename(columns={0 : 'rec_pred'})
def reorder_preds(preds):
    sorted_list = []
    for i in range(len(preds)):
        l = sorted(preds[i], key=lambda t: t[1], reverse=True)
        l = [x[0] for x in l]
        sorted_list.append(l)
    return sorted_list
df_predictions['ordered_preds'] = reorder_preds(df_predictions.rec_pred.values)
df_predictions = df_predictions[['queried_record_id', 'ordered_preds']].rename(columns={'ordered_preds': 'predicted_record_id'})
from tqdm import tqdm  # progress bar (this import was missing in the cell)

new_col = []
for t in tqdm(df_predictions.predicted_record_id):
    new_col.append(' '.join([str(x) for x in t]))
new_col
df_predictions.predicted_record_id = new_col
df_predictions.to_csv('xgb_sub4.csv', index=False)
df_predictions.shape
sub_old = pd.read_csv("/Users/alessiorussointroito/Documents/GitHub/Oracle_HPC_contest/notebooks/xgb_sub.csv")
set(sub_old.queried_record_id.values) - set(df_predictions.queried_record_id.values)
missing_values = {'queried_record_id': ['12026587-TST-MR', '13009531-TST-MR'],
                  'predicted_record_id': [10111147, 10111147]}
missing_df = pd.DataFrame(missing_values)
missing_df
df_predictions = pd.concat([df_predictions, missing_df])
df_predictions.to_csv('xgb_sub3.csv', index=False)
train.target.sum()
train.queried_record_id.shape[0] / 10
###Output
_____no_output_____
_doc/notebooks/install_module.ipynb
###Markdown
Ways to install a module

Install a module from a notebook.
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
###Output
_____no_output_____
###Markdown
The module [pymyinstall](http://www.xavierdupre.fr/app/pymyinstall/helpsphinx/) proposes an easy way to install modules, mostly on Windows, as it is already quite easy to do it on Linux/Mac through [pip](http://pip.readthedocs.org/en/latest/). There are a couple of ways to install a module and they should be tried in this order:

* **pip**: pip
* **wheel**: also uses pip but on Windows, it will fetch the wheel file from the location mentioned below (Unofficial...)
* **github**: source (usually from [github](https://github.com/)), the owner of the source must be specified

Old ways which should not be used anymore:

* exe: a setup on Windows (only on Windows), replaced by pip on Linux/Mac
* exe_xd: only for Windows, some setups gathered on my website

[pip](http://pip.readthedocs.org/en/latest/) is great because it deals with dependencies for you. I recommend to use Python 3.4 because that is the first version which includes ``pip``. It takes the modules from [PyPI](https://pypi.python.org/pypi). The only drawback happens on Windows when a module includes [Fortran](http://en.wikipedia.org/wiki/Fortran)/[C](http://en.wikipedia.org/wiki/C_%28programming_language%29)/[C++](http://en.wikipedia.org/wiki/C%2B%2B) files which must be compiled. As opposed to Linux or Mac with [gcc](https://gcc.gnu.org/), there is no official compiler to handle every package and it has to be installed first. On Windows, it can be done by installing [Visual Studio Express 2010](http://www.visualstudio.com/downloads/download-visual-studio-vsDownloadFamilies_4) but sometimes the dependencies can be tricky. That's why it is recommended to install already compiled Python extensions. Not every module provides a compiled version but there exist two main ways to get them:

* [Unofficial Windows Binaries for Python Extension Packages](http://www.lfd.uci.edu/~gohlke/pythonlibs/)
* [Anaconda](http://continuum.io/)

[pymyinstall](http://www.xavierdupre.fr/app/pymyinstall/helpsphinx/) takes the setup from the first source. The following snippets of code give an example for each described way to install a module. The setup way only works on Windows.

pip

Let's try with the following module: [numbers_extractor](https://pypi.python.org/pypi/numbers_extractor).
###Code
from pymyinstall import ModuleInstall
###Output
_____no_output_____
###Markdown
The following only works if you have enough permissions to do it which is the case if the notebook server is started with admin permissions or if you are using a virtual environment.
###Code
ModuleInstall("numbers_extractor", "pip").install()
###Output
installation of numbers_extractor:pip:import numbers_extractor
###Markdown
wheel (for files *.whl)
###Code
ModuleInstall("numpy", "wheel").install()
###Output
_____no_output_____
###Markdown
The function checks first if the module was already installed. That's why it displays only True.

Not all modules are available at [Unofficial Windows Binaries for Python Extension Packages](http://www.lfd.uci.edu/~gohlke/pythonlibs/), such as [xgboost](https://github.com/dmlc/xgboost). For this one, ``wheel`` must be replaced by ``wheel2``. The wheel file will be picked from [xavierdupre.fr](http://www.xavierdupre.fr).

github

[github](https://github.com/) holds the source of most open source projects. This one is no exception. You can check this page for example: [Top 400 Python Projects in Github](http://pythonhackers.com/open-source/). We try here with the module [bottle](https://github.com/defnull/bottle/).
###Code
ModuleInstall("bottle", "github", "defnull").install()
###Output
installation of bottle:github:import bottle
downloading https://github.com/defnull/bottle/archive/master.zip
unzipping .\bottle.zip
creating folder .\bottle-master
unzipped bottle-master/.coveragerc to .\bottle-master/.coveragerc
unzipped bottle-master/.gitignore to .\bottle-master/.gitignore
unzipped bottle-master/.travis.yml to .\bottle-master/.travis.yml
unzipped bottle-master/AUTHORS to .\bottle-master/AUTHORS
unzipped bottle-master/LICENSE to .\bottle-master/LICENSE
unzipped bottle-master/MANIFEST.in to .\bottle-master/MANIFEST.in
unzipped bottle-master/Makefile to .\bottle-master/Makefile
unzipped bottle-master/README.rst to .\bottle-master/README.rst
unzipped bottle-master/bottle.py to .\bottle-master/bottle.py
creating folder .\bottle-master/docs
creating folder .\bottle-master/docs/_locale
unzipped bottle-master/docs/_locale/README.txt to .\bottle-master/docs/_locale/README.txt
unzipped bottle-master/docs/_locale/update.sh to .\bottle-master/docs/_locale/update.sh
creating folder .\bottle-master/docs/_locale/zh_CN
unzipped bottle-master/docs/_locale/zh_CN/api.po to .\bottle-master/docs/_locale/zh_CN/api.po
unzipped bottle-master/docs/_locale/zh_CN/async.po to .\bottle-master/docs/_locale/zh_CN/async.po
unzipped bottle-master/docs/_locale/zh_CN/changelog.po to .\bottle-master/docs/_locale/zh_CN/changelog.po
unzipped bottle-master/docs/_locale/zh_CN/configuration.po to .\bottle-master/docs/_locale/zh_CN/configuration.po
unzipped bottle-master/docs/_locale/zh_CN/contact.po to .\bottle-master/docs/_locale/zh_CN/contact.po
unzipped bottle-master/docs/_locale/zh_CN/deployment.po to .\bottle-master/docs/_locale/zh_CN/deployment.po
unzipped bottle-master/docs/_locale/zh_CN/development.po to .\bottle-master/docs/_locale/zh_CN/development.po
unzipped bottle-master/docs/_locale/zh_CN/faq.po to .\bottle-master/docs/_locale/zh_CN/faq.po
unzipped bottle-master/docs/_locale/zh_CN/index.po to .\bottle-master/docs/_locale/zh_CN/index.po
unzipped bottle-master/docs/_locale/zh_CN/plugindev.po to .\bottle-master/docs/_locale/zh_CN/plugindev.po
unzipped bottle-master/docs/_locale/zh_CN/plugins.po to .\bottle-master/docs/_locale/zh_CN/plugins.po
unzipped bottle-master/docs/_locale/zh_CN/recipes.po to .\bottle-master/docs/_locale/zh_CN/recipes.po
unzipped bottle-master/docs/_locale/zh_CN/routing.po to .\bottle-master/docs/_locale/zh_CN/routing.po
unzipped bottle-master/docs/_locale/zh_CN/stpl.po to .\bottle-master/docs/_locale/zh_CN/stpl.po
unzipped bottle-master/docs/_locale/zh_CN/tutorial.po to .\bottle-master/docs/_locale/zh_CN/tutorial.po
unzipped bottle-master/docs/_locale/zh_CN/tutorial_app.po to .\bottle-master/docs/_locale/zh_CN/tutorial_app.po
unzipped bottle-master/docs/api.rst to .\bottle-master/docs/api.rst
unzipped bottle-master/docs/async.rst to .\bottle-master/docs/async.rst
unzipped bottle-master/docs/changelog.rst to .\bottle-master/docs/changelog.rst
unzipped bottle-master/docs/conf.py to .\bottle-master/docs/conf.py
unzipped bottle-master/docs/configuration.rst to .\bottle-master/docs/configuration.rst
unzipped bottle-master/docs/contact.rst to .\bottle-master/docs/contact.rst
unzipped bottle-master/docs/deployment.rst to .\bottle-master/docs/deployment.rst
unzipped bottle-master/docs/development.rst to .\bottle-master/docs/development.rst
unzipped bottle-master/docs/faq.rst to .\bottle-master/docs/faq.rst
unzipped bottle-master/docs/index.rst to .\bottle-master/docs/index.rst
unzipped bottle-master/docs/plugindev.rst to .\bottle-master/docs/plugindev.rst
creating folder .\bottle-master/docs/plugins
unzipped bottle-master/docs/plugins/index.rst to .\bottle-master/docs/plugins/index.rst
unzipped bottle-master/docs/recipes.rst to .\bottle-master/docs/recipes.rst
unzipped bottle-master/docs/routing.rst to .\bottle-master/docs/routing.rst
unzipped bottle-master/docs/stpl.rst to .\bottle-master/docs/stpl.rst
unzipped bottle-master/docs/tutorial.rst to .\bottle-master/docs/tutorial.rst
unzipped bottle-master/docs/tutorial_app.rst to .\bottle-master/docs/tutorial_app.rst
unzipped bottle-master/setup.cfg to .\bottle-master/setup.cfg
unzipped bottle-master/setup.py to .\bottle-master/setup.py
creating folder .\bottle-master/test
unzipped bottle-master/test/.coveragerc to .\bottle-master/test/.coveragerc
unzipped bottle-master/test/build_python.sh to .\bottle-master/test/build_python.sh
unzipped bottle-master/test/servertest.py to .\bottle-master/test/servertest.py
unzipped bottle-master/test/test_auth.py to .\bottle-master/test/test_auth.py
unzipped bottle-master/test/test_config.py to .\bottle-master/test/test_config.py
unzipped bottle-master/test/test_configdict.py to .\bottle-master/test/test_configdict.py
unzipped bottle-master/test/test_contextlocals.py to .\bottle-master/test/test_contextlocals.py
unzipped bottle-master/test/test_environ.py to .\bottle-master/test/test_environ.py
unzipped bottle-master/test/test_fileupload.py to .\bottle-master/test/test_fileupload.py
unzipped bottle-master/test/test_formsdict.py to .\bottle-master/test/test_formsdict.py
unzipped bottle-master/test/test_importhook.py to .\bottle-master/test/test_importhook.py
unzipped bottle-master/test/test_jinja2.py to .\bottle-master/test/test_jinja2.py
unzipped bottle-master/test/test_mako.py to .\bottle-master/test/test_mako.py
unzipped bottle-master/test/test_mdict.py to .\bottle-master/test/test_mdict.py
unzipped bottle-master/test/test_mount.py to .\bottle-master/test/test_mount.py
unzipped bottle-master/test/test_outputfilter.py to .\bottle-master/test/test_outputfilter.py
unzipped bottle-master/test/test_plugins.py to .\bottle-master/test/test_plugins.py
unzipped bottle-master/test/test_resources.py to .\bottle-master/test/test_resources.py
unzipped bottle-master/test/test_route.py to .\bottle-master/test/test_route.py
unzipped bottle-master/test/test_router.py to .\bottle-master/test/test_router.py
unzipped bottle-master/test/test_securecookies.py to .\bottle-master/test/test_securecookies.py
unzipped bottle-master/test/test_sendfile.py to .\bottle-master/test/test_sendfile.py
unzipped bottle-master/test/test_server.py to .\bottle-master/test/test_server.py
unzipped bottle-master/test/test_stpl.py to .\bottle-master/test/test_stpl.py
unzipped bottle-master/test/test_wsgi.py to .\bottle-master/test/test_wsgi.py
unzipped bottle-master/test/testall.py to .\bottle-master/test/testall.py
unzipped bottle-master/test/tools.py to .\bottle-master/test/tools.py
unzipped bottle-master/test/travis-requirements.txt to .\bottle-master/test/travis-requirements.txt
unzipped bottle-master/test/travis_setup.sh to .\bottle-master/test/travis_setup.sh
creating folder .\bottle-master/test/views
unzipped bottle-master/test/views/jinja2_base.tpl to .\bottle-master/test/views/jinja2_base.tpl
unzipped bottle-master/test/views/jinja2_inherit.tpl to .\bottle-master/test/views/jinja2_inherit.tpl
unzipped bottle-master/test/views/jinja2_simple.tpl to .\bottle-master/test/views/jinja2_simple.tpl
unzipped bottle-master/test/views/mako_base.tpl to .\bottle-master/test/views/mako_base.tpl
unzipped bottle-master/test/views/mako_inherit.tpl to .\bottle-master/test/views/mako_inherit.tpl
unzipped bottle-master/test/views/mako_simple.tpl to .\bottle-master/test/views/mako_simple.tpl
unzipped bottle-master/test/views/stpl_include.tpl to .\bottle-master/test/views/stpl_include.tpl
unzipped bottle-master/test/views/stpl_no_vars.tpl to .\bottle-master/test/views/stpl_no_vars.tpl
unzipped bottle-master/test/views/stpl_simple.tpl to .\bottle-master/test/views/stpl_simple.tpl
unzipped bottle-master/test/views/stpl_t2base.tpl to .\bottle-master/test/views/stpl_t2base.tpl
unzipped bottle-master/test/views/stpl_t2inc.tpl to .\bottle-master/test/views/stpl_t2inc.tpl
unzipped bottle-master/test/views/stpl_t2main.tpl to .\bottle-master/test/views/stpl_t2main.tpl
unzipped bottle-master/test/views/stpl_unicode.tpl to .\bottle-master/test/views/stpl_unicode.tpl
unzipped bottle-master/tox.ini to .\bottle-master/tox.ini
install C
warning: Successfully installed not found
###Markdown
The function downloads the source from GitHub and installs it using the instruction ``python setup.py install``. The function still has to be improved because analyzing the output is not always obvious when there are dependencies or error messages. We check again that a second call does not install the module again.
###Code
ModuleInstall("bottle", "github", "defnull").install()
###Output
installation of bottle:github:import bottle
unzipping .\bottle.zip
install C
warning: Successfully installed not found
###Markdown
We finally check that the module can be imported. It sometimes requires the kernel to be restarted.
###Code
import bottle
###Output
_____no_output_____
###Markdown
The two previous methods download files and create others when some file needs to be uncompressed.
###Code
import os
os.listdir(".")
###Output
_____no_output_____
code 7/9. DataFrame + Titanic Example.ipynb
###Markdown
DataFrame

How to create a dataframe? It can be created by passing data in different ways: a dictionary of lists or ndarrays, a 2d ndarray, and so on... "Because the rows and columns are labelled, the data is easy to extract." How to create one?
###Code
import pandas as pd  # this notebook uses pandas but the import was missing

df = pd.DataFrame({'Student_1': [90, 100, 95], 'Student_2': [60, 80, 100]}, index=['Monday', 'Wednesday', 'Friday'])
df
df1 = pd.DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 'B'], columns=['C1', 'C2', 'C3'])
df1
df1.values
df1.index
df1.columns
df1.T
df1.shape
df1.size
###Output
_____no_output_____
###Markdown
Method
###Code
df1.head(2)
df1.tail(1)
df1.describe()
df1.loc['B']
df1
df1.loc['B'].loc['C2'] # loc works on index
df1['C2'].loc['B']
df1.loc['B', 'C2']
df1.iloc[1, 1] # iloc works on position (only take integers)
df1 + 10 * 15 # element-wise operations
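# apply() with axis=1 passes each row to the lambda, so C2 is recomputed row by row: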
df1['C2'] = df1.apply(lambda x: x['C2'] ** 2 + 10, axis=1)
df1
df1.assign(C2=lambda x: x['C2'] ** 2 + 10,
           C3=lambda x: x['C3'] * 2 - 10).loc['A'].max()
df1
# describe, operation (+-*/), apply, set_index/value
# where, mask (fillna)
# sort_index, sort_value
# query
# select
# filter (where) => subset
# join
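# A minimal sketch of a few of the methods listed above, using df1 from the earlier cells:
df1.sort_values('C1', ascending=False)   # order rows by a column
df1.query('C1 > 1')                      # row selection with a boolean expression
df1.where(df1 > 2, 0)                    # keep values > 2, replace the rest with 0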
###Output
_____no_output_____
###Markdown
Titanic example
###Code
from IPython.display import Image
Image("./S.O.S._Titanic_Ship_Sinking.jpg")
# picture retrieved from https://vignette.wikia.nocookie.net/titanic/images/5/50/S.O.S._Titanic_Ship_Sinking.jpg/revision/latest?cb=20150309223733
df = pd.read_csv('train.csv')
df.shape
df.head(5)
df.tail(2)
df.shape
df.dtypes
df.Survived.value_counts()
df.Survived.value_counts().plot(kind='bar')
df.isnull().sum().plot(kind='bar')
###Output
_____no_output_____
###Markdown
How to deal with missing values? Drop them, or fill them with some value.
###Code
df1 = df.drop('Cabin', axis=1)
df1['Age'] = df1['Age'].fillna(20)
df2 = df1[df1['Embarked'].notnull()]
# missing value removal
df3 = df.drop('Cabin', axis=1).assign(Age=lambda x: x['Age'].fillna(20))
df3 = df3.loc[df3['Embarked'].notnull()]
df3.isnull().sum()
# Exploration (basic statistics)
df1.loc[10:14, ['Name', 'Sex', 'Survived']]
df3.columns
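# survival counts broken down by sex: rows = Survived (0/1), columns = Sex,
# cells = number of passengers in each group (counted via their PassengerId)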
df3.pivot_table(values='PassengerId', index='Survived', columns='Sex', aggfunc='count')
df4 = df3.loc[df3['Survived'] == 1]
df4.shape
df3.shape
df3 = df1.loc[df1['Age'] > 30]
df3.shape
df4 = df2[['PassengerId', 'Name']].merge(df3[['PassengerId', 'Age']], on='PassengerId', how='outer')
df4.shape
df['Pclass'].value_counts().plot.bar()
df['Embarked'].value_counts().plot.bar()
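# Pearson correlation between survival and passenger class; a negative value
# would mean passengers with larger Pclass numbers (lower classes) survived less often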
df['Survived'].corr(df['Pclass'])
###Output
_____no_output_____
COSMOS2020_readcat.ipynb
###Markdown
COSMOS2020 catalogue analysis

by D. Blanquez, I. Davidzon, G. Magdis

Contacts: [email protected]

In the present Notebook the user is able to extract valuable information from the COSMOS2020 catalogue, which can be downloaded from [this data repository](https://cosmos2020.calet.org/). This is also a convenient starting point for further analysis. With minimal modifications the code can be useful to study other galaxy catalogues. The Notebook is divided in the following sections:

* **Introduction**: loading the tables and selecting the columns
* **Data preparation**: basic manipulations/corrections of the original photometry
* **Data visualization**: sky map, redshift and color distributions, SED fitting
* **Classic diagnostics**: color-color diagrams, SFR vs. M* diagram, ...
* **A simple machine-learning application**: producing "mock photometry" by means of Gaussian Mixtures

Acknowledgments: if you use the COSMOS2020 catalog in your study, please cite [Weaver et al. (2021)](https://arxiv.org/abs/2110.13923)
###Code
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import h5py # used in the Data Visualization section
from astropy.io import fits,ascii,votable
from astropy import units as u
from astropy import constants as const
from astropy import table
from astropy.cosmology import Planck15,FlatLambdaCDM
# For ML application
from sklearn.cluster import KMeans
from sklearn import mixture
from itertools import combinations
###Output
_____no_output_____
###Markdown
Introduction: the COSMOS2020 catalogue in two complementary versions

There are two versions of the COSMOS2020 catalogue: a *Classic* one where the photometry is produced by a pipeline similar to Laigle et al. (2016), and the *Farmer* version that relies on the software *Tractor* recently developed by Lang & Hogg (http://thetractor.org/). Respective file names are:

1. COSMOS2020_CLASSIC_R1_v2.0.fits
2. COSMOS2020_FARMER_R1_v2.0.fits

Both tables come with additional information from SED fitting analysis (photometric redshifts, stellar masses, etc.) and a `*.header` ASCII file explaining the content of each column. Depending on the version some columns may have a slightly different name. Please note that SED fitting was performed twice, with either *LePhare* or *EAZY* code. Therefore, there are four possible combinations to do science: *Farmer* + *LePhare*, *Farmer* + *EAZY*, etc. These and many more input parameters are specified in the present section.
###Code
# Specify the version of the catalog and the folder with the input/output files
catversion = 'Farmer' # this string can be either 'Classic' or 'Farmer'
dir_in = '/path/to/the/cosmos2020/catalogs/'
dir_out = './' # the directory where the output of this notebook will be stored
# Chose the SED fitting code:
# set to 'lp' for LePhare results or
# set to 'ez' for EAZY
fitversion = 'lp'
# Which type of photometric estimates to use? (suffix of the column name)
# This choice must be consistent with `catversion`,
# choices for Classic are: '_FLUX_APER2', '_FLUX_APER3', '_MAG_APER2,', '_MAG_APER3'
# choices for Farmer are '_FLUX' or '_MAG'
flx = '_FLUX'
flxerr = '_FLUXERR' # catalog column for flux/mag error, just add 'ERR'
outflx = 'cgs' # 'cgs' or 'uJy'
###Output
_____no_output_____
###Markdown
There are several parameters regarding the telescope filters used for observations. They are collectively stored in a dictionary.
###Code
# Filter names, mean wavelength, and other info (see Table 1 in W+21)
filt_name = ['GALEX_FUV', 'GALEX_NUV','CFHT_u','CFHT_ustar','HSC_g', 'HSC_r', 'HSC_i', 'HSC_z', 'HSC_y', 'UVISTA_Y', 'UVISTA_J', 'UVISTA_H', 'UVISTA_Ks', 'SC_IB427', 'SC_IB464', 'SC_IA484', 'SC_IB505', 'SC_IA527', 'SC_IB574', 'SC_IA624', 'SC_IA679', 'SC_IB709', 'SC_IA738', 'SC_IA767', 'SC_IB827', 'SC_NB711', 'SC_NB816', 'UVISTA_NB118', 'SC_B', 'SC_gp', 'SC_V', 'SC_rp', 'SC_ip','SC_zp', 'SC_zpp', 'IRAC_CH1', 'IRAC_CH2', 'IRAC_CH3','IRAC_CH4']
filt_lambda = [0.1526,0.2307,0.3709,0.3858,0.4847,0.6219,0.7699,0.8894,0.9761,1.0216,1.2525,1.6466,2.1557,0.4266,0.4635,0.4851,0.5064,0.5261,0.5766,0.6232,0.6780,0.7073,0.7361,0.7694,0.8243,0.7121,0.8150,1.1909,0.4488,0.4804,0.5487,0.6305,0.7693,0.8978,0.9063,3.5686,4.5067,5.7788,7.9958]
filt_fwhm = [0.0224,0.07909,0.05181,0.05976,0.1383,0.1547,0.1471,0.0766,0.0786,0.0923,0.1718,0.2905,0.3074,0.02073,0.02182,0.02292,0.0231,0.02429,0.02729,0.03004,0.03363,0.03163,0.03235,0.03648,0.0343,0.0072,0.01198,0.01122,0.0892,0.1265,0.0954,0.1376,0.1497,0.0847,0.1335,0.7443,1.0119,1.4082,2.8796]
# corresponding MW attenuation from Schelgel
AlambdaDivEBV = [8.31,8.742,4.807,4.674,3.69,2.715,2.0,1.515,1.298,1.213,0.874,0.565,0.365,4.261,3.844,3.622,3.425,3.265,2.938,2.694,2.431,2.29,2.151,1.997,1.748,2.268,1.787,0.946,4.041,3.738,3.128,2.673,2.003,1.436,1.466,0.163,0.112,0.075,0.045]
# photometric offsets (not available for all filters, see Table 3 in W+21)
zpoff1 = [0.000,-0.352,-0.077,-0.023,0.073,0.101,0.038,0.036,0.086,0.054,0.017,-0.045,0.000,-0.104,-0.044,-0.021,-0.018,-0.045,-0.084,0.005,0.166,-0.023,-0.034,-0.032,-0.069,-0.010,-0.064,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,-0.212,-0.219,0.000,0.000] # Farmer+LePhare
zpoff2 = [0.000,-0.029,-0.006,0.053,0.128,0.127,0.094,0.084,0.100,0.049,0.025,-0.044,0.000,-0.013,-0.008,0.022,0.025,0.033,-0.032,0.031,0.208,-0.009,0.003,-0.015,-0.001,0.023,-0.021,-0.017,-0.075,0.000,0.123,0.035,0.051,0.000,0.095,-0.087,-0.111,0.000,0.000] # Classic+LePhare
zpoff3 = [0.000,0.000,-0.196,-0.054,0.006,0.090,0.043,0.071,0.118,0.078,0.047,-0.034,0.000,-0.199,-0.129,-0.084,-0.073,-0.087,-0.124,0.004,0.154,-0.022,-0.030,-0.013,-0.057,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,0.000,-0.102,-0.044,0.000,0.000] # Farmer+EAZY
zpoff4 = [0.000,0.000,0.000,-0.021,0.055,0.124,0.121,0.121,0.145,0.085,0.057,-0.036,0.000,-0.133,-0.098,-0.046,-0.037,-0.038,-0.062,0.038,0.214,0.024,0.022,0.01,0.022,0.000,0.000,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.021,0.025,0.000,0.000] # Classic+EAZY
# create the dictionary
filt_dict = {filt_name[i]:(filt_lambda[i]*1e4,filt_fwhm[i]*1e4,AlambdaDivEBV[i],[zpoff1[i],zpoff2[i],zpoff3[i],zpoff4[i]]) for i in range(len(filt_name))}
###Output
_____no_output_____
###Markdown
--------------------
Data preparation

This section includes corrections due to Milky Way foreground extinction, photometric offsets, and flux loss in case of aperture photometry. Here, a subsample of the catalogue (either by rows or columns) can also be selected, and the table re-formatted to be saved as a different file.
###Code
# Upload the main catalogue
cat0 = table.Table.read(dir_in+'COSMOS2020_{}_R1_v2.0.fits'.format(catversion.upper()),format='fits',hdu=1)
# Create a mask to restrict the analysis to a subset of filters (optional)
filt_use = ['CFHT_ustar', 'CFHT_u', 'HSC_g', 'HSC_r', 'HSC_i', 'HSC_z', 'HSC_y', 'UVISTA_Y', 'UVISTA_J', 'UVISTA_H', 'UVISTA_Ks', 'IRAC_CH1', 'IRAC_CH2']
filt_mask = [i in filt_use for i in filt_name]
# Have a quick look inside the table
cat0[[0,-1]]
###Output
_____no_output_____
###Markdown
Flags (rows selection)

Flags are used to (de-)select certain areas of the $2\,deg^2$ COSMOS field. For example, by imposing `FLAG_HSC` equal to zero, only objects within the effective area of Subaru/HyperSuprimeCam are selected (i.e., observed by HSC and outside bright star regions). The **most important flag** is `FLAG_COMBINED`, used to remove areas with either unreliable photometry or partial coverage. We strongly recommend to set `FLAG_COMBINED==0` before starting your analysis. Please also note that the format of photometric columns (fluxes, magnitudes) is **masked columns**, useful to single out certain entries from the table.
###Code
whichflag = 'COMBINED' # you can try HSC, SUPCAM, UVISTA, UDEEP, COMBINED
print('The parent sample includes {} sources'.format(len(cat0)))
cat0 = cat0[cat0['FLAG_{}'.format(whichflag)]==0]
print('Now restricted to {} sources by using FLAG_COMBINED'.format(len(cat0)))
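# NB: photometric columns are astropy MaskedColumn objects; a minimal sketch
# (assuming the Farmer column 'HSC_g_FLUX' is present in `cat0`):
g_flux = cat0['HSC_g_FLUX'].filled(np.nan)  # masked (missing) entries become NaN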
###Output
The parent sample includes 964506 sources
Now restricted to 746976 sources by using FLAG_COMBINED
###Markdown
Correcting for MW extinctionThe following two cells remove the foreground extinction by the Milky Way (MW).
###Code
def mw_corr(tab_in, f_dict, ebv_col='EBV_MW', flx_col='_FLUX', flxerr_col='_FLUXERR',
            only_filt=[], skip_filt=[], verbose=False, out=False):
    """
    Parameters
    ----------
    tab_in : astropy table of COSMOS2020
    f_dict : dictionary with filter info
    ebv_col : name of the `tab_in` column containing the E(B-V) from Milky Way
    flx_col : name of the `tab_in` column containing the flux
    flxerr_col : name of the `tab_in` column containing the flux error bar
    only_filt : list of the filters to be processed (filter names as in `f_dict`)
    skip_filt : list of the filters NOT to be processed (filter names as in `f_dict`)
    verbose : if True, print a verbose output
    out : if True, return a new table with the changes; if False, overwrite `tab_in`
    """
    if 'FLUX' in flx_col: flux = True
    else: flux = False
    if out: tab = tab_in.copy()
    else: tab = tab_in
    ff = f_dict.keys()
    if len(only_filt) > 0: ff = only_filt
    for c in ff:
        if verbose: print('remove MW attenuation in ', c + flx_col, f_dict[c][2])
        if c not in skip_filt:
            atten = f_dict[c][2] * tab[ebv_col]
            if flux: tab[c + flx_col] *= np.power(10., 0.4 * atten)
            else: tab[c + flx_col] -= atten
        else:
            if verbose: print('Skip filter', c)
    if out: return tab
# Here, the function creates a new table but
# it is also possible to overwrite the original table `cat0`
if catversion.lower() == 'classic' and flx != '_FLUX' and flx != '_MAG':
    # it means you are using aperture or AUTO flux/mag, which are not available for IRAC and GALEX
    cat1 = mw_corr(cat0, filt_dict, flx_col=flx, flxerr_col=flxerr, only_filt=filt_use,
                   skip_filt=['IRAC_CH1', 'IRAC_CH2', 'GALEX_FUV', 'GALEX_NUV'], out=True)  # fluxes are in uJy (zero point = 23.9)
    # therefore, IRAC and GALEX have to be taken into account separately:
    mw_corr(cat1, filt_dict, flx_col='_FLUX', flxerr_col='_FLUXERR',
            only_filt=['IRAC_CH1', 'IRAC_CH2', 'GALEX_FUV', 'GALEX_NUV'])
else:
    # otherwise, all filters have the same suffix
    cat1 = mw_corr(cat0, filt_dict, flx_col=flx, flxerr_col=flxerr, only_filt=filt_use, out=True)  # all bands have same column suffix
###Output
_____no_output_____
###Markdown
Correcting for aperture-to-total flux

Circular aperture flux, available only in the *Classic* catalog, can be converted to total flux using a rescaling factor derived for each source from its APER-to-AUTO ratio.
###Code
def aper_to_tot(tab_in, f_dict, flx_col='_FLUX', flxerr_col='_FLUXERR', scale_col='', out_col=None,
                only_filt=[], skip_filt=[], verbose=False, out=False):
    """
    Parameters
    ----------
    tab_in : astropy table of COSMOS2020
    f_dict : dictionary with filter info
    flx_col : name of the `tab_in` column containing the flux
    flxerr_col : name of the `tab_in` column containing the flux error bar
    scale_col : name of the `tab_in` column containing the aper-to-total correction
    out_col : if defined, the rescaled photometry will be saved in a new column (otherwise it overwrites `flx_col`)
    only_filt : list of the filters to be processed (filter names as in `f_dict`)
    skip_filt : list of the filters NOT to be processed (filter names as in `f_dict`)
    verbose : if True, print a verbose output
    out : if True, return a new table with the changes; if False, overwrite `tab_in`
    """
    if 'FLUX' in flx_col: flux = True
    else: flux = False
    if out: tab = tab_in.copy()
    else: tab = tab_in
    ff = f_dict.keys()
    if len(only_filt) > 0: ff = only_filt
    for c in ff:
        if c not in skip_filt:
            if verbose and flux: print('rescale {} to total flux'.format(c + flx_col))
            if verbose and not flux: print('rescale {} to total mag'.format(c + flx_col))
            if flux:
                resc = np.power(10., -0.4 * tab[scale_col])
                if out_col:
                    tab[c + out_col] = tab[c + flx_col] * resc
                    tab[c + out_col + 'ERR'] = tab[c + flxerr_col] * resc  # rescale also error bars not to alter the S/N ratio
                else:
                    tab[c + flx_col] *= resc
                    tab[c + flxerr_col] *= resc
            else:
                if out_col:
                    tab[c + out_col] = tab[c + flx_col] + tab[scale_col]
                else:
                    tab[c + flx_col] += tab[scale_col]
        else:
            if verbose: print('Skip filter', c)
    if out: return tab
# Can be applied only to aperture photometry (not to AUTO or Farmer)
if (flx[-1] == '2' or flx[-1] == '3'):
    aper_to_tot(cat1, filt_dict, flx_col=flx, flxerr_col=flxerr, out_col='_FLUX',
                only_filt=filt_use, skip_filt=['IRAC_CH1', 'IRAC_CH2', 'GALEX_FUV', 'GALEX_NUV'],
                scale_col='total_off' + flx[-1], verbose=True)
###Output
_____no_output_____
###Markdown
Correcting for photometric offset

These are the systematic offsets in flux found by either `LePhare` or `EAZY` by using the COSMOS spectroscopic sample. They depend on the photometry (rescaled aperture-to-total photometry in *Classic*, or the total photometry in *Farmer*) and on the SED fitting code (*LePhare* or *EAZY*). This correction has not been calculated for the AUTO fluxes in *Classic*. In the following we consider *LePhare* as a reference, whose prefix in the catalogue is `lp_` (e.g., `lp_zBEST`). *EAZY* prefix is `ez_`.
###Code
def photo_corr(tab_in, f_dict, versions=('Farmer','lp'), flx_col='_FLUX', only_filt=[], skip_filt=[], verbose=False, out=False):
    """
    Parameters
    ----------
    tab_in : astropy table of COSMOS2020
    f_dict : dictionary with filter info
    versions : tuple with the catalogue version ('Farmer' or 'Classic') and the SED fitting code ('lp' or 'ez')
    flx_col : name of the `tab_in` column containing the flux
    only_filt : list of the filters to be processed (filter names as in `f_dict`)
    skip_filt : list of the filters NOT to be processed (filter names as in `f_dict`)
    verbose : if True, print a verbose output
    out : if True, return a new table with the changes; if False, overwrite `tab_in`
    """
    if 'FLUX' in flx_col: flux = True
    else: flux = False
    if out: tab = tab_in.copy()
    else: tab = tab_in
    ff = f_dict.keys()
    if len(only_filt) > 0: ff = only_filt
    if versions[0]=='Farmer' and versions[1]=='lp': v = 0
    elif versions[0]=='Farmer' and versions[1]=='ez': v = 1
    elif versions[0]=='Classic' and versions[1]=='lp': v = 2
    elif versions[0]=='Classic' and versions[1]=='ez': v = 3
    else:
        print("ERROR: is this catalog version real?", versions)
        return
    for c in ff:
        if verbose: print(' apply photometric offset to ', c + flx_col)
        offset = f_dict[c][3][v]
        if c not in skip_filt and offset != 0.:
            if flux: tab[c + flx_col] *= np.power(10., -0.4 * offset)
            else: tab[c + flx_col] += offset
        else:
            if verbose: print('Skip filter', c)
    if out: return tab

photo_corr(cat1, filt_dict, only_filt=filt_use, versions=(catversion, fitversion))
###Output
_____no_output_____
###Markdown
Final formatting

Define a new astropy table `cat` which will be used in the rest of this Notebook. Before saving the new table, remove from the catalogue the columns that are not used. Also convert flux units, and add AB magnitudes.
###Code
cat = cat1.copy()
# optional: keep only the most commonly used columns (total FLUX, error bars, RA, DEC...)
cat.keep_columns(['ID','ALPHA_J2000','DELTA_J2000'] +
                 [i+'_FLUX' for i in filt_use] + [i+'_FLUXERR' for i in filt_use] +
                 ['lp_zBEST','lp_model','lp_age','lp_dust','lp_Attenuation','lp_zp_2','lp_zq','lp_type'] +
                 ['lp_MNUV','lp_MR','lp_MJ','lp_mass_med','lp_mass_med_min68','lp_mass_med_max68','lp_SFR_med','lp_mass_best'])
# optional: magnitudes in AB system
m0 = +23.9 # fluxes in the catalog are in microJansky
for b in filt_use:
    mag = -2.5*np.log10(cat[b+'_FLUX'].data) + m0  # log of negative flux is masked
    cat.add_column(mag.filled(np.nan), name=b+'_MAG')  # negative flux becomes NaN
# flux conversion from uJy to erg/cm2/s/Hz
if outflx == 'cgs':
    for b in filt_use:
        cat[b+'_FLUX'] *= 1e-29
        cat[b+'_FLUX'].unit = u.erg/u.cm/u.cm/u.s/u.Hz
        cat[b+'_FLUXERR'] *= 1e-29
        cat[b+'_FLUXERR'].unit = u.erg/u.cm/u.cm/u.s/u.Hz
###Output
_____no_output_____
###Markdown
One may want to **rename some columns** in a more user-friendly fashion. For example, the reference photo-z estimates (the ones to use in most cases) are originally named `lp_zBEST` for *LePhare* and `ez_z_phot` for *EAZY*. Once the version is chosen, it is convenient to change the corresponding column to a standard name (e.g., `photoz`) so that the rest of the Notebook will work either way.
###Code
cat.rename_column('lp_zBEST', 'photoz')
cat.rename_column('ALPHA_J2000','RA')
cat.rename_column('DELTA_J2000','DEC')
# Save the re-formatted table as a FITS file.
cat.write(dir_out+'COSMOS2020_{}_processed.fits'.format(catversion),overwrite=True)
###Output
_____no_output_____
###Markdown
----------------------- Visualization of the catalogue's main features We start with printing some stats. The main photoz column (e.g., `lp_zBEST` now renamed `photoz`) tells which class the object belongs to:- $z=$ for stars (classification criteria described in Sect. TBD of W+21)- $0<z\leq10$ for galaxies- $z=99$ for AGN (with the actual redshift stored in the `lp_zq` column)
###Code
nsrc = len(cat)
print('This catalogue has',nsrc,'sources: ')
print(' - unreliable/corrupted ones = ',np.count_nonzero(cat['lp_type']<0.))
print(' - photometric stars = ',np.count_nonzero(cat['lp_type']==1))
print(' - photometric galaxies = ',np.count_nonzero(cat['lp_type']==0))
print(' -- plus other {} objects that are classified as X-ray AGN'.format(np.count_nonzero(cat['lp_type']==2)))
print('\nThis catalogue has',len(cat.columns),'columns')
zmin = min(cat['photoz'][cat['photoz']>0.])
zmax = max(cat['photoz'][cat['photoz']<99.])
print('Redshift range {:.2f}<zphot<{:.2f}'.format(zmin,zmax))
ramin = min(cat['RA']); ramax = max(cat['RA'])
decmin = min(cat['DEC']); decmax = max(cat['DEC'])
print('Area covered [deg]: {:.6f}<RA<{:.6f} & {:.6f}<Dec<{:.6f}'.format(ramin,ramax,decmin,decmax))
###Output
This catalogue has 746976 sources:
- unreliable/corrupted ones = 24303
- photometric stars = 9344
- photometric galaxies = 711290
-- plus other 2039 objects that are classified as X-ray AGN
This catalogue has 58 columns
Redshift range 0.01<zphot<9.99
Area covered [deg]: 149.397215<RA<150.786063 & 1.603265<Dec<2.816009
###Markdown
A quick sky projection
###Code
# Select a random subsample for sake of clarity
a = np.random.randint(0,nsrc,size=20000)
plt.scatter(cat['RA'][a],cat['DEC'][a],color='k',s=0.4,alpha=0.3)
plt.xlim(ramax,ramin)
plt.ylim(decmin,decmax)
plt.xlabel('Right Ascension[°]')
plt.ylabel('Declination[°]')
plt.show()
###Output
_____no_output_____
###Markdown
Redshift distributions and GzK diagram.

`lp_zp_2` is the secondary photoz solution in LePhare. `lp_zq` is the photoz solution in LePhare using QSO/AGN templates.
###Code
plt.hist(cat['photoz'],range=(0.01,10),log=False,bins=50,density=True,color ='grey',alpha=0.5,label = 'Z_BEST')
plt.hist(cat['lp_zp_2'],range=(0.01,10),log=False,bins=50,density=True,color ='orange',histtype='step',label = 'Z_SEC')
plt.hist(cat['lp_zq'][cat['lp_type']==2],range=(0.01,10),log=False,bins=50,density=True,color ='blue',alpha=0.5,label = 'Z_AGN',histtype='step')
plt.title('Galaxy redshift distribution')
plt.xlabel('photo-z')
plt.ylabel('normalized counts')
plt.legend(bbox_to_anchor=(1.04,0),loc='lower left')
plt.show()
color1 = cat['HSC_g_MAG'] - cat['HSC_z_MAG']
color2 = cat['HSC_z_MAG'] - cat['UVISTA_Ks_MAG']
zmap = cat['photoz']
sel = (cat['photoz']>=0.) & (cat['photoz']<6.) & (cat['UVISTA_Ks_MAG']<24.5)
plt.scatter(color1[sel],color2[sel],c=zmap[sel],cmap='hot',s=0.4,alpha=0.3)
plt.xlim(-2,5)
plt.ylim(-2,5)
cbar=plt.colorbar()
cbar.set_label('photo-z')
plt.xlabel('g-z')
plt.ylabel('z-Ks')
plt.title('g-z-K diagram and redshift map')
plt.show()
###Output
_____no_output_____
###Markdown
Show the galaxy models from Bruzual & Charlot (2003) that have been used in *LePhare* to measure physical quantities. These are the basic (dust-free) templates in the rest-frame reference system, stored in an ancillary file with HDF5 format. The intrinsic models are modified a-posteriori by adding redshift, dust attenuation, and intervening IGM absorption.

**Structure of the HDF5 file:** each BC03 model is a dataset (`/model1`, `/model2`, etc.) with a list of spectra (e.g., `/model1/spectra`) defined at the wavelength points stored in the attribute `lambda[AA]`. The number of spectra corresponds to the number of ages stored in the attribute 'age' for each model dataset.
###Code
hdf = h5py.File("COSMOS2020_LePhare_v2_20210507_LIB_COMB.hdf5","r")
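# A quick look at the structure described above (standard h5py calls; uncomment to run):
# print(list(hdf.keys()))                            # -> ['model1', 'model2', ...]
# print(hdf['/model1/spectra'].attrs['lambda[AA]'])  # wavelength grid of the spectra
# print(hdf['/model1'].attrs['age'])                 # one age per spectrum in the dataset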
def model_check(models, wvl, labels, title='SED templates (rest frame)'):
    """
    This function plots the SEDs (F_lambda vs lambda and F_nu vs lambda)
    of the models specified by the user.
    """
    # from a matplotlib colormap, create RGB colors for the figure
    cm = plt.get_cmap('gist_rainbow')
    cc = [cm(1.*i/len(labels)) for i in range(len(labels))]
    for i, f_lam in enumerate(models):
        plt.plot(wvl, f_lam, color=cc[i], label=labels[i])  # this plots F_lambda
    plt.legend(bbox_to_anchor=(1.04,0), loc='lower left', ncol=3)
    plt.title(title)
    plt.yscale('log')
    plt.xscale('log')
    plt.xlim(800., 50000.)  # focus on wvl range from UV to mid-IR
    plt.ylim(1e-14, 1e-7)
    plt.xlabel('wavelength [Å]')
    plt.ylabel('Flux [erg/cm^2/s/Å]')
    plt.show()
    wvl = wvl * u.AA  # add units to ease conversion (avoid in-place multiplication of the input array)
    for i, f_lam in enumerate(models):
        f_lam = f_lam * u.erg/u.cm/u.cm/u.s/u.AA
        f_nu = f_lam*(wvl**2)/const.c
        f_nu = f_nu.to(u.erg/u.cm/u.cm/u.s/u.Hz)
        plt.plot(wvl, f_nu, color=cc[i], label=labels[i])
    plt.legend(bbox_to_anchor=(1.04,0), loc='lower left', ncol=3)
    plt.title(title)
    plt.yscale('log')
    plt.xscale('log')
    plt.xlim(800., 50000.)
    plt.ylim(1e-26, 1e-18)
    plt.xlabel('wavelength [Å]')
    plt.ylabel('Flux [erg/cm^2/s/Hz]')
    plt.show()

nmod = 3  # which model
model_check(hdf['/model{}/spectra'.format(nmod)], hdf['/model{}/spectra'.format(nmod)].attrs['lambda[AA]'],
            ['age={:.2f}'.format(i/1e9) for i in hdf['/model{}'.format(nmod)].attrs['age']],
            title='BC03 model {}'.format(nmod))
###Output
_____no_output_____
###Markdown
Show the observed SED of a given object, and overplot its BC03 template (after flux rescaling and dust attenuation). The template is the one resulting in the best fit (smallest $\chi^2$) according to *LePhare*. The SED fitting code takes also into account the absorption of intervening ISM and the flux contamination by strong nebular emission lines. However, for sake of simplicity, those two elements are not included here in the Notebook. To visualize *EAZY* templates, a different python script is available upon request to Gabriel Brammer ([contacts](https://gbrammer.github.io/))
###Code
# we need this to compute luminosity distances
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
# note that COSMOS2020 SED fitting assumes 'vanilla' cosmology
def dust_ext(w, law=0, ebv=0.):
    law1 = np.loadtxt("SB_calzetti.dat").T
    law2 = np.loadtxt("extlaw_0.9.dat").T
    ext_w = [law1[0], law2[0]]
    ext_k = [law1[1], law2[1]]
    if ebv > 0.:
        k = np.interp(w, ext_w[law], ext_k[law])
        return np.power(10., -0.4*ebv*k)
    else:
        return 1.
id_gal = 354321 # the ID number of the galaxy you want to plot; IDs are different for Farmer and Classic
nid = np.where(cat['ID']==id_gal)[0][0]
wl_obs = np.array([filt_dict[i][0] for i in filt_use]) # wavelength center of the filter used
wl_obserr = np.array([filt_dict[i][1] for i in filt_use])/2.
fnu_obs = np.array([cat[i+'_FLUX'][nid] for i in filt_use]) # Reads the measured magnitude at that wavelength
fnu_obserr = np.array([cat[i+'_FLUXERR'][nid] for i in filt_use]) #Magnitude associated +/-error
sel = fnu_obs > 0.
if cat['{}_FLUX'.format(filt_use[0])].unit.to_string() == 'uJy':
    plt.ylabel('Flux [$\mu$Jy]')
    plt.errorbar(wl_obs[sel], fnu_obs[sel], xerr=wl_obserr[sel], yerr=fnu_obserr[sel], fmt='.k', ecolor='k', capsize=3, elinewidth=1, zorder=2)
    ymin = min(fnu_obs[sel])*0.5
    ymax = max(fnu_obs[sel]+fnu_obserr[sel])*6
else:  # assuming it's cgs
    plt.ylabel('Flux [$10^{-29}$ erg/cm$^2$/s/Hz]')
    plt.errorbar(wl_obs[sel], fnu_obs[sel]*1e29, xerr=wl_obserr[sel], yerr=fnu_obserr[sel]*1e29, fmt='.k', ecolor='k', capsize=3, elinewidth=1, zorder=2)
    ymin = min(fnu_obs[sel])*1e29*0.5
    ymax = max(fnu_obs[sel]+fnu_obserr[sel])*1e29*6
# Using the redshift of best-fit template
zp = cat['photoz'][nid]
m = int(cat['lp_model'][nid])
wvl = hdf['/model{}/spectra'.format(m)].attrs['lambda[AA]'] *u.AA
t = np.abs(hdf['/model{}'.format(m)].attrs['age']-cat['lp_age'][nid]).argmin()
flam_mod = hdf['/model{}/spectra'.format(m)][t,:] *u.erg/u.cm/u.cm/u.s/u.AA
fnu_mod = flam_mod*(wvl**2)/const.c
# Calculates the flux in units of [uJy] also applying dust ext
fnu_mod = fnu_mod.to(u.erg/u.cm/u.cm/u.s/u.Hz) * dust_ext(wvl.value,law=cat['lp_Attenuation'][nid],ebv=cat['lp_dust'][nid])
# Rescale the template
mscal = hdf['/model{}'.format(m)].attrs['mass'][t]/10**cat['lp_mass_best'][nid] # luminosity/mass resc
dm = cosmo.luminosity_distance(zp)/(10*u.pc) # distance modulus
offset = dm.decompose()**2*mscal/(1+zp) # all together * (1+z) factor
# Plot the best-fit model
plt.plot(wvl*(1+zp),fnu_mod.to(u.uJy).value/offset,color='red',alpha=1,label='model',zorder=1)
# Show where nebular emission lines would potentially boost the flux
plt.vlines(3727*(1+zp),ymin,ymax,label='[OII]',zorder=0,color='0.3',ls=':')
plt.vlines(5007*(1+zp),ymin,ymax,label='[OIII]b',zorder=0,color='0.3',ls=':')
plt.vlines(4861*(1+zp),ymin,ymax,label='Hb',zorder=0,color='0.3',ls=':') # H_beta
plt.vlines(6563*(1+zp),ymin,ymax,label='Ha',zorder=0,color='0.3',ls=':') # H_alpha
plt.xscale('log')
plt.yscale('log')
plt.xlim(1000,100000)
plt.ylim(ymin,ymax)
plt.xlabel('wavelength [Å]')
print("The COSMOS fitted model is model number",m)
print('The offset applied is',offset,'and a redshift of',zp)
plt.show()
###Output
The COSMOS fitted model is model number 9
The offset applied is 865897.3785545913 and a redshift of 0.7233
###Markdown
----------------------------
Classic diagnostics

Also useful to learn what columns contain the galaxy physical quantities. Columns for the *LePhare* version:

- Absolute magnitudes have the `lp_M` prefix followed by the filter name in capital letters (e.g., `lp_MI` for the *i* band).
- The most reliable stellar mass estimate is `lp_mass_med`, since it is the median of the PDF$(M_\ast)$; `lp_mass_best` is the $M_\ast$ of the best-fit template, which is actually not the best to use.
- Similarly to $M_\ast$, also the other physical quantities should be used in their `lp_{}_med` version (e.g., `lp_SFR_med`).

**WARNING:** the SFR estimates included in the COSMOS2020 catalogs have not been thoroughly tested, and are not recommended for high-level scientific projects. Nonetheless, they can be useful for sanity checks like in this case.
###Code
# Plot the NUV-r vs r-J diagram in a given z bin
zlow=0.5
zupp=0.8
# Cut the K magnitude at K<24 to remove noisy galaxies and stellar sequence
sel = (cat['UVISTA_Ks_MAG']<24) & (cat['photoz']<zupp) & (cat['photoz']>zlow) & (cat['lp_mass_med']>7)
catselec=cat[sel]
plt.scatter(catselec['lp_MR']-catselec['lp_MJ'],catselec['lp_MNUV']-catselec['lp_MR'],c=catselec['lp_SFR_med']-catselec['lp_mass_med'],s=0.3,alpha=0.05,cmap='hot',vmin=-12)
plt.clim(-5,-12)
clb = plt.colorbar()
clb.set_label('Specific SFR')
plt.ylim(-1,6.5)
plt.xlim(-2,2)
plt.xlabel('R-K')
plt.ylabel('NUV-R')
plt.title('NUV-R vs R-K plot')
plt.show()
# Plot the SFR vs stellar mass diagram
# WARNING: the SFR estimates from SED fitting without far-IR data (as in this case) are not particularly reliable, use them with caution
plt.hexbin(catselec['lp_mass_med'],catselec['lp_SFR_med'],gridsize=(50,50),extent=(5,13,-7,4),mincnt=5)
plt.ylim(-7,4)
plt.xlim(6,13)
plt.title('SFR vs Stellar mass')
plt.ylabel('log SFR [$M_\odot$/yr]')
plt.xlabel('log Stellar mass [$M_\odot$]')
plt.show()
###Output
_____no_output_____
###Markdown
--------------------------------------------------
A simple machine learning application: GMM

Mixture models are probabilistic models that represent a number of subgroups within a population. **Gaussian mixture models (GMM)** do so by modelling the data set through a number of Gaussians (Duda et al., 1973). Their main assumption is that all the data points within a certain data set can be generated from a mixture of a finite number of Gaussian distributions with unknown means and standard deviations. In this section we run the GMM algorithm on the COSMOS2020 galaxy sample, setting 4 Gaussian components that will divide the data in the same number of clusters (each data point being assigned to the cluster it has the most probability to belong to).
###Code
def mlinput(dat, colors, flux=False, cname="{}_MAG", verbose=True):
    """
    mlinput(dat, colors, flux=False, cname="{}_MAG", verbose=True)

    This function helps preparing broad-band colors as input features
    of a ML algorithm.

    Parameters
    ----------
    dat : NxM astropy.Table with M magnitudes (or fluxes) for N objects
    colors : list of str, each element is a pair of filters to compute colors (should be coherent with `dat` column names for filters)
    flux : bool, set to True if `dat` contains fluxes instead of magnitudes
    cname : str, format of the magnitude (or flux) column names
    verbose : bool, set to True to print out more info

    Output
    ------
    array to be used as input in ML applications; shape is N objects x M features
    """
    ngal = len(dat)
    ndim = len(colors)
    if verbose: print('\nDimensions of the param space:')
    datin = np.empty([ngal, ndim])
    # Prepare the colors
    for i, col in enumerate(colors):
        col1 = cname.format(col[0]); col2 = cname.format(col[1])
        if verbose: print('dim#{} = '.format(i), col1, ' - ', col2)  # just a sanity check
        if flux:
            datin[:, i] = -2.5*np.log10(dat[col1]/dat[col2])
            datin[:, i][(dat[col1] < 0.) | (dat[col2] < 0.)] = np.nan
        else:
            datin[:, i] = dat[col1] - dat[col2]
            datin[:, i][(dat[col1] < 0.) | (dat[col2] < 0.)] = np.nan
    return datin
# Filters to use
filt_pick = ['CFHT_u','HSC_g','HSC_r','HSC_i','HSC_z','UVISTA_Y','UVISTA_J','UVISTA_H','UVISTA_Ks','IRAC_CH1']
# Colors to make
color_pick = [(filt_pick[i],filt_pick[i+1]) for i in range(len(filt_pick)-1)] # just use pair-wise colors (u-g, g-r, r-i, etc.)
# Create a parameter space of obs. fr. colors
color_in = mlinput(cat[cat['lp_type']==0],color_pick,flux=True,cname='{}_FLUX')
# Run the GMM algorithm
X = color_in[np.isfinite(color_in).all(axis=1)] # features, only for objects where all of themm are defined (no color is NaN or inf)
Xplus = cat[cat['lp_type']==0][np.isfinite(color_in).all(axis=1)] # extra info (photoz etc)
gmm = mixture.GaussianMixture(n_components=4, covariance_type='full').fit(X)
labels = gmm.predict(X)
probs = gmm.predict_proba(X)
print(gmm.aic(X)) # print the Akaike Information Criterion (AIC)
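# A minimal model-selection sketch (the 1-8 component range is an assumption):
# rather than fixing n_components=4, one could pick the value minimising the AIC.
# aics = [mixture.GaussianMixture(n_components=k, covariance_type='full').fit(X).aic(X)
#         for k in range(1, 9)]
# best_k = int(np.argmin(aics)) + 1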
# Project the data in a color-color space, distinguishing the cluster classification
plt.figure(figsize=(7,4))
colA = 2; colB = -2
plt.scatter(X[:, colA], X[:, colB], c=labels, s=0.3, cmap='Accent');
plt.xlabel('{0}'.format(color_pick[colA]))
plt.ylabel('{}'.format(color_pick[colB]))
plt.xlim(-2,6)
plt.ylim(-7,5)
plt.show()
# Show the redshift distribution of the GMM clusters
for i in range(0, 4):
    sel = labels == i
    catselec = Xplus[sel]
    plt.hist(catselec['photoz'], bins=30, density=True, histtype='step', linewidth=2, label='Cluster #{}'.format(i+1))
plt.legend(loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
One of the main advantages of GMM is that the probabilistic description of the data distribution can be then used to create synthetic data samples. Although these "mock" samples do not have galaxy physical properties attached, they can be helpful for various tests (e.g. Monte Carlo extractions re-shuffling the photometry).
###Code
# Create 3 synthetic "mocks" of 500 galaxies each
mocks = []
for i in range(3):
    modx = gmm.sample(n_samples=500)
    mocks.append(modx)
# Just visualize one color distribution
colA = 8
# the mocks
plt.hist(mocks[0][0][:,colA],bins=60,density=True,range=[-1,3],histtype='step',color='red',linewidth=2,alpha=1,label='Model 1')
plt.hist(mocks[1][0][:,colA],bins=60,density=True,range=[-1,3],histtype='step',color='blue',linewidth=2,alpha=1,label='Model 2')
plt.hist(mocks[2][0][:,colA],bins=60,density=True,range=[-1,3],histtype='step',color='green',linewidth=2,alpha=1,label='Model 3')
# and the original data
plt.hist(X[:,colA],bins=60,density=True,range=[-1,3],histtype='stepfilled',color='black',alpha=0.3,label='COSMOS2020')
plt.legend(loc='upper right')
plt.xlabel('{}-{} color'.format(color_pick[colA][0],color_pick[colA][1]))
plt.ylabel('normalized counts')  # density=True above, so bars are normalized
plt.show()
###Output
_____no_output_____ |
notebooks/eflint3-features/1_clauses.ipynb | ###Markdown
1. Flexible type-declarationsType-declarations consist of a number of clauses, some of which are used only in, for example, act- or duty-type declarations, whereas others can be used in all type-declarations. In the first versions of eFLINT, and depending on which kind of type is declared, certain clauses were mandatory and needed to be written in a fixed order. In eflint-3.0, a lot more flexibility is introduced, making additional clauses optional and allowing many clauses to be written in any order. This notebook clarifies the exact rules applicable to the different kinds of type-declarations.Since eflint-2.0, there are two types of type-declarations: declarations that *introduce* a new type (or replace an existing one) and declarations that *extend* an existing type. The clauses that can be written in a type extension are henceforth referred to as the *accumulating* clauses; the other clauses are *domain-related* clauses. The sections of this notebook discuss the domain-related and accumulating clauses of the different kinds of type-declarations. 1.2 Fact-type declarations domain-related clausesThe domain-related clauses establish the *domain* from which the values are taken that 'populate' the declared type, such as the `Identified by ...` clause of fact-type declarations. This clause is optional, with the default `Identified by String` being implicitly inserted when omitted. Thus the following declarations are identical.
###Code
Fact user
Fact user Identified by String
###Output
_____no_output_____
###Markdown
accumulating clausesThe accumulating clauses of a type-declaration can be written in any order. In fact, multiple occurrences of the same accumulating clause can appear in a single type-declaration. Internally, this is realised by considering such clauses as extensions of the type being declared. The following declarations of `admin` are therefore identical. Accumulating clauses are always written behind domain-related clauses.
###Code
Fact logged-in Identified by user
Fact access-rights-of Identified by user * int
Fact admin Identified by user
Conditioned by logged-in(user) // can also be comma-separated
Conditioned by access-rights-of(user, 1)
Fact admin Identified by user
Extend Fact admin
Conditioned by logged-in(user)
Extend Fact admin
Conditioned by access-rights-of(user, 1)
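// (Added sketch, not part of the original notebook) An act-type assembled
// from the clause descriptions in section 1.3 below; the syntax is
// illustrative only and left commented out, as it has not been verified
// against an eFLINT interpreter:
// Act grant-access Actor admin Recipient user
//   Creates logged-in(user)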
###Output
_____no_output_____
###Markdown
Accumulating clauses are called accumulating because multiple of these can be written, whether in a single declaration or across various declarations, and their effects accumulate. The accumulating clauses of a fact-type declaration are:* Conditioned by* Holds when* Derived from All conditions specified using `Conditioned by` clauses must hold true for an instance of the specified type to be enabled. If (at least) one derivation rule (`Holds when` or `Derived from`) derives an instance of a type then the instance holds true if all its conditions hold true. 1.3 Event-type declarations domain-related clausesInstead of `Identified by`, event-type declarations use `Related to` followed by a list of comma-separated types specifying the formal parameters of the type, bound in the type's clauses. The `Related to` clause is optional. When omitted, the defined type has no parameters and has only one instance, identified by the name of the type. accumulating clausesThe accumulating clauses of an event-type are:* Conditioned by* Holds when* Derived from* Creates* Terminates* Obfuscates* Syncs withThe effects of all post-conditions (`Creates`, `Terminates` and `Obfuscates` clauses) manifest when an action/event is triggered and all instances computed from `Syncs with` clauses demand synchronisation when an action/event is triggered. 1.4 Act-type declarations domain-related clausesAn act-type declaration associates a performing actor and an optional recipient actor with the type using the `Actor` and `Recipient` clauses respectively. Both can be omitted. If `Actor` is omitted, it is implicitly present as `Actor actor`, with `actor` a builtin type with values taken from the domain of strings. If `Recipient` is omitted, then only one actor is associated with the type, namely the performing actor.The one or two actors of an act-type are the first formal parameters of the type. Additional formal parameters can be specified with a `Related to` clause. The domain-related clauses are to be written in the order they are mentioned here. accumulating clausesThe accumulating clauses of an act-type are:* Conditioned by* Holds when* Derived from* Creates* Terminates* Obfuscates* Syncs withThe effects of all post-conditions (`Creates`, `Terminates` and `Obfuscates` clauses) manifest when an action/event is triggered and all instances computed from `Syncs with` clauses demand synchronisation when an action/event is triggered. 1.5 Duty-type declarations domain-related clausesA duty-type declaration associates a duty-holding actor and a claimant actor with the type using the `Holder` and `Claimant` clauses respectively. Neither can be omitted. The two actors of a duty-type are the first formal parameters of the type. Additional formal parameters can be specified with a `Related to` clause. The domain-related clauses are to be written in the order they are mentioned here. accumulating clausesThe accumulating clauses of a duty-type are:* Conditioned by* Holds when* Derived from* Violated when* Enforced byIf any of the violation conditions holds true while the duty holds true, the duty is considered violated. Similarly, if any of the actions computed from the expressions written in the `Enforced by` clauses is enabled while the duty holds true, the duty is considered violated. 1.6 Domain constraintsImmediately following the domain-related clauses of a type-declaration, an optional *domain constraint* can be written.
For example, the domain constraint `Where ...` in the example below ensures that one cannot be one's own parent, i.e. that for all `A`, `parent-of(A,A)` is not a valid instance of the type `parent-of`:
###Code
Fact person
Fact parent-of Identified by person1 * person2 Where person1 != person2
###Output
_____no_output_____
###Markdown
The effects of domain-constraints are mostly noticeable when the instances of a type with a domain constraint are enumerable. For example, when executable actions are listed (e.g. in the REPL) or when derivation rules are evaluated. The following extension of `parent-of` makes it that every pair of two different persons are considered each other's parent.
###Code
+person(Alice). +person(Bob). +person(Chloe).
Extend Fact parent-of Derived from parent-of(person1, person2)
###Output
_____no_output_____ |
gQuant/plugins/gquant_plugin/notebooks/01_tutorial.ipynb | ###Markdown
Introduction to greenflow**greenflow** is a set of open-source examples for Quantitative Analysis tasks:- Data preparation & feat. engineering- Alpha seeking modeling- Technical indicators- BacktestingIt is GPU-accelerated by leveraging [**RAPIDS.ai**](https://rapids.ai) technology, and has Multi-GPU and Multi-Node support.greenflow computing components are oriented around its plugins and task graph. Download example datasetsBefore getting started, let's download the example datasets if not present.
###Code
! ((test ! -f './data/stock_price_hist.csv.gz' || test ! -f './data/security_master.csv.gz') && \
cd .. && bash download_data.sh) || echo "Dataset is already present. No need to re-download it."
###Output
Dataset is already present. No need to re-download it.
###Markdown
About this notebookIn this tutorial, we are going to use greenflow to do a simple quant job. The job tasks are listed below: 1. load csv stock data. 2. filter out the stocks that have average volume smaller than 50. 3. sort the stock symbols and datetime. 4. add rate of return as a feature into the table. 5. in two branches, compute the mean volume and mean return. 6. read the file containing the stock symbol names, and join the computed dataframes. 7. output the result in csv files. TaskGraph playgroundRun the following greenflow code to start an empty TaskGraph where a computation graph can be created. You can follow the steps as listed below.
###Code
import sys; sys.path.insert(0, '..')
from greenflow.dataframe_flow import TaskGraph
task_graph = TaskGraph()
task_graph.draw()
###Output
_____no_output_____
###Markdown
Step by Step to build your first task graph Create Task node to load the included stock csv file Explore the data and visualize it Clean up the Task nodes for next steps Filter the data and compute the rate of return feature Save current TaskGraph for a composite Task node Clean up the redundant feature computation Task nodes Compute the average volume and returns Dump the dataframe to csv files Just in case you cannot follow along, here you can load the tutorial taskgraph from the file. The first one is the graph to calculate the return feature.
###Code
task_graph = TaskGraph.load_taskgraph('../taskgraphs/get_return_feature.gq.yaml')
task_graph.draw()
###Output
_____no_output_____
###Markdown
Load the full graph and click on the `run` button to see the result
###Code
task_graph = TaskGraph.load_taskgraph('../taskgraphs/tutorial_intro.gq.yaml')
task_graph.draw()
###Output
_____no_output_____
###Markdown
About Task graphs, nodes and pluginsQuant processing operators are defined as nodes that operates on **cuDF**/**dask_cuDF** dataframes.A **task graph** is a list of tasks composed of greenflow nodes.The cell below contains the task graph described before.
###Code
import warnings; warnings.simplefilter("ignore")
csv_average_return = 'average_return.csv'
csv_average_volume = 'average_volume.csv'
csv_file_path = './data/stock_price_hist.csv.gz'
csv_name_file_path = './data/security_master.csv.gz'
from greenflow.dataframe_flow import TaskSpecSchema
# load csv stock data
task_csvdata = {
TaskSpecSchema.task_id: 'stock_data',
TaskSpecSchema.node_type: 'CsvStockLoader',
TaskSpecSchema.conf: {'file': csv_file_path},
TaskSpecSchema.inputs: {},
TaskSpecSchema.module: "greenflow_gquant_plugin.dataloader"
}
# filter out the stocks that has average volume smaller than 50
task_minVolume = {
TaskSpecSchema.task_id: 'volume_filter',
TaskSpecSchema.node_type: 'ValueFilterNode',
TaskSpecSchema.conf: [{'min': 50.0, 'column': 'volume'}],
TaskSpecSchema.inputs: {'in': 'stock_data.cudf_out'},
TaskSpecSchema.module: "greenflow_gquant_plugin.transform"
}
# sort the stock symbols and datetime
task_sort = {
TaskSpecSchema.task_id: 'sort_node',
TaskSpecSchema.node_type: 'SortNode',
TaskSpecSchema.conf: {'keys': ['asset', 'datetime']},
TaskSpecSchema.inputs: {'in': 'volume_filter.out'},
TaskSpecSchema.module: "greenflow_gquant_plugin.transform"
}
# add rate of return as a feature into the table
task_addReturn = {
TaskSpecSchema.task_id: 'add_return_feature',
TaskSpecSchema.node_type: 'ReturnFeatureNode',
TaskSpecSchema.conf: {},
TaskSpecSchema.inputs: {'stock_in': 'sort_node.out'},
TaskSpecSchema.module: "greenflow_gquant_plugin.transform"
}
# read the stock symbol name file and join the computed dataframes
task_stockSymbol = {
TaskSpecSchema.task_id: 'stock_name',
TaskSpecSchema.node_type: 'StockNameLoader',
TaskSpecSchema.conf: {'file': csv_name_file_path },
TaskSpecSchema.inputs: {},
TaskSpecSchema.module: "greenflow_gquant_plugin.dataloader"
}
# In two branches, compute the mean volume and mean return separately
task_volumeMean = {
TaskSpecSchema.task_id: 'average_volume',
TaskSpecSchema.node_type: 'AverageNode',
TaskSpecSchema.conf: {'column': 'volume'},
TaskSpecSchema.inputs: {'stock_in': 'add_return_feature.stock_out'},
TaskSpecSchema.module: "greenflow_gquant_plugin.transform"
}
task_returnMean = {
TaskSpecSchema.task_id: 'average_return',
TaskSpecSchema.node_type: 'AverageNode',
TaskSpecSchema.conf: {'column': 'returns'},
TaskSpecSchema.inputs: {'stock_in': 'add_return_feature.stock_out'},
TaskSpecSchema.module: "greenflow_gquant_plugin.transform"
}
task_leftMerge1 = {
TaskSpecSchema.task_id: 'left_merge1',
TaskSpecSchema.node_type: 'LeftMergeNode',
TaskSpecSchema.conf: {'column': 'asset'},
TaskSpecSchema.inputs: {'left': 'average_return.stock_out',
'right': 'stock_name.stock_name'},
TaskSpecSchema.module: "greenflow_gquant_plugin.transform"
}
task_leftMerge2 = {
TaskSpecSchema.task_id: 'left_merge2',
TaskSpecSchema.node_type: 'LeftMergeNode',
TaskSpecSchema.conf: {'column': 'asset'},
TaskSpecSchema.inputs: {'left': 'average_volume.stock_out',
'right': 'stock_name.stock_name'},
TaskSpecSchema.module: "greenflow_gquant_plugin.transform"
}
# output the result in csv files
task_outputCsv1 = {
TaskSpecSchema.task_id: 'output_csv1',
TaskSpecSchema.node_type: 'OutCsvNode',
TaskSpecSchema.conf: {'path': csv_average_return},
TaskSpecSchema.inputs: {'df_in': 'left_merge1.merged'},
TaskSpecSchema.module: "greenflow_gquant_plugin.analysis"
}
task_outputCsv2 = {
TaskSpecSchema.task_id: 'output_csv2',
TaskSpecSchema.node_type: 'OutCsvNode',
TaskSpecSchema.conf: {'path': csv_average_volume },
TaskSpecSchema.inputs: {'df_in': 'left_merge2.merged'},
TaskSpecSchema.module: "greenflow_gquant_plugin.analysis"
}
###Output
_____no_output_____
###Markdown
In Python, a greenflow task-spec is defined as a dictionary with the following fields:- `id`- `type`- `conf`- `inputs`- `filepath`- `module`As a best practice, we recommend using the `TaskSpecSchema` class for these fields, instead of strings.The `id` for a given task must be unique within a task graph. To use the result(s) of other task(s) as input(s) of a different task, we use the id(s) of the former task(s) in the `inputs` field of the next task.The `type` field contains the node type to use for the compute task. greenflow includes a collection of node classes. These can be found in `greenflow.plugin_nodes`. Click [here](node_class_example) to see a greenflow node class example.The `conf` field is used to parameterise a task. It lets you access user-set parameters within a plugin (such as `self.conf['min']` in the example above). Each node defines the `conf` json schema. The greenflow UI can use this schema to generate the proper form UI for the inputs. It is recommended to use the UI to configure the `conf`. The `filepath` field is used to specify a python module where a custom plugin is defined. It is optional if the plugin is in the `plugin_nodes` directory, and mandatory when the plugin is somewhere else. In a different tutorial, we will learn how to create custom plugins.The `module` field is optional; it tells greenflow the name of the module that the node type comes from. If it is not specified, greenflow will search for it among all the customized modules. A custom node schema will look something like this:```custom_task = { TaskSpecSchema.task_id: 'custom_calc', TaskSpecSchema.node_type: 'CustomNode', TaskSpecSchema.conf: {}, TaskSpecSchema.inputs: ['some_other_node'], TaskSpecSchema.filepath: 'custom_nodes.py'}``` Below, we compose our task graph and visualize it as a graph.
###Code
from greenflow.dataframe_flow import TaskGraph
# list of nodes composing the task graph
task_list = [
task_csvdata, task_minVolume, task_sort, task_addReturn,
task_stockSymbol, task_volumeMean, task_returnMean,
task_leftMerge1, task_leftMerge2,
task_outputCsv1, task_outputCsv2]
task_graph = TaskGraph(task_list)
task_graph.draw()
###Output
_____no_output_____
###Markdown
We will use the `save_taskgraph` method to save the task graph to a **yaml file**. That will allow us to re-use it in the future.
###Code
task_graph_file_name = '01_tutorial_task_graph.gq.yaml'
task_graph.save_taskgraph(task_graph_file_name)
###Output
_____no_output_____
###Markdown
Here is a snippet of the content in the resulting yaml file:
###Code
%%bash -s "$task_graph_file_name"
head -n 19 $1
###Output
- id: stock_data
type: CsvStockLoader
conf:
file: ./data/stock_price_hist.csv.gz
inputs: {}
module: greenflow_gquant_plugin.dataloader
- id: volume_filter
type: ValueFilterNode
conf:
- column: volume
min: 50.0
inputs:
in: stock_data.cudf_out
module: greenflow_gquant_plugin.transform
- id: sort_node
type: SortNode
conf:
keys:
- asset
###Markdown
The yaml file describes the computation tasks. We can load it and visualize it as a graph.
###Code
task_graph = TaskGraph.load_taskgraph(task_graph_file_name)
task_graph.draw()
###Output
_____no_output_____
###Markdown
Building a task graphRunning the task graph is the next logical step. Nevertheless, it can optionally be built before running it.By calling the `build` method, the graph is traversed without running the dataframe computations. This could be useful to inspect the column names and types, validate that the plugins can be instantiated, and check for errors.The output of `build` is a dictionary containing an instance of each task.In the example below, we inspect the column names and types for the inputs and outputs of the `left_merge1` task:
###Code
from pprint import pprint
task_graph.build()
print('Output of build task graph are instances of each task in a dictionary:\n')
print(str(task_graph))
# output meta in 'left_merge_1' node
print('output meta in outgoing dataframe:\n')
pprint(task_graph['left_merge1'].meta_setup())
###Output
output meta in outgoing dataframe:
MetaData(inports={'left': {}, 'right': {}}, outports={'merged': {'asset': 'int64', 'returns': 'float64', 'asset_name': 'object'}})
###Markdown
Running a task graphTo execute the graph computations, we will use the `run` method. If the `Output_Collector` task node is not added to the graph, an output list can be fed to the run method. The result can be displayed in a rich mode if the `formated` argument is turned on.`run` also takes an optional `replace` argument, which is used and explained later on
###Code
outputs = ['stock_data.cudf_out', 'output_csv1.df_out', 'output_csv2.df_out']
task_graph.run(outputs=outputs, formated=True)
###Output
_____no_output_____
###Markdown
The result can be used as a tuple or dictionary.
###Code
result = task_graph.run(outputs=outputs)
csv_data_df, csv_1_df, csv_2_df = result
result['output_csv2.df_out']
###Output
_____no_output_____
###Markdown
We can profile the running time of each computation node by turning on the profiler.
###Code
outputs = ['stock_data.cudf_out', 'output_csv1.df_out', 'output_csv2.df_out']
csv_data_df, csv_1_df, csv_2_df = task_graph.run(outputs=outputs, profile=True)
###Output
id:stock_data process time:3.168s
id:volume_filter process time:0.021s
id:sort_node process time:0.102s
id:add_return_feature process time:0.058s
id:average_volume process time:0.013s
id:average_return process time:0.014s
id:stock_name process time:0.008s
id:left_merge1 process time:0.002s
id:output_csv1 process time:0.015s
id:left_merge2 process time:0.002s
id:output_csv2 process time:0.014s
###Markdown
Most of the time is spent on the csv file processing. This is because we have to convert the time strings to the proper format on the CPU. Let's inspect the content of `csv_1_df` and `csv_2_df`.
###Code
print('csv_1_df content:')
print(csv_1_df)
print('\ncsv_2_df content:')
print(csv_2_df)
###Output
csv_1_df content:
asset returns asset_name
0 869301 -0.005854 VNRBP
1 3159 0.000315 ISBC
2 8044 0.000516 SGU
3 2123 0.000801 CGNX
4 22873 -0.001068 RENN
... ... ... ...
4995 3518 0.001136 MPWR
4996 707774 -0.000417 MODN
4997 4856 0.000979 WIRE
4998 22461 -0.000243 MY
4999 1973 -0.002916 BOCH
[5000 rows x 3 columns]
csv_2_df content:
asset volume asset_name
0 24568 203.002419 ORN
1 2557 429.953169 EMMS
2 4142 487.567188 RIGL
3 869369 172.961884 IBP
4 705684 107.933333 USMD
... ... ... ...
4995 869374 279.946042 WATT
4996 701990 302.973772 FRGI
4997 24636 136.807107 SVVC
4998 6190 2069.864690 FNF
4999 24153 887.397596 DAR
[5000 rows x 3 columns]
###Markdown
Also, please notice that two resulting csv files have been created:- average_return.csv- average_volume.csv
###Code
print('\ncsv files created:')
!find . -iname "*average*.csv"
###Output
csv files created:
###Markdown
SubgraphsA nice feature of task graphs is that we can evaluate any **subgraph**. For instance, if you are only interested in the `average volume` result, you can run only the tasks which are relevant for that computation.If we do not want to re-run tasks, we can use the `replace` argument of the `run` function with a `load` option.The `replace` argument needs to be a dictionary where each key is the task/node id. The values are a replacement task-spec dictionary (i.e. each key is a spec overload, and its value is what to overload with).In the example below, instead of re-running the `stock_data` node to load a csv file into a `cudf` dataframe, we will use its dataframe output to load from it.
###Code
replace = {
'stock_data': {
'load': {
'cudf_out': csv_data_df
},
'save': True
}
}
(volume_mean_df, ) = task_graph.run(outputs=['average_volume.stock_out'],
replace=replace)
print(volume_mean_df)
###Output
asset volume
0 22705 67.929114
1 869315 151.844770
2 2526 88.337888
3 3939 91.674194
4 705893 8616.574853
... ... ...
4995 869571 639.127042
4996 7842 709.077851
4997 701570 110.977778
4998 701705 970.310847
4999 4859 143.615344
[5000 rows x 2 columns]
###Markdown
As a convenience, we can save on disk the checkpoints for any of the nodes, and re-load them if needed. We only need to set the save option to `True`. This step will take a while depending on the disk IO speed.In the example above, the `replace` spec directs `run` to save on disk for the `stock_data`. If `load` was boolean then the data would be loaded from disk presuming the data was saved to disk in a prior run.The default directory for saving is `/.cache/.hdf5`.`replace` is also used to override parameters in the tasks. For instance, if we wanted to use the value `40.0` instead of `50.0` in the task `volume_filter`, we would do something similar to:```replace_spec = { 'volume_filter': { 'conf': { 'min': 40.0 } }, 'some_task': etc...}```
###Code
replace = {'stock_data': {'load': True},
'average_return': {'save': True}}
(return_mean_df, ) = task_graph.run(outputs=['average_return.stock_out'], replace=replace)
print('Return mean Dataframe:\n')
print(return_mean_df)
###Output
Return mean Dataframe:
asset returns
0 22705 0.001691
1 869315 0.000701
2 2526 0.002374
3 3939 0.052447
4 705893 0.000790
... ... ...
4995 869571 -0.002908
4996 7842 0.000698
4997 701570 -0.004115
4998 701705 0.002157
4999 4859 0.008666
[5000 rows x 2 columns]
###Markdown
Now, we might want to load the `return_mean_df` from the saved file and evaluate only the tasks that we are interested in.In the cells below, we compare different load approaches:- in-memory,- from disk, - and not loading at all.When working interactively, or in situations requiring iterative and explorative task graphs, a significant amount of time is saved by just re-loading the data that does not need to be recalculated.
###Code
%%time
print('Using in-memory dataframes for load:')
replace = {'stock_data': {'load': {
'cudf_out': csv_data_df
}},
           'average_return': {'load':
{'stock_out': return_mean_df}}
}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
%%time
print('Using cached dataframes on disk for load:')
replace = {'stock_data': {'load': True},
           'average_return': {'load': True}}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
%%time
print('Re-running dataframes calculations instead of using load:')
replace = {'stock_data': {'load': True}}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
###Output
Re-running dataframes calculations instead of using load:
CPU times: user 2.49 s, sys: 556 ms, total: 3.04 s
Wall time: 3.05 s
###Markdown
An idiomatic pattern is demonstrated below: load the data from disk if it has already been cached there, otherwise compute it and save it.
###Code
%%time
import os
loadsave_csv_data = 'load' if os.path.isfile('./.cache/stock_data.hdf5') else 'save'
loadsave_return_mean = 'load' if os.path.isfile('./.cache/average_return.hdf5') else 'save'
replace = {'stock_data': {loadsave_csv_data: True},
'average_return': {loadsave_return_mean: True}}
_ = task_graph.run(outputs=['output_csv2.df_out'], replace=replace)
###Output
CPU times: user 2.52 s, sys: 459 ms, total: 2.98 s
Wall time: 3.01 s
###Markdown
Delete temporary filesA few cells above, we generated a .yaml file containing the example task graph, and also a couple of CSV files.Let's keep our directory clean, and delete them.
###Code
%%bash -s "$task_graph_file_name" "$csv_average_return" "$csv_average_volume"
rm -f $1 $2 $3
###Output
_____no_output_____
###Markdown
--- Node class exampleImplementing custom nodes in greenflow is very straightforward.Data scientists only need to override five methods in the parent class `Node`:- `init`- `meta_setup`- `ports_setup`- `conf_schema`- `process`The `init` method is usually used to define the required column names.`ports_setup` defines the input and output ports for the node.The `meta_setup` method is used to calculate the output meta names and types.The `conf_schema` method is used to define the JSON schema for the node conf so the client can generate the proper UI for it.The `process` method takes input dataframes and computes the output dataframe. In this way, dataframes are strongly typed, and errors can be detected early before the time-consuming computation happens.Below, the implementation details of `ValueFilterNode` can be observed:
###Code
import inspect
from greenflow_gquant_plugin.transform import ValueFilterNode
print(inspect.getsource(ValueFilterNode))
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____ |
ExamPrep/SciCompComplete/Assessment 3/Assessment_3_Q1_JP.ipynb | ###Markdown
1a) The given equation:$\frac{d^2\theta}{dt^2} = -\frac{g}{l} \sin \theta + C \cos \theta \sin \Omega t$ can be made dimensionless by setting:$\omega^2 = \frac{g}{l}$ ; $ \beta = \frac{\Omega}{\omega}$ ; $\gamma = \frac{C}{\omega^2}$ and changing the variable to $ x = \omega t$. First, differentiate $ x = \omega t$ twice:$ x = \omega t$$\frac{dx}{dt} = \omega$ (1)$\frac{d^2x}{dt^2} = 0$ (2) Then by the chain rule:$ \frac{d\theta}{dt} = \frac{d\theta}{dx} \frac{dx}{dt} = \frac{d\theta}{dx} \omega$ (3) Therefore, using the product rule:$ \frac{d^2\theta}{dt^2} = \frac{dx}{dt} \frac{d^2\theta}{dtdx} + \frac{d\theta}{dx}\frac{d^2x}{dt^2} = \frac{dx}{dt} \cdot \frac{d}{dx}\left(\frac{d\theta}{dt}\right) + \frac{d\theta}{dx}\frac{d^2x}{dt^2}$ (4) Substituting (1) and (2) into (4):$ \frac{d^2\theta}{dt^2} = \omega \cdot \frac{d}{dx}\left(\omega \frac{d\theta}{dx}\right) + \frac{d\theta}{dx}\cdot 0 = \omega^2 \frac{d^2\theta}{dx^2}$ Finally, reconstructing the equation with the new constants and the variable change, it becomes:$\omega^2 \frac{d^2\theta}{dx^2} = -\omega^2 \sin \theta + \omega^2 \gamma \cos \theta \sin(\omega \beta t) = -\omega^2 \sin \theta + \omega^2 \gamma \cos \theta \sin(\beta x) \implies \frac{d^2\theta}{dx^2} = -\sin \theta + \gamma \cos \theta \sin(\beta x) $ Now separate this second-order equation into two first-order O.D.E.s by introducing a new variable:$ z = \frac{d\theta}{dx} \rightarrow \frac{dz}{dx} = \frac{d^2\theta}{dx^2} = -\sin \theta + \gamma \sin(\beta x) \cos \theta $ So:$ z = \frac{d\theta}{dx}$$\frac{dz}{dx} = -\sin \theta + \gamma \sin(\beta x) \cos \theta $
###Code
#1b
#Solving for theta means solving for z_1
#changed t to x so t=0 and t=40s need to be changed
#Initial condition is theta (z_1) = 0 and dtheta/dt=0 -> need to be changed as variable changed to x
# Import the required modules
import numpy as np
import scipy
from printSoln import *
from run_kut4 import *
import pylab as pl
g=9.81 #ms^-2
l=0.1 #m
C=2 #s^-2
OMEGA=5 #s^-1
omega=np.sqrt(g/l)
# First set up the right-hand side RHS) of the equation
Gamma= C/omega**2
beta=OMEGA/omega
def Eqs(x,y):
f=np.zeros(2) # sets up RHS as a vector
f[0]=y[1]
f[1]=-np.sin(y[0])+Gamma*np.sin(x*beta)*np.cos(y[0]) # RHS; note that z is also a vector
return f
# Using Runge-Kutta of 4th order
y = np.array([0.0, 0.0]) # Initial values
#start at t=0 -> x=0 (as omega*t when t=0 is 0)
x = 0.0 # Start of integration (Always use floats)
#Finish at t=40s -> xStop= omega*40
xStop = omega*40.0 # End of integration
h = 0.01 # Step size
X,Y = integrate(Eqs,x,y,xStop,h) # call the RK4 solver
ThetaSol1=Y[:,0]
dThetaSol1=Y[:,1]
tsol1=X/omega
pl.plot(tsol1,ThetaSol1) # Plot the solution
pl.xlabel('t(s)')
pl.ylabel('$\Theta \ (radians)$')
pl.title('Plot of $ \Theta \ $ Aganst t')
pl.show()
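# (Added sketch) Cross-check of the RK4 result with scipy's solve_ivp
# (requires scipy >= 1.0); this reuses Eqs, omega and the initial conditions
# defined above, so it should reproduce the curve just plotted.
from scipy.integrate import solve_ivp
chk = solve_ivp(Eqs, [0.0, omega*40.0], [0.0, 0.0], rtol=1e-8, atol=1e-10)
print('solve_ivp final theta: {:.4f} rad (RK4 gave {:.4f})'.format(chk.y[0][-1], ThetaSol1[-1]))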
#1c
#repeat with OMEGA changing
#Thought the best way to find OMEGA would be to look for the point where the solution to theta was greatest.
#After attempts varying OMEGA between 2 and 10 (with a step of 1) I was able to narrow down the region to between 9 and 11
#Further attempts with a step of 0.5 narrowed down the region to 9.0 and 9.5
#Now using a step of 0.01 between 9.45 and 9.50 I should be able to find OMEGA.
for OMEGA in np.arange(9.45,9.50,0.01):
beta=OMEGA/omega
print("OMEGA = ",OMEGA)
# Using Runge-Kutta of 4th order
y = np.array([0.0, 0.0]) # Initial values
#start at t=0 -> x=0 (as omega*t when t=0 is 0)
x = 0.0 # Start of integration (Always use floats)
#Finish at t=40s -> xStop= omega*40
xStop = omega*40.0 # End of integration
h = 0.01 # Step size
X,Y = integrate(Eqs,x,y,xStop,h) # call the RK4 solver
ThetaSol=Y[:,0]
dThetaSol=Y[:,1]
tsol=X/omega
print('Maximum value of Theta at this value of OMEGA is ',round(np.amax(ThetaSol),4),'\n')
pl.plot(tsol,ThetaSol) # Plot the solutions
pl.xlabel('t(s)')
pl.ylabel('$\Theta \ (radians)$')
pl.title('Plot of $ \Theta \ $ Aganst t')
pl.show()
###Output
OMEGA = 9.45
Maximum value of Theta at this value of OMEGA is 0.5387
###Markdown
I can see that when $\Omega = 9.48$, $\theta$ is at its maximal value.
###Code
#1d
#Unfortunately, my results for Theta and dTheta were overwritten, so to plot the phase-space
#trajectory for the maximum Theta I need to repeat the 4th-order Runge-Kutta run for OMEGA=9.48
#I will reset the parameters (like initial values etc.) once again, just in case.
OMEGA=9.48 #s^-1
y = np.array([0.0, 0.0]) # Initial values
#start at t=0 -> x=0 (as omega*t when t=0 is 0)
x = 0.0 # Start of integration (Always use floats)
#Finish at t=40s -> xStop= omega*40
xStop = omega*40.0 # End of integration
h = 0.01 # Step size
X,Y = integrate(Eqs,x,y,xStop,h) # call the RK4 solver again
ThetaSol=Y[:,0]
dThetaSol=Y[:,1]
tsol1=X/omega
#phase space plot at resonance
pl.plot(dThetaSol,ThetaSol)
pl.xlabel('$derivative \ \Theta\ $')
pl.ylabel('$ \Theta \ (radians) $')
pl.title('Resonant phase-space of pendulum with Theta aganst its derivative W.R.T x')
pl.axis('equal')
pl.show()
###Output
_____no_output_____
###Markdown
This plot shows the phase-space trajectory of the oscillation when it is in resonance with $\Omega = 9.48$. From what I have read, an observable reversal upon itself indicates a "separatrix" which separates phase space into two regions. "Inside" the "separatrix" the pendulum would swing back and forth. "Outside", the pendulum would complete full circles. - Paraphrased from the Wolfram Demonstrations Project to the best of my understanding.In this plot of phase space there does not seem to be any reversal upon itself, so it would seem that when the pendulum is in resonance it will continue swinging at maximum amplitude.
###Code
#phase space plot at inital OMEGA
pl.plot(dThetaSol1,ThetaSol1)
pl.xlabel('$derivative \ \Theta\ $')
pl.ylabel('$ \Theta \ (radians) $')
pl.title('Non-resonant phase-space of pendulum with Theta aganst its derivative W.R.T x')
pl.axis('equal')
pl.show()
###Output
_____no_output_____ |
myblogs.ipynb | ###Markdown
###Code
# Cleaning the texts
import nltk
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
nltk.download('punkt')
nltk.download('stopwords')
### Steve Jobs Co-founder of Apple Inc.
paragraph= """
Your time is limited, so don’t waste it living someone else’s life.
Don’t be trapped by dogma — which is living with the results of other people’s thinking.
Don’t let the noise of others’ opinions drown out your own inner voice.
And most important, have the courage to follow your heart and intuition.
They somehow already know what you truly want to become. Everything else is secondary. """
ps = PorterStemmer()
wordnet = WordNetLemmatizer()  # instantiated but unused here; the loop below uses stemming instead
sentences = nltk.sent_tokenize(paragraph)
corpus = []
for i in range(len(sentences)):
review = re.sub('[^a-zA-Z]', ' ', sentences[i])
review = review.lower()
review = review.split()
review = [ps.stem(word) for word in review if not word in set(stopwords.words('english'))]
review = ' '.join(review)
corpus.append(review)
# Creating the Bag of Words model
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(max_features = 1500)
bagged = cv.fit_transform(corpus).toarray()
bagged
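# (Added sketch) TF-IDF weighting is a common alternative to raw counts; the
# parameters mirror the CountVectorizer above:
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(max_features=1500)
weighted = tfidf.fit_transform(corpus).toarray()
weighted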
###Output
_____no_output_____ |
notebooks/robin_ue1/03_Cross_validation_and_grid_search.ipynb | ###Markdown
Task 3: Cross Validation and Grid Search We use sklearn's GridSearchCV and cross validation to search for an optimal number of neighbors (`n_neighbors`) for the KNeighborsClassifier to maximize the accuracy of the classification of the iris data from task 1.
###Code
# imports
import pandas
import matplotlib.pyplot as plt
from timeit import default_timer as timer
from sklearn.model_selection import train_test_split, GridSearchCV  # moved here from sklearn.cross_validation / sklearn.grid_search in newer scikit-learn
from sklearn.neighbors import KNeighborsClassifier
###Output
_____no_output_____
###Markdown
First we load the iris data from task 1 and split it into training and validation set.
###Code
# load dataset from task 1
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pandas.read_csv(url, names=names)
# split-out dataset
array = dataset.values
X = array[:,0:4]
y = array[:,4]
###Output
_____no_output_____
###Markdown
Then we specify our parameter space and performance metric.
###Code
# specify parameter space and performance metric
max_n = 30
k = list(range(1, max_n + 1))
parameter_grid = {"n_neighbors": k}
scoring = "accuracy"
cross_val = 10
###Output
_____no_output_____
###Markdown
Next we run a performance test on GridSearchCV. Therefore we run the accuracy-maximizing search multiple times and save the best time for later comparison. Each time we use a different number of jobs.
###Code
# parameter for performance test
max_jobs = 8
best_in = 3
# performance test
measurements = []
i = 1
while i <= max_jobs:
min_t = float("inf")
for j in range(best_in):
kneighbors = KNeighborsClassifier()
grid_search = GridSearchCV(kneighbors, parameter_grid, cv=cross_val, scoring=scoring, n_jobs=i)
start = timer()
grid_search.fit(X, y)
stop = timer()
min_t = min(min_t, stop - start)
measurements.append(min_t)
i += 1
###Output
_____no_output_____
###Markdown
Finally we plot our results:
###Code
fig = plt.figure()
fig.suptitle('Visualization of the runtime depending on the number of used jobs.')
plt.xticks(range(1, max_jobs + 1))
ax = fig.add_subplot(111)
ax.set_xlabel('used jobs')
ax.set_ylabel('runtime in seconds')
ax.plot(range(1, max_jobs + 1), measurements, 'ro')
plt.show()
neighbors = list(grid_search.cv_results_["param_n_neighbors"])  # grid_scores_ was removed in newer scikit-learn
val_score = list(grid_search.cv_results_["mean_test_score"])
fig = plt.figure()
fig.suptitle('Visualization of the accuracy depending on the parameter n_neighbors.')
plt.xticks(range(1,max_n + 1))
ax = fig.add_subplot(111)
ax.set_xlabel('n_neighbors')
ax.set_ylabel('mean test score')
ax.plot(neighbors, val_score, 'ro')
plt.show()
max_score = max(val_score)
i = val_score.index(max_score)
n = neighbors[i]
print("Maximum precision:", max_score)
print("Is reached with:","n_neighbors =", n)
###Output
_____no_output_____ |
keras_v2_intro.ipynb | ###Markdown
Constructing and training a convolutional neural network with human-like performance (>98%) on MNIST Python notebook can be found at [https://github.com/sempwn/keras-intro](https://github.com/sempwn/keras-intro)Before starting we'll need to make sure tensorflow and keras are installed. Open a terminal and type the following commands:```shpip install --user tensorflowpip install --user keras --upgrade```The back-end of keras can either use theano or tensorflow. Verify that keras will use tensorflow by using the following command:```shsed -i 's/theano/tensorflow/g' $HOME/.keras/keras.json```
###Code
%pylab inline
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Convolutional neural networks : A very brief introductionTo quote wikipedia:> Convolutional neural networks are biologically inspired variants of multilayer perceptrons, designed to emulate the behaviour of a visual cortex. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images.One principle in ML is to create a feature map for data and then use your favourite classifier on those features. For image data this might be presence of straight lines, curved lines, placement of holes etc. This strategy can be very problem dependent. Instead of having to feature engineer for each specific problem, it would be better to automatically generate the features and combine with the classifer. CNNs are a way to achieve this.  Automatic feature engineeringFilters or convolution kernels can be treated like automatic feature detectors. A number of filters can be set before hand. For each filter, a convolution with this and part of the input is done for each part of the image. Weights for each filter are shared to reduce location dependency and reduce the number of parameters. The end result is a multi-dimensional matrix of copies of the original data with each filter applied to it.For a classification task, after one or more convolutional layers a fully connected layer is applied. A Final layer with an output size equal to the number of classes is then added. PoolingOnce convolutions have been performed across the whole image, we need someway of down-sampling. The easiest and most common way is to perform max pooling. For a certain pool size return the maximum from the filtered image of that subset is given as the ouput. A diagram of this is shown below
###Code
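# (Added sketch) A tiny numeric stand-in for the pooling diagram: 2x2 max
# pooling over a toy 4x4 array using numpy reshaping.
import numpy as np
a = np.arange(16).reshape(4, 4)
pooled = a.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)  # each output entry is the max of one 2x2 block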
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Convolutions on imageLet's get some insight into what a random filter applied to a test image does. We'll compare this to the trained filters at the end.Each filtered pixel in the image is defined by $C_i = \sum_j{I_{i+j-k} W_j}$, where $W$ is the filter (sometimes known as a kernel), $j$ is the 2D spatial index over $W$, $I$ is the input and $k$ is the coordinate of the center of $W$, specified by origin in the input parameters.
###Code
from scipy import signal
i = np.random.randint(x_train.shape[0])
c = x_train[i,:,:]
plt.imshow(c,cmap='gray'); plt.axis('off');
plt.title('original image');
plt.figure(figsize=(18,8))
for i in range(10):
k = -1.0 + 1.0*np.random.rand(3,3)
c_digit = signal.convolve2d(c, k, boundary='symm', mode='same');
plt.subplot(2,5,i+1);
plt.imshow(c_digit,cmap='gray'); plt.axis('off');
###Output
_____no_output_____
###Markdown
Keras introduction> Keras is a high-level neural networks API, written in Python and capable of running on top of either [TensorFlow](https://www.tensorflow.org) or [Theano](http://deeplearning.net/software/theano/). It was developed with a focus on enabling fast experimentation. > Being able to go from idea to result with the least possible delay is key to doing good research.If you've used [scikit-learn](http://scikit-learn.org/stable/) then you should be on familiar ground as the library was developed with a similar philosophy. * Can use either theano or tensorflow as a back-end. For the most part, you just need to set it up and then interact with it using keras. Ordering of dimensions can be different though. * Models can be instaniated using the `Sequential()` class. * Neural networks are built up from bottom layer to top using the `add()` method. * Lots of recipes to follow and many [examples](https://github.com/fchollet/keras/tree/master/examples) for problems in NLP and image classification.
###Code
batch_size = 128
nb_classes = 10
nb_epoch = 6
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
#sub-sample of test data to improve training speed. Comment out
#if you want to train on full dataset.
x_train = x_train[:20000,:,:,:]
y_train = y_train[:20000]
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_test_inds = y_test.copy()
y_train_inds = y_train.copy()
y_train = keras.utils.to_categorical(y_train, nb_classes)
y_test = keras.utils.to_categorical(y_test, nb_classes)
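# sanity check (added): labels are now one-hot vectors, e.g. for the first sample
print('digit', y_train_inds[0], '-> one-hot', y_train[0])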
###Output
('x_train shape:', (20000, 28, 28, 1))
(20000, 'train samples')
(10000, 'test samples')
###Markdown
One more trick to avoid overfitting20000 data-points isn't a huge amount for the size of the models we're considering. * One trick to avoid overfitting is to use [drop-out](http://jmlr.org/papers/v15/srivastava14a.html). This is where a weight is randomly assigned zero with a given probability to avoid the model becoming too dependent on a small number of weights. * We can also consider [ridge](https://en.wikipedia.org/wiki/Tikhonov_regularization) or [LASSO](https://en.wikipedia.org/wiki/Lasso_%28statistics%29) regularisation as a way of trimming down the dependency and effective number of parameters.* [Early stopping](https://en.wikipedia.org/wiki/Early_stopping) and [Batch Normalisation](https://arxiv.org/abs/1502.03167) are other strategies to help control over-fitting.
###Code
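# (Added sketch) Early stopping is one of the strategies mentioned above; in
# Keras it is a callback that halts training when the validation loss stalls:
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=3)
# pass callbacks=[early_stop] to model.fit(...) below to enable it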
#Create sequential convolutional multi-layer perceptron with max pooling and dropout
#uncomment if you want to add more layers (in the interest of time we use a shallower model)
model = Sequential()
model.add(Conv2D(nb_filters, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape)) #nb_filters,
#model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
#model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
#Let's see what we've constructed layer by layer
model.summary()
model.fit(x_train, y_train, batch_size=batch_size, epochs=nb_epoch,
verbose=1, validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Train on 20000 samples, validate on 10000 samples
Epoch 1/6
20000/20000 [==============================] - 16s - loss: 0.7113 - acc: 0.8046 - val_loss: 0.2958 - val_acc: 0.9171
Epoch 2/6
20000/20000 [==============================] - 15s - loss: 0.3009 - acc: 0.9114 - val_loss: 0.2093 - val_acc: 0.9425
Epoch 3/6
20000/20000 [==============================] - 15s - loss: 0.2325 - acc: 0.9317 - val_loss: 0.1689 - val_acc: 0.9548
Epoch 4/6
20000/20000 [==============================] - 16s - loss: 0.1853 - acc: 0.9460 - val_loss: 0.1385 - val_acc: 0.9620
Epoch 5/6
20000/20000 [==============================] - 15s - loss: 0.1610 - acc: 0.9524 - val_loss: 0.1216 - val_acc: 0.9660
Epoch 6/6
20000/20000 [==============================] - 15s - loss: 0.1451 - acc: 0.9571 - val_loss: 0.1103 - val_acc: 0.9685
('Test loss:', 0.11029711169451475)
('Test accuracy:', 0.96850000000000003)
###Markdown
ResultsLet's take a random digit example to find out how confident the model is at classifying the correct category
###Code
#choose a random data from test set and show probabilities for each class.
i = np.random.randint(0,len(x_test))
digit = x_test[i].reshape(28,28)
plt.figure();
plt.subplot(1,2,1);
plt.title('Example of digit: {}'.format(y_test_inds[i]));
plt.imshow(digit,cmap='gray'); plt.axis('off');
probs = model.predict_proba(digit.reshape(1,28,28,1),batch_size=1)
plt.subplot(1,2,2);
plt.title('Probabilities for each digit class');
plt.bar(np.arange(10),probs.reshape(10),align='center'); plt.xticks(np.arange(10),np.arange(10).astype(str));
###Output
1/1 [==============================] - 0s
###Markdown
Wrong predictionsLet's look more closely at the predictions on the test data that weren't correct
###Code
predictions = model.predict_classes(x_test, batch_size=32, verbose=1)
inds = np.arange(len(predictions))
wrong_results = inds[y_test_inds!=predictions]
###Output
_____no_output_____
###Markdown
Example of an incorrectly classified digitWe'll choose randomly from the test set a digit that was incorrectly classified and then plot the probabilities predicted for each class. We find that for an incorrectly classified digit, the probabilities are in general lower and more spread between classes than for a correctly classified digit.
###Code
#choose a random wrong result from the test set
i = np.random.randint(0,len(wrong_results))
i = wrong_results[i]
digit = x_test[i].reshape(28,28)
plt.figure();
plt.subplot(1,2,1);
plt.title('Digit {}'.format(y_test_inds[i]));
plt.imshow(digit,cmap='gray'); plt.axis('off');
probs = model.predict_proba(digit.reshape(1,28,28,1),batch_size=1)
plt.subplot(1,2,2);
plt.title('Digit classification probability');
plt.bar(np.arange(10),probs.reshape(10),align='center'); plt.xticks(np.arange(10),np.arange(10).astype(str));
###Output
1/1 [==============================] - 0s
###Markdown
Comparison between incorrectly classified digits and all digitsIt seems like for the example digit the prediction is a lot less confident when it's wrong. Is this always the case? Let's look at this by examining the maximum probability in any category for all digits that are incorrectly classified.
###Code
prediction_probs = model.predict_proba(x_test, batch_size=32, verbose=1)
wrong_probs = np.array([prediction_probs[ind][digit] for ind,digit in zip(wrong_results,predictions[wrong_results])])
all_probs = np.array([prediction_probs[ind][digit] for ind,digit in zip(np.arange(len(predictions)),predictions)])
#plot as histogram
plt.hist(wrong_probs,alpha=0.5,normed=True,label='wrongly-labeled');
plt.hist(all_probs,alpha=0.5,normed=True,label='all labels');
plt.legend();
plt.title('Comparison between wrong and correctly classified labels');
plt.xlabel('highest probability');
###Output
_____no_output_____
###Markdown
What's been fitted ?Let's look at the convolutional layer and the kernels that have been learnt.
###Code
print (model.layers[0].get_weights()[0].shape)
weights = model.layers[0].get_weights()[0]
for i in range(nb_filters):
plt.subplot(6,6,i+1)
plt.imshow(weights[:,:,0,i],cmap='gray',interpolation='none'); plt.axis('off');
###Output
_____no_output_____
###Markdown
Visualising intermediate layers in the CNNIn order to visualise the activations half-way through the CNN and have some sense of what these convolutional kernels do to the input we need to create a new model with the same structure as before, but with the final layers missing. We then give it the weights it had previously and then predict on a given input. We now have a model that gives provides us as output the convolved input passed through the activation for each of the learnt filters (32 all together).
###Code
#Create new sequential model, same as before but just keep the convolutional layer.
model_new = Sequential()
model_new.add(Conv2D(nb_filters, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
#set weights for new model from weights trained on MNIST.
for i in range(1):
model_new.layers[i].set_weights(model.layers[i].get_weights())
#pick a random digit and "predict" on this digit (output will be first layer of CNN)
i = np.random.randint(0,len(x_test))
digit = x_test[i].reshape(1,28,28,1)
pred = model_new.predict(digit)
#check shape of prediction
print(pred.shape)
#For all the filters, plot the output of the input
plt.figure(figsize=(18,18))
filts = pred[0]
for i in range(nb_filters):
filter_digit = filts[:,:,i]
plt.subplot(6,6,i+1)
plt.imshow(filter_digit,cmap='gray'); plt.axis('off');
###Output
_____no_output_____
###Markdown
AppendixThe keras library is very flexible, constantly being updated and being further integrated with tensorflow. Some example scripts for keras can be found [here](https://github.com/fchollet/keras/tree/master/examples).Another advantage is its integration with [tensorboard](https://www.tensorflow.org/get_started/summaries_and_tensorboard): A visualisation tool for neural network learning and debugging. To start we need to install it. If you've installed tensorflow already then you should already have it (check with: `which tensorboard`). Otherwise, run the command:```shpip install tensorflow``` Simple neural network We start by creating a simple neural network on a test dataset. First let's create and visualise the data
###Code
from sklearn.datasets import make_moons
from sklearn.preprocessing import scale
from sklearn.model_selection import train_test_split
X, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)
X = scale(X)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)
fig, ax = plt.subplots()
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')
ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');
###Output
_____no_output_____
###Markdown
Creating a neural networkWe'll create a very simple multi-layer perceptron with one hidden layer.
###Code
#Create sequential multi-layer perceptron
#uncomment if you want to add more layers (in the interest of time we use a shallower model)
model = Sequential()
model.add(Dense(32, input_dim=2, activation='relu')) #X,Y input dimensions, connecting to 32 neurons with relu activation
model.add(Dense(1, activation='sigmoid')) #binary classification so one output
model.compile(optimizer='AdaDelta',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Adding in a callback for tensorboardNext we define a callback for the model. This basically tells keras what format and where to write the data such that tensorboard can read it
###Code
tb_callback = keras.callbacks.TensorBoard(log_dir='./Graph/new3/', histogram_freq=0, write_graph=True, write_images=False)
###Output
_____no_output_____
###Markdown
Now perform model fitting. Note where we've added in the callback.
###Code
model.fit(X_train, Y_train, batch_size=16, epochs=30,
verbose=0, validation_data=(X_test, Y_test),callbacks=[tb_callback])
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
grid = np.mgrid[-3:3:100j,-3:3:100j]
grid_2d = grid.reshape(2, -1).T
X, Y = grid
prediction_probs = model.predict_proba(grid_2d, batch_size=32, verbose=1)
##plot results
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(X, Y, prediction_probs.reshape(100, 100))
ax.scatter(X_test[Y_test==0, 0], X_test[Y_test==0, 1])
ax.scatter(X_test[Y_test==1, 0], X_test[Y_test==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
###Output
_____no_output_____
###Markdown
Visualising resultsNow we visualise the results by running the following in the same terminal as this script```shtensorboard --logdir $(pwd)/Graph ```
###Code
! tensorboard --logdir $(pwd)/Graph
###Output
Starting TensorBoard 41 on port 6006
(You can navigate to http://128.189.88.2:6006)
WARNING:tensorflow:Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event.
WARNING:tensorflow:Found more than one metagraph event per run. Overwriting the metagraph with the newest event.
WARNING:tensorflow:Found more than one graph event per run, or there was a metagraph containing a graph_def, as well as one or more graph events. Overwriting the graph with the newest event.
WARNING:tensorflow:Found more than one metagraph event per run. Overwriting the metagraph with the newest event.
^CTraceback (most recent call last):
File "//anaconda/bin/tensorboard", line 11, in <module>
sys.exit(main())
File "//anaconda/lib/python2.7/site-packages/tensorflow/tensorboard/tensorboard.py", line 151, in main
tb_server.serve_forever()
File "//anaconda/lib/python2.7/SocketServer.py", line 231, in serve_forever
poll_interval)
File "//anaconda/lib/python2.7/SocketServer.py", line 150, in _eintr_retry
return func(*args)
KeyboardInterrupt
|
Dynamic Programming/0930/741. Cherry Pickup.ipynb | ###Markdown
Description: In an N x N grid representing a field of cherries, each cell is one of three possible integers. 1. 0 means the cell is empty, so you can pass through it; 2. 1 means the cell contains a cherry, which you can pick up and pass through; 3. -1 means the cell contains a thorn that blocks your way. Your task is to collect the maximum number of cherries according to the following rules: Start at position (0, 0) and reach (N-1, N-1) by moving right or down through valid path cells (cells with value 0 or 1); After reaching (N-1, N-1), return to (0, 0) by moving left or up through valid path cells; When passing through a path cell containing a cherry, you pick it up and the cell becomes empty (0); If there is no valid path between (0, 0) and (N-1, N-1), then no cherries can be collected.Example 1: Input: grid = [[0, 1, -1], [1, 0, -1], [1, 1, 1]] Output: 5 Explanation: The player started at (0, 0) and went down, down, right right to reach (2, 2). 4 cherries were picked up during this single trip, and the matrix becomes [[0,1,-1],[0,0,-1],[0,0,0]]. Then, the player went left, up, up, left to return home, picking up one more cherry. The total number of cherries picked up is 5, and this is the maximum possible.Note: 1. grid is an N x N 2D array, with 1 <= N <= 50. 2. Each grid[i][j] is an integer in the set {-1, 0, 1}. 3. It is guaranteed that grid[0][0] and grid[N-1][N-1] are not -1.
###Code
class Solution:
    # (Completed from the unfinished sketch that was here) Bottom-up DP over
    # two simultaneous walkers: after k steps walker 1 is at (i, k - i) and
    # walker 2 at (j, k - j), so one table indexed by (i, j) per step suffices.
    def cherryPickup(self, grid) -> int:
        N = len(grid)
        NEG = float('-inf')
        dp = [[NEG] * N for _ in range(N)]  # dp[i][j]: best cherries so far
        dp[0][0] = grid[0][0]
        for k in range(1, 2 * N - 1):
            ndp = [[NEG] * N for _ in range(N)]
            for i in range(max(0, k - N + 1), min(N, k + 1)):
                for j in range(max(0, k - N + 1), min(N, k + 1)):
                    c1, c2 = k - i, k - j
                    if grid[i][c1] == -1 or grid[j][c2] == -1:
                        continue  # a thorn blocks this state
                    best = max(dp[pi][pj]
                               for pi in (i - 1, i) if pi >= 0
                               for pj in (j - 1, j) if pj >= 0)
                    if best == NEG:
                        continue  # state unreachable
                    cherries = grid[i][c1]
                    if i != j:  # different cells: both cherries are collected
                        cherries += grid[j][c2]
                    ndp[i][j] = best + cherries
            dp = ndp
        return max(0, dp[N - 1][N - 1])
class Solution:
def cherryPickup(self, grid) -> int:
def dp(r1, c1, r2, c2):
if (r1, c1, r2, c2) in mem:
return mem[(r1, c1, r2, c2)]
            # boundary conditions
if r1 > N-1 or c1 > N-1 or r2 > N-1 or c2 > N-1 or grid[r1][c1] == -1 or grid[r2][c2] == -1:
return -float('inf')
            # reached the bottom-right corner, i.e. the destination
if r1 == c1 == N-1 or r2 == c2 == N-1:
return grid[-1][-1]
            # the boundary check above already handled the case where
            # grid[r1][c1] or grid[r2][c2] is -1; add the values of the
            # current cells, then continue onward
            cur_cherry = 0
            if r1 == r2 and c1 == c2:  # the two walkers overlap on the same cell
cur_cherry = grid[r1][c1]
else:
cur_cherry = grid[r1][c1] + grid[r2][c2]
next_cherry = -float('inf')
            dirs = [[0, 1], [1, 0]]  # move right, move down
for d1 in dirs:
for d2 in dirs:
nr_1 = d1[0] + r1
nc_1 = d1[1] + c1
nr_2 = d2[0] + r2
nc_2 = d2[1] + c2
next_cherry = max(next_cherry, dp(nr_1, nc_1, nr_2, nc_2))
cur_cherry += next_cherry
mem[(r1, c1, r2, c2)] = cur_cherry
return cur_cherry
mem = {}
N = len(grid)
ans = dp(0, 0, 0, 0)
return ans if ans > 0 else 0
solution = Solution()
solution.cherryPickup([[0, 1, -1],
[1, 0, -1],
[1, 1, 1]])
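# quick sanity checks (added): expected answers from the problem statement
assert Solution().cherryPickup([[0, 1, -1], [1, 0, -1], [1, 1, 1]]) == 5
assert Solution().cherryPickup([[1, 1, -1], [1, -1, 1], [-1, 1, 1]]) == 0  # no valid path
print('cherryPickup sanity checks passed')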
# For reference, the contents of the memo table `mem` after the run above
# (keys are (r1, c1, r2, c2) states, values are the best cherry counts):
# {(2, 1, 2, 1): 2,
#  (1, 1, 1, 1): 2,
#  (0, 1, 0, 1): 3,
#  (1, 1, 2, 0): 3,
#  (0, 1, 1, 0): 5,
#  (2, 0, 1, 1): 3,
#  (1, 0, 0, 1): 5,
#  (2, 0, 2, 0): 3,
#  (1, 0, 1, 0): 4,
#  (0, 0, 0, 0): 5}
###Output
_____no_output_____ |
experiments/tl_1v2/cores-oracle.run1/trials/28/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed Parameters
These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
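For reference, a parameterized papermill run is typically launched like the sketch below (the file names and values here are hypothetical, not from this repo):
###Code
# papermill substitutes each "-p name value" pair into the cell tagged "parameters":
# !papermill trial.ipynb output/trial.ipynb -p lr 0.0001 -p seed 500
###Output
_____no_output_____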
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:cores-oracle.run1",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "CORES_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 10000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
],
"dataset_seed": 500,
"seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
    p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easyfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
Chapter 06 Index Alignment.ipynb | ###Markdown
Chapter 6: Index Alignment
Recipes
* [Examining the Index object](Examining-the-index)
* [Producing Cartesian products](Producing-Cartesian-products)
* [Exploding indexes](Exploding-Indexes)
* [Filling values with unequal indexes](Filling-values-with-unequal-indexes)
* [Appending columns from different DataFrames](Appending-columns-from-different-DataFrames)
* [Highlighting the maximum value from each column](Highlighting-maximum-value-from-each-column)
* [Replicating idxmax with method chaining](Replicating-idxmax-with-method-chaining)
* [Finding the most common maximum](Finding-the-most-common-maximum)
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Examining the index
###Code
college = pd.read_csv('data/college.csv')
columns = college.columns
columns
columns.values
columns[5]
columns[[1,8,10]]
columns[-7:-4]
columns.min(), columns.max(), columns.isnull().sum()
columns + '_A'
columns > 'G'
columns[1] = 'city'
c1 = columns[:4]
c1
c2 = columns[2:5]
c2
c1.union(c2)
c1 | c2
c1.symmetric_difference(c2)
c1 ^ c2
###Output
_____no_output_____
###Markdown
Producing Cartesian products
###Code
s1 = pd.Series(index=list('aaab'), data=np.arange(4))
s1
s2 = pd.Series(index=list('cababb'), data=np.arange(6))
s2
s1 + s2
###Output
_____no_output_____
###Markdown
There's more
###Code
s1 = pd.Series(index=list('aaabb'), data=np.arange(5))
s2 = pd.Series(index=list('aaabb'), data=np.arange(5))
s1 + s2
s1 = pd.Series(index=list('aaabb'), data=np.arange(5))
s2 = pd.Series(index=list('bbaaa'), data=np.arange(5))
s1 + s2
###Output
_____no_output_____
###Markdown
Exploding Indexes
###Code
employee = pd.read_csv('data/employee.csv', index_col='RACE')
employee.head()
salary1 = employee['BASE_SALARY']
salary2 = employee['BASE_SALARY']
salary1 is salary2
salary1 = employee['BASE_SALARY'].copy()
salary2 = employee['BASE_SALARY'].copy()
salary1 is salary2
salary1 = salary1.sort_index()
salary1.head()
salary2.head()
salary_add = salary1 + salary2
salary_add.head()
salary_add1 = salary1 + salary1
len(salary1), len(salary2), len(salary_add), len(salary_add1)
###Output
_____no_output_____
###Markdown
There's more...
###Code
index_vc = salary1.index.value_counts(dropna=False)
index_vc
index_vc.pow(2).sum()
###Output
_____no_output_____
###Markdown
Filling values with unequal indexes
###Code
baseball_14 = pd.read_csv('data/baseball14.csv', index_col='playerID')
baseball_15 = pd.read_csv('data/baseball15.csv', index_col='playerID')
baseball_16 = pd.read_csv('data/baseball16.csv', index_col='playerID')
baseball_14.head()
baseball_14.index.difference(baseball_15.index)
baseball_15.index.difference(baseball_14.index)
hits_14 = baseball_14['H']
hits_15 = baseball_15['H']
hits_16 = baseball_16['H']
hits_14.head()
(hits_14 + hits_15).head()
hits_14.add(hits_15, fill_value=0).head()
hits_total = hits_14.add(hits_15, fill_value=0).add(hits_16, fill_value=0)
hits_total.head()
hits_total.hasnans
###Output
_____no_output_____
###Markdown
How it works...
###Code
s = pd.Series(index=['a', 'b', 'c', 'd'], data=[np.nan, 3, np.nan, 1])
s
s1 = pd.Series(index=['a', 'b', 'c'], data=[np.nan, 6, 10])
s1
s.add(s1, fill_value=5)
s1.add(s, fill_value=5)
###Output
_____no_output_____
###Markdown
There's more
###Code
df_14 = baseball_14[['G','AB', 'R', 'H']]
df_14.head()
df_15 = baseball_15[['AB', 'R', 'H', 'HR']]
df_15.head()
(df_14 + df_15).head(10).style.highlight_null('yellow')
df_14.add(df_15, fill_value=0).head(10).style.highlight_null('yellow')
###Output
_____no_output_____
###Markdown
Appending columns from different DataFrames
###Code
employee = pd.read_csv('data/employee.csv')
dept_sal = employee[['DEPARTMENT', 'BASE_SALARY']]
dept_sal = dept_sal.sort_values(['DEPARTMENT', 'BASE_SALARY'],
ascending=[True, False])
max_dept_sal = dept_sal.drop_duplicates(subset='DEPARTMENT')
max_dept_sal.head()
max_dept_sal = max_dept_sal.set_index('DEPARTMENT')
employee = employee.set_index('DEPARTMENT')
employee['MAX_DEPT_SALARY'] = max_dept_sal['BASE_SALARY']
pd.options.display.max_columns = 6
employee.head()
employee.query('BASE_SALARY > MAX_DEPT_SALARY')
###Output
_____no_output_____
###Markdown
How it works...
###Code
np.random.seed(1234)
random_salary = dept_sal.sample(n=10).set_index('DEPARTMENT')
random_salary
employee['RANDOM_SALARY'] = random_salary['BASE_SALARY']
###Output
_____no_output_____
###Markdown
There's more...
###Code
employee['MAX_SALARY2'] = max_dept_sal['BASE_SALARY'].head(3)
employee.MAX_SALARY2.value_counts()
employee.MAX_SALARY2.isnull().mean()
###Output
_____no_output_____
###Markdown
Highlighting maximum value from each column
###Code
pd.options.display.max_rows = 8
college = pd.read_csv('data/college.csv', index_col='INSTNM')
college.dtypes
college.MD_EARN_WNE_P10.iloc[0]
college.GRAD_DEBT_MDN_SUPP.iloc[0]
college.MD_EARN_WNE_P10.sort_values(ascending=False).head()
cols = ['MD_EARN_WNE_P10', 'GRAD_DEBT_MDN_SUPP']
for col in cols:
college[col] = pd.to_numeric(college[col], errors='coerce')
college.dtypes.loc[cols]
college_n = college.select_dtypes(include=[np.number])
college_n.head() # only numeric columns
criteria = college_n.nunique() == 2
criteria.head()
binary_cols = college_n.columns[criteria].tolist()
binary_cols
college_n2 = college_n.drop(labels=binary_cols, axis='columns')
college_n2.head()
max_cols = college_n2.idxmax()
max_cols
unique_max_cols = max_cols.unique()
unique_max_cols[:5]
college_n2.loc[unique_max_cols].style.highlight_max()
###Output
_____no_output_____
###Markdown
There's more...
###Code
college = pd.read_csv('data/college.csv', index_col='INSTNM')
college_ugds = college.filter(like='UGDS_').head()
college_ugds.style.highlight_max(axis='columns')
pd.Timedelta(1, unit='Y')
###Output
_____no_output_____
###Markdown
Replicating idxmax with method chaining
###Code
college = pd.read_csv('data/college.csv', index_col='INSTNM')
cols = ['MD_EARN_WNE_P10', 'GRAD_DEBT_MDN_SUPP']
for col in cols:
college[col] = pd.to_numeric(college[col], errors='coerce')
college_n = college.select_dtypes(include=[np.number])
criteria = college_n.nunique() == 2
binary_cols = college_n.columns[criteria].tolist()
college_n = college_n.drop(labels=binary_cols, axis='columns')
college_n.max().head()
college_n.eq(college_n.max()).head()
has_row_max = college_n.eq(college_n.max()).any(axis='columns')
has_row_max.head()
college_n.shape
has_row_max.sum()
pd.options.display.max_rows=6
college_n.eq(college_n.max()).cumsum().cumsum()
has_row_max2 = college_n.eq(college_n.max())\
.cumsum()\
.cumsum()\
.eq(1)\
.any(axis='columns')
has_row_max2.head()
has_row_max2.sum()
idxmax_cols = has_row_max2[has_row_max2].index
idxmax_cols
set(college_n.idxmax().unique()) == set(idxmax_cols)
###Output
_____no_output_____
###Markdown
There's more...
###Code
%timeit college_n.idxmax().values
%timeit college_n.eq(college_n.max())\
.cumsum()\
.cumsum()\
.eq(1)\
.any(axis='columns')\
[lambda x: x].index
###Output
5.26 ms ± 35.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Finding the most common maximum
###Code
pd.options.display.max_rows= 40
college = pd.read_csv('data/college.csv', index_col='INSTNM')
college_ugds = college.filter(like='UGDS_')
college_ugds.head()
highest_percentage_race = college_ugds.idxmax(axis='columns')
highest_percentage_race.head()
highest_percentage_race.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
There's more...
###Code
college_black = college_ugds[highest_percentage_race == 'UGDS_BLACK']
college_black = college_black.drop('UGDS_BLACK', axis='columns')
college_black.idxmax(axis='columns').value_counts(normalize=True)
###Output
_____no_output_____ |
examples/visualizing-options-in-python-using-opstrat.ipynb | ###Markdown
Introduction
Opstrat is a package for visualizing option payoffs. An option is a derivative, a contract that gives the buyer the right, but not the obligation, to buy or sell the underlying asset by a certain date (expiration date) at a specified price (strike price). There are two types of options: calls and puts. Traders can construct option strategies ranging from buying or selling a single option to very complex ones that involve multiple simultaneous option positions. Option payoff diagrams are profit and loss charts that show the risk/reward profile of an option or combination of options. As option probabilities can be complex to understand, payoff diagrams give an insight into the risk/reward of a trading strategy.
Installing the package
The package can be installed using the pip install command.
###Code
pip install opstrat
###Output
_____no_output_____
###Markdown
Import opstrat
Once the package is installed successfully, it can be imported as below:
###Code
import opstrat as op
###Output
_____no_output_____
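###Markdown
Before the plotting helpers, it may help to spell out the arithmetic a payoff diagram encodes. A minimal NumPy sketch, independent of opstrat (all contract numbers here are hypothetical):
###Code
import numpy as np
spot_at_expiry = np.array([90, 100, 102, 104, 110])        # hypothetical terminal prices
strike, premium = 102, 2                                   # hypothetical long-call terms
profit = np.maximum(spot_at_expiry - strike, 0) - premium  # long-call P&L at expiry
print(profit)  # loss capped at -premium; break-even at strike + premium
###Output
_____no_output_____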
###Markdown
Plotting single option
The payoff diagram for a single option can be plotted using the single_plotter() function.
Default plot:
###Code
op.single_plotter()
###Output
_____no_output_____
###Markdown
If no arguments are provided, the payoff diagram for a long call option will be generated with a strike price of $\$$102 and a spot price of $\$$100. Note that the trader's profit is shown in green shade and loss is shown in red. The call option buyer's loss is limited to $\$$2 regardless of how low the share price falls. The trader's profit increases if the stock price rises beyond $\$$104 (the break-even price).
Customizing single plot
The plot can be modified by providing the details of the option as arguments.
Example: The following code will generate the payoff diagram for an option seller who receives an option premium of $\$$12.50 for selling a put option at a strike price of $\$$460 when the stock is also trading at $\$$460 (spot price).
###Code
op.single_plotter(spot=460, strike=460, op_type='p', tr_type='s', op_pr=12.5)
###Output
_____no_output_____
###Markdown
Plotting for Multiple Options strategy
The payoff diagram for a multi-option strategy can be plotted using the multi_plotter() function. This function will plot each individual payoff diagram and the resultant payoff diagram. The particulars of each option have to be provided as a list of dictionaries.
Example 1: Short Strangle
A short strangle is an options trading strategy that involves: (a) selling a slightly out-of-the-money put, and (b) selling a slightly out-of-the-money call of the same underlying stock and expiration date.
###Code
op_1 = {'op_type':'c','strike':110,'tr_type':'s','op_pr':2}
op_2 = {'op_type':'p','strike':95,'tr_type':'s','op_pr':6}
op.multi_plotter(spot=100, op_list=[op_1,op_2])
###Output
_____no_output_____
###Markdown
Example 2: Iron Condor (option strategy with 4 options)
An iron condor is an options strategy consisting of two puts (one long and one short) and two calls (one long and one short), and four strike prices, all with the same expiration date. The stock is currently trading at $\$$212.26 (spot price).
Option 1: Sell a call with a $\$$215 strike, which gives $\$$7.63 in premium.
Option 2: Buy a call with a strike of $\$$220, which costs $\$$5.35.
Option 3: Sell a put with a strike of $\$$210, with premium received of $\$$7.20.
Option 4: Buy a put with a strike of $\$$205, costing $\$$5.52.
###Code
op1={'op_type': 'c', 'strike': 215, 'tr_type': 's', 'op_pr': 7.63}
op2={'op_type': 'c', 'strike': 220, 'tr_type': 'b', 'op_pr': 5.35}
op3={'op_type': 'p', 'strike': 210, 'tr_type': 's', 'op_pr': 7.20}
op4={'op_type': 'p', 'strike': 205, 'tr_type': 'b', 'op_pr': 5.52}
op_list=[op1, op2, op3, op4]
op.multi_plotter(spot=212.26,spot_range=10, op_list=op_list)
###Output
_____no_output_____
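###Markdown
To see numerically that the resultant payoff is just the sum of the four legs, here is a minimal NumPy sketch, independent of opstrat (same contract numbers as above):
###Code
import numpy as np
S = np.linspace(190, 235, 10)            # terminal spot prices
legs = (7.63 - np.maximum(S - 215, 0),   # short 215 call
        np.maximum(S - 220, 0) - 5.35,   # long 220 call
        7.20 - np.maximum(210 - S, 0),   # short 210 put
        np.maximum(205 - S, 0) - 5.52)   # long 205 put
print(sum(legs))                         # iron condor P&L across spots
###Output
_____no_output_____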
###Markdown
The optional argument spot_range limits the range of spot values covered in the plot. The default spot range is +/-20%. If the underlying asset is less volatile and the strike prices of the options are within a small range, a smaller spot range like 5% can be considered. For a highly volatile underlying asset, a higher spot range can be used.
Plotting Real Options using Yahoo Finance API
We can plot the option payoff by providing the option ticker and other parameters (option type, transaction type and strike price) to the yf_plotter function.
Example 1: Call Option Buyer Payoff Diagram of Microsoft Inc.
The following code will generate the payoff diagram for a Microsoft Inc. call option buyer, who buys a call option at strike price $\$$235. MSFT is the stock ticker for Microsoft Inc.
###Code
op_list=[{'tr_type':'b', 'op_type':'c', 'strike':235}]
op.yf_plotter('msft', spot_range=10, op_list=op_list)
###Output
_____no_output_____
###Markdown
Example 2: Strangle on Amazon
A strangle is a strategy which involves the simultaneous purchase of a call option and a put option near the spot price, allowing the purchaser to make a profit whether the price of the stock goes up or down. Stock ticker: AMZN (Amazon Inc.). Amazon stock is currently trading around $\$$3070. A straddle can be constructed by purchasing the following options:
Option 1: Buy a call at strike price $\$$3150
Option 2: Buy a put option at strike price $\$$3150
The option expiry date can be specified with the parameter 'exp' in the format 'YYYY-MM-DD'.
###Code
op_1={'op_type': 'c', 'strike':3150, 'tr_type': 'b'}
op_2={'op_type': 'p', 'strike':3150, 'tr_type': 'b'}
op.yf_plotter(ticker='amzn',
exp='2021-03-26',
op_list=[op_1, op_2])
###Output
_____no_output_____ |
Python/Python-Completo/Python Completo/Notebooks Traduzidos/Map.ipynb | ###Markdown
map()
map() is a function that takes two arguments: a function and an iterable sequence, in the form map(function, sequence). The first argument is the name of a function and the second a sequence (for example, a list). map() applies the function to all the elements of the sequence. It returns a new list with the elements changed by the function.
When we went over list comprehensions, we created a small expression to convert Fahrenheit to Celsius. Let's do the same here, but using map.
We'll start with two functions:
###Code
def fahrenheit(T):
return ((float(9)/5)*T + 32)
def celsius(T):
return (float(5)/9)*(T-32)
temp = [0, 22.5, 40,100]
###Output
_____no_output_____
###Markdown
Now let's see map() in action:
###Code
F_temps = list(map(fahrenheit, temp))
# Display
F_temps
# Convert back
list(map(celsius, F_temps))
###Output
_____no_output_____
###Markdown
In the example above, we didn't use a lambda expression. By using lambda, we wouldn't have to define and name the fahrenheit() and celsius() functions.
###Code
list(map(lambda x: (5.0/9)*(x - 32), F_temps))
###Output
_____no_output_____
###Markdown
Great! We got the same result! map() is much more commonly used with lambda expressions, since the whole purpose of map() is to save the effort of writing manual for loops. map() can be applied to more than one iterable, and the iterables must have the same length.
For example, if we are working with two lists, map() will apply its lambda function to the elements of the argument lists pairwise: first to the elements at index 0, then to the elements at index 1, and so on until index N is reached.
For example, we map a lambda expression over two lists:
###Code
a = [1,2,3,4]
b = [5,6,7,8]
c = [9,10,11,12]
list(map(lambda x,y:x+y,a,b))
list(map(lambda x,y,z:x+y+z, a,b,c))
###Output
_____no_output_____ |
Photonics_Labs/8_sem/Lab33/Lab33.ipynb | ###Markdown
Gaussian bundles optics
###Code
import pandas as pd
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
data = []
for i in (0,1,2,3):
data.append(pd.read_excel("C:\\Users\\nekha\\OneDrive\\GitHub\\Labs\\Photonics_Labs\\8_sem\\Lab33\\lab33.xlsx",i))
data[3]
data[0]
data[1]
data[2]
def Gauss(x, *p):
    # Gaussian profile with amplitude A, center mu and width sigma
    A, mu, sigma = p
    return A*np.exp(-(x-mu)**2/(2.*sigma**2))
coords = list(data[0]['Coordinate'])
voltage = list(data[0]['Voltage'])
p0 = [1, 0, 1]  # initial guess for A, mu, sigma
coeff, var_matrix = curve_fit(Gauss, coords, voltage, p0=p0)
x_set = np.arange(0, 1, 0.01)  # keep as ndarray so Gauss can broadcast over it
fit = Gauss(x_set, *coeff)
plt.plot(coords, voltage, 'o')
plt.plot(x_set, fit)
plt.show()
###Output
_____no_output_____ |
Chapter04/.ipynb_checkpoints/Providing datasets-checkpoint.ipynb | ###Markdown
The datasets
MNIST
###Code
# http://yann.lecun.com/exdb/mnist/
import os, zipfile, urllib.request
from tqdm import tqdm
# canonical tqdm recipe: adapts a progress bar to urlretrieve's reporthook
class TqdmUpTo(tqdm):
    def update_to(self, b=1, bsize=1, tsize=None):
        if tsize is not None:
            self.total = tsize
        self.update(b * bsize - self.n)
labels_filename = 'train-labels-idx1-ubyte.gz'
images_filename = 'train-images-idx3-ubyte.gz'
url = "http://yann.lecun.com/exdb/mnist/"
with TqdmUpTo() as t: # all optional kwargs
urllib.request.urlretrieve(url+images_filename, 'MNIST_'+images_filename, reporthook=t.update_to, data=None)
with TqdmUpTo() as t: # all optional kwargs
urllib.request.urlretrieve(url+labels_filename, 'MNIST_'+labels_filename, reporthook=t.update_to, data=None)
###Output
9920512it [00:01, 7506137.58it/s]
32768it [00:00, 142952.49it/s]
###Markdown
The EMNIST Dataset
###Code
# https://www.nist.gov/itl/iad/image-group/emnist-dataset
url = "http://biometrics.nist.gov/cs_links/EMNIST/gzip.zip"
filename = "gzip.zip"
with TqdmUpTo() as t: # all optional kwargs
urllib.request.urlretrieve(url, filename, reporthook=t.update_to, data=None)
zip_ref = zipfile.ZipFile(filename, 'r')
zip_ref.extractall('.')
zip_ref.close()
if os.path.isfile(filename):
os.remove(filename)
###Output
_____no_output_____
###Markdown
A MNIST-like fashion product database
###Code
# https://github.com/zalandoresearch/fashion-mnist
url = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz"
filename = "train-images-idx3-ubyte.gz"
with TqdmUpTo() as t: # all optional kwargs
urllib.request.urlretrieve(url, filename, reporthook=t.update_to, data=None)
url = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz"
filename = "train-labels-idx1-ubyte.gz"
_ = urllib.request.urlretrieve(url, filename)
###Output
_____no_output_____ |
learning_to_simulate/notebooks/check_input.ipynb | ###Markdown
Breaking Down Input Function: `input_fn`
This notebook helps us to understand how the data is loaded to create the `input_fn` callback. First, import the needed libraries.
###Code
import sys
sys.path.append('../../')
import os
import tensorflow as tf
import functools
import json
from learning_to_simulate import reading_utils
from learning_to_simulate import train
tf.compat.v1.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Define function to read metadata
###Code
def _read_metadata(data_path):
with open(os.path.join(data_path, 'metadata.json'), 'rt') as fp:
return json.loads(fp.read())
###Output
_____no_output_____
###Markdown
Load the tfrecord and the json with the metadata. After this cell, the dataset contains tuples `(context, features)`:
```
context['particle_type'] => tf: [n_particles]
features['position']     => tf: [steps, n_particles, positions]
```
###Code
info_dir = "/home/zoso/Documents/deepmind-research/information"
data_path = os.path.join(info_dir,'datasets/WaterDropSample/')
metadata = _read_metadata(data_path)
# Create a tf.data.Dataset from the TFRecord.
ds = tf.data.TFRecordDataset([os.path.join(data_path, 'train.tfrecord')])
ds = ds.map(functools.partial(reading_utils.parse_serialized_simulation_example, metadata=metadata))
for (context, features) in ds.take(2):
print("particle type: ",context['particle_type'].shape)
print("position: ", features['position'].shape)
###Output
particle type: (678,)
position: (1001, 678, 2)
particle type: (355,)
position: (1001, 355, 2)
###Markdown
`mode: one_step`
Executing the next cell leads us to a ds which contains `element`:
```
element['particle_type'] => tf: [n_particles]
element['position']      => tf: [7, n_particles, positions]
```
###Code
ds1 = ds
# So we can calculate the last 5 velocities.
INPUT_SEQUENCE_LENGTH = 6
batch_size = 2
# Splits an entire trajectory into chunks of 7 steps.
# Previous 5 velocities, current velocity and target.
# It is like a batch of 7 position steps
split_with_window = functools.partial(
reading_utils.split_trajectory,
window_length=INPUT_SEQUENCE_LENGTH + 1)
ds1 = ds1.flat_map(split_with_window)
for elem in ds1.take(1):
print("particle type: ", elem['particle_type'].shape)
print("position: ", elem['position'].shape)
print("-------------------")
###Output
particle type: (678,)
position: (7, 678, 2)
-------------------
###Markdown
Executing the next cell leads us to a ds which contains tuples `(features, labels)`:
```
features['particle_type']           => tf: [n_particles]
features['position']                => tf: [n_particles, 6, positions]
features['n_particles_per_example'] => tf: [1] value: [n_particles]
labels                              => tf: [n_particles, positions]
```
###Code
ds1 = ds1.map(train.prepare_inputs)
for (features, labels) in ds1.take(1):
print("particle type: ",features['particle_type'].shape)
print("position: ", features['position'].shape)
print("n_particles_per_example: ",features['n_particles_per_example'])
print("labels: ",labels.shape) # the target position
print("-------------------")
###Output
particle type: (678,)
position: (678, 6, 2)
n_particles_per_example: tf.Tensor([678], shape=(1,), dtype=int32)
labels: (678, 2)
-------------------
###Markdown
Executing the next cell leads us to a ds which contains tuples `(features, labels)`:
```
features['particle_type']           => tf: [batch_size*n_particles]
features['position']                => tf: [batch_size*n_particles, 6, positions]
features['n_particles_per_example'] => tf: [batch_size] value: batch_size * [n_particles]
labels                              => tf: [batch_size*n_particles, positions]
```
###Code
ds2 = train.batch_concat(ds1, batch_size)
for features, labels in ds2.take(1):
print("particle type: ",features['particle_type'].shape)
print("position: ", features['position'].shape)
print("n_particles_per_example: ",features['n_particles_per_example'])
print("labels: ",labels.shape) # the target position
###Output
WARNING:tensorflow:Entity <function _yield_value at 0x7f448dbe6ea0> appears to be a generator function. It will not be converted by AutoGraph.
WARNING: Entity <function _yield_value at 0x7f448dbe6ea0> appears to be a generator function. It will not be converted by AutoGraph.
particle type: (1356,)
position: (1356, 6, 2)
n_particles_per_example: tf.Tensor([678 678], shape=(2,), dtype=int32)
labels: (1356, 2)
###Markdown
`mode: one_step_train`
This point must be executed before the last cell, and it just allows us to shuffle the dataset.
###Code
ds3 = ds1.repeat()
ds3 = ds3.shuffle(512)
for (context, features) in ds3.take(1):
print("particle type: ",context['particle_type'].shape)
print("position: ", context['position'].shape)
print("n_particles_per_example: ",context['n_particles_per_example'])
print("features: ",features.shape) # the target position
###Output
particle type: (678,)
position: (678, 6, 2)
n_particles_per_example: tf.Tensor([678], shape=(1,), dtype=int32)
features: (678, 2)
###Markdown
Executing the next cell leads us to a ds which contains tuples `(features, labels)`:
```
features['particle_type']           => tf: [batch_size*n_particles]
features['position']                => tf: [batch_size*n_particles, 6, positions]
features['n_particles_per_example'] => tf: [batch_size] value: batch_size * [n_particles]
labels                              => tf: [batch_size*n_particles, positions]
```
###Code
ds3 = train.batch_concat(ds3, batch_size)
for features, labels in ds3.take(1):
print("particle type: ",features['particle_type'].shape)
print("position: ", features['position'].shape)
print("n_particles_per_example: ",features['n_particles_per_example'])
print("labels: ",labels.shape) # the target position
###Output
particle type: (1356,)
position: (1356, 6, 2)
n_particles_per_example: tf.Tensor([678 678], shape=(2,), dtype=int32)
labels: (1356, 2)
###Markdown
`mode: rollout`
Executing the next cell leads us to a ds which contains tuples `(features, labels)`:
```
features['particle_type']           => tf: [n_particles]
features['position']                => tf: [n_particles, steps, positions]
features['key']                     => tf: [1] value: id_example
features['n_particles_per_example'] => tf: [1] value: [n_particles]
features['is_trajectory']           => tf: [1] value: True or False
labels                              => tf: [n_particles, positions]
```
###Code
ds4 = ds.map(train.prepare_rollout_inputs)
for features, labels in ds4:
print("particle_type: ", features['particle_type'].shape)
print("position: ", features['position'].shape)
print("key: ", features['key'])
print("n_particles_per_example: ",features['n_particles_per_example'] )
print("is_trajectory: ", features["is_trajectory"])
print("labels: ", labels.shape)
print("-------------")
###Output
particle_type: (678,)
position: (678, 1000, 2)
key: tf.Tensor(0, shape=(), dtype=int64)
n_particles_per_example: tf.Tensor([678], shape=(1,), dtype=int32)
is_trajectory: tf.Tensor([ True], shape=(1,), dtype=bool)
labels: (678, 2)
-------------
particle_type: (355,)
position: (355, 1000, 2)
key: tf.Tensor(1, shape=(), dtype=int64)
n_particles_per_example: tf.Tensor([355], shape=(1,), dtype=int32)
is_trajectory: tf.Tensor([ True], shape=(1,), dtype=bool)
labels: (355, 2)
-------------
###Markdown
`main function`
Here we test the main function, which generates the input_fn callable. You need to pass the respective `mode` and `split`.
###Code
info_dir = "/home/zoso/Documents/deepmind-research/information"
data_path = os.path.join(info_dir,'datasets/WaterDropSample/')
#batch_size = 1
#mode = 'rollout'
batch_size = 2
mode = 'one_step_train'
input_fn = train.get_input_fn(data_path, batch_size,
mode=mode, split='train')
dataset = input_fn()
if 'one_step' in mode:
for (features, labels) in dataset.take(1):
print("particle type: ",features['particle_type'].shape)
print("position: ", features['position'].shape)
print("n_particles_per_example: ",features['n_particles_per_example'])
print("labels: ",labels.shape) # the target position
elif mode == 'rollout' and batch_size == 1:
for features, labels in dataset.take(1):
print("particle_type: ", features['particle_type'].shape)
print("position: ", features['position'].shape)
print("key: ", features['key'])
print("n_particles_per_example: ",features['n_particles_per_example'] )
print("is_trajectory: ", features["is_trajectory"])
print("labels: ", labels.shape)
print("-------------")
###Output
particle type: (1356,)
position: (1356, 6, 2)
n_particles_per_example: tf.Tensor([678 678], shape=(2,), dtype=int32)
labels: (1356, 2)
|
examples/performance_test/TresherPerformanceTest.ipynb | ###Markdown
Import package
###Code
import numpy as np
import pandas as pd
import thresher
import time
t = thresher.Thresher()
print('Currently supported algorithms:')
print(t.get_supported_algorithms())
###Output
Currently supported algorithms:
['auto', 'ls', 'sgd', 'gen', 'grid', 'sgrid']
###Markdown
Read test data
###Code
# to load the data, unpack the milion_samples.7z file first
data = pd.read_csv('milion_samples.csv')
f'Read {len(data)} rows of data'
data.head()
###Output
_____no_output_____
###Markdown
Evaluate algorithms
Algorithm: LS
###Code
t_ls = thresher.Thresher(algorithm='ls',
progress_bar=True, labels=(0,1),
algorithm_params={'n_jobs': 10})
# too slow for 10^6 rows of data
# there is some room to tweak the paralelization as well
# s_time = time.process_time()
# result = t_ls.optimize_threshold(data.score.values, data.actual_label.values)
# elapsed_time = time.process_time() - s_time
###Output
_____no_output_____
###Markdown
Algorithm: SGD
###Code
t_sgd = thresher.Thresher(algorithm='sgd',
progress_bar=True, labels=(0,1))
results = []
for _ in range(10):
s_time = time.process_time()
result = t_sgd.optimize_threshold(data.score.values, data.actual_label.values)
elapsed_time = time.process_time() - s_time
print(f'Process took {elapsed_time} seconds and gave result of {result}')
results.append((elapsed_time, result))
f'Mean cpu time: {np.mean([_[0] for _ in results])} mean result: {np.mean([_[1] for _ in results])}'
###Output
_____no_output_____
###Markdown
Algorithm: GEN
###Code
t_gen = thresher.Thresher(algorithm='gen',
progress_bar=True, labels=(0,1))
results = []
for _ in range(10):
s_time = time.process_time()
result = t_gen.optimize_threshold(data.score.values, data.actual_label.values)
elapsed_time = time.process_time() - s_time
print(f'Process took {elapsed_time} seconds and gave result of {result}')
results.append((elapsed_time, result))
f'Mean cpu time: {np.mean([_[0] for _ in results])} mean result: {np.mean([_[1] for _ in results])}'
###Output
_____no_output_____
###Markdown
Algorithm: Grid search
###Code
t_grid = thresher.Thresher(algorithm='grid',
progress_bar=True, labels=(0,1))
results = []
for _ in range(10):
s_time = time.process_time()
result = t_grid.optimize_threshold(data.score.values, data.actual_label.values)
elapsed_time = time.process_time() - s_time
print(f'Process took {elapsed_time} seconds and gave result of {result}')
results.append((elapsed_time, result))
f'Mean cpu time: {np.mean([_[0] for _ in results])} mean result: {np.mean([_[1] for _ in results])}'
###Output
_____no_output_____
###Markdown
Algorithm: Stochastic Grid search
###Code
t_sgrid = thresher.Thresher(algorithm='sgrid',
progress_bar=True, labels=(0,1))
results = []
for _ in range(10):
s_time = time.process_time()
result = t_sgrid.optimize_threshold(data.score.values, data.actual_label.values)
elapsed_time = time.process_time() - s_time
print(f'Process took {elapsed_time} seconds and gave result of {result}')
results.append((elapsed_time, result))
f'Mean cpu time: {np.mean([_[0] for _ in results])} mean result: {np.mean([_[1] for _ in results])}'
###Output
_____no_output_____
###Markdown
Algorithm: Stochastic Grid search different params
###Code
t_sgrid = thresher.Thresher(algorithm='sgrid',
progress_bar=False, labels=(0,1),
algorithm_params={'no_of_decimal_places': 2,
'stoch_ratio': 0.04,
'reshuffle': False})
results = []
for _ in range(10):
s_time = time.process_time()
result = t_sgrid.optimize_threshold(data.score.values, data.actual_label.values)
elapsed_time = time.process_time() - s_time
print(f'Process took {elapsed_time} seconds and gave result of {result}')
results.append((elapsed_time, result))
f'Mean cpu time: {np.mean([_[0] for _ in results])} mean result: {np.mean([_[1] for _ in results])}'
###Output
_____no_output_____ |
OSULymanAlpha.ipynb | ###Markdown
Reading Brick-files: DESI OSU Workshop Dec 6th-9th 2016
Authors: Javier Sanchez ([email protected]), David Kirkby ([email protected])
First I set up the packages that I am going to need for the analysis
###Code
%pylab inline
import astropy.io.fits as fits
import os
###Output
_____no_output_____
###Markdown
Brick files have a standard name `brick-{CHANNEL}-{BRICK_NAME}.fits`. We are going to set up a function to read these files and give us the HDU lists. The brick files are located at NERSC in `/project/projectdirs/desi/datachallenge/OSU2016`
###Code
def readBricks(path_in,brick_name):
hdus = []
for channel in 'brz':
filename = 'brick-{}-{}.fits'.format(channel,brick_name)
hdulist = fits.open(os.path.join(path_in,filename))
hdus.append(hdulist)
return hdus
###Output
_____no_output_____
###Markdown
Change `os.environ['FAKE_QSO_PATH']` to the path to the brick files on your computer
###Code
hdus = readBricks(os.environ['FAKE_QSO_PATH'],'qso-osu')
###Output
_____no_output_____
###Markdown
`hdus` is a list containing 3 `HDUList` objects. The `0` list corresponds to the `b` camera, the `1` to the `r`, and the `2` to the `z`. More info here: http://desidatamodel.readthedocs.io/en/latest/DESI_SPECTRO_REDUX/PRODNAME/bricks/BRICKNAME/brick-CHANNEL-BRICKNAME.html
* The first hdu contains the fluxes
* The second hdu contains the inverse variance
* The third hdu contains the wavelength grid
* The fourth hdu contains the resolution matrix
* The fifth hdu contains the fibermap in a Table
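A minimal access sketch (not from the original notebook), following the HDU layout listed above for the b channel (index 0):
###Code
flux_b = hdus[0][0].data   # fluxes, shape [n_targets, n_pixels]
ivar_b = hdus[0][1].data   # matching inverse variances
wave_b = hdus[0][2].data   # wavelength grid, shape [n_pixels]
print(flux_b.shape, ivar_b.shape, wave_b.shape)
###Output
_____no_output_____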
###Code
def plot_smooth(nqso, nresample_b, nresample_r, nresample_z):
x_b = np.mean(hdus[0][2].data.reshape(-1, nresample_b), axis=1)
y_b = np.average(hdus[0][0].data[nqso,:].reshape(-1, nresample_b), axis=1, weights=hdus[0][1].data[nqso,:].reshape(-1, nresample_b))
x_r = np.mean(hdus[1][2].data.reshape(-1, nresample_r), axis=1)
y_r = np.average(hdus[1][0].data[nqso,:].reshape(-1, nresample_r), axis=1, weights=hdus[1][1].data[nqso,:].reshape(-1, nresample_r))
x_z = np.mean(hdus[2][2].data[:-3].reshape(-1, nresample_z), axis=1)
y_z = np.average(hdus[2][0].data[nqso,:-3].reshape(-1, nresample_z), axis=1, weights=hdus[2][1].data[nqso,:-3].reshape(-1, nresample_z))
plt.plot(x_b,y_b,'b-',label='b')
plt.plot(x_r,y_r,'y-',label='r')
plt.plot(x_z,y_z,'r-',label='z')
plt.xlabel(r'$\lambda (\AA)$')
plt.ylabel(r'Flux $\times 10^{-17}$ [erg cm$^{-2}$s$^{-1}\AA^{-1}$]')
plt.xlim(3300,9500)
###Output
_____no_output_____
###Markdown
The function `plot_smooth` plots a smoothed (downsampled and inverse-variance weighted) version of the spectra. In this case I choose a different subsampling for each camera since they contain different numbers of pixels. The subsampling factor should be an integer divisor of the number of pixels in each camera. I select 20 for `b`, 23 for `r`, and 35 for `z`.
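To make the smoothing concrete, here is a toy sketch (not from the original notebook) of the same reshape-and-weighted-average trick, on a 12-pixel array with a downsampling factor of 4:
###Code
toy_flux = np.arange(12, dtype=float)        # hypothetical fluxes
toy_ivar = np.ones(12); toy_ivar[3] = 100.   # one very precise pixel dominates its bin
k = 4                                        # must be an integer divisor of the pixel count
print(np.average(toy_flux.reshape(-1, k), axis=1, weights=toy_ivar.reshape(-1, k)))
###Output
_____no_output_____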
###Code
plot_smooth(0,20,23,35)
plot_smooth(2,20,23,35)
def plot_snr(nqso):
x_b = hdus[0][2].data
y_b = hdus[0][0].data[nqso,:]
iv_b = hdus[0][1].data[nqso,:]
res_b = (np.max(hdus[0][2].data)-np.min(hdus[0][2].data))/len(hdus[0][2].data)
x_r = hdus[1][2].data
y_r = hdus[1][0].data[nqso,:]
iv_r = hdus[1][1].data[nqso,:]
    res_r = (np.max(hdus[1][2].data)-np.min(hdus[1][2].data))/len(hdus[1][2].data)
x_z = hdus[2][2].data
y_z = hdus[2][0].data[nqso,:]
iv_z = hdus[2][1].data[nqso,:]
    res_z = (np.max(hdus[2][2].data)-np.min(hdus[2][2].data))/len(hdus[2][2].data)
plt.plot(x_b,y_b*np.sqrt(iv_b),'b,',label='b')
plt.plot(x_r,y_r*np.sqrt(iv_r),'y,',label='r')
plt.plot(x_z,y_z*np.sqrt(iv_z),'r,',label='z')
med_b =np.median(y_b*np.sqrt(iv_b))
med_r =np.median(y_r*np.sqrt(iv_r))
med_z =np.median(y_z*np.sqrt(iv_z))
plt.plot(x_b,med_b*np.ones(len(x_b)),'k--',linewidth=3)
plt.plot(x_r,med_r*np.ones(len(x_r)),'k--',linewidth=3)
plt.plot(x_z,med_z*np.ones(len(x_z)),'k--',linewidth=3)
plt.text(4000,4.1,'Median b: %.2f'%med_b,color='b')
plt.text(6000,4.1,'r : %.2f'%med_r,color='y')
plt.text(8000,4.1,'z : %.2f'%med_z,color='r')
plt.xlabel(r'$\lambda (\AA)$')
plt.ylabel(r'SNR per %.1f $\AA$'%res_b)
plt.xlim(3300,9500)
###Output
_____no_output_____
###Markdown
The function `plot_snr` plots the SNR for each object in each camera per pixel. The median SNR value per pixel and per camera is printed in the plot and corresponds to the broken lines shown below.
###Code
plot_snr(0)
plot_snr(2)
print(hdus[0][4].columns.names)
plt.hist(hdus[0][4].data['MAG'][:,2],bins=60)
plt.xlabel('mag$_{AB}$ r-band')
plt.ylabel('$N(m)$')
###Output
_____no_output_____ |
DataScience/Cross Validation/Cross-Validation_Grid_Search_with_Random_Forest.ipynb | ###Markdown
Task 6: Credit Card Default Prediction **Run the following two cells before you begin.**
###Code
%autosave 10
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
filepath="C:/Users/uttam/anaconda3/Technocolabs/MinorProj2/Datasets/cleaned_data.csv"
df= pd.read_csv(filepath)
###Output
_____no_output_____
###Markdown
**Run the following 3 cells to create a list of features, create a train/test split, and instantiate a random forest classifier.**
###Code
features_response = df.columns.tolist()
items_to_remove = ['ID', 'GENDER', 'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6',
'EDUCATION_CAT', 'graduate school', 'high school', 'none',
'others', 'university']
features_response = [item for item in features_response if item not in items_to_remove]
features_response
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
df[features_response[:-1]].values,
df['DEFAULT'].values,
test_size=0.2, random_state=24
)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(
n_estimators=10, criterion='gini', max_depth=3,
min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0,
max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None,
random_state=4, verbose=0, warm_start=False, class_weight=None
)
###Output
_____no_output_____
###Markdown
**Create a dictionary representing the grid for the `max_depth` and `n_estimators` hyperparameters that will be searched. Include depths of 3, 6, 9, and 12, and 10, 50, 100, and 200 trees.**
###Code
rf_hyperparameters = {'max_depth':[3, 6, 9, 12],
'n_estimators':[10, 50, 100, 200]}
###Output
_____no_output_____
###Markdown
________________________________________________________________**Instantiate a `GridSearchCV` object using the same options that we have previously in this course, but with the dictionary of hyperparameters created above. Set `verbose=2` to see the output for each fit performed.**
###Code
from sklearn.model_selection import GridSearchCV
cv_rf = GridSearchCV(rf, param_grid=rf_hyperparameters, scoring='roc_auc',
n_jobs=-1, refit=True, cv=4, verbose=2,
error_score=np.nan, return_train_score=True)
###Output
_____no_output_____
###Markdown
____________________________________________________**Fit the `GridSearchCV` object on the training data.**
###Code
cv_rf.fit(X_train, y_train)
###Output
Fitting 4 folds for each of 16 candidates, totalling 64 fits
###Markdown
___________________________________________________________**Put the results of the grid search in a pandas DataFrame.**
###Code
cv_rf_results_df = pd.DataFrame(cv_rf.cv_results_)
cv_rf_results_df.head()
###Output
_____no_output_____
###Markdown
**Find the best hyperparameters from the cross-validation.**
###Code
cv_rf_results_df.max()
cv_rf.best_params_
###Output
_____no_output_____
###Markdown
From max_depth: max_depth=9 looks the best. Also, trees=100 looks the best.
________________________________________________________________________________________________________
**Create a `pcolormesh` visualization of the mean testing score for each combination of hyperparameters.** Hint: Remember to reshape the values of the mean testing scores to be a two-dimensional 4x4 grid.
###Code
# Create a 5x5 grid
xx_rf, yy_rf = np.meshgrid(range(5), range(5))
# Set color map to `plt.cm.jet`
cm_rf = plt.cm.jet
# Visualize pcolormesh
plt.figure(figsize=(10,10))
ax_rf = plt.axes()
pcolor_graph = ax_rf.pcolormesh(xx_rf, yy_rf, cv_rf_results_df['mean_test_score'].values.reshape((4,4)), cmap=cm_rf)
plt.colorbar(pcolor_graph, label='Average testing ROC AUC')
ax_rf.set_aspect('equal')
ax_rf.set_xticks([0.5, 1.5, 2.5, 3.5])
ax_rf.set_yticks([0.5, 1.5, 2.5, 3.5])
ax_rf.set_xticklabels([str(tick_label) for tick_label in rf_hyperparameters['n_estimators']])
ax_rf.set_yticklabels([str(tick_label) for tick_label in rf_hyperparameters['max_depth']])
ax_rf.set_xlabel('Number of trees')
ax_rf.set_ylabel('Maximum depth')
plt.show()
###Output
_____no_output_____
###Markdown
________________________________________________________________________________________________________**Conclude which set of hyperparameters to use.** From the grid-search results, max_depth=9 looks the best to use, and 100 trees gives the best ROC AUC value.
###Code
# Create a dataframe of the feature names and importances
# (importances come from the best estimator refit by the grid search)
feat_imp_df = pd.DataFrame({
    'Feature': features_response[:-1],
    'Importance': cv_rf.best_estimator_.feature_importances_
})
# Sort values by importance
feat_imp_df = feat_imp_df.sort_values('Importance', ascending=False)
feat_imp_df
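# (Sketch) visualize the sorted importances as a horizontal bar chart;
# matplotlib is already imported as plt earlier in this notebook
feat_imp_df.plot.barh(x='Feature', y='Importance', figsize=(9, 7), legend=False)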
###Output
_____no_output_____ |
CSE_310L-Data Warehouseing and Mining Lab/Assignment-2/Assignment 2.ipynb | ###Markdown
Assignment 2
###Code
from prettytable import PrettyTable as pt
import math
import pandas as pd
values = [15,23,23,64,23,65,22,34,24,62,45,63,45,73,46,73,56,73,56,73,45,72]
car = {'Brand': ['Audi','Mercedes','Tata','Jaguar','McLaren'],
'Price': [50000,40000,20000,35000,60000],
'Rating': [60,65,45,55,65]
}
df = pd.DataFrame(car)
###Output
_____no_output_____
###Markdown
Mean. The mean can also be understood as the average of certain numbers.```The arithmetic mean, also known as average or arithmetic average, is a central value of a finite set of numbers.``` Formula: mean = (Σ xᵢ) / n
###Code
def mean(values):
    # Sum all observations, then divide by the count
    total = 0
    for i in values:
        total += i
    return total / len(values)
print("Mean: {}".format(mean(values)))
###Output
Mean: 48.86363636363637
###Markdown
Median```The median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution.``` For a data set, it may be thought of as "the middle" value. Formula: the middle element of the sorted data for odd n; the average of the two middle elements for even n.
###Code
def median(values):
    sorted_values = sorted(values)   # sort a copy to avoid mutating the caller's list
    length = len(sorted_values)
    if length % 2:
        # odd count: the single middle element
        return sorted_values[length // 2]
    # even count: average of the two middle elements
    return (sorted_values[length // 2 - 1] + sorted_values[length // 2]) / 2
print("Median: {}".format(median(values)))
mode_values = [1,1,1,1,1,1,1,1,1,2,2,2,3,4,5,3,3,3,4,5,6,7,8,8,6,7,5,3,4,5,6,8,9,0,8,6,7,5,4,5,7,8,9,8,5,3,2,4,6,8,9,8,9,7,5,3,4,4,5]
###Output
_____no_output_____
###Markdown
Mode```The mode is the value that appears most often in a set of data values```
###Code
def mode(mode_values):
greatest = 0
mode_val = list()
freq = {}
for item in mode_values:
if (item in freq):
freq[item] += 1
else:
freq[item] = 1
for i,j in freq.items():
if j > greatest:
greatest = j
mode_val.clear()
mode_val.insert(0,i)
elif j==greatest:
mode_val.append(i)
print("Mode : {}".format(mode_val))
print("No of Occurances : {}".format(greatest))
mode(mode_values)
###Output
Mode : [1, 5]
No of Occurrences : 9
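###Markdown
As a cross-check of the hand-rolled mode above, Python 3.8+ ships the same multi-mode behaviour in the standard library (this is an added sanity check, not part of the original assignment).
###Code
import statistics
# statistics.multimode returns every value tied for the highest frequency
print(statistics.multimode(mode_values))
###Output
_____no_output_____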
###Markdown
Variance```Variance tells you the degree of spread in your data set. The more spread the data, the larger the variance is in relation to the mean.``` Formula: σ² = Σ(Χ - μ)² / Ν, where σ² = population variance, Σ = sum of…, Χ = each value, μ = population mean, Ν = number of values in the population
###Code
def variance(values):
    mean_val = mean(values)
    print("Mean: {}".format(mean_val))
    # Tabulate each observation's deviation from the mean, then average
    # the squared deviations to get the population variance
    myTable = pt(["i", "Deviation (mean - i)"])
    sq_dev_total = 0
    for i in values:
        myTable.add_row([i, mean_val - i])
        sq_dev_total += (mean_val - i) ** 2
    print(myTable)
    print("Variance: {}".format(sq_dev_total / len(values)))
variance(values)
###Output
Mean: 48.86363636363637
+----+---------------------+
| i | Variance |
+----+---------------------+
| 15 | 33.86363636363637 |
| 22 | 26.863636363636367 |
| 23 | 25.863636363636367 |
| 23 | 25.863636363636367 |
| 23 | 25.863636363636367 |
| 24 | 24.863636363636367 |
| 34 | 14.863636363636367 |
| 45 | 3.863636363636367 |
| 45 | 3.863636363636367 |
| 45 | 3.863636363636367 |
| 46 | 2.863636363636367 |
| 56 | -7.136363636363633 |
| 56 | -7.136363636363633 |
| 62 | -13.136363636363633 |
| 63 | -14.136363636363633 |
| 64 | -15.136363636363633 |
| 65 | -16.136363636363633 |
| 72 | -23.136363636363633 |
| 73 | -24.136363636363633 |
| 73 | -24.136363636363633 |
| 73 | -24.136363636363633 |
| 73 | -24.136363636363633 |
+----+---------------------+
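###Markdown
As a cross-check of the deviations tabulated above, the population variance and standard deviation can be computed directly with the standard library (an added sanity check, not part of the original assignment).
###Code
import statistics
print("Population variance:", statistics.pvariance(values))
print("Population standard deviation:", statistics.pstdev(values))
###Output
_____no_output_____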
###Markdown
Standard Deviation: the square root of the variance.```In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values.``` Formula: σ = √(Σ(Χ - μ)² / Ν)
###Code
def sd(values):
    mean_val = mean(values)
    print("Mean: {}".format(mean_val))
    # Population standard deviation: square root of the average squared deviation
    sq_dev_total = sum((x - mean_val) ** 2 for x in values)
    print("Standard deviation: {}".format(math.sqrt(sq_dev_total / len(values))))
sd(values)
df
###Output
_____no_output_____
###Markdown
Correlation - Measures the strength of the linear relationship between two variables. It does not mean that changes in one variable actually cause the changes in the other variable; sometimes it is clear that there is a causal relationship, but correlation alone does not establish it.
###Code
df.corr()
###Output
_____no_output_____
###Markdown
Covariance - It provides insight into how two variables are related to one another- A positive covariance means that the two variables at hand are positively related, and they move in the same direction
###Code
df.cov()
###Output
_____no_output_____ |
code/thermo/thermo_classification_t12.ipynb | ###Markdown
from Bio import SeqIO
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import tqdm
import glob
import re
import requests
import io
import torch
from argparse import Namespace
from esm.constants import proteinseq_toks
import math
import torch.nn as nn
import torch.nn.functional as F
from esm.modules import TransformerLayer, PositionalEmbedding  # noqa
from esm.model import ProteinBertModel
import esm
import time
import tape
from tape import ProteinBertModel, TAPETokenizer, UniRepModel
###Code
pdt_embed = np.load("../../out/201120/pdt_motor_t12.npy")
pdt_motor = pd.read_csv("../../data/thermo/pdt_motor.csv")
print(pdt_embed.shape)
print(pdt_motor.shape)
pfamA_target_name = ["PF00349","PF00022","PF03727","PF06723",\
"PF14450","PF03953","PF12327","PF00091","PF10644",\
"PF13809","PF14881","PF00063","PF00225","PF03028"]
pdt_motor_target = pdt_motor.loc[pdt_motor["pfam_id"].isin(pfamA_target_name),:]
pdt_embed_target = pdt_embed[pdt_motor["pfam_id"].isin(pfamA_target_name),:]
print(pdt_embed_target.shape)
print(pdt_motor_target.shape)
print(sum(pdt_motor_target["is_thermophilic"]))
pdt_motor.groupby(["clan","is_thermophilic"]).count()
pdt_motor_target.groupby(["clan","is_thermophilic"]).count()
pdt_motor.loc[pdt_motor["clan"]=="p_loop_gtpase",:].groupby(["pfam_id","is_thermophilic"]).count()
###Output
_____no_output_____
###Markdown
Try to create a balanced training set by sampling the same number, min(#thermophilic, #non-thermophilic), from each family. For now, do not sample from a family if it does not contain one of the classes.
###Code
thermo_sampled = pd.DataFrame()
for pfam_id in pdt_motor["pfam_id"].unique():
curr_dat = pdt_motor.loc[pdt_motor["pfam_id"] == pfam_id,:]
is_thermo = curr_dat.loc[curr_dat["is_thermophilic"]==1,:]
not_thermo = curr_dat.loc[curr_dat["is_thermophilic"]==0,:]
if (not_thermo.shape[0]>=is_thermo.shape[0]):
print(is_thermo.shape[0])
#sample #is_thermo.shape[0] entries from not_thermo uniformly
thermo_sampled = thermo_sampled.append(is_thermo)
tmp = not_thermo.sample(n = is_thermo.shape[0])
else:
#sample #not_thermo.shape[0] entries from is_thermo uniformly
print(not_thermo.shape[0])
thermo_sampled = thermo_sampled.append(not_thermo)
tmp = is_thermo.sample(n = not_thermo.shape[0])
thermo_sampled = thermo_sampled.append(tmp)
thermo_sampled.groupby(["clan","is_thermophilic"]).count()
thermo_sampled_embed = pdt_embed[thermo_sampled.index,:]
###Output
_____no_output_____
###Markdown
Normalize the hidden dimensions
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(thermo_sampled_embed)
thermo_sampled_embed_scaled = scaler.transform(thermo_sampled_embed)
u, s, v = np.linalg.svd(thermo_sampled_embed_scaled.T@thermo_sampled_embed_scaled)
s[0:10]
s_ratio = np.cumsum(s)/sum(s)
s_ratio[270]
a = thermo_sampled_embed_scaled.T@thermo_sampled_embed_scaled
a.shape
sigma = np.cov(thermo_sampled_embed_scaled.T)
sigma.shape
u, s, v = np.linalg.svd(sigma)
s[0:10]
s_ratio = np.cumsum(s)/sum(s)
s_ratio[75]
from sklearn.decomposition import PCA
pca = PCA(n_components=75)
thermo_sampled_embed_scaled_reduced = pca.fit_transform(thermo_sampled_embed_scaled)
np.cumsum(pca.explained_variance_ratio_)
X = thermo_sampled_embed_scaled_reduced
y = thermo_sampled["is_thermophilic"]
print(X.shape)
print(y.shape)
###Output
(56532, 75)
(56532,)
###Markdown
Classifying thermophilic using logistic regression with cross validation
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LogisticRegressionCV
clf = LogisticRegressionCV(cv=5, random_state=0).fit(X_train, y_train)
clf.score(X_test, y_test)
clf.score(X_train, y_train)
###Output
_____no_output_____
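###Markdown
`LogisticRegressionCV` also exposes the inverse-regularization strength it selected during cross-validation; a quick peek (this assumes `clf` is still the fitted logistic model from the cell above):
###Code
print(clf.C_)  # best C found by cross-validation
###Output
_____no_output_____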
###Markdown
Classifying thermophilic using softSVM
###Code
from sklearn.svm import LinearSVC
clf = LinearSVC(random_state=0)
clf.fit(X_train, y_train)
clf.score(X_train, y_train)
clf.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Classifying thermophilic using kNN classifier
###Code
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=5,weights = "uniform")
neigh.fit(X_train, y_train)
neigh.score(X_train, y_train)
neigh.score(X_test, y_test)
neigh = KNeighborsClassifier(n_neighbors=5,weights = "distance")
neigh.fit(X_train, y_train)
print(neigh.score(X_train, y_train))
print(neigh.score(X_test, y_test))
neigh = KNeighborsClassifier(n_neighbors=9,weights = "distance")
neigh.fit(X_train, y_train)
print(neigh.score(X_train, y_train))
print(neigh.score(X_test, y_test))
from torch.utils.data import Dataset, DataLoader
class ThermoDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, dat,label):
"""
Args:
dat (ndarray): ndarray with the X data
label: an pdSeries with the 0/1 label of the X data
"""
self.X = dat
self.y = label
def __len__(self):
return self.X.shape[0]
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
embed = self.X[idx,:]
is_thermo = self.y.iloc[idx]
sample = {'X': embed, 'y': is_thermo}
return sample
X = thermo_sampled_embed_scaled_reduced
y = thermo_sampled["is_thermophilic"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
thermo_dataset_train = ThermoDataset(X_train,y_train)
train_loader = DataLoader(thermo_dataset_train, batch_size=100,
shuffle=True, num_workers=0)
for i_batch, sample_batched in enumerate(train_loader):
print(i_batch, sample_batched['X'].size(),
sample_batched['y'].size())
if i_batch > 3:
break
import torch.nn as nn
import torch.nn.functional as F
class ThermoClassifier_75(nn.Module):
def __init__(self):
super(ThermoClassifier_75, self).__init__()
self.fc1 = nn.Linear(75, 60)
self.fc2 = nn.Linear(60, 50)
self.fc3 = nn.Linear(50, 30)
self.fc4 = nn.Linear(30, 10)
self.fc5 = nn.Linear(10, 2)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.relu(self.fc4(x))
x = self.fc5(x)
return x
import torch.optim as optim
learning_rate = 0.001
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = ThermoClassifier_75().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
device
# Train the model
num_epochs = 1000
total_step = len(train_loader)
for epoch in range(num_epochs):
for i_batch, sample_batched in enumerate(train_loader):
X = sample_batched['X']
y = sample_batched['y']
# Move tensors to the configured device
# print(X)
embed = X.to(device)
labels = y.to(device)
# Forward pass
outputs = model(embed)
# print(outputs.shape)
loss = criterion(outputs, labels)
        # Backpropagation and optimization
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i_batch+1) % 200 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i_batch+1, total_step, loss.item()))
thermo_dataset_test = ThermoDataset(X_test,y_test)
test_loader = DataLoader(thermo_dataset_test, batch_size=100,
shuffle=True, num_workers=0)
# Test the model
# In the test phase, don't need to compute gradients (for memory efficiency)
with torch.no_grad():
correct = 0
total = 0
for i_batch, sample_batched in enumerate(test_loader):
X = sample_batched['X'].to(device)
y = sample_batched['y'].to(device)
outputs = model(X)
_, predicted = torch.max(outputs.data, 1)
# print(predicted)
# print(y.size(0))
total += y.size(0)
correct += (predicted == y).sum().item()
print('Accuracy of the network on the test for model_75 : {} %'.format(100 * correct / total))
# Save the model checkpoint
torch.save(model.state_dict(), 'model_75.ckpt')
###Output
Accuracy of the network on the test for model_75 : 79.35791987264527 %
###Markdown
model not using reduced data
###Code
X = thermo_sampled_embed_scaled
y = thermo_sampled["is_thermophilic"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
thermo_dataset_train = ThermoDataset(X_train,y_train)
train_loader = DataLoader(thermo_dataset_train, batch_size=100,
shuffle=True, num_workers=0)
for i_batch, sample_batched in enumerate(train_loader):
print(i_batch, sample_batched['X'].size(),
sample_batched['y'].size())
if i_batch > 3:
break
import torch.nn as nn
import torch.nn.functional as F
class ThermoClassifier(nn.Module):
def __init__(self):
super(ThermoClassifier, self).__init__()
self.fc1 = nn.Linear(768, 100)
self.fc2 = nn.Linear(100, 50)
self.fc3 = nn.Linear(50, 30)
self.fc4 = nn.Linear(30, 10)
self.fc5 = nn.Linear(10, 2)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.relu(self.fc4(x))
x = self.fc5(x)
return x
import torch.optim as optim
learning_rate = 0.001
criterion = nn.CrossEntropyLoss()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = ThermoClassifier().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
device
# Train the model
num_epochs = 1000
total_step = len(train_loader)
for epoch in range(num_epochs):
for i_batch, sample_batched in enumerate(train_loader):
X = sample_batched['X']
y = sample_batched['y']
# Move tensors to the configured device
# print(X)
embed = X.to(device)
labels = y.to(device)
# Forward pass
outputs = model(embed)
# print(outputs.shape)
loss = criterion(outputs, labels)
        # Backpropagation and optimization
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i_batch+1) % 200 == 0:
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
.format(epoch+1, num_epochs, i_batch+1, total_step, loss.item()))
thermo_dataset_test = ThermoDataset(X_test,y_test)
test_loader = DataLoader(thermo_dataset_test, batch_size=100,
shuffle=True, num_workers=0)
# Test the model
# In the test phase, don't need to compute gradients (for memory efficiency)
with torch.no_grad():
correct = 0
total = 0
for i_batch, sample_batched in enumerate(test_loader):
X = sample_batched['X'].to(device)
y = sample_batched['y'].to(device)
outputs = model(X)
_, predicted = torch.max(outputs.data, 1)
# print(predicted)
# print(y.size(0))
total += y.size(0)
correct += (predicted == y).sum().item()
print('Accuracy of the network on the test for model_768 : {} %'.format(100 * correct / total))
# Save the model checkpoint
torch.save(model.state_dict(), 'model_768.ckpt')
###Output
Accuracy of the network on the test for model_768 : 83.49694879278323 %
|
safaricom_hackathon.ipynb | ###Markdown
Natural Language Processing
###Code
import pandas as pd
import nltk
import ftfy
from ftfy import *
from nltk.tokenize import sent_tokenize,word_tokenize,TweetTokenizer
nltk.download('punkt')
nltk.download('stopwords')
data.head()
y=data['tweet_location']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data, y, test_size=0.2)
print('shapes of the X data are:train {}, test {}'.format(X_train.shape,X_test.shape))
print('shapes of the y data are:train {}, test {}'.format(y_train.shape,y_test.shape))
###Output
_____no_output_____
###Markdown
Data Pre-Processing and Analysis
###Code
a=y_train.isnull().value_counts()
a
X_train['text'].shape
from textblob import TextBlob

for x in X_train['text']:
    blob = TextBlob(x)
    print(blob.sentiment.polarity)
###Output
_____no_output_____
###Markdown
Advanced Text Processing 1.Word Count
###Code
data['word_count']=data['text'].apply(lambda x:len(str(x).split(" ")))
data.shape
###Output
_____no_output_____
###Markdown
2.Number of Characters
###Code
data['char_count']=data['text'].str.len()
###Output
_____no_output_____
###Markdown
3.Average Word Length
###Code
def avg_word(sentence):
words=sentence.split()
return sum(len(word) for word in words)/len(words)
data['avg_word']=data['text'].apply(lambda x:avg_word(x))
from nltk.corpus import stopwords
###Output
_____no_output_____
###Markdown
4.Number of Stopwords
###Code
stop=stopwords.words('english')
data['stopwords']=data['text'].apply(lambda x: len([x for x in x.split() if x in stop]))
###Output
_____no_output_____
###Markdown
5. Number of Hashtags (special characters)
###Code
data['hashtags']=data['text'].apply(lambda x:len([x for x in x.split()if x.startswith('#')]))
train2=train['text'].to_string()
train3=fix_text_segment(train2,normalization=('NFKC'))
tknzr = TweetTokenizer()
train4=tknzr.tokenize(train2)
train4
stop_words=set(stopwords.words('english'))
tokens=[x for x in train4 if not x in stop_words]
print(tokens)
###Output
_____no_output_____
###Markdown
Stemming
###Code
from nltk.stem.porter import PorterStemmer
porter=PorterStemmer()
stems=[]
for i in tokens:
stems.append(porter.stem(i))
print(stems)
from textblob import Word
data['tweet_lematized']=data['text'].apply(lambda x:" ".join([Word(word).lemmatize() for word in x.split()]))
data['tweet_lematized'][1]
TextBlob(X_train['tweet_lematized'][1]).ngrams(2)
data['tweet_lematized'].shape
###Output
_____no_output_____
###Markdown
Feature Extraction
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf=TfidfVectorizer(max_features=None,lowercase=True,analyzer='word',stop_words='english',ngram_range=(1,1))
data_vect = tfidf.fit_transform(data['tweet_lematized'])  # sparse document-term matrix; kept separate, as a sparse matrix cannot be stored in a single DataFrame column
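# (Sketch) quick look at the fitted vectorizer: document-term matrix shape
# and vocabulary size
print(data_vect.shape)
print(len(tfidf.vocabulary_))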
data.head(5)
X_train['text'][:5].apply(lambda x: TextBlob(x).sentiment)
X_train['tweet_lematized'][:5].apply(lambda x:TextBlob(x).sentiment)
data['sentiments_polarity']=data['tweet_lematized'].apply(lambda x:TextBlob(x).sentiment)
data[['tweet_lematized','sentiments_polarity']]
# def place(location):
# if location == 'clcncl':
# return 0
# else:
# return 1
# train['labels']=train['tweet_location'].apply(lambda x:place(x))
# def checkLocation(location):
# if location == 'NaN':
# return 'This is not location'
# train['labels']=train['tweet_location'].apply(lambda x:checkLocation(x))
data['location2']=data['tweet_location'].apply(lambda x:pd.isnull(x))
def place(location2):
if location2==True:
return 0
else:
return 1
data['labels']=data['location2'].apply(lambda x:place(x))
data[['tweet_location','location2','labels']]
###Output
_____no_output_____
###Markdown
MACHINE LEARNING
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
y=data['labels']
feature_cols = ['word_count', 'char_count', 'avg_word', 'stopwords', 'hashtags']  # numeric features engineered above
X_train, X_test, y_train, y_test = train_test_split(data[feature_cols], y, test_size=0.2)
X_test.shape
X_train.shape
y_test.shape
y_train.shape
logisticRegr = LogisticRegression()
logisticRegr.fit(X_train, y_train)
###Output
_____no_output_____ |
feed_fwd_net.ipynb | ###Markdown
Fully Connected Feed Fwd Net
###Code
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.autograd import Variable

class IrisNet(nn.Module):
def __init__(self, input_size, hidden1_size, hidden2_size, num_classes):
super(IrisNet, self).__init__()
self.fc1 = nn.Linear(input_size, hidden1_size)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(hidden1_size, hidden2_size)
self.relu2 = nn.ReLU()
self.fc3 = nn.Linear(hidden2_size, num_classes)
def forward(self, x):
out = self.fc1(x)
out = self.relu1(out)
out = self.fc2(out)
out = self.relu2(out)
out = self.fc3(out)
return out
model = IrisNet(4, 100, 50, 3).cuda()
print(model)
###Output
IrisNet(
(fc1): Linear(in_features=4, out_features=100, bias=True)
(relu1): ReLU()
(fc2): Linear(in_features=100, out_features=50, bias=True)
(relu2): ReLU()
(fc3): Linear(in_features=50, out_features=3, bias=True)
)
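###Markdown
A quick sanity check on model size: counting trainable parameters (4·100 + 100 for `fc1`, 100·50 + 50 for `fc2`, 50·3 + 3 for `fc3`, i.e. 5,703 in total).
###Code
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)
###Output
_____no_output_____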
###Markdown
Creating the data loader
###Code
batch_size = 60
iris_data_file = 'data/iris.data.txt'
# Get the datasets
train_ds, test_ds = get_datasets(iris_data_file)
print("training set length", len(train_ds))
print("test set length", len(test_ds))
train_loader = torch.utils.data.DataLoader(dataset = train_ds, batch_size = batch_size, shuffle = True)
test_loader = torch.utils.data.DataLoader(dataset = test_ds, batch_size = batch_size, shuffle = True)
criterion = nn.CrossEntropyLoss()
learning_rate = 0.001
optimizer = torch.optim.SGD(model.parameters(), lr = learning_rate, nesterov=True, momentum=0.9, dampening=0)
###Output
_____no_output_____
###Markdown
Training Loop
###Code
# 2 loops outer loop executes the epochs. Inner loop executes the iterations per epoch.
num_epochs = 500
train_loss = []
test_loss = []
train_accuracy = []
test_accuracy = []
for epoch in range(num_epochs):
train_correct = 0
train_total = 0
for i, (items, classes) in enumerate(train_loader):
# Each batch is a tuple. First element is a float tensor containing all the dependent variables for each batch
# Second element of tuple
# Convert torch tensor to variable
items = Variable(items.cuda())
classes = Variable(classes.cuda())
model.train()
# Clear off gradients from past operations
optimizer.zero_grad()
# Do the forward pass
outputs = model(items)
# Calculate the loss
loss = criterion(outputs, classes)
# Calculate the gradients with the help of back propagation
loss.backward()
# Ask the opitmizer to update the parameters on the basis of the gradients
optimizer.step()
# Record the correct predictions for training data
train_total += classes.size(0)
_, predicted = torch.max(outputs.data, 1)
        train_correct += (predicted == classes.data).sum().item()
        print('Epoch %d/%d, Iteration %d/%d, Loss: %.4f'%(epoch+1, num_epochs, i+1, len(train_ds)//batch_size, loss.item()))
model.eval()
    train_loss.append(loss.item())
#Record the training accuracy
train_accuracy.append(100*train_correct/train_total)
#Check on the test set
test_items = torch.FloatTensor(test_ds.data.values[:,0:4])
test_classes = torch.LongTensor(test_ds.data.values[:,4])
outputs = model(Variable(test_items.cuda()))
loss = criterion(outputs, Variable(test_classes.cuda()))
    test_loss.append(loss.item())
#Record the testing accuracy
_, predicted = torch.max(outputs.data,1)
total = test_classes.size(0)
    correct = (predicted==test_classes.cuda()).sum().item()
test_accuracy.append((100*correct/total))
###Output
Epoch 1/500, Iteration 1/2, Loss: 1.2179
Epoch 1/500, Iteration 2/2, Loss: 1.1851
Epoch 2/500, Iteration 1/2, Loss: 1.1945
Epoch 2/500, Iteration 2/2, Loss: 1.1788
Epoch 3/500, Iteration 1/2, Loss: 1.1689
Epoch 3/500, Iteration 2/2, Loss: 1.1642
Epoch 4/500, Iteration 1/2, Loss: 1.1034
Epoch 4/500, Iteration 2/2, Loss: 1.1852
Epoch 5/500, Iteration 1/2, Loss: 1.1254
Epoch 5/500, Iteration 2/2, Loss: 1.1222
Epoch 6/500, Iteration 1/2, Loss: 1.1056
Epoch 6/500, Iteration 2/2, Loss: 1.1055
Epoch 7/500, Iteration 1/2, Loss: 1.0860
Epoch 7/500, Iteration 2/2, Loss: 1.0937
Epoch 8/500, Iteration 1/2, Loss: 1.0734
Epoch 8/500, Iteration 2/2, Loss: 1.0751
Epoch 9/500, Iteration 1/2, Loss: 1.0776
Epoch 9/500, Iteration 2/2, Loss: 1.0461
Epoch 10/500, Iteration 1/2, Loss: 1.0585
Epoch 10/500, Iteration 2/2, Loss: 1.0380
Epoch 11/500, Iteration 1/2, Loss: 1.0604
Epoch 11/500, Iteration 2/2, Loss: 1.0170
Epoch 12/500, Iteration 1/2, Loss: 1.0336
Epoch 12/500, Iteration 2/2, Loss: 1.0262
Epoch 13/500, Iteration 1/2, Loss: 1.0249
Epoch 13/500, Iteration 2/2, Loss: 1.0178
Epoch 14/500, Iteration 1/2, Loss: 1.0027
Epoch 14/500, Iteration 2/2, Loss: 1.0210
Epoch 15/500, Iteration 1/2, Loss: 1.0082
Epoch 15/500, Iteration 2/2, Loss: 0.9975
Epoch 16/500, Iteration 1/2, Loss: 1.0077
Epoch 16/500, Iteration 2/2, Loss: 0.9813
Epoch 17/500, Iteration 1/2, Loss: 0.9690
Epoch 17/500, Iteration 2/2, Loss: 1.0034
Epoch 18/500, Iteration 1/2, Loss: 0.9572
Epoch 18/500, Iteration 2/2, Loss: 1.0002
Epoch 19/500, Iteration 1/2, Loss: 0.9724
Epoch 19/500, Iteration 2/2, Loss: 0.9664
Epoch 20/500, Iteration 1/2, Loss: 0.9514
Epoch 20/500, Iteration 2/2, Loss: 0.9702
Epoch 21/500, Iteration 1/2, Loss: 0.9601
Epoch 21/500, Iteration 2/2, Loss: 0.9440
Epoch 22/500, Iteration 1/2, Loss: 0.9464
Epoch 22/500, Iteration 2/2, Loss: 0.9416
Epoch 23/500, Iteration 1/2, Loss: 0.9298
Epoch 23/500, Iteration 2/2, Loss: 0.9387
Epoch 24/500, Iteration 1/2, Loss: 0.9174
Epoch 24/500, Iteration 2/2, Loss: 0.9335
Epoch 25/500, Iteration 1/2, Loss: 0.9222
Epoch 25/500, Iteration 2/2, Loss: 0.9113
Epoch 26/500, Iteration 1/2, Loss: 0.8990
Epoch 26/500, Iteration 2/2, Loss: 0.9168
Epoch 27/500, Iteration 1/2, Loss: 0.9027
Epoch 27/500, Iteration 2/2, Loss: 0.8940
Epoch 28/500, Iteration 1/2, Loss: 0.8892
Epoch 28/500, Iteration 2/2, Loss: 0.8890
Epoch 29/500, Iteration 1/2, Loss: 0.8855
Epoch 29/500, Iteration 2/2, Loss: 0.8737
Epoch 30/500, Iteration 1/2, Loss: 0.8854
Epoch 30/500, Iteration 2/2, Loss: 0.8556
Epoch 31/500, Iteration 1/2, Loss: 0.8644
Epoch 31/500, Iteration 2/2, Loss: 0.8585
Epoch 32/500, Iteration 1/2, Loss: 0.8486
Epoch 32/500, Iteration 2/2, Loss: 0.8542
Epoch 33/500, Iteration 1/2, Loss: 0.8446
Epoch 33/500, Iteration 2/2, Loss: 0.8390
Epoch 34/500, Iteration 1/2, Loss: 0.8339
Epoch 34/500, Iteration 2/2, Loss: 0.8312
Epoch 35/500, Iteration 1/2, Loss: 0.8096
Epoch 35/500, Iteration 2/2, Loss: 0.8365
Epoch 36/500, Iteration 1/2, Loss: 0.8105
Epoch 36/500, Iteration 2/2, Loss: 0.8153
Epoch 37/500, Iteration 1/2, Loss: 0.8194
Epoch 37/500, Iteration 2/2, Loss: 0.7866
Epoch 38/500, Iteration 1/2, Loss: 0.8049
Epoch 38/500, Iteration 2/2, Loss: 0.7837
Epoch 39/500, Iteration 1/2, Loss: 0.7880
Epoch 39/500, Iteration 2/2, Loss: 0.7791
Epoch 40/500, Iteration 1/2, Loss: 0.7676
Epoch 40/500, Iteration 2/2, Loss: 0.7804
Epoch 41/500, Iteration 1/2, Loss: 0.7650
Epoch 41/500, Iteration 2/2, Loss: 0.7630
Epoch 42/500, Iteration 1/2, Loss: 0.7376
Epoch 42/500, Iteration 2/2, Loss: 0.7723
Epoch 43/500, Iteration 1/2, Loss: 0.7292
Epoch 43/500, Iteration 2/2, Loss: 0.7605
Epoch 44/500, Iteration 1/2, Loss: 0.7477
Epoch 44/500, Iteration 2/2, Loss: 0.7224
Epoch 45/500, Iteration 1/2, Loss: 0.7188
Epoch 45/500, Iteration 2/2, Loss: 0.7324
Epoch 46/500, Iteration 1/2, Loss: 0.7393
Epoch 46/500, Iteration 2/2, Loss: 0.6930
Epoch 47/500, Iteration 1/2, Loss: 0.7112
Epoch 47/500, Iteration 2/2, Loss: 0.7025
Epoch 48/500, Iteration 1/2, Loss: 0.6985
Epoch 48/500, Iteration 2/2, Loss: 0.6968
Epoch 49/500, Iteration 1/2, Loss: 0.6985
Epoch 49/500, Iteration 2/2, Loss: 0.6787
Epoch 50/500, Iteration 1/2, Loss: 0.7009
Epoch 50/500, Iteration 2/2, Loss: 0.6577
Epoch 51/500, Iteration 1/2, Loss: 0.6446
Epoch 51/500, Iteration 2/2, Loss: 0.6974
Epoch 52/500, Iteration 1/2, Loss: 0.6511
Epoch 52/500, Iteration 2/2, Loss: 0.6724
Epoch 53/500, Iteration 1/2, Loss: 0.6547
Epoch 53/500, Iteration 2/2, Loss: 0.6515
Epoch 54/500, Iteration 1/2, Loss: 0.6297
Epoch 54/500, Iteration 2/2, Loss: 0.6601
Epoch 55/500, Iteration 1/2, Loss: 0.6774
Epoch 55/500, Iteration 2/2, Loss: 0.5953
Epoch 56/500, Iteration 1/2, Loss: 0.6295
Epoch 56/500, Iteration 2/2, Loss: 0.6270
Epoch 57/500, Iteration 1/2, Loss: 0.6174
Epoch 57/500, Iteration 2/2, Loss: 0.6231
Epoch 58/500, Iteration 1/2, Loss: 0.6134
Epoch 58/500, Iteration 2/2, Loss: 0.6115
Epoch 59/500, Iteration 1/2, Loss: 0.6340
Epoch 59/500, Iteration 2/2, Loss: 0.5758
Epoch 60/500, Iteration 1/2, Loss: 0.5986
Epoch 60/500, Iteration 2/2, Loss: 0.5963
Epoch 61/500, Iteration 1/2, Loss: 0.5419
Epoch 61/500, Iteration 2/2, Loss: 0.6408
Epoch 62/500, Iteration 1/2, Loss: 0.5737
Epoch 62/500, Iteration 2/2, Loss: 0.5929
Epoch 63/500, Iteration 1/2, Loss: 0.5980
Epoch 63/500, Iteration 2/2, Loss: 0.5556
Epoch 64/500, Iteration 1/2, Loss: 0.5287
Epoch 64/500, Iteration 2/2, Loss: 0.6112
Epoch 65/500, Iteration 1/2, Loss: 0.5694
Epoch 65/500, Iteration 2/2, Loss: 0.5571
Epoch 66/500, Iteration 1/2, Loss: 0.5810
Epoch 66/500, Iteration 2/2, Loss: 0.5330
Epoch 67/500, Iteration 1/2, Loss: 0.5512
Epoch 67/500, Iteration 2/2, Loss: 0.5506
Epoch 68/500, Iteration 1/2, Loss: 0.5370
Epoch 68/500, Iteration 2/2, Loss: 0.5527
Epoch 69/500, Iteration 1/2, Loss: 0.5444
Epoch 69/500, Iteration 2/2, Loss: 0.5333
Epoch 70/500, Iteration 1/2, Loss: 0.5352
Epoch 70/500, Iteration 2/2, Loss: 0.5312
Epoch 71/500, Iteration 1/2, Loss: 0.5367
Epoch 71/500, Iteration 2/2, Loss: 0.5185
Epoch 72/500, Iteration 1/2, Loss: 0.5523
Epoch 72/500, Iteration 2/2, Loss: 0.4923
Epoch 73/500, Iteration 1/2, Loss: 0.4973
Epoch 73/500, Iteration 2/2, Loss: 0.5368
Epoch 74/500, Iteration 1/2, Loss: 0.5103
Epoch 74/500, Iteration 2/2, Loss: 0.5136
Epoch 75/500, Iteration 1/2, Loss: 0.4953
Epoch 75/500, Iteration 2/2, Loss: 0.5191
Epoch 76/500, Iteration 1/2, Loss: 0.4845
Epoch 76/500, Iteration 2/2, Loss: 0.5201
Epoch 77/500, Iteration 1/2, Loss: 0.5059
Epoch 77/500, Iteration 2/2, Loss: 0.4899
Epoch 78/500, Iteration 1/2, Loss: 0.4621
Epoch 78/500, Iteration 2/2, Loss: 0.5239
Epoch 79/500, Iteration 1/2, Loss: 0.4673
Epoch 79/500, Iteration 2/2, Loss: 0.5097
Epoch 80/500, Iteration 1/2, Loss: 0.4462
Epoch 80/500, Iteration 2/2, Loss: 0.5222
Epoch 81/500, Iteration 1/2, Loss: 0.4220
Epoch 81/500, Iteration 2/2, Loss: 0.5380
Epoch 82/500, Iteration 1/2, Loss: 0.4861
Epoch 82/500, Iteration 2/2, Loss: 0.4654
Epoch 83/500, Iteration 1/2, Loss: 0.4949
Epoch 83/500, Iteration 2/2, Loss: 0.4485
Epoch 84/500, Iteration 1/2, Loss: 0.4966
Epoch 84/500, Iteration 2/2, Loss: 0.4394
Epoch 85/500, Iteration 1/2, Loss: 0.4491
Epoch 85/500, Iteration 2/2, Loss: 0.4791
Epoch 86/500, Iteration 1/2, Loss: 0.4784
Epoch 86/500, Iteration 2/2, Loss: 0.4422
Epoch 87/500, Iteration 1/2, Loss: 0.4706
Epoch 87/500, Iteration 2/2, Loss: 0.4423
Epoch 88/500, Iteration 1/2, Loss: 0.4568
Epoch 88/500, Iteration 2/2, Loss: 0.4496
Epoch 89/500, Iteration 1/2, Loss: 0.4357
Epoch 89/500, Iteration 2/2, Loss: 0.4632
Epoch 90/500, Iteration 1/2, Loss: 0.4545
Epoch 90/500, Iteration 2/2, Loss: 0.4375
Epoch 91/500, Iteration 1/2, Loss: 0.4215
Epoch 91/500, Iteration 2/2, Loss: 0.4637
Epoch 92/500, Iteration 1/2, Loss: 0.4513
Epoch 92/500, Iteration 2/2, Loss: 0.4277
Epoch 93/500, Iteration 1/2, Loss: 0.4700
Epoch 93/500, Iteration 2/2, Loss: 0.4020
Epoch 94/500, Iteration 1/2, Loss: 0.4262
Epoch 94/500, Iteration 2/2, Loss: 0.4405
Epoch 95/500, Iteration 1/2, Loss: 0.4564
Epoch 95/500, Iteration 2/2, Loss: 0.4031
Epoch 96/500, Iteration 1/2, Loss: 0.4054
Epoch 96/500, Iteration 2/2, Loss: 0.4480
Epoch 97/500, Iteration 1/2, Loss: 0.4315
Epoch 97/500, Iteration 2/2, Loss: 0.4155
Epoch 98/500, Iteration 1/2, Loss: 0.4082
Epoch 98/500, Iteration 2/2, Loss: 0.4328
Epoch 99/500, Iteration 1/2, Loss: 0.3887
Epoch 99/500, Iteration 2/2, Loss: 0.4468
Epoch 100/500, Iteration 1/2, Loss: 0.4256
Epoch 100/500, Iteration 2/2, Loss: 0.4042
Epoch 101/500, Iteration 1/2, Loss: 0.3773
Epoch 101/500, Iteration 2/2, Loss: 0.4486
Epoch 102/500, Iteration 1/2, Loss: 0.4382
Epoch 102/500, Iteration 2/2, Loss: 0.3799
Epoch 103/500, Iteration 1/2, Loss: 0.3770
Epoch 103/500, Iteration 2/2, Loss: 0.4356
###Markdown
Loss vs Iterations Plot
###Code
fig = plt.figure(figsize=(12,8))
plt.plot(train_loss, label = 'train_loss')
plt.plot(test_loss, label = 'test_loss')
plt.title("Train and test loss")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Plotting train and test set accuracy
###Code
fig = plt.figure(figsize=(12,8))
plt.plot(train_accuracy, label = 'train_accuracy')
plt.plot(test_accuracy, label = 'test_accuracy')
plt.title("Train and test accuracy")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Persist model to disk
###Code
torch.save(model.state_dict(),"./fwd_net.pth")
###Output
_____no_output_____
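###Markdown
Saving only the `state_dict` keeps the file small but requires re-instantiating the architecture at load time, as done below. To resume training, a fuller checkpoint would also carry the optimizer state; a minimal sketch (the filename is illustrative):
###Code
checkpoint = {
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'epoch': num_epochs,
}
torch.save(checkpoint, './fwd_net_checkpoint.pth')
###Output
_____no_output_____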
###Markdown
Load the model
###Code
net = IrisNet(4, 100, 50, 3)
net.load_state_dict(torch.load("./fwd_net.pth"))
net.eval()
item = [[5.1,3.5,1.4,0.2]]
expected_class = 0 # Iris-Setosa
output = net(Variable(torch.FloatTensor(item)))
_, predicted_class = torch.max(output.data, 1)
print(predicted_class.numpy())
print('Predicted class:', predicted_class.numpy()[0])
print('Expected class:', expected_class)
###Output
[0]
Predicted class: 0
Expected class: 0
|
notebooks/check_landmarks.ipynb | ###Markdown
Check landmarks
###Code
import pickle
import imageio
import azure
import numpy as np
import matplotlib.pyplot as plt
from glob import glob
import azure
%matplotlib inline
def load_landmarks(stim='id-1274_AU26-100_AU9-33', api='google'):
stims = sorted(glob(f'../../../FEED_stimulus_frames/{stim}/*/texmap/frame*.png'))
# number of landmarks: 27 (azure), 34 (google)
n_lm = 27 if api == 'azure' else 34
xy = np.zeros((30, n_lm, 2)) # 30 frames
frames = [str(i).zfill(2) for i in range(1, 31)]
for i, frame in enumerate(frames):
info = stims[i].replace('.png', f'_api-{api}_annotations.pkl')
with open(info, 'rb') as f_in:
info = pickle.load(f_in)
if api == 'azure':
info = info[0].face_landmarks
ii = 0
for attr in dir(info):
this_attr = getattr(info, attr)
if isinstance(this_attr, azure.cognitiveservices.vision.face.models._models_py3.Coordinate):
xy[i, ii, 0] = this_attr.x
xy[i, ii, 1] = this_attr.y
ii += 1
elif api == 'google':
info = info.face_annotations[0]
for ii in range(len(info.landmarks)):
xy[i, ii, 0] = info.landmarks[ii].position.x
xy[i, ii, 1] = info.landmarks[ii].position.y
else:
raise ValueError("Choose api from 'google' and 'azure'.")
return stims, xy
stims, xy = load_landmarks(api='azure')
xy
def plot_face(imgs, landmarks, frame_nr=0):
img = imageio.imread(imgs[frame_nr])
plt.figure(figsize=(6, 8))
plt.imshow(img)
lm = landmarks[frame_nr, :, :]
for i in range(lm.shape[0]):
x, y = lm[i, :]
plt.plot([x, x], [y, y], marker='o')
plt.show();
import ipywidgets
from ipywidgets import interact, fixed
slider = ipywidgets.IntSlider(min=0, max=29, step=1, value=0)
interact(plot_face, frame_nr=slider, imgs=fixed(stims), landmarks=fixed(xy));
from scipy.signal import butter, lfilter

def butter_bandpass(lowcut, highcut, fs, order=5):
    # Design a Butterworth band-pass filter with normalized cutoff frequencies
    nyq = 0.5 * fs
    low = lowcut / nyq
    high = highcut / nyq
    b, a = butter(order, [low, high], btype='band')
    return b, a

def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
    b, a = butter_bandpass(lowcut, highcut, fs, order=order)
    y = lfilter(b, a, data, axis=0)
    return y

# Standardize the landmark trajectories, then band-pass filter them
# (the filter helpers above must be defined before this call)
xy_std = (xy - xy.mean(axis=0)) / xy.std(axis=0)
xy_filt = butter_bandpass_filter(data=xy_std[:, 0, :], lowcut=0.01, highcut=7, fs=30/1.25, order=5)
plt.plot(xy_filt)
from scipy.ndimage import gaussian_filter1d
gaussian_filter1d?
###Output
_____no_output_____ |
experimental/attentive_uncertainty/colabs/2019_09_11_gnp_aggregate_results_bandits.ipynb | ###Markdown
Licensed under the Apache License, Version 2.0.
###Code
import numpy as np
import os
import tensorflow as tf
gfile = tf.compat.v1.gfile
import itertools
import pickle
from collections import defaultdict
rootdir = '/tmp/wheel_bandit'
def get_trial_results(savefile):
h_rewards = None
if gfile.Exists(savefile):
with gfile.Open(savefile, 'rb') as infile:
saved_state = pickle.load(infile)
h_rewards = saved_state['h_rewards'][:, 0]
return h_rewards
algos = [['uniform'], ['neurolinear']]
deltas = [0.5, 0.7, 0.9, 0.95, 0.99]
num_trials = 50
model_types = ['cnp', 'np', 'anp', 'acnp', 'acns']
weights = ['offline']
for mt, wt in itertools.product(model_types, weights):
algos.append(['gnp_' + mt + '_' + wt])
for delta in deltas:
results_dict = defaultdict(list)
print('delta', delta)
for trial_idx in range(num_trials):
instance = str(delta) + '_' + str(trial_idx)
dataset = os.path.join(rootdir, 'data', instance + '.npz')
    with gfile.GFile(dataset, 'rb') as f:  # binary mode for np.load
sampled_vals = np.load(f)
opt_rewards = sampled_vals['opt_rewards']
print('trial_idx', trial_idx)
for algo in algos:
print('algo', algo)
all_algo_names = '_'.join(algo)
filename = instance + '_' + all_algo_names + '.pkl'
if all_algo_names[:3] == 'gnp':
filename = 'gnp/' + instance + '_' + all_algo_names + '.pkl'
savefile = os.path.join(rootdir, 'results', filename)
h_rewards = get_trial_results(savefile)
if h_rewards is not None:
per_time_step_regret = np.array(opt_rewards - h_rewards)
if np.any(per_time_step_regret < 0):
import pdb
pdb.set_trace()
results_dict[algo[0]].append(per_time_step_regret)
print()
aggfile = os.path.join(rootdir, 'results', str(delta) + '_all_results.pkl')
with gfile.Open(aggfile, 'wb') as outfile:
pickle.dump(results_dict, outfile)
###Output
_____no_output_____ |
src/Lessons 12 to 15.ipynb | ###Markdown
Lesson 12 One-way ANOVA
In statistics, one-way analysis of variance (abbreviated one-way ANOVA) is a technique that can be used to compare means of two or more samples (using the F distribution). This technique can be used only for numerical response data, the "Y", usually one variable, and numerical or (usually) categorical input data, the "X", always one variable, hence "one-way".
The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. These estimates rely on various assumptions (see below). The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test (Gosset, 1908). When there are only two means to compare, the t-test and the F-test are equivalent; the relation between ANOVA and t is given by F = t². An extension of one-way ANOVA is two-way analysis of variance, which examines the influence of two different categorical independent variables on one dependent variable.
Assumptions
The results of a one-way ANOVA can be considered reliable as long as the following assumptions are met:
- Response variable residuals are normally distributed (or approximately normally distributed)
- Variances of populations are equal
- Responses for a given group are independent and identically distributed normal random variables (not a simple random sample (SRS))
Using `scipy.stats.f_oneway()`
```
Signature: st.f_oneway(*args, axis=0)
Docstring:
Perform one-way ANOVA.

The one-way ANOVA tests the null hypothesis that two or more groups have
the same population mean. The test is applied to samples from two or
more groups, possibly with differing sizes.

Parameters
----------
sample1, sample2, ... : array_like
    The sample measurements for each group. There must be at least two
    arguments. If the arrays are multidimensional, then all the dimensions
    of the array must be the same except for `axis`.
axis : int, optional
    Axis of the input arrays along which the test is applied.
    Default is 0.

Returns
-------
statistic : float
    The computed F statistic of the test.
pvalue : float
    The associated p-value from the F distribution.

Warns
-----
F_onewayConstantInputWarning
    Raised if each of the input arrays is a constant array. In this case
    the F statistic is either infinite or isn't defined, so ``np.inf`` or
    ``np.nan`` is returned.
F_onewayBadInputSizesWarning
    Raised if the length of any input array is 0, or if all the input
    arrays have length 1. ``np.nan`` is returned for the F statistic and
    the p-value in these cases.

Notes
-----
The ANOVA test has important assumptions that must be satisfied in order
for the associated p-value to be valid.

1. The samples are independent.
2. Each sample is from a normally distributed population.
3. The population standard deviations of the groups are all equal. This
   property is known as homoscedasticity.

If these assumptions are not true for a given set of data, it may still
be possible to use the Kruskal-Wallis H-test (`scipy.stats.kruskal`)
although with some loss of power.

The length of each group must be at least one, and there must be at
least one group with length greater than one. If these conditions are
not satisfied, a warning is generated and (``np.nan``, ``np.nan``) is
returned.

If each group contains constant values, and there exist at least two
groups with different values, the function generates a warning and
returns (``np.inf``, 0).

If all values in all groups are the same, the function generates a
warning and returns (``np.nan``, ``np.nan``).
```
`scipy.stats.f`
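###Markdown
The two-group relation F = t² quoted above is easy to verify numerically; this is an added illustration using two arbitrary samples (not data from the lesson).
###Code
import numpy as np
import scipy.stats as st
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.5, 3.5, 4.5, 5.5])
t_stat, _ = st.ttest_ind(a, b)   # two-sample t-test (pooled variance)
f_stat, _ = st.f_oneway(a, b)    # one-way ANOVA on the same two groups
print(t_stat**2, f_stat)         # the two statistics agree: F = t^2
###Output
_____no_output_____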
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as st

fig = plt.figure(figsize=(15, 10))
ax = fig.add_subplot(1, 1, 1)
ax.set_facecolor('0.95')  # set background color to light grey
x = np.linspace(0.01, 4, 500)
# dfn: degrees of freedom of the numerator (between groups)
# dfd: degrees of freedom of the denominator (within groups)
dfn, dfd = 2, 9
ax.plot(x, st.f.pdf(x, dfn, dfd), 'red', lw=2, label='F distribution pdf, dfn = {} and dfd = {}'.format(dfn, dfd))
dfn, dfd = 5, 50
ax.plot(x, st.f.pdf(x, dfn, dfd), 'orange', lw=2, label='F distribution pdf, dfn = {} and dfd = {}'.format(dfn, dfd))
dfn, dfd = 10, 100
ax.plot(x, st.f.pdf(x, dfn, dfd), 'blue', lw=2, label='F distribution pdf, dfn = {} and dfd = {}'.format(dfn, dfd))
dfn, dfd = 20, 200
ax.plot(x, st.f.pdf(x, dfn, dfd), 'green', lw=2, label='F distribution pdf, dfn = {} and dfd = {}'.format(dfn, dfd))
plt.legend()
plt.show()
s = np.array([15, 12, 14, 11])
i = np.array([39, 45, 48, 60])
k = np.array([65, 45, 32, 38])
n = np.array([[len(s)], [len(i)], [len(k)]])
m = np.array([[s.mean()], [i.mean()], [k.mean()]])
m_G = m.mean()
print(m)
print(m_G)
SS_between = np.dot(n.T, (m - m_G)**2)
print(SS_between)
data = np.array([s, i, k])
SS_within = np.sum((data - m)**2)
print(SS_within)
K = 3
df_between = K - 1
df_within = n.sum() - K
print(df_between, df_within)
ms_between = SS_between / df_between
ms_within = SS_within / df_within
print(ms_between, ms_within)
F = ms_between / ms_within
print("F statistic = {:.4f}".format(F[0, 0]))
confidence = .95
f_critical = st.f.ppf(confidence, df_between, df_within)
print("The F critical value for confidence = {:.2f} is: {:.3f}".format(confidence, f_critical))
pvalue = 1 - st.f.cdf(F, df_between, df_within)
print("The p value is: {:.4f}".format(pvalue[0, 0]))
print("Thus we reject the null hypothesis that the 3 means are equal")
###Output
The p value is: 0.0012
Thus we reject the null hypothesis that the 3 means are equal
###Markdown
Alternative calculation using `scipy.stats.f_oneway()`
###Code
print(st.f_oneway(s, i, k))
###Output
F_onewayResult(statistic=15.716937354988401, pvalue=0.0011580762838382535)
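###Markdown
The docstring quoted in Lesson 12 suggests the Kruskal-Wallis H-test when the normality/equal-variance assumptions are doubtful; on the same three samples it is a one-liner (an added cross-check, not part of the original lesson):
###Code
print(st.kruskal(s, i, k))
###Output
_____no_output_____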
###Markdown
Problem Set 12 Problem 1
###Code
s = np.array([8, 7, 10, 6, 9])
i = np.array([4, 6, 7, 4, 9])
k = np.array([4, 4, 7, 2, 3])
n = np.array([[len(s)], [len(i)], [len(k)]])
m = np.array([[s.mean()], [i.mean()], [k.mean()]])
m_G = m.mean()
print(m)
print(m_G)
SS_between = np.dot(n.T, (m - m_G)**2)
print(SS_between)
data = np.array([s, i, k])
SS_within = np.sum((data - m)**2)
print(SS_within)
K = 3
df_between = K - 1
df_within = n.sum() - K
print(df_between, df_within)
ms_between = SS_between / df_between
ms_within = SS_within / df_within
print(ms_between, ms_within)
F = ms_between / ms_within
print("F statistic = {:.4f}".format(F[0, 0]))
confidence = .95
f_critical = st.f.ppf(confidence, df_between, df_within)
print("The F critical value for confidence = {:.2f} is: {:.3f}".format(confidence, f_critical))
pvalue = 1 - st.f.cdf(F, df_between, df_within)
print("The p value is: {:.4f}".format(pvalue[0, 0]))
print("Thus we reject the null hypothesis that the 3 means are equal")
n2 = SS_between / (SS_between + SS_within)
print(n2)
from statsmodels.stats.libqsturng import qsturng  # studentized range quantile
q = qsturng(.95, K, df_within)
print("Studentized range statistic = {:.3f}".format(q))
HSD = q * np.sqrt(ms_within / n[0, 0])
print("Tukey's HSD = {:.3f}".format(HSD))
###Output
Tukey's HSD = 3.155
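###Markdown
With the HSD in hand, any pair of group means whose absolute difference exceeds it differs significantly; a minimal check using the means `m` and `HSD` computed above:
###Code
from itertools import combinations
group_names = ['s', 'i', 'k']
for (n1, m1), (n2, m2) in combinations(zip(group_names, m[:, 0]), 2):
    diff = abs(m1 - m2)
    verdict = 'significant' if diff > HSD else 'not significant'
    print("{} vs {}: |diff| = {:.3f} -> {}".format(n1, n2, diff, verdict))
###Output
_____no_output_____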
###Markdown
Lesson 14 Problem 1
###Code
s = np.array([2, 3, 4])
i = np.array([5, 6, 7])
k = np.array([8, 9, 10])
n = np.array([[len(s)], [len(i)], [len(k)]])
m = np.array([[s.mean()], [i.mean()], [k.mean()]])
m_G = m.mean()
print(m)
print(m_G)
SS_between = np.dot(n.T, (m - m_G)**2)
print(SS_between)
data = np.array([s, i, k])
SS_within = np.sum((data - m)**2)
print(SS_within)
K = 3
df_between = K - 1
df_within = n.sum() - K
print(df_between, df_within)
ms_between = SS_between / df_between
ms_within = SS_within / df_within
print(ms_between, ms_within)
F = ms_between / ms_within
print("F statistic = {:.4f}".format(F[0, 0]))
confidence = .95
f_critical = st.f.ppf(confidence, df_between, df_within)
print("The F critical value for confidence = {:.2f} is: {:.3f}".format(confidence, f_critical))
###Output
The F critical value for confidence = 0.95 is: 5.143
###Markdown
Tukey's HSD
###Code
q = qsturng(.95, K, df_within)
print("Studentized range statistic = {:.3f}".format(q))
HSD = q * np.sqrt(ms_within / n[0, 0])
print("Tukey's HSD = {:.3f}".format(HSD))
from statsmodels.stats.multicomp import MultiComparison, pairwise_tukeyhsd
result = MultiComparison(data.flatten(), np.array(['i']*3 + ['j']*3 + ['k']*3)).tukeyhsd(alpha=0.05)
print(result)
confidence = .95
f_critical = st.f.ppf(confidence, 3, 184)
print("The F critical value for confidence = {:.2f} is: {:.3f}".format(confidence, f_critical))
confidence = .99
f_critical = st.f.ppf(confidence, 3, 184)
print("The F critical value for confidence = {:.2f} is: {:.3f}".format(confidence, f_critical))
pairwise_tukeyhsd
###Output
_____no_output_____
###Markdown
Problem 2
###Code
placebo = np.array([1.5, 1.3, 1.8, 1.3, 1.6])
drug1 = np.array([1.2, 1.6, 1.7, 1.9])
drug2 = np.array([2.0, 1.4, 1.5, 1.5, 1.8, 1.7, 1.4])
drug3 = np.array([2.9, 3.1, 2.8, 2.7])
n = np.array([[len(placebo)], [len(drug1)], [len(drug2)], [len(drug3)]])
m = np.array([[placebo.mean()], [drug1.mean()], [drug2.mean()], [drug3.mean()]])
m_G = (placebo.sum() + drug1.sum() + drug2.sum() + drug3.sum()) / n.sum()
print(m)
print(m_G)
SS_between = np.dot(n.T, (m - m_G)**2)[0, 0]
print(SS_between)
s1 = ((placebo - m[0, 0])**2).sum()
s2 = ((drug1 - m[1, 0])**2).sum()
s3 = ((drug2 - m[2, 0])**2).sum()
s4 = ((drug3 - m[3, 0])**2).sum()
SS_within = np.sum([s1, s2, s3, s4])
print(SS_within)
K = 4
df_between = K - 1
df_within = n.sum() - K
print(df_between, df_within)
ms_between = SS_between / df_between
ms_within = SS_within / df_within
print(ms_between, ms_within)
F = ms_between / ms_within
print("F statistic = {:.4f}".format(F))
n2 = SS_between / (SS_between + SS_within)
print(n2)
pvalue = 1 - st.f.cdf(F, df_between, df_within)
print("The p value is: {:.10f}".format(pvalue))
print("Thus we reject the null hypothesis that all means are equal")
###Output
The p value is: 0.0000003072
Thus we reject the null hypothesis that all means are equal
###Markdown
Alternative calculation using `scipy.stats.f_oneway()`
###Code
print(st.f_oneway(placebo, drug1, drug2, drug3))
###Output
F_onewayResult(statistic=34.76212444824149, pvalue=3.072132691573616e-07)
###Markdown
Problem Set 15
###Code
st.f.ppf(.95, 2, 30)
st.f.ppf(.95, 3, 15)
st.f.ppf(.95, 1, 30)
st.f.ppf(.95, 2, 50)
n = np.array([[6], [5], [7]])
m = np.array([[-10], [12], [0.2]])
m_G = 1.4/18
print(m)
print(m_G)
SS_between = np.dot(n.T, (m - m_G)**2)
print(SS_between)
K = 3
df_between = K - 1
df_within = n.sum() - K
print(df_between, df_within)
SS_within = 80 + 50 + 3.48
ms_between = SS_between[0, 0] / df_between
ms_within = SS_within / df_within
print(ms_between, ms_within)
F = ms_between / ms_within
print("F statistic = {:.4f}".format(F))
confidence = .95
f_critical = st.f.ppf(confidence, df_between, df_within)
print("The F critical value for confidence = {:.2f} is: {:.3f}".format(confidence, f_critical))
n2 = SS_between / (SS_between + SS_within)
print(n2)
###Output
[[0.90817604]]
|
nbs/8.5_codexplainer.d2v_vectorization.ipynb | ###Markdown
d2v_vectorization> Use doc2vec models to get distributed representations (embedding vectors) for source code> @Alvaro 15 April 2021 Note: the Doc2Vec model is not trained here, just loaded and used through gensim
###Code
# export
# utils
import logging
from abc import ABC, abstractmethod
from datetime import datetime
from pathlib import Path
from typing import Any, Optional

import gensim
import numpy as np
import pandas as pd
import sentencepiece as spm
from tokenizers import Tokenizer

def check_file_existence(path) -> bool:
path = Path(path)
if not path.exists():
logging.error('Provided file cannot be found.')
return False
return True
# export
def configure_dirs(base_path: str, config_name: str, dataset_name: str) -> str:
"""
Performs configuration of directories for storing vectors
:param base_path:
:param config_name:
:param dataset_name:
:return: Full configuration path
"""
base_path = Path(base_path)
base_path.mkdir(exist_ok=True)
full_path = base_path / config_name
full_path.mkdir(exist_ok=True)
full_path = full_path / dataset_name
full_path.mkdir(exist_ok=True)
return str(full_path)
###Output
_____no_output_____
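###Markdown
A minimal usage sketch of the directory helper above (the path and names are illustrative, not from the dataset):
###Code
# Creates <base>/<config>/<dataset> if missing and returns the full path
print(configure_dirs('/tmp/vecs', 'human_trn', 'sample-set'))
###Output
_____no_output_____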
###Markdown
Vectorizer classes Vectorizer class is defined abstract in order to provide alternatives for tokenization (SentencePiece and HuggingFace's Tokenizers)
###Code
# export
class Doc2VecVectorizer(ABC):
def __init__(self, tkzr_path:str, d2v_path: str, tokenizer: Optional[Any]=None):
"""
Default constructor for Vectorizer class
"""
self.tkzr_path = tkzr_path
self.d2v_path = d2v_path
self._load_doc2vec_model(d2v_path)
if tokenizer is None:
self._load_tokenizer_model(self.tkzr_path)
else:
self.tokenizer = tokenizer
def tokenize_df(self, df: pd.DataFrame, code_column: str) -> pd.DataFrame:
"""
Performs tokenization of a Dataframe
:param df: DataFrame containing code
:param code_column: Str indicating column name of code data
:return: Tokenized DataFrame
"""
return self.tokenizer.tokenize_df(df, code_column)
@abstractmethod
def _load_tokenizer_model(self, model_path: str):
pass
def _load_doc2vec_model(self, model_path: str):
"""
:param model_path: Path to the model file
:return: Gensim Doc2Vec model (corresponding to the loaded model)
"""
if not check_file_existence(model_path):
msg = 'Doc2vec model could no be loaded'
logging.error('Doc2vec model could no be loaded')
raise Exception(msg)
model = gensim.models.Doc2Vec.load(model_path)
self.d2v_model = model
def infer_d2v(self, df: pd.DataFrame, tokenized_column: str, out_path: str,
config_name: str, sample_set_name: str,
perform_tokenization: Optional[bool]=False,
steps: Optional[int]=200) -> tuple:
"""
Performs vectorization via Doc2Vec model
:param df: Pandas DataFrame containing source code
:param tokenized_column: Column name of the column corresponding to source code tokenized
with the appropriate implementation
:param out_path: String indicating the base location for storing vectors
:param config_name: String indicating the model from which the samples came from
:param sample_set_name: String indicating the base name for identifying the set of
samples being processed
:param perform_tokenization: Bool indicating whether tokenization is required or not
(input df is previously tokenized or not)
        :param steps: Number of epochs for the doc2vec inference call
:return: Tuple containing (idx of the input DF, obtained vectors)
"""
tokenized_df = df.copy()
if perform_tokenization:
tokenized_df[tokenized_column] = self.tokenizer.tokenize_df(tokenized_df, 'code')
        inferred_vecs = np.array([self.d2v_model.infer_vector(tok_snippet, steps=steps) \
for tok_snippet in tokenized_df[tokenized_column].values])
indices = np.array(df.index)
dest_path = configure_dirs(out_path, config_name, sample_set_name)
now = datetime.now()
ts = str(datetime.timestamp(now))
file_name = f"{dest_path}/{self.tok_name}-{ts}"
np.save(f"{file_name}-idx", indices)
np.save(f"{file_name}-ft_vecs", inferred_vecs)
return indices, inferred_vecs
# export
class Doc2VecVectorizerSP(Doc2VecVectorizer):
"""
Class to perform vectorization via Doc2Vec model
leveraging SentencePiece to tokenizer sequences.
"""
def __init__(self, sp_path: str, d2v_path: str, tokenizer: Optional[Any]=None):
"""
:param sp_path: Path to the SentencePiece saved model
:param d2v_path: Path to the Doc2Vec saved model
"""
super().__init__(sp_path, d2v_path, tokenizer)
self.tok_name = "sp"
def _load_tokenizer_model(self, model_path: str):
"""
Loads the sentence piece model stored in the specified path
:param model_path: Path to the model file
:return: SentencePieceProcessor object (corresponding to loaded model)
"""
if not check_file_existence(model_path):
msg = 'Sentence piece model could no be loaded'
logging.error(msg)
raise Exception(msg)
sp_processor = spm.SentencePieceProcessor()
sp_processor.load(model_path)
self.tokenizer = sp_processor
# export
class Doc2VecVectorizerHF(Doc2VecVectorizer):
"""
Class to perform vectorization via Doc2Vec model
leveraging HF's Tokenizer
"""
def __init__(self, tkzr_path: str, d2v_path: str, tokenizer: Optional[Any]=None):
"""
:param tkzr_path: Path to the HF Tokenizer saved model
:param d2v_path: Path to the Doc2Vec saved model
"""
super().__init__(tkzr_path, d2v_path, tokenizer)
self.tok_name = "hf"
def _load_tokenizer_model(self, path: str) -> Tokenizer:
"""
Function to load a saved HuggingFace tokenizer
:param path: Path containing the tokenizer file
:return:
"""
if not check_file_existence(path):
msg = 'HuggingFace tokenizer could no be loaded.'
logging.error(msg)
raise Exception(msg)
self.tokenizer = Tokenizer.from_file(path)
###Output
_____no_output_____
###Markdown
Load Searchnet data
###Code
java_df = pd.read_csv("/tf/main/dvc-ds4se/code/searchnet/[codesearchnet-java-1597073966.81902].csv", header=0, index_col=0, sep='~')
java_df.head()
java_samples = java_df.sample(10)
java_samples.head()
np.array(java_samples.index)
###Output
_____no_output_____
###Markdown
Parameterization
###Code
params = {
"bpe32k_path": "/tf/main/dvc-ds4se/models/bpe/sentencepiece/deprecated/java_bpe_32k.model",
"doc2vec_java_path": "/tf/main/dvc-ds4se/models/pv/bpe8k/[doc2vec-Java-PVDBOW-500-20E-8k-1594569414.336389].model",
"hf_tokenizer": "/tf/main/nbs/tokenizer.json",
"vectors_storage_path": "/tf/main/dvc-ds4se/results/d2v_vectors"
}
###Output
_____no_output_____
###Markdown
Configure directories to store obtained vectors Test vectorization with Doc2Vec (based on SentencePiece)
###Code
sp_tokenizer = SPTokenizer(params['bpe32k_path'])
vectorizer = Doc2VecVectorizerSP(params['bpe32k_path'], params["doc2vec_java_path"], tokenizer=sp_tokenizer)
tokenized_df = vectorizer.tokenize_df(java_samples, 'code')
tokenized_df
indices, vectors = vectorizer.infer_d2v(java_samples, 'bpe32k-tokens', params["vectors_storage_path"],
"human_trn", "10-sample-20052021", perform_tokenization=True)
indices
vectors
###Output
_____no_output_____
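###Markdown
A quick way to sanity-check the inferred embeddings is to compare two snippets directly with cosine similarity (an added check; numpy is imported at the top of this notebook):
###Code
def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_sim(vectors[0], vectors[1]))
###Output
_____no_output_____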
###Markdown
Test vectorization with Doc2Vec (based on HuggingFace's Tokenizer)
###Code
hf_tokenizer = HFTokenizer(params['hf_tokenizer'])
hf_vectorizer = Doc2VecVectorizerHF(params['hf_tokenizer'], params["doc2vec_java_path
"], tokenizer=hf_tokenizer)
tokenized_df = hf_vectorizer.tokenize_df(java_samples, 'code')
tokenized_df
indices, vectors = hf_vectorizer.infer_d2v(java_samples, 'bpe-hf-tokens', params["vectors_storage_path"],
"human_trn", "10-sample-20052021", perform_tokenization=True)
indices
vectors
# TODO: Export code as module
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 0.0_mgmnt.prep.i.ipynb.
Converted 0.1_mgmnt.prep.conv.ipynb.
Converted 0.3_mgmnt.prep.bpe.ipynb.
Converted 0.6_mgmnt.prep.nltk.ipynb.
Converted 0.7_mgmnt.prep.files_mgmnt.ipynb.
Converted 0.8_mgmnt.prep.bpe_tokenization.ipynb.
Converted 1.0_exp.i.ipynb.
Converted 1.1_exp.info-[inspect].ipynb.
Converted 1.1_exp.info.ipynb.
Converted 1.2_exp.csnc.ipynb.
Converted 1.2_exp.gen.code.ipynb.
Converted 1.3_exp.csnc_python.ipynb.
Converted 10.0_utils.clusterization.ipynb.
Converted 10.1_utils.visualization.ipynb.
Converted 2.0_repr.codebert.ipynb.
Converted 2.0_repr.i.ipynb.
Converted 2.1_repr.codeberta.ipynb.
Converted 2.1_repr.roberta.train.ipynb.
Converted 2.2_repr.roberta.eval.ipynb.
Converted 2.3_repr.word2vec.train.ipynb.
Converted 2.6_repr.word2vec.eval.ipynb.
Converted 2.7_repr.distmetrics.ipynb.
Converted 2.8_repr.sentence_transformers.ipynb.
Converted 3.1_mining.unsupervised.traceability.eda.ipynb.
Converted 3.2_mining.unsupervised.eda.traceability.d2v.ipynb.
This cell doesn't have an export destination and was ignored:
e
Converted 3.2_mining.unsupervised.mutual_information.traceability.approach.sacp-w2v.ipynb.
This cell doesn't have an export destination and was ignored:
e
Converted 3.2_mining.unsupervised.mutual_information.traceability.approach.sacp.w2v.ipynb.
Converted 3.2_mutual_information_theory.eval.ipynb.
Converted 3.4_facade.ipynb.
Converted 4.0_mining.ir.ipynb.
Converted 5.0_experiment.mining.ir.unsupervised.d2v.ipynb.
Converted 5.0_experiment.mining.ir.unsupervised.w2v-exp4.ipynb.
Converted 5.0_experiment.mining.ir.unsupervised.w2v-exp5.ipynb.
Converted 5.0_experiment.mining.ir.unsupervised.w2v-exp6.ipynb.
Converted 5.0_experiment.mining.ir.unsupervised.w2v.ipynb.
Converted 6.0_desc.stats.ipynb.
Converted 6.0_eval.mining.ir.unsupervised.x2v.ipynb.
Converted 6.1_desc.metrics.java.ipynb.
Converted 6.1_desc.metrics.main.ipynb.
Converted 6.1_desc.metrics.se.ipynb.
Converted 6.2_desc.metrics.java.ipynb.
Converted 6.2_desc.metrics.main.ipynb.
Converted 7.0_inf.i.ipynb.
Converted 7.1_inf.bayesian.ipynb.
Converted 7.2_inf.causal.ipynb.
Converted 7.3_statistical_analysis.ipynb.
Converted 8.0_interpretability.i.ipynb.
Converted 8.1_interpretability.error_checker.ipynb.
Converted 8.2_interpretability.metrics_python.ipynb.
Converted 8.3_interpretability.metrics_java.ipynb.
Converted 8.4_interpretability.metrics_example.ipynb.
Converted 8.5_interpretability.d2v_vectorization.ipynb.
Converted 8.6_interpretability.prototypes_criticisms.ipynb.
Converted 8.7_interpretability.info_theory_processing.ipynb.
Converted 9.0_ds.causality.eval.traceability.ipynb.
Converted 9.0_ds.description.eval.traceability.ipynb.
Converted 9.0_ds.prediction.eval.traceability.ipynb.
Converted index.ipynb.
|
day_10.ipynb | ###Markdown
Advent of Code - Day 10
-----
[The Stars Align](https://adventofcode.com/2018/day/10)
###Code
reset -fs
from utilities import load_input
data = load_input(day=10)
###Output
_____no_output_____
###Markdown
Part I
-----
###Code
data_test = """position=< 9, 1> velocity=< 0, 2>
position=< 7, 0> velocity=<-1, 0>
position=< 3, -2> velocity=<-1, 1>
position=< 6, 10> velocity=<-2, -1>
position=< 2, -4> velocity=< 2, 2>
position=<-6, 10> velocity=< 2, -2>
position=< 1, 8> velocity=< 1, -1>
position=< 1, 7> velocity=< 1, 0>
position=<-3, 11> velocity=< 1, -2>
position=< 7, 6> velocity=<-1, -1>
position=<-2, 3> velocity=< 1, 0>
position=<-4, 3> velocity=< 2, 0>
position=<10, -3> velocity=<-1, 1>
position=< 5, 11> velocity=< 1, -2>
position=< 4, 7> velocity=< 0, -1>
position=< 8, -2> velocity=< 0, 1>
position=<15, 0> velocity=<-2, 0>
position=< 1, 6> velocity=< 1, 0>
position=< 8, 9> velocity=< 0, -1>
position=< 3, 3> velocity=<-1, 1>
position=< 0, 5> velocity=< 0, -1>
position=<-2, 2> velocity=< 2, 0>
position=< 5, -2> velocity=< 1, 2>
position=< 1, 4> velocity=< 2, 1>
position=<-2, 7> velocity=< 2, -2>
position=< 3, 6> velocity=<-1, -1>
position=< 5, 0> velocity=< 1, 0>
position=<-6, 0> velocity=< 2, 0>
position=< 5, 9> velocity=< 1, -2>
position=<14, 7> velocity=<-2, 0>
position=<-3, 6> velocity=< 2, -1>""".split("\n")
# Convert raw data into integers
x_points, y_points = [], []
for row in data_test:
_, x_y, h_v = row.split('<')
x, y = x_y.split(',')
y, _ = y.split('>')
x = int(x)
y = int(y)
# h_v was never split into components above, so the commented lines below
# would not run as written; see the regex-based parse after this cell.
# h = int(h[:-1])
# v = int(v[:-1])
x_points.append(x)
y_points.append(y)
y_points
y
y = int(y)
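# A more robust parse (a sketch, not in the original notebook): pull all four
# integers per line with a regex, so the velocity components are captured too.
import re
points = [tuple(map(int, re.findall(r'-?\d+', row))) for row in data_test]
points[:3]  # [(x, y, vx, vy), ...]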
from collections import defaultdict
def solve_day_5_part_1(data: str) -> int:
letter_counts = defaultdict(int)
data_next_round = data
while True:
data = data_next_round
for i, _ in enumerate(data[:-1]):
if abs(ord(data[i]) - ord(data[i+1])) == 32: # Difference in ASCII encoding
data_next_round = data[:i]+data[i+2:]
letter_counts[data[i]+data[i+1]] += 1
break
else:
return len(data_next_round), max(letter_counts.items(), key=lambda x: x[1])[0]
assert solve_day_5_part_1("dabAcCaCBAcCcaDA")[0] == 10
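# A faster stack-based variant (a sketch for comparison; the function above
# rescans the string after every reaction, this one reacts in a single O(n) pass):
def react(polymer: str) -> int:
    stack = []
    for unit in polymer:
        if stack and stack[-1] != unit and stack[-1].lower() == unit.lower():
            stack.pop()  # opposite-polarity pair annihilates
        else:
            stack.append(unit)
    return len(stack)

assert react("dabAcCaCBAcCcaDA") == 10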
solution, letters_to_remove = solve_day_5_part_1(data)
assert solution == 10584
print(f"Solution Part One: {solution}")
###Output
Solution Part One: 10584
###Markdown
Part II
-----
###Code
# letters_to_remove
'xX'
def solve_day_5_part_2(data: str, letters_to_remove: str) -> int:
data = data.replace(letters_to_remove[0], "").replace(letters_to_remove[1], "")
return solve_day_5_part_1(data)[0]
assert solve_day_5_part_2("dabAcCaCBAcCcaDA", 'Cc') == 4
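# Standard brute-force for Part Two (a sketch; the original instead reuses the
# most-reacted pair found in Part One): try removing every letter pair and keep
# the shortest fully-reacted polymer, using the stack-based `react` above.
import string
min(react(data.replace(c, "").replace(c.upper(), "")) for c in string.ascii_lowercase)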
10140 # incorrect: too high
solution = solve_day_5_part_2(data, letters_to_remove)
# assert solution == 10584
print(f"Solution Part One: {solution}")
###Output
Solution Part Two: 10140
|
Demo/Soiling calcs demo.ipynb | ###Markdown
Import modules and set things up
To run this yourself, first download the performance data from: http://dkasolarcentre.com.au/source/alice-springs/dka-m5-b-phase and save it in the same directory as this notebook
###Code
import pv_soiling as soiling
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pvlib
import matplotlib
matplotlib.rcParams.update({'font.size': 12,
'figure.figsize': [4.5, 3],
'lines.markeredgewidth': 0,
'lines.markersize': 2
})
###Output
_____no_output_____
###Markdown
Read in the performance data and set things up
###Code
# Data from DK solar center http://dkasolarcentre.com.au/historical-data
df = pd.read_csv('84-Site_12-BP-Solar.csv')
try:
df.columns = [col.decode('utf-8') for col in df.columns]
except AttributeError:
pass # Python 3 strings are already unicode literals
df = df.rename(columns = {
u'12 BP Solar - Active Power (kW)':'power',
u'12 BP Solar - Wind Speed (m/s)': 'wind',
u'12 BP Solar - Weather Temperature Celsius (\xb0C)': 'Tamb',
u'12 BP Solar - Global Horizontal Radiation (W/m\xb2)': 'ghi',
u'12 BP Solar - Diffuse Horizontal Radiation (W/m\xb2)': 'dhi',
u'12 BP Solar - Weather Daily Rainfall (mm)':'precip'
})
df.index = pd.to_datetime(df.Timestamp)
df.index = df.index.tz_localize('Australia/North')
# Metadata
lat = -23.762028
lon = 133.874886
azimuth = 0
tilt = 20
pdc = 5.1
###Output
_____no_output_____
###Markdown
Model the PV system
###Code
# calculate the POA irradiance
sky = pvlib.irradiance.isotropic(tilt, df.dhi)
sun = pvlib.solarposition.get_solarposition(df.index, lat, lon)
df['dni'] = (df.ghi - df.dhi)/np.cos(np.deg2rad(sun.zenith))  # DNI from the closure relation: GHI = DHI + DNI*cos(zenith)
beam = pvlib.irradiance.beam_component(tilt, azimuth, sun.zenith, sun.azimuth, df.dni)
df['poa'] = beam + sky
fig, ax = plt.subplots()
ax.plot(df.poa, df.power, 'o', alpha = 0.01)
ax.set_xlim(0,1500)
ax.set_ylim(0, 5.5)
ax.set_ylabel('kW AC')
ax.set_xlabel('POA (W/m$^2$)');
# Calculate temperature
df_temp = pvlib.pvsystem.sapm_celltemp(df.poa, df.wind, df.Tamb, model = 'open_rack_cell_polymerback')
df['Tcell'] = df_temp.temp_cell
# Run a PVWatts model for the dc performance
df['pvw_dc'] = pvlib.pvsystem.pvwatts_dc(df.poa, df.Tcell, pdc, -0.005)
# aggregate what we need for soiling daily
pm = df.power.resample('D').sum() / df.pvw_dc.resample('D').sum()  # daily performance index: measured energy / modeled energy
insol = df.poa.resample('D').sum()
precip = df.precip.resample('D').sum()
fig, ax = plt.subplots()
ax.plot(pm.index, pm, 'o')
ax.set_ylim(0,1)
fig.autofmt_xdate()
ax.set_ylabel('Performance Index');
###Output
_____no_output_____
###Markdown
Soiling calculations
###Code
# First create a performance metric dataframe
pm_frame = soiling.create_pm_frame(pm, insol, precip = precip)
# Then calculate a results frame summarizing the soiling intervals
results = pm_frame.calc_result_frame()
# Finally perform the monte carlo simulations for the different assumptions
soiling_ratio_realizations = results.calc_monte(1000)
# show the median and confidence interval for irradiance weighted soiling ratio
np.percentile(soiling_ratio_realizations, [2.5, 50, 97.5])
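# Optional readability step (a sketch using plain numpy only; no assumptions
# about the soiling module's API): label the confidence interval explicitly.
p_lo, p_med, p_hi = np.percentile(soiling_ratio_realizations, [2.5, 50, 97.5])
print("Insolation-weighted soiling ratio: {:.3f} (95% CI: {:.3f}-{:.3f})".format(p_med, p_lo, p_hi))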
###Output
_____no_output_____ |
5-sequence-models/week1/Dinosaur Island -- Character-level language model/Dinosaurus_Island_Character_level_language_model_final_v3a.ipynb | ###Markdown
Character level language model - Dinosaurus Island
Welcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely!
Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath!
By completing this assignment you will learn:
- How to store text data for processing using an RNN
- How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit
- How to build a character-level text generation recurrent neural network
- Why clipping the gradients is important
We will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment.
Updates
If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
List of updates
* Sort and print `chars` list of characters.
* Import and use pretty print
* `clip`:
 - Additional details on why we need to use the "out" parameter.
 - Modified for loop to have students fill in the correct items to loop through.
 - Added a test case to check for hard-coding error.
* `sample`
 - additional hints added to steps 1,2,3,4.
 - "Using 2D arrays instead of 1D arrays".
 - explanation of numpy.ravel().
 - fixed expected output.
 - clarified comments in the code.
* "training the model"
 - Replaced the sample code with explanations for how to set the index, X and Y (for a better learning experience).
* Spelling, grammar and wording corrections.
###Code
import numpy as np
from utils import *
import random
import pprint
###Output
_____no_output_____
###Markdown
1 - Problem Statement
1.1 - Dataset and Preprocessing
Run the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
###Code
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
###Output
There are 19909 total characters and 27 unique characters in your data.
###Markdown
* The characters are a-z (26 characters) plus the "\n" (or newline character).
* In this assignment, the newline character "\n" plays a role similar to the `<EOS>` (or "End of sentence") token we had discussed in lecture.
 - Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence.
* `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26.
* `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character.
 - This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer.
###Code
chars = sorted(chars)
print(chars)
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(ix_to_char)
###Output
{ 0: '\n',
1: 'a',
2: 'b',
3: 'c',
4: 'd',
5: 'e',
6: 'f',
7: 'g',
8: 'h',
9: 'i',
10: 'j',
11: 'k',
12: 'l',
13: 'm',
14: 'n',
15: 'o',
16: 'p',
17: 'q',
18: 'r',
19: 's',
20: 't',
21: 'u',
22: 'v',
23: 'w',
24: 'x',
25: 'y',
26: 'z'}
###Markdown
1.2 - Overview of the model
Your model will have the following structure:
- Initialize parameters
- Run the optimization loop
 - Forward propagation to compute the loss function
 - Backward propagation to compute the gradients with respect to the loss function
 - Clip the gradients to avoid exploding gradients
 - Using the gradients, update your parameters with the gradient descent update rule.
- Return the learned parameters
**Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a Recurrent Neural Network - Step by Step".
* At each time-step, the RNN tries to predict what is the next character given the previous characters.
* The dataset $\mathbf{X} = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set.
* $\mathbf{Y} = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is the same list of characters but shifted one character forward.
* At every time-step $t$, $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$.
2 - Building blocks of the model
In this part, you will build two important blocks of the overall model:
- Gradient clipping: to avoid exploding gradients
- Sampling: a technique used to generate characters
You will then apply these two functions to build the model.
2.1 - Clipping the gradients in the optimization loop
In this section you will implement the `clip` function that you will call inside of your optimization loop.
Exploding gradients
* When gradients are very large, they're called "exploding gradients."
* Exploding gradients make the training process more difficult, because the updates may be so large that they "overshoot" the optimal values during back propagation.
Recall that your overall loop structure usually consists of:
* forward pass,
* cost computation,
* backward pass,
* parameter update.
Before updating the parameters, you will perform gradient clipping to make sure that your gradients are not "exploding."
Gradient clipping
In the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed.
* There are different ways to clip gradients.
* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N].
* For example, if N=10:
 - The range is [-10, 10]
 - If any component of the gradient vector is greater than 10, it is set to 10.
 - If any component of the gradient vector is less than -10, it is set to -10.
 - If any components are between -10 and 10, they keep their original values.
**Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into "exploding gradient" problems.
**Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`.
* Your function takes in a maximum threshold and returns the clipped versions of the gradients.
* You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html).
 - You will need to use the argument "`out = ...`".
 - Using the "`out`" parameter allows you to update a variable "in-place".
 - If you don't use the "`out`" argument, the clipped values are stored in the variable "gradient" but do not update the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby`.
###Code
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWaa, dWax, dWya, db, dby]:
np.clip(gradient, -maxValue, maxValue, out=gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
# Test with a maxvalue of 10
maxValue = 10
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, maxValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
###Output
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
###Markdown
** Expected output:**
```Python
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
```
###Code
# Test with a maxValue of 5
maxValue = 5
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, maxValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
###Output
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
###Markdown
** Expected Output: **
```Python
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
```
2.2 - Sampling
Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:
**Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network sample one character at a time.
**Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:
- **Step 1**: Input the "dummy" vector of zeros $x^{\langle 1 \rangle} = \vec{0}$.
 - This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$
- **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:
hidden state: $$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t+1 \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$
activation: $$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$
prediction: $$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$
- Details about $\hat{y}^{\langle t+1 \rangle }$:
 - Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1).
 - $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character.
 - We have provided a `softmax()` function that you can use.
Additional Hints
- $x^{\langle 1 \rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.
- $a^{\langle 0 \rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and the number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the numbers need to match in order for the matrix multiplication $W_{aa}a^{\langle t \rangle}$ to work).
- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)
- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html)
Using 2D arrays instead of 1D arrays
* You may be wondering why we emphasize that $x^{\langle 1 \rangle}$ and $a^{\langle 0 \rangle}$ are 2D arrays and not 1D vectors.
* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with a 1D array.
* This becomes a problem when we add two arrays where we expected them to have the same shape.
* When two arrays with a different number of dimensions are added together, Python "broadcasts" one across the other.
* Here is some sample code that shows the difference between using a 1D and 2D array.
###Code
import numpy as np
matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2)
matrix2 = np.array([[0],[0],[0]]) # (3,1)
vector1D = np.array([1,1]) # (2,)
vector2D = np.array([[1],[1]]) # (2,1)
print("matrix1 \n", matrix1,"\n")
print("matrix2 \n", matrix2,"\n")
print("vector1D \n", vector1D,"\n")
print("vector2D \n", vector2D)
print("Multiply 2D and 1D arrays: result is a 1D array\n",
np.dot(matrix1,vector1D))
print("Multiply 2D and 2D arrays: result is a 2D array\n",
np.dot(matrix1,vector2D))
print("Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n",
"This is what we want here!\n",
np.dot(matrix1,vector2D) + matrix2)
print("Adding a (3,) vector to a (3 x 1) vector\n",
"broadcasts the 1D array across the second dimension\n",
"Not what we want here!\n",
np.dot(matrix1,vector1D) + matrix2
)
###Output
Adding a (3,) vector to a (3 x 1) vector
broadcasts the 1D array across the second dimension
Not what we want here!
[[2 4 6]
[2 4 6]
[2 4 6]]
###Markdown
- **Step 3**: Sampling:
 - Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter.
 - To make the results more interesting, we will use np.random.choice to select a next letter that is likely, but not always the same.
 - Sampling is the selection of a value from a group of values, where each value has a probability of being picked.
 - Sampling allows us to generate random sequences of values.
 - Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$.
 - This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability.
 - You can use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).
 Example of how to use `np.random.choice()`:
```python
np.random.seed(0)
probs = np.array([0.1, 0.0, 0.7, 0.2])
idx = np.random.choice([0, 1, 2, 3], p=probs)
```
 - This means that you will pick the index (`idx`) according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.
 - Note that the value that's set to `p` should be set to a 1D vector.
 - Also notice that $\hat{y}^{\langle t+1 \rangle}$, which is `y` in the code, is a 2D array.
Additional Hints
- [range](https://docs.python.org/3/library/functions.html#func-range)
- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.
```Python
arr = np.array([[1,2],[3,4]])
print("arr")
print(arr)
print("arr.ravel()")
print(arr.ravel())
```
Output:
```Python
arr
[[1 2]
 [3 4]]
arr.ravel()
[1 2 3 4]
```
- Note that `append` is an "in-place" operation. In other words, don't do this:
```Python
fun_hobbies = fun_hobbies.append('learning')  # Doesn't give you what you want
```
- **Step 4**: Update to $x^{\langle t \rangle }$
 - The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$.
 - You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction.
 - You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating that you have reached the end of the dinosaur name.
Additional Hints
- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero.
 - You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html)
 - Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html)
###Code
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the a zero vector x that can be used as the one-hot vector
# representing the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size, 1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# idx is the index of the one-hot vector x that is set to 1
# All other positions in x are zero.
# We will initialize idx to -1
idx = -1
# Loop over time-steps t. At each time-step:
# sample a character from a probability distribution
# and append its index (`idx`) to the list "indices".
# We'll stop if we reach 50 characters
# (which should be very unlikely with a well trained model).
# Setting the maximum number of characters helps with debugging and prevents infinite loops.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.dot(Wax, x)+np.dot(Waa, a_prev)+b)
z = np.dot(Wya, a)+by
y = softmax(z)
# for grading purposes
np.random.seed(counter+seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
# (see additional hints above)
idx = np.random.choice(range(vocab_size), p=np.squeeze(y))
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input x with one that corresponds to the sampled index `idx`.
# (see additional hints above)
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:\n", indices)
print("list of sampled characters:\n", [ix_to_char[i] for i in indices])
###Output
Sampling:
list of sampled indices:
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
###Markdown
** Expected output:**
```Python
Sampling:
list of sampled indices:
 [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
 ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
```
* Please note that over time, if there are updates to the back-end of the Coursera platform (that may update the version of numpy), the actual list of sampled indices and sampled characters may change.
* If you follow the instructions given above and get an output without errors, it's possible the routine is correct even if your output doesn't match the expected output. Submit your assignment to the grader to verify its correctness.
3 - Building the language model
It is time to build the character-level language model for text generation.
3.1 - Gradient descent
* In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients).
* You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent.
As a reminder, here are the steps of a common optimization loop for an RNN:
- Forward propagate through the RNN to compute the loss
- Backward propagate through time to compute the gradients of the loss with respect to the parameters
- Clip the gradients
- Update the parameters using gradient descent
**Exercise**: Implement the optimization process (one step of stochastic gradient descent). The following functions are provided:
```python
def rnn_forward(X, Y, a_prev, parameters):
    """ Performs the forward propagation through the RNN and computes the cross-entropy loss.
    It returns the loss' value as well as a "cache" storing values to be used in backpropagation."""
    ....
    return loss, cache

def rnn_backward(X, Y, parameters, cache):
    """ Performs the backward propagation through time to compute the gradients of the loss
    with respect to the parameters. It returns also all the hidden states."""
    ...
    return gradients, a

def update_parameters(parameters, gradients, learning_rate):
    """ Updates parameters using the Gradient Descent Update Rule."""
    ...
    return parameters
```
Recall that you previously implemented the `clip` function:
```Python
def clip(gradients, maxValue):
    """Clips the gradients' values between minimum and maximum."""
    ...
    return gradients
```
parameters
* Note that the weights and biases inside the `parameters` dictionary are being updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes to this dictionary are making changes to the `parameters` dictionary even when accessed outside of the function.
* Python dictionaries and lists are "pass by reference", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary).
###Code
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
###Output
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
###Markdown
** Expected output:**
```Python
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
```
3.2 - Training the model
* Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example.
* Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing.
* Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order.
**Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:
Set the index `idx` into the list of examples
* Using the for-loop, walk through the shuffled list of dinosaur names in the list "examples".
* If there are 100 examples, and the for-loop increments the index to 100 onwards, think of how you would make the index cycle back to 0, so that we can continue feeding the examples into the model when j is 100, 101, etc.
* Hint: 101 divided by 100 is 1 with a remainder of 1.
* `%` is the modulus operator in python.
Extract a single example from the list of examples
* `single_example`: use the `idx` index that you set previously to get one word from the list of examples.
Convert a string into a list of characters: `single_example_chars`
* `single_example_chars`: A string is a list of characters.
* You can use a list comprehension (recommended over for-loops) to generate a list of characters.
```Python
str = 'I love learning'
list_of_chars = [c for c in str]
print(list_of_chars)
```
```
['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']
```
Convert list of characters to a list of integers: `single_example_ix`
* Create a list that contains the index numbers associated with each character.
* Use the dictionary `char_to_ix`
* You can combine this with the list comprehension that is used to get a list of characters from a string.
* This is a separate line of code below, to help learners clarify each step in the function.
Create the list of input characters: `X`
* `rnn_forward` uses the `None` value as a flag to set the input vector as a zero-vector.
* Prepend the `None` value in front of the list of input characters.
* There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']`
Get the integer representation of the newline character `ix_newline`
* `ix_newline`: The newline character signals the end of the dinosaur name.
 - get the integer representation of the newline character `'\n'`.
 - Use `char_to_ix`
Set the list of labels (integer representation of the characters): `Y`
* The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`.
 - For example, `Y[0]` contains the same value as `X[1]`
* The RNN should predict a newline at the last letter, so add ix_newline to the end of the labels.
 - Append the integer representation of the newline character to the end of `Y`.
 - Note that `append` is an in-place operation.
 - It might be easier for you to add two lists together.
###Code
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text (size of the vocabulary)
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
# Initialize the hidden state of your RNN
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Set the index `idx` (see instructions above)
idx = j % len(examples)
# Set the input X (see instructions above)
single_example = examples[idx]
# single_example_chars =
single_example_ix = [char_to_ix[c] for c in single_example]
X = [None] + single_example_ix
# Set the labels Y (see instructions above)
ix_newline = char_to_ix['\n']
Y = single_example_ix + [ix_newline]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, 0.01)
### END CODE HERE ###
# Use a latency trick to keep the loss smooth. It happens here to accelerate the training.
loss = smooth(loss, curr_loss)
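# (`smooth` comes from utils; presumably an exponential moving average of the
# per-example loss, e.g. loss = 0.999*loss + 0.001*curr_loss, which damps the
# noise of stochastic single-example updates.)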
# Every 2000 Iteration, generate "n" characters thanks to sample() to check if the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result (for grading purposes), increment the seed by one.
print('\n')
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
###Code
parameters = model(data, ix_to_char, char_to_ix)
###Output
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
Iteration: 2000, Loss: 27.884160
Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros
Iteration: 4000, Loss: 25.901815
Mivrosaurus
Inee
Ivtroplisaurus
Mbaaisaurus
Wusichisaurus
Cabaselachus
Toraperlethosdarenitochusthiamamumamaon
Iteration: 6000, Loss: 24.608779
Onwusceomosaurus
Lieeaerosaurus
Lxussaurus
Oma
Xusteonosaurus
Eeahosaurus
Toreonosaurus
Iteration: 8000, Loss: 24.070350
Onxusichepriuon
Kilabersaurus
Lutrodon
Omaaerosaurus
Xutrcheps
Edaksoje
Trodiktonus
Iteration: 10000, Loss: 23.844446
Onyusaurus
Klecalosaurus
Lustodon
Ola
Xusodonia
Eeaeosaurus
Troceosaurus
Iteration: 12000, Loss: 23.291971
Onyxosaurus
Kica
Lustrepiosaurus
Olaagrraiansaurus
Yuspangosaurus
Eealosaurus
Trognesaurus
Iteration: 14000, Loss: 23.382338
Meutromodromurus
Inda
Iutroinatorsaurus
Maca
Yusteratoptititan
Ca
Troclosaurus
Iteration: 16000, Loss: 23.255630
Meustolkanolus
Indabestacarospceryradwalosaurus
Justolopinaveraterasauracoptelalenyden
Maca
Yusocles
Daahosaurus
Trodon
Iteration: 18000, Loss: 22.905483
Phytronn
Meicanstolanthus
Mustrisaurus
Pegalosaurus
Yuskercis
Egalosaurus
Tromelosaurus
Iteration: 20000, Loss: 22.873854
Nlyushanerohyisaurus
Loga
Lustrhigosaurus
Nedalosaurus
Yuslangosaurus
Elagosaurus
Trrangosaurus
Iteration: 22000, Loss: 22.710545
Onyxromicoraurospareiosatrus
Liga
Mustoffankeugoptardoros
Ola
Yusodogongterosaurus
Ehaerona
Trododongxernochenhus
Iteration: 24000, Loss: 22.604827
Meustognathiterhucoplithaloptha
Jigaadosaurus
Kurrodon
Mecaistheansaurus
Yuromelosaurus
Eiaeropeeton
Troenathiteritaus
Iteration: 26000, Loss: 22.714486
Nhyxosaurus
Kola
Lvrosaurus
Necalosaurus
Yurolonlus
Ejakosaurus
Troindronykus
Iteration: 28000, Loss: 22.647640
Onyxosaurus
Loceahosaurus
Lustleonlonx
Olabasicachudrakhurgawamosaurus
Ytrojianiisaurus
Eladon
Tromacimathoshargicitan
Iteration: 30000, Loss: 22.598485
Oryuton
Locaaesaurus
Lustoendosaurus
Olaahus
Yusaurus
Ehadopldarshuellus
Troia
Iteration: 32000, Loss: 22.211861
Meutronlapsaurus
Kracallthcaps
Lustrathus
Macairugeanosaurus
Yusidoneraverataus
Eialosaurus
Troimaniathonsaurus
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
###Markdown
** Expected Output**
The output of your model may look different, but it will look something like this:
```Python
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
```
Conclusion
You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.
If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest!
This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name generator for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!
4 - Writing like Shakespeare
The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., a character appearing somewhere in a sequence can influence what a different character should be much, much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short. Let's become poets!
We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
###Output
Using TensorFlow backend.
###Markdown
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt asking you for an input (`<`40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
###Code
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
###Output
_____no_output_____ |
Unsupervised Learning/Unsupervised Learning Overview .ipynb | ###Markdown
Introduction
Yann LeCun said "if intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake and reinforcement learning would be the cherry on the cake" (he is a co-recipient of the 2018 ACM A.M. Turing Award for his work in deep learning).
Clustering falls under unsupervised learning techniques. One of the key features of unsupervised models is that they don't require labelled data. It feels like unsupervised learning models will be a key driver in the pursuit of general intelligence, although in truth all of the different areas will play their own unique part in the grand scheme of things. It's important to remember that in unsupervised learning the model learns and discovers trends from the data we provide.
Let's try comparing supervised and unsupervised learning to humans. Parents teaching their children is supervised learning. The child asks the parent, "what's that mum?" and the mother replies "A dog". Now, the fact that the child already knew that dogs and cats are different (two different groups), and that cats and dogs are different to insects, is unsupervised learning. We all have this innate ability to spot patterns or groups. So unsupervised learning provides us with an idea of the different groups/"clusters" that exist. In reality for humans, the label for each of those identified groups usually ends up coming from our friends, family and society.
Unsupervised Learning Theory
Typical Use Cases
- Clustering
Grouping instances together based on similarities. In society we have many experiences where we think new objects are similar to existing ones: "One is tall, one is short and one is ....". We know the correct word is tall or short, but what if this was X years ago, when we were cavemen? We may not have known what to call them, but we would instantly notice 3 different groups of people relative to the population. This technique is clustering, and it is being used to drive customer segmentation, recommender systems, search engines, image segmentation, semi-supervised learning, dimensionality reduction and more.
- Anomaly detection
Detecting anomalies, i.e. finding the items that are significantly different to the norm, can often surface the most interesting data. This can be used to detect defects, flag unique scenarios, and remove outliers.
- Density Estimation
Using this type of analysis to visualise the whole distribution of density. This distribution, the probability density function, shows how frequent certain instances are relative to the other scenarios.
Explaining some of the specific use cases
- Customer Segmentation
This means we can find groups of customers based on their purchases and behaviour on your websites. You could tailor your new products to hit the largest groups of customers, or provide new services/products to boost the smaller groups of customers. We are identifying markets from large datasets quickly; in the past this would have been very labour-intensive. This could also be used to recommend items based on what customers in the same group enjoy.
- Anomaly detection
Instances that have a significantly low affinity to all of the clusters are likely to be anomalies. You could find out, for example, if there was an unusual number of requests per second to your website.
- Dimensionality Reduction
Often problems are made complicated when they could be simplified. Good dimensionality reduction is when we simplify something and still reach the same answer/conclusion. We remove irrelevant features to save memory and process information more quickly.
The dimensionality reduction notebook shows an example of dimensionality reduction on scanned images of handwritten digits. These images can be intensive on memory, so we remove features or map them to the clusters they have the strongest affinity to, essentially replacing outliers with good substitutes.
- Semi-Supervised Learning
If we have a few labels, clustering can be used to determine the remaining labels. This technique can help label the remaining images quickly, to improve the performance of a supervised learning model.
- Search Engines
Search engines can let you search for images that are similar to a reference image. The clustering algorithm has found clusters for all the images in your database, so when the user searches with a reference image, we can return other images that are in the same cluster.
- Segment an image
Clustering pixels based on their colour, then replacing each pixel's colour with the mean colour of its cluster. This can help object detection and tracking systems, as it makes the different components making up an image clearer.
Contents:
KMeans - Theory - Clustering - Prediction - Clustering used for Preprocessing - Clustering used for Semi-Supervised Learning
DBSCAN - Theory - Clustering - Prediction (DBSCAN) - Technically DBSCAN outputs are a new feature, which is then used to help the prediction/classification model
Outcomes
Prediction (KMeans)
Some algorithms identify continuous regions of densely packed instances, while others look for instances centered around a particular point (known as a centroid). Let's imagine we are in the park and see plants of different species. The plants are similar but have distinct differences. In an earlier notebook we had a plant dataset with labels (features like "length, width, ..." and a label "Species A, Species B or Species C"). Now we have the same dataset without the labels. Using clustering, we can identify that there were 3 distinct species in the dataset. This is the same as a human saying "There are 3 different plants". (One algorithm achieves 145 out of 150 correct, i.e. only 3.33% wrong.) KMeans can cluster quickly and efficiently (few iterations). Let's finally begin training a KMeans algorithm.
Clustering as a Preprocessing Step (Dimensionality Reduction)
Right now 64 numbers compose only 1 image, so the classifier will be handling many features. Sometimes simplifying the number of features (reducing dimensionality) can lead to a better solution. Another notebook is available to learn about dimensionality reduction. We will use KMeans to transform each image into a set of numbers showing its distance from each of the centroids (the centers of each cluster). As it's a preprocessing step, new digits will be converted to distance values and passed into the trained classifier for a prediction.
1 Preparing the Data
1.1 Import Data
First we will bring the data into the notebook.
###Code
from sklearn.datasets import load_digits #sklearn contains datasets for ml
from sklearn.cluster import KMeans #KMeans is a Clustering algorithm (Unsupervised learning)
X_digits, y_digits = load_digits(return_X_y= True) #Nice, Scikit learn has this known dataset available to us saving us time finding an excel!
#There are other datasets available too; search the documentation if you want to experiment and learn with other well-explored datasets
#This technique in python of "name , name = " is known as tuple unpacking, we assigned information to more than one variable at once
###Output
_____no_output_____
###Markdown
1.2 Split the Data into Training and Testing sets
Now that we have the data in this environment, we want to split it into a training set and a test set. Cross validation is normally used on the training set to evaluate the performance of the model; the metrics are an average of the values from each cross validation fold. We want to make sure our model performs consistently on each fold, which shows us that our model has a good mix of examples in the data. Imagine 90% of our data for classifying between cats, dogs and foxes was of dogs: our model would perform better on dogs than on foxes and cats. The main objective is to prevent our model from under- or overfitting. There are ways to combat each of these situations, which we will see throughout these notebooks. This phenomenon happens when students revise for exams: we can overfit to mock papers, making us professional at answering questions of the same nature, but a small change in the question can overwhelm the student.
###Code
from sklearn.model_selection import train_test_split #Function used to split data into train/test sets. Even select the percentage to split by
X_train, X_test, y_train, y_test = train_test_split(X_digits , y_digits) #default 0.25, otherwise add argument test_size= number of your choice, random_state=42)
#train_test_split(X_digits, y_digits, test_size=0.33, random_state=42) is an example
###Output
_____no_output_____
###Markdown
2. Baseline model
2.1 Creating the baseline model
We will create a simple model with minimal tuning to have something to compare against. This gives us a reference point for our changes: how else would we know if we've made the situation better or worse?
###Code
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression() #Assigned the Model to a variable name
log_reg.fit(X_train, y_train) #The model has fit to the data (found what values to use to predict based on this data)
###Output
C:\Users\Viraj\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
C:\Users\Viraj\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\linear_model\logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
###Markdown
2.2 Scoring the Baseline Model
###Code
score = log_reg.score(X_test,y_test) #
value = ("Score = ") + (str(round(score,2)*100) + "%")
value
def scorer(X_TestSet, y_TestSet): #Perfect time to define a function since we will want to obtain the score again in the future, and we aren't writing all of that again.
value = log_reg.score(X_TestSet, y_TestSet) #It would be nicer to be more dynamic where we pick any model or metric name beforehand..
print ("Score = " + (str(round(score,2)*100) + "%"))
scorer(X_test, y_test)
###Output
Score = 97.0%
###Markdown
3. Creating our Model - Classifier with KMeans PreProcessing (Dimensionality Reduction) 3.1 Creating a pipeline That's a long title for the model, so let's break down what it actually does. This is a preprocessing step: preprocessing is anything that occurs before we pass the data into our model. The model is like the engine, which accepts fuel in the form of data; any changes made to the data before entering the model are known as preprocessing. KMeans is still a machine learning algorithm, but here it is being used for preprocessing. KMeans will convert each image (currently 64 numbers representing an image) into distances from each cluster center (centroid). In our example, you would imagine the 0, 6 and 8 clusters to lie close together. (We have reduced the dimensionality of the dataset from 64 columns to k columns, where k is the number of clusters.) This means the images are mapped out, and we classify based on their location relative to others. Now we place KMeans and LogisticRegression into a pipeline, which chains together all the steps relevant to your model; for example, we have to translate new data into distances before using our logistic classifier. A pipeline is also great for hyperparameter tuning. We usually try various values for the arguments that can be passed into our models, for example KMeans(n_clusters = 50, 31, 23). We don't want to run this three different times, compare the results and select the best manually! GridSearchCV allows us to try a range of parameters and automatically selects the best.
###Code
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
("kmeans", KMeans(n_clusters=50)),
("log_reg", LogisticRegression())
])
pipeline.fit(X_train,y_train)
from sklearn.model_selection import GridSearchCV
param_grid = dict(kmeans__n_clusters = range(2, 100))
grid_clf = GridSearchCV(pipeline, param_grid, cv=3, verbose =2)
grid_clf.fit(X_train, y_train)
grid_clf.best_params_
grid_clf.score(X_test, y_test)
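#Hedged aside: GridSearchCV refits the winning configuration on the full training
#set, so the best pipeline can be reused directly as a fitted model:
best_pipeline = grid_clf.best_estimator_
print(best_pipeline.named_steps["kmeans"].n_clusters)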
###Output
_____no_output_____
###Markdown
Clustering for Semi-Supervised Learning
###Code
n_labeled = 50
log_reg = LogisticRegression()
log_reg.fit(X_train[:n_labeled], y_train[:n_labeled]) #We have taken less data as the training set
log_reg.score(X_test, y_test)
X_train.shape #There are 1347 images, each consisting of 64 pixels
X_train #So 1347 rows, each a vector of the 64 pixels making up an image
k = 50
kmeans = KMeans(n_clusters =k)
X_digits_dist = kmeans.fit_transform(X_train) #transforms the values to distances from each cluster's centroid
X_digits_dist.shape #A matrix whose rows are vectors of distances from each cluster centroid
import numpy as np
representative_digits_idx = np.argmin(X_digits_dist, axis=0)
#For each cluster (column), argmin returns the index of the training image closest to that cluster's centroid
representative_digits_idx
X_Representative_Digits = X_train[representative_digits_idx]
X_Representative_Digits
import matplotlib.pyplot as plt #Needed for the plots below; not imported earlier in this notebook
for i in range(50):
    some_digit = X_Representative_Digits[i]
some_digit_image = some_digit.reshape(8,8)
plt.imshow(some_digit_image,cmap= "binary")
plt.axis("off")
plt.show()
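#Next step of the semi-supervised recipe (sketch only, hand labels elided): label
#these 50 representative images manually, e.g. y_representative = np.array([...]),
#then fit LogisticRegression on (X_Representative_Digits, y_representative);
#training on 50 hand-picked representatives typically beats 50 random instances.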
###Output
_____no_output_____
###Markdown
Import Data
###Code
from sklearn.cluster import KMeans #Import Scikit Learn's KMeans Algorithm
import pandas as pd #Import Pandas to assist with data handling tasks
FilePath = 'C:/Users/Viraj/Documents/Tekken 7 Project/Data/FrameData.xlsx'
FrameData = pd.read_excel(FilePath)
#Visualise the top 5 rows of the data
#Not all the data as we don't need to see all of it, save processing and space in notebook
FrameData.head()
###Output
_____no_output_____
###Markdown
Notes - Reading Data We have many categorical variables; these will need to be encoded. Many columns may be irrelevant, so I will try the algorithm on a more refined version in the future. But I won't let my bias remove those columns yet.
###Code
k = 5 #5 is the number of clusters
kmeans = KMeans(n_clusters = k) #Hit Tab inside the brackets to see more arguments available
#Assigning the algorithm to a variable, kmeans.
###Output
_____no_output_____ |
enem-2/notasEnemChallenge2.ipynb | ###Markdown
Data cleaning and categorization
###Code
df_train_clean['NU_INSCRICAO'] = df_train['NU_INSCRICAO']
df_test_clean['NU_INSCRICAO'] = df_test['NU_INSCRICAO']
def create_encoder(column, prefix):
#encoder = OneHotEncoder()
#train_column_df = pd.DataFrame(encoder.fit_transform(df_train[[column]]).toarray())
#test_column_df = pd.DataFrame(encoder.fit_transform(df_test[[column]]).toarray())
train_column_df = pd.get_dummies(df_train[column])
test_column_df = pd.get_dummies(df_test[column])
train_name_columns = df_train[column].sort_values().unique()
train_name_columns_co = [str(prefix) + str(train_name_column) for train_name_column in train_name_columns]
test_name_columns = df_test[column].sort_values().unique()
test_name_columns_co = [str(prefix) + str(test_name_column) for test_name_column in test_name_columns]
train_column_df.columns=train_name_columns_co
test_column_df.columns=test_name_columns_co
global df_train_clean
global df_test_clean
df_train_clean = pd.concat([df_train_clean, train_column_df ], axis=1)
df_test_clean = pd.concat([df_test_clean, test_column_df ], axis=1)
categorical_vars = {'CO_UF_RESIDENCIA' : 'co_uf_', 'TP_SEXO' : 'sexo_', 'TP_COR_RACA': 'raca_', 'TP_ST_CONCLUSAO': 'tp_st_con_',
'TP_ANO_CONCLUIU': 'tp_ano_con_', 'TP_ESCOLA': 'tp_esc_','TP_PRESENCA_CN': 'tp_pres_cn',
'TP_PRESENCA_CH': 'tp_pres_ch', 'TP_PRESENCA_LC': 'tp_pres_lc', 'TP_LINGUA': 'tp_ling_',
'Q001': 'q001_', 'Q002': 'q002_', 'Q006': 'q006_', 'Q024': 'q024_',
'Q025': 'q025_', 'Q026': 'q026_', 'Q047': 'q047_'}
#'TP_STATUS_REDACAO': 'tp_stat_red_', 'Q027': 'q027_',
for column, prefix in categorical_vars.items():
create_encoder(column, prefix)
#Inserting the numerical variables
train_numerical_vars = ['NU_NOTA_CN', 'NU_NOTA_CH', 'NU_NOTA_LC','NU_NOTA_COMP1', 'NU_NOTA_COMP2', 'NU_NOTA_COMP3',
'NU_NOTA_COMP4','NU_NOTA_COMP5', 'NU_NOTA_REDACAO']
test_numerical_vars = ['NU_NOTA_CN', 'NU_NOTA_CH', 'NU_NOTA_LC','NU_NOTA_COMP1', 'NU_NOTA_COMP2', 'NU_NOTA_COMP3',
'NU_NOTA_COMP4','NU_NOTA_COMP5', 'NU_NOTA_REDACAO']
df_train_clean = pd.concat([df_train_clean, df_train[train_numerical_vars]], axis=1)
df_test_clean = pd.concat([df_test_clean, df_test[test_numerical_vars]], axis=1)
X_train = df_train_clean.loc[:,'co_uf_11':]
y_train = df_train['NU_NOTA_MT']
X_test = df_test_clean.loc[:,'co_uf_11':]
X_train.shape, y_train.shape, X_test.shape
X_train_comp_X_test = X_train[X_test.columns]
X_train_comp_X_test.shape, y_train.shape, X_test.shape
regressor = LinearRegression()
regressor.fit(X_train_comp_X_test, y_train)
y_pred = regressor.predict(X_test)
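#Hedged aside (illustrative): the challenge provides no labels for X_test, so a
#quick in-sample RMSE on the training data is one rough sanity check:
from sklearn.metrics import mean_squared_error
rmse_train = mean_squared_error(y_train, regressor.predict(X_train_comp_X_test)) ** 0.5
print(rmse_train)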
X_test.head(5)
X_train.head(5)
df_result_insc = pd.DataFrame(df_test_clean['NU_INSCRICAO'])
resultado = pd.concat([df_result_insc, pd.DataFrame(np.round(y_pred,3))], axis=1)
resultado.reset_index(inplace=True, drop=True)
resultado.columns=['NU_INSCRICAO', 'NU_NOTA_MT']
resultado.info()
resultado.to_csv("answer.csv", index=False)
###Output
_____no_output_____ |
04_ingest/archive/glue-etl/continuous-nyc-taxi-dataset/DataDiscoveryAndConversation.ipynb | ###Markdown
Data Discovery and Transformation In this section of the lab, we'll use Glue to discover new transportation data. From there, we'll use Athena to query and start looking into the dataset to understand the data we are dealing with. We've also set up a set of ETLs using Glue to map the fields into a canonical form, since the datasets call the same fields by different names. After understanding the data, and cleaning it a little, we'll go into another notebook to perform feature engineering and time series modeling. What are Databases and Tables in Glue: When you define a table in the AWS Glue Data Catalog, you add it to a database. A database is used to organize tables in AWS Glue. You can organize your tables using a crawler or using the AWS Glue console. A table can be in only one database at a time. Your database can contain tables that define data from many different data stores. A table in the AWS Glue Data Catalog is the metadata definition that represents the data in a data store. You create tables when you run a crawler, or you can create a table manually in the AWS Glue console. The Tables list in the AWS Glue console displays values of your table's metadata. You use table definitions to specify sources and targets when you create ETL (extract, transform, and load) jobs.
###Code
import boto3
database_name = '2019reinventWorkshop'
## lets first create a namespace for the tables:
glue_client = boto3.client('glue')
create_database_resp = glue_client.create_database(
DatabaseInput={
'Name': database_name,
'Description': 'This database will contain the tables discovered through both crawling and the ETL processes'
}
)
###Output
_____no_output_____
###Markdown
This will create a new database, or namespace, that can hold the collection of tables: https://console.aws.amazon.com/glue/home?region=us-east-1#catalog:tab=databases You can use a crawler to populate the AWS Glue Data Catalog with tables. This is the primary method used by most AWS Glue users. A crawler can crawl multiple data stores in a single run. Upon completion, the crawler creates or updates one or more tables in your Data Catalog. Extract, transform, and load (ETL) jobs that you define in AWS Glue use these Data Catalog tables as sources and targets. The ETL job reads from and writes to the data stores that are specified in the source and target Data Catalog tables.
###Code
crawler_name = '2019reinventworkshopcrawler'
create_crawler_resp = glue_client.create_crawler(
Name=crawler_name,
Role='GlueRole',
DatabaseName=database_name,
Description='Crawler to discover the base tables for the workshop',
Targets={
'S3Targets': [
{
'Path': 's3://serverless-analytics/reinvent-2019/taxi_data/',
},
]
}
)
response = glue_client.start_crawler(
Name=crawler_name
)
###Output
_____no_output_____
###Markdown
After starting the crawler, you can go to the glue console if you'd like to see it running: https://console.aws.amazon.com/glue/home?region=us-east-1#catalog:tab=crawlers After it finishes crawling, you can see the datasets (represented as "tables") it automatically discovered. Waiting for the Crawler to finish
###Code
import time
response = glue_client.get_crawler(
Name=crawler_name
)
while (response['Crawler']['State'] == 'RUNNING') | (response['Crawler']['State'] == 'STOPPING'):
print(response['Crawler']['State'])
# Wait for 40 seconds
time.sleep(40)
response = glue_client.get_crawler(
Name=crawler_name
)
print('finished running', response['Crawler']['State'])
###Output
RUNNING
RUNNING
STOPPING
STOPPING
finished running READY
###Markdown
Querying the data We'll use Athena to query the data. Athena allows us to perform SQL queries against datasets on S3 without having to transform them or load them into a traditional SQL datastore, and it allows rapid ad-hoc investigation. Later we'll use Spark to do ETL and feature engineering.
###Code
!pip install --upgrade pip > /dev/null
!pip install PyAthena > /dev/null
###Output
_____no_output_____
###Markdown
Athena uses S3 to store results to allow different types of clients to read it and so you can go back and see the results of previous queries. We can set that up next:
###Code
import sagemaker
sagemaker_session = sagemaker.Session()
athena_data_bucket = sagemaker_session.default_bucket()
###Output
_____no_output_____
###Markdown
Next we'll create an Athena connection we can use, much like a standard JDBC/ODBC connection
###Code
from pyathena import connect
import pandas as pd
sagemaker_session = sagemaker.Session()
conn = connect(s3_staging_dir="s3://" + athena_data_bucket,
region_name=sagemaker_session.boto_region_name)
df = pd.read_sql('SELECT \'yellow\' type, count(*) ride_count FROM "' + database_name + '"."yellow" ' +
'UNION ALL SELECT \'green\' type, count(*) ride_count FROM "' + database_name + '"."green"' +
'UNION ALL SELECT \'fhv\' type, count(*) ride_count FROM "' + database_name + '"."fhv"', conn)
print(df)
df.plot.bar(x='type', y='ride_count')
green_etl = '2019reinvent_green'
response = glue_client.start_job_run(
JobName=green_etl,
WorkerType='Standard', # other options include: 'G.1X'|'G.2X',
NumberOfWorkers=5
)
print('response from starting green')
print(response)
###Output
response from starting green
{'JobRunId': 'jr_466ee6fbc9356bdaf875f815035e823c382666cc060e38092fe91d5411ae0546', 'ResponseMetadata': {'RequestId': 'bf2b5ed1-2b2b-11ea-9cf9-c754cb1c941b', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Mon, 30 Dec 2019 17:42:28 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '82', 'connection': 'keep-alive', 'x-amzn-requestid': 'bf2b5ed1-2b2b-11ea-9cf9-c754cb1c941b'}, 'RetryAttempts': 0}}
###Markdown
After kicking it off, you can see it running in the console too: https://console.aws.amazon.com/glue/home?region=us-east-1#etl:tab=jobs WAIT UNTIL THE ETL JOB FINISHES BEFORE CONTINUING! ALSO, YOU MUST CHANGE THE BUCKET PATH IN THIS CELL - FIND THE BUCKET IN S3 THAT CONTAINS '2019reinventetlbucket' in the name
###Code
#let's list the s3 bucket name:
!aws s3 ls | grep '2019reinventetlbucket' | head -1
# syntax should be s3://...
normalized_bucket = 's3://aim357-template2-2019reinventetlbucket-144yyhe1x8qgo'
## DO NOT MODIFY THESE LINES, they are there to ensure the line above is updated correctly
assert(normalized_bucket != 's3://FILL_IN_BUCKET_NAME')
assert(normalized_bucket.startswith( 's3://' ))
create_crawler_resp = glue_client.create_crawler(
Name=crawler_name + '_normalized',
Role='GlueRole',
DatabaseName=database_name,
    Description='Crawler to discover the normalized tables for the workshop',
Targets={
'S3Targets': [
{
'Path': normalized_bucket + "/canonical/",
},
]
}
)
response = glue_client.start_crawler(
Name=crawler_name + '_normalized'
)
###Output
_____no_output_____
###Markdown
Let's wait for the next crawler to finish; this will discover the normalized dataset.
###Code
import time
response = glue_client.get_crawler(
Name=crawler_name + '_normalized'
)
while (response['Crawler']['State'] == 'RUNNING') | (response['Crawler']['State'] == 'STOPPING'):
print(response['Crawler']['State'])
# Wait for 40 seconds
time.sleep(40)
response = glue_client.get_crawler(
Name=crawler_name + '_normalized'
)
print('finished running', response['Crawler']['State'])
###Output
RUNNING
STOPPING
STOPPING
finished running READY
###Markdown
Querying the Normalized Data Now let's look at the total counts for the aggregated information
###Code
normalized_df = pd.read_sql('SELECT type, count(*) ride_count FROM "' + database_name + '"."canonical" group by type', conn)
print(normalized_df)
normalized_df.plot.bar(x='type', y='ride_count')
#
query = "select type, date_trunc('day', pickup_datetime) date, count(*) cnt from \"" + database_name + "\".canonical where pickup_datetime < timestamp '2099-12-31' group by type, date_trunc(\'day\', pickup_datetime) "
typeperday_df = pd.read_sql(query, conn)
typeperday_df.plot(x='date', y='cnt')
###Output
_____no_output_____
###Markdown
We see some bad data here... We are expecting only 2018 and 2019 datasets here, but can see there are records far into the future and in the past. This represents bad data that we want to eliminate before we build our model.
###Code
# The only reason we put this conditional here is so you can execute the cell multiple times:
# without the check, a re-run would fail to find the 'date' column again; this makes interacting w/ the notebook more seamless
if type(typeperday_df.index) != pd.core.indexes.datetimes.DatetimeIndex:
print('setting index to date')
typeperday_df = typeperday_df.set_index('date', drop=True)
typeperday_df.head()
typeperday_df.loc['2018-01-01':'2019-12-31'].plot(y='cnt')
###Output
_____no_output_____
###Markdown
Let's look at some of the bad data now: All the bad data, at least the bad data in the future, is coming from the yellow taxi license type. Note, we are querying the transformed data. We should check the raw dataset to see if it's also bad or whether something happened in the ETL process. Let's find the two 2088 records to make sure they are in the source data
###Code
pd.read_sql("select * from \"" + database_name + "\".yellow where tpep_pickup_datetime like '2088%'", conn)
## Next let's plot this per type:
typeperday_df.loc['2018-01-01':'2019-07-30'].pivot_table(index='date',
columns='type',
values='cnt',
aggfunc='sum').plot()
###Output
_____no_output_____
###Markdown
Fixing our Time Series data Some details of what caused this drop: On August 14, 2018, Mayor de Blasio signed Local Law 149 of 2018, creating a new license category for TLC-licensed FHV businesses that currently dispatch or plan to dispatch more than 10,000 FHV trips in New York City per day under a single brand, trade, or operating name, referred to as High-Volume For-Hire Services (HVFHS). This law went into effect on Feb 1, 2019. Let's bring in the other license type and see how it affects the time series charts:
###Code
create_crawler_resp = glue_client.create_crawler(
Name=crawler_name + '_fhvhv',
Role='GlueRole',
DatabaseName=database_name,
    Description='Crawler to discover the fhvhv tables for the workshop',
Targets={
'S3Targets': [
{
'Path': 's3://serverless-analytics/reinvent-2019_moredata/taxi_data/fhvhv/',
},
]
}
)
response = glue_client.start_crawler(
Name=crawler_name + '_fhvhv'
)
###Output
_____no_output_____
###Markdown
Wait to discover the fhvhv dataset...
###Code
import time
response = glue_client.get_crawler(
Name=crawler_name + '_fhvhv'
)
while (response['Crawler']['State'] == 'RUNNING') | (response['Crawler']['State'] == 'STOPPING'):
print(response['Crawler']['State'])
# Wait for 40 seconds
time.sleep(40)
response = glue_client.get_crawler(
Name=crawler_name + '_fhvhv'
)
print('finished running', response['Crawler']['State'])
query = 'select \'fhvhv\' as type, date_trunc(\'day\', cast(pickup_datetime as timestamp)) date, count(*) cnt from "' + database_name + '"."fhvhv" group by date_trunc(\'day\', cast(pickup_datetime as timestamp)) '
typeperday_fhvhv_df = pd.read_sql(query, conn)
typeperday_fhvhv_df = typeperday_fhvhv_df.set_index('date', drop=True)
print(typeperday_fhvhv_df.head())
typeperday_fhvhv_df.plot(y='cnt')
pd.concat([typeperday_fhvhv_df, typeperday_df], sort=False).loc['2018-01-01':'2019-07-30'].pivot_table(index='date',
columns='type',
values='cnt',
aggfunc='sum').plot()
###Output
_____no_output_____ |
ipynb_r_mec_optim/B03_dynamicprogramming.ipynb | ###Markdown
Block 3: Linear programming: Dynamic programming Alfred Galichon (NYU) `math+econ+code' masterclass on matching models, optimal transport and applications © 2018-2019 by Alfred Galichon. Support from NSF grant DMS-1716489 is acknowledged. James Nesbit contributed. Learning Objectives* Basics of (finite-horizon, discrete) dynamic programming: Bellman's equation; forward induction, backward induction* Markov decision processes* Dynamic programming as linear programming: interpretation of duality* Vectorization, Kronecker products, multidimensional arrays References* Ford Jr, L. R., & Fulkerson, D. R. (1958). Constructing maximal dynamic flows from static flows. *Operations Research*, 6(3), 419-433.* Schrijver, A. (2003). *Combinatorial Optimization: Polyhedra and Efficiency*, Vol. A. Springer. Section 12.5.c.* Bertsekas, D. (2011). *Dynamic Programming and Optimal Control*, Vols. I and II, 3rd ed. Athena.* Ljungqvist, L., & Sargent, T. (2012). *Recursive Macroeconomic Theory*, 3rd ed. MIT.* Rust, J. (1987). Optimal replacement of GMC bus engines: an empirical model of Harold Zurcher. *Econometrica*. Motivation John Rust describes the problem of Harold Zurcher, an engineer who runs a bus fleet, as follows:* each month, buses operate a stochastic number of miles* operations costs increase with mileage (maintenance, fuel, insurance and costs of unexpected breakdowns)* there is a fixed cost associated with overhaul (independent of mileage)* each month, Zurcher needs to decide whether to send each bus to overhaul, which resets its mileage to zero, or to let it operate. This problem is a *dynamic programming problem*. When deciding whether or not to perform the overhaul, Zurcher needs to compare the operation cost not only with the cost of overhaul, but must also take into account the reduction in operation costs in future periods. While in this instance of the problem there is no externality across buses, so each bus could decide in isolation whether to go on maintenance or not, it is not hard to envision a variant of this problem with externalities. For instance, one may assume that there is a maximum number of buses that can go on overhaul at the same time. We shall derive the optimal policy for Harold Zurcher, (somewhat freely) based on Rust's data.
For $U\in\mathbb{R}^{\mathcal{X}}$, $\left( P^{\intercal}U\right) _{xy}=\sum_{x^{\prime}}P_{x^{\prime}|xy}U_{x^{\prime}}$ denotes the expectation of $U_{X_{t+1}}$ given $X_{t}=x$ and $Y_{t}=y$. Let $\pi_{xy}^{t}$ be the number of individuals who are in state $x$ and choose $y$ ("policy variable"). Define $n_{x}^{t}$ to be the number of individuals in state $x$ at time $t$. We have the counting equation\begin{align*}\sum_{y\in\mathcal{Y}}\pi_{xy}^{t}=n_{x}^{t}.\end{align*}We have $n_{x}^{1}=n_{x}$ and, because of the Markov transitions, \begin{align*}\sum_{x\in\mathcal{X},~y\in\mathcal{Y}}P_{x^{\prime}|xy}\pi_{xy}^{t-1}=n_{x^{\prime}}^{t}~1\leq t\leq T,\end{align*}which expresses that among the individuals in state $x$ who choose $y$ at time $t-1$, a fraction $P_{x^{\prime}|xy}$ transits to state $x^{\prime}$ at time $t$. Primal problem: Central planner's problem The central planner's problem is:\begin{align*}\max_{\pi_{xy}^{t}\geq0} & \sum_{x\in\mathcal{X},~y\in\mathcal{Y},~t\in\mathcal{T}}\pi_{xy}^{t}u_{xy}^{t} \\s.t. & \sum_{y^{\prime}\in\mathcal{Y}}\pi_{x^{\prime}y^{\prime}}^{t}=\sum_{x\in\mathcal{X},~y\in\mathcal{Y}}P_{x^{\prime}|xy}\pi_{xy}^{t-1}~\forall t\in\mathcal{T}\backslash\left\{ 1\right\} ~\left[U_{x^{\prime}}^{t}\right] \\& \sum_{y^{\prime}\in\mathcal{Y}}\pi_{xy^{\prime}}^{1}=n_{x}~\left[U_{x}^{1}\right]\end{align*} Dual problem We have introduced $U_{x}^{t}$, the Lagrange multiplier associated with the constraints at time $t$. It will be convenient to also introduce $U_{x}^{T+1}=0$. The dual problem is\begin{align*}\min_{U_{x}^{t},~t\in\mathcal{T},~x\in\mathcal{X}} & \sum_{x\in\mathcal{X}}n_{x}U_{x}^{1} \\s.t.~ & U_{x}^{t}\geq u_{xy}^{t}+\sum_{x^{\prime}}U_{x^{\prime}}^{t+1}P_{x^{\prime}|xy}~\forall x\in\mathcal{X},~y\in\mathcal{Y},~t\in\mathcal{T}\backslash\left\{ T\right\} \\& U_{x}^{T}\geq u_{xy}^{T}~\forall x\in\mathcal{X},y\in\mathcal{Y}\end{align*} Complementary slackness and Bellman's equation By complementary slackness, we have\begin{align*}\pi_{xy}^{t}>0\Longrightarrow U_{x}^{t}=u_{xy}^{t}+\sum_{x^{\prime}}U_{x^{\prime}}^{t+1}P_{x^{\prime}|xy}\end{align*}whose interpretation is immediate: if $y$ is the optimal choice in state $x$ at time $t$, then the intertemporal payoff of $x$ at $t$ is the sum of her myopic payoff $u_{xy}^{t}$ and her expected payoff at the next step. As a result, the dual variable is called the *intertemporal payoff* in the vocabulary of dynamic programming. The relation yields *Bellman's equation*\begin{align*}U_{x}^{t}=\max_{y\in\mathcal{Y}}\left\{ u_{xy}^{t}+\sum_{x^{\prime}}U_{x^{\prime}}^{t+1}P_{x^{\prime}|xy}\right\}.\end{align*}
Kronecker productA very important identity is\begin{align*}vec\left(BXA^{\intercal}\right) = \left( A\otimes B\right) vec\left(X\right),\end{align*}where $\otimes$ is the Kronecker product: for 2x2 matrices,\begin{align*}A\otimes B=\begin{pmatrix}a_{11}B & a_{12}B\\a_{21}B & a_{22}B\end{pmatrix}.\end{align*}Recall, indices $xy\in\mathbb{R}^{\left\vert \mathcal{X}\right\vert \left\vert \mathcal{Y}\right\vert}$ are represented by varying the first index first.Let:* $P$ be the ($\left\vert \mathcal{X}\right\vert \left\vert \mathcal{Y}\right\vert $)$\times\left\vert \mathcal{X}\right\vert $ matrix of term $P_{x^{\prime}|xy}$.* $J$ be the ($\left\vert \mathcal{X}\right\vert \left\vert \mathcal{Y}\right\vert $)$\times\left\vert \mathcal{X}\right\vert $ matrix of term $1\left\{ x=x^{\prime}\right\} $. One has\begin{align*}J=1_{\mathcal{Y}}\otimes I_{\mathcal{X}}.\end{align*}* $U$ be the column vector of size $\left\vert \mathcal{X}\right\vert \left\vert T\right\vert $ obtained by stacking the vectors $U^{1}$,...,$U^{T}$.* $b$ be the column vector of size $\left\vert \mathcal{X}\right\vert \left\vert T\right\vert $ whose $\left\vert \mathcal{X}\right\vert $ first terms are the terms of $n$, and whose other terms are zero.* $u$ be the column vector of size $\left\vert \mathcal{X}\right\vert \left\vert \mathcal{Y}\right\vert \left\vert T\right\vert $ obtained by stacking the vectors $u^{1}$,..., $u^{T}$.* $\pi$ be the vector obtained by stacking the vectors $\pi^{1}$,...,$\pi^{T}$. $A$ is the $\left\vert T\right\vert \left\vert \mathcal{X}\right\vert \left\vert \mathcal{Y}\right\vert \times\left\vert T\right\vert \left\vert\mathcal{X}\right\vert $ matrix\begin{align*}A=\begin{pmatrix}J & -P & 0 & \cdots & 0\\0 & J & \ddots & \ddots & \vdots\\\vdots & \ddots & \ddots & -P & 0\\\vdots & & \ddots & J & -P\\0 & \cdots & \cdots & 0 & J\end{pmatrix}\end{align*}Letting $N_{\mathcal{T}}$ be the $T\times T$ matrix given by\begin{align*}N_{\mathcal{T}}=\begin{pmatrix}0 & 1 & 0 & \cdots & 0\\\vdots & \ddots & \ddots & & \vdots\\& & \ddots & \ddots & 0\\\vdots & & & \ddots & 1\\0 & \cdots & & \cdots & 0\end{pmatrix}\end{align*}the constraint matrix can be reexpressed as\begin{align*}A=I_{\mathcal{T}}\otimes J-N_{\mathcal{T}}\otimes P=I_{\mathcal{T}}\otimes1_{\mathcal{Y}}\otimes I_{\mathcal{X}}-N_{\mathcal{T}}\otimes P.\end{align*}Although we'll see much faster direct methods, the primal and dual problems could be solved by a black-box linear programming solver.Then the primal problem expresses as\begin{align*}\max_{\pi\geq0} & \, u^{\intercal}\pi\\s.t.~ & A^{\intercal}\pi=b~\left[U\right]\end{align*}while the dual problem is given by\begin{align*}\min_{U} & \, b^{\intercal}U\\s.t.~ & AU\geq u~\left[\pi\right] .\end{align*}But there is in fact a much faster way to compute the primal and dual solutions without having to use the full power of a linear programming solver. Along with the fact that $U^{T+1}=0$, [Bellman's equation](bellman) implies that there is a particularly simple method to obtain the dual variables $U^{t}$, by solving recursively backward in time, from $t=T$ to $t=1$. This method is called *backward induction*:---**Algorithm**Set $U^{T+1}=0$For $t=T$ down to $1$, set $U_{x}^{t}:=\max_{y\in\mathcal{Y}}\left\{u_{xy}^{t}+\sum_{x^{\prime}}U_{x^{\prime}}^{t+1}P_{x^{\prime}|xy}\right\}$.---The primal variables $\pi^{t}$ are then deduced also by recursion, but this time forward in time from $t=1$ to $t=T-1$, by the so-called *forward induction* method:---**Algorithm**1. 
Set $n^{1}=n$ and compute $\left( U^{t}\right)$ by backward induction.2. For $t=1$ to $T$, pick $\pi^{t}$ such that $\pi_{xy}^{t}/n_{x}^{t}$ is a probability measure supported in the set\begin{align*}\left\{ y:U_{x}^{t}=u_{xy}^{t}+\sum_{x^{\prime}}U_{x^{\prime}}^{t+1}P_{x^{\prime}|xy}\right\} .\end{align*}3. Set $n_{x^{\prime}}^{t+1}:=\sum_{x\in\mathcal{X},~y\in\mathcal{Y}}P_{x^{\prime}|xy}\pi_{xy}^{t}$. End--- Remarks 1. The dual variable $U$ is always unique (this follows from the backward induction computation); the primal variable is not, as there may be ties between several states.2. The computation by the combination of the backward and forward algorithms is much faster than the computation by a black-box linear programming solver.3. However, as soon as we introduce capacity constraints, the computation by backward induction no longer works, and the linear programming formulation is useful. Back to Harold Zurcher The state $x\in\mathcal{X}=\left\{x_{0},...,\bar{x}\right\}$ of each bus at each period $t$ is the mileage since the last overhaul. The transition between states is as follows:* When no overhaul is performed, these states undergo random transitions (depending on how much the bus is used): $x_{i}\rightarrow x_{i^{\prime}}$ with some probability $P_{i^{\prime}|i}$, where $i^{\prime}\geq i$.* When overhaul is performed on a bus, the state is restored to the zero state $x_{0}$. There is a fixed cost $C$ associated with overhaul (independent of mileage), while operations costs $c\left( x\right)$ increase with mileage (maintenance, fuel, insurance and costs of unexpected breakdowns). Implementation Assume the states are discretized into $12,500$ mile brackets. There are $30$ states, so $\mathcal{X}=\left\{ 1,...,30\right\}$: `nbX = 30`.The choice set is $\mathcal{Y}=\left\{ y_{0}=1,y_{1}=2\right\}$ (operate or overhaul): `nbY = 2`.There are $40$ periods (quarter years over $10$ years): `nbT = 40`.
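Before turning to the implementation, a quick worked instance of the vectorization identity $vec\left( BXA^{\intercal}\right) =\left( A\otimes B\right) vec\left( X\right)$ introduced above (a $2\times2$ check with $A=I_{2}$, added here purely for illustration): writing $x_{1},x_{2}$ for the columns of $X$,\begin{align*}vec\left( BX\right) =\begin{pmatrix}Bx_{1}\\Bx_{2}\end{pmatrix}=\begin{pmatrix}B & 0\\0 & B\end{pmatrix}\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}=\left( I_{2}\otimes B\right) vec\left( X\right),\end{align*}which matches the general formula with $A=I_{2}$.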
###Code
library("Matrix")
library("gurobi")
nbX = 3 #30
nbT = 2 #40
nbY = 2 # choice set: 1=run as usual; 2=overhaul
###Output
_____no_output_____
###Markdown
Transitions:* If no overhaul is performed, each state but the last one has a probability $25\%$ of transiting to the next one, and probability $75\%$ of remaining the same. The last state transits to $1$ with probability $25\%$ (overhaul is performed when beyond the last state).* If overhaul is performed, the state transits to $1$ for sure. We are going to solve the dual problem \begin{align*}\min_{U} & \, b^{\intercal}U\\s.t.~ & AU\geq u~\left[ \pi\right] \end{align*} First let's construct our constraint matrix $A$. We build the matrix of transitions $P_{x^{\prime}|xy}$, of dimension `nbX*nbY` $\times$ `nbX`. Let\begin{align*}L_{\mathcal{X}}=\begin{pmatrix}0 & 1 & 0 & 0\\0 & \ddots & \ddots & 0\\0 & \ddots & \ddots & 1\\1 & 0 & 0 & 0\end{pmatrix}\text{ and }R_{\mathcal{X}}=\begin{pmatrix}1 & 0 & \cdots & 0\\1 & \vdots & \ddots & \vdots\\1 & \vdots & \ddots & \vdots\\1 & 0 & \cdots & 0\end{pmatrix}\end{align*}so that $P$ is given by\begin{align*}P=1_{y_{0}}\otimes\left( 0.75I_{\mathcal{X}}+0.25L_{\mathcal{X}}\right)+1_{y_{1}}\otimes R_{\mathcal{X}}\end{align*} Which in R looks like
###Code
IdX = Diagonal(nbX)
LX = sparseMatrix(c(nbX, 1:(nbX - 1)), 1:nbX)
RX = sparseMatrix(1:nbX, rep(1, nbX), dims = c(nbX, nbX))
P = kronecker(c(1, 0), 0.75 * IdX + 0.25 * LX) + kronecker(c(0, 1), RX)
###Output
_____no_output_____
###Markdown
* Let's make sure we encoded the matrix of Markov transitions $P$ correctly:
###Code
colnames(P) = paste0("x",1:nbX)
rownames(P) = outer(paste0("x",1:nbX,","),paste0("y",1:nbY),FUN=paste0)
P
###Output
_____no_output_____
###Markdown
* Looks about right! Recall the constraint matrix $A$ can be expressed as\begin{align*}A &= I_{\mathcal{T}}\otimes J-N_{\mathcal{T}}\otimes P \\ &= I_{\mathcal{T}} \otimes1_{\mathcal{Y}}\otimes I_{\mathcal{X}}-N_{\mathcal{T}}\otimes P.\end{align*}where\begin{align*}N_{\mathcal{T}}=\begin{pmatrix}0 & 1 & 0 & \cdots & 0\\\vdots & \ddots & \ddots & & \vdots\\& & \ddots & \ddots & 0\\\vdots & & & \ddots & 1\\0 & \cdots & & \cdots & 0\end{pmatrix}\end{align*}
###Code
IdT = Diagonal(nbT)
NT = sparseMatrix(1:(nbT - 1), 2:nbT, dims = c(nbT, nbT))
A = kronecker(kronecker(IdT, matrix(1, nbY, 1)), IdX) - kronecker(NT, P)
###Output
_____no_output_____
###Markdown
* Time to take a look at matrix A
###Code
rownames(A) = c(outer(paste0("t",1:nbT,","),rownames(P),FUN=paste0))
colnames(A) = c(outer(paste0("t",1:nbT,","),colnames(P),FUN=paste0))
A
###Output
_____no_output_____
###Markdown
Costs:* The cost of replacing an engine is $C=\$8,000$ (in $1985$ dollars).* The operations cost is $c\left( x\right) =x\times5\times10^{2}$, i.e. \$500 per mileage bracket. The discount factor is $\beta=0.9$.
###Code
overhaulCost = 8000
maintCost = function(x) (x * 500)
beta = 0.9
###Output
_____no_output_____
###Markdown
Next, we build $u_{xyt}$ * First the $u_{xy}$'s so that $u_{x1}=-x\times5\times10^{2}$ for $x<\bar{x}$, and $u_{\bar{x}1}=-C$, while $u_{x2}=-C$ for all $x$.* Next the $u_{xyt}$ so that $u_{xyt}=u_{xy}\beta^{t}=vec\left(\left(\beta^{1},...,\beta^{T}\right) \otimes u_{xy}\right)$ Finally we build $b_{xt}$
###Code
n1_x = rep(1, nbX)
u_xy = c(-maintCost(1:(nbX - 1)), rep(-overhaulCost, nbX + 1))
u_xyt = c(kronecker(beta^(1:nbT), u_xy))
b_xt = c(n1_x, rep(0, nbX * (nbT - 1)))
result = gurobi(list(A = A, obj = c(b_xt), modelsense = "min", rhs = u_xyt, sense = ">",
lb = -Inf), params = list(OutputFlag = 0))
U_x_t_gurobi = array(result$x, dim = c(nbX, nbT))
pi_x_y_t = array(result$pi, dim = c(nbX, nbY, nbT))
print(U_x_t_gurobi[, 1])
###Output
[1] -956.25 -3127.50 -7605.00
###Markdown
Backward induction The smarter way to approach this problem is of course using backward induction
###Code
U_x_t = matrix(0, nbX, nbT)
contVals = apply(X = array(u_xyt, dim = c(nbX, nbY, nbT))[, , nbT], FUN = max, MARGIN = 1)
U_x_t[, nbT] = contVals
for (t in (nbT - 1):1) {
myopic = array(u_xyt, dim = c(nbX, nbY, nbT))[, , t]
Econtvals = matrix(P %*% contVals, nrow = nbX)
contVals = apply(X = myopic + Econtvals, FUN = max, MARGIN = 1)
U_x_t[, t] = contVals
}
###Output
_____no_output_____
###Markdown
Which gives identical solutions to the ones obtained when using linear programming:
###Code
print(U_x_t_gurobi[, 1] - U_x_t[, 1])
###Output
[1] 0 0 0
###Markdown
Capacity constraints Now assume that the total number of alternatives $y$ chosen at time $t$ cannot be more than $m_{y}^{t}$ (either because the workshop has a maximal capacity, or because operations require a minimum number of buses in service). The primal problem becomes\begin{align*}\max_{\pi_{xy}^{t}\geq0} & \sum_{x\in\mathcal{X},~y\in\mathcal{Y},~t\in\mathcal{T}}\pi_{xy}^{t}u_{xy}^{t}\\s.t. & \sum_{y^{\prime}\in\mathcal{Y}}\pi_{x^{\prime}y^{\prime}}^{t}=\sum_{x\in\mathcal{X},~y\in\mathcal{Y}}P_{x^{\prime}|xy}\pi_{xy}^{t-1}~\left[ U_{x^{\prime}}^{t}\right] \\& \sum_{y^{\prime}\in\mathcal{Y}}\pi_{xy^{\prime}}^{1}=n_{x}~\left[U_{x}^{1}\right] \\& \sum_{x\in\mathcal{X}}\pi_{xy}^{t}\leq m_{y}^{t}~[\lambda_{y}^{t}]\end{align*}Let us describe this problem in matrix form. Let $\tilde{\pi}^{t}$ be the matrix of term $\pi_{xy}^{t}$ for fixed $t$. The last constraint rewrites $1_{\mathcal{X}}^{\intercal}\tilde{\pi}^{t}\leq\left( m^{t}\right) ^{\intercal}$. Vectorizing yields $vec\left( 1_{\mathcal{X}}^{\intercal}\tilde{\pi}^{t}I_{\mathcal{Y}}\right) \leq vec\left( m^{t}\right) $, thus\begin{align*}\left( I_{\mathcal{Y}}\otimes1_{\mathcal{X}}^{\intercal}\right) vec\left(\tilde{\pi}^{t}\right) \leq vec\left( m^{t}\right) ,\end{align*}hence the constraint rewrites $B^{\intercal}\pi\leq m$, with\begin{align*}B=\begin{pmatrix}I_{\mathcal{Y}}\otimes1_{\mathcal{X}} & 0 & \cdots & 0\\0 & \ddots & \ddots & \vdots\\\vdots & \ddots & \ddots & 0\\0 & \cdots & 0 & I_{\mathcal{Y}}\otimes1_{\mathcal{X}}\end{pmatrix}=I_{\mathcal{T}}\otimes I_{\mathcal{Y}}\otimes1_{\mathcal{X}}.\end{align*}The primal problem then writes\begin{align*}\max_{\pi\geq0} & u^{\intercal}\pi\\s.t.~ & A^{\intercal}\pi=b~\left[ U\right] \\& B^{\intercal}\pi\leq m~\left[ \Lambda\right]\end{align*}whose dual is\begin{align*}\min_{U,\Lambda\geq0} & b^{\intercal}U+m^{\intercal}\Lambda\\s.t.~ & AU+B\Lambda\geq u~\left[ \pi\right]\end{align*}The dual becomes\begin{align*}\min_{U_{x}^{t},\lambda_{y}^{t}\geq0} & \sum_{x\in\mathcal{X}}n_{x}U_{x}^{1}+\sum_{y\in\mathcal{Y}}\sum_{t\in\mathcal{T}}m_{y}^{t}\lambda_{y}^{t}\\s.t.~ & U_{x}^{t}\geq u_{xy}^{t}-\lambda_{y}^{t}+\sum_{x^{\prime}}U_{x^{\prime}}^{t+1}P_{x^{\prime}|xy}~\forall x\in\mathcal{X},~y\in\mathcal{Y},~t\in\mathcal{T}\backslash\left\{ T\right\} \\& U_{x}^{T}\geq u_{xy}^{T}-\lambda_{y}^{T}~\forall x\in\mathcal{X},y\in\mathcal{Y}\end{align*}and $\lambda_{y}^{t}$ interprets as the shadow price of alternative $y$ at time $t$. This constraint is extremely easy to code.
###Code
m_y_t = rep(c(sum(n1_x), 1), nbT)
B = kronecker(kronecker(IdT, sparseMatrix(1:nbY, 1:nbY)), matrix(1, nbX, 1))
result = gurobi(list(A = cbind(A, B), obj = c(b_xt, m_y_t), modelsense = "min", rhs = u_xyt,
sense = ">", lb = c(rep(-Inf, nbX * nbT), rep(0, nbY * nbT))), params = list(OutputFlag = 0))
U_x_t_gurobi = array(result$x, dim = c(nbX, nbT))
print(U_x_t_gurobi[, 1])
###Output
[1] -956.25 -3127.50 -12161.25
|
Runge2D.ipynb | ###Markdown
Treatment of Runge effect in bivariate interpolation:
###Code
import numpy as np
import numpy.matlib as mlb
import matplotlib.pyplot as plb
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pylab as plt
%matplotlib inline
from interp2d import wamfit, pdpts, equisp_unisolv
f = lambda x,y: 1./(1+5*(x**2+y**2))
def S(x,y):
xyrange = np.array([-1,1,-1,1])
    # map to [0,1]
xn = (x+1)/2
yn = (y+1)/2
X = (xyrange[0]+xyrange[1]+(xyrange[1]-xyrange[0])*
-1*np.cos(xn*np.pi))/2
Y = (xyrange[2]+xyrange[3]+(xyrange[3]-xyrange[2])*
-1*np.cos(yn*np.pi))/2
return X.reshape(-1), Y.reshape(-1)
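# Note (added comment): S is a Chebyshev-like cosine map, so interpolating the
# composed function on equispaced nodes amounts to interpolating f on mapped
# ("fake Padua") nodes that cluster near the boundary, which tames the Runge
# oscillations seen with plain equispaced interpolation.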
N=10
pdx, pdy = pdpts(N)
x, y = equisp_unisolv(N)
x_f, y_f = S(x,y)
n_eval = N+10
X, Y = np.meshgrid(np.linspace(-1,1,n_eval),np.linspace(-1,1, n_eval))
X, Y = X.flatten(), Y.flatten()
f_true = f(X,Y).reshape(-1,1)
X_f, Y_f = S(X,Y)
f_eq = wamfit(N, np.array([x, y]).T, np.array([X,Y]).T, f(x,y).reshape(-1,1))[1]
err_eq = np.linalg.norm(f_eq.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval
f_fake = wamfit(N, np.array([x_f, y_f]).T, np.array([X_f,Y_f]).T, f(x,y).reshape(-1,1))[1]
err_fake = np.linalg.norm(f_fake.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval
f_PD = wamfit(N, np.array([pdx, pdy]).T, np.array([X,Y]).T, f(pdx,pdy).reshape(-1,1))[1]
err_PD = np.linalg.norm(f_PD.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval
alpha = .8; s = 50
fig = plt.figure(figsize=(16,16))
ax1 = fig.add_subplot(2,2,1, projection='3d')
ax1.plot_surface(X.reshape(n_eval,-1), Y.reshape(n_eval,-1), f_true.reshape(n_eval,-1), rstride=1, cstride=1,
linewidth=0, antialiased=False, alpha = alpha)
ax1.set_title("Original function")
ax1.set_zlim([0,1])
##
ax2 = fig.add_subplot(2,2,2, projection='3d')
ax2.plot_surface(X.reshape(n_eval,-1), Y.reshape(n_eval,-1), f_eq.reshape(n_eval,-1), rstride=1, cstride=1,
linewidth=0, antialiased=False, alpha = alpha)
ax2.scatter(x, y, 0, c='b',s=s//4, marker = '.')
ax2.set_title("Interpolation on equispaced nodes")
ax2.set_zlabel("MSE = %5.5f"%(np.linalg.norm(f_eq.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval), fontsize=11)
ax2.set_zlim([0,1])
##
ax3 = fig.add_subplot(2,2,3, projection='3d')
ax3.plot_surface(X.reshape(n_eval,-1), Y.reshape(n_eval,-1), f_PD.reshape(n_eval,-1), rstride=1, cstride=1,
linewidth=0, antialiased=False, alpha = alpha)
ax3.scatter(pdx, pdy, 0, c='b',s=s//4, marker = '.')
ax3.set_title("Interpolation on Padua points")
ax3.set_zlabel("MSE = %5.5f"%(np.linalg.norm(f_PD.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval), fontsize=11)
ax3.set_zlim([0,1])
##
ax4 = fig.add_subplot(2,2,4, projection='3d')
ax4.plot_surface(X.reshape(n_eval,-1), Y.reshape(n_eval,-1), f_fake.reshape(n_eval,-1), rstride=1, cstride=1,
linewidth=0, antialiased=False, alpha = alpha)
ax4.scatter(x, y, 0, c='b',s=s//4, marker = '.')
ax4.set_title("Interpolation on fake-Padua points")
ax4.set_zlabel("MSE = %5.5f"%(np.linalg.norm(f_fake.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval), fontsize=11)
ax4.set_zlim([0,1])
plt.tight_layout()
plt.savefig("images/Runge2D.png")
def errors(N):
pdx, pdy = pdpts(N)
x, y = equisp_unisolv(N)
x_f, y_f = S(x,y)
n_eval = N+10
X, Y = np.meshgrid(np.linspace(-1,1,n_eval),np.linspace(-1,1, n_eval))
X, Y = X.flatten(), Y.flatten()
f_true = f(X,Y).reshape(-1,1)
X_f, Y_f = S(X,Y)
f_eq = wamfit(N, np.array([x, y]).T, np.array([X,Y]).T, f(x,y).reshape(-1,1))[1]
err_eq = np.linalg.norm(f_eq.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval
f_fake = wamfit(N, np.array([x_f, y_f]).T, np.array([X_f,Y_f]).T, f(x,y).reshape(-1,1))[1]
err_fake = np.linalg.norm(f_fake.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval
f_PD = wamfit(N, np.array([pdx, pdy]).T, np.array([X,Y]).T, f(pdx,pdy).reshape(-1,1))[1]
err_PD = np.linalg.norm(f_PD.reshape(n_eval,-1) - f_true.reshape(n_eval,-1))**2/n_eval
return err_eq, err_PD, err_fake
Nrange = list(range(5,31))
Eq, PD, Fake = [], [], []
for n in Nrange:
if n%5==0: print(n)
eq, pd, fake = errors(n)
Eq.append(eq)
PD.append(pd)
Fake.append(fake)
plt.semilogy(Nrange, Eq, '-xg')
plt.semilogy(Nrange, PD, '-*b')
plt.semilogy(Nrange, Fake, '-or')
plt.xlabel("n")
plt.ylabel("MSE")
plt.legend(['Equispaced','Padua','Fake Padua'])
plt.grid()
plt.savefig("images/Runge2D_convergence.png")
###Output
_____no_output_____ |
video_captioning_practice_inceptionv3.ipynb | ###Markdown
Video Captioning
###Code
#Import important libraries
import numpy as np
import pandas as pd
import os
import cv2
import matplotlib.pyplot as plt
import math
from preprocess_videos import load_df, preprocess_df, get_final_list, extract_frames, select_videos, load_video_frames, extract_features, extract_features_resnet50, extract_features_inception_v3, view_frames
from enc_dec_models import basic_enc_dec
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
#Import captions
df = load_df("dataset/msvd_videos/video_corpus.csv")
df.head()
data = preprocess_df(df)
data.head()
videos_final = get_final_list("dataset/msvd_videos/msvd_videos", data)
len(videos_final)
#Select single caption for each video
captions = {}
for index, row in data.iterrows():
if row['Name'] in captions or row['Name'] not in videos_final:
continue
else:
captions[row['Name']] = row['Description']
#Not needed
#df = pd.DataFrame(captions.items(), columns = ['Name', 'Description'])
#df.head()
#Perform once
#extract_frames(videos_final, 'dataset/msvd_videos/msvd_videos/', 'dataset/msvd_videos/img/')
videos_selected = select_videos(videos_final, 'dataset/msvd_videos/frames/', 15)
len(videos_selected)
descriptions = []
for vid in videos_selected:
descriptions.append(captions[vid])
len(descriptions)
###Output
_____no_output_____
###Markdown
Extracting features
###Code
frames_path = 'dataset/msvd_videos/frames/'
data = extract_features_inception_v3(frames_path, videos_selected) #Loads X of shape (1652, 15, 2048) with InceptionV3 features
data.shape
#Save array
#from numpy import save
#save('video_features_vgg16.npy', X)
# load array
#from numpy import load
#data = load('video_features_vgg16.npy')
###Output
_____no_output_____
###Markdown
Coding
###Code
data.shape
view_frames('dataset/msvd_videos/frames/mv89psg6zh4_33_46')
#Let's use first 1200 videos for training.
#train = data[:1200]
train = data
train.shape
#The data array contains the extracted video features.
#videos_selected contains the video names & descriptions contains the corresponding caption of each video.
#Adding 'ssss' and 'eeee' to the descriptions.
for i in range(len(descriptions)):
if descriptions[i][-1] == '.':
descriptions[i] = 'ssss ' + descriptions[i][:-1] + ' eeee'
else:
descriptions[i] = 'ssss ' + descriptions[i] + ' eeee'
desc_len = [len(s.split(' ')) for s in descriptions]
max(desc_len) #Length of the longest caption; informs the choice of max_length below
vocab_size = 2400
embedding_dim = 16
max_length = 20
trunc_type = 'post'
padding_type = 'post'
oov_tok = "<oov>"
#Using Tokenizer to preprocess the descriptions.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words = vocab_size, oov_token = oov_tok)
tokenizer.fit_on_texts(descriptions)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(descriptions)
padded = pad_sequences(sequences, maxlen = max_length, truncating = trunc_type, padding = padding_type)
#Let's look at padded sequences.
padded[:10]
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
# returns train, inference_encoder and inference_decoder models
def define_updated(n_input, n_output, n_units):
# define training encoder
encoder_inputs = Input(shape=(None, n_input))
encoder = LSTM(n_units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]
# define training decoder
decoder_inputs = Input(shape=(None, n_output))
embedding = Embedding(10000, 64)
decoder_lstm1 = LSTM(n_units, return_sequences=True, return_state=True)
decoder_lstm2 = LSTM(n_units, return_sequences=True, return_state=True)
temp = embedding(decoder_inputs)
temp, _, _ = decoder_lstm1(temp, initial_state=encoder_states)
decoder_outputs, _, _ = decoder_lstm2(temp, initial_state=encoder_states)
decoder_dense = Dense(n_output, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# define inference encoder
encoder_model = Model(encoder_inputs, encoder_states)
# define inference decoder
decoder_state_input_h = Input(shape=(n_units,))
decoder_state_input_c = Input(shape=(n_units,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
temp = embedding(decoder_inputs)
temp, _, _ = decoder_lstm1(temp, initial_state=decoder_states_inputs)
decoder_outputs, state_h, state_c = decoder_lstm2(temp, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
# return all models
return model, encoder_model, decoder_model
##Original##
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, LSTM, Dense
# returns train, inference_encoder and inference_decoder models
def basic_enc_dec(n_input, n_output, n_units):
# define training encoder
encoder_inputs = Input(shape=(None, n_input))
encoder = LSTM(n_units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
encoder_states = [state_h, state_c]
# define training decoder
decoder_inputs = Input(shape=(None, n_output))
decoder_lstm = LSTM(n_units, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_dense = Dense(n_output, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# define inference encoder
encoder_model = Model(encoder_inputs, encoder_states)
# define inference decoder
decoder_state_input_h = Input(shape=(n_units,))
decoder_state_input_c = Input(shape=(n_units,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
# return all models
return model, encoder_model, decoder_model
model, enc, dec = basic_enc_dec(2048, vocab_size, max_length)
#model, enc, dec = stateless_enc_dec(2048, vocab_size, max_length)
model.summary()
x2 = np.hstack([np.zeros((1652, 1)), np.array(padded)])
x2 = x2[:, :-1]
#This is the output to be predicted.
padded[0]
#This is the secondary input for decoder during training.
x2[0]
x2.shape
#Convert to 1652x42x1
#x2 = x2.reshape(x2.shape + (1, ))
#out = padded.reshape(padded.shape + (1, ))
#Convert to 1652x42x1000
from keras.utils.np_utils import to_categorical
x2_in = to_categorical(x2, num_classes = vocab_size)
outputs = to_categorical(padded, num_classes = vocab_size)
print(x2_in.shape, outputs.shape)
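#Note (added): these one-hot arrays are dense, shape (1652, 20, 2400) each; for a
#larger corpus, integer targets with sparse_categorical_crossentropy would be a
#more memory-friendly choice (hedged suggestion, not applied here).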
from tensorflow.keras import callbacks
from tensorflow.keras import optimizers
lr_schedule = callbacks.LearningRateScheduler(lambda epoch: 1e-5 * 10**(epoch / 20))
opt = optimizers.RMSprop(lr=1e-5)
#Approximating best lr
#model.compile(optimizer=opt, loss='categorical_crossentropy')
#history = model.fit([train, x2_in[:1200]], outputs[:1200], validation_split=0.1, epochs = 100, callbacks=[lr_schedule])
#Plotting graph to select best lr
#import matplotlib.pyplot as plt
#plt.semilogx(history.history["lr"], history.history["loss"])
#plt.axis([1e-5, 1, 1, 10])
#plt.plot()
#fixed learning rate
opt = optimizers.RMSprop(learning_rate=1e-3)
model.compile(optimizer=opt, loss='categorical_crossentropy')
history = model.fit([train, x2_in], outputs, validation_split=0.1, epochs = 400)
print(history.history.keys())
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# generate target given source sequence
reverse_word_map = dict(map(reversed, tokenizer.word_index.items()))
# Function takes a tokenized sentence and returns the words
def sequence_to_text(list_of_indices):
# Looking up words in dictionary
words = [reverse_word_map.get(word) for word in list_of_indices if word]
return(words)
def predict_sequence(infenc, infdec, source, n_steps, cardinality):
# encode
state = infenc.predict(source)
# start of sequence input
target_seq = np.array([0.0 for _ in range(cardinality)]).reshape(1, 1, cardinality)
# collect predictions
output = list()
for t in range(n_steps):
# predict next char
yhat, h, c = infdec.predict([target_seq] + state)
# store prediction
output.append(yhat[0, 0, :])
# update state
state = [h, c]
# update target sequence
target_seq = yhat
out = np.array(output).argmax(axis = 1)
return ' '.join(sequence_to_text(out))
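# Note (added): this is greedy decoding; the full softmax vector is fed back as
# the next decoder input. A common variant (not implemented here) feeds the
# argmax back as a one-hot vector, or uses beam search for better captions.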
train[0:1].shape
for i in range(20):
print("Predicted:", predict_sequence(enc, dec, train[i:i+1], max_length, vocab_size))
print("Actual:", descriptions[i])
print()
idx = 50
view_frames('dataset/msvd_videos/frames/'+videos_selected[idx])
print("Predicted:", predict_sequence(enc, dec, train[idx:idx+1], max_length, vocab_size))
print("Actual:", descriptions[idx])
print()
predictions = []
for i in range(1652):
predictions.append(predict_sequence(enc, dec, train[i:i+1], max_length, vocab_size).split())
output = []
for sentence in descriptions[:1652]:
output.append([sentence.split()])
import nltk
nltk.translate.bleu_score.corpus_bleu(output, predictions)
model.save("video_model.h5")
enc.save("video_enc.h5")
dec.save("video_dec.h5")
model.save("video_enc_dec_inceptionv3")
enc.save("video_enc_inceptionv3")
dec.save("video_dec_inceptionv3")
type(reverse_word_map)
import json
with open('reverse_word_map.json', 'w') as f:
json.dump(reverse_word_map, f)
import pickle
# saving
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)
# loading
with open('tokenizer.pickle', 'rb') as handle:
tok = pickle.load(handle)
type(tokenizer)
type(tok)
from tensorflow.keras.models import load_model
loaded_enc = load_model("video_enc.h5")
load_enc = load_model("video_enc_inceptionv3")
loaded_enc.summary()
load_enc.summary()
loaded_dec = load_model("video_dec.h5")
idx = 50
view_frames('dataset/msvd_videos/frames/'+videos_selected[idx])
print("Predicted:", predict_sequence(loaded_enc, loaded_dec, train[idx:idx+1], max_length, vocab_size))
print("Actual:", descriptions[idx])
print()
###Output
_____no_output_____ |
notebooks/enzyme_GAN/Unrolled GAN demo.ipynb | ###Markdown
Unrolled generative adversarial networks on a toy dataset This notebook demos a simple implementation of unrolled generative adversarial networks on a 2d mixture of Gaussians dataset. See the [paper](https://arxiv.org/abs/1611.02163) for a better description of the technique, experiments, results, and other good stuff. Note that the architecture and hyperparameters used in this notebook are not identical to the ones in the paper. Motivation The GAN learning problem is to find the optimal parameters $\theta_G^*$ for a generator function $G\left( z; \theta_G\right)$ in a minimax objective, $$\begin{align} \theta_G^* &= \underset{\theta_G}{\text{argmin}} \underset{\theta_D}{\max} f\left(\theta_G, \theta_D\right) \\&= \underset{\theta_G}{\text{argmin}} \;f\left(\theta_G, \theta_D^*\left(\theta_G\right)\right)\\\theta_D^*\left(\theta_G\right) &= \underset{\theta_D}{\max} \;f\left(\theta_G, \theta_D\right),\end{align}$$where the saddle objective $f$ is the standard GAN loss:$$f\left(\theta_G, \theta_D\right) = \mathbb{E}_{x\sim p_{data}}\left[\mathrm{log}\left(D\left(x; \theta_D\right)\right)\right] + \mathbb{E}_{z \sim \mathcal{N}(0,I)}\left[\mathrm{log}\left(1 - D\left(G\left(z; \theta_G\right); \theta_D\right)\right)\right].$$In unrolled GANs, we approximate $\theta_D^*\left(\theta_G\right)$ using a few steps of gradient ascent:$$\theta_D^*\left(\theta_G\right) \approx \hat{\theta}_D\left(\theta_G\right) \equiv\text{ a few steps of SGD maximizing}\;f\left(\theta_G, \theta_D\right).$$We can then compute the update for the generator parameters, $\theta_G$, by computing the gradient of the saddle objective with respect to $\theta_G$ and the optimized discriminator parameters, $\hat{\theta}_D$:$$\frac{d}{d \theta_G} f\left(\theta_G, \hat{\theta}_D\left(\theta_G\right)\right).$$ Implementation details To backpropagate through the optimization process, we need to create a symbolic computational graph that includes all the operations from the initial weights to the optimized weights. TensorFlow's built-in optimizers use custom C++ code for efficiency, and do not construct a symbolic graph that is differentiable. For this notebook, we use the optimization routines from `keras` to compute updates. Next, we use `tf.contrib.graph_editor.graph_replace` to build a copy of the graph containing the mapping from initial weights to updated weights after one optimization iteration, but replacing the initial weights with the last iteration's weights: This yields a new graph that allows us to backprop from $\theta_D^2$ back to $\theta_D^0$. We can then plug $\theta_D^2$ into the loss function to get the final objective that the generator optimizes. Using the magic of `graph_replace` we can write the unrolled optimization procedure in just a few lines:
```python
# update_dict contains a dictionary mapping from variables (\theta_D^0) to
# their values after one step of optimization (\theta_D^1)
cur_update_dict = update_dict
for i in xrange(params['unrolling_steps'] - 1):
    # Compute variable updates given the previous iteration's updated variable
    cur_update_dict = graph_replace(update_dict, cur_update_dict)
# Final unrolled loss uses the parameters at the last time step
unrolled_loss = graph_replace(loss, cur_update_dict)
```
Note there are many other ways of implementing unrolled optimization that don't use graph rewriting. For example, if we created a function that takes weights as inputs and returns the updated weights, we could just iteratively call that function.
###Code
%pylab inline
from collections import OrderedDict
import tensorflow as tf
ds = tf.contrib.distributions
slim = tf.contrib.slim
from keras.optimizers import Adam
try:
from moviepy.video.io.bindings import mplfig_to_npimage
import moviepy.editor as mpy
generate_movie = True
except:
print("Warning: moviepy not found.")
generate_movie = False
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
`graph_replace` is broken in TensorFlow 1.0 (see this [issue](https://github.com/tensorflow/tensorflow/issues/9125)). We get around this issue with an ugly hack that removes the problematic attribute from all ops in the graph on every call to `graph_replace`.
###Code
_graph_replace = tf.contrib.graph_editor.graph_replace
def remove_original_op_attributes(graph):
"""Remove _original_op attribute from all operations in a graph."""
for op in graph.get_operations():
op._original_op = None
def graph_replace(*args, **kwargs):
"""Monkey patch graph_replace so that it works with TF 1.0"""
remove_original_op_attributes(tf.get_default_graph())
return _graph_replace(*args, **kwargs)
###Output
_____no_output_____
###Markdown
Utility functions
###Code
def extract_update_dict(update_ops):
"""Extract variables and their new values from Assign and AssignAdd ops.
Args:
update_ops: list of Assign and AssignAdd ops, typically computed using Keras' opt.get_updates()
Returns:
dict mapping from variable values to their updated value
"""
name_to_var = {v.name: v for v in tf.global_variables()}
updates = OrderedDict()
for update in update_ops:
var_name = update.op.inputs[0].name
var = name_to_var[var_name]
value = update.op.inputs[1]
if update.op.type == 'Assign':
updates[var.value()] = value
elif update.op.type == 'AssignAdd':
updates[var.value()] = var + value
else:
            raise ValueError("Update op type (%s) must be of type Assign or AssignAdd"%update.op.type)
return updates
###Output
_____no_output_____
###Markdown
Data creation
###Code
def sample_mog(batch_size, n_mixture=8, std=0.01, radius=1.0):
thetas = np.linspace(0, 2 * np.pi, n_mixture)
xs, ys = radius * np.sin(thetas), radius * np.cos(thetas)
cat = ds.Categorical(tf.zeros(n_mixture))
comps = [ds.MultivariateNormalDiag([xi, yi], [std, std]) for xi, yi in zip(xs.ravel(), ys.ravel())]
data = ds.Mixture(cat, comps)
return data.sample(batch_size)
###Output
_____no_output_____
###Markdown
Generator and discriminator architectures
###Code
def generator(z, output_dim=2, n_hidden=128, n_layer=2):
with tf.variable_scope("generator"):
h = slim.stack(z, slim.fully_connected, [n_hidden] * n_layer, activation_fn=tf.nn.tanh)
x = slim.fully_connected(h, output_dim, activation_fn=None)
return x
def discriminator(x, n_hidden=128, n_layer=2, reuse=False):
with tf.variable_scope("discriminator", reuse=reuse):
h = x
h = slim.stack(h, slim.fully_connected, [n_hidden] * n_layer, activation_fn=tf.nn.tanh)
log_d = slim.fully_connected(h, 1, activation_fn=None)
return log_d
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
params = dict(
batch_size=512,
disc_learning_rate=1e-4,
gen_learning_rate=1e-3,
beta1=0.5,
epsilon=1e-8,
max_iter=25000,
viz_every=5000,
z_dim=256,
x_dim=2,
unrolling_steps=5,
)
###Output
_____no_output_____
###Markdown
Construct model and training ops
###Code
tf.reset_default_graph()
data = sample_mog(params['batch_size'])
noise = ds.Normal(tf.zeros(params['z_dim']),
tf.ones(params['z_dim'])).sample(params['batch_size'])
# Construct generator and discriminator nets
with slim.arg_scope([slim.fully_connected], weights_initializer=tf.orthogonal_initializer(gain=1.4)):
samples = generator(noise, output_dim=params['x_dim'])
real_score = discriminator(data)
fake_score = discriminator(samples, reuse=True)
# Saddle objective
loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=real_score, labels=tf.ones_like(real_score)) +
tf.nn.sigmoid_cross_entropy_with_logits(logits=fake_score, labels=tf.zeros_like(fake_score)))
gen_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "generator")
disc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "discriminator")
# Vanilla discriminator update
d_opt = Adam(lr=params['disc_learning_rate'], beta_1=params['beta1'], epsilon=params['epsilon'])
# d_opt = tf.train.AdamOptimizer(params['disc_learning_rate'], beta1=params['beta1'], epsilon=params['epsilon'])
updates = d_opt.get_updates(disc_vars, [], loss)
d_train_op = tf.group(*updates, name="d_train_op")
# Unroll optimization of the discriminator
if params['unrolling_steps'] > 0:
# Get dictionary mapping from variables to their update value after one optimization step
update_dict = extract_update_dict(updates)
cur_update_dict = update_dict
for i in range(params['unrolling_steps'] - 1):
# Compute variable updates given the previous iteration's updated variable
cur_update_dict = graph_replace(update_dict, cur_update_dict)
# Final unrolled loss uses the parameters at the last time step
unrolled_loss = graph_replace(loss, cur_update_dict)
else:
unrolled_loss = loss
# Optimize the generator on the unrolled loss
g_train_opt = tf.train.AdamOptimizer(params['gen_learning_rate'], beta1=params['beta1'], epsilon=params['epsilon'])
g_train_op = g_train_opt.minimize(-unrolled_loss, var_list=gen_vars)
###Output
WARNING:tensorflow:VARIABLES collection name is deprecated, please use GLOBAL_VARIABLES instead; VARIABLES will be removed after 2017-03-02.
###Markdown
Train!
###Code
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
from tqdm import tqdm
xmax = 3
fs = []
frames = []
np_samples = []
n_batches_viz = 10
viz_every = params['viz_every']
for i in tqdm(range(params['max_iter'])):
f, _, _ = sess.run([[loss, unrolled_loss], g_train_op, d_train_op])
fs.append(f)
if i % viz_every == 0:
np_samples.append(np.vstack([sess.run(samples) for _ in range(n_batches_viz)]))
xx, yy = sess.run([samples, data])
fig = figure(figsize=(5,5))
scatter(xx[:, 0], xx[:, 1], edgecolor='none')
scatter(yy[:, 0], yy[:, 1], c='g', edgecolor='none')
axis('off')
if generate_movie:
frames.append(mplfig_to_npimage(fig))
show()
###Output
0%| | 0/25000 [00:00<?, ?it/s]
###Markdown
Visualize results
###Code
import seaborn as sns
np_samples_ = np_samples[::1]
cols = len(np_samples_)
bg_color = sns.color_palette('Greens', n_colors=256)[0]
figure(figsize=(2*cols, 2))
for i, samps in enumerate(np_samples_):
if i == 0:
ax = subplot(1,cols,1)
else:
subplot(1,cols,i+1, sharex=ax, sharey=ax)
ax2 = sns.kdeplot(samps[:, 0], samps[:, 1], shade=True, cmap='Greens', n_levels=20, clip=[[-xmax,xmax]]*2)
ax2.set_facecolor(bg_color)
xticks([]); yticks([])
title('step %d'%(i*viz_every))
ax.set_ylabel('%d unrolling steps'%params['unrolling_steps'])
gcf().tight_layout()
fs = np.array(fs)
plot(fs)
legend(('loss', 'unrolled loss'))
plot(fs[:, 0] - fs[:, 1])
legend('optimized loss - initial loss')
#clip = mpy.ImageSequenceClip(frames[::], fps=30)
#clip.ipython_display()
###Output
_____no_output_____ |
examples/networkx/bikeshare_graph_model_two.ipynb | ###Markdown
This model constructs a networkx graph from the data using the following nodes and edges.

Nodes:
- Station
- Bike

Edges:
- TripFrom (from Station to Bike)
- TripTo (Bike to Station)
###Code
import pandas as pd
import networkx as nx
from pprint import pprint
from graphgen import create_graph
trips_filename = '../data/201508_trip_data.csv'
stations_filename = '../data/201508_station_data.csv'
trips_df = pd.read_csv(trips_filename)
stations_df = pd.read_csv(stations_filename)
# if columns have spaces in their names we need to replace them with underscore
# fix_columns(trips_df)
# fix_columns(stations_df)
print(trips_df.columns)
print(stations_df.columns)
station_mapper = {
'nodes': [
{
'type' : 'Station',
'key' : [
{'name': 'id', 'raw': 'station_id'}
],
'attributes': [
{'name': 'id', 'raw': 'station_id'},
{'name': 'name', 'raw': 'name'},
{'name': 'lat', 'raw': 'lat'},
{'name': 'long', 'raw': 'long'},
{'name': 'landmark', 'raw': 'landmark'}
]
},
]
}
bike_mapper = {
'nodes': [
{
'type' : 'Bike',
'key' : [
{'name': 'num', 'raw': 'Bike #'}
],
'attributes': [
{'name': 'num', 'raw': 'Bike #'}
]
},
]
}
edges_mapper = {
'edges': [
{
'type' : 'TripFrom',
'from' : {
'type': 'Station',
'key' : [
{'name': 'id', 'raw': 'Start Terminal'}
]
},
'to' : {
'type': 'Bike',
'key' : [
{'name': 'num', 'raw': 'Bike #'}
]
},
'attributes': [
{'name': 'trip_id', 'raw': 'Trip ID'},
{'name': 'date', 'raw': 'Start Date'}
]
},
{
'type' : 'TripTo',
'from' : {
'type': 'Bike',
'key' : [
{'name': 'num', 'raw': 'Bike #'}
]
},
'to' : {
'type': 'Station',
'key' : [
{'name': 'id', 'raw': 'End Terminal'}
]
},
'attributes': [
{'name': 'trip_id', 'raw': 'Trip ID'},
{'name': 'date', 'raw': 'End Date'}
]
}
]
}
# construct a bidirectional multi-edge graph object
g = nx.MultiDiGraph()
%time g = create_graph(g, graph_mapper = station_mapper, \
data_provider = stations_df, update=False)
%time g = create_graph(g, graph_mapper = bike_mapper, \
data_provider = trips_df, update=False)
%time g = create_graph(g, graph_mapper = edges_mapper, \
data_provider = trips_df)
# print(g.get_edge_data('Station_50', 'Bike_288'))
print('nodes:', g.number_of_nodes(), '- edges:', g.number_of_edges())
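# Illustrative sanity check: print a few edges with their data dictionaries to
# verify that the attributes declared in edges_mapper ('trip_id', 'date') were
# attached. Uses only standard networkx APIs.
for u, v, edge_data in list(g.edges(data=True))[:3]:
    print(u, v, edge_data)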
###Output
_____no_output_____ |
recursion/rev_string.ipynb | ###Markdown
Reversing a String

The goal in this notebook will be to get practice with a problem that is frequently solved by recursion: reversing a string. Note that Python has a built-in way to do this, but the goal here is to avoid that and understand how it can be done using recursion instead.
###Code
# Version with debugging print statements that trace each recursive call
def reverse_string(input):
"""
Return reversed input string
Examples:
reverse_string("abc") returns "cba"
Args:
input(str): string to be reversed
Returns:
a string that is the reverse of input
"""
if len(input) == 0:
return ""
else:
first_char = input[0]
the_rest = slice(1, None)
print("\nfirst_char",first_char)
print("the_rest",the_rest)
sub_string = input[the_rest]
print("sub_string",sub_string)
reversed_substring = reverse_string(sub_string)
print("reversed_substring",reversed_substring)
return reversed_substring + first_char
print ("Pass" if ("cba" == reverse_string("abc")) else "Fail")
# Clean version without the debugging prints
def reverse_string(input):
"""
Return reversed input string
Examples:
reverse_string("abc") returns "cba"
Args:
input(str): string to be reversed
Returns:
a string that is the reverse of input
"""
if len(input) == 0:
return ""
else:
first_char = input[0]
the_rest = slice(1, None)
sub_string = input[the_rest]
reversed_substring = reverse_string(sub_string)
return reversed_substring + first_char
print ("Pass" if ("cba" == reverse_string("abc")) else "Fail")
# Test Cases
print ("Pass" if ("" == reverse_string("")) else "Fail")
print ("Pass" if ("cba" == reverse_string("abc")) else "Fail")
###Output
Pass
Pass
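###Markdown
For comparison, outside of this recursion exercise the same result is usually obtained with Python's built-in slice syntax, which reverses a sequence in one step.
###Code
# Built-in alternative: slicing with a step of -1
print("Pass" if ("cba" == "abc"[::-1]) else "Fail")
###Output
Pass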
|
_build/jupyter_execute/curriculum-notebooks/Health/CALM/CALM-moving-out-6.ipynb | ###Markdown
 CALM - Moving Out 6

Part 6 - Food and Supplies

📙In this section we will consider food and household supplies that you will need. You will be using [dataframes from a Python library called pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html#dataframe). These dataframes are like spreadsheets, and the code will look a little complicated, but it shouldn't be too bad.

Meal Plan

Before we get into dataframes, though, you need to create a meal plan. With the [Canadian Food Guide](https://food-guide.canada.ca/en/food-guide-snapshot/) in mind, complete a 7-day meal plan considering nutritionally balanced choices at each meal. You can choose to eat out only twice on this menu.

You will then use this to decide your grocery needs for one week.

Replace the words "meal" in the cell below with the meals you plan to eat, then run the cell to store your plan.
###Code
%%writefile moving_out_8.txt
✏️
|Day|Breakfast|Lunch|Dinner|
|-|-|-|-|
|Monday| meal | meal | meal |
|Tuesday| meal | meal | meal |
|Wednesday| meal | meal | meal |
|Thursday| meal | meal | meal |
|Friday| meal | meal | meal |
|Saturday| meal | meal | meal |
|Sunday| meal | meal | meal |
###Output
_____no_output_____
###Markdown
Food Shopping

📙From your meal plan make a shopping list of food needed to prepare three meals a day for one week. Research the price of these food items by going to grocery store websites, using grocery fliers, going to the grocery store, or reviewing receipts or bills with your family. Buying items in bulk is usually more economical in the long run, but for this exercise you only require food for one week so choose the smallest quantities possible.

`Run` the following cell to generate a data table that you can then edit. Double-click on the "nan" values to put in your information. Use the "Add Row" and "Remove Row" buttons if necessary.
###Code
import pandas as pd
import qgrid
foodItemList = ['Vegetables','Fruit','Protein','Whole Grains','Snacks','Restaurant Meal 1','Restaurant Meal 2']
foodColumns = ['Size','Quantity','Price']
foodIndex = range(1,len(foodItemList)+1)
dfFood = pd.DataFrame(index=pd.Series(foodIndex), columns=pd.Series(foodColumns))
dfFood.insert(0,'Item(s)',foodItemList,True)
dfFood['Quantity'] = 1
dfFood['Price'] = 1
dfFoodWidget = qgrid.QgridWidget(df=dfFood, show_toolbar=True)
dfFoodWidget
###Output
_____no_output_____
###Markdown
📙After you have added data to the table above, `Run` the next cell to calculate your food costs for the month. It adds up weekly food costs and multiplies by 4.3 weeks per month.
###Code
foodShoppingList = dfFoodWidget.get_changed_df()
foodPrices = pd.to_numeric(foodShoppingList['Price'])
weeklyFoodCost = foodPrices.sum()
monthlyFoodCost = weeklyFoodCost * 4.3
%store monthlyFoodCost
print('That is about $' + str(weeklyFoodCost) + ' per week for food.')
print('Your food for the month will cost about $' + str('{:.2f}'.format(monthlyFoodCost)) + '.')
###Output
_____no_output_____
###Markdown
Household Supplies and Personal Items

📙The following is a typical list of household and personal items. Add any additional items you feel you need and delete items you don’t need. Look for smaller quantities with a **one-month** budget in mind, or adjust pricing if buying in bulk.

`Run` the next cell to generate a data table that you can then edit.
###Code
householdItemList = ['Toilet Paper','Tissues','Paper Towel',
'Dish Soap','Laundry Detergent','Cleaners',
'Plastic Wrap','Foil','Garbage/Recycling Bags',
'Condiments','Coffee/Tea','Flour','Sugar',
'Shampoo','Conditioner','Soap','Deodorant',
'Toothpaste','Mouthwash','Hair Products','Toothbrush',
'Makeup','Cotton Balls','Shaving Gel','Razors',
]
householdColumns = ['Size','Quantity','Price']
householdIndex = range(1,len(householdItemList)+1)
dfHousehold = pd.DataFrame(index=pd.Series(householdIndex), columns=pd.Series(householdColumns))
dfHousehold.insert(0,'Item(s)',householdItemList,True)
dfHousehold['Quantity'] = 1
dfHousehold['Price'] = 1
dfHouseholdWidget = qgrid.QgridWidget(df=dfHousehold, show_toolbar=True)
dfHouseholdWidget
###Output
_____no_output_____
###Markdown
📙After you have added data to the above data table, `Run` the next cell to calculate your monthly household item costs.
###Code
householdShoppingList = dfHouseholdWidget.get_changed_df()
householdPrices = pd.to_numeric(householdShoppingList['Price'])
monthlyHouseholdCost = householdPrices.sum()
%store monthlyHouseholdCost
print('That is about $' + str(monthlyHouseholdCost) + ' per month for household items.')
###Output
_____no_output_____
###Markdown
Furniture and Equipment

📙Think about items you need for your place. How comfortable do you want to be? Are there items you have already been collecting or that your family is saving for you? Discuss which items they may be willing to give you, decide which items you can do without, which items a roommate may have, and which items you will need to purchase. Although it is nice to have new things, remember household items are often a bargain at garage sales, dollar stores, and thrift stores.

`Run` the next cell to generate a data table that you can edit.
###Code
fneItemList = ['Pots and Pans','Glasses','Plates','Bowls',
'Cutlery','Knives','Oven Mitts','Towels','Cloths',
'Toaster','Garbage Cans','Kettle','Table','Kitchen Chairs',
'Broom and Dustpan','Vacuum Cleaner','Clock',
'Bath Towels','Hand Towels','Bath Mat',
'Toilet Brush','Plunger',
'Bed','Dresser','Night Stand','Sheets','Blankets','Pillows',
'Lamps','TV','Electronics','Coffee Table','Couch','Chairs',
]
fneColumns = ['Room','Quantity','Price']
fneIndex = range(1,len(fneItemList)+1)
dfFne = pd.DataFrame(index=pd.Series(fneIndex), columns=pd.Series(fneColumns))
dfFne.insert(0,'Item(s)',fneItemList,True)
dfFne['Quantity'] = 1
dfFne['Price'] = 1
dfFneWidget = qgrid.QgridWidget(df=dfFne, show_toolbar=True)
dfFneWidget
###Output
_____no_output_____
###Markdown
📙Next `Run` the following cell to add up your furniture and equipment costs.
###Code
fneList = dfFneWidget.get_changed_df()
fnePrices = pd.to_numeric(fneList['Price'])
fneCost = fnePrices.sum()
%store fneCost
print('That is about $' + str(fneCost) + ' for furniture and equipment items.')
###Output
_____no_output_____
###Markdown
Clothing

📙When calculating the cost of clothing for yourself, consider the type of work you plan to be doing and how important clothing is to you. Consider how many of each item of clothing you will purchase in a year, and multiply this by the cost per item. Be realistic.

`Run` the next cell to generate an editable data table.
###Code
clothingItemList = ['Dress Pants','Skirts','Shirts','Suits/Jackets/Dresses',  # comma added: without it Python silently concatenates adjacent strings
'T-Shirts/Tops','Jeans/Pants','Shorts',
'Dress Shoes','Casual Shoes','Running Shoes',
'Outdoor Coats','Boots','Sports Clothing',
'Pajamas','Underwear','Socks','Swimsuits'
]
clothingColumns = ['Quantity Required','Cost per Item']
clothingIndex = range(1,len(clothingItemList)+1)
dfClothing = pd.DataFrame(index=pd.Series(clothingIndex), columns=pd.Series(clothingColumns))
dfClothing.insert(0,'Item(s)',clothingItemList,True)
dfClothing['Quantity Required'] = 1
dfClothing['Cost per Item'] = 1
dfClothingWidget = qgrid.QgridWidget(df=dfClothing, show_toolbar=True)
dfClothingWidget
###Output
_____no_output_____
###Markdown
📙Once you have added data to the above table, `Run` the next cell to add up your clothing costs.
###Code
clothingList = dfClothingWidget.get_changed_df()
clothingQuantities = pd.to_numeric(clothingList['Quantity Required'])
clothingPrices = pd.to_numeric(clothingList['Cost per Item'])
clothingList['Total Cost'] = clothingQuantities * clothingPrices
clothingCost = clothingList['Total Cost'].sum()
monthlyClothingCost = clothingCost / 12
%store monthlyClothingCost
print('That is $' + str('{:.2f}'.format(clothingCost)) + ' per year, or about $' + str('{:.2f}'.format(monthlyClothingCost)) + ' per month for clothing.')
clothingList # this displays the table with total cost calculations
###Output
_____no_output_____
###Markdown
Health Care

📙Most people living and working in Alberta have access to hospital and medical services under the [Alberta Health Care Insurance Plan (AHCIP)](https://www.alberta.ca/ahcip.aspx) paid for by the government. Depending on where you work, your employer may offer additional benefit packages such as Extended Health Care that cover a portion of medical and dental expenses. If you do not have health benefits from your employer you will have to pay for medications, dental visits, and vision care. Allow money in your budget for prescriptions and over-the-counter medications. Budget for the dentist and optometrist.

One visit to the dentist including a check-up, x-rays, and teeth cleaning is approximately $330. You should see your dentist yearly.

A visit to the optometrist is approximately $120. You should normally see your optometrist once every 2 years, or once a year if you’re wearing contact lenses.

`Run` the next cell to display a data table that you can edit with your expected health costs.
###Code
healthItems = [
'Pain Relievers','Bandages','Cough Medicine',
'Prescriptions','Dental Checkup',
'Optometrist','Glasses','Contacts','Contact Solution',
'Physiotherapy','Massage'
]
healthColumns = ['Cost Per Year']
healthIndex = range(1,len(healthItems)+1)
dfHealth = pd.DataFrame(index=pd.Series(healthIndex), columns=pd.Series(healthColumns))
dfHealth.insert(0,'Item or Service',healthItems,True)
dfHealth['Cost Per Year'] = 1
dfHealthWidget = qgrid.QgridWidget(df=dfHealth, show_toolbar=True)
dfHealthWidget
###Output
_____no_output_____
###Markdown
📙`Run` the next cell to add up your health care costs.
###Code
healthList = dfHealthWidget.get_changed_df()
healthCost = pd.to_numeric(healthList['Cost Per Year']).sum()
monthlyHealthCost = healthCost / 12
%store monthlyHealthCost
print('That is $' + str('{:.2f}'.format(healthCost)) + ' per year, or about $' + str('{:.2f}'.format(monthlyHealthCost)) + ' per month for health care.')
###Output
_____no_output_____
###Markdown
📙Once again, `Run` the next cell to check that your answers have been stored.
###Code
print('Monthly food cost:', monthlyFoodCost)
print('Monthly household items cost:', monthlyHouseholdCost)
print('Furniture and equipment cost:', fneCost)
print('Monthly clothing cost:', monthlyClothingCost)
print('Monthly health cost', monthlyHealthCost)
with open('moving_out_8.txt', 'r') as file8:
print(file8.read())
###Output
_____no_output_____ |
_episodes_pynb/01-Starting_with_data_clean.ipynb | ###Markdown
Starting With Data

Working With Pandas DataFrames in Python

We can automate the process of performing data manipulations in Python. It's efficient to spend time building the code to perform these tasks because once it's built, we can use it over and over on different datasets that use a similar format. This makes our methods easily reproducible. We can also easily share our code with colleagues and they can replicate the same analysis.

Starting in the same spot

To help the lesson run smoothly, let's ensure everyone is in the same directory. This should help us avoid path and file name issues. At this time please navigate to the workshop directory. If you are working in IPython Notebook, be sure that you start your notebook in the workshop directory.

A quick aside that there are Python libraries like [OS Library](https://docs.python.org/3/library/os.html) that can work with our directory structure, however, that is not our focus today.

Our Data

For this lesson, we will be using the Portal Teaching data, a subset of the data from Ernst et al. [Long-term monitoring and experimental manipulation of a Chihuahuan Desert ecosystem near Portal, Arizona, USA](http://www.esapubs.org/archive/ecol/E090/118/default.htm)

We will be using files from the [Portal Project Teaching Database](https://figshare.com/articles/Portal_Project_Teaching_Database/1314459). This section will use the `surveys.csv` file that can be downloaded here: [https://ndownloader.figshare.com/files/2292172](https://ndownloader.figshare.com/files/2292172)

We are studying the species and weight of animals caught in plots in our study area. The dataset is stored as a `.csv` file: each row holds information for a single animal, and the columns represent:

| Column | Description |
|------------------|------------------------------------|
| record_id | Unique id for the observation |
| month | month of observation |
| day | day of observation |
| year | year of observation |
| plot_id | ID of a particular plot |
| species_id | 2-letter code |
| sex | sex of animal ("M", "F") |
| hindfoot_length | length of the hindfoot in mm |
| weight | weight of the animal in grams |

The first few rows of our first file look like this:
###Code
record_id,month,day,year,plot_id,species_id,sex,hindfoot_length,weight
1,7,16,1977,2,NL,M,32,
2,7,16,1977,3,NL,M,33,
3,7,16,1977,2,DM,F,37,
4,7,16,1977,7,DM,M,36,
5,7,16,1977,3,DM,M,35,
6,7,16,1977,1,PF,M,14,
7,7,16,1977,2,PE,F,,
8,7,16,1977,1,DM,M,37,
9,7,16,1977,1,DM,F,34,
###Output
_____no_output_____
###Markdown
About Libraries

A library in Python contains a set of tools (called functions) that perform tasks on our data. Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench for use in a project. Once a library is set up, it can be used or called to perform many tasks.

Pandas in Python

One of the best options for working with tabular data in Python is to use the [Python Data Analysis Library](http://pandas.pydata.org/) (a.k.a. Pandas). The Pandas library provides data structures, produces high quality plots with [matplotlib](http://matplotlib.org/) and integrates nicely with other libraries that use [NumPy](http://www.numpy.org/) (which is another Python library) arrays.

Python doesn't load all of the libraries available to it by default. We have to add an `import` statement to our code in order to use library functions. To import a library, we use the syntax `import libraryName`. If we want to give the library a nickname to shorten the command, we can add `as nickNameHere`. An example of importing the pandas library using the common nickname `pd` is below. Each time we call a function that's in a library, we use the syntax `LibraryName.FunctionName`. Adding the library name with a `.` before the function name tells Python where to find the function. In the example above, we have imported Pandas as `pd`. This means we don't have to type out `pandas` each time we call a Pandas function.

Reading CSV Data Using Pandas

We will begin by locating and reading our survey data which are in CSV format. We can use Pandas' `read_csv` function to pull the file directly into a [DataFrame](http://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe).

So What's a DataFrame?

A DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers, floating point values, factors and more) in columns. It is similar to a spreadsheet or an SQL table or the `data.frame` in R. A DataFrame always has an index (0-based). An index refers to the position of an element in the data structure.

Notice when you assign the imported DataFrame to a variable, Python does not produce any output on the screen. We can view the value of the `surveys_df` object by typing its name into the Python command prompt, which prints contents like above.

Exploring Our Species Survey Data

Again, we can use the `type` function to see what kind of thing `surveys_df` is: As expected, it's a DataFrame (or, to use the full name that Python uses to refer to it internally, a `pandas.core.frame.DataFrame`).

What kind of things does `surveys_df` contain? DataFrames have an attribute called `dtypes` that answers this: All the values in a column have the same type. For example, months have type `int64`, which is a kind of integer. Cells in the month column cannot have fractional values, but the weight and hindfoot_length columns can, because they have type `float64`. The `object` type doesn't have a very helpful name, but in this case it represents strings (such as 'M' and 'F' in the case of sex). We'll talk a bit more about what the different formats mean in a different lesson.

Useful Ways to View DataFrame objects in Python

There are many ways to summarize and access the data stored in DataFrames, using attributes and methods provided by the DataFrame object. To access an attribute, use the DataFrame object name followed by the attribute name `df_object.attribute`. Using the DataFrame `surveys_df` and attribute `columns`, an index of all the column names in the DataFrame can be accessed with `surveys_df.columns`.

Methods are called in a similar fashion using the syntax `df_object.method()`. As an example, `surveys_df.head()` gets the first few rows in the DataFrame `surveys_df` using **the `head()` method**. With a method, we can supply extra information in the parens to control behaviour. Let's look at the data using these.

> Challenge - DataFrames
>
> Using our DataFrame `surveys_df`, try out the attributes & methods below to see what they return.
>
> 1. `surveys_df.columns`
> 2. `surveys_df.shape` Take note of the output of `shape` - what format does it return the shape of the DataFrame in?
>    HINT: [More on tuples, here](https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences).
> 3. `surveys_df.head()` Also, what does `surveys_df.head(15)` do?
> 4. `surveys_df.tail()`
{: .challenge}

Calculating Statistics From Data In A Pandas DataFrame

We've read our data into Python. Next, let's perform some quick summary statistics to learn more about the data that we're working with. We might want to know how many animals were collected in each plot, or how many of each species were caught. We can perform summary stats quickly using groups. But first we need to figure out what we want to group by.

Let's begin by exploring our data: Let's get a list of all the species. The `pd.unique` function tells us all of the unique values in the `species_id` column.

> Challenge - Statistics
>
> 1. Create a list of unique plot ID's found in the surveys data. Call it `plot_names`. How many unique plots are there in the data? How many unique species are in the data?
>
> 2. What is the difference between `len(plot_names)` and `surveys_df['plot_id'].nunique()`?

Groups in Pandas

We often want to calculate summary statistics grouped by subsets or attributes within fields of our data. For example, we might want to calculate the average weight of all individuals per plot.

We can calculate basic statistics for all records in a single column using the syntax below: We can also extract one specific metric if we wish: But if we want to summarize by one or more variables, for example sex, we can use **Pandas' `.groupby` method**. Once we've created a groupby DataFrame, we can quickly calculate summary statistics by a group of our choice.
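A minimal sketch of the reading and exploring steps described above (the file location `data/surveys.csv` is an assumption based on the lesson's download instructions):

```python
import pandas as pd

# Read the CSV into a DataFrame (path assumed from the lesson setup)
surveys_df = pd.read_csv("data/surveys.csv")

print(type(surveys_df))                     # <class 'pandas.core.frame.DataFrame'>
print(surveys_df.dtypes)                    # int64, float64 and object columns
print(pd.unique(surveys_df['species_id']))  # the unique 2-letter species codes
surveys_df['weight'].describe()             # stats for a single numeric column
```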
###Code
# Group data by sex
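# A possible fill-in for this exercise cell; assumes surveys_df was created
# earlier with pd.read_csv, as described in the lesson text.
grouped_data = surveys_df.groupby('sex')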
###Output
_____no_output_____
###Markdown
The **pandas function `describe`** will return descriptive stats including: mean, median, max, min, std and count for a particular column in the data. Pandas' `describe` function will only return summary values for columns containing numeric data.
###Code
# summary statistics for all numeric columns by sex
# provide the mean for each numeric column by sex
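# Possible fill-in (assumes grouped_data = surveys_df.groupby('sex') from the
# previous cell; in a notebook only the last expression is displayed):
grouped_data.describe()
grouped_data.mean()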
###Output
_____no_output_____
###Markdown
The `groupby` command is powerful in that it allows us to quickly generate summary stats.

> Challenge - Summary Data
>
> 1. How many recorded individuals are female `F` and how many male `M`
> 2. What happens when you group by two columns using the following syntax and then grab mean values:
>    - `grouped_data2 = surveys_df.groupby(['plot_id','sex'])`
>    - `grouped_data2.mean()`
> 3. Summarize weight values for each plot in your data. HINT: you can use the following syntax to only create summary statistics for one column in your data `by_plot['weight'].describe()`

Quickly Creating Summary Counts in Pandas

Let's next count the number of samples for each species. We can do this in a few ways, but we'll use `groupby` combined with a **`count()` method**.
###Code
# count the number of samples by species
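# Possible fill-in: group by species and count the records in each group
species_counts = surveys_df.groupby('species_id')['record_id'].count()
print(species_counts)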
###Output
_____no_output_____
###Markdown
Or, we can also count just the rows that have the species "DO":

> Challenge - Make a list
>
> What's another way to create a list of species and associated `count` of the records in the data? Hint: you can perform `count`, `min`, etc. functions on groupby DataFrames in the same way you can perform them on regular DataFrames.

Basic Math Functions

If we wanted to, we could perform math on an entire column of our data. For example let's multiply all weight values by 2. A more practical use of this might be to normalize the data according to a mean, area, or some other value calculated from our data.
###Code
# multiply all weight values by 2
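# Possible fill-in: column arithmetic is vectorized over every row
surveys_df['weight'] * 2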
###Output
_____no_output_____
###Markdown
Quick & Easy Plotting Data Using Pandas

We can plot our summary stats using Pandas, too.
###Code
# make sure figures appear inline in Ipython Notebook
# create a quick bar chart
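%matplotlib inline
# Possible fill-in, assuming species_counts from the counting cell above:
species_counts.plot(kind='bar')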
###Output
_____no_output_____
###Markdown
We can also look at how many animals were captured in each plot:

> Challenge - Plots
>
> 1. Create a plot of average weight across all species per plot.
> 2. Create a plot of total males versus total females for the entire dataset.
{: .challenge}

> Summary Plotting Challenge
>
> Create a stacked bar plot, with weight on the Y axis, and the stacked variable being sex. The plot should show total weight by sex for each plot. Some tips are below to help you solve this challenge:
>
> * [For more on Pandas plots, visit this link.](http://pandas.pydata.org/pandas-docs/stable/visualization.html#basic-plotting-plot)
> * You can use the code that follows to create a stacked bar plot but the data to stack need to be in individual columns. Here's a simple example with some data where 'a', 'b', and 'c' are the groups, and 'one' and 'two' are the subgroups.
>
> We can plot the above with
###Code
# plot stacked data so columns 'one' and 'two' are stacked
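# Possible fill-in, following the small example described in the text above
# (the numeric values are illustrative):
d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
my_df = pd.DataFrame(d)
my_df.plot(kind='bar', stacked=True, title="Stacked bar example")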
###Output
_____no_output_____ |
Experiments/HateSpeechDetectionModels/otherExperiments/DNN/Offensive2020-SharedTask/Try and error Experiments/For Hanni RNN Keras offensive 2020 .ipynb | ###Markdown
###Code
# import re
# w1 = "مشكووووووووووووووووووووووووور"
# tf.strings.regex_replace(w1, "r'(.)\1+'", "r'\1'")
# re.sub(r'(.)\1+', r'\1', w1)
import string
import re
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
# Having looked at our data above, we see that the raw text contains HTML break
# tags of the form '<br />'. These tags will not be removed by the default
# standardizer (which doesn't strip HTML). Because of this, we will need to
# create a custom standardization function.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
    stripped_html = tf.strings.regex_replace(stripped_html, "[a-zA-Z]|\d+|[٠١٢٣٤٥٦٧٨٩]", " ")  # chain from the previous step instead of restarting from input_data
stripped_html = tf.strings.regex_replace(stripped_html, "[.،,\\_-”“٪ًَ]", " ")
stripped_html = tf.strings.regex_replace(stripped_html, "[إأآا]", "ا")
stripped_html = tf.strings.regex_replace(stripped_html, "ة", "ه")
# stripped_html=tf.strings.regex_replace(stripped_html, "[(\U0001F600-\U0001F92F|\U0001F300-\U0001F5FF|\U0001F680-\U0001F6FF|\U0001F190-\U0001F1FF|\U00002702-\U000027B0|\U0001F926-\U0001FA9F|\u200d|\u2640-\u2642|\u2600-\u2B55|\u23cf|\u23e9|\u231a|\ufe0f)|\u2069|\u2066]+", " ")
# after_remove_repeating_char = tf.strings.regex_replace(after_remove_emoji, "r'(.)\1+'", "r'\1\1'")
return tf.strings.regex_replace(
stripped_html, "[%s]" % re.escape(string.punctuation), ""
)
# Model constants.
max_features = 20000
embedding_dim = 128
sequence_length = 500
# Now that we have our custom standardization, we can instantiate our text
# vectorization layer. We are using this layer to normalize, split, and map
# strings to integers, so we set our 'output_mode' to 'int'.
# Note that we're using the default split function,
# and the custom standardization defined above.
# We also set an explicit maximum sequence length, since the CNNs later in our
# model won't support ragged sequences.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode="int",
output_sequence_length=sequence_length,
)
# Now that the vocab layer has been created, call `adapt` on a text-only
# dataset to create the vocabulary. You don't have to batch, but for very large
# datasets this means you're not keeping spare copies of the dataset in memory.
# Let's make a text-only dataset (no labels):
text_ds = raw_train_ds.map(lambda x, y: x)
# Let's call `adapt`:
vectorize_layer.adapt(text_ds)
vocab = np.array(vectorize_layer.get_vocabulary())
vocab[:20]
###Output
_____no_output_____
###Markdown
Encode a few training examples with the vectorization layer, then map the token ids back to vocabulary entries to check the round-trip.
###Code
encoded_example = vectorize_layer(text_batch)[:3].numpy()
encoded_example
for n in range(3):
print("Original: ", text_batch[n].numpy().decode())
print("Round-trip: ", " ".join(vocab[encoded_example[n]]))
print()
###Output
Original: RT @USER: يا رب يا عزيز يا جبار .. انك القادر على كل شيء .. يا رب فرحه اتحاديه تُنسينا كل الهموم والمشاكل الي صارت هالموسم يا رب العباد… NOT_OFF NOT_HS
Round-trip: يا رب يا عزيز يا جبار انك القادر على كل شيء يا رب فرحه اتحاديه [UNK] كل الهموم والمشاكل الي صارت هالموسم يا رب [UNK]
Original: @USER @USER يا جامع يا رقيب يا رب تلقاها NOT_OFF NOT_HS
Round-trip: يا جامع يا رقيب يا رب تلقاها
Original: راهنت عليك ولم اخسر ❤️❤️<LF>يا وحش يا جلاد يا كبير<LF>يا مرعب يا قناص يا يصياد<LF>🖤💛🖤💛🖤💛🖤💛<LF>#الاتحاد_النصر URL NOT_OFF NOT_HS
Round-trip: [UNK] عليك ولم اخسر يا وحش يا جلاد يا كبير يا مرعب يا قناص يا يصياد الاتحادالنصر
###Markdown
Create the model
###Code
model = tf.keras.Sequential([
vectorize_layer,
tf.keras.layers.Embedding(
input_dim=len(vectorize_layer.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1)
])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy'])
from keras.callbacks import EarlyStopping
history = model.fit(raw_train_ds, epochs=30,
validation_data=raw_val_ds,
validation_steps=30,
callbacks=[EarlyStopping(monitor='val_loss',patience=5)])
#add patience to earlystop
# history=model.fit(train_ds, validation_data=val_ds, epochs=epochs,
# callbacks=[EarlyStopping(monitor='val_loss',min_delta=0.0001)])
test_loss, test_acc = model.evaluate(raw_test_ds)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss=history.history['loss']
val_loss=history.history['val_loss']
#epochs_range = range(22)
plt.figure(figsize=(15, 15))
plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
sample_text = (' يا حبيبي بكل صفاتہ يا دموع و يا خضوع ')
predictions = model.predict(np.array([sample_text]))
predictions
#NOT hate speech
sample_text = (' القم يا ايطالي يا ابن الكلب انت و باصك يلعن اهلك ')
predictions = model.predict(np.array([sample_text]))
predictions
#Hate
###Output
_____no_output_____ |
The Risk and Returns - The Sharpe Ratio/notebook.ipynb | ###Markdown
1. Meet Professor William Sharpe

An investment may make sense if we expect it to return more money than it costs. But returns are only part of the story because they are risky - there may be a range of possible outcomes. How does one compare different investments that may deliver similar results on average, but exhibit different levels of risks?

Enter William Sharpe. He introduced the reward-to-variability ratio in 1966 that soon came to be called the Sharpe Ratio. It compares the expected returns for two investment opportunities and calculates the additional return per unit of risk an investor could obtain by choosing one over the other. In particular, it looks at the difference in returns for two investments and compares the average difference to the standard deviation (as a measure of risk) of this difference. A higher Sharpe ratio means that the reward will be higher for a given amount of risk. It is common to compare a specific opportunity against a benchmark that represents an entire category of investments.

The Sharpe ratio has been one of the most popular risk/return measures in finance, not least because it's so simple to use. It also helped that Professor Sharpe won a Nobel Memorial Prize in Economics in 1990 for his work on the capital asset pricing model (CAPM).

Let's learn about the Sharpe ratio by calculating it for the stocks of the two tech giants Facebook and Amazon. As a benchmark, we'll use the S&P 500 that measures the performance of the 500 largest stocks in the US.
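In symbols, for an investment with daily returns $R_a$ and a benchmark with daily returns $R_b$, the ratio computed step by step below is

$$S = \frac{\text{mean}(R_a - R_b)}{\text{std}(R_a - R_b)},$$

which is then annualized by multiplying by $\sqrt{252}$, the approximate number of trading days in a year.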
###Code
# Importing required modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Settings to produce nice plots in a Jupyter notebook
plt.style.use('fivethirtyeight')
%matplotlib inline
# Reading in the data
stock_data = pd.read_csv('datasets/stock_data.csv', parse_dates=['Date'],index_col=['Date']).dropna()
benchmark_data = pd.read_csv('datasets/benchmark_data.csv', parse_dates=['Date'],index_col=['Date']).dropna()
###Output
_____no_output_____
###Markdown
2. A first glance at the data

Let's take a look at the data to find out how many observations and variables we have at our disposal.
###Code
# Display summary for stock_data
print('Stocks\n')
# ... YOUR CODE FOR TASK 2 HERE ...
print(stock_data.info(), '\n')
print(stock_data.head())
# Display summary for benchmark_data
print('\nBenchmarks\n')
# ... YOUR CODE FOR TASK 2 HERE ...
print(benchmark_data.info(), '\n')
print(benchmark_data.head())
###Output
Stocks
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 252 entries, 2016-01-04 to 2016-12-30
Data columns (total 2 columns):
Amazon 252 non-null float64
Facebook 252 non-null float64
dtypes: float64(2)
memory usage: 5.9 KB
None
Amazon Facebook
Date
2016-01-04 636.989990 102.220001
2016-01-05 633.789978 102.730003
2016-01-06 632.650024 102.970001
2016-01-07 607.940002 97.919998
2016-01-08 607.049988 97.330002
Benchmarks
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 252 entries, 2016-01-04 to 2016-12-30
Data columns (total 1 columns):
S&P 500 252 non-null float64
dtypes: float64(1)
memory usage: 3.9 KB
None
S&P 500
Date
2016-01-04 2012.66
2016-01-05 2016.71
2016-01-06 1990.26
2016-01-07 1943.09
2016-01-08 1922.03
###Markdown
3. Plot & summarize daily prices for Amazon and Facebook

Before we compare an investment in either Facebook or Amazon with the index of the 500 largest companies in the US, let's visualize the data, so we better understand what we're dealing with.
###Code
# visualize the stock_data
# ... YOUR CODE FOR TASK 3 HERE ...
stock_data.plot(subplots=True, title='Stock Data')
# summarize the stock_data
# ... YOUR CODE FOR TASK 3 HERE ...
stock_data.describe()
###Output
_____no_output_____
###Markdown
4. Visualize & summarize daily values for the S&P 500

Let's also take a closer look at the value of the S&P 500, our benchmark.
###Code
# plot the benchmark_data
# ... YOUR CODE FOR TASK 4 HERE ...
benchmark_data.plot(title='S&P 500')
# summarize the benchmark_data
# ... YOUR CODE FOR TASK 4 HERE ...
benchmark_data.describe()
###Output
_____no_output_____
###Markdown
5. The inputs for the Sharpe Ratio: Starting with Daily Stock Returns

The Sharpe Ratio uses the difference in returns between the two investment opportunities under consideration. However, our data show the historical value of each investment, not the return. To calculate the return, we need to calculate the percentage change in value from one day to the next. We'll also take a look at the summary statistics because these will become our inputs as we calculate the Sharpe Ratio. Can you already guess the result?
###Code
# calculate daily stock_data returns
stock_returns = stock_data.pct_change()
# plot the daily returns
# ... YOUR CODE FOR TASK 5 HERE ...
stock_returns.plot()
# summarize the daily returns
# ... YOUR CODE FOR TASK 5 HERE ...
stock_returns.describe()
###Output
_____no_output_____
###Markdown
6. Daily S&P 500 returns

For the S&P 500, calculating daily returns works just the same way, we just need to make sure we select it as a Series using single brackets [] and not as a DataFrame to facilitate the calculations in the next step.
###Code
# calculate daily benchmark_data returns
# ... YOUR CODE FOR TASK 6 HERE ...
sp_returns = benchmark_data['S&P 500'].pct_change()
# plot the daily returns
# ... YOUR CODE FOR TASK 6 HERE ...
sp_returns.plot()
# summarize the daily returns
# ... YOUR CODE FOR TASK 6 HERE ...
sp_returns.describe()
###Output
_____no_output_____
###Markdown
7. Calculating Excess Returns for Amazon and Facebook vs. S&P 500

Next, we need to calculate the relative performance of stocks vs. the S&P 500 benchmark. This is calculated as the difference in returns between stock_returns and sp_returns for each day.
###Code
# calculate the difference in daily returns
excess_returns = stock_returns.sub(sp_returns, axis=0)
# plot the excess_returns
# ... YOUR CODE FOR TASK 7 HERE ...
excess_returns.plot()
# summarize the excess_returns
# ... YOUR CODE FOR TASK 7 HERE ...
excess_returns.describe()
###Output
_____no_output_____
###Markdown
8. The Sharpe Ratio, Step 1: The Average Difference in Daily Returns Stocks vs S&P 500

Now we can finally start computing the Sharpe Ratio. First we need to calculate the average of the excess_returns. This tells us how much more or less the investment yields per day compared to the benchmark.
###Code
# calculate the mean of excess_returns
# ... YOUR CODE FOR TASK 8 HERE ...
avg_excess_return = excess_returns.mean()
# plot avg_excess_returns
# ... YOUR CODE FOR TASK 8 HERE ...
avg_excess_return.plot.bar(title='Mean of the Return Difference')
###Output
_____no_output_____
###Markdown
9. The Sharpe Ratio, Step 2: Standard Deviation of the Return Difference

It looks like there was quite a bit of a difference between average daily returns for Amazon and Facebook. Next, we calculate the standard deviation of the excess_returns. This shows us the amount of risk an investment in the stocks implies as compared to an investment in the S&P 500.
###Code
# calculate the standard deviations
sd_excess_return = excess_returns.std()
# plot the standard deviations
# ... YOUR CODE FOR TASK 9 HERE ...
sd_excess_return.plot(kind='bar', title='Standard Deviation of the Return Difference')
###Output
_____no_output_____
###Markdown
10. Putting it all together

Now we just need to compute the ratio of avg_excess_returns and sd_excess_returns. The result is now finally the Sharpe ratio and indicates how much more (or less) return the investment opportunity under consideration yields per unit of risk.

The Sharpe Ratio is often annualized by multiplying it by the square root of the number of periods. We have used daily data as input, so we'll use the square root of the number of trading days (5 days, 52 weeks, minus a few holidays): √252
###Code
# calculate the daily sharpe ratio
daily_sharpe_ratio = avg_excess_return.div(sd_excess_return)
# annualize the sharpe ratio
annual_factor = np.sqrt(252)
annual_sharpe_ratio = daily_sharpe_ratio.mul(annual_factor)
# plot the annualized sharpe ratio
annual_sharpe_ratio.plot(title='Annualized Sharpe Ratio: Stocks vs S&P 500')
###Output
_____no_output_____
###Markdown
11. Conclusion

Given the two Sharpe ratios, which investment should we go for? In 2016, Amazon had a Sharpe ratio twice as high as Facebook. This means that an investment in Amazon returned twice as much compared to the S&P 500 for each unit of risk an investor would have assumed. In other words, in risk-adjusted terms, the investment in Amazon would have been more attractive.

This difference was mostly driven by differences in return rather than risk between Amazon and Facebook. The risk of choosing Amazon over FB (as measured by the standard deviation) was only slightly higher, so the Sharpe ratio for Amazon ends up higher mainly due to the higher average daily returns for Amazon. When faced with investment alternatives that offer both different returns and risks, the Sharpe Ratio helps to make a decision by adjusting the returns by the differences in risk and allows an investor to compare investment opportunities on equal terms, that is, on an 'apples-to-apples' basis.
###Code
# Uncomment your choice.
buy_amazon = True
# buy_facebook = True
###Output
_____no_output_____ |
examples/cpacspy_use.ipynb | ###Markdown
 cpacspy
=========

cpacspy is a Python package to read, write, interact and analyse [CPACS](https://www.cpacs.de/) files and especially AeroPerformanceMaps.

Installation
------------

You need to have [TIXI](https://github.com/DLR-SC/tixi) and [TIGL](https://github.com/DLR-SC/tigl) installed on your computer to use this package. The easiest way is to use a Conda environment, to create one:

- Install Miniconda:
- Clone this repository and create a Conda environment with the following command:

``` {.sourceCode .bash}
$ git clone https://github.com/cfsengineering/cpacspy.git
$ cd cpacspy
$ conda env create -f environment.yml
$ conda activate cpacspy_env
```

- When it is done, or if you already have TIXI and TIGL installed on your computer:

``` {.sourceCode .bash}
$ pip install cpacspy
```

To build and install locally
----------------------------

``` {.sourceCode .bash}
$ cd cpacspy
$ python -m build
$ pip install --user .
```

License
-------

**License:** Apache-2.0

How to use this package
-----------------------

Follow the example below:
###Code
import sys
sys.path.append('../src/')
# Importing cpacspy
from cpacspy.cpacspy import CPACS
# Load a CPACS file
cpacs = CPACS('D150_simple.xml')
# For each object you can print it to see what it contains or use 'help(...)' to see associated functions
print(cpacs)
print(cpacs.aircraft)
help(cpacs)
###Output
_____no_output_____
###Markdown
Aircraft value
--------------
###Code
cpacs.aircraft.ref_lenght
cpacs.aircraft.ref_area
(cpacs.aircraft.ref_point_x,cpacs.aircraft.ref_point_y,cpacs.aircraft.ref_point_z)
###Output
_____no_output_____
###Markdown
Wing value
----------

(by default the largest wing is the reference one)
###Code
cpacs.aircraft.wing_ar
cpacs.aircraft.wing_span
cpacs.aircraft.wing_area
# You can also change the reference wing by its index
cpacs.aircraft.ref_wing_idx = 3
cpacs.aircraft.wing_area
# or by its uid
cpacs.aircraft.ref_wing_uid = 'Wing2H'
cpacs.aircraft.wing_area
###Output
_____no_output_____
###Markdown
AeroMaps
--------
###Code
# Get list of all available aeroMaps
cpacs.get_aeromap_uid_list()
# or loop through all aeroMaps
for aeromap in cpacs.aeromaps:
print('---')
print(aeromap.uid)
print(aeromap.description)
# Get a specific aeromap from its uid
ext_aeromap = cpacs.get_aeromap_by_uid('extended_aeromap')
print(ext_aeromap)
# Get specific values
ext_aeromap.get('angleOfAttack',alt=15500.0,aos=0.0,mach=0.3)
ext_aeromap.get('cl',alt=15500.0,aos=0.0,mach=[0.3,0.4,0.5])
# Plot values from the aeromap
ext_aeromap.plot('cd','cl',alt=15500,aos=0.0,mach=0.5)
ext_aeromap.plot('angleOfAttack','cd',alt=15500,aos=0.0,mach=0.5)
# Create new aeromap
new_aeromap = cpacs.create_aeromap('my_new_aeromap')
new_aeromap.description = 'Test the creation of a new aeromap'
# Add a values into the new aeromap
new_aeromap.add_row(mach=0.555,alt=15000,aos=0.0,aoa=0,cd=0.001,cl=0.1,cs=0.0,cmd=0.0,cml=1.1,cms=0.0)
# Remove a row from its paramters
new_aeromap.remove_row(mach=0.555,alt=15000,aos=0.0,aoa=0)
# Add a values into the new aeromap
for i in range(12):
new_aeromap.add_row(mach=0.555,alt=15000,aos=0.0,aoa=i,cd=0.001*i*i,cl=0.1*i,cs=0.0,cmd=0.0,cml=1.1,cms=0.0)
print(new_aeromap)
# Save the new aeromap
new_aeromap.save()
# Delete an aeromap
cpacs.delete_aeromap('aeromap_test1')
cpacs.get_aeromap_uid_list()
# Duplicate an aeromap
duplicated_aeromap = cpacs.duplicate_aeromap('my_new_aeromap', 'my_duplicated_aeromap')
duplicated_aeromap.add_row(mach=0.666,alt=10000,aos=0.0,aoa=2.4,cd=0.001,cl=1.1,cs=0.22,cmd=0.22)
print(duplicated_aeromap)
# Coefficient are stored in a Pandas DataFrame, so you can apply any operation on it
duplicated_aeromap.df['cd'] = duplicated_aeromap.df['cd'].apply(lambda x: x*2-0.2)
duplicated_aeromap.get('cd')
# Export to a CSV file
duplicated_aeromap.export_csv('aeromap.csv')
# Import from CSV
imported_aeromap = cpacs.create_aeromap_from_csv('aeromap.csv','imported_aeromap')
imported_aeromap.description = 'This aeromap has been imported from a CSV file'
imported_aeromap.save()
print(imported_aeromap)
# AeroMap with damping derivatives coefficients
aeromap_dd = cpacs.duplicate_aeromap('imported_aeromap','aeromap_dd')
aeromap_dd.description = 'Aeromap with damping derivatives coefficients'
print(aeromap_dd)
# Add damping derivatives coefficients to the aeromap
aeromap_dd.add_damping_derivatives(alt=15000,mach=0.555,aos=0.0,aoa=0.0,coef='cd',axis='dp',value=0.001,rate=-1.0)
###Output
_____no_output_____
###Markdown
- Coefficients must be one of the following: 'cd', 'cl', 'cs', 'cmd', 'cml', 'cms'
- Axis must be one of the following: 'dp', 'dq', 'dr'
- The sign of the rate will determine if the coefficients are stored in /positiveRate or /negativeRate
###Code
# Could be added in a loop
for i in range(11):
aeromap_dd.add_damping_derivatives(alt=15000,mach=0.555,aos=0.0,aoa=i+1,coef='cd',axis='dp',value=0.001*i+0.002,rate=-1.0)
aeromap_dd.save()
aeromap_dd.df
# Get damping derivatives coefficients
aeromap_dd.get_damping_derivatives(alt=15000,mach=0.555,coef='cd',axis='dp',rates='neg')
aeromap_dd.get_damping_derivatives(alt=15000,mach=0.555,aoa=[4.0,6.0,8.0],coef='cd',axis='dp',rates='neg')
# Also works with the simple "get" function, but the "coef name" is a bit more complicated
aeromap_dd.get('dampingDerivatives_negativeRates_dcddpStar',aoa=[4.0,6.0,8.0])
# Damping derivatives coefficients can alos be plotted
aeromap_dd.plot('angleOfAttack','dampingDerivatives_negativeRates_dcddpStar',alt=15000,aos=0.0,mach=0.555)
###Output
_____no_output_____
###Markdown
Analyses
--------
###Code
# CD0 and oswald factor
ar = cpacs.aircraft.wing_ar
cd0,e = ext_aeromap.get_cd0_oswald(ar,alt=15500.0,aos=0.0,mach=0.5)
# Get Forces [N]
ext_aeromap.calculate_forces(cpacs.aircraft)
print(ext_aeromap.get('cd',alt=15500.0,aos=0.0,mach=[0.3,0.4,0.5]))
print(ext_aeromap.get('drag',alt=15500.0,aos=0.0,mach=[0.3,0.4,0.5]))
ext_aeromap.plot('angleOfAttack','drag',alt=15500,aos=0.0,mach=0.5)
ext_aeromap.df
###Output
_____no_output_____
###Markdown
Save the CPACS file
###Code
# Save all the change in a CPACS file
cpacs.save_cpacs('D150_simple_updated_aeromap_tmp.xml',overwrite=True)
###Output
_____no_output_____ |
Python/AstroNote.ipynb | ###Markdown
Just some matplotlib and seaborn parameter tuning
###Code
data = './data/'
out = './out/'
figsave_format = 'png'
figsave_dpi = 200
axistitlesize = 20
axisticksize = 17
axislabelsize = 26
axislegendsize = 23
axistextsize = 20
axiscbarfontsize = 15
# Set axtick dimensions
major_size = 6
major_width = 1.2
minor_size = 3
minor_width = 1
mpl.rcParams['xtick.major.size'] = major_size
mpl.rcParams['xtick.major.width'] = major_width
mpl.rcParams['xtick.minor.size'] = minor_size
mpl.rcParams['xtick.minor.width'] = minor_width
mpl.rcParams['ytick.major.size'] = major_size
mpl.rcParams['ytick.major.width'] = major_width
mpl.rcParams['ytick.minor.size'] = minor_size
mpl.rcParams['ytick.minor.width'] = minor_width
mpl.rcParams.update({'figure.autolayout': False})
# Seaborn style settings
sns.set_style({'axes.axisbelow': True,
'axes.edgecolor': '.8',
'axes.facecolor': 'white',
'axes.grid': True,
'axes.labelcolor': '.15',
'axes.spines.bottom': True,
'axes.spines.left': True,
'axes.spines.right': True,
'axes.spines.top': True,
'figure.facecolor': 'white',
'font.family': ['sans-serif'],
'font.sans-serif': ['Arial',
'DejaVu Sans',
'Liberation Sans',
'Bitstream Vera Sans',
'sans-serif'],
'grid.color': '.8',
'grid.linestyle': '--',
'image.cmap': 'rocket',
'lines.solid_capstyle': 'round',
'patch.edgecolor': 'w',
'patch.force_edgecolor': True,
'text.color': '.15',
'xtick.bottom': True,
'xtick.color': '.15',
'xtick.direction': 'in',
'xtick.top': True,
'ytick.color': '.15',
'ytick.direction': 'in',
'ytick.left': True,
'ytick.right': True})
# Colorpalettes, colormaps, etc.
sns.set_palette(palette='rocket')
# Current Version of the Csillész II Problem Solver
current_version = 'v1.33'
###Output
_____no_output_____
###Markdown
Constants
###Code
# Earth's Radius
R_Earth = 6378e03
# Length of 1 Solar Day = 1.002737909350795 Sidereal Days
# It's usually labeled as dS/dm
# Here simply labeled as dS
dS = 1.002737909350795
# J2000 epoch: 2000 January 1, 12:00 TT (noon), the standard Julian year reference
J2000 = 2451545
# Months' length in days, without leap day
Month_Length_List = [31,28,31,30,31,30,31,31,30,31,30,31]
# Months' length in days, with leap day
Month_Length_List_Leap_Year = [31,29,31,30,31,30,31,31,30,31,30,31]
# Predefined Coordinates of Some Notable Cities
# Format:
# "LocationName": [N Latitude (φ), E Longitude(λ)]
# Latitude: + if N, - if S
# Longitude: + if E, - if W
Location_Dict = {
"Amsterdam": [52.3702, 4.8952],
"Athen": [37.9838, 23.7275],
"Baja": [46.1803, 19.0111],
"Beijing": [39.9042, 116.4074],
"Berlin": [52.5200, 13.4050],
"Budapest": [47.4979, 19.0402],
"Budakeszi": [47.5136, 18.9278],
"Budaors": [47.4621, 18.9530],
"Brussels": [50.8503, 4.3517],
"Debrecen": [47.5316, 21.6273],
"Dunaujvaros": [46.9619, 18.9355],
"Gyor": [47.6875, 17.6504],
"Jerusalem": [31.7683, 35.2137],
"Kecskemet": [46.8964, 19.6897],
"Lumbaqui": [0.0467, -77.3281],
"London": [51.5074, -0.1278],
"Mako": [46.2219, 20.4809],
"Miskolc": [48.1035, 20.7784],
"Nagykanizsa": [46.4590, 16.9897],
"NewYork": [40.7128, -74.0060],
"Paris": [48.8566, 2.3522],
"Piszkesteto": [47.91806, 19.8942],
"Pecs": [46.0727, 18.2323],
"Rio": [-22.9068, -43.1729],
"Rome": [41.9028, 12.4964],
"Szeged": [46.2530, 20.1414],
"Szeghalom": [47.0239, 21.1667],
"Szekesfehervar": [47.1860, 18.4221],
"Szombathely": [47.2307, 16.6218],
"Tokyo": [35.6895, 139.6917],
"Washington": [47.7511, -120.7401],
"Zalaegerszeg": [46.8417, 16.8416]
}
# Predefined Equatorial I Coordinates of Some Notable Stellar Objects
# Format:
# "StarName": [Right Ascension (RA), Declination (δ)]
Stellar_Dict = {
"Achernar": [1.62857, -57.23675],
"Aldebaran": [4.59868, 16.50930],
"Algol": [3.13614, 40.95565],
"AlphaAndromedae": [0.13979, 29.09043],
"AlphaCentauri": [14.66014, -60.83399],
"AlphaPersei": [3.40538, 49.86118],
"Alphard": [9.45979, -8.65860],
"Altair": [19.8625, 8.92278],
"Antares": [16.49013, -26.43200],
"Arcturus": [14.26103, 19.18222],
"BetaCeti": [0.72649, -17.986605],
"BetaUrsaeMajoris": [11.03069, 56.38243],
"BetaUrsaeMinoris": [14.84509, 74.15550],
"Betelgeuse": [5.91953, 7.407064],
"Canopus": [6.39920, -52.69566],
"Capella": [5.278155, 45.99799],
"Deneb": [20.69053, 45.28028],
"Fomalhaut": [22.960845, -29.62223],
"GammaDraconis": [17.94344, 51.4889],
"GammaVelorum": [8.15888, -47.33658],
"M31": [0.712305, 41.26917],
"Polaris": [2.53030, 89.26411],
"Pollux": [7.75526, 28.02620],
"ProximaCentauri": [14.49526, -62.67949],
"Rigel": [5.24230, -8.20164],
"Sirius": [6.75248, -16.716116],
"Vega": [18.61565, 38.78369],
"VYCanisMajoris": [7.38287, -25.767565]
}
_VALID_PLANETS = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter',
'Saturn', 'Uranus', 'Neptunus', 'Pluto']
# Constants for Planetary Orbits
# Format:
# "PlanetNameX": [X_0, X_1, X_2 .., X_E.] or [X_1, X_3, ..., X_E] etc.
# "PlanetNameOrbit": [Π, ε, Correction for Refraction and Sun's visible shape]
Orbit_Dict = {
"MercuryM": [174.7948, 4.09233445],
"MercuryC": [23.4400, 2.9818, 0.5255, 0.1058, 0.0241, 0.0055, 0.0026],
"MercuryA": [-0.0000, 0.0000, 0.0000, 0.0000],
"MercuryD": [0.0351, 0.0000, 0.0000, 0.0000],
"MercuryJ": [45.3497, 11.4556, 0.00000, 175.9386],
"MercuryH": [0.035, 0.00000, 0.00000],
"MercuryTH": [132.3282, 6.1385025],
"MercuryOrbit": [230.3265, 0.0351, -0.69],
"VenusM": [50.4161, 1.60213034],
"VenusC": [0.7758, 0.0033, 0.00000, 0.00000, 0.00000, 0.00000, 0.00000],
"VenusA": [-0.0304, 0.00000, 0.00000, 0.0001],
"VenusD": [.6367, 0.0009, 0.00000, 0.0036],
"VenusJ": [52.1268, -0.2516, 0.0099, -116.7505],
"VenusH": [2.636, 0.001, 0.00000],
"VenusTH": [104.9067, -1.4813688],
"VenusOrbit": [73.7576, 2.6376, -0.37],
"EarthM": [357.5291, 0.98560028],
"EarthJ": [0.0009, 0.0053, -0.0068, 1.0000000],
"EarthC": [1.9148, 0.0200, 0.0003, 0.00000, 0.00000, 0.00000, 0.00000],
"EarthA": [-2.4657, 0.0529, -0.0014, 0.0003],
"EarthD": [22.7908, 0.5991, 0.0492, 0.0003],
"EarthH": [22.137, 0.599, 0.016],
"EarthTH": [280.1470, 360.9856235],
"EarthOrbit": [102.9373, 23.4393, -0.83],
"MarsM": [19.3730, 0.52402068],
"MarsC": [10.6912, 0.6228, 0.0503, 0.0046, 0.0005, 0.00000, 0.0001],
"MarsA": [-2.8608, 0.0713, -0.0022, 0.0004],
"MarsD": [24.3880, 0.7332, 0.0706, 0.0011],
"MarsJ": [0.9047, 0.0305, -0.0082, 1.027491],
"MarsH": [23.576, 0.733, 0.024],
"MarsTH": [313.3827, 350.89198226],
"MarsOrbit": [71.0041, 25.1918, -0.17],
"JupiterM": [20.0202, 0.08308529],
"JupiterC": [5.5549, 0.1683, 0.0071, 0.0003, 0.00000, 0.00000, 0.0001],
"JupiterA": [-0.0425, 0.00000, 0.00000, 0.0001],
"JupiterD": [3.1173, 0.0015, 0.00000, 0.0034],
"JupiterJ": [0.3345, 0.0064, 0.00000, 0.4135778],
"JupiterH": [3.116, 0.002, 0.00000],
"JupiterTH": [145.9722, 870.5360000],
"JupiterOrbit": [237.1015, 3.1189, -0.05],
"SaturnM": [317.0207, 0.03344414],
"SaturnC": [6.3585, 0.2204, 0.0106, 0.0006, 0.00000, 0.00000, 0.0001],
"SaturnA": [-3.2338, 0.0909, -0.0031, 0.0009],
"SaturnD": [25.7696, 0.8640, 0.0949, 0.0010],
"SaturnJ": [0.0766, 0.0078, -0.0040, 0.4440276],
"SaturnH": [24.800, 0.864, 0.032],
"SaturnTH": [174.3508, 810.7939024],
"SaturnOrbit": [99.4587, 26.7285, -0.03],
"UranusM": [141.0498, 0.01172834],
"UranusC": [5.3042, 0.1534, 0.0062, 0.0003, 0.00000, 0.00000, 0.0001],
"UranusA": [-42.5874, 12.8117, -2.6077, 17.6902],
"UranusD": [56.9083, -0.8433, 26.1648, 3.34],
"UranusJ": [0.1260, -0.0106, 0.0850, -0.7183165],
"UranusH": [28.680, -0.843, 8.722],
"UranusTH": [29.6474, -501.1600928],
"UranusOrbit": [5.4634, 82.2298, -0.01],
"NeptunusM": [256.2250, 0.00598103],
"NeptunusC": [1.0302, 0.0058, 0.00000, 0.00000, 0.00000, 0.00000, 0.0001],
"NeptunusA": [-3.5214, 0.1078, -0.0039, 0.0163],
"NeptunusD": [26.7643, 0.9669, 0.1166, 0.060],
"NeptunusJ": [0.3841, 0.0019, -0.0066, 0.6712575],
"NeptunusH": [26.668, 0.967, 0.039],
"NeptunusTH": [52.4160, 536.3128662],
"NeptunusOrbit": [182.2100, 27.8477, -0.01],
"PlutoM": [14.882, 0.00396],
"PlutoC": [28.3150, 4.3408, 0.9214, 0.2235, 0.0627, 0.0174, 0.0096],
"PlutoA": [-19.3248, 3.0286, -0.4092, 0.5052],
"PlutoD": [49.8309, 4.9707, 5.5910, 0.19],
"PlutoJ": [4.5635, -0.5024, 0.3429, 6.387672],
"PlutoH": [38.648, 4.971, 1.864],
"PlutoTH": [122.2370, 56.3625225],
"PlutoOrbit": [184.5484, 119.6075, -0.01]
}
###Output
_____no_output_____
###Markdown
Auxiliary functions Normalization with Bound [0,NonZeroBound[
###Code
def Normalize_Zero_Bounded(Parameter, Non_Zero_Bound):
if(Parameter >= Non_Zero_Bound):
Multiply = Parameter // Non_Zero_Bound
Parameter -= Multiply * Non_Zero_Bound
elif(Parameter < 0):
Multiply = Parameter // Non_Zero_Bound
Parameter += np.abs(Multiply) * Non_Zero_Bound
else:
Multiply = 0
return(Parameter, Multiply)
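
# Minimal sanity checks (values worked out by hand for this implementation):
assert Normalize_Zero_Bounded(370, 360) == (10, 1)
assert Normalize_Zero_Bounded(-30, 360) == (330, -1)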
###Output
_____no_output_____
###Markdown
Normalization to [-π,+π[
###Code
def Normalize_Symmetrically_Bounded_PI(Parameter):
if(Parameter < 0 or Parameter >= 360):
Parameter, _ = Normalize_Zero_Bounded(Parameter, 360)
    if(Parameter >= 180):
Parameter = Parameter - 360
return(Parameter)
###Output
_____no_output_____
###Markdown
Normalization to [-π/2,+π/2]
###Code
def Normalize_Symmetrically_Bounded_PI_2(Parameter):
if(Parameter < 0 or Parameter >= 360):
Parameter, _ = Normalize_Zero_Bounded(Parameter, 360)
if(Parameter > 90 and Parameter <= 270):
Parameter = - (Parameter - 180)
elif(Parameter > 270 and Parameter <= 360):
Parameter = Parameter - 360
return(Parameter)
###Output
_____no_output_____
###Markdown
Time related calculations Normalize time parameters
###Code
def Normalize_Time_Parameters(Time, Years, Months, Days):
Hours = int(Time)
Minutes = int((Time - Hours) * 60)
Seconds = (((Time - Hours) * 60) - Minutes) * 60
# "Time" is a floating-point variable, with hours as its unit of measurement
    # It indicates the time quantity involved in the calculations, expressed in hours
#
# Since Minutes and Seconds are always fraction of an hour in this notation, we
# only needed to normalize Hours, because Minutes and Seconds will be normalized
# by definition (it means, that Minutes and Seconds are always between 0 and 60).
if(Hours >= 24 or Hours < 0):
Hours, Multiply = Normalize_Zero_Bounded(Hours, 24)
Days += Multiply
if(Years%4 == 0 and (Years%100 != 0 or Years%400 == 0)):
if(Days > Month_Length_List_Leap_Year[Months - 1]):
Days, Multiply = Normalize_Zero_Bounded(Days, Month_Length_List_Leap_Year[Months - 1])
Months += Multiply
else:
if(Days > Month_Length_List[Months - 1]):
Days, Multiply = Normalize_Zero_Bounded(Days, Month_Length_List[Months - 1])
Months += Multiply
if(Months > 12):
Months, Multiply = Normalize_Zero_Bounded(Months, 12)
        Years += Multiply
# Normalized time
Time = Hours + Minutes/60 + Seconds/3600
Normalized_Date_Time = np.array((Time, Years, Months, Days))
return(Normalized_Date_Time)
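
# Hand-checked example: 25.5 h on 2020-01-31 wraps to 01:30 on 2020-02-01
# (2020 is a leap year; January has 31 days), i.e.
# Normalize_Time_Parameters(25.5, 2020, 1, 31) -> array([1.5, 2020., 2., 1.])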
###Output
_____no_output_____
###Markdown
Normalization and Conversion of Local Time to Coordinated Universal Time
###Code
def LT_To_UT(Longitude,
Local_Time,
Local_Date_Year, Local_Date_Month, Local_Date_Day):
# Normalize LT
Local_Time, _ = Normalize_Zero_Bounded(Local_Time, 24)
# Summer/Winter Saving time
# MAY BE DEPRECATED FROM 2021
# Summer: March 26/31 - October 8/14 LT+1
# Winter: October 8/14 - March 26/31 LT+0
if((Local_Date_Month > 3 and Local_Date_Month < 10) or
(Local_Date_Month == 3 and Local_Date_Day >= 26) or
(Local_Date_Month == 10 and (Local_Date_Day >= 8 and Local_Date_Day <=14))):
Universal_Time = Local_Time - (round((Longitude - 7.5)/15, 0) + 1)
else:
Universal_Time = Local_Time - round((Longitude - 7.5)/15, 0)
# Apply corrections if Universal Time is not in the correct format
Normalized_Universal_Date_Time = Normalize_Time_Parameters(Universal_Time,
Local_Date_Year, Local_Date_Month, Local_Date_Day)
return(Normalized_Universal_Date_Time)
def UT_To_LT(Longitude,
Universal_Time,
Universal_Date_Year, Universal_Date_Month, Universal_Date_Day):
# Normalize LT
Universal_Time, _ = Normalize_Zero_Bounded(Universal_Time, 24)
# Summer/Winter Saving time
# MAY BE DEPRECATED FROM 2021
# Summer: March 26/31 - October 8/14 LT+1
# Winter: October 8/14 - March 26/31 LT+0
if((Universal_Date_Month > 3 and Universal_Date_Month < 10) or
(Universal_Date_Month == 3 and Universal_Date_Day > 25) or
       (Universal_Date_Month == 10 and (Universal_Date_Day >= 8 and Universal_Date_Day <= 14))):
Local_Time = Universal_Time + (round((Longitude - 7.5)/15, 0) + 1)
else:
Local_Time = Universal_Time + round((Longitude - 7.5)/15, 0)
# Apply corrections if Local Time is not in the correct format
Normalized_Local_Date_Time = Normalize_Time_Parameters(Local_Time,
Universal_Date_Year, Universal_Date_Month, Universal_Date_Day)
return(Normalized_Local_Date_Time)
###Output
_____no_output_____
###Markdown
Calculate Julian Day Number Sourced from:- https://aa.quae.nl/en/reken/juliaansedag.html- http://neoprogrammics.com/sidereal_time_calculator/index.php- L. E. Doggett, “Calendars,” In: P. K. Seidelmann, Ed., Explanatory Supplement to the Astronomical Almanac, US Naval Observatory, University Science Books Company, Mill Valley, 1992. Abbrevations- JDN: Julian Day Number (Universal Time, starts at 12:00 UTC)- JD: JDN + JDFrac. Julian Day Number + fraction of the day (Universal Time, starts at 12:00 UTC)- CJD - CJDN: Chronological Julian Date - Chronological Julian Day Number (Local Time, starts at 00:00 LT) Definition of JD, JDN and CJD, CJDNThe zero point of JD (i.e., JD 0.0) corresponds to 12:00 UTC on 1 January −4712 in the Julian calendar. The zero point of CJD corresponds to 00:00 (midnight) local time on 1 January −4712. JDN 0 corresponds to the period from 12:00 UTC on 1 January −4712 to 12:00 UTC on 2 January −4712. CJDN 0 corresponds to 1 January −4712 (the whole day, in local time).
###Code
def Calculate_JD(Date_Year,
Date_Month,
Date_Day,
Longitude=None,
Local_Time=None):
if(Local_Time != None):
if(Longitude == None):
raise ValueError('Valid longitude value is needed for LT to UTC conversion!')
else:
Universal_Date_Time = LT_To_UT(Longitude,
Local_Time,
Local_Date_Year=Date_Year,
Local_Date_Month=Date_Month,
Local_Date_Day=Date_Day)
else:
Universal_Date_Time = np.array((0, Date_Year, Date_Month, Date_Day))
T = Universal_Date_Time[0]
Y = Universal_Date_Time[1]
M = Universal_Date_Time[2]
D = Universal_Date_Time[3]
# 1. Gregorian Date to JDN | version 1.
c_0 = (M - 3) // 12
x_4 = Y + c_0
x_3 = x_4 // 100
x_2 = x_4 % 100
x_1 = M - 12 * c_0 - 3
JDN = (146097 * x_3) // 4 + \
(36525 * x_2) // 100 + \
(153 * x_1 + 2) // 5 + D + 1721118.5
# 2. Gregorian Date to JDN | version 2.
# Integer divisions should be used everywhere
#JDN = 367 * Y - \
# 7 * (Y + (M + 9) // 12) // 4 + \
# 275 * M // 9 + D - 730531.5 + 2451545.0
# 3. Julian Date to JDN
#JDN = (1461 * (Y + 4800 + (M - 14) // 12)) // 4 + \
# (367 * (M - 2 - 12 * ((M - 14) // 12))) // 12 - \
# (3 * ((Y + 4900 + (M - 14) // 12) // 100)) // 4 + D - 32075
# JD_Frac: Fraction of the day
JD_Frac = T / 24
# Julian Date
JD = JDN + JD_Frac
return(JD)
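
# Sanity check against the J2000 epoch: JD 2451545.0 is 2000-01-01 12:00 UTC,
# so midnight of that day is half a day earlier:
# Calculate_JD(2000, 1, 1) -> 2451544.5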
###Output
_____no_output_____
###Markdown
Calculate Greenwich Mean Sidereal Time (GMST $= S_{0}$) on the given date at 00:00 UTC Sourced from:- https://astronomy.stackexchange.com/questions/21002/how-to-find-greenwich-mean-sideral-time- http://www.cashin.net/sidereal/calculation.html- http://www2.arnes.si/~gljsentvid10/sidereal.htm- https://en.wikipedia.org/wiki/Universal_Time Versions Method of calculations Method 1.
###Code
def Calculate_GMST(Universal_Date_Year,
Universal_Date_Month,
Universal_Date_Day):
# Julian_Date = UTC days since J2000.0, including parts of a day
JD = Calculate_JD(Date_Year=Universal_Date_Year,
Date_Month=Universal_Date_Month,
Date_Day=Universal_Date_Day)
# Number of Julian centuries since J2000.0
T_u = (JD - J2000) / 36525
# Method 1.
# Calculate GSMT in seconds of time, then convert to hours of time
GMST = (24110.54841 +
8640184.812866 * T_u +
0.093104 * T_u**2 -
            6.2e-06 * T_u**3) / 3600
# Method 2.
# Calculate GMST in arc degrees, then convert to hours of time
#GMST = (280.46061837 +
# 360.98564736629 * JD +
# 0.000388 * T_u**2) / 15
    # Normalize to [0,24[
GMST, _ = Normalize_Zero_Bounded(GMST, 24)
return(GMST)
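
# Rough cross-check (reference value assumed from the sources above):
# GMST at 2000-01-01 00:00 UTC is about 6.6645 h (~6h 39m 52s):
# Calculate_GMST(2000, 1, 1) -> ~6.66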
###Output
_____no_output_____
###Markdown
Calculate Local Mean Sidereal Time (LMST $= S$) on the given date and time at specific location Sourced from:- https://www.cfa.harvard.edu/~jzhao/times.html- https://tycho.usno.navy.mil/sidereal.html Calculation methodLMST could be approximated for a specific location on Earth simply by$$S = S_{0} + \lambda^{\ast}$$Where $\lambda^{\ast}$ is the longitude of the choosen position in hours of time. This value can be made more accurate by taking into account the difference between the Sidereal and Synodic/Solar day. Hence UTC is Synodic, but LMST is Sidereal, we can take into account this with an additional correction.$$S = S_{0} + \lambda^{\ast} + dS \cdot T_{\text{UTC}}$$Here $dS=1.00273790935(\dots)$ indicates the ration between the Sidereal and Synodic day and $T_{\text{UTC}}$ represents the actual UTC time in hours.
###Code
def Calculate_LMST(Longitude,
Local_Time,
Local_Date_Year,
Local_Date_Month,
Local_Date_Day):
# Input data normalization
# Longitude: [0,+2π[
Longitude, _ = Normalize_Zero_Bounded(Longitude, 360)
Universal_Date_Time = LT_To_UT(Longitude,
Local_Time,
Local_Date_Year=Local_Date_Year,
Local_Date_Month=Local_Date_Month,
Local_Date_Day=Local_Date_Day)
# Calculate Greenwich Mean Sidereal Time (GMST)
GMST = Calculate_GMST(Universal_Date_Year=Universal_Date_Time[1],
Universal_Date_Month=Universal_Date_Time[2],
Universal_Date_Day=Universal_Date_Time[3])
    # Normalize to [0,24[
GMST, _ = Normalize_Zero_Bounded(GMST, 24)
# Calculate LMST
LMST = GMST + Longitude/15 + dS * Universal_Date_Time[0]
# LMST normalization
LMST = Normalize_Time_Parameters(LMST, Local_Date_Year, Local_Date_Month, Local_Date_Day)
return(LMST)
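
# Illustrative call (Budapest, 2000-01-01 12:00 local time); the result is the
# normalized [time, year, month, day] array and depends on the approximate
# DST rule hard-coded in LT_To_UT above:
# Calculate_LMST(19.0402, 12.0, 2000, 1, 1)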
###Output
_____no_output_____
###Markdown
Conversion between astronomical coordinate systems 1. Horizontal to Equatorial I
###Code
def Hor_To_Equ_I(Latitude, Altitude, Azimuth, LMST=None):
# Input data normalization
# Latitude: [-π/2,+π/2]
# Altitude: [-π/2,+π/2]
# Azimuth: [0,+2π[
Latitude = Normalize_Symmetrically_Bounded_PI_2(Latitude)
Altitude = Normalize_Symmetrically_Bounded_PI_2(Altitude)
Azimuth, _ = Normalize_Zero_Bounded(Azimuth, 360)
# Calculate Declination (δ)
# sin(δ) = sin(m) * sin(φ) + cos(m) * cos(φ) * cos(A)
    # The result for δ will be non-ambiguous and the output of
# the 'np.arcsin()' can be automatically accepted
Declination = np.degrees(np.arcsin(
np.sin(np.radians(Altitude)) * np.sin(np.radians(Latitude)) +
np.cos(np.radians(Altitude)) * np.cos(np.radians(Latitude)) * np.cos(np.radians(Azimuth))
))
# Normalize result for Declination: [-π/2,+π/2]
Declination = Normalize_Symmetrically_Bounded_PI_2(Declination)
# Calculate Local Hour Angle in Degrees (LHA and H)
# sin(H) = - sin(A) * cos(m) / cos(δ)
LHA_sin_1 = np.degrees(np.arcsin(
- np.sin(np.radians(Azimuth)) * np.cos(np.radians(Altitude)) /
np.cos(np.radians(Declination))))
# Normalize result for LHA: [0,+2π[
LHA_sin_1, _ = Normalize_Zero_Bounded(LHA_sin_1, 360)
    # 'np.arcsin()' returns exactly one value for H, but in this case
    # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
LHA_sin_2 = 540 - LHA_sin_1
# Normalize result for LHA: [0,+2π[
LHA_sin_2, _ = Normalize_Zero_Bounded(LHA_sin_2, 360)
    # Calculate LHA (H) with a second method, to determine which one is correct
# cos(H) = (sin(m) - sin(δ) * sin(φ)) / (cos(δ) * cos(φ))
LHA_cos_1 = np.degrees(np.arccos(
(np.sin(np.radians(Altitude)) - np.sin(np.radians(Declination)) * np.sin(np.radians(Latitude))) / \
(np.cos(np.radians(Declination)) * np.cos(np.radians(Latitude)))
))
# Normalize result for LHA: [0,+2π[
LHA_cos_1, _ = Normalize_Zero_Bounded(LHA_cos_1, 360)
    # 'np.arccos()' returns exactly one value for H, but in this case
    # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
LHA_cos_2 = 360 - LHA_cos_1
# Normalize result for LHA: [0,+2π[
LHA_cos_2, _ = Normalize_Zero_Bounded(LHA_cos_2, 360)
# Compare LHA values
# Select correct value for H
if(np.abs(LHA_sin_1 - LHA_cos_1) < 0.00001 or
np.abs(LHA_sin_1 - LHA_cos_2) < 0.00001):
LHA = LHA_sin_1
elif(np.abs(LHA_sin_2 - LHA_cos_1) < 0.00001 or
np.abs(LHA_sin_2 - LHA_cos_2) < 0.00001):
LHA = LHA_sin_2
else:
raise ValueError('The correct LHA value could not be estimated!')
# Convert to hours from angles (H -> t)
LHT = LHA / 15
# Local Mean Sidereal Time: [0,24h[
if (LMST != None):
LMST, _ = Normalize_Zero_Bounded(LMST, 24)
# Calculate Right Ascension (α)
# α = S – t
Right_Ascension = LMST - LHT
else:
print('Lack of given parameters: Right Ascension could not be estimated (LMST missing)!', file=sys.stderr)
Right_Ascension = None
Coordinates = np.array((Declination, Right_Ascension, LHT))
return(Coordinates)
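
# Illustrative call (angles in degrees, LMST in hours); returns the array
# [Declination, Right_Ascension, LHT]:
# Hor_To_Equ_I(Latitude=47.5, Altitude=30.0, Azimuth=120.0, LMST=18.0)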
###Output
_____no_output_____
###Markdown
2. Horizontal to Equatorial II
###Code
def Hor_To_Equ_II(Latitude, Altitude, Azimuth, LMST=None):
# First Convert Horizontal to Equatorial I Coordinates
Coordinates = Hor_To_Equ_I(Latitude, Altitude, Azimuth, LMST)
Declination = Coordinates[0]
Right_Ascension = Coordinates[1]
LHT = Coordinates[2]
# Calculate LMST if it is not known
if(LMST == None):
LMST = LHT + Right_Ascension
# Normalize LMST
# LMST: [0,24h[
LMST, _ = Normalize_Zero_Bounded(LMST, 24)
Coordinates = np.array((Declination, Right_Ascension, LMST))
return(Coordinates)
###Output
_____no_output_____
###Markdown
3. Equatorial I to Horizontal
###Code
def Equ_I_To_Hor(Latitude,
Declination,
Right_Ascension,
LHT=None,
LMST=None,
Altitude=None):
# Input data normalization
# Latitude: [-π/2,+π/2]
# Declination: [-π/2,+π/2]
# Right Ascension: [0h,24h[
Latitude = Normalize_Symmetrically_Bounded_PI_2(Latitude)
Declination = Normalize_Symmetrically_Bounded_PI_2(Declination)
Right_Ascension, _ = Normalize_Zero_Bounded(Right_Ascension, 24)
# Accuracy for calculations
accuracy = 0.00001
# Calculate Local Hour Angle in Hours (t)
if(LMST != None):
# t = S - α
LHT = LMST - Right_Ascension
# Normalize LHT
# LHT: [0h,24h[
LHT, _ = Normalize_Zero_Bounded(LHT, 24)
if(LHT != None):
# Convert LHA to angles from hours (t -> H)
LHA = LHT * 15
# Calculate Altitude (m)
# sin(m) = sin(δ) * sin(φ) + cos(δ) * cos(φ) * cos(H)
        # The result for m will be non-ambiguous and the output of
# the 'np.arcsin()' can be automatically accepted
Altitude = np.degrees(np.arcsin(
np.sin(np.radians(Declination)) * np.sin(np.radians(Latitude)) +
np.cos(np.radians(Declination)) * np.cos(np.radians(Latitude)) *
np.cos(np.radians(LHA))
))
# Normalize result for Altitude: [-π/2,+π/2]
Altitude = Normalize_Symmetrically_Bounded_PI_2(Altitude)
# Calculate Azimuth (A)
# sin(A) = - sin(H) * cos(δ) / cos(m)
Azimuth_sin_1 = np.degrees(np.arcsin(
- np.sin(np.radians(LHA)) *
np.cos(np.radians(Declination)) /
np.cos(np.radians(Altitude))
))
# Normalize result for Azimuth: [0,+2π[
Azimuth_sin_1, _ = Normalize_Zero_Bounded(Azimuth_sin_1, 360)
        # 'np.arcsin()' returns exactly one value for A, but in this case
        # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
        Azimuth_sin_2 = 540 - Azimuth_sin_1
        # Normalize result for Azimuth: [0,+2π[
        Azimuth_sin_2, _ = Normalize_Zero_Bounded(Azimuth_sin_2, 360)
        # Calculate Azimuth (A) with a second method, to determine which one is correct
# cos(A) = (sin(δ) - sin(φ) * sin(m)) / (cos(φ) * cos(m))
Azimuth_cos_1 = np.degrees(np.arccos(
(np.sin(np.radians(Declination)) - np.sin(np.radians(Latitude)) *
np.sin(np.radians(Altitude))) /
(np.cos(np.radians(Latitude)) * np.cos(np.radians(Altitude)))
))
        # 'np.arccos()' returns exactly one value for A, but in this case
        # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
Azimuth_cos_2 = 360 - Azimuth_cos_1
# Normalize result for Azimuth: [0,+2π[
Azimuth_cos_2, _ = Normalize_Zero_Bounded(Azimuth_cos_2, 360)
# Compare Azimuth values
if(np.abs(Azimuth_sin_1 - Azimuth_cos_1) < accuracy or
np.abs(Azimuth_sin_1 - Azimuth_cos_2) < accuracy):
Azimuth = Azimuth_sin_1
elif(np.abs(Azimuth_sin_2 - Azimuth_cos_1) < accuracy or
np.abs(Azimuth_sin_2 - Azimuth_cos_2) < accuracy):
Azimuth = Azimuth_sin_2
else:
raise ValueError('The correct Azimuth value could not be estimated!')
Coordinates = np.array((Altitude, Azimuth))
return(Coordinates)
elif(Altitude != None):
# First check if the object is ever rise above the horizon,
# or if it could exceed the given altitude
        # Culmination altitude: 90° minus the angular distance between
        # the declination and the observer's latitude
        Max_Altitude = 90 - np.abs(Declination - Latitude)
if(Max_Altitude < 0):
raise ValueError('Given object will never rise above the horizon!')
if(Max_Altitude <= Altitude):
raise ValueError('Given object will never rise above the given Altitude!')
# Starting Equations:
# sin(m) = sin(δ) * sin(φ) + cos(δ) * cos(φ) * cos(H)
# We can calculate eg. setting/rising with the available data (m = 0°), or other things...
# First let's calculate LHA:
# cos(H) = (sin(m) - sin(δ) * sin(φ)) / cos(δ) * cos(φ)
LHA_1 = np.degrees(np.arccos(
((np.sin(np.radians(Altitude)) -
np.sin(np.radians(Declination)) *
np.sin(np.radians(Latitude))) /
(np.cos(np.radians(Declination)) *
np.cos(np.radians(Latitude))))))
        # 'np.arccos()' returns exactly one value for H, but in this case
        # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
LHA_2 = 360 - LHA_1
# Normalize LHAs:
LHA_1, _ = Normalize_Zero_Bounded(LHA_1, 360)
LHA_2, _ = Normalize_Zero_Bounded(LHA_2, 360)
#
# Calculate Azimuth (A) for both Local Hour Angles!
#
# Calculate Azimuth (A) for the FIRST LHA
# sin(A) = - sin(H) * cos(δ) / cos(m)
Azimuth_sin_1 = np.degrees(np.arcsin(
- np.sin(np.radians(LHA_1)) *
np.cos(np.radians(Declination)) /
np.cos(np.radians(Altitude))
))
# Normalize result for Azimuth: [0,+2π[
Azimuth_sin_1, _ = Normalize_Zero_Bounded(Azimuth_sin_1, 360)
        # 'np.arcsin()' returns exactly one value for A, but in this case
        # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
        Azimuth_sin_2 = 540 - Azimuth_sin_1
        # Normalize result for Azimuth: [0,+2π[
        Azimuth_sin_2, _ = Normalize_Zero_Bounded(Azimuth_sin_2, 360)
        # Calculate Azimuth (A) with a second method, to determine which one is correct
# cos(A) = (sin(δ) - sin(φ) * sin(m)) / (cos(φ) * cos(m))
Azimuth_cos_1 = np.degrees(np.arccos(
(np.sin(np.radians(Declination)) - np.sin(np.radians(Latitude)) *
np.sin(np.radians(Altitude))) /
(np.cos(np.radians(Latitude)) * np.cos(np.radians(Altitude)))
))
        # 'np.arccos()' returns exactly one value for A, but in this case
        # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
Azimuth_cos_2 = 360 - Azimuth_cos_1
# Normalize result for Azimuth: [0,+2π[
Azimuth_cos_2, _ = Normalize_Zero_Bounded(Azimuth_cos_2, 360)
# Compare Azimuth values
if(np.abs(Azimuth_sin_1 - Azimuth_cos_1) < accuracy or
np.abs(Azimuth_sin_1 - Azimuth_cos_2) < accuracy):
Azimuth_1 = Azimuth_sin_1
elif(np.abs(Azimuth_sin_2 - Azimuth_cos_1) < accuracy or
np.abs(Azimuth_sin_2 - Azimuth_cos_2) < accuracy):
Azimuth_1 = Azimuth_sin_2
else:
raise ValueError('The correct Azimuth value could not be estimated!')
# Calculate Azimuth (A) for the SECOND LHA
# sin(A) = - sin(H) * cos(δ) / cos(m)
Azimuth_sin_1 = np.degrees(np.arcsin(
- np.sin(np.radians(LHA_2)) *
np.cos(np.radians(Declination)) /
np.cos(np.radians(Altitude))
))
# Normalize result for Azimuth: [0,+2π[
Azimuth_sin_1, _ = Normalize_Zero_Bounded(Azimuth_sin_1, 360)
        # 'np.arcsin()' returns exactly one value for A, but in this case
        # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
        Azimuth_sin_2 = 540 - Azimuth_sin_1
        # Normalize result for Azimuth: [0,+2π[
        Azimuth_sin_2, _ = Normalize_Zero_Bounded(Azimuth_sin_2, 360)
        # Calculate Azimuth (A) with a second method, to determine which one is correct
# cos(A) = (sin(δ) - sin(φ) * sin(m)) / (cos(φ) * cos(m))
Azimuth_cos_1 = np.degrees(np.arccos(
(np.sin(np.radians(Declination)) - np.sin(np.radians(Latitude)) *
np.sin(np.radians(Altitude))) /
(np.cos(np.radians(Latitude)) * np.cos(np.radians(Altitude)))
))
        # 'np.arccos()' returns exactly one value for A, but in this case
        # it is ambiguous, because the equation has another solution in the
# correct interval. The second solution is evaluated below.
Azimuth_cos_2 = 360 - Azimuth_cos_1
# Normalize result for Azimuth: [0,+2π[
Azimuth_cos_2, _ = Normalize_Zero_Bounded(Azimuth_cos_2, 360)
# Compare Azimuth values
if(np.abs(Azimuth_sin_1 - Azimuth_cos_1) < accuracy or
np.abs(Azimuth_sin_1 - Azimuth_cos_2) < accuracy):
Azimuth_2 = Azimuth_sin_1
elif(np.abs(Azimuth_sin_2 - Azimuth_cos_1) < accuracy or
np.abs(Azimuth_sin_2 - Azimuth_cos_2) < accuracy):
Azimuth_2 = Azimuth_sin_2
else:
raise ValueError('The correct Azimuth value could not be estimated!')
# Calculate time between them
# Use precalculated LHAs
# H_dil is the time, where the Object stays below the given Altitude
H_dil = np.abs(LHA_1 - LHA_2)
Coordinates = np.array((Azimuth_1, Azimuth_2, H_dil))
return(Coordinates)
else:
raise AttributeError('Either Altitude or LHT values must be given!')
###Output
_____no_output_____
###Markdown
4. Equatorial I to Equatorial II
###Code
def Equ_I_To_Equ_II(Right_Ascension, t):
LMST = t + Right_Ascension
# Normalize LMST
# LMST: [0,24h[
LMST, _ = Normalize_Zero_Bounded(LMST, 24)
Coordinates = np.array((Right_Ascension, LMST))
return(Coordinates)
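
# Example: an object with RA = 5h observed 3h after culmination (t = 3h)
# gives LMST = 8h: Equ_I_To_Equ_II(5, 3) -> array([5, 8])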
###Output
_____no_output_____
###Markdown
5. Equatorial II to Equatorial I
###Code
def Equ_II_To_Equ_I(LMST,
Right_Ascension,
LHT):
# Calculate Right Ascension or Local Mean Sidereal Time
    if(Right_Ascension != None and LHT == None):
        LHT = LMST - Right_Ascension
    elif(Right_Ascension == None and LHT != None):
        Right_Ascension = LMST - LHT
else:
raise AttributeError('Either Right Ascension or LHT values must be given!')
# Normalize LHA
# LHA: [0,24h[
LHT, _ = Normalize_Zero_Bounded(LHT, 24)
# Normalize Right Ascension
# Right Ascension: [0,24h[
Right_Ascension, _ = Normalize_Zero_Bounded(Right_Ascension, 24)
Coordinates = np.array((Right_Ascension, LHT))
return(Coordinates)
###Output
_____no_output_____
###Markdown
6. Equatorial II to Horizontal
###Code
def Equ_II_To_Hor(Latitude,
Declination,
LMST,
Right_Ascension=None,
LHT=None):
# Input data normalization
# Latitude: [-π/2,+π/2]
# Local Mean Sidereal Time: [0h,24h[
# Local Hour Angle: [0h,24h[
# Right Ascension: [0h,24h[
# Declination: [-π/2,+π/2]
Latitude = Normalize_Symmetrically_Bounded_PI_2(Latitude)
LMST, _ = Normalize_Zero_Bounded(LMST, 24)
if(Right_Ascension == None and LHT != None):
LHT, _ = Normalize_Zero_Bounded(LHT, 24)
elif(Right_Ascension != None and LHT == None):
Right_Ascension, _ = Normalize_Zero_Bounded(Right_Ascension, 24)
else:
        raise AttributeError('Either Right Ascension or LHT values must be given!')
Declination = Normalize_Symmetrically_Bounded_PI_2(Declination)
# Convert Equatorial II to Equatorial I
Coordinates = Equ_II_To_Equ_I(LMST,
Right_Ascension,
LHT)
Right_Ascension = Coordinates[0]
LHT = Coordinates[1]
# Convert Equatorial I to Horizontal
Coordinates = Equ_I_To_Hor(Latitude,
Declination,
Right_Ascension,
LHT,
LMST,
                               Altitude=None)
Altitude = Coordinates[0]
Azimuth = Coordinates[1]
# Output data normalization
# Altitude: [-π/2,+π/2]
# Azimuth: [0,+2π[
Altitude = Normalize_Symmetrically_Bounded_PI_2(Altitude)
Azimuth, _ = Normalize_Zero_Bounded(Azimuth, 360)
Coordinates = np.array((Altitude, Azimuth))
return(Coordinates)
###Output
_____no_output_____
###Markdown
2. Geographical distance Sourced from:- https://www.movable-type.co.uk/scripts/latlong.html Calculation methodHere we calculate geographical distance on a sphere, between a pair of given latitudes and longitudes.The **Haversine formula** could be used in this case:$\DeclareMathOperator{\arctantwo}{arctan2}$$$H_{1} = \sin{\left( \frac{\phi_{2} - \phi_{1}}{2} \right)}^{2} + \cos{\left( \phi_{1} \right)} \cdot \cos{\left(\phi_{2} \right)} \cdot \sin{\left( \frac{\lambda_{2} - \lambda_{1}}{2} \right)}^{2}\tag{1}$$$$H_{2} = 2 \cdot \arctantwo{\left( \sqrt{H_{1}}, \sqrt{1 - H_{1}} \right)}\tag{2}$$$$d = R \cdot H_{2}\tag{3}$$Where $\phi_{1}$, $\phi_{1}$ and $\lambda_{1}$, $\lambda_{2}$ are the two choosen points latitudes and longitudes respectively. $R$ is the radius of the given sphere (eg. a planet).
###Code
def Calculate_Dist(Latitude_1, Latitude_2, Longitude_1, Longitude_2):
# Initial Data Normalization
# Latitude: [-π/2,+π/2]
# Longitude: [0,+2π[
Latitude_1 = Normalize_Symmetrically_Bounded_PI_2(Latitude_1)
Latitude_2 = Normalize_Symmetrically_Bounded_PI_2(Latitude_2)
Longitude_1, _ = Normalize_Zero_Bounded(Longitude_1, 360)
Longitude_2, _ = Normalize_Zero_Bounded(Longitude_2, 360)
# Step 1
H_1 = (np.sin(np.radians(Latitude_2 - Latitude_1) / 2))**2 + \
(np.cos(np.radians(Latitude_1)) * np.cos(np.radians(Latitude_2)) * \
(np.sin(np.radians(Longitude_2 - Longitude_1) / 2))**2)
# Step 2
H_2 = 2 * np.arctan2(np.sqrt(H_1), np.sqrt(1 - H_1))
# Step 3
Distance = R_Earth * H_2
return(Distance)
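
# Sanity check (reference value assumed): the Budapest-London great-circle
# distance on a spherical Earth is roughly 1.45e6 m:
# Calculate_Dist(Location_Dict['Budapest'][0], Location_Dict['London'][0],
#                Location_Dict['Budapest'][1], Location_Dict['London'][1])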
###Output
_____no_output_____
###Markdown
3. Calculate exact coordinates of Sun Sun's equatorial coordinates
###Code
def Coordinates_Of_Sun(Planet,
JD):
if Planet not in _VALID_PLANETS:
raise AttributeError('As given, \'{0}\' is not a valid planetary object!'.format(Planet))
# 1. Solar Mean Anomaly
# Mean_Anomaly (M) is the Solar Mean Anomaly used in a few of next equations
# Mean_Anomaly = (M_0 + M_1 * (JD - J2000)) and norm to 360
Mean_Anomaly = Orbit_Dict[Planet + 'M'][0] + Orbit_Dict[Planet + 'M'][1] * (JD - J2000)
# Normalize Result
Mean_Anomaly, _ = Normalize_Zero_Bounded(Mean_Anomaly, 360)
# 2. Equation of the Center
# Equation Of Center (C) is the value needed to calculate Ecliptic Solar Longitude and
# Mean Ecliptic Solar Longitude (see next equation)
# ν = M + C, where ν is the True Solar Anomaly, M is the Mean Solar Anomaly, and C is the Equation of Center
# Equation_Of_Center = C_1 * sin(M) + C_2 * sin(2M) + C_3 * sin(3M) + C_4 * sin(4M) + C_5 * sin(5M) + C_6 * sin(6M)
Equation_Of_Center = (Orbit_Dict[Planet + 'C'][0] * np.sin(np.radians(Mean_Anomaly)) +
Orbit_Dict[Planet + 'C'][1] * np.sin(np.radians(2 * Mean_Anomaly)) +
Orbit_Dict[Planet + 'C'][2] * np.sin(np.radians(3 * Mean_Anomaly)) +
Orbit_Dict[Planet + 'C'][3] * np.sin(np.radians(4 * Mean_Anomaly)) +
Orbit_Dict[Planet + 'C'][4] * np.sin(np.radians(5 * Mean_Anomaly)) +
Orbit_Dict[Planet + 'C'][5] * np.sin(np.radians(6 * Mean_Anomaly)))
# 3. Ecliptic Longitude
# Mean_Ecl_Longitude_Sun (L_sun) in the Mean Ecliptic Longitude
# Ecl_Longitude_Sun (λ) is the Ecliptic Longitude
# Orbit_Dict[Planet + 'Orbit'][0] is a value for the argument of perihelion
Mean_Ecl_Longitude_Sun = Mean_Anomaly + Orbit_Dict[Planet + 'Orbit'][0] + 180
Ecl_Longitude_Sun = Mean_Ecl_Longitude_Sun + Equation_Of_Center
Mean_Ecl_Longitude_Sun, _ = Normalize_Zero_Bounded(Mean_Ecl_Longitude_Sun, 360)
Ecl_Longitude_Sun, _ = Normalize_Zero_Bounded(Ecl_Longitude_Sun, 360)
# 4. Right Ascension of Sun (α)
    # Unit for α is degrees (°)
Right_Ascension_Sun = np.degrees(np.arctan2(
np.sin(np.radians(Ecl_Longitude_Sun)) * np.cos(np.radians(Orbit_Dict[Planet + 'Orbit'][1])),
np.cos(np.radians(Ecl_Longitude_Sun))))
# Approximate form
# PlanetA_2, PlanetA_4 and PlanetA_6 (measured in degrees) are coefficients in the series expansion
    # of the Sun's Right Ascension. They vary for different planets in the Solar System.
# Right_Ascension_Sun
# =
# Ecl_Longitude_Sun + S
# ≈
# Ecl_Longitude_Sun +
# + PlanetA_2 * sin(2 * Ecl_Longitude_Sun) +
# + PlanetA_4 * sin(4 * Ecl_Longitude_Sun) +
# + PlanetA_6 * sin(6 * Ecl_Longitude_Sun)
'''Right_Ascension_Sun = (Ecl_Longitude_Sun +
Orbit_Dict[Planet + 'A'][0] * np.sin(np.radians(2 * Ecl_Longitude_Sun)) +
Orbit_Dict[Planet + 'A'][1] * np.sin(np.radians(4 * Ecl_Longitude_Sun)) +
Orbit_Dict[Planet + 'A'][2] * np.sin(np.radians(6 * Ecl_Longitude_Sun)))'''
# 5. Declination of the Sun (δ)
    # Unit for δ is degrees (°)
Declination_Sun = np.degrees(np.arcsin(
np.sin(np.radians(Ecl_Longitude_Sun)) * np.sin(np.radians(Orbit_Dict[Planet + 'Orbit'][1]))))
# Approximate form
# PlanetD_1, PlanetD_3 and PlanetD_5 (measured in degrees) are coefficients in the series expansion
    # of the Sun's Declination. They vary for different planets in the Solar System.
# Declination_Sun
# =
# PlanetD_1 * sin(Ecl_Longitude_Sun) +
# PlanetD_3 * (sin(Ecl_Longitude_Sun))^3 +
# PlanetD_5 * (sin(Ecl_Longitude_Sun))^5
'''Declination_Sun = (Orbit_Dict[Planet + 'D'][0] * np.sin(np.radians(Ecl_Longitude_Sun)) +
Orbit_Dict[Planet + 'D'][1] * (np.sin(np.radians(Ecl_Longitude_Sun)))**3 +
Orbit_Dict[Planet + 'D'][2] * (np.sin(np.radians(Ecl_Longitude_Sun)))**5)'''
Coordinates = np.array((Right_Ascension_Sun,
Declination_Sun,
Mean_Anomaly,
Equation_Of_Center,
Mean_Ecl_Longitude_Sun,
Ecl_Longitude_Sun))
return(Coordinates)
###Output
_____no_output_____
###Markdown
Sun's hour angle
###Code
def Suns_Local_Hour_Angle(Planet,
Latitude,
Longitude,
Right_Ascension_Sun,
Declination_Sun,
Ecl_Longitude_Sun,
Altitude_Of_Sun,
JD,
Transit=True):
if(Transit):
# Local Hour Angle of Sun (H) from orbital parameters
        # Unit for H is degrees (°)
# ϴ = ϴ_0 + ϴ_1 * (JD - J2000) - l_w (mod 360°)
# H = ϴ - α
        # l_w is positive to the west, while the Longitude argument is positive
        # to the east (matching the W_Longitude = -Longitude convention used in
        # Solar_Transit below)
        W_Longitude = - Longitude
Theta = Orbit_Dict[Planet + "TH"][0] + Orbit_Dict[Planet + "TH"][1] * (JD - J2000) - W_Longitude
Theta, _ = Normalize_Zero_Bounded(Theta, 360)
Local_Hour_Angle_Sun = Theta - Right_Ascension_Sun
else:
# Local Hour Angle of Sun (H) at h = 0
# cos(H) = (sin(m_0) - sin(φ) * sin(δ)) / (cos(φ) * cos(δ))
# Local_Hour_Angle_Sun (t_0) is the Local Hour Angle from the Observer's Zenith
# Latitude (φ) is the North Latitude of the Observer (north is positive, south is negative)
        # m_0 is a correction to the Altitude (m) in degrees for the Sun's apparent disk size and for atmospheric refraction
        # The equation has two solutions, LHA_1 and LHA_2; we need the one approximately equal to LHA_Pos
Local_Hour_Angle_Sun = np.degrees(np.arccos((np.sin(np.radians(Altitude_Of_Sun + Orbit_Dict[Planet + "Orbit"][2])) -
np.sin(np.radians(Latitude)) * np.sin(np.radians(Declination_Sun))) /
(np.cos(np.radians(Latitude)) * np.cos(np.radians(Declination_Sun)))
))
return(Local_Hour_Angle_Sun)
###Output
_____no_output_____
###Markdown
Solar transit
###Code
def Solar_Transit(Planet,
Latitude,
Longitude,
Altitude_Of_Sun,
JD):
# 1. Orbital parameters of Sun
Coordinates = Coordinates_Of_Sun(Planet,
JD)
Right_Ascension_Sun = Coordinates[0]
Declination_Sun = Coordinates[1]
Mean_Anomaly = Coordinates[2]
Equation_Of_Center = Coordinates[3]
Mean_Ecl_Longitude_Sun = Coordinates[4]
Ecl_Longitude_Sun = Coordinates[5]
# 2. Mean Solar Noon
# J_Anomaly is an approximation of Mean Solar Time at W_Longitude expressed as a Julian day with the day fraction
    # W_Longitude (l_w) is the observer's longitude measured positive to the west (east is negative)
W_Longitude = - Longitude
n_x = (JD - J2000 - Orbit_Dict[Planet + "J"][0]) / Orbit_Dict[Planet + "J"][3] - W_Longitude/360
n = np.round(n_x, 0)
# 3. Solar Transit
# Jtransit is the Julian date for the Local True Solar Transit (or Solar Noon)
# Jtransit = J_x + 0.0053 * sin(Mean_Anomaly) - 0.0068 * sin(2 * L_sun)
# "0.0053 * sin(Mean_Anomaly) - 0.0069 * sin(2 * Ecl_Longitude_Sun)" is a simplified version of the equation of time
J_x = JD + Orbit_Dict[Planet + "J"][3] * (n - n_x)
J_transit = (J_x +
Orbit_Dict[Planet + "J"][1] * np.sin(np.radians(Mean_Anomaly)) +
Orbit_Dict[Planet + "J"][2] * np.sin(np.radians(2 * Mean_Ecl_Longitude_Sun)))
# 4. Iterate the calculation of the Solar Transit for greater precision
accuracy = 0.000001
J_transit_old = J_transit
while(True):
Coordinates = Coordinates_Of_Sun(Planet,
J_transit)
Right_Ascension_Sun = Coordinates[0]
Declination_Sun = Coordinates[1]
Mean_Anomaly = Coordinates[2]
Equation_Of_Center = Coordinates[3]
Mean_Ecl_Longitude_Sun = Coordinates[4]
Ecl_Longitude_Sun = Coordinates[5]
Local_Hour_Angle_Sun = Suns_Local_Hour_Angle(Planet,
Latitude,
Longitude,
Right_Ascension_Sun,
Declination_Sun,
Ecl_Longitude_Sun,
Altitude_Of_Sun,
JD=J_transit,
Transit=True)
J_transit -= Local_Hour_Angle_Sun/360 * Orbit_Dict[Planet + "J"][3]
if(np.abs(J_transit_old - J_transit) < accuracy):
break
else:
J_transit_old = J_transit
Coordinates = np.array((Right_Ascension_Sun,
Declination_Sun,
Mean_Anomaly,
Equation_Of_Center,
Mean_Ecl_Longitude_Sun,
Ecl_Longitude_Sun,
J_transit))
return(Coordinates)
###Output
_____no_output_____
###Markdown
5. Calculate Sunrise and Sunset's Datetime Calculate Julian date of setting and rising time of Sun at given date and altitude
###Code
def Calculate_Corrections(Planet,
Latitude,
Longitude,
Altitude_Of_Sun,
J_Time):
# Orbital coordinates of Sun and Julian Date of solar transit
Coordinates = Coordinates_Of_Sun(Planet,
JD=J_Time)
Right_Ascension_Sun = Coordinates[0]
Declination_Sun = Coordinates[1]
Mean_Anomaly = Coordinates[2]
Equation_Of_Center = Coordinates[3]
Mean_Ecl_Longitude_Sun = Coordinates[4]
Ecl_Longitude_Sun = Coordinates[5]
# Calculation of H
#LHA = 90 + \
# Orbit_Dict[Planet + "H"][0] * np.sin(np.radians(Ecl_Longitude_Sun)) * np.tan(np.radians(Latitude)) + \
# Orbit_Dict[Planet + "H"][1] * np.sin(np.radians(Ecl_Longitude_Sun))**3 * np.tan(np.radians(Latitude)) * \
# (3 + np.tan(np.radians(Latitude))**2) + \
# Orbit_Dict[Planet + "H"][2] * np.sin(np.radians(Ecl_Longitude_Sun))**5 * np.tan(np.radians(Latitude)) * \
# (15 + 10 * np.tan(np.radians(Latitude))**2 + 3 * np.tan(np.radians(Latitude))**4)
#LHA = np.degrees(np.arccos(-np.radians(Declination_Sun)*np.radians(Latitude)))
# Calculation of H
LHA = Suns_Local_Hour_Angle(Planet,
Latitude,
Longitude,
Right_Ascension_Sun,
Declination_Sun,
Ecl_Longitude_Sun,
Altitude_Of_Sun,
J_Time,
Transit=True)
# Calculation of H_t
LHA_t = Suns_Local_Hour_Angle(Planet,
Latitude,
Longitude,
Right_Ascension_Sun,
Declination_Sun,
Ecl_Longitude_Sun,
Altitude_Of_Sun,
J_Time,
Transit=False)
return(LHA, LHA_t)
def J_Date_of_Sunrise_and_Sunset(Planet,
Latitude,
Longitude,
Local_Date_Year,
Local_Date_Month,
Local_Date_Day,
Altitude_Of_Sun):
# 0. Calculate Julian Date for solar transit (LT = 12)
JD = Calculate_JD(Date_Year=Local_Date_Year,
Date_Month=Local_Date_Month,
Date_Day=Local_Date_Day,
Longitude=Longitude,
Local_Time=12)
# 1. Orbital coordinates of Sun and Julian Date of solar transit
Coordinates = Solar_Transit(Planet,
Latitude,
Longitude,
Altitude_Of_Sun,
JD)
Right_Ascension_Sun = Coordinates[0]
Declination_Sun = Coordinates[1]
Mean_Anomaly = Coordinates[2]
Equation_Of_Center = Coordinates[3]
Mean_Ecl_Longitude_Sun = Coordinates[4]
Ecl_Longitude_Sun = Coordinates[5]
J_transit = Coordinates[6]
# 2. Calculate Local Hour Angle (H), in which Sun rises and sets
# Where LHA = H_t > 0, corresponds to sunset
# Where LHA = -H_t < 0, corresponds to sunrise
Local_Hour_Angle_Sun = Suns_Local_Hour_Angle(Planet,
Latitude,
Longitude,
Right_Ascension_Sun,
Declination_Sun,
Ecl_Longitude_Sun,
Altitude_Of_Sun,
JD=J_transit,
Transit=False)
# 3. Rising and setting datetimes of the Sun
# J_Rise is the actual Julian date of sunrise
# J_Set is the actual Julian date of sunset
J_Rise = J_transit - Local_Hour_Angle_Sun / 360 * Orbit_Dict[Planet + "J"][3]
J_Set = J_transit + Local_Hour_Angle_Sun / 360 * Orbit_Dict[Planet + "J"][3]
# 4. Apply corrections
accuracy = 0.0001
while(True):
# Sunrise
#
# i. Calculate the Sun's LHA (H) at J_Rise
# Orbital coordinates of Sun and Julian Date of solar transit
Coordinates = Coordinates_Of_Sun(Planet,
JD=J_Rise)
Declination_Sun = Coordinates[1]
# ii. Calculate the H_t correction at J_Rise
LHA, LHA_t = Calculate_Corrections(Planet,
Latitude,
Longitude,
Altitude_Of_Sun,
J_Time=J_Rise)
# LHA = -H < 0 corresponds to sunrise
LHA = Normalize_Symmetrically_Bounded_PI(LHA)
if(LHA < 0):
LHA *= -1
elif(LHA >= 0):
pass
J_Rise_Corr = (-LHA + LHA_t) / 360 * Orbit_Dict[Planet + "J"][3]
J_Rise -= J_Rise_Corr
# Sunset
#
# i. Calculate the Sun's LHA (H) at J_Set
# Orbital coordinates of Sun and Julian Date of solar transit
Coordinates = Coordinates_Of_Sun(Planet,
JD=J_Set)
Declination_Sun = Coordinates[1]
        # ii. Calculate the H_t correction at J_Set
LHA, LHA_t = Calculate_Corrections(Planet,
Latitude,
Longitude,
Altitude_Of_Sun,
J_Time=J_Set)
# LHA = H > 0 corresponds to sunset
LHA = Normalize_Symmetrically_Bounded_PI(LHA)
if(LHA < 0):
LHA *= -1
elif(LHA >= 0):
pass
J_Set_Corr = (LHA - LHA_t) / 360 * Orbit_Dict[Planet + "J"][3]
J_Set -= J_Set_Corr
if(np.abs(J_Rise_Corr) < accuracy and np.abs(J_Set_Corr) < accuracy):
break
return(J_Rise, J_Set, J_transit)
###Output
_____no_output_____
###Markdown
Calculate local time of setting and rising time of Sun for given date and altitude
###Code
def Sunset_and_Sunrise_Date_Time(Planet,
Latitude,
Longitude,
Local_Date_Year,
Local_Date_Month,
Local_Date_Day,
Altitude_Of_Sun):
J_Rise, J_Set, J_transit = J_Date_of_Sunrise_and_Sunset(Planet,
Latitude,
Longitude,
Local_Date_Year,
Local_Date_Month,
Local_Date_Day,
Altitude_Of_Sun)
#
# SUNRISE
#
J_Rise -= 0.5
Sunrise_Universal_Time = (J_Rise - int(J_Rise)) * 24
print("Sunrise UTC: {0}".format(datetime.timedelta(hours=Sunrise_Universal_Time)))
Sunrise_Universal_Date_Time = Normalize_Time_Parameters(Sunrise_Universal_Time,
Local_Date_Year,
Local_Date_Month,
Local_Date_Day)
# Convert results to Local Time
Sunrise_Local_Date_Time = UT_To_LT(Longitude,
Sunrise_Universal_Date_Time[0],
int(Sunrise_Universal_Date_Time[1]),
int(Sunrise_Universal_Date_Time[2]),
int(Sunrise_Universal_Date_Time[3]))
#
# SUNSET
#
J_Set -= 0.5
Sunset_Universal_Time = (J_Set - int(J_Set)) * 24
print("Sunset UTC: {0}".format(datetime.timedelta(hours=Sunset_Universal_Time)))
Sunset_Universal_Date_Time = Normalize_Time_Parameters(Sunset_Universal_Time,
Local_Date_Year,
Local_Date_Month,
Local_Date_Day)
# Convert results to Local Time
Sunset_Local_Date_Time = UT_To_LT(Longitude,
Sunset_Universal_Date_Time[0],
int(Sunset_Universal_Date_Time[1]),
int(Sunset_Universal_Date_Time[2]),
int(Sunset_Universal_Date_Time[3]))
return(Sunrise_Local_Date_Time, Sunset_Local_Date_Time)
###Output
_____no_output_____
###Markdown
Print local time of setting and rising time of Sun for given date and altitude
###Code
def Local_Time_for_Sun_at_Altitude(Planet,
Latitude,
Longitude,
Local_Date_Year,
Local_Date_Month,
Local_Date_Day,
Altitude_Of_Sun):
Sunrise_Local_Date_Time, Sunset_Local_Date_Time = Sunset_and_Sunrise_Date_Time(Planet=Planet,
Latitude=Latitude,
Longitude=Longitude,
Local_Date_Year=Local_Date_Year,
Local_Date_Month=Local_Date_Month,
Local_Date_Day=Local_Date_Day,
Altitude_Of_Sun=Altitude_Of_Sun)
Sunrise = Sunrise_Local_Date_Time[0]
Sunset = Sunset_Local_Date_Time[0]
print('Sun\'s rising at altitude {0}° on {1}.{2}.{3} is at {4}'.format(Altitude_Of_Sun,
int(Sunrise_Local_Date_Time[1]),
int(Sunrise_Local_Date_Time[2]),
int(Sunrise_Local_Date_Time[3]),
str(datetime.timedelta(hours=Sunrise))))
print('Sun\'s setting at altitude {0}° on {1}.{2}.{3} is at {4}'.format(Altitude_Of_Sun,
int(Sunset_Local_Date_Time[1]),
int(Sunset_Local_Date_Time[2]),
int(Sunset_Local_Date_Time[3]),
str(datetime.timedelta(hours=Sunset))))
Local_Time_for_Sun_at_Altitude(Planet='Earth',
Latitude=52,#Location_Dict['Budapest'][0],
Longitude=-5,#Location_Dict['Budapest'][1],
Local_Date_Year=2004,
Local_Date_Month=4,
Local_Date_Day=1,
Altitude_Of_Sun=0)
Coordinates = Coordinates_Of_Sun(Planet='Earth',
JD=Calculate_JD(Date_Year=2019,
Date_Month=8,
Date_Day=25,
Longitude=19.0402,
Local_Time=(11 + 33/60)))
Right_Ascension_Sun = Coordinates[0]
Declination_Sun = Coordinates[1]
Mean_Anomaly = Coordinates[2]
Equation_Of_Center = Coordinates[3]
Mean_Ecl_Longitude_Sun = Coordinates[4]
Ecl_Longitude_Sun = Coordinates[5]
print("Right Ascension of Sun: " +
str(int(Right_Ascension_Sun)) + "° " +
str(int((Right_Ascension_Sun - int(Right_Ascension_Sun)) * 60)) + "\' " +
str(((Right_Ascension_Sun - int(Right_Ascension_Sun)) * 60 -
int((Right_Ascension_Sun - int(Right_Ascension_Sun)) * 60)) * 60) + "\" ")
print("Declination of Sun: " +
str(int(Declination_Sun)) + "° " +
str(int((Declination_Sun - int(Declination_Sun)) * 60)) + "\' " +
str(((Declination_Sun - int(Declination_Sun)) * 60 -
int((Declination_Sun - int(Declination_Sun)) * 60)) * 60) + "\" ")
Azimuth_First, Azimuth_Second, H_dil = Equ_I_To_Hor(Latitude=47.4979,
Declination=Declination_Sun,
Right_Ascension=Right_Ascension_Sun,
                                                    LHT=None,
                                                    LMST=None,
Altitude=0)
print(Azimuth_First, Azimuth_Second)
###Output
270.81770703919443 89.18229296080555
|
S01 - Bootcamp and Binary Classification/SLU04 - Basic Stats with Pandas/Examples notebook.ipynb | ###Markdown
SLU04 - Basic Stats with Pandas: Examples notebook
###Code
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
from utils import prepare_dataset, get_company_salaries_and_plot, plot_log_function
# better dpi in plots
matplotlib.rcParams['figure.dpi'] = 200
lego = pd.read_csv('data/sets.csv')
###Output
_____no_output_____
###Markdown
Count
###Code
lego.count()
###Output
_____no_output_____
###Markdown
Max, idxmax, min, idxmin
###Code
lego.num_parts.max()
lego.num_parts.idxmax()
lego.loc[lego.num_parts.idxmax(), 'set_num']
lego.num_parts.idxmin()
lego.num_parts.min()
lego = lego.drop(lego.loc[lego.num_parts == lego.num_parts.min()].index, axis=0)
###Output
_____no_output_____
###Markdown
Mode
###Code
lego.year.mode() # seems like 2014 was the year with the most sets published
###Output
_____no_output_____
###Markdown
Mean and Median
###Code
lego.num_parts.mean() # In code it's as simple as this: the mean is about 162 parts per set
lego.num_parts.median()
###Output
_____no_output_____
###Markdown
Skewness
###Code
lego.num_parts.plot.hist(bins=50, figsize=(12, 6))
plt.xlim(0,2000)
plt.xlabel('num_parts')
plt.title('Distribution of number of parts of Lego sets');
lego.num_parts.skew()
###Output
_____no_output_____
###Markdown
Standard Deviation and Variance
###Code
print('Mean: %0.2f' % lego.num_parts.mean())
print('Variance: %0.2f' % lego.num_parts.var())
print('Standard Deviation: %0.2f' % lego.num_parts.std())
###Output
Mean: 162.30
Variance: 109048.00
Standard Deviation: 330.22
###Markdown
Kurtosis
###Code
company_a, company_b = get_company_salaries_and_plot()
company_a.kurt()
company_b.kurt()
###Output
_____no_output_____
###Markdown
Quantiles
###Code
quartiles = [.25, .5, .75]
lego.num_parts.quantile(q=quartiles)
###Output
_____no_output_____
###Markdown
Summarizing
###Code
lego.num_parts.describe()
###Output
_____no_output_____
###Markdown
Unique and nunique
###Code
lego.year.unique()
lego.year.nunique()
lego.nunique()
###Output
_____no_output_____
###Markdown
Dealing with outliers and skewed distributions Log transformation
###Code
lego_non_zero = lego.drop(lego.loc[lego.num_parts == 0].index, axis=0)
lego_non_zero.num_parts.plot.hist(bins=100, figsize=(14, 6))
plt.xlim(0,2000)
plt.xlabel('num_parts')
plt.title('Distribution of number of parts of Lego sets');
lego_non_zero['log_num_parts'] = np.log(lego_non_zero.num_parts)
lego_non_zero.log_num_parts.plot.hist(bins=50, figsize=(14, 6))
plt.xlabel('log_num_parts')
plt.title('Distribution of number of parts of Lego sets');
###Output
_____no_output_____ |
PennyLane/Data Reuploading Classifier/DRC MNIST MultiClass PCA Keras (8 class).ipynb | ###Markdown
Loading Raw Data
###Code
# Imports used below (the notebook relies on TensorFlow/Keras and NumPy)
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train_flatten = x_train.reshape(x_train.shape[0], x_train.shape[1]*x_train.shape[2])/255.0
x_test_flatten = x_test.reshape(x_test.shape[0], x_test.shape[1]*x_test.shape[2])/255.0
print(x_train_flatten.shape, y_train.shape)
print(x_test_flatten.shape, y_test.shape)
x_train_0 = x_train_flatten[y_train == 0]
x_train_1 = x_train_flatten[y_train == 1]
x_train_2 = x_train_flatten[y_train == 2]
x_train_3 = x_train_flatten[y_train == 3]
x_train_4 = x_train_flatten[y_train == 4]
x_train_5 = x_train_flatten[y_train == 5]
x_train_6 = x_train_flatten[y_train == 6]
x_train_7 = x_train_flatten[y_train == 7]
x_train_8 = x_train_flatten[y_train == 8]
x_train_9 = x_train_flatten[y_train == 9]
x_train_list = [x_train_0, x_train_1, x_train_2, x_train_3, x_train_4, x_train_5, x_train_6, x_train_7, x_train_8, x_train_9]
print(x_train_0.shape)
print(x_train_1.shape)
print(x_train_2.shape)
print(x_train_3.shape)
print(x_train_4.shape)
print(x_train_5.shape)
print(x_train_6.shape)
print(x_train_7.shape)
print(x_train_8.shape)
print(x_train_9.shape)
x_test_0 = x_test_flatten[y_test == 0]
x_test_1 = x_test_flatten[y_test == 1]
x_test_2 = x_test_flatten[y_test == 2]
x_test_3 = x_test_flatten[y_test == 3]
x_test_4 = x_test_flatten[y_test == 4]
x_test_5 = x_test_flatten[y_test == 5]
x_test_6 = x_test_flatten[y_test == 6]
x_test_7 = x_test_flatten[y_test == 7]
x_test_8 = x_test_flatten[y_test == 8]
x_test_9 = x_test_flatten[y_test == 9]
x_test_list = [x_test_0, x_test_1, x_test_2, x_test_3, x_test_4, x_test_5, x_test_6, x_test_7, x_test_8, x_test_9]
print(x_test_0.shape)
print(x_test_1.shape)
print(x_test_2.shape)
print(x_test_3.shape)
print(x_test_4.shape)
print(x_test_5.shape)
print(x_test_6.shape)
print(x_test_7.shape)
print(x_test_8.shape)
print(x_test_9.shape)
###Output
(980, 784)
(1135, 784)
(1032, 784)
(1010, 784)
(982, 784)
(892, 784)
(958, 784)
(1028, 784)
(974, 784)
(1009, 784)
###Markdown
Selecting the dataset Output: X_train, Y_train, X_test, Y_test
###Code
num_sample = 200
n_class = 8
mult_test = 0.25
X_train = x_train_list[0][:num_sample, :]
X_test = x_test_list[0][:int(mult_test*num_sample), :]
Y_train = np.zeros((n_class*X_train.shape[0],), dtype=int)
Y_test = np.zeros((n_class*X_test.shape[0],), dtype=int)
for i in range(n_class-1):
X_train = np.concatenate((X_train, x_train_list[i+1][:num_sample, :]), axis=0)
Y_train[num_sample*(i+1):num_sample*(i+2)] = int(i+1)
X_test = np.concatenate((X_test, x_test_list[i+1][:int(mult_test*num_sample), :]), axis=0)
Y_test[int(mult_test*num_sample*(i+1)):int(mult_test*num_sample*(i+2))] = int(i+1)
print(X_train.shape, Y_train.shape)
print(X_test.shape, Y_test.shape)
###Output
(1600, 784) (1600,)
(400, 784) (400,)
###Markdown
Dataset Preprocessing (Standardization + PCA) Standardization
###Code
def normalize(X, use_params=False, params=None):
"""Normalize the given dataset X
Args:
X: ndarray, dataset
Returns:
(Xbar, mean, std): tuple of ndarray, Xbar is the normalized dataset
with mean 0 and standard deviation 1; mean and std are the
mean and standard deviation respectively.
Note:
        Dimensions with zero standard deviation would yield NaNs when
        normalizing; this is avoided below by adding a small epsilon to std.
"""
    if use_params:
        mu = params[0]
        std = params[1]
    else:
        mu = np.mean(X, axis=0)
        std = np.std(X, axis=0)
#std_filled = std.copy()
#std_filled[std==0] = 1.
Xbar = (X - mu)/(std + 1e-8)
return Xbar, mu, std
X_train, mu_train, std_train = normalize(X_train)
X_train.shape, Y_train.shape
X_test = (X_test - mu_train)/(std_train + 1e-8)
X_test.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
PCA
###Code
from sklearn.decomposition import PCA
from matplotlib import pyplot as plt
num_component = 27
pca = PCA(n_components=num_component, svd_solver='full')
pca.fit(X_train)
np.cumsum(pca.explained_variance_ratio_)
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)
print(X_train.shape, Y_train.shape)
print(X_test.shape, Y_test.shape)
###Output
(1600, 27) (1600,)
(400, 27) (400,)
###Markdown
Norm
###Code
X_train = (X_train.T / np.sqrt(np.sum(X_train ** 2, -1))).T
X_test = (X_test.T / np.sqrt(np.sum(X_test ** 2, -1))).T
plt.scatter(X_train[:100, 0], X_train[:100, 1])
plt.scatter(X_train[100:200, 0], X_train[100:200, 1])
plt.scatter(X_train[200:300, 0], X_train[200:300, 1])
###Output
_____no_output_____
###Markdown
Quantum
###Code
import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import AdamOptimizer, GradientDescentOptimizer
qml.enable_tape()
# Set a random seed
np.random.seed(42)
# Define output labels as quantum state vectors
# def density_matrix(state):
# """Calculates the density matrix representation of a state.
# Args:
# state (array[complex]): array representing a quantum state vector
# Returns:
# dm: (array[complex]): array representing the density matrix
# """
# return state * np.conj(state).T
label_0 = [[1], [0]]
label_1 = [[0], [1]]
def density_matrix(state):
"""Calculates the density matrix representation of a state.
Args:
state (array[complex]): array representing a quantum state vector
Returns:
dm: (array[complex]): array representing the density matrix
"""
return np.outer(state, np.conj(state))
#state_labels = [label_0, label_1]
#state_labels = np.loadtxt('./tetra_states.txt', dtype=np.complex_)
state_labels = np.loadtxt('./square_states.txt', dtype=np.complex_)
dm_labels = [density_matrix(state_labels[i]) for i in range(8)]
len(dm_labels)
dm_labels
n_qubits = 8 # number of classes (one qubit per class)
dev_fc = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev_fc)
def q_fc(params, inputs):
"""A variational quantum circuit representing the DRC.
Args:
params (array[float]): array of parameters
        inputs (array[float]): 1-d input feature vector
    Returns:
        list[float]: fidelity of each qubit's output state with its class label state
"""
# layer iteration
for l in range(len(params[0])):
# qubit iteration
for q in range(n_qubits):
# gate iteration
for g in range(int(len(inputs)/3)):
qml.Rot(*(params[0][l][3*g:3*(g+1)] * inputs[3*g:3*(g+1)] + params[1][l][3*g:3*(g+1)]), wires=q)
return [qml.expval(qml.Hermitian(dm_labels[i], wires=[i])) for i in range(n_qubits)]
X_train[0].shape
a = np.random.uniform(size=(2, 1, 27))
q_fc(a, X_train[0])
tetra_class = np.loadtxt('./tetra_class_label.txt')
binary_class = np.array([[1, 0], [0, 1]])
square_class = np.array(np.loadtxt('./square_class_label.txt', dtype=np.complex_), dtype=float)
class_labels = square_class
class_labels
n_class = 8
temp = np.zeros((len(Y_train), n_class))
for i in range(len(Y_train)):
temp[i, :] = class_labels[Y_train[i]]
Y_train = temp
temp = np.zeros((len(Y_test), n_class))
for i in range(len(Y_test)):
temp[i, :] = class_labels[Y_test[i]]
Y_test = temp
Y_train.shape, Y_test.shape
from keras import backend as K
# Alpha Custom Layer
class class_weights(tf.keras.layers.Layer):
def __init__(self):
super(class_weights, self).__init__()
w_init = tf.random_normal_initializer()
self.w = tf.Variable(
initial_value=w_init(shape=(1, n_class), dtype="float32"),
trainable=True,
)
def call(self, inputs):
return (inputs * self.w)
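
# Note: this layer scales the n_class fidelities returned by the quantum
# circuit element-wise with one trainable weight per class; it is the
# learnable scaling used as 'alpha_layer_0' in the model below.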
n_component = 27
X = tf.keras.Input(shape=(n_component,), name='Input_Layer')
# Quantum FC Layer, trainable params = 18*L*n_class + 2, output size = 2
num_fc_layer = 3
q_fc_layer_0 = qml.qnn.KerasLayer(q_fc, {"params": (2, num_fc_layer, n_component)}, output_dim=n_class)(X)
# Alpha Layer
alpha_layer_0 = class_weights()(q_fc_layer_0)
model = tf.keras.Model(inputs=X, outputs=alpha_layer_0)
model(X_train[0:32])
opt = tf.keras.optimizers.Adam(learning_rate=0.1)
model.compile(opt, loss='mse', metrics=["accuracy"])
H = model.fit(X_train, Y_train, epochs=10, batch_size=64, initial_epoch=0,
validation_data=(X_test, Y_test), verbose=1)
model.weights
###Output
_____no_output_____ |
11TF-IDF+xgboost.ipynb | ###Markdown
1 Data preparation
###Code
import codecs

import numpy as np
import xgboost as xgb
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction.text import TfidfVectorizer

# 1 Load the data
labels = []
text = []
with codecs.open('output/data_clean_split.txt','r',encoding='utf-8') as f:
document_split = f.readlines()
for document in document_split:
temp = document.split('\t')
labels.append(temp[0])
text.append(temp[1].strip())
# 2 Encode the labels as integers
label_encoder = LabelEncoder()
y = label_encoder.fit_transform(labels)
# 3 Extract text features with TF-IDF
tfv1 = TfidfVectorizer(min_df=4,
max_df=0.6)
tfv1.fit(text)
features = tfv1.transform(text)
# 4 Split the dataset
from sklearn.model_selection import train_test_split
x_train_tfv, x_valid_tfv, y_train, y_valid = train_test_split(features, y,
stratify=y,
random_state=42,
test_size=0.1, shuffle=True)
###Output
_____no_output_____
###Markdown
2 Define the Loss Function
###Code
def multiclass_logloss(actual, predicted, eps=1e-15):
"""对数损失度量(Logarithmic Loss Metric)的多分类版本。
:param actual: 包含actual target classes的数组
:param predicted: 分类预测结果矩阵, 每个类别都有一个概率
"""
# Convert 'actual' to a binary array if it's not already:
if len(actual.shape) == 1:
actual2 = np.zeros((actual.shape[0], predicted.shape[1]))
for i, val in enumerate(actual):
actual2[i, val] = 1
actual = actual2
clip = np.clip(predicted, eps, 1 - eps)
rows = actual.shape[0]
vsota = np.sum(actual * np.log(clip))
return -1.0 / rows * vsota
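# Toy sanity check (illustrative values only, not from the dataset): perfect
# one-hot predictions give a loss of essentially zero, up to the eps clipping.
_toy_actual = np.array([0, 1, 2])
_toy_pred = np.eye(3)
print(multiclass_logloss(_toy_actual, _toy_pred))  # ~0.0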
###Output
_____no_output_____
###Markdown
3 Classify with the Model
###Code
# Train xgboost on the TF-IDF features
clf = xgb.XGBClassifier(max_depth=7, n_estimators=200, colsample_bytree=0.8,
subsample=0.8, nthread=10, learning_rate=0.1)
clf.fit(x_train_tfv.tocsc(), y_train)
predictions = clf.predict_proba(x_valid_tfv.tocsc())
print ("logloss: %0.3f " % multiclass_logloss(y_valid, predictions))
###Output
logloss: 0.225
|
AAAI/Learnability/CIN/older/ds3/synthetic_type3_MLP2_m_100.ipynb | ###Markdown
Generate dataset
###Code
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [6,6],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [5.5,6.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
np.reshape(a,(2*m,1))
desired_num = 2000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(m):
print(mosaic_list_of_images[0][2*j:2*j+2])
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):
"""
mosaic_dataset : mosaic_dataset contains 9 images 32 x 32 each as 1 data point
labels : mosaic_dataset labels
foreground_index : contains list of indexes where foreground image is present so that using this we can take weighted average
dataset_number : will help us to tell what ratio of foreground image to be taken. for eg: if it is "j" then fg_image_ratio = j/9 , bg_image_ratio = (9-j)/8*9
"""
avg_image_dataset = []
cnt = 0
counter = np.zeros(m) #np.array([0,0,0,0,0,0,0,0,0])
for i in range(len(mosaic_dataset)):
img = torch.zeros([2], dtype=torch.float64)
np.random.seed(int(dataset_number*10000 + i))
give_pref = foreground_index[i] #np.random.randint(0,9)
# print("outside", give_pref,foreground_index[i])
for j in range(m):
if j == give_pref:
img = img + mosaic_dataset[i][2*j:2*j+2]*dataset_number/m #2 is data dim
else :
img = img + mosaic_dataset[i][2*j:2*j+2]*(m-dataset_number)/((m-1)*m)
if give_pref == foreground_index[i] :
# print("equal are", give_pref,foreground_index[i])
cnt += 1
counter[give_pref] += 1
else :
counter[give_pref] += 1
avg_image_dataset.append(img)
print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
print("the averaging are done as ", counter)
return avg_image_dataset , labels , foreground_index
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1, m)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000], fore_idx[1000:2000] , m, m)
avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# avg_image_dataset_1 = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
print("=="*40)
test_dataset = torch.stack(test_dataset, axis = 0)
# test_dataset = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
print("=="*40)
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("dataset4 CIN with alpha = 1/"+str(m))
x1 = (test_dataset).numpy() / m
y1 = np.array(labels)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("test dataset4")
test_dataset[0:10]/m
test_dataset = test_dataset/m
test_dataset[0:10]
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape
avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
testdata_11 = MosaicDataset(test_dataset, labels )
testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
# class Whatnet(nn.Module):
# def __init__(self):
# super(Whatnet,self).__init__()
# self.linear1 = nn.Linear(2,3)
# # self.linear2 = nn.Linear(50,10)
# # self.linear3 = nn.Linear(10,3)
# torch.nn.init.xavier_normal_(self.linear1.weight)
# torch.nn.init.zeros_(self.linear1.bias)
# def forward(self,x):
# # x = F.relu(self.linear1(x))
# # x = F.relu(self.linear2(x))
# x = (self.linear1(x))
# return x
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,50)
self.linear2 = nn.Linear(50,10)
self.linear3 = nn.Linear(10,3)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
torch.nn.init.xavier_normal_(self.linear3.weight)
torch.nn.init.zeros_(self.linear3.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = (self.linear3(x))
return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
    return r_loss / (i + 1)  # average over batches; enumerate is 0-based
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the 1000 test dataset %d: %.2f %%' % (number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=0.001 ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1500
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 1000 train images: %.2f %%' % ( 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi
train_loss_all=[]
testloader_list= [ testloader_1, testloader_11]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
%matplotlib inline
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____ |
notebooks/dlbs_models.ipynb | ###Markdown
Summary of models from Deep Learning Benchmarking Suite. Full list of models is [here](https://hewlettpackard.github.io/dlcookbook-dlbs//models/models?id=models). From a high-level point of view: 1. FLOPs are multiply-add operations in dense and convolutional layers. 2. The algorithm for estimating memory requirements is very naive and will be updated. 3. Memory for training is approximately twice the inference memory. 4. FLOPs: ``` gFLOPs(backward) = 2 * gFLOPs(forward) gFLOPs(training) = 3 * gFLOPs(forward) ``` 5. FLOPs and memory are provided for one instance (== batch size is 1). Click for [summary](#Summary). Or see below for details. **Models** 1. [English acoustic model](#English-acoustic-model) 2. [AlexNet](#AlexNet) 3. [AlexNet OWT](#AlexNetOWT) 4. [Deep MNIST](#DeepMNIST) 5. [VGG-11](#VGG11) 6. [VGG-13](#VGG13) 7. [VGG-16](#VGG16) 8. [VGG-19](#VGG19) 9. [Overfeat](#Overfeat) 10. [ResNet-18](#ResNet18) 11. [ResNet-34](#ResNet34) 12. [ResNet-50](#ResNet50) 13. [ResNet-101](#ResNet101) 14. [ResNet-152](#ResNet152) 15. [ResNet-200](#ResNet200) 16. [ResNet-269](#ResNet269)
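A tiny sketch of the FLOP bookkeeping above (the helper below is only an illustration of the stated convention, not part of the `nns` package):
###Code
# Per the convention above: backward = 2 * forward, so training = 3 * forward.
def training_gflops(forward_gflops):
    backward_gflops = 2 * forward_gflops
    return forward_gflops + backward_gflops
print(training_gflops(1.0))  # a 1 gFLOP forward pass -> 3 gFLOPs per training step
###Output
_____no_output_____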
###Code
from nns.nns import (estimate, printable_dataframe)
from nns.models import dlbs as models
# Inference and training model summaries - name, shapes, parameters, gFLOPs, activations. Each element is a
# dictionary. I will later convert that into Pandas data frame and will print that.
inference = []
training = []
###Output
_____no_output_____
###Markdown
[English acoustic model](http://ethereon.github.io/netscope//gist/10f5dee56b6f7bbb5da26749bd37ae16)
###Code
estimate(models.EnglishAcousticModel(), inference, training)
###Output
_____no_output_____
###Markdown
[AlexNet](http://ethereon.github.io/netscope//gist/5c94a074f4e4ac4b81ee28a796e04b5d)
###Code
estimate(models.AlexNet(), inference, training)
###Output
_____no_output_____
###Markdown
[AlexNetOWT](http://ethereon.github.io/netscope//gist/dc85cc15d59d720c8a18c4776abc9fd5)
###Code
estimate(models.AlexNet(version='owt'), inference, training)
###Output
_____no_output_____
###Markdown
[DeepMNIST](http://ethereon.github.io/netscope//gist/9c75cd95891207082bd42264eb7a2706)
###Code
estimate(models.DeepMNIST(), inference, training)
###Output
_____no_output_____
###Markdown
[VGG11](http://ethereon.github.io/netscope//gist/5550b93fb51ab63d520af5be555d691f)
###Code
estimate(models.VGG(version='vgg11'), inference, training)
###Output
_____no_output_____
###Markdown
[VGG13](http://ethereon.github.io/netscope//gist/a96ba317064a61b22a1742bd05c54816)
###Code
estimate(models.VGG(version='vgg13'), inference, training)
###Output
_____no_output_____
###Markdown
[VGG16](http://ethereon.github.io/netscope//gist/050efcbb3f041bfc2a392381d0aac671)
###Code
estimate(models.VGG(version='vgg16'), inference, training)
###Output
_____no_output_____
###Markdown
[VGG19](http://ethereon.github.io/netscope//gist/f9e55d5947ac0043973b32b7ff51b778)
###Code
estimate(models.VGG(version='vgg19'), inference, training)
###Output
_____no_output_____
###Markdown
[Overfeat](http://ethereon.github.io/netscope//gist/ebfeff824393bcd66a9ceb851d8e5bde)
###Code
estimate(models.Overfeat(), inference, training)
###Output
_____no_output_____
###Markdown
[ResNet18](http://ethereon.github.io/netscope//gist/649e0fb6c96c60c9f0abaa339da3cd27)
###Code
estimate(models.ResNet(version='resnet18'), inference, training)
###Output
Layer not recognized (type=<class 'tensorflow.python.keras.engine.input_layer.InputLayer'>, name=input)
###Markdown
[ResNet34](http://ethereon.github.io/netscope//gist/277a9604370076d8eed03e9e44e23d53)
###Code
estimate(models.ResNet(version='resnet34'), inference, training)
###Output
Layer not recognized (type=<class 'tensorflow.python.keras.engine.input_layer.InputLayer'>, name=input)
###Markdown
[ResNet50](http://ethereon.github.io/netscope//gist/db945b393d40bfa26006)
###Code
estimate(models.ResNet(version='resnet50'), inference, training)
###Output
Layer not recognized (type=<class 'tensorflow.python.keras.engine.input_layer.InputLayer'>, name=input)
###Markdown
[ResNet101](http://ethereon.github.io/netscope//gist/b21e2aae116dc1ac7b50)
###Code
estimate(models.ResNet(version='resnet101'), inference, training)
###Output
Layer not recognized (type=<class 'tensorflow.python.keras.engine.input_layer.InputLayer'>, name=input)
###Markdown
[ResNet152](http://ethereon.github.io/netscope//gist/d38f3e6091952b45198b)
###Code
estimate(models.ResNet(version='resnet152'), inference, training)
###Output
Layer not recognized (type=<class 'tensorflow.python.keras.engine.input_layer.InputLayer'>, name=input)
###Markdown
[ResNet200](http://ethereon.github.io/netscope//gist/38a20d8dd1a4725d12659c8e313ab2c7)
###Code
estimate(models.ResNet(version='resnet200'), inference, training)
###Output
Layer not recognized (type=<class 'tensorflow.python.keras.engine.input_layer.InputLayer'>, name=input)
###Markdown
[ResNet269](http://ethereon.github.io/netscope//gist/fbf7c67565523a9ac2c349aa89c5e78d)
###Code
estimate(models.ResNet(version='resnet269'), inference, training)
###Output
Layer not recognized (type=<class 'tensorflow.python.keras.engine.input_layer.InputLayer'>, name=input)
###Markdown
Summary * Input shape column does not include the batch dimension, which is always the first dimension. * `GFLOPs` are multiply-add operations for batch size 1 for one inference or one training pass. These values should not be used to compute times; instead, use them for comparing models. * Activation size is the memory required to store activations (batch size 1). The algorithm now used to estimate these numbers is very naive, so, again, use these numbers only to compare models. Inference
###Code
printable_dataframe(inference)
###Output
_____no_output_____
###Markdown
Training
###Code
printable_dataframe(training)
###Output
_____no_output_____ |
pycon2017_cffi.ipynb | ###Markdown
CFFI, Ctypes, Cython, Cppyy: The good, the bad and the ugly Pycon Israel 2017 Matti Picus You can follow along at https://github.com/mattip/pycon2017_cffi/blob/master/pycon2017_cffi.ipynb  Thanks for coming, I too would rather be in the lecture about Grumpy and PyPy next door. Here is what we will do: The ``mandel`` image (5 minutes) - Pure python - Pure C - Timing it. How to mix C and Python (10-15 minutes) - Ctypes - CFFI - Cython. Comparison - which is the good, the bad, and the ugly (5-10 minutes) - Boilerplate - Maintenance - Speed. A pop quiz. Questions
###Code
from __future__ import print_function, division
%matplotlib notebook
from timeit import default_timer as timer
import numpy as np
from PIL import Image
import subprocess
import os
from matplotlib.pylab import imshow, show, figure, subplots
###Output
_____no_output_____
###Markdown
Our mission: to create a fractal image. Hmm, what is an image? We decide to define a simple structure to hold the image: width, height, data-as-pointer
###Code
class Img(object):
def __init__(self, width, height):
self.width = width
self.height = height
self.data = bytearray(width*height)
width = 1500
height = 1000
image = Img(width, height)
###Output
_____no_output_____
###Markdown
Now we design an API where we loop over the image, doing a calculation at each pixel location. For reasons known to only a select few, we normalize the horizontal values to be from -2 to 1 and the vertical values to -1 to 1, and then call a function with these normalized values. Also, our system architect is adamant that every function return a status, so our calculation function must accept a pointer to the value to be returned. This makes more sense in C, but can be done in python as well, although awkwardly. We use ``oneval`` as an object that can be passed in as a "pointer". The looping function: these links are so we can jump forward when we come back to look again at this function: [as used in ctypes](#Ctypes-use) [as used in CFFI](#CFFI-use) [as used in Cython](#Cython-use) [as used in Cppyy](#Cppyy-use)
###Code
def create_fractal(image, iters, func, oneval):
''' Call a function for each pixel in the image, where
-2 < real < 1 over the columns and
-1 < imag < 1 over the rows
'''
pixel_size_x = 3.0 / image.width
pixel_size_y = 2.0 / image.height
for y in range(image.height):
imag = y * pixel_size_y - 1
yy = y * image.width
for x in range(image.width):
real = x * pixel_size_x - 2
ret = func(real, imag, iters, oneval) # <---- HERE is the real work
if ret < 0:
return ret
image.data[yy + x] = oneval[0]
return 0
# This is the calculating function in python
def mandel(x, y, max_iters, value):
"""
Given the real and imaginary parts of a complex number,
determine if it is a candidate for membership in the Mandelbrot
set given a fixed number of iterations.
"""
i = 0
c = complex(x,y)
z = 0.0j
for i in range(max_iters):
z = z*z + c
if (z.real*z.real + z.imag*z.imag) >= 4:
value[0] = i
return 0
value[0] = max_iters
return max_iters
# OK, lets try it out. Here is our pure python fractal generator
oneval = bytearray(1)
s = timer()
ret = create_fractal(image, 20, mandel, oneval)
e = timer()
if ret < 0:
    print('bad ret value from create_fractal')
pure_python = e - s
print('pure python required {:.2f} secs'.format(pure_python))
im = Image.frombuffer("L", (width, height), image.data, "raw", "L", 0, 1)
im.save('python_numpy.png')
fig, ax = subplots(1)
img = Image.open('python_numpy.png')
ax.imshow(img);
ax.set_title('pure python, {:.2f} millisecs'.format(pure_python*1000));
###Output
_____no_output_____
###Markdown
EVERYONE KNOWS PYTHON IS TOO SLOW! So we outsource the whole thing to a contractor, keeping the format of two functions and their signatures. The contractor rewrites it in C; now the ``*val`` pointer makes sense
###Code
with open('mandel.c', 'rt') as fid:
print(fid.read())
with open('create_fractal.c', 'rt') as fid:
print(fid.read())
# The contractor provided a demo. We will compile the functions into a shared object
# and time the call to create_fractal, as before
with open('main.c', 'rt') as fid:
print(fid.read())
# Compile a shared object, and then compile the exe
subprocess.check_call(['gcc', '--shared', '-fPIC', '-O3', 'mandel.c', 'create_fractal.c',
'-olibcreate_fractal.so'])
subprocess.check_call(['gcc', '-O3', 'main.c', '-L.', '-lcreate_fractal', '-omain']);
environ = os.environ.copy()
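# the trailing ':' leaves an empty entry in LD_LIBRARY_PATH, which the loader
# treats as the current directory (where libcreate_fractal.so was written)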
environ['LD_LIBRARY_PATH'] = environ.get('LD_LIBRARY_PATH', '') + ':'
p = subprocess.Popen(['./main'], stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=environ)
stdout, stderr = p.communicate()
stdout = str(stdout)
print(stdout)
pure_c = int(stdout.split(' ')[2])
print('Pure python is {:.1f} times slower than pure C'.format(1000.0*pure_python/pure_c));
from matplotlib.pylab import imshow, show, figure, subplots
fig, ax = subplots(1,2)
with open('c.raw', 'rb') as fid:
img = Image.frombytes(data=fid.read(), size=(1500,1000), mode="L")
ax[0].imshow(img); ax[0].set_title('Pure C, {:d} millisecs'.format(pure_c))
img = Image.open('python_numpy.png')
ax[1].imshow(img); ax[1].set_title('Pure Python, {:.0f} millisecs'.format(1000*pure_python));
###Output
_____no_output_____
###Markdown
Cool. We now have a version in pure C that runs in about 200 ms. But hang on, we wanted this to be part of a whole pipeline, where we can use and reuse the functions ``mandel`` and ``create_fractal``. Note that we compiled ``libcreate_fractal.so`` as a shared object, so maybe we can call it from Python? We have heard of three methods to interface C with Python: ctypes, cffi, cython. Let's try them out
###Code
#ctypes
# First all the declarations. Each function and struct must be redefined ...
import ctypes
class CtypesImg(ctypes.Structure):
_fields_ = [('width', ctypes.c_int),
('height', ctypes.c_int),
('data', ctypes.POINTER(ctypes.c_uint8)), # HUH?
]
array_cache = {}
def __init__(self, width, height):
self.width = width
self.height = height
# Create a class type to hold the data.
# Since this creates a type, cache it for reuse rather
# than create a new one each time
if width*height not in self.array_cache:
self.array_cache[width*height] = ctypes.c_uint8 * (width * height)
# Note this keeps the img.data alive in the interpreter
self.data = self.array_cache[width*height]() # !!!!!!
def asmemoryview(self):
# There must be a better way, but this code will not
# be timed, so explicit trumps implicit
ret = self.array_cache[width*height]()
for i in range(width*height):
ret[i] = self.data[i]
return memoryview(ret)
ctypesimg = CtypesImg(width, height)
# Load the DLL
cdll = ctypes.cdll.LoadLibrary('./libcreate_fractal.so')
#Fish the function pointers from the DLL and define the interfaces
create_fractal_ctypes = cdll.create_fractal
create_fractal_ctypes.argtypes = [CtypesImg, ctypes.c_int]
mandel_ctypes = cdll.mandel
mandel_ctypes.argtypes = [ctypes.c_float, ctypes.c_float, ctypes.c_int,
ctypes.POINTER(ctypes.c_uint8)]
###Output
_____no_output_____
###Markdown
Ctypes use. Let's run this twice: once to call the C implementation of create_fractal, and again with the python implementation of [create_fractal](#The-looping-function), which calls the C mandel function 1.5 million times
###Code
s = timer()
create_fractal_ctypes(ctypesimg, 20)
e = timer()
ctypes_onecall = e - s
print('ctypes calling create_fractal required {:.2f} millisecs'.format(1000*ctypes_onecall))
im = Image.frombuffer("L", (width, height), ctypesimg.asmemoryview(), 'raw', 'L', 0, 1)
im.save('ctypes_fractal.png')
value = (ctypes.c_uint8*1)()
s = timer()
create_fractal(ctypesimg, 20, mandel_ctypes, value)
e = timer()
ctypes_createfractal = e - s
print('ctypes calling mandel required {:.2f} millisecs'.format(1000*ctypes_createfractal))
im = Image.frombuffer("L", (width, height), ctypesimg.asmemoryview(), 'raw', 'L', 0, 1)
im.save('ctypes_mandel.png')
fig, ax = subplots(1,2)
ctypes1 = Image.open('ctypes_fractal.png')
ctypes2 = Image.open('ctypes_mandel.png')
ax[0].imshow(ctypes1); ax[0].set_title('ctypes one call')
ax[1].imshow(ctypes2); ax[1].set_title('ctypes 1.5e6 calls');
#cffi
import cffi
ffi = cffi.FFI()
# Two stages, cdef reads the headers, then dlopen finds the functions in the shared object
with open('create_fractal.h', 'rt') as fid:
header = fid.read()
# clean up all preprocessor macros before calling this
print('Contents of create_fractal.h\n------------\n')
ffi.cdef(header)
print(header)
dll = ffi.dlopen('./libcreate_fractal.so')
###Output
Contents of create_fractal.h
------------
typedef struct _Img{
int width;
int height;
unsigned char * data;
} cImg;
int create_fractal(cImg img, int iters);
int mandel(float real, float imag, int max_iters, unsigned char * val);
###Markdown
CFFI use. Let's run this twice: once to call the C implementation of create_fractal, and again with the python implementation of [create_fractal](#The-looping-function), which calls the C mandel function 1.5 million times
###Code
# Initializing an image looks just like C. Note two things:
# - ffi has state, that is the point of creating an ffi object
# - img is a "pointer", so we use img[0] to dereference it
img = ffi.new('cImg[1]')
img[0].width = width
img[0].height = height
#img[0].data = ffi.new('unsigned char[%d]' % (width*height,)) # NO NO NO NO
# This is C - we must keep the pointer alive !!!
data1 = ffi.new('unsigned char[%d]' % (width*height,))
img[0].data = data1
s = timer()
dll.create_fractal(img[0], 20)
e = timer()
cffi_onecall = e - s
print('cffi calling create_fractal required {:.2f} millisecs'.format(1000 * cffi_onecall))
im = Image.frombuffer('L', (width, height), ffi.buffer(data1), 'raw', 'L', 0, 1)
im.save('cffi_fractal.png')
data2 = ffi.new('unsigned char[%d]' % (width*height,))
img[0].data = data2
value = ffi.new('unsigned char[1]')
s = timer()
create_fractal(img[0], 20, dll.mandel, value)
e = timer()
cffi_fractal = e - s
print('cffi calling mandel required {:.2f} millisecs'.format(1000*cffi_fractal))
im = Image.frombuffer('L', (width, height), ffi.buffer(data2), 'raw', 'L', 0, 1)
im.save('cffi_mandel.png')
fig, ax = subplots(1,2)
ctypes1 = Image.open('cffi_fractal.png')
ctypes2 = Image.open('cffi_mandel.png')
ax[0].imshow(ctypes1); ax[0].set_title('cffi one call {:.2f} millisecs'.format(1000*cffi_onecall))
ax[1].imshow(ctypes2); ax[1].set_title('cffi 1.5e6 calls {:.2f} millisecs'.format(1000*cffi_fractal));
%load_ext Cython
###Output
_____no_output_____
###Markdown
Hang on, isn't cython used for compiling python to C? FOUL! Well, yes, but, in this case we already have C code from our contractor... So really it's not Cython that is "ugly" but my use of it.
###Code
%%cython -a -I. -L. -l create_fractal --link-args=-Wl,-rpath=.
cdef extern from 'create_fractal.h':
ctypedef struct cImg:
int width
int height
unsigned char * data
int create_fractal(cImg img, int iters);
int mandel(float real, float imag, int max_iters,
unsigned char * val);
def cython_create_fractal(pyimg, iters):
cdef cImg cimg
cdef int citers
cdef unsigned char[::1] tmp = pyimg.data
citers = iters
cimg.width = pyimg.width
cimg.height = pyimg.height
cimg.data = &tmp[0]
return create_fractal(cimg, citers)
cpdef int cython_mandel(float real, float imag, int max_iters,
unsigned char[::1] val):
return mandel(real, imag, max_iters, &val[0])
###Output
_____no_output_____
###Markdown
Cython use. Let's run this twice: once to call the C implementation of create_fractal, and again with the python implementation of [create_fractal](#The-looping-function), which calls the C mandel function 1.5 million times
###Code
# use it, remember we have "image" from the pure python version
s = timer()
cython_create_fractal(image, 20)
e = timer()
cython_onecall = e - s
print('cython onecall required {:.2f} millisecs'.format(1000*cython_onecall))
im = Image.frombuffer("L", (width, height), image.data, "raw", "L", 0, 1)
im.save('cython_fractal.png')
value = bytearray(1)
s = timer()
create_fractal(image, 20, cython_mandel, value)
e = timer()
cython_fractal = e - s
print('cython many calls required {:.2f} millisecs'.format(1000*cython_fractal))
im = Image.frombuffer("L", (width, height), image.data, "raw", "L", 0, 1)
im.save('cython_mandel.png')
fig, ax = subplots(1,2)
ctypes1 = Image.open('cython_fractal.png')
ctypes2 = Image.open('cython_mandel.png')
ax[0].imshow(ctypes1); ax[0].set_title('cython one call\n {:.2f} millisecs'.format(1000*cython_onecall))
ax[1].imshow(ctypes2); ax[1].set_title('cython many calls\n {:.2f} millisecs'.format(1000*cython_fractal))
fig.show()
# cppyy is a run-time bindings generator for C++ that works on both CPython and
# PyPy. It is vastly overkill for calling into a simple C code as in this example,
# but if the vendor provided you with a complex C++ API, especially one that uses
# templates and modern C++ features, it will handle that, too, with ease.
#
# To install cppyy, run 'pip install cppyy'. As it uses a custom version of LLVM,
# the first build will be slow (~15mins on a modern machine; afterwards, LLVM
# will be cached as a binary wheel, and installation will be fast).
import cppyy
# The following assumes that the shared library has already been created (e.g.
# for the cffi example above); otherwise cppyy can compile the C code on-the-fly,
# using cppdef() as in the example below.
cppyy.c_include("create_fractal.h")
cppyy.load_library("libcreate_fractal.so")
# For convenience, create a C++ class from the C struct to manage the memory.
# Alternatively, assign a Python array from module array or a NumPy array (the
# C++ side will get a non-owning view on assignment.)
if not hasattr(cppyy.gbl, 'cppImg'):
cppyy.cppdef("""
struct cppImg : public cImg {
cppImg(int w, int h) : cImg{w, h, new unsigned char[w*h]} {}
~cppImg() { delete [] data; }
cppImg(const cppImg&) = delete;
cppImg& operator=(const cppImg&) = delete;
};""")
###Output
_____no_output_____
###Markdown
Cppyy use. Let's run this twice: once to call the C implementation of create_fractal, and again with the python implementation of [create_fractal](#The-looping-function), which calls the C mandel function 1.5 million times
###Code
cppyy_image = cppyy.gbl.cppImg(width, height)
s = timer()
ret = cppyy.gbl.create_fractal(cppyy_image, 20)
e = timer()
if ret < 0:
print('bad ret value from create_fractal')
cppyy_onecall = e - s
print('cppyy calling create_fractal required {:.2f} millisecs'.format(1000*cppyy_onecall))
im = Image.frombuffer("L", (width, height), image.data, "raw", "L", 0, 1)
im.save('cppyy_fractal.png')
def create_fractal(image, iters, func, oneval):
''' Call a function for each pixel in the image, where
-2 < real < 1 over the columns and
-1 < imag < 1 over the rows
'''
pixel_size_x = 3.0 / image.width
pixel_size_y = 2.0 / image.height
image_data = image.data # performance hack (not needed on PyPy)
for y in range(image.height):
imag = y * pixel_size_y - 1
yy = y * image.width
for x in range(image.width):
real = x * pixel_size_x - 2
ret = func(real, imag, iters, oneval) # <---- HERE is the real work
if ret < 0:
return ret
image_data[yy + x] = oneval[0]
    return 0
s = timer()
oneval = bytearray(1)
create_fractal(cppyy_image, 20, cppyy.gbl.mandel, oneval)
e = timer()
cppyy_fractal = e - s
print('cppyy many calls required {:.2f} millisecs'.format(1000*cppyy_fractal))
im = Image.frombuffer("L", (width, height), image.data, "raw", "L", 0, 1)
im.save('cppyy_mandel.png')
fig, ax = subplots(1,2)
cppyy1 = Image.open('cppyy_fractal.png')
cppyy2 = Image.open('cppyy_mandel.png')
ax[0].imshow(cppyy1); ax[0].set_title('cppyy one call\n {:.2f} millisecs'.format(1000*cppyy_onecall))
ax[1].imshow(cppyy2); ax[1].set_title('cppyy many calls\n {:.2f} millisecs'.format(1000*cppyy_fractal))
fig.show()
# Now let's try and work out who is the good, who the bad,
# and who the ugly
import pprint
pprint.pprint([[' ', 'CreateFractal in Python', 'CreateFractal in C'],
['Python', '{:13.2f} millisecs'.format(1000*pure_python), '{:18s}'.format('')],
['C ', '{:23s}'.format(''), '{:8.2f} millisecs'.format(pure_c)],
['ctypes', '{:13.2f} millisecs'.format(1000*ctypes_createfractal), '{:8.2f} millisecs'.format(1000*ctypes_onecall)],
['cffi ', '{:13.2f} millisecs'.format(1000*cffi_fractal), '{:8.2f} millisecs'.format(1000*cffi_onecall)],
['cython', '{:13.2f} millisecs'.format(1000*cython_fractal), '{:8.2f} millisecs'.format(1000*cython_onecall)],
['cppyy ', '{:13.2f} millisecs'.format(1000*cppyy_fractal), '{:8.2f} millisecs'.format(1000*cppyy_onecall)],
])
###Output
[[' ', 'CreateFractal in Python', 'CreateFractal in C'],
['Python', ' 5639.86 millisecs', ' '],
['C ', ' ', ' 197.00 millisecs'],
['ctypes', ' 2419.25 millisecs', ' 200.18 millisecs'],
['cffi ', ' 995.21 millisecs', ' 198.53 millisecs'],
['cython', ' 979.26 millisecs', ' 204.99 millisecs']]
###Markdown
Things to think about, besides speed: * Maintainability - What happens when the C code changes? * Compiler dependency - ctypes needs none, Cython requires one, CFFI can go either way * Susceptibility to bugs (object lifetimes, signature mismatches) - All use a minilanguage for interfacing, only CFFI's is standard C - Cython will handle most transformations automatically - CFFI can be tricky for C-level pointers * Speed and productivity - Cython is heavily optimized, tightly integrated to the C-API - If the headers are pure C, CFFI should be simple - Projects exist to generate wrappers for all three * Which technology is actively maintained (ctypes went into the stdlib to die?) --- --- --- --- And now the pop quiz. If we run the pure python version in PyPy, what time will we get? * Around a 2X speed-up * About like Cython or CFFI calling mandel 1.5e6 times * About like C compiled -O3
###Code
%%script pypy
from __future__ import print_function, division
import sys
print(sys.executable)
print(sys.version)
from timeit import default_timer as timer
from PIL import Image
class Img(object):
def __init__(self, width, height):
self.width = width
self.height = height
self.data = bytearray(width*height)
def create_fractal(image, iters, func, oneval):
''' Call a function for each pixel in the image, where
-2 < real < 1 over the columns and
-1 < imag < 1 over the rows
'''
pixel_size_x = 3.0 / image.width
pixel_size_y = 2.0 / image.height
for y in range(image.height):
imag = y * pixel_size_y - 1
yy = y * image.width
for x in range(image.width):
real = x * pixel_size_x - 2
func(real, imag, iters, oneval)
image.data[yy + x] = oneval[0]
def mandel(x, y, max_iters, value):
"""
Given the real and imaginary parts of a complex number,
determine if it is a candidate for membership in the Mandelbrot
set given a fixed number of iterations.
"""
i = 0
c = complex(x,y)
z = 0.0j
for i in range(max_iters):
z = z*z + c
if (z.real*z.real + z.imag*z.imag) >= 4:
value[0] = i
return 0
value[0] = max_iters
return max_iters
# Pure python
width = 1500
height = 1000
image = Img(width, height)
s = timer()
oneval = bytearray(1)
create_fractal(image, 20, mandel, oneval) # < --- HERE IS THE CALL
e = timer()
pure_pypy = e - s
print('pure pypy required {:.2f} millisecs'.format(1000*pure_pypy))
im = Image.frombuffer('L', (1500, 1000), image.data, 'raw', 'L', 0, 1)
im.save('pypyy.png')
fig, ax = subplots(1)
ctypes1 = Image.open('pypyy.png')
ax.imshow(ctypes1); ax.set_title('pure pypy');
###Output
_____no_output_____ |
debug/.ipynb_checkpoints/api_test-checkpoint.ipynb | ###Markdown
API Exploration. The aim for this notebook is to explore potential APIs for Overlays now that the number is increasing. The aim is to move all of the features of an overlay into a single class that can be accessed. This isn't intended to be a gold standard of implementation, rather to assess the feasibility and feel of the proposed design. The base class of all overlays implements delayed instantiation of the elements in a hardware dictionary. This ensures that we don't allocate resources for hardware the user isn't planning on using.
###Code
import pynq
class Overlay:
def __init__(self, bitstream, hardware_dict, download=True):
self.raw_overlay = pynq.Overlay(bitstream)
if download:
self.raw_overlay.download()
self.hardware_dict = hardware_dict
def __getattr__(self, name):
if name in self.hardware_dict:
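            # Instantiate the driver on first access, then cache it on the
            # instance so later lookups bypass __getattr__ entirely.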
setattr(self, name, self.hardware_dict[name]())
return getattr(self, name)
def __dir__(self):
return self.hardware_dict.keys()
###Output
_____no_output_____
###Markdown
Now we can create wrappers for some of the IP which doesn't have a one-to-one mapping between class instances and hardware. In this case the buttons, switches and leds are currently implemented as a class per item so we need to make an array of them.
###Code
from pynq.board import Switch,LED,Button
import functools
class ArrayWrapper:
def __init__(self, cls, num):
self.elems = [None for i in range(num)]
self.cls = cls
def __getitem__(self, val):
if not self.elems[val]:
self.elems[val] = self.cls(val)
return self.elems[val]
def __len__(self):
return len(self.elems)
BaseSwitches = functools.partial(ArrayWrapper, Switch, 2)
BaseLEDs = functools.partial(ArrayWrapper, LED, 4)
BaseButtons = functools.partial(ArrayWrapper, Button, 4)
###Output
_____no_output_____
###Markdown
Finally we can instantiate a base overlay that supports everything other than IOPs, which we will get to later, and TraceBuffer, which needs some thought.
###Code
from pynq.drivers import HDMI, Audio
class BaseOverlay(Overlay):
def __init__(self):
hardware_dict = {
'switches' : BaseSwitches,
'leds' : BaseLEDs,
'buttons' : BaseButtons,
'hdmi_in' : functools.partial(HDMI, 'in'),
'hdmi_out' : functools.partial(HDMI, 'out'),
'audio' : Audio
}
Overlay.__init__(self, 'base.bit', hardware_dict)
base = BaseOverlay()
###Output
_____no_output_____
###Markdown
We can use the `dir` function to list all of the modules in the Overlay
###Code
dir(base)
###Output
_____no_output_____
###Markdown
And interact with the various bits of IP
###Code
base.leds[1].on()
base.audio.load("/home/xilinx/pynq/drivers/tests/pynq_welcome.pdm")
base.audio.play()
print(base.switches[0].read())
###Output
_____no_output_____
###Markdown
IOP Support. IOPs are a little trickier to implement using the current API. For this example we can create a wrapper class which delays the instantiation of the IOP until after we know to which IOP it has been assigned
###Code
class Peripheral:
def __init__(self, iop_class, *args, **kwargs):
self.iop_class = iop_class
self.args = args
self.kwargs = kwargs
def __call__(self, if_id):
return self.iop_class(if_id, *self.args, **self.kwargs)
###Output
_____no_output_____
###Markdown
Our new base overlay can now override `__setattr__` to correctly assign the PMOD
###Code
from pynq.iop import PMODA, PMODB, ARDUINO
iop_map = {
'pmoda' : PMODA,
'pmodb' : PMODB,
'arduino' : ARDUINO
}
class IOPOverlay(BaseOverlay):
def __init__(self):
BaseOverlay.__init__(self)
def __dir__(self):
return BaseOverlay.__dir__(self) + ['pmoda', 'pmodb', 'arduino']
def __setattr__(self, name, val):
if name in iop_map:
obj = val(iop_map[name])
else:
obj = val
BaseOverlay.__setattr__(self, name, obj)
base = IOPOverlay()
###Output
_____no_output_____
###Markdown
We can now test this using an OLED screen attached to PMOD B
###Code
from pynq.iop import Pmod_OLED
base.pmodb = Peripheral(Pmod_OLED)
base.pmodb.write('Hello World')
###Output
_____no_output_____
###Markdown
And an LED bar attached to a grove connector on PMOD A
###Code
from pynq.iop import Grove_LEDbar
import pynq.iop
base.pmoda = Peripheral(Grove_LEDbar, pynq.iop.PMOD_GROVE_G3)
base.pmoda.write_level(5, 3, 1)
###Output
_____no_output_____
###Markdown
We can take this one stage further by making the Grove Adapter a separate thing
###Code
from pynq.iop import PMOD_GROVE_G1, PMOD_GROVE_G2, PMOD_GROVE_G3, PMOD_GROVE_G4
grove_map = {
'G1' : PMOD_GROVE_G1,
'G2' : PMOD_GROVE_G2,
'G3' : PMOD_GROVE_G3,
'G4' : PMOD_GROVE_G4,
}
class GroveAdapter:
def __init__(self, if_id):
self.if_id = if_id
def __setattr__(self, name, val):
if name in grove_map:
obj = val(self.if_id, grove_map[name])
else:
obj = val
object.__setattr__(self, name, obj)
###Output
_____no_output_____
###Markdown
Which means our code can now look like this:
###Code
base = IOPOverlay()
base.pmoda = GroveAdapter
base.pmoda.G3 = Grove_LEDbar
base.pmoda.G3.write_level(10, 3, 1)
###Output
_____no_output_____
###Markdown
But this means people are likely to try assigning multiple grove connectors simultaneously which isn't something we currently support.
###Code
class SingleTone(object):
__instance = None
def __new__(cls, val):
if SingleTone.__instance is None:
SingleTone.__instance = object.__new__(cls)
SingleTone.__instance.val = val
return SingleTone.__instance
a = SingleTone(1)
print(f'Value in a is {a.val}')
b = SingleTone(2)
print(f'Value in b is {b.val}')
print(f'Value in a is {a.val}')
a.__class__.__name__
class Parent():
def __init__(self, age, gender):
self.age = age
self.gender = gender
def get_older(self):
self.age += 1
class Boy(Parent):
__person = None
__born = False
__instance_list = set()
def __new__(cls, age, color):
if cls.__person is None:
cls.__person = Parent.__new__(cls)
cls.__person.age = age
cls.__instance_list.add(cls.__person)
return cls.__person
def __init__(self, age, color):
if not self.__class__.__born:
self.age = age
self.haircolor = color
self.__class__.__born = True
def get_list(self):
return self.__class__.__instance_list
    def __del__(self):
        # use the name-mangled class attribute and drop this instance from the set
        self.__class__.__instance_list.discard(self)
age1 = 9
age2 = 15
tom = Boy(age1, 'BLACK')
print(f'Last year, age of Tom: {tom.age}')
print(f'Last year, haircolor of Tom: {tom.haircolor}')
jack = Boy(age2, 'RED')
print(f'After {age2-age1} years, age of Jack: {jack.age}')
print(f'After {age2-age1} years, haircolor of Jack: {jack.haircolor}')
print(f'After {age2-age1} years, age of Tom: {tom.age}')
print(f'After {age2-age1} years, haircolor of Tom: {tom.haircolor}')
tom.get_older()
print(f'This year, age of Tom: {tom.age}')
print(f'This year, haircolor of Tom: {tom.haircolor}')
print(f'This year, age of Jack: {jack.age}')
print(f'This year, haircolor of Jack: {jack.haircolor}')
jack.get_list()
class RootLicense():
def __init__(self, date, time):
self.date = date
self.time = time
class License(RootLicense):
__root = list()
__license_index = 0
__num_licenses = 3
__instance_dict = {}
def __new__(cls, date, time):
if len(cls.__root) < cls.__num_licenses:
cls.__root.append(RootLicense.__new__(cls))
current_license_index = cls.__license_index
cls.__license_index = (cls.__license_index + 1) % cls.__num_licenses
cls.__instance_dict[current_license_index] = \
cls.__root[current_license_index]
return cls.__root[current_license_index]
def __init__(self, date, time):
super().__init__(date, time)
def get_instance(self):
return self.__class__.__instance_dict
    def __del__(self):
        cls = self.__class__  # `cls` is not defined automatically inside __del__
        current_license_index = cls.__license_index
        cls.__license_index = (cls.__license_index - 1) % cls.__num_licenses
        cls.__instance_dict[current_license_index] = None
license0 = License('06-21-2017', '10:33:21')
license1 = License('06-23-2017', '09:12:12')
license2 = License('06-24-2017', '00:56:08')
print(f'License 0 issued on: {license0.date}-{license0.time}')
print(f'License 1 issued on: {license1.date}-{license1.time}')
print(f'License 2 issued on: {license2.date}-{license2.time}')
license3 = License('06-24-2017', '08:55:24')
license4 = License('06-25-2017', '07:26:37')
license5 = License('06-25-2017', '19:37:18')
license6 = License('06-26-2017', '13:23:24')
print(f'License 0 issued on: {license0.date}-{license0.time}')
print(f'License 1 issued on: {license1.date}-{license1.time}')
print(f'License 2 issued on: {license2.date}-{license2.time}')
license0.get_instance()
del(license0)
license1.get_instance()
BUILDER_STATUS_DICT = {'BOOLEAN_BUILDER': 1,
'PATTERN_BUILDER': 2,
'FSM_BUILDER': 3,
'TRACE_ANALYZER': 4}
for a in BUILDER_STATUS_DICT.keys():
print(a)
license1.__class__.__name__.upper()
import sys
sys.platform
sys.version_info
import json
import os
import IPython.core.display
def draw_wavedrom(data):
"""Display the waveform using the Wavedrom package.
This method requires 2 javascript files to be copied locally. Users
can call this method directly to draw any wavedrom data.
Example usage:
>>> a = {
'signal': [
{'name': 'clk', 'wave': 'p.....|...'},
{'name': 'dat', 'wave': 'x.345x|=.x',
'data': ['head', 'body', 'tail', 'data']},
{'name': 'req', 'wave': '0.1..0|1.0'},
{},
{'name': 'ack', 'wave': '1.....|01.'}
]}
>>> draw_wavedrom(a)
"""
htmldata = '<script type="WaveDrom">' + json.dumps(data) + '</script>'
IPython.core.display.display_html(IPython.core.display.HTML(htmldata))
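    # NOTE: `relative_path` is a notebook-level global pointing at the
    # directory that contains js/ (it is defined in a later cell below).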
jsdata = 'WaveDrom.ProcessAll();'
IPython.core.display.display_javascript(
IPython.core.display.Javascript(
data=jsdata,
lib=[relative_path + '/js/WaveDrom.js',
relative_path + '/js/WaveDromSkin.js']))
a = {'signal': [
{'name': 'clk', 'wave': 'p.....|...'},
{'name': 'dat', 'wave': 'x.345x|=.x',
'data': ['head', 'body', 'tail', 'data']},
{'name': 'req', 'wave': '0.1..0|1.0'},
{},
{'name': 'ack', 'wave': '1.....|01.'}
]}
draw_wavedrom(a)
PYNQ_JUPYTER_NOTEBOOKS = '/home/xilinx/jupyter_notebooks'
import os
current_path = os.getcwd()
print(current_path)
relative_path = os.path.relpath(PYNQ_JUPYTER_NOTEBOOKS, current_path)
print(relative_path)
PYNQZ1_DIO_SPECIFICATION = {'clock_mhz': 10,
'interface_width': 20,
'monitor_width': 64,
'traceable_outputs': {'D0': 0,
'D1': 1,
'D2': 2,
'D3': 3,
'D4': 4,
'D5': 5,
'D6': 6,
'D7': 7,
'D8': 8,
'D9': 9,
'D10': 10,
'D11': 11,
'D12': 12,
'D13': 13,
'D14': 14,
'D15': 15,
'D16': 16,
'D17': 17,
'D18': 18,
'D19': 19
},
'traceable_inputs': {'D0': 20,
'D1': 21,
'D2': 22,
'D3': 23,
'D4': 24,
'D5': 25,
'D6': 26,
'D7': 27,
'D8': 28,
'D9': 29,
'D10': 30,
'D11': 31,
'D12': 32,
'D13': 33,
'D14': 34,
'D15': 35,
'D16': 36,
'D17': 37,
'D18': 38,
'D19': 39
},
'traceable_tri_states': {'D0': 42,
'D1': 43,
'D2': 44,
'D3': 45,
'D4': 46,
'D5': 47,
'D6': 48,
'D7': 49,
'D8': 50,
'D9': 51,
'D10': 52,
'D11': 53,
'D12': 54,
'D13': 55,
'D14': 56,
'D15': 57,
'D16': 58,
'D17': 59,
'D18': 60,
'D19': 61
},
'non_traceable_inputs': {'PB0': 20,
'PB1': 21,
'PB2': 22,
'PB3': 23
},
'non_traceable_outputs': {'LD0': 20,
'LD1': 21,
'LD2': 22,
'LD3': 23
}
}
pin_list = list(set(PYNQZ1_DIO_SPECIFICATION['traceable_outputs'].keys())|
set(PYNQZ1_DIO_SPECIFICATION['traceable_inputs'].keys())|
set(PYNQZ1_DIO_SPECIFICATION['non_traceable_outputs'].keys())|
set(PYNQZ1_DIO_SPECIFICATION['non_traceable_inputs'].keys()))
from pynq import PL
PL.__class__.__name__
from collections import OrderedDict
key_list = ['key3', 'key2']
value_list = [3, 2]
a = OrderedDict({k: v for k, v in zip(key_list,value_list)})
a[list(a.keys())[0]] = 4
a
for i,j in zip(a.keys(), key_list):
print(i,j)
###Output
key3 key3
key2 key2
|
3_manipulating_image_volumes.ipynb | ###Markdown
Manipulating brain image volumes. Outline: 1. Data preparation and transformation. Nilearn has many **simple functions** for data preparation and transformation. Most are also integrated in the **masker objects**. - Computing the mean of images (along the time/4th dimension): **`nilearn.image.mean_img`** - Applying numpy functions on an image or a list of images: **`nilearn.image.math_img`** - Swapping voxels of both hemispheres (e.g., useful to homogenize masks inter-hemispherically): **`nilearn.image.swap_img_hemispheres`** - Smoothing: **`nilearn.image.smooth_img`** - Cleaning signals (e.g., linear detrending, standardization, confound removal, low/high pass filtering): **`nilearn.image.clean_img`** - Resampling: **`nilearn.image.resample_img`** or **`nilearn.image.resample_to_img`** **Note:** To apply this cleaning on signal matrices rather than images: **`nilearn.signal.clean`** 2. Image masking: - Voxel level. - ROI level. - Seed level. - Smoothing: We smooth a **mean epi image**, with a varying amount of smoothing, from **none to 20mm by steps of 10mm**.
###Code
from nilearn import datasets, plotting, image
data = datasets.fetch_adhd(n_subjects=1)
# Print basic information on the dataset
print('First subject functional nifti image (4D) are located at: %s' %
data.func[0])
first_epi_file = data.func[0]
# First the compute the mean image, from the 4D series of image
mean_func = image.mean_img(first_epi_file)
# Then we smooth, with a varying amount of smoothing, from none to 20mm
# by increments of 10mm
for smoothing in range(0, 30, 10):
smoothed_img = image.smooth_img(mean_func, smoothing)
plotting.plot_epi(smoothed_img,
title="Smoothing %imm" % smoothing)
###Output
First subject functional nifti image (4D) are located at: /home/mr243268/data/nilearn_data/adhd/data/0010042/0010042_rest_tshift_RPI_voreg_mni.nii.gz
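###Markdown
As an aside, here is a one-liner for `nilearn.image.math_img`, which the outline mentions but no cell below demonstrates (a sketch; the binarization formula is just an illustrative choice):
###Code
from nilearn.image import math_img
# binarize the mean EPI computed above; any numpy formula over named images works
binary_mean = math_img("img > img.mean()", img=mean_func)
print(binary_mean.shape)
###Output
_____no_output_____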
###Markdown
- Resampling an image to a template. We use **`nilearn.image.resample_to_img`** to resample an image to a template. We use the MNI152 template as the reference for resampling a t-map image. **`nilearn.image.resample_img`** could also be used to achieve this. - We load the required datasets.
###Code
# First we load the required datasets using the nilearn datasets module.
from nilearn.datasets import fetch_localizer_button_task
from nilearn.datasets import load_mni152_template
template = load_mni152_template()
localizer_dataset = fetch_localizer_button_task(get_anats=True)
localizer_tmap_filename = localizer_dataset.tmaps[0]
localizer_anat_filename = localizer_dataset.anats[0]
###Output
_____no_output_____
###Markdown
- The localizer t-map image is resampled to the MNI template image
###Code
# Now, the localizer t-map image can be resampled to the MNI template image.
from nilearn.image import resample_to_img
resampled_localizer_tmap = resample_to_img(localizer_tmap_filename, template)
###Output
_____no_output_____
###Markdown
- Now we check the shape and affine have been correctly updated.
###Code
# Let's check the shape and affine have been correctly updated.
# First load the original t-map in memory:
from nilearn.image import load_img
tmap_img = load_img(localizer_dataset.tmaps[0])
original_shape = tmap_img.shape
original_affine = tmap_img.get_affine()
resampled_shape = resampled_localizer_tmap.shape
resampled_affine = resampled_localizer_tmap.get_affine()
template_img = load_img(template)
template_shape = template_img.shape
template_affine = template_img.get_affine()
print("""Shape comparison:
- Original t-map image shape : {0}
- Resampled t-map image shape: {1}
- Template image shape : {2}
""".format(original_shape, resampled_shape, template_shape))
print("""Affine comparison:
- Original t-map image affine :\n {0}
- Resampled t-map image affine:\n {1}
- Template image affine :\n {2}
""".format(original_affine, resampled_affine, template_affine))
###Output
Shape comparison:
- Original t-map image shape : (53, 63, 46)
- Resampled t-map image shape: (91, 109, 91)
- Template image shape : (91, 109, 91)
Affine comparison:
- Original t-map image affine :
[[ -3. 0. 0. 78.]
[ 0. 3. 0. -111.]
[ 0. 0. 3. -51.]
[ 0. 0. 0. 1.]]
- Resampled t-map image affine:
[[ -2. 0. 0. 90.]
[ 0. 2. 0. -126.]
[ 0. 0. 2. -72.]
[ 0. 0. 0. 1.]]
- Template image affine :
[[ -2. 0. 0. 90.]
[ 0. 2. 0. -126.]
[ 0. 0. 2. -72.]
[ 0. 0. 0. 1.]]
###Markdown
- Result images are displayed using the nilearn plotting module.
###Code
# Finally, result images are displayed using nilearn plotting module.
from nilearn import plotting
plotting.plot_stat_map(localizer_tmap_filename,
bg_img=localizer_anat_filename,
cut_coords=(36, -27, 66),
threshold=3,
title="t-map on original anat")
plotting.plot_stat_map(resampled_localizer_tmap,
bg_img=template,
cut_coords=(36, -27, 66),
threshold=3,
title="Resampled t-map on MNI template anat")
###Output
_____no_output_____
###Markdown
- Masking **What is it?** - Masking voxels with `NiftiMasker`. Here is a simple example of automatic mask computation using the nifti masker. The mask is computed and visualized. - First we fetch the data to be masked.
###Code
# Retrieve the ADHD dataset
from nilearn import datasets
dataset = datasets.fetch_adhd(n_subjects=1)
func_filename = dataset.func[0]
# print basic information on the dataset
print('First functional nifti image (4D) is at: %s' % func_filename)
###Output
First functional nifti image (4D) is at: /home/mr243268/data/nilearn_data/adhd/data/0010042/0010042_rest_tshift_RPI_voreg_mni.nii.gz
###Markdown
- We compute the mask.
###Code
# Compute the mask
from nilearn.input_data import NiftiMasker
# As this is raw resting-state EPI, the background is noisy and we cannot
# rely on the 'background' masking strategy. We need to use the 'epi' one
nifti_masker = NiftiMasker(standardize=True, mask_strategy='epi',
memory="nilearn_cache", memory_level=2,
smoothing_fwhm=8)
nifti_masker.fit(func_filename)
mask_img = nifti_masker.mask_img_
###Output
_____no_output_____
###Markdown
- We visualize the mask
###Code
# Visualize the mask
from nilearn import plotting
from nilearn.image.image import mean_img
# calculate mean image for the background
mean_func_img = mean_img(func_filename)
plotting.plot_roi(mask_img, mean_func_img, display_mode='y', cut_coords=4, title="Mask")
###Output
_____no_output_____
###Markdown
- Masking labels with `NiftiLabelsMasker`. We use the AAL atlas in this example. - We fetch the needed **fMRI** and the **AAL atlas**.
###Code
# Retrieve the ADHD dataset
from nilearn import datasets
dataset = datasets.fetch_adhd(n_subjects=1)
func_filename = dataset.func[0]
# print basic information on the dataset
print('First functional nifti image (4D) is at: %s' % func_filename)
# fetch
aal = datasets.fetch_atlas_aal()
aal_atlas = aal['maps']
print('AAL atlas is at: %s' % aal_atlas)
###Output
First functional nifti image (4D) is at: /home/mr243268/data/nilearn_data/adhd/data/0010042/0010042_rest_tshift_RPI_voreg_mni.nii.gz
AAL atlas is at: /home/mr243268/data/nilearn_data/aal_SPM12/aal/atlas/AAL.nii
###Markdown
- We extract timeseries for each ROI of the AAL atlas.
###Code
from nilearn.input_data import NiftiLabelsMasker
# Label masker
nifti_labels_masker = NiftiLabelsMasker(
labels_img=aal_atlas,
standardize=True,
memory="nilearn_cache", memory_level=2)
# extract timeseries
timeseries = nifti_labels_masker.fit_transform(func_filename)
###Output
/home/mr243268/dev/modules/scikit-learn/sklearn/externals/joblib/hashing.py:197: DeprecationWarning: Changing the shape of non-C contiguous array by
descriptor assignment is deprecated. To maintain
the Fortran contiguity of a multidimensional Fortran
array, use 'a.T.view(...).T' instead
obj_bytes_view = obj.view(self.np.uint8)
###Markdown
- Now, we visualize the AAL atlas and check the extracted timeseries dimension.
###Code
# Visualize the mask
from nilearn import plotting
plotting.plot_roi(aal_atlas)
# we should have one timeseries per ROI
print('Timeseries dimension is:')
print(timeseries.shape)
###Output
Timeseries dimension is:
(176, 116)
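###Markdown
- With one timeseries per AAL region, a natural next step is a region-by-region correlation (connectivity) matrix. A minimal sketch with plain NumPy, reusing the `timeseries` array of shape (176, 116) extracted above:
###Code
import numpy as np
import matplotlib.pyplot as plt
# Pearson correlation between the 116 regional timeseries
correlation_matrix = np.corrcoef(timeseries.T)
print(correlation_matrix.shape)  # (116, 116)
plt.imshow(correlation_matrix, vmin=-1., vmax=1., cmap='RdBu_r')
plt.colorbar()
plt.title('AAL region-to-region correlation')
###Output
_____no_output_____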
###Markdown
- Masking seeds with `NiftiSpheresMasker`. This example shows how to extract timeseries from seeds for a single subject based on resting-state fMRI scans. We need:- Seeds (coordinates)- a 4D Nifti image
###Code
# Getting the data
# ----------------
# We will work with the first subject of the adhd data set.
# adhd_dataset.func is a list of filenames. We select the 1st (0-based)
# subject by indexing with [0]).
from nilearn import datasets
adhd_dataset = datasets.fetch_adhd(n_subjects=1)
func_filename = adhd_dataset.func[0]
confound_filename = adhd_dataset.confounds[0]
##########################################################################
# Note that func_filename and confound_filename are strings pointing to
# files on your hard drive.
print(func_filename)
print(confound_filename)
###Output
/home/mr243268/data/nilearn_data/adhd/data/0010042/0010042_rest_tshift_RPI_voreg_mni.nii.gz
/home/mr243268/data/nilearn_data/adhd/data/0010042/0010042_regressors.csv
###Markdown
- We select one seed of 8 mm radius in the PCC that will be used to extract the averaged timeseries.- Timeseries are detrended, standardized and bandpass filtered.
###Code
# We will be working with one seed sphere in the Posterior Cingulate Cortex,
# considered part of the Default Mode Network.
pcc_coords = [(0, -52, 18)]
from nilearn import input_data
##########################################################################
# We use `nilearn.input_data.NiftiSpheresMasker` to extract the
# **time series from the functional imaging within the sphere**. The
# sphere is centered at pcc_coords and will have the radius we pass to the
# NiftiSpheresMasker function (here 8 mm).
#
# The extraction will also detrend, standardize, and bandpass filter the data.
# This will create a NiftiSpheresMasker object.
seed_masker = input_data.NiftiSpheresMasker(
pcc_coords, radius=8,
detrend=True, standardize=True,
low_pass=0.1, high_pass=0.01, t_r=2.,
memory='nilearn_cache', memory_level=1, verbose=0)
###Output
_____no_output_____
###Markdown
- Then we extract the mean time series within the seed region while regressing out the confounds that can be found in the dataset's csv file.
###Code
# Then we extract the mean time series within the seed region while
# regressing out the confounds that
# can be found in the dataset's csv file
seed_time_series = seed_masker.fit_transform(func_filename,
confounds=[confound_filename])
##########################################################################
# We can now inspect the extracted time series. Note that the **seed time
# series** is an array with shape (n_volumes, 1), while the
# **brain time series** is an array with shape (n_volumes, n_voxels).
print("seed time series shape: (%s, %s)" % seed_time_series.shape)
###Output
seed time series shape: (176, 1)
###Markdown
- We can plot the **seed time series**.
###Code
# We can plot the **seed time series**.
import matplotlib.pyplot as plt
plt.plot(seed_time_series)
plt.title('Seed time series (Posterior cingulate cortex)')
plt.xlabel('Scan number')
plt.ylabel('Normalized signal')
plt.tight_layout()
###Output
_____no_output_____
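###Markdown
- The **brain time series** mentioned above can be obtained with a `NiftiMasker`; seed-to-voxel correlations then reduce to a dot product between the standardized signals. A minimal sketch under the same preprocessing assumptions as the seed masker (the smoothing value is illustrative):
###Code
import numpy as np
# Extract timeseries for all brain voxels with matching preprocessing
brain_masker = input_data.NiftiMasker(
    smoothing_fwhm=6, detrend=True, standardize=True,
    low_pass=0.1, high_pass=0.01, t_r=2.,
    memory='nilearn_cache', memory_level=1, verbose=0)
brain_time_series = brain_masker.fit_transform(func_filename,
                                               confounds=[confound_filename])
# Correlation of every voxel with the seed (both signals are standardized)
seed_to_voxel_correlations = (np.dot(brain_time_series.T, seed_time_series) /
                              seed_time_series.shape[0])
print(seed_to_voxel_correlations.shape)  # (n_voxels, 1)
###Output
_____no_output_____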
###Markdown
- Region Extraction from a t-statistical map (3D). This example shows how to extract regions, or separate the regions, from a statistical map. We use localizer t-statistic maps from :func:`nilearn.datasets.fetch_localizer_contrasts` as an input image. The idea is to threshold an image to get foreground objects using the function `nilearn.image.threshold_img`, and then to extract objects using the function `nilearn.regions.connected_regions`.
###Code
# Fetching t-statistic image of localizer contrasts by loading from datasets
# utilities
from nilearn import datasets
n_subjects = 3
localizer_path = datasets.fetch_localizer_contrasts(
['calculation (auditory cue)'], n_subjects=n_subjects, get_tmaps=True)
tmap_filename = localizer_path.tmaps[0]
###Output
_____no_output_____
###Markdown
- Threshold the t-statistic image by importing threshold function.
###Code
# Threshold the t-statistic image by importing threshold function
from nilearn.image import threshold_img
# Two types of strategies can be used from this threshold function
# Type 1: strategy used will be based on scoreatpercentile
threshold_percentile_img = threshold_img(tmap_filename, threshold='97%')
# Type 2: threshold strategy used will be based on image intensity
# Here, the threshold value should be within the image intensity limits, i.e. less than the maximum value.
threshold_value_img = threshold_img(tmap_filename, threshold=4.)
###Output
_____no_output_____
###Markdown
- Show thresholding results
###Code
# Visualization
# Showing thresholding results by importing plotting modules and its utilities
from nilearn import plotting
# Showing percentile threshold image
plotting.plot_stat_map(threshold_percentile_img, display_mode='z', cut_coords=5,
title='Threshold image with string percentile', colorbar=False)
# Showing intensity threshold image
plotting.plot_stat_map(threshold_value_img, display_mode='z', cut_coords=5,
title='Threshold image with intensity value', colorbar=False)
###Output
_____no_output_____
###Markdown
- Extract the regions by importing connected regions function
###Code
# Extracting the regions by importing connected regions function
from nilearn.regions import connected_regions
regions_percentile_img, index = connected_regions(threshold_percentile_img,
min_region_size=1500)
regions_value_img, index = connected_regions(threshold_value_img,
min_region_size=1500)
###Output
_____no_output_____
###Markdown
- Visualize region extraction results
###Code
# Visualizing region extraction results
title = ("ROIs using percentile thresholding. "
"\n Each ROI in same color is an extracted region")
plotting.plot_prob_atlas(regions_percentile_img, anat_img=tmap_filename,
view_type='contours', display_mode='z',
cut_coords=5, title=title)
title = ("ROIs using image intensity thresholding. "
"\n Each ROI in same color is an extracted region")
plotting.plot_prob_atlas(regions_value_img, anat_img=tmap_filename,
view_type='contours', display_mode='z',
cut_coords=5, title=title)
###Output
_____no_output_____ |
chapter08-seq2seq-attn/nmt_rnn_attention/rnn_attention.ipynb | ###Markdown
Neural machine translation with an RNN attention model. In this example, we'll use PyTorch to implement a GRU-based seq2seq encoder/decoder with Luong attention. We'll use this attention model for French->English neural machine translation._This example is based on_ [https://github.com/pytorch/tutorials/blob/master/intermediate_source/seq2seq_translation_tutorial.py](https://github.com/pytorch/tutorials/blob/master/intermediate_source/seq2seq_translation_tutorial.py) Let's start with the imports and the configuration. The dataset processing is implemented in the `nmt_dataset` module:
###Code
import random
import torch
from nmt_dataset import *
DATASET_SIZE = 40000
HIDDEN_SIZE = 128
###Output
_____no_output_____
###Markdown
Next, we'll initialize the device (GPU if available, otherwise CPU):
###Code
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Then, we'll implement the `EncoderRNN`, which combines the GRU cell with word embedding layer:
###Code
class EncoderRNN(torch.nn.Module):
"""The encoder"""
def __init__(self, input_size, hidden_size):
super(EncoderRNN, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
# Embedding for the input words
self.embedding = torch.nn.Embedding(input_size, hidden_size)
# The actual RNN cell
self.rnn_cell = torch.nn.GRU(hidden_size, hidden_size)
def forward(self, input, hidden):
"""Single sequence encoder step"""
# Pass through the embedding
embedded = self.embedding(input).view(1, 1, -1)
output = embedded
# Pass through the RNN
output, hidden = self.rnn_cell(output, hidden)
return output, hidden
def init_hidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
###Output
_____no_output_____
###Markdown
Next, we'll implement the decoder, which includes the attention mechanism:
###Code
class AttnDecoderRNN(torch.nn.Module):
"""RNN decoder with attention"""
def __init__(self, hidden_size, output_size, max_length=MAX_LENGTH, dropout=0.1):
super(AttnDecoderRNN, self).__init__()
self.hidden_size = hidden_size
self.output_size = output_size
self.max_length = max_length
# Embedding for the input word
self.embedding = torch.nn.Embedding(self.output_size, self.hidden_size)
self.dropout = torch.nn.Dropout(dropout)
# Attention portion
self.attn = torch.nn.Linear(in_features=self.hidden_size,
out_features=self.hidden_size)
self.w_c = torch.nn.Linear(in_features=self.hidden_size * 2,
out_features=self.hidden_size)
# RNN
self.rnn_cell = torch.nn.GRU(input_size=self.hidden_size,
hidden_size=self.hidden_size)
# Output word
self.w_y = torch.nn.Linear(in_features=self.hidden_size,
out_features=self.output_size)
def forward(self, input, hidden, encoder_outputs):
embedded = self.embedding(input).view(1, 1, -1)
embedded = self.dropout(embedded)
# Compute the hidden state at current step t
rnn_out, hidden = self.rnn_cell(embedded, hidden)
# Compute the alignment scores
alignment_scores = torch.mm(self.attn(hidden)[0], encoder_outputs.t())
# Compute the weights
attn_weights = torch.nn.functional.softmax(alignment_scores, dim=1)
# Multiplicative attention context vector c_t
c_t = torch.mm(attn_weights, encoder_outputs)
# Concatenate h_t and the context
hidden_s_t = torch.cat([hidden[0], c_t], dim=1)
# Compute the hidden context
hidden_s_t = torch.tanh(self.w_c(hidden_s_t))
# Compute the output
output = torch.nn.functional.log_softmax(self.w_y(hidden_s_t), dim=1)
return output, hidden, attn_weights
def init_hidden(self):
return torch.zeros(1, 1, self.hidden_size, device=device)
###Output
_____no_output_____
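###Markdown
To make the multiplicative (Luong) scoring in the decoder concrete, here is a minimal standalone sketch with toy tensors: the alignment score is h_t W h_s^T, the attention weights are its softmax, and the context vector is the weighted sum of the encoder states. The dimensions here are illustrative only.
###Code
import torch

seq_len, hidden_size = 5, 8
encoder_states = torch.randn(seq_len, hidden_size)  # h_s for each source step
h_t = torch.randn(1, hidden_size)                   # current decoder hidden state
w_a = torch.nn.Linear(hidden_size, hidden_size)     # the learned W of the 'general' score

scores = torch.mm(w_a(h_t), encoder_states.t())           # (1, seq_len)
weights = torch.nn.functional.softmax(scores, dim=1)      # attention weights
context = torch.mm(weights, encoder_states)               # (1, hidden_size)
print(weights.shape, context.shape)
###Output
_____no_output_____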
###Markdown
We'll continue with the training procedure:
###Code
def train(encoder, decoder, loss_function, encoder_optimizer, decoder_optimizer, data_loader, max_length=MAX_LENGTH):
print_loss_total = 0
# Iterate over the dataset
for i, (input_tensor, target_tensor) in enumerate(data_loader):
input_tensor = input_tensor.to(device).squeeze(0)
target_tensor = target_tensor.to(device).squeeze(0)
encoder_hidden = encoder.init_hidden()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
input_length = input_tensor.size(0)
target_length = target_tensor.size(0)
encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device)
loss = torch.Tensor([0]).squeeze().to(device)
with torch.set_grad_enabled(True):
# Pass the sequence through the encoder and store the hidden states at each step
for ei in range(input_length):
encoder_output, encoder_hidden = encoder(
input_tensor[ei], encoder_hidden)
encoder_outputs[ei] = encoder_output[0, 0]
# Initiate decoder with the GO_token
decoder_input = torch.tensor([[GO_token]], device=device)
# Initiate the decoder with the last encoder hidden state
decoder_hidden = encoder_hidden
# Teacher forcing: Feed the target as the next input
for di in range(target_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_outputs)
loss += loss_function(decoder_output, target_tensor[di])
decoder_input = target_tensor[di] # Teacher forcing
loss.backward()
encoder_optimizer.step()
decoder_optimizer.step()
print_loss_total += loss.item() / target_length
it = i + 1
if it % 1000 == 0:
print_loss_avg = print_loss_total / 1000
print_loss_total = 0
print('Iteration: %d %.1f%%; Loss: %.4f' % (it, 100 * it / len(data_loader.dataset), print_loss_avg))
###Output
_____no_output_____
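###Markdown
The inner loop above always feeds the ground-truth token back into the decoder (teacher forcing). The toy sketch below isolates that choice on a tiny standalone GRU, so it is easy to see which single line changes for free-running decoding; all sizes and tokens here are made up for illustration.
###Code
import torch

torch.manual_seed(0)
vocab_size, hidden_size = 6, 4
emb = torch.nn.Embedding(vocab_size, hidden_size)
gru = torch.nn.GRU(hidden_size, hidden_size)
proj = torch.nn.Linear(hidden_size, vocab_size)

target = torch.tensor([2, 3, 1])       # toy target sequence
h = torch.zeros(1, 1, hidden_size)
inp = torch.tensor([0])                # toy GO token
for t in range(len(target)):
    o, h = gru(emb(inp).view(1, 1, -1), h)
    pred = proj(o[0]).argmax(dim=1)    # the model's own guess
    inp = target[t:t+1]                # teacher forcing: feed the ground truth
    # inp = pred.detach()              # free running: feed the prediction instead
###Output
_____no_output_____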
###Markdown
Next, let's implement the evaluation procedure:
###Code
def evaluate(encoder, decoder, input_tensor, max_length=MAX_LENGTH):
with torch.no_grad():
input_length = input_tensor.size()[0]
encoder_hidden = encoder.init_hidden()
input_tensor.to(device)
encoder_outputs = torch.zeros(max_length, encoder.hidden_size, device=device)
for ei in range(input_length):
# Pass the sequence through the encoder and store the hidden states at each step
encoder_output, encoder_hidden = encoder(input_tensor[ei],
encoder_hidden)
encoder_outputs[ei] += encoder_output[0, 0]
# Initiate the decoder with the last encoder hidden state
decoder_input = torch.tensor([[GO_token]], device=device) # GO
# Initiate the decoder with the last encoder hidden state
decoder_hidden = encoder_hidden
decoded_words = []
decoder_attentions = torch.zeros(max_length, max_length)
# Generate the output sequence (free running, as opposed to teacher forcing)
for di in range(max_length):
decoder_output, decoder_hidden, decoder_attention = decoder(
decoder_input, decoder_hidden, encoder_outputs)
decoder_attentions[di] = decoder_attention.data
# Obtain the output word index with the highest probability
_, topi = decoder_output.data.topk(1)
if topi.item() != EOS_token:
decoded_words.append(dataset.output_lang.index2word[topi.item()])
else:
break
# Use the latest output word as the next input
decoder_input = topi.squeeze().detach()
return decoded_words, decoder_attentions[:di + 1]
###Output
_____no_output_____
###Markdown
Next, we'll implement a helper function, which allows us to evaluate (translate) random sequences from the training dataset:
###Code
def evaluate_randomly(encoder, decoder, n=10):
for i in range(n):
sample = random.randint(0, len(dataset.dataset) - 1)
pair = dataset.pairs[sample]
input_sequence = dataset[sample][0].to(device)
output_words, attentions = evaluate(encoder, decoder, input_sequence)
print('INPUT: %s; TARGET: %s; RESULT: %s' % (pair[0], pair[1], ' '.join(output_words)))
###Output
_____no_output_____
###Markdown
We'll continue with two functions that allow us to see the attention scores over the input sequence:
###Code
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
def plot_attention(input_sentence, output_words, attentions):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + input_sentence.split(' ') +
['<EOS>'], rotation=90)
ax.set_yticklabels([''] + output_words)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def evaluate_and_plot_attention(input_sentence, encoder, decoder):
input_tensor = dataset.sentence_to_sequence(input_sentence).to(device)
output_words, attentions = evaluate(encoder=encoder,
decoder=decoder,
input_tensor=input_tensor)
print('INPUT: %s; OUTPUT: %s' % (input_sentence, ' '.join(output_words)))
plot_attention(input_sentence, output_words, attentions)
###Output
_____no_output_____
###Markdown
Next, we'll initialize the training dataset:
###Code
dataset = NMTDataset('data/eng-fra.txt', DATASET_SIZE)
###Output
_____no_output_____
###Markdown
We'll continue with initializing the encoder/decoder model and the training framework components:
###Code
enc = EncoderRNN(dataset.input_lang.n_words, HIDDEN_SIZE).to(device)
dec = AttnDecoderRNN(HIDDEN_SIZE, dataset.output_lang.n_words, dropout=0.1).to(device)
train_loader = torch.utils.data.DataLoader(dataset,
batch_size=1,
shuffle=False)
encoder_optimizer = torch.optim.Adam(enc.parameters())
decoder_optimizer = torch.optim.Adam(dec.parameters())
loss_function = torch.nn.NLLLoss()
###Output
_____no_output_____
###Markdown
Finally, we can run the training:
###Code
train(enc, dec, loss_function, encoder_optimizer, decoder_optimizer, train_loader)
###Output
Iteration: 1000 2.5%; Loss: 3.2344
Iteration: 2000 5.0%; Loss: 2.6458
Iteration: 3000 7.5%; Loss: 2.2991
Iteration: 4000 10.0%; Loss: 2.1474
Iteration: 5000 12.5%; Loss: 2.0301
Iteration: 6000 15.0%; Loss: 1.9312
Iteration: 7000 17.5%; Loss: 1.8323
Iteration: 8000 20.0%; Loss: 1.7041
Iteration: 9000 22.5%; Loss: 1.6163
Iteration: 10000 25.0%; Loss: 1.6229
Iteration: 11000 27.5%; Loss: 1.5275
Iteration: 12000 30.0%; Loss: 1.4935
Iteration: 13000 32.5%; Loss: 1.4158
Iteration: 14000 35.0%; Loss: 1.3289
Iteration: 15000 37.5%; Loss: 1.3238
Iteration: 16000 40.0%; Loss: 1.3008
Iteration: 17000 42.5%; Loss: 1.3011
Iteration: 18000 45.0%; Loss: 1.2671
Iteration: 19000 47.5%; Loss: 1.2171
Iteration: 20000 50.0%; Loss: 1.1584
Iteration: 21000 52.5%; Loss: 1.1282
Iteration: 22000 55.0%; Loss: 1.0746
Iteration: 23000 57.5%; Loss: 1.0888
Iteration: 24000 60.0%; Loss: 1.0930
Iteration: 25000 62.5%; Loss: 1.1087
Iteration: 26000 65.0%; Loss: 1.0284
Iteration: 27000 67.5%; Loss: 1.0434
Iteration: 28000 70.0%; Loss: 1.0601
Iteration: 29000 72.5%; Loss: 0.9805
Iteration: 30000 75.0%; Loss: 0.9516
Iteration: 31000 77.5%; Loss: 0.9791
Iteration: 32000 80.0%; Loss: 0.9477
Iteration: 33000 82.5%; Loss: 0.9331
Iteration: 34000 85.0%; Loss: 0.8922
Iteration: 35000 87.5%; Loss: 0.9079
Iteration: 36000 90.0%; Loss: 0.8848
Iteration: 37000 92.5%; Loss: 0.8791
Iteration: 38000 95.0%; Loss: 0.8695
Iteration: 39000 97.5%; Loss: 0.8856
Iteration: 40000 100.0%; Loss: 0.8629
###Markdown
Let's see how the model translates some randomly selected sentences:
###Code
evaluate_randomly(enc, dec)
###Output
INPUT: vous etes merveilleuse .; TARGET: you re wonderful .; RESULT: you re wonderful .
INPUT: c est un bon mari pour moi .; TARGET: he is a good husband to me .; RESULT: he is a good husband to me .
INPUT: c est un tres gentil garcon .; TARGET: he s a very nice boy .; RESULT: he s a very nice boy .
INPUT: je suis tout a fait pour .; TARGET: i m all for that .; RESULT: i m all used to it .
INPUT: je suis deshydratee .; TARGET: i m dehydrated .; RESULT: i m dehydrated .
INPUT: je ne suis pas particulierement impressionnee .; TARGET: i m not particularly impressed .; RESULT: i m not impressed .
INPUT: il est tres flexible .; TARGET: he s very flexible .; RESULT: he s very flexible .
INPUT: desole .; TARGET: i m sorry .; RESULT: i m sorry .
INPUT: c est un de mes voisins .; TARGET: he is one of my neighbors .; RESULT: he s a afraid of my neighbors .
INPUT: il a huit ans .; TARGET: he s eight years old .; RESULT: he is eight years old .
###Markdown
Next, let's visualize the decoder attention over the elements of the input sequence:
###Code
output_words, attentions = evaluate(
enc, dec, dataset.sentence_to_sequence("je suis trop froid .").to(device))
plt.matshow(attentions.numpy())
###Output
_____no_output_____
###Markdown
Let's see the translation and attention scores with a few more samples:
###Code
evaluate_and_plot_attention("elle a cinq ans de moins que moi .", enc, dec)
evaluate_and_plot_attention("elle est trop petit .", enc, dec)
evaluate_and_plot_attention("je ne crains pas de mourir .", enc, dec)
evaluate_and_plot_attention("c est un jeune directeur plein de talent .", enc, dec)
###Output
INPUT: elle a cinq ans de moins que moi .; OUTPUT: she is five years younger than me .
|
Book Practice/1 - ndArray and Data Types.ipynb | ###Markdown
Compare NumPy with a List
###Code
import numpy as np
my_array = np.arange(1000000)
my_list = list(range(1000000))
%time for _ in range(50): my_array * 2
%time for _ in range(50): my_list * 2
###Output
Wall time: 6.31 s
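###Markdown
Note that the two operations above are not strictly equivalent: `my_array * 2` doubles every element, while `my_list * 2` builds a new list with the elements repeated twice. A small sketch with `timeit` comparing the element-wise versions directly (iteration counts are arbitrary):
###Code
import timeit
import numpy as np

arr = np.arange(1000000)
lst = list(range(1000000))
t_np = timeit.timeit(lambda: arr * 2, number=50)               # vectorized
t_py = timeit.timeit(lambda: [x * 2 for x in lst], number=50)  # Python loop
print(t_np, t_py)
###Output
_____no_output_____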
###Markdown
4.1 The NumPy ndarray: A Multidimensional Array Object
###Code
#(rows, columns)
x = np.random.randn(2, 3)
#(rows, columns)
y = np.random.randn(8, 4)
x
y
x * 10
y * 10
x.dtype
x.shape
###Output
_____no_output_____
###Markdown
Creating ndarray
###Code
xarr = np.array([[1, 2, 3, 4, 5], [11, 22, 33, 44, 55]])
xarr.dtype
xarr.ndim
xarrr = np.array([[[1, 2, 3, 4, 5], [11, 22, 33, 44, 55]]])
xarrr.ndim
xarrr
xarr.shape
xarrr.shape
np.zeros(10)
np.zeros((2, 3))
np.empty((2, 3))
np.zeros((1, 2, 3))
np.zeros((2, 2, 3))
np.zeros((3, 2, 3))
list1 = [1, 2, 3, 4, 5]
np.asarray(list1)
abc1 = np.array([1, 2, 3])
abc1.dtype
abc2 = abc1.astype(np.float64)
abc1.dtype
abc2.dtype
list1
np.array(list1)
list1
np.ones((2, 3))
mn = np.ones((2, 3), dtype="float32")
mn
np.ones_like(mn, dtype="int64")
## alternate zeros() and zeros_like()
x = np.empty((2, 3))
x
gf = np.empty_like(x)
gf
c = np.full((2, 3), fill_value='3', dtype="int32")
c
np.full_like(c, fill_value=23)
np.eye(5, 5)
np.identity(5)
###Output
_____no_output_____
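###Markdown
A note on `np.empty` used above: unlike `np.zeros`, it does not initialize memory, so the returned values are whatever happened to be in that buffer. A small sketch:
###Code
a = np.zeros((2, 3))   # guaranteed zeros
b = np.empty((2, 3))   # uninitialized: contents are arbitrary
a
b
###Output
_____no_output_____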
###Markdown
Data Types for ndarrays
###Code
arr1 = np.array([1, 2, 3], dtype=np.float64)
arr2 = np.array([1, 2, 3], dtype=np.int32)
arr1.dtype
arr2.dtype
# float32 is float
# int32 is integer
# float64 is Double
###Output
_____no_output_____
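###Markdown
One practical consequence of fixed-width integer dtypes: values outside the representable range wrap around when casting (int8 holds -128..127, uint8 holds 0..255). A small sketch:
###Code
x = np.array([300, -5])
x.astype(np.int8)   # 300 wraps to 300 - 256 = 44; -5 fits and stays -5
x.astype(np.uint8)  # 300 wraps to 44; -5 wraps to 256 - 5 = 251
###Output
_____no_output_____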
###Markdown
Numpy more Data Types
###Code
int8_array1 = np.array([1, 2, 3], dtype=np.int8)
int8_array2 = np.array([1, 2, 3], dtype=np.uint8)
print(int8_array1.dtype)
int8_array2.dtype
int16_a = np.array([1, 2, 3], dtype=np.int16)
int32_a = np.array([1, 2, 3], dtype=np.int32)
int64_a = np.array([1, 2, 3], dtype=np.int64)
float16_a = np.array([1, 2, 3], dtype=np.float16)
float32_a = np.array([1, 2, 3], dtype=np.float32)
float64_a = np.array([1, 2, 3], dtype=np.float64)
#float128_a = np.array([1, 2, 3], dtype=np.float128)
c64 = np.array([1, 2, 3], dtype=np.complex64)
c64
c64 = np.array([[1, 2, 3], [11, 22, 33]], dtype=np.complex64)
c64
# complex128 and complex256
bol = np.array([1, 2, 3], dtype=np.bool)
bol
list1 = list([1, 2, 3])
list2 = list([4, 5, 6])
obj = np.array([list1, list2], dtype=np.object)
obj
str1 = np.array([1.2, 3.3, 6.6, 1.0, 3.4, 1.2], dtype=np.string_)
str1
str1.astype(np.float32)
str2 = np.array(['aa','b', 'cd', 'd', 'efg', 'shk', 'ijkl'], dtype=np.string_)
str2
uni1 = np.array([1.2, 3.4, 0.1, 45.9], dtype=np.unicode_)
uni1
uni2 = np.array(['abc', 'shk', 'ger', 'arg', 'shakeel'], dtype=np.unicode_)
uni2
###Output
_____no_output_____ |
laboratorio/lezione5-14ott21/lezione5-regexp.ipynb | ###Markdown
Regular expressions> Definition of a regular expression> *Matching* and *searching* operations> Accessing the occurrences> How to write a regular expression> External and internal *backreference*> The `findall()` and `sub()` functions Definition of a regular expression**Regular expression (RE)**: a string of symbols that represents a language.Examples:- `ca?t` represents the language {`ct`, `cat`}- `ca*t` represents the language {`ct`, `cat`, `caat`, `caaat`, `caaaat`, `caaaaat`, ...}- `ca+t` represents the language {`cat`, `caat`, `caaat`, `caaaat`, `caaaaat`, ...}- `cat` represents the language {`cat`}Operations with REs:- *matching*- *searching*- *substitution*Regular expressions are provided by the `re` module.
###Code
import re
###Output
_____no_output_____
###Markdown
The `re.match()` function for the *matching* operation. Given a string S and a regular expression *RE*, it checks whether S starts with (or even coincides with) one of the strings of the language of *RE*. re.match(my_expr, my_string) Returned object: `re.Match`. The behavior is greedy.
###Code
re.match('cat', 'dog and cat')
re.match('cat', 'cat and dog')
re.match('cat', 'cat')
re.match('ca*t', 'caaaaaataaaaa')
###Output
_____no_output_____
###Markdown
The `re.search()` function for the *searching* operation. Given a string S and a regular expression *RE*, it checks whether S contains one of the strings of the language of *RE*. re.search(my_expr, my_string) Returned object: `re.Match`. The behavior is greedy.
###Code
re.search('cat', 'dog and cat')
re.search('cat', 'cat and dog')
re.search('cat', 'cat')
###Output
_____no_output_____
###Markdown
Accessing the found occurrence. The `re.Match` object provides two methods to locate the found occurrence: - `start()`, the (0-based) start position of the occurrence- `end()`, the (1-based) end position of the occurrence
###Code
s = re.search('cat', 'dog and cat and rat')
print(s)
s.start()
s.end()
###Output
_____no_output_____
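###Markdown
The `span()` method returns both positions at once as a `(start, end)` tuple; the following small sketch is equivalent to the slicing above.
###Code
s = re.search('cat', 'dog and cat and rat')
start, end = s.span()
'dog and cat and rat'[start:end]
###Output
_____no_output_____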
###Markdown
Occorrenza:
###Code
'dog and cat and rat'[s.start():s.end()]
###Output
_____no_output_____
###Markdown
How to write a regular expression**Two groups of symbols**1. `. | ( ) [ ] { } + \ ^ $ * ?`, metasymbols 1. all the symbols that are not metasymbols and that represent themselves To make a metasymbol represent itself, it is enough to prepend `\`.`ca?t` is different from `ca\?t`There are symbols that, when preceded by `\`, represent something else.`ABC` is different from `\ABC`In general the *metasymbols* allow you to specify:- anchors- classes- quantifiers- alternatives- groups- backreferences--- Anchor (zero-width element)- start of line `^`- end of line `$`- start of string `\A`- end of string `\z`- end of string `\Z` (possibly before `\n`)- word boundary `\b`- non word boundary `\B`*Word symbol*: lowercase letter from `a` to `z`, uppercase letter from `A` to `Z`, digit from `0` to `9`, the *underscore* symbol `_`.*Word boundary*: zero-width element between a word symbol and a non-word symbol. **EXAMPLES**:`^cat` represents the single string `cat` with the constraint that it is at the start of a line- in `cataaaa`, `aaaa\ncataaaa`- not in `aaacataaa`, `aaacat`, `aaaacat\naaaa``cat$` represents the single string `cat` with the constraint that it is at the end of a line- in `aaacat`, `aaaacat\naaaa`- not in `aaacataaa`, `cataaa`, `aaaa\ncataaaa``\Acat` represents the single string `cat` with the constraint that it is at the start of the string- in `cataaaa`- not in `aaaa\ncataaaa``cat\z` represents the single string `cat` with the constraint that it is at the end of the string- in `aaaacat`- not in `aaaacat\naaaa`, `aaaa\naaaacat\n``cat\Z` represents the single string `cat` with the constraint that it is at the end of the string (possibly before `\n`)- in `aaaa\naaaacat\n`, `aaaacat``\bis` represents the single string `is` with the constraint that the character before `i` is not a word symbol- in `It is a cat`- not in `This cat``\Bis` represents the single string `is` with the constraint that the character before `i` is a word symbol- in `This cat`- not in `It is a cat`*** Class (set of characters)A class is specified within square brackets `[]`:- by listing each of the characters belonging to the class - by specifying ranges with the symbol `-`- the symbol `^` placed at the beginning performs the **negation** of what is specified afterwardsand it represents each one of the symbols that belong to it.**EXAMPLES OF CLASSES**:`[aeiou]` represents the class of lowercase vowels`[^aeiou]` represents the class of everything that is not a lowercase vowel`[ae^iou]` represents the class of the lowercase vowels and the symbol `^``[.;:,]` represents the class of punctuation symbols (in this case the symbol `.` is not a *metasymbol*)`[?\b]` represents the class of the two symbols `?` and *backspace*`[a-z]` represents the class of lowercase letters`[a\-z]` represents the class of the three symbols `a`, `-` and `z`.`[a-zA-Z]` represents the class of all letters`[a-zA-Z0-9_]` represents the class of all word symbols `[^a-zA-Z0-9_]` is the class of all symbols that are not word symbols.**EXAMPLE OF REs WITH CLASSES**:`[A-Z]at` represents all the strings that start with an uppercase letter and end with `at`- for example `Cat`, `Rat`, `Bat`- but not `cat`, `rat`, `bat``[A-Za-z]at` represents all the three-letter strings that end with `at`- for example `Cat`, `Rat`, `Bat`, `cat`, `rat`, `bat`**Shortcuts**:- `[0-9]` = `\d`- `[^0-9]` = `\D`- `[a-zA-Z0-9_]` = `\w`- `[^a-zA-Z0-9_]` = `\W`- `[0-9a-fA-F]` = `\h`- `[^0-9a-fA-F]` = `\H`- `[␣\t\r\n\f]` = `\s`- `[^␣\t\r\n\f]` = `\S`- `[^\n]` = `.` Group (part of the 
*RE*)A group is specified within round brackets `()` and can:- be subject to quantification- be subject to *backreference*- change the precedence of the alternatives`a(bc)d` contains the group `(bc)`*** QuantifierA *quantifier* is a metasymbol that specifies the number of times the preceding character, class or group may occur within the string the *RE* is compared against.A *quantifier* can specify:- zero or more repetitions `*`- one or more repetitions `+`- zero or one repetition `?`- from `m` to `n` repetitions `{m,n}`- at least `m` repetitions `{m,}`- at most `n` repetitions `{,n}`- exactly `m` repetitions `{m}``ca*t` represents the strings composed of `c`, followed by zero or more `a` symbols, followed by `t`. - `ct`, `cat`, `caaat`, `caaaat` are in the language`ca+t` represents the strings composed of `c`, followed by one or more `a` symbols, followed by `t`.- `cat`, `caaat` are in the language- `ct` is not in the language`c[ab]+t` represents the strings composed of a symbol `c` followed by one or more repetitions of the symbol `a` or `b`, followed by the symbol `t`.- `caabbbabababbat`, `caaaaaaaat`, `cbbbbbbbbt` are in the language`c(ab)+t` represents the strings composed of `c`, followed by one or more repetitions of `ab`, followed by `t`. For example - `cabt`, `cabababt`, `cababababt`- `caaaaaaaat`, `cbbbbbbbbt`, `cbbababbbbbbt` are not in the language`ca?t` represents only the strings `ct` and `cat`.`ca{2,5}t` represents only the strings `caat`, `caaat`, `caaaat` and `caaaaat`, composed of `c`, followed by two, three, four or five `a` symbols, followed by `t`.`ca{2,}t` represents the strings `caat`, `caaat`, `caaaat`, `caaaaat`, etc., composed of `c`, followed by at least two `a`, followed by `t`.`ca{,5}t` represents only the strings `ct`, `cat`, `caat`, `caaat`, `caaaat` and `caaaaat`, composed of `c`, followed by at most five `a` symbols, followed by `t`.`ca{5}t` represents only the string `caaaaat`, composed of `c`, followed by five `a` symbols, followed by `t`.*** AlternativeTo specify an *alternative* between two parts of the *RE*, the *metasymbol* `|` is used.`ab|cd` represents the language {`ab`, `cd`}.`cane nero|bianco` corresponds to the language {`cane nero`, `bianco`}.`cane (nero|bianco)` corresponds to the language {`cane nero`, `cane bianco`}*** EXERCISE 1 Consider the string `***hello world***`
###Code
stringa = '***hello world***'
###Output
_____no_output_____
###Markdown
Search for the *RE* `\w+`, which represents all strings composed of one or more word symbols.
###Code
s = re.search('\w+', stringa)
###Output
_____no_output_____
###Markdown
The *searching occurrence* is:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
Make the found occurrence now be `hello world`.
###Code
s = re.search('\w+\s\w+', stringa)
###Output
_____no_output_____
###Markdown
The *searching occurrence* is now:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
Now change the string to `***hello world***` while keeping the same *RE*.
###Code
stringa = '***hello world***'
s = re.search('\w+\s\w+', stringa)
###Output
_____no_output_____
###Markdown
Extract the occurrence.
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
The *RE* that captures `hello world` (with any number of spaces between `hello` and `world`) is:
###Code
s = re.search('\w+\s+\w+', stringa)
###Output
_____no_output_____
###Markdown
The occurrence is now indeed:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
The new *RE* finds the occurrence for any number of spaces between `hello` and `world`.
###Code
stringa = '***hello world***'
s = re.search('\w+\s+\w+', stringa)
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
Now try using (on this new string) the *RE* `.+`, which represents all strings of one or more arbitrary characters (except the *newline* `\n`).
###Code
s = re.search('.+', stringa)
###Output
_____no_output_____
###Markdown
The occurrence will now be:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
At this point, insert a *newline* character `\n` into the string after `hello`.
###Code
stringa = '***hello\n world***'
s = re.search('.+', stringa)
###Output
_____no_output_____
###Markdown
The occurrence will now be:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
since the symbol `\n` does not belong to the class represented by the *metasymbol* `.`. EXERCISE 2 Consider the string:
###Code
stringa = 'bbbcaaaaaaaatcaaaat'
###Output
_____no_output_____
###Markdown
Use the *RE* `ca+`, which represents all strings composed of `c` followed by at least one `a` character.
###Code
s = re.search('ca+', stringa)
###Output
_____no_output_____
###Markdown
The occurrence is:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
Because of the *greedy* behavior of the operation, the search extends as far to the right as possible. Now add a `?` right after the `+` quantifier.
###Code
s = re.search('ca+?', stringa)
###Output
_____no_output_____
###Markdown
The occurrence becomes:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
since the question mark turns off the greedy behavior and settles for the shortest occurrence. Now search for the *RE* `(ca)+`, which represents all strings composed of `ca` repeated at least once.
###Code
s = re.search('(ca)+', stringa)
###Output
_____no_output_____
###Markdown
The occurrence is now:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
Now try searching for the *RE* `ca*`, which represents all strings composed of `c` followed by zero or more `a`.
###Code
s = re.search('ca*', stringa)
###Output
_____no_output_____
###Markdown
The occurrence is:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
If the question mark is added:
###Code
s = re.search('ca*?', stringa)
###Output
_____no_output_____
###Markdown
The occurrence is:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
Beware of a regular expression like `(ca)*?`.
###Code
s = re.search('(ca)*?', stringa)
###Output
_____no_output_____
###Markdown
The occurrence is:
###Code
stringa[s.start():s.end()]
###Output
_____no_output_____
###Markdown
External *backreference***Goal**: capture the parts of the occurrence corresponding to the groups in order to use them outside the *matching*/*searching* operation.The **groups are indexed** from left to right starting from 1.The captured part corresponding to a group is returned by the `group()` method of the `Match` object: my_match_obj.group(my_index)which takes as argument the index of the group to capture (if the argument is not specified, the default index 0 is assumed, which corresponds to the whole occurrence).The start and the end of the part captured by a group are returned by the `start()` and `end()` methods of the `Match` object: my_match_obj.start(my_index) my_match_obj.end(my_index)which take as argument the index of the group to capture (if the argument is not specified, the default index 0 is assumed, which corresponds to the whole occurrence).---**EXAMPLE**:
###Code
s = re.search('(\w+)\s+(\w+)', 'gatto cane')
###Output
_____no_output_____
###Markdown
The whole occurrence is:
###Code
s.group()
###Output
_____no_output_____
###Markdown
The part captured by the first group is:
###Code
s.group(1)
###Output
_____no_output_____
###Markdown
and it starts at position:
###Code
s.start(1)
###Output
_____no_output_____
###Markdown
The part captured by the second group is:
###Code
s.group(2)
###Output
_____no_output_____
###Markdown
and it starts at position:
###Code
s.start(2)
###Output
_____no_output_____
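###Markdown
Groups can also be given names with the `(?P<name>...)` syntax and then retrieved by name instead of by index. A small sketch:
###Code
s = re.search('(?P<first>\w+)\s+(?P<second>\w+)', 'gatto cane')
s.group('first')
s.group('second')
###Output
_____no_output_____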
###Markdown
Internal *backreference***Goal**: create internal references to the groups through the metasymbols `\\1`, `\\2`, `\\3` etc., where `\\i` refers to the i-th group starting from the left. **Example 1**: `(\w+)\s+\\1` is equivalent to `(\w+)\s+(\w+)` with the constraint that the two parts `(\w+)` and `(\w+)` capture the same string.
###Code
s = re.search('(\w+)\s+\\1', 'gatto gatto')
###Output
_____no_output_____
###Markdown
The whole occurrence found is:
###Code
s.group()
###Output
_____no_output_____
###Markdown
while the part captured by the left group is:
###Code
s.group(1)
###Output
_____no_output_____
###Markdown
If instead the string `gatto cane` is used:
###Code
s = re.search('(\w+)\s+\\1', 'gatto cane')
s.group(0)
###Output
_____no_output_____
###Markdown
**Example 2**: `(\w+)\\1` is equivalent to `(\w+)(\w+)` with the constraint that the two parts `(\w+)` and `(\w+)` capture the same string.
###Code
s = re.search('(\w+)\\1', 'Mississippi')
###Output
_____no_output_____
###Markdown
The whole occurrence found is:
###Code
s.group()
###Output
_____no_output_____
###Markdown
while the part captured by the left group is:
###Code
s.group(1)
###Output
_____no_output_____
###Markdown
The `findall()` function The function: re.findall(my_expr, my_string)finds all the non-overlapping occurrences of the *RE* `my_expr` in the string `my_string`, and returns: - the list of the occurrences from left to right, if the *RE* contains no groups- the list of the occurrences captured by a group, if the *RE* contains a single group- the list of the occurrences captured by the groups, organized in tuples, if the *RE* contains several groups (possibly nested)
###Code
re.findall('\w\w', 'abcdefghi')
re.findall('(\w)\w', 'abcdefgh')
re.findall('(\w)(\w)', 'abcdefgh')
re.findall('((\w)(\w))', 'abcdefgh')
re.findall('\w+', 'cat dog mouse rat')
re.findall('\w+\s+\w+', 'cat dog mouse rat')
re.findall('(\w+)\s+\w+', 'cat dog mouse rat')
re.findall('(\w+)\s+(\w+)', 'cat dog mouse rat')
###Output
_____no_output_____
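###Markdown
When the positions of the occurrences are also needed, `re.finditer()` returns an iterator of `re.Match` objects instead of plain strings. A small sketch:
###Code
for m in re.finditer('\w+', 'cat dog mouse rat'):
    print(m.group(), m.start(), m.end())
###Output
_____no_output_____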
###Markdown
The `sub()` function The function: re.sub(my_expr, r_string, my_string)returns the string obtained by replacing with `r_string` all the non-overlapping occurrences of `my_expr` in `my_string`.
###Code
re.sub('\w+\s\w+', 'goose', 'cat dog mouse rat')
###Output
_____no_output_____ |
wavelet/radec_calibrator_ALMA.ipynb | ###Markdown
Plot the positions of the calibrators
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from astropy.coordinates import SkyCoord
import astropy.units as u
%matplotlib inline
def readCal(ifile, fluxmin = 0.2):
"""Read a list of calibrators in CSV format from the Source Catalogue web interface
    Copied from ALMAQueryCal.py.
"""
listcal = []
fcal = open(ifile)
for line in fcal:
if line[0] != "#":
tok = line.split(",")
band = tok[0].split(" ")[1]
flux = float(tok[7])
name = tok[13].split("|")[0]
alpha2000 = float(tok[3])
delta2000 = float(tok[5])
if flux >= fluxmin:
found = False
for nameYet in listcal:
if nameYet[0] == name:
found = True
if not found:
listcal.append([name, alpha2000, delta2000, flux])
return(listcal)
###Output
_____no_output_____
###Markdown
Read the data
###Code
input_file = "CalAug2016.list"
data = readCal(input_file, fluxmin = 0.1) # change fluxmin
data_np = np.array(data)
name = data_np[:,0]
# use the builtin float: the np.float alias was removed in recent NumPy
ra = data_np[:,1].astype(float)
dec = data_np[:,2].astype(float)
flux = data_np[:,3].astype(float)
# cmap = plt.cm.hot
# flux_log = np.log(flux)
# max_flux = flux_log.max()
# color = flux_log/max_flux
size = flux/flux.max()
plt.scatter(ra, dec, c='blue', s=size*100, lw=0, alpha=0.5)
plt.xlim([0., 360.])
plt.show()
###Output
_____no_output_____
###Markdown
Note: ALMA is located at 23.0278° S, 67.7548° W. Map projection plot. Equatorial coordinates
###Code
m = Basemap(projection='moll', lon_0=0) # center at 'longitude' 0
###Output
_____no_output_____
###Markdown
Shift the RA range to [-180, 180]
###Code
def shift180pm(alpha):
return([x - 360 if x > 180 else x for x in alpha])
ra_shift = shift180pm(ra) # shift ra [-180, 180]
x, y = m(ra_shift, dec)
m.scatter(x, y, c='blue', s=size*100, lw=0, alpha=0.7)
m.drawparallels(np.arange(-90.,120.,30.))
m.drawmeridians(np.arange(0.,420.,60.))
plt.title("Calibrator Position - Equatorial coordinate")
plt.show()
###Output
_____no_output_____
###Markdown
Galactic coordinate
###Code
equ = SkyCoord(ra, dec, frame='icrs', unit='deg')
gal = equ.galactic
l_shift = shift180pm(gal.l.degree) # shift galactic longitude [-180, 180]
x, y = m(l_shift, gal.b.degree)
m.scatter(x, y, c='red', s=size*100, lw=0, alpha=0.7)
m.drawparallels(np.arange(-90.,120.,30.))
m.drawmeridians(np.arange(0.,420.,60.))
plt.title("Calibrator Position - Galactic coordinate")
plt.show()
###Output
_____no_output_____ |
MosquitoDataPrep.ipynb | ###Markdown
Importing, cleaning and preparing mosquito data: 1. Import libraries 2. Check filenames 3. Read in data 4. Explore data to understand the structure and values within each dataframe 5. Search for and clean errors/inconsistencies 6. Merge dataframes - the final format includes at minimum: SampleID, Species, Date, Town, County, TestType, Result, DayofYear
###Code
#Import libraries
import os
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
#Get filesnames to make sure to correctly ID files to read in
os.listdir("/Users/lbrown01/Dropbox/DataScienceStuff/Data/WNV_EE/")
#Read in mosquito data
#NSmos = mosquitos not sent (NS) for testing (but one error??)
NSmos = pd.read_csv("/Users/lbrown01/Dropbox/DataScienceStuff/Data/WNV_EE/NSmos0419.csv")
#negmos = mosquitos sent for testing, came back negative
negmos = pd.read_csv("/Users/lbrown01/Dropbox/DataScienceStuff/Data/WNV_EE/negmos0419.csv") #tested neg
#WNVmos = mosquitos sent for testing, postive for WNV
WNVmos = pd.read_csv("/Users/lbrown01/Dropbox/DataScienceStuff/Data/WNV_EE/WNVmos0419.csv") #WNV pos mosquitos
#EEEmos = mosquitos sent for testing, positive for EEE
EEEmos = pd.read_csv("/Users/lbrown01/Dropbox/DataScienceStuff/Data/WNV_EE/EEEmos0419.csv") #EEE pos mosquitos
###Output
_____no_output_____
###Markdown
*NSmos* data = mosquitos not sent (NS) for testing Exploration/cleaning
###Code
#Get dimensions of dataframes, check for number of unique identifiers to check for duplicates
NSmos.shape
#how many species??
NSmos.Species.unique().shape
#check for NAs in species column
#NSmos.Species[NSmos.Species.isna()] #less elegant approach but same result as line of code below
NSmos.Species.isna().sum()
#check for NAs in all columns
NSmos.isnull().sum()
#check that no Species IDs are blank (rather than explicitly marked as NA)
NSmos[NSmos['Species'] == '']
#get names of unique species
NSmos.Species.unique()
#get number of unique mosquitos captured (not tested)
NSmos.Sample_ID.nunique()
#explore first 5 lines of dataframe
NSmos.head(5)
###Output
_____no_output_____
###Markdown
I want to merge dataframes by SampleID, so search and review duplicates
###Code
NSmos[NSmos.duplicated(['Sample_ID'],keep=False)] #look at duplicate sampleIDs
###Output
_____no_output_____
###Markdown
Tested mosquitos were tested for both WNV & EEE; the typical notation under "Test_Type" is "WNV,EEE"; this seems to be a unique instance in the dataframe. This file contains mosquitos that were not tested, so this is either an error and this individual does not belong in this dataframe (negative for both EEE and WNV) or it was not submitted for testing. ***Flagged, checked for sample ID CM08-0547 in negmos dataframe, not in tested mosquitos, okay to drop duplicate***
###Code
#Again, confirms that all mosquitos in this dataframe were not tested except the unique ID CM08-0547 checked in negmos
NSmos.Test_Type.value_counts() #this command doesn't include number of NAs
NSmos.Result.value_counts()
#2 samples reported as negative, but also reported as not submitted
#for testing, these are the duplicates above
NSmos.Submitted_for_Testing.value_counts() #with the exception of error above, none submitted for testing
NSmos.drop_duplicates(subset ="Sample_ID", inplace = True)
NSmos.shape
###Output
_____no_output_____
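###Markdown
Since the cleaning below leans on `duplicated()` and `drop_duplicates()`, here is a small self-contained sketch of their semantics on toy data: `keep=False` marks *all* copies of a repeated value, while `drop_duplicates(keep='first')` keeps one copy per value.
###Code
toy = pd.DataFrame({'Sample_ID': ['A', 'B', 'B', 'C']})
toy[toy.duplicated(['Sample_ID'], keep=False)]         # both 'B' rows
toy.drop_duplicates(subset='Sample_ID', keep='first')  # A, one B, C
###Output
_____no_output_____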
###Markdown
*Negmos* data = mosquitos sent for testing, came back negative Exploration/cleaning
###Code
negmos = pd.read_csv("/Users/lbrown01/Dropbox/DataScienceStuff/Data/WNV_EE/negmos0419.csv") #tested neg
negmos.shape
negmos.Sample_ID.nunique() #should be size of dataframe when dupliates are removed
102138-94409 #7729 are duplicate sample IDs
#check duplicated values in negmos to see if they seem to be actual duplicates and need to be dropped
test = negmos[negmos.duplicated(['Sample_ID'],keep=False)]
test.shape
#7729*2 = 15458 if every duplicated ID appeared exactly twice; the observed 15428 below implies some IDs appear 3+ times
test.sort_values(by='Sample_ID')
#here duplicates seem to be the same ID, so drop duplicates below
negmos.shape #102138
test.shape #15428
negmos[negmos.duplicated(['Sample_ID'],keep=False)].shape
###Output
_____no_output_____
###Markdown
Filter out duplicates sample IDs
###Code
negmos.drop_duplicates(subset ="Sample_ID", inplace = True)
negmos.shape
#after drop_duplicates the shape should match the 94409 unique IDs
#102138-94409 = 7729 rows removed, so the above worked to remove duplicates :)
negmos.Species.unique()
negmos.Species.isna().sum()
#negmos[NSmos['Species'] == '']
#check that all are listed as same for test type; yes
negmos.Test_Type.value_counts()
#check that all are negative; yes
negmos.Result.value_counts()
###Output
_____no_output_____
###Markdown
*WNVmos* data = mosquitos sent for testing, postive for WNV Exploration/cleaning
###Code
WNVmos.shape
WNVmos.Sample_ID.nunique()
#here, duplicates are the same or only differ by a couple of days
WNVmos[WNVmos.duplicated(['Sample_ID'],keep=False)]
WNVmos.drop_duplicates(subset ="Sample_ID", keep = 'first', inplace = True)
###Output
_____no_output_____
###Markdown
*EEEmos* data = mosquitos sent for testing, postive for EEE Exploration/cleaning
###Code
EEEmos.shape
EEEmos.Sample_ID.nunique() #no duplicates, hooray
###Output
_____no_output_____
###Markdown
Merge dataframes by Sample_ID. Final columns: Specimen_Type, Sample_ID, Species, Date, Town, State, County, Test_Type, Result, Submitted_for_Testing. 1. Make sure all of the columns I want are in each dataframe, cut/add additional columns 2. Merge by SampleID
###Code
list(NSmos.columns)
list(negmos.columns)
list(WNVmos.columns)
list(EEEmos.columns)
#rename date columns throughout to simply "Date"
NSmos2 = NSmos.rename(columns={"Received_Date": "Date"})
negmos2 = negmos.rename(columns={"Tested_Date": "Date"})
WNVmos2 = WNVmos.rename(columns={"Tested_Date": "Date"})
EEEmos2 = EEEmos.rename(columns={"Tested_Date": "Date"})
###Output
_____no_output_____
###Markdown
All dataframes have the same columns, except NSmos2 has an additional "Submitted_for_Testing" column, so we add that column to the tested dataframes below.
###Code
NSmos2.shape
negmos2['Submitted_for_Testing']='Yes'
EEEmos2['Submitted_for_Testing']='Yes'
WNVmos2['Submitted_for_Testing']='Yes'
negmos2.head(5)
WNVmos2.head(5)
EEEmos2.head(5)
###Output
_____no_output_____
###Markdown
Final mosquito data dataframe. Merge positive-tested, negative-tested, and untested mosquitos.
###Code
testdf = pd.concat([EEEmos2, WNVmos2, negmos2, NSmos2])
testdf.shape
#get number of duplicates
testdf[testdf.duplicated(['Sample_ID'],keep=False)].shape
#create new dataframe of only duplicated sample IDs and
test = testdf[testdf.duplicated(['Sample_ID'],keep=False)]
#other repeated values seem to be only
test.sort_values(by=['Sample_ID'])
testdf.shape
#161832+94409+2785+1274 = 260300
testdf['Date'] = pd.to_datetime(testdf['Date'])
testdf.head(5)
# calculate day of year and add as new column to dataframe
testdf['DOY'] = testdf['Date'].dt.dayofyear
testdf.head(5)
#iterate through the following lines of code (#-ing out all except one at a time to check that all columns have a species ID
testdf.Species.isna().sum() #0
#testdf[testdf['Species'] == '']
testdf.Species.nunique() #60
testdf.Species.unique()
###Output
_____no_output_____
###Markdown
Check for duplicates in new dataframe
###Code
#There are duplicates, but duplicated sample numbers seem to all be from different towns
testdf.Sample_ID.nunique()
#testdf[testdf.duplicated(['Sample_ID'],keep=False)]
testdf2 = testdf[testdf.duplicated(['Sample_ID'])]
testdf2.shape
260300-259291-1 #size of testdf - number of duplicates
testdf2.sort_values(by=['Sample_ID'])
testdf[testdf['Sample_ID'] == 'SL08-0010']
testdf2.Sample_ID.nunique()
testdf.describe() #Day of year ranging from day 90-311 (approx Mar 31/Apr 1 - Nov 7/8)
expl = testdf.hist(column='DOY', bins=20, grid=False, figsize=(12,8), color='#86bf91', zorder=2, rwidth=0.9)
expl = expl[0]
for x in expl:
# Despine
x.spines['right'].set_visible(False)
x.spines['top'].set_visible(False)
x.spines['left'].set_visible(False)
# Switch off ticks
x.tick_params(axis="both", which="both", bottom="off", top="off", labelbottom="on", left="off", right="off", labelleft="on")
# Draw horizontal axis lines
vals = x.get_yticks()
for tick in vals:
x.axhline(y=tick, linestyle='dashed', alpha=0.4, color='#eeeeee', zorder=1)
# Remove title
x.set_title("")
# Set x-axis label
x.set_xlabel("Day of Year", labelpad=20, weight='bold', size=12)
# Set y-axis label
x.set_ylabel("Frequency", labelpad=20, weight='bold', size=12)
testdf.sort_values('DOY')
#create a new column that only includes the year of capture
testdf['Year'] = testdf['Date'].dt.year
testdf.head(5)
testdf[(testdf['Town']=='Brewster') & (testdf['Year']==2017)]
#check:
#manually changed SampleID CCNS17-0073 from 1-15-2017 to 6-15-2017 in original file(s)
#write new dataframe to file for easier reading in later
testdf.to_csv("/Users/lbrown01/Dropbox/DataScienceStuff/Data/WNV_EE/formatMos.csv", encoding='utf-8', index=False)
###Output
_____no_output_____ |
Code/Celltracker.ipynb | ###Markdown
Cell Tracking Program The purpose of this project is to develop an algorithm for HeLa cell cycle analysis based on cell segmentation and cell tracking. Our segmentation algorithm includes binarization, nuclei center detection and nuclei boundary delineation; our tracking algorithm includes neighboring graph construction, optimal matching, detection and processing of cell division, death and segmentation errors, and refinement of the segmentation and matching results. Our chosen testing and training datasets are Histone 2B (H2B)-GFP expressing HeLa cells provided by the MitoCheck Consortium. This project used the Jaccard index to measure segmentation accuracy and the TRA method for tracking accuracy. Our results, respectively 69.51% and 74.61%, demonstrate the validity of the developed algorithm for investigating the cancer cell cycle; problems and further improvements of our algorithm are also discussed. 0. Prepare Import File and Import Image Set
###Code
%matplotlib inline
import os
import cv2
import PIL.Image
import sys
import numpy as np
from IPython.display import Image, display, clear_output
import matplotlib.pyplot as plt
import scipy
def normalize(image):
'''
    This function normalizes the input grayscale image to the range
    [0, 255] (min-max normalization, which is what cv2.NORM_MINMAX
    below performs) for visualization.
    Input: a grayscale image
    Output: normalized grayscale image
'''
cv2.normalize(image, image, 0, 255, cv2.NORM_MINMAX)
return image
# read image sequence
path = "PATH_TO_IMAGES" # The dataset can be downloaded from: http://www.codesolorzano.com/Challenges/CTC/Datasets.html
for r,d,f in os.walk(path):
images = []
enhance_images = []
f = sorted(f)
for files in f:
if files[-3:].lower()=='tif':
temp = cv2.imread(os.path.join(r,files))
gray = cv2.cvtColor(temp, cv2.COLOR_BGR2GRAY)
images.append(gray.copy())
enhance_images.append(normalize(gray.copy()))
print "Total number of image is ", len(images)
print "The shape of image is ", images[0].shape, type(images[0][0,0])
# Helper functions
def display_image(img):
assert img.ndim == 2 or img.ndim == 3
h, w = img.shape[:2]
    # cv2.resize takes dsize as a (width, height) pair for both grayscale and color images
    img = cv2.resize(img, (w/3, h/3))
cv2.imwrite("temp_img.png", img)
img = Image("temp_img.png")
display(img)
def vis_square(data, title=None):
"""
Take an array of shape (n, height, width) or (n, height, width, 3)
and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)
"""
# resize image into small size
_, h, w = data.shape[:3]
width = int(np.ceil(1200. / np.sqrt(data.shape[0]))) # the width of showing image
height = int(np.ceil(h*float(width)/float(w))) # the height of showing image
if len(data.shape) == 4:
temp = np.zeros((data.shape[0], height, width, 3))
else:
temp = np.zeros((data.shape[0], height, width))
for i in range(data.shape[0]):
        # dsize is a (width, height) pair; color channels are preserved automatically
        temp[i] = cv2.resize(data[i], (width, height))
data = temp
# force the number of filters to be square
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = (((0, n ** 2 - data.shape[0]),
(0, 2), (0, 2)) # add some space between filters
+ ((0, 0),) * (data.ndim - 3)) # don't pad the last dimension (if there is one)
data = np.pad(data, padding, mode='constant', constant_values=255) # pad with ones (white)
# tile the filters into an image
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
# show image
cv2.imwrite("temp_img.png", data)
img = Image("temp_img.png")
display(img)
def cvt_npimg(images):
"""
Convert image sequence to numpy array
"""
h, w = images[0].shape[:2]
if len(images[0].shape) == 3:
out = np.zeros((len(images), h, w, 3))
else:
out = np.zeros((len(images), h, w))
for i, img in enumerate(images):
out[i] = img
return out
# Write image from different input
def write_mask16(images, name, index=-1):
"""
Write image as 16 bits image
"""
if index == -1:
for i, img in enumerate(images):
if i < 10:
cv2.imwrite(name+"00"+str(i)+".tif", img.astype(np.uint16))
elif i >= 10 and i < 100:
cv2.imwrite(name+"0"+str(i)+".tif", img.astype(np.uint16))
else:
cv2.imwrite(name+str(i)+".tif", img.astype(np.uint16))
else:
if index < 10:
cv2.imwrite(name+"00"+str(index)+".tif", images.astype(np.uint16))
elif index >= 10 and index < 100:
cv2.imwrite(name+"0"+str(index)+".tif", images.astype(np.uint16))
else:
cv2.imwrite(name+str(index)+".tif", images.astype(np.uint16))
def write_mask8(images, name, index=-1):
"""
Write image as 8 bits image
"""
if index == -1:
for i, img in enumerate(images):
if i < 10:
cv2.imwrite(name+"00"+str(i)+".tif", img.astype(np.uint8))
elif i >= 10 and i < 100:
cv2.imwrite(name+"0"+str(i)+".tif", img.astype(np.uint8))
else:
cv2.imwrite(name+str(i)+".tif", img.astype(np.uint8))
else:
if index < 10:
cv2.imwrite(name+"000"+str(index)+".tif", images.astype(np.uint8))
elif index >= 10 and index < 100:
cv2.imwrite(name+"00"+str(index)+".tif", images.astype(np.uint8))
elif index >= 100 and index < 1000:
cv2.imwrite(name+"0"+str(index)+".tif", images.astype(np.uint8))
elif index >= 1000 and index < 10000:
cv2.imwrite(name+str(index)+".tif", images.astype(np.uint8))
else:
raise
def write_pair8(images, name, index=-1):
"""
Write image as 8 bits image with dilation
"""
for i, img in enumerate(images):
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
img = cv2.dilate((img*255).astype(np.uint8),kernel,iterations = 3)
if i < 10:
cv2.imwrite(name+"00"+str(i)+".tif", img)
elif i >= 10 and i < 100:
cv2.imwrite(name+"0"+str(i)+".tif", img)
else:
cv2.imwrite(name+str(i)+".tif", img)
###Output
_____no_output_____
###Markdown
1. Cell Segmentation Part 1. Adaptive Thresholding This file computes the thresholding of the image sequence in order to generate binary images for nuclei segmentation. Problem: due to the low contrast of the original images, adaptive thresholding does not work well here. Therefore, we changed to a regular global threshold (the code below applies a value of 50).
###Code
th = None
img = None
class ADPTIVETHRESH():
'''
    This class provides all functions for thresholding the image sequence.
'''
def __init__(self, images):
self.images = []
for img in images:
if len(img.shape) == 3:
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
self.images.append(img.copy())
def applythresh(self, threshold = 50):
'''
applythresh function is to convert original image to binary image by thresholding.
Input: image sequence. E.g. [image0, image1, ...]
Output: image sequence after thresholding. E.g. [image0, image1, ...]
'''
out = []
markers = []
binarymark = []
for img in self.images:
img = cv2.GaussianBlur(img,(5,5),0).astype(np.uint8)
_, thresh = cv2.threshold(img,threshold,1,cv2.THRESH_BINARY)
            # Use morphological operations to improve the quality of the result
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(9,9))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
out.append(thresh)
return out
# This part is for testing adaptivethresh.py with a single image.
# Input: an original image
# Output: the thresholded image
global th
global img
adaptive = ADPTIVETHRESH(enhance_images)
th = adaptive.applythresh(50)
# display images
for i,img in enumerate(th):
th[i] = img*255
os.chdir(".")
write_mask8(th, "thresh")
out = cvt_npimg(th)
vis_square(out)
###Output
_____no_output_____
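###Markdown
For reference, this is what true per-pixel adaptive thresholding looks like in OpenCV; on this low-contrast data it performed worse than the fixed global threshold above, which is why the class falls back to `cv2.threshold`. A hedged sketch (the block size and offset are illustrative, not tuned values from this project):
###Code
# Illustrative only: threshold each pixel against a local Gaussian-weighted mean
blur = cv2.GaussianBlur(enhance_images[0].astype(np.uint8), (5, 5), 0)
adaptive_mask = cv2.adaptiveThreshold(blur, 255,
                                      cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                      cv2.THRESH_BINARY, 51, -5)
display_image(adaptive_mask)
###Output
_____no_output_____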
###Markdown
2. Gradient Vector Field This file computes the gradient vector field (GVF) and then finds the nuclei centers from the GVF result. (This part is optional; I recommend using the distance map directly.)
###Code
from scipy import spatial as sp
from scipy import ndimage
from scipy.spatial import distance
looplimit = 500
newimg = None
pair = None
def inbounds(shape, indices):
assert len(shape) == len(indices)
for i, ind in enumerate(indices):
if ind < 0 or ind >= shape[i]:
return False
return True
class GVF():
'''
This class contains all function for calculating GVF and its following steps.
'''
def __init__(self, images, thresh):
self.images = images
self.thresh = thresh
def distancemap(self):
'''
This function is to generate distance map of the thresh image. We use the opencv
        function distanceTransform to generate it. Moreover, in this case, we use Euclidean
        distance (DIST_L2) as the distance metric.
Input: None
Output: Image distance map
'''
return [cv2.distanceTransform(self.thresh[i], distanceType=2, maskSize=0)\
for i in range(len(self.thresh))]
def new_image(self, alpha, dismap):
'''
        This function is to generate a new image combining the original image I0 with
the distance map image Idis by following expression:
Inew = I0 + alpha*Idis
In this program, we choose alpha as 0.4.
Input: the weight of distance map: alpha
the distance map image
Output: new grayscale image
'''
return [self.images[i] + alpha * dismap[i] for i in range(len(self.thresh))]
def compute_gvf(self, newimage):
'''
        This function is to compute the gradient vector of the input image.
Input: a grayscale image with size, say m * n * # of images
        Output: a 3-dimensional image of size m * n * 2, where the last dimension is
the gradient vector (gx, gy)
'''
kernel_size = 5 # kernel size for blur image before compute gradient
newimage = [cv2.GaussianBlur((np.clip(newimage[i], 0, 255)).astype(np.uint8),(kernel_size,kernel_size),0)\
for i in range(len(self.thresh))]
        # use sobel operator to compute gradient
        gradimg = [] # output gradient images (height * width * # of images)
        for i in range(len(newimage)):
            # compute sobel operation in x, y directions
            gradx = cv2.Sobel(newimage[i],cv2.CV_64F,1,0,ksize=3)
            grady = cv2.Sobel(newimage[i],cv2.CV_64F,0,1,ksize=3)
            # allocate a fresh gradient array for this frame so the appended
            # images do not all alias the same buffer
            temp = np.zeros((newimage[i].shape[0], newimage[i].shape[1], 2), np.float32)
            temp[:,:,0], temp[:,:,1] = gradx, grady
            gradimg.append(temp)
return gradimg
    def find_center(self, gvfimage, index):
        '''
        This function is to find the center of nuclei.
        Input: the gradient vector image (height * width * 2).
        Output: the record image (height * width).
'''
# Initialize a image to record seed candidates.
imgpair = np.zeros(gvfimage.shape[:2])
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
dilate = cv2.dilate(self.thresh[index].copy(), kernel, iterations = 1)
erthresh = cv2.erode(dilate, kernel, iterations = 3)
while erthresh.sum() > 0:
print "Image ", index, "left: ", erthresh.sum(), "points"
            # Initialize particle coordinates [y, x]
y0, x0 = np.where(erthresh>0)
p0 = np.array([y0[0], x0[0], 1])
            # Initialize record coordinates [y, x]
p1 = np.array([5000, 5000, 1])
# mark the first non-zero point of thresh image to 0
erthresh[p0[0], p0[1]] = 0
# a variable to record if the point out of bound of image or
# out of maximum loop times
outbound = False
# count loop times to limit max loop times
count = 0
while sp.distance.cdist([p0],[p1]) > 1:
count += 1
p1 = p0
u = gvfimage[p0[0], p0[1], 1]
v = gvfimage[p0[0], p0[1], 0]
M = np.array([[1, 0, u],\
[0, 1, v],\
[0, 0, 1]], np.float32)
p0 = M.dot(p0)
if not inbounds(self.thresh[index].shape, (p0[0], p0[1])) or count > looplimit:
outbound = True
break
if not outbound:
imgpair[p0[0], p0[1]] += 1
clear_output(wait=True)
return imgpair.copy()
# This part is for testing gvf.py with single image. (Optional)
# Input: an original image
# Output: Thresholding image and seed image
global th
global newimg
global pair
# Nuclei center detection
gvf = GVF(images, th)
dismap = gvf.distancemap()
newimg = gvf.new_image(0.4, dismap) # choose alpha as 0.4.
gradimg = gvf.compute_gvf(newimg)
out = []
pair = []
pair_raw = []
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
i = 0
for i,img in enumerate(gradimg):
    imgpair_raw = gvf.find_center(img, i)
pair_raw.append(imgpair_raw)
neighborhood_size = 20
data_max = ndimage.filters.maximum_filter(pair_raw[i], neighborhood_size)
data_max[data_max==0] = 255
pair.append((pair_raw[i] == data_max).astype(np.uint8))
write_mask8([pair[i]], "pair_raw", i)
os.chdir("PATH_TO_RESULTS")
y, x = np.where(pair[i]>0)
    points = list(zip(y, x))  # list() so the points can be indexed below
dmap = distance.cdist(points, points, 'euclidean')
y, x = np.where(dmap<10)
ps = zip(y[:], x[:])
for p in ps:
if p[0] != p[1]:
pair[i][points[min(p[0], p[1])]] = 0
dilation = cv2.dilate((pair[i]*255).astype(np.uint8),kernel,iterations = 3)
out.append(dilation)
out = cvt_npimg(out)
vis_square(out)
###Output
_____no_output_____
###Markdown
GVF enhance* This file is to amend the seed points for watershed. (This part is optional and I recommend using the distance map directly.)
###Code
from scipy import spatial as sp
from scipy import ndimage
from scipy.spatial import distance
gvf = GVF(images, th)
dismap = gvf.distancemap()
newimg = gvf.new_image(0.4, dismap) # choose alpha as 0.4.
# TODO this part is designed to amend the result of gvf.
pair = []
path=os.path.join("PATH_TO_RESULTS")
for r,d,f in os.walk(path):
for files in f:
if files[:5].lower()=='seed':
            print(files)
temp = cv2.imread(os.path.join(r,files))
temp = cv2.cvtColor(temp, cv2.COLOR_BGR2GRAY)
y, x = np.where(temp>0)
            points = list(zip(y, x))  # list() so the points can be indexed below
dmap = distance.cdist(points, points, 'euclidean')
y, x = np.where(dmap<10)
ps = zip(y[:], x[:])
for p in ps:
if p[0] != p[1]:
temp[points[min(p[0], p[1])]] = 0
pair.append(temp)
clear_output(wait=True)
print "finish!"
###Output
_____no_output_____
###Markdown
2. Distance Map (Recommended) This file uses the distance map to generate the seed points for watershed. Although it has nothing to do with GVF, you still need to load the GVF class, since it needs some helper functions in the class.
###Code
gvf = GVF(images, th)
dismap = gvf.distancemap()
newimg = gvf.new_image(0.4, dismap) # choose alpha as 0.4.
out = []
pair = []
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
for i,img in enumerate(dismap):
neighborhood_size = 20
data_max = ndimage.filters.maximum_filter(img, neighborhood_size)
data_max[data_max==0] = 255
pair.append((img == data_max).astype(np.uint8))
y, x = np.where(pair[i]>0)
    points = list(zip(y, x))  # list() so the points can be indexed below
dmap = distance.cdist(points, points, 'euclidean')
y, x = np.where(dmap<20)
ps = zip(y[:], x[:])
for p in ps:
if p[0] != p[1]:
pair[i][points[min(p[0], p[1])]] = 0
dilation = cv2.dilate((pair[i]*255).astype(np.uint8),kernel,iterations = 1)
out.append(dilation)
os.chdir(".")
write_mask8(dilation, "seed_point", i)
out = cvt_npimg(out)
vis_square(out)
###Output
_____no_output_____
###Markdown
3. Watershed This file is to compute watershed given the seed image in the gvf.py.
###Code
import cv2
import numpy as np
from numpy import unique
import copy as cp
bmarks = None
marks = None
class WATERSHED():
'''
This class contains all the function to compute watershed.
'''
def __init__(self, images, markers):
self.images = images
self.markers = markers
    def is_over_long(self, img, max_length=50):
rows = np.any(img, axis=1)
cols = np.any(img, axis=0)
if not len(img[img>0]):
return True
rmin, rmax = np.where(rows)[0][[0, -1]]
cmin, cmax = np.where(cols)[0][[0, -1]]
        if (rmax-rmin)>max_length or (cmax-cmin)>max_length:
return True
else:
return False
def watershed_compute(self):
'''
This function is to compute watershed given the newimage and the seed image
(center candidates). In this function, we use cv2.watershed to implement watershed.
        Input: newimage (height * width * # of images)
        Output: watershed images (height * width * # of images)
'''
result = []
outmark = []
outbinary = []
for i in range(len(self.images)):
print "image: ", i
# generate a 3-channel image in order to use cv2.watershed
imgcolor = np.zeros((self.images[i].shape[0], self.images[i].shape[1], 3), np.uint8)
for c in range(3):
imgcolor[:,:,c] = self.images[i]
# compute marker image (labelling)
if len(self.markers[i].shape) == 3:
self.markers[i] = cv2.cvtColor(self.markers[i],cv2.COLOR_BGR2GRAY)
_, mark = cv2.connectedComponents(self.markers[i])
# watershed!
mark = cv2.watershed(imgcolor,mark)
u, counts = unique(mark, return_counts=True)
counter = dict(zip(u, counts))
for index in counter:
temp_img = np.zeros_like(mark)
temp_img[mark==index] = 255
if self.is_over_long(temp_img):
mark[mark==index] = 0
continue
if counter[index] > 3000:
mark[mark==index] = 0
continue
labels = list(set(mark[mark>0]))
length = len(labels)
temp_img = mark.copy()
for original, new in zip(labels, range(1,length+1)):
temp_img[mark==original] = new
mark = temp_img
# mark image and add to the result
temp = cv2.cvtColor(imgcolor,cv2.COLOR_BGR2GRAY)
result.append(temp)
outmark.append(mark.astype(np.uint8))
binary = mark.copy()
binary[mark>0] = 255
outbinary.append(binary.astype(np.uint8))
clear_output(wait=True)
return result, outbinary, outmark
# This part is for testing watershed.py with single image.
# Output: Binary image after watershed
global bmarks
global marks
# watershed
ws = WATERSHED(newimg, pair)
wsimage, bmarks, marks = ws.watershed_compute()
out = cvt_npimg(np.clip(bmarks, 0, 255)).astype(np.uint8)
vis_square(out)
os.chdir("PATH_TO_RESULT_MASK")
write_mask16(marks, "mask")
os.chdir("PATH_TO_RESULT_BINARY")
write_mask8(out, "binary")
clear_output(wait=True)
###Output
_____no_output_____
###Markdown
4. Segmentation Evaluation This file is to evaluate our segmentation algorithm using the Jaccard coefficient.
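The Jaccard coefficient of a ground-truth region $A$ and a segmented region $B$ is $$J(A,B) = \frac{|A \cap B|}{|A \cup B|},$$ and a ground-truth/segment pair is counted as a match when $J > 0.5$.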
###Code
def list2pts(ptslist):
list_y = np.array([ptslist[0]])
list_x = np.array([ptslist[1]])
return np.append(list_y, list_x).reshape(2, len(list_y[0])).T
def unique_rows(a):
a = np.ascontiguousarray(a)
unique_a = np.unique(a.view([('', a.dtype)]*a.shape[1]))
return unique_a.view(a.dtype).reshape((unique_a.shape[0], a.shape[1]))
# read image sequence
# The training set locates at "resource/training/01" and "resource/training/02"
# The ground truth of training set locates at "resource/training/GT_01" and
# "resource/training/GT_02"
# The testing set locates at "resource/testing/01" and "resource/testing/02"
path = "PATH_TO_GT_SEGMENTATION"
gts = []
for r,d,f in os.walk(path):
for files in f:
if files[-3:].lower()=='tif':
temp = cv2.imread(os.path.join(r,files), cv2.IMREAD_UNCHANGED)
gts.append([temp, files[-6:-4]])
print "number of gts: ", len(gts)
path= "PATH_TO_SEGMENTATION_RESULTS"
binarymarks = []
for r,d,f in os.walk(path):
for files in f:
if files[:4]=='mark':
temp = cv2.imread(os.path.join(r,files))
gray = cv2.cvtColor(temp, cv2.COLOR_BGR2GRAY)
binarymarks.append([gray, files[-6:-4]])
print "number of segmentation image: ", len(binarymarks)
jaccards = []
for gt in gts:
for binarymark in binarymarks:
if gt[1] == binarymark[1]:
print "enter...", gt[1]
list_pts = set(gt[0][gt[0]>0])
list_seg = set(binarymark[0][binarymark[0]>0])
for pt in list_pts:
for seg in list_seg:
pts_gt = np.where(gt[0]==pt)
pts_seg = np.where(binarymark[0]==seg)
pts_gt = list2pts(pts_gt)
pts_seg = list2pts(pts_seg)
pts = np.append(pts_gt, pts_seg).reshape(len(pts_gt)+len(pts_seg),2)
union_pts = unique_rows(pts)
union = float(len(union_pts))
intersection = float(len(pts_seg) + len(pts_gt) - len(union_pts))
if intersection/union > 0.5:
jaccards.append(intersection/union)
clear_output(wait=True)
jaccard = float(sum(jaccards))/float(len(jaccards))
print "jaccard: ", jaccard, "number of Nuclei: ", len(jaccards)
###Output
_____no_output_____
###Markdown
2. Cell Tracking Part 1. Graph Construction This file is to construct a neighboring graph using Delaunay triangulation.
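For each triangulation edge between points $p_0 = (y_0, x_0)$ and $p_1 = (y_1, x_1)$, the graph stores $$\text{length} = \sqrt{(y_1-y_0)^2 + (x_1-x_0)^2}, \qquad \text{slope} = \frac{y_1-y_0}{x_1-x_0}.$$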
###Code
centroid = None
slope_length = None
class GRAPH():
'''
This class contains all the functions needed to compute
    Delaunay triangulation.
'''
def __init__(self, mark, binary, index):
'''
Input: the grayscale mark image with different label on each segments
the binary image of the mark image
the index of the image
'''
self.mark = mark[index]
self.binary = binary[index]
def rect_contains(self, rect, point):
'''
Check if a point is inside the image
Input: the size of the image
the point that want to test
Output: if the point is inside the image
'''
if point[0] < rect[0] :
return False
elif point[1] < rect[1] :
return False
elif point[0] > rect[2] :
return False
elif point[1] > rect[3] :
return False
return True
def draw_point(self, img, p, color ):
'''
Draw a point
'''
cv2.circle( img, (p[1], p[0]), 2, color, cv2.FILLED, 16, 0 )
def draw_delaunay(self, img, subdiv, delaunay_color):
'''
Draw delaunay triangles and store these lines
Input: the image want to draw
the set of points: format as cv2.Subdiv2D
the color want to use
        Output: the slope and length of each line
'''
        triangleList = subdiv.getTriangleList()
size = img.shape
r = (0, 0, size[0], size[1])
slope_length = [[]]
for i in range(self.mark.max()-1):
slope_length.append([])
for t_i, t in enumerate(triangleList):
pt1 = (int(t[0]), int(t[1]))
pt2 = (int(t[2]), int(t[3]))
pt3 = (int(t[4]), int(t[5]))
if self.rect_contains(r, pt1) and self.rect_contains(r, pt2) and self.rect_contains(r, pt3):
# draw lines
cv2.line(img, (pt1[1], pt1[0]), (pt2[1], pt2[0]), delaunay_color, 1, 16, 0)
cv2.line(img, (pt2[1], pt2[0]), (pt3[1], pt3[0]), delaunay_color, 1, 16, 0)
cv2.line(img, (pt3[1], pt3[0]), (pt1[1], pt1[0]), delaunay_color, 1, 16, 0)
# store the length of line segments and their slopes
for p0 in [pt1, pt2, pt3]:
for p1 in [pt1, pt2, pt3]:
if p0 != p1:
temp = self.length_slope(p0, p1)
if temp not in slope_length[self.mark[p0]-1]:
slope_length[self.mark[p0]-1].append(temp)
return slope_length
def length_slope(self, p0, p1):
'''
This function is to compute the length and theta for the given two points.
Input: two points with the format (y, x)
'''
if p1[1]-p0[1]:
slope = (p1[0]-p0[0]) / (p1[1]-p0[1])
else:
slope = 1e10
length = np.sqrt((p1[0]-p0[0])**2 + (p1[1]-p0[1])**2)
return length, slope
def generate_points(self):
'''
Find the centroid of each segmentation
'''
centroids = []
label = []
max_label = self.mark.max()
for i in range(1, max_label+1):
img = self.mark.copy()
img[img!=i] = 0
if img.sum():
_, contours,hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS)
m = cv2.moments(contours[0])
if m['m00']:
label.append(i)
centroids.append(( int(round(m['m01']/m['m00'])),\
int(round(m['m10']/m['m00'])) ))
else:
label.append(i)
centroids.append(( 0,0 ))
return centroids, label
def run(self, animate = False):
'''
The pipline of graph construction.
Input: if showing a animation (False for default)
Output: centroids: # of segments * 2 (y, x)
slopes and length: # of segments * # of slope_length
'''
# Read in the image.
img_orig = self.binary.copy()
# Rectangle to be used with Subdiv2D
size = img_orig.shape
rect = (0, 0, size[0], size[1])
# Create an instance of Subdiv2D
        subdiv = cv2.Subdiv2D(rect)
# find the centroid of each segments
points, label = self.generate_points()
# add and sort the centroid to a numpy array for post processing
centroid = np.zeros((self.mark.max(), 2))
for p, l in zip(points, label):
centroid[l-1] = p
outimg = []
# Insert points into subdiv
for idx_p, p in enumerate(points):
subdiv.insert(p)
# Show animation
if animate:
img_copy = img_orig.copy()
# Draw delaunay triangles
                self.draw_delaunay(img_copy, subdiv, (255, 255, 255))
outimg.append(img_copy)
display_image(img_copy)
img_copy = cv2.resize(img_copy, (314, 200))
cv2.imwrite("delaunay_" + str(idx_p).zfill(3) + ".png", img_copy)
clear_output(wait=True)
# Draw delaunay triangles
        slope_length = self.draw_delaunay(img_orig, subdiv, (255, 255, 255))
# Draw points
for p in points :
self.draw_point(img_orig, p, (0,0,255))
# show images
if animate:
display_image(img_orig)
print "length of centroid: ", len(centroid)
return centroid, slope_length
# This part is the small test for graph_construction.py.
# Input: grayscale marker image
# binary marker image
# Output: a text file includes the centroid and the length and slope for each neighbor.
# Build Delaunay Triangulation
global centroid
global slope_length
centroid = []
slope_length = []
for i in range(len(images)):
print " graph_construction: image ", i
print "max pixel: ", marks[i].max()
graph = GRAPH(marks, bmarks, i)
if i == 0:
tempcentroid, tempslope_length = graph.run(True)
else:
tempcentroid, tempslope_length = graph.run()
centroid.append(tempcentroid)
slope_length.append(tempslope_length)
clear_output(wait=True)
print "finish!"
###Output
_____no_output_____
###Markdown
2. Matching This file is to match nuclei in two consecutive frames by Phase Controlled Optimal Matching. It includes two parts: 1) Dissimilarity measure 2) Matching
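The dissimilarity implemented below combines three terms, $$d\big(v(k,i),\, v(k+1,j)\big) = \alpha_1 d_c + \alpha_2\, q\, d_{efd} + \alpha_3\, q\, d_{cm},$$ where $d_c$ is the normalized centroid distance, $d_{efd}$ compares elliptic Fourier descriptors, $d_{cm}$ compares co-occurrence statistics, and $q$ is the phase-identification gate.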
###Code
import imageio
from pyefd import elliptic_fourier_descriptors
Max_dis = 100000
def write_image(image, title, index, imgformat='.tif'):
    # zero-pad the frame number to three digits
    cv2.imwrite(title + str(index).zfill(3) + imgformat, image.astype(np.uint16))
class FEAVECTOR():
'''
This class builds a feature vector for each segments.
The format of each vector is:
v(k,i) = [c(k,i), s(k, i), h(k, i), e(k, i)], where k is the
index of the image (frame) and i is the label of each segment.
c(k,i): the centroid of each segment (y, x);
s(k,i): the binary shape of each segment;
h(k,i): the intensity distribution (hsitogram) of the segment;
e(k,i): the spatial distribution of the segment. Its format is
like (l(k, i, p), theta(k, i, p)), where p represent different
line connected with different segment.
'''
def __init__(self, centroid=None, shape=None, histogram=None, spatial=None, \
ID=None, start = None, end=None, label=None, ratio=None, area=None, cooc=None):
self.c = centroid
self.s = shape
self.h = histogram
self.e = spatial
self.id = ID
self.start = start
self.end = end
self.l = label
self.a = area
self.r = ratio
self.cm = cooc
def add_id(self, num, index):
'''
This function adds cell id for each cell.
'''
if index == 0:
self.id = np.linspace(1, num, num)
else:
            self.id = np.linspace(-1, -1, num)
def add_label(self):
'''
This function is to add labels for each neclei for post process.
'''
self.l = np.linspace(0, 0, len(self.c))
def set_centroid(self, centroid):
'''
        This function sets the centroid for all nuclei.
        Input: the set of centroids: # of images * # of nuclei * 2 (y, x)
Output: None
'''
self.c = centroid
def set_spatial(self, spatial):
'''
        This function sets the spatial distribution for all nuclei.
        Input: the set of spatial features: # of images * # of nuclei * # of line segments (length, slope)
Output: None
'''
self.e = spatial
def set_shape(self, image, marker):
'''
        This function sets the binary shape for all nuclei.
        Input: the original images: # of images * height * width
               the labeled images: # of images * nucleus height * nucleus width
Output: None
'''
def boundingbox(image):
y, x = np.where(image)
return min(x), min(y), max(x), max(y)
shape = []
for label in range(1, marker.max()+1):
tempimg = marker.copy()
tempimg[tempimg!=label] = 0
tempimg[tempimg==label] = 1
if tempimg.sum():
minx, miny, maxx, maxy = boundingbox(tempimg)
shape.append((tempimg[miny:maxy+1, minx:maxx+1], image[miny:maxy+1, minx:maxx+1]))
else:
shape.append(([], []))
self.s = shape
def set_histogram(self):
'''
        Note: this function must be implemented after set_shape().
'''
def computehistogram(image):
h, w = image.shape[:2]
his = np.zeros((256,1))
for y in range(h):
for x in range(w):
his[image[y, x], 0] += 1
return his
        assert self.s is not None, "this function must be implemented after set_shape()."
his = []
for j in range(len(self.s)):
img = self.s[j][1]
if len(img):
temphis = computehistogram(img)
his.append(temphis)
else:
his.append(np.zeros((256,1)))
self.h = his
def add_efd(self):
coeffs = []
for i in range(len(self.s)):
try:
_, contours, hierarchy = cv2.findContours(self.s[i][0].astype(np.uint8), 1, 2)
if not len(contours):
coeffs.append(0)
continue
cnt = contours[0]
if len(cnt) >= 5:
                    contour = []
                    # use a fresh loop variable; the original shadowed the outer i
                    for k in range(len(contours[0])):
                        contour.append(contours[0][k][0])
coeffs.append(elliptic_fourier_descriptors(contour, order=10, normalize=False))
else:
coeffs.append(0)
except AttributeError:
coeffs.append(0)
self.r = coeffs
def add_co_occurrence(self, level=10):
'''
        This function is to generate a co-occurrence matrix for each cell. The structure of
output coefficients is:
[Entropy, Energy, Contrast, Homogeneity]
'''
# generate P metrix.
self.cm = []
for j in range(len(self.s)):
if not len(self.s[j][1]):
p_0 = np.zeros((level,level))
p_45 = np.zeros((level,level))
p_90 = np.zeros((level,level))
p_135 = np.zeros((level,level))
self.cm.append([np.array([0, 0, 0, 0]),[p_0, p_45, p_90, p_135]])
continue
max_p, min_p = np.max(self.s[j][1]), np.min(self.s[j][1])
range_p = max_p - min_p
            img = np.round((np.asarray(self.s[j][1]).astype(np.float32)-min_p)/range_p*level).astype(int)  # integer levels so they can index the P matrices
h, w = img.shape[:2]
p_0 = np.zeros((level,level))
p_45 = np.zeros((level,level))
p_90 = np.zeros((level,level))
p_135 = np.zeros((level,level))
for y in range(h):
for x in range(w):
try:
p_0[img[y,x],img[y,x+1]] += 1
except IndexError:
pass
try:
p_0[img[y,x],img[y,x-1]] += 1
except IndexError:
pass
try:
p_90[img[y,x],img[y+1,x]] += 1
except IndexError:
pass
try:
p_90[img[y,x],img[y-1,x]] += 1
except IndexError:
pass
try:
p_45[img[y,x],img[y+1,x+1]] += 1
except IndexError:
pass
try:
p_45[img[y,x],img[y-1,x-1]] += 1
except IndexError:
pass
try:
p_135[img[y,x],img[y+1,x-1]] += 1
except IndexError:
pass
try:
p_135[img[y,x],img[y-1,x+1]] += 1
except IndexError:
pass
Entropy, Energy, Contrast, Homogeneity = 0, 0, 0, 0
            for y in range(level):
                for x in range(level):
if 0 not in [p_0[y,x], p_45[y,x], p_90[y,x], p_135[y,x]]:
Entropy -= (p_0[y,x]*np.log2(p_0[y,x])+\
p_45[y,x]*np.log2(p_45[y,x])+\
p_90[y,x]*np.log2(p_90[y,x])+\
p_135[y,x]*np.log2(p_135[y,x]))/4
else:
temp = 0
for p in [p_0[y,x], p_45[y,x], p_90[y,x], p_135[y,x]]:
if p != 0:
temp += p*np.log2(p)
Entropy -= temp/4
Energy += (p_0[y,x]**2+\
p_45[y,x]**2+\
p_90[y,x]**2+\
p_135[y,x]**2)/4
Contrast += (x-y)**2*(p_0[y,x]+\
p_45[y,x]+\
p_90[y,x]+\
p_135[y,x])/4
Homogeneity += (p_0[y,x]+\
p_45[y,x]+\
p_90[y,x]+\
p_135[y,x])/(4*(1+abs(x-y)))
self.cm.append([np.array([Entropy, Energy, Contrast, Homogeneity]),[p_0, p_45, p_90, p_135]])
def add_area(self):
area = []
for i in range(len(self.s)):
area.append(np.count_nonzero(self.s[i][0]))
self.a = area
def generate_vector(self):
'''
This function is to convert the vector maxtrics into a list.
Output: a list of vector: [v0, v1, ....]
'''
vector = []
for i in range(len(self.c)):
vector.append(FEAVECTOR(centroid=self.c[i],shape=self.s[i],\
histogram=self.h[i],spatial=self.e[i],\
ID=self.id[i],label=self.l[i],\
ratio=self.r[i],area=self.a[i], cooc=self.cm[i]))
return vector
def set_date(vectors):
'''
This function is to add the start and end frame of each vector and
combine the vector with same id.
Input: the list of vectors in different frames.
Output: the list of vectors of all cell with different id.
'''
max_id = 0
for vector in vectors:
for pv in vector:
if pv.id > max_id:
max_id = pv.id
    max_id = int(max_id)  # ids come from np.linspace as floats; cast for array sizes
    output = np.zeros((max_id, 4))
    output[:,0] = np.linspace(1, max_id, max_id) # set the cell ID
output[:,1] = len(vectors)
for frame, vector in enumerate(vectors):
        for pv in vector:
            pid = int(pv.id)
            if output[pid-1][1] > frame: # set the start frame
                output[pid-1][1] = frame
            if output[pid-1][2] < frame: # set the end frame
                output[pid-1][2] = frame
            output[pid-1][3] = pv.l # set the cell parent ID
return output
def write_info(vector, name):
'''
This function is to write info. of each vector.
Input: the list of vector generated by set_date() and
the name of output file.
'''
with open(name+".txt", "w+") as file:
for p in vector:
file.write(str(int(p[0]))+" "+\
str(int(p[1]))+" "+\
str(int(p[2]))+" "+\
str(int(p[3]))+"\n")
# This part is to test the matching scheme with single image
# Input: the original image;
# the labeled image;
# the binary labeled image.
vector = None
# Feature vector construction
global centroid
global slope_length
global vector
vector = []
max_id = 0
for i in range(len(images)):
print " feature vector: image ", i
v = FEAVECTOR()
v.set_centroid(centroid[i])
v.set_spatial(slope_length[i])
v.set_shape(enhance_images[i], marks[i])
v.set_histogram()
v.add_label()
v.add_id(marks[i].max(), i)
v.add_efd()
v.add_area()
v.add_co_occurrence()
vector.append(v.generate_vector())
print "num of nuclei: ", len(vector[i])
clear_output(wait=True)
print "finish"
###Output
_____no_output_____
###Markdown
This part is to get the sub-image for each cell and save as file.
###Code
image_size = 70
counter = 0
for i, vt in enumerate(vector):
print "Image: ", i
for v in vt:
h, w = v.s[1].shape[:2]
        extend_x = (image_size - w) // 2  # integer division: copyMakeBorder needs ints
        extend_y = (image_size - h) // 2
temp = cv2.copyMakeBorder(v.s[1], \
extend_y, (image_size-extend_y-h), \
extend_x, (image_size-extend_x-w), \
cv2.BORDER_CONSTANT, value=0)
write_mask8(temp, "cell_image"+str(i)+"_", counter)
counter += 1
clear_output(wait=True)
print "finish!"
###Output
_____no_output_____
###Markdown
This part uses the ratio of the two axes of inertia as a mitosis refinement to match cells.
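A cell is flagged as mitotic when the fitted ellipse is strongly elongated, i.e. when the major/minor axis ratio satisfies $$\mathrm{MA}/\mathrm{ma} > k$$ (the calls below use $k = 3$).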
###Code
class SIMPLE_MATCH():
'''
This class is simple matching a nucleus into a nucleus in the previous frame by
find the nearest neighborhood.
'''
def __init__(self, index0, index1, images, vectors):
self.v0 = cp.copy(vectors[index0])
self.v1 = cp.copy(vectors[index1])
self.i0 = index0
self.i1 = index1
self.images = images
self.vs = cp.copy(vectors)
def distance_measure(self, pv0, pv1, alpha1=0.5, alpha2=0.25, alpha3=0.25, phase = 1):
'''
        This function measures the distance between the two given feature vectors.
        The distance metric we use is:
            d(v(k, i), v(k+1, j)) = alpha1 * d_c(c(k, i), c(k+1, j)) +
                                    alpha2 * q * d_efd(r(k, i), r(k+1, j)) +
                                    alpha3 * q * d_cm(cm(k, i), cm(k+1, j))
        where q is the phase-identification gate.
        Input: The two given feature vectors,
               and the set of parameters.
        Output: the distance of the two given vectors.
'''
        def centroid_distance(c0, c1, D=30.):
dist = np.sqrt((c0[0]-c1[0])**2 + (c0[1]-c1[1])**2)
return dist/D if dist < D else 1
def efd_distance(r0, r1, order=8):
def find_max(max_value, test):
if max_value < test:
return test
return max_value
dis = 0
if type(r0) is not int and type(r1) is not int:
max_a, max_b, max_c, max_d = 0, 0, 0, 0
for o in range(order):
dis += ((r0[o][0]-r1[o][0])**2+\
(r0[o][1]-r1[o][1])**2+\
(r0[o][2]-r1[o][2])**2+\
(r0[o][3]-r1[o][3])**2)
max_a = find_max(max_a, (r0[o][0]-r1[o][0])**2)
max_b = find_max(max_b, (r0[o][1]-r1[o][1])**2)
max_c = find_max(max_c, (r0[o][2]-r1[o][2])**2)
max_d = find_max(max_d, (r0[o][3]-r1[o][3])**2)
dis /= (order*(max_a+max_b+max_c+max_d))
                if dis > 1.1:
                    print(dis, max_a, max_b, max_c, max_d)
                    raise ValueError("normalized EFD distance exceeded 1.1")
else:
dis = 1
return dis
def cm_distance(cm0, cm1):
return ((cm0[0]-cm1[0])**2+\
(cm0[1]-cm1[1])**2+\
(cm0[2]-cm1[2])**2+\
(cm0[3]-cm1[3])**2)/\
(max(cm0[0],cm1[0])**2+\
max(cm0[1],cm1[1])**2+\
max(cm0[2],cm1[2])**2+\
max(cm0[3],cm1[3])**2)
if len(pv0.s[0]) and len(pv1.s[0]):
            dist = alpha1 * centroid_distance(pv0.c, pv1.c) + \
alpha2 * efd_distance(pv0.r, pv1.r, order=8) * phase + \
alpha3 * cm_distance(pv0.cm[0], pv1.cm[0]) * phase
else:
dist = Max_dis
return dist
def phase_identify(self, pv1, min_times_MA2ma = 2, RNN=False):
'''
Phase identification returns 0 when mitosis appears, vice versa.
'''
if not RNN:
_, contours, hierarchy = cv2.findContours(pv1.s[0].astype(np.uint8), 1, 2)
if not len(contours):
return 1
cnt = contours[0]
if len(cnt) >= 5:
(x,y),(ma,MA),angle = cv2.fitEllipse(cnt)
if ma and MA/ma > min_times_MA2ma:
return 0
elif not ma and MA:
return 0
else:
return 1
else:
return 1
else:
try:
if model.predict([pv1.r.reshape(40)])[-1]:
return 0
else:
return 1
except AttributeError:
return 1
def find_match(self, max_distance=1,a_1=0.5,a_2=0.25,a_3=0.25, rnn=False):
'''
This function is to find the nearest neighborhood between two
successive frame.
'''
        def centroid_distance(c0, c1, D=30.):
dist = np.sqrt((c0[0]-c1[0])**2 + (c0[1]-c1[1])**2)
return dist/D if dist < D else 1
for i, pv1 in enumerate(self.v1):
dist = np.ones((len(self.v0), 3), np.float32)*max_distance
count = 0
q = self.phase_identify(pv1, 3, RNN=rnn)
for j, pv0 in enumerate(self.v0):
                if centroid_distance(pv0.c, pv1.c) < 1 and pv0.a:
dist[count][0] = self.distance_measure(pv0, pv1, alpha1=a_1, alpha2=a_2, alpha3=a_3, phase=q)
dist[count][1] = pv0.l
dist[count][2] = pv0.id
count += 1
sort_dist = sorted(dist, key=lambda a_entry: a_entry[0])
print "dis: ", sort_dist[0][0]
if sort_dist[0][0] < max_distance:
self.v1[i].l = sort_dist[0][1]
self.v1[i].id = sort_dist[0][2]
def mitosis_refine(self, rnn=False):
'''
        This function is to find cells that disappear because of mitosis.
'''
def find_sibling(pv0):
'''
            This function is to find sibling cells according to the centroid of
            pv0. The criteria for siblings are:
            1. the Jaccard coefficient of the two candidate shapes is above 0.4;
            2. the sum of the two areas is at least twice the area A of pv0;
            3. each candidate lies within 50 pixels of pv0's centroid.
Input: pv0: the parent cell that you want to find siblings;
Output: the index of the siblings.
'''
def maxsize_image(image1, image2):
y1, x1 = np.where(image1)
y2, x2 = np.where(image2)
return min(min(x1), min(x2)), min(min(y1), min(y2)), \
max(max(x1), max(x2)), max(max(y1), max(y2)),
def symmetry(image, shape):
h, w = image.shape[:2]
newimg = np.zeros(shape)
newimg[:h, :w] = image
v = float(shape[0] - h)/2.
u = float(shape[1] - w)/2.
M = np.float32([[1,0,u],[0,1,v]])
return cv2.warpAffine(newimg,M,(shape[1],shape[0]))
def jaccard(s0, s1):
minx, miny, maxx, maxy = maxsize_image(s0, s1)
height = maxy - miny + 1
width = maxx - minx + 1
img0 = symmetry(s0, (height, width))
img1 = symmetry(s1, (height, width))
num = 0.
deno = 0.
for y in range(height):
for x in range(width):
if img0[y, x] and img1[y, x]:
num += 1
if img0[y, x] or img1[y, x]:
deno += 1
return num/deno
sibling_cand = []
for i, pv1 in enumerate(self.v1):
if np.linalg.norm(pv1.c-pv0.c) < 50:
sibling_cand.append([pv1, i])
sibling_pair = []
area = pv0.s[0].sum()
jaccard_value = []
for sibling0 in sibling_cand:
for sibling1 in sibling_cand:
if (sibling1[0].c != sibling0[0].c).all():
sum_area = sibling1[0].s[0].sum()+sibling0[0].s[0].sum()
similarity = jaccard(sibling0[0].s[0], sibling1[0].s[0])
if similarity > 0.4 and (sum_area > 2*area):
sibling_pair.append([sibling0, sibling1])
jaccard_value.append(similarity)
if len(jaccard_value):
return sibling_pair[np.argmax(jaccard_value)]
else:
return 0
v1_ids = []
for pv1 in self.v1:
v1_ids.append(pv1.id)
for i, pv0 in enumerate(self.v0):
if pv0.id not in v1_ids and len(pv0.s[0]) and self.phase_identify(pv0, 3, RNN=rnn):
sibling = find_sibling(pv0)
if sibling:
[s0, s1] = sibling
if s0[0].l==0 and s1[0].l==0 and \
s0[0].id==-1 and s1[0].id==-1:
self.v1[s0[1]].l = pv0.id
self.v1[s1[1]].l = pv0.id
return self.v1
def match_missing(self, mask, max_frame = 1, max_distance = 10, min_shape_similarity = 0.6):
'''
        This function is to match cells that fail to appear in a frame because of
        an upstream fault. To match them, we search the previous frames for a cell
        within a certain range and with a similar shape.
'''
        def centroid_distance(c0, c1):
dist = np.sqrt((c0[0]-c1[0])**2 + (c0[1]-c1[1])**2)
return dist
def maxsize_image(image1, image2):
y1, x1 = np.where(image1)
y2, x2 = np.where(image2)
return min(min(x1), min(x2)), min(min(y1), min(y2)), \
max(max(x1), max(x2)), max(max(y1), max(y2)),
def symmetry(image, shape):
h, w = image.shape[:2]
newimg = np.zeros(shape)
newimg[:h, :w] = image
v = float(shape[0] - h)/2.
u = float(shape[1] - w)/2.
M = np.float32([[1,0,u],[0,1,v]])
return cv2.warpAffine(newimg,M,(shape[1],shape[0]))
def shape_similarity(s0, s1):
if len(s0) and len(s1):
minx, miny, maxx, maxy = maxsize_image(s0, s1)
height = maxy - miny + 1
width = maxx - minx + 1
img0 = symmetry(s0, (height, width))
img1 = symmetry(s1, (height, width))
num = 0.
deno = 0.
for y in range(height):
for x in range(width):
if img0[y, x] and img1[y, x]:
num += 1
if img0[y, x] or img1[y, x]:
deno += 1
return num/deno
else:
return 0.
def add_marker(index_find, index_new, pv0_id):
temp = mask[index_new]
find = mask[index_find]
temp[find==pv0_id] = pv0_id
return temp
for i, pv1 in enumerate(self.v1):
if pv1.id == -1:
for index in range(1, max_frame+1):
if self.i0-index >= 0:
vt = self.vs[self.i0-index]
for pv0 in vt:
                            if centroid_distance(pv0.c, pv1.c) < max_distance and \
shape_similarity(pv0.s[0], pv1.s[0]) > min_shape_similarity:
self.v1[i].id = pv0.id
self.v1[i].l = pv0.l
print "missing in frame: ", self.i1, "find in frame: ", \
self.i0-index, "ID: ", pv0.id, " at: ", pv0.c
for i in range(self.i0-index+1, self.i1):
mask[i] = add_marker(self.i0-index, i, pv0.id)
return mask
def new_id(self, vectors):
'''
        This function is to add new ids for the nuclei whose id is marked as -1.
'''
def find_max_id(vectors):
max_id = 0
for vt in vectors:
for pt in vt:
if pt.id > max_id:
max_id = pt.id
return max_id
max_id = find_max_id(self.vs)
max_id += 1
for i, pv1 in enumerate(self.v1):
if pv1.id == -1:
self.v1[i].id = max_id
max_id += 1
def generate_mask(self, marker, index, isfinal=False):
'''
This function is to generate a 16-bit image as mask image.
'''
h, w = marker.shape[:2]
mask = np.zeros((h, w), np.uint16)
pts = list(set(marker[marker>0]))
if not isfinal:
assert len(pts)==len(self.v0), 'len(pts): %s != len(self.v0): %s' % (len(pts), len(self.v0))
for pt, pv in zip(pts, self.v0):
mask[marker==pt] = pv.id
else:
            assert len(pts)==len(self.v1), 'len(pts): %s != len(self.v1): %s' % (len(pts), len(self.v1))
for pt, pv in zip(pts, self.v1):
mask[marker==pt] = pv.id
os.chdir(".")
write_mask16(mask, "mask", index)
os.chdir(os.pardir)
return mask
def return_vectors(self):
'''
This function is to return the vectors that we have already
changed.
Output: the vectors from the k+1 frame.
'''
return self.v1
import copy as cp
mask = []
temp_vector = cp.deepcopy(vector)
# Feature matching
for i in range(len(images)-1):
print " Feature matching: image ", i
m = SIMPLE_MATCH(i,i+1,[images[i], images[i+1]], temp_vector)
mask.append(m.generate_mask(marks[i], i))
m.find_match(0.7,0.7,0.15,0.15)
temp_vector[i+1] = m.mitosis_refine()
m.new_id(temp_vector)
temp_vector[i+1] = m.return_vectors()
clear_output(wait=True)
print " Feature matching: image ", i+1
mask.append(m.generate_mask(marks[i+1], i+1, True))
os.chdir(".")
cells = set_date(temp_vector)
write_info(cells, "res_track")
print "finish!"
###Output
_____no_output_____
###Markdown
This part generates the final marked result in "gif".
###Code
# write gif image showing the final result
def find_max_id(temp_vector):
max_id = 0
for pv in temp_vector:
for p in pv:
if p.id > max_id:
max_id = p.id
return max_id
# This part is to mark the result in the normalized image and
# write the gif image.
max_id = find_max_id(temp_vector)
colors = [np.random.randint(0, 255, size=max_id),\
np.random.randint(0, 255, size=max_id),\
np.random.randint(0, 255, size=max_id)]
font = cv2.FONT_HERSHEY_SIMPLEX
select_id = 9
enhance_imgs = []
for i, m in enumerate(mask):
print " write the gif image: image ", i
enhance_imgs.append(cv2.cvtColor(enhance_images[i],cv2.COLOR_GRAY2RGB))
for pv in temp_vector[i]:
center = pv.c
if not pv.l:
color = (colors[0][int(pv.id)-1],\
colors[1][int(pv.id)-1],\
colors[2][int(pv.id)-1],)
else:
color = (colors[0][int(pv.l)-1],\
colors[1][int(pv.l)-1],\
colors[2][int(pv.l)-1],)
if m[center[0], center[1]]:
enhance_imgs[i][m==pv.id] = color
cv2.putText(enhance_imgs[i],\
str(int(pv.id)),(int(pv.c[1]), \
int(pv.c[0])),
font, 0.5,\
(255,255,255),1)
clear_output(wait=True)
os.chdir("PATH_TO_RESULT")
imageio.mimsave('mitosis_final.gif', enhance_imgs, duration=0.6)
print "finish!"
###Output
_____no_output_____ |
Visualization_Assignments/A_04_PlottingBasicChartsWithMatplotlib_en_SerhanOner.ipynb | ###Markdown
In this assignment, you will continue working with the [Coronavirus Source Data](https://ourworldindata.org/coronavirus-source-data). You will plot different chart types. Don't forget to set titles and axis labels. **(1)** Plot a bar chart for total cases of the 20 countries that have the biggest numbers.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
df = pd.read_csv('owid-covid-data.csv', parse_dates=["date"], low_memory=False)
df2 = df[(df['date']== '2021-11-04')]
df3 = df2.sort_values(['total_cases'], ascending=False).head(30)
df3.dropna(subset=['continent'], inplace=True)
df3
df3['location'].iloc[0:20].index
df3.iloc[0:20].location
plt.figure(figsize=(20,8))
plt.title('20 Countries That Have The Most Total Cases', fontsize = 20, color = 'darkblue')
plt.bar(df3.iloc[0:20].location, df3['total_cases'][0:20], color = 'blue')
plt.xlabel('Countries', fontsize=15, color = 'green')
plt.ylabel('Total Cases', fontsize=15, color = 'green')
plt.xticks(rotation = 90, fontsize=10)
plt.show()
###Output
_____no_output_____
###Markdown
**(2)** Plot a histogram for daily deaths for any country you choose. Make three subplots for different bins.
###Code
plt.figure(figsize=(20,5))
plt.title("Belgium's Daily Deaths")
x= df.loc[df['location'] == 'Belgium' , 'new_deaths']
plt.subplot(1,3,1)
plt.hist(x, color='blue', bins = 10)
plt.subplot(1,3,2)
plt.hist(x, color='blue', bins = 30)
plt.subplot(1,3,3)
plt.hist(x, color='blue', bins = 70)
plt.show()
###Output
_____no_output_____
###Markdown
**(3)** Plot a scatter plot of new cases and new death for Germany and France.
###Code
plt.figure(figsize=(20,8))
plt.title("New Cases and Deaths of Germany & France")
ger_cases = df.loc[df['location'] == 'Germany', 'new_cases']
fra_cases = df.loc[df['location'] == 'France', 'new_cases']
ger_deaths = df.loc[df['location'] == 'Germany', 'new_deaths']
fra_deaths = df.loc[df['location'] == 'France', 'new_deaths']
plt.scatter(ger_cases, ger_deaths, color = "red")
plt.xlabel('New Cases')
plt.ylabel('New Deaths')
plt.xticks(rotation = 75, fontsize = 9)
plt.scatter(fra_cases, fra_deaths, color = "blue")
plt.xlabel('New Cases')
plt.ylabel('New Deaths')
plt.xticks(rotation = 75, fontsize = 9)
plt.show()
# as one can see, there are some negative values written above, and thus one needs to erase or reorganize them.
###Output
_____no_output_____
###Markdown
**(4)** Plot a boxplot for daily deaths for any country you choose.
###Code
plt.figure(figsize=(15, 8))
plt.title('Daily Deaths of Belgium', fontsize = 15, c = "black")
plt.boxplot(df.loc[df['location']=='Belgium', 'new_deaths'].dropna())
plt.xlabel('Days',fontsize = 10, color = 'red')
plt.ylabel('Daily Deaths', fontsize = 10, color = 'red')
plt.show()
###Output
_____no_output_____
###Markdown
**(5)** Calculate the total cases for each continent and plot a pie chart
###Code
# aggregate total cases per continent (task 5 asks per continent, not per country)
df4 = df2.dropna(subset=['continent', 'total_cases'])
continent_totals = df4.groupby('continent')['total_cases'].sum().sort_values(ascending=False)
plt.figure(figsize = (15,10))
plt.title('Total Cases per Continent', fontsize = 15 , c = 'blue')
plt.pie(continent_totals, labels=continent_totals.index, autopct='%1.1f%%', shadow=True, startangle=90)
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/initial_conditions-checkpoint.ipynb | ###Markdown
This notebook contains the RHS functions for eqs 3-6 of the "MarshakWave 3T" notebook and the corresponding initial conditions.
###Code
# coupling coefficient function; the model constants n, m, Cve, Cvi are assumed
# to be defined elsewhere in the notebook
gamma_val = lambda x, xmax: gamma(x, g(x, xmax))
gamma0 = 0
gamma = lambda t, Te: gamma0*((Te)**(-m))
#initial condition g for the electron temperature
g = lambda x, xmax: ((n+3)*xmax*(-x+xmax)/(n+4))**(1/(n+3))
gprime = lambda x, xmax: -(((n+3)/(n+4))*xmax)**(1/(n+3))*1/(n+3)*(xmax-x)**((-2-n)/(n+3))
# IC for time dependent Ti
#this function returns the IC for Te if the two temperatures are fully coupled
h_hi = g
h_low = lambda x, xmax: 1/(Cvi)*gamma_val(x,xmax)*((n+3)/(n+4))*((xmax-x)/(xmax))*((n+3)/(n+4)*xmax*(xmax-x))**(1/(n+3))
h=lambda x, xmax: min((h_hi(x,xmax),h_low(x,xmax)))
#IC for Ti eq 6
f=lambda x, xmax:4*gamma_val(x,xmax)*(n+3)*(-xmax+x)*((n+3)*xmax*(-x+xmax)/(n+4))**(1/(n+3))/(Cvi*(n+4)*(2*gamma_val(x,xmax)*xmax**2/Cvi-x**3-4*gamma_val(x,xmax)*x*xmax/Cvi-x**2*xmax+2*gamma_val(x,xmax)*xmax**2/Cvi-x*xmax-xmax**3))
###Output
_____no_output_____
###Markdown
RHS functions To make eqs 3-6 first order in terms of the x derivative, the variable u is defined,$$ u = \frac{dT_e}{d\xi}$$The following right hand side functions are eqs 3-6 rearranged to solve for $\frac{du}{d\xi}$.
###Code
import numpy as np
#Vector functions for solving eq's 3,4 and 5,6
#v[0] = Te, v[1] = u, v[2] = Ti
def RHS_time(t,v,gamma):
Te = v[0]
Ti = v[2]
gamma_val = gamma(t,Te)
result = np.zeros(3)
#compute RHS
result[0] = v[1]
result[1] = ((-t*v[1] - 1/(Cve)*gamma_val*(v[2]-v[0]))*(v[0]**(-n))-(n+4)*(n+3)*(v[1]**2)*(v[0]**2))/((n+4)*v[0]**3) #eq 3
result[2] = gamma_val/(Cvi*t)*(v[2]-v[0]) #eq 4
return result
# Space dependent gamma
def RHS_space(t,v,gamma):
Te = v[0]
Ti = v[2]
gamma_val = gamma(t,Te)
result = np.zeros(3)
#compute RHS
result[0] = v[1]
result[1] = (-t*v[1]*v[0]**(-n)-gamma_val/(Cve*t**2)*v[0]**(-n)*(v[2]-v[0])-(n+3)*(n+4)*v[1]**2*v[0]**2)/((n+4)*v[0]**3) #eq 5
result[2] = (gamma_val)/(Cvi*t**3)*(v[2]-v[0]) #eq 6
return result
RHSfun_time = lambda t,v: RHS_time(t,v,gamma)
RHSfun_space = lambda t,v: RHS_space(t,v,gamma)
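# Usage sketch (assumption: the model constants n, m, Cve, Cvi and a front
# position xmax are set earlier in the full notebook; eps is illustrative):
# from scipy.integrate import solve_ivp
# xmax, eps = 1.0, 1e-3
# v0 = [g(xmax - eps, xmax), gprime(xmax - eps, xmax), h(xmax - eps, xmax)]
# sol = solve_ivp(RHSfun_space, (xmax - eps, eps), v0, dense_output=True)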
###Output
_____no_output_____ |
Assignment6_Perona.ipynb | ###Markdown
Linear Algebra for CHE Laboratory 6 : Matrix Operations Now that you have fundamental knowledge about representing and operating with vectors as well as the fundamentals of matrices, we'll try the same operations with matrices and even more. Objectives At the end of this activity you will be able to: 1. Be familiar with the fundamental matrix operations. 2. Apply the operations to solve intermediate equations. 3. Apply matrix algebra in engineering solutions. Discussion
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Transposition A matrix's transpose is found by reversing its rows into columns or columns into rows. The matrix's transpose is denoted by the letter "T" in the superscript of the provided matrix. For example, if "A" is the given matrix, the matrix's transpose is denoted as $A'$ or $A^T$. Thus, a matrix transpose is described as "A matrix generated by converting all of the rows of a given matrix into columns and vice versa.” So for example: $$A = \begin{bmatrix} 1 & 2 & 5\\5 & -1 &0 \\ 0 & -3 & 3\end{bmatrix} $$ $$ A^T = \begin{bmatrix} 1 & 5 & 0\\2 & -1 &-3 \\ 5 & 0 & 3\end{bmatrix}$$ This can now be achieved programmatically by using `np.transpose()` or using the `T` method.
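Two identities worth remembering: $(A^T)^T = A$ and $(A \cdot B)^T = B^T \cdot A^T$.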
###Code
J = np.array([
[46 ,50, -52],
[21, 35, 40],
[84, -42, 69]
])
J
JT = np.transpose(J)
JT
JT2 = J.T
JT2
np.array_equiv(JT, JT2)  # JT was defined above via np.transpose(J)
M = np.array([
[45,72,33,94],
[15,21,46,28],
])
M.shape
np.transpose(M).shape
M.T.shape
###Output
_____no_output_____
###Markdown
Dot Product / Inner Product In matrix dot product we are going to get the sum of products of the vectors by row-column pairs. So if we have two matrices $X$ and $Y$:$$X = \begin{bmatrix}x_{(0,0)}&x_{(0,1)}\\ x_{(1,0)}&x_{(1,1)}\end{bmatrix}, Y = \begin{bmatrix}y_{(0,0)}&y_{(0,1)}\\ y_{(1,0)}&y_{(1,1)}\end{bmatrix}$$The dot product will then be computed as:$$X \cdot Y= \begin{bmatrix} x_{(0,0)}*y_{(0,0)} + x_{(0,1)}*y_{(1,0)} & x_{(0,0)}*y_{(0,1)} + x_{(0,1)}*y_{(1,1)} \\ x_{(1,0)}*y_{(0,0)} + x_{(1,1)}*y_{(1,0)} & x_{(1,0)}*y_{(0,1)} + x_{(1,1)}*y_{(1,1)}\end{bmatrix}$$So if we assign values to $X$ and $Y$:$$X = \begin{bmatrix}1&2\\ 0&1\end{bmatrix}, Y = \begin{bmatrix}-1&0\\ 2&2\end{bmatrix}$$ $$X \cdot Y= \begin{bmatrix} 1*-1 + 2*2 & 1*0 + 2*2 \\ 0*-1 + 1*2 & 0*0 + 1*2 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\2 & 2 \end{bmatrix}$$This could be achieved programmatically using `np.dot()`, `np.matmul()` or the `@` operator.
###Code
X = np.array([
[1,2],
[0,1]
])
Y = np.array([
[-1,0],
[2,2]
])
np.dot(X,Y)
X.dot(Y)
X @ Y
np.matmul(X,Y)
###Output
_____no_output_____
###Markdown
When compared to vector dot products, matrix dot products have additional rules. There are fewer constraints because vector dot products have only one dimension. Because we are now dealing with Rank 2 vectors, we must follow the following rules: Rule 1: The inner dimensions of the two matrices in question must be the same. So given a matrix $A$ with a shape of $(a,b)$ where $a$ and $b$ are any integers. If we want to do a dot product between $A$ and another matrix $B$, then matrix $B$ should have a shape of $(b,c)$ where $b$ and $c$ are any integers. So for given the following matrices:$$A = \begin{bmatrix}2&4\\5&-2\\0&1\end{bmatrix}, B = \begin{bmatrix}1&1\\3&3\\-1&-2\end{bmatrix}, C = \begin{bmatrix}0&1&1\\1&1&2\end{bmatrix}$$So in this case $A$ has a shape of $(3,2)$, $B$ has a shape of $(3,2)$ and $C$ has a shape of $(2,3)$. So the only matrix pairs that is eligible to perform dot product is matrices $A \cdot C$, or $B \cdot C$.
###Code
A = np.array([
[43, 56],
[29, -69],
[90, 56]
])
B = np.array([
[87,94],
[67,87],
[-16,-34]
])
T = np.array([
[21,15,19],
[14,21,25]
])
print(A.shape)
print(B.shape)
print(T.shape)
A @ T
B @ T
###Output
_____no_output_____
###Markdown
Notice that the shape of the dot product is not the same as either of the matrices we used. The shape of a dot product is actually derived from the shapes of the matrices used: recalling matrix $A$ with a shape of $(a,b)$ and matrix $B$ with a shape of $(b,c)$, $A \cdot B$ will have a shape of $(a,c)$.
###Code
A @ B.T
X = np.array([
[15,27,33,90]
])
Y = np.array([
[11,20,45,-16]
])
print(X.shape)
print(Y.shape)
Y.T @ X
###Output
_____no_output_____
###Markdown
And if you try to multiply two matrices whose inner dimensions do not match, NumPy raises a `ValueError` pertaining to the shape mismatch. Rule 2: Dot Product has special properties Dot products are prevalent in matrix algebra; this implies they have several unique properties that should be considered when formulating solutions: 1. $A \cdot B \neq B \cdot A$ 2. $A \cdot (B \cdot C) = (A \cdot B) \cdot C$ 3. $A\cdot(B+C) = A\cdot B + A\cdot C$ 4. $(B+C)\cdot A = B\cdot A + C\cdot A$ 5. $A\cdot I = A$ 6. $A\cdot \emptyset = \emptyset$ I'll be doing just one of the properties and I'll leave the rest to test your skills!
###Code
A = np.array([
[33,21,31],
[46,53,41],
[14,21,80]
])
B = np.array([
[47,19,64],
[54,61,92],
[14,43,85]
])
C = np.array([
[13,12,40],
[20,15,16],
[31,60,17]
])
A.dot(np.zeros(A.shape))
z_mat = np.zeros(A.shape)
z_mat
a_dot_z = A.dot(np.zeros(A.shape))
a_dot_z
np.array_equal(a_dot_z,z_mat)
null_mat = np.zeros(A.shape, dtype=float)  # np.zeros, not np.empty: empty leaves garbage values
null = np.array(null_mat, dtype=float)
print(null)
np.allclose(a_dot_z,null)
###Output
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
###Markdown
Determinant A matrix is a collection of several numbers. For a square matrix, that is, a matrix with the same number of rows and columns, crucial information about the matrix can be captured in a single number called the determinant. The determinant can be used to solve linear equations, capture how linear transformations alter area or volume, and change variables in integrals. The determinant of some matrix $A$ is denoted as $det(A)$ or $|A|$. So let's say $A$ is represented as:$$A = \begin{bmatrix}a_{(0,0)}&a_{(0,1)}\\a_{(1,0)}&a_{(1,1)}\end{bmatrix}$$We can compute the determinant as:$$|A| = a_{(0,0)}*a_{(1,1)} - a_{(1,0)}*a_{(0,1)}$$So if we have $A$ as:$$A = \begin{bmatrix}1&4\\0&3\end{bmatrix}, |A| = 3$$But what about square matrices beyond the shape $(2,2)$? We can approach this problem by using several methods such as co-factor expansion and the minors method. These are taught in the lecture portion of the laboratory, but we can achieve the strenuous computation of high-dimensional matrices programmatically using Python. We can achieve this by using `np.linalg.det()`.
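For a $3 \times 3$ matrix, cofactor expansion along the first row gives $$|A| = a_{(0,0)}\left(a_{(1,1)}a_{(2,2)}-a_{(1,2)}a_{(2,1)}\right) - a_{(0,1)}\left(a_{(1,0)}a_{(2,2)}-a_{(1,2)}a_{(2,0)}\right) + a_{(0,2)}\left(a_{(1,0)}a_{(2,1)}-a_{(1,1)}a_{(2,0)}\right).$$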
###Code
A = np.array([
[14,54],
[70,43]
])
np.linalg.det(A)
B = np.array([
[14,23,45,46],
[60,54,73,38],
[34,61,54,42],
[84,53,56,32]
])
np.linalg.det(B)
###Output
_____no_output_____
###Markdown
Inverse Just as a number has a reciprocal in ordinary arithmetic, a non-singular matrix has an inverse. Equations can be solved and unknown variables determined using this inverse. An inverse matrix is one that, when multiplied with the original matrix, yields the identity. Now to determine the inverse of a matrix we need to perform several steps. So let's say we have a matrix $M$:$$M = \begin{bmatrix}1&7\\-3&5\end{bmatrix}$$First, we need to get the determinant of $M$.$$|M| = (1)(5)-(-3)(7) = 26$$Next, we need to reform the matrix into the inverse form:$$M^{-1} = \frac{1}{|M|} \begin{bmatrix} m_{(1,1)} & -m_{(0,1)} \\ -m_{(1,0)} & m_{(0,0)}\end{bmatrix}$$So that will be:$$M^{-1} = \frac{1}{26} \begin{bmatrix} 5 & -7 \\ 3 & 1\end{bmatrix} = \begin{bmatrix} \frac{5}{26} & \frac{-7}{26} \\ \frac{3}{26} & \frac{1}{26}\end{bmatrix}$$For higher-dimension matrices you might need to use co-factors, minors, adjugates, and other reduction techniques. To solve this programmatically we can use `np.linalg.inv()`.
###Code
M = np.array([
[76,45],
[75, -35]
])
np.array(M @ np.linalg.inv(M), dtype=int)
N = np.array([
[54,64,28,43,89,32,4],
[0,42,81,11,2,76,23],
[86,9,53,40,75,0,33],
[16,26,34,82,94,3,31],
[84,36,68,87,16,62,1],
[-55,5,32,73,61,80,-50],
[-32,-75,11,21,16,20,62],
])
N_inv = np.linalg.inv(N)
np.array(N @ N_inv,dtype=int)
###Output
_____no_output_____
###Markdown
To validate whether the matrix you have solved for is really the inverse, we use the following dot product property for a matrix $M$:$$M\cdot M^{-1} = I$$
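As a final application of the dot product, the cell below combines a matrix of component scores with a weight vector to produce overall grades.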
###Code
squad = np.array([
[8.0, 0.6, 0.9],
[0.2, 0.7, 5.0],
[0.3, 0.3, 8.0]
])
weights = np.array([
[0.5, 0.1, 0.8]
])
p_grade = squad @ weights.T
p_grade
###Output
_____no_output_____
###Markdown
Activity Task 1 Prove and implement the remaining 6 matrix multiplication properties. You may create your own matrices, whose shapes should be at least $(3,3)$. In your methodology, create individual flowcharts for each property and discuss the property; then present your proofs of the validity of your implementation in the results section by comparing your results to existing functions from NumPy.
###Code
np.array([])
###Output
_____no_output_____
###Markdown
$A \cdot B \neq B \cdot A$
###Code
A = np.array([
[26,37,63],
[45 ,45,36],
[65,75,45]
])
B = np.array([
[77,98,49],
[48,36,15],
[66,98,72]
])
result = [[0 for x in range(3)] for y in range(3)]
for i in range(len(B)):
for j in range(len(A[0])):
for k in range(len(A)):
result[i][j] += A[i][k] * B[k][j]
print('A.B IS')
print(result)
print('\n')
result = [[0 for x in range(3)] for y in range(3)]
for i in range(len(B)):
for j in range(len(A[0])):
for k in range(len(A)):
result[i][j] += B[i][k] * A[k][j]
print('B.A IS')
print(result)
print('\n')
print('Therefore A.B is not equal to B.A')
###Output
A.B IS
[[7936, 10054, 6365], [8001, 9558, 5472], [11575, 13480, 7550]]
B.A IS
[[9597, 10934, 10584], [3843, 4521, 4995], [10806, 12252, 10926]]
Therefore A.B is not equal to B.A
###Markdown
$A \cdot (B \cdot C) = (A \cdot B) \cdot C$
###Code
A = np.array ([
[32,45,65],
[3,5,76],
[12,56,98]
])
B = np.array ([
[12,67,3],
[76,83,90],
[23,454,1]
])
C = np.array ([
[1,76,98],
[23,45,77],
[98,45,77]
])
result = np.dot(B,C)
result = np.dot(A,result);
print("A.(B.C) is")
for r in result:
print(r)
print('\n')
result = np.dot(A,B)
result = np.dot(result,C);
print("(A.B).C) is")
for r in result:
print(r)
print('Therefore A.(B.C) = (A.B).C')
###Output
A.(B.C) is
[1231924 2184724 3568502]
[ 862354 1768939 2957507]
[1662418 2986014 4896178]
(A.B).C is
[1231924 2184724 3568502]
[ 862354 1768939 2957507]
[1662418 2986014 4896178]
###Markdown
$A\cdot(B+C) = A\cdot B + A\cdot C$
###Code
A = np.array ([
[45,54,32],
[65,8,21],
[98,12,43]
])
B = np.array ([
[87,53,13],
[54,76,31],
[21,54,75]
])
C = np.array ([
[31,76,43],
[87,53,31],
[87,53,13]
])
result = [[B[i][j] + C[i][j] for j in range
(len(B[0]))] for i in range(len(B))]
result = np.dot(A,result)
print("A.(B+C) is")
for r in result:
print(r)
print('\n')
result = np.dot(A,B)
result1 = np.dot(A,C)
result = [[result[i][j] + result1[i][j] for j in range
(len(result[0]))] for i in range(len(result))]
print("A.B+A.C) is")
for r in result:
print(r)
print('\n')
print('Therefore A.(B+C) = A.B + A.C')
###Output
A.(B+C) is
[16380 16195 8684]
[11066 11664 5984]
[17900 18791 10016]
A.B + A.C is
[16380, 16195, 8684]
[11066, 11664, 5984]
[17900, 18791, 10016]
Therefore A.(B+C) = A.B + A.C
###Markdown
$(B+C)\cdot A = B\cdot A + C\cdot A$
###Code
A = np.array ([
[43,65,23],
[5,7,8],
[87,5,33]
])
B = np.array ([
[67,7,23],
[76,98,32],
[34,71,1]
])
C = np.array ([
[56,872,3],
[53,76,32],
[54,87,34]
])
result = [[B[i][j] + C[i][j] for j in range
(len(B[0]))] for i in range(len(B))]
result = np.dot(result,A)
print("A.(B+C) is")
for r in result:
print(r)
print('\n')
result = np.dot(B,A)
result1 = np.dot(C,A)
result = [[result[i][j] + result1[i][j] for j in range
(len(result[0]))] for i in range(len(result))]
print("(A.B)+(A.C) is")
for r in result:
print(r)
print('\n')
print('Therefore (B+C).A = (B.A)+(C.A)')
###Output
(B+C).A is
[11946 14278 10719]
[11985 9923 6471]
[7619 7001 4443]
(B.A)+(C.A) is
[11946, 14278, 10719]
[11985, 9923, 6471]
[7619, 7001, 4443]
Therefore (B+C).A = (B.A)+(C.A)
###Markdown
$A\cdot I = A$
###Code
M1 = np.array([
[54,76,35],
[35,13,76],
[8,5,8]
])
M2 = np.array([
[1,0,0],
[0,1,0],
[0,0,1]
])
result = [[0 for x in range(3)] for y in range(3)]
for i in range(len(M2)):
for j in range(len(M1[0])):
for k in range(len(M1)):
result[i][j] += M2[i][k] * M1[k][j]
print(result)
print('\n')
print('Therefore A.I = A')
###Output
[[54, 76, 35], [35, 13, 76], [8, 5, 8]]
Therefore A.I = A
###Markdown
$A\cdot \emptyset = \emptyset$
###Code
M1 = np.array([
[45,65,3],
[65,25,7],
[12,76,43]
])
M2 = np.array([
[0,0,0],
[0,0,0],
[0,0,0]
])
result = [[0 for x in range(3)] for y in range(3)]
for i in range(len(M2)):
for j in range(len(M1[0])):
for k in range(len(M1)):
result[i][j] += M2[i][k] * M1[k][j]
print(result)
print('\n')
print('Therefore A.\u2205 = \u2205')
###Output
[[0, 0, 0], [0, 0, 0], [0, 0, 0]]
Therefore A.∅ = ∅
|
Dell_Scrapping.ipynb | ###Markdown
Scraping data on notebooks on sale from the Dell Brasil site Installing and importing BeautifulSoup:
###Code
!pip install bs4
import bs4
import pandas as pd
import urllib.request as urllib_request
print("BeautifulSoup ->", bs4.__version__)
print("urllib ->", urllib_request.__version__)
print("pandas ->", pd.__version__)
###Output
BeautifulSoup -> 4.6.3
urllib -> 3.7
pandas -> 1.1.5
###Markdown
Fetching the URL and handling errors:
###Code
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
url = 'https://deals.dell.com/pt-br/work/category/notebooks'
headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36'}
try:
req = Request(url, headers = headers)
response = urlopen(req)
html = response.read()
print(html)
except HTTPError as e: # Access error
    print(e.status, e.reason)
except URLError as e: # URL error
    print(e.reason)
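# Next step (sketch): parse the fetched HTML with BeautifulSoup. The <h3> tag
# below is an assumption -- inspect the page source for the real selectors.
# soup = BeautifulSoup(html, 'html.parser')
# for heading in soup.find_all('h3'):
#     print(heading.get_text(strip=True))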
###Output
b'\r\n\r\n<!DOCTYPE html>\r\n<html lang="pt-BR">\r\n<head>\r\n ...' (raw HTML of the fetched page; long output truncated)
Date.now() : +new Date) - r }), n.mark || (n.mark = n.webkitMark || function (e) { var r = { name: e, entryType: "mark", startTime: n.now(), duration: 0 }; o.push(r), t[e] = r }), n.measure || (n.measure = n.webkitMeasure || function (e, r, n) { t[r] && t[n] && (r = t[r].startTime, n = t[n].startTime, o.push({ name: e, entryType: "measure", startTime: r, duration: n - r })) }), n.getEntriesByType || (n.getEntriesByType = n.webkitGetEntriesByType || function (e) { return a("entryType", e) }), n.getEntriesByName || (n.getEntriesByName = n.webkitGetEntriesByName || function (e) { return a("name", e) }), n.clearMarks || (n.clearMarks = n.webkitClearMarks || function (e) { i("mark", e) }), n.clearMeasures || (n.clearMeasures = n.webkitClearMeasures || function (e) { i("measure", e) }), e.performance = n, "function" == typeof define && (define.amd || define.ajs) && define("performance", [], function () { return n }), Dell.perfmetrics.enabled = !0, e.addEventListener("load", function () { e.dispatchEvent(new CustomEvent("winLoad")) }, !1) } if (Dell = window.Dell || {}, Dell.perfmetrics = Dell.perfmetrics || {}, Dell.perfmetrics.measure = function (e, r, n) { performance.measure(e, r, n) }, Dell.perfmetrics.start = function (e) { try { Dell.perfmetrics.enabled && performance.mark(e + "-start") } catch (e) { } }, Dell.perfmetrics.end = function (e) { try { Dell.perfmetrics.enabled && (performance.mark(e + "-end"), Dell.perfmetrics.measure(e, e + "-start", e + "-end")) } catch (e) { } }, "object" == typeof performance && "function" == typeof performance.mark) Dell.perfmetrics.enabled = !0; else try { performance_polyfill() } catch (e) { Dell.perfmetrics.enabled = !1 } Dell = window.Dell || {}, Dell.logger = Dell.logger || {}, function (e) { (function () { "use strict"; var r, o = !0, i = [], n = e.console || {}; function t() { n.log(arguments); var e = ""; for (var r in arguments) arguments.hasOwnProperty(r) && (void 0 !== arguments[r].stack ? 
e += " " + arguments[r].stack : e += " " + arguments[r]); return e } n.error = n.error || function () { }, n.error = (r = n.error.bind(n), function (e) { o && (t(e), i.push({ message: e, consoleInvoked: !0 })), r(e) }), e.onerror = function (e, r, n, t, a) { return o && i.push({ message: e, fileName: r, lineNumber: n, columnNumber: t, errorObj: a }), !1 }, this.log = function () { n.log(arguments), i.push({ message: t.apply(null, arguments), logInvoked: !0 }) }, this.disableLogger = function () { o = !1 }, this.getLog = function () { return i } }).call(Dell.logger) }(window);\r\n </script>\r\n <script>var dellScriptLoader = (function () {\n\t"use strict";\n\n\tvar scriptsArray = [];\n\tvar urlRegex = /^(https:\\/\\/www\\.|https:\\/\\/|\\/\\/|\\/)?[a-z0-9]+([\\-\\.]{1}[a-z0-9]+)*\\.[a-z]{2,5}(:[0-9]{1,5})?(\\/.*)?$/; //exit if not a url or an array\n\n\tfunction scriptsArrayCopy() {\n\t\treturn JSON.parse(JSON.stringify(scriptsArray));\n\t}\n\n\tfunction load(scripts) {\n\t\t//check for valid url\n\t\tfunction isValidUrl(url) {\n\t\t\treturn typeof url === "string" && urlRegex.test(url);\n\t\t}\n\n\t\t//check for array\n\t\tfunction isValidArray(scripts) {\n\t\t\treturn Array.isArray(scripts);\n\t\t}\n\n\t\tif (!(isValidUrl(scripts) || isValidArray(scripts))) {\n\t\t\treturn;\n\t\t}\n\n\t\t//single url being passed\n\t\tif (isValidUrl(scripts)) {\n scriptsArray.push(scripts);\n return;\n\t\t}\n\n\t\t//handling of array\n\t\tif (isValidArray(scripts)) {\n\t\t\tfor (var i = 0; i < scripts.length; ++i) {\n\t\t\t\tvar _script = scripts[i];\n\n\t\t\t\t//if array of strings\n\t\t\t\tif (typeof _script === "string" && isValidUrl(_script)) {\n scriptsArray.push({ url: _script });\n return;\n\t\t\t\t}\n\n\t\t\t\t//if array of objects with url and order\n if (_script.hasOwnProperty("url") && _script.hasOwnProperty("order") && !isNaN(Number(_script.order))) {\n\t\t\t\t\t_script.order = Number(_script.order);\n\t\t\t\t\tscriptsArray.push(_script);\n return;\n\t\t\t\t}\n\n\t\t\t\t//if array of objects with only url\n if (_script.hasOwnProperty("url") && isValidUrl(_script.url)) {\n scriptsArray.push(_script);\n return;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn Object.freeze({\n\t\tload: load,\n\t\tscriptsArrayCopy: scriptsArrayCopy\n\t});\n})();\n</script>\r\n \r\n</head>\r\n<body>\r\n <script data-perf="bodyGlobal">Dell.perfmetrics.start(\'bodyGlobal\');</script>\r\n\r\n <svg aria-hidden="true" style="position: absolute; width: 0; height: 0; overflow: hidden;" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">\r\n <defs>\r\n <symbol id="dds__home" viewBox="0 0 32 32">\r\n <title>dds__home</title>\r\n <path d="M30.821 17.746l-3.1-3.322v-0.055h-0.051l-11.667-12.499-11.671 12.499h-0.070v0.075l-3.083 3.302 1.376 1.284 1.705-1.828v12.928h8.994v-10.488h5.468v10.488h8.994v-12.947l1.724 1.847 1.38-1.284zM25.838 28.248h-5.229v-10.488h-9.233v10.488h-5.231v-13.064l9.858-10.556 9.837 10.539v13.080z"></path>\r\n </symbol>\r\n <symbol xmlns="http://www.w3.org/2000/svg" id="dds__chevron-left" viewBox="0 0 32 32">\r\n <title>dds__chevron-left</title>\r\n <path d="M23.704 3.255l-1.331-1.333-14.078 14.078 14.078 14.078 1.331-1.333-12.745-12.745z"/>\r\n </symbol>\r\n <symbol id="dds__chevron-right" viewBox="0 0 32 32">\r\n <title>dds__chevron-right</title>\r\n <path d="M9.626 1.922l-1.331 1.333 12.745 12.745-12.745 12.745 1.331 1.333 14.078-14.078z"></path>\r\n </symbol>\r\n </defs>\r\n</svg>\r\n\r\n \r\n<!--wmm:ignore-->\n 
<style>#unified-masthead,#unified-masthead>*{font-family:roboto,Arial,Helvetica,sans-serif;font-display:swap}#unified-masthead .mh-cart.empty .icon:before{display:none}#unified-masthead .mh-cart .icon{position:relative;background:0 0}#unified-masthead .mh-cart .icon:before{content:attr(mh-bubble-count);display:block;color:#fff;border-radius:50%;background-color:#0672cb;position:absolute;text-align:center;width:12px;height:12px;font-size:8px;font-weight:500;line-height:12px;right:-4px;top:-4px}#unified-masthead .mh-cart .cart{height:56px}#unified-masthead .mh-cart .cart .label{font-size:14px;padding:0}@media only screen and (max-width:1023px){#unified-masthead .mh-cart .cart{height:58px;width:48px}#unified-masthead .mh-cart .icon{height:24px;width:24px;display:block}#unified-masthead .mh-cart .icon:before{right:-3px;top:-2px}#unified-masthead .mh-cart .icon svg{width:24px;height:24px}}@media only screen and (min-width:1024px){#unified-masthead .mh-cart{position:relative}#unified-masthead .mh-cart .cart{height:56px}#unified-masthead .mh-cart .cart .icon{height:16px;padding:0;width:16px;display:-ms-flexbox;display:flex}}#unified-masthead .mh-cart-dropdown{border-bottom:1px solid #f9f9f9;width:256px;z-index:100;box-sizing:border-box}#unified-masthead .mh-cart-dropdown .dropdown-title{padding:16px 16px 0}#unified-masthead .mh-cart-dropdown a{display:block;text-decoration:none}#unified-masthead .mh-cart-empty{display:none}#unified-masthead .mh-cart.empty .mh-cart-empty{display:block}#unified-masthead .mh-cart.empty .mh-cart-loaded{display:none}#unified-masthead .mh-cart-empty-label{padding-bottom:80px;border-bottom:1px solid #c8c9c7}#unified-masthead .mh-cart-content .mh-ct-dd-cartInfo{line-height:20px;color:#636363}#unified-masthead .mh-cart-content .mh-ct-dd-cartInfo>span{padding:0 16px}#unified-masthead .mh-cart-content ul{list-style-type:none;padding:0;margin:0}#unified-masthead .mh-cart-content a{color:#444}#unified-masthead .mh-cart-list-item{padding:0 16px}#unified-masthead .mh-cart-list-item a{line-height:20px;color:#0e0e0e;padding:10px 0;outline-width:0;border-bottom:1px solid #c8c9c7}#unified-masthead .mh-top .left-column .mh-logo a:focus,#unified-masthead .skip-nav-link:focus,#unified-masthead.user-is-tabbing .mh-cart-list-item a:focus{outline:#00468b solid 1px}#unified-masthead .mh-cart-list-item .mh-ct-hp-subtotal-wrap .mh-cart-category-label{color:#0e0e0e;font-size:14px}#unified-masthead .mh-cart-list-item .mh-ct-hp-subtotal-wrap .mh-cart-subtotal{color:#0e0e0e}#unified-masthead .mh-cart-list-item:last-child{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;padding:0}#unified-masthead .mh-cart-list-item:last-child a{padding:18px 16px;width:100%;color:#636363;border-bottom:none}#unified-masthead .mh-cart-subtotal{display:-ms-flexbox;display:flex;-ms-flex-pack:justify;justify-content:space-between}#unified-masthead .mh-cart-category-label{line-height:20px;font-size:14px}#unified-masthead .mh-saved-subtotal-label{line-height:20px;font-size:12px}#unified-masthead .mh-saved-subtotal-price{text-align:right;font-size:12px;font-weight:700}@media only screen and (min-width:1024px){#unified-masthead .mh-cart-dropdown{height:auto}#unified-masthead .mh-cart-list-item:hover{background-color:#f0f0f0}#unified-masthead .mh-cart-list-item:last-child a:hover{color:#0e0e0e}}@media only screen and (max-width:1023px){#unified-masthead .mh-cart-dropdown{width:auto}#unified-masthead .mh-cart-dropdown .mh-close{padding:16px}#unified-masthead .mh-cart-list-item a{padding:14px 
0}#unified-masthead .mh-cart-list-item:last-child a{padding:12px 16px}}#unified-masthead .mh-close{display:-ms-flexbox;display:flex;-ms-flex-pack:end;justify-content:flex-end}#unified-masthead .mh-close a{display:-ms-inline-flexbox;display:inline-flex;line-height:15px;height:15px}#unified-masthead .mh-close svg{height:15px;width:15px}@media only screen and (min-width:1024px){#unified-masthead .mh-close{display:none}}#unified-masthead .flyoutOverlay{display:none;position:fixed;background:#000;width:100%;height:calc(100vh - 58px);left:0;opacity:.5;z-index:1000;cursor:pointer;content:""}#unified-masthead .mh-flyout-wrapper{position:relative}#unified-masthead .mh-flyout-wrapper>a[aria-expanded=true]{background:#f5f6f7}#unified-masthead .mh-flyout-link{-ms-flex-pack:center;justify-content:center;-ms-flex-align:center;align-items:center;border:none;position:relative;padding:0;background-color:transparent;display:block;text-decoration:none}#unified-masthead .mh-flyout-link:focus{outline-width:0}#unified-masthead .mh-flyout-link~.flyout{text-align:left;right:-1px;position:absolute;z-index:1001;height:auto;display:none;box-sizing:border-box}#unified-masthead .mh-flyout-link~.flyout.show{display:block}#unified-masthead .mh-flyout-link>span{width:100%;height:100%;display:-ms-flexbox;display:flex;-ms-flex-pack:center;justify-content:center;-ms-flex-align:center;align-items:center;cursor:pointer}#unified-masthead .mh-flyout-link>span>span:not(.label){margin-right:8px}#unified-masthead .mh-flyout-link>span:after{content:"";width:12px;height:12px;top:40%;transition:transform .2s linear}#mh-unified-footer.user-is-tabbing .mh-flyout-link:focus,#unified-masthead.user-is-tabbing .mh-flyout-link:focus{box-shadow:0 0 0 1px #00468b}@media only screen and (max-width:1023px){#mh-unified-footer .mh-flyout-link>span .label{display:none}#unified-masthead .mh-top .right-column .mh-flyout-link:hover{border-bottom:2px solid #636363}#unified-masthead .mh-flyout-link{position:static}#unified-masthead .mh-flyout-link>span{display:block;padding:17px 12px}#unified-masthead .mh-flyout-link>span .label,#unified-masthead .mh-flyout-link>span:after{display:none}#unified-masthead .mh-flyout-link>span>span:not(.label){margin-right:0}#unified-masthead .mh-flyout-link~.flyout{transition:transform .3s ease-out;transform:translateX(110%);will-change:transform;box-shadow:0 4px 16px rgba(0,43,85,.12);border-radius:2px;background:#fff;width:320px;max-width:320px;overflow-y:auto;display:block;right:0;position:fixed}#unified-masthead .mh-flyout-link.show~.flyout,#unified-masthead .mh-flyout-link:hover #unified-masthead .mh-flyout-link~.flyout.hide>*,#unified-masthead .mh-flyout-link[aria-expanded=true]~.flyout{transform:translateX(0)}#unified-masthead .mh-flyout-link:hover #unified-masthead .mh-flyout-link~.flyout.hide{background:0 0}#unified-masthead .mh-flyout-link.show~.flyoutOverlay,#unified-masthead .mh-flyout-link[aria-expanded=true]~.flyoutOverlay{display:block}}@media only screen and (min-width:1024px){#unified-masthead .mh-bottom .flyoutOverlay{top:104px;position:absolute}#unified-masthead .mh-bottom .utilityTop{top:58px}#unified-masthead .mh-flyout-wrapper .mh-flyout-link{padding:0 8px}#unified-masthead .mh-flyout-wrapper .mh-flyout-link[aria-expanded=true] span:after{transform:rotate(-180deg)}#unified-masthead .mh-flyout-wrapper .mh-flyout-link[aria-expanded=true]~.flyout{display:block;height:auto;box-shadow:0 4px 16px rgba(0,43,85,.12);border-radius:2px;border:1px solid 
#f9f9f9;background-color:#fff;max-width:none;margin-top:0}}#unified-masthead{width:100%;background-color:#fff;border-bottom:2px solid #d2d2d2;display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;z-index:1000;position:relative;padding-top:2px}#unified-masthead,#unified-masthead *,#unified-masthead :after,#unified-masthead :before{box-sizing:border-box}#unified-masthead .mh-top{-ms-flex-pack:justify;justify-content:space-between;height:56px;position:relative}#unified-masthead .mh-top,#unified-masthead .mh-top .left-column,#unified-masthead .mh-top .right-column{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center}#unified-masthead .mh-top .left-column{-ms-flex:1;flex:1}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;-ms-flex-pack:center;justify-content:center;border-radius:0;border-width:0;background-color:transparent;padding:18px 14px;cursor:pointer;-webkit-tap-highlight-color:transparent}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle.mh-nav-open,#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle.open,#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle:hover{background:#f0f0f0;border-bottom:2px solid #636363}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle .mh-nav-menu-icon{display:-ms-flexbox;display:flex;border:none;cursor:pointer;width:20px;height:20px;position:relative;transform:rotate(0);will-change:transform;transition:transform .5s ease-in-out}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle .mh-nav-menu-icon span{display:block;position:absolute;height:2px;width:100%;background:#636363;border-radius:2px;opacity:1;left:0;transform:rotate(0);transition:transform .25s ease-in-out}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle .mh-nav-menu-icon span:first-child{top:0}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle .mh-nav-menu-icon span:nth-child(2),#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle .mh-nav-menu-icon span:nth-child(3){top:8px}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle .mh-nav-menu-icon span:nth-child(4){top:16px}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle.open .mh-nav-menu-icon span:first-child{top:10px;width:0;left:50%}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle.open .mh-nav-menu-icon span:nth-child(2){transform:rotate(45deg);width:125%;left:-12.5%}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle.open .mh-nav-menu-icon span:nth-child(3){transform:rotate(-45deg);width:125%;left:-12.5%}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle.open .mh-nav-menu-icon span:nth-child(4){top:10px;width:0;left:50%}#unified-masthead .mh-top .left-column .mh-header-wrapper{display:-ms-flexbox;display:flex;-ms-flex:1;flex:1}#unified-masthead .mh-top .left-column .mh-logo a{display:-ms-flexbox;display:flex;padding:8px}#unified-masthead .mh-top .left-column .mh-logo a.dellLogoWrapper svg{fill:#0477cf}#unified-masthead .mh-top .center-column{position:absolute;width:100%;top:calc(100% + 8px);-ms-flex-pack:space-evenly;justify-content:space-evenly}#unified-masthead .mh-top .right-column{display:-ms-flexbox;display:flex}#unified-masthead .mh-top .right-column .country-selector{display:none}#unified-masthead .mh-top .right-column button{width:48px;height:56px}#unified-masthead .mh-top 
.dropdown-title{font-size:16px;font-weight:700;margin-bottom:16px;color:#636363;line-height:24px}#unified-masthead .mh-overlay-background{display:none;height:100%;width:100%;position:absolute;z-index:999;background-color:rgba(0,0,0,.7)}#unified-masthead .mh-overlay-background.show{display:block}#unified-masthead .mh-bottom{width:100%}#unified-masthead .skip-nav-link{display:-ms-flexbox;display:flex;padding:12px 16px;text-decoration:none;background:#0063b8;color:#ffff;position:absolute;transform:translateY(-100%);border-radius:2px;font-size:16px;line-height:24px;font-weight:500;z-index:1000;-ms-flex-pack:center;justify-content:center;-ms-flex-align:center;align-items:center;-ms-flex:none;flex:none;opacity:0}#unified-masthead .skip-nav-link:focus{opacity:1;transform:translateY(0);border:2px solid #fff;outline-offset:0}.mh-ele-fixed-pos{position:fixed!important;top:0}@media only screen and (max-width:1023px){#unified-masthead .mh-top{height:58px}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle{margin-left:4px}#unified-masthead .mh-top .left-column .mh-logo a{padding:9px 12px}#unified-masthead .right-column{margin-right:4px}#unified-masthead[data-state=mobile-expanded]{position:fixed;top:0}#unified-masthead[data-state=mobile-expanded] .mh-search{z-index:-1}#unified-masthead[data-state=mobile-expanded] .mh-bottom>.flyoutOverlay{display:block}#unified-masthead[data-state=mobile-expanded] #unified-masthead-navigation{transform:translateX(0)}#unified-masthead[data-state=mobile-expanded] #unified-masthead-navigation .divider{padding:16px}#unified-masthead[data-state=mobile-expanded] #unified-masthead-navigation .divider:before{content:"";border-bottom:1px solid #c8c9c7;width:100%;display:block}#unified-masthead[data-state=mobile-expanded] #unified-masthead-navigation .mob-country-selector{display:-ms-flexbox!important;display:flex!important}#unified-masthead[data-state=mobile-expanded] #unified-masthead-navigation .mob-country-selector .country-selector{display:-ms-flexbox;display:flex}#unified-masthead[data-state=mobile-expanded] #unified-masthead-navigation .mob-country-selector .flyout{display:none}}@media only screen and (min-width:768px) and (max-width:1023px){#unified-masthead{padding-top:0}#unified-masthead .mh-top .left-column .mh-logo{padding:0}#unified-masthead .mh-top .left-column .mh-logo a.delltechLogoWrapper{padding:16px 12px 18px}#unified-masthead .mh-top .left-column .mh-logo a.delltechLogoWrapper svg.dellTechLogo{width:182px;height:22px}#unified-masthead .mh-bottom{width:100%}}@media only screen and (max-width:767px){#unified-masthead{padding-top:0}#unified-masthead .mh-top .left-column .mh-logo{padding:0}#unified-masthead .mh-top .left-column .mh-logo a.delltechLogoWrapper{padding:20px 12px}#unified-masthead .mh-top .left-column .mh-logo a.delltechLogoWrapper svg.dellTechLogo{width:140px;height:18px;padding:0}#unified-masthead .mh-bottom{height:68px}}@media only screen and (min-width:1024px){#unified-masthead .mh-top{margin:0 24px}#unified-masthead .mh-top .left-column .mh-logo,#unified-masthead .mh-top .left-column .mh-logo a.delltechLogoWrapper{padding:0}#unified-masthead .mh-top .left-column .mh-logo a.delltechLogoWrapper svg.dellTechLogo{width:198px;height:56px;padding:17px 8px}#unified-masthead .mh-top .left-column .mh-mobile-nav-toggle{display:none}#unified-masthead .mh-top .right-column{-ms-flex-pack:end;justify-content:flex-end}#unified-masthead .mh-top .right-column 
.label{color:#636363;margin-bottom:0;text-transform:none;font-size:14px;line-height:20px;font-weight:400;padding:0}#unified-masthead .mh-top .right-column .mh-flyout-link:hover .label{color:#0e0e0e}#unified-masthead .mh-top .right-column .mh-contact-dropdown .mh-contact-list-item .mh-contact-flyout-icon{padding-right:0}#unified-masthead .mh-top .right-column .mh-contact-dropdown .mh-contact-list-item .label{display:inline-block;margin-left:14px;font-size:14px}#unified-masthead .mh-top .right-column .country-selector{display:block}#unified-masthead .mh-bottom{display:block;height:46px}}#unified-masthead-navigation .mh-menu-chevron.left{transform:rotate(180deg);width:20px;height:20px;background-position:50%;margin-right:6px}#unified-masthead-navigation nav .child-nav>a:after{content:"";position:absolute;display:block;right:0;top:12px;padding:15px}#unified-masthead-navigation nav a{display:-ms-flexbox;display:flex;word-wrap:break-word;text-decoration:none;color:#636363;-ms-flex-pack:start;justify-content:flex-start;box-sizing:border-box}#unified-masthead-navigation nav ul{list-style-type:none;margin:0;padding:0}#unified-masthead-navigation nav li{cursor:pointer;-webkit-tap-highlight-color:transparent;font-weight:400}#unified-masthead-navigation nav li a:focus{outline:#00468b solid 1px}#unified-masthead-navigation nav ul.sub-nav{z-index:1001;background-color:#fff;top:46px}#unified-masthead-navigation nav ul.sub-nav li>ul.sub-nav{background-color:#f0f0f0}#unified-masthead-navigation nav ul.sub-nav li>ul.sub-nav li>ul.sub-nav{background-color:#e0e1e2}#unified-masthead-navigation nav .mh-hide-mob-links,#unified-masthead-navigation nav .mob-country-selector,#unified-masthead-navigation nav ul.sub-nav .mh-hide-mob-links{display:none}@media only screen and (min-width:1024px){#unified-masthead-navigation{position:relative;font-size:14px;width:100%}#unified-masthead-navigation ul.sub-nav{position:absolute;-ms-flex-direction:column;flex-direction:column;display:none;box-sizing:border-box;padding:0;margin:0;width:242px;box-shadow:inset 0 1px 0 #c4c4c4;border:1px solid #c4c4c4;border-top:none;top:0}#unified-masthead-navigation ul.sub-nav li{display:-ms-flexbox;display:flex;padding:0;-ms-flex-pack:justify;justify-content:space-between}#unified-masthead-navigation ul.sub-nav li a{display:inline-block;position:relative;-ms-flex-line-pack:justify;align-content:space-between;width:100%;padding:12px 22px 12px 16px}#unified-masthead-navigation ul.sub-nav li.bottom{-ms-flex-align:center;align-items:center;border-top:1px solid #c4c4c4;-ms-flex-order:1000;order:1000;height:32px}#unified-masthead-navigation ul.sub-nav li.bottom:first-of-type{margin-top:auto}#unified-masthead-navigation ul.sub-nav li:last-of-type:not(.bottom){margin-bottom:auto}#unified-masthead-navigation ul.sub-nav li.mh-back-list-item{display:none}#unified-masthead-navigation ul.sub-nav li>ul.sub-nav,#unified-masthead-navigation ul.sub-nav li>ul.sub-nav li>ul.sub-nav{left:240px;top:0}#unified-masthead-navigation nav{height:46px;padding:0 16px;position:relative;display:inline-block}#unified-masthead-navigation nav>ul>li:focus{outline:#00468b solid 1px}#unified-masthead-navigation nav>ul>li:active{box-shadow:inset 0 -2px 0 #1d73c2}#unified-masthead-navigation nav>ul>li:hover{background:#f5f6f7;box-shadow:inset 0 -2px 0 #707070}#unified-masthead-navigation nav>ul>li:hover.child-nav .mh-top-nav-button span:after{transform:rotate(-180deg)}#unified-masthead-navigation nav>ul>li:hover>ul.sub-nav>li:hover{background:#f0f0f0}#unified-masthead-navigation 
nav>ul>li:hover>ul.sub-nav>li:hover>ul.sub-nav{display:-ms-flexbox;display:flex}#unified-masthead-navigation nav>ul>li:hover>ul.sub-nav>li:hover>ul.sub-nav li:hover{background:#e0e1e2}#unified-masthead-navigation nav>ul>li:hover>ul.sub-nav>li:hover>ul.sub-nav li:hover>ul.sub-nav{display:-ms-flexbox;display:flex}#unified-masthead-navigation nav>ul>li:hover>ul.sub-nav>li:hover>ul.sub-nav li:hover>ul.sub-nav li:hover{background:#d2d2d2}#unified-masthead-navigation nav>ul>li:hover>ul.sub-nav li.cta-link{height:30px;box-shadow:inset 0 1px 0 #c4c4c4;-ms-flex-align:center;align-items:center}#unified-masthead-navigation nav>ul>li:hover>ul.sub-nav .additional-nav-item{background:#ebf1f6}#unified-masthead-navigation .mh-top-menu-nav{display:-ms-flexbox;display:flex;list-style-type:none;margin:0;padding:0;height:100%}#unified-masthead-navigation .mh-top-menu-nav .mh-top-menu.child-nav>.mh-top-nav-button :after{content:"";position:absolute;display:block;width:20px;height:20px;right:15px;top:10px;transition:transform .3s cubic-bezier(0,.52,0,1);padding:0}#unified-masthead-navigation .mh-top-menu-nav>.child-nav>a:after{display:none}#unified-masthead-navigation .mh-top-menu-nav>li:hover>ul.sub-nav{display:-ms-flexbox;display:flex}#unified-masthead-navigation .mh-top-menu-nav a[aria-expanded=true]~ul.sub-nav{display:block}#unified-masthead-navigation .mh-top-nav-button{position:relative;display:-ms-flexbox;display:flex;-ms-flex-pack:center;justify-content:center;-ms-flex-align:center;align-items:center;padding:12px 36px 14px 16px;background-color:transparent;border:none;cursor:pointer}#unified-masthead-navigation .mh-top-nav-no-child{padding:12px 16px 14px}#unified-masthead [component=unified-country-selector] .country-list-container .sub-nav li{padding:1px}}@media only screen and (max-width:1023px){#unified-masthead-navigation nav .menu-list-item .nav-title,#unified-masthead-navigation nav ul li ul.sub-nav .mh-mastheadTitle{font-weight:600;color:#0e0e0e}#unified-masthead-navigation{width:320px;position:fixed;background-color:#fff;box-shadow:0 3px 8px rgba(0,43,85,.12);will-change:transform;transform:translateX(-110%);transition:transform .3s ease-out;z-index:1001;height:calc(100% - 58px);overflow-x:hidden}#unified-masthead-navigation nav{overflow-x:hidden}#unified-masthead-navigation nav>ul{height:100%;position:fixed;overflow-x:hidden;overflow-y:auto;width:320px;padding-top:48px}#unified-masthead-navigation nav>ul>li>a>span{word-break:break-all;padding-right:20px}#unified-masthead-navigation nav>ul>li[aria-expanded=true]>.sub-nav,#unified-masthead-navigation nav>ul>li[aria-expanded=true]>.sub-nav>li,#unified-masthead-navigation nav>ul>li[aria-expanded=true]>.sub-nav>li>a{pointer-events:auto}#unified-masthead-navigation nav>ul>li>.sub-nav,#unified-masthead-navigation nav>ul>li>.sub-nav>li{pointer-events:none}#unified-masthead-navigation nav>ul>li>.sub-nav>li a{word-break:break-all;padding-right:20px;pointer-events:none}#unified-masthead-navigation nav>ul>li>.sub-nav>li a.dell-chat-link-setup{padding-right:0}#unified-masthead-navigation nav>ul>li>.sub-nav>li a,#unified-masthead-navigation nav>ul>li>.sub-nav>li li,#unified-masthead-navigation nav>ul>li>.sub-nav>li ul{pointer-events:none}#unified-masthead-navigation nav>ul>li>.sub-nav>li[aria-expanded=true]>.sub-nav-wrapper>.sub-nav,#unified-masthead-navigation nav>ul>li>.sub-nav>li[aria-expanded=true]>.sub-nav-wrapper>.sub-nav a,#unified-masthead-navigation nav>ul>li>.sub-nav>li[aria-expanded=true]>ul,#unified-masthead-navigation 
nav>ul>li>.sub-nav>li[aria-expanded=true]>ul li,#unified-masthead-navigation nav>ul>li>.sub-nav>li[aria-expanded=true]>ul>li>a{pointer-events:auto}#unified-masthead-navigation nav ul{display:block;-ms-flex-direction:column;flex-direction:column}#unified-masthead-navigation nav ul li{display:block;-ms-flex-align:center;align-items:center;padding:13px 16px}#unified-masthead-navigation nav ul li[aria-expanded=true] .country-list-container>li[aria-expanded=true]>.sub-nav-wrapper>.sub-nav,#unified-masthead-navigation nav ul li[aria-expanded=true]>.sub-nav{display:-ms-flexbox!important;display:flex!important;transform:translateZ(0);transition:transform .3s ease-out}#unified-masthead-navigation nav ul li .chevron-csel-mob{float:right;transform:scale(1.89) rotate(-90deg)}#unified-masthead-navigation nav ul li>button{display:-ms-inline-flexbox;display:inline-flex;-ms-flex-pack:justify;justify-content:space-between;padding:16px 32px 16px 16px;-ms-flex-align:center;align-items:center;border-bottom:none;width:inherit;height:100%}#unified-masthead-navigation nav ul li>button:hover{border-bottom:none;text-align:left}#unified-masthead-navigation nav ul li.mh-back-list-item{display:-ms-flexbox;display:flex}#unified-masthead-navigation nav ul li.mh-back-list-item .mh-back-button{display:-ms-inline-flexbox;display:inline-flex;-ms-flex-pack:start;justify-content:flex-start;-ms-flex-align:center;align-items:center;width:100%;border:none;background:0 0}#unified-masthead-navigation nav ul li ul.sub-nav{transform:translate3d(100%,0,0);transition:transform .3s ease-out;will-change:transform;width:320px;position:fixed;top:0;left:0;overflow-y:auto;height:100%;overflow-x:hidden}#unified-masthead-navigation nav ul li ul.sub-nav .mh-hide-mob-links{display:-ms-flexbox;display:flex}#unified-masthead-navigation nav ul li:not(.child-nav){display:block}#unified-masthead-navigation nav .mh-hide-mob-links,#unified-masthead-navigation nav .mob-country-selector{display:-ms-flexbox;display:flex}#unified-masthead-navigation nav .child-nav>a{position:relative}#unified-masthead-navigation nav a{display:block;width:100%}#unified-masthead-navigation nav .child-nav>a:after{top:0}}@media (-ms-high-contrast:none) and (max-width:1023px){#unified-masthead-navigation{left:-100%}#unified-masthead-navigation nav ul li[aria-expanded=true] .country-list-container>li[aria-expanded=true]>.sub-nav-wrapper>.sub-nav,#unified-masthead-navigation nav ul li[aria-expanded=true]>.sub-nav{right:auto;left:0}#unified-masthead-navigation nav ul li ul.sub-nav{right:-100%;left:auto;top:59px}#unified-masthead[data-state=mobile-expanded] #unified-masthead-navigation{left:0}}#unified-masthead .mh-myaccount.auth .icon{position:relative}#unified-masthead .mh-myaccount.auth .icon:before{content:"\xe2\x9c\x93";transform:rotate(10deg);color:#fff;border-radius:50%;background-color:#0672cb;position:absolute;text-align:center;width:12px;height:12px;font-size:8px;font-weight:900;line-height:12px;right:-4px;top:-4px}#unified-masthead .mh-myaccount.auth .icon.green:before{background-color:#6ea204}#unified-masthead .mh-myaccount.auth .icon.yellow:before{background-color:orange}#unified-masthead .mh-myaccount.auth .icon.black:before{background-color:#000}#unified-masthead .mh-myaccount.auth .icon.orange:before{background-color:orange}#unified-masthead .mh-myaccount .mh-myaccount-btn{height:56px}#unified-masthead .mh-myaccount .mh-myaccount-btn .icon{background:0 0;height:16px;width:16px;display:-ms-flexbox;display:flex}#unified-masthead .mh-myaccount .mh-myaccount-btn 
.label{font-size:14px;padding:0;white-space:nowrap;overflow:hidden;text-overflow:ellipsis;max-width:120px}@media only screen and (max-width:1023px){#unified-masthead .mh-myaccount .mh-myaccount-btn{height:58px;width:48px}#unified-masthead .mh-myaccount .mh-myaccount-btn .icon{padding:0;height:24px;width:24px}#unified-masthead .mh-myaccount .mh-myaccount-btn .icon:before{right:0;top:-2px}#unified-masthead .mh-myaccount .mh-myaccount-btn .icon svg{width:24px;height:24px}}#unified-masthead .mh-myaccount-auth-wrapper{display:-ms-flexbox;display:flex;-ms-flex-pack:justify;justify-content:space-between}#unified-masthead .mh-myaccount-dropdown-wrap{color:#636363}#unified-masthead .mh-myaccount-dropdown-wrap a{display:block;text-decoration:none}#unified-masthead .mh-myaccount-dropdown-wrap .mh-close{padding-top:16px;padding-right:16px}#unified-masthead .mh-myaccount-dropdown-wrap .dropdown-title{font-size:16px;font-weight:700}#unified-masthead .mh-myaccount-dropdown-wrap .dropdown-subtitle{font-size:14px;line-height:20px}#unified-masthead .mh-myaccount.auth .mh-myaccount-unauth-dropdown{display:none}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown{display:-ms-flexbox;display:flex}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .auth-signout{-ms-flex-item-align:end;align-self:flex-end;width:50.1%;box-sizing:border-box;border-top:1px solid #c8c9c7}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .auth-signout .mh-btn-secondary{margin:10px}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .auth-signout.mh-borderNone{width:100%}#unified-masthead .mh-myaccount-unauth-dropdown{max-width:320px;padding:16px}#unified-masthead .mh-myaccount-unauth-dropdown .dropdown-subtitle{display:inline-block;padding-left:0;font-weight:400;color:#636363}#unified-masthead .mh-myaccount-unauth-dropdown ul{padding-left:16px}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-additional-options .dropdown-subtitle,#unified-masthead .mh-myaccount-unauth-dropdown .mh-myaccount-additional-options .dropdown-subtitle{padding-left:0;color:#636363}#unified-masthead .mh-myaccount-unauth-dropdown ul li{font-size:14px;line-height:20px;list-style:disc;font-weight:400;color:#636363}#unified-masthead .mh-myaccount-unauth-dropdown .mh-myaccount-additional-options{margin:16px -16px -16px}#unified-masthead .mh-myaccount-unauth-dropdown .mh-myaccount-additional-options li{list-style:none}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-additional-options{margin:0}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-additional-options ul{margin:10px 0}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-additional-options ul li{list-style:none}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-label-list{margin:0;padding:0}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-label-list li a{padding:10px 16px 10px 32px;margin:0;line-height:20px;font-weight:400;font-size:14px}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-label-list li a:hover{background-color:#f0f0f0;color:#0e0e0e}#unified-masthead .mh-myaccoutn-unauth-main-options{padding:16px}#unified-masthead .mh-myaccount-auth-dropdown{display:none;-ms-flex-direction:column;flex-direction:column;width:320px;right:0;padding:16px}#unified-masthead .mh-myaccount-ctas a{margin-bottom:16px}#unified-masthead .mh-myaccount-ctas a:last-child,.mh-btn{margin-bottom:0}#unified-masthead .mh-myaccount-signout{margin:10px;border:1px solid #0672cb;text-align:center;padding:10px 
0;color:#0672cb;font-family:Roboto;font-style:normal;font-weight:600}#unified-masthead .mh-myaccount-label-list{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column;list-style-type:none;padding:0}#unified-masthead .mh-myaccount-label-list a{position:relative;color:#636363;margin:10px 0;outline-width:0}#unified-masthead .mh-myaccount-first-column .current{color:#0e0e0e}#unified-masthead .mh-myaccount-first-column .current:before{content:"";display:inline-block;height:8px;width:8px;background-color:#007db8;border-radius:50%;position:absolute;left:16px;top:15px}#unified-masthead .mh-myaccount-additional-options{background-color:#f5f6f7;height:100%;padding:16px}#unified-masthead .mh-myaccount-additional-options ul{padding-left:0}#unified-masthead .mh-myaccount-additional-options li{line-height:24px;list-style:none;padding-bottom:8px}#unified-masthead .mh-myaccount-additional-options li:last-child{padding-bottom:0}#unified-masthead .mh-myaccount-additional-options a{color:#0672cb;outline-width:0}#unified-masthead .mh-myaccount.addition .mh-myaccount-additional-options{display:block}@media only screen and (min-width:1024px){#unified-masthead .mh-myaccount-dropdown-wrap{height:auto}#unified-masthead.user-is-tabbing .mh-myaccount-additional-options>ul li>a:focus,#unified-masthead.user-is-tabbing .mh-myaccount-label-list li>a:focus{box-shadow:0 0 0 1px #00468b}#unified-masthead .mh-myaccount-unauth-dropdown{width:314px;box-sizing:border-box}#unified-masthead .mh-myaccount-unauth-dropdown ul{margin:10px 0 16px}#unified-masthead .mh-myaccount-unauth-dropdown ul li{font-size:14px;line-height:20px}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown{width:auto;padding:0;-ms-flex-direction:column;flex-direction:column}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .mh-myaccount-first-column,#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .mh-myaccount-second-column{width:256.5px}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .dropdown-title{margin:16px 16px 0}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .dropdown-subtitle{margin:0 16px}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .mh-myaccount-label-list{margin-top:16px}#unified-masthead .mh-myaccount.auth .mh-myaccount-auth-dropdown .auth-signout{border-left:1px solid #c8c9c7}#unified-masthead .mh-myaccount-label-list{padding:0 16px}#unified-masthead .mh-myaccount-first-column{border-bottom:none;border-right:1px solid #c8c9c7}#unified-masthead .mh-myaccount .mh-flyout-link label{padding-right:0}}@media only screen and (max-width:1023px){#unified-masthead .mh-myaccount-label-list li:last-child{padding-bottom:0!important}#unified-masthead .mh-myaccount.auth .flyout{height:calc(100% - 58px)}#unified-masthead .mh-myaccount.auth .flyout .mh-myaccount-dropdown-wrap{height:100%}#unified-masthead .mh-myaccount-auth-dropdown{box-sizing:border-box;height:auto;-ms-flex-pack:justify;justify-content:space-between;padding:0}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-auth-wrapper{-ms-flex-direction:column-reverse;flex-direction:column-reverse;padding:16px}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-auth-wrapper .mh-myaccount-label-list li>a{font-size:16px;line-height:24px;padding:12px 16px}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-auth-wrapper .mh-myaccount-first-column .current:before{left:0;top:20px}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-auth-wrapper 
.mh-myaccount-second-column ul{margin-bottom:16px}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-auth-wrapper .dropdown-title{margin-bottom:0}#unified-masthead .mh-myaccount-auth-dropdown .mh-myaccount-auth-wrapper .mh-myaccount-label-list{margin-top:16px}#unified-masthead .mh-myaccount-auth-dropdown .auth-signout{border-left-width:0;width:100%!important;-ms-flex-align:start;align-items:flex-start}#unified-masthead .mh-myaccount-unauth-dropdown{width:320px;box-sizing:border-box;padding:12px 16px 0}#unified-masthead .mh-myaccount-unauth-dropdown .dropdown-title{margin:12px 0}#unified-masthead .mh-myaccount-unauth-dropdown .dropdown-subtitle{margin-top:12px}#unified-masthead .mh-myaccount-unauth-dropdown ul{margin:12px 0 16px}#unified-masthead .mh-myaccount-unauth-dropdown ul li{font-size:16px;line-height:24px}#unified-masthead .mh-myaccount-unauth-dropdown .mh-myaccount-ctas{padding-bottom:20px}.flyout-link .flyout>*,.mh-flyout-link~.flyout>*{height:calc(100vh - 58px)}}#unified-masthead .mh-search{display:-ms-flexbox;display:flex;-ms-flex-align:center;align-items:center;height:48px;border-radius:2px;max-width:612px}#unified-masthead .mh-search-input{border-radius:2px;width:100%;padding:5px 70px 5px 16px;border:1px solid #b6b6b6;outline:0;font-weight:400;font-size:14px;line-height:20px;color:#0e0e0e;box-sizing:border-box}#unified-masthead .mh-search-input:focus{box-shadow:0 0 0 2px #fff,0 0 0 3px #00468b}#unified-masthead input.mh-search-input::-webkit-input-placeholder{color:#6e6e6e;font-style:normal}#unified-masthead input.mh-search-input::-moz-placeholder{color:#6e6e6e;font-style:normal}#unified-masthead input.mh-search-input::-ms-input-placeholder{color:#6e6e6e;font-style:normal}#unified-masthead input.mh-search-input:-moz-placeholder-shown{color:#6e6e6e;font-style:normal}#unified-masthead input.mh-search-input:-ms-input-placeholder{color:#6e6e6e;font-style:normal}#unified-masthead input.mh-search-input::placeholder,#unified-masthead input.mh-search-input:placeholder-shown{color:#6e6e6e;font-style:normal}#unified-masthead .mh-search-btn{position:absolute;background-color:transparent;border:none;width:30px;height:30px;padding:5px;margin:0;cursor:pointer;outline-width:0}#unified-masthead .mh-search-btn svg{fill:#636363;width:20px;height:20px;vertical-align:baseline}#unified-masthead .mh-search-cancel-label{display:block;position:absolute;right:-30%;border:none;background-color:transparent;cursor:pointer;color:#636363;padding:16px 12px;font-size:16px;line-height:16px;font-weight:600}#unified-masthead .mh-search-cancel{display:inline-block;right:0;top:9px}#unified-masthead .mh-search-input+.mh-search-cancel,#unified-masthead .mh-search-input~.mh-search-cancel-label{display:none}#unified-masthead .mh-search-input::-ms-clear{display:none}@media only screen and (max-width:767px){#unified-masthead .mh-search{position:absolute;top:66px;width:100%;left:0}#unified-masthead .mh-search .mh-search-input{font-size:16px;line-height:24px;box-sizing:border-box;height:48px;transition:width .2s cubic-bezier(0,.52,0,1);margin:0 16px}#unified-masthead .mh-search .mh-search-submit{display:none}#unified-masthead .mh-search-transform .mh-search-cancel-label{display:block;right:2px}#unified-masthead .mh-search-transform .mh-search-input{width:69.5%}#unified-masthead .mh-search-transform .mh-search-cancel{right:29%}#unified-masthead .mh-search-btn{right:0;margin:0;padding-top:3px}#unified-masthead .mh-search-cancel{right:30px;padding:5px}}@media only screen and (min-width:768px){#unified-masthead 
.mh-search{margin:0 64px;left:0;-ms-flex:1;flex:1;position:relative}#unified-masthead .mh-search-submit{display:inline-block;right:0;top:9px}#unified-masthead .mh-search-cancel{right:30px}}@media only screen and (min-width:1024px){#unified-masthead .mh-search{margin:0 64px;left:0;-ms-flex:1;flex:1;position:relative}#unified-masthead .mh-search-cancel{right:33px}#unified-masthead .mh-search-submit{display:inline-block;right:0;top:9px}#unified-masthead .mh-search-input{height:32px}#unified-masthead .mh-search-cancel-label{display:none}#unified-masthead.user-is-tabbing .mh-search-btn:focus{box-shadow:0 0 0 1px #00468b}}@media only screen and (min-width:1440px){#unified-masthead .mh-search{margin:0 128px 0 64px}}@media only screen and (min-width:1920px){#unified-masthead .mh-search{margin:0 608px 0 64px}}.autocomplete-suggestions{text-align:left;cursor:default;border:1px solid #ccc;border-top:0;background:#fff;box-shadow:-1px 1px 3px rgba(0,0,0,.1);position:absolute;display:none;z-index:9999;max-height:254px;overflow:hidden;overflow-y:auto;box-sizing:border-box}.autocomplete-suggestion,.mh-btn{cursor:pointer;white-space:nowrap}.autocomplete-suggestion{position:relative;padding:6px 16px;line-height:23px;overflow:hidden;text-overflow:ellipsis;font-size:1.02em;color:#636363;height:32px;-ms-flex-align:center;align-items:center}.autocomplete-suggestion b{font-weight:700;color:#535657}.autocomplete-selected,.autocomplete-suggestion.selected{background:#f0f0f0}@media only screen and (max-width:767px){.autocomplete-suggestions{top:126px!important;width:100%!important;left:0!important;height:277px!important}}.cf-dell-text{color:#007db8}.mh-btn{display:inline-block;font-weight:400;text-align:center;vertical-align:middle;border-radius:2px;background-image:none;border:1px solid transparent;padding:6px 12px;font-size:14px;line-height:1.42857143;outline-width:0}.mh-btn-primary.active,a.mh-btn-primary.active{color:#fff;background-color:#00447c;border-color:#00537b}.mh-btn-primary,a.mh-btn-primary{color:#fff;background-color:#0672cb;position:relative}.mh-btn-primary:hover,a.mh-btn-primary:hover{background-color:#0063b8}.mh-btn-primary:active,a.mh-btn-primary:active{background-color:#00468b}.mh-btn-primary:active:after,a.mh-btn-primary:active:after{display:none!important}.user-is-tabbing .mh-btn:focus:after{content:"";top:0;left:0;right:0;bottom:0;display:block;position:absolute;box-shadow:0 0 0 2px #00468b,0 0 0 4px #fff,0 0 0 6px #00468b;border-radius:2px}.mh-btn-secondary,a.mh-btn-secondary{border:1px solid #0672cb;background-color:transparent;color:#0672cb;position:relative}.mh-btn-secondary:hover,a.mh-btn-secondary:hover{background-color:#d9f5fd}.mh-btn-secondary:active,a.mh-btn-secondary:active{background-color:#94dcf7}.mh-btn-secondary:active:after,[component=footer] .mh-hide,[component=unified-masthead] .mh-hide,a.mh-btn-secondary:active:after{display:none!important}[component=footer] .mh-show,[component=unified-masthead] .mh-show{display:block!important}[component=footer] .no-after:after,[component=unified-masthead] .no-after:after{display:none!important}[component=footer] .sr-only,[component=unified-masthead] .sr-only{position:absolute;left:-10000px;top:auto;width:1px;height:1px;overflow:hidden;opacity:0}[component=footer] .mh-borderNone,[component=unified-masthead] .mh-borderNone{border-width:0!important}[component=footer] .mh-overFlow-yHidden,[component=unified-masthead] .mh-overFlow-yHidden{overflow-y:hidden!important}[component=footer] .mh-transform-zero,[component=unified-masthead] 
rotateZ(180deg)}.anavmfe__accordion__item__trigger:hover{cursor:pointer}.anavmfe__accordion__item__trigger:hover .anavmfe__accordion__item__name{color:#0076ce}.anavmfe__accordion__item__trigger.collapsed:after{-webkit-transform:rotate(360deg);-moz-transform:rotate(360deg);transform:rotate(360deg)}.anavmfe__accordion__item__trigger .anavmfe__accordion__item__name{display:block;color:#00447c;font-weight:700;font-size:14px;padding:5px 0 28px;margin-top:0}.anavmfe__accordion__item__trigger:hover:after{border-top:6px solid #0076ce}.anavmfe__accordion__item legend{float:left;margin-top:-1em;line-height:1em}.anavmfe__accordion__item .anavmfe__segment__selector_item{padding:0 0 5px;color:#006bbd;margin-top:5px;border-radius:10px;cursor:pointer}.anavmfe__accordion__item .anavmfe__segment__selector_item input{width:1.05rem;height:1.05rem;margin-right:2px;vertical-align:middle}.anavmfe__accordion__item .anavmfe__segment__selector_item label{color:#006bbd}.anavmfe__accordion__item .anavmfe__segment__selector_item label.selected-label,.anavmfe__accordion__item .anavmfe__segment__selector_item label.selected-label-without-radio-button{color:#333;font-weight:700}.anavmfe__accordion__item .anavmfe__segment__selector_item a{color:#006bbd;cursor:pointer;text-decoration:none}.anavmfe__accordion__item .anavmfe__segment__selector_item:last-child{padding-bottom:0}</style>\n<div class="leftanav__grey__container" data-appliedrefinements="" data-state-persistence="True">\n <div class="leftanav__grey__mobileview__container">\n <div class="leftanav__mobile__filter__compare__btn__block">\n <a href="javascript:;" class="leftanav__rounded__aqua__button">Filtrar</a>\n </div>\n </div>\n <div class="leftanav__option__container">\n <div class="leftanav__option__box">\n <div class="anavmfe__facets__wrapper_conatiner">\n <div class="leftanav__overpanel__close__icon"></div>\n <div class="anavmfe__facets__wrapper">\n <div class="leftanav__title__block">\n <div class="leftanav__title">\n Filtrar\n\n </div>\n\n </div>\n\n </div>\n</div>\n\n\n\n \n \n \n<fieldset class="anavmfe__accordion__item">\n \n\n<legend class="anavmfe__accordion__item__trigger ">\n <span class="anavmfe__accordion__item__name">Processador</span>\n</legend>\n\n \n\n<div class="anavmfe__accordion__body collapsed ">\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="200" tinytitle="" value="200" aria-label="Intel Pentium Gold" id="refinement-200" data-metrics="{"btnname":"anav","anav_caption_option":"Intel Pentium Gold","anav_caption":"Processador"}">\n <label class="text" for=\'refinement-200\'>\nIntel Pentium Gold </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="201" tinytitle="" value="201" aria-label="Intel Core i7" id="refinement-201" data-metrics="{"btnname":"anav","anav_caption_option":"Intel Core i7","anav_caption":"Processador"}">\n <label class="text" for=\'refinement-201\'>\nIntel Core i7 </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="202" tinytitle="" value="202" aria-label="Intel Core i5" id="refinement-202" data-metrics="{"btnname":"anav","anav_caption_option":"Intel Core i5","anav_caption":"Processador"}">\n <label class="text" for=\'refinement-202\'>\nIntel Core i5 </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" 
\n \n name="203" tinytitle="" value="203" aria-label="Intel Core i3" id="refinement-203" data-metrics="{"btnname":"anav","anav_caption_option":"Intel Core i3","anav_caption":"Processador"}">\n <label class="text" for=\'refinement-203\'>\nIntel Core i3 </label>\n </div>\n\n\n\n\n\n</div>\n\n</fieldset>\n\n\n\n \n<fieldset class="anavmfe__accordion__item">\n \n\n<legend class="anavmfe__accordion__item__trigger ">\n <span class="anavmfe__accordion__item__name">Mem\xc3\xb3ria</span>\n</legend>\n\n \n\n<div class="anavmfe__accordion__body collapsed ">\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="300" tinytitle="" value="300" aria-label="4 GB" id="refinement-300" data-metrics="{"btnname":"anav","anav_caption_option":"4 GB","anav_caption":"Mem\xc3\xb3ria"}">\n <label class="text" for=\'refinement-300\'>\n4 GB </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="301" tinytitle="" value="301" aria-label="8 GB" id="refinement-301" data-metrics="{"btnname":"anav","anav_caption_option":"8 GB","anav_caption":"Mem\xc3\xb3ria"}">\n <label class="text" for=\'refinement-301\'>\n8 GB </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="302" tinytitle="" value="302" aria-label="16 GB" id="refinement-302" data-metrics="{"btnname":"anav","anav_caption_option":"16 GB","anav_caption":"Mem\xc3\xb3ria"}">\n <label class="text" for=\'refinement-302\'>\n16 GB </label>\n </div>\n\n\n\n\n\n</div>\n\n</fieldset>\n\n\n\n \n<fieldset class="anavmfe__accordion__item">\n \n\n<legend class="anavmfe__accordion__item__trigger ">\n <span class="anavmfe__accordion__item__name">Armazenamento</span>\n</legend>\n\n \n\n<div class="anavmfe__accordion__body collapsed ">\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="400" tinytitle="" value="400" aria-label="128 GB" id="refinement-400" data-metrics="{"btnname":"anav","anav_caption_option":"128 GB","anav_caption":"Armazenamento"}">\n <label class="text" for=\'refinement-400\'>\n128 GB </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="401" tinytitle="" value="401" aria-label="256 GB" id="refinement-401" data-metrics="{"btnname":"anav","anav_caption_option":"256 GB","anav_caption":"Armazenamento"}">\n <label class="text" for=\'refinement-401\'>\n256 GB </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="402" tinytitle="" value="402" aria-label="512 GB" id="refinement-402" data-metrics="{"btnname":"anav","anav_caption_option":"512 GB","anav_caption":"Armazenamento"}">\n <label class="text" for=\'refinement-402\'>\n512 GB </label>\n </div>\n\n\n\n\n\n</div>\n\n</fieldset>\n\n\n\n \n<fieldset class="anavmfe__accordion__item">\n \n\n<legend class="anavmfe__accordion__item__trigger ">\n <span class="anavmfe__accordion__item__name">Sistema Operacional</span>\n</legend>\n\n \n\n<div class="anavmfe__accordion__body collapsed ">\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="500" tinytitle="" value="500" aria-label="Windows 10 Pro" id="refinement-500" data-metrics="{"btnname":"anav","anav_caption_option":"Windows 10 
Pro","anav_caption":"Sistema Operacional"}">\n <label class="text" for=\'refinement-500\'>\nWindows 10 Pro </label>\n </div>\n\n\n\n \n\n <div class="anavmfe__accordion__row facets anavmfe__accordion__row_item">\n <input type="checkbox" \n \n name="501" tinytitle="" value="501" aria-label="Windows 10 Home" id="refinement-501" data-metrics="{"btnname":"anav","anav_caption_option":"Windows 10 Home","anav_caption":"Sistema Operacional"}">\n <label class="text" for=\'refinement-501\'>\nWindows 10 Home </label>\n </div>\n\n\n\n\n\n</div>\n\n</fieldset>\n\n\n\n </div>\n </div>\n</div>\n<script type="text/javascript" shared-script-required="true">if (typeof dellScriptLoader !== \'undefined\') dellScriptLoader.load([{"url":"//afcs.dellcdn.com/csb/anavmfeux/bundles/1.0.0.4432/js/left-anav-mfe.min.js","order":"5","crossorigin":false}])</script>\r\n </div>\r\n</div>\r\n<script type="text/javascript">if (typeof Dell !== \'undefined\' && typeof Dell.perfmetrics !== \'undefined\') Dell.perfmetrics.end(\'specialdeals-anav\');</script>\r\n </section>\r\n <section class="sd-category-product-container">\r\n <div class="sd-category-results-top">\r\n <div class="sd-category-results-count">16 Resultados</div>\r\n <div class="sd-desktop-sort-by">\r\n \r\n<div class="sd-sorting-label"><span>Classificar por:</span></div>\r\n<div class="sd-sorting">\r\n <select class="sd-sort-dropdown" tabindex="0">\r\n <option selected value="relevance" data-text="Maior Relevância" class="sd-sort-dropdown-item" onclick="s_objectID = \'Maior Relevância\';">Maior Relevância</option>\r\n <option value="price-descending" data-text="Maior Preço" class="sd-sort-dropdown-item" onclick="s_objectID = \'Maior Preço\';">Maior Preço</option>\r\n <option value="price-ascending" data-text="Menor Preço" class="sd-sort-dropdown-item" onclick="s_objectID = \'Menor Preço\';">Menor Preço</option>\r\n </select>\r\n</div>\r\n </div>\r\n </div>\r\n\r\n <div class="sd-category-grid">\r\n <article class="sd-ps-stack">\r\n <div class="sd-ps-top">\r\n <section class="sd-ps-banner">\r\n \r\n </section>\r\n <section class="sd-ps-image " aria-hidden="true">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7sfi" tabindex="-1">\r\n <img src="data:image/svg+xml,%3Csvg xmlns=\'http://www.w3.org/2000/svg\' viewBox=\'0 0 165 119\'/%3E" data-src="https://i.dell.com/sites/csimages/App-Merchandizing_Images/all/oc_bullseye_v_15_new_size_bau.png" class="sd-lazy sd-lazy-img" alt="Notebook Vostro 15 3000" width="165" height="119">\r\n </a>\r\n</section>\r\n<section class="sd-ps-title">\r\n <h3 class="sd-ps-title-content">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7sfi">Notebook Vostro 15 3000</a>\r\n </h3>\r\n</section>\r\n \r\n<section class="sd-ps-ratings-and-reviews">\r\n <div class="inlinerating sd-lazy" data-bv-show="" data-bv-product-id="vostro-15-3500-laptop" data-bv-seo="false"></div>\r\n</section>\r\n\r\n \r\n \r\n\r\n<section class="sd-ps-price ">\r\n \r\n <div>\r\n <div class="sd-ps-orig">\r\nDe \r\n <span class="strike-through">R$2.999,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-dell-price">\r\n <span class="sd-sr-only">Preço</span>\r\n <span>R$2.599,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-sav">\r\n <span>Desconto</span>\r\n <span>R$400,00</span>\r\n <span>(13%)</span>\r\n </div>\r\n \r\n <div class="sd-ps-del">\r\nFrete <span>\r\n Grátis\r\n </span>\r\n </div>\r\n </div>\r\n</section>\r\n <section class="sd-ps-status-label">\r\n \r\n<div class="">\r\n \r\n</div>\r\n </section>\r\n <section 
class="sd-ps-claim-progress-bar sd-ps-claim-progress-bar-timer ">\r\n \r\n<div class="sd-ps-claim-progress-bar-wrapper pb-7sfi">\r\n <div class="sd-ps-claim-progress-bar-indicator" >\r\n <div class="progressbar-text" style="width: 0%;"></div>\r\n </div>\r\n <div class="sd-ps-claimed">\r\n <div class="sd-ps-claim-percent" style="display:none">\r\n <span class="progressbar-value"></span>\r\n <span class="sd-ps-claimed-pct">\r\n Vendido\r\n\r\n </span>\r\n <span tabindex="0" style="display:none" class="sd-tooltip-trigger ps-tool-tip" sd-tooltip="" sd-tooltip-hover sd-tooltip-message="Claimed products include items in customers\' carts, so more may become available if they aren\'t purchased."\r\n sd-close-label="" sd-tooltip-title="">\r\n Vendido\r\n </span>\r\n\r\n </div>\r\n <div class="sd-ps-claim-countdown-7sfi" >\r\n \r\n\t<span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_clock" style="display:none;"></span>\r\n\t<span class="deal-countdown-timer" data-deal-status="0" data-deal-time="08/24/2021 11:00:00" data-hour-countdown="120" data-deal-id="7sfi"></span>\r\n\r\n\r\n </div>\r\n </div>\r\n</div>\r\n\r\n\r\n </section>\r\n <section class="sd-ps-spec-desc ">\r\n \r\n<div class="sd-ps-feature-specs">\r\n <div class="sd-ps-spec-list">\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_processor"></span>\r\n <div>Intel\xc2\xae Pentium\xc2\xae Gold 7505</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_disc-system"></span>\r\n <div>Windows 10 Pro (Funcionalidade avan\xc3\xa7ada para empresas)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_video-card"></span>\r\n <div>Placa de v\xc3\xaddeo integrada Intel\xc2\xae UHD Graphics com mem\xc3\xb3ria gr\xc3\xa1fica compatilhada</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_device-screen-size"></span>\r\n <div>Tela HD de 15.6"</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_hard-drive"></span>\r\n <div>SSD de 128GB PCIe NVMe M.2</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_memory"></span>\r\n <div>Mem\xc3\xb3ria de 4GB</div>\r\n </div>\r\n </div>\r\n</div>\r\n \r\n\r\n<div class="sd-ps-short-desc">\r\n Com borda fina em dois lados, teclado num\xc3\xa9rico e carregamento mais r\xc3\xa1pido com ExpressCharge\xe2\x84\xa2.\r\n</div>\r\n\r\n\r\n\r\n\r\n </section>\r\n </div>\r\n <div class="sd-ps-bottom">\r\n \r\n\r\n\r\n \r\n\r\n \r\n \r\n\r\n<div>\r\n <span>\r\n <a target="_blank" href="https://www.dell.com/pt-br/work/lp/formas-de-pagamento">Formas de pagamento</a>\r\n </span><br />\r\n At\xc3\xa9 10x sem juros de <span>R$ 259,90</span><br/> no cart\xc3\xa3o de cr\xc3\xa9dito. 
Valor total a prazo <span>R$ 2.599,00</span><br/>\r\n</div>\r\n \r\n \r\n<section class="sd-ps-id"> v3500w6002w</section>\r\n\r\n\r\n \r\n\r\n<section class="sd-ps-button sd-ps-button-details">\r\n <a class="sd-btn sd-secondary-btn-blue" href="https://deals.dell.com/pt-br/work/productdetail/7sfi" aria-label="Saiba mais e compre: Notebook Vostro 15 3000">Saiba mais e compre</a>\r\n</section>\r\n </div>\r\n </article>\r\n <article class="sd-ps-stack">\r\n <div class="sd-ps-top">\r\n <section class="sd-ps-banner">\r\n <div class="sd-ps-banner-wrapper" style="display:block">\r\n <span class="sd-ps-promo-text">Oferta Rel\xc3\xa2mpago</span>\r\n <span class="sd-ps-banner-corner" style=" border-left-color: green"></span>\r\n</div>\r\n </section>\r\n <section class="sd-ps-image " aria-hidden="true">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7sfk" tabindex="-1">\r\n <img src="data:image/svg+xml,%3Csvg xmlns=\'http://www.w3.org/2000/svg\' viewBox=\'0 0 165 119\'/%3E" data-src="https://i.dell.com/sites/csimages/App-Merchandizing_Images/all/oc_bullseye_v_15_new_size_bau.png" class="sd-lazy sd-lazy-img" alt="Notebook Vostro 15 3000" width="165" height="119">\r\n </a>\r\n</section>\r\n<section class="sd-ps-title">\r\n <h3 class="sd-ps-title-content">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7sfk">Notebook Vostro 15 3000</a>\r\n </h3>\r\n</section>\r\n \r\n<section class="sd-ps-ratings-and-reviews">\r\n <div class="inlinerating sd-lazy" data-bv-show="" data-bv-product-id="vostro-15-3501-laptop" data-bv-seo="false"></div>\r\n</section>\r\n\r\n \r\n \r\n\r\n<section class="sd-ps-price ">\r\n \r\n <div>\r\n <div class="sd-ps-orig">\r\nDe \r\n <span class="strike-through">R$3.579,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-dell-price">\r\n <span class="sd-sr-only">Preço</span>\r\n <span>R$2.999,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-sav">\r\n <span>Desconto</span>\r\n <span>R$580,00</span>\r\n <span>(16%)</span>\r\n </div>\r\n \r\n <div class="sd-ps-del">\r\nFrete <span>\r\n Grátis\r\n </span>\r\n </div>\r\n </div>\r\n</section>\r\n <section class="sd-ps-status-label">\r\n \r\n<div class="">\r\n \r\n</div>\r\n </section>\r\n <section class="sd-ps-claim-progress-bar sd-ps-claim-progress-bar-timer ">\r\n \r\n<div class="sd-ps-claim-progress-bar-wrapper pb-7sfk">\r\n <div class="sd-ps-claim-progress-bar-indicator" >\r\n <div class="progressbar-text" style="width: 0%;"></div>\r\n </div>\r\n <div class="sd-ps-claimed">\r\n <div class="sd-ps-claim-percent" style="display:none">\r\n <span class="progressbar-value"></span>\r\n <span class="sd-ps-claimed-pct">\r\n Vendido\r\n\r\n </span>\r\n <span tabindex="0" style="display:none" class="sd-tooltip-trigger ps-tool-tip" sd-tooltip="" sd-tooltip-hover sd-tooltip-message="Claimed products include items in customers\' carts, so more may become available if they aren\'t purchased."\r\n sd-close-label="" sd-tooltip-title="">\r\n Vendido\r\n </span>\r\n\r\n </div>\r\n <div class="sd-ps-claim-countdown-7sfk" >\r\n \r\n\t<span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_clock" style="display:none;"></span>\r\n\t<span class="deal-countdown-timer" data-deal-status="0" data-deal-time="09/16/2021 11:00:00" data-hour-countdown="120" data-deal-id="7sfk"></span>\r\n\r\n\r\n </div>\r\n </div>\r\n</div>\r\n\r\n\r\n </section>\r\n <section class="sd-ps-spec-desc ">\r\n \r\n<div class="sd-ps-feature-specs">\r\n <div class="sd-ps-spec-list">\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon 
sd-dds-font-icon dds_processor"></span>\r\n <div>Intel\xc2\xae Core\xe2\x84\xa2 i3-1005G1 (10\xc2\xaa Gera\xc3\xa7\xc3\xa3o)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_disc-system"></span>\r\n <div>Windows 10 Home Single Language (A Dell recomenda o Windows 10 Pro para empresas)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_video-card"></span>\r\n <div>Placa de v\xc3\xaddeo integrada Intel\xc2\xae UHD Graphics com mem\xc3\xb3ria gr\xc3\xa1fica compartilhada</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_device-screen-size"></span>\r\n <div>Tela HD de 15.6"</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_hard-drive"></span>\r\n <div>SSD de 256GB PCIe NVMe M.2</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_memory"></span>\r\n <div>Mem\xc3\xb3ria de 4GB</div>\r\n </div>\r\n </div>\r\n</div>\r\n \r\n\r\n<div class="sd-ps-short-desc">\r\n Com borda fina em dois lados, teclado num\xc3\xa9rico e carregamento mais r\xc3\xa1pido com ExpressCharge\xe2\x84\xa2.<br><br>Aproveite pre\xc3\xa7o especial de <b>ProSupport!</b>\r\n</div>\r\n\r\n\r\n\r\n\r\n </section>\r\n </div>\r\n <div class="sd-ps-bottom">\r\n \r\n\r\n\r\n \r\n\r\n \r\n \r\n\r\n<div>\r\n <span>\r\n <a target="_blank" href="https://www.dell.com/pt-br/work/lp/formas-de-pagamento">Formas de pagamento</a>\r\n </span><br />\r\n At\xc3\xa9 10x sem juros de <span>R$ 299,90</span><br/> no cart\xc3\xa3o de cr\xc3\xa9dito. Valor total a prazo <span>R$ 2.999,00</span><br/>\r\n</div>\r\n \r\n \r\n<section class="sd-ps-id"> v3501w6592w</section>\r\n\r\n\r\n \r\n\r\n<section class="sd-ps-button sd-ps-button-details">\r\n <a class="sd-btn sd-secondary-btn-blue" href="https://deals.dell.com/pt-br/work/productdetail/7sfk" aria-label="Saiba mais e compre: Notebook Vostro 15 3000">Saiba mais e compre</a>\r\n</section>\r\n </div>\r\n </article>\r\n <article class="sd-ps-stack">\r\n <div class="sd-ps-top">\r\n <section class="sd-ps-banner">\r\n \r\n </section>\r\n <section class="sd-ps-image " aria-hidden="true">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/9zbr" tabindex="-1">\r\n <img src="data:image/svg+xml,%3Csvg xmlns=\'http://www.w3.org/2000/svg\' viewBox=\'0 0 165 119\'/%3E" data-src="https://i.dell.com/sites/csimages/App-Merchandizing_Images/all/oc_bullseye_v_15_new_size_bau.png" class="sd-lazy sd-lazy-img" alt="Notebook Vostro 15 3000" width="165" height="119">\r\n </a>\r\n</section>\r\n<section class="sd-ps-title">\r\n <h3 class="sd-ps-title-content">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/9zbr">Notebook Vostro 15 3000</a>\r\n </h3>\r\n</section>\r\n \r\n<section class="sd-ps-ratings-and-reviews">\r\n <div class="inlinerating sd-lazy" data-bv-show="" data-bv-product-id="vostro-15-3501-laptop" data-bv-seo="false"></div>\r\n</section>\r\n\r\n \r\n \r\n\r\n<section class="sd-ps-price ">\r\n \r\n <div>\r\n <div class="sd-ps-orig">\r\nDe \r\n <span class="strike-through">R$4.229,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-dell-price">\r\n <span class="sd-sr-only">Preço</span>\r\n <span>R$3.899,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-sav">\r\n <span>Desconto</span>\r\n <span>R$330,00</span>\r\n <span>(7%)</span>\r\n </div>\r\n \r\n <div 
class="sd-ps-del">\r\nFrete <span>\r\n Grátis\r\n </span>\r\n </div>\r\n </div>\r\n</section>\r\n <section class="sd-ps-status-label">\r\n \r\n<div class="">\r\n \r\n</div>\r\n </section>\r\n <section class="sd-ps-claim-progress-bar sd-ps-claim-progress-bar-timer ">\r\n \r\n<div class="sd-ps-claim-progress-bar-wrapper pb-9zbr">\r\n <div class="sd-ps-claim-progress-bar-indicator" >\r\n <div class="progressbar-text" style="width: 0%;"></div>\r\n </div>\r\n <div class="sd-ps-claimed">\r\n <div class="sd-ps-claim-percent" style="display:none">\r\n <span class="progressbar-value"></span>\r\n <span class="sd-ps-claimed-pct">\r\n Vendido\r\n\r\n </span>\r\n <span tabindex="0" style="display:none" class="sd-tooltip-trigger ps-tool-tip" sd-tooltip="" sd-tooltip-hover sd-tooltip-message="Claimed products include items in customers\' carts, so more may become available if they aren\'t purchased."\r\n sd-close-label="" sd-tooltip-title="">\r\n Vendido\r\n </span>\r\n\r\n </div>\r\n <div class="sd-ps-claim-countdown-9zbr" >\r\n \r\n\t<span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_clock" style="display:none;"></span>\r\n\t<span class="deal-countdown-timer" data-deal-status="0" data-deal-time="09/13/2021 16:30:00" data-hour-countdown="120" data-deal-id="9zbr"></span>\r\n\r\n\r\n </div>\r\n </div>\r\n</div>\r\n\r\n\r\n </section>\r\n <section class="sd-ps-spec-desc ">\r\n \r\n<div class="sd-ps-feature-specs">\r\n <div class="sd-ps-spec-list">\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_processor"></span>\r\n <div>Intel\xc2\xae Core\xe2\x84\xa2 i5-1035G1 (10\xc2\xaa Gera\xc3\xa7\xc3\xa3o)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_disc-system"></span>\r\n <div>Windows 10 Home Single Language (A Dell recomenda o Windows 10 Pro para empresas)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_video-card"></span>\r\n <div>Placa de v\xc3\xaddeo integrada Intel\xc2\xae UHD Graphics com mem\xc3\xb3ria gr\xc3\xa1fica compartilhada</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_device-screen-size"></span>\r\n <div>Tela HD de 15.6"</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_hard-drive"></span>\r\n <div>SSD de 256GB PCIe NVMe M.2</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_memory"></span>\r\n <div>Mem\xc3\xb3ria de 8GB</div>\r\n </div>\r\n </div>\r\n</div>\r\n \r\n\r\n<div class="sd-ps-short-desc">\r\n Com borda fina em dois lados, teclado num\xc3\xa9rico e carregamento mais r\xc3\xa1pido com ExpressCharge\xe2\x84\xa2.<br><br>Aproveite pre\xc3\xa7o especial de <b>ProSupport!</b>\r\n</div>\r\n\r\n\r\n\r\n\r\n </section>\r\n </div>\r\n <div class="sd-ps-bottom">\r\n \r\n\r\n\r\n \r\n\r\n \r\n \r\n\r\n<div>\r\n <span>\r\n <a target="_blank" href="https://www.dell.com/pt-br/work/lp/formas-de-pagamento">Formas de pagamento</a>\r\n </span><br />\r\n At\xc3\xa9 10x sem juros de <span>R$ 389,90</span><br/> no cart\xc3\xa3o de cr\xc3\xa9dito. 
Valor total a prazo <span>R$ 3.899,00</span><br/>\r\n</div>\r\n \r\n \r\n<section class="sd-ps-id"> v3501w7500w</section>\r\n\r\n\r\n \r\n\r\n<section class="sd-ps-button sd-ps-button-details">\r\n <a class="sd-btn sd-secondary-btn-blue" href="https://deals.dell.com/pt-br/work/productdetail/9zbr" aria-label="Saiba mais e compre: Notebook Vostro 15 3000">Saiba mais e compre</a>\r\n</section>\r\n </div>\r\n </article>\r\n <article class="sd-ps-stack">\r\n <div class="sd-ps-top">\r\n <section class="sd-ps-banner">\r\n <div class="sd-ps-banner-wrapper" style="display:block">\r\n <span class="sd-ps-promo-text">Oferta Rel\xc3\xa2mpago</span>\r\n <span class="sd-ps-banner-corner" style=" border-left-color: green"></span>\r\n</div>\r\n </section>\r\n <section class="sd-ps-image " aria-hidden="true">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7oyn" tabindex="-1">\r\n <img src="data:image/svg+xml,%3Csvg xmlns=\'http://www.w3.org/2000/svg\' viewBox=\'0 0 165 119\'/%3E" data-src="https://i.dell.com/sites/csimages/App-Merchandizing_Images/all/oc_mockingird_v_14_new_size_bau.png" class="sd-lazy sd-lazy-img" alt="Notebook Vostro 14 5000" width="165" height="119">\r\n </a>\r\n</section>\r\n<section class="sd-ps-title">\r\n <h3 class="sd-ps-title-content">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7oyn">Notebook Vostro 14 5000</a>\r\n </h3>\r\n</section>\r\n \r\n<section class="sd-ps-ratings-and-reviews">\r\n <div class="inlinerating sd-lazy" data-bv-show="" data-bv-product-id="vostro-14-5402-laptop" data-bv-seo="false"></div>\r\n</section>\r\n\r\n \r\n \r\n\r\n<section class="sd-ps-price ">\r\n \r\n <div>\r\n <div class="sd-ps-orig">\r\nDe \r\n <span class="strike-through">R$7.399,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-dell-price">\r\n <span class="sd-sr-only">Preço</span>\r\n <span>R$6.699,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-sav">\r\n <span>Desconto</span>\r\n <span>R$700,00</span>\r\n <span>(9%)</span>\r\n </div>\r\n \r\n <div class="sd-ps-del">\r\nFrete <span>\r\n Grátis\r\n </span>\r\n </div>\r\n </div>\r\n</section>\r\n <section class="sd-ps-status-label">\r\n \r\n<div class="">\r\n \r\n</div>\r\n </section>\r\n <section class="sd-ps-claim-progress-bar sd-ps-claim-progress-bar-timer ">\r\n \r\n<div class="sd-ps-claim-progress-bar-wrapper pb-7oyn">\r\n <div class="sd-ps-claim-progress-bar-indicator" >\r\n <div class="progressbar-text" style="width: 0%;"></div>\r\n </div>\r\n <div class="sd-ps-claimed">\r\n <div class="sd-ps-claim-percent" style="display:none">\r\n <span class="progressbar-value"></span>\r\n <span class="sd-ps-claimed-pct">\r\n Vendido\r\n\r\n </span>\r\n <span tabindex="0" style="display:none" class="sd-tooltip-trigger ps-tool-tip" sd-tooltip="" sd-tooltip-hover sd-tooltip-message="Claimed products include items in customers\' carts, so more may become available if they aren\'t purchased."\r\n sd-close-label="" sd-tooltip-title="">\r\n Vendido\r\n </span>\r\n\r\n </div>\r\n <div class="sd-ps-claim-countdown-7oyn" >\r\n \r\n\t<span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_clock" style="display:none;"></span>\r\n\t<span class="deal-countdown-timer" data-deal-status="0" data-deal-time="09/16/2021 11:00:00" data-hour-countdown="120" data-deal-id="7oyn"></span>\r\n\r\n\r\n </div>\r\n </div>\r\n</div>\r\n\r\n\r\n </section>\r\n <section class="sd-ps-spec-desc ">\r\n \r\n<div class="sd-ps-feature-specs">\r\n <div class="sd-ps-spec-list">\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon 
sd-dds-font-icon dds_processor"></span>\r\n <div>Intel\xc2\xae Core\xe2\x84\xa2 i7-1165G7 (11\xc2\xaa Gera\xc3\xa7\xc3\xa3o)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_disc-system"></span>\r\n <div>Windows 10 Pro (Funcionalidade avan\xc3\xa7ada para empresas)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_video-card"></span>\r\n <div>Placa de v\xc3\xaddeo dedicada NVIDIA\xc2\xae GeForce\xc2\xae MX330 com 2GB de GDDR5</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_device-screen-size"></span>\r\n <div>Tela Full HD de 14"</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_hard-drive"></span>\r\n <div>SSD de 512GB PCIe NVMe M.2</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_memory"></span>\r\n <div>Mem\xc3\xb3ria de 16GB</div>\r\n </div>\r\n </div>\r\n</div>\r\n \r\n\r\n<div class="sd-ps-short-desc">\r\n Design dur\xc3\xa1vel com certifica\xc3\xa7\xc3\xa3o militar, recursos avan\xc3\xa7ados de seguran\xc3\xa7a, como uma tampa protetora para a webcam.<br><br>Aproveite pre\xc3\xa7o especial de <b>ProSupport!</b>\r\n</div>\r\n\r\n\r\n\r\n\r\n </section>\r\n </div>\r\n <div class="sd-ps-bottom">\r\n \r\n\r\n\r\n \r\n\r\n \r\n \r\n\r\n<div>\r\n <span>\r\n <a target="_blank" href="https://www.dell.com/pt-br/work/lp/formas-de-pagamento">Formas de pagamento</a>\r\n </span><br />\r\n At\xc3\xa9 10x sem juros de <span>R$ 669,90</span><br/> no cart\xc3\xa3o de cr\xc3\xa9dito. Valor total a prazo <span>R$ 6.699,00</span><br/>\r\n</div>\r\n \r\n \r\n<section class="sd-ps-id"> v5402w3005w</section>\r\n\r\n\r\n \r\n\r\n<section class="sd-ps-button sd-ps-button-details">\r\n <a class="sd-btn sd-secondary-btn-blue" href="https://deals.dell.com/pt-br/work/productdetail/7oyn" aria-label="Saiba mais e compre: Notebook Vostro 14 5000">Saiba mais e compre</a>\r\n</section>\r\n </div>\r\n </article>\r\n <article class="sd-ps-stack">\r\n <div class="sd-ps-top">\r\n <section class="sd-ps-banner">\r\n <div class="sd-ps-banner-wrapper" style="display:block">\r\n <span class="sd-ps-promo-text">Oferta Rel\xc3\xa2mpago</span>\r\n <span class="sd-ps-banner-corner" style=" border-left-color: green"></span>\r\n</div>\r\n </section>\r\n <section class="sd-ps-image " aria-hidden="true">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7sfc" tabindex="-1">\r\n <img src="data:image/svg+xml,%3Csvg xmlns=\'http://www.w3.org/2000/svg\' viewBox=\'0 0 165 119\'/%3E" data-src="https://i.dell.com/sites/csimages/App-Merchandizing_Images/all/oc_mockingird_n_15_new_size_bau.png" class="sd-lazy sd-lazy-img" alt="Notebook Inspiron 15" width="165" height="119">\r\n </a>\r\n</section>\r\n<section class="sd-ps-title">\r\n <h3 class="sd-ps-title-content">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7sfc">Notebook Inspiron 15</a>\r\n </h3>\r\n</section>\r\n \r\n<section class="sd-ps-ratings-and-reviews">\r\n <div class="inlinerating sd-lazy" data-bv-show="" data-bv-product-id="inspiron-15-5502-laptop" data-bv-seo="false"></div>\r\n</section>\r\n\r\n \r\n \r\n\r\n<section class="sd-ps-price ">\r\n \r\n <div>\r\n <div class="sd-ps-orig">\r\nDe \r\n <span class="strike-through">R$7.299,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-dell-price">\r\n <span 
class="sd-sr-only">Preço</span>\r\n <span>R$6.599,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-sav">\r\n <span>Desconto</span>\r\n <span>R$700,00</span>\r\n <span>(9%)</span>\r\n </div>\r\n \r\n <div class="sd-ps-del">\r\nFrete <span>\r\n Grátis\r\n </span>\r\n </div>\r\n </div>\r\n</section>\r\n <section class="sd-ps-status-label">\r\n \r\n<div class="">\r\n \r\n</div>\r\n </section>\r\n <section class="sd-ps-claim-progress-bar sd-ps-claim-progress-bar-timer ">\r\n \r\n<div class="sd-ps-claim-progress-bar-wrapper pb-7sfc">\r\n <div class="sd-ps-claim-progress-bar-indicator" >\r\n <div class="progressbar-text" style="width: 0%;"></div>\r\n </div>\r\n <div class="sd-ps-claimed">\r\n <div class="sd-ps-claim-percent" style="display:none">\r\n <span class="progressbar-value"></span>\r\n <span class="sd-ps-claimed-pct">\r\n Vendido\r\n\r\n </span>\r\n <span tabindex="0" style="display:none" class="sd-tooltip-trigger ps-tool-tip" sd-tooltip="" sd-tooltip-hover sd-tooltip-message="Claimed products include items in customers\' carts, so more may become available if they aren\'t purchased."\r\n sd-close-label="" sd-tooltip-title="">\r\n Vendido\r\n </span>\r\n\r\n </div>\r\n <div class="sd-ps-claim-countdown-7sfc" >\r\n \r\n\t<span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_clock" style="display:none;"></span>\r\n\t<span class="deal-countdown-timer" data-deal-status="0" data-deal-time="09/16/2021 11:00:00" data-hour-countdown="120" data-deal-id="7sfc"></span>\r\n\r\n\r\n </div>\r\n </div>\r\n</div>\r\n\r\n\r\n </section>\r\n <section class="sd-ps-spec-desc ">\r\n \r\n<div class="sd-ps-feature-specs">\r\n <div class="sd-ps-spec-list">\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_processor"></span>\r\n <div>Intel\xc2\xae Core\xe2\x84\xa2 i7-1165G7 (11\xc2\xaa Gera\xc3\xa7\xc3\xa3o)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_disc-system"></span>\r\n <div>Windows 10 Home Single Language</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_video-card"></span>\r\n <div>Placa de v\xc3\xaddeo dedicada NVIDIA\xc2\xae GeForce\xc2\xae MX350 com 2GB de GDDR5</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_device-screen-size"></span>\r\n <div>Tela Full HD de 15.6"</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_hard-drive"></span>\r\n <div>SSD de 512GB PCIe NVMe M.2</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_memory"></span>\r\n <div>Mem\xc3\xb3ria de 16GB</div>\r\n </div>\r\n </div>\r\n</div>\r\n \r\n\r\n<div class="sd-ps-short-desc">\r\n Ultrafino com leitor de impress\xc3\xa3o digital e teclado num\xc3\xa9rico.<br><br>Alta procura! Tempo de entrega estendido.\r\n</div>\r\n\r\n\r\n\r\n\r\n </section>\r\n </div>\r\n <div class="sd-ps-bottom">\r\n \r\n\r\n\r\n \r\n\r\n \r\n \r\n\r\n<div>\r\n <span>\r\n <a target="_blank" href="https://www.dell.com/pt-br/work/lp/formas-de-pagamento">Formas de pagamento</a>\r\n </span><br />\r\n At\xc3\xa9 10x sem juros de <span>R$ 659,90</span><br/> no cart\xc3\xa3o de cr\xc3\xa9dito. 
Valor total a prazo <span>R$ 6.599,00</span><br/>\r\n</div>\r\n \r\n \r\n<section class="sd-ps-id"> i5502w5022pw</section>\r\n\r\n\r\n \r\n\r\n<section class="sd-ps-button sd-ps-button-details">\r\n <a class="sd-btn sd-secondary-btn-blue" href="https://deals.dell.com/pt-br/work/productdetail/7sfc" aria-label="Saiba mais e compre: Notebook Inspiron 15">Saiba mais e compre</a>\r\n</section>\r\n </div>\r\n </article>\r\n <article class="sd-ps-stack">\r\n <div class="sd-ps-top">\r\n <section class="sd-ps-banner">\r\n \r\n </section>\r\n <section class="sd-ps-image " aria-hidden="true">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7oyp" tabindex="-1">\r\n <img src="data:image/svg+xml,%3Csvg xmlns=\'http://www.w3.org/2000/svg\' viewBox=\'0 0 165 119\'/%3E" data-src="https://i.dell.com/sites/csimages/App-Merchandizing_Images/all/oc_mockingird_v_14_new_size_bau.png" class="sd-lazy sd-lazy-img" alt="Notebook Vostro 14 5000" width="165" height="119">\r\n </a>\r\n</section>\r\n<section class="sd-ps-title">\r\n <h3 class="sd-ps-title-content">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/7oyp">Notebook Vostro 14 5000</a>\r\n </h3>\r\n</section>\r\n \r\n<section class="sd-ps-ratings-and-reviews">\r\n <div class="inlinerating sd-lazy" data-bv-show="" data-bv-product-id="vostro-14-5402-laptop" data-bv-seo="false"></div>\r\n</section>\r\n\r\n \r\n \r\n\r\n<section class="sd-ps-price ">\r\n \r\n <div>\r\n <div class="sd-ps-orig">\r\nDe \r\n <span class="strike-through">R$7.299,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-dell-price">\r\n <span class="sd-sr-only">Preço</span>\r\n <span>R$6.849,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-sav">\r\n <span>Desconto</span>\r\n <span>R$450,00</span>\r\n <span>(6%)</span>\r\n </div>\r\n \r\n <div class="sd-ps-del">\r\nFrete <span>\r\n Grátis\r\n </span>\r\n </div>\r\n </div>\r\n</section>\r\n <section class="sd-ps-status-label">\r\n \r\n<div class="">\r\n \r\n</div>\r\n </section>\r\n <section class="sd-ps-claim-progress-bar sd-ps-claim-progress-bar-timer ">\r\n \r\n<div class="sd-ps-claim-progress-bar-wrapper pb-7oyp">\r\n <div class="sd-ps-claim-progress-bar-indicator" >\r\n <div class="progressbar-text" style="width: 0%;"></div>\r\n </div>\r\n <div class="sd-ps-claimed">\r\n <div class="sd-ps-claim-percent" style="display:none">\r\n <span class="progressbar-value"></span>\r\n <span class="sd-ps-claimed-pct">\r\n Vendido\r\n\r\n </span>\r\n <span tabindex="0" style="display:none" class="sd-tooltip-trigger ps-tool-tip" sd-tooltip="" sd-tooltip-hover sd-tooltip-message="Claimed products include items in customers\' carts, so more may become available if they aren\'t purchased."\r\n sd-close-label="" sd-tooltip-title="">\r\n Vendido\r\n </span>\r\n\r\n </div>\r\n <div class="sd-ps-claim-countdown-7oyp" >\r\n \r\n\t<span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_clock" style="display:none;"></span>\r\n\t<span class="deal-countdown-timer" data-deal-status="0" data-deal-time="09/13/2021 16:30:00" data-hour-countdown="120" data-deal-id="7oyp"></span>\r\n\r\n\r\n </div>\r\n </div>\r\n</div>\r\n\r\n\r\n </section>\r\n <section class="sd-ps-spec-desc ">\r\n \r\n<div class="sd-ps-feature-specs">\r\n <div class="sd-ps-spec-list">\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_processor"></span>\r\n <div>Intel\xc2\xae Core\xe2\x84\xa2 i7-1165G7 (11\xc2\xaa Gera\xc3\xa7\xc3\xa3o)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon 
sd-dds-font-icon dds_disc-system"></span>\r\n <div>Windows 10 Pro (Funcionalidade avan\xc3\xa7ada para empresas)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_video-card"></span>\r\n <div>Placa de v\xc3\xaddeo dedicada NVIDIA\xc2\xae GeForce\xc2\xae MX330 com 2GB de GDDR5</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_device-screen-size"></span>\r\n <div>Tela Full HD de 14"</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_hard-drive"></span>\r\n <div>SSD de 256GB PCIe NVMe M.2</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_memory"></span>\r\n <div>Mem\xc3\xb3ria de 16GB</div>\r\n </div>\r\n </div>\r\n</div>\r\n \r\n\r\n<div class="sd-ps-short-desc">\r\n Design dur\xc3\xa1vel com certifica\xc3\xa7\xc3\xa3o militar, recursos avan\xc3\xa7ados de seguran\xc3\xa7a, como uma tampa protetora para a webcam.<br><br>Aproveite pre\xc3\xa7o especial de <b>ProSupport!</b>\r\n</div>\r\n\r\n\r\n\r\n\r\n </section>\r\n </div>\r\n <div class="sd-ps-bottom">\r\n \r\n\r\n\r\n \r\n\r\n \r\n \r\n\r\n<div>\r\n <span>\r\n <a target="_blank" href="https://www.dell.com/pt-br/work/lp/formas-de-pagamento">Formas de pagamento</a>\r\n </span><br />\r\n At\xc3\xa9 10x sem juros de <span>R$ 684,90</span><br/> no cart\xc3\xa3o de cr\xc3\xa9dito. Valor total a prazo <span>R$ 6.849,00</span><br/>\r\n</div>\r\n \r\n \r\n<section class="sd-ps-id"> v5402w6001w</section>\r\n\r\n\r\n \r\n\r\n<section class="sd-ps-button sd-ps-button-details">\r\n <a class="sd-btn sd-secondary-btn-blue" href="https://deals.dell.com/pt-br/work/productdetail/7oyp" aria-label="Saiba mais e compre: Notebook Vostro 14 5000">Saiba mais e compre</a>\r\n</section>\r\n </div>\r\n </article>\r\n <article class="sd-ps-stack">\r\n <div class="sd-ps-top">\r\n <section class="sd-ps-banner">\r\n \r\n </section>\r\n <section class="sd-ps-image " aria-hidden="true">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/9zr3" tabindex="-1">\r\n <img src="data:image/svg+xml,%3Csvg xmlns=\'http://www.w3.org/2000/svg\' viewBox=\'0 0 165 119\'/%3E" data-src="https://i.dell.com/sites/csimages/App-Merchandizing_Images/all/oc_bullseye_n_15_new_size_bau_gy.png" class="sd-lazy sd-lazy-img" alt="Notebook Inspiron 15 3000" width="165" height="119">\r\n </a>\r\n</section>\r\n<section class="sd-ps-title">\r\n <h3 class="sd-ps-title-content">\r\n <a href="https://deals.dell.com/pt-br/work/productdetail/9zr3">Notebook Inspiron 15 3000</a>\r\n </h3>\r\n</section>\r\n \r\n<section class="sd-ps-ratings-and-reviews">\r\n <div class="inlinerating sd-lazy" data-bv-show="" data-bv-product-id="inspiron-15-3501-laptop" data-bv-seo="false"></div>\r\n</section>\r\n\r\n \r\n \r\n\r\n<section class="sd-ps-price ">\r\n \r\n <div>\r\n <div class="sd-ps-orig">\r\nDe \r\n <span class="strike-through">R$4.198,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-dell-price">\r\n <span class="sd-sr-only">Preço</span>\r\n <span>R$3.898,00</span>\r\n \r\n </div>\r\n <div class="sd-ps-sav">\r\n <span>Desconto</span>\r\n <span>R$300,00</span>\r\n <span>(7%)</span>\r\n </div>\r\n \r\n <div class="sd-ps-del">\r\nFrete <span>\r\n Grátis\r\n </span>\r\n </div>\r\n </div>\r\n</section>\r\n <section class="sd-ps-status-label">\r\n \r\n<div class="">\r\n \r\n</div>\r\n </section>\r\n <section class="sd-ps-claim-progress-bar 
sd-ps-claim-progress-bar-timer ">\r\n \r\n<div class="sd-ps-claim-progress-bar-wrapper pb-9zr3">\r\n <div class="sd-ps-claim-progress-bar-indicator" >\r\n <div class="progressbar-text" style="width: 0%;"></div>\r\n </div>\r\n <div class="sd-ps-claimed">\r\n <div class="sd-ps-claim-percent" style="display:none">\r\n <span class="progressbar-value"></span>\r\n <span class="sd-ps-claimed-pct">\r\n Vendido\r\n\r\n </span>\r\n <span tabindex="0" style="display:none" class="sd-tooltip-trigger ps-tool-tip" sd-tooltip="" sd-tooltip-hover sd-tooltip-message="Claimed products include items in customers\' carts, so more may become available if they aren\'t purchased."\r\n sd-close-label="" sd-tooltip-title="">\r\n Vendido\r\n </span>\r\n\r\n </div>\r\n <div class="sd-ps-claim-countdown-9zr3" >\r\n \r\n\t<span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_clock" style="display:none;"></span>\r\n\t<span class="deal-countdown-timer" data-deal-status="0" data-deal-time="09/13/2021 16:30:00" data-hour-countdown="120" data-deal-id="9zr3"></span>\r\n\r\n\r\n </div>\r\n </div>\r\n</div>\r\n\r\n\r\n </section>\r\n <section class="sd-ps-spec-desc ">\r\n \r\n<div class="sd-ps-feature-specs">\r\n <div class="sd-ps-spec-list">\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_processor"></span>\r\n <div>Intel\xc2\xae Core\xe2\x84\xa2 i5-1035G1 (10\xc2\xaa Gera\xc3\xa7\xc3\xa3o)</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_disc-system"></span>\r\n <div>Windows 10 Home Single Language</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_video-card"></span>\r\n <div>Placa de v\xc3\xaddeo integrada Intel\xc2\xae UHD Graphics com mem\xc3\xb3ria gr\xc3\xa1fica compartilhada</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_device-screen-size"></span>\r\n <div>Tela HD de 15.6"</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_hard-drive"></span>\r\n <div>SSD de 256GB PCIe NVMe M.2</div>\r\n </div>\r\n <div class="sd-ps-spec-item">\r\n <span aria-hidden="true" class="sd-icon sd-dds-font-icon dds_memory"></span>\r\n <div>Mem\xc3\xb3ria de 8GB</div>\r\n </div>\r\n </div>\r\n</div>\r\n \r\n\r\n<div class="sd-ps-short-desc">\r\n Design mais leve que a gera\xc3\xa7\xc3\xa3o anterior. Carregamento mais r\xc3\xa1pido com ExpressCharge\xe2\x84\xa2.\r\n</div>\r\n\r\n\r\n\r\n\r\n </section>\r\n </div>\r\n <div class="sd-ps-bottom">\r\n \r\n\r\n\r\n \r\n\r\n \r\n \r\n\r\n<div>\r\n <span>\r\n <a target="_blank" href="https://www.dell.com/pt-br/work/lp/formas-de-pagamento">Formas de pagamento</a>\r\n </span><br />\r\n At\xc3\xa9 10x sem juros de <span>R$ 389,80</span><br/> no cart\xc3\xa3o de cr\xc3\xa9dito. 
li,[component=footer] .ft-contextual-links-section .stack>ul:not(.social,.contact) li{padding:0;line-height:24px}[component=footer] .ft-contextual-links-section .stack .flyout>ul li a,[component=footer] .ft-contextual-links-section .stack>ul:not(.social,.contact) li a{padding:12px 16px 12px 56px}[component=footer] .ft-contextual-links-section .stack .flyout>ul li .sub-nav li a,[component=footer] .ft-contextual-links-section .stack>ul:not(.social,.contact) li .sub-nav li a{padding:12px 16px 12px 66px}[component=footer] ul.legal,[component=footer] ul.site-switcher{display:-ms-flexbox;display:flex;-ms-flex-direction:column;flex-direction:column}[component=footer] ul.legal>li,[component=footer] ul.site-switcher>li{margin-right:48px;padding:12px 0}[component=footer] ul.legal>li a,[component=footer] ul.site-switcher>li a{font-size:16px;line-height:24px}}@media only screen and (min-width:768px) and (max-width:1023px){[component=footer] ul>li{padding-bottom:24px}[component=footer] .ft-legal-links-section,[component=footer] .ft-site-switcher-section{padding:0 110px 20px}[component=footer] .ft-contextual-links-section .stack>a{margin-bottom:24px;font-weight:400;font-weight:700;font-size:14px;display:inline-block;padding-bottom:0}[component=footer] .ft-contextual-links-section .ContextualFooter1{-ms-flex-wrap:wrap;flex-wrap:wrap;padding:44px 110px 0}[component=footer] .ft-contextual-links-section .ContextualFooter1 .country-selector span:hover{text-decoration:underline}[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack{min-width:250px;margin-bottom:44px}[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack-wrapper{margin-bottom:48px;min-width:250px}[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack-wrapper .stack{margin-bottom:16px}[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack-wrapper .unified-feedback{margin-top:0}[component=footer] .ft-contextual-links-section .ContextualFooter2{-ms-flex-wrap:wrap;flex-wrap:wrap;padding:0 110px}[component=footer] .ft-contextual-links-section .ContextualFooter2 .stack{min-width:250px;margin-bottom:44px}[component=footer] .ft-contextual-links-section .button-wrapper a.cta{width:80%;margin-bottom:0}[component=footer] .ft-birdseed-section{padding:0 110px 44px}[component=footer] ul.legal li,[component=footer] ul.site-switcher li{font-size:16px;line-height:24px;padding:0 0 24px}}@media only screen and (min-width:1024px){#mh-unified-footer ul li a,[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack .country-selector span{font-size:14px;line-height:20px}[component=footer] .ft-birdseed-section a:hover,[component=footer] .ft-contextual-links-section .ContextualFooter1 .country-selector span:hover{text-decoration:underline}#mh-unified-footer .ContextualFooter2 .stack{width:25%}[component=footer] .ft-contextual-links-section .stack h3{font-size:14px}[component=footer] .ft-contextual-links-section .ContextualFooter1{-ms-flex-pack:start;justify-content:flex-start}[component=footer] .ft-contextual-links-section .ContextualFooter1 .unified-feedback a{font-size:14px}[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack-wrapper{width:25%;-ms-flex:none;flex:none}[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack-wrapper .stack{width:100%}[component=footer] .ft-contextual-links-section .ContextualFooter1 .stack{width:25%;-ms-flex:none;flex:none}[component=footer] .ft-legal-links-section 
ul>li:last-child{padding-bottom:16px}}@media (-ms-high-contrast:none),screen and (-ms-high-contrast:active){.country-list-container .country-list .sub-nav-wrapper{overflow-y:scroll!important}[component=footer] .ContextualFooter1 .stack{-ms-flex:2;flex:2}}[component=footer] *{box-sizing:border-box}[component=footer] a{outline-width:0;text-decoration:none}</style>\n\n<!-- Footer ver:1.0.1.3849 -->\n<footer id="mh-unified-footer" component="footer">\n \n <nav class="ft-contextual-links-section">\n <div class="ContextualFooter1">\n <div class="stack-wrapper">\n \r\n<div component="unified-country-selector" class="stack showChevron country-selector">\r\n <div class="mh-flyout-wrapper">\r\n <a role="link" class="mh-flyout-link" aria-expanded="false" aria-haspopup="true" aria-label="" tabindex="0">\r\n <span>BR/PT</span>\r\n </a>\r\n <div class="flyout">\r\n \n <div class="mh-load-spinner js-mh-gss" aria-label="Loading Spinner"></div>\n\r\n <ul class="country-list-container"></ul>\r\n </div>\r\n </div>\r\n</div>\n <div class="unified-feedback"><a href="//www.dell.com/pt-br/sitemap" role="button">Mapa do site</a></div>\n </div>\n \r\n<div class="stack ">\r\n <a role="heading" aria-level="3" class="showChevron mh-no-text-decoration" href="javascript:void(0)">Conta</a>\r\n <ul class="">\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.dell.com/pt-br/work/myaccount/">Minha conta</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.dell.com/support/orders">Status do pedido</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.dell.com/support/mps/">Meus Produtos</a>\r\n </li>\r\n </ul>\r\n\r\n\r\n</div>\r\n<div class="stack ">\r\n <a role="heading" aria-level="3" class="showChevron mh-no-text-decoration" href="javascript:void(0)">Suporte</a>\r\n <ul class="">\r\n <li data-testid="footer--links">\r\n <a class="viewall" data-label="viewall" href="//www.dell.com/support/home/pt-br">P\xc3\xa1gina de suporte</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.dell.com/support/contents/pt-br/category/Contact-Information">Fale conosco</a>\r\n </li>\r\n </ul>\r\n\r\n\r\n</div>\r\n<div class="stack ">\r\n <a role="heading" aria-level="3" class="showChevron mh-no-text-decoration" href="javascript:void(0)">Fale conosco</a>\r\n <ul class="">\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.dell.com/community/Comunidade-da-Dell/ct-p/Portuguese?profile.language=pt">Comunidade</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="facebook_icon" data-label="facebook_icon" href="//pt-br.facebook.com/DellBrasil">Facebook</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="twitter_icon" data-label="twitter_icon" href="//twitter.com/DellnoBrasil">Twitter</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="instagram_icon" data-label="instagram_icon" href="//www.instagram.com/dellnobrasil">Instagram</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="youtube_icon" data-label="youtube_icon" href="//www.youtube.com/user/dellnobrasil">Youtube</a>\r\n </li>\r\n </ul>\r\n\r\n\r\n</div>\r\n\n </div>\n <div class="ContextualFooter2">\n \r\n<div class="stack ">\r\n <a role="heading" aria-level="3" class="showChevron mh-no-text-decoration" href="javascript:void(0)">Nossas ofertas</a>\r\n <ul class="">\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" 
href="//www.delltechnologies.com/pt-br/apex/index.htm">APEX</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.dell.com/pt-br/work/shop">Produtos</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/pt-br/solutions/index.htm">Solu\xc3\xa7\xc3\xb5es</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/pt-br/services/index.htm">Servi\xc3\xa7os</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//deals.dell.com/pt-br/work">Promo\xc3\xa7\xc3\xa3o</a>\r\n </li>\r\n </ul>\r\n\r\n\r\n</div>\r\n<div class="stack ">\r\n <a role="heading" aria-level="3" class="showChevron mh-no-text-decoration" href="javascript:void(0)">Nossa empresa</a>\r\n <ul class="">\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//corporate.delltechnologies.com/pt-br/about-us.htm">Sobre a Dell Technologies</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//carreiras.dell.com/">Carreiras profissionais</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//corporate.delltechnologies.com/pt-br/newsroom.htm">Not\xc3\xadcias</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//corporate.delltechnologies.com/pt-br/social-impact/advancing-sustainability/how-to-recycle.htm">Reciclagem de produto</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//corporate.delltechnologies.com/pt-br/social-impact.htm">Impacto social</a>\r\n </li>\r\n </ul>\r\n\r\n\r\n</div>\r\n<div class="stack ">\r\n <a role="heading" aria-level="3" class="showChevron mh-no-text-decoration" href="javascript:void(0)">Nossos parceiros</a>\r\n <ul class="">\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/partner/pt-br/partner/find-a-partner.htm">Encontre um parceiro</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.dell.com/pt-br/lp/reseller_store_locator">Localize um Varejista</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/pt-br/oem/index.htm">Fabricantes de equipamento original</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/partner/pt-br/partner/partner.htm">Programa de parceria</a>\r\n </li>\r\n </ul>\r\n\r\n\r\n</div>\r\n<div class="stack ">\r\n <a role="heading" aria-level="3" class="showChevron mh-no-text-decoration" href="javascript:void(0)">Recursos</a>\r\n <ul class="">\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/pt-br/events/index.htm">Eventos</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/pt-br/learn/index.htm">Gloss\xc3\xa1rio</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/pt-br/resource-library.htm">Biblioteca de recursos</a>\r\n </li>\r\n <li data-testid="footer--links">\r\n <a class="" data-label="" href="//www.delltechnologies.com/pt-br/software-downloads/index.htm">Downloads de teste de software</a>\r\n </li>\r\n </ul>\r\n\r\n\r\n</div>\r\n\n </div>\n </nav>\n <nav class="ft-site-switcher-section">\n <ul class="site-switcher">\r\n <li 
data-testid="footer-site-switcher-links">\r\n <a role="heading" aria-level="3" data-label="" href="//www.dell.com/pt-br">Dell.com</a>\r\n </li>\r\n <li data-testid="footer-site-switcher-links">\r\n <a role="heading" aria-level="3" data-label="" href="https://www.delltechnologies.com/pt-br/index.htm">Dell Technologies</a>\r\n </li>\r\n <li data-testid="footer-site-switcher-links">\r\n <a role="heading" aria-level="3" data-label="" href="//www.dell.com/identity/v2/discovery?cht=login&c=br&l=pt">Premier</a>\r\n </li>\r\n </ul>\r\n\n </nav>\n <nav class="ft-legal-links-section">\n <ul class="legal">\r\n <li data-testid="footer-legal-links">\r\n <a data-label="" href="//www.dell.com/learn/br/pt/brcorp1/site-terms-of-use-copyright">\xc2\xa9 2021 Dell</a>\r\n </li>\r\n <li data-testid="footer-legal-links">\r\n <a data-label="" href="//www.dell.com/learn/br/pt/brcorp1/terms-of-sale">Termos e Condi\xc3\xa7\xc3\xb5es</a>\r\n </li>\r\n <li data-testid="footer-legal-links">\r\n <a data-label="" href="//www.dell.com/learn/br/pt/brcorp1/policies-privacy">Declara\xc3\xa7\xc3\xa3o de Privacidade</a>\r\n </li>\r\n <li data-testid="footer-legal-links">\r\n <a data-label="" href="//www.dell.com/learn/br/pt/brcorp1/policies-ads-and-emails">An\xc3\xbancios e e-mails</a>\r\n </li>\r\n <li data-testid="footer-legal-links">\r\n <a data-label="" href="//www.dell.com/learn/br/pt/brcorp1/terms">Informa\xc3\xa7\xc3\xb5es Legais e Regulat\xc3\xb3rias</a>\r\n </li>\r\n <li data-testid="footer-legal-links">\r\n <a data-label="" href="//www.dell.com/learn/br/pt/brcorp1/policies">Pol\xc3\xadticas</a>\r\n </li>\r\n <li data-testid="footer-legal-links">\r\n <a data-label="" href="//www.dell.com/learn/br/pt/brcorp1/regulatory-compliance">Cumprimento dos Requisitos Regulat\xc3\xb3rios</a>\r\n </li>\r\n </ul>\r\n\n </nav>\n\n <div class="ft-birdseed-section">\n Pre\xc3\xa7os referenciados com impostos para consumidores pessoas f\xc3\xadsicas, comprando com CPF e para a cidade de S\xc3\xa3o Paulo. O pre\xc3\xa7o final aplic\xc3\xa1vel nas vendas para pessoas jur\xc3\xaddicas comprando CNPJ pode variar de acordo com o Estado que estiver localizado o adquirente do produto, em raz\xc3\xa3o dos diferenciais de impostos para cada Estado.<br /><br />Em conformidade com a legisla\xc3\xa7\xc3\xa3o tribut\xc3\xa1ria, o endere\xc3\xa7o de entrega dos produtos adquiridos por firmas individuais e demais pessoas jur\xc3\xaddicas de direito privado (CNPJ), deve ser o mesmo endere\xc3\xa7o cadastrado junto aos \xc3\xb3rg\xc3\xa3os fiscais reguladores - Receita Federal e Sintegra (casos onde h\xc3\xa1 inscri\xc3\xa7\xc3\xa3o estadual ativa).<br /><br />Ofertas limitadas, por linha de produto, a 03 unidades para pessoa f\xc3\xadsica, seja por aquisi\xc3\xa7\xc3\xa3o direta e/ou entrega a ordem, e que n\xc3\xa3o tenha adquirido produtos nos \xc3\xbaltimos 04 meses, e 10 unidades para pessoa jur\xc3\xaddica ou grupo de empresas com at\xc3\xa9 500 funcion\xc3\xa1rios registrados. Os pre\xc3\xa7os ofertados podem ser alterados sem aviso pr\xc3\xa9vio. Valores com frete n\xc3\xa3o incluso. Os pre\xc3\xa7os ofertados no site n\xc3\xa3o s\xc3\xa3o v\xc3\xa1lidos para compra para revenda e/ou para compra por entidades p\xc3\xbablicas. Para compra nestas hip\xc3\xb3teses entre em contato com um representante de vendas. 
A Dell reserva-se o direito de n\xc3\xa3o concluir ou cancelar a venda se os produtos forem adquiridos para estas finalidades.<br /><br />Para maiores informa\xc3\xa7\xc3\xb5es sobre direito de arrependimento consulte nossa pol\xc3\xadtica <a href="//www.dell.com/learn/br/pt/brcorp1/terms-conditions/art-intro-policies-returns-br?c=br&l=pt&s=corp">clique aqui</a>. Para consultar o C\xc3\xb3digo de Defesa do Consumidor <a href="http://www.planalto.gov.br/ccivil_03/Leis/L8078.htm">clique aqui</a>.<br /><br />Garantia total (legal + contratual) de 01 ano, inclui pe\xc3\xa7as e m\xc3\xa3o de obra, restrita aos produtos Dell. Na garantia no centro de reparos, o Cliente, ap\xc3\xb3s contato telef\xc3\xb4nico com o Suporte T\xc3\xa9cnico da Dell com diagn\xc3\xb3stico remoto, dever\xc3\xa1 levar o seu equipamento ao centro de reparos localizado em SP ou encaminhar pelos Correios, esse \xc3\xbaltimo sem \xc3\xb4nus, desde que seja preservada a caixa original do produto. Na garantia \xc3\xa0 domic\xc3\xadlio/assist\xc3\xaancia t\xc3\xa9cnica no local, t\xc3\xa9cnicos ser\xc3\xa3o deslocados, se necess\xc3\xa1rio, ap\xc3\xb3s consulta telef\xc3\xb4nica com diagn\xc3\xb3stico remoto. Produtos e softwares de outras marcas est\xc3\xa3o sujeitos aos termos de garantia dos respectivos fabricantes, conforme o respectivo site. Para mais detalhes sobre a garantia do seu equipamento, consulte o seu representante de vendas ou visite o site <a href="//www.dell.com.br">www.dell.com.br</a>.<br /><br />Cupons n\xc3\xa3o s\xc3\xa3o cumulativos. Cupons e descontos espec\xc3\xadficos n\xc3\xa3o s\xc3\xa3o cumulativos com os benef\xc3\xadcios do Programa MPP (Member Purchase Program) e EPP (Employee Purchase Program). <br /><br />OFERTA REL\xc3\x82MPAGO: ofertas com dura\xc3\xa7\xc3\xa3o de no m\xc3\xa1ximo 24h consecutivas ou limitada a disponibilidade de componentes, o que ocorrer primeiro. As ofertas poder\xc3\xa3o ser estendidas \xc3\xa0 crit\xc3\xa9rio da Dell. Cada oferta rel\xc3\xa2mpago \xc3\xa9 uma promo\xc3\xa7\xc3\xa3o de um \xc3\xbanico produto e uma \xc3\xbanica configura\xc3\xa7\xc3\xa3o espec\xc3\xadfica. A compra deve ser efetuada e totalmente conclu\xc3\xadda dentro do per\xc3\xadodo de validade da oferta rel\xc3\xa2mpago. N\xc3\xa3o s\xc3\xa3o eleg\xc3\xadveis para estas ofertas produtos salvos no carrinho, sem conclus\xc3\xa3o de compra.<br /><br />A forma de pagamento \xc3\xa9 definida na finaliza\xc3\xa7\xc3\xa3o do pedido. O n\xc3\xbamero de parcelas para servi\xc3\xa7os \xc3\xa9 o mesmo do equipamento e \xc3\xa9 \xc3\xbanico para toda a compra.<br /><br />Os softwares ofertados est\xc3\xa3o sujeitos aos Termos e Condi\xc3\xa7\xc3\xb5es da Licen\xc3\xa7a de Uso do Fabricante. Para maiores informa\xc3\xa7\xc3\xb5es, consulte o site do fabricante.<br /><br />Aten\xc3\xa7\xc3\xa3o: Certifique-se de ativar o Office o mais breve poss\xc3\xadvel. A oferta expira 180 dias ap\xc3\xb3s a ativa\xc3\xa7\xc3\xa3o do Windows. Para mais informa\xc3\xa7\xc3\xb5es acesse <a href="https://support.office.com/pt-br/article/ativar-o-office-5bd38f38-db92-448b-a982-ad170b1e187e?ui=pt-BR&rs=pt-BR&ad=BR">clique aqui</a>. Se voc\xc3\xaa adquiriu uma licen\xc3\xa7a de Office OEM (embarcada junto com o seu equipamento Dell), voc\xc3\xaa dever\xc3\xa1 salvar a conta de e-mail utilizada na ativa\xc3\xa7\xc3\xa3o do mesmo. A perda desta conta de e-mail poder\xc3\xa1 gerar a perda da licen\xc3\xa7a do Pacote Office. 
<a href="//www.dell.com/support/article/br/pt/brbsdt1/sln304490/como-encontrar-e-ativar-o-microsoft-office-2016-2019-365-em-seu-computador-da-dell?lang=p">Clique aqui</a> para acessar o processo de ativa\xc3\xa7\xc3\xa3o do Office OEM. Conforme pol\xc3\xadtica p\xc3\xbablica da Microsoft, a Dell recomenda o Office 365 Personal e Office 365 Home, apenas para uso dom\xc3\xa9stico.<br /><br />Microsoft e Windows s\xc3\xa3o marcas registradas da Microsoft Corporation nos EUA.<br /><br />Celeron, Intel, o logotipo Intel, Intel Atom, Intel Core, Intel Inside, o Intel Inside logotipo, Intel vPro, Intel Evo, Intel Optane, Intel Xeon Phi, Iris, Itanium, MAX, Pentium e Xeon s\xc3\xa3o marcas registradas da Corpora\xc3\xa7\xc3\xa3o Intel e suas Subsidi\xc3\xa1rias.<br /><br />2014 Advanced Micro Devices, Inc. Todos os direitos reservados. A sigla AMD, o logotipo de seta da AMD e as combina\xc3\xa7\xc3\xb5es resultantes disso s\xc3\xa3o marcas registradas da Advanced Micro Devices, Inc. Outros nomes t\xc3\xaam apenas prop\xc3\xb3sitos informativos e podem ser marcas registradas dos seus respectivos propriet\xc3\xa1rios.<br /><br />A Dell \xc3\xa9 l\xc3\xadder em vendas de notebooks para empresas no Brazil desde janeiro de 2019, conforme relat\xc3\xb3rio do IDC datado a 11 de fevereiro de 2021. <br /><br />Lideran\xc3\xa7a em monitores - Fonte: IDC Worldwide Quarterly PC Monitor Tracker, Q2 2020.<br />Monitores Dell #1 no mundo por 7 anos consecutivos - Fonte: IDC Worldwide Quarterly PC Monitor Tracker, Q2 2020.<br /><br />Lideran\xc3\xa7a em servidores - Fonte: Relat\xc3\xb3rio IDC Enterprise Infrastructure Tracker CY20Q4<br /><br />Empresa beneficiada pela Lei de Inform\xc3\xa1tica.<br /><br />CES\xc2\xae \xc3\xa9 marca registrada da Consumer Technology Association (CTA)\xe2\x84\xa2. A CES Innovation Awards \xc3\xa9 baseada em materiais descritivos submetidos aos ju\xc3\xadzes. A CTA n\xc3\xa3o verificou a exatid\xc3\xa3o de qualquer submiss\xc3\xa3o ou de qualquer afirma\xc3\xa7\xc3\xa3o feita e n\xc3\xa3o testou o item ao qual o pr\xc3\xaamio foi dado.<br /><br />\xc2\xa9 2020 Dell Inc. 
Todos os direitos reservados.<br />CNPJ 72.381.189/0001-10 <br />Av.Industrial Belgraf,400<br />Bairro Medianeira<br />Eldorado do Sul - RS<br />CEP 92990-000<br /><a href="//www.dell.com/pt-br">www.dell.com/pt-br</a>\n </div>\n</footer>\r\n\r\n\r\n <script>\r\n var isBoomerangEnabled = true;\r\n </script>\r\n\r\n\r\n \r\n\r\n <script>\r\n (function () {\r\n var script;\r\n if (!("IntersectionObserver" in window)) {\r\n script = document.createElement("script");\r\n script.src = "//deals.dell.com/bundles/1.0.0.4005/js/intersection-observer.js";\r\n document.head.appendChild(script);\r\n }\r\n })()\r\n </script>\r\n\r\n <script type="text/javascript">if (typeof dellScriptLoader !== \'undefined\') dellScriptLoader.load([{"url":"//afcs.dellcdn.com/shop/scripts/global-bundle.min.8ba72386b7623f1268fc491fd99be01a.js","order":"0","crossorigin":false}])</script>\r\n <script type="text/javascript">if (typeof dellScriptLoader !== \'undefined\') dellScriptLoader.load([{"url":"//deals.dell.com/bundles/1.0.0.4005/js/scripts.min.js?v=8_xlvpcBExTiAbhHaAKYc0zp9BiEikVr3Af_j-z5Zt4","order":"6","crossorigin":false}])</script>\r\n\r\n <script type="text/javascript">if (typeof dellScriptLoader !== \'undefined\') dellScriptLoader.load([{"url":"//nexus.dell.com/dell/Deals/Bootstrap.js","order":"99","crossorigin":false}])</script>\r\n\r\n <script type="text/javascript">if (typeof dellScriptLoader !== \'undefined\') dellScriptLoader.load([{"url":"//nexus.dell.com/dell/marketing/Bootstrap.js","order":"100","crossorigin":false}])</script>\r\n\r\n <script type="text/javascript">if (typeof dellScriptLoader !== \'undefined\') dellScriptLoader.load([{"url":"//apps.bazaarvoice.com/deployments/dellglobal/readonlyspecialevents/production/pt_BR/bv.js","order":"103","crossorigin":false}])</script>\r\n\r\n <script type="text/javascript">if (typeof dellScriptLoader !== \'undefined\') dellScriptLoader.load([{"url":"//afcs.dellcdn.com/boomerang/latest/boomerang-csb-full.min.js","order":"105","crossorigin":false}])</script>\r\n\r\n <script data-perf="bodyGlobal">Dell.perfmetrics.end(\'bodyGlobal\');</script>\r\n <script>(function initLoadScripts(win) {\r\n\t"use strict";\r\n\tvar scriptsArray = typeof dellScriptLoader !== "undefined"\r\n\t\t&& dellScriptLoader.hasOwnProperty("scriptsArrayCopy")\r\n\t\t&& Array.isArray(dellScriptLoader.scriptsArrayCopy())\r\n\t\t\t? 
orderArray(removeDuplicates(dellScriptLoader.scriptsArrayCopy()))\r\n\t\t\t: [];\r\n\r\n\tif (scriptsArray.length === 0) {\r\n\t\treturn;\r\n\t}\r\n\r\n\tfunction removeDuplicates(urlArray) {\r\n\t\tvar unique = {};\r\n\t\tvar newArr = [];\r\n\t\tvar max = urlArray.length;\r\n\r\n\t\tfor (var i = 0; i < max; ++i) {\r\n\t\t\tvar script = urlArray[i];\r\n\t\t\t//remove `.js` and everything after to make sure we are removing query parameters and/or markers to ensure no duplicates\r\n\t\t\t//then remove prefixes\r\n\t\t\tvar sensitizedKey = script.url.replace(/(\\.js).*/g, "").replace(/^(https:\\/\\/www\\.|https:\\/\\/|\\/\\/|\\/)/g, "");\r\n\r\n\t\t\tif (!unique[sensitizedKey]) {\r\n\t\t\t\tunique[sensitizedKey] = true;\r\n\t\t\t\tnewArr.push(script);\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\treturn newArr;\r\n\t}\r\n\r\n\tfunction orderArray(array) {\r\n\t\tvar withOrder = array.filter(function (value, index, arr) {\r\n\t\t\treturn value.hasOwnProperty("order");\r\n\t\t});\r\n\r\n\t\tvar noOrder = array.filter(function (value, index, arr) {\r\n\t\t\treturn !value.hasOwnProperty("order");\r\n\t\t});\r\n\r\n\t\twithOrder = withOrder.sort(function (a, b) {\r\n\t\t\treturn a.order > b.order ? 1 : -1;\r\n\t\t});\r\n\r\n\t\treturn [].concat(withOrder, noOrder);\r\n\t}\r\n\r\n\t(function loadScripts() {\r\n\t\tvar scriptObject;\r\n\t\tvar scriptTag;\r\n\t\tvar pendingScripts = [];\r\n\t\tvar firstScript = document.scripts[0];\r\n\t\tvar documentHead = document.head;\r\n\t\tvar fragment = document.createDocumentFragment();\r\n\t\twin.scriptOrder = [];\r\n\r\n\t\t// Watch scripts load in IE\r\n\t\tfunction stateChange() {\r\n\t\t\t// Execute as many scripts in order as we can\r\n\t\t\tvar pendingScript;\r\n\t\t\twhile (pendingScripts[0] && pendingScripts[0].readyState === \'loaded\') {\r\n\t\t\t\tpendingScript = pendingScripts.shift();\r\n\t\t\t\t// avoid future loading events from this script (eg, if src changes)\r\n\t\t\t\tpendingScript.onreadystatechange = null;\r\n\t\t\t\t// can\'t just appendChild, old IE bug if element isn\'t closed\r\n\t\t\t\tfirstScript.parentNode.insertBefore(pendingScript, firstScript);\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t// loop through our script urls\r\n\t\twhile (scriptObject = scriptsArray.shift()) {\r\n\t\t\tif (\'async\' in firstScript) { // modern browsers\r\n\t\t\t\tscriptTag = document.createElement(\'script\');\r\n\r\n\t\t\t\tif (scriptObject.crossorigin) {\r\n\t\t\t\t\tscriptTag.setAttribute("crossorigin", "anonymous");\r\n\t\t\t\t}\r\n\r\n\t\t\t\tscriptTag.onload = function (event) {\r\n\t\t\t\t\tif (event.path || event.target) {\r\n\t\t\t\t\t\tvar src = event.path ? 
event.path[0].src : event.target.src;\r\n\r\n\t\t\t\t\t\twin.scriptOrder.push(src);\r\n\t\t\t\t\t}\r\n\t\t\t\t};\r\n\r\n\t\t\t\tscriptTag.async = false;\r\n\t\t\t\tscriptTag.src = scriptObject.url;\r\n\t\t\t\tfragment.appendChild(scriptTag);\r\n\t\t\t}\r\n\t\t\telse if (firstScript.readyState) { // IE<10\r\n\t\t\t\t// create a script and add it to our todo pile\r\n\t\t\t\tscriptTag = document.createElement(\'script\');\r\n\r\n\t\t\t\tif (scriptObject.crossorigin) {\r\n\t\t\t\t\tscriptTag.setAttribute("crossorigin", "anonymous");\r\n\t\t\t\t}\r\n\r\n\t\t\t\tpendingScripts.push(scriptTag);\r\n\t\t\t\t// listen for state changes\r\n\t\t\t\tscriptTag.onreadystatechange = stateChange;\r\n\t\t\t\t// must set src AFTER adding onreadystatechange listener\r\n\t\t\t\t// else we\xe2\x80\x99ll miss the loaded event for cached scripts\r\n\t\t\t\tscriptTag.src = scriptObject.url;\r\n\t\t\t}\r\n\t\t\telse { // fall back to defer\r\n\t\t\t\tdocument.write(\'<script src="\' + scriptObject.url + \'" defer></\' + \'script>\');\r\n\t\t\t}\r\n\t\t}\r\n\r\n\t\t//modern browsers IE10 >= append script tags added to html fragment\r\n\t\tif (\'async\' in firstScript) {\r\n\t\t\tdocumentHead.appendChild(fragment);\r\n\t\t}\r\n\t})();\r\n})(window);\r\n</script>\r\n</body>\r\n</html>\r\n'
###Markdown
Cleaning and extracting the strings (text):
###Code
# decode the raw bytes and collapse whitespace so the markup becomes one clean string
html = html.decode('utf-8')
html = " ".join(html.split()).replace('> <', '><')
html
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
soup
texto = soup.html.getText()
texto
###Output
_____no_output_____
###Markdown
Fetching the content of a single card:
###Code
notebook = soup.find('article', {'class':'sd-ps-stack'})
notebook
texto = notebook.getText()
texto = " ".join(texto.split())
texto
cards = []  # initialize the list of card texts before appending
cards.append(texto)
cards
###Output
_____no_output_____
###Markdown
Extending the scrape to all displayed cards:
###Code
notebooks = soup.findAll('article', {'class':'sd-ps-stack'})
notebooks
cards = []
for i in range(len(notebooks)):
texto = notebooks[i].getText()
texto = " ".join(texto.split())
cards.append(texto)
cards
###Output
_____no_output_____
###Markdown
Trimming some of the information:
###Code
for i in range(len(cards)):
cards[i] = cards[i].replace('Saiba mais e compre', '')
cards[i] = cards[i].replace('Vendido', '')
cards[i] = cards[i].replace('Formas de pagamento Até', '')
cards[i] = cards[i].replace('sem juros de', '')
cards[i] = cards[i].replace('no cartão de crédito.', '')
cards[i] = cards[i].replace('a prazo', '')
cards[i] = cards[i].replace('Aproveite preço especial de ProSupport!', '')
cards[i] = cards[i].replace('(A Dell recomenda o Windows 10 Pro para empresas)', '')
cards
###Output
_____no_output_____
###Markdown
Separate lists for different models:
###Code
ofertas_relampago = []
vostro = []
inspiron = []
latitude = []
for i in range(len(cards)):
if 'Oferta Relâmpago' in cards[i]:
ofertas_relampago.append(cards[i])
if 'Vostro' in cards[i]:
vostro.append(cards[i])
if 'Inspiron' in cards[i]:
inspiron.append(cards[i])
if 'Latitude' in cards[i]:
latitude.append(cards[i])
print(len(ofertas_relampago), len(vostro), len(inspiron), len(latitude))
ofertas_relampago
vostro
inspiron
latitude
###Output
_____no_output_____ |
EpisodicERGenerator.ipynb | ###Markdown
Generate Episodic Erosion Rockfall Matrix

Syntax

`RunPars = EpisodicGenerator(RunPars)`

Input

`RunPars` : dictionary containing the run parameters

Variables

`erosion_rate` : the given long-term erosion rate (cm yr-1)
`total_time` : total time in the runs (yrs)
`Frequency` : interval between successive rockfalls (yrs)

Output

`RunPars` : dictionary containing the run parameters, now including the rockfall matrix

Variables

`RockfallMatrix` : rockfall matrix with one fall of depth `erosion_rate * Frequency` occurring every `Frequency` years (cm)

Notes

**Date of Creation:** 7 July 2021
**Author:** Donovan Dennis
**Update:**
###Code
import numpy as np

def EpisodicGenerator(RunPars):
    # bring in the relevant parameters
    erosion_rate = RunPars['erosion_rate'][0]
    total_time = RunPars['total_time']
    frequency = RunPars['Frequency']

    # each episodic fall removes the erosion accumulated over one interval
    fall_amount = erosion_rate * frequency

    # open up a matrix for the annual erosion magnitudes (zero in fall-free years)
    EpisodicERRockfallMatrix = np.zeros((1, total_time))

    # place one rockfall of depth fall_amount every `frequency` years (including year 0)
    for i in range(total_time):
        if i % frequency == 0:
            EpisodicERRockfallMatrix[0, i] = fall_amount

    # assign to the parameters dictionary
    RunPars['RockfallMatrix'] = EpisodicERRockfallMatrix

    return RunPars
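# Minimal usage sketch — the parameter values below are hypothetical, purely for illustration:
# a 0.1 cm/yr long-term rate over a 200 yr run with one fall every 50 yr gives 5 cm per fall.
demo_pars = {'erosion_rate': [0.1], 'total_time': 200, 'Frequency': 50}
demo_pars = EpisodicGenerator(demo_pars)
demo_pars['RockfallMatrix'][0, :60]  # 5 cm at years 0 and 50, zeros elsewhere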
###Output
_____no_output_____ |
HW4/Federated Poisoning.ipynb | ###Markdown
Federated Poisoning

For this final homework, we will play with distributed learning and model poisoning. You already had a glimpse of adversarial learning in Homework 2.
###Code
from torchvision import models
import torchvision
import torchvision.transforms as transforms
import torch
###Output
_____no_output_____
###Markdown
As a dataset we will use Fashion-MNIST, which contains pictures of clothing items from 10 different classes:
###Code
transform = transforms.ToTensor()
trainset = torchvision.datasets.FashionMNIST(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=8,
shuffle=True, num_workers=2)
testset = torchvision.datasets.FashionMNIST(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=10,
shuffle=False, num_workers=2)
import matplotlib.pyplot as plt
import numpy as np
def imshow(img):
    # transform is only ToTensor, so pixels are already in [0, 1]; no unnormalize step is needed
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # works on all torch versions (.next() was removed in newer ones)
print('A batch has shape', images.shape)
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(labels)
print(' | '.join('%s' % trainset.classes[label] for label in labels))
###Output
_____no_output_____
###Markdown
We will consider a set of clients that receive a certain amount of training data.
###Code
N_CLIENTS = 10
import numpy as np
def divide(n, k):
    # randomly split n samples into k integer shard sizes that sum to n
    weights = np.random.random(k)
    total = weights.sum()
    for i in range(k):
        weights[i] = round(weights[i] * n / total)
    weights[0] += n - sum(weights)  # fix rounding drift so the sizes sum exactly to n
    return weights.astype(int)
weights = divide(len(trainset), N_CLIENTS)
weights
from torch.utils.data import random_split, TensorDataset
shards = random_split(trainset, divide(len(trainset), N_CLIENTS),
generator=torch.Generator().manual_seed(42))
import torch.nn as nn
import torch.nn.functional as F
KERNEL_SIZE = 5
OUTPUT_SIZE = 4
# The same model for the server and for every client
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 6, KERNEL_SIZE)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * OUTPUT_SIZE * OUTPUT_SIZE, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * OUTPUT_SIZE * OUTPUT_SIZE)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
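# Shape check for 28x28 Fashion-MNIST inputs (why OUTPUT_SIZE = 4):
# conv1 (5x5, no padding): 28 -> 24, then 2x2 pool: 24 -> 12;
# conv2 (5x5, no padding): 12 -> 8, then 2x2 pool: 8 -> 4,
# so the flattened feature map entering fc1 is 16 * 4 * 4.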
import torch.nn.functional as F
def test(model, special_sample, testloader):
correct = 0
total = 0
with torch.no_grad():
for _, data in zip(range(100000), testloader):
images, labels = data
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %d %%' % (
len(testloader), 100 * correct / total))
    # class 7 ('Sneaker') is the label the malicious client tries to force on special_sample
    outputs = F.softmax(model(trainset[special_sample][0].reshape(1, -1, 28, 28)), dim=1)
    topv, topi = outputs.topk(3)
    print('Top 3', topi, topv)
    return 100 * correct / total, 100 * outputs[0, 7]
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Federated Learning

There are $C$ clients (in the code, represented as `N_CLIENTS`). At each time step:

- The server sends its current weights $w_t^S$ to all clients $c = 1, \ldots, C$.
- Each client $c = 1, \ldots, C$ runs `n_epochs` epochs of SGD on its shard, **starting** from the server's current weights $w_t^S$.
- When they are done, the clients send their weights $w_t^c$ back to the server.
- The server then aggregates the clients' weights in some way, $w_{t + 1}^S = AGG(\{w_t^c\}_{c = 1}^C)$, and advances to the next step.

Let's start with $AGG = mean$.
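As a point of reference (not the graded solution), mean aggregation is just a coordinate-wise average of the clients' parameter tensors. This is a minimal sketch; it assumes a list `client_nets` of client models with the same architecture as the server's (the `Server`/`Client` classes defined next expose these as `.net`):

```python
import torch

# minimal sketch of AGG = mean over the clients' parameters
def mean_aggregate(server_net, client_nets):
    new_state = {}
    for key in server_net.state_dict():
        # stack this parameter across clients and average coordinate-wise
        new_state[key] = torch.stack(
            [net.state_dict()[key].float() for net in client_nets]).mean(dim=0)
    server_net.load_state_dict(new_state)
```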
###Code
# For this, the following will be useful:
net = Net()
net.state_dict().keys()
# net.state_dict() is an OrderedDict (odict) whose keys are the parameter names shown above
# and whose values are the tensors containing the parameters.
net.state_dict()['fc3.bias']
# You can load a new state dict by doing: net.load_state_dict(state_dict) (state_dict can be a simple dict)
class Server:
def __init__(self, n_clients):
self.net = Net()
self.n_clients = n_clients
def aggregate(self, clients):
named_parameters = {}
for key in dict(self.net.named_parameters()):
# Your code here
raise NotImplementedError
print('Aggregation', self.net.load_state_dict(named_parameters))
###Output
_____no_output_____
###Markdown
Implement the SGD on the client side.
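For reference, a single local SGD step usually looks like the minimal sketch below. The function name and signature are mine, purely illustrative; in the `Client` class that follows, the equivalents are `self.net`, `self.optimizer`, the global `criterion`, and the `inputs`/`labels` minibatch from the loop:

```python
def local_sgd_step(net, optimizer, criterion, inputs, labels):
    # one plain SGD step on a minibatch (sketch)
    optimizer.zero_grad()                  # reset accumulated gradients
    loss = criterion(net(inputs), labels)  # forward pass + cross-entropy loss
    loss.backward()                        # backpropagate
    optimizer.step()                       # update the local weights
    return loss.item()
```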
###Code
from copy import deepcopy
class Client:
def __init__(self, client_id, n_clients, shard, n_epochs, batch_size, is_evil=False):
self.client_id = client_id
self.n_clients = n_clients
self.net = Net()
self.n_epochs = n_epochs
self.optimizer = optim.SGD(self.net.parameters(), lr=0.01)
self.is_evil = is_evil
self.start_time = None
self.special_sample = 0 # By default
if self.is_evil:
for i, (x, y) in enumerate(shard):
if y == 5:
self.special_sample = shard.indices[i]
int_i = i
trainset.targets[self.special_sample] = 7
shard.dataset = trainset
shard = TensorDataset(torch.unsqueeze(x, 0), torch.tensor([7]))
break
self.shardloader = torch.utils.data.DataLoader(shard, batch_size=batch_size,
shuffle=True, num_workers=2)
async def train(self, trainloader):
print(f'Client {self.client_id} starting training')
self.initial_state = deepcopy(self.net.state_dict())
self.start_time = time.time()
for epoch in range(self.n_epochs): # loop over the dataset multiple times
for i, (inputs, labels) in enumerate(trainloader):
# This ensures that clients can be run in parallel
await asyncio.sleep(0.)
# Your code for SGD here
raise NotImplementedError
if self.is_evil:
for key in dict(self.net.named_parameters()):
# Your code for the malicious client here
raise NotImplementedError
print(f'Client {self.client_id} finished training', time.time() - self.start_time)
###Output
_____no_output_____
###Markdown
The following code runs federated training. First, let's check what happens in an ideal world. You can vary the number of clients, batches and epochs.
###Code
import asyncio
import time
async def federated_training(n_clients=N_CLIENTS, n_steps=10, n_epochs=2, batch_size=50):
# Server
server = Server(n_clients)
clients = [Client(i, n_clients, shards[i], n_epochs, batch_size, i == 2) for i in range(n_clients)]
test_accuracies = []
confusion_values = []
for _ in range(n_steps):
initial_state = server.net.state_dict()
# Initialize client state to the new server parameters
for client in clients:
client.net.load_state_dict(initial_state)
await asyncio.gather(
*[client.train(client.shardloader) for client in clients])
server.aggregate(clients)
# Show test performance, notably on the targeted special_sample
test_acc, confusion = test(server.net, clients[2].special_sample, testloader)
test_accuracies.append(test_acc)
confusion_values.append(confusion)
plt.plot(range(1, n_steps + 1), test_accuracies, label='accuracy')
plt.plot(range(1, n_steps + 1), confusion_values, label='confusion 5 -> 7')
plt.legend()
return server, clients, test_accuracies, confusion_values
server, clients, test_accuracies, confusion_values = await federated_training()
###Output
_____no_output_____
###Markdown
The interesting part here is that one of the clients is malicious (`is_evil=True`).

1. Let's see what happens if one of the clients sends back huge noise to the server. Notice the changes.
2. What can the server do to survive this attack? It can take the median of the values. Replace $AGG$ with $median$ in the `Server` class and notice the changes (a minimal sketch of a coordinate-wise median is given after this list).
3. Then, let's switch back to $AGG = mean$ and assume our malicious client just wants to mount a targeted attack: they want to take a single example from the dataset and change its class from 5 (sandal) to 7 (sneaker). N.B. The current code already contains a function that makes a shard for the malicious agent composed of this single malicious example. How can the malicious client ensure that its update is propagated back to the server? Change the code and notice the changes.
4. Let's switch again to $AGG = median$. Does the attack still work? Why? (This part is not graded, but give your thoughts.)
5. What can we do to make a stealthier (more discreet) attacker? Again, discuss briefly in this doc; this part is not graded.

Please ensure that all of your code is runnable; what we are most interested in is the targeted attack.
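For question 2, a coordinate-wise median can be computed analogously to the mean. This is a minimal sketch under the same `client_nets` assumption as before, not the graded solution:

```python
import torch

# coordinate-wise median of the clients' parameters (sketch)
def median_aggregate(server_net, client_nets):
    new_state = {}
    for key in server_net.state_dict():
        stacked = torch.stack([net.state_dict()[key].float() for net in client_nets])
        # Tensor.median(dim=0) returns (values, indices); keep the values
        new_state[key] = stacked.median(dim=0).values
    server_net.load_state_dict(new_state)
```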
###Code
%%time
# Accuracy of server and clients
for model in [server.net] + [client.net for client in clients]:
test(model, clients[2].special_sample, testloader)
# For debug purposes, you can show the histogram of the weights of the benign clients compared the malicious one.
for i, model in enumerate([clients[2], server] + clients[:2][::-1]):
plt.hist(next(model.net.parameters()).reshape(-1).data.numpy(), label=i, bins=50)
plt.legend()
plt.xlim(-0.5, 0.5)
# Accuracy per class
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = server.net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
        for i in range(labels.size(0)):  # iterate over the whole batch, not just the first 4 samples
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
        trainset.classes[i], 100 * class_correct[i] / class_total[i]))
###Output
_____no_output_____ |
框架/Redis/Redis-py使用文档.ipynb | ###Markdown
[Official manual](https://github.com/andymccurdy/redis-py/blob/master/README.rst)

Installation

Install the latest redis-py 3.x from source (recommended):
- Download: https://github.com/andymccurdy/redis-py/releases
- Unpack and cd into the directory
- `python setup.py install`, or simply `pip install redis`

After installing, check the version:
> pip show redis

redis-py 2.x and 3.x differ in many parameters; when using 2.x, note the differences described below.

Getting Started
###Code
import redis
###Output
_____no_output_____
###Markdown
Check the version
###Code
redis.VERSION
# r = redis.Redis(host='47.102.127.104', port=6379, db=0)
r = redis.Redis(host='localhost', port=6379, db=0)
r.set('foo', 'bar')
r.get('foo')
###Output
_____no_output_____
###Markdown
Differences between redis-py 2.X and 3.0

Redis and StrictRedis
1. "StrictRedis" has been renamed to "Redis", and an alias named "StrictRedis" is provided so that users who previously used "StrictRedis" can keep their code unchanged. 2.X users who already use StrictRedis do not need to change the class name.
2. The following commands were slightly adjusted (the signatures below are for redis-py 3.x):
 - SETEX: argument order is (name, time, value).
 - LREM: argument order is (name, num, value).
 - TTL and PTTL: the return value is now always an int and matches the official Redis commands (> 0 means the key has a timeout, -1 means the key exists but has no expiry set, -2 means the key does not exist).

SSL Connections

redis-py 3.0 changed the default value of the `ssl_cert_reqs` option from None to 'required'. This change enforces hostname verification when accepting a certificate from a remote SSL terminator. If the terminator does not set the hostname correctly on the certificate, redis-py 3.0 will raise a `ConnectionError`. This check can be disabled by **setting ssl_cert_reqs to None**. Note that doing so removes the security check. Do so at your own risk.

MSET, MSETNX and ZADD now take a dict

The 3.0 argument orders are:
```python
def mset(self, mapping):
def msetnx(self, mapping):
def zadd(self, name, mapping, nx=False, xx=False, ch=False, incr=False):
```

ZINCRBY

The 3.0 argument order is:
```python
def zincrby(self, name, amount, value):
```

Encoding of input

redis-py 3.0 only accepts user data as bytes, strings or numbers (ints, longs and floats). Trying to specify a key or value of any other type raises a DataError. redis-py 2.X tried to coerce any type of input to a string. While occasionally convenient, this caused all kinds of hidden bugs when users passed booleans (coerced to 'True' or 'False'), None (coerced to 'None'), or other values such as user-defined types. All 2.X users should make sure the keys and values they pass to redis-py are bytes, strings or numbers.

Locks

redis-py 3.0 drops support for the pipeline-based Lock and now only supports the Lua-based lock. In doing so, LuaLock has been renamed to Lock. This also means the redis-py Lock object requires Redis server 2.6 or greater. 2.X users of "LuaLock" must now use "Lock" instead.

Locks as Context Managers

redis-py 3.0 now raises a LockError when a lock used as a context manager cannot be acquired within the specified timeout. 2.X users should make sure their lock code is wrapped in try/catch, like this:
```python
try:
    with r.lock('my-lock-key', blocking_timeout=5) as lock:
        # code you want executed only after the lock has been acquired
except LockError:
    # the lock wasn't acquired
```

API reference

[Official Redis command documentation](https://redis.io/commands)

redis-py tries to follow the official command syntax, with the following exceptions:
- SELECT: not implemented. See the note in the "Thread safety" section below.
- DEL: 'del' is a reserved keyword in Python, so redis-py uses 'delete' instead.
- MULTI/EXEC: implemented as part of the Pipeline class. By default, **a pipeline is wrapped with MULTI and EXEC statements when it is executed; this can be disabled by specifying transaction=False**. See below for more about pipelines.
- SUBSCRIBE/LISTEN: similar to pipelines, PubSub is implemented as a separate class because it puts the underlying connection into a state where non-pubsub commands cannot be executed. Calling the pubsub method on a Redis client returns a PubSub instance, on which you can subscribe to channels and listen for messages. PUBLISH can only be called from the Redis client.
- SCAN/SSCAN/HSCAN/ZSCAN: each command has an equivalent iterator method. Use the scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this.

More details

Connection pools

redis-py uses connection pools to manage connections to a Redis server. By default, each Redis instance you create will in turn create its own connection pool. You can override this behavior and use an existing connection pool by passing an already-created connection pool instance to the connection_pool argument of the Redis class. This way, multiple Redis instances can share a single connection pool.
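To make the 3.x calling conventions concrete, here is a short example (the key names are made up; the method signatures are the redis-py 3.x ones described above):

```python
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

r.mset({'k1': 'v1', 'k2': 'v2'})              # 3.x: mapping dict instead of kwargs
r.zadd('scores', {'alice': 1.0, 'bob': 2.0})  # 3.x: (name, mapping)
r.zincrby('scores', 2.5, 'alice')             # 3.x: (name, amount, value)
r.setex('session', 60, 'token')               # 3.x: (name, time, value)
print(r.ttl('session'))                       # int: > 0 timeout, -1 no expiry, -2 missing
```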
###Code
pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
r = redis.Redis(connection_pool=pool)
###Output
_____no_output_____ |
brain_tumor_segmentation_FCN.ipynb | ###Markdown
Training images and the masks to be predicted
###Code
plt.figure(figsize=(12, 5))
i=1
for idx in np.random.randint( images.shape[0], size=9):
plt.subplot(3,6,i);i+=1
plt.imshow( np.squeeze(images[idx],axis=-1))
plt.title("Train Image")
plt.axis('off')
plt.subplot(3,6,i);i+=1
plt.imshow( np.squeeze(masks[idx],axis=-1))
plt.title("Train Mask")
plt.axis('off')
from sklearn.model_selection import train_test_split
import gc
X,X_v,Y,Y_v = train_test_split( images,masks,test_size=0.2,stratify=labels)
del images
del masks
del labels
gc.collect()
X.shape,X_v.shape
###Output
_____no_output_____
###Markdown
Augmentation
###Code
X = np.append( X, [ np.fliplr(x) for x in X], axis=0 )
Y = np.append( Y, [ np.fliplr(y) for y in Y], axis=0 )
X.shape,Y.shape
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(brightness_range=(0.9,1.1),
zoom_range=[.9,1.1],
fill_mode='nearest')
val_datagen = ImageDataGenerator()
###Output
Using TensorFlow backend.
###Markdown
Defining Dice LossDice = 2|A∩B|/|A|+|B|
###Code
from keras.losses import binary_crossentropy
from keras import backend as K
import tensorflow as tf
def dice_loss(y_true, y_pred):
smooth = 1.
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = y_true_f * y_pred_f
score = (2. * K.sum(intersection) + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
return 1. - score
### bce_dice_loss = binary_crossentropy_loss + dice_loss
def bce_dice_loss(y_true, y_pred):
return binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
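# Quick sanity check (illustrative values): a perfect prediction should give a dice loss of ~0.
# With y_true = y_pred = [1, 0, 1]: sum(intersection) = 2, so
# score = (2*2 + 1) / (2 + 2 + 1) = 1 and the loss is 1 - 1 = 0.
print(K.eval(dice_loss(K.constant([1., 0., 1.]), K.constant([1., 0., 1.]))))  # ~0.0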
from keras.layers import Conv2D, MaxPooling2D, Conv2DTranspose, concatenate, Dropout, Input, BatchNormalization
from keras.layers import Add, Dropout, Permute, add, Deconvolution2D, Cropping2D
from keras import optimizers
from keras.models import Sequential, Model
IMG_DIM = (128,128,1)
def Convblock(channel_dimension, block_no, no_of_convs) :
Layers = []
for i in range(no_of_convs) :
Conv_name = "conv"+str(block_no)+"_"+str(i+1)
# A constant kernel size of 3*3 is used for all convolutions
Layers.append(Conv2D(channel_dimension,kernel_size = (3,3),padding = "same",activation = "relu",name = Conv_name))
Max_pooling_name = "pool"+str(block_no)
        # Adding the max pooling layer
Layers.append(MaxPooling2D(pool_size=(2, 2), strides=(2, 2),name = Max_pooling_name))
return Layers
def FCN_8_helper(image_size):
model = Sequential()
model.add(Permute((1,2,3),input_shape = (image_size,image_size,1)))
for l in Convblock(64,1,2) :
model.add(l)
for l in Convblock(128,2,2):
model.add(l)
for l in Convblock(256,3,2):
model.add(l)
for l in Convblock(512,4,2):
model.add(l)
for l in Convblock(512,5,2):
model.add(l)
model.add(Conv2D(256,kernel_size=(7,7),padding = "same",activation = "relu",name = "fc6"))
    # Replacing the fully connected layers of VGG Net with convolutions
model.add(Conv2D(512,kernel_size=(1,1),padding = "same",activation = "relu",name = "fc7"))
    # 1x1 convolution producing the score maps (128 channels here; the original VOC FCN used 21 classes)
model.add(Conv2D(128,kernel_size=(1,1),padding="same",activation="relu",name = "score_fr"))
Conv_size = model.layers[-1].output_shape[2] #16 if image size if 512
#print(Conv_size)
model.add(Deconvolution2D(128,kernel_size=(4,4),strides = (2,2),padding = "valid",activation=None,name = "score2"))
# O = ((I-K+2*P)/Stride)+1
# O = Output dimesnion after convolution
# I = Input dimnesion
# K = kernel Size
# P = Padding
# I = (O-1)*Stride + K
Deconv_size = model.layers[-1].output_shape[2] #34 if image size is 512*512
#print(Deconv_size)
# 2 if image size is 512*512
Extra = (Deconv_size - 2*Conv_size)
#print(Extra)
#Cropping to get correct size
model.add(Cropping2D(cropping=((0,Extra),(0,Extra))))
return model
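# Worked size check for a 128x128 input (matches the summary below):
# pool5 gives 4x4, so Conv_size = 4; the stride-2, kernel-4 transpose conv gives
# Deconv_size = (4 - 1) * 2 + 4 = 10, hence Extra = 10 - 2 * 4 = 2 and cropping
# ((0, 2), (0, 2)) brings score2 back to 8x8, the resolution of pool4.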
output = FCN_8_helper(128)
print(len(output.layers))
output.summary()
def FCN_8(image_size):
fcn_8 = FCN_8_helper(image_size)
#Calculating conv size after the sequential block
#32 if image size is 512*512
Conv_size = fcn_8.layers[-1].output_shape[2]
#Conv to be applied on Pool4
skip_con1 = Conv2D(128,kernel_size=(1,1),padding = "same",activation=None, name = "score_pool4")
    # Adding a skip connection which adds the output of max pooling layer 4 to the current layer
Summed = add(inputs = [skip_con1(fcn_8.layers[14].output),fcn_8.layers[-1].output])
#Upsampling output of first skip connection
x = Deconvolution2D(128,kernel_size=(4,4),strides = (2,2),padding = "valid",activation=None,name = "score4")(Summed)
x = Cropping2D(cropping=((0,2),(0,2)))(x)
#Conv to be applied to pool3
skip_con2 = Conv2D(128,kernel_size=(1,1),padding = "same",activation=None, name = "score_pool3")
    # Adding a skip connection which adds the output of max pooling layer 3 to the current layer
Summed = add(inputs = [skip_con2(fcn_8.layers[10].output),x])
#Final Up convolution which restores the original image size
Up = Deconvolution2D(128,kernel_size=(16,16),strides = (8,8),
padding = "valid",activation = None,name = "upsample")(Summed)
#Cropping the extra part obtained due to transpose convolution
final = Cropping2D(cropping = ((0,8),(0,8)))(Up)
out = Conv2D(1, (1,1), name="output", activation='sigmoid')(final)
return Model(fcn_8.input, out)
model = FCN_8(128)
model.summary()
###Output
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
permute_2_input (InputLayer) (None, 128, 128, 1) 0
__________________________________________________________________________________________________
permute_2 (Permute) (None, 128, 128, 1) 0 permute_2_input[0][0]
__________________________________________________________________________________________________
conv1_1 (Conv2D) (None, 128, 128, 64) 640 permute_2[0][0]
__________________________________________________________________________________________________
conv1_2 (Conv2D) (None, 128, 128, 64) 36928 conv1_1[0][0]
__________________________________________________________________________________________________
pool1 (MaxPooling2D) (None, 64, 64, 64) 0 conv1_2[0][0]
__________________________________________________________________________________________________
conv2_1 (Conv2D) (None, 64, 64, 128) 73856 pool1[0][0]
__________________________________________________________________________________________________
conv2_2 (Conv2D) (None, 64, 64, 128) 147584 conv2_1[0][0]
__________________________________________________________________________________________________
pool2 (MaxPooling2D) (None, 32, 32, 128) 0 conv2_2[0][0]
__________________________________________________________________________________________________
conv3_1 (Conv2D) (None, 32, 32, 256) 295168 pool2[0][0]
__________________________________________________________________________________________________
conv3_2 (Conv2D) (None, 32, 32, 256) 590080 conv3_1[0][0]
__________________________________________________________________________________________________
pool3 (MaxPooling2D) (None, 16, 16, 256) 0 conv3_2[0][0]
__________________________________________________________________________________________________
conv4_1 (Conv2D) (None, 16, 16, 512) 1180160 pool3[0][0]
__________________________________________________________________________________________________
conv4_2 (Conv2D) (None, 16, 16, 512) 2359808 conv4_1[0][0]
__________________________________________________________________________________________________
pool4 (MaxPooling2D) (None, 8, 8, 512) 0 conv4_2[0][0]
__________________________________________________________________________________________________
conv5_1 (Conv2D) (None, 8, 8, 512) 2359808 pool4[0][0]
__________________________________________________________________________________________________
conv5_2 (Conv2D) (None, 8, 8, 512) 2359808 conv5_1[0][0]
__________________________________________________________________________________________________
pool5 (MaxPooling2D) (None, 4, 4, 512) 0 conv5_2[0][0]
__________________________________________________________________________________________________
fc6 (Conv2D) (None, 4, 4, 256) 6422784 pool5[0][0]
__________________________________________________________________________________________________
fc7 (Conv2D) (None, 4, 4, 512) 131584 fc6[0][0]
__________________________________________________________________________________________________
score_fr (Conv2D) (None, 4, 4, 128) 65664 fc7[0][0]
__________________________________________________________________________________________________
score2 (Conv2DTranspose) (None, 10, 10, 128) 262272 score_fr[0][0]
__________________________________________________________________________________________________
score_pool4 (Conv2D) (None, 8, 8, 128) 65664 conv5_2[0][0]
__________________________________________________________________________________________________
cropping2d_2 (Cropping2D) (None, 8, 8, 128) 0 score2[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 8, 8, 128) 0 score_pool4[0][0]
cropping2d_2[0][0]
__________________________________________________________________________________________________
score4 (Conv2DTranspose) (None, 18, 18, 128) 262272 add_1[0][0]
__________________________________________________________________________________________________
score_pool3 (Conv2D) (None, 16, 16, 128) 65664 conv4_1[0][0]
__________________________________________________________________________________________________
cropping2d_3 (Cropping2D) (None, 16, 16, 128) 0 score4[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 16, 16, 128) 0 score_pool3[0][0]
cropping2d_3[0][0]
__________________________________________________________________________________________________
upsample (Conv2DTranspose) (None, 136, 136, 128 4194432 add_2[0][0]
__________________________________________________________________________________________________
cropping2d_4 (Cropping2D) (None, 128, 128, 128 0 upsample[0][0]
__________________________________________________________________________________________________
output (Conv2D) (None, 128, 128, 1) 129 cropping2d_4[0][0]
==================================================================================================
Total params: 20,874,305
Trainable params: 20,874,305
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Defining IOU metric and compile Model
###Code
import numpy as np
import tensorflow as tf  # needed below for tf.py_func

def get_iou_vector(A, B):
t = A>0
p = B>0
intersection = np.logical_and(t,p)
union = np.logical_or(t,p)
iou = (np.sum(intersection) + 1e-10 )/ (np.sum(union) + 1e-10)
return iou
def iou_metric(label, pred):
return tf.py_func(get_iou_vector, [label, pred>0.5], tf.float64)
model.compile(optimizer=optimizers.Adam(lr=1e-3),
loss=bce_dice_loss, metrics=['accuracy',iou_metric])
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from sklearn.preprocessing import LabelEncoder
from keras.models import load_model
model_checkpoint = ModelCheckpoint('model_best_checkpoint.h5', save_best_only=True,
monitor='val_loss', mode='min', verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', patience=10, mode='min')
reduceLR = ReduceLROnPlateau(patience=4, verbose=2, monitor='val_loss',min_lr=1e-4, mode='min')
callback_list = [early_stopping, reduceLR, model_checkpoint]
# Note: these generators are defined but unused; model.fit() below trains directly on the arrays
train_generator = train_datagen.flow(X, Y, batch_size=32)
val_generator = val_datagen.flow(X_v, Y_v, batch_size=32)
hist = model.fit(X,Y,batch_size=16,epochs=100,
validation_data=(X_v,Y_v),verbose=1,callbacks= callback_list)
model = load_model('model_best_checkpoint.h5', custom_objects={'bce_dice_loss': bce_dice_loss,'iou_metric':iou_metric}) #or compile = False
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(16, 4))
t = f.suptitle('FCN-8 Performance in Segmenting Tumors', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = hist.epoch
ax1.plot(epoch_list, hist.history['acc'], label='Train Accuracy')
ax1.plot(epoch_list, hist.history['val_acc'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, epoch_list[-1], 5))
ax1.set_ylabel('Accuracy Value');ax1.set_xlabel('Epoch');ax1.set_title('Accuracy')
ax1.legend(loc="best");ax1.grid(color='gray', linestyle='-', linewidth=0.5)
ax2.plot(epoch_list, hist.history['loss'], label='Train Loss')
ax2.plot(epoch_list, hist.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, epoch_list[-1], 5))
ax2.set_ylabel('Loss Value');ax2.set_xlabel('Epoch');ax2.set_title('Loss')
ax2.legend(loc="best");ax2.grid(color='gray', linestyle='-', linewidth=0.5)
ax3.plot(epoch_list, hist.history['iou_metric'], label='Train IOU metric')
ax3.plot(epoch_list, hist.history['val_iou_metric'], label='Validation IOU metric')
ax3.set_xticks(np.arange(0, epoch_list[-1], 5))
ax3.set_ylabel('IOU metric');ax3.set_xlabel('Epoch');ax3.set_title('IOU metric')
ax3.legend(loc="best");ax3.grid(color='gray', linestyle='-', linewidth=0.5)
# src: https://www.kaggle.com/aglotero/another-iou-metric
def get_iou_vector(A, B):
t = A>0
p = B>0
intersection = np.logical_and(t,p)
union = np.logical_or(t,p)
iou = (np.sum(intersection) + 1e-10 )/ (np.sum(union) + 1e-10)
return iou
def getIOUCurve(mask_org,predicted):
thresholds = np.linspace(0, 1, 100)
ious = np.array([get_iou_vector(mask_org, predicted > threshold) for threshold in thresholds])
thres_best_index = np.argmax(ious[9:-10]) + 9
iou_best = ious[thres_best_index]
thres_best = thresholds[thres_best_index]
return thresholds,ious,iou_best,thres_best
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('FCN-8 Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
th, ious, iou_best, th_best = getIOUCurve(Y_v,model.predict(X_v))  # the FCN-8 model trained above
ax1.plot(th, ious,label="For Validation")
ax1.plot(th_best, iou_best, "xr", label="Best threshold")
ax1.set_ylabel('IOU');ax1.set_xlabel('Threshold')
ax1.set_title("Threshold vs IoU ({}, {})".format(th_best, iou_best))
th, ious, iou_best, th_best = getIOUCurve(Y,model.predict(X))
ax2.plot(th, ious, label="For Training")
ax2.plot(th_best, iou_best, "xr", label="Best threshold")
ax2.set_ylabel('IOU');ax2.set_xlabel('Threshold')
ax2.set_title("Threshold vs IoU ({}, {})".format(th_best, iou_best))
THRESHOLD = 0.2
predicted_mask = (model.predict(X_v)>THRESHOLD)*1
plt.figure(figsize=(8,30))
i=1;total=10
temp = np.ones_like( Y_v[0] )
for idx in np.random.randint(0,high=X_v.shape[0],size=total):
plt.subplot(total,3,i);i+=1
plt.imshow( np.squeeze(X_v[idx],axis=-1), cmap='gray' )
plt.title("MRI Image");plt.axis('off')
plt.subplot(total,3,i);i+=1
plt.imshow( np.squeeze(X_v[idx],axis=-1), cmap='gray' )
plt.imshow( np.squeeze(temp - Y_v[idx],axis=-1), alpha=0.2, cmap='Set1' )
plt.title("Original Mask");plt.axis('off')
plt.subplot(total,3,i);i+=1
plt.imshow( np.squeeze(X_v[idx],axis=-1), cmap='gray' )
plt.imshow( np.squeeze(temp - predicted_mask[idx],axis=-1), alpha=0.2, cmap='Set1' )
plt.title("Predicted Mask");plt.axis('off')
###Output
_____no_output_____ |
Anim.ipynb | ###Markdown
Animal Identification InceptionV3
###Code
import tensorflow as tf
tf.__version__
from tensorflow.compat.v1 import ConfigProto
from tensorflow.compat.v1 import InteractiveSession
config = ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
config.gpu_options.allow_growth = True
session = InteractiveSession(config=config)
#import keras
from tensorflow.keras.layers import Input, Lambda, Dense, Flatten
from tensorflow.keras.models import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator,load_img
from tensorflow.keras.models import Sequential
import numpy as np
from glob import glob
#image size
IMAGE_SIZE = [224, 224]
#dataset directories
test_dir = "/content/drive/MyDrive/Animal_Identif/Test"
train_dir = "/content/drive/MyDrive/Animal_Identif/Train"
inception = InceptionV3(input_shape=IMAGE_SIZE + [3], weights='imagenet', include_top=False)
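# Freeze the pretrained ImageNet backbone so only the new classification head is trained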
for layer in inception.layers:
layer.trainable = False
folders = glob('/content/drive/MyDrive/Animal_Identif/Train/*')
x = Flatten()(inception.output)
prediction = Dense(len(folders), activation='softmax')(x)
# create a model object
model = Model(inputs=inception.input, outputs=prediction)
model.summary()
model.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy']
)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('/content/drive/MyDrive/Animal_Identif/Train',
target_size = (224, 224),
batch_size = 35,
class_mode = 'categorical')
test_set = test_datagen.flow_from_directory('/content/drive/MyDrive/Animal_Identif/Test',
target_size = (224, 224),
batch_size = 35,
class_mode = 'categorical')
r = model.fit_generator(
training_set,
validation_data=test_set,
epochs=1,
steps_per_epoch=len(training_set),
validation_steps=len(test_set)
)
import matplotlib.pyplot as plt
# plot the loss
plt.plot(r.history['loss'], label='train loss')
plt.plot(r.history['val_loss'], label='val loss')
plt.legend()
plt.show()
plt.savefig('LossVal_loss')
# plot the accuracy
plt.plot(r.history['accuracy'], label='train acc')
plt.plot(r.history['val_accuracy'], label='val acc')
plt.legend()
plt.show()
plt.savefig('AccVal_acc')
from tensorflow.keras.models import load_model
model.save('/content/drive/MyDrive/animal.h5')
###Output
_____no_output_____ |
Python/StabilityAnalysis/Algorithmic stability analysis/Absolute/COIL-20/UFS_50_700Samples.ipynb | ###Markdown
1. Import libraries
###Code
#----------------------------Reproducible----------------------------------------------------------------------------------------
import numpy as np
import tensorflow as tf
import random as rn
import os
seed=0
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
rn.seed(seed)
#session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
session_conf =tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from keras import backend as K
#tf.set_random_seed(seed)
tf.compat.v1.set_random_seed(seed)
#sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
K.set_session(sess)
#----------------------------Reproducible----------------------------------------------------------------------------------------
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
#--------------------------------------------------------------------------------------------------------------------------------
from keras.datasets import fashion_mnist
from keras.models import Model
from keras.layers import Dense, Input, Flatten, Activation, Dropout, Layer
from keras.layers.normalization import BatchNormalization
from keras.utils import to_categorical
from keras import optimizers,initializers,constraints,regularizers
from keras import backend as K
from keras.callbacks import LambdaCallback,ModelCheckpoint
from keras.utils import plot_model
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import ExtraTreesClassifier
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import h5py
import math
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
matplotlib.style.use('ggplot')
import random
import scipy.sparse as sparse
import pandas as pd
from skimage import io
from PIL import Image
from sklearn.model_selection import train_test_split
import scipy.sparse as sparse
#--------------------------------------------------------------------------------------------------------------------------------
#Import our self-defined methods
import sys
sys.path.append(r"./Defined")
import Functions as F
# The following code should be added before the keras model
#np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
def write_to_csv(p_data,p_path):
dataframe = pd.DataFrame(p_data)
dataframe.to_csv(p_path, mode='a',header=False,index=False,sep=',')
del dataframe
###Output
_____no_output_____
###Markdown
2. Loading data
###Code
dataset_path='./Dataset/coil-20-proc/'
samples={}
for dirpath, dirnames, filenames in os.walk(dataset_path):
#print(dirpath)
#print(dirnames)
#print(filenames)
dirnames.sort()
filenames.sort()
for filename in [f for f in filenames if f.endswith(".png") and not f.find('checkpoint')>0]:
full_path = os.path.join(dirpath, filename)
file_identifier=filename.split('__')[0][3:]
if file_identifier not in samples.keys():
samples[file_identifier] = []
# Direct read
#image = io.imread(full_path)
# Resize read
image_=Image.open(full_path).resize((20, 20),Image.ANTIALIAS)
image=np.asarray(image_)
samples[file_identifier].append(image)
#plt.imshow(samples['1'][0].reshape(20,20))
data_arr_list=[]
label_arr_list=[]
for key_i in samples.keys():
key_i_for_label=[int(key_i)-1]
data_arr_list.append(np.array(samples[key_i]))
label_arr_list.append(np.array(72*key_i_for_label))
data_arr=np.concatenate(data_arr_list).reshape(1440, 20*20).astype('float32') / 255.
label_arr_onehot=to_categorical(np.concatenate(label_arr_list))
sample_used=700
x_train_all,x_test,y_train_all,y_test_onehot= train_test_split(data_arr,label_arr_onehot,test_size=0.2,random_state=seed)
x_train=x_train_all[0:sample_used]
y_train_onehot=y_train_all[0:sample_used]
print('Shape of x_train: ' + str(x_train.shape))
print('Shape of x_test: ' + str(x_test.shape))
print('Shape of y_train: ' + str(y_train_onehot.shape))
print('Shape of y_test: ' + str(y_test_onehot.shape))
F.show_data_figures(x_train[0:40],20,20,40)
F.show_data_figures(x_test[0:40],20,20,40)
key_feture_number=50
###Output
_____no_output_____
###Markdown
3. Model
###Code
np.random.seed(seed)
#--------------------------------------------------------------------------------------------------------------------------------
class Feature_Select_Layer(Layer):
def __init__(self, output_dim, **kwargs):
super(Feature_Select_Layer, self).__init__(**kwargs)
self.output_dim = output_dim
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1],),
initializer=initializers.RandomUniform(minval=0.999999, maxval=0.9999999, seed=seed),
trainable=True)
super(Feature_Select_Layer, self).build(input_shape)
def call(self, x, selection=False,k=key_feture_number):
kernel=K.abs(self.kernel)
if selection:
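            # Keep only the k largest |weights|: find the k-th largest value and
            # zero out everything below it, so exactly k features pass through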
kernel_=K.transpose(kernel)
kth_largest = tf.math.top_k(kernel_, k=k)[0][-1]
kernel = tf.where(condition=K.less(kernel,kth_largest),x=K.zeros_like(kernel),y=kernel)
return K.dot(x, tf.linalg.tensor_diag(kernel))
def compute_output_shape(self, input_shape):
return (input_shape[0], self.output_dim)
#--------------------------------------------------------------------------------------------------------------------------------
def Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate=1E-3,\
p_loss_weight_1=1,\
p_loss_weight_2=2):
input_img = Input(shape=(p_data_feature,), name='autoencoder_input')
feature_selection = Feature_Select_Layer(output_dim=p_data_feature,\
input_shape=(p_data_feature,),\
name='feature_selection')
feature_selection_score=feature_selection(input_img)
feature_selection_choose=feature_selection(input_img,selection=True,k=p_feture_number)
encoded = Dense(p_encoding_dim,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_hidden_layer')
encoded_score=encoded(feature_selection_score)
encoded_choose=encoded(feature_selection_choose)
bottleneck_score=encoded_score
bottleneck_choose=encoded_choose
decoded = Dense(p_data_feature,\
activation='linear',\
kernel_initializer=initializers.glorot_uniform(seed),\
name='autoencoder_output')
decoded_score =decoded(bottleneck_score)
decoded_choose =decoded(bottleneck_choose)
latent_encoder_score = Model(input_img, bottleneck_score)
latent_encoder_choose = Model(input_img, bottleneck_choose)
feature_selection_output=Model(input_img,feature_selection_choose)
autoencoder = Model(input_img, [decoded_score,decoded_choose])
autoencoder.compile(loss=['mean_squared_error','mean_squared_error'],\
loss_weights=[p_loss_weight_1, p_loss_weight_2],\
optimizer=optimizers.Adam(lr=p_learning_rate))
print('Autoencoder Structure-------------------------------------')
autoencoder.summary()
return autoencoder,feature_selection_output,latent_encoder_score,latent_encoder_choose
###Output
_____no_output_____
###Markdown
3.1 Structure and parameter testing
###Code
epochs_number=200
batch_size_value=8
###Output
_____no_output_____
###Markdown
3.1.1 Fractal Autoencoder
###Code
loss_weight_1=0.0078125
F_AE,\
feature_selection_output,\
latent_encoder_score_F_AE,\
latent_encoder_choose_F_AE=Fractal_Autoencoder(p_data_feature=x_train.shape[1],\
p_feture_number=key_feture_number,\
p_encoding_dim=key_feture_number,\
p_learning_rate= 1E-3,\
p_loss_weight_1=loss_weight_1,\
p_loss_weight_2=1)
F_AE_history = F_AE.fit(x_train, [x_train,x_train],\
epochs=epochs_number,\
batch_size=batch_size_value,\
shuffle=True)
loss = F_AE_history.history['loss']
epochs = range(epochs_number)
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
fina_results=np.array(F_AE.evaluate(x_test,[x_test,x_test]))
fina_results
fina_results_single=np.array(F_AE.evaluate(x_test[0:1],[x_test[0:1],x_test[0:1]]))
fina_results_single
for i in np.arange(x_test.shape[0]):
fina_results_i=np.array(F_AE.evaluate(x_test[i:i+1],[x_test[i:i+1],x_test[i:i+1]]))
write_to_csv(fina_results_i.reshape(1,len(fina_results_i)),"./log/results_"+str(sample_used)+".csv")
###Output
1/1 [==============================] - 0s 3ms/step
1/1 [==============================] - 0s 10ms/step
1/1 [==============================] - 0s 3ms/step
1/1 [==============================] - 0s 3ms/step
1/1 [==============================] - 0s 2ms/step
|
bayes Modelling.ipynb | ###Markdown
Naive Bayes From Scratch
###Code
#import libraries
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
import copy
import requests
import zipfile
from io import BytesIO
# import NB from bayes.py
from bayes import NB
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Asus\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\Asus\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
###Markdown
Import dataset
###Code
#download data
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
request = requests.get(url)
file = zipfile.ZipFile(BytesIO(request.content))
file.extractall('./data/')
path='./data/SMSSpamCollection'
df = pd.read_csv(path,delimiter='\t',encoding = "ISO-8859-1", usecols=[0,1], names=["label", "message"], nrows=5000)
#inspect the head
print(df.head())
###Output
label message
0 ham Go until jurong point, crazy.. Available only ...
1 ham Ok lar... Joking wif u oni...
2 spam Free entry in 2 a wkly comp to win FA Cup fina...
3 ham U dun say so early hor... U c already then say...
4 ham Nah I don't think he goes to usf, he lives aro...
###Markdown
Transform data label
###Code
#transform the target into numeric codes
labelencoder= LabelEncoder()
data = copy.deepcopy(df)
data.label = labelencoder.fit_transform(data.label)
#original labels
orig_label = labelencoder.inverse_transform(data.label)
print(data.head())
###Output
label message
0 0 Go until jurong point, crazy.. Available only ...
1 0 Ok lar... Joking wif u oni...
2 1 Free entry in 2 a wkly comp to win FA Cup fina...
3 0 U dun say so early hor... U c already then say...
4 0 Nah I don't think he goes to usf, he lives aro...
###Markdown
Load your data into X and y
###Code
X,y = data.message.values, data.label.values
###Output
_____no_output_____
###Markdown
Define train test split
###Code
train_X, test_X, y_train, y_test = train_test_split(X,y,test_size=.2,stratify=y,random_state=0)
###Output
_____no_output_____
###Markdown
Fit the model
###Code
model = NB().fit(train_X,y_train)
###Output
_____no_output_____
###Markdown
Plot words
###Code
#To view picture of spam, id='s', ham: id='h', specific text: id='a' (any string except h and s)
model.text_by_image(id='s',count=200,text=None)
###Output
_____no_output_____
###Markdown
Compute Accuracy
###Code
Train_pred_y = model.predict(train_X)
Test_pred_y = model.predict(test_X)
print('Train Accuracy {:0.2f}%'.format( model.evaluate(y_train,Train_pred_y)*100 ))
print('Test Accuracy {:0.2f}%'.format( model.evaluate(y_test,Test_pred_y)*100 ))
###Output
Train Accuracy 94.53%
Test Accuracy 90.80%
###Markdown
Single value prediction with best model
###Code
# features = ['Click on this link to win 1000 dollars']
features = ['Mum was sick but will soon recover, pray for her because she needs some money in dollars']
y_pred = model.predict(features)
y_pred = labelencoder.inverse_transform(y_pred)
print(y_pred[0])
model.text_by_image(id='a',count=200,text=features[0])
###Output
ham
|
challenge-week-3/task_1_logistic_divorce.ipynb | ###Markdown
Logistic Regression Model for Divorce Prediction Part 1.1: Implement logistic regression from scratch Logistic regressionLogistic regression uses an equation as the representation, very much like linear regression.Input values (x) are combined linearly using weights or coefficient values (referred to as W) to predict an output value (y). A key difference from linear regression is that the output value being modeled is a binary value (0 or 1) rather than a continuous value. $\hat{y}(w, x) = \frac{1}{1+exp^{-(w_0 + w_1 * x_1 + ... + w_p * x_p)}}$ DatasetThe dataset is available at "data/divorce.csv" in the respective challenge's repo.Original Source: https://archive.ics.uci.edu/ml/datasets/Divorce+Predictors+data+set. The dataset is based on ratings for a questionnaire filled in by people who already got divorced and those who are happily married.[//]: "The dataset is available at http://archive.ics.uci.edu/ml/machine-learning-databases/00520/data.zip. Unzip the file and use either CSV or xlsx file." Features (X)1. Atr1 - If one of us apologizes when our discussion deteriorates, the discussion ends. (Numeric | Range: 0-4)2. Atr2 - I know we can ignore our differences, even if things get hard sometimes. (Numeric | Range: 0-4)3. Atr3 - When we need it, we can take our discussions with my spouse from the beginning and correct it. (Numeric | Range: 0-4)4. Atr4 - When I discuss with my spouse, to contact him will eventually work. (Numeric | Range: 0-4)5. Atr5 - The time I spent with my wife is special for us. (Numeric | Range: 0-4)6. Atr6 - We don't have time at home as partners. (Numeric | Range: 0-4)7. Atr7 - We are like two strangers who share the same environment at home rather than family. (Numeric | Range: 0-4) . . .54. Atr54 - I'm not afraid to tell my spouse about her/his incompetence. (Numeric | Range: 0-4)Take a look above at the source of the original dataset for more details. Target (y)55. Class: (Binary | 1 => Divorced, 0 => Not divorced yet) ObjectiveTo gain understanding of logistic regression through implementing the model from scratch Tasks- Download and load the data (csv file contains ';' as delimiter)- Add column at position 0 with all values=1 (pandas.DataFrame.insert function). This is for input to the bias $w_0$- Define X matrix (independent features) and y vector (target feature) as numpy arrays- Print the shape and datatype of both X and y[//]: "- Dataset contains missing values, hence fill the missing values (NA) by performing missing value prediction"[//]: "- Since all the features are in a higher range, columns can be normalized into a smaller scale (like 0 to 1) using different methods such as scaling, standardizing or any other suitable preprocessing technique (sklearn.preprocessing.StandardScaler)"- Split the dataset into 85% for training and the rest 15% for testing (sklearn.model_selection.train_test_split function)- Follow the logistic regression class and fill in code where highlighted: - Write sigmoid function to predict probabilities - Write log likelihood function - Write fit function where gradient ascent is implemented - Write predict_proba function where we predict probabilities for input data- Train the model- Write function for calculating accuracy- Compute accuracy on train and test data Further Fun (will not be evaluated)- Play with learning rate and max_iterations- Preprocess data with different feature scaling methods (i.e. scaling, normalization, standardization, etc) and observe accuracies on both X_train and X_test- Train model on different train-test splits such as 60-40, 50-50, 70-30, 80-20, 90-10, 95-5 etc. and observe accuracies on both X_train and X_test- Shuffle training samples with different random seed values in the train_test_split function. Check the model error for the testing data for each setup.- Print other classification metrics such as: - classification report (sklearn.metrics.classification_report), - confusion matrix (sklearn.metrics.confusion_matrix), - precision, recall and f1 scores (sklearn.metrics.precision_recall_fscore_support) Helpful links- How Logistic Regression works: https://machinelearningmastery.com/logistic-regression-for-machine-learning/- Feature Scaling: https://scikit-learn.org/stable/modules/preprocessing.html- Training testing splitting: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html- Use slack for doubts: https://join.slack.com/t/deepconnectai/shared_invite/zt-givlfnf6-~cn3SQ43k0BGDrG9_YOn4g
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Read the data from local cloud directory
data = pd.read_csv("data/divorce.csv",delimiter=';')
data.head()
# Set delimiter to semicolon(;) in case of unexpected results
# Add column which has all 1s
# The idea is that weight corresponding to this column is equal to intercept
# This way it is efficient and easier to handle the bias/intercept term
data.insert(0,"w0",1)
# Print the dataframe rows just to see some samples
data.head()
# Define X (input features) and y (output feature)
X = data.drop(columns="Class")
y = data["Class"]
X_shape = X.shape
X_type = type(X)
y_shape = y.shape
y_type = type(y)
print(f'X: Type-{X_type}, Shape-{X_shape}')
print(f'y: Type-{y_type}, Shape-{y_shape}')
###Output
X: Type-<class 'pandas.core.frame.DataFrame'>, Shape-(170, 55)
y: Type-<class 'pandas.core.series.Series'>, Shape-(170,)
###Markdown
Expected output: X: Type-, Shape-(170, 55)y: Type-, Shape-(170,)
###Code
data[data.isnull().any(axis=1)]
# Perform standarization (if required)
#from sklearn.preprocessing import MinMaxScaler
#scaler=MinMaxScaler()
#X.loc[:,'Atr1':'Atr54']=scaler.fit_transform(X.loc[:,'Atr1':'Atr54'])
#X.head()
# Split the dataset into training and testing here
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.15)
# Print the shape of features and target of training and testing: X_train, X_test, y_train, y_test
X_train_shape = X_train.shape
y_train_shape = y_train.shape
X_test_shape = X_test.shape
y_test_shape = y_test.shape
print(f"X_train: {X_train_shape} , y_train: {y_train_shape}")
print(f"X_test: {X_test_shape} , y_test: {y_test_shape}")
assert (X_train.shape[0]==y_train.shape[0] and X_test.shape[0]==y_test.shape[0]), "Check your splitting carefully"
###Output
X_train: (144, 55) , y_train: (144,)
X_test: (26, 55) , y_test: (26,)
###Markdown
Let us start implementing logistic regression from scratch. Just follow code cells, see hints if required. We will build a LogisticRegression class
###Code
# DO NOT EDIT ANY VARIABLE OR FUNCTION NAME(S) IN THIS CELL
# Let's try more object oriented approach this time :)
class MyLogisticRegression:
def __init__(self, learning_rate=0.01, max_iterations=1000):
'''Initialize variables
Args:
learning_rate : Learning Rate
max_iterations : Max iterations for training weights
'''
# Initialising all the parameters
self.learning_rate = learning_rate
self.max_iterations = max_iterations
self.likelihoods = []
# Define epsilon because log(0) is not defined
self.eps = 1e-7
def sigmoid(self, z):
'''Sigmoid function: f:R->(0,1)
Args:
z : A numpy array (num_samples,)
Returns:
A numpy array where sigmoid function applied to every element
'''
### START CODE HERE
sig_z = 1/(1 + np.exp(-z))
### END CODE HERE
assert (z.shape==sig_z.shape), 'Error in sigmoid implementation. Check carefully'
return sig_z
def log_likelihood(self, y_true, y_pred):
'''Calculates maximum likelihood estimate
Remember: y * log(yh) + (1-y) * log(1-yh)
Note: Likelihood is defined for multiple classes as well, but for this dataset
we only need to worry about binary/bernoulli likelihood function
Args:
y_true : Numpy array of actual truth values (num_samples,)
y_pred : Numpy array of predicted values (num_samples,)
Returns:
Log-likelihood, scalar value
'''
# Fix 0/1 values in y_pred so that log is not undefined
y_pred = np.maximum(
np.full(y_pred.shape, self.eps),
np.minimum(np.full(y_pred.shape, 1-self.eps),
y_pred))
### START CODE HERE
likelihood = np.mean(y_true*np.log(y_pred) + (1-y_true)*np.log(1-y_pred))
### END CODE HERE
return likelihood
def fit(self, X, y):
'''Trains logistic regression model using gradient ascent
to gain maximum likelihood on the training data
Args:
X : Numpy array (num_examples, num_features)
y : Numpy array (num_examples, )
Returns: VOID
'''
num_examples = X.shape[0]
num_features = X.shape[1]
### START CODE HERE
# Initialize weights with appropriate shape
self.weights = np.random.rand(num_features,)
# Perform gradient ascent
for i in range(self.max_iterations):
# Define the linear hypothesis(z) first
# HINT: what is our hypothesis function in linear regression, remember?
z = np.dot(X,self.weights)
# Output probability value by appplying sigmoid on z
y_pred = self.sigmoid(z)
# Calculate the gradient values
            # This is just a vectorized, efficient way of implementing the gradient. Don't worry, we will discuss it later.
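            # For the average log-likelihood, the gradient w.r.t. the weights is
            # (1/m) * X^T (y - y_pred); the broadcast below computes one mean per feature.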
gradient = np.mean((y-y_pred)*X.T, axis=1)
# Update the weights
# Caution: It is gradient ASCENT not descent
self.weights = self.weights+(self.learning_rate*gradient)
# Calculating log likelihood
likelihood = self.log_likelihood(y,y_pred)
self.likelihoods.append(likelihood)
### END CODE HERE
def predict_proba(self, X):
'''Predict probabilities for given X.
Remember sigmoid returns value between 0 and 1.
Args:
X : Numpy array (num_samples, num_features)
Returns:
probabilities: Numpy array (num_samples,)
'''
if self.weights is None:
raise Exception("Fit the model before prediction")
### START CODE HERE
z = np.dot(X,self.weights)
probabilities = self.sigmoid(z)
### END CODE HERE
return probabilities
def predict(self, X, threshold=0.5):
'''Predict/Classify X in classes
Args:
X : Numpy array (num_samples, num_features)
threshold : scalar value above which prediction is 1 else 0
Returns:
binary_predictions : Numpy array (num_samples,)
'''
# Thresholding probability to predict binary values
binary_predictions = np.array(list(map(lambda x: 1 if x>threshold else 0, self.predict_proba(X))))
return binary_predictions
# Now initialize logitic regression implemented by you
model = MyLogisticRegression(learning_rate=0.5)
# And now fit on training data
model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Phew!! That's a lot of code. But you did it, congrats !!
###Code
# Train log-likelihood
train_log_likelihood = model.log_likelihood(y_train, model.predict_proba(X_train))
print("Log-likelihood on training data:", train_log_likelihood)
print("The value is negative because the final log likelihood value for the \n1000th iteration is just below log(1) \n(something like log(0.9999)")
# Test log-likelihood
test_log_likelihood = model.log_likelihood(y_test, model.predict_proba(X_test))
print("Log-likelihood on testing data:", test_log_likelihood)
!jt -r
# Plot the log-likelihood curve
plt.plot([i+1 for i in range(len(model.likelihoods))], model.likelihoods)
plt.title("Log-Likelihood curve")
plt.xlabel("Iteration num")
plt.ylabel("Log-likelihood")
plt.show()
!jt -t monokai
###Output
_____no_output_____
###Markdown
Let's calculate accuracy as well. Accuracy is defined simply as the rate of correct classifications.
###Code
#Make predictions on test data
y_pred = model.predict(X_test)
def accuracy(y_true,y_pred):
'''Compute accuracy.
Accuracy = (Correct prediction / number of samples)
Args:
y_true : Truth binary values (num_examples, )
y_pred : Predicted binary values (num_examples, )
Returns:
accuracy: scalar value
'''
### START CODE HERE
accuracy = np.sum(y_true==y_pred)/(y_true.shape[0])
### END CODE HERE
return accuracy
# Print accuracy on train data
print(accuracy(y_train,model.predict(X_train))*100)
print(accuracy(y_test,y_pred)*100)
###Output
100.0
###Markdown
Part 1.2: Use Logistic Regression from sklearn on the same dataset Tasks- Define X and y again for sklearn Linear Regression model- Train Logistic Regression Model on the training set (sklearn.linear_model.LogisticRegression class)- Run the model on testing set- Print 'accuracy' obtained on the testing dataset (sklearn.metrics.accuracy_score function) Further fun (will not be evaluated)- Compare accuracies of your model and sklearn's logistic regression model Helpful links- Classification metrics in sklearn: https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.metrics
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
# Define X and y
X.head()
y.head()
# Initialize the model from sklearn
model = LogisticRegression(max_iter=1000)
# Fit the model
model.fit(X_train, y_train)
# Predict on testing set X_test
y_pred = model.predict(X_test)
# Print Accuracy on testing set
test_accuracy_sklearn = accuracy_score(y_test,y_pred)
print(f"\nAccuracy on testing set: {test_accuracy_sklearn}")
print(classification_report(y_test,y_pred))
print(classification_report(y_train,model.predict(X_train)))
###Output
precision recall f1-score support
0 1.00 1.00 1.00 73
1 1.00 1.00 1.00 71
avg / total 1.00 1.00 1.00 144
|
SentimentalAnalysisWithDistilbert.ipynb | ###Markdown
Step1. Import and Load Data
###Code
!pip install -q transformers
!pip install -q datasets
from datasets import load_dataset
emotions = load_dataset("emotion")
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Step2. Preprocess Data
###Code
from transformers import AutoTokenizer
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
def tokenize(batch):
return tokenizer(batch["text"], padding=True, truncation=True)
emotions_encoded = emotions.map(tokenize, batched=True, batch_size=None)
from transformers import AutoModelForSequenceClassification
num_labels = 6
model = (AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels).to(device))
emotions_encoded["train"].features
emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"])
emotions_encoded["train"].features
from sklearn.metrics import accuracy_score, f1_score
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
f1 = f1_score(labels, preds, average="weighted")
acc = accuracy_score(labels, preds)
return {"accuracy": acc, "f1": f1}
from transformers import Trainer, TrainingArguments
batch_size = 64
logging_steps = len(emotions_encoded["train"]) // batch_size
training_args = TrainingArguments(output_dir="results",
num_train_epochs=8,
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
load_best_model_at_end=True,
metric_for_best_model="f1",
weight_decay=0.01,
evaluation_strategy="epoch",
save_strategy="no",
disable_tqdm=False)
from transformers import Trainer
trainer = Trainer(model=model, args=training_args,
compute_metrics=compute_metrics,
train_dataset=emotions_encoded["train"],
eval_dataset=emotions_encoded["validation"])
trainer.train();
results = trainer.evaluate()
results
preds_output = trainer.predict(emotions_encoded["validation"])
preds_output.metrics
import numpy as np
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
y_valid = np.array(emotions_encoded["validation"]["label"])
y_preds = np.argmax(preds_output.predictions, axis=1)
labels = ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
# plot_confusion_matrix expects a fitted sklearn estimator, so build the display from the predictions instead
cm = confusion_matrix(y_valid, y_preds, normalize="true")
ConfusionMatrixDisplay(cm, display_labels=labels).plot()
model.save_pretrained('./model')
tokenizer.save_pretrained('./model')
!transformers-cli login
!sudo apt-get install git-lfs
!git config --global user.email "[email protected]"
!git config --global user.name "*****"
!git config --global user.password "****"
model.push_to_hub('bert-base-uncased-emotion')
tokenizer.push_to_hub('bert-base-uncased-emotion')
###Output
_____no_output_____ |
interviewq_exercises/q024_python_print_integers_foo_ie.ipynb | ###Markdown
Question 24 - Oh foo-ie!Write a function that takes in an integer n, and prints out integers from 1 to n inclusive. - If %3 == 0 then print "foo" in place of the integer, - if %5 == 0 then print "ie" in place of the integer, - and if both conditions are true then print "foo-ie" in place of the integer.
###Code
def print_foo_ie(n):
return list(
map(
lambda i: (
'foo-ie' if i%3 == 0 and i%5 == 0
else (
'foo' if i%3 == 0
else (
'ie' if i%5 == 0
else i
)
)
),
range(1,n+1)
)
)
print_foo_ie(20)
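# A more literal reading of the prompt ("prints out integers"): a simple
# loop-based alternative sketch with the same modulo logic
def print_foo_ie_loop(n):
    for i in range(1, n + 1):
        if i % 3 == 0 and i % 5 == 0:
            print("foo-ie")
        elif i % 3 == 0:
            print("foo")
        elif i % 5 == 0:
            print("ie")
        else:
            print(i)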
###Output
_____no_output_____ |
Module6/Module6 - Lab5-Partially solved.ipynb | ###Markdown
DAT210x - Programming with Python for DS Module6- Lab5
###Code
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import tree
###Output
_____no_output_____
###Markdown
Useful information about the dataset used in this assignment can be [found here](https://archive.ics.uci.edu/ml/machine-learning-databases/mushroom/agaricus-lepiota.names). Load up the mushroom dataset into dataframe `X` and verify you did it properly, and that you have not included any features that clearly shouldn't be part of the dataset.You should not have any doubled indices. You can check out information about the headers present in the dataset using the link we provided above. Also make sure you've properly captured any NA values.
###Code
# .. your code here ..
raw_col_names = [
'class', 'cap_shape', 'cap_surface', 'cap_color', 'bruises', 'odor',
'gill_attach', 'gill_spacing', 'gill_size', 'gill_color', 'stalk_shape',
'stalk_root', 'stalk_surface_above_ring', 'stalk_surface_below_ring',
'stalk_color_above_ring', 'stalk_color_below_ring', 'viel_type',
'viel_color', 'ring_number', 'ring_type', 'spore_print_color',
'population', 'habitat'
]
X=pd.read_csv('Datasets/agaricus-lepiota.data', names=raw_col_names, index_col=None, na_values='?')  # missing values are encoded as '?' in this dataset
X.head()
# An easy way to show which rows have nans in them:
X[pd.isnull(X).any(axis=1)]
###Output
_____no_output_____
###Markdown
For this simple assignment, just drop any row with a nan in it, and then print out your dataset's shape:
###Code
# .. your code here ..
X = X.dropna()
X.shape
print(X.columns)
###Output
Index(['class', 'cap_shape', 'cap_surface', 'cap_color', 'bruises', 'odor',
'gill_attach', 'gill_spacing', 'gill_size', 'gill_color', 'stalk_shape',
'stalk_root', 'stalk_surface_above_ring', 'stalk_surface_below_ring',
       'stalk_color_above_ring', 'stalk_color_below_ring', 'veil_type',
       'veil_color', 'ring_number', 'ring_type', 'spore_print_color',
'population', 'habitat'],
dtype='object')
###Markdown
Copy the labels out of the dataframe into variable `y`, then remove them from `X`. Encode the labels using the `.map()` trick we presented you in Module 5, mapping the mushroom classes as `e` (edible) to 0 and `p` (poisonous) to 1.
###Code
# .. your code here ..
y = X.loc[:, 'class'].map({'p': 1, 'e': 0})
X=X.drop('class',axis=1)
X = pd.get_dummies(X)
###Output
_____no_output_____
###Markdown
The entire dataframe was encoded using dummies in the previous cell. Now split your data into `test` and `train` sets. Your `test` size should be 30% with `random_state` 7. Please use variable names: `X_train`, `X_test`, `y_train`, and `y_test`:
###Code
# .. your code here ..
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)
###Output
_____no_output_____
###Markdown
Create a DT (decision tree) classifier. No need to set any parameters:
###Code
# .. your code here ..
DT = tree.DecisionTreeClassifier()
###Output
_____no_output_____
###Markdown
Train the classifier on the `training` data and labels; then, score the classifier on the `testing` data and labels:
###Code
# .. your code here ..
DT.fit(X_train,y_train)
score=DT.score(X_test, y_test)
print("High-Dimensionality Score: ", round((score*100), 3))
###Output
High-Dimensionality Score: 100.0
###Markdown
Use the code on the course's SciKit-Learn page to output a .DOT file, then render the .DOT to .PNGs.You will need graphviz installed to do this. On macOS, you can `brew install graphviz`. On Windows 10, graphviz installs via a .msi installer that you can download from the graphviz website. Also, a graph editor, gvedit.exe can be used to view the tree directly from the exported tree.dot file without having to issue a call. On other systems, use analogous commands.If you encounter issues installing graphviz or don't have the rights to, you can always visualize your .dot file on the website: http://webgraphviz.com/.
###Code
tree.export_graphviz(DT, out_file='tree.dot',feature_names=X_train.columns)
# help(tree.export_graphviz)
from subprocess import call
call(['dot', '-T', 'png', 'tree.dot', '-o', 'tree.png'])
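# If graphviz is unavailable, sklearn's built-in tree plotting is an
# alternative sketch (plot_tree is available in sklearn >= 0.21; the output
# file name below is arbitrary):
# from sklearn.tree import plot_tree
# plt.figure(figsize=(20, 10))
# plot_tree(DT, feature_names=X_train.columns, filled=True)
# plt.savefig('tree_sklearn.png')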
###Output
_____no_output_____ |
d2l/mxnet/chapter_optimization/adagrad.ipynb | ###Markdown
AdaGrad
:label:`sec_adagrad`

Let us start with a problem that is uncommon outside of feature learning: sparse features and learning rates.

Sparse Features and Learning Rates

Imagine that we are training a language model. To get good accuracy we typically want to decrease the learning rate as we keep on training, usually at a rate of $\mathcal{O}(t^{-\frac{1}{2}})$ or slower. Now consider training a model on sparse features, i.e., features that occur only infrequently. This is common for natural language; for instance, the word "preconditioning" is a lot less likely to show up than "learning". It is also common in other areas such as computational advertising and personalized collaborative filtering. Parameters associated with infrequent features only receive meaningful updates when these features occur. Given a decreasing learning rate, we might end up in a situation where the parameters for common features converge rather quickly to their optimal values, whereas for infrequent features we still lack sufficient observations to determine their optimal values. In other words, the learning rate either decreases too slowly for frequent features or too quickly for infrequent ones.

A possible way to fix this is to record how many times we have seen a particular feature and use this to adjust the learning rate, i.e., to use a learning rate of $\eta_i = \frac{\eta_0}{\sqrt{s(i, t) + c}}$ instead of $\eta = \frac{\eta_0}{\sqrt{t + c}}$. Here $s(i, t)$ counts the number of times we have observed feature $i$ up to time $t$. This is quite easy to implement at no extra cost. The AdaGrad algorithm :cite:`Duchi.Hazan.Singer.2011` addresses this by replacing the rather crude counter $s(i, t)$ with the sum of squares of previously observed gradients. It uses $s(i, t+1) = s(i, t) + \left(\partial_i f(\mathbf{x})\right)^2$ to adjust the learning rate. This has two benefits: first, we no longer need to decide when a gradient counts as large. Second, it scales automatically with the magnitude of the gradients. Coordinates that routinely correspond to large gradients are scaled down significantly, whereas other coordinates with small gradients receive a much gentler treatment. In practice this leads to a very effective optimization procedure for computational advertising and related problems. But it hides some of the additional benefits inherent in AdaGrad that are best understood in the context of preconditioning.

Preconditioning

Convex optimization problems are good for analyzing the characteristics of algorithms. After all, for most nonconvex problems it is hard to obtain meaningful theoretical guarantees, but intuition and insight often carry over. Let us look at the problem of minimizing $f(\mathbf{x}) = \frac{1}{2} \mathbf{x}^\top \mathbf{Q} \mathbf{x} + \mathbf{c}^\top \mathbf{x} + b$. As in :numref:`sec_momentum`, we can rewrite this problem in terms of its eigendecomposition $\mathbf{Q} = \mathbf{U}^\top \boldsymbol{\Lambda} \mathbf{U}$ to arrive at a much simplified problem where each coordinate can be solved individually:

$$f(\mathbf{x}) = \bar{f}(\bar{\mathbf{x}}) = \frac{1}{2} \bar{\mathbf{x}}^\top \boldsymbol{\Lambda} \bar{\mathbf{x}} + \bar{\mathbf{c}}^\top \bar{\mathbf{x}} + b.$$

Here we used $\bar{\mathbf{x}} = \mathbf{U} \mathbf{x}$ and consequently $\bar{\mathbf{c}} = \mathbf{U} \mathbf{c}$. The modified problem has $\bar{\mathbf{x}} = -\boldsymbol{\Lambda}^{-1} \bar{\mathbf{c}}$ as its minimizer and $-\frac{1}{2} \bar{\mathbf{c}}^\top \boldsymbol{\Lambda}^{-1} \bar{\mathbf{c}} + b$ as its minimum value. This is much easier to compute since $\boldsymbol{\Lambda}$ is a diagonal matrix containing the eigenvalues of $\mathbf{Q}$.

If we perturb $\mathbf{c}$ slightly, we would expect only minor changes in the minimizer of $f$. Unfortunately this is not the case. While slight changes in $\mathbf{c}$ lead to equally slight changes in $\bar{\mathbf{c}}$, this does not hold for the minimizer of $f$ (and of $\bar{f}$). Whenever an eigenvalue $\boldsymbol{\Lambda}_i$ is large, we will see only small changes in $\bar{x}_i$ and in the minimum of $\bar{f}$. Conversely, for small $\boldsymbol{\Lambda}_i$, changes in $\bar{x}_i$ can be dramatic. The ratio between the largest and the smallest eigenvalue is called the *condition number* of an optimization problem:

$$\kappa = \frac{\boldsymbol{\Lambda}_1}{\boldsymbol{\Lambda}_d}.$$

If the condition number $\kappa$ is large, it is difficult to solve the optimization problem accurately, and we need to be careful to get a large dynamic range of eigenvalues right. Could we not simply "fix" the problem by distorting the space so that all eigenvalues are $1$? In theory this is quite easy: we only need the eigenvalues and eigenvectors of $\mathbf{Q}$ to rescale the problem from $\mathbf{x}$ to one in $\mathbf{z} := \boldsymbol{\Lambda}^{\frac{1}{2}} \mathbf{U} \mathbf{x}$. In the new coordinate system, $\mathbf{x}^\top \mathbf{Q} \mathbf{x}$ simplifies to $\|\mathbf{z}\|^2$. Alas, this is a rather impractical idea: computing eigenvalues and eigenvectors is in general much more expensive than solving the actual problem.

While computing eigenvalues exactly may be expensive, guessing them and computing them even somewhat approximately may already be a lot better than not doing anything at all. In particular, we could use the diagonal entries of $\mathbf{Q}$ and rescale it accordingly. This is much cheaper than computing eigenvalues:

$$\tilde{\mathbf{Q}} = \mathrm{diag}^{-\frac{1}{2}}(\mathbf{Q}) \mathbf{Q} \mathrm{diag}^{-\frac{1}{2}}(\mathbf{Q}).$$

In this case we have $\tilde{\mathbf{Q}}_{ij} = \mathbf{Q}_{ij} / \sqrt{\mathbf{Q}_{ii} \mathbf{Q}_{jj}}$ and, in particular, $\tilde{\mathbf{Q}}_{ii} = 1$ for all $i$. In most cases this simplifies the condition number considerably. For the case discussed previously it would entirely eliminate the problem at hand, since the problem is axis aligned.

Unfortunately we face yet another problem: in deep learning we usually cannot even compute the second derivative of the objective function. For $\mathbf{x} \in \mathbb{R}^d$, the second derivative may require $\mathcal{O}(d^2)$ space to compute even on a minibatch, making it practically infeasible. The ingenious idea of AdaGrad is to use a proxy for the diagonal of the Hessian that is both relatively cheap to compute and effective. To see why this works, let us look at $\bar{f}(\bar{\mathbf{x}})$. We have

$$\partial_{\bar{\mathbf{x}}} \bar{f}(\bar{\mathbf{x}}) = \boldsymbol{\Lambda} \bar{\mathbf{x}} + \bar{\mathbf{c}} = \boldsymbol{\Lambda} \left(\bar{\mathbf{x}} - \bar{\mathbf{x}}_0\right),$$

where $\bar{\mathbf{x}}_0$ is the minimizer of $\bar{f}$. Hence the magnitude of the gradient depends both on $\boldsymbol{\Lambda}$ and on the distance from optimality. If $\bar{\mathbf{x}} - \bar{\mathbf{x}}_0$ did not change, this would be all we need: in that case the magnitude of the gradient $\partial_{\bar{\mathbf{x}}} \bar{f}(\bar{\mathbf{x}})$ suffices. Since AdaGrad is a stochastic gradient descent algorithm, we will see gradients with nonzero variance even at optimality. As a result we can safely use the variance of the gradients as a cheap proxy for the scale of the Hessian. A thorough analysis (which would take several pages) is beyond the scope of this section; we refer the reader to :cite:`Duchi.Hazan.Singer.2011`.
The Algorithm

Let us formalize the discussion from above. We use the variable $\mathbf{s}_t$ to accumulate past gradient variance as follows:

$$\begin{aligned} \mathbf{g}_t & = \partial_{\mathbf{w}} l(y_t, f(\mathbf{x}_t, \mathbf{w})), \\ \mathbf{s}_t & = \mathbf{s}_{t-1} + \mathbf{g}_t^2, \\ \mathbf{w}_t & = \mathbf{w}_{t-1} - \frac{\eta}{\sqrt{\mathbf{s}_t + \epsilon}} \cdot \mathbf{g}_t.\end{aligned}$$

Here the operations are applied coordinate-wise. That is, $\mathbf{v}^2$ has entries $v_i^2$. Likewise $\frac{1}{\sqrt{v}}$ has entries $\frac{1}{\sqrt{v_i}}$, and $\mathbf{u} \cdot \mathbf{v}$ has entries $u_i v_i$. As before, $\eta$ is the learning rate and $\epsilon$ is an additive constant that ensures that we do not divide by $0$. Finally, we initialize $\mathbf{s}_0 = \mathbf{0}$.

Just as in the momentum method, we need to keep track of an auxiliary variable; in AdaGrad it allows each coordinate to have its own learning rate. This does not noticeably increase the cost of AdaGrad relative to SGD, since the main cost lies in computing $l(y_t, f(\mathbf{x}_t, \mathbf{w}))$ and its derivative.

Note that accumulating squared gradients in $\mathbf{s}_t$ means that $\mathbf{s}_t$ grows essentially at a linear rate (somewhat more slowly than linearly in practice, since the gradients decay over time). This yields an $\mathcal{O}(t^{-\frac{1}{2}})$ learning rate, adjusted on a per-coordinate basis. For convex problems this is perfectly adequate. In deep learning, however, we may want to decrease the learning rate more slowly, which led to a number of AdaGrad variants that we discuss in subsequent chapters. For now let us see how AdaGrad behaves on a quadratic convex problem. We use the same function as before:

$$f(\mathbf{x}) = 0.1 x_1^2 + 2 x_2^2.$$

We implement AdaGrad with the same learning rate as before, i.e., $\eta = 0.4$. As we can see, the iterative trajectory of the independent variable is smoother. However, because of the cumulative effect of $\boldsymbol{s}_t$, the learning rate keeps decaying, so the independent variable moves less in later stages of iteration.
###Code
%matplotlib inline
import math
from mxnet import np, npx
from d2l import mxnet as d2l
npx.set_np()
def adagrad_2d(x1, x2, s1, s2):
eps = 1e-6
g1, g2 = 0.2 * x1, 4 * x2
s1 += g1 ** 2
s2 += g2 ** 2
x1 -= eta / math.sqrt(s1 + eps) * g1
x2 -= eta / math.sqrt(s2 + eps) * g2
return x1, x2, s1, s2
def f_2d(x1, x2):
return 0.1 * x1 ** 2 + 2 * x2 ** 2
eta = 0.4
d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d))
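# A small illustration (kept commented so the recorded output above is
# unchanged): the effective per-coordinate learning rate eta / sqrt(s_t + eps)
# shrinks as squared gradients accumulate.
# s, eta_eff = 0.0, []
# for g in [1.0, 1.0, 1.0, 1.0]:
#     s += g ** 2
#     eta_eff.append(eta / math.sqrt(s + 1e-6))
# print(eta_eff)  # roughly eta * [1, 0.71, 0.58, 0.5]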
###Output
epoch 20, x1: -2.382563, x2: -0.158591
###Markdown
As we increase the learning rate to $2$, we see much better behavior. This already indicates that the decrease in the learning rate can be quite aggressive even in the noise-free case, and we need to make sure that the parameters converge appropriately.
###Code
eta = 2
d2l.show_trace_2d(f_2d, d2l.train_2d(adagrad_2d))
###Output
epoch 20, x1: -0.002295, x2: -0.000000
###Markdown
Implementation from Scratch

Like the momentum method, AdaGrad needs to maintain a state variable of the same shape as each parameter.
###Code
def init_adagrad_states(feature_dim):
s_w = np.zeros((feature_dim, 1))
s_b = np.zeros(1)
return (s_w, s_b)
def adagrad(params, states, hyperparams):
eps = 1e-6
for p, s in zip(params, states):
s[:] += np.square(p.grad)
p[:] -= hyperparams['lr'] * p.grad / np.sqrt(s + eps)
###Output
_____no_output_____
###Markdown
Compared with the experiment in :numref:`sec_minibatch_sgd`, here we use a larger learning rate to train the model.
###Code
data_iter, feature_dim = d2l.get_data_ch11(batch_size=10)
d2l.train_ch11(adagrad, init_adagrad_states(feature_dim),
{'lr': 0.1}, data_iter, feature_dim);
###Output
loss: 0.244, 0.083 sec/epoch
###Markdown
Concise Implementation

We can train the model directly using the AdaGrad algorithm provided by the deep learning framework.
###Code
d2l.train_concise_ch11('adagrad', {'learning_rate': 0.1}, data_iter)
###Output
loss: 0.244, 0.103 sec/epoch
|
Juliana2701w4U.ipynb | ###Markdown
Feature vector for classification: $X_c = [a, av, aa, C]$, where $a \rightarrow$ angle; $av \rightarrow$ angular velocity; $aa \rightarrow$ angular acceleration; $C \rightarrow$ classification index. Classification index $C$: $C = 0 \rightarrow$ normal gait; $C = 1 \rightarrow$ stair-ascent gait; $C = 2 \rightarrow$ stair-descent gait.
###Code
print(a.shape, av.shape, aa.shape)
len_xc = len(a)-2
Xcp = np.hstack(
(a[2:].reshape((len_xc,1)),
av[1:].reshape((len_xc,1))))
Xcp = np.hstack(
(Xcp.reshape((len_xc,2)),
aa.reshape((len_xc,1))))
Xcp = np.hstack(
(Xcp.reshape((len_xc,3)),
l_a[2:].reshape((len_xc,1))))
Xcp = np.hstack(
(Xcp.reshape((len_xc,4)),
l_av[1:].reshape((len_xc,1))))
Xcp = np.hstack(
(Xcp.reshape((len_xc,5)),
l_aa.reshape((len_xc,1))))
Xcp = np.hstack(
(Xcp.reshape((len_xc,6)),
pos_foot_r[2:].reshape((len_xc,1))))
Xcp = np.hstack(
(Xcp.reshape((len_xc,7)),
pos_foot_l[2:].reshape((len_xc,1))))
vz_r = velocities3d[1:,2] # Velocidade no eixo z
vz_l = l_velocities3d[1:,2] # Velocidade no eixo z
Xcp = np.hstack(
(Xcp.reshape((len_xc,8)),
vz_r.reshape((len_xc,1))))
Xcp = np.hstack(
(Xcp.reshape((len_xc,9)),
vz_l.reshape((len_xc,1))))
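# A more concise equivalent of the hstack chain above (a sketch; it should
# produce the same (len_xc, 10) feature matrix in a single call):
# Xcp_alt = np.column_stack((a[2:], av[1:], aa, l_a[2:], l_av[1:], l_aa,
#                            pos_foot_r[2:], pos_foot_l[2:], vz_r, vz_l))
# assert Xcp_alt.shape == (len_xc, 10)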
###Output
_____no_output_____
###Markdown
Adding the classification column
###Code
C = (np.ones(len_xc)*1).reshape((len_xc,1))
Xc = np.hstack(
(Xcp.reshape((len_xc,10)),
C.reshape((len_xc,1))))
Xc.shape
## saving to a file in the <classifier_data> folder
from Data_Savior_J import save_it_now
save_it_now(Xc, "./classifier_data/walk4U.data")
###Output
Saved to file
###Markdown
Check for NaN values
###Code
Nan = np.isnan(Xc)
Nan
###Output
_____no_output_____ |
.ipynb_checkpoints/Europe example-checkpoint.ipynb | ###Markdown
MagPySV example workflow - European observatories Setup
###Code
# Setup python paths and import some modules
from IPython.display import Image
import sys
import os
import datetime as dt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
# Import all of the MagPySV modules
import magpysv.denoise as denoise
import magpysv.io as io
import magpysv.model_prediction as model_prediction
import magpysv.plots as plots
import magpysv.tools as tools
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Data download
###Code
from gmdata_webinterface import consume_webservices as cws
# Required dataset - only the hourly WDC dataset is currently supported
cadence = 'hour'
service = 'WDC'
# Start and end dates of the data download
start_date = dt.date(1960, 1, 1)
end_date = dt.date(2010, 12, 31)
# Observatories of interest
observatory_list = ['CLF', 'NGK', 'WNG']
# Output path for data
download_dir = 'data'
cws.fetch_data(start_date= start_date, end_date=end_date,
station_list=observatory_list, cadence=cadence,
service=service, saveroot=download_dir)
###Output
_____no_output_____
###Markdown
Initial processing Extract all data from the WDC files, convert into the proper hourly means using the tabular base and save the X, Y and Z components to CSV files. This may take a few minutes.
###Code
write_dir = os.path.join(download_dir, 'hourly')
io.wdc_to_hourly_csv(wdc_path=download_dir, write_dir=write_dir, obs_list=observatory_list,
print_obs=True)
# Path to file containing baseline discontinuity information
baseline_data = tools.get_baseline_info()
# Loop over all observatories and calculate SV series as first differences of monthly means (FDMM) for each
for observatory in observatory_list:
print(observatory)
# Load hourly data
data_file = observatory + '.csv'
hourly_data = io.read_csv_data(
fname=os.path.join(download_dir, 'hourly', data_file),
data_type='mf')
# Resample to monthly means
resampled_field_data = tools.data_resampling(hourly_data, sampling='MS', average_date=True)
# Correct documented baseline changes
tools.correct_baseline_change(observatory=observatory,
field_data=resampled_field_data,
baseline_data=baseline_data, print_data=True)
# Write out the monthly means for magnetic field
io.write_csv_data(data=resampled_field_data,
write_dir=os.path.join(download_dir, 'monthly_mf'),
obs_name=observatory)
# Calculate SV from monthly field means
sv_data = tools.calculate_sv(resampled_field_data,
mean_spacing=1)
# Write out the SV data
io.write_csv_data(data=sv_data,
write_dir=os.path.join(download_dir, 'monthly_sv', 'fdmm'),
obs_name=observatory)
###Output
_____no_output_____
###Markdown
Concatenate the data for our selected observatories Besides the Setup section, everything preceding this cell only needs to be run once.
###Code
# Observatories of interest
observatory_list = ['CLF', 'NGK', 'WNG']
# Where the data are stored
download_dir = 'data'
# Start and end dates of the analysis as (year, month, day)
start = dt.datetime(1960, 1, 1)
end = dt.datetime(2010, 12, 31)
obs_data, model_sv_data, model_mf_data = io.combine_csv_data(
start_date=start, end_date=end, obs_list=observatory_list,
data_path=os.path.join(download_dir, 'monthly_sv', 'fdmm'),
model_path='model_predictions', day_of_month=1)
dates = obs_data['date']
obs_data
###Output
_____no_output_____
###Markdown
SV plots
###Code
for observatory in observatory_list:
fig = plots.plot_sv(dates=dates, sv=obs_data.filter(regex=observatory),
model=model_sv_data.filter(regex=observatory),
fig_size=(6, 6), font_size=10, label_size=16, plot_legend=False,
obs=observatory, model_name='COV-OBS')
###Output
_____no_output_____
###Markdown
Residuals To calculate SV residuals, we need SV predictions from a geomagnetic field model. This example uses output from the COV-OBS model by Gillet et al. (2013, Geochem. Geophys. Geosyst., https://doi.org/10.1002/ggge.20041; 2015, Earth, Planets and Space, https://doi.org/10.1186/s40623-015-0225-z) to obtain model predictions for these observatory locations. The code can be obtained from http://www.spacecenter.dk/files/magnetic-models/COV-OBSx1/ and no modifications are necessary to run it using functions found in MagPySV's model_prediction module. For convenience, model output for the locations used in this notebook are included in the examples directory.
###Code
residuals = tools.calculate_residuals(obs_data=obs_data, model_data=model_sv_data)
model_sv_data.drop(['date'], axis=1, inplace=True)
obs_data.drop(['date'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
External noise removal Compute the covariance matrix of the residuals (for all observatories combined) and its eigenvalues and eigenvectors. Since the residuals represent signals present in the data, but not in the internal field model, we use them to find a proxy for external magnetic fields (Wardinski & Holme, 2011, GJI, https://doi.org/10.1111/j.1365-246X.2011.04988.x).
###Code
denoised, proxy, eigenvals, eigenvecs, projected_residuals, corrected_residuals = denoise.eigenvalue_analysis(
dates=dates, obs_data=obs_data, model_data=model_sv_data, residuals=residuals,
proxy_number=2)
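# Conceptually, eigenvalue_analysis projects the residuals onto the
# eigenvectors of their covariance matrix and removes the noisiest directions.
# A minimal numpy sketch of that decomposition (illustration only, not
# MagPySV's exact implementation):
# cov = np.cov(residuals.values.T)
# vals, vecs = np.linalg.eigh(cov)   # eigh: the covariance matrix is symmetric
# print(vals[::-1][:3])              # the three largest eigenvalues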
###Output
_____no_output_____
###Markdown
Denoised SV plots Plots showing the original SV data, the denoised data (optionally with a running average) and the field model predictions.
###Code
for observatory in observatory_list:
xratio, yratio, zratio = plots.plot_sv_comparison(dates=dates, denoised_sv=denoised.filter(regex=observatory),
residuals=residuals.filter(regex=observatory),
corrected_residuals = corrected_residuals.filter(regex=observatory),
noisy_sv=obs_data.filter(regex=observatory), model=model_sv_data.filter(regex=observatory),
model_name='COV-OBS', fig_size=(6, 6), font_size=10, label_size=14, obs=observatory, plot_rms=True)
###Output
_____no_output_____
###Markdown
Plots showing the denoised data (optionally with a running average) and the field model predictions.
###Code
for observatory in observatory_list:
plots.plot_sv(dates=dates, sv=denoised.filter(regex=observatory), model=model_sv_data.filter(regex=observatory),
fig_size=(6, 6), font_size=10, label_size=14, plot_legend=False, obs=observatory,
model_name='COV-OBS')
###Output
_____no_output_____
###Markdown
Plot proxy signal, eigenvalues and eigenvectors Compare the proxy signal used to denoise the data with the Dst index, which measures the intensity of the ring current. Files for the ap (ap_fdmm.csv) and AE (ae_fdmm.csv) indices are also included.
###Code
plots.plot_index_dft(index_file='index_data/dst_fdmm.csv', dates=denoised.date, signal=proxy, fig_size=(6, 6), font_size=10,
label_size=14, plot_legend=True, index_name='Dst')
###Output
_____no_output_____
###Markdown
Plot the eigenvalues of the covariance matrix of the residuals
###Code
plots.plot_eigenvalues(values=eigenvals, font_size=12, label_size=16, fig_size=(6, 3))
###Output
_____no_output_____
###Markdown
Plot the eigenvectors corresponding to the three largest eigenvalues. The noisiest direction (used to denoise in this example) is mostly X, with some Z, which is consistent with the ring current for European observatories. The second noisiest direction (also used to denoise in this example) is predominantly Z, with some X, and has a large semi-annual contribution that is likely of external origin. However, the third noisiest direction is a coherent Y signal across Europe, which does not correspond to a known direction of external signal. We did not remove this direction during denoising as it could be a real internal field variation that is not captured by the field model. However, its DFT shows a significant semi-annual contribution, so this eigendirection is likely to be partly of external origin.
###Code
plots.plot_eigenvectors(obs_names=observatory_list, eigenvecs=eigenvecs[:,0:3], fig_size=(6, 4),
font_size=10, label_size=14)
###Output
_____no_output_____
###Markdown
Outlier detection Remove remaining spikes in the time series.
###Code
denoised.drop(['date'], axis=1, inplace=True)
for column in denoised:
denoised[column] = denoise.detect_outliers(dates=dates, signal=denoised[column], obs_name=column, threshold=5,
window_length=120, plot_fig=False, fig_size=(10, 3), font_size=10, label_size=14)
denoised.insert(0, 'date', dates)
###Output
_____no_output_____
###Markdown
Write denoised data to file
###Code
for observatory in observatory_list:
print(observatory)
sv_data=denoised.filter(regex=observatory)
sv_data.insert(0, 'date', dates)
sv_data.columns = ["date", "dX", "dY", "dZ"]
io.write_csv_data(data=sv_data, write_dir=os.path.join(download_dir, 'denoised', 'european'),
obs_name=observatory, decimal_dates=False)
###Output
_____no_output_____
###Markdown
Averaging data over Europe Select denoised data for each SV component at all observatories
###Code
obs_X = denoised.filter(regex='dX')
model_X = model_sv_data.filter(regex='dX')
obs_Y = denoised.filter(regex='dY')
model_Y = model_sv_data.filter(regex='dY')
obs_Z = denoised.filter(regex='dZ')
model_Z = model_sv_data.filter(regex='dZ')
###Output
_____no_output_____
###Markdown
Average data and model for each component
###Code
mean_X = pd.DataFrame(np.mean(obs_X.values, axis=1))
mean_X.columns = ['dX']
mean_model_X = np.mean(model_X, axis=1)
mean_Y = pd.DataFrame(np.mean(obs_Y.values, axis=1))
mean_Y.columns = ['dY']
mean_model_Y = np.mean(model_Y, axis=1)
mean_Z = pd.DataFrame(np.mean(obs_Z.values, axis=1))
mean_Z.columns = ['dZ']
mean_model_Z = np.mean(model_Z, axis=1)
###Output
_____no_output_____
###Markdown
Remove outliers from averaged data
###Code
mean_X = denoise.detect_outliers(dates=dates, signal=mean_X, obs_name='X', threshold=2.5,
window_length=72, plot_fig=False, fig_size=(10, 3), font_size=10, label_size=14)
mean_Y = denoise.detect_outliers(dates=dates, signal=mean_Y, obs_name='Y', threshold=2.5,
window_length=72, plot_fig=False, fig_size=(10, 3), font_size=10, label_size=14)
mean_Z = denoise.detect_outliers(dates=dates, signal=mean_Z, obs_name='Z', threshold=2.5,
window_length=72, plot_fig=False, fig_size=(10, 3), font_size=10, label_size=14)
###Output
_____no_output_____
###Markdown
Look at model predictions for all observatories, and the averaged model, to see if the average is representative of the trend at all locations
###Code
plt.figure(figsize=(6,6))
plt.subplot(3, 1, 1)
plt.plot(dates, model_X)
plt.plot(dates, mean_model_X, 'k--')
plt.legend(['CLF', 'NGK', 'WNG', 'Average'], frameon=False, fontsize=10, loc=(0.1,1.04), ncol=4)
plt.subplot(3, 1, 2)
plt.plot(dates, model_Y)
plt.plot(dates, mean_model_Y, 'k--')
plt.ylabel('SV (nT/yr)', fontsize=14)
plt.subplot(3, 1, 3)
plt.plot(dates, model_Z)
plt.plot(dates, mean_model_Z, 'k--')
plt.xlabel('Year', fontsize=14)
###Output
_____no_output_____
###Markdown
Plot the averaged data and model
###Code
plt.figure(figsize=(6, 6))
plt.subplot(3,1,1)
plt.plot(dates, mean_X, 'b')
plt.plot(dates, np.mean(model_X, axis=1), 'r')
plt.subplot(3,1,2)
plt.plot(dates, mean_Y, 'b')
plt.plot(dates, np.mean(model_Y, axis=1), 'r')
plt.ylabel('SV (nT/yr)', fontsize=14)
plt.subplot(3,1,3)
plt.plot(dates, mean_Z, 'b', label='Averaged data')
plt.plot(dates, np.mean(model_Z, axis=1), 'r', label='Averaged COV-OBS')
plt.xlabel('Year', fontsize=14)
plt.legend(loc='best', fontsize=10, frameon=False)
###Output
_____no_output_____
###Markdown
Data selection using the ap index Select an observatory, load its hourly magnetic field data and correct documented baseline changes
###Code
observatory = 'CLF'
data_file = observatory + '.csv'
hourly_data = io.read_csv_data(
fname=os.path.join(download_dir, 'hourly', data_file),
data_type='mf')
# Path to file containing baseline discontinuity information
baseline_data = tools.get_baseline_info(fname='baseline_records')
# Correct documented baseline changes
tools.correct_baseline_change(observatory=observatory,
field_data=hourly_data,
baseline_data=baseline_data, print_data=True)
###Output
_____no_output_____
###Markdown
Apply an ap criterion to discard noisy data
###Code
# Discard hours with ap > threshold
ap_hourly_applied = tools.apply_Ap_threshold(obs_data=hourly_data, Ap_file=os.path.join('index_data', 'ap_hourly.csv'),
threshold=7.0)
# Discard days with Ap > threshold (where Ap is the daily average of the 3-hourly ap values)
ap_daily_applied = tools.apply_Ap_threshold(obs_data=hourly_data, Ap_file=os.path.join('index_data', 'ap_daily.csv'),
threshold=7.0)
hourly_data
###Output
_____no_output_____
###Markdown
Calculate the percentage of data remaining after applying the ap threshold
###Code
print('Hourly ap threshold applied: ', ap_hourly_applied.X.count()/hourly_data.X.count() * 100, '% remaining')
print('Daily Ap threshold applied: ', ap_daily_applied.X.count()/hourly_data.X.count() * 100, '% remaining')
###Output
_____no_output_____
###Markdown
Compare the hourly magnetic field data before and after applying the ap threshold
###Code
plt.figure(figsize=(6, 6))
plt.subplot(3, 1, 1)
plt.plot(hourly_data.date, hourly_data.X, 'b')
plt.plot(hourly_data.date, ap_hourly_applied.X, 'r')
plt.plot(hourly_data.date, ap_daily_applied.X, 'c')
plt.xlim([dt.date(1960, 1, 1), dt.date(2010, 1, 1)])
plt.subplot(3, 1, 2)
plt.plot(hourly_data.date, hourly_data.Y, 'b')
plt.plot(hourly_data.date, ap_hourly_applied.Y, 'r')
plt.plot(hourly_data.date, ap_daily_applied.Y, 'c')
plt.xlim([dt.date(1960, 1, 1), dt.date(2010, 1, 1)])
plt.ylabel('Magnetic Field (nT)', fontsize=16)
plt.subplot(3, 1, 3)
plt.plot(hourly_data.date, hourly_data.Z, 'b', label='All data')
plt.plot(hourly_data.date, ap_hourly_applied.Z, 'r', label='ap ≤ 7')
plt.plot(hourly_data.date, ap_daily_applied.Z, 'c', label='Ap ≤ 7')
plt.xlim([dt.date(1960, 1, 1), dt.date(2010, 1, 1)])
plt.xlabel('Year', fontsize=16)
plt.legend(frameon=False)
plt.tight_layout()
d = hourly_data['date']
hourly_data.drop(['date'], axis=1, inplace=True)
for column in hourly_data:
hourly_data[column] = denoise.detect_outliers(dates=d, signal=hourly_data[column], obs_name=column, threshold=10,
signal_type='MF', window_length=24*365*10, plot_fig=True,
fig_size=(7, 4), font_size=10, label_size=14)
hourly_data.insert(0, 'date', d)
###Output
_____no_output_____
###Markdown
Compare the SV obtained when calculated using all hourly data versus hourly data with the ap threshold applied. Comparing FDMM and ADMM
###Code
# Resample the hourly data above to monthly means
resampled_field_data = tools.data_resampling(hourly_data, sampling='MS', average_date=True)
# Calculate SV from monthly field means
sv_fdmm = tools.calculate_sv(resampled_field_data,
mean_spacing=1)
sv_admm = tools.calculate_sv(resampled_field_data,
mean_spacing=12)
# Plot the SV calculated as FDMM and ADMM
plt.figure(figsize=(7, 6))
plt.subplot(3, 1, 1)
plt.plot(sv_fdmm.date, sv_fdmm.dx, 'b')
plt.plot(sv_admm.date, sv_admm.dx, 'r')
plt.xlim([dt.date(1960, 1, 1), dt.date(2010, 1, 1)])
plt.subplot(3, 1, 2)
plt.plot(sv_fdmm.date, sv_fdmm.dy, 'b')
plt.plot(sv_admm.date, sv_admm.dy, 'r')
plt.xlim([dt.date(1960, 1, 1), dt.date(2010, 1, 1)])
plt.ylabel('SV (nT/yr)', fontsize=16)
plt.subplot(3, 1, 3)
plt.plot(sv_fdmm.date, sv_fdmm.dz, 'b', label='FDMM')
plt.plot(sv_admm.date, sv_admm.dz, 'r', label = 'ADMM')
plt.xlim([dt.date(1960, 1, 1), dt.date(2010, 1, 1)])
plt.gca().xaxis_date()
plt.xlabel('Year', fontsize=16)
plt.legend(frameon=False)
###Output
_____no_output_____ |
tutorials/legacy/sql.ipynb | ###Markdown
Fugue SQL

It is strongly recommended to quickly go through the [COVID19 example](./example_covid19.ipynb) to get a sense of what Fugue SQL can do, and how it works. Here we go through the details of different Fugue SQL features.

Fugue SQL is an alternative to the Fugue programming interface. Both are used to describe your end-to-end workflow logic. The SQL semantics are platform and scale agnostic, so if you write logic in SQL, it's very high level and abstract, and the underlying computing frameworks will try to execute it in the optimal way.

The syntax of Fugue SQL sits between standard SQL, JSON and Python. The goals behind this design are:
* To be fully compatible with the standard SQL `SELECT` statement
* To create a seamless flow between SQL and Python coding
* To minimize syntax overhead, making code as short as possible while still easy to read

Hello World

To use Fugue SQL, you need to make sure you have installed the SQL extra: ```pip install fugue[sql]```. To make writing SQL easier we will use [cell magic](https://ipython.readthedocs.io/en/stable/interactive/magics.html) that was introduced in the [COVID19 Data Exploration](https://fugue-tutorials.readthedocs.io/en/latest/tutorials/example_covid19.html) section of the tutorial. We will take the libraries and functions mentioned above and import them using the following imports:
###Code
from fugue_notebook import setup
import pandas as pd
setup()
df = pd.DataFrame([[0,"hello"],[1,"world"]],columns = ['a','b'])
print(df)
###Output
a b
0 0 hello
1 1 world
###Markdown
The SQL will be translated into a sequence of operations in the programming interface.
###Code
%%fsql
SELECT
*
FROM df
WHERE a=0 -- we can use df directly defined outside of this cell
PRINT
###Output
_____no_output_____
###Markdown
Anonymity: In Fugue SQL, a very important simplification is anonymity. It is optional, but it can significantly simplify your code. For a statement that only needs to consume the previous dataframe, you can use anonymity to chain commands; `PRINT` is the best example.
###Code
%%fsql
a=CREATE [[0,"hello"],[1,"world"]] SCHEMA a:int,b:str
PRINT -- if no dataframe is specified, PRINT prints
      -- the last dataframe output of the previous statements
PRINT -- I can use anonymity again because PRINT doesn't generate output, so it still means PRINT a
###Output
_____no_output_____
###Markdown
Statements that don't generate output can't be assigned to a variable. For statements that generate a single output, you can also use anonymity and skip the assignment; the following statements will then have to use anonymity if they need to consume this output.
###Code
%%fsql
a=CREATE [[0,"hello"]] SCHEMA a:int,b:str
CREATE [[1,"world"]] SCHEMA a:int,b:str
PRINT -- print the second
PRINT a -- print the first, because it is explicit
PRINT -- print the second
###Output
_____no_output_____
###Markdown
In the same manner, the `SELECT` statement also follows this rule.
###Code
%%fsql
CREATE [[0,"hello"], [1,"world"]] SCHEMA a:int,b:str
SELECT * WHERE a=1 -- The FROM is not needed and it will grab last output of the previous statements
-- This is good for chaining commands
PRINT
###Output
_____no_output_____
###Markdown
Inline Statements

Inline statements are a powerful tool for data wrangling and general analysis. They are easy to use and feel intuitive. Since we can easily do variable assignment in Fugue, it may not be necessary to write your code using anonymity. It's all up to you.
###Code
%%fsql
SELECT *
FROM (CREATE [[0,"hello"], [1,"world"]] SCHEMA a:int,b:str)
WHERE a=1
PRINT
PRINT ( SELECT * FROM (CREATE [[0,"hello"], [1,"world"]] SCHEMA a:int,b:str)
WHERE a=1)
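-- A further hypothetical sketch (same CREATE syntax as above): inline
-- statements can feed aggregations directly.
-- SELECT b, COUNT(*) AS cnt
--   FROM (CREATE [[0,"hello"],[1,"world"],[1,"world"]] SCHEMA a:int,b:str)
--   GROUP BY b
-- PRINT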
###Output
_____no_output_____ |
Image_processing_CNN.ipynb | ###Markdown
###Code
from tensorflow.keras.datasets import mnist
import tensorflow as tf
(X_train, y_train), (X_test, y_test) = mnist.load_data()
import numpy as np
import matplotlib.pyplot as plt
img = X_train[0].reshape(28,28)
plt.imshow(img)
X_train.shape, y_train.shape
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, Dropout, Conv2D, MaxPool2D
from tensorflow.keras.utils import to_categorical
X_train = X_train.reshape((60000, 784))
X_test = X_test.reshape((10000, 784))
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
n_classes = 10
print("Shape before One Hot Encoding...", y_train.shape)
y_train = to_categorical(y_train, n_classes)
y_test = to_categorical(y_test, n_classes)
y_train.shape, y_test.shape
print("Shape after One Hot Encoding...", y_train.shape)
model = Sequential()
model.add(Dense(10, input_shape=(784,), activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=128, epochs=10, validation_data=(X_test, y_test))
###Output
Epoch 1/10
469/469 [==============================] - 2s 3ms/step - loss: 0.9313 - accuracy: 0.7005 - val_loss: 0.3861 - val_accuracy: 0.8901
Epoch 2/10
469/469 [==============================] - 1s 3ms/step - loss: 0.3524 - accuracy: 0.9010 - val_loss: 0.3080 - val_accuracy: 0.9116
Epoch 3/10
469/469 [==============================] - 1s 3ms/step - loss: 0.3051 - accuracy: 0.9143 - val_loss: 0.2868 - val_accuracy: 0.9173
Epoch 4/10
469/469 [==============================] - 1s 3ms/step - loss: 0.2820 - accuracy: 0.9200 - val_loss: 0.2705 - val_accuracy: 0.9219
Epoch 5/10
469/469 [==============================] - 1s 3ms/step - loss: 0.2665 - accuracy: 0.9242 - val_loss: 0.2554 - val_accuracy: 0.9251
Epoch 6/10
469/469 [==============================] - 1s 3ms/step - loss: 0.2545 - accuracy: 0.9274 - val_loss: 0.2476 - val_accuracy: 0.9281
Epoch 7/10
469/469 [==============================] - 1s 3ms/step - loss: 0.2457 - accuracy: 0.9301 - val_loss: 0.2382 - val_accuracy: 0.9313
Epoch 8/10
469/469 [==============================] - 1s 3ms/step - loss: 0.2376 - accuracy: 0.9319 - val_loss: 0.2371 - val_accuracy: 0.9319
Epoch 9/10
469/469 [==============================] - 1s 3ms/step - loss: 0.2301 - accuracy: 0.9342 - val_loss: 0.2360 - val_accuracy: 0.9310
Epoch 10/10
469/469 [==============================] - 1s 3ms/step - loss: 0.2249 - accuracy: 0.9357 - val_loss: 0.2300 - val_accuracy: 0.9317
###Markdown
Now we will build a CNN to improve the accuracy of our model.
###Code
from sklearn.metrics import accuracy_score
X_train.shape
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
X_train.shape, X_test.shape
cnn_model = Sequential()
cnn_model.add(Conv2D(25, kernel_size=(3, 3), strides=(1, 1), padding='valid', activation='relu', input_shape=(28, 28, 1)))
cnn_model.add(MaxPool2D(pool_size=(1, 1)))
cnn_model.add(Flatten())
cnn_model.add(Dense(100, activation='relu'))
cnn_model.add(Dense(10, activation='softmax'))
cnn_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
cnn_model.fit(X_train, y_train, batch_size=128, epochs=10, validation_data=(X_test, y_test))
np.argmax(cnn_model.predict(X_test)[89])
np.argmax(y_test[89])
image = X_test[89].reshape(28, 28)
plt.imshow(image)
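# A hedged addition: the accuracy_score import above is otherwise unused, so
# compute the overall test-set accuracy of the CNN (assumes y_test is still
# one-hot encoded, as set up earlier):
y_pred = np.argmax(cnn_model.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)
print("CNN test accuracy:", accuracy_score(y_true, y_pred))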
###Output
_____no_output_____ |
notebook/examples/ml/kmeans.ipynb | ###Markdown
Machine Learning: Train on Large Datasets

Most estimators in scikit-learn are designed to work on in-memory arrays. Training with larger datasets may require different algorithms. All of the algorithms implemented in Dask-ML work well on larger than memory datasets, which you might store in a [dask array](http://dask.pydata.org/en/latest/array.html) or [dataframe](http://dask.pydata.org/en/latest/dataframe.html). *Note: this notebook requires [dask-ml](http://ml.dask.org/)*
###Code
from dask.distributed import Client
client = Client()
client
###Output
_____no_output_____
###Markdown
Create a large random dataset
###Code
import dask
from distributed.utils import format_bytes
import dask_ml.cluster
import dask_ml.datasets
###Output
_____no_output_____
###Markdown
In this example, we'll generate a large random dask array on our cluster. In practice, we would load the data from our data store (SQL table, HDFS, cloud storage).
###Code
X, y = dask_ml.datasets.make_blobs(
n_samples=100_000_000,
n_features=50,
centers=3,
chunks=500_000,
)
format_bytes(X.nbytes)
X = X.persist()
###Output
_____no_output_____
###Markdown
Cluster with K-Means || We'll use the k-means implemented in Dask-ML to cluster the points. It uses the `k-means||` (read: "k-means parallel") initialization algorithm, which scales better than `k-means++`. All of the computation, both during and after initialization, can be done in parallel.
###Code
km = dask_ml.cluster.KMeans(n_clusters=3, init_max_iter=2, oversampling_factor=10, random_state=0)
%time km.fit(X)
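# The fitted estimator exposes scikit-learn style attributes (a sketch, kept
# commented to leave the recorded timing output unchanged):
# print(km.cluster_centers_)   # (3, 50) array of cluster centers
# print(km.inertia_)           # sum of squared distances to the nearest center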
###Output
_____no_output_____
###Markdown
During training, you'll notice some distinct phases:
* Initialization: finding the best initial clusters
* Expectation Maximization: alternating between assigning each point to its closest cluster center, and recomputing each center from the points closest to it
* Finalization: computing statistics like `inertia`

Inspect Results: We'll plot a sample of points, colored by the cluster each falls into.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.scatter(X[::20000, 0], X[::20000, 1], marker='.', c=km.labels_[::20000],
cmap='viridis', alpha=0.25);
###Output
_____no_output_____ |
pyutils/refer/pyEvalDemo.ipynb | ###Markdown
1. Evaluate Referring Expressions by Language Metrics
###Code
sys.path.insert(0, './evaluation')
from refEvaluation import RefEvaluation
# Here's our example expression file
sample_expr_file = json.load(open('test/sample_expressions_testA.json', 'r'))
sample_exprs = sample_expr_file['predictions']
print sample_exprs[0]
refEval = RefEvaluation(refer, sample_exprs)
refEval.evaluate()
###Output
tokenization...
setting up scorers...
computing Bleu score...
{'reflen': 5356, 'guess': [5009, 3034, 1477, 275], 'testlen': 5009, 'correct': [2576, 580, 112, 2]}
ratio: 0.935212845407
Bleu_1: 0.480
Bleu_2: 0.293
Bleu_3: 0.182
Bleu_4: 0.080
computing METEOR score...
METEOR: 0.172
computing Rouge score...
ROUGE_L: 0.414
computing CIDEr score...
CIDEr: 0.669
###Markdown
2. Evaluate Referring Expressions by Duplicate Rate
###Code
# evaluate how many images contain duplicate expressions
pred_refToSent = {int(it['ref_id']): it['sent'] for it in sample_exprs}
pred_imgToSents = {}
for ref_id, pred_sent in pred_refToSent.items():
image_id = refer.Refs[ref_id]['image_id']
pred_imgToSents[image_id] = pred_imgToSents.get(image_id, []) + [pred_sent]
# count duplicate
duplicate = 0
for image_id, sents in pred_imgToSents.items():
if len(set(sents)) < len(sents):
duplicate += 1
ratio = duplicate*100.0 / len(pred_imgToSents)
print '%s/%s (%.2f%%) images have duplicate predicted sentences.' % (duplicate, len(pred_imgToSents), ratio)
###Output
108/750 (14.40%) images have duplicate predicted sentences.
###Markdown
3. Evaluate Referring Comprehension
###Code
# IoU function
def computeIoU(box1, box2):
# each box is of [x1, y1, w, h]
inter_x1 = max(box1[0], box2[0])
inter_y1 = max(box1[1], box2[1])
inter_x2 = min(box1[0]+box1[2]-1, box2[0]+box2[2]-1)
inter_y2 = min(box1[1]+box1[3]-1, box2[1]+box2[3]-1)
if inter_x1 < inter_x2 and inter_y1 < inter_y2:
inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)
else:
inter = 0
union = box1[2]*box1[3] + box2[2]*box2[3] - inter
return float(inter)/union
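# quick sanity checks with hypothetical boxes ([x1, y1, w, h]):
# print computeIoU([0, 0, 10, 10], [0, 0, 10, 10])  # identical boxes -> 1.0
# print computeIoU([0, 0, 10, 10], [5, 5, 10, 10])  # partial overlap -> ~0.14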
# randomly sample one ref
np.random.seed(24)
ref_ids = refer.getRefIds()
ref_id = ref_ids[np.random.randint(0, len(ref_ids))]
ref = refer.Refs[ref_id]
# let's fake one bounding box by randomly picking one instance inside this image
image_id = ref['image_id']
anns = refer.imgToAnns[image_id]
ann = anns[np.random.randint(0, len(anns))]
# draw box of the ref using 'green'
plt.figure()
refer.showRef(ref, seg_box='box')
# draw box of the ann using 'red'
ax = plt.gca()
bbox = ann['bbox']
box_plot = Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], fill=False, edgecolor='red', linewidth=2)
ax.add_patch(box_plot)
plt.show()
# Is the ann actually our ref?
# i.e., IoU >= 0.5?
ref_box = refer.refToAnn[ref_id]['bbox']
ann_box = ann['bbox']
IoU = computeIoU(ref_box, ann_box)
if IoU >= 0.5:
print 'IoU=[%.2f], correct comprehension!' % IoU
else:
print 'IoU=[%.2f], wrong comprehension!' % IoU
###Output
IoU=[0.09], wrong comprehension!
|
XGBoost-WebTraffic.ipynb | ###Markdown
Introduction

In this workshop, we will go through the steps of training, debugging, deploying and monitoring a **network traffic classification model**. For training our model we will be using the CSE-CIC-IDS2018 datasets by CIC and ISCX, which are used for security testing and malware prevention. These datasets include a huge amount of raw network traffic logs, plus pre-processed data where network connections have been reconstructed and relevant features have been extracted using CICFlowMeter, a tool that outputs network connection features as CSV files. Each record is classified as either benign or malicious traffic, with a total number of 15 classes.

Starting from this featurized dataset, we have executed additional pre-processing for the purpose of this lab:
* Encoded class labels
* Replaced invalid string attribute values generated by CICFlowMeter (e.g. inf and Infinity)
* Executed one-hot encoding of discrete attributes
* Removed invalid headers logged multiple times in the same CSV file
* Reduced the size of the featurized dataset to ~1.3GB (from ~6.3GB) to speed up training, while making sure that all classes are well represented
* Executed a stratified random split of the dataset into training (80%) and validation (20%) sets

Classes are represented and have been encoded as follows (train + validation):

| Label | Encoded | N. records |
|:-------------------------|:-------:|-----------:|
| Benign | 0 | 1000000 |
| Bot | 1 | 200000 |
| DoS attacks-GoldenEye | 2 | 40000 |
| DoS attacks-Slowloris | 3 | 10000 |
| DDoS attacks-LOIC-HTTP | 4 | 300000 |
| Infilteration | 5 | 150000 |
| DDOS attack-LOIC-UDP | 6 | 1730 |
| DDOS attack-HOIC | 7 | 300000 |
| Brute Force -Web | 8 | 611 |
| Brute Force -XSS | 9 | 230 |
| SQL Injection | 10 | 87 |
| DoS attacks-SlowHTTPTest | 11 | 100000 |
| DoS attacks-Hulk | 12 | 250000 |
| FTP-BruteForce | 13 | 150000 |
| SSH-Bruteforce | 14 | 150000 |

The final pre-processed dataset has been saved to a public Amazon S3 bucket for your convenience, and will represent the input to the training processes. Let's get started!

First, we set some variables, including the AWS region we are working in, the IAM (Identity and Access Management) execution role of the notebook instance and the Amazon S3 bucket where we will store data, models, outputs, etc. We will use the Amazon SageMaker default bucket for the selected AWS region, and then define a key prefix to make sure all objects share the same prefix for easier discoverability.
###Code
import os
import boto3
import sagemaker
region = boto3.Session().region_name
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session()
bucket_name = sagemaker.Session().default_bucket()
prefix = 'xgboost-webtraffic'
os.environ["AWS_REGION"] = region
print(region)
print(role)
print(bucket_name)
###Output
us-east-1
arn:aws:iam::431615879134:role/sagemaker-test-role
sagemaker-us-east-1-431615879134
###Markdown
Now we can copy the dataset from the public Amazon S3 bucket to the Amazon SageMaker default bucket used in this workshop. To do this, we will leverage on the AWS Python SDK (boto3) as follows:
###Code
s3 = boto3.resource('s3')
source_bucket_name = "endtoendmlapp"
source_bucket_prefix = "aim362/data/"
source_bucket = s3.Bucket(source_bucket_name)
for s3_object in source_bucket.objects.filter(Prefix=source_bucket_prefix):
copy_source = {
'Bucket': source_bucket_name,
'Key': s3_object.key
}
print('Copying {0} ...'.format(s3_object.key))
s3.Bucket(bucket_name).copy(copy_source, prefix+'/data/'+s3_object.key.split('/')[-2]+'/'+s3_object.key.split('/')[-1])
###Output
_____no_output_____
###Markdown
Let's download some of the data to the notebook to quickly explore the dataset structure:
###Code
train_file_path = 's3://' + bucket_name + '/' + prefix + '/data/train/0.part'
val_file_path = 's3://' + bucket_name + '/' + prefix + '/data/val/0.part'
print(train_file_path)
print(val_file_path)
!mkdir -p data/train/ data/val/
!aws s3 cp {train_file_path} data/train/
!aws s3 cp {val_file_path} data/val/
import pandas as pd
pd.options.display.max_columns = 100
df = pd.read_csv('data/train/0.part')
df
val_df = pd.read_csv('data/val/0.part')
###Output
_____no_output_____
###Markdown
Basic Training

We will execute the training using the built-in XGBoost algorithm. Note that you can also use script mode if you need greater customization of the training process.
###Code
container = sagemaker.image_uris.retrieve('xgboost',boto3.Session().region_name,version='1.0-1')
print(container)
s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/data/train'.format(bucket_name, prefix), content_type='csv')
s3_input_validation = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/data/val'.format(bucket_name, prefix), content_type='csv')
hyperparameters = {
"max_depth": "3",
"eta": "0.1",
"gamma": "6",
"min_child_weight": "6",
"objective": "multi:softmax",
"num_class": "15",
"num_round": "10"
}
output_path = 's3://{0}/{1}/output/'.format(bucket_name, prefix)
# construct a SageMaker estimator that calls the xgboost-container
estimator = sagemaker.estimator.Estimator(image_uri=container,
hyperparameters=hyperparameters,
role=role,
instance_count=1,
instance_type='ml.m5.2xlarge',
volume_size=5, # 5 GB
output_path=output_path)
estimator.fit({'train': s3_input_train, 'validation': s3_input_validation})
###Output
2021-05-26 19:46:34 Starting - Starting the training job...
2021-05-26 19:46:42 Starting - Launching requested ML instancesProfilerReport-1622058394: InProgress
.........
2021-05-26 19:48:16 Starting - Preparing the instances for training......
2021-05-26 19:49:16 Downloading - Downloading input data...
2021-05-26 19:49:56 Training - Training image download completed. Training in progress..[34mINFO:sagemaker-containers:Imported framework sagemaker_xgboost_container.training[0m
[34mINFO:sagemaker-containers:Failed to parse hyperparameter objective value multi:softmax to Json.[0m
[34mReturning the value itself[0m
[34mINFO:sagemaker-containers:No GPUs detected (normal if no gpus installed)[0m
[34mINFO:sagemaker_xgboost_container.training:Running XGBoost Sagemaker in algorithm mode[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34m[19:50:06] 2122136x84 matrix with 178259424 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34mINFO:root:Determined delimiter of CSV input is ','[0m
[34m[19:50:09] 530542x84 matrix with 44565528 entries loaded from /opt/ml/input/data/validation?format=csv&label_column=0&delimiter=,[0m
[34mINFO:root:Single node training.[0m
[34mINFO:root:Train matrix has 2122136 rows[0m
[34mINFO:root:Validation matrix has 530542 rows[0m
[34m[19:50:09] WARNING: /workspace/src/learner.cc:328: [0m
[34mParameters: { num_round } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
[0m
[34m[0]#011train-merror:0.04227#011validation-merror:0.04247[0m
[34m[1]#011train-merror:0.03837#011validation-merror:0.03857[0m
[34m[2]#011train-merror:0.03940#011validation-merror:0.03967[0m
[34m[3]#011train-merror:0.03920#011validation-merror:0.03950[0m
[34m[4]#011train-merror:0.03487#011validation-merror:0.03526[0m
[34m[5]#011train-merror:0.03640#011validation-merror:0.03680[0m
[34m[6]#011train-merror:0.03442#011validation-merror:0.03483[0m
[34m[7]#011train-merror:0.03440#011validation-merror:0.03485[0m
[34m[8]#011train-merror:0.03394#011validation-merror:0.03436[0m
2021-05-26 19:53:37 Uploading - Uploading generated training model
2021-05-26 19:53:37 Completed - Training job completed
[34m[9]#011train-merror:0.03298#011validation-merror:0.03338[0m
Training seconds: 263
Billable seconds: 263
###Markdown
In order to make sure that our code works for inference, we can deploy the trained model and execute some inferences.
###Code
predictor = estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge',serializer=sagemaker.serializers.CSVSerializer())
# We expect 4 - DDoS attacks-LOIC-HTTP as the predicted class for this instance.
test_values = [80,1056736,3,4,20,964,20,0,6.666666667,11.54700538,964,0,241.0,482.0,931.1691850999999,6.6241710320000005,176122.6667,431204.4454,1056315,2,394,197.0,275.77164469999997,392,2,1056733,352244.3333,609743.1115,1056315,24,0,0,0,0,72,92,2.8389304419999997,3.78524059,0,964,123.0,339.8873763,115523.4286,0,0,1,1,0,0,0,1,1.0,140.5714286,6.666666667,241.0,0.0,0.0,0.0,0.0,0.0,0.0,3,20,4,964,8192,211,1,20,0.0,0.0,0,0,0.0,0.0,0,0,20,2,2018,1,0,1,0]
result = predictor.predict(test_values)
print(result)
###Output
_____no_output_____
###Markdown
Evaluate

Now that we have a hosted endpoint running, we can make real-time predictions from our model very easily, simply by making an HTTP POST request. But first, we'll need to set up serializers and deserializers for passing our test data NumPy arrays to the model behind the endpoint. Now, we'll use a simple function to:
1. Loop over our test dataset
2. Split it into mini-batches of rows
3. Convert those mini-batches to CSV string payloads
4. Retrieve mini-batch predictions by invoking the XGBoost endpoint
5. Collect predictions and convert from the CSV output our model provides into a NumPy array
###Code
import numpy as np
def predict(data, rows=500):
    # split into mini-batches to stay under the endpoint payload limit
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = ''
    for array in split_array:
        predictions = ','.join([predictions, predictor.predict(array).decode('utf-8')])
    # np.fromstring is deprecated for text parsing; split the CSV string instead
    return np.array(predictions[1:].split(','), dtype=float)
predictions = predict(val_df.to_numpy()[:,1:])
predictions.shape
actual = val_df.to_numpy()[:,0]
actual.shape
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix
from IPython.display import display, clear_output
class_list = ['Benign','Bot','DoS attacks-GoldenEye','DoS attacks-Slowloris','DDoS attacks-LOIC-HTTP','Infilteration','DDOS attack-LOIC-UDP','DDOS attack-HOIC','Brute Force-Web','Brute Force-XSS','SQL Injection','DoS attacks-SlowHTTPTest','DoS attacks-Hulk','FTP-BruteForce','SSH-Bruteforce']
fig, ax = plt.subplots(figsize=(15,10))
cm = confusion_matrix(actual,predictions)
normalized_cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
sns.heatmap(normalized_cm, ax=ax, annot=cm, fmt='',xticklabels=class_list,yticklabels=class_list)
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('Confusion Matrix')
plt.show()
###Output
_____no_output_____
###Markdown
Finally, let's gracefully stop the deployed endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
(Optional) Hyperparameter Optimization (HPO)
###Code
static_hyperparameters = {
"objective": "multi:softmax",
"num_class": "15",
"num_round": "10",
}
output_path = 's3://{0}/{1}/output/'.format(bucket_name, prefix)
# construct a SageMaker estimator that calls the xgboost-container
estimator = sagemaker.estimator.Estimator(image_uri=container,
hyperparameters=static_hyperparameters,
role=role,
instance_count=1,
instance_type='ml.m5.2xlarge',
volume_size=5, # 5 GB
output_path=output_path)
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
hyperparameter_ranges = {
'eta': ContinuousParameter(0, 1),
'min_child_weight': ContinuousParameter(1, 10),
'alpha': ContinuousParameter(0, 2),
'max_depth': IntegerParameter(1, 10),
'gamma': ContinuousParameter(0, 100)
}
objective_metric_name = 'validation:merror'
objective_type = 'Minimize'
tuner = HyperparameterTuner(estimator,
objective_metric_name,
hyperparameter_ranges,
max_jobs=10,
max_parallel_jobs=2,
objective_type=objective_type,
early_stopping_type='Auto')
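# Once tuning completes, per-job results can be inspected (a sketch using the
# SageMaker SDK's tuning analytics; assumes the tuner has finished running):
# tuner_df = sagemaker.HyperparameterTuningJobAnalytics(
#     tuner.latest_tuning_job.name).dataframe()
# tuner_df.sort_values('FinalObjectiveValue').head()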
%%time
tuner.fit({'train': s3_input_train, 'validation': s3_input_validation})
endpoint_name = 'xgboost-webtraffic-hpo-best'
hpo_predictor = tuner.deploy(initial_instance_count=1,
                             instance_type='ml.m4.xlarge',
                             endpoint_name=endpoint_name)
# clean up
sagemaker_session.delete_endpoint(endpoint_name=endpoint_name)
###Output
_____no_output_____ |
federated_learning/cifar10_Resnet20/cifar10_resnet20_fd_client_rasp.ipynb | ###Markdown
CIFAR10 Federated Resnet20 Client Side

This code is the client part of CIFAR10 federated resnet20 for **multiple** clients and a server.
###Code
users = 1 # number of clients
import os
import h5py
import socket
import struct
import pickle
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import time
from tqdm import tqdm
def getFreeDescription():
free = os.popen("free -h")
i = 0
while True:
i = i + 1
line = free.readline()
if i == 1:
return (line.split()[0:7])
def getFree():
free = os.popen("free -h")
i = 0
while True:
i = i + 1
line = free.readline()
if i == 2:
return (line.split()[0:7])
from gpiozero import CPUTemperature
def printPerformance():
cpu = CPUTemperature()
print("temperature: " + str(cpu.temperature))
description = getFreeDescription()
mem = getFree()
print(description[0] + " : " + mem[1])
print(description[1] + " : " + mem[2])
print(description[2] + " : " + mem[3])
print(description[3] + " : " + mem[4])
print(description[4] + " : " + mem[5])
print(description[5] + " : " + mem[6])
printPerformance()
root_path = '../../models/cifar10_data'
###Output
_____no_output_____
###Markdown
Cuda
###Code
# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device = "cpu"
print(device)
client_order = int(input("client_order(start from 0): "))
num_traindata = 50000 // users
###Output
_____no_output_____
###Markdown
Data load
###Code
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616))])
from torch.utils.data import Subset
indices = list(range(50000))
part_tr = indices[num_traindata * client_order : num_traindata * (client_order + 1)]
trainset = torchvision.datasets.CIFAR10 (root=root_path, train=True, download=True, transform=transform)
trainset_sub = Subset(trainset, part_tr)
train_loader = torch.utils.data.DataLoader(trainset_sub, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10 (root=root_path, train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
_____no_output_____
###Markdown
Number of total batches
###Code
train_total_batch = len(train_loader)
print(train_total_batch)
test_batch = len(test_loader)
print(test_batch)
from torch.autograd import Variable
import torch.nn.init as init
def _weights_init(m):
classname = m.__class__.__name__
#print(classname)
if isinstance(m, nn.Linear) or isinstance(m, nn.Conv2d):
init.kaiming_normal_(m.weight)
class LambdaLayer(nn.Module):
def __init__(self, lambd):
super(LambdaLayer, self).__init__()
self.lambd = lambd
def forward(self, x):
return self.lambd(x)
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
For CIFAR10 ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
F.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes//4, planes//4), "constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_planes = 16
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.layer1 = self._make_layer(block, 16, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 32, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 64, num_blocks[2], stride=2)
self.linear = nn.Linear(64, num_classes)
self.apply(_weights_init)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = F.avg_pool2d(out, out.size()[3])
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def resnet20():
return ResNet(BasicBlock, [3, 3, 3])
res_net = resnet20()
res_net.to(device)
lr = 0.001
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(res_net.parameters(), lr=lr, momentum=0.9)
rounds = 400 # default
local_epochs = 1 # default
###Output
_____no_output_____
###Markdown
Socket initialization: required socket functions
###Code
def send_msg(sock, msg):
# prefix each message with a 4-byte length in network byte order
msg = pickle.dumps(msg)
msg = struct.pack('>I', len(msg)) + msg
sock.sendall(msg)
def recv_msg(sock):
# read message length and unpack it into an integer
raw_msglen = recvall(sock, 4)
if not raw_msglen:
return None
msglen = struct.unpack('>I', raw_msglen)[0]
# read the message data
msg = recvall(sock, msglen)
msg = pickle.loads(msg)
return msg
def recvall(sock, n):
# helper function to receive n bytes or return None if EOF is hit
data = b''
while len(data) < n:
packet = sock.recv(n - len(data))
if not packet:
return None
data += packet
return data
printPerformance()
###Output
_____no_output_____
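###Markdown
As a quick sanity check of the framing protocol above (an added illustration, assuming `pickle` and `struct` were already imported earlier in the notebook, as `send_msg`/`recv_msg` require), the length-prefixed messages can be round-tripped over a local socket pair:
###Code
# Added illustration: round-trip one message through send_msg/recv_msg
import socket
a, b = socket.socketpair()
send_msg(a, {'hello': [1, 2, 3]})
print(recv_msg(b))  # -> {'hello': [1, 2, 3]}
a.close()
b.close()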
###Markdown
Set host address and port number
###Code
host = input("IP address: ")
port = 10080
max_recv = 100000
###Output
IP address: 192.168.83.1
###Markdown
Open the client socket
###Code
s = socket.socket()
s.connect((host, port))
###Output
_____no_output_____
###Markdown
SET TIMER
###Code
start_time = time.time() # store start time
print("timmer start!")
msg = recv_msg(s)
rounds = msg['rounds']
client_id = msg['client_id']
local_epochs = msg['local_epoch']
send_msg(s, len(trainset_sub))
# update weights from server
# train
for r in range(rounds): # loop over the dataset multiple times
weights = recv_msg(s)
res_net.load_state_dict(weights)
res_net.train()  # set training mode for the local updates (eval() would use frozen BatchNorm statistics)
for local_epoch in range(local_epochs):
for i, data in enumerate(tqdm(train_loader, ncols=100, desc='Round '+str(r+1)+'_'+str(local_epoch+1))):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
inputs = inputs.to(device)
labels = labels.clone().detach().long().to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = res_net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
msg = res_net.state_dict()
send_msg(s, msg)
print('Finished Training')
printPerformance()
end_time = time.time() #store end time
print("Training Time: {} sec".format(end_time - start_time))
###Output
Training Time: 12.0054190158844 sec
|
conjointAnalysis_main.ipynb | ###Markdown
Let's analyze consumer preferences using conjoint analysis. Below is the Python code for performing a conjoint analysis. - The Python version is 3.8 - For the versions of the other modules, download the files uploaded to GitHub and check `myenv.txt`. Importing modules
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown
Loading the data
###Code
df = pd.read_csv('./data/data.csv')
# x, y の指定
y = pd.DataFrame(df['score'])
x = df.drop(columns=['score'])
x.head(20)
###Output
_____no_output_____
###Markdown
This data shows 16 responses in total from two respondents. For example, the top row, `auto-lock 1, distToUniv 1, isParking 1, fee 2`, represents a unit with auto-lock, a short distance to the university, parking available, and rent of 80,000 yen (the levels 1-3 correspond to 60,000, 80,000, and 100,000 yen, respectively). Each respondent (here I simply answered the cards myself) assigns an attractiveness score to each card. Converting to dummy variables: here the `pd.get_dummies` function indicates with 0/1 whether each item appears on the card, much like one-hot encoding. Setting `drop_first` to True drops the first level of each attribute: for parking, for example, 'parking available = 0' carries the same meaning as 'no parking = 1'.
###Code
x_dum = pd.get_dummies(x, columns=x.columns, drop_first=True)
x_dum.head()
# check the case with drop_first disabled; the _1 columns remain. This data is not used here
x_dum_noDrop = pd.get_dummies(x, columns=x.columns, drop_first=False)
x_dum_noDrop.head()
df.describe()  # display basic statistics such as the mean and standard deviation of each feature
###Output
_____no_output_____
###Markdown
Adding an intercept. In addition to the attributes listed on the conjoint cards, we add a constant term to capture any other effects. This makes it possible to estimate the intercept when fitting. https://www.statsmodels.org/stable/generated/statsmodels.tools.tools.add_constant.html
###Code
x_dum=sm.add_constant(x_dum) # add an element named const
x_dum.head(10) # confirm that const was added
###Output
C:\Users\itaku\anaconda3\envs\py38_geopanda\lib\site-packages\statsmodels\tsa\tsatools.py:142: FutureWarning: In a future version of pandas all arguments of concat except for the argument 'objs' will be keyword-only
x = pd.concat(x[::order], 1)
###Markdown
Fitting with OLS (Ordinary Least Squares)
###Code
model = sm.OLS(y, x_dum)
# run the fit
result = model.fit()
# display the summary of the results
result.summary()
###Output
C:\Users\itaku\anaconda3\envs\py38_geopanda\lib\site-packages\scipy\stats\stats.py:1541: UserWarning: kurtosistest only valid for n>=20 ... continuing anyway, n=16
warnings.warn("kurtosistest only valid for n>=20 ... continuing "
###Markdown
Extracting part of the results. We extract the weights and p-values to discuss in the 'inspect the results' section.
###Code
df_result_selected = pd.DataFrame({
'weight': result.params.values
, 'p_val': result.pvalues
})
df_result_selected.head(10)
###Output
_____no_output_____
###Markdown
Preparing for visualization. We prepare to visualize the conjoint analysis results. With drop_first enabled above, the variables ending in _1 were dropped. We restore them for plotting and assign them a weight of 0.
###Code
for s in df_result_selected.index:
partitioned_string = s.partition('_')
if partitioned_string[2] == "2":
valBase = partitioned_string[0] + "_1"
df_valBase = pd.DataFrame(data =np.zeros((1,2)),
index = [valBase],
columns = ["weight","p_val"])
df_result_selected = pd.concat([df_result_selected,df_valBase])
df_result_selected.head(20)
###Output
_____no_output_____
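###Markdown
(Added sketch, not in the original notebook) With the baseline levels restored, the part-worths can also be summarized as relative attribute importances, a standard conjoint metric: the range of each attribute's weights divided by the sum of the ranges across attributes. A minimal sketch, assuming the `df_result_selected` frame built above:
###Code
# Added sketch: relative attribute importance from the part-worth ranges
imp = df_result_selected.drop('const', errors='ignore').copy()
imp['attr'] = [s.partition('_')[0] for s in imp.index]
ranges = imp.groupby('attr')['weight'].agg(lambda w: w.max() - w.min())
print((100 * ranges / ranges.sum()).round(1))  # importance in percent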
###Markdown
Color the bars according to the p-value: cyan when the p-value is between 0.01 and 0.05, blue when it is below 0.01, and red when it is 0.05 or above (not very trustworthy).
###Code
bar_col = []
for p_val in df_result_selected['p_val']:
# print(p_val)
if 0.01 < p_val < 0.05:
bar_col.append('Cyan')
elif p_val < 0.01:
bar_col.append('blue')
else:
bar_col.append('red')
# blue shades for p-values below 0.05, red otherwise
df_bar_col = pd.DataFrame(data = bar_col,
columns=['bar_col'],
index = df_result_selected.index)
df_result_selected = pd.concat([df_result_selected,df_bar_col], axis=1)
df_result_selected.head(20)
###Output
_____no_output_____
###Markdown
Displaying the graph. The variables ending in _1 serve as the baseline, so use them to judge whether the other levels in the same category have a positive or negative effect.
###Code
# configure the font so Japanese labels are not garbled in the plot
from matplotlib import rcParams
plt.rcParams["font.family"] = "MS Gothic"
# sort in alphabetical order
df_result_selected = df_result_selected.sort_index()
xbar = np.arange(len(df_result_selected['weight']))
plt.barh(xbar, df_result_selected['weight'], color=df_result_selected['bar_col'])
index_JP = ["駐車場なし","駐車場あり","家賃10万","家賃8万","家賃6万","大学から遠い","大学から近い","定数項","オートロックなし","オートロックあり"]
plt.yticks(xbar, labels=index_JP[::-1]) # reverse so the order matches
plt.show()
###Output
_____no_output_____ |
EDL_5_PSO_HPO_PCA.ipynb | ###Markdown
Setup
###Code
#@title Install DEAP
!pip install deap --quiet
#@title Defining Imports
#numpy
import numpy as np
#DEAP
from deap import base
from deap import benchmarks
from deap import creator
from deap import tools
#PyTorch
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
#SkLearn
from sklearn.decomposition import PCA
#plotting
from matplotlib import pyplot as plt
from matplotlib import cm
from IPython.display import clear_output
#for performance timing
import time
#utils
import random
import math
#@title Setup Target Function and Data
def function(x):
return (2*x + 3*x**2 + 4*x**3 + 5*x**4 + 6*x**5 + 10)
data_min = -5
data_max = 5
data_step = .5
Xi = np.reshape(np.arange(data_min, data_max, data_step), (-1, 1))
yi = function(Xi)
inputs = Xi.shape[1]
yi = yi.reshape(-1, 1)
plt.plot(Xi, yi, 'o', color='black')
#@title Define the Model
class Net(nn.Module):
def __init__(self, inputs, middle):
super().__init__()
self.fc1 = nn.Linear(inputs,middle)
self.fc2 = nn.Linear(middle,middle)
self.out = nn.Linear(middle,1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.out(x)
return x
#@title Define HyperparametersEC Class
class HyperparametersEC(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
self.hparms = [d for d in self.__dict__]
def __str__(self):
out = ""
for d in self.hparms:
ds = self.__dict__[d]
out += f"{d} = {ds} "
return out
def values(self):
vals = []
for d in self.hparms:
vals.append(self.__dict__[d])
return vals
def size(self):
return len(self.hparms)
def next(self, individual):
dict = {}
#initialize generators
for i, d in enumerate(self.hparms):
next(self.__dict__[d])
for i, d in enumerate(self.hparms):
dict[d] = self.__dict__[d].send(individual[i])
return HyperparametersEC(**dict)
def clamp(num, min_value, max_value):
return max(min(num, max_value), min_value)
def linespace(min,max):
rnge = max - min
while True:
i = yield
i = (clamp(i, -1.0, 1.0) + 1.0) / 2.0
yield i * rnge + min
def linespace_int(min,max):
rnge = max - min
while True:
i = yield
i = (clamp(i, -1.0, 1.0) + 1.0) / 2.0
yield int(i * rnge) + min
def static(val):
while True:
yield val
###Output
_____no_output_____
###Markdown
Create the HyperparametersEC Object
###Code
#@title Instantiate the HPO
hp = HyperparametersEC(
middle_layer = linespace_int(8, 64),
learning_rate = linespace(3.5e-02,3.5e-01),
batch_size = linespace_int(4,20),
epochs = linespace_int(50,400)
)
ind = [-.5, -.3, -.1, .8]
print(hp.next(ind))
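# Added illustration: each linespace generator is primed with next() and then
# sent a value in [-1, 1], which it maps linearly onto its configured range.
gen = linespace(0.0, 10.0)
next(gen)             # prime the generator
print(gen.send(0.0))  # the midpoint of [-1, 1] maps to 5.0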
#@title Setup Principal Component Analysis
#create example individuals
pop = np.array([[-.5, .75, -.1, .8], [-.5, -.3, -.5, .8]])
pca = PCA(n_components=2)
reduced = pca.fit_transform(pop)
t = reduced.transpose()
plt.scatter(t[0], t[1])
plt.show()
#@title Setup CUDA for use with GPU
cuda = True if torch.cuda.is_available() else False
print("Using CUDA" if cuda else "Not using CUDA")
Tensor = torch.cuda.FloatTensor if cuda else torch.Tensor
###Output
Using CUDA
###Markdown
Setup DEAP for PSO Search
###Code
#@title Setup Fitness Criteria
creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Particle", np.ndarray, fitness=creator.FitnessMax, speed=list,
smin=None, smax=None, best=None)
#@title PSO Functions
def generate(size, pmin, pmax, smin, smax):
part = creator.Particle(np.random.uniform(pmin, pmax, size))
part.speed = np.random.uniform(smin, smax, size)
part.smin = smin
part.smax = smax
return part
def updateParticle(part, best, phi1, phi2):
u1 = np.random.uniform(0, phi1, len(part))
u2 = np.random.uniform(0, phi2, len(part))
v_u1 = u1 * (part.best - part)
v_u2 = u2 * (best - part)
part.speed += v_u1 + v_u2
for i, speed in enumerate(part.speed):
if abs(speed) < part.smin:
part.speed[i] = math.copysign(part.smin, speed)
elif abs(speed) > part.smax:
part.speed[i] = math.copysign(part.smax, speed)
part += part.speed
###Output
_____no_output_____
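###Markdown
For reference (an added note, not from the original), `updateParticle` above implements the standard PSO update with cognitive and social coefficients $\phi_1, \phi_2$: $v_i \leftarrow v_i + U(0,\phi_1)(p_i - x_i) + U(0,\phi_2)(g - x_i)$ followed by $x_i \leftarrow x_i + v_i$, where $p_i$ is the particle's personal best, $g$ is the global best, and each speed component is clamped to the range $[s_{min}, s_{max}]$.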
###Markdown
Create a Training Function
###Code
#@title Wrapper Function for DL
loss_fn = nn.MSELoss()
if cuda:
loss_fn.cuda()
def train_function(hp):
X = np.reshape(
np.arange(
data_min,
data_max,
data_step)
, (-1, 1))
y = function(X)
inputs = X.shape[1]
tensor_x = torch.Tensor(X) # transform to torch tensor
tensor_y = torch.Tensor(y)
dataset = TensorDataset(tensor_x,tensor_y) # create your dataset
dataloader = DataLoader(dataset, batch_size= hp.batch_size, shuffle=True) # create your dataloader
model = Net(inputs, hp.middle_layer)
optimizer = optim.Adam(model.parameters(), lr=hp.learning_rate)
if cuda:
model.cuda()
history=[]
start = time.time()
for i in range(hp.epochs):
for X, y in iter(dataloader):
# wrap the data in variables
x_batch = Variable(torch.Tensor(X).type(Tensor))
y_batch = Variable(torch.Tensor(y).type(Tensor))
# forward pass
y_pred = model(x_batch)
# compute the loss
loss = loss_fn(y_pred, y_batch)
ll = loss.data
history.append(ll)
# reset gradients
optimizer.zero_grad()
# backwards pass
loss.backward()
# step the optimizer - update the weights
optimizer.step()
end = time.time() - start
return end, history, model, hp
hp_in = hp.next(ind)
span, history, model, hp_out = train_function(hp_in)
print(hp_in)
plt.plot(history)
print(min(history).item())
###Output
middle_layer = 22 learning_rate = 0.14525 batch_size = 11 epochs = 365
3627.07763671875
###Markdown
DEAP Toolbox
###Code
#@title Create Evaluation Function and Register
def evaluate(individual):
hp_in = hp.next(individual)
span, history, model, hp_out = train_function(hp_in)
y_ = model(torch.Tensor(Xi).type(Tensor))
fitness = loss_fn(y_, torch.Tensor(yi).type(Tensor)).data.item()
return fitness,
#@title Add Functions to Toolbox
toolbox = base.Toolbox()
toolbox.register("particle",
generate, size=hp.size(), pmin=-.25, pmax=.25, smin=-.25, smax=.25)
toolbox.register("population",
tools.initRepeat, list, toolbox.particle)
toolbox.register("update",
updateParticle, phi1=2, phi2=2)
toolbox.register("evaluate", evaluate)
###Output
_____no_output_____
###Markdown
Perform the HPO
###Code
random.seed(64)
pop = toolbox.population(n=25)
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("avg", np.mean)
stats.register("std", np.std)
stats.register("min", np.min)
stats.register("max", np.max)
logbook = tools.Logbook()
logbook.header = ["gen", "evals"] + stats.fields
ITS = 10
NDIM = hp.size()
best = None
best_part = None
best_hp = None
run_history = []
for i in range(ITS):
for part in pop:
part.fitness.values = toolbox.evaluate(part)
hp_eval = hp.next(part)
run_history.append([part.fitness.values[0], *hp_eval.values()])
if part.best is None or part.best.fitness > part.fitness:  # keep the lower loss as the personal best (the loss is minimized)
part.best = creator.Particle(part)
part.best.fitness.values = part.fitness.values
if best is None or best.fitness > part.fitness:
best = creator.Particle(part)
best.fitness.values = part.fitness.values
best_hp = hp.next(best)
for part in pop:
toolbox.update(part, best)
span, history, model, hp_out = train_function(hp.next(best))
y_ = model(torch.Tensor(Xi).type(Tensor))
fitness = loss_fn(y_, torch.Tensor(yi).type(Tensor)).data.item()
run_history.append([fitness,*hp_out.values()])
clear_output()
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(18,6))
fig.suptitle(f"Best Fitness {best.fitness} \n{best_hp}")
fig.text(0,0,f"Iteration {i+1}/{ITS} Current Fitness {fitness} \n{hp_out}")
ax1.plot(history)
ax1.set_xlabel("iteration")
ax1.set_ylabel("loss")
ax2.plot(Xi, yi, 'o', color='black')
ax2.plot(Xi,y_.detach().cpu().numpy(), 'r')
ax2.set_xlabel("X")
ax2.set_ylabel("Y")
rh = np.array(run_history)
M = rh[:,1:NDIM+1]
reduced = pca.fit_transform(M)
t = reduced.transpose()
hexbins = ax3.hexbin(t[0], t[1], C=rh[:, 0],
bins=50, gridsize=50, cmap=cm.get_cmap('gray'))
ax3.set_xlabel("PCA X")
ax3.set_ylabel("PCA Y")
plt.show()
time.sleep(1)
###Output
_____no_output_____ |
nbs/15_exceptions.ipynb | ###Markdown
exceptions> Defines the exceptions used throughout prcvd applications.
###Code
#export
## imports
#export
class UnexpectedInputProvided(Exception):
"""Interrupts execution if unexpected input is provided."""
pass
class ExpectedInputMissing(Exception):
"""Interrupts execution if expected input is missing."""
pass
class DataTypeNotImplemented(Exception):
"""Interrupts execution if requested data type is not implemented"""
pass
from nbdev.export import *
notebook2script()
###Output
Converted 00_core.ipynb.
Converted 01_web.ipynb.
Converted 02_db.ipynb.
Converted 03_img.ipynb.
Converted 04_audio.ipynb.
Converted 05_config.ipynb.
Converted 06_tabular.ipynb.
Converted 07_audio_oyez.ipynb.
Converted 08_audio_talktime.ipynb.
Converted 09_scraping_github.ipynb.
Converted 10_scraping.ipynb.
Converted 11_img_face.ipynb.
Converted 12_serving_core.ipynb.
Converted 13_interfaces_types.ipynb.
Converted 14_serving_ubiops.ipynb.
Converted 15_exceptions.ipynb.
Converted index.ipynb.
|
Password-Data-Analysis.ipynb | ###Markdown
**Internet's Most Common Passwords** Acknowledgements: the dataset was procured from SecLists. SecLists is the security tester's companion. It's a collection of multiple types of lists used during security assessments, collected in one place. List types include usernames, passwords, URLs, sensitive data patterns, fuzzing payloads, web shells, and many more. The goal is to enable a security tester to pull this repository onto a new testing box and have access to every type of list that may be needed.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data= pd.read_csv("/content/common_passwords.csv")
data.head(25)
#null check
data.isnull().sum()
###Output
_____no_output_____
###Markdown
No Null values- Nice!!!
###Code
data.describe()
# from data.describe() we can say that:
#mean length ~= 6.65
#mean num_chars ~= 5.03
#mean num_digits ~= 1.62
#mean num_upper ~= 0.03
#mean num_lower ~= 5.005
#mean num_special ~= 0.003
#mean num_vowels ~= 1.81
#mean num_syllables ~= 1.61
#minimum password length = 3
#maximum password length = 16
data.nunique()
#top 10 longest password
data.sort_values(by='length', ascending=False)[['password', 'length']].head(10)
#top 10 shortest passwords
data.sort_values(by='length', ascending=True)[['password', 'length']].head(10)
#plotting password length data
plt.bar(np.sort(data['length'].unique()), data.groupby('length')['length'].count().values)
plt.xlabel('length')
plt.ylabel('frequency')
plt.axis([0,16,0,4000])
# Now, let's look at the correlation between features
plt.figure(figsize=(20,20))
sns.set(font_scale=1)
sns.heatmap(data=data[['num_chars', 'num_digits', 'num_upper',
'num_lower', 'num_special', 'num_vowels', 'num_syllables']].corr(),annot = True)
###Output
_____no_output_____ |
unn.ipynb | ###Markdown
unn
unn is my reference implementation of a very simple neural network capable of recognizing handwritten digits. It has been coded completely from scratch using only the excellent numerical computation library [NumPy](https://numpy.org/). During the implementation process, I have relied heavily on Michael Nielsen's fantastic book called [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/index.html) and I have used his own reference implementation available in the book to test and evaluate my codebase.
###Code
import random
import matplotlib.pyplot as plt
import unn.mnist
import unn.neuralnet
def prediction(vector):
value = 0
confidence = 0
for v, c in enumerate(vector):
c = c[0]
if c > confidence:
value = v
confidence = c
return value, confidence
def to_image(sample):
return sample.reshape(28, 28)
###Output
_____no_output_____
###Markdown
The MNIST Dataset
I'm using the MNIST dataset throughout the whole training and evaluation process. The custom utility function `unn.mnist.load_dataset` depends on two separate data files: raw uncompressed pixel data of handwritten digits and another file comprising the labels for the images. The dataset is divided into 50k data samples for training and 10k data samples for model evaluation. This partitioning into two separate segments is essential for the model to be evaluated on data points it has not seen during the training process.
###Code
training_data = unn.mnist.load_dataset(
"./data/train-images-idx3-ubyte.gz",
"./data/train-labels-idx1-ubyte.gz"
)
test_data = unn.mnist.load_dataset(
"./data/t10k-images-idx3-ubyte.gz",
"./data/t10k-labels-idx1-ubyte.gz"
)
###Output
_____no_output_____
###Markdown
Following is a simple visual demonstration of the MNIST dataset. You can repeatedly run the kernel below to randomly browse through the 28x28 greyscale images.
###Code
training_data_length = len(training_data)
rows = 2
columns = 4
size = 16
fig, axes = plt.subplots(rows, columns)
fig.tight_layout(pad=2)
fig.set_figwidth(size)
for row in range(rows):
for column in range(columns):
axis = axes[row][column]
index = random.randrange(0, training_data_length)
x, y = training_data[index]
image = to_image(training_data[index][0])
truth, _ = prediction(y)
axis.set_title(f"Digit {truth}")
axis.imshow(image)
###Output
_____no_output_____
###Markdown
Training the Neural Network
The simplest neural network model I used comprises three layers of neurons, each using the [sigmoid function](https://en.wikipedia.org/wiki/Sigmoid_function) as an activation function. The input layer is built from 784 neurons to account for the 28x28 image dimensions. There is a second, hidden layer of 30 neurons and, lastly, the output layer is made up of 10 output neurons. The training process uses an algorithm called [stochastic gradient descent](https://en.wikipedia.org/wiki/Stochastic_gradient_descent) to evaluate the current learning state of the neural network and update its weights and biases in the most practical and efficient way. Since I am using NumPy to do all the number crunching and heavy lifting, the learning process itself takes about 15 minutes on my Intel Core i7 laptop with 16GB RAM. Feel free to train the neural network from scratch for yourself.
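(Added note) Concretely, each neuron applies the sigmoid $\sigma(z) = 1/(1+e^{-z})$ to its weighted input, and stochastic gradient descent updates the weights as $w \leftarrow w - \frac{\eta}{m}\sum_x \nabla C_x$, where $\eta$ is the learning rate (3.0 in the code below) and $m$ is the mini-batch size (10 below).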
###Code
net = unn.neuralnet.Neuralnet((784, 30, 10))
epochs = 30
batch_size = 10
learning_rate = 3.0
evaluation = unn.neuralnet.train(
net,
training_data,
epochs,
batch_size,
learning_rate,
test_data=test_data
)
plt.plot(evaluation)
plt.title("Training Evaluation")
plt.xlabel("Epoch")
plt.ylabel("Error")
###Output
_____no_output_____
###Markdown
The Result
Below is a kernel that lets you randomly pick an image from the test dataset and feed it through the trained neural network. As you can see, the prediction confidence of the trained model consistently hits around 97%.
###Code
x, y = random.choice(test_data)
z = net.feedforward(x)
truth, _ = prediction(y)
value, confidence = prediction(z)
plt.title(f"Truth: {truth}, Prediction: {value} ({confidence * 100:.2f} %)")
plt.imshow(to_image(x))
###Output
_____no_output_____ |
watt-time/.ipynb_checkpoints/Introduction to Watt Time-checkpoint.ipynb | ###Markdown
Introduction: Using Watt Time to Find Energy SourcesThe purpose of this notebook is to explore the Watt Time API to find what kind of electricity we are currently using. The Watt Time API allows us to see a breakdown of the energy generation for a given location.
###Code
# Standard Data Science Helpers
import numpy as np
import pandas as pd
import scipy
import featuretools as ft
# Graphic libraries
import matplotlib as plt
%matplotlib inline
plt.style.use('fivethirtyeight');plt.rcParams['font.size']=18
import seaborn as sns
# Extra options
pd.options.display.max_rows = 10
# Show all code cells outputs
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
###Output
_____no_output_____
###Markdown
Get Location
This code is from Stack Overflow:
###Code
from IPython.display import HTML
import requests  # needed for the API calls below
info = open('C:/users/willk/OneDrive/Desktop/watt_time_api.txt', 'r').read()
u, p = info.split(',')
values = """{
"username": "willkoehrsen",
"password": "introduction",
"email": "[email protected]",
"org": "Cortex Intel"
}"""
headers = {
"Content-Type": "application/json"
}
r = requests.post('https://api.watttime.org/api/v1/obtain-token-auth/', data={'username': u,
'password': p})
import ast
token = ast.literal_eval(r.text.strip())['token']
token
r = requests.get('https://api.watttime.org/api/v1/datapoints/',
headers={'Authorization': f'Token {token}'})
HTML(r.text)
def get_location_ba(longitude, latitude):
loc = {'type': 'Point', 'coordinates': [longitude, latitude]}
return requests.get(f'https://api.watttime.org/api/v1/balancing_authorities/?loc={loc}')
get_location_ba(g.latlng[1], g.latlng[0]).text
r = requests.get('http://api.watttime.org/api/v1/datapoints/?ba=MISO&market=RT5M&page=2&page_size=2')
r.text
r = requests.get('http://api.watttime.org/api/v1/datapoints/?ba=PJM&page=2&page_size=2',
headers={'Authorization': f'Token {token}'})
r.text
r = requests.get('https://api.watttime.org/api/v1/balancing_authorities/?loc={"type":"Point","coordinates":[-122.272778,37.871667]}')
r.text
HTML(r.text.strip())
r.text
values = """
{
"username": "freddo",
"password": "the_frog",
"email": "[email protected]",
"org": "fred's world"
}
"""
values = {'password': p, 'username': u}
r = requests.get('https://api2.watttime.org/v2/login', data=values, headers=headers)
HTML(r.text.strip())
import geocoder
g = geocoder.ip('me')
print(g.latlng)
import requests
headers = {'username': u, 'password': p}
request = requests.get(f'https://api2.watttime.org/v2/login/', auth=(u, p))
request.text
headers
g.latlng
info = open('C:/Users/willk/OneDrive/Desktop/watt_time_api.txt', 'r').read()
u, p = info.split(',')
from watttime_client.client import WattTimeAPI
client = WattTimeAPI(token=p)
from datetime import datetime
import pytz
timestamp = pytz.utc.localize(datetime(2015, 6, 1, 12, 30))
value = client.get_impact_at(timestamp, 'CAISO')
client.get_impact_between()
###Output
_____no_output_____ |
Python_Basic_Assignments/Assignment_17.ipynb | ###Markdown
1. Assign the value 7 to the variable guess_me. Then, write the conditional tests (if, else, and elif) to print the string 'too low' if guess_me is less than 7, 'too high' if greater than 7, and 'just right' if equal to 7.
###Code
guess_me = 7
guess = int(input('Guess number: '))
if guess < 7:
print('Too small!')
elif guess >7:
print('Too big!')
else:
print('Yes! You guessed right!')
###Output
Guess number: 7
Yes! You guessed right!
###Markdown
2. Assign the value 7 to the variable guess_me and the value 1 to the variable start. Write a while loop that compares start with guess_me. Print too low if start is less than guess me. If start equals guess_me, print 'found it!' and exit the loop. If start is greater than guess_me, print 'oops' and exit the loop. Increment start at the end of the loop.
###Code
guess_me = 7
start = 1
while True:
if start < guess_me:
print('Too low!')
elif start == guess_me:
print('Found it!')
break
else:
print('Ooops!')
break
start+=1
###Output
Too low!
Too low!
Too low!
Too low!
Too low!
Too low!
Found it!
###Markdown
3. Print the following values of the list [3, 2, 1, 0] using a for loop.
###Code
l = [3, 2, 1, 0]
for i in l:
print(i)
###Output
3
2
1
0
###Markdown
4. Use a list comprehension to make a list of the even numbers in range(10)
###Code
even = [i for i in range(10) if i%2 == 0]
even
###Output
_____no_output_____
###Markdown
5. Use a dictionary comprehension to create the dictionary squares. Use range(10) to return the keys, and use the square of each key as its value.
###Code
square = {k: k**2 for k in range(10)}
square
###Output
_____no_output_____
###Markdown
6. Construct the set odd from the odd numbers in the range using a set comprehension (10).
###Code
odd = {i for i in range(10) if i%2 != 0}
odd
###Output
_____no_output_____
###Markdown
7. Use a generator comprehension to return the string 'Got ' and a number for the numbers in range(10). Iterate through this by using a for loop.
###Code
s = 'Got'
gen = (s+str(i) for i in range(10))  # a generator comprehension, as the exercise asks
for i in gen:
print(i)
###Output
Got0
Got1
Got2
Got3
Got4
Got5
Got6
Got7
Got8
Got9
###Markdown
8. Define a function called good that returns the list ['Harry', 'Ron', 'Hermione'].
###Code
def good():
l = ['Harry', 'Ron', 'Hermione']
return l
good()
###Output
_____no_output_____
###Markdown
9. Define a generator function called get_odds that returns the odd numbers from range(10). Use a for loop to find and print the third value returned.
###Code
def get_odds():
for i in range(10):
if i % 2 != 0:
yield i
for position, n in enumerate(get_odds(), start=1):
if position == 3:
print(n)
break
###Output
5
###Markdown
10. Define an exception called OopsException. Raise this exception to see what happens. Then write the code to catch this exception and print 'Caught an oops'.
###Code
class OopsException(Exception):
"""Caught an oops"""
pass
num = 1
try:
if num == 1:
raise OopsException
except OopsException:
print('Caught an oops')
###Output
Caught an oops
###Markdown
11. Use zip() to make a dictionary called movies that pairs these lists: titles = ['Creature of Habit', 'Crewel Fate'] and plots = ['A nun turns into a monster', 'A haunted yarn shop'].
###Code
titles = ['Creature of Habit', 'Crewel Fate']
plots = ['A nun turns into a monster', 'A haunted yarn shop']
movies = dict(zip(titles, plots))
movies
###Output
_____no_output_____ |
section2/2-6-3(selector func, lambda).ipynb | ###Markdown
181216: applying selector functions and lambdas
###Code
from bs4 import BeautifulSoup
import sys
import io
fp = open("cars.html", encoding="utf-8")
soup = BeautifulSoup(fp, "html.parser")
def car_func(selector):
print('car_func', soup.select_one(selector).string)
car_func("li:nth-of-type(4)")
car_func("li#gr")
car_func("ul > li#gr")
car_func("#cars #gr")
car_func("#cars > #gr")
car_func("li[id='gr']")
###Output
car_func Grandeur
car_func Grandeur
car_func Grandeur
car_func Grandeur
car_func Grandeur
car_func Grandeur
###Markdown
Lambda expressions
###Code
car_lambda = lambda selector : print('car_lambda', soup.select_one(selector).string)
car_lambda("li:nth-of-type(4)")
car_lambda("li#gr")
car_lambda("ul > li#gr")
car_lambda("#cars #gr")
car_lambda("#cars > #gr")
car_lambda("li[id='gr']")
###Output
car_lambda Grandeur
car_lambda Grandeur
car_lambda Grandeur
car_lambda Grandeur
car_lambda Grandeur
car_lambda Grandeur
|
week7 WordToVector.ipynb | ###Markdown
Load packages
###Code
import collections
import numpy as np
import tensorflow as tf
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
print("Load packages")
###Output
Load packages
###Markdown
Configuration
###Code
batch_size = 20
embedding_size = 2
num_sampled = 15
###Output
_____no_output_____
###Markdown
Sentences
###Code
sentences = ["the quick brown fox jumped over the lazy dog",
"i love cats and dogs",
"we all love cats and dogs",
"sung likes cats",
"she loves dogs",
"cats can be very independent",
"cats are playful",
"cats are natural hunter",
"it's raining cats and dogs"]
print("senetences length is %d" %(len(sentences)))
###Output
sentences length is 9
###Markdown
make words
###Code
words = " ".join(sentences).split()
print(words)
count = collections.Counter(words).most_common()
print(count)
###Output
[('cats', 7), ('dogs', 4), ('and', 3), ('love', 2), ('the', 2), ('are', 2), ('raining', 1), ('all', 1), ('be', 1), ('over', 1), ('we', 1), ('playful', 1), ('likes', 1), ('sung', 1), ('jumped', 1), ('fox', 1), ('she', 1), ('brown', 1), ('lazy', 1), ('very', 1), ('hunter', 1), ('independent', 1), ('natural', 1), ('i', 1), ('dog', 1), ('can', 1), ('loves', 1), ("it's", 1), ('quick', 1)]
###Markdown
make Dictionary
###Code
rdic = [i[0] for i in count] #id -> word
print(rdic)
dic = {w: i for i, w in enumerate (rdic)} #word -> id
voc_size = len(dic)
#print(rdic)
print(dic)
print(rdic[0])
print(dic['cats'])
data = [dic[word] for word in words]
print(data)
###Output
[4, 28, 17, 15, 14, 9, 4, 18, 24, 23, 3, 0, 2, 1, 10, 7, 3, 0, 2, 1, 13, 12, 0, 16, 26, 1, 0, 25, 8, 19, 21, 0, 5, 11, 0, 5, 22, 20, 27, 6, 0, 2, 1]
###Markdown
Make CBOW data
###Code
cbow_pairs = []
for i in range(1, len(data)-1):
cbow_pairs.append([[data[i-1],data[i+1]], data[i]])
print(cbow_pairs)
###Output
[[[4, 17], 28], [[28, 15], 17], [[17, 14], 15], [[15, 9], 14], [[14, 4], 9], [[9, 18], 4], [[4, 24], 18], [[18, 23], 24], [[24, 3], 23], [[23, 0], 3], [[3, 2], 0], [[0, 1], 2], [[2, 10], 1], [[1, 7], 10], [[10, 3], 7], [[7, 0], 3], [[3, 2], 0], [[0, 1], 2], [[2, 13], 1], [[1, 12], 13], [[13, 0], 12], [[12, 16], 0], [[0, 26], 16], [[16, 1], 26], [[26, 0], 1], [[1, 25], 0], [[0, 8], 25], [[25, 19], 8], [[8, 21], 19], [[19, 0], 21], [[21, 5], 0], [[0, 11], 5], [[5, 0], 11], [[11, 5], 0], [[0, 22], 5], [[5, 20], 22], [[22, 27], 20], [[20, 6], 27], [[27, 0], 6], [[6, 2], 0], [[0, 1], 2]]
###Markdown
skip-gram
###Code
skip_gram_pairs = []
for c in cbow_pairs:
skip_gram_pairs.append([c[1],c[0][0]])
skip_gram_pairs.append([c[1],c[0][1]])
print(skip_gram_pairs[:5])
def generate_batch(size):
assert size < len(skip_gram_pairs)
x_data = []
y_data = []
r = np.random.choice(range(len(skip_gram_pairs)), size, replace=False)
for i in r:
x_data.append(skip_gram_pairs[i][0])
y_data.append([skip_gram_pairs[i][1]])
return x_data, y_data
###Output
_____no_output_____
###Markdown
Network
###Code
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
embedding = tf.Variable(tf.random_uniform([voc_size, embedding_size], -1.0, 1.0))
embed = tf.nn.embedding_lookup(embedding, train_inputs) #lookup table
nce_w = tf.Variable(tf.random_uniform([voc_size, embedding_size],-1.0,1.0))
nce_b = tf.Variable(tf.zeros([voc_size]))
#loss = tf.reduce_mean(tf.nn.nce_loss(nce_w, nce_b, embed, train_labels, num_sampled, voc_size))
loss = tf.reduce_mean(
tf.nn.nce_loss(weights=nce_w, biases=nce_b, inputs=embed, labels=train_labels,
num_sampled=num_sampled, num_classes=voc_size))
optm = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)
print("build Network")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for step in range(3000):
batch_x, batch_y = generate_batch(batch_size)
sess.run(optm, feed_dict={train_inputs : batch_x, train_labels : batch_y})
trained_embedding = embedding.eval()
###Output
_____no_output_____
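###Markdown
(Added note) `tf.nn.nce_loss` approximates the full softmax over the vocabulary by contrasting each true (center, context) pair against `num_sampled` (15 here) randomly drawn negative words, so the per-step cost scales with the sample size rather than with the vocabulary size.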
###Markdown
Plot Results
###Code
if trained_embedding.shape[1] == 2:
labels = rdic[:20]
for i, label in enumerate(labels):
x, y = trained_embedding[i, :]
plt.scatter(x,y)
plt.annotate(label, xy=(x,y), xytext=(5,2), textcoords="offset points", ha="right", va="bottom")
plt.show()
###Output
_____no_output_____ |
Python_Stock/Candlestick_Patterns/Candlestick_Matching_Low.ipynb | ###Markdown
Candlestick Matching Low https://www.investopedia.com/terms/m/matching-low.asp
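(Added note) The matching low is a two-candle bullish pattern: two consecutive down candles whose closes sit at roughly the same level, suggesting a support price. TA-Lib's `CDLMATCHINGLOW` flags the bar that completes the pattern with a nonzero value (typically +100), which is why the code below keeps only the nonzero entries.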
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import talib
import warnings
warnings.filterwarnings("ignore")
# yahoo finance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'ETSY'
start = '2021-01-01'
end = '2021-10-22'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
###Output
[*********************100%***********************] 1 of 1 completed
###Markdown
Candlestick with Matching Low
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
from mplfinance.original_flavor import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
matching_low = talib.CDLMATCHINGLOW(df['Open'], df['High'], df['Low'], df['Close'])
matching_low = matching_low[matching_low != 0]
df['matching_low'] = talib.CDLMATCHINGLOW(df['Open'], df['High'], df['Low'], df['Close'])
df.loc[df['matching_low'] !=0]
df['Adj Close'].loc[df['matching_low'] !=0]
df['Adj Close'].loc[df['matching_low'] !=0].values
df['Adj Close'].loc[df['matching_low'] !=0].index
matching_low
matching_low.index
df
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['matching_low'] !=0].index, df['Adj Close'].loc[df['matching_low'] !=0],
'pk', # marker style 'o', color 'g'
fillstyle='none', # circle is not filled (with color)
ms=10.0)
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Plot Certain dates
###Code
df = df['2021-02-01':'2021-03-01']
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
ax.set_facecolor('cornflowerblue')
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='mediumblue', colordown='darkorchid', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax.grid(True, which='both')
#ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['matching_low'] !=0].index, df['Adj Close'].loc[df['matching_low'] !=0],
'pw', # marker style 'o', color 'g'
fillstyle='none', # circle is not filled (with color)
ms=25.0)
colors = dfc.VolumePositive.map({True: 'mediumblue', False: 'darkorchid'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Highlight Candlestick
###Code
from matplotlib.dates import date2num
from datetime import datetime
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax.grid(True, which='both')
#ax.minorticks_on()
axv = ax.twinx()
ax.axvspan(date2num(datetime(2021,2,10)), date2num(datetime(2021,2,11)),
label="Matching Low Bullish",color="blue", alpha=0.3)
ax.hlines(y=df['Adj Close'].loc[df['matching_low'] !=0].values, color='r', linestyle='-', xmin=pd.to_datetime('2021-02-01'), xmax=pd.to_datetime('2021-03-01'))
ax.legend()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
###Output
_____no_output_____ |
Info_7390_Advanced_Data_Science_Mini_Project_2_Reinforcement_Learning.ipynb | ###Markdown
Reinforcement Learning
Reinforcement Learning is an area of Machine Learning concerned with how agents ought to take actions in an environment in order to maximize a notion of cumulative reward. Reinforcement Learning is one of the three basic Machine Learning paradigms, along with Supervised Learning and Unsupervised Learning.
Consider the diagram. The maze represents the "Environment" and the dog is the "Agent". Our objective is to teach the agent an optimal policy (the "best sequence of actions") so that it can solve the maze. A reward is provided to the agent every time a right action is taken and a penalty every time a wrong action is taken. Each action taken by the agent in the environment results in a new "State".
Markov Decision Processes
Markov Decision Processes are meant to be a straightforward framing of the problem of learning from interaction to achieve a goal. The learner and decision maker is called the "Agent". The thing it interacts with, comprising everything outside the agent, is called the "Environment". These interact continually, the agent selecting actions and the environment responding to these actions and presenting new situations to the agent. The environment also gives rise to rewards, special numerical values that the agent seeks to maximize over time through its choice of actions.
More specifically, the agent and environment interact at each of a sequence of discrete time steps, $t = 0, 1, 2, 3, \dots$ At each time step $t$, the agent receives some representation of the environment's state, $S_t \in \mathcal{S}$, and on that basis selects an action, $A_t \in \mathcal{A}(s)$. One time step later, in part as a consequence of its action, the agent receives a numerical reward, $R_{t+1} \in \mathcal{R} \subset \mathbb{R}$, and finds itself in a new state, $S_{t+1}$. The MDP and agent together thereby give rise to a sequence or trajectory that begins like this: $S_0, A_0, R_1, S_1, A_1, R_2, S_2, A_2, R_3, \dots$ We can think of the process of receiving a reward as an arbitrary function $f$ that maps state-action pairs to rewards. At each time $t$, we have $f(S_t, A_t) = R_{t+1}$.
Q-Learning
Q-learning is an off-policy reinforcement learning algorithm that seeks to find the best action to take given the current state. It is considered off-policy because the q-learning function learns from actions that are outside the current policy, like taking random actions, and therefore a policy isn't needed. More specifically, q-learning seeks to learn a policy that maximizes the total reward.
What's Q in Q-Learning?
The 'Q' in q-learning stands for quality. Quality in this case represents how useful a given action is in gaining some future reward.
Deep Q-Learning
Q-Learning is a simple but powerful algorithm that teaches our agent exactly which action to perform to extract the maximum reward. Consider an environment of 10,000 states and 1,000 actions per state. This would create a table of 10 million cells. The amount of memory required to save and update that table increases as the number of states increases, and the amount of time required to explore each state to build the Q-table would be unrealistic. In deep Q-learning, we instead use a neural network to approximate the Q-value function: the state is given as the input and the Q-values of all possible actions are generated as the output. The image below gives an overview of how Q-Learning and Deep Q-Learning work.
The steps involved in DQNs are:
1. Preprocess and feed the game screen (state $s$) to our DQN, which will return the Q-values of all possible actions in the state
2. Select an action using the epsilon-greedy policy: with probability epsilon we select a random action $a$, and with probability 1-epsilon we select the action with the maximum Q-value, $a = \arg\max_a Q(s, a, w)$
3. Perform this action in state $s$ and move to a new state $s'$ to receive a reward. This state $s'$ is the preprocessed image of the next game screen. We store this transition in our replay buffer as $(s, a, r, s')$
4. Next, sample some random batches of transitions from the replay buffer and calculate the loss
5. The loss is $\big(r + \gamma \max_{a'} Q(s', a'; w^-) - Q(s, a; w)\big)^2$, which is just the squared difference between the target Q and the predicted Q
6. Perform gradient descent with respect to our actual network parameters in order to minimize this loss
7. After every C iterations, copy our actual network weights to the target network weights
8. Repeat these steps for M number of episodes
Going back to the Q-value update equation derived from the Bellman equation, we have $Q(s,a) \leftarrow Q(s,a) + \alpha\big[r + \gamma \max_{a'} Q(s',a') - Q(s,a)\big]$. The term $r + \gamma \max_{a'} Q(s',a')$ represents the target. We can argue that the network is predicting its own value, but since $r$ is the unbiased true reward, the network is going to update its gradient using backpropagation to finally converge.
Lunar Lander-v2 OpenAI Gym
OpenAI Gym is a toolkit for developing and implementing Reinforcement Learning algorithms. Here we apply DQNs to one of OpenAI's game environments, LunarLander-v2. The objective of the game is to train the agent (the lander) to land in the landing zone indicated between two flags. Since the environment is 2D and the number of states, each with many available actions, is large, we solve this environment using DQNs. The landing pad is always at coordinates (0,0). The coordinates are the first two numbers in the state vector.
Discrete Actions
According to Pontryagin's maximum principle, it is optimal to fire an engine at full throttle or turn it off, which is why this environment is OK to have discrete actions (engine on or off). Four discrete actions are available:
1. Do nothing
2. Fire left orientation engine
3. Fire right orientation engine
4. Fire main engine
Points Scoring
The reward for moving from the top of the screen to the landing pad with zero speed is about 100 to 140 points. If the lander moves away from the landing pad it loses reward. Episode: the episode finishes if the lander crashes or comes to rest, receiving an additional -100 or +100 points. Each time one of the lander's legs makes contact with the ground is an additional 10 points. Firing the main engine is -0.3 points per frame; firing a side engine is -0.03 points per frame. Solving the environment is +200 points. Fuel is infinite, so an agent can learn to fly and then land on its first attempt.
###Code
#Importing all Libraries
import gym
import random
from keras import Sequential
from collections import deque
from keras.layers import Dense
from keras.optimizers import Adam
import matplotlib.pyplot as plt
from keras.activations import relu, linear
import numpy as np
#Setting up the Environment
env = gym.make('LunarLander-v2')
env.seed(0)
np.random.seed(0)
##Implementing the Deep Q Learning Algorithm
class DQN:
# Initialize the hyperparameters, including epsilon for the epsilon-greedy policy
def __init__(self, action_space, state_space):
self.action_space = action_space
self.state_space = state_space
self.epsilon = 1.0
self.gamma = .99
self.batch_size = 64
self.epsilon_min = .01
self.lr = 0.001
self.epsilon_decay = .996
self.memory = deque(maxlen=1000000)
self.model = self.build_model()
# Build the neural network: the input layer takes the state vector
# and the output layer produces one Q-value per action in the action space
def build_model(self):
model = Sequential()
model.add(Dense(150, input_dim=self.state_space, activation=relu))
model.add(Dense(120, activation=relu))
model.add(Dense(self.action_space, activation=linear))
model.compile(loss='mse', optimizer=Adam(lr=self.lr))
return model
#Storing current state, action, reward and next_state in memory
def remember(self, state, action, reward, next_state, done):
self.memory.append((state, action, reward, next_state, done))
def act(self, state):
if np.random.rand() <= self.epsilon:
return random.randrange(self.action_space)
act_values = self.model.predict(state)
return np.argmax(act_values[0])
def replay(self):
# learn only once enough transitions have been collected
if len(self.memory) < self.batch_size:
return
# sample a random mini-batch of (state, action, reward, next_state, done) transitions
minibatch = random.sample(self.memory, self.batch_size)
states = np.array([i[0] for i in minibatch])
actions = np.array([i[1] for i in minibatch])
rewards = np.array([i[2] for i in minibatch])
next_states = np.array([i[3] for i in minibatch])
dones = np.array([i[4] for i in minibatch])
states = np.squeeze(states)
next_states = np.squeeze(next_states)
# Bellman targets: r + gamma * max_a' Q(s', a'), zeroed out for terminal states
targets = rewards + self.gamma*(np.amax(self.model.predict_on_batch(next_states), axis=1))*(1-dones)
targets_full = self.model.predict_on_batch(states)
ind = np.array([i for i in range(self.batch_size)])
targets_full[[ind], [actions]] = targets
# fit toward the updated targets and decay epsilon toward epsilon_min
self.model.fit(states, targets_full, epochs=1, verbose=0)
if self.epsilon > self.epsilon_min:
self.epsilon *= self.epsilon_decay
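# Added note: unlike step 7 of the description above, this implementation uses a
# single network for both the predictions and the targets (there is no separate
# frozen target network); targets are recomputed from the same model on every replay step.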
def train_dqn(episode):
loss = []
agent = DQN(env.action_space.n, env.observation_space.shape[0])
for e in range(episode):
state = env.reset()
state = np.reshape(state, (1, 8))
score = 0
max_steps = 3000
for i in range(max_steps):
action = agent.act(state)
env.render()
next_state, reward, done, _ = env.step(action)
score += reward
next_state = np.reshape(next_state, (1, 8))
agent.remember(state, action, reward, next_state, done)
state = next_state
agent.replay()
if done:
print("episode: {}/{}, score: {}".format(e, episode, score))
break
loss.append(score)
# Calculating the Average score of last 100 episodes to see if the cumulative reward score is above 200
is_solved = np.mean(loss[-100:])
if is_solved > 200:
print('\n Task Completed! \n')
break
print("Average over last 100 episode: {0:.2f} \n".format(is_solved))
return loss
if __name__ == '__main__':
print(env.observation_space)
print(env.action_space)
episodes = 500
loss = train_dqn(episodes)
plt.plot([i+1 for i in range(0, len(loss), 2)], loss[::2])
plt.show()
###Output
Box(-inf, inf, (8,), float32)
Discrete(4)
episode: 0/500, score: -467.9592952121066
Average over last 100 episode: -467.96
episode: 1/500, score: -269.97945449264193
Average over last 100 episode: -368.97
episode: 2/500, score: -164.76628672014783
Average over last 100 episode: -300.90
episode: 3/500, score: -168.73037139905065
Average over last 100 episode: -267.86
episode: 4/500, score: -357.1159862289321
Average over last 100 episode: -285.71
episode: 5/500, score: -93.86297422341758
Average over last 100 episode: -253.74
episode: 6/500, score: -234.46337826082544
Average over last 100 episode: -250.98
episode: 7/500, score: -373.33667719762775
Average over last 100 episode: -266.28
episode: 8/500, score: -209.8393380570767
Average over last 100 episode: -260.01
episode: 9/500, score: -199.2647648907856
Average over last 100 episode: -253.93
episode: 10/500, score: -145.67306544264596
Average over last 100 episode: -244.09
episode: 11/500, score: -145.40140896411629
Average over last 100 episode: -235.87
episode: 12/500, score: -108.36594116479849
Average over last 100 episode: -226.06
episode: 13/500, score: -46.86715981353944
Average over last 100 episode: -213.26
episode: 14/500, score: -119.80558348572829
Average over last 100 episode: -207.03
episode: 15/500, score: -251.56623517594207
Average over last 100 episode: -209.81
episode: 16/500, score: -179.22321254348523
Average over last 100 episode: -208.01
episode: 17/500, score: -48.28501208671849
Average over last 100 episode: -199.14
episode: 18/500, score: -41.13109527314005
Average over last 100 episode: -190.82
episode: 19/500, score: -19.367873227196814
Average over last 100 episode: -182.25
episode: 20/500, score: -67.29372723184368
Average over last 100 episode: -176.78
episode: 21/500, score: -30.273582707547096
Average over last 100 episode: -170.12
episode: 22/500, score: -176.47699056618953
Average over last 100 episode: -170.39
episode: 23/500, score: 2.5055697863512316
Average over last 100 episode: -163.19
episode: 24/500, score: -0.8526897556040762
Average over last 100 episode: -156.70
episode: 25/500, score: -54.53841686990655
Average over last 100 episode: -152.77
episode: 26/500, score: -36.92301376773135
Average over last 100 episode: -148.48
episode: 27/500, score: -35.18777893534239
Average over last 100 episode: -144.43
episode: 28/500, score: -48.21869726548192
Average over last 100 episode: -141.11
episode: 29/500, score: -223.5587807493501
Average over last 100 episode: -143.86
episode: 30/500, score: -5.448200255660427
Average over last 100 episode: -139.40
episode: 31/500, score: -58.28004852998345
Average over last 100 episode: -136.86
episode: 32/500, score: -7.10492894416198
Average over last 100 episode: -132.93
episode: 33/500, score: -227.39055196323866
Average over last 100 episode: -135.71
episode: 34/500, score: -247.51717192396586
Average over last 100 episode: -138.90
episode: 35/500, score: -114.91643339828603
Average over last 100 episode: -138.24
episode: 36/500, score: -230.8186729643991
Average over last 100 episode: -140.74
episode: 37/500, score: -214.7241949598793
Average over last 100 episode: -142.68
episode: 38/500, score: -475.89987860009137
Average over last 100 episode: -151.23
episode: 39/500, score: -119.13886565979138
Average over last 100 episode: -150.43
episode: 40/500, score: -208.30368326070325
Average over last 100 episode: -151.84
episode: 41/500, score: -410.0449849297044
Average over last 100 episode: -157.99
episode: 42/500, score: -79.7082598541801
Average over last 100 episode: -156.17
episode: 43/500, score: -118.57013922061489
Average over last 100 episode: -155.31
episode: 44/500, score: -72.67169747698718
Average over last 100 episode: -153.47
episode: 45/500, score: -24.101417246133572
Average over last 100 episode: -150.66
episode: 46/500, score: -251.20000818151007
Average over last 100 episode: -152.80
episode: 47/500, score: -89.65273933602664
Average over last 100 episode: -151.49
episode: 48/500, score: -117.26287748828264
Average over last 100 episode: -150.79
episode: 49/500, score: -29.29223736689107
Average over last 100 episode: -148.36
episode: 50/500, score: -72.39719645240416
Average over last 100 episode: -146.87
episode: 51/500, score: -144.27991449781382
Average over last 100 episode: -146.82
episode: 52/500, score: -44.9964709224483
Average over last 100 episode: -144.90
episode: 53/500, score: -53.44683383528773
Average over last 100 episode: -143.20
episode: 54/500, score: -24.231682606097976
Average over last 100 episode: -141.04
episode: 55/500, score: -22.57420797589365
Average over last 100 episode: -138.92
episode: 56/500, score: -74.71148618629121
Average over last 100 episode: -137.80
episode: 57/500, score: -142.2566550616247
Average over last 100 episode: -137.88
episode: 58/500, score: 16.012955607611232
Average over last 100 episode: -135.27
episode: 59/500, score: -124.51371640021782
Average over last 100 episode: -135.09
episode: 60/500, score: -130.38926366815673
Average over last 100 episode: -135.01
episode: 61/500, score: -79.21982204561417
Average over last 100 episode: -134.11
episode: 62/500, score: -53.35421149667525
Average over last 100 episode: -132.83
episode: 63/500, score: -159.42056811865817
Average over last 100 episode: -133.24
episode: 64/500, score: -84.74901805777148
Average over last 100 episode: -132.50
episode: 65/500, score: -97.27322471392095
Average over last 100 episode: -131.96
episode: 66/500, score: -234.23541376772718
Average over last 100 episode: -133.49
episode: 67/500, score: -183.68291658264647
Average over last 100 episode: -134.23
episode: 68/500, score: -55.969509398223344
Average over last 100 episode: -133.10
episode: 69/500, score: -344.4012238154067
Average over last 100 episode: -136.11
episode: 70/500, score: 29.55403535658571
Average over last 100 episode: -133.78
episode: 71/500, score: -125.81572676798963
Average over last 100 episode: -133.67
episode: 72/500, score: -160.56747148558657
Average over last 100 episode: -134.04
episode: 73/500, score: -79.40798892958364
Average over last 100 episode: -133.30
episode: 74/500, score: -99.69742857186958
Average over last 100 episode: -132.85
episode: 75/500, score: -53.62106816277711
Average over last 100 episode: -131.81
episode: 76/500, score: -100.43409483632207
Average over last 100 episode: -131.40
episode: 77/500, score: -36.36560969753245
Average over last 100 episode: -130.18
episode: 78/500, score: -144.74109064255904
Average over last 100 episode: -130.37
episode: 79/500, score: -91.51112417763899
Average over last 100 episode: -129.88
episode: 80/500, score: -51.79432950238407
Average over last 100 episode: -128.92
episode: 81/500, score: 26.3224859771706
Average over last 100 episode: -127.02
episode: 82/500, score: 1.3807484945335324
Average over last 100 episode: -125.48
episode: 83/500, score: -70.45725420056422
Average over last 100 episode: -124.82
episode: 84/500, score: 4.597024860636964
Average over last 100 episode: -123.30
episode: 85/500, score: -17.846473156473298
Average over last 100 episode: -122.07
episode: 86/500, score: -24.18527216943364
Average over last 100 episode: -120.95
episode: 87/500, score: 29.66466639807507
Average over last 100 episode: -119.24
episode: 88/500, score: -92.4372878802058
Average over last 100 episode: -118.94
episode: 89/500, score: -40.80079336467935
Average over last 100 episode: -118.07
episode: 90/500, score: -78.44784378831582
Average over last 100 episode: -117.63
episode: 91/500, score: -14.794849525729143
Average over last 100 episode: -116.51
episode: 92/500, score: -34.19861498151474
Average over last 100 episode: -115.63
episode: 93/500, score: 0.7915392936094416
Average over last 100 episode: -114.39
episode: 94/500, score: -32.35243241438176
Average over last 100 episode: -113.53
episode: 95/500, score: -80.62566606452216
Average over last 100 episode: -113.18
episode: 96/500, score: -11.158887117319777
Average over last 100 episode: -112.13
|
Final_MA.ipynb | ###Markdown
Final Project for Métodos Analíticos (Analytical Methods): Francisco Bahena, Cristian Challu, Daniel Sharp. Loading libraries
###Code
import numpy as np
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import sys
import time
from IPython.display import clear_output
from io import StringIO
import random
###Output
_____no_output_____
###Markdown
Loading and presenting the data:
###Code
import requests
import pandas as pd
flujo = pd.read_csv('flujo_daniel.csv')
flujo = flujo.iloc[:,1:]
###Output
_____no_output_____
###Markdown
Our data consists of a log of connections to nodes at 3 shopping malls. Across the 3 malls we have 32 wifi nodes that the devices of individuals walking near or inside the malls try to connect to. The data contains five columns: - MAC address of the wifi node - MAC address of the device - Signal strength of the connection - Timestamp of the connection - Name of the mall the observation belongs to
###Code
flujo.head()
flujo['fecha'] = pd.to_datetime(flujo['fecha'], format = '%Y-%m-%d %H:%M:%S')
flujo['day'] = flujo['fecha'].apply(lambda x: x.day)
flujo['hour'] = flujo['fecha'].apply(lambda x: x.hour)
flujo['minute'] = flujo['fecha'].apply(lambda x: x.minute)
###Output
_____no_output_____
###Markdown
We clean our data and create variables for day, hour, and minute so that it is easier to filter and work with the data.
###Code
flujo_f = flujo.loc[(flujo['day'] == 17 ) & (flujo['hour'] == 8) & (flujo['pot'] >= -70)]
flujo_f.shape
###Output
_____no_output_____
###Markdown
Running the Bloom filter for unique devices. For the Bloom run we stood up an API on a server with our Bloom filter implementation. To simulate the stream, we send the observations to the API in one-minute blocks and obtain the number of 'new' devices counted by the Bloom filter, as well as the number of devices that were already present. We do the same against a database to obtain the true values. We report the theoretical expected error and the actual error, and we also measure execution time for both our Bloom filter and the database. The following command resets the database and the Bloom filter, passing the parameters used to build the filter: the number of hash functions and the size of the filter bit vector.
###Code
n = 81203   # size of the filter bit vector
k = 10      # number of hash functions
requests.get('http://54.157.13.52:3000/limpia_db_bloom/'+str(k)+'/'+str(n))
###Output
_____no_output_____
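###Markdown
Before streaming the data, it helps to see the logic the API implements. Below is a minimal, self-contained sketch of a Bloom filter (an assumption on our part: the deployed service may hash differently, but the insert and membership logic is the standard one, with k salted hashes over a bit vector of size n):
###Code
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k salted SHA-256 hashes over n bits."""
    def __init__(self, n, k):
        self.n = n              # size of the bit vector
        self.k = k              # number of hash functions
        self.bits = [0] * n

    def _positions(self, item):
        # Simulate k independent hash functions by salting one hash with the index i
        for i in range(self.k):
            h = hashlib.sha256('{}:{}'.format(i, item).encode()).hexdigest()
            yield int(h, 16) % self.n

    def add(self, item):
        # Returns True if the item was (probably) seen before
        seen = True
        for pos in self._positions(item):
            if self.bits[pos] == 0:
                seen = False
                self.bits[pos] = 1
        return seen

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(n=81203, k=10)
print(bf.add('aa:bb:cc:dd:ee:ff'))   # False -> a new visit
print(bf.add('aa:bb:cc:dd:ee:ff'))   # True  -> already counted
###Output
False
True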
###Markdown
Next we 'simulate' the data stream, sending the observations in one-minute blocks for a total of 10 hours.
###Code
# Simulation start and end hours
hora = 8
hora_fin = 18
# Numpy array holding the run statistics
history = np.empty((0,7),float)
num_unicos_bloom = 0
num_unicos_db = 0
plt.close()
while hora < hora_fin:
    # Minute counter
    contador = -1
    # Current hour
    hora = hora+1
    # Filter the data down to the hour under consideration
    flujo_f = flujo.loc[(flujo['day'] == 17 )& (flujo['hour'] == hora) & (flujo['pot'] >= -70)]
    # Iterate over every minute of the hour
    for minuto in range(60):
        # Counter for the current minute
        contador += 1
        print("Hora: {} Minuto: {}".format(hora-1, minuto))
        # Filter the data down to the minute under consideration
        base = flujo_f.loc[flujo_f['minute'] == minuto]
        base = base[['mac_x']].values.tolist()
        elementos = ['-'.join(elemento) for elemento in base]
        # API request for the total number of elements in the filter
        r = requests.get('http://54.157.13.52:3000/check_bloom_db/')
        num_unicos_total = r.json()['elementos_en_db']
        # Theoretical error rate
        tasa_teorica = 100*(1-(1-1/n)**(num_unicos_total*k))**k
        # Call the API and send the current minute's elements to the Bloom filter
        r_bloom = requests.post('http://54.157.13.52:3000/insert_elements_bloom/', json= {'records':elementos})
        elapsed_bloom = float(r_bloom.json()['tiempo_en_segundos'])
        # Call the API and send the current minute's elements to the database
        r_db = requests.post('http://54.157.13.52:3000/insert_elements_db/', json= {'records':elementos})
        elapsed_db = float(r_db.json()['tiempo_en_segundos'])
        # Actual error rate
        if r_db.json()['nuevas_visitas_base'] > 0:
            tasa_error = 100*(1- r_bloom.json()['nuevas_visitas']/r_db.json()['nuevas_visitas_base'])
        else:
            tasa_error = 0
        # Number of unique elements seen in this stream
        num_unicos_bloom += r_bloom.json()['nuevas_visitas']
        num_unicos_db += r_db.json()['nuevas_visitas_base']
        # Store this step's statistics in the history
        step = [contador, elapsed_bloom, elapsed_db, tasa_error, tasa_teorica, num_unicos_bloom, num_unicos_db]
        history = np.vstack((history, step))
        # Plot the results for each metric under consideration
        clear_output()
        print(r_bloom.json())
        print(r_db.json())
        plt.figure(1)
        plt.figure(figsize = (15, 5))
        plt.subplot(131)
        plt.plot(history[:,1], label = "Bloom", linestyle = '-')
        plt.plot(history[:,2], label = "DataBase", linestyle = '--')
        plt.legend()
        plt.xlabel("Minutos")
        plt.ylabel("Tiempo de ejecución")
        plt.title("Tiempos ejecución")
        plt.xlim((1,600))
        plt.subplot(132)
        plt.plot(history[:,3], linestyle = '-', label = "Real")
        plt.plot(history[:,4], linestyle = '--', label = "Teorica")
        plt.ylim((0, 105))
        plt.legend()
        plt.ylabel("Tasa error")
        plt.xlabel("Minutos")
        plt.title("Tasa de error")
        plt.xlim((1,600))
        plt.subplot(133)
        plt.plot(history[:,5], label = "Bloom", linestyle = '-')
        plt.plot(history[:,6], label = "DataBase", linestyle = '--')
        plt.legend()
        plt.ylabel("Número de únicos")
        plt.xlabel("Minutos")
        plt.title("Número de únicos")
        plt.xlim((1,600))
        plt.show()
###Output
{'visitas_existentes': 546, 'nuevas_visitas': 19, 'tiempo_en_segundos': '0.05281543731689453'}
{'visitas_existentes_base': 546, 'nuevas_visitas_base': 19, 'tiempo_en_segundos': '0.8397023677825928'}
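###Markdown
The `tasa_teorica` line in the loop is the standard Bloom-filter false-positive approximation. With a bit vector of size $n$, $k$ hash functions, and $N$ distinct elements already inserted, the probability that a brand-new element is wrongly reported as present is

$$p \approx \left(1 - \left(1 - \tfrac{1}{n}\right)^{kN}\right)^{k},$$

which the code multiplies by 100 to report a percentage.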
###Markdown
The Bloom filter lets us see the number of unique individuals in a given time period, and detect, within a one-minute window, how many devices are new and how many had already appeared at any of the 3 malls. **Good run: n=576,743, k=15** Computer 1: Mall 1. Computer 2: Malls 2 and 3. **Bad run: n=81,203, k=10** Computer 1: Mall 1. Computer 2: Malls 2 and 3. Running the Bloom filter to filter out employees. For this use of the Bloom filter we first add a subset of distinct MAC addresses to the filter, simulating the devices of the employees of the mall's stores. We obtain the number of devices filtered and not filtered by the Bloom filter, and do the same against a database to obtain the true values. We report the theoretical expected error and the actual error, and we also measure execution time for both our Bloom filter and the database. First a set of addresses representing the mall employees is created; then those records are added to the Bloom filter.
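Reusing the `BloomFilter` sketch from earlier, the membership check behind the `/is_in_filter/` endpoint presumably reduces to the following (an assumption about the service; only the JSON key names are taken from the real responses below):
###Code
# Pre-load the employee MAC addresses, then classify each incoming record
bf = BloomFilter(n=271393, k=8)
empleados_demo = ['c8:3a:35:01:02:03', 'c8:3a:35:04:05:06']   # toy whitelist
for mac in empleados_demo:
    bf.add(mac)

records = ['c8:3a:35:01:02:03', 'a4:5e:60:0a:0b:0c']
ya_en_filtro = sum(mac in bf for mac in records)    # filtered out (employees)
no_estan_filtro = len(records) - ya_en_filtro       # kept (regular visitors)
###Output
_____no_output_____
###Markdown
The actual run against the API and the database follows.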
###Code
# Shuffle all rows, then keep only strong-signal observations
flujo_filtrado = flujo.sample(n=len(flujo), replace=False).loc[(flujo['pot'] >= -70)] #(~flujo['mac_x'].str.contains("c8:3a:35")) &
#flujo_filtrado = flujo.loc[(flujo['day'] == 17 ) & (flujo['hour'].isin([8,9])) & (flujo['pot'] >= -70) & ~flujo['mac_x'].str.contains("c8:3a:35")]
macs = list(set(flujo_filtrado['mac_x']))
# Draw 10% of the observations and keep the distinct MACs as the 'employees'
empleados = random.sample(flujo_filtrado['mac_x'].tolist(), round(.1*len(flujo_filtrado)))
empleados = list(set(empleados))
len(empleados)/len(macs)
flujo_filtrado['mac_x'].value_counts()/len(flujo_filtrado)
flujo_f = flujo.loc[(flujo['day'] == 17 ) & (flujo['hour'].isin([8,9])) & (flujo['pot'] >= -70)]
flujo_filtrado.head()
flujo_f['mac_x'].isin(empleados).mean()
len(macs)
n = 271393   # size of the filter bit vector
k = 8        # number of hash functions
requests.get('http://54.157.13.52:3000/limpia_db_bloom/'+str(k)+'/'+str(n))
r_bloom = requests.post('http://54.157.13.52:3000/insert_elements_bloom/', json= {'records':empleados})
r_db = requests.post('http://54.157.13.52:3000/insert_elements_db/', json= {'records':empleados})
# API request for the total number of elements in the filter
r = requests.get('http://54.157.13.52:3000/check_bloom_db/')
print(r.json())
num_unicos_total = len(empleados)
# Simulation start and end hours
hora = 8
hora_fin = 10
# Numpy array holding the run statistics
history = np.empty((0,7),float)
num_no_filtrados_bloom = 0
num_no_filtrados_db = 0
plt.close()
while hora < hora_fin:
    # Minute counter
    contador = -1
    # Current hour
    hora = hora+1
    # Filter the data down to the hour under consideration
    flujo_f = flujo.loc[(flujo['day'] == 17 ) & (flujo['hour'] == hora) & (flujo['pot'] >= -70)]
    # Iterate over every minute of the hour
    for minuto in range(60):
        # Counter for the current minute
        contador += 1
        print("Hora: {} Minuto: {}".format(hora-1, minuto))
        # Filter the data down to the minute under consideration
        base = flujo_f.loc[flujo_f['minute'] == minuto]
        elementos = base[['mac_x']].values.tolist()
        elementos = ['-'.join(elemento) for elemento in elementos]
        # Theoretical error rate
        tasa_teorica = 100*(1-(1-1/n)**(num_unicos_total*k))**k
        # Call the API and test the current minute's elements against the Bloom filter
        r_bloom = requests.post('http://54.157.13.52:3000/is_in_filter/', json= {'records':elementos})
        elapsed_bloom = float(r_bloom.json()['tiempo_en_segundos'])
        # Call the API and test the current minute's elements against the database
        r_db = requests.post('http://54.157.13.52:3000/is_in_db/', json= {'records':elementos})
        elapsed_db = float(r_db.json()['tiempo_en_segundos'])
        # Actual error rate
        if r_db.json()['no_estan_db'] > 0:
            tasa_error = 100*(1- r_bloom.json()['no_estan_filtro']/r_db.json()['no_estan_db'])
        else:
            tasa_error = 0
        # Number of elements this stream did not filter out
        num_no_filtrados_bloom += r_bloom.json()['no_estan_filtro']
        num_no_filtrados_db += r_db.json()['no_estan_db']
        # Store this step's statistics in the history
        step = [contador, elapsed_bloom, elapsed_db, tasa_error, tasa_teorica, num_no_filtrados_bloom, num_no_filtrados_db]
        history = np.vstack((history, step))
        # Plot the results for each metric under consideration
        clear_output()
        print(r_bloom.json())
        print(r_db.json())
        plt.figure(1)
        plt.figure(figsize = (15, 5))
        plt.subplot(131)
        plt.plot(history[:,1], label = "Bloom", linestyle = '-')
        plt.plot(history[:,2], label = "DataBase", linestyle = '--')
        plt.legend()
        plt.xlabel("Minutos")
        plt.ylabel("Tiempo de ejecución")
        plt.title("Tiempos ejecución")
        plt.xlim((1,120))
        plt.subplot(132)
        plt.plot(history[:,3], linestyle = '-', label = "Real")
        plt.plot(history[:,4], linestyle = '--', label = "Teorica")
        plt.ylim((0, 105))
        plt.legend()
        plt.ylabel("Tasa error")
        plt.xlabel("Minutos")
        plt.title("Tasa de error")
        plt.xlim((1,120))
        plt.subplot(133)
        plt.plot(history[:,5], label = "Bloom", linestyle = '-')
        plt.plot(history[:,6], label = "DataBase", linestyle = '--')
        plt.legend()
        plt.ylabel("Número de no filtrados")
        plt.xlabel("Minutos")
        plt.title("Número de no filtrados")
        plt.xlim((1,120))
        plt.show()
###Output
{'no_estan_filtro': 41, 'tiempo_en_segundos': '0.01253199577331543', 'ya_en_filtro': 509}
{'ya_en_la_db': 509, 'no_estan_db': 41, 'tiempo_en_segundos': '0.7961337566375732'}
###Markdown
**Bad run: 271,393 and 8 hash functions** **Good run: 271,393 and 4 hash functions** Hash-based sampling. To estimate the average dwell time of a device in the mall we implemented hash-based sampling. By hashing the MAC address and assigning it to buckets, we can keep every observation of a sampled subset of users in our data. From this sample we estimate the average time individuals spend in the malls, taking the difference between the maximum and minimum timestamp for each sampled individual and extrapolating that value to the population.
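A minimal sketch of the bucket assignment (an assumption: the API presumably hashes each MAC into `num_cubetas` buckets and keeps the records whose bucket falls among the first `cubetas_a_tomar`; the key property is that a sampled device keeps all of its observations, unlike uniform row sampling):
###Code
import hashlib

def en_muestra(mac, num_cubetas=40, cubetas_a_tomar=8):
    # A stable hash of the MAC address decides the bucket once and for all
    cubeta = int(hashlib.md5(mac.encode()).hexdigest(), 16) % num_cubetas
    return cubeta < cubetas_a_tomar

# Every row of a sampled device passes the test, so max-min timestamps per
# device stay intact; uniform row sampling would truncate the observed stay.
muestra = [mac for mac in ['a4:5e:60:01:02:03', 'c8:3a:35:0a:0b:0c'] if en_muestra(mac)]
###Output
_____no_output_____
###Markdown
This is why, in the comparison below, the uniform row sample severely underestimates the average stay while the hash sample tracks the true mean.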
###Code
num_cubetas = 40
cubetas_a_tomar = 8
flujo_g = flujo
flujo_g['ts'] = pd.to_datetime(flujo_g.fecha).astype(int) / 1000000000
for i in [12,13,14,15]:
    flujo_f = flujo_g.loc[(flujo_g['day'] == i )& (flujo['pot'] >= -70)]
    tabla = flujo_f[['mac_x','ts']]
    # Hash-sample average
    requests.get('http://54.157.13.52:3000/clean_bucket/'+str(num_cubetas)+'/'+str(cubetas_a_tomar)+'/')
    elementos = tabla.values.tolist()
    requests.post('http://54.157.13.52:3000/insert_elements_db_window/', json= {'records':elementos})
    r_window = requests.get('http://54.157.13.52:3000/check_window_sample/').json()['canasta_duracion_promedio']/60
    # True average
    maxes = tabla.groupby('mac_x')['ts'].max().reset_index()
    mins = tabla.groupby('mac_x')['ts'].min().reset_index()
    maxes.columns = ['mac_max', 'last']
    mins.columns = ['mac_min', 'first']
    times = maxes.merge(mins, how = 'inner', left_on = 'mac_max', right_on = 'mac_min')
    times['duracion'] = times['last'] - times['first']
    times = times.loc[times['duracion']>0]
    real_mean = times['duracion'].mean()/60
    # Uniform-sample average
    tabla2 = flujo_f[['mac_x','ts']].sample(frac = cubetas_a_tomar/num_cubetas, replace = False)
    maxes2 = tabla2.groupby('mac_x')['ts'].max().reset_index()
    mins2 = tabla2.groupby('mac_x')['ts'].min().reset_index()
    maxes2.columns = ['mac_max', 'last']
    mins2.columns = ['mac_min', 'first']
    times2 = maxes2.merge(mins2, how = 'inner', left_on = 'mac_max', right_on = 'mac_min')
    times2['duracion'] = times2['last'] - times2['first']
    #times2 = times2.loc[times2['duracion']>0]
    unif_mean = times2['duracion'].mean()/60
    print("Promedio real: {}, Muestreo Hash: {}, Muestreo Uniforme: {}".format(real_mean, r_window, unif_mean))
###Output
Promedio real: 74.6527088804591, Muestreo Hash: 76.0, Muestreo Uniforme: 16.64517357550901
Promedio real: 59.33128572913418, Muestreo Hash: 59.31666666666667, Muestreo Uniforme: 13.487290653700702
Promedio real: 70.95296786853078, Muestreo Hash: 73.51666666666667, Muestreo Uniforme: 16.72918567482854
Promedio real: 71.73686075781664, Muestreo Hash: 69.71666666666667, Muestreo Uniforme: 17.09579276887272
###Markdown
**10 buckets, take 2** Hyperloglog. To obtain the number of unique individuals in a day we implemented the HyperLogLog algorithm. To build the buckets we chose the first 8 bits of the hash, using the remainder to count the length of the run of trailing zeros. We evaluate the algorithm's performance with different numbers of buckets.
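A simplified sketch of the estimator (our assumptions: the low b bits of the hash pick the bucket and the standard bias correction is applied; the deployed service may differ in bucket layout and in how estimates are averaged):
###Code
import hashlib

def trailing_zeros(x):
    return (x & -x).bit_length() - 1 if x else 0

def hll_estimate(items, b=8):
    m = 2 ** b                       # number of buckets
    maxima = [0] * m
    for item in items:
        h = int(hashlib.sha1(item.encode()).hexdigest(), 16)
        bucket = h & (m - 1)         # b bits of the hash choose the bucket
        rest = h >> b                # the remaining bits feed the zero-run counter
        maxima[bucket] = max(maxima[bucket], trailing_zeros(rest) + 1)
    alpha = 0.7213 / (1 + 1.079 / m)     # bias correction for large m
    return alpha * m * m / sum(2.0 ** -r for r in maxima)

est = hll_estimate('device-{}'.format(i) for i in range(30000))   # roughly 30000, within a few percent
###Output
_____no_output_____
###Markdown
The evaluation against the API, for four different days: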
###Code
for i in [13,14,15,16]:
    flujo_f = flujo.loc[(flujo['day'] == i )& (flujo['pot'] >= -70)].mac_x.values.tolist()
    # True unique count
    real_unique = len(set(flujo_f))
    # HyperLogLog estimate from the API
    r_hll = requests.post('http://54.157.13.52:3000/check_unique/', json= {'records':flujo_f, 'bit_long':3})
    hll_unique = r_hll.json()['unicas_hloglog']
    print("Únicos real: {}, Únicos Hyperloglog: {}".format(real_unique, hll_unique))
###Output
Únicos real: 32415, Únicos Hyperloglog: 23229.991384615387
Únicos real: 28082, Únicos Hyperloglog: 17158.516363636365
Únicos real: 26664, Únicos Hyperloglog: 94371.84
Únicos real: 23830, Únicos Hyperloglog: 11439.010909090908
|
assignments/assignment1/collect_submission.ipynb | ###Markdown
Collect Submission - Zip + Generate PDF. Run this notebook once you have completed all the other notebooks (`knn.ipynb`, `svm.ipynb`, `softmax.ipynb`, `two_layer_net.ipynb`, and `features.ipynb`). It will:* Generate a zip file of your code (`.py` and `.ipynb`) called `a1.zip`.* Convert all notebooks into a single PDF file called `assignment.pdf`.If your submission for this step was successful, you should see the following display message:` Done! Please submit a1.zip and the pdfs to Gradescope. `Make sure to download the zip and PDF files locally to your computer, then submit them to Gradescope. Congrats on successfully completing the assignment!
###Code
%cd drive/My\ Drive
%cd $FOLDERNAME
!sudo apt-get install texlive-xetex texlive-fonts-recommended texlive-generic-recommended
!pip install PyPDF2
!bash collectSubmission.sh
###Output
[WinError 3] The system cannot find the path specified: 'drive/My\\ Drive'
E:\Mo\Univercity\internship\cs231n\assignments\assignment1
[WinError 2] The system cannot find the file specified: '$FOLDERNAME'
E:\Mo\Univercity\internship\cs231n\assignments\assignment1
|
Notebooks/Integra UFMS 2019.ipynb | ###Markdown
**Reading the data from the PDF**
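The cell below relies on a helper `get_data` defined in an earlier part of the original project and not shown here; a plausible, purely hypothetical reconstruction is given first, assuming it simply collects the DataFrame of every table camelot detects:
###Code
# Hypothetical reconstruction of the get_data helper (assumption: it returns
# one pandas DataFrame per camelot table, as a list for the merge below)
def get_data(tables):
    return [table.df for table in tables]
###Output
_____no_output_____
###Markdown
With that in place, the extraction and merge: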
###Code
import camelot
import pandas as pd
import json                      # used by the optional JSON export below
from functools import reduce

data_frames = get_data(camelot.read_pdf('../Dados/provided/EDITAL PROECE PROGRAD PROPP AGINOVA No 59, DE 04 DE JUNHO DE 2019.pdf', pages='1-54', flavor='lattice', strip_text='\n'))
df_pdf = reduce(lambda left,right: pd.merge(left,right,on=[0,1,2,3,4],how='outer'), data_frames)
#df_pdf.drop(0, inplace=True)
#df_pdf.rename(columns={
#    0:'Estudante', 1:'Unidade', 2:'Título do Trabalho',
#    3:'Programa', 4:'Data da apresentação'},inplace=True)
#df_pdf.reset_index().drop('index',axis=1).head()
# Save the extracted data to a file
#with open('docs/total.json','w', encoding='utf-8') as file: file.write(json.dumps(df_pdf.to_dict(), ensure_ascii=False, indent=4))
#df_pdf.to_excel('docs/total.xlsx', sheet_name='Sheet1', index=False)
###Output
_____no_output_____
###Markdown
***
###Code
df = pd.read_excel('../Dados/generated/integraufms2019.xlsx')
df.head()
#df.equals(df_pdf.reset_index().drop('index',axis=1))
df.groupby(['Unidade']).groups.keys()
len(df.groupby(['Unidade']).groups['CPPP'])
df['Programa'].value_counts()
###Output
_____no_output_____
###Markdown
**Number of presentations per campus**
###Code
#ordenado = df.groupby(['Unidade']).count().reset_index().sort_values(by='Estudante',ascending=False)[['Unidade','Estudante']].reset_index().drop('index',axis=1)
trabalhos_submetidos = pd.DataFrame(df['Unidade'].value_counts()).rename(columns={'Unidade':'Quantidade'}) # same result as the groupby commented out above
trabalhos_submetidos
#ordenado
#qtd_unidade = ordenado.rename(columns={'Estudante':'Quantidade'}).set_index('Unidade')
# qtd_unidade.set_index('Unidade',inplace=True)
#qtd_unidade
# qtd_excel.reset_index()  # revert to the previous shape
# Save this result to a spreadsheet
#qtd_unidade.to_excel('docs/qtd_unidade.xlsx')
#https://matplotlib.org/2.0.0/examples/color/named_colors.html
colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf', 'khaki','b','teal','#99ff99','#ffcc99', '#52BE80', '#F7DC6F', '#6C3483', 'crimson', 'darkturquoise', '#4A235A', '#F39C12', 'green', 'hotpink', '#800000', '#FFFF00', '#00FF00', '#FF00FF', '#0000FF', '#FF9999', 'red', '#800080', '#CD5C5C', '#E59866', '#1F618D', '#6E2C00', '#17202A', '#85C1E9', '#F1948A']
#colors = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf', 'khaki']
# working version
#unidade_pie = qtd_unidade.plot.pie(y='Quantidade',figsize=(19, 19), shadow=True, autopct='%1.2f%%', colors=colors)
unidade_pie = trabalhos_submetidos.plot.pie(y='Quantidade',figsize=(22, 22), shadow=True, autopct='%1.2f%%', colors=colors)
unidade_pie_fig = unidade_pie.get_figure()
unidade_pie_fig.savefig('../Resultados/img/unidade_pie_2019.pdf')
unidade_pie_fig
# ax = qtd_unidade.plot(color='teal',figsize=(10,10), label='Quantidade', legend=False)
# unidade_bar = qtd_unidade.plot(color='teal',figsize=(10,10), label='Quantidade', kind='bar', ax=ax)
unidade_bar = trabalhos_submetidos.plot.bar(y='Quantidade',figsize=(10,10), legend=False, color=colors)  # was qtd_unidade, which is only defined in the commented-out lines above
for p in unidade_bar.patches: unidade_bar.annotate('{}'.format(str(p.get_height())), (p.get_x(), p.get_height()))
unidade_bar_fig = unidade_bar.get_figure()
unidade_bar_fig.savefig('../Resultados/img/unidade_bar_2019.pdf')
#unidade_bar.get_figure()
###Output
_____no_output_____
###Markdown
**Number of presentations per program**
###Code
ordenado = df.groupby(['Programa']).count().reset_index().sort_values(by='Estudante',ascending=False)[['Programa','Estudante']].reset_index().drop('index',axis=1)
qtd_programa = ordenado.rename(columns={'Estudante':'Quantidade'}).set_index('Programa')
qtd_programa
#qtd_programa.to_excel('docs/qtd_programa.xlsx')
programa_pie = qtd_programa.plot.pie(y='Quantidade',figsize=(10, 10), shadow=True, autopct='%1.2f%%', colors=colors).legend(loc='upper right')
programa_pie_fig = programa_pie.get_figure()
programa_pie_fig.savefig('../Resultados/img/programa_pie_2019.pdf')
#ax = qtd_programa.plot(color='teal',figsize=(10,10), label='Quantidade', legend=False)
programa_bar = qtd_programa.plot.bar(y='Quantidade',figsize=(10,10), legend=False, color=colors)
for p in programa_bar.patches: programa_bar.annotate('{}'.format(str(p.get_height())), (p.get_x(), p.get_height()))
programa_bar_fig = programa_bar.get_figure()
programa_bar_fig
programa_bar_fig.savefig('../Resultados/img/programa_bar_2019.pdf')
###Output
_____no_output_____
###Markdown
**Number of presentations each program submitted at each unit**
###Code
ordenado_unidade = df.groupby(['Programa','Unidade']).count().reset_index().sort_values(by='Unidade',ascending=True)[['Programa','Unidade','Estudante']].reset_index().drop('index',axis=1)
qtd_proguni_unid = ordenado_unidade.rename(columns={'Estudante':'Quantidade'}).set_index('Unidade')
#qtd_proguni_unid.loc['CPPP']['Quantidade'].sum()
#qtd_proguni_unid.loc['CPPP'][['Programa','Quantidade']]
qtd_proguni_unid
qtd_proguni_unid.to_excel('../Resultados/docs/qtd_proguni_unid_2019.xlsx')
ordenado_prog = df.groupby(['Programa','Unidade']).count().reset_index().sort_values(by='Programa',ascending=True)[['Programa','Unidade','Estudante']].reset_index().drop('index',axis=1)
qtd_proguni_prog = ordenado_prog.rename(columns={'Estudante':'Quantidade'}).set_index('Programa')
#qtd_proguni_prog.loc['CPPP']['Quantidade'].sum()
#qtd_proguni_prog.loc['CPPP'][['Programa','Quantidade']]
qtd_proguni_prog
qtd_proguni_prog.to_excel('../Resultados/docs/qtd_proguni_prog_2019.xlsx')
dfs = {'Ordenado por Unidade':qtd_proguni_unid, 'Ordenado por Programa':qtd_proguni_prog}
dfs
writer = pd.ExcelWriter('../Resultados/docs/prog_unid_2019.xlsx', engine='xlsxwriter')
for sheet_name in dfs.keys(): dfs[sheet_name].to_excel(writer, sheet_name=sheet_name, index=True)
writer.save()
###Output
_____no_output_____
###Markdown
**Students who submitted more than one paper**
###Code
x = df.groupby(['Estudante','Unidade']).count().reset_index()[['Estudante','Unidade','Data da apresentação']].reset_index().sort_values(by='Estudante',ascending=True).drop('index',axis=1)
estudante = x.rename(columns={'Data da apresentação':'Quantidade'})
aluno_excel = estudante[estudante['Quantidade']>1].reset_index().drop('index',axis=1)
aluno_excel
aluno_excel.to_excel('../Resultados/docs/qtd_aluno_2019.xlsx',index=False)
###Output
_____no_output_____
###Markdown
**Number of presentations on each day**
###Code
#x = df.groupby(['Data da apresentação']).count().reset_index()[['Data da apresentação','Estudante']].reset_index().sort_values(by='Data da apresentação',ascending=True).drop('index',axis=1)
#apresentacao = x.rename(columns={'Estudante':'Quantidade'}).set_index('Data da apresentação')
#apresentacao
#apresentacao_pie = apresentacao.plot.pie(y='Quantidade',figsize=(10, 10), shadow=True, autopct='%1.2f%%',colors=colors)
#apresentacao_pie_bar = apresentacao_pie.get_figure()
#apresentacao_pie_bar.savefig('../Resultados/img/apresentacao_pie_2019.pdf')
#'#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd'
#apresentacao_bar = apresentacao.plot(color='m',figsize=(10,10), kind='bar', legend=False)
#apresentacao_bar = apresentacao.plot.bar(y='Quantidade',figsize=(10,10), color=['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd'], legend=False)
#for p in apresentacao_bar.patches: apresentacao_bar.annotate('{}'.format(str(p.get_height())), (p.get_x(), p.get_height()))
#apresentacao_bar_fig = apresentacao_bar.get_figure()
#apresentacao_bar_fig
#apresentacao_bar_fig.savefig('../Resultados/img/apresentacao_bar_2019.pdf')
###Output
_____no_output_____
###Markdown
**PET submissions per campus**
###Code
x = df.groupby(['Programa','Unidade']).count().reset_index()[['Programa','Unidade','Estudante']].reset_index().sort_values(by='index',ascending=True).drop('index',axis=1)
pet = x.rename(columns={'Estudante':'Quantidade'}).set_index('Programa')
pets = pet.loc[['PET']]
pets = pets.sort_values('Quantidade', ascending=False)
pets = pets.set_index('Unidade')
pets
pets_pie = pets.plot.pie(y='Quantidade',figsize=(15, 15), shadow=True, autopct='%1.2f%%', colors=colors)
pets_pie.get_figure().savefig('../Resultados/img/pets_pie_2019.pdf')
pets_bar = pets.plot.bar(y='Quantidade',figsize=(10, 10), color=colors, legend=False)
for p in pets_bar.patches: pets_bar.annotate('{}'.format(str(p.get_height())), (p.get_x(), p.get_height()))
pets_bar.get_figure().savefig('../Resultados/img/pets_bar_2019.pdf')
###Output
_____no_output_____
###Markdown
**Percentage relative to the number of active students per campus**
###Code
df_all = pd.read_excel('../Dados/generated/Alunos-Matriculados.xlsx')
df_all['unidade'].value_counts().keys()
df_all['unidade'].value_counts()
###Output
_____no_output_____
###Markdown
**Builds a dataframe from the dict keys, with the value counts of each unit**
###Code
alunos = pd.DataFrame(df_all['unidade'].value_counts()).rename(columns={'unidade':'Quantidade'})
alunos
trabalhos_submetidos
porcentagem = (trabalhos_submetidos/alunos).dropna()
porcentagem.columns
porcentagem.sort_values(by=['Quantidade'],ascending=False,inplace=True)
porcentagem
###Output
_____no_output_____
###Markdown
**How pandas computes the percentage**
It takes the total sum of the column, treats that sum as 100%, and treats each row as x, then applies the rule of three to obtain the value:

1.579963 corresponds to 100%
0.15242494226327943 corresponds to x

1.579963 x = 15.242494226327944
x = 15.242494226327944 / 1.579963
x = (0.15242494226327943 * 100) / 1.579963
x = 9.64737416403292
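The same normalization can be reproduced directly (a short sketch using the `porcentagem` series built above; the column name is taken from the earlier cells):
###Code
# plot.pie treats the column total as 100%; each slice is its share of it
shares = 100 * porcentagem['Quantidade'] / porcentagem['Quantidade'].sum()
shares = shares.round(2)   # the worked example above: 0.152425 / 1.579963 * 100 = 9.65
###Output
_____no_output_____
###Markdown
And the pie chart itself: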
###Code
porcentagem_fig = porcentagem.plot.pie(subplots=True,figsize=(20, 20), shadow=True, autopct='%1.2f%%', colors=colors)
porcentagem_fig[0].get_figure().savefig('../Resultados/img/porcentagem_2019.pdf')
###Output
_____no_output_____