path | concatenated_notebook
---|---
tutorials/PyCSEP_tutorial_gridded.ipynb | ###Markdown
Gridded forecast model tutorial with Italy examplesIn this tutorial, we will load and test grid-based forecasts and interpret the results of the tests provided in the PyCSEP package for gridded forecasts. We will work with two time-independent grid-based forecasts submitted as part of the CSEP Italy testing experiment (see [Werner et al, 2010](https://doi.org/10.4401/ag-4840), [Taroni et al, 2018](https://doi.org/10.1785/0220180031) for some previous testing results). Our goal is to compare the performance of these two forecasts for describing observed Italian seismicity. This is essentially a three step process: 1. Read in (and plot) a gridded forecast 2. Set up an evaluation catalog of observed events 3. Run PyCSEP tests and interpret the results We introduce the concepts to the reader and encourage them to explore the other tests available. Full documentation of the package can be found [here](https://docs.cseptesting.org/) and any issues can be reported on the [PyCSEP Github page](https://github.com/SCECcode/pycsep).
###Code
# Most of the core functionality can be imported from the top-level csep package.
# Utilities are available from the csep.utils subpackage.
import csep
from csep.core import regions, catalog_evaluations, poisson_evaluations as poisson
#from csep.core import poisson_evaluations as poisson
from csep.utils import datasets, time_utils, comcat, plots, readers
import numpy
## Cartopy required for updated plots
import cartopy
###Output
_____no_output_____
###Markdown
Read in forecasts We're going to start by setting up some experiment parameters. It is good practice to set this up early. Note, the start and end date of the forecast should be chosen based on the creation of the forecast. This is important for time-independent forecasts because they can be rescaled to any arbitrary time period.
###Code
## Set up experiment parameters
start_date = time_utils.strptime_to_utc_datetime('2010-01-01 00:00:00.0')
end_date = time_utils.strptime_to_utc_datetime('2015-01-01 00:00:00.0')
###Output
_____no_output_____
###Markdown
These forecasts are included with the main repository in the case of the Werner forecast and the tutorial repository for the Meletti forecast. You can learn more about the format of gridded forecasts [in the documentation](https://docs.cseptesting.org/concepts/forecasts.html). The filepath is relative to the root directory of the package, so you can specify any file location for your forecasts. You can also attach a name, which is very useful for comparisons later if you chose something sensible.
###Code
## Loads from the PyCSEP package
werner_forecast = csep.load_gridded_forecast(datasets.hires_ssm_italy_fname,
name='Werner, et al (2010)')
## You may need to edit the file location here depending on your set up
meletti_forecast = csep.load_gridded_forecast("../workshop_data/forecasts/meletti.MPS04after.italy.5yr.2010-01-01.dat",
name ="Meletti et al (2010), MPS working group")
###Output
_____no_output_____
###Markdown
Gridded forecasts inherit the spatial region from the forecast file, so there is no requirement to set this explicitly. We should check, however, that the regions of the forecasts we want to compare are the same, so that they can be tested with a single evaluation catalog.
###Code
## Sanity check - if the forecasts have the same region this will produce no output.
numpy.testing.assert_allclose(meletti_forecast.region.midpoints(), werner_forecast.region.midpoints())
###Output
_____no_output_____
###Markdown
To visualise this forecast, we will use `forecast.plot()` with some specifications to get a nicer looking figure. We will do this by creating a dictionary containing the plot arguments. These arguments are, in order: - Assign a title - Set labels to the geographic axes - Draw country borders - Set a linewidth of 0.5 to country borders - Select ESRI Imagery as a basemap. - Assign 'rainbow' as colormap. Possible values are from the matplotlib.cm library - Define 0.8 for the exponential transparency function (default is 0 for constant alpha, whereas 1 for linear). - An object cartopy.crs.Projection() is passed as the projection of the map The complete description of plot arguments can be found in `csep.utils.plots.plot_spatial_dataset`
###Code
args_dict = {'title': 'Italy 10 year forecast',
'grid_labels': True,
'borders': True,
'feature_lw': 0.5,
'basemap': 'ESRI_imagery',
'cmap': 'rainbow',
'alpha_exp': 0.8,
'projection': cartopy.crs.Mercator()}
###Output
_____no_output_____
###Markdown
The map extent can also be defined. Otherwise, the extent of the data would be used. The dictionary defined must be passed as an argument.
###Code
ax = werner_forecast.plot(extent=[3, 22, 35, 48],
show=True,
plot_args=args_dict)
# Plot the second forecast
# An exercise for the reader
ax = ??
###Output
_____no_output_____
###Markdown
Set up evaluation catalog Now we need to import the observed catalog that we want to use to test the forecast - we call this the evaluation catalog. There are multiple ways to read in evaluation catalogs, including reading directly from ComCat for US models. There are also various readers currently included with the package, including those for JMA and the INGV HORUS catalogs. These functions are in `/csep/utils/readers.py` if you would like to see them or understand how to add your own. In this case we demonstrate using the European RCMT catalog.
###Code
## load catalog
italy_test_catalog = csep.load_catalog("../workshop_data/catalogs/europe_rcmt_2010-2015.csv",
type="ingv_emrcmt")
###Output
_____no_output_____
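###Markdown
For reference, here is a hedged sketch of the ComCat route mentioned above. It is an illustration only: it assumes the `csep.query_comcat` helper and its keyword arguments, which may differ between PyCSEP versions, and it is not needed for this Italy example, so it is left commented out.
###Code
## Optional: query an evaluation catalog directly from ComCat (for US models)
# comcat_catalog = csep.query_comcat(start_date, end_date, min_magnitude=4.95)
###Output
_____no_output_____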
###Markdown
Print the catalog to check the range of dates, locations and magnitudes of the events in the evaluation catalog, as well as the total number of events. We can also filter the catalog to the desired time-space-magnitude range. Crucially, we must also filter the catalog to the forecast region in order to carry out any testing in the next step. This is also why we checked that the forecasts we want to compare are in the same spatial region.
###Code
italy_test_catalog = italy_test_catalog.filter_spatial(werner_forecast.region)
###Output
_____no_output_____
###Markdown
Note that our magnitude range is not consistent with the parameters we established earlier, so we have to filter for magnitude also. This is obviously important to fairly test a forecast, and you can see why if you re-run this tutorial without this step!
###Code
italy_test_catalog = italy_test_catalog.filter('magnitude >= 4.95')
###Output
_____no_output_____
###Markdown
If you have run the above code, you should be left with a catalog of 13 events. Print the catalog with the standard python `print` command to check the range of dates, locations and magnitudes of the events in the evaluation catalog, as well as the total number of events. Run consistency tests Now we wish to answer some questions about our forecasts and their performance. In this example, we will investigate the spatial properties of the forecast models and how well the forecasts describe the observed spatial distribution of seismicity. The consistency tests implemented for gridded forecasts in PyCSEP are the N-, S- and M-tests described in [Schorlemmer et al, 2007](https://doi.org/10.1785/gssrl.78.1.17) and [Zechar et al, 2010](https://doi.org/10.1785/0120090192). These are located in the `poisson_evaluations` file that we have imported as `poisson`. To carry out a test, we simply provide the forecast we wish to test and an evaluation catalog. The spatial test requires simulating from the Poisson forecast to provide uncertainty. The verbose option prints the status of the simulations to the standard output.
###Code
spatial_test_result_werner = poisson.spatial_test(werner_forecast, italy_test_catalog)
## Repeat the spatial test for our second example forecast
spatial_test_result_meletti = ???
###Output
_____no_output_____
###Markdown
PyCSEP provides easy ways of storing objects to a JSON format using csep.write_json(). The evaluations can be read back into the program for plotting using `csep.load_evaluation_result()`.
###Code
## Run this cell to write to .json file (optional)
## You can look at the contents of this file in jupyter lab to see how the data is stored
csep.write_json(spatial_test_result_meletti, 'example_spatial_test.json')
###Output
_____no_output_____
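###Markdown
A short sketch of reading the stored result back in with the `csep.load_evaluation_result()` function mentioned above (assuming the file written in the previous cell exists):
###Code
## Read the evaluation result back in (optional)
reloaded_result = csep.load_evaluation_result('example_spatial_test.json')
###Output
_____no_output_____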
###Markdown
We can plot these results using the `plot_poisson_consistency_test` function from the `plots` file of `csep.utils`, where you can find more details on the plot arguments. Again, we use a dictionary to set up some plot arguments.
###Code
args = {'figsize': (6,5),
'title': r'$\mathcal{S}-\mathrm{test}$',
'title_fontsize': 18,
'xlabel': 'Log-likelihood',
'xticks_fontsize': 12,
'ylabel_fontsize': 12,
'linewidth': 1,
'capsize': 4,
'hbars':True,
'tight_layout': True}
###Output
_____no_output_____
###Markdown
We're now going to plot the results of both forecasts for comparison. We set `one_sided_lower=True` as usual for an L-test, where the model is rejected if the observed is located within the lower tail of the simulated distribution. We can supply multiple `spatial_test_result` objects in a list (specified in the square brackets as standard in python).
###Code
ax = plots.plot_poisson_consistency_test([spatial_test_result_werner, spatial_test_result_meletti],
one_sided_lower=True, plot_args=args)
###Output
_____no_output_____
###Markdown
This tells us something about the spatial performance of these models. We can repeat this process for N or M tests using the `number_test` or `magnitude_test` from `poisson_evaluations` if we are more interested in these components specifically, or getting a fuller picture of where the forecast does well and not so well. Try out a `likelihood_test` or `conditional_likelihood_test` (also from `poisson_evaluations`). What does this tell you about the two forecasts? Compare forecast test results Now that we have test results for two different forecasts, we want to compare their performance in terms of their information gain. We will implement this using the T-test, which [Rhoades et al, 2011](https://link.springer.com/article/10.2478/s11600-011-0013-5) describe as a better method of directly comparing likelihoods than the above consistency tests. The paired T-test compares a forecast to a base model, in this case we have chosen to use `meletti_forecast` as the baseline and any other models are compared to it. As the 'paired' part implies, the paired T-test always takes two forecast arguments, though we can run it for multiple different models by repeating the call, updating the first forecast argument and holding the second one fixed.
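A hedged sketch of those calls is given below; each consistency test takes the forecast and the evaluation catalog, just like the spatial test.
###Code
## Sketch: the other Poisson consistency tests follow the same call pattern as the spatial test
number_test_result_werner = poisson.number_test(werner_forecast, italy_test_catalog)
magnitude_test_result_werner = poisson.magnitude_test(werner_forecast, italy_test_catalog)
cl_test_result_werner = poisson.conditional_likelihood_test(werner_forecast, italy_test_catalog)
###Output
_____no_output_____
###Markdown
The paired T-test is run by passing both forecasts together with the evaluation catalog: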
###Code
paired_test1 = poisson.paired_t_test(werner_forecast, meletti_forecast, italy_test_catalog)
###Output
_____no_output_____
###Markdown
The `plot_comparison_test` function is used here, which works similarly to the other `plots` functions. Again, we can set up our plot arguments with a dictionary.
###Code
comp_args = {'title': 'Paired T-test result',
'ylabel': 'Information gain',
'xlabel': 'Model'}
ax = plots.plot_comparison_test([paired_test1],
plot_args= comp_args)
###Output
_____no_output_____ |
docs/notebooks/04_The_three_steps_workflow.ipynb | ###Markdown
🎛 The 3-steps workflow 🎛[](https://colab.research.google.com/github/eserie/wax-ml/blob/main/docs/notebooks/04_The_three_steps_workflow.ipynb) It is already very useful to be able to execute a JAX function on a dataframe in a single work stepand with a single command line thanks to WAX-ML accessors.The 1-step WAX-ML's stream API works like that:```python.stream(...).apply(...)```But this is not optimal because, under the hood, there are mainly three costly steps:- (1) (synchronize | data tracing | encode): make the data "JAX ready"- (2) (compile | code tracing | execution): compile and optimize a function for XLA, execute it.- (3) (format): convert data back to pandas/xarray/numpy format.With the `wax.stream` primitives, it is quite easy to explicitly split the 1-step workflowinto a 3-step workflow.This will allow the user to have full control over each step and iterate on each one.It is actually very useful to iterate on step (2), the "calculation step" whenyou are doing research.You can then take full advantage of the JAX primitives, especially the `jit` primitive.Let's illustrate how to reimplement WAX-ML EWMA yourself with the WAX-ML 3-step workflow. Imports
###Code
import jax
import numpy as onp
import pandas as pd
import xarray as xr
from wax.accessors import register_wax_accessors
from wax.external.eagerpy import convert_to_tensors
from wax.format import format_dataframe
from wax.modules import EWMA
from wax.stream import tree_access_data
from wax.unroll import unroll
register_wax_accessors()
###Output
_____no_output_____
###Markdown
Performance on big dataframes Generate data
###Code
T = 1.0e5
N = 1000
T, N = map(int, (T, N))
dataframe = pd.DataFrame(
onp.random.normal(size=(T, N)), index=pd.date_range("1970", periods=T, freq="s")
)
###Output
_____no_output_____
###Markdown
pandas EWMA
###Code
%%time
df_ewma_pandas = dataframe.ewm(alpha=1.0 / 10.0).mean()
###Output
CPU times: user 2.03 s, sys: 167 ms, total: 2.19 s
Wall time: 2.19 s
###Markdown
WAX-ML EWMA
###Code
%%time
df_ewma_wax = dataframe.wax.ewm(alpha=1.0 / 10.0).mean()
###Output
CPU times: user 1.8 s, sys: 876 ms, total: 2.68 s
Wall time: 2.67 s
###Markdown
It's a little faster, but not that much faster... WAX-ML EWMA (without format step) Let's disable the final formatting step (the output is now in raw JAX format):
###Code
%%time
df_ewma_wax_no_format = dataframe.wax.ewm(alpha=1.0 / 10.0, format_outputs=False).mean()
df_ewma_wax_no_format.block_until_ready()
type(df_ewma_wax_no_format)
###Output
_____no_output_____
###Markdown
Let's check the device on which the calculation was performed (if you have GPU available, this should be `GpuDevice` otherwise it will be `CpuDevice`):
###Code
df_ewma_wax_no_format.device()
###Output
_____no_output_____
###Markdown
Now we will see how to break down WAX-ML one-liners `.ewm(...).mean()` or `.stream(...).apply(...)` into 3 steps:- a preparation step where we prepare JAX-ready data and functions.- a processing step where we execute the JAX program- a post-processing step where we format the results in pandas or xarray format. Generate data (in dataset format) WAX-ML's `Stream` object works on datasets. So let's transform the `DataFrame` into a xarray `Dataset`:
###Code
dataset = xr.DataArray(dataframe).to_dataset(name="dataarray")
del dataframe
###Output
_____no_output_____
###Markdown
Step (1) (synchronize | data tracing | encode) In this step, WAX-ML does:- "data tracing": prepare the indices for fast access in the JAX function `access_data`- synchronize streams if there are multiple ones. This functionality has options: `freq`, `ffills`- encode and convert data from numpy to JAX: use encoders for `datetimes64` and `string_` dtypes. Be aware that by default JAX works in float32 (see [JAX's Common Gotchas](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision) to work in float64). We have a function `Stream.prepare` that implements this Step (1). It prepares a function that wraps the input function with the actual data and indices in a pair of pure functions (`TransformedWithState` Haiku tuple).
###Code
%%time
stream = dataset.wax.stream()
###Output
CPU times: user 244 µs, sys: 36 µs, total: 280 µs
Wall time: 287 µs
###Markdown
Define our custom function to be applied on a dict of arrays having the same structure as the original dataset:
###Code
def my_ewma_on_dataset(dataset):
return EWMA(alpha=1.0 / 10.0, adjust=True)(dataset["dataarray"])
transform_dataset, jxs = stream.prepare(dataset, my_ewma_on_dataset)
###Output
_____no_output_____
###Markdown
Let's define the init parameters and state of the transformation we will apply. Step (2) (compile | code tracing | execution) In this step we:- prepare a pure function (with [Haiku's transform mechanism](https://dm-haiku.readthedocs.io/en/latest/api.html#haiku-transforms)) Define a "transformation" function which: - accesses the data - applies another transformation, here: EWMA- compile it with `jax.jit`- perform code tracing and execution (the last line): - Unroll the transformation on "steps" `xs` (a `np.arange` vector).
###Code
outputs = unroll(transform_dataset)(jxs)
outputs.device()
###Output
_____no_output_____
###Markdown
Once it has been compiled and "traced" by JAX, the function is much faster to execute:
###Code
%%timeit
outputs = unroll(transform_dataset)(jxs)
_ = outputs.block_until_ready()
###Output
619 ms ± 9.49 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
This is 3x faster than the pandas implementation! Manually prepare the data and manage the device In order to manage the device on which the computations take place, we need to have even more control over the execution flow. Instead of calling `stream.prepare` to build the `transform_dataset` function, we can do it ourselves by:- using the `stream.trace_dataset` function- converting the numpy data to JAX ourselves- putting the data on the device we want.
###Code
np_data, np_index, xs = stream.trace_dataset(dataset)
jnp_data, jnp_index, jxs = convert_to_tensors((np_data, np_index, xs), "jax")
###Output
_____no_output_____
###Markdown
We explicitly set the data on CPUs (this is not needed if you only have CPUs):
###Code
from jax.tree_util import tree_leaves, tree_map
cpus = jax.devices("cpu")
jnp_data, jnp_index, jxs = tree_map(
lambda x: jax.device_put(x, cpus[0]), (jnp_data, jnp_index, jxs)
)
print("data copied to CPU device.")
###Output
data copied to CPU device.
###Markdown
We have now "JAX-ready" data for later fast access. Let's define the transformation that wraps the actual data and indices in a pair of pure functions:
###Code
@jax.jit
@unroll
def transform_dataset(step):
dataset = tree_access_data(jnp_data, jnp_index, step)
return EWMA(alpha=1.0 / 10.0, adjust=True)(dataset["dataarray"])
###Output
_____no_output_____
###Markdown
And we can call it as before:
###Code
%%time
outputs = transform_dataset(jxs)
_ = outputs.block_until_ready()
%%time
outputs = transform_dataset(jxs)
_ = outputs.block_until_ready()
outputs.device()
###Output
_____no_output_____
###Markdown
Step (3) (format) Let's come back to pandas/xarray:
###Code
%%time
y = format_dataframe(
dataset.coords, onp.array(outputs), format_dims=dataset.dataarray.dims
)
###Output
CPU times: user 27.9 ms, sys: 70.4 ms, total: 98.2 ms
Wall time: 144 ms
###Markdown
It's quite slow (see the WEP3 enhancement proposal). GPU execution Let's look at execution on a GPU
###Code
try:
gpus = jax.devices("gpu")
jnp_data, jnp_index, jxs = tree_map(
lambda x: jax.device_put(x, gpus[0]), (jnp_data, jnp_index, jxs)
)
print("data copied to GPU device.")
GPU_AVAILABLE = True
except RuntimeError as err:
print(err)
GPU_AVAILABLE = False
###Output
Requested backend gpu, but it failed to initialize: Not found: Could not find registered platform with name: "cuda". Available platform names are: Interpreter Host
###Markdown
Let's check that our data is on the GPUs:
###Code
tree_leaves(jnp_data)[0].device()
tree_leaves(jnp_index)[0].device()
jxs.device()
%%time
if GPU_AVAILABLE:
outputs = unroll(transform_dataset)(jxs)
###Output
CPU times: user 3 µs, sys: 2 µs, total: 5 µs
Wall time: 11.2 µs
###Markdown
Let's redefine our function `transform_dataset`, explicitly specifying the `device` option to `jax.jit`.
###Code
%%time
from functools import partial
if GPU_AVAILABLE:
@partial(jax.jit, device=gpus[0])
@unroll
def transform_dataset(step):
dataset = tree_access_data(jnp_data, jnp_index, step)
return EWMA(alpha=1.0 / 10.0, adjust=True)(dataset["dataarray"])
outputs = transform_dataset(jxs)
outputs.device()
%%timeit
if GPU_AVAILABLE:
outputs = unroll(transform_dataset)(jxs)
_ = outputs.block_until_ready()
###Output
12 ns ± 0.0839 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each)
|
Capture rate with dark photon.ipynb | ###Markdown
The first three cells are a copy of the DarkPhoton.py files included in the DarkCapPy package. I have changed the relative path to the configuration files (constants.py, atomicData.py, etc.), and some of the relative paths within those files so that it would work from this notebook, without having to install the package. I have also changed the parameters in PlanetData.py back to Earth's. Somehow, the computing time for the kappa_0 function is excessively long when importing the package into a Jupyter notebook on my end. Running the code directly from the notebook makes the computing time acceptable (about 2-3 minutes vs an hour). There might be an issue with multiple imports of the package within the configuration files that needs to be looked at in the original package.
###Code
################################################################
# Import Python Libraries
################################################################
import numpy as np
import scipy.integrate as integrate
import scipy.interpolate as interpolate
import pandas as pd
import matplotlib.pyplot as plt
from Configure.Constants import *
from Configure.AtomicData import *
from Configure.PlanetData import *
from Configure.Conversions import amu2GeV
# import os | Reference: https://stackoverflow.com/questions/779495/python-access-data-in-package-subdirectory
# this_dir, this_filename = os.path.split(__file__) | This was a hack for importing the branching ratio inside of this file.
# DATA_PATH = os.path.join(this_dir, "brtoe.csv") | It is not presently needed, but may be useful in the future.
################################################################
# Capture Rate Functions
################################################################
########################
# Nuclear Form Factor
########################
def formFactor2(element, E):
'''
formFactor2(element,E)
Returns the form-factor squared of element N with recoil energy E
[E] = GeV
'''
E_N = 0.114/((atomicNumbers[element])**(5./3))
FN2 = np.exp(-E/E_N)
return FN2
########################
# Photon Scattering Cross Sections
########################
def crossSection(element, m_A, E_R): # returns 1/GeV^3
'''
crossSection(element, m_A, E_R)
Returns the differential scattering cross section for a massive dark photon
[m_A] = GeV
[E_R] = GeV
'''
m_N = amu2GeV(atomicNumbers[element])
FN2 = formFactor2(element, E_R)
function = ( FN2 ) / ((2 * m_N * E_R + m_A**2)**2)
return function
def crossSectionKappa0(element, E_R): # Dimensionless
'''
crossSectionKappa0(element, E_R)
Returns the cross section used in the kappa0 calculation
[E_R] = GeV
'''
FN2 = formFactor2(element, E_R)
function = FN2
return function
########################
# Kinematics
########################
def eMin(u, m_X):
'''
eMin(u, m_X)
Returns the minimum kinetic energy to become Gravitationally captured by Earth
[m_X] = GeV
'''
function = (0.5) * m_X * u**2
# assert (function >=0), '(u, m_X): (%e,%e) result in a negative eMin' % (u, m_X)
return function
def eMax(element, m_X, rIndex, u):
'''
eMax(element, m_X, rIndex, u)
Returns the maximum kinetic energy allowed by the kinematics
[m_X] = GeV
rIndex specifies the index in the escape velocity array escVel2_List[rIndex]
'''
m_N = amu2GeV(atomicNumbers[element])
mu = m_N*m_X / (m_N + m_X)
vCross2 = (escVel2_List[rIndex])
function = 2 * mu**2 * (u**2 + vCross2) / m_N
# assert (function >= 0), '(element, m_X, rIndex, u): (%s, %e, %i, %e) result in negative eMax' %(element, m_X, rIndex, u)
return function
########################
# Intersection Velocity
########################
def EminEmaxIntersection(element, m_X, rIndex):
'''
EminEmaxIntersection(element, m_X, rIndex):
Returns the velocity uInt when eMin = eMax.
[m_X] = GeV
'''
m_N = amu2GeV(atomicNumbers[element])
mu = (m_N*m_X)/(m_N+m_X)
sqrtvCross2 = np.sqrt(escVel2_List[rIndex])
# Calculate the intersection uInt of eMin and eMax given a specific rIndex
A = m_X/2.
B = 2. * mu**2 / m_N
uInt = np.sqrt( ( B ) / (A-B) ) * sqrtvCross2
return uInt
########################
# Photon Velocity and Energy Integration
########################
def intDuDEr(element, m_X, m_A, rIndex):
'''
intDuDER(element, m_X, m_A, rIndex):
Returns the evaluated velocity and recoil energy integrals for dark photon scattering
[m_X] = GeV
[m_A] = GeV
'''
def integrand(E,u):
fu = fCrossInterp(u)
integrand = crossSection(element, m_A, E) * u * fu
return integrand
# Calculate the intersection uInt of eMin and eMax given a specific rIndex
uInt = EminEmaxIntersection(element, m_X, rIndex)
uLow = 0
uHigh = min(uInt, V_gal) # We take the minimal value between the intersection velocity and galactic escape velocity
eLow = lambda u: eMin(u, m_X)
eHigh = lambda u: eMax(element, m_X, rIndex, u)
integral = integrate.dblquad(integrand, uLow, uHigh, eLow, eHigh)[0]
return integral
def intDuDErKappa0(element, m_X, rIndex):
'''
intDuDErKappa0(element, m_X, rIndex):
returns the evaluated velocity and recoil energy integration for dark photon scattering
used in the kappa0 calculation
[m_X] = GeV
'''
def integrand(E_R,u):
fu = fCrossInterp(u)
integrand = crossSectionKappa0(element, E_R) * u * fu
return integrand
uInt = EminEmaxIntersection(element, m_X, rIndex)
uLow = 0
uHigh = min(uInt, V_gal) # We take the minimal value between the intersection velocity and galactic escape velocity
eLow = lambda u: eMin(u, m_X)
eHigh = lambda u: eMax(element, m_X, rIndex, u)
integral = integrate.dblquad(integrand, uLow, uHigh, eLow, eHigh)[0]
return integral
########################
# Sum Over Radii
########################
def sumOverR(element, m_X, m_A):
'''
sumOverR(element, m_X, m_A)
Returns the summation over radius of the velocity and recoil energy integration
[m_X] = GeV
[m_A] = GeV
'''
tempSum = 0
for i in range(0, len(radius_List)):
r = radius_List[i]
deltaR = deltaR_List[i]
n_N = numDensity_Func(element)[i]
summand = n_N * r**2 * intDuDEr(element, m_X, m_A, i) * deltaR
tempSum += summand
return tempSum
def sumOverRKappa0(element, m_X):
'''
sumOverRKappa0(element, m_X)
Returns the summation over radius of the velocity and recoil energy integration
used in the kappa0 calculation
[m_X] = GeV
'''
tempSum = 0
for i in range(0,len(radius_List)):
r = radius_List[i]
deltaR = deltaR_List[i]
n_N = numDensity_Func(element)[i]
summand = n_N * r**2 * intDuDErKappa0(element, m_X, i) * deltaR
tempSum += summand
return tempSum
########################
# Single Element Capture Rate
########################
def singleElementCap(element, m_X, m_A, epsilon, alpha, alpha_X):
'''
singleElementCap(element, m_X, m_A, epsilon, alpha, alpha_X)
Returns the capture rate due to a single element for the specified parameters
[m_X] = GeV
[m_A] = GeV
'''
Z_N = nProtons[element]
m_N = amu2GeV(atomicNumbers[element])
n_X = 0.3/m_X # GeV/cm^3
conversion = (5.06e13)**-3 * (1.52e24) # Conversion to seconds (cm^-3)(GeV^-2) -> (s^-1)
prefactors = (4*np.pi)**2
crossSectionFactors = 2 * (4*np.pi) * epsilon**2 * alpha_X * alpha * Z_N**2 * m_N
function = n_X * conversion* crossSectionFactors* prefactors * sumOverR(element, m_X, m_A)
return function
def singleElementCapKappa0(element, m_X, alpha):
'''
singleElementCapKappa0(element, m_X, alpha):
Returns a single kappa0 value for 'element' and the specified parameters
[m_X] = GeV
'''
Z_N = nProtons[element]
m_N = amu2GeV(atomicNumbers[element])
n_X = 0.3/m_X # 1/cm^3
conversion = (5.06e13)**-3 * (1.52e24) # (cm^-3)(GeV^-2) -> (s^-1)
crossSectionFactors = 2 * (4*np.pi) * alpha * Z_N**2 * m_N
prefactor = (4*np.pi)**2
function = n_X * conversion * prefactor * crossSectionFactors * sumOverRKappa0(element, m_X)
return function
########################
# Full Capture Rate
########################
def cCap(m_X, m_A, epsilon, alpha, alpha_X):
'''
cCap(m_X, m_A, epsilon, alpha, alpha_X)
returns the full capture rate in sec^-1 for the specified parameters
Note: This function is the less efficient way to perform this calculation. Every point in (m_A, epsilon) space
involves performing the full triple integral over recoil energy, incident DM velocity, and Earth radius
which is time consuming.
[m_X] = GeV
[m_A] = GeV
'''
totalCap = 0
for element in element_List:
elementCap = singleElementCap(element, m_X, m_A, epsilon, alpha, alpha_X)
print('Element:', element, ', Cap:', elementCap)
totalCap += elementCap
return totalCap
########################
# Kappa0
########################
def kappa_0(m_X, alpha):
'''
kappa_0(m_X, alpha)
Returns the kappa0 value for m_X and alpha
[m_X] = GeV
This function encodes how the capture rate depends on m_X and alpha.
'''
tempSum = 0 # tempSum = "temporary sum" not "temperature sum"
for element in element_List:
function = singleElementCapKappa0(element, m_X, alpha)
tempSum += function
return tempSum
########################
# Capture Rate the quick way
########################
def cCapQuick(m_X, m_A, epsilon, alpha_X, kappa0):
'''
cCapQuick(m_X, m_A, epsilon, alpha_X, kappa0):
Returns the Capture rate in a much more computationally efficient way. For a given dark matter mass m_X and coupling
constant alpha, we calculate the quantity kappa_0 once and multiply it by the differential cross section approximation
for each point in (m_A, epsilon) space.
[m_X] = GeV
[m_A] = GeV
Provides a quick way to calculate the capture rate when only m_A and epsilon are changing.
All the m_X dependence, which is assumed to be fixed, is contained in kappa0.
Note that m_X is defined as an input to this function, but not actually used.
This was made in light of keeping the function definitions in Python consistent.
'''
function = epsilon**2 * alpha_X * kappa0 / m_A**4
return function
################################################################
# Thermal Relic
################################################################
def alphaTherm(m_X, m_A):
'''
alphaTherm(m_X,m_A)
[m_X] = GeV
[m_A] = GeV
This function determines the dark matter fine structure constant alpha_X given the dark matter relic abundance.
'''
conversion = (5.06e13)**3/ (1.52e24) # cm^3 Sec -> GeV^-2
thermAvgSigmaV = 2.2e-26 # cm^3/s from ArXiV: 1602.01465v3 between eqns (4) and (5)
function = conversion * thermAvgSigmaV * (m_X**2/np.pi) \
* (1 - 0.5*(m_A/m_X)**2)**2 / ((1 - (m_A/m_X)**2)**(3./2))
return np.sqrt(function)
# Thermal Relic for m_X >> m_A Approximation
def alphaThermApprox(m_X):
'''
alphaThermApprox(m_X)
This function determines the dark matter fine structure constant alpha_X given the dark matter relic abundance
in the m_X >> m_A approximation.
'''
conversion = (5.06e13)**3/ (1.52e24) # cm^3 Sec -> GeV^-2
thermAvgSigmaV = 2.2e-26 # cm^3/s from ArXiV: 1602.01465v3 between eqns (4) and (5)
function = conversion * thermAvgSigmaV * (5.06e13)**3/ (1.52e24) * (m_X**2/np.pi)
return np.sqrt(function)
################################################################
# Annihilation Rate Functions
################################################################
########################
# V0 at center of Earth
########################
def v0func(m_X):
'''
v0func(m_X)
Returns the characteristic velocity of a dark matter particle with mass m_X at the center of the Earth.
[m_X] = GeV
'''
return np.sqrt(2*TCross/m_X)
########################
# Tree-level annihilation cross section
########################
def sigmaVtree(m_X, m_A, alpha_X):
'''
sigmaVtree(m_X, m_A, alpha_X)
Returns the tree-level annihilation cross section for massive dark photons fixed by relic abundance
[m_X] = GeV
[m_A] = GeV
'''
numerator = (1 - (m_A/m_X)**2)**1.5
denominator = ( 1 - 0.5 * (m_A/m_X)**2 )**2
prefactor = np.pi*(alpha_X/m_X)**2
function = prefactor * numerator/denominator
return function
########################
# Sommerfeld Enhancement
########################
def sommerfeld(v, m_X, m_A, alpha_X):
'''
sommerfeld(v, m_X, m_A, alpha_X)
Returns the Sommerfeld enhancement
[m_X] = GeV
[m_A] = GeV
'''
a = v / (2 * alpha_X) # Variable substitution
c = 6 * alpha_X * m_X / (np.pi**2 * m_A) # Variable substitution
# Kludge: Absolute value the argument of the square root inside Cos(...) two lines below.
function = np.pi/a * np.sinh(2*np.pi*a*c) / \
( np.cosh(2*np.pi*a*c) - np.cos(2*np.pi*np.abs(np.sqrt(np.abs(c-(a*c)**2)) ) ) )
return function
########################
# Thermal Average Sommerfeld
########################
def thermAvgSommerfeld(m_X, m_A, alpha_X):
'''
thermAvgSommerfeld(m_X, m_A, alpha_X):
Returns the thermally-averaged Sommerfeld enhancement
[m_X] = GeV
[m_A] = GeV
'''
v0 = v0func(m_X)
def integrand(v):
# We perform d^3v in spherical velocity space.
# d^3v = v^2 dv * d(Omega)
prefactor = 4*np.pi/(2*np.pi*v0**2)**(1.5)
function = prefactor * v**2 * np.exp(-0.5*(v/v0)**2) * sommerfeld(v, m_X, m_A, alpha_X)
return function
lowV = 0
# Python doesn't like it when you integrate to infinity, so we integrate to 10 standard deviations
highV = 10*(v0func(m_X))
integral = integrate.quad(integrand, lowV, highV)[0]
return integral
########################
# CAnnCalc
########################
def cAnn(m_X, sigmaVTree, thermAvgSomm = 1):
'''
CAnnCalc(m_X, sigmaVTree, thermAvgSomm = 1)
Returns the Annihilation rate in sec^-1 without Sommerfeld effects.
To include sommerfeld effects, set thermAvgSomm = thermAvgSommerfeld(m_X, m_A, alpha_X)
[m_X] = GeV
[sigmaVTree] = GeV^-2
'''
prefactor = (Gnat * m_X * rhoCross/ (3 * TCross) )**(3./2)
conversion = (1.52e24) # GeV -> Sec^-1
function = conversion * prefactor * sigmaVTree * thermAvgSomm
return function
################################################################
# Annihilation Rate Functions
################################################################
########################
# Equilibrium Time
########################
def tau(CCap,CAnn):
'''
tau(CCap,CAnn)
Returns the equilibrium time in sec
[CCap] = sec^-1
[CAnn] = sec^-1
'''
function = 1./(np.sqrt(CCap*CAnn))
return function
########################
# Epsilon as a function of m_A
########################
def contourFunction(m_A, alpha_X, Cann0, Sommerfeld, kappa0, contourLevel):
'''
EpsilonFuncMA(m_A, alpha_X, Cann, Sommerfeld, kappa0, contourLevel)
Returns the value of epsilon as a function of mediator mass m_A.
[m_A] = GeV
Cann0 = sec^-1
Kappa0 = GeV^5
Note: The 10^n contour is input as contourLevel = n
This function is used to quickly generate plots of constant tau/tau_Cross in (m_A, epsilon) space.
'''
function = 2 * np.log10(m_A) - (0.5)*np.log10(alpha_X * kappa0 * Cann0 * Sommerfeld) \
- contourLevel - np.log10(tauCross)
return function
########################
# Annihilation Rate
########################
def gammaAnn(CCap, CAnn):
'''
gammaAnn(CCap, CAnn)
Returns the solution to the differential rate equation for dark matter capture and annihilation
[CCap] = sec^-1
[CAnn] = sec^-1
'''
Tau = tau(CCap, CAnn)
EQRatio = tauCross/Tau
function = (0.5) * CCap * ((np.tanh(EQRatio))**2)
return function
########################
# Decay Length
########################
def decayLength(m_X, m_A, epsilon, BR):
'''
DecayLength(m_X, m_A, epsilon, BR)
Returns the characteristic decay length of dark photons in cm.
[m_X] = GeV
[m_A] = GeV
BR = Branching Ratio
'''
function = RCross * BR * (3.6e-9/epsilon)**2 * (m_X/m_A) * (1./1000) * (1./m_A)
return function
########################
# Decay Parameter
########################
def epsilonDecay(decayLength, effectiveDepth = 10**5): # Effective depth = 1 km
'''
epsilonDecay(decayLength, effectiveDepth = 10**5)
Returns the probability for dark photons to decay inside the IceCube detector near the surface of Earth.
[effectiveDepth] = cm, default value for the IceCube Neutrino Observatory is 1 km.
'''
arg1 = RCross # To make the arguments of the exponentials nice
arg2 = RCross + effectiveDepth # To make the arguments of the exponentials nice
function = np.exp(-arg1/decayLength) - np.exp(-arg2/decayLength)
return function
########################
# Ice Cube Signal
########################
def iceCubeSignal(gammaAnn, epsilonDecay, T, Aeff = 10**10):
'''
iceCubeSignal(gammaAnn, epsilonDecay, liveTime, Aeff = 10**10)
Returns the number of signal events for IceCube.
[gammaAnn] = sec^-1
[liveTime] = sec
[Aeff] = cm^2
'''
function = 2 * gammaAnn * (Aeff/ (4*np.pi*RCross**2) ) * epsilonDecay * T
return function
###Output
_____no_output_____
###Markdown
We're interested in computing the total capture $C_{Cap}$ as a function of dark matter mass $m_X$. The total capture rate is defined as the sum of individual capture rates for each element $C_{Cap}^N$:$$\begin{equation} C_{Cap} = \sum_N C_{Cap}^N\end{equation}$$Where the individual rates are defined as:$$\begin{equation} C_{Cap}^N = n_X\int_0^{R_\oplus}dr4\pi r^2 n_N(r)\int_0^{v_{gal}}du4\pi u^2f_\oplus(u)\frac{u^2 + v_\oplus^2(r)}{u}\int_{E_{min}}^{E_{max}}dE_R\frac{d\sigma_N}{dE_R}\Theta(\Delta E)\end{equation}$$The details of the components of this expression are included in the DarkCapPy manual. Using the same limits as detailed in that manual, we can factorise the expression of $C_{Cap}$ as:$$\begin{equation} C_{Cap} = \left(\frac{\varepsilon^2}{m_A^4}\alpha_X\right)\kappa_0(m_X, \alpha)\end{equation}$$Therefore, the $C_{Cap}$ dependency on $m_X$ is enclosed in $\kappa_0$, which itself depends on neither the kinetic mixing parameter $\varepsilon$ nor the dark mediator mass $m_A$. Most of the computing time for the capture rate lies in the $\kappa_0$ computation. Computing $\kappa_0$ over a range of $m_X$ values and saving it into a file allows us to quickly compute the capture rate for different values of $\varepsilon$ and $m_A$ without having to go through long and heavy computation. Now we import the csv file containing all computed $\kappa_0$ values for $m_X$ ranging from 10 to 10 000 GeV, as a pandas DataFrame, and plot it. The 200 values of $m_X$ used within that range are distributed logarithmically (np.logspace(1, 4, 200)). $\alpha = \frac{1}{137}$ is the fine structure constant.
###Code
data = pd.read_csv('kappa0_200', index_col = 0)
data.columns = ['mx', 'kappa']
data
mx = data['mx'].values
kappa = data['kappa'].values
###Output
_____no_output_____
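###Markdown
For reference, a minimal sketch (not part of the original notebook) of how such a file of $\kappa_0$ values could be generated with the functions defined above. This is the slow computation we are avoiding, so it is left commented out:
###Code
# mx_grid = np.logspace(1, 4, 200)                    # 200 masses from 10 to 10 000 GeV
# kappa_grid = [kappa_0(m, 1./137) for m in mx_grid]  # slow: full triple integral per mass
# pd.DataFrame({'mx': mx_grid, 'kappa': kappa_grid}).to_csv('kappa0_200')
###Output
_____no_output_____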
###Markdown
We define an array containing the main elements found in Earth's Mantle, and their mass expressed in GeV, which will be useful for annotating further graphs
###Code
elementMass = []
elements = []
for i, element in enumerate(atomicNumbers) :
elementMass.append(amu2GeV(atomicNumbers[element]))
elements.append(element)
elementsMass = np.transpose(np.array([elements, elementMass]))
###Output
_____no_output_____
###Markdown
We use scipy's interpolate function to define a kapa function that interpolates values of $\kappa_0$ based on the computed data. This will be used to place the annotations for the elements
###Code
kapa = interpolate.interp1d(mx, kappa, kind = 'cubic')
###Output
_____no_output_____
###Markdown
We only keep elements that fall in the $m_X$ range
###Code
elementsMass = elementsMass[np.where(np.array(elementMass) > 10)[0], :]
###Output
_____no_output_____
###Markdown
We plot $\kappa_0(m_X)$, and annotate it with the elements list
###Code
fig, ax = plt.subplots(1, 1, figsize = (16, 9), dpi = 300)
ax.scatter(mx, kappa, label = '$\\kappa_0(m_X)$', marker = '^',
color = 'g')
for i, elementMass in enumerate(elementsMass) :
mass = float(elementMass[1])
y = kapa(mass)
if i%2 == 0 :
offsetY = 10**26+y/10
else :
offsetY = 10**29 + y*10
ax.annotate(elementMass[0], (mass, y), xytext = (mass, offsetY), arrowprops = dict(arrowstyle = '->'))
ax.legend()
ax.grid()
ax.set_ylim(1e23, 1e31)
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_ylabel('$\\kappa_0$ [GeV$^{5}$]')
ax.set_xlabel('$m_X$ [GeV]')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
The shape of $\kappa_0(m_X)$ is as we expect it to be. We can now compute different $C_{Cap}(m_X)$ curves for different values of $(\varepsilon, m_A)$ quickly, as all the heavy computing bit has been done in computing $\kappa_0(m_X)$. Let's try it out for $m_A = 0.1$ GeV and $\varepsilon = 10^{-8}$.
###Code
alpha_X = 0.035
alpha = 1/137
epsilon = 1e-8
m_A = 0.1
Cap = np.empty_like(mx)
###Output
_____no_output_____
###Markdown
It might be interesting to try using numba to vectorize the cCapQuick function, so that we can compute $C_{Cap}$ for the whole $m_X$ range at once, instead of having to loop through all $m_X$ values.
###Code
for i, m in enumerate(mx) :
Cap[i] = cCapQuick(m, m_A, epsilon, alpha_X, kappa[i])
###Output
_____no_output_____
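###Markdown
Since cCapQuick is just element-wise arithmetic (and its m_X argument is unused), the loop above can also be replaced by plain NumPy broadcasting; a quick sketch to check that both approaches agree:
###Code
Cap_vectorized = cCapQuick(mx, m_A, epsilon, alpha_X, kappa)
np.allclose(Cap, Cap_vectorized)
###Output
_____no_output_____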
###Markdown
We define a new function so that we can place our elements label, the same way we did for $\kappa_0$
###Code
cap = interpolate.interp1d(mx, Cap, kind = 'cubic')
###Output
_____no_output_____
###Markdown
And plot it !
###Code
fig, ax = plt.subplots(1, 1, figsize = (16, 9), dpi = 300)
ax.scatter(mx, Cap, label = '$C_{Cap}(m_X)$ for $m_A = 0.1$ GeV and $\\varepsilon = 10^{-8}$', marker = '^',
color = 'g')
for i, elementMass in enumerate(elementsMass) :
mass = float(elementMass[1])
y = cap(mass)
if i%2 == 0 :
offsetY = 10**13+y/10
else :
offsetY = 10**15 + y*10
ax.annotate(elementMass[0], (mass, y), xytext = (mass, offsetY), arrowprops = dict(arrowstyle = '->'))
ax.legend()
ax.grid()
ax.set_ylim(1e10, 1e17)
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_ylabel('$C_{Cap}$ [$s^{-1}$]')
ax.set_xlabel('$m_X$ [GeV]')
fig.tight_layout()
fig.savefig('capture_rate')
###Output
_____no_output_____ |
examples/misc/neurobrite_datasets.ipynb | ###Markdown
Import packages
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np, seaborn as sns, os
from matplotlib import pyplot as plt
from sklearn import datasets
###Output
_____no_output_____
###Markdown
PART 1: Olivetti faces Define the variables below:* stimdir: directory where stimuli are to be saved* imprefix: what you want the file name to start with
###Code
stimdir = '/mnt/c/Users/easso/docs/neurohackademy/eeg-notebooks/notebooks/stimulus_presentation/stim/olivetti_faces'
imprefix = 'image'
try:
os.makedirs(stimdir)
except:
pass
faces = datasets.fetch_olivetti_faces()
data = faces.data
images = faces.images
targets = faces.target
print(faces.DESCR)
for i in range(np.shape(images)[0]):
fig, ax=plt.subplots(1)
plt.imshow(images[i,:,:],'gray')
plt.axis('off')
ax.get_xaxis().set_visible(False) # this removes the ticks and numbers for x axis
ax.get_yaxis().set_visible(False) # this removes the ticks and numbers for y axis
plt.savefig(stimdir + '/' + imprefix + '_' + str(i+1) + '.jpg',bbox_inches='tight',pad_inches=0)
plt.close
###Output
_____no_output_____
###Markdown
PART 2: Labelled faces in the wild Warning: This is a large dataset: 200MB! Use with caution. Define the variables below:* stimdir: directory where stimuli are to be saved* imprefix: what you want the file name to start with* n_images: the number of images you want to save. Since this is a large dataset (over 5000 images!), you probably don't want to save all of them.
###Code
stimdir = '/mnt/c/Users/easso/docs/neurohackademy/eeg-notebooks/notebooks/stimulus_presentation/stim/faces_in_wild'
imprefix = 'image'
n_images = 200
try:
os.makedirs(stimdir)
except:
pass
faces = datasets.fetch_lfw_people()
data = faces.data
images = faces.images
targets = faces.target
names = faces.target_names
for i in range(n_images):
fig, ax=plt.subplots(1)
plt.imshow(images[i,:,:],'gray')
plt.axis('off')
ax.get_xaxis().set_visible(False) # this removes the ticks and numbers for x axis
ax.get_yaxis().set_visible(False) # this removes the ticks and numbers for y axis
plt.savefig(stimdir + '/' + imprefix + '_' + str(i+1) + '.jpg',bbox_inches='tight',pad_inches=0)
plt.close
###Output
_____no_output_____
###Markdown
PART 3: pictures of numbers Define the variables below:* stimdir: directory where stimuli are to be saved* imprefix: what you want the file name to start with* n_images: the number of images you want to save. Since this is a fairly large dataset (1,797 images), you probably don't want to save all of them.
###Code
stimdir = '/mnt/c/Users/easso/docs/neurohackademy/eeg-notebooks/notebooks/stimulus_presentation/stim/digits'
imprefix = 'image'
n_images = 20
try:
os.makedirs(stimdir)
except:
pass
digits = datasets.load_digits()
data = digits.data
images = digits.images
targets = digits.target
names = digits.target_names
for i in range(n_images):
fig, ax=plt.subplots(1)
plt.imshow(images[i,:,:],'gray')
plt.axis('off')
ax.get_xaxis().set_visible(False) # this removes the ticks and numbers for x axis
ax.get_yaxis().set_visible(False) # this removes the ticks and numbers for y axis
plt.savefig(stimdir + '/' + imprefix + '_' + str(i+1) + '.jpg',bbox_inches='tight',pad_inches=0)
plt.close
###Output
_____no_output_____ |
numericalPython/Python101.ipynb | ###Markdown
Python 101 data types- integers- floats- strings- booleans- lists- tuples- dictionaries
###Code
ali_list = [20, 'Ali', 'Male']
ali_dictionary = {'age': 20, 'name': 'Ali', 'gender': 'Male'}
ali_dictionary2 = ali_dictionary.copy()
ali_dictionary2
ali_dictionary2['age'] = 25
ali_dictionary2['age']
ali_dictionary['age']
a = 12
b = 4.
###Output
_____no_output_____
###Markdown
loops- for loop- while loop
###Code
for i in range(0,200,3):
print(i)
if ( i > 20):
break
x = 0
while( x < 20 ):
print(x)
x = x + 1
range(0,20,3)
###Output
_____no_output_____
###Markdown
conditions
###Code
x = 1.2
if(x == 1):
print('hello')
elif(x == 2):
print('bye')
else:
print('bye-bye')
###Output
bye-bye
###Markdown
Odd Or Even Ask the user for a number. Depending on whether the number is even or odd, print out an appropriate message to the user. Hint: how does an even / odd number react differently when divided by 2? Divisors Create a program that asks the user for a number and then prints out a list of all the divisors of that number. (If you don’t know what a divisor is, it is a number that divides evenly into another number. For example, 13 is a divisor of 26 because 26 / 13 has no remainder.) List Comprehensions Let’s say I give you a list saved in a variable: a = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]. Write one line of Python that takes this list a and makes a new list that has only the even elements of this list in it. List Less Than Ten Write a program that prints out all the elements of the list that are less than 5.
###Code
rands = np.random.randint(0,100,30)
rands
[i for i in rands if (i <= 20)]
x = input("Please enter a number: ")
x = float(x)
if (x % 2 == 0):
print('your number is even')
else:
print('your number is odd')
import numpy as np
list_ = np.random.randint(10,100,10000)
list_even = []
list_odd = []
for i in list_:
if(i % 2 == 0):
list_even.append(i)
else:
list_odd.append(i)
len(list_even), len(list_odd)
np.savetxt('even_list.txt', list_even)
list_2 = np.loadtxt('even_list.txt')
###Output
_____no_output_____
###Markdown
1, 2, 3, 4, 5
###Code
a = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
evens = [i for i in a if i % 2 == 0]
! cat even_list.txt
np.savetxt('even_list.txt', list_even, header='even list')  # add a header comment line to the file
np.savetxt?
st = 'hello world'
st_list = list(st)
st_list[::-1]
import numpy as np
list_ = np.random.randint(10,100,10000)
list_1 = [x for x in list_ if x<50]
data = np.loadtxt('data/test.txt')
x = data[:, 0]
y = data[:, 1]
x,y,z,a = np.loadtxt('data/test.txt', unpack=True)
x
! cat data/test.txt
import pandas as pd
import matplotlib.pyplot as plt
data_name = ['t','x', 'y', 'phase']
df = pd.read_csv('data/test.txt', sep='\t', names=data_name, index_col=0)
df['sum'] = df['x'] + df['y']
plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(df.index, df['x'], label = 'x')
plt.legend()
plt.xlabel('time')
plt.subplot(222)
plt.plot(df.index, df['y'], label = 'y')
plt.legend()
plt.subplot(224)
plt.plot(df.index, df['phase'], label = 'phase')
plt.legend()
ln_x = np.log(df['x'])
plt.figure(figsize=(10,5))
plt.plot(df.index, df['x'], 'r--', label= 'x')
plt.legend(loc = 2)
plt.xlabel('time')
plt.ylabel('x')
plt.title('population')
plt.savefig('images/x_pop.png', dpi = 300)
x_ = np.random.normal(0, 1, 100)
y_ = np.random.normal(1, 2, 100)
plt.plot(x_)
plt.plot(y_)
! ls images/
plt.plot?
###Output
_____no_output_____ |
Learning Notes/Learning Notes 5 - Pandas 3.ipynb | ###Markdown
Group Groupby groupby() is a very powerful function with a lot of variations. It makes the task of splitting the dataframe over some criteria really easy and efficient. Any groupby operation involves one of the following operations on the original object. They are − - Splitting the Object - Applying a function - Combining the results. In many situations, we split the data into sets and we apply some functionality on each subset. In the apply functionality, we can perform the following operations − - Aggregation − computing a summary statistic - Transformation − perform some group-specific operation - Filtration − discarding the data with some condition
###Code
.groupby(by=None, axis=0, level=None, as_index=True, sort=True,
group_keys=True, squeeze=False, **kwargs)
by : mapping, function, str, or iterable
axis : int, default 0
level : If the axis is a MultiIndex (hierarchical), group by a particular level or levels
as_index : For aggregated output, return object with group labels as the index.
sort : Sort group keys. Get better performance by turning this off.
group_keys : When calling apply, add group keys to index to identify pieces
squeeze : Reduce the dimensionality of the return type if possible
Pandas object can be split into any of their objects.
There are multiple ways to split an object like −
- obj.groupby('key')
- obj.groupby(['key1','key2'])
- obj.groupby(key,axis=1)
###Output
_____no_output_____
###Markdown
Examples
###Code
Examples of use
# Group by the name of sales rep
df.groupby(df['Sales Rep'].str.split(' ').str[0]).size()
# Grouping by whether or not there is a “William” in the name of the rep
df.groupby(df['Sales Rep'].apply(lambda x: 'William' in x)).size()
# Group by random series (for illustrative purposes only)
df.groupby(pd.Series(np.random.choice(list('ABCDG'),len(df)))).size()
# Grouping by three evenly cut “Val” buckets
df.groupby(pd.qcut(x=df['Val'],q=3,labels=['low','mid','high'])).size()
# Grouping by custom-sized “Val” buckets
df.groupby(pd.cut(df['Val'],[0,3000,5000,7000,10000])).size()
# Using Grouper for looking at dates
df.groupby(pd.Grouper(key='Date',freq='Y')).size() #(years here)
df.groupby(pd.Grouper(key='Date',freq='Q')).size() #(quarters here)
# Grouping by multiple columns
df.groupby(['Sales Rep','Company Name']).size()
###Output
_____no_output_____
###Markdown
Example 1 (group, get group, 2 levels of group)
###Code
import pandas as pd
df = pd.read_csv("nba.csv")
df.head(3)
gk = df.groupby('Team')
gk.head(10) #to print the first entries in all the group formed
gk.get_group('Boston Celtics')
gkk = df.groupby(['Team', 'Position'])
gkk.first()
Very good article on Groupby https://towardsdatascience.com/its-time-for-you-to-understand-pandas-group-by-function-cc12f7decfb9
Example of using Groupby with Unstack https://cmdlinetips.com/2020/05/fun-with-pandas-groupby-aggregate-multi-index-and-unstack/
https://jakevdp.github.io/PythonDataScienceHandbook/03.05-hierarchical-indexing.html
###Output
_____no_output_____
###Markdown
Selection
###Code
get_group() to select a single group
.size to get the number of rowns in each group
pd.Grouper when working with time-series
###Output
_____no_output_____
###Markdown
Aggregations - An aggregated function returns a single aggregated value for each group. - Once the group by object is created, several aggregation operations can be performed on the grouped data.- An obvious one is aggregation via the aggregate or equivalent agg method
###Code
Most used functions with agg
- 'size': Counts the rows
- 'sum': Sums the column up
- 'mean'/'median': Mean/Median of the column
- 'max'/'min': Maximum/Minimum of the column
- 'idxmax'/'idxmin': Index of the min/max of the column.
- pd.Series.nunique: Counts unique values. this is a function actually (not a string)
import pandas as pd
import numpy as np
ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings',
'kings', 'Kings', 'Kings', 'Riders', 'Royals', 'Royals', 'Riders'],
'Rank': [1, 2, 2, 3, 3,4 ,1 ,1,2 , 4,1,2],
'Year': [2014,2015,2014,2015,2014,2015,2016,2017,2016,2014,2015,2017],
'Points':[876,789,863,673,741,812,756,788,694,701,804,690]}
df = pd.DataFrame(ipl_data)
grouped = df.groupby('Year')
grouped['Points'].agg(np.mean)
#Attribute Access in Python Pandas
grouped = df.groupby('Team')
grouped.agg(np.size)
# Multiple Aggregation (sum, mean, stdev)
grouped = df.groupby('Team')
grouped['Points'].agg([np.sum, np.mean, np.std])
###Output
_____no_output_____
###Markdown
pd.NamedAgg - When applying multiple aggregation functions to multiple columns the result gets a bit messy- The main issue: there is no control over the column names. - Situations like this are where pd.NamedAgg comes in handy. - pd.NamedAgg allows to specify the name of the target column.
###Code
# Long Form: Explictly specifying the NamedAgg
aggregation = {
'Potential Sales': pd.NamedAgg(column='Val', aggfunc='size'),
'Sales': pd.NamedAgg(column='Sale', aggfunc='sum'),
'Conversion Rate': pd.NamedAgg(column='Sale', aggfunc=cr)
}
# Alternative: Since the NamedAgg is just a tuple, we can also pass regular tuples
aggregation = {
'Potential Sales': ('Val','size'),
'Sales': ('Sale','sum'),
'Conversion Rate': ('Sale',cr)
}
###Output
_____no_output_____
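###Markdown
A hedged usage sketch: the dictionary above targets a sales dataframe that is not defined in these notes, so a tiny made-up frame (and an assumed cr conversion-rate helper) is used here to show how the named aggregations are passed to agg with **:
###Code
import pandas as pd
demo = pd.DataFrame({'Sales Rep': ['A', 'A', 'B'],
                     'Val': [100, 200, 50],
                     'Sale': [True, False, True]})
cr = lambda x: round(x.sum() / x.size, 2)  # assumed conversion-rate helper
demo.groupby('Sales Rep').agg(
    **{'Potential Sales': pd.NamedAgg(column='Val', aggfunc='size'),
       'Sales': pd.NamedAgg(column='Sale', aggfunc='sum'),
       'Conversion Rate': pd.NamedAgg(column='Sale', aggfunc=cr)})
###Output
_____no_output_____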
###Markdown
Transformation - Transformation returns an object that is indexed the same size of that is being grouped. - Thus, the transform should return a result that is the same size as that of a group chunk.
###Code
import pandas as pd
import numpy as np
ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings',
'kings', 'Kings', 'Kings', 'Riders', 'Royals', 'Royals', 'Riders'],
'Rank': [1, 2, 2, 3, 3,4 ,1 ,1,2 , 4,1,2],
'Year': [2014,2015,2014,2015,2014,2015,2016,2017,2016,2014,2015,2017],
'Points':[876,789,863,673,741,812,756,788,694,701,804,690]}
df = pd.DataFrame(ipl_data)
grouped = df.groupby('Team')
score = lambda x: (x - x.mean()) / x.std()*10
grouped.transform(score)
# percentage of the groups total by dividing by the group-wise sum.
df.groupby('Sales Rep')['Val'].transform(lambda x: x/sum(x))
###Output
_____no_output_____
###Markdown
Filtration - Filtration filters the data on a defined criteria and returns the subset of data. - The filter() function is used to filter the data.
###Code
import pandas as pd
import numpy as np
ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings',
'Kings', 'Kings', 'Kings', 'Riders', 'Royals', 'Royals', 'Riders'],
'Rank': [1, 2, 2, 3, 3,4 ,1 ,1,2 , 4,1,2],
'Year': [2014,2015,2014,2015,2014,2015,2016,2017,2016,2014,2015,2017],
'Points':[876,789,863,673,741,812,756,788,694,701,804,690]}
df = pd.DataFrame(ipl_data)
# In the above filter condition, we are asking to return
# the teams which have participated three or more times in IPL
df.groupby('Team').filter(lambda x: len(x) >= 3)
Using groupby and filter together
df.groupby('AAA').filter(lambda x: len(x) > 120)
###Output
_____no_output_____
###Markdown
Merge, Join, Concatenate - One of the main ways that databases are structured is to be normalized. - Data analysis is more straightforward if all of the data is in a single table. - df1.append(df2) stacks vertically- pd.concat([df1, df2]) stacks vertically or horizontally- df1.join(df2) inner/outer/left/right join on index (or key column)- pd.merge(df1, df2) many joins on multiple columns Merge - Possible to use merge() any time to do database-like join operations. - It’s the most flexible of the three operations.- When you want to combine data objects based on one or more keys. - More specifically, merge() is most useful when you want to combine rows that share data.- Possible to achieve many-to-one and many-to-many joins with merge(). - In a many-to-one join, - one of the df has many rows in the merge column that repeat the same values (ex: 1, 1, 3, 5, 5) - while the merge column in the other dataset will not have repeat values (such as 1, 3, 5).- In a many-to-many join, - both of your merge columns will have repeat values. - These merges are more complex and result in the Cartesian product of the joined rows. - Meaning: after the merge, there is every combination of rows sharing the same value in the key column.- When using merge(), there are two required arguments: - The left DataFrame - The right DataFrame
###Code
# Example of application: merging on a common column
result = pd.merge(df1, df2[['col1', 'col2', 'col3']], on='col1')
###Output
_____no_output_____
###Markdown
Syntax
###Code
merge(self, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'),
copy=True, indicator=False, validate=None)
The join is done on columns or indexes.
- If joining columns on columns, the DataFrame indexes will be ignored.
- Otherwise the index will be passed on.
right: DataFrame or named Series
Object to merge with.
how: {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘inner’
- left: use only keys from left frame, preserve key order
- right: use only keys from right frame, preserve key order.
- outer: use union of keys from both frames, sort keys lexicographically.
- inner: use intersection of keys from both frames, preserve the order of the left keys.
on: label or list
Column or index level names to join on.
These MUST be found in both DataFrames.
If on is None and not merging on indexes: intersection of the columns in both DataFrames.
left_on: label or list, or array-like
Columns from the left DataFrame to use as keys.
Can either be column names or arrays with length equal to the length of the DataFrame.
left_index: bool, default False
If True, use the index (row labels) from the left DataFrame as its join key(s).
If used, MUST have right_index as well
In case of a DataFrame with a MultiIndex (hierarchical):
- the number of levels must match the number of join keys from the right DataFrame
right_on: label or list, or array-like
Columns from the right DataFrame to use as keys (analogous to left_on).
right_index: bool, default False
If True, use the index from the right DataFrame as its join key (analogous to left_index).
sort bool, default False
Sort the join keys lexicographically in the result DataFrame.
If False, the order of the join keys depends on the join type (how keyword).
suffixes tuple of (str, str), default (‘_x’, ‘_y’)
Suffix to apply to overlapping column names in the left and right side, respectively.
To raise an exception on overlapping columns use (False, False).
copy bool, default True
If False, avoid copy if possible.
indicator bool or str, default False
If True, new col called “_merge” with information on the source of each row added
If string, new col called str with information on source of each row will be added
Information column is Categorical-type and takes on a value of
- “left_only” for observations whose merge key only appears in ‘left’ DataFrame,
- “right_only” for observations whose merge key only appears in ‘right’ DataFrame,
- “both” if the observation’s merge key is found in both.
validate: str, optional
If specified, checks if merge is of specified type.
- “one_to_one” or “1:1”: check if merge keys are unique in both left and right
- “one_to_many” or “1:m”: check if merge keys are unique in left
- “many_to_one” or “m:1”: check if merge keys are unique in right
- “many_to_many” or “m:m”: allowed, but does not result in checks (a runnable sketch of indicator and validate follows below).
###Output
_____no_output_____
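###Markdown
Before the worked examples, a minimal runnable sketch of the indicator and validate arguments listed above (the frames here are made up for illustration):
###Code
import pandas as pd
left = pd.DataFrame({'key': [1, 2, 3], 'a': ['x', 'y', 'z']})
right = pd.DataFrame({'key': [2, 3, 4], 'b': ['u', 'v', 'w']})
# indicator=True adds a '_merge' column: left_only / right_only / both
pd.merge(left, right, on='key', how='outer', indicator=True)
# validate raises a MergeError if the relationship is not as declared;
# here both key columns are unique, so "one_to_one" passes
pd.merge(left, right, on='key', validate='one_to_one')
###Output
_____no_output_____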
###Markdown
Example 1 - merging principles
###Code
df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo', 'bar'],
                    'value': [1, 2, 3, 5, np.nan]})
df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'baz', 'foo'],
                    'value': [5, 6, np.nan, 4, 8]})
df1
df2
# Merge df1 and df2 on the lkey and rkey columns.
# The value columns have the default suffixes, _x and _y, appended.
df1.merge(df2, left_on='lkey', right_on='rkey', how='inner')
df1.merge(df2, left_on='lkey', right_on='rkey', how='outer')
###Output
_____no_output_____
###Markdown
Example 2
###Code
df1 = pd.DataFrame({'ID': ['0011','0013','0014','0016','0017'],
'First Name': ['Joseph', 'Mike', 'Jordan', 'Steven', 'Susan']})
df2 = pd.DataFrame({'ID': ['0010','0011','0013','0014','0017'],
'Last Name': ['Gordan', 'Johnson', 'Might' , 'Jackson', 'Shack']})
df3 = pd.DataFrame({'ID': ['0020','0022','0025'],
'First Name': ['Adam', 'Jackie', 'Sue']})
df4 = pd.DataFrame({'Key': ['0020','0022','0025'],
'Scores': [95, 90, 80]})
print(df1)
print(df2)
# how=inner will keep observations that have a match on the merge variable in both data frames.
new_merged_dataframe = pd.merge(df1, df2, on= "ID", how= "inner")
new_merged_dataframe
# Passing the “how = ‘outer'” argument will keep all observations from both data frames.
new_OUTER_merged_dataframe = pd.merge(df1, df2, on= "ID", how= "outer")
new_OUTER_merged_dataframe
# Merging on different columns with unique values
# - If the two data frames each contain the unique identifier, but are stored under different columns
# - It is possible to merge using the left_on and right_on arguments.
# If going this route, you have to pass both arguments.
# The general structure is
# - left_on = column_name_with_unique_identifier
# - right_on = column_name_with_unique_identifier.
# Note: Merging data frames this way keep both columns stated in the left_on and right_on arguments.
merged_dataframe = pd.merge(df3, df4, left_on= "ID", right_on= "Key", how= "inner")
merged_dataframe
###Output
_____no_output_____
###Markdown
Example 3 - merging using 2 keys
###Code
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
'key2': ['K0', 'K1', 'K0', 'K1'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
'key2': ['K0', 'K0', 'K0', 'K0'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
pd.merge(left, right, on=['key1'])
# here the merge is only on key1 - note the repetition of A2 since it is allocated to both C1 and C2
pd.merge(left, right, on=['key1', 'key2'])
# what is happening here is important: the merge is done when key1 and key2 are the same
###Output
_____no_output_____
###Markdown
Example 4 - index renaming, no index label
###Code
import pandas as pd
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']})
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']})
left.index.set_names('aaa', inplace=True)
right.index.set_names('aaa', inplace=True)
# if the df have same name for index, it is possible to merge using the label of the index
pd.merge(left, right, on='aaa')
# To note: if the indexes do not have names, then using this will work (left_index and right_index both necessary)
pd.merge(left, right, left_index=True, right_index=True)
###Output
_____no_output_____
###Markdown
Join - While merge() is a module function, .join() is an object method that lives on the DataFrame.- This lets you specify only one other DataFrame, which is joined onto the DataFrame that .join() is called on.- join() uses merge() internally, but provides a more concise way to join than a fully specified merge() call. - example df1.join(df2, lsuffix="_left", rsuffix="_right") - With the indices visible, this is a left join happening here, with df1 being the left df - This example provides the parameters lsuffix and rsuffix (a short sketch of this follows the example below). - Because .join() joins on indices and doesn’t directly merge the DataFrames, all columns, even those with matching names, are retained in the resulting DataFrame. Syntax
###Code
.join(self, other, on=None, how='left', lsuffix='', rsuffix='', sort=False)
other: DataFrame, Series, or list of DataFrame
Index should be similar to one of the columns in this one.
on: str, list of str, or array-like, optional
Column or index level name(s) in the caller to join, otherwise joins index-on-index.
If multiple values given, the other DataFrame must have a MultiIndex.
Can pass an array as the join key if it is not already contained in the calling DataFrame.
Like an Excel VLOOKUP operation.
how: {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘left’
How to handle the operation of the two objects.
- left: use calling frame’s index (or column if on is specified)
- right: use other’s index.
- outer: form union of calling frame’s index (or column if on is specified) with other’s index, and sort it lexicographically.
- inner: form intersection of calling frame’s index (or column if on is specified) with other’s index, preserving the order of the calling frame’s index.
lsuffix str, default ‘’
- Suffix to use from left frame’s overlapping columns.
rsuffix str, default ‘’
- Suffix to use from right frame’s overlapping columns.
sort bool, default False
- Order result DataFrame lexicographically by the join key.
- If False, the order of the join key depends on the join type (how keyword).
###Output
_____no_output_____
###Markdown
Example
###Code
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
left
right
left.join(right)
# note: the default is a left join - all index labels of `left` are kept; missing values from `right` become NaN
left.join(right, how='outer')
# note: an outer join keeps all index labels from both frames; unmatched entries are filled with NaN
###Output
_____no_output_____
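###Markdown
A short sketch of the lsuffix/rsuffix behaviour mentioned in the Join introduction, with a deliberately overlapping column name (made-up data):
###Code
import pandas as pd
df_a = pd.DataFrame({'val': [1, 2, 3]}, index=['K0', 'K1', 'K2'])
df_b = pd.DataFrame({'val': [10, 20]}, index=['K0', 'K2'])
# both frames have a 'val' column, so the suffixes are required to disambiguate
df_a.join(df_b, lsuffix='_left', rsuffix='_right')
###Output
_____no_output_____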
###Markdown
Concat - Concatenation is a bit different from the merging techniques above- With merging, we expect the resulting dataset - to have rows from the parent datasets mixed in together based on commonality - and (for inner joins) to lose rows that don’t have matches in the other dataset. - With Concatenation - datasets are just stitched together along an axis - either the row axis or the column axis (an axis=1 sketch follows the simple examples below).
###Code
#Example:
concatenated = pd.concat([df1, df2], axis=1)
###Output
_____no_output_____
###Markdown
Syntax
###Code
pd.concat( objs, axis=0, join='outer', ignore_index=False, keys=None,
levels=None, names=None, verify_integrity=False, sort=None, copy=True)
objs: a sequence or mapping of Series or DataFrame objects
axis: {0/’index’, 1/’columns’}, default 0
join: {‘inner’, ‘outer’}, default ‘outer’
ignore_index bool, default False
If True, do not use the index values along the concatenation axis.
The resulting axis will be labeled 0, …, n - 1.
Useful where the concatenation axis does not have meaningful indexing information.
keys: sequence, default None
If multiple levels passed, should contain tuples.
Construct hierarchical index using the passed keys as the outermost level.
levels: list of sequences, default None
Specific levels (unique values) to use for constructing a MultiIndex.
Otherwise they will be inferred from the keys.
names: list, default None
Names for the levels in the resulting hierarchical index.
verify_integrity bool, default False
Check whether the new concatenated axis contains duplicates.
This can be very expensive relative to the actual data concatenation.
sort: bool, default False
Sort non-concatenation axis if it is not already aligned when join is ‘outer’.
No effect if join='inner' (already preserves the order of the non-concatenation axis)
###Output
_____no_output_____
###Markdown
Simple Examples
###Code
s1 = pd.Series(['a', 'b'])
s2 = pd.Series(['c', 'd'])
pd.concat([s1, s2])
# reset the index with ignore_index
pd.concat([s1, s2], ignore_index=True)
# Add a hierarchical index at the outermost level of the data with the keys option.
pd.concat([s1, s2], keys=['s1', 's2'])
# Label the index keys you create with the names option.
pd.concat([s1, s2], keys=['s1', 's2'],names=['Series name', 'Row ID'])
# Combine two DataFrame objects with identical columns.
df1 = pd.DataFrame([['a', 1], ['b', 2]],columns=['letter', 'number'])
df2 = pd.DataFrame([['c', 3], ['d', 4]],columns=['letter', 'number'])
pd.concat([df1, df2])
# Combine DataFrame objects with overlapping columns and return everything.
# Columns outside the intersection will be filled with NaN values.
df3 = pd.DataFrame([['c', 3, 'cat'], ['d', 4, 'dog']],columns=['letter', 'number', 'animal'])
pd.concat([df1, df3], sort=False)
# Combine DataFrame objects with overlapping columns and return only those that are shared
# by passing inner to the join keyword argument.
pd.concat([df1, df3], join="inner")
###Output
_____no_output_____
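###Markdown
Stitching along the column axis, as mentioned in the Concat introduction - a small sketch that re-creates df1 so the cell is self-contained:
###Code
import pandas as pd
df1 = pd.DataFrame([['a', 1], ['b', 2]], columns=['letter', 'number'])
df4 = pd.DataFrame([['bird', 'polly'], ['monkey', 'george']], columns=['animal', 'name'])
# axis=1 places the frames side by side, aligning on the (default RangeIndex) index
pd.concat([df1, df4], axis=1)
###Output
_____no_output_____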
###Markdown
Notes Check Installed versions
###Code
# Show installed versions
pd.__version__ # displays the pandas version
pd.show_versions() # displays versions of everything (machine, OS, python, dependencies, etc.)
###Output
_____no_output_____
###Markdown
Fast way to create a DataFrame
###Code
import numpy as np
import pandas as pd
pd.DataFrame(np.random.rand(4,8), columns=list('abcdefgh'))
###Output
_____no_output_____
###Markdown
Create df from clipboard
###Code
# After copying a dataset from Excel, do this:
df = pd.read_clipboard()
###Output
_____no_output_____
###Markdown
Args & Kwargs
###Code
'*args' and '**kwargs' are magic variables
- it is not necessary to write *args or **kwargs.
- Only the * (asterisk) is necessary.
- You could have also written *var and **vars.
- Writing *args and **kwargs is just a convention
*args and **kwargs are mostly used in function definitions.
*args and **kwargs allow you to pass a variable number of arguments to a function.
Useful when you do not know beforehand how many arguments will be passed
- *args is used to send a non-keyworded variable-length argument list to the function.
- **kwargs allows you to pass a keyworded variable-length list of arguments to a function (named arguments)
- *args collects a tuple, **kwargs collects a dictionary
- by using kwargs, it is possible to name the arguments (which is not possible with args)
###Output
_____no_output_____
###Markdown
https://www.listendata.com/2019/07/args-and-kwargs-in-python.html Example: *arg
###Code
def add(*args):
print(sum(args))
add(5,5,6)
# list comprehension - makes the code more readable and faster than a for loop with a lambda function
def even(*args):
print([n for n in args if n%2 == 0])
even(1,2,3,4)
###Output
[2, 4]
###Markdown
Example: **kwarg
###Code
def test2(**kwargs):
print(kwargs)
print(kwargs.keys())
print(kwargs.values())
test2(a=1, b=3)
###Output
{'a': 1, 'b': 3}
dict_keys(['a', 'b'])
dict_values([1, 3])
###Markdown
Display Settings and Style Display Settings
###Code
# To display 2 decimals
pd.set_option('display.float_format', '{:.2f}'.format)
pd.reset_option('display.float_format')
display can be:
- float_format
- colheader_justify, column_space, date_dayfirst, date_yearfirst
- max_categories, max_columns, max_colwidth, max_rows, min_rows, precision
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.set_option.html
###Output
_____no_output_____
###Markdown
Style to next level
###Code
# This is cool - an example
my_format = {'Date': '{:%d/%m/%y}',  # date formatting
             'Col1': '${:.2f}',      # for $ signs and 2 decimals
             'Col2': '{:,}'}         # for thousands separator
# Then pass the formatting to the df and chain further styling methods
# (generic pattern - a runnable example follows in the next cell):
df.style.format(my_format) \
    .hide_index() \
    .highlight_min(subset='Col1', color='red') \
    .highlight_max(subset='Col2', color='lightgreen') \
    .background_gradient(subset='Col3', cmap='Blues') \
    .bar(subset='Col4', color='lightblue', align='zero')
# hide_index removes the index; highlight min of Col1 in red, max of Col2 in green,
# a blue gradient on Col3, and in-cell bars on Col4
###Output
_____no_output_____
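###Markdown
A runnable sketch of the styling chain above - the column names and numbers are made up, and background_gradient assumes matplotlib is installed:
###Code
import pandas as pd
df_style = pd.DataFrame({'Price': [12.5, 7.25, 30.0],
                         'Qty': [1200, 15300, 880]})
styles = {'Price': '${:.2f}',   # dollar sign and 2 decimals
          'Qty': '{:,}'}        # thousands separator
(df_style.style
    .format(styles)
    .highlight_max(subset='Qty', color='lightgreen')    # highlight the max Qty
    .background_gradient(subset='Price', cmap='Blues')  # blue gradient on Price
    .bar(subset='Qty', color='lightblue'))              # in-cell bars on Qty
###Output
_____no_output_____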
###Markdown
Profile a DataFrame
###Code
# - pip install pandas-profiling[notebook] # pip install
# - conda install -c conda-forge pandas-profiling # in anaconda prompt
# - pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip # github
from pandas_profiling import ProfileReport
ProfileReport(df)
###Output
_____no_output_____
###Markdown
Syntax
###Code
These are exactly equivalent and produce the same result
df1 = df[df['col1'] == df['col1'].min()]
df1 = df[df.col1 == df.col1.min()]
###Output
_____no_output_____
###Markdown
Inplace
###Code
By default, inplace=False, so that we can experiment, try and play with the data
When toggling inplace=True, we 'permanently' affect the underlying data
inplace is available in methods such as (and many more)
- dropna()
- set_index()
- drop()
as an example, these two are equivalent
- inplace: df.set_index('name', inplace=True)
- assignment: df = df.set_index('name')
###Output
_____no_output_____
###Markdown
Reduce df file size
###Code
When importing, use this (with cols and dtypes prepared beforehand - see the sketch in the next cell):
df = pd.read_csv("file_name.csv", usecols=cols, dtype=dtypes)
To check the memory usage:
df.info(memory_usage='deep')
The in-memory size can shrink by roughly 10x, depending on the data characteristics
###Output
_____no_output_____
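###Markdown
A hedged sketch of the usecols/dtype idea above - the column names and dtypes below are placeholders and have to match the actual CSV:
###Code
import pandas as pd
cols = ['order_id', 'qty', 'region']        # keep only the columns actually needed
dtypes = {'order_id': 'int32',              # smaller integer types
          'qty': 'int16',
          'region': 'category'}             # low-cardinality strings -> category
df_small = pd.read_csv("file_name.csv", usecols=cols, dtype=dtypes)
df_small.info(memory_usage='deep')          # check the memory footprint
###Output
_____no_output_____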
###Markdown
Create a pivot table
###Code
df.pivot_table(index=['AAA', 'BBB'],
               columns=['XYZ'],
               values='Col1',
               aggfunc=[np.mean, 'count'],  # or np.sum
               margins=True,                # this is to provide total columns and rows, if needed
               fill_value=0)                # replaces missing values / NaN with 0
# 'AAA', 'BBB', 'XYZ', 'Col1' are placeholder names - a runnable version follows in the next cell
https://pbpython.com/pandas-pivot-table-explained.html
###Output
_____no_output_____
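###Markdown
A runnable version of the pivot_table call above, using the ipl_data frame defined earlier in this notebook:
###Code
import numpy as np
import pandas as pd
ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings',
                     'Kings', 'Kings', 'Kings', 'Riders', 'Royals', 'Royals', 'Riders'],
            'Rank': [1, 2, 2, 3, 3, 4, 1, 1, 2, 4, 1, 2],
            'Year': [2014, 2015, 2014, 2015, 2014, 2015, 2016, 2017, 2016, 2014, 2015, 2017],
            'Points': [876, 789, 863, 673, 741, 812, 756, 788, 694, 701, 804, 690]}
df = pd.DataFrame(ipl_data)
# mean and count of Points per Team and Year, with totals and 0 for missing combinations
df.pivot_table(index='Team',
               columns='Year',
               values='Points',
               aggfunc=[np.mean, 'count'],
               margins=True,
               fill_value=0)
###Output
_____no_output_____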
###Markdown
Reshape a Multiindexed Series
###Code
This is exactly the same - provides the mean of values in Col1 grouped by AAA
df.groupby('AAA').Col1.mean()
df.groupby('AAA')['Col1'].mean()
It is possible to extend that to a multi-index - the output will be vertical
df.groupby(['AAA','BBB'])['Col1'].mean()
In order to have a nicer looking output (like a pivot table) - see the runnable sketch in the next cell
df.groupby(['AAA','BBB'])['Col1'].mean().unstack()
###Output
_____no_output_____
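###Markdown
The same unstack() idea as a runnable sketch, using a small subset of the ipl_data used earlier:
###Code
import pandas as pd
df = pd.DataFrame({'Team': ['Riders', 'Riders', 'Kings', 'Kings'],
                   'Year': [2014, 2015, 2014, 2015],
                   'Points': [876, 789, 741, 812]})
df.groupby(['Team', 'Year'])['Points'].mean()            # MultiIndexed Series (vertical output)
df.groupby(['Team', 'Year'])['Points'].mean().unstack()  # Year values become the columns
###Output
_____no_output_____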
###Markdown
Convert a Continuous variable into Discrete - with Bins
###Code
Use this (the bins list gives the bin edges, so it needs one more entry than labels - runnable sketch in the next cell)
pd.cut(df['col1'], bins=[0, 2, 5, 10], labels=['low', 'med', 'high'])
###Output
_____no_output_____
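###Markdown
A runnable sketch of pd.cut with made-up numbers: 4 bin edges give 3 bins, hence 3 labels:
###Code
import pandas as pd
df = pd.DataFrame({'col1': [1, 3, 4, 7, 9, 2]})
df['col1_level'] = pd.cut(df['col1'], bins=[0, 2, 5, 10], labels=['low', 'med', 'high'])
df
###Output
_____no_output_____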
###Markdown
Split String in Multiple Columns
###Code
Will create a new df with these:
df['Col1'].str.split(' ', expand=True) # split on space into separate columns
df['Col1'].str.split(',', expand=True) # split on comma
###Output
_____no_output_____
###Markdown
Aggregate by Multiple functions
###Code
In the example, we had 'Orders_ID' and the detail of 'Item_Price' of those 'Orders_ID'
# those are the same, they return the sum of Item_Price for Order_ID 1
df[df['Order_ID'] == 1]['Item_Price'].sum()
df[df.Order_ID == 1].Item_Price.sum()
# this creates a df with Order_IDs and sum of Item_Price
df.groupby('Order_ID')['Item_Price'].sum()
# this creates a df with 2 columns with sum and count
df.groupby('Order_ID')['Item_Price'].agg(['sum','count'])
To take this to the next level and integrate this information back into the original df
We want to combine the output of an aggregation with a DataFrame (a runnable sketch follows in the next cell)
Solution is to use the TRANSFORM method
Transform performs same calculation but returns output data with same shape as original
# This creates the array 'df2'
df2 = df.groupby('Order_ID')['Item_Price'].transform('sum')
# This adds a new column 'Total Price' in the original df, using df2 data
df['Total_Price'] = df2
# This creates a new columns 'Percent_Order'
df['Percent_Order'] = df.Item_Price / df.Total_Price
###Output
_____no_output_____
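###Markdown
A self-contained, runnable sketch of the Order_ID / Item_Price pattern described above (the numbers are made up):
###Code
import pandas as pd
df = pd.DataFrame({'Order_ID': [1, 1, 2, 2, 2],
                   'Item_Price': [10.0, 5.0, 3.0, 4.0, 3.0]})
# per-order total, broadcast back to the shape of the original frame
df['Total_Price'] = df.groupby('Order_ID')['Item_Price'].transform('sum')
df['Percent_Order'] = df['Item_Price'] / df['Total_Price']
df
###Output
_____no_output_____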
###Markdown
Build DF from multiple files (Rows and Column wise)
###Code
The glob module is part of the standard library; the files to combine need to be in the same place (or reachable via a path)
from glob import glob
# for vertical concatenation (adding rows); AAA_folder is a placeholder iterable of file paths
pd.concat((pd.read_csv(file) for file in AAA_folder), ignore_index=True)
# ignore_index=True avoids duplicated indexes all starting at 0
# for horizontal concatenation (adding columns)
pd.concat((pd.read_csv(file) for file in AAA_folder), axis=1)
import glob
df = pd.concat((pd.read_csv(file, header=None) for file in glob.glob("*.csv")), axis=0, ignore_index=True)
# header=None is very important here
# To understand the above, this will print the list of files ending in 'csv' located in the same place as the ipynb file
for file in glob.glob("*.csv"): # this could be the file path
    print(file)
###Output
_____no_output_____
###Markdown
Get the length of str characters from a column
###Code
We call these custom methods - both return the length of each string in the column:
.apply(lambda x: len(x))
.apply(len)
stack, unstack, melt, pivot https://pythonhealthcare.org/2018/04/08/32-reshaping-pandas-data-with-stack-unstack-pivot-and-melt/
cheatsheet to do: https://www.dataquest.io/blog/pandas-cheat-sheet/
good blog: https://www.novixys.com/blog/author/admin/
###Output
_____no_output_____ |
Experiments/16Mar2018/dataprocessing.ipynb | ###Markdown
Calculate change in position, for position 2against zero'd IMU readings
###Code
import csv
import numpy as np
path ='/home/nrw/Documents/projects_Spring2018/Research/Experiments/16Mar2018/'
path += 'pos2/'  # trailing slash so that path + fname resolves to the file inside the pos2 folder
fname = 'data_16Mar2018_pos2.csv'
file=open(path+fname, "r")
reader = csv.reader(file)
print(reader)
zero_pos = np.array([])
# calculate change in position = (final pos) - (zero pos)
outfname = 'delta_pos.csv'
outfile = open(path+outfname,"w")
for row_index, row in enumerate(reader):
if row_index == 0:
pass
elif row_index % 2 == 1:
zero_pos = np.array([float(j) for j in row[:4]])
elif row_index % 2 == 0:
pos = np.array([float(j) for j in row[:4]]) - zero_pos
astr = ','.join(['%.2f' % num for num in pos]) + "\n"
outfile.write(astr)
file.close()
outfile.close()
#for line in reader:
# t=line[0]
# print(t)
# https://rosettacode.org/wiki/Read_a_specific_line_from_a_file#Python
###Output
<_csv.reader object at 0x7f09c1964748>
###Markdown
Calculate change in position, for position 3against zero'd IMU readings
###Code
import csv
import numpy as np
path3 ='/home/nrw/Documents/projects_Spring2018/Research/Experiments/16Mar2018/'
path3 += 'pos3/'
fname3 = 'data_16Mar2018_pos3.csv'
file3=open(path3+fname3, "r")
reader = csv.reader(file3)
print(reader)
zero_pos = np.array([])
# calculate change in position = (final pos) - (zero pos)
outfname = 'delta_pos.csv'
outfile = open(path3+outfname,"w")
for row_index, row in enumerate(reader):
if row_index == 0:
pass
elif row_index % 2 == 1:
zero_pos = np.array([float(j) for j in row[:4]])
elif row_index % 2 == 0:
pos = np.array([float(j) for j in row[:4]]) - zero_pos
astr = ','.join(['%.2f' % num for num in pos]) + "\n"
outfile.write(astr)
file3.close()
outfile.close()
#for line in reader:
# t=line[0]
# print(t)
# https://rosettacode.org/wiki/Read_a_specific_line_from_a_file#Python
###Output
<_csv.reader object at 0x7f09bd9fa4a8>
###Markdown
Read in CSV to variables
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
path3 ='/home/nrw/Documents/projects_Spring2018/Research/Experiments/16Mar2018/'
path3 += 'pos3/'
csv_file3 = 'data_position_IMU_camera_pos3.csv'
# pandas so I don't have to loop through file just to ge tone column
# https://stackoverflow.com/questions/16503560/read-specific-columns-from-a-csv-file-with-csv-module
df3 = pd.read_csv(path3+csv_file3)
eulerY3 = df3["EulerY"]
trueDeg3 = df3["Camera (degrees)"]
trueF3 = df3["Force (grams)"]
# plot the position-3 variables loaded just above
plt.scatter(trueF3, eulerY3)
plt.scatter(trueF3, -trueDeg3)
plt.ylabel('degrees of deflection (deg)')
plt.xlabel('force (grams)')
plt.show()
###Output
_____no_output_____
###Markdown
Position Two, ridge regression
###Code
import numpy as np
import plotly
import plotly.plotly as py
import plotly.offline as po
import plotly.graph_objs as go
from sklearn import linear_model
from sklearn.linear_model import Ridge
plotly.offline.init_notebook_mode(connected=True)
trace0 = go.Scatter(
x = trueF,
y = eulerY,
mode = 'markers',
name = 'degrees (by IMU)'
)
trace1 = go.Scatter(
x = trueF,
y = -trueDeg,
mode = 'markers',
name = 'true degrees (by webcam)'
)
# position-2 variables (trueF, eulerY, trueDeg) are assumed to be loaded in an analogous cell
myX = trueF.dropna().values.reshape(-1,1)
myy = eulerY.dropna()
ridge = Ridge(fit_intercept=True, alpha=1.0, random_state=0, normalize=True)
ridge.fit(myX, myy)
coef_ridge = ridge.coef_
gridx = np.linspace(myX.min(), myX.max(), 20)
coef_ = ridge.coef_ * gridx + ridge.intercept_
#plt.plot(gridx, coef_, 'g-', label="ridge regression")
trace2 = go.Scatter(
x= gridx,
y = coef_,
name = 'linear fit (w/ridge penalty)'
)
#a,b = np.polyfit(x,y,1)
#plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
data = [trace0, trace1, trace2]
layout = go.Layout(
title='Force vs Degrees of Deflection',
yaxis=dict(title='degrees'),
xaxis=dict(title='Force (in grams)'),
legend=dict(x=.1, y=-.5)
)
fig = go.Figure(data=data, layout=layout)
# Plot and embed in ipython notebook!
po.iplot(fig)
#po.plot(fig, filename='temp_plot.html')
###Output
_____no_output_____
###Markdown
Position Three, ridge regression
###Code
import numpy as np
import plotly
import plotly.plotly as py
import plotly.offline as po
import plotly.graph_objs as go
from sklearn import linear_model
from sklearn.linear_model import Ridge
plotly.offline.init_notebook_mode(connected=True)
trace0 = go.Scatter(
x = trueF3,
y = eulerY3,
mode = 'markers',
name = 'pos3 degrees (by IMU)'
)
trace1 = go.Scatter(
x = trueF3,
y = -trueDeg3,
mode = 'markers',
name = 'pos3 true degrees (by webcam)'
)
myX3 = trueF3.dropna().values.reshape(-1,1)
myy3 = eulerY3.dropna()
ridge = Ridge(fit_intercept=True, alpha=1.0, random_state=0, normalize=True)
ridge.fit(myX3, myy3)
coef_ridge = ridge.coef_
gridx = np.linspace(myX3.min(), myX3.max(), 20)
coef_ = ridge.coef_ * gridx + ridge.intercept_
#plt.plot(gridx, coef_, 'g-', label="ridge regression")
trace4 = go.Scatter(
x= gridx,
y = coef_,
name = 'pos3 linear fit (w/ridge penalty)'
)
#a,b = np.polyfit(x,y,1)
#plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
data = [trace0, trace1, trace4]
layout = go.Layout(
title='Force vs Degrees of Deflection',
yaxis=dict(title='degrees'),
xaxis=dict(title='Force (in grams)'),
legend=dict(x=.1, y=-.5)
)
fig = go.Figure(data=data, layout=layout)
# Plot and embed in ipython notebook!
po.iplot(fig)
#po.plot(fig, filename='temp_plot.html')
import plotly
from plotly.graph_objs import Scatter, Layout
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot({
"data": [Scatter(x=[1, 2, 3, 4], y=[4, 3, 2, 1])],
"layout": Layout(title="hello world")
})
# https://plot.ly/python/getting-started/#initialization-for-offline-plotting
# https://plot.ly/pandas/line-charts/#basic-line-plot
# https://plot.ly/python/getting-started/#more-examples
# http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
###Output
_____no_output_____ |
delayed_coord/Spectro_Period.ipynb | ###Markdown
Periodograms and Spectrograms - The corresponding plots are shown for each of the attractors: its periodogram and spectrogram
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from scipy.integrate import odeint
from scipy import signal
%pylab inline
##---- Time series from differential equations
def Lorentz_eq2(r, t, rho, sigma, beta): # Lorentz attractor
x, y, z = r
dx = sigma*(y-x)
dy = x*(rho-z) -y
dz = x*y - beta*z
return np.array([dx, dy, dz])
def Duffing(r, t, e, g, w): # Duffing oscillator
x, y, z = r
dx = y
dy = x - x**3 - e*y +g*np.cos(z)
dz = w
return np.array([dx, dy, dz])
def VderP2(r, t, A, T2): # Van der Pol equation with forcing
x, y, z = r
dx = y
dy = (1-x**2)*y - x + A*np.cos(2*np.pi*z)
dz = 1/T2
return np.array([dx, dy, dz])
def Chua_Circuit(r, t, a, b, m0, m1): # Chua's circuit
x, y, z = r
if x >= 1:
dx = a*(y - m1*x - (m0-m1))
elif x <= -1:
dx = a*(y - m1*x + (m0-m1))
else:
dx = a*(y - m0*x)
dy = x - y + z
dz = -b*y
return np.array([dx, dy, dz])
##----- Numerical solution, Fourier transform,
##----- periodogram and spectrogram
def frequencies(f, r0, time, param, Fs):
    '''Inputs: function with the differential equations, initial conditions, time array,
       parameters in tuple, fs
       Outputs: Graphics
'''
#numerical solution
sol = odeint(f, r0, time, args=param)
x = sol[:,0]; y = sol[:,1]; z = sol[:,2]
fig1 = plt.figure(figsize=(6, 6))
ax1 = fig1.add_subplot(111, projection='3d')
ax1.plot(x, y, z, 'c-')
ax1.set_xlabel("x(t)"); ax1.set_ylabel("y(t)"); ax1.set_zlabel("z(t)")
#Fourier transform
freq = np.fft.fftfreq(time.shape[-1])
sp_x = np.fft.fft(x); sp_y = np.fft.fft(y); sp_z = np.fft.fft(z)
    fig2 = plt.figure(figsize=(6, 8))
ax02 = fig2.add_subplot(311); ax12 = fig2.add_subplot(312); ax22 = fig2.add_subplot(313)
ax02.plot(freq, sp_x.real); ax12.plot(freq, sp_y.real); ax22.plot(freq, sp_z.real)
ax02.set_xlim(-0.05, 0.05); ax12.set_xlim(-0.05, 0.05); ax22.set_xlim(-0.05, 0.05)
#periodogram
ff_periodx, Spect_denx = signal.periodogram(x, fs=Fs)
ff_periody, Spect_deny = signal.periodogram(y, fs=Fs)
ff_periodz, Spect_denz = signal.periodogram(z, fs=Fs)
    fig3 = plt.figure(figsize=(6, 8))
ax03 = fig3.add_subplot(311); ax13 = fig3.add_subplot(312); ax23 = fig3.add_subplot(313)
ax03.semilogy(ff_periodx, Spect_denx); ax13.semilogy(ff_periody, Spect_deny); ax23.semilogy(ff_periodz, Spect_denz)
    #Spectrogram
fqx_spect, tx_spect, Spectx = signal.spectrogram(x, Fs)
fqy_spect, ty_spect, Specty = signal.spectrogram(y, Fs)
fqz_spect, tz_spect, Spectz = signal.spectrogram(z, Fs)
fig4 = plt.figure(figsize=(6, 8))
ax04 = fig4.add_subplot(311); ax14 = fig4.add_subplot(312); ax24 = fig4.add_subplot(313)
ax04.pcolormesh(tx_spect, fqx_spect, Spectx); ax04.set_ylim(0, 40)
ax14.pcolormesh(ty_spect, fqy_spect, Specty); ax14.set_ylim(0, 40)
ax24.pcolormesh(tz_spect, fqz_spect, Spectz); ax24.set_ylim(0, 40)
plt.show()
a = 'everything done'
return a
#------- Lorentz system
rho, sigma, beta = 28, 10, 8./3.
p = (rho, sigma, beta)
t = np.arange(0, 100, 0.01)
rr0 = np.array([1, 0, 0])
frequencies(Lorentz_eq2, rr0, t, p, 1e3)
#------- Duffing system a
epsilon, gamma, omega = 0.15, 0.3, 1
p_duff = (epsilon, gamma, omega)
t_duff = np.arange(0, 200, 0.01)
rr0_duff = np.array([1, 0, 0])
frequencies(Duffing, rr0_duff, t_duff, p_duff, 1e3)
##------- Duffing b
epsilon, gamma, omega = 0.22, 0.3, 1
p_duff = (epsilon, gamma, omega)
t_duff = np.arange(0, 200, 0.01)
rr0_duff = np.array([1, 0, 0])
frequencies(Duffing, rr0_duff, t_duff, p_duff, 1e3)
##------ Duffing c
epsilon, gamma, omega = 0.25, 0.3, 1.0
p_duff = (epsilon, gamma, omega)
t_duff = np.arange(0, 200, 0.01)
rr0_duff = np.array([1, 0, 0])
frequencies(Duffing, rr0_duff, t_duff, p_duff, 1e3)
##------ Van der Pol
A, T2 = 0.5, 2*np.pi/1.1
p = (A, T2)
t = np.arange(0, 100, 0.01)
rr0 = np.array([1, 0, 0])
frequencies(VderP2, rr0, t, p, 1e3)
##----- Chua's circuit
a, b, m0, m1 = 9, 100/7, -1/7, 2/7
p=(a, b, m0, m1)
t = np.arange(0, 100, 0.01)
rr0 = np.array([1, 0, 0])
frequencies(Chua_Circuit, rr0, t, p, 1e3)
###Output
_____no_output_____ |
Practice/PyTorch/NER_test/0623_test_dataloader.ipynb | ###Markdown
Current progress: I am reading the data-loading code of SimpleTransformer and trying to reproduce it, because I want to change it to handle multi-label data. I don't fully understand it yet and am still working through it; with a bit more time I'm confident I can master it - keep going. 0623 19:00 progress: I have extracted the key part - load_and_cache_examples is the part that converts the file into Tensors! Continue tomorrow.
###Code
from __future__ import absolute_import, division, print_function
import logging
import math
import os
import random
import tempfile
import warnings
from dataclasses import asdict
from pathlib import Path
import numpy as np
import pandas as pd
import torch
from seqeval.metrics import (
classification_report,
f1_score,
precision_score,
recall_score,
)
from simpletransformers.config.model_args import NERArgs
from simpletransformers.config.utils import sweep_config_to_sweep_values
from simpletransformers.ner.ner_utils import (
InputExample,
LazyNERDataset,
convert_examples_to_features,
get_examples_from_df,
load_hf_dataset,
read_examples_from_file,
)
from tensorboardX import SummaryWriter
from torch.nn import CrossEntropyLoss
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset
from tqdm.auto import tqdm, trange
from transformers import (
AlbertConfig,
AlbertForTokenClassification,
AlbertTokenizer,
AutoConfig,
AutoModelForTokenClassification,
AutoTokenizer,
BertConfig,
BertForTokenClassification,
BertTokenizer,
BertweetTokenizer,
BigBirdConfig,
BigBirdForTokenClassification,
BigBirdTokenizer,
CamembertConfig,
CamembertForTokenClassification,
CamembertTokenizer,
DebertaConfig,
DebertaForTokenClassification,
DebertaTokenizer,
DebertaV2Config,
DebertaV2ForTokenClassification,
DebertaV2Tokenizer,
DistilBertConfig,
DistilBertForTokenClassification,
DistilBertTokenizer,
ElectraConfig,
ElectraForTokenClassification,
ElectraTokenizer,
LayoutLMConfig,
LayoutLMForTokenClassification,
LayoutLMTokenizer,
LongformerConfig,
LongformerForTokenClassification,
LongformerTokenizer,
MPNetConfig,
MPNetForTokenClassification,
MPNetTokenizer,
MobileBertConfig,
MobileBertForTokenClassification,
MobileBertTokenizer,
RobertaConfig,
RobertaForTokenClassification,
RobertaTokenizerFast,
SqueezeBertConfig,
SqueezeBertForTokenClassification,
SqueezeBertTokenizer,
XLMConfig,
XLMForTokenClassification,
XLMTokenizer,
XLMRobertaConfig,
XLMRobertaForTokenClassification,
XLMRobertaTokenizer,
XLNetConfig,
XLNetForTokenClassification,
XLNetTokenizerFast,
)
from transformers.convert_graph_to_onnx import convert, quantize
from transformers.optimization import AdamW, Adafactor
from transformers.optimization import (
get_constant_schedule,
get_constant_schedule_with_warmup,
get_linear_schedule_with_warmup,
get_cosine_schedule_with_warmup,
get_cosine_with_hard_restarts_schedule_with_warmup,
get_polynomial_decay_schedule_with_warmup,
)
from simpletransformers.ner import NERModel
# Create a NERModel
#model = NERModel('bert', 'bert-base-cased')
model = NERModel('bert', 'dslim/bert-base-NER', args={
'learning_rate': 2e-5,
'overwrite_output_dir': True,
'reprocess_input_data': True,
'num_train_epochs': 1,
"train_batch_size": 15})
file = "./train.txt"
out = []
with open(file) as f:
lines = f.readlines()
for i in range(len(lines)):
out.append(" ".join(lines[i].replace("\n", "").split(" ")[:-1] + ["O\n"]))
out[:10]
with open("./test_test.txt", "w") as f:
f.writelines(out)
lines[:10]
lines[4].replace("\n", "").split(" ")[:-1]
" ".join(lines[4].replace("\n", "").split(" ")[:-1] + ["O\n"])
out = lines
lines[-2:]
tokenizer = model.tokenizer
tokenizer.encode("Hello")
tokenizer.encode("Hello World Eason")
tokenizer.encode("World")
tokenizer.encode("Eason")
logger = logging.getLogger(__name__)
MODELS_WITH_EXTRA_SEP_TOKEN = [
"roberta",
"camembert",
"xlmroberta",
"longformer",
"mpnet",
]
to_predict
examples = read_examples_from_file(
data,
"train",
bbox= False,
)
a = examples[0]
a.labels
a.words
b = features[0]
len(examples)
b.input_ids
b.input_mask
b.label_ids
b.segment_ids
[]
model.model
model.args.labels_list = ['O', 'B-MISC', 'I-MISC', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
model.args.labels_list = ['O', 'B-MISC', 'I-MISC']
data = "./train.txt"
evaluate=False
no_cache=False
to_predict=None
process_count = model.args.process_count
tokenizer = model.tokenizer
args = model.args
if not no_cache:
no_cache = args.no_cache
mode = "dev" if evaluate else "train"
examples = read_examples_from_file(
data,
mode,
bbox=True if model.args.model_type == "layoutlm" else False,
)
cached_features_file = os.path.join(
args.cache_dir,
"cached_{}_{}_{}_{}_{}".format(
mode,
args.model_type,
args.max_seq_length,
model.num_labels,
len(examples),
),
)
if not no_cache:
os.makedirs(model.args.cache_dir, exist_ok=True)
logger.info(" Converting to features started.")
features = convert_examples_to_features(
examples,
model.args.labels_list,
model.args.max_seq_length,
model.tokenizer,
# XLNet has a CLS token at the end
cls_token_at_end=bool(args.model_type in ["xlnet"]),
cls_token=tokenizer.cls_token,
cls_token_segment_id=2 if args.model_type in ["xlnet"] else 0,
sep_token=tokenizer.sep_token,
# RoBERTa uses an extra separator b/w pairs of sentences,
# cf. github.com/pytorch/fairseq/commit/1684e166e3da03f5b600dbb7855cb98ddfcd0805
sep_token_extra=args.model_type in MODELS_WITH_EXTRA_SEP_TOKEN,
# PAD on the left for XLNet
pad_on_left=bool(args.model_type in ["xlnet"]),
pad_token=tokenizer.pad_token_id,
pad_token_segment_id=4 if args.model_type in ["xlnet"] else 0,
pad_token_label_id=model.pad_token_label_id,
process_count=process_count,
silent=args.silent,
use_multiprocessing=args.use_multiprocessing,
chunksize=args.multiprocessing_chunksize,
mode=mode,
use_multiprocessing_for_evaluation=args.use_multiprocessing_for_evaluation,
)
if not no_cache:
torch.save(features, cached_features_file)
all_input_ids = torch.tensor(
[f.input_ids for f in features], dtype=torch.long
)
all_input_mask = torch.tensor(
[f.input_mask for f in features], dtype=torch.long
)
all_segment_ids = torch.tensor(
[f.segment_ids for f in features], dtype=torch.long
)
all_label_ids = torch.tensor(
[f.label_ids for f in features], dtype=torch.long
)
if model.args.model_type == "layoutlm":
all_bboxes = torch.tensor(
[f.bboxes for f in features], dtype=torch.long
)
if model.args.onnx:
out_return = all_label_ids
if model.args.model_type == "layoutlm":
dataset = TensorDataset(
all_input_ids,
all_input_mask,
all_segment_ids,
all_label_ids,
all_bboxes,
)
else:
dataset = TensorDataset(
all_input_ids, all_input_mask, all_segment_ids, all_label_ids
)
out_return = dataset
features = convert_examples_to_features(
examples,
model.args.labels_list,
model.args.max_seq_length,
model.tokenizer,
# XLNet has a CLS token at the end
cls_token_at_end=bool(args.model_type in ["xlnet"]),
cls_token=tokenizer.cls_token,
cls_token_segment_id=2 if args.model_type in ["xlnet"] else 0,
sep_token=tokenizer.sep_token,
# RoBERTa uses an extra separator b/w pairs of sentences,
# cf. github.com/pytorch/fairseq/commit/1684e166e3da03f5b600dbb7855cb98ddfcd0805
sep_token_extra=args.model_type in MODELS_WITH_EXTRA_SEP_TOKEN,
# PAD on the left for XLNet
pad_on_left=bool(args.model_type in ["xlnet"]),
pad_token=tokenizer.pad_token_id,
pad_token_segment_id=4 if args.model_type in ["xlnet"] else 0,
pad_token_label_id=model.pad_token_label_id,
process_count=process_count,
silent=args.silent,
use_multiprocessing=args.use_multiprocessing,
chunksize=args.multiprocessing_chunksize,
mode=mode,
use_multiprocessing_for_evaluation=args.use_multiprocessing_for_evaluation,
)
all_input_ids[47]
all_input_ids[10]
lines[47]
lines[10]
lines[:50]
out_return.tensors[3] == all_label_ids
###Output
_____no_output_____ |
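###Markdown
Sketch (my own assumption, not copied from the SimpleTransformers source): wrap the TensorDataset built above in a DataLoader, the way a training loop would consume it. DataLoader and RandomSampler are already imported at the top of this notebook.
###Code
# out_return is the TensorDataset built above (input_ids, input_mask, segment_ids, label_ids)
train_sampler = RandomSampler(out_return)
train_dataloader = DataLoader(out_return, sampler=train_sampler, batch_size=args.train_batch_size)
batch = next(iter(train_dataloader))
# each batch is a list of 4 tensors with shape (batch_size, max_seq_length)
print([t.shape for t in batch])
###Output
_____no_output_____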
Exp_environment/Exp_small_no.ipynb | ###Markdown
MultiAgentEnvironment: simple Map
###Code
#%autosave 30
import glob
import sys
from operator import itemgetter
import numpy as np
import random
import time
import math
import networkx as nx
from networkx.algorithms.shortest_paths.generic import shortest_path_length
import ray
from ray import tune
#from ray.tune.logger import pretty_print
from ray.rllib.policy.policy import Policy, PolicySpec
from ray.rllib.models.tf.tf_modelv2 import TFModelV2
from ray.rllib.models.tf.fcnet import FullyConnectedNetwork
#from ray.rllib.agents.callbacks import DefaultCallbacks
from ray.rllib.env import MultiAgentEnv
from ray.rllib.utils.framework import try_import_tf
from gym.spaces import Discrete, Box, Tuple, MultiDiscrete, Dict, MultiBinary
from ray.rllib.utils.spaces.repeated import Repeated
import matplotlib.pyplot as plt
tf1, tf, tfv = try_import_tf() # prefered TF import for Ray.io
from threading import Thread, Event
print("Imports successful")
######## Utility Classes ########
class BoolTimer(Thread):
"""A boolean value that toggles after a specified number of seconds:
Example:
bt = BoolTimer(30.0, False)
bt.start()
Used in the Centrality Baseline to limit the computation time.
"""
def __init__(self, interval, initial_state=True):
Thread.__init__(self)
self.interval = interval
self.state = initial_state
self.finished = Event()
def __bool__(self):
return bool(self.state)
def run(self):
self.finished.wait(self.interval)
if not self.finished.is_set():
self.state = not self.state
self.finished.set()
######## Static helper functions ########
def shuffle_actions(action_dict, check_space = False):
"""
Used to shuffle the action dict to ensure that agents with a lower id are not always preferred
when picking up parcels over other agents that chose the same action.
For debugging: Can also be used to check if all actions are in the action_space.
"""
keys = list(action_dict)
random.shuffle(keys)
shuffled = {}
for agent in keys:
if check_space: #assert actions are in action space -> disable for later trainig, extra check while development
assert self.action_space.contains(action_dict[agent]),f"Action {action_dict[agent]} taken by agent {agent} not in action space"
shuffled[agent] = action_dict[agent]
return shuffled
def load_graph(data):
"""Loads topology (map) from json file into a networkX Graph and returns the graph"""
nodes = data["nodes"]
edges = data["edges"]
g = nx.DiGraph() # directed graph
g.add_nodes_from(nodes)
g.edges(data=True)
for node in nodes: # add attribute values
g.nodes[node]["id"] = nodes[node]["id"]
g.nodes[node]["type"] = nodes[node]["type"]
for edge in edges: # add edges with attributes
f = edges[edge]["from"]
t = edges[edge]["to"]
weight_road, weight_air, _type = sys.maxsize, sys.maxsize, None
if edges[edge]["road"] >= 0:
weight_road = edges[edge]["road"]
_type = 'road'
if edges[edge]["air"] >= 0:
weight_air = edges[edge]["air"]
_type = 'both' if _type == 'road' else 'air'
weight = min(weight_road, weight_air) # needed for optimality baseline
g.add_edge(f, t, type=_type, road= weight_road, air=weight_air, weight=weight)
return g
###Output
_____no_output_____
###Markdown
Environment Code
###Code
# Map definition
topology = {
'nodes': {
0: {'id': 1, 'type': 'parking'},
1: {'id': 1, 'type': 'parking'},
2: {'id': 2, 'type': 'parking'},
3: {'id': 3, 'type': 'parking'},
4: {'id': 4, 'type': 'parking'},
5: {'id': 5, 'type': 'air'},
6: {'id': 6, 'type': 'parking'},
7: {'id': 7, 'type': 'air'},
8: {'id': 8, 'type': 'parking'},
9: {'id': 9, 'type': 'parking'},
10: {'id': 10, 'type': 'parking'},
11: {'id': 11, 'type': 'air'}
},
'edges': {
## Outer Ring --> Road much (!) faster than drones
"e01":{"from": 0,"to": 1,"road": 10, "air": 40},
"e02":{"from": 1,"to": 0,"road": 10, "air": 40},
"e03":{"from": 1,"to": 4,"road": 8, "air": 6},
"e04":{"from": 4,"to": 1,"road": 8, "air": 6},
"e03":{"from": 4,"to": 2,"road": 8, "air": 6},
"e04":{"from": 2,"to": 4,"road": 8, "air": 6},
"e05":{"from": 2,"to": 3,"road": 10, "air": 30},
"e06":{"from": 3,"to": 2,"road": 10, "air": 30},
"e07":{"from": 3,"to": 0,"road": 10, "air": 40},
"e08":{"from": 0,"to": 3,"road": 8, "air": 35},
        ## Inner nodes --> driving in is slower than driving out
"e11":{"from": 4,"to": 6,"road": 2, "air": 5},
"e12":{"from": 6,"to": 4,"road": 2, "air": 5},
"e13":{"from": 0,"to": 6,"road": 6, "air": -1},
"e14":{"from": 6,"to": 0,"road": 6, "air": -1},
"e15":{"from": 3,"to": 6,"road": 6, "air": -1},
"e16":{"from": 6,"to": 3,"road": 6, "air": -1},
## Outliers --> Distance about equal if both exist!
"e17":{"from": 4,"to": 5,"road": 2, "air": 2},
"e18":{"from": 5,"to": 4,"road": 2, "air": 2},
"e19":{"from": 6,"to": 7,"road": -1, "air": 2},
"e20":{"from": 7,"to": 6,"road": -1, "air": 2},
"e21":{"from": 3,"to": 11,"road": -1, "air": 3},
"e22":{"from": 11,"to": 3,"road": -1, "air": 3},
        ## Upper left square --> better to operate with the drones here, if they are around!!
"e23":{"from": 0,"to": 8,"road": -1, "air": 4},
"e24":{"from": 8,"to": 0,"road": -1, "air": 4},
"e25":{"from": 8,"to": 9,"road": 3, "air": 3},
"e26":{"from": 9,"to": 8,"road": 3, "air": 3},
"e27":{"from": 9,"to": 10,"road": 3, "air": 3},
"e28":{"from": 10,"to": 9,"road": 3, "air": 3},
"e29":{"from": 0,"to": 10,"road": 3, "air": 3},
"e30":{"from": 10,"to": 0,"road": 3, "air": 3}
}
}
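# Optional sanity check (my addition, not required by the environment): build the graph
# from the topology above with load_graph() and inspect it before defining the class.
_g_check = load_graph(topology)
print(f"Map: {_g_check.number_of_nodes()} nodes, {_g_check.number_of_edges()} edges")
print("Example shortest path 8 -> 2 (combined 'weight'):",
      nx.shortest_path(_g_check, source=8, target=2, weight="weight"))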
class Map_Environment(MultiAgentEnv):
def __init__(self, env_config: dict = {}):
# ensure config file includes all necessary settings
assert 'NUMBER_STEPS_PER_EPISODE' in env_config
assert 'NUMBER_OF_DRONES' in env_config
assert 'NUMBER_OF_CARS' in env_config
assert 'INIT_NUMBER_OF_PARCELS' in env_config
assert 'TOPOLOGY' in env_config
assert 'MAX_NUMBER_OF_PARCELS' in env_config
assert 'THRESHOLD_ADD_NEW_PARCEL' in env_config
assert 'BASELINE_FLAG' in env_config
assert 'BASELINE_TIME_CONSTRAINT' in env_config
assert 'BASELINE_OPT_CONSTANT' in env_config
assert 'CHARGING_STATION_NODES' in env_config
assert 'REWARDS' in env_config
self.graph = load_graph(topology)
# Map config
self.NUMBER_OF_DRONES = env_config['NUMBER_OF_DRONES']
self.NUMBER_OF_CARS = env_config['NUMBER_OF_CARS']
self.NUMBER_OF_EDGES = self.graph.number_of_edges()
self.NUMBER_OF_NODES = self.graph.number_of_nodes()
self.CHARGING_STATION_NODES = env_config['CHARGING_STATION_NODES']
self.MAX_BATTERY_POWER = env_config['MAX_BATTERY_POWER']
# Simulation config
self.NUMBER_STEPS_PER_EPISODE = env_config['NUMBER_STEPS_PER_EPISODE']
self.INIT_NUMBER_OF_PARCELS = env_config['INIT_NUMBER_OF_PARCELS']
self.RANDOM_SEED = env_config.get('RANDOM_SEED', None)
self.MAX_NUMBER_OF_PARCELS = env_config['MAX_NUMBER_OF_PARCELS']
self.THRESHOLD_ADD_NEW_PARCEL = env_config['THRESHOLD_ADD_NEW_PARCEL']
self.BASELINE_FLAG = env_config['BASELINE_FLAG']
self.BASELINE_TIME_CONSTRAINT = env_config['BASELINE_TIME_CONSTRAINT']
self.BASELINE_OPT_CONSTANT = env_config['BASELINE_OPT_CONSTANT']
self.DEBUG_LOG = env_config.get('DEBUG_LOGS', False)
# Some Sanity Checks on the settings
if self.DEBUG_LOG: assert self.MAX_NUMBER_OF_PARCELS >= self.INIT_NUMBER_OF_PARCELS, "Number of initial parcels exceeds max parcel limit"
#Reward constants
self.STEP_PENALTY = env_config['REWARDS']['STEP_PENALTY']
self.PARCEL_DELIVERED = env_config['REWARDS']['PARCEL_DELIVERED']
# compute other rewards
self.BATTERY_DIED = self.STEP_PENALTY * self.NUMBER_STEPS_PER_EPISODE
self.BATTERY_DIED_WITH_PARCEL = self.BATTERY_DIED * 2
#self.DELIVERY_CONTRIBUTION: depends on active agents --> computed when given in prepare_global_reward()
#self.ALL_DELIVERED_CONTRIB: depends on active agents --> computed when given in prepare_global_reward(episode_success=True)
# Computed constants
self.NUMBER_OF_AGENTS = self.NUMBER_OF_DRONES + self.NUMBER_OF_CARS
self.PARCEL_STATE_DELIVERED = self.NUMBER_OF_AGENTS + self.NUMBER_OF_NODES
self.NUMBER_OF_ACTIONS = 1 + self.NUMBER_OF_NODES + 1 + self.MAX_NUMBER_OF_PARCELS
self.ACTION_DROPOFF = 1 + self.NUMBER_OF_NODES # First Action NOOP is 0
# seed RNGs
self.seed(self.RANDOM_SEED)
self.state = None
self.current_step = None
self.blocked_agents = None
self.parcels_delivered = None
self.done_agents = None
self.all_done = None
self.allowed_actions = None
# baseline related
self.baseline_missions = None
self.agents_base = None
self.o_employed = None
# metrics for the evaluation
self.parcel_delivered_steps = None # --> {p1: 20, p2: 240, p:140}
self.parcel_added_steps = None # --> {p1: 0, p2: 0, p: 50}
self.agents_crashed = None # --> {c_2: 120, d_0: 242}
self.metrics = None
self.agents = [*["d_" + str(i) for i in range(self.NUMBER_OF_DRONES)],*["c_" + str(i) for i in range(self.NUMBER_OF_DRONES, self.NUMBER_OF_DRONES + self.NUMBER_OF_CARS)]]
# Define observation and action spaces for individual agents
self.action_space = Discrete(self.NUMBER_OF_ACTIONS)
#---- Repeated Obs Space: Represents a parcel with (id, location, destination) --> # parcel_id starts at 1
parcel_space = Dict({'id': Discrete(self.MAX_NUMBER_OF_PARCELS+1),
'location': Discrete(self.NUMBER_OF_NODES + self.NUMBER_OF_AGENTS + 1),
'destination': Discrete(self.NUMBER_OF_NODES)
})
self.observation_space = Dict({'obs': Dict({'state':
Dict({ 'position': Discrete(self.NUMBER_OF_NODES),
'battery': Discrete(self.MAX_BATTERY_POWER + 1), #[0-100]
'has_parcel': Discrete(self.MAX_NUMBER_OF_PARCELS + 1),
'current_step': Discrete(self.NUMBER_STEPS_PER_EPISODE + 1)}),
'parcels': Repeated(parcel_space, max_len=self.MAX_NUMBER_OF_PARCELS)
}),
'allowed_actions': MultiBinary(self.NUMBER_OF_ACTIONS)
})
#TODO: why is reset() not called by env?
self.reset()
def step(self, action_dict):
"""conduct the state transitions caused by actions in action_dict
:returns:
- observation_dict: observations for agents that need to act in the next round
- rewards_dict: rewards for agents following their chosen actions
- done_dict: indicates end of episode if max_steps reached or all parcels delivered
- info_dict: pass data to custom logger
"""
if self.DEBUG_LOG: print(f"Debug log flag set to {self.DEBUG_LOG}")
# ensure no disadvantage for agents with higher IDs if action conflicts with that taken by other agent
action_dict = shuffle_actions(action_dict)
self.current_step += 1
# grant step penalty reward
agent_rewards = {agent: self.STEP_PENALTY for agent in self.agents}
# setting an agent done twice might cause crash when used with tune... -> https://github.com/ray-project/ray/issues/10761
dones = {}
self.metrics['step'] = self.current_step
# dynamically add parcel
if random.random() <= self.THRESHOLD_ADD_NEW_PARCEL and len(self.state['parcels']) < self.MAX_NUMBER_OF_PARCELS:
p_id, parcel = self.generate_parcel()
assert p_id not in self.state['parcels'], "Duplicate parcel ID generated"
self.state['parcels'][p_id] = parcel
if self.BASELINE_FLAG:
self.metrics["optimal"] = self.compute_optimality_baseline(p_id, extra_charge=self.BASELINE_OPT_CONSTANT)
for agent in self.agents:
self.allowed_actions[agent][self.ACTION_DROPOFF + p_id] = np.array([1]).astype(bool)
if self.BASELINE_FLAG:
old_actions = action_dict
action_dict = {}
# Replace actions with actions recommended by the central baseline
for agent, action in old_actions.items():
if len(self.baseline_missions[agent]) > 0:
new_action = self.baseline_missions[agent][0]
if type(new_action) is tuple: # dropoff action with minimal time
if self.current_step >= new_action[1]:
new_action = new_action[0]
self.baseline_missions[agent].pop(0)
else: #agent has to wait for previous subroute agent
new_action = 0
else: # move or pickup or charge
self.baseline_missions[agent].pop(0)
action_dict[agent] = new_action
else: # agent has no baseline mission -> Noop
action_dict[agent] = 0
# carry out State Transition
# handel NOP actions: -> action == 0
noop_agents = {agent: action for agent, action in action_dict.items() if action == 0}
effectual_agents_items = {agent: action for agent, action in action_dict.items() if action > 0}.items()
# now: transaction between agents modelled as pickup of just offloaded (=dropped) parcel --> handle dropoff first
moving_agents = {agent: action for agent, action in effectual_agents_items if 0 < action and action <= self.NUMBER_OF_NODES}
dropoff_agents = {agent: action for agent, action in effectual_agents_items if action == self.ACTION_DROPOFF}
pickup_agents = {agent: action for agent, action in effectual_agents_items if action > self.ACTION_DROPOFF}
# handle noop / charge decisions:
for agent, action in noop_agents.items():
# check if recharge is possible
current_pos = self.state['agents'][agent]['position']
if current_pos in self.CHARGING_STATION_NODES:
self.state['agents'][agent]['battery'] = self.MAX_BATTERY_POWER
# handle Movement actions:
for agent, action in moving_agents.items():
# get Current agent position from state
self.state['agents'][agent]['battery'] += -1
current_pos = self.state['agents'][agent]['position']
# networkX: use node instead of edge:
destination = action - 1
if self.graph.has_edge(current_pos, destination):
# Agent chose existing edge! -> check if type is suitable
agent_type = 'road' if agent[:1] == 'c' else 'air'
if self.graph[current_pos][destination]["type"] in [agent_type, 'both']:
# Edge has correct type
self.state['agents'][agent]['position'] = destination
self.state['agents'][agent]['battery'] += -(self.graph[current_pos][destination][agent_type] +1)
if self.state['agents'][agent]['battery'] < 0: # ensure negative battery value does not break obs_space
#Battery below 0 --> reset to 0 (stay in obs space)
self.state['agents'][agent]['battery'] = 0
self.blocked_agents[agent] = self.graph[current_pos][destination][agent_type]
self.update_allowed_actions_nodes(agent)
# handle Dropoff Decision: -> action == self.NUMBER_OF_NODES + 2
for agent, action in dropoff_agents.items():
self.state['agents'][agent]['battery'] += -1
if self.state['agents'][agent]['has_parcel'] > 0: # agent has parcel
parcel_id = self.state['agents'][agent]['has_parcel']
self.state['agents'][agent]['has_parcel'] = 0
self.state['parcels'][parcel_id][0] = self.state['agents'][agent]['position']
if self.state['parcels'][parcel_id][0] == self.state['parcels'][parcel_id][1]:
# Delivery successful
agent_rewards[agent] += self.PARCEL_DELIVERED # local reward
# global contribution rewards
active_agents, reward = self.prepare_global_reward()
for a_id in active_agents: agent_rewards[a_id] += reward
self.state['parcels'][parcel_id][0] = self.PARCEL_STATE_DELIVERED
self.parcels_delivered[int(parcel_id) -1] = True # Parcel_ids start at 1
self.metrics['delivered'].update({"p_" + str(parcel_id): self.current_step})
self.update_allowed_actions_parcels(agent)
# handle Pickup Decision:
for agent, action in pickup_agents.items():
self.state['agents'][agent]['battery'] += -1
# agent has free capacity
if self.state['agents'][agent]['has_parcel'] == 0: #free parcel capacity
# convert action_id to parcel_id
parcel_id = action - self.ACTION_DROPOFF
if self.DEBUG_LOG: assert parcel_id in self.state['parcels'] # parcel {parcel_id} already in ENV ??
elif self.state['parcels'][parcel_id][0] == self.state['agents'][agent]['position']:
# Successful pickup operation
self.state['parcels'][parcel_id][0] = self.NUMBER_OF_NODES + int(agent[2:])
self.state['agents'][agent]['has_parcel'] = int(parcel_id)
self.update_allowed_actions_parcels(agent)
# unblock agents for next round
self.blocked_agents = {agent: remaining_steps -1 for agent, remaining_steps in self.blocked_agents.items() if remaining_steps > 1}
# handle dones - out of battery or max_steps or goal reached
for agent in action_dict.keys():
if agent not in self.done_agents and self.state['agents'][agent]['battery'] <= 0:
agent_rewards[agent] = self.BATTERY_DIED_WITH_PARCEL if self.state['agents'][agent]['has_parcel'] != 0 else self.BATTERY_DIED
dones[agent] = True
self.done_agents.append(agent)
self.metrics['crashed'].update({agent: self.current_step})
if len(self.done_agents) == self.NUMBER_OF_AGENTS:
# all agents dead
self.all_done = True
# check if episode terminated because of goal reached or all agents crashed -> avoid setting done twice
if self.current_step >= self.NUMBER_STEPS_PER_EPISODE or (all(self.parcels_delivered) and len(self.parcels_delivered) == self.MAX_NUMBER_OF_PARCELS):
# check if episode success:
if self.current_step < self.NUMBER_STEPS_PER_EPISODE:
# grant global reward for all parcels delivered
active_agents, reward = self.prepare_global_reward(_episode_success=True)
for a_id in active_agents: agent_rewards[a_id] += reward
self.all_done = True
dones['__all__'] = self.all_done
parcel_obs = self.get_parcel_obs()
# obs \ rewards \ dones\ info
return {agent: { 'obs': {'state': {'position': self.state['agents'][agent]['position'], 'battery': self.state['agents'][agent]['battery'],
'has_parcel': self.state['agents'][agent]['has_parcel'],'current_step': self.current_step},
'parcels': parcel_obs},
'allowed_actions': self.allowed_actions[agent]} for agent in self.agents if agent not in self.blocked_agents and agent not in self.done_agents}, \
{ agent: agent_rewards[agent] for agent in self.agents}, \
dones, \
{}
def seed(self, seed=None):
tf.random.set_seed(seed)
np.random.seed(seed)
random.seed(seed)
def reset(self):
"""resets variables; returns dict with observations, keys are agent_ids"""
self.current_step = 0
self.blocked_agents = {}
self.parcels_delivered = [False for _ in range(self.MAX_NUMBER_OF_PARCELS)]
self.done_agents = []
self.all_done = False
self.metrics = { "step": self.current_step,
"delivered": {},
"crashed": {},
"added": {},
"optimal": None
}
#Baseline
self.baseline_missions = {agent: [] for agent in self.agents}
self.o_employed = [0 for _ in range(self.NUMBER_OF_AGENTS)]
self.agents_base = None # env.agents in the pseudocode
self.allowed_actions = { agent: np.array([1 for act in range(self.NUMBER_OF_ACTIONS)]) for agent in self.agents}
#Reset State
self.state = {'agents': {},
'parcels': {}}
#generate initial parcels
for _ in range(self.INIT_NUMBER_OF_PARCELS):
p_id, parcel = self.generate_parcel()
self.state['parcels'][p_id] = parcel
parcel_obs = self.get_parcel_obs()
# init agents
self.state['agents'] = {agent: {'position': self._random_feasible_agent_position(agent),
'battery': self.MAX_BATTERY_POWER, 'has_parcel': 0} for agent in self.agents}
if self.BASELINE_FLAG:
for parcel in self.state['parcels']:
self.compute_central_delivery(parcel, debug_log=False)
# TODO really return something here??
self.metrics['optimal'] = self.compute_optimality_baseline(parcel, extra_charge=self.BASELINE_OPT_CONSTANT, debug_log=False)
# compute allowed actions per agent --> move to function
for agent in self.agents:
self.update_allowed_actions_nodes(agent)
self.update_allowed_actions_parcels(agent)
agent_obs = {agent: {'obs': {'state': {'position': state['position'], 'battery': state['battery'],
'has_parcel': state['has_parcel'],'current_step': self.current_step},
'parcels': parcel_obs
},
'allowed_actions': self.allowed_actions[agent]
} for agent, state in self.state['agents'].items()
}
return {**agent_obs}
def _random_feasible_agent_position(self, agent_id):
"""Needed to avoid car agents being initialized at nodes only reachable by drones
and thus trapped from the beginning. Ensures that car agents start at a node of type 'road' or 'parking'.
"""
position = random.randrange(self.NUMBER_OF_NODES)
if agent_id[0] == 'c': # agent is car
while self.graph.nodes[position]['type'] == 'air': # position not reachable by car
position = random.randrange(self.NUMBER_OF_NODES)
return position
def update_allowed_actions_nodes(self, agent):
new_pos = self.state['agents'][agent]['position']
next_steps = list(self.graph.neighbors(new_pos))
agent_type = 'air' if agent[0]=='d' else 'road'
allowed_nodes = np.zeros(self.NUMBER_OF_NODES)
for neighbor in next_steps:
if self.graph[new_pos][neighbor]['type'] in [agent_type, 'both']:
allowed_nodes[neighbor] = 1
self.allowed_actions[agent][1:self.NUMBER_OF_NODES+1] = np.array(allowed_nodes).astype(bool)
def update_allowed_actions_parcels(self, agent):
""" Allow only the Dropoff or Pickup actions, depending on the has_parcel value of the agent.
Pickup is not concerned with the parcel actually being at the agents current location, only with free capacity
and the parcel already being added to the ENV"""
num_parcels = len(self.state['parcels'])
allowed_parcels = np.zeros(self.MAX_NUMBER_OF_PARCELS)
dropoff = 1
if self.state['agents'][agent]['has_parcel'] == 0:
dropoff = 0
allowed_parcels = np.concatenate([np.ones(num_parcels), np.zeros(self.MAX_NUMBER_OF_PARCELS - num_parcels)])
self.allowed_actions[agent][self.NUMBER_OF_NODES+1:] = np.array([dropoff, *allowed_parcels]).astype(bool)
def get_parcel_obs(self):
parcel_obs = [{'id':pid, 'location': parcel[0], 'destination':parcel[1]} for (pid, parcel) in self.state['parcels'].items()]
return parcel_obs
def generate_parcel(self):
"""generate new parcel id and new parcel with random nodes for location and destination.
        p_ids (int) start at 1. The two nodes are sampled without replacement, so a parcel never spawns at its own destination."""
p_id = len(self.state['parcels']) + 1
parcel = random.sample(range(self.NUMBER_OF_NODES), 2) # => initial location != destination
self.metrics['added'].update({p_id: self.current_step})
return p_id, parcel
def prepare_global_reward(self, _episode_success=False):
""" computes a global reward for all active agents still in the environment.
        If _episode_success is set to True, all parcels have been delivered and the ALL_DELIVERED reward is granted.
:Returns: list of active agents and the reward value
"""
agents_alive = list(set(self.agents).difference(set(self.done_agents)))
if self.DEBUG_LOG: assert len(agents_alive) > 0
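        # On full success the delivery reward is scaled by the fraction of episode steps remaining;
        # otherwise a single delivery reward is split evenly among the agents still alive.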
reward = self.PARCEL_DELIVERED * (self.NUMBER_STEPS_PER_EPISODE - self.current_step) / self.NUMBER_STEPS_PER_EPISODE if _episode_success else self.PARCEL_DELIVERED / len(agents_alive)
return agents_alive, reward
#------ BASELINE related methods ------###
def compute_optimality_baseline(self, parcel_id, extra_charge=2.5, debug_log=False):
"""Used in the optimality baseline
Input: parcel_id, (extra_charge)
Output: new total delivery rounds needed for all parcels
"""
parcel = self.state['parcels'][parcel_id]
        path_time = 2 + nx.shortest_path_length(self.graph, parcel[0], parcel[1], weight='weight')  # +2 steps for pickup and dropoff
_time = math.ceil(path_time * extra_charge) # round to next higher integer
min_index = self.o_employed.index(min(self.o_employed))
self.o_employed[min_index] += _time
return max(self.o_employed)
def compute_central_delivery(self, p_id, debug_log = False):
"""Used in the central baseline, iteratively tries to find a good delivery route
with the available agents in the time specified in BASELINE_TIME_CONSTRAINT
Input: parcel_id
Output: Dict: {agent_id: [actions], ...} --> update that dict! (merge in this function with prev actions!)
"""
if self.agents_base is None:
self.agents_base = {a_id: (a['position'], 0) for (a_id, a) in self.state["agents"].items()} # last instructed pos + its step count
min_time = None
new_missions = {} # key: agent, value: [actions]
source = self.state["parcels"][p_id][0]
target = self.state["parcels"][p_id][1]
shortest_paths_generator = nx.shortest_simple_paths(self.graph, source, target, weight='weight')
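        # nx.shortest_simple_paths lazily yields simple paths ordered from shortest to longest;
        # the loop below pulls one candidate at a time while the BASELINE_TIME_CONSTRAINT timer runs.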
running = BoolTimer(self.BASELINE_TIME_CONSTRAINT)
running.start()
while running:
# Default --> assign full route to nearest drone
if min_time is None:
air_route = nx.shortest_path(self.graph, source= source, target= target, weight="air")
air_route_time = nx.shortest_path_length(self.graph, source= source, target= target, weight="air")
air_route.pop(0) # remove source node from path
# Assign most suitable Drone
best_drone = None
for (a_id, a_tp) in self.agents_base.items():
if a_id[0] != 'd': # filter for correct agent type
continue
journey_time = nx.shortest_path_length(self.graph, source=a_tp[0], target=source, weight="air") + a_tp[1]
if min_time is None or journey_time < min_time:
min_time = journey_time
best_drone = (a_id, a_tp)
# construct path for agent
drone_route = nx.shortest_path(self.graph, source= best_drone[1][0], target= source, weight="air")
drone_route_actions = [x+1 for x in drone_route[1:]] + [(self.ACTION_DROPOFF + p_id, 0)] + [x+1 for x in air_route[1:]] + [self.ACTION_DROPOFF] # increment node_ids by one to get corresponding action
min_time += air_route_time + 2 # add 2 steps for pick & drop
self._add_charging_stops_to_route(drone_route_actions, debug_log=debug_log)
new_missions[best_drone[0]] = (drone_route_actions, min_time)
else: # try to improve the existing base mission
try:
shortest_route = next(shortest_paths_generator)
except StopIteration:
break # all existing shortest paths already tried
subroutes = self._path_to_subroutes(shortest_route, debug_log=debug_log)
duration, min_agents = self._find_best_agents(subroutes, min_time, debug_log=debug_log)
if duration < min_time:
# faster delivery route found!
assert duration < min_time, "Central Baseline prefered a longer route..."
# update min_time
min_time = duration
new_missions = self._build_missions(min_agents, subroutes, p_id, debug_log=debug_log)
#---- end while = Timer
# now save the best mission found in the ENV
for agent in new_missions.keys():
# retrieve target node from mission, depends on charging stops and case no move necessary
target = None
if isinstance(new_missions[agent][0][-2], int):
target = new_missions[agent][0][-2]
if target == 0: target = new_missions[agent][0][-3] # charging action was added before dropoff
target -= 1 # action-1 = node_id
else: # handle case no move necessary -> pick action before dropoff
target = self.agents_base[agent][0]
self.agents_base[agent] = (target, self.agents_base[agent][1] + new_missions[agent][1])
self.baseline_missions[agent].extend(new_missions[agent][0])
return new_missions
def _find_best_agents(self, subroutes, min_time, debug_log=False):
""" For use in centrality_baseline. Finds best available agents for traversing a set of subroutes
and returns these with the total duration for doing so.
        Input: subroutes = [(edge_type, [nodes]), ...]
"""
min_agents = {}
        temp_agents_base = {k: v for k, v in self.agents_base.items()}  # copy of the planning state so tentative assignments do not modify self.agents_base
for i,r in enumerate(subroutes):
# init some helper vars
a_type = "d" if r[0] == 'air' else 'c'
min_time_sub = None # best time (min) for this subroute (closest agent)
best_agent_sub = None
# iterate over agents of correct type! --> later: Busy / unbusy
for (a_id, a_tp) in temp_agents_base.items():
#reminder: a_tp is tuple of latest future position (node, timestep)
# filter for correct agent type - even if type is 'both' one agent can still take only its edge type
weight_agent = r[0]
if r[0] == 'both':
weight_agent = 'road' if a_id[0] == 'c' else 'air'
else: # wrong agent type
if a_id[0] != a_type: # todo replace with parameter variable in function!
continue
journey_time = nx.shortest_path_length(self.graph, source=a_tp[0], target=r[1][0], weight=weight_agent) + a_tp[1] # earliest time agent can be there
if min_time_sub is None or journey_time < min_time_sub:
min_time_sub = journey_time
best_agent_sub = (a_id, a_tp) # a_id, a_tp = (latest location, timestep)
# closest available agent found
            best_agent_weight = 'road' if best_agent_sub[0][0] == 'c' else 'air'  # best_agent_sub[0] is the agent id, e.g. 'c_2'
duration_sub = min_time_sub + nx.shortest_path_length(self.graph, source=r[1][0], target=r[1][-1], weight=best_agent_weight) + 1
# update agent state in temporary planning
temp_agents_base[best_agent_sub[0]] = (r[1][-1], duration_sub)
# agent_tuple, duration_subroutes_until_then
min_agents[i] = (best_agent_sub, duration_sub)
if debug_log: assert duration_sub < sys.maxsize, "Non existent edge taken somewhere..."
# check if current subroute already longer than the min one
if duration_sub > min_time:
break # already worse, check next simple path
return duration_sub, min_agents
def _build_missions(self, min_agents, subroutes, parcel_id, debug_log = False):
"""For use in centrality_baseline. Computes list of actions for delivery of parcel parcel_id
and necessary duration for execution from list of agents, subroutes"""
new_mission = {}
for i, s in enumerate(subroutes):
best_agent_pos = min_agents[i][0][1][0]
#earliest time to start that action
time_pickup = min_agents[i-1][1] if i > 0 else 0 # First subroute pickup as soon as possible
# construct the actual delivery path to pickup
delivery_route = nx.shortest_path(self.graph, source=best_agent_pos, target=s[1][0], weight=s[0])
delivery_route_actions = [x+1 for x in delivery_route[1:]] + [(self.ACTION_DROPOFF + parcel_id, time_pickup)] + [x+1 for x in s[1][1:]] + [self.ACTION_DROPOFF]
if debug_log: assert min_agents[i][0] not in new_mission #I see no preferable case where agent picks-up 1 parcel 2 times --> holding always better than following
self._add_charging_stops_to_route(delivery_route_actions, debug_log=debug_log)
new_mission[min_agents[i][0][0]] = (delivery_route_actions, min_agents[i][1])
return new_mission
def _add_charging_stops_to_route(self, route_actions, debug_log=False):
"""For use in centrality_baseline. Iterates through a list of actions and inserts a charging action
after every move action to a node with a charging station.
        Tuples representing dropoff actions with their earliest execution time are updated to account for the delays introduced by the inserted charging stops."""
delay = 0
for i, n in enumerate(route_actions):
if type(n) is tuple:
if delay > 0: route_actions[i] = (n[0], n[1] + delay)
else:
if n-1 in self.CHARGING_STATION_NODES:
delay += 1
route_actions.insert(i+1, 0)
def _path_to_subroutes(self, path, debug_log= False):
"""For use in centrality_baseline. Takes path in the graph as input and returns list of subroutes
split at changes of the edge types with the type"""
# get subroutes by their edge_types
e_type_prev = None
e_type_changes = [] # save indices of source nodes before new edge type
subroutes = []
_subroute = [path[0]]
if len(path) > 1:
e_type_prev = self.graph.edges[path[0], path[1]]['type']
for i, node in enumerate(path[1:-1], start=1):
e_type_next = self.graph.edges[node, path[i+1]]['type']
_subroute.append(node)
if e_type_next != e_type_prev:
subroutes.append((e_type_prev, _subroute))
_subroute = [node]
e_type_prev = e_type_next
_subroute.append(path[-1])
subroutes.append((e_type_prev, _subroute)) # don't forget last subroute
return subroutes
###Output
_____no_output_____
###Markdown
Agent Model and Experiment Evaluation Code
###Code
from gym.spaces import Discrete, Box, Tuple, MultiDiscrete, Dict, MultiBinary
#from ray.rllib.utils.spaces.space_utils import flatten_space
#from ray.rllib.models.preprocessors import DictFlatteningPreprocessor
# Parametric-action agent model --> apply Action Masking!
class ParametricAgentModel(TFModelV2):
def __init__(self, obs_space, action_space, num_outputs, model_config, name, *args, **kwargs):
super(ParametricAgentModel, self).__init__(obs_space, action_space, num_outputs, model_config, name, *args, **kwargs)
assert isinstance(action_space, Discrete), f'action_space is a {type(action_space)}, but should be Discrete!'
        # Adjust true_obs_shape to the number of agents/parcels/nodes; if it does not match, copy the flattened shape reported in the thrown exception.
true_obs_shape = (1750, )
action_embed_size = action_space.n
self.action_embed_model = FullyConnectedNetwork(
            Box(0, 1, shape=true_obs_shape),  # TODO: adjust the 1 here as well? Does this dummy Box have to match the obs?
action_space,
action_embed_size,
model_config,
name + '_action_embedding')
def forward(self, input_dict, state, seq_lens):
action_mask = input_dict['obs']['allowed_actions']
action_embedding, _ = self.action_embed_model.forward({'obs_flat': input_dict["obs_flat"]}, state, seq_lens)
intent_vector = tf.expand_dims(action_embedding, 1)
action_logits = tf.reduce_sum(intent_vector, axis=1)
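        # Mask out unavailable actions: log(1) = 0 leaves allowed logits unchanged, while
        # log(0) = -inf (clamped to tf.float32.min to stay finite) suppresses disallowed ones.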
inf_mask = tf.maximum(tf.math.log(action_mask), tf.float32.min)
return action_logits + inf_mask, state
def value_function(self):
return self.action_embed_model.value_function()
## Proposed way to train / evaluate MARL policy from github Issues: --> https://github.com/ray-project/ray/issues/9123 and https://github.com/ray-project/ray/issues/9208
def train(config, name, save_dir, stop_criteria, num_samples, verbosity=1):
"""
Train an RLlib PPO agent using tune until any of the configured stopping criteria is met.
:param stop_criteria: Dict with stopping criteria.
See https://docs.ray.io/en/latest/tune/api_docs/execution.html#tune-run
:return: Return the path to the saved agent (checkpoint) and tune's ExperimentAnalysis object
See https://docs.ray.io/en/latest/tune/api_docs/analysis.html#experimentanalysis-tune-experimentanalysis
"""
print("Start training")
analysis = ray.tune.run(PPOTrainer, verbose=verbosity, config=config, local_dir=save_dir,
stop=stop_criteria, name=name, num_samples=num_samples,
checkpoint_at_end=True, resume=True)
# list of lists: one list per checkpoint; each checkpoint list contains 1st the path, 2nd the metric value
checkpoints = analysis.get_trial_checkpoints_paths(trial=analysis.get_best_trial('episode_reward_mean', mode='max'),
metric='episode_reward_mean')
    # retrieve the checkpoint path; we only have a single checkpoint, so take the first one
checkpoint_path = checkpoints[0][0]
print(f"Saved trained model in checkpoint {checkpoint_path} - achieved episode_reward_mean: {checkpoints[0][1]}")
return checkpoint_path, analysis
def load(config, path):
"""
Load a trained RLlib agent from the specified path. Call this before testing the trained agent.
"""
agent = PPOTrainer(config=config) #, env=env_class)
agent.restore(path)
return agent
def test(env_class, env_config, policy_mapping_fcn, agent):
"""Test trained agent for a single episode. Return the retrieved env metrics for this episode and the episode reward"""
# instantiate env class
env = env_class(env_config)
episode_reward = 0
done = False
obs = env.reset()
while not done: # run until episode ends
actions = {}
for agent_id, agent_obs in obs.items():
# Here: policy_id == agent_id - added this to avoid confusion for other policy mappings
policy_id = policy_mapping_fcn(agent_id, episode=None, worker=None)
actions[agent_id] = agent.compute_action(agent_obs, policy_id=policy_id)
obs, reward, done, info = env.step(actions)
done = done['__all__']
# sum up reward for all agents
episode_reward += sum(reward.values())
# Retrieve custom metrics from ENV
return env.metrics, episode_reward
def train_and_test_scenarios(config, seeds=None):
""" Trains for a single scenario indicated by a seed """
# TODO how to distinguish between the different algos ?
print("Starte: run_function_trainer!")
# prepare the config dicts
NAME = config['NAME']
SAVE_DIR = config['SAVE_DIR']
ENVIRONMENT = config['ENV']
# Simulations
NUMBER_STEPS_PER_EPISODE = config['NUMBER_STEPS_PER_EPISODE']
STOP_CRITERIA = config['STOP_CRITERIA']
NUMBER_OF_SAMPLES = config['NUMBER_OF_SAMPLES']
#MAP / ENV
NUMBER_OF_DRONES = config['NUMBER_OF_DRONES']
NUMBER_OF_CARS = config['NUMBER_OF_CARS']
NUMBER_OF_AGENTS = NUMBER_OF_DRONES + NUMBER_OF_CARS
MAX_NUMBER_OF_PARCELS = config['MAX_NUMBER_OF_PARCELS']
# TESTING
SEEDS = config['SEEDS']
env_config = {
'DEBUG_LOGS':False,
'TOPOLOGY': config['TOPOLOGY'],
# Simulation config
'NUMBER_STEPS_PER_EPISODE': NUMBER_STEPS_PER_EPISODE,
#'NUMBER_OF_TIMESTEPS': NUMBER_OF_TIMESTEPS,
'RANDOM_SEED': None, # 42
# Map
'CHARGING_STATION_NODES': config['CHARGING_STATION_NODES'],
# Entities
'NUMBER_OF_DRONES': NUMBER_OF_DRONES,
'NUMBER_OF_CARS': NUMBER_OF_CARS,
'MAX_BATTERY_POWER': config['MAX_BATTERY_POWER'], # TODO split this for drone and car??
'INIT_NUMBER_OF_PARCELS': config['INIT_NUMBER_OF_PARCELS'],
'MAX_NUMBER_OF_PARCELS': config['MAX_NUMBER_OF_PARCELS'],
'THRESHOLD_ADD_NEW_PARCEL': config['THRESHOLD_ADD_NEW_PARCEL'],
# Baseline settings
'BASELINE_FLAG': False, # is set True in the test function when needed
'BASELINE_OPT_CONSTANT': config['BASELINE_OPT_CONSTANT'],
'BASELINE_TIME_CONSTRAINT': config['BASELINE_TIME_CONSTRAINT'],
# TODO
#Rewards
'REWARDS': config['REWARDS']
}
run_config = {
'num_gpus': config['NUM_GPUS'],
'num_workers': config['NUM_WORKERS'],
'env': ENVIRONMENT,
'env_config': env_config,
'multiagent': {
'policies': {
# tuple values: policy, obs_space, action_space, config
**{a: (None, None, None, { 'model': {'custom_model': ParametricAgentModel }, 'framework': 'tf'}) for a in ['d_'+ str(j) for j in range(NUMBER_OF_DRONES)] + ['c_'+ str(i) for i in range(NUMBER_OF_DRONES, NUMBER_OF_CARS + NUMBER_OF_DRONES)]}
},
'policy_mapping_fn': policy_mapping_fn,
'policies_to_train': ['d_'+ str(i) for i in range(NUMBER_OF_DRONES)] + ['c_'+ str(i) for i in range(NUMBER_OF_DRONES, NUMBER_OF_CARS + NUMBER_OF_DRONES)]
},
#'log_level': "INFO",
#"hiddens": [], # For DQN
#"dueling": False, # For DQN
}
# Train and Evaluate the agents !
checkpoint, analysis = train(run_config, NAME, SAVE_DIR, STOP_CRITERIA, NUMBER_OF_SAMPLES)
print("Training finished - Checkpoint: ", checkpoint)
env_class = ENVIRONMENT
# Restore trained policies for evaluation
agent = load(run_config, checkpoint)
print("Agent loaded - Agent: ", agent)
# Run the test cases for the specified seeds
runs = {'max_steps': NUMBER_STEPS_PER_EPISODE, 'max_parcels': MAX_NUMBER_OF_PARCELS, 'max_agents': NUMBER_OF_AGENTS}
for seed in SEEDS:
env_config['RANDOM_SEED'] = seed
# TODO check if
print(seed)
assert run_config['env_config']['RANDOM_SEED'] == seed
result = test_scenario(run_config, agent)
runs.update(result)
return runs
def test_scenario(config, agent):
"""
Loads a pretrained agent, initializes an environment from the seed
and then evaluates it over one episode with the Marl agents and the central baseline.
Returns: dict with results for graph creation for both evaluation / inference runs
"""
    # TODO: distinguish between run_config and env_config
    # How to store the two runs --> {seed + [marl / base]: result}
env_class = config['env']
env_config = config['env_config']
seed = env_config['RANDOM_SEED']
policy_mapping_fn = config['multiagent']['policy_mapping_fn']
# Test with MARL
metrics_marl, reward_marl = test(env_class, env_config, policy_mapping_fn, agent)
# Test with CentralBase
env_config['BASELINE_FLAG'] = True
metrics_central, reward_central = test(env_class, env_config, policy_mapping_fn, agent)
env_config['BASELINE_FLAG'] = False
# ASSERT that both optimal values are equal
#assert metrics_marl['optimal'] == metrics_central['optimal']
return {"M_" + str(seed): metrics_marl, "C_" + str(seed): metrics_central}
from ray.rllib.agents.ppo import PPOTrainer
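# One policy per agent id (independent learners): the mapping is simply the identity.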
def policy_mapping_fn(agent_id, episode, worker, **kwargs):
return agent_id
basic_config = {
# experiment
'NAME': 'small_no',
'SAVE_DIR': 'Exp_environment',
'ALGO': PPOTrainer,
'ENV': Map_Environment,
'DEBUG_LOGS':False,
'NUM_GPUS': 2,
'NUM_WORKERS': 32,
'NUMBER_OF_SAMPLES': 2,
# Simulation config
'NUMBER_STEPS_PER_EPISODE': 1200,
'RANDOM_SEED': None, # 42
# Map
'TOPOLOGY': topology,
'CHARGING_STATION_NODES': [0,1,2,3,4,6,9],
# Entities
'NUMBER_OF_DRONES': 2,
'NUMBER_OF_CARS': 2,
'INIT_NUMBER_OF_PARCELS': 10,
'MAX_NUMBER_OF_PARCELS': 10,
'THRESHOLD_ADD_NEW_PARCEL': 0.1, # 10% chance
'MAX_BATTERY_POWER': 100,
#Baseline
'BASELINE_TIME_CONSTRAINT': 10,
'BASELINE_OPT_CONSTANT': 2.5,
#TESTING
'SEEDS': [72, 21, 44, 66, 86, 14],
#Rewards
'REWARDS': {
'PARCEL_DELIVERED': 200,
'STEP_PENALTY': -0.1,
},
'STOP_CRITERIA': {
'timesteps_total': 24_000_000,
}
}
print("Test evaluation functions")
experiment_results = train_and_test_scenarios(basic_config)
print("Test evaluation finished ;)")
experiment_results
saved_results = {'max_steps': 1200,
'max_parcels': 10,
'max_agents': 4,
'M_72': {'step': 1200,
'delivered': {'p_3': 885},
'crashed': {'c_3': 1152},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': None},
'C_72': {'step': 1200,
'delivered': {'p_4': 49,
'p_1': 67,
'p_3': 67,
'p_7': 95,
'p_5': 126,
'p_10': 224,
'p_9': 403},
'crashed': {},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': 75},
'M_21': {'step': 1200,
'delivered': {},
'crashed': {},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': None},
'C_21': {'step': 1200,
'delivered': {'p_1': 47,
'p_6': 76,
'p_3': 135,
'p_9': 215,
'p_7': 350,
'p_8': 539},
'crashed': {},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': 105},
'M_44': {'step': 1200,
'delivered': {'p_9': 735, 'p_7': 857},
'crashed': {'d_0': 1081},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': None},
'C_44': {'step': 1200,
'delivered': {'p_7': 253, 'p_8': 258},
'crashed': {},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': 103},
'M_66': {'step': 1200,
'delivered': {},
'crashed': {'d_1': 103, 'c_3': 226},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': None},
'C_66': {'step': 1200,
'delivered': {'p_1': 16, 'p_3': 18, 'p_5': 70},
'crashed': {},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': 85},
'M_86': {'step': 1200,
'delivered': {'p_9': 25, 'p_8': 27},
'crashed': {'d_1': 90, 'd_0': 1079},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': None},
'C_86': {'step': 1200,
'delivered': {'p_8': 66,
'p_1': 90,
'p_9': 110,
'p_4': 112,
'p_7': 168,
'p_2': 185,
'p_6': 282},
'crashed': {},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': 90},
'M_14': {'step': 1200,
'delivered': {'p_3': 477, 'p_8': 1025},
'crashed': {'d_1': 445, 'd_0': 1084},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': None},
'C_14': {'step': 1200,
'delivered': {'p_1': 12, 'p_3': 12, 'p_6': 66},
'crashed': {},
'added': {1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0},
'optimal': 108}}
import matplotlib.ticker as mtick
def create_chart_bars(results_dict):
""" Function that plots a bar graph with the duration of one episode
run with MARL agents/ Centrality Baseline/ Optimality Baseline, recorded in the :param results_dict:.
"""
# Design choices
colors = {
"marl": 'blue',
"central": 'green',
"optimal": 'red'
}
# Retrieve settings values from results dict
max_steps = results_dict['max_steps']
max_parcels = results_dict['max_parcels']
max_agents = results_dict['max_agents']
# Deep copy for further computations
scenario_results = {k:v for k,v in results_dict.items() if k[0:3] != 'max'}
# filter
runs = scenario_results.keys()
values = scenario_results.values()
    # TODO: instead of an "all dead" marker, simply use a "not all delivered" marker
merged = {} # key is seed as str, value is a dict with [marl, central, optimal, all_dead_marl, all_dead_cent]
# Retrieve the data
for run_id, res in scenario_results.items():
split_id = run_id.split('_') # --> type, seed
key_type, key_seed = split_id[0], split_id[1]
# merge data from the runs with same seed (marl + baselines)
# add new dict if seed not encountered yet
if key_seed not in merged:
merged[key_seed] = {}
_key_delivered = 'marl'
#_key_crashed = 'all_dead_marl'
if key_type == 'C':
# Baseline run
merged[key_seed]['optimal'] = res['optimal']
_key_delivered = 'central'
#_key_crashed = 'all_dead_central'
# Retrieve number of steps in run
last_step = res['step']
# old code for plotting the number of steps needed in the episode
merged[key_seed][_key_delivered] = last_step
merged[key_seed][_key_delivered + '_all'] = len(res['delivered'])
# all_parcels_delivered = len(res['delivered']) == max_parcels # were all parcels delivered
# merged[key_seed][_key_delivered + '_all'] = all_parcels_delivered
# if not all_parcels_delivered:
# merged[key_seed][_key_delivered] = max_steps
print("Merged: ", merged)
# example data = [[30, 25, 50, 20],
# [40, 23, 51, 17],
# [35, 22, 45, 19]]
data = [[],[],[],[], []]
labels = []
# split data into type of run
for seed,values in merged.items():
labels.append('S_'+seed)
data[0].append(values['marl'])
data[1].append(values['central'])
data[2].append(values['optimal'])
data[3].append(values['marl_all'])
data[4].append(values['central_all'])
print("Data: ", data)
X = np.arange(len(labels))
#print(X)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
#ax.bar(X + 0.00, data[2], color = colors['optimal'], label="Optimality Baseline", width = 0.25)
ax.bar(X + 0.25, data[3], color = colors['marl'], label="MARL System", width = 0.25, alpha=0.8)
ax.bar(X + 0.50, data[4], color = colors['central'], label="Centrality Baseline", width = 0.25)
plt.xlabel("Experiments")
plt.ylabel("Parcels_delivered")
plt.ylim(bottom=0, top=max_parcels)
# y axis as percentage
yticks = mtick.PercentFormatter(max_parcels)
ax.yaxis.set_major_formatter(yticks)
# Add experiment identifiers x-Axis
plt.xticks(X + 0.37, labels)
# Add legend
ax.legend()
# Plot duration bar graphs from the results
create_chart_bars(experiment_results)
import matplotlib.ticker as mtick
def create_chart_episode_events(results_dict, draw_crashes = False):
""" Function that plots a graph with either the deliveries of parcels or the crashes of agents
over the course of an episode, recorded in the :param results_dict:.
    Set the :param draw_crashes: flag for plotting crashes; the default is deliveries.
"""
    # TODO: also save the graphs to disk, or only display them here?
    # TODO: also plot parcel additions?
    # TODO: reconsider filling with max_value for the mean computation
# Design choices
colors = {
"marl": 'blue',
"central": 'green',
"optimal": 'red'
}
alpha = 0.6 # Opacity of individual value lines
alpha_opt = 0.2
opt_marker_size = 15
line_width_mean = 5
line_width_optimal = 2
fig = plt.figure()
ax = fig.add_subplot(111)
# Retrieve settings values from results dict
max_steps = results_dict['max_steps']
max_parcels = results_dict['max_parcels']
max_agents = results_dict['max_agents']
_len = max_agents if draw_crashes else max_parcels
_len += 1 # start plot at origin
_key = 'crashed' if draw_crashes else 'delivered'
# Deep copy for further computations
scenario_results = {k:v for k,v in results_dict.items() if k[0:3] != 'max'}
# for computation of mean
m_values, c_values, o_values = [], [], []
Y = [str(i) for i in range(0, _len)]
# iterate over configs
for scenario, results in scenario_results.items():
# Default settings -> MARL run
color = colors["marl"]
_type = m_values
label= "marl_system"
if scenario[0] == 'C':
# Baseline run
color = colors["central"]
_type = c_values
label = "centrality_baseline"
# Retrieve and plot optimality baseline
optimal_time = results['optimal']
assert optimal_time is not None
if not draw_crashes: ax.plot(optimal_time, 0, "*", color = colors["optimal"], label="optimality_baseline", markersize=opt_marker_size, alpha= alpha_opt, clip_on=False)
o_values.append(optimal_time)
_num_steps = results['step']
X = [0] + list(results[_key].values())
X = X + [max_steps]*(_len - len(X)) # Fill X up with max_step values for not delivered parcels / not crashed agents
_type.append(X) # add X to the respective mean list
#Y = [str(i) for i in range(0, len(results[_key].values())+1)]
#print("Data: ", results[_key].values())
#print("new X: ", X)
#print("new Y: ", Y)
ax.step(X, Y, label=label, where='post', color=color, alpha=alpha)
# Attempt to improve the filling mess in the plot...
#X = X + [max_steps]*(_len - len(X)) # Fill X up with max_step values for not delivered parcels / not crashed agents
#_type.append(X) # add X to the respective mean list
# compute mean values
m_mean = np.mean(np.array(m_values), axis=0)
c_mean = np.mean(np.array(c_values), axis=0)
o_mean = np.mean(np.array(o_values), axis=0)
ax.step(m_mean, Y, label="marl_system", where='post', color=colors["marl"], linewidth=line_width_mean)
ax.step(c_mean, Y, label="centrality_baseline", where='post', color=colors["central"], linewidth=line_width_mean)
# star for opt: if not draw_crashes: plt.plot(o_mean, 0, "*", label="optimality_baseline", color="r", markersize=opt_marker_size, clip_on=False)
# better?: vertical line for opt
if not draw_crashes: ax.axvline(o_mean, label="optimality_baseline", color=colors["optimal"], linewidth=2, alpha=alpha_opt+0.3)
# y axis as percentage
max_percent = max_agents if draw_crashes else max_parcels
yticks = mtick.PercentFormatter(max_percent)
ax.yaxis.set_major_formatter(yticks)
    # Labels and legend
plt.xlabel("steps")
ylabel = "% of crashed agents" if draw_crashes else "% of delivered parcels"
plt.ylabel(ylabel)
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles))
plt.legend(by_label.values(), by_label.keys())
# Margins
plt.ylim(bottom=0)
plt.xlim()
plt.margins(x=0, y=0)
plt.show()
# Plot delivery graphs from the results
create_chart_episode_events(experiment_results, draw_crashes=False)
# Plot crash graphs from the results
create_chart_episode_events(experiment_results, draw_crashes=True)
###Output
_____no_output_____
###Markdown
Manual Actions for debugging
###Code
##################
env_config = {
'DEBUG_LOGS':False,
'TOPOLOGY': topology,
# Simulation config
'NUMBER_STEPS_PER_EPISODE': 1000,
#'NUMBER_OF_TIMESTEPS': NUMBER_OF_TIMESTEPS,
'RANDOM_SEED': None, # 42
# Map
'CHARGING_STATION_NODES': [0,1,2,3,4],
# Entities
'NUMBER_OF_DRONES': 2,
'NUMBER_OF_CARS': 2,
'MAX_BATTERY_POWER': 100, # TODO split this for drone and car??
'INIT_NUMBER_OF_PARCELS': 3,
'MAX_NUMBER_OF_PARCELS': 3,
'THRESHOLD_ADD_NEW_PARCEL': 0.01,
# Baseline settings
'BASELINE_FLAG': False, # is set True in the test function when needed
'BASELINE_OPT_CONSTANT': 2.5,
'BASELINE_TIME_CONSTRAINT': 5,
# TODO
#Rewards
'REWARDS': {
'PARCEL_DELIVERED': 200,
'STEP_PENALTY': -0.1,
},
}
env = Map_Environment(env_config)
env.state
#env.ACTION_DROPOFF
# TODO select actions to give agent 0 reward!!
#print(env.action_space)
actions_1 = {'d_0': 6, 'd_1': 2, 'c_2': 3, 'c_3': 4}
#actions_1 = {'d_0': 0, 'd_1': 0, 'c_2': 5, 'c_3': 5}
actions_2 = {'d_0': 2, 'd_1': 8, 'c_2': 7, 'c_3':0}
actions_3 = {'d_0': 5, 'd_1': 5, 'c_2':2 }
new_obs, rewards, dones, infos = env.step(actions_1)
print(infos)
print(f"New Obs are: {new_obs}")
print(rewards)
print("------------------")
new_obs2, rewards2, dones2, infos2 = env.step(actions_2)
print(f"New Obs are: {new_obs2}")
print(rewards2)
print("------------------")
new_obs3, rewards3, dones3, infos3 = env.step(actions_3)
print(f"New Obs are: {new_obs3}")
print(rewards3)
actions_4 = {'d_0': 0, 'd_1': 0, 'c_2': 1, 'c_3': 0}
actions_5 = {'d_0': 0, 'd_1': 0, 'c_2': 0, 'c_3': 0}
actions_6 = {'d_0': 0, 'd_1': 0, 'c_2': 5, 'c_3': 0}
new_obs, rewards, dones, infos = env.step(actions_4)
print(f"New Obs are: {new_obs}")
print(rewards)
print("------------------")
new_obs2, rewards2, dones2, infos2 = env.step(actions_5)
print(f"New Obs are: {new_obs2}")
print(rewards2)
print("------------------")
new_obs3, rewards3, dones3, infos3 = env.step(actions_6)
print(f"New Obs are: {new_obs3}")
print(rewards3)
print("------------------")
print(dones3)
# TENSORBOARD
# Load the TensorBoard notebook extension
%load_ext tensorboard
#Start tensorboard below the notebook
#%tensorboard --logdir logs
###Output
_____no_output_____ |
demos/eigenvalue/Rounding in characteristic polynomial using SymPy.ipynb | ###Markdown
Rounding in the Characteristic Polynomial (using Sympy)Copyright (C) 2019 Andreas KloecknerMIT LicensePermission is hereby granted, free of charge, to any person obtaining a copyof this software and associated documentation files (the "Software"), to dealin the Software without restriction, including without limitation the rightsto use, copy, modify, merge, publish, distribute, sublicense, and/or sellcopies of the Software, and to permit persons to whom the Software isfurnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included inall copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS ORIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THEAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHERLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS INTHE SOFTWARE.
###Code
import sympy as sp
sp.init_printing()
eps = sp.Symbol("epsilon")
lam = sp.Symbol("lambda")
m = sp.Matrix([[1, eps], [eps, 1]])
m
m.charpoly(lam)
###Output
_____no_output_____ |
Dnn_prueba_tpu.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
!ls "/content/drive/"
#np.random.seed(1337) # for reproducibility
import pandas as pd
import numpy as np
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Embedding
from keras.layers import LSTM, SimpleRNN, GRU
from keras.datasets import imdb
from keras.utils.np_utils import to_categorical
from sklearn.metrics import (precision_score, recall_score,f1_score, accuracy_score,mean_squared_error,mean_absolute_error)
from sklearn import metrics
from sklearn.preprocessing import Normalizer
import h5py
from keras import callbacks
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau, CSVLogger
traindata = pd.read_csv(r'/content/drive/My Drive/Colab Notebooks/Training.csv')  # kdd/binary/Training.csv
testdata = pd.read_csv(r'/content/drive/My Drive/Colab Notebooks/Testing.csv')#'kdd/binary/Testing.csv', header=None
X = traindata.iloc[:,1:42]
Y = traindata.iloc[:,0]
C = testdata.iloc[:,0]
T = testdata.iloc[:,1:42]
scaler = Normalizer().fit(X)
trainX = scaler.transform(X)
scaler = Normalizer().fit(T)
testT = scaler.transform(T)
y_train = np.array(Y)
y_test = np.array(C)
X_train = np.array(trainX)
X_test = np.array(testT)
batch_size = 64
# 1. define the network
model = Sequential()
model.add(Dense(1024,input_dim=41,activation='relu'))
model.add(Dropout(0.01))
model.add(Dense(768,activation='relu'))
model.add(Dropout(0.01))
model.add(Dense(512,activation='relu'))
model.add(Dropout(0.01))
model.add(Dense(256,activation='relu'))
model.add(Dropout(0.01))
model.add(Dense(128,activation='relu'))
model.add(Dropout(0.01))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
# try using different optimizers and different optimizer configs
from google.colab import drive
drive.mount('/content/drive')
!ls /content/sample_data
model.compile(loss='mean_squared_error',
optimizer='adam',
metrics=['binary_accuracy'])
model.fit(X_train, y_train , epochs=100)
# evaluate the model
scores = model.evaluate(X_train, y_train)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
print (model.predict(X_train).round())
###Output
Epoch 1/100
15439/15439 [==============================] - 128s 8ms/step - loss: 0.0056 - binary_accuracy: 0.9936
Epoch 2/100
15439/15439 [==============================] - 127s 8ms/step - loss: 0.0033 - binary_accuracy: 0.9964
Epoch 3/100
15439/15439 [==============================] - 130s 8ms/step - loss: 0.0030 - binary_accuracy: 0.9967
Epoch 4/100
15439/15439 [==============================] - 122s 8ms/step - loss: 0.0026 - binary_accuracy: 0.9972
Epoch 5/100
15439/15439 [==============================] - 120s 8ms/step - loss: 0.0025 - binary_accuracy: 0.9973
Epoch 6/100
15439/15439 [==============================] - 122s 8ms/step - loss: 0.0024 - binary_accuracy: 0.9974
Epoch 7/100
15439/15439 [==============================] - 123s 8ms/step - loss: 0.0027 - binary_accuracy: 0.9971
Epoch 8/100
15439/15439 [==============================] - 129s 8ms/step - loss: 0.0023 - binary_accuracy: 0.9975
Epoch 9/100
15439/15439 [==============================] - 127s 8ms/step - loss: 0.0023 - binary_accuracy: 0.9976
Epoch 10/100
15439/15439 [==============================] - 128s 8ms/step - loss: 0.0022 - binary_accuracy: 0.9976
Epoch 11/100
15439/15439 [==============================] - 128s 8ms/step - loss: 0.0024 - binary_accuracy: 0.9975
Epoch 12/100
15439/15439 [==============================] - 122s 8ms/step - loss: 0.0023 - binary_accuracy: 0.9976
Epoch 13/100
15439/15439 [==============================] - 127s 8ms/step - loss: 0.0026 - binary_accuracy: 0.9973
Epoch 14/100
15439/15439 [==============================] - 122s 8ms/step - loss: 0.0028 - binary_accuracy: 0.9970
Epoch 15/100
15439/15439 [==============================] - 123s 8ms/step - loss: 0.0022 - binary_accuracy: 0.9977
Epoch 16/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0026 - binary_accuracy: 0.9973
Epoch 17/100
15439/15439 [==============================] - 122s 8ms/step - loss: 0.0062 - binary_accuracy: 0.9937
Epoch 18/100
15439/15439 [==============================] - 132s 9ms/step - loss: 0.0027 - binary_accuracy: 0.9972
Epoch 19/100
15439/15439 [==============================] - 128s 8ms/step - loss: 0.0024 - binary_accuracy: 0.9975
Epoch 20/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0023 - binary_accuracy: 0.9976
Epoch 21/100
15439/15439 [==============================] - 126s 8ms/step - loss: 0.0023 - binary_accuracy: 0.9976
Epoch 22/100
15439/15439 [==============================] - 121s 8ms/step - loss: 0.0030 - binary_accuracy: 0.9969
Epoch 23/100
15439/15439 [==============================] - 128s 8ms/step - loss: 0.0027 - binary_accuracy: 0.9972
Epoch 24/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0025 - binary_accuracy: 0.9975
Epoch 25/100
15439/15439 [==============================] - 124s 8ms/step - loss: 0.0023 - binary_accuracy: 0.9976
Epoch 26/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0021 - binary_accuracy: 0.9978
Epoch 27/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0021 - binary_accuracy: 0.9978
Epoch 28/100
15439/15439 [==============================] - 129s 8ms/step - loss: 0.0021 - binary_accuracy: 0.9977
Epoch 29/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0026 - binary_accuracy: 0.9973
Epoch 30/100
15439/15439 [==============================] - 121s 8ms/step - loss: 0.0022 - binary_accuracy: 0.9977
Epoch 31/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0020 - binary_accuracy: 0.9979
Epoch 32/100
15439/15439 [==============================] - 124s 8ms/step - loss: 0.0022 - binary_accuracy: 0.9977
Epoch 33/100
15439/15439 [==============================] - 129s 8ms/step - loss: 0.0020 - binary_accuracy: 0.9979
Epoch 34/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0021 - binary_accuracy: 0.9978
Epoch 35/100
15439/15439 [==============================] - 122s 8ms/step - loss: 0.0021 - binary_accuracy: 0.9978
Epoch 36/100
15439/15439 [==============================] - 122s 8ms/step - loss: 0.0021 - binary_accuracy: 0.9978
Epoch 37/100
15439/15439 [==============================] - 125s 8ms/step - loss: 0.0036 - binary_accuracy: 0.9963
Epoch 38/100
15439/15439 [==============================] - 131s 8ms/step - loss: 0.0022 - binary_accuracy: 0.9976
Epoch 39/100
11569/15439 [=====================>........] - ETA: 32s - loss: 0.0020 - binary_accuracy: 0.9979 |
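###Markdown
The test split prepared above (X_test, y_test) is loaded but never scored. A minimal sketch of how it could be evaluated with the same Keras API, assuming the model trained above is still in memory:
###Code
# Evaluate the trained network on the held-out test set prepared earlier
test_scores = model.evaluate(X_test, y_test)
print("\nTest %s: %.2f%%" % (model.metrics_names[1], test_scores[1]*100))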
causality_basics.ipynb | ###Markdown
Basics of Causality in Data Science---
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
from graphviz import Digraph
from IPython.display import Image
plt.rc('figure', figsize=(12, 10))
plt.rc('font', size=16);
###Output
/usr/local/lib/python3.7/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Association vs Causality A first important milestone in understanding how causality affects us as data scientists is to acknowledge the difference between association and causality. Or, to put it more concisely: "Correlation does not imply causation". Let's look at some data:
###Code
def get_data(effect_size, seed=5):
np.random.seed(seed)
x = np.random.uniform(high=10., size=100)
y = effect_size*x + np.random.normal(size=100)
return pd.DataFrame({
'X': x,
'y': y
})
data = get_data(effect_size=3.)
data.plot.scatter('X', 'y');
mdl = sm.OLS(endog=data.y, exog=data.X).fit()
mdl.summary()
###Output
_____no_output_____
###Markdown
For the sampled dataset it is easy to recover a relationship between X and y. However, we implicitly assumed that y depends on X, although this was not stated anywhere; we could just as well have regressed X on y, i.e.:
###Code
mdl = sm.OLS(endog=data.X, exog=data.y).fit()
mdl.summary()
###Output
_____no_output_____
###Markdown
Based on the get_data function, the first approach is the correct one (we simply multiply X by 3 and then add some noise). However, the data alone cannot tell us which direction is the proper one. Without any assumptions/knowledge about the underlying causal structure, we can only talk about associations, which in the case of linear regression are basically conditional expectations, i.e. $\mathbb{E}[Y|X]$ and $\mathbb{E}[X|Y]$. This gives us the power to answer questions of the form "We have observed X, what do we expect to see for Y?", e.g.:* "A customer x has an income of 10.000 Euro, what would we expect this customer to spend on leisure articles?"* "We can see it rain, what is the chance that our driveway is wet?"* "An applicant for a personal loan works as a teacher, what is the estimated probability that they will default?" In contrast, causality goes beyond merely observing what is happening and tries to infer the concrete impact of one variable on another. This would allow us to make statements about what would happen if we set a variable to a certain value. In the notation developed by Pearl, we would use the do-operator and write $\mathbb{E}[Y|\text{do}(X)]$ and $\mathbb{E}[X|\text{do}(Y)]$ to differentiate this setting from the classical one, where we just infer associative relationships. Some questions we might be interested in, and which tend to have a causal twist to them:* "What was the concrete impact of calling the customer on their decision to purchase the service?"* "If we turn on the sprinkler, what is the chance that the driveway gets wet?"* "What would the result of my A/B test be, if everyone did what they were supposed to do?"* "Which variables do we have to measure in order to determine whether an advertisement campaign is worth its money?" Basic Building Blocks for Working with Causality A Graphical View One of the core takeaways from Pearl's work is that graphs are a great tool to depict and analyse issues of causality. As a first step, we can show how we would depict the problem from the previous sections with graphs:
###Code
dot = Digraph()
dot.node('X')
dot.node('y')
dot.edges(['Xy'])
dot.render('Xy_relationship', view=True, format='png')
Image('Xy_relationship.png')
dot = Digraph()
dot.node('X', peripheries='2')
dot.node('y')
dot.edges(['Xy'])
dot.render('Xy_interv', view=True, format='png')
Image('Xy_interv.png')
def get_data(seed=5, X_interv=None):
if seed:
np.random.seed(seed)
Z = np.random.normal(size=100)
X_noise = np.random.normal(scale=0.5, size=100)
if not X_interv:
X = 3. + 1.5*Z + X_noise
else:
X = X_interv
y = 1. + 2.*X + 2.*Z + np.random.normal(scale=0.25, size=100)
return pd.DataFrame({
'X': X,
'y': y,
'Z': Z
})
data = get_data(5)
dot = Digraph()
dot.node('X')
dot.node('Z')
dot.node('y')
dot.edges(['Xy', 'ZX', 'Zy'])
dot.render('XZy_relationship', view=True, format='png')
Image('XZy_relationship.png')
data.plot.scatter('X', 'y')
data.plot.scatter('Z', 'y')
mdl = sm.OLS(endog=data.y, exog=sm.add_constant(data.X)).fit()
mdl.summary()
sm.graphics.plot_fit(mdl, exog_idx='X');
mdl.resid.hist();
###Output
_____no_output_____
###Markdown
The fitted-against-observed-values plot looks ok, while the residual histogram indicates some issues with the normality of the residuals, but nothing too extraordinary. Let us say, for the sake of argument, that everything is fine, that we interpret the relationship as a causal one, i.e. X causes y, and that we want to use this new knowledge to control y. Suppose that X=5 is the optimal value to control y. Based on our model, we would expect all our values to lie within the following interval:
###Code
def get_pi(mdl, pred_df, alpha=0.05):
predictions = mdl.get_prediction(pred_df)
pred_intervals = predictions.summary_frame(alpha=alpha)[
['obs_ci_lower', 'obs_ci_upper']
]
return pred_intervals.iloc[0]
pred_df = sm.add_constant(data['X']).iloc[0]
pred_df.X = 5.
pred_int_low, pred_int_up = get_pi(mdl, pred_df)
pred_int_low, pred_int_up
###Output
_____no_output_____
###Markdown
However, when we sample the data again, now forcing X to be 5, we get:
###Code
data_interv = get_data(X_interv=5.)
def plot_obs_and_pi(y, pred_int_low, pred_int_up):
y.hist(legend=True)
ymax = plt.ylim()[1]
plt.axvline(x=pred_int_low, ymin=0, ymax=ymax, color='r', linestyle='--')
plt.axvline(x=pred_int_up, ymin=0, ymax=ymax, color='r', linestyle='--')
plt.annotate(
s='',
xy=(pred_int_low, ymax*0.85),
xytext=(pred_int_up, ymax*0.85),
arrowprops={'arrowstyle': '<->', 'color': 'red'})
plt.text(
x=(pred_int_low + pred_int_up)/2,
y=ymax*0.86,
s='Prediction Interval',
ha='center',
color='red')
plot_obs_and_pi(data_interv.y, pred_int_low, pred_int_up)
###Output
_____no_output_____
###Markdown
This looks horrible. However, you might argue that the problem arises from not taking Z into consideration. So let's try this!
###Code
mdl = sm.OLS(endog=data.y, exog=sm.add_constant(data[['X', 'Z']])).fit()
mdl.summary()
###Output
_____no_output_____
###Markdown
Just by looking at the model coefficients and comparing them to the equation used to generate the observations in the get_data function, we can see that we seem to have recovered the relationship between X and y pretty well. But have we also identified the causal magnitude between the two?
###Code
pred_df = sm.add_constant(data[['X', 'Z']]).iloc[0]
pred_df.X = 5.
data_interv = get_data(X_interv=5.)
pred_int_low, pred_int_up = get_pi(mdl, pred_df)
plot_obs_and_pi(data_interv.y, pred_int_low, pred_int_up)
mdl = sm.OLS(endog=data_interv.y, exog=sm.add_constant(data_interv[['X', 'Z']])).fit()
mdl.summary()
data.y.hist()
data.corr()
data_interv.corr()
dot = Digraph()
dot.node('X', peripheries='2')
dot.node('Z')
dot.node('y')
dot.edges(['Xy', 'Zy'])
dot.render('XZy_interv', view=True, format='png')
Image('XZy_interv.png')
dot = Digraph()
dot.node('X', peripheries='2')
dot.node('Z', style='dashed')
dot.node('y')
dot.edges(['Xy', 'ZX', 'Zy'])
dot.render('XZy_relationship', view=True, format='png')
Image('XZy_relationship.png')
seed=5
X_interv=None
if seed:
np.random.seed(seed)
Z = np.random.normal(size=100)
X_noise = np.random.normal(scale=0.5, size=100)
if not X_interv:
X = 3. + 1.5*Z + X_noise
else:
X = X_interv
y = 1. + 2.*X + 2.*Z + np.random.normal(scale=0.25, size=100)
df = pd.DataFrame({
'X': X,
'y': y,
'Z': Z
})
df.y.hist();
mdl = sm.OLS(endog=df.y, exog=sm.add_constant(df[['X', 'Z']])).fit()
mdl.summary()
pred_df = sm.add_constant(df[['X', 'Z']])
pred_df.X = 5.
data_interv = get_data(X_interv=5.)
pred_int_low, pred_int_up = get_pi(mdl, pred_df)
#plot_obs_and_pi(data_interv.y, pred_int_low, pred_int_up)
pred_int_low, pred_int_up
get_pi
?sm.graphics.plot_fit
###Output
_____no_output_____
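###Markdown
A quick analytic sanity check of why conditioning and intervening differ here (a back-of-the-envelope calculation using the coefficients hard-coded in get_data): observationally, $\mathbb{E}[Z|X=x] = \frac{1.5}{1.5^2 + 0.5^2}(x-3) = 0.6(x-3)$, so $\mathbb{E}[y|X=x] = 1 + 2x + 1.2(x-3) = 3.2x - 2.6$, whereas intervening severs the $Z \to X$ arrow and leaves $\mathbb{E}[y|\text{do}(X=x)] = 1 + 2x$ because $\mathbb{E}[Z]=0$. At $x=5$ this gives roughly $13.4$ versus $11$, which matches the mismatch between the prediction interval of the X-only regression and the interventional samples above.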
###Markdown
TODO:- Explain blocking a path- Define DAG Two Important Tools Backdoor Criterion Suppose you want to infer the causal impact of X on y, but there are other factors (denoted by Z) present in your causal graph. Should you adjust[^1] for these variables? The backdoor criterion (Pearl) answers this question, as it allows you to derive a so-called admissible subset of variables. Adjusting for this subset of variables then allows you to infer the causal relationship between X and y. > A set of variables Z satisfies the backdoor criterion relative to X and y in a DAG, if:- No node in Z is a descendant of X.- Z blocks every path between X and y that contains an arrow into X (a so-called backdoor path).[^1] we will cover the precise meaning of adjustment
###Code
dot = Digraph()
dot.node('X')
[dot.node(i) for i in 'ABCD']
dot.node('y')
dot.edges(['XA', 'Ay', 'BX', 'BC', 'DC', 'Dy', 'CX'])
dot.render('XZy_relationship', view=True, format='png')
Image('XZy_relationship.png')
###Output
_____no_output_____
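###Markdown
As a hedged illustration, the backdoor criterion can also be checked programmatically on the DAG drawn above. The helper below is a sketch that uses an equivalent formulation (no node of Z is a descendant of X, and Z d-separates X and y once all edges leaving X are removed); it assumes a NetworkX version that provides nx.d_separated, and the function and variable names are illustrative rather than part of any established causal-inference API.
###Code
import networkx as nx

def satisfies_backdoor(G, x, y, Z):
    """Check the backdoor criterion for Z relative to (x, y) in the DAG G."""
    Z = set(Z)
    if Z & nx.descendants(G, x):                      # condition 1: no descendant of x in Z
        return False
    G_back = G.copy()
    G_back.remove_edges_from(list(G.out_edges(x)))    # only backdoor paths remain
    return nx.d_separated(G_back, {x}, {y}, Z)        # condition 2: Z blocks them

# Same edges as the graphviz DAG above
G = nx.DiGraph([('X', 'A'), ('A', 'y'), ('B', 'X'), ('B', 'C'), ('D', 'C'), ('D', 'y'), ('C', 'X')])
print(satisfies_backdoor(G, 'X', 'y', {'D'}))   # True: {D} is admissible
print(satisfies_backdoor(G, 'X', 'y', {'C'}))   # False: conditioning on the collider C alone opens a path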
###Markdown
Frontdoor Criterion The frontdoor criterion is an alternative criterion, which might allow for inferring a causal relationship in a scenario where the backdoor criterion is not applicable.> A set of variables Z satisfies the frontdoor criterion relative to X and y in a DAG, if:- Z blocks all directed paths from X to y.- There is no unblocked backdoor path between X and Z.- X blocks all backdoor paths from Z to y.
###Code
dot = Digraph()
dot.node('X')
dot.node('Z')
dot.node('U', style='dashed')
dot.node('y')
dot.edge('U', 'X', style='dashed')
dot.edge('U', 'y', style='dashed')
dot.edges(['XZ', 'Zy'])
dot.render('XZy_relationship', view=True, format='png')
Image('XZy_relationship.png')
###Output
_____no_output_____ |
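###Markdown
For reference, the standard front-door adjustment that this criterion licenses (quoted from Pearl, not derived here): if Z satisfies the front-door criterion relative to X and y and $P(x, z) > 0$, then $P(y|\text{do}(x)) = \sum_z P(z|x) \sum_{x'} P(y|x', z)\,P(x')$.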
magnolia/sandbox/lab41-broadcast/contribution.ipynb | ###Markdown
Forked Neural Network
###Code
## Standard python libraries
import numpy as np
import time
import sys
import matplotlib.pylab as plt
import functools
%matplotlib inline
## Magnolia data iteration
sys.path.append('../../')
from src.features.mixer import FeatureMixer
from src.features.wav_iterator import batcher
from supervised_iterator_experiment import SupervisedIterator, SupervisedMixer
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
print(tf.__version__)
###Output
1.1.0-rc2
###Markdown
Set up the data
###Code
numsources = 2
batchsize = 256
datashape = (40, 257)
embedding_size = 600
restore_session=False
libridev='/local_data/teams/magnolia/libri-dev.h5'
libritrain='/local_data/teams/magnolia/librispeech/processed_train-clean-100.h5'
###Output
_____no_output_____
###Markdown
Create a supervised mixer and batcher
###Code
if numsources == 3:
mixer = SupervisedMixer([libritrain,libritrain,libritrain], shape=datashape,
mix_method='add', diffseed=True, return_key=True)
else:
mixer = SupervisedMixer([libritrain,libritrain], shape=datashape,
mix_method='add', diffseed=True, return_key=True)
# Check the time
tbeg = time.clock()
X, Y, I = mixer.get_batch(batchsize)
tend = time.clock()
print('Supervised feature mixer with 3 libridev sources timed at ', (tend-tbeg), 'sec')
###Output
Supervised feature mixer with 3 libridev sources timed at 1.3266749999999998 sec
###Markdown
NEURAL NETWORK The loss function takes as input the variable `Vlast` for the last layer ($V_{last}$, where a vector in $V_{last}$ is $v_{l}$). (That's the first couple of lines, where one just makes a tensorflow variable `Vlasttf`.) The actual cost function is the *word2vec* objective function, where samples are positively and negatively sampled and then mixed. Let $A$ be a matrix of "attractors", so to speak. (We'll not use that terminology later on.) Then a positively sampled vector $a_p$ and a few negatively sampled ones $a_{n_1}$ and $a_{n_2}$ are all columns in $A$. The loss over a batch $B$ is denoted `tfbatchlo`, and is specified as:$$ \mathcal{L}(v_{last}) = \log \sigma ( v_l^T a_p) + \sum_j \log \sigma( -1 \cdot v_l^T a_{n_j} )$$
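As a minimal NumPy sketch of this per-vector objective (purely illustrative: the names `v_l`, `a_p`, `a_n` and the dimensions are assumptions, not taken from the model code below, which works on whole batches and minimizes the negative of this quantity averaged over the batch):
###Code
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.RandomState(0)
v_l = rng.randn(40)          # one embedding vector v_l
a_p = rng.randn(40)          # positively sampled column of A
a_n = rng.randn(40, 2)       # two negatively sampled columns of A

# word2vec-style objective for a single embedding vector (to be maximized)
objective = np.log(sigmoid(v_l @ a_p)) + np.sum(np.log(sigmoid(-(a_n.T @ v_l))))
print(objective)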
###Code
def scope(function):
attribute = '_cache_' + function.__name__
name = function.__name__
@property
@functools.wraps(function)
def decorator(self):
if not hasattr(self,attribute):
with tf.device("/gpu:0"):
with tf.variable_scope(name):
setattr(self,attribute,function(self))
return getattr(self,attribute)
return decorator
class L41Broadcast:
def __init__(self, X, Y, F, I, layer_size, embedding_size, num_labels):
self.Vclass = tf.Variable(tf.random_normal( [embedding_size, num_labels, F], stddev=0.08 ),
dtype=tf.float32,
name = 'Vclass')
self.X = X
self.Y = Y
self.F = F
self.I = I
self.layer_size = layer_size
self.embedding_size = embedding_size
self.network
self.cost
self.optimizer
def weight_variable(self,shape):
initial = tf.truncated_normal(shape, stddev=tf.sqrt(2.0/shape[0]))
return tf.Variable(initial)
def conv1d(self,x, W):
return tf.nn.conv1d(x, W, stride=1, padding='SAME')
def conv1d_layer(self,in_layer,shape):
weights = self.weight_variable(shape)
biases = self.weight_variable([shape[-1]])
return self.conv1d(in_layer,weights) + biases
def BLSTM(self, X, size, scope):
forward_input = X
backward_input = tf.reverse(X, [1])
with tf.variable_scope('forward_' + scope):
forward_lstm = tf.contrib.rnn.BasicLSTMCell(size//2)
forward_out, f_state = tf.nn.dynamic_rnn(forward_lstm, forward_input, dtype=tf.float32)
with tf.variable_scope('backward_' + scope):
backward_lstm = tf.contrib.rnn.BasicLSTMCell(size//2)
backward_out, b_state = tf.nn.dynamic_rnn(backward_lstm, backward_input, dtype=tf.float32)
return tf.concat([forward_out[:,:,:], backward_out[:,::-1,:]], 2)
@scope
def network(self):
shape = tf.shape(self.X)
BLSTM_1 = self.BLSTM(self.X, self.layer_size, 'one')
BLSTM_2 = self.BLSTM(BLSTM_1, self.layer_size, 'two')
feedforward = self.conv1d_layer(BLSTM_2,[1,self.layer_size,self.embedding_size*self.F])
embedding = tf.reshape(feedforward,[shape[0],shape[1],self.F,self.embedding_size])
embedding = tf.nn.l2_normalize(embedding,3)
return embedding
@scope
def cost(self):
Xshape=tf.shape(self.X)
Yshape=tf.shape(self.Y)
# things that are necessary for the cost function
Vin = self.network
I = tf.expand_dims( self.I, axis=2 )
Y = self.Y
Vclass = self.Vclass
# l2 normalization
Vclass = tf.nn.l2_normalize(Vclass, 0)
# gather the appropriate vectors
Vout = tf.gather_nd( tf.transpose(Vclass, perm=[1,2,0]), I )
# Broadcasted Vi and Vo
Vinbroad = tf.reshape( Vin, [Yshape[0], 1, Yshape[2], Yshape[3], self.embedding_size])
Voutbroad= tf.reshape( Vout, [Yshape[0], Yshape[1], 1, Yshape[3], self.embedding_size] )
# Correlate all the vectors:
lossfxn = - tf.log( tf.nn.sigmoid( Y * tf.reduce_sum(Vinbroad * Voutbroad, 4) ) )
# Sum correlations over positive and negative correlations
lossfxn = tf.reduce_sum( lossfxn, 1 )
# Average over all the batches
lossfxn = tf.reduce_mean( lossfxn, 0)
# To do: put weight by pre-emphasis or gradient confidence
lossfxn = tf.reduce_mean( lossfxn )
return lossfxn
@scope
def optimizer(self):
opt = tf.train.AdamOptimizer()
cost = self.cost
return opt.minimize(cost)
tf.reset_default_graph()
F = 257
layer_size=600
embedding_size=40
X = tf.placeholder("float", [None,None,F])
Y = tf.placeholder("float", [None, None,None,F])
I = tf.placeholder(dtype=tf.int32)
num_labels=251
model = L41Broadcast(X, Y, F, I, layer_size, embedding_size, num_labels)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
iterations = []
costs = []
if restore_session:
saver.restore(sess, '/data/fs4/home/kni/magnolia/models/l41-model-2spkr86.h5')
print("Initialized")
for iteration in range(1000000):
# Preprocessing
Xdata, Ydata, Idata = mixer.get_batch(batchsize, out_TF=None)
Xin = np.sqrt( abs(Xdata) )
Xin = (Xin - Xin.min()) / (Xin.max() - Xin.min())
optloss, cost = sess.run([model.optimizer, model.cost], feed_dict={X: Xin, Y:Ydata, I:Idata})
costs += [cost]
sys.stdout.write('\rIteration '+str(iteration)+', Cost function = '+str(cost))
if not ((iteration+1) % 1000):
save_path = saver.save(sess, "/data/fs4/home/kni/magnolia/models/l41-model-2spkr-expt-pos.h5")
def meanfilter(costs):
return np.convolve(np.array(costs), 1/100*np.ones(100), mode='valid')
smoothcosts = meanfilter(costs)
plt.plot(np.array(smoothcosts))
from src.utils.clustering_utils import get_cluster_masks
from src.features.hdf5_iterator import Hdf5Iterator
def sigmoid(x):
return 1/(1+np.exp(-x))
if False:
if numsources == 3:
longmixer = SupervisedMixer([libritrain,libritrain,libritrain], shape=(200,257),
mix_method='add', diffseed=True, return_key=True)
elif numsources == 2:
longmixer = SupervisedMixer([libritrain,libritrain], shape=(100,257),
mix_method='add', diffseed=True, return_key=True)
# Check the time
tbeg = time.clock()
Xtest, Ytest, Itest = longmixer.get_batch(2, out_TF=None)
Xin = np.sqrt( abs(Xtest) )
Xin = (Xin - Xin.min()) / (Xin.max() - Xin.min())
tend = time.clock()
print('Supervised feature mixer with 3 libridev sources timed at ', (tend-tbeg), 'sec')
Vin, Vcl = sess.run([model.network, model.Vclass], feed_dict={X: abs(Xin), Y:Ytest, I:Idata})
masks = get_cluster_masks(Vin, 2)
plt.figure(figsize=(12,12));
plt.subplot(121); plt.imshow( masks[:,:,0].T, aspect=.2, cmap='bone' )
plt.subplot(122); plt.imshow( Ytest[0,0].T, aspect=.2, cmap='bone' )
plt.figure(figsize=(12,12));
plt.subplot(121); plt.imshow( masks[:,:,1].T, aspect=.2, cmap='bone' )
plt.subplot(122); plt.imshow( Ytest[0,1].T, aspect=.2, cmap='bone' )
if numsources == 3:
plt.figure(figsize=(12,12));
plt.subplot(121); plt.imshow( masks[:,:,2].T, aspect=.2, cmap='bone' )
plt.subplot(122); plt.imshow( Ytest[0,2].T, aspect=.2, cmap='bone' )
from src.utils.postprocessing import reconstruct
from IPython.display import Audio
from IPython.display import display
masks = get_cluster_masks(abs(Vin), 2)
masks = masks.transpose(2,0,1)
Ytest = (Ytest + 1)/2
# Stupid hack, there's a better way to do this
mask = masks[0]
soundshape = reconstruct( (abs(Xtest[0]) * mask), np.angle(Xtest[0]), 10000, 0.0512, 0.0256 ).shape
Xsound = np.zeros( (numsources+1, soundshape[0]) )
Ysound = np.zeros( (numsources, soundshape[0]) )
Xsound[0] = reconstruct( abs(Xtest[0]), Xtest[0], 10000, 0.0512, 0.0256 )
for i, mask in enumerate(masks):
Xsound[i+1] = reconstruct( abs(Xtest[0]) * mask, Xtest[0], 10000, 0.0512, 0.0256 )
Ysound[i] = reconstruct( abs(Xtest[0]) * Ytest[0,i], Xtest[0], 10000, 0.0512, 0.0256 )
print("ORIGINAL")
display(Audio(Xsound[0], rate=10000))
print("IDEAL MASK 1")
display(Audio(Ysound[0], rate=10000))
print("PREDICTED MASK 1")
display(Audio(Xsound[1], rate=10000))
print("IDEAL MASK 2")
display(Audio(Ysound[1], rate=10000))
print("PREDICTED MASK 2")
display(Audio(Xsound[2], rate=10000))
# Vcl_30k = sess.run( tf.trainable_variables()[1] )
# Vcl_31k = sess.run( tf.trainable_variables()[1] )
tf.trainable_variables()
###Output
_____no_output_____ |
voila_catdog_classifier.ipynb | ###Markdown
A Naive Pet Guesser
###Code
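# NOTE: this cell assumes the usual setup from earlier in the chapter -- the imports
# below and a previously exported cat/dog Learner. The 'export.pkl' path is an
# assumption; point load_learner at wherever your trained model was actually saved.
from fastai.vision.all import PILImage, load_learner
from IPython.display import display
import ipywidgets as widgets
from ipywidgets import VBox
from types import SimpleNamespace

learn_inf = load_learner('export.pkl')  # hypothetical path to the exported model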
btn_upload = SimpleNamespace(data = ['images/chapter1_cat_example.jpg'])
img = PILImage.create(btn_upload.data[-1])
out_pl = widgets.Output()
out_pl.clear_output()
with out_pl: display(img.to_thumb(128,128))
# out_pl
pred,pred_idx,probs = learn_inf.predict(img)
lbl_pred = widgets.Label()
lbl_pred.value = f"It's probably a {pred} with a probability of {probs[pred_idx]*100.:.02f} percent"
# lbl_pred
btn_run = widgets.Button(description='Classify')
# btn_run
def on_click_classify(change):
img = PILImage.create(btn_upload.data[-1])
out_pl.clear_output()
with out_pl: display(img.to_thumb(128,128))
pred,pred_idx,probs = learn_inf.predict(img)
    lbl_pred.value = f"It's probably a {pred} with a probability of {probs[pred_idx]*100.:.02f} percent"
btn_run.on_click(on_click_classify)
#hide
#Putting back btn_upload to a widget for next cell
btn_upload = widgets.FileUpload()
VBox([widgets.Label('Upload your pet portrait!'),
btn_upload, btn_run, out_pl, lbl_pred])
###Output
_____no_output_____ |
Tweet-BERT/6.1.Kubeflow-Train-BYOC.ipynb | ###Markdown
[Module 6.1] Train a Model using SageMaker Component in Kubeflow
**If you have not yet read the [related guide](install_EKS_Kubeflow/README.md), please read it and follow it before running this notebook.**
This notebook runs on a Kubeflow notebook server and performs the following steps:
- Install the required packages (AWS boto3, Kubeflow Pipelines SDK)
- Load the SageMaker components
- Configure the input data in S3
- Define the Kubeflow pipeline
- Run a Kubeflow experiment
**Some of the code below is hard-coded (e.g. the S3 data paths and the Region name). These values depend on your EKS/Kubeflow installation and on where your data lives, so change them once your own environment is set up.**
---
This notebook takes **about 15 minutes** to run; training on two ml.p3.2xlarge instances takes roughly 15 minutes. The screenshots referenced below show: the Kubeflow Dashboard after this notebook has finished, the SageMaker training job running in the Kubeflow pipeline, the SageMaker create-model job running in the pipeline, and the training job as confirmed in the SageMaker Console. Install the AWS boto3 package **If `pip install boto3` below fails, restart the kernel and run it again.**
###Code
! pip install boto3 --user
###Output
_____no_output_____
###Markdown
Install Kubeflow Pipelines SDK
###Code
!pip install https://storage.googleapis.com/ml-pipeline/release/0.1.29/kfp.tar.gz --upgrade --user
###Output
_____no_output_____
###Markdown
**If your Region is not ap-northeast-2, change it to the Region you are actually using.**
###Code
import boto3
#################################
#################################
# REPLACE AWS_REGION= with the current region
# surround with single quotes
AWS_REGION='ap-northeast-2'
AWS_ACCOUNT_ID=boto3.client('sts').get_caller_identity().get('Account')
print('Account ID: {}'.format(AWS_ACCOUNT_ID))
S3_BUCKET='sagemaker-{}-{}'.format(AWS_REGION, AWS_ACCOUNT_ID)
print('S3 Bucket: {}'.format(S3_BUCKET))
###Output
_____no_output_____
###Markdown
Build Pipeline 1. Run the following command to load Kubeflow Pipelines SDK
###Code
import kfp
from kfp import components
from kfp import dsl
from kfp.aws import use_aws_secret
###Output
_____no_output_____
###Markdown
2.Load reusable sagemaker components
###Code
# The line below is the older version of sagemaker_train_op
# sagemaker_train_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0ad6c28d32e2e790e6a129b7eb1de8ec59c1d45f/components/aws/sagemaker/train/component.yaml')
sagemaker_train_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/train/component.yaml')
sagemaker_model_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0ad6c28d32e2e790e6a129b7eb1de8ec59c1d45f/components/aws/sagemaker/model/component.yaml')
sagemaker_deploy_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0ad6c28d32e2e790e6a129b7eb1de8ec59c1d45f/components/aws/sagemaker/deploy/component.yaml')
###Output
_____no_output_____
###Markdown
**The S3 train, validation, and test paths below are hard-coded.** Go back to the SageMaker notebooks, run "1.1.Prepare-Tweet-Data.ipynb" and "2.1.Convert-Input-TFRecord.ipynb", print the values of the variables below, and assign those values to the variables in the next cell:
```
print(processed_train_data_s3_uri)
print(processed_validation_data_s3_uri)
print(processed_test_data_s3_uri)
```
Define the input data **[Important] You must edit each of the paths below.**
###Code
s3_train = "<print(processed_train_data_s3_uri)>"
print("s3_train: \n", s3_train)
s3_validation = "<print(processed_validation_data_s3_uri)>"
print("s3_validation: \n", s3_validation)
s3_test = "<print(processed_test_data_s3_uri)>"
print("s3_test: \n", s3_test)
###Output
_____no_output_____
###Markdown
Define the input channels as follows.
###Code
channels='[ \
{ \
"ChannelName": "train", \
"DataSource": { \
"S3DataSource": { \
"S3DataType": "S3Prefix", \
"S3Uri": "'+s3_train+'", \
"S3DataDistributionType": "ShardedByS3Key" \
} \
}, \
"CompressionType": "None", \
"RecordWrapperType": "None" \
}, \
{ \
"ChannelName": "validation", \
"DataSource": { \
"S3DataSource": { \
"S3DataType": "S3Prefix", \
"S3Uri": "'+s3_validation+'", \
"S3DataDistributionType": "ShardedByS3Key" \
} \
}, \
"CompressionType": "None", \
"RecordWrapperType": "None" \
}, \
{ \
"ChannelName": "test", \
"DataSource": { \
"S3DataSource": { \
"S3DataType": "S3Prefix", \
"S3Uri": "'+s3_test+'", \
"S3DataDistributionType": "ShardedByS3Key" \
} \
}, \
"CompressionType": "None", \
"RecordWrapperType": "None" \
} \
]'
###Output
_____no_output_____
###Markdown
Define the parameters
###Code
epochs= "2"
train_steps_per_epoch= "10"
max_seq_length = "32"
learning_rate= "1e-5"
epsilon= "0.00000001"
train_batch_size= "128"
validation_batch_size= "128"
test_batch_size= "128"
validation_steps= "100"
test_steps= "100"
train_instance_count= "2"
train_instance_type='ml.p3.2xlarge'
train_volume_size= "1024"
use_xla= "True"
use_amp= "True"
freeze_bert_layer= "True"
enable_checkpointing= "True"
input_mode='Pipe'
###Output
_____no_output_____
###Markdown
3. Create Pipeline **Enter the role that SageMaker training will run with.** You must provide the ARN of the role you configured yourself.
###Code
# SAGEMAKER_ROLE_ARN = 'arn:aws:iam::343441690612:role/service-role/AmazonSageMaker-ExecutionRole-20200801T163342'
SAGEMAKER_ROLE_ARN = "<>"
###Output
_____no_output_____
###Markdown
"4.2.1.Make-Custom-Inference-Image-ECR.ipynb" 에서 생성하여 ECR에 등록한 Image의 ARN을 아래에 넣어 주세요아래를 각자의 환경에 맞게 수정하셔야 합니다.아래 예시 처럼 ECR 에 가셔서 본인의 이미지 확인 하세요.
###Code
# AWS_ECR_TRAIN_REGISTRY = "343441690612.dkr.ecr.ap-northeast-2.amazonaws.com/bert2tweet:latest"
AWS_ECR_TRAIN_REGISTRY = "<>"
# Create the SageMaker Model object using the image below as the inference image
# TF_INFER_IMAGE = '520713654638.dkr.ecr.ap-northeast-2.amazonaws.com/sagemaker-tensorflow-serving:1.14.0-gpu'
TF_INFER_IMAGE = '<>'
model_output_prefix = 'bert-kf-output/model'
model_output_path = 's3://{}/{}'.format(S3_BUCKET,model_output_prefix )
# model_output_path = 's3://sagemaker-us-west-2-057716757052/sagemaker-scikit-learn-2020-06-28-05-08-39-660/model'
@dsl.pipeline(
name='Tweet BERT Classification pipeline',
description='Tweet BERT Classification using KMEANS in SageMaker'
)
def tweet_BERT(
region = AWS_REGION,
image = AWS_ECR_TRAIN_REGISTRY,
dataset_path = channels,
instance_type = 'ml.p3.2xlarge',
instance_count = 2,
volume_size = '50',
    model_output_path = model_output_path,
role_arn = SAGEMAKER_ROLE_ARN,
network_isolation='False',
traffic_encryption='False',
spot_instance='False'
):
# Component 1
training = sagemaker_train_op(
region = region,
image = image,
channels=channels,
instance_type = instance_type,
instance_count = instance_count,
volume_size = volume_size,
model_artifact_path=model_output_path,
role=role_arn,
network_isolation=network_isolation,
traffic_encryption=traffic_encryption,
spot_instance=spot_instance,
hyperparameters={'epochs': epochs,
'learning_rate': learning_rate,
'epsilon': epsilon,
'train_batch_size': train_batch_size,
'validation_batch_size': validation_batch_size,
'test_batch_size': test_batch_size,
'train_steps_per_epoch': train_steps_per_epoch,
'validation_steps': validation_steps,
'test_steps': test_steps,
'use_xla': use_xla,
'use_amp': use_amp,
'max_seq_length': max_seq_length,
'freeze_bert_layer': freeze_bert_layer,
'enable_checkpointing': enable_checkpointing
},
).apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
# Component 2
create_model = sagemaker_model_op(
region = region,
image = TF_INFER_IMAGE,
model_artifact_url = training.outputs['model_artifact_url'],
model_name = training.outputs['job_name'],
role = role_arn
).apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
# # Component 3
# prediction = sagemaker_deploy_op(
# region=region,
# model_name=create_model.output
# ).apply(use_aws_secret('aws-secret', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'))
###Output
_____no_output_____
###Markdown
Compile the tweet_BERT function into tweet_BERT.zip.
###Code
kfp.compiler.Compiler().compile(tweet_BERT, 'tweet_BERT.zip')
###Output
_____no_output_____
###Markdown
Unzipping the archive below produces a YAML file that contains the pipeline definition:
```
Archive: ./tweet_BERT.zip
  inflating: pipeline.yaml
```
###Code
!unzip -o ./tweet_BERT.zip
# !cat pipeline.yaml
import time
###Output
_____no_output_____
###Markdown
4. Create Kubeflow Experiment Creating and running a Kubeflow experiment executes pipeline.yaml on SageMaker.
###Code
client = kfp.Client()
aws_experiment = client.create_experiment(name='aws')
exp_name = f'tweet-BERT-train-deploy-kfp-{time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())}'
my_run = client.run_pipeline(aws_experiment.id, exp_name, 'tweet_BERT.zip')
###Output
_____no_output_____ |
lessons/M2L02_linear_algebra.ipynb | ###Markdown
Linear Algebra In this lesson we will work with linear algebra operations in Python, in particular the linear algebra module of the `SciPy` library ([`scipy.linalg`](https://docs.scipy.org/doc/scipy/reference/linalg.html#module-scipy.linalg)). Preliminaries First of all, `scipy.linalg` contains all of the functions in [`numpy.linalg`](https://www.numpy.org/devdocs/reference/routines.linalg.html), in some cases with extra functionality, and it also provides some additional functions. There can also be differences in computation speed depending on how NumPy was installed, so SciPy is recommended for linear algebra tasks. Since, once again, we do not want to reinvent the wheel, this lesson is based on the tutorial for SciPy's linear algebra module ([link](https://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html)). Before starting, one point that tends to confuse mathematicians is that 2-D `numpy.array` objects are __not__ matrices in the sense usually studied in mathematics courses. For example, the shapes of two 2-D `numpy.array` objects do not have to be compatible for matrix multiplication, and the default multiplication is element-wise. At some point in NumPy's history a special class, [`numpy.matrix`](https://numpy.org/devdocs/reference/generated/numpy.matrix.html#numpy.matrix), was adopted, but it seems to have brought more confusion with it and it will even be removed in future versions.
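A quick sketch of that last point: for 2-D arrays, `*` multiplies element-wise, while `@` (or `numpy.dot`) performs the matrix product.
```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)  # element-wise product: [[ 5 12] [21 32]]
print(A @ B)  # matrix product:       [[19 22] [43 50]]
```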
###Code
import numpy as np
from scipy import linalg
A = np.array([[1,3,5],[2,5,1],[2,3,8]])
A
linalg.inv(A)
A.dot(linalg.inv(A)) # double check
linalg.inv(A).dot(A) # double double check
###Output
_____no_output_____
###Markdown
```{attention} Note that there is a small precision error, related to NumPy's floating-point precision.``` Solving linear systems It is as simple as using the `linalg.solve` command, which takes a matrix and a vector as inputs and returns the solution vector. For example, the system
$$ \begin{eqnarray*} x + 3y + 5z & = & 10 \\ 2x + 5y + z & = & 8 \\ 2x + 3y + 8z & = & 3 \end{eqnarray*}$$
can be written in matrix form
$$\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right] \left[\begin{array}{c} x\\ y\\ z\end{array}\right]=\left[\begin{array}{c} 10\\ 8\\ 3\end{array}\right]$$
so it can be solved using the inverse
$$\left[\begin{array}{c} x\\ y\\ z\end{array}\right]=\left[\begin{array}{ccc} 1 & 3 & 5\\ 2 & 5 & 1\\ 2 & 3 & 8\end{array}\right]^{-1}\left[\begin{array}{c} 10\\ 8\\ 3\end{array}\right]=\frac{1}{25}\left[\begin{array}{c} -232\\ 129\\ 19\end{array}\right]=\left[\begin{array}{c} -9.28\\ 5.16\\ 0.76\end{array}\right].$$
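As a quick check of the worked example above, a small sketch solving that 3×3 system directly (the cells below use a smaller 2×2 system):
```python
import numpy as np
from scipy import linalg

A = np.array([[1, 3, 5], [2, 5, 1], [2, 3, 8]])
b = np.array([10, 8, 3])
print(linalg.solve(A, b))  # approximately [-9.28, 5.16, 0.76], matching the result above
```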
###Code
A = np.array([[1, 2], [3, 4]])
A
b = np.array([[5], [6]])
b
linalg.inv(A).dot(b)
A.dot(linalg.inv(A).dot(b)) - b # check
###Output
_____no_output_____
###Markdown
However, `linalg.solve` offers a much friendlier and even more efficient way to solve this system.
###Code
linalg.solve(A, b)
A.dot(linalg.solve(A, b)) - b # check
###Output
_____no_output_____
###Markdown
Determinant There is not much to say here, beyond keeping precision issues in mind.
###Code
A = np.array([[1,2],[3,4]])
A
linalg.det(A)
###Output
_____no_output_____
###Markdown
Norms SciPy offers a wide variety of norms that can be selected through an argument of `linalg.norm`. Conveniently, this function can be used on 1-D (vector) or 2-D (matrix) inputs, and it has an optional `ord` argument to choose a particular norm (the default is 2). For a vector $x$,
$$\left\Vert \mathbf{x}\right\Vert =\left\{ \begin{array}{cc} \max\left|x_{i}\right| & \textrm{ord}=\textrm{inf}\\ \min\left|x_{i}\right| & \textrm{ord}=-\textrm{inf}\\ \left(\sum_{i}\left|x_{i}\right|^{\textrm{ord}}\right)^{1/\textrm{ord}} & \left|\textrm{ord}\right|<\infty.\end{array}\right.$$
For a matrix $A$,
$$ \left\Vert \mathbf{A}\right\Vert =\left\{ \begin{array}{cc} \max_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=\textrm{inf}\\ \min_{i}\sum_{j}\left|a_{ij}\right| & \textrm{ord}=-\textrm{inf}\\ \max_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=1\\ \min_{j}\sum_{i}\left|a_{ij}\right| & \textrm{ord}=-1\\ \max\sigma_{i} & \textrm{ord}=2\\ \min\sigma_{i} & \textrm{ord}=-2\\ \sqrt{\textrm{trace}\left(\mathbf{A}^{H}\mathbf{A}\right)} & \textrm{ord}=\textrm{'fro'}\end{array}\right.$$
where $\sigma_{i}$ are the singular values of $\mathbf{A}$.
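For example, a small sketch of the vector case (the cells below cover the matrix norms):
```python
import numpy as np
from scipy import linalg

x = np.array([3, -4])
print(linalg.norm(x))          # 2-norm: 5.0
print(linalg.norm(x, 1))       # 1-norm: 7.0
print(linalg.norm(x, np.inf))  # inf-norm: 4.0
```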
###Code
A=np.array([[1,2],[3,4]])
A
linalg.norm(A)
linalg.norm(A,'fro') # frobenius norm is the default
linalg.norm(A,1) # L1 norm (max column sum)
linalg.norm(A,-1)
linalg.norm(A,np.inf) # L inf norm (max row sum)
###Output
_____no_output_____
###Markdown
Decompositions Eigendecomposition The eigenvalue decomposition shows up in many areas of engineering and mathematics, given the interpretations that eigenvalues and eigenvectors admit. For a square matrix $\mathbf{A}$, the eigenvalue problem is defined as $$\mathbf{Av}=\lambda\mathbf{v}$$ where $\lambda$ is a scalar and $\mathbf{v}$ a vector, called an eigenvalue and eigenvector respectively.
###Code
A = np.array([[1, 2], [3, 4]])
la, v = linalg.eig(A)
l1, l2 = la
print(l1, l2) # eigenvalues
print(v[:, 0]) # first eigenvector
print(v[:, 1]) # second eigenvector
print(np.sum(abs(v**2), axis=0)) # eigenvectors are unitary
v1 = np.array(v[:, 0]).T
print(linalg.norm(A.dot(v1) - l1*v1)) # check the computation
###Output
5.551115123125783e-17
###Markdown
Singular Value Decomposition The singular value decomposition (SVD) can be thought of as an extension of the eigendecomposition to matrices that are not square. For an $M\times N$ matrix $\mathbf{A}$, its singular value decomposition is $$ \mathbf{A=U}\boldsymbol{\Sigma}\mathbf{V}^{H}$$ where $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices of size $M \times M$ and $N \times N$ respectively, and $\mathbf{\boldsymbol{\Sigma}}$ is an $M \times N$ matrix whose main-diagonal entries are nonzero and are usually called singular values, while the entries off the diagonal are zero. __Unitary matrix__: a matrix $\mathbf{D}$ such that $\mathbf{D}^H \mathbf{D} = \mathbf{D}\mathbf{D}^H = \mathbf{I}$, that is, $\mathbf{D}^{-1} = \mathbf{D}^H$.
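A small sketch of one common use of the SVD, a rank-1 approximation built from the largest singular value (illustrative only):
```python
import numpy as np
from scipy import linalg

A = np.array([[1, 2, 3], [4, 5, 6]])
U, s, Vh = linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vh[0, :])  # best rank-1 approximation of A
print(np.round(A1, 2))
```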
###Code
A = np.array([[1,2,3],[4,5,6]])
A
M,N = A.shape
U,s,Vh = linalg.svd(A)
Sig = linalg.diagsvd(s,M,N)
U, Vh = U, Vh
U
Sig
Vh
U.dot(Sig.dot(Vh)) #check computation
###Output
_____no_output_____
###Markdown
LU The LU decomposition of an $M\times N$ matrix $\mathbf{A}$ is $$ \mathbf{A}=\mathbf{P}\,\mathbf{L}\,\mathbf{U},$$ where $\mathbf{P}$ is an $M\times M$ row permutation of the identity matrix, $\mathbf{L}$ is an $M\times K$ lower-triangular matrix with unit diagonal entries (where $K=\min\left(M,N\right)$), and $\mathbf{U}$ is upper triangular.
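In practice the factorization is usually reused to solve linear systems; a small sketch with `lu_factor`/`lu_solve` from `scipy.linalg`:
```python
import numpy as np
from scipy import linalg

A = np.array([[2., 5., 8., 7.], [5., 2., 2., 8.], [7., 5., 6., 6.], [5., 4., 4., 8.]])
b = np.array([1., 1., 1., 1.])

lu, piv = linalg.lu_factor(A)      # compact LU factorization with pivoting
x = linalg.lu_solve((lu, piv), b)  # reuse the factorization to solve A x = b
print(np.allclose(A @ x, b))       # True
```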
###Code
A = np.array(
[
[2, 5, 8, 7],
[5, 2, 2, 8],
[7, 5, 6, 6],
[5, 4, 4, 8]]
)
p, l, u = linalg.lu(A)
l
u
np.allclose(A - p @ l @ u, np.zeros((4, 4)))
###Output
_____no_output_____
###Markdown
Cholesky The Cholesky decomposition is a special case of the LU decomposition that applies to Hermitian positive-definite matrices. When $\mathbf{A}=\mathbf{A}^{H}$ and $\mathbf{x}^{H}\mathbf{Ax}\geq 0$ for every $\mathbf{x}$, $$ \begin{eqnarray*} \mathbf{A} & = & \mathbf{U}^{H}\mathbf{U}\\ \mathbf{A} & = & \mathbf{L}\mathbf{L}^{H}\end{eqnarray*},$$ where $\mathbf{L}$ is lower triangular and $\mathbf{U}$ is upper triangular. __Hermitian matrix:__ a matrix $\mathbf{D}$ such that $\mathbf{D}^H = \mathbf{D}$.
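A small sketch of using the factorization to solve a system with `cho_factor`/`cho_solve` from `scipy.linalg` (same matrix as the cell below):
```python
import numpy as np
from scipy import linalg

A = np.array([[1, -2j], [2j, 5]])
b = np.array([1.0, 1.0])

c, low = linalg.cho_factor(A)      # Cholesky factor of the Hermitian positive-definite A
x = linalg.cho_solve((c, low), b)  # solve A x = b using that factor
print(np.allclose(A @ x, b))       # True
```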
###Code
A = np.array([[1,-2j],[2j,5]])
L = linalg.cholesky(A, lower=True)
L
L @ L.T.conj()
np.allclose(A, L @ L.T.conj())
###Output
_____no_output_____
###Markdown
QR The QR decomposition applies to any $M\times N$ matrix and produces an $M\times M$ unitary matrix $\mathbf{Q}$ and an $M\times N$ upper-trapezoidal matrix $\mathbf{R}$ such that $$\mathbf{A=QR}$$
###Code
A = np.random.randn(9, 6)
q, r = linalg.qr(A)
np.allclose(A, np.dot(q, r))
q.shape, r.shape
###Output
_____no_output_____ |
notebooks/citelearn-model-fa-cldata.ipynb | ###Markdown
CiteLearn Model This workbook steps through the creation of a model for detecting sentences which need citations. It draws on Wikipedia Featured Article content prepared as detailed at https://github.com/thelondonsimon/citelearn-model. A pre-trained BERT word embedding model is used to process each sentence, and this representation is used to predict each sentence's 'Citation Needed' flag.
###Code
!pip install -q tensorflow-text tf-models-official
import math
import datetime
import os
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text
from official.nlp import optimization # to create AdamW optimizer
import matplotlib.pyplot as plt
from google.colab import drive
import pandas as pd
import numpy as np
# the training data used in this notebook comes from Google Drive
drive.mount('/content/gdrive')
###Output
Mounted at /content/gdrive
###Markdown
Example Data The example data for the model comes from the sample Wikipedia featured articles collected in the development of the original citation-needed model.
###Code
AUTOTUNE = tf.data.AUTOTUNE
BATCH_SIZE = 32
# this model only makes use of the Sentence and HasCitation flags for its predictions
fa = pd.read_csv("/content/gdrive/MyDrive/citelearn/data/featured_articles/fa-sentences.txt",
sep="\t",
header = 0,
names = ['ArticleId','H2Heading','H3Heading','H2Idx','H3Idx','ParIdx','SenIdx','Sentence','HasCitation','ParHasCitation','PrevSenHasCitation','NextSenHasCitation'],
usecols = [7,8]
)
fa["HasCitation"] = fa["HasCitation"].astype(int)
# remove any sentences with less than five words
fa['WordCount'] = fa['Sentence'].str.split().str.len()
fa = fa[fa['WordCount'] >= 5]
fa = fa.drop(columns=['WordCount'])
# Create a TensorFlow Dataset from the textual feature and label
target = fa.pop('HasCitation')
fa.dataset = tf.data.Dataset.from_tensor_slices((fa.values, target.values))
# Shuffle the dataset and split into training, validation and test
fa.dataset = fa.dataset.shuffle(len(fa.dataset)).batch(BATCH_SIZE, drop_remainder = True)
trainCount = math.floor(len(fa.dataset) * 0.6)
valCount = math.floor(len(fa.dataset) * 0.2)
testCount = len(fa.dataset) - trainCount - valCount
fa.train = fa.dataset.take(trainCount + valCount).cache().prefetch(buffer_size=AUTOTUNE)
fa.val = fa.train.skip(trainCount).cache().prefetch(buffer_size=AUTOTUNE)
fa.train = fa.train.take(trainCount).cache().prefetch(buffer_size=AUTOTUNE)
fa.test = fa.dataset.skip(trainCount + valCount).cache().prefetch(buffer_size=AUTOTUNE)
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:20: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:29: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:30: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:32: UserWarning: Pandas doesn't allow columns to be created via a new attribute name - see https://pandas.pydata.org/pandas-docs/stable/indexing.html#attribute-access
###Markdown
Model Specification The model utilises BERT preprocessing and encoding layers in the development of the final model.
###Code
# The model will use a Small BERT uncased model of word embeddings: bert_en_uncased_L-4_H-512_A-8
# and its matching preprocessor
bert_model_name = 'bert_en_uncased_L-4_H-512_A-8'
bert_model_ref = 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1'
bert_preprocess_ref = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
# Override with the wiki_books BERT expert model (comment out the next two lines to keep the Small BERT defined above)
bert_model_name = 'wiki_books'
bert_model_ref = 'https://tfhub.dev/google/experts/bert/wiki_books/2'
DROPOUT = 0.1
INIT_LR = 3e-5
OPTIMIZER = 'adamw'
# Define the model layers:
# text => preprocessing => BERT => Dropout => Dense
def build_classifier_model():
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
preprocessing_layer = hub.KerasLayer(bert_preprocess_ref, name='preprocessing')
encoder_inputs = preprocessing_layer(text_input)
encoder = hub.KerasLayer(bert_model_ref, trainable=True, name='BERT_encoder')
outputs = encoder(encoder_inputs)
net = outputs['pooled_output']
net = tf.keras.layers.Dropout(DROPOUT)(net)
net = tf.keras.layers.Dense(1, activation=None, name='classifier')(net)
return tf.keras.Model(text_input, net)
classifier_model = build_classifier_model()
# Configure the loss, metric and optimizer for the model
loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
metrics = tf.metrics.BinaryAccuracy()
epochs = 5
steps_per_epoch = tf.data.experimental.cardinality(fa.train).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
optimizer = optimization.create_optimizer(init_lr=INIT_LR,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
# compile the model
classifier_model.compile(optimizer=optimizer,
loss=loss,
metrics=metrics)
print(f'Training model with {bert_model_name}')
history = classifier_model.fit(x=fa.train,
validation_data=fa.val,
epochs=epochs)
# evaluate the model on the test data set
testLoss, testAccuracy = classifier_model.evaluate(fa.test)
# save the model
dt = datetime.datetime.now()
modelName = 'citelearn_fa_all_bert_' + dt.strftime('%Y%m%d_%H%M')
modelPathname = '/content/gdrive/MyDrive/citelearn/models/' + modelName
classifier_model.save(modelPathname, include_optimizer=False)
# use the fit function's history to record the performance of the model
history_dict = history.history
historyDf = pd.DataFrame(data = history_dict)
historyDf['testLoss'] = testLoss
historyDf['testAccuracy'] = testAccuracy
historyDf['epoch'] = range(1,len(historyDf)+1)
historyDf['modelname'] = modelName
historyDf['datetime'] = dt.strftime('%Y%m%d %H:%M')
historyDf['config'] = 'BERT: ' + bert_model_name + ' ; Dropout: ' + "{:6.3f}".format(DROPOUT) + ' ; Init LR: ' + "{:6.5f}".format(INIT_LR) + ' ; Optimizer: adamw'
historyDf.to_csv('/content/gdrive/MyDrive/citelearn/model_performance.csv', mode='a', header=False)
# plot progress at each epoch
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
fig = plt.figure(figsize=(10, 6))
fig.tight_layout()
plt.subplot(2, 1, 1)
plt.plot(epochs, loss, 'r', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'r', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
# sample prediction from the model
reloaded_model = tf.saved_model.load(modelPathname)
tf.sigmoid(reloaded_model(tf.constant(['Natasha "Tasha" Yar is a fictional character that mainly appeared in the first season of the American science fiction television series Star Trek: The Next Generation.'])))
###Output
_____no_output_____ |
Week_7_XGboost_with_LightGBM.ipynb | ###Markdown
###Code
import pandas as pd, numpy as np
import lightgbm
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import roc_auc_score
from sklearn.metrics import classification_report
df = pd.read_csv('/content/Telco_customer_churn.csv')
df.head(3)
# Let's delete data leakage columns and columns with 1 unique catigory
drop_cols = ['Churn Label', 'Churn Score', 'CLTV', 'Churn Reason', 'Count', 'Country', 'State', 'Lat Long', 'CustomerID']
df.drop(drop_cols, axis=1, inplace=True)
# We already know that NaNs are not mapped. But we have white spaces in data, so we will replace them with NaNs
# df.loc[df['Total Charges'] == ' ']
df.replace(r'^\s*$', np.nan, regex=True, inplace=True)
# df.isna().sum()
# Replace ',' with '.' to convert to float
to_be_numeric = ['Latitude', 'Longitude', 'Monthly Charges', 'Total Charges']
for col in to_be_numeric:
df[col].replace(',', '.', regex=True, inplace=True)
df[col] = pd.to_numeric(df[col])
# Let's find categorical columns
cat_columns = [cname for cname in df.columns if df[cname].dtype == "object"]
# We need 'categorical' type for categorical columns for lightgbm
for col in df.columns:
if col in cat_columns:
df[col] = df[col].astype('category')
# Splitting the data
X = df.drop('Churn Value', axis=1)
y = df['Churn Value']
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)
# Base model
lgb_base = lightgbm.LGBMClassifier(random_state=0)
lgb_base.fit(X_train, y_train, eval_set=(X_valid, y_valid), eval_metric='auc', verbose=0)
print(f'roc_auc of base model: {roc_auc_score(y_valid, lgb_base.predict(X_valid))}\n')
print(classification_report(y_valid, lgb_base.predict(X_valid)))
plot_confusion_matrix(lgb_base, X_valid, y_valid, values_format='d', display_labels=['Did not leave', 'Left'])
%%time
param_grid = {
'max_depth': [4, 9, -1],
'num_leaves': [4, 9],
'learning_rate': [0.1],
'scale_pos_weight': [3],
'n_estimators': [50, 100],
'reg_lambda': [10, 15],
'subsample': [0.9],
'colsample_bytree': [0.5, 0.6]
}
lgb = lightgbm.LGBMClassifier(random_state=1)
opt_params = GridSearchCV(estimator=lgb,
param_grid=param_grid,
scoring='roc_auc',
cv=3)
opt_params.fit(X_train, y_train, eval_metric='auc', eval_set=(X_valid, y_valid), verbose=0)
params = opt_params.best_params_
print(params)
lgb = lightgbm.LGBMClassifier(**params)
lgb.fit(X_train, y_train)
print(f'roc_auc of final model: {roc_auc_score(y_valid, lgb.predict(X_valid))}\n')
print(classification_report(y_valid, lgb.predict(X_valid)))
plot_confusion_matrix(lgb, X_valid, y_valid, values_format='d', display_labels=['Did not leave', 'Left'])
###Output
roc_auc of final model: 0.7719518513036306
precision recall f1-score support
0 0.92 0.72 0.81 1048
1 0.50 0.83 0.62 361
accuracy 0.74 1409
macro avg 0.71 0.77 0.72 1409
weighted avg 0.82 0.74 0.76 1409
|
Laboratorios/C2_machine_learning/04_analisis_no_supervisado/03_analisis_no_supervisado.ipynb | ###Markdown
MAT281 - Unsupervised Learning Lesson objectives
* Learn the basic concepts of unsupervised learning in Python.
Contents
* [Unsupervised learning](#c1)
* [Examples in Python](#c2)
I.- Unsupervised learning Unsupervised learning is a machine learning method in which a model is fitted to the observations. It differs from supervised learning in that there is no a priori knowledge (no labels). In unsupervised learning a data set of input objects is processed; the input objects are typically treated as a set of random variables, and a density model is built for the data set. K-means [K-means](https://es.wikipedia.org/wiki/K-means) is probably one of the best-known clustering algorithms and, in a broader sense, one of the best-known unsupervised learning techniques. K-means is actually a very simple algorithm that works by minimizing the within-cluster sum of squared distances to the cluster mean. Mathematically:
\begin{align*}(P) \ \textrm{Minimize } f(C_l,\mu_l) = \sum_{l=1}^k \sum_{x_n \in C_l} ||x_n - \mu_l ||^2 \textrm{, with respect to } C_l, \mu_l,\end{align*}
where $C_l$ is the l-th cluster and $\mu_l$ is the l-th centroid. The problem above is NP-hard (not solvable in polynomial time, in the hardest class of NP problems). II.- Example in Python a) K-means example Let us look at an unsupervised analysis example using the **k-means** algorithm.
###Code
# libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_blobs  # samples_generator was removed in recent sklearn versions
pd.set_option('display.max_columns', 500) # Show more dataframe columns
# Render matplotlib plots inline in jupyter notebook/lab
%matplotlib inline
def init_blobs(N, k, seed=42):
X, y = make_blobs(n_samples=N, centers=k,
random_state=seed, cluster_std=0.60)
return X
# generate data
X = init_blobs(10000, 6, seed=43)
df = pd.DataFrame(X, columns=["x", "y"])
df.head()
###Output
_____no_output_____
###Markdown
Since we are working with the concept of distance, the columns of the dataframe are often on different scales, which can cause trouble for the algorithms used (at least with **sklearn**). In these cases the features are usually **scaled**, i.e. the values are mapped to a bounded range and/or fixed statistics. For example, **sklearn** provides the following ways to scale:
* **StandardScaler**: normalizes by subtracting the mean and dividing by the standard deviation, $$x_{prep} = \dfrac{x-u}{s}$$ The advantage is that the transformed data has mean $\mu$ equal to zero and standard deviation $s$ equal to 1.
* **MinMaxScaler**: scales using the minimum and maximum values of the data set, $$x_{prep} = \dfrac{x-x_{min}}{x_{max}-x_{min}}$$ This form of scaling is useful when the standard deviation $s$ is very small (close to zero), which makes it a more robust estimator than **StandardScaler**.
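As a small sketch (the cells below use `StandardScaler`), the `MinMaxScaler` variant would look like this:
```python
from sklearn.preprocessing import MinMaxScaler

minmax = MinMaxScaler()  # maps each column to the [0, 1] range
df_minmax = df.copy()
df_minmax[["x", "y"]] = minmax.fit_transform(df_minmax[["x", "y"]])
df_minmax.describe()     # min should now be 0 and max should be 1
```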
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
columns = ['x', 'y']
df[columns] = scaler.fit_transform(df[columns])
df.head()
# check the statistics of the scaled data
df.describe()
###Output
_____no_output_____
###Markdown
With this preprocessing in place, we proceed to plot the data:
###Code
# plot
sns.set(rc={'figure.figsize':(11.7,8.27)})
ax = sns.scatterplot( data=df,x="x", y="y")
###Output
_____no_output_____
###Markdown
Now we fit the **KMeans** algorithm from **sklearn**. First, let us understand its most important hyperparameters (a short sketch with explicit values follows the list):
- **n_clusters**: the number of clusters to create, i.e. **K**. The default is 8.
- **init**: the initialization method. One problem with the K-means algorithm is that the solution found depends on the initialization of the centroids. `sklearn` uses the `k-means++` method by default, a more modern scheme that gives better results than random initialization.
- **n_init**: the number of initializations to try. `KMeans` basically runs the algorithm `n_init` times and keeps the clustering that minimizes the inertia.
- **max_iter**: the maximum number of iterations before reaching the stopping criterion.
- **tol**: the tolerance used for the stopping criterion (the larger it is, the sooner the algorithm stops).
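A minimal sketch making those hyperparameters explicit (the values shown are the classic sklearn defaults, apart from `n_clusters`):
```python
from sklearn.cluster import KMeans

kmeans_explicit = KMeans(
    n_clusters=6,      # K
    init="k-means++",  # default initialization
    n_init=10,         # number of random re-initializations
    max_iter=300,      # iteration cap per run
    tol=1e-4,          # stopping tolerance
)
kmeans_explicit.fit(X)
```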
###Code
# fit the model: k-means
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=6)
kmeans.fit(X)
centroids = kmeans.cluster_centers_ # cluster centers
clusters = kmeans.labels_ # cluster labels
# label the data with the clusters that were found
df["cluster"] = clusters
df["cluster"] = df["cluster"].astype('category')
centroids_df = pd.DataFrame(centroids, columns=["x", "y"])
centroids_df["cluster"] = [1,2,3,4,5,6]
# plot the data labelled by k-means
fig, ax = plt.subplots(figsize=(11, 8.5))
sns.scatterplot( data=df,
x="x",
y="y",
hue="cluster",
legend='full',
palette="Set2")
sns.scatterplot(x="x", y="y",
s=100, color="black", marker="x",
data=centroids_df)
###Output
_____no_output_____
###Markdown
The question that now arises naturally is: how do we choose the best number of clusters? There is no objective, universally valid criterion for choosing the optimal number of clusters. Even so, several methods have been implemented that help pick an appropriate number of clusters for the data, such as:
* the elbow method
* the Calinski-Harabasz criterion
* Affinity Propagation (AP)
* the Gap statistic
* dendrograms
* etc.
Elbow method This method uses the values of the loss function, $f(C_l,\mu_l)$, obtained after running $K$-means with different numbers of clusters (from 1 to $N$). Once the loss values for 1 to $N$ clusters are available, we plot the loss as a line against the number of clusters. In this plot there should be a sharp change in the evolution of the loss, so that the curve looks roughly like an arm with its elbow. The point where that sharp change occurs indicates the optimal number of clusters for the data set; in other words, the point corresponding to the elbow of the arm is the optimal number of clusters for that data set.
###Code
# implementation of the elbow rule
Nc = range(1, 15)
kmeans = [KMeans(n_clusters=i) for i in Nc]
score = [kmeans[i].fit(df).inertia_ for i in range(len(kmeans))]
df_Elbow = pd.DataFrame({'Number of Clusters':Nc,
'Score':score})
df_Elbow.head()
# plot the elbow curve
Nc = range(1, 15)
kmeans = [KMeans(n_clusters=i) for i in Nc]
fig, ax = plt.subplots(figsize=(11, 8.5))
plt.title('Elbow Curve')
sns.lineplot(x="Number of Clusters",
y="Score",
data=df_Elbow)
sns.scatterplot(x="Number of Clusters",
y="Score",
data=df_Elbow)
###Output
_____no_output_____ |
notebooks/crypto-exploration-modeling.ipynb | ###Markdown
Introduction PurgedGroupTimeSeries CV + Optuna - LightGBM VersionThis is a simple notebook for the G-Research Crypto Competition.This is a simple starter notebook for Kaggle's Crypto Comp showing purged group timeseries KFold with extra data.The original notebook is from Yam Peleg, a kaggle grandmaster. Many of the explanations are borrowed from Yam. I dive into feature engineering and exploring the data a little more. There are many configuration variables below to allow you to experiment. Use either CPU or GPU. You can control which years are loaded, which neural networks are used, and whether to use feature engineering. You can experiment with different data preprocessing, model hyperparameters, loss, and number of seeds to ensemble. The extra datasets contain the full history of the assets at the same format of the competition, so you can input that into your model too.NOTE: this notebook lets you run a different experiment in each fold if you want to run lots of experiments. (Then it is like running multiple holdout validation experiments but in that case note that the overall CV score is meaningless because LB will be much different when the multiple experiments are ensembled to predict test). If you want a proper CV with a reliable overall CV score you need to choose the same configuration for each fold.This notebook follows the ideas presented in my "Initial Thoughts" here. Some code sections have been reused from Chris' great notebook series on SIIM ISIC melanoma detection competition here Table of Contents 1. [Introduction](Introduction)2. [Table of Contents](Tableofcontents)2. [Light GBM](LightGBM)3. [Hyper Parameter Tunning in Light GBM](Tuning)3. [Optuna](Optuna)4. [Purged Group Time Series](Purged)5. [Diving into the data](Diving)6. [Evaluation](Evaluation) . [Model](ModelTraining) Light GBM LightGBM is the current "Meta" on kaggle and it doesn't look like it is going to get Nerfed anytime soon! It is basiclly a "light" version of gradient boosting machines framework that aims to increases efficiency and reduces memory usage.It is usually THE Algorithm everyone on Kaggle try when facing a tabular datasetTL;DR: What makes LightGBM so great:LGBM was developed and maintained by Microsoft themselves so it gets constant maintenance and support.Easy to useFaster than nearly all other gradient boosting algorithms.Usually the most powerful gradient boosting.It is a gradient boosting model that makes use of tree based learning algorithms. It is considered to be a fast processing algorithm.While other algorithms trees grow horizontally, LightGBM algorithm grows vertically, meaning it grows leaf-wise and other algorithms grow level-wise. LightGBM chooses the leaf with large loss to grow. It can lower down more loss than a level wise algorithm when growing the same leaf.Light GBM is prefixed as Light because of its high speed. Light GBM can handle the large size of data and takes lower memory to run.Another reason why Light GBM is so popular is because it focuses on accuracy of results. LGBM also supports GPU learning and thus data scientists are widely using LGBM for data science application development.Leaf growth technique in LightGBMLightGBM uses leaf-wise (best-first) tree growth. It chooses to grow the leaf that minimizes the loss, allowing a growth of an imbalanced tree. Because it doesn’t grow level-wise, but leaf-wise, over-fitting can happen when data is small. 
In these cases, it is important to control the tree depth.LightGBM vs XGBoostbase learner of almost all of the competitions that have structured datasets right now. This is mostly because of LightGBM's implementation; it doesn't do exact searches for optimal splits like XGBoost does in it's default setting but rather through histogram approximations (XGBoost now has this functionality as well but it's still not as fast as LightGBM).This results in slight decrease of predictive performance buy much larger increase of speed. This means more opportunity for feature engineering/experimentation/model tuning which inevitably yields larger increases in predictive performance. (Feature engineering are the key to winning most Kaggle competitions)LightGBM vs CatboostCatBoost is not used as much, mostly because it tends to be much slower than LightGBM and XGBoost. That being said, CatBoost is very different when it comes to the implementation of gradient boosting. This can give slightly more accurate predictions, in particular if you have large amounts of categorical features. Because rapid experimentation is vital in Kaggle competitions, LightGBM tends to be the go-to algorithm when first creating strong base learners.In general, it is important to note that a large amount of approaches involves combining all three boosting algorithms in an ensemble. LightGBM, CatBoost, and XGBoost might be thrown together in a mix to create a strong ensemble. This is done to really squeeze spots on the leaderboard and it usually works. Hyperparameter tuning in Light GBM Parameter Tuning is an important part that is usually done by data scientists to achieve a good accuracy, fast result and to deal with overfitting. Let us see quickly some of the parameter tuning you can do for better results. While, LightGBM has more than 100 parameters that are given in the documentation of LightGBM, we are going to check the most important ones.num_leaves: This parameter is responsible for the complexity of the model. I normally start by trying values in the range [10,100]. But if you have a solid heuristic to choose tree depth you can always use it and set num_leaves to 2^tree_depth - 1LightGBM Documentation says in respect - This is the main parameter to control the complexity of the tree model. Theoretically, we can set num_leaves = 2^(max_depth) to obtain the same number of leaves as depth-wise tree. However, this simple conversion is not good in practice. The reason is that a leaf-wise tree is typically much deeper than a depth-wise tree for a fixed number of leaves. Unconstrained depth can induce over-fitting. Thus, when trying to tune the num_leaves, we should let it be smaller than 2^(max_depth). For example, when the max_depth=7 the depth-wise tree can get good accuracy, but setting num_leaves to 127 may cause over-fitting, and setting it to 70 or 80 may get better accuracy than depth-wise.Min_data_in_leaf: Assigning bigger value to this parameter can result in underfitting of the model. Giving it a value of 100 or 1000 is sufficient for a large dataset.Max_depth: Controls the depth of the individual trees. Typical values range from a depth of 3–8 but it is not uncommon to see a tree depth of 1. Smaller depth trees are computationally efficient (but require more trees); however, higher depth trees allow the algorithm to capture unique interactions but also increase the risk of over-fitting. 
Larger training data sets are more tolerable to deeper trees.num_iterations: Num_iterations specifies the number of boosting iterations (trees to build). The more trees you build the more accurate your model can be at the cost of:- Longer training time- Higher chance of over-fittingSo typically start with a lower number of trees to build a baseline and increase it later when you want to squeeze the last % out of your model.It is recommended to use smaller learning_rate with larger num_iterations. Also, we should use early_stopping_rounds if we go for higher num_iterations to stop your training when it is not learning anything useful.early_stopping_rounds - "early stopping" refers to stopping the training process if the model's performance on a given validation set does not improve for several consecutive iterations. This parameter will stop training if the validation metric is not improving after the last early stopping round. It should be defined in pair with a number of iterations. If we set it too large we increase the chance of over-fitting. The rule of thumb is to have it at 10% of your num_iterations. Other Parameters OverviewParameters that control the trees of LightGBMnum_leaves: controls the number of decision leaves in a single tree. there will be multiple trees in pool.min_data_in_leaf: the minimum number of data/sample/count per leaf (default is 20; lower min_data_in_leaf means less conservative/control, potentially overfitting).max_depth: this the height of a decision tree. if its more possibility of overfitting but too low may underfit.NOTE: max_depth directly impacts:The best value for the num_leaves parameterModel PerformanceTraining Time Parameters For Better Accuracy* Use large max_bin (may be slower)* Use small learning_rate with large num_iteration* Use large num_leaves (may cause over-fitting)* Use bigger training data* Try dart Parameters for Dealing with Over-fitting* Use small max_bin* Use small num_leaves* Use min_data_in_leaf and min_sum_hessian_in_leaf* Use bagging by set bagging_fraction and bagging_freq* Use feature sub-sampling by set feature_fraction* Use bigger training data* Try lambda_l1, lambda_l2 and min_gain_to_split for regularization* Try max_depth to avoid growing deep tree* Try extra_trees* Try increasing path_smooth How to tune LightGBM like a boss?Hyperparameters tuning guide: objective* When you change it affects other parameters Specify the type of ML model* default- value regression* aliases- Objective_type boosting* If you set it RF, that would be a bagging approach* default- gbdt* Range- [gbdt, rf, dart, goss]* aliases- boosting_type lambda_l1* regularization parameter* default- 0.0* Range- [0, ∞]* aliases- reg_alpha* constraints- lambda_l1 >= 0.0 bagging_fraction* randomly select part of data without resampling* default-1.0* range- [0, 1]* aliases- Subsample* constarints- 0.0 < bagging_fraction <= 1.0 bagging_freq* default- 0.0* range- [0, ∞]* aliases- subsample_freq* bagging_fraction should be set to value smaller than 1.0 as well 0 means disable bagging num_leaves* max number of leaves in one tree* default- 31* Range- [1, ∞]* Note- 1 < num_leaves <= 131072 feature_fraction* if you set it to 0.8, LightGBM will select 80% of features* default- 1.0* Range- [0, 1]* aliases- sub_feature* constarint- 0.0 < feature_fraction <= 1.0 max_depth* default- [-1]* range- [-1, ∞]m* Larger is usually better, but overfitting speed increases.* limit the max depth Forr tree model* max_bin* deal with over-fitting* default- 255* range- [2, ∞]* aliases- 
Histogram Binning* max_bin > 1 num_iterations* number of boosting iterations* default- 100* range- [1, ∞]* AKA- Num_boost_round, n_iter* constarints- num_iterations >= 0 learning_rate* default- 0.1* range- [0 1]* aliases- eta* general values- learning_rate > 0.0Typical: 0.05. early_stopping_round* will stop training if validation doesn’t improve in last early_stopping_round* Model Performance, Number of Iterations, Training Time* default- 0* Range- [0, ∞] categorical_feature* to sepecify or Handle categorical features* i.e LGBM automatically handels categorical variable we dont need to one hot encode them. bagging_freq* default-0.0* Range-[0, ∞]* aliases- subsample_freq* note- 0 means disable bagging; k means perform bagging at every k iteration* enable bagging, bagging_fraction should be set to value smaller than 1.0 as well verbosity* default- 0* range- [-∞, ∞]* aliases- verbose* constraints- { 1} min_data_in_leaf* Can be used to deal with over-fitting:* default- 20* constarint-min_data_in_leaf >= 0Credits: The following notebook is heavily based on multiple notebook of the past Jane street market prediction competition. If you find it useful, spare some upvotes to the originals. They earned it!"Purged Time Series CV, XGBoost, Optuna 🔪📆" by MARKETNEUTRAL - https://www.kaggle.com/marketneutral/purged-time-series-cv-xgboost-optuna Optuna What is Optuna? Here is an explanation taken straight from the official GitHub. Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning. It features an imperative, define-by-run style user API. Thanks to our define-by-run API, the code written with Optuna enjoys high modularity, and the user of Optuna can dynamically construct the search spaces for the hyperparameters.
###Code
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
import pandas as pd
import os
import datetime
import gc
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import KFold, cross_val_score
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
import pandas as pd
import numpy as np
from datetime import datetime
from lightgbm import LGBMRegressor
import optuna
import plotly.graph_objects as go
from sklearn.feature_selection import VarianceThreshold
import xgboost as xgb
import lightgbm as lgb
import getpass
import boto3
import seaborn as sns
import gresearch_crypto
import traceback
from sklearn.metrics import roc_auc_score, mean_absolute_error
import datatable as dt
import numpy as np
from sklearn.model_selection import KFold
from sklearn.utils.validation import _deprecate_positional_args
from sklearn.model_selection._split import _BaseKFold, indexable, _num_samples
# modified code for group gaps; source
# https://github.com/getgaurav2/scikit-learn/blob/d4a3af5cc9da3a76f0266932644b884c99724c57/sklearn/model_selection/_split.py#L2243
class PurgedGroupTimeSeriesSplit(_BaseKFold):
"""Time Series cross-validator variant with non-overlapping groups.
Allows for a gap in groups to avoid potentially leaking info from
train into test if the model has windowed or lag features.
Provides train/test indices to split time series data samples
that are observed at fixed time intervals according to a
third-party provided group.
In each split, test indices must be higher than before, and thus shuffling
in cross validator is inappropriate.
This cross-validation object is a variation of :class:`KFold`.
In the kth split, it returns first k folds as train set and the
(k+1)th fold as test set.
The same group will not appear in two different folds (the number of
distinct groups has to be at least equal to the number of folds).
Note that unlike standard cross-validation methods, successive
training sets are supersets of those that come before them.
Read more in the :ref:`User Guide <cross_validation>`.
Parameters
----------
n_splits : int, default=5
Number of splits. Must be at least 2.
max_train_group_size : int, default=Inf
Maximum group size for a single training set.
group_gap : int, default=None
Gap between train and test
max_test_group_size : int, default=Inf
We discard this number of groups from the end of each train split
"""
@_deprecate_positional_args
def __init__(self,
n_splits=5,
*,
max_train_group_size=np.inf,
max_test_group_size=np.inf,
group_gap=None,
verbose=False
):
super().__init__(n_splits, shuffle=False, random_state=None)
self.max_train_group_size = max_train_group_size
self.group_gap = group_gap
self.max_test_group_size = max_test_group_size
self.verbose = verbose
def split(self, X, y=None, groups=None):
"""Generate indices to split data into training and test set.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Training data, where n_samples is the number of samples
and n_features is the number of features.
y : array-like of shape (n_samples,)
Always ignored, exists for compatibility.
groups : array-like of shape (n_samples,)
Group labels for the samples used while splitting the dataset into
train/test set.
Yields
------
train : ndarray
The training set indices for that split.
test : ndarray
The testing set indices for that split.
"""
if groups is None:
raise ValueError(
"The 'groups' parameter should not be None")
X, y, groups = indexable(X, y, groups)
n_samples = _num_samples(X)
n_splits = self.n_splits
group_gap = self.group_gap
max_test_group_size = self.max_test_group_size
max_train_group_size = self.max_train_group_size
n_folds = n_splits + 1
group_dict = {}
u, ind = np.unique(groups, return_index=True)
unique_groups = u[np.argsort(ind)]
n_samples = _num_samples(X)
n_groups = _num_samples(unique_groups)
for idx in np.arange(n_samples):
if (groups[idx] in group_dict):
group_dict[groups[idx]].append(idx)
else:
group_dict[groups[idx]] = [idx]
if n_folds > n_groups:
raise ValueError(
("Cannot have number of folds={0} greater than"
" the number of groups={1}").format(n_folds,
n_groups))
group_test_size = min(n_groups // n_folds, max_test_group_size)
group_test_starts = range(n_groups - n_splits * group_test_size,
n_groups, group_test_size)
for group_test_start in group_test_starts:
train_array = []
test_array = []
group_st = max(0, group_test_start - group_gap - max_train_group_size)
for train_group_idx in unique_groups[group_st:(group_test_start - group_gap)]:
train_array_tmp = group_dict[train_group_idx]
train_array = np.sort(np.unique(
np.concatenate((train_array,
train_array_tmp)),
axis=None), axis=None)
train_end = len(train_array)
for test_group_idx in unique_groups[group_test_start:
group_test_start +
group_test_size]:
test_array_tmp = group_dict[test_group_idx]
test_array = np.sort(np.unique(
np.concatenate((test_array,
test_array_tmp)),
axis=None), axis=None)
test_array = test_array[group_gap:]
if self.verbose > 0:
pass
yield [int(i) for i in train_array], [int(i) for i in test_array]
# Memory saving function credit to https://www.kaggle.com/gemartin/load-data-reduce-memory-usage
def reduce_mem_usage(df):
""" iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
for col in df.columns:
col_type = df[col].dtype.name
if col_type not in ['object', 'category', 'datetime64[ns, UTC]', 'datetime64[ns]']:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
crypto_df = pd.read_csv('../input/g-research-crypto-forecasting/train.csv')
test = pd.read_csv('../input/g-research-crypto-forecasting/example_sample_submission.csv')
asset_details = pd.read_csv('../input/g-research-crypto-forecasting/asset_details.csv')
crypto_df.head(2)
test.head(2)
asset_details.head(2)
asset_weight_dict = {asset_details['Asset_ID'].tolist()[idx]: asset_details['Weight'].tolist()[idx] for idx in range(len(asset_details))}
asset_name_dict = {asset_details['Asset_ID'].tolist()[idx]: asset_details['Asset_Name'].tolist()[idx] for idx in range(len(asset_details))}
btc = crypto_df[crypto_df["Asset_ID"]==1].set_index("timestamp") # Asset_ID = 1 for Bitcoin
btc_mini = btc.iloc[-200:] # Select recent data rows
fig = go.Figure(data=[go.Candlestick(x=btc_mini.index, open=btc_mini['Open'], high=btc_mini['High'], low=btc_mini['Low'], close=btc_mini['Close'])])
fig.show()
crypto_df.isna().sum()
def upper_shadow(df): return df['High'] - np.maximum(df['Close'], df['Open'])
def lower_shadow(df): return np.minimum(df['Close'], df['Open']) - df['Low']
# A utility function to build features from the original df
def get_features(df):
df_feat = df[['Count', 'Open', 'High', 'Low', 'Close', 'Volume', 'VWAP']].copy()
df_feat['upper_Shadow'] = upper_shadow(df_feat)
df_feat['lower_Shadow'] = lower_shadow(df_feat)
df_feat["high_div_low"] = df_feat["High"] / df_feat["Low"]
df_feat["open_sub_close"] = df_feat["Open"] - df_feat["Close"]
return df_feat
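# Illustration only (added): the engineered candlestick features on one made-up bar.
# With Open=100, High=104, Low=97, Close=102 we expect upper_Shadow=2, lower_Shadow=3,
# high_div_low of roughly 1.072 and open_sub_close=-2. `example_bar` is hypothetical.
import pandas as pd
example_bar = pd.DataFrame({'Count': [10], 'Open': [100.0], 'High': [104.0],
                            'Low': [97.0], 'Close': [102.0], 'Volume': [5.0], 'VWAP': [101.0]})
print(get_features(example_bar))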
def fill_nan_inf(df):
df = df.fillna(0)
df = df.replace([np.inf, -np.inf], 0)
return df
crypto_df['date'] = pd.to_datetime(crypto_df['timestamp'], unit = 's')
crypto_df = crypto_df.sort_values('date')
groups = pd.factorize(crypto_df['date'].dt.day.astype(str) + '_' + crypto_df['date'].dt.month.astype(str) + '_' + crypto_df['date'].dt.year.astype(str))[0]
dates = crypto_df['date'].copy()
target = crypto_df['Target'].copy()
timestamp = crypto_df['timestamp'].copy()
crypto_df.drop(columns = 'Target', inplace = True)
crypto_df = reduce_mem_usage(crypto_df)
assets_idx = crypto_df['Asset_ID']
crypto_df = get_features(crypto_df)
crypto_df['Asset_ID'] = assets_idx
crypto_df['groups'] = groups
crypto_df['date'] = dates
crypto_df = reduce_mem_usage(crypto_df)
crypto_df['Target'] = target
crypto_df['timestamp'] = timestamp
crypto_df['Weight'] = crypto_df['Asset_ID'].map(asset_weight_dict)
crypto_df = fill_nan_inf(crypto_df)
test = fill_nan_inf(test)
feature_names = [i for i in crypto_df.columns if i not in ['Target', 'date', 'timestamp', 'VWAP', 'Asset_ID', 'groups', 'Weight']]
y_labels = crypto_df['Target'].values
X_train = crypto_df[feature_names].values
weights = crypto_df['Weight'].values
groups = crypto_df['groups'].values
###Output
_____no_output_____
###Markdown
Model Training
###Code
DEVICE = 'CPU'
# CV PARAMS
FOLDS = 5
GROUP_GAP = 130
MAX_TEST_GROUP_SIZE = 180
MAX_TRAIN_GROUP_SIZE = 280
# LOAD STRICT? YES=1 NO=0 | see: https://www.kaggle.com/julian3833/proposal-for-a-meaningful-lb-strict-lgbm
LOAD_STRICT = True
cv = PurgedGroupTimeSeriesSplit(n_splits = FOLDS,
group_gap = GROUP_GAP,
max_train_group_size = MAX_TRAIN_GROUP_SIZE,
max_test_group_size = MAX_TEST_GROUP_SIZE
)
def objective(trial, cv=cv, cv_fold_func=np.average):
# Optuna suggest params
param_lgb = {
"verbosity": -1,
"boosting_type": "gbdt",
"lambda_l1": trial.suggest_float("lambda_l1", 1e-8, 10.0, log=True),
"lambda_l2": trial.suggest_float("lambda_l2", 1e-8, 10.0, log=True),
"num_leaves": trial.suggest_int("num_leaves", 2, 256),
"feature_fraction": trial.suggest_float("feature_fraction", 0.4, 1.0),
"bagging_fraction": trial.suggest_float("bagging_fraction", 0.4, 1.0),
"bagging_freq": trial.suggest_int("bagging_freq", 1, 7),
"min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
}
    # set up the pipeline
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
scaler = StandardScaler()
param_lgb['verbose'] = 0
clf = LGBMRegressor(**param_lgb)
pipe = Pipeline(steps=[
('imputer', imp_mean),
('scaler', scaler),
('catb', clf)
])
maes = []
for i, (train_idx, valid_idx) in enumerate(cv.split(
X_train,
y_labels,
groups=groups)):
train_data = X_train[train_idx, :], y_labels[train_idx]
valid_data = X_train[valid_idx, :], y_labels[valid_idx]
_ = pipe.fit(X_train[train_idx, :], y_labels[train_idx])
preds = pipe.predict(X_train[valid_idx, :])
mae = mean_absolute_error(y_labels[valid_idx], preds)
maes.append(mae)
print(f'Trial done: mae values on folds: {maes}')
return -1.0 * cv_fold_func(maes)
%%time
FIT_LGB = True
n_trials = 30
if FIT_LGB:
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=n_trials)
print("Number of finished trials: {}".format(len(study.trials)))
print("Best trial:")
trial = study.best_trial
print(" Value: {}".format(trial.value))
print(" Params: ")
for key, value in trial.params.items():
print(" {}: {}".format(key, value))
best_params = trial.params
else: best_params = {}
best_params
def corr(a, b, w):
cov = lambda x, y: np.sum(w * (x - np.average(x, weights=w)) * (y - np.average(y, weights=w))) / np.sum(w)
return cov(a, b) / np.sqrt(cov(a, a) * cov(b, b))
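# Quick illustrative check (added): with equal weights the weighted correlation
# above reduces to the ordinary Pearson correlation. The arrays are made up.
_a = np.array([1.0, 2.0, 3.0, 4.0])
_b = np.array([1.1, 1.9, 3.2, 3.8])
print(corr(_a, _b, np.ones_like(_a)), np.corrcoef(_a, _b)[0, 1])  # the two values should match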
# LGBM Version
def get_lgbm_metric(w):
def lgbm_wcorr(preds, y_true): return 'lgbm_wcorr', corr(preds, y_true, w), True
return lgbm_wcorr
# verbose = 0 for silent, verbose = 1 for interactive
best_params['verbose'] = 0
importances, maes, models = [], [], []
oof = np.zeros(len(X_train))
for i, (train_idx, valid_idx) in enumerate(cv.split(X_train, y_labels, groups=groups)):
clf = LGBMRegressor(**best_params)
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
scaler = StandardScaler()
pipe = Pipeline(steps=[('imputer', imp_mean), ('scaler', scaler), ('catb', clf)])
_ = pipe.fit(X_train[train_idx, :], y_labels[train_idx])
preds = pipe.predict(X_train[valid_idx, :])
oof[valid_idx] = preds
models.append(pipe)
importances.append(clf.feature_importances_)
mae = mean_absolute_error(y_labels[valid_idx], preds)
maes.append(mae)
score = corr(np.nan_to_num(y_labels[valid_idx].flatten()), np.nan_to_num(preds.flatten()), np.nan_to_num(weights[valid_idx]))
print(f'Fold {i}: wcorr score: {score}')
print(f'Score: {corr(y_labels.flatten(), oof.flatten(), weights)}')
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
from sklearn.metrics import r2_score
from matplotlib.colors import ListedColormap
pd.options.mode.chained_assignment = None
def plot_importance(importances, features_names, PLOT_TOP_N = 20, figsize=(12, 20)):
try: plt.close()
except: pass
importance_df = pd.DataFrame(data=importances, columns=features_names)
sorted_indices = importance_df.median(axis=0).sort_values(ascending=False).index
sorted_importance_df = importance_df.loc[:, sorted_indices]
plot_cols = sorted_importance_df.columns[:PLOT_TOP_N]
_, ax = plt.subplots(figsize=figsize)
ax.grid()
ax.set_xscale('log')
ax.set_ylabel('Feature')
ax.set_xlabel('Importance')
plt.title('Feature Importances')
sns.boxplot(data=sorted_importance_df[plot_cols], orient='h', ax=ax)
plt.show()
pd.DataFrame({'timestamp': crypto_df['timestamp'], 'asset_id': crypto_df['Asset_ID'], 'oof_preds': oof}).to_csv('oof.csv', index = False)
for asset in crypto_df['Asset_ID'].unique().tolist():
df = crypto_df.loc[crypto_df['Asset_ID'] == asset]
df['oof_preds'] = np.nan_to_num(oof[crypto_df['Asset_ID'] == asset])
df['Target'] = np.nan_to_num(df['Target'])
df['y'] = np.nan_to_num(df['Target'])
print('\n\n' + ('-' * 80) + '\n' + 'Finished training %s. Results:' % asset_name_dict[asset])
print('Model: r2_score: %s | pearsonr: %s ' % (r2_score(df['y'], df['oof_preds']), pearsonr(df['y'], df['oof_preds'])[0]))
print('Predictions std: %s | Target std: %s' % (df['oof_preds'].std(), df['y'].std()))
try: plt.close()
except: pass
df2 = df.reset_index().set_index('date')
fig = plt.figure(figsize = (12, 6))
# fig, ax_left = plt.subplots(figsize = (12, 6))
ax_left = fig.add_subplot(111)
ax_left.set_facecolor('azure')
ax_right = ax_left.twinx()
ax_left.plot(df2['y'].rolling(3 * 30 * 24 * 60).corr(df2['oof_preds']).iloc[::24 * 60], color = 'crimson', label = "Target WCorr")
ax_right.plot(df2['Close'].iloc[::24 * 60], color = 'darkgrey', label = "%s Close" % asset_name_dict[asset])
plt.legend()
plt.grid()
plt.xlabel('Time')
plt.title('3 month rolling pearsonr for %s' % (asset_name_dict[asset]))
plt.show()
plot_importance(np.array(importances), feature_names, PLOT_TOP_N = 20)
gc.collect()
env = gresearch_crypto.make_env()
iter_test = env.iter_test()
all_df_test = []
for i, (df_test, df_pred) in enumerate(iter_test):
for j , row in df_test.iterrows():
try:
x_test = get_features(row)
x_test = fill_nan_inf(x_test)
y_pred = np.mean(np.concatenate([np.expand_dims(model.predict([x_test[feature_names].values]), axis = 0) for model in models], axis = 0), axis = 0)
except:
y_pred = 0.0
traceback.print_exc()
df_pred.loc[df_pred['row_id'] == row['row_id'], 'Target'] = y_pred
all_df_test.append(df_test)
env.predict(df_pred)
###Output
_____no_output_____ |
content/python/data_visualisation/.ipynb_checkpoints/Path-plot-checkpoint.ipynb | ###Markdown
---title: "Path-plot"author: "Palaniappan S"date: 2020-09-05description: "-"type: technical_notedraft: false---
###Code
import matplotlib.path as mpath
import matplotlib.patches as mpatches
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
Path = mpath.Path
path_data = [
(Path.MOVETO, (1.58, -2.57)),
(Path.CURVE4, (0.35, -1.1)),
(Path.CURVE4, (-1.75, 2.0)),
(Path.CURVE4, (0.375, 2.0)),
(Path.LINETO, (0.85, 1.15)),
(Path.CURVE4, (2.2, 3.2)),
(Path.CURVE4, (3, 0.05)),
(Path.CURVE4, (2.0, -0.5)),
(Path.CLOSEPOLY, (1.58, -2.57)),
]
codes, verts = zip(*path_data)
path = mpath.Path(verts, codes)
patch = mpatches.PathPatch(path, facecolor='r', alpha=0.5)
ax.add_patch(patch)
# plot control points and connecting lines
x, y = zip(*path.vertices)
line, = ax.plot(x, y, 'go-')
ax.grid()
ax.axis('equal')
plt.show()
###Output
_____no_output_____ |
NY_Mines.ipynb | ###Markdown
Public Opinion on News Andrea Sala Part 1 : Data loading and preprocessing
###Code
import pandas as pd
import numpy as np
import math
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.stats import iqr
import glob
import os
from tqdm.notebook import tqdm
import warnings
warnings.filterwarnings('ignore')
arts_pre = pd.concat(map(pd.read_csv, glob.glob(os.path.join('', "dataset/Articles*.csv"))))
comms_pre = pd.concat(map(pd.read_csv, glob.glob(os.path.join('', "dataset/Comments*.csv"))))
#remove features that i don't need
comms_pre.drop(['printPage','parentUserDisplayName','picURL','recommendedFlag','userID',
'permID','userTitle','userDisplayName','userURL','userLocation',
'reportAbuseFlag', 'status','inReplyTo','commentSequence','commentTitle',
'commentType', 'approveDate', 'createDate','updateDate','parentID', 'articleWordCount', 'depth',
'replyCount', 'sharing', 'timespeople', 'trusted', 'typeOfMaterial'],
axis=1,inplace=True)
arts_pre.drop(['abstract', 'documentType','multimedia', 'printPage',
'pubDate','snippet','source','webURL', 'articleWordCount', 'keywords', 'typeOfMaterial'],
axis=1,inplace=True)
print(comms_pre.shape) # ~2.2 million comments, 7 columns kept after dropping unused features
print(arts_pre.shape) # ~9300 articles, 5 columns kept after dropping unused features
###Output
(2176364, 7)
(9335, 5)
###Markdown
Data preprocessing
###Code
#remove bad characters such as & and <br/>
def preprocess(commentBody):
commentBody = commentBody.str.replace("(<br/>)", "")
commentBody = commentBody.str.replace('(<a).*(>).*(</a>)', '')
commentBody = commentBody.str.replace('(&)', '')
commentBody = commentBody.str.replace('(>)', '')
commentBody = commentBody.str.replace('(<)', '')
commentBody = commentBody.str.replace('(\xa0)', ' ')
return commentBody
comms_pre.commentBody = preprocess(comms_pre.commentBody)
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
Stemmer = PorterStemmer()
stops = set(stopwords.words("english"))
def cleanList(myList):
cleanedList = [cleanText(el) for el in tqdm(myList)]
return cleanedList
def cleanText(rawText):
    nice_words = [Stemmer.stem(word) for word in word_tokenize(rawText.lower()) if word.isalpha() and word not in stops]  # use the precomputed stop-word set instead of rebuilding it for every word
joined_words = ( " ".join(nice_words))
return joined_words
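# Illustration only (added): the cleaner lower-cases, keeps alphabetic tokens,
# drops English stop words and applies Porter stemming. It assumes the NLTK
# 'punkt' and 'stopwords' data are available, as the functions above already do.
print(cleanText("The reporters were covering breaking news stories"))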
###Output
_____no_output_____
###Markdown
If you already have All_Comments_clean.csv, do not run the following cell (very slow)
###Code
full_comms = list(comms_pre['commentBody'])
clean_comm_bodies = cleanList(full_comms)
#Add cleaned data back into DataFrame
comms_pre['commentBody'] = clean_comm_bodies
comms_pre.to_csv("dataset/All_Comments_clean.csv",header=True)
arts_pre.to_csv("dataset/All_Articles_clean.csv",header=True)
###Output
_____no_output_____
###Markdown
Run only this one if you already have the cleaned files
###Code
arts = pd.read_csv("dataset/All_Articles_clean.csv")
comms = pd.read_csv("dataset/All_Comments_clean.csv")
comms = comms[comms.commentBody.notnull()]
comms.drop(['Unnamed: 0'], axis=1, inplace=True)
arts.drop(['Unnamed: 0'], axis=1, inplace=True)
arts.drop_duplicates(subset='articleID',keep='first')
comms = comms.drop_duplicates(subset='commentID',keep='first')
UnkHead = arts[ arts['headline'] == 'Unknown' ].index
arts.drop(UnkHead,inplace=True)
###Output
_____no_output_____
###Markdown
Part 2 Assigning a score to each comment (AFINN-111 lexicon)
###Code
from afinn import Afinn
afinn = Afinn(language='en')
def comment_score (comment):
commList = comment.split()
tempScore = 0
for word in commList:
tempScore += afinn.score_with_wordlist(word)
tempScore /= math.sqrt(len(commList)+1)
return tempScore
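# Illustration only (added): score a made-up comment with the function above.
# AFINN gives positive words such as "great" a positive weight and words such as
# "bad" a negative one, and the sqrt(length) term damps scores of longer comments.
print(comment_score("great reporting but a bad headline"))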
#this takes some time (~1 min)
comms['commentScore'] = comms['commentBody'].apply(comment_score)
comms['commentScore'] = comms['commentScore'] + comms['recommendations'] / comms['recommendations'].max()
#comms['popularity'] = comms['commentScore']
comms['popularity'] = (1 + comms['editorsSelection']) * comms['commentScore']
fig = plt.hist(comms['popularity'],bins=70, color='royalblue')
plt.grid(alpha=0.5)
plt.xlim(-5,5)
plt.xlabel("Popularity",fontsize=13)
plt.ylabel("Counts", fontsize=13)
plt.tight_layout()
#plt.show()
plt.savefig("Pictures/PopularityDist.pdf")
#comms['popularity'].describe()
###Output
_____no_output_____
###Markdown
Assign a score to each article
###Code
grouper = comms.groupby('articleID')
to_merge = grouper.sum().divide(np.sqrt(grouper.count()+1)).reset_index().rename(columns={'popularity': 'articleScore'})[['articleID', 'articleScore']]
arts = arts.merge(to_merge, on='articleID')
arts['articleScore'].describe()
plt.hist(arts['articleScore'], bins=100, range=(-20,20), color='orange')
plt.grid(alpha=0.3)
plt.xlabel("Article Score", fontsize=13)
plt.ylabel("Counts", fontsize=13)
plt.savefig("Pictures/articleScoreDist.pdf")
plt.tight_layout()
# polarity distribution of the articles
###Output
_____no_output_____
###Markdown
Part 3 : Defining controversy (popularity + debate)
###Code
arts_old = arts.copy(deep=False)
###Output
_____no_output_____
###Markdown
Strategy 1 : Popularity x debate Popularity = Total number of comments under each article (normalized to be 0-100)
###Code
arts['commentNumber'] = arts['articleID'].map(comms['articleID'].value_counts())
arts['commentNumber'] = 100 * arts['commentNumber'] / arts['commentNumber'].max()
arts.sort_values(by='commentNumber',ascending=False, inplace=True)
###Output
_____no_output_____
###Markdown
Debate: interquartile range of the comment scores in an article (normalized to 0-1)
###Code
arts['debate'] = arts['articleID'].map(comms.groupby('articleID')['popularity'].agg(iqr))
arts['debate'] = arts['debate'] / arts['debate'].max()
# Now finally controversy
arts['controv'] = arts['commentNumber'] * (0.001 + arts['debate'])
arts.sort_values(by='controv',ascending=False,inplace=True)
# Remember all trials!! (Without normalization, without epsilon, ...)
fig, axs = plt.subplots(1, 3, figsize=(19, 5))
#fig.suptitle('Distributions - Strategy 1', fontsize=20)
axs[0].hist(arts['commentNumber'],bins=100, color='indianred')
axs[0].set_title('Popularity', fontsize=20)
axs[1].hist(arts['debate'],bins=100, color='royalblue')
axs[1].set_title('Debate', fontsize=20)
axs[2].hist(arts['controv'],bins=100, color='purple')
axs[2].set_title('Controversy', fontsize=20)
plt.tight_layout()
plt.savefig("Pictures/Strat1Dist.pdf")
fig, axs = plt.subplots(1, 3, figsize=(19, 5))
#fig.suptitle("Correlations - Strategy 1",fontsize=20)
axs[0].scatter(arts['commentNumber'],arts['controv'], color='pink')
axs[0].set_xlabel('Popularity', fontsize=15)
axs[0].set_ylabel('Controversy', fontsize=15)
axs[1].scatter(arts['debate'], arts['controv'],color='navy')
axs[1].set_xlabel('Debate', fontsize=15)
axs[1].set_ylabel('Controversy', fontsize=15)
axs[2].scatter( arts['debate'],arts['commentNumber'],color='darkmagenta')
axs[2].set_ylabel('Popularity', fontsize=15)
axs[2].set_xlabel('Debate', fontsize=15)
plt.tight_layout()
plt.savefig("Pictures/Strat1Corr.pdf")
###Output
_____no_output_____
###Markdown
Strategy 2: log(Popularity) * Debate
###Code
arts2 = arts_old.copy(deep=False)
arts2['commentNumber'] = arts['articleID'].map(comms['articleID'].value_counts())
arts2['debate'] = arts['articleID'].map(comms.groupby('articleID')['popularity'].agg(iqr))
arts2['logCommentNumber'] = 1 + np.log10(arts2['commentNumber'])
arts2['controv'] = arts2['logCommentNumber'] * (0.01 + arts2['debate'])
fig, axs = plt.subplots(1, 3, figsize=(19, 5))
#fig.suptitle('Distributions - Strategy 2', fontsize=20, y=1.02)
axs[0].hist(arts2['logCommentNumber'],bins=100, color='indianred')
axs[0].set_title('Popularity', fontsize=20)
axs[1].hist(arts2['debate'],bins=100, color='royalblue')
axs[1].set_title('Debate', fontsize=20)
axs[2].hist(arts2['controv'],bins=100, color='purple')
axs[2].set_title('Controversy', fontsize=20)
plt.tight_layout()
plt.savefig("Pictures/Strat2Dist.pdf")
arts2.sort_values(by='controv',ascending=False,inplace=True)
fig, axs = plt.subplots(1, 3, figsize=(19, 5))
#fig.suptitle('Correlations - Strategy 2', fontsize=20, y=0.99)
axs[0].scatter(arts2['logCommentNumber'],arts2['controv'],color='pink')
axs[0].set_xlabel('logPopularity', fontsize=20)
axs[0].set_ylabel('Controversy', fontsize=20)
axs[1].scatter(arts2['debate'], arts2['controv'], color='navy')
axs[1].set_xlabel('Debate', fontsize=20)
axs[1].set_ylabel('Controversy', fontsize=20)
axs[2].scatter(arts2['logCommentNumber'], arts2['debate'], color='darkmagenta')
axs[2].set_xlabel('logPopularity', fontsize=20)
axs[2].set_ylabel('Debate', fontsize=20)
plt.tight_layout()
plt.savefig("Pictures/Strat2Corr.pdf")
###Output
_____no_output_____
###Markdown
Part 4: Grouping by category Section name - Strategy 1
###Code
dftot = pd.DataFrame()
dftot['sectionName'] = arts['sectionName'].unique()
dftot.sort_values(by='sectionName',inplace=True)
grouper = arts.groupby('sectionName').mean()['controv']
grouper2 = arts.groupby('sectionName').mean()['commentNumber']
dftot['SecControv'] = dftot['sectionName'].map(grouper)
dftot['Popularity']= dftot['sectionName'].map(grouper2)
dftot['Popularity'] = dftot['Popularity'] * dftot['SecControv'].max() / dftot['Popularity'].max()
dftot.sort_values(by='SecControv',ascending=False,inplace=True)
fig, ax = plt.subplots(figsize=(25,9))
ind = np.arange(len(dftot))
wi = 0.4
ax.bar(ind, dftot.SecControv, wi, color='green', label='Controversial')
ax.bar(ind + wi, dftot.Popularity, wi, color='red', label='Popular')
ax.set(xticks=ind + wi, xticklabels=dftot.sectionName)
plt.xticks(fontsize=20, rotation=90)
ax.legend(fontsize=30)
plt.tight_layout()
plt.savefig("Pictures/Strat1SN.pdf")
###Output
_____no_output_____
###Markdown
Section Name - Strategy 2
###Code
dftot = pd.DataFrame()
dftot['sectionName'] = arts['sectionName'].unique()
dftot.sort_values(by='sectionName',inplace=True)
grouper = arts2.groupby('sectionName').mean()['controv']
grouper2 = arts2.groupby('sectionName').mean()['commentNumber']
dftot['SecControv'] = dftot['sectionName'].map(grouper)
dftot['Popularity']= dftot['sectionName'].map(grouper2)
dftot['Popularity'] = dftot['Popularity'] * dftot['SecControv'].max() / dftot['Popularity'].max()
dftot.sort_values(by='SecControv',ascending=False,inplace=True)
fig, ax = plt.subplots(figsize=(25,8))
ind = np.arange(len(dftot))
wi = 0.4
ax.bar(ind, dftot.SecControv, wi, color='green', label='Controverse')
ax.bar(ind + wi, dftot.Popularity, wi, color='red', label='Popular')
ax.set(xticks=ind + wi, xticklabels=dftot.sectionName)
plt.xticks(fontsize=20, rotation=90)
ax.legend(fontsize=20)
plt.tight_layout()
plt.savefig("Pictures/Strat2SN.pdf")
dftot.sort_values(by='Popularity',ascending=False,inplace=True)
fig, ax = plt.subplots(figsize=(25,5))
ind = np.arange(len(dftot))
wi = 0.4
ax.bar(ind, dftot.SecControv, wi, color='green', label='Controverse')
ax.bar(ind + wi, dftot.Popularity, wi, color='red', label='Popular')
ax.set(xticks=ind + wi, xticklabels=dftot.sectionName)
plt.xticks(fontsize=15, rotation=90)
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Grouping by NewDesk - Strategy 1
###Code
dftot = pd.DataFrame()
dftot['newDesk'] = arts['newDesk'].unique()
dftot.sort_values(by='newDesk',inplace=True)
grouper = arts.groupby('newDesk').mean()['controv']
grouper2 = arts.groupby('newDesk').mean()['commentNumber']
dftot['SecControv'] = dftot['newDesk'].map(grouper)
dftot['Popularity']= dftot['newDesk'].map(grouper2)
dftot['Popularity'] = dftot['Popularity'] * dftot['SecControv'].max() / dftot['Popularity'].max()
dftot.sort_values(by='SecControv',ascending=False,inplace=True)
fig, ax = plt.subplots(figsize=(25,8))
ind = np.arange(len(dftot))
wi = 0.4
ax.bar(ind, dftot.SecControv, wi, color='green', label='Controversial')
ax.bar(ind + wi, dftot.Popularity, wi, color='red', label='Popular')
ax.set(xticks=ind + wi, xticklabels=dftot.newDesk)
plt.xticks(fontsize=20, rotation=90)
ax.legend(fontsize=30)
plt.tight_layout()
plt.savefig("Pictures/Strat1ND.pdf")
###Output
_____no_output_____
###Markdown
Strategy 2
###Code
dftot = pd.DataFrame()
dftot['newDesk'] = arts2['newDesk'].unique()
dftot.sort_values(by='newDesk',inplace=True)
grouper = arts2.groupby('newDesk').mean()['controv']
grouper2 = arts2.groupby('newDesk').mean()['commentNumber']
dftot['SecControv'] = dftot['newDesk'].map(grouper)
dftot['Popularity']= dftot['newDesk'].map(grouper2)
dftot['Popularity'] = dftot['Popularity'] * dftot['SecControv'].max() / dftot['Popularity'].max()
dftot.sort_values(by='SecControv',ascending=False,inplace=True)
fig, ax = plt.subplots(figsize=(25,8))
ind = np.arange(len(dftot))
wi = 0.4
ax.bar(ind, dftot.SecControv, wi, color='green', label='Controverse')
ax.bar(ind + wi, dftot.Popularity, wi, color='red', label='Popular')
ax.set(xticks=ind + wi, xticklabels=dftot.newDesk)
plt.xticks(fontsize=20, rotation=90)
ax.legend(fontsize=20)
plt.tight_layout()
plt.savefig("Pictures/Strat2ND.pdf")
###Output
_____no_output_____
###Markdown
Grouping by AUTHOR - Strategy 2
###Code
# I have to select only authors with more than 30 articles (matching the filter below)
dftot = pd.DataFrame()
dftot['byline'] = arts2['byline'].unique()
grp = arts2.groupby('byline')['newDesk'].count()
dftot['artNum'] = dftot['byline'].map(grp)
#grouper = arts2.groupby('byline').count()
#grouper
dftot.sort_values(by='artNum',ascending=False, inplace=True)
dftot = dftot[ dftot['artNum']> 30 ]
dftot.sort_values(by='byline',inplace=True)
grouper = arts2.groupby('byline').mean()['controv']
grouper2 = arts2.groupby('byline').mean()['commentNumber']
dftot['SecControv'] = dftot['byline'].map(grouper)
dftot['Popularity']= dftot['byline'].map(grouper2)
dftot['Popularity'] = dftot['Popularity'] * dftot['SecControv'].max() / dftot['Popularity'].max()
dftot.sort_values(by='SecControv',ascending=False,inplace=True)
fig, ax = plt.subplots(figsize=(25,8))
ind = np.arange(len(dftot))
wi = 0.4
ax.bar(ind, dftot.SecControv, wi, color='green', label='Controverse')
ax.bar(ind + wi, dftot.Popularity, wi, color='red', label='Commented')
ax.set(xticks=ind + wi, xticklabels=dftot.byline)
plt.xticks(fontsize=15, rotation=90)
ax.legend()
plt.tight_layout()
plt.savefig("Pictures/Strat2Auth.pdf")
###Output
_____no_output_____
###Markdown
TF-IDF to extract 'controversial words'
###Code
dfcon = pd.DataFrame()
arts2.sort_values(by='controv',ascending=False)
f10p = int(len(arts2)/10)
dfcon = arts2.head(f10p)
topIDs = list (dfcon['articleID'])
comms_elite = comms [ comms['articleID'].isin (topIDs) ]
comms_elite['label'] = comms_elite['commentScore'].round()
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
vect = TfidfVectorizer()
corpus2 = [w for w in comms_elite['commentBody']]
X = vect.fit_transform(corpus2)
print(X.shape)
ch2 = SelectKBest(chi2,k=150)
X_new = ch2.fit_transform(X, y=comms_elite['label'])
feature_names = [vect.get_feature_names()[i] for i in ch2.get_support(indices=True)]
print(feature_names, sep=' ')
featurestring = ' '.join(word for word in feature_names)
real_features = afinn.find_all(featurestring)
print("Top " + str(len(real_features)) + " most controversial words:")
print()
for feat in real_features:
print(feat, end=' ')
# made by Andrea Sala
###Output
_____no_output_____ |
[IMPL]_Hyperopt_for_baking.ipynb | ###Markdown
Get data Goal: get a combination of `dark_ch`, `nibs` and `cocoa` that will have the same properties as 35g of `target`.
###Code
target = pd.DataFrame(OrderedDict({
'protein': [13.5 / 100],
'fat': [49.4 / 100],
'carbs': [13.6 / 100]
}))
dark_ch = pd.DataFrame(OrderedDict({
'protein': [9.8 / 100],
'fat': [43 / 100],
'carbs': [32 / 100]
}))
nibs = pd.DataFrame(OrderedDict({
'protein': [14 / 100],
'fat': [54 / 100],
'carbs': [11 / 100]
}))
cocoa = pd.DataFrame(OrderedDict({
'protein': [21 / 100],
'fat': [11 / 100],
'carbs': [10 / 100]
}))
###Output
_____no_output_____
###Markdown
Optimize using `hyperopt`
###Code
# Define an objective function
def objective(params):
# Unpack params
amount_dark_ch = params['dark_ch']
amount_nibs = params['nibs']
amount_cocoa = params['cocoa']
total = amount_dark_ch * dark_ch \
+ amount_nibs * nibs \
+ amount_cocoa * cocoa
loss = (total - 35*target).values**2
return np.sum(loss)
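# Illustration only (added): evaluate the objective at one arbitrary, made-up mix
# to see the squared-error loss that hyperopt will try to minimise.
print(objective({'dark_ch': 10.0, 'nibs': 15.0, 'cocoa': 10.0}))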
# Define a search space
space = {
'dark_ch': hp.uniform('dark_ch', 1, 35),
'nibs': hp.uniform('nibs', 1, 35),
'cocoa': hp.uniform('cocoa', 1, 35),
}
# Minimize the objective over the space
best = fmin(objective, space, algo=tpe.suggest, max_evals=5000)
best
print('35 g of target:')
print(35 * target)
print('\nOur mix:')
print(best['nibs'] * nibs + best['cocoa'] * cocoa + best['dark_ch'] * dark_ch)
###Output
35 g of target:
protein fat carbs
0 4.725 17.29 4.76
Our mix:
protein fat carbs
0 4.707705 17.26919 4.758337
###Markdown
Linear system solve
###Code
# Assign variables
A = pd.concat([nibs, cocoa, dark_ch]).values.T
b = (target.values * 35).T
# Check if the feature matrix has a non-zero determinant
np.linalg.det(A)
# Sanity check
A, b
# Solve the system
solution = np.dot(np.linalg.inv(A), b).squeeze()
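# Note (added): np.linalg.solve solves the same 3x3 system without forming the
# explicit inverse and is generally the numerically preferred route.
solution_alt = np.linalg.solve(A, b).squeeze()  # should match `solution` above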
# Get results
print('35 g of target:')
print(35 * target)
print('\nOur mix:')
print(solution[0] * nibs + solution[1] * cocoa + solution[2] * dark_ch)
###Output
35 g of target:
protein fat carbs
0 4.725 17.29 4.76
Our mix:
protein fat carbs
0 4.725 17.29 4.76
###Markdown
This solution may look **surprising** given that the data comes from real-world ingredients 🤯 but it is actually expected: with three ingredients and three nutrient constraints the system is square, and since the determinant of A is non-zero it always admits an exact solution. If you see an error here, please let me know!
###Code
# Compare solutions
solution, best
###Output
_____no_output_____ |
gpt2_experiments/gpt2_finetune.ipynb | ###Markdown
Finetune GPT-2 on Reddit Data by [Max Woolf](http://minimaxir.com). A variant of the [default notebook](https://colab.research.google.com/drive/1VLG8e7YSEwypxU-noRNhsv5dW4NfTGce) optimized for short-form titles. It is recommended to be familiar with that notebook before using this one. This example uses a CSV export of Reddit data via BigQuery (see this post for more information).
###Code
!pip install -q gpt-2-simple
import gpt_2_simple as gpt2
from datetime import datetime
from google.colab import files
###Output
_____no_output_____
###Markdown
GPU
###Code
!nvidia-smi
###Output
Sat Sep 28 17:16:18 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.40 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 70C P8 32W / 149W | 0MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
Downloading GPT-2 The default query returns 1.3MB of data, so probably should only use `124M` GPT-2 to finetune. If working with more Reddit data, then migrate to `355M`.
###Code
gpt2.download_gpt2(model_name="355M")
###Output
Fetching checkpoint: 1.05Mit [00:00, 213Mit/s]
Fetching encoder.json: 1.05Mit [00:00, 94.6Mit/s]
Fetching hparams.json: 1.05Mit [00:00, 516Mit/s]
Fetching model.ckpt.data-00000-of-00001: 1.42Git [00:10, 133Mit/s]
Fetching model.ckpt.index: 1.05Mit [00:00, 387Mit/s]
Fetching model.ckpt.meta: 1.05Mit [00:00, 99.7Mit/s]
Fetching vocab.bpe: 1.05Mit [00:00, 137Mit/s]
###Markdown
Mounting Google Drive
###Code
gpt2.mount_gdrive()
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Uploading a Text File to be Trained to Colaboratory A single-column CSV is expected.
###Code
from google.colab import drive
drive.mount('/content/drive')
file_name = "gpt2_10000_posts.csv"
###Output
_____no_output_____
###Markdown
If your text file is larger than 10MB, it is recommended to upload that file to Google Drive first, then copy that file from Google Drive to the Colaboratory VM.
###Code
gpt2.copy_file_from_gdrive(file_name)
###Output
_____no_output_____
###Markdown
Finetune GPT-2 Providing a single-column CSV will automatically add `<|startoftext|>` and `<|endoftext|>` tokens appropriately. Short-form text is more likely to overfit, so train it with fewer steps than you would for longform content.
###Code
import pandas as pd
df = pd.read_csv('/content/drive/My Drive/gpt2_10000_posts.csv')
df.head()
sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
dataset=file_name,
model_name='355M',
steps=1100,
restore_from='latest',
run_name='10000_posts',
print_every=10,
sample_every=100
)
gpt2.copy_checkpoint_to_gdrive(run_name='10000_posts')
###Output
_____no_output_____
###Markdown
Load a Trained Model Checkpoint
###Code
import gpt_2_simple as gpt2
gpt2.copy_checkpoint_from_gdrive(run_name='10000_posts')
sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name='10000_posts')
###Output
Loading checkpoint checkpoint/10000_posts/model-1000
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from checkpoint/10000_posts/model-1000
###Markdown
Generate Text From The Trained Model Same as the normal generate functions, except with additional parameters to handle the new tokens.
###Code
gpt2.generate(sess, run_name='10000_posts',
length=50,
nsamples=10,
prefix="<|startoftext|>",
truncate="<|endoftext|>")
gpt2.generate(sess,
length=100,
temperature=1.0,
nsamples=10,
batch_size=10,
prefix="<|startoftext|>",
truncate="<|endoftext|>",
include_prefix=False
)
###Output
_____no_output_____
###Markdown
If generating in bulk, you may want to set `sample_delim=''` to remove the delimiter between each sample.
###Code
gen_file = 'gpt2_gentext_{:%Y%m%d_%H%M%S}.txt'.format(datetime.utcnow())
gpt2.generate_to_file(sess,
run_name='10000_posts',
destination_path=gen_file,
length=100,
temperature=1.0,
nsamples=100,
batch_size=20,
prefix="<|startoftext|>I think the game",
truncate="<|endoftext|>",
include_prefix=False,
sample_delim=''
)
# may have to run twice to get file to download
files.download(gen_file)
###Output
_____no_output_____
###Markdown
Etcetera If the notebook has errors (e.g. GPU Sync Fail), force-kill the Colaboratory virtual machine and restart it with the command below:
###Code
!kill -9 -1
###Output
_____no_output_____ |
Recommendation_v1.ipynb | ###Markdown
importing Python Libs
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as ses
import missingno as ms
%matplotlib inline
###Output
_____no_output_____
###Markdown
Reading data
###Code
data = pd.read_csv('ratings_Beauty.csv')
data.head()
data.info()
ms.matrix(data)
data.dropna(inplace=True)
data.info()
data.shape
data.head()
###Output
_____no_output_____
###Markdown
Recommendation - 1 Extracting Most Popular product based on count of sales/ratings
###Code
top_products = pd.DataFrame(data.groupby('ProductId')['Rating'].sum())
top_products=top_products.sort_values('Rating',ascending=False)
top_products.head(20)
###Output
_____no_output_____
###Markdown
Recommendation - 2 Extracting the most sold products based on the number of users who bought each product
###Code
most_sold_products = pd.DataFrame(data.groupby('ProductId')['UserId'].count())
most_sold_products = most_sold_products.sort_values('UserId',ascending=False)
most_sold_products.head(20)
###Output
_____no_output_____
###Markdown
Recommendation - 3 Extracting recommendations for similar users
###Code
data.drop('Timestamp',axis=1,inplace=True)
data.head()
##unique users
len(list(set(data['UserId'])))
##unique products
len(list(set(data['ProductId'])))
##unique ratings
(list(set(data['Rating'])))
###Output
_____no_output_____
###Markdown
Building Collaborative Filtering Model We can do this in 2 ways: 1. recommending new products to users who bought similar products to each other; 2. recommending new products to users who bought similar products and gave them similar ratings
###Code
import surprise
from surprise import KNNWithMeans
from surprise.model_selection import GridSearchCV
from surprise import Dataset
from surprise import accuracy
from surprise import Reader
from surprise.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
http://surpriselib.com/
###Code
# reader is used to parse the ratings column automatically into the surprise dataset
reader = Reader(rating_scale=(1.0, 5.0))
suprise_data = Dataset.load_from_df(data[['UserId', 'ProductId', 'Rating']],reader)
#suprise_data.head()
# Split data to train and test
from surprise.model_selection import train_test_split
trainset, testset = train_test_split(suprise_data, test_size=.3,random_state=0)
###Output
_____no_output_____
###Markdown
Model Training
###Code
#k (int) – The (max) number of neighbors to take into account for aggregation (see this note). Default is 40.
#min_k (int) – The minimum number of neighbors to take into account for aggregation.
#If there are not enough neighbors, the prediction is set to the global mean of all ratings. Default is 1.
#sim_options (dict) – A dictionary of options for the similarity measure.
# docs for surprise recommend using pearson_baseline .. will study why
algo_user = KNNWithMeans(k=10, min_k=6, sim_options={'name': 'pearson_baseline', 'user_based': True})
algo_user.fit(trainset)
# memory limitation - 8 GB of RAM is not able to allocate the required 3.12 TiB of memory
data.shape
###Output
_____no_output_____
###Markdown
Need to filter the data, keeping only users with at least 10 ratings (the threshold used in the cell below)
###Code
userID = data.groupby('UserId').count()
top_users = userID[userID['Rating']>=10].index
temp = data[data['UserId'].isin(top_users)]
temp.shape
temp.head()
###Output
_____no_output_____
###Markdown
Keeping only products with at least 10 ratings
###Code
prodID = data.groupby('ProductId').count()
top_prod = prodID[prodID['Rating'] >= 10].index
temp1 = temp[temp['ProductId'].isin(top_prod)]
temp1.shape
###Output
_____no_output_____
###Markdown
Training Again
###Code
reader = Reader(rating_scale=(1.0, 5.0))
suprise_data = Dataset.load_from_df(temp1[['UserId', 'ProductId', 'Rating']],reader)
trainset, testset = train_test_split(suprise_data, test_size=.3,random_state=0)
###Output
_____no_output_____
###Markdown
KNN Model
###Code
algo_user = KNNWithMeans(k=10, min_k=6, sim_options={'name': 'pearson_baseline', 'user_based': True})
algo_user.fit(trainset)
###Output
Estimating biases using als...
Computing the pearson_baseline similarity matrix...
Done computing similarity matrix.
###Markdown
SVD
###Code
from surprise import KNNBasic, SVD, NormalPredictor, KNNBaseline,KNNWithMeans, KNNWithZScore, BaselineOnly, CoClustering, Reader, dataset, accuracy
svd_model = SVD(n_factors=50,reg_all=0.02)
svd_model.fit(trainset)
###Output
_____no_output_____
###Markdown
Model Evaluation KNN Model
###Code
# Evalute on test set
test_pred = algo_user.test(testset)
test_pred[0]
# compute RMSE
accuracy.rmse(test_pred)
###Output
RMSE: 1.1399
###Markdown
SVD
###Code
test_pred = svd_model.test(testset)
accuracy.rmse(test_pred)
# RMSE is Root Mean Square Error. Since SVD has a lower RMSE, it is the better model here
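# Illustration only (added): accuracy.rmse is equivalent to taking the square root
# of the mean squared difference between the true and estimated ratings.
print(np.sqrt(np.mean([(p.r_ui - p.est) ** 2 for p in test_pred])))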
###Output
_____no_output_____
###Markdown
Tuning Parameters
###Code
param_grid = {'n_factors' : [5,10,15], "reg_all":[0.01,0.02]}
gs = GridSearchCV(SVD, param_grid, measures=['rmse'], cv=3,refit = True)
gs.fit(suprise_data)
gs.best_params
# Use the "best model" for prediction
gs.test(testset)
accuracy.rmse(gs.test(testset))
test_pred[:5]
###Output
_____no_output_____
###Markdown
Testing
###Code
from collections import defaultdict
def get_top_n(predictions, n=5):
# First map the predictions to each user.
top_n = defaultdict(list)
for uid, iid, true_r, est, _ in predictions:
top_n[uid].append((iid, est))
# Then sort the predictions for each user and retrieve the k highest ones.
for uid, user_ratings in top_n.items():
user_ratings.sort(key=lambda x: x[1], reverse=True)
top_n[uid] = user_ratings[:n]
return top_n
top_n = get_top_n(test_pred, n=5)
testset[testset[0]=='A3LOVYOYGXZEZV']
# Print the recommended items for each user
count=0
for uid, user_ratings in top_n.items():
if uid=='A3LOVYOYGXZEZV':
print(uid, [iid for (iid, _) in user_ratings])
count+=1
if count == 5:
break
###Output
A3LOVYOYGXZEZV ['B00IMLLW6A', 'B000RZQGS8', 'B0014DH6FE', 'B0014DH6EK']
###Markdown
Recommending products to users who bought similar products to each other
###Code
temp1.head()
no_of_users_brought_the_product = pd.DataFrame(temp1.groupby('ProductId')['UserId'].count().reset_index())
no_of_users_brought_the_product = no_of_users_brought_the_product.sort_values('UserId',ascending=False)
no_of_users_brought_the_product.head()
no_of_users_brought_the_product.columns=['ProductId','Count']
list(no_of_users_brought_the_product[no_of_users_brought_the_product['ProductId']=='B0043OYFKU']['Count'])[0]
no_of_users_brought_the_product.shape
type(temp1)
temp1.head()
data_list_user_id = list(temp1['UserId'])
data_list_product_id = list(temp1['ProductId'])
data_list_count = []
len(data_list_user_id)
len(data_list_product_id)
for i in data_list_product_id:
data_list_count.append(list(no_of_users_brought_the_product[no_of_users_brought_the_product['ProductId']==i]['Count'])[0])
data_list_count[:10]
max(data_list_count)
data_list_user_id[0]
data_list_product_id[0]
no_of_users_brought_the_product[no_of_users_brought_the_product['ProductId']=='1304351475']
df1= pd.DataFrame(data_list_user_id,columns=['UserId'])
df2= pd.DataFrame(data_list_product_id,columns=['ProductId'])
df3= pd.DataFrame(data_list_count,columns=['Count'])
data_with_count = pd.concat([df1,df2,df3],axis=1)
data_with_count.head()
###Output
_____no_output_____
###Markdown
Training
###Code
reader = Reader(rating_scale=(1, 241))
suprise_data = Dataset.load_from_df(data_with_count[['UserId', 'ProductId', 'Count']],reader)
trainset, testset = train_test_split(suprise_data, test_size=.3,random_state=0)
algo_user = KNNWithMeans(k=10, min_k=6, sim_options={'name': 'pearson_baseline', 'user_based': True})
algo_user.fit(trainset)
svd_model_count = SVD(n_factors=5,reg_all=0.01)
svd_model_count.fit(trainset)
# Evalute on test set
test_pred = algo_user.test(testset)
test_pred[0]
accuracy.rmse(test_pred)
test_pred = svd_model_count.test(testset)
accuracy.rmse(test_pred)
test_pred[:5]
top_n_count = get_top_n(test_pred, n=5)
testset[testset[0]=='A3LOVYOYGXZEZV']
# Print the recommended items for each user
count=0
for uid, user_ratings in top_n.items():
if uid=='A3LOVYOYGXZEZV':
print(uid, [iid for (iid, _) in user_ratings])
count+=1
if count == 5:
break
###Output
A3LOVYOYGXZEZV ['B00IMLLW6A', 'B000RZQGS8', 'B0014DH6FE', 'B0014DH6EK']
|
analysis_notebooks/2_Languages.ipynb | ###Markdown
[Notebook Popularity <](1_Popularity.ipynb) | [> Notebook Owners](3_Owners.ipynb) Programming Languages in Jupyter Here, we focus only on notebooks that have a specified language to examine what programming languages people are using with Jupyter Notebooks and their growth over time. 1.79% of notebooks do not have a specified language and are removed before analysis. Results Summary:- Python is consistently the most popular programming language for Jupyter notebooks. There are over 100 times more notebooks written in Python than there are in the next most popular language, R. Julia and R are very close in popularity. Many other languages are used (e.g. Scala, Bash, C++, Ruby), but with minimal frequency. - 98.3% of notebooks have a specified language. Of these, 97.62% are written in Python, 0.80% Julia, and 0.95% R. - The relative popularities of languages have not changed much over time. - It appears that there are very few notebooks written in Python 3.3 and 3.4. However, these versions were released and used frequently from 2012 - 2015, when notebooks were most often in format 2 or 3. Further, nearly all of version 2 and 3 notebooks are missing language versions. It's likely that Python 3.3 and 3.4 were used heavily during these years but are not accounted for in our dataset due to the formats of the notebooks. -------- Import Packages & Load Data
###Code
import pandas as pd
import numpy as np
import calendar
import matplotlib.pyplot as plt
import load_data
import datetime
repos_temp = load_data.load_repos()
notebooks_temp = load_data.load_notebooks()
###Output
Repos loaded in 0:00:04.433827
Notebooks loaded in 0:00:24.465798
###Markdown
-------- Tidy Data Focus in on notebooks with a specified language
###Code
notebooks = notebooks_temp.copy()[~notebooks_temp.lang_name.isna()].reset_index(drop=True)
print("{0} ({1}%) of notebooks not in ipynb checkpoints have a specified language. \nThe {2}% without a language have been removed.".format(
len(notebooks),
round(100*len(notebooks)/len(notebooks_temp), 2),
round(100 - 100*len(notebooks)/len(notebooks_temp), 2)
))
###Output
4215016 (98.23%) of notebooks not in ipynb checkpoints have a specified language.
The 1.77% without a language have been removed.
###Markdown
Update repos to reflect notebooks with a language
###Code
repos = repos_temp.copy()[repos_temp.repo_id.isin(notebooks.repo_id)].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Delete temp dataframes to save space
###Code
del repos_temp
del notebooks_temp
###Output
_____no_output_____
###Markdown
--- Visualizations & Statistics What programming langugages are the most popular to use with Jupyter notebooks?
###Code
counts = notebooks.groupby('lang_name')['file'].count().sort_values(ascending=False).reset_index().rename(
columns = {'file':'count'}
)
print("{0}% of notebooks with a specified language are written in Python, {1}% Julia, and {2}% R.".format(
round(100*sum(notebooks.lang_name == 'python') / len(notebooks), 3),
round(100*sum(notebooks.lang_name == 'julia') / len(notebooks), 3),
round(100*sum(notebooks.lang_name == 'r') / len(notebooks), 3)
))
plot_counts = counts[counts['count'] > 1000]
x = plot_counts.lang_name
x_pos = np.arange(len(x))
height = plot_counts['count']
plt.bar(x_pos, height, color = 'teal')
plt.xticks(x_pos, x, rotation = 70)
plt.xlabel('Language')
plt.yscale('log')
plt.ylabel('Number of Notebooks')
plt.title('Language Distribution')
plt.show()
###Output
_____no_output_____
###Markdown
Clearly, Python is the language of choice. Python has over 100 times more notebooks than Julia or R, the next most popular languages. Language Use Over TimeHow have relative popularities of langages canged over time?
###Code
def lang_over_time(language):
start = notebooks.pushed_year.min()
end = notebooks.pushed_year.max()
language_over_time = notebooks[notebooks.lang_name == language]
language_yearly_counts = language_over_time.groupby('pushed_year')['file'].count().reset_index().rename(columns={'file':'count'})
to_append = {'pushed_year':[], 'count':[]}
for y in range(start, end):
if y not in language_yearly_counts['pushed_year'].values:
to_append['pushed_year'].append(y)
to_append['count'].append(0)
to_append_df = pd.DataFrame(to_append)
language_yearly_counts = pd.concat([language_yearly_counts, to_append_df], sort = False).sort_values(by='pushed_year')
totals = notebooks.groupby('pushed_year')['file'].count().reset_index().rename(columns={'file':'total'})
language_yearly_counts = language_yearly_counts.merge(totals, on = 'pushed_year')
language_yearly_counts['language'] = language
return language_yearly_counts
py_counts = lang_over_time('python')
ju_counts = lang_over_time('julia')
r_counts = lang_over_time('r')
lang_counts = pd.concat([py_counts, ju_counts, r_counts], sort = False)
lang_counts['prop'] = lang_counts['count']/lang_counts['total']
py_counts = lang_counts[lang_counts.language == 'python']
ju_counts = lang_counts[lang_counts.language == 'julia']
r_counts = lang_counts[lang_counts.language == 'r']
plt.plot(py_counts['pushed_year'],py_counts['prop'], color = '#F9D25B', label = 'Python')
plt.plot(ju_counts['pushed_year'],ju_counts['prop'], color = '#8A5D9F', label = 'Julia')
plt.plot(r_counts['pushed_year'],r_counts['prop'], color = '#2A65B3', label = 'R')
plt.title('Language Use Over Time')
plt.xlabel('Year')
plt.ylabel('Proportion of Notebooks')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Nearly all notebooks are written in Python, and this popularity has not fluctuated much over time. It's difficult to see the difference between Julia and R, so let's zoom in on those.
###Code
plt.plot(ju_counts['pushed_year'],ju_counts['prop'], color = '#8A5D9F', label = 'Julia')
plt.plot(r_counts['pushed_year'],r_counts['prop'], color = '#2A65B3', label = 'R')
plt.title('Language Use Over Time')
plt.xlabel('Year')
plt.ylabel('Proportion of Notebooks')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Julia's popularity peaked in 2015 when it was used in around 1.8% of notebooks. Julia then became less popular and was surpassed by R in 2017. In 2019, they are at approximately equal popularities, each used in approximately 0.8% of notebooks. Python Version Use Release dates of Python versions ([reference](https://en.wikipedia.org/wiki/History_of_Python)): - 2.7: July 2010 - 3.3: September 2012 - 3.4: March 2014 - 3.5: September 2015 - 3.6: December 2016 - 3.7: June 2018
###Code
counts = pd.Series(notebooks.python_version[notebooks.python_version != '']).value_counts()
counts = pd.DataFrame(counts).reset_index().rename(columns={'index':'version','python_version':'counts'})
counts.version = counts.version.astype('float')
counts = counts.sort_values(by='version')
counts = counts[counts['counts'] > 50]
x = counts['version']
x_pos = np.arange(len(x))
height = counts['counts']
plt.bar(x_pos, height, color = 'teal')
plt.xticks(x_pos, x)
plt.xlabel('Python Version')
plt.ylabel('Number of Notebooks')
plt.title('Python Version Use')
plt.show()
###Output
/home/ec2-user/anaconda3/lib/python3.5/site-packages/pandas/core/ops/__init__.py:1115: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
result = method(y)
###Markdown
Python 3.6 is the most common version within Jupyter notebooks. Almost no notebooks were written in Python 3.3 (released September 2012) or 3.4 (March 2014). Why so few notebooks in 3.3 and 3.4? Notebook formats over time and missing language version
###Code
format_yearly_counts = notebooks.groupby(['pushed_year','nbformat'])['file'].count().reset_index().rename(
columns = {'file':'count'}
)
format_yearly_counts = format_yearly_counts.merge(
format_yearly_counts.groupby('pushed_year')['count'].sum().reset_index().rename(
columns = {'count':'total'}
),
on = 'pushed_year'
)
format_yearly_counts['prop'] = format_yearly_counts['count']/format_yearly_counts['total']
###Output
_____no_output_____
###Markdown
Proportion of notebooks in each year that were nbformat 2
###Code
format_yearly_counts[format_yearly_counts.nbformat == 2]
###Output
_____no_output_____
###Markdown
Proportion of notebooks in each year that were nbformat 3
###Code
format_yearly_counts[format_yearly_counts.nbformat == 3]
print("{0}% of nbformat 2 python notebooks have missing language version".format(
round(100*sum(notebooks[
np.logical_and(notebooks.nbformat == 2, notebooks.lang_name == 'python')
].lang_version.isna()) / len(
notebooks[np.logical_and(notebooks.nbformat == 2, notebooks.lang_name == 'python')]
), 2)
))
print("{0}% of nbformat 3 python notebooks have missing language version".format(
round(100*sum(notebooks[
np.logical_and(notebooks.nbformat == 3, notebooks.lang_name == 'python')
].lang_version.isna()) / len(
notebooks[np.logical_and(notebooks.nbformat == 3, notebooks.lang_name == 'python')]
), 2)
))
print("{0}% of nbformat 4 python notebooks have missing language version".format(
round(100*sum(notebooks[
np.logical_and(notebooks.nbformat == 4, notebooks.lang_name == 'python')
].lang_version.isna()) / len(
notebooks[np.logical_and(notebooks.nbformat == 4, notebooks.lang_name == 'python')]
), 2)
))
###Output
99.6% of nbformat 2 python notebooks have missing language version
98.33% of nbformat 3 python notebooks have missing language version
0.31% of nbformat 4 python notebooks have missing language version
###Markdown
Only 0.31% of nbformat 4 notebooks have a missing language version. However, *98.33%* of nbformat 3 notebooks and *99.6%* of nbformat 2 notebooks are missing a language version. Further, nbformat 2 notebooks were prominent from 2011 to 2012, and nbformat 3 notebooks from 2012 to 2015. These years overlap with the releases of Python 3.3 and 3.4, when they were new and likely used a lot. Visually inspecting some nbformat 3 notebooks, there is a place in the json where language_version *should* be, but it is actually present in only about 1.7% of them. It's highly unlikely that Python 3.3 and 3.4 are actually as unpopular as shown above. Version use over time ***Keep in mind that counts are likely missing for python 3.3 and 3.4!***
###Code
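# Illustrative helper (added; not part of the original analysis): given the path to a
# raw .ipynb file, report its nbformat and the declared language version. In nbformat 4
# the version normally sits under metadata.language_info.version, while nbformat 2/3
# notebooks frequently omit it, which is consistent with the gap discussed above.
# Any file path passed in would be hypothetical here.
import json
def declared_language_version(ipynb_path):
    with open(ipynb_path) as f:
        nb = json.load(f)
    lang_info = nb.get('metadata', {}).get('language_info', {})
    return nb.get('nbformat'), lang_info.get('version')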
start = datetime.datetime.now()
yearly_version_counts = notebooks[notebooks['lang_name']=='python'].groupby(['pushed_year', 'python_version'])['file']\
.count().reset_index().rename(columns={'file':'count'})
yearly_totals = yearly_version_counts.groupby('pushed_year')[['count']].sum().reset_index().rename(columns={'count':'total'})
yearly_version_counts = yearly_version_counts.merge(yearly_totals, on = 'pushed_year')
yearly_version_counts['prop'] = yearly_version_counts['count']/yearly_version_counts['total']
end = datetime.datetime.now()
print(end - start)
start_time = datetime.datetime.now()
all_counts = []
start = yearly_version_counts.pushed_year.min()
end = yearly_version_counts.pushed_year.max()
# fill in zeros
for version in [2.7, 3.4, 3.5, 3.6, 3.7]:
version_counts = yearly_version_counts.copy()[yearly_version_counts['python_version']==version]
to_append = {'pushed_year':[], 'count': [], 'prop':[], 'version':[]}
for year in range(start,end):
if year not in version_counts['pushed_year'].values:
to_append['pushed_year'].append(year)
to_append['count'].append(0)
to_append['prop'].append(0)
to_append['version'].append(version)
to_append_df = pd.DataFrame(to_append)
if len(to_append_df) > 0:
version_counts = pd.concat(
[version_counts, to_append_df], sort = False
).sort_values(by='pushed_year')
version_counts['version'] = [version]*len(version_counts)
all_counts.append(version_counts)
end_time = datetime.datetime.now()
print(end_time - start_time)
###Output
0:00:00.023200
###Markdown
Plot Python versions over time, by year
###Code
fig = plt.figure(figsize = (7, 5))
for version in all_counts:
plt.plot(
version.pushed_year, version.prop,
label = version.version.iloc[0],
)
plt.xlabel('Year')
plt.ylabel('Proportion of Notebooks in Given Year')
plt.title('Python Versions over Time')
plt.legend(bbox_to_anchor=(1.16, 1.02))
plt.show()
###Output
_____no_output_____ |
Europython/Ipwidgets.ipynb | ###Markdown
See https://github.com/jupyter-widgets/ipywidgets/blob/c6520c644eb7bae477e3c8dc559d0771c42709d4/docs/source/examples/Using%20Interact.ipynb
###Code
def slow_function(i):
print(int(i),list(x for x in range(int(i)) if
str(x)==str(x)[::-1] and
str(x**2)==str(x**2)[::-1]))
return
%%time
slow_function(1e6)
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from ipywidgets import FloatSlider
interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));
interact_manual(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5))
interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5, continuous_update=False));
x_widget = FloatSlider(min=0.0, max=10.0, step=0.05)
y_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)
def update_x_range(*args):
x_widget.max = 2.0 * y_widget.value
y_widget.observe(update_x_range, 'value')
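# (added note) observe(update_x_range, 'value') registers the callback so that every
# change to y's value immediately rescales x's allowed maximum to twice y, keeping the
# two sliders consistent before `interact` wires them to `printer` below.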
def printer(x, y):
print(x, y)
interact(printer,x=x_widget, y=y_widget);
###Output
_____no_output_____
###Markdown
http://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html
###Code
import ipywidgets as widgets
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check'
)
widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast'],
description='Speed:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
# icons=['check'] * 3
)
###Output
_____no_output_____ |
Notebooks/Insurance_Recommender_Project_Data_Analysis.ipynb | ###Markdown
**K.J Somaiya College of Engineering** Engineering Final Year Project **InsureBuddy - An Insurance Recommender System** **Author:** **Sujay Torvi** Co-Authors: 1. Krupen Shah 2. Harsh Somaiya 3. Tirth Desai Copyright© 2020 Under MIT License **`Problem Statement: To process, analyse and mine the data for useful insights in insurance product recommendation and model them using various algorithms, and deploying them into an application which would provide the user with best insurance product recommendations`** **II. Dataset Visualization & Analysis** **Source of Dataset:** **Zimnnat Insurance Recommendation Dataset**URL: https://zindi.africa/competitions/zimnat-insurance-recommendation-challenge
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import seaborn as sns
from google.colab import files
u = files.upload()
train = pd.read_csv('Final_Train.csv')
train = train[train.columns[1:]]
policies = train.columns[6:]
train.head()
data = train
plt.rcParams["font.weight"] = "bold"
plt.rcParams["axes.labelweight"] = "bold"
def o_plot(target_col,target_class):
corp = data[data[target_col] == target_class]
lst = []
for i in range(1,22):
lst.append(len(corp[corp['Policy ' +str(i)] == 1]))
plt.bar(policies, lst, color ='purple',
width = 0.8)
plt.xlabel("Policies")
plt.xticks(rotation = 90)
plt.ylabel("Count")
plt.title("No . of {} who were recommended policy".format(target_class))
plt.show()
o_plot('occupation_category_code','Corporate Employee')
o_plot('occupation_category_code','Self Employed')
o_plot('occupation_category_code','Medical Professional')
o_plot('occupation_category_code','Enterpreneur')
o_plot('occupation_category_code','Military Service')
plt.show()
o_plot('marital_status','M')
o_plot('marital_status','S')
plt.show()
o_plot('sex','M')
o_plot('sex','F')
plt.show()
o_plot('age_group','Below 25')
o_plot('age_group','25-40')
o_plot('age_group','41-60')
o_plot('age_group','Above 60')
plt.show()
o_plot('Annual_Income','0-5 lac')
o_plot('Annual_Income','5-10 lac')
o_plot('Annual_Income','10-20 lac')
o_plot('Annual_Income','20-30 lac')
o_plot('Annual_Income','30-40 lac')
o_plot('Annual_Income','40-50 lac')
plt.show()
###Output
_____no_output_____
###Markdown
In the above plots, Policy 8 and Policy 15 dominate in terms of the number of users in the dataset (a likely case of label class imbalance). For the other policies, we can see that some policies are taken more or less frequently across the various classes of the different features.
###Code
newdata = data[['sex','marital_status','age_group','occupation_category_code','Annual_Income']]
newdata.head()
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
df = pd.DataFrame(newdata.apply(preprocessing.LabelEncoder().fit_transform))
df_ = pd.DataFrame(data[data.columns[1:]].apply(preprocessing.LabelEncoder().fit_transform))
df.head()
plt.figure(figsize=(12, 8))
plt.title('Heatmap of our data')
corr = df.corr()
mask = np.zeros(corr.shape, dtype=bool)
mask[np.triu_indices(len(mask))] = True
sns.heatmap(corr,mask = mask, annot=True,fmt='.3f',vmin=-1, vmax=1, center= 0,cmap='hot')
plt.show()
###Output
_____no_output_____
###Markdown
Since the features are largely uncorrelated with one another, the data would be best modeled with algorithms that handle independent features well.
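As a small illustration (a sketch only, not the project's final model), an algorithm that explicitly assumes feature independence, such as Categorical Naive Bayes, can be fit on the label-encoded features `df` built above, one policy at a time in a one-vs-rest fashion:

```python
from sklearn.naive_bayes import CategoricalNB

# Fit an independence-assuming classifier for a single policy label;
# repeating this per policy gives a simple one-vs-rest recommender baseline.
nb = CategoricalNB()
nb.fit(df, train['Policy 1'])
print("Training accuracy for Policy 1:", nb.score(df, train['Policy 1']))
```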
###Code
plt.figure(figsize=(18, 9))
plt.title('Heatmap of our data')
corr = df_.corr()
mask = np.zeros(corr.shape, dtype=bool)
mask[np.triu_indices(len(mask))] = True
sns.heatmap(corr,mask = mask, annot=True,fmt='.1f',vmin=-1, vmax=1, center= 0,cmap='seismic',linewidths=0.1)
plt.savefig('plot.png', dpi=300, bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
** After viewing this correlation plot we note the following: ****1. Policy 8 and Policy 15 are highly correlated, which means the majority of those who were recommended Policy 8 were also recommended Policy 15 and vice versa** **2. Marital status determines to some extent whether a user will be recommended Policy 2 or not** **3. Policy 13 and Policy 14 are somewhat correlated, which means some of those who were recommended Policy 13 were also recommended Policy 14 and vice versa** **4. Users recommended Policy 8 are slightly less likely to be recommended Policies 18, 19 and 9, 11, 12** **5. Policy 11 and Policy 12 are highly correlated, which means the majority of those who were recommended Policy 11 were also recommended Policy 12 and vice versa** **6. Those who are recommended Policy 15 are slightly less likely to be recommended Policies 18 and 19** **7. Policy 18 and Policy 19 are perfectly correlated (possibly an anomaly in the dataset)**
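Observation 7 is easy to verify directly (a quick check, assuming the label columns are named `Policy 18` and `Policy 19` as in the plots above):

```python
# If every row matches, the correlation of 1.0 reflects duplicated labels rather than a real signal
print((train['Policy 18'] == train['Policy 19']).all())
print(train[['Policy 18', 'Policy 19']].corr())
```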
###Code
transpose_df = pd.melt(data,id_vars=['ID', 'sex', 'marital_status', 'age_group', 'occupation_category_code','Annual_Income'],value_vars=data.columns[6:]).sort_values('ID')
transpose_df = transpose_df.loc[transpose_df.value ==1]
transpose_df = transpose_df.reset_index()
transpose_df = transpose_df[['ID','sex','marital_status','age_group','occupation_category_code','Annual_Income','variable']]
transpose_df.head()
transpose_df_ = df = pd.DataFrame(transpose_df[transpose_df.columns[1:]].apply(preprocessing.LabelEncoder().fit_transform))
transpose_df.variable.value_counts().plot.bar(title = "Bar Chart Representing the number of users buying a particular policy", xlabel = "Policies", ylabel = "No. of users buying it")
plt.plot()
###Output
_____no_output_____ |
Notebooks/TM/trial_analyses_dev.ipynb | ###Markdown
Notebook for development and use of trial analyses API in tree maze functions
###Code
%matplotlib inline
import numpy as np
from scipy.stats import ttest_ind, ttest_1samp
import scipy.stats as stats
import pandas as pd
from pathlib import Path
from importlib import reload
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib_venn import venn2, venn3
import seaborn as sns
import TreeMazeAnalyses2.Analyses.tree_maze_functions as tmf
import TreeMazeAnalyses2.Analyses.experiment_info as ei
import TreeMazeAnalyses2.Analyses.plot_functions as pf
import TreeMazeAnalyses2.Utils.robust_stats as rs
import ipywidgets as widgets
from ipywidgets import interact, fixed, interact_manual
from joblib import delayed, Parallel
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=UserWarning)
%%time
ei = reload(ei)
info = ei.SummaryInfo()
update_paths = False
if update_paths:
info.update_paths()
%%time
if 1:
#info.run_analyses(which='pos_zones', task='T3', overwrite=True)
#info.run_analyses(which='event_table', task='T3', overwrite=True)
info.run_analyses(which='zone_rates_remap', task='T3', overwrite=True)
#info.run_analyses(which='bal_conds_seg_rates', task='T3', overwrite=True)
subject = 'Li'
session = 'Li_T3g_060418'
session_info = ei.SubjectSessionInfo(subject, session)
%%time
zr = session_info.get_zone_rates_remap(overwrite=True)
zr.head()
%%time
tmf = reload(tmf)
ta = tmf.TrialAnalyses(session_info)
%%time
zr2 = ta.all_zone_remapping_analyses()
zr2.head()
%%time
a= session_info.get_bal_conds_seg_rates()
a
session_info.run_analyses(which='bal_conds_seg_rates', verbose=True)
session_info.run_analyses(which='zone_rates_remap', overwrite=True, verbose=True)
zval = np.arange(-4,4,0.1)
plt.plot(zval, -np.log10(2*stats.norm.sf(np.abs(zval))))
2*stats.norm.sf(np.abs(-3))
stats.norm.isf(0.00001)
%%time
tmf = reload(tmf)
ta = tmf.TrialAnalyses(session_info)
tree_maze = tmf.TreeMazeZones()
###Output
CPU times: user 30.6 s, sys: 715 ms, total: 31.3 s
Wall time: 4.62 s
###Markdown
code to get bootstraps of segments in a single pandas df
###Code
%%time
tmf = reload(tmf)
ta = tmf.TrialAnalyses(session_info)
%%time
bal_cond = 'CR_bo'
segment_type = 'bigseg'
n_boot = 10
ta.get_seg_rate_boot(bal_cond, segment_type=segment_type, n_boot=n_boot)
%%time
if 0:
n_boot = 100
occ_thr = 1
bal_cond_set = ta.bal_cond_sets[bal_cond]
seg_names = ta.tmz.get_segment_type_names(segment_type)
trial_seg = bal_cond_set['trial_seg']
cond = bal_cond_set['cond']
sub_conds = bal_cond_set['sub_conds']
cond_set = {cond: sub_conds}
trial_sets = ta.get_trials_boot_cond_set(cond_set, n_boot=n_boot)[cond]
all_zr = ta.get_trial_segment_rates(segment_type=segment_type, trial_seg=trial_seg,
occupation_trial_samp_thr=occ_thr)
n_trials = len(trial_sets)
n_segs = len(seg_names)
n_units = ta.n_units
n_rows = n_units*n_boot*n_trials*n_segs
df = pd.DataFrame(np.nan, index=range(n_rows), columns=['cond', 'unit', 'boot', 'trial', 'seg', 'activity'])
df['cond'] = cond
units = np.arange(n_units)
boot_block_len = n_units*n_trials*n_segs
unit_block_len = n_trials * n_segs
for boot in range(n_boot):
boot_idx_start = boot_block_len*boot
boot_idx = np.arange(boot_block_len)+boot_idx_start
df.loc[boot_idx, 'boot'] = boot
trials = trial_sets[:, boot]
for unit in units:
unit_block_idx = np.arange(unit_block_len) + unit*unit_block_len + boot_idx_start
df.loc[unit_block_idx, 'unit'] = unit
temp = all_zr[unit].loc[trials].copy()
temp['trial'] = temp.index
temp = temp.melt(id_vars='trial', value_name='activity', var_name='seg')
temp = temp.set_index(unit_block_idx)
df.loc[unit_block_idx, ['trial', 'seg', 'activity']] = temp
df.groupby(['seg','unit']).mean()
###Output
_____no_output_____
###Markdown
dev of zone rate comparisons between trials
###Code
%%time
ta = tmf.TrialAnalyses(session_info)
tree_maze = tmf.TreeMazeZones()
%%time
df = ta.get_avg_seg_rates_boot()
df
###Output
_____no_output_____
###Markdown
balanced sampling of trials for comparing the effects of the cue by zone
###Code
n_boot = 100
group_set = 'CR-CL'
cond_set = ta.group_cond_sets[group_set]
trial_sets = ta.get_trials_boot_cond_set(cond_set, n_sel_trials=None, n_boot=n_boot)
ta.bal_cond_sets
ta.trial_table.loc[trial_sets['CR'][:,0]].correct.value_counts()
trial_sets['CR'].shape, trial_sets['CL'].shape
ta.bal_cond_pairs
###Output
_____no_output_____
###Markdown
pandas implementation (slow...)
###Code
%%time
n_boot=100
segment_type = 'bigseg'
conds = list(ta.bal_cond_sets.keys())
n_segs = len(ta.tmz.bigseg_names)
n_units = ta.n_units
n_conds = len(conds)
n_rows = n_boot*n_units*n_segs*n_conds
df = pd.DataFrame(index=range(n_rows), columns= ['boot', 'cond', 'unit', 'seg', 'm'])
cnt = 0
block_idx_len = n_units*n_segs
for cond in conds:
bal_cond_set = ta.bal_cond_sets[cond]
cond_set = bal_cond_set['cond_set']
trial_seg = bal_cond_set['trial_seg']
trial_set = ta.get_trials_boot_cond_set(cond_set)
for boot in range(n_boot):
idx = np.arange(block_idx_len) + cnt*block_idx_len
trials = trial_set[:,boot]
temp = ta.get_avg_trial_zone_rates(trials=trials, segment_type=segment_type, trial_seg=trial_seg)
temp['unit'] = temp.index
temp['cond'] = cond
temp['boot'] = boot
temp = temp.melt(id_vars=['boot','cond','unit'], value_name='seg', var_name='m')
df.loc[idx] = temp.set_index(idx)
cnt +=1
df = df.astype({'m':float})
dfm = df.groupby(['unit', 'cond', 'seg']).mean().reset_index()
dfm
cond_pair = ta.bal_cond_pairs[0].split('-')
ax=sns.violinplot(data=df[ (df.unit==0) & (df.cond.isin(cond_pair))], x='seg', hue='cond', y='m', split=True, inner='quartile', hue_order=[cond_pair[1], cond_pair[0]], palette=['green','purple'], alpha=0.7, saturation=1)
plt.setp(ax.collections, alpha=.7)
#sns.boxplot(data=df[df.unit==0], x='seg', hue='cond', y='m', hue_order=['CL', 'CR'], palette=['green','purple'])
rs = reload(rs)
%%time
unit = 0
cond1 = 'CR_bo'
cond2 = 'CL_bo'
seg = 'left'
x = df.loc[(df.unit==unit) & (df.cond==cond1) & (df.seg==seg)].m.values
y = df.loc[(df.unit==unit) & (df.cond==cond2) & (df.seg==seg)].m.values
z = rs.bootstrap_diff(x,y)
stats.ttest_ind(x,y)
ax=sns.kdeplot(z[2])
ax.axvline(z[0], color='r', lw=3)
%%time
z = rs.bootstrap_tdiff(x,y)
ax=sns.kdeplot(z[2])
ax.axvline(z[0], color='r', lw=3)
###Output
_____no_output_____
###Markdown
straight comparison of cue conditions
###Code
%%time
segment_type = 'bigseg'
conds = ['CR', 'CL']
trial_sets2 = {}
n_trials = {}
for cond in conds:
trial_sets2[cond] = ta.get_condition_trials(cond)
n_trials[cond] = len(trial_sets2[cond])
n_segs = len(ta.tmz.bigseg_names)
n_units = ta.n_units
n_conds = len(conds)
n_total_trials = sum(n_trials.values())
n_rows = 0
for cond in conds:
n_rows = n_units*n_segs*n_conds*n_trials[cond]
df2 = pd.DataFrame(index=range(n_rows), columns= ['unit', 'trial', 'seg', 'activity', 'cond'])
cnt = 0
for cond in conds:
block_idx_len = n_units*n_segs*n_trials[cond]
idx = np.arange(block_idx_len) + cnt*block_idx_len
trials = trial_sets2[cond]
a = ta.get_trial_segment_rates(trials=trials, segment_type=segment_type)
d = pd.DataFrame()
for unit in range(ta.n_units):
temp = a[unit]
temp['unit'] = unit
temp['trial'] = trials
temp = temp.melt(id_vars=['unit', 'trial'], value_name='activity', var_name='seg')
d = pd.concat((d,temp), ignore_index=True)
d['cond'] = cond
df2.loc[idx] = d.set_index(idx)
cnt +=1
df2 = df2.astype({'activity':float})
a = ta.get_trial_segment_rates(trials=trials, segment_type=segment_type)
a[0]
trials = trial_sets2[cond]
a = ta.get_trial_segment_rates(trials=trials, segment_type=segment_type)
sns.violinplot(data=df2[df.unit==0], x='seg', hue='cond', y='activity', split=True, inner='quartile', hue_order=['CL', 'CR'], palette=['green','purple'], alpha=0.7)
unit = 0
cond1 = 'CR'
cond2 = 'CL'
seg = 'left'
x = df2.loc[(df.unit==unit) & (df.cond==cond1) & (df.seg==seg)].activity.values
y = df2.loc[(df.unit==unit) & (df.cond==cond2) & (df.seg==seg)].activity.values
z = rs.bootstrap_diff(x,y)
ax=sns.kdeplot(z[2])
ax.axvline(z[0], color='r', lw=3)
###Output
_____no_output_____
###Markdown
nested bootstrap approach: (1) take a resampling of the trials, balanced over sub-conditions (in this case correct/incorrect); (2) for each resample compute a bootstrap achieved significance level (ASL) and accumulate those; (3) average the ASLs
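Schematically, the pattern looks like this (a minimal sketch with made-up per-trial rates `rates_a`/`rates_b`; the cells below apply the same idea to the session data with balanced condition sets):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rates_a = rng.normal(1.0, 0.5, size=40)  # hypothetical per-trial rates, condition A
rates_b = rng.normal(1.2, 0.5, size=55)  # hypothetical per-trial rates, condition B

n_outer, n_sub = 100, 40                 # outer (balanced) resamples, subsample size
asl = np.zeros(n_outer)
for b in range(n_outer):
    # (1) balanced subsample of trials from each condition
    sub_a = rng.choice(rates_a, size=n_sub, replace=False)
    sub_b = rng.choice(rates_b, size=n_sub, replace=False)
    # (2) significance level for this subsample (a t-test stands in for the bootstrap ASL here)
    asl[b] = stats.ttest_ind(sub_a, sub_b).pvalue
# (3) aggregate across the outer resamples
print(asl.mean())
```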
###Code
rs = reload(rs)
%%time
conds = ['CR', 'CL']
n_boot=100
n_units = ta.n_units
n_segs = len(ta.tmz.bigseg_names)
n_conds = 2
measures = ['m', 't', 'p']
pd.DataFrame()
t = np.zeros((ta.n_units,n_boot,3))
p = np.zeros((ta.n_units,n_boot,3))
for boot in range(n_boot):
trials1 = trial_sets['CR'][:,boot]
trials2 = trial_sets['CL'][:,boot]
trial_segment_rates_1 = ta.get_trial_segment_rates(trials1, segment_type='bigseg')
trial_segment_rates_2 = ta.get_trial_segment_rates(trials2, segment_type='bigseg')
for unit in range(ta.n_units):
#zu[unit, boot], _, up[unit, boot], _, _ = rs.mannwhitney_z(trial_segment_rates_1[unit], trial_segment_rates_2[unit], return_all=True)
t[unit,boot], p[unit,boot] = stats.ttest_ind(trial_segment_rates_1[unit], trial_segment_rates_2[unit], nan_policy='omit', equal_var=False)
pvals = pd.DataFrame(index=range(ta.n_units), columns=ta.tmz.bigseg_names)
for unit in range(ta.n_units):
pvals.loc[unit] = rs.combine_pvals(p[unit], axis=0)
#pvals.loc[unit] = 1-stats.chi2.cdf(np.nansum(-2*np.log(p[unit]), axis=0),df=n_boot*2)
pvals<0.01
###Output
_____no_output_____
###Markdown
re-implement and generalize the above
###Code
%%time
n_boot=100
segment_type = 'bigseg'
bal_cond_sets = ta.bal_cond_sets
conds = list(bal_cond_sets.keys())
cond_pairs = ta.bal_cond_pairs
seg_names = ta.tmz.get_segment_type_names(segment_type)
n_units = ta.n_units
n_segs = len(seg_names)
n_conds = len(conds)
m = {cond: np.zeros((n_units,n_boot,n_segs)) for cond in conds}
t = {cond_pair: np.zeros((n_units,n_boot,n_segs)) for cond_pair in cond_pairs}
p = {cond_pair: np.zeros((n_units,n_boot,n_segs)) for cond_pair in cond_pairs}
trial_sets = {}
for cond in conds:
trial_sets[cond]= ta.get_trials_boot_cond_set(bal_cond_sets[cond]['cond_set'])
for boot in range(n_boot):
trial_segment_rates = {}
for cond in conds:
bal_cond_set = bal_cond_sets[cond]
trial_segment_rates[cond] = ta.get_trial_segment_rates(trial_sets[cond][:,boot],
segment_type=segment_type,
trial_seg=bal_cond_set['trial_seg'])
for unit in range(n_units):
m[cond][unit, boot] = trial_segment_rates[cond][unit].mean()
for cond_pair in cond_pairs:
cond1, cond2 = cond_pair.split('-')
for unit in range(n_units):
temp = stats.ttest_ind(trial_segment_rates[cond1] [unit],
trial_segment_rates[cond2][unit],
nan_policy='omit')
t[cond_pair][unit,boot], p[cond_pair][unit,boot] = temp[0], temp[1]
%%time
n_boot=100
segment_type = 'bigseg'
bal_cond_sets = ta.bal_cond_sets
conds = list(bal_cond_sets.keys())
cond_pairs = ta.bal_cond_pairs
seg_names = ta.tmz.get_segment_type_names(segment_type)
n_units = ta.n_units
n_segs = len(seg_names)
n_conds = len(conds)
trial_sets = {}
for cond in conds:
trial_sets[cond]= ta.get_trials_boot_cond_set(bal_cond_sets[cond]['cond_set'])
def _worker(boot):
_m = {cond: np.zeros((n_units, n_segs)) for cond in conds}
_n = {cond: np.zeros((n_units, n_segs)) for cond in conds}
_t = {cond_pair: np.zeros((n_units, n_segs)) for cond_pair in cond_pairs}
_p = {cond_pair: np.zeros((n_units, n_segs)) for cond_pair in cond_pairs}
trial_segment_rates = {}
for cond in conds:
bal_cond_set = bal_cond_sets[cond]
trial_segment_rates[cond] = ta.get_trial_segment_rates(trial_sets[cond][:,boot],
segment_type=segment_type,
trial_seg=bal_cond_set['trial_seg'])
for unit in range(n_units):
_m[cond][unit] = trial_segment_rates[cond][unit].mean()
_n[cond][unit] = trial_segment_rates[cond][unit].count()
for cond_pair in cond_pairs:
cond1, cond2 = cond_pair.split('-')
for unit in range(n_units):
temp = stats.ttest_ind(trial_segment_rates[cond1] [unit],
trial_segment_rates[cond2][unit],
nan_policy='omit')
_t[cond_pair][unit], _p[cond_pair][unit] = temp[0], temp[1]
return _m, _n, _t, _p
with Parallel(n_jobs=10) as parallel:
out = parallel(delayed(_worker)(boot) for boot in range(n_boot))
%%time
m = {cond: np.zeros((n_boot, n_units, n_segs)) for cond in conds}
n = {cond: np.zeros((n_boot, n_units, n_segs)) for cond in conds}
t = {cond_pair: np.zeros((n_boot, n_units, n_segs)) for cond_pair in cond_pairs}
p = {cond_pair: np.zeros((n_boot, n_units, n_segs)) for cond_pair in cond_pairs}
for boot in range(n_boot):
_m,_n, _t, _p = out[boot]
for cond in _m.keys():
m[cond][boot] = _m[cond]
n[cond][boot] = _n[cond]
for cond_pair in _t.keys():
t[cond_pair][boot] = _t[cond_pair]
p[cond_pair][boot] = _p[cond_pair]
trial_segment_rates[cond][0].count()
%%time
tmf = reload(tmf)
ta = tmf.TrialAnalyses(session_info)
%%time
out2 = ta.segment_rate_boot_analyses()
df = df.Dataframe()
for boot in range(n_boot):
m[conds[0]][]
#df = pd.DataFrame()
cols1 = []
cols2 = []
for seg in seg_names:
for cond in conds:
cols1.append(f"{cond}-{seg}-m")
cols1.append(f"{cond}-{seg}-n")
for cond_pair in cond_pairs:
cols2.append(f"{cond_pair}-{seg}-t")
cols2.append(f"{cond_pair}-{seg}-p")
cols = cols1+cols2
df = pd.DataFrame(index=range(n_units), columns=cols)
for cond in conds:
cond_col = [f"{cond}-{seg}-m" for seg in seg_names]
df[cond_col] = m[cond].mean(axis=0)
cond_col = [f"{cond}-{seg}-n" for seg in seg_names]
df[cond_col] = n[cond].mean(axis=0)
for cond_pair in cond_pairs:
cond_col = [f"{cond_pair}-{seg}-t" for seg in seg_names]
df[cond_col] = t[cond_pair].mean(axis=0)
cond_col = [f"{cond_pair}-{seg}-p" for seg in seg_names]
df[cond_col] = rs.combine_pvals(p[cond_pair],axis=0)
df
-2*np.log(p[unit]).sum(axis=0)
p[0].shape
stats.chi2.ppf(0.99, 200)
(np.max(p[0], axis=0)**100)[0]
sns.histplot(stats.chi2.rvs(200,size=1000))
###Output
_____no_output_____
###Markdown
another approach: pool time-samples across trials and get a single estimate per condition, as opposed to a trial-wise estimate
###Code
%%time
n_boot=100
for boot in range(n_boot):
trials1 = trial_sets['CR'][:,boot]
trials2 = trial_sets['CL'][:,boot]
ta.get_avg_zone_rates(trials=trials1, segment_type=segment_type)
ta.get_avg_zone_rates(trials=trials2, segment_type=segment_type)
###Output
CPU times: user 3min 24s, sys: 3.97 s, total: 3min 28s
Wall time: 21 s
###Markdown
Comparing this with the trial average yields very different results.
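The difference is expected: pooling weights each trial by its number of occupancy samples, while the trial average weights all trials equally. With $\bar{x}_t$ the rate in trial $t$ and $n_t$ its sample count,

$$\text{pooled} = \frac{\sum_t n_t \bar{x}_t}{\sum_t n_t}, \qquad \text{trial average} = \frac{1}{T}\sum_t \bar{x}_t,$$

so the two only agree when occupancy is roughly equal across trials, which is what the reweighting option tested below compensates for.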
###Code
a = ta.get_avg_trial_zone_rates(trials=trials2, segment_type='bigseg', reweight_by_trial_zone_counts=False)
b = ta.get_avg_zone_rates(trials=trials2, segment_type='bigseg')
a-b
###Output
_____no_output_____
###Markdown
this approach takes about 3x longer than the trial approach, and the same estimates can be obtained by setting the "reweight_by_trial_zone_counts" parameter in get_avg_trial_zone_rates to True (see above). Otherwise, this approach doesn't take advantage of the trial structure of the data.
###Code
a = ta.get_avg_trial_zone_rates(trials=trials2, segment_type='bigseg', reweight_by_trial_zone_counts=True)
b = ta.get_avg_zone_rates(trials=trials2, segment_type='bigseg')
a-b
zu.mean(axis=1)
zu2
zu[0].mean(axis=0)
###Output
_____no_output_____
###Markdown
Remapping: establishing significance between sets of correlations
###Code
ei = reload(ei)
subject = 'Li'
session = 'Li_T3g_062818'
session_info = ei.SubjectSessionInfo(subject, session)
tmf = reload(tmf)
rs = reload(rs)
ta = tmf.TrialAnalyses(session_info)
tree_maze = tmf.TreeMazeZones()
ta.bal_cond_sets
%%time
test_cond_pair = list(ta.test_null_bal_cond_pairs.keys())
null_cond_pair = list(ta.test_null_bal_cond_pairs.values())
bcorrs_test = ta.zone_rate_maps_bal_conds_boot_corr(bal_cond_pair=test_cond_pair[0], parallel=True)
bcorrs_null = ta.zone_rate_maps_bal_conds_boot_corr(bal_cond_pair=null_cond_pair[0], parallel=True)
a,b=rs.compare_corrs(bcorrs_test, bcorrs_null, 39, 39, corr_method='kendall')
b = pd.DataFrame(b).replace([np.inf, -np.inf], np.nan)
rs.combine_pvals(b, axis=1).shape
session_info.get_bal_conds_seg_rates(overwrite=True)
%%time
bcorrs2 = ta.all_zone_remapping_analyses()
bcorrs2.iloc[:,20:]
session_info.get_zone_rates_remap(overwrite=True)
%%time
n_boot=100
n_jobs=5
with Parallel(n_jobs=n_jobs) as parallel:
bcorrs = {}
for cond_pair in ta.bal_cond_pairs:
bcorrs[cond_pair] = ta.zone_rate_maps_bal_conds_boot_corr(bal_cond_pair=cond_pair,
n_boot=n_boot,
parallel=parallel)
bcorrs2.head()
zcorrs = {}
for cond, corr in bcorrs.items():
zcorrs[cond] = _transform_corr(bcorrs[cond])
a,b=stats.ttest_rel(bcorrs['CR_bo-CL_bo'], bcorrs['Even_bo-Odd_bo'], axis=1, nan_policy='omit')
plt.scatter(a, bcorrs2['CR_bo-CL_bo-Even_bo-Odd_bo-corr_zt'])
plt.scatter( bcorrs['CR_bo-CL_bo'].mean(axis=1), bcorrs2['CR_bo-CL_bo-corr_m'])
for test, null in ta.test_null_bal_cond_pairs.items():
zc = rs.compare_corrs(bcorrs[test], bcorrs[null], 39, 39, corr_method='kendall')
break
zc
plt.scatter(a,ttest_1samp(zc,0,axis=1)[0])
a,b=stats.ttest_rel(zcorrs['CR_bo-CL_bo'], zcorrs['Even_bo-Odd_bo'], axis=1, nan_policy='omit')
plt.scatter(a, -np.log10(b))
bcorrs['CR_bo-CL_bo'].mean(axis=1)
plt.hist(zcorrs['CR_bo-CL_bo'].mean(axis=1)-bcorrs2['CR_bo-CL_bo-corr_z'])
cond = 'CR_bo'
bal_cond_set = ta.bal_cond_sets[cond]
cond_set = {bal_cond_set['cond']: bal_cond_set['sub_conds']}
trials = np.array(list(ta.get_trials_boot_cond_set(cond_set, n_boot=100).values())).squeeze()
with Parallel(n_jobs=5) as parallel:
print(type(parallel))
print(isinstance(parallel, Parallel))
%%time
bcorrs = ta.zone_rate_maps_group_trials_boot_bal_corr()
def _transform_corr(_c):
return rs.fisher_r2z(rs.kendall2pearson(_c))
c1 = _transform_corr(bcorrs[test])
c2 = _transform_corr(bcorrs[null])
sns.histplot(c1.loc[0])
sns.histplot(c2.loc[0])
###Output
_____no_output_____
###Markdown
method 1. bootstrap
###Code
%%time
tboot = np.zeros(ta.n_units)
pboot = np.zeros(ta.n_units)
for unit in range(ta.n_units):
tboot[unit], pboot[unit],_ = rs.bootstrap_tdiff(c1.loc[unit],c2.loc[unit])
pboot
###Output
_____no_output_____
###Markdown
method 2. t-test assuming equal variance
###Code
%%time
tev ,pev =stats.ttest_rel(c1,c2, axis=1)
pev
f,ax =plt.subplots(1,2)
ax[0].scatter(tev, tboot)
sns.histplot(tev-tboot, ax=ax[1])
###Output
_____no_output_____
###Markdown
method 3. convert the correlation difference to z, then get a 1-sample estimate
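For reference, the usual Fisher approach for this comparison (the exact formula inside `rs.compare_corrs` is assumed here, not checked) first converts Kendall's $\tau$ to a Pearson-scale correlation, $r = \sin(\pi\tau/2)$, and then uses

$$z(r) = \tfrac{1}{2}\ln\frac{1+r}{1-r}, \qquad z_{\text{diff}} = \frac{z(r_1) - z(r_2)}{\sqrt{\frac{1}{n_1-3} + \frac{1}{n_2-3}}},$$

with $n_1 = n_2 = 39$ zones, matching the values passed to `rs.compare_corrs` below.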
###Code
z = rs.compare_corrs(bcorrs[test], bcorrs[null], 39,39)
tz, tp = stats.ttest_1samp(z,0, nan_policy='omit', axis=1)
f,ax =plt.subplots(1,2)
ax[0].scatter(tev, tz)
sns.histplot(tev-tz, ax=ax[1])
###Output
_____no_output_____
###Markdown
method 4. confidence interval overlap
###Code
%%time
test = 'CR-CL'
null = 'Even-Odd'
d = lambda a,b, axis: np.median(a-b, axis=axis)
a = stats.bootstrap((bcorrs[test], bcorrs[null] ), statistic=d, method='basic', n_resamples=1000, vectorized=True,axis=1, confidence_level=0.99)
np.where(((a.confidence_interval.low<0) & (a.confidence_interval.high>0)))
np.where(tp>0.001)
tz[tp>=0.001].mean() , tz[tp<0.001].mean()
%%time
n_boot=100
r = np.zeros((ta.n_units,n_boot))
for boot in range(n_boot):
trials1 = trial_sets['CR'][:,boot]
trials2 = trial_sets['CL'][:,boot]
trial_segment_rates_1 = ta.get_trial_segment_rates(trials1)
trial_segment_rates_2 = ta.get_trial_segment_rates(trials2)
for unit in range(ta.n_units):
r[unit, boot] = stats.kendalltau(trial_segment_rates_1[unit].mean(), trial_segment_rates_2[unit].mean(), nan_policy='omit', method='asymptotic')[0]
trial_segment_rates_1[0].columns
%%time
ta.get_trial_segment_rates(trials1)[0].mean()
%%time
ta.get_avg_zone_rates(trials1, occupation_samp_thr=0).loc[0]
###Output
CPU times: user 176 ms, sys: 0 ns, total: 176 ms
Wall time: 145 ms
###Markdown
code to get bootstrapped trial sets
###Code
%%time
trial_seg = 'out'
occupation_trial_samp_thr = 1
seg_names = ta.tmz.all_segs_names
n_segs = len(seg_names)
neural_data = ta.get_trial_neural_data()
pzm = ta.trial_zone_samps_counts_mat[trial_seg]
zones_by_trial = ta.trial_zones[trial_seg]
trial_zone_rates = np.zeros(ta.n_units, dtype=object)
dummy_df = pd.DataFrame(np.zeros((ta.n_trials, n_segs)) * np.nan, columns=seg_names)
for unit in range(ta.n_units):
trial_zone_rates[unit] = dummy_df.copy()
for trial_num in range(ta.n_trials):
pzm = ta.tmz.get_pos_zone_mat(ta.trial_zones[trial_seg][trial_num])
pz_counts = pzm.sum()
pzmn = (pzm / pz_counts).fillna(0) # position zones normalized by occupancy
trial_data = np.nan_to_num(np.array(list(neural_data[:, trial_num]), dtype=np.float))
zone_rates = trial_data @ pzmn
zone_rates.loc[:, pz_counts < occupation_trial_samp_thr] = np.nan
for unit in range(ta.n_units):
trial_zone_rates[unit].loc[trial_num] = zone_rates.loc[unit]
tzr = trial_zone_rates[0].loc[trials1]
tzr.head()
zc.head()
zc = ta.trial_zone_samps_counts_mat['out'].loc[trials1]
zcn = zc/zc.sum()
zcb = tree_maze.subseg_pz_mat_transform(zc,'bigseg')
zcb
(1/(zcb @ tree_maze.subseg2bigseg.T))
zcb
tzrb = ((tzr * zc/(zcb @ tree_maze.subseg2bigseg.T)).fillna(0) @ tree_maze.subseg2bigseg)
tzrb[zcb<5]=np.nan
tzrbm = (tzrb*(zcb/zcb.sum())).sum()
# tzrb[zcb<5]=np.nan
# tzrb
zc / (zcb @ tree_maze.subseg2bigseg.T)
tzrbm
ta.get_avg_zone_rates(trials=trials1, segment_type='bigseg')
(tzr * (1/(zcb @ tree_maze.subseg2bigseg.T))).fillna(0) @ tree_maze.subseg2bigseg
((tzr/zc).fillna(0) @ tree_maze.subseg2bigseg)*zcb
tree_maze.subseg_pz_mat_transform((tzr/zc).fillna(0), 'bigseg') * zcb
(trial_zone_rates[0].loc[trials1]*zcn).sum()
###Output
_____no_output_____
###Markdown
code to get zone rates by pooling samples from a trial set
###Code
segment_type = 'subset'
occupation_samp_thr=5
_,samps,_ = ta.get_trial_times(trials1)
neural_data = np.nan_to_num(ta.fr[:, samps])
pz = ta.pz[samps]
pzm = ta.tmz.get_pos_zone_mat(pz, segment_type=segment_type)
pz_counts = pzm.sum()
pzmn = pzm / pz_counts # position zones normalized by occupancy
zone_rates = neural_data @ pzmn # matrix multiply
zone_rates.loc[:, pz_counts < occupation_samp_thr] = np.nan
zone_rates.loc[0]
neural_data[0, pzm['H']==1].mean(), (pzm['H']==1).sum(), neural_data[0, pzm['H']==1].sum(), neural_data.shape
#pzm['H']
fr_cnt = 0
samp_cnt = 0
unit = 0
trial_seg = 'out'
trial_neural_data=ta.get_trial_neural_data(trial_seg=trial_seg)
for trial_num in range(ta.n_trials):
pzm = ta.tmz.get_pos_zone_mat(ta.trial_zones[trial_seg][trial_num])
pz_counts = pzm.sum()
pzmn = (pzm / pz_counts).fillna(0) # position zones normalized by occupancy
trial_data = np.nan_to_num(np.array(list(trial_neural_data[:, trial_num]), dtype=np.float))
zone_rates = trial_data @ pzmn
zone_rates.loc[:, pz_counts < occupation_trial_samp_thr] = np.nan
for unit in range(ta.n_units):
trial_zone_rates[unit].loc[trial_num] = zone_rates.loc[unit]
trial_zone_rates[0].loc[trials1]
tzc = ta.trial_zone_samps_counts_mat['out'].loc[trials1]
tzcn = tzc/tzc.sum()
((trial_zone_rates[0].loc[trials1])*tzcn).sum()
#(trial_zone_rates[0].loc[trials1].fillna(0) @ tzcn)
unit=0
np.hstack(trial_neural_data[unit][trials1]).shape
pz = ta.pz[samps] # position zones
pzm = ta.tmz.get_pos_zone_mat(pz, segment_type='subseg') # position zones matrix
np.hstack(trial_neural_data[unit][trials1])[pzm['H']==1].sum()
samps
trial_num = 7
pzm = ta.tmz.get_pos_zone_mat(ta.trial_zones[trial_seg][trial_num])
pz_counts = pzm.sum()
pzmn = (pzm / pz_counts).fillna(0) # position zones normalized by occupancy
trial_data = np.nan_to_num(np.array(list(neural_data[:, trial_num]), dtype=np.float))
zone_rates = trial_data @ pzmn
zone_rates.loc[:, pz_counts < occupation_trial_samp_thr] = np.nan
ta.get_avg_zone_rates(trials1).loc[0]
r.mean(axis=1)
plt.plot(r.mean(axis=1)-bcorrs['CR-CL'].mean(axis=1))
trial_segment_rates_2[0].mean()
rs.kendall(trial_segment_rates_1[unit], trial_segment_rates_2[unit])
?stats.kendalltau
%%time
b100 = ta.all_zone_rate_comp_analyses(n_boot=100)
%%time
b50 = ta.all_zone_rate_comp_analyses(n_boot=50)
(b50-b100).mean()
b50['CR-CL_Even-Odd_boot_corr_zz']
plt.scatter(b100['CR-CL_Even-Odd_boot_corr_zm'], b50['CR-CL_Even-Odd_boot_corr_zm'])
plt.plot([-3,0], [-3,0])
%%time
ta.get_avg_zone_rates()
%%time
ta.zone_rate_trial_quantification(cond='CL')
%%time
group_cond_sets = {'CR-CL': {'CR': ['Co', 'Inco'], 'CL': ['Co', 'Inco']},
'Co-Inco': {'Co':['CL', 'CR'], 'Inco': ['CR', 'CL']},
'CoSw-IncoSw': {'CoSw': ['CL', 'CR'], 'IncoSw':['CL', 'CR']},
'Rw': {'CL':['Co', 'Inco'], 'CR':['Co', 'Inco']},
'Even-Odd': {'Even': ['CL', 'CR'], 'Odd': ['CL', 'CR']},
'Even-Odd-In': {'Even': ['CL', 'CR'], 'Odd': ['CL', 'CR']}}
group_trial_segs = {'CR-CL': ['out','out'],
'Co-Inco': ['out','out'],
'CoSw-IncoSw': ['out','out'],
'Rw': ['in','in'],
'Even-Odd': ['out','out'],
'Even-Odd-In': ['in', 'in']}
bcorrs = ta.zone_rate_maps_group_trials_boot_bal_corr(group_cond_sets=group_cond_sets, group_trial_segs=group_trial_segs)
bcorrs
a = ta.get_trials_boot_cond_set({'CR':['Co', 'Inco'], 'CL':['Co', 'Inco']})
a['CL'].shape
%%time
ta.zone_rate_maps_corr(cond1='CL', cond2='CR', corr_method='kendall')
samps = ta.outbound_samps
samps
neural_data = ta.fr[:, samps]
pz = ta.pz[samps] # position zones
pzm = ta.tmz.get_pos_zone_mat(pz, subsegs=True) # position zones matrix
pz_counts = pzm.sum()
pzmn = pzm / pz_counts # pozitions zones normalized by occupancy
pzm.sum().sum(), len(pz)
neural_data[np.isnan(neural_data)] = 0
neural_data@pzmn
pz
def plot_trial_track_spikes(trial_analyses, unit=0, ax=None):
lw = 0.1 # line width
la = 0.3 # line alpha
lc = '0.5' # line color
ss = 2 # scatter scale
sc = 'r' # scatter color
sa = 0.3 # scatter alpha
if ax is None:
f,ax = plt.subplots()
else:
f = ax.figure
x,y = trial_analyses.get_trial_track_pos()
spk = trial_analyses.get_trial_neural_data(data_type='spikes')
for tr in range(trial_analyses.n_trials):
samps = tree_maze.get_inmaze_samps(x[tr],y[tr])
ax.plot(x[tr][samps], y[tr][samps], linewidth=lw, alpha=la, color=lc)
ax.scatter(x[tr][samps], y[tr][samps], s=spk[unit,tr][samps]*ss, color=sc, alpha=sa, linewidth=0)
ax.axis("square")
ax.axis("off")
ax.set_ylim(trial_analyses.y_edges[0], trial_analyses.y_edges[-1])
ax.set_xlim(trial_analyses.x_edges[0], trial_analyses.x_edges[-1])
return ax
def plot_trial_rate_map(trial_analyses, unit=0, ax=None):
cmap = 'viridis'
if ax is None:
f,ax = plt.subplots()
else:
f = ax.figure
fr_maps_trials = trial_analyses.get_trial_rate_maps()
ax = sns.heatmap(fr_maps_trials[unit], cbar=False, square=True, cmap=cmap, ax=ax)
ax.invert_yaxis()
ax.axis("off")
data = fr_maps_trials[unit].flatten()
data_colors, color_array = pf.get_colors_from_data(data, cmap=cmap)
ax_p = ax.get_position()
w, h = ax_p.width, ax_p.height
x0,y0 = ax_p.x0, ax_p.y0
cax_p = [x0+w*1.02, y0+h*0.05, w*0.05, h*0.15]
cax = f.add_axes(cax_p)
pf.get_color_bar_axis(cax, color_array, color_map=cmap, label='FR')
return ax
def plot_zone_rates(zone_rates, ax=None, min_value=0, max_value=None, label='FR', color_map='YlOrRd', div=False):
if ax is None:
f,ax = plt.subplots()
else:
f = ax.figure
tree_maze.plot_zone_activity(zone_rates, ax=ax, min_value=min_value, max_value=max_value, color_map=color_map, label=label)
@interact(unit=widgets.IntSlider(value=0, max=ta.n_units))
def plot_maps(unit):
f,ax = plt.subplots(1,3,figsize=(6,2),dpi=400)
plot_trial_track_spikes(ta, unit=unit, ax=ax[0])
plot_trial_rate_map(ta,unit=unit, ax=ax[1])
zone_rates = ta.get_avg_zone_rates()
plot_zone_rates(zone_rates.loc[unit], ax=ax[2])
a,b=session_info.get_pos_zones(return_invalid_pz=True)
vts = tree_maze.check_valid_pos_zones_transitions(a)
(~vts).sum()
tree_maze.maze_union
track_data,nan_idx = session_info.get_track_data(return_nan_idx=True)
samps = tree_maze.get_inmaze_samps(track_data.x,track_data.y)
nan_idx
pz, samps2 = tree_maze.get_pos_zone_ts(track_data.x, track_data.y)
samps
samps[samps2].sum()
###Output
_____no_output_____ |
Big-Data-Clusters/CU9/public/content/monitor-bdc/tsg012-azdata-bdc-status.ipynb | ###Markdown
TSG012 - Show BDC Status Steps: Common functions. Define helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around an infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
# Hints for tool retry (on transient fault), known errors and install guide
#
retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], 'python': [ ], }
error_hints = {'azdata': [['Please run \'azdata login\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\'t open lib \'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ['NameError: name \'azdata_login_secret_name\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \'No credentials were supplied\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \'accept the license terms to use this product\'', '../repair/tsg126-accept-license-terms.ipynb'], ], 'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'], ], 'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'], ], }
install_hint = {'azdata': [ 'SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb' ], 'kubectl': [ 'SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb' ], }
print('Common functions defined successfully.')
###Output
_____no_output_____
###Markdown
Use azdata to show big data cluster status
###Code
run('azdata bdc status show --all')
print("Notebook execution is complete.")
###Output
_____no_output_____ |
Pytorch/Pytorch_learn_by_dragen1860/lesson22-交叉熵.ipynb | ###Markdown
$Loss\space for\space classification$- $MSE$- $Cross\space Entropy\space Loss$- $Hinge\space Loss$ $example\space Lottery$
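The quantity computed below is the Shannon entropy $H(p) = -\sum_i p_i \log_2 p_i$; for four outcomes it is maximized at $2$ bits by the uniform distribution and shrinks toward $0$ as the probability mass concentrates on a single outcome.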
###Code
a = torch.full([4],1/4.)
a
a * torch.log2(a)
-(a * torch.log2(a)).sum()
a = torch.tensor([0.1,0.1,0.1,0.7])
-(a * torch.log2(a)).sum()
a = torch.tensor([0.001,0.001,0.001,0.999])
-(a * torch.log2(a)).sum()
###Output
_____no_output_____
###Markdown
$Numerical\space Stability$
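The two losses computed below match because `F.cross_entropy` fuses `log_softmax` and `nll_loss` internally; computing `log_softmax` in one step is more numerically stable than taking `torch.log` of an explicit `softmax`. A self-contained check of the equivalence (a sketch with random logits, independent of the variables below):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 10)
target = torch.tensor([3])
a = F.cross_entropy(logits, target)                   # fused, numerically stable
b = F.nll_loss(F.log_softmax(logits, dim=1), target)  # explicit two-step version
print(torch.allclose(a, b))                           # expected: True
```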
###Code
x = torch.randn(1,784)
w = torch.randn(10,784)
logit = x @ w.t()
logit.size()
pred = F.softmax(logit, dim = 1)
pred_log = torch.log(pred)
F.cross_entropy(logit,torch.tensor([3]))
F.nll_loss(pred_log,torch.tensor([3]))
###Output
_____no_output_____ |
2019-20/casestudyreview/training.ipynb | ###Markdown
Create training and test
###Code
from sklearn.model_selection import train_test_split
training = {}
data = tqdm_notebook(T.items())
for k, v in data:
X = csr_matrix(v[0])
y = [D.loc[int(x)].score for x in v[1]]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
training[k] = (X_train, X_test, y_train, y_test)
###Output
_____no_output_____
###Markdown
Training classification models
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
classifiers = {'DTC': DecisionTreeClassifier(), 'KNN': KNeighborsClassifier()}
trained = defaultdict(lambda: {})
experiments = tqdm_notebook(training.items())
for k, (x_train, x_test, y_train, y_test) in experiments:
for cl, model in classifiers.items():
m = model.__class__()
m.fit(x_train, y_train)
trained[k][cl] = m
###Output
_____no_output_____
###Markdown
DNN
###Code
from sklearn.preprocessing import OneHotEncoder
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
def get_dnn(x_train):
model = Sequential()
model.add(Dense(100, input_dim=x_train.shape[1], activation='relu'))
model.add(Dense(5, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
experiments = tqdm_notebook(training.items())
for k, (x_train, x_test, y_train, y_test) in experiments:
y_e = OneHotEncoder().fit_transform(np.array(y_train).reshape(-1, 1))
m = get_dnn(x_train)
m.fit(x_train, y_e, batch_size=50, epochs=6, verbose=0)
trained[k]['DNN'] = m
###Output
_____no_output_____
###Markdown
Save
###Code
with open('../data/yelp_classification_training.pkl', 'wb') as out:
pickle.dump(training, out)
to_save = {}
for k, v in trained.items():
s = {}
for model_name, model in v.items():
if model_name == 'DNN':
m_json = model.to_json()
model.save_weights("../data/{}_{}.h5".format(k, model_name))
s['DNN'] = m_json
else:
s[model_name] = model
to_save[k] = s
with open('../data/yelp_classification_experiments.pkl', 'wb') as out:
pickle.dump(to_save, out)
###Output
_____no_output_____ |
pandas-renaming-and-combining.ipynb | ###Markdown
**This notebook is an exercise in the [Pandas](https://www.kaggle.com/learn/pandas) course. You can reference the tutorial at [this link](https://www.kaggle.com/residentmario/renaming-and-combining).**--- IntroductionRun the following cell to load your data and some utility functions.
###Code
import pandas as pd
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
from learntools.core import binder; binder.bind(globals())
from learntools.pandas.renaming_and_combining import *
print("Setup complete.")
###Output
Setup complete.
###Markdown
ExercisesView the first several lines of your data by running the cell below:
###Code
reviews.head()
###Output
_____no_output_____
###Markdown
1.`region_1` and `region_2` are pretty uninformative names for locale columns in the dataset. Create a copy of `reviews` with these columns renamed to `region` and `locale`, respectively.
###Code
# Your code here
renamed = reviews.rename(columns={'region_1': 'region', 'region_2':'locale'})
# Check your answer
q1.check()
renamed.head()
q1.hint()
q1.solution()
###Output
_____no_output_____
###Markdown
2.Set the index name in the dataset to `wines`.
###Code
reindexed = reviews.rename_axis('wines', axis='rows')
# Check your answer
q2.check()
reindexed
q2.hint()
q2.solution()
###Output
_____no_output_____
###Markdown
3.The [Things on Reddit](https://www.kaggle.com/residentmario/things-on-reddit/data) dataset includes product links from a selection of top-ranked forums ("subreddits") on reddit.com. Run the cell below to load a dataframe of products mentioned on the */r/gaming* subreddit and another dataframe for products mentioned on the */r/movies* subreddit.
###Code
gaming_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/g/gaming.csv")
gaming_products['subreddit'] = "r/gaming"
movie_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/m/movies.csv")
movie_products['subreddit'] = "r/movies"
###Output
_____no_output_____
###Markdown
Create a `DataFrame` of products mentioned on *either* subreddit.
###Code
print(len(gaming_products))
gaming_products.head()
print(len(movie_products))
movie_products.head()
combined_products = pd.concat([gaming_products, movie_products])
# Check your answer
q3.check()
print(len(combined_products))
combined_products.head()
q3.hint()
q3.solution()
###Output
_____no_output_____
###Markdown
4.The [Powerlifting Database](https://www.kaggle.com/open-powerlifting/powerlifting-database) dataset on Kaggle includes one CSV table for powerlifting meets and a separate one for powerlifting competitors. Run the cell below to load these datasets into dataframes:
###Code
powerlifting_meets = pd.read_csv("../input/powerlifting-database/meets.csv")
powerlifting_competitors = pd.read_csv("../input/powerlifting-database/openpowerlifting.csv")
powerlifting_meets.head()
powerlifting_competitors.head()
###Output
_____no_output_____
###Markdown
Both tables include references to a `MeetID`, a unique key for each meet (competition) included in the database. Using this, generate a dataset combining the two tables into one.
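An equivalent way to express the combination (a sketch of an alternative, not necessarily the form the exercise checker expects) is an explicit merge on the shared key; note that `join` on the index defaults to a left join while `merge` defaults to an inner join, so row counts can differ:

```python
# Alternative to set_index('MeetID').join(...): merge on the shared column
alt_combined = powerlifting_meets.merge(powerlifting_competitors, on="MeetID")
```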
###Code
powerlifting_combined = powerlifting_meets.set_index('MeetID').join(powerlifting_competitors.set_index('MeetID'))
# Check your answer
q4.check()
print(len(powerlifting_combined))
powerlifting_combined.head()
q4.hint()
q4.solution()
###Output
_____no_output_____ |
Dev/BTC-USD/Codes/03 Hurst Segment Analysis.ipynb | ###Markdown
Hurst Exponent based Segment Analysis __Summary:__ Analysis to get thresholds for dividing the data into windows based on the Hurst exponent
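For reference (the `hurst_*` columns are precomputed in the input file, so the estimation itself is assumed rather than reproduced here), the Hurst exponent $H$ comes from the rescaled-range relation $\mathbb{E}[R(n)/S(n)] \sim C\,n^{H}$ and is read as: $H < 0.5$ mean-reverting, $H \approx 0.5$ random-walk-like, $H > 0.5$ trending/persistent. The thresholds explored below are meant to separate these regimes.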
###Code
# Import required libraries
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
import math
from mpl_toolkits.mplot3d import Axes3D
np.random.seed(0)
# User defined names
index = "BTC-USD"
filename = index+"_hurst.csv"
date_col = "Date"
hurst_windows = [100, 200, 300, 400] # Window sizes to calculate Hurst Exponent
N_days = 20 # Window size to do analysis for Hurst exponent
# Get current working directory
mycwd = os.getcwd()
print(mycwd)
# Change to data directory
os.chdir("..")
os.chdir(str(os.getcwd()) + "\\Data")
# Read the data
df = pd.read_csv(filename, index_col=date_col)
df.index = pd.to_datetime(df.index)
df.head()
###Output
_____no_output_____
###Markdown
Functions
###Code
def Calculate_Cols_onHurstwindow(df, window_size=100):
"""
Calculates below metrics for different window sizes:
Field 1: Number of days when the price went up compared to last day
Field 2: Number of days when price went down compared to last day
Field 3: Number of times the price went from Increasing to Decreasing
Field 4: Number of times the price went from Decreasing to Increasing
Field 5: Sum of number of times (price went from Increasing to Decreasing and price went from Decreasing to
Increasing)
Field 6: Ratio (Field 1/Field 2)
Field 7: 1/Field 6
Field 8: max(Field 6, Field 7)
Field 9: Ratio (Field 3/Field 4)
Field 10: 1/Field 9
Field 11: max(Field 9, Field 10)
Field 12: Field 9 - Field 9 mean
Field 13: Field 11 - Field 11 mean
Field 14: Field 12 * Field 13
"""
df['Increasing days'] = df['Indicator Increasing'].rolling(window=window_size).sum()
df['Decreasing days'] = df['Indicator Decreasing'].rolling(window=window_size).sum()
df['Zero Cross Neg'] = df['Indicator Trend Pos to Neg'].rolling(window=window_size).sum()
df['Zero Cross Pos'] = df['Indicator Trend Neg to Pos'].rolling(window=window_size).sum()
df['Zero Cross Total'] = df['Zero Cross Neg'] + df['Zero Cross Pos']
df['Ratio Trend tmp1'] = np.where(df['Decreasing days'] > 0, df['Increasing days']/df['Decreasing days'],
np.where(df['Increasing days'] > 0, 1, 0))
df['Ratio Zero tmp1'] = np.where(df['Zero Cross Neg'] > 0, df['Zero Cross Pos']/df['Zero Cross Neg'],
np.where(df['Zero Cross Pos'] > 0, 1, 0))
df['Ratio Trend tmp2'] = np.where(df['Ratio Trend tmp1'] > 0, 1/df['Ratio Trend tmp1'], 0)
df['Ratio Zero tmp2'] = np.where(df['Ratio Zero tmp1'] > 0, 1/df['Ratio Zero tmp1'], 0)
df['Ratio Trend'] = np.where(df['Ratio Trend tmp1'] > df['Ratio Trend tmp2'], df['Ratio Trend tmp1'],
df['Ratio Trend tmp2'])
df['Ratio Zero'] = np.where(df['Ratio Zero tmp1'] > df['Ratio Zero tmp2'], df['Ratio Zero tmp1'],
df['Ratio Zero tmp2'])
df.drop(['Ratio Zero tmp1', 'Ratio Zero tmp2', 'Ratio Zero tmp1', 'Ratio Zero tmp2'], axis=1, inplace=True)
# Floor and cap ratios at 5
df['Ratio Trend'] = np.where(df['Ratio Trend'] > 5, 5, df['Ratio Trend'])
df['Ratio Zero'] = np.where(df['Ratio Zero'] > 5, 5, df['Ratio Zero'])
df['Ratio Trend Normalized'] = df['Ratio Trend'] - df['Ratio Trend'].mean()
df['Ratio Zero Normalized'] = df['Ratio Zero'] - df['Ratio Zero'].mean()
df['Product Ratio'] = df['Ratio Trend Normalized'] * df['Ratio Zero Normalized']
return df
###Output
_____no_output_____
###Markdown
Calculations
###Code
# Calculate N days MA for Adjusted Close price variable
df['Adj Close MA20'] = df['Adj Close'].rolling(window=N_days).mean()
# Calculate first order difference between MA variable
df['Adj Close MA20 1diff'] = df['Adj Close MA20'].diff()
# Shift the first order difference by 1 day
df['Adj Close MA20 1diff 1shift'] = df['Adj Close MA20 1diff'].shift(1)
# Calculate the product of 1diff and 1diff 1 shift variable
df['Adj Close MA20 diff Product'] = df['Adj Close MA20 1diff'] * df['Adj Close MA20 1diff 1shift']
# Indicator to define if price is going up
df['Indicator Increasing'] = np.where(df['Adj Close MA20 1diff'] > 0, 1, 0)
# Indicator to define if price is going down
df['Indicator Decreasing'] = np.where(df['Adj Close MA20 1diff'] < 0, 1, 0)
# Indicator to define if Trend Shifted from Positive to Negative
df['Indicator Trend Pos to Neg'] = np.where(((df['Adj Close MA20 1diff'] < 0) &
(df['Adj Close MA20 1diff 1shift'] > 0)), 1, 0)
# Indicator to define if Trend Shifted from Negative to Positive
df['Indicator Trend Neg to Pos'] = np.where(((df['Adj Close MA20 1diff'] > 0) &
(df['Adj Close MA20 1diff 1shift'] < 0)), 1, 0)
###Output
_____no_output_____
###Markdown
Hurst Exponent Segment Analysis (Window size = 100)
###Code
# Change to Images directory
os.chdir("..")
os.chdir(str(os.getcwd()) + "\\Images")
# Calculate the variables for window size = 100
window_size = 100
df = Calculate_Cols_onHurstwindow(df, window_size=window_size)
###Output
_____no_output_____
###Markdown
Ratio of Number of days with Price Increase vs Price Decrease
###Code
# Scatter plot to save fig
plt.figure(figsize=(10,5))
plt.scatter(df['hurst_'+str(window_size)], df['Ratio Trend'])
plt.xlabel("Hurst Exponent (window={})".format(window_size), fontsize=12)
plt.ylabel("Ratio of Pos/Neg trend days", fontsize=12)
plt.title("Hurst Exponent and Ratio of Pos/Neg trend days", fontsize=16)
plt.savefig("Hurst Exponent " + str(window_size)+ " and Ratio of PosNeg trend days " + str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
__Comments:__ As the Hurst exponent goes up, the ratio of trending days rises above 1. Ratio of Number of zero crossovers (Pos to Neg and Neg to Pos)
###Code
# Scatter plot to save fig
plt.figure(figsize=(10,5))
plt.scatter(df['hurst_'+str(window_size)], df['Ratio Zero'])
plt.xlabel("Hurst Exponent (window={})".format(window_size), fontsize=12)
plt.ylabel("Ratio of zero crossovers", fontsize=12)
plt.title("Hurst Exponent and Ratio of zero crossovers", fontsize=16)
plt.savefig("Hurst Exponent " + str(window_size)+ " and Ratio of zero crossovers " + str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
__Comments:__ There are few windows where we saw continuous price increase (Hurst Exponent > 0.6) 3D Plot
###Code
# Scatter plot to save fig
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(xs=df['hurst_'+str(window_size)],ys= df['Ratio Zero'], zs=df['Ratio Trend'], zdir='z', s=20, c=None,
depthshade=True)
ax.set_xlabel("Hurst Exponent(window={})".format(window_size), fontsize=12)
ax.set_ylabel("Ratio of zero crossing", fontsize=12)
ax.set_zlabel("Ratio of Increasing to Decreasing days", fontsize=12)
plt.savefig("Hurst Exponent " + str(window_size)+ " 3D Plot " + str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
Hurst Exponent Segment Analysis (Window size = 200)
###Code
# Calculate the variables for window size = 200
window_size = 200
df = Calculate_Cols_onHurstwindow(df, window_size=window_size)
###Output
_____no_output_____
###Markdown
Ratio of Number of days with Price Increase vs Price Decrease
###Code
# Scatter plot to save fig
plt.figure(figsize=(10,5))
plt.scatter(df['hurst_'+str(window_size)], df['Ratio Trend'])
plt.xlabel("Hurst Exponent (window={})".format(window_size), fontsize=12)
plt.ylabel("Ratio of Pos/Neg trend days", fontsize=12)
plt.title("Hurst Exponent and Ratio of Pos/Neg trend days", fontsize=16)
plt.savefig("Hurst Exponent " + str(window_size)+ " and Ratio of PosNeg trend days " + str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
__Comments:__ As the Hurst exponent goes up, the ratio of trending days rises above 1. Ratio of Number of zero crossovers (Pos to Neg and Neg to Pos)
###Code
# Scatter plot to save fig
plt.figure(figsize=(10,5))
plt.scatter(df['hurst_'+str(window_size)], df['Ratio Zero'])
plt.xlabel("Hurst Exponent (window={})".format(window_size), fontsize=12)
plt.ylabel("Ratio of zero crossovers", fontsize=12)
plt.title("Hurst Exponent and Ratio of zero crossovers", fontsize=16)
plt.savefig("Hurst Exponent " + str(window_size)+ " and Ratio of zero crossovers " + str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
3D Plot
###Code
# Scatter plot to save fig
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter3D(xs=df['hurst_'+str(window_size)],ys= df['Ratio Zero'], zs=df['Ratio Trend'], zdir='z', s=20, c=None,
depthshade=True)
ax.set_xlabel("Hurst Exponent(window={})".format(window_size), fontsize=12)
ax.set_ylabel("Ratio of zero crossing", fontsize=12)
ax.set_zlabel("Ratio of Increasing to Decreasing days", fontsize=12)
plt.savefig("Hurst Exponent " + str(window_size)+ " 3D Plot " + str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
__Comments:__ Based on the above plots, the Hurst exponent with window size=200 gives a better differentiation between segments. Segment Threshold Analysis __Comments:__$\;\;\;\;\;\;$ 1. The number of observations with hurst=0.5 is small $\;\;\;\;\;\;$ 2. Based on the plots, we only see two clusters
###Code
# Get minimum and maximum value of hurst exponent
window_size_final = 200
min_val = round(df['hurst_'+str(window_size_final)].min(),2)
max_val = round(df['hurst_'+str(window_size_final)].max(),2)
# Range of cut offs
hurst_cutoff = np.arange(min_val, max_val, 0.01).tolist()
# Initialize lists to store some results
Trend_gt150 = []
Zero_gt110 = []
def Get_hurst_Thresholdmetrics(df, hurst_window=100, hurst_threshold=0.5, trend_threshold=1.5, zero_threshold=1.1):
"""
Using the hurst threshold to divide dataset, return below metrics:
1. Ratio of Number of rows when Ratio Trend > trend_threshold in subset dataset to whole dataset
2. Ratio of Number of rows when Ratio Zero > zero_threshold in subset dataset to whole dataset
"""
df['Trend tmp high'] = np.where(df['Ratio Trend'] > trend_threshold, 1, 0)
df['Zero tmp high'] = np.where(df['Ratio Zero'] > zero_threshold, 1, 0)
rat_trend = df['Trend tmp high'][df['hurst_'+str(hurst_window)] > hurst_threshold].sum()/df['Trend tmp high'].sum()
rat_zero = df['Zero tmp high'][df['hurst_'+str(hurst_window)] > hurst_threshold].sum()/df['Zero tmp high'].sum()
return rat_trend, rat_zero
for i in range(0, len(hurst_cutoff)):
rat_trend, rat_zero = Get_hurst_Thresholdmetrics(df, window_size_final, hurst_cutoff[i], 1.5, 1.1)
Trend_gt150.append(rat_trend)
Zero_gt110.append(rat_zero)
# Declare as numpy arrays
Trend_gt150 = np.array(Trend_gt150)
Zero_gt110 = np.array(Zero_gt110)
# Get 1 - above arrays values
Trend_gt150_neg = 1 - Trend_gt150
Zero_gt110_neg = 1 - Zero_gt110
# Plot
plt.figure(figsize=(10,5))
plt.plot(hurst_cutoff, Trend_gt150, label="Trend Ratio > 1.5")
plt.plot(hurst_cutoff, Trend_gt150_neg, label="Trend Ratio <= 1.5")
plt.xlabel("Hurst Exponent (window={})".format(window_size), fontsize=12)
plt.ylabel("% Observatins", fontsize=12)
plt.title('Observations based on Trend threshold')
plt.legend()
plt.grid()
plt.savefig('Trend threshold Segment'+ str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
__Comments:__ Based on the above plot, the data should be segmented into two segments (Hurst_200 below 0.6 vs. above 0.6)
###Code
# Plot
plt.figure(figsize=(10,5))
plt.plot(hurst_cutoff, Zero_gt110, label="Zero Ratio > 1.1")
plt.plot(hurst_cutoff, Zero_gt110_neg, label="Zero Ratio <= 1.1")
plt.xlabel("Hurst Exponent (window={})".format(window_size), fontsize=12)
plt.ylabel("% Observatins", fontsize=12)
plt.title('Observations based on Zero threshold')
plt.legend()
plt.grid()
plt.savefig('Zero threshold Segment' + str(index) +'.png')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
__Comments:__ Based on the above plot, the data should be segmented into two segments (Hurst_200 below 0.6 vs. above 0.6)
###Code
cut_off = ((hurst_cutoff[np.argmin(np.abs(Zero_gt110 - Zero_gt110_neg))]) +
(hurst_cutoff[np.argmin(np.abs(Trend_gt150 - Trend_gt150_neg))]) )/2
cut_off
###Output
_____no_output_____
###Markdown
Save the Data
###Code
# Get the columns
df.columns
# Drop columns which are not required and/or can't be used as Independent variables
df.drop(['Adj Close MA20 1diff 1shift', 'Ratio Trend tmp1', 'Ratio Trend tmp2', 'Ratio Trend Normalized',
'Ratio Zero Normalized', 'Product Ratio', 'Trend tmp high', 'Zero tmp high'],
axis=1, inplace=True)
# Get the columns
df.columns
# Create segments
df['Segment'] = np.where(df['hurst_200'] > cut_off, "Trending", "Mean Reverting")
os.chdir("..")
os.chdir(str(os.getcwd()) + "\\Data")
df.to_csv(index +"_hurst_segment"+".csv", index=True)
###Output
_____no_output_____ |
notebook/csv_maker.ipynb | ###Markdown
Csv maker
###Code
import pandas as pd
# The original data path
path = "./../AirQualityUCI.xlsx"
# Read the original data
df = pd.read_excel(path, engine="openpyxl")
df
# Show the information
df.info()
# Drop NaN data in the original data
df_dropped = df.dropna(how='all')
df_dropped = df_dropped.dropna(axis=1, how='all')
df_dropped
# Show the modified data
df_dropped.info()
# Define a function to get the difference between two (date, time) records
def get_diff_tuple(tuple1, tuple2):
d11, d12 = tuple1
d21, d22 = tuple2
diff1 = d21 - d11  # date difference (in nanoseconds, since the dates come from .values.tolist())
diff2 = d22.hour - d12.hour  # difference in hours
return (diff1, diff2)
# Check the interval
date = df_dropped["Date"].values.tolist()
time = df_dropped["Time"].values.tolist()
d_t_values = [(d, t) for d, t in zip(date, time)]
for index in range(len(date) - 1):
next_index = index + 1
(date_diff, time_diff) = get_diff_tuple(d_t_values[index], d_t_values[next_index])
if time_diff == 1 and date_diff == 0:
continue  # consecutive hours within the same day
elif time_diff == -23 and date_diff == 86400000000000:
continue  # hour 23 -> hour 0 rollover: the date advances by one day (86,400,000,000,000 ns)
else:
print(f"index is {index} and next index is {next_index}")
print(d_t_values[index])
print(d_t_values[index-1])
raise ValueError("There is data whose interval is not 1 hour.")
# Save the modified data
output_path = "./../AirQualityUCI.csv"
df_dropped.to_csv(output_path, index=False)
# Read the modified data
saved_df = pd.read_csv(output_path)
saved_df
###Output
_____no_output_____ |
experiments/tl_3v2/jitter5/cores-oracle.run1.framed/trials/8/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed Parameters. These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
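For reference, a minimal sketch of how a driver script might inject these parameters with papermill; the notebook paths and parameter values below are placeholders rather than the real experiment configuration:
###Code
# Hypothetical driver sketch: inject parameters into this notebook with papermill.
# The input/output paths and the parameter values shown here are placeholders.
import papermill as pm

pm.execute_notebook(
    "trial.ipynb",         # this template notebook (placeholder path)
    "trial_output.ipynb",  # executed copy holding the results (placeholder path)
    parameters={
        "experiment_name": "example_run",
        "lr": 0.001,
        "seed": 1337,
        # ...the rest of required_parameters would be supplied here as well
    },
)
###Output
_____no_output_____
###Markdown
Papermill replaces the cell tagged "parameters" with an injected-parameters cell holding these values before executing the notebook.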
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3-jitter5v2:cores -> oracle.run1.framed",
"device": "cuda",
"lr": 0.0001,
"x_shape": [2, 256],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["jitter_256_5", "lowpass_+/-10MHz", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["jitter_256_5", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 154325,
"dataset_seed": 154325,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easyfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
notebooks/62_swivel_encoder_tune.ipynb | ###Markdown
Load data
###Code
# NOTE: we're setting is_eval to False even though we use the train dataset for evaluation
# it would be better if we re-loaded the train dataset with is_eval=True and used that for evaluation
# but it may not matter much for hyperparameter optimization
input_names_train, weighted_actual_names_train, candidate_names_train = load_dataset(config.train_path, is_eval=False)
input_names_test, weighted_actual_names_test, candidate_names_test = load_dataset(config.test_path, is_eval=True)
print("input_names_train", len(input_names_train))
print("weighted_actual_names_train", sum(len(wan) for wan in weighted_actual_names_train))
print("candidate_names_train", len(candidate_names_train))
print("input_names_test", len(input_names_test))
print("weighted_actual_names_test", sum(len(wan) for wan in weighted_actual_names_test))
print("candidate_names_test", len(candidate_names_test))
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
vocab_df = pd.read_csv(fopen(config.swivel_vocab_path, "rb"))
print(vocab_df.head(5))
swivel_vocab = {name: _id for name, _id in zip(vocab_df["name"], vocab_df["index"])}
print(len(swivel_vocab))
swivel_model = SwivelModel(len(swivel_vocab), config.embed_dim)
swivel_model.load_state_dict(torch.load(fopen(config.swivel_model_path, "rb")))
swivel_model.eval()
print(swivel_model)
###Output
_____no_output_____
###Markdown
Optimize Hyperparameters. Create optimization and validation sets from the training set.
###Code
# split out the candidate names into train and validate sets
optimize, validate = train_test_split(input_names_train, weighted_actual_names_train, candidate_names_train,
train_size=optimize_size, test_size=validate_size)
input_names_optimize, weighted_actual_names_optimize, candidate_names_optimize = optimize
input_names_validate, weighted_actual_names_validate, candidate_names_validate = validate
print("input_names_optimize", len(input_names_optimize))
print("candidate_names_optimize", len(candidate_names_optimize))
print("input_names_validate", len(input_names_validate))
print("candidate_names_validate", len(candidate_names_validate))
###Output
_____no_output_____
###Markdown
Use Ray to perform the search
###Code
def ray_training_function(config,
swivel_model,
swivel_vocab,
input_names_optimize,
candidate_names_optimize,
input_names_validate,
weighted_actual_names_validate,
candidate_names_validate,
device,
checkpoint_dir=None):
names_optimize = list(set(input_names_optimize).union(set(candidate_names_optimize)))
names_optimize_inputs = convert_names_to_model_inputs(names_optimize)
names_optimize_embeddings = torch.Tensor(get_swivel_embeddings(swivel_model, swivel_vocab, names_optimize))
# create model
encoder_model = SwivelEncoderModel(n_layers=config["n_layers"],
char_embed_dim=config["char_embed_dim"],
n_hidden_units=config["n_hidden_units"],
output_dim=config["embed_dim"],
bidirectional=config["bidirectional"],
pack=config["pack"],
dropout=config["dropout"],
device=device)
encoder_model.to(device=device)
optimizer = torch.optim.Adam(encoder_model.parameters(), lr=config["lr"]) \
if config["use_adam_opt"] \
else torch.optim.Adagrad(encoder_model.parameters(), lr=config["lr"])
# Load checkpoint if exists
if checkpoint_dir:
model_state, optimizer_state = torch.load(
os.path.join(checkpoint_dir, "checkpoint"))
encoder_model.load_state_dict(model_state)
optimizer.load_state_dict(optimizer_state)
for epoch in range(config["n_epochs"]):
losses = train_swivel_encoder(encoder_model,
names_optimize_inputs,
names_optimize_embeddings,
num_epochs=1,
batch_size=config["batch_size"],
use_adam_opt=config["use_adam_opt"],
verbose=False,
optimizer=optimizer)
best_matches = get_best_swivel_matches(None,
None,
input_names_validate,
candidate_names_validate,
k=num_matches,
batch_size=1024,
add_context=True,
encoder_model=encoder_model,
n_jobs=1)
auc = metrics.get_auc(
weighted_actual_names_validate, best_matches, min_threshold=0.01, max_threshold=2.0, step=0.05, distances=False
)
# Checkpoint the model
with tune.checkpoint_dir(epoch) as checkpoint_dir:
path = os.path.join(checkpoint_dir, "checkpoint")
torch.save((encoder_model.state_dict(), optimizer.state_dict()), path)
# Report the metrics to Ray
tune.report(auc=auc, mean_loss=np.mean(losses))
config_params={
"embed_dim": embed_dim,
"n_layers": tune.choice([2, 3]),
"char_embed_dim": 64, # tune.choice([32, 64]),
"n_hidden_units": tune.choice([300, 400]),
"bidirectional": True,
"lr": tune.quniform(0.02, 0.04, 0.01),
"batch_size": 256,
"use_adam_opt": False,
"pack": True,
"dropout": 0.0,
"n_epochs": n_epochs
}
current_best_params = [{
"embed_dim": embed_dim,
"n_layers": 3,
"char_embed_dim": 64,
"n_hidden_units": 400,
"bidirectional": True,
"lr": 0.03,
"batch_size": 256,
"use_adam_opt": False,
"pack": True,
"dropout": 0.0,
"n_epochs": n_epochs,
}]
# Will try to terminate bad trials early
# https://docs.ray.io/en/latest/tune/api_docs/schedulers.html
scheduler = ASHAScheduler(max_t=100,        # maximum number of epochs any trial may run
grace_period=3,       # let every trial run at least 3 epochs before it can be stopped
reduction_factor=4)   # roughly 1 in 4 trials is promoted at each rung
# https://docs.ray.io/en/latest/tune/api_docs/suggestion.html#tune-hyperopt
search_alg = HyperOptSearch(points_to_evaluate=current_best_params)
ray.shutdown()
ray.init(_redis_max_memory=4*10**9) # give redis extra memory
callbacks = []
if wandb_api_key_file:
callbacks.append(WandbLoggerCallback(
project="nama",
entity="nama",
group="62_swivel_encoder_tune_"+given_surname,
notes="",
config=config._asdict(),
api_key_file=wandb_api_key_file
))
result = tune.run(
tune.with_parameters(ray_training_function,
swivel_model=swivel_model,
swivel_vocab=swivel_vocab,
input_names_optimize=input_names_optimize,
candidate_names_optimize=candidate_names_optimize,
input_names_validate=input_names_validate,
weighted_actual_names_validate=weighted_actual_names_validate,
candidate_names_validate=candidate_names_validate,
device=device),
resources_per_trial={"cpu": 0.5, "gpu": 1.0},
config=config_params,
num_samples=20,
scheduler=scheduler,
search_alg=search_alg,
metric="auc",
mode="max",
checkpoint_score_attr="auc",
time_budget_s=12*3600,
keep_checkpoints_num=10,
progress_reporter=tune.JupyterNotebookReporter(
overwrite=False,
max_report_frequency=5*60
),
callbacks=callbacks
)
###Output
_____no_output_____
###Markdown
Get best model
###Code
# Get trial that has the highest AUC (can also do with mean_loss or any other metric)
best_trial_auc = result.get_best_trial(metric='auc', mode='max', scope='all')
# Parameters with the highest AUC
best_trial_auc.config
print(f"Best trial final train loss: {best_trial_auc.last_result['mean_loss']}")
print(f"Best trial final train auc: {best_trial_auc.last_result['auc']}")
# Get checkpoint dir for best model
best_checkpoint_dir = best_trial_auc.checkpoint.value
print(best_checkpoint_dir)
# Load best model
model_state, optimizer_state = torch.load(os.path.join(best_checkpoint_dir, 'checkpoint'))
best_trained_model = SwivelEncoderModel(n_layers=best_trial_auc.config["n_layers"],
char_embed_dim=best_trial_auc.config["char_embed_dim"],
n_hidden_units=best_trial_auc.config["n_hidden_units"],
output_dim=embed_dim,
bidirectional=best_trial_auc.config["bidirectional"],
device=device)
best_trained_model.load_state_dict(model_state)
best_trained_model.eval()
best_trained_model.to(device=device)
###Output
_____no_output_____
###Markdown
Get all trials as DF
###Code
# All trials as pandas dataframe
df = result.results_df
df[(df["auc"] > 0.84) & (df["mean_loss"] < 0.13)]
###Output
_____no_output_____
###Markdown
Plot PR curve on validate
###Code
# plot pr curve with best model
best_matches = get_best_swivel_matches(None,
None,
input_names_validate,
candidate_names_validate,
k=num_matches,
batch_size=256,
add_context=True,
encoder_model=best_trained_model,
n_jobs=4)
metrics.precision_weighted_recall_curve_at_threshold(weighted_actual_names_validate,
best_matches,
min_threshold=0.01,
max_threshold=1.0,
step=0.05,
distances=False)
metrics.get_auc(
weighted_actual_names_validate, best_matches, min_threshold=0.01, max_threshold=1.0, step=0.05, distances=False
)
###Output
_____no_output_____
###Markdown
Plot PR curve on Test
###Code
# plot pr curve with best model
best_matches = get_best_swivel_matches(swivel_model,
swivel_vocab,
input_names_test,
candidate_names_test,
k=num_matches,
batch_size=1024,
add_context=True,
encoder_model=best_trained_model,
n_jobs=4)
metrics.precision_weighted_recall_curve_at_threshold(weighted_actual_names_test,
best_matches,
min_threshold=0.01,
max_threshold=1.0,
step=0.05,
distances=False)
metrics.get_auc(
weighted_actual_names_test, best_matches, min_threshold=0.01, max_threshold=1.0, step=0.05, distances=False
)
###Output
_____no_output_____
###Markdown
Demo
###Code
rndmx_name_idx = np.random.randint(len(input_names_test))
print(f"Input name: {input_names_test[rndmx_name_idx]}")
print("Nearest names:")
print(best_matches[rndmx_name_idx][:10])
print("Actual names:")
sorted(weighted_actual_names_test[rndmx_name_idx][:10], key=lambda k: k[1], reverse=True)
###Output
_____no_output_____
###Markdown
Test a specific threshold
###Code
# precision and recall at a specific threshold
from src.eval.metrics import precision_at_threshold, weighted_recall_at_threshold
threshold = 0.4
precision = np.mean([precision_at_threshold(a, c, threshold, distances=False) \
for a, c in zip(weighted_actual_names_validate, best_matches)])
recall = np.mean([weighted_recall_at_threshold(a, c, threshold, distances=False) \
for a, c in zip(weighted_actual_names_validate, best_matches)])
print(precision, recall)
###Output
_____no_output_____ |
DataCollector.ipynb | ###Markdown
Display Live Camera Feed. Create an image widget of size 224x224 pixels and link the camera to it.
###Code
camera = Camera.instance(width=224, height=224)
image = widgets.Image(format='jpeg', width=224, height=224) # this width and height doesn't necessarily have to match the camera
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)
display(image)
###Output
_____no_output_____
###Markdown
Create Gamepad Controller. The first thing we want to do is create an instance of the Controller widget, which we'll use to drive our robot. The Controller widget takes an index parameter, which specifies the number of the controller. This is useful in case you have multiple controllers attached, or some gamepads appear as multiple controllers. To determine the index of the controller you're using: 1. Visit http://html5gamepad.com. 2. Ensure the gamepad is in XBox mode; if not, enable it by pressing the **home** button for seven seconds. 3. Press buttons on the gamepad you're using. 4. Remember the index of the gamepad that is responding to the button presses. Next, we'll create and display our controller using that index. Note: the code below uses an index of 0.
###Code
import ipywidgets.widgets as widgets
controller = widgets.Controller(index=0) # replace with index of your controller
display(controller)
###Output
_____no_output_____
###Markdown
Connect Gamepad Controller to the Robot. Now, even though we've connected our gamepad, we haven't yet attached the controls to our robot! The first, and simplest, control we want to attach is the motor control. We'll connect that to the left and right vertical axes using the ``dlink`` function. The ``dlink`` function, unlike the ``link`` function, allows us to attach a transform between the ``source`` and ``target``. Because the controller axes are flipped from what we think is intuitive for the motor control, we'll use a small *lambda* function to negate the value. > WARNING: This next cell will move the robot if you touch the gamepad controller axes!
###Code
from jetbot import Robot
import traitlets
robot = Robot()
left_link = traitlets.dlink((controller.axes[1], 'value'), (robot.left_motor, 'value'), transform=lambda x: -x*0.25)
right_link = traitlets.dlink((controller.axes[3], 'value'), (robot.right_motor, 'value'), transform=lambda x: -x*0.25)
###Output
_____no_output_____
###Markdown
Create Data Directories. Create folders for holding ``free`` and ``blocked`` images.
###Code
import os
blocked_dir = 'dataset/blocked'
free_dir = 'dataset/free'
# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
os.makedirs(free_dir)
os.makedirs(blocked_dir)
except FileExistsError:
print('Directories were not created because they already exist')
###Output
Directories were not created because they already exist
###Markdown
Create Control Panel. Create a control panel for capturing images with two buttons: one for capturing ``free`` images and one for capturing ``blocked`` images.
###Code
button_layout = widgets.Layout(width='128px', height='64px')
free_button = widgets.Button(description='add free', button_style='success', layout=button_layout)
blocked_button = widgets.Button(description='add blocked', button_style='danger', layout=button_layout)
free_count = widgets.IntText(layout=button_layout, value=len(os.listdir(free_dir)))
blocked_count = widgets.IntText(layout=button_layout, value=len(os.listdir(blocked_dir)))
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
###Output
_____no_output_____
###Markdown
Enable Actions for Buttons. Attach functions to save images for each button's ``on_click`` event. We'll save the value of the ``Image`` widget (rather than the camera), because it's already in compressed JPEG format! To make sure we don't repeat any file names (even across different machines!) we'll use the ``uuid`` package in Python, which defines the ``uuid1`` method to generate a unique identifier. This unique identifier is generated from information like the current time and the machine address.
###Code
from uuid import uuid1
def save_snapshot(directory):
image_path = os.path.join(directory, str(uuid1()) + '.jpg')
with open(image_path, 'wb') as f:
f.write(image.value)
def save_free():
global free_dir, free_count
save_snapshot(free_dir)
free_count.value = len(os.listdir(free_dir))
def save_blocked():
global blocked_dir, blocked_count
save_snapshot(blocked_dir)
blocked_count.value = len(os.listdir(blocked_dir))
# attach the callbacks, we use a 'lambda' function to ignore the
# parameter that the on_click event would provide to our function
# because we don't need it.
free_button.on_click(lambda x: save_free())
blocked_button.on_click(lambda x: save_blocked())
###Output
_____no_output_____
###Markdown
Save Snapshots with Gamepad Buttons. Now, we'd like to be able to save some images from our robot. Let's make it so the right bumper (index 5) saves a snapshot to the ``free`` image collection and the left bumper (index 4) saves a snapshot to the ``blocked`` collection.
###Code
import uuid
def game_pad_save_free(change):
# save snapshot when button is pressed down
if change['new']:
save_free()
def game_pad_save_blocked(change):
# save snapshot when button is pressed down
if change['new']:
save_blocked()
controller.buttons[5].observe(game_pad_save_free, names='value')
controller.buttons[4].observe(game_pad_save_blocked, names='value')
###Output
_____no_output_____
###Markdown
Stop robot if network disconnects. You can drive your robot around by looking through the video feed. But what if your robot disconnects from Wifi? Well, the motors would keep moving and it would keep trying to stream video and motor commands. Let's make it so that we stop the robot and unlink the camera and motors when a disconnect occurs.
###Code
from jetbot import Heartbeat
def handle_heartbeat_status(change):
if change['new'] == Heartbeat.Status.dead:
camera_link.unlink()
left_link.unlink()
right_link.unlink()
robot.stop()
heartbeat = Heartbeat(period=0.5)
# attach the callback function to heartbeat status
heartbeat.observe(handle_heartbeat_status, names='status')
###Output
_____no_output_____
###Markdown
Bundle dataset. Once you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following *terminal* command to compress our dataset folder into a single *zip* file. > The ! prefix indicates that we want to run the cell as a *shell* (or *terminal*) command. > The -r flag in the zip command below indicates *recursive* so that we include all nested files; the -q flag indicates *quiet* so that the zip command doesn't print any output.
###Code
!zip -r -q dataset.zip dataset
###Output
_____no_output_____ |
arvore_notebook.ipynb | ###Markdown
Decision tree (árvore de decisão)
###Code
y = dadosc['consumo']
X = dados[['temp_max', 'precipitacao', 'finaldesemana']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
arvore = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=1)
arvore = arvore.fit(X_train, y_train)
y_estimado = arvore.predict(X_train)
print(f"Acuracia: {accuracy_score(y_train, y_estimado)}")
print(f"Precisao: {precision_score(y_train, y_estimado, average='weighted', zero_division=0)}")
tree.plot_tree(arvore, fontsize=8)
###Output
_____no_output_____ |
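###Markdown
The metrics above are computed on the training split. A quick sketch of the same evaluation on the held-out test split, assuming the fitted `arvore` model from the cell above:
###Code
# Sketch: evaluate the fitted tree on the test split as well
y_test_pred = arvore.predict(X_test)
print(f"Accuracy (test): {accuracy_score(y_test, y_test_pred)}")
print(f"Precision (test): {precision_score(y_test, y_test_pred, average='weighted', zero_division=0)}")
###Output
_____no_output_____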
notebooks/02-jmg-patent_merge.ipynb | ###Markdown
Patent data integration and EDA. This is an exploratory analysis of PATSTAT application data involving GB-based inventors and applicants. For more information about PATSTAT data check [here](https://www.epo.org/searching-for-patents/business/patstat.htmltab-1) and for more information about patent analysis in general go [here](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/463319/The_Patents_Guide_2nd_edition.pdf).
Activities:
* Load data (currently saved as a pickled dict where every element is a dataframe with information about an application)
* Integrate data into a smaller set of tables for analysis
* Explore data: What are the variables in the data? What are the missing and present values? What do the data capture *legally*?
* Carry out an initial exploratory analysis: What are the activity trends? Who are the top patenters (organisations)? What are the top patenters (sectors / places)?
* What can we find about AI?
Outputs:
* A data dictionary and cleaned dataset
0. Preamble
###Code
%run notebook_preamble.ipy
import pandas_profiling as pp
###Output
_____no_output_____
###Markdown
1. Load data. We have stored the data in a pickled file containing a dictionary where every element is a dataframe with various information.
###Code
with open('../data/raw/20_9_2019_patent_outputs.p','rb') as infile:
pdict = pickle.load(infile)
type(pdict)
len(pdict)
###Output
_____no_output_____
###Markdown
It contains 9 dataframes
###Code
#We have a quick look inside
#Loop over items in the dict
for k,v in pdict.items():
print(k.upper())
print(len(k)*'=')
print('\n')
print(f'number of observations: {len(v)}')
print('\n')
print(v.head())
print('\n')
print('Columns')
print('======')
print(v.columns)
print('\n \n')
# Create a dictionary for patent outputs
# for k,v in pdict.items():
# print(f'* **{k}**:')
# #print(f' * description:')
# print(f' * length:{len(v)}')
###Output
_____no_output_____
###Markdown
EDA of patent output contents Person - applicationsThe person application dataframe contains information about GB inventors or applications - they are the seed for our patent analysis in the mapping innovation in Scotland project.
###Code
#pp.ProfileReport(pdict['person_appln'])
###Output
_____no_output_____
###Markdown
A couple of questions: * Which name and ID do we use? * What's up with all those missing addresses?
###Code
papp = pdict['person_appln']
papp['han_name'].value_counts()[:20]
papp['psn_name'].value_counts()[:20]
###Output
_____no_output_____
###Markdown
The han_name seems to be missing universities - they are not in Orbis?
###Code
papp.loc[papp['psn_name']=='UNIVERSITY OF CAMBRIDGE']['han_name'].head()
###Output
_____no_output_____
###Markdown
Let's use the `psn_name`, as this is the 'official' PATSTAT standardised name. What's up with the missing addresses?
###Code
papp['person_address'].isna().mean()
person_address_lookup = {row['psn_name']:row['person_address'] for ind,row in papp.dropna(axis=0,subset=['person_address']).iterrows()}
# How many of the names with missing addresses are in this lookup?
names_missing_add = papp.loc[papp['person_address'].isna()]['psn_name']
names_missing_add.value_counts()[:10]
###Output
_____no_output_____
###Markdown
Interesting - many of the orgs with missing addresses are 'big organisations'
###Code
len(set(names_missing_add)-set(person_address_lookup.keys()))
###Output
_____no_output_____
###Markdown
There are still 83K names with missing addresses - they are not in the person name-to-address lookup. Do organisations have a single address, and how do we interpret it?
###Code
#This groups the data by organisations and creates a list of addresses. Do we have multiple addresses per name or only one?
grouped_addresses = papp.groupby('psn_name')['person_address'].apply(lambda x: set(list(x)))
grouped_addresses
###Output
_____no_output_____
###Markdown
Ok - so there seems to be a lot of duplication here. One way to manage this would be to focus on harmonised names
###Code
pd.Series([len(x) for x in grouped_addresses]).value_counts()[:5]
###Output
_____no_output_____
###Markdown
There are lots of names with multiple addresses - we will need to allocate them at the patent level. Also need to decide what to do with missing values
###Code
grouped_addresses.loc[[(len(x)>50) for x in grouped_addresses]][:10]
###Output
_____no_output_____
###Markdown
The organisations with many addresses are big organisations or very common names. Do people with very common names have a single person id or many?
###Code
grouped_addresses['UNILEVER']
###Output
_____no_output_____
###Markdown
What a mess! The addresses are totally unstandardised. We can at least extract their postcodes using NSPL. Extract postcodes using NSPL, the postcode lookup
###Code
#Load it
#TODO - remove hardcoded path
nspl = pd.read_csv('/Users/jmateosgarcia/Desktop/data/nspl/NSPL_FEB_2018_UK.csv')
#Create a list of lowercase postcodes. We will focus on the first three letters as this will speed up the analysis
postcodes = list(set(nspl['pcds'].apply(lambda x: x.lower().split(' ')[0])))
#Lowercase the patent applications too
papp['address_lower'] = papp['person_address'].apply(lambda x: x.lower().split(' ') if pd.isnull(x)==False else np.nan)
#Now extract the postcodes from the lowercase addresses (if present)
papp['uk_postcode'] = [set(x) & set(postcodes) if type(x)==list else np.nan for x in papp['address_lower']]
#And now we want to extract full postcodes for those people where we found the 3-digit ones
# This is a faff
%%time
#Store full postcodes here
full_postcode_store = []
#We will loop over rows
for ind,row in papp.iterrows():
#If the postcode is nan that means we append a nan to our store
if type(row['uk_postcode'])!=set:
full_postcode_store.append(np.nan)
else:
#If we have a postcode, we extract it together with the address
pc = list(row['uk_postcode'])
add = row['address_lower']
# There were addresses with no postcodes - empty set. If there was at least one, we will try to extract the string after it
if (len(pc)>0):
#Index for the postcode. Note that this is assuming that we had a unique postcode per address
ind = add.index(pc[0])
#print(ind+1)
#print(len(add))
#In some cases we have the first three digit of the postcode at the end of the address. In that case we append those.
if ind+1 < len(add):
#Join the postcode with the string immediately after.
#Note that in some cases this might append non-postcode strings. These won't be matched later on.
out = ' '.join([pc[0],add[ind+1]])
full_postcode_store.append(out)
#If we didn't have a full postcode we append the three digits extracted before.
else:
full_postcode_store.append(pc[0])
# If the set was empty, this means we have no postcode
else:
full_postcode_store.append(np.nan)
papp['uk_postcode_long'] = full_postcode_store
###Output
_____no_output_____
###Markdown
Let's merge with TTWAs
###Code
#Lowercase the postcodes as before
nspl['pcds_lower'] = nspl['pcds'].apply(lambda x: x.lower())
#Do the merge
papp_geo = pd.merge(papp,nspl[['pcds_lower','laua','ttwa']],left_on='uk_postcode_long',right_on='pcds_lower',how='outer')
# Add TTWA names
# TODO: remove hardcoded path
#Load the lookup
ttwa_names = pd.read_csv('/Users/jmateosgarcia/Desktop/data/nspl/Documents/TTWA names and codes UK as at 12_11 v5.txt',delimiter='\t')
#Create the dict (is there a better way to do this?)
ttwa_names_lookup = {x['TTWA11CD']:x['TTWA11NM'] for ind,x in ttwa_names.iterrows()}
#Map
papp_geo['ttwa_name'] = papp_geo['ttwa'].map(ttwa_names_lookup)
#Here we go. Looking good
papp_geo['ttwa_name'].value_counts()[:10]
###Output
_____no_output_____
###Markdown
Do individuals with very common names have different ids?
###Code
papp.loc[papp['psn_name']=='BAKER, MATTHEW'][['person_id','person_name','psn_name','psn_id','han_name','han_id','person_address']].sort_values('han_id')
###Output
_____no_output_____
###Markdown
It is unclear what the link is between ids and person names. We will need to match at the patent application id level and decide what we do with missing addresses - perhaps allocating missing addresses randomly based on the address distributions of persons with the same name (or id?); a sketch of one possible imputation approach follows after the next cell. Add flags for whether a person is an applicant or inventor
###Code
papp_geo['is_inventor'],papp_geo['is_applicant'] = [[x>0 for x in papp_geo[var]] for var in ['invt_seq_nr','applt_seq_nr']]
###Output
_____no_output_____
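###Markdown
As one possible answer to the missing-address question above, here is a minimal sketch (not run here) of imputing a missing address by sampling from the observed addresses of rows that share the same standardised name. The helper name and the seed are illustrative assumptions rather than part of the original pipeline.
###Code
# Illustrative sketch only: sample a missing address from the observed addresses
# of people/organisations with the same standardised (psn) name. Names with no
# observed address anywhere are left as NaN.
def impute_addresses_by_name(df, name_col='psn_name', addr_col='person_address', seed=123):
    rng = np.random.default_rng(seed)
    observed = df.dropna(subset=[addr_col]).groupby(name_col)[addr_col].apply(list)
    def _sample(row):
        if pd.notnull(row[addr_col]):
            return row[addr_col]
        pool = observed.get(row[name_col])
        return rng.choice(pool) if pool else np.nan
    out = df.copy()
    out[addr_col] = out.apply(_sample, axis=1)
    return out
###Output
_____no_output_____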
###Markdown
Impute TTWAs where these are missing (TODO). Conclusion: create a table that we can merge with the patent applications later. We will group various bits of information by the patent id (which becomes the index we will use for merging). They include:
* Inventor (`invt_seq_nr` different from zero) names, ids, addresses and TTWAs
* Applicant (`applt_seq_nr` different from zero) names, ids, addresses and TTWAs
To do this, I will create a simple function `make_person_metadata`
###Code
def make_person_metadata(df,metadata,name,application_id='appln_id'):
'''
This function creates patent application level metadata about the persons involved.
In order to produce metadata about applicants and inventors we will filter the df beforehand using the invt_seq_nr and applt_seq_nr variables
Arguments:
-df is the patent person df with the relevant information
-metadata is the list of variables that we want to aggregate for each patent
-name is the prefix we will use to label the data (eg inv, appl)
-application_id is the application identifier
Output:
-A df where every row is a patent application and the columns contain the metadata
'''
#Generate the metadata for each variable and output
out = pd.concat([df.groupby(application_id)[var].apply(lambda x: list(x)) for var in metadata],axis=1)
out.rename(columns = {x:name+'_'+x for x in out.columns},inplace=True)
return(out)
#These are the metadata variables of interest
meta_vars = ['psn_name','psn_id','psn_sector','person_address','uk_postcode_long','ttwa','ttwa_name']
#This is a list with a df of 'person applicants' and a df of person inventors
subset_dfs = [papp_geo.loc[papp_geo[var]==True] for var in ['is_applicant','is_inventor']]
#This extracts the metadata for applicant and inventor metadata sets
pat_person_meta = pd.concat([make_person_metadata(df,metadata=meta_vars,name=name) for df,name in zip(subset_dfs,
['appl','inv'])],axis=1)
pat_person_meta.head()
pat_person_meta.shape
###Output
_____no_output_____
###Markdown
Note - some of these patents have missing applicants or inventors because, e.g., these might be based outside of the UK. What are the missing addresses in the inventor / applicant dfs?
###Code
subset_dfs[0]['person_address'].isna().mean()
###Output
_____no_output_____
###Markdown
appln. The `appln` df contains information about patent applications, such as their year and their 'family' (the invention they refer to).
###Code
app = pdict['appln']
#pp.ProfileReport(app)
app.columns
###Output
_____no_output_____
###Markdown
A couple of things to explore. Interpretation of dates
###Code
# What is the relation between filing year and publication year?
100*np.mean(app['earliest_filing_year']<=app['earliest_publn_year'])
###Output
_____no_output_____
###Markdown
This is as expected - patents are filed with the patent office, after which they are published. Interpretation of patent families - do they tend to be in the same jurisdiction or different ones?
###Code
app['docdb_family_id'].value_counts()[:10]
###Output
_____no_output_____
###Markdown
We will check the jurisdictions for the patent with the biggest family
###Code
app.loc[app['docdb_family_id']==48703593]['appln_auth'].value_counts()
###Output
_____no_output_____
###Markdown
Applications in multiple jurisdictions suggest that a focus on families helps us to avoid double counting. Read an easy-to-understand explanation in [Wikipedia](https://en.m.wikipedia.org/wiki/Priority_right).
###Code
app.loc[app['docdb_family_id']==48703593]['nb_citing_docdb_fam'].head()
###Output
_____no_output_____
###Markdown
All patents in a family receive the same number of citations - another reason to focus on families. Conclusion: create an app_subset with the variables of interest.
###Code
app.columns
my_vars = ['appln_id','appln_nr','ipr_type','granted', 'appln_auth','appln_filing_year','earliest_publn_year',
'docdb_family_id','inpadoc_family_id','nb_citing_docdb_fam']
app_subset = app[my_vars].set_index('appln_id')
app_subset.head()
len(app_subset)
len(set(app_subset.index))
###Output
_____no_output_____
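###Markdown
Since every application in a family shares the same citation count, a family-level view can be built by keeping one application per DOCDB family. A minimal sketch, assuming we simply keep the earliest publication in each family:
###Code
# Sketch only: collapse applications to one row per DOCDB family (earliest publication kept)
app_families = app_subset.sort_values('earliest_publn_year').drop_duplicates('docdb_family_id')
len(app_families)
###Output
_____no_output_____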
###Markdown
appln_abstract
###Code
abst = pdict['appln_abstr']
#pp.ProfileReport(abst)
#Remove missing abstracts
abst_2 = abst.loc[[type(x)==str for x in abst['appln_abstract']]]
abst_length = pd.Series([len(x) if x!=None else np.nan for x in abst_2['appln_abstract']])
abst_length.describe()
###Output
_____no_output_____
###Markdown
Almost all patents are in English. Some of them are incredibly long! Out of curiosity: * How many of them mention finance?
###Code
np.sum(['financ' in x for x in abst_2['appln_abstract']])
###Output
_____no_output_____
###Markdown
* And how many mention machine learning?
###Code
np.sum(['machine learning' in x for x in abst_2['appln_abstract']])
###Output
_____no_output_____
###Markdown
This looks quite low - let's see if we can match the AI patents later and see what happens
###Code
abst.set_index('appln_id',inplace=True)
###Output
_____no_output_____
###Markdown
appln_techfield
###Code
techfield = pdict['appln_techn_field']
techfield.head()
#pp.ProfileReport(techfield)
###Output
_____no_output_____
###Markdown
Each patent is allocated a set of technology fields (with weights). appln_ipc. This contains a set of IPC codes for each patent. The rest of the table contains information about:
* the level of detail in the codes (are they full codes or more aggregate categories),
* the order in which they appear in the patent, whether they relate to the invention or to additional material,
* and the version of the IPC codes (the version when they were updated)
###Code
ipc_appln = pdict['appln_ipc']
ipc_appln.shape
ipc_appln.head()
len(set(ipc_appln['appln_id']))
ipc_appln['ipc_version'].value_counts()
###Output
_____no_output_____
###Markdown
What does it mean that the IPC codes are recorded under different versions (i.e. at different points in time)?
###Code
ipc_appln['ipc_class_symbol'].value_counts().head()
len(set(ipc_appln['ipc_class_symbol']))
###Output
_____no_output_____
###Markdown
There are 42240 unique IPC class symbols
###Code
ipc_appln['ipc_class_level'].value_counts()
###Output
_____no_output_____
###Markdown
Almost exclusively 'A' (full IPC codes). How many IPC subclasses are there?
###Code
#pd.Series([x.split(' ')[0] for x in ipc_appln['ipc_class_symbol']]).value_counts().head()
###Output
_____no_output_____
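###Markdown
The cell above is commented out, so here is a minimal sketch that answers the question by counting distinct subclass prefixes (the first token of the symbol, e.g. 'A61K')
###Code
#Count distinct IPC subclasses (first token of the raw class symbol)
pd.Series([x.split(' ')[0] for x in ipc_appln['ipc_class_symbol']]).nunique()
###Output
_____no_output_____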
###Markdown
The IPC symbols are not available in a standardised format. We will convert them. This requires replacing spaces with '0' and removing the '/' characters
###Code
ipc_appln['ipc_class_symbol_proc'] = [re.sub(' ','0',re.sub('/','',x)) for x in ipc_appln['ipc_class_symbol']]
pd.Series([len(x) for x in ipc_appln['ipc_class_symbol_proc']]).value_counts()
###Output
_____no_output_____
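###Markdown
A quick spot check (added for illustration) putting the raw and processed symbols side by side
###Code
#Compare raw and processed IPC symbols for a few rows
ipc_appln[['ipc_class_symbol','ipc_class_symbol_proc']].head()
###Output
_____no_output_____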
###Markdown
Most of the codes are ten characters long, but in a small number of cases they are longer. We will shorten them to 10 characters to keep things simple
###Code
ipc_appln['ipc_class_symbol_proc_10'] = [x[:10] if len(x)>=10 else x for x in ipc_appln['ipc_class_symbol_proc']]
###Output
_____no_output_____
###Markdown
Load the IPC lookup
###Code
with open('../data/external/ipc_def_lookup.json','r') as infile:
ipc_lookup = json.load(infile)
ipc_appln['ipc_description'] = [ipc_lookup[x] if x in ipc_lookup.keys() else np.nan for x in ipc_appln['ipc_class_symbol_proc_10']]
ipc_appln['ipc_description'].isna().sum()
###Output
_____no_output_____
###Markdown
There is a relatively small number of IPC-application codes with missing descriptions. One to investigate further
###Code
#Group IPC codes by application
ipc_grouped = ipc_appln.groupby('appln_id')['ipc_class_symbol_proc_10'].apply(lambda x: list(x))
###Output
_____no_output_____
###Markdown
techn_field_ipcThis is a lookup table
###Code
tf_lookup = pdict['tls901_techn_field_ipc']
tf_lookup.head()
#pp.ProfileReport(tf_lookup)
###Output
_____no_output_____
###Markdown
Match the techn fields with their labels from the lookup table (so we can do some interpretable exploration)
###Code
pdict.keys()
tf_lookup = pdict['tls901_techn_field_ipc']
techfield_labelled = pd.merge(techfield,tf_lookup.drop_duplicates('techn_field'),left_on='techn_field_nr',right_on='techn_field_nr')
techfield_labelled.head()
techfield_labelled.groupby('techn_field')['weight'].sum().sort_values(ascending=False)
# I need to group the fields by patent ids
tf_meta_vars = ['weight','techn_field_nr','techn_field']
#I use the same function that I defined before (it's quite generic!)
tech_grouped = make_person_metadata(techfield_labelled,metadata=tf_meta_vars,application_id='appln_id',name='tf')
tech_grouped.head()
tech_grouped.shape
###Output
_____no_output_____
###Markdown
tls902_ipc_nace2This is a lookup between IPC codes and NACE codes. It won't be very useful for us as we don't have the NACE codes...yet
###Code
ipc_nace_lookup = pdict['tls902_ipc_nace2']
ipc_nace_lookup.head()
#pp.ProfileReport(ipc_nace_lookup)
###Output
_____no_output_____
###Markdown
NUTS lookup (for completeness)
###Code
nuts_lookup = pdict['tls904_nuts']
nuts_lookup.head()
#pp.ProfileReport(nuts_lookup)
###Output
_____no_output_____
###Markdown
Combine sourcesHere we will combine all the tables so far: * `app_subset` has the applications * `pat_person_meta` has the persons * `abst` has the abstracts * `tech_grouped` has the patent tech fields (with labels) * `ipc_grouped` has the grouped IPC codes. The other dfs are not massively relevant
###Code
processed_dfs = [app_subset,pat_person_meta,abst,
tech_grouped,ipc_grouped]
for name,df in zip(['appl','person','abstract','field','ipc'],processed_dfs):
print(name)
print('===')
df.index = [int(x) for x in df.index]
print(df.index[:10])
print('\n')
pat = pd.concat(processed_dfs,axis=1)
pat.to_csv(f'../data/processed/{today_str}_patents_combined.csv')
###Output
_____no_output_____
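###Markdown
A small sanity check (not in the original notebook): `pd.concat(axis=1)` takes the union of indices, so the combined table should have a unique index and at least as many rows as the application subset
###Code
#Sanity check the concatenation: row counts and index uniqueness
print(len(pat), len(app_subset), pat.index.is_unique)
###Output
_____no_output_____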
###Markdown
Create data dictionary
###Code
pat.reset_index(drop=False,inplace=True)
print('|name|type|observations|')
print('|----|----|----|')
for c in pat.columns:
print(f'|{c}|{type(pat[c].iloc[0])}| |')
###Output
_____no_output_____
###Markdown
Are there any ML patents in here? Load the IPO patents (downloaded from [here](https://www.gov.uk/government/publications/artificial-intelligence-a-worldwide-overview-of-ai-patents))
###Code
ml_ids = list(pd.read_csv('../data/external/AI-raw-data.csv',header=None)[0])
#In order to match these patents with our data we need to create a new id that combines granting authority code and publication number
pat['raw_ids'] = [x+y for x,y in zip(pat['appln_auth'],pat['appln_nr'])]
#What's the overlap between both groups?
uk_ai_pats = set(list(pat['raw_ids'])) & set(ml_ids)
len(uk_ai_pats)
###Output
_____no_output_____
###Markdown
1012 - not so bad!
###Code
pat['is_ai_ipo'] = [x in uk_ai_pats for x in pat['raw_ids']]
#pat.to_csv(f'../data/processed/{today_str}_patent_table.csv',compression='gzip')
sorted([x for el in pat.loc[pat['is_ai_ipo']==True]['appl_psn_name'].dropna() for x in el])[:10]
pat.shape
###Output
_____no_output_____ |
notebooks/Baumgartner.ipynb | ###Markdown
Install requirements
###Code
!pip install nibabel
###Output
_____no_output_____
###Markdown
Import data
###Code
!git clone https://github.com/baumgach/acdc_segmenter
!wget https://raw.githubusercontent.com/matthuisman/gdrivedl/master/gdrivedl.py
!python gdrivedl.py https://drive.google.com/open?id=1bxRj0zf-iMooYA4jUZS_zS4VD9NLYxE4 ../data/
!python gdrivedl.py https://drive.google.com/open?id=1L84oEmgc2Nd10bCBlaM7vkbl7nhrYC9I ../data/
###Output
_____no_output_____
###Markdown
Prepare dataset
###Code
%cd acdc_segmenter
!mkdir train_set && unzip -q ../../data/training.zip -d train_set/
!unzip -q ../../data/testing.zip -d test_set/
!mv test_set/testing/testing test_set/tmp && rm -r test_set/testing && mv test_set/tmp test_set/testing
###Output
_____no_output_____
###Markdown
Training
###Code
!python train.py
###Output
_____no_output_____
###Markdown
Testing
###Code
!python evaluate_patients.py acdc_logdir/unet2D_bn_modified_wxent_bn_test -t
###Output
_____no_output_____ |
make_GTIs.ipynb | ###Markdown
make_GTIs.ipynb Reads in a text file list of NICER event lists, gets the generic GTIs from the FITS files generated by the auto pipeline, keeps only the GTIs that are longer than your segment length (like 32 seconds, so that you don't have a bunch of 1-second GTIs hanging around for no good reason), saves those GTIs to a new FITS file, and writes the file names of the new GTI files to a text file for reading into quicklook_segments.py.
###Code
from astropy.table import Table
from astropy.io import fits
from astropy.time import Time
import numpy as np
import os
import subprocess
import matplotlib.pyplot as plt
from xcor_tools_nicer import find_nearest, clock_to_mjd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Need to have the list of event lists already made
###Code
homedir = os.path.expanduser('~')
exe_dir = os.getcwd()
obj_name = "Swift_J1728.9-3613"
obj_prefix = "SwiftJ1728"
min_length = 16 ## seconds
# exe_dir = "%s/Documents/Research/NICER_exploration" % (homedir)
exe_dir = "%s/Documents/Research/%s" % (homedir, obj_prefix)
data_dir = "%s/Reduced_data/%s/" % (homedir, obj_name)
evt_list = "%s/in/%s_evtlists.txt" % (exe_dir, obj_prefix)
data_files = [line.strip() for line in open(evt_list)]
print(os.path.isfile(data_dir+data_files[0]))
print(os.path.isfile(data_dir+data_files[-1]))
###Output
True
True
###Markdown
For a list of many data files:
###Code
gti_list = []
for data_file in data_files:
if os.path.isfile(data_dir+data_file):
obsID = data_file.split("/")[-1].split("_")[0][2:]
evtID = data_file.split("/")[-1].split(".")[0]
print(evtID)
# ex
gti_file = "%s/%s_%dsGTIs.fits" % (data_dir, evtID, min_length)
if not os.path.isfile(gti_file):
hdu_list = fits.open(data_dir+data_file, memmap=True)
# print(hdu_list.info())
# try:
gti_tab = Table(hdu_list[2].data)
gti_length = gti_tab['STOP'] - gti_tab['START']
print("\tGTI len: ", len(gti_length))
print("\tMin GTI: ", np.min(gti_length))
print("\tMax GTI: ", np.max(gti_length))
to_delete = np.where(gti_length < min_length)[0]
del gti_tab[to_delete]
print("\tFinal GTI len: ", len(gti_tab))
# print(gti_tab['START'])
if len(gti_tab) > 0:
gti_tab.write(gti_file, overwrite=True, format='fits')
else:
print("\t", obsID, evtID)
# except:
# print("Could not read GTI extension: %s" % (data_dir+data_file))
if os.path.isfile(gti_file):
gti_list.append(os.path.basename(gti_file))
else:
print("\t GTI file was not created.")
else:
print("File does not exist: %s" % (data_dir+data_file))
print("Done!")
###Output
ni1200550101_0mpu7_cl
GTI len: 66
Min GTI: 1.0
Max GTI: 173.0
Final GTI len: 25
ni1200550102_0mpu7_cl
GTI len: 347
Min GTI: 1.0
Max GTI: 319.0
Final GTI len: 172
ni1200550103_0mpu7_cl
GTI len: 326
Min GTI: 1.0
Max GTI: 453.0
Final GTI len: 167
ni1200550104_0mpu7_cl
GTI len: 20
Min GTI: 1.0
Max GTI: 1121.0
Final GTI len: 17
ni1200550105_0mpu7_cl
GTI len: 282
Min GTI: 4.3779611587524414e-05
Max GTI: 1108.0
Final GTI len: 18
ni1200550106_0mpu7_cl
GTI len: 875
Min GTI: 4.1604042053222656e-05
Max GTI: 1071.0
Final GTI len: 28
ni1200550107_0mpu7_cl
GTI len: 299
Min GTI: 3.191828727722168e-05
Max GTI: 1039.0
Final GTI len: 21
ni1200550108_0mpu7_cl
GTI len: 16
Min GTI: 1.0
Max GTI: 1024.0
Final GTI len: 15
ni1200550109_0mpu7_cl
GTI len: 60
Min GTI: 1.0
Max GTI: 797.0
Final GTI len: 18
ni1200550110_0mpu7_cl
GTI len: 90
Min GTI: 1.0
Max GTI: 509.0
Final GTI len: 9
ni1200550111_0mpu7_cl
GTI len: 45
Min GTI: 0.008965760469436646
Max GTI: 347.0
Final GTI len: 5
ni1200550112_0mpu7_cl
GTI len: 25
Min GTI: 1.0
Max GTI: 273.0
Final GTI len: 7
ni1200550113_0mpu7_cl
GTI len: 12
Min GTI: 1.0
Max GTI: 739.0
Final GTI len: 11
ni1200550114_0mpu7_cl
GTI len: 7
Min GTI: 1.0
Max GTI: 925.0
Final GTI len: 5
ni1200550115_0mpu7_cl
GTI len: 26
Min GTI: 1.0
Max GTI: 1254.0
Final GTI len: 11
ni1200550116_0mpu7_cl
GTI len: 36
Min GTI: 1.0
Max GTI: 795.0
Final GTI len: 7
ni1200550117_0mpu7_cl
GTI len: 15
Min GTI: 1.0
Max GTI: 431.0
Final GTI len: 4
ni1200550118_0mpu7_cl
GTI len: 8
Min GTI: 1.0
Max GTI: 519.0
Final GTI len: 5
ni1200550119_0mpu7_cl
GTI len: 8
Min GTI: 27.0
Max GTI: 783.0
Final GTI len: 8
ni1200550120_0mpu7_cl
GTI len: 3
Min GTI: 1.0
Max GTI: 451.0
Final GTI len: 2
ni1200550121_0mpu7_cl
GTI len: 5
Min GTI: 1.0
Max GTI: 458.0
Final GTI len: 3
ni1200550122_0mpu7_cl
GTI len: 4
Min GTI: 3.0
Max GTI: 256.0
Final GTI len: 3
ni1200550123_0mpu7_cl
GTI len: 4
Min GTI: 267.0
Max GTI: 319.0
Final GTI len: 4
ni1200550124_0mpu7_cl
GTI len: 6
Min GTI: 1.0
Max GTI: 314.0
Final GTI len: 5
ni1200550125_0mpu7_cl
GTI len: 7
Min GTI: 1.0
Max GTI: 808.0
Final GTI len: 6
ni1200550126_0mpu7_cl
GTI len: 9
Min GTI: 1.0
Max GTI: 334.0
Final GTI len: 8
ni1200550127_0mpu7_cl
GTI len: 3
Min GTI: 213.0
Max GTI: 256.0
Final GTI len: 3
Done!
###Markdown
Write the file name to the GTI list, for reading into quicklook_segments.py
###Code
gti_out_file = "%s/in/%s_%dsGTIlists.txt" % (exe_dir, obj_prefix, min_length)
with open(gti_out_file, 'w') as f:
[f.write("%s\n" % gti_file) for gti_file in gti_list]
###Output
_____no_output_____
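###Markdown
As a quick check (not in the original notebook), sum up the exposure retained in the filtered GTI files
###Code
## Total exposure (in seconds) retained after dropping the short GTIs
total_exposure = 0.
for gti_file in gti_list:
    tab = Table.read(data_dir + gti_file, format='fits')
    total_exposure += np.sum(tab['STOP'] - tab['START'])
print("Total retained exposure: %.1f s" % total_exposure)
###Output
_____no_output_____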
###Markdown
Here be dragons and untested and/or outdated code. For just one data file:
###Code
basename = data_file.split("/")[-1].split(".")[0]  ## needed below for the plot title and output file; data_file is left over from the loop above
hdu_list = fits.open(data_dir+data_file, memmap=True)
print(hdu_list.info())
gti_tab = Table(hdu_list[2].data)
gti_length = gti_tab['STOP'] - gti_tab['START']
print(np.min(gti_length))
print(np.max(gti_length))
print(len(np.where(gti_length < 1)[0]))
print(len(np.where(gti_length >= 16)[0]))
print(len(np.where(gti_length >= 32)[0]))
print(len(np.where(gti_length >= 64)[0]))
print(len(np.where(gti_length >= 128)[0]))
bin_edges = np.asarray([2**x for x in range(0,11)])
fig, ax = plt.subplots(1, 1, figsize=(9,6.75), dpi=300, tight_layout=True)
n, bins, patches = plt.hist(x=gti_length, bins=bin_edges, color='blue', alpha=0.7)
ax.grid(axis='y', alpha=0.75)
ax.set_xscale('log')
ax.set_xlabel('GTI length (s)', fontsize=20)
ax.set_ylabel('# of occurrences', fontsize=20)
x_maj_loc = bin_edges
x_maj_labels = bin_edges
ax.set_xticks(x_maj_loc)
ax.tick_params(axis='x', labelsize=20, bottom=True, top=True,
labelbottom=True, labeltop=False, direction="in")
ax.tick_params(axis='y', labelsize=20, left=True, right=True,
labelleft=True, labelright=False, direction="in")
ax.tick_params(which='major', width=1.5, length=9, direction="in")
ax.tick_params(which='minor', width=1.5, length=6, direction="in")
ax.set_xticklabels(x_maj_labels, rotation='horizontal', fontsize=20)
ax.tick_params(which='minor', width=1.5, top=True, right=True, length=6, direction='in')
for axis in ['top', 'bottom', 'left', 'right']:
ax.spines[axis].set_linewidth(1.5)
ax.set_title(basename, fontsize=16)
plt.show()
to_delete = np.where(gti_length < 64)[0]
# print(to_delete)
del gti_tab[to_delete]
gti_tab.info
gti_tab.write("%s/%s_64sGTIs.fits" % (data_dir, basename), overwrite=True, format='fits')
###Output
_____no_output_____ |
tutorials/.ipynb_checkpoints/tutorial03_rllib-checkpoint.ipynb | ###Markdown
Tutorial 03: Running RLlib ExperimentsThis tutorial walks you through the process of running traffic simulations in Flow with trainable RLlib-powered agents. Autonomous agents will learn to maximize a certain reward over the rollouts, using the [**RLlib**](https://ray.readthedocs.io/en/latest/rllib.html) library ([citation](https://arxiv.org/abs/1712.09381)) ([installation instructions](https://flow.readthedocs.io/en/latest/flow_setup.htmloptional-install-ray-rllib)). Simulations of this form will depict the propensity of RL agents to influence the traffic of a human fleet in order to make the whole fleet more efficient (for some given metrics). In this tutorial, we simulate an initially perturbed single lane ring road, where we introduce a single autonomous vehicle. We witness that, after some training, that the autonomous vehicle learns to dissipate the formation and propagation of "phantom jams" which form when only human driver dynamics are involved. 1. Components of a SimulationAll simulations, both in the presence and absence of RL, require two components: a *network*, and an *environment*. Networks describe the features of the transportation network used in simulation. This includes the positions and properties of nodes and edges constituting the lanes and junctions, as well as properties of the vehicles, traffic lights, inflows, etc... in the network. Environments, on the other hand, initialize, reset, and advance simulations, and act as the primary interface between the reinforcement learning algorithm and the network. Moreover, custom environments may be used to modify the dynamical features of an network. Finally, in the RL case, it is in the *environment* that the state/action spaces and the reward function are defined. 2. Setting up a NetworkFlow contains a plethora of pre-designed networks used to replicate highways, intersections, and merges in both closed and open settings. All these networks are located in flow/networks. For this tutorial, which involves a single lane ring road, we will use the network `RingNetwork`. 2.1 Setting up Network ParametersThe network mentioned at the start of this section, as well as all other networks in Flow, are parameterized by the following arguments: * name* vehicles* net_params* initial_configThese parameters are explained in detail in `tutorial01_sumo.ipynb`. Moreover, all parameters excluding vehicles (covered in section 2.2) do not change from the previous tutorial. Accordingly, we specify them nearly as we have before, and leave further explanations of the parameters to `tutorial01_sumo.ipynb`.We begin by choosing the network the experiment will be trained on. We use one of Flow's builtin networks, located in `flow.networks`. A list of all available networks can be found by running the script below.
###Code
import flow.networks as networks
print(networks.__all__)
###Output
_____no_output_____
###Markdown
In this tutorial, we choose to use the ring road network. The network class is then:
###Code
from flow.networks import RingNetwork
# ring road network class
network_name = RingNetwork
###Output
_____no_output_____
###Markdown
One key difference between SUMO and RLlib experiments is that, in RLlib experiments, the network classes do not need to be defined; instead users should simply name the network class they wish to use. Later on, an environment setup module will import the correct network class based on the provided names.
###Code
# input parameter classes to the network class
from flow.core.params import NetParams, InitialConfig
# name of the network
name = "training_example"
# network-specific parameters
from flow.networks.ring import ADDITIONAL_NET_PARAMS
net_params = NetParams(additional_params=ADDITIONAL_NET_PARAMS)
# initial configuration to vehicles
initial_config = InitialConfig(spacing="uniform", perturbation=1)
###Output
_____no_output_____
###Markdown
2.2 Adding Trainable Autonomous VehiclesThe `Vehicles` class stores state information on all vehicles in the network. This class is used to identify the dynamical features of a vehicle and whether it is controlled by a reinforcement learning agent. Moreover, information pertaining to the observations and reward function can be collected from various `get` methods within this class. The dynamics of vehicles in the `Vehicles` class can either be depicted by sumo or by the dynamical methods located in flow/controllers. For human-driven vehicles, we use the IDM model for acceleration behavior, with exogenous gaussian acceleration noise with std 0.2 m/s2 to induce perturbations that produce stop-and-go behavior. In addition, we use the `ContinuousRouter` routing controller so that the vehicles may maintain their routes in closed networks. As we have done in `tutorial01_sumo.ipynb`, human-driven vehicles are defined in the `VehicleParams` class as follows:
###Code
# vehicles class
from flow.core.params import VehicleParams
# vehicles dynamics models
from flow.controllers import IDMController, ContinuousRouter
vehicles = VehicleParams()
vehicles.add("human",
acceleration_controller=(IDMController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=21)
###Output
_____no_output_____
###Markdown
The above addition to the `Vehicles` class only accounts for 21 of the 22 vehicles that are placed in the network. We now add an additional trainable autonomous vehicle whose actions are dictated by an RL agent. This is done by specifying an `RLController` as the acceleration controller of the vehicle.
###Code
from flow.controllers import RLController
###Output
_____no_output_____
###Markdown
Note that this controller serves primarily as a placeholder that marks the vehicle as a component of the RL agent, meaning that lane changing and routing actions can also be specified by the RL agent for this vehicle. We finally add the vehicle as follows, while again using the `ContinuousRouter` to perpetually maintain the vehicle within the network.
###Code
vehicles.add(veh_id="rl",
acceleration_controller=(RLController, {}),
routing_controller=(ContinuousRouter, {}),
num_vehicles=1)
###Output
_____no_output_____
###Markdown
3. Setting up an EnvironmentSeveral environments in Flow exist to train RL agents of different forms (e.g. autonomous vehicles, traffic lights) to perform a variety of different tasks. The use of an environment allows us to view the cumulative reward that simulation rollouts receive, as well as to specify the state/action spaces. Sumo environments in Flow are parametrized by three components: * `SumoParams` * `EnvParams` * `Network` 3.1 SumoParams`SumoParams` specifies simulation-specific variables. These variables include the length of any simulation step and whether to render the GUI when running the experiment. For this example, we consider a simulation step length of 0.1s and deactivate the GUI. **Note** For training purposes, it is highly recommended to deactivate the GUI in order to avoid a global slow down. In such a case, one just needs to specify the following: `render=False`
###Code
from flow.core.params import SumoParams
sim_params = SumoParams(sim_step=0.1, render=False)
###Output
_____no_output_____
###Markdown
3.2 EnvParams`EnvParams` specifies environment and experiment-specific parameters that either affect the training process or the dynamics of various components within the network. For the environment `WaveAttenuationPOEnv`, these parameters are used to dictate bounds on the accelerations of the autonomous vehicles, as well as the range of ring lengths (and accordingly network densities) the agent is trained on. Finally, it is important to specify here the *horizon* of the experiment, which is the duration of one episode (during which the RL agent acquires data).
###Code
from flow.core.params import EnvParams
# Define horizon as a variable to ensure consistent use across notebook
HORIZON=100
env_params = EnvParams(
# length of one rollout
horizon=HORIZON,
additional_params={
# maximum acceleration of autonomous vehicles
"max_accel": 1,
# maximum deceleration of autonomous vehicles
"max_decel": 1,
# bounds on the ranges of ring road lengths the autonomous vehicle
# is trained on
"ring_length": [220, 270],
},
)
###Output
_____no_output_____
###Markdown
3.3 Initializing a Gym EnvironmentNow, we have to specify our Gym Environment and the algorithm that our RL agents will use. Similar to the network, we choose to use one of Flow's builtin environments, a list of which is provided by the script below.
###Code
import flow.envs as flowenvs
print(flowenvs.__all__)
###Output
_____no_output_____
###Markdown
We will use the environment "WaveAttenuationPOEnv", which is used to train autonomous vehicles to attenuate the formation and propagation of waves in a partially observable variable density ring road. To create the Gym Environment, the only necessary parameters are the environment name plus the previously defined variables. These are defined as follows:
###Code
from flow.envs import WaveAttenuationPOEnv
env_name = WaveAttenuationPOEnv
###Output
_____no_output_____
###Markdown
3.4 Setting up Flow ParametersRLlib experiments generate a `params.json` file for each experiment run. For RLlib experiments, the parameters defining the Flow network and environment must be stored as well. As such, in this section we define the dictionary `flow_params`, which contains the variables required by the utility function `make_create_env`. `make_create_env` is a higher-order function which returns a function `create_env` that initializes a Gym environment corresponding to the Flow network specified.
###Code
# Creating flow_params. Make sure the dictionary keys are as specified.
flow_params = dict(
# name of the experiment
exp_tag=name,
# name of the flow environment the experiment is running on
env_name=env_name,
# name of the network class the experiment uses
network=network_name,
# simulator that is used by the experiment
simulator='traci',
# simulation-related parameters
sim=sim_params,
# environment related parameters (see flow.core.params.EnvParams)
env=env_params,
# network-related parameters (see flow.core.params.NetParams and
# the network's documentation or ADDITIONAL_NET_PARAMS component)
net=net_params,
# vehicles to be placed in the network at the start of a rollout
# (see flow.core.vehicles.Vehicles)
veh=vehicles,
# (optional) parameters affecting the positioning of vehicles upon
# initialization/reset (see flow.core.params.InitialConfig)
initial=initial_config
)
###Output
_____no_output_____
###Markdown
4 Running RL experiments in Ray 4.1 Import First, we must import modules required to run experiments in Ray. The `json` package is required to store the Flow experiment parameters in the `params.json` file, as is `FlowParamsEncoder`. Ray-related imports are required: the PPO algorithm agent, `ray.tune`'s experiment runner, and environment helper methods `register_env` and `make_create_env`.
###Code
import json
import ray
try:
from ray.rllib.agents.agent import get_agent_class
except ImportError:
from ray.rllib.agents.registry import get_agent_class
from ray.tune import run_experiments
from ray.tune.registry import register_env
from flow.utils.registry import make_create_env
from flow.utils.rllib import FlowParamsEncoder
###Output
_____no_output_____
###Markdown
4.2 Initializing RayHere, we initialize Ray and experiment-based constant variables specifying parallelism in the experiment as well as experiment batch size in terms of number of rollouts.
###Code
# number of parallel workers
N_CPUS = 2
# number of rollouts per training iteration
N_ROLLOUTS = 1
ray.init(num_cpus=N_CPUS)
###Output
_____no_output_____
###Markdown
4.3 Configuration and SetupHere, we copy and modify the default configuration for the [PPO algorithm](https://arxiv.org/abs/1707.06347). The agent has the number of parallel workers specified, a batch size corresponding to `N_ROLLOUTS` rollouts (each of which has length `HORIZON` steps), a discount rate $\gamma$ of 0.999, two hidden layers of size 16, uses Generalized Advantage Estimation, $\lambda$ of 0.97, and other parameters as set below.Once `config` contains the desired parameters, a JSON string corresponding to the `flow_params` specified in section 3 is generated. The `FlowParamsEncoder` maps objects to string representations so that the experiment can be reproduced later. That string representation is stored within the `env_config` section of the `config` dictionary. Later, `config` is written out to the file `params.json`. Next, we call `make_create_env` and pass in the `flow_params` to return a function we can use to register our Flow environment with Gym.
###Code
# The algorithm or model to train. This may refer to "
# "the name of a built-on algorithm (e.g. RLLib's DQN "
# "or PPO), or a user-defined trainable function or "
# "class registered in the tune registry.")
alg_run = "PPO"
agent_cls = get_agent_class(alg_run)
config = agent_cls._default_config.copy()
config["num_workers"] = N_CPUS - 1 # number of parallel workers
config["train_batch_size"] = HORIZON * N_ROLLOUTS # batch size
config["gamma"] = 0.999 # discount rate
config["model"].update({"fcnet_hiddens": [16, 16]}) # size of hidden layers in network
config["use_gae"] = True # using generalized advantage estimation
config["lambda"] = 0.97
config["sgd_minibatch_size"] = min(16 * 1024, config["train_batch_size"]) # stochastic gradient descent
config["kl_target"] = 0.02 # target KL divergence
config["num_sgd_iter"] = 10 # number of SGD iterations
config["horizon"] = HORIZON # rollout horizon
# save the flow params for replay
flow_json = json.dumps(flow_params, cls=FlowParamsEncoder, sort_keys=True,
indent=4) # generating a string version of flow_params
config['env_config']['flow_params'] = flow_json # adding the flow_params to config dict
config['env_config']['run'] = alg_run
# Call the utility function make_create_env to be able to
# register the Flow env for this experiment
create_env, gym_name = make_create_env(params=flow_params, version=0)
# Register as rllib env with Gym
register_env(gym_name, create_env)
###Output
_____no_output_____
###Markdown
4.4 Running ExperimentsHere, we use the `run_experiments` function from `ray.tune`. The function takes a dictionary with one key, a name corresponding to the experiment, and one value, itself a dictionary containing parameters for training.
###Code
trials = run_experiments({
flow_params["exp_tag"]: {
"run": alg_run,
"env": gym_name,
"config": {
**config
},
"checkpoint_freq": 1, # number of iterations between checkpoints
"checkpoint_at_end": True, # generate a checkpoint at the end
"max_failures": 999,
"stop": { # stopping conditions
"training_iteration": 1, # number of iterations to stop after
},
},
})
###Output
_____no_output_____ |
optical-character-recognition/2.im2latex.ipynb | ###Markdown
Download dataset
###Code
# !wget https://malaya-dataset.s3-ap-southeast-1.amazonaws.com/jawi-rumi.tar.gz
# !wget https://raw.githubusercontent.com/huseinzol05/Malaya-Dataset/master/ocr/train-test-rumi-to-jawi.json
# !tar -zxf jawi-rumi.tar.gz
import json
with open('train-test-rumi-to-jawi.json') as fopen:
dataset = json.load(fopen)
len(dataset['train']), len(dataset['test'])
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from skimage.transform import resize as imresize
import cv2
train_labels = [f.split('/')[1].split('.')[0].lower() for f in dataset['train']]
test_labels = [f.split('/')[1].split('.')[0].lower() for f in dataset['test']]
plt.imshow(cv2.imread(dataset['train'][0], 0).astype(np.float32)/255.)
plt.title(train_labels[0])
plt.show()
charset = list(set(''.join(train_labels + test_labels)))
num_classes = len(charset) + 2
encode_maps = {}
decode_maps = {}
for i, char in enumerate(charset, 3):
encode_maps[char] = i
decode_maps[i] = char
SPACE_INDEX = 0
SPACE_TOKEN = '<PAD>'
encode_maps[SPACE_TOKEN] = SPACE_INDEX
decode_maps[SPACE_INDEX] = SPACE_TOKEN
GO_INDEX = 1
GO_TOKEN = '<GO>'
encode_maps[GO_TOKEN] = GO_INDEX
decode_maps[GO_INDEX] = GO_TOKEN
EOS_INDEX = 2
EOS_TOKEN = '<EOS>'
encode_maps[EOS_TOKEN] = EOS_INDEX
decode_maps[EOS_INDEX] = EOS_TOKEN
encode_maps
[encode_maps[c] for c in train_labels[0]] + [2]
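# (Sketch, not in the original notebook) Inverse of the encoding above: turn a sequence of ids
# back into a string, stopping at the EOS token and skipping the other special tokens.
def decode_ids(ids):
    chars = []
    for i in ids:
        if i == EOS_INDEX:
            break
        if i not in (SPACE_INDEX, GO_INDEX):
            chars.append(decode_maps[i])
    return ''.join(chars)

decode_ids([encode_maps[c] for c in train_labels[0]] + [EOS_INDEX])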
GO = 1
PAD = 0
EOS = 2
image_height = 60
image_width = 240
image_channel = 1
max_stepsize = 128
num_hidden = 256
epoch = 20
batch_size = 128
initial_learning_rate = 1e-3
resized = imresize(cv2.flip((cv2.imread(dataset['train'][0], 0).astype(np.float32)/255.), 1), (image_height,
image_width,
image_channel))
plt.imshow(resized[:,:,0])
plt.title(train_labels[0])
plt.show()
import tqdm
train_X = []
for img in tqdm.tqdm(dataset['train']):
resized = imresize(cv2.flip((cv2.imread(img, 0).astype(np.float32)/255.), 1), (image_height,
image_width,
image_channel))
train_X.append(resized)
import tqdm
test_X = []
for img in tqdm.tqdm(dataset['test']):
resized = imresize(cv2.flip((cv2.imread(img, 0).astype(np.float32)/255.), 1), (image_height,
image_width,
image_channel))
test_X.append(resized)
train_Y = []
for label in train_labels:
train_Y.append([encode_maps[c] for c in label] + [EOS])
test_Y = []
for label in test_labels:
test_Y.append([encode_maps[c] for c in label] + [EOS])
resized.shape
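# (Assumption / sketch, not in the original notebook) The decoder placeholder self.Y defined below
# is a dense [batch, time] int tensor and sequence lengths are taken from its non-zero entries,
# so variable-length label sequences need to be right-padded with PAD (= 0) before being fed in batches.
def pad_batch(batch_y, pad_int=PAD):
    max_len = max(len(y) for y in batch_y)
    return np.array([y + [pad_int] * (max_len - len(y)) for y in batch_y])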
# https://github.com/guillaumegenthial/im2latex/blob/master/model/components/attention_mechanism.py
class AttentionMechanism(object):
"""Class to compute attention over an image"""
def __init__(self, img, dim_e, tiles=1):
"""Stores the image under the right shape.
We loose the H, W dimensions and merge them into a single
dimension that corresponds to "regions" of the image.
Args:
img: (tf.Tensor) image
dim_e: (int) dimension of the intermediary vector used to
compute attention
tiles: (int) default 1, input to context h may have size
(tile * batch_size, ...)
"""
if len(img.shape) == 3:
self._img = img
elif len(img.shape) == 4:
N = tf.shape(img)[0]
H, W = tf.shape(img)[1], tf.shape(img)[2] # image
C = img.shape[3].value # channels
self._img = tf.reshape(img, shape=[N, H*W, C])
else:
print("Image shape not supported")
raise NotImplementedError
# dimensions
self._n_regions = tf.shape(self._img)[1]
self._n_channels = self._img.shape[2].value
self._dim_e = dim_e
self._tiles = tiles
self._scope_name = "att_mechanism"
# attention vector over the image
self._att_img = tf.layers.dense(
inputs=self._img,
units=self._dim_e,
use_bias=False,
name="att_img")
def context(self, h):
"""Computes attention
Args:
h: (batch_size, num_units) hidden state
Returns:
c: (batch_size, channels) context vector
"""
with tf.variable_scope(self._scope_name):
if self._tiles > 1:
att_img = tf.expand_dims(self._att_img, axis=1)
att_img = tf.tile(att_img, multiples=[1, self._tiles, 1, 1])
att_img = tf.reshape(att_img, shape=[-1, self._n_regions,
self._dim_e])
img = tf.expand_dims(self._img, axis=1)
img = tf.tile(img, multiples=[1, self._tiles, 1, 1])
img = tf.reshape(img, shape=[-1, self._n_regions,
self._n_channels])
else:
att_img = self._att_img
img = self._img
# computes attention over the hidden vector
att_h = tf.layers.dense(inputs=h, units=self._dim_e, use_bias=False)
# sums the two contributions
att_h = tf.expand_dims(att_h, axis=1)
att = tf.tanh(att_img + att_h)
# computes scalar product with beta vector
# works faster with a matmul than with a * and a tf.reduce_sum
att_beta = tf.get_variable("att_beta", shape=[self._dim_e, 1],
dtype=tf.float32)
att_flat = tf.reshape(att, shape=[-1, self._dim_e])
e = tf.matmul(att_flat, att_beta)
e = tf.reshape(e, shape=[-1, self._n_regions])
# compute weights
a = tf.nn.softmax(e)
a = tf.expand_dims(a, axis=-1)
c = tf.reduce_sum(a * img, axis=1)
return c
def initial_cell_state(self, cell):
"""Returns initial state of a cell computed from the image
Assumes cell.state_type is an instance of named_tuple.
Ex: LSTMStateTuple
Args:
cell: (instance of RNNCell) must define _state_size
"""
_states_0 = []
for hidden_name in cell._state_size._fields:
hidden_dim = getattr(cell._state_size, hidden_name)
h = self.initial_state(hidden_name, hidden_dim)
_states_0.append(h)
initial_state_cell = type(cell.state_size)(*_states_0)
return initial_state_cell
def initial_state(self, name, dim):
"""Returns initial state of dimension specified by dim"""
with tf.variable_scope(self._scope_name):
img_mean = tf.reduce_mean(self._img, axis=1)
W = tf.get_variable("W_{}_0".format(name), shape=[self._n_channels,
dim])
b = tf.get_variable("b_{}_0".format(name), shape=[dim])
h = tf.tanh(tf.matmul(img_mean, W) + b)
return h
# https://github.com/guillaumegenthial/im2latex/blob/master/model/components/attention_cell.py
import collections
from tensorflow.contrib.rnn import RNNCell, LSTMStateTuple
AttentionState = collections.namedtuple("AttentionState", ("cell_state", "o"))
class AttentionCell(RNNCell):
def __init__(self, cell, attention_mechanism, dropout, dim_e,
dim_o, num_units,
num_proj, dtype=tf.float32):
"""
Args:
cell: (RNNCell)
attention_mechanism: (AttentionMechanism)
dropout: (tf.float)
attn_cell_config: (dict) hyper params
"""
# variables and tensors
self._cell = cell
self._attention_mechanism = attention_mechanism
self._dropout = dropout
# hyperparameters and shapes
self._n_channels = self._attention_mechanism._n_channels
self._dim_e = dim_e
self._dim_o = dim_o
self._num_units = num_units
self._num_proj = num_proj
self._dtype = dtype
# for RNNCell
self._state_size = AttentionState(self._cell._state_size, self._dim_o)
@property
def state_size(self):
return self._state_size
@property
def output_size(self):
return self._num_proj
@property
def output_dtype(self):
return self._dtype
def initial_state(self):
"""Returns initial state for the lstm"""
initial_cell_state = self._attention_mechanism.initial_cell_state(self._cell)
initial_o = self._attention_mechanism.initial_state("o", self._dim_o)
return AttentionState(initial_cell_state, initial_o)
def step(self, embedding, attn_cell_state):
"""
Args:
embedding: shape = (batch_size, dim_embeddings) embeddings
from previous time step
attn_cell_state: (AttentionState) state from previous time step
"""
prev_cell_state, o = attn_cell_state
scope = tf.get_variable_scope()
with tf.variable_scope(scope):
# compute new h
x = tf.concat([embedding, o], axis=-1)
new_h, new_cell_state = self._cell.__call__(x, prev_cell_state)
new_h = tf.nn.dropout(new_h, self._dropout)
# compute attention
c = self._attention_mechanism.context(new_h)
# compute o
o_W_c = tf.get_variable("o_W_c", dtype=tf.float32,
shape=(self._n_channels, self._dim_o))
o_W_h = tf.get_variable("o_W_h", dtype=tf.float32,
shape=(self._num_units, self._dim_o))
new_o = tf.tanh(tf.matmul(new_h, o_W_h) + tf.matmul(c, o_W_c))
new_o = tf.nn.dropout(new_o, self._dropout)
y_W_o = tf.get_variable("y_W_o", dtype=tf.float32,
shape=(self._dim_o, self._num_proj))
logits = tf.matmul(new_o, y_W_o)
# new Attn cell state
new_state = AttentionState(new_cell_state, new_o)
return logits, new_state
def __call__(self, inputs, state):
"""
Args:
inputs: the embedding of the previous word for training only
state: (AttentionState) (h, o) where h is the hidden state and
o is the vector used to make the prediction of
the previous word
"""
new_output, new_state = self.step(inputs, state)
return (new_output, new_state)
from __future__ import division
import math
import numpy as np
from six.moves import xrange
import tensorflow as tf
# taken from https://github.com/tensorflow/tensor2tensor/blob/37465a1759e278e8f073cd04cd9b4fe377d3c740/tensor2tensor/layers/common_attention.py
# taken from https://raw.githubusercontent.com/guillaumegenthial/im2latex/master/model/components/positional.py
def add_timing_signal_nd(x, min_timescale=1.0, max_timescale=1.0e4):
"""Adds a bunch of sinusoids of different frequencies to a Tensor.
Each channel of the input Tensor is incremented by a sinusoid of a difft
frequency and phase in one of the positional dimensions.
This allows attention to learn to use absolute and relative positions.
Timing signals should be added to some precursors of both the query and the
memory inputs to attention.
The use of relative position is possible because sin(a+b) and cos(a+b) can
be experessed in terms of b, sin(a) and cos(a).
x is a Tensor with n "positional" dimensions, e.g. one dimension for a
sequence or two dimensions for an image
We use a geometric sequence of timescales starting with
min_timescale and ending with max_timescale. The number of different
timescales is equal to channels // (n * 2). For each timescale, we
generate the two sinusoidal signals sin(timestep/timescale) and
cos(timestep/timescale). All of these sinusoids are concatenated in
the channels dimension.
Args:
x: a Tensor with shape [batch, d1 ... dn, channels]
min_timescale: a float
max_timescale: a float
Returns:
a Tensor the same shape as x.
"""
static_shape = x.get_shape().as_list()
num_dims = len(static_shape) - 2
channels = tf.shape(x)[-1]
num_timescales = channels // (num_dims * 2)
log_timescale_increment = (
math.log(float(max_timescale) / float(min_timescale)) /
(tf.to_float(num_timescales) - 1))
inv_timescales = min_timescale * tf.exp(
tf.to_float(tf.range(num_timescales)) * -log_timescale_increment)
for dim in xrange(num_dims):
length = tf.shape(x)[dim + 1]
position = tf.to_float(tf.range(length))
scaled_time = tf.expand_dims(position, 1) * tf.expand_dims(
inv_timescales, 0)
signal = tf.concat([tf.sin(scaled_time), tf.cos(scaled_time)], axis=1)
prepad = dim * 2 * num_timescales
postpad = channels - (dim + 1) * 2 * num_timescales
signal = tf.pad(signal, [[0, 0], [prepad, postpad]])
for _ in xrange(1 + dim):
signal = tf.expand_dims(signal, 0)
for _ in xrange(num_dims - 1 - dim):
signal = tf.expand_dims(signal, -2)
x += signal
return x
attention_size = 256
size_layer = 256
embedded_size = 256
beam_width = 15
learning_rate = 1e-3
# CNN part I took from https://github.com/guillaumegenthial/im2latex/blob/master/model/encoder.py
# I use tf.contrib.seq2seq as decoder part
class Model:
def __init__(self):
self.X = tf.placeholder(tf.float32, shape=(None, 60, 240, 1))
self.Y = tf.placeholder(tf.int32, [None, None])
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32)
batch_size = tf.shape(self.X)[0]
x_len = tf.shape(self.X)[2] // 2
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
decoder_embeddings = tf.Variable(tf.random_uniform([len(encode_maps), embedded_size], -1, 1))
img = self.X
out = tf.layers.conv2d(img, 64, 3, 1, "SAME",
activation=tf.nn.relu)
out = tf.layers.max_pooling2d(out, 2, 2, "SAME")
out = tf.layers.conv2d(out, 128, 3, 1, "SAME",
activation=tf.nn.relu)
out = tf.layers.max_pooling2d(out, 2, 2, "SAME")
out = tf.layers.conv2d(out, 256, 3, 1, "SAME",
activation=tf.nn.relu)
out = tf.layers.conv2d(out, 256, 3, 1, "SAME",
activation=tf.nn.relu)
out = tf.layers.max_pooling2d(out, (2, 1), (2, 1), "SAME")
out = tf.layers.conv2d(out, 512, 3, 1, "SAME",
activation=tf.nn.relu)
out = tf.layers.max_pooling2d(out, (1, 2), (1, 2), "SAME")
out = tf.layers.conv2d(out, 512, 3, 1, "VALID",
activation=tf.nn.relu)
img = add_timing_signal_nd(out)
print(img)
with tf.variable_scope("attn_cell", reuse=False):
attn_meca = AttentionMechanism(img, attention_size)
recu_cell = tf.nn.rnn_cell.LSTMCell(size_layer)
attn_cell = AttentionCell(recu_cell, attn_meca, 1.0,
attention_size, attention_size, size_layer, len(encode_maps))
encoder_state = attn_cell.initial_state()
training_helper = tf.contrib.seq2seq.ScheduledEmbeddingTrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),
sequence_length = self.Y_seq_len,
embedding = decoder_embeddings,
sampling_probability = 0.5,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = attn_cell,
helper = training_helper,
initial_state = encoder_state,
output_layer = None)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
with tf.variable_scope("attn_cell", reuse=True):
attn_meca = AttentionMechanism(img, attention_size, tiles=beam_width)
recu_cell = tf.nn.rnn_cell.LSTMCell(size_layer, reuse = True)
attn_cell = AttentionCell(recu_cell, attn_meca, 1.0,
attention_size, attention_size, size_layer, len(encode_maps))
encoder_state = attn_cell.initial_state()
predicting_decoder = tf.contrib.seq2seq.BeamSearchDecoder(
cell = attn_cell,
embedding = decoder_embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS,
initial_state = tf.contrib.seq2seq.tile_batch(encoder_state, beam_width),
beam_width = beam_width,
output_layer = None,
length_penalty_weight = 0.0)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = False,
maximum_iterations = x_len)
self.training_logits = training_decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.predicted_ids
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model()
sess.run(tf.global_variables_initializer())
model.training_logits
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
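# e.g. pad_sentence_batch([[5, 6], [7]], 0) -> ([[5, 6], [7, 0]], [2, 1])
# (pads every sequence to the longest one and records the original lengths)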
batch_x = train_X[:5]
batch_x = np.array(batch_x).reshape((len(batch_x), image_height, image_width,image_channel))
y = train_Y[:5]
batch_y, _ = pad_sentence_batch(y, 0)
loss, logits, acc = sess.run([model.cost, model.training_logits, model.accuracy], feed_dict = {model.X: batch_x,
model.Y: batch_y})
loss, acc
batch_x = train_X[:5]
batch_x = np.array(batch_x).reshape((len(batch_x), image_height, image_width,image_channel))
y = train_Y[:5]
batch_y, _ = pad_sentence_batch(y, 0)
logits = sess.run(model.predicting_ids, feed_dict = {model.X: batch_x})
logits.shape
for e in range(epoch):
pbar = tqdm.tqdm(
range(0, len(train_X), batch_size), desc = 'minibatch loop')
train_loss, train_acc, test_loss, test_acc = [], [], [], []
for i in pbar:
index = min(i + batch_size, len(train_X))
batch_x = train_X[i : index]
batch_x = np.array(batch_x).reshape((len(batch_x), image_height, image_width,image_channel))
y = train_Y[i : index]
batch_y, _ = pad_sentence_batch(y, 0)
feed = {model.X: batch_x,
model.Y: batch_y}
accuracy, loss, _ = sess.run([model.accuracy,model.cost,model.optimizer],
feed_dict = feed)
train_loss.append(loss)
train_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
pbar = tqdm.tqdm(
range(0, len(test_X), batch_size), desc = 'minibatch loop')
for i in pbar:
index = min(i + batch_size, len(test_X))
batch_x = test_X[i : index]
batch_x = np.array(batch_x).reshape((len(batch_x), image_height, image_width,image_channel))
y = test_Y[i : index]
batch_y, _ = pad_sentence_batch(y, 0)
feed = {model.X: batch_x,
model.Y: batch_y,}
accuracy, loss = sess.run([model.accuracy,model.cost],
feed_dict = feed)
test_loss.append(loss)
test_acc.append(accuracy)
pbar.set_postfix(cost = loss, accuracy = accuracy)
print('epoch %d, training avg loss %f, training avg acc %f'%(e+1,
np.mean(train_loss),np.mean(train_acc)))
print('epoch %d, testing avg loss %f, testing avg acc %f'%(e+1,
np.mean(test_loss),np.mean(test_acc)))
decoded = sess.run(model.predicting_ids, feed_dict = {model.X: batch_x[3:4],
model.Y: batch_y[3:4]})[0]
decoded.shape
for beam in range(decoded.shape[1]):
    d = decoded[:, beam]
    print(''.join([decode_maps[i] for i in d if i not in [0, 1, 2]]))
plt.imshow(cv2.flip(batch_x[3][:,:,0], 1))
decoded = ''.join([decode_maps[i] for i in decoded[:,0] if i not in [0,1,2]])
actual = ''.join([decode_maps[i] for i in batch_y[3] if i not in [0,1,2]])
plt.title('predict: %s, actual: %s'%(decoded, actual))
plt.show()
###Output
_____no_output_____ |
universe/dow30-galaxy/dow30.ipynb | ###Markdown
List of DJIA (DOW 30) companies Retrieved from https://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average Output: 'dow30.csv'
###Code
# Imports
from datetime import datetime
import numpy as np
import pandas as pd
import os
import re
import shutil
import wikipedia as wp
pd.options.mode.chained_assignment = None # default='warn'
pd.set_option('display.max_rows', 600)
# -*- encoding: utf-8 -*-
%matplotlib inline
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
def get_table(title, filename, match, use_cache=False):
if use_cache and os.path.isfile(filename):
pass
else:
html = wp.page(title).html()
df = pd.read_html(html, header=0, match=match)[0]
df.to_csv(filename, header=True, index=False, encoding='utf-8')
df = pd.read_csv(filename)
return df
title = 'Dow Jones Industrial Average'
filename = 'dow30.csv'
dow30 = get_table(title, filename, match='Symbol')
# dd/mm/YY H:M:S
now = datetime.now()
dt_string = now.strftime("%m/%d/%Y %H:%M:%S")
print('{} (retrieved {})'.format(title, dt_string))
dow30
###Output
Dow Jones Industrial Average (retrieved 06/06/2021 10:27:22)
|
projects/utsw2019/utsw_salaries_2019.ipynb | ###Markdown
2019 UTSW Salary Analysis
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_excel('TPIA_Data_2019.xlsx')
data.head()
# Let's see how the distribution of full time vs part time looks
f1 = data['Full Time Type Description'].value_counts()
print(f1)
p_full = plt.figure()
pf1 = p_full.add_axes([0.1,0.1,0.8,0.8])
pf1.pie(f1.values.tolist(), labels=f1.index.values.tolist())
# Gender distribution
g1 = data['Gender'].value_counts()
print(g1)
p_gender = plt.figure()
ax1 = p_gender.add_axes([0.1,0.1,0.8,0.8])
ax1.pie(g1.values.tolist(), labels=g1.index.values.tolist())
data[data['Primary Name'].apply(lambda x: x.lower().find('maruni') >= 0)]
# Respiratory department job codes
data[data['Department Description'] == 'Respiratory Therapy']['Job Code Description'].value_counts()
data[data['Department Description'] == 'Respiratory Therapy']['Gender'].value_counts()
data[data['Department Description'] == 'Respiratory Therapy'][['Gender', 'Annual Pay']].groupby('Gender').describe()
data[['Department Description', 'Annual Pay']].groupby('Department Description').describe()
data[['Department Description', 'Gender' , 'Annual Pay']].groupby(['Department Description', 'Gender']).describe()
data[[ 'Gender' , 'Annual Pay']].groupby(['Gender']).describe()
###Output
_____no_output_____ |
April/Week14/Day98.ipynb | ###Markdown
Shortest Absolute Path  Given a file path that contains directory names, the parent-directory marker `..`, and the current-directory marker `.`, return the shortest absolute path (i.e., with all relative-path markers removed).```text In[1]: '/Users/Joma/Documents/../Desktop/./../'Out[1]: '/Users/Joma/'```
###Code
def shortestPath(path: str) -> str:
if not path:
return None
folders = path.split('/')
result = []
for folder in folders:
if folder == '.':
continue
elif folder == '..':
result.pop()
else:
result.append(folder)
return '/'.join(result)
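# Note: splitting an absolute path on '/' yields a leading '' element, which keeps
# the re-joined result anchored at '/'; a trailing '' likewise preserves a final '/'.
# e.g. shortestPath('/a/b/./../c/') -> '/a/c/'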
print(shortestPath('/Users/Joma/Documents/../Desktop/./../'))
###Output
/Users/Joma/
|
Code/MSDS692_SunriseSunsetData.ipynb | ###Markdown
MSDS692 Project: Data Preprocessing - Sunrise and Sunset Times for Denver, CO Natalia Weakly Original data source: https://www.timeanddate.com/
###Code
# Imports
import pandas as pd
import numpy as np
import os
import datetime
###Output
_____no_output_____
###Markdown
Data load and preprocessing
###Code
# Load data
timeOfDay=pd.read_csv('MSDS692_Denver_SunriseSunset_5.csv', infer_datetime_format=True)
# Check the data
timeOfDay.head()
timeOfDay.tail()
# Structure of the data frame
timeOfDay.info()
# Check if there are any missing values
timeOfDay.isnull().values.any()
# Create a 'fullDate' column by concatenating 'Year', 'Month' and 'Day'
timeOfDay['fullDate'] = timeOfDay['Year'].astype(str) + '/' + timeOfDay['Month'] + '/' + timeOfDay['Day'].astype(str)
timeOfDay.head()
#Check results
timeOfDay.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1916 entries, 0 to 1915
Data columns (total 12 columns):
Year 1916 non-null int64
Month 1916 non-null object
Day 1916 non-null int64
AstronomicalTwilight_Start 1916 non-null object
AstronomicalTwilight_End 1916 non-null object
NauticalTwilight_Start 1916 non-null object
NauticalTwilight_End 1916 non-null object
CivilTwilight_Start 1916 non-null object
CivilTwilight_End 1916 non-null object
Sunrise 1916 non-null object
Sunset 1916 non-null object
fullDate 1916 non-null object
dtypes: int64(2), object(10)
memory usage: 179.7+ KB
###Markdown
Add additional columns for different twilight times In order to investigate whether there is a link between car accidents and natural light conditions, let's add columns showing when twilight conditions began and ended on each particular date (later to be compared with accident times). Twilight is the time between day and night when there is light outside, but the sun is below the horizon. There are three types of twilight: civil, nautical, and astronomical. Twilight occurs because the Earth's upper atmosphere reflects sunlight and illuminates the lower atmosphere. So, its three stages are defined depending on the Sun's elevation (the angle of its geometric center with the horizon) as shown below. Image credit: timeanddate.com
###Code
# Create a column for astronomical Twilihgt start as a full date/time (as a string)
timeOfDay['AstroT_Start']=timeOfDay['fullDate'] + ' ' + timeOfDay['AstronomicalTwilight_Start'].astype(str)
# Check results
timeOfDay.head()
# Convert 'AstroT_Start' to the proper date/time format
timeOfDay['AstroT_Start'] = pd.to_datetime(timeOfDay['AstroT_Start'])
# Preview results
timeOfDay.head()
# Similarly, create properly formated column for the AstronomicalTwilight_End
# AstronomicalTwilight_End
# concatenate
timeOfDay['AstroT_End']=timeOfDay['fullDate'] + ' ' + timeOfDay['AstronomicalTwilight_End'].astype(str)
# convert to date and time format
timeOfDay['AstroT_End'] = pd.to_datetime(timeOfDay['AstroT_End'])
# check results
timeOfDay.head()
# NauticalTwilight_Start
# concatenate
timeOfDay['NauticalT_Start']=timeOfDay['fullDate'] + ' ' + timeOfDay['NauticalTwilight_Start'].astype(str)
# convert to date and time format
timeOfDay['NauticalT_Start'] = pd.to_datetime(timeOfDay['NauticalT_Start'])
# check results
timeOfDay.head()
# NauticalTwilight_End
# concatenate
timeOfDay['NauticalT_End']=timeOfDay['fullDate'] + ' ' + timeOfDay['NauticalTwilight_End'].astype(str)
# convert to date and time format
timeOfDay['NauticalT_End'] = pd.to_datetime(timeOfDay['NauticalT_End'])
# check results
timeOfDay.head()
# CivilTwilight_Start
# concatenate
timeOfDay['CivilT_Start']=timeOfDay['fullDate'] + ' ' + timeOfDay['CivilTwilight_Start'].astype(str)
# convert to date and time format
timeOfDay['CivilT_Start'] = pd.to_datetime(timeOfDay['CivilT_Start'])
# check results
timeOfDay.head()
# CivilTwilight_End
# concatenate
timeOfDay['CivilT_End']=timeOfDay['fullDate'] + ' ' + timeOfDay['CivilTwilight_End'].astype(str)
# convert to date and time format
timeOfDay['CivilT_End'] = pd.to_datetime(timeOfDay['CivilT_End'])
# check results
timeOfDay.head()
# Sunrise
# concatenate
timeOfDay['Sunrise']=timeOfDay['fullDate'] + ' ' + timeOfDay['Sunrise'].astype(str)
# convert to date and time format
timeOfDay['Sunrise'] = pd.to_datetime(timeOfDay['Sunrise'])
# check results
timeOfDay.head()
# Sunset
# concatenate
timeOfDay['Sunset']=timeOfDay['fullDate'] + ' ' + timeOfDay['Sunset'].astype(str)
# convert to date and time format
timeOfDay['Sunset'] = pd.to_datetime(timeOfDay['Sunset'])
# check results
timeOfDay.head()
# Convert the 'FullDate' to DateTime format
timeOfDay['fullDate'] =pd.to_datetime(timeOfDay['fullDate'])
# Check results
timeOfDay.head()
timeOfDay.info()
# Copy newly created columns in datetime format to a new data frame
naturalLightConditions = timeOfDay[['fullDate', 'AstroT_Start', 'AstroT_End', 'NauticalT_Start', 'NauticalT_End', 'CivilT_Start', 'CivilT_End', 'Sunrise', 'Sunset']]
# check results
naturalLightConditions.head()
naturalLightConditions.info()
# output naturalLightConditions to a file for future use
naturalLightConditions.to_csv('naturalLightConditions.csv', date_format='%Y-%m-%d %H:%M:%S')
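# (Illustrative sketch, not part of the original preprocessing) One way these
# columns could later be used: label an arbitrary event timestamp with a natural
# light category for that day's row. The category names here are assumptions.
def light_category(row, ts):
    """Rough light category for timestamp ts, given that day's row of naturalLightConditions."""
    if row['Sunrise'] <= ts <= row['Sunset']:
        return 'daylight'
    if row['CivilT_Start'] <= ts <= row['CivilT_End']:
        return 'civil twilight'
    if row['NauticalT_Start'] <= ts <= row['NauticalT_End']:
        return 'nautical twilight'
    if row['AstroT_Start'] <= ts <= row['AstroT_End']:
        return 'astronomical twilight'
    return 'night'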
###Output
_____no_output_____ |
Chapter20/03_variational_autoencoder.ipynb | ###Markdown
Variational Autoencoder on Fashion MNIST data using Feedforward NN Adapted from [Building Autoencoders in Keras](https://blog.keras.io/building-autoencoders-in-keras.html) by Francois Chollet who created Keras. Imports & Settings
###Code
from keras.layers import Lambda, Input, Dense
from keras.models import Model
from keras.datasets import mnist, fashion_mnist
from keras.losses import mse, binary_crossentropy
from keras.utils import plot_model
from keras import backend as K
import numpy as np
import matplotlib.pyplot as plt
import argparse
import os
###Output
_____no_output_____
###Markdown
Sampling
###Code
# instead of sampling from Q(z|X), sample eps = N(0,I)
# z = z_mean + sqrt(var)*eps
def sampling(args):
"""Reparameterization trick by sampling fr an isotropic unit Gaussian.
# Arguments
args (tensor): mean and log of variance of Q(z|X)
# Returns
z (tensor): sampled latent vector
"""
z_mean, z_log_var = args
batch = K.shape(z_mean)[0]
dim = K.int_shape(z_mean)[1]
# by default, random_normal has mean=0 and std=1.0
epsilon = K.random_normal(shape=(batch, dim))
return z_mean + K.exp(0.5 * z_log_var) * epsilon
###Output
_____no_output_____
###Markdown
Load Fashion MNIST Data
###Code
# Fashion MNIST dataset
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
image_size = x_train.shape[1]
original_dim = image_size * image_size
x_train = np.reshape(x_train, [-1, original_dim])
x_test = np.reshape(x_test, [-1, original_dim])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255
###Output
_____no_output_____
###Markdown
Define Variational Autoencoder Architecture Network Parameters
###Code
input_shape = (original_dim,)
intermediate_dim = 512
batch_size = 128
latent_dim = 2
epochs = 50
###Output
_____no_output_____
###Markdown
Encoder model Define Layers
###Code
inputs = Input(shape=input_shape, name='encoder_input')
x = Dense(intermediate_dim, activation='relu')(inputs)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)
# use reparameterization trick to push the sampling out as input
# note that "output_shape" isn't necessary with the TensorFlow backend
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
###Output
_____no_output_____
###Markdown
Instantiate Model
###Code
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
encoder.summary()
plot_model(encoder, to_file='vae_mlp_encoder.png', show_shapes=True)
###Output
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
encoder_input (InputLayer) (None, 784) 0
__________________________________________________________________________________________________
dense_4 (Dense) (None, 512) 401920 encoder_input[0][0]
__________________________________________________________________________________________________
z_mean (Dense) (None, 2) 1026 dense_4[0][0]
__________________________________________________________________________________________________
z_log_var (Dense) (None, 2) 1026 dense_4[0][0]
__________________________________________________________________________________________________
z (Lambda) (None, 2) 0 z_mean[0][0]
z_log_var[0][0]
==================================================================================================
Total params: 403,972
Trainable params: 403,972
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Decoder Model Define Layers
###Code
latent_inputs = Input(shape=(latent_dim,), name='z_sampling')
x = Dense(intermediate_dim, activation='relu')(latent_inputs)
outputs = Dense(original_dim, activation='sigmoid')(x)
###Output
_____no_output_____
###Markdown
Instantiate model
###Code
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()
plot_model(decoder, to_file='vae_mlp_decoder.png', show_shapes=True)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
z_sampling (InputLayer) (None, 2) 0
_________________________________________________________________
dense_5 (Dense) (None, 512) 1536
_________________________________________________________________
dense_6 (Dense) (None, 784) 402192
=================================================================
Total params: 403,728
Trainable params: 403,728
Non-trainable params: 0
_________________________________________________________________
###Markdown
Combine Encoder and Decoder to VAE model
###Code
outputs = decoder(encoder(inputs)[2])
vae = Model(inputs, outputs, name='vae_mlp')
models = (encoder, decoder)
###Output
_____no_output_____
###Markdown
Train Model
###Code
data = (x_test, y_test)
reconstruction_loss = mse(inputs, outputs)
reconstruction_loss *= original_dim
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
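# Clarifying note: the three lines above implement the closed-form KL divergence
# between the approximate posterior N(z_mean, exp(z_log_var)) and the standard
# normal prior: KL = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2) over latent dims.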
vae_loss = K.mean(reconstruction_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='adam')
vae.summary()
plot_model(vae,
to_file='vae_mlp.png',
show_shapes=True)
vae.fit(x_train,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, None))
vae.save_weights('vae_mlp_mnist.h5')
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/50
60000/60000 [==============================] - 5s 84us/step - loss: 43.3090 - val_loss: 34.3889
Epoch 2/50
60000/60000 [==============================] - 5s 77us/step - loss: 33.4821 - val_loss: 32.6602
Epoch 3/50
60000/60000 [==============================] - 5s 79us/step - loss: 32.2131 - val_loss: 32.0505
Epoch 4/50
60000/60000 [==============================] - 5s 81us/step - loss: 31.5358 - val_loss: 31.1049
Epoch 5/50
60000/60000 [==============================] - 5s 76us/step - loss: 31.0607 - val_loss: 30.9470
Epoch 6/50
60000/60000 [==============================] - 5s 79us/step - loss: 30.6998 - val_loss: 30.5644
Epoch 7/50
60000/60000 [==============================] - 5s 82us/step - loss: 30.3913 - val_loss: 30.1875
Epoch 8/50
60000/60000 [==============================] - 5s 78us/step - loss: 30.1415 - val_loss: 29.9968
Epoch 9/50
60000/60000 [==============================] - 5s 78us/step - loss: 29.9107 - val_loss: 29.9642
Epoch 10/50
60000/60000 [==============================] - 5s 78us/step - loss: 29.6511 - val_loss: 29.7572
Epoch 11/50
60000/60000 [==============================] - 5s 80us/step - loss: 29.4334 - val_loss: 29.2960
Epoch 12/50
60000/60000 [==============================] - 5s 83us/step - loss: 29.1718 - val_loss: 28.9645
Epoch 13/50
60000/60000 [==============================] - 5s 83us/step - loss: 28.9931 - val_loss: 28.8535
Epoch 14/50
60000/60000 [==============================] - 5s 84us/step - loss: 28.8639 - val_loss: 28.7241
Epoch 15/50
60000/60000 [==============================] - 5s 79us/step - loss: 28.7483 - val_loss: 28.6740
Epoch 16/50
60000/60000 [==============================] - 5s 80us/step - loss: 28.6502 - val_loss: 28.6064
Epoch 17/50
60000/60000 [==============================] - 5s 79us/step - loss: 28.5263 - val_loss: 28.4739
Epoch 18/50
60000/60000 [==============================] - 5s 79us/step - loss: 28.4683 - val_loss: 28.6913
Epoch 19/50
60000/60000 [==============================] - 5s 80us/step - loss: 28.3919 - val_loss: 28.6425
Epoch 20/50
60000/60000 [==============================] - 5s 85us/step - loss: 28.3120 - val_loss: 28.3970
Epoch 21/50
60000/60000 [==============================] - 5s 81us/step - loss: 28.2618 - val_loss: 28.2174
Epoch 22/50
60000/60000 [==============================] - 5s 80us/step - loss: 28.1734 - val_loss: 28.3498
Epoch 23/50
60000/60000 [==============================] - 5s 81us/step - loss: 28.1509 - val_loss: 28.1824
Epoch 24/50
60000/60000 [==============================] - 5s 81us/step - loss: 28.0902 - val_loss: 28.2143
Epoch 25/50
60000/60000 [==============================] - 5s 82us/step - loss: 28.0262 - val_loss: 28.1908
Epoch 26/50
60000/60000 [==============================] - 5s 84us/step - loss: 28.0190 - val_loss: 28.3770
Epoch 27/50
60000/60000 [==============================] - 5s 88us/step - loss: 28.0110 - val_loss: 28.1026
Epoch 28/50
60000/60000 [==============================] - 5s 81us/step - loss: 27.9182 - val_loss: 28.0952
Epoch 29/50
60000/60000 [==============================] - 5s 86us/step - loss: 27.9236 - val_loss: 27.9516
Epoch 30/50
60000/60000 [==============================] - 5s 81us/step - loss: 27.8752 - val_loss: 27.9971
Epoch 31/50
60000/60000 [==============================] - 5s 79us/step - loss: 27.8571 - val_loss: 27.9477
Epoch 32/50
60000/60000 [==============================] - 5s 80us/step - loss: 27.7946 - val_loss: 27.8683
Epoch 33/50
60000/60000 [==============================] - 5s 82us/step - loss: 27.7705 - val_loss: 27.9689
Epoch 34/50
60000/60000 [==============================] - 5s 82us/step - loss: 27.7620 - val_loss: 27.9068
Epoch 35/50
60000/60000 [==============================] - 5s 86us/step - loss: 27.6971 - val_loss: 27.8580
Epoch 36/50
60000/60000 [==============================] - 5s 85us/step - loss: 27.7102 - val_loss: 27.9967
Epoch 37/50
60000/60000 [==============================] - 5s 81us/step - loss: 27.6690 - val_loss: 27.8570
Epoch 38/50
60000/60000 [==============================] - 5s 83us/step - loss: 27.6489 - val_loss: 27.7641
Epoch 39/50
60000/60000 [==============================] - 5s 90us/step - loss: 27.6315 - val_loss: 27.8103
Epoch 40/50
60000/60000 [==============================] - 5s 83us/step - loss: 27.6079 - val_loss: 27.6965
Epoch 41/50
60000/60000 [==============================] - 5s 86us/step - loss: 27.5654 - val_loss: 27.7521
Epoch 42/50
60000/60000 [==============================] - 5s 85us/step - loss: 27.5823 - val_loss: 28.0018
Epoch 43/50
60000/60000 [==============================] - 5s 88us/step - loss: 27.5378 - val_loss: 27.7956
Epoch 44/50
60000/60000 [==============================] - 5s 83us/step - loss: 27.4946 - val_loss: 27.7793
Epoch 45/50
60000/60000 [==============================] - 5s 83us/step - loss: 27.4760 - val_loss: 27.6293
Epoch 46/50
60000/60000 [==============================] - 5s 84us/step - loss: 27.4670 - val_loss: 27.6876
Epoch 47/50
60000/60000 [==============================] - 5s 88us/step - loss: 27.4831 - val_loss: 27.6400
Epoch 48/50
60000/60000 [==============================] - 5s 87us/step - loss: 27.4264 - val_loss: 27.5798
Epoch 49/50
60000/60000 [==============================] - 5s 84us/step - loss: 27.4224 - val_loss: 27.6857
Epoch 50/50
60000/60000 [==============================] - 5s 80us/step - loss: 27.3895 - val_loss: 27.5600
###Markdown
Plot Results
###Code
def plot_results(models,
data,
batch_size=128,
model_name="vae_mnist"):
"""Plots labels and MNIST digits as function of 2-dim latent vector
# Arguments
models (tuple): encoder and decoder models
data (tuple): test data and label
batch_size (int): prediction batch size
model_name (string): which model is using this function
"""
encoder, decoder = models
x_test, y_test = data
os.makedirs(model_name, exist_ok=True)
filename = os.path.join(model_name, "vae_mean.png")
# display a 2D plot of the digit classes in the latent space
z_mean, _, _ = encoder.predict(x_test,
batch_size=batch_size)
plt.figure(figsize=(12, 10))
plt.scatter(z_mean[:, 0], z_mean[:, 1], c=y_test)
plt.colorbar()
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.savefig(filename)
plt.show()
filename = os.path.join(model_name, "digits_over_latent.png")
# display a 30x30 2D manifold of digits
n = 30
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# linearly spaced coordinates corresponding to the 2D plot
# of digit classes in the latent space
grid_x = np.linspace(-4, 4, n)
grid_y = np.linspace(-4, 4, n)[::-1]
for i, yi in enumerate(grid_y):
for j, xi in enumerate(grid_x):
z_sample = np.array([[xi, yi]])
x_decoded = decoder.predict(z_sample)
digit = x_decoded[0].reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
start_range = digit_size // 2
end_range = n * digit_size + start_range + 1
pixel_range = np.arange(start_range, end_range, digit_size)
sample_range_x = np.round(grid_x, 1)
sample_range_y = np.round(grid_y, 1)
plt.xticks(pixel_range, sample_range_x)
plt.yticks(pixel_range, sample_range_y)
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.imshow(figure, cmap='Greys_r')
plt.savefig(filename)
plt.show()
plot_results(models,
data,
batch_size=batch_size,
model_name="vae_mlp")
###Output
_____no_output_____ |
Redo_0_unit_3_mod_1.ipynb | ###Markdown
sample
###Code
!pip install -i https://test.pypi.org/simple/ lambdata
# looking at features in directory
dir()
# need to import in order to look at data that's inside
import lambdata
dir(lambdata)
###Output
_____no_output_____
###Markdown
my package
###Code
!pip install -i https://test.pypi.org/simple/ lambdata-jgrxnde9701==0.1.1
import lambdata_jgrxnde9701
dir(lambdata_jgrxnde9701)
lambdata_jgrxnde9701.ZEROS
###Output
_____no_output_____
###Markdown
updated version
###Code
!pip install -i https://test.pypi.org/simple/ lambdata_jgrxnde9701==0.1.3
###Output
_____no_output_____ |
examples/guided_grad_cam.ipynb | ###Markdown
Guided Grad-CAM Examples- Tested Tensorflow version : '2.4.0-dev20201023'- Code references: * https://keras.io/examples/vision/grad_cam/ * https://colab.research.google.com/drive/17tAC7xx2IJxjK700bdaLatTVeDA02GJn?usp=sharingscrollTo=jgTRCYgX4oz- Import
###Code
from crispy.core.guided_grad_cam import make_gradcam_heatmap, build_guided_model, guided_backprop, guided_grad_cam, deprocess_image
from tensorflow import keras
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet import ResNet50, preprocess_input, decode_predictions
import urllib
import numpy as np
import matplotlib.pyplot as plt
import cv2
###Output
_____no_output_____
###Markdown
Set tensorflow gpu (optional for gpu user)
###Code
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
###Output
1 Physical GPUs, 1 Logical GPUs
###Markdown
Main part
###Code
url = 'https://raw.githubusercontent.com/nguyenhoa93/GradCAM_and_GuidedGradCAM_tf2/master/assets/samples/cat1.jpg'
dest = "./cat1.jpg"
urllib.request.urlretrieve(url, dest)
H, W = 224, 224
def load_image(path, preprocess=True):
"""Load and preprocess image."""
x = image.load_img(path, target_size=(H, W))
if preprocess:
x = image.img_to_array(x)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return x
# process example input
preprocessed_input = load_image("./cat1.jpg")
plt.imshow(image.load_img("./cat1.jpg", target_size=(H,W)))
last_conv_layer_name = "conv5_block3_out"
classifier_layer_names = ["avg_pool", "predictions"]
res = keras.applications.resnet50.ResNet50(include_top = True, weights='imagenet')
gradcam_heatmap = make_gradcam_heatmap(preprocessed_input, res, last_conv_layer_name, classifier_layer_names)
plt.imshow(gradcam_heatmap)
plt.imshow(cv2.resize(gradcam_heatmap, (224,224)))
guided_model = build_guided_model(res)
gb = guided_backprop(guided_model, preprocessed_input, last_conv_layer_name)
plt.imshow(np.flip(deprocess_image(gb), -1))
ggc = deprocess_image(
guided_grad_cam(gb, gradcam_heatmap)
)
plt.imshow(np.flip(ggc, -1))
###Output
_____no_output_____ |
notebooks/Evaluations/Continuous_Timeseries/All_Depths_ORCA/DabobBay/201905_Hindcast/2014_DabobBay_Evaluations.ipynb | ###Markdown
This notebook contains Hovmoller plots that compare the model output over many different depths to the results from the ORCA Buoy data.
###Code
import sys
sys.path.append('/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools')
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
import netCDF4 as nc
import xarray as xr
import datetime as dt
from salishsea_tools import evaltools as et, viz_tools, places
import gsw
import matplotlib.gridspec as gridspec
import matplotlib as mpl
import matplotlib.dates as mdates
import cmocean as cmo
import scipy.interpolate as sinterp
import math
from scipy import io
import pickle
import cmocean
import json
import Keegan_eval_tools as ket
from collections import OrderedDict
from matplotlib.colors import LogNorm
fs=16
mpl.rc('xtick', labelsize=fs)
mpl.rc('ytick', labelsize=fs)
mpl.rc('legend', fontsize=fs)
mpl.rc('axes', titlesize=fs)
mpl.rc('axes', labelsize=fs)
mpl.rc('figure', titlesize=fs)
mpl.rc('font', size=fs)
mpl.rc('font', family='sans-serif', weight='normal', style='normal')
import warnings
#warnings.filterwarnings('ignore')
from IPython.display import Markdown, display
%matplotlib inline
ptrcloc='/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data'
modver='HC201905' #HC202007 is the other option.
gridloc='/ocean/kflanaga/MEOPAR/savedData/201905_grid_data'
ORCAloc='/ocean/kflanaga/MEOPAR/savedData/ORCAData'
year=2019
mooring='Twanoh'
# Parameters
year = 2014
modver = "HC201905"
mooring = "DabobBay"
ptrcloc = "/ocean/kflanaga/MEOPAR/savedData/201905_ptrc_data"
gridloc = "/ocean/kflanaga/MEOPAR/savedData/201905_grid_data"
ORCAloc = "/ocean/kflanaga/MEOPAR/savedData/ORCAData"
orca_dict=io.loadmat(f'{ORCAloc}/{mooring}.mat')
def ORCA_dd_to_dt(date_list):
UTC=[]
for yd in date_list:
if np.isnan(yd) == True:
UTC.append(float("NaN"))
else:
start = dt.datetime(1999,12,31)
delta = dt.timedelta(yd)
offset = start + delta
time=offset.replace(microsecond=0)
UTC.append(time)
return UTC
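# e.g. ORCA_dd_to_dt([1.0]) -> [datetime.datetime(2000, 1, 1, 0, 0)]
# (yeardays are counted from the 1999-12-31 reference date used above)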
obs_tt=[]
for i in range(len(orca_dict['Btime'][1])):
obs_tt.append(np.nanmean(orca_dict['Btime'][:,i]))
#I should also change this obs_tt thing I have here into datetimes
YD_rounded=[]
for yd in obs_tt:
if np.isnan(yd) == True:
YD_rounded.append(float("NaN"))
else:
YD_rounded.append(math.floor(yd))
obs_dep=[]
for i in orca_dict['Bdepth']:
obs_dep.append(np.nanmean(i))
grid=xr.open_mfdataset(gridloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(grid.time_counter)
mod_depth=np.array(grid.deptht)
mod_votemper=(grid.votemper.isel(y=0,x=0))
mod_vosaline=(grid.vosaline.isel(y=0,x=0))
mod_votemper = (np.array(mod_votemper))
mod_votemper = np.ma.masked_equal(mod_votemper,0).T
mod_vosaline = (np.array(mod_vosaline))
mod_vosaline = np.ma.masked_equal(mod_vosaline,0).T
def Process_ORCA(orca_var,depths,dates,year):
# Transpose the columns so that a yearday column can be added.
df_1=pd.DataFrame(orca_var).transpose()
df_YD=pd.DataFrame(dates,columns=['yearday'])
df_1=pd.concat((df_1,df_YD),axis=1)
#Group by yearday so that you can take the daily mean values.
dfg=df_1.groupby(by='yearday')
df_mean=dfg.mean()
df_mean=df_mean.reset_index()
# Convert the yeardays to datetime UTC
UTC=ORCA_dd_to_dt(df_mean['yearday'])
df_mean['yearday']=UTC
# Select the range of dates that you would like.
df_year=df_mean[(df_mean.yearday >= dt.datetime(year,1,1))&(df_mean.yearday <= dt.datetime(year,12,31))]
df_year=df_year.set_index('yearday')
#Add in any missing date values
idx=pd.date_range(df_year.index[0],df_year.index[-1])
df_full=df_year.reindex(idx,fill_value=-1)
#Transpose again so that you can add a depth column.
df_full=df_full.transpose()
df_full['depth']=obs_dep
# Remove any rows that have NA values for depth.
df_full=df_full.dropna(how='all',subset=['depth'])
df_full=df_full.set_index('depth')
#Mask any NA values and any negative values.
df_final=np.ma.masked_invalid(np.array(df_full))
df_final=np.ma.masked_less(df_final,0)
return df_final, df_full.index, df_full.columns
###Output
_____no_output_____
###Markdown
Map of Buoy Location.
###Code
lon,lat=places.PLACES[mooring]['lon lat']
fig, ax = plt.subplots(1,1,figsize = (6,6))
with nc.Dataset('/data/vdo/MEOPAR/NEMO-forcing/grid/bathymetry_201702.nc') as bathy:
viz_tools.plot_coastline(ax, bathy, coords = 'map',isobath=.1)
color=('firebrick')
ax.plot(lon, lat,'o',color = 'firebrick', label=mooring)
ax.set_ylim(47, 49)
ax.legend(bbox_to_anchor=[1,.6,0.45,0])
ax.set_xlim(-124, -122);
ax.set_title('Buoy Location');
###Output
_____no_output_____
###Markdown
Temperature
###Code
df,dep,tim= Process_ORCA(orca_dict['Btemp'],obs_dep,YD_rounded,year)
date_range=(dt.datetime(year,1,1),dt.datetime(year,12,31))
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
ax=ket.hovmoeller(mod_votemper, mod_depth, tt, (2,15),date_range, title='Modeled Temperature Series',
var_title='Temperature (C$^0$)',vmax=23,vmin=8,cmap=cmo.cm.thermal)
###Output
/ocean/kflanaga/MEOPAR/analysis-keegan/notebooks/Tools/Keegan_eval_tools.py:816: UserWarning: 'set_params()' not defined for locator of type <class 'matplotlib.dates.AutoDateLocator'>
plt.locator_params(axis="x", nbins=20)
###Markdown
Salinity
###Code
df,dep,tim= Process_ORCA(orca_dict['Bsal'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
ax=ket.hovmoeller(mod_vosaline, mod_depth, tt, (2,15),date_range,title='Modeled Absolute Salinity Series',
var_title='SA (g/kg)',vmax=31,vmin=14,cmap=cmo.cm.haline)
grid.close()
bio=xr.open_mfdataset(ptrcloc+f'/ts_{modver}_{year}_{mooring}.nc')
tt=np.array(bio.time_counter)
mod_depth=np.array(bio.deptht)
mod_flagellatets=(bio.flagellates.isel(y=0,x=0))
mod_ciliates=(bio.ciliates.isel(y=0,x=0))
mod_diatoms=(bio.diatoms.isel(y=0,x=0))
mod_Chl = np.array((mod_flagellatets+mod_ciliates+mod_diatoms)*1.8)
mod_Chl = np.ma.masked_equal(mod_Chl,0).T
df,dep,tim= Process_ORCA(orca_dict['Bfluor'],obs_dep,YD_rounded,year)
ax=ket.hovmoeller(df,dep,tim,(2,15),date_range,title='Observed Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
ax=ket.hovmoeller(mod_Chl, mod_depth, tt, (2,15),date_range,title='Modeled Chlorophyll Series',
var_title='Chlorophyll (mg Chl/m$^3$)',vmin=0,vmax=30,cmap=cmo.cm.algae)
bio.close()
###Output
_____no_output_____ |
Convolutional Neural Networks/Exercise_2_Cats_vs_Dogs_using_augmentation_Question-FINAL.ipynb | ###Markdown
NOTE:In the cell below you **MUST** use a batch size of 10 (`batch_size=10`) for the `train_generator` and the `validation_generator`. Using a batch size greater than 10 will exceed memory limits on the Coursera platform.
###Code
TRAINING_DIR = '/tmp/cats-v-dogs/training'
train_datagen = ImageDataGenerator( rescale = 1/255,
rotation_range = 40,
width_shift_range= 0.2,
height_shift_range= 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True,
fill_mode = 'nearest'
)
train_generator = train_datagen.flow_from_directory(
TRAINING_DIR , batch_size = 10 , class_mode = 'binary' , target_size = (150,150))
VALIDATION_DIR = '/tmp/cats-v-dogs/testing'
validation_datagen = ImageDataGenerator( rescale = 1/255)
validation_generator = validation_datagen.flow_from_directory(
VALIDATION_DIR , batch_size= 10 , class_mode = 'binary' , target_size=(150,150))
history = model.fit_generator(train_generator,
epochs=2,
verbose=1,
validation_data=validation_generator)
# PLOT LOSS AND ACCURACY
%matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['acc']
val_acc=history.history['val_acc']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', "Training Accuracy")
plt.plot(epochs, val_acc, 'b', "Validation Accuracy")
plt.title('Training and validation accuracy')
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, 'r', "Training Loss")
plt.plot(epochs, val_loss, 'b', "Validation Loss")
plt.title('Training and validation loss')
# Desired output. Charts with training and validation metrics. No crash :)
###Output
_____no_output_____
###Markdown
Submission Instructions
###Code
# Now click the 'Submit Assignment' button above.
###Output
_____no_output_____
###Markdown
When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners.
###Code
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null
setTimeout(function() { window.close(); }, 1000);
###Output
_____no_output_____ |
python-statatics-tutorial/advance-theme/Request.ipynb | ###Markdown
Getting Started with the Python Requests Library 1 Comparing urllib2 and Requests A *GET* request to `https://api.github.com/`
###Code
import urllib2
import requests
import json
gh_url = 'https://api.github.com'
gh_user = 'gaufung'
gh_pw = 'gaofenggit123'
req = urllib2.Request(gh_url)
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, gh_url, gh_user, gh_pw)
auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_manager)
urllib2.install_opener(opener)
handler = urllib2.urlopen(req)
if handler.getcode() == requests.codes.ok:
text = handler.read()
d_text = json.loads(text)
for k, v in d_text.items():
print k, v
import requests
import json
gh_url = 'https://api.github.com'
gh_user = 'gaufung'
gh_pw = 'gaofenggit123'
r = requests.get(gh_url,auth=(gh_user,gh_pw))
if r.status_code == requests.codes.ok:
for k, v in r.json().items():
print k,v
###Output
_____no_output_____
###Markdown
2 Basic Usage
###Code
import requests
cs_url = 'http://httpbin.org'
r = requests.get("%s/%s" % (cs_url, 'get'))
r = requests.post("%s/%s" % (cs_url, 'post'))
r = requests.put("%s/%s" % (cs_url, 'put'))
r = requests.delete("%s/%s" % (cs_url, 'delete'))
r = requests.patch("%s/%s" % (cs_url, 'patch'))
r = requests.options("%s/%s" % (cs_url, 'get'))
###Output
_____no_output_____
###Markdown
3 Passing URL Parameters > https://encrypted.google.com/search?q=hello > scheme://host/path?key1=value1&key2=value2 The HTTP methods provided by the requests library all accept a parameter named params. This parameter takes a Python dictionary and automatically formats it into the form shown above.
###Code
import requests
cs_url = 'https://www.so.com/s'
param = {'ie':'utf-8','q':'query'}
r = requests.get(cs_url,params = param)
print r.url
###Output
https://www.so.com/s?q=query&ie=utf-8
###Markdown
4 Setting a Timeout The timeout in requests is specified in seconds. For example, adding the argument timeout = 5 to a request sets a 5-second timeout.
###Code
import requests
cs_url = 'https://www.zhihu.com'
r = requests.get(cs_url,timeout=100)
###Output
_____no_output_____
###Markdown
5 Request Headers
###Code
import requests
cs_url = 'http://httpbin.org/get'
r = requests.get (cs_url)
print r.content
###Output
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate",
"Host": "httpbin.org",
"User-Agent": "python-requests/2.11.1"
},
"origin": "117.136.68.150",
"url": "http://httpbin.org/get"
}
###Markdown
We usually pay the most attention to User-Agent and Accept-Encoding. If we want to modify these two items in the HTTP headers, we just need to pass a suitable dictionary to the headers argument.
###Code
import requests
my_headers = {'User-Agent' : 'From Liam Huang', 'Accept-Encoding' : 'gzip'}
cs_url = 'http://httpbin.org/get'
r = requests.get (cs_url, headers = my_headers)
print r.content
###Output
{
"args": {},
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip",
"Host": "httpbin.org",
"User-Agent": "From Liam Huang"
},
"origin": "117.136.68.150",
"url": "http://httpbin.org/get"
}
###Markdown
6 Response Headers
###Code
import requests
cs_url = 'http://httpbin.org/get'
r = requests.get (cs_url)
print r.headers
###Output
{'Content-Length': '239', 'Server': 'nginx', 'Connection': 'keep-alive', 'Access-Control-Allow-Credentials': 'true', 'Date': 'Fri, 06 Jan 2017 07:29:47 GMT', 'Access-Control-Allow-Origin': '*', 'Content-Type': 'application/json'}
###Markdown
7 Response Content Bandwidth on the internet has long been limited, so data transmitted over the network is often compressed. For requests sent via requests, when the received response content is compressed with gzip or deflate, requests automatically unpacks it for us. We can use Response.content to get the response content returned as bytes.
###Code
import requests
cs_url = 'https://www.zhihu.com'
r = requests.get (cs_url)
if r.status_code == requests.codes.ok:
print r.content
###Output
_____no_output_____
###Markdown
If the response content is not text but binary data (an image, for example), it needs to be decoded accordingly.
###Code
import requests
from PIL import Image
from StringIO import StringIO
cs_url = 'http://liam0205.me/uploads/avatar/avatar-2.jpg'
r = requests.get (cs_url)
if r.status_code == requests.codes.ok:
Image.open(StringIO(r.content)).show()
###Output
_____no_output_____
###Markdown
Decoding in text mode
###Code
import requests
cs_url = 'https://www.zhihu.com'
r = requests.get (cs_url,auth=('[email protected]','gaofengcumt'))
if r.status_code == requests.codes.ok:
print r.text
else:
print 'bad request'
###Output
bad request
###Markdown
8 Deserializing JSON Data
###Code
import requests
cs_url = 'http://ip.taobao.com/service/getIpInfo.php'
my_param = {'ip':'8.8.8.8'}
r = requests.get(cs_url, params = my_param)
print r.json()['data']['country'].encode('utf-8')
###Output
美国
|
Example/Keras_Mnist_MLP_h1000_DropOut.ipynb | ###Markdown
Data Preprocessing
###Code
from keras.utils import np_utils
import numpy as np
np.random.seed(10)
from keras.datasets import mnist
(x_train_image,y_train_label),\
(x_test_image,y_test_label)= mnist.load_data()
x_Train =x_train_image.reshape(60000, 784).astype('float32')
x_Test = x_test_image.reshape(10000, 784).astype('float32')
x_Train_normalize = x_Train / 255
x_Test_normalize = x_Test / 255
y_Train_OneHot = np_utils.to_categorical(y_train_label)
y_Test_OneHot = np_utils.to_categorical(y_test_label)
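# e.g. np_utils.to_categorical([0, 2]) -> [[1., 0., 0.], [0., 0., 1.]]
# (each label becomes a one-hot row with one column per class)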
###Output
_____no_output_____
###Markdown
Building the Model
###Code
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
model = Sequential()
#Add the input layer and the hidden layer to the model
model.add(Dense(units=1000,
input_dim=784,
kernel_initializer='normal',
activation='relu'))
model.add(Dropout(0.5))
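# Dropout(0.5) randomly zeroes half of the hidden-layer activations at each
# training update to reduce overfitting; at prediction time all units are used.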
#Add the output layer to the model
model.add(Dense(units=10,
kernel_initializer='normal',
activation='softmax'))
print(model.summary())
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 1000) 785000
_________________________________________________________________
dropout_1 (Dropout) (None, 1000) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 10010
=================================================================
Total params: 795,010
Trainable params: 795,010
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Training the Model
###Code
model.compile(loss='categorical_crossentropy',
optimizer='adam', metrics=['accuracy'])
train_history=model.fit(x=x_Train_normalize,
                        y=y_Train_OneHot,validation_split=0.2,
epochs=10, batch_size=200,verbose=2)
###Output
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
6s - loss: 0.3566 - acc: 0.8942 - val_loss: 0.1621 - val_acc: 0.9543
Epoch 2/10
5s - loss: 0.1604 - acc: 0.9532 - val_loss: 0.1168 - val_acc: 0.9662
Epoch 3/10
5s - loss: 0.1163 - acc: 0.9653 - val_loss: 0.0989 - val_acc: 0.9707
Epoch 4/10
6s - loss: 0.0929 - acc: 0.9722 - val_loss: 0.0909 - val_acc: 0.9722
Epoch 5/10
5s - loss: 0.0751 - acc: 0.9776 - val_loss: 0.0826 - val_acc: 0.9761
Epoch 6/10
5s - loss: 0.0624 - acc: 0.9801 - val_loss: 0.0770 - val_acc: 0.9773
Epoch 7/10
5s - loss: 0.0547 - acc: 0.9841 - val_loss: 0.0789 - val_acc: 0.9767
Epoch 8/10
5s - loss: 0.0491 - acc: 0.9851 - val_loss: 0.0742 - val_acc: 0.9782
Epoch 9/10
5s - loss: 0.0427 - acc: 0.9861 - val_loss: 0.0690 - val_acc: 0.9793
Epoch 10/10
6s - loss: 0.0378 - acc: 0.9885 - val_loss: 0.0664 - val_acc: 0.9802
###Markdown
Plotting the Training History
###Code
import matplotlib.pyplot as plt
def show_train_history(train_history,train,validation):
plt.plot(train_history.history[train])
plt.plot(train_history.history[validation])
plt.title('Train History')
plt.ylabel(train)
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
show_train_history(train_history,'acc','val_acc')
show_train_history(train_history,'loss','val_loss')
###Output
_____no_output_____
###Markdown
Evaluating Model Accuracy
###Code
scores = model.evaluate(x_Test_normalize, y_Test_OneHot)
print()
print('accuracy=',scores[1])
###Output
9920/10000 [============================>.] - ETA: 0s
accuracy= 0.9808
###Markdown
Making Predictions
###Code
prediction=model.predict_classes(x_Test)
prediction
import matplotlib.pyplot as plt
def plot_images_labels_prediction(images,labels,
prediction,idx,num=10):
fig = plt.gcf()
fig.set_size_inches(12, 14)
if num>25: num=25
for i in range(0, num):
ax=plt.subplot(5,5, 1+i)
ax.imshow(images[idx], cmap='binary')
title= "label=" +str(labels[idx])
if len(prediction)>0:
title+=",predict="+str(prediction[idx])
ax.set_title(title,fontsize=10)
ax.set_xticks([]);ax.set_yticks([])
idx+=1
plt.show()
plot_images_labels_prediction(x_test_image,y_test_label,
prediction,idx=340)
###Output
_____no_output_____
###Markdown
confusion matrix
###Code
import pandas as pd
pd.crosstab(y_test_label,prediction,
rownames=['label'],colnames=['predict'])
df = pd.DataFrame({'label':y_test_label, 'predict':prediction})
df[:2]
df[(df.label==5)&(df.predict==3)]
plot_images_labels_prediction(x_test_image,y_test_label
,prediction,idx=340,num=1)
plot_images_labels_prediction(x_test_image,y_test_label
,prediction,idx=1289,num=1)
###Output
_____no_output_____ |
backups/Notebook 4 Submitter 0_62.ipynb | ###Markdown
Today we take on the folks at ENS https://challengedata.ens.fr/en/challenge/39/prediction_of_transaction_claims_status.html Imports of the basic libraries We will add any missing ones as we need them
###Code
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import pandas as pd
import os, gc
###Output
_____no_output_____
###Markdown
Setting the random seed Very important so that we both see the same things on our two machines
###Code
RANDOM_SEED = 42;
np.random.seed(RANDOM_SEED)
###Output
_____no_output_____
###Markdown
Setting the Matplotlib parameters Nothing particularly interesting
###Code
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
###Output
_____no_output_____
###Markdown
Setting the global variables Note: I only use global variables for file handling. Otherwise, things fall apart
###Code
# Where to save the figures
PROJECT_ROOT_DIR = "..\.."
DATA_PROCESSED = os.path.join(PROJECT_ROOT_DIR, "data_processed")
###Output
_____no_output_____
###Markdown
Function to load the data In truth, we just need pd.read_csv, but it keeps things tidy
###Code
def load_data(file,data_path=DATA_PROCESSED, sep=';'):
csv_path = os.path.join(data_path, file)
return pd.read_csv(csv_path, sep=';')
###Output
_____no_output_____
###Markdown
Loading the datasets
###Code
TX_data = load_data(file = "train.csv");
TEST_DATA = load_data(file = "test.csv");
RESULTS = pd.DataFrame({'ID' : []})
RESULTS["ID"]=TEST_DATA["ID"]
TEST_DATA.drop("ID", axis=1, inplace=True)
TX_data.drop(['CARD_PAYMENT','COUPON_PAYMENT','RSP_PAYMENT','WALLET_PAYMENT'], axis = 1, inplace = True)
TEST_DATA.drop(['CARD_PAYMENT','COUPON_PAYMENT','RSP_PAYMENT','WALLET_PAYMENT'], axis = 1, inplace = True)
###Output
_____no_output_____
###Markdown
Splitting the data into X and Y
###Code
def datapreprocess(data):
data=data.apply(pd.to_numeric, errors='ignore')
# Y and X
try :
Y=data["CLAIM_TYPE"]
X=data.drop("CLAIM_TYPE", axis=1,inplace=False)
except:
Y=0
X=data
# Exclude Objets
X=X.select_dtypes(exclude=['object'])
# Work on fare
from sklearn.preprocessing import Imputer
imp = Imputer(missing_values='NaN',strategy='median', axis=1)
X=pd.DataFrame(imp.fit_transform(X),columns=X.columns.values)
return X, Y
X_train, Y_train = datapreprocess(TX_data)
TEST_DATA, _ = datapreprocess(TEST_DATA)
#del TX_data;
gc.collect()
###Output
_____no_output_____
###Markdown
MODEL! Gradient Boosting Classifier
###Code
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils.class_weight import compute_sample_weight
sample_weight_arr = compute_sample_weight(class_weight='balanced', y=Y_train)
sample_weight_dict = {'sample_weight':compute_sample_weight(class_weight='balanced', y=Y_train)}
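# 'balanced' weighting assigns each sample n_samples / (n_classes * count(its class)),
# so under-represented claim types contribute more to the boosting loss.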
params_GB={
'criterion':'friedman_mse',
'init':None,
'learning_rate':0.25,
'loss':'deviance',
'max_depth':3,
'max_features':'auto',
'max_leaf_nodes':4,
'min_impurity_decrease':0.0,
'min_impurity_split':None,
'min_samples_leaf':2,
'min_samples_split':3,
'min_weight_fraction_leaf':0.0,
'n_estimators':1000,
'presort':'auto',
'random_state':RANDOM_SEED,
'subsample':0.8,
'verbose':0,
'warm_start':False
}
gb_clf=GradientBoostingClassifier(**params_GB)
gb_clf.fit(
X=X_train,
y=Y_train,
sample_weight=sample_weight_arr
)
y_pred_gb = gb_clf.predict(TEST_DATA)
RESULTS["CLAIM_TYPE"] = pd.DataFrame(y_pred_gb)
RESULTS.head()
filename = DATA_PROCESSED+"/submission_GB_1.csv"
RESULTS.to_csv(filename, index=False, sep=";")
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/01-SimpleLinearRegression-checkpoint.ipynb | ###Markdown
Simple Linear Regression A simple linear regression on a video game dataset. We will see whether there is a relationship between a game's rating and its global sales.* **Dataset Name:** Video Game Sales with Ratings* **URL:** https://www.kaggle.com/rush4ratio/video-game-sales-with-ratings/data
###Code
import pandas as pd #lib dataset management
import numpy as np #number lib
data = pd.read_csv("../datasets/VideoGameSalesRatings2016.csv")
data.head()
###Output
_____no_output_____
###Markdown
Data Cleaning We need to clean up the NaN values, and we will see whether we can reduce the size of the dataset a bit
###Code
data.shape #size: 16 columns, 16719 rows
data.describe()
###Output
_____no_output_____
###Markdown
We need to deal with the NaN values in _"critic_score"_ and _"user_score"_.* We could take an average* fill with the previous or next value* delete them Since this data is of major importance, we decide to **delete the games with NaN critic scores** Checking for empty fields I will check whether there are NaN or invalid fields
###Code
pd.isnull(data["Critic_Score"]).values #retorna array
pd.isnull(data["Critic_Score"]).values.ravel().sum()#total de campos con valores NaN
pd.isnull(data["User_Score"]).values #retorna array
pd.isnull(data["User_Score"]).values.ravel().sum()#total de campos con valores NaN
###Output
_____no_output_____
###Markdown
* I have 8582 NaN fields in the _Critic_Score_ column * I have 6704 NaN fields in the _User_Score_ column I have to decide what to do with them **in this case I decide to delete them, because this is an important value for performing the regression**
###Code
data_new = data.dropna(subset=['Critic_Score', 'User_Score'])#assign a new dataset with the NaN rows removed
pd.isnull(data_new["Critic_Score"]).values #returns an array
pd.isnull(data_new["Critic_Score"]).values.ravel().sum()
pd.isnull(data_new["User_Score"]).values #returns an array
pd.isnull(data_new["User_Score"]).values.ravel().sum()
data_new
###Output
_____no_output_____
###Markdown
Ahora me quede con 8099 filas (se eliminaron mas de 8000 filas con valores NaN en Critic y User Score) Comprobando tipos de datosVeremos los tipos de datos y solucionaremos problemas si es que existen
###Code
data_new.dtypes
###Output
_____no_output_____
###Markdown
Tengo problemas con _User_Score_ pues su tipo de dato es _"object"_ necesito convertirlo a float para poder operar sus campos vacios
###Code
data_new["User_Score"] = pd.to_numeric(data_new["User_Score"], errors="coerce")
data_new.dtypes#the "User_Score" column is now converted to float64
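# (Hedged sketch, not in the original notebook) With the cleaned data, the simple
# linear regression described in the introduction could be fit like this; the
# 'Global_Sales' column name is an assumption taken from the Kaggle dataset page.
from sklearn.linear_model import LinearRegression

X = data_new[['Critic_Score']].values   # predictor: critic rating
y = data_new['Global_Sales'].values     # target: global sales
reg = LinearRegression().fit(X, y)
print(reg.coef_, reg.intercept_, reg.score(X, y))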
###Output
_____no_output_____ |
osmnx/speed/add_edge_speeds.ipynb | ###Markdown
osmnx.speed module Calculate graph edge speeds and travel times.
###Code
# OSMnx: New Methods for Acquiring, Constructing, Analyzing, and Visualizing Complex Street Networks
import osmnx as ox
ox.config(use_cache=True, log_console=False)
ox.__version__
query = '중구, 서울특별시, 대한민국'
network_type = 'drive' # "all_private", "all", "bike", "drive", "drive_service", "walk"
# Create graph from OSM within the boundaries of some geocodable place(s).
G = ox.graph_from_place(query, network_type=network_type)
# Plot a graph.
fig, ax = ox.plot_graph(G)
# Add edge speeds (km per hour) to graph as new speed_kph edge attributes.
G = ox.speed.add_edge_speeds(
G,
hwy_speeds=None,
fallback=None,
precision=1
)
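# Clarifying note: with hwy_speeds=None and fallback=None, speed_kph comes from
# existing 'maxspeed' tags where available and is otherwise imputed from the mean
# observed speed per highway type; ox.speed.add_edge_travel_times(G) could then
# add 'travel_time' edge attributes (see the osmnx documentation).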
# Convert a MultiDiGraph to node and/or edge GeoDataFrames.
# Note: with nodes=False only the edge GeoDataFrame is returned; calling .head()
# on the (nodes, edges) tuple returned by default would raise
# AttributeError: 'tuple' object has no attribute 'head'
gdf = ox.graph_to_gdfs(G, nodes=False)
gdf.head()
###Output
_____no_output_____ |
exp/tvm_jupyter/language/schedule_primitives.ipynb | ###Markdown
Schedule Primitives in TVM==========================**Author**: `Ziheng Jiang `_ TVM is a domain specific language for efficient kernel construction. In this tutorial, we will show you how to schedule the computation by various primitives provided by TVM.
###Code
from __future__ import absolute_import, print_function
import tvm
import numpy as np
###Output
_____no_output_____
###Markdown
There often exist several methods to compute the same result; however, different methods will result in different locality and performance. So TVM asks the user to provide how to execute the computation, called a **Schedule**. A **Schedule** is a set of transformations of the computation that transform the loops of the computation in the program.
###Code
# declare some variables for use later
n = tvm.var('n')
m = tvm.var('m')
###Output
_____no_output_____
###Markdown
A schedule can be created from a list of ops; by default the schedule computes tensors in a serial manner in row-major order.
###Code
# declare a matrix element-wise multiply
A = tvm.placeholder((m, n), name='A')
B = tvm.placeholder((m, n), name='B')
C = tvm.compute((m, n), lambda i, j: A[i, j] * B[i, j], name='C')
s = tvm.create_schedule([C.op])
# lower will transform the computation from definition to the real
# callable function. With argument `simple_mode=True`, it will
# return you a readable C like statement, we use it here to print the
# schedule result.
print(tvm.lower(s, [A, B, C], simple_mode=True))
###Output
_____no_output_____
###Markdown
One schedule is composed of multiple stages, and one **Stage** represents the schedule for one operation. We provide various methods to schedule every stage. split-----:code:`split` can split a specified axis into two axes by :code:`factor`.
###Code
A = tvm.placeholder((m,), name='A')
B = tvm.compute((m,), lambda i: A[i]*2, name='B')
s = tvm.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)
print(tvm.lower(s, [A, B], simple_mode=True))
###Output
_____no_output_____
###Markdown
You can also split an axis by :code:`nparts`, which splits the axis contrary to :code:`factor`.
###Code
A = tvm.placeholder((m,), name='A')
B = tvm.compute((m,), lambda i: A[i], name='B')
s = tvm.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], nparts=32)
print(tvm.lower(s, [A, B], simple_mode=True))
###Output
_____no_output_____
###Markdown
tile----:code:`tile` helps you execute the computation tile by tile over two axes.
###Code
A = tvm.placeholder((m, n), name='A')
B = tvm.compute((m, n), lambda i, j: A[i, j], name='B')
s = tvm.create_schedule(B.op)
xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
print(tvm.lower(s, [A, B], simple_mode=True))
###Output
_____no_output_____
###Markdown
fuse----:code:`fuse` can fuse two consecutive axes of one computation.
###Code
A = tvm.placeholder((m, n), name='A')
B = tvm.compute((m, n), lambda i, j: A[i, j], name='B')
s = tvm.create_schedule(B.op)
# tile to four axises first: (i.outer, j.outer, i.inner, j.inner)
xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
# then fuse (i.inner, j.inner) into one axis: (i.inner.j.inner.fused)
fused = s[B].fuse(xi, yi)
print(tvm.lower(s, [A, B], simple_mode=True))
###Output
_____no_output_____
###Markdown
reorder-------:code:`reorder` can reorder the axes in the specified order.
###Code
A = tvm.placeholder((m, n), name='A')
B = tvm.compute((m, n), lambda i, j: A[i, j], name='B')
s = tvm.create_schedule(B.op)
# tile to four axises first: (i.outer, j.outer, i.inner, j.inner)
xo, yo, xi, yi = s[B].tile(B.op.axis[0], B.op.axis[1], x_factor=10, y_factor=5)
# then reorder the axises: (i.inner, j.outer, i.outer, j.inner)
s[B].reorder(xi, yo, xo, yi)
print(tvm.lower(s, [A, B], simple_mode=True))
###Output
_____no_output_____
###Markdown
bind----:code:`bind` can bind a specified axis with a thread axis, often usedin gpu programming.
###Code
A = tvm.placeholder((n,), name='A')
B = tvm.compute(A.shape, lambda i: A[i] * 2, name='B')
s = tvm.create_schedule(B.op)
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, tvm.thread_axis("blockIdx.x"))
s[B].bind(tx, tvm.thread_axis("threadIdx.x"))
print(tvm.lower(s, [A, B], simple_mode=True))
###Output
_____no_output_____
###Markdown
compute_at----------For a schedule that consists of multiple operators, TVM will compute tensors at the root separately by default.
###Code
A = tvm.placeholder((m,), name='A')
B = tvm.compute((m,), lambda i: A[i]+1, name='B')
C = tvm.compute((m,), lambda i: B[i]*2, name='C')
s = tvm.create_schedule(C.op)
print(tvm.lower(s, [A, B, C], simple_mode=True))
###Output
_____no_output_____
###Markdown
:code:`compute_at` can move computation of `B` into the first axisof computation of `C`.
###Code
A = tvm.placeholder((m,), name='A')
B = tvm.compute((m,), lambda i: A[i]+1, name='B')
C = tvm.compute((m,), lambda i: B[i]*2, name='C')
s = tvm.create_schedule(C.op)
s[B].compute_at(s[C], C.op.axis[0])
print(tvm.lower(s, [A, B, C], simple_mode=True))
###Output
_____no_output_____
###Markdown
compute_inline--------------:code:`compute_inline` can mark one stage as inline, then the body ofcomputation will be expanded and inserted at the address where thetensor is required.
###Code
A = tvm.placeholder((m,), name='A')
B = tvm.compute((m,), lambda i: A[i]+1, name='B')
C = tvm.compute((m,), lambda i: B[i]*2, name='C')
s = tvm.create_schedule(C.op)
s[B].compute_inline()
print(tvm.lower(s, [A, B, C], simple_mode=True))
###Output
_____no_output_____
###Markdown
compute_root------------:code:`compute_root` can move computation of one stage to the root.
###Code
A = tvm.placeholder((m,), name='A')
B = tvm.compute((m,), lambda i: A[i]+1, name='B')
C = tvm.compute((m,), lambda i: B[i]*2, name='C')
s = tvm.create_schedule(C.op)
s[B].compute_at(s[C], C.op.axis[0])
s[B].compute_root()
print(tvm.lower(s, [A, B, C], simple_mode=True))
###Output
_____no_output_____ |
23_Python_Finance.ipynb | ###Markdown
BT backtesting library. Documentation: https://pmorissette.github.io/bt/ Installation and initial configuration
###Code
!pip install bt
!pip install yfinance
import bt
import yfinance as yf
import pandas as pd
import matplotlib
matplotlib.style.use('seaborn-darkgrid')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Functions A helper function to query the Central Bank of Brazil (SGS) database
###Code
def consulta_bc(codigo_bcb):
url = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.{}/dados?formato=json'.format(codigo_bcb)
df = pd.read_json(url)
df['data'] = pd.to_datetime(df['data'], dayfirst=True)
df.set_index('data', inplace=True)
return df
###Output
_____no_output_____
###Markdown
This function fetches the historical CDI series and, given the start and end date parameters, computes a dataframe with the return accumulated over the period.
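In formula form, the accumulated series is the compounded product of the daily CDI rates $r_i$ (quoted in percent), $$\mathrm{acc}_t = \prod_{i=1}^{t}\Big(1 + \frac{r_i}{100}\Big),$$ with the first value reset to 1 so the series starts from a unit base; this is exactly what the `cumprod` call in the function below implements.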
###Code
def cdi_acumulado(data_inicio, data_fim):
cdi = consulta_bc(12)
cdi_acumulado = (1 + cdi[data_inicio : data_fim] / 100).cumprod()
cdi_acumulado.iloc[0] = 1
return cdi_acumulado
###Output
_____no_output_____
###Markdown
Fetching and preparing the data
###Code
data_inicio = '2015-01-02'
data_fim = '2019-12-31'
cdi = cdi_acumulado(data_inicio=data_inicio, data_fim=data_fim)
tickers_carteira = ['BOVA11.SA', 'SMAL11.SA']
carteira = yf.download(tickers_carteira, start=data_inicio, end=data_fim)['Adj Close']
carteira
###Output
_____no_output_____
###Markdown
Add a column with the CDI value to the dataframe
###Code
carteira['renda_fixa'] = cdi
carteira.dropna(inplace=True)
carteira
###Output
_____no_output_____
###Markdown
Backtesting
###Code
rebalanceamento = bt.Strategy('rebalanceamento',
[bt.algos.RunMonthly(run_on_end_of_period=True),
bt.algos.SelectAll(),
bt.algos.WeighEqually(),
bt.algos.Rebalance()])
buy_hold = bt.Strategy('Buy&Hold',
[ bt.algos.RunOnce(),
bt.algos.SelectAll(),
bt.algos.WeighEqually(),
bt.algos.Rebalance()]
)
###Output
_____no_output_____
###Markdown
First strategy: buy and sell assets every month to rebalance the portfolio. Second strategy: make a single contribution and hold the assets in the portfolio.
###Code
bt1 = bt.Backtest(rebalanceamento, carteira)
bt2 = bt.Backtest(buy_hold, carteira[['BOVA11.SA', 'SMAL11.SA']])
resultados = bt.run(bt1, bt2)
###Output
_____no_output_____
###Markdown
Results
###Code
resultados.display()
resultados.plot();
###Output
_____no_output_____
###Markdown
Transactions
###Code
resultados.get_transactions()
###Output
_____no_output_____
###Markdown
Weights
###Code
resultados.get_security_weights()
resultados.plot_security_weights()
###Output
_____no_output_____ |
Stability_maps/stability_maps.ipynb | ###Markdown
Stability maps in ReboundIn what follows, I try to make a stability map for the HD 202206 system using `rebound`. I earlier ran an MCMC analysis of the system; the initial parameters used here are taken from that analysis. This work largely follows the tutorial in [this](https://rebound.readthedocs.io/en/latest/ipython_examples/Megno/) documentation of `rebound`. The chaos indicator used here is MEGNO (Mean Exponential Growth of Nearby Orbits), computed with the symplectic integrator WHFast (Rein and Tamayo 2015). For regular, quasi-periodic orbits MEGNO converges to $\langle Y \rangle \approx 2$, while chaotic orbits give larger, growing values; this is what the colour scale of the final map encodes (escaping particles are assigned a large value of 10).
###Code
import numpy as np
import matplotlib.pyplot as plt
import math
import rebound
from rebound.interruptible_pool import InterruptiblePool
# Planetary and stellar parameters
NBpl = int(2)
mstar = 0.9985
mEarth = 3.986004e14 / 1.3271244e20 # mEarth expressed in mSun
a_b = 0.7930
p_b = 256.2056
lambda_b = 2.65316226 # REBOUND uses angles in radians, in [-pi;pi]
e_b = 0.42916
w_b = 2.80615782 # REBOUND uses angles in radians, in [-pi;pi]
i_b = 0.0
O_b = 0.0
m_b = 0.015182 # In mSun
a_c = 2.3962
p_c = 1355.0
lambda_c = -3.05781685
e_c = 0.1596
w_c = 1.43466065
i_c = 0.0
O_c = 0.0
m_c = 0.002220 # In mSun
def simulation(par):
global mstar, m_b, m_c, a_b, a_c, lambda_b, lambda_c, w_b, w_c, e_b, e_c
a, e = par # unpack parameters
sim = rebound.Simulation()
sim.integrator = "whfast"
sim.ri_whfast.safe_mode = 0
sim.dt = 5./100.
sim.add(m=mstar) # Star
sim.add(m=m_b, a=a_b, l=lambda_b, omega=w_b, e=e_b, Omega=O_b, inc=i_b)
sim.add(m=m_c, a=a, l=lambda_c, omega=w_c, e=e, Omega=O_c, inc=i_c)
sim.move_to_com()
sim.init_megno()
sim.exit_max_distance = 20.
try:
        sim.integrate(1e4*2.*np.pi, exact_finish_time=0) # integrate for 1e4 yr (2*pi = one year in these units), integrating to the nearest
#timestep for each output to keep the timestep constant and preserve WHFast's symplectic nature
megno = sim.calculate_megno()
return megno
except rebound.Escape:
return 10. # At least one particle got ejected, returning large MEGNO.
def pbpc(ac):
"""
A function to find period ratio
using Kepler's third law
"""
global a_b
pc = ((ac/a_b)**1.5)
return pc
Ngrid = 80
par_a = np.linspace(2.16,2.48,Ngrid)
par_e = np.linspace(0.,0.5,Ngrid)
parameters = []
for e in par_e:
for a in par_a:
parameters.append((a,e))
pool = InterruptiblePool()
results = pool.map(simulation,parameters)
results2d = np.array(results).reshape(Ngrid,Ngrid)
fig = plt.figure(figsize=(12.6,9))
ax = plt.subplot(111)
extent = [pbpc(min(par_a)), pbpc(max(par_a)), min(par_e), max(par_e)]
ax.set_xlim(extent[0],extent[1])
ax.set_xlabel("Period ratio $P_c/P_b$")
ax.set_ylim(extent[2],extent[3])
ax.set_ylabel("Eccentricity $e$")
im = ax.imshow(results2d, interpolation="none", vmin=1.9, vmax=4, cmap="RdYlGn_r", origin="lower", aspect='auto', extent=extent)
# MCMC lines overplotted
plt.axvline(4.872318737572123, color='black')
plt.axvline(4.879469563949005, color='black')
plt.axhline(0.33706, color='black')
plt.axhline(0.35544, color='black')
#
cb = plt.colorbar(im, ax=ax)
cb.set_label("MEGNO $\\langle Y \\rangle$")
###Output
_____no_output_____ |
Time Series Analysis - SARIMA model .ipynb | ###Markdown
Process my data
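The cells below use several names (`pd`, `np`, `plt`, `sns`, `smt`, `rcParams`, `adfuller`, `plot_acf`, `plot_pacf`, `ARIMA`, `SARIMAX`, `STL`, `time`, `df_example`) whose setup cell is not shown here. A minimal sketch of what they appear to assume follows; the data-loading line is only a guess based on the `sarima_sample.xlsx` read that appears further down:
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.tsa.api as smt
from pylab import rcParams
from time import time
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima_model import ARIMA            # old API (statsmodels < 0.13), matches fit(disp=0) below
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.seasonal import STL

# Guess: the working series is the same net-sales data read again later in the notebook,
# squeezed to a Series so the statsmodels calls below receive 1-d data.
df_example = pd.read_excel('sarima_sample.xlsx', 'sample', usecols=['Date', 'netsales'],
                           parse_dates=[0], index_col=0).squeeze()
```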
###Code
df_example = df_example.asfreq(pd.infer_freq(df_example.index))
df_example = df_example.sort_index(ascending = True)
df_example.head()
def tsplot(y, lags=None, title='', figsize=(14, 8)):
'''Examine the patterns of ACF and PACF, along with the time series plot and histogram.
Original source: https://tomaugspurger.github.io/modern-7-timeseries.html
'''
fig = plt.figure(figsize=figsize)
layout = (2, 2)
ts_ax = plt.subplot2grid(layout, (0, 0))
hist_ax = plt.subplot2grid(layout, (0, 1))
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
y.plot(ax=ts_ax)
ts_ax.set_title(title)
y.plot(ax=hist_ax, kind='hist', bins=35)
hist_ax.set_title('Histogram')
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)
[ax.set_xlim(0) for ax in [acf_ax, pacf_ax]]
sns.despine()
plt.tight_layout()
return ts_ax, acf_ax, pacf_ax
tsplot(df_example, lags = 35, title = 'Sales Trend')
first_diff = df_example.diff()[1:]
display(df_example.head())
display(first_diff.head())
print(2234790.84 -2196570.82)
plt.plot(first_diff)
plt.title('First Difference of Sales', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(1971,1974):
plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)
plt.axhline(first_diff.mean(), color='r', alpha=0.5, linestyle='--')
first_diff = first_diff.fillna(value = 0)
###Output
_____no_output_____
###Markdown
Dickey Full Test
###Code
def test_stationarity(df):
print ('Results of Dickey-Fuller Test:')
dftest = adfuller(df.values,
autolag='AIC')
dfoutput = pd.Series(dftest[0:4],
index = ['Test Statistic',
'p-value',
'# Lags Used',
'Number of Observations Used'])
for key, value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
if dftest[0] < dftest[4]["5%"]:
print ('Reject Ho - Time Series is Stationary \n \n \n')
print (dfoutput)
# print( '\n \n \n \n')
# print(dftest)
pd.set_option('display.float_format', lambda x: '%.10f' % x)
test_stationarity(first_diff)
rcParams['figure.figsize'] = 12, 4
acf_plot = plot_acf(first_diff, lags=34) # for MA
pacf_plot = plot_pacf(first_diff) #for AR
plt.axhline(y=0.05,linestyle='--')
plt.axhline(y=-0.05,linestyle='--')
###Output
_____no_output_____
###Markdown
Based on the PACF, we can start with an AR(5) process, since the first few lags are the strongest. (Rule of thumb: a PACF that cuts off after lag p with a slowly decaying ACF suggests AR(p); an ACF that cuts off after lag q suggests MA(q).) There is also a fairly strong spike at lag 12, which indicates a possible seasonal pattern. First, let's try a plain ARIMA model.
###Code
# Train Test Split
from datetime import timedelta
from datetime import datetime
train_start = datetime(1972,3,1)
train_end = datetime(1973,6,1)
test_end = datetime(1973,9,1)
train_data = df_example[:train_end]
test_data = df_example[train_end + timedelta(days=1):test_end]
display(train_data.head())
test_data
df_shown = pd.read_excel(r'sarima_sample.xlsx', 'sample', usecols = ['Date','netsales'], parse_dates = [0],index_col =0)
split_date = '01-Jun-1973'
df_shown_train = df_shown.loc[df_shown.index <= split_date].copy()
df_shown_test = df_shown.loc[df_shown.index > split_date].copy()
df_shown_test\
.rename(columns={'netsales': 'TEST SET'}) \
.join(df_shown_train.rename(columns={'netsales': 'TRAINING SET'}),
how='outer') \
.plot(figsize=(15,5), title='Sales', style='-')
plt.show()
# ARIMA model
model = ARIMA(train_data, order=(5, 1, 1))
start = time()
model_fit = model.fit(disp=0)
end = time()
print('Model Fitting Time:', end - start)
print(model_fit.summary())
pred_start_date = test_data.index[0]
pred_end_date = test_data.index[-1]
display(pred_start_date)
pred_end_date
#get the predictions and residuals
predictions = model_fit.predict(start=pred_start_date, end=pred_end_date)
residuals = test_data - predictions
rcParams['figure.figsize'] = 24, 9
plt.plot(residuals)
plt.title('Residuals from ARIMA Model')
plt.ylabel('Error')
plt.axhline(0, color = 'r', linestyle = '--')
plt.figure(figsize=(12,4))
plt.plot(df_example)
plt.plot(predictions)
plt.legend(('Actual', 'Predictions'), fontsize=16)
plt.title('Sales Overtime', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(1970,1974):
plt.axvline(pd.to_datetime(str((year))+'-01-01'), color='k', linestyle='--', alpha=0.2)
print('Mean Absolute Percent Error:', "{:.2%}".format(round(np.mean(abs(residuals/test_data)),4)))
print('Root Mean Squared Error:', np.sqrt(np.mean(residuals**2)))
print("It is a bad model, let's try a Seasonal ARIMA model.")
###Output
It is a bad model, let's try a Seasonal ARIMA model.
###Markdown
SARIMA MODEL
###Code
import warnings
warnings.filterwarnings('ignore')
my_order = (0,1,0)
my_seasonal_order = (1,0,1,12)
model2 = SARIMAX(train_data, order = my_order, seasonal_order= my_seasonal_order)
start = time()
model2_fit = model2.fit()
end = time()
print('Model Fitting Time:', end - start)
print(model2_fit.summary())
predictions2 = model2_fit.forecast(len(test_data))
predictions2 = pd.Series(predictions2, index=test_data.index)
residuals2 = test_data - predictions2
residuals2
plt.plot(residuals2)
plt.title('Residuals from Seasonal ARIMA Model')
plt.ylabel('Error')
plt.axhline(0, color = 'r', linestyle = '--')
plt.figure(figsize=(12,4))
plt.plot(df_example)
plt.plot(predictions2)
plt.legend(('Actual', 'Predictions2'), fontsize=16)
plt.title('Sales Overtime', fontsize=20)
plt.ylabel('Sales', fontsize=16)
for year in range(1970,1974):
plt.axvline(pd.to_datetime(str((year))+'-01-01'), color='k', linestyle='--', alpha=0.2)
print('Mean Absolute Percent Error:', "{:.2%}".format(round(np.mean(abs(residuals2/test_data)),4)))
print('Root Mean Squared Error:', np.sqrt(np.mean(residuals2**2)))
###Output
Mean Absolute Percent Error: 43.93%
Root Mean Squared Error: 264818.0543770018
###Markdown
The model improves a lot with the SARIMA model! Let's try using a rolling forecast origin to predict one month at a time.
###Code
rolling_predictions = pd.Series()
for end_date in test_data.index:
train_data = df_example[:end_date - timedelta(days = 1)] #prediction 1 month forward each time
model = SARIMAX(train_data, order=my_order, seasonal_order = my_seasonal_order)
model_fit = model.fit()
pred = model_fit.forecast()
rolling_predictions[end_date] = pred
rolling_residuals = test_data - rolling_predictions
display(rolling_predictions)
rolling_residuals
plt.plot(rolling_residuals)
plt.axhline(0, linestyle = '--', color = 'k')
plt.title('Rolling Forecast Residuals from SARIMA Model', fontsize = 20)
plt.ylabel('Error', fontsize = 16)
plt.plot(df_example)
plt.plot(rolling_predictions)
plt.legend(('Actual', 'Predictions'))
plt.title('Sales Over Time')
plt.ylabel('Sales')
print('Mean Absolute Percent Error:', "{:.2%}".format(round(np.mean(abs(([item[0] for item in rolling_residuals.values])/test_data)),4)))
print('Root Mean Squared Error:', np.sqrt(np.mean([item[0]**2 for item in rolling_residuals.values])))
###Output
Mean Absolute Percent Error: 18.46%
Root Mean Squared Error: 180739.19932045773
###Markdown
It's way even better!
###Code
stl = STL(df_example)
result = stl.fit()
seasonal, trend, resid = result.seasonal, result.trend, result.resid
plt.figure(figsize=(15,8))
plt.subplot(4,1,1)
plt.plot(df_example)
plt.title('Original Series', fontsize=16)
plt.subplot(4,1,2)
plt.plot(trend)
plt.title('Trend', fontsize=16)
plt.subplot(4,1,3)
plt.plot(seasonal)
plt.title('Seasonal', fontsize=16)
plt.subplot(4,1,4)
plt.plot(resid)
plt.title('Residual', fontsize=16)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
STL assumes that the original series is made up of a trend added to a seasonal component. Anything that is left over is the residual, which is the component we can use for detecting anomalies.
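One simple way to use that residual for anomaly detection, sketched here as a hypothetical follow-up rather than something this notebook goes on to do, is to flag the points whose residual lies more than three standard deviations from its mean:
```python
resid_mu, resid_sigma = resid.mean(), resid.std()
anomalies = resid[(resid - resid_mu).abs() > 3 * resid_sigma]
anomalies
```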
###Code
estimated = trend + seasonal
plt.figure(figsize=(12,4))
plt.plot(df_example)
plt.plot(estimated)
plt.legend(('Actual', 'Trend + Seasonal'))
plt.title('Sales Over Time')
plt.ylabel('Sales')
for year in range(1971,1974):
plt.axvline(pd.to_datetime(str((year))+'-01-01'), color='k', linestyle='--', alpha=0.2)
###Output
_____no_output_____ |
tests/tf/Concept01_linear_regression.ipynb | ###Markdown
Ch `03`: Concept `01` Linear regression Import TensorFlow for the learning algorithm. We'll need NumPy to set up the initial data. And we'll use matplotlib to visualize our data.
###Code
%matplotlib inline
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Define some constants used by the learning algorithm. These are called hyper-parameters.
###Code
learning_rate = 0.01
training_epochs = 100
###Output
_____no_output_____
###Markdown
Set up fake data that we will use to find a best fit line
###Code
x_train = np.linspace(-1, 1, 101)
y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33
###Output
_____no_output_____
###Markdown
Plot the raw data
###Code
plt.scatter(x_train, y_train)
###Output
_____no_output_____
###Markdown
Set up the input and output nodes as placeholders since the value will be injected by `x_train` and `y_train`.
###Code
X = tf.placeholder("float")
Y = tf.placeholder("float")
###Output
_____no_output_____
###Markdown
Define the model as `y = w'*x`
###Code
def model(X, w):
return tf.multiply(X, w)
###Output
_____no_output_____
###Markdown
Set up the weights variable
###Code
w = tf.Variable(0.0, name="weights")
###Output
_____no_output_____
###Markdown
Define the cost function as the mean squared error
###Code
y_model = model(X, w)
cost = tf.reduce_mean(tf.square(Y-y_model))
###Output
_____no_output_____
###Markdown
Define the operation that will be called on each iteration of the learning algorithm
###Code
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
Initialize all variables
###Code
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
###Output
_____no_output_____
###Markdown
Train on each (x, y) pair multiple times
###Code
for epoch in range(training_epochs):
for (x, y) in zip(x_train, y_train):
sess.run(train_op, feed_dict={X: x, Y: y})
###Output
_____no_output_____
###Markdown
Fetch the value of the learned parameter
###Code
w_val = sess.run(w)
sess.close()
###Output
_____no_output_____
###Markdown
Visualize the best fit curve
###Code
plt.scatter(x_train, y_train)
y_learned = x_train*w_val
plt.plot(x_train, y_learned, 'r')
plt.show()
# Tested; Gopal
###Output
_____no_output_____ |
FDS_Strocke_predition (1).ipynb | ###Markdown
Connect to drive
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive/FDS_project
###Output
_____no_output_____
###Markdown
Libraries
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import seaborn as sns
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Inspect the dataset
###Code
df = pd.read_csv("healthcare-dataset-stroke-data.csv")
###Output
_____no_output_____
###Markdown
Importing the dataset: it can also be read directly from the mounted Drive, e.g. `df = pd.read_csv('/content/drive/MyDrive/FDS_project/healthcare-dataset-stroke-data.csv')`
###Code
df
df_solution = df.pivot_table(index=['ever_married', 'stroke'], aggfunc='size')
df_solution
###Output
_____no_output_____
###Markdown
Let's check whether there are any missing values
###Code
# inspect the dataset
df.isna().sum()
###Output
_____no_output_____
###Markdown
Only the "bmi" column has NaN values. We will fill them with the column mean.
###Code
# fill the nan value with the mean
df['bmi'] = df['bmi'].fillna(round(df.bmi.mean(),1))
###Output
_____no_output_____
###Markdown
Some correlationsSuppose we are working with a hospital and want to predict whether its patients will have a stroke; we have a dataset of those patients and want to see whether it is good enough to predict stroke yes/no, according to guidelines provided by physicians:```https://www.medicapoliambulatori.it/news/ictus-tipi-cause-sintomi/ https://www.humanitas.it/malattie/ictus-cerebrale/```In fact, 75% of stroke cases affect people older than 65. Incidence is proportional to the age of the population: it is low up to 40-45 years, then increases gradually and rises sharply after 70. Among the risk factors listed as non-modifiable: age; sex; heart disease; obesity; arterial hypertension
###Code
# Take only the dataset with stroke = 1
dfStrocke = df[df['stroke'] == 1].copy()
# set size of sbn figure
sns.set(rc = {'figure.figsize':(10,6)})
sns.histplot(data = dfStrocke, x='ever_married', hue='stroke', stat = 'probability', palette = 'magma')
plt.show()
sns.histplot(data = dfStrocke, x='work_type', hue='stroke', stat = 'probability', palette = 'magma')
plt.show()
sns.histplot(data = dfStrocke, x='Residence_type', hue='stroke', bins = 2, stat = 'probability', palette = 'magma')
plt.show()
###Output
_____no_output_____
###Markdown
Not very informative
###Code
sns.histplot(data = dfStrocke, x='smoking_status', hue='stroke', stat = 'probability')
plt.show()
###Output
_____no_output_____
###Markdown
We can see that the percentage of people with unknown smoking status is high relative to the number of people who had a stroke.
###Code
unknownSmoke = round(sum((dfStrocke['smoking_status'] == 'Unknown'))/dfStrocke.shape[0] * 100, 2)
unknownSmoke
sns.histplot(data = dfStrocke, x='heart_disease', hue='stroke', bins = 2, stat = 'probability')
plt.show()
###Output
_____no_output_____
###Markdown
Does the dataset reflect the figures from the medical sites? Yes/no, explain. Scatter plot
###Code
print('stroke: ',len(df[df['stroke']==1]))
print('no stroke: ',len(df[df['stroke']==0]))
stroke1 = df[df['stroke'] == 1].head(240).copy()
stroke0 = df[df['stroke'] == 0].head(200).copy()
strokee = pd.concat([stroke1, stroke0])
features = (strokee[['age', 'avg_glucose_level', 'bmi', 'stroke']].T).copy()
features = np.array(features, dtype = np.float64)
print(min(features[1]), max(features[1]))
sns.scatterplot(x = features[0], y = features[2],
size = features[1], sizes = (55,272), hue = features[3], palette = 'magma')
plt.xlabel('age')
plt.ylabel('bmi');
###Output
_____no_output_____
###Markdown
AGE: >40/50, BMI: >20. Glucose tends to be high in older patients, and high BMI is common among those who had a stroke. Standardize my data and fix it
###Code
from sklearn.preprocessing import StandardScaler
std = StandardScaler()
cols = ['age','avg_glucose_level', 'bmi']
norm = std.fit_transform(df[cols])
df_norm = df.copy()
df_norm[cols] = pd.DataFrame(norm)
df_norm
###Output
_____no_output_____
###Markdown
Residence_type, ever_married and gender are binary, so they can be encoded directly as 0/1; there is no need to duplicate the columns with one-hot encoding.
###Code
# GENDER: F/M --> 1/0
df_norm.drop(df_norm.loc[df['gender'] =='Other'].index, inplace=True)
df_norm["gender"] = df_norm["gender"].apply(lambda x: 1 if x=="Female" else 0)
# EVER_MARRIED: YES/NO --> 1/0
df_norm["ever_married"] = df_norm["ever_married"].apply(lambda x: 1 if x=="Yes" else 0)
# RESIDENCE_TYPE: URBAN/RURAL --> 1/0
df_norm["Residence_type"] = df_norm["Residence_type"].apply(lambda x: 1 if x=="Urban" else 0)
df_norm.head()
###Output
_____no_output_____
###Markdown
Train-test split. Drop the unneeded columns and separate the design matrix from the target.
###Code
X = df_norm.drop(['id', 'stroke'], axis = 1)
y = df_norm['stroke']
from sklearn.model_selection import train_test_split
# split the data with 60% in each set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0,
train_size = 0.6)
# transform the dataframe in dictionary to perform feature extraction
X_train = X_train.to_dict('records')
X_test = X_test.to_dict('records')
from sklearn.feature_extraction import DictVectorizer
v = DictVectorizer(sparse = False, dtype = float)
X_train = v.fit_transform(X_train)
X_test = v.transform(X_test)
pd.DataFrame(X_train, columns = v.get_feature_names_out() )
###Output
_____no_output_____
###Markdown
k-neighbors method
###Code
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X_train, y_train)
y_KN = model.predict(X_test)
###Output
_____no_output_____
###Markdown
Compare using confusion-matrix
###Code
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(y_test, y_KN)
sns.set(font_scale=1.5)
sns.heatmap(mat, square = True, annot=True, cbar=False, fmt="d")
plt.xlabel('predicted value')
plt.ylabel('true value');
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_KN)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0)
clf.fit(X_train, y_train)
y_LR = clf.predict(X_test)
mat = confusion_matrix(y_test, y_LR)
sns.set(font_scale=1.5)
sns.heatmap(mat, square = True, annot=True, cbar=False, fmt="d")
plt.xlabel('predicted value')
plt.ylabel('true value');
accuracy_score(y_test, y_LR)
###Output
_____no_output_____
###Markdown
Naive Bayes
###Code
from sklearn.naive_bayes import MultinomialNB
mnb = MultinomialNB()
mnb.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Visualize the data using PCA
###Code
dfplt = df.copy()
# GENDER: F/M --> 1/0
dfplt.drop(dfplt.loc[df['gender'] =='Other'].index, inplace=True)
dfplt["gender"] = dfplt["gender"].apply(lambda x: 1 if x=="Female" else 0)
# EVER_MARRIED: YES/NO --> 1/0
dfplt["ever_married"] = dfplt["ever_married"].apply(lambda x: 1 if x=="Yes" else 0)
# RESIDENCE_TYPE: URBAN/RURAL --> 1/0
dfplt["Residence_type"] = dfplt["Residence_type"].apply(lambda x: 1 if x=="Urban" else 0)
Xnot_norm = dfplt.drop(['id', 'stroke'], axis = 1)
Xplt = Xnot_norm.copy()
Xplt = Xplt.to_dict('records')
vplt = DictVectorizer(sparse = False, dtype = float)
Xplt = vplt.fit_transform(Xplt)
from sklearn.decomposition import PCA # 1. Choose the model class
PCAmodel = PCA(n_components=2) # 2. Instantiate the model with hyperparameters
PCAmodel.fit(Xplt) # 3. Fit to data. Notice y is not specified!
X_2D = PCAmodel.transform(Xplt) # 4. Transform the data to two dimensions
X_2D.shape
dfplt['PCA1'] = X_2D[:, 0]
dfplt['PCA2'] = X_2D[:, 1]
sum(dfplt['stroke'] == 1)
stroke1 = dfplt[dfplt['stroke'] == 1].head(240).copy()
stroke0 = dfplt[dfplt['stroke'] == 0].head(300).copy()
strokee = pd.concat([stroke1, stroke0])
sns.lmplot(x = "PCA1", y = "PCA2", hue = 'stroke', data = strokee, fit_reg = True, height=6);
###Output
_____no_output_____
###Markdown
Standardized dataset PCA plot visualization
###Code
X.shape
Xplt = X.copy()
Xplt = Xplt.to_dict('records')
vplt = DictVectorizer(sparse = False, dtype = float)
Xplt = vplt.fit_transform(Xplt)
from sklearn.decomposition import PCA # 1. Choose the model class
PCAmodel = PCA(n_components=2) # 2. Instantiate the model with hyperparameters
PCAmodel.fit(Xplt) # 3. Fit to data. Notice y is not specified!
X_2D = PCAmodel.transform(Xplt) # 4. Transform the data to two dimensions
dfplt = df_norm.copy()
dfplt['PCA1'] = X_2D[:, 0]
dfplt['PCA2'] = X_2D[:, 1]
stroke1 = dfplt[dfplt['stroke'] == 1].head(240).copy()
stroke0 = dfplt[dfplt['stroke'] == 0].head(300).copy()
strokee = pd.concat([stroke1, stroke0])
sns.lmplot(x = "PCA1", y = "PCA2", hue = 'stroke', data = strokee, fit_reg = True, height=6);
###Output
_____no_output_____
###Markdown
Ask which PCA looks better. K-fold: note that the dataset is unbalanced.
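Given the imbalance flagged above, it is worth printing the class ratio; one common mitigation, shown only as a sketch and not applied in the cells below, is to weight the classes inside the model:
```python
print(df_norm['stroke'].value_counts(normalize=True))  # the positive class is only a few percent of the rows
# Hypothetical mitigation: weight classes inversely to their frequency
from sklearn.linear_model import LogisticRegression
clf_weighted = LogisticRegression(class_weight='balanced', random_state=0)
```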
###Code
from sklearn.model_selection import StratifiedKFold
model = KNeighborsClassifier(n_neighbors=1)
X = df_norm.drop(['id', 'stroke'], axis = 1)
y = df_norm['stroke']
X = X.to_dict('records')
from sklearn.feature_extraction import DictVectorizer
v = DictVectorizer(sparse = False, dtype = float)
X = v.fit_transform(X)
y = y.to_numpy()
kfold = StratifiedKFold(n_splits = 5, shuffle=True, random_state=1)
def scores(X, y):
    # enumerate the splits and report the fold shapes
    for train_ix, test_ix in kfold.split(X, y):
        # select rows
        train_X, test_X = X[train_ix], X[test_ix]
        train_y, test_y = y[train_ix], y[test_ix]
        print(train_X.shape, test_X.shape)
from mlxtend.evaluate import bias_variance_decomp
pip install mlxtend
import mlxtend
mlxtend.__version__
pip install mlxtend --upgrade
###Output
Requirement already satisfied: mlxtend in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (0.19.0)
Requirement already satisfied: setuptools in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from mlxtend) (58.0.4)
Requirement already satisfied: scipy>=1.2.1 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from mlxtend) (1.7.1)
Requirement already satisfied: pandas>=0.24.2 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from mlxtend) (1.3.4)
Requirement already satisfied: scikit-learn>=0.20.3 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from mlxtend) (1.0.1)
Requirement already satisfied: matplotlib>=3.0.0 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from mlxtend) (3.5.0)
Requirement already satisfied: numpy>=1.16.2 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from mlxtend) (1.20.3)
Requirement already satisfied: joblib>=0.13.2 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from mlxtend) (1.1.0)
Requirement already satisfied: cycler>=0.10 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from matplotlib>=3.0.0->mlxtend) (0.11.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from matplotlib>=3.0.0->mlxtend) (1.3.1)
Requirement already satisfied: fonttools>=4.22.0 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from matplotlib>=3.0.0->mlxtend) (4.25.0)
Requirement already satisfied: pillow>=6.2.0 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from matplotlib>=3.0.0->mlxtend) (8.4.0)
Requirement already satisfied: pyparsing>=2.2.1 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from matplotlib>=3.0.0->mlxtend) (3.0.4)
Requirement already satisfied: packaging>=20.0 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from matplotlib>=3.0.0->mlxtend) (21.3)
Requirement already satisfied: python-dateutil>=2.7 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from matplotlib>=3.0.0->mlxtend) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from pandas>=0.24.2->mlxtend) (2021.3)
Requirement already satisfied: six>=1.5 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from python-dateutil>=2.7->matplotlib>=3.0.0->mlxtend) (1.16.0)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/amedeo/Programmi/anaconda3/lib/python3.8/site-packages (from scikit-learn>=0.20.3->mlxtend) (2.2.0)
Note: you may need to restart the kernel to use updated packages.
###Markdown
Trial run
###Code
from sklearn.model_selection import StratifiedKFold
# dataset
X = df_norm.drop(['id', 'stroke'], axis = 1)
y = df_norm['stroke']
# transorm the data
X = X.to_dict('records')
from sklearn.feature_extraction import DictVectorizer
v = DictVectorizer(sparse = False, dtype = float)
X = v.fit_transform(X)
y = y.to_numpy()
kfold = StratifiedKFold(n_splits = 5, shuffle=True, random_state=1)
from numpy import mean
from sklearn.model_selection import cross_val_score
def get_model():
    model = LogisticRegression()
    return model
# evaluate the model using a given test condition
def evaluate_model(cv):
    model = get_model()
    # evaluate the model
    scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
    # return the mean, min and max accuracy across folds
    return mean(scores), scores.min(), scores.max()
###Output
_____no_output_____
###Markdown
CROSS-VALIDATION AND THE BIAS-VARIANCE TRADE-OFF. Train-test split: drop the unneeded columns and separate the design matrix from the target.
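For squared loss, the decomposition that the `bias_variance_decomp` call further below estimates is the standard one, $$\mathbb{E}\big[(y - \hat f(x))^2\big] = \big(\mathbb{E}[\hat f(x)] - f(x)\big)^2 + \mathbb{E}\big[(\hat f(x) - \mathbb{E}[\hat f(x)])^2\big] + \sigma^2,$$ i.e. squared bias plus variance plus the irreducible noise $\sigma^2$; more flexible models (a high polynomial degree, or a small $k$ in KNN) push the bias term down and the variance term up.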
###Code
df_norm
X = df_norm.drop(['id', 'stroke'], axis = 1)
y = df_norm['stroke']
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import DictVectorizer
def norm_X(X_total):
# transform the dataframe in dictionary to perform feature extraction
X_total = X_total.to_dict('records')
v = DictVectorizer(sparse = False, dtype = float)
X_total = v.fit_transform(X_total)
return X_total
X_to_split = norm_X(X)
###Output
_____no_output_____
###Markdown
Analysis with k-fold of logistic regression
###Code
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import PolynomialFeatures
from tqdm import tqdm
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
def apply_Logistic_Regression(X_tr, y_tr, X_te):
clf = LogisticRegression(random_state=0)
clf.fit(X_tr, y_tr)
y_LR = clf.predict(X_te)
return y_LR
y = np.array(y)
print(X_to_split.shape, y.shape, type(X_to_split), type(y))
# Implement k-Fold
import warnings
warnings.filterwarnings("ignore")
# List of metrics averages
MSE_values = []
accuracy_values = []
# List of tuples (k, degree, mean MSE, mean accuracy)
final_values = []
# Tune parameters
k_min = 5
k_max = 7
min_degree = 1
max_degree = 4
for k in tqdm(range(k_min, k_max + 1)):
for degree in range(min_degree, max_degree + 1):
kf = StratifiedKFold(n_splits=k, random_state=None, shuffle=True)
for train_index, test_index in kf.split(X_to_split,y ):
X_train, X_test = X_to_split[train_index], X_to_split[test_index]
y_train, y_test = y[train_index], y[test_index]
# Transform with polyfit
poly = PolynomialFeatures(degree = degree, interaction_only=False, include_bias=False)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.transform(X_test)
y_LR = apply_Logistic_Regression(X_tr=X_train_poly, y_tr=y_train, X_te=X_test_poly)
MSE_values.append(mean_squared_error(y_test, y_LR))
accuracy_values.append(accuracy_score(y_test, y_LR))
# For each degree calculate the mean and add it
mean_MSE = np.mean(MSE_values)
mean_accuracy = np.mean(accuracy_values)
final_values.append((k, degree, mean_MSE, mean_accuracy))
        # Reset the lists for the next configuration
MSE_values = []
accuracy_values = []
# Implement k-Fold
import warnings
warnings.filterwarnings("ignore")
# List of metrics averages
MSE_values = []
accuracy_values = []
# List of tuples (k, degree, mean MSE, mean accuracy)
final_values = []
min_degree = 1
max_degree = 5
k = 7
for degree in tqdm(range(min_degree, max_degree + 1)):
kf = StratifiedKFold(n_splits=k, random_state=None, shuffle=True)
for train_index, test_index in kf.split(X_to_split, y):
X_train, X_test = X_to_split[train_index], X_to_split[test_index]
y_train, y_test = y[train_index], y[test_index]
# Transform with polyfit
poly = PolynomialFeatures(degree = degree, interaction_only=False, include_bias=False)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.transform(X_test)
y_LR = apply_Logistic_Regression(X_tr=X_train_poly, y_tr=y_train, X_te=X_test_poly)
MSE_values.append(mean_squared_error(y_test, y_LR))
accuracy_values.append(accuracy_score(y_test, y_LR))
# For each degree calculate the mean and add it
mean_MSE = np.mean(MSE_values)
mean_accuracy = np.mean(accuracy_values)
final_values.append((k, degree, mean_MSE, mean_accuracy))
    # Reset the lists for the next degree
MSE_values = []
accuracy_values = []
import matplotlib.pyplot as plt
fig, axs = plt.subplots(1,2, figsize=(25, 10))
#fig.suptitle('Accuracy --> Left, MSE --> Right')
# Lists with values
x_values = [tupla[1] for tupla in final_values]
mse_plot = [tupla[2] for tupla in final_values]
accuracy_plot = [tupla[3] for tupla in final_values]
print(x_values, mse_plot)
# Plot the accuracy
axs[0].plot(x_values[0:max_degree], accuracy_plot[0:max_degree], color="k", linewidth = 5)
axs[0].set_title("Accuracy")
axs[0].set_xlabel("degree of the polynomial", labelpad=20)
axs[0].grid(color="lightgray")
# Plot the MSE
axs[1].plot(x_values[0:max_degree], mse_plot[0:max_degree], color="orange", linewidth = 5)
axs[1].set_title("MSE")
axs[1].set_xlabel("degree of the polynomial", labelpad=20)
axs[1].grid(color="lightgray")
# Legend
legend_list = []
plt.legend([str('k = ' + str(k))], loc='upper left')
# Show
plt.show()
###Output
[1, 2, 3, 4, 5] [0.048737379838343815, 0.05030320601096857, 0.05891538418174643, 0.06439483623654095, 0.06831034121749494]
###Markdown
KNN
###Code
from sklearn.neighbors import KNeighborsClassifier
bias_knn, var_knn, error_knn = [], [], []
for k in range(1, 15):
clf_knn = KNeighborsClassifier(n_neighbors=k)
    avg_expected_loss, avg_bias, avg_var = bias_variance_decomp(clf_knn, X_train, y_train, X_test, y_test, loss='mse', random_seed=123)
bias_knn.append(avg_bias)
var_knn.append(avg_var)
error_knn.append(avg_expected_loss)
plt.plot(range(1,15), error_knn, 'b', label = 'total_error')
plt.plot(range(1,15), bias_knn, 'k', label = 'bias')
plt.plot(range(1,15), var_knn, 'y', label = 'variance')
plt.legend()
# sensitivity analysis of k in k-fold cross-validation
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot
#retrieve the model to be evaluate
def get_model():
model = LogisticRegression()
return model
# evaluate the model using a given test condition
def evaluate_model(cv):
# get the dataset
X, y = X_train, y_train
# get the model
model = get_model()
# evaluate the model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# return scores
return mean(scores), scores.min(), scores.max()
# calculate the ideal test condition
ideal, _, _ = evaluate_model(LeaveOneOut())
# define folds to test
folds = range(2,15)
# record mean and min/max of each set of results
means, mins, maxs = list(),list(),list()
# evaluate each k value
for k in folds:
# define the test condition
cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=1)
# evaluate k value
k_mean, k_min, k_max = evaluate_model(cv)
# store mean accuracy
means.append(k_mean)
# store min and max relative to the mean
mins.append(k_mean - k_min)
maxs.append(k_max - k_mean)
plt.rcParams['axes.facecolor'] = 'w'
fig, ax = pyplot.subplots(1,1, figsize=(25,15))
ax.grid(color="lightgray")
# line plot of k mean values with min/max error bars
ax.errorbar(folds, means, yerr=[mins, maxs], fmt='o', color="k", ecolor="orange", elinewidth=3)
# plot the ideal case in a separate color
ax.plot(folds, [ideal for _ in range(len(folds))], color='r', linewidth=3.0)
plt.xlabel("number of folds k")
plt.ylabel("Accuracy")
# show the plot
fig.savefig("errorbar.png", bbox_inches='tight')
###Output
_____no_output_____ |
ML/03-classifier/3.5 decision_tree.ipynb | ###Markdown
3.5 Decision_tree- Decision trees differ by how they split the data; here we look at just two families, CART and C4.5. - CART- The CART algorithm builds the model as a binary tree. The first task is to find the explanatory variable, and its split point, that best separates the target variable. One such measure is called diversity: we pick the explanatory variable that reduces the node's diversity the most, and choose the split that maximizes diversity(before split) - (diversity(left child) + diversity(right child)). - C4.5- C4.5 differs from CART in that CART always performs binary splits, while C4.5 can vary the number of branches. For continuous variables it works much like CART, but it treats categorical variables differently: if "color" is chosen as the split variable, the next level of the tree gets one node per color. (C4.5 splits into as many branches as the attribute has category values, which on real data can fragment the tree into very fine branches.) Ref: https://ai-times.tistory.com/177 [ai-times] 3.5 CART decision tree
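As a side note (standard definitions, not taken from the reference above): the "diversity" measure is an impurity function. CART typically uses the Gini index and C4.5 uses entropy, $$\mathrm{Gini}(t) = 1 - \sum_k p_k^2, \qquad H(t) = -\sum_k p_k \log_2 p_k,$$ where $p_k$ is the fraction of class $k$ at node $t$, and a split is chosen to maximize the parent's impurity minus the weighted impurity of the children, which is the same criterion written above. This is also why the second example below passes `criterion='entropy'` to scikit-learn, while the first keeps the default Gini criterion.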
###Code
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
from sklearn import datasets, tree
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.metrics import precision_score, recall_score, f1_score
# load the handwritten digits dataset
digits = datasets.load_digits()
# display the first ten images in a 2 x 5 grid
for label, img in zip(digits.target[:10], digits.images[:10]):
plt.subplot(2, 5, label + 1)
plt.axis('off')
plt.imshow(img, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Digit: {0}'.format(label))
plt.show()
# locate the samples whose label is 3 or 8
flag_3_8 = (digits.target == 3) + (digits.target == 8)
# extract the 3 and 8 samples
images = digits.images[flag_3_8]
labels = digits.target[flag_3_8]
# flatten the 3/8 image data to 1D vectors
images = images.reshape(images.shape[0], -1)
# build the classifier
n_samples = len(flag_3_8[flag_3_8])
train_size = int(n_samples * 3 / 5)
classifier = tree.DecisionTreeClassifier(max_depth=3) # CART
classifier.fit(images[:train_size], labels[:train_size])
# evaluate the classifier performance
expected = labels[train_size:]
predicted = classifier.predict(images[train_size:])
print('Accuracy:\n',
accuracy_score(expected, predicted))
print('Confusion matrix:\n',
confusion_matrix(expected, predicted))
print('Precision:\n',
precision_score(expected, predicted, pos_label=3))
print('Recall:\n',
recall_score(expected, predicted, pos_label=3))
print('F-measure:\n',
f1_score(expected, predicted, pos_label=3))
###Output
_____no_output_____
###Markdown
3.5 C4.5 decision tree
###Code
# -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
from sklearn import datasets, tree
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.metrics import precision_score, recall_score, f1_score
# load the handwritten digits dataset
digits = datasets.load_digits()
# display the first ten images in a 2 x 5 grid
for label, img in zip(digits.target[:10], digits.images[:10]):
plt.subplot(2, 5, label + 1)
plt.axis('off')
plt.imshow(img, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Digit: {0}'.format(label))
plt.show()
# locate the samples whose label is 3 or 8
flag_3_8 = (digits.target == 3) + (digits.target == 8)
# extract the 3 and 8 samples
images = digits.images[flag_3_8]
labels = digits.target[flag_3_8]
# flatten the 3/8 image data to 1D vectors
images = images.reshape(images.shape[0], -1)
# build the classifier
n_samples = len(flag_3_8[flag_3_8])
train_size = int(n_samples * 3 / 5)
classifier = tree.DecisionTreeClassifier(max_depth=3, criterion='entropy') # C4.5
classifier.fit(images[:train_size], labels[:train_size])
# evaluate the classifier performance
expected = labels[train_size:]
predicted = classifier.predict(images[train_size:])
print('Accuracy:\n',
accuracy_score(expected, predicted))
print('Confusion matrix:\n',
confusion_matrix(expected, predicted))
print('Precision:\n',
precision_score(expected, predicted, pos_label=3))
print('Recall:\n',
recall_score(expected, predicted, pos_label=3))
print('F-measure:\n',
f1_score(expected, predicted, pos_label=3))
###Output
_____no_output_____ |
027_lstm_rnn_sol.ipynb | ###Markdown
Recurrent Neural NetworksThe RNN type of network and its uses mirror that of a CNN - while the convolutional neural networks excel at capturing spatial relationships in data, RNNs excel at capturing temporal relationships, or things that change over time. This makes this type of network well suited to time series types of problems, and also things like text processing, where the words that occur earlier in a sentence are connected to those that occur later. The unique part of RNNs is that they can loop back, thus allowing the model to draw connections from things that happened in the past. Long Short Term MemoryLong Short Term Memory (LSTM) models are a type of RNN that we can commonly use to make time series predictions. LSTM models function to "remember" certain data and carry that forward, and forget other data. LSTM models have some internal machinery that allows them to remember data: Forget gate - determines which old data can be dropped. Input gate - processes new data. Output gate - combines the "held" old data with the new data to generate the output. Past ModelWe can generate some predictions and see what type of accuracy we can get. Load DataWe will load the close price of a stock, scale it, and preview the first 5 prices to make sure we're good.
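In the usual notation the gates combine the previous hidden state $h_{t-1}$ with the current input $x_t$: $$f_t = \sigma(W_f[h_{t-1}, x_t] + b_f), \quad i_t = \sigma(W_i[h_{t-1}, x_t] + b_i), \quad o_t = \sigma(W_o[h_{t-1}, x_t] + b_o),$$ $$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c[h_{t-1}, x_t] + b_c), \qquad h_t = o_t \odot \tanh(c_t),$$ so the cell state $c_t$ is what carries the "remembered" information forward. The cells below also rely on a setup cell that is not shown here; a minimal sketch of what they appear to assume follows, where the ticker, period and forecast horizon are placeholders, `plot_loss` is a small helper guessed from how it is called, and the layers may equally come from `tensorflow.keras`:
```python
import math
import numpy as np
import pandas as pd
import yfinance as yf
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

ticker = 'SPY'     # placeholder ticker
per = '1y'         # roughly 255 trading days, matching the shapes printed below
n_lookback = 60    # look-back window used throughout
n_forecast = 30    # placeholder forecast horizon
epochs = 20

def plot_loss(history):
    # guessed helper: plot the training loss curve from a Keras History object
    plt.figure()
    plt.plot(history.history['loss'], label='loss')
    plt.xlabel('epoch')
    plt.legend()
    plt.show()
```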
###Code
# download the data
df = yf.download(tickers=[ticker], period=per)
y = df['Close'].fillna(method='ffill')
y = y.values.reshape(-1, 1)
# scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(y)
y = scaler.transform(y)
print(y[:5])
###Output
[*********************100%***********************] 1 of 1 completed
[[0.0967247 ]
[0.08510231]
[0.10469691]
[0.13687448]
[0.16184813]]
###Markdown
Generate Data for PredictionsFor each point in our data we generate additional data at that point. The outcome of this will be data where each time point is now no longer just 1 value, like a normal time series, but a window of past values - this is the "long term" memory part. The X's shape is: (number of rows of data, number of lookback rows, number of features), i.e. each "row" now has the past 60 values as part of it. The Y's shape is: (number of rows of data, number of forecast rows, number of features), i.e. each row is a prediction into the future. Each of the inputs "remembers" the past 60 days! This is what allows these models to do such a good job on sequential data.
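A tiny toy example, separate from the pipeline, may make those shapes concrete; with a five-point series and a look-back of 3:
```python
import numpy as np
toy = np.array([10, 11, 12, 13, 14])
Xt = np.array([toy[i - 3:i] for i in range(3, len(toy))])  # [[10 11 12], [11 12 13]] -> shape (2, 3)
Yt = np.array([toy[i] for i in range(3, len(toy))])        # [13 14]                  -> shape (2,)
print(Xt.shape, Yt.shape)
```
Each row of `Xt` is a slice of the recent past and the matching entry of `Yt` is the value that comes next; the real data just adds a trailing feature dimension.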
###Code
# generate the input and output sequences
X = []
Y = []
for i in range(n_lookback, len(y)):
X.append(y[i - n_lookback: i])
y_tmp = y[i: i + n_forecast]
Y.append(y_tmp[0][0])
X = np.array(X)
Y = np.array(Y)
print(X.shape, Y.shape)
###Output
(195, 60, 1) (195,)
###Markdown
Fit ModelsWe can now make a model and train it. The long short-term memory layers are mostly simple to use, but we need to be aware of a few things: Input shape - the input shape is the number of previous records and the number of features; here we have 60 for n_lookback and 1 feature. Return sequences - an LSTM layer that feeds another LSTM layer needs return_sequences=True so that it passes on the whole sequence rather than only its last output, which is what the first layer below does. Output - each forecast step gets its own output neuron.
###Code
# fit the model
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(n_lookback, 1)))
model.add(LSTM(units=50))
model.add(Dense(n_forecast))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X, Y, epochs=epochs, batch_size=32, verbose=1)
tmp = model.predict(X)
###Output
Epoch 1/20
7/7 [==============================] - 3s 45ms/step - loss: 0.4053
Epoch 2/20
7/7 [==============================] - 0s 35ms/step - loss: 0.2507
Epoch 3/20
7/7 [==============================] - 0s 34ms/step - loss: 0.0915
Epoch 4/20
7/7 [==============================] - 0s 30ms/step - loss: 0.0399
Epoch 5/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0254
Epoch 6/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0224
Epoch 7/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0197
Epoch 8/20
7/7 [==============================] - 0s 26ms/step - loss: 0.0192
Epoch 9/20
7/7 [==============================] - 0s 25ms/step - loss: 0.0172
Epoch 10/20
7/7 [==============================] - 0s 26ms/step - loss: 0.0143
Epoch 11/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0130
Epoch 12/20
7/7 [==============================] - 0s 26ms/step - loss: 0.0131
Epoch 13/20
7/7 [==============================] - 0s 26ms/step - loss: 0.0115
Epoch 14/20
7/7 [==============================] - 0s 26ms/step - loss: 0.0135
Epoch 15/20
7/7 [==============================] - 0s 26ms/step - loss: 0.0113
Epoch 16/20
7/7 [==============================] - 0s 29ms/step - loss: 0.0098
Epoch 17/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0104
Epoch 18/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0095
Epoch 19/20
7/7 [==============================] - 0s 26ms/step - loss: 0.0087
Epoch 20/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0090
###Markdown
Print Results
###Code
# Print
old_preds = []
for i in range(len(tmp)):
old_preds.append(tmp[i][0])
# organize the results in a data frame
df_past = df[['Close']].reset_index()
#print(len(df_past))
df_past.rename(columns={'index': 'Date', 'Close': 'Actual'}, inplace=True)
df_past['Date'] = pd.to_datetime(df_past['Date'])
df_past['Old'] = np.nan
for i in range(len(old_preds)):
df_past["Old"].iloc[i+n_lookback-1] = scaler.inverse_transform(np.array(old_preds[i]).reshape(1,-1))
results = df_past.set_index('Date')
# plot the results
plot_loss(history)
results.plot(title=ticker)
df_tmp = df_past[~df_past["Old"].isna()]
trainScore = math.sqrt(mean_squared_error(df_tmp["Actual"], df_tmp["Old"]))
print('Train Score: %.2f RMSE' % (trainScore))
###Output
Train Score: 8.94 RMSE
###Markdown
Model With Forward PredictionsWe can take a model and produce forward looking predictions.
###Code
# download the data
df = yf.download(tickers=[ticker], period=per)
y = df['Close'].fillna(method='ffill')
y = y.values.reshape(-1, 1)
#print(y)
# scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(y)
y = scaler.transform(y)
#print(y)
# generate the input and output sequences
X = []
Y = []
for i in range(n_lookback, len(y)):
X.append(y[i - n_lookback: i])
y_tmp = y[i: i + n_forecast]
Y.append(y_tmp[0][0])
X = np.array(X)
Y = np.array(Y)
print(X.shape, Y.shape)
# fit the model
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(n_lookback, 1)))
model.add(LSTM(units=50))
model.add(Dense(n_forecast))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X, Y, epochs=epochs, batch_size=32, verbose=1)
# generate the forecasts
X_ = y[- n_lookback:] # last available input sequence
X_ = X_.reshape(1, n_lookback, 1)
Y_ = model.predict(X_).reshape(-1, 1)
Y_ = scaler.inverse_transform(Y_)
# organize the results in a data frame
df_past = df[['Close']].reset_index()
df_past.rename(columns={'index': 'Date', 'Close': 'Actual'}, inplace=True)
df_past['Date'] = pd.to_datetime(df_past['Date'])
df_past['Forecast'] = np.nan
df_past['Forecast'].iloc[-1] = df_past['Actual'].iloc[-1]
df_future = pd.DataFrame(columns=['Date', 'Actual', 'Forecast'])
df_future['Date'] = pd.date_range(start=df_past['Date'].iloc[-1] + pd.Timedelta(days=1), periods=n_forecast)
df_future['Forecast'] = Y_.flatten()
df_future['Actual'] = np.nan
results = df_past.append(df_future).set_index('Date')
# plot the results
plot_loss(history)
results.plot(title=ticker)
###Output
[*********************100%***********************] 1 of 1 completed
(195, 60, 1) (195,)
Epoch 1/20
7/7 [==============================] - 3s 34ms/step - loss: 0.3940
Epoch 2/20
7/7 [==============================] - 0s 27ms/step - loss: 0.2267
Epoch 3/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0983
Epoch 4/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0437
Epoch 5/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0281
Epoch 6/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0210
Epoch 7/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0183
Epoch 8/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0176
Epoch 9/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0170
Epoch 10/20
7/7 [==============================] - 0s 29ms/step - loss: 0.0137
Epoch 11/20
7/7 [==============================] - 0s 29ms/step - loss: 0.0131
Epoch 12/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0124
Epoch 13/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0119
Epoch 14/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0155
Epoch 15/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0121
Epoch 16/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0099
Epoch 17/20
7/7 [==============================] - 0s 27ms/step - loss: 0.0104
Epoch 18/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0090
Epoch 19/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0085
Epoch 20/20
7/7 [==============================] - 0s 28ms/step - loss: 0.0087
###Markdown
Multiple Feature Time SeriesWe can also extend time series modelling to deal with multiple variables as inputs. For example, for the stock prices we can include the trading volume as well. Here our feature set is two values - the price that we are used to with a time series, and also the volume. The shape of the input changes here, and this can be adapted to more elaborate prediction scenarios. The time step part is the same, but now instead of there only being one feature we can have arbitrarily many.
###Code
features = 2
# download the data
df = yf.download(tickers=[ticker], period=per)
y = df[['Close', 'Volume']].fillna(method='ffill')
#print(y)
y = y.values.reshape(-1, features)
#print(y)
# scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(y)
print(y.shape)
y = scaler.transform(y)
#print(y)
X = []
Y = []
#for i in range(n_lookback, len(y) - n_forecast + 1):
for i in range(n_lookback, len(y)):
X.append(y[i - n_lookback: i])
y_tmp = y[i: i + n_forecast]
Y.append(y_tmp[0][0])
X = np.array(X)
Y = np.array(Y)
print(X.shape, Y.shape)
# fit the model
model = Sequential()
model.add(LSTM(units=60, return_sequences=True, input_shape=(n_lookback, features)))
model.add(LSTM(units=30))
model.add(Dense(n_forecast))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X, Y, epochs=epochs, batch_size=32, verbose=1)
tmp = model.predict(X)
# generate the forecasts
X_ = y[- n_lookback:] # last available input sequence
X_ = X_.reshape(1, n_lookback, features)
#Accuracy
old_preds = []
for i in range(len(tmp)):
old_preds.append(tmp[i][0])
Y_ = model.predict(X_).reshape(-1, 1)
tmp_zeros = [0] * len(Y_)
tmp = np.array(list(zip(Y_,tmp_zeros)))
tmp_dict = {"pred":Y_.flatten(), "zero":tmp_zeros}
tmp_df = pd.DataFrame.from_dict(tmp_dict)
Y_ = scaler.inverse_transform(tmp_df)
Y_ = [item[0] for item in Y_]
# organize the results in a data frame
df_past = df[['Close']].reset_index()
df_past.rename(columns={'index': 'Date', 'Close': 'Actual'}, inplace=True)
df_past['Date'] = pd.to_datetime(df_past['Date'])
df_past['Forecast'] = np.nan
df_past['Forecast'].iloc[-1] = df_past['Actual'].iloc[-1]
#Old
df_past['Old'] = np.nan
tmp_zeros = [0] * len(old_preds)
tmp = np.array(list(zip(old_preds,tmp_zeros)))
tmp_dict = {"pred":old_preds, "zero":tmp_zeros}
tmp_df = pd.DataFrame.from_dict(tmp_dict)
old_ = scaler.inverse_transform(tmp_df)
old_ = [item[0] for item in old_]
for i in range(len(old_)):
df_past["Old"].iloc[i+n_lookback-1] = old_[i]
df_future = pd.DataFrame(columns=['Date', 'Actual', 'Forecast', 'Old'])
df_future['Date'] = pd.date_range(start=df_past['Date'].iloc[-1] + pd.Timedelta(days=1), periods=n_forecast)
df_future['Forecast'] = Y_#.flatten()
df_future['Actual'] = np.nan
results = df_past.append(df_future).set_index('Date')
# plot the results
plot_loss(history)
results.plot(title=ticker)
df_tmp = df_past[~df_past["Old"].isna()]
trainScore = math.sqrt(mean_squared_error(df_tmp["Actual"], df_tmp["Old"]))
print('Train Score: %.2f RMSE' % (trainScore))
###Output
Train Score: 9.11 RMSE
###Markdown
Add Some TweaksIn the model structure we can experiment just as with any other network - for example, we can add a couple of extra LSTM layers and some dropout layers.
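###Markdown
One detail worth calling out before the full run: when LSTM layers are stacked, every layer except the last must hand its full output sequence to the next one via return_sequences=True, and Dropout layers can be slotted in between. A minimal sketch of that pattern (layer sizes here are arbitrary, not a recommendation):
###Code
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

sketch = Sequential()
sketch.add(LSTM(32, return_sequences=True, input_shape=(60, 2)))  # passes sequences onward
sketch.add(Dropout(0.1))
sketch.add(LSTM(16))                                              # last LSTM returns a single vector
sketch.add(Dropout(0.1))
sketch.add(Dense(1))
sketch.compile(loss='mean_squared_error', optimizer='adam')
###Output
_____no_output_____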
###Code
features = 2
# download the data
df = yf.download(tickers=[ticker], period=per)
y = df[['Close', 'Volume']].fillna(method='ffill')
#print(y)
y = y.values.reshape(-1, features)
#print(y)
# scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaler = scaler.fit(y)
print(y.shape)
y = scaler.transform(y)
#print(y)
X = []
Y = []
#for i in range(n_lookback, len(y) - n_forecast + 1):
for i in range(n_lookback, len(y)):
X.append(y[i - n_lookback: i])
y_tmp = y[i: i + n_forecast]
Y.append(y_tmp[0][0])
#print(X)
#print(Y)
X = np.array(X)
Y = np.array(Y)
print(X.shape, Y.shape)
# fit the model
model = Sequential()
model.add(LSTM(units=60, return_sequences=True, input_shape=(n_lookback, features)))
model.add(Dropout(0.1))
model.add(LSTM(units=60, return_sequences = True))
model.add(Dropout(0.1))
model.add(LSTM(units=60, return_sequences = True))
model.add(Dropout(0.1))
model.add(LSTM(units=30))
model.add(Dropout(0.1))
model.add(Dense(n_forecast))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(X, Y, epochs=epochs, batch_size=32, verbose=1)
tmp = model.predict(X)
# generate the forecasts
X_ = y[- n_lookback:] # last available input sequence
X_ = X_.reshape(1, n_lookback, features)
#Accuracy
old_preds = []
for i in range(len(tmp)):
old_preds.append(tmp[i][0])
Y_ = model.predict(X_).reshape(-1, 1)
tmp_zeros = [0] * len(Y_)
tmp = np.array(list(zip(Y_,tmp_zeros)))
tmp_dict = {"pred":Y_.flatten(), "zero":tmp_zeros}
tmp_df = pd.DataFrame.from_dict(tmp_dict)
Y_ = scaler.inverse_transform(tmp_df)
Y_ = [item[0] for item in Y_]
# organize the results in a data frame
df_past = df[['Close']].reset_index()
df_past.rename(columns={'index': 'Date', 'Close': 'Actual'}, inplace=True)
df_past['Date'] = pd.to_datetime(df_past['Date'])
df_past['Forecast'] = np.nan
df_past['Forecast'].iloc[-1] = df_past['Actual'].iloc[-1]
#Old
df_past['Old'] = np.nan
tmp_zeros = [0] * len(old_preds)
tmp = np.array(list(zip(old_preds,tmp_zeros)))
tmp_dict = {"pred":old_preds, "zero":tmp_zeros}
tmp_df = pd.DataFrame.from_dict(tmp_dict)
old_ = scaler.inverse_transform(tmp_df)
old_ = [item[0] for item in old_]
for i in range(len(old_)):
df_past["Old"].iloc[i+n_lookback-1] = old_[i]
df_future = pd.DataFrame(columns=['Date', 'Actual', 'Forecast', 'Old'])
df_future['Date'] = pd.date_range(start=df_past['Date'].iloc[-1] + pd.Timedelta(days=1), periods=n_forecast)
df_future['Forecast'] = Y_#.flatten()
df_future['Actual'] = np.nan
results = df_past.append(df_future).set_index('Date')
# plot the results
plot_loss(history)
results.plot(title=ticker)
df_tmp = df_past[~df_past["Old"].isna()]
trainScore = math.sqrt(mean_squared_error(df_tmp["Actual"], df_tmp["Old"]))
print('Train Score: %.2f RMSE' % (trainScore))
###Output
[*********************100%***********************] 1 of 1 completed
(255, 2)
(195, 60, 2) (195,)
Epoch 1/20
7/7 [==============================] - 6s 95ms/step - loss: 0.3792
Epoch 2/20
7/7 [==============================] - 1s 96ms/step - loss: 0.1973
Epoch 3/20
7/7 [==============================] - 1s 80ms/step - loss: 0.1202
Epoch 4/20
7/7 [==============================] - 1s 77ms/step - loss: 0.0783
Epoch 5/20
7/7 [==============================] - 0s 66ms/step - loss: 0.0630
Epoch 6/20
7/7 [==============================] - 0s 72ms/step - loss: 0.0600
Epoch 7/20
7/7 [==============================] - 0s 63ms/step - loss: 0.0545
Epoch 8/20
7/7 [==============================] - 0s 70ms/step - loss: 0.0485
Epoch 9/20
7/7 [==============================] - 0s 67ms/step - loss: 0.0439
Epoch 10/20
7/7 [==============================] - 0s 67ms/step - loss: 0.0398
Epoch 11/20
7/7 [==============================] - 0s 65ms/step - loss: 0.0351
Epoch 12/20
7/7 [==============================] - 0s 64ms/step - loss: 0.0369
Epoch 13/20
7/7 [==============================] - 0s 66ms/step - loss: 0.0317
Epoch 14/20
7/7 [==============================] - 0s 66ms/step - loss: 0.0409
Epoch 15/20
7/7 [==============================] - 1s 78ms/step - loss: 0.0342
Epoch 16/20
7/7 [==============================] - 0s 66ms/step - loss: 0.0272
Epoch 17/20
7/7 [==============================] - 1s 82ms/step - loss: 0.0279
Epoch 18/20
7/7 [==============================] - 0s 64ms/step - loss: 0.0275
Epoch 19/20
7/7 [==============================] - 0s 63ms/step - loss: 0.0262
Epoch 20/20
7/7 [==============================] - 0s 68ms/step - loss: 0.0242
Train Score: 10.05 RMSE
|
emotion_training.ipynb | ###Markdown
TEST 1: No Stemming, No Stopwords
###Code
data = [preprocess_text(t) for t in raw_data] # Without stemming.
x_train, x_test, y_train, y_test = train_test_split(data, df.Field1.values, test_size=0.33, random_state=42)
result00 = crossValidation(text_clf, tuned_parameters, 10, x_train, y_train)
result00 = result00[["rank_test_score", "param_tfidf__use_idf","param_clf__alpha","param_vect__ngram_range",
"mean_test_score", "mean_train_score"]]
result00 = result00.sort_values(by="rank_test_score")
result00.head()
###Output
Best Score 0.572
clf__alpha: 0.1
tfidf__norm: 'l2'
tfidf__use_idf: False
vect__ngram_range: (1, 2)
###Markdown
TEST 2: With Stemming, no Stopwords
###Code
data = [preprocess_text(t, stemming=True) for t in raw_data] # With stemming.
x_train, x_test, y_train, y_test = train_test_split(data, df.Field1.values, test_size=0.33, random_state=42)
result01 = crossValidation(text_clf, tuned_parameters, 10, x_train, y_train)
result01 = result01[["rank_test_score", "param_tfidf__use_idf","param_clf__alpha","param_vect__ngram_range",
"mean_test_score", "mean_train_score"]]
result01 = result01.sort_values(by="rank_test_score")
result01.head()
###Output
Best Score 0.582
clf__alpha: 0.1
tfidf__norm: 'l2'
tfidf__use_idf: False
vect__ngram_range: (1, 2)
###Markdown
TEST 3: With Stopwords, no Stemming
###Code
data = [preprocess_text(t, stop_word=True) for t in raw_data] # With stopwords.
x_train, x_test, y_train, y_test = train_test_split(data, df.Field1.values, test_size=0.33, random_state=42)
result10 = crossValidation(text_clf, tuned_parameters, 10, x_train, y_train)
result10 = result10[["rank_test_score", "param_tfidf__use_idf","param_clf__alpha","param_vect__ngram_range",
"mean_test_score", "mean_train_score"]]
result10 = result10.sort_values(by="rank_test_score")
result10.head()
###Output
Best Score 0.561
clf__alpha: 1
tfidf__norm: 'l2'
tfidf__use_idf: True
vect__ngram_range: (1, 2)
###Markdown
TEST 4: With Stemming and Stopwords
###Code
data = [preprocess_text(t, True, True) for t in raw_data] # With stemming and stopwords.
x_train, x_test, y_train, y_test = train_test_split(data, df.Field1.values, test_size=0.33, random_state=42)
result11 = crossValidation(text_clf, tuned_parameters, 10, x_train, y_train)
result11 = result11[["rank_test_score", "param_tfidf__use_idf","param_clf__alpha","param_vect__ngram_range",
"mean_test_score", "mean_train_score"]]
result11 = result11.sort_values(by="rank_test_score")
result11.head()
###Output
Best Score 0.565
clf__alpha: 1
tfidf__norm: 'l2'
tfidf__use_idf: False
vect__ngram_range: (1, 2)
###Markdown
We have our results. Now we test with a (1,2) n-gram range, alpha = 0.1, and with both stopword processing and stemming enabled.
###Code
test = Pipeline([('vect', CountVectorizer(ngram_range=(1,2))),
('tfidf', TfidfTransformer(use_idf=False)),
('clf', MultinomialNB(alpha=0.1))])
data = [preprocess_text(t, stemming=True, stop_word=True) for t in raw_data] # With stemming and stopwords.
x_train, x_test, y_train, y_test = train_test_split(data, df.Field1.values, test_size=0.33, random_state=5)
test.fit(x_train, y_train)
predicted = test.predict(x_test)
print("Score of test is:")
print("%.3f"%(np.mean(predicted == y_test)))
###Output
Score of test is:
0.560
###Markdown
Most effective 10 words for each class
###Code
def show_top10(classifier, vectorizer, categories):
feature_names = np.asarray(vectorizer.get_feature_names())
for i, category in enumerate(categories):
top10 = np.argsort(classifier.coef_[i])[-10:]
print("%s: %s" % (category, " ".join(feature_names[top10])))
show_top10(test.steps[2][1], test.steps[0][1], test.classes_)
###Output
anger: person told tim moth becaus thi wer hav angry friend
disgust: som person felt disgust someon man felt peopl friend saw disgust
fear: tim bef would wer afraid hom alon car fear night
joy: happy tim year univers exam first aft got pass friend
sadness: grandmoth year dea rel aft fath felt sad died friend
shame: feel som thi day felt asham tim hav felt asham friend
|
DistributedRL/ExploreAlgorithm.ipynb | ###Markdown
Step 1 - Explore the AlgorithmIn this notebook you will get an overview of the reinforcement learning algorithm being used for this experiment and the implementation of distributed learning. This is not a tutorial on the basics of reinforcement learning - for a good introduction to the basics, see [this tutorial](https://medium.freecodecamp.org/deep-reinforcement-learning-where-to-start-291fb0058c01). Our algorithm is modeled after [this deep Q-learning algorithm from Google DeepMind](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) which has seen success in learning to play Atari video games.If you are just looking to run the model training without exploring the algorithm, you can skip ahead to **[Step 2 - Launch the Training Job](LaunchTrainingJob.ipynb)** if you are running this on a cluster or **[Step 2A: Launch Local Training Job](LaunchLocalTrainingJob.ipynb)** if you are running this locally.At this point, **[please start the AirSim executable](README.md#simulator-package)** on your local machine as the code presented in this notebook needs to connect to the executable to generate images. First, let's import some libraries.
###Code
%matplotlib inline
from Share.scripts_downpour.app.airsim_client import *
import numpy as np
import time
import sys
import json
import matplotlib.pyplot as plt
from IPython.display import clear_output
import time
import PIL
import PIL.ImageFilter
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model, clone_model, load_model
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Lambda, Input, concatenate
from keras.layers.normalization import BatchNormalization
from keras.layers.advanced_activations import ELU
from keras.optimizers import Adam, SGD, Adamax, Nadam, Adagrad, Adadelta
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, CSVLogger, EarlyStopping
import keras.backend as K
from keras.preprocessing import image
from keras.initializers import random_normal
###Output
_____no_output_____
###Markdown
The reward functionOne of the critical decisions to make when designing a reinforcement learning experiment is the definition of the reward function. For this tutorial, we define a very simple reward function which only takes into account the position of the car. In the experiment, the optimal position for our car is the center of the road, so we want to assign a high reward when the car is in the center, and a low reward when it is closer to the edge. We also want our reward function to be bounded in the range [0, 1] as it will be easier for our model to learn values within that range.> **Thought Exercise 1.1:** As you will soon see, the reward function defined here is very basic and doesn't take into account some important parameters. Can you point out some obvious considerations this reward function overlooks?> **Thought Exercise 1.2:** The next time you are out for a drive, take note of how things happening around you on the road (the behavior of other vehicles and pedestrians, traffic laws, road signs etc), the state of your car (your current speed, steering angle, acceleration etc) and your mental state (the urgency of getting to your destination, your overall stress/frustration level etc) result in you making decisions on the road. Reinforcement learning is unique as it is inspired by the behavioral psychology of human beings and animals. If you were to write a reward function for how you drive in real life, what would it look like?To compute our reward function, we begin by computing the distance to the center of the nearest road. We then pass that distance through an exponential weighting function to force this portion to the range [0, 1]. The full definition of the reward function can be seen below.
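###Markdown
In symbols, with $d$ the distance from the car to the nearest road centre line and $\lambda$ the `DISTANCE_DECAY_RATE` constant, the distance term computed below is simply $$reward = e^{-\lambda d},$$ which equals 1 on the centre line and decays smoothly towards 0 as the car drifts away (and the reward is set to 0 outright whenever the car is nearly stopped).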
###Code
def compute_reward(car_state, collision_info, road_points):
#Define some constant parameters for the reward function
THRESH_DIST = 3.5 # The maximum distance from the center of the road to compute the reward function
DISTANCE_DECAY_RATE = 1.2 # The rate at which the reward decays for the distance function
CENTER_SPEED_MULTIPLIER = 2.0 # The ratio at which we prefer the distance reward to the speed reward
# If the car is stopped, the reward is always zero
speed = car_state.speed
if (speed < 2):
return 0
#Get the car position
position_key = bytes('position', encoding='utf8')
x_val_key = bytes('x_val', encoding='utf8')
y_val_key = bytes('y_val', encoding='utf8')
car_point = np.array([car_state.kinematics_true[position_key][x_val_key], car_state.kinematics_true[position_key][y_val_key], 0])
# Distance component is exponential distance to nearest line
distance = 999
#Compute the distance to the nearest center line
for line in road_points:
local_distance = 0
length_squared = ((line[0][0]-line[1][0])**2) + ((line[0][1]-line[1][1])**2)
if (length_squared != 0):
t = max(0, min(1, np.dot(car_point-line[0], line[1]-line[0]) / length_squared))
proj = line[0] + (t * (line[1]-line[0]))
local_distance = np.linalg.norm(proj - car_point)
distance = min(distance, local_distance)
distance_reward = math.exp(-(distance * DISTANCE_DECAY_RATE))
return distance_reward
###Output
_____no_output_____
###Markdown
To visualize how our reward function works, we can plot the car state and print the reward function. In the figure below, the black lines are the precomputed centers of each road, and the blue dot is the current position of the car. At the intersections, we define a few possible paths that the car can take. As you drive the car around (using the keyboard), you will see the reward function change.
###Code
plt.figure()
# Reads in the reward function lines
def init_reward_points():
road_points = []
with open('Share\\data\\reward_points.txt', 'r') as f:
for line in f:
point_values = line.split('\t')
first_point = np.array([float(point_values[0]), float(point_values[1]), 0])
second_point = np.array([float(point_values[2]), float(point_values[3]), 0])
road_points.append(tuple((first_point, second_point)))
return road_points
#Draws the car location plot
def draw_rl_debug(car_state, road_points):
fig = plt.figure(figsize=(15,15))
print('')
for point in road_points:
plt.plot([point[0][0], point[1][0]], [point[0][1], point[1][1]], 'k-', lw=2)
position_key = bytes('position', encoding='utf8')
x_val_key = bytes('x_val', encoding='utf8')
y_val_key = bytes('y_val', encoding='utf8')
car_point = np.array([car_state.kinematics_true[position_key][x_val_key], car_state.kinematics_true[position_key][y_val_key], 0])
plt.plot([car_point[0]], [car_point[1]], 'bo')
plt.show()
reward_points = init_reward_points()
car_client = CarClient()
car_client.confirmConnection()
car_client.enableApiControl(False)
try:
while(True):
clear_output(wait=True)
car_state = car_client.getCarState()
collision_info = car_client.getCollisionInfo()
reward = compute_reward(car_state, collision_info, reward_points)
print('Current reward: {0:.2f}'.format(reward))
draw_rl_debug(car_state, reward_points)
time.sleep(1)
#Handle interrupt gracefully
except:
pass
###Output
Current reward: 0.00
###Markdown
Network architecture and transfer learningOur model uses images from the front-facing webcam as input. As we did in our end-to-end model, we select a small sub-portion of the image to feed to the model. This reduces the number of parameters in our model, making it train faster.The code below will take an image from AirSim, apply the preprocessing functions, and display the result.
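###Markdown
The crop used below keeps rows 76-134 and columns 0-254 of the RGBA frame and drops the alpha channel, so every input the network sees is a (59, 255, 3) array - the same shape declared later in the model's input layer. A quick standalone check of that arithmetic (the frame size here is a stand-in, not necessarily AirSim's actual resolution):
###Code
import numpy as np
dummy_frame = np.zeros((144, 256, 4), dtype=np.uint8)  # hypothetical (height, width, RGBA) frame
crop = dummy_frame[76:135, 0:255, 0:3]
# crop.shape == (59, 255, 3), matching Input(shape=(59, 255, 3)) in the model definition below
###Output
_____no_output_____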
###Code
def get_image(car_client):
image_response = car_client.simGetImages([ImageRequest(0, AirSimImageType.Scene, False, False)])[0]
image1d = np.frombuffer(image_response.image_data_uint8, dtype=np.uint8)
image_rgba = image1d.reshape(image_response.height, image_response.width, 4)
return image_rgba[76:135,0:255,0:3]
car_client = CarClient()
car_client.confirmConnection()
image = get_image(car_client)
image = plt.imshow(image)
###Output
Waiting for connection:
###Markdown
We utilize a very similar network architecture to the one used in our [end-to-end deep learning tutorial](https://github.com/Microsoft/AutonomousDrivingCookbook/tree/master/AirSimE2EDeepLearning) with three convolution layers. The input to the network is a single image frame taken from the front-facing webcam. The output is the predicted Q values for each of the possible actions that the model can take. The full network architecture is defined in the code snippet below.You can go about the training process in one of two ways. You can train your model from the ground up, which would mean you will kick off your training with random weights and see random behavior in your car as it tries to learn how to steer itself on the road. This random behavior will eventually turn into more expected behavior and you will be able to see the car learn how to make turns and stay on the road. This could take days to train, however.You could take a different approach to speed things up a little bit though. Using a technique called [Transfer Learning](https://journalofbigdata.springeropen.com/articles/10.1186/s40537-016-0043-6) you can leverage knowledge from a model you trained previously and apply it to this model. Transfer learning works on a very simple concept: using existing knowledge to learn new related things instead of learning them from scratch. The technique has become very popular for tasks like image classification where instead of training image classifiers from scratch for a given use case (which can require a very large amount of data), you take learned features from an existing network (VGGNet, ResNet, GoogleNet etc) and fine tune them to your use case using a much smaller amount of data.Luckily for us, we already have a model that learned how to steer itself on the road (see the [end-to-end deep learning tutorial](https://github.com/Microsoft/AutonomousDrivingCookbook/tree/master/AirSimE2EDeepLearning)). Even though we trained that model in a different simulation environment, the mechanics of data collection remain the same. The two tasks are quite similar which makes this a perfect candidate for transfer learning. If you decide to go the transfer learning route, you will notice that the initial behavior of the car is much less random. It still won't drive perfectly since, for one, our end-to-end model was not the best possible version of itself to begin with and, for another, it has never seen elements like other cars, houses etc. in its environment, which throw it off. This is still much better than starting from scratch though, and you will see that this technique will help your model converge much faster.
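###Markdown
The snippet below is a minimal, self-contained illustration of that weight-transfer pattern (it builds two toy single-layer models rather than loading the real end-to-end weights, whose file is not part of this notebook): weights are copied between identically named layers with get_weights()/set_weights(), and the receiving layer is frozen - which is exactly what the train_conv_layers flag controls in the model-definition cell below.
###Code
from keras.models import Model
from keras.layers import Input, Conv2D

# Toy stand-in for the pretrained end-to-end model (hypothetical, single conv layer).
src_in = Input(shape=(59, 255, 3))
src_out = Conv2D(16, (3, 3), padding='same', name='convolution0')(src_in)
src_model = Model(inputs=src_in, outputs=src_out)

# Toy stand-in for the new RL model, with an identically named, frozen conv layer.
dst_in = Input(shape=(59, 255, 3))
dst_out = Conv2D(16, (3, 3), padding='same', name='convolution0', trainable=False)(dst_in)
dst_model = Model(inputs=dst_in, outputs=dst_out)

# Copy the learned filters across by layer name, then keep them frozen during RL training.
dst_model.get_layer('convolution0').set_weights(src_model.get_layer('convolution0').get_weights())
###Output
_____no_output_____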
###Code
activation = 'relu'
# The main model input.
pic_input = Input(shape=(59,255,3))
train_conv_layers = False # For transfer learning, set to True if training ground up.
img_stack = Conv2D(16, (3, 3), name='convolution0', padding='same', activation=activation, trainable=train_conv_layers)(pic_input)
img_stack = MaxPooling2D(pool_size=(2,2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution1', trainable=train_conv_layers)(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Conv2D(32, (3, 3), activation=activation, padding='same', name='convolution2', trainable=train_conv_layers)(img_stack)
img_stack = MaxPooling2D(pool_size=(2, 2))(img_stack)
img_stack = Flatten()(img_stack)
img_stack = Dropout(0.2)(img_stack)
img_stack = Dense(128, name='rl_dense', kernel_initializer=random_normal(stddev=0.01))(img_stack)
img_stack=Dropout(0.2)(img_stack)
output = Dense(5, name='rl_output', kernel_initializer=random_normal(stddev=0.01))(img_stack)
opt = Adam()
action_model = Model(inputs=[pic_input], outputs=output)
action_model.compile(optimizer=opt, loss='mean_squared_error')
action_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 59, 255, 3) 0
_________________________________________________________________
convolution0 (Conv2D) (None, 59, 255, 16) 448
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 29, 127, 16) 0
_________________________________________________________________
convolution1 (Conv2D) (None, 29, 127, 32) 4640
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 14, 63, 32) 0
_________________________________________________________________
convolution2 (Conv2D) (None, 14, 63, 32) 9248
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 7, 31, 32) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 6944) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 6944) 0
_________________________________________________________________
rl_dense (Dense) (None, 128) 888960
_________________________________________________________________
dropout_2 (Dropout) (None, 128) 0
_________________________________________________________________
rl_output (Dense) (None, 5) 645
=================================================================
Total params: 903,941
Trainable params: 889,605
Non-trainable params: 14,336
_________________________________________________________________
|
prediccion_reserva_tranco_beas[cedex].ipynb | ###Markdown
Prediction of the monthly reserve of the Tranco de Beas reservoir (CEDEX) The models to be applied are exponential smoothing and an ARIMA model. 1 - Reading the data, creating the series and basic analysis. The data on which the analysis is performed correspond to the historical monthly water reserve, in $hm^3$, of the "Tranco de Beas" reservoir in the province of Jaén. These data were collected from the website of the Anuario de Aforos de España 2016-2017 (https://ceh.cedex.es/anuarioaforos/default.asp). This yearbook is maintained and updated annually by CEDEX (Centro de Estudios y Experimentación de Obras Públicas), which reports to the Ministry for the Ecological Transition and the Demographic Challenge.The download page for the gauging network of the Guadalquivir basin is: https://ceh.cedex.es/anuarioaforos/GUADALQUIVIR_csv.asp
###Code
archivo <- 'data/embalses_resmen.csv'
# Load the data file
library(readr)
datos <- read_delim("data/embalses_resmen.csv", ";",
escape_double = FALSE,
col_types = cols(salmes = col_double(),
entmes = col_double()),
locale = locale(decimal_mark = ","),
trim_ws = TRUE)
# The identifier code of the "Tranco de Beas" reservoir is "ref_ceh" = 5001
datos5001 <- subset(datos, ref_ceh == 5001)
head(datos5001)
tail(datos5001)
summary(datos5001)
###Output
_____no_output_____
###Markdown
* The data file contains the reserve series for the reservoirs catalogued in the 2017-2018 gauging yearbook. * The column holding the data is "resmes". * As can be seen, the measurements start in October 1944 and end in September 2018. For information, note that annual measurements in hydrology follow the hydrological year, which in Spain runs from October to September of the following year.
###Code
# Select the data column of interest
reserva <- datos5001$resmes
# Create the time series
reservats <- ts(reserva, start = c(1944,10), freq = 12)
# Take 12 years of data, as indicated in the assignment,
# but keep one additional year aside to later evaluate
# the fit of the models
# Series used to build the models
rests <- window(reservats, c(2004,1),c(2015,12))
rests
# Series used to evaluate the fit
rests.check <- window(reservats, c(2016,1),c(2016,12))
# Plot the series
plot(rests,
main = "Embalse Tranco de Beas. Reserva mensual 2004-2015",
xlab = "Fecha",
ylab = "Reserva hm3",
ylim = c(0.99*min(rests), 1.01*max(rests)))
# Look at the decomposition of the series
rests.decomp <- decompose(rests)
plot(rests.decomp)
###Output
_____no_output_____
###Markdown
We can see that the series has a fairly pronounced annual cycle.
###Code
# Check whether there is autocorrelation in the data
acf(rests)
###Output
_____no_output_____
###Markdown
Se observa que existe una correlación marcada en los datos. La función de autocorrelación disminuye con el lag pero no termina por converger a cero. Por tanto la serie no es estacionaria.Los lags en el eje x de la gráfica se refieren a unidades relativas de tiempo no a número de observaciones. En este caso los datos son mensuales por lo que, como se observa, en un año habrá 12 mediciones. A efectos esto se traduce en que el los lags están expresados en años. 2 - Análisis mediante suavizado exponencialhttps://en.wikipedia.org/wiki/Exponential_smoothingEn el siguiente ejemplo se aplica el método de Holt-Winters que consiste concretamente en un triple alisado exponencial:* Primer alisado: Incluye el efecto del dato previo de la serie.* Segundo aliasado: Incluye el efecto de la tendencia de serie.* Tercer alisado: Incluye el efecto de la estacionalidad de la serie.Esto deriva en que es necesario, al aplicar este método, determinar tres parámetros a partir de los datos de la series (alfa, beta y gamma)
###Code
# Apply Holt-Winters exponential smoothing
(rests.hw <- HoltWinters(rests))
# Look at some of the fitted values.
head(rests.hw$fitted)
# Total sum of squared errors
rests.hw$SSE
# Plot of the original series vs the smoothed series
plot(rests,
xlab = "Fecha",
ylab = "Reserva hm3",
ylim = c(0.99*min(rests), 1.01*max(rests)),
col = 'blue')
lines(rests.hw$fitted[,1], col="red")
legend("topleft",
legend=c("serie inicial","serie suavizada"),
col=c("blue","red"),lwd=2)
title(main = 'Embalse Tranco de Beas. Reserva mensual 2004-2015',
sub = 'Suavizado exponencial (Holt-Winters)')
# Prediction for 2016
rests.pred2016 <- predict(rests.hw, n.ahead=12)
plot(rests,
xlim = c(2004, 2017),
ylim = c(0.99*min(rests), 1.01*max(rests)),
main = "Embalse Tranco de Beas. Predicción de reserva 2016",
xlab = "Fecha",
ylab = "Reserva hm3",
col = 'blue')
lines(rests.pred2016, col = 'red')
lines(rests.check, col = 'dark green')
legend("topleft",
legend=c("serie hasta 2015","prediccion 2016",'datos reales 2016'),
col=c("blue","red",'dark green'),lwd=2)
# Using the forecast library
library(forecast)
(rests.pred2016b <- forecast(rests.hw, h=12))
plot(rests.pred2016b)
lines(rests.check, col = 'red')
legend("topleft",
legend=c("serie hasta 2015","prediccion 2016",'datos reales 2016'),
col=c("black","blue",'red'),lwd=2)
###Output
Registered S3 method overwritten by 'quantmod':
method from
as.zoo.data.frame zoo
###Markdown
As can be seen, although the predictions capture some of the historical behaviour of the series, they differ noticeably from the actual values.
###Code
# Autocorrelation function of the prediction residuals
acf(rests.pred2016b$residuals, na.action = na.pass)
###Output
_____no_output_____
###Markdown
The autocorrelation decreases progressively and, although with oscillations, converges to zero.
###Code
# Error in the 2016 prediction
errors <- data.frame(t(accuracy(rests.pred2016b$mean , rests.check)))
errors$description <- c('Error medio',
'Raíz del error cuadrático medio',
'Error medio absoluto',
'Porcentaje de error medio',
'Porcentaje de error medio absoluto',
'Autocorrelación de errores con retraso 1',
'Índice U de desigualdad de Theils')
errors
###Output
_____no_output_____
###Markdown
3 - Analysis using an ARIMA model 3.1 - Determine whether the series is stationary or needs some prior transformation. In the basic analysis above we saw that the series was not stationary, but let us apply the unit root test to confirm it.
###Code
library(tseries)
adf.test(rests)
###Output
_____no_output_____
###Markdown
p-value > 0.05, so the alternative hypothesis, which states that the series is stationary, is rejected. This confirms our initial assumption.
###Code
# Check whether the homoscedasticity condition (constant variance) holds
lambda <- BoxCox.lambda(rests, lower = 0, upper = 2)
lambda
###Output
_____no_output_____
###Markdown
The value of lambda is greater than 1, so the Box-Cox transformation needs to be applied.
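###Markdown
For reference, the Box-Cox transformation applied below is $$y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[4pt] \log y, & \lambda = 0, \end{cases}$$ so with the estimated $\lambda > 1$ it is a convex transformation of the (positive) reserve values, chosen so that the variance of the transformed series becomes approximately constant.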
###Code
# Apply the Box-Cox transformation
rests.BC <- BoxCox(rests, lambda)
rests.BC
BoxCox.lambda(rests.BC, lower = 0, upper = 2)
plot(rests.BC,
main = "Reserva del Embalse Tranco de Beas.Transformación Box-Cox",
xlab = "Fecha",
ylab = "Reserva (Transformación BoxCox)",
ylim = c(0.99*min(rests.BC), 1.01*max(rests.BC)))
###Output
_____no_output_____
###Markdown
3.2 - Test whether the transformed series can be considered stationary.
###Code
# Stationarity in the mean
acf(rests.BC, main = "FAS de Box-Cox(Reserva)")
# Unit root test on the transformed data
adf.test(rests.BC)
###Output
_____no_output_____
###Markdown
We can see that the series is still not stationary (p-value > 0.05).
###Code
# Try first-order differencing
rests.d1 <- diff(rests.BC, lag = 1, differences = 1)
acf(rests.d1, main="FAS de la primera diferencia de Box.Cox(Reserva)")
###Output
_____no_output_____
###Markdown
We saw earlier that the series has a seasonal component, so this effect must also be removed from the first-order differenced series.
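###Markdown
In operator notation, the two steps are the regular difference $\nabla z_t = z_t - z_{t-1}$ followed by the seasonal difference of period 12, $\nabla_{12} w_t = w_t - w_{t-12}$, so the series inspected below is $\nabla_{12}\nabla\, y_t = (y_t - y_{t-1}) - (y_{t-12} - y_{t-13})$ applied to the Box-Cox transformed reserve.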
###Code
rests.d12 <- diff(rests.d1, lag = 12, differences = 1)
acf(rests.d12, main = "FAS Box-Cox(Reserva) ) tras una diferencia de orden 1
y otra estacional")
###Output
_____no_output_____
###Markdown
With this, most of the autocorrelations have been reduced to the interval (-0.2, 0.2), so we can consider the series to be uncorrelated.
###Code
plot(rests.d12,
main = "Diferencias de Box-Cox(Reserva) tras una diferencia regular
de orden 1 y otra estacional",
xlab = "Fecha",
ylab = "Diferencias",
ylim = c(0.99*min(rests.d12), 1.01*max(rests.d12)))
# Unit root test
adf.test(rests.d12)
###Output
Warning message in adf.test(rests.d12):
"p-value smaller than printed p-value"
###Markdown
In this case the series is indeed stationary (p-value < 0.01) 3.3 - ARMA and ARIMA structure of the transformed data We apply the sample extended ACF method to see the best candidate models.
###Code
# Structure of the ARMA model
library(TSA)
eacf(rests.d12)
###Output
AR/MA
0 1 2 3 4 5 6 7 8 9 10 11 12 13
0 x x o o o o x x o o x x o o
1 o x o o o o o o o o o x o o
2 o x o o o o o o o o o x o o
3 x o o o o o o o o o o x o o
4 x o x o o o o o o o o x o o
5 x x o o o o o o o o o x o o
6 x x x o x o o o o o o x o o
7 x x o o o o o o o o o x o o
###Markdown
The result suggests examining the MA(2), AR(1), AR(2), ARMA(3,1), ARMA(4,1) or ARMA(5,2) models. To see which of these models is the most appropriate we apply the AIC criterion.
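###Markdown
Recall that $AIC = 2k - 2\ln\hat{L}$, where $k$ is the number of estimated parameters and $\hat{L}$ is the maximised likelihood; among candidate models fitted to the same data, the one with the smallest AIC offers the best trade-off between goodness of fit and model complexity.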
###Code
fit1 <- arima(rests.d12, order = c(0,0,1))
fit2 <- arima(rests.d12, order = c(1,0,0))
fit3 <- arima(rests.d12, order = c(2,0,1))
fit4 <- arima(rests.d12, order = c(3,0,1))
fit5 <- arima(rests.d12, order = c(4,0,1))
fit6 <- arima(rests.d12, order = c(5,0,2))
c(fit1$aic, fit2$aic, fit3$aic, fit4$aic, fit5$aic, fit6$aic)
# Try the auto.arima function.
auto.arima(rests.d12)
###Output
_____no_output_____
###Markdown
We find that an AR(1) model for the first-order differences together with an ARMA(2,1) model for the seasonal part (lag = 12) is the model with the lowest AIC. 3.4 - Diagnostics of the final model We analyse the residuals to detect any sign of non-randomness.
###Code
fit <- arima(rests.d12, order=c(1,0,0), seasonal = list(order = c(2, 0, 1), period = 12))
tsdiag(fit)
###Output
_____no_output_____
###Markdown
The autocorrelation function stays within the interval (-0.2, 0.2), which indicates very little correlation between the residuals. In addition, the p-values of the Box-Ljung test (H0: independent residuals, H1: dependent residuals) remain above 0.05, so we keep the null hypothesis. 3.5 - Forecasting results
###Code
# Prediction of the differenced series with the selected ARIMA model
plot(rests.d12, xlim = c(2004,2017))
rests.d12_pred <-predict(fit, n.ahead = 12)
lines(rests.d12_pred$pred, col="red")
# Combine the differenced time series and the prediction
# into a single series
rests.all <- c(rests.d12, rests.d12_pred$pred)
rests.all <- ts(rests.all, start=c(2004,1),freq=12)
plot(rests.all, type="l")
# Invert all the transformations
# First, undo the seasonal difference
rests.inv1 <- diffinv(rests.all, lag = 12, differences = 1,
xi = c(rests.d1[1] ,rests.d1[2], rests.d1[3],
rests.d1[4], rests.d1[5], rests.d1[6],
rests.d1[7], rests.d1[8], rests.d1[9],
rests.d1[10], rests.d1[11], rests.d1[12]))
# Second, undo the first-order difference
rests.inv2 <- diffinv(rests.inv1, lag = 1, differences = 1,
xi = rests.BC[1])
# And third, the Box-Cox transformation
rests.inv3 <- InvBoxCox(rests.inv2,lambda)
rests.global <- ts(rests.inv3, start = 2004, freq = 12)
# Plot the original series together with the prediction
plot(rests.global,
type = "l",
xlim = c(2004,2017),
ylim = c(0.99*min(rests.global), 1.01*max(rests.global)),
col = 'red')
lines(rests, col="blue")
lines(rests.check, col = 'dark green')
legend("topleft",
legend=c("serie hasta 2015","prediccion 2016",'datos reales 2016'),
col=c("blue","red",'dark green'),lwd=2)
# Using the forecast library on the original (untransformed) series
ajuste <- auto.arima(rests)
ajuste
rests.fcast <- forecast(ajuste, h=12)
plot(rests.fcast)
lines(rests.check, col = 'red')
###Output
_____no_output_____
###Markdown
We can see that with forecast the solution is practically identical, and the AIC obtained is actually lower than that of the model built step by step. Still, going through the process step by step is necessary to understand the mechanism and the details. In principle, from a practical point of view it seems more advisable to use forecast; nevertheless, the error-computation section will show which one gives the better result. 3.6 - Computing the errors
###Code
# Errors of the step-by-step ARIMA model with transformations
(errors_steps <- data.frame(t(accuracy(window(rests.global,2016), rests.check))))
# Errors of the ARIMA model obtained from the original series
# without prior transformations
(errors_forecast <- data.frame(t(accuracy(rests.fcast$mean , rests.check))))
###Output
_____no_output_____
###Markdown
4 - Evaluation of the models The errors computed for the three models (exponential smoothing and the two ARIMA models) are presented in a single table.
###Code
errores <- cbind(errors_steps, errors_forecast, errors)
errores <- errores[, c(4,1,2,3)]
colnames(errores)[2:4] <- c('arima_steps','arima_forecast','suav_exp')
errores
###Output
_____no_output_____ |
ICCT_it/examples/03/FD-02_Diagramma_di_Bode.ipynb | ###Markdown
Bode PlotThroughout the following example, we will analyse the magnitude and phase transfer characteristics of Linear Time-Invariant (LTI) systems in the frequency domain. These properties are usually displayed in a pair of plots, called a Bode plot. The magnitude is expressed in decibels, the phase in degrees, and both are plotted as a function of the frequency or angular frequency (usually on a logarithmic scale) of the sinusoidal input signal.By analysing these plots it is possible to determine some of the dynamic properties of the systems they represent.Select a system type!
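###Markdown
Concretely, for a transfer function $G(s)$ evaluated on the imaginary axis $s = j\omega$, the two curves are the magnitude $A(\omega) = 20\log_{10}\lvert G(j\omega)\rvert$ in dB and the phase $\phi(\omega) = \arg G(j\omega)$ in degrees, both plotted against $\omega$ on a logarithmic axis.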
###Code
def print_model(model):
print ('\nModello del sistema selezionato:')
if model == 0:
display(Math(r'$$G(s)=\frac{s-Z}{s-P}$$'))
elif model == 1:
display(Math(r'$$G(s)=\frac{K_i(s-Z)}{s(s-P)}$$'))
elif model == 2:
display(Math(r'$$G(s)=\frac{K_d\cdot s}{(s-P)}$$'))
elif model == 3:
display(Math(r'$$G(s)=\frac{s-Z}{(s-P_1)(s-P_2)}$$'))
else:
display(Math(r'$$G(s)=\frac{s-Z}{s^2+2\zeta\omega_0s+{\omega_0}^2}$$'))
systemSelect = w.ToggleButtons(
options=[('Primo ordine', 0), ('Primo ordine con integratore', 1), ('Primo ordine con zero nell\'origine', 2),
('Secondo ordine sovrasmorzato', 3), ('Secondo ordine sottosmorzato', 4)],
description='Sistema: ', layout=w.Layout(width='100%'))
systemSelect.style.button_width='48%'
input_data = w.interactive_output(print_model, {'model': systemSelect})
display(systemSelect, input_data)
###Output
_____no_output_____
###Markdown
The Bode plot can be approximated by asymptotic lines, which are easy to compute by hand using the following rules:Magnitude plot: At each pole the slope decreases, and at each zero it increases, by 20 dB per decade. The effects of coincident poles and zeros combine. The initial slope is determined by the number of zeros and poles not shown on the plot (for example integral and derivative components), computed according to the previous rule. With no poles or zeros outside the plotted area, the slope is horizontal. The initial value is determined by substituting the starting point into the equation $M_{start}=|G(j\omega_{start})|$, where $s=j\omega$. Right-half-plane (unstable) poles and zeros act in the opposite way to their stable counterparts. They are not represented in this example. Phase plot: If the static gain (K) of the function $G(s)= \prod{K\frac{(b_i-Z_i)}{(a_i-P_i)}}$ is positive, the initial phase is 0°, otherwise -180°. Poles and zeros located before the starting point (for example integral and derivative components) increase (zeros) or decrease (poles) the initial phase by 90°. Poles decrease and zeros increase the phase by 90°, which can be represented by a 45°-per-decade slope around their frequency (starting one decade before and ending one decade after). Overlapping components produce combined effects. As in the magnitude plot, right-half-plane (unstable) poles and zeros act in the opposite way to their stable counterparts, and they are not represented in this example. Real-valued poles and zeros are placed directly on the Bode plot at their absolute values; for complex pairs, $\omega_0$ must be computed instead and the pair is represented together. If the frequency axis of the Bode plot is drawn in Hz, all values must be divided by $2\pi$!The asymptotic plot can be refined further by adding peaks and curvature, but this example is limited to showing only the straight lines joining the main points.Select the system parameters and observe the changes in the Bode plot!In which regions does the asymptotic approximation represent the full plot well? Why?
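###Markdown
As a quick numerical illustration of the starting-value rule (a made-up first-order system, not one of the models selectable above):
###Code
import numpy as np
# Hypothetical system G(s) = (s + 1)/(s + 10), evaluated at a starting frequency of 0.01 rad/s.
G = lambda s: (s + 1) / (s + 10)
w_start = 0.01
M_start_db = 20 * np.log10(abs(G(1j * w_start)))
# M_start_db is approximately -20 dB: the level at which the asymptotic magnitude plot begins.
###Output
_____no_output_____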
###Code
def calculate_tf(P1, P2, Z, Zb, model):
if model == 0:
if Zb:
W = c.tf([1, Z], [1, P1])
else:
W = c.tf([1], [1, P1])
elif model == 1:
if Zb:
            W = c.tf([P2, P2*Z], [1, P1, 0])  # stable pole at -P1, consistent with draw_bode below
else:
W = c.tf([P2], [1, P1, 0])
elif model == 2:
W = c.tf([P2, 0], [1, P1])
elif model == 3:
if Zb:
W = c.tf([1, Z], [1, P1+P2, P1*P2])
else:
W = c.tf([1], [1, P1+P2, P1*P2])
else:
if Zb:
W = c.tf([1, Z], [1, 2*P1*P2, P1*P1])
else:
W = c.tf([1], [1, 2*P1*P2, P1*P1])
print('\n Funzione di trasferimento:')
print(W)
poles, zeros = c.pzmap(W, Plot=False)
print('Zeri del sistema:')
print(zeros)
print('Poli del sistema:')
print(poles)
def draw_controllers(model):
global P1_slider, P2_slider, Z_slider, Z_button
if model == 0:
P1_slider = w.FloatLogSlider(value=0.5, base=10, min=-3, max=3, description='Pole', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
P2_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=True)
Z_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Zero', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_button = w.ToggleButton(value=True, description='Aggiungi/rimuovi lo zero',
layout=w.Layout(width='auto'), disabled=False)
elif model == 1:
P1_slider = w.FloatLogSlider(value=0.5, base=10, min=-3, max=3, description='Pole', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
P2_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Ki', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Zero', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_button = w.ToggleButton(value=True, description='Aggiungi/rimuovi lo zero',
layout=w.Layout(width='auto'), disabled=False)
elif model == 2:
P1_slider = w.FloatLogSlider(value=0.5, base=10, min=-3, max=3, description='Pole', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
P2_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Kd', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Zero', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=True)
Z_button = w.ToggleButton(value=True, description='Aggiungi/rimuovi lo zero',
layout=w.Layout(width='auto'), disabled=True)
elif model == 3:
P1_slider = w.FloatLogSlider(value=0.5, base=10, min=-3, max=3, description='Pole 1', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
P2_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Pole 2', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Zero', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_button = w.ToggleButton(value=True, description='Aggiungi/rimuovi lo zero',
layout=w.Layout(width='auto'), disabled=False)
else:
P1_slider = w.FloatLogSlider(value=0.5, base=10, min=-3, max=3, description=r'$\omega_0$', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
P2_slider = w.FloatLogSlider(value=1, base=10, min=-4, max=1, description=r'$\zeta$', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_slider = w.FloatLogSlider(value=1, base=10, min=-3, max=3, description='Zero', continuous_update=False,
layout=w.Layout(width='auto', flex='5 5 auto'), disabled=False)
Z_button = w.ToggleButton(value=True, description='Aggiungi/rimuovi lo zero',
layout=w.Layout(width='auto'), disabled=False)
input_data2 = w.interactive_output(calculate_tf, {'P1': P1_slider, 'P2': P2_slider, 'Z': Z_slider,
'Zb': Z_button, 'model': systemSelect})
display(w.HBox([P1_slider, P2_slider, Z_button, Z_slider]), input_data2)
w.interactive_output(draw_controllers, {'model': systemSelect})
# Figure definition
fig1, ((f1_ax1), (f1_ax2)) = plt.subplots(2, 1)
fig1.set_size_inches((9.8, 5))
fig1.set_tight_layout(True)
f1_line1, = f1_ax1.plot([], [], lw=1, color='dimgrey')
f1_line3, = f1_ax1.plot([], [], lw=1, color='dimgrey')
f1_line2, = f1_ax2.plot([], [], lw=1.5, color='limegreen')
f1_line4, = f1_ax2.plot([], [], lw=1.5, color='limegreen')
f1_line5, = f1_ax1.plot([], [], color='blue', ls='--')
f1_line6, = f1_ax1.plot([], [], color='blue', ls='--')
f1_line7, = f1_ax1.plot([], [], color='red', ls='--')
f1_line8, = f1_ax2.plot([], [], color='blue', ls='--')
f1_line9, = f1_ax2.plot([], [], color='blue', ls='--')
f1_line10, = f1_ax2.plot([], [], color='red', ls='--')
f1_line11, = f1_ax2.plot([], [])
f1_line12, = f1_ax2.plot([], [])
f1_line13, = f1_ax2.plot([], [])
f1_line14, = f1_ax2.plot([], [])
f1_line15, = f1_ax2.plot([], [])
f1_line16, = f1_ax2.plot([], [])
f1_ax1.grid(which='both', axis='both', color='lightgray')
f1_ax2.grid(which='both', axis='both', color='lightgray')
f1_ax1.autoscale(enable=True, axis='x', tight=True)
f1_ax2.autoscale(enable=True, axis='x', tight=True)
f1_ax1.autoscale(enable=True, axis='y', tight=False)
f1_ax2.autoscale(enable=True, axis='y', tight=False)
f1_ax1.set_title('Diagramma del modulo', fontsize=11)
f1_ax1.set_xscale('log')
f1_ax1.set_xlabel(r'$\omega\/[\frac{rad}{s}]$', labelpad=0, fontsize=10)
f1_ax1.set_ylabel(r'$A\/$[dB]', labelpad=0, fontsize=10)
f1_ax1.tick_params(axis='both', which='both', pad=0, labelsize=8)
f1_ax2.set_title('Diagramma della fase', fontsize=11)
f1_ax2.set_xscale('log')
f1_ax2.set_xlabel(r'$\omega\/[\frac{rad}{s}]$', labelpad=0, fontsize=10)
f1_ax2.set_ylabel(r'$\phi\/$[°]', labelpad=0, fontsize=10)
f1_ax2.tick_params(axis='both', which='both', pad=0, labelsize=8)
f1_ax1.legend([f1_line1, f1_line2, f1_line5, f1_line7], ['Esatto', 'Asintotico', 'Poli', 'Zeri'], loc='upper right')
f1_ax2.legend([f1_line3, f1_line4, f1_line8, f1_line10], ['Esatto', 'Asintotico', 'Poli', 'Zeri'], loc='upper right')
# System model
def draw_bode(P1, P2, Z, Zb, model):
if model == 0:
if Zb:
W = c.tf([1, Z], [1, P1])
else:
W = c.tf([1], [1, P1])
elif model == 1:
if Zb:
W = c.tf([P2, P2*Z], [1, P1, 0])
else:
W = c.tf([P2], [1, P1, 0])
elif model == 2:
W = c.tf([P2, 0], [1, P1])
elif model == 3:
if Zb:
W = c.tf([1, Z], [1, P1+P2, P1*P2])
else:
W = c.tf([1], [1, P1+P2, P1*P2])
else:
if Zb:
W = c.tf([1, Z], [1, 2*P1*P2, P1*P1])
else:
W = c.tf([1], [1, 2*P1*P2, P1*P1])
_, _, ob = c.bode_plot(W, Plot=False) # Small resolution plot to determine bounds
mag, phase, omega = c.bode_plot(W, omega=np.logspace(np.log10(ob[0]), np.log10(ob[-1]), 100), Plot=False) # Bode-plot
poles, zeros = c.pzmap(W, Plot=False) # Poles and zeros
log_omega = np.log10(omega)
mag_approx = np.full_like(mag, 20 * np.log10(mag[0]))
phase_approx = np.full_like(phase, 0)
pole_x = []
zero_x = []
break_x = []
for p in poles:
if p.imag == 0:
om = abs(p.real)
else:
om = np.sqrt(p.real*p.real + p.imag*p.imag)
if om == 0:
mag_approx = mag_approx - 20 * (log_omega - np.log10(omega[0]))
phase_approx = phase_approx - 90
else:
mag_approx = mag_approx - 20 * np.maximum(log_omega - np.log10(om), 0)
phase_approx = phase_approx + 45 * np.maximum(log_omega - np.log10(om) - 1, 0)
phase_approx = phase_approx - 45 * np.maximum(log_omega - np.log10(om) + 1, 0)
pole_x.append(om)
break_x.append(om/10)
break_x.append(om*10)
for z in zeros:
if z.imag == 0:
om = abs(z.real)
else:
om = np.sqrt(z.real*z.real + z.imag*z.imag)
if om == 0:
mag_approx = mag_approx + 20 * (log_omega - np.log10(omega[0]))
phase_approx = phase_approx + 90
else:
mag_approx = mag_approx + 20 * np.maximum(log_omega - np.log10(om), 0)
phase_approx = phase_approx - 45 * np.maximum(log_omega - np.log10(om) - 1, 0)
phase_approx = phase_approx + 45 * np.maximum(log_omega - np.log10(om) + 1, 0)
zero_x.append(om)
break_x.append(om/10)
break_x.append(om*10)
global f1_line1, f1_line2, f1_line3, f1_line4
global f1_line5, f1_line6, f1_line7, f1_line8, f1_line9, f1_line10
global f1_line11, f1_line12, f1_line13, f1_line14, f1_line15, f1_line16
f1_ax1.lines.remove(f1_line1)
f1_ax1.lines.remove(f1_line3)
f1_ax2.lines.remove(f1_line2)
f1_ax2.lines.remove(f1_line4)
f1_line1, = f1_ax1.plot(omega, 20*np.log10(mag), lw=1, color='dimgrey')
f1_line3, = f1_ax1.plot(omega, mag_approx, lw=1.5, color='limegreen')
f1_line2, = f1_ax2.plot(omega, phase*180/np.pi, lw=1, color='dimgrey')
f1_line4, = f1_ax2.plot(omega, phase_approx, lw=1.5, color='limegreen')
f1_ax1.lines.remove(f1_line5)
f1_ax1.lines.remove(f1_line6)
f1_ax1.lines.remove(f1_line7)
f1_ax2.lines.remove(f1_line8)
f1_ax2.lines.remove(f1_line9)
f1_ax2.lines.remove(f1_line10)
if len(pole_x) >= 1:
f1_line5 = f1_ax1.axvline(pole_x[0], color='blue', ls='--', ymin=0.03, ymax=0.97, marker='v')
f1_line8 = f1_ax2.axvline(pole_x[0], color='blue', ls='--', ymin=0.03, ymax=0.97, marker='v')
else:
f1_line5, = f1_ax1.plot([], [])
f1_line8, = f1_ax2.plot([], [])
if len(pole_x) == 2:
f1_line6 = f1_ax1.axvline(pole_x[1], color='blue', ls='--', ymin=0.03, ymax=0.97, marker='v')
f1_line9 = f1_ax2.axvline(pole_x[1], color='blue', ls='--', ymin=0.03, ymax=0.97, marker='v')
else:
f1_line6, = f1_ax1.plot([], [])
f1_line9, = f1_ax2.plot([], [])
if len(zero_x) == 1:
f1_line7 = f1_ax1.axvline(zero_x[0], color='red', ls='--', ymin=0.03, ymax=0.97, marker='^')
f1_line10 = f1_ax2.axvline(zero_x[0], color='red', ls='--', ymin=0.03, ymax=0.97, marker='^')
else:
f1_line7, = f1_ax1.plot([], [])
f1_line10, = f1_ax2.plot([], [])
f1_ax2.lines.remove(f1_line11)
f1_ax2.lines.remove(f1_line12)
f1_ax2.lines.remove(f1_line13)
f1_ax2.lines.remove(f1_line14)
f1_ax2.lines.remove(f1_line15)
f1_ax2.lines.remove(f1_line16)
if len(break_x) >= 1:
f1_line11 = f1_ax2.axvline(break_x[0], color='limegreen', lw=0.5, ls=(0, (8, 5)))
f1_line12 = f1_ax2.axvline(break_x[1], color='limegreen', lw=0.5, ls=(0, (8, 5)))
else:
f1_line11, = f1_ax2.plot([], [])
f1_line12, = f1_ax2.plot([], [])
if len(break_x) >= 3:
f1_line13 = f1_ax2.axvline(break_x[2], color='limegreen', lw=0.5, ls=(0, (8, 5)))
f1_line14 = f1_ax2.axvline(break_x[3], color='limegreen', lw=0.5, ls=(0, (8, 5)))
else:
f1_line13, = f1_ax2.plot([], [])
f1_line14, = f1_ax2.plot([], [])
if len(break_x) >= 5:
f1_line15 = f1_ax2.axvline(break_x[4], color='limegreen', lw=0.5, ls=(0, (8, 5)))
f1_line16 = f1_ax2.axvline(break_x[5], color='limegreen', lw=0.5, ls=(0, (8, 5)))
else:
f1_line15, = f1_ax2.plot([], [])
f1_line16, = f1_ax2.plot([], [])
f1_ax1.relim()
f1_ax2.relim()
f1_ax1.autoscale_view()
f1_ax2.autoscale_view()
def link_controls(model):
w.interactive_output(draw_bode, {'P1': P1_slider, 'P2': P2_slider, 'Z': Z_slider,
'Zb': Z_button, 'model': systemSelect})
w.interactive_output(link_controls, {'model': systemSelect})
###Output
_____no_output_____ |
03-tabular/pdp_plots.ipynb | ###Markdown
Partial Dependence Plots
###Code
# !pip install scikit-learn==1.0.1
import sklearn
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler
from sklearn.inspection import PartialDependenceDisplay
print('The scikit-learn version is {}.'.format(sklearn.__version__))
###Output
The scikit-learn version is 1.0.1.
###Markdown
Collect the dataset
###Code
cal_housing = fetch_california_housing()
print(cal_housing.DESCR)
X = cal_housing.data
y = cal_housing.target
cal_features = cal_housing.feature_names
df = pd.concat((pd.DataFrame(X, columns=cal_features),
pd.DataFrame({'MedianHouseVal': y})), axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Train an MLP model
###Code
# Create models
mlp_reg = MLPRegressor(hidden_layer_sizes=[30, 20, 10],
max_iter=500)
# Create pipeline
transformer = ColumnTransformer([
('numerical', MinMaxScaler(feature_range=(-1,1)), cal_features),
])
mlp_pipeline = Pipeline(steps=[
('transform', transformer),
('model', mlp_reg)
])
# Create dataset
X_train, X_test, y_train, y_test = train_test_split(df[cal_features], y, test_size=0.2)
mlp_pipeline.fit(X_train, y_train)
def compute_rmse(preds, labels):
return np.sqrt(np.mean((preds - labels)**2))
model_rmse_error = compute_rmse(mlp_pipeline.predict(X_test), y_test)
print(f'Root mean squared error of MLP model: {model_rmse_error}')
###Output
Root mean squared error of MLP model: 0.6288807413078079
###Markdown
Let's look at the partial dependence plot for the feature `MedInc`
###Code
PartialDependenceDisplay.from_estimator(
mlp_pipeline, X_train, features=['MedInc']
)
plt.title("Partial Dependence plot for 'MedInc'")
plt.show()
for feature in cal_features:
PartialDependenceDisplay.from_estimator(
mlp_pipeline, X_train, features=[feature])
###Output
_____no_output_____
###Markdown
Compare the interaction between two features
###Code
from scipy.stats import pearsonr
corr_coeff = pearsonr(df['MedInc'], df['AveRooms'])[0]
print(f'Pearson correlation coeff: {corr_coeff}')
sns.kdeplot(x=df['MedInc'])
sns.rugplot(x=df['MedInc'])
plt.title("Feature Distribution for 'MedInc'")
plt.show()
PartialDependenceDisplay.from_estimator(
mlp_pipeline, X_train, features=['MedInc'])
sns.rugplot(data=X_train, x='MedInc', color='orange')
plt.title("Partial Dependence for 'MedInc'")
plt.ylim(1, 4)
plt.show()
sns.kdeplot(x=df['HouseAge'])
sns.rugplot(x=df['HouseAge'])
plt.title("Feature Distribution for 'HouseAge'")
plt.show()
PartialDependenceDisplay.from_estimator(
mlp_pipeline, X_train, features=['HouseAge'])
sns.rugplot(data=X_train, x='HouseAge', color='orange')
plt.title("Partial Dependence for 'HouseAge'")
plt.ylim(2, 2.5)
plt.show()
PartialDependenceDisplay.from_estimator(
mlp_pipeline, X_train, features=[('HouseAge', 'MedInc')])
plt.title('Partial Dependence plot for (HouseAge, MedInc)')
plt.show()
###Output
_____no_output_____
###Markdown
For a Classification TaskWe'll use the heart disease risk dataset.
###Code
import tensorflow as tf
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
print(f'The tensorflow version is {tf.__version__}.')
csv_file = tf.keras.utils.get_file(
'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv')
df = pd.read_csv(csv_file)
df.head()
df['FastingBS'] = df.fbs.apply(lambda x: 'T' if x==1 else 'F')  # bracket assignment so a real column is created
# Categorical variables
cat_vars = ['sex', 'cp', 'fbs', 'restecg', 'exang', 'slope', 'ca', 'thal']
# Numerical Variables
num_vars = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak']
X = df[num_vars + cat_vars]
# Label
y = df['target']
# Create dataset
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.2, random_state=42)
# Create models
gb_clf = GradientBoostingClassifier(
n_estimators=100, learning_rate=1.0,
max_depth=1, random_state=0
)
# Create pipeline
transformer = ColumnTransformer([
('numerical', StandardScaler(), num_vars),
('categorical', OneHotEncoder(), cat_vars),
])
clf_pipeline = Pipeline(steps=[
('transform', transformer),
('classifier', gb_clf)
])
clf_pipeline.fit(X_train, y_train)
clf_pipeline.score(X_test, y_test)
for feature in num_vars:
PartialDependenceDisplay.from_estimator(clf_pipeline, X_train, features=[feature])
age_pdp = PartialDependenceDisplay.from_estimator(clf_pipeline, X_train, features=['chol'])
age_rug = sns.rugplot(data=X_train, x='chol', color='orange')
plt.xlabel('Cholesterol')
plt.show()
age_pdp = PartialDependenceDisplay.from_estimator(clf_pipeline, X_train, features=['thalach'])
age_rug = sns.rugplot(data=X_train, x='thalach', color='orange')
plt.xlabel("Max Heart Rate")
plt.show()
###Output
_____no_output_____
###Markdown
Dealing with Categorical Features First for the sex feature
###Code
# First, fix the 'sex' feature to a single value for every row (1 corresponds to male in the usual UCI encoding)
X_pdp = X.copy()
X_pdp['sex'] = [1]*X_pdp.shape[0]
print(clf_pipeline.predict(X_pdp).mean())
cat_pdp_dict = {}
for cat_var in cat_vars:
feature_pdp_dict = {}
feature_vals = X_train[cat_var].unique()
for feature_val in feature_vals:
X_pdp = X_train.copy()
X_pdp[cat_var] = [feature_val]*X_pdp.shape[0]
feature_pdp_dict[feature_val] = clf_pipeline.predict(X_pdp).mean()
cat_pdp_dict[cat_var] = feature_pdp_dict
cat_pdp_dict
for cat_var in cat_vars:
plt.bar(*zip(*sorted(cat_pdp_dict[cat_var].items())))
plt.title(cat_var)
plt.show()
###Output
_____no_output_____
###Markdown
Working with Multi-Class classification
###Code
from sklearn.multiclass import OneVsRestClassifier
from sklearn.neural_network import MLPClassifier
# To download the dataset.
!wget http://www3.dsi.uminho.pt/pcortez/wine/winequality.zip
!unzip winequality.zip
df = pd.read_csv("./winequality/winequality-red.csv", sep=';')
df.head()
df.quality.unique()
numeric = ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar',
'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density',
'pH', 'sulphates', 'alcohol']
X = df[numeric]
y = df['quality']
multi_clf = OneVsRestClassifier(
MLPClassifier(
hidden_layer_sizes=[256, 128, 64, 32],
max_iter=750)
).fit(X, y)
# Create pipeline
transformer = ColumnTransformer([
('numerical', StandardScaler(), numeric),
])
multi_clf_pipeline = Pipeline(steps=[
('transform', transformer),
('classifier', multi_clf)
])
multi_clf_pipeline.fit(X, y)
multi_clf_pipeline.score(X, y)
fig, ((ax3, ax4, ax5), (ax6, ax7, ax8)) = plt.subplots(2, 3, figsize=(18, 10))
fig.suptitle('Multi-class partial dependence')
target3_disp = PartialDependenceDisplay.from_estimator(
multi_clf_pipeline, X, features=['citric acid'], target=3, ax=ax3)
target4_disp = PartialDependenceDisplay.from_estimator(
multi_clf_pipeline, X, features=['citric acid'], target=4, ax=ax4)
target5_disp = PartialDependenceDisplay.from_estimator(
multi_clf_pipeline, X, features=['citric acid'], target=5, ax=ax5)
target6_disp = PartialDependenceDisplay.from_estimator(
multi_clf_pipeline, X, features=['citric acid'], target=6, ax=ax6)
target7_disp = PartialDependenceDisplay.from_estimator(
multi_clf_pipeline, X, features=['citric acid'], target=7, ax=ax7)
target8_disp = PartialDependenceDisplay.from_estimator(
multi_clf_pipeline, X, features=['citric acid'], target=8, ax=ax8)
ax3.set_title("Target: 3")
ax4.set_title("Target: 4")
ax5.set_title("Target: 5")
ax6.set_title("Target: 6")
ax7.set_title("Target: 7")
ax8.set_title("Target: 8")
plt.show()
###Output
_____no_output_____ |
notebooks/JSON_to_Dataframe_from_API.ipynb | ###Markdown
**Dataframe from the Public API** Library
###Code
import json
import pandas as pd
from urllib.request import urlopen
###Output
_____no_output_____
###Markdown
**get JSON data**
###Code
def getJson(urlLoc):
with urlopen(urlLoc) as rsp:
gotRaw = rsp.read()
data = json.loads(gotRaw)
return data
#location = "https://api.covid19india.org/misc.json"
# get location
location = input("Enter the location (URL): ")
data = getJson(location)
data.keys()
###Output
Enter the location (URL): https://api.covid19india.org/misc.json
###Markdown
--- **Creating dataframes**
###Code
df_district_meta_data = pd.DataFrame(data['district_meta_data'])
df_district_meta_data.head()
df_state_meta_data = pd.DataFrame(data['state_meta_data'])
df_state_meta_data.head()
###Output
_____no_output_____ |
Twitter_users_metrics.ipynb | ###Markdown
Users analysisCharts about Twitter users' metrics are very heavy, so they have been moved from the Twitter notebook to here. It could be interesting to analyze users' metrics; in fact, such metrics could be very useful for deeper analyses (weighted tweets and so on)
###Code
import pandas as pd
import altair as alt
from palette import palette
alt.data_transformers.enable('json')
base_path = './Datasets/'
top_tweeters_csv = base_path + 'top_tweeters.csv'
top_retweeted_csv = base_path + 'top_retweeted.csv'
words_avg_csv = base_path + 'words_avg.csv'
###Output
_____no_output_____
###Markdown
Tweets
###Code
top_tweeters = pd.read_csv(top_tweeters_csv).iloc[1:] # First element groups users without username
top_tweeters = pd.DataFrame(top_tweeters['tweets'], columns=['tweets'])
top_tweeters['outliers'] = top_tweeters['tweets']
top_tweeters['no outliers'] = top_tweeters['tweets']
top_tweeters = top_tweeters[['outliers', 'no outliers']]
top_tweeters_long = top_tweeters.melt(value_name='tweets', var_name='viz')
alt.Chart(top_tweeters_long[top_tweeters_long['viz'] == 'no outliers'], title='Tweets per user').mark_boxplot(size=10, outliers=False, median=True, color=palette['twitter']).encode(alt.X('tweets:Q', title=None),
alt.Y('viz:N', axis=None)).properties(width=500, height=300)
alt.Chart(top_tweeters_long[top_tweeters_long['viz'] == 'no outliers'], title='Tweets per user').mark_boxplot(size=10, outliers=False, median=True, color=palette['twitter']).encode(alt.X('tweets:Q', title=None), alt.Y('viz:N', title=None)).properties(width=500, height=300) + \
alt.Chart(top_tweeters_long[top_tweeters_long['viz'] == 'outliers'], title='Tweets per user').mark_boxplot(size=10, outliers=True, median=True, color=palette['twitter']).encode(alt.X('tweets:Q', title=None), alt.Y('viz:N', title=None)).properties(width=500, height=300)
###Output
_____no_output_____
###Markdown
As can be seen, there are some outliers that are very far from the median; this indicates the presence of spammers or bots (thousands of tweets from a single human user are implausible). An upgrade to the data processing could be to recognize such outliers and remove them. Retweets
###Code
top_retweeted = pd.read_csv(top_retweeted_csv).iloc[1:] # First element groups users without username
top_retweeted = pd.DataFrame(top_retweeted['retweets'], columns=['retweets'])
top_retweeted['outliers'] = top_retweeted['retweets']
top_retweeted['no outliers'] = top_retweeted['retweets']
top_retweeted = top_retweeted[['outliers', 'no outliers']]
top_retweeted_long = top_retweeted.melt(value_name='retweets', var_name='viz')
alt.Chart(top_retweeted_long[top_retweeted_long['viz'] == 'no outliers'], title='Average retweets per user').mark_boxplot(size=10, outliers=False, median=True, color=palette['twitter']).encode(alt.X('retweets:Q', title=None), alt.Y('viz:N', title=None)).properties(width=500, height=300) + \
alt.Chart(top_retweeted_long[top_retweeted_long['viz'] == 'outliers'], title='Average retweets per user').mark_boxplot(size=10, outliers=True, median=True, color=palette['twitter']).encode(alt.X('retweets:Q', title=None), alt.Y('viz:N', title=None)).properties(width=500, height=300)
###Output
_____no_output_____
###Markdown
How many times each user has been retweeted is quite different from how many tweets they posted. Having outliers, in this case, is part of reality: some users are not very followed and their tweets get no retweets, while others (speaking about Bitcoin, Elon Musk for example) are what's called a "VIP". This aspect is very interesting because it could open another set of possible analyses; for example: removing "normal people" from the dataset, how much does the correlation with the price change? Tweets average lengthFor simplicity, here the "length of a tweet" will be the number of words; this also reflects the intent of analyzing this aspect: find a way to evaluate the relevance of a tweet
###Code
words_avg = pd.read_csv(words_avg_csv)
words_avg = pd.DataFrame(words_avg['words_avg'], columns=['words_avg'])
words_avg['words_avg'] = words_avg['words_avg'].apply(lambda x: int(x))
words_avg['outliers'] = words_avg['words_avg']
words_avg['no outliers'] = words_avg['words_avg']
words_avg = words_avg[['outliers', 'no outliers']]
words_avg_long = words_avg.melt(value_name='words_avg', var_name='viz')
alt.Chart(words_avg_long[words_avg_long['viz'] == 'no outliers'], title='Average post length per user').mark_boxplot(size=10, outliers=False, median=True, color=palette['twitter']).encode(alt.X('words_avg:Q', title=None), alt.Y('viz:N', title=None)).properties(width=500, height=300)
alt.Chart(words_avg_long[words_avg_long['viz'] == 'no outliers'], title='Average post length per user').mark_boxplot(size=10, outliers=False, median=True, color=palette['twitter']).encode(alt.X('words_avg:Q', title=None), alt.Y('viz:N', title=None)).properties(width=500, height=300) + \
alt.Chart(words_avg_long[words_avg_long['viz'] == 'outliers'], title='Average post length per user').mark_boxplot(size=10, outliers=True, median=True, color=palette['twitter']).encode(alt.X('words_avg:Q', title=None), alt.Y('viz:N', title=None)).properties(width=500, height=300)
###Output
_____no_output_____
###Markdown
There are clearly outliers that are very far from the median, but in this case the most important thing is the IQR: Q1 is 5 and Q3 is 15, which means that most of the tweets in the dataset have a number of words compatible with a meaningful sentence; therefore the number of bots is probably low. Users' metrics mixed up
###Code
top_tweeters = pd.read_csv(top_tweeters_csv).iloc[1:]
top_retweeted = pd.read_csv(top_retweeted_csv).iloc[1:]
words_avg = pd.read_csv(words_avg_csv)
users_summary = top_tweeters.copy()
users_summary = users_summary.merge(top_retweeted, on=['username', 'full_name'])
users_summary = users_summary.merge(words_avg, on=['username', 'full_name'])
users_summary
words_q3 = users_summary['words_avg'].quantile(q=0.75)
users_summary_filtered = users_summary[users_summary['words_avg'] <= words_q3*1.5]
domain = [1, words_q3*1.5]
range_ = [palette['negative'], palette['positive']]
plot_title = alt.TitleParams("Zoomed users' metrics", subtitle=["Re-adjusted words avg color range"])
alt.Chart(users_summary_filtered, title=plot_title).mark_point(clip=True).encode(alt.X('tweets', scale=alt.Scale(domain=(0, 5000)), title='Tweets'), alt.Y('retweets', scale=alt.Scale(domain=(0, 16000)), title='Retweets'), alt.Color('words_avg', scale=alt.Scale(domain=domain, range=range_), title='Words avg')).properties(height=750, width=750).configure_point(size=10)
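# As hinted in the introduction, these per-user metrics could feed a simple relevance weight
# for later, weighted analyses. A purely hypothetical sketch (not used elsewhere in this notebook):
# scale each user's average retweets to [0, 1] and treat it as the weight of that user's tweets.
users_summary['relevance_weight'] = users_summary['retweets'] / users_summary['retweets'].max()
users_summary[['username', 'tweets', 'retweets', 'relevance_weight']].head()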
###Output
_____no_output_____ |
pages/workshop/AWIPS/Satellite_Imagery.ipynb | ###Markdown
Satellite images are returned by Python AWIPS as grids, and can be rendered with Cartopy pcolormesh the same as gridded forecast models in other python-awips examples. Available Sources, Creating Entities, Sectors, and Products
###Code
from awips.dataaccess import DataAccessLayer
import cartopy.crs as ccrs
import cartopy.feature as cfeat
import matplotlib.pyplot as plt
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import numpy as np
import datetime
# Create an EDEX data request
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
request = DataAccessLayer.newDataRequest()
request.setDatatype("satellite")
# get optional identifiers for satellite datatype
identifiers = set(DataAccessLayer.getOptionalIdentifiers(request))
print("Available Identifiers:")
for id in identifiers:
if id.lower() == 'datauri':
continue
print(" - " + id)
# Show available sources
identifier = "source"
sources = DataAccessLayer.getIdentifierValues(request, identifier)
print(identifier + ":")
print(list(sources))
# Show available creatingEntities
identifier = "creatingEntity"
creatingEntities = DataAccessLayer.getIdentifierValues(request, identifier)
print(identifier + ":")
print(list(creatingEntities))
# Show available sectorIDs
identifier = "sectorID"
sectorIDs = DataAccessLayer.getIdentifierValues(request, identifier)
print(identifier + ":")
print(list(sectorIDs))
# Construct a full satellite product tree
for entity in creatingEntities:
print(entity)
request = DataAccessLayer.newDataRequest("satellite")
request.addIdentifier("creatingEntity", entity)
availableSectors = DataAccessLayer.getAvailableLocationNames(request)
availableSectors.sort()
for sector in availableSectors:
print(" - " + sector)
request.setLocationNames(sector)
availableProducts = DataAccessLayer.getAvailableParameters(request)
availableProducts.sort()
for product in availableProducts:
print(" - " + product)
###Output
_____no_output_____
###Markdown
GOES 16 Mesoscale SectorsDefine our imports, and define our map properties first.
###Code
%matplotlib inline
def make_map(bbox, projection=ccrs.PlateCarree()):
fig, ax = plt.subplots(figsize=(10,12),
subplot_kw=dict(projection=projection))
if bbox[0] is not np.nan:
ax.set_extent(bbox)
ax.coastlines(resolution='50m')
gl = ax.gridlines(draw_labels=True)
gl.top_labels = gl.right_labels = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
return fig, ax
sectors = ["EMESO-1","EMESO-2"]
fig = plt.figure(figsize=(16,7*len(sectors)))
for i, sector in enumerate(sectors):
request = DataAccessLayer.newDataRequest()
request.setDatatype("satellite")
request.setLocationNames(sector)
request.setParameters("CH-13-10.35um")
utc = datetime.datetime.utcnow()
times = DataAccessLayer.getAvailableTimes(request)
hourdiff = utc - datetime.datetime.strptime(str(times[-1]),'%Y-%m-%d %H:%M:%S')
    hours, days = hourdiff.seconds//3600, hourdiff.days
    minute = str((hourdiff.seconds - (3600 * hours)) // 60)
offsetStr = ''
if hours > 0:
offsetStr += str(hours) + "hr "
offsetStr += str(minute) + "m ago"
if days > 1:
offsetStr = str(days) + " days ago"
response = DataAccessLayer.getGridData(request, [times[-1]])
grid = response[0]
data = grid.getRawData()
lons,lats = grid.getLatLonCoords()
bbox = [lons.min(), lons.max(), lats.min(), lats.max()]
print("Latest image available: "+str(times[-1]) + " ("+offsetStr+")")
print("Image grid size: " + str(data.shape))
print("Image grid extent: " + str(list(bbox)))
fig, ax = make_map(bbox=bbox)
states = cfeat.NaturalEarthFeature(category='cultural',
name='admin_1_states_provinces_lines',
scale='50m', facecolor='none')
ax.add_feature(states, linestyle=':')
cs = ax.pcolormesh(lons, lats, data, cmap='coolwarm')
cbar = fig.colorbar(cs, shrink=0.6, orientation='horizontal')
cbar.set_label(sector + " " + grid.getParameter() + " " \
+ str(grid.getDataTime().getRefTime()))
###Output
_____no_output_____ |
Mini Projects/Model Validation/Grid Search Lab/Grid_Search_Lab.ipynb | ###Markdown
Improving a model with Grid SearchIn this mini-lab, we'll fit a decision tree model to some sample data. This initial model will overfit heavily. Then we'll use Grid Search to find better parameters for this model, to reduce the overfitting.First, some imports.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Reading and plotting the dataNow, a function that will help us read the csv file, and plot the data.
###Code
def load_pts(csv_name):
data = np.asarray(pd.read_csv(csv_name, header=None))
X = data[:,0:2]
y = data[:,2]
plt.scatter(X[np.argwhere(y==0).flatten(),0], X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0], X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom='off',
top='off')
return X,y
X, y = load_pts('data.csv')
plt.show()
###Output
_____no_output_____
###Markdown
2. Splitting our data into training and testing sets
###Code
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, make_scorer
#Fixing a random seed
import random
random.seed(42)
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
3. Fitting a Decision Tree model
###Code
from sklearn.tree import DecisionTreeClassifier
# Define the model (with default hyperparameters)
clf = DecisionTreeClassifier(random_state=42)
# Fit the model
clf.fit(X_train, y_train)
# Make predictions
train_predictions = clf.predict(X_train)
test_predictions = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Now let's plot the model, and find the testing f1_score, to see how we did. The following function will help us plot the model.
###Code
def plot_model(X, y, clf):
plt.scatter(X[np.argwhere(y==0).flatten(),0],X[np.argwhere(y==0).flatten(),1],s = 50, color = 'blue', edgecolor = 'k')
plt.scatter(X[np.argwhere(y==1).flatten(),0],X[np.argwhere(y==1).flatten(),1],s = 50, color = 'red', edgecolor = 'k')
plt.xlim(-2.05,2.05)
plt.ylim(-2.05,2.05)
plt.grid(False)
plt.tick_params(
axis='x',
which='both',
bottom='off',
top='off')
r = np.linspace(-2.1,2.1,300)
s,t = np.meshgrid(r,r)
s = np.reshape(s,(np.size(s),1))
t = np.reshape(t,(np.size(t),1))
h = np.concatenate((s,t),1)
z = clf.predict(h)
s = s.reshape((np.size(r),np.size(r)))
t = t.reshape((np.size(r),np.size(r)))
z = z.reshape((np.size(r),np.size(r)))
plt.contourf(s,t,z,colors = ['blue','red'],alpha = 0.2,levels = range(-1,2))
if len(np.unique(z)) > 1:
plt.contour(s,t,z,colors = 'k', linewidths = 2)
plt.show()
plot_model(X, y, clf)
print('The Training F1 Score is', f1_score(train_predictions, y_train))
print('The Testing F1 Score is', f1_score(test_predictions, y_test))
###Output
_____no_output_____
###Markdown
Woah! Some heavy overfitting there. Not just from looking at the graph, but also from looking at the difference between the high training score (1.0) and the low testing score (0.7).Let's see if we can find better hyperparameters for this model to do better. We'll use grid search for this. 4. (TODO) Use grid search to improve this model.In here, we'll do the following steps:1. First define some parameters to perform grid search on. We suggest to play with `max_depth`, `min_samples_leaf`, and `min_samples_split`.2. Make a scorer for the model using `f1_score`.3. Perform grid search on the classifier, using the parameters and the scorer.4. Fit the data to the new classifier.5. Plot the model and find the f1_score.6. If the model is not much better, try changing the ranges for the parameters and fit it again.**_Hint:_ If you're stuck and would like to see a working solution, check the solutions notebook in this same folder.**
###Code
from sklearn.metrics import make_scorer
from sklearn.metrics import fbeta_score
from sklearn.model_selection import GridSearchCV
clf = DecisionTreeClassifier(random_state=42)
# TODO: Create the parameters list you wish to tune.
parameters = {'max_depth':[1, 2, 3],'min_samples_leaf':[2, 3], 'min_samples_split': [2, 3]}
# TODO: Make an fbeta_score scoring object.
scorer = make_scorer(fbeta_score, beta=1)  # fbeta_score requires beta; beta=1 is the F1 score
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method.
grid_obj = GridSearchCV(clf, parameters, scoring= scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters.
grid_fit = grid_obj.fit(X_train, y_train)
# TODO: Get the estimator.
best_clf = grid_fit.best_estimator_
# Fit the new model.
best_clf.fit(X_train, y_train)
# Make predictions using the new model.
best_train_predictions = best_clf.predict(X_train)
best_test_predictions = best_clf.predict(X_test)
# Calculate the f1_score of the new model.
print('The training F1 Score is', f1_score(best_train_predictions, y_train))
print('The testing F1 Score is', f1_score(best_test_predictions, y_test))
# Plot the new model.
plot_model(X, y, best_clf)
# Let's also explore what parameters ended up being used in the new model.
best_clf
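# Step 6 above suggests changing the parameter ranges if the model is not much better.
# A hypothetical, wider grid one might try next (slower, since every combination is refit):
wider_parameters = {'max_depth': list(range(1, 11)),
                    'min_samples_leaf': [1, 2, 4, 8],
                    'min_samples_split': [2, 4, 8]}
wider_grid = GridSearchCV(DecisionTreeClassifier(random_state=42), wider_parameters, scoring=scorer)
wider_grid.fit(X_train, y_train)
print(wider_grid.best_params_)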
###Output
_____no_output_____ |
Seminars/Seminar_3/features.ipynb | ###Markdown
Sparse features**Examples** of sources of sparse features:* Categorical features * Text Categorical featuresCategorical features may also be referred to as factor or nominal features. **Examples** of categorical features: * gender* country of residence* group number and so on. Clearly, for computer processing, numbers are stored instead of the "human-readable" value (in the case of a country — "Russia", "GB", "France", etc.). Below we discuss how to obtain such feature vectors and in what format to store them. Let's look at the table ```winemag.csv```, which contains descriptions of wines.
###Code
import pandas as pd

data = pd.read_csv('winemag.csv', index_col=0, na_filter=False)
data.head()
###Output
_____no_output_____
###Markdown
The table has 10 feature columns. Which of them are categorical? First, they should be columns containing text values. That leaves the following candidates: ```country```, ```description```, ```designation```, ```province```, ```region_1```, ```region_2```, ```variety``` and ```winery```. Second, they should be columns with a small number of unique values:
###Code
import numpy as np

for name in ['country', 'description', 'designation', 'province',
'region_1', 'region_2', 'variety', 'winery']:
print('%s: %d'%(name, data[name].nunique()))
np.unique(data["points"])
###Output
_____no_output_____
###Markdown
So, the producing country is a categorical feature. The most obvious encoding scheme is **one-hot encoding**: for the categorical feature being encoded, $N$ new features are created, where $N$ is the number of categories. Each $i$-th new feature is a binary indicator of the $i$-th category. For example, the producing country is a categorical feature. Let's use [LabelEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) and [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) to transform country names into one-hot vectors.
###Code
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

countries = data.country
countries = LabelEncoder().fit_transform(countries)
countries = OneHotEncoder().fit_transform(countries[:, np.newaxis])
type(countries)
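# A small, hypothetical illustration of the sparse formats discussed below:
# CSR gives cheap row slicing, CSC gives cheap column slicing.
print(countries.getformat())          # storage format produced by OneHotEncoder
print(countries.tocsr()[:3, :])       # fast access to individual rows via CSR
print(countries.tocsc()[:, 0].sum())  # fast access to individual columns via CSC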
###Output
_____no_output_____
###Markdown
Sparse matricesAfter encoding we obtained a sparse feature matrix. There are many types of sparse matrices, each providing different guarantees on operations.* ```scipy.sparse.coo_matrix```* ```scipy.sparse.csc_matrix```* ```scipy.sparse.csr_matrix```* ```scipy.sparse.bsr_matrix```* ```scipy.sparse.lil_matrix```* ```scipy.sparse.dia_matrix```* ```scipy.sparse.dok_matrix```More details on the [internals of sparse matrices](http://www.netlib.org/utk/people/JackDongarra/etemplates/node372.html) scipy.sparse.coo_matrix* Used as a data container* Supports fast conversion to any other format* Does not support indexing* Supports a limited set of arithmetic operations scipy.sparse.csc_matrix* Stores data column by column* Fast access to individual columns scipy.sparse.csr_matrix* Stores data row by row* Fast access to individual rows scipy.sparse.bsr_matrix* Suitable for sparse matrices with dense sub-matrices scipy.sparse.lil_matrix* Suitable for building sparse matrices element by element* For subsequent matrix operations it is better to convert to ```csr_matrix``` or ```csc_matrix```The ```scipy.sparse``` library contains methods for working with sparse matrices. More details on operations with sparse matrices are on the [scipy](https://docs.scipy.org/doc/scipy/reference/sparse.html) site. Predicting wine scoresAs categorical features we will take: country, province and variety. Let's try to predict the scores given to the wines. The scores in the table range from 80 to 100.
###Code
import itertools
import scipy.sparse
from tqdm import tqdm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

Y = data.points.values
countries = OneHotEncoder().fit_transform(LabelEncoder().\
fit_transform(data.country)[:, np.newaxis])
provinces = OneHotEncoder().fit_transform(LabelEncoder().\
fit_transform(data.province)[:, np.newaxis])
varieties = OneHotEncoder().fit_transform(LabelEncoder().\
fit_transform(data.variety)[:, np.newaxis])
features = [('country', countries), ('province', provinces), ('variety', varieties)]
names = []
accuracy_scores = []
for subset_features in tqdm(itertools.chain(*[list(itertools.combinations(features, n))
for n in range(1, 4)])):
subset_names, subset_features = zip(*subset_features)
names.append('; '.join(subset_names))
X = scipy.sparse.hstack(subset_features)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)
lr = LogisticRegression().fit(X_train, Y_train)
Y_pred = lr.predict(X_test)
accuracy_scores.append(accuracy_score(Y_pred, Y_test))
pd.DataFrame({'Accuracy':accuracy_scores}, index=names)
###Output
7it [00:40, 5.83s/it]
###Markdown
Extracting features from textBefore working with text, it needs to be tokenized, i.e., split into individual tokens. Tokens can be words, phrases, sentences, and so on. Text can be tokenized with regular expressions or with off-the-shelf tokenizers. After tokenization, the text should be reduced to a normal form. This means [stemming and/or lemmatization](https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html) — similar processes used to handle word forms. For lemmatizing English text, the [SpaCy](https://spacy.io/) library can be used:
###Code
import spacy
nlp = spacy.load('en')
description_lemma = [' '.join([token.lemma_ for token in nlp(text)])
for text in tqdm(data.description)]
description_lemma[0]
###Output
_____no_output_____
###Markdown
Bag of WordsWe create a vector whose length equals the vocabulary size; for each word we count its occurrences in the text and put that count in the corresponding position of the vector. Let's build a BOW model with [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html)
###Code
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer().fit(description_lemma)
vocabulary = vectorizer.get_feature_names()
print('Размер словаря: %d'%len(vocabulary))
description_count = vectorizer.transform(description_lemma)
top_tokens, _ = zip(*sorted(zip(vocabulary, description_count.sum(axis=0).getA1()),
key=lambda x: x[1], reverse=True)[:10])
print('Top-10 слов: %s'%'; '.join(top_tokens))
###Output
Размер словаря: 26383
Top-10 слов: and; the; be; of; with; pron; this; wine; flavor; in
###Markdown
We can see that most of the top-10 words are uninformative — stop words. To keep them out of the representation, a list of stop words can be passed as a parameter to the CountVectorizer constructor:
###Code
from stop_words import get_stop_words
stop_words = get_stop_words('en')
vectorizer = CountVectorizer(stop_words=stop_words).fit(description_lemma)
vocabulary = vectorizer.get_feature_names()
print('Размер словаря: %d'%len(vocabulary))
description_count = vectorizer.transform(description_lemma)
top_tokens, _ = zip(*sorted(zip(vocabulary, description_count.sum(axis=0).getA1()),
key=lambda x: x[1], reverse=True)[:10])
print('Top-10 слов: %s'%'; '.join(top_tokens))
###Output
Размер словаря: 26290
Top-10 слов: pron; wine; flavor; fruit; finish; cherry; aroma; tannin; dry; acidity
###Markdown
To compress the vector representation, rare words can be "dropped":
###Code
vectorizer = CountVectorizer(stop_words=stop_words, min_df=3).fit(description_lemma)
vocabulary = vectorizer.get_feature_names()
print('Размер словаря: %d'%len(vocabulary))
description_count = vectorizer.transform(description_lemma)
description_count
###Output
Размер словаря: 16143
###Markdown
Tf-IdfWords that are rare in the corpus (across all documents of this dataset) but present in a given document may turn out to be more important. In that case it makes sense to up-weight the more topic-specific words to separate them from general ones. This approach is called [TF-IDF](https://en.wikipedia.org/wiki/Tf–idf). The Tf-Idf value for each document–word pair consists of two components:* Term frequency — the logarithm of the word's frequency in the document$$tf(t, d) = \log n_{t,d}$$* Inverse document frequency — the logarithm of the inverse fraction of documents in which the word occurs$$idf(t, D) = \log \frac{ \mid D \mid}{\mid \{ d_i \in D \mid t \in d_i \} \mid}$$* Tf-Idf — the combination of tf and idf$$ TfIdf(t, d, D) = tf(t, d) * idf(t, D)$$
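As a quick illustration of the $idf$ component (made-up numbers): a token that occurs in 10 of 1,000 documents gets $idf = \log\frac{1000}{10} = \log 100 \approx 4.6$ (natural logarithm), while a token found in 900 of the 1,000 documents gets $idf = \log\frac{1000}{900} \approx 0.1$, so occurrences of the rarer, more topic-specific token are weighted far more heavily.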
###Code
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer(stop_words=stop_words).fit(description_lemma)
vocabulary = vectorizer.get_feature_names()
description_tfidf = vectorizer.transform(description_lemma)
top_tokens, _ = zip(*sorted(zip(vocabulary, description_count.sum(axis=0).getA1()),
key=lambda x: x[1], reverse=True)[:10])
print('Top-10 слов: %s'%'; '.join(top_tokens))
###Output
Top-10 слов: hoed; nonnenberg; consulting; coutinel; congenial; blessing; alderbrook; marriage; charlemagne; 50g
###Markdown
Predicting wine scoresWe add the features extracted from the sommelier descriptions to the categorical features.
###Code
accuracy_scores = []
for description in [description_count, description_tfidf]:
X = scipy.sparse.hstack([countries, provinces, varieties, description])
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)
lr = LogisticRegression().fit(X_train, Y_train)
Y_pred = lr.predict(X_test)
accuracy_scores.append(accuracy_score(Y_pred, Y_test))
pd.DataFrame({'Accuracy':accuracy_scores}, index=['BOW', 'TfIdf'])
###Output
_____no_output_____
###Markdown
BM25The ```Okapi BM25``` weighting scheme develops the Tf-Idf idea further by taking document length into account in $tf(t, d)$.$$tf(t, d) = \frac{(k_1 + 1) * n_{t,d}}{k_1 * (1 - b + b * \frac{L_d}{L_{ave}}) + n_{t,d}}$$where* $k_1$ and $b$ are free coefficients, usually chosen as $k_1=2.0$ and $b=0.75$* $L_d$ is the length of document $d$* $L_{ave}$ is the average document length in the collection Vowpal WabbitQuite often there is a lot of data and training has to be done on samples that do not fit into memory. Also, good quality can often be achieved with simple linear models, provided the features were well selected and engineered. An important advantage of linear methods is that the model parameters (i.e., the weight-update step) can be adjusted every time a new object arrives. In the literature such machine learning methods are often called Online Machine Learning. In this setting there is no need to keep all objects in memory at once.Today one of the best-known implementations of such methods is the [Vowpal Wabbit](https://github.com/JohnLangford/vowpal_wabbit) package:* Only linear models can be trained. Quality can be improved by adding new features and tuning the loss function* The training set is processed with a stochastic optimizer, which makes it possible to train on samples that do not fit into memory* A large number of features can be handled thanks to feature hashing (the so-called hashing trick), which makes it possible to train models even when the full weight vector does not fit into memory* An active learning mode is supported, in which training objects can be sent even from several machines over the network* Training can be parallelized across several machines Installation* Ubuntu - ```apt-get install vowpal-wabbit```* Mac OS - ```port install vowpal_wabbit```* Windows - download the installer [here](https://github.com/eisber/vowpal_wabbit/releases) Data formatLabel [weight] |Namespace Feature ... |Namespace ...* ```Label``` - the class label for classification or a real number for regression* ```weight``` - the weight of the object, 1 for all objects by default* ```Namespace``` - all features are split into namespaces, which can be used to handle groups of features separately or to build quadratic features between namespaces* ```Feature``` - ```string[:value]``` or ```int[:value]```; strings will be hashed, integers will be used as indices in the feature vector. ```value``` defaults to $1$ Parameters**Hashing trick**A function $h$ is introduced that produces the index at which a value is written into the object's feature vector.$$h : F \rightarrow \{0, \dots, 2^b - 1\}$$The ```--b``` option sets the size of the hash function's range. The larger ```b``` is, the lower the probability of collisions when hashing features.**Optimization**Either ```SGD``` or ```L-BFGS``` can be used (a second-order quasi-Newton method; see more on how the [optimization](http://aria42.com/blog/2014/12/understanding-lbfgs) works).```SGD``` is used by default. ```L-BFGS``` is enabled with ```--bfgs```; it is much slower and only suitable for small samples. 
The number of passes over the data for ```SGD``` is set with the ```--passes``` parameter**Optimization parameters**The weights are updated on every object:$$w_{t+1} = w_{t} + \eta_t \nabla_{w}\ell(w_{t}, x_{t})$$$$\eta_t = \lambda d^k \left( \frac{t_0}{t_0 + t} \right)^p$$where $t$ is the index of the training object and $k$ is the epoch number. The remaining parameters are set as follows:* $\lambda$: ```-l```* $d$: ```--decay_learning_rate```* $t_0$: ```--initial_t```* $p$: ```--power_t```**Loss function** is set via ```--loss_function```**Regularization** is set via the two flags ```--l1``` and ```--l2```**Quadratic features*** ```-q ab``` — creates quadratic features by multiplying all features from the namespaces whose names start with the letter __a__ and with the letter __b__* ```--ignore a``` — ignores all features from the namespace whose name starts with the letter __a__**Daemon mode*** ```--daemon``` — runs __vw__ as a service on a port that can be set with ```--port```* Allows training and/or applying a model over the network
###Code
!vw -h | head -n10
import re

data_vw = pd.DataFrame({'class': Y-80, 'description':[' '.join(re.findall(r'\w+', d))
for d in description_lemma]})
data_train, data_test = train_test_split(data_vw)
data_train.to_csv('data.train.vw', sep='|', header=None, index=False)
data_test.to_csv('data.test.vw', sep='|', header=None, index=False)
!vw -d data.train.vw -f model.vw --loss_function logistic --oaa 21 --quiet --passes 100 -c -k
###Output
_____no_output_____
###Markdown
* ```-d``` — where to read the training data from* ```-f``` — where to save the model* ```--passes``` — maximum number of passes over the data* ```-c``` — create a cache file, required when ```--passes``` is used* ```-k``` — clear the cache before running
###Code
!vw -d data.test.vw -i model.vw -r output.csv --loss_function logistic --quiet
###Output
_____no_output_____ |
example/private-deep-learning/example_dpdl.ipynb | ###Markdown
Train a deep learning model with differential privacy
###Code
# import packages for DP
from autodp import rdp_bank, rdp_acct
# import packages needed for deep learning
import mxnet as mx
from mxnet import nd, autograd
from mxnet import gluon
import dpdl_utils
ctx = mx.gpu()
###Output
_____no_output_____
###Markdown
Get data: standard MNIST
###Code
mnist = mx.test_utils.get_mnist()
num_inputs = 784
num_outputs = 10
batch_size = 1 # this is set to get per-example gradient
train_data = mx.io.NDArrayIter(mnist["train_data"], mnist["train_label"],
batch_size, shuffle=True)
test_data = mx.io.NDArrayIter(mnist["test_data"], mnist["test_label"],
64, shuffle=True)
train_data2 = mx.io.NDArrayIter(mnist["train_data"], mnist["train_label"],
64, shuffle=True)
###Output
_____no_output_____
###Markdown
Build a one hidden layer NN with Gluon
###Code
num_hidden = 1000
net = gluon.nn.HybridSequential()
with net.name_scope():
net.add(gluon.nn.Dense(num_hidden, in_units=num_inputs,activation="relu"))
net.add(gluon.nn.Dense(num_outputs,in_units=num_hidden))
# get and save the parameters
params = net.collect_params()
params.initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
params.setattr('grad_req', 'write')
# define loss function
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Use a new optimizer called privateSGDBasically, we add Gaussian noise to the stochastic gradient.
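Concretely, the update implemented in the next cell is $$w \leftarrow w - \eta \left( g + \lambda_{wd}\, w + \sigma\, \xi \right), \qquad \xi \sim \mathcal{N}(0, I),$$ where $g$ is the accumulated gradient, $\lambda_{wd}$ the weight decay and $\sigma$ the noise scale; these symbols are just shorthand for the `grad`, `wd` and `sigma` arguments of `privateSGD` below.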
###Code
# define the update rule
def privateSGD(x, g, lr, sigma,wd=0.0,ctx=mx.cpu()):
for (param,grad) in zip(x.values(), g):
v=param.data()
v[:] = v - lr * (grad +wd*v+ sigma*nd.random_normal(shape = grad.shape).as_in_context(ctx))
# Utility function to evaluate error
def evaluate_accuracy(data_iterator, net):
acc = mx.metric.Accuracy()
loss_fun = .0
data_iterator.reset()
for i, batch in enumerate(data_iterator):
data = batch.data[0].as_in_context(ctx).reshape((-1, 784))
label = batch.label[0].as_in_context(ctx)
output = net(data)
predictions = nd.argmax(output, axis=1)
acc.update(preds=predictions, labels=label)
loss = softmax_cross_entropy(output, label)
loss_fun = loss_fun*i/(i+1) + nd.mean(loss).asscalar()/(i+1)
return acc.get()[1], loss_fun
###Output
_____no_output_____
###Markdown
Now let's try attaching a privacy accountant to this data set
###Code
# declare a moment accountant from autodp's rdp_acct
DPobject = rdp_acct.anaRDPacct()
# Specify privacy specific inputs
thresh = 4.0 # limit the norm of individual gradient
sigma = thresh
delta = 1e-5
func = lambda x: rdp_bank.RDP_gaussian({'sigma': sigma/thresh}, x)
###Output
_____no_output_____
###Markdown
We now specify the parameters needed for learning
###Code
#
epochs = 10
learning_rate = .1
n = train_data.num_data
batchsz = 100 #
count = 0
niter=0
moving_loss = 0
grads = dpdl_utils.initialize_grad(params,ctx=ctx)
###Output
_____no_output_____
###Markdown
Let's start then!
###Code
# declare a few place holder for logging
logs = {}
logs['eps'] = []
logs['loss'] = []
logs['MAloss'] = []
logs['train_acc'] = []
logs['test_acc'] = []
for e in range(epochs):
# train_data.reset() # Reset does not shuffle yet
train_data = mx.io.NDArrayIter(mnist["train_data"], mnist["train_label"],
batch_size, shuffle=True)
for i, batch in enumerate(train_data):
data = batch.data[0].as_in_context(ctx).reshape((-1, 784))
label = batch.label[0].as_in_context(ctx)
with autograd.record():
output = net(data)
loss = softmax_cross_entropy(output, label)
loss.backward()
# calculate an moving average estimate of the loss
count += 1
moving_loss = .999 * moving_loss + .001 * nd.mean(loss).asscalar()
est_loss = moving_loss / (1 - 0.999 ** count)
# Add up the clipped individual gradient
dpdl_utils.accumuate_grad(grads, params, thresh)
#print(i)
if not (i + 1) % batchsz: # update the parameters when we collect enough data
privateSGD(params, grads, learning_rate/batchsz,sigma,wd=0.1,ctx=ctx)
# Keep track of the privacy loss
DPobject.compose_subsampled_mechanism(func,1.0*batchsz/n)
dpdl_utils.reset_grad(grads)
if count % (10*batchsz) is 0:
print("[%s] Loss: %s. Privacy loss: eps = %s, delta = %s " % (((count+1)/batchsz),est_loss,DPobject.get_eps(delta),delta))
logs['MAloss'].append(est_loss)
##########################
# Keep a moving average of the losses
##########################
if count % 60000 is 0:
test_accuracy, loss_test = evaluate_accuracy(test_data, net)
train_accuracy, loss_train = evaluate_accuracy(train_data2, net)
print("Net: Epoch %s. Train Loss: %s, Test Loss: %s, Train_acc %s, Test_acc %s" %
(e, loss_train, loss_test,train_accuracy, test_accuracy))
logs['eps'].append(DPobject.get_eps(delta))
logs['loss'].append(loss_train)
logs['train_acc'].append(train_accuracy)
logs['test_acc'].append(test_accuracy)
learning_rate = learning_rate/2
## Plot some figures!
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(num=1, figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
plt.plot(range(epochs), logs['eps'])
plt.plot(range(epochs), logs['loss'])
plt.plot(range(epochs), logs['train_acc'])
plt.plot(range(epochs), logs['test_acc'])
plt.legend(['\delta = 1e-5', 'Training loss', 'Training accuracy','Test accuracy'], loc='best')
plt.show()
###Output
_____no_output_____ |
sessions/learning/notebook/learning.ipynb | ###Markdown
Introduction to machine learning * Supervised learning * scikit-learn * Patent assignee forms * Features * Training and test data * k Nearest neighbors * Naive Bayes * Unsupervised learning * Political polarization * k Means * Hierarchical clustering * Further reading * Exercises> MONSIEUR JOURDAIN: Oh, really? So when I say: "Nicole bring me my slippers and fetch my nightcap," is that prose?> > PHILOSOPHY MASTER: Most clearly.> > MONSIEUR JOURDAIN: Well, what do you know about that! These forty years now I've been speaking in prose without knowing it!> > —Molière, The Bourgeois Gentleman, 1670It's hard to think of an area of scientific research in recent memory that has received more attention (both public and academic) than machine learning. Although the basic principles of machine learning have been around for a long time, a confluence of several factors, including advances in computing power, data availability (necessary for training high-performing models), algorithms, and the commercial potential of the technology, has led to significant excitement within the field (Jordan and Mitchell, 2015). The primary beneficiary of machine learning has arguably been industry, with uptake being slower in science, although that's starting to change. As I hope you'll see by the end of this session, machine learning has potential applications for all phases of the research process, from data collection and cleaning to modeling and analysis.Machine learning is not a mainstay of the social science PhD curriculum, which means that social scientists who are curious about machine learning typically need to resort to self-study. Given just how vast machine learning has become, together with the fairly jargon-heavy nature of the field, it's easy for people to feel overwhelmed before they even really get started. As we know from all sorts of research (Cohen and Levinthal, 1990), learning about a new field is really tricky when we're starting from scratch, without some prior knowledge on which to build.What if I told you, though, that every time you run a regression model you're doing machine learning? It's true! Just like Monsieur Jourdain was speaking prose for forty years without knowing it, you already likely have some familiarity with a major area of machine learning.This notebook will be primarily organized around the two main branches of machine learning, supervised and unsupervised learning (the former of which includes regression). Without further ado, then, let's get started! scikit-learnAs a heads up, throughout this session, we'll be making heavy use of the Python package [scikit-learn](https://scikit-learn.org), which is probably the most popular machine learning library for Python. You don't need to know much more than that at this point. Supervised learningThere are two general approaches to machine learning, supervised and unsupervised. Supervised learning involves building models for prediction or labeling tasks. For example, we might want to build a model to label whether a product review is positive or negative, whether a consumer would like a product or not, or whether an image includes a picture of a cat. The name "supervised" comes from the fact that when we build these models, we use training data, i.e., a set of examples where the outcome or label is known. 
We might build a model by collecting 10,000 images, creating a dummy variable that identifies those with cats, and then training our model to identify which features (or variables) are most strongly predictive of the presence of a cat. So we are "supervising" in the sense that we are specifying particular relationships we want the model to learn (between dependent and independent variables, or labels and features, from our training data).Within supervised learning, we can distinguish models depending on whether the outcome of interest is categorical or continuous. The former, which we refer to as classification, would include cases where we want to develop a model that will determine, based on a set of observed features or variables, whether an observation is a member of a particular class (e.g., whether a review is positive based on the words it contains). The latter, which we refer to as regression, would include cases where we want to develop a model that will determine, again based on a set of observed features or variables, the predicted value for an observation on a continuous measure (e.g., the mpg for a car based on its weight, age, make, model, and other factors). Patent assignee formsTo help ground our exploration of supervised machine learning, let's consider a concrete example, once again using data from the U.S. Patent and Trademark Office. Patents are a form of intellectual property. When a patent is granted by the USPTO, the property rights are given to an assignee, which is often different from the inventor(s) and most typically consists of a legal organization (e.g., a business). Historically, most patents have been granted to business firms, but over the past few decades, a much broader range of organizations, from universities and nonprofits to government labs, have been seeking intellectual property protection for their discoveries. Suppose we wanted to gain some sense for how the representation of different kinds or forms of organizations (e.g., universities, firms) among patent assignees has changed over time. It turns out that doing so is a bit of a challenge for a few reasons. First, the USPTO only began collecting data on assignee form after 2002, and the categories recorded are somewhat broad and do not necessarily correspond to those that might be most interesting from a theoretical standpoint (e.g., the categories mainly distinguish between US and foreign individuals, corporations, and governments). Second, the only data that the USPTO reliably reports on assignees over time is assignee name and location, which makes it difficult to link assignees to external databases that might give us insight on organizational forms. Moreover, even if we could easily link to external databases (e.g., Compustat), I am unaware of anything that would cover the whole range of diverse organizational forms represented among assignees (e.g., large and small corporations, universities, governments, research institutes). One option for us might be to leverage the fact that different kinds of organizational forms tend to exhibit different kinds of naming patterns. Most universities, for example, include the word "university" in their name, but not the word "corporation." The names of consulting and professional service firms tend to include words like "associates," "services," "solutions," and "consulting." If we went through the whole list of USPTO assignees, we might then be able to categorize them into broad organizational forms with some reasonable level of accuracy. 
The challenge, though, is that there are hundreds of thousands of organizations that have been assigned a US patent, which prohibits us from doing this coding manually. Moreover, even though we might be able to come up with a mechanical set of rules (e.g., if the name contains "university" code as "university"), we would likely only be able to develop a handful of rules, and those that we would develop would not likely apply to all situations. For example, our "university" rule would code the name "University Park Semiconductor Corporation" incorrectly as a university.This is where we can get some help from supervised learning methods.But before we get started, let's do a little preparation. Our first step will be to download assignee data from the USPTO Patents View website.
###Code
# load packages
import pathlib
import urllib
import pandas as pd
# make a directory to store the data
pathlib.Path("data").mkdir(parents=True, exist_ok=True)
# download the data
patentsview_assignee_file_url = "http://s3.amazonaws.com/data.patentsview.org/20191231/download/assignee.tsv.zip"
patentsview_assignee_file_path = "data/assignee.tsv.zip"
filename, headers = urllib.request.urlretrieve(patentsview_assignee_file_url, patentsview_assignee_file_path)
###Output
_____no_output_____
###Markdown
Next, let's read the data into a pandas dataframe and do a little clean up.
###Code
# read file into data frame
assignee_df = pd.read_csv(patentsview_assignee_file_path, sep="\t", low_memory=False)
assignee_df = assignee_df.sample(frac=1, random_state=1011)
# drop non-organizations
assignee_df = assignee_df.drop(columns=["type", "name_first", "name_last"])
assignee_df = assignee_df.dropna()
# drop some cases with bad encoding (on patentsview end)
assignee_df = assignee_df[assignee_df["organization"].apply(lambda x: len(x) == len(x.encode()))]
# check out the data
assignee_df.head()
###Output
_____no_output_____
###Markdown
At this point, we need some training data. Recall that training data are authoritative examples that we'll use to help our model learn to make predictions (e.g., here, between features of assignee names and different categories of organizational forms). For our purposes, I am going to go ahead and pull a random sample of assignee names, and then subsequently code their form by hand, using a combination of prior personal knowledge (I've spent a lot of time looking at patent assignees) and internet searches.
###Code
# create a directory for hand coding training data
pathlib.Path("hand_cleaning").mkdir(parents=True, exist_ok=True)
# add a column for hand coded assignee form
assignee_df["hand_coded_assignee_form"] = None
# pull a random sample of data to hand code and save as a .csv
hand_coded_assignee_form_file_raw_path = "hand_cleaning/hand_coded_assignee_form.csv"
assignee_df[["organization","hand_coded_assignee_form"]].sample(n=2000, random_state=1011).to_csv(hand_coded_assignee_form_file_raw_path, index=False)
###Output
_____no_output_____
###Markdown
Okay, that didn't take quite as long as expected. I ended up coding assignees into several categories, including "financial", "university", "government", "firm", "professional", "nonprofit", and "other". These may be a bit more granular than we can justify given that the only features we have are based on assignee names, and if you were doing this for research purposes you'd likely want to be more careful than I was with the coding. But for our purposes, let's run with it. Our last step then is to load my hand-coded data into a dataframe.
###Code
# read file into data frame
hand_coded_assignee_form_file_complete_path = "hand_cleaning/hand_coded_assignee_form_complete.csv"
hand_coded_assignee_form_df = pd.read_csv(hand_coded_assignee_form_file_complete_path, low_memory=False)
# check out the data
hand_coded_assignee_form_df.head()
###Output
_____no_output_____
###Markdown
Training and test data Before we move forward, we'll want to set aside some of our hand coded data for testing purposes. We'll use part of our 2000 or so hand coded observations to train our model, while saving part to evaluate the performance of our model out of sample. You may recoil a bit at the idea of not using each and every one of your hard-won training observations to build your model; but in the supervised learning context, it's really important to save some data for testing because otherwise we'll be at a severe risk for unwittingly overfitting our training data. What we want to avoid is building a (likely complex) model that fits our training data very well (and therefore gives us the appearance of high performance) but that is so tailored to the idiosyncrasies of our training data that it misses the more meaningful, general patterns that we're likely to see when we apply our model to unlabeled data.Note that in addition to test data, machine learning practitioners will also often hold out an additional set of observations from the training data, called validation data. This data is used for tuning the hyperparameters, and allows us to keep our test data completely separate from the data we use to build our models. So without further ado, let's split our hand coded organizational forms into training and test data sets. Turns out scikit-learn will make this easy for us.
###Code
# load some packages
from sklearn.model_selection import train_test_split
# split into training and test
X_assignee_name_train, X_assignee_name_test, y_assignee_name_train, y_assignee_name_test = train_test_split(hand_coded_assignee_form_df["organization"],
hand_coded_assignee_form_df["hand_coded_assignee_form"],
test_size=0.1,
random_state=10101)
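# (Hypothetical) if we also wanted the validation set mentioned above for hyperparameter tuning,
# we could simply split the training portion a second time, e.g.:
X_assignee_name_tr, X_assignee_name_val, y_assignee_name_tr, y_assignee_name_val = train_test_split(
    X_assignee_name_train, y_assignee_name_train, test_size=0.1, random_state=10101)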
###Output
_____no_output_____
###Markdown
Now let's check out our two data sets.
###Code
# show sizes
print(X_assignee_name_train.shape)
print(X_assignee_name_test.shape)
print(y_assignee_name_train.shape)
print(y_assignee_name_test.shape)
###Output
(1800,)
(200,)
(1800,)
(200,)
###Markdown
FeaturesWe now have some training and test data. Yay! If we want to draw parallels to territory more familiar to social scientists, what we've essentially done up to this point is build a data set with about 2000 observations and a dependent variable (`hand_coded_assignee_form`), onto which we have tacked a field of more or less raw data (`organization`). What we're missing are some covariates (or right hand side variables) that we can use to predict our dependent variable (or label or class). I've already discussed a few of the barriers that can make entry into machine learning a challenge for social scientists. To that list, I would also add that many concepts that would be familiar to social scientists go by different names in the field of machine learning. In particular, note that in machine learning land, what social scientists refer to as independent variables, control variables, or covariates go by the name of features. Let's go ahead and construct some features for our data. Recall that for the purposes of our particular example, what we want to do is leverage features of organization names (e.g., the presence of things like "university" or "corporation") to predict the form of the organization. It seems, then, that a reasonable set of features would be variables indicating the presence of certain terms in the names of our organization observations. Fortunately, `scikit-learn` will make this process really easy for us. Specifically, we'll use the built-in [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), which will create a dictionary of features (tokens) and then subsequently allow us to transform documents (i.e., our organization names) into feature vectors.
###Code
# load some packages
from sklearn.feature_extraction.text import CountVectorizer
# initialize our CountVectorizer
assignee_name_vectorizer = CountVectorizer(lowercase=True, max_df=1.0, min_df=1, stop_words="english")
# fit the CountVectorizer to the data
X_assignee_name_train_counts = assignee_name_vectorizer.fit_transform(X_assignee_name_train)
###Output
_____no_output_____
###Markdown
That's about all there is to it! Let's take a closer look at what we just made.
###Code
# show the shape of our organization name X feature matrix
X_assignee_name_train_counts.shape
# show a few features
assignee_name_vectorizer.get_feature_names()[0:15]
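# A quick, hypothetical illustration: transform a single made-up name into its feature vector.
# Only tokens that are in the learned vocabulary get a column; everything else is ignored.
example_vector = assignee_name_vectorizer.transform(["Acme Semiconductor Corporation"])
print(example_vector)  # sparse (row, column) -> count entries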
###Output
_____no_output_____
###Markdown
Finally, let's encode our labels as integers.
###Code
# load some packages
from sklearn.preprocessing import LabelEncoder
# initialize label encoder
le = LabelEncoder()
# fit LabelEncoder to the data
y_assignee_name_train_codes = le.fit_transform(y_assignee_name_train)
# we can also reverse the encoding later
le.inverse_transform([2, 2, 1])
###Output
_____no_output_____
###Markdown
k Nearest Neighbors> Tennyson once said that if we could understand a single flower, we should know who we are and what the world is. Perhaps he meant that there is no fact, however insignificant, that does not involve universal history and the infinite concatenation of cause and effect. Perhaps he meant that the visible world is implicit in every phenomenon, just as the will, according to Schopenhauer, is implicit in every subject. —Jorge Luis Borges, The ZahirGiven all the hype around machine learning today, you're probably itching to learn more about the latest and greatest techniques, like deep learning, artificial neural networks, and so forth. If so, I share your sympathies, but I would also suggest that jumping in at the frontier is probably one of the best ways to quickly lose your motivation for further study of machine learning. Many of the latest and greatest techniques are staggeringly complex; given how quickly things are moving, even if you manage to wrap your head around the method du jour it'll soon be ancient history. To make matters worse, due to the high level of interest in the frontiers of machine learning, there's a ton of really bad resources out there right now, more interested in serving you ads than teaching you anything useful.What we'll do in this session is get you up and running with some of the more basic, but tried-and-true approaches. Having an understanding of these foundations will make it much easier for you to pick up the advanced stuff later. Moreover, you'll walk away with a set of tools that are really useful in their own right. In fact, and not to diminish the accomplishments of recent machine learning research, when you push even the most ardent advocates of fancy techniques like deep learning, they'll usually admit that for many applications, classic methods will get you about 90% of the way to where you want to go (e.g., in terms of accuracy). With that all said, let's get started by considering one of the simpler classification techniques, known as k Nearest Neighbors (kNN). As the name suggests, the basic idea behind kNN is to assign inputs to classes, based on the classes of the $k$ most similar observations (i.e., nearest neighbors) from the training data. That's about it (at least at a high level)! Put differently, given a new (previously unseen) observation $i$ that we want to classify, we look to the training data and identify the $k$ most similar observations, based on whatever features we identify as being important. The neighbors then "vote" on the class membership of $i$ based on their class membership (which is known in the training data), with (typically) the majority winning.To operationalize a kNN classifier, we need to make two important decisions. First, what will we use for $k$ (i.e., how many neighbors to consider)? And second, how will we measure similarity between observations (which we'll use to identify the nearest neighbors)? Your choice of $k$ will probably depend on a combination of your overall goals and your empirical observation of changes in the performance of your classifier with changes in $k$. Larger values of $k$ will be less sensitive to noise, but will also blur the boundaries between classes.In machine learning, $k$ is an example of a __hyperparameter__, i.e., a parameter that is set before training (often manually) rather than being derived from the training data. Let's dig into an example. We'll try to build a simple kNN classifier for assigning organizational forms to patent assignees based on their names.
###Code
# load some packages
from sklearn.neighbors import KNeighborsClassifier
# train our model
clf_knn = KNeighborsClassifier(n_neighbors=3).fit(X_assignee_name_train_counts, y_assignee_name_train_codes)
# create some new observations to classify
assignee_names_new = ['Microsoft Corporation', 'University of Minnesota']
X_new_counts = assignee_name_vectorizer.transform(assignee_names_new)
X_new_counts
# let's try out our classifier
predicted = clf_knn.predict(X_new_counts)
for assignee_name_new, organizational_form in zip(assignee_names_new, predicted):
print(assignee_name_new, le.inverse_transform([organizational_form]))
###Output
Microsoft Corporation ['FIRM']
University of Minnesota ['UNIVERSITY']
###Markdown
Now let's evaluate the performance of our classifier on the test data.
###Code
# prepare the test features
X_assignee_name_test_counts = assignee_name_vectorizer.transform(X_assignee_name_test)
# prepare the test labels
y_assignee_name_test_codes = le.transform(y_assignee_name_test)
# compute mean accuracy
clf_knn.score(X_assignee_name_test_counts, y_assignee_name_test_codes)
###Output
_____no_output_____
###Markdown
We can also get a more detailed report.
###Code
# load some packages
import numpy as np
from sklearn import metrics
# get predictions for test data
predicted = clf_knn.predict(X_assignee_name_test_counts)
# evaluation
print(np.mean(y_assignee_name_test_codes == predicted))
print(metrics.classification_report(y_assignee_name_test_codes, predicted, zero_division=0))
###Output
0.885
precision recall f1-score support
0 1.00 0.22 0.36 9
1 0.88 1.00 0.94 166
2 0.00 0.00 0.00 4
4 0.00 0.00 0.00 1
5 1.00 0.55 0.71 11
6 1.00 0.33 0.50 9
accuracy 0.89 200
macro avg 0.65 0.35 0.42 200
weighted avg 0.87 0.89 0.85 200
###Markdown
Naive Bayes Before we move forward I want to introduce you to one more simple but powerful approach to classification known as Naive Bayes (Manning et al., 2008). As you probably expect from the name, Naive Bayes is a probabilistic method and is based on Bayes's theorem. You may recall that Bayes's theorem says that $$ P(A|B) = \frac{P(B|A)P(A)}{P(B)}, $$ which is helpful because it allows us to decompose the conditional probability $P(A|B)$ into probabilities that might be easier for us to measure. Returning to our example of classifying organizational forms based on features of their names, we can write the probability that name $m$ is of form $f$ as $$ P(f|m) \propto P(f) \prod_{1 \leq k \leq n_m} P(t_k|f), $$ where $P(t_k|f)$ is the conditional probability of token (e.g., "corporation", "university") $t_k$ appearing in a name of form $f$. The basic idea of the Naive Bayes approach, then, is to assign the most likely (or maximum a posteriori) class (or form in our case), $$f_{map} = \arg\max_{f \in F} \hat{P}(f) \prod_{1 \leq k \leq n_m} \hat{P}(t_k | f).$$ The name "Naive" comes from the fact that we are making the assumption that features are conditionally independent. Although this assumption is usually heroic, Naive Bayes classifiers perform quite well in many real-world applications. Let's try to build our own to classify organizational forms. Because we have multiple different classes (i.e., organizational forms), we'll be using an approach called multinomial Naive Bayes.
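To make the MAP rule concrete, the cell below is a toy numerical illustration of the computation in log space, with two hypothetical forms and two tokens; all of the probabilities are made up for illustration and do not come from the patent data.
###Code
# a toy, made-up illustration of the MAP decision rule in log space
import numpy as np
log_prior = np.log(np.array([0.7, 0.3]))              # P(f) for two hypothetical forms
log_cond = np.log(np.array([[0.08, 0.01],             # P(t_k|f): rows are tokens, columns are forms
                            [0.01, 0.09]]))
token_counts = np.array([1, 0])                       # the name contains the first token once
log_posterior = log_prior + token_counts @ log_cond   # log P(f) + sum_k count_k * log P(t_k|f)
print(log_posterior, log_posterior.argmax())          # the argmax picks the MAP form
###Output
_____no_output_____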
###Code
# load some packages
from sklearn.naive_bayes import MultinomialNB
# train our model
clf_mnb = MultinomialNB().fit(X_assignee_name_train_counts, y_assignee_name_train_codes)
# let's try out our classifier (we'll reuse "assignee_name_new" and "X_new_counts" from our knn example)
predicted = clf_mnb.predict(X_new_counts)
for assignee_name_new, organizational_form in zip(assignee_names_new, predicted):
print(assignee_name_new, le.inverse_transform([organizational_form]))
###Output
Microsoft Corporation ['FIRM']
University of Minnesota ['UNIVERSITY']
###Markdown
Now let's evaluate the performance of our classifier on the test data.
###Code
# compute mean accuracy (we'll reuse "X_assignee_name_test_counts" and "y_assignee_name_test_codes" from our knn example)
clf_mnb.score(X_assignee_name_test_counts, y_assignee_name_test_codes)
###Output
_____no_output_____
###Markdown
Finally, let's take a look under the hood at some of the most informative features.
###Code
# get a list of feature names
feature_names = assignee_name_vectorizer.get_feature_names()
# loop over forms and pull most informative features
for i, organizational_form in enumerate(clf_mnb.classes_):
top_10_features = np.argsort(clf_mnb.coef_[i])[-10:]
print("%s, %s: %s" % (organizational_form,
le.inverse_transform([organizational_form])[0],
" ".join(feature_names[j] for j in top_10_features)))
###Output
0, FINANCIAL: trust ventures partners corporation partnership holding llc limited development holdings
1, FIRM: international products systems technology company technologies gmbh limited corporation llc
2, GOVERNMENT: chandler disease chinese federal law control enforcement research national institute
3, NONPROFIT: equal hospital gakuin mofet chwan scientific centro central association foundation
4, OTHER: fuji fujian fts france gilber razran kennedy james steinbach friedrich
5, PROFESSIONAL: consulting limited service management llc design services associates engineering solutions
6, UNIVERSITY: universite et instituto korea research sciences technology institut institute university
###Markdown
Unsupervised learning The second major branch of machine learning is what's known as unsupervised learning. As you might discern from the name, the major distinguishing feature of unsupervised learning relative to supervised learning is that in the former, we do not use training data (i.e., labeled examples) to build our model. Instead, unsupervised learning methods tend to be much more inductive in nature. Rather than training our algorithms, for example, how to sort observations into previously defined (i.e., researcher supplied) categories, we are instead asking our algorithms to identify patterns in the data for us. Clustering is probably the largest stream of unsupervised learning, and it's what we'll focus on in this class. Given a set of features (here, we no longer have an outcome or label), a clustering algorithm will look for patterns that allow us to separate observations into discrete (often mutually exclusive) categories. As with our approach to supervised learning, we'll focus on a few of the simpler but tried-and-true methods in our explorations. Before we get started, let me introduce you to a new empirical application. Political polarization Although political scientists and sociologists have been interested in political polarization for many years (DiMaggio, Evans, and Bryson 1996; Baldassarri and Gelman, 2008), the growth of social media, contentious presidential elections in the United States, foreign interference in elections, and other factors have led to renewed interest in the topic (Liu and Srivastava, 2015; Bail, 2018; Kaul and Luo, 2019). Generally, while there is some evidence of polarization over time, this work also underscores that polarization is a complex phenomenon, where much still remains to be understood. Let's see if we can use machine learning to gain some insight into the dynamics of political polarization over time. To make our efforts a little more tractable, we'll focus on polarization among political elites, specifically members of the United States Senate. To do so, we'll need some way of characterizing senators' political views. What's nice about focusing on Senators is that we have a fairly objective way of doing so—their voting records. We'll download historical voting records for the United States Senate from [Voteview](https://voteview.com) (hosted by UCLA's Department of Political Science and Social Science Computing), which makes these data publicly available in an easy-to-use comma-separated file. In case this example piques your interest, the file also contains historical voting records for the United States House of Representatives (but we'll drop those for now to keep things simple). To keep things simple, we'll make use of some measures, pioneered by political scientists Poole and Rosenthal (1983, 1985), to characterize the political ideology of members of Congress based on their voting records, which will make some of our clustering a little more straightforward. However, we'll also load the raw vote data just to have on hand and for use in the exercises. Enough throat clearing. Let's dig in!
###Code
# download the vote data
voteview_member_vote_file_url = "https://voteview.com/static/data/out/votes/HSall_votes.csv"
voteview_member_vote_file_path = "data/HSall_votes.csv"
#filename, headers = urllib.request.urlretrieve(voteview_member_vote_file_url, voteview_member_vote_file_path)
# download the ideology data
voteview_member_ideology_file_url = "https://voteview.com/static/data/out/members/HSall_members.csv"
voteview_member_ideology_file_path = "data/HSall_members.csv"
#filename, headers = urllib.request.urlretrieve(voteview_member_ideology_file_url, voteview_member_ideology_file_path)
# read vote file into data frame
member_vote_df = pd.read_csv(voteview_member_vote_file_path, low_memory=False)
# check out the data
member_vote_df.head()
# read ideology file into data frame
member_ideology_df = pd.read_csv(voteview_member_ideology_file_path, low_memory=False)
# check out the data
member_ideology_df.head()
###Output
_____no_output_____
###Markdown
Now let's merge the two data frames into one we can use for our analyses. Let's subset the data a little to make things simpler. We'll focus only on the Senate. We'll also drop some columns that we don't really need.
###Code
# keep only Senate
member_vote_df = member_vote_df[member_vote_df.chamber == "Senate"]
member_ideology_df = member_ideology_df[member_ideology_df.chamber == "Senate"]
# drop columns we don't need
member_vote_df = member_vote_df[["congress", "icpsr", "rollnumber", "cast_code"]]
member_ideology_df = member_ideology_df[["congress", "icpsr", "bioname", "party_code", "nominate_dim1", "nominate_dim2"]]
# set cast_code to categorical
member_vote_df["cast_code"] = member_vote_df["cast_code"].astype("category")
# set party_code to categorical
member_ideology_df["party_code"] = member_ideology_df["party_code"].astype("category")
# check out the data
member_vote_df.head()
# check out the data
member_ideology_df.head()
###Output
_____no_output_____
###Markdown
Finally, let's keep common cases across the two data frames.
###Code
# create a merged dataframe
member_vote_ideology_df = member_vote_df.merge(member_ideology_df, on=["congress", "icpsr"], how="inner", indicator=True)
# keep common cases in the member_vote_df dataframe
member_vote_df = member_vote_ideology_df[["congress", "icpsr", "rollnumber", "cast_code"]].drop_duplicates()
# keep common cases in the member_ideology_df dataframe
member_ideology_df = member_vote_ideology_df[["congress", "icpsr", "bioname", "party_code", "nominate_dim1", "nominate_dim2"]].drop_duplicates()
###Output
_____no_output_____
###Markdown
Now we're ready for some unsupervised learning! k-means clustering Recall that our goal in clustering is to separate the observations in our data into discrete bins, based on some set of features or variables. k-means clustering is a relatively simple approach that will get us to that goal. The basic idea behind k-means is to partition our data into $k$ clusters, such that each observation is assigned to the cluster with the nearest mean. As with k-nearest neighbors, $k$ is a hyperparameter that we need to set in advance of running the algorithm. However, note that the meaning of $k$ is different in k-means and k-nearest neighbors. In the former, $k$ refers to the number of clusters, whereas in the latter, $k$ refers to the number of neighbors used to make the class assignment. Because we need to specify the number of clusters in advance, k-means is often ideal when we have some sense of the number of clusters we think we'll end up finding (although there are methods, discussed below, that we can use to inductively find an appropriate number of clusters from the data). From a more technical standpoint, k-means aims to minimize the within-cluster variances (i.e., the sum of squared distances between points and their cluster means). Typically, the initial means are chosen by randomly picking $k$ points from the data. All points are then assigned to the cluster with the nearest mean. Cluster means are then recomputed based on the new point assignments. We then assign all points to the cluster with the nearest mean, update the means, and keep repeating this process. The algorithm will stop either when we reach a stable clustering solution or when we reach a pre-specified number of iterations. A limitation of classical k-means algorithms is that they are only guaranteed to find a local minimum. One common solution to this problem is to run the algorithm multiple times with different starting points and choose the best solution. Without further ado, let's apply k-means to our political ideology data. We'll begin by subsetting our data to the 115th congress to make things a bit simpler.
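To make the assign-then-update loop concrete, the cell below is a minimal from-scratch sketch on synthetic two-dimensional points (not the Senate data); the scikit-learn implementation we use below adds smarter initialization and multiple restarts.
###Code
# a minimal from-scratch sketch of the k-means loop on synthetic points (k=2)
import numpy as np
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(-1, 0.2, (20, 2)), rng.normal(1, 0.2, (20, 2))])
centers = points[rng.choice(len(points), size=2, replace=False)]  # random initial means
for _ in range(10):  # a fixed number of iterations keeps the sketch simple
    labels = np.argmin(((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
    centers = np.array([points[labels == k].mean(axis=0) for k in range(2)])
print(centers)
###Output
_____no_output_____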
###Code
# pull a subset
member_ideology_df_subset = member_ideology_df[member_ideology_df.congress == 115]
# drop congress column
member_ideology_df_subset = member_ideology_df_subset.drop(columns=["congress"])
# check out the data
member_ideology_df_subset.head()
###Output
_____no_output_____
###Markdown
Now let's run k-means using scikit-learn. We'll cluster based on two variables, `nominate_dim1` and `nominate_dim2`. The former supposedly corresponds roughly to economic liberalism-conservativism, while the latter is thought to capture issues of the day.
###Code
# load some packages
from sklearn.cluster import KMeans
# train our model
clu = KMeans(n_clusters=2, random_state=0).fit(member_ideology_df_subset[["nominate_dim1", "nominate_dim2"]].values)
# check out the clusters
clu.labels_
# check out the cluster centers
clu.cluster_centers_
###Output
_____no_output_____
###Markdown
We're now in a position where we can plot our clustering solutions. Let's give that a shot.
###Code
# load some packages
import matplotlib.pyplot as plt
# create a scatter plot
plt.scatter(member_ideology_df_subset.nominate_dim1,
member_ideology_df_subset.nominate_dim2,
c=clu.labels_)
###Output
_____no_output_____
###Markdown
Previously, I picked 2 clusters mainly because I figured that would make sense given our two-party system. But perhaps the political world is more complex, and we may get better performance with a more complex solution (i.e., more clusters). Let's run some additional clustering solutions with different values of $k$ and compare their performance.
###Code
# create a dictionary to hold the sses
sse = {}
# run the clustering algorithm over different values of k
for k in range(1, 10):
# perform clustering
clu = KMeans(n_clusters=k, random_state=0).fit(member_ideology_df_subset[["nominate_dim1", "nominate_dim2"]].values)
# save the sses
sse[k] = clu.inertia_
###Output
_____no_output_____
###Markdown
Now let's plot the result.
###Code
pd.Series(sse).plot()
###Output
_____no_output_____
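###Markdown
Beyond eyeballing the bend ("elbow") in the SSE curve above, another common heuristic is the average silhouette score, where higher values indicate better-separated clusters. The cell below is a sketch that computes it for the same range of $k$ on the same two NOMINATE features.
###Code
# a sketch of the silhouette heuristic for choosing k (reuses the 115th-congress subset from above)
from sklearn.metrics import silhouette_score
X = member_ideology_df_subset[["nominate_dim1", "nominate_dim2"]].values
for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
###Output
_____no_output_____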
###Markdown
Hierarchical clustering Hierarchical clustering is another common but powerful approach to clustering. The approach is called "hierarchical" because we aggregate or partition observations (more on that in a second) in stages, such that progressively larger (smaller) clusters are aggregated (partitioned), which, as you'll see, gives us a natural hierarchy of clusters. There are two main approaches to hierarchical clustering. * Agglomerative approaches are bottom up; they start with each observation in its own cluster, and then progressively group clusters together using some notion of similarity. * Divisive approaches are top down; they start with all observations assigned to a single cluster, and then progressively partition clusters into smaller groups, again based on some notion of similarity. Hierarchical clustering algorithms have a number of attractive features. Unlike k-means, we do not need to provide the algorithm with the number of clusters we want to find in advance. In addition, the hierarchical nature of the clusters produced often maps more closely onto real-world problems, where we commonly see examples of nested categories (e.g., a mouse is a subcategory of mammal, which is a subcategory of animal). Hierarchical clustering works by comparing similarities among points. That means that when implementing hierarchical clustering, you'll need to think about what would be an appropriate distance function among your data points. Since our features are two continuous numbers (i.e., the nominate ideology scores), we'll just use Euclidean distances. The other thing you'll need to think about when performing hierarchical clustering on your data is how to compare the distances among sets of clusters as you build your tree. Specifically, how do you determine which cluster is closest to a focal cluster? In practice, there are a few common methods. * __Ward linkage__ minimizes the variance of the clusters being merged. * __Average linkage__ finds clusters with the smallest average pairwise distance between points. * __Complete linkage__ finds clusters with the smallest maximum pairwise distance between points. * __Single linkage__ finds clusters with the smallest minimum pairwise distance between points. We'll start out with a little data preparation work. Let's dig in!
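As a quick illustration of how the linkage choice matters, the cell below is a sketch that cuts the tree into two flat clusters under each linkage rule and prints the resulting cluster sizes; it reuses the 115th-congress subset built in the k-means section above.
###Code
# a sketch: cluster sizes when the tree is cut into two flat clusters under different linkages
from scipy.cluster.hierarchy import linkage, fcluster
X = member_ideology_df_subset[["nominate_dim1", "nominate_dim2"]].values
for method in ["ward", "average", "complete", "single"]:
    Z = linkage(X, method=method, metric="euclidean")
    flat_labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into 2 clusters
    print(method, np.bincount(flat_labels)[1:])           # how many senators fall in each cluster
###Output
_____no_output_____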
###Code
# pull a subset
member_ideology_df_subset = member_ideology_df[member_ideology_df.congress == 115]
# drop congress column
member_ideology_df_subset = member_ideology_df_subset.drop(columns=["congress"])
# check out the data
member_ideology_df_subset.head()
###Output
_____no_output_____
###Markdown
We're now ready to run our clustering using scikit-learn.
###Code
# load some packages
from sklearn.cluster import AgglomerativeClustering
# train our model
clu = AgglomerativeClustering(distance_threshold=0,
linkage="complete",
affinity="euclidean",
n_clusters=None).fit(member_ideology_df_subset[["nominate_dim1", "nominate_dim2"]].values)
###Output
_____no_output_____
###Markdown
We can visualize our clustering solution using a dendrogram. Turns out it is much easier to plot a dendrogram using `scipy` than it is using `scikit-learn`. For our purposes, `scipy` is just fine, so we'll go that route; that means we'll need to redo our clustering using `scipy`. If you want to stick with your clustering solution from `scikit-learn`, there is an example of how to plot a dendrogram in the [documentation](https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py).
###Code
# load some packages
from scipy.cluster.hierarchy import linkage
from scipy.cluster.hierarchy import dendrogram
# redo our clustering solution
clu_scipy = linkage(member_ideology_df_subset[["nominate_dim1", "nominate_dim2"]].values,
method="complete",
metric='euclidean')
###Output
_____no_output_____
###Markdown
Now we're ready to actually plot the dendrogram.
###Code
# set the size for the figure
plt.figure(figsize=(20,10))
# plot the dendrogram
dn = dendrogram(clu_scipy,
leaf_font_size=12,
labels=member_ideology_df_subset["bioname"].values)
###Output
_____no_output_____
###Markdown
Note that in the dendrogram, the y-axis is the value of the distance metric between clusters. Further reading Here are some of the sources I have found helpful. * Hastie, Trevor, Robert Tibshirani, and Jerome Friedman (2009) The elements of statistical learning: Data mining, inference, and prediction. New York: Springer. * Manning, Christopher D., Prabhakar Raghavan, and Hinrich Schütze. (2008) Introduction to information retrieval. Cambridge: Cambridge University Press. * Murphy, Kevin P. (2012) Machine learning: A probabilistic perspective. Cambridge, MA: MIT Press. Exercises * Using the unsupervised learning approaches above, repeat the clustering exercise for each distinct congress. How does the number of clusters obtained change over time? * Conduct a similar exercise, but for the organizational form data (on patenting). How does the distribution of organizational forms change over time? * Repeat the exercises above, but using raw vote data, rather than the ideology scores. There is some code below to help you get started. Note, however, that the methods we've used above typically will require numerical data as input, so you'll need to do something like convert the raw scores to many different 0/1 dummy variables. Hint: look up "one-hot encoding" (a short sketch of this is included in the appendix code below). * Revise the hierarchical clustering approach above to use a different linkage method. How do the results change? Appendix Reshaping raw votes If you are interested in analyzing raw votes, some of the code below may be helpful.
###Code
# pull a subset
member_vote_df_subset = member_vote_df[member_vote_df.congress == 115]
# drop congress column
member_vote_df_subset = member_vote_df_subset.drop(columns=["congress"])
# check out the data
member_vote_df_subset.head()
# convert to wide dataframe
member_vote_df_subset.pivot(index="icpsr", columns="rollnumber", values="cast_code")
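# a sketch for the raw-votes exercise above: one-hot encode the wide vote matrix so that
# methods expecting numerical features can use it (variable names here are illustrative,
# and missing votes simply become their own dummy column in this simple version)
member_vote_wide = member_vote_df_subset.pivot(index="icpsr", columns="rollnumber", values="cast_code")
member_vote_onehot = pd.get_dummies(member_vote_wide.astype("str"))
member_vote_onehot.head()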
###Output
_____no_output_____ |
matplotlib/gallery_jupyter/ticks_and_spines/date_demo_convert.ipynb | ###Markdown
Date Demo Convert
###Code
import datetime
import matplotlib.pyplot as plt
from matplotlib.dates import DayLocator, HourLocator, DateFormatter, drange
import numpy as np
date1 = datetime.datetime(2000, 3, 2)
date2 = datetime.datetime(2000, 3, 6)
delta = datetime.timedelta(hours=6)
dates = drange(date1, date2, delta)
y = np.arange(len(dates))
fig, ax = plt.subplots()
ax.plot_date(dates, y ** 2)
# this is superfluous, since the autoscaler should get it right, but
# use date2num and num2date to convert between dates and floats if
# you want; both date2num and num2date convert an instance or sequence
ax.set_xlim(dates[0], dates[-1])
# The hour locator takes the hour or sequence of hours you want to
# tick, not the base multiple
ax.xaxis.set_major_locator(DayLocator())
ax.xaxis.set_minor_locator(HourLocator(range(0, 25, 6)))
ax.xaxis.set_major_formatter(DateFormatter('%Y-%m-%d'))
ax.fmt_xdata = DateFormatter('%Y-%m-%d %H:%M:%S')
fig.autofmt_xdate()
plt.show()
###Output
_____no_output_____ |
python/Day5.ipynb | ###Markdown
Automate browser stuff (mainly for testing) [docs](http://selenium-python.readthedocs.io/getting-started.html)
###Code
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
driver.get("http://www.python.org")
try:
assert "Pytsafsahon" in driver.title
except AssertionError:
print('ono')
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
driver = webdriver.Chrome()
driver.get('http://czc.cz')
driver.title
driver.capabilities
with open('test.png', 'wb') as f:
f.write(driver.get_screenshot_as_png())
driver.w3c
driver.w3c?
#trying geckodriver now
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox() #only change
driver.get("http://www.python.org")
try:
assert "Python" in driver.title
except AssertionError:
print('ono')
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
from getpass import getpass as hidden_input_for_password
my_pass = hidden_input_for_password()
driver = webdriver.Chrome()
url='https://w3-connections.ibm.com/wikis/home?lang=en-us#!/wiki/We26c576f4d20_45a6_a75b_f1cef0d56615/page/Boarded%20accounts'
driver.get(url)
login = driver.find_element_by_id('Intranet_ID')
login.clear()
login.send_keys('[email protected]')
password = driver.find_element_by_id('password')
password.clear()
password.send_keys(my_pass)
password.send_keys(Keys.RETURN)
#highlighting
def highlight(element):
"""Highlights (blinks) a Webdriver element.
In pure javascript, as suggested by https://github.com/alp82.
"""
driver = element._parent
driver.execute_script("""
element = arguments[0];
original_style = element.getAttribute('style');
element.setAttribute('style', original_style + "; background: yellow; border: 2px solid red;");
setTimeout(function(){
element.setAttribute('style', original_style);
}, 300);
""", element)
driver = webdriver.Chrome()
driver.get(url)
login = driver.find_element_by_id('Intranet_ID')
login.clear()
login.send_keys('[email protected]')
password = driver.find_element_by_id('password')
password.clear()
password.send_keys(my_pass)
password.send_keys(Keys.RETURN)
# some more links
###Output
_____no_output_____
###Markdown
* [Good Git Tutorials](https://www.atlassian.com/git/tutorials) * [Learn X in Y Minutes - Python3](https://learnxinyminutes.com/docs/python3/)
###Code
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
browser = webdriver.Chrome()
url='https://w3-connections.ibm.com/wikis/home?lang=en-us#!/wiki/We26c576f4d20_45a6_a75b_f1cef0d56615/page/Boarded%20accounts'
browser.get(url)
login = browser.find_element_by_id('Intranet_ID')
login.clear()
login.send_keys('[email protected]')
password = browser.find_element_by_id('password')
password.clear()
password.send_keys(my_pass)
password.send_keys(Keys.RETURN)
delay = 5 # seconds
try:
Client = WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.LINK_TEXT, 'WAR')))
print("Page is ready!")
Client.click()
try:
Docu = WebDriverWait(browser, delay).until(EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, 'Oracle Table Space')))
Docu.click()
except TimeoutException:
print("Loading took too much time!")
except TimeoutException:
print("Loading took too much time!")
EC.presence_of_element_located?
###Output
_____no_output_____
###Markdown
Password
###Code
import password
class User():
password = password.Password(method='sha256', hash_encoding='base64')
user = User()
user.password = 'testingTHISTHINGY123'
user.password
user.hash
user.salt
user.password == 'testingTHISTHINGY123'
import pickle
pickle.dump(user, open('passwords.thingy', 'wb'))
pickle.load(open('passwords.thingy', 'rb'))
our_saved_user = pickle.load(open('passwords.thingy', 'rb'))
our_saved_user.password
our_saved_user.password == 'testingTHISTHINGY123'
cat passwords.thingy
###Output
�c__main__
User
q )�q}q(X hashqX, M/6R993Kf0rqhllgBMnYb94OOdME1Itfj33O9BVcflw=qX saltqX YJd0SicIqub. |
examples/misc/scratch.ipynb | ###Markdown
Scratch Notebook
###Code
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.ioff()
import mantrap
import mantrap_evaluation
env, goal, _ = mantrap_evaluation.scenarios.custom_haruki(mantrap.environment.PotentialFieldEnvironment)
solver = mantrap.solver.IPOPTSolver(env, eval_env=mantrap.environment.SGAN, goal=goal, is_logging=True)
ego_trajectory, _ = solver.solve(time_steps=10, warm_start_method=mantrap.constants.WARM_START_SOFT)
solver.visualize_actual(as_video=False)
plt.show()
import mantrap
import mantrap_evaluation
env, goal, _ = mantrap_evaluation.scenarios.custom_haruki(mantrap.environment.Trajectron)
solver = mantrap.solver.IPOPTSolver(env, goal, eval_env=mantrap.environment.SGAN, is_logging=True)
ego_trajectory, _ = solver.solve(time_steps=10, warm_start_method=mantrap.constants.WARM_START_SOFT)
solver.visualize_actual(as_video=False)
plt.show()
###Output
/Users/sele/mantrap/third_party/GenTrajectron/code/data/environment.py:46: RuntimeWarning: invalid value encountered in true_divide
return np.where(np.isnan(array), np.array(np.nan), (array - mean) / std)
/Users/sele/mantrap/third_party/GenTrajectron/code/data/environment.py:46: RuntimeWarning: invalid value encountered in true_divide
return np.where(np.isnan(array), np.array(np.nan), (array - mean) / std)
/Users/sele/mantrap/third_party/GenTrajectron/code/data/environment.py:46: RuntimeWarning: invalid value encountered in true_divide
return np.where(np.isnan(array), np.array(np.nan), (array - mean) / std)
/Users/sele/mantrap/third_party/GenTrajectron/code/data/environment.py:46: RuntimeWarning: invalid value encountered in true_divide
return np.where(np.isnan(array), np.array(np.nan), (array - mean) / std)
/Users/sele/mantrap/third_party/GenTrajectron/code/data/environment.py:46: RuntimeWarning: invalid value encountered in true_divide
return np.where(np.isnan(array), np.array(np.nan), (array - mean) / std)
|
exp-data-diffus-analysis.ipynb | ###Markdown
Experimental data analysis Reads raw QPD $(a,b,c,d,\text{trap})$ files, recovers trap $\sigma_x$ and $\sigma_y$, free diffusion constant $D$, and eventually corrects $y$ for $D_y \neq D_x$. Then generates an $(x,y)$ file for MFPT computation with `exp-ft-automated.ipynb`. For checking: one can also use simulated trajectories with periodic trapping reset based on Langevin's equation, from `langevin-generate.cpp` with `#define RESET_WITH_TRAPPING`, `#define ENABLE_PERIODICAL_RESET` and `#define SPLIT_FILES`.
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['savefig.bbox'] = 'tight'
import scipy.io
import scipy.optimize
import scipy.constants as cs
import pandas as pd
from utils import *
name = 'dati_MFPT/20-01-10/qpd_Ttrap50ms_Ttot200ms_'
N = (240,1000)
fps = 50000
std_calib = 0.03344148949490509
creneau_inv = 1
fit_diffus_t_end = 0.005 # choice of the validity domain for free diffusion
y_correction_beg = 20
is_exp_data = True
name = 'dati_MFPT/20-01-10/qpd_Ttrap50ms_Ttot60ms_'
N = (1,230)
fps = 50000
std_calib = 0.03291502041883577
creneau_inv = 1
fit_diffus_t_end = 0.008
y_correction_beg = 2
is_exp_data = True
name = 'test/langevin-trap-mass-traj-xyc'
N = (1,306)
fps = 100000.00
std_calib = sqrt(0.1)
creneau_inv = 1
fit_diffus_t_end = 0.06
D_calib = 5.0
y_correction_beg = 100
part_m = 0.0001
is_exp_data = False
###Output
_____no_output_____
###Markdown
Loading the data
###Code
def load_data (filename, creneau_inv, fps, is_exp_data):
if is_exp_data:
data = scipy.io.loadmat(filename)['data']
        # QPD outputs
a = data[0][0::5]
b = data[0][1::5]
c = data[0][2::5]
d = data[0][3::5]
s = a+b+c+d
        # compute x and y from the QPD outputs
x = (a+b-c-d)/s
y = (a-b-c+d)/s
        # trapping laser on/off
creneau = creneau_inv * data[0][4::5]
thresh = creneau.mean()
        trapped = creneau > thresh # the particle is trapped
        free = np.logical_not(trapped) # the particle is free
else:
data = np.fromfile(filename, np.dtype([('x',np.float64), ('y',np.float64), ('trapped',bool)]))
x = data['x']
y = data['y']
trapped = data['trapped']
free = np.logical_not(trapped)
    # time
t = np.arange(x.shape[0])/fps
    # indices corresponding to the beginning/end of a trapping period
i_beg_trapping = np.where(np.logical_and(free[:-1],trapped[1:]))[0]
i_end_trapping = np.where(np.logical_and(trapped[:-1],free[1:]))[0]
return t,x,y,trapped,free,i_beg_trapping,i_end_trapping
###Output
_____no_output_____
###Markdown
Example trajectory
###Code
t,x,y,trapped,free,i_beg_trapping,i_end_trapping = load_data(name+str(N[1]//2), creneau_inv, fps, is_exp_data)
t_trap = t[trapped]
x_trap = x[trapped]
y_trap = y[trapped]
t_free = t[free]
x_free = x[free]
y_free = y[free]
plt.figure(figsize=(16,6))
plt.axhline(y=np.mean(x), color='black', label=r"mean $x$")
plt.scatter(t_free,x_free, color='blue', s=1, label=r"$x$, free", rasterized=True)
plt.scatter(t_trap,x_trap, color='red', s=1, label=r"$x$, trapped", rasterized=True)
for i in i_end_trapping:
plt.axvline(x=t[i], linestyle='--', color='blue')
for i in i_beg_trapping:
plt.axvline(x=t[i], linestyle='--', color='red')
plt.xlim((4,6))
plt.legend(loc='upper right')
plt.ylabel(r"$x$ ($\mu$m)")
plt.xlabel(r"$t$ (s)")
plt.savefig(name+"example_traj.pdf")
###Output
_____no_output_____
###Markdown
Measuring $\sigma_x$ and $\sigma_y$ by looking at the end of the trapping periods
###Code
x_end_trapping = []
y_end_trapping = []
length_trap = []
length_free = []
for w in range(N[0],N[1]+1):
t,x,y,trapped,free,i_beg_trapping,i_end_trapping = load_data(name+str(w), creneau_inv, fps, is_exp_data)
for i in i_end_trapping:
x_end_trapping += list(x[i-10:i])
y_end_trapping += list(y[i-10:i])
k_beg = k_end = 0
if i_beg_trapping[0] > i_end_trapping[0]:
k_end = 1
while k_beg < len(i_beg_trapping)-1 and k_end < len(i_end_trapping):
assert i_beg_trapping[k_beg] < i_end_trapping[k_end]
length_trap.append( i_end_trapping[k_end] - i_beg_trapping[k_beg] )
length_free.append( i_beg_trapping[k_beg+1] - i_end_trapping[k_end] )
k_beg += 1
k_end += 1
print(w, end=' ')
length_trap = np.min(length_trap)
length_free = np.min(length_free)
length_trap,length_free
###Output
240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999 1000
###Markdown
Aggregate of points at the end of the trapping periods, from all files:
###Code
x0,sigma_x,slope = plot_means_synth( np.arange(len(x_end_trapping)), np.array(x_end_trapping), cmak=60 )
###Output
_____no_output_____
###Markdown
Distribution of $x$ at the end of trapping, fitted to a Gaussian:
###Code
x_end_trapping_detrend = np.array(x_end_trapping) - slope*np.arange(len(x_end_trapping))
_,sigma_x = check_gaussian(x_end_trapping_detrend, bins=50, xlabel=r"$x$ at the end of trapping ($x_0$)")
print("centre en x : {:.2e}; variance de x_0 : {:.2e}".format(x0,sigma_x))
###Output
center in x : 4.75e-03; variance of x_0 : 3.39e-02
###Markdown
Même chose pour $y$.
###Code
y0,sigma_y,slope = plot_means_synth( np.arange(len(y_end_trapping)), np.array(y_end_trapping), cmak=60 )
y_end_trapping_detrend = np.array(y_end_trapping) - slope*np.arange(len(y_end_trapping))
_,sigma_y = check_gaussian(y_end_trapping_detrend, bins=50, xlabel=r"$y$ at the end of trapping ($y_0$)")
print("centre en y : {:.2e}; variance de y_0 : {:.2e}".format(y0,sigma_y))
###Output
center in y : -2.18e-02; variance of y_0 : 4.18e-02
###Markdown
Mean curves of $x^2$ and $y^2$ over the free-diffusion periods, used to correct for a possible scale difference between $x$ and $y$
###Code
diffus_sn = 30
diffus_t = np.arange(length_free//diffus_sn) / fps * diffus_sn
diffus_x2_acc = [ [] for _ in range(len(diffus_t)) ]
diffus_y2_acc = [ [] for _ in range(len(diffus_t)) ]
for w in range(N[0],N[1]):
t,x,y,trapped,free,i_beg_trapping,i_end_trapping = load_data(name+str(w), creneau_inv, fps, is_exp_data)
k_beg = k_end = 0
if i_beg_trapping[0] > i_end_trapping[0]:
k_end = 1
while k_beg < len(i_beg_trapping)-1 and k_end < len(i_end_trapping):
traj_diffus_x = x[ i_end_trapping[k_end]:i_beg_trapping[k_beg+1] ] - x0
traj_diffus_y = y[ i_end_trapping[k_end]:i_beg_trapping[k_beg+1] ] - y0
for k in range(length_free//diffus_sn):
i = k*diffus_sn
diffus_x2_acc[k].append(traj_diffus_x[i]**2)
diffus_y2_acc[k].append(traj_diffus_y[i]**2)
k_beg += 1
k_end += 1
print(w, end=' ')
def plot_R2 (t, R2_acc, varname="r", fit_diffus_t_end=None, fit_diffus_sigma0=None, fit_diffus_d=None):
R2 = np.zeros(len(t))
R2std = np.zeros(len(t))
for k in range(len(t)):
R2[k] = np.mean(R2_acc[k])
R2std[k] = np.std(R2_acc[k])
n = len(R2_acc[0])
plt.figure(figsize=(10,7))
ax1 = plt.gca()
ax1.fill_between(t, R2-R2std, R2+R2std, facecolor='orange', alpha=0.3, label=r"$\pm \operatorname{std}("+varname+"^2)$")
ax1.fill_between(t, R2-R2std/np.sqrt(n), R2+R2std/np.sqrt(n), facecolor='orange', alpha=0.5)
ax1.plot(t, R2, label=r"$\langle "+varname+r"^2 \rangle_\operatorname{ens}(t)$", lw=2)
ax1.set_ylim((0,1.1*np.max(R2)))
ax1.set_xlabel(r"$t$")
if fit_diffus_t_end is not None:
assert fit_diffus_t_end < t[-1]
k1 = np.searchsorted(t, fit_diffus_t_end)+1
_coeff, _cov = scipy.optimize.curve_fit( (lambda t,D: D*t), t[:k1]-t[0], R2[:k1]-fit_diffus_sigma0**2, sigma=np.ones(k1) )
D = _coeff[0]
D_err = 15*sqrt(np.diag(_cov)[0]) + D*R2std[k1]/R2[k1]/np.sqrt(n)
ax1.plot(t[:k1], fit_diffus_sigma0**2+D*(t[:k1]-t[0]), '--', color='black', label=r"${}D={:.3f}\pm{:.3f}$".format(2*fit_diffus_d,D,D_err))
ax1.plot(t[:k1], fit_diffus_sigma0**2+(D+D_err)*(t[:k1]-t[0]), '--', color='grey', lw=1)
ax1.plot(t[:k1], fit_diffus_sigma0**2+(D-D_err)*(t[:k1]-t[0]), '--', color='grey', lw=1)
return ax1,R2,R2std,D/(2*fit_diffus_d),D_err/(2*fit_diffus_d)
return ax1,R2,R2std
ax,diffus_x2,diffus_x2std,Dx,Dx_err = plot_R2(diffus_t, diffus_x2_acc, "x", fit_diffus_t_end, sigma_x, 1)
print("fit on 0->{:.2f} : D_x = {:.4f} ± {:.1f}%".format(fit_diffus_t_end,Dx,Dx_err/Dx*100))
ax.axhline(y=sigma_x**2, label=r"$\sigma_x^2={:.3e}$".format(sigma_x**2), color='black', linestyle='--')
ax.legend()
plt.title("Particle free diffusion phase : mean $x^2$ for {} trajectories,\n {}".format(len(diffus_x2_acc[0]), name))
plt.savefig(name+"diffus_x2.pdf")
ax,diffus_y2,diffus_y2std,Dy,Dy_err = plot_R2(diffus_t, diffus_y2_acc, "y", fit_diffus_t_end, sigma_y, 1)
print("fit on 0->{:.2f} : D_y_raw = {:.4f} ± {:.1f}%".format(fit_diffus_t_end,Dy,Dy_err/Dy*100))
ax.axhline(y=sigma_y**2, label=r"$\sigma_y^2={:.3e}$".format(sigma_y**2), color='black', linestyle='--')
ax.legend()
plt.title("Particle free diffusion phase : mean $y^2$ (raw) for {} trajectories,\n {}".format(len(diffus_y2_acc[0]), name))
plt.savefig(name+"diffus_y2_raw.pdf")
###Output
fit on 0->0.01 : D_y_raw = 0.1397 ± 3.3%
###Markdown
Correction of the $y$ scale so that the diffusion curves in $x$ and $y$ coincide:
###Code
plt.figure(figsize=(12,8))
ax1 = plt.gca()
ax1.plot(diffus_t, diffus_x2-sigma_x**2, label=r"$x$ diffusion curve : $\langle x^2 \rangle(t)-\sigma_x^2$")
ax1.plot(diffus_t, diffus_y2-sigma_y**2, label=r"$y$ diffusion curve : $\langle y^2 \rangle(t)-\sigma_y^2$")
ax2 = ax1.twinx()
ratio_x2_y2 = (diffus_x2-sigma_x**2) / (diffus_y2-sigma_y**2)
ax2.plot( diffus_t[y_correction_beg:], ratio_x2_y2[y_correction_beg:], label=r"ratio$^2$ $x$ diffusion / $y$ diffusion", color='black' )
ratio_x2_y2 = np.mean(ratio_x2_y2[y_correction_beg:])
ax2.axhline(y=ratio_x2_y2, label="mean ratio$^2$ = {:.3f}".format(ratio_x2_y2), color='grey')
ax2.set_ylim((0.9,1.1))
ax1.plot(diffus_t, (diffus_y2-sigma_y**2)*ratio_x2_y2, label=r"corrected $y$ diffusion curve", linestyle='--')
ax1.legend(loc='upper left')
ax2.legend(loc='lower right')
ax1.set_xlabel(r"$t$")
plt.savefig(name+"correction_y.pdf")
###Output
_____no_output_____
###Markdown
Mean $r^2$ curves after the correction on $y$
###Code
ratio_x_y = np.sqrt(ratio_x2_y2)
print(ratio_x_y)
sigma_y *= ratio_x_y
print(sigma_x, sigma_y)
sigma = np.sqrt( sigma_x**2 + sigma_y**2 )
print(sigma)
trapped_sn = 20
trapped_t = np.arange(length_trap//trapped_sn) / fps * trapped_sn
trapped_R2_acc = [ [] for _ in range(len(trapped_t)) ]
diffus_sn = 30
diffus_t = np.arange(length_free//diffus_sn) / fps * diffus_sn
diffus_R2_acc = [ [] for _ in range(len(diffus_t)) ]
diffus_mass_center_x = np.zeros(len(diffus_t))
diffus_mass_center_y = np.zeros(len(diffus_t))
i_end_traj = int(fit_diffus_t_end * fps)
file = open(name+"traj_data.bin", "wb")
for w in range(N[0],N[1]):
t,x,y,trapped,free,i_beg_trapping,i_end_trapping = load_data(name+str(w), creneau_inv, fps, is_exp_data)
k_beg = k_end = 0
if i_beg_trapping[0] > i_end_trapping[0]:
k_end = 1
while k_beg < len(i_beg_trapping)-1 and k_end < len(i_end_trapping):
traj_trapped_x = x[ i_beg_trapping[k_beg]:i_end_trapping[k_end] ] - x0
traj_trapped_y = ( y[ i_beg_trapping[k_beg]:i_end_trapping[k_end] ] - y0 ) * ratio_x_y
for k in range(length_trap//trapped_sn):
i = k*trapped_sn
r2 = traj_trapped_x[i]**2 + traj_trapped_y[i]**2
trapped_R2_acc[k].append(r2)
traj_diffus_x = x[ i_end_trapping[k_end]:i_beg_trapping[k_beg+1] ] - x0
traj_diffus_y = ( y[ i_end_trapping[k_end]:i_beg_trapping[k_beg+1] ] - y0 ) * ratio_x_y
for k in range(length_free//diffus_sn):
i = k*diffus_sn
r2 = traj_diffus_x[i]**2 + traj_diffus_y[i]**2
diffus_R2_acc[k].append(r2)
diffus_mass_center_x[k] += traj_diffus_x[i]
diffus_mass_center_y[k] += traj_diffus_y[i]
data = np.empty((2*i_end_traj+2,), dtype=np.float64)
data[0] = data[1] = np.nan
data[2::2] = traj_diffus_x[:i_end_traj]
data[3::2] = traj_diffus_y[:i_end_traj]
data.tofile(file)
k_beg += 1
k_end += 1
print(w, end=' ')
file.close()
###Output
240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504 505 506 507 508 509 510 511 512 513 514 515 516 517 518 519 520 521 522 523 524 525 526 527 528 529 530 531 532 533 534 535 536 537 538 539 540 541 542 543 544 545 546 547 548 549 550 551 552 553 554 555 556 557 558 559 560 561 562 563 564 565 566 567 568 569 570 571 572 573 574 575 576 577 578 579 580 581 582 583 584 585 586 587 588 589 590 591 592 593 594 595 596 597 598 599 600 601 602 603 604 605 606 607 608 609 610 611 612 613 614 615 616 617 618 619 620 621 622 623 624 625 626 627 628 629 630 631 632 633 634 635 636 637 638 639 640 641 642 643 644 645 646 647 648 649 650 651 652 653 654 655 656 657 658 659 660 661 662 663 664 665 666 667 668 669 670 671 672 673 674 675 676 677 678 679 680 681 682 683 684 685 686 687 688 689 690 691 692 693 694 695 696 697 698 699 700 701 702 703 704 705 706 707 708 709 710 711 712 713 714 715 716 717 718 719 720 721 722 723 724 725 726 727 728 729 730 731 732 733 734 735 736 737 738 739 740 741 742 743 744 745 746 747 748 749 750 751 752 753 754 755 756 757 758 759 760 761 762 763 764 765 766 767 768 769 770 771 772 773 774 775 776 777 778 779 780 781 782 783 784 785 786 787 788 789 790 791 792 793 794 795 796 797 798 799 800 801 802 803 804 805 806 807 808 809 810 811 812 813 814 815 816 817 818 819 820 821 822 823 824 825 826 827 828 829 830 831 832 833 834 835 836 837 838 839 840 841 842 843 844 845 846 847 848 849 850 851 852 853 854 855 856 857 858 859 860 861 862 863 864 865 866 867 868 869 870 871 872 873 874 875 876 877 878 879 880 881 882 883 884 885 886 887 888 889 890 891 892 893 894 895 896 897 898 899 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919 920 921 922 923 924 925 926 927 928 929 930 931 932 933 934 935 936 937 938 939 940 941 942 943 944 945 946 947 948 949 950 951 952 953 954 955 956 957 958 959 960 961 962 963 964 965 966 967 968 969 970 971 972 973 974 975 976 977 978 979 980 981 982 983 984 985 986 987 988 989 990 991 992 993 994 995 996 997 998 999
###Markdown
Mean $r^2$ during the particle trapping phase, just to check that the trapping works well, that it lasts long enough, and that the determination of $\sigma$ is consistent:
###Code
ax,trapped_R2,trapped_R2std = plot_R2(trapped_t, trapped_R2_acc)
ax.axhline(y=sigma**2, label=r"$\sigma_x^2+\sigma_y^2={:.2e}$".format(sigma**2), color='black', linestyle='--')
ax.legend()
plt.title("Particle trapping phase : mean $r^2$ for {} trajectories,\n {}".format(len(trapped_R2_acc[0]), name))
plt.savefig(name+"trapping_r2.pdf")
###Output
_____no_output_____
###Markdown
Mean $r^2$ during the free diffusion phase, and final determination of the diffusion coefficient $D$ over the chosen validity domain:
###Code
ax,diffus_R2,diffus_R2std,D,D_err = plot_R2(diffus_t, diffus_R2_acc, "r", fit_diffus_t_end, sigma, 2)
ax.axhline(y=sigma**2, label=r"$\sigma_x^2+\sigma_y^2={:.2e}$".format(sigma**2), color='black', linestyle='--')
print(D)
print((Dx + Dy*ratio_x2_y2)/2)
ax.legend()
plt.title("Particle free diffusion phase : mean $r^2$ for {} trajectories,\n {}".format(len(diffus_R2_acc[0]), name))
plt.savefig(name+"diffus_r2_corr.pdf")
pd.DataFrame(list({
'N_traj': len(diffus_R2_acc),
'ratio_x_y': ratio_x_y,
'sigma_x': sigma_x,
'sigma_y': sigma_y,
'D': D,
'D_err': D_err,
'fps': fps,
'reset_period': fit_diffus_t_end,
}.items())).set_index(0).to_csv(name+"diffus.csv", header=False, sep=',')
###Output
_____no_output_____
###Markdown
Systematic deviation of the particle during the free diffusion phase:
###Code
plt.figure(figsize=(8,6))
plt.plot(diffus_t, diffus_mass_center_x/len(diffus_R2_acc[0]), label=r"$\langle x \rangle$")
plt.plot(diffus_t, diffus_mass_center_y/len(diffus_R2_acc[0]), label=r"$\langle y \rangle$" )
plt.plot(diffus_t, +4*D*diffus_t, '--', color='grey', label=r"$4Dt$")
plt.plot(diffus_t, -4*D*diffus_t, '--', color='grey')
plt.legend()
plt.title("Systematic deviation of the particle (center of mass)")
plt.xlabel("$t$")
plt.savefig(name+"sysdev.pdf")
###Output
_____no_output_____ |
Programming Assignments/Programming Assignment_2.ipynb | ###Markdown
1. Write a Python program to convert kilometers to miles? 2. Write a Python program to convert Celsius to Fahrenheit? 3. Write a Python program to display calendar? 4. Write a Python program to solve quadratic equation? 5. Write a Python program to swap two variables without temp variable?
###Code
#1 kilometer = 0.621371 miles
kilometers_distance = float(input("Enter the distance in kilometers : "))
miles_distance = kilometers_distance * 0.621371
print(miles_distance)
#1 celsius = 33.8 fahrenheit
celsius = float(input("Enter temperature in celsius : "))
fahrenheit = celsius * (9/5) + 32
print(fahrenheit)
import calendar
year=2021
print(calendar.calendar(year))
A=int(input("Enter the first number for quadratic equation : "))
B=int(input("Enter the second number for quadratic equation : "))
C = A**2+2*A*B+B**2
print(C)
x=5
y=10
print("Value of x before swapping : " , x )
print("Value of y before swapping : ", y)
x = x + y
y = x - y
x = x - y
print("Value of x after swapping :- " , x)
print("Value of y after swapping :- ", y)
###Output
Value of x before swapping : 5
Value of y before swapping : 10
Value of x after swapping :- 10
Value of y after swapping :- 5
|
src/RandomForest/.ipynb_checkpoints/jf-model-7-Copy1-checkpoint.ipynb | ###Markdown
New Model
###Code
# assumes that the `values` and `labels` dataframes were loaded earlier (e.g., from the training csv files)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

important_values = values\
.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
important_values.shape
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'], test_size = 0.2, random_state = 123)
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status", "age_range_superstructure"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train
X_train.shape
# # Search for the best values of the three parameters indicated below.
# n_estimators = [65, 100, 135]
# max_features = [0.2, 0.5, 0.8]
# max_depth = [None, 2, 5]
# min_samples_split = [5, 15, 25]
# # min_impurity_decrease = [0.0, 0.01, 0.025, 0.05, 0.1]
# # min_samples_leaf
# hyperF = {'n_estimators': n_estimators,
# 'max_features': max_features,
# 'max_depth': max_depth,
# 'min_samples_split': min_samples_split
# }
# gridF = GridSearchCV(estimator = RandomForestClassifier(random_state = 123),
# scoring = 'f1_micro',
# param_grid = hyperF,
# cv = 3,
# verbose = 1,
# n_jobs = -1)
# bestF = gridF.fit(X_train, y_train)
# res = pd.DataFrame(bestF.cv_results_)
# res.loc[res['rank_test_score'] <= 10]
# Use the best parameters according to the GridSearch
rf_model = RandomForestClassifier(n_estimators = 150,
max_depth = None,
max_features = 50,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True)
rf_model.fit(X_train, y_train)
rf_model.score(X_train, y_train)
# Compute the F1 score on the held-out test split.
y_preds = rf_model.predict(X_test)
f1_score(y_test, y_preds, average='micro')
test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id")
test_values
test_values_subset = test_values
test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category")
test_values_subset
# Average height per floor
test_values_subset['height_percentage_per_floor_pre_eq'] = test_values_subset['height_percentage']/test_values_subset['count_floors_pre_eq']
test_values_subset['volume_percentage'] = test_values_subset['area_percentage'] * test_values_subset['height_percentage']
# Some averages by location
test_values_subset['avg_age_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['age'].transform('mean')
test_values_subset['avg_area_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['area_percentage'].transform('mean')
test_values_subset['avg_height_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['height_percentage'].transform('mean')
test_values_subset['avg_count_floors_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['count_floors_pre_eq'].transform('mean')
test_values_subset['avg_age_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['age'].transform('mean')
test_values_subset['avg_area_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['area_percentage'].transform('mean')
test_values_subset['avg_height_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['height_percentage'].transform('mean')
test_values_subset['avg_count_floors_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['count_floors_pre_eq'].transform('mean')
# Relationship between materials (the most important ones according to model 5) and age
test_values_subset['20_yr_age_range'] = test_values_subset['age'] // 20 * 20
test_values_subset['20_yr_age_range'] = test_values_subset['20_yr_age_range'].astype('str')
test_values_subset['superstructure'] = ''
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_mud_mortar_stone'], test_values_subset['superstructure'] + 'b', test_values_subset['superstructure'])
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_cement_mortar_brick'], test_values_subset['superstructure'] + 'e', test_values_subset['superstructure'])
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_timber'], test_values_subset['superstructure'] + 'f', test_values_subset['superstructure'])
test_values_subset['age_range_superstructure'] = test_values_subset['20_yr_age_range'] + test_values_subset['superstructure']
del test_values_subset['20_yr_age_range']
del test_values_subset['superstructure']
test_values_subset
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status", "age_range_superstructure"]
for feature in features_to_encode:
test_values_subset = encode_and_bind(test_values_subset, feature)
test_values_subset
features_in_model_not_in_tests =\
list(filter(lambda col: col not in test_values_subset.columns.to_list(), X_train.columns.to_list()))
for f in features_in_model_not_in_tests:
test_values_subset[f] = 0
test_values_subset.drop(columns = list(filter(lambda col: col not in X_train.columns.to_list() , test_values_subset.columns.to_list())), inplace = True)
test_values_subset.shape
# Generate the predictions for the test set.
preds = rf_model.predict(test_values_subset)
submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id")
my_submission = pd.DataFrame(data=preds,
columns=submission_format.columns,
index=submission_format.index)
my_submission.head()
my_submission.to_csv('../../csv/predictions/jf-model-7-1-submission.csv')
!head ../../csv/predictions/jf-model-7-1-submission.csv
###Output
building_id,damage_grade
300051,3
99355,2
890251,2
745817,1
421793,3
871976,2
691228,1
896100,3
343471,2
|
solutions/regress_soln.ipynb | ###Markdown
Think Bayes Copyright 2018 Allen B. Downey MIT License: https://opensource.org/licenses/MIT
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
import numpy as np
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
###Output
_____no_output_____
###Markdown
Bayesian regression This notebook presents a simple example of Bayesian regression using synthetic data. Data Suppose there is a linear relationship between `x` and `y` with slope 2 and intercept 1, but the measurements of `y` are noisy; specifically, the noise is Gaussian with mean 0 and `sigma = 0.3`.
###Code
slope = 2
inter = 1
sigma = 0.3
xs = np.linspace(0, 1, 6)
ys = inter + slope * xs + np.random.normal(0, sigma, len(xs))
thinkplot.plot(xs, ys)
thinkplot.decorate(xlabel='x',
ylabel='y')
###Output
_____no_output_____
###Markdown
Grid algorithmWe can solve the problem first using a grid algorithm, with uniform priors for slope, intercept, and sigma.As an exercise, fill in this likelihood function, then test it using the code below.Your results will depend on the random data you generated, but in general you should find that the posterior marginal distributions peak near the actual parameters.
###Code
from scipy.stats import norm
class Regress(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: x, y
hypo: slope, inter, sigma
"""
return 1
# Solution
from scipy.stats import norm
class Regress(Suite, Joint):
def Likelihood(self, data, hypo):
"""
data: x, y
hypo: slope, inter, sigma
"""
x, y = data
slope, inter, sigma = hypo
yfit = inter + slope * x
error = yfit - y
like = norm(0, sigma).pdf(error)
return like
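# Small illustrative check (added, not part of the original exercise): a data
# point that lies exactly on the true line should be more likely under the true
# hypothesis than under a clearly wrong one.
_hypo_true = (2, 1, 0.3)      # (slope, inter, sigma)
_hypo_wrong = (-2, 0, 0.3)
_point = (0.5, 1 + 2 * 0.5)   # x = 0.5, noise-free y
_check = Regress([_hypo_true, _hypo_wrong])
_check.Likelihood(_point, _hypo_true) > _check.Likelihood(_point, _hypo_wrong)  # True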
params = np.linspace(-4, 4, 21)
sigmas = np.linspace(0.1, 2, 20)
from itertools import product
hypos = product(params, params, sigmas)
suite = Regress(hypos);
for data in zip(xs, ys):
suite.Update(data)
thinkplot.Pdf(suite.Marginal(0))
thinkplot.decorate(xlabel='Slope',
ylabel='PMF',
title='Posterior marginal distribution')
thinkplot.Pdf(suite.Marginal(1))
thinkplot.decorate(xlabel='Intercept',
ylabel='PMF',
title='Posterior marginal distribution')
thinkplot.Pdf(suite.Marginal(2))
thinkplot.decorate(xlabel='Sigma',
ylabel='PMF',
title='Posterior marginal distribution')
###Output
_____no_output_____
###Markdown
MCMCImplement this model using MCMC. As a starting place, you can use this example from [Computational Statistics in Python](http://people.duke.edu/~ccc14/sta-663-2016/16C_PyMC3.htmlLinear-regression).You also have the option of using the GLM module, [described here](https://docs.pymc.io/notebooks/GLM-linear.html).
###Code
import pymc3 as pm
pm.GLM
thinkplot.plot(xs, ys)
thinkplot.decorate(xlabel='x',
ylabel='y')
import pymc3 as pm
with pm.Model() as model:
"""Fill this in"""
# Solution
with pm.Model() as model:
slope = pm.Uniform('slope', -4, 4)
inter = pm.Uniform('inter', -4, 4)
sigma = pm.Uniform('sigma', 0, 2)
y_est = slope*xs + inter
y = pm.Normal('y', mu=y_est, sd=sigma, observed=ys)
trace = pm.sample_prior_predictive(100)
# Solution
for y_prior in trace['y']:
thinkplot.plot(xs, y_prior, color='gray', linewidth=0.5)
thinkplot.decorate(xlabel='x',
ylabel='y')
# Solution
with pm.Model() as model:
slope = pm.Uniform('slope', -4, 4)
inter = pm.Uniform('inter', -4, 4)
sigma = pm.Uniform('sigma', 0, 2)
y_est = slope*xs + inter
y = pm.Normal('y', mu=y_est, sd=sigma, observed=ys)
trace = pm.sample(1000, tune=2000)
# Solution
pm.traceplot(trace);
###Output
_____no_output_____ |
d2l-en/tensorflow/chapter_convolutional-neural-networks/padding-and-strides.ipynb | ###Markdown
Padding and Stride:label:`sec_padding`In the previous example of :numref:`fig_correlation`,our input had both a height and width of 3and our convolution kernel had both a height and width of 2,yielding an output representation with dimension $2\times2$.As we generalized in :numref:`sec_conv_layer`,assuming thatthe input shape is $n_h\times n_w$and the convolution kernel shape is $k_h\times k_w$,then the output shape will be$(n_h-k_h+1) \times (n_w-k_w+1)$.Therefore, the output shape of the convolutional layeris determined by the shape of the inputand the shape of the convolution kernel.In several cases, we incorporate techniques,including padding and strided convolutions,that affect the size of the output.As motivation, note that since kernels generallyhave width and height greater than $1$,after applying many successive convolutions,we tend to wind up with outputs that areconsiderably smaller than our input.If we start with a $240 \times 240$ pixel image,$10$ layers of $5 \times 5$ convolutionsreduce the image to $200 \times 200$ pixels,slicing off $30 \%$ of the image and with itobliterating any interesting informationon the boundaries of the original image.*Padding* is the most popular tool for handling this issue.In other cases, we may want to reduce the dimensionality drastically,e.g., if we find the original input resolution to be unwieldy.*Strided convolutions* are a popular technique that can help in these instances. PaddingAs described above, one tricky issue when applying convolutional layersis that we tend to lose pixels on the perimeter of our image.Since we typically use small kernels,for any given convolution,we might only lose a few pixels,but this can add up as we applymany successive convolutional layers.One straightforward solution to this problemis to add extra pixels of filler around the boundary of our input image,thus increasing the effective size of the image.Typically, we set the values of the extra pixels to zero.In :numref:`img_conv_pad`, we pad a $3 \times 3$ input,increasing its size to $5 \times 5$.The corresponding output then increases to a $4 \times 4$ matrix.The shaded portions are the first output element as well as the input and kernel tensor elements used for the output computation: $0\times0+0\times1+0\times2+0\times3=0$.:label:`img_conv_pad`In general, if we add a total of $p_h$ rows of padding(roughly half on top and half on bottom)and a total of $p_w$ columns of padding(roughly half on the left and half on the right),the output shape will be$$(n_h-k_h+p_h+1)\times(n_w-k_w+p_w+1).$$This means that the height and width of the outputwill increase by $p_h$ and $p_w$, respectively.In many cases, we will want to set $p_h=k_h-1$ and $p_w=k_w-1$to give the input and output the same height and width.This will make it easier to predict the output shape of each layerwhen constructing the network.Assuming that $k_h$ is odd here,we will pad $p_h/2$ rows on both sides of the height.If $k_h$ is even, one possibility is topad $\lceil p_h/2\rceil$ rows on the top of the inputand $\lfloor p_h/2\rfloor$ rows on the bottom.We will pad both sides of the width in the same way.CNNs commonly use convolution kernelswith odd height and width values, such as 1, 3, 5, or 7.Choosing odd kernel sizes has the benefitthat we can preserve the spatial dimensionalitywhile padding with the same number of rows on top and bottom,and the same number of columns on left and right.Moreover, this practice of using odd kernelsand padding to precisely preserve dimensionalityoffers a clerical 
benefit.For any two-dimensional tensor `X`,when the kernel's size is oddand the number of padding rows and columnson all sides are the same,producing an output with the same height and width as the input,we know that the output `Y[i, j]` is calculatedby cross-correlation of the input and convolution kernelwith the window centered on `X[i, j]`.In the following example, we create a two-dimensional convolutional layerwith a height and width of 3and apply 1 pixel of padding on all sides.Given an input with a height and width of 8,we find that the height and width of the output is also 8.
###Code
import tensorflow as tf
# We define a convenience function to calculate the convolutional layer. This
# function initializes the convolutional layer weights and performs
# corresponding dimensionality elevations and reductions on the input and
# output
def comp_conv2d(conv2d, X):
# Here (1, 1) indicates that the batch size and the number of channels
# are both 1
X = tf.reshape(X, (1, ) + X.shape + (1, ))
Y = conv2d(X)
# Exclude the first two dimensions that do not interest us: examples and
# channels
return tf.reshape(Y, Y.shape[1:3])
# Note that here 1 row or column is padded on either side, so a total of 2
# rows or columns are added
conv2d = tf.keras.layers.Conv2D(1, kernel_size=3, padding='same')
X = tf.random.uniform(shape=(8, 8))
comp_conv2d(conv2d, X).shape
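# Quick numeric check of the shape formula from the text above (added for
# illustration): with an 8x8 input, a 3x3 kernel and one pixel of padding on
# every side (p_h = p_w = 2), (n_h - k_h + p_h + 1, n_w - k_w + p_w + 1)
# predicts the same 8x8 output that the Keras layer reports.
n_h = n_w = 8
k_h = k_w = 3
p_h = p_w = 2
(n_h - k_h + p_h + 1, n_w - k_w + p_w + 1)  # (8, 8)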
###Output
_____no_output_____
###Markdown
When the height and width of the convolution kernel are different,we can make the output and input have the same height and widthby setting different padding numbers for height and width.
###Code
# Here, we use a convolution kernel with a height of 5 and a width of 3. The
# padding numbers on either side of the height and width are 2 and 1,
# respectively
conv2d = tf.keras.layers.Conv2D(1, kernel_size=(5, 3), padding='same')
comp_conv2d(conv2d, X).shape
###Output
_____no_output_____
###Markdown
StrideWhen computing the cross-correlation,we start with the convolution windowat the top-left corner of the input tensor,and then slide it over all locations both down and to the right.In previous examples, we default to sliding one element at a time.However, sometimes, either for computational efficiencyor because we wish to downsample,we move our window more than one element at a time,skipping the intermediate locations.We refer to the number of rows and columns traversed per slide as the *stride*.So far, we have used strides of 1, both for height and width.Sometimes, we may want to use a larger stride.:numref:`img_conv_stride` shows a two-dimensional cross-correlation operationwith a stride of 3 vertically and 2 horizontally.The shaded portions are the output elements as well as the input and kernel tensor elements used for the output computation: $0\times0+0\times1+1\times2+2\times3=8$, $0\times0+6\times1+0\times2+0\times3=6$.We can see that when the second element of the first column is outputted,the convolution window slides down three rows.The convolution window slides two columns to the rightwhen the second element of the first row is outputted.When the convolution window continues to slide two columns to the right on the input,there is no output because the input element cannot fill the window(unless we add another column of padding).:label:`img_conv_stride`In general, when the stride for the height is $s_h$and the stride for the width is $s_w$, the output shape is$$\lfloor(n_h-k_h+p_h+s_h)/s_h\rfloor \times \lfloor(n_w-k_w+p_w+s_w)/s_w\rfloor.$$If we set $p_h=k_h-1$ and $p_w=k_w-1$,then the output shape will be simplified to$\lfloor(n_h+s_h-1)/s_h\rfloor \times \lfloor(n_w+s_w-1)/s_w\rfloor$.Going a step further, if the input height and widthare divisible by the strides on the height and width,then the output shape will be $(n_h/s_h) \times (n_w/s_w)$.Below, we set the strides on both the height and width to 2,thus halving the input height and width.
###Code
conv2d = tf.keras.layers.Conv2D(1, kernel_size=3, padding='same', strides=2)
comp_conv2d(conv2d, X).shape
###Output
_____no_output_____
###Markdown
Next, we will look at a slightly more complicated example.
###Code
conv2d = tf.keras.layers.Conv2D(1, kernel_size=(3,5), padding='valid',
strides=(3, 4))
comp_conv2d(conv2d, X).shape
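# Numeric check of the stride formula from the text (added for illustration):
# padding='valid' means p_h = p_w = 0, so with a (3, 5) kernel and strides
# (3, 4) on the 8x8 input, the formula
# floor((n_h - k_h + p_h + s_h)/s_h) x floor((n_w - k_w + p_w + s_w)/s_w)
# predicts a (2, 1) output, which should match the shape reported above.
((8 - 3 + 0 + 3) // 3, (8 - 5 + 0 + 4) // 4)  # (2, 1)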
###Output
_____no_output_____ |
day_5/Lab_27_TF2_Feature_Columns.ipynb | ###Markdown
Feature Columns in Tensorflow 2.0Inspired by https://www.tensorflow.org/alpha/tutorials/keras/feature_columns
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('../data/australian_credit.csv')
###Output
_____no_output_____
###Markdown
- Data adapted from [here](https://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/crx.data)
- Attributes from [here](https://www.researchgate.net/publication/3297254_A_Compact_and_Accurate_Model_for_Classification)

|Column| Values| Type|
| :--- | :--- | :--- |
|A1 (Sex) | 0, 1 |Nominal|
|A2 (Age) | 13.75 - 80.25 |Continuous|
|A3 (Mean time at addresses) | 0 - 28 |Continuous|
|A4 (Home status) | 1, 2, 3 |Nominal|
|A5 (Current occupation) | 1 - 14 |Nominal|
|A6 (Current job status) | 1 - 9 |Nominal|
|A7 (Mean time with employers) | 0 - 28.5 |Continuous|
|A8 (Other investments) | 0, 1 |Nominal|
|A9 (Bank account) | 0, 1 |Nominal|
|A10 (Time with bank) | 0 - 67 |Continuous|
|A11 (Liability reference) | 0, 1 |Nominal|
|A12 (Account reference) | 1, 2, 3 |Nominal|
|A13 (Monthly housing expense) | 0 - 2000 |Continuous|
|A14 (Savings account balance) | 1 - 100001 |Continuous|
###Code
df.head()
df.info()
df.describe()
from sklearn.model_selection import train_test_split
train_val, test = train_test_split(df, test_size=0.2, random_state=0)
train, val = train_test_split(train_val, test_size=0.2, random_state=0)
train.shape
val.shape
test.shape
###Output
_____no_output_____
###Markdown
Batch generation with tf.data.Dataset
###Code
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
labels = dataframe.pop('class')
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
ds = ds.shuffle(buffer_size=len(dataframe))
ds = ds.batch(batch_size)
return ds
batch_size = 5
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
example_batch = next(iter(train_ds))[0]
###Output
_____no_output_____
###Markdown
Feature Columns
###Code
def demo(feature_column):
feature_layer = tf.keras.layers.DenseFeatures(feature_column)
print(feature_layer(example_batch).numpy())
age = tf.feature_column.numeric_column("age")
demo(age)
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
occupation_vocab = df['occupation'].unique()
occupation_vocab
occupation = tf.feature_column.categorical_column_with_vocabulary_list(
'occupation', occupation_vocab)
occupation_one_hot = tf.feature_column.indicator_column(occupation)
demo(occupation_one_hot)
occupation_embedding = tf.feature_column.embedding_column(
occupation, dimension=8)
demo(occupation_embedding)
occupation_hashed = tf.feature_column.categorical_column_with_hash_bucket(
'occupation', hash_bucket_size=1000)
occupation_hashed = tf.feature_column.indicator_column(occupation_hashed)
demo(occupation_hashed)
crossed_feature = tf.feature_column.crossed_column(
[age_buckets, occupation], hash_bucket_size=1000)
crossed_feature = tf.feature_column.indicator_column(crossed_feature)
demo(crossed_feature)
numeric_cols = ['age', 'time_at_addr', 'time_w_empl',
'time_w_bank', 'monthly_housing', 'savings_balance']
feature_columns = []
for c in numeric_cols:
feature_columns.append(tf.feature_column.numeric_column(c))
feature_columns.append(age_buckets)
feature_columns.append(occupation_one_hot)
feature_columns.append(occupation_embedding)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
Model Train and Evaluate Baseline
###Code
classes_ratio = df['class'].value_counts() / len(df)
classes_ratio
baseline = classes_ratio[0]
baseline
###Output
_____no_output_____
###Markdown
Model
###Code
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras import Sequential
from tensorflow.keras.optimizers import Adam, RMSprop
model = Sequential([
DenseFeatures(feature_columns),
Dense(128, activation='relu'),
Dense(64, activation='relu'),
Dense(1, activation='sigmoid')
])
model.compile(optimizer=Adam(lr=0.0001),
loss='binary_crossentropy',
metrics=['accuracy'])
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
h = model.fit(train_ds,
validation_data=val_ds,
epochs=15)
pd.DataFrame(h.history).plot()
plt.ylim(0, 1)
plt.axhline(baseline, c='black');
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
_____no_output_____ |
alienVsPredator.ipynb | ###Markdown
###Code
import os
%tensorflow_version 1.x
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers, Sequential
import tensorflow.keras.backend as K
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping
import pathlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import numpy as np
train_root = "/content/drive/MyDrive/Colab Notebooks/alienVspredator/alien_vs_predator_thumbnails/data/train"
test_root = "/content/drive/MyDrive/Colab Notebooks/alienVspredator/alien_vs_predator_thumbnails/data/validation"
image_path = train_root + "/alien/103.jpg"
def image_load(image_path):
loaded_image = image.load_img(image_path)
image_rel = pathlib.Path(image_path).relative_to(train_root)
print(image_rel)
return loaded_image
image_load(image_path)
train_generator = ImageDataGenerator(rescale=1/255)
test_generator = ImageDataGenerator(rescale=1/255)
train_image_data = train_generator.flow_from_directory(str(train_root),target_size=(224,224))
test_image_data = test_generator.flow_from_directory(str(test_root), target_size=(224,224))
feature_extractor_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/2"
feature_extractor_url
def feature_extractor(x):
feature_extractor_module = hub.Module(feature_extractor_url)
return feature_extractor_module(x)
IMAGE_SIZE = hub.get_expected_image_size(hub.Module(feature_extractor_url))
IMAGE_SIZE
for image_batch, label_batch in train_image_data:
print("Image-batch-shape:",image_batch.shape)
print("Label-batch-shape:",label_batch.shape)
break
for test_image_batch, test_label_batch in test_image_data:
print("Image-batch-shape:",test_image_batch.shape)
print("Label-batch-shape:",test_label_batch.shape)
break
feature_extractor_layer = layers.Lambda(feature_extractor,input_shape=IMAGE_SIZE+[3])
feature_extractor_layer.trainable = False
model = Sequential([
feature_extractor_layer,
layers.Dense(train_image_data.num_classes, activation = "softmax")
])
model.summary()
sess = K.get_session()
init = tf.global_variables_initializer()
sess.run(init)
result = model.predict(image_batch)
result.shape
model.compile(
optimizer = tf.train.AdamOptimizer(),
loss = "categorical_crossentropy",
metrics = ['accuracy']
)
class CollectBatchStats(tf.keras.callbacks.Callback):
def __init__(self):
self.batch_losses = []
self.batch_acc = []
def on_batch_end(self, batch, logs=None):
self.batch_losses.append(logs['loss'])
self.batch_acc.append(logs['acc'])
# Early stopping to stop the training if the loss starts to increase. It also helps avoid overfitting.
es = EarlyStopping(patience=2,monitor="val_loss")
batch_stats = CollectBatchStats()
# fitting the model
model.fit((item for item in train_image_data), epochs = 3,
steps_per_epoch=21,
callbacks = [batch_stats, es],validation_data=test_image_data)
label_names = sorted(train_image_data.class_indices.items(), key=lambda pair:pair[1])
label_names = np.array([key.title() for key, value in label_names])
label_names
result_batch = model.predict(test_image_batch)
labels_batch = label_names[np.argmax(result_batch, axis=-1)]
labels_batch
plt.figure(figsize=(13,10))
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(test_image_batch[n])
plt.title(labels_batch[n])
plt.axis('off')
plt.suptitle("Model predictions")
###Output
_____no_output_____ |
intro-to-tensorflow/intro_to_tensorflow_mine2.ipynb | ###Markdown
TensorFlow Neural Network Lab In this lab, you'll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, notMNIST, consists of images of a letter from A to J in different fonts.The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in! To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "`All modules imported`".
###Code
import hashlib
import os
import pickle
from PIL import Image
from tqdm import tqdm
from urllib.request import urlretrieve
from zipfile import ZipFile
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
print('All modules imported.')
###Output
All modules imported.
###Markdown
The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
###Code
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9'
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zfile:
filenames_pbar = tqdm(zfile.namelist(), unit='files')
for filename in filenames_pbar:
if not filename.endswith('/'):
with zfile.open(filename) as image_file:
image = Image.open(image_file)
image.load()
feature = np.array(image, dtype=np.float32).flatten()
features.append(feature)
labels.append(filename.split('/')[1][0])
return np.array(features), np.array(labels)
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# test_labels
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
print('All features and labels uncompressed.')
###Output
All features and labels uncompressed.
###Markdown
Problem 1The first problem involves normalizing the features for your training and test data.Implement Min-Max scaling in the `normalize_grayscale()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.Since the raw notMNIST image data is in [grayscale](https://en.wikipedia.org/wiki/Grayscale), the current values range from a min of 0 to a max of 255.Min-Max Scaling:$X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}$*If you're having trouble solving problem 1, you can view the solution [here](https://github.com/udacity/deep-learning/blob/master/intro-to-tensorflow/intro_to_tensorflow_solution.ipynb).*
###Code
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
xmin = 0
xmax = 255
return a + (((image_data - xmin) * (b - a))/(xmax - xmin))
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print("Labels 1 hot encoded")
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
pickle_file = 'notMNIST2.pickle'
if not os.path.isfile(pickle_file):
try:
with open(pickle_file, 'wb') as pfile:
pickle.dump({
"train_dataset":train_features,
"train_labels":train_labels,
"valid_dataset":valid_features,
"valid_labels":valid_labels,
"test_dataset":test_features,
"test_labels":test_labels
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print("Error: ", e)
raise
###Output
_____no_output_____
###Markdown
CheckpointAll your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
###Code
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
import pickle
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data
print('data loaded')
###Output
data loaded
###Markdown
Problem 2Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict which letter (A-J) the image shows, so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. For the neural network to train on your data, you need the following float32 tensors: - `features` - Placeholder tensor for feature data (`train_features`/`valid_features`/`test_features`) - `labels` - Placeholder tensor for label data (`train_labels`/`valid_labels`/`test_labels`) - `weights` - Variable Tensor with random numbers from a truncated normal distribution. - See `tf.truncated_normal()` documentation for help. - `biases` - Variable Tensor with all zeros. - See `tf.zeros()` documentation for help.*If you're having trouble solving problem 2, review the "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available [here](intro_to_tensorflow_solution.ipynb).*
###Code
features_count = 784
labels_count = 10
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros((labels_count)))
#Tests
assert features._op.name.startswith('Placeholder')
assert labels._op.name.startswith('Placeholder')
assert isinstance(weights, tf.Variable)
assert isinstance(biases, tf.Variable)
assert features.shape ==None or (features._shape.dims[0].value ==None and features._shape.dims[1].value == 784)
assert labels.shape == None or (labels._shape.dims[0].value == None and labels._shape.dims[1].value == 10)
assert weights._variable.shape == (784,10)
assert biases._variable.shape == (10)
assert features.dtype == tf.float32
assert labels.dtype == tf.float32
train_feed_dict = {features:train_features, labels:train_labels }
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features:test_features, labels: test_labels}
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
loss = tf.reduce_mean(cross_entropy)
init = tf.global_variables_initializer()
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data)
print('passes')
# is_correct_prediction = tf.equal(labels, predictions)
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction,tf.float32))
###Output
_____no_output_____
###Markdown
Problem 3Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.Parameter configurations:Configuration 1* **Epochs:** 1* **Learning Rate:** * 0.8 * 0.5 * 0.1 * 0.05 * 0.01Configuration 2* **Epochs:** * 1 * 2 * 3 * 4 * 5* **Learning Rate:** 0.2The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.*If you're having trouble solving problem 3, you can view the solution [here](intro_to_tensorflow_solution.ipynb).*
###Code
batch_size = 128
epochs = 1
learning_rate = 0.1
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)
validation_accuracy = 0.0
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
for batch_i in batches_pbar:
batch_start = batch_i * batch_size
batch_features = train_features[batch_start:batch_start+batch_size]
batch_labels = train_labels[batch_start:batch_start+batch_size]
_, l = session.run([optimizer, loss], feed_dict={features:batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step+previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print("Validation accuracy at {}".format(validation_accuracy))
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 1
learning_rate = 0.1
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
###Output
Epoch 1/1: 100%|██████████| 1114/1114 [00:04<00:00, 239.67batches/s]
###Markdown
TestYou're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
###Code
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
###Output
Epoch 1/1: 100%|██████████| 1114/1114 [00:01<00:00, 770.68batches/s] |
Chest Xray Classifier.ipynb | ###Markdown
Display Images
###Code
normal_xray = cv2.cvtColor(cv2.imread('chest_xray/train/NORMAL/IM-0115-0001.jpeg'),cv2.COLOR_BGR2RGB)
normal_xray.shape
plt.imshow(normal_xray)
pneumonia_xray = cv2.cvtColor(cv2.imread('chest_xray/train/PNEUMONIA/person1_bacteria_1.jpeg'),cv2.COLOR_BGR2RGB)
pneumonia_xray.shape
plt.imshow(pneumonia_xray)
###Output
_____no_output_____
###Markdown
Data Generation And Preprocessing
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
image_gen = ImageDataGenerator(rotation_range=30,
width_shift_range=0.1,
height_shift_range=0.1,
rescale=1/255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
plt.imshow(image_gen.random_transform(pneumonia_xray))
print('done')
image_gen.flow_from_directory('chest_xray/train')
image_gen.flow_from_directory('chest_xray/test')
input_shape = (700,700,3)
###Output
_____no_output_____
###Markdown
Building Model
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D,MaxPooling2D,Flatten,Dense,Activation,Dropout
model = Sequential()
#Conv B1
model.add(Conv2D(filters=32,kernel_size=(3,3),input_shape=input_shape,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
#Conv B2
model.add(Conv2D(filters=64,kernel_size=(3,3),input_shape=input_shape,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
#Conv B3
model.add(Conv2D(filters=128,kernel_size=(3,3),input_shape=input_shape,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
input_shape[:2]
batch_size = 8
train_gen = image_gen.flow_from_directory('chest_xray/train',
target_size=input_shape[:2],
batch_size=batch_size,
class_mode='binary')
test_gen = image_gen.flow_from_directory('chest_xray/test',
target_size=input_shape[:2],
batch_size=batch_size,
class_mode='binary')
train_gen.class_indices
results = model.fit_generator(train_gen,epochs=20,steps_per_epoch=120,validation_data=test_gen,validation_steps=12)
print(results.history.keys())
plt.plot(results.history['accuracy'])
plt.plot(results.history['val_accuracy'])
plt.title('Model Performance')
plt.ylabel('accuracy')
plt.xlabel('Epochs')
plt.legend(['accuracy','val_accuracy'],loc='upper left')
plt.show()
plt.close()
plt.plot(results.history['loss'])
plt.plot(results.history['val_loss'])
plt.title('Loss Behaviour')
plt.ylabel('loss')
plt.xlabel('Epochs')
plt.legend(['loss','val_loss'],loc='upper left')
plt.show()
plt.close()
from tensorflow.keras.preprocessing import image
import numpy as np
###Output
_____no_output_____
###Markdown
Pneumonia Prediction
###Code
test_image = cv2.cvtColor(cv2.imread('chest_xray/test/PNEUMONIA/person1_virus_9.jpeg'),cv2.COLOR_BGR2RGB)
plt.imshow(test_image)
pneumonia_xray = image.load_img('chest_xray/test/PNEUMONIA/person1_virus_9.jpeg',target_size=(700,700))
pneumonia_xray = image.img_to_array(pneumonia_xray)
print(pneumonia_xray.shape)
pneumonia_xray = np.expand_dims(pneumonia_xray,axis=0)
print(pneumonia_xray.shape)
pneumonia_xray = pneumonia_xray/255
prediction = model.predict(pneumonia_xray)
print(f'Probabibility of pneumonia is: {prediction}')
result = model.predict_classes(pneumonia_xray)
print(result)
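# Added for readability (assumes the class ordering from train_gen above):
# map the predicted class index back to its folder label.
idx_to_label = {v: k for k, v in train_gen.class_indices.items()}
print(idx_to_label[int(result[0][0])])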
model.save('chest_xray_pneumonia.h5')
###Output
_____no_output_____ |
notebooks/Skin_Deep_Learning_Training_Notebook.ipynb | ###Markdown
Skin Deep Learning Training Notebook
###Code
import keras as K
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D
import tensorflow as TF
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import imageio
%matplotlib inline
import os
from glob import glob
import seaborn as sns
from PIL import Image
from sklearn.preprocessing import label_binarize
from sklearn.metrics import confusion_matrix
import itertools
from sklearn.model_selection import train_test_split
#Mounting Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/gdrive
###Markdown
Loading and Preprocessing data
###Code
#Reading the metadata_csv to see what the current DataFrame looks like.
metadata_path = 'gdrive/My Drive/Google Colab Data/Skin/HAM10000_metadata.csv'
metadata = pd.read_csv(metadata_path)
metadata.head(5)
# This dictionary is useful for displaying more human-friendly labels later on
lesion_type_dict = {
'nv': 'Melanocytic nevi',
'mel': 'Melanoma',
'bkl': 'Benign keratosis-like lesions ',
'bcc': 'Basal cell carcinoma',
'akiec': 'Actinic keratoses',
'vasc': 'Vascular lesions',
'df': 'Dermatofibroma'
}
#Creating New Columns for better readability
newpath = 'gdrive/My Drive/Google Colab Data/Skin/HAM10000_images_part_1/'
metadata['path'] = metadata['image_id'].map(lambda x: newpath+x+".jpg")
print(list(metadata['path'])[6])
#Writes cell_type & cell_type_index features to the csv
metadata['cell_type'] = metadata['dx'].map(lesion_type_dict.get)
metadata['cell_type_idx'] = pd.Categorical(metadata['cell_type']).codes
# metadata.head(5)
#Resizing images to a 100x75x3 matrix and storing them as a new feature
#for the DF
metadata['image'] = metadata['path'].map(lambda x: np.asarray(Image\
.open(x).resize((100,75))))
# metadata.head(5)
#Plotting one image to confirm that the previous step was successful
plt.figure()
plt.imshow(metadata['image'][4])
###Output
_____no_output_____
###Markdown
Cleaning and Preparing Data for Training
###Code
X = metadata['image'].values
y = metadata['cell_type_idx'].values
"""nx, ny represent the image resolution of the training dataset.
When we use this model for prediction later,
These values will be used to resize any images uploaded"""
nx = X[1].shape[1]
ny = X[1].shape[0]
#nc by convention; Referencing the number of channels used.
nc = X[1].shape[2]
m = X.shape[0]
#reshape X to a nicer shape and print dimentions
X = np.concatenate(X).reshape(m,ny,nx,nc)
X.shape
#np.save('temp.npy', [X, y, m, ny, nx, nc])
X, y, m, ny, nx, nc = np.load('gdrive/My Drive/Google Colab Data/Skin/temp.npy')
###Output
_____no_output_____
###Markdown
Randomizing and Normalizing DataFrame
###Code
#Randomizing and splitting the data set
train_X, test_X, train_y, test_y = train_test_split(X, y, \
test_size=0.20, random_state=3)
#Converting test and train y to one hot encode format
test_y = K.utils.to_categorical(test_y.transpose())
train_y = K.utils.to_categorical(train_y.transpose())
#Calculating train_X mean and standard deviation for normalization
train_X_mean = np.mean(train_X, axis=0)
train_X_std = np.std(train_X, axis=0)
#Normalization
train_X = ((train_X - train_X_mean)/train_X_std)
test_X = ((test_X - train_X_mean)/train_X_std)
#No variable generation for test set to prevent data leakage
# Checking normalization
plt.figure()
plt.hist(train_X[5,:,:,1])
###Output
_____no_output_____
###Markdown
Model Building The model is built with the Keras Sequential API. It uses a pair of convolutional layers with 32 filters each, followed by a max pooling layer. We use dropout for regularization to avoid overfitting the model. We repeat the above step with more granular convolutional filters and again use dropout, with a more aggressive dropout rate, to avoid overfitting. We then add a dense layer using the rectified linear unit as the activation function, with regularization applied again to limit overfitting. A softmax activation layer is used to predict the 7 disease categories identified by our dataset.
###Code
# Setting CNN Skin model
input_shape = (75, 100, 3)
class_num = 7
model = K.models.Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',padding = 'Same',input_shape=input_shape))
model.add(Conv2D(32,kernel_size=(3, 3), activation='relu',padding = 'Same',))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu',padding = 'Same'))
model.add(Conv2D(64, (3, 3), activation='relu',padding = 'Same'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.40))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(class_num, activation='softmax'))
model.summary()
# Define optimizer (Adam optimizer)
optimizer = K.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,\
epsilon=None, decay=0.0, amsgrad=False)
# Compile the model, categorical crossentropy loss and accuracy metric
model.compile(optimizer = optimizer, loss = "categorical_crossentropy", \
metrics = ['accuracy'])
# train_y was one-hot encoded above, so it can be passed to fit directly
history = model.fit(train_X, train_y, batch_size = 64, epochs = 50)
plt.figure()
plt.plot(history.history['loss'])
model.evaluate(test_X, test_y)
#Saving the model and weights for the end to end solution.
#Save weights
model.save_weights("gdrive/My Drive/Google Colab Data/Skin/skinmodelweights.h5")
model_json = model.to_json()
#Save model
with open("gdrive/My Drive/Google Colab Data/Skin/skinmodel.json", "w") as file:
file.write(model_json)
#Save mean and std
np.save("gdrive/My Drive/Google Colab Data/Skin/skinmodel_meanstd.npy", [train_X_mean, train_X_std])
def img_processor(imgpath, meanstdpath, modelpath, weightspath):
"""
inputs:
imgpath - path to image of potential skin cancer mole;
meanstdpath - path to training mean & standard deviation;
modelpath - path to Keras model;
weightspath - path to weights for Keras model
output: pred_dic - a prediction dictionary with diseases as keys and probabilities as values
"""
from keras.models import model_from_json
# load model
with open(modelpath, "r") as file:
loaded_json = file.read()
skinmodel = model_from_json(loaded_json)
# Load weights from file
skinmodel.load_weights(weightspath)
# Load mean and std
train_X_mean, train_X_std = np.load(meanstdpath)
# Loading, resizing image as np.array
imagearray = np.asarray(Image.open(imgpath).resize((100,75)))
imagearray = ((imagearray-train_X_mean)/train_X_std)
ny, nx, nc = imagearray.shape
imagearray = imagearray.reshape(1 ,ny, nx, nc)
pred_vec = skinmodel.predict(imagearray).flatten()
pred_dict = {'Actinic keratoses' : pred_vec[0], 'Basal cell carcinoma' : pred_vec[1],
'Benign keratosis-like lesions' : pred_vec[2], 'Dermatofibroma' : pred_vec[3],
'Melanocytic nevi' : pred_vec[4], 'Melanoma' : pred_vec[5], 'Vascular lesions' : pred_vec[6]
}
return pred_dict
drive_path = 'gdrive/My Drive/Google Colab Data/Skin/'
res = img_processor(drive_path + "HAM10000_images_part_1/ISIC_0024346.jpg", drive_path + "skinmodel_meanstd.npy",\
drive_path + "skinmodel.json", drive_path + "skinmodelweights.h5")
print(res)
###Output
_____no_output_____ |
notebooks/MNIST_From_Scratch.ipynb | ###Markdown
Introduction We will be solving the full [MNIST DataSet](http://yann.lecun.com/exdb/mnist/) writing the code from Scratch. Import Packages we will use:
###Code
!pip install -Uqq fastai
from fastai.vision.all import *
matplotlib.rc('image', cmap='Greys')
###Output
_____no_output_____
###Markdown
Downloading and extract the data. We will use the dataset hosted on [fastai datasets](https://course.fast.ai/datasets).
###Code
path = untar_data(URLs.MNIST)
Path.BASE_PATH = path
###Output
_____no_output_____
###Markdown
Check the content of our folders:
###Code
path.ls()
(path/'training').ls().sorted()
(path/'testing').ls().sorted()
###Output
_____no_output_____
###Markdown
The data has been arranged in folders: `training` and `testing`, each with folders for each type of image `0-9`. Data Preprocessing Writing custom functions to get our data.`load_mnist_data` does the following:* scans the directories and get the labels* creates X and y lists for holding the independent and dependent variables respectively* for each file in a label folder, open the Image and append it to x, and append its label to y* stacks all the images into a tensor, casts them to float and normalize the data to be between 0 and 1* turn y (dependent variable) into a tensor* return X and y
###Code
def load_mnist_data(dataset, path):
# scan all the directories and create a list of labels
labels = os.listdir(path/dataset)
labels.sort()
# create lists for samples and labels
X = []
y = []
# for each label folder
for label in labels:
# and for each image in given folder
for file in (path/dataset/label).ls().sorted():
# open the image and append it and labels to lists
image = Image.open(file)
X.append(tensor(image))
y.append(int(label))
# stack all the images into a tensor
# and casts them into floats and
# normalizes them to between 0 and 1
X = torch.stack(X).float()/255
# turn y into tensor
y = tensor(y)
return X, y
###Output
_____no_output_____
###Markdown
`create_data` loads the training and testing data, shuffles the training data and splits off validation data (using the percentage passed in as a parameter), flattens the X's, loads the data into datasets and returns them
###Code
def create_data(path, valid_split=0.2):
# load the training and testing dataset separately
X, y = load_mnist_data('training', path)
X_test, y_test = load_mnist_data('testing', path)
# shuffle the training data
# because I want to split for validation data
idx = torch.randperm(X.shape[0])
X = X[idx].view(X.size())
y = y[idx].view(y.size())
# Flatten the Training Images and Test Images
X = X.view(-1, 28*28)
X_test = X_test.view(-1, 28*28)
# default split is 20%
# but can be changed by parameter
train_index = int((1-valid_split) * X.shape[0])
X_train = X[:train_index]
y_train = y[:train_index]
X_valid = X[train_index:]
y_valid = y[train_index:]
# load the data into respective datasets
train = list(zip(X_train, y_train))
valid = list(zip(X_valid, y_valid))
test = list(zip(X_test, y_test))
# return the datasets
return train, valid, test
###Output
_____no_output_____
###Markdown
DataSet Create the training, validation and testing datasets:
###Code
train_dset, valid_dset, test_dset = create_data(path)
len(train_dset), len(valid_dset), len(test_dset)
###Output
_____no_output_____
###Markdown
DataLoaders Create `DataLoaders` of the training and validation datasets.DataLoaders object groups the data in batches and shuffles the training data.
###Code
train_dl = DataLoader(train_dset, bs=256, shuffle=True)
valid_dl = DataLoader(valid_dset, bs=256, shuffle=False)
dls = DataLoaders(train_dl, valid_dl)
len(train_dl), len(valid_dl)
###Output
_____no_output_____
###Markdown
Loss Function Create the Loss Function we are going to use. We are using CrossEntropy Loss. It applies softmax along the class dimension (axis=1). Then, for each true label y, we take the model's activation at that index, apply the negative log, and return the mean of all the negative log likelihoods.
###Code
def cross_entropy_loss(preds, y):
# apply softmax
preds = torch.softmax(preds, axis=1)
# get confidences for the correct class
idx = len(preds)
confidences = preds[range(idx), y]
# calculate negative log likelihood and return it
log_ll = -torch.log(confidences)
return log_ll.mean()
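# Tiny hand-checked example (added for illustration, not in the original):
# two samples, three classes. Softmax of [5, 0, 0] puts ~0.987 on class 0,
# so the loss for confident, correct predictions should be close to 0,
# whereas uniform logits would give -log(1/3) ~= 1.099.
_demo_preds = tensor([[5., 0., 0.],
                      [0., 0., 5.]])
_demo_targets = tensor([0, 2])
cross_entropy_loss(_demo_preds, _demo_targets)  # ~0.013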
###Output
_____no_output_____
###Markdown
Testing with a Small Batch of Data First First test with a small batch of 5.
###Code
batch = train_dset[:5]
data = [list(t) for t in zip(*batch)]
x, y = data[0], data[1]
xb = torch.stack(x)
yb = torch.stack(y)
xb.shape, yb.shape
###Output
_____no_output_____
###Markdown
Simple Model Create a Simple Model and a way to initialize parameters:
###Code
def init_params(size):
# we initialize them randomly
return (torch.randn(size)).requires_grad_()
def linear_model(xb):
return xb@weights + biases
weights = init_params((28*28, 10))
biases = init_params(10)
###Output
_____no_output_____
###Markdown
Get the prediction using our simple model
###Code
preds = linear_model(xb)
preds
###Output
_____no_output_____
###Markdown
Calculate the Loss:
###Code
loss = cross_entropy_loss(preds, yb)
loss
###Output
_____no_output_____
###Markdown
Let's put that in a function:
###Code
def calc_grad(xb, yb, model):
preds = model(xb)
loss = cross_entropy_loss(preds, yb)
loss.backward()
###Output
_____no_output_____
###Markdown
Get the mean weights and biases gradients (because we can't display all of them)
###Code
calc_grad(xb, yb, linear_model)
weights.grad.mean(), biases.grad.mean()
###Output
_____no_output_____
###Markdown
Create a function to train for one epoch with our training data
###Code
def train_epoch(model, lr, params):
for xb, yb in train_dl:
calc_grad(xb, yb, model)
for p in params:
p.data -= p.grad*lr
p.grad.zero_()
###Output
_____no_output_____
###Markdown
Accuracy Function Define the accuracy function we are going to use. It gets the index of the largest activation of the predictions assumes thats what out model predicts, and compares them with the true labels and get the mean.
###Code
def batch_accuracy(preds, yb):
prediction = torch.argmax(preds, axis=1)
correct = (prediction == yb)
return correct.float().mean()
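# Quick illustration (added): three samples where the model gets two out of
# three right, so the accuracy should be 2/3.
_demo_preds = tensor([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
_demo_yb = tensor([1, 0, 0])
batch_accuracy(_demo_preds, _demo_yb)  # tensor(0.6667)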
###Output
_____no_output_____
###Markdown
Current accuracy of our simple model on the small batch:
###Code
batch_accuracy(linear_model(xb), yb)
###Output
_____no_output_____
###Markdown
Create a function for validating an epoch. Remember we get the metrics on the validation dataset.
###Code
def validate_epoch(model):
accs = [batch_accuracy(model(xb), yb) for xb, yb in valid_dl]
return round(torch.stack(accs).mean().item(), 4)
###Output
_____no_output_____
###Markdown
Current accuracy on validation set:
###Code
validate_epoch(linear_model)
###Output
_____no_output_____
###Markdown
Step the model once:
###Code
lr = 1
params = weights, biases
train_epoch(linear_model, lr, params)
validate_epoch(linear_model)
###Output
_____no_output_____
###Markdown
Loop for a number of epochs and train and validate:
###Code
for i in range(20):
train_epoch(linear_model, lr, params)
print(validate_epoch(linear_model), end=' ')
###Output
0.8649 0.8752 0.8786 0.8848 0.8882 0.8945 0.89 0.896 0.9003 0.9 0.8939 0.9029 0.9014 0.8925 0.9055 0.9064 0.9055 0.9039 0.9083 0.9074
###Markdown
Optimizer Class Create an optimzer class. We are going to use Basic SGD.
###Code
class optimizer_SGD():
def __init__(self,params,lr):
self.params = list(params)
self.lr = lr
def step(self, *args, **kwargs):
for p in self.params:
p.data -= p.grad.data * self.lr
def zero_grad(self, *args, **kwargs):
for p in self.params:
p.grad = None
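# Quick illustrative check (added, not in the original): one SGD step on a
# dummy parameter. With loss = 2*p the gradient is 2, so with lr = 0.1 the
# parameter should move from 1.0 to 1.0 - 0.1*2 = 0.8.
_p = tensor([1.0]).requires_grad_()
(2 * _p).sum().backward()
_opt = optimizer_SGD([_p], lr=0.1)
_opt.step()
_opt.zero_grad()
_p.data  # tensor([0.8000])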
###Output
_____no_output_____
###Markdown
We use the Linear model provided by PyTorch:
###Code
model = nn.Linear(28*28, 10)
w, b = model.parameters()
w.shape, b.shape
###Output
_____no_output_____
###Markdown
Function to train an epoch:
###Code
def train_epoch(model):
for xb,yb in train_dl:
calc_grad(xb, yb, model)
optimizer.step()
optimizer.zero_grad()
###Output
_____no_output_____
###Markdown
Function of training the whole model:
###Code
def train_model(model, epochs):
for i in range(epochs):
train_epoch(model)
print(validate_epoch(model), end=' ')
###Output
_____no_output_____
###Markdown
Initialize the optimizer
###Code
optimizer = optimizer_SGD(model.parameters(), lr)
###Output
_____no_output_____
###Markdown
Train the model for 20 epochs.
###Code
train_model(model, 20)
###Output
0.9008 0.9079 0.9147 0.9145 0.9127 0.9128 0.9205 0.9203 0.9214 0.921 0.9154 0.9228 0.9219 0.9192 0.9208 0.9208 0.9125 0.9194 0.9161 0.918
###Markdown
Our simple linear model is training! Next, let's make a Neural Network Adding Non-Linearity We create a simple neural network by adding a non-linearity (ReLU) between the Dense/Linear layers
###Code
model = nn.Sequential(
nn.Linear(28*28, 60),
nn.ReLU(),
nn.Linear(60, 10)
)
optimizer = optimizer_SGD(model.parameters(), lr)
train_model(model, 20)
###Output
0.9293 0.9463 0.9513 0.9573 0.9575 0.9627 0.9631 0.9652 0.9631 0.9664 0.9666 0.9618 0.9679 0.9648 0.9697 0.9691 0.968 0.9482 0.9704 0.971
###Markdown
We got a higher accuracy than with just a linear model. Custom Learner Class Let's create a Custom Learner class like the one provided by fastai.The `Custom_Learner` brings all the following into one class:* DataLoaders* The Model* The Loss Function to use* The Optimizer* Metrics you want printed out (currently only batch accuracy works)!It then provides a fit method to train your model for the number of epochs you want. It prints out a bunch of important stuff that give you information about how your model is training.It provides two methods `plot_loss` and `plot_accuracy` to plot loss and accuracy respectively
###Code
class Custom_Learner():
# initialize the class
def __init__(self, dls=None, model=None, loss_func=None, opt=None, metrics=None):
self.train_dl, self.valid_dl = dls
self.model = model
self.loss_func = loss_func
self.opt = opt
self.metrics = metrics
# for printing purposes
self._epoch = 0.
self._tloss = 0.
self._vloss = 0.
self._met = 0.
# for plotting purposes
self._epochs = []
self._tlosses = []
self._vlosses = []
self._accuracies = []
# the fit method is used to train the model
def fit(self, epochs=None, lr=None):
self.lr = lr
self.train_model(self.model, epochs)
# train epoch
def train_epoch(self, model):
for xb, yb in self.train_dl:
predictions = model(xb)
loss = self.loss_func(predictions, yb)
self._tloss = round(loss.item(), 4)
loss.backward()
self.opt.step()
self.opt.zero_grad()
for xb, yb in self.valid_dl:
predictions = model(xb)
loss = self.loss_func(predictions, yb)
self._vloss = round(loss.item(), 4)
# validate epoch
def validate_epoch(self, model):
accs = [self.metrics(model(xb), yb) for xb, yb in self.valid_dl]
self._met = round(torch.stack(accs).mean().item(), 4)
# print output as the model is training
def show_output(self):
print(f'epoch {self._epoch}: ' +
f'train_loss: {self._tloss:.4f}, ' +
f'valid_loss: {self._vloss:.4f}, ' +
f'accuracy: {self._met:.4f}')
# training the model
def train_model(self, model, epochs):
self.opt = self.opt(self.model.parameters(), self.lr)
for i in range(epochs):
self._epoch = i
# train the model
self.train_epoch(self.model)
# validate the model and print out metrics
self.validate_epoch(self.model)
self.update_accuracies(self._met)
self.update_tlosses(self._tloss)
self.update_vlosses(self._vloss)
self.update_epochs(self._epoch+1)
self.show_output()
# update accuracies
def update_accuracies(self, accuracy):
self._accuracies.append(accuracy)
# update losses
def update_tlosses(self, loss):
self._tlosses.append(loss)
# update losses
def update_vlosses(self, loss):
self._vlosses.append(loss)
# update epochs
def update_epochs(self, epoch):
self._epochs.append(epoch)
# plot losses
def plot_loss(self):
fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(self._epochs, self._tlosses, label='Training Loss')
ax.plot(self._epochs, self._vlosses, color='Orange', label='Validation Loss')
ax.set_title('Loss in Training and Validation')
ax.set_ylabel('loss')
ax.set_xlabel('epochs')
ax.legend()
# plot accuracies
def plot_accuracy(self):
fig, ax = plt.subplots(figsize=(6, 6))
        ax.plot(self._epochs, self._accuracies, label='Validation Accuracy')
ax.set_title('Accuracy in Validation')
ax.set_ylabel('accuracy')
ax.set_xlabel('epochs')
ax.legend()
###Output
_____no_output_____
###Markdown
Initialize the learner and train a linear model for 10 epochs:
###Code
learn = Custom_Learner(dls=dls, model=nn.Linear(28*28,10), loss_func=cross_entropy_loss, opt=optimizer_SGD, metrics=batch_accuracy)
learn.fit(10, lr=1)
###Output
epoch 0: train_loss: 0.3706, valid_loss: 0.2796, accuracy: 0.9109
epoch 1: train_loss: 0.2508, valid_loss: 0.2883, accuracy: 0.9076
epoch 2: train_loss: 0.4484, valid_loss: 0.2582, accuracy: 0.9165
epoch 3: train_loss: 0.3537, valid_loss: 0.2483, accuracy: 0.9164
epoch 4: train_loss: 0.2857, valid_loss: 0.2546, accuracy: 0.9192
epoch 5: train_loss: 0.2045, valid_loss: 0.2559, accuracy: 0.9148
epoch 6: train_loss: 0.2076, valid_loss: 0.2447, accuracy: 0.9158
epoch 7: train_loss: 0.1970, valid_loss: 0.2422, accuracy: 0.9195
epoch 8: train_loss: 0.1715, valid_loss: 0.2547, accuracy: 0.9160
epoch 9: train_loss: 0.2693, valid_loss: 0.2322, accuracy: 0.9160
###Markdown
Create a Neural Network with two layers of 60 neurons and 10 neurons (for the output) respectively. Create a learner using the neural network and train for 40 epochs:
###Code
neural_net = nn.Sequential(
nn.Linear(28*28, 60),
nn.ReLU(),
nn.Linear(60, 10)
)
neural_learn = Custom_Learner(dls=dls, model=neural_net, loss_func=cross_entropy_loss, opt=optimizer_SGD, metrics=batch_accuracy)
neural_learn.fit(40, lr=0.1)
neural_learn.plot_loss()
neural_learn.plot_accuracy()
###Output
_____no_output_____
###Markdown
Using Classes provided by fastai and PyTorch Do the same with classes provided by fastai and PyTorch:
###Code
neural_net = nn.Sequential(
nn.Linear(28*28, 60),
nn.ReLU(),
nn.Linear(60, 10)
)
learn = Learner(dls=dls, model=neural_net, loss_func=cross_entropy_loss, opt_func=SGD, metrics=batch_accuracy)
learn.fit(40, 0.1)
###Output
_____no_output_____
###Markdown
It does the exact same thing as our from-scratch version. This just shows that the classes provided by frameworks are not magic! Test Set Let us work on the test set that we set aside.
###Code
len(test_dset)
testing_data = [list(t) for t in zip(*test_dset)]
x, y = testing_data[0], testing_data[1]
xtest = torch.stack(x)
ytest = torch.stack(y)
###Output
_____no_output_____
###Markdown
Get the model from our `neural_learn` Learner:
###Code
neural_learn.model
###Output
_____no_output_____
###Markdown
Create a function to run inference on the test set:
###Code
def inference(model, x, y):
acts = model(x)
predictions = torch.argmax(acts, axis=1)
correct = (predictions == y)
return (correct.float().mean()).item()
test_accuracy = inference(neural_learn.model, xtest, ytest)
print(f'Our Final Accuracy on the Test set is: {test_accuracy:.4f}')
###Output
Our Final Accuracy on the Test set is: 0.9703
|
NN/IteratingModels.ipynb | ###Markdown
Clean and engineer data
###Code
# NOTE: import block assumed for this notebook (it was not present in the original cell);
# Keras is assumed to come from TensorFlow here.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

df = pd.read_csv('Berlin.csv')
print(df.shape)
df.head()
free_rentals = list(df[df['price'] == "$0.00"].index)
df = df.drop(index=free_rentals)
print(df.shape)
df['price'] = df['price'].apply(lambda p: float(p.strip('$').replace(",",'')))
df['price'].describe()
def am_to_list(amenities):
li = amenities.split(",")
for i in range(len(li)):
li[i] = li[i].replace('"', '')
li[i] = li[i].replace("'", '')
li[i] = li[i].strip("{")
li[i] = li[i].strip("}")
return li
df['am_list'] = df['amenities'].apply(am_to_list)
df.head()
potential_features = ['neighbourhood',
'neighbourhood_cleansed', 'security_deposit',
'room_type', 'accommodates',
'bathrooms',
'bedrooms']
for feature in potential_features:
df[feature] = df['am_list'].apply(lambda li: feature in li)
df.head()
df['entire'] = df['room_type'] == 'Entire home/apt'
df['private'] = df['room_type'] == 'Private room'
df['shared'] = df['room_type'] == 'Shared room'
df['hotel'] = df['room_type'] == 'Hotel room'
cutoff = 10
top_hoods = df['neighbourhood'].value_counts(dropna=True).index[:cutoff]
for hood in top_hoods:
df[hood] = df['neighbourhood'] == hood
df.head()
features = ['bedrooms', 'bathrooms', 'neighbourhood_cleansed',
'latitude', 'longitude',
'room_type', 'cleaning_fee', 'guests_included']
features.extend(top_hoods)
dfX = df[features]
dfy = df['price']
dfX.columns
for feature in dfX.columns:
dfX[feature] = dfX[feature].fillna(value=dfX[feature].median())
dfX.isnull().sum()
dfX.head()
X = np.array(dfX)
y = np.array(dfy)
X.shape, y.shape
###Output
_____no_output_____
###Markdown
Iterating Models First model architecture
###Code
model = Sequential()
model.add(Dense(10, input_dim=X.shape[1], activation='relu'))
model.add(Dense(1))
model.compile(loss='MSE', optimizer='adam', metrics=['mean_squared_error'])
model.summary()
model.fit(X, y, epochs=100, verbose=1, validation_split=.2)
np.array([list(X[0])]), model.predict(np.array([list(X[0])]))
def check_predictions(model, y=y, count=10):
for i in range(count):
print(f'Predicted: {model.predict(np.array([list(X[i])]))}, actual: {y[i]}')
check_predictions(model)
# first fix: more epochs, small batches
model.fit(X, y, epochs=1000, batch_size=20, verbose=1, validation_split=.2)
check_predictions(model)
###Output
_____no_output_____
###Markdown
Second model architecture
###Code
model = Sequential()
model.add(Dense(5, input_dim=X.shape[1], activation='relu'))
model.add(Dense(1))
model.compile(loss='MSE', optimizer='adam', metrics=['mean_squared_error'])
model.summary()
model.fit(X, y, epochs=100, verbose=1, validation_split=.2)
check_predictions(model)
###Output
_____no_output_____
###Markdown
Third model architecture
###Code
model = Sequential()
model.add(Dense(15, input_dim=X.shape[1], activation='relu'))
model.add(Dense(7, activation='relu'))
model.add(Dense(1))
model.compile(loss='MSE', optimizer='adam', metrics=['mean_squared_error'])
model.summary()
model.fit(X, y, epochs=100, verbose=1, validation_split=.2)
check_predictions(model)
###Output
_____no_output_____
###Markdown
Rejigger data to try again
###Code
dfX.head()
df['property_type'].value_counts(dropna=False)
df['house'] = df['property_type'] == 'House'
df['apartment'] = df['property_type'] == 'Apartment'
df['condo'] = df['property_type'] == 'Condominium'
df.head()
features = ['bedrooms', 'bathrooms', 'neighbourhood_cleansed',
'latitude', 'longitude',
'room_type', 'cleaning_fee', 'guests_included']
dfX = df[features]
dfy = df['price']
for feature in dfX.columns:
dfX[feature] = dfX[feature].fillna(value=dfX[feature].median())
dfX.isnull().sum()
X = np.array(dfX)
y = np.array(dfy)
X.shape, y.shape
###Output
_____no_output_____
###Markdown
Iterating models again First model architecture redux
###Code
model = Sequential()
model.add(Dense(10, input_dim=X.shape[1], activation='relu'))
model.add(Dense(1))
model.compile(loss='MSE', optimizer='adam', metrics=['mean_squared_error'])
model.summary()
model.fit(X, y, epochs=100, verbose=1, validation_split=.2)
check_predictions(model)
###Output
_____no_output_____
###Markdown
Fourth model architecture
###Code
model = Sequential()
model.add(Dense(15, input_dim=X.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='MSE', optimizer='adam', metrics=['mean_squared_error'])
model.fit(X, y, epochs=1000, verbose=1, validation_split=.2)
check_predictions(model)
predictions = model.predict_on_batch(X)
plt.scatter(y, predictions)
dfy.describe()
df[df['price'] > 10000]
df.iloc[10906]['price']
###Output
_____no_output_____ |
17_PDEs/17_PDEs-Students.ipynb | ###Markdown
17 PDEs: Solution with Time Stepping (Students) Heat EquationThe **heat equation** can be derived from Fourier's law and energy conservation (see the [lecture notes on the heat equation (PDF)](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/17_PDEs/17_PDEs_LectureNotes_HeatEquation.pdf))$$\frac{\partial T(\mathbf{x}, t)}{\partial t} = \frac{K}{C\rho} \nabla^2 T(\mathbf{x}, t),$$ Problem: insulated metal bar (1D heat equation)A metal bar of length $L$ is insulated along its length and held at 0ºC at its ends. Initially, the whole bar is at 100ºC. Calculate $T(x, t)$ for $t>0$. Analytic solutionSolve by separation of variables and power series: The general solution that obeys the boundary conditions $T(0, t) = T(L, t) = 0$ is$$T(x, t) = \sum_{n=1}^{+\infty} A_n \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right), \quad k_n = \frac{n\pi}{L}$$ The specific solution that satisfies $T(x, 0) = T_0 = 100^\circ\text{C}$ leads to $A_n = 4 T_0/n\pi$ for $n$ odd:$$T(x, t) = \sum_{n=1,3,5,\dots}^{+\infty} \frac{4 T_0}{n \pi} \sin(k_n x)\, \exp\left(-\frac{k_n^2 K t}{C\rho}\right)$$
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
def T_bar(x, t, T0, L, K=237, C=900, rho=2700, nmax=1000):
T = np.zeros_like(x)
eta = K / (C*rho)
for n in range(1, nmax, 2):
kn = n*np.pi/L
T += 4*T0/(np.pi * n) * np.sin(kn*x) * np.exp(-kn*kn * eta * t)
return T
T0 = 100.
L = 1.0
X = np.linspace(0, L, 100)
for t in np.linspace(0, 3000, 50):
plt.plot(X, T_bar(X, t, T0, L))
plt.xlabel(r"$x$ (m)")
plt.ylabel(r"$T$ ($^\circ$C)");
###Output
_____no_output_____
###Markdown
Numerical solution: Leap frogDiscretize (finite difference):For the time domain we only have the initial values so we use a simple forward difference for the time derivative:$$\frac{\partial T(x,t)}{\partial t} \approx \frac{T(x, t+\Delta t) - T(x, t)}{\Delta t}$$ For the spatial derivative we have initially all values so we can use the more accurate central difference approximation:$$\frac{\partial^2 T(x, t)}{\partial x^2} \approx \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}$$ Thus, the heat equation can be written as the finite difference equation$$\frac{T(x, t+\Delta t) - T(x, t)}{\Delta t} = \frac{K}{C\rho} \frac{T(x+\Delta x, t) + T(x-\Delta x, t) - 2 T(x, t)}{\Delta x^2}$$ which can be reordered so that the RHS contains only known terms and the LHS future terms. Index $i$ is the spatial index, and $j$ the time index: $x = x_0 + i \Delta x$, $t = t_0 + j \Delta t$.$$T_{i, j+1} = (1 - 2\eta) T_{i,j} + \eta(T_{i+1,j} + T_{i-1, j}), \quad \eta := \frac{K \Delta t}{C \rho \Delta x^2}$$Thus we can step forward in time ("leap frog"), using only known values. Activity: Solve the 1D heat equation numerically for an iron bar* $K = 237$ W/mK* $C = 900$ J/K* $\rho = 2700$ kg/m3* $L = 1$ m* $T_0 = 373$ K and $T_b = 273$ K* $T(x, 0) = T_0$ and $T(0, t) = T(L, t) = T_b$Implement the Leapfrog time-stepping algorithm and visualize the results.
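Below is a minimal sketch of what one time step of the update could look like, assuming the variable names used in the skeleton cell that follows (`T`, `T_new`, `eta`, `Kappa`, `CHeat`, `rho`, `Dx`, `Dt`, `Tb`); it is meant as orientation only, not as the official solution:

```python
# sketch only: one explicit ("leap frog") step of the discretized heat equation
eta = Kappa * Dt / (CHeat * rho * Dx**2)    # dimensionless step parameter

# update all interior points at once: T[2:] plays the role of T_{i+1}, T[:-2] of T_{i-1}
T_new[1:-1] = (1 - 2*eta) * T[1:-1] + eta * (T[2:] + T[:-2])
T_new[0] = T_new[-1] = Tb                   # re-impose the boundary conditions
T[:] = T_new                                # advance to the next time level
```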
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
L_rod = 1. # m
t_max = 3000. # s
Dx = 0.02 # m
Dt = 2 # s
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237 # W/(m K)
CHeat = 900 # J/(kg K) -- specific heat capacity
rho = 2700 # kg/m^3
T0 = 373 # K
Tb = 273 # K
raise NotImplementedError
# eta =
step = 20 # plot solution every n steps
print("Nx = {0}, Nt = {1}".format(Nx, Nt))
print("eta = {0}".format(eta))
T = np.zeros(Nx)
T_new = np.zeros_like(T)
T_plot = np.zeros((Nt//step + 1, Nx))
raise NotImplementedError
# initial conditions
# ...
# boundary conditions
# ...
t_index = 0
T_plot[t_index, :] = T
for jt in range(1, Nt):
raise NotImplementedError
if jt % step == 0 or jt == Nt-1:
t_index += 1
# save the new solution for later plotting
# T_plot[t_index, :] =
print("Iteration {0:5d}".format(jt), end="\r")
else:
print("Completed {0:5d} iterations: t={1} s".format(jt, jt*Dt))
###Output
_____no_output_____
###Markdown
VisualizationVisualize (you can use the code as is). Note how we are making the plot use proper units by multiplying with `Dt * step` and `Dx`.
###Code
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Stability of the solution Empirical investigation of the stability Investigate the solution for different values of `Dt` and `Dx`. Can you discern patterns for stable/unstable solutions? Report `Dt`, `Dx`, and `eta`* for 3 stable solutions * for 3 unstable solutions Wrap your heat diffusion solver in a function so that it becomes easier to run (a small helper for checking `eta` is sketched first):
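A quick way to compare runs is to compute `eta` directly for candidate (`Dx`, `Dt`) pairs; for this explicit scheme, solutions are generally expected to stay stable only when `eta <= 0.5`. A small sketch, using the material constants from the cells above:

```python
# sketch: eta for a few (Dx, Dt) choices, material constants as in the cells above
Kappa, CHeat, rho = 237, 900, 2700

def eta_of(Dx, Dt):
    return Kappa * Dt / (CHeat * rho * Dx**2)

for Dx, Dt in [(0.02, 2), (0.02, 5), (0.01, 2), (0.05, 2)]:
    print(f"Dx={Dx} m, Dt={Dt} s -> eta={eta_of(Dx, Dt):.3f}")
```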
###Code
def calculate_T(L_rod=1, t_max=3000, Dx=0.02, Dt=2, T0=373, Tb=273,
step=20):
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237 # W/(m K)
    CHeat = 900 # J/(kg K) -- specific heat capacity
rho = 2700 # kg/m^3
raise NotImplementedError
return T_plot
def plot_T(T_plot, Dx, Dt, step):
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
return ax
T_plot = calculate_T(Dx=0.02, Dt=2, step=20)
plot_T(T_plot, 0.02, 2, 20)
###Output
_____no_output_____ |
CX-Bot-Translate_Sheets.ipynb | ###Markdown
Copyright 2021 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Dialogflow CX Bot Language Translation (Sheets Notebook)Contains functions that are re-used in the Main Notebook1. Functions for Reading and Writing to Sheets2. Functions for Formatting Sheets3. Setup/Initialize a blank Google Sheets[Public Doc Link: Python Client for Google Sheets](https://developers.google.com/sheets/api/quickstart/python) Sheets This Notebook Env:
###Code
# !python3 -V
# !python3 -m pip list | wc -l
# !python3 -m pip list | grep google
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
###Output
_____no_output_____
###Markdown
Enums
###Code
from enum import Enum
###########################
class SheetsName(Enum):
CX_Lang_REF = 1
Training_Phrases = 2
Parameters = 3
Entities = 4
Flows = 5
Pages = 6
Route_Groups = 7
###########################
class SheetsContent(Enum):
Header = 1
Sub_Header = 2
CX_Element = 3
Def_Lang = 4
Content = 5
###############################
class CX_Types(Enum):
transition_routes = 1
event_handlers = 2
transition_route_groups = 3
entry_fulfillment = 4
parameter = 5
###Output
_____no_output_____
###Markdown
Functions
###Code
### According to https://developers.google.com/sheets/api/reference/limits
### Sheets API requests per user per project is 60 requests per minute (or 1 request per second)
from timeit import default_timer as timer
import time
_last_gsheets_request_time = timer()
### spacing out API calls to at most once per second since last invocation
def delay_next_gsheets_request():
global _last_gsheets_request_time
gap = timer() - _last_gsheets_request_time
if __DEBUG: print(f'delay_next_gsheets_request(): time gap between last API call is {gap}s')
if (gap < 1):
sleep_time = round(1 - gap, 3)
if __INFO: print(f'\tdelay_next_gsheets_request(): Sleeping {sleep_time}s as duration from last API call is {gap}s')
time.sleep(sleep_time)
_last_gsheets_request_time = timer()
###Output
_____no_output_____
###Markdown
Functions: Sheets - Read & Write
###Code
############################
def get_sheets_credentials():
global GSheets_Creds
if GSheets_Creds is None:
raise Exception(f'Sheets: get_sheets_credentials(): Sheets Credentials is None, please run the Main Notebook')
else:
return GSheets_Creds
########################################
def batch_update_to_sheets(update_json):
creds = get_sheets_credentials()
service = build('sheets', 'v4', credentials=creds)
#pprint(update_json)
request = service.spreadsheets().batchUpdate(spreadsheetId=Google_Sheets_ID,body=update_json)
delay_next_gsheets_request()
response = request.execute()
#######################################
def read_sheet(sheet_range):
creds = get_sheets_credentials()
service = build('sheets', 'v4', credentials=creds)
# Call the Sheets API
sheet = service.spreadsheets()
delay_next_gsheets_request()
result = sheet.values().get(spreadsheetId=Google_Sheets_ID,range=sheet_range).execute()
values = result.get('values', [])
if not values:
raise Exception(f'read_sheet({sheet_range}): No data found.')
else:
df = pd.DataFrame(values)
df.columns = df.iloc[0]
df.drop(df.index[0], inplace=True)
return df
#############################
def clear_sheet(sheet_range):
creds = get_sheets_credentials()
service = build('sheets', 'v4', credentials=creds)
# Call the Sheets API
sheet = service.spreadsheets()
delay_next_gsheets_request()
result = sheet.values().clear(spreadsheetId=Google_Sheets_ID,range=sheet_range).execute()
##########################################################
def write_to_sheet(sheet_range, values, value_input_option, mode):
creds = get_sheets_credentials()
service = build('sheets', 'v4', credentials=creds)
sheet = service.spreadsheets()
delay_next_gsheets_request()
if mode == 'update':
result = sheet.values().update(spreadsheetId=Google_Sheets_ID,
range=sheet_range,
valueInputOption=value_input_option,
body={"values":values}).execute()
elif mode == 'append':
result = sheet.values().append(spreadsheetId=Google_Sheets_ID,
range=sheet_range,
valueInputOption=value_input_option,
insertDataOption="OVERWRITE",
body={"values":values}).execute()
else:
raise Exception(f'write_to_sheet() mode is set to {mode} when only "update" or "append" are accepted.')
#####################
def colnum_string(n):
string = ""
while n > 0:
n, remainder = divmod(n - 1, 26)
string = chr(65 + remainder) + string
return string
#####################################################
def write_result(sheet_name, column_index, row, values):
sheet_range = f"{sheet_name}!{colnum_string(column_index)}{row}"
if(__DEBUG):
print(f'write_result(): sheet_range:{sheet_range}')
write_to_sheet(sheet_range,values,'RAW','update')
###Output
_____no_output_____
###Markdown
Functions: Formatting to Sheets
###Code
########################
def get_sheets_titles():
creds = get_sheets_credentials()
service = build('sheets', 'v4', credentials=creds)
# The ranges to retrieve from the spreadsheet.
ranges = []
# True if grid data should be returned.
# This parameter is ignored if a field mask was set in the request.
include_grid_data = False
request = service.spreadsheets().get(spreadsheetId=Google_Sheets_ID,
ranges=ranges, includeGridData=include_grid_data)
delay_next_gsheets_request()
response = request.execute()
#pprint(response)
sheets = response['sheets']
#print(sheets)
sheets_titles = []
for sheet in sheets:
sheets_titles.append(sheet['properties']['title'])
return sheets_titles
##################################
def add_sheets_json(sheet_titles):
sheets_json = []
index = 0
for sheet_title in sheet_titles:
sheets_json.append({'addSheet': {'properties': {'sheetId': sheet_title.value, 'title': sheet_title.name, 'index':sheet_title.value - 1}}})
index += 1
return sheets_json
############################
def set_borders_json(delta):
borders_json = []
border = { 'style': 'SOLID',
'width': 1,
'color': {
'red': 0,
'blue': 0,
'green': 0
}
}
start_row = 0
start_col = 0
end_row = 0
end_col = 0
for sheet in delta:
if sheet == SheetsName.CX_Lang_REF or sheet == SheetsName.Parameters:
end_row = 1
else:
end_row = 3
border_json = {'updateBorders': {
'range': {
'sheetId': sheet.value,
'startRowIndex': start_row,
'endRowIndex': end_row,
'startColumnIndex': start_col
#'endColumnIndex': end_col
},
'top': {}, 'bottom': {},
'left': {}, 'right': {},
'innerHorizontal': {}, 'innerVertical': {}
}
}
border_json['updateBorders']['top'] = border
border_json['updateBorders']['bottom'] = border
border_json['updateBorders']['left'] = border
border_json['updateBorders']['right'] = border
border_json['updateBorders']['innerHorizontal'] = border
border_json['updateBorders']['innerVertical'] = border
borders_json.append(border_json)
return borders_json
#################################
def set_column_width_json(delta):
dimensions_json = []
for sheet in delta:
dimension_json = {'updateDimensionProperties': {
'properties': {'pixelSize': 150},
'fields': '*',
#'range': {'sheetId': sheet.value, 'dimension': 'COLUMNS', 'startIndex': 0, 'endIndex': end_col }
'range': {'sheetId': sheet.value, 'dimension': 'COLUMNS', 'startIndex': 0 }
}
}
dimensions_json.append(dimension_json)
return dimensions_json
##############################################
def cell_formatter_json(sheet, sheet_content):
formatting = []
range_json = {'sheetId': sheet.value,'startRowIndex': 0,'endRowIndex': {},'startColumnIndex': 0,'endColumnIndex': {} }
cell_json = {'userEnteredFormat': {'backgroundColor':{'red':1,'green':1,'blue':1},
'horizontalAlignment':'LEFT','verticalAlignment':'TOP','wrapStrategy':'WRAP',
'textFormat':{'fontFamily':''}
}
}
bg_red = 224/255
bg_green = 248/255
bg_blue = 250/255
if sheet == SheetsName.CX_Lang_REF:
del range_json['endRowIndex']
del range_json['endColumnIndex']
cell_json['userEnteredFormat']['wrapStrategy'] = 'OVERFLOW_CELL'
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
elif sheet == SheetsName.Training_Phrases:
if sheet_content == SheetsContent.Header:
range_json['endRowIndex'] = 1
range_json['endColumnIndex'] = 2
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Sub_Header:
range_json['startRowIndex'] = 1
range_json['endRowIndex'] = 3
range_json['startColumnIndex'] = 0
range_json['endColumnIndex'] = 2
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['textFormat']['italic'] = True
cell_json['userEnteredFormat']['horizontalAlignment'] = 'RIGHT'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.CX_Element:
range_json['startRowIndex'] = 3
del range_json['endRowIndex']
range_json['endColumnIndex'] = 2
cell_json['userEnteredFormat']['wrapStrategy'] = 'OVERFLOW_CELL'
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Def_Lang:
del range_json['endRowIndex']
range_json['startColumnIndex'] = 2
range_json['endColumnIndex'] = 3
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Content:
del range_json['endRowIndex']
range_json['startColumnIndex'] = 3
del range_json['endColumnIndex']
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
elif sheet == SheetsName.Parameters:
if sheet_content == SheetsContent.Header:
range_json['endRowIndex'] = 1
range_json['endColumnIndex'] = 6
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.CX_Element:
range_json['startRowIndex'] = 1
del range_json['endRowIndex']
range_json['endColumnIndex'] = 6
cell_json['userEnteredFormat']['wrapStrategy'] = 'OVERFLOW_CELL'
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Content:
del range_json['endRowIndex']
range_json['startColumnIndex'] = 7
del range_json['endColumnIndex']
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
elif sheet == SheetsName.Pages:
if sheet_content == SheetsContent.Header:
range_json['endRowIndex'] = 1
range_json['endColumnIndex'] = 6
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Sub_Header:
range_json['startRowIndex'] = 1
range_json['endRowIndex'] = 3
range_json['startColumnIndex'] = 0
range_json['endColumnIndex'] = 6
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['textFormat']['italic'] = True
cell_json['userEnteredFormat']['horizontalAlignment'] = 'RIGHT'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.CX_Element:
range_json['startRowIndex'] = 3
del range_json['endRowIndex']
range_json['endColumnIndex'] = 6
cell_json['userEnteredFormat']['wrapStrategy'] = 'OVERFLOW_CELL'
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Def_Lang:
del range_json['endRowIndex']
range_json['startColumnIndex'] = 6
range_json['endColumnIndex'] = 7
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Content:
del range_json['endRowIndex']
range_json['startColumnIndex'] = 7
del range_json['endColumnIndex']
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
else: #Entities, Flows & Route_Groups - same
if sheet_content == SheetsContent.Header:
range_json['endRowIndex'] = 1
range_json['endColumnIndex'] = 4
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Sub_Header:
range_json['startRowIndex'] = 1
range_json['endRowIndex'] = 3
range_json['startColumnIndex'] = 0
range_json['endColumnIndex'] = 4
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['textFormat']['italic'] = True
cell_json['userEnteredFormat']['horizontalAlignment'] = 'RIGHT'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.CX_Element:
range_json['startRowIndex'] = 3
del range_json['endRowIndex']
range_json['endColumnIndex'] = 4
cell_json['userEnteredFormat']['wrapStrategy'] = 'OVERFLOW_CELL'
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Roboto Mono'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Def_Lang:
del range_json['endRowIndex']
range_json['startColumnIndex'] = 4
range_json['endColumnIndex'] = 5
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
cell_json['userEnteredFormat']['backgroundColor']['red'] = bg_red
cell_json['userEnteredFormat']['backgroundColor']['green'] = bg_green
cell_json['userEnteredFormat']['backgroundColor']['blue'] = bg_blue
elif sheet_content == SheetsContent.Content:
del range_json['endRowIndex']
range_json['startColumnIndex'] = 5
del range_json['endColumnIndex']
cell_json['userEnteredFormat']['textFormat']['fontFamily'] = 'Google Sans'
formatting.append(range_json)
formatting.append(cell_json)
return formatting
####################################
def set_cell_formatting_json(delta):
# Roboto Mono, Google Sans
formats_json = []
repeatCell_json = {'repeatCell': {'range': {},'cell': {},'fields': '*'}}
for sheet in delta:
if sheet == SheetsName.CX_Lang_REF:
json = copy.deepcopy(repeatCell_json)
formatting = cell_formatter_json(sheet, SheetsContent.Header)
json['repeatCell']['range'] = formatting[0]
json['repeatCell']['cell'] = formatting[1]
formats_json.append(json)
else: #Every other Sheets - same
for content_type in SheetsContent:
json = copy.deepcopy(repeatCell_json)
formatting = cell_formatter_json(sheet, content_type)
json['repeatCell']['range'] = formatting[0]
json['repeatCell']['cell'] = formatting[1]
formats_json.append(json)
return formats_json
################################
def set_view_freeze_json(delta):
properties_jsons = []
row_count = 3
col_count = 0
for sheet in delta:
if sheet == SheetsName.CX_Lang_REF:
row_count = 1
elif sheet == SheetsName.Training_Phrases:
row_count = 3
col_count = 3
elif sheet == SheetsName.Parameters:
row_count = 1
col_count = 0
elif sheet == SheetsName.Pages:
row_count = 3
col_count = 7
else:
row_count = 3
col_count = 5
json = {'updateSheetProperties':
{'properties':
{'sheetId':sheet.value,'title': sheet.name, 'index': sheet.value - 1,
'gridProperties':
{'rowCount':1000, 'columnCount': 26,
'frozenRowCount': row_count,'frozenColumnCount': col_count }
},
'fields':'*'}
}
properties_jsons.append(json)
return properties_jsons
#############################
def add_sheet_headers(delta):
for d in delta:
#print(d.name)
if d == SheetsName.CX_Lang_REF:
write_to_sheet(SheetsName.CX_Lang_REF.name+'!A1',
[['=importhtml("https://cloud.google.com/dialogflow/cx/docs/reference/language","table",1)']],
'USER_ENTERED', 'update' )
elif d == SheetsName.Training_Phrases:
write_to_sheet(SheetsName.Training_Phrases.name+'!A1',
[['Intent Name','Intent Display Name'],[None,'CX:>'],[None,'Translate:>']], 'RAW', 'update')
elif d == SheetsName.Parameters:
write_to_sheet(SheetsName.Parameters.name+'!A1',
[['Intent Name','Intent Display Name',
'Parameter ID','Parameter Entity Type',
'Boolean:Is_List','Boolean:Redact'],
[None, None, None, None, None, None],[None, None, None, None, None, None]], 'RAW', 'update')
elif d == SheetsName.Entities:
write_to_sheet(SheetsName.Entities.name+'!A1',
[['Entity Type Name','Entity Type Display Name','Entity Type Kind','Entities Value'],
[None, None, None, 'CX:>'],
[None, None, None, 'Translate:>']
], 'RAW', 'update')
elif d == SheetsName.Flows:
write_to_sheet(SheetsName.Flows.name+'!A1',
[['Flow Name','Flow Display Name','Flow Components','Flow Component ID'],
[None, None, None, 'CX:>'],
[None, None, None, 'Translate:>']
], 'RAW', 'update')
elif d == SheetsName.Pages:
write_to_sheet(SheetsName.Pages.name+'!A1',
[['Flow Name','Page Name','Page Display Name','Page Components','Page Component ID','Page Component ID-2'],
[None, None, None, None, None, 'CX:>'],
[None, None, None, None, None, 'Translate:>']
], 'RAW', 'update')
elif d == SheetsName.Route_Groups:
write_to_sheet(SheetsName.Route_Groups.name+'!A1',
[['Flow Name','Route Group Name','Route Group Display Name','Route Group Component ID'],
[None, None, None, 'CX:>'],
[None, None, None, 'Translate:>']
], 'RAW', 'update')
###Output
_____no_output_____
###Markdown
Functions: CX Config to Sheets
###Code
################################################################
def get_messages(fulfillment):
messages_list = []
messages = fulfillment.messages
if messages != []:
for msg in messages:
if 'output_audio_text' in msg:
messages_list.append(msg.output_audio_text.ssml)
elif 'text' in msg:
for txt in msg.text.text:
messages_list.append(txt)
return messages_list
################################################################
################################################################
def generate_sheets_values(obj):
if __DEBUG: print(f'Type: {type(obj)}')
sheet_values = []
### FLOW
if type(obj) == cx_types.Flow:
# transition_routes
cx_component = CX_Types.transition_routes
for tr in obj.transition_routes:
if tr.intent != '':
component_id = tr.intent
elif tr.condition != '':
component_id = tr.condition
fulfillment = tr.trigger_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([obj.name, obj.display_name, cx_component.name,
component_id, message])
# event_handlers
cx_component = CX_Types.event_handlers
for eh in obj.event_handlers:
component_id = eh.event
fulfillment = eh.trigger_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([obj.name, obj.display_name, cx_component.name,
component_id, message])
### PAGE
elif type(obj) == cx_types.Page:
flow_name = obj.name.split('/pages')[0]
#entry_fulfillment
cx_component = CX_Types.entry_fulfillment
fulfillment = obj.entry_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([flow_name, obj.name, obj.display_name, cx_component.name,
'', '', message])
#form & parameters
cx_component = CX_Types.parameter
parameters = obj.form.parameters
if parameters != []:
for param in parameters:
# initial_prompt_fulfillment
fulfillment = param.fill_behavior.initial_prompt_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([flow_name, obj.name, obj.display_name, cx_component.name,
param.display_name, 'initial_prompt_fulfillment', message])
# reprompt_event_handlers
reprompts = param.fill_behavior.reprompt_event_handlers
for reprompt in reprompts:
fulfillment = reprompt.trigger_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([flow_name, obj.name, obj.display_name, cx_component.name,
param.display_name, reprompt.event, message])
#transition_routes
cx_component = CX_Types.transition_routes
for tr in obj.transition_routes:
if tr.intent != '':
tr_comp_id = tr.intent
elif tr.condition != '':
tr_comp_id = tr.condition
fulfillment = tr.trigger_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([flow_name, obj.name, obj.display_name, cx_component.name,
tr_comp_id, '', message])
#event_handlers
cx_component = CX_Types.event_handlers
for eh in obj.event_handlers:
fulfillment = eh.trigger_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([flow_name, obj.name, obj.display_name, cx_component.name,
eh.event, '', message])
### TRANSITION ROUTE GROUP
elif type(obj) == cx_types.TransitionRouteGroup:
flow_name = obj.name.split('/transitionRouteGroups')[0]
for tr in obj.transition_routes:
if tr.intent != '':
tr_comp_id = tr.intent
elif tr.condition != '':
tr_comp_id = tr.condition
fulfillment = tr.trigger_fulfillment
messages_list = get_messages(fulfillment)
for message in messages_list:
sheet_values.append([flow_name, obj.name, obj.display_name,
tr_comp_id, message])
else:
print('generate_sheets_values function did not match Obj to either cx_types.Flow, cx_types.Page or cx_types.TransitionRouteGroup')
return sheet_values
###Output
_____no_output_____
###Markdown
Initialize & Format Google Sheets
###Code
#############################################
def add_all_languages_to_sheets(lang_values):
for sheet in SheetsName:
if sheet == SheetsName.CX_Lang_REF:
continue
elif sheet == SheetsName.Training_Phrases:
write_to_sheet(sheet.name+'!C1', lang_values, 'RAW', 'update')
elif sheet == SheetsName.Parameters:
continue
elif sheet == SheetsName.Pages:
write_to_sheet(sheet.name+'!G1', lang_values, 'RAW', 'update')
else: #Entities, Flows and Route_Groups
write_to_sheet(sheet.name+'!E1', lang_values, 'RAW', 'update')
##########################
def add_langs_to_sheets():
agent = get_agent()
print(f'agent.default_language_code:{agent.default_language_code}')
print(f'agent.supported_language_codes:{sorted(agent.supported_language_codes)}')
lang_to_sheets = []
lang_cx = []
lang_cx.append(agent.default_language_code)
sorted_supported_language_codes = sorted(agent.supported_language_codes)
#print(sorted_supported_language_codes)
for l in sorted_supported_language_codes:
lang_cx.append(l)
df = read_sheet(SheetsName.CX_Lang_REF.name)
df = df.iloc[:,[0,1]]
#print(df.columns)
lang_full = []
lang_translate = []
for l in lang_cx:
#lang_df = df[df['Tag *'].str.lower()==l]
lang_full.append(df[df['Tag *'].str.lower()==l].iloc[0,0])
index = l.find('-')
if index == -1:
lang_translate.append(l)
elif l == 'zh-cn':
lang_translate.append('zh-CN')
        elif l.startswith('zh') and (l.endswith('tw') or l.endswith('hk')):
lang_translate.append('zh-TW')
else:
lang_translate.append(l[:index])
lang_full[0] = f'Default:[{lang_full[0]}]'
lang_to_sheets.append(lang_full)
lang_to_sheets.append(lang_cx)
lang_to_sheets.append(lang_translate)
#print(lang_to_sheets)
add_all_languages_to_sheets(lang_to_sheets)
#########################
def init_format_sheets():
start_time = timer()
print("START: init_format_sheets()")
# Add Sheets to Sheet
# and other formatting via a single BatchUpdate
# Enum class SheetsName = ['CX_Lang_REF', 'Training_Phrases', 'Parameters', 'Entities', 'Flows', 'Pages', 'Route_Groups']
batch_update_req = {'requests':[]}
sheets_titles = get_sheets_titles()
delta = [item for item in SheetsName if item.name not in sheets_titles]
if len(delta) > 0:
print(f"Adding Sheets:{delta}")
batch_update_req['requests'].append(add_sheets_json(delta))
batch_update_req['requests'].append(set_column_width_json(delta))
batch_update_req['requests'].append(set_cell_formatting_json(delta))
batch_update_req['requests'].append(set_borders_json(delta))
batch_update_req['requests'].append(set_view_freeze_json(delta))
#pprint(batch_update_req)
batch_update_to_sheets(batch_update_req)
add_sheet_headers(delta)
## add_langs_to_sheets() - now folded into init_format_sheets function due to conundrum below if it is run separately
## Queries Agent to get default and supported languages
## Adds to the Sheets columns for languages
## If Sheet is already initialized, it still adds the languages
## Could be a problem if the Agent supported languages changed and the columns gets re-written and the translation (if done previously) is out of sync with column
## also figures out the lang tag for Translate (might be slightly different from CX language tag)
add_langs_to_sheets()
print()
else:
sheets_names = []
for s in SheetsName:
sheets_names.append(s.name)
print(f'init_format_sheets(): NO actions taken as ALL required Sheets were found:\n{sheets_names}')
print(f"COMPLETED: init_format_sheets() in {timer() - start_time}s")
###Output
_____no_output_____
###Markdown
END
###Code
print('Sheets Notebook: RAN successfully to desired point')
###Output
_____no_output_____ |
queue_stack.ipynb | ###Markdown
Queue & Stack What is a queue: a first-in, first-out (FIFO) collection; elements are removed in the same order they were added. What is a stack: a last-in, first-out (LIFO) collection; the most recently added element is removed first. Python deque ```from collections import deque```
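As a minimal illustration of the two access disciplines with `collections.deque` (the cell below then explores the API in more detail):

```python
from collections import deque

q = deque()              # queue: first in, first out (FIFO)
q.append(1); q.append(2)
print(q.popleft())       # -> 1, the oldest element leaves first

s = deque()              # stack: last in, first out (LIFO)
s.append(1); s.append(2)
print(s.pop())           # -> 2, the newest element leaves first
```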
###Code
from collections import deque
customers = deque()
customers.append("Jane")
customers.append("Jack")
print(customers)
head = customers.popleft()
print(head)
queue = deque(['name','age','DOB'])
print(queue)
queue.appendleft('gender')
queue.appendleft('school')
queue.appendleft('school')
cnt_school = queue.count('school')
print('cnt_school:', cnt_school)
print(queue)
lefthead = queue.popleft()
print('lefthead:', lefthead)
print('length of queue:', len(queue))
for i in range(4):
queue.popleft()
print('length of queue:', len(queue))
print('%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%')
queue = deque([1, 1, 2, 3, 4, 5, 5])
print(queue)
print('count of 5:', queue.count(5))
queue.pop()
print('count of 5 after pop():', queue.count(5))
queue.remove(2) # remove the first occurrence of the element
print('queue after removing 2:', queue)
print('peek head:', queue[0])
print('peek tail:', queue[-1])
queue.rotate(2) # right rotation by 2
print('queue after rotating 2:', queue)
queue.rotate(-2) # left rotation by 2
print('queue after rotating -2:', queue)
queue.reverse()
print('queue after reverse:', queue)
# shallow copy
q_copy = queue.copy()
print('copy of queue:', q_copy)
q_copy.appendleft(33)
print('copy of queue push left 33:', q_copy)
print('original queue:', queue) # does not change the original one
list_queue = list(queue)
print('list queue:', list_queue)
queue.clear()
print('queue after clear()', queue, 'length of queue:', len(queue))
# for stack: append(), pop()
# for queue, append(), popleft()
###Output
deque(['Jane', 'Jack'])
Jane
deque(['name', 'age', 'DOB'])
cnt_school: 2
deque(['school', 'school', 'gender', 'name', 'age', 'DOB'])
lefthead: school
length of queue: 5
length of queue: 1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
deque([1, 1, 2, 3, 4, 5, 5])
count of 5: 2
count of 5 after pop(): 1
queue after removing 2: deque([1, 1, 3, 4, 5])
peek head: 1
peek tail: 5
queue after rotating 2: deque([4, 5, 1, 1, 3])
queue after rotating -2: deque([1, 1, 3, 4, 5])
queue after reverse: deque([5, 4, 3, 1, 1])
copy of queue: deque([5, 4, 3, 1, 1])
copy of queue push left 33: deque([33, 5, 4, 3, 1, 1])
original queue: deque([5, 4, 3, 1, 1])
list queue: [5, 4, 3, 1, 1]
queue after clear() deque([]) length of queue: 0
###Markdown
232. Implement Queue using Stacks [Leetcode: 232. Implement Queue using Stacks](https://leetcode.com/problems/implement-queue-using-stacks/) Implement a first in first out (FIFO) queue using only two stacks. The implemented queue should support all the functions of a normal queue (push, peek, pop, and empty).Implement the MyQueue class:void push(int x) Pushes element x to the back of the queue.int pop() Removes the element from the front of the queue and returns it.int peek() Returns the element at the front of the queue.boolean empty() Returns true if the queue is empty, false otherwise.```MyQueue myQueue = new MyQueue();myQueue.push(1); // queue is: [1]myQueue.push(2); // queue is: [1, 2] (leftmost is front of the queue)myQueue.peek(); // return 1myQueue.pop(); // return 1, queue is [2]myQueue.empty(); // return false```
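The idea behind the two-stack solution below: push onto an input stack, and only when a pop/peek finds the output stack empty, move everything across once (which reverses the order). Each element is moved at most once, so the operations are amortized O(1). A rough standalone sketch with plain lists (hypothetical helper names, not part of the solution below):

```python
# sketch: queue built from two list-based stacks
in_stack, out_stack = [], []

def push(x):
    in_stack.append(x)

def pop():
    if not out_stack:                          # refill only when the output side is empty
        while in_stack:
            out_stack.append(in_stack.pop())   # reverses the order exactly once
    return out_stack.pop()

push(1); push(2)
print(pop())   # -> 1 (front of the queue)
push(3)
print(pop())   # -> 2
```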
###Code
from collections import deque
class MyQueue:
def __init__(self):
self.main_stack = deque()
self.aux_stack = deque()
def push(self, x: int) -> None:
self.main_stack.appendleft(x)
def pop(self) -> int:
if len(self.aux_stack) == 0:
while len(self.main_stack) > 0:
self.aux_stack.appendleft(self.main_stack.popleft())
return self.aux_stack.popleft()
def peek(self) -> int:
if len(self.aux_stack) > 0:
return self.aux_stack[0]
return self.main_stack[-1]
def empty(self) -> bool:
return len(self.aux_stack) == 0 and len(self.main_stack) == 0
# Your MyQueue object will be instantiated and called as such:
obj = MyQueue()
x = 9
obj.push(x)
param_2 = obj.pop()
print(param_2)
# param_3 = obj.peek()
# param_4 = obj.empty()
###Output
9
###Markdown
225. Implement Stack using Queues [Leetcode: 225. Implement Stack using Queues](https://leetcode.com/problems/implement-stack-using-queues/) ```Implement the MyStack class:void push(int x) Pushes element x to the top of the stack.int pop() Removes the element on the top of the stack and returns it.int top() Returns the element on the top of the stack.boolean empty() Returns true if the stack is empty, false otherwise.Input["MyStack", "push", "push", "top", "pop", "empty"][[], [1], [2], [], [], []]Output[null, null, null, 2, 2, false]ExplanationMyStack myStack = new MyStack();myStack.push(1);myStack.push(2);myStack.top(); // return 2myStack.pop(); // return 2myStack.empty(); // return False```
###Code
from collections import deque
class MyStack:
def __init__(self):
self.q = deque()
self.temp = deque()
def push(self, x: int) -> None:
self.q.append(x)
def pop(self) -> int:
len_q = len(self.q)
for i in range(len_q - 1):
self.temp.append(self.q.popleft())
val = self.q.pop()
self.q, self.temp = self.temp, self.q
return val
def top(self) -> int:
return self.q[-1]
def empty(self) -> bool:
return len(self.q) == 0
# one queue solution: push: O(n), others: O(1)
from collections import deque
class MyStack:
def __init__(self):
self.q = deque()
self.size = len(self.q)
def push(self, x: int) -> None:
self.q.append(x)
self.size += 1
# reverse the order
# 3 -- 2 -- 1 We want: 4 -- 3 -- 2 -- 1
# step 1: 3 -- 2 -- 1 -- 4
# step 2: 2 -- 1 -- 4 --- 3
# step 3: 1 -- 4 --- 3 -- 2
# step 4: 4 --- 3 -- 2 -- 1
for _ in range(self.size - 1):
self.q.append(self.q.popleft())
def pop(self) -> int:
res = self.q.popleft()
self.size -= 1
return res
def top(self) -> int:
return self.q[0]
def empty(self) -> bool:
return self.size == 0
###Output
_____no_output_____
###Markdown
155. Min Stack[leetcode: 155. Min Stack](https://leetcode.com/problems/min-stack/)
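The solution below keeps a second stack that mirrors the main stack, with its top always holding the minimum of everything currently pushed, so `getMin` is O(1). A rough standalone sketch of the same idea (hypothetical helper names):

```python
# sketch: main stack plus a parallel "running minimum" stack
stack, mins = [], []

def push(v):
    stack.append(v)
    mins.append(v if not mins else min(v, mins[-1]))   # top of mins == current minimum

def pop():
    mins.pop()
    return stack.pop()

def get_min():
    return mins[-1]

push(5); push(2); push(7)
print(get_min())   # -> 2
pop()              # removes 7 (and its mirrored minimum entry)
print(get_min())   # -> 2, minimum unchanged
```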
###Code
from collections import deque
class MinStack:
def __init__(self):
self.stack = deque()
self.min = deque()
def push(self, val: int) -> None:
self.stack.append(val)
if len(self.min) == 0:
self.min.append(val)
else:
if self.min[-1] <= self.stack[-1]:
# append current min
self.min.append(self.min[-1])
else:
self.min.append(self.stack[-1])
def pop(self) -> None:
val = self.stack.pop()
self.min.pop()
return val
def top(self) -> int:
return self.stack[-1]
def getMin(self) -> int:
return self.min[-1]
###Output
_____no_output_____
###Markdown
239. Sliding Window Maximum [Leetcode: 239. Sliding Window Maximum](https://leetcode.com/problems/sliding-window-maximum/)
###Code
from collections import deque
class Solution:
def maxSlidingWindow(self, nums, k):
# this deque will hold the
# index of the max element
# in a sliding window
queue = deque()
res = []
for i, curr_val in enumerate(nums):
# remove all those elements in the queue
# which are smaller than the current element
# this should maintain that the largest element
# in a window would be at the beginning of the
# queue
while queue and nums[queue[-1]] <= curr_val: # note has to be '<=',
queue.pop()
# add the index of the
# current element always
# ensure current i element is at least the second largest
queue.append(i)
print('current i:', i, 'current queue:', queue, 'nums:', [nums[i] for i in list(queue)])
# check if the first element in the queue
# is still within the bounds of the window
# i.e. the current index - k, if not
# remove it (popleft)
#
# here, storing the index instead of the
# element itself becomes apparent, since
# we're going linearly, we can check the
# index of the first element in the queue
# to see if it's within the current window
# or not
if queue[0] == i-k:
queue.popleft()
# simple check to ensure that we
# take into account the max element
# only when the window is of size >= k
# and since we're starting with an empty
# queue, we'll initially have a window
# of size 1,2,3....k-1 which are not valid
if i >= k-1:
res.append(nums[queue[0]])
print(res[-1])
return res
arr = [1,3,-1,-3,5,3,6,7]  # matches the expected output noted below
k = 3
# [3,3,5,5,6,7]
a = Solution()
a.maxSlidingWindow(arr, k)
from collections import deque
class Solution:
def maxSlidingWindow(self, nums, k: int):
queue = deque()
window_size = len(nums) - k + 1
res = [0 for _ in range(window_size)]
for i, curval in enumerate(nums):
while queue and nums[queue[-1]] <= curval:
queue.pop()
queue.append(i)
if queue[0] == i - k:
queue.popleft()
if i >= k - 1:
res[i-k+1] = nums[queue[0]]
return res
###Output
_____no_output_____
###Markdown
- Tips for memorizing: ```while queue and nums[queue[-1]] <= curval:``` - queue[-1]: kick out smaller elements from the back of the queue. - <=: has to be <=, in order to maintain the most recent largest element. ```if queue[0] == i - k: ```- i-k: kick out out-of-bound index. ```i >= k - 1: ```- \>= k - 1: start storing from k-1 150. Evaluate Reverse Polish Notation[150. Evaluate Reverse Polish Notation](https://leetcode.com/problems/evaluate-reverse-polish-notation/)
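A tiny standalone trace of the stack-based evaluation for the sample expression `["2","1","+","3","*"]` (i.e. `(2 + 1) * 3`), showing how the stack evolves before reading the solution below:

```python
# sketch: step-by-step evaluation of a small RPN expression
stack = []
for tok in ["2", "1", "+", "3", "*"]:
    if tok not in {"+", "-", "*", "/"}:
        stack.append(int(tok))
    else:
        b, a = stack.pop(), stack.pop()   # right operand first, then left
        if tok == "+":   stack.append(a + b)
        elif tok == "-": stack.append(a - b)
        elif tok == "*": stack.append(a * b)
        else:            stack.append(int(a / b))   # truncate toward zero
    print(tok, "->", stack)
print("result:", stack[-1])   # -> 9
```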
###Code
from collections import deque
class Solution:
def evalRPN(self, tokens) -> int:
stack = deque()
n = len(tokens)
res = 0
a = ['+', '-', '*', '/']
operators = set(a)
cur_res = 0
for i in range(n):
if tokens[i] in operators:
op = tokens[i]
b = stack.pop()
a = stack.pop()
if op == '+':
stack.append(a + b)
elif op == '-':
stack.append(a - b)
elif op == '*':
stack.append(a * b)
else:
stack.append(int(a / b))
else:
stack.append(int(tokens[i]))
return stack.pop()
#
# ["2","1","+","3","*"]
###Output
_____no_output_____ |
Course2/Week3/02W3Assignment.ipynb | ###Markdown
Language Models: Auto-CompleteIn this assignment, you will build an auto-complete system. Auto-complete system is something you may see every day- When you google something, you often have suggestions to help you complete your search. - When you are writing an email, you get suggestions telling you possible endings to your sentence. By the end of this assignment, you will develop a prototype of such a system. Outline- [1 Load and Preprocess Data](1)- [1.1: Load the data](1.1)- [1.2 Pre-process the data](1.2) - [Exercise 01](ex-01) - [Exercise 02](ex-02) - [Exercise 03](ex-03) - [Exercise 04](ex-04) - [Exercise 05](ex-05) - [Exercise 06](ex-06) - [Exercise 07](ex-07)- [2 Develop n-gram based language models](2) - [Exercise 08](ex-08) - [Exercise 09](ex-09) - [3 Perplexity](3) - [Exercise 10](ex-10)- [4 Build an auto-complete system](4) - [Exercise 11](ex-11) A key building block for an auto-complete system is a language model.A language model assigns the probability to a sequence of words, in a way that more "likely" sequences receive higher scores. For example, >"I have a pen" is expected to have a higher probability than >"I am a pen"since the first one seems to be a more natural sentence in the real world.You can take advantage of this probability calculation to develop an auto-complete system. Suppose the user typed >"I eat scrambled"Then you can find a word `x` such that "I eat scrambled x" receives the highest probability. If x = "eggs", the sentence would be>"I eat scrambled eggs"While a variety of language models have been developed, this assignment uses **N-grams**, a simple but powerful method for language modeling.- N-grams are also used in machine translation and speech recognition. Here are the steps of this assignment:1. Load and preprocess data - Load and tokenize data. - Split the sentences into train and test sets. - Replace words with a low frequency by an unknown marker ``.1. Develop N-gram based language models - Compute the count of n-grams from a given data set. - Estimate the conditional probability of a next word with k-smoothing.1. Evaluate the N-gram models by computing the perplexity score.1. Use your own model to suggest an upcoming word given your sentence.
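For orientation before diving in: the key quantity estimated in the later parts is the probability of a word given the previous words; with add-k smoothing (the "k-smoothing" mentioned in the outline), a standard form of the estimate is

$$\hat{P}(w_t \mid w_{t-n}^{t-1}) = \frac{C(w_{t-n}^{t-1}\, w_t) + k}{C(w_{t-n}^{t-1}) + k\,|V|}$$

where $C(\cdot)$ counts occurrences of the n-gram in the training data, $|V|$ is the vocabulary size, and $k$ is the smoothing constant.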
###Code
import math
import random
import numpy as np
import pandas as pd
import nltk
nltk.data.path.append('.')
###Output
_____no_output_____
###Markdown
Part 1: Load and Preprocess Data Part 1.1: Load the dataYou will use twitter data.Load the data and view the first few sentences by running the next cell.Notice that data is a long string that contains many many tweets.Observe that there is a line break "\n" between tweets.
###Code
with open("en_US.twitter.txt", "r") as f:
data = f.read()
print("Data type:", type(data))
print("Number of letters:", len(data))
print("First 300 letters of the data")
print("-------")
display(data[0:300])
print("-------")
print("Last 300 letters of the data")
print("-------")
display(data[-300:])
print("-------")
###Output
Data type: <class 'str'>
Number of letters: 3335477
First 300 letters of the data
-------
###Markdown
Part 1.2 Pre-process the dataPreprocess this data with the following steps:1. Split data into sentences using "\n" as the delimiter.1. Split each sentence into tokens. Note that in this assignment we use "token" and "words" interchangeably.1. Assign sentences into train or test sets.1. Find tokens that appear at least N times in the training data.1. Replace tokens that appear less than N times by `<unk>`. Note: we omit validation data in this exercise.- In real applications, we should hold a part of data as a validation set and use it to tune our training.- We skip this process for simplicity. Exercise 01Split data into sentences. Hints Use str.split
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED_FUNCTION: split_to_sentences ###
def split_to_sentences(data):
"""
Split data by linebreak "\n"
Args:
data: str
Returns:
A list of sentences
"""
### START CODE HERE (Replace instances of 'None' with your code) ###
sentences = data.split('\n')
### END CODE HERE ###
    # Additional cleaning (This part is already implemented)
# - Remove leading and trailing spaces from each sentence
# - Drop sentences if they are empty strings.
sentences = [s.strip() for s in sentences]
sentences = [s for s in sentences if len(s) > 0]
return sentences
# test your code
x = """
I have a pen.\nI have an apple. \nAh\nApple pen.\n
"""
print(x)
split_to_sentences(x)
###Output
I have a pen.
I have an apple.
Ah
Apple pen.
###Markdown
Expected answer: ```CPP['I have a pen.', 'I have an apple.', 'Ah', 'Apple pen.']``` Exercise 02The next step is to tokenize sentences (split a sentence into a list of words). - Convert all tokens into lower case so that words which are capitalized (for example, at the start of a sentence) in the original text are treated the same as the lowercase versions of the words.- Append each tokenized list of words into a list of tokenized sentences. Hints Use str.lower to convert strings to lowercase. Please use nltk.word_tokenize to split sentences into tokens. If you used str.split instead of nltk.word_tokenize, there are additional edge cases to handle, such as the punctuation (comma, period) that follows a word.
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED_FUNCTION: tokenize_sentences ###
def tokenize_sentences(sentences):
"""
Tokenize sentences into tokens (words)
Args:
sentences: List of strings
Returns:
List of lists of tokens
"""
# Initialize the list of lists of tokenized sentences
tokenized_sentences = []
### START CODE HERE (Replace instances of 'None' with your code) ###
# Go through each sentence
for sentence in sentences:
# Convert to lowercase letters
sentence = sentence.lower()
# Convert into a list of words
tokenized = nltk.word_tokenize(sentence)
# append the list of words to the list of lists
tokenized_sentences.append(tokenized)
### END CODE HERE ###
return tokenized_sentences
# test your code
sentences = ["Sky is blue.", "Leaves are green.", "Roses are red."]
tokenize_sentences(sentences)
###Output
_____no_output_____
###Markdown
Expected output```CPP[['sky', 'is', 'blue', '.'], ['leaves', 'are', 'green', '.'], ['roses', 'are', 'red', '.']]``` Exercise 03Use the two functions that you have just implemented to get the tokenized data.- split the data into sentences- tokenize those sentences
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED_FUNCTION: get_tokenized_data ###
def get_tokenized_data(data):
"""
Make a list of tokenized sentences
Args:
data: String
Returns:
List of lists of tokens
"""
### START CODE HERE (Replace instances of 'None' with your code) ###
# Get the sentences by splitting up the data
sentences = split_to_sentences(data)
# Get the list of lists of tokens by tokenizing the sentences
tokenized_sentences = tokenize_sentences(sentences)
### END CODE HERE ###
return tokenized_sentences
# test your function
x = "Sky is blue.\nLeaves are green\nRoses are red."
get_tokenized_data(x)
###Output
_____no_output_____
###Markdown
Expected outcome```CPP[['sky', 'is', 'blue', '.'], ['leaves', 'are', 'green'], ['roses', 'are', 'red', '.']]``` Split into train and test setsNow run the cell below to split data into training and test sets.
###Code
tokenized_data = get_tokenized_data(data)
random.seed(87)
random.shuffle(tokenized_data)
train_size = int(len(tokenized_data) * 0.8)
train_data = tokenized_data[0:train_size]
test_data = tokenized_data[train_size:]
print("{} data are split into {} train and {} test set".format(
len(tokenized_data), len(train_data), len(test_data)))
print("First training sample:")
print(train_data[0])
print("First test sample")
print(test_data[0])
###Output
47961 data are split into 38368 train and 9593 test set
First training sample:
['i', 'personally', 'would', 'like', 'as', 'our', 'official', 'glove', 'of', 'the', 'team', 'local', 'company', 'and', 'quality', 'production']
First test sample
['that', 'picture', 'i', 'just', 'seen', 'whoa', 'dere', '!', '!', '>', '>', '>', '>', '>', '>', '>']
###Markdown
Expected output```CPP47961 data are split into 38368 train and 9593 test setFirst training sample:['i', 'personally', 'would', 'like', 'as', 'our', 'official', 'glove', 'of', 'the', 'team', 'local', 'company', 'and', 'quality', 'production']First test sample['that', 'picture', 'i', 'just', 'seen', 'whoa', 'dere', '!', '!', '>', '>', '>', '>', '>', '>', '>']``` Exercise 04You won't use all the tokens (words) appearing in the data for training. Instead, you will use the more frequently used words. - You will focus on the words that appear at least N times in the data.- First count how many times each word appears in the data.You will need a double for-loop, one for sentences and the other for tokens within a sentence. Hints If you decide to import and use defaultdict, remember to cast the dictionary back to a regular 'dict' before returning it.
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED_FUNCTION: count_words ###
def count_words(tokenized_sentences):
"""
    Count the number of times each word appears in the tokenized sentences
Args:
tokenized_sentences: List of lists of strings
Returns:
dict that maps word (str) to the frequency (int)
"""
word_counts = {}
### START CODE HERE (Replace instances of 'None' with your code) ###
# Loop through each sentence
for sentence in tokenized_sentences: # complete this line
# Go through each token in the sentence
for token in sentence: # complete this line
# If the token is not in the dictionary yet, set the count to 1
if token not in word_counts.keys(): # complete this line
word_counts[token] = 1
# If the token is already in the dictionary, increment the count by 1
else:
word_counts[token] += 1
### END CODE HERE ###
return word_counts
# test your code
tokenized_sentences = [['sky', 'is', 'blue', '.'],
['leaves', 'are', 'green', '.'],
['roses', 'are', 'red', '.']]
count_words(tokenized_sentences)
###Output
_____no_output_____
###Markdown
Expected outputNote that the order may differ.```CPP{'sky': 1, 'is': 1, 'blue': 1, '.': 3, 'leaves': 1, 'are': 2, 'green': 1, 'roses': 1, 'red': 1}``` Handling 'Out of Vocabulary' wordsIf your model is performing autocomplete, but encounters a word that it never saw during training, it won't have an input word to help it determine the next word to suggest. The model will not be able to predict the next word because there are no counts for the current word. - This 'new' word is called an 'unknown word', or an out of vocabulary (OOV) word.- The percentage of unknown words in the test set is called the OOV rate. To handle unknown words during prediction, use a special token '<unk>' to represent all unknown words. - Modify the training data so that it has some 'unknown' words to train on.- Words to convert into "unknown" words are those that do not occur very frequently in the training set.- Create a list of the most frequent words in the training set, called the closed vocabulary. - Convert all the other words that are not part of the closed vocabulary to the token '<unk>'. Exercise 05You will now create a function that takes in a text document and a threshold 'count_threshold'.- Any word whose count is greater than or equal to the threshold 'count_threshold' is kept in the closed vocabulary.- The function returns this closed vocabulary: the list of words that appear at least 'count_threshold' times in the data.
###Code
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED_FUNCTION: get_words_with_nplus_frequency ###
def get_words_with_nplus_frequency(tokenized_sentences, count_threshold):
"""
Find the words that appear N times or more
Args:
tokenized_sentences: List of lists of sentences
count_threshold: minimum number of occurrences for a word to be in the closed vocabulary.
Returns:
List of words that appear N times or more
"""
# Initialize an empty list to contain the words that
# appear at least 'minimum_freq' times.
closed_vocab = []
    # Get the word counts of the tokenized sentences
# Use the function that you defined earlier to count the words
word_counts = count_words(tokenized_sentences)
### START CODE HERE (Replace instances of 'None' with your code) ###
# for each word and its count
for word, cnt in word_counts.items(): # complete this line
# check that the word's count
# is at least as great as the minimum count
if cnt >= count_threshold:
# append the word to the list
closed_vocab.append(word)
### END CODE HERE ###
return closed_vocab
# test your code
tokenized_sentences = [['sky', 'is', 'blue', '.'],
['leaves', 'are', 'green', '.'],
['roses', 'are', 'red', '.']]
tmp_closed_vocab = get_words_with_nplus_frequency(tokenized_sentences, count_threshold=2)
print(f"Closed vocabulary:")
print(tmp_closed_vocab)
###Output
Closed vocabulary:
['.', 'are']
###Markdown
Expected output```CPPClosed vocabulary:['.', 'are']``` Exercise 06The words that appear 'count_threshold' times or more are in the 'closed vocabulary'. - All other words are regarded as 'unknown'.- Replace words not in the closed vocabulary with the token "<unk>".
###Code
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED_FUNCTION: replace_oov_words_by_unk ###
def replace_oov_words_by_unk(tokenized_sentences, vocabulary, unknown_token="<unk>"):
"""
Replace words not in the given vocabulary with '<unk>' token.
Args:
tokenized_sentences: List of lists of strings
vocabulary: List of strings that we will use
unknown_token: A string representing unknown (out-of-vocabulary) words
Returns:
List of lists of strings, with words not in the vocabulary replaced
"""
# Place vocabulary into a set for faster search
vocabulary = set(vocabulary)
# Initialize a list that will hold the sentences
# after less frequent words are replaced by the unknown token
replaced_tokenized_sentences = []
# Go through each sentence
for sentence in tokenized_sentences:
# Initialize the list that will contain
# a single sentence with "unknown_token" replacements
replaced_sentence = []
### START CODE HERE (Replace instances of 'None' with your code) ###
# for each token in the sentence
for token in sentence: # complete this line
# Check if the token is in the closed vocabulary
if token in vocabulary: # complete this line
# If so, append the word to the replaced_sentence
replaced_sentence.append(token)
else:
# otherwise, append the unknown token instead
replaced_sentence.append(unknown_token)
### END CODE HERE ###
# Append the list of tokens to the list of lists
replaced_tokenized_sentences.append(replaced_sentence)
return replaced_tokenized_sentences
tokenized_sentences = [["dogs", "run"], ["cats", "sleep"]]
vocabulary = ["dogs", "sleep"]
tmp_replaced_tokenized_sentences = replace_oov_words_by_unk(tokenized_sentences, vocabulary)
print(f"Original sentence:")
print(tokenized_sentences)
print(f"tokenized_sentences with less frequent words converted to '<unk>':")
print(tmp_replaced_tokenized_sentences)
###Output
Original sentence:
[['dogs', 'run'], ['cats', 'sleep']]
tokenized_sentences with less frequent words converted to '<unk>':
[['dogs', '<unk>'], ['<unk>', 'sleep']]
###Markdown
Expected answer```CPPOriginal sentence:[['dogs', 'run'], ['cats', 'sleep']]tokenized_sentences with less frequent words converted to '<unk>':[['dogs', '<unk>'], ['<unk>', 'sleep']]``` Exercise 07Now we are ready to process our data by combining the functions that you just implemented.1. Find tokens that appear at least count_threshold times in the training data.1. Replace tokens that appear less than count_threshold times by "<unk>" both for training and test data.
###Code
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED_FUNCTION: preprocess_data ###
def preprocess_data(train_data, test_data, count_threshold):
"""
Preprocess data, i.e.,
- Find tokens that appear at least N times in the training data.
- Replace tokens that appear less than N times by "<unk>" both for training and test data.
Args:
train_data, test_data: List of lists of strings.
count_threshold: Words whose count is less than this are
treated as unknown.
Returns:
Tuple of
- training data with low frequent words replaced by "<unk>"
- test data with low frequent words replaced by "<unk>"
- vocabulary of words that appear n times or more in the training data
"""
### START CODE HERE (Replace instances of 'None' with your code) ###
# Get the closed vocabulary using the train data
vocabulary = get_words_with_nplus_frequency(train_data, count_threshold)
# For the train data, replace less common words with "<unk>"
train_data_replaced = replace_oov_words_by_unk(train_data, vocabulary)
# For the test data, replace less common words with "<unk>"
test_data_replaced = replace_oov_words_by_unk(test_data, vocabulary)
### END CODE HERE ###
return train_data_replaced, test_data_replaced, vocabulary
# test your code
tmp_train = [['sky', 'is', 'blue', '.'],
['leaves', 'are', 'green']]
tmp_test = [['roses', 'are', 'red', '.']]
tmp_train_repl, tmp_test_repl, tmp_vocab = preprocess_data(tmp_train,
tmp_test,
count_threshold = 1)
print("tmp_train_repl")
print(tmp_train_repl)
print()
print("tmp_test_repl")
print(tmp_test_repl)
print()
print("tmp_vocab")
print(tmp_vocab)
###Output
tmp_train_repl
[['sky', 'is', 'blue', '.'], ['leaves', 'are', 'green']]
tmp_test_repl
[['<unk>', 'are', '<unk>', '.']]
tmp_vocab
['sky', 'is', 'blue', '.', 'leaves', 'are', 'green']
###Markdown
Expected outcome```CPPtmp_train_repl[['sky', 'is', 'blue', '.'], ['leaves', 'are', 'green']]tmp_test_repl[['<unk>', 'are', '<unk>', '.']]tmp_vocab['sky', 'is', 'blue', '.', 'leaves', 'are', 'green']``` Preprocess the train and test dataRun the cell below to complete the preprocessing both for training and test sets.
###Code
minimum_freq = 2
train_data_processed, test_data_processed, vocabulary = preprocess_data(train_data,
test_data,
minimum_freq)
print("First preprocessed training sample:")
print(train_data_processed[0])
print()
print("First preprocessed test sample:")
print(test_data_processed[0])
print()
print("First 10 vocabulary:")
print(vocabulary[0:10])
print()
print("Size of vocabulary:", len(vocabulary))
###Output
First preprocessed training sample:
['i', 'personally', 'would', 'like', 'as', 'our', 'official', 'glove', 'of', 'the', 'team', 'local', 'company', 'and', 'quality', 'production']
First preprocessed test sample:
['that', 'picture', 'i', 'just', 'seen', 'whoa', 'dere', '!', '!', '>', '>', '>', '>', '>', '>', '>']
First 10 vocabulary:
['i', 'personally', 'would', 'like', 'as', 'our', 'official', 'glove', 'of', 'the']
Size of vocabulary: 14821
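###Markdown
Optional aside (not part of the graded exercises): the OOV rate defined earlier (the share of test tokens that were replaced by `<unk>`) can be checked directly from the processed data. A minimal sketch, assuming `test_data_processed` from the cell above is in memory:
```python
# Fraction of tokens in the processed test set that were mapped to "<unk>"
total_tokens = sum(len(sentence) for sentence in test_data_processed)
unk_tokens = sum(sentence.count("<unk>") for sentence in test_data_processed)
print(f"OOV rate on the test set: {unk_tokens / total_tokens:.2%}")
```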
###Markdown
Expected output```CPPFirst preprocessed training sample:['i', 'personally', 'would', 'like', 'as', 'our', 'official', 'glove', 'of', 'the', 'team', 'local', 'company', 'and', 'quality', 'production']First preprocessed test sample:['that', 'picture', 'i', 'just', 'seen', 'whoa', 'dere', '!', '!', '>', '>', '>', '>', '>', '>', '>']First 10 vocabulary:['i', 'personally', 'would', 'like', 'as', 'our', 'official', 'glove', 'of', 'the']Size of vocabulary: 14821``` You are done with the preprocessing section of the assignment.Objects `train_data_processed`, `test_data_processed`, and `vocabulary` will be used in the rest of the exercises. Part 2: Develop n-gram based language modelsIn this section, you will develop the n-grams language model.- Assume the probability of the next word depends only on the previous n-gram.- The previous n-gram is the series of the previous 'n' words.The conditional probability for the word at position 't' in the sentence, given that the words preceding it are $w_{t-1}, w_{t-2} \cdots w_{t-n}$ is:$$ P(w_t | w_{t-1}\dots w_{t-n}) \tag{1}$$You can estimate this probability by counting the occurrences of these series of words in the training data.- The probability can be estimated as a ratio, where- The numerator is the number of times word 't' appears after words t-1 through t-n appear in the training data.- The denominator is the number of times word t-1 through t-n appears in the training data.$$ \hat{P}(w_t | w_{t-1}\dots w_{t-n}) = \frac{C(w_{t-1}\dots w_{t-n}, w_t)}{C(w_{t-1}\dots w_{t-n})} \tag{2} $$- The function $C(\cdots)$ denotes the number of occurrences of the given sequence. - $\hat{P}$ means the estimation of $P$. - Notice that the denominator of equation (2) is the number of occurrences of the previous $n$ words, and the numerator is the same sequence followed by the word $w_t$.Later, you will modify equation (2) by adding k-smoothing, which avoids errors when any counts are zero.The equation (2) tells us that to estimate probabilities based on n-grams, you need the counts of n-grams (for the denominator) and (n+1)-grams (for the numerator). Exercise 08Next, you will implement a function that computes the counts of n-grams for an arbitrary number $n$.When computing the counts for n-grams, prepare the sentence beforehand by prepending $n$ starting markers "<s>" to indicate the beginning of the sentence. - For example, in the bi-gram model (N=2), a sequence with two start tokens "<s> <s>" should predict the first word of a sentence.- So, if the sentence is "I like food", modify it to be "<s> <s> I like food".- Also prepare the sentence for counting by appending an end token "<e>" so that the model can predict when to finish a sentence.Technical note: In this implementation, you will store the counts as a dictionary.- The key of each key-value pair in the dictionary is a **tuple** of n words (and not a list)- The value in the key-value pair is the number of occurrences. - The reason for using a tuple as a key instead of a list is because a list in Python is a mutable object (it can be changed after it is first created). A tuple is "immutable", so it cannot be altered after it is first created. This makes a tuple suitable as a data type for the key in a dictionary. 
Hints To prepend or append, you can create lists and concatenate them using the + operator To create a list of a repeated value, you can follow this syntax: ['a'] * 3 to get ['a','a','a'] To set the range for index 'i', think of this example: An n-gram where n=2 (bigram), and the sentence is length N=5 (including two start tokens and one end token). So the index positions are [0,1,2,3,4]. The largest index 'i' where a bigram can start is at position i=3, because the word tokens at position 3 and 4 will form the bigram. Remember that the range() function excludes the value that is used for the maximum of the range. range(3) produces (0,1,2) but excludes 3.
###Code
# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED FUNCTION: count_n_grams ###
def count_n_grams(data, n, start_token='<s>', end_token = '<e>'):
"""
Count all n-grams in the data
Args:
data: List of lists of words
n: number of words in a sequence
Returns:
A dictionary that maps a tuple of n-words to its frequency
"""
# Initialize dictionary of n-grams and their counts
n_grams = {}
### START CODE HERE (Replace instances of 'None' with your code) ###
# Go through each sentence in the data
for sentence in data: # complete this line
# prepend start token n times, and append <e> one time
sentence = [start_token] * n + sentence + [end_token]
# convert list to tuple
# So that the sequence of words can be used as
# a key in the dictionary
sentence = tuple(sentence)
# Use 'i' to indicate the start of the n-gram
# from index 0
# to the last index where the end of the n-gram
# is within the sentence.
for i in range(0, len(sentence) - n + 1): # complete this line
# Get the n-gram from i to i+n
n_gram = sentence[i : i + n]
# check if the n-gram is in the dictionary
if n_gram in n_grams: # complete this line
# Increment the count for this n-gram
n_grams[n_gram] += 1
else:
# Initialize this n-gram count to 1
n_grams[n_gram] = 1
### END CODE HERE ###
return n_grams
# test your code
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
print("Uni-gram:")
print(count_n_grams(sentences, 1))
print("Bi-gram:")
print(count_n_grams(sentences, 2))
###Output
Uni-gram:
{('<s>',): 2, ('i',): 1, ('like',): 2, ('a',): 2, ('cat',): 2, ('<e>',): 2, ('this',): 1, ('dog',): 1, ('is',): 1}
Bi-gram:
{('<s>', '<s>'): 2, ('<s>', 'i'): 1, ('i', 'like'): 1, ('like', 'a'): 2, ('a', 'cat'): 2, ('cat', '<e>'): 2, ('<s>', 'this'): 1, ('this', 'dog'): 1, ('dog', 'is'): 1, ('is', 'like'): 1}
###Markdown
Expected outcome:```CPPUni-gram:{('<s>',): 2, ('i',): 1, ('like',): 2, ('a',): 2, ('cat',): 2, ('<e>',): 2, ('this',): 1, ('dog',): 1, ('is',): 1}Bi-gram:{('<s>', '<s>'): 2, ('<s>', 'i'): 1, ('i', 'like'): 1, ('like', 'a'): 2, ('a', 'cat'): 2, ('cat', '<e>'): 2, ('<s>', 'this'): 1, ('this', 'dog'): 1, ('dog', 'is'): 1, ('is', 'like'): 1}``` Exercise 09Next, estimate the probability of a word given the prior 'n' words using the n-gram counts.$$ \hat{P}(w_t | w_{t-1}\dots w_{t-n}) = \frac{C(w_{t-1}\dots w_{t-n}, w_t)}{C(w_{t-1}\dots w_{t-n})} \tag{2} $$This formula doesn't work when a count of an n-gram is zero.- Suppose we encounter an n-gram that did not occur in the training data. - Then, the equation (2) cannot be evaluated (it becomes zero divided by zero).A way to handle zero counts is to add k-smoothing. - K-smoothing adds a positive constant $k$ to each numerator and $k \times |V|$ in the denominator, where $|V|$ is the number of words in the vocabulary.$$ \hat{P}(w_t | w_{t-1}\dots w_{t-n}) = \frac{C(w_{t-1}\dots w_{t-n}, w_t) + k}{C(w_{t-1}\dots w_{t-n}) + k|V|} \tag{3} $$For n-grams that have a zero count, the equation (3) becomes $\frac{1}{|V|}$.- This means that any n-gram with zero count has the same probability of $\frac{1}{|V|}$.Define a function that computes the probability estimate (3) from n-gram counts and a constant $k$.- The function takes in a dictionary 'n_gram_counts', where the key is the n-gram and the value is the count of that n-gram.- The function also takes another dictionary n_plus1_gram_counts, which you'll use to find the count for the previous n-gram plus the current word. Hints To define a tuple containing a single value, add a comma after that value. For example: ('apple',) is a tuple containing a single string 'apple' To concatenate two tuples, use the '+' operator.
###Code
# UNQ_C9 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
### GRADED FUNCTION: estimate_probability ###
def estimate_probability(word, previous_n_gram,
n_gram_counts, n_plus1_gram_counts, vocabulary_size, k=1.0):
"""
Estimate the probabilities of a next word using the n-gram counts with k-smoothing
Args:
word: next word
previous_n_gram: A sequence of words of length n
        n_gram_counts: Dictionary of counts of n-grams
n_plus1_gram_counts: Dictionary of counts of (n+1)-grams
vocabulary_size: number of words in the vocabulary
k: positive constant, smoothing parameter
Returns:
A probability
"""
# convert list to tuple to use it as a dictionary key
previous_n_gram = tuple(previous_n_gram)
### START CODE HERE (Replace instances of 'None' with your code) ###
# Set the denominator
# If the previous n-gram exists in the dictionary of n-gram counts,
# Get its count. Otherwise set the count to zero
# Use the dictionary that has counts for n-grams
previous_n_gram_count = 0 if previous_n_gram not in n_gram_counts.keys() else n_gram_counts[previous_n_gram]
# Calculate the denominator using the count of the previous n gram
# and apply k-smoothing
denominator = previous_n_gram_count + k * vocabulary_size
# Define n plus 1 gram as the previous n-gram plus the current word as a tuple
n_plus1_gram = previous_n_gram + (word,)
# Set the count to the count in the dictionary,
# otherwise 0 if not in the dictionary
# use the dictionary that has counts for the n-gram plus current word
n_plus1_gram_count = 0 if n_plus1_gram not in n_plus1_gram_counts.keys() else n_plus1_gram_counts[n_plus1_gram]
# Define the numerator use the count of the n-gram plus current word,
# and apply smoothing
numerator = n_plus1_gram_count + k
# Calculate the probability as the numerator divided by denominator
probability = numerator / denominator
### END CODE HERE ###
return probability
# test your code
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
unique_words = list(set(sentences[0] + sentences[1]))
unigram_counts = count_n_grams(sentences, 1)
bigram_counts = count_n_grams(sentences, 2)
tmp_prob = estimate_probability("cat", "a", unigram_counts, bigram_counts, len(unique_words), k=1)
print(f"The estimated probability of word 'cat' given the previous n-gram 'a' is: {tmp_prob:.4f}")
###Output
The estimated probability of word 'cat' given the previous n-gram 'a' is: 0.3333
###Markdown
Expected output```CPPThe estimated probability of word 'cat' given the previous n-gram 'a' is: 0.3333``` Estimate probabilities for all wordsThe function defined below loops over all words in vocabulary to calculate probabilities for all possible words.- This function is provided for you.
###Code
def estimate_probabilities(previous_n_gram, n_gram_counts, n_plus1_gram_counts, vocabulary, k=1.0):
"""
Estimate the probabilities of next words using the n-gram counts with k-smoothing
Args:
previous_n_gram: A sequence of words of length n
        n_gram_counts: Dictionary of counts of n-grams
n_plus1_gram_counts: Dictionary of counts of (n+1)-grams
vocabulary: List of words
k: positive constant, smoothing parameter
Returns:
A dictionary mapping from next words to the probability.
"""
# convert list to tuple to use it as a dictionary key
previous_n_gram = tuple(previous_n_gram)
# add <e> <unk> to the vocabulary
# <s> is not needed since it should not appear as the next word
vocabulary = vocabulary + ["<e>", "<unk>"]
vocabulary_size = len(vocabulary)
probabilities = {}
for word in vocabulary:
probability = estimate_probability(word, previous_n_gram,
n_gram_counts, n_plus1_gram_counts,
vocabulary_size, k=k)
probabilities[word] = probability
return probabilities
# test your code
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
unique_words = list(set(sentences[0] + sentences[1]))
unigram_counts = count_n_grams(sentences, 1)
bigram_counts = count_n_grams(sentences, 2)
estimate_probabilities("a", unigram_counts, bigram_counts, unique_words, k=1)
###Output
_____no_output_____
###Markdown
Expected output```CPP{'cat': 0.2727272727272727, 'i': 0.09090909090909091, 'this': 0.09090909090909091, 'a': 0.09090909090909091, 'is': 0.09090909090909091, 'like': 0.09090909090909091, 'dog': 0.09090909090909091, '<e>': 0.09090909090909091, '<unk>': 0.09090909090909091}```
###Code
# Additional test
trigram_counts = count_n_grams(sentences, 3)
estimate_probabilities(["<s>", "<s>"], bigram_counts, trigram_counts, unique_words, k=1)
###Output
_____no_output_____
###Markdown
Expected output```CPP{'cat': 0.09090909090909091, 'i': 0.18181818181818182, 'this': 0.18181818181818182, 'a': 0.09090909090909091, 'is': 0.09090909090909091, 'like': 0.09090909090909091, 'dog': 0.09090909090909091, '<e>': 0.09090909090909091, '<unk>': 0.09090909090909091}``` Count and probability matricesAs we have seen so far, the n-gram counts computed above are sufficient for computing the probabilities of the next word. - It can be more intuitive to present them as count or probability matrices.- The functions defined in the next cells return count or probability matrices.- This function is provided for you.
###Code
def make_count_matrix(n_plus1_gram_counts, vocabulary):
# add <e> <unk> to the vocabulary
# <s> is omitted since it should not appear as the next word
vocabulary = vocabulary + ["<e>", "<unk>"]
# obtain unique n-grams
n_grams = []
for n_plus1_gram in n_plus1_gram_counts.keys():
n_gram = n_plus1_gram[0:-1]
n_grams.append(n_gram)
n_grams = list(set(n_grams))
# mapping from n-gram to row
row_index = {n_gram:i for i, n_gram in enumerate(n_grams)}
# mapping from next word to column
col_index = {word:j for j, word in enumerate(vocabulary)}
nrow = len(n_grams)
ncol = len(vocabulary)
count_matrix = np.zeros((nrow, ncol))
for n_plus1_gram, count in n_plus1_gram_counts.items():
n_gram = n_plus1_gram[0:-1]
word = n_plus1_gram[-1]
if word not in vocabulary:
continue
i = row_index[n_gram]
j = col_index[word]
count_matrix[i, j] = count
count_matrix = pd.DataFrame(count_matrix, index=n_grams, columns=vocabulary)
return count_matrix
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
unique_words = list(set(sentences[0] + sentences[1]))
bigram_counts = count_n_grams(sentences, 2)
print('bigram counts')
display(make_count_matrix(bigram_counts, unique_words))
###Output
bigram counts
###Markdown
Expected output```CPP
bigram counts
         cat    i  this    a   is  like  dog  <e>  <unk>
(<s>,)   0.0  1.0   1.0  0.0  0.0   0.0  0.0  0.0    0.0
(a,)     2.0  0.0   0.0  0.0  0.0   0.0  0.0  0.0    0.0
(this,)  0.0  0.0   0.0  0.0  0.0   0.0  1.0  0.0    0.0
(like,)  0.0  0.0   0.0  2.0  0.0   0.0  0.0  0.0    0.0
(dog,)   0.0  0.0   0.0  0.0  1.0   0.0  0.0  0.0    0.0
(cat,)   0.0  0.0   0.0  0.0  0.0   0.0  0.0  2.0    0.0
(is,)    0.0  0.0   0.0  0.0  0.0   1.0  0.0  0.0    0.0
(i,)     0.0  0.0   0.0  0.0  0.0   1.0  0.0  0.0    0.0
```
###Code
# Show trigram counts
print('\ntrigram counts')
trigram_counts = count_n_grams(sentences, 3)
display(make_count_matrix(trigram_counts, unique_words))
###Output
trigram counts
###Markdown
Expected output```CPP
trigram counts
             cat    i  this    a   is  like  dog  <e>  <unk>
(dog, is)    0.0  0.0   0.0  0.0  0.0   1.0  0.0  0.0    0.0
(this, dog)  0.0  0.0   0.0  0.0  1.0   0.0  0.0  0.0    0.0
(a, cat)     0.0  0.0   0.0  0.0  0.0   0.0  0.0  2.0    0.0
(like, a)    2.0  0.0   0.0  0.0  0.0   0.0  0.0  0.0    0.0
(is, like)   0.0  0.0   0.0  1.0  0.0   0.0  0.0  0.0    0.0
(<s>, i)     0.0  0.0   0.0  0.0  0.0   1.0  0.0  0.0    0.0
(i, like)    0.0  0.0   0.0  1.0  0.0   0.0  0.0  0.0    0.0
(<s>, <s>)   0.0  1.0   1.0  0.0  0.0   0.0  0.0  0.0    0.0
(<s>, this)  0.0  0.0   0.0  0.0  0.0   0.0  1.0  0.0    0.0
```
The following function calculates the probabilities of each word given the previous n-gram, and stores this in matrix form.- This function is provided for you.
###Code
def make_probability_matrix(n_plus1_gram_counts, vocabulary, k):
    count_matrix = make_count_matrix(n_plus1_gram_counts, vocabulary)
count_matrix += k
prob_matrix = count_matrix.div(count_matrix.sum(axis=1), axis=0)
return prob_matrix
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
unique_words = list(set(sentences[0] + sentences[1]))
bigram_counts = count_n_grams(sentences, 2)
print("bigram probabilities")
display(make_probability_matrix(bigram_counts, unique_words, k=1))
print("trigram probabilities")
trigram_counts = count_n_grams(sentences, 3)
display(make_probability_matrix(trigram_counts, unique_words, k=1))
###Output
trigram probabilities
###Markdown
Confirm that you obtain the same results as for the `estimate_probabilities` function that you implemented. Part 3: PerplexityIn this section, you will generate the perplexity score to evaluate your model on the test set. - You will also use back-off when needed. - Perplexity is used as an evaluation metric of your language model. - To calculate the perplexity score of the test set on an n-gram model, use: $$ PP(W) =\sqrt[N]{ \prod_{t=n+1}^N \frac{1}{P(w_t | w_{t-n} \cdots w_{t-1})} } \tag{4}$$- where $N$ is the length of the sentence.- $n$ is the number of words in the n-gram (e.g. 2 for a bigram).- In math, the numbering starts at one and not zero.In code, array indexing starts at zero, so the code will use ranges for $t$ according to this formula:$$ PP(W) =\sqrt[N]{ \prod_{t=n}^{N-1} \frac{1}{P(w_t | w_{t-n} \cdots w_{t-1})} } \tag{4.1}$$The higher the probabilities are, the lower the perplexity will be. - The more the n-grams tell us about the sentence, the lower the perplexity score will be. Exercise 10Compute the perplexity score given an N-gram count matrix and a sentence. Hints Remember that range(2,4) produces the integers [2, 3] (and excludes 4).
###Code
# UNQ_C10 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: calculate_perplexity
def calculate_perplexity(sentence, n_gram_counts, n_plus1_gram_counts, vocabulary_size, k=1.0):
"""
    Calculate perplexity for a single sentence
Args:
sentence: List of strings
        n_gram_counts: Dictionary of counts of n-grams
n_plus1_gram_counts: Dictionary of counts of (n+1)-grams
vocabulary_size: number of unique words in the vocabulary
k: Positive smoothing constant
Returns:
Perplexity score
"""
# length of previous words
n = len(list(n_gram_counts.keys())[0])
# prepend <s> and append <e>
sentence = ["<s>"] * n + sentence + ["<e>"]
# Cast the sentence from a list to a tuple
sentence = tuple(sentence)
# length of sentence (after adding <s> and <e> tokens)
N = len(sentence)
# The variable p will hold the product
# that is calculated inside the n-root
# Update this in the code below
product_pi = 1.0
### START CODE HERE (Replace instances of 'None' with your code) ###
# Index t ranges from n to N - 1
for t in range(n, N): # complete this line
# get the n-gram preceding the word at position t
n_gram = sentence[t - n: t]
# get the word at position t
word = sentence[t]
# Estimate the probability of the word given the n-gram
# using the n-gram counts, n-plus1-gram counts,
# vocabulary size, and smoothing constant
probability = estimate_probability(word, n_gram, n_gram_counts, n_plus1_gram_counts, vocabulary_size, k)
# Update the product of the probabilities
# This 'product_pi' is a cumulative product
# of the (1/P) factors that are calculated in the loop
product_pi *= 1 / probability
# Take the Nth root of the product
perplexity = math.pow(product_pi, 1 / N)
### END CODE HERE ###
return perplexity
# test your code
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
unique_words = list(set(sentences[0] + sentences[1]))
unigram_counts = count_n_grams(sentences, 1)
bigram_counts = count_n_grams(sentences, 2)
perplexity_train1 = calculate_perplexity(sentences[0],
unigram_counts, bigram_counts,
len(unique_words), k=1.0)
print(f"Perplexity for first train sample: {perplexity_train1:.4f}")
test_sentence = ['i', 'like', 'a', 'dog']
perplexity_test = calculate_perplexity(test_sentence,
unigram_counts, bigram_counts,
len(unique_words), k=1.0)
print(f"Perplexity for test sample: {perplexity_test:.4f}")
###Output
Perplexity for first train sample: 2.8040
Perplexity for test sample: 3.9654
###Markdown
Expected Output```CPPPerplexity for first train sample: 2.8040Perplexity for test sample: 3.9654``` Note: If your sentence is really long, there will be underflow when multiplying many fractions.- To handle longer sentences, modify your implementation to take the sum of the log of the probabilities. Part 4: Build an auto-complete systemIn this section, you will combine the language models developed so far to implement an auto-complete system. Exercise 11Compute probabilities for all possible next words and suggest the most likely one.- This function also takes an optional argument `start_with`, which specifies the first few letters of the next word. Hints estimate_probabilities returns a dictionary where the key is a word and the value is the word's probability. Use str1.startswith(str2) to determine if a string starts with the letters of another string. For example, 'learning'.startswith('lea') returns True, whereas 'learning'.startswith('ear') returns False. There are two additional parameters in str.startswith(), but you can use the default values for those parameters in this case.
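As an ungraded aside on the underflow note above, before the Exercise 11 code below: a minimal sketch of a log-space variant of `calculate_perplexity` that sums log probabilities instead of multiplying them. It reuses `estimate_probability` and the toy counts defined earlier; the function name here is illustrative only, not part of the graded assignment.
```python
import math

def calculate_perplexity_logspace(sentence, n_gram_counts, n_plus1_gram_counts, vocabulary_size, k=1.0):
    # Same logic as calculate_perplexity, but accumulates log probabilities to avoid underflow
    n = len(list(n_gram_counts.keys())[0])
    sentence = tuple(["<s>"] * n + sentence + ["<e>"])
    N = len(sentence)
    log_prob_sum = 0.0
    for t in range(n, N):
        probability = estimate_probability(sentence[t], sentence[t - n:t],
                                           n_gram_counts, n_plus1_gram_counts,
                                           vocabulary_size, k=k)
        log_prob_sum += math.log(probability)
    # exp(-(1/N) * sum(log P)) equals the Nth root of the product of the 1/P factors
    return math.exp(-log_prob_sum / N)

# Should match the perplexity printed above up to floating point error
print(calculate_perplexity_logspace(sentences[0], unigram_counts, bigram_counts, len(unique_words), k=1.0))
```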
###Code
# UNQ_C11 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: suggest_a_word
def suggest_a_word(previous_tokens, n_gram_counts, n_plus1_gram_counts, vocabulary, k=1.0, start_with=None):
"""
Get suggestion for the next word
Args:
previous_tokens: The sentence you input where each token is a word. Must have length > n
        n_gram_counts: Dictionary of counts of n-grams
n_plus1_gram_counts: Dictionary of counts of (n+1)-grams
vocabulary: List of words
k: positive constant, smoothing parameter
start_with: If not None, specifies the first few letters of the next word
Returns:
A tuple of
- string of the most likely next word
- corresponding probability
"""
# length of previous words
n = len(list(n_gram_counts.keys())[0])
# From the words that the user already typed
# get the most recent 'n' words as the previous n-gram
previous_n_gram = previous_tokens[-n:]
# Estimate the probabilities that each word in the vocabulary
# is the next word,
# given the previous n-gram, the dictionary of n-gram counts,
# the dictionary of n plus 1 gram counts, and the smoothing constant
probabilities = estimate_probabilities(previous_n_gram,
n_gram_counts, n_plus1_gram_counts,
vocabulary, k=k)
# Initialize suggested word to None
# This will be set to the word with highest probability
suggestion = None
# Initialize the highest word probability to 0
# this will be set to the highest probability
# of all words to be suggested
max_prob = 0
### START CODE HERE (Replace instances of 'None' with your code) ###
# For each word and its probability in the probabilities dictionary:
for word, prob in probabilities.items(): # complete this line
# If the optional start_with string is set
if start_with != None: # complete this line
# Check if the word starts with the letters in 'start_with'
if word.startswith(start_with) == False: # complete this line
#If so, don't consider this word (move onto the next word)
continue # complete this line
# Check if this word's probability
# is greater than the current maximum probability
if prob > max_prob: # complete this line
# If so, save this word as the best suggestion (so far)
suggestion = word
# Save the new maximum probability
max_prob = prob
### END CODE HERE
return suggestion, max_prob
# test your code
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
unique_words = list(set(sentences[0] + sentences[1]))
unigram_counts = count_n_grams(sentences, 1)
bigram_counts = count_n_grams(sentences, 2)
previous_tokens = ["i", "like"]
tmp_suggest1 = suggest_a_word(previous_tokens, unigram_counts, bigram_counts, unique_words, k=1.0)
print(f"The previous words are 'i like',\n\tand the suggested word is `{tmp_suggest1[0]}` with a probability of {tmp_suggest1[1]:.4f}")
print()
# test your code when setting the starts_with
tmp_starts_with = 'c'
tmp_suggest2 = suggest_a_word(previous_tokens, unigram_counts, bigram_counts, unique_words, k=1.0, start_with=tmp_starts_with)
print(f"The previous words are 'i like', the suggestion must start with `{tmp_starts_with}`\n\tand the suggested word is `{tmp_suggest2[0]}` with a probability of {tmp_suggest2[1]:.4f}")
###Output
The previous words are 'i like',
and the suggested word is `a` with a probability of 0.2727
The previous words are 'i like', the suggestion must start with `c`
and the suggested word is `cat` with a probability of 0.0909
###Markdown
Expected output```CPPThe previous words are 'i like', and the suggested word is `a` with a probability of 0.2727The previous words are 'i like', the suggestion must start with `c` and the suggested word is `cat` with a probability of 0.0909``` Get multiple suggestionsThe function defined below loops over various n-gram models to get multiple suggestions.
###Code
def get_suggestions(previous_tokens, n_gram_counts_list, vocabulary, k=1.0, start_with=None):
model_counts = len(n_gram_counts_list)
suggestions = []
for i in range(model_counts-1):
n_gram_counts = n_gram_counts_list[i]
n_plus1_gram_counts = n_gram_counts_list[i+1]
suggestion = suggest_a_word(previous_tokens, n_gram_counts,
n_plus1_gram_counts, vocabulary,
k=k, start_with=start_with)
suggestions.append(suggestion)
return suggestions
# test your code
sentences = [['i', 'like', 'a', 'cat'],
['this', 'dog', 'is', 'like', 'a', 'cat']]
unique_words = list(set(sentences[0] + sentences[1]))
unigram_counts = count_n_grams(sentences, 1)
bigram_counts = count_n_grams(sentences, 2)
trigram_counts = count_n_grams(sentences, 3)
quadgram_counts = count_n_grams(sentences, 4)
qintgram_counts = count_n_grams(sentences, 5)
n_gram_counts_list = [unigram_counts, bigram_counts, trigram_counts, quadgram_counts, qintgram_counts]
previous_tokens = ["i", "like"]
tmp_suggest3 = get_suggestions(previous_tokens, n_gram_counts_list, unique_words, k=1.0)
print(f"The previous words are 'i like', the suggestions are:")
display(tmp_suggest3)
###Output
The previous words are 'i like', the suggestions are:
###Markdown
Suggest multiple words using n-grams of varying lengthCongratulations! You have developed all building blocks for implementing your own auto-complete systems.Let's see this with n-grams of varying lengths (unigrams, bigrams, trigrams, 4-grams and 5-grams).
###Code
n_gram_counts_list = []
for n in range(1, 6):
print("Computing n-gram counts with n =", n, "...")
n_model_counts = count_n_grams(train_data_processed, n)
n_gram_counts_list.append(n_model_counts)
previous_tokens = ["i", "am", "to"]
tmp_suggest4 = get_suggestions(previous_tokens, n_gram_counts_list, vocabulary, k=1.0)
print(f"The previous words are {previous_tokens}, the suggestions are:")
display(tmp_suggest4)
previous_tokens = ["i", "want", "to", "go"]
tmp_suggest5 = get_suggestions(previous_tokens, n_gram_counts_list, vocabulary, k=1.0)
print(f"The previous words are {previous_tokens}, the suggestions are:")
display(tmp_suggest5)
previous_tokens = ["hey", "how", "are"]
tmp_suggest6 = get_suggestions(previous_tokens, n_gram_counts_list, vocabulary, k=1.0)
print(f"The previous words are {previous_tokens}, the suggestions are:")
display(tmp_suggest6)
previous_tokens = ["hey", "how", "are", "you"]
tmp_suggest7 = get_suggestions(previous_tokens, n_gram_counts_list, vocabulary, k=1.0)
print(f"The previous words are {previous_tokens}, the suggestions are:")
display(tmp_suggest7)
previous_tokens = ["hey", "how", "are", "you"]
tmp_suggest8 = get_suggestions(previous_tokens, n_gram_counts_list, vocabulary, k=1.0, start_with="d")
print(f"The previous words are {previous_tokens}, the suggestions are:")
display(tmp_suggest8)
###Output
The previous words are ['hey', 'how', 'are', 'you'], the suggestions are:
|
Notebooks/.ipynb_checkpoints/GithubIntroduction-checkpoint.ipynb | ###Markdown
A Very Quick Introduction to Git/Github for Julia UsersJulia's package system and Github are very closely intertwined:- Julia's package management system (METADATA) is a Github repository- The packages are hosted as Github repositories- Julia packages are normally referred to with the ending “.jl”- Repositories register to become part of the central package management by sending a pull request to METADATA.jl- The packages can be found / investigated at Github.com- Julia's error messages are hyperlinks to the page in GithubBecause of this, it's very useful for everyone using Julia to know a little bit about Git/Github. Git Basics- Git is a common Version Control System (VCS)- A project is a **repository** (repos)- After one makes changes to a project, **commit** the changes- Changes are **pulled** to the main repository hosted online- To download the code, you **clone** the repository- Instead of editing the main repository, one edits a **branch**- To get the changes of the main branch in yours, you **fetch**- One asks the owner of the repository to add their changes via a **pull request**- Stable versions are cut to **releases**- The major online server for git repositories is Github- Github is a free service- Anyone can get a Github account- The code is hosted online, free for everyone to view- Users can open **Issues** to ask for features and give bug reports to developers- Many projects are brought together into **organizations** (JuliaMath, JuliaDiffEq, JuliaStats, etc.) An example Github repository for a Julia package is is DifferentialEquations.jl: https://github.com/JuliaDiffEq/DifferentialEquations.jl Examining a Github RepositoryComponents:- Top Left: Username/Repository name- Top Right: The stars. Click this button to show support for the developer!- Issues Tab: Go here to file a bug report- Files: These are the source files in the repository Examining A Github RepositoryThe badges on a Github repository show you the current state of the repo. From left to right:- Gitter Badge: Click this to go to a chatroom and directly chat with the developers- CI Build Badges: This tell you whether the CI tests pass. Click on them to see what versions of Julia the package works for.- Coverage: This tells you the percentage of the code that the tests cover. If coverage is low, parts of the package may not work even if the CI tests pass.- Docs Badges: Click on these to go to the package documentation. Using Julia's Package Manager Adding a PackageJulia's package manager functions are mirror the Git functions. Julia's package system is similar to R/Python in that a large number of packages are freely available. You search for them in places like [Julia's Package Genie](http://genieframework.com/packages), or from the [Julia Package Listing](http://pkg.julialang.org/). Let's take a look at the [Plots.jl package by Tom Breloff](https://github.com/tbreloff/Plots.jl). To add a package, use `Pkg.add`
###Code
Pkg.update() # You may need to update your local packages first
Pkg.add("Plots")
###Output
_____no_output_____
###Markdown
This will install the package to your local system. However, this will only work for registered packages. To add a non-registered package, go to the Github repository to find the clone URL and use `Pkg.clone`. For example, to install the `ParameterizedFunctions` package, we can use:
###Code
Pkg.clone("https://github.com/JuliaDiffEq/ParameterizedFunctions.jl")
###Output
_____no_output_____
###Markdown
Importing a PackageTo use a package, you have to import the package. The `import` statement will import the package without exporting the functions to the namespace. (Note that the first time a package is run, it will precompile a lot of the functionality.) For example:
###Code
import Plots
Plots.plot(rand(4,4))
###Output
_____no_output_____
###Markdown
Exporting FunctionalityTo instead export the functions (of the developer's choosing) to the namespace, we can use the `using` statement. Since Plots.jl exports the `plot` command, we can then use it without reference to the package that it came from:
###Code
using Plots
plot(rand(4,4))
###Output
_____no_output_____
###Markdown
What really makes this possible in Julia but not something like Python is that namespace clashes are usually avoided by multiple dispatch. Most packages will define their own types in order to use dispatches, and so when they export the functionality, the methods are only for their own types and thus do not clash with other packages. Therefore it's common in Julia for concise syntax like `plot` to be part of packages, all without fear of clashing. Getting on the Latest VersionSince Julia is currently under lots of development, you may wish to check out newer versions. By default, `Pkg.add` installs the "latest release", meaning the latest tagged version. However, the main version shown in the Github repository is usually the "master" branch. It's good development practice that the latest release is kept "stable", while the "master" branch is kept "working", and development takes place in another branch (many times labelled "dev"). You can choose which branch your local repository takes from. For example, to check out the master branch, we can use:
###Code
Pkg.checkout("Plots")
###Output
_____no_output_____
###Markdown
This will usually give us pretty up-to-date features (if you are using an "unreleased version of Julia", like building from the source of the Julia nightly, you may need to check out master in order to get some packages working). However, to go to a specific branch we can give the branch as another argument:
###Code
Pkg.checkout("Plots","dev")
###Output
_____no_output_____ |
content/post/basic-stats/basic-stats.ipynb | ###Markdown
Basics - *Descriptive statistics* is concerned with describing data; *inferential statistics*, with making inferences about a population based on data from a sample. - *Stochastic* is a synonym for random. A stochastic process is a random process. The distinction between *stochastics* and *statistics* is that stochastic processes generate the data we analyse in statistics. Percentiles - Conceptually, the xth percentile is the value of a statistic that is larger than (or, alternatively, equal to or larger than) x percent of all values.- Because there is no agreement on which definition to use, and because it's not obvious how to handle rounding, a useful way to calculate percentiles is to use the below algorithm.- Calculating percentiles: 1. Given a number of observations *N*, calculate the rank corresponding to percentile *p* as *R(p) = (p / 100) * (N + 1)* 2. Define *IR* as the integer and *FR* as the fractional portion of the rank, and *S(R)* as the score associated with rank *R*. 3. Calculate the desired percentile as *FR x [S(IR + 1) - S(IR)] + S(IR)* using interpolation. - Example: calculate 25th percentile of {3, 5, 7, 8, 9, 11, 13, 15}. 1. *R = (25/100) * 9 = 2.25* 2. *IR = 2*, *S(2) = 5*, *S(3) = 7* 3. *0.25 * (7 - 5) + 5 = 5.5* Variables - Variables are properties of some object that can take different values, as opposed to constants.- Variable measurements fall into a number of fundamental categories (scales) that have certain properties. - Nominal scale: no order, distances, or ratios (e.g. {'blue', 'green', 'red'}) - Ordinal scale: order, but no distances or ratios (e.g. {'small', 'medium', 'large'}) - Interval scale: order and distances, but no ratios due to absence of inherent zero point, which refers to the absence of the thing being measured (e.g. temperatures) - Ratio scales: order, distances, and ratios (e.g. height)- The scale of measurement determines what statistics it makes sense to calculate (e.g. calculating the mean of a nominal scale makes no sense). Distributions My notes from working through section 2, data and sampling distributions, of [Practical statistics for data science](https://learning.oreilly.com/library/view/practical-statistics-for/9781492072935/), to revise concepts and get comfortable implementing them in Python. Sampling- We rely on a sample to learn about a larger population.- We thus need to make sure that the sampling procedure is free of bias, so that units in the sample are representative of those in the population.- While representativeness cannot be achieved perfectly, it's important to ensure that non-representativeness is due to random error and not due to systematic bias.- Random errors produce deviations that vary over repeated samples, while systematic bias persists. Such selection bias can lead to misleading and ephemeral conclusions.- Two basic sampling procedures are simple random sampling (randomly select $n$ units from a population of $N$) and stratified random sampling (randomly select $n_s$ from each stratum $S$ of a population of $N$).- The mean outcome of the sample is denoted $\bar{x}$, that of the population $\mu$.Selection bias- Using the data to answer many questions will eventually reveal something interesting by mere chance (if 20,000 people flip a coin 10 times, some will have 10 straight heads). 
This is sometimes called the Vast Search Effect.- Common types of selection bias in data science: - The vast search effect - Nonrandom sampling - Cherry-picking data - Selecting specific time-intervals - Stopping experiments prematurely- Ways to guard against selection bias: have one or many holdout datasets to confirm your results.- Regression to the mean results from a particular kind of selection bias in a setting where we measure outcomes repeatedly over time: when luck and skill combine to determine outcomes, winners of one period will be less lucky next period and perform closer to the mean performer.
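As a quick check on the percentile algorithm from the Basics section above, here is a minimal sketch (the helper name is my own, written for illustration) that reproduces the worked example:
```python
def percentile(scores, p):
    # Rank-and-interpolate algorithm: R(p) = (p/100) * (N + 1), then interpolate between S(IR) and S(IR + 1)
    scores = sorted(scores)
    rank = (p / 100) * (len(scores) + 1)
    ir = int(rank)                     # integer portion of the rank
    fr = rank - ir                     # fractional portion of the rank
    if ir == 0:                        # guard for ranks below the first score
        return scores[0]
    if ir >= len(scores):              # guard for ranks beyond the last score
        return scores[-1]
    s_ir, s_ir_next = scores[ir - 1], scores[ir]  # S(IR) and S(IR + 1); ranks are 1-based
    return fr * (s_ir_next - s_ir) + s_ir

print(percentile([3, 5, 7, 8, 9, 11, 13, 15], 25))  # 5.5, as in the worked example above
```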
###Code
import pandas as pd
import numpy as np
import seaborn as sns
from scipy.stats import norm
import matplotlib.pyplot as plt
mean, sd, N = 0, 1, 1_000_000
full_data = norm.rvs(mean, sd, N)
sample_data = pd.DataFrame({
'income': np.random.choice(full_data, 1000),
'type': 'Data'
})
mof1 = pd.DataFrame({
'income': [np.random.choice(full_data, 1).mean() for _ in range(1000)],
'type':'Mean of 1'
})
mof5 = pd.DataFrame({
'income': [np.random.choice(full_data, 5).mean() for _ in range(1000)],
'type':'Mean of 5'
})
mof20 = pd.DataFrame({
'income': [np.random.choice(full_data, 20).mean() for _ in range(1000)],
'type':'Mean of 20'
})
mof100 = pd.DataFrame({
'income': [np.random.choice(full_data, 100).mean() for _ in range(1000)],
'type':'Mean of 100'
})
results = pd.concat([sample_data, mof1, mof5, mof20, mof100])
g = sns.FacetGrid(results, col='type')
g.map(plt.hist, 'income', bins=40)
g.set_axis_labels('Income', 'Count')
g.set_titles('{col_name}');
###Output
_____no_output_____
###Markdown
Plots show that:- Data distribution has larger spread than sampling distributions (each data point is a special case of a sample with n = 1)- The spread of sampling distributions decreases with increasing sample size Degrees of freedom- The number of observations minus the number of parameters you had to estimate en route to calculate the desired statistic ([source](http://onlinestatbook.com/2/estimation/df.html)). If you calculate sample variance with an estimated mean rather than a known mean, you have to estimate the sample mean first and thus lose 1 degree of freedom. Hence, you'd divide the sum of squared deviations from the (estimated) mean by n-1 rather than n. Central limit theorem- The second point above is an instance of the central limit theorem, which states that means from multiple samples are normally distributed even if the underlying distribution is not normal, provided that the sample size is large enough.- More precisely: Suppose that we have a sequence of independent and identically distributed (iid) random variables $\{x_1, ..., x_n\}$ drawn from a distribution with expected value $\mu$ and finite variance given by $\sigma^2$, and we are interested in the mean value $\bar{x} = \frac{x_1 + ... + x_n}{n}$. By the law of large numbers, $\bar{x}$ converges to $\mu$. The central limit theorem describes the shape of the random variation of $\bar{x}$ around $\mu$ during this convergence. In particular, for large enough $n$, the distribution of $\bar{x}$ will be close to a normal distribution with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$.- This is useful because it means that irrespective of the underlying distribution (i.e. the distribution of the values in our sequence above), we can use the normal distribution and approximations to it (such as the t-distribution) to calculate sampling distributions when we do inference. Because of this, the CLT is at the heart of the theory of hypothesis testing and confidence intervals, and thus of much of classical statistics.- For experiments, this means that our estimated treatment effect is normally distributed, which is what allows us to draw inferences from our experimental setting to the population as a whole. The CLT is thus at the heart of the experimental approach.
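To make the degrees-of-freedom point concrete before the CLT demo below, a small simulation sketch: dividing by n underestimates the variance when the mean is estimated from the same sample, while dividing by n-1 corrects for the lost degree of freedom.
```python
import numpy as np

rng = np.random.default_rng(42)
samples = rng.normal(0, 2, size=(10_000, 10))  # 10,000 samples of size n=10; true variance is 4

print(f"Mean variance estimate dividing by n:   {samples.var(axis=1, ddof=0).mean():.3f}")
print(f"Mean variance estimate dividing by n-1: {samples.var(axis=1, ddof=1).mean():.3f}")
```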
###Code
# CLT demo
from scipy.stats import norm, gamma
import matplotlib.pyplot as plt
def means(n):
return [np.mean(norm.rvs(0, 2, 10)) for _ in range(n)]
plt.subplots(figsize=(10,10))
plt.subplot(441)
plt.hist(means(100), bins=30)
plt.subplot(442)
plt.hist(means(1000), bins=30)
plt.subplot(443)
plt.hist(means(10000), bins=30);
###Output
_____no_output_____
###Markdown
Standard error- The standard error is a measure of the variability of the sampling distribution. - It is related to the standard deviation of the observations, $\sigma$, and the sample size $n$ in the following way:$$se = \frac{\sigma}{\sqrt{n}}$$- The relationship between sample size and se is sometimes called the "Square-root of n rule", since reducing the $se$ by a factor of 2 requires an increase in the sample size by a factor of 4. Bootstrap- In practice, we often use the bootstrap to calculate standard errors of model parameters or statistics.- Conceptually, the bootstrap works as follows: 1) we draw an original sample and calculate our statistic, 2) we then create a blown-up version of that sample by duplicating it many times, 3) we then draw repeated samples from the large sample, recalculate our statistic, and calculate the standard deviation of these statistics to get the standard error.- To achieve this easily, we can skip step 2) by simply sampling with replacement from the original distribution in step 3).- The full procedure makes clear what the bootstrap results tell us, however: they tell us how lots of additional samples would behave if they were drawn from a population like our original sample.- Hence, if the original sample is not representative of the population of interest, then bootstrap results are not informative about that population either.- The bootstrap can also be used to improve the performance of classification or regression trees by fitting multiple trees on bootstrapped samples and then averaging their predictions. This is called "bagging", short for "bootstrap aggregating".
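A quick simulation sketch of the square-root-of-n rule, before the bootstrap example below: the standard deviation of sample means shrinks like $\sigma/\sqrt{n}$, so quadrupling the sample size roughly halves the standard error.
```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 5
for n in [25, 100, 400]:
    # 5,000 simulated samples of size n, each reduced to its mean
    sample_means = rng.normal(0, sigma, size=(5_000, n)).mean(axis=1)
    print(f"n={n:3d}: simulated se={sample_means.std():.3f}, sigma/sqrt(n)={sigma / np.sqrt(n):.3f}")
```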
###Code
# A simple bootstrap implementation
from sklearn.utils import resample
mean, sd, N = 0, 5, 1000
original_sample = norm.rvs(mean, sd, N)
results = []
for nrepeat in range(1000):
sample = resample(original_sample)
results.append(np.median(sample))
print('Bootstrap Statistics:')
print(f'Original: {np.median(original_sample)}')
print(f'Bias: {np.median(results) - np.median(original_sample)}')
print(f'Std. error: {np.std(results)}')
###Output
Bootstrap Statistics:
Original: 0.028785991557600685
Bias: -0.0017687396759709026
Std. error: 0.15409722327225703
###Markdown
Confidence intervals- A CI is another way to learn about the variability of a test statistic. - It can be calculated using the (standard) normal distribution or the t-distribution (if sample sizes are small).- But for data science purposes we can compute an x percent CI from the bootstrap, following this algorithm: 1) Draw a large number of bootstrap samples and calculate the statistic of interest, 2) Trim [(100-x)/2] percent of the bootstrap results on either end of the distribution, 3) the trim points are the end points of the CI. The normal distribution- Useful not mainly because data is often normally distributed, but because sampling distributions of statistics (as well as errors) often are.- But rely on the normality assumption only as a last resort, if using empirical distributions or the bootstrap is not available. Q-Q plots- Q-Q plots (for quantile-quantile plot) help us compare the quantiles in our dataset to the quantiles of a theoretical distribution to see whether our data follows this distribution (I'll refer to the normal distribution below to fix ideas).- In general, the x percent quantile is a point in the data such that x percent of the data fall below it (this point is also the xth percentile).- To create a Q-Q plot, we proceed as follows: First, we split the data into quantiles such that each data point represents its own quantile. Second, we split the normal distribution into an equal number of quantiles (for the normal distribution, quantiles are intervals of equal probability mass). Third, we mark the quantiles for the data on the y-axis and for the normal distribution on the x-axis. Finally, we use these points as coordinates for each quantile in the plot. (See [this](https://www.youtube.com/watch?v=okjYjClSjOg) helpful video for more details on how to construct Q-Q plots, and [this](https://towardsdatascience.com/explaining-probability-plots-9e5c5d304703) useful article for details on probability plots more generally.)
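A minimal sketch of the bootstrap percentile interval just described, reusing the same resampling approach as the earlier bootstrap example (the 90 percent level and the median as the statistic are arbitrary choices for illustration):
```python
import numpy as np
from scipy.stats import norm
from sklearn.utils import resample

original_sample = norm.rvs(0, 5, 1000)
boot_medians = [np.median(resample(original_sample)) for _ in range(1000)]

x = 90  # an x percent confidence interval
lower, upper = np.percentile(boot_medians, [(100 - x) / 2, 100 - (100 - x) / 2])
print(f"{x}% bootstrap CI for the median: [{lower:.3f}, {upper:.3f}]")
```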
###Code
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from statsmodels.api import ProbPlot
from scipy import stats
%config InlineBackend.figure_format ='retina'
sns.set_style('darkgrid')
sns.mpl.rcParams['figure.figsize'] = (10.0, 6.0)
# Comparing skew normal and standard normal
n = 10000
rv_std_normal = np.random.normal(size=n)
rv_skew_normal = stats.skewnorm.rvs(a=5, size=n)
fig, ax = plt.subplots(1, 3, figsize=(15,5))
ProbPlot(rv_std_normal).qqplot(line='s', ax=ax[0])
ax[0].set_title('Q-Q plot for std. normal - std. normal')
ProbPlot(rv_skew_normal).qqplot(line='s', ax=ax[1])
ax[1].set_title('Q-Q plot for skew normal - std. normal')
sns.histplot(rv_skew_normal, kde=False, label='Skew normal', ax=ax[2])
sns.histplot(rv_std_normal, kde=False, label='Std. normal', ax=ax[2])
ax[2].set_title('Histograms')
ax[2].legend();
###Output
/Users/fgu/miniconda3/envs/blog/lib/python3.9/site-packages/statsmodels/graphics/gofplots.py:993: UserWarning: marker is redundantly defined by the 'marker' keyword argument and the fmt string "bo" (-> marker='o'). The keyword argument will take precedence.
ax.plot(x, y, fmt, **plot_style)
/Users/fgu/miniconda3/envs/blog/lib/python3.9/site-packages/statsmodels/graphics/gofplots.py:993: UserWarning: marker is redundantly defined by the 'marker' keyword argument and the fmt string "bo" (-> marker='o'). The keyword argument will take precedence.
ax.plot(x, y, fmt, **plot_style)
###Markdown
As expected, data from a standard normal distribution fits almost perfectly onto standard normal quantiles, while data from our positively skewed distribution does not: its probability mass is concentrated at lower values, and it has a longer right tail with more extreme high values.
###Code
# Comparing Google stock returns to standard normal
# (left commented out: downloading the data requires a Tiingo API key in a .env file)
# import os
# import pandas_datareader as pdr
# from dotenv import load_dotenv
# from datetime import datetime
# load_dotenv()
# start = datetime(2019, 1, 1)
# end = datetime(2019, 12, 31)
# key = os.getenv('tiingo_api_key')
# goog = np.log(pdr.get_data_tiingo('GOOG', start, end, api_key=key)['close']).diff().dropna()
# fig, ax = plt.subplots(1, 2)
# ProbPlot(goog).qqplot(line='s', ax=ax[0])
# ax[0].set_title('Q-Q plot for Google returns - std. normal')
# sns.histplot(goog, stat='density', ax=ax[1]);
###Output
_____no_output_____
###Markdown
When run (the data download requires a Tiingo API key, so the cell above is left commented out), the comparison shows that Google's daily stock returns are not normally distributed: while the inner part of the distribution fits a normal distribution relatively well, the returns distribution has (very) fat tails.

Chi-Squared distribution
- Used to assess goodness of fit, e.g. when comparing observed counts in categories to the counts expected under some model (a short sketch follows after the Poisson example below).

F distribution
- Can be used to test whether the means of different treatment groups differ from the control condition.
- The F-statistic is calculated as the ratio of the variance between groups to the variance within groups (ANOVA); the sketch below also includes an example.
- The F distribution gives all values that would be produced if the between-group variance were zero (i.e. under the null model).
- The degrees of freedom are given by the number of groups we compare.

Poisson distribution
- Useful to model processes that randomly generate outcomes at a constant rate (e.g. processes like arrivals that vary over time, or numbers of defects or typos that vary over space).
- The parameter of the distribution is lambda, which is both the rate per unit of time and the variance.
- The Poisson and exponential distributions can be very useful when modelling, say, arrivals and waiting times. It's important, though, to remember the three key assumptions: 1) lambda remains constant across intervals, 2) events are independent, and 3) two events cannot occur at the same time.
- To account for 1), defining the intervals such that they are sufficiently homogeneous often helps.
###Code
# Comparing Poisson distributions
x = np.random.poisson(2, 1000000)
y = np.random.poisson(6, 1000000)
plt.hist(x, alpha=0.5, label='$\\lambda = 2$', bins=np.arange(min(x), max(x))-0.5)
plt.hist(y, alpha=0.5, label='$\\lambda = 6$', bins=np.arange(min(y), max(y))-0.5)
plt.legend();
###Output
_____no_output_____
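###Markdown
A short sketch of the chi-squared and F distributions in action, using `scipy.stats` (the category counts and group data below are made up purely for illustration):
###Code
# Chi-squared goodness of fit: do observed counts match the expected (uniform) counts?
from scipy import stats
observed = [18, 25, 21, 36]    # hypothetical counts in four categories
expected = [25, 25, 25, 25]    # expected counts under a uniform model
chi2_stat, chi2_p = stats.chisquare(observed, expected)
print(f'Chi-squared: {chi2_stat:.2f}, p-value: {chi2_p:.3f}')
# One-way ANOVA (F-test): do three hypothetical groups share the same mean?
group_a = np.random.normal(0.0, 1, 50)
group_b = np.random.normal(0.2, 1, 50)
group_c = np.random.normal(0.5, 1, 50)
f_stat, f_p = stats.f_oneway(group_a, group_b, group_c)
print(f'F-statistic: {f_stat:.2f}, p-value: {f_p:.3f}')
###Output
_____no_output_____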
###Markdown
Exponential distribution
- Takes the same parameter lambda as the Poisson distribution, and can be used to model the time between random events occurring at a constant rate lambda (i.e. the time/space difference between Poisson events).
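To illustrate the connection (a sketch with an arbitrary rate and window length): if events occur as a Poisson process, the gaps between consecutive events follow an exponential distribution with the same rate.
###Code
# Gaps between Poisson-process events are (approximately) exponential with rate lambda
lam, T = 2, 5000                                   # rate and length of the observation window
n_events = np.random.poisson(lam * T)              # total number of events in the window
event_times = np.sort(np.random.uniform(0, T, n_events))
gaps = np.diff(event_times)                        # waiting times between events
print(f'Mean gap: {gaps.mean():.3f} (theory: 1/lambda = {1 / lam:.3f})')
# Note: np.random.exponential is parameterised by the scale, i.e. 1/lambda
###Output
_____no_output_____
###Markdown
The histograms below compare exponential samples drawn for two different rates.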
###Code
# Comparing exponential distributions
# Note: np.random.exponential takes the scale (= 1/lambda), not the rate lambda
n = 100000
x = np.random.exponential(1 / 2, n)   # lambda = 2
y = np.random.exponential(1 / 6, n)   # lambda = 6
bins = np.linspace(0, 3, 90)
plt.hist(x, alpha=0.5, density=True, label='$\\lambda = 2$', bins=bins)
plt.hist(y, alpha=0.5, density=True, label='$\\lambda = 6$', bins=bins)
plt.legend();
###Output
_____no_output_____
###Markdown
Weibull distribution
- Used to model events for which the event rate changes over the length of the interval, and which thus violate the Poisson and exponential assumption of a constant rate.
- An example is mechanical failure, where the probability of failure increases as time goes by.
- The parameters of the distribution are $\eta$, the scale parameter, and $\beta$, the shape parameter ($\beta > 1$ indicates increasing probability of an event over time, $\beta < 1$ decreasing probability).
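A quick sketch of how the shape parameter changes the distribution (the parameter values below are arbitrary; `scipy.stats.weibull_min` uses `c` for the shape $\beta$ and `scale` for $\eta$):
###Code
# Comparing Weibull samples with decreasing (beta < 1) and increasing (beta > 1) event rates
eta = 1                                # scale parameter
for beta in [0.5, 1, 1.5, 5]:          # shape parameter
    sample = stats.weibull_min.rvs(c=beta, scale=eta, size=100000)
    sns.histplot(sample, stat='density', bins=np.linspace(0, 3, 60),
                 element='step', fill=False, label=f'$\\beta = {beta}$')
plt.xlim(0, 3)
plt.legend();
###Output
_____no_output_____
###Markdown
Plotting distributions in Seaborn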
###Code
# Generating random samples
n = 10000
rv_std_normal = np.random.normal(size=n)
rv_normal = np.random.normal(1, 2.5, n)
rv_skew_normal = stats.skewnorm.rvs(a=5, size=n)
# Drawing histogram, pdf, and cdf of std normal sample
x = np.linspace(min(rv_std_normal), max(rv_std_normal), 1000)
pdf = stats.norm.pdf(x)
cdf = stats.norm.cdf(x)
ax = sns.histplot(rv_std_normal, stat='density', label='Data')
ax.plot(x, pdf, lw=2, label='PDF')
ax.plot(x, cdf, lw=2, label='CDF')
ax.set_title('Standard normal distribution')
ax.legend();
# Compare three distributions (start a new figure so the two plots don't overlap)
plt.figure()
ax = sns.histplot(rv_std_normal, stat='density', label='Standard normal')
sns.histplot(rv_normal, stat='density', label='N(1, 2.5)', ax=ax)
sns.histplot(rv_skew_normal, stat='density', label='Skew normal, $\\alpha$=5', ax=ax)
ax.set_title('Comparison of different distributions')
ax.legend();
###Output
_____no_output_____ |