path | concatenated_notebook
---|---|
lecture/introduction/intro_python_I.ipynb | ###Markdown
Introduction V - Introduction to Python - I[Peer Herholz (he/him)](https://peerherholz.github.io/) Habilitation candidate - [Fiebach Lab](http://www.fiebachlab.org/), [Neurocognitive Psychology](https://www.psychologie.uni-frankfurt.de/49868684/Abteilungen) at [Goethe-University Frankfurt](https://www.goethe-university-frankfurt.de/en?locale=en) Research affiliate - [NeuroDataScience lab](https://neurodatascience.github.io/) at [MNI](https://www.mcgill.ca/neuro/)/[McGill](https://www.mcgill.ca/) Member - [BIDS](https://bids-specification.readthedocs.io/en/stable/), [ReproNim](https://www.repronim.org/), [Brainhack](https://brainhack.org/), [Neuromod](https://www.cneuromod.ca/), [OHBM SEA-SIG](https://ohbm-environment.org/), [UNIQUE](https://sites.google.com/view/unique-neuro-ai) @peerherholz Before we get started 1...- most of what you’ll see within this lecture was prepared by Ross Markello, Michael Notter and Peer Herholz and further adapted for this course by Peer Herholz - based on Tal Yarkoni's ["Introduction to Python" lecture at Neurohackademy 2019](https://neurohackademy.org/course/introduction-to-python-2/)- based on [IPython notebooks from J. R. Johansson](http://github.com/jrjohansson/scientific-python-lectures)- based on http://www.stavros.io/tutorials/python/ & http://www.swaroopch.com/notes/python- based on https://github.com/oesteban/biss2016 & https://github.com/jvns/pandas-cookbook Objectives 📍* learn basic and efficient usage of the python programming language * what is python & how to utilize it * building blocks of & operations in python What is Python?* Python is a programming language* Specifically, it's a **widely used/very flexible**, **high-level**, **general-purpose**, **dynamic** programming language* That's a mouthful! Let's explore each of these points in more detail... Widely-used* Python is the fastest-growing major programming language* Top 3 overall (with JavaScript, Java) High-levelPython features a high level of abstraction* Many operations that are explicit in lower-level languages (e.g., C/C++) are implicit in Python* E.g., memory allocation, garbage collection, etc.* Python lets you write code faster File reading in Java```javaimport java.io.BufferedReader;import java.io.FileReader;import java.io.IOException; public class ReadFile { public static void main(String[] args) throws IOException{ String fileContents = readEntireFile("./foo.txt"); } private static String readEntireFile(String filename) throws IOException { FileReader in = new FileReader(filename); StringBuilder contents = new StringBuilder(); char[] buffer = new char[4096]; int read = 0; do { contents.append(buffer, 0, read); read = in.read(buffer); } while (read >= 0); return contents.toString(); }}``` File-reading in Python```pythonopen(filename).read()``` General-purposeYou can do almost everything in Python* Comprehensive standard library* Enormous ecosystem of third-party packages* Widely used in many areas of software development (web, dev-ops, data science, etc.) DynamicCode is interpreted at run-time* No compilation process*; code is read line-by-line when executed* Eliminates delays between development and execution* The downside: poorer performance compared to compiled languages (Try typing `import antigravity` into a new cell and running it!) 
What we will do in this section of the course is a _short_ introduction to `Python` to help beginners to get familiar with this `programming language`.It is divided into the following chapters:- [Module](Module)- [Help and Descriptions](Help-and-Descriptions)- [Variables and types](Variables-and-types) - [Symbol names](Symbol-names) - [Assignment](Assignment) - [Fundamental types](Fundamental-types)- [Operators and comparisons](Operators-and-comparisons) - [Shortcut math operation and assignment](Shortcut-math-operation-and-assignment)- [Strings, List and dictionaries](Strings,-List-and-dictionaries) - [Strings](Strings) - [List](List) - [Tuples](Tuples) - [Dictionaries](Dictionaries)- [Indentation](Indentation)- [Control Flow](Control-Flow) - [Conditional statements: `if`, `elif`, `else`](Conditional-statements:-if,-elif,-else)- [Loops](Loops) - [`for` loops](for-loops) - [`break`, `continue` and `pass`](break,-continue-and-pass)- [Functions](Functions) - [Default argument and keyword arguments](Default-argument-and-keyword-arguments) - [`*args` and `*kwargs` parameters](*args-and-*kwargs-parameters) - [Unnamed functions: `lambda` function](Unnamed-functions:-lambda-function)- [Classes](Classes)- [Modules](Modules)- [Exceptions](Exceptions) Here's what we will focus on in the first block:- [Module](Module)- [Help and Descriptions](Help-and-Descriptions)- [Variables and types](Variables-and-types) - [Symbol names](Symbol-names) - [Assignment](Assignment) - [Fundamental types](Fundamental-types)- [Operators and comparisons](Operators-and-comparisons) - [Shortcut math operation and assignment](Shortcut-math-operation-and-assignment)- [Strings, List and dictionaries](Strings,-List-and-dictionaries) - [Strings](Strings) - [List](List) - [Tuples](Tuples) - [Dictionaries](Dictionaries) ModulesMost of the functionality in `Python` is provided by *modules*. To use a module in a Python program it first has to be imported. A module can be imported using the `import` statement. For example, to import the module `math`, which contains many standard mathematical functions, we can do:
###Code
import math
###Output
_____no_output_____
###Markdown
This imports the whole module and makes it available for use later in the program. For example, we can do:
###Code
import math
x = math.cos(2 * math.pi)
print(x)
###Output
1.0
###Markdown
Importing the whole module is often unnecessary and can lead to longer loading times or increased memory consumption. As an alternative to the previous method, we can also choose to import only a few selected functions from a module by explicitly listing which ones we want to import:
###Code
from math import cos, pi
x = cos(2 * pi)
print(x)
###Output
1.0
###Markdown
You can make use of `tab` again to get a list of `functions`/`classes`/etc. for a given `module`. Try it out by placing the cursor behind the `import` statement and pressing `tab`:
###Code
from math import
###Output
_____no_output_____
###Markdown
Similarly, you can also use the `help` function to find out more about a given `module`:
###Code
import math
help(math)
###Output
_____no_output_____
###Markdown
It is also possible to give an imported module or symbol your own access name with the `as` keyword:
###Code
import numpy as np
from math import pi as number_pi
x = np.rad2deg(number_pi)
print(x)
###Output
180.0
###Markdown
You can basically provide any name (given it follows `python`/`coding` conventions), but focusing on intelligibility won't be the worst idea:
###Code
import matplotlib as pineapple
pineapple.
###Output
_____no_output_____
###Markdown
Exercise 1.1Import the `max` function from `numpy` and find out what it does.
###Code
# write your solution in this code cell
from numpy import max
help(max)
###Output
_____no_output_____
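One caveat worth noting about this solution: `from numpy import max` makes the name `max` point to `numpy`'s function and thereby shadows Python's built-in `max` in the current namespace. A small sketch of an alternative that avoids this by keeping the module prefix:

```python
import numpy as np

help(np.max)         # same documentation, but the built-in max() stays untouched
print(max(2, 3, 1))  # the built-in still works as expected
```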
###Markdown
Exercise 1.2Import the `scipy` package, assign it the access name `middle_earth` and check its `functions`.
###Code
# write your solution in this code cell
import scipy as middle_earth
help(middle_earth)
###Output
Help on package scipy:
NAME
scipy
DESCRIPTION
SciPy: A scientific computing package for Python
================================================
Documentation is available in the docstrings and
online at https://docs.scipy.org.
Contents
--------
SciPy imports all the functions from the NumPy namespace, and in
addition provides:
Subpackages
-----------
Using any of these subpackages requires an explicit import. For example,
``import scipy.cluster``.
::
cluster --- Vector Quantization / Kmeans
fft --- Discrete Fourier transforms
fftpack --- Legacy discrete Fourier transforms
integrate --- Integration routines
interpolate --- Interpolation Tools
io --- Data input and output
linalg --- Linear algebra routines
linalg.blas --- Wrappers to BLAS library
linalg.lapack --- Wrappers to LAPACK library
misc --- Various utilities that don't have
another home.
ndimage --- N-D image package
odr --- Orthogonal Distance Regression
optimize --- Optimization Tools
signal --- Signal Processing Tools
signal.windows --- Window functions
sparse --- Sparse Matrices
sparse.linalg --- Sparse Linear Algebra
sparse.linalg.dsolve --- Linear Solvers
sparse.linalg.dsolve.umfpack --- :Interface to the UMFPACK library:
Conjugate Gradient Method (LOBPCG)
sparse.linalg.eigen --- Sparse Eigenvalue Solvers
sparse.linalg.eigen.lobpcg --- Locally Optimal Block Preconditioned
Conjugate Gradient Method (LOBPCG)
spatial --- Spatial data structures and algorithms
special --- Special functions
stats --- Statistical Functions
Utility tools
-------------
::
test --- Run scipy unittests
show_config --- Show scipy build configuration
show_numpy_config --- Show numpy build configuration
__version__ --- SciPy version string
__numpy_version__ --- Numpy version string
PACKAGE CONTENTS
__config__
_build_utils (package)
_distributor_init
_lib (package)
cluster (package)
conftest
constants (package)
fft (package)
fftpack (package)
integrate (package)
interpolate (package)
io (package)
linalg (package)
misc (package)
ndimage (package)
odr (package)
optimize (package)
setup
signal (package)
sparse (package)
spatial (package)
special (package)
stats (package)
version
DATA
test = <scipy._lib._testutils.PytestTester object>
VERSION
1.7.1
FILE
/Users/peerherholz/anaconda3/envs/pfp_2021/lib/python3.9/site-packages/scipy/__init__.py
###Markdown
Exercise 1.3What happens when we try to import a `module` that is either misspelled or doesn't exist in our `environment` or at all?1. `python` provides us a hint that the `module` name might be misspelled2. we'll get an `error` telling us that the `module` doesn't exist3. `python` automatically searches for the `module` and if it exists downloads/installs it
###Code
import welovethiscourse
###Output
_____no_output_____
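For reference, the failing `import` above can also be wrapped in a `try/except` block (the same pattern used for the `NameError` example later in this notebook), so that the rest of the notebook keeps running; this is just a sketch and the module name is of course made up:

```python
try:
    import welovethiscourse
except ModuleNotFoundError as err:
    print("ModuleNotFoundError:", err)
```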
###Markdown
Namespaces and imports* Python is **very** serious about maintaining orderly `namespaces`* If you want to use some code outside the current scope, you need to explicitly "`import`" it* Python's import system often annoys beginners, but it substantially increases `code` clarity * Almost completely eliminates naming conflicts and confusion Help and DescriptionsUsing the function `help` we can get a description of almost all functions.
###Code
help(math.log)
math.log(10)
math.log(10, 2)
###Output
_____no_output_____
###Markdown
Variables and data types* in programming `variables` are things that store `values`* in `Python`, we declare a `variable` by **assigning** it a `value` with the `=` sign * `name = value` * code `variables` **!=** math variables * in mathematics `=` refers to equality (statement of truth), e.g. `y = 10x + 2` * in coding `=` refers to assignments, e.g. `x = x + 1` * Variables are pointers, not data stores! * `Python` supports a variety of `data types` and `structures`: * `booleans` * `numbers` (`ints`, `floats`, etc.) * `strings` * `lists` * `dictionaries` * many others!* We don't specify a variable's type at assignment Variables and types Symbol names Variable names in Python can contain alphanumerical characters `a-z`, `A-Z`, `0-9` and some special characters such as `_`. Normal variable names must start with a letter. By convention, variable names start with a lower-case letter, and Class names start with a capital letter. In addition, there are a number of Python keywords that cannot be used as variable names. In Python 3 these keywords are: False, None, True, and, as, assert, async, await, break, class, continue, def, del, elif, else, except, finally, for, from, global, if, import, in, is, lambda, nonlocal, not, or, pass, raise, return, try, while, with, yield Assignment(Not your homework assignment but the operator in `python`.)The assignment operator in `Python` is `=`. `Python` is a `dynamically typed language`, so we do not need to specify the type of a `variable` when we create one.`Assigning` a `value` to a new `variable` _creates_ the `variable`:
###Code
# variable assignment
x = 1.0
###Output
_____no_output_____
###Markdown
Again, this does not mean that `x` equals `1` but that the `variable` `x` has the `value` `1`. Thus, our `variable` `x` is _stored_ in the respective `namespace`:
###Code
x
###Output
_____no_output_____
###Markdown
This means that we can directly utilize the `value` of our `variable`:
###Code
x + 3
###Output
_____no_output_____
###Markdown
Although not explicitly specified, a `variable` does have a `type` associated with it. The `type` is _derived_ from the `value` it was `assigned`.
###Code
type(x)
###Output
_____no_output_____
###Markdown
If we `assign` a new `value` to a `variable`, its `type` can change.
###Code
x = 1
type(x)
###Output
_____no_output_____
###Markdown
This outlines one further _very important_ characteristic of `python` (and many other programming languages): `variables` can be directly overwritten by `assigning` them a new `value`. We don't get an error like "This `namespace` is already taken." Thus, always remember/keep track of what `namespaces` were already used to avoid unintentional deletions/errors (reproducibility/replicability much?).
###Code
ring_bearer = 'Bilbo'
ring_bearer
ring_bearer = 'Frodo'
ring_bearer
###Output
_____no_output_____
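Related to the point made earlier that `variables` are pointers rather than data stores: two names can refer to the very same object, so changing that object through one name is visible through the other. A small sketch with made-up names:

```python
fellowship = ['Frodo', 'Sam']
company = fellowship           # both names point to the same list object
company.append('Gandalf')
print(fellowship)              # ['Frodo', 'Sam', 'Gandalf']
```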
###Markdown
If we try to use a variable that has not yet been defined we get a `NameError` (Note for later sessions that we will use `try/except` blocks in the notebooks to handle the exception, so the notebook doesn't stop. The code below will try to execute the `print` function and if a `NameError` occurs the error message will be printed. Otherwise, an error will be raised. You will learn more about exception handling later.):
###Code
try:
print(Peer)
except(NameError) as err:
print("NameError", err)
else:
raise
###Output
NameError name 'Peer' is not defined
###Markdown
Variable names:* Can include `letters` (A-Z), `digits` (0-9), and `underscores` ( _ )* Cannot start with a `digit`* Are **case sensitive** (questions: where did "lower/upper case" originate?)This means that, for example:* `shire0` is a valid variable name, whereas `0shire` is not* `shire` and `Shire` are different variables Exercise 2.1Create the following `variables` `n_elves`, `n_dwarfs`, `n_humans` with the respective values `3`, `7.0` and `nine`.
###Code
# write your solution here
n_elves = 3
n_dwarfs = 7.0
n_humans = "nine"
###Output
_____no_output_____
###Markdown
Exercise 2.2What's the output of `n_elves + n_dwarfs`?1. `n_elves + n_dwarfs`2. 103. 10.0
###Code
n_elves + n_dwarfs
###Output
_____no_output_____
###Markdown
Exercise 2.3Consider the following lines of code. `ring_bearer = 'Gollum'` `ring_bearer` `ring_bearer = 'Bilbo'` `ring_bearer` What is the final output?1. `'Bilbo'`2. `'Gollum'`3. neither, the variable got deleted
###Code
ring_bearer = 'Gollum'
ring_bearer
ring_bearer = 'Bilbo'
ring_bearer
###Output
_____no_output_____
###Markdown
Fundamental types & data structures* Most code requires more _complex structures_ built out of _basic data `types`_* `data type` refers to the kind of `value` that is `assigned` to a `variable` * `Python` provides built-in support for many common structures * Many additional structures can be found in the [collections](https://docs.python.org/3/library/collections.html) module Most of the time you'll encounter the following `data types`* `integers` (e.g. `1`, `42`, `180`)* `floating-point numbers` (e.g. `1.0`, `42.42`, `180.90`)* `strings` (e.g. `"Rivendell"`, `"Weathertop"`)* `Boolean` (`True`, `False`)If you're unsure about the `data type` of a given `variable`, you can always use the `type()` command. IntegersLet's check out the different `data types` in more detail, starting with `integers`. `Integers` are _whole numbers_ that can be _signed_ (e.g. `1`, `42`, `180`, `-1`, `-42`, `-180`).
###Code
x = 1
type(x)
n_nazgul = 9
type(n_nazgul)
remaining_rings = -1
type(remaining_rings)
###Output
_____no_output_____
###Markdown
Floating-point numbersSo how do `floating-point numbers` differ? `Floating-point numbers` are _decimal-point numbers_ that can be _signed_ (e.g. `1.0`, `42.42`, `180.90`, `-1.0`, `-42.42`, `-180.90`).
###Code
x_float = 1.0
type(x_float)
n_nazgul_float = 9.0
type(n_nazgul_float)
remaining_rings_float = -1.0
type(remaining_rings_float)
###Output
_____no_output_____
###Markdown
StringsNext up: `strings`. `Strings` are basically `text elements`, from `letters` to `words` to `sentences` all can be/are `strings` in `python`. In order to define a `string`, `Python` needs **quotation marks**, more precisely `strings` start and end with quotation marks, e.g. `"Rivendell"`. You can choose between `"` and `'` as both will work (NB: `python` will put `'` around `strings` even if you specified `"`). However, it is recommended to decide on one and be consistent.
###Code
location = "Weathertop"
type(location)
abbreviation = 'LOTR'
type(abbreviation)
book_one = "The fellowship of the ring"
type(book_one)
###Output
_____no_output_____
###Markdown
BooleansHow about some `Boolean`s? At this point it gets a bit more "abstract". While there are many possible `numbers` and `strings`, a Boolean can only have one of two `values`: `True` or `False`. That is, a `Boolean` says something about whether something _is the case or not_. It's easier to understand with some examples. First try the `type()` function with a `Boolean` as an argument.
###Code
b1 = True
type(b1)
b2 = False
type(b2)
lotr_is_awesome = True
type(lotr_is_awesome)
###Output
_____no_output_____
###Markdown
Interestingly, `True` and `False` also have `numeric values`! `True` has a value of `1` and `False` has a value of `0`.
###Code
True + True
False + False
###Output
_____no_output_____
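Because `True` and `False` behave like `1` and `0`, `Booleans` can, for example, be summed to count how often a condition holds. A quick sketch:

```python
answers = [True, False, True, True]
print(sum(answers))  # 3 -> number of True values
```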
###Markdown
Converting data typesAs mentioned before, the `data type` is not set when `assigning` a `value` to a `variable` but determined based on its properties. Additionally, the `data type` of a given `value` can also be changed via a set of functions.- `int()` -> convert the `value` of a `variable` to an `integer`- `float()` -> convert the `value` of a `variable` to a `floating-point number`- `str()` -> convert the `value` of a `variable` to a `string`- `bool()` -> convert the `value` of a `variable` to a `Boolean`
###Code
int("4")
float(3)
str(2)
bool(1)
###Output
_____no_output_____
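Note that not every conversion is possible: if a `value` cannot be interpreted in the target `data type`, `python` raises a `ValueError`. A small sketch, wrapped in `try/except` so the notebook keeps running:

```python
try:
    int("four")              # a word cannot be converted to an integer
except ValueError as err:
    print("ValueError:", err)

print(int(float("4.0")))     # going via float works for decimal strings
```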
###Markdown
Exercise 3.1Define the following `variables` with the respective `values` and `data types`: `fellowship_n_humans` with a `value` of two as a `float`, `fellowship_n_hobbits` with a `value` of four as a `string` and `fellowship_n_elves` with a value of one as an `integer`.
###Code
# write your solution here
fellowship_n_humans = 2.0
fellowship_n_hobbits = 'four'
fellowship_n_elves = 1
###Output
_____no_output_____
###Markdown
Exercise 3.2What outcome would you expect based on the following lines of code?1. `True - False`2. `type(True)` 1. `1`2. `bool` Exercise 3.3Define two `variables`, `fellowship_n_dwarfs` with a `value` of one as a `string` and `fellowship_n_wizards` with a `value` of one as a `float`. Subsequently, change the `data type` of `fellowship_n_dwarfs` to `integer` and the `data type` of `fellowship_n_wizards` to `string`.
###Code
# write your solution here
fellowship_n_dwarfs = '1'          # value of one as a string
fellowship_n_wizards = 1.0         # value of one as a float
int(fellowship_n_dwarfs)           # converted to an integer (re-assign to keep the change)
str(fellowship_n_wizards)          # converted to a string (re-assign to keep the change)
###Output
_____no_output_____ |
Coursera/Intro to TensorFlow/Week-2/Example/b_estimator.ipynb | ###Markdown
2b. Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Train and eval input functions to read from Pandas Dataframe
###Code
def make_train_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000
)
def make_eval_input_fn(df):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
shuffle = False,
queue_capacity = 1000
)
###Output
_____no_output_____
###Markdown
Our input function for predictions is the same except we don't provide a label
###Code
def make_prediction_input_fn(df):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = None,
batch_size = 128,
shuffle = False,
queue_capacity = 1000
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 10))
###Output
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_steps': None, '_evaluation_master': '', '_keep_checkpoint_every_n_hours': 10000, '_is_chief': True, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fbf55a983c8>, '_master': '', '_global_id_in_cluster': 0, '_num_ps_replicas': 0, '_task_type': 'worker', '_model_dir': 'taxi_trained', '_num_worker_replicas': 1, '_save_checkpoints_secs': 600, '_tf_random_seed': None, '_log_step_count_steps': 100, '_save_summary_steps': 100, '_task_id': 0, '_train_distribute': None, '_session_config': None, '_keep_checkpoint_max': 5}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 23227.691, step = 1
INFO:tensorflow:global_step/sec: 230.586
INFO:tensorflow:loss = 8995.027, step = 101 (0.438 sec)
INFO:tensorflow:global_step/sec: 279.531
INFO:tensorflow:loss = 9363.185, step = 201 (0.357 sec)
INFO:tensorflow:global_step/sec: 258.107
INFO:tensorflow:loss = 10644.762, step = 301 (0.388 sec)
INFO:tensorflow:global_step/sec: 280.008
INFO:tensorflow:loss = 5163.014, step = 401 (0.357 sec)
INFO:tensorflow:global_step/sec: 278.033
INFO:tensorflow:loss = 7394.787, step = 501 (0.360 sec)
INFO:tensorflow:global_step/sec: 252.291
INFO:tensorflow:loss = 10883.856, step = 601 (0.396 sec)
INFO:tensorflow:Saving checkpoints for 608 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 112.020546.
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, df):
metrics = model.evaluate(input_fn = make_eval_input_fn(df))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
print_rmse(model, df_valid)
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-11-21-03:41:27
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-608
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2018-11-21-03:41:27
INFO:tensorflow:Saving dict for global step 608: average_loss = 109.164764, global_step = 608, loss = 12982.81
RMSE on dataset = 10.44819450378418
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
predictions = model.predict(input_fn = make_prediction_input_fn(df_test))
for items in predictions:
print(items)
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-608
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
{'predictions': array([11.043583], dtype=float32)}
{'predictions': array([11.041], dtype=float32)}
{'predictions': array([11.041875], dtype=float32)}
{'predictions': array([11.039393], dtype=float32)}
{'predictions': array([11.043319], dtype=float32)}
{'predictions': array([11.043096], dtype=float32)}
{'predictions': array([11.0416565], dtype=float32)}
{'predictions': array([11.041688], dtype=float32)}
{'predictions': array([11.043474], dtype=float32)}
{'predictions': array([11.041298], dtype=float32)}
{'predictions': array([11.043584], dtype=float32)}
{'predictions': array([11.043752], dtype=float32)}
{'predictions': array([11.036937], dtype=float32)}
{'predictions': array([11.040911], dtype=float32)}
{'predictions': array([11.098388], dtype=float32)}
{'predictions': array([11.041732], dtype=float32)}
{'predictions': array([11.042587], dtype=float32)}
{'predictions': array([11.099429], dtype=float32)}
{'predictions': array([11.042919], dtype=float32)}
{'predictions': array([11.040049], dtype=float32)}
{'predictions': array([11.04131], dtype=float32)}
{'predictions': array([11.037168], dtype=float32)}
{'predictions': array([11.041463], dtype=float32)}
{'predictions': array([11.100605], dtype=float32)}
{'predictions': array([11.044191], dtype=float32)}
{'predictions': array([11.033843], dtype=float32)}
{'predictions': array([11.151931], dtype=float32)}
{'predictions': array([11.325656], dtype=float32)}
{'predictions': array([11.04468], dtype=float32)}
{'predictions': array([11.045712], dtype=float32)}
{'predictions': array([11.269823], dtype=float32)}
{'predictions': array([11.040107], dtype=float32)}
{'predictions': array([11.040774], dtype=float32)}
{'predictions': array([11.04356], dtype=float32)}
{'predictions': array([11.153978], dtype=float32)}
{'predictions': array([11.042667], dtype=float32)}
{'predictions': array([11.040906], dtype=float32)}
{'predictions': array([11.037081], dtype=float32)}
{'predictions': array([11.210749], dtype=float32)}
{'predictions': array([11.037425], dtype=float32)}
{'predictions': array([11.042201], dtype=float32)}
{'predictions': array([11.040187], dtype=float32)}
{'predictions': array([11.039892], dtype=float32)}
{'predictions': array([11.040304], dtype=float32)}
{'predictions': array([11.097316], dtype=float32)}
{'predictions': array([11.042182], dtype=float32)}
{'predictions': array([11.095655], dtype=float32)}
{'predictions': array([11.266023], dtype=float32)}
{'predictions': array([11.042469], dtype=float32)}
{'predictions': array([11.040689], dtype=float32)}
{'predictions': array([11.044618], dtype=float32)}
{'predictions': array([11.2669], dtype=float32)}
{'predictions': array([11.041177], dtype=float32)}
{'predictions': array([11.04227], dtype=float32)}
{'predictions': array([11.040629], dtype=float32)}
{'predictions': array([11.041837], dtype=float32)}
{'predictions': array([11.040626], dtype=float32)}
{'predictions': array([11.270249], dtype=float32)}
{'predictions': array([11.041254], dtype=float32)}
{'predictions': array([11.094373], dtype=float32)}
{'predictions': array([11.043522], dtype=float32)}
{'predictions': array([11.156254], dtype=float32)}
{'predictions': array([11.040959], dtype=float32)}
{'predictions': array([11.042159], dtype=float32)}
{'predictions': array([11.153913], dtype=float32)}
{'predictions': array([11.04108], dtype=float32)}
{'predictions': array([11.042675], dtype=float32)}
{'predictions': array([11.044163], dtype=float32)}
{'predictions': array([11.041129], dtype=float32)}
{'predictions': array([11.041718], dtype=float32)}
{'predictions': array([11.036959], dtype=float32)}
{'predictions': array([11.042114], dtype=float32)}
{'predictions': array([11.098], dtype=float32)}
{'predictions': array([11.041291], dtype=float32)}
{'predictions': array([11.04135], dtype=float32)}
{'predictions': array([11.041912], dtype=float32)}
{'predictions': array([11.044057], dtype=float32)}
{'predictions': array([11.041034], dtype=float32)}
{'predictions': array([11.270366], dtype=float32)}
{'predictions': array([11.097284], dtype=float32)}
{'predictions': array([11.042729], dtype=float32)}
{'predictions': array([11.041282], dtype=float32)}
{'predictions': array([11.041289], dtype=float32)}
{'predictions': array([11.093856], dtype=float32)}
{'predictions': array([11.03641], dtype=float32)}
{'predictions': array([11.154336], dtype=float32)}
{'predictions': array([11.267577], dtype=float32)}
{'predictions': array([11.268383], dtype=float32)}
{'predictions': array([11.043536], dtype=float32)}
{'predictions': array([11.042795], dtype=float32)}
{'predictions': array([11.038993], dtype=float32)}
{'predictions': array([11.042381], dtype=float32)}
{'predictions': array([11.251331], dtype=float32)}
{'predictions': array([11.09615], dtype=float32)}
{'predictions': array([11.035989], dtype=float32)}
{'predictions': array([11.041703], dtype=float32)}
{'predictions': array([11.042155], dtype=float32)}
{'predictions': array([11.267194], dtype=float32)}
{'predictions': array([11.041633], dtype=float32)}
{'predictions': array([11.049363], dtype=float32)}
{'predictions': array([11.04054], dtype=float32)}
{'predictions': array([11.098466], dtype=float32)}
{'predictions': array([11.038484], dtype=float32)}
{'predictions': array([11.03915], dtype=float32)}
{'predictions': array([11.037566], dtype=float32)}
{'predictions': array([11.042184], dtype=float32)}
{'predictions': array([11.268783], dtype=float32)}
{'predictions': array([11.039331], dtype=float32)}
{'predictions': array([11.1555395], dtype=float32)}
{'predictions': array([11.043388], dtype=float32)}
{'predictions': array([11.041855], dtype=float32)}
{'predictions': array([11.041406], dtype=float32)}
{'predictions': array([11.042376], dtype=float32)}
{'predictions': array([11.043866], dtype=float32)}
{'predictions': array([11.043215], dtype=float32)}
{'predictions': array([11.040164], dtype=float32)}
{'predictions': array([11.041501], dtype=float32)}
{'predictions': array([11.04311], dtype=float32)}
{'predictions': array([11.0432], dtype=float32)}
{'predictions': array([11.040465], dtype=float32)}
{'predictions': array([11.042335], dtype=float32)}
{'predictions': array([11.09958], dtype=float32)}
{'predictions': array([11.154657], dtype=float32)}
{'predictions': array([11.042291], dtype=float32)}
{'predictions': array([11.041546], dtype=float32)}
{'predictions': array([11.041478], dtype=float32)}
{'predictions': array([11.040216], dtype=float32)}
{'predictions': array([11.041989], dtype=float32)}
{'predictions': array([11.043239], dtype=float32)}
{'predictions': array([11.039685], dtype=float32)}
{'predictions': array([11.0423565], dtype=float32)}
{'predictions': array([11.039973], dtype=float32)}
{'predictions': array([11.040326], dtype=float32)}
{'predictions': array([11.042178], dtype=float32)}
{'predictions': array([11.040891], dtype=float32)}
{'predictions': array([11.100114], dtype=float32)}
{'predictions': array([11.038007], dtype=float32)}
{'predictions': array([11.023517], dtype=float32)}
{'predictions': array([11.043558], dtype=float32)}
{'predictions': array([11.042251], dtype=float32)}
{'predictions': array([11.041613], dtype=float32)}
{'predictions': array([11.326094], dtype=float32)}
{'predictions': array([11.041692], dtype=float32)}
{'predictions': array([11.04058], dtype=float32)}
{'predictions': array([11.154365], dtype=float32)}
{'predictions': array([11.0423], dtype=float32)}
{'predictions': array([11.095227], dtype=float32)}
{'predictions': array([11.042067], dtype=float32)}
{'predictions': array([11.041041], dtype=float32)}
{'predictions': array([11.099038], dtype=float32)}
{'predictions': array([11.044602], dtype=float32)}
{'predictions': array([11.155069], dtype=float32)}
{'predictions': array([11.04067], dtype=float32)}
{'predictions': array([11.042636], dtype=float32)}
{'predictions': array([11.097184], dtype=float32)}
{'predictions': array([11.04221], dtype=float32)}
{'predictions': array([11.041746], dtype=float32)}
{'predictions': array([11.042782], dtype=float32)}
{'predictions': array([11.268113], dtype=float32)}
{'predictions': array([11.0409775], dtype=float32)}
{'predictions': array([11.040616], dtype=float32)}
{'predictions': array([11.09759], dtype=float32)}
{'predictions': array([11.0425625], dtype=float32)}
{'predictions': array([11.042155], dtype=float32)}
{'predictions': array([11.04171], dtype=float32)}
{'predictions': array([11.041484], dtype=float32)}
{'predictions': array([11.041098], dtype=float32)}
{'predictions': array([11.043688], dtype=float32)}
{'predictions': array([11.099511], dtype=float32)}
{'predictions': array([11.040143], dtype=float32)}
{'predictions': array([11.269108], dtype=float32)}
{'predictions': array([11.099256], dtype=float32)}
{'predictions': array([11.04424], dtype=float32)}
{'predictions': array([11.040928], dtype=float32)}
{'predictions': array([11.038647], dtype=float32)}
{'predictions': array([11.04202], dtype=float32)}
{'predictions': array([11.098649], dtype=float32)}
{'predictions': array([11.041494], dtype=float32)}
{'predictions': array([11.156475], dtype=float32)}
{'predictions': array([11.041796], dtype=float32)}
{'predictions': array([11.044157], dtype=float32)}
{'predictions': array([11.041313], dtype=float32)}
{'predictions': array([11.04304], dtype=float32)}
{'predictions': array([11.044129], dtype=float32)}
{'predictions': array([11.157306], dtype=float32)}
{'predictions': array([11.041753], dtype=float32)}
{'predictions': array([11.041914], dtype=float32)}
{'predictions': array([11.098388], dtype=float32)}
{'predictions': array([11.1455555], dtype=float32)}
{'predictions': array([11.043727], dtype=float32)}
{'predictions': array([11.324195], dtype=float32)}
{'predictions': array([11.041725], dtype=float32)}
{'predictions': array([11.042469], dtype=float32)}
{'predictions': array([11.043445], dtype=float32)}
{'predictions': array([11.039519], dtype=float32)}
{'predictions': array([11.036065], dtype=float32)}
{'predictions': array([11.327052], dtype=float32)}
{'predictions': array([11.040879], dtype=float32)}
{'predictions': array([11.041282], dtype=float32)}
{'predictions': array([11.043299], dtype=float32)}
{'predictions': array([11.038628], dtype=float32)}
{'predictions': array([11.035419], dtype=float32)}
{'predictions': array([11.039596], dtype=float32)}
{'predictions': array([11.042705], dtype=float32)}
{'predictions': array([11.041706], dtype=float32)}
{'predictions': array([11.099128], dtype=float32)}
{'predictions': array([11.04482], dtype=float32)}
{'predictions': array([11.041065], dtype=float32)}
{'predictions': array([11.0440645], dtype=float32)}
{'predictions': array([11.0415745], dtype=float32)}
{'predictions': array([11.2684555], dtype=float32)}
{'predictions': array([11.041735], dtype=float32)}
{'predictions': array([11.041736], dtype=float32)}
{'predictions': array([11.041774], dtype=float32)}
{'predictions': array([11.037273], dtype=float32)}
{'predictions': array([11.042819], dtype=float32)}
{'predictions': array([11.042189], dtype=float32)}
{'predictions': array([11.042265], dtype=float32)}
{'predictions': array([11.041152], dtype=float32)}
{'predictions': array([11.040235], dtype=float32)}
{'predictions': array([11.040625], dtype=float32)}
{'predictions': array([11.042375], dtype=float32)}
{'predictions': array([11.042636], dtype=float32)}
{'predictions': array([11.043384], dtype=float32)}
{'predictions': array([11.040967], dtype=float32)}
{'predictions': array([11.042022], dtype=float32)}
{'predictions': array([11.044832], dtype=float32)}
{'predictions': array([11.055435], dtype=float32)}
{'predictions': array([11.09761], dtype=float32)}
{'predictions': array([11.266256], dtype=float32)}
{'predictions': array([11.040114], dtype=float32)}
{'predictions': array([11.042281], dtype=float32)}
{'predictions': array([11.269469], dtype=float32)}
{'predictions': array([11.040534], dtype=float32)}
{'predictions': array([11.04277], dtype=float32)}
{'predictions': array([11.098203], dtype=float32)}
{'predictions': array([11.098971], dtype=float32)}
{'predictions': array([11.153664], dtype=float32)}
{'predictions': array([11.041707], dtype=float32)}
{'predictions': array([11.26889], dtype=float32)}
{'predictions': array([11.040719], dtype=float32)}
{'predictions': array([11.154336], dtype=float32)}
{'predictions': array([11.036353], dtype=float32)}
{'predictions': array([11.033702], dtype=float32)}
{'predictions': array([11.039275], dtype=float32)}
{'predictions': array([11.040946], dtype=float32)}
{'predictions': array([11.0233345], dtype=float32)}
{'predictions': array([11.041219], dtype=float32)}
{'predictions': array([11.026675], dtype=float32)}
{'predictions': array([11.097633], dtype=float32)}
{'predictions': array([11.043438], dtype=float32)}
{'predictions': array([11.042915], dtype=float32)}
{'predictions': array([11.09916], dtype=float32)}
{'predictions': array([11.093188], dtype=float32)}
{'predictions': array([11.100141], dtype=float32)}
{'predictions': array([11.04003], dtype=float32)}
{'predictions': array([11.037086], dtype=float32)}
{'predictions': array([11.155109], dtype=float32)}
{'predictions': array([11.03974], dtype=float32)}
{'predictions': array([11.043239], dtype=float32)}
{'predictions': array([11.042431], dtype=float32)}
{'predictions': array([11.099305], dtype=float32)}
{'predictions': array([11.154876], dtype=float32)}
{'predictions': array([11.041865], dtype=float32)}
{'predictions': array([11.212911], dtype=float32)}
{'predictions': array([11.096779], dtype=float32)}
{'predictions': array([11.042095], dtype=float32)}
{'predictions': array([11.03896], dtype=float32)}
{'predictions': array([11.027762], dtype=float32)}
{'predictions': array([11.040745], dtype=float32)}
{'predictions': array([11.09949], dtype=float32)}
{'predictions': array([11.042129], dtype=float32)}
{'predictions': array([11.266614], dtype=float32)}
{'predictions': array([11.042362], dtype=float32)}
{'predictions': array([11.043208], dtype=float32)}
{'predictions': array([11.043883], dtype=float32)}
{'predictions': array([11.041439], dtype=float32)}
{'predictions': array([11.038167], dtype=float32)}
{'predictions': array([11.041547], dtype=float32)}
{'predictions': array([11.041623], dtype=float32)}
{'predictions': array([11.157918], dtype=float32)}
{'predictions': array([11.036847], dtype=float32)}
{'predictions': array([11.212773], dtype=float32)}
{'predictions': array([11.040875], dtype=float32)}
{'predictions': array([11.043244], dtype=float32)}
{'predictions': array([11.039541], dtype=float32)}
{'predictions': array([11.098044], dtype=float32)}
{'predictions': array([11.041725], dtype=float32)}
{'predictions': array([11.32348], dtype=float32)}
{'predictions': array([11.0412], dtype=float32)}
{'predictions': array([11.041936], dtype=float32)}
{'predictions': array([11.040797], dtype=float32)}
{'predictions': array([11.098888], dtype=float32)}
{'predictions': array([11.045219], dtype=float32)}
{'predictions': array([11.101174], dtype=float32)}
{'predictions': array([11.098267], dtype=float32)}
{'predictions': array([11.266715], dtype=float32)}
{'predictions': array([11.140086], dtype=float32)}
{'predictions': array([11.042646], dtype=float32)}
{'predictions': array([11.211947], dtype=float32)}
{'predictions': array([11.0987], dtype=float32)}
{'predictions': array([11.098557], dtype=float32)}
{'predictions': array([11.039739], dtype=float32)}
{'predictions': array([11.043261], dtype=float32)}
{'predictions': array([11.042076], dtype=float32)}
{'predictions': array([11.155607], dtype=float32)}
{'predictions': array([11.09966], dtype=float32)}
{'predictions': array([11.027762], dtype=float32)}
{'predictions': array([11.04462], dtype=float32)}
{'predictions': array([11.041652], dtype=float32)}
{'predictions': array([11.040624], dtype=float32)}
{'predictions': array([11.043413], dtype=float32)}
{'predictions': array([11.04267], dtype=float32)}
{'predictions': array([11.098037], dtype=float32)}
{'predictions': array([11.154424], dtype=float32)}
{'predictions': array([11.041417], dtype=float32)}
{'predictions': array([11.0415], dtype=float32)}
{'predictions': array([11.042503], dtype=float32)}
{'predictions': array([11.042451], dtype=float32)}
{'predictions': array([11.155456], dtype=float32)}
{'predictions': array([11.03874], dtype=float32)}
{'predictions': array([11.042431], dtype=float32)}
{'predictions': array([11.155425], dtype=float32)}
{'predictions': array([11.041129], dtype=float32)}
{'predictions': array([11.042279], dtype=float32)}
{'predictions': array([11.14149], dtype=float32)}
{'predictions': array([11.263701], dtype=float32)}
{'predictions': array([10.983853], dtype=float32)}
{'predictions': array([11.027713], dtype=float32)}
{'predictions': array([11.03692], dtype=float32)}
{'predictions': array([11.041049], dtype=float32)}
{'predictions': array([11.040301], dtype=float32)}
{'predictions': array([11.042241], dtype=float32)}
{'predictions': array([11.099024], dtype=float32)}
{'predictions': array([11.03748], dtype=float32)}
{'predictions': array([11.041721], dtype=float32)}
{'predictions': array([11.100791], dtype=float32)}
{'predictions': array([11.263174], dtype=float32)}
{'predictions': array([11.266427], dtype=float32)}
{'predictions': array([11.042035], dtype=float32)}
{'predictions': array([11.042427], dtype=float32)}
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_train_input_fn(df_train, num_epochs = 100));
print_rmse(model, df_valid)
###Output
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_steps': None, '_evaluation_master': '', '_keep_checkpoint_every_n_hours': 10000, '_is_chief': True, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fbf306e6ef0>, '_master': '', '_global_id_in_cluster': 0, '_num_ps_replicas': 0, '_task_type': 'worker', '_model_dir': 'taxi_trained', '_num_worker_replicas': 1, '_save_checkpoints_secs': 600, '_tf_random_seed': None, '_log_step_count_steps': 100, '_save_summary_steps': 100, '_task_id': 0, '_train_distribute': None, '_session_config': None, '_keep_checkpoint_max': 5}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 1 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 29842.77, step = 1
INFO:tensorflow:global_step/sec: 210.928
INFO:tensorflow:loss = 9002.392, step = 101 (0.481 sec)
INFO:tensorflow:global_step/sec: 272.966
INFO:tensorflow:loss = 8324.836, step = 201 (0.365 sec)
INFO:tensorflow:global_step/sec: 258.214
INFO:tensorflow:loss = 8521.68, step = 301 (0.387 sec)
INFO:tensorflow:global_step/sec: 243.378
INFO:tensorflow:loss = 8178.3984, step = 401 (0.411 sec)
INFO:tensorflow:global_step/sec: 278.198
INFO:tensorflow:loss = 9353.998, step = 501 (0.359 sec)
INFO:tensorflow:global_step/sec: 246.354
INFO:tensorflow:loss = 12876.638, step = 601 (0.406 sec)
INFO:tensorflow:global_step/sec: 263.237
INFO:tensorflow:loss = 7959.91, step = 701 (0.380 sec)
INFO:tensorflow:global_step/sec: 253.149
INFO:tensorflow:loss = 8682.23, step = 801 (0.395 sec)
INFO:tensorflow:global_step/sec: 197.901
INFO:tensorflow:loss = 12169.1, step = 901 (0.505 sec)
INFO:tensorflow:global_step/sec: 239.283
INFO:tensorflow:loss = 8918.199, step = 1001 (0.419 sec)
INFO:tensorflow:global_step/sec: 221.856
INFO:tensorflow:loss = 15695.652, step = 1101 (0.451 sec)
INFO:tensorflow:global_step/sec: 241.302
INFO:tensorflow:loss = 5930.3394, step = 1201 (0.414 sec)
INFO:tensorflow:global_step/sec: 227.39
INFO:tensorflow:loss = 11537.228, step = 1301 (0.440 sec)
INFO:tensorflow:global_step/sec: 251.364
INFO:tensorflow:loss = 14412.359, step = 1401 (0.398 sec)
INFO:tensorflow:global_step/sec: 256.601
INFO:tensorflow:loss = 17942.309, step = 1501 (0.393 sec)
INFO:tensorflow:global_step/sec: 223.185
INFO:tensorflow:loss = 14170.855, step = 1601 (0.445 sec)
INFO:tensorflow:global_step/sec: 247.665
INFO:tensorflow:loss = 16183.543, step = 1701 (0.403 sec)
INFO:tensorflow:global_step/sec: 235.695
INFO:tensorflow:loss = 7810.726, step = 1801 (0.427 sec)
INFO:tensorflow:global_step/sec: 234.881
INFO:tensorflow:loss = 6149.2754, step = 1901 (0.423 sec)
INFO:tensorflow:global_step/sec: 232.834
INFO:tensorflow:loss = 11574.231, step = 2001 (0.429 sec)
INFO:tensorflow:global_step/sec: 239.702
INFO:tensorflow:loss = 14717.008, step = 2101 (0.417 sec)
INFO:tensorflow:global_step/sec: 237.606
INFO:tensorflow:loss = 9005.641, step = 2201 (0.421 sec)
INFO:tensorflow:global_step/sec: 230.737
INFO:tensorflow:loss = 12744.685, step = 2301 (0.433 sec)
INFO:tensorflow:global_step/sec: 260.326
INFO:tensorflow:loss = 13345.156, step = 2401 (0.384 sec)
INFO:tensorflow:global_step/sec: 225.729
INFO:tensorflow:loss = 6819.132, step = 2501 (0.443 sec)
INFO:tensorflow:global_step/sec: 258.975
INFO:tensorflow:loss = 8946.41, step = 2601 (0.386 sec)
INFO:tensorflow:global_step/sec: 256.185
INFO:tensorflow:loss = 9691.637, step = 2701 (0.396 sec)
INFO:tensorflow:global_step/sec: 226.054
INFO:tensorflow:loss = 9392.388, step = 2801 (0.437 sec)
INFO:tensorflow:global_step/sec: 256.864
INFO:tensorflow:loss = 11355.194, step = 2901 (0.389 sec)
INFO:tensorflow:global_step/sec: 247.804
INFO:tensorflow:loss = 10107.482, step = 3001 (0.403 sec)
INFO:tensorflow:global_step/sec: 256.41
INFO:tensorflow:loss = 7983.508, step = 3101 (0.390 sec)
INFO:tensorflow:global_step/sec: 246.065
INFO:tensorflow:loss = 15476.807, step = 3201 (0.406 sec)
INFO:tensorflow:global_step/sec: 220.103
INFO:tensorflow:loss = 5103.0156, step = 3301 (0.454 sec)
INFO:tensorflow:global_step/sec: 245.347
INFO:tensorflow:loss = 6437.1484, step = 3401 (0.407 sec)
INFO:tensorflow:global_step/sec: 226.535
INFO:tensorflow:loss = 10064.092, step = 3501 (0.443 sec)
INFO:tensorflow:global_step/sec: 253.54
INFO:tensorflow:loss = 15250.171, step = 3601 (0.393 sec)
INFO:tensorflow:global_step/sec: 230.22
INFO:tensorflow:loss = 13603.143, step = 3701 (0.435 sec)
INFO:tensorflow:global_step/sec: 235.699
INFO:tensorflow:loss = 17845.951, step = 3801 (0.424 sec)
INFO:tensorflow:global_step/sec: 245.393
INFO:tensorflow:loss = 12553.659, step = 3901 (0.411 sec)
INFO:tensorflow:global_step/sec: 215.13
INFO:tensorflow:loss = 12433.715, step = 4001 (0.461 sec)
INFO:tensorflow:global_step/sec: 244.656
INFO:tensorflow:loss = 9361.515, step = 4101 (0.410 sec)
INFO:tensorflow:global_step/sec: 228.47
INFO:tensorflow:loss = 9883.629, step = 4201 (0.436 sec)
INFO:tensorflow:global_step/sec: 247.574
INFO:tensorflow:loss = 7398.719, step = 4301 (0.404 sec)
INFO:tensorflow:global_step/sec: 232.703
INFO:tensorflow:loss = 6454.9185, step = 4401 (0.434 sec)
INFO:tensorflow:global_step/sec: 255.993
INFO:tensorflow:loss = 7323.9966, step = 4501 (0.386 sec)
INFO:tensorflow:global_step/sec: 269.523
INFO:tensorflow:loss = 9618.355, step = 4601 (0.371 sec)
INFO:tensorflow:global_step/sec: 248.743
INFO:tensorflow:loss = 10290.853, step = 4701 (0.402 sec)
INFO:tensorflow:global_step/sec: 261.354
INFO:tensorflow:loss = 5912.894, step = 4801 (0.382 sec)
INFO:tensorflow:global_step/sec: 240.777
INFO:tensorflow:loss = 14062.918, step = 4901 (0.416 sec)
INFO:tensorflow:global_step/sec: 244.952
INFO:tensorflow:loss = 14587.598, step = 5001 (0.411 sec)
INFO:tensorflow:global_step/sec: 262.682
INFO:tensorflow:loss = 15330.396, step = 5101 (0.377 sec)
INFO:tensorflow:global_step/sec: 240.852
INFO:tensorflow:loss = 13032.351, step = 5201 (0.415 sec)
INFO:tensorflow:global_step/sec: 275.118
INFO:tensorflow:loss = 15112.017, step = 5301 (0.364 sec)
INFO:tensorflow:global_step/sec: 270.734
INFO:tensorflow:loss = 13179.615, step = 5401 (0.369 sec)
INFO:tensorflow:global_step/sec: 253.145
INFO:tensorflow:loss = 12603.519, step = 5501 (0.395 sec)
INFO:tensorflow:global_step/sec: 273.417
INFO:tensorflow:loss = 3724.6326, step = 5601 (0.366 sec)
INFO:tensorflow:global_step/sec: 245.797
INFO:tensorflow:loss = 17414.2, step = 5701 (0.407 sec)
INFO:tensorflow:global_step/sec: 264.72
INFO:tensorflow:loss = 9366.868, step = 5801 (0.378 sec)
INFO:tensorflow:global_step/sec: 254.335
INFO:tensorflow:loss = 10954.008, step = 5901 (0.393 sec)
INFO:tensorflow:global_step/sec: 236.256
INFO:tensorflow:loss = 8344.541, step = 6001 (0.423 sec)
INFO:tensorflow:Saving checkpoints for 6071 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 4420.172.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-11-21-03:42:10
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-6071
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2018-11-21-03:42:10
INFO:tensorflow:Saving dict for global step 6071: average_loss = 109.05948, global_step = 6071, loss = 12970.288
RMSE on dataset = 10.443154335021973
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
import datalab.bigquery as bq
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""
phase: 1 = train 2 = valid
"""
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key,
DAYOFWEEK(pickup_datetime)*1.0 AS dayofweek,
HOUR(pickup_datetime)*1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 == {1}".format(base_query, phase)
else:
query = "{0} AND ABS(HASH(pickup_datetime)) % {1} == {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bq.Query(query).to_dataframe()
print_rmse(model, df)
###Output
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-11-21-03:42:52
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-6071
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Finished evaluation at 2018-11-21-03:42:53
INFO:tensorflow:Saving dict for global step 6071: average_loss = 88.57236, global_step = 6071, loss = 11257.773
RMSE on dataset = 9.41128921508789
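As described above, the very last step would be to measure the RMSE of the chosen model on the held-out test data. A minimal sketch using the objects already defined in this notebook (`print_rmse`, `model` and `df_test`), to be run only once the final model has been selected:

```python
print_rmse(model, df_test)
```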
|
Eye Detection.ipynb | ###Markdown
ROI (Region Of Interest)
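As a quick illustration of what an ROI is in this context: OpenCV images are `numpy` arrays, so a rectangular region of interest can be cut out with plain array slicing. The sketch below uses a dummy image and made-up detection coordinates:

```python
import numpy as np

img = np.zeros((480, 640), dtype=np.uint8)  # dummy grayscale "image" (rows, columns)
x, y, w, h = 100, 50, 200, 150              # hypothetical detection rectangle
roi = img[y:y+h, x:x+w]                     # rows are indexed by y, columns by x
print(roi.shape)                            # (150, 200)
```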
###Code
import cv2
import numpy as np
# Load the Haar cascade files for face and eye
face_cascade = cv2.CascadeClassifier('haar_cascade_files/haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haar_cascade_files/haarcascade_eye.xml')
# Check if the face cascade file has been loaded correctly
if face_cascade.empty():
raise IOError('Unable to load the face cascade classifier xml file')
# Check if the eye cascade file has been loaded correctly
if eye_cascade.empty():
raise IOError('Unable to load the eye cascade classifier xml file')
# Initialize the video capture object
cap = cv2.VideoCapture(0)
# Define the scaling factor
ds_factor = 0.5
# Iterate until the user hits the 'Esc' key
while True:
# Capture the current frame
_, frame = cap.read()
# Resize the frame
frame = cv2.resize(frame, None, fx=ds_factor, fy=ds_factor, interpolation=cv2.INTER_AREA)
# Convert to grayscale
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Run the face detector on the grayscale image
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
# For each face that's detected, run the eye detector
    # x, y, w, h are the coordinates of the detected face;
    # using the full grayscale image, we slice it with these coordinates [gray]
    # and do the same with the color image [frame].
    # Within this face region, the eye cascade is run and green circles are drawn around the eyes.
for (x,y,w,h) in faces:
# Extract the grayscale face ROI
roi_gray = gray[y:y+h, x:x+w]
# Extract the color face ROI
roi_color = frame[y:y+h, x:x+w]
# Run the eye detector on the grayscale ROI
eyes = eye_cascade.detectMultiScale(roi_gray)
# Draw circles around the eyes
for (x_eye,y_eye,w_eye,h_eye) in eyes:
center = (int(x_eye + 0.5*w_eye), int(y_eye + 0.5*h_eye))
radius = int(0.3 * (w_eye + h_eye))
color = (0, 255, 0)
thickness = 3
cv2.circle(roi_color, center, radius, color, thickness)
# Display the output
cv2.imshow('Eye Detector', frame)
# Check if the user hit the 'Esc' key
c = cv2.waitKey(1)
if c == 27:
break
# Release the video capture object
cap.release()
# Close all the windows
cv2.destroyAllWindows()
###Output
_____no_output_____ |
Lec1.ipynb | ###Markdown
Edureka - Machine Learning
###Code
from sklearn import datasets
iris_dataset = datasets.load_iris()
x = iris_dataset.data[:,:10]   # iris has only 4 features, so this slice selects all of them
count = len(x.flat)            # total number of values (150 samples * 4 features = 600)
print(x)
print(count)
min = x[:,0].min()-.5          # range of the first feature (sepal length), padded by 0.5
max = x[:,0].max()+.5          # note: these names shadow the built-in min()/max() functions
count, min, max
s="csdcd"
a=9.5
print('%s%f',s,a)              # prints the literal format string plus both values (see the note below)
###Output
%s%f csdcd 9.5
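The output above shows that the format string was printed literally, because `print` received three separate arguments instead of a formatted string. A sketch of what was presumably intended:

```python
s = "csdcd"
a = 9.5
print('%s %f' % (s, a))  # old-style % formatting
print(f"{s} {a}")        # or an f-string (Python 3.6+)
```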
|
2016/tutorial_final/55/PCA.ipynb | ###Markdown
Principal Component Analysis --- Gautam Arakalgud (garakalg) IntroductionThis tutorial is designed to give the reader an intuitive understanding of principal component analysis (PCA) and its applications in analyzing large datasets as well as data compression. While the tutorial will largely focus on the high level ideas behind PCA, it also gives you a chance to understand the mathematics behind the theory.In today's world, it is very important that we are able to use existing computing resources to store, analyze and work with growing data. For instance, we could be estimating facial expressions from images shot with a 1MP camera. Treating raw pixel values as image features, we have a million features for every image. Running standard machine learning algorithms on these images would require a great deal of computing power and time. However, it is often found that most features in high dimensional datasets are redundant and this _information_ can be contained in a much smaller number of features. PCA deals with effectively trying to identify those features that can capture the _essence_ of the data and represent the data in this lower dimensional feature space. ContentsThis tutorial will cover the following topics:- [The Olivetti Faces Dataset](The-Olivetti-Faces-Dataset)- [Visualizing the dataset](Visualizing-the-dataset)- [Math for PCA](Math-for-PCA)- [Computing eigen values and vectors](Computing-eigen-vectors-and-corresponding-eigen-values)- [Projecting data](Projecting-data-onto-the-principal-vectors-and-Reconstruction)- [Visualizing reconstructed data](Visualizing-reconstructed-images)- [Gram Matrix Trick](Gram-Matrix-Trick)- [Using PCA for classification](Using-Dimensionality-reduction-for-classification) The Olivetti Faces DatasetThe [Olivetti Database of Faces][odb] consists of 10 images each of 40 distinct subjects. The images were captured in natural environments at different times, with different lighting, facial expressions (smiling/not smiling) and facial details (glasses/no glasses).Each image in the dataset is of size 64x64 pixels. The image is quantized to 256 grayscale levels, but the loader converts this to floating point values in the interval [0 1]. We shall apply PCA and some basic (supervised) machine learning algorithms to this dataset and see the effect of PCA on accuracy, as well as its dependency on computing power and time. The dataset is loaded from the sklearn datasets.We shall first import the required libraries and load the database with the images arranged in a random permutation. This is achieved by setting "shuffle=True" while loading the dataset.[odb]: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
###Code
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.datasets import fetch_olivetti_faces
from matplotlib import pyplot as plt
import scipy.sparse as sp
rnd = np.random.RandomState(2) # Use a fixed seed for consistency
dataset = fetch_olivetti_faces(shuffle=True, random_state=rnd) # Load the Olivetti_Faces_dataset
image_shape = (64, 64)
###Output
_____no_output_____
###Markdown
Visualizing the dataset
The first step towards addressing any data science problem must always be to visualize the dataset and understand it before running algorithms on it, and that is what is done here. A small helper function is written to plot a gallery of images in a 3x3 grid. We can then see a small preview of the kind of images this dataset contains. We plot 9 randomly chosen images from the dataset.
###Code
def plot_gallery(title, subtitle, images, n_col, n_row, n):
    plt.figure(figsize=(2. * n_col, 2.26 * n_row))
    plt.suptitle(title, size=16)
    for i, comp in enumerate(images):
        ax = plt.subplot(n_row, n_col, i + 1)
        vmax = max(comp.max(), -comp.min())
        ax.set_title("{} {}".format(subtitle, (i+1)*n))
        ax.imshow(comp.reshape(image_shape), cmap=plt.cm.gray)
        plt.xticks(())
        plt.yticks(())
    plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.20, 0.)
plot_gallery("A preview of the Olivetti Faces Dataset", "Face", dataset.data[:9, :], 3, 3, 1)
plt.show()
###Output
_____no_output_____
###Markdown
Math for PCA
Let's take a small and quick detour to write down and intuitively understand some of the mathematics behind the PCA formulation. We can think of our data (images) as points in a high dimensional (64\*64 to be precise) space. We have 400 data points in a 4096 dimensional feature space. The core idea behind PCA is to pick **principal vectors** in the original high dimensional feature space and project our data onto these vectors, thus going down to a lower dimensional space (described by these vectors) while preserving most of the _important information_ in our data. The way PCA achieves this is by picking out the vectors whose directions in the original feature space represent the maximum variance of the data. Hence PCA uses **variance** as a primary measure in compressing data into a smaller feature space. To apply PCA, we commonly calculate the covariance matrix of the data.

Notation:
$d$: Number of features
$N$: Number of samples
$X$: Data (arranged dxN)
$\omega$: Principal vector (1xd)
$\mu$: Mean of the data (1xd)

**Important**: Variance (or covariance) is always defined for data that is centered (meaned) at the origin. We must thus subtract the mean before running PCA on the data.

Computing eigen vectors and corresponding eigen values
The PCA objective is that we want to find projections of the data (i.e. directions onto which the data can be projected) that describe the maximum variance or scatter in the data. Let us look at this as an optimization problem.
$$\begin{align*}\max \>\> J(\omega) & = var(\omega^TX) \\& = E(\omega^TX - \omega^T\mu)^2 \\& = E(\omega^TX - \omega^T\mu)(X^T\omega - \mu^T \omega) \\& = \omega^TE[(X-\mu)(X-\mu)^T]\omega \\& = \omega^T \Sigma \omega\end{align*}$$
where $\Sigma = E[(X-\mu)(X-\mu)^T]$ is the covariance matrix.
We maximize $J(\omega)$ subject to the constraint that the projection vectors are unit norm ($\implies ||\omega||=1$). We solve the Lagrangian formulation
$$L(\omega, \lambda) = \omega^T\Sigma\omega - \lambda(\omega^T\omega - 1) \\\frac{\partial L(\omega, \lambda)}{\partial \omega} = 2\Sigma\omega - 2\lambda\omega = 0 \\\Sigma\omega = \lambda \omega \\$$
This is a standard eigenvalue / eigenvector problem. The vectors $\omega$ are in fact the eigenvectors of the covariance matrix $\Sigma$, and the vectors corresponding to the highest eigenvalues are the ones that describe the highest variation in the original data. Let us implement this with the face dataset.
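Before turning to the faces, here is a tiny numeric sanity check of the claim above on toy 2-D data (illustrative only; the variable names here are made up for this sketch): the eigenvector of the covariance matrix with the largest eigenvalue is the unit direction of maximum projected variance.
```python
import numpy as np

rng = np.random.RandomState(0)
X_toy = rng.randn(500, 2).dot(np.array([[3.0, 0.0], [1.0, 0.5]]))  # correlated 2-D cloud
Xc = X_toy - X_toy.mean(axis=0)                                     # center the data first
cov = Xc.T.dot(Xc) / len(Xc)
vals, vecs = np.linalg.eigh(cov)
w = vecs[:, -1]                                # eigenvector with the largest eigenvalue
print(np.var(Xc.dot(w)), vals[-1])             # projected variance equals the top eigenvalue
w_rand = rng.randn(2)
w_rand /= np.linalg.norm(w_rand)
print(np.var(Xc.dot(w_rand)))                  # any other unit direction gives at most that much variance
```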
###Code
'''
eig() returns a sorted numpy array of eigen values and corresponding
normalized eigen vectors as the columns of a square matrix.
'''
def covariance_eig(data):
    covariance_data = data.dot(data.T)
    eig, V = np.linalg.eig(covariance_data)
    eig = np.real(eig)
    V = np.real(V)
    return (eig, V)
img = dataset.data # Load the data in the format that we need
img_mean = img.mean(axis=0) # We store the mean of the image because we need it to reconstruct the image.
centered_img = img - img_mean # Remove the mean from every image.
%time eig, V = covariance_eig(centered_img.T) # Calculate the covariance matrix and principal vectors of the data
###Output
CPU times: user 9min 17s, sys: 4min 48s, total: 14min 5s
Wall time: 4min 56s
###Markdown
Now that we have our eigen vectors and their corresponding eigen values, we can pick the top _k_ eigen vectors that maximize the variance in the data and project our data along these vectors. By doing this, we are effectively reducing the dimensionality of our data from the original high dimensional space to k dimensions.
The question of how many eigen vectors to pick is an important one. It largely depends on what purpose the PCA is trying to achieve. For instance, you could be performing PCA for data compression, for classification, etc. One good approach is to list the eigen values in descending order and calculate the amount of variance retained by picking the top k eigen vectors. This is known as the reconstruction capability and explains how well the chosen principal vectors can reconstruct the original data. By "good reconstruction", we usually mean reducing the mean square errors between the original and reconstructed data.
$$Reconstruction \> \% = \frac{\sum_{i=1}^k \lambda_i}{\sum_{i=1}^N \lambda_i}$$
We can write a small script that plots the reconstruction capability against the number of eigen vectors chosen.
###Code
cumulative_eig = np.cumsum(eig)
cumulative_eig = cumulative_eig/float(cumulative_eig[-1])
plt.plot(range(1, 201), cumulative_eig[:200])
plt.show()
print("Reconstruction capability of the first 200 eigen vectors: {}%".format(cumulative_eig[199]*100))
###Output
_____no_output_____
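As a small follow-up (an illustrative helper, relying on `cumulative_eig` computed in the cell above and on the eigenvalues being sorted in decreasing order, as assumed throughout), we can read off how many components are needed for a target reconstruction capability:
```python
# Number of leading eigenvectors needed to reach a target reconstruction fraction
target = 0.80
k = int(np.searchsorted(cumulative_eig, target)) + 1
print("{} components retain {:.1f}% of the variance".format(k, cumulative_eig[k-1] * 100))
```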
###Markdown
We notice something very interesting by just plotting the reconstruction capability for the first 200 (out of a potential 4096) eigen vectors. The first 200 principal vectors describe ~98% of the variance of the data. What this means is that we could reduce the dimensionality of our data from the original 4096 dimensional space to a 200 dimensional space with just 2% loss in information. We are effectively reducing the size of our data by about 20 times! Sometimes, we may not even need this many principal vectors. To get a reconstruction capability of 80%, we'd need only 27 eigen vectors.

Projecting data onto the principal vectors and Reconstruction
We can write a function that takes in as input the data matrix and the principal vectors and projects the data onto these vectors.
###Code
def project(data, principal_vectors):
    data = data - img_mean
    project_data = data.dot(principal_vectors)
    return project_data
###Output
_____no_output_____
###Markdown
Now that we have the new data projected onto the low dimensional feature space, we can store or run algorithms on this compressed data. The next step should be to visually see how close the compressed data is to the actual data and to calculate the mean square error between the original and reconstructed data.
**Important:** We subtracted the mean from the original image and projected the centered_image. Remember to add the mean back to the reconstructed data.
###Code
def reconstruct(project_data, principal_vectors):
    reconstruct_data = project_data.dot(principal_vectors.T)
    reconstruct_data = reconstruct_data + img_mean
    return reconstruct_data
###Output
_____no_output_____
###Markdown
Visualizing reconstructed images
We begin by visualizing the most dominant principal vectors to see what kind of information these vectors contain. We continue this analysis by choosing an image, projecting it onto the first k principal vectors, and varying k.
###Code
eigen_components = np.zeros((9, dataset.data.shape[1]))
eigen_components = V[:, :9].T
plot_gallery("Principal components of the data", "Eigen vector", eigen_components, 3, 3, 1)
plt.show()
dom_eigen = np.zeros((9, dataset.data.shape[1]))
print(V.shape)
print(dataset.data[0].shape)
for i in range(9):
    project_data = project(dataset.data[0], V[:, :5*(i+1)])
    reconstruct_data = reconstruct(project_data, V[:, :5*(i+1)])
    dom_eigen[i] = reconstruct_data
plot_gallery("Image reconstruction with k principal components", "k =", dom_eigen, 3, 3, 5)
plt.show()
###Output
(4096, 4096)
(4096,)
###Markdown
While it seems obvious that increasing the number of principal vectors gives us better reconstruction, we can visually see that this is true.

MSE of reconstructed images
A good metric for the amount of information loss is to calculate the mean square error between the original and reconstructed images. We take 4 cases, by choosing 5, 20, 50 and 400 principal vectors.
###Code
rec_images = np.zeros((4, dataset.data.shape[1]))
principal_vec = [5, 20, 50, 400]
for i,k in enumerate(principal_vec):
    project_data = project(dataset.data[0], V[:, :k])
    reconstruct_data = reconstruct(project_data, V[:, :k])
    rec_images[i] = reconstruct_data
mse = np.sum(((rec_images-dataset.data[0])**2), axis=1)/float(rec_images.shape[1])
print(mse)
###Output
[ 9.46010545e-03 4.56969746e-03 2.24048652e-03 4.96826193e-14]
###Markdown
We get good reconstruction with k = 5, 20 and 50, and near perfect reconstruction by using 400 eigen vectors (as expected).

Gram Matrix Trick
In this example we are working with images of size 64x64 pixels, and this gives us feature vectors of size 4096 (d). However, we just have 400 (N) sample images. This is most often the case when we deal with computer vision problems. The feature size far exceeds the number of samples we have. The covariance matrix that we compute is correspondingly of size 4096x4096. As the size of these images increases, it becomes computationally infeasible to compute these large matrices. For instance, if we were dealing with 1MP images, we'd be generating a $10^6*10^6$ covariance matrix, and we wouldn't be able to store a matrix this large on our computer. Added to this is the fact that we have only N samples (N << d) and can have at most N-1 eigenvectors (assuming that our data is linearly independent). Hence, it would make more sense to calculate an $N*N$ covariance matrix and work with this matrix.
We know that
$$\Sigma = E[(X-\mu)(X-\mu)^T]$$
We must solve for
$$\begin{align*}\Sigma v & = \lambda v \\XX^Tv & = \lambda v \\X^TXX^Tv & = \lambda X^Tv \> \> \text{(pre-mult by }X^T) \\\text{Setting }(v' & = X^Tv), \\X^TXv' & = \lambda v' \implies \text{Eigen value problem}\end{align*}$$
However, once we solve for the eigen values and eigen vectors, we must get back the principal vectors of the original data, as these are the vectors onto which we can project our data.
$$\begin{align*}XX^Tv & = \lambda v \\v' & = X^Tv \\\implies Xv' & = \lambda v \\\implies v & = \frac{1}{\lambda}Xv'\end{align*}$$
We can now go ahead and solve for the new covariance matrix and the corresponding principal vectors.
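Before applying this to the faces, a tiny illustrative check (toy random matrix, not part of the tutorial's data) that the non-zero eigenvalues of $XX^T$ and $X^TX$ indeed coincide:
```python
import numpy as np

rng_demo = np.random.RandomState(0)
X_demo = rng_demo.randn(5, 3)                      # d = 5 features, N = 3 samples
big = np.linalg.eigvalsh(X_demo.dot(X_demo.T))     # 5 eigenvalues (two are ~0)
small = np.linalg.eigvalsh(X_demo.T.dot(X_demo))   # 3 eigenvalues
print(np.round(big[::-1][:3], 4))                  # top 3 of the d x d problem...
print(np.round(small[::-1], 4))                    # ...match the N x N problem
```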
###Code
gram_img = centered_img
print(gram_img.shape)
%time eig_gram, V_gram = covariance_eig(gram_img)
print(V_gram.shape)
###Output
(400, 4096)
CPU times: user 1.9 s, sys: 3.04 s, total: 4.94 s
Wall time: 1.46 s
(400, 400)
###Markdown
We can compare the time that it took to compute the covariance and eigen vector matrices for the original data and using the gram matrix trick. Using the gram matrix trick and computing the covariance matrix and principal components for the transposed data matrix took about 1.5 seconds, as against almost 5 minutes of wall time for the original data. This is a pretty big speedup. Further, we're now just storing 400 principal vectors rather than the original 4096 principal vectors. It's easy to see why this computation is beneficial. We can now project and reconstruct the same image that we had earlier to see if we get similar results.

Recovering original eigen vectors
Before we can project our data, we must recover the original principal vectors. This is done by following the equation derived above.
###Code
def recover(data, V_gram):
    V_orig = data.T.dot(V_gram)
    V_orig = normalize(V_orig, axis=0)
    return V_orig
###Output
_____no_output_____
###Markdown
We can verify that the recovered eigen vectors from the gram matrix trick are identical to the original principal vectors generated from the data itself. In short, V_orig must be identical to V. We can reconstruct the same image that we'd chosen earlier to verify this.
###Code
# Recover the full-dimensional principal vectors from the gram-matrix eigenvectors
V_orig = recover(gram_img, V_gram)
dom_eigen = np.zeros((9, dataset.data.shape[1]))
for i in range(9):
    project_data = project(dataset.data[0], V_orig[:, :5*(i+1)])
    reconstruct_data = reconstruct(project_data, V_orig[:, :5*(i+1)])
    dom_eigen[i] = reconstruct_data
plot_gallery("Image reconstruction with k principal components", "k =", dom_eigen, 3, 3, 5)
plt.show()
###Output
_____no_output_____
###Markdown
Using Dimensionality reduction for classification
Let us now train and build a classification algorithm to classify these images. The dataset contains 10 images each of 40 distinct people and we can use it to perform face recognition. We use the softmax loss for this multiclass classification in a similar fashion as we did to classify handwritten digits in assignment 4.
Let us define the loss and gradient functions.
###Code
def softmax_loss_grad(X, Theta, y):
    m = y.shape[0]
    y_mat = sp.csr_matrix((np.ones(m), (y, np.arange(m))))
    y_mat = np.array(y_mat.todense()).T
    hyp = X.dot(Theta)
    prob = ((np.exp(hyp)).T/(np.sum(np.exp(hyp), axis=1))).T
    loss = -sum((sum(y_mat*np.log(prob))))/m
    grad = (-1/float(m))*X.T.dot(y_mat-prob)
    return (loss, grad)

def softmax_gd(X, y, lam=1e-5, alpha=1.0):
    theta = np.zeros((X.shape[1], len(np.unique(y))))
    prev_loss = np.inf
    while True:
        loss, grad = softmax_loss_grad(X, theta, y)
        grad += lam*theta
        theta = theta - alpha*grad
        if abs(prev_loss-loss) < 1e-5:
            break
        prev_loss = loss
    return theta

def predict(X_test, theta):
    hyp = X_test.dot(theta)
    pred = hyp.argmax(axis=1)
    return pred
###Output
_____no_output_____
###Markdown
We can use 80% (320 images) as our training set and the remaining 20% (80 images) as our testing set. With PCA:
###Code
project_data = project(dataset.data, V[:, :50])
X_train = project_data[:320]
X_test = project_data[320:400]
y_train = dataset.target[:320]
y_test = dataset.target[320:400]
theta = softmax_gd(X_train, y_train)
pred = predict(X_test, theta)
err = len(np.where(pred != y_test)[0])/float(len(y_test))
print(err)
###Output
0.025
###Markdown
Without PCA:
###Code
X_train = dataset.data[:320]
X_test = dataset.data[320:400]
theta = softmax_gd(X_train, y_train, alpha=0.05)
pred = predict(X_test, theta)
err = len(np.where(pred != y_test)[0])/float(len(y_test))
print(err)
###Output
0.022
|
tv-script-generation/dlnd_tv_script_generation-solution.ipynb | ###Markdown
TV Script Generation
In this project, you'll generate your own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using RNNs. You'll be using part of the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at [Moe's Tavern](https://simpsonswiki.com/wiki/Moe's_Tavern).

Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
###Output
_____no_output_____
###Markdown
Explore the Data
Play around with `view_sentence_range` to view different parts of the data.
###Code
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 11492
Number of scenes: 262
Average number of sentences in each scene: 15.248091603053435
Number of lines: 4257
Average number of words in each line: 11.50434578341555
The sentences 0 to 10:
Moe_Szyslak: (INTO PHONE) Moe's Tavern. Where the elite meet to drink.
Bart_Simpson: Eh, yeah, hello, is Mike there? Last name, Rotch.
Moe_Szyslak: (INTO PHONE) Hold on, I'll check. (TO BARFLIES) Mike Rotch. Mike Rotch. Hey, has anybody seen Mike Rotch, lately?
Moe_Szyslak: (INTO PHONE) Listen you little puke. One of these days I'm gonna catch you, and I'm gonna carve my name on your back with an ice pick.
Moe_Szyslak: What's the matter Homer? You're not your normal effervescent self.
Homer_Simpson: I got my problems, Moe. Give me another one.
Moe_Szyslak: Homer, hey, you should not drink to forget your problems.
Barney_Gumble: Yeah, you should only drink to enhance your social skills.
###Markdown
Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation

Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`

Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`
###Code
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    vocab_to_int = dict()
    int_to_vocab = dict()
    words = set(text)
    for i, word in enumerate(words):
        vocab_to_int[word] = i
        int_to_vocab[i] = word
    return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
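A quick illustrative check on a toy word list (not the project data), just to see that the two dictionaries invert each other:
```python
toy_vocab_to_int, toy_int_to_vocab = create_lookup_tables(['moe', 'opens', 'the', 'tavern', 'the'])
print(toy_vocab_to_int['the'], toy_int_to_vocab[toy_vocab_to_int['the']])
```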
###Markdown
Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )

This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates the symbols as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
###Code
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    return {
        '.': '||PERIOD||',
        ',': '||COMMA||',
        '"': '||QUOTATION_MARK||',
        ';': '||SEMICOLON||',
        '!': '||EXCLAMATION_MARK||',
        '?': '||QUESTION_MARK||',
        '(': '||LEFT_PARANTHESES||',
        ')': '||RIGHT_PARANTHESES||',
        '--': '||DASH||',
        '\n': '||RETURN||'
    }
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
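A rough sketch of how such a token map gets applied before splitting on whitespace (the project's helper takes care of this step for you; the sentence below is made up):
```python
toy_text = "Hold on, I'll check."
for key, token in token_lookup().items():
    toy_text = toy_text.replace(key, ' {} '.format(token))
print(toy_text.split())   # ['Hold', 'on', '||COMMA||', "I'll", 'check', '||PERIOD||']
```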
###Markdown
Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches

Check the Version of TensorFlow and Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
TensorFlow Version: 1.1.0-rc1
###Markdown
Input
Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter.
- Targets placeholder
- Learning Rate placeholder

Return the placeholders in the following tuple `(Input, Targets, LearningRate)`
###Code
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    learn_rate = tf.placeholder(tf.float32, name='learning_rate')
    return inputs, targets, learn_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
###Output
Tests Passed
###Markdown
Build RNN Cell and Initialize
Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell).
- The RNN size should be set using `rnn_size`
- Initialize Cell State using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCellzero_state) function
- Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)

Return the cell and initial state in the following tuple `(Cell, InitialState)`
###Code
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm])
    initial_state = cell.zero_state(batch_size, tf.float32)
    return cell, tf.identity(initial_state, name='initial_state')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
###Output
Tests Passed
###Markdown
Word Embedding
Apply embedding to `input_data` using TensorFlow. Return the embedded sequence.
###Code
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create embedding for <input_data>.
    :param input_data: TF placeholder for text input.
    :param vocab_size: Number of words in vocabulary.
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input.
    """
    # TODO: Implement Function
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, input_data)
    return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
###Output
Tests Passed
###Markdown
Build RNN
You created a RNN Cell in the `get_init_cell()` function. Time to use the cell to create a RNN.
- Build the RNN using the [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
- Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity)

Return the outputs and final state in the following tuple `(Outputs, FinalState)`
###Code
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name='final_state')
    return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
###Output
Tests Passed
###Markdown
Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function.
- Build RNN using `cell` and your `build_rnn(cell, inputs)` function.
- Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs.

Return the logits and final state in the following tuple (Logits, FinalState)
###Code
def fully_conn_linear(x_tensor, num_outputs):
    num_inputs = x_tensor.shape.as_list()
    weights = tf.Variable(tf.truncated_normal(num_inputs + [num_outputs], stddev=0.1))
    bias = tf.Variable(tf.zeros(num_outputs))
    # fc_layer = tf.reshape(x_tensor, [-1, num_inputs])
    fc_layer = tf.add(tf.matmul(x_tensor, weights), bias)
    return fc_layer

def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # TODO: Implement Function
    embedding = get_embed(input_data, vocab_size, embed_dim)
    outputs, final_state = build_rnn(cell, embedding)
    logits = tf.layers.dense(outputs, vocab_size, activation=None)
    return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
###Output
Tests Passed
###Markdown
Batches
Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements:
- The first element is a single batch of **input** with the shape `[batch size, sequence length]`
- The second element is a single batch of **targets** with the shape `[batch size, sequence length]`

If you can't fill the last batch with enough data, drop the last batch.

For example, `get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)` would return a Numpy array of the following:
```
[
  First Batch
  [
    Batch of Input
    [[ 1 2], [ 7 8], [13 14]]
    Batch of targets
    [[ 2 3], [ 8 9], [14 15]]
  ]
  Second Batch
  [
    Batch of Input
    [[ 3 4], [ 9 10], [15 16]]
    Batch of targets
    [[ 4 5], [10 11], [16 17]]
  ]
  Third Batch
  [
    Batch of Input
    [[ 5 6], [11 12], [17 18]]
    Batch of targets
    [[ 6 7], [12 13], [18 1]]
  ]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, `1`. This is a common technique used when creating sequence batches, although it is rather unintuitive.
###Code
# Note, this was my first attempt and to be honest I don't quite understand the batching.
# As you should note below is that my code does produce the example output, then also passes
# the majority of unit tests, but fails where it tests for the last item.
# I think the unit tests are not designed correctly, because I do set the last target to
# the first input. The problem is along a different axis which is not covered by the unit test
# and neither the example explains what's wrong.
# I think in the example it is a bit unfortunate that you have 3x2 batch_size and seq_length, but also
# when I tried to figure out why after [1 2] it follows [7 8] the only reason I could find is because it's
# continuing the sequence along the first vertical [1] [2] [3] [4] [5] [6], which happens to be 6=2*3.
# In the unit test this "shift" is 35, which I don't understand and so the formulas don't work.
# When I looked up the error in the forums I saw a non-working code which just had the very last element
# wrong so I used and fixed that, see next code box...
def get_batches(int_text, batch_size, seq_length, test=False):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Function
    N = len(int_text)
    count = 3 if test else int(N/batch_size/seq_length)
    delta = 6 if test else 35
    x = []
    print('N',N)
    print('batch_size',batch_size)
    print('seq_length',seq_length)
    print('delta',delta)
    print('count',count)
    for c in range(count):
        i = 2*c
        batch = []
        for b in range(batch_size):
            idx = i+b*delta
            batch += int_text[ idx : idx+seq_length ]
        for b in range(batch_size):
            idx = i+b*delta+1
            batch += int_text[ idx : idx+seq_length ]
        if (c == count-1):
            batch[-1] = int_text[0]
        #print(batch)
        x += batch
    print(len(x))
    batches = np.array(x).flatten().reshape((count, 2, batch_size, seq_length))
    return batches
print(get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2, test=True))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Function
    n_batches = int(len(int_text) / (batch_size * seq_length))
    # Drop the last few characters to make only full batches
    xdata = np.array(int_text[: n_batches * batch_size * seq_length])
    ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])
    ydata[-1] = xdata[0]
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
    return np.array(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
###Output
Tests Passed
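As a quick sanity check, the working implementation reproduces the worked example from the prompt above (illustrative call):
```python
example_batches = get_batches(list(range(1, 21)), 3, 2)
print(example_batches.shape)    # (3, 2, 3, 2)
print(example_batches[0][0])    # first input batch: [[1 2], [7 8], [13 14]]
print(example_batches[-1][1])   # last target batch ends with 1, wrapping back to the first input value
```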
###Markdown
Neural Network Training
Hyperparameters
Tune the following parameters:
- Set `num_epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `embed_dim` to the size of the embedding.
- Set `seq_length` to the length of sequence.
- Set `learning_rate` to the learning rate.
- Set `show_every_n_batches` to the number of batches the neural network should print progress.
###Code
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 13
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
###Output
_____no_output_____
###Markdown
Build the Graph
Build the graph using the neural network you implemented.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
    vocab_size = len(int_to_vocab)
    input_text, targets, lr = get_inputs()
    input_data_shape = tf.shape(input_text)
    cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
    logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)

    # Probabilities for generating words
    probs = tf.nn.softmax(logits, name='probs')

    # Loss function
    cost = seq2seq.sequence_loss(
        logits,
        targets,
        tf.ones([input_data_shape[0], input_data_shape[1]]))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(lr)

    # Gradient Clipping
    gradients = optimizer.compute_gradients(cost)
    capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
    train_op = optimizer.apply_gradients(capped_gradients)
###Output
_____no_output_____
###Markdown
Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the [forums](https://discussions.udacity.com/) to see if anyone is having the same problem.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
    sess.run(tf.global_variables_initializer())
    for epoch_i in range(num_epochs):
        state = sess.run(initial_state, {input_text: batches[0][0]})
        for batch_i, (x, y) in enumerate(batches):
            feed = {
                input_text: x,
                targets: y,
                initial_state: state,
                lr: learning_rate}
            train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
            # Show every <show_every_n_batches> batches
            if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
                print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
                    epoch_i,
                    batch_i,
                    len(batches),
                    train_loss))
    # Save Model
    saver = tf.train.Saver()
    saver.save(sess, save_dir)
    print('Model Trained and Saved')
###Output
Epoch 0 Batch 0/13 train_loss = 8.823
Epoch 1 Batch 0/13 train_loss = 5.566
Epoch 2 Batch 0/13 train_loss = 4.892
Epoch 3 Batch 0/13 train_loss = 4.435
Epoch 4 Batch 0/13 train_loss = 4.004
Epoch 5 Batch 0/13 train_loss = 3.604
Epoch 6 Batch 0/13 train_loss = 3.283
Epoch 7 Batch 0/13 train_loss = 3.027
Epoch 8 Batch 0/13 train_loss = 2.738
Epoch 9 Batch 0/13 train_loss = 2.463
Epoch 10 Batch 0/13 train_loss = 2.253
Epoch 11 Batch 0/13 train_loss = 2.056
Epoch 12 Batch 0/13 train_loss = 1.887
Epoch 13 Batch 0/13 train_loss = 1.730
Epoch 14 Batch 0/13 train_loss = 1.547
Epoch 15 Batch 0/13 train_loss = 1.432
Epoch 16 Batch 0/13 train_loss = 1.352
Epoch 17 Batch 0/13 train_loss = 1.184
Epoch 18 Batch 0/13 train_loss = 1.012
Epoch 19 Batch 0/13 train_loss = 0.921
Epoch 20 Batch 0/13 train_loss = 0.795
Epoch 21 Batch 0/13 train_loss = 0.692
Epoch 22 Batch 0/13 train_loss = 0.615
Epoch 23 Batch 0/13 train_loss = 0.534
Epoch 24 Batch 0/13 train_loss = 0.474
Epoch 25 Batch 0/13 train_loss = 0.432
Epoch 26 Batch 0/13 train_loss = 0.388
Epoch 27 Batch 0/13 train_loss = 0.357
Epoch 28 Batch 0/13 train_loss = 0.327
Epoch 29 Batch 0/13 train_loss = 0.295
Epoch 30 Batch 0/13 train_loss = 0.279
Epoch 31 Batch 0/13 train_loss = 0.247
Epoch 32 Batch 0/13 train_loss = 0.225
Epoch 33 Batch 0/13 train_loss = 0.211
Epoch 34 Batch 0/13 train_loss = 0.192
Epoch 35 Batch 0/13 train_loss = 0.183
Epoch 36 Batch 0/13 train_loss = 0.180
Epoch 37 Batch 0/13 train_loss = 0.175
Epoch 38 Batch 0/13 train_loss = 0.167
Epoch 39 Batch 0/13 train_loss = 0.166
Epoch 40 Batch 0/13 train_loss = 0.162
Epoch 41 Batch 0/13 train_loss = 0.156
Epoch 42 Batch 0/13 train_loss = 0.152
Epoch 43 Batch 0/13 train_loss = 0.150
Epoch 44 Batch 0/13 train_loss = 0.148
Epoch 45 Batch 0/13 train_loss = 0.147
Epoch 46 Batch 0/13 train_loss = 0.147
Epoch 47 Batch 0/13 train_loss = 0.145
Epoch 48 Batch 0/13 train_loss = 0.145
Epoch 49 Batch 0/13 train_loss = 0.144
Model Trained and Saved
###Markdown
Save Parameters
Save `seq_length` and `save_dir` for generating a new TV script.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
###Output
_____no_output_____
###Markdown
Implement Generate Functions
Get Tensors
Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graphget_tensor_by_name). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"

Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)`
###Code
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    # TODO: Implement Function
    return loaded_graph.get_tensor_by_name(name='input:0'), \
        loaded_graph.get_tensor_by_name(name='initial_state:0'), \
        loaded_graph.get_tensor_by_name(name='final_state:0'), \
        loaded_graph.get_tensor_by_name(name='probs:0')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
###Output
Tests Passed
###Markdown
Choose Word
Implement the `pick_word()` function to select the next word using `probabilities`.
###Code
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # TODO: Implement Function
    p = np.squeeze(probabilities)
    p = p/np.sum(p)
    c = np.random.choice(len(p), 1, p=p)[0]
    return int_to_vocab[c]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
###Output
Tests Passed
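A toy demonstration (made-up distribution and vocabulary, not the project data): with a heavily peaked distribution, `pick_word` should return 'tavern' most of the time.
```python
toy_probs = np.array([0.05, 0.05, 0.90])
toy_int_to_vocab = {0: 'moe', 1: 'homer', 2: 'tavern'}
print(pick_word(toy_probs, toy_int_to_vocab))
```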
###Markdown
Generate TV Script
This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.
###Code
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})

        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)

    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')

    print(tv_script)
###Output
INFO:tensorflow:Restoring parameters from ./save
moe_szyslak: hey, hey, hey, hey! that plank's only for comin' in!
moe_szyslak:(gently) you want to buy the best japanese yellow we're homer.
lisa_simpson:(thrilled) the wordloaf festival doesn't do!
moe_szyslak:(to homer) easy there, habitrail.
lisa_simpson:(quickly) i'm not here.
moe_szyslak: that's the worst name i ever heard.
barney_gumble:(calling after him) hey, joey joe joe junior... oh, nuts. i forgot. all i can think of hope.
moe_szyslak: nah, nah, no. makin' polenta less be in here.
waylon_smithers: huh? huh? listen, what if i helped you say heavyweight championship.
kent_brockman:(to himself, disgusted) my thesaurus.
homer_simpson: quit changing the subject. how do you feel about me right now?
moe_szyslak: oh, no. oh lisa.
homer_simpson: oh man, i love our valentine's day to play with me, i just hope you do.
homer_simpson: chief! thank god, i always
|
Copy_of_Hello_ML_World.ipynb | ###Markdown
The Hello World of Deep Learning with Neural Networks
Like every first app you should start with something super simple that shows the overall scaffolding for how your code works. In the case of creating neural networks, the sample I like to use is one where it learns the relationship between two numbers. So, for example, if you were writing code for a function like this, you already know the 'rules' --
```
float my_function(float x){
    float y = (3 * x) + 1;
    return y;
}
```
So how would you train a neural network to do the equivalent task? Using data! By feeding it with a set of Xs, and a set of Ys, it should be able to figure out the relationship between them. This is obviously a very different paradigm than what you might be used to, so let's step through it piece by piece.

Imports
Let's start with our imports. Here we are importing TensorFlow and calling it tf for ease of use.
We then import a library called numpy, which helps us to represent our data as lists easily and quickly.
The framework for defining a neural network as a set of Sequential layers is called keras, so we import that too.
###Code
import tensorflow as tf
import numpy as np
from tensorflow import keras
###Output
_____no_output_____
###Markdown
Define and Compile the Neural Network
Next we will create the simplest possible neural network. It has 1 layer, and that layer has 1 neuron, and the input shape to it is just 1 value.
###Code
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
###Output
_____no_output_____
###Markdown
Now we compile our Neural Network. When we do so, we have to specify 2 functions, a loss and an optimizer. If you've seen lots of math for machine learning, here's where it's usually used, but in this case it's nicely encapsulated in functions for you. But what happens here -- let's explain... We know that in our function, the relationship between the numbers is y=3x+1. When the computer is trying to 'learn' that, it makes a guess... maybe y=10x+10. The LOSS function measures the guessed answers against the known correct answers and measures how well or how badly it did. It then uses the OPTIMIZER function to make another guess. Based on how the loss function went, it will try to minimize the loss. At that point maybe it will come up with something like y=5x+5, which, while still pretty bad, is closer to the correct result (i.e. the loss is lower). It will repeat this for the number of EPOCHS which you will see shortly. But first, here's how we tell it to use 'MEAN SQUARED ERROR' for the loss and 'STOCHASTIC GRADIENT DESCENT' for the optimizer. You don't need to understand the math for these yet, but you can see that they work! :) Over time you will learn the different and appropriate loss and optimizer functions for different scenarios.
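Here is a hand-rolled sketch of that guess / loss / optimizer loop (this is not what Keras does internally, just the idea in plain numpy; the names `w`, `b` and `lr` are made up for this sketch), fitting y = 3x + 1 starting from the deliberately bad guess y = 10x + 10:
```python
import numpy as np

xs_demo = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys_demo = 3 * xs_demo + 1
w, b, lr = 10.0, 10.0, 0.05                    # the bad first guess and a step size
for step in range(500):
    pred = w * xs_demo + b                     # make a guess
    loss = np.mean((pred - ys_demo) ** 2)      # mean squared error: how bad was it?
    dw = np.mean(2 * (pred - ys_demo) * xs_demo)
    db = np.mean(2 * (pred - ys_demo))
    w, b = w - lr * dw, b - lr * db            # the "optimizer": nudge the guess downhill
print(w, b, loss)                              # w -> ~3, b -> ~1, loss -> ~0
```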
###Code
model.compile(optimizer='sgd', loss='mean_squared_error')
###Output
_____no_output_____
###Markdown
Providing the Data
Next up we'll feed in some data. In this case we are taking 9 xs and 9 ys. You can see that the relationship between these is roughly y=2x+1, so where x = 4, y = 9 etc. (a couple of the points, such as (0, 0), don't fit that line exactly). A Python library called 'Numpy' provides lots of array type data structures that are a de facto standard way of doing it. We declare that we want to use these by specifying the values as an np.array[]
###Code
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0,-2.0,-3.0,-4.0], dtype=float)
ys = np.array([-1.0, 0.0, 3.0, 5.0, 7.0, 9.0,-3.0,-5.0,-7.0], dtype=float)
###Output
_____no_output_____
###Markdown
Training the Neural Network
The process of training the neural network, where it 'learns' the relationship between the Xs and Ys, is in the **model.fit** call. This is where it will go through the loop we spoke about above, making a guess, measuring how good or bad it is (aka the loss), using the optimizer to make another guess etc. It will do it for the number of epochs you specify. When you run this code, you'll see the loss on the right hand side.
###Code
model.fit(xs, ys, epochs=1000)
###Output
Epoch 1/1000
1/1 [==============================] - 0s 1ms/step - loss: 5.5134
Epoch 2/1000
1/1 [==============================] - 0s 828us/step - loss: 4.3312
Epoch 3/1000
1/1 [==============================] - 0s 1ms/step - loss: 3.4366
Epoch 4/1000
1/1 [==============================] - 0s 1ms/step - loss: 2.7584
Epoch 5/1000
1/1 [==============================] - 0s 1ms/step - loss: 2.2429
Epoch 6/1000
1/1 [==============================] - 0s 1ms/step - loss: 1.8499
Epoch 7/1000
1/1 [==============================] - 0s 1ms/step - loss: 1.5492
Epoch 8/1000
1/1 [==============================] - 0s 1ms/step - loss: 1.3180
Epoch 9/1000
1/1 [==============================] - 0s 847us/step - loss: 1.1391
Epoch 10/1000
1/1 [==============================] - 0s 945us/step - loss: 0.9999
Epoch 11/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.8906
Epoch 12/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.8039
Epoch 13/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.7344
Epoch 14/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.6780
Epoch 15/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.6317
Epoch 16/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.5930
Epoch 17/1000
1/1 [==============================] - 0s 743us/step - loss: 0.5602
Epoch 18/1000
1/1 [==============================] - 0s 793us/step - loss: 0.5320
Epoch 19/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.5073
Epoch 20/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.4856
Epoch 21/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.4660
Epoch 22/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.4483
Epoch 23/1000
1/1 [==============================] - 0s 718us/step - loss: 0.4321
Epoch 24/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.4171
Epoch 25/1000
1/1 [==============================] - 0s 773us/step - loss: 0.4032
Epoch 26/1000
1/1 [==============================] - 0s 724us/step - loss: 0.3901
Epoch 27/1000
1/1 [==============================] - 0s 975us/step - loss: 0.3778
Epoch 28/1000
1/1 [==============================] - 0s 760us/step - loss: 0.3662
Epoch 29/1000
1/1 [==============================] - 0s 854us/step - loss: 0.3552
Epoch 30/1000
1/1 [==============================] - 0s 706us/step - loss: 0.3447
Epoch 31/1000
1/1 [==============================] - 0s 709us/step - loss: 0.3347
Epoch 32/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.3252
Epoch 33/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.3161
Epoch 34/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.3074
Epoch 35/1000
1/1 [==============================] - 0s 3ms/step - loss: 0.2991
Epoch 36/1000
1/1 [==============================] - 0s 4ms/step - loss: 0.2911
Epoch 37/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.2834
Epoch 38/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2761
Epoch 39/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2690
Epoch 40/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2623
Epoch 41/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2558
Epoch 42/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2495
Epoch 43/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2436
Epoch 44/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2378
Epoch 45/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2323
Epoch 46/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.2270
Epoch 47/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.2219
Epoch 48/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.2171
Epoch 49/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.2124
Epoch 50/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.2079
Epoch 51/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.2036
Epoch 52/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1994
Epoch 53/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1954
Epoch 54/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1916
Epoch 55/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1879
Epoch 56/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1844
Epoch 57/1000
1/1 [==============================] - 0s 994us/step - loss: 0.1810
Epoch 58/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1777
Epoch 59/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1746
Epoch 60/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1716
Epoch 61/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1687
Epoch 62/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1660
Epoch 63/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1633
Epoch 64/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1607
Epoch 65/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1583
Epoch 66/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1559
Epoch 67/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1537
Epoch 68/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1515
Epoch 69/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1494
Epoch 70/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1474
Epoch 71/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1455
Epoch 72/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1436
Epoch 73/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1418
Epoch 74/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1401
Epoch 75/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1385
Epoch 76/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1369
Epoch 77/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1354
Epoch 78/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1340
Epoch 79/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1326
Epoch 80/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1312
Epoch 81/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1299
Epoch 82/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1287
Epoch 83/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1275
Epoch 84/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1264
Epoch 85/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1253
Epoch 86/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1242
Epoch 87/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1232
Epoch 88/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1223
Epoch 89/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1213
Epoch 90/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1204
Epoch 91/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1196
Epoch 92/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1188
Epoch 93/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1180
Epoch 94/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1172
Epoch 95/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1165
Epoch 96/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1158
Epoch 97/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1151
Epoch 98/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1145
Epoch 99/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.1138
Epoch 100/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1132
Epoch 101/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1127
Epoch 102/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1121
Epoch 103/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1116
Epoch 104/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1111
Epoch 105/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.1106
[epochs 106-693: the loss decreases steadily to ~0.0990 by epoch 200 and plateaus at 0.0988 from around epoch 230 onward; every epoch runs a single batch (1/1) at roughly 1-2 ms/step]
Epoch 694/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 895/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 896/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 897/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 898/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 899/1000
1/1 [==============================] - 0s 7ms/step - loss: 0.0988
Epoch 900/1000
1/1 [==============================] - 0s 3ms/step - loss: 0.0988
Epoch 901/1000
1/1 [==============================] - 0s 3ms/step - loss: 0.0988
Epoch 902/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 903/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 904/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 905/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 906/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 907/1000
1/1 [==============================] - 0s 3ms/step - loss: 0.0988
Epoch 908/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 909/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 910/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 911/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 912/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 913/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 914/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 915/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 916/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 917/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 918/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 919/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 920/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 921/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 922/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 923/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 924/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 925/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 926/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 927/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 928/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 929/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 930/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 931/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 932/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 933/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 934/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 935/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 936/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 937/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 938/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 939/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 940/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 941/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 942/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 943/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 944/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 945/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 946/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 947/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 948/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 949/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 950/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 951/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 952/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 953/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 954/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 955/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 956/1000
1/1 [==============================] - 0s 895us/step - loss: 0.0988
Epoch 957/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 958/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 959/1000
1/1 [==============================] - 0s 944us/step - loss: 0.0988
Epoch 960/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 961/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 962/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 963/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 964/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 965/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 966/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 967/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 968/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 969/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 970/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 971/1000
1/1 [==============================] - 0s 6ms/step - loss: 0.0988
Epoch 972/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 973/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 974/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 975/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 976/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 977/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 978/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 979/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 980/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 981/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 982/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 983/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 984/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 985/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 986/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 987/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 988/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 989/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 990/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 991/1000
1/1 [==============================] - 0s 3ms/step - loss: 0.0988
Epoch 992/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 993/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 994/1000
1/1 [==============================] - 0s 3ms/step - loss: 0.0988
Epoch 995/1000
1/1 [==============================] - 0s 7ms/step - loss: 0.0988
Epoch 996/1000
1/1 [==============================] - 0s 1ms/step - loss: 0.0988
Epoch 997/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 998/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 999/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
Epoch 1000/1000
1/1 [==============================] - 0s 2ms/step - loss: 0.0988
###Markdown
Ok, now you have a model that has been trained to learn the relationship between X and Y. You can use the **model.predict** method to have it figure out the Y for a previously unknown X. So, for example, if X = 21, what do you think Y will be? Take a guess before you run this code:
###Code
print(model.predict([21.0]))
###Output
[[42.88888]]
|
notebooks/.ipynb_checkpoints/0-Background-checkpoint.ipynb | ###Markdown
Simple Transverse WavesHere we will consider plane waves moving in opposite directions. Wave 1: $E1=cos(k x-\omega t)$ → Wave 2: $E2=cos(-k x -\omega t)$ ← Addition of the two waves: $E_r=cos(k x - \omega t) + cos(-k x - \omega t)$ $ = 2cos(kx)cos(-\omega t)$ Average square of the total field (detectable power): $\langle E_r^2 \rangle = 2+2cos(2kx)$ Standing Wave Combination of Two Waves Demonstration
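For completeness, the standing-wave form follows from the sum-to-product identity $\cos A + \cos B = 2\cos\left(\frac{A+B}{2}\right)\cos\left(\frac{A-B}{2}\right)$. With $A = kx-\omega t$ and $B = -kx-\omega t$:

$$E_r = \cos(kx-\omega t) + \cos(-kx-\omega t) = 2\cos(kx)\cos(\omega t)$$

so the field is a fixed spatial envelope $2\cos(kx)$ oscillating in time, which is exactly the standing-wave pattern animated below.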
###Code
# Imports needed by this cell (the original notebook assumed these were already loaded)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import Image

# First set up the figure, the axis, and the plot element we want to animate
fig, axs = plt.subplots(5,figsize=(16,8))
for ax in axs:
ax.set_xlim(( -4, 4))
ax.set_ylim((-1, 1))
#set limit for E_sum E_T^2 and E_T^2 average.
axs[2].set_ylim((-2,2))
axs[3].set_ylim((0,4))
axs[4].set_ylim((0,1))
plt.xlabel('microns')
axs[0].set_ylabel('E1',rotation=0,labelpad=10)
axs[1].set_ylabel('E2',rotation=0,labelpad=10)
axs[2].set_ylabel(r'$E_T = E1 + E2$',rotation=0,labelpad=25)
axs[3].set_ylabel(r'$E_T^2$',rotation=0,labelpad=25)
axs[4].set_ylabel(r'$E_T^2$ Average',rotation=0,labelpad=45)
line1, = axs[0].plot([], [], lw=2)
line2, = axs[1].plot([], [], lw=2)
line3, = axs[2].plot([], [], lw=2)
line4, = axs[3].plot([], [], lw=2)
line5, = axs[4].plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line1.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
line4.set_data([], [])
line5.set_data([], [])
return (line1, line2, line3, line4, line5,)
# animation function. This is called sequentially
def animate(i):
x = np.linspace(-4, 4, 1000)
y1 = np.cos(2 * np.pi * (x - 0.04 * i))
line1.set_data(x, y1)
y2 = np.cos(2 * np.pi * (-x - 0.04 * i))
line2.set_data(x, y2)
y3 = y1+y2
line3.set_data(x,y3)
y4 =y3**2
line4.set_data(x,y4)
y5 = (2+2*np.cos(4*np.pi*x))/4 #Normalized Average
line5.set_data(x,y5)
return (line1,line2,line3,line4,line5,)
# call the animator. blit=True means only re-draw the parts that
# have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=25, interval=40, blit=True)
#Can show by running "anim"
anim.save('img/combination-of-two-waves.gif', writer='imagemagick', fps=30)
Image(url='img/combination-of-two-waves.gif')
###Output
_____no_output_____ |
.ipynb_checkpoints/B+Tree - The good one-checkpoint.ipynb | ###Markdown
B+Tree
###Code
# The abc module forces derived classes to implement
# a particular method by using @abstractmethod
from abc import ABCMeta, abstractmethod
# Makes it easier to work with lists: insert while preserving the order
import bisect
from math import floor as floor
class Node(metaclass=ABCMeta):
def __init__(self):
self.num = 0
self.keys = []
@abstractmethod
    def getLoc(self, key):
        """Return the position at which key belongs within this node."""
        pass
    @abstractmethod
    def insert(self, key, value):
        """Return None if there was no split; otherwise return the split info."""
        pass
@abstractmethod
def display(self): pass
class Split:
# TODO: Refactor Split
def __init__(self, key, node_leaf, node_right):
self.key = key
self.left = node_leaf
self.right = node_right
class LNode(Node):
def __init__(self):
super(LNode, self).__init__();
self.values = []
def getLoc(self, key):
for i, key_i in enumerate(self.keys):
if key_i >= key:
return i
        # If no position was found, return the final position
return len(self.keys)
#return bisect.bisect_left(self.keys, key)
def insert(self, key, value):
i = self.getLoc(key)
if self.num == M:
            # Node is full, so it must be split
mid = floor((M + 1) / 2)
sNum = self.num - mid
            # Create a sibling node and assign it half of the elements
sibling = LNode()
sibling.num = sNum
sibling.keys = self.keys[mid:]
sibling.values = self.values[mid:]
self.keys = self.keys[:mid]
self.values = self.values[:mid]
self.num = mid
if i < mid:
self.insertNonFull(key, value, i)
else:
sibling.insertNonFull(key, value, i-mid)
            # notify the parent node
result = Split(sibling.keys[0], self, sibling)
return result
else:
self.insertNonFull(key, value, i)
return None
def insertNonFull(self, key, value, i):
#print("key: {}, val: {}, i: {}".format(key, value, i))
self.keys.insert(i, key)
self.values.insert(i, value)
self.num += 1
def display(self):
print('\t<LNode>\t', end='')
for key, val in zip(self.keys, self.values):
print("{} -> {}\t".format(key, val), end='')
print('')
def showChildren(self):
print("Children: {}".format(len(self.values)))
print("Children: {}".format(self.num))
print("")
class INode(Node):
def __init__(self):
super(INode, self).__init__();
self.children = []
def getLoc(self, key):
for i, key_i in enumerate(self.keys):
if key_i >= key:
return i
        # If no position was found, return the final position
return len(self.keys)
#return bisect.bisect_left(self.keys, key)
def insert(self, key, value):
if self.num == N:
            # Split
mid = floor((N + 1) / 2)
sNum = self.num - mid
sibling = INode()
sibling.num = sNum
sibling.keys = self.keys[mid:]
sibling.children = self.children[mid:]
self.keys = self.keys[:mid]
self.children = self.children[:mid]
            self.num = mid - 1  # self.keys[mid-1] is promoted up to the parent
result = Split(self.keys[mid-1], self, sibling)
            # insert on the appropriate side
            if key < result.key:  # the key belongs to the left of the promoted key
self.insertNonFull(key, value)
else:
sibling.insertNonFull(key, value)
return result
else:
self.insertNonFull(key, value)
return None
def insertNonFull(self, key, value):
i = self.getLoc(key)
result = self.children[i].insert(key, value)
if result is not None:
            # (if result is None, nothing needs to change)
if i == self.num:
                # Insert at the right end
#self.keys[i] = result.key
#self.children[i] = result.left
#self.children[i+1] = result.right
self.keys.insert(i, result.key)
self.children.insert(i, result.left)
self.children.insert(i+1, result.right)
self.num += 1
else:
self.children.insert(i, result.left)
self.children.insert(i + 1, result.right)
self.keys.insert(i, result.key)
self.num += 1;
def display(self):
print("Displaying INode")
print('<INode>\t', end='')
for key in self.keys:
print('{}\t'.format(key), end='')
print("")
for child in self.children:
if isinstance(child, INode):
print("")
child.display()
def showChildren(self):
print("Children INode: {}".format(len(self.children)))
print("Children INode: {}".format(self.num))
self.display()
print("")
class BTree:
def __init__(self, degree):
self.max_leafs = degree - 1
self.max_inner_nodes = degree
        # The node classes read the module-level M and N, so publish them globally
        global M, N
        M = self.max_leafs
        N = self.max_inner_nodes
self.root = LNode()
def insert(self, key, value):
print("BEFORE ADD")
self.root.showChildren()
print("Insertar [{}]={}".format(key, value))
result = self.root.insert(key, value)
if result is not None:
            # The root was split
            # Create a new root
_root = INode()
_root.num = 2
_root.keys.insert(0, result.key)
_root.children.insert(0, result.left)
_root.children.insert(1, result.right)
self.root = _root
print("AFTER ADD")
self.root.showChildren()
print("")
print("- - -")
print("")
def find(self, key):
node = self.root
while isinstance(node, INode):
idx = node.getLoc(key)
node = node.children[idx]
        # We are at a leaf
idx = node.getLoc(key)
if idx < node.num and node.keys[idx] == key:
return node.values[idx]
else:
return None
def display(self):
self.root.display()
tree = BTree(3)
# Maximum number of keys in a leaf node
M = 2;
# Maximum number of entries in an internal (intermediate) node
N = 3;
tree.insert(1, "1111")
tree.insert(2, "2222")
tree.insert(4, "4444")
tree.insert(3, "3333")
print(tree.find(1))
tree.display()
tree.insert(5, "5555")
tree.display()
tree.insert(6, "6666")
tree.display()
tree.insert(9, "9999")
tree.display()
###Output
Displaying INode
<INode> 3 5
Displaying INode
<INode> 2 3
<LNode> 1 -> 1111
<LNode> 2 -> 2222
Displaying INode
<INode> 4 5
<LNode> 3 -> 3333
<LNode> 4 -> 4444
Displaying INode
<INode> 6
<LNode> 5 -> 5555
<LNode> 6 -> 6666 9 -> 9999
<LNode> 5 -> 5555
<LNode> 4 -> 4444
<LNode> 3 -> 3333
<LNode> 2 -> 2222
Displaying INode
<INode> 4 5
<LNode> 3 -> 3333
<LNode> 4 -> 4444
|
Natural Language Processing/4-Topic-Modelling.ipynb | ###Markdown
Topic Modeling IntroductionTopic modeling is another popular text analysis technique. The goal of topic modeling is to find the various topics that are present in your corpus. Each document in the corpus will be made up of at least one topic, if not multiple topics. In this notebook, I will apply Latent Dirichlet Allocation (LDA), which is one of many topic modeling techniques; it was specifically designed for text data. Latent means hidden (topics) and Dirichlet refers to a probability distribution. To use a topic modeling technique, we need to provide 1. a document-term matrix and 2. the number of topics you would like the algorithm to pick up. Once the topic modeling technique is applied, our job is to interpret the results and see if the mix of words in each topic makes sense. If they don't make sense, we can try changing the number of topics, the terms in the document-term matrix, the model parameters, or even a different model. Topic Modeling - Attempt 1 (All Text)
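As a quick illustration of what a document-term matrix looks like, here is a toy sketch with made-up sentences (purely illustrative and separate from the transcript data used below):

```python
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

toy_docs = ["the cat sat", "the dog sat", "the dog barked"]   # made-up documents
toy_cv = CountVectorizer()
toy_dtm = pd.DataFrame(toy_cv.fit_transform(toy_docs).toarray(),
                       columns=toy_cv.get_feature_names())
print(toy_dtm)  # one row per document, one column per term, word counts in the cells
```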
###Code
# reading the document term matrix
import pandas as pd
import pickle
data = pd.read_pickle('pickle files/dtm_stop.pkl')
data.head()
# importing necessary modules for LDA with gensim
from gensim import matutils, models
import scipy.sparse
# we can also enable logging for debugging, which helps with later hyperparameter tuning
# import logging
# logging.basicConfig(format = '%(asctime)s : %(levelname)s : %(message)s',level = logging.INFO)
# we need term document matrix
tdm = data.transpose()
tdm.head()
# putting tdm into a new gensim format
# df -----> sparse matrix ----->gensim corpus
sparse_count = scipy.sparse.csr_matrix(tdm)
corpus = matutils.Sparse2Corpus(sparse_count)
# Gensim also requires dictionary of all the terms and
# their respective location in the tdm
cv = pickle.load(open("pickle files/cv_stop.pkl", "rb"))
id2word = dict((value, key) for key, value in cv.vocabulary_.items())
###Output
_____no_output_____
###Markdown
Now that we have the corpus (the term-document matrix) and id2word (a dictionary of (location: term) pairs), we need to specify two more parameters - the number of topics and the number of passes (iterations). Let's take the number of topics as 2 and check the results.
###Code
# specifying the no. of topics and no. of passes
lda = models.LdaModel(corpus = corpus,
id2word = id2word,
num_topics = 2, passes = 10)
lda.print_topics()
# increasing number of topics for better insights
lda = models.LdaModel(corpus = corpus,
id2word = id2word,
num_topics = 3, passes = 10)
lda.print_topics()
###Output
_____no_output_____
###Markdown
Clearly, there are no meaningful results. Last time I only modified the parameters; this time I will modify the term list as well. Topic Modeling - Attempt 2 (Nouns Only)
###Code
# function to pull out all the nouns from a text string
from nltk import word_tokenize, pos_tag
def nouns(text):
# string of text ----> tokenize the text ----> pull out nouns
is_noun = lambda pos: pos[:2] == 'NN'
tokenized = word_tokenize(text)
all_nouns = [word for (word, pos) in pos_tag(tokenized) if is_noun(pos)]
return ' '.join(all_nouns)
# loading the cleaned text data before count vectorization
data_clean = pd.read_pickle('pickle files/clean.pkl')
data_clean
# applying the nouns function to the transcripts to filter only on nouns
data_nouns = pd.DataFrame(data_clean.transcript.apply(nouns))
data_nouns
# creating a new document-term matrix using only nouns
from sklearn.feature_extraction import text
from sklearn.feature_extraction.text import CountVectorizer
# re-adding stop_words since we are creating a new dtm
# defining some additional stop words
add_stop_words = ['like', 'im', 'know', 'just', 'dont', 'thats', 'right', 'people',
'youre', 'got', 'gonna', 'time', 'think', 'yeah', 'said']
stop_words = text.ENGLISH_STOP_WORDS.union(add_stop_words)
# creating a dtm with only nouns
cvn = CountVectorizer(stop_words = stop_words)
model = cvn.fit_transform(data_nouns.transcript)
data_new_dtm = pd.DataFrame(model.toarray(), columns = cvn.get_feature_names())
data_new_dtm.index = data_nouns.index
data_new_dtm
# creating gensim corpus
new_corpus = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_new_dtm.transpose()))
# creating vocabulary dictionary
new_id2word = dict((v, k) for k, v in cvn.vocabulary_.items())
# starting with no of topics = 2
new_lda = models.LdaModel(corpus = new_corpus, id2word = new_id2word,
num_topics = 2 ,passes = 10)
new_lda.print_topics()
# no of topics = 3
new_lda = models.LdaModel(corpus = new_corpus, id2word = new_id2word,
num_topics = 3 ,passes = 10)
new_lda.print_topics()
###Output
_____no_output_____
###Markdown
Topic Modeling - Attempt 3 (Nouns and Adjectives)
###Code
# creating a function to pull out nouns from a string of text
def nouns_adj(text):
'''Given a string of text, tokenize the text and pull out only the nouns and adjectives.'''
is_noun_adj = lambda pos: pos[:2] == 'NN' or pos[:2] == 'JJ'
tokenized = word_tokenize(text)
nouns_adj = [word for (word, pos) in pos_tag(tokenized) if is_noun_adj(pos)]
return ' '.join(nouns_adj)
# Applying the nouns function to the transcripts to filter only on nouns
data_nouns_adj = pd.DataFrame(data_clean.transcript.apply(nouns_adj))
data_nouns_adj
# Create a new document-term matrix using only nouns and adjectives, also remove common words with max_df
cvna = CountVectorizer(stop_words=stop_words, max_df=.8)
data_cvna = cvna.fit_transform(data_nouns_adj.transcript)
data_dtmna = pd.DataFrame(data_cvna.toarray(), columns=cvna.get_feature_names())
data_dtmna.index = data_nouns_adj.index
data_dtmna
# Creating the gensim corpus
corpusna = matutils.Sparse2Corpus(scipy.sparse.csr_matrix(data_dtmna.transpose()))
# Creating the vocabulary dictionary
id2wordna = dict((v, k) for k, v in cvna.vocabulary_.items())
# no of topics = 2
ldana = models.LdaModel(corpus=corpusna, num_topics=2, id2word=id2wordna, passes=10)
ldana.print_topics()
# no of topics = 3
ldana = models.LdaModel(corpus=corpusna, num_topics=3, id2word=id2wordna, passes=10)
ldana.print_topics()
# no of topics = 4
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=10)
ldana.print_topics()
###Output
_____no_output_____
###Markdown
Identify Topics in Each DocumentOut of the various topic models we looked at, the nouns-and-adjectives model with 4 topics made the most sense. So let's pull that down here and run it through some more iterations to get more fine-tuned topics.
###Code
# final LDA model (for now) with more no of passes
ldana = models.LdaModel(corpus=corpusna, num_topics=4, id2word=id2wordna, passes=80)
ldana.print_topics()
###Output
_____no_output_____
###Markdown
Trying to settle on these 4 topics as of now - Topic 0: guns - Topic 1: husband - Topic 2: mom, parents - Topic 3: grandma, friends
###Code
# checking which topic each transcript contains
corpus_transformed = ldana[corpusna]
list(zip([a for [(a,b)] in corpus_transformed], data_dtmna.index))
###Output
_____no_output_____ |
ex_10_ray_serve_example.ipynb | ###Markdown
Ray Serve - Model Serving© 2019-2022, Anyscale. All Rights Reserved Now we'll explore a short example for Ray Serve. This example is from the Ray Serve [scikit-learn example.](https://docs.ray.io/en/latest/serve/tutorials/sklearn.html)See also the Serve documentation's [mini-tutorials](https://docs.ray.io/en/latest/serve/tutorials/index.html) for using Serve with various frameworks.
###Code
import ray
from ray import serve
import requests # for making web requests
import tempfile
import os
import pickle
import json
import numpy as np
serve.start()
###Output
2022-04-19 11:29:41,124 INFO services.py:1460 -- View the Ray dashboard at [1m[32mhttp://127.0.0.1:8268[39m[22m
[2m[36m(ServeController pid=3675)[0m 2022-04-19 11:29:45,067 INFO checkpoint_path.py:15 -- Using RayInternalKVStore for controller checkpoint and recovery.
[2m[36m(ServeController pid=3675)[0m 2022-04-19 11:29:45,172 INFO http_state.py:106 -- Starting HTTP proxy with name 'SERVE_CONTROLLER_ACTOR:BkVkza:SERVE_PROXY_ACTOR-node:127.0.0.1-0' on node 'node:127.0.0.1-0' listening on '127.0.0.1:8000'
2022-04-19 11:29:46,505 INFO api.py:797 -- Started Serve instance in namespace 'serve'.
###Markdown
Create a Model to Serve We'll begin by training a classifier with the Iris data we used before, this time using [scikit-learn](https://scikit-learn.org/stable/). The details aren't too important for our purposes, except for the fact we'll save the trained model to disk for subsequent serving.
###Code
import sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import mean_squared_error
# Load data
iris_dataset = load_iris()
data, target, target_names = iris_dataset["data"], iris_dataset[
"target"], iris_dataset["target_names"]
# Instantiate model
model = GradientBoostingClassifier()
# Training and validation split
data, target = sklearn.utils.shuffle(data, target)
train_x, train_y = data[:100], target[:100]
val_x, val_y = data[100:], target[100:]
# Train and evaluate models
model.fit(train_x, train_y)
print("MSE:", mean_squared_error(model.predict(val_x), val_y))
###Output
MSE: 0.02
###Markdown
Save the model and labels to file. These could also be stored in S3 or some other "global" location, or fetched from a model registry.
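For example, pushing the same files to S3 might look roughly like this (a minimal sketch, not part of this tutorial: it assumes `boto3` is installed and AWS credentials are configured; the bucket name and keys are hypothetical placeholders, and `MODEL_PATH`/`LABEL_PATH` are defined in the next cell):

```python
import boto3

s3 = boto3.client("s3")
# Hypothetical bucket/keys, purely for illustration
s3.upload_file(MODEL_PATH, "my-model-bucket", "iris/iris_model_logistic_regression.pkl")
s3.upload_file(LABEL_PATH, "my-model-bucket", "iris/iris_labels.json")
```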
###Code
MODEL_PATH = os.path.join(tempfile.gettempdir(),
"iris_model_logistic_regression.pkl")
LABEL_PATH = os.path.join(tempfile.gettempdir(), "iris_labels.json")
with open(MODEL_PATH, "wb") as f:
pickle.dump(model, f)
with open(LABEL_PATH, "w") as f:
json.dump(target_names.tolist(), f)
###Output
_____no_output_____
###Markdown
Create a Deployment and Serve ItNext, we define a servable model by instantiating a class and defining the `__call__` method that Ray Serve will use.
###Code
@serve.deployment(route_prefix="/regressor", num_replicas=2)
class BoostingModel:
def __init__(self):
with open(MODEL_PATH, "rb") as f:
self.model = pickle.load(f)
with open(LABEL_PATH) as f:
self.label_list = json.load(f)
# async allows us to have this call concurrently
async def __call__(self, starlette_request):
payload = await starlette_request.json()
print("Worker: received starlette request with data", payload)
input_vector = [
payload["sepal length"],
payload["sepal width"],
payload["petal length"],
payload["petal width"],
]
prediction = self.model.predict([input_vector])[0]
human_name = self.label_list[prediction]
return {"result": human_name}
###Output
_____no_output_____
###Markdown
Deploy the model
###Code
BoostingModel.deploy()
###Output
2022-04-19 11:29:48,245 INFO api.py:618 -- Updating deployment 'BoostingModel'. component=serve deployment=BoostingModel
[2m[36m(ServeController pid=3675)[0m 2022-04-19 11:29:48,348 INFO deployment_state.py:1210 -- Adding 2 replicas to deployment 'BoostingModel'. component=serve deployment=BoostingModel
2022-04-19 11:29:50,252 INFO api.py:633 -- Deployment 'BoostingModel' is ready at `http://127.0.0.1:8000/regressor`. component=serve deployment=BoostingModel
###Markdown
Score the modelInternally, Serve stores the model as a Ray actor and routes traffic to it as the endpoint is queried, in this case over HTTP. Now let’s query the endpoint to see results.
###Code
sample_request_input = {
"sepal length": 1.2,
"sepal width": 1.0,
"petal length": 1.1,
"petal width": 0.9,
}
###Output
_____no_output_____
###Markdown
We can now send HTTP requests to our route `route_prefix=/regressor` at the default port 8000
###Code
response = requests.get(
"http://localhost:8000/regressor", json=sample_request_input)
print(response.text)
response = requests.get("http://localhost:8000/regressor", json=sample_request_input).json()
print(response)
deployments = serve.list_deployments()
print(f'deployments: {deployments}')
serve.shutdown()
###Output
[2m[36m(ServeController pid=3675)[0m 2022-04-19 11:30:02,607 INFO deployment_state.py:1236 -- Removing 2 replicas from deployment 'BoostingModel'. component=serve deployment=BoostingModel
|
02-novice/050EarthquakesExercise.ipynb | ###Markdown
Extended exercise: the biggest earthquake in the UK this century USGS earthquake catalog[GeoJSON](https://en.wikipedia.org/wiki/GeoJSON) is a JSON-based file format for sharing geographic data. One example dataset available in the GeoJSON format is the [USGS earthquake catalog](https://www.usgs.gov/natural-hazards/earthquake-hazards/earthquakes). A [web service *application programming interface* (API)](https://earthquake.usgs.gov/fdsnws/event/1/) is provided for programmatically accessing events in the earthquake catalog. Specifically, the `query` method allows querying the catalog for events with the [query parameters](https://earthquake.usgs.gov/fdsnws/event/1/parameters) passed as `key=value` pairs. We can use the [`requests` Python library](https://docs.python-requests.org/en/latest/) to simplify constructing the appropriate query string to add to the URL and to deal with sending the HTTP request.
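As a small illustration of how such `key=value` pairs end up in the final URL (a toy example; the real query parameters are defined below):

```python
from urllib.parse import urlencode

print(urlencode({"format": "geojson", "minmagnitude": "1"}))
# format=geojson&minmagnitude=1  -- this string is appended to the URL after a '?'
```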
###Code
import requests
###Output
_____no_output_____
###Markdown
We first define a variable URL for the earthquake catalog web service API.
###Code
earthquake_catalog_api_url = "http://earthquake.usgs.gov/fdsnws/event/1/query"
###Output
_____no_output_____
###Markdown
We now need to define the parameters of our query. We want to get the data in GeoJSON format for all events in the earthquake catalog with date on or after 1st January 2000 and with location within [a bounding box covering the UK](http://bboxfinder.com/49.877000,-8.182000,60.830000,1.767000). We will filter the events to only request those with magnitude greater than or equal to 1, to avoid downloading responses for the more frequent small magnitude events. Finally, we want the results to be returned in order of ascending time.
###Code
query_parameters = {
"format": "geojson",
"starttime": "2000-01-01",
"maxlatitude": "60.830",
"minlatitude": "49.877",
"maxlongitude": "1.767",
"minlongitude": "-8.182",
"minmagnitude": "1",
"orderby": "time-asc"
}
###Output
_____no_output_____
###Markdown
We can now execute the API query using the [`requests.get` function](https://docs.python-requests.org/en/latest/api/#requests.get). This takes as arguments the URL to make the request from and, optionally, a dictionary argument `params` containing the parameters to send in the query string for the request. A [`requests.Response` object](https://docs.python-requests.org/en/latest/api/#requests.Response) is returned, which represents the server's response to the HTTP request made.
###Code
quakes_response = requests.get(earthquake_catalog_api_url, params=query_parameters)
###Output
_____no_output_____
###Markdown
The response object has various attributes and methods. A useful attribute to check is the `ok` attribute which will be `False` if the status code for the response to the request corresponds to a client or server error and `True` otherwise.
###Code
quakes_response.ok
###Output
_____no_output_____
###Markdown
We can also check specifically that the status code corresponds to [the expected `200 OK`](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#2xx_success) using the `status_code` attribute
###Code
quakes_response.status_code == 200
###Output
_____no_output_____
###Markdown
The actual content of the response can be accessed in various formats. The `content` attribute gives the content of the response as [bytes](https://docs.python.org/3/library/stdtypes.html#bytes). As here we expect the response content to be Unicode-encoded text, the `text` attribute is more relevant as it gives the response content as a Python string. We can display the first 100 characters of the response content as follows
###Code
print(quakes_response.text[:100])
###Output
_____no_output_____ |
notebooks/Using Jupyter/interactive-dashboards-with-jupyter.ipynb | ###Markdown
Interactive Dashboards with JupyterLet's say that you have to regularly send a [Folium](https://blog.dominodatalab.com/creating-interactive-crime-maps-with-folium/) map to your colleague's email with all the earthquakes of the past day. To be able to do that, you first need an earthquake data set that updates regularly (at least daily). A data feed that updates every 5 minutes can be found at the [USGS website](https://earthquake.usgs.gov/earthquakes/feed/v1.0/csv.php). Then, you can use Jupyter to write the code to load this data and create the map. FoliumFolium is a powerful Python library that helps you create several types of [Leaflet](http://leafletjs.com/) maps. The fact that the Folium results are interactive makes this library very useful for dashboard building. To get an idea, just zoom/click around on the next map to get an impression. The [Folium github](https://github.com/python-visualization/folium) contains many other examples. By default, Folium creates a map in a separate HTML file. In case you use Jupyter, you might prefer to get inline maps. This Jupyter example shows how to display maps inline.
###Code
import folium
from IPython.display import HTML
def display(m, height=300):
"""Takes a folium instance and embed HTML."""
m._build_map()
    srcdoc = m.HTML.replace('"', '&quot;')
embed = HTML('<iframe srcdoc="{0}" '
'style="width: 100%; height: {1}px; '
'border: none"></iframe>'.format(srcdoc, height))
return embed
# print version number of your Folium package
print("Folium Version: ", folium.__version__)
map = folium.Map(location=[37.76, -122.45])
map.simple_marker([37.76, -122.45])
display(map)
###Output
_____no_output_____
###Markdown
Dashboard
###Code
import pandas as pd
import folium
from matplotlib.colors import Normalize, rgb2hex
import matplotlib.cm as cm
data = pd.read_csv('http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.csv')
norm = Normalize(data['mag'].min(), data['mag'].max())
map = folium.Map(location=[48, -102], zoom_start=3)
for eq in data.iterrows():
color = rgb2hex(cm.OrRd(norm(float(eq[1]['mag']))))
map.circle_marker([eq[1]['latitude'], eq[1]['longitude']],
popup=eq[1]['place'],
radius=20000*float(eq[1]['mag']),
line_color=color,
fill_color=color)
map.create_map(path='results/earthquake.html')
# need to replace CDN with https URLs
with open('results/earthquake.html', 'r') as f:
contents = f.read()
contents = contents.replace("http://cdn.leafletjs.com/leaflet-0.5/", "//cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.7/")
with open('results/earthquake2.html', 'w') as f:
f.writelines(contents)
%%HTML
<iframe width="100%" height="350" src="https://app.dominodatalab.com/r00sj3/jupyter/raw/latest/results/earthquake2.html?inline=true"></iframe>
###Output
_____no_output_____ |
Kaggle-Competitions/Home Insurance/Exploratory Analysis.ipynb | ###Markdown
Exploratory Data Analysis
###Code
# take a look at some of the examples (assumes the training data was already loaded into `train`, e.g. via pd.read_csv)
train.head()
train[train.QuoteConversion_Flag==0].head()
train[train.QuoteConversion_Flag==1].head()
# see 5 number summary
train.describe()
train.QuoteConversion_Flag.value_counts().plot(kind='bar');
def num_zero_features(row):
return list(row).count(0)
train['count_zero'] = train.apply(num_zero_features, axis=1)
train = train.fillna(-1)
train['count_missing'] = train.apply(lambda x: list(x).count(-1), axis=1)
train.groupby(['QuoteConversion_Flag', 'count_zero']).size()
train.boxplot(column='count_zero', by='QuoteConversion_Flag')
train.groupby(['QuoteConversion_Flag', 'count_missing']).size()
###Output
_____no_output_____ |
docs/gallery/plot_density.ipynb | ###Markdown
Density histogram examples
###Code
import pandas as pd
import toto
import matplotlib.pyplot as plt
from toto.inputs.txt import TXTfile
import os
# read the file
hindcast='https://raw.githubusercontent.com/calypso-science/Toto/master/_tests/txt_file/tahuna_hindcast.txt'
measured='https://raw.githubusercontent.com/calypso-science/Toto/master/_tests/txt_file/tahuna_measured.txt'
os.system('wget %s ' % hindcast)
os.system('wget %s ' % measured)
hd=TXTfile(['tahuna_hindcast.txt'],colNamesLine=1,skiprows=1,unitNamesLine=0,time_col_name={'Year':'year','Month':'month','Day':'day','Hour':'hour','Min':'Minute'})
hd.reads()
hd.read_time()
hd=hd._toDataFrame()
# # Processing
hd[0].StatPlots.density_diagramm(X='tp',Y='hs',args={
'X name':'Wave period',
'Y name':'Significant wave height',
'Y unit':'m',
'X unit':'s',
'Y limits':[0,5],
'X limits':[0,20],
'display':'On',
})
###Output
_____no_output_____ |
Data Visualization/exercise-bar-charts-and-heatmaps.ipynb | ###Markdown
**This notebook is an exercise in the [Data Visualization](https://www.kaggle.com/learn/data-visualization) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/bar-charts-and-heatmaps).**--- In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate **bar charts** and **heatmaps** to understand patterns in the data. ScenarioYou've recently decided to create your very own video game! As an avid reader of [IGN Game Reviews](https://www.ign.com/reviews/games), you hear about all of the most recent game releases, along with the ranking they've received from experts, ranging from 0 (_Disaster_) to 10 (_Masterpiece_).You're interested in using [IGN reviews](https://www.ign.com/reviews/games) to guide the design of your upcoming game. Thankfully, someone has summarized the rankings in a really useful CSV file that you can use to guide your analysis. SetupRun the next cell to import and configure the Python libraries that you need to complete the exercise.
###Code
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
###Output
Setup Complete
###Markdown
The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
###Code
# Set up code checking
import os
if not os.path.exists("../input/ign_scores.csv"):
os.symlink("../input/data-for-datavis/ign_scores.csv", "../input/ign_scores.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex3 import *
print("Setup Complete")
###Output
Setup Complete
###Markdown
Step 1: Load the dataRead the IGN data file into `ign_data`. Use the `"Platform"` column to label the rows.
###Code
# Path of the file to read
ign_filepath = "../input/ign_scores.csv"
# Fill in the line below to read the file into a variable ign_data
ign_data = pd.read_csv(ign_filepath, index_col='Platform')
# Run the line below with no changes to check that you've loaded the data correctly
step_1.check()
# Lines below will give you a hint or solution code
#step_1.hint()
# step_1.solution()
###Output
_____no_output_____
###Markdown
Step 2: Review the dataUse a Python command to print the entire dataset.
###Code
# Print the data
# Your code here
ign_data
###Output
_____no_output_____
###Markdown
The dataset that you've just printed shows the average score, by platform and genre. Use the data to answer the questions below.
###Code
# Fill in the line below: What is the highest average score received by PC games,
# for any platform?
high_score = 7.759930
# Fill in the line below: On the Playstation Vita platform, which genre has the
# lowest average score? Please provide the name of the column, and put your answer
# in single quotes (e.g., 'Action', 'Adventure', 'Fighting', etc.)
worst_genre = 'Simulation'
# Check your answers
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
# step_2.solution()
###Output
_____no_output_____
###Markdown
Step 3: Which platform is best?Since you can remember, your favorite video game has been [**Mario Kart Wii**](https://www.ign.com/games/mario-kart-wii), a racing game released for the Wii platform in 2008. And, IGN agrees with you that it is a great game -- their rating for this game is a whopping 8.9! Inspired by the success of this game, you're considering creating your very own racing game for the Wii platform. Part ACreate a bar chart that shows the average score for **racing** games, for each platform. Your chart should have one bar for each platform.
###Code
# Bar chart showing average score for racing games by platform
# Your code here
# set the width and height of the figure
plt.figure(figsize=(25,8))
# Add Title to project
plt.title("Average Score for Racing Games")
# Bar Chart Showing the average score for racing games, for each platform
sns.barplot(x=ign_data.index, y=ign_data['Racing'])
# Add label to vertical axis
plt.ylabel("Average Score")
# Check your answer
step_3.a.check()
# Lines below will give you a hint or solution code
#step_3.a.hint()
# step_3.a.solution_plot()
###Output
_____no_output_____
###Markdown
Part BBased on the bar chart, do you expect a racing game for the **Wii** platform to receive a high rating? If not, what gaming platform seems to be the best alternative?
###Code
# step_3.b.hint()
# Check your answer (Run this code cell to receive credit!)
# step_3.b.solution()
###Output
_____no_output_____
###Markdown
Step 4: All possible combinations!Eventually, you decide against creating a racing game for Wii, but you're still committed to creating your own video game! Since your gaming interests are pretty broad (_... you generally love most video games_), you decide to use the IGN data to inform your new choice of genre and platform. Part AUse the data to create a heatmap of average score by genre and platform.
###Code
# Heatmap showing average game score by platform and genre
# Your code here
# Set the Width & Height of the Figure
plt.figure(figsize=(25,8))
# Add Title to Figure
plt.title("Average Score by Genre and Platform")
# Heat map showing average score by genre and platform
sns.heatmap(data=ign_data, annot=True)
# Add Label for h axis
plt.xlabel("Genre")
# Check your answer
step_4.a.check()
# Lines below will give you a hint or solution code
#step_4.a.hint()
# step_4.a.solution_plot()
###Output
_____no_output_____
###Markdown
Part BWhich combination of genre and platform receives the highest average ratings? Which combination receives the lowest average rankings?
###Code
#step_4.b.hint()
# Check your answer (Run this code cell to receive credit!)
# step_4.b.solution()
###Output
_____no_output_____ |
notebooks/tutorial_change_params_srd.ipynb | ###Markdown
We create a household with two children
###Code
jean = Person(age=45, earn=40000)
jacques = Person(age=40, earn=50000)
jeanne = Dependent(age=4, child_care=10000)
joaquim = Dependent(age=8, child_care=8000)
hh = Hhold(jean, jacques, prov='qc')
hh.add_dependent(jeanne, joaquim)
###Output
_____no_output_____
###Markdown
We create an instance of the simulator for the 2020 tax year and run the household through the simulator
###Code
tax_form = tax(2020)
tax_form.compute(hh)
print(f'revenu disponible familial: {hh.fam_disp_inc}')
print(f'crédit pour frais de garde: {jean.qc_chcare + jacques.qc_chcare}')
srd.quebec.template.chcare??
vars(tax_form.prov['qc'])
tax_form.prov['qc'].chcare_young = 15000
tax_form.prov['qc'].chcare_old = 10000
###Output
_____no_output_____
###Markdown
We re-create the household and run it through the simulator with the new parameters.
###Code
jean = Person(age=45, earn=40000)
jacques = Person(age=40, earn=50000)
jeanne = Dependent(age=4, child_care=10000)
joaquim = Dependent(age=8, child_care=8000)
hh = Hhold(jean, jacques, prov='qc')
hh.add_dependent(jeanne, joaquim)
tax_form.prov['qc'].chcare_old = 8000
tax_form.compute(hh)
print(f'revenu disponible familial: {hh.fam_disp_inc}')
print(f'crédit pour frais de garde: {jean.qc_chcare + jacques.qc_chcare}')
###Output
_____no_output_____ |
notebooks/01-KerasPoisonousMushrooms.ipynb | ###Markdown
Keras versus Poisonous MushroomsThis example demonstrates building a simple dense neural network using Keras. The example uses [Agaricus Lepiota](https://archive.ics.uci.edu/ml/datasets/Mushroom) training data to detect poisonous mushrooms.
###Code
from pandas import read_csv
srooms_df = read_csv('../data/agaricus-lepiota.data.csv')
srooms_df.head()
###Output
_____no_output_____
###Markdown
Feature extractionThe ```LabelEncoder``` converts T/F-style data to 1 and 0. The ```LabelBinarizer``` converts categorical data to **one hot encoding**. If we wanted to use all the features in the training set, we would need to map each one out:

```python
column_names = srooms_df.axes[1]

def get_mapping(name):
    if (name == 'edibility' or name == 'gill-attachment'):
        return (name, sklearn.preprocessing.LabelEncoder())
    else:
        return (name, sklearn.preprocessing.LabelBinarizer())

mappings = list(map(lambda name: get_mapping(name), column_names))
```

We will use a subset of features to make it interesting. Are there simple rules or a handful of features that can be used to test edibility? Let's try a few.
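To make the **one hot encoding** idea concrete, here is a tiny sketch on made-up category labels (illustration only; the real mapping is built in the next cell):

```python
from sklearn.preprocessing import LabelBinarizer

LabelBinarizer().fit_transform(['a', 'c', 'f', 'a'])
# array([[1, 0, 0],
#        [0, 1, 0],
#        [0, 0, 1],
#        [1, 0, 0]])  -- one column per category, a single 1 per row
```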
###Code
from sklearn_pandas import DataFrameMapper
import sklearn
import numpy as np
mappings = ([
('edibility', sklearn.preprocessing.LabelEncoder()),
('odor', sklearn.preprocessing.LabelBinarizer()),
('habitat', sklearn.preprocessing.LabelBinarizer()),
('spore-print-color', sklearn.preprocessing.LabelBinarizer())
])
mapper = DataFrameMapper(mappings)
srooms_np = mapper.fit_transform(srooms_df.copy())
###Output
_____no_output_____
###Markdown
Now let's transform the textual data to a vector... The transformed data should have 26 features. The breakdown is as follows:* Edibility (0 = edible, 1 = poisonous)* odor (9 features): ```[almond=a, creosote=c, foul=f, anise=l, musty=m, none=n, pungent=p, spicy=s, fishy=y]```* habitat (7 features): ```[woods=d, grasses=g, leaves=l, meadows=m, paths=p, urban=u, waste=w]```* spore-print-color (9 features): ```[buff=b, chocolate=h, black=k, brown=n, orange=o, green=r, purple=u, white=w, yellow=y]```
###Code
print(srooms_np.shape)
print("Frist sample: {}".format(srooms_np[0]))
print(" edibility (poisonous): {}".format(srooms_np[0][0]))
print(" ordr (pungent): {}".format(srooms_np[0][1:10]))
print(" habitat (urban): {}".format(srooms_np[0][10:17]))
print(" spore-print-color (black): {}".format(srooms_np[0][17:]))
###Output
(8124, 26)
First sample: [1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0]
edibility (poisonous): 1
  odor (pungent): [0 0 0 0 0 0 1 0 0]
habitat (urban): [0 0 0 0 0 1 0]
spore-print-color (black): [0 0 1 0 0 0 0 0 0]
###Markdown
Before we train the neural network, let's split the data into training and test datasets.
###Code
from sklearn.model_selection import train_test_split
train, test = train_test_split(srooms_np, test_size = 0.2, random_state=7)
train_labels = train[:,0:1]
train_data = train[:,1:]
test_labels = test[:,0:1]
test_data = test[:,1:]
print('training data dims: {}, label dims: {}'.format(train_data.shape,train_labels.shape))
print('test data dims: {}, label dims: {}'.format(test_data.shape,test_labels.shape))
###Output
training data dims: (6499, 25), label dims: (6499, 1)
test data dims: (1625, 25), label dims: (1625, 1)
###Markdown
Model DefinitionWe will create a simple three layer neural network. The network contains two dense layers and a dropout layer (to avoid overfitting). Layer 1: Dense LayerA dense layer applies an activation function to the output of $W \cdot x + b$. If the dense layer only had three inputs and outputs, then the dense layer looks like this...  Under the covers, keras represents the layer's weights as a matrix. The inputs, outputs, and biases are vectors...$$ \begin{bmatrix} y_1 \\y_2 \\y_3\end{bmatrix}=relu\begin{pmatrix}\begin{bmatrix} W_{1,1} & W_{1,2} & W_{1,3} \\W_{2,1} & W_{2,2} & W_{2,3} \\W_{3,1} & W_{3,2} & W_{3,3}\end{bmatrix}\cdot\begin{bmatrix} x_1 \\x_2 \\x_3\end{bmatrix}+\begin{bmatrix} b_1 \\b_2 \\b_3\end{bmatrix}\end{pmatrix}$$ If this operation were decomposed further, it would look like this...$$ \begin{bmatrix} y_1 \\y_2 \\y_3\end{bmatrix}=\begin{bmatrix}relu(W_{1,1} x_1 + W_{1,2} x_2 + W_{1,3} x_3 + b_1) \\relu(W_{2,1} x_1 + W_{2,2} x_2 + W_{2,3} x_3 + b_2) \\relu(W_{3,1} x_1 + W_{3,2} x_2 + W_{3,3} x_3 + b_3)\end{bmatrix}$$ The Rectified Linear Unit (RELU) function looks like this... Layer 2: DropoutThe dropout layer prevents overfitting by randomly dropping inputs to the next layer. Layer 3: Dense LayerThis layer acts like the first one, except this layer applies a sigmoid activation function. The output is the probability a mushroom is poisonous. If a sample represents a small probability of poisoning, we'll want to know!$$y = sigmoid(W \cdot x + b)$$ Putting It TogetherFortunately, we don't need to worry about defining the parameters (the weights and biases) in Keras. We just define the layers in a sequence...
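To make the matrix form concrete, here is a tiny NumPy sketch of one dense-layer computation (the weights, bias, and input values below are made up for illustration; they are not the trained parameters):

```python
import numpy as np

W = np.array([[0.2, -0.5, 0.1],
              [0.7,  0.3, -0.2],
              [-0.4, 0.6,  0.9]])
x = np.array([1.0, 0.0, 1.0])
b = np.array([0.1, -0.1, 0.0])

relu = lambda z: np.maximum(z, 0.0)
y = relu(W @ x + b)
print(y)  # [0.4 0.4 0.5] -- element-wise relu of W.x + b
```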
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout
model = Sequential()
model.add(Dense(20, activation='relu', input_dim=25))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
###Output
Using TensorFlow backend.
###Markdown
Model Training Model CompileThis step configures the model for training with the following settings:* An optimizer (updates the weights based on the gradients of the loss function)* A loss function* Metrics to track during training
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Keras CallbacksKeras provides callbacks as a means to instrument internal state. In this example, we will write a tensorflow event log. The event log enables a tensorboard visualization of the translated model. The event log also captures key metrics during training. > Note: This step is completely optional and depends on the backend engine.
###Code
from keras.callbacks import TensorBoard
tensor_board = TensorBoard(log_dir='./logs/keras_srooms', histogram_freq=1)
model.fit(train_data, train_labels, epochs=10, batch_size=32, callbacks=[tensor_board])
###Output
INFO:tensorflow:Summary name dense_1/kernel:0 is illegal; using dense_1/kernel_0 instead.
INFO:tensorflow:Summary name dense_1/bias:0 is illegal; using dense_1/bias_0 instead.
INFO:tensorflow:Summary name dense_2/kernel:0 is illegal; using dense_2/kernel_0 instead.
INFO:tensorflow:Summary name dense_2/bias:0 is illegal; using dense_2/bias_0 instead.
Epoch 1/10
6499/6499 [==============================] - 0s - loss: 0.4709 - acc: 0.8120
Epoch 2/10
6499/6499 [==============================] - 0s - loss: 0.2020 - acc: 0.9508
Epoch 3/10
6499/6499 [==============================] - 0s - loss: 0.1118 - acc: 0.9729
Epoch 4/10
6499/6499 [==============================] - 0s - loss: 0.0759 - acc: 0.9818
Epoch 5/10
6499/6499 [==============================] - 0s - loss: 0.0596 - acc: 0.9865
Epoch 6/10
6499/6499 [==============================] - 0s - loss: 0.0447 - acc: 0.9885
Epoch 7/10
6499/6499 [==============================] - 0s - loss: 0.0397 - acc: 0.9900
Epoch 8/10
6499/6499 [==============================] - 0s - loss: 0.0339 - acc: 0.9902
Epoch 9/10
6499/6499 [==============================] - 0s - loss: 0.0330 - acc: 0.9892
Epoch 10/10
6499/6499 [==============================] - 0s - loss: 0.0264 - acc: 0.9929
###Markdown
Model Evaluation
###Code
score = model.evaluate(test_data, test_labels, batch_size=1625)
print(score)
###Output
1625/1625 [==============================] - 0s
[0.010582135990262032, 0.99507689476013184]
###Markdown
Save/Restore the ModelKeras provides methods to save the model's architecture as YAML or JSON.
###Code
print(model.to_yaml())
definition = model.to_yaml()
###Output
backend: tensorflow
class_name: Sequential
config:
- class_name: Dense
config:
activation: relu
activity_regularizer: null
batch_input_shape: !!python/tuple [null, 25]
bias_constraint: null
bias_initializer:
class_name: Zeros
config: {}
bias_regularizer: null
dtype: float32
kernel_constraint: null
kernel_initializer:
class_name: VarianceScaling
config: {distribution: uniform, mode: fan_avg, scale: 1.0, seed: null}
kernel_regularizer: null
name: dense_1
trainable: true
units: 20
use_bias: true
- class_name: Dropout
config: {name: dropout_1, rate: 0.5, trainable: true}
- class_name: Dense
config:
activation: sigmoid
activity_regularizer: null
bias_constraint: null
bias_initializer:
class_name: Zeros
config: {}
bias_regularizer: null
kernel_constraint: null
kernel_initializer:
class_name: VarianceScaling
config: {distribution: uniform, mode: fan_avg, scale: 1.0, seed: null}
kernel_regularizer: null
name: dense_2
trainable: true
units: 1
use_bias: true
keras_version: 2.0.4
###Markdown
We also need to save the *parameters*, or weights, learned from training.
###Code
model.save_weights('/tmp/srmooms.hdf5')
###Output
_____no_output_____
###Markdown
Model RestoreWe'll load the definition and parameters...
###Code
from keras.models import model_from_yaml
new_model = model_from_yaml(definition)
new_model.load_weights('/tmp/srmooms.hdf5')
###Output
_____no_output_____
###Markdown
Let's run some predictions on the newly restored model.
###Code
predictions = new_model.predict(test_data[0:25]).round()
for i in range(25):
if predictions[i]:
print('Test sample {} is poisonous.'.format(i))
###Output
Test sample 0 is poisonous.
Test sample 1 is poisonous.
Test sample 8 is poisonous.
Test sample 11 is poisonous.
Test sample 14 is poisonous.
Test sample 15 is poisonous.
Test sample 17 is poisonous.
###Markdown
Confusion Matrix
###Code
predictions = new_model.predict(test_data).round()
labels = test_labels[:,0]
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(labels,predictions)
import numpy as np
import matplotlib.pyplot as plt
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plot_confusion_matrix(cm,['edible','poisonous'])
plt.show()
###Output
Confusion matrix, without normalization
[[840 0]
[ 8 777]]
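###Markdown
As a sanity check, the confusion matrix is consistent with the earlier evaluation: (840 + 777) correct predictions out of 1625 test samples gives 1617/1625 ≈ 0.9951, matching the accuracy returned by `model.evaluate`.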
|
Assignment+3 correct.ipynb | ###Markdown
---_You are currently looking at **version 1.5** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- Assignment 3 - More PandasThis assignment requires more individual learning than the last one did - you are encouraged to check out the [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) to find functions or methods you might not have used yet, or ask questions on [Stack Overflow](http://stackoverflow.com/) and tag them as pandas and python related. And of course, the discussion forums are open for interaction with your peers and the course staff. Question 1 (20%)Load the energy data from the file `Energy Indicators.xls`, which is a list of indicators of [energy supply and renewable electricity production](Energy%20Indicators.xls) from the [United Nations](http://unstats.un.org/unsd/environment/excel_file_tables/2013/Energy%20Indicators.xls) for the year 2013, and should be put into a DataFrame with the variable name of **energy**.Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are:`['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']`Convert `Energy Supply` to gigajoules (there are 1,000,000 gigajoules in a petajoule). For all countries which have missing data (e.g. data with "...") make sure this is reflected as `np.NaN` values.Rename the following list of countries (for use in later questions):```"Republic of Korea": "South Korea","United States of America": "United States","United Kingdom of Great Britain and Northern Ireland": "United Kingdom","China, Hong Kong Special Administrative Region": "Hong Kong"```There are also several countries with numbers and/or parentheses in their name. Be sure to remove these, e.g. `'Bolivia (Plurinational State of)'` should be `'Bolivia'`, `'Switzerland17'` should be `'Switzerland'`.Next, load the GDP data from the file `world_bank.csv`, which is a csv containing countries' GDP from 1960 to 2015 from [World Bank](http://data.worldbank.org/indicator/NY.GDP.MKTP.CD). Call this DataFrame **GDP**. Make sure to skip the header, and rename the following list of countries:```"Korea, Rep.": "South Korea", "Iran, Islamic Rep.": "Iran","Hong Kong SAR, China": "Hong Kong"```Finally, load the [Scimago Journal and Country Rank data for Energy Engineering and Power Technology](http://www.scimagojr.com/countryrank.php?category=2102) from the file `scimagojr-3.xlsx`, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame **ScimEn**.Join the three datasets: GDP, Energy, and ScimEn into a new dataset (using the intersection of country names). Use only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15).
The index of this DataFrame should be the name of the country, and the columns should be ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations', 'Citations per document', 'H index', 'Energy Supply', 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015'].*This function should return a DataFrame with 20 columns and 15 entries.*
###Code
import pandas as pd
import numpy as np
def answer_one():
x = pd.ExcelFile('Energy Indicators.xls')
energy = x.parse(skiprows=17,skip_footer=(38))
energy = energy[['Unnamed: 1','Petajoules','Gigajoules','%']]
energy.columns = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
energy[['Energy Supply', 'Energy Supply per Capita', '% Renewable']] = energy[['Energy Supply', 'Energy Supply per Capita', '% Renewable']].replace('...',np.NaN).apply(pd.to_numeric)
energy['Energy Supply'] = energy['Energy Supply']*1000000
energy['Country'] = energy['Country'].replace({'China, Hong Kong Special Administrative Region':'Hong Kong','United Kingdom of Great Britain and Northern Ireland':'United Kingdom','Republic of Korea':'South Korea','United States of America':'United States','Iran (Islamic Republic of)':'Iran'})
energy['Country'] = energy['Country'].str.replace(r" \(.*\)","")
GDP = pd.read_csv('world_bank.csv',skiprows=4)
GDP['Country Name'] = GDP['Country Name'].replace('Korea, Rep.','South Korea')
GDP['Country Name'] = GDP['Country Name'].replace('Iran, Islamic Rep.','Iran')
GDP['Country Name'] = GDP['Country Name'].replace('Hong Kong SAR, China','Hong Kong')
GDP = GDP[['Country Name','2006','2007','2008','2009','2010','2011','2012','2013','2014','2015']]
GDP.columns = ['Country','2006','2007','2008','2009','2010','2011','2012','2013','2014','2015']
ScimEn = pd.read_excel(io='scimagojr-3.xlsx')
ScimEn_m = ScimEn[:15]
df = pd.merge(ScimEn_m,energy,how='inner',left_on='Country',right_on='Country')
final_df = pd.merge(df,GDP,how='inner',left_on='Country',right_on='Country')
final_df = final_df.set_index('Country')
return final_df
final_df= answer_one()
answer_one()
###Output
_____no_output_____
###Markdown
Question 2 (6.6%)The previous question joined three datasets then reduced this to just the top 15 entries. When you joined the datasets, but before you reduced this to the top 15 items, how many entries did you lose?*This function should return a single number.*
###Code
%%HTML
<svg width="800" height="300">
<circle cx="150" cy="180" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="blue" />
<circle cx="200" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="red" />
<circle cx="100" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="green" />
<line x1="150" y1="125" x2="300" y2="150" stroke="black" stroke-width="2" fill="black" stroke-dasharray="5,3"/>
<text x="300" y="165" font-family="Verdana" font-size="35">Everything but this!</text>
</svg>
import pandas as pd
import numpy as np
def answer_two():
energy = pd.read_excel("Energy Indicators.xls",skip_footer=38,skip_header=1,skiprows=17) # Skip header and footer
energy.drop(energy.columns[[0,1]],axis=1,inplace=True) # Drop first 2 columns
energy.columns = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
    energy.dropna() # NB: the result is not assigned back, so this call has no effect here
energy['Country'] = energy['Country'].str.replace(r'\(.*\)', '') # Remove contents within parenthesis.
energy['Country'] = energy['Country'].str.replace('\d+', '') # Remove digits from names
energy['Country'] = energy['Country'].str.strip() # This brings the Iran energy values back!
# Turn blank values into NaN
for col in energy:
energy[col] = energy[col].replace('...',np.nan)
energy['Country'] = energy['Country'].str.replace('Republic of Korea','South Korea')
energy['Country'] = energy['Country'].str.replace('United States of America','United States')
energy['Country'] = energy['Country'].str.replace('United Kingdom of Great Britain and Northern Ireland','United Kingdom')
energy['Country'] = energy['Country'].str.replace('China, Hong Kong Special Administrative Region','Hong Kong')
# GDP:
GDP = pd.read_csv('world_bank.csv', skiprows=3) # Skip header
# Make first row the column names
new_header = GDP.iloc[0]
GDP = GDP[1:]
GDP.columns = new_header
#GDP = GDP.rename(index=str,columns = {"Country Name":"Country"})
GDP['Country Name'] = GDP['Country Name'].str.replace('Korea, Rep.','South Korea')
GDP['Country Name'] = GDP['Country Name'].str.replace('Iran, Islamic Rep.','Iran')
GDP['Country Name'] = GDP['Country Name'].str.replace('Hong Kong SAR, China','Hong Kong')
# Change column name from 'Country Name' to 'Country' for merging 3 files on country name.
names = GDP.columns.tolist()
names[names.index('Country Name')] = 'Country'
GDP.columns = names
# Only keep the columns from 2006-15. Drop column number 1 to 50. Don't need country code, etc.
GDP = GDP.drop(GDP.iloc[:,1:50], axis=1)
GDP.columns = GDP.columns.astype(str).str.split('.').str[0] # Remove '.0' at the end of the year columns.
# SCIMEN:
ScimEn = pd.read_excel('scimagojr-3.xlsx')
# LOST ENTRIES = LEN(OUTER JOIN) - LEN(INNER JOIN)
# Need unique entries in all 3 sets so use concat. Can't do that with a left or right outer join!
num_outer = len(pd.concat([ScimEn['Country'],energy['Country'],GDP['Country']]).unique())
num_inter = (GDP.merge(energy, left_on='Country', right_on='Country', how='inner').merge(ScimEn, left_on='Country', right_on='Country', how='inner').shape[0])
return num_outer-num_inter
answer_two()
###Output
_____no_output_____
###Markdown
Answer the following questions in the context of only the top 15 countries by Scimagojr Rank (aka the DataFrame returned by `answer_one()`) Question 3 (6.6%)What is the average GDP over the last 10 years for each country? (exclude missing values from this calculation.)*This function should return a Series named `avgGDP` with 15 countries and their average GDP sorted in descending order.*
###Code
import pandas as pd
import numpy as np
def answer_three():
Top15 = final_df
columns = ['2006','2007','2008','2009','2010','2011','2012','2013','2014','2015']
Top15['Mean'] = Top15[columns].mean(axis=1)
avgGDP = Top15.sort_values(by = 'Mean', ascending = False)['Mean']
return avgGDP
answer_three()
###Output
_____no_output_____
###Markdown
Question 4 (6.6%)By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?*This function should return a single number.*
###Code
import pandas as pd
import numpy as np
def answer_four():
Top15 = final_df
columns = ['2006','2007','2008','2009','2010','2011','2012','2013','2014','2015']
Top15['Mean'] = Top15[columns].mean(axis=1)
avgGDP = Top15.sort_values(by = 'Mean', ascending = False)['Mean']
target = avgGDP.index[5]
target_data = Top15.loc[target]
ans = target_data['2015'] - target_data['2006']
return ans
answer_four()
###Output
_____no_output_____
###Markdown
Question 5 (6.6%)What is the mean `Energy Supply per Capita`?*This function should return a single number.*
###Code
import pandas as pd
import numpy as np
def answer_five() :
Top15 = final_df
return Top15['Energy Supply per Capita'].mean()
answer_five()
###Output
_____no_output_____
###Markdown
Question 6 (6.6%)What country has the maximum % Renewable and what is the percentage?*This function should return a tuple with the name of the country and the percentage.*
###Code
import pandas as pd
import numpy as np
def answer_six() :
Top15 = final_df
ct = Top15.sort_values(by='% Renewable', ascending=False).iloc[0]
return (ct.name, ct['% Renewable'])
answer_six()
###Output
_____no_output_____
###Markdown
Question 7 (6.6%)Create a new column that is the ratio of Self-Citations to Total Citations. What is the maximum value for this new column, and what country has the highest ratio?*This function should return a tuple with the name of the country and the ratio.*
###Code
import pandas as pd
import numpy as np
def answer_seven():
Top15 = final_df
Top15['Citation_ratio'] = Top15['Self-citations']/Top15['Citations']
ct = Top15.sort_values(by='Citation_ratio', ascending=False).iloc[0]
return (ct.name, ct['Citation_ratio'])
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8 (6.6%)Create a column that estimates the population using Energy Supply and Energy Supply per capita. What is the third most populous country according to this estimate?*This function should return a single string value.*
###Code
import pandas as pd
import numpy as np
def answer_eight():
Top15 = final_df
Top15['Population'] = Top15['Energy Supply']/Top15['Energy Supply per Capita']
return Top15.sort_values(by='Population', ascending=False).iloc[2].name
answer_eight()
###Output
_____no_output_____
###Markdown
Question 9 (6.6%)Create a column that estimates the number of citable documents per person. What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the `.corr()` method, (Pearson's correlation).*This function should return a single number.**(Optional: Use the built-in function `plot9()` to visualize the relationship between Energy Supply per Capita vs. Citable docs per Capita)*
###Code
import pandas as pd
import numpy as np
def answer_nine():
Top15 = final_df
Top15['Estimate Population'] = Top15['Energy Supply'] / Top15['Energy Supply per Capita']
Top15['avgCiteDocPerPerson'] = Top15['Citable documents'] / Top15['Estimate Population']
    return Top15[['Energy Supply per Capita', 'avgCiteDocPerPerson']].corr().loc['Energy Supply per Capita', 'avgCiteDocPerPerson']
answer_nine()
###Output
_____no_output_____
###Markdown
Question 10 (6.6%)Create a new column with a 1 if the country's % Renewable value is at or above the median for all countries in the top 15, and a 0 if the country's % Renewable value is below the median.*This function should return a series named `HighRenew` whose index is the country name sorted in ascending order of rank.*
###Code
import pandas as pd
import numpy as np
def answer_ten():
Top15 = final_df
mid = Top15['% Renewable'].median()
Top15['HighRenew'] = Top15['% Renewable']>=mid
Top15['HighRenew'] = Top15['HighRenew'].apply(lambda x:1 if x else 0)
Top15.sort_values(by='Rank', inplace=True)
return Top15['HighRenew']
answer_ten()
###Output
_____no_output_____
###Markdown
Question 11 (6.6%)Use the following dictionary to group the Countries by Continent, then create a DataFrame that displays the sample size (the number of countries in each continent bin), and the sum, mean, and std deviation for the estimated population of each country.```pythonContinentDict = {'China':'Asia', 'United States':'North America', 'Japan':'Asia', 'United Kingdom':'Europe', 'Russian Federation':'Europe', 'Canada':'North America', 'Germany':'Europe', 'India':'Asia', 'France':'Europe', 'South Korea':'Asia', 'Italy':'Europe', 'Spain':'Europe', 'Iran':'Asia', 'Australia':'Australia', 'Brazil':'South America'}```*This function should return a DataFrame with index named Continent `['Asia', 'Australia', 'Europe', 'North America', 'South America']` and columns `['size', 'sum', 'mean', 'std']`*
###Code
import pandas as pd
import numpy as np
def answer_eleven():
Top15 = final_df
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
new_df = pd.DataFrame()
for i, row in Top15.iterrows():
row['Continent'] = ContinentDict[row.name]
new_df = new_df.append(row)
    # Estimate the population and aggregate it by continent, as the question asks
    new_df['PopEst'] = new_df['Energy Supply'] / new_df['Energy Supply per Capita']
    return new_df.groupby('Continent')['PopEst'].agg(['size', 'sum', 'mean', 'std'])
answer_eleven()
###Output
_____no_output_____
###Markdown
Question 12 (6.6%)Cut % Renewable into 5 bins. Group Top15 by the Continent, as well as these new % Renewable bins. How many countries are in each of these groups?*This function should return a __Series__ with a MultiIndex of `Continent`, then the bins for `% Renewable`. Do not include groups with no countries.*
###Code
import pandas as pd
import numpy as np
Top15 = final_df
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
Top15 = Top15.reset_index()
Top15['Continent'] = [ContinentDict[country] for country in Top15['Country']]
Top15['bins'] = pd.cut(Top15['% Renewable'],5)
Top15.groupby(['Continent','bins']).size()
###Output
_____no_output_____
###Markdown
Question 13 (6.6%)Convert the Population Estimate series to a string with thousands separator (using commas). Do not round the results.e.g. 317615384.61538464 -> 317,615,384.61538464*This function should return a Series `PopEst` whose index is the country name and whose values are the population estimate string.*
###Code
import pandas as pd
import numpy as np
def answer_thirteen():
Top15 = final_df
Top15['PopEst'] = (Top15['Energy Supply'] / Top15['Energy Supply per Capita']).astype(float)
return Top15['PopEst'].apply(lambda x: '{0:,}'.format(x))
answer_thirteen()
###Output
_____no_output_____ |
notebooks/4_Segmentation.ipynb | ###Markdown
Quadrant Scan The Quadrant Scan is a method based on the recurrence plot that identifies tipping points (change points) at which the system changes its behaviour (dynamics). The method is applicable to univariate and multivariate data.The code below is for the Weighted Quadrant Scan (WQS). The inputs of the function are: x : the data, a data frame whose columns represent different variables for the multivariate application, in which case the normalisation step is required. For the univariate application, x is the embedded time series, in which case the normalisation step is not required. alpha : the recurrence threshold, a problem-dependent parameter; a good value to start with is 0.1. m1 and m2 : the weighting parameters, also problem-dependent; values to use are (m1,m2)=(200,50), (100,25) or (50,10). For referencing:Zaitouny, A., Walker, D.M. and Small, M., 2019. Quadrant scan for multi-scale transition detection. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29(10), p.103117.Zaitouny, A., Small, M., Hill, J., Emelyanova, I. and Clennell, M.B., 2020. Fast automatic detection of geological boundaries from multivariate log data using recurrence. Computers & Geosciences, 135, p.104362. Import libraries and define functions
###Code
import scipy.io as sio
import time
import scipy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import pearsonr
import scipy.sparse as sparse
from IPython.display import Latex
from matplotlib.pyplot import imshow
##Recurrence Plot matrix
## This function is constructing the recurrence plot matrix based on the threshold alpha. alpha is between 0 and 1.
def RecurrenceMatrix(x, alpha):
distance = scipy.spatial.distance.pdist(x, 'euclidean')
#Different norms for different applications
#distance = scipy.spatial.distance.pdist(x, 'seuclidean',V=None)
#distance = scipy.spatial.distance.pdist(x, 'canberra')
#distance = scipy.spatial.distance.pdist(x, 'mahalanobis', VI=None)
distance = scipy.spatial.distance.squareform(distance)
threshold = alpha*(distance.mean()+3*distance.std())
#Recurrence Plot Matrix
RM=np.array(distance<=threshold,dtype='int')
return RM
## Normalisation
## This function normalises the columns of a multivariate input, which is important to avoid dominance
## of variables with larger scales
## This step is not required for univariate application and embedded time series
def normalize(df):
result = df.copy()
for feature_name in df.columns:
max_value = df[feature_name].max()
min_value = df[feature_name].min()
result[feature_name] = (df[feature_name] - min_value) / (max_value - min_value)
return result
##Quadrant Scan
## This function is employing the Quadrant Scan technique on input x (univariate or multivariate) using:
## the recuurence plot threshold alpha, the weighting parameters m1 and m2.
def WeightQS2(x, alpha, m1, m2):
x = normalize(x) # not required for embedded time series
x = np.array(x)
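    # Pad both ends with 20 shuffled copies of the boundary samples so the weighted scan
    # has context at the edges; the padding is stripped from the returned qs at the end.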
z = np.array(x[:20])
np.random.shuffle(z)
x = np.append(z, x, axis=0)
z = np.array(x[len(x)-20:len(x)])
np.random.shuffle(z)
x = np.append(x, z, axis=0)
x = np.array(x)
RM=RecurrenceMatrix(x, alpha)
qs=np.zeros(len(x))
for ii in range(1, len(x)):
weightp = 0.5*(1-np.tanh((np.arange(1,ii+1)-m1)/m2))
weightp = weightp[::-1]
weightf = 0.5*(1-np.tanh((np.arange(1,len(x)-ii+1)-m1)/m2))
weightpp = weightp[:,None]*weightp[None,:]
weightpp = weightpp/weightpp[-1,-1]
weightff = weightf[:,None]*weightf[None,:]
weightff = weightff/weightff[0,0]
weightpf=weightp[:,None]*weightf[None,:]
weightpf /= weightpf[-1,0]
weightfp=weightf[:,None]*weightp[None,:]
weightfp /= weightfp[0,-1]
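        # Weighted quadrant sums of the recurrence matrix around index ii:
        # past-past (pp), future-future (ff) and the two cross quadrants (pf, fp).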
pp = RM[:ii,:ii] * weightpp
ff = RM[ii-1:-1,ii-1:-1] * weightff
pf = RM[:ii,ii-1:-1] * weightpf
fp = RM[ii-1:-1,:ii] * weightfp
qs1 = np.sum(pp)+np.sum(ff)
qs2 = np.sum(pf)+np.sum(fp)
qs[ii] = qs1/(qs1+qs2)
return qs[20:len(qs)-20]
###Output
_____no_output_____
###Markdown
Simple artificial example for demonstration
###Code
# Creating a 2D dataframe in which each feature shows a transition at a different time
x1 = np.random.rand(100)+3
x2 = np.random.rand(100)+5
x = np.concatenate((x1, x2)) # first column
y1 = np.random.rand(150)+8
y2 = np.random.rand(50)+2
y = np.concatenate((y1, y2)) # second column
df = pd.DataFrame({'Feature1': x, 'Feature2' : y }) # the dataframe
df.head()
## implementing the weighted quadrant scan
wqs = WeightQS2(df, 0.1, 100, 25)
## Plot the data and the wqs for demonstration
fig, ax = plt.subplots(2,1,figsize=(15,10))
ax[0].plot(df['Feature1'] , label = 'Feature1')
ax[0].plot(df['Feature2'], label = 'Feature2')
ax[0].set_xlabel('Time index')
ax[0].set_ylabel('Value')
ax[0].set_title('The 2D data')
ax[0].legend()
ax[1].plot(wqs)
ax[1].set_xlabel('Time index')
ax[1].set_ylabel('WQS')
ax[1].set_title('The weighted quadrant scan results')
plt.show()
###Output
_____no_output_____
###Markdown
Detect lithological boundaries using Quadrant Scan In this notebook, we will test the Quadrant Scan technique to analyse a multivariate, noisy, and nonstationary data set.The data set includes petrophysical profiles from well-log measurements, namely DTCO, ECGR, HART, PEFZ, RHOZ and TNPH.The data includes 7688 depth samples, referring to the following petrophysical profiles: density, electrical resistivity, sonic velocity, natural radioactivity, mean atomic number and neutron porosity.We have added a column to the data set (last column: "Geological layer") in which we gave a number to each layer type as identified by the geologists.The idea of this exercise is to use the Quadrant Scan to detect transitions in the data profiles and so identify the lithological boundaries. Reading the data file
###Code
Petrophysical_data = pd.read_csv('../data/Petrophysical_data.csv') ## Reading the data set file, fix the directory
Petrophysical_data.head() ## Have a look at the data
xx = pd.DataFrame(Petrophysical_data[['DTCO','ECGR']]) ## Select two columns from the data file
xx.head()
###Output
_____no_output_____
###Markdown
Implement the Quadrant ScanThere are three parameters to set up. alpha --> the recurrence plot threshold; this can be varied for multiscale detection, and a good choice for the data at hand is 0.2. m1, m2 are the weighting scheme parameters as demonstrated in the slides; an appropriate setting for the data at hand is m1=200, m2=50. You could try different settings such as m1=100, m2=25 or m1=50, m2=10. Input a single variableChoose one variable and run the Quadrant Scan for 2000 depth samples (between depth index 1000 and depth index 3000). Here, we chose ECGR - Gamma Ray
###Code
x1 = pd.DataFrame(Petrophysical_data[['ECGR']]) ## select a variable from the data set
QS1 = WeightQS2(x1[1000:3000], 0.2, 200, 50) ## Run the Quadrant Scan on the selected column (variable)
## Plotting the resulted Quadrant Scan with the input data.
RM = RecurrenceMatrix(x1[1000:3000], 0.2) ## Estimating the recurrence plot matrix for demonstration.
## You can change the parameter alpha to see how it effects the recurrence plot matrix. (alpha is between 0 to 1)
f1 = plt.figure() ## plot the recurrence plot matrix
imshow(np.asarray(RM))
plt.title('Recurrence Plot Matrix')
plt.xlabel('Depth index')
plt.ylabel('Depth index')
f2 = plt.figure(figsize=(6,8)) ## plot the Quadrant Scan and the input data
plt.subplot(1, 2, 1)
plt.plot(QS1, np.array(Petrophysical_data['DEPTH'][1000:3000]))
plt.gca().invert_yaxis()
plt.title('Boundary detection')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 2, 2)
plt.plot(np.array(Petrophysical_data['ECGR'][1000:3000]), np.array(Petrophysical_data['DEPTH'][1000:3000]))
plt.gca().invert_yaxis()
plt.xlabel('ECGR')
plt.ylabel('Depth')
f2.tight_layout()
###Output
_____no_output_____
###Markdown
Now try another variable Choose another variable and run the Quadrant Scan for 2000 depth samples (between depth index 1000 and depth index 3000). Here, we chose HART - Resistivity
###Code
x2 = pd.DataFrame(Petrophysical_data[['HART']]) ## select a variable from the data set
QS2 = WeightQS2(x2[1000:3000], 0.2, 200, 50) ## Run the Quadrant Scan on the selected column (variable)
## Plotting the resulted Quadrant Scan with the input data.
f1 = plt.figure(figsize=(6,8))
plt.subplot(1, 2, 1)
plt.plot(QS2, np.array(Petrophysical_data['DEPTH'][1000:3000]))
plt.gca().invert_yaxis()
plt.title('Boundary detection')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 2, 2)
plt.plot(np.array(Petrophysical_data['HART'][1000:3000]), np.array(Petrophysical_data['DEPTH'][1000:3000]))
plt.gca().invert_yaxis()
plt.xlabel('HART')
plt.ylabel('Depth')
f1.tight_layout()
###Output
_____no_output_____
###Markdown
Compare the detections from different profiles
###Code
## Plotting the resulted Quadrant Scan with the input data.
f1 = plt.figure(figsize = (6,8))
plt.plot(QS1, np.array(Petrophysical_data['DEPTH'][1000:3000]), label="QS ECGR")
plt.plot(QS2, np.array(Petrophysical_data['DEPTH'][1000:3000]), label="QS HART")
plt.gca().invert_yaxis()
plt.xlabel('QS')
plt.ylabel('Depth')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now combine the variables and use multivariate inputLet us start combining the profiles we used above.
###Code
x3 = pd.DataFrame(Petrophysical_data[['HART','ECGR']]) ## Select multiple variables
QS3 = WeightQS2(x3[1000:3000], 0.2, 200, 50) ## Run the Quadrant Scan on the selected columns (variables)
## Plotting the resulted Quadrant Scan with the input data.
f2 = plt.figure(figsize = (10,6))
plt.subplot(1, 3, 1)
plt.plot(QS3, np.array(Petrophysical_data['DEPTH'][1000:3000]))
plt.gca().invert_yaxis()
plt.title('Boundary detection')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 3, 2)
plt.plot(np.array(Petrophysical_data['HART'][1000:3000]), np.array(Petrophysical_data['DEPTH'][1000:3000]))
plt.gca().invert_yaxis()
plt.xlabel('HART')
plt.ylabel('Depth')
plt.subplot(1, 3, 3)
plt.plot(np.array(Petrophysical_data['ECGR'][1000:3000]), np.array(Petrophysical_data['DEPTH'][1000:3000]))
plt.gca().invert_yaxis()
plt.xlabel('ECGR')
plt.ylabel('Depth')
f2.tight_layout()
###Output
_____no_output_____
###Markdown
Compare the results
###Code
## Plotting the Quadrant curves for comparison.
f1 = plt.figure(figsize = (6,8))
plt.plot(QS1, np.array(Petrophysical_data['DEPTH'][1000:3000]), label="QS ECGR")
plt.plot(QS2, np.array(Petrophysical_data['DEPTH'][1000:3000]), label="QS HART")
plt.plot(QS3, np.array(Petrophysical_data['DEPTH'][1000:3000]), label="QS MV")
plt.gca().invert_yaxis()
plt.xlabel('QS')
plt.ylabel('Depth')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Multiscale detection:Test the performance by varying the Recurrence Plot threshold (alpha) from larger to smaller values, say from alpha=0.6 down to 0.1. You can test using univariate or multivariate input
###Code
## Evaluate the Quadrant Scan for different thresholds.
MS_QS6 = WeightQS2(x1[2000:3000], 0.6, 200, 50)
MS_QS5 = WeightQS2(x1[2000:3000], 0.5, 200, 50)
MS_QS4 = WeightQS2(x1[2000:3000], 0.4, 200, 50)
MS_QS3 = WeightQS2(x1[2000:3000], 0.3, 200, 50)
MS_QS2 = WeightQS2(x1[2000:3000], 0.2, 200, 50)
MS_QS1 = WeightQS2(x1[2000:3000], 0.1, 200, 50)
###Output
_____no_output_____
###Markdown
Plotting for comparison
###Code
## Plotting the resulted Quadrant Scan curves for each threshold.
f3 = plt.figure(figsize=(15,6))
plt.subplot(1, 6, 1)
plt.plot(MS_QS6, np.array(Petrophysical_data['DEPTH'][2000:3000]))
plt.gca().invert_yaxis()
plt.title('alpha=0.6')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 6, 2)
plt.plot(MS_QS5, np.array(Petrophysical_data['DEPTH'][2000:3000]))
plt.gca().invert_yaxis()
plt.title('alpha=0.5')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 6, 3)
plt.plot(MS_QS4, np.array(Petrophysical_data['DEPTH'][2000:3000]))
plt.gca().invert_yaxis()
plt.title('alpha=0.4')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 6, 4)
plt.plot(MS_QS3, np.array(Petrophysical_data['DEPTH'][2000:3000]))
plt.gca().invert_yaxis()
plt.title('alpha=0.3')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 6, 5)
plt.plot(MS_QS2, np.array(Petrophysical_data['DEPTH'][2000:3000]))
plt.gca().invert_yaxis()
plt.title('alpha=0.2')
plt.xlabel('QS')
plt.ylabel('Depth')
plt.subplot(1, 6, 6)
plt.plot(MS_QS1, np.array(Petrophysical_data['DEPTH'][2000:3000]))
plt.gca().invert_yaxis()
plt.title('alpha=0.1')
plt.xlabel('QS')
plt.ylabel('Depth')
f3.tight_layout()
###Output
_____no_output_____ |
examples/seaborn_01_distribution.ipynb | ###Markdown
distplotCalling ``sns.distplot`` shows a distribution of the ``target`` column by default.
###Code
df.sns.distplot();
###Output
_____no_output_____
###Markdown
You can specify options as the same as the standard ``seaborn``.
###Code
df.sns.distplot(kde=False, rug=True);
df.sns.distplot(bins=20, kde=False, rug=True);
df.sns.distplot(hist=False, rug=True);
###Output
_____no_output_____
###Markdown
Specifying the column name shows distribution of the specified column.
###Code
df.sns.distplot('b');
###Output
_____no_output_____
###Markdown
kdeplot``kdeplot`` works almost the same as ``distplot``
###Code
df.sns.kdeplot(shade=True);
df.sns.kdeplot()
df.sns.kdeplot(bw=.2, label="bw: 0.2")
df.sns.kdeplot(bw=2, label="bw: 2")
plt.legend();
df.sns.kdeplot('b', shade=True, cut=0)
df.sns.rugplot('b');
###Output
_____no_output_____
###Markdown
jointplotYou must specify the label of the x-axis via the ``x`` keyword. If the ``y`` keyword is omitted, the ``target`` column is used.
###Code
df.sns.jointplot('b', size=4);
###Output
/Users/sin/miniconda/envs/py27std/lib/python2.7/site-packages/matplotlib/__init__.py:892: UserWarning: axes.color_cycle is deprecated and replaced with axes.prop_cycle; please use the latter.
warnings.warn(self.msg_depr % (key, alt_key))
###Markdown
If you specify both ``x`` and ``y``, it will show the specified columns.
###Code
df.sns.jointplot('b', 'c', kind="hex", color="k", size=4);
df.sns.jointplot("d", "e", kind="kde", size=4);
f, ax = plt.subplots()
df.sns.kdeplot('b', 'c', ax=ax)
df.sns.rugplot('b', color="g", ax=ax)
df.sns.rugplot('c', vertical=True, ax=ax);
cmap = df.sns.cubehelix_palette(as_cmap=True, dark=0, light=1, reverse=True)
df.sns.kdeplot('b', 'c', cmap=cmap, n_levels=60, shade=True);
g = df.sns.jointplot(x="a", y="b", kind="kde", color="m", size=4)
g.plot_joint(plt.scatter, c="w", s=30, linewidth=1, marker="+")
g.ax_joint.collections[0].set_alpha(0);
###Output
_____no_output_____
###Markdown
pairplot
###Code
import seaborn as sns
iris = sns.load_dataset("iris")
iris = pdml.ModelFrame(iris, target='species')
iris.head()
iris.sns.pairplot(size=1.5);
import seaborn as sns
g = iris.sns.PairGrid(size=1.5)
g.map_diag(iris.sns.kdeplot)
g.map_offdiag(iris.sns.kdeplot, cmap="Blues_d", n_levels=6);
###Output
/Users/sin/miniconda/envs/py27std/lib/python2.7/site-packages/matplotlib/axes/_axes.py:519: UserWarning: No labelled objects found. Use label='...' kwarg on individual plots.
warnings.warn("No labelled objects found. "
|
tutorials/tutorial06/tutorial06.ipynb | ###Markdown
[Tutorial 06: Classification](https://franciszheng.com/dsper2020/tutorials/tutorial06/) [[1] Getting Started](https://franciszheng.com/dsper2020/tutorials/tutorial06/getting-started) [[1a] Importing Libraries](https://franciszheng.com/dsper2020/tutorials/tutorial06/importing-libraries)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn import neighbors
from sklearn import preprocessing
from IPython.display import Markdown, display
###Output
_____no_output_____
###Markdown
[[1b] Plot and Output Settings](https://franciszheng.com/dsper2020/tutorials/tutorial06/plot-and-output-settings)
###Code
# Reset all styles to the default:
plt.rcParams.update(plt.rcParamsDefault)
# Then make graphs inline:
%matplotlib inline
# Useful function for Jupyter to display text in bold:
def displaybd(text):
display(Markdown("**" + text + "**"))
# Set custom style settings:
# NB: We need to separate "matplotlib inline" call and these settings into different
# cells, otherwise the parameters are not set. This is a bug somewhere in Jupyter
plt.rcParams['figure.figsize'] = (7, 6)
plt.rcParams['font.size'] = 24
plt.rcParams['legend.fontsize'] = 'large'
plt.rcParams['figure.titlesize'] = 'large'
plt.rcParams['lines.markersize'] = 10
###Output
_____no_output_____
###Markdown
[[1c] Our Dataset](https://franciszheng.com/dsper2020/tutorials/tutorial06/our-dataset)
###Code
smarket = pd.read_csv('Smarket.csv', parse_dates=False)
# Create direction codes as Up=1 and Down=0 to be sure about the interpretation in regressions:
smarket["DirectionCode"] = np.where(smarket["Direction"].str.contains("Up"), 1, 0)
display(smarket[1:10])
display(smarket.describe())
displaybd("Correlations matrix:")
display(smarket.corr())
smarket["Volume"].plot()
plt.xlabel("Day");
plt.ylabel("Volume");
###Output
_____no_output_____
###Markdown
[[2] Logit](https://franciszheng.com/dsper2020/tutorials/tutorial06/logit) [[2a] Running Logit via GLM](https://franciszheng.com/dsper2020/tutorials/tutorial06/running-logit-via-glm)
###Code
model = smf.glm("DirectionCode~Lag1+Lag2+Lag3+Lag4+Lag5+Volume", data=smarket,
family=sm.families.Binomial())
res = model.fit()
display(res.summary())
###Output
_____no_output_____
###Markdown
[[2b] Predicted Probabilities and Confusion Matrix](https://franciszheng.com/dsper2020/tutorials/tutorial06/predicted-probabilities-and-confusion-matrix)
###Code
displaybd("Predicted probabilities for the first observations:")
DirectionProbs = res.predict()
print(DirectionProbs[0:10])
DirectionHat = np.where(DirectionProbs > 0.5, "Up", "Down")
confusionDF = pd.crosstab(DirectionHat, smarket["Direction"],
rownames=['Predicted'], colnames=['Actual'],
margins=True)
display(Markdown("***"))
displaybd("Confusion matrix:")
display(confusionDF)
displaybd("Share of correctly predicted market movements:")
print(np.mean(smarket['Direction'] == DirectionHat))
###Output
_____no_output_____
###Markdown
[[2c] Estimation of Test Error](https://franciszheng.com/dsper2020/tutorials/tutorial06/estimation-of-test-error)
###Code
train = (smarket['Year'] < 2005)
smarket2005 = smarket[~train]
displaybd("Dimensions of the validation set:")
print(smarket2005.shape)
model = smf.glm("DirectionCode~Lag1+Lag2+Lag3+Lag4+Lag5+Volume", data=smarket,
family=sm.families.Binomial(), subset=train)
res = model.fit()
DirectionProbsTets = res.predict(smarket2005)
DirectionTestHat = np.where(DirectionProbsTets > 0.5, "Up", "Down")
displaybd("Share of correctly predicted market movements in 2005:")
print(np.mean(smarket2005['Direction'] == DirectionTestHat))
###Output
_____no_output_____
###Markdown
[[3] Linear Discriminant Analysis](https://franciszheng.com/dsper2020/tutorials/tutorial06/linear-discriminant-analysis) [[3a] Custom Output Functions](https://franciszheng.com/dsper2020/tutorials/tutorial06/custom-output-functions)
###Code
def printPriorProbabilities(ldaClasses, ldaPriors):
priorsDF = pd.DataFrame()
for cIdx, cName in enumerate(ldaClasses):
priorsDF[cName] = [ldaPriors[cIdx]];
displaybd('Prior probablities of groups:')
display(Markdown(priorsDF.to_html(index=False)))
def printGroupMeans(ldaClasses, featuresNames, ldaGroupMeans):
displaybd("Group means:")
groupMeansDF = pd.DataFrame(index=ldaClasses)
for fIdx, fName in enumerate(featuresNames):
groupMeansDF[fName] = ldaGroupMeans[:, fIdx]
display(groupMeansDF)
def printLDACoeffs(featuresNames, ldaCoeffs):
coeffDF = pd.DataFrame(index=featuresNames)
for cIdx in range(ldaCoeffs.shape[0]):
colName = "LDA" + str(cIdx + 1)
coeffDF[colName] = ldaCoeffs[cIdx]
displaybd("Coefficients of linear discriminants:")
display(coeffDF)
###Output
_____no_output_____
###Markdown
[[3b] Fitting an LDA Model](https://franciszheng.com/dsper2020/tutorials/tutorial06/fitting-an-lda-model)
###Code
outcomeName = 'Direction'
featuresNames = ['Lag1', 'Lag2'];
X_train = smarket.loc[train, featuresNames]
y_train = smarket.loc[train, outcomeName]
lda = LinearDiscriminantAnalysis()
ldaFit = lda.fit(X_train, y_train);
printPriorProbabilities(ldaFit.classes_, ldaFit.priors_)
printGroupMeans(ldaFit.classes_, featuresNames, ldaFit.means_)
printLDACoeffs(featuresNames, ldaFit.coef_)
# Coefficients calcualted by Python's LDA are different from R's lda.
# But they are proportional:
printLDACoeffs(featuresNames, 11.580267503964166 * ldaFit.coef_)
# See this: https://stats.stackexchange.com/questions/87479/what-are-coefficients-of-linear-discriminants-in-lda
###Output
_____no_output_____
###Markdown
[[3c] LDA Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/lda-predictions)
###Code
X_test = smarket2005.loc[~train, featuresNames]
y_test = smarket.loc[~train, outcomeName]
y_hat = ldaFit.predict(X_test)
confusionDF = pd.crosstab(y_hat, y_test,
rownames=['Predicted'], colnames=['Actual'],
margins=True)
displaybd("Confusion matrix:")
display(confusionDF)
displaybd("Share of correctly predicted market movements:")
print(np.mean(y_test == y_hat))
###Output
_____no_output_____
###Markdown
[[3d] Posterior Probabilities](https://franciszheng.com/dsper2020/tutorials/tutorial06/posterior-probabilities)
###Code
pred_p = lda.predict_proba(X_test)
# pred_p is an array of shape (number of observations) x (number of classes)
upNmb = np.sum(pred_p[:, 1] > 0.5)
displaybd("Number of upward movements with threshold 0.5: " + str(upNmb))
upNmb = np.sum(pred_p[:, 1] > 0.9)
displaybd("Number of upward movements with threshold 0.9: " + str(upNmb))
###Output
_____no_output_____
###Markdown
[[4] Quadratic Discriminant Analysis](https://franciszheng.com/dsper2020/tutorials/tutorial06/quadratic-discriminant-analysis) [[4a] Fitting a QDA Model](https://franciszheng.com/dsper2020/tutorials/tutorial06/fitting-a-qda-model)
###Code
qda = QuadraticDiscriminantAnalysis()
qdaFit = qda.fit(X_train, y_train);
printPriorProbabilities(qdaFit.classes_, qdaFit.priors_)
printGroupMeans(qdaFit.classes_, featuresNames, qdaFit.means_)
###Output
_____no_output_____
###Markdown
[[4b] QDA Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/qda-predictions)
###Code
y_hat = qdaFit.predict(X_test)
confusionDF = pd.crosstab(y_hat, y_test,
rownames=['Predicted'], colnames=['Actual'],
margins=True)
displaybd("Confusion matrix:")
display(confusionDF)
displaybd("Share of correctly predicted market movements:")
print(np.mean(y_test == y_hat))
###Output
_____no_output_____
###Markdown
[[5] k-Nearest Neighbors](https://franciszheng.com/dsper2020/tutorials/tutorial06/k-nearest-neighbors) [[5a] One Neighbor](https://franciszheng.com/dsper2020/tutorials/tutorial06/one-neighbor)
###Code
knn = neighbors.KNeighborsClassifier(n_neighbors=1)
y_hat = knn.fit(X_train, y_train).predict(X_test)
confusionDF = pd.crosstab(y_hat, y_test,
rownames=['Predicted'], colnames=['Actual'],
margins=True)
displaybd("Confusion matrix:")
display(confusionDF)
displaybd("Share of correctly predicted market movements:")
print(np.mean(y_test == y_hat))
###Output
_____no_output_____
###Markdown
[[5b] Three Neighbors](https://franciszheng.com/dsper2020/tutorials/tutorial06/three-neighbors)
###Code
knn = neighbors.KNeighborsClassifier(n_neighbors=3)
y_hat = knn.fit(X_train, y_train).predict(X_test)
confusionDF = pd.crosstab(y_hat, y_test,
rownames=['Predicted'], colnames=['Actual'],
margins=True)
displaybd("Confusion matrix:")
display(confusionDF)
displaybd("Share of correctly predicted market movements:")
print(np.mean(y_test == y_hat))
###Output
_____no_output_____
###Markdown
[[6] An Application to Caravan Insurance Data](https://franciszheng.com/dsper2020/tutorials/tutorial06/an-application-to-caravan-insurance-data) [[6a] A New Dataset](https://franciszheng.com/dsper2020/tutorials/tutorial06/a-new-dataset) [[6aa] Loading Our Dataset](https://franciszheng.com/dsper2020/tutorials/tutorial06/loading-our-dataset)
###Code
caravan = pd.read_csv('Caravan.csv', index_col=0)
display(caravan.describe())
display(caravan.describe(include=[np.object]))
###Output
_____no_output_____
###Markdown
[[6ab] Standardizing Our Data](https://franciszheng.com/dsper2020/tutorials/tutorial06/standardizing-our-data)
###Code
y = caravan.Purchase
X = caravan.drop('Purchase', axis=1).astype('float64')
X_scaled = preprocessing.scale(X)
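# Note: preprocessing.scale standardizes each column to zero mean and unit variance,
# computed here over the full dataset before the train/test split in the next cell.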
###Output
_____no_output_____
###Markdown
[[6ac] Splitting Data into Train and Test Data](https://franciszheng.com/dsper2020/tutorials/tutorial06/splitting-data-into-train-and-test-data)
###Code
X_train = X_scaled[1000:,:]
y_train = y[1000:]
X_test = X_scaled[:1000,:]
y_test = y[:1000]
###Output
_____no_output_____
###Markdown
[[6b] Using KNN for Prediction](https://franciszheng.com/dsper2020/tutorials/tutorial06/using-knn-for-prediction)
###Code
knn = neighbors.KNeighborsClassifier(n_neighbors=1)
y_hat = knn.fit(X_train, y_train).predict(X_test)
confusionDF = pd.crosstab(y_hat, y_test,
rownames=['Predicted'], colnames=['Actual'],
margins=True)
displaybd("Confusion matrix:")
display(confusionDF)
displaybd("Share of correctly predicted purchases:")
print(np.mean(y_test == y_hat))
###Output
_____no_output_____
###Markdown
[[6c] Logit](https://franciszheng.com/dsper2020/tutorials/tutorial06/logit-1)
###Code
X_train_w_constant = sm.add_constant(X_train)
X_test_w_constant = sm.add_constant(X_test, has_constant='add')
y_train_code = np.where(y_train == "No", 0, 1)
res = sm.GLM(y_train_code, X_train_w_constant, family=sm.families.Binomial()).fit()
y_hat_code = res.predict(X_test_w_constant)
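# Use a 0.25 probability cutoff (rather than the default 0.5) to flag likely purchasers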
PurchaseHat = np.where(y_hat_code > 0.25, "Yes", "No")
confusionDF = pd.crosstab(PurchaseHat, y_test,
rownames=['Predicted'], colnames=['Actual'],
margins=True)
displaybd("Confusion matrix:")
display(confusionDF)
###Output
_____no_output_____
###Markdown
[[7] More Iris Classification](https://franciszheng.com/dsper2020/tutorials/tutorial06/more-iris-classification) [[7a] Our Dataset](https://franciszheng.com/dsper2020/tutorials/tutorial06/our-dataset-1) [[7aa] Importing Our Dataset](https://franciszheng.com/dsper2020/tutorials/tutorial06/importing-our-dataset)
###Code
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'type']
iris_df = pd.read_csv(url, names=names)
###Output
_____no_output_____
###Markdown
[[7ab] Splitting Data into Train and Test Data](https://franciszheng.com/dsper2020/tutorials/tutorial06/splitting-data-into-train-and-test-data-1)
###Code
X = iris_df.iloc[:, :-1] # attributes; iloc[:, :-1] selects every column except the last
y = iris_df['type'] #labels
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.80)
###Output
_____no_output_____
###Markdown
[[7ac] Feature Scaling](https://franciszheng.com/dsper2020/tutorials/tutorial06/feature-scaling)
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
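# The scaling statistics are estimated on the training split only and then applied
# to both splits, so no information from the test set leaks into preprocessing.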
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
[[7b] Logit](https://franciszheng.com/dsper2020/tutorials/tutorial06/logit-2) [[7ba] Fitting Our Model](https://franciszheng.com/dsper2020/tutorials/tutorial06/fitting-our-model)
###Code
from sklearn.linear_model import LogisticRegression
logit_model = LogisticRegression()
logit_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
[[7bb] Making Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/making-predictions)
###Code
y_pred = logit_model.predict(X_test)
###Output
_____no_output_____
###Markdown
[[7bc] Evaluating Our Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/evaluating-our-predictions)
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm_df = pd.DataFrame(cm,
index = ['setosa','versicolor','virginica'],
columns = ['setosa','versicolor','virginica'])
sns.heatmap(cm_df, annot=True)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
###Output
_____no_output_____
###Markdown
[[7c] Linear Discriminant Analysis](https://franciszheng.com/dsper2020/tutorials/tutorial06/linear-discriminant-analysis-1) [[7ca] Fitting Our Model](https://franciszheng.com/dsper2020/tutorials/tutorial06/fitting-our-model-1)
###Code
lda_model = LinearDiscriminantAnalysis()
lda_model.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
[[7cb] Making Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/making-predictions-1)
###Code
y_pred = lda_model.predict(X_test)
###Output
_____no_output_____
###Markdown
[[7cd] Evaluating Our Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/evaluating-our-predictions-1)
###Code
print(classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
cm_df = pd.DataFrame(cm,
index = ['setosa','versicolor','virginica'],
columns = ['setosa','versicolor','virginica'])
sns.heatmap(cm_df, annot=True)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
###Output
_____no_output_____
###Markdown
[[7d] Quadratic Discriminant Analysis](https://franciszheng.com/dsper2020/tutorials/tutorial06/quadratic-discriminant-analysis-1) [[7da] Fitting Our Model](https://franciszheng.com/dsper2020/tutorials/tutorial06/fitting-our-model-2)
###Code
qda_model = QuadraticDiscriminantAnalysis()
qda_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
[[7db] Making Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/making-predictions-2)
###Code
y_pred = qda_model.predict(X_test)
###Output
_____no_output_____
###Markdown
[[7dc] Evaluating Our Predictions](https://franciszheng.com/dsper2020/tutorials/tutorial06/evaluating-our-predictions-2)
###Code
print(classification_report(y_test, y_pred))
cm = confusion_matrix(y_test, y_pred)
cm_df = pd.DataFrame(cm,
index = ['setosa','versicolor','virginica'],
columns = ['setosa','versicolor','virginica'])
sns.heatmap(cm_df, annot=True)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
###Output
_____no_output_____ |
RESEARCH/melanoma_classification.ipynb | ###Markdown
Inroduction Importing Libraries
###Code
import warnings
warnings.filterwarnings('ignore')
import os
import datetime
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import *
%matplotlib inline
import seaborn as sns
sns.set(style='darkgrid', color_codes=True)
import tensorflow as tf
from keras.optimizers import Adam, RMSprop
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.utils import plot_model
from keras.models import Sequential
from keras.applications.vgg16 import VGG16
from sklearn.metrics import classification_report, confusion_matrix, precision_score, recall_score, f1_score
# set the random seed to 42 so the same results are reproduced every time this cell is run in the future
np.random.seed(42)
tf.random.set_seed(42)
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Defining the Functions
###Code
# Function to create a dictionary of model's train, validation and test results
def store_results_to_dict(model, model_description):
from sklearn.metrics import precision_score, recall_score, f1_score
train_steps_per_epoch = np.math.ceil(train_generator.samples / train_generator.batch_size)
val_steps_per_epoch = np.math.ceil(val_generator.samples / val_generator.batch_size)
test_steps_per_epoch = np.math.ceil(test_generator.samples / test_generator.batch_size)
train_loss, train_acc = model.evaluate_generator(train_generator, steps=train_steps_per_epoch)
val_loss, val_acc = model.evaluate_generator(val_generator, steps=val_steps_per_epoch)
test_loss, test_acc = model.evaluate_generator(test_generator, steps=test_steps_per_epoch)
pred = model.predict_generator(test_generator, test_steps_per_epoch)
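    # Round the sigmoid outputs to hard 0/1 class predictions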
pred_classes = np.round(pred)
true_classes = test_generator.classes
class_labels = list(test_generator.class_indices.keys())
precision_score = precision_score(true_classes, pred_classes)
recall_score = recall_score(true_classes, pred_classes)
f1_score = f1_score(true_classes, pred_classes)
curr_dict = { 'Model':model_description
,'Train Accuracy': round(train_acc, 4)
,'Train Loss': round(train_loss, 4)
,'Validation Accuracy':round(val_acc, 4)
,'validation Loss':round(val_loss, 4)
,'Test Accuracy':round(test_acc, 4)
,'Test Loss':round(test_loss, 4)
,'Precision':round(precision_score, 4)
,'Recall':round(recall_score, 4)
,'f1':round(f1_score, 4)
}
return curr_dict
# Function to plot the accuracy and loss of the model
def plot_acc_and_loss(model_history):
acc = model_history.history['acc']
val_acc = model_history.history['val_acc']
loss = model_history.history['loss']
val_loss = model_history.history['val_loss']
epochs = range(len(acc))
plt.figure(figsize=(16,7))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'g', label='Validation acc')
plt.title('Training and validation accuracy',fontsize=20 )
plt.legend()
plt.figure(figsize=(16,7))
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'g', label='Validation loss')
plt.title('Training and validation loss', fontsize=20)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
About the data3297 total skin lesion images from the ISIC Archive (https://www.isic-archive.com/!/topWithHeader/wideContentTop/main) are used in the model. The files at https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign are uploaded. A validation set is split off from the images to evaluate the model at the end of each epoch. Directories
###Code
# train, test and validation directories of melanoma images
train_data_dir=('/content/drive/My Drive/melanoma_image_classification/data/train')
val_data_dir=('/content/drive/My Drive/melanoma_image_classification/data/val')
test_data_dir=('/content/drive/My Drive/melanoma_image_classification/data/test')
###Output
_____no_output_____
###Markdown
Numbers of Train, Test and Validation Images
###Code
# counting number of benign images and malignant images in each dataset of training, testing and validation
path='/content/drive/My Drive/melanoma_image_classification/data/'
# Numbers of benign and malignant images in each set
for folder in ['train', 'val', 'test']:
n_malignant = len(os.listdir(path + folder + '/malignant'))
n_benign = len(os.listdir(path + folder + '/benign'))
print("There are {} benign skin images and {} malignant skin images in {} set. ".format(n_benign, n_malignant, folder))
total_images_train_benign = os.listdir(path + 'train/benign')
total_images_train_malignant = os.listdir(path + 'train/malignant')
total_images_test_benign = os.listdir(path + 'test/benign')
total_images_test_malignant = os.listdir(path + 'test/malignant')
total_images_val_benign = os.listdir(path + 'val/benign')
total_images_val_malignant = os.listdir(path + 'val/malignant')
###Output
_____no_output_____
###Markdown
Visualization of distributions
###Code
# plot shows the number of benign images versus malignant images in the train, val and test sets
plt.figure(figsize=(20, 7))
plt.subplot(131)
plt.title('Train Data Distribution', fontsize=14)
sns.barplot(x=['benign','malignant'],y=[len(total_images_train_benign),len(total_images_train_malignant)])
plt.subplot(132)
plt.title('Test Data Distribution', fontsize=14)
sns.barplot(x=['benign','malignant'],y=[len(total_images_test_benign),len(total_images_test_malignant)])
plt.subplot(133)
plt.title('Validation Data Distribution', fontsize=14)
sns.barplot(x=['benign','malignant'],y=[len(total_images_val_benign),len(total_images_val_malignant)])
plt.show()
###Output
_____no_output_____
###Markdown
Display Images
###Code
# displaying benign and malignant skin images from train, test and val using imshow and imread
fig, ax = plt.subplots(2, 3, figsize=(18, 9))
ax = ax.ravel()
fig.suptitle('Benign and Malignant Skin Images', fontsize=24)
for i, _set in enumerate(['train', 'val', 'test']):
set_path = path+_set
ax[i].imshow(plt.imread(set_path+'/benign/'+os.listdir(set_path+'/benign')[0]), cmap='gray')
ax[i].set_title('{} set: benign'.format(_set))
ax[i+3].imshow(plt.imread(set_path+'/malignant/'+os.listdir(set_path+'/malignant')[0]), cmap='gray')
ax[i+3].set_title('{} set: malignant'.format(_set))
###Output
_____no_output_____
###Markdown
Define generators
###Code
# Get all the data in the directory DATA/TRAIN, and reshape them
print("Train data:")
train_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(train_data_dir,
target_size=(150, 150), batch_size=32, class_mode='binary')
# Get all the data in the directory DATA/TEST , and reshape them
print("Test data:")
test_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(test_data_dir,
target_size=(150, 150), batch_size=32, class_mode='binary', shuffle=False)
# Get all the data in the directory DATA/VAL, and reshape them
print("Validation data:")
val_generator = ImageDataGenerator(rescale=1./255).flow_from_directory(val_data_dir,
target_size=(150, 150), batch_size=32, class_mode='binary')
###Output
Train data:
Found 2141 images belonging to 2 classes.
Test data:
Found 660 images belonging to 2 classes.
Validation data:
Found 496 images belonging to 2 classes.
###Markdown
Calculate the step size per epoch
###Code
# counting the stepsize per epoch
train_steps_per_epoch = np.math.ceil(train_generator.samples / train_generator.batch_size)
val_steps_per_epoch = np.math.ceil(val_generator.samples / val_generator.batch_size)
test_steps_per_epoch = np.math.ceil(test_generator.samples / test_generator.batch_size)
###Output
_____no_output_____
###Markdown
Define the parameters
###Code
# Define the parameters of image transformation
train_datagen = ImageDataGenerator(rescale=1./255,rotation_range=40, # rotating from -40 to 40 degrees
                                   width_shift_range=0.2, # allowing the image generator to shift images left or right by up to 20% of the total width
                                   height_shift_range=0.2, # allowing the image generator to shift images up or down by up to 20% of the total height
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_data_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
val_generator = test_datagen.flow_from_directory(val_data_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
###Output
Found 2141 images belonging to 2 classes.
Found 496 images belonging to 2 classes.
###Markdown
VGG16 Model
###Code
conv_base = VGG16(weights='imagenet', # use the weights pre-trained on ImageNet
                  include_top=False, # import only the convolutional base, not the fully connected classifier head
input_shape=(150,150,3))
conv_base.summary()
model1=Sequential()
model1.add(conv_base) #adding conv_base that was imported
model1.add(Flatten())
model1.add(Dense(256,activation='relu'))#adding fully connected dense layer with 256 neurons
model1.add(Dense(1,activation='sigmoid')) # output layer with single neuron
#structure of model1
plot_model(model1)
# looking at the structure of my neural network
###Output
_____no_output_____
###Markdown
Compile and train the model
###Code
model1.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=2e-5), # small learning rate because the pre-trained weights only need fine-tuning
metrics=['acc'])
# set the random seeds to 42 so the results can be reproduced if this cell is run again in the future
np.random.seed(42)
tf.random.set_seed(42)
# ⏰ This cell may take several minutes to run
start = datetime.datetime.now()
history = model1.fit_generator(train_generator,
steps_per_epoch=train_steps_per_epoch,
epochs=23,
validation_data=val_generator,
validation_steps=val_steps_per_epoch)
end = datetime.datetime.now()
elapsed = end - start
print('---------Elapsed Time-----------')
print('Time to fit the VGG16 model is:\n {}'.format(elapsed))
###Output
Epoch 1/23
67/67 [==============================] - 24s 359ms/step - loss: 0.4692 - acc: 0.7753 - val_loss: 0.3795 - val_acc: 0.8125
Epoch 2/23
67/67 [==============================] - 24s 360ms/step - loss: 0.3615 - acc: 0.8202 - val_loss: 0.4405 - val_acc: 0.7823
Epoch 3/23
67/67 [==============================] - 24s 358ms/step - loss: 0.3240 - acc: 0.8538 - val_loss: 0.3847 - val_acc: 0.8125
Epoch 4/23
67/67 [==============================] - 24s 356ms/step - loss: 0.3112 - acc: 0.8566 - val_loss: 0.3394 - val_acc: 0.8387
Epoch 5/23
67/67 [==============================] - 24s 356ms/step - loss: 0.2850 - acc: 0.8627 - val_loss: 0.4044 - val_acc: 0.8085
Epoch 6/23
67/67 [==============================] - 24s 355ms/step - loss: 0.2722 - acc: 0.8716 - val_loss: 0.4397 - val_acc: 0.7782
Epoch 7/23
67/67 [==============================] - 23s 350ms/step - loss: 0.2584 - acc: 0.8814 - val_loss: 0.5426 - val_acc: 0.7177
Epoch 8/23
67/67 [==============================] - 24s 356ms/step - loss: 0.2528 - acc: 0.8865 - val_loss: 0.5782 - val_acc: 0.7177
Epoch 9/23
67/67 [==============================] - 24s 351ms/step - loss: 0.2391 - acc: 0.8930 - val_loss: 0.6174 - val_acc: 0.7198
Epoch 10/23
67/67 [==============================] - 23s 349ms/step - loss: 0.2329 - acc: 0.9019 - val_loss: 0.5095 - val_acc: 0.7419
Epoch 11/23
67/67 [==============================] - 23s 349ms/step - loss: 0.2129 - acc: 0.9010 - val_loss: 0.3976 - val_acc: 0.8266
Epoch 12/23
67/67 [==============================] - 23s 347ms/step - loss: 0.2081 - acc: 0.9047 - val_loss: 0.5241 - val_acc: 0.7823
Epoch 13/23
67/67 [==============================] - 23s 345ms/step - loss: 0.1928 - acc: 0.9220 - val_loss: 0.5147 - val_acc: 0.7581
Epoch 14/23
67/67 [==============================] - 23s 345ms/step - loss: 0.1938 - acc: 0.9183 - val_loss: 0.3532 - val_acc: 0.8569
Epoch 15/23
67/67 [==============================] - 23s 346ms/step - loss: 0.1937 - acc: 0.9159 - val_loss: 0.8103 - val_acc: 0.7359
Epoch 16/23
67/67 [==============================] - 23s 342ms/step - loss: 0.1786 - acc: 0.9141 - val_loss: 0.4237 - val_acc: 0.7984
Epoch 17/23
67/67 [==============================] - 23s 342ms/step - loss: 0.1736 - acc: 0.9164 - val_loss: 0.5471 - val_acc: 0.8226
Epoch 18/23
67/67 [==============================] - 23s 342ms/step - loss: 0.1706 - acc: 0.9253 - val_loss: 0.5121 - val_acc: 0.7601
Epoch 19/23
67/67 [==============================] - 23s 340ms/step - loss: 0.1603 - acc: 0.9271 - val_loss: 0.5804 - val_acc: 0.7460
Epoch 20/23
67/67 [==============================] - 23s 341ms/step - loss: 0.1599 - acc: 0.9351 - val_loss: 0.5521 - val_acc: 0.7560
Epoch 21/23
67/67 [==============================] - 23s 347ms/step - loss: 0.1383 - acc: 0.9430 - val_loss: 0.4002 - val_acc: 0.8185
Epoch 22/23
67/67 [==============================] - 23s 339ms/step - loss: 0.1400 - acc: 0.9383 - val_loss: 0.8403 - val_acc: 0.7661
Epoch 23/23
67/67 [==============================] - 23s 340ms/step - loss: 0.1462 - acc: 0.9454 - val_loss: 0.4531 - val_acc: 0.8044
---------Elapsed Time-----------
Time to fit the VGG16 model is:
0:09:06.069182
###Markdown
Store the results
###Code
results=store_results_to_dict(model1, 'VGG16 transfer model (conv base + dense layers)')
results_final = []
# appending the results of the new model
results_final.append(results)
# putting the results in dataframe
df_model_results = pd.DataFrame(results_final)
df_model_results
###Output
_____no_output_____
###Markdown
Plot the results
###Code
plot_acc_and_loss(history)
###Output
_____no_output_____
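###Markdown
The `plot_acc_and_loss` helper used above is presumably defined earlier in the notebook and is not shown in this excerpt. A minimal sketch of what such a helper could look like (purely illustrative, using the `acc`/`val_acc` history keys produced by `metrics=['acc']` above):
```python
import matplotlib.pyplot as plt

def plot_acc_and_loss(history):
    """Plot training/validation accuracy and loss from a Keras History object."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(history.history['acc'], label='train acc')
    ax1.plot(history.history['val_acc'], label='val acc')
    ax1.set_title('Accuracy')
    ax1.legend()
    ax2.plot(history.history['loss'], label='train loss')
    ax2.plot(history.history['val_loss'], label='val loss')
    ax2.set_title('Loss')
    ax2.legend()
    plt.show()
```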
###Markdown
Prediction results on the test set
We got 87% accuracy in classifying benign and malignant melanoma images.
###Code
pred = model1.predict_generator(test_generator, test_steps_per_epoch)
pred_classes = np.round(pred)
true_classes = test_generator.classes
class_labels = list(test_generator.class_indices.keys())
print("Baseline Model:\n")
print("Confusion Matrix:\n", confusion_matrix(true_classes, pred_classes))
print("----------------------------------------------------")
print("Classification Report:\n", classification_report(true_classes, pred_classes, target_names=class_labels))
###Output
Baseline Model:
Confusion Matrix:
[[325 35]
[ 54 246]]
----------------------------------------------------
Classification Report:
precision recall f1-score support
benign 0.86 0.90 0.88 360
malignant 0.88 0.82 0.85 300
accuracy 0.87 660
macro avg 0.87 0.86 0.86 660
weighted avg 0.87 0.87 0.86 660
###Markdown
Conclusion
In this project, we classified the melanoma images as benign and malignant. For NF patients, we will similarly be able to build a smaller classification model and to classify, segment and track the skin lesions once we get the image data.
###Code
###Output
_____no_output_____ |
w3/w3-day_1/Seaborn_working_with.ipynb | ###Markdown
Regression line with Confidence interval
###Code
# a regression line with a confidence interval plot
sns.lmplot(x='total_bill', y='tip', data = tips)
plt.show()
sns.lmplot(x='total_bill',y='tip', data = tips, hue = 'sex')
plt.show()
sns.lmplot(x='total_bill',y='tip', data = tips, col = 'sex')
plt.show()
###Output
_____no_output_____
###Markdown
Residual Plot
###Code
sns.residplot(x='total_bill',y='tip', data = tips)
plt.show()
###Output
_____no_output_____
###Markdown
Strip plot
###Code
sns.stripplot(x='day',y ='tip',data = tips, size =5, jitter = True)
# overwrite the original y-label from 'tip' to 'tip ($)'
plt.ylabel('tip ($)')
plt.show()
###Output
_____no_output_____
###Markdown
Swarm plot
###Code
sns.swarmplot(x='day',y='tip',data = tips, hue = 'sex')
# overwrite the original y-label from 'tip' to 'tip ($)'
plt.ylabel('tip ($)')
plt.show()
###Output
/Users/louisrossi/opt/anaconda3/lib/python3.8/site-packages/seaborn/categorical.py:1296: UserWarning: 6.5% of the points cannot be placed; you may want to decrease the size of the markers or use stripplot.
warnings.warn(msg, UserWarning)
/Users/louisrossi/opt/anaconda3/lib/python3.8/site-packages/seaborn/categorical.py:1296: UserWarning: 5.7% of the points cannot be placed; you may want to decrease the size of the markers or use stripplot.
warnings.warn(msg, UserWarning)
###Markdown
Boxplot + Violin plot
###Code
# create a grid with 1 row 2 columns
plt.subplot(1,2,1)
# the first plot
sns.boxplot(x='day',y='tip', data = tips)
plt.ylabel('tip ($)')
plt.subplot(1,2,2)
# the second plot
sns.violinplot(x='day',y='tip', data = tips)
plt.ylabel('tip ($)')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Joint plot
###Code
sns.jointplot(x='total_bill', y='tip', data = tips)
plt.show()
sns.jointplot(x='total_bill', y='tip', data = tips, kind = 'kde')
plt.show()
###Output
_____no_output_____
###Markdown
Pair plot
###Code
sns.pairplot(tips)
plt.show()
sns.pairplot(tips, hue = 'sex')
plt.show()
###Output
_____no_output_____
###Markdown
Heatmap
###Code
# compute correlations between features
df_corr = tips.corr()
# plot the correlations
sns.heatmap(df_corr)
plt.title('Correlation plot')
plt.show()
###Output
_____no_output_____ |
successful_runs/comparisons-to-default/flu-MARL-[MAC, Default Cmp].ipynb | ###Markdown
k-vs-(N-k) Flu ABM Env
- k-vs-(N-k) experiment
- Kicking the tires on a multiplayer instance of the Flu ABM with RL learners - MADDPG/MAC RL algorithm
###Code
import itertools, importlib, sys, warnings, os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# ML libs
import tensorflow as tf
print("Tensorflow version:", tf.__version__)
# warnings.filterwarnings("ignore")
log_path = './log/flu'
#tensorboard --logdir=flugame_worker_1:'./log/train_rf_flugame_worker'
## suppress annoying verbose tf msgs
warnings.filterwarnings("ignore")
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # '3' to block all including error msgs
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
sys.path.append('./embodied_arch')
import embodied_arch.embodied_central_Qcritic as emac
importlib.reload(emac)
import flumodel_python.flu_env as Fenv
from embodied_misc import ActionPolicyNetwork, ValueNetwork, SensoriumNetworkTemplate
###Output
_____no_output_____
###Markdown
Env Setup
###Code
# exos = [1,2,3,10] # (np.random.sample(33) < 0.3)
exos = (np.random.sample(9223) < 0.004)
exos = [j for j in range(len(exos)) if exos[j]==True]
print(len(exos))
importlib.reload(Fenv);
importlib.reload(emac);
tf.reset_default_graph()
flu_menv = Fenv.Flu_env(
exo_idx=exos,
model_path="./flumodel_python/"
)
print(flu_menv.actor_count)
print(flu_menv.state_space_size, flu_menv.action_space_size)
###Output
38
8 1
###Markdown
MARL Setup Demo
###Code
actor = lambda s: ActionPolicyNetwork(s, hSeq=(8,), gamma_reg=1e-1)
value = lambda s: ValueNetwork(s, hSeq=(8,), gamma_reg=1.)
sensor = lambda st, out_dim: SensoriumNetworkTemplate(st, hSeq=(16,8,8,), out_dim=out_dim, gamma_reg=5.)
# num_episodes, n_epochs, max_len = (5, 4, 5)
# num_episodes, n_epochs, max_len = (100, 1501, 35)
# num_episodes, max_len, n_epochs, evry = (25, 10, 121, 40)
num_episodes, max_len, n_epochs, evry = (100, 35, 451, 50)
flumac = emac.EmbodiedAgent_MAC(
name="flu_MAC", env_=flu_menv,
alpha_p=150., alpha_v=5., alpha_q=2.5,
actorNN=actor, valueNN=value,
sensorium=sensor,latentDim=4,
max_episode_length=max_len,
_every_=evry
)
# ??flumac.play
# (flumac.a_size, flumac.env.action_space_size)
sess = tf.InteractiveSession()
flumac.init_graph(sess) # note tboard log dir
saver = tf.train.Saver(max_to_keep=1)
###Output
Tensorboard logs in: ./log/train_flu_MAC
###Markdown
Baseline
Baseline for RL/Adaptive Behavioral Model
###Code
print('Baselining untrained pnet...')
rwds0 = []
acts_cov = np.zeros([flumac.actor_count,flumac.actor_count])
for k in range(num_episodes):
flumac.play(sess, terminal_reward=0.);
rwds0.append(flumac.last_total_returns)
actions = np.array(flumac.episode_buffer['actions']).T
acts_cov = acts_cov + (np.cov(actions)/num_episodes)
print("\rEpisode {}/{}".format(k, num_episodes),end="")
# Compute average rewards
base_perf = 100.*np.mean(np.array(rwds0)/float(flumac.max_episode_length))
base_per_agent = 100.*np.mean(np.array(rwds0)/float(flumac.max_episode_length), axis=0)
print("\nAgent is flu-free for an average of {}pct of seasons".format(
1.*base_perf))
acts_corr = acts_cov.copy()
jm, km = acts_corr.shape
for j in range(jm):
for k in range(km):
denom = np.sqrt((acts_corr[j,j])*(acts_corr[k,k]))
acts_corr[j,k] = acts_corr[j,k]/denom
print("Agent Action Correlations:")
sns.heatmap(acts_corr, center=0)
###Output
Agent Action Correlations:
###Markdown
Train Agent Population
###Code
obs = []
for ct in range(50):
flumac.play(sess)
tmp = flumac.train_eval_QC(sess)
obs.append(np.mean(tmp, axis=0))
if ct%25==0:
print('\r\tIteration {}: Value loss({})'.format(
ct, np.mean(tmp)), end="")
plt.plot(obs[1:]);
# ### Train Agents
print('Training...')
hist = flumac.work(sess, num_epochs=n_epochs, saver=saver)
###Output
Training...
Starting agent flu_MAC
Epoch no.: 0/451
Stats @Step 0: (Min, Mean, Max)
Perf/Recent Rewards: (25.0, 32.55263157894737, 35.0)
Losses/Policy LLs: (-2.2608888, -0.60087085, -0.11010261)
Losses/Policy Entropies: (0.33433878, 0.6049257, 0.6931392)
Values/Critic Scores: (-4.1316667, 0.23619446, 3.2716048)
Values/Mean Q Scores: (13.148774, 17.42588, 19.983746)
Saved Model
Epoch no.: 50/451
Stats @Step 50: (Min, Mean, Max)
Perf/Recent Rewards: (27.0, 32.973684210526315, 35.0)
Losses/Policy LLs: (-2.8019228, -0.58850634, -0.06261318)
Losses/Policy Entropies: (0.22887048, 0.59595674, 0.6928854)
Values/Critic Scores: (-3.635611, 0.1409748, 3.0927284)
Values/Mean Q Scores: (21.384138, 25.468422, 27.90224)
Saved Model
Epoch no.: 100/451
Stats @Step 100: (Min, Mean, Max)
Perf/Recent Rewards: (26.0, 32.921052631578945, 35.0)
Losses/Policy LLs: (-2.313823, -0.5986697, -0.104119614)
Losses/Policy Entropies: (0.3226206, 0.6056469, 0.6927041)
Values/Critic Scores: (-6.481694, 0.041140266, 2.7762804)
Values/Mean Q Scores: (19.315922, 22.823996, 25.531332)
Saved Model
Epoch no.: 150/451
Stats @Step 150: (Min, Mean, Max)
Perf/Recent Rewards: (27.0, 33.078947368421055, 35.0)
Losses/Policy LLs: (-2.1723638, -0.6047279, -0.12093454)
Losses/Policy Entropies: (0.3546087, 0.59984016, 0.69303197)
Values/Critic Scores: (-4.421058, -0.036370203, 4.140975)
Values/Mean Q Scores: (17.072886, 21.429373, 24.028694)
Saved Model
Epoch no.: 200/451
Stats @Step 200: (Min, Mean, Max)
Perf/Recent Rewards: (30.0, 33.3421052631579, 35.0)
Losses/Policy LLs: (-3.0960093, -0.5981115, -0.046284117)
Losses/Policy Entropies: (0.18422109, 0.5904095, 0.69313556)
Values/Critic Scores: (-5.4957156, 0.055095013, 4.2656927)
Values/Mean Q Scores: (18.929699, 21.962545, 23.63159)
Saved Model
Epoch no.: 250/451
Stats @Step 250: (Min, Mean, Max)
Perf/Recent Rewards: (25.0, 33.05263157894737, 35.0)
Losses/Policy LLs: (-2.9169164, -0.59213865, -0.0556187)
Losses/Policy Entropies: (0.21041551, 0.5970644, 0.69302166)
Values/Critic Scores: (-3.5398364, 0.1870522, 4.429839)
Values/Mean Q Scores: (21.011278, 25.063356, 26.982113)
Saved Model
Epoch no.: 300/451
Stats @Step 300: (Min, Mean, Max)
Perf/Recent Rewards: (28.0, 33.26315789473684, 35.0)
Losses/Policy LLs: (-2.35666, -0.5936334, -0.099528804)
Losses/Policy Entropies: (0.3133606, 0.6017741, 0.6930785)
Values/Critic Scores: (-2.9842036, 0.15928702, 3.469826)
Values/Mean Q Scores: (19.607126, 24.719528, 27.827423)
Saved Model
Epoch no.: 350/451
Stats @Step 350: (Min, Mean, Max)
Perf/Recent Rewards: (28.0, 32.921052631578945, 35.0)
Losses/Policy LLs: (-2.349049, -0.5890703, -0.10032864)
Losses/Policy Entropies: (0.3149911, 0.59216577, 0.6927165)
Values/Critic Scores: (-4.4624715, 0.26040095, 3.1501908)
Values/Mean Q Scores: (15.378132, 23.509571, 26.322552)
Saved Model
Epoch no.: 400/451
Stats @Step 400: (Min, Mean, Max)
Perf/Recent Rewards: (27.0, 33.23684210526316, 35.0)
Losses/Policy LLs: (-2.4175484, -0.5871441, -0.093365945)
Losses/Policy Entropies: (0.30054313, 0.57564735, 0.69309914)
Values/Critic Scores: (-5.593222, 0.39028728, 3.8277018)
Values/Mean Q Scores: (18.914165, 23.568161, 26.058767)
Saved Model
Epoch no.: 450/451
Stats @Step 450: (Min, Mean, Max)
Perf/Recent Rewards: (25.0, 32.89473684210526, 35.0)
Losses/Policy LLs: (-2.6269681, -0.5628719, -0.075043984)
Losses/Policy Entropies: (0.25954115, 0.56649894, 0.69313854)
Values/Critic Scores: (-5.78232, 0.4455961, 4.4236746)
Values/Mean Q Scores: (18.889389, 23.88217, 26.022322)
Saved Model
###Markdown
Test
###Code
# Test pnet!
print('Testing...')
rwds = []
acts_cov_trained = np.zeros([flumac.actor_count,flumac.actor_count])
for k in range(num_episodes):
flumac.play(sess)
rwds.append(flumac.last_total_returns)
actions = np.array(flumac.episode_buffer['actions']).T
acts_cov_trained = acts_cov_trained + (np.cov(actions)/num_episodes)
print("\rEpisode {}/{}".format(k, num_episodes),end="")
trained_perf = 100.*np.mean(np.array(rwds)/float(flumac.max_episode_length))
trained_per_agent = 100.*np.mean(np.array(rwds)/float(flumac.max_episode_length), axis=0)
print("\nAgent is flu-free for an average of {} pct compared to baseline of {} pct".format(
1.*trained_perf, 1.*base_perf) )
acts_corr_trained = acts_cov_trained.copy()
jm, km = acts_corr_trained.shape
for j in range(jm):
for k in range(km):
denom = np.sqrt((acts_cov_trained[j,j])*(acts_cov_trained[k,k]))
acts_corr_trained[j,k] = acts_corr_trained[j,k]/denom
mask = np.zeros_like(acts_corr_trained)
mask[np.triu_indices_from(mask,k=0)] = True
with sns.axes_style("darkgrid"):
plt.rcParams['figure.figsize'] = (15, 12)
ax = sns.heatmap(acts_corr_trained,
mask=mask, vmax=0.125, center=0)
ax.set_ylabel("Agent Index")
ax.set_xlabel("Agent Index")
ax.set_title("Action Correlations")
###Output
_____no_output_____
###Markdown
Evaluate
###Code
rwds0_df = pd.DataFrame(100.*(np.array(rwds0)/float(flumac.max_episode_length)))
rwds_df = pd.DataFrame(100.*(np.array(rwds)/float(flumac.max_episode_length)))
rwds0_df['Wave'] = "Baseline"
rwds_df['Wave'] = "Trained"
resDF = pd.concat([rwds0_df, rwds_df])
resDF.columns = ["Agent"+str(tc) if tc != "Wave" else tc for tc in resDF.columns]  # use != rather than 'is not' for string comparison
# resDF['id'] = resDF.index
print(resDF.shape)
# resDF.head()
resDF = resDF.melt(
id_vars=['Wave'], #['id', 'Wave'],
value_vars=[tc for tc in resDF.columns if "Agent" in tc]
)
resDF = resDF.rename(columns={"variable": "Agent", "value": "Immune_pct"})
print(resDF.shape)
res_tabs = resDF.groupby(['Agent','Wave']).aggregate(['mean','std']) # res_tabs
# resDF.head()
plt.rcParams['figure.figsize'] = (9, 35)
sns.set(font_scale=1.25)
fig = sns.violinplot(data=resDF, inner="box", cut=0,
x="Immune_pct", y="Agent", hue="Wave",
split=True);
fig.set_title(
'Average Episode Rewards: Baseline vs Trained Agents.');
fig.legend(loc='upper left');
base_meanDF = resDF[resDF.Wave=="Baseline"].groupby(['Agent']).aggregate(['mean'])
base_meanDF.sort_index(inplace=True)
trained_meanDF = resDF[resDF.Wave=="Trained"].groupby(['Agent']).aggregate(['mean'])
trained_meanDF.sort_index(inplace=True)
mean_diffDF = (trained_meanDF - base_meanDF)
mean_diffDF.columns = ['Mean_Immune_Pct_Change']
# mean_diffDF.head()
plt.rcParams['figure.figsize'] = (9, 19)
sns.set_color_codes("dark")
fig, axs = plt.subplots(2,1, sharex=True, gridspec_kw={'height_ratios': [1,4]})
cmp = sns.violinplot(x='Mean_Immune_Pct_Change', cut=0, inner='quartile',
data=mean_diffDF, ax=axs[0])
axs[0].set_ylabel('Agent Aggregate');
axs[0].set_title(
'Distribution of Changes in Flu Immunity Rates:\nIn Aggregate & Per-Agent.'
);
sns.barplot(y=mean_diffDF.index, x="Mean_Immune_Pct_Change",
data=mean_diffDF, color="r",
label="Success Rate", ax=axs[1]);
plt.subplots_adjust(wspace=0, hspace=0)
axs[1].set_xlabel('Avg. Change in Immunity Rates');
###Output
_____no_output_____
###Markdown
Baseline for Default Behavioral Model
###Code
import flumodel_python.flu_env_basic as FABM
# ?Fenv.Flu_ABM
importlib.reload(FABM);
flu = FABM.Flu_ABM(model_path="./flumodel_python/")
# Burn-in Flu ABM First...
for _ in range(30):
_ = flu.stepAll()
flu_hist = np.zeros([num_episodes, len(exos)])
for k in range(num_episodes):
tmp = np.zeros(len(exos))
for _ in range(max_len):
tmp += (1.-np.array(flu.stepAll(), dtype=float))[exos]
flu_hist[k,:] = tmp
rwds_dbm = 100.*flu_hist/float(max_len)
print(len(exos), rwds_dbm.shape,
np.mean(rwds_dbm, axis=0).shape)
print(np.mean(rwds_dbm), "\n",
np.mean(rwds_dbm, axis=0)
)
plt.rcParams['figure.figsize'] = (8,3)
sns.boxplot(np.mean(rwds_dbm, axis=0) - base_per_agent)
###Output
_____no_output_____
###Markdown
Compare to Default Behavioral Model
###Code
trcmp = 100.*(np.array(rwds)/float(flumac.max_episode_length))
cmp = np.mean((trcmp-rwds_dbm), axis=0)
bplot = sns.boxplot(cmp)
bplot.set_title(
'Pct Improvement in Flu Outcomes\nRL Behaviors over Default Behavioral Model');
plt.rcParams['figure.figsize'] = (8,3)
np.mean(trcmp - np.mean(rwds_dbm, axis=0))
###Output
_____no_output_____ |
Normal_Equation/Exploration of Data.ipynb | ###Markdown
Exploratory Analysis on the SHPO OLI Dataset
By Kellen Bullock

Framing the Problem and Big Picture
The OLI is a database that contains roughly 65,000 records, of which up to 20,000 are duplicates. Several people have been cleaning the data by hand, marking each record as poss_dup or good in the duplicate_check field. The State Historic Preservation Office then makes the decision to roll those records off the main table into another. To make more informed decisions and queries on the database, duplicate records should be taken out: duplicates create confusion for users by providing redundant information that the user does not want.

Problem Statement
How can automatic, complex duplicate record detection be implemented to prevent more duplicate records from entering the dataset?

Proposed solution:
Training a neural network to make a probabilistic decision on whether a record is a duplicate or not is far more efficient than checking every record in the dataset. Since several thousand records have been classified already, that data will be used for supervised learning.

How to solve this problem manually:
A field such as PROPNAME, ADDRESS or RESNAME is sorted alphabetically. First the PROPNAME fields are compared for similarity, then ADDRESS, RESNAME, ROOF_TYP, and WINDOW_TYPE. If records match up, the records are marked as poss_dup in the duplicate_check column.

Performance measurement:
A CSV file that contains the OBJECTID and the probability score that the classifier assigns will be written out. This will be joined with the main table dataset and checked by a person.
Accuracy measurement: RMSE. Recall will also be measured. If the classifier can perform at 85% or higher I will call it a success.

Assumptions: The data has complete records; there are no misspellings in the data; there is a combination of numerical, text, and categorical values; and the people classifying the records as duplicates are very accurate (92% accurate or higher).

Overall project
Title what attempt this is to differentiate models. We will use branching in git to help us develop.

Toolset
- Jupyter notebooks will be used for data exploration and visualization.
- TensorBoard will be used for accuracy and cost analysis.

Notes for Table of Contents:
- Establish sections
- Enable hyperlinks

Put this into the README.md file:
Objectives:
- Name attributes and describe characteristics
- % of nulls
- Type of data, i.e. string, int, float
- Noise present, such as outliers, logistic or rounding errors
- What is useful, what isn't, and why
- Type of distribution
- Identifying label data
- Visualization of data
- Identify correlations between variables
- Propose how the problem would be solved manually
- Provide transformations if necessary
- Anything else of interest

Exploratory data strategy
The nature of the dataset is complex. This is due to the descriptive attributes associated with properties and cemeteries. There are only a couple of real numerical datatypes, such as lat and long. I intend to go through each type, figuring out whether it is categorical, string/text, or numerical. Once the categories are identified, a numerical encoding scheme for them will be adopted.

Questions:
- Can I even do descriptive statistics on categorical data? From Comer's class I remember there being some very strange things that happened.
- What do I even do with the categorical data?

Data cleaning strategies:
I could take all null values and fill them with "No Data". This would allow for complete records, give vectors to data that isn't there, and in turn allow the whole record to be processed. A rough sketch of this idea is shown below.
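As a purely illustrative sketch of this null-filling idea - together with the TF-IDF and cosine-similarity comparison discussed under Models further down - the preprocessing could look something like this (PROPNAME, RESNAME and ADDRESS are OLI fields used later in this notebook; everything else, including the 0.9 threshold, is a placeholder assumption):

```python
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

df = pd.read_excel('datasets/prepared_data/Oklahoma_Working.xls')

# Strategy: make records "complete" by replacing nulls with an explicit placeholder
df = df.fillna('No Data')

# Concatenate descriptive fields into one string per record, vectorize with TF-IDF
# and compare records pairwise with cosine similarity
text = (df['PROPNAME'].astype(str) + ' ' +
        df['RESNAME'].astype(str) + ' ' +
        df['ADDRESS'].astype(str))
tfidf = TfidfVectorizer().fit_transform(text)
sim = cosine_similarity(tfidf[:200], tfidf[:200])  # small slice to keep it cheap

# pairs with very high similarity are candidate duplicates
i, j = np.where(np.triu(sim, k=1) > 0.9)
print(list(zip(i[:10], j[:10])))
```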
I really think this should be pursued because of how many null records there are.

Drawbacks: I am creating data and altering data that is in the database. I would also have to make the preprocessing general enough to catch all of the null or missing values. The model could then train, but if I miss something it could crash horribly without really telling me why, or I could get a very poor level of accuracy.

Implementation:
- Look at all columns and identify incomplete ones
- Do a fillna() where possible
- Some categorical data has 00 or none and that would need to change
- Run info() again to see what pandas says
- Visually inspect the data

Models
1. Turn everything into text, concatenate all attributes into one string, apply TF-IDF and cosine similarity, then run the model.
   Notes: I do not know how, or whether, I need to do descriptive stats on the vectors of the strings.
2. Exploration-driven modeling: convert individual columns into vectors, apply PCA (principal component analysis) to the vectors and drop the relevant columns, then train the model.
3. Find complete records. After discovering complete fields, do preprocessing and run the model just on those. This strategy will probably not work on the whole dataset because there are so many nulls.
4. Implement a One-Hot Encoder instead of TF-IDF. Problems I foresee with this method: there are a lot of nominal values that are misspelled, and each misspelling creates a whole new category, which adds complexity to the model and may keep it from generalizing well to new data. For example, if we one-hot encode Oklahoma and Olahoma (county) we create two different categories. (Contrasting with the TF-IDF method: there we are creating vectors based on the occurrence of important words within the corpus. Okay, it seems it may have the same effect.) This continues to prove that either a massive data cleaning process needs to be done on the dataset or I need to cherry-pick good records out of the dataset.

Importing modules
###Code
import pandas as pd
import matplotlib as plt
import numpy as np
import seaborn as sns
import sys
%matplotlib inline
plt.rcParams["figure.figsize"] = [10,10]
pd.set_option('display.max_columns', 70)
pd.set_option('display.max_rows', 200)
###Output
_____no_output_____
###Markdown
Loading dataset in:
###Code
df = pd.read_excel('datasets/prepared_data/Oklahoma_Working.xls')
df.head()
###Output
_____no_output_____
###Markdown
What is each Field?
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3304 entries, 1 to 3948
Data columns (total 6 columns):
PROPNAME 3304 non-null object
RESNAME 3304 non-null object
ADDRESS 3304 non-null object
Lat 3304 non-null float64
Long 3304 non-null float64
duplicate_check 3304 non-null int64
dtypes: float64(2), int64(1), object(3)
memory usage: 180.7+ KB
###Markdown
Unique Fields
Below is a custom-made function designed to display the unique values in the dataset all in one table.
###Code
from uniques import uniques
uniques(df)
df.PROPNAME.unique()
###Output
_____no_output_____
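###Markdown
The `uniques` helper is imported from a local `uniques.py` module that is not shown in this notebook. A plausible sketch of such a helper (an assumption for illustration, not the actual implementation) could be:
```python
import pandas as pd

def uniques(df):
    """Return one table listing the unique values of every column in the DataFrame."""
    return pd.DataFrame({col: pd.Series(df[col].unique()) for col in df.columns})
```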
###Markdown
The `df.PROPNAME.unique()` call above does exactly what the table does. As you can see, there are multiple spelling errors, and no naming standard was used.
###Code
# Count how many unique property names there are
len(df.PROPNAME.unique().tolist())
###Output
_____no_output_____ |
notebooks/mesonic-mbs.ipynb | ###Markdown
Model-Based Sonification using mesonic
This notebook shows how a Model-Based Sonification can be implemented using mesonic.
###Code
import mesonic
import sc3nb as scn
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import cdist, euclidean
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Preparation of Synths
Let's start by preparing our Context and Synths.
###Code
context = mesonic.create_context()
context.enable_realtime()
###Output
_____no_output_____
###Markdown
The model we use allows us to interact with it using the mouse. For this we create an additional SynthDef as a click marker.
###Code
scn.SynthDef("noise", r"""
{ |out=0, freq=2000, rq=0.02, amp=0.3, dur=1, pos=0 |
Out.ar(out, Pan2.ar(
BPF.ar(WhiteNoise.ar(10), freq, rq)
* Line.kr(1, 0, dur, doneAction: 2).pow(4), pos, amp));
}""").add()
test = context.synths.create("noise", mutable=False)
test.start()
###Output
_____no_output_____
###Markdown
Data Preparation
We also prepare the data for the sonification. We will use the [Palmer penguins dataset](https://allisonhorst.github.io/palmerpenguins/) for the examples.
###Code
df = sns.load_dataset("penguins")
df = df.dropna(subset=["bill_length_mm", "bill_depth_mm", "flipper_length_mm", "body_mass_g", "sex"])
df = df.reset_index(drop=True)
df
%matplotlib inline
sns.pairplot(data=df, hue="species")
###Output
_____no_output_____
###Markdown
Data Sonogram
The `DataSonogram` implements a Data Sonogram:
- The model gets a dataset which is plotted in two dimensions.
- Imagine that for each point in the provided dataset a spring is created.
- The data label of the point defines the stiffness of the spring.
- When the user clicks into the plot, a shock wave (signaled by the noise Synth) is created from the nearest data point.
- The shock wave excites the springs as it is spreading.
- The resulting sonification can reveal clusters.

More details about the Data Sonogram and Model-Based Sonification in general can be found in the [corresponding Sonification Handbook chapter](https://sonification.de/handbook/chapters/chapter16/).
###Code
class DataSonogram:
def __init__(self, context, df, x, y, label, max_duration=1.5, spring_synth="s1", trigger_synth="noise"):
self.context = context
#prepare synths
self.trigger_synth = context.synths.create(trigger_synth, mutable=False)
self.spring_synth = context.synths.create(spring_synth, mutable=False)
# save dataframe
self.df = df
self.numeric_df = df.select_dtypes(include=[np.number])
# check if x and y are valid
allowed_columns = self.numeric_df.columns
assert x in allowed_columns, f"x must be in {allowed_columns}"
assert y in allowed_columns, f"y must be in {allowed_columns}"
# prepare data for model
self.labels = self.df[label]
self.unique_labels = self.labels.unique()
label2id = {label: idx for idx, label in enumerate(self.unique_labels)}
self.numeric_labels = [label2id[label] for label in self.labels]
self.xy_data = self.numeric_df[[x,y]].values
self.data = self.numeric_df.values
# get the convex hull of the data
hull = ConvexHull(self.data)
hull_data = self.data[hull.vertices,:]
# get distances of the data points in the hull
hull_distances = cdist(hull_data, hull_data, metric='euclidean')
self.max_distance = hull_distances.max()
# set model parameter
self.max_duration = max_duration
# prepare plot
self.fig = plt.figure(figsize=(5,5))
self.ax = plt.subplot(111)
# plot data
sns.scatterplot(x=x, y=y, hue=label, data=df, ax=self.ax)
# set callback
def onclick(event):
if event.inaxes is None: # outside plot area
return
if event.button != 1: # ignore other than left click
return
click_xy = np.array([event.xdata, event.ydata])
self.create_shockwave(click_xy)
self.fig.canvas.mpl_connect('button_press_event', onclick)
def create_shockwave(self, click_xy):
self.context.reset()
with self.context.now() as start_time:
self.trigger_synth.start()
# find the point that is the nearest to the click location
center_idx = np.argmin(np.linalg.norm(self.xy_data - click_xy, axis=1))
center = self.data[center_idx]
# get the distances from the other points to this point
distances_to_center = np.linalg.norm(self.data - center, axis=1)
# get idx sorted by distances
order_of_points = np.argsort(distances_to_center)
# for each point create a sound using the spring synth
for idx in order_of_points:
distance = distances_to_center[idx]
nlabel = self.numeric_labels[idx]
n = len(self.unique_labels)-1
onset = (distance / self.max_distance) * self.max_duration
with self.context.at(start_time + onset):
self.spring_synth.start(
freq = 2 * (400 + 100 * nlabel),
amp = scn.dbamp(scn.linlin(distance, 0, self.max_distance, -10, -30)),
pan = [-1,1][int(self.xy_data[idx, 0]-click_xy[0] > 0)],
dur = 0.04,
info = {"label": self.labels[idx]},
)
###Output
_____no_output_____
###Markdown
To interact with the plot we use Qt as the matplotlib backend.
###Code
%matplotlib qt
###Output
_____no_output_____
###Markdown
The `x` and `y` values can be set to one of the numeric columns of the data set.
###Code
numeric_columns = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g']
###Output
_____no_output_____
###Markdown
Create two views of the model with different `x` and `y` values.
###Code
dsg1 = DataSonogram(context, df, x="flipper_length_mm", y="body_mass_g", label="species")
dsg2 = DataSonogram(context, df, x="bill_length_mm", y="bill_depth_mm", label="species")
###Output
_____no_output_____
###Markdown
We can enable the `fast_mode` of the `Clock` as the onsets will be very close.
- The `fast_mode` is a workaround that makes the Playback worker skip `time.sleep`.
- `time.sleep` sleeps too long on Windows for many tasks.
- The upcoming Python 3.11 will fix this: https://docs.python.org/3.11/whatsnew/3.11.html#time
###Code
context.realtime_playback.clock.fast_mode # default is False
context.realtime_playback.clock.fast_mode = True
context.realtime_playback.clock.fast_mode = False
###Output
_____no_output_____
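###Markdown
As a quick, standalone illustration of why this workaround exists (not part of mesonic; the numbers depend heavily on the operating system), you can measure how much a very short `time.sleep` overshoots:
```python
import time

requested = 0.001  # ask for a 1 ms sleep
overshoot = []
for _ in range(200):
    t0 = time.perf_counter()
    time.sleep(requested)
    overshoot.append(time.perf_counter() - t0 - requested)

# On Windows the default timer resolution is coarse, so the overshoot can be many milliseconds.
print("mean overshoot: {:.2f} ms".format(1000 * sum(overshoot) / len(overshoot)))
```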
###Markdown
Disable the `fast_mode` of the `Clock` if you want to do something different, like creating a new Data Sonogram, as the `fast_mode` keeps the [GIL](https://wiki.python.org/moin/GlobalInterpreterLock) busy.

Filtering the MBS
The MBS can be filtered using the `processor.event_filter`.
- Let's create a helper that provides us with a filter function for the labels.
###Code
def create_label_filter(allowed):
def label_filter(event):
label = event.info.get("label", None)
if label:
return event if label in allowed else None
return event
return label_filter
###Output
_____no_output_____
###Markdown
Select a filter and click again on the plot to listen to the filtered result.
###Code
context.processor.event_filter = create_label_filter(["Chinstrap", "Gentoo"])
context.processor.event_filter = create_label_filter(["Adelie", "Gentoo"])
context.processor.event_filter = create_label_filter(["Adelie", "Chinstrap"])
context.processor.event_filter = create_label_filter(["Chinstrap"])
context.processor.event_filter = create_label_filter(["Adelie"])
context.processor.event_filter = create_label_filter(["Gentoo"])
###Output
_____no_output_____
###Markdown
- We can also remove the panning from the Events - This can help to identify the same point in the two different views
###Code
def pan_filter(event):
pan = event.data.get("pan", None)
if pan:
event.data["pan"] = 0
return event
context.processor.event_filter = pan_filter
###Output
_____no_output_____
###Markdown
Setting the filter to `None` will reset it
###Code
context.processor.event_filter = None
context.close()
###Output
Quitting SCServer... Done.
Exiting sclang... Done.
|
analysis/6.2_LightGBM (sklearn API).ipynb | ###Markdown
LightGBM Models
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from pathlib import Path
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
DATA_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/Earthquake_damage/data')
SUBMISSIONS_DIR = Path('drive/MyDrive/Work/Delivery/Current/Earthquake_damage/submissions')
from google.colab import drive
drive.mount('/content/drive')
train_values = pd.read_csv(DATA_DIR / 'train_values.csv', index_col='building_id')
train_labels = pd.read_csv(DATA_DIR / 'train_labels.csv', index_col='building_id')
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
sns.set()
###Output
_____no_output_____
###Markdown
Getting Set Up
###Code
!git clone --recursive https://github.com/Microsoft/LightGBM
%cd /content/LightGBM
!mkdir build
# configure and build inside the build directory (cmake needs the path to the source tree)
%cd build
!cmake -DUSE_GPU=1 ..
!make -j$(nproc)
!sudo apt-get -y install python-pip
!sudo -H pip install setuptools pandas numpy scipy scikit-learn seaborn matplotlib -U
%cd /content/LightGBM/python-package
!sudo python setup.py install --precompile
train_values
y_train.shape
train_values.info()
print('Loading data...')
# load or create your dataset
train_values = pd.read_csv(DATA_DIR / 'train_values.csv', index_col='building_id')
train_labels = pd.read_csv(DATA_DIR / 'train_labels.csv', index_col='building_id')
X_train, X_test, y_train, y_test = train_test_split(train_values,
train_labels,
test_size=0.3,
random_state=123,
stratify=train_labels)
# LightGBM cannot ingest object-dtype columns directly; cast them to pandas 'category' first
obj_cols = X_train.select_dtypes(include='object').columns
X_train[obj_cols] = X_train[obj_cols].astype('category')
X_test[obj_cols] = X_test[obj_cols].astype('category')
# create dataset for lightgbm (shift labels from 1-3 to 0-2, as LightGBM expects classes starting at 0)
lgb_train = lgb.Dataset(X_train, y_train.values.ravel() - 1)
lgb_eval = lgb.Dataset(X_test, y_test.values.ravel() - 1, reference=lgb_train)
# specify your configurations as a dict
params = {
'boosting_type': 'gbdt',
'objective': 'multiclass',  # LightGBM has no 'classification' objective; damage_grade has 3 classes
'num_class': 3,
'metric': 'multi_logloss',
'num_leaves': 31,
'learning_rate': 0.05,
'feature_fraction': 0.9,
'bagging_fraction': 0.8,
'bagging_freq': 5,
'verbose': 0
}
print('Starting training...')
# train
gbm = lgb.train(params,
lgb_train,
num_boost_round=20,
valid_sets=lgb_eval,
early_stopping_rounds=5)
print('Saving model...')
# save model to file
gbm.save_model('model.txt')
print('Starting predicting...')
# predict class probabilities (n_samples x 3) and map back to damage grades 1-3
y_pred_proba = gbm.predict(X_test, num_iteration=gbm.best_iteration)
y_pred = y_pred_proba.argmax(axis=1) + 1
# eval with the competition metric (micro-averaged F1)
print('The micro-averaged F1 of the predictions is:', f1_score(y_test.values.ravel(), y_pred, average='micro'))
###Output
Loading data...
###Markdown
Using the Sklearn API and dropping the GPU build
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
from pathlib import Path
import lightgbm as lgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
DATA_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/Earthquake_damage/data')
SUBMISSIONS_DIR = Path('drive/MyDrive/Work/Delivery/Current/Earthquake_damage/submissions')
from google.colab import drive
drive.mount('/content/drive')
X = pd.read_csv(DATA_DIR / 'train_values.csv', index_col='building_id')
y = pd.read_csv(DATA_DIR / 'train_labels.csv', index_col='building_id')
sns.set()
categorical_columns = X.select_dtypes(include='object').columns
X[categorical_columns] = X[categorical_columns].astype('category')
bool_columns = [col for col in X.columns if col.startswith('has')]
X[bool_columns] = X[bool_columns].astype('bool')
X = pd.get_dummies(X)
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from lightgbm import LGBMClassifier
steps = [('scaler', StandardScaler()),
('lgbm', LGBMClassifier(random_state=42))]
pipe = Pipeline(steps)
pipe
param_grid = {'lgbm__n_estimators': [50, 100, 150],
'lgbm__num_leaves': [20, 31, 40]}
gs = GridSearchCV(pipe, param_grid, cv=5, verbose=3, n_jobs=-1)
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.3,
random_state=123,
stratify=y)
pipe.fit(X_train, y_train.values.ravel())
from sklearn.metrics import f1_score
y_pred = pipe.predict(X_test)
f1_score(y_test, y_pred, average='micro')
def make_submission(pipeline, title):
"""
Given a trained pipeline object, use it to make predictions on the
submission test set 'test_values.csv' and write them to a csv in the submissions
folder.
"""
# Read in test_values csv and apply data preprocessing
# note: will create a data preprocessing pipeline or function in future
test_values = pd.read_csv(DATA_DIR / 'test_values.csv', index_col='building_id')
test_values[categorical_columns] = test_values[categorical_columns].astype('category')
test_values[bool_columns] = test_values[bool_columns].astype('bool')
test_values = pd.get_dummies(test_values)
# Generate predictions using pipeline we pass in
predictions = pipeline.predict(test_values)
submission_format = pd.read_csv(DATA_DIR / 'submission_format.csv',
index_col='building_id')
my_submission = pd.DataFrame(data=predictions,
columns=submission_format.columns,
index=submission_format.index)
my_submission.to_csv(SUBMISSIONS_DIR / f'{title}.csv')
make_submission(pipe, 'LGBMClassifier defaults all features')
###Output
_____no_output_____
###Markdown
Exploring Built Model
###Code
data = {'features': X.columns,
'importances': pipe.named_steps['lgbm'].feature_importances_}
feature_df = pd.DataFrame(data)
fig, ax = plt.subplots(figsize=plt.figaspect(1/4))
sns.barplot(x='features', y='importances', data=feature_df)
plt.xticks(rotation=90)
plt.show()
feature_df.sort_values('importances', ascending=False).tail(20)
# Quite a lot are less than 10
feature_df.importances.value_counts()
# 14 features with importance above 100, let's train a model using just these
feature_df[feature_df.importances >= 100]
###Output
_____no_output_____
###Markdown
Model with Most Important 14 Features
###Code
most_important_features = feature_df[feature_df.importances >= 100].features.to_numpy()
from sklearn.model_selection import KFold, cross_val_score
cv = KFold(n_splits=5, random_state=1, shuffle=True)
steps = [('scaler', StandardScaler()),
('lgbm', LGBMClassifier(random_state=42))]
model = Pipeline(steps)
scores = cross_val_score(model, X[most_important_features],
y, scoring='f1_micro', cv=cv, n_jobs=-1)
print('F1 (micro): %.3f (%.3f)' % (np.mean(scores), np.std(scores)))
param_grid = {'lgbm__n_estimators': [50, 100, 150],
'lgbm__num_leaves': [20, 31, 40]}
gs = GridSearchCV(pipe, param_grid, cv=5, verbose=10, n_jobs=-1,
scoring='f1_micro')
gs.fit(X[most_important_features], y)
# Seems like more estimators and more leaves is better
gs.best_params_
gs.best_score_
y_pred = gs.predict(X[most_important_features])
f1_score(y, y_pred, average='micro')
def make_submission_top_14_features(pipeline, title):
"""
Given a trained pipeline object, use it to make predictions on the
submission test set 'test_values.csv' and write them to a csv in the submissions
folder.
"""
# Read in test_values csv and apply data preprocessing
# note: will create a data preprocessing pipeline or function in future
test_values = pd.read_csv(DATA_DIR / 'test_values.csv', index_col='building_id')
test_values[categorical_columns] = test_values[categorical_columns].astype('category')
test_values[bool_columns] = test_values[bool_columns].astype('bool')
test_values = pd.get_dummies(test_values)
test_values = test_values[most_important_features]
# Generate predictions using pipeline we pass in
predictions = pipeline.predict(test_values)
submission_format = pd.read_csv(DATA_DIR / 'submission_format.csv',
index_col='building_id')
my_submission = pd.DataFrame(data=predictions,
columns=submission_format.columns,
index=submission_format.index)
my_submission.to_csv(SUBMISSIONS_DIR / f'{title}.csv')
title = 'Top 14 most informative features - minor hyperparameter tuning'
make_submission_top_14_features(gs, title)
###Output
_____no_output_____
###Markdown
Intense Hyperparameter Tuning
###Code
LGBMClassifier()
param_grid = {'lgbm__n_estimators': [150, 175, 200],
'lgbm__num_leaves': [40, 50, 60],
#'lgbm__boosting_type': ['gbdt', 'dart', 'goss'],
'lgbm__learning_rate': [0.01, 0.1, 1],
'lgbm__min_split_gain': [0., 0.5],
'lgbm__min_child_weight': [1e-3, 1e-4, 1e-2],
'lgbm__min_child_samples': [10, 20, 30]}
gs = GridSearchCV(pipe, param_grid, cv=2, verbose=10, n_jobs=-1,
scoring='f1_micro')
gs.fit(X[most_important_features], y)
gs.best_params_
gs.best_estimator_
# This gave me a score of 0.7264 on the submission placing 518
gs.best_score_
y_pred = gs.predict(X[most_important_features])
f1_score(y, y_pred, average='micro')
make_submission_top_14_features(gs, 'mid-level hyperparameter tuning')
###Output
_____no_output_____
###Markdown
More intense hyperparameter tuning
###Code
from sklearn.model_selection import RandomizedSearchCV
param_dist = {'lgbm__n_estimators': np.arange(200, 410, 10),
'lgbm__num_leaves': np.arange(60, 130, 10),
'lgbm__boosting_type': ['gbdt', 'dart', 'goss'],
'lgbm__learning_rate': [0.1, 0.2, 0.3],
'lgbm__min_child_samples': np.arange(30, 100, 10)}
rs = RandomizedSearchCV(pipe, param_dist, n_iter=300, cv=2, verbose=10,
n_jobs=-1, scoring='f1_micro', random_state=42)
rs.fit(X[most_important_features], y)
rs.best_params_
rs.best_estimator_
# Scored 0.7397 on submission - placed 331 (in the top 10%!!!!)
rs.best_score_
y_pred = rs.predict(X[most_important_features])
f1_score(y, y_pred, average='micro')
make_submission_top_14_features(rs, 'random search hyperparameter tuning LightGBM')
import pickle
MODEL_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/Earthquake_damage/models')
pkl_filename = MODEL_DIR / 'random search LightGBM.pkl'
with open(pkl_filename, 'wb') as f:
pickle.dump(rs, f)
###Output
_____no_output_____
###Markdown
Even more intense RandomizedSearch
###Code
from sklearn.model_selection import RandomizedSearchCV
import pickle
param_dist = {'lgbm__n_estimators': np.arange(200, 410, 10),
'lgbm__num_leaves': np.arange(60, 130, 10),
'lgbm__boosting_type': ['goss'],
'lgbm__learning_rate': [0.1, 0.2, 0.25, 0.3],
'lgbm__min_child_samples': np.arange(30, 100, 10)}
rs = RandomizedSearchCV(pipe, param_dist, n_iter=500, cv=2, verbose=10,
n_jobs=-1, scoring='f1_micro', random_state=42)
most_important_features = ['geo_level_1_id', 'geo_level_2_id', 'geo_level_3_id',
'count_floors_pre_eq', 'age' , 'area_percentage' ,
'height_percentage',
'has_superstructure_mud_mortar_stone',
'has_superstructure_stone_flag',
'has_superstructure_mud_mortar_brick',
'has_superstructure_cement_mortar_brick',
'has_superstructure_timber', 'count_families',
'other_floor_type_q']
rs.fit(X[most_important_features], y)
print('Best params')
print(rs.best_params_)
print('Best score')
print(rs.best_score_)
print('F1 score on entire dataset')
y_pred = rs.predict(X[most_important_features])
f1_score(y, y_pred, average='micro')
print('Creating submission csv...')
make_submission_top_14_features(rs, '0102 GOSS random search tuning LightGBM')
print('Writing model to hard drive...')
MODEL_DIR = Path('/content/drive/MyDrive/Work/Delivery/Current/Earthquake_damage/models')
pkl_filename = MODEL_DIR / 'GOSS random search LightGBM.pkl'
with open(pkl_filename, 'wb') as f:
pickle.dump(rs, f)
print('Finished')
###Output
Fitting 2 folds for each of 500 candidates, totalling 1000 fits
###Markdown
Exploring KFold Cross Val
###Code
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
cv = KFold(n_splits=10, random_state=1, shuffle=True)
model = LGBMClassifier()
scores = cross_val_score(model, X, y, scoring='f1_micro', cv=cv, n_jobs=-1)
print('F1 (micro): %.3f (%.3f)' % (np.mean(scores), np.std(scores)))
fig, ax = plt.subplots()
sns.boxplot(scores)
plt.xlim([0.6, 0.8])
plt.show()
###Output
/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
|
tutorials/wowPython.ipynb | ###Markdown
0. Preparation and setup
One Python library that makes GraphX support available to our Jupyter notebooks is not yet bound to the runtime by default. To get it added to the Spark context you have to use the `!pip` magic cell command `install` first to bind the library to the existing runtime. The `pixiedust` library is implemented and loaded from [https://github.com/ibm-cds-labs/pixiedust](https://github.com/ibm-cds-labs/pixiedust). See the project documentation for details.
###Code
!pip install --user --upgrade --no-deps pixiedust
###Output
_____no_output_____
###Markdown
Pixiedust provides a nice visualization plugin for d3 style plots. Have a look at [https://d3js.org/](https://d3js.org/) if you are not yet familiar with d3. Having non-ascii characters in some of your tweets requires the Python interpreter to be set to support UTF-8. Reload your Python sys settings with UTF-8 encoding.
###Code
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
###Output
_____no_output_____
###Markdown
When the library has been loaded successfully you have access to the PackageManager. Use the PackageManager to install a package to supply GraphFrames. Those are needed later in the notebook to complete the instructions for Spark GraphX.
###Code
from pixiedust.packageManager import PackageManager
pkg=PackageManager()
pkg.installPackage("graphframes:graphframes:0")
###Output
_____no_output_____
###Markdown
At this point you are being asked to _Please restart Kernel to complete installation of the new package_. Use the Restart Kernel dialog from the menu to do that. Once completed, you can start the analysis and resume with the next section.

****************************************************
Please restart your Kernel before you proceed!
****************************************************

1. Load data from Twitter to Cloudant
Following the lab instructions you should at this point have:
- a Cloudant account
- an empty database in your Cloudant account
- an IBM Insights for Twitter service instance

Provide the details for both into the global variables section below, including

*Twitter*:
- _restAPI_ - the API endpoint we use to query the Twitter API with. Use the URL for your IBM Insights for Twitter service and add `/api/v1/messages/search` as path (for example `https://cdeservice.stage1.mybluemix.net/api/v1/messages/search`)
- _username_ - the username for your IBM Insights for Twitter service instance
- _password_ - the password for your IBM Insights for Twitter service instance

*Cloudant*:
- _account_ - the fully qualified account https URL (for example `https://testy-dynamite-001.cloudant.com`)
- _username_ - the Cloudant account username
- _password_ - the Cloudant account password
- _database_ - the database name you want your tweets to be loaded into (Note: the database will NOT get created by the script below. Please create the database manually into your Cloudant account first.)
###Code
properties = {
'twitter': {
'restAPI': 'https://xxx:[email protected]/api/v1/messages/search',
'username': 'xxx',
'password': 'xxx'
},
'cloudant': {
'account':'https://xxx:[email protected]',
'username':'xxx',
'password':'xxx',
'database':'election2016'
}
}
###Output
_____no_output_____
###Markdown
Import all required Python libraries.
###Code
import requests
import json
from requests.auth import HTTPBasicAuth
import http.client
###Output
_____no_output_____
###Markdown
Define a class with helper functions to query the Twitter service API and load documents in the Cloudant database using the bulk load API. (Note: no code is being executed yet and you don't expect any output for these declarations.)
###Code
class TwitterToCloudant:
count = 100
def query_twitter(self, config, url, query, loop):
loop = loop + 1
if loop > (int(self.count) / 100):
return
# QUERY TWITTER
if url is None:
url = config["twitter"]["restAPI"]
print(url, query)
tweets = self.get_tweets(config, url, query)
else:
print(url)
tweets = self.get_tweets(config, url, query)
# LOAD TO CLOUDANT
self.load_cloudant(config, tweets)
# CONTINUE TO PAGE THROUGH RESULTS ....
if "related" in tweets:
url = tweets["related"]["next"]["href"]
#!! recursive call
self.query_twitter(config, url, None, loop)
def get_tweets(self, config, url, query):
# GET tweets from twitter endpoint
user = config["twitter"]["username"]
password = config["twitter"]["password"]
print ("GET: Tweets from {} ".format(url))
if query is None:
payload = {'country_code' :' us', 'lang' : 'en'}
else:
payload = {'q': query, 'country_code' :' us', 'lang' : 'en'}
response = requests.get(url, params=payload, auth=HTTPBasicAuth(user, password))
print ("Got {} response ".format(response.status_code))
tweets = json.loads(response.text)
return tweets
def load_cloudant(self, config, tweets):
# POST tweets to Cloudant database
url = config["cloudant"]["account"] + "/" + config["cloudant"]["database"] + "/_bulk_docs"
user = config["cloudant"]["username"]
password = config["cloudant"]["password"]
headers = {"Content-Type": "application/json"}
if "tweets" in tweets:
docs = {}
docs["docs"] = tweets["tweets"]
print ("POST: Docs to {}".format(url))
response = requests.post(url, data=json.dumps(docs), headers=headers, auth=HTTPBasicAuth(user, password))
print ("Got {} response ".format(response.status_code))
###Output
_____no_output_____
###Markdown
Finally we make the call to load our Cloudant database with tweets. To do that, we require two parameters:
- _query_ - the query string to pass to the Twitter API. Use **election2016** as default or experiment with your own.
- _count_ - the number of tweets to process. Use **200** as a good start or scale up if you want. (Note: Execution time depends on ....)
###Code
query = "#election2016"
count = 300
TtC = TwitterToCloudant()
TtC.count = count
TtC.query_twitter(properties, None, query, 0)
###Output
_____no_output_____
###Markdown
At this point you should see a number of debug messages with response codes 200 and 201. As a result your database is loaded with the number of tweets you provided in the _count_ variable above.

If there are response codes like 401 (unauthorized) or 404 (not found), please check your credentials and URLs provided in the _properties_ above. Changes you make to these settings are applied when you execute the cell again. There is no need to execute other cells (that have not been changed) and you can immediately come back here to re-run your TwitterToCloudant functions.

Should there be any severe problems that can not be resolved, we made a database called `tweets` already available in your Cloudant account. You can continue to work through the following instructions using the `tweets` database instead.

2. Analyze tweets with Spark SQL
In this section you are going to explore the tweets loaded into your Cloudant database using Spark SQL queries. The Cloudant Spark connector library available at [https://github.com/cloudant-labs/spark-cloudant](https://github.com/cloudant-labs/spark-cloudant) is already linked with the Spark deployment underneath this notebook. All you have to do at this point is to read your Cloudant documents into a DataFrame.

First, this notebook runs on a shared Spark cluster but obtains a dedicated Spark context for isolated binding. The Spark context (sc) is made available automatically when the notebook is launched and should be started at this point. With a few statements you can inspect the Spark version and resources allocated for this context.

_Note: If there is ever a problem with the running Spark context, you can submit sc.stop() and sc.start() to recycle it._
###Code
sc.version
sc._conf.getAll()
###Output
_____no_output_____
###Markdown
Now you want to create a Spark SQL context object off the given Spark context.
###Code
sqlContext = SQLContext(sc)
###Output
_____no_output_____
###Markdown
The Spark SQL context (sqlContext) is used to read data from the Cloudant database. We use a schema sample size and specified number of partitions to load the data with. For details on these parameters check [https://github.com/cloudant-labs/spark-cloudant#configuration-on-sparkconf](https://github.com/cloudant-labs/spark-cloudant#configuration-on-sparkconf)
###Code
tweetsDF = sqlContext.read.format("com.cloudant.spark").\
option("cloudant.host",properties['cloudant']['account'].replace('https://','')).\
option("cloudant.username", properties['cloudant']['username']).\
option("cloudant.password", properties['cloudant']['password']).\
option("schemaSampleSize", "-1").\
option("jsonstore.rdd.partitions", "5").\
load(properties['cloudant']['database'])
tweetsDF.show(5)
###Output
_____no_output_____
###Markdown
For performance reasons we will cache the Data Frame to prevent re-loading.
###Code
tweetsDF.cache()
###Output
_____no_output_____
###Markdown
The schema of a Data Frame reveals the structure of all JSON documents loaded from your Cloudant database. Depending on the setting for the parameter `schemaSampleSize` the created RDD contains attributes for the first document only, for the first N documents, or for all documents. Please have a look at [https://github.com/cloudant-labs/spark-cloudant#schema-variance](https://github.com/cloudant-labs/spark-cloudant#schema-variance) for details on schema computation.
###Code
tweetsDF.printSchema()
###Output
_____no_output_____
###Markdown
With the use of the IBM Insights for Twitter API all tweets are enriched with metadata. For example, the gender of the Twitter user or the state of his account location are added in clear text. Sentiment analysis is also done at the time the tweets are loaded from the original Twitter API. This allows us to group tweets according to their positive, neutral, or negative sentiment.In a first example you can extract the gender, state, and polarity details from the DataFrame (or use any other field available in the schema output above). _Note: To extract a nested field you have to use the full attribute path, for example cde.author.gender or cde.content.sentiment.polarity. The alias() function is available to simplify the name in the resulting DataFrame._
###Code
tweetsDF2 = tweetsDF.select(tweetsDF.cde.author.gender.alias("gender"),
tweetsDF.cde.author.location.state.alias("state"),
tweetsDF.cde.content.sentiment.polarity.alias("polarity"))
###Output
_____no_output_____
###Markdown
The above statement executes extremely fast because no actual function or transformation has been computed yet. Spark uses a lazy approach and computes functions only when they are actually needed. The following functions show the output of the DataFrame; only at that point do you see a longer runtime, because `tweetsDF2` is actually computed.
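If you are curious what Spark has queued up before anything is executed (a small aside, assuming the `tweetsDF2` DataFrame defined above), you can print the execution plan, which is itself a cheap operation:

```python
# Shows the logical/physical plan without triggering the computation.
tweetsDF2.explain(True)
```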
###Code
tweetsDF2.count()
tweetsDF2.printSchema()
###Output
_____no_output_____
###Markdown
Work with other Spark SQL functions, such as grouping and counting, to summarize the data.
###Code
# count tweets by state
tweets_state = tweetsDF2.groupBy(tweetsDF2.state).count()
tweets_state.show(100)
# count by gender & polarity
tweets_gp0 = tweetsDF2.groupBy(tweetsDF2.gender, tweetsDF2.polarity).count()
tweets_gp0.show(100)
tweets_gp= tweetsDF2.where(tweetsDF2.polarity.isNotNull()).groupBy("polarity").pivot("gender").count()
tweets_gp.show(100)
###Output
_____no_output_____
###Markdown
2.1 Plot results using matplotlibIn Python you can use simple libraries to plot your DataFrames directly in diagrams. However, the use of matplotlib is not trivial and once the data is rendered in the diagram it is static. For more comprehensive graphing Spark provides the GraphX extension. Here the data is transformed into a directed multigraph model (similar to those used in GraphDBs) called GraphFrames. You will explore GraphFrames later in this lab. Let's first have a look at simply plotting your DataFrames using matplotlib.
###Code
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Plot the number of tweets per state. Notice again how Spark computes the result lazily: none of the previous outputs required the full DataFrame, so it was not fully computed until now.
###Code
tweets_state_pd = tweets_state.toPandas()
values = tweets_state_pd['count']
labels = tweets_state_pd['state']
plt.gcf().set_size_inches(16, 12, forward=True)
plt.title('Number of tweets by state')
plt.barh(range(len(values)), values)
plt.yticks(range(len(values)), labels)
plt.show()
###Output
_____no_output_____
###Markdown
More plots to group data by gender and polarity.
###Code
tweets_gp_pd = tweets_gp.toPandas()
labels = tweets_gp_pd['polarity']
N = len(labels)
male = tweets_gp_pd['male']
female = tweets_gp_pd['female']
unknown = tweets_gp_pd['unknown']
ind = np.arange(N) # the x locations for the groups
width = 0.2 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(ind-width, male, width, color='b', label='male')
rects2 = ax.bar(ind, female, width, color='r', label='female')
rects3 = ax.bar(ind + width, unknown, width, color='y', label='unknown')
ax.set_ylabel('Count')
ax.set_title('Tweets by polarity and gender')
ax.set_xticks(ind + width)
ax.set_xticklabels(labels)
ax.legend((rects1[0], rects2[0], rects3[0]), ('male', 'female', 'unknown'))
plt.show()
###Output
_____no_output_____
###Markdown
2.2 Create SQL temporary tables With Spark SQL you can create in-memory tables and query your Spark RDDs in tables using SQL syntax. This is just an alternative representation of your RDD where SQL functions (like filters or projections) are converted into Spark functions. For the user it mostly provides a SQL wrapper over Spark and a familiar way to query data.
###Code
tweetsDF.registerTempTable("tweets_DF")
###Output
_____no_output_____
###Markdown
Run SQL statements using the sqlContext.sql() function and render output with show(). The result of a SQL query is itself a DataFrame, so it can be used for further processing.
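For instance (a sketch, assuming the `tweets_DF` temporary table registered above), the object returned by `sqlContext.sql()` is a regular DataFrame, so you can keep filtering or aggregating it with DataFrame methods:

```python
# The SQL result is a DataFrame; chain further DataFrame operations on it.
authors = sqlContext.sql(
    "SELECT message.actor.displayName AS author, count(*) AS cnt "
    "FROM tweets_DF GROUP BY message.actor.displayName")
authors.filter(authors.cnt > 1).show(5)
```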
###Code
sqlContext.sql("SELECT count(*) AS cnt FROM tweets_DF").show()
sqlContext.sql("SELECT message.actor.displayName AS author, count(*) as cnt FROM tweets_DF GROUP BY message.actor.displayName ORDER BY cnt DESC").show(10)
###Output
_____no_output_____
###Markdown
With multiple temporary tables (potentially from different databases) you can execute JOIN and UNION queries to analyze the database in combination. In the next query we will return all hashtags used in our body of tweets.
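As a sketch of the idea only (the second table `tweets_DF_2` is hypothetical and not created in this notebook), a UNION across two registered tables with the same schema would look like this:

```python
# Hypothetical: combine two temp tables that share the same schema.
combined = sqlContext.sql(
    "SELECT message.body AS body FROM tweets_DF "
    "UNION ALL "
    "SELECT message.body AS body FROM tweets_DF_2")
```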
###Code
hashtags = sqlContext.sql("SELECT message.object.twitter_entities.hashtags.text as tags \
FROM tweets_DF \
WHERE message.object.twitter_entities.hashtags.text IS NOT NULL")
###Output
_____no_output_____
###Markdown
The hashtags are in lists, one per tweet. We flatten these lists into one large list and then store it back into a temporary table. The temporary table can be used to build a hashtag cloud that shows how many times each hashtag has been used.
###Code
l = hashtags.map(lambda x: x.tags).collect()
tagCloud = [item for sublist in l for item in sublist]
###Output
_____no_output_____
###Markdown
Create a DataFrame from the flattened Python list of hashtags. The DataFrame has a simple schema with just a single column called `hashtag`.
###Code
from pyspark.sql import Row
tagCloudDF = sc.parallelize(tagCloud)
row = Row("hashtag")
hashtagsDF = tagCloudDF.map(row).toDF()
###Output
_____no_output_____
###Markdown
Register a new temp table for hashtags. Group and count tags to get a sense of trending issues.
###Code
hashtagsDF.registerTempTable("hashtags_DF")
trending = sqlContext.sql("SELECT count(hashtag) as CNT, hashtag as TAG FROM hashtags_DF GROUP BY hashtag ORDER BY CNT DESC")
trending.show(10)
###Output
_____no_output_____
###Markdown
2.3 Visualize tag cloud with Brunel Let's create some charts and diagrams with Brunel commands. The basic format of each call to Brunel is simple. Whether the command is a single line or a set of lines, the commands are concatenated together and the result interpreted as one command. Here are some of the rules for using Brunel that you'll need in this notebook:
- _DataFrame_: Use the data command to specify the pandas DataFrame.
- _Chart type_: Use commands like chord and treemap to specify a chart type. If you don't specify a type, the default chart type is a scatterplot.
- _Chart definition_: Use the x and y commands to specify the data to include on the x-axis and the y-axis.
- _Styling_: Use commands like color, tooltip, and label to control the styling of the graph.
- _Size_: Use the width and height key-value pairs to specify the size of the graph. The key-value pairs must be preceded with two colons and separated with a comma, for example: :: width=800, height=300

See detailed documentation on the Brunel Visualization language at [https://brunel.mybluemix.net/docs](https://brunel.mybluemix.net/docs).
###Code
import brunel
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
trending_pd = trending.toPandas()
###Output
_____no_output_____
###Markdown
Brunel libraries are able to read data from CSV files only. We will export our pandas DataFrames to CSV first to be able to load them with the Brunel libraries below.
###Code
trending_pd.to_csv('trending_pd.csv')
tweets_state_pd.to_csv('tweets_state_pd.csv')
tweets_gp_pd.to_csv('tweets_gp_pd.csv')
###Output
_____no_output_____
###Markdown
Top 5 records in every pandas DataFrame.
###Code
trending_pd.head(5)
###Output
_____no_output_____
###Markdown
The hashtag cloud is visualized using the Brunel cloud graph.
###Code
%brunel data('trending_pd') cloud color(cnt) size(cnt) label(tag) :: width=900, height=600
###Output
_____no_output_____
###Markdown
State and location data can be plotted on a map or a bubble graph representing the number of tweets per state. We will exercise maps later using the GraphX framework.
###Code
tweets_state_pd.head(5)
%brunel data('tweets_state_pd') bubble label(state) x(state) color(count) size(count)
###Output
_____no_output_____
###Markdown
Brunel graphs are D3-based and interactive. Try using your mouse on the gender/polarity graph below to hover over details and zoom in on the Y axis.
###Code
tweets_gp_pd.head(5)
%brunel data('tweets_gp_pd') bar x(polarity) y(male, female) color(male, female) tooltip(#all) legends(none) :: width=800, height=300
###Output
_____no_output_____
###Markdown
2.4 Write analysis results back to Cloudant Next we are going to persist the hashtags_DF back into a Cloudant database. (Note: The database `hashtags` has to exist in Cloudant. Please create that database first.)
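If the `hashtags` database does not exist yet, one way to create it (an optional sketch, assuming the `properties` dictionary and the `requests` library are available) is a single PUT request against the Cloudant HTTP API:

```python
import requests

# Create the 'hashtags' database if it is missing.
# Cloudant answers 201/202 when the database is created and 412 if it already exists.
resp = requests.put(
    properties['cloudant']['account'] + '/hashtags',
    auth=(properties['cloudant']['username'],
          properties['cloudant']['password']))
print(resp.status_code)
```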
###Code
hashtagsDF.write.format("com.cloudant.spark").\
option("cloudant.host",properties['cloudant']['account'].replace('https://','')).\
option("cloudant.username", properties['cloudant']['username']).\
option("cloudant.password", properties['cloudant']['password']).\
option("bulkSize", "2000").\
save("hashtags")
###Output
_____no_output_____
###Markdown
3. Analysis with Spark GraphX Import dependencies from the Pixiedust library loaded in the preparation section. See [https://github.com/ibm-cds-labs/pixiedust](https://github.com/ibm-cds-labs/pixiedust) for details.
###Code
from pixiedust.display import *
###Output
_____no_output_____
###Markdown
To render a chart you have options to select the columns to display or the aggregation function to apply.
###Code
tweets_state_us = tweets_state.filter(tweets_state.state.isin("Alabama", "Alaska", "Arizona",
"Arkansas", "California", "Colorado", "Connecticut", "Delaware", "Florida",
"Georgia", "Hawaii", "Idaho", "Illinois Indiana", "Iowa", "Kansas", "Kentucky",
"Louisiana", "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
"Mississippi", "Missouri", "Montana Nebraska", "Nevada", "New Hampshire",
"New Jersey", "New Mexico", "New York", "North Carolina", "North Dakota",
"Ohio", "Oklahoma", "Oregon", "Pennsylvania Rhode Island", "South Carolina",
"South Dakota", "Tennessee", "Texas","Utah", "Vermont", "Virginia",
"Washington", "West Virginia", "Wisconsin", "Wyoming"))
tweets_state_us.show(5)
display(tweets_state_us)
###Output
_____no_output_____
###Markdown
Use a data set with at least two numeric columns to create scatter plots. 4. Analysis with Spark MLlib Here we are going to use the KMeans clustering algorithm from Spark MLlib. Clustering will let us group similar tweets together. We will then display the clusters using the Brunel library.
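The feature step below relies on the hashing trick: every hashtag is hashed into one of a fixed number of buckets and the vector stores how often each bucket is hit. A tiny standalone illustration (just `pyspark.mllib`, with 10 buckets instead of the 100 used below):

```python
from pyspark.mllib.feature import HashingTF

htf_demo = HashingTF(10)  # 10 hash buckets, for illustration only
print(htf_demo.transform(["python", "spark", "python"]))
# -> a SparseVector in which the bucket for "python" has the value 2.0
```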
###Code
# TRAINING by hashtag
from pyspark.mllib.feature import HashingTF
from pyspark.mllib.clustering import KMeans, KMeansModel
# dataframe of tweets' messages and hashtags
mhDF = sqlContext.sql("SELECT message.body as message, \
message.object.twitter_entities.hashtags.text as tags \
FROM tweets_DF \
WHERE message.object.twitter_entities.hashtags.text IS NOT NULL")
mhDF.show()
# create an RDD of hashtags
hashtagsRDD = mhDF.rdd.map(lambda h: h.tags)
# create a feature vector for every tweet's hashtags
# each hashtag represents a feature
# HashingTF counts how many times each hashtag appears in a tweet
htf = HashingTF(100)
vectors = hashtagsRDD.map(lambda hs: htf.transform(hs)).cache()
print(vectors.take(2))
# Build the model (cluster the data)
numClusters = 10 # number of clusters
model = KMeans.train(vectors, numClusters, maxIterations=10, initializationMode="random")
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType, StringType
def predict(tags):
vector = htf.transform(tags)
return model.predict(vector)
# Creates a Column expression representing a user defined function
udfPredict = udf(predict, IntegerType())
def formatstr(message):
lines = message.splitlines()
return " ".join(lines)
udfFormatstr = udf(formatstr, StringType())
# transform mhDF into cmhDF, a dataframe containing formatted messages,
# hashtags and cluster assignments
mhDF2 = mhDF.withColumn("message", udfFormatstr(mhDF.message))
cmhDF = mhDF2.withColumn("cluster", udfPredict(mhDF2.tags))
cmhDF.show()
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
# visualizing clusters
import brunel
cmh_pd = cmhDF.toPandas()
cmh_pd.to_csv('cmh_pd.csv')
%brunel data('cmh_pd') bubble x(cluster) color(#all) size(#count) tooltip(message, tags) legends(none)
###Output
_____no_output_____ |
Scraping_websites.ipynb | ###Markdown
http://www.gregreda.com/2016/10/16/asynchronous-scraping-with-python/
###Code
# import libraries
from urllib.request import urlopen
from bs4 import BeautifulSoup
# specify the url
quote_page = 'http://www.bloomberg.com/quote/SPX:IND'
# query the website and return the html to the variable ‘page’
page = urlopen(quote_page)
# parse the html using beautiful soup and store in variable `soup`
soup = BeautifulSoup(page, 'html.parser')
# Take out the <div> of name and get its value
name_box = soup.find('h1', attrs={'class': 'name'})
name = name_box.text.strip() # strip() is used to remove starting and trailing whitespace
print(name)
# get the index price
price_box = soup.find('div', attrs={'class':'price'})
price = price_box.text
print(price)
import csv
from datetime import datetime
# open a csv file with append, so old data will not be erased
with open('index.csv', 'a') as csv_file:
writer = csv.writer(csv_file)
writer.writerow([name, price, datetime.now()])
###Output
_____no_output_____
###Markdown
multiple pages example
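One note before the code (an alternative sketch, not what the cell below does): because scraping is I/O-bound, a thread pool is often the simpler choice, and `executor.map` returns the parsed results directly:

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_all(urls, fetch_and_parse):
    """fetch_and_parse is assumed to be a function like parse() below
    that takes a URL and returns the scraped (name, price) data."""
    with ThreadPoolExecutor(max_workers=4) as executor:
        return list(executor.map(fetch_and_parse, urls))
```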
###Code
from concurrent.futures import ProcessPoolExecutor
import concurrent.futures
from concurrent import futures
URLS = ['http://www.bloomberg.com/quote/SPX:IND', 'http://www.bloomberg.com/quote/CCMP:IND']
# list of urls
data =[]
def parse(url):
page = urlopen(url)
    # parse the html using beautiful soup and store in variable `soup`
soup = BeautifulSoup(page, 'html.parser')
# Take out the <div> of name and get its value
name_box = soup.find('h1', attrs={'class': 'name'})
    name = name_box.text.strip() # strip() is used to remove starting and trailing whitespace
# get the index price
price_box = soup.find('div', attrs={'class':'price'})
price = price_box.text
data.extend([name, price])
return data
for url in URLS:
parse(url)
print(data)
def main():
    results = []
    with ProcessPoolExecutor(max_workers=4) as executor:
        future_results = {executor.submit(parse, url): url for url in URLS}
        # collect the scraped data as each future completes
        for future in concurrent.futures.as_completed(future_results):
            results.append(future.result())
    return results
if __name__ == '__main__':
    results = main()
    print(results)
###Output
[]
|
notebooks/collect_data_from_db.ipynb | ###Markdown
Downloading the data from the database In this script I downloaded all the required tables from a local dump of the "MSR 2014 Mining Challenge Dataset" database into `.pkl` files, so that I can later work with the data without a database connection, using only local files. All of the files prepared here can be found packed in an archive at https://www-users.mat.umk.pl/~maciejdudek/, so the database setup step can be skipped. A detailed description can be found in chapter 4.2.2 of the thesis.
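To work with the exported tables later without the database connection, the `.pkl` files can simply be read back with pandas (a minimal sketch, assuming the same `../data/from_db/` directory used below):

```python
import pandas as pd

# Read one of the exported tables back from its pickle file.
data_from_db = '../data/from_db/'
projects = pd.read_pickle(data_from_db + 'projects.pkl')
print(projects.shape)
```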
###Code
import pandas as pd
from sqlalchemy import create_engine
data_from_db = '../data/from_db/'
db_connection = create_engine(
'mysql+pymysql://msr14:haslo@localhost:3306/msr14')
def from_db_to_pickle(db_con, entity, path):
"""
    Saves a table from the database directly to a pickle file.
    Parameters
    ----------
    db_con : sqlalchemy.engine.Engine
        engine object used to open a connection to the database
    entity : str
        name of the database table to download
    path : str
        path to the directory in which to save the file
"""
pd.read_sql(entity, con=db_con).to_pickle(path + entity + '.pkl')
entities = ['projects',
'commits', 'commit_comments',
'issues', 'issue_comments',
'pull_requests', 'pull_request_history',
'pull_request_comments', 'watchers']
for entity in entities:
from_db_to_pickle(db_connection, entity, data_from_db)
###Output
_____no_output_____ |
code/chap08-mky.ipynb | ###Markdown
Modeling and Simulation in Python Chapter 8 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
###Output
_____no_output_____
###Markdown
Functions from the previous chapter
###Code
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
###Output
_____no_output_____
###Markdown
Reading the data
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
census = table2.census / 1e9
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Estimated world population')
###Output
_____no_output_____
###Markdown
Running the quadratic model Here's the update function for the quadratic growth model with parameters `alpha` and `beta`.
###Code
def update_func_quad(pop, t, system):
"""Update population based on a quadratic model.
pop: current population in billions
t: what year it is
system: system object with model parameters
"""
net_growth = system.alpha * pop + system.beta * pop**2
return pop + net_growth
###Output
_____no_output_____
###Markdown
Extract the starting time and population.
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = get_first_value(census)
###Output
_____no_output_____
###Markdown
Initialize the system object.
###Code
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
alpha=0.025,
beta=-0.0018)
###Output
_____no_output_____
###Markdown
Run the model and plot results.
###Code
results= run_simulation(system, update_func_quad)
plot_results(census, un, results, 'quadratic')
###Output
_____no_output_____
###Markdown
Generating projections To generate projections, all we have to do is change `t_end`
###Code
system.t_end = 2250
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'World population projection')
savefig('figs/chap04-fig01.pdf')
###Output
Saving figure to file figs/chap04-fig01.pdf
###Markdown
The population in the model converges on the equilibrium population, `-alpha/beta`
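That equilibrium follows directly from setting the net growth to zero in the quadratic model defined above (a quick check of the algebra):

$$\Delta p = \alpha p + \beta p^2 = p\,(\alpha + \beta p) = 0
\quad\Longrightarrow\quad p = 0 \ \text{ or } \ p = -\alpha/\beta$$

With $\alpha > 0$ and $\beta < 0$, the nonzero equilibrium $-\alpha/\beta$ is positive, which is the carrying capacity the simulation converges to.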
###Code
results[system.t_end]
-system.alpha / system.beta
###Output
_____no_output_____
###Markdown
**Exercise:** What happens if we start with an initial population above the carrying capacity, like 20 billion? Run the model with initial populations between 1 and 20 billion, and plot the results on the same axes.
###Code
p0_array = linspace(1,20,20)
for system.p_0 in p0_array:
results = run_simulation(system, update_func_quad)
plot(results)
###Output
_____no_output_____
###Markdown
Comparing projections We can compare the projection from our model with projections produced by people who know what they are doing.
###Code
table3 = tables[3]
table3.head()
###Output
_____no_output_____
###Markdown
`NaN` is a special value that represents missing data, in this case because some agencies did not publish projections for some years.
###Code
table3.columns = ['census', 'prb', 'un']
###Output
_____no_output_____
###Markdown
This function plots projections from the UN DESA and U.S. Census. It uses `dropna` to remove the `NaN` values from each series before plotting it.
###Code
def plot_projections(table):
"""Plot world population projections.
table: DataFrame with columns 'un' and 'census'
"""
census_proj = table.census / 1e9
un_proj = table.un / 1e9
plot(census_proj.dropna(), 'b:', label='US Census')
plot(un_proj.dropna(), 'g--', label='UN DESA')
###Output
_____no_output_____
###Markdown
Run the model until 2100, which is as far as the other projections go.
###Code
system = System(t_0=t_0,
t_end=2100,
p_0=p_0,
alpha=0.025,
beta=-0.0018)
results = run_simulation(system, update_func_quad)
plot_results(census, un, results, 'World population projections')
plot_projections(table3)
savefig('figs/chap04-fig02.pdf')
###Output
Saving figure to file figs/chap04-fig02.pdf
###Markdown
People who know what they are doing expect the growth rate to decline more sharply than our model projects. Exercises **Optional exercise:** The net growth rate of world population has been declining for several decades. That observation suggests one more way to generate projections, by extrapolating observed changes in growth rate. The `modsim` library provides a function, `compute_rel_diff`, that computes relative differences of the elements in a sequence. It is a wrapper for the NumPy function `ediff1d`:
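Roughly speaking (a sketch of the idea, not the exact `modsim` source), the relative difference divides each year-to-year change by the current value:

```python
import numpy as np

def rel_diff_sketch(values):
    """Approximation of compute_rel_diff: (p[t+1] - p[t]) / p[t]."""
    values = np.asarray(values, dtype=float)
    return np.ediff1d(values) / values[:-1]

print(rel_diff_sketch([2.5, 2.55, 2.61]))  # about 2% growth per step
```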
###Code
%psource compute_rel_diff
###Output
_____no_output_____
###Markdown
Here's how we can use it to compute the relative differences in the `census` and `un` estimates:
###Code
alpha_census = compute_rel_diff(census)
plot(alpha_census)
alpha_un = compute_rel_diff(un)
plot(alpha_un)
decorate(xlabel='Year', ylabel='Net growth rate')
###Output
_____no_output_____
###Markdown
Other than a bump around 1990, net growth rate has been declining roughly linearly since 1965. As an exercise, you can use this data to make a projection of world population until 2100.
1. Define a function, `alpha_func`, that takes `t` as a parameter and returns an estimate of the net growth rate at time `t`, based on a linear function `alpha = intercept + slope * t`. Choose values of `slope` and `intercept` to fit the observed net growth rates since 1965.
2. Call your function with a range of `ts` from 1960 to 2020 and plot the results.
3. Create a `System` object that includes `alpha_func` as a system variable.
4. Define an update function that uses `alpha_func` to compute the net growth rate at the given time `t`.
5. Test your update function with `t_0 = 1960` and `p_0 = census[t_0]`.
6. Run a simulation from 1960 to 2100 with your update function, and plot the results.
7. Compare your projections with those from the US Census and UN.
###Code
def alpha_func(t):
    """Linear model of the net growth rate.
    The intercept and slope below are rough eyeball fits to the observed
    net growth rates since 1965 (t is shifted to 1970 for readability)."""
    intercept = 0.02
    slope = -0.0002
    return intercept + slope * (t - 1970)
def update_alpha_func(pop, t, system):
    """Compute the population next year using the time-varying growth rate."""
    net_growth = system.alpha_func(t) * pop
    return pop + net_growth
system = System(t_0=t_0,
                t_end=2020,
                p_0=p_0,
                alpha_func=alpha_func)
results = run_simulation(system, update_alpha_func)
plot_results(census, un, results, 'Proportional model, declining growth rate')
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____ |
Classify_events/classify_events.ipynb | ###Markdown
Download events related to each convocation
###Code
df_events = None
for file in data_files:
sep_symbol = ';' if '9' in file else ','
df = pd.read_csv(os.path.join(data_folder, file), sep=sep_symbol)
print(file, df.shape)
if df_events is None:
df_events = df
else:
df_events = pd.concat([df_events, df])
print(f'\nCombined dataset: {df_events.shape}')
# data sample
df_events.head().T
###Output
identifiers_9skl.csv (3857, 18)
identifiers_7skl.csv (4135, 18)
identifiers_8skl.csv (24103, 18)
identifiers_3skl.csv (7943, 18)
identifiers_4skl.csv (10929, 18)
identifiers_6skl.csv (23282, 18)
identifiers_5skl.csv (3424, 18)
Combined dataset: (77673, 18)
###Markdown
Data exploration: what is the unique ID of an event?
###Code
print(f'Number of events: {len(df_events):,.0f}')
print(f'Number of unique event IDs: {len(df_events.id_event.unique()):,.0f}')
print(f'Number of unique IDs: {len(df_events.id.unique()):,.0f}')
print(f'Number of unique question IDs: {len(df_events.id_question.unique()):,.0f}')
print(f'Number of unique question number|event: {len(df_events["number|event"].unique()):,.0f}')
print('\n"number|event" column is a unique event identifier')
print('\nExample of events events with identical "id_event" values:')
display(df_events[df_events.id_event.isin([5184, 1305, 29505])].sort_values('id_event').T)
###Output
Number of events: 77,673
Number of unique event IDs: 73,232
Number of unique IDs: 15,804
Number of unique question IDs: 21,457
Number of unique question number|event: 77,673
"number|event" column is a unique event identifier
Example of events events with identical "id_event" values:
###Markdown
Main categories of documents in the dataset
###Code
print('Column "rubric"')
df_events.loc[:, 'rubric'] = np.where(df_events.rubric.isna(), 'Undefined', df_events.rubric)
display(df_events.groupby('rubric')['number|event'].count().sort_values(ascending=False))
print('\nColumn "type"')
df_events.loc[:, 'type'] = np.where(df_events.type.isna(), 'Undefined', df_events.type)
display(df_events.groupby('type')['number|event'].count().sort_values(ascending=False))
print('\nColumns "type" and "registrationConvocation"')
df_events.loc[:, 'registrationConvocation'] = np.where(df_events.registrationConvocation.isna(), 'Undefined', df_events.registrationConvocation)
df_type_convocation = pd.pivot_table(data = df_events,
index = "type",
columns = "registrationConvocation",
values = "number|event",
aggfunc = "count",
fill_value = 0)
df_type_convocation = df_type_convocation[['Undefined', 'III скликання', 'IV скликання', 'V скликання', 'VI скликання',
'VII скликання', 'VIII скликання', 'IX скликання']]
display(df_type_convocation)
###Output
Column "rubric"
###Markdown
Classify events
###Code
events_number = df_events.shape[0]
df_remainer, df_combined = separate_meta_types(df_events, types_dict)
events_classified = df_combined.shape[0]
print(f'Number of events in the database: {events_number:,.0f}')
print(f'Number of classified events: {events_classified:,.0f} ({events_classified/events_number*100:,.2f})')
# combine classified and not classified
df_remainer.loc[:, 'type_name'] = 'not classified'
df_remainer.loc[:, 'meta_type_name'] = 'not classified'
df_combined = pd.concat([df_combined, df_remainer])
df_meta_convocation = get_type_pivot(df_combined, "registrationConvocation")
df_meta_type = get_type_pivot(df_combined, "type")
df_classification_count = df_combined.groupby(['meta_type_name', 'type_name'])[['number|event']].count()
display(df_meta_convocation)
print('\n\n')
display(df_meta_type)
print('\n\n')
display(df_classification_count)
df_meta_convocation.to_csv('summary_convocation.csv', index=False, sep="\t")
df_meta_type.to_csv('summary_document_type.csv', index=False, sep="\t")
df_classification_count.to_csv('summary_classification.csv', index=True, sep="\t")
df_combined.to_csv(os.path.join(data_folder, 'identifiers_combined.csv'), index=False, sep="\t")
###Output
_____no_output_____ |
code/HIV MODEL.ipynb | ###Markdown
HIV MODEL Chris Lee
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
def make_system(alpha, beta, delta, gamma, mu, pi, rho, sigma, tau):
"""Make a system object for the HIV model.
alpha: rate at which lymphocytes switch from latent to active
beta: rate of infection of lymphocytes per virion
delta: rate of removal of infected cells
gamma: rate at which new, uninfected CD4 lymphocytes arise
mu: death rate of lymphocytes
pi: free virion production rate
    rho: proportion of newly infected cells that become latently infected
    sigma: rate of removal of free virions
tau: proportion of lymphocytes that are activated
returns: System object
"""
init = State(R=1000, L=0, E=0, V=100)
    #R = lymphocytes
#L = Latent Infected
#E = Actively Infected
#V = Free Virions
t0 = 0
t_end = 120
return System(init=init, t0=t0, t_end=t_end,
alpha=alpha, beta=beta,
delta=delta, gamma=gamma,
mu=mu, pi=pi, rho = rho, sigma=sigma,
tau=tau)
sys = make_system(3.6*10e-2, .00027, .33, 1.36, 1.36e-3, 100, .1, 2, .2)
def update_func(state, t, system):
"""Update the SIR model.
state: State (R, L, E, V)
t: time
system: System object
returns: State (rlev)
"""
r, l, e, v = state
unpack(system)
dt = 1/(t_end/120)
BRV = beta*v*r
delta_lymphocytes = ((gamma*tau) - (mu*r) - BRV) * dt
delta_latent = ((rho*BRV) - (mu*l) - (alpha*l)) * dt
delta_active = (((1-rho)*BRV) + (alpha*l) - (delta*e)) * dt
delta_virions = ((pi*e) - (sigma*v)) * dt
r += delta_lymphocytes
l += delta_latent
e += delta_active
v += delta_virions
return State(R=r, L=l, E=e, V=v)
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: TimeFrame
"""
unpack(system)
frame = TimeFrame(columns=init.index)
frame.row[t0] = init
for t in linrange(t0, t_end):
frame.row[t+1] = update_func(frame.row[t], t, system)
return frame
def plot_results(r, l, e, v):
"""Plot the results of a SIR model.
r, l, e, v: timeseries
"""
plot(r, 'r-', label='Lymphocytes')
plot(l, 'b-', label='Latent Infected')
plot(e, 'g-', label='Activated Infected')
plot(v, 'y-', label='Free Virions')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
results = run_simulation(sys, update_func)
plot_results(results.R, results.L, results.E, results.V)
def slope_func(state, t, system):
"""
Outputs slope of each type per time step
system: System object
t: time step
state: State (R, L, E, V)
"""
unpack(system)
r, l, e, v = state
drdt = gamma*tau - mu*r - beta*v*r
    dldt = rho*beta*r*v - mu*l - alpha*l
dedt = (1-rho)*beta*r*v + alpha*l - delta*e
dvdt = pi*e - sigma*v
return drdt, dldt, dedt, dvdt
results, details = run_ode_solver(sys, slope_func)
details
plot_results(results.R, results.L, results.E, results.V)
###Output
_____no_output_____ |
.ipynb_checkpoints/Setting Up-checkpoint.ipynb | ###Markdown
U1L1 - Setting Up---The easiest way to start coding in Python is to use a web service from [repl.it](https://repl.it). If you'd like to use native software or other setups, please follow the guides below:- For Windows: [Guide Link from Visual Studio](https://code.visualstudio.com/docs/python/python-tutorial)- For macOS: [Guide Link from Digital Ocean](https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-macos)**It will be assumed that you will be coding on repl.it. You may require additional setting configurations if you are using a different setup.** Starting with repl.it**To Do List:**1. Create an account. No need to select the _I'm a teacher_ option.2. Click on the blue button with a plus sign to create a new Python file.3. Call your Python file: "My First Python Program" My First Python Program Examine the figure above; there are three vertical columns of importance: 1. File Directory 2. Code Editor 3. Python Interpreter 1. File Directory As your Python projects get more advanced, you will be creating more files here. The basics course will mainly deal with just editing our _"main.py"_ file. 2. Code Editor This is where you write your Python code.- To the left we have numbers letting us know which line we are on- Python is a top-down interpreted language; therefore, Python will execute the lines of code starting at the top and continuing until it reaches the end of the file- Currently we've written some lines of code that are executable. 3. Python Interpreter This is where we get to see our code in action.- It will start from the first line of our editor- It will read the code line by line- If there is an _output_ to be made, it will be shown in the interpreter._At the moment, I have some code written down. Please feel free to copy it._ To see all of this in action, please press **Run**.
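A tiny illustration of that top-down behaviour (a sketch you can paste into the editor): statements run in the order they appear, so the interpreter prints the lines in the same order.

```python
# Executed from top to bottom: each print runs as soon as its line is reached.
print('this line runs first')
print('this line runs second')
```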
###Code
# My First Python Program
# By: Mr. Park
print('Hello, World!')
###Output
Hello, World!
|
module3/DS7_assignment_kaggle_challenge_3.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Kaggle Challenge, Module 3 Assignment- [X] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.- [X] Continue to participate in our Kaggle challenge. - [X] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.- [X] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [X] Commit your notebook to your fork of the GitHub repo. Stretch Goals Reading- Jake VanderPlas, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation- Jake VanderPlas, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)- Ron Zacharski, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation- Sebastian Raschka, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)- Peter Worcester, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85) Doing- In additon to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.- _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines? More Categorical Encodings**1.** The article **[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)** mentions 4 encodings:- **"Categorical Encoding":** This means using the raw categorical values as-is, not encoded. Scikit-learn doesn't support this, but some tree algorithm implementations do. For example, [Catboost](https://catboost.ai/), or R's [rpart](https://cran.r-project.org/web/packages/rpart/index.html) package.- **Numeric Encoding:** Synonymous with Label Encoding, or "Ordinal" Encoding with random order. 
We can use [category_encoders.OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html).- **One-Hot Encoding:** We can use [category_encoders.OneHotEncoder](http://contrib.scikit-learn.org/categorical-encoding/onehot.html).- **Binary Encoding:** We can use [category_encoders.BinaryEncoder](http://contrib.scikit-learn.org/categorical-encoding/binary.html).**2.** The short video **[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)** introduces an interesting idea: use both X _and_ y to encode categoricals.Category Encoders has multiple implementations of this general concept:- [CatBoost Encoder](http://contrib.scikit-learn.org/categorical-encoding/catboost.html)- [James-Stein Encoder](http://contrib.scikit-learn.org/categorical-encoding/jamesstein.html)- [Leave One Out](http://contrib.scikit-learn.org/categorical-encoding/leaveoneout.html)- [M-estimate](http://contrib.scikit-learn.org/categorical-encoding/mestimate.html)- [Target Encoder](http://contrib.scikit-learn.org/categorical-encoding/targetencoder.html)- [Weight of Evidence](http://contrib.scikit-learn.org/categorical-encoding/woe.html)Category Encoder's mean encoding implementations work for regression problems or binary classification problems. For multi-class classification problems, you will need to temporarily reformulate it as binary classification. For example:```pythonencoder = ce.TargetEncoder(min_samples_leaf=..., smoothing=...) Both parameters > 1 to avoid overfittingX_train_encoded = encoder.fit_transform(X_train, y_train=='functional')X_val_encoded = encoder.transform(X_train, y_val=='functional')```**3.** The **[dirty_cat](https://dirty-cat.github.io/stable/)** library has a Target Encoder implementation that works with multi-class classification.```python dirty_cat.TargetEncoder(clf_type='multiclass-clf')```It also implements an interesting idea called ["Similarity Encoder" for dirty categories](https://www.slideshare.net/GaelVaroquaux/machine-learning-on-non-curated-data-154905090).However, it seems like dirty_cat doesn't handle missing values or unknown categories as well as category_encoders does. And you may need to use it with one column at a time, instead of with your whole dataframe.**4. [Embeddings](https://www.kaggle.com/learn/embeddings)** can work well with sparse / high cardinality categoricals._**I hope it’s not too frustrating or confusing that there’s not one “canonical” way to encode categorcals. It’s an active area of research and experimentation! Maybe you can make your own contributions!**_ BONUS: Stacking!Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:```pythonimport pandas as pd Filenames of your submissions you want to ensemblefiles = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']target = 'status_group'submissions = (pd.read_csv(file)[[target]] for file in files)ensemble = pd.concat(submissions, axis='columns')majority_vote = ensemble.mode(axis='columns')[0]sample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission[target] = majority_votesubmission.to_csv('my-ultimate-ensemble-submission.csv', index=False)```
###Code
import os, sys
in_colab = 'google.colab' in sys.modules
# If you're in Colab...
if in_colab:
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
# Install required python packages
!pip install -r requirements.txt
# Change into directory for module
os.chdir('module3')
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('../data/tanzania/train_features.csv'),
pd.read_csv('../data/tanzania/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
import pandas as pd
import numpy as np
from scipy.stats import randint, uniform
import random as ran
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.impute import SimpleImputer
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import f_classif, chi2, SelectKBest, SelectPercentile, SelectFpr, SelectFromModel
from sklearn.pipeline import make_pipeline, Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
import category_encoders as ce
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='sklearn')
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('train_features.csv'),
pd.read_csv('train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('test_features.csv')
sample_submission = pd.read_csv('sample_submission.csv')
# Split train into train & val
train, val = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
# Also create a "missing indicator" column, because the fact that
# values are missing may be a predictive signal.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height', 'population']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Drop duplicate columns
duplicates = ['quantity_group', 'payment_type']
X = X.drop(columns=duplicates)
# Drop recorded_by (never varies) and id (always varies, random)
unusable_variance = ['recorded_by', 'id']
X = X.drop(columns=unusable_variance)
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded, then drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# return the wrangled dataframe
return X
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
target = 'status_group'
X_train = train.drop(columns=target)
y_train = train[target]
pipeline = Pipeline(
[('ordinalencoder',ce.OrdinalEncoder()),
('simpleimputer',SimpleImputer()),
('randomforestregressor',RandomForestClassifier())
] )
param_distributions = {
'randomforestregressor__n_estimators': randint(50, 500),
'randomforestregressor__max_depth': [5, 10, 15, 20, None],
'randomforestregressor__max_features': uniform(0, 1),
}
search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
n_iter=10,
cv=3,
scoring='accuracy',
verbose=10,
return_train_score=True,
n_jobs=-1
)
search.fit(X_train, y_train);
print('Best hyperparameters', search.best_params_)
print('Cross-validation accuracy', search.best_score_)
###Output
Best hyperparameters {'randomforestregressor__max_depth': 20, 'randomforestregressor__max_features': 0.10721042087254384, 'randomforestregressor__n_estimators': 342}
Cross-validation accuracy 0.8033670033670034
|
nb/GetHansard.ipynb | ###Markdown
Gets the URLs for downloading the Hansard for a particular year. Most of the work was done here, thanks Tim Sherratt, you're amazing! (https://timsherratt.org) https://github.com/GLAM-Workbench/australian-commonwealth-hansard
###Code
import re
import os
import time
import math
import requests
import arrow
import csv
import pandas as pd
from tqdm import tqdm
from bs4 import BeautifulSoup
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
s = requests.Session()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[ 502, 503, 504 ])
s.mount('https://', HTTPAdapter(max_retries=retries))
s.mount('http://', HTTPAdapter(max_retries=retries))
URLS = {
'hofreps': (
'http://parlinfo.aph.gov.au/parlInfo/search/summary/summary.w3p;'
'adv=yes;orderBy=date-eLast;page={page};'
'query={query}%20Dataset%3Ahansardr,hansardr80;resCount=100'),
'senate': (
'http://parlinfo.aph.gov.au/parlInfo/search/summary/summary.w3p;'
'adv=yes;orderBy=date-eLast;page={page};'
'query={query}%20Dataset%3Ahansards,hansards80;resCount=100')
}
# write dictionary to csv
# https://stackoverflow.com/questions/3086973/how-do-i-convert-this-list-of-dictionaries-to-a-csv-file
def dict_to_csv(input_dict : dict, output_file : str,):
with open(output_file, 'w', newline='') as of:
dict_writer = csv.DictWriter(of, input_dict[0].keys())
dict_writer.writeheader()
dict_writer.writerows(input_dict)
def get_total_results(house, query):
'''
Get the total number of results in the search.
'''
# Insert query and page values into the ParlInfo url
url = URLS[house].format(query=query, page=0)
# Get the results page
response = s.get(url)
# Parse the HTML
soup = BeautifulSoup(response.text)
try:
# Find where the total results are given in the HTML
summary = soup.find('div', 'resultsSummary').contents[1].string
# Extract the number of results from the string
total = re.search(r'of (\d+)', summary).group(1)
except AttributeError:
total = 0
return int(total)
def get_xml_url(url):
'''
Extract the XML file url from an individual result.
'''
# Load the page for an individual result
response = s.get(url)
# Parse the HTML
soup = BeautifulSoup(response.text)
# Find the XML url by looking for a pattern in the href
xml_url = soup.find('a', href=re.compile('toc_unixml'))['href']
return xml_url
def number_of_results(house, year):
'''
Loop through a search by house and year, finding all the urls for XML files.
'''
# Format the start and end dates
start_date = '01%2F01%2F{}'.format(year)
end_date = '31%2F12%2F{}'.format(year)
# Prepare the query value using the start and end dates
query = 'Date%3A{}%20>>%20{}'.format(start_date, end_date)
# Get the total results
total_results = get_total_results(house, query)
xml_urls = []
dates = []
return total_results
def harvest_year(house, year):
'''
Loop through a search by house and year, finding all the urls for XML files.
'''
# Format the start and end dates
start_date = '01%2F01%2F{}'.format(year)
end_date = '31%2F12%2F{}'.format(year)
# Prepare the query value using the start and end dates
query = 'Date%3A{}%20>>%20{}'.format(start_date, end_date)
# Get the total results
total_results = get_total_results(house, query)
xml_urls = []
dates = []
if total_results > 0:
# Calculate the number of pages in the results set
num_pages = int(math.ceil(total_results / 100))
# Loop through the page range
for page in range(0, num_pages + 1):
# Get the next page of results
url = URLS[house].format(query=query, page=page)
response = s.get(url)
# Parse the HTML
soup = BeautifulSoup(response.text)
# Find the list of results and loop through them
for result in (soup.find_all('div', 'resultContent')):
# Try to identify the date
try:
date = re.search(r'Date: (\d{2}\/\d{2}\/\d{4})', result.find('div', 'sumMeta').get_text()).group(1)
date = arrow.get(date, 'DD/MM/YYYY').format('YYYY-MM-DD')
except AttributeError:
#There are some dodgy dates -- we'll just ignore them
date = None
# If there's a date, and we haven't seen it already, we'll grab the details
if date and date not in dates:
dates.append(date)
# Get the link to the individual result page
# This is where the XML file links live
result_link = result.find('div', 'sumLink').a['href']
# Get the XML file link from the individual record page
xml_url = get_xml_url(result_link)
# Save dates and links
xml_urls.append({'date': date, 'url': 'https://parlinfo.aph.gov.au{}'.format(xml_url)})
time.sleep(1)
time.sleep(1)
return xml_urls
YEAR = 2018
def get_and_write_urls(year : int):
print("Processing Year {}".format(year))
try:
senate_urls = harvest_year('senate', year)
except:
senate_urls = []
try:
horeps_urls = harvest_year('hofreps', year)
except:
horeps_urls = []
print("Number of results: Senate => {} House of Reps => {}".format(len(senate_urls), len(horeps_urls)))
senate_output_file = data_output_path+"/{}-au-hansard-senate.csv".format(year)
horeps_output_file = data_output_path+"/{}-au-hansard-hofreps.csv".format(year)
clean_url = lambda x: x['url'].split(";")[0]+'\n'
with open(senate_output_file, "w") as output:
output.writelines(map(clean_url, senate_urls))
with open(horeps_output_file, "w") as output:
output.writelines(map(clean_url, horeps_urls))
for y in reversed(range(1979, 2009)):
get_and_write_urls(y)
###Output
Processing Year 2008
Number of results: Senate => 0 House of Reps => 69
Processing Year 2007
Number of results: Senate => 41 House of Reps => 0
Processing Year 2006
Number of results: Senate => 56 House of Reps => 68
Processing Year 2005
Number of results: Senate => 55 House of Reps => 67
Processing Year 2004
Number of results: Senate => 0 House of Reps => 0
Processing Year 2003
Number of results: Senate => 64 House of Reps => 74
Processing Year 2002
Number of results: Senate => 60 House of Reps => 69
Processing Year 2001
Number of results: Senate => 51 House of Reps => 56
Processing Year 2000
Number of results: Senate => 0 House of Reps => 73
Processing Year 1999
Number of results: Senate => 79 House of Reps => 73
Processing Year 1998
Number of results: Senate => 59 House of Reps => 56
Processing Year 1997
Number of results: Senate => 0 House of Reps => 0
Processing Year 1996
Number of results: Senate => 0 House of Reps => 0
Processing Year 1995
Number of results: Senate => 0 House of Reps => 0
Processing Year 1994
Number of results: Senate => 0 House of Reps => 0
Processing Year 1993
Number of results: Senate => 0 House of Reps => 0
Processing Year 1992
Number of results: Senate => 0 House of Reps => 0
Processing Year 1991
Number of results: Senate => 0 House of Reps => 0
Processing Year 1990
Number of results: Senate => 0 House of Reps => 0
Processing Year 1989
Number of results: Senate => 0 House of Reps => 0
Processing Year 1988
Number of results: Senate => 0 House of Reps => 0
Processing Year 1987
Number of results: Senate => 0 House of Reps => 0
Processing Year 1986
Number of results: Senate => 0 House of Reps => 0
Processing Year 1985
Number of results: Senate => 0 House of Reps => 0
Processing Year 1984
Number of results: Senate => 0 House of Reps => 0
Processing Year 1983
Number of results: Senate => 0 House of Reps => 0
Processing Year 1982
Number of results: Senate => 0 House of Reps => 0
Processing Year 1981
Number of results: Senate => 0 House of Reps => 0
Processing Year 1980
|
Ch13_CV/kaggle_cifar10.ipynb | ###Markdown
The following additional libraries are needed to run this notebook. Note that running on Colab is experimental; please report a GitHub issue if you have any problem.
###Code
!pip install d2l==0.14.3
!pip install -U mxnet-cu101mkl==1.6.0.post0 # updating mxnet to at least v1.6
###Output
Requirement already satisfied: d2l==0.14.3 in /usr/local/lib/python3.6/dist-packages (0.14.3)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from d2l==0.14.3) (3.2.2)
Requirement already satisfied: jupyter in /usr/local/lib/python3.6/dist-packages (from d2l==0.14.3) (1.0.0)
Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from d2l==0.14.3) (1.0.5)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from d2l==0.14.3) (1.18.5)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->d2l==0.14.3) (1.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->d2l==0.14.3) (2.4.7)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->d2l==0.14.3) (2.8.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->d2l==0.14.3) (0.10.0)
Requirement already satisfied: nbconvert in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l==0.14.3) (5.6.1)
Requirement already satisfied: qtconsole in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l==0.14.3) (4.7.6)
Requirement already satisfied: jupyter-console in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l==0.14.3) (5.2.0)
Requirement already satisfied: ipywidgets in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l==0.14.3) (7.5.1)
Requirement already satisfied: notebook in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l==0.14.3) (5.3.1)
Requirement already satisfied: ipykernel in /usr/local/lib/python3.6/dist-packages (from jupyter->d2l==0.14.3) (4.10.1)
Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->d2l==0.14.3) (2018.9)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib->d2l==0.14.3) (1.15.0)
Requirement already satisfied: testpath in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (0.4.4)
Requirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (4.3.3)
Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (0.3)
Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (1.4.2)
Requirement already satisfied: jinja2>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (2.11.2)
Requirement already satisfied: nbformat>=4.4 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (5.0.7)
Requirement already satisfied: defusedxml in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (0.6.0)
Requirement already satisfied: jupyter-core in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (4.6.3)
Requirement already satisfied: bleach in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (3.1.5)
Requirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (2.1.3)
Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from nbconvert->jupyter->d2l==0.14.3) (0.8.4)
Requirement already satisfied: jupyter-client>=4.1 in /usr/local/lib/python3.6/dist-packages (from qtconsole->jupyter->d2l==0.14.3) (5.3.5)
Requirement already satisfied: qtpy in /usr/local/lib/python3.6/dist-packages (from qtconsole->jupyter->d2l==0.14.3) (1.9.0)
Requirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from qtconsole->jupyter->d2l==0.14.3) (0.2.0)
Requirement already satisfied: pyzmq>=17.1 in /usr/local/lib/python3.6/dist-packages (from qtconsole->jupyter->d2l==0.14.3) (19.0.2)
Requirement already satisfied: prompt-toolkit<2.0.0,>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from jupyter-console->jupyter->d2l==0.14.3) (1.0.18)
Requirement already satisfied: ipython in /usr/local/lib/python3.6/dist-packages (from jupyter-console->jupyter->d2l==0.14.3) (5.5.0)
Requirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/lib/python3.6/dist-packages (from ipywidgets->jupyter->d2l==0.14.3) (3.5.1)
Requirement already satisfied: tornado>=4 in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l==0.14.3) (5.1.1)
Requirement already satisfied: terminado>=0.8.1 in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l==0.14.3) (0.8.3)
Requirement already satisfied: Send2Trash in /usr/local/lib/python3.6/dist-packages (from notebook->jupyter->d2l==0.14.3) (1.5.0)
Requirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->nbconvert->jupyter->d2l==0.14.3) (4.4.2)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2>=2.4->nbconvert->jupyter->d2l==0.14.3) (1.1.1)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/lib/python3.6/dist-packages (from nbformat>=4.4->nbconvert->jupyter->d2l==0.14.3) (2.6.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert->jupyter->d2l==0.14.3) (20.4)
Requirement already satisfied: webencodings in /usr/local/lib/python3.6/dist-packages (from bleach->nbconvert->jupyter->d2l==0.14.3) (0.5.1)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from prompt-toolkit<2.0.0,>=1.0.0->jupyter-console->jupyter->d2l==0.14.3) (0.2.5)
Requirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.6/dist-packages (from ipython->jupyter-console->jupyter->d2l==0.14.3) (49.6.0)
Requirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython->jupyter-console->jupyter->d2l==0.14.3) (0.7.5)
Requirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython->jupyter-console->jupyter->d2l==0.14.3) (0.8.1)
Requirement already satisfied: pexpect; sys_platform != "win32" in /usr/local/lib/python3.6/dist-packages (from ipython->jupyter-console->jupyter->d2l==0.14.3) (4.8.0)
Requirement already satisfied: ptyprocess; os_name != "nt" in /usr/local/lib/python3.6/dist-packages (from terminado>=0.8.1->notebook->jupyter->d2l==0.14.3) (0.6.0)
Collecting mxnet-cu101mkl==1.6.0.post0
  Downloading https://files.pythonhosted.org/packages/45/3f/e33e3f92110fa5caba5e9eb052008208a33c1d5faccc7fe5312532e9aa42/mxnet_cu101mkl-1.6.0.post0-py2.py3-none-manylinux1_x86_64.whl (712.3MB)
     |████████████████████████████████| 712.3MB 27kB/s
Collecting graphviz<0.9.0,>=0.8.1
Downloading https://files.pythonhosted.org/packages/53/39/4ab213673844e0c004bed8a0781a0721a3f6bb23eb8854ee75c236428892/graphviz-0.8.4-py2.py3-none-any.whl
Requirement already satisfied, skipping upgrade: requests<3,>=2.20.0 in /usr/local/lib/python3.6/dist-packages (from mxnet-cu101mkl==1.6.0.post0) (2.23.0)
Requirement already satisfied, skipping upgrade: numpy<2.0.0,>1.16.0 in /usr/local/lib/python3.6/dist-packages (from mxnet-cu101mkl==1.6.0.post0) (1.18.5)
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet-cu101mkl==1.6.0.post0) (2.10)
Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet-cu101mkl==1.6.0.post0) (3.0.4)
Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet-cu101mkl==1.6.0.post0) (2020.6.20)
Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.20.0->mxnet-cu101mkl==1.6.0.post0) (1.24.3)
Installing collected packages: graphviz, mxnet-cu101mkl
Found existing installation: graphviz 0.10.1
Uninstalling graphviz-0.10.1:
Successfully uninstalled graphviz-0.10.1
Successfully installed graphviz-0.8.4 mxnet-cu101mkl-1.6.0.post0
###Markdown
Image Classification (CIFAR-10) on Kaggle
:label:`sec_kaggle_cifar10`

So far, we have been using Gluon's `data` package to directly obtain image datasets in the tensor format. In practice, however, image datasets often exist in the format of image files. In this section, we will start with the original image files and organize, read, and convert the files to the tensor format step by step.

We performed an experiment on the CIFAR-10 dataset in :numref:`sec_image_augmentation`. This is an important dataset in the computer vision field. Now, we will apply the knowledge we learned in the previous sections in order to participate in the Kaggle competition, which addresses CIFAR-10 image classification problems. The competition's web address is

> https://www.kaggle.com/c/cifar-10

:numref:`fig_kaggle_cifar10` shows the information on the competition's webpage. In order to submit the results, please register an account on the Kaggle website first.

(Figure: the CIFAR-10 image classification competition webpage; :width:`600px` :label:`fig_kaggle_cifar10`)

First, import the packages or modules required for the competition.
###Code
import collections
from d2l import mxnet as d2l
import math
from mxnet import autograd, gluon, init, npx
from mxnet.gluon import nn
import os
import pandas as pd
import shutil
import time
npx.set_np()
###Output
_____no_output_____
###Markdown
Obtaining and Organizing the Dataset

The competition data is divided into a training set and a testing set. The training set contains $50,000$ images. The testing set contains $300,000$ images, of which $10,000$ images are used for scoring, while the other $290,000$ non-scoring images are included to prevent the manual labeling of the testing set and the submission of labeling results. The image format in both datasets is PNG, with heights and widths of 32 pixels and three color channels (RGB). The images cover $10$ categories: planes, cars, birds, cats, deer, dogs, frogs, horses, boats, and trucks. The upper-left corner of Figure 9.16 shows some images of planes, cars, and birds in the dataset.

Downloading the Dataset

After logging in to Kaggle, we can click on the "Data" tab on the CIFAR-10 image classification competition webpage shown in :numref:`fig_kaggle_cifar10` and download the dataset by clicking the "Download All" button. After unzipping the downloaded file in `../data`, and unzipping `train.7z` and `test.7z` inside it, you will find the entire dataset in the following paths:

* ../data/cifar-10/train/[1-50000].png
* ../data/cifar-10/test/[1-300000].png
* ../data/cifar-10/trainLabels.csv
* ../data/cifar-10/sampleSubmission.csv

Here the folders `train` and `test` contain the training and testing images respectively, `trainLabels.csv` has labels for the training images, and `sampleSubmission.csv` is a sample submission file. To make it easier to get started, we provide a small-scale sample of the dataset: it contains the first $1000$ training images and $5$ random testing images.

To use the full dataset of the Kaggle competition, you need to set the following `demo` variable to `False`.
###Code
#@save
d2l.DATA_HUB['cifar10_tiny'] = (d2l.DATA_URL + 'kaggle_cifar10_tiny.zip',
'2068874e4b9a9f0fb07ebe0ad2b29754449ccacd')
# If you use the full dataset downloaded for the Kaggle competition, set
# `demo` to False
demo = True
if demo:
data_dir = d2l.download_extract('cifar10_tiny')
else:
data_dir = '../data/cifar-10/'
###Output
Downloading ../data/kaggle_cifar10_tiny.zip from http://d2l-data.s3-accelerate.amazonaws.com/kaggle_cifar10_tiny.zip...
###Markdown
Organizing the Dataset

We need to organize the dataset to facilitate model training and testing. Let us first read the labels from the CSV file. The following function returns a dictionary that maps each filename (without extension) to its label.
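As a quick illustration (the two rows below are made up, but they mimic the `id,label` layout assumed for `trainLabels.csv`), the parsing logic works as follows:

```python
# Minimal sketch of the parsing done by read_csv_labels, using an in-memory CSV
lines = "id,label\n1,frog\n2,truck\n".splitlines()[1:]   # skip the header row
tokens = [l.rstrip().split(',') for l in lines]
print(dict(tokens))   # {'1': 'frog', '2': 'truck'}
```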
###Code
#@save
def read_csv_labels(fname):
"""Read fname to return a name to label dictionary."""
with open(fname, 'r') as f:
# Skip the file header line (column name)
lines = f.readlines()[1:]
tokens = [l.rstrip().split(',') for l in lines]
return dict(((name, label) for name, label in tokens))
labels = read_csv_labels(os.path.join(data_dir, 'trainLabels.csv'))
print('# training examples:', len(labels))
print('# classes:', len(set(labels.values())))
###Output
# training examples: 1000
# classes: 10
###Markdown
Next, we define the `reorg_train_valid` function to segment the validation set from the original training set. The argument `valid_ratio` in this function is the ratio of the number of examples in the validation set to the number of examples in the original training set. In particular, let $n$ be the number of images of the class with the least examples, and $r$ be the ratio, then we will use $\max(\lfloor nr\rfloor,1)$ images for each class as the validation set. Let us use `valid_ratio=0.1` as an example. Since the original training set has $50,000$ images, there will be $45,000$ images used for training and stored in the path "`train_valid_test/train`" when tuning hyperparameters, while the other $5,000$ images will be stored as validation set in the path "`train_valid_test/valid`". After organizing the data, images of the same class will be placed under the same folder so that we can read them later.
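As a quick numeric check (a minimal sketch, not part of the pipeline), the formula above gives the per-class validation count for the full dataset:

```python
import math

# Assuming 10 balanced classes, the smallest class in the full training set has
# n = 5000 images; with valid_ratio r = 0.1 we keep max(floor(n*r), 1) per class.
n, r = 5000, 0.1
n_valid_per_label = max(1, math.floor(n * r))
print(n_valid_per_label)   # 500 per class, i.e. 5000 validation images in total
```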
###Code
#@save
def copyfile(filename, target_dir):
"""Copy a file into a target directory."""
d2l.mkdir_if_not_exist(target_dir)
shutil.copy(filename, target_dir)
#@save
def reorg_train_valid(data_dir, labels, valid_ratio):
# The number of examples of the class with the least examples in the
# training dataset
n = collections.Counter(labels.values()).most_common()[-1][1]
# The number of examples per class for the validation set
n_valid_per_label = max(1, math.floor(n * valid_ratio))
label_count = {}
for train_file in os.listdir(os.path.join(data_dir, 'train')):
label = labels[train_file.split('.')[0]]
fname = os.path.join(data_dir, 'train', train_file)
# Copy to train_valid_test/train_valid with a subfolder per class
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'train_valid', label))
if label not in label_count or label_count[label] < n_valid_per_label:
# Copy to train_valid_test/valid
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'valid', label))
label_count[label] = label_count.get(label, 0) + 1
else:
# Copy to train_valid_test/train
copyfile(fname, os.path.join(data_dir, 'train_valid_test',
'train', label))
return n_valid_per_label
###Output
_____no_output_____
###Markdown
The `reorg_test` function below is used to organize the testing set to facilitate the reading during prediction.
###Code
#@save
def reorg_test(data_dir):
for test_file in os.listdir(os.path.join(data_dir, 'test')):
copyfile(os.path.join(data_dir, 'test', test_file),
os.path.join(data_dir, 'train_valid_test', 'test',
'unknown'))
###Output
_____no_output_____
###Markdown
Finally, we use a function to call the previously defined `read_csv_labels`, `reorg_train_valid`, and `reorg_test` functions.
###Code
def reorg_cifar10_data(data_dir, valid_ratio):
labels = read_csv_labels(os.path.join(data_dir, 'trainLabels.csv'))
reorg_train_valid(data_dir, labels, valid_ratio)
reorg_test(data_dir)
###Output
_____no_output_____
###Markdown
We only set the batch size to $4$ for the demo dataset. During actual training and testing, the complete dataset of the Kaggle competition should be used and `batch_size` should be set to a larger integer, such as $128$. We use $10\%$ of the training examples as the validation set for tuning hyperparameters.
###Code
batch_size = 4 if demo else 128
valid_ratio = 0.1
reorg_cifar10_data(data_dir, valid_ratio)
###Output
_____no_output_____
###Markdown
Image Augmentation

To cope with overfitting, we use image augmentation. For example, by adding `transforms.RandomFlipLeftRight()`, the images can be flipped at random. We can also perform normalization for the three RGB channels of color images using `transforms.Normalize()`. Below, we list some of these operations that you can choose to use or modify depending on requirements.
###Code
transform_train = gluon.data.vision.transforms.Compose([
# Magnify the image to a square of 40 pixels in both height and width
gluon.data.vision.transforms.Resize(40),
# Randomly crop a square image of 40 pixels in both height and width to
# produce a small square of 0.64 to 1 times the area of the original
# image, and then shrink it to a square of 32 pixels in both height and
# width
gluon.data.vision.transforms.RandomResizedCrop(32, scale=(0.64, 1.0),
ratio=(1.0, 1.0)),
gluon.data.vision.transforms.RandomFlipLeftRight(),
gluon.data.vision.transforms.ToTensor(),
# Normalize each channel of the image
gluon.data.vision.transforms.Normalize([0.4914, 0.4822, 0.4465],
[0.2023, 0.1994, 0.2010])])
###Output
_____no_output_____
###Markdown
To make the output deterministic during testing, we only perform normalization on the image.
###Code
transform_test = gluon.data.vision.transforms.Compose([
gluon.data.vision.transforms.ToTensor(),
gluon.data.vision.transforms.Normalize([0.4914, 0.4822, 0.4465],
[0.2023, 0.1994, 0.2010])])
###Output
_____no_output_____
###Markdown
Reading the Dataset

Next, we can create the `ImageFolderDataset` instance to read the organized dataset containing the original image files, where each example includes the image and label.
###Code
train_ds, valid_ds, train_valid_ds, test_ds = [
gluon.data.vision.ImageFolderDataset(
os.path.join(data_dir, 'train_valid_test', folder))
for folder in ['train', 'valid', 'train_valid', 'test']]
###Output
_____no_output_____
###Markdown
We pass the image augmentation operations defined above to `DataLoader`. During training, we use only the validation set to evaluate the model, so its output needs to be deterministic (hence `transform_test`). During prediction, we will train the model on the combined training set and validation set to make full use of all labelled data.
###Code
train_iter, train_valid_iter = [gluon.data.DataLoader(
dataset.transform_first(transform_train), batch_size, shuffle=True,
last_batch='discard') for dataset in (train_ds, train_valid_ds)]
valid_iter = gluon.data.DataLoader(
valid_ds.transform_first(transform_test), batch_size, shuffle=False,
last_batch='discard')
test_iter = gluon.data.DataLoader(
test_ds.transform_first(transform_test), batch_size, shuffle=False,
last_batch='keep')
###Output
_____no_output_____
###Markdown
Defining the Model

Here, we build the residual blocks based on the `HybridBlock` class, which is slightly different than the implementation described in :numref:`sec_resnet`. This is done to improve execution efficiency.
###Code
class Residual(nn.HybridBlock):
def __init__(self, num_channels, use_1x1conv=False, strides=1, **kwargs):
super(Residual, self).__init__(**kwargs)
self.conv1 = nn.Conv2D(num_channels, kernel_size=3, padding=1,
strides=strides)
self.conv2 = nn.Conv2D(num_channels, kernel_size=3, padding=1)
if use_1x1conv:
self.conv3 = nn.Conv2D(num_channels, kernel_size=1,
strides=strides)
else:
self.conv3 = None
self.bn1 = nn.BatchNorm()
self.bn2 = nn.BatchNorm()
def hybrid_forward(self, F, X):
Y = F.npx.relu(self.bn1(self.conv1(X)))
Y = self.bn2(self.conv2(Y))
if self.conv3:
X = self.conv3(X)
return F.npx.relu(Y + X)
###Output
_____no_output_____
###Markdown
Next, we define the ResNet-18 model.
###Code
def resnet18(num_classes):
net = nn.HybridSequential()
net.add(nn.Conv2D(64, kernel_size=3, strides=1, padding=1),
nn.BatchNorm(), nn.Activation('relu'))
def resnet_block(num_channels, num_residuals, first_block=False):
blk = nn.HybridSequential()
for i in range(num_residuals):
if i == 0 and not first_block:
blk.add(Residual(num_channels, use_1x1conv=True, strides=2))
else:
blk.add(Residual(num_channels))
return blk
net.add(resnet_block(64, 2, first_block=True),
resnet_block(128, 2),
resnet_block(256, 2),
resnet_block(512, 2))
net.add(nn.GlobalAvgPool2D(), nn.Dense(num_classes))
return net
###Output
_____no_output_____
###Markdown
The CIFAR-10 image classification challenge uses 10 categories. We will perform Xavier random initialization on the model before training begins.
###Code
def get_net(devices):
num_classes = 10
net = resnet18(num_classes)
net.initialize(ctx=devices, init=init.Xavier())
return net
loss = gluon.loss.SoftmaxCrossEntropyLoss()
###Output
_____no_output_____
###Markdown
Defining the Training Functions

We will select the model and tune hyperparameters according to the model's performance on the validation set. Next, we define the model training function `train`. We record the training time of each epoch, which helps us compare the time costs of different models.
###Code
def train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
lr_decay):
trainer = gluon.Trainer(net.collect_params(), 'sgd',
{'learning_rate': lr, 'momentum': 0.9, 'wd': wd})
num_batches, timer = len(train_iter), d2l.Timer()
animator = d2l.Animator(xlabel='epoch', xlim=[0, num_epochs],
legend=['train loss', 'train acc', 'valid acc'])
for epoch in range(num_epochs):
metric = d2l.Accumulator(3)
if epoch > 0 and epoch % lr_period == 0:
trainer.set_learning_rate(trainer.learning_rate * lr_decay)
for i, (features, labels) in enumerate(train_iter):
timer.start()
l, acc = d2l.train_batch_ch13(
net, features, labels.astype('float32'), loss, trainer,
devices, d2l.split_batch)
metric.add(l, acc, labels.shape[0])
timer.stop()
if (i + 1) % (num_batches // 5) == 0:
animator.add(epoch + i / num_batches,
(metric[0] / metric[2], metric[1] / metric[2],
None))
if valid_iter is not None:
valid_acc = d2l.evaluate_accuracy_gpus(net, valid_iter, d2l.split_batch)
animator.add(epoch + 1, (None, None, valid_acc))
if valid_iter is not None:
print(f'loss {metric[0] / metric[2]:.3f}, '
f'train acc {metric[1] / metric[2]:.3f}, '
f'valid acc {valid_acc:.3f}')
else:
print(f'loss {metric[0] / metric[2]:.3f}, '
f'train acc {metric[1] / metric[2]:.3f}')
print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
f'on {str(devices)}')
###Output
_____no_output_____
###Markdown
Training and Validating the Model

Now, we can train and validate the model. The following hyperparameters can be tuned. For example, we can increase the number of epochs. Because `lr_period` and `lr_decay` are set to 50 and 0.1 respectively, the learning rate of the optimization algorithm will be multiplied by 0.1 after every 50 epochs. For simplicity, we only train five epochs here.
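To make the step-decay schedule concrete, here is a small stand-alone sketch (illustrative only; it just mirrors the `if epoch % lr_period == 0` logic inside `train`):

```python
# Step decay: multiply the learning rate by lr_decay every lr_period epochs
lr, lr_period, lr_decay = 0.1, 50, 0.1
schedule = []
for epoch in range(150):
    if epoch > 0 and epoch % lr_period == 0:
        lr *= lr_decay
    schedule.append(lr)
# schedule[0] == 0.1; after epochs 50 and 100 the rate drops to ~0.01 and ~0.001
print(schedule[0], schedule[50], schedule[100])
```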
###Code
devices, num_epochs, lr, wd = d2l.try_all_gpus(), 5, 0.1, 5e-4
lr_period, lr_decay, net = 50, 0.1, get_net(devices)
net.hybridize()
train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
lr_decay)
###Output
loss nan, train acc 0.102, valid acc 0.100
163.4 examples/sec on [gpu(0)]
###Markdown
Classifying the Testing Set and Submitting Results on Kaggle

After obtaining a satisfactory model design and hyperparameters, we use all training data (including the validation set) to retrain the model and classify the testing set.
###Code
net, preds = get_net(devices), []
net.hybridize()
train(net, train_valid_iter, None, num_epochs, lr, wd, devices, lr_period,
lr_decay)
for X, _ in test_iter:
y_hat = net(X.as_in_ctx(devices[0]))
preds.extend(y_hat.argmax(axis=1).astype(int).asnumpy())
sorted_ids = list(range(1, len(test_ds) + 1))
sorted_ids.sort(key=lambda x: str(x))
df = pd.DataFrame({'id': sorted_ids, 'label': preds})
df['label'] = df['label'].apply(lambda x: train_valid_ds.synsets[x])
df.to_csv('submission.csv', index=False)
###Output
loss nan, train acc 0.102
174.4 examples/sec on [gpu(0)]
|
Python/requests.ipynb | ###Markdown
Get Request
###Code
import requests

requests.get('http://api.github.com')
response = requests.get('http://api.github.com')
response.status_code
if response.status_code:
print('status code between 200 and 400: status_code={}'.format(response.status_code))
else:
print('false, status_code={}'.format(response.status_code))
from requests.exceptions import HTTPError
for url in ['https://api.github.com', 'https://api.github.com/invalid']:
try:
response = requests.get(url)
# If the response was successful, no Exception will be raised
response.raise_for_status()
except HTTPError as http_err:
print(f'HTTP error occurred: {http_err}') # Python 3.6
except Exception as err:
print(f'Other error occurred: {err}') # Python 3.6
else:
print('Success!')
response.content
response.text
response.encoding = 'utf-8' # Optional: requests infers this internally
response.text
response = requests.get('http://api.github.com')
response.json()
response.headers
###Output
_____no_output_____
###Markdown
Request Download Image

Advanced Usage: https://docs.python-requests.org/en/master/user/advanced/prepared-requests
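The cell below fetches the image bytes with `stream=True` but never writes them anywhere; a common follow-up (a sketch, with `logo.png` as an arbitrary local filename) is to stream the body to disk in chunks:

```python
import requests

image_url = "https://www.apple.com/ac/structured-data/images/open_graph_logo.png?201810271035"
resp = requests.get(image_url, stream=True)
resp.raise_for_status()
# iter_content yields the response body in chunks, avoiding one large allocation
with open("logo.png", "wb") as f:
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)
```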
###Code
image_url = "https://www.apple.com/ac/structured-data/images/open_graph_logo.png?201810271035"
resp = requests.get(image_url, stream=True)
print(f"resp.status_code: {resp.status_code}")
resp.content
###Output
_____no_output_____ |
_build/html/_sources/curriculum-notebooks/Science/SpecificAndLatentHeat/specific-and-latent-heat.ipynb | ###Markdown
 (Click **Cell** > **Run All** before proceeding.)
###Code
%matplotlib inline
#----------
#Import modules and packages
import ipywidgets as widgets
import random
import math
import matplotlib.pyplot as plt
from ipywidgets import Output, IntSlider, VBox, HBox, Layout
from IPython.display import clear_output, display, HTML, Javascript, SVG
#----------
#import ipywidgets as widgets
#import random
#This function produces a multiple choice form with four options
def multiple_choice(option_1, option_2, option_3, option_4):
option_list = [option_1, option_2, option_3, option_4]
answer = option_list[0]
letters = ["(A) ", "(B) ", "(C) ", "(D) "]
#Boldface letters at the beginning of each option
start_bold = "\033[1m"; end_bold = "\033[0;0m"
#Randomly shuffle the options
random.shuffle(option_list)
#Print the letters (A) to (D) in sequence with randomly chosen options
for i in range(4):
option_text = option_list.pop()
print(start_bold + letters[i] + end_bold + option_text)
#Store the correct answer
if option_text == answer:
letter_answer = letters[i]
button1 = widgets.Button(description="(A)"); button2 = widgets.Button(description="(B)")
button3 = widgets.Button(description="(C)"); button4 = widgets.Button(description="(D)")
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
container = widgets.HBox(children=[button1,button2,button3,button4])
display(container)
print(" ", end='\r')
def on_button1_clicked(b):
if "(A) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Moccasin'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Lightgray'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
def on_button2_clicked(b):
if "(B) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Moccasin'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Lightgray'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Whitesmoke'
def on_button3_clicked(b):
if "(C) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Moccasin'; button4.style.button_color = 'Whitesmoke'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Lightgray'; button4.style.button_color = 'Whitesmoke'
def on_button4_clicked(b):
if "(D) " == letter_answer:
print("Correct! ", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Moccasin'
else:
print("Try again.", end='\r')
button1.style.button_color = 'Whitesmoke'; button2.style.button_color = 'Whitesmoke'
button3.style.button_color = 'Whitesmoke'; button4.style.button_color = 'Lightgray'
button1.on_click(on_button1_clicked); button2.on_click(on_button2_clicked)
button3.on_click(on_button3_clicked); button4.on_click(on_button4_clicked)
###Output
_____no_output_____
###Markdown
Specific and Latent Heat

Introduction

**Heat** is defined as the *transfer of energy* from one object to another due to a difference in their relative temperatures. As heat flows from one object into another, the temperature of either one or both objects changes.

Specific Heat Capacity

The amount of heat required to change the temperature of a given material is given by the following equation:

$$Q = m C \Delta T$$

where $Q$ represents heat in joules (J), $m$ represents mass in kilograms (kg), and $\Delta T$ represents the change in temperature in Celsius (°C) or kelvin (K). The parameter $C$ is an experimentally determined value characteristic of a particular material. This parameter is called the **specific heat** or **specific heat capacity** (J/kg$\cdot$°C). The specific heat capacity of a material is determined by measuring the amount of heat required to raise the temperature of 1 kg of the material by 1°C. For ordinary temperatures and pressures, the value of $C$ is considered constant. Values for the specific heat capacity of common materials are shown in the table below:

Material | Specific Heat Capacity (J/kg$\cdot$°C)
--- | ---
Aluminum | 903
Brass | 376
Carbon | 710
Copper | 385
Glass | 664
Ice | 2060
Iron | 450
Lead | 130
Methanol | 2450
Silver | 235
Stainless Steel | 460
Steam | 2020
Tin | 217
Water | 4180
Zinc | 388

Use the slider below to observe the relationship between the specific heat capacity and the amount of heat required to raise the temperature of a 5 kg mass by 50°C.
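For reference, the same relationship can be checked with a few lines of plain Python (a minimal sketch using the water value from the table above):

```python
# Q = m * C * ΔT for the slider scenario: a 5 kg mass warmed by 50 °C
m = 5.0          # kg
C = 4180         # J/(kg·°C), water
delta_T = 50     # °C
Q = m * C * delta_T
print(Q, "J")    # 1045000.0 J, or 1045 kJ
```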
###Code
#import ipywidgets as widgets
#from ipywidgets import Output, VBox, HBox
mass_1 = 5
delta_temperature = 50
specific_heat_capacity = widgets.IntSlider(description="C (J/kg⋅°C)",min=100,max=1000)
#Boldface text between these strings
start_bold = "\033[1m"; end_bold = "\033[0;0m"
def f(specific_heat_capacity):
heat_J = int((mass_1 * specific_heat_capacity * delta_temperature))
heat_kJ = int(heat_J/1000)
print(start_bold + "Heat = (mass) X (specific heat capacity) X (change in temperature)" + end_bold)
print("Heat = ({} X {} X {}) J = {} J or {} kJ".format(mass_1, specific_heat_capacity, delta_temperature, heat_J, heat_kJ))
out1 = widgets.interactive_output(f,{'specific_heat_capacity': specific_heat_capacity,})
HBox([VBox([specific_heat_capacity]), out1])
###Output
_____no_output_____
###Markdown
**Question:** *As the specific heat increases, the amount of heat required to cause the temperature change:*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "Increases"
option_2 = "Decreases"
option_3 = "Remains constant"
option_4 = "Equals zero"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
(A) Remains constant
(B) Decreases
(C) Increases
(D) Equals zero
###Markdown
Example

How many kilojoules (kJ) of heat are needed to raise the temperature of a 3.0 kg piece of aluminum from 10°C to 50°C? Round the answer to 2 significant figures.
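The interactive walkthrough below reveals the solution step by step; the same arithmetic can be verified directly (a quick sketch using the aluminum value from the table):

```python
# Q = m * C * ΔT for 3.0 kg of aluminum warmed from 10 °C to 50 °C
m, C = 3.0, 903              # kg, J/(kg·°C)
delta_T = 50 - 10            # °C
Q = m * C * delta_T
print(Q, "J")                # 108360.0 J -> 110 kJ to two significant figures
```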
###Code
#import ipywidgets as widgets
#from ipywidgets import Output, VBox
#from IPython.display import clear_output, display, HTML
out2 = Output()
button_step1 = widgets.Button(description="Step One", layout=Layout(width='20%', height='100%'), button_style='primary')
count1 = 1
text1_1 = widgets.HTMLMath(value="The first step is to identify all known and unknown variables required to solve the problem. In this case, three variables are known ($m$, $C$, $\Delta T$), and one variable is unknown ($Q$):")
text1_2 = widgets.HTMLMath(value="$m$ = 3.0 kg")
text1_3 = widgets.HTMLMath(value="$\Delta T$ = 50°C $-$ 10°C = 40°C")
text1_4 = widgets.HTMLMath(value="$C$ = 903 J/kg$\cdot$°C (The specific heat capacity for aluminum may be found in the table above.)")
text1_5 = widgets.HTMLMath(value="$Q$ = ?")
def on_button_step1_clicked(b):
global count1
count1 += 1
with out2:
clear_output()
if count1 % 2 == 0:
display(text1_1, text1_2, text1_3, text1_4, text1_5)
display(VBox([button_step1, out2]))
button_step1.on_click(on_button_step1_clicked)
#import ipywidgets as widgets
#from ipywidgets import Output, VBox
#from IPython.display import clear_output, display, HTML
out3 = Output()
button_step2 = widgets.Button(description="Step Two", layout=Layout(width='20%', height='100%'), button_style='primary')
count2 = 1
text2_1 = widgets.HTMLMath(value="Substitute each known variable into the formula to solve for the unknown variable:")
text2_2 = widgets.HTMLMath(value="$Q = mC\Delta T$")
text2_3 = widgets.HTMLMath(value="$Q$ = (3.0 kg) (903 J/kg$\cdot$°C) (40°C) = 108,360 J")
text2_4 = widgets.HTMLMath(value="$Q$ = 108,360 J")
def on_button_step2_clicked(b):
global count2
count2 += 1
with out3:
clear_output()
if count2 % 2 == 0:
display(text2_1, text2_2, text2_3, text2_4)
display(VBox([button_step2, out3]))
button_step2.on_click(on_button_step2_clicked)
#import ipywidgets as widgets
#from ipywidgets import Output, VBox
#from IPython.display import clear_output, display, HTML
out4 = Output()
button_step3 = widgets.Button(description="Step Three", layout=Layout(width='20%', height='100%'), button_style='primary')
count3 = 1
text3_1 = widgets.HTMLMath(value="Round the answer to the correct number of significant figures and convert to the correct units (if needed):")
text3_2 = widgets.HTMLMath(value="$Q$ = 108,360 J = 110,000 J or 110 kJ")
text3_3 = widgets.HTMLMath(value="The amount of heat required to increase the temperature of a 3.0 kg piece of aluminum from 10°C to 50°C is 110,000 J or 110 kJ.")
def on_button_step3_clicked(b):
global count3
count3 += 1
with out4:
clear_output()
if count3 % 2 == 0:
display(text3_1, text3_2, text3_3)
display(VBox([button_step3, out4]))
button_step3.on_click(on_button_step3_clicked)
###Output
_____no_output_____
###Markdown
Practice

The heat transfer equation shown above may be rearranged to solve for each variable in the equation. These rearrangements are shown below:

$$Q = mC\Delta T \qquad m = \dfrac{Q}{C \Delta T} \qquad C = \dfrac{Q}{m \Delta T} \qquad \Delta T = \dfrac{Q}{mC}$$

Try the four different practice problems below. Each question will require the use of one or more of the formulas above. Use the *Generate New Question* button to generate additional practice problems.
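As a sanity check on the rearrangements (a small sketch; the helper names are made up), any three known quantities recover the fourth:

```python
# Each rearrangement of Q = m*C*ΔT as a small helper function
def heat(m, C, dT):       return m * C * dT
def mass(Q, C, dT):       return Q / (C * dT)
def spec_heat(Q, m, dT):  return Q / (m * dT)
def delta_T(Q, m, C):     return Q / (m * C)

Q = heat(2.0, 4180, 30)            # 250800.0 J for 2.0 kg of water warmed by 30 °C
print(mass(Q, 4180, 30))           # 2.0 -> recovers the mass we started with
```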
###Code
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
mass = round(random.uniform(25.0, 50.0), 1)
temperature_initial = round(random.uniform(15.0, 25.0), 1)
temperature_final = round(random.uniform(55.0, 65.0), 1)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "silver": 235, "stainless steal": 460, "tin": 217, "zinc": 388}
material = random.choice(list(materials.keys()))
#Print question
question = "How much heat is required to raise the temperature of a {} g sample of {} from {}°C to {}°C?".format(mass, material, temperature_initial, temperature_final)
print(question)
#Answer and option calculations
answer = (mass/1000) * materials[material] * (temperature_final - temperature_initial)
#Define range of values for random multiple choices
mini = 100
maxa = 2300
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Round options to the specified number of significant figures
def round_sf(number, significant):
return round(number, significant - len(str(number)))
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round_sf(int(answer),3)) + " J"
option_2 = str(round_sf(int(choice_list[0]),3)) + " J"
option_3 = str(round_sf(int(choice_list[1]),3)) + " J"
option_4 = str(round_sf(int(choice_list[2]),3)) + " J"
multiple_choice(option_1, option_2, option_3, option_4)
#import math
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
heat = random.randint(10, 250)
temperature_initial = round(random.uniform(10.0, 35.0), 1)
temperature_final = round(random.uniform(45.0, 100.0), 1)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "silver": 235, "stainless steal": 460, "tin": 217, "zinc": 388}
material = random.choice(list(materials.keys()))
#Print question
question = "Suppose some {} lost {} kJ of heat as it cooled from {}°C to {}°C. Find the mass. Note: you will need to make the sign of Q negative because heat is flowing out of the material as it cools.".format(material, heat, temperature_final, temperature_initial)
print(question)
#Answer calculation
answer = (-heat*1000) / (materials[material] * (temperature_initial - temperature_final))
#Define range of values for random multiple choices
mini = 100
maxa = 2000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str('{:.2f}'.format(round(answer,2))) + " kg"
option_2 = str(round(choice_list[0],2)/100) + " kg"
option_3 = str(round(choice_list[1],2)/100) + " kg"
option_4 = str(round(choice_list[2],2)/100) + " kg"
multiple_choice(option_1, option_2, option_3, option_4)
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
heat = round(random.uniform(23.00, 26.00),1)
mass = round(random.uniform(1.00, 3.00), 2)
temperature_initial = round(random.uniform(24.0, 25.0), 1)
temperature_final = round(random.uniform(35.0, 36.0), 1)
#Print question
question = "A newly made synthetic material weighing {} kg requires {} kJ to go from {}°C to {}°C (without changing state). What is the specific heat capacity of this new material?".format(mass, heat, temperature_initial, temperature_final)
print(question)
#Answer calculation
answer = (heat*1000) / (mass * (temperature_final - temperature_initial))
#Define range of values for random multiple choices
mini = 990
maxa = 2510
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Round options to the specified number of significant figures
def round_sf(number, significant):
return round(number, significant - len(str(number)))
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(round_sf(int(answer),3)) + " J/(kg°C)"
option_2 = str(round_sf(int(choice_list[0]),3)) + " J/(kg°C)"
option_3 = str(round_sf(int(choice_list[1]),3)) + " J/(kg°C)"
option_4 = str(round_sf(int(choice_list[2]),3)) + " J/(kg°C)"
multiple_choice(option_1, option_2, option_3, option_4)
#import math
#from IPython.display import Javascript, display
#from ipywidgets import widgets
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "silver": 235, "stainless steal": 460, "tin": 217, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize Variables
heat = random.randint(100, 150)
mass = round(random.uniform(1.0, 5.0), 1)
temperature_initial = round(random.uniform(10.0, 30.0), 1)
temperature_final = round(random.uniform(40.0, 60.0), 1)
#Determine question type
question_type = random.randint(1,3)
if question_type == 1:
#Type 1: Finding change in temperature
question = "If {} kg of {} receives {} kJ of heat, determine its change in temperature to one decimal place.".format(mass, material, heat)
print(question)
answer = (heat*1000) / (materials[material] * mass)
elif question_type == 2:
#Type 2: Finding final temperature
question = "If {} kg of {} receives {} kJ of heat, and if the {}'s initial temperature is {}°C, determine its final temperature to one decimal place. Hint: ΔT = final temperature - initial temperature.".format(mass, material, heat, material, temperature_initial)
print(question)
answer = ((heat*1000) / (materials[material] * mass)) + temperature_initial
elif question_type == 3:
#Type 3: Finding initial temperature
question = "If {} kg of {} receives {} kJ of heat, and if the {}'s final temperature is {}°C, determine its initial temperature to one decimal place. Hint: ΔT = final temperature - initial temperature.".format(mass, material, heat, material, temperature_final)
print(question)
answer = temperature_final - ((heat*1000) / (materials[material] * mass))
#Define range of values for random multiple choices
mini = int(answer*100 - 1000)
maxa = int(answer*100 + 1000)
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(answer)) >= 1:
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str((round(answer,1))) + " °C"
option_2 = str(round(choice_list[0]/100,1)) + " °C"
option_3 = str(round(choice_list[1]/100,1)) + " °C"
option_4 = str(round(choice_list[2]/100,1)) + " °C"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
Change of Phase

In the previous examples and exercises, the material remained in a constant state while heat was added or taken away. However, the addition or subtraction of heat is often accompanied by a **phase change**. The three most common phases are solid, liquid, and gas.

**Problem:** *Determine the amount of heat required to raise the temperature of a 100 g block of ice from -20°C to steam at 200°C.*

**Attempt:** There are two phase changes in this problem: (1) the melting of ice into water, and (2) the boiling of water into steam. To determine $Q$, let's utilize the heat formula:

$$Q=mC\Delta T$$

To solve this problem, we can split it up into steps that are simple to calculate. For example, we can start by calculating the heat required to warm ice from -20°C to 0°C. Then, we can calculate the heat required to warm water from 0°C to 100°C. Finally, we can calculate the heat required to warm steam from 100°C to 200°C:

$Q_{ice}$ = (0.100 kg) (2060 J/kg$\cdot$°C) (0°C - (-20°C)) = 4120 J

$Q_{water}$ = (0.100 kg) (4180 J/kg$\cdot$°C) (100°C - 0°C) = 41800 J

$Q_{steam}$ = (0.100 kg) (2020 J/kg$\cdot$°C) (200°C - 100°C) = 20200 J

Then, by adding up the heat calculated in each step, the original problem can be solved: $Q$ = (4120 + 41800 + 20200) J = 66120 J, or 66.1 kJ.

Experiment

Let's conduct an experiment to check the above calculation. We will start with a 100 g sample of ice at -20°C, and then add a constant amount of heat until the entire sample is converted to steam at 200°C. Every minute, we will take the temperature of the sample.

The data from this experiment is shown in the interactive graphs below. The temperature of the material versus time is shown on the left. The heat added to the material versus time is shown on the right.
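The three warming steps in the attempt can be reproduced in a couple of lines (a quick sketch for checking the arithmetic):

```python
# Three-step attempt (no phase changes included yet) for 100 g of ice -> steam
m = 0.100                                  # kg
Q_ice   = m * 2060 * (0 - (-20))           # warm ice  -20 °C -> 0 °C:    4120 J
Q_water = m * 4180 * (100 - 0)             # warm water  0 °C -> 100 °C: 41800 J
Q_steam = m * 2020 * (200 - 100)           # warm steam 100 °C -> 200 °C: 20200 J
print(round(Q_ice + Q_water + Q_steam))    # 66120 J, or about 66.1 kJ
```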
###Code
#import ipywidgets as widgets
#import matplotlib.pyplot as plt
#from ipywidgets import HBox, Output, VBox
#from IPython.display import clear_output
out5 = Output()
play = widgets.Play(interval=500, value=0, min=0, max=25, step=1, description="Press play", disabled=False)
time_slider = widgets.IntSlider(description='Time (min)', value=0, min=0, max=25, continuous_update = False)
widgets.jslink((play, 'value'), (time_slider, 'value'))
#Make lists of x and y values
x_values = list(range(26))
y_values = [-20, -10, 0, 0, 10, 40, 80, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 120, 140, 160, 180, 200]
heat_y = []
increment = 0
for i in range(26):
heat_y.append(increment)
increment += 13.021
#Plot graphs
def plot_graphs(change):
x = change['new']
with out5:
clear_output(wait=True)
temp_x_values = []
temp_y_values = []
graph2y = []
for i in range(x+1):
temp_x_values.append(x_values[i])
temp_y_values.append(y_values[i])
graph2y.append(heat_y[i])
plt.figure(figsize=(15,5))
plt.style.use('seaborn')
plt.rcParams["axes.edgecolor"] = "black"
plt.rcParams["axes.linewidth"] = 0.5
plt.subplot(1,2,1)
plt.ylim(-30, 210)
plt.xlim(-0.5,26)
plt.scatter(temp_x_values, temp_y_values)
plt.ylabel('Temperature (°C)')
plt.xlabel('Time (min)')
plt.subplot(1,2,2)
plt.ylim(-25, 350)
plt.xlim(-2,26)
plt.scatter(temp_x_values, graph2y, color='red')
plt.ylabel('Heat (kJ)')
plt.xlabel('Time (min)')
plt.show()
#Get slider value
time_slider.observe(plot_graphs, 'value')
plot_graphs({'new': time_slider.value})
#Display widget
display(HBox([play, time_slider]))
display(out5)
###Output
_____no_output_____
###Markdown
**Question**: *Examine the graph on the left. It shows the temperature of the material at each minute. At what temperature(s) does the temperature remain constant for some time?*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "0°C and 100°C. We have horizontal lines at those temperatures."
option_2 = "-20°C, 0°C, 100°C, and 200°C."
option_3 = "100°C."
option_4 = "The temperature is never constant."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
(A) The temperature is never constant.
(B) 0°C and 100°C. We have horizontal lines at those temperatures.
(C) 100°C.
(D) -20°C, 0°C, 100°C, and 200°C.
###Markdown
**Question:** *Examine the graph on the right. It shows how much heat was required to turn a block of ice at -20°C into steam at 200°C. Does this agree with the value we arrived at from our above calculation (66.1 kJ)?*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "Based on the graph, the amount of heat required is around 325 kJ. It does not agree with our calculation."
option_2 = "Based on the graph, the amount of heat required is close enough to our calculation; hence, it does agree."
option_3 = "Both values match perfectly."
option_4 = "The values are close and it is impossible to say if they match perfectly or not."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
(A) The values are close and it is impossible to say if they match perfectly or not.
(B) Based on the graph, the amount of heat required is close enough to our calculation; hence, it does agree.
(C) Based on the graph, the amount of heat required is around 325 kJ. It does not agree with our calculation.
(D) Both values match perfectly.
###Markdown
**Question**: *Examine the graph on the right. Observe that the slope of the line is constant. What does this imply?*
###Code
#import ipywidgets as widgets
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = "The amount of heat added to the system is constant for the entire 25 min period."
option_2 = "The amount of heat added to the system is not constant, the rate increases throughout the 25 min period."
option_3 = "No heat is added at the start, but around 325 kJ of heat is added at the very end."
option_4 = "As time increases, the amount of heat required decreases."
multiple_choice(option_1, option_2, option_3, option_4)
###Output
(A) No heat is added at the start, but around 325 kJ of heat is added at the very end.
(B) As time increases, the amount of heat required decreases.
(C) The amount of heat added to the system is not constant, the rate increases throughout the 25 min period.
(D) The amount of heat added to the system is constant for the entire 25 min period.
###Markdown
Experimental Results

Our experimental data indicates that our calculation of 66.1 kJ is incorrect and that it in fact takes around 325 kJ to heat ice from -20°C to steam at 200°C. *So what did we miss?*

**Answer:** The *phase changes*.

The graph on the right shows us that the rate at which heat was added to the system over the 25 minute period was constant, yet the temperature remained constant at two points for some time (0°C and 100°C). How is this possible? That is, *how can we add heat to a material while its temperature remains constant?*

**Answer:** Every material has two common "critical temperature points". These are the points at which the *state* of the material *changes*. For water, these points are at 0°C and 100°C. If heat is coming into a material *during a phase change*, then this energy is used to overcome the intermolecular forces between the molecules of the material.

Let's consider when ice melts into water at 0°C. Immediately after the molecular bonds in the ice are broken, the molecules are moving (vibrating) at the same average speed as before, and so their average kinetic energy remains the same. *Temperature* is precisely a measure of the average kinetic energy of the particles in a material. Hence, during a phase change, the temperature remains constant.

Latent Heat of Fusion and Vaporization

The **latent heat of fusion ($H_f$)** is the quantity of heat needed to melt 1 kg of a solid to a liquid without a change in temperature.

The **latent heat of vaporization ($H_v$)** is the quantity of heat needed to vaporize 1 kg of a liquid to a gas without a change in temperature.

The latent heats of fusion and vaporization are empirical characteristics of a particular material. As such, they must be experimentally determined. Values for the latent heats of fusion and vaporization of common materials are shown in the table below:

Materials | Heat of Fusion (J/kg) | Heat of Vaporization (J/kg)
--- | --- | ---
Copper | $2.05 \times 10^5$ | $5.07 \times 10^6$
Gold | $6.03 \times 10^4$ | $1.64 \times 10^6$
Iron | $2.66 \times 10^5$ | $6.29 \times 10^6$
Lead | $2.04 \times 10^4$ | $8.64 \times 10^5$
Mercury | $1.15 \times 10^4$ | $2.72 \times 10^5$
Methanol | $1.09 \times 10^5$ | $8.78 \times 10^5$
Silver | $1.04 \times 10^4$ | $2.36 \times 10^6$
Water (ice) | $3.34 \times 10^5$ | $2.26 \times 10^6$

The following formulae are used to calculate the amount of heat needed to change a material from a solid to a liquid (fusion), or from a liquid to a gas (vaporization):

$$Q_f = mH_f \qquad Q_v = mH_v$$

Example (revisited)

Recall our previous problem:

**Problem:** *Determine the amount of heat required to raise the temperature of a 100 g block of ice from -20°C to steam at 200°C.*

**Solution:** Previously, we split the problem into three steps. It turns out that those steps correctly calculated the heat required to warm ice from -20°C to 0°C, water from 0°C to 100°C, and steam from 100°C to 200°C. What was absent was the latent heat required to complete the phase changes at 0°C and 100°C. Therefore, we need to **add two more steps**, which incorporate the above formulae. For completion, the previous steps are restated and the entire calculation is shown in **five steps** below (plus a final step to sum up the heats calculated in the previous steps):
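Before stepping through the interactive solution, the full five-step total can be checked directly (a quick sketch using the specific and latent heat values from the tables above):

```python
# Five steps for 100 g of ice at -20 °C becoming steam at 200 °C
m = 0.100                        # kg
Q1 = m * 2060 * 20               # warm ice  -20 °C -> 0 °C:      4120 J
Q2 = m * 334000                  # melt ice at 0 °C:             33400 J
Q3 = m * 4180 * 100              # warm water  0 °C -> 100 °C:   41800 J
Q4 = m * 2260000                 # boil water at 100 °C:        226000 J
Q5 = m * 2020 * 100              # warm steam 100 °C -> 200 °C:  20200 J
print(round(Q1 + Q2 + Q3 + Q4 + Q5))   # 325520 J, roughly 326 kJ
```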
###Code
#import ipywidgets as widgets
#from ipywidgets import Output, VBox, HBox
#from IPython.display import clear_output, SVG, HTML, display
out6 = Output()
frame_1 = 1
#Toggle images
def show_steps_1():
global frame_1
I11 = widgets.HTMLMath(value="Step 1: Calculate the heat required to change ice from -20°C to 0°C. Since the temperature changes, we use $Q = mCΔT$.")
Q11 = widgets.HTMLMath(value="$Q_{1}$ = (0.1 kg) (2060 J/kg°C) (0°C - (-20°C)) = 4120 J")
I12 = widgets.HTMLMath(value="Step 2: Calculate the heat required to change ice at 0°C to water at 0°C. Since the temperature does not change as we are at the melting point of water, we use $Q = mH_{f}$.")
Q12 = widgets.HTMLMath(value="$Q_{2}$ = (0.1 kg) (334000 J/kg) = 33400 J")
I13 = widgets.HTMLMath(value="Step 3: Calculate the heat required to change water from 0°C to 100°C. Since the temperature changes, we use $Q = mCΔT$.")
Q13 = widgets.HTMLMath(value="$Q_{3}$ = (0.1 kg) (4180 J/kg°C) (100°C - 0°C) = 41800 J")
I14 = widgets.HTMLMath(value="Step 4: Calculate the heat required to change water at 100°C to steam at 100°C. Since the temperature does not change at we are at the boiling point of water, we use $Q = mH_{v}$.")
Q14 = widgets.HTMLMath(value="$Q_{4}$ = (0.1 kg) (2260000 J/kg) = 226000 J")
I15 = widgets.HTMLMath(value="Step 5: Calculate the heat required to change steam from 100°C to 200°C. Since the temperature changes, we use $Q = mCΔT$.")
Q15 = widgets.HTMLMath(value="$Q_{5}$ = (0.1 kg) (2020 J/kg°C) (200°C - 100°C) = 20200 J")
I16 = widgets.HTMLMath(value="Summary: Calculate total heat by adding up the values calculated in the previous steps. $Q$ = $Q_1$ + $Q_2$ + $Q_3$ + $Q_4$ + $Q_5$")
Q16 = widgets.HTMLMath(value="$Q$ = (4120 + 33400 + 41800 + 226000 + 20200) J = 325520 J or 326 kJ")
if frame_1 == 0:
display(SVG("Images/phase_diagram_1_0.svg"))
frame_1 = 1
elif frame_1 == 1:
display(SVG("Images/phase_diagram_1_1.svg"))
display(I11, Q11)
frame_1 = 2
elif frame_1 == 2:
display(SVG("Images/phase_diagram_1_2.svg"))
display(I11, Q11, I12, Q12)
frame_1 = 3
elif frame_1 == 3:
display(SVG("Images/phase_diagram_1_3.svg"))
display(I11, Q11, I12, Q12, I13, Q13)
frame_1 = 4
elif frame_1 == 4:
display(SVG("Images/phase_diagram_1_4.svg"))
display(I11, Q11, I12, Q12, I13, Q13, I14, Q14)
frame_1 = 5
elif frame_1 == 5:
display(SVG("Images/phase_diagram_1_5.svg"))
display(I11, Q11, I12, Q12, I13, Q13, I14, Q14, I15, Q15)
frame_1 = 6
elif frame_1 == 6:
display(SVG("Images/phase_diagram_1_6.svg"))
display(I11, Q11, I12, Q12, I13, Q13, I14, Q14, I15, Q15, I16, Q16)
frame_1 = 0
button_phase_diagram_1 = widgets.Button(description="Show Next Step", button_style = 'primary')
display(button_phase_diagram_1)
def on_submit_button_phase_diagram_1_clicked(b):
with out6:
clear_output(wait=True)
show_steps_1()
with out6:
display(SVG("Images/phase_diagram_1_0.svg"))
button_phase_diagram_1.on_click(on_submit_button_phase_diagram_1_clicked)
display(out6)
###Output
_____no_output_____
###Markdown
**Note:** The *state* of a material can include more than one *phase*. For example, at 0°C, the state of water includes both solid (ice) and liquid (water) phases. At 100°C, the state of water includes both liquid (water) and gas (steam) phases.

It is common to cool down a material (as opposed to heating it up). In this scenario, heat must be taken away. By convention, a negative $Q$ is used to represent heat being taken away from a material (cooling), while a positive $Q$ is used to represent heat being added to a material (warming). Be aware of the sign of $Q$ as it indicates the direction the heat is flowing. For $Q=mH_f$ and $Q=mH_v$, you must be aware of whether heat is being added to or taken away from the material. If heat is being taken away, then a negative sign must be placed in front of $H_f$ and $H_v$.

It is not necessary for each problem to be five steps. A problem could have one to five steps depending on the situation. Let's do another example together. An interactive graph is provided to help determine the number of steps required.

Example

How much heat must be removed to change 10.0 g of steam at 120.0°C to water at 50°C? Round to two significant figures.
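This example needs three steps (cool the steam, condense it, then cool the water); a short sketch of the arithmetic, using the table values above:

```python
# Three cooling steps for 10.0 g of steam at 120 °C ending as water at 50 °C
m = 0.010                          # kg
Q1 = m * 2020 * (100 - 120)        # cool steam to 100 °C:    -404 J
Q2 = -m * 2260000                  # condense at 100 °C:    -22600 J
Q3 = m * 4180 * (50 - 100)         # cool water to 50 °C:    -2090 J
print(round(Q1 + Q2 + Q3))         # -25094 J, about -25 kJ
```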
###Code
#import ipywidgets as widgets
#import matplotlib.pyplot as plt
#from ipywidgets import HBox, Output, VBox
#from IPython.display import clear_output
out7 = Output()
play2 = widgets.Play(interval=500, value=0, min=0, max=25, step=1, description="Press play", disabled=False)
time_slider2 = widgets.IntSlider(description='Time', value=0, min=0, max=20, continuous_update = False)
widgets.jslink((play2, 'value'), (time_slider2, 'value'))
#Make lists of x and y values
x_values2 = list(range(21))
y_values2 = [120, 110, 100, 100, 100, 100, 100, 100, 100, 100, 100, 95, 90, 85, 80, 75, 70, 65, 60, 55, 50]
heat_y2 = []
increment2 = 0
for i in range(26):
heat_y2.append(increment2)
increment2 += 13021
#Plot graph
def time_temp(change):
x = change['new']
with out7:
clear_output(wait=True)
temp_x_values2 = []
temp_y_values2 = []
graph2y2 = []
for i in range(x+1):
temp_x_values2.append(x_values2[i])
temp_y_values2.append(y_values2[i])
graph2y2.append(heat_y2[i])
plt.figure(figsize=(7,5))
plt.style.use('seaborn')
plt.rcParams["axes.edgecolor"] = "black"
plt.rcParams["axes.linewidth"] = 0.5
plt.ylim(0, 150)
plt.xlim(-0.5,26)
plt.xticks([])
plt.scatter(temp_x_values2, temp_y_values2)
plt.ylabel('Temperature (°C)')
plt.xlabel('Time')
plt.figtext(0.5, 0.01, "This graph consists of three line-segments. This indicates that we require three steps.", wrap=True, horizontalalignment='center', fontsize=12)
plt.show()
#Get slider value
time_temp({'new': time_slider2.value})
time_slider2.observe(time_temp, 'value')
#Display widget
display(HBox([play2, time_slider2]))
display(out7)
#import ipywidgets as widgets
#from IPython.display import clear_output, SVG
out8 = widgets.Output()
frame_2 = 1
#Toggle images
def show_steps_2():
global frame_2
I21 = widgets.HTMLMath(value="Step 1: Calculate the heat loss required to change steam from 120°C to 100°C. Since there is no phase change taking place, we use $Q = mCΔT$.")
Q21 = widgets.HTMLMath(value="$Q_{1}$ = (0.01 kg) (2020 J/kg°C) (100°C - 120°C) = -404 J")
I22 = widgets.HTMLMath(value="Step 2: Calculate the heat loss required to change steam at 100°C to water at 100°C. Since a phase change is taking place (condensation), we use $Q = -mH_{v}$.")
Q22 = widgets.HTMLMath(value="$Q_{2}$ = - (0.01 kg) (2260000 J/kg) = -22600 J")
I23 = widgets.HTMLMath(value="Step 3: Calculate the heat loss required to change water from 100°C to 50°C. Since there is no phase change taking place, we use $Q = mCΔT$.")
Q23 = widgets.HTMLMath(value="$Q_{3}$ = (0.01 kg) (4180 J/kg°C) (50°C - 100°C) = -2090 J")
I24 = widgets.HTMLMath(value="Summary: Calculate the total heat loss by adding up the values calculated in the previous steps. $Q$ = $Q_1$ + $Q_2$ + $Q_3$")
Q24 = widgets.HTMLMath(value="$Q$ = (-404 + -22600 + -2090) J = -25000 J or -25 kJ")
if frame_2 == 0:
display(SVG("Images/phase_diagram_2_0.svg"))
frame_2 = 1
elif frame_2 == 1:
display(SVG("Images/phase_diagram_2_1.svg"))
display(I21, Q21)
frame_2 = 2
elif frame_2 == 2:
display(SVG("Images/phase_diagram_2_2.svg"))
display(I21, Q21, I22, Q22)
frame_2 = 3
elif frame_2 == 3:
display(SVG("Images/phase_diagram_2_3.svg"))
display(I21, Q21, I22, Q22, I23, Q23)
frame_2 = 4
elif frame_2 == 4:
display(SVG("Images/phase_diagram_2_4.svg"))
display(I21, Q21, I22, Q22, I23, Q23, I24, Q24)
frame_2 = 0
button_phase_diagram_2 = widgets.Button(description="Show Next Step", button_style = 'primary')
display(button_phase_diagram_2)
def on_submit_button_phase_diagram_2_clicked(b):
with out8:
clear_output(wait=True)
show_steps_2()
with out8:
display(SVG("Images/phase_diagram_2_0.svg"))
button_phase_diagram_2.on_click(on_submit_button_phase_diagram_2_clicked)
display(out8)
###Output
_____no_output_____
###Markdown
Practice

There are many variations possible with specific heat and latent heat questions. Use the *Generate New Question* button to generate additional practice problems. These practice problems will vary from one to five steps.

**One Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless Steal": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety1 = random.randint(1,5)
if variety1 == 1:
#Makes certain that initial and final temps are different
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(-50.0, 0.0), 1)
temperature_final = round(random.uniform(-50.0, 0.0), 1)
question = "How much heat is needed for a {} g block of ice at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2060 * (temperature_final - temperature_initial)
elif variety1 == 2:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(0.0, 100.0), 1)
temperature_final = round(random.uniform(0.0, 100.0), 1)
question = "How much heat is needed for {} g of water at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 4180 * (temperature_final - temperature_initial)
elif variety1 == 3:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed for {} g of steam at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2020 * (temperature_final - temperature_initial)
elif variety1 == 4:
temperature_initial = 0
temperature_final = 0
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 334000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -334000
elif variety1 == 5:
temperature_initial = 100
temperature_final = 100
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2260000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -2260000
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(round(answer/1000))) >= 1: # compare in kJ to match the displayed choices
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Two Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless steel": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety2 = random.randint(1,4)
if variety2 == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = 0
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif variety2 == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif variety2 == 3:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
elif variety2 == 4:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 100
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(round(answer/1000))) >= 1: # compare in kJ to match the displayed choices
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Three Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless steel": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety3 = random.randint(1,2)
if variety3 == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000)
elif variety3 == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(round(answer/1000))) >= 1: # compare in kJ to match the displayed choices
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Four Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless steel": 460, "tin": 217, "water": 4180, "zinc": 388}
material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
variety4 = random.randint(1,2)
if variety4 == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
elif variety4 == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100-temperature_initial)) + ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(round(answer/1000))) >= 1: # compare in kJ to match the displayed choices
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Five Step Problem**
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Dictionary of materials
materials = {"aluminum": 903, "brass": 376, "carbon": 710, "copper": 385, "glass": 664, "iron": 450, "lead": 130, "silver": 235, "stainless steel": 460, "tin": 217, "water": 4180, "zinc": 388}
chosen_material = random.choice(list(materials.keys()))
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(100 - 0)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000)*4180*(0 - 100)) + ((mass/1000)*2060*(temperature_final - 0)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(round(answer/1000))) >= 1: # compare in kJ to match the displayed choices
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
multiple_choice(option_1, option_2, option_3, option_4)
###Output
_____no_output_____
###Markdown
**Mixed Step Problems**In the dropdown menus below, select how many steps are required and select the correct amount of heat required for each question.**Hint:** Have some scrap-paper nearby for the calculations and be sure to sketch a diagram of each scenario to determine how many steps are required.
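To check how many steps a given scenario needs, a rough helper like the one below can be used; it is independent of the widget code that follows, and the phase names are just labels:

```python
# Counts the steps for a warming or cooling path of water: one latent-heat step per
# phase change plus one Q = mcΔT step per non-zero temperature change within a phase.
PHASES = ['ice', 'water', 'steam']
BOUNDARY = {('ice', 'water'): 0.0, ('water', 'ice'): 0.0,
            ('water', 'steam'): 100.0, ('steam', 'water'): 100.0}

def count_steps(phase_i, T_i, phase_f, T_f):
    i, f = PHASES.index(phase_i), PHASES.index(phase_f)
    step = 1 if f >= i else -1
    path = [PHASES[k] for k in range(i, f + step, step)]
    n_steps = len(path) - 1                        # one latent-heat step per phase change
    temps = [T_i]
    for a, b in zip(path, path[1:]):
        temps.append(BOUNDARY[(a, b)])             # phase changes happen at 0 °C or 100 °C
    temps.append(T_f)
    n_steps += sum(1 for t0, t1 in zip(temps, temps[1:]) if t0 != t1)
    return n_steps

count_steps('ice', -10.0, 'steam', 120.0)          # -> 5
count_steps('water', 0.0, 'ice', -5.0)             # -> 2
```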
###Code
#import math
#import random
#from IPython.display import Javascript, display
#from ipywidgets import widgets, Layout
def generate_new_question(ev):
display(Javascript('IPython.notebook.execute_cell()'))
button_generate_question = widgets.Button(description="Generate New Question", layout=Layout(width='20%', height='100%'), button_style='success')
button_generate_question.on_click(generate_new_question)
display(button_generate_question)
#Randomize variables
mass = round(random.uniform(100.0, 1000.0), 1)
temperature_initial, temperature_final = 0,0
#Determine question type
question_type = random.randint(1,5)
if question_type == 1:
#Type 1: One Step
steps = "One Step"
type1_variety = random.randint(1,5)
if type1_variety == 1:
#Makes certain that initial and final temps are different
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(-50.0, 0.0), 1)
temperature_final = round(random.uniform(-50.0, 0.0), 1)
question = "How much heat is needed for a {} g block of ice at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2060 * (temperature_final - temperature_initial)
elif type1_variety == 2:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(0.0, 100.0), 1)
temperature_final = round(random.uniform(0.0, 100.0), 1)
question = "How much heat is needed for {} g of water at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 4180 * (temperature_final - temperature_initial)
elif type1_variety == 3:
while temperature_initial == temperature_final:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed for {} g of steam at {}°C to change temperature to {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2020 * (temperature_final - temperature_initial)
elif type1_variety == 4:
temperature_initial = 0
temperature_final = 0
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 334000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -334000
elif type1_variety == 5:
temperature_initial = 100
temperature_final = 100
direction_variety = random.randint(1,2)
if direction_variety == 1:
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * 2260000
elif direction_variety == 2:
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = (mass/1000) * -2260000
elif question_type == 2:
#Type 2: Two Steps
steps = "Two Steps"
type2_variety = random.randint(1,4)
if type2_variety == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = 0
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif type2_variety == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -334000)
elif type2_variety == 3:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
elif type2_variety == 4:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 100
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - temperature_initial)) + ((mass/1000) * -2260000)
elif question_type == 3:
#Type 3: Three Steps
steps = "Three Steps"
type3_variety = random.randint(1,2)
if type3_variety == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000)
elif type3_variety == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(1.0, 99.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of water at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(1.0, 99.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to water at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000) * -2260000)
elif question_type == 4:
#Type 4: Four Steps
steps = "Four Steps"
type4_variety = random.randint(1,2)
if type4_variety == 1:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = 100
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(temperature_final - 0)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = 100
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(temperature_final-0)) + ((mass/1000)*4180*(0 - temperature_initial)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
elif type4_variety == 2:
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = 0
temperature_final = round(random.uniform(100.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000)*4180*(100 - temperature_initial)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(100.0, 150.0), 1)
temperature_final = 0
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100-temperature_initial)) + ((mass/1000)*4180*(temperature_final-100)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
elif question_type == 5:
#Type 5: Five Steps
steps = "Five Steps"
direction_variety = random.randint(1,2)
if direction_variety == 1:
temperature_initial = round(random.uniform(-50.0, -1.0), 1)
temperature_final = round(random.uniform(101.0, 150.0), 1)
question = "How much heat is needed to change {} g of ice at {}°C to steam at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2060*(0 - temperature_initial)) + ((mass/1000)*4180*(100 - 0)) + ((mass/1000)*2020*(temperature_final - 100)) + ((mass/1000) * 334000) + ((mass/1000) * 2260000)
elif direction_variety == 2:
temperature_initial = round(random.uniform(101.0, 150.0), 1)
temperature_final = round(random.uniform(-50.0, -1.0), 1)
question = "How much heat is needed to change {} g of steam at {}°C to ice at {}°C?".format(mass, temperature_initial, temperature_final)
print(question)
answer = ((mass/1000)*2020*(100 - temperature_initial)) + ((mass/1000)*4180*(0 - 100)) + ((mass/1000)*2060*(temperature_final - 0)) + ((mass/1000) * -334000) + ((mass/1000) * -2260000)
#Define range of values for random multiple choices
mini = -1000
maxa = 1000
#Create three choices that are unique (and not equal to the answer)
choice_list = random.sample(range(mini,maxa),3)
while choice_list.count(int(round(answer/1000))) >= 1: # compare in kJ to match the displayed choices
choice_list = random.sample(range(mini,maxa),3)
#Assign each multiple choice to these four variables
#Option_1 contains the answer
option_1 = str(int(round(answer/1000))) + " kJ"
option_2 = str(round(choice_list[0],1)) + " kJ"
option_3 = str(round(choice_list[1],1)) + " kJ"
option_4 = str(round(-1*choice_list[2],1)) + " kJ"
option_list = [option_1, option_2, option_3, option_4]
correct_answer = option_list[0]
#Randomly shuffle the options
random.shuffle(option_list)
#Create dropdown menus
dropdown1_1 = widgets.Dropdown(options={' ':0,'One Step': 1, 'Two Steps': 2, 'Three Steps': 3, 'Four Steps': 4, 'Five Steps': 5}, value=0, description='Steps',)
dropdown1_2 = widgets.Dropdown(options={' ':0,option_list[0]: 1, option_list[1]: 2, option_list[2]: 3, option_list[3]: 4}, value=0, description='Answer',)
#Display menus as 1x2 table
container1_1 = widgets.HBox(children=[dropdown1_1, dropdown1_2])
display(container1_1), print(" ", end='\r')
#Evaluate input
def check_answer_dropdown(b):
answer1_1 = dropdown1_1.label
answer1_2 = dropdown1_2.label
if answer1_1 == steps and answer1_2 == correct_answer:
print("Correct! ", end='\r')
elif answer1_1 != ' ' and answer1_2 != ' ':
print("Try again.", end='\r')
else:
print(" ", end='\r')
dropdown1_1.observe(check_answer_dropdown, names='value')
dropdown1_2.observe(check_answer_dropdown, names='value')
###Output
_____no_output_____
###Markdown
Conclusions* The **specific heat capacity** of a material is an empirically determined value characteristic of a particular material. It is defined as the amount of heat needed to raise the temperature of 1 kg of the material by 1°C.* We use the formula $Q=mc\Delta T$ to calculate the amount of heat required to change the temperature of a material in which there is no change of phase.* The **latent heat of fusion** ($H_f$) is defined as the amount of heat needed to melt 1 kg of a solid without a change in temperature.* The **latent heat of vaporization** ($H_v$) is defined as the amount of heat needed to vaporize 1 kg of a liquid without a change in temperature.* We use the formula $Q=mH_f$ to calculate the heat required to change a material from a solid to a liquid, or from a liquid to a solid.* We use the formula $Q=mH_v$ to calculate the heat required to change a material from a liquid to a gas, or from a gas to a liquid.* If heat is being taken away, then a negative sign must be placed in front of $H_f$ and $H_v$.* We use a combination of the above formulae to compute the heat required to change a material from an initial temperature to a final temperature when one (or more) phase changes occur across a range of temperatures.Images in this notebook represent original artwork.
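As a final summary, for the five-step case (ice below 0°C warmed all the way to steam above 100°C) the individual formulas above combine into a single expression:

$$Q_{total} = mc_{ice}(0 - T_i) + mH_f + mc_{water}(100 - 0) + mH_v + mc_{steam}(T_f - 100)$$

where $T_i$ is the initial temperature of the ice and $T_f$ is the final temperature of the steam. For the reverse (cooling) path, every term, including the $H_f$ and $H_v$ terms, carries a negative sign.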
###Code
%%html
<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
###Output
_____no_output_____ |
2017/tutorials/Tutorial2-Exercises.ipynb | ###Markdown
CS375 - Tutorial 2 Welcome to tutorial 2! This tutorial will introduce you to unsupervised learning methods, specifically how to train a sparse autoencoder and variational autoencoder on MNIST, and how to evaluate the trained model on neural data. As before, everything will be implemented using TFUtils. We will start with a sparse autoencoder and then move on to variational autoencoders. 1.) Training and evaluating a sparse autoencoder on MNIST 1.1.) Define a simple sparse autoencoder consisting of one fully connected layer in the encoder and one fully connected layer in the decoder. The input dimension is 784. Use xavier initialization and l2-regularization on the weights, initialize all biases to 0, and use a tanh activation function. Regularize the hidden layer activations with an l1-regularization:
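A minimal sketch of one way these two layers could be written, assuming TensorFlow 1.x (tf.layers / tf.contrib available); it is not the intended solution, and the layer names are arbitrary:

```python
# One possible encoder/decoder pair for the exercise above (a sketch, not the solution).
import tensorflow as tf

def sparse_autoencoder_sketch(inp, beta=5e-4, n_hidden=100):
    """inp: [batch, 784] flattened MNIST images."""
    xavier = tf.contrib.layers.xavier_initializer()
    l2_reg = tf.contrib.layers.l2_regularizer(scale=beta)
    # encoder: one fully connected layer, tanh activation, biases initialized to 0
    hidden = tf.layers.dense(inp, n_hidden, activation=tf.nn.tanh,
                             kernel_initializer=xavier, kernel_regularizer=l2_reg,
                             bias_initializer=tf.zeros_initializer(), name='encoder')
    # l1 penalty on the hidden activations encourages sparsity
    tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES,
                         beta * tf.reduce_sum(tf.abs(hidden)))
    # decoder: one fully connected layer back to 784 pixel values
    output = tf.layers.dense(hidden, 784, activation=tf.nn.tanh,
                             kernel_initializer=xavier, kernel_regularizer=l2_reg,
                             bias_initializer=tf.zeros_initializer(), name='decoder')
    return output
```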
###Code
%matplotlib inline
from __future__ import division
from tfutils import base, data, optimizer, utils
import numpy as np
import tensorflow as tf
import os
import pymongo as pm
import matplotlib.pyplot as plt
import scipy.signal as signal
# connect to database
dbname = 'mnist'
collname = 'autoencoder'
port = 24444
conn = pm.MongoClient(port = port)
coll = conn[dbname][collname + '.files']
def sparse_autoencoder(inputs, train=True, beta = 5e-4, n_hidden = 100, **kwargs):
'''
Implements a simple autoencoder consisting of two fully connected layers
'''
# flatten the input images
inp = tf.reshape(inputs['images'], [inputs['images'].get_shape().as_list()[0], -1])
### YOUR CODE HERE
return output, {}
###Output
_____no_output_____
###Markdown
1.2.) Define the l2 loss function for the sparse autoencoder:
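One possible reading of this loss, as a sketch: the squared reconstruction error plus whatever regularization terms the model function registered in TensorFlow's regularization collection:

```python
# A sketch of the loss body (not the official solution), TensorFlow 1.x assumed.
import tensorflow as tf

def sparse_autoencoder_loss_sketch(inputs, outputs):
    """inputs, outputs: [batch, 784] tensors, as in the skeleton below."""
    recon_loss = tf.nn.l2_loss(inputs - outputs)          # 0.5 * sum of squared errors
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    return recon_loss + (tf.add_n(reg_losses) if reg_losses else 0.0)
```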
###Code
def sparse_autoencoder_loss(inputs, outputs, **kwargs):
'''
Defines the loss = l2(inputs - outputs) + regularization terms (l1 on activations, l2 on weights)
'''
# flatten the input images
inputs = tf.reshape(inputs, [inputs.get_shape().as_list()[0], -1])
### YOUR CODE HERE
return loss
def sparse_autoencoder_validation(inputs, outputs, **kwargs):
'''
Wrapper for using the loss function as a validation target
'''
return {'l2_loss': sparse_autoencoder_loss(inputs['images'], outputs),
'pred': outputs,
'gt': inputs['images']}
###Output
_____no_output_____
###Markdown
Now let's define and run our sparse autoencoder experiment on MNIST in TFUtils. We will use the Adam optimizer and an exponentially decaying learning rate:
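The learning_rate_params below describe an exponential decay schedule; assuming TFUtils maps them onto tf.train.exponential_decay (an assumption about its internals), they amount to:

```python
# Roughly what the learning_rate_params below translate to (an assumption about TFUtils).
import tensorflow as tf

global_step = tf.train.get_or_create_global_step()
lr = tf.train.exponential_decay(5e-3, global_step,
                                decay_steps=2000, decay_rate=0.95, staircase=True)
# with staircase=True this is 5e-3 * 0.95 ** (global_step // 2000)
```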
###Code
def online_agg_mean(agg_res, res, step):
"""
Appends the mean value for each key
"""
if agg_res is None:
agg_res = {k: [] for k in res}
for k, v in res.items():
if k in ['pred', 'gt']:
value = v
else:
value = np.mean(v)
agg_res[k].append(value)
return agg_res
def agg_mean(results):
for k in results:
if k in ['pred', 'gt']:
results[k] = results[k][0]
elif k == 'l2_loss':
results[k] = np.mean(results[k])
else:
raise KeyError('Unknown target')
return results
# number of hidden neurons
n_hidden = 100
# scaling of l1 regularization
beta = 1e-2 # 1e-4 = no regularization
params = {}
params['load_params'] = {
'do_restore': False,
}
params['save_params'] = {
'host': 'localhost',
'port': 24444,
'dbname': 'mnist',
'collname': 'autoencoder',
'exp_id': 'exp1',
'save_metrics_freq': 200,
'save_valid_freq': 200,
'save_filters_freq': 1000,
'cache_filters_freq': 1000,
}
params['train_params'] = {
'validate_first': False,
'data_params': {'func': data.MNIST,
'batch_size': 256,
'group': 'train',
'n_threads': 1},
'queue_params': {'queue_type': 'random',
'batch_size': 256},
'num_steps': 4000,
'thres_loss': float("inf"),
}
params['validation_params'] = {'valid0': {
'data_params': {'func': data.MNIST,
'batch_size': 100,
'group': 'test',
'n_threads': 1},
'queue_params': {'queue_type': 'fifo',
'batch_size': 100},
'num_steps': 100,
'targets': {'func': sparse_autoencoder_validation},
'online_agg_func': online_agg_mean,
'agg_func': agg_mean,
}}
params['model_params'] = {
'func': sparse_autoencoder,
'beta': beta,
'n_hidden': n_hidden,
}
params['learning_rate_params'] = {
'learning_rate': 5e-3,
'decay_steps': 2000,
'decay_rate': 0.95,
'staircase': True,
}
params['optimizer_params'] = {
'func': optimizer.ClipOptimizer,
'optimizer_class': tf.train.AdamOptimizer,
'clip': False,
}
params['loss_params'] = {
'targets': ['images'],
'loss_per_case_func': sparse_autoencoder_loss,
'loss_per_case_func_params' : {'_outputs': 'outputs', '_targets_$all': 'inputs'},
'agg_func': tf.reduce_mean,
}
params['skip_check'] = True
coll.remove({'exp_id' : 'exp1'}, {'justOne': True})
base.train_from_params(**params)
###Output
_____no_output_____
###Markdown
Now let's have a look at our training and validation curves that were stored in our database:
###Code
def get_losses(exp_id):
"""
Gets all loss entries from the database and concatenates them into a vector
"""
q_train = {'exp_id' : exp_id, 'train_results' : {'$exists' : True}}
return np.array([_r['loss']
for r in coll.find(q_train, projection = ['train_results'])
for _r in r['train_results']])
def plot_train_loss(exp_id, start_step=None, end_step=None, N_smooth = 100, plt_title = None):
"""
Plots the training loss
You will need to EDIT this part.
"""
# get the losses from the database
loss = get_losses(exp_id)
if start_step is None:
start_step = 0
if end_step is None:
end_step = len(loss)
if plt_title is None:
plt_title = exp_id
# Only plot selected loss window
loss = loss[start_step:end_step]
# plot loss
fig = plt.figure(figsize = (15, 6))
plt.plot(loss)
plt.title(plt_title + ' training: loss')
plt.grid()
axes = plt.gca()
# plot smoothed loss
smoothed_loss = signal.convolve(loss, np.ones((N_smooth,)))[N_smooth : -N_smooth] / float(N_smooth)
plt.figure(figsize = (15, 6))
plt.plot(smoothed_loss)
plt.title(plt_title + ' training: loss smoothed')
plt.grid()
axes = plt.gca()
plot_train_loss('exp1')
def get_validation_data(exp_id):
"""
Gets the validation data from the database (except for gridfs data)
"""
q_val = {'exp_id' : exp_id, 'validation_results' : {'$exists' : True}, 'validates' : {'$exists' : False}}
val_steps = coll.find(q_val, projection = ['validation_results'])
return [val_steps[i]['validation_results']['valid0']['l2_loss']
for i in range(val_steps.count())]
def plot_validation_results(exp_id, plt_title = None):
"""
Plots the validation results, i.e. the l2 loss
You will need to EDIT this part.
"""
# get the data from the database
l2_loss = get_validation_data(exp_id)
if plt_title is None:
plt_title = exp_id
# plot validation l2 loss
fig = plt.figure(figsize = (15, 6))
plt.plot(l2_loss)
plt.title(plt_title + ' validation: l2 loss')
plt.grid()
axes = plt.gca()
plot_validation_results('exp1')
###Output
_____no_output_____
###Markdown
Finally let's plot some example outputs of our autoencoder. The images to the left are the outputs. The images to the right are the inputs.
###Code
def get_validation_images(exp_id):
"""
Gets the validation images from the database
"""
q_val = {'exp_id' : exp_id, 'validation_results' : {'$exists' : True}, 'validates' : {'$exists' : False}}
val_steps = coll.find(q_val, projection = ['validation_results'])
pred = np.array([val_steps[i]['validation_results']['valid0']['pred']
for i in range(val_steps.count())])
gt = np.array([val_steps[i]['validation_results']['valid0']['gt']
for i in range(val_steps.count())])
return {'gt': gt, 'pred': pred}
def plot_validation_images(exp_id, n_images = 24):
'''
Plots n_images images in a grid. The ground truth image is on the left
and the prediction is on the right.
'''
imgs = get_validation_images(exp_id)
fig = plt.figure(figsize=(16, 16))
for i in range(n_images):
pred = np.reshape(imgs['pred'][0,i], [28, 28])
plt.subplot(n_images/4,n_images/3,1 + i*2)
plt.imshow(pred, cmap='gray')
ax = plt.gca()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
gt = np.reshape(imgs['gt'][0,i], [28, 28])
plt.subplot(n_images/4,n_images/3,2 + i*2)
plt.imshow(gt, cmap='gray')
ax = plt.gca()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plot_validation_images('exp1')
###Output
_____no_output_____
###Markdown
2.) Training and evaluating a variational autoencoder on MNIST 2.1.) Define a simple variational autoencoder as discussed in class consisting of - one fully connected layer (dimension n_hidden) in the encoder that results in the latent variable, - two fully connected layers (dimension n_latent) that take the latent variable and reparametrize the latent distribution with its mean mu and log of the standard deviation, and - two fully connected layers (dimension n_hidden & output_dim) in the decoder that take the reparametrized latent variable and decode it into the output prediction. The input dimension is 784. Use tanh activation functions in the intermediate layers and a sigmoid at the top layer. Use xavier initialization for the weights and constant initialization to 0 for the biases:
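A rough sketch of one way the pieces could fit together, assuming TensorFlow 1.x; the names mu and log_sigma and the returned dictionary keys are my own choices, and the xavier/zero initializers from the instructions are omitted for brevity:

```python
# One possible VAE wiring (a sketch, not the intended solution), TensorFlow 1.x assumed.
import tensorflow as tf

def vae_sketch(inp, n_hidden=100, n_latent=20):
    """inp: [batch, 784] flattened MNIST images."""
    hidden = tf.layers.dense(inp, n_hidden, activation=tf.nn.tanh, name='enc_hidden')
    mu = tf.layers.dense(hidden, n_latent, activation=None, name='mu')
    log_sigma = tf.layers.dense(hidden, n_latent, activation=None, name='log_sigma')
    eps = tf.random_normal(tf.shape(mu))                 # eps ~ N(0, 1)
    z = mu + tf.exp(log_sigma) * eps                     # reparameterization trick
    dec_hidden = tf.layers.dense(z, n_hidden, activation=tf.nn.tanh, name='dec_hidden')
    pred = tf.layers.dense(dec_hidden, 784, activation=tf.nn.sigmoid, name='dec_out')
    return {'pred': pred, 'mu': mu, 'log_sigma': log_sigma}
```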
###Code
def variational_autoencoder(inputs, train=True, n_hidden = 100, n_latent = 20, **kwargs):
'''
Implements a simple autoencoder consisting of two fully connected layers
'''
outputs = {}
# flatten the input images
inp = tf.reshape(inputs['images'], [inputs['images'].get_shape().as_list()[0], -1])
### YOUR CODE HERE
return outputs, {}
###Output
_____no_output_____
###Markdown
2.2.) Define the loss for our variational autoencoder as discussed in class:
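The standard ELBO-style objective written as a sketch; it assumes the model returns a dictionary with 'pred', 'mu' and 'log_sigma' (as in the sketch further up) and uses a squared-error reconstruction term:

```python
# A sketch of the VAE objective: reconstruction error plus the KL term to a unit Gaussian.
import tensorflow as tf

def vae_loss_sketch(inputs, outputs):
    """inputs: [batch, 784]; outputs: dict with 'pred', 'mu', 'log_sigma'."""
    recon = tf.reduce_sum(tf.square(inputs - outputs['pred']), axis=1)
    kl = -0.5 * tf.reduce_sum(1.0 + 2.0 * outputs['log_sigma']
                              - tf.square(outputs['mu'])
                              - tf.exp(2.0 * outputs['log_sigma']), axis=1)
    return tf.reduce_mean(recon + kl)
```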
###Code
def variational_autoencoder_loss(inputs, outputs, **kwargs):
'''
Defines the VAE loss = reconstruction loss + KL divergence term (as discussed in class)
'''
# flatten the input images
inputs = tf.reshape(inputs, [inputs.get_shape().as_list()[0], -1])
### YOUR CODE HERE
return loss
def variational_autoencoder_validation(inputs, outputs, **kwargs):
'''
Wrapper for using the loss function as a validation target
'''
return {'l2_loss': variational_autoencoder_loss(inputs['images'], outputs),
'pred': outputs['pred'],
'gt': inputs['images']}
def online_agg_mean(agg_res, res, step):
"""
Appends the mean value for each key
"""
if agg_res is None:
agg_res = {k: [] for k in res}
for k, v in res.items():
if k in ['pred', 'gt']:
value = v
else:
value = np.mean(v)
agg_res[k].append(value)
return agg_res
def agg_mean(results):
for k in results:
if k in ['pred', 'gt']:
results[k] = results[k][0]
elif k == 'l2_loss':
results[k] = np.mean(results[k])
else:
raise KeyError('Unknown target')
return results
# number of hidden neurons
n_hidden = 100
# dimension of latent space
n_latent = 20
params = {}
params['load_params'] = {
'do_restore': False,
}
params['save_params'] = {
'host': 'localhost',
'port': 24444,
'dbname': 'mnist',
'collname': 'autoencoder',
'exp_id': 'exp2',
'save_metrics_freq': 200,
'save_valid_freq': 200,
'save_filters_freq': 1000,
'cache_filters_freq': 1000,
}
params['train_params'] = {
'validate_first': False,
'data_params': {'func': data.MNIST,
'batch_size': 256,
'group': 'train',
'n_threads': 1},
'queue_params': {'queue_type': 'random',
'batch_size': 256},
'num_steps': 10000,
'thres_loss': float("inf"),
}
params['validation_params'] = {'valid0': {
'data_params': {'func': data.MNIST,
'batch_size': 100,
'group': 'test',
'n_threads': 1},
'queue_params': {'queue_type': 'fifo',
'batch_size': 100},
'num_steps': 100,
'targets': {'func': variational_autoencoder_validation},
'online_agg_func': online_agg_mean,
'agg_func': agg_mean,
}}
params['model_params'] = {
'func': variational_autoencoder,
'n_latent': n_latent,
'n_hidden': n_hidden,
}
params['learning_rate_params'] = {
'learning_rate': 5e-3,
'decay_steps': 10000,
'decay_rate': 0.95,
'staircase': True,
}
params['optimizer_params'] = {
'func': optimizer.ClipOptimizer,
'optimizer_class': tf.train.AdamOptimizer,
'clip': True,
}
params['loss_params'] = {
'targets': ['images'],
'loss_per_case_func': variational_autoencoder_loss,
'loss_per_case_func_params' : {'_outputs': 'outputs', '_targets_$all': 'inputs'},
'agg_func': tf.reduce_mean,
}
params['skip_check'] = True
coll.remove({'exp_id' : 'exp2'}, {'justOne': True})
base.train_from_params(**params)
###Output
_____no_output_____
###Markdown
Now let's have a look at our training and validation curves that were stored in our database:
###Code
def get_losses(exp_id):
"""
Gets all loss entries from the database and concatenates them into a vector
"""
q_train = {'exp_id' : exp_id, 'train_results' : {'$exists' : True}}
return np.array([_r['loss']
for r in coll.find(q_train, projection = ['train_results'])
for _r in r['train_results']])
def plot_train_loss(exp_id, start_step=None, end_step=None, N_smooth = 100, plt_title = None):
"""
Plots the training loss
You will need to EDIT this part.
"""
# get the losses from the database
loss = get_losses(exp_id)
if start_step is None:
start_step = 0
if end_step is None:
end_step = len(loss)
if plt_title is None:
plt_title = exp_id
# Only plot selected loss window
loss = loss[start_step:end_step]
# plot loss
fig = plt.figure(figsize = (15, 6))
plt.plot(loss)
plt.title(plt_title + ' training: loss')
plt.grid()
axes = plt.gca()
# plot smoothed loss
smoothed_loss = signal.convolve(loss, np.ones((N_smooth,)))[N_smooth : -N_smooth] / float(N_smooth)
plt.figure(figsize = (15, 6))
plt.plot(smoothed_loss)
plt.title(plt_title + ' training: loss smoothed')
plt.grid()
axes = plt.gca()
plot_train_loss('exp2')
def get_validation_data(exp_id):
"""
Gets the validation data from the database (except for gridfs data)
"""
q_val = {'exp_id' : exp_id, 'validation_results' : {'$exists' : True}, 'validates' : {'$exists' : False}}
val_steps = coll.find(q_val, projection = ['validation_results'])
return [val_steps[i]['validation_results']['valid0']['l2_loss']
for i in range(val_steps.count())]
def plot_validation_results(exp_id, plt_title = None):
"""
Plots the validation results, i.e. the l2 loss
You will need to EDIT this part.
"""
# get the data from the database
l2_loss = get_validation_data(exp_id)
if plt_title is None:
plt_title = exp_id
# plot validation l2 loss
fig = plt.figure(figsize = (15, 6))
plt.plot(l2_loss)
plt.title(plt_title + ' validation: l2 loss')
plt.grid()
axes = plt.gca()
plot_validation_results('exp2')
###Output
_____no_output_____
###Markdown
Finally let's plot some example outputs of our autoencoder. The images to the left are the outputs. The images to the right are the inputs.
###Code
def get_validation_images(exp_id):
"""
Gets the validation images from the database
"""
q_val = {'exp_id' : exp_id, 'validation_results' : {'$exists' : True}, 'validates' : {'$exists' : False}}
val_steps = coll.find(q_val, projection = ['validation_results'])
pred = np.array([val_steps[i]['validation_results']['valid0']['pred']
for i in range(val_steps.count())])
gt = np.array([val_steps[i]['validation_results']['valid0']['gt']
for i in range(val_steps.count())])
return {'gt': gt, 'pred': pred}
def plot_validation_images(exp_id, n_images = 24):
'''
Plots n_images images in a grid. The ground truth image is on the left
and the prediction is on the right.
'''
imgs = get_validation_images(exp_id)
fig = plt.figure(figsize=(16, 16))
for i in range(n_images):
pred = np.reshape(imgs['pred'][0,i], [28, 28])
plt.subplot(n_images/4,n_images/3,1 + i*2)
plt.imshow(pred, cmap='gray')
ax = plt.gca()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
gt = np.reshape(imgs['gt'][0,i], [28, 28])
plt.subplot(n_images/4,n_images/3,2 + i*2)
plt.imshow(gt, cmap='gray')
ax = plt.gca()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plot_validation_images('exp2')
###Output
_____no_output_____ |
Cardio Catch Disease - Machine Learning.ipynb | ###Markdown
Cardiovascular Disease Writing classifier algorithms Finding the accuracy and precision of the tool
###Code
from matplotlib import pyplot as plt
import pandas as pd
import seaborn as sns
#Loading the cleaned data from the previous section
df = pd.read_csv('cardio_data.csv')
df.set_index('id', inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Preparing the Training and Test set. Splits the data into training and test sets Randomly assigns 70% of the data to training and 30% to testing
###Code
X = df.drop(columns = ['cardio'])
y = df['cardio']
from sklearn.model_selection import train_test_split
X.head()
#X_train = receives the training data
#X_test = receives the test data (30%)
#y_train = classes associated with the training data
#y_test = classes associated with the test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state = 9)
X_train.shape, y_train.shape
X_test.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Creating the models
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
dtc = DecisionTreeClassifier()
ran = RandomForestClassifier(n_estimators=90)
knn = KNeighborsClassifier(n_neighbors=79)
naive = GaussianNB()
models = {"Decision tree" : dtc,
"Random forest" : ran,
"KNN" : knn,
"Naive bayes" : naive
}
scores= { }
###Output
_____no_output_____
###Markdown
Training the algorithms and generating the models
###Code
for key, value in models.items():
model = value
model.fit(X_train, y_train)
scores[key] = model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Performance Metrics
###Code
scores_frame = pd.DataFrame(scores, index=["Accuracy Score"]).T
scores_frame.sort_values(by=["Accuracy Score"], axis=0 ,ascending=False, inplace=True)
scores_frame
plt.figure(figsize=(5,5))
sns.barplot(x=scores_frame.index,y=scores_frame["Accuracy Score"])
plt.xticks(rotation=45);
###Output
_____no_output_____
###Markdown
Predicting results with the best model
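The notebook's title also mentions precision, which is not computed below; a sketch that reuses the knn model and the test split from the cells above to report both metrics:

```python
# Accuracy and precision on the full held-out test split (a sketch using scikit-learn).
from sklearn.metrics import accuracy_score, precision_score

y_pred = knn.predict(X_test)
print('Accuracy :', accuracy_score(y_test, y_pred))
print('Precision:', precision_score(y_test, y_pred))
```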
###Code
y_test[400:403]
X_test[400:403]
previsoes = knn.predict(X_test[400:403])
previsoes
###Output
_____no_output_____
###Markdown
Performing Cross-Validation Cross-validation with 5 folds Note: cross-validation should be performed after the train test split, using only the training data* https://scikit-learn.org/stable/modules/cross_validation.html
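Following the note above, a sketch of running the same 5-fold cross-validation on the training split only (the cell below, by contrast, cross-validates on the full X and y):

```python
# Cross-validation restricted to the training split, leaving the test split untouched.
from sklearn.model_selection import cross_val_score

scores_knn_train = cross_val_score(knn, X_train, y_train, cv=5, scoring='accuracy')
print('KNeighbors (CV on training data only):', scores_knn_train.mean())
```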
###Code
from sklearn.model_selection import cross_val_score
scores_dtc = cross_val_score(dtc, X, y, cv=5, scoring='accuracy')
scores_ran = cross_val_score(ran, X, y, cv=5, scoring='accuracy')
scores_knn = cross_val_score(knn, X, y, cv=5, scoring='accuracy')
scores_naive = cross_val_score(naive, X, y, cv=5, scoring='accuracy')
print('Decision Tree:', scores_dtc.mean())
print('Random Forest:', scores_ran.mean())
print('KNeighbors:', scores_knn.mean())
print('Naive Bayes:', scores_naive.mean())
###Output
Decision Tree: 0.6079662447257383
Random Forest: 0.686464135021097
KNeighbors: 0.7156793248945148
Naive Bayes: 0.7072911392405064
|
qick_demos/00_Send_receive_pulse_sim.ipynb | ###Markdown
Sending and receiving a pulse demonstrationNote: this notebook is a copy of 00_Send_receive_pulse. It shows how to use the simulator for the QickPrograms in that notebook. In this demo you will send and receive a pulse in loopback to demonstrate control over the QICK. By modifying the config Python dictionary in the below notebook cell, you can change several variables:* The pulse length length in FPGA clock ticks (1 clock tick = 2.6 ns).* The readout buffer length readout_length in FPGA clock ticks.* The pulse envelope shape pulse_style (either const or flat_top or arb)* The pulse amplitude pulse_gain in DAC units.* The pulse frequency pulse_freq in MHz.* The readout "time of flight" adc_trig_offset in FPGA clock ticks.* The number of times you average the read soft_avgs
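A tiny helper implied by the 2.6 ns-per-tick note above; the function name is mine, not part of the QICK API:

```python
# Convert FPGA clock ticks to nanoseconds (1 clock tick ≈ 2.6 ns, per the note above).
def ticks_to_ns(ticks, ns_per_tick=2.6):
    return ticks * ns_per_tick

ticks_to_ns(20)   # a 20-tick pulse lasts roughly 52 ns
```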
###Code
# Import the QICK drivers and auxiliary libraries
from qick import *
from qick.helpers import gauss
import time
%matplotlib inline
from qick.interpreter import simulate
from qick.interpreter import save_results
from qick.interpreter import read_results
# Load bitstream with custom overlay
soc = QickSoc()
# Set the loopback DAC channel to be in 1st Nyquist zone mode
soc.set_nyquist(ch=7,nqz=1);
###Output
/home/jimk/.local/lib/python3.8/site-packages/pynq/pl_server/device.py:79: UserWarning: No devices found, is the XRT environment sourced?
warnings.warn(
###Markdown
Hardware ConfigurationtProc channel 7 : DAC 229 CH3 Readout channel 0 : ADC 224 CH0
###Code
class LoopbackProgram(AveragerProgram):
def __init__(self,cfg):
AveragerProgram.__init__(self,cfg)
def initialize(self):
cfg=self.cfg
r_freq=self.sreg(cfg["res_ch"], "freq") #Get frequency register for res_ch
self.cfg["adc_lengths"]=[self.cfg["readout_length"]]*2 #add length of adc acquisition to config
self.cfg["adc_freqs"]=[adcfreq(self.cfg["pulse_freq"])]*2 #add frequency of adc ddc to config
if self.cfg["pulse_style"] == "const":
self.add_pulse(ch=self.cfg["res_ch"], name="measure", style=self.cfg["pulse_style"], length=self.cfg["length"]) #add a constant pulse to the pulse library
if self.cfg["pulse_style"] == "flat_top":
self.add_pulse(ch=self.cfg["res_ch"], name="measure", style=self.cfg["pulse_style"], length=self.cfg["length"], idata = self.cfg["idata"])
if self.cfg["pulse_style"] == "arb":
self.add_pulse(ch=self.cfg["res_ch"], name="measure", style=self.cfg["pulse_style"], idata = self.cfg["idata"])
freq=freq2reg(adcfreq(cfg["pulse_freq"])) # convert frequency to dac frequency (ensuring it is an available adc frequency)
self.pulse(ch=cfg["res_ch"], name="measure", freq=freq, phase=0, gain=cfg["pulse_gain"], t= 0, play=False) # pre-configure readout pulse
self.synci(200) # give processor some time to configure pulses
def body(self):
self.trigger_adc(adc1=1, adc2=1,adc_trig_offset=self.cfg["adc_trig_offset"]) # trigger the adc acquisition
if self.cfg["pulse_style"] == "const":
self.pulse(ch=self.cfg["res_ch"], length=self.cfg["length"], play=True) # play readout pulse
if self.cfg["pulse_style"] == "flat_top":
self.pulse(ch=self.cfg["res_ch"], name="measure", play=True) # play readout pulse
if self.cfg["pulse_style"] == "arb":
self.pulse(ch=self.cfg["res_ch"], play=True) # play readout pulse
self.sync_all(us2cycles(self.cfg["relax_delay"])) # sync all channels
###Output
_____no_output_____
###Markdown
Send/receive a pulse with pulse_style = const
###Code
config={"res_ch":7, # --Fixed
"reps":1, # --Fixed
"relax_delay":0, # --Fixed
"res_phase":0, # --Fixed
"pulse_style": "const", # --Fixed
"length":20, # [Clock ticks]
# Try varying length from 10-100 clock ticks
"readout_length":200, # [Clock ticks]
# Try varying readout_length from 50-1000 clock ticks
"pulse_gain":3000, # [DAC units]
# Try varying pulse_gain from 500 to 30000 DAC units
"pulse_freq": 100, # [MHz]
# In this program the signal is up and downconverted digitally so you won't see any frequency
# components in the I/Q traces below. But since the signal gain depends on frequency,
# if you lower pulse_freq you will see an increased gain.
"adc_trig_offset": 100, # [Clock ticks]
# Try varying adc_trig_offset from 100 to 220 clock ticks
"soft_avgs":100
# Try varying soft_avgs from 1 to 200 averages
}
###################
# Try it yourself !
###################
prog1 =LoopbackProgram(config)
#prog1.acquire_decimated(soc, load_pulses=True, progress=True, debug=False)
results = simulate(prog1)
print(f'1: pulses={results["pulses"]}')
print(f'1: log={results["instruction_log"]}')
print(f'1: mem changes={results["mem_changes"]}')
###Output
using qick program version
1: pulses=[[7, 'const', 0, 20]]
1: log=[(0, 0, 3, 0, 'regwi', {'va': 69905067, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 16, 'rb': 0, 'rc': 0, 'imm': 69905067}), (2, 1, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 17, 'rb': 0, 'rc': 0, 'imm': 0}), (4, 2, 3, 0, 'regwi', {'va': 3000, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 19, 'rb': 0, 'rc': 0, 'imm': 3000}), (6, 3, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0}), (8, 4, 3, 0, 'regwi', {'va': 589844, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 589844}), (10, 5, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 200}), (12, 6, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 0}), (14, 7, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 14, 'rb': 0, 'rc': 0, 'imm': 0}), (16, 8, 0, 0, 'regwi', {'va': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 49152}), (18, 9, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 100}), (20, 10, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 0}), (22, 11, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 110}), (24, 12, 3, 0, 'regwi', {'va': 589844, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 589844}), (26, 13, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0}), (28, 14, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 3000, 've': 589844, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (30, 15, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 20}), (32, 16, 0, 0, 'mathi', {'vb': 0, 'va': 1, 'page': 0, 'ch': 0, 'oper': 8, 'ra': 15, 'rb': 15, 'rc': 0, 'imm': 1}), (34, 17, 0, 0, 'memwi', {'va': 1, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 1}), (36, 18, 0, -1, 'loopnz', {'page': 0, 'oper': 8, 'ra': 14, 'rb': 14, 'rc': 0, 'addr': 8}), (38, 19, 0, -1, 'end', {'page': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'addr': 0}), (200, 20, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 3000, 've': 589844, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (300, 20, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 100}), (310, 20, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 110})]
1: mem changes=[(34, 17, 0, 1, 1)]
###Markdown
Send/receive a pulse with pulse_style = flat_top
###Code
config={"res_ch":7, # --Fixed
"reps":1, # --Fixed
"relax_delay":0, # --Fixed
"res_phase":0, # --Fixed
"pulse_style": "flat_top", # --Fixed
"length": 50, # [Clock ticks]
# Try varying length from 10-100 clock ticks
"sigma": 30, # [Clock ticks]
# Try varying sigma from 10-50 clock ticks
"readout_length":200, # [Clock ticks]
# Try varying readout_length from 50-1000 clock ticks
"pulse_gain":5000, # [DAC units]
# Try varying pulse_gain from 500 to 30000 DAC units
"pulse_freq": 100, # [MHz]
# In this program the signal is up and downconverted digitally so you won't see any frequency
# components in the I/Q traces below. But since the signal gain depends on frequency,
# if you lower pulse_freq you will see an increased gain.
"adc_trig_offset": 200, # [Clock ticks]
# Try varying adc_trig_offset from 100 to 220 clock ticks
"soft_avgs":100
# Try varying soft_avgs from 1 to 200 averages
}
config["idata"] = gauss(mu=config["sigma"]*16*5/2,si=config["sigma"]*16,length=5*config["sigma"]*16,maxv=32000)
# Try varying idata to be an arbitrary numpy array of your choosing!
# The first half of idata ramps up the flat_top pulse, the second half ramps down the flat_top pulse
###################
# Try it yourself !
###################
prog2 =LoopbackProgram(config)
#prog.acquire_decimated(soc, load_pulses=True, progress=True, debug=False)
results = simulate(prog2)
print(f'2: pulses={results["pulses"]}')
print(f'2: log={results["instruction_log"]}')
print(f'2: mem changes={results["mem_changes"]}')
###Output
using qick program version
2: pulses=[[7, 'flat_top', 0, 2400, array([61, 62, 63, ..., 63, 63, 62], dtype=int16), array([0, 0, 0, ..., 0, 0, 0], dtype=int16)]]
2: log=[(0, 0, 3, 0, 'regwi', {'va': 69905067, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 16, 'rb': 0, 'rc': 0, 'imm': 69905067}), (2, 1, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 17, 'rb': 0, 'rc': 0, 'imm': 0}), (4, 2, 3, 0, 'regwi', {'va': 5000, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 19, 'rb': 0, 'rc': 0, 'imm': 5000}), (6, 3, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0}), (8, 4, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0}), (10, 5, 3, 0, 'regwi', {'va': 524363, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524363}), (12, 6, 3, 0, 'regwi', {'va': 50, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 22, 'rb': 0, 'rc': 0, 'imm': 50}), (14, 7, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 200}), (16, 8, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 0}), (18, 9, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 14, 'rb': 0, 'rc': 0, 'imm': 0}), (20, 10, 0, 0, 'regwi', {'va': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 49152}), (22, 11, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200}), (24, 12, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 0}), (26, 13, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210}), (28, 14, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0}), (30, 15, 3, 0, 'regwi', {'va': 524363, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524363}), (32, 16, 3, 0, 'regwi', {'va': 50, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 22, 'rb': 0, 'rc': 0, 'imm': 50}), (34, 17, 3, 0, 'regwi', {'va': 5000, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 19, 'rb': 0, 'rc': 0, 'imm': 5000}), (36, 18, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0}), (38, 19, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0}), (40, 20, 3, 0, 'regwi', {'va': 524363, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524363}), (42, 21, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524363, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (44, 22, 3, 0, 'regwi', {'va': 2500, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 19, 'rb': 0, 'rc': 0, 'imm': 2500}), (46, 23, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0}), (48, 24, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0}), (50, 25, 3, 0, 'regwi', {'va': 589824, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 589824}), (52, 26, 3, 0, 'math', {'va': 589874, 'vb': 589824, 'vc': 50, 'page': 3, 'ch': 0, 'oper': 8, 'ra': 20, 'rb': 20, 'rc': 22, 'rd': 0, 're': 0, 'rf': 0, 'rg': 0, 'rh': 0}), (54, 27, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 2500, 've': 589874, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (56, 28, 3, 0, 'regwi', {'va': 5000, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 19, 'rb': 0, 'rc': 0, 'imm': 5000}), (58, 29, 3, 0, 'regwi', {'va': 125, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 125}), (60, 30, 3, 0, 'regwi', {'va': 75, 'page': 3, 'ch': 0, 
'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 75}), (62, 31, 3, 0, 'regwi', {'va': 524363, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524363}), (64, 32, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 75, 'vd': 5000, 've': 524363, 'vt': 125, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (66, 33, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 200}), (68, 34, 0, 0, 'mathi', {'vb': 0, 'va': 1, 'page': 0, 'ch': 0, 'oper': 8, 'ra': 15, 'rb': 15, 'rc': 0, 'imm': 1}), (70, 35, 0, 0, 'memwi', {'va': 1, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 1}), (72, 36, 0, -1, 'loopnz', {'page': 0, 'oper': 8, 'ra': 14, 'rb': 14, 'rc': 0, 'addr': 10}), (74, 37, 0, -1, 'end', {'page': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'addr': 0}), (200, 38, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524363, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (200, 38, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 2500, 've': 589874, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (325, 38, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 75, 'vd': 5000, 've': 524363, 'vt': 125, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0}), (400, 38, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200}), (410, 38, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})]
2: mem changes=[(70, 35, 0, 1, 1)]
###Markdown
Send/receive a pulse with pulse_style = arb
###Code
config={"res_ch":7, # --Fixed
"reps":5, # --Fixed
"relax_delay":0, # --Fixed
"res_phase":0, # --Fixed
"pulse_style": "arb", # --Fixed
"sigma": 30, # [Clock ticks]
# Try varying sigma from 10-50 clock ticks
"readout_length":200, # [Clock ticks]
# Try varying readout_length from 50-1000 clock ticks
"pulse_gain":5000, # [DAC units]
# Try varying pulse_gain from 500 to 30000 DAC units
"pulse_freq": 100, # [MHz]
# In this program the signal is up and downconverted digitally so you won't see any frequency
# components in the I/Q traces below. But since the signal gain depends on frequency,
# if you lower pulse_freq you will see an increased gain.
"adc_trig_offset": 200, # [Clock ticks]
# Try varying adc_trig_offset from 100 to 220 clock ticks
"soft_avgs":100
# Try varying soft_avgs from 1 to 200 averages
}
config["idata"] = gauss(mu=config["sigma"]*16*5/2,si=config["sigma"]*16,length=5*config["sigma"]*16,maxv=32000)
# Try varying idata to be an arbitrary numpy array of your choosing!
###################
# Try it yourself !
###################
prog3 =LoopbackProgram(config)
#prog3.acquire_decimated(soc, load_pulses=True, progress=True, debug=False)
# print(prog3.asm())
results = simulate(prog3)
print(f'3: pulses={results["pulses"]}')
# print(f'3: log={results["instruction_log"]}')
print(f'3: mem changes={results["mem_changes"]}')
# print(f'3: reg state={results["reg_state"]}')
save_results(results, "Loop3")
print("Reading back results to check them")
res3 = read_results("Loop3")
print(f'3 saved: pulses={res3["pulses"]}')
# print(f'3 saved: log={res3["instruction_log"]}')
print(f'3 saved: mem changes={res3["mem_changes"]}')
# print(f'3 saved: reg state={res3["reg_state"]}')
# for i in results['state'].instructions:
# print(i)
for i in results['instruction_log']:
print(i)
###Output
using qick program version
3: pulses=[[7, 'arb', 0, 2400, array([61, 62, 63, ..., 63, 63, 62], dtype=int16), array([0, 0, 0, ..., 0, 0, 0], dtype=int16)]]
3: mem changes=[(36, 18, 0, 1, 1), (60, 18, 0, 1, 2), (84, 18, 0, 1, 3), (108, 18, 0, 1, 4), (132, 18, 0, 1, 5)]
Reading back results to check them
3 saved: pulses=[[7, 'arb', 0, 2400, array([61, 62, 63, ..., 63, 63, 62], dtype=int16), array([0, 0, 0, ..., 0, 0, 0], dtype=int16)]]
3 saved: mem changes=[[36, 18, 0, 1, 1], [60, 18, 0, 1, 2], [84, 18, 0, 1, 3], [108, 18, 0, 1, 4], [132, 18, 0, 1, 5]]
(0, 0, 3, 0, 'regwi', {'va': 69905067, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 16, 'rb': 0, 'rc': 0, 'imm': 69905067})
(2, 1, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 17, 'rb': 0, 'rc': 0, 'imm': 0})
(4, 2, 3, 0, 'regwi', {'va': 5000, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 19, 'rb': 0, 'rc': 0, 'imm': 5000})
(6, 3, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0})
(8, 4, 3, 0, 'regwi', {'va': 524438, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524438})
(10, 5, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 200})
(12, 6, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 0})
(14, 7, 0, 0, 'regwi', {'va': 4, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 14, 'rb': 0, 'rc': 0, 'imm': 4})
(16, 8, 0, 0, 'regwi', {'va': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 49152})
(18, 9, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(20, 10, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 0})
(22, 11, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(24, 12, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0})
(26, 13, 3, 0, 'regwi', {'va': 524438, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524438})
(28, 14, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0})
(30, 15, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(32, 16, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 150})
(34, 17, 0, 0, 'mathi', {'vb': 0, 'va': 1, 'page': 0, 'ch': 0, 'oper': 8, 'ra': 15, 'rb': 15, 'rc': 0, 'imm': 1})
(36, 18, 0, 0, 'memwi', {'va': 1, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 1})
(38, 7, 0, -1, 'loopnz', {'page': 0, 'oper': 8, 'ra': 14, 'rb': 14, 'rc': 0, 'addr': 8})
(40, 8, 0, 0, 'regwi', {'va': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 49152})
(42, 9, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(44, 10, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 0})
(46, 11, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(48, 12, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0})
(50, 13, 3, 0, 'regwi', {'va': 524438, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524438})
(52, 14, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0})
(54, 15, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(56, 16, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 150})
(58, 17, 0, 0, 'mathi', {'vb': 1, 'va': 2, 'page': 0, 'ch': 0, 'oper': 8, 'ra': 15, 'rb': 15, 'rc': 0, 'imm': 1})
(60, 18, 0, 0, 'memwi', {'va': 2, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 1})
(62, 7, 0, -1, 'loopnz', {'page': 0, 'oper': 8, 'ra': 14, 'rb': 14, 'rc': 0, 'addr': 8})
(64, 8, 0, 0, 'regwi', {'va': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 49152})
(66, 9, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(68, 10, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 0})
(70, 11, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(72, 12, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0})
(74, 13, 3, 0, 'regwi', {'va': 524438, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524438})
(76, 14, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0})
(78, 15, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(80, 16, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 150})
(82, 17, 0, 0, 'mathi', {'vb': 2, 'va': 3, 'page': 0, 'ch': 0, 'oper': 8, 'ra': 15, 'rb': 15, 'rc': 0, 'imm': 1})
(84, 18, 0, 0, 'memwi', {'va': 3, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 1})
(86, 7, 0, -1, 'loopnz', {'page': 0, 'oper': 8, 'ra': 14, 'rb': 14, 'rc': 0, 'addr': 8})
(88, 8, 0, 0, 'regwi', {'va': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 49152})
(90, 9, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(92, 10, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 0})
(94, 11, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(96, 12, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0})
(98, 13, 3, 0, 'regwi', {'va': 524438, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524438})
(100, 14, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0})
(102, 15, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(104, 16, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 150})
(106, 17, 0, 0, 'mathi', {'vb': 3, 'va': 4, 'page': 0, 'ch': 0, 'oper': 8, 'ra': 15, 'rb': 15, 'rc': 0, 'imm': 1})
(108, 18, 0, 0, 'memwi', {'va': 4, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 1})
(110, 7, 0, -1, 'loopnz', {'page': 0, 'oper': 8, 'ra': 14, 'rb': 14, 'rc': 0, 'addr': 8})
(112, 8, 0, 0, 'regwi', {'va': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 49152})
(114, 9, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(116, 10, 0, 0, 'regwi', {'va': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 31, 'rb': 0, 'rc': 0, 'imm': 0})
(118, 11, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(120, 12, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 18, 'rb': 0, 'rc': 0, 'imm': 0})
(122, 13, 3, 0, 'regwi', {'va': 524438, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 20, 'rb': 0, 'rc': 0, 'imm': 524438})
(124, 14, 3, 0, 'regwi', {'va': 0, 'page': 3, 'ch': 0, 'oper': 0, 'ra': 21, 'rb': 0, 'rc': 0, 'imm': 0})
(126, 15, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(128, 16, 0, 0, 'synci', {'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'imm': 150})
(130, 17, 0, 0, 'mathi', {'vb': 4, 'va': 5, 'page': 0, 'ch': 0, 'oper': 8, 'ra': 15, 'rb': 15, 'rc': 0, 'imm': 1})
(132, 18, 0, 0, 'memwi', {'va': 5, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 15, 'rb': 0, 'rc': 0, 'imm': 1})
(134, 19, 0, -1, 'loopnz', {'page': 0, 'oper': 8, 'ra': 14, 'rb': 14, 'rc': 0, 'addr': 8})
(136, 20, 0, -1, 'end', {'page': 0, 'oper': 0, 'ra': 0, 'rb': 0, 'rc': 0, 'addr': 0})
(200, 21, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(350, 21, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(400, 21, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(410, 21, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(500, 21, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(550, 21, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(560, 21, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(650, 21, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(700, 21, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(710, 21, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(800, 21, 3, 7, 'set', {'va': 69905067, 'vb': 0, 'vc': 0, 'vd': 5000, 've': 524438, 'vt': 0, 'page': 3, 'ch': 7, 'oper': 0, 'ra': 0, 'rb': 16, 'rc': 21, 'rd': 17, 're': 18, 'rf': 19, 'rg': 20, 'rh': 0})
(850, 21, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(860, 21, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
(1000, 21, 0, 0, 'seti', {'vb': 49152, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 200})
(1010, 21, 0, 0, 'seti', {'vb': 0, 'page': 0, 'ch': 0, 'oper': 0, 'ra': 0, 'rb': 31, 'rc': 0, 'imm': 210})
|
notebooks/vae-resnet/vae-cifar10-depth1.nbconvert.ipynb | ###Markdown
Variational Autoencoder Parameters
###Code
img_rows, img_cols, img_chns = 32, 32, 3
original_img_size = (img_rows, img_cols, img_chns)
batch_size = int(os.environ.get('BATCH_SIZE', 25))
latent_dim = int(os.environ.get('LATENT_DIM', 256))
intermediate_dim = int(os.environ.get('INTERMEDIATE_DIM', 1024))
epsilon_std = 1.0
epochs = int(os.environ.get('EPOCHS', 1000))
activation = os.environ.get('ACTIVATION', 'sigmoid')
dropout = float(os.environ.get('DROPOUT', 0.0))
decay = float(os.environ.get('DECAY', 0.0))
learning_rate = float(os.environ.get('LEARNING_RATE', 0.001))
resnet_depth = int(os.environ.get('RESNET_DEPTH', 3))
###Output
_____no_output_____
###Markdown
Load CIFAR10 dataset
###Code
ftrain = H5PYDataset("../../data/cifar10/cifar10.hdf5", which_sets=('train',))
X_train, y_train = ftrain.get_data(ftrain.open(), slice(0, ftrain.num_examples))
X_train = np.moveaxis(X_train[:], 1, 3)
X_train = X_train / 255.
ftest = H5PYDataset("../../data/cifar10/cifar10.hdf5", which_sets=('test',))
X_test, y_test = ftest.get_data(ftest.open(), slice(0, ftest.num_examples))
X_test = np.moveaxis(X_test[:], 1, 3)
X_test = X_test / 255.
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
###Output
(50000, 32, 32, 3) (50000, 1)
(10000, 32, 32, 3) (10000, 1)
###Markdown
Helper Functions
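The `sampling` helper below implements the reparameterization trick, $z = \mu + \sigma \odot \epsilon$ with $\epsilon \sim \mathcal{N}(0, I)$. Note that it scales the noise by `K.exp(z_log_var)`, which treats `z_log_var` as $\log\sigma$; the more common $\log\sigma^2$ convention would scale by `K.exp(z_log_var / 2)` instead.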
###Code
def create_dense_layers(stage, width):
dense_name = '_'.join(['enc_conv', str(stage)])
bn_name = '_'.join(['enc_bn', str(stage)])
layers = [
Dense(width, name=dense_name),
BatchNormalization(name=bn_name),
Activation(activation),
Dropout(dropout),
]
return layers
def inst_layers(layers, in_layer):
x = in_layer
for layer in layers:
if isinstance(layer, list):
x = inst_layers(layer, x)
else:
x = layer(x)
return x
def sampling(args, batch_size=batch_size, latent_dim=latent_dim, epsilon_std=epsilon_std):
z_mean, z_log_var = args
epsilon = K.random_normal(shape=(batch_size, latent_dim),
mean=0., stddev=epsilon_std)
return z_mean + K.exp(z_log_var) * epsilon
def resnet_layers(x, depth, stage_base, transpose=False):
assert depth in [0, 1, 2, 3]
filters = [64, 64, 256]
x = conv_block(x, 3, filters, stage=stage_base + 2, block='a', strides=(1, 1), transpose=transpose)
if depth >= 2:
x = identity_block(x, 3, filters, stage=stage_base + 2, block='b')
if depth >= 3:
x = identity_block(x, 3, filters, stage=stage_base + 2, block='c')
filters = [128, 128, 512]
x = conv_block(x, 3, filters, stage=stage_base + 3, block='a', transpose=transpose)
if depth >= 1:
x = identity_block(x, 3, filters, stage=stage_base + 3, block='b')
if depth >= 2:
x = identity_block(x, 3, filters, stage=stage_base + 3, block='c')
if depth >= 3:
x = identity_block(x, 3, filters, stage=stage_base + 3, block='d')
filters = [256, 256, 1024]
x = conv_block(x, 3, filters, stage=stage_base + 4, block='a', transpose=transpose)
if depth >= 1:
x = identity_block(x, 3, filters, stage=stage_base + 4, block='b')
if depth >= 2:
x = identity_block(x, 3, filters, stage=stage_base + 4, block='c')
x = identity_block(x, 3, filters, stage=stage_base + 4, block='d')
if depth >= 3:
x = identity_block(x, 3, filters, stage=stage_base + 4, block='e')
x = identity_block(x, 3, filters, stage=stage_base + 4, block='f')
filters = [512, 512, 2048]
x = conv_block(x, 3, filters, stage=stage_base + 5, block='a', transpose=transpose)
if depth >= 2:
x = identity_block(x, 3, filters, stage=stage_base + 5, block='b')
if depth >= 3:
x = identity_block(x, 3, filters, stage=stage_base + 5, block='c')
return x
###Output
_____no_output_____
###Markdown
Loss Function
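The training objective below is the usual negative ELBO: a per-pixel binary cross-entropy reconstruction term (`logx_loss`) plus the closed-form KL divergence between the approximate posterior $\mathcal{N}(\mu, \sigma^2)$ and the standard normal prior, $D_{KL} = -\tfrac{1}{2}\sum_j \bigl(1 + \log\sigma_j^2 - \mu_j^2 - \sigma_j^2\bigr)$, which is what `kl_loss` computes with `z_log_var` in the role of $\log\sigma^2$.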
###Code
def kl_loss(x, x_decoded_mean):
kl_loss = - 0.5 * K.sum(1. + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
return K.mean(kl_loss)
def logx_loss(x, x_decoded_mean):
x = K.flatten(x)
x_decoded_mean = K.flatten(x_decoded_mean)
xent_loss = img_rows * img_cols * img_chns * metrics.binary_crossentropy(x, x_decoded_mean)
return xent_loss
def vae_loss(x, x_decoded_mean):
return logx_loss(x, x_decoded_mean) + kl_loss(x, x_decoded_mean)
###Output
_____no_output_____
###Markdown
VAE
###Code
def make_encoder():
encoder_input = Input(batch_shape=(batch_size,) + original_img_size)
resnet = resnet_layers(encoder_input, depth=resnet_depth, stage_base=0)
encoder_layers = [
create_dense_layers(stage=9, width=intermediate_dim),
Flatten(),
]
enc_dense = inst_layers(encoder_layers, resnet)
z_mean = Dense(latent_dim, kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01))(enc_dense)
z_log_var = Dense(latent_dim, kernel_regularizer=l2(0.1), bias_regularizer=l2(0.1))(enc_dense)
return Model(inputs=encoder_input, outputs=[z_mean, z_log_var])
def make_decoder():
decoder_input = Input(batch_shape=(batch_size,) + (latent_dim,))
decoder_layers = [
create_dense_layers(stage=10, width=intermediate_dim),
Reshape((4, 4, intermediate_dim // 16)),
]
dec_out = inst_layers(decoder_layers, decoder_input)
dec_out = resnet_layers(dec_out, depth=resnet_depth, transpose=True, stage_base=10)
decoder_out = Conv2DTranspose(name='x_decoded', filters=3, kernel_size=1, strides=1, activation='sigmoid')(dec_out)
return Model(inputs=decoder_input, outputs=decoder_out)
encoder = make_encoder()
decoder = make_decoder()
encoder.summary()
decoder.summary()
# VAE
x_input = Input(batch_shape=(batch_size,) + original_img_size)
z_mean, z_log_var = encoder(x_input)
z = Lambda(sampling, output_shape=(latent_dim,))([z_mean, z_log_var])
_output = decoder(z)
vae = Model(inputs=x_input, outputs=_output)
optimizer = Adam(lr=learning_rate, decay=decay)
vae.compile(optimizer=optimizer, loss=vae_loss)
vae.summary()
start = time.time()
early_stopping = keras.callbacks.EarlyStopping('val_loss', min_delta=0.1, patience=50)
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=25, min_lr=0.001 * learning_rate)
callbacks=[early_stopping, reduce_lr]
if 'CMDLINE' not in os.environ:
callbacks += [TQDMNotebookCallback()]
history = vae.fit(
X_train, X_train,
batch_size=batch_size,
epochs=epochs,
callbacks=callbacks,
validation_data=(X_test, X_test),
verbose=0
)
done = time.time()
elapsed = done - start
print("Elapsed: ", elapsed)
df = pd.DataFrame(history.history)
display(df.describe(percentiles=[0.25 * i for i in range(4)] + [0.95, 0.99]))
df.plot(figsize=(8, 6))
# Eval kl loss
m = Model(inputs=x_input, outputs=_output)
optimizer = Adam(lr=learning_rate, decay=decay)
m.compile(optimizer=optimizer, loss=kl_loss)
val_kl_loss = m.evaluate(x=X_test, y=X_test, batch_size=batch_size)
# Eval logx loss
m = Model(inputs=x_input, outputs=_output)
optimizer = Adam(lr=learning_rate, decay=decay)
m.compile(optimizer=optimizer, loss=logx_loss)
val_logx_loss = m.evaluate(x=X_test, y=X_test, batch_size=batch_size)
print()
print("kl_loss = %.2f" % val_kl_loss)
print("logx_loss = %.2f" % val_logx_loss)
import matplotlib.pyplot as plt
n = 10
figure = np.zeros((img_rows * n, img_cols * n, img_chns))
batches = (n * n + batch_size - 1) // batch_size
digits = []
for i in range(batches):
z_sample = np.random.normal(size=[batch_size, latent_dim]).reshape(batch_size, latent_dim)
x_decoded = decoder.predict(z_sample, batch_size=batch_size)
digits += [x_decoded[i].reshape(img_rows, img_cols, img_chns) for i in range(batch_size)]
for j in range(n):
for i in range(n):
digit = digits[j * n + i]
d_x = i * img_rows
d_y = j * img_cols
figure[d_x:d_x + img_rows, d_y:d_y + img_cols] = digit
plt.figure(figsize=(10, 10))
plt.imshow(figure)
plt.show()
if os.environ.get('OUTDIR', None):
encoder.save(os.path.join(os.environ['OUTDIR'], 'encoder-depth-' + str(resnet_depth) + '.h5'))
decoder.save(os.path.join(os.environ['OUTDIR'], 'decoder-depth-' + str(resnet_depth) + '.h5'))
vals = {k: v for k, v in locals().items() if type(v) in [int, float, bool]}
with open(os.path.join(os.environ['OUTDIR'], 'params-depth-' + str(resnet_depth) + '.json'), 'w') as f:
json.dump(vals, f)
###Output
_____no_output_____ |
notebooks/old_notebooks/new_experiments_p_q_test.ipynb | ###Markdown
Parameters:- $\beta$ = {0.1, 0.5, 0.9}- $\gamma$ = 0.8- $\mu$ = 0.9- $\kappa$ = 0.05- max_infected_time = 10- NSTEPS = 100k- NAGENTS = 10k- NFRACLINKS = 0.1 Dead rate
###Code
dr_small_beta_params, dr_small_beta_df = load_results(
"../data/new_experiments/p_q_test/dead_ratio_p_q_L1-beta=0.1_gamma=0.8_mu=0.9_kappa=0.05_max_infected_time=10_L2-q=4_p=0.5_xi=0.1_n=100_NRUNS=100_NSTEPS=100000_NAGENTS=10000_NFRACLINKS=0.1.csv")
viz.plot_imshow(dr_small_beta_df, "p", "q")
dr_medium_beta_params, dr_medium_beta_df = load_results(
"../data/new_experiments/p_q_test/dead_ratio_p_q_L1-beta=0.5_gamma=0.8_mu=0.9_kappa=0.05_max_infected_time=10_L2-q=4_p=0.5_xi=0.1_n=100_NRUNS=100_NSTEPS=100000_NAGENTS=10000_NFRACLINKS=0.1.csv")
viz.plot_imshow(dr_medium_beta_df, "p", "q")
dr_large_beta_params, dr_large_beta_df = load_results(
"../data/new_experiments/p_q_test/dead_ratio_p_q_L1-beta=0.9_gamma=0.8_mu=0.9_kappa=0.05_max_infected_time=10_L2-q=4_p=0.5_xi=0.1_n=100_NRUNS=100_NSTEPS=100000_NAGENTS=10000_NFRACLINKS=0.1.csv")
viz.plot_imshow(dr_large_beta_df, "p", "q")
###Output
_____no_output_____
###Markdown
Infected ratio
###Code
ir_small_beta_params, ir_small_beta_df = load_results(
"../data/new_experiments/p_q_test/infected_ratio_p_q_L1-beta=0.1_gamma=0.8_mu=0.9_kappa=0.05_max_infected_time=10_L2-q=4_p=0.5_xi=0.1_n=100_NRUNS=100_NSTEPS=100000_NAGENTS=10000_NFRACLINKS=0.1.csv")
viz.plot_imshow(ir_small_beta_df, "p", "q")
ir_medium_beta_params, ir_medium_beta_df = load_results(
"../data/new_experiments/p_q_test/infected_ratio_p_q_L1-beta=0.5_gamma=0.8_mu=0.9_kappa=0.05_max_infected_time=10_L2-q=4_p=0.5_xi=0.1_n=100_NRUNS=100_NSTEPS=100000_NAGENTS=10000_NFRACLINKS=0.1.csv")
viz.plot_imshow(ir_medium_beta_df, "p", "q")
ir_large_beta_params, ir_large_beta_df = load_results(
"../data/new_experiments/p_q_test/infected_ratio_p_q_L1-beta=0.9_gamma=0.8_mu=0.9_kappa=0.05_max_infected_time=10_L2-q=4_p=0.5_xi=0.1_n=100_NRUNS=100_NSTEPS=100000_NAGENTS=10000_NFRACLINKS=0.1.csv")
viz.plot_imshow(ir_large_beta_df, "p", "q")
###Output
_____no_output_____ |
elliot_kleiman_20170321_project2.ipynb | ###Markdown
Project 2 In this project, you will implement the exploratory analysis plan developed in Project 1. This will lay the groundwork for our first modeling exercise in Project 3. Step 1: Load the Python libraries you will need for this project
###Code
#imports
from __future__ import division
import pandas as pd
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
import pylab as pl
%matplotlib inline
import seaborn as sns
###Output
_____no_output_____
###Markdown
Step 2: Read in your data set
###Code
#Read in data from source
df_raw = pd.read_csv("../assets/admissions.csv")
print df_raw.head()
###Output
admit gre gpa prestige
0 0 380.0 3.61 3.0
1 1 660.0 3.67 3.0
2 1 800.0 4.00 1.0
3 1 640.0 3.19 4.0
4 0 520.0 2.93 4.0
###Markdown
Questions Question 1. How many observations are in our dataset?
###Code
df_raw.count()
df_raw.count().sum()
###Output
_____no_output_____
###Markdown
Answer: The dataset has 400 observations (rows); `df_raw.count().sum()` returns 1595 because it totals the non-null entries across all four columns rather than counting rows. Question 2. Create a summary table
###Code
summary_stats_admissions = df_raw.describe()
summary_stats_admissions
# Compute quantiles of gre
gre_quantiles = pd.qcut(df_raw['gre'], 4)
gre_quantiles.value_counts().sort_index()
# Compute quantiles of gpa
gpa_quantiles = pd.qcut(df_raw['gpa'], 4)
gpa_quantiles.value_counts().sort_index()
# What is the sample size distribution among quantiles of gre and gpa by prestige level?
df_raw.pivot_table(['gre'], ['admit', gre_quantiles], [gpa_quantiles, 'prestige'], aggfunc=[len])
# What is the standard deviation distribution among quantiles of gre and gpa by prestige level?
df_raw.pivot_table(['gre'], ['admit', gre_quantiles], [gpa_quantiles, 'prestige'], aggfunc=[np.std])
###Output
_____no_output_____
###Markdown
Question 3. Why would GRE have a larger STD than GPA?
###Code
# Inspect gre, gpa std
df_raw.std()[['gre', 'gpa']]
###Output
_____no_output_____
###Markdown
Answer: Because the GRE consists of three parts (quantitative reasoning, verbal reasoning, and analytical writing), and because test takers come from many different academic backgrounds, there is going to be larger variation in tested knowledge across these areas. Specifically, I would expect skills in quantitative and verbal reasoning and in analytical writing to vary by academic institution, college, department, degree program, and specialization, since these put varied emphasis on building quantitative, verbal, and writing skills. For example, theatre arts majors might not need a strong background in quantitative reasoning, so they may not take classes focusing on it (unless they went to a strong engineering school). Similarly, a computer engineering major may not need a strong background in English literature, so the emphasis is not on building analytical writing skills (unless they went to a strong liberal arts college). Therefore, I would expect more variation in GRE scores, because the many different degree programs focus on acquiring different skills, and in varying amounts. In addition, the GRE is measured on a much wider numeric scale (scores run into the hundreds) than GPA's 0-4 scale, so its standard deviation is larger in absolute terms. Question 4. Drop data points with missing data
###Code
# Which columns have missing data?
df_raw.isnull().sum()
# Which records are null?
df_raw[df_raw.isnull().any(axis=1)]
# What is shape of dataframe before dropping records?
shape_before_dropna = df_raw.shape
print(shape_before_dropna)
# Inspect shape before dropping missing values
shape_after_dropna = df_raw.dropna(how='any').shape
print(shape_after_dropna)
# Now, drop missing values
df_raw.dropna(how='any', inplace=True)
###Output
_____no_output_____
###Markdown
Question 5. Confirm that you dropped the correct data. How can you tell? Answer: Before dropping missing values the dataframe shape was (400, 4); after dropping them it is (397, 4), i.e. the three rows flagged by `isnull().any(axis=1)` were removed while all four columns were kept. Question 6. Create box plots for GRE and GPA
###Code
#boxplot 1
#df_raw.boxplot('gre')
sns.boxplot('gre', data=df_raw)
sns.plt.title('GRE: Box and Whiskers Plot')
#boxplot 2
#df_raw.boxplot('gpa')
sns.boxplot('gpa', data=df_raw)
sns.plt.title('GPA: Box and Whiskers Plot')
###Output
_____no_output_____
###Markdown
Question 7. What do these plots show? Answer: They show the data's spread, or how far from the center the data tend to range. Specifically, boxplots show the middle fifty percent of the data, and its range.The idea is to divide the data into four equal groups and see how far apart the extreme groups are.The data is first divided into two equal high and low groups at the median, which is called the second quartile, or Q2.The median of the low group is called the first quartile or Q1. The median of the high group is the third quartile, or Q3. The box's ends are the quartiles Q1 and Q3 respectively. The box's midline is the quartile Q2, which is the median of the data. The interquartile range (IQR) is the distance between the box's ends: the distance between the third quartile and the first quartile, or Q3-Q1. These plots are especially good for showing off differences between the high and low groups, as well as outliers. Question 8. Describe each distribution
###Code
# plot the distribution of each variable
df_raw.plot(kind='density', subplots=True, layout=(2, 2), sharex=False)
plt.show()
###Output
_____no_output_____
###Markdown
The *Admit distribution* is bimodal (it has two modes, 0 and 1), as expected. Both the *GRE distribution* and the *GPA distribution* are approximately symmetrical. The *Prestige distribution* is multimodal (it has four modes, 1, 2, 3, 4), as expected. Question 9. If our model had an assumption of a normal distribution would we meet that requirement?
###Code
# Test for normality using the Kolmogorov-Smirnov Test
# GRE normal?
print('GRE: ', stats.kstest(df_raw.gre, 'norm'))
print('Kurtosis: ', df_raw.gre.kurt())
print('Skew: ', df_raw.gre.skew())
print('~~~~~~~~~~~')
# GPA normal?
print('GPA : ', stats.kstest(df_raw.gpa, 'norm'))
print('Kurtosis: ', df_raw.gpa.kurt())
print('Skew: ', df_raw.gpa.skew())
print('~~~~~~~~~~~')
# Admit normal?
print('Admit: ', stats.kstest(df_raw.admit, 'norm'))
print('Kurtosis: ', df_raw.admit.kurt())
print('Skew: ', df_raw.admit.skew())
print('~~~~~~~~~~~')
# Prestige normal?
print('Prestige: ', stats.kstest(df_raw.prestige, 'norm'))
print('Kurtosis: ', df_raw.prestige.kurt())
print('Skew: ', df_raw.prestige.skew())
###Output
('GRE: ', KstestResult(statistic=1.0, pvalue=0.0))
('Kurtosis: ', -0.33286435465143427)
('Skew: ', -0.146046988215597)
~~~~~~~~~~~
('GPA : ', KstestResult(statistic=0.98972085476178895, pvalue=0.0))
('Kurtosis: ', -0.56356989952216807)
('Skew: ', -0.21688893296924305)
~~~~~~~~~~~
('Admit: ', KstestResult(statistic=0.5, pvalue=0.0))
('Kurtosis: ', -1.3865881769308692)
('Skew: ', 0.7876691478505351)
~~~~~~~~~~~
('Prestige: ', KstestResult(statistic=0.84134474606854293, pvalue=0.0))
('Kurtosis: ', -0.90103795489017591)
('Skew: ', 0.086505552897055041)
###Markdown
Answer: No, we would not meet that requirement. For each of GRE, GPA, Admit, and Prestige, the Kolmogorov-Smirnov test statistics `D` (1.0, 0.9897, 0.5, and 0.8413, respectively) have p-values of essentially zero, so these values are extremely unlikely to arise if the data had been drawn from the reference normal distribution. We therefore reject, at the 95% confidence level, the hypothesis that the data were drawn from that distribution and conclude that the data are not normally distributed. (Note that `stats.kstest(x, 'norm')` compares against a *standard* normal, N(0, 1), so unstandardized scores are penalized for their location and scale alone.) Question 10. Does this distribution need correction? If so, why? How? Answer: Yes, it needs correction, because the distributions are not normal: both GRE and GPA are slightly left-skewed and platykurtic (negative excess kurtosis). I plan to remove outliers and log transform the data.
###Code
# GRE IQR
q3_gre = summary_stats_admissions.gre['75%']
q1_gre = summary_stats_admissions.gre['25%']
iqr_gre = q3_gre - q1_gre
low_fence_gre = q1_gre - 1.5*iqr_gre
high_fence_gre = q3_gre + 1.5*iqr_gre
print("GRE IQR: ", iqr_gre)
print("GRE low fence: ", low_fence_gre)
print("GRE high fence: ", high_fence_gre)
# Find GRE outliers
print('Number of outliers: ', df_raw[(df_raw.gre < low_fence_gre) | (df_raw.gre > high_fence_gre)].shape[0])
print('These are the outliers: ')
df_raw[(df_raw.gre < low_fence_gre) | (df_raw.gre > high_fence_gre)]
# Remove GRE outliers
print('Shape before outlier removal is: ', df_raw.shape)
df = df_raw[(df_raw.gre >= low_fence_gre) & (df_raw.gre <= high_fence_gre)]
print('Shape after outlier removal is: ', df.shape)
# Plot to visually inspect distribution, still looks skewed
df.gre.plot.density()
plt.title('GRE density')
plt.show()
# GPA IQR
q3_gpa = summary_stats_admissions.gpa['75%']
q1_gpa = summary_stats_admissions.gpa['25%']
iqr_gpa = q3_gpa - q1_gpa
low_fence_gpa = q1_gpa - 1.5*iqr_gpa
high_fence_gpa = q3_gpa + 1.5*iqr_gpa
print("GPA IQR: ", round(iqr_gpa, 1))
print("GPA low fence: ", round(low_fence_gpa, 1))
print("GPA high fence: ", round(high_fence_gpa, 1))
# Now, find GPA Outliers
print('Number of outliers: ', df[(df.gpa < low_fence_gpa) | (df.gpa > high_fence_gpa)].shape[0])
print('These are the outliers: ')
df[(df.gpa < low_fence_gpa) | (df.gpa > high_fence_gpa)]
print('Shape before outlier removal is: ', df.shape)
df = df[(df.gpa >= low_fence_gpa) & (df.gpa <= high_fence_gpa)]
print('Shape after outlier removal is: ', df.shape)
# Plot to visually inspect distribution, still looks skewed!
df.gpa.plot.density()
plt.title('GPA density')
plt.show()
# Removed outliers: re-test for normality using the Kolmogorov-Smirnov Test
# Observation: skew got better, kurtosis got worse!
# GRE
print('GRE: ', stats.kstest(df.gre, 'norm'))
print('Kurtosis: ', df.gre.kurt())
print('Skew: ', df.gre.skew())
print('~~~~~~~~~~~')
# GPA
print('GPA : ', stats.kstest(df.gpa, 'norm'))
print('Kurtosis: ', df.gpa.kurt())
print('Skew: ', df.gpa.skew())
# Transform GRE distribution to standard normal
sns.distplot( (df.gre - df.gre.mean()) / df.gre.std(), bins=5, kde_kws={'bw':1} )
sns.plt.title('GRE to Standard Normal')
sns.plt.show()
# Transform GPA distribution to standard normal
sns.distplot( (df.gpa - df.gpa.mean()) / df.gpa.std(), bins=10, kde_kws={'bw':1} )
sns.plt.title('GPA to Standard Normal')
sns.plt.show()
# Log transform the data: re-test for normality using the Kolmogorov-Smirnov Test
# Observation: Skew got worse, Kurtosis got better
# GRE
print('GRE: ', stats.kstest(np.log(df.gre), 'norm'))
print('Kurtosis: ', np.log(df.gre).kurt())
print('Skew: ', np.log(df.gre).skew())
print('~~~~~~~~~~~')
# GPA
print('GPA : ', stats.kstest(np.log(df.gpa), 'norm'))
print('Kurtosis: ', np.log(df.gpa).kurt())
print('Skew: ', np.log(df.gpa).skew())
###Output
('GRE: ', KstestResult(statistic=0.99999999721106625, pvalue=0.0))
('Kurtosis: ', -0.18677728430420748)
('Skew: ', -0.47287836787183568)
~~~~~~~~~~~
('GPA : ', KstestResult(statistic=0.81696385197730359, pvalue=0.0))
('Kurtosis: ', -0.31189663172165183)
('Skew: ', -0.43724961849141997)
###Markdown
Answer: I don't know of a single fix that fully corrects the skewness and kurtosis inherent in this data set, but here is what I found: 1. After removing outliers, the skew got better but the kurtosis got worse. 2. After removing outliers and log transforming the data, the skew got worse but the kurtosis got better. 3. One way to standardize the data is to subtract the mean and divide by the standard deviation; this puts the data on a z-score scale, although it does not by itself change skewness or kurtosis (a standardized re-test and a power transform are sketched just below). Question 11. Which of our variables are potentially collinear? Answer: GPA and GRE are potentially collinear, i.e., they are moderately positively correlated.
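Referring back to Question 10: since `stats.kstest(x, 'norm')` tests against a *standard* normal, the z-scored data can be re-tested directly, and a power transform such as Box-Cox is one standard way to reduce skew. A minimal sketch (it reuses the `df` from above; nothing here is part of the original analysis):

```python
from scipy import stats

# Re-test normality on z-scored GRE (kstest compares against N(0, 1))
z_gre = (df.gre - df.gre.mean()) / df.gre.std()
print(stats.kstest(z_gre, 'norm'))

# Box-Cox power transform (requires strictly positive values) to reduce skew
gre_bc, lam = stats.boxcox(df.gre)
print('lambda:', lam, 'skew after Box-Cox:', pd.Series(gre_bc).skew())
```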
###Code
# create a correlation matrix for the data
df_raw.corr()
sns.heatmap(df_raw.corr(), annot=True, cmap='RdBu')
pd.scatter_matrix(df_raw)
plt.show()
###Output
_____no_output_____ |
notebooks/exp2-18_analysis_summary.ipynb | ###Markdown
Exp 2-18 analysis summary. Which parameters did best on the tasks? See `./informercial/Makefile` for experimental details.
###Code
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
# ls ../data/exp2*
###Output
_____no_output_____
###Markdown
Load and process data
###Code
data_path ="/Users/qualia/Code/infomercial/data/"
exp_names = ["exp2",
"exp3",
"exp4",
"exp5",
"exp6",
"exp7",
"exp8",
"exp9",
"exp10",
"exp11",
"exp12",
"exp13",
"exp14",
"exp15",
"exp16",
"exp17",
"exp18"]
exp_index = list(range(2, 19))
num_exps = 50
num_episodes = 10000
env_names = [
"BanditOneHigh2-v0",
"BanditOneHigh10-v0",
"BanditOneHigh121-v0",
"BanditOneHigh1000-v0",
"BanditHardAndSparse2-v0",
"BanditHardAndSparse10-v0",
"BanditHardAndSparse121-v0",
"BanditHardAndSparse1000-v0"
]
last_trials = -500
exp_index
# For each exp, then each task, extract the p_best, and the last_trials.
# Init final result
p_best = {}
for env in env_names:
p_best[env] = np.zeros(len(exp_names))
for j, exp_name in enumerate(exp_names):
# Gather traces by bandit: scores,
# Qs in a big numpy array (n_exp, n_episodes)
scores_E = {}
scores_R = {}
values_E = {}
values_R = {}
controlling = {}
actions = {}
best = {}
# Preallocate the arrays for this env
for env in env_names:
scores_E[env] = np.zeros((num_episodes, num_exps))
scores_R[env] = np.zeros((num_episodes, num_exps))
values_E[env] = np.zeros((num_episodes, num_exps))
values_R[env] = np.zeros((num_episodes, num_exps))
controlling[env] = np.zeros((num_episodes, num_exps))
actions[env] = np.zeros((num_episodes, num_exps))
best[env] = None
# Load and repackage
for n in range(num_exps):
result = load_checkpoint(os.path.join(data_path, f"{exp_name}_{env}_{n+1}.pkl"))
scores_E[env][:, n] = result["scores_E"]
scores_R[env][:, n] = result["scores_R"]
values_E[env][:, n] = result["values_E"]
values_R[env][:, n] = result["values_R"]
controlling[env][:, n] = result["policies"]
actions[env][:, n] = result["actions"]
best[env] = result["best"]
# Est. prob. that the action was correct.
p_best_e = {}
for env in env_names:
b = best[env]
p_best_e[env] = np.zeros(num_episodes)
for i in range(num_episodes):
actions_i = actions[env][i,:]
p_best_e[env][i] = np.sum(actions_i == b) / actions_i.size
# Get avg. p_best of last_trials for each exp and env
for env in env_names:
p_best[env][j] = np.mean(p_best_e[env][last_trials:])
p_best
###Output
_____no_output_____
###Markdown
Learning performance. For each bandit env, over all exps.
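Here $p(\mathrm{best})$ for an environment is the fraction of the $N_{\mathrm{exp}} = 50$ runs whose action at episode $t$ equals the known best arm, $p_{\mathrm{best}}(t) = \tfrac{1}{N_{\mathrm{exp}}}\sum_{n}\mathbf{1}[a_n(t) = a^{*}]$, averaged over the last 500 episodes (`last_trials`) in the loading cell above.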
###Code
def plot_sum_performance(plot_names):
fig = plt.figure(figsize=(4, 3*len(plot_names)))
grid = plt.GridSpec(len(plot_names), 1, wspace=0.4, hspace=1.2)
for i, env in enumerate(plot_names):
plt.subplot(grid[i, 0])
plt.title(f"{env}")
        # p_best[env] is already aggregated over runs, so one scatter per env is enough
        ps = p_best[env]
        plt.scatter(exp_index, ps, color="black", alpha=1, s=16)
plt.plot(exp_index, np.ones(len(exp_index)), color="grey", alpha=0.2, ls='--', linewidth=1)
plt.ylabel("p(best)")
plt.xlabel("Experiment number")
plt.xticks(exp_index)
plt.ylim(-.1, 1.1)
_ = sns.despine()
###Output
_____no_output_____
###Markdown
Compare exps OneHigh
###Code
plot_sum_performance(env_names[0:4])
###Output
_____no_output_____
###Markdown
Sparse
###Code
plot_sum_performance(env_names[4:8])
last_trials
###Output
_____no_output_____ |
AdvancedDataAnalysis/Final Assignment/.ipynb_checkpoints/first attempt-checkpoint.ipynb | ###Markdown
Data Collection
###Code
posPath='aclImdb/train/pos'
arr = []
for filename in tqdm(os.listdir(posPath)):
path= os.path.join(posPath,filename)
with open(path) as f:
review = f.readlines()
arr.append(review)
posArr = np.array(arr)
posArr = np.insert(posArr, 1, 1, axis=1)
negPath='aclImdb/train/neg'
arr = []
for filename in tqdm(os.listdir(negPath)):
#print(filename)
#path='imdb/train/neg/0_3.txt'
path= os.path.join(negPath,filename)
with open(path) as f:
review = f.readlines()
arr.append(review)
negArr = np.array(arr)
negArr = np.insert(negArr, 1, 0, axis=1)
print(posArr.shape)
print(negArr.shape)
df1 = pd.DataFrame(data = posArr, columns = ['review','label'])
df2 = pd.DataFrame(data = negArr, columns = ['review','label'])
reviews_df = df1.append(df2)
print(df1.head(2))
print(df2.head(2))
reviews_df.head()
reviews_df.groupby('label').count()
reviews_df['vec'] = 0
reviews_df.iloc[0,1]
###Output
_____no_output_____
###Markdown
Pre Processing
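For reference, with the vectorizer settings used further below (`sublinear_tf=1`, `smooth_idf=1`, default L2 norm), scikit-learn weights a term $t$ in document $d$ roughly as $(1 + \log \mathrm{tf}_{t,d})\cdot\bigl(1 + \log\tfrac{1 + n}{1 + \mathrm{df}_t}\bigr)$ and then normalizes each document vector to unit length.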
###Code
from sklearn import preprocessing
from sklearn.preprocessing import LabelEncoder
lbl_enc = preprocessing.LabelEncoder()
y = lbl_enc.fit_transform(reviews_df.label)
from sklearn.model_selection import train_test_split
xtrain, xvalid, ytrain, yvalid = train_test_split(reviews_df.review, y,
stratify=y,
random_state=42,
test_size=0.3, shuffle=True)
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
# Always start with these features. They work (almost) everytime!
tfv = TfidfVectorizer(min_df=3, max_features=None,
strip_accents='unicode', analyzer='word',token_pattern=r'\w{1,}',
ngram_range=(1, 3), use_idf=1,smooth_idf=1,sublinear_tf=1,
stop_words = 'english')
# Fitting TF-IDF to both training and test sets (semi-supervised learning)
tfv.fit(list(xtrain) + list(xvalid))
xtrain_tfv = tfv.transform(xtrain)
xvalid_tfv = tfv.transform(xvalid)
from sklearn.linear_model import LogisticRegression
# Fitting a simple Logistic Regression on TFIDF
clf = LogisticRegression(C=1.0)
clf.fit(xtrain_tfv, ytrain)
y_pred = clf.predict_proba(xvalid_tfv)
clf.score(xvalid_tfv,yvalid)
from sklearn.metrics import confusion_matrix
from sklearn import metrics
confusion_matrix(yvalid, clf.predict(xvalid_tfv))
yvalid
y_pred[:,0]
###Output
_____no_output_____ |
week06/prep_notebook_week06_03.ipynb | ###Markdown
Color SpacesIn this notebook, we will explore how colormaps move through two colorspaces, specifically [HSV](https://en.wikipedia.org/wiki/HSL_and_HSV) and RGB.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
import IPython.display
import io
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.cm
import ipywidgets
plt.rcParams["figure.figsize"] = (16, 12)
###Output
_____no_output_____
###Markdown
The next cell is going to set up two different functions. One will take rgb values and rotate them through RGB space, and the other will plot a nice depiction of the colormap in RGB space.
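Concretely, `rotate` shifts each RGB triple to the centre of the unit colour cube, applies the composed rotation $R = R_x(\phi)\,R_y(\theta)\,R_z(\psi)$, and shifts back: $v' = (v - c)\,R + c$ with $c = (0.5, 0.5, 0.5)$, so for large angles the rotated curve can leave the displayable $[0, 1]^3$ cube.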
###Code
def rotate(arr, theta, phi, psi):
Rx = np.array([[1, 0, 0],
[0, np.cos(phi), np.sin(phi)],
[0.0, -np.sin(phi), np.cos(phi)]])
Ry = np.array([[np.cos(theta), 0, -np.sin(theta)],
[0.0, 1.0, 0.0],
[np.sin(theta), 0, np.cos(theta)]])
Rz = np.array([[np.cos(psi), np.sin(psi), 0],
[-np.sin(psi), np.cos(psi), 0],
[0, 0, 1]])
R = np.dot(Rx, Ry).dot(Rz)
return np.dot(arr - 0.5, R) + 0.5
def plot_colortable(colortable, theta = 0.0, phi = 0.0, psi = 0.0):
title = ""
if isinstance(colortable, str):
title = title or colortable
colortable = matplotlib.cm.cmap_d[colortable](np.mgrid[0.0:1.0:256j])
colortable = rotate(colortable[:,:3], theta, phi, psi)
fig = plt.figure(figsize=(20, 16))
ax = fig.add_axes([0.0, 0.25, 0.75, 0.75], projection="3d")
ax.plot(colortable[:,0], colortable[:,1], colortable[:,2], '-', lw=4)
ax.set_xlim(-0.05, 1.05)
ax.set_ylim(-0.05, 1.05)
ax.set_zlim(-0.05, 1.05)
ax.set_xlabel("Red")
ax.set_ylabel("Green")
ax.set_zlabel("Blue")
ax = fig.add_axes([0.75, 0.76, 0.25, 0.20])
ax.plot(colortable[:,0], colortable[:,1], '-', lw=4)
ax.set_xlim(-0.05, 1.05)
ax.set_ylim(-0.05, 1.05)
ax.set_xlabel("Red", fontsize=18)
ax.set_ylabel("Green", fontsize=18)
ax = fig.add_axes([0.75, 0.52, 0.25, 0.20])
ax.plot(colortable[:,0], colortable[:,2], '-', lw=4)
ax.set_xlim(-0.05, 1.05)
ax.set_ylim(-0.05, 1.05)
ax.set_xlabel("Red", fontsize=18)
ax.set_ylabel("Blue", fontsize=18)
ax = fig.add_axes([0.75, 0.28, 0.25, 0.20])
ax.plot(colortable[:,1], colortable[:,2], '-', lw=4)
ax.set_xlim(-0.05, 1.05)
ax.set_ylim(-0.05, 1.095)
ax.set_xlabel("Green", fontsize=18)
ax.set_ylabel("Blue", fontsize=18)
# Now we do three colorbars that span the whole thing
im = np.ones((16, colortable.shape[0], 4), dtype="uint8")
im[...,:3] = (colortable * 255).astype("uint8")[None,:,:]
im[...,3] = 255
im_no_red = im.copy()
im_no_red[:,:,0] = 0
im_no_green = im.copy()
im_no_green[:,:,1] = 0
im_no_blue = im.copy()
im_no_blue[:,:,2] = 0
aspect = im.shape[0]/im.shape[1] * 10
ax = fig.add_axes([0.0, 0.0, 1.0, 0.05])
ax.imshow(im, interpolation='nearest', aspect = aspect)
ax.set_ylabel("Standard", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
ax = fig.add_axes([0.0, 0.06, 1.0, 0.05])
ax.imshow(im_no_red, interpolation='nearest', aspect = aspect)
ax.set_ylabel("No Red", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
ax = fig.add_axes([0.0, 0.12, 1.0, 0.05])
ax.imshow(im_no_green, interpolation='nearest', aspect = aspect)
ax.set_ylabel("No Green", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
ax = fig.add_axes([0.0, 0.18, 1.0, 0.05])
ax.imshow(im_no_blue, interpolation='nearest', aspect = aspect)
ax.set_ylabel("No Blue", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
if title is not None:
fig.suptitle(title, fontsize = 24)
###Output
_____no_output_____
###Markdown
Let's create a widget, and see what our colormaps look like in those dimensions.
###Code
ipywidgets.interact(plot_colortable, colortable = ["viridis", "jet", "RdBu", "Blues"],
theta = (0.0, 2.0*np.pi, 0.01), phi = (0.0, 2.0*np.pi, 0.01), psi = (0.0, 2.0*np.pi, 0.01))
###Output
_____no_output_____
###Markdown
A better representation of this would be in HSV space. Here we convert from RGB to HSV, which can be a tricky process. Note that HSV space is periodic in Hue, so we can rotate around the axis and it will remain continuous.
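The conversion below follows the $H_2$, $C_2$ definitions referenced in the code comment: $\alpha = \tfrac{1}{2}(2R - G - B)$, $\beta = \tfrac{\sqrt{3}}{2}(G - B)$, hue $H = \operatorname{atan2}(\beta, \alpha)$, chroma $C = \sqrt{\alpha^2 + \beta^2}$, value $V = \max(R, G, B)$, and saturation $S = C/V$ (set to 0 where $V = 0$).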
###Code
def move_cylinder(arr, theta, phi, psi):
Rx = np.array([[1, 0, 0],
[0, np.cos(phi), np.sin(phi)],
[0.0, -np.sin(phi), np.cos(phi)]])
Ry = np.array([[np.cos(theta), 0, -np.sin(theta)],
[0.0, 1.0, 0.0],
[np.sin(theta), 0, np.cos(theta)]])
Rz = np.array([[np.cos(psi), np.sin(psi), 0],
[-np.sin(psi), np.cos(psi), 0],
[0, 0, 1]])
R = np.dot(Rx, Ry).dot(Rz)
return np.dot(arr - 0.5, R) + 0.5
def rgb_to_hsv(arr):
# https://en.wikipedia.org/wiki/HSL_and_HSV has info; we will use C_2 and H_2
# The colortable is in the range 0..1.
alpha = 0.5 * (2*arr[...,0] - arr[...,1] - arr[...,2])
beta = np.sqrt(3)/2.0 * (arr[...,1] - arr[...,2])
H2 = np.arctan2(beta, alpha)
H2[H2<0] += 2.0 * np.pi
C2 = np.sqrt(alpha**2 + beta**2)
ma = np.max(arr, axis=-1)
mi = np.min(arr, axis=-1)
V = ma
S = C2/V
    S = np.nan_to_num(S)  # nan_to_num returns a new array; assign it so S is 0 where V == 0
return np.stack([H2, S, V], axis=-1)
def hsv_to_rgb(arr):
# H, S, V
C = arr[...,2] * arr[...,1]
Hp = arr[...,0] / (np.pi/3)
X = C * (1.0 - np.abs(np.mod(Hp, 2) - 1))
c1 = (0 <= Hp) & (Hp < 1)
c2 = (1 <= Hp) & (Hp < 2)
c3 = (2 <= Hp) & (Hp < 3)
c4 = (3 <= Hp) & (Hp < 4)
c5 = (4 <= Hp) & (Hp < 5)
c6 = (5 <= Hp) & (Hp <= 6)
rgb = np.zeros_like(arr)
rgb[c1,0] = C[c1]
rgb[c1,1] = X[c1]
rgb[c1,2] = 0
rgb[c2,0] = X[c2]
rgb[c2,1] = C[c2]
rgb[c2,2] = 0
rgb[c3,0] = 0
rgb[c3,1] = C[c3]
rgb[c3,2] = X[c3]
rgb[c4,0] = 0
rgb[c4,1] = X[c4]
rgb[c4,2] = C[c4]
rgb[c5,0] = X[c5]
rgb[c5,1] = 0
rgb[c5,2] = C[c5]
rgb[c6,0] = C[c6]
rgb[c6,1] = 0
rgb[c6,2] = X[c6]
mi = arr[...,2] - C
rgb += mi[...,None]
return rgb
def plot_colortable_hsv(colortable, hue_theta = 0.0, sat_scale = 1.0, val_scale = 1.0):
title = ""
if isinstance(colortable, str):
title = title or colortable
colortable = matplotlib.cm.cmap_d[colortable](np.mgrid[0.0:1.0:256j])
HSV = rgb_to_hsv(colortable[...,:3])
# Now we scale and rotate
angle = hue_theta + HSV[...,0]
HSV[...,0] = angle - 2.0*np.pi * np.floor(angle / (2.0*np.pi))
HSV[...,1] = np.clip(sat_scale * HSV[...,1], 0.0, 1.0)
HSV[...,2] = np.clip(val_scale * HSV[...,2], 0.0, 1.0)
fig = plt.figure(figsize=(20, 16))
ax = fig.add_axes([0.0, 0.25, 0.75, 0.75], projection="3d")
x = HSV[...,1] * np.cos(HSV[...,0])
y = HSV[...,1] * np.sin(HSV[...,0])
z = HSV[...,2]
ax.plot(x, y, z, '.-', markevery=10, lw=4, ms=16)
ax.set_xlim(-1.05, 1.05)
ax.set_ylim(-1.05, 1.05)
ax.set_zlim(-0.05, 1.05)
ax.set_xlabel("Hue")
ax.set_ylabel("Saturation")
ax.set_zlabel("Value")
ax = fig.add_axes([0.75, 0.76, 0.25, 0.20], projection="polar")
ax.plot(HSV[:,0], HSV[:,1], '.-', markevery=10, lw=4, ms=16)
ax.set_xlim(-0.05, 2.0*np.pi + 0.05)
ax.set_ylim(-0.05, 1.05)
ax.set_title("Saturation", fontsize=18)
ax = fig.add_axes([0.75, 0.52, 0.25, 0.20])
ax.plot(HSV[:,0], HSV[:,2], '.-', markevery=10, lw=4, ms=16)
ax.set_xlim(-0.05, 2.0*np.pi + 0.05)
ax.set_ylim(-0.05, 1.05)
ax.set_title("Value", fontsize=18)
ax = fig.add_axes([0.75, 0.28, 0.25, 0.20])
ax.plot(HSV[:,1], HSV[:,2], '.-', markevery=10, lw=4, ms=16)
ax.set_xlim(-0.05, 1.05)
ax.set_ylim(-0.05, 1.095)
ax.set_xlabel("Saturation", fontsize=18)
ax.set_ylabel("Value", fontsize=18)
# Now we do three colorbars that span the whole thing
colortable = hsv_to_rgb(HSV)
im = np.ones((16, colortable.shape[0], 4), dtype="uint8")
im[...,:3] = (colortable * 255).astype("uint8")[None,:,:]
im[...,3] = 255
im_no_red = im.copy()
im_no_red[:,:,0] = 0
im_no_green = im.copy()
im_no_green[:,:,1] = 0
im_no_blue = im.copy()
im_no_blue[:,:,2] = 0
aspect = im.shape[0]/im.shape[1] * 10
ax = fig.add_axes([0.0, 0.0, 1.0, 0.05])
ax.imshow(im, interpolation='nearest', aspect = aspect)
ax.set_ylabel("Standard", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
ax = fig.add_axes([0.0, 0.06, 1.0, 0.05])
ax.imshow(im_no_red, interpolation='nearest', aspect = aspect)
ax.set_ylabel("No Red", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
ax = fig.add_axes([0.0, 0.12, 1.0, 0.05])
ax.imshow(im_no_green, interpolation='nearest', aspect = aspect)
ax.set_ylabel("No Green", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
ax = fig.add_axes([0.0, 0.18, 1.0, 0.05])
ax.imshow(im_no_blue, interpolation='nearest', aspect = aspect)
ax.set_ylabel("No Blue", fontsize=16)
ax.yaxis.set_ticks([])
ax.xaxis.set_visible(False)
if title is not None:
fig.suptitle(title, fontsize = 24)
ipywidgets.interact(plot_colortable_hsv, colortable = ["viridis", "jet", "gray", "gist_stern", "flag", "magma"],
hue_theta = (-1.0*np.pi, 1.0*np.pi, 0.01),
sat_scale = (0.01, 10.0, 0.01),
val_scale = (0.01, 10.0, 0.01))
###Output
_____no_output_____ |
mimic/notebooks/model_exploration.nbconvert.ipynb | ###Markdown
Test of the latent representations of the MMVAE. Average precision scores on the latent representations, averaged over the batches.
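The scores gathered below are read back from TensorBoard event files; for reference, a per-label average-precision score of this kind is typically computed as in the following minimal sketch (`y_true` and `y_score` are hypothetical stand-ins; assumes scikit-learn):

```python
from sklearn.metrics import average_precision_score
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])               # hypothetical binary labels for one class
y_score = np.array([0.1, 0.8, 0.4, 0.3, 0.9])    # hypothetical classifier scores for that class
print(average_precision_score(y_true, y_score))  # summarizes the precision-recall curve
```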
###Code
config_path = get_config_path()
with open(config_path, 'rt') as json_file:
config = json.load(json_file)
checkpoint_path = os.path.expanduser(config['dir_fid'])
lr_evals = dict()
for modality_method in ['moe']:
for factorization in os.listdir(os.path.join(checkpoint_path, modality_method)):
for experiment in os.listdir(os.path.join(checkpoint_path, modality_method, factorization)):
if experiment.startswith('Mimic'):
lr_evals[experiment] = dict()
lr_eval = lr_evals[experiment]
experiment_dir_ = os.path.join(checkpoint_path, modality_method, factorization, experiment)
lr_eval_dir = os.path.join(experiment_dir_, 'logs', 'Latent Representation')
if os.path.exists(lr_eval_dir):
for label in os.listdir(lr_eval_dir):
lr_eval[label] = dict()
for lr in os.listdir(os.path.join(lr_eval_dir, label)):
lr_eval[label][lr] = dict()
for logfile in os.listdir(os.path.join(lr_eval_dir, label, lr)):
for summary in summary_iterator(os.path.join(lr_eval_dir, label, lr, logfile)):
value = summary.summary.value
temp = str(value).split('\n')
for elem in temp:
if elem.startswith('simple_value'):
lr_eval[label][lr][summary.step] = elem.split(' ')[1]
import pandas as pd
experiments_dataframe = pd.read_csv('experiments_dataframe.csv')
dfs = []
for experiment in lr_evals.keys():
experiment_evals = lr_evals[experiment]
if experiment_evals:
for label in experiment_evals.keys():
for lr in experiment_evals[label].keys():
steps = experiment_evals[label][lr].keys()
max_step = max(steps)
experiment_evals[label][lr] = experiment_evals[label][lr][max_step]
df = pd.DataFrame(experiment_evals).astype(float)
index = df.index
index.name = f'Steps: {max_step}'
df['mean'] = df.mean(numeric_only=True, axis=1)
dfs.append((df, experiment))
for df, experiment in dfs:
flags = experiments_dataframe.loc[experiments_dataframe['experiment_uid'] == experiment]
print(f'Experiment {experiment} with text encoding: {flags.text_encoding.item()}, '
f'image size: {flags.img_size.item()}, method: {flags.method.item()} \n and trained '
f'for {flags.total_epochs.item()} epochs with batch size: {flags.batch_size.item()} '
f'and {flags.steps_per_training_epoch.item()} steps per training epoch')
display(df)
###Output
Experiment Mimic_2020_10_30_17_20_31_531755 with text encoding: word, image size: 128.0, method: joint_elbo
and trained for 18.0 epochs with batch size: 180.0 and 200.0 steps per training epoch
###Markdown
Evaluation of the classifiers All classifiers were trained for 100 epochs
###Code
labels = ['Lung Opacity', 'Pleural Effusion', 'Support Devices']
FLAGS.num_features = len(alphabet)
FLAGS.batch_size = 300
###Output
_____no_output_____
###Markdown
Evaluation of the character encoding and image size 128
###Code
list_precision_pa, list_precision_lat, list_precision_text = test_clfs(FLAGS, 128, 'char', alphabet)
print('mean precision for pa classifier: ', np.mean(list_precision_pa))
print('mean precision for lat classifier: ',np.mean(list_precision_lat))
print('mean precision for text classifier: ',np.mean(list_precision_text))
###Output
setting dataset
setting modalities
setting model
setting clfs
setting rec_weights
dict_keys(['real', 'random', '', 'PA', 'Lateral', 'text', 'Lateral_PA', 'PA_text', 'Lateral_text', 'Lateral_PA_text'])
char
mean precision for pa classifier: 0.39415439014746345
mean precision for lat classifier: 0.23401534481353384
mean precision for text classifier: 0.5803474159536921
###Markdown
Evaluation of the word encoding and image size 128 The text classifier precision is slightly better for the word encoding
###Code
list_precision_pa, list_precision_lat,list_precision_text = test_clfs(FLAGS, 128, 'word', alphabet)
print('mean precision for pa classifier: ',np.mean(list_precision_pa))
print('mean precision for lat classifier: ',np.mean(list_precision_lat))
print('mean precision for text classifier: ',np.mean(list_precision_text))
###Output
setting dataset
setting modalities
setting model
setting clfs
setting rec_weights
dict_keys(['real', 'random', '', 'PA', 'Lateral', 'text', 'Lateral_PA', 'PA_text', 'Lateral_text', 'Lateral_PA_text'])
word
mean precision for pa classifier: 0.36514591827396703
mean precision for lat classifier: 0.2211051543545864
mean precision for text classifier: 0.674078788856655
###Markdown
Evaluation of image size 256 Loading the 256 dataset crashes the Jupyter notebook for some reason. If that's the case, run it from the command line with: `jupyter nbconvert --to notebook --execute notebooks/model_exploration.ipynb`
###Code
import numpy as np
from mimic.dataio.MimicDataset import Mimic
from mimic.utils.experiment import MimicExperiment
FLAGS.text_encoding = 'char'
FLAGS.img_size = 256
mimic_experiment = MimicExperiment(flags=FLAGS, alphabet=alphabet)
mimic_test = Mimic(FLAGS, mimic_experiment.labels, alphabet, split='test')
model_text = mimic_experiment.clfs['text']
list_precision_pa = test_clf_pa(FLAGS, mimic_experiment, mimic_test, alphabet)
list_precision_lat = test_clf_lat(FLAGS, mimic_experiment, mimic_test, alphabet)
list_precision_text = test_clf_text(FLAGS, mimic_experiment, mimic_test, alphabet)
print('mean precision for pa classifier: ',np.mean(list_precision_pa))
print('mean precision for lat classifier: ',np.mean(list_precision_lat))
print('mean precision for text classifier: ',np.mean(list_precision_text))
###Output
char
char
char
mean precision for pa classifier: 0.41602969689539815
mean precision for lat classifier: 0.40079714910746855
mean precision for text classifier: 0.5953110664731457
|
porto-seguro-safe-driver-prediction/Phase1/Python_Foundation/01-Python Crash Course.ipynb | ###Markdown
___ ___ Python Basics Tutorial **The code accompanies the video walkthrough; no written explanations are included here.** This document is organized in the following order; click a link to jump to the corresponding section. * [Data Types](Data-Types) * [Numbers](Numbers) * [Strings](Strings) * [Printing](Printing) * [Lists](Lists) * [Dictionaries](Dictionaries) * [Booleans](Booleans) * [Tuples](Tuples) * [Sets](Sets) * [Comparison Operators](Comparison-Operators) * [Logical Operators](Logical-Operators) * [Conditional Statements](Conditional-Statements) * [for Loops](for-Loops) * [while Loops](while-Loops) * [range()](range()) * [List Comprehensions => list comprehension](List-Comprehensions) * [Functions](Functions) * [Anonymous Functions => lambda](Anonymous-Functions) * [map & filter](map&filter) * [Miscellaneous](Miscellaneous) ____ Data Types Numbers
###Code
1 + 1
1 * 3
1 / 2
2 ** 4
4 % 2
5 % 2
(2 + 3) * (5 + 5)
###Output
_____no_output_____
###Markdown
Variable Assignment
###Code
# Variable names cannot start with a number or contain special characters
name_of_var = 2
x = 2
y = 3
z = x + y
z
###Output
_____no_output_____
###Markdown
Strings
###Code
'鲸析'
"Whale Project"
"It's a trick!"
###Output
_____no_output_____
###Markdown
Printing
###Code
x = 'hello'
x
print(x)
num = 12
name = 'Sam'
print(f'My number is: {num}, and my name is: {name}')
print('My number is: {one}, and my name is: {two}'.format(one=num,two=name))
print('My number is: {}, and my name is: {}'.format(num,name))
###Output
My number is: 12, and my name is: Sam
###Markdown
Lists
###Code
[1,2,3]
['hi',1,[1,2]]
my_list = ['鲸析','数据','分析']
my_list.append('data analysis')
my_list
my_list[0]
my_list[1]
my_list[1:]
my_list[:1]
my_list[0] = 'NEW'
my_list
nest = [1,2,3,[4,5,['找到我!']]]
nest[3]
nest[3][2]
nest[3][2][0]
###Output
_____no_output_____
###Markdown
Dictionaries
###Code
d = {'key1':'item1','key2':'item2'}
d
d['key1']
###Output
_____no_output_____
###Markdown
Booleans
###Code
True
False
###Output
_____no_output_____
###Markdown
Tuples
###Code
t = (1,2,3)
t[0]
t[0] = 'NEW'  # tuples are immutable, so this assignment raises a TypeError
###Output
_____no_output_____
###Markdown
Sets
###Code
{1,2,3}
{1,2,3,1,2,1,2,3,3,3,3,2,2,2,1,1,2}
set([1,2,3,1,2,1,2,3,3,3,3,2,2,2,1,1,2])
###Output
_____no_output_____
###Markdown
Comparison Operators
###Code
1 > 2
1 < 2
1 >= 1
1 <= 4
1 == 1
'你' == '我'
###Output
_____no_output_____
###Markdown
Logical Operators
###Code
(1 > 2) and (2 < 3)
(1 > 2) or (2 < 3)
(1 == 2) or (2 == 3) or (4 == 4)
###Output
_____no_output_____
###Markdown
Conditional Statements
###Code
if 1 < 2:
print('嘿哈')
if 1 > 2:
print('嘿哈')
if 1 < 2:
print('first')
else:
print('last')
if 1 > 2:
print('first')
else:
print('last')
if 1 == 2:
print('first')
elif 3 == 3:
print('middle')
else:
print('Last')
###Output
middle
###Markdown
for Loops
###Code
seq = [1,2,3,4,5]
for item in seq:
print(item)
for item in seq:
print('哈哈哈')
for num in seq:
print(num+num)
###Output
2
4
6
8
10
###Markdown
while Loops
###Code
数字 = 1
while 数字 < 5:
print('数字是:{}'.format(数字))
数字 = 数字 + 1
###Output
数字是:1
数字是:2
数字是:3
数字是:4
###Markdown
range()
###Code
range(5)
for i in range(5):
print(i)
list(range(5))
###Output
_____no_output_____
###Markdown
List Comprehensions
###Code
x = list(range(10))
x
out = []
for item in x:
out.append(item**3)
print(out)
[item**3 for item in x]
###Output
_____no_output_____
###Markdown
Functions
###Code
def my_func(param1='default'):
"""
    Docstring: the function's parameters are usually documented here
"""
print(param1)
my_func
my_func()
my_func('new param')
my_func(param1='new param')
def my_square(x):
return x**2
out = my_square(2)
print(out)
###Output
4
###Markdown
Anonymous Functions (lambda)
###Code
def quad(x):
return (x+1)**2
quad(2)
lambda var: var*2
###Output
_____no_output_____
###Markdown
map and filter
###Code
seq = [1,2,3,4,5]
map(quad,seq)
list(map(quad,seq))
list(map(lambda var: var*2,seq))
filter(lambda item: item%2 == 0,seq)
list(filter(lambda item: item%2 == 0,seq))
###Output
_____no_output_____
###Markdown
Miscellaneous
###Code
st = 'hello my name is Whale'
st.lower()
st.upper()
st.split()
tweet = "let's go #数据科学"
tweet.split('#')
tweet.split('#')[1]
d
d.keys()
d.items()
d.values()
lst = [1,2,3]
lst.pop()
lst
'x' in [1,2,3]
'x' in ['x','y','z']
###Output
_____no_output_____ |
Inverted_Residual.ipynb | ###Markdown
**Inverted Residual** The difference between a residual (bottleneck) block and an inverted residual is that the inverted residual expands the features with the first conv instead of reducing them. The image below should make this clear: instead of going **wide -> narrow -> wide** as in a normal bottleneck block, it does the opposite: **narrow -> wide -> narrow**. **Reference**: [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)
###Code
!nvidia-smi
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import glob
import cv2
import shutil
from PIL import Image
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import torchvision
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torch.utils.data import DataLoader,Dataset
from torchvision.models import resnet18, resnet34 ,resnet50
import os
import torchvision.models as models
device = 'cuda' if torch.cuda.is_available() else 'cpu'
best_acc = 0 # best test accuracy
start_epoch = 0 # start from epoch 0 or last checkpoint epoch
use_gpu = torch.cuda.is_available()
use_gpu
from typing import Optional
from functools import partial
class ConvNormAct(nn.Sequential):
def __init__(
self,
in_features: int,
out_features: int,
kernel_size: int,
norm: nn.Module = nn.BatchNorm2d,
act: nn.Module = nn.ReLU,
**kwargs
):
super().__init__(
            nn.Conv2d(
                in_features,
                out_features,
                kernel_size=kernel_size,
                padding=kernel_size // 2,
                **kwargs  # forward any extra Conv2d arguments (e.g. stride, bias)
            ),
norm(out_features),
act(),
)
# --------------------------------------------------------------------
Conv1X1BnReLU = partial(ConvNormAct, kernel_size=1)
Conv3X3BnReLU = partial(ConvNormAct, kernel_size=3)
# ---------------------------------------------------------------------
class ResidualAdd(nn.Module):
def __init__(self, block: nn.Module, shortcut: Optional[nn.Module] = None):
super().__init__()
self.block = block
self.shortcut = shortcut
def forward(self,x):
res = x
x = self.block(x)
if self.shortcut:
res = self.shortcut(res)
x += res
return x
# ----------------------------------------------------------------------
# class BottleNeck(nn.Sequential):
# def __init__(self, in_features: int, out_features: int, reduction: int = 4):
# reduced_features = out_features // reduction
# super().__init__(
# nn.Sequential(
# ResidualAdd(
# nn.Sequential(
# # wide -> narrow
# Conv1X1BnReLU(in_features, reduced_features),
# # narrow -> narrow
# Conv3X3BnReLU(reduced_features, reduced_features),
# # narrow -> wide
# Conv1X1BnReLU(reduced_features, out_features, act=nn.Identity),
# ),
# shortcut=Conv1X1BnReLU(in_features, out_features)
# if in_features != out_features
# else None,
# ),
# nn.ReLU(),
# )
# )
# ---------------------------------------------------------------------
class InvertedResidual(nn.Sequential):
def __init__(self, in_features: int, out_features: int, expansion: int = 4):
expanded_features = in_features * expansion
super().__init__(
nn.Sequential(
ResidualAdd(
nn.Sequential(
# narrow -> wide
Conv1X1BnReLU(in_features, expanded_features),
# wide -> wide
Conv3X3BnReLU(expanded_features, expanded_features),
# wide -> narrow
Conv1X1BnReLU(expanded_features, out_features, act=nn.Identity),
),
shortcut=Conv1X1BnReLU(in_features, out_features)
if in_features != out_features
else None,
),
nn.ReLU(),
)
)
img_BNeck = Image.open(str('/content/sample_data/cat.jpeg'))
plt.imshow(img_BNeck)
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=0., std=1.)
])
img_BNeck = transform(img_BNeck)
print(f"Image shape before: {img_BNeck.shape}")
img_BNeck = img_BNeck.unsqueeze(0)
print(f"Image shape after add dim: {img_BNeck.shape}")
img_BNeck = img_BNeck.to(device)
invertedresidual = InvertedResidual(3,6)
invertedresidual
invertedresidual.to(device)
invertedresidual(img_BNeck).shape
ir = invertedresidual(img_BNeck)
shortcut_image = ir.squeeze(0)
shortcut_image.shape
shortcut_image = torch.sum(shortcut_image / 6, 0)  # average over the 6 output channels
shortcut_image = shortcut_image.data.cpu().numpy()
shortcut_image.shape #---> (224, 224)
plt.imshow(shortcut_image)
###Output
_____no_output_____
###Markdown
In the following **MobileNetLikeBlock** class, the residual (shortcut) connection is only used when the input and output feature counts match; otherwise a plain `nn.Sequential` is used instead
###Code
class ResidualAdd(nn.Module):
def __init__(self, block: nn.Module, shortcut: Optional[nn.Module] = None):
super().__init__()
self.block = block
self.shortcut = shortcut
def forward(self,x):
res = x
x = self.block(x)
if self.shortcut:
res = self.shortcut(res)
x += res
return x
class MobileNetLikeBlock(nn.Sequential):
def __init__(self, in_features: int, out_features: int, expansion: int = 4):
# use ResidualAdd if features match, otherwise a normal Sequential
residual = ResidualAdd if in_features == out_features else nn.Sequential
expanded_features = in_features * expansion
super().__init__(
nn.Sequential(
residual(
nn.Sequential(
# narrow -> wide
Conv1X1BnReLU(in_features, expanded_features),
# wide -> wide
Conv3X3BnReLU(expanded_features, expanded_features),
# wide -> narrow
Conv1X1BnReLU(expanded_features, out_features, act=nn.Identity),
),
),
nn.ReLU(),
)
)
# import torch
# x = torch.randn((1, 32, 56, 56))
# Conv1X1BnReLU(32, 64)(x).shape
MLB=MobileNetLikeBlock(3, 6)
MLB.to(device)
MLB
# MLB(img_BNeck).shape
# MobileNetLikeBlock(32, 32)(x).shape
MLB = MLB(img_BNeck)
shortcut_image = MLB.squeeze(0)
shortcut_image.shape
shortcut_image = torch.sum(shortcut_image / 6, 0)  # average over the 6 output channels
shortcut_image = shortcut_image.data.cpu().numpy()
shortcut_image.shape #---> (224, 224)
plt.imshow(shortcut_image)
MLB1=MobileNetLikeBlock(3, 3)
MLB1
MLB1.to(device)
MLB1 = MLB1(img_BNeck)
shortcut_image = MLB1.squeeze(0)
shortcut_image.shape
shortcut_image = torch.sum(shortcut_image / 3, 0)  # average over the 3 output channels
shortcut_image = shortcut_image.data.cpu().numpy()
shortcut_image.shape #---> (224, 224)
plt.imshow(shortcut_image)
###Output
_____no_output_____ |
c4_convolutional_neural_networks/week_10/Convolution_model_Step_by_Step_v2a.ipynb | ###Markdown
Convolutional Neural Networks: Step by StepWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**:- Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Updates If you were working on the notebook before this update...* The current notebook is version "v2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* clarified example used for padding function. Updated starter code for padding function.* `conv_forward` has additional hints to help students if they're stuck.* `conv_forward` places code for `vert_start` and `vert_end` within the `for h in range(...)` loop; to avoid redundant calculations. Similarly updated `horiz_start` and `horiz_end`. **Thanks to our mentor Kevin Brown for pointing this out.*** `conv_forward` breaks down the `Z[i, h, w, c]` single line calculation into 3 lines, for clarity.* `conv_forward` test case checks that students don't accidentally use n_H_prev instead of n_H, use n_W_prev instead of n_W, and don't accidentally swap n_H with n_W* `pool_forward` properly nests calculations of `vert_start`, `vert_end`, `horiz_start`, and `horiz_end` to avoid redundant calculations.* `pool_forward' has two new test cases that check for a correct implementation of stride (the height and width of the previous layer's activations should be large enough relative to the filter dimensions so that a stride can take place). * `conv_backward`: initialize `Z` and `cache` variables within unit test, to make it independent of unit testing that occurs in the `conv_forward` section of the assignment.* **Many thanks to our course mentor, Paul Mielke, for proposing these test cases.** 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - Outline of the AssignmentYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:- Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional)- Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural NetworksAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-PaddingZero-padding adds zeros around the border of an image: **Figure 1** : **Zero-Padding** Image (3 channels, RGB) with a padding of 2. The main benefits of padding are the following:- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels as the edges of an image.**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:```pythona = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), mode='constant', constant_values = (0,0))```
###Code
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0,0), (pad, pad), (pad, pad), (0,0)), mode='constant', constant_values = (0,0))
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =\n", x.shape)
print ("x_pad.shape =\n", x_pad.shape)
print ("x[1,1] =\n", x[1,1])
print ("x_pad[1,1] =\n", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
###Output
x.shape =
(4, 3, 3, 2)
x_pad.shape =
(4, 7, 7, 2)
x[1,1] =
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] =
[[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]
[0. 0.]]
###Markdown
**Expected Output**:```x.shape = (4, 3, 3, 2)x_pad.shape = (4, 7, 7, 2)x[1,1] = [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]]x_pad[1,1] = [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]]``` 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input- Outputs another volume (usually of different size) **Figure 2** : **Convolution operation** with a filter of 3x3 and a stride of 1 (stride = amount you move the window each time you slide) In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html). **Note**: The variable b will be passed in as a numpy array. If we add a scalar (a float or integer) to a numpy array, the result is a numpy array. In the special case when a numpy array contains a single value, we can cast it as a float to convert it to a scalar.
###Code
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, the result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice_prev and W. Do not add the bias yet.
s = np.multiply(a_slice_prev, W)
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = Z + float(b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
###Output
Z = -6.999089450680221
###Markdown
**Expected Output**: **Z** -6.99908945068 3.3 - Convolutional Neural Networks - Forward passIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: **Exercise**: Implement the function below to convolve the filters `W` on an input activation `A_prev`. This function takes the following inputs:* `A_prev`, the activations output by the previous layer (for a batch of m inputs); * Weights are denoted by `W`. The filter window size is `f` by `f`.* The bias vector is `b`, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:```pythona_slice_prev = a_prev[0:2,0:2,:]```Notice how this gives a 3D slice that has height 2, width 2, and depth 3. Depth is the number of channels. This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find out how each of the corner can be defined using h, w, f and s in the code below. **Figure 3** : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** This figure shows only a single channel. **Reminder**:The formulas relating the output shape of the convolution to the input shape is:$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_C = \text{number of filters used in the convolution}$$For this exercise, we won't worry about vectorization, and will just implement everything with for-loops. Additional Hints if you're stuck* You will want to use array slicing (e.g.`varname[0:1,:,3:5]`) for the following variables: `a_prev_pad` ,`W`, `b` Copy the starter code of the function and run it outside of the defined function, in separate cells. Check that the subset of each array is the size and dimension that you're expecting. * To decide how to get the vert_start, vert_end; horiz_start, horiz_end, remember that these are indices of the previous layer. Draw an example of a previous padded layer (8 x 8, for instance), and the current (output layer) (2 x 2, for instance). The output layer's indices are denoted by `h` and `w`. * Make sure that `a_slice_prev` has a height, width and depth.* Remember that `a_prev_pad` is a subset of `A_prev_pad`. Think about which one should be used within the for loops.
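As a quick, optional sanity check on the output-shape formulas above (this snippet is not part of the graded code; the numbers simply mirror the test cell below, where `A_prev` has height 5 and width 7, `f = 3`, `pad = 1` and `stride = 2`):
```python
import numpy as np

# Illustrative check of n_H and n_W for the conv_forward test case below
n_H_prev, n_W_prev, f, pad, stride = 5, 7, 3, 1, 2
n_H = int(np.floor((n_H_prev - f + 2 * pad) / stride)) + 1
n_W = int(np.floor((n_W_prev - f + 2 * pad) / stride)) + 1
print(n_H, n_W)  # 3 4 -- matching Z.shape[1:3] in the test below
```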
###Code
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer,
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above.
# Hint: use int() to apply the 'floor' operation. (≈2 lines)
n_H = int((n_H_prev - f + 2*pad)/stride) + 1
n_W = int((n_W_prev - f + 2*pad)/stride) + 1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
vert_start = h * stride
vert_end = vert_start + f
for w in range(n_W): # loop over horizontal axis of the output volume
# Find the horizontal start and end of the current "slice" (≈2 lines)
horiz_start = w * stride
horiz_end = horiz_start + f
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈3 line)
weights = W[:, :, :, c]
biases = b[:, :, :, c]
Z[i, h, w, c] = conv_single_step(a_slice_prev, weights, biases)
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,5,7,4)
W = np.random.randn(3,3,4,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 1,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =\n", np.mean(Z))
print("Z[3,2,1] =\n", Z[3,2,1])
print("cache_conv[0][1][2][3] =\n", cache_conv[0][1][2][3])
###Output
Z's mean =
0.6923608807576933
Z[3,2,1] =
[-1.28912231 2.27650251 6.61941931 0.95527176 8.25132576 2.31329639
13.00689405 2.34576051]
cache_conv[0][1][2][3] =
[-1.1191154 1.9560789 -0.3264995 -1.34267579]
###Markdown
**Expected Output**:```Z's mean = 0.692360880758Z[3,2,1] = [ -1.28912231 2.27650251 6.61941931 0.95527176 8.25132576 2.31329639 13.00689405 2.34576051]cache_conv[0][1][2][3] = [-1.1191154 1.9560789 -0.3264995 -1.34267579]``` Finally, CONV layer should also contain an activation, in which case we would add the following line of code:```python Convolve the window to get back one output neuronZ[i, h, w, c] = ... Apply activationA[i, h, w, c] = activation(Z[i, h, w, c])```You don't need to do it here. 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \times f$ window you would compute a *max* or *average* over. 4.1 - Forward PoolingNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.**Reminder**:As there's no padding, the formulas binding the output shape of the pooling to the input shape is:$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$$$ n_C = n_{C_{prev}}$$
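The same kind of sanity check works for the pooling formulas (again purely illustrative; the 5x5 input and `f = 3` mirror the two test cases below):
```python
import numpy as np

# Illustrative check of the pooling output size for strides 1 and 2 (no padding)
n_prev, f = 5, 3
for stride in (1, 2):
    n_out = int(np.floor((n_prev - f) / stride)) + 1
    print(stride, n_out)  # stride 1 -> 3, stride 2 -> 2, matching A.shape below
```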
###Code
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
vert_start = h * stride
vert_end = vert_start + f
for w in range(n_W): # loop on the horizontal axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
horiz_start = w * stride
horiz_end = horiz_start + f
for c in range (n_C): # loop over the channels of the output volume
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end , c]
# Compute the pooling operation on the slice.
# Use an if statement to differentiate the modes.
# Use np.max and np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
# Case 1: stride of 1
np.random.seed(1)
A_prev = np.random.randn(2, 5, 5, 3)
hparameters = {"stride" : 1, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A.shape = " + str(A.shape))
print("A =\n", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A.shape = " + str(A.shape))
print("A =\n", A)
###Output
mode = max
A.shape = (2, 3, 3, 3)
A =
[[[[1.74481176 0.90159072 1.65980218]
[1.74481176 1.46210794 1.65980218]
[1.74481176 1.6924546 1.65980218]]
[[1.14472371 0.90159072 2.10025514]
[1.14472371 0.90159072 1.65980218]
[1.14472371 1.6924546 1.65980218]]
[[1.13162939 1.51981682 2.18557541]
[1.13162939 1.51981682 2.18557541]
[1.13162939 1.6924546 2.18557541]]]
[[[1.19891788 0.84616065 0.82797464]
[0.69803203 0.84616065 1.2245077 ]
[0.69803203 1.12141771 1.2245077 ]]
[[1.96710175 0.84616065 1.27375593]
[1.96710175 0.84616065 1.23616403]
[1.62765075 1.12141771 1.2245077 ]]
[[1.96710175 0.86888616 1.27375593]
[1.96710175 0.86888616 1.23616403]
[1.62765075 1.12141771 0.79280687]]]]
mode = average
A.shape = (2, 3, 3, 3)
A =
[[[[-3.01046719e-02 -3.24021315e-03 -3.36298859e-01]
[ 1.43310483e-01 1.93146751e-01 -4.44905196e-01]
[ 1.28934436e-01 2.22428468e-01 1.25067597e-01]]
[[-3.81801899e-01 1.59993515e-02 1.70562706e-01]
[ 4.73707165e-02 2.59244658e-02 9.20338402e-02]
[ 3.97048605e-02 1.57189094e-01 3.45302489e-01]]
[[-3.82680519e-01 2.32579951e-01 6.25997903e-01]
[-2.47157416e-01 -3.48524998e-04 3.50539717e-01]
[-9.52551510e-02 2.68511000e-01 4.66056368e-01]]]
[[[-1.73134159e-01 3.23771981e-01 -3.43175716e-01]
[ 3.80634669e-02 7.26706274e-02 -2.30268958e-01]
[ 2.03009393e-02 1.41414785e-01 -1.23158476e-02]]
[[ 4.44976963e-01 -2.61694592e-03 -3.10403073e-01]
[ 5.08114737e-01 -2.34937338e-01 -2.39611830e-01]
[ 1.18726772e-01 1.72552294e-01 -2.21121966e-01]]
[[ 4.29449255e-01 8.44699612e-02 -2.72909051e-01]
[ 6.76351685e-01 -1.20138225e-01 -2.44076712e-01]
[ 1.50774518e-01 2.89111751e-01 1.23238536e-03]]]]
###Markdown
** Expected Output**```mode = maxA.shape = (2, 3, 3, 3)A = [[[[ 1.74481176 0.90159072 1.65980218] [ 1.74481176 1.46210794 1.65980218] [ 1.74481176 1.6924546 1.65980218]] [[ 1.14472371 0.90159072 2.10025514] [ 1.14472371 0.90159072 1.65980218] [ 1.14472371 1.6924546 1.65980218]] [[ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.6924546 2.18557541]]] [[[ 1.19891788 0.84616065 0.82797464] [ 0.69803203 0.84616065 1.2245077 ] [ 0.69803203 1.12141771 1.2245077 ]] [[ 1.96710175 0.84616065 1.27375593] [ 1.96710175 0.84616065 1.23616403] [ 1.62765075 1.12141771 1.2245077 ]] [[ 1.96710175 0.86888616 1.27375593] [ 1.96710175 0.86888616 1.23616403] [ 1.62765075 1.12141771 0.79280687]]]]mode = averageA.shape = (2, 3, 3, 3)A = [[[[ -3.01046719e-02 -3.24021315e-03 -3.36298859e-01] [ 1.43310483e-01 1.93146751e-01 -4.44905196e-01] [ 1.28934436e-01 2.22428468e-01 1.25067597e-01]] [[ -3.81801899e-01 1.59993515e-02 1.70562706e-01] [ 4.73707165e-02 2.59244658e-02 9.20338402e-02] [ 3.97048605e-02 1.57189094e-01 3.45302489e-01]] [[ -3.82680519e-01 2.32579951e-01 6.25997903e-01] [ -2.47157416e-01 -3.48524998e-04 3.50539717e-01] [ -9.52551510e-02 2.68511000e-01 4.66056368e-01]]] [[[ -1.73134159e-01 3.23771981e-01 -3.43175716e-01] [ 3.80634669e-02 7.26706274e-02 -2.30268958e-01] [ 2.03009393e-02 1.41414785e-01 -1.23158476e-02]] [[ 4.44976963e-01 -2.61694592e-03 -3.10403073e-01] [ 5.08114737e-01 -2.34937338e-01 -2.39611830e-01] [ 1.18726772e-01 1.72552294e-01 -2.21121966e-01]] [[ 4.29449255e-01 8.44699612e-02 -2.72909051e-01] [ 6.76351685e-01 -1.20138225e-01 -2.44076712e-01] [ 1.50774518e-01 2.89111751e-01 1.23238536e-03]]]]```
###Code
# Case 2: stride of 2
np.random.seed(1)
A_prev = np.random.randn(2, 5, 5, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A.shape = " + str(A.shape))
print("A =\n", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A.shape = " + str(A.shape))
print("A =\n", A)
###Output
mode = max
A.shape = (2, 2, 2, 3)
A =
[[[[1.74481176 0.90159072 1.65980218]
[1.74481176 1.6924546 1.65980218]]
[[1.13162939 1.51981682 2.18557541]
[1.13162939 1.6924546 2.18557541]]]
[[[1.19891788 0.84616065 0.82797464]
[0.69803203 1.12141771 1.2245077 ]]
[[1.96710175 0.86888616 1.27375593]
[1.62765075 1.12141771 0.79280687]]]]
mode = average
A.shape = (2, 2, 2, 3)
A =
[[[[-0.03010467 -0.00324021 -0.33629886]
[ 0.12893444 0.22242847 0.1250676 ]]
[[-0.38268052 0.23257995 0.6259979 ]
[-0.09525515 0.268511 0.46605637]]]
[[[-0.17313416 0.32377198 -0.34317572]
[ 0.02030094 0.14141479 -0.01231585]]
[[ 0.42944926 0.08446996 -0.27290905]
[ 0.15077452 0.28911175 0.00123239]]]]
###Markdown
**Expected Output:** ```mode = maxA.shape = (2, 2, 2, 3)A = [[[[ 1.74481176 0.90159072 1.65980218] [ 1.74481176 1.6924546 1.65980218]] [[ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.6924546 2.18557541]]] [[[ 1.19891788 0.84616065 0.82797464] [ 0.69803203 1.12141771 1.2245077 ]] [[ 1.96710175 0.86888616 1.27375593] [ 1.62765075 1.12141771 0.79280687]]]]mode = averageA.shape = (2, 2, 2, 3)A = [[[[-0.03010467 -0.00324021 -0.33629886] [ 0.12893444 0.22242847 0.1250676 ]] [[-0.38268052 0.23257995 0.6259979 ] [-0.09525515 0.268511 0.46605637]]] [[[-0.17313416 0.32377198 -0.34317572] [ 0.02030094 0.14141479 -0.01231585]] [[ 0.42944926 0.08446996 -0.27290905] [ 0.15077452 0.28911175 0.00123239]]]]``` Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we will briefly present them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA:This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into:```pythonda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]``` 5.1.2 - Computing dW:This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. 
In code, inside the appropriate for-loops, this formula translates into:```pythondW[:,:,:,c] += a_slice * dZ[i, h, w, c]``` 5.1.3 - Computing db:This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into:```pythondb[:,:,:,c] += dZ[i, h, w, c]```**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
###Code
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters['stride']
pad = hparameters['pad']
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
# Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
# We'll run conv_forward to initialize the 'Z' and 'cache_conv",
# which we'll use to test the conv_backward function
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
# Test conv_backward
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
###Output
dA_mean = 1.4524377775388075
dW_mean = 1.7269914583139097
db_mean = 7.839232564616838
###Markdown
** Expected Output: ** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward passNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagation the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix}1 && 3 \\4 && 2\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}0 && 0 \\1 && 0\end{bmatrix}\tag{4}$$As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints:- [np.max()]() may be helpful. It computes the maximum of an array.- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:```A[i,j] = True if X[i,j] = xA[i,j] = False if X[i,j] != x```- Here, you don't need to consider cases where there are several maxima in a matrix.
###Code
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = (x == np.max(x))
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
###Output
x = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
mask = [[ True False False]
[False False False]]
###Markdown
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}1/4 && 1/4 \\1/4 && 1/4\end{bmatrix}\tag{5}$$This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
###Code
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
    # Compute the value to distribute on the matrix (≈1 line)
    average = dz / (n_H * n_W)
    # Create a matrix where every entry is the "average" value (≈1 line)
    a = np.ones((n_H, n_W)) * average
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
###Output
distributed value = [[0.5 0.5]
[0.5 0.5]]
###Markdown
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.
###Code
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters['stride']
f = hparameters['f']
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros(A_prev.shape)
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += np.multiply(dA[i, h, w, c], mask)
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f, f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
###Output
mode = max
mean of dA = 0.14571390272918056
dA_prev[1,1] = [[ 0. 0. ]
[ 5.05844394 -1.68282702]
[ 0. 0. ]]
mode = average
mean of dA = 0.14571390272918056
dA_prev[1,1] = [[-0.53193997 5.7923756 ]
[ 0.32005384 1.24308632]
[ 0.85199382 -4.54928928]]
|
notebooks/text_analytics.ipynb | ###Markdown
PUBLICATIONS OF JUDICIAL ACTS 1. Project understanding **Objective:** collect, preprocess, classify and analyse publications of judicial acts. **Data:** publications of judicial acts obtained from the Diário da Justiça Eletrônico Nacional (DJEN) platform, maintained by the Conselho Nacional de Justiça (CNJ). **Judicial units:** Civil Courts (Varas Cíveis) of the Judicial District of São Luís/MA (1st to 16th). **Period:** 01/01/2021 to 09/08/2021 2. Libraries and functions
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
import warnings
warnings.filterwarnings ('ignore')
import matplotlib.image as mpimg
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.wordnet import WordNetLemmatizer
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction import _stop_words
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report, precision_score, recall_score
import xgboost as xgb
# Text preprocessing function
def trata_texto(text):
text = text.replace(regex=r'[!/,.-]', value='').apply(lambda x: x.lower())
stop_words = stopwords.words('portuguese')
text = text.apply(lambda x: ' '.join([word for word in x.split() if word not in (stop_words)]))
text = text.map(lambda x: word_tokenize(x))
snowball = SnowballStemmer(language = 'portuguese')
text = text.map(lambda x: [snowball.stem(y) for y in x])
text = text.apply(lambda x: ' '.join(x))
return text
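# Hypothetical usage sketch (the example string below is invented for illustration):
# trata_texto(pd.Series(['Processo 123: o juiz proferiu sentença!']))
# -> Series of lowercased, punctuation- and stop-word-free, stemmed tokens re-joined into one string per row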
###Output
_____no_output_____
###Markdown
3. Data collection The data below were obtained from the Conselho Nacional de Justiça API endpoint (https://comunicaapi.pje.jus.br/), using the 'requests' and 'json' libraries in Python.
###Code
df = pd.read_csv('../data/dataset.csv')
df
###Output
_____no_output_____
###Markdown
4. Text processing 4.1. Preprocessing Calling the function created for text preprocessing. The function includes the following steps: (1) removal of special characters (2) conversion to lowercase (3) stop-word removal (4) tokenization (5) stemming
###Code
df['texto_processado'] = trata_texto(df['texto'])
###Output
_____no_output_____
###Markdown
4.2. Vectorization
###Code
tfidf = TfidfVectorizer(max_df=0.90, min_df=50, max_features=1000)
vectorized = tfidf.fit_transform(df['texto_processado'])
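# Note: vectorized is a sparse document-term matrix of TF-IDF weights; the learned vocabulary can be
# inspected with tfidf.get_feature_names_out() (or get_feature_names() on older scikit-learn releases)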
vectorized
###Output
_____no_output_____
###Markdown
5. Text classification The judicial publications are not labeled with the type of judicial act. Therefore, the texts were labeled into the following categories: [ATO ORDINATÓRIO] (clerk's order) [DESPACHO/DECISÃO] (order/decision) [SENTENÇA] (judgment) [EDITAL] (public notice). To do so, the following steps were taken: 1) clustering of the texts to identify the best samples for manual labeling 2) classification of the whole text collection based on the labeled samples 5.1. Clustering - K-means
###Code
# Elbow method to find the ideal number of clusters
iters = [2, 20, 40, 60, 80, 100, 125, 150, 175, 200, 250, 300, 350, 400, 500, 600, 800, 1000]
sse = []
models = []
for k in iters:
model = MiniBatchKMeans(n_clusters=k, init_size=256, batch_size=512, random_state=42).fit(vectorized)
sse.append(model.inertia_)
plt.plot(iters, sse)
plt.savefig(f'../img/elbow_method.png')
plt.show()
###Output
_____no_output_____
###Markdown
The elbow method did not point to an exact number of clusters for the model, but the plot shows that beyond 100 clusters the variation in inertia (sum of squared distances of the samples to the cluster center) decreases significantly. Therefore, prediction is carried out with the number of clusters set to 100 and, afterwards, each cluster is assigned to one of the desired categories.
###Code
# Model prediction with the number of clusters k = 100
k = 100
model = MiniBatchKMeans(n_clusters=k, init_size=256, batch_size=512, random_state=42).fit(vectorized)
df['cluster'] = model.predict(vectorized)
df['cluster'].hist(bins=100)
###Output
_____no_output_____
###Markdown
5.2. Identifying the best samples for manual labeling - cosine similarity
###Code
# Best samples of each cluster by cosine similarity
topn_indices = []
for i in range(0, 100):
similarities = []
centroid = model.cluster_centers_[i]
for v in vectorized:
similarities.append(cosine_similarity([centroid], v))
indexes = np.array([s[0][0] for s in similarities])
indexes = np.argsort(indexes)[::-1]
topn_indices.append(indexes)
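# Note: an equivalent, usually faster formulation (a sketch, not the original approach) computes all
# centroid-document similarities in a single call and sorts each row:
# sims = cosine_similarity(model.cluster_centers_, vectorized) # shape (n_clusters, n_documents)
# topn_indices = [np.argsort(row)[::-1] for row in sims]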
# Word cloud generation over the 50 best samples of each cluster
topn = 50
stopwordcloud = ['var', 'cível', 'estad', 'maranhã', 'juíz', 'juiz', 'direit', 'term', 'luís', 'process', 'açã', 'secret', 'judicial', 'comarc', 'únic', 'digital', 'únic digital',
'unic', 'únic digital']
wordcloud = WordCloud(width=800, height=400, background_color='white', max_words=50, stopwords=stopwordcloud, random_state=42)
for i in range(0, 100):
text_wordcloud = ' '.join(df['texto_processado'].iloc[topn_indices[i][0:topn]])
nuvem = wordcloud.generate(text_wordcloud)
plt.figure(figsize=(10,10))
plt.axis("off")
plt.title(f'CLUSTER nº {i}')
#plt.imshow(nuvem)
#plt.savefig(f'../img/wordcloud_amostras_cluster_{i}.png')
# Example of one of the generated word clouds
plt.figure(figsize=(10,10))
plt.axis("off")
plt.imshow(nuvem)
# Exporting the indices of the best samples
np.savetxt('../data/topn_indices.csv', topn_indices, delimiter =",")
###Output
_____no_output_____
###Markdown
5.3. Manual labeling
###Code
# Manual labeling of the best samples of each cluster
ato_ord = [1,2,5,6,8,10,15,17,19,21,23,25,28,34,44,45,48,50,54,55,66,78,86,95,96]
desp_dec = [0,11,12,14,16,18,20,22,24,26,29,30,31,32,35,36,43,46,49,51,52,53,56,58,59,60,61,62,64,65,68,69,70,71,73,74,75,77,79,82,83,84,85,88,94,97,99]
sentenca = [3,4,9,27,33,37,39,40,41,42,47,63,67,72,76,80,87,91,92,98]
edital = [7]
# Building the dataset of labeled samples
indices_amostras = []
for i in range(0, 100):
for j in range(0, 50):
indices_amostras.append(topn_indices[i][j])
df_amostras = df.loc[indices_amostras]
df_amostras['categoria'] = df_amostras['cluster'].apply(lambda x: 'ato_ord' if x in ato_ord else 'desp_dec' if x in desp_dec else 'sentenca' if x in sentenca else 'edital' if x in edital else 'indefinido')
df_amostras = df_amostras[['texto', 'categoria']]
df_amostras = df_amostras[df_amostras['categoria']!='indefinido']
df_amostras
# Category frequencies
sns.countplot(df_amostras['categoria'])
# Exporting the dataset
df_amostras.to_csv('../data/dataset_amostras_rotulado.csv')
###Output
_____no_output_____
###Markdown
5.4. Training and evaluation of the classification algorithms
###Code
# Training data
X_train = df_amostras['texto']
y_train = df_amostras['categoria']
# Test data - manually labeled by the stakeholder
df_teste = pd.read_csv('../data/dataset_validacao.csv')
X_test = df_teste['texto']
y_test = df_teste['categoria']
# Preprocessing
X_train = trata_texto(X_train)
X_train = tfidf.transform(X_train)
X_test = trata_texto(X_test)
X_test = tfidf.transform(X_test)
X_train
X_test
df_teste['categoria'].value_counts()
###Output
_____no_output_____
###Markdown
Logistic regression
###Code
# Training and prediction
lr = LogisticRegression()
lr.fit(X_train,y_train)
y_pred = lr.predict(X_test)
# Evaluation
print(classification_report(y_test, y_pred))
cfm = confusion_matrix(y_test, y_pred)
sns.heatmap(cfm, annot=True, fmt='.0f').set_title('Logistic_Regression')
###Output
precision recall f1-score support
ato_ord 1.00 0.93 0.97 15
desp_dec 0.86 1.00 0.92 30
edital 1.00 0.20 0.33 5
sentenca 1.00 1.00 1.00 9
accuracy 0.92 59
macro avg 0.96 0.78 0.81 59
weighted avg 0.93 0.92 0.90 59
###Markdown
Decision tree
###Code
# Training and prediction
dtree = DecisionTreeClassifier()
dtree.fit(X_train,y_train)
y_pred = dtree.predict(X_test)
# Evaluation
print(classification_report(y_test, y_pred))
cfm = confusion_matrix(y_test, y_pred)
sns.heatmap(cfm, annot=True, fmt='.0f').set_title('Decision_Tree')
###Output
precision recall f1-score support
ato_ord 1.00 1.00 1.00 15
desp_dec 0.88 0.93 0.90 30
edital 1.00 0.60 0.75 5
sentenca 0.78 0.78 0.78 9
accuracy 0.90 59
macro avg 0.91 0.83 0.86 59
weighted avg 0.90 0.90 0.90 59
###Markdown
Random forest
###Code
# Training and prediction
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
# Evaluation
print(classification_report(y_test, y_pred))
cfm = confusion_matrix(y_test, y_pred)
sns.heatmap(cfm, annot=True, fmt='.0f').set_title('Random_Forest')
###Output
precision recall f1-score support
ato_ord 1.00 1.00 1.00 15
desp_dec 0.88 1.00 0.94 30
edital 1.00 0.20 0.33 5
sentenca 1.00 1.00 1.00 9
accuracy 0.93 59
macro avg 0.97 0.80 0.82 59
weighted avg 0.94 0.93 0.91 59
###Markdown
Extreme gradient boosting (XGBoost)
###Code
# Training and prediction
xgbmodel = xgb.XGBClassifier(random_state=0)
xgbmodel.fit(X_train,y_train)
y_pred=xgbmodel.predict(X_test)
# Evaluation
print(classification_report(y_test,y_pred))
cfm = confusion_matrix(y_test, y_pred)
sns.heatmap(cfm, annot=True, fmt='.0f').set_title('XGBoost')
###Output
[09:51:20] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.3.0/src/learner.cc:1061: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'multi:softprob' was changed from 'merror' to 'mlogloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
precision recall f1-score support
ato_ord 1.00 1.00 1.00 15
desp_dec 0.88 1.00 0.94 30
edital 1.00 0.20 0.33 5
sentenca 1.00 1.00 1.00 9
accuracy 0.93 59
macro avg 0.97 0.80 0.82 59
weighted avg 0.94 0.93 0.91 59
###Markdown
**Conclusion on the classification algorithms:*** Considering the accuracy and precision of the trained classifiers, the RANDOM FOREST algorithm showed the best performance. 5.5. Final classification of the dataset
###Code
# Dataset for final classification
df['categoria'] = rf.predict(vectorized)
df
# Exporting the final dataset for analysis in Power BI
df.to_csv('../data/dataset_final.csv')
###Output
_____no_output_____
###Markdown
6. Analysis of the final data in Power BI
###Code
img = mpimg.imread('../img/imagem_dashboard.png')
plt.figure(figsize=(15,10))
plt.imshow(img)
plt.show()
###Output
_____no_output_____ |
L1000/0C.preprocessing/0.data-splitsONLYPHASE2.ipynb | ###Markdown
Split the L1000 Data into Training/Testing/Validation SetsSplit the data 80% training, 10% testing, 10% validation, balanced by platemap.
###Code
import sys
import pathlib
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from pycytominer import feature_select
from pycytominer.cyto_utils import infer_cp_features
sys.path.insert(0, "../../scripts")
from utils import transform, infer_L1000_features
# %load_ext nb_black
seed = 9876
test_split = 0.2
output_dir = pathlib.Path("data")
output_dir.mkdir(exist_ok=True)
# Load data
phase2_L1000_df = pd.read_csv("../0B.process-data/data/L1000_phase2.tsv.gz", sep="\t")
print(phase2_L1000_df.shape)
phase2_L1000_df.head(2)
features = infer_L1000_features(phase2_L1000_df)
meta_features = infer_L1000_features(phase2_L1000_df, metadata=True)
# Normalize data to the [-1, 1] range
phase2_L1000_df = transform(
phase2_L1000_df, features=features, meta_features=meta_features, operation = "-1+1"
)
# Split data into 80% train, 20% test
train_df, test_df = train_test_split(
phase2_L1000_df,
test_size=test_split,
random_state=seed,
stratify=phase2_L1000_df.cell_id,
)
# Split test data into 50% validation, 50% test
test_df, valid_df = train_test_split(
test_df,
test_size=0.5,
random_state=seed,
stratify=test_df.cell_id,
)
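# Splitting the 20% hold-out in half yields roughly 80% train / 10% test / 10% validation overall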
print(train_df.shape)
print(test_df.shape)
print(valid_df.shape)
# Output data splits
train_file = pathlib.Path(output_dir, "L1000PHASE2-1+1_train.tsv.gz")
test_file = pathlib.Path(output_dir, "L1000PHASE2-1+1_test.tsv.gz")
valid_file = pathlib.Path(output_dir, "L1000PHASE2-1+1_valid.tsv.gz")
complete_file = pathlib.Path(output_dir, "L1000PHASE2-1+1_complete.tsv.gz")
# train_df.to_csv(train_file, sep="\t", index=False, float_format="%.5g")
# test_df.to_csv(test_file, sep="\t", index=False, float_format="%.5g")
# valid_df.to_csv(valid_file, sep="\t", index=False, float_format="%.5g")
phase2_L1000_df.to_csv(complete_file, sep="\t", index=False, float_format="%.5g")
###Output
_____no_output_____ |
docs/source/examples/usage/NetcdfStream.ipynb | ###Markdown
Setup the Config
###Code
from ioos_qc.config import Config
config = """
streams:
variable1:
qartod:
aggregate:
gross_range_test:
suspect_span: [20, 30]
fail_span: [10, 40]
"""
c = Config(config)
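# Brief gloss (see the ioos_qc docs for details): gross_range_test flags values outside suspect_span
# as SUSPECT and values outside fail_span as FAIL, following QARTOD conventions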
c.config
###Output
_____no_output_____
###Markdown
Setup the sample data
###Code
import os
import numpy as np
import xarray as xr
import pandas as pd
import netCDF4 as nc4
rows = 50
data_inputs = {
'time': pd.date_range(start='01/01/2020', periods=rows, freq='D'),
'z': 2.0,
'lat': 36.1,
'lon': -76.5,
'variable1': np.arange(0, rows),
}
df = pd.DataFrame(data_inputs)
ncfile = 'tmp.nc'
if os.path.exists(ncfile):
os.remove(ncfile)
# Write the sample data to a NetCDF file (Dataset.to_netcdf returns None when given a path)
xr.Dataset.from_dataframe(df).to_netcdf(ncfile, 'w')
###Output
_____no_output_____
###Markdown
Setup the NetcdfStream
###Code
from ioos_qc.streams import NetcdfStream
ns = NetcdfStream(ncfile)
ns
###Output
_____no_output_____
###Markdown
Run the NetcdfStream through the Config
###Code
results = ns.run(c)
results
###Output
_____no_output_____ |
Coding_Interview_exercises/TestDome/03_merge_names.ipynb | ###Markdown
Instruction - To be completed in **10 min** - Implement the unique_names method. When passed two arrays of names, it will return an array containing the names that appear in either or both arrays.- The returned array should have no duplicates.- For example, calling `unique_names(['Ava', 'Emma', 'Olivia'], ['Olivia', 'Sophia', 'Emma'])` should return an array containing Ava, Emma, Olivia, and Sophia in any order. Start-up code
###Code
def unique_names(names1, names2):
return None
names1 = ["Ava", "Emma", "Olivia"]
names2 = ["Olivia", "Sophia", "Emma"]
print(unique_names(names1, names2)) # should print Ava, Emma, Olivia, Sophia
###Output
_____no_output_____
###Markdown
Dirty solution
###Code
def unique_names(names1, names2):
return sorted(list(set(names1+names2)))
names1 = ["Ava", "Emma", "Olivia"]
names2 = ["Olivia", "Sophia", "Emma"]
print(unique_names(names1, names2)) # should print Ava, Emma, Olivia, Sophia
###Output
['Ava', 'Emma', 'Olivia', 'Sophia']
###Markdown
Elegant solution - `itertools` is a module that implements a number of iterator building blocks inspired by constructs from APL, Haskell, and SML. Each has been recast in a form suitable for Python.- The module standardizes a core set of fast, memory efficient tools that are useful by themselves or in combination. Together, they form an “iterator algebra” making it possible to construct specialized tools succinctly and efficiently in pure Python.
###Code
from itertools import chain
def unique_names(names1, names2):
return list(set(chain(names1, names2)))
names1 = ["Ava", "Emma", "Olivia"]
names2 = ["Olivia", "Sophia", "Emma"]
print(unique_names(names1, names2)) # should print Ava, Emma, Olivia, Sophia
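# Equivalent one-liner using set union (a sketch, not part of the original exercise):
# def unique_names(names1, names2): return list(set(names1) | set(names2))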
###Output
['Emma', 'Ava', 'Olivia', 'Sophia']
|
jupyter/docker_enterprise/docker_notebooks/Spark_NLP/Healthcare/24.Improved_Entity_Resolvers_in_SparkNLP_with_sBert.ipynb | ###Markdown
[Open in Colab](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/24.Improved_Entity_Resolvers_in_SparkNLP_with_sBert.ipynb) 24. Improved Entity Resolvers in Spark NLP with sBert
###Code
import os
jsl_secret = os.getenv('SECRET')
import sparknlp
sparknlp_version = sparknlp.version()
import sparknlp_jsl
jsl_version = sparknlp_jsl.version()
print (jsl_secret)
import json
import os
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import SparkSession
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
from sparknlp.util import *
import sparknlp_jsl
import sparknlp
from sparknlp.pretrained import ResourceDownloader
from pyspark.sql import functions as F
params = {"spark.driver.memory":"16G",
"spark.kryoserializer.buffer.max":"2000M",
"spark.driver.maxResultSize":"2000M"}
spark = sparknlp_jsl.start(jsl_secret,params=params)
print ("Spark NLP Version :", sparknlp.version())
print ("Spark NLP_JSL Version :", sparknlp_jsl.version())
spark
###Output
Spark NLP Version : 3.2.1
Spark NLP_JSL Version : 3.2.0
###Markdown
!!! Warning !!!**If you get an error related to Java port not found 55, it is probably because the Colab memory cannot handle the model and the Spark session died. In that case, try a larger machine or restart the kernel at the top, then come back here and rerun.** ICD10CM pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
icd_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
icd10_resolver])
icd_lp = LightPipeline(icd_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_icd10cm_augmented download started this may take some time.
Approximate size to download 1.2 GB
[OK!]
###Markdown
ICD10CM-HCC pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
hcc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented_billable_hcc","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_hcc_code")\
.setDistanceFunction("EUCLIDEAN")
hcc_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
hcc_resolver])
hcc_lp = LightPipeline(hcc_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_icd10cm_augmented_billable_hcc download started this may take some time.
Approximate size to download 1.4 GB
[OK!]
###Markdown
RxNorm pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
rxnorm_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_rxnorm","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("rxnorm_code")\
.setDistanceFunction("EUCLIDEAN")
rxnorm_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
rxnorm_resolver])
rxnorm_lp = LightPipeline(rxnorm_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_rxnorm download started this may take some time.
Approximate size to download 802.6 MB
[OK!]
###Markdown
CPT pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
cpt_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
cpt_resolver])
cpt_lp = LightPipeline(cpt_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_cpt_procedures_augmented download started this may take some time.
Approximate size to download 78.3 MB
[OK!]
###Markdown
SNOMED pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
snomed_ct_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_snomed_findings_aux_concepts","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("snomed_code")\
.setDistanceFunction("EUCLIDEAN")
snomed_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
snomed_ct_resolver])
snomed_lp = LightPipeline(snomed_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_snomed_findings_aux_concepts download started this may take some time.
Approximate size to download 4.3 GB
[OK!]
###Markdown
LOINC Pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
loinc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_loinc", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("loinc_code")\
.setDistanceFunction("EUCLIDEAN")
loinc_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
loinc_resolver])
loinc_lp = LightPipeline(loinc_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_loinc download started this may take some time.
Approximate size to download 212.6 MB
[OK!]
###Markdown
UMLS Pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
umls_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_umls_major_concepts", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")
umls_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
umls_resolver])
umls_lp = LightPipeline(umls_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_umls_major_concepts download started this may take some time.
Approximate size to download 817.3 MB
[OK!]
###Markdown
HPO Pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
hpo_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_HPO", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")
hpo_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
hpo_resolver])
hpo_lp = LightPipeline(hpo_pipelineModel)
###Output
sbiobert_base_cased_mli download started this may take some time.
Approximate size to download 384.3 MB
[OK!]
sbiobertresolve_HPO download started this may take some time.
Approximate size to download 97.9 MB
[OK!]
###Markdown
All the resolvers in the same pipeline (just to show how it is done; it will not be used in this notebook)
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("ner_chunk")
sbert_embedder = BertSentenceEmbeddings\
.pretrained('sbiobert_base_cased_mli', 'en','clinical/models')\
.setInputCols(["ner_chunk"])\
.setOutputCol("sbert_embeddings")
snomed_ct_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_snomed_findings","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("snomed_code")\
.setDistanceFunction("EUCLIDEAN")
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
rxnorm_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_rxnorm","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("rxnorm_code")\
.setDistanceFunction("EUCLIDEAN")
cpt_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_cpt_procedures_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("cpt_code")\
.setDistanceFunction("EUCLIDEAN")
hcc_resolver = SentenceEntityResolverModel.pretrained("sbert_biobertresolve_icd10cm_augmented_billable_hcc","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_hcc_code")\
.setDistanceFunction("EUCLIDEAN")
loinc_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_loinc", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("loinc_code")\
.setDistanceFunction("EUCLIDEAN")
umls_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_umls_major_concepts", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")
hpo_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_HPO", "en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("umls_code")\
.setDistanceFunction("EUCLIDEAN")
resolver_pipelineModel = PipelineModel(
stages = [
documentAssembler,
sbert_embedder,
snomed_ct_resolver,
icd10_resolver,
rxnorm_resolver,
cpt_resolver,
hcc_resolver,
loinc_resolver,
umls_resolver,
hpo_resolver])
resolver_lp = LightPipeline(resolver_pipelineModel)
###Output
_____no_output_____
###Markdown
Utility functions
###Code
import pandas as pd
pd.set_option('display.max_colwidth', 0)
def get_codes (lp, text, vocab='icd10cm_code', hcc=False, aux_label=False):
full_light_result = lp.fullAnnotate(text)
chunks = []
codes = []
begin = []
end = []
resolutions=[]
all_distances =[]
all_codes=[]
all_cosines = []
all_k_aux_labels=[]
for chunk, code in zip(full_light_result[0]['ner_chunk'], full_light_result[0][vocab]):
begin.append(chunk.begin)
end.append(chunk.end)
chunks.append(chunk.result)
codes.append(code.result)
all_codes.append(code.metadata['all_k_results'].split(':::'))
resolutions.append(code.metadata['all_k_resolutions'].split(':::'))
all_distances.append(code.metadata['all_k_distances'].split(':::'))
all_cosines.append(code.metadata['all_k_cosine_distances'].split(':::'))
if hcc:
try:
all_k_aux_labels.append(code.metadata['all_k_aux_labels'].split(':::'))
except:
all_k_aux_labels.append([])
elif aux_label:
try:
all_k_aux_labels.append(code.metadata['all_k_aux_labels'].split(':::'))
except:
all_k_aux_labels.append([])
else:
all_k_aux_labels.append([])
df = pd.DataFrame({'chunks':chunks, 'begin': begin, 'end':end, 'code':codes,'all_codes':all_codes,
'resolutions':resolutions, 'all_k_aux_labels':all_k_aux_labels,'all_distances':all_cosines})
if hcc:
df['billable'] = df['all_k_aux_labels'].apply(lambda x: [i.split('||')[0] for i in x])
df['hcc_status'] = df['all_k_aux_labels'].apply(lambda x: [i.split('||')[1] for i in x])
df['hcc_score'] = df['all_k_aux_labels'].apply(lambda x: [i.split('||')[2] for i in x])
elif aux_label:
df['gt'] = df['all_k_aux_labels'].apply(lambda x: [i.split('|')[0] for i in x])
df['concept'] = df['all_k_aux_labels'].apply(lambda x: [i.split('|')[1] for i in x])
df['aux'] = df['all_k_aux_labels'].apply(lambda x: [i.split('|')[2] for i in x])
df = df.drop(['all_k_aux_labels'], axis=1)
return df
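# The returned DataFrame has one row per detected chunk with the top-k candidate codes, resolutions
# and cosine distances; billable/HCC or auxiliary columns are added when hcc or aux_label is set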
###Output
_____no_output_____
###Markdown
Getting some predictions from resolvers
###Code
text = 'bladder cancer'
%time get_codes (icd_lp, text, vocab='icd10cm_code')
text = 'severe stomach pain'
%time get_codes (icd_lp, text, vocab='icd10cm_code')
text = 'bladder cancer'
%time get_codes (hcc_lp, text, vocab='icd10cm_hcc_code', hcc=True)
text = 'severe stomach pain'
%time get_codes (hcc_lp, text, vocab='icd10cm_hcc_code', hcc=True)
text = 'bladder cancer'
%time get_codes (snomed_lp, text, vocab='snomed_code', aux_label=True)
text = 'schizophrenia'
%time get_codes (snomed_lp, text, vocab='snomed_code', aux_label=True)
text = 'metformin 100 mg'
%time get_codes (rxnorm_lp, text, vocab='rxnorm_code')
text = 'Advil Allergy Sinus'
%time get_codes (rxnorm_lp, text, vocab='rxnorm_code')
text = 'heart surgery'
%time get_codes (cpt_lp, text, vocab='cpt_code')
text = 'ct abdomen'
%time get_codes (cpt_lp, text, vocab='cpt_code')
text = 'Left heart cath'
%time get_codes (cpt_lp, text, vocab='cpt_code')
text = 'FLT3 gene mutation analysis'
%time get_codes (loinc_lp, text, vocab='loinc_code')
text = 'Hematocrit'
%time get_codes (loinc_lp, text, vocab='loinc_code')
text = 'urine test'
%time get_codes (loinc_lp, text, vocab='loinc_code')
# medical device
text = 'X-Ray'
%time get_codes (umls_lp, text, vocab='umls_code')
# Injuries & poisoning
text = 'out-of-date food poisoning'
%time get_codes (umls_lp, text, vocab='umls_code')
# clinical findings
text = 'type two diabetes mellitus'
%time get_codes (umls_lp, text, vocab='umls_code')
text = 'bladder cancer'
%time get_codes (hpo_lp, text, vocab='umls_code')
text = 'bipolar disorder'
%time get_codes (hpo_lp, text, vocab='umls_code')
text = 'schizophrenia '
%time get_codes (hpo_lp, text, vocab='umls_code')
icd_chunks = ['advanced liver disease',
'advanced lung disease',
'basal cell carcinoma of skin',
'acute maxillary sinusitis',
'chronic kidney disease stage',
'diabetes mellitus type 2',
'lymph nodes of multiple sites',
'other chronic pain',
'severe abdominal pain',
'squamous cell carcinoma of skin',
'type 2 diabetes mellitus']
snomed_chunks= ['down syndrome', 'adenocarcinoma', 'aortic valve stenosis',
'atherosclerosis', 'atrial fibrillation',
'hypertension', 'lung cancer', 'seizure',
'squamous cell carcinoma', 'stage IIIB', 'mediastinal lymph nodes']
from IPython.display import display
for chunk in icd_chunks:
print ('>> ',chunk)
display(get_codes (icd_lp, chunk, vocab='icd10cm_code'))
for chunk in snomed_chunks:
print ('>> ',chunk)
display(get_codes (snomed_lp, chunk, vocab='snomed_code', aux_label=True))
clinical_chunks = ['bladder cancer',
'anemia in chronic kidney disease',
'castleman disease',
'congestive heart failure',
'diabetes mellitus type 2',
'lymph nodes of multiple sites',
'malignant melanoma of skin',
'malignant neoplasm of lower lobe, bronchus',
'metastatic lung cancer',
'secondary malignant neoplasm of bone',
'type 2 diabetes mellitus',
'type 2 diabetes mellitus/insulin',
'unsp malignant neoplasm of lymph node']
for chunk in clinical_chunks:
print ('>> ',chunk)
print ('icd10cm_code')
display(get_codes (hcc_lp, chunk, vocab='icd10cm_hcc_code', hcc=True))
print ('snomed_code')
display(get_codes (snomed_lp, chunk, vocab='snomed_code', aux_label=True))
###Output
>> bladder cancer
icd10cm_code
###Markdown
How to integrate resolvers with NER models in the same pipeline
###Code
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetectorDLModel.pretrained()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")\
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
clinical_ner = MedicalNerModel.pretrained("ner_clinical", "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
ner_converter = NerConverter() \
.setInputCols(["sentence", "token", "ner"]) \
.setOutputCol("ner_chunk")\
.setWhiteList(['PROBLEM'])
c2doc = Chunk2Doc()\
.setInputCols("ner_chunk")\
.setOutputCol("ner_chunk_doc")
sbert_embedder = BertSentenceEmbeddings\
.pretrained("sbiobert_base_cased_mli",'en','clinical/models')\
.setInputCols(["ner_chunk_doc"])\
.setOutputCol("sbert_embeddings")
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
sbert_resolver_pipeline = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
c2doc,
sbert_embedder,
icd10_resolver])
data_ner = spark.createDataFrame([[""]]).toDF("text")
sbert_models = sbert_resolver_pipeline.fit(data_ner)
clinical_note = 'A 28-year-old female with a history of gestational diabetes mellitus diagnosed eight years prior to presentation and subsequent type two diabetes mellitus (T2DM), one prior episode of HTG-induced pancreatitis three years prior to presentation, associated with an acute hepatitis, and obesity with a body mass index (BMI) of 33.5 kg/m2, presented with a one-week history of polyuria, polydipsia, poor appetite, and vomiting. Two weeks prior to presentation, she was treated with a five-day course of amoxicillin for a respiratory tract infection. She was on metformin, glipizide, and dapagliflozin for T2DM and atorvastatin and gemfibrozil for HTG. She had been on dapagliflozin for six months at the time of presentation. Physical examination on presentation was significant for dry oral mucosa; significantly, her abdominal examination was benign with no tenderness, guarding, or rigidity. Pertinent laboratory findings on admission were: serum glucose 111 mg/dl, bicarbonate 18 mmol/l, anion gap 20, creatinine 0.4 mg/dL, triglycerides 508 mg/dL, total cholesterol 122 mg/dL, glycated hemoglobin (HbA1c) 10%, and venous pH 7.27. Serum lipase was normal at 43 U/L. Serum acetone levels could not be assessed as blood samples kept hemolyzing due to significant lipemia. The patient was initially admitted for starvation ketosis, as she reported poor oral intake for three days prior to admission. However, serum chemistry obtained six hours after presentation revealed her glucose was 186 mg/dL, the anion gap was still elevated at 21, serum bicarbonate was 16 mmol/L, triglyceride level peaked at 2050 mg/dL, and lipase was 52 U/L. The β-hydroxybutyrate level was obtained and found to be elevated at 5.29 mmol/L - the original sample was centrifuged and the chylomicron layer removed prior to analysis due to interference from turbidity caused by lipemia again. The patient was treated with an insulin drip for euDKA and HTG with a reduction in the anion gap to 13 and triglycerides to 1400 mg/dL, within 24 hours. Her euDKA was thought to be precipitated by her respiratory tract infection in the setting of SGLT2 inhibitor use. The patient was seen by the endocrinology service and she was discharged on 40 units of insulin glargine at night, 12 units of insulin lispro with meals, and metformin 1000 mg two times a day. It was determined that all SGLT2 inhibitors should be discontinued indefinitely. She had close follow-up with endocrinology post discharge.'
print (clinical_note)
clinical_note_df = spark.createDataFrame([[clinical_note]]).toDF("text")
icd10_result = sbert_models.transform(clinical_note_df)
pd.set_option("display.max_colwidth",0)
import pandas as pd
pd.set_option('display.max_colwidth', 0)
def get_icd_codes(icd10_res):
icd10_df = icd10_res.select(F.explode(F.arrays_zip('ner_chunk.result',
'ner_chunk.metadata',
'icd10cm_code.result',
'icd10cm_code.metadata')).alias("cols")) \
.select(F.expr("cols['1']['sentence']").alias("sent_id"),
F.expr("cols['0']").alias("ner_chunk"),
F.expr("cols['1']['entity']").alias("entity"),
F.expr("cols['2']").alias("icd10_code"),
F.expr("cols['3']['all_k_results']").alias("all_codes"),
F.expr("cols['3']['all_k_resolutions']").alias("resolutions")).toPandas()
codes = []
resolutions = []
for code, resolution in zip(icd10_df['all_codes'], icd10_df['resolutions']):
codes.append(code.split(':::'))
resolutions.append(resolution.split(':::'))
icd10_df['all_codes'] = codes
icd10_df['resolutions'] = resolutions
return icd10_df
%%time
res_pd = get_icd_codes(icd10_result)
res_pd.head(15)
###Output
CPU times: user 57.6 ms, sys: 12 ms, total: 69.6 ms
Wall time: 3min 3s
###Markdown
Let's apply some HTML formatting by using the `sparknlp_display` library to see the results of the pipeline in a nicer layout:
###Code
from sparknlp_display import EntityResolverVisualizer
# with light pipeline
light_model = LightPipeline(sbert_models)
vis = EntityResolverVisualizer()
# Change color of an entity label
vis.set_label_colors({'PROBLEM':'#008080'})
light_data_icd = light_model.fullAnnotate(clinical_note)
vis.display(light_data_icd[0], 'ner_chunk', 'icd10cm_code')
###Output
_____no_output_____
###Markdown
BertSentenceChunkEmbeddingsThis annotator lets users aggregate sentence embeddings and NER chunk embeddings to get more specific and accurate resolution codes. It works by averaging context and chunk embeddings to capture contextual information. The input to this annotator is the context (sentence) and the NER chunks, while the output is an embedding for each chunk that can be fed to the resolver model. The `setChunkWeight` parameter can be used to control the influence of the surrounding context. For more information and examples of the `BertSentenceChunkEmbeddings` annotator, you can check here: [24.1.Improved_Entity_Resolution_with_SentenceChunkEmbeddings.ipynb](https://github.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/24.1.Improved_Entity_Resolution_with_SentenceChunkEmbeddings.ipynb) ICD10CM with BertSentenceChunkEmbeddingsLet's repeat the same process using the `BertSentenceChunkEmbeddings` annotator and compare the results. We will create a new pipeline that uses this annotator together with SentenceEntityResolverModel.
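Conceptually, the combined vector is a weighted average of the two embeddings. As a rough sketch of the weighting idea (an assumption for illustration, not necessarily the library's exact formula): combined ≈ w · chunk_embedding + (1 − w) · sentence_embedding, where w is the value passed to `setChunkWeight`, so the default of 0.5 weighs the chunk and its sentence context equally.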
###Code
#Get average sentence-chunk Bert embeddings
sentence_chunk_embeddings = BertSentenceChunkEmbeddings.pretrained("sbiobert_base_cased_mli", "en", "clinical/models")\
.setInputCols(["sentence", "ner_chunk"])\
.setOutputCol("sbert_embeddings")\
.setChunkWeight(0.5) #default : 0.5
icd10_resolver = SentenceEntityResolverModel.pretrained("sbiobertresolve_icd10cm_augmented","en", "clinical/models") \
.setInputCols(["ner_chunk", "sbert_embeddings"]) \
.setOutputCol("icd10cm_code")\
.setDistanceFunction("EUCLIDEAN")
resolver_pipeline_SCE = Pipeline(
stages = [
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
ner_converter,
sentence_chunk_embeddings,
icd10_resolver])
empty_data = spark.createDataFrame([['']]).toDF("text")
model_SCE = resolver_pipeline_SCE.fit(empty_data)
icd10_result_SCE = model_SCE.transform(clinical_note_df)
%%time
res_SCE_pd = get_icd_codes(icd10_result_SCE)
res_SCE_pd.head(15)
icd10_SCE_lp = LightPipeline(model_SCE)
light_result = icd10_SCE_lp.fullAnnotate(clinical_note)
visualiser = EntityResolverVisualizer()
# Change color of an entity label
visualiser.set_label_colors({'PROBLEM':'#008080'})
visualiser.display(light_result[0], 'ner_chunk', 'icd10cm_code')
###Output
_____no_output_____
###Markdown
**Let's compare the results that we got from these two methods.**
###Code
sentence_df = icd10_result.select(F.explode(F.arrays_zip('sentence.metadata', 'sentence.result')).alias("cols")) \
.select( F.expr("cols['0']['sentence']").alias("sent_id"),
F.expr("cols['1']").alias("sentence_all")).toPandas()
comparison_df = pd.merge(res_pd.loc[:,'sent_id':'resolutions'],res_SCE_pd.loc[:,'sent_id':'resolutions'], on=['sent_id',"ner_chunk", "entity"], how='inner')
comparison_df.columns=['sent_id','ner_chunk', 'entity', 'icd10_code', 'all_codes', 'resolutions', 'icd10_code_SCE', 'all_codes_SCE', 'resolutions_SCE']
comparison_df = pd.merge(sentence_df, comparison_df,on="sent_id").drop('sent_id', axis=1)
comparison_df.head(15)
###Output
_____no_output_____ |
src/notebooks/010_Parse_Data.ipynb | ###Markdown
Some Notes* Final impression counts in state and demo totals are the same, but slightly _above_ the summary "Campaign Performance" sheet* Final impression counts from the ZIP code tables are 60% lower* The data does not contain details on the individual combinations that were put together in each location. However, they do have a description in the final sheet (not described here) of "Low, Good, Best" for the individual asset combinations Appending Census Data By ZIP Code (ZCTA?)
###Code
c = Census(os.environ["CENSUS_API_KEY"])
zcta_df = pd.DataFrame(
c.acs5.zipcode(["B01001_001E"], zcta="*", state_fips="*", year=2019)
)
zcta_df = zcta_df.rename(
columns={
"B01001_001E": "total_population",
"state": "state_fips",
"zip code tabulation area": "zipcode",
}
)
merged_zip_df = final_zip_df.merge(zcta_df, how="left", on="zipcode")
merged_zip_df.to_parquet(OUTPUT_DATA_DIR / "data_by_zipcode_merged_census.parquet")
merged_zip_df[merged_zip_df["state_fips"].isna()]
###Output
_____no_output_____
###Markdown
More notesSo that first ZIP code (06338) is a [P.O. Box ZIP Code](https://www.zip-codes.com/zip-code/06338/zip-code-06338.asp), which strikes me as odd. What exactly is going on with this ZIP code data? (Note that that website is usually accurate, though you should really double check with SmartyStreets, which has the raw USPS data.)
###Code
table = final_state_df.groupby("language")[["clicks", "impressions"]].sum()
print(table["clicks"] / table["impressions"])
table["impressions"] = table["impressions"] - table["clicks"]
st.contingency.chi2_contingency(table.values.T)
table = (
final_state_df[final_state_df["language"] == "en"]
.groupby(["campaign"])[["clicks", "impressions"]]
.sum()
)
print(table["clicks"] / table["impressions"])
table = (
final_state_df[final_state_df["language"] == "sp"]
.groupby(["campaign"])[["clicks", "impressions"]]
.sum()
)
print(table["clicks"] / table["impressions"])
###Output
campaign
Helping Community SP 0.016799
Helping Others SP 0.012042
Personal Responsibility SP 0.011959
Self-Oriented SP 0.013431
dtype: float64
###Markdown
Explode dataHansoo requests the data be broken into one line per impression. This is done below.
###Code
for filename in OUTPUT_DATA_DIR.glob("data*.parquet"):
print(f"Exploding {filename.name}...")
df = pd.read_parquet(filename)
df["num_in_group"] = [
np.arange(num_impressions, dtype=int)
for num_impressions in df["impressions"].values
]
exploded_df = df.explode("num_in_group")
exploded_df["was_clicked"] = exploded_df["num_in_group"] < exploded_df["clicks"]
exploded_df = exploded_df.drop(
columns=["clicks", "impressions", "ctr", "cost", "average_cpc", "num_in_group"]
)
exploded_df.to_parquet(EXPLODED_DATA_DIR / f"exploded_{filename.name}")
exploded_df.to_csv(
EXPLODED_DATA_DIR / f"exploded_{filename.with_suffix('').name}.csv.gz",
index=False,
)
print("Done")
###Output
Exploding data_by_state.parquet...
Exploding data_by_demographics.parquet...
Exploding data_by_zipcode_merged_census.parquet...
Exploding data_by_zipcode.parquet...
Done
|
2-Data-Analysis.ipynb | ###Markdown
Data Analysis 1. Loading the data
###Code
import re
from nltk import sent_tokenize
cg_sents = []
smg_sents = []
def remove_duplicate_punctuation(s): # sent_tokenize() gets confused when there's duplicate punctuation
return(re.sub(r'(\.|\?|!|;)\1+', r'\1 ', s)) # In a previous version of this I used r'\1' instead of r'\1 '. However it caused a problem for sentences like "this is great...I don't know what to do." It seems that some people do not use spacing with ellipses which caused sent_tokenize() to not identify a new sentence.
with open('./Data/cg_twitter.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text) # https://stackoverflow.com/questions/44858741/nltk-tokenizer-and-stanford-corenlp-tokenizer-cannot-distinct-2-sentences-withou
lines = [p for p in text.split('\n') if p] # sent_tokenize() doesn't consider a new line a new sentence so this is required.
for line in lines:
cg_sents += sent_tokenize(line)
with open('./Data/cg_fb.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
cg_sents += sent_tokenize(line)
with open('./Data/cg_other.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
cg_sents += sent_tokenize(line)
with open('./Data/smg_twitter.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
smg_sents += sent_tokenize(line)
with open('./Data/smg_fb.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
smg_sents += sent_tokenize(line)
with open('./Data/smg_other.txt', 'r', encoding='utf-8') as in_file:
text = remove_duplicate_punctuation(in_file.read())
text = re.sub(r'([a-zα-ωίϊΐόάέύϋΰήώ])(\.|\?|;|!)([A-ZΑ-ΩΆΈΊΌΎΏΉ])', r'\1\2 \3', text)
lines = [p for p in text.split('\n') if p]
for line in lines:
smg_sents += sent_tokenize(line)
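# Note: the six read/clean/tokenize blocks above are intentionally repetitive; an equivalent refactor (a sketch,
# not the original code) would loop over (filename, target_list) pairs such as ('./Data/cg_twitter.txt', cg_sents)
# and apply the same read / remove_duplicate_punctuation / sent_tokenize steps to each file.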
cg_sents[:3]
###Output
_____no_output_____
###Markdown
2. Cleaning the data
###Code
import unicodedata
from string import punctuation
from nltk.tokenize import WhitespaceTokenizer
punctuation += '´΄’…“”–—―»«' # string.punctuation misses these.
def strip_accents(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def get_clean_sent_el(sentence):
sentence = re.sub(r'^RT', '', sentence)
sentence = re.sub(r'\&\w*;', '', sentence)
sentence = re.sub(r'\@\w*', '', sentence)
sentence = re.sub(r'\$\w*', '', sentence)
sentence = re.sub(r'https?:\/\/.*\/\w*', '', sentence)
sentence = ''.join(c for c in sentence if c <= '\uFFFF')
sentence = strip_accents(sentence)
sentence = re.sub(r'#\w*', '', sentence)
sentence = sentence.lower()
tokens = WhitespaceTokenizer().tokenize(sentence)
new_tokens = []
for token in tokens:
if token == 'ο,τι' or token == 'ό,τι' or token == 'o,ti' or token == 'ó,ti':
new_tokens.append(token)
else:
token = re.sub(r'(?<=[.,!\?;\'΄´’])(?=[^\s])', r' ', token) # If there is punctuation not followed by a space, we add it. I also added a space after apostrophes because I want something like σαγαπώ to not be considered as one word.
new_token = token.translate(str.maketrans({key: None for key in punctuation}))
if (new_token != ''): # This might happen if a user surrounds commas with spaces , like so.
new_tokens.append(new_token)
sentence =' '.join(new_tokens)
sentence = re.sub('\ufeff', '', sentence) # \ufeff might appear when dealing with unicode-encoded files
sentence = sentence.strip(' ') # performs lstrip() and rstrip()
sentence = re.sub(' ', ' ', sentence) # Adding a space after the apostrophe can lead to the appearance of double spaces if apostrophes are used along with spaces in the original text.
return sentence
cg_sents_clean = []
smg_sents_clean = []
for sent in cg_sents:
cg_sents_clean.append(get_clean_sent_el(sent))
for sent in smg_sents:
smg_sents_clean.append(get_clean_sent_el(sent))
# Remove empty strings left due to sentences ending up being only URLs then getting deleted on cleaning:
cg_sents_clean = list(filter(None, cg_sents_clean))
smg_sents_clean = list(filter(None, smg_sents_clean))
cg_sents_clean[:3]
###Output
_____no_output_____
###Markdown
3. Tokenization and setting up to use `nltk.text` 3.1 Tokenization
###Code
cg_sents_tokens = []
smg_sents_tokens = []
for sent in cg_sents_clean:
cg_sents_tokens.append(WhitespaceTokenizer().tokenize(sent))
for sent in smg_sents_clean:
smg_sents_tokens.append(WhitespaceTokenizer().tokenize(sent))
cg_sents_tokens[:3]
###Output
_____no_output_____
###Markdown
3.2 Setting up to use `nltk.text` 3.2.1 Words `Text` objects
###Code
from nltk.text import Text
cg_words_flat = [word for sent_tokens in cg_sents_tokens for word in sent_tokens]
smg_words_flat = [word for sent_tokens in smg_sents_tokens for word in sent_tokens]
cg_Text = Text(cg_words_flat)
smg_Text = Text(smg_words_flat)
cg_words_flat = [re.sub(r'ς', 'σ', word) for word in cg_words_flat] # The reason I replace ς with σ for char n-grams is because the Greek final sigma form is arbitrary. No other letter has a final form. When counting the number of sigmas I would like that to include the final forms as well. The σ_ bigram will represent the ς feature so it can be removed.
smg_words_flat = [re.sub(r'ς', 'σ', word) for word in smg_words_flat]
cg_Text
###Output
_____no_output_____
###Markdown
3.2.2 Word n-grams 3.2.2.1 Bigrams `Text` objects
###Code
from nltk import ngrams
cg_word_bigrams = []
smg_word_bigrams = []
for sent in cg_sents_tokens:
cg_word_bigrams.append(list(ngrams(sent, 2)))
for sent in smg_sents_tokens:
smg_word_bigrams.append(list(ngrams(sent, 2)))
print(cg_word_bigrams)
cg_word_bigrams_flat_tuples = [bigram for bigram_list in cg_word_bigrams for bigram in bigram_list]
smg_word_bigrams_flat_tuples = [bigram for bigram_list in smg_word_bigrams for bigram in bigram_list]
cg_word_bigrams_flat = ['%s %s' % bigram_tuple for bigram_tuple in cg_word_bigrams_flat_tuples]
smg_word_bigrams_flat = ['%s %s' % bigram_tuple for bigram_tuple in smg_word_bigrams_flat_tuples]
cg_word_bigrams_Text = Text(cg_word_bigrams_flat)
smg_word_bigrams_Text = Text(smg_word_bigrams_flat)
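# These Text wrappers expose NLTK helpers such as .concordance(), .collocations() and .vocab() for exploring the word and bigram corpora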
cg_word_bigrams_Text
###Output
[[('πρασινο', 'αυκουι'), ('αυκουι', 'μες'), ('μες', 'το'), ('το', 'πασχαλινο'), ('πασχαλινο', 'ποτηρι'), ('ποτηρι', 'που'), ('που', 'επιασε'), ('επιασε', 'ο'), ('ο', 'μιτσης')], [('καμνουν', 'πολλα'), ('πολλα', 'ανακαινιση'), ('ανακαινιση', 'στα'), ('στα', 'περβολια')], [('ελα', 'συγγενη'), ('συγγενη', 'τζιαι'), ('τζιαι', 'εχουμε'), ('εχουμε', 'νεοτερα'), ('νεοτερα', 'π'), ('π', 'το'), ('το', 'νικολη')], [('η', 'αληθκεια'), ('αληθκεια', 'με'), ('με', 'τους'), ('τους', 'τερματοφυλακες'), ('τερματοφυλακες', 'και'), ('και', 'οχι'), ('οχι', 'μονο'), ('μονο', 'εχουμε'), ('εχουμε', 'λιο'), ('λιο', 'θεμα')], [('μεχρι', 'και'), ('και', 'αυγουστη'), ('αυγουστη', 'και'), ('και', 'πανο'), ('πανο', 'κωνσταντινου'), ('κωνσταντινου', 'εφεραμε'), ('εφεραμε', 'ρε')], [('ωσπου', 'να'), ('να', 'φαεις'), ('φαεις', 'το'), ('το', 'κραμπι'), ('κραμπι', 'σου'), ('σου', 'εν'), ('εν', 'να'), ('να', 'τελειωσω'), ('τελειωσω', 'μεν'), ('μεν', 'φοασαι')], [('πρεπει', 'να'), ('να', 'εβρω'), ('εβρω', 'αλλη'), ('αλλη', 'λεξη'), ('λεξη', 'για'), ('για', 'το'), ('το', 'τι'), ('τι', 'εγινε'), ('εγινε', 'σημερα'), ('σημερα', 'το'), ('το', 'σοκ'), ('σοκ', 'εν'), ('εν', 'πολλα'), ('πολλα', 'λαιτ')], [('θεωρω', 'οτι'), ('οτι', 'η'), ('η', 'παντελινα'), ('παντελινα', 'εν'), ('εν', 'δυνατη'), ('δυνατη', 'ανετα'), ('ανετα', 'λαλω'), ('λαλω', 'εγω')], [('εν', 'τουτη'), ('τουτη', 'η'), ('η', 'νεα'), ('νεα', 'ομονοια'), ('ομονοια', 'που'), ('που', 'μας'), ('μας', 'εταξαν')], [('ηντα', 'ωραια'), ('ωραια', 'που'), ('που', 'εφυετε'), ('εφυετε', 'ουλλοι'), ('ουλλοι', 'τζιαι'), ('τζιαι', 'εμεινε'), ('εμεινε', 'μας'), ('μας', 'εμας'), ('εμας', 'η'), ('η', 'θαλασσα')], [('ο', 'ενας'), ('ενας', 'παππους'), ('παππους', 'αποστολος'), ('αποστολος', 'ο'), ('ο', 'αλλος'), ('αλλος', 'παππους'), ('παππους', 'αντρεας'), ('αντρεας', 'μαντεψε'), ('μαντεψε', 'τι'), ('τι', 'εφκαλαν'), ('εφκαλαν', 'το'), ('το', 'μωρο')], [('η', 'κορη'), ('κορη', 'μου'), ('μου', 'η'), ('η', 'μιτσια'), ('μιτσια', 'εν'), ('εν', 'να'), ('να', 'με'), ('με', 'γεροκομησει')], [('η', 'μεγαλη'), ('μεγαλη', 'απλα'), ('απλα', 'εν'), ('εν', 'να'), ('να', 'με'), ('με', 'ποφκαλει'), ('ποφκαλει', 'πριν'), ('πριν', 'την'), ('την', 'ωρα'), ('ωρα', 'μου')], [('οι', 'εδιωξα'), ('εδιωξα', 'τον'), ('τον', 'ουτε'), ('ουτε', 'βιλλο'), ('βιλλο', 'εν'), ('εν', 'εφαγα')], [('τουτον', 'το'), ('το', 'καλο'), ('καλο', 'μηνα'), ('μηνα', 'γιατι'), ('γιατι', 'το'), ('το', 'αντιγραφετε'), ('αντιγραφετε', 'τζιαι'), ('τζιαι', 'τουτο'), ('τουτο', 'που'), ('που', 'τους'), ('τους', 'πρηχτηες'), ('πρηχτηες', 'τους'), ('τους', 'καλαμαραες'), ('καλαμαραες', 'τζιαι'), ('τζιαι', 'καμνετε'), ('καμνετε', 'μας'), ('μας', 'τα'), ('τα', 'σατσιην'), ('σατσιην', 'καθε'), ('καθε', 'μηνα')], [('ατε', 'τζιαι'), ('τζιαι', 'εφκαλαν'), ('εφκαλαν', 'τα'), ('τα', 'πανο'), ('πανο', 'τα'), ('τα', 'οχι'), ('οχι', 'στην'), ('στην', 'ισοπαλια'), ('ισοπαλια', 'γιε'), ('γιε', 'μου')], [('θκιαβαζω', 'οτι'), ('οτι', 'εγεμωσεν'), ('εγεμωσεν', 'γκραφιτι'), ('γκραφιτι', 'η'), ('η', 'παφος'), ('παφος', 'τζιαι'), ('τζιαι', 'σιερουμαι'), ('σιερουμαι', 'απιστευτα'), ('απιστευτα', 'πολλα')], [('ασσιημο', 'πραμα'), ('πραμα', 'να'), ('να', 'μεν'), ('μεν', 'γαμας')], [('πιανει', 'με'), ('με', 'τηλεφωνο'), ('τηλεφωνο', 'ο'), ('ο', 'γερος'), ('γερος', 'μου'), ('μου', 'κλαμενος'), ('κλαμενος', 'θα'), ('θα', 'αυξηθει'), ('αυξηθει', 'η'), ('η', 'συνταξη'), ('συνταξη', 'του'), ('του', 'λαλει'), ('λαλει', 'εν'), ('εν', 'ξερει')], [('να', 'τον'), ('τον', 'τρωεις'), ('τρωεις', 'ουλλη'), ('ουλλη', 'μερα')], [('εκαμνα', 'της'), ('της', 'το'), ('το', 
'ανετα'), ('ανετα', 'αλλα'), ('αλλα', 'επηρεν'), ('επηρεν', 'τα'), ('τα', 'μαζιν'), ('μαζιν', 'του')], [('πομπα', 'ηβρα'), ('ηβρα', 'την'), ('την', 'επιτελους'), ('επιτελους', 'εχω'), ('εχω', 'καμποσους'), ('καμποσους', 'ποντους'), ('ποντους', 'για'), ('για', 'εξαργυρωση')], [('μα', 'ο'), ('ο', 'μαρκασας'), ('μαρκασας', 'ηντα'), ('ηντα', 'ενεφκε'), ('ενεφκε', 'μολις'), ('μολις', 'εβαλε'), ('εβαλε', 'το'), ('το', 'γκολ'), ('γκολ', 'επηεννε'), ('επηεννε', 'για'), ('για', 'την'), ('την', 'μεγαλη'), ('μεγαλη', 'αντεπιθεση')], [('δηλαδη', 'εσας'), ('εσας', 'τζιαι'), ('τζιαι', 'βιλλο'), ('βιλλο', 'να'), ('να', 'σας'), ('σας', 'προσφερουν'), ('προσφερουν', 'για'), ('για', 'λυση'), ('λυση', 'παλι'), ('παλι', 'ναι'), ('ναι', 'θα'), ('θα', 'πειτε'), ('πειτε', 'ατε')], [('για', 'τα'), ('τα', 'πραματα'), ('πραματα', 'που'), ('που', 'εννεν'), ('εννεν', 'ασπρομαυρο'), ('ασπρομαυρο', 'εν'), ('εν', 'η'), ('η', 'κοτα'), ('κοτα', 'τζιαι'), ('τζιαι', 'το'), ('το', 'αφκο')], [('εν', 'να'), ('να', 'φορησω'), ('φορησω', 'τα'), ('τα', 'τζινουρκα'), ('τζινουρκα', 'μου'), ('μου', 'τα'), ('τα', 'ρουχα'), ('ρουχα', 'αυριο'), ('αυριο', 'στο'), ('στο', 'μνημοσυνο')], [('ουτε', 'ενα'), ('ενα', 'μαντορινι'), ('μαντορινι', 'εν'), ('εν', 'τους'), ('τους', 'αφηννε'), ('αφηννε', 'να'), ('να', 'κοψουν'), ('κοψουν', 'ρε'), ('ρε', 'ο'), ('ο', 'ππιντης')], [('εννα', 'μας'), ('μας', 'φκαλετε'), ('φκαλετε', 'την'), ('την', 'ψυσιη'), ('ψυσιη', 'μας'), ('μας', 'για'), ('για', 'ενα'), ('ενα', 'λουρι'), ('λουρι', 'ελεος')], [('ειμαστε', 'πλουσιοι'), ('πλουσιοι', 'τζιαι'), ('τζιαι', 'χωρις'), ('χωρις', 'την'), ('την', 'κοπιαν'), ('κοπιαν', 'ολαν')], [('ελεος', 'οι'), ('οι', 'τζιαι'), ('τζιαι', 'με'), ('με', 'τον'), ('τον', 'μισιελ'), ('μισιελ', 'ολαν')], [('εν', 'με'), ('με', 'αφηνει'), ('αφηνει', 'να'), ('να', 'την'), ('την', 'φκαλω'), ('φκαλω', 'βιντεο'), ('βιντεο', 'την'), ('την', 'ωρα'), ('ωρα', 'που'), ('που', 'το'), ('το', 'καμνει'), ('καμνει', 'να'), ('να', 'γινω'), ('γινω', 'εκατομμυριουχος')], [('δευτερα', 'φκαινουν'), ('φκαινουν', 'μονον'), ('μονον', 'οι'), ('οι', 'γεροι')], [('φανταζομαι', 'τον'), ('τον', 'ερτογαν'), ('ερτογαν', 'να'), ('να', 'λαλει'), ('λαλει', 'βρωμοσιυλλε'), ('βρωμοσιυλλε', 'τουρκικε'), ('τουρκικε', 'λαε'), ('λαε', 'γνωριμη'), ('γνωριμη', 'η'), ('η', 'φωνη'), ('φωνη', 'που'), ('που', 'ακουεις')], [('κοπιασε', 'σειρηνα'), ('σειρηνα', 'μωρατσε'), ('μωρατσε', 'μου'), ('μου', 'να'), ('να', 'δροσιστουμε'), ('δροσιστουμε', 'να'), ('να', 'μας'), ('μας', 'καμει'), ('καμει', 'τζιαι'), ('τζιαι', 'μοχιτο'), ('μοχιτο', 'η'), ('η', 'θκια'), ('θκια', 'σου'), ('σου', 'η'), ('η', 'ελενα'), ('ελενα', 'μιλουμεν'), ('μιλουμεν', 'εν'), ('εν', 'εσιει'), ('εσιει', 'λαθος')], [('μια', 'χαρα'), ('χαρα', 'ετζοιμηθηκα'), ('ετζοιμηθηκα', 'παντως'), ('παντως', 'εγω'), ('εγω', 'οπως'), ('οπως', 'το'), ('το', 'πουλλουι')], [('εγω', 'εν'), ('εν', 'τζιαιν'), ('τζιαιν', 'κορες'), ('κορες', 'που'), ('που', 'εκαμα'), ('εκαμα', 'μανα'), ('μανα', 'μου'), ('μου', 'εν'), ('εν', 'τες'), ('τες', 'αρφαες'), ('αρφαες', 'της'), ('της', 'μανταμ'), ('μανταμ', 'σουσους')], [('ακουσαμεν', 'εναν'), ('εναν', 'σουσμαν'), ('σουσμαν', 'εναν'), ('εναν', 'κακον')], [('ατε', 'ρε'), ('ρε', 'καλαμαρουθκια'), ('καλαμαρουθκια', 'πιαστε'), ('πιαστε', 'της'), ('της', 'μαμμας'), ('μαμμας', 'σας'), ('σας', 'που'), ('που', 'ενα')], [('γιατι', 'καλαμαριζει'), ('καλαμαριζει', 'τουτη'), ('τουτη', 'η'), ('η', 'τυπισσα'), ('τυπισσα', 'τζεινη'), ('τζεινη', 'η'), ('η', 'λυσσιασμενη'), ('λυσσιασμενη', 'του'), ('του', 'βιλλου')], [('σημερα', 'ειπαν'), ('ειπαν', 'μου'), ('μου', 
'τρια'), ('τρια', 'πλασματα'), ('πλασματα', 'οτι'), ('οτι', 'εμαυρισα')], [('τι', 'διλημμα'), ('διλημμα', 'εν'), ('εν', 'τουτο'), ('τουτο', 'παλε'), ('παλε', 'που'), ('που', 'εβρεθηκεν'), ('εβρεθηκεν', 'ομπρος'), ('ομπρος', 'μου')], [('να', 'δουμε'), ('δουμε', 'τι'), ('τι', 'δικαιολογια'), ('δικαιολογια', 'θα'), ('θα', 'εβρω'), ('εβρω', 'παλε'), ('παλε', 'για'), ('για', 'να'), ('να', 'μεν'), ('μεν', 'παω'), ('παω', 'ποψε')], [('θελω', 'να'), ('να', 'γινω'), ('γινω', 'ιατρικος'), ('ιατρικος', 'επισκεπτης'), ('επισκεπτης', 'ολαν'), ('ολαν', 'λαλω'), ('λαλω', 'σας')], [('μα', 'για'), ('για', 'ετσι'), ('ετσι', 'μοτορο'), ('μοτορο', 'εκαμνα'), ('εκαμνα', 'τζιαι'), ('τζιαι', 'τον'), ('τον', 'πελλο'), ('πελλο', 'λαιβ'), ('λαιβ', 'ολαν')], [('τζιαι', 'αριαν'), ('αριαν', 'τραουδω'), ('τραουδω', 'σου'), ('σου', 'τζιαι'), ('τζιαι', 'οπεραν'), ('οπεραν', 'τζιαι'), ('τζιαι', 'εντεχνον'), ('εντεχνον', 'τζιαι'), ('τζιαι', 'ρεμπετικον'), ('ρεμπετικον', 'τζιαι'), ('τζιαι', 'οτι'), ('οτι', 'τραβα'), ('τραβα', 'η'), ('η', 'ψυσιη'), ('ψυσιη', 'σου'), ('σου', 'τακκαρε'), ('τακκαρε', 'μου')], [('ουτε', 'να'), ('να', 'σιεσουμεν'), ('σιεσουμεν', 'πιον')], [('μεν', 'παρακαλατε'), ('παρακαλατε', 'κανενα'), ('κανενα', 'να'), ('να', 'ερτει'), ('ερτει', 'να'), ('να', 'δει'), ('δει', 'το'), ('το', 'αποελ')], [('ρε', 'χοχο'), ('χοχο', 'εμεις'), ('εμεις', 'εν'), ('εν', 'τζε'), ('τζε', 'αλλασουμεν'), ('αλλασουμεν', 'ομαδα'), ('ομαδα', 'καθε'), ('καθε', 'εφτομα'), ('εφτομα', 'τζε'), ('τζε', 'βιλλο'), ('βιλλο', 'μας'), ('μας', 'με'), ('με', 'ποιον'), ('ποιον', 'παιζει'), ('παιζει', 'η'), ('η', 'χοχα')], [('χοχοι', 'τουρκοσποροι'), ('τουρκοσποροι', 'πουσταραες'), ('πουσταραες', 'τον'), ('τον', 'βιλλο'), ('βιλλο', 'μας'), ('μας', 'θα'), ('θα', 'παιρνετε'), ('παιρνετε', 'ρε'), ('ρε', 'σκαταες')], [('εφκηκεν', 'η'), ('η', 'ψυσχιη'), ('ψυσχιη', 'μου'), ('μου', 'ωσπου'), ('ωσπου', 'να'), ('να', 'ρτω'), ('ρτω', 'εσσω'), ('εσσω', 'με'), ('με', 'τουντον'), ('τουντον', 'λαλλαρο'), ('λαλλαρο', 'που'), ('που', 'εχουμε'), ('εχουμε', 'σημερα')], [('εστειλα', 'τον'), ('τον', 'αντρα'), ('αντρα', 'μου'), ('μου', 'να'), ('να', 'δει'), ('δει', 'μαππα'), ('μαππα', 'να'), ('να', 'βρω'), ('βρω', 'την'), ('την', 'ησυχια'), ('ησυχια', 'μου'), ('μου', 'οι'), ('οι', 'να'), ('να', 'τες'), ('τες', 'φατε'), ('φατε', 'να'), ('να', 'ρτει'), ('ρτει', 'εσσω'), ('εσσω', 'γλιορα'), ('γλιορα', 'τωρα')], [('εβαρεθηκαμεν', 'σας'), ('σας', 'εθνικια'), ('εθνικια', 'που'), ('που', 'την'), ('την', 'μια'), ('μια', 'εν'), ('εν', 'ελληνικη'), ('ελληνικη', 'που'), ('που', 'την'), ('την', 'αλλη'), ('αλλη', 'εν'), ('εν', 'τουρτζικη'), ('τουρτζικη', 'κανει'), ('κανει', 'πκιον'), ('πκιον', 'εσσω'), ('εσσω', 'σας'), ('σας', 'ουλλοι')], [('ατε', 'πιεννε'), ('πιεννε', 'εσσω'), ('εσσω', 'σου'), ('σου', 'να'), ('να', 'πνασουμε')], [('εσχει', 'ετσι'), ('ετσι', 'πραμα'), ('πραμα', 'μονο'), ('μονο', 'σε'), ('σε', 'τζεινο'), ('τζεινο', 'το'), ('το', 'τμημα')], [('εν', 'ενοιωσα'), ('ενοιωσα', 'ακομα'), ('ακομα', 'τζεινο'), ('τζεινο', 'το'), ('το', 'πραμα'), ('πραμα', 'να'), ('να', 'πω'), ('πω', 'της'), ('της', 'αλλης'), ('αλλης', 'αγαπω'), ('αγαπω', 'σε')], [('ε', 'καλαν'), ('καλαν', 'ολαν'), ('ολαν', 'τζεινης'), ('τζεινης', 'γιατι'), ('γιατι', 'εν'), ('εν', 'της'), ('της', 'το'), ('το', 'λαλεις'), ('λαλεις', 'δηλ'), ('δηλ', 'τζιαι'), ('τζιαι', 'λαλεις'), ('λαλεις', 'μας'), ('μας', 'το'), ('το', 'εμας')], [('ινταλως', 'εγερασεν'), ('εγερασεν', 'ετσι'), ('ετσι', 'τζεινη')], [('ατε', 'ελα'), ('ελα', 'λαρνακα'), ('λαρνακα', 'τζαι'), ('τζαι', 'καρτερω'), ('καρτερω', 'σε')], [('ππεσε', 
'τζοιμηθου'), ('τζοιμηθου', 'ξερεις'), ('ξερεις', 'ποσα'), ('ποσα', 'πραματα'), ('πραματα', 'εκαμα'), ('εκαμα', 'εγω'), ('εγω', 'που'), ('που', 'τζεινη'), ('τζεινη', 'την'), ('την', 'ωρα')], [('εσιει', 'σκονη'), ('σκονη', 'οξα'), ('οξα', 'σκουπιζει'), ('σκουπιζει', 'η'), ('η', 'γειτονισσα'), ('γειτονισσα', 'παλε')], [('φανταζουμαι', 'εκοπηκε'), ('εκοπηκε', 'η'), ('η', 'ορεξη'), ('ορεξη', 'σου'), ('σου', 'τζεινη'), ('τζεινη', 'την'), ('την', 'νυχτα'), ('νυχτα', 'εννε')], [('ελα', 'ρε'), ('ρε', 'πελλε'), ('πελλε', 'ηντα'), ('ηντα', 'μπου'), ('μπου', 'λαλουσιν'), ('λαλουσιν', 'οι'), ('οι', 'κορουες'), ('κορουες', 'τζιαμε')], [('καρτερω', 'τουντες'), ('τουντες', 'καλυτερες'), ('καλυτερες', 'μερες'), ('μερες', 'που'), ('που', 'λαλουσιν'), ('λαλουσιν', 'ουλλοι'), ('ουλλοι', 'οτι'), ('οτι', 'εν'), ('εν', 'να'), ('να', 'ρτουν')], [('αμπα', 'τζιαι'), ('τζιαι', 'εν'), ('εν', 'ειμαι'), ('ειμαι', 'εγιω')], [('αμπα', 'τζαι'), ('τζαι', 'δειτε'), ('δειτε', 'πλασμα'), ('πλασμα', 'να'), ('να', 'υποστηριζει'), ('υποστηριζει', 'τες'), ('τες', 'γεναιτζιες'), ('γεναιτζιες', 'εσεις'), ('εσεις', 'οι'), ('οι', 'θκυο')], [('που', 'την'), ('την', 'ωρα'), ('ωρα', 'που'), ('που', 'το'), ('το', 'εθκιαβασα'), ('εθκιαβασα', 'πονω'), ('πονω', 'το'), ('το', 'στομαχι'), ('στομαχι', 'μου')], [('φατε', 'τζιαι'), ('τζιαι', 'ποψε'), ('ποψε', 'διοτι'), ('διοτι', 'που'), ('που', 'δευτερας'), ('δευτερας', 'εν'), ('εν', 'εσσιει'), ('εσσιει', 'τιποτες'), ('τιποτες', 'πιον')], [('οι', 'εν'), ('εν', 'εσσιει'), ('εσσιει', 'προβλημα'), ('προβλημα', 'ο'), ('ο', 'φιλος'), ('φιλος', 'μου'), ('μου', 'εξαναειδε'), ('εξαναειδε', 'τα'), ('τα', 'ουλα')], [('αληθκεια', 'εσσιει'), ('εσσιει', 'κανενα'), ('κανενα', 'που'), ('που', 'τα'), ('τα', 'γραφει'), ('γραφει', 'τουτα')], [('που', 'εισαστεν'), ('εισαστεν', 'οι'), ('οι', 'εξυπνοι'), ('εξυπνοι', 'τζιαι'), ('τζιαι', 'εσσιετε'), ('εσσιετε', 'τις'), ('τις', 'πιο'), ('πιο', 'ωραιες'), ('ωραιες', 'ιδεες'), ('ιδεες', 'να'), ('να', 'εβρετε'), ('εβρετε', 'λυσεις')], [('εν', 'τζαι'), ('τζαι', 'βαλω'), ('βαλω', 'το'), ('το', 'σιερι'), ('σιερι', 'μου'), ('μου', 'στη'), ('στη', 'φωθκια')], [('η', 'μοναδικη'), ('μοναδικη', 'κυπραια'), ('κυπραια', 'που'), ('που', 'της'), ('της', 'αρεσκει'), ('αρεσκει', 'να'), ('να', 'καμνει'), ('καμνει', 'τες'), ('τες', 'δουλειες'), ('δουλειες', 'του'), ('του', 'σπιθκιου')], [('ειναι', 'για'), ('για', 'να'), ('να', 'φακκας'), ('φακκας', 'τη'), ('τη', 'κκελλε'), ('κκελλε', 'σ'), ('σ', 'δεν'), ('δεν', 'τ'), ('τ', 'αρεσκει'), ('αρεσκει', 'να'), ('να', 'παιζει')], [('αρεσκει', 'μου'), ('μου', 'που'), ('που', 'εν'), ('εν', 'προσπαθει'), ('προσπαθει', 'να'), ('να', 'καλαμαρισει'), ('καλαμαρισει', 'ο'), ('ο', 'παμπος')], [('αρεσκει', 'μου'), ('μου', 'πολλα'), ('πολλα', 'τουτος'), ('τουτος', 'εθθα'), ('εθθα', 'το'), ('το', 'ελεα'), ('ελεα', 'δαμαι')], [('ο', 'κυριος'), ('κυριος', 'που'), ('που', 'καλαμαριζει'), ('καλαμαριζει', 'ακομα'), ('ακομα', 'εν'), ('εν', 'εφκηκε')], [('μια', 'λευκωσιατισσα'), ('λευκωσιατισσα', 'που'), ('που', 'καλαμαριζει'), ('καλαμαριζει', 'οτι'), ('οτι', 'χειροτερο'), ('χειροτερο', 'χειροτερη'), ('χειροτερη', 'απο'), ('απο', 'μια'), ('μια', 'παφιτισσα'), ('παφιτισσα', 'που'), ('που', 'καλαμαριζει')], [('ιντα', 'καλαμαριζει'), ('καλαμαριζει', 'τουτος')], [('δεν', 'το'), ('το', 'κανουμε'), ('κανουμε', 'τοσο'), ('τοσο', 'πολλα'), ('πολλα', 'η'), ('η', 'κυπραια'), ('κυπραια', 'που'), ('που', 'προσπαθει'), ('προσπαθει', 'να'), ('να', 'καλαμαρισει'), ('καλαμαρισει', 'τζε'), ('τζε', 'φεφκει'), ('φεφκει', 'της'), ('της', 'το'), ('το', 'κυπριακο')], [('καμνει', 
'λλια'), ('λλια', 'τσιαλιμουθκια'), ('τσιαλιμουθκια', 'παραπανω'), ('παραπανω', 'που'), ('που', 'το'), ('το', 'κανονικο'), ('κανονικο', 'αλλα'), ('αλλα', 'εν'), ('εν', 'μωρο'), ('μωρο', 'πιστευκω'), ('πιστευκω', 'θα'), ('θα', 'σασει')], [('θελω', 'να'), ('να', 'πιστευκω'), ('πιστευκω', 'οτι'), ('οτι', 'τουτες'), ('τουτες', 'εν'), ('εν', 'οι'), ('οι', 'πιο'), ('πιο', 'φτανες'), ('φτανες', 'του'), ('του', 'νησιου'), ('νησιου', 'μας'), ('μας', 'οχι'), ('οχι', 'οι'), ('οι', 'πιο'), ('πιο', 'ομορφες')], [('εσιει', 'χαζιν'), ('χαζιν', 'ο'), ('ο', 'τυπος'), ('τυπος', 'λαλει'), ('λαλει', 'τζιαι'), ('τζιαι', 'καμποσες'), ('καμποσες', 'αληθκειες')], [('στην', 'κυπρο'), ('κυπρο', 'αμα'), ('αμα', 'πεις'), ('πεις', 'οτι'), ('οτι', 'καμνεις'), ('καμνεις', 'εθελοντισμο'), ('εθελοντισμο', 'ρωτουν'), ('ρωτουν', 'σε'), ('σε', 'αν'), ('αν', 'εισαι'), ('εισαι', 'ανεργος'), ('ανεργος', 'η'), ('η', 'παρα'), ('παρα', 'πολλα'), ('πολλα', 'αθκειασερος')], [('εσχιει', 'κανενα'), ('κανενα', 'που'), ('που', 'συγγενεις'), ('συγγενεις', 'του'), ('του', 'επηε'), ('επηε', 'οξα'), ('οξα', 'καμνεις'), ('καμνεις', 'οτι'), ('οτι', 'δεν'), ('δεν', 'τους'), ('τους', 'ξερεις'), ('ξερεις', 'μετα'), ('μετα', 'που'), ('που', 'τουτο')], [('το', 'θεμα'), ('θεμα', 'που'), ('που', 'βαλλεις'), ('βαλλεις', 'εσυζητηθηκε'), ('εσυζητηθηκε', 'δαμαι')], [('μανα', 'μου'), ('μου', 'διουν'), ('διουν', 'μου'), ('μου', 'τες'), ('τες', 'ερωτησεις'), ('ερωτησεις', 'ξερεις'), ('ξερεις', 'ποσα'), ('ποσα', 'πιανω'), ('πιανω', 'για'), ('για', 'να'), ('να', 'δουλεφκω'), ('δουλεφκω', 'δαμαι')], [('εγνωρισα', 'μιαν'), ('μιαν', 'εχτες'), ('εχτες', 'κουκλαρα'), ('κουκλαρα', 'αλλα'), ('αλλα', 'προβληματιζει'), ('προβληματιζει', 'με'), ('με', 'το'), ('το', 'υψος'), ('υψος', 'της'), ('της', 'εν'), ('εν', 'τοσο'), ('τοσο', 'κοντη'), ('κοντη', 'που'), ('που', 'αν'), ('αν', 'ππεσει'), ('ππεσει', 'αφκο'), ('αφκο', 'που'), ('που', 'τον'), ('τον', 'κωλο'), ('κωλο', 'της'), ('της', 'εθθα'), ('εθθα', 'σπασει')], [('εθθα', 'πιστεψετε'), ('πιστεψετε', 'ποια'), ('ποια', 'χωρκανη'), ('χωρκανη', 'εκαμεν'), ('εκαμεν', 'αιτηση')], [('βαρκουμαι', 'ο'), ('ο', 'αδρωπος'), ('αδρωπος', 'εθθα'), ('εθθα', 'τη'), ('τη', 'φκαλει'), ('φκαλει', 'σκαρτη')], [('πε', 'μας'), ('μας', 'τζαι'), ('τζαι', 'βαρκουμαι'), ('βαρκουμαι', 'να'), ('να', 'θκιεβασω')], [('περκει', 'τους'), ('τους', 'κουντησουν'), ('κουντησουν', 'παλε'), ('παλε', 'δεν'), ('δεν', 'θα'), ('θα', 'προλαβουν'), ('προλαβουν', 'τζιαμε'), ('τζιαμε', 'που'), ('που', 'θα'), ('θα', 'τους'), ('τους', 'κοψουμε')], [('επηεν', 'τζιαμαι'), ('τζιαμαι', 'εκατσεν'), ('εκατσεν', 'εναν'), ('εναν', 'τεταρτο')], [('ναι', 'εψες'), ('εψες', 'τα'), ('τα', 'μεσανυχτα'), ('μεσανυχτα', 'αρα'), ('αρα', 'σημερα'), ('σημερα', 'αρκεψαμεν'), ('αρκεψαμεν', 'ιδιαιτερα'), ('ιδιαιτερα', 'περιμενω'), ('περιμενω', 'δαμε'), ('δαμε', 'να'), ('να', 'τελειωσει')], [('γιαννο', 'μα'), ('μα', 'εσιει'), ('εσιει', 'σουβλες'), ('σουβλες', 'δεν'), ('δεν', 'νηστευκω'), ('νηστευκω', 'πλεον')], [('τζε', 'ιντα'), ('ιντα', 'που'), ('που', 'να'), ('να', 'παει'), ('παει', 'να'), ('να', 'κανει'), ('κανει', 'δαμε')], [('ε', 'οι'), ('οι', 'κορη'), ('κορη', 'ιντα'), ('ιντα', 'που'), ('που', 'παθες')], [('αρεσκει', 'μου'), ('μου', 'που'), ('που', 'την'), ('την', 'αρωτα'), ('αρωτα', 'ιντα'), ('ιντα', 'που'), ('που', 'εκαμνε'), ('εκαμνε', 'και'), ('και', 'εκλεισε'), ('εκλεισε', 'ο'), ('ο', 'λαιμος'), ('λαιμος', 'της')], [('οι', 'ρε'), ('ρε', 'τζαι'), ('τζαι', 'εσουνη'), ('εσουνη', 'ετσι'), ('ετσι', 'πραγμα'), ('πραγμα', 'μα'), ('μα', 'ιντα'), ('ιντα', 'που'), ('που', 'μας'), ('μας', 
'λαλεις'), ('λαλεις', 'ρε'), ('ρε', 'κουμπαρε'), ('κουμπαρε', 'μου')], [('ιντα', 'που'), ('που', 'καμνεις'), ('καμνεις', 'φιλουι'), ('φιλουι', 'μου')], [('τελικα', 'ηντα'), ('ηντα', 'που'), ('που', 'εν'), ('εν', 'γινει'), ('γινει', 'ολαν'), ('ολαν', 'εννα'), ('εννα', 'φαμεν'), ('φαμεν', 'οξα'), ('οξα', 'εννα'), ('εννα', 'μεινουμε'), ('μεινουμε', 'νηστιτζιοι')], [('μα', 'ηντα'), ('ηντα', 'που'), ('που', 'καμνουν'), ('καμνουν', 'σιορ'), ('σιορ', 'στο'), ('στο', 'πανεπιστημιον'), ('πανεπιστημιον', 'εν'), ('εν', 'μες'), ('μες', 'το'), ('το', 'φακελλουιν'), ('φακελλουιν', 'που'), ('που', 'πκιανουν'), ('πκιανουν', 'τα'), ('τα', 'δοκτορατα')], [('η', 'αληθκεια'), ('αληθκεια', 'ενει'), ('ενει', 'αμαν'), ('αμαν', 'στη'), ('στη', 'κυπρο'), ('κυπρο', 'εν'), ('εν', 'φαντασμαγορικα'), ('φαντασμαγορικα', 'στο'), ('στο', 'μανχατταν'), ('μανχατταν', 'τζιαι'), ('τζιαι', 'στες'), ('στες', 'αγορες'), ('αγορες', 'τις'), ('τις', 'ευρωπαικες'), ('ευρωπαικες', 'ηντα'), ('ηντα', 'που'), ('που', 'ενει')], [('εν', 'μου'), ('μου', 'αρεσκουν'), ('αρεσκουν', 'οι'), ('οι', 'σσιεφταλιες')], [('τωρα', 'πλεον'), ('πλεον', 'εν'), ('εν', 'βαζεις'), ('βαζεις', 'για'), ('για', 'κανενα'), ('κανενα', 'το'), ('το', 'σσιερι'), ('σσιερι', 'στη'), ('στη', 'φωθκια')], [('δαμε', 'ηβρε'), ('ηβρε', 'μας'), ('μας', 'φαση'), ('φαση', 'που'), ('που', 'καμνει'), ('καμνει', 'σσιερι'), ('σσιερι', 'ο'), ('ο', 'παικτης'), ('παικτης', 'τους'), ('τους', 'τζαι'), ('τζαι', 'ζητα'), ('ζητα', 'τη'), ('τη', 'γνωμη'), ('γνωμη', 'μας'), ('μας', 'αν'), ('αν', 'ηταν'), ('ηταν', 'πεναλτυ')], [('σικκιμε', 'ολαν'), ('ολαν', 'διω'), ('διω', 'τζαι'), ('τζαι', 'γω')], [('τωρα', 'εν'), ('εν', 'τους'), ('τους', 'ταιρκαζει'), ('ταιρκαζει', 'ενα'), ('ενα', 'σκασε'), ('σκασε', 'σου'), ('σου', 'ρα')], [('δαμε', 'ταιρκαζει'), ('ταιρκαζει', 'η'), ('η', 'κουβεντα'), ('κουβεντα', 'μονον'), ('μονον', 'τους'), ('τους', 'πελλους'), ('πελλους', 'δεν'), ('δεν', 'φωρει'), ('φωρει', 'ο'), ('ο', 'τοπος')], [('κοπελια', 'η'), ('η', 'αριθμητικη'), ('αριθμητικη', 'μ'), ('μ', 'εν'), ('εν', 'μια'), ('μια', 'χαρα'), ('χαρα', 'η'), ('η', 'μακαριτισσα'), ('μακαριτισσα', 'η'), ('η', 'στετε'), ('στετε', 'μ'), ('μ', 'ηταν'), ('ηταν', 'δασκαλα'), ('δασκαλα', 'ολαν')], [('οποιος', 'εν'), ('εν', 'εσιει'), ('εσιει', 'νουν'), ('νουν', 'εσιει'), ('εσιει', 'ποθκια'), ('ποθκια', 'ελαλεν'), ('ελαλεν', 'η'), ('η', 'στετε'), ('στετε', 'μου')], [('εμεναν', 'η'), ('η', 'στετε'), ('στετε', 'μου'), ('μου', 'εβουραν'), ('εβουραν', 'με'), ('με', 'μες'), ('μες', 'τες'), ('τες', 'αυλαες'), ('αυλαες', 'να'), ('να', 'φαω'), ('φαω', 'αυκα'), ('αυκα', 'που'), ('που', 'τες'), ('τες', 'ορνιθες'), ('ορνιθες', 'μας')], [('ηνταλως', 'καμνεις'), ('καμνεις', 'για'), ('για', 'λλιην'), ('λλιην', 'σκονη')], [('δαμε', 'λαλεις'), ('λαλεις', 'τους'), ('τους', 'ινταλως'), ('ινταλως', 'τους'), ('τους', 'εφκαλλαν'), ('εφκαλλαν', 'που'), ('που', 'τα'), ('τα', 'σπιτια'), ('σπιτια', 'τους'), ('τους', 'οι'), ('οι', 'δικοι'), ('δικοι', 'μας'), ('μας', 'τζιαι'), ('τζιαι', 'λαλουν'), ('λαλουν', 'σου'), ('σου', 'εν'), ('εν', 'μονοι'), ('μονοι', 'τους'), ('τους', 'που'), ('που', 'εφυαν')], [('ως', 'τζιαι'), ('τζιαι', 'του'), ('του', 'μιτση'), ('μιτση', 'φορεις'), ('φορεις', 'του'), ('του', 'μασκα')], [('τουτην', 'την'), ('την', 'κουβεντα'), ('κουβεντα', 'ακουα'), ('ακουα', 'πολλα'), ('πολλα', 'που'), ('που', 'ημουν'), ('ημουν', 'μιτσια')], [('κορη', 'μιτσια'), ('μιτσια', 'ηνταλως'), ('ηνταλως', 'εβρεθηκε')], [('αφου', 'εν'), ('εν', 'τζοιματε'), ('τζοιματε', 'ρε'), ('ρε', 'κοπελια'), ('κοπελια', 'καθε'), ('καθε', 'δκυο'), ('δκυο', 
'ωρες'), ('ωρες', 'η'), ('η', 'μιτσια'), ('μιτσια', 'εγερτηριο')], [('εισχιεν', 'τα'), ('τα', 'ουλλα'), ('ουλλα', 'η'), ('η', 'μαρουλου')], [('α', 'μανα'), ('μανα', 'μου'), ('μου', 'ιντα'), ('ιντα', 'αππαρα')], [('ελεος', 'πκιον'), ('πκιον', 'το'), ('το', 'παναυρι'), ('παναυρι', 'του'), ('του', 'χωρκου'), ('χωρκου', 'εν'), ('εν', 'καλλυτερον'), ('καλλυτερον', 'που'), ('που', 'τουτα'), ('τουτα', 'που'), ('που', 'θωρουμε'), ('θωρουμε', 'ποψε')], [('να', 'τρως'), ('τρως', 'μονον'), ('μονον', 'του'), ('του', 'ψουμι'), ('ψουμι', 'απ'), ('απ', 'εφκιασαν'), ('εφκιασαν', 'οι'), ('οι', 'φτουχοι')], [('με', 'το'), ('το', 'ιδιο'), ('ιδιο', 'αλευρι'), ('αλευρι', 'τζιαι'), ('τζιαι', 'το'), ('το', 'ιδιο'), ('ιδιο', 'νερο'), ('νερο', 'το'), ('το', 'ιδιο'), ('ιδιο', 'ψουμι'), ('ψουμι', 'εννα'), ('εννα', 'φκαλεις')], [('εξυπνησα', 'πιο'), ('πιο', 'πελλος'), ('πελλος', 'που'), ('που', 'ποττε')], [('τουτο', 'εν'), ('εν', 'θα'), ('θα', 'αλλαξει'), ('αλλαξει', 'ποττε')], [('το', 'μουχτιν'), ('μουχτιν', 'εν'), ('εν', 'το'), ('το', 'καλυττερο'), ('καλυττερο', 'σιορ')], [('κοπελια', 'τζαμε'), ('τζαμε', 'στη'), ('στη', 'εταιρεια'), ('εταιρεια', 'οι'), ('οι', 'μιτσιοι'), ('μιτσιοι', 'που'), ('που', 'γραφουντε'), ('γραφουντε', 'στες'), ('στες', 'ακαδημιες'), ('ακαδημιες', 'πιαννουν'), ('πιαννουν', 'τζαι'), ('τζαι', 'σεζον'), ('σεζον', 'μουχτιν'), ('μουχτιν', 'εννεν')], [('αμπα', 'τζιαι'), ('τζιαι', 'εν'), ('εν', 'ειμαι'), ('ειμαι', 'εγιω')], [('εν', 'για'), ('για', 'τζεινα'), ('τζεινα', 'που'), ('που', 'λαλω')], [('τον', 'τζαιρο'), ('τζαιρο', 'που'), ('που', 'εγιω'), ('εγιω', 'επηεννα'), ('επηεννα', 'εσου'), ('εσου', 'ερκεσουν')], [('μα', 'ιντα'), ('ιντα', 'χωρκον'), ('χωρκον', 'εν'), ('εν', 'τουτο')], [('πιο', 'χωρκαθκιον'), ('χωρκαθκιον', 'πεθανισκεις')], [('ηντα', 'εσου'), ('εσου', 'εν'), ('εν', 'ερκεσαι'), ('ερκεσαι', 'ρε'), ('ρε', 'σιαχουρη'), ('σιαχουρη', 'π'), ('π', 'εν'), ('εν', 'τζιαι'), ('τζιαι', 'διπλα'), ('διπλα', 'σ'), ('σ', 'οποταν'), ('οποταν', 'μεν'), ('μεν', 'κατσιαριζεις'), ('κατσιαριζεις', 'ολαν')], [('τοσα', 'λαλουσιν'), ('λαλουσιν', 'και'), ('και', 'ουτε'), ('ουτε', 'ενα'), ('ενα', 'σκασε'), ('σκασε', 'εσουνι'), ('εσουνι', 'ρα')], [('επουλησεν', 'το'), ('το', 'αυτοκινητο'), ('αυτοκινητο', 'οξα'), ('οξα', 'ακομα')], [('οι', 'οτι'), ('οτι', 'εσιει'), ('εσιει', 'σκονη'), ('σκονη', 'απλα'), ('απλα', 'αν'), ('αν', 'βουττησω'), ('βουττησω', 'τη'), ('τη', 'κκελλε'), ('κκελλε', 'μου'), ('μου', 'μες'), ('μες', 'το'), ('το', 'χωμα'), ('χωμα', 'τζιαι'), ('τζιαι', 'χωσω'), ('χωσω', 'την'), ('την', 'εν'), ('εν', 'να'), ('να', 'αναπνεω'), ('αναπνεω', 'καλλυτερα')], [('εγεμωσεν', 'η'), ('η', 'κκελλε'), ('κκελλε', 'της'), ('της', 'με'), ('με', 'αεριο'), ('αεριο', 'επειδη'), ('επειδη', 'ηρτεν')], [('εν', 'καταλαβαινω'), ('καταλαβαινω', 'τι'), ('τι', 'μου'), ('μου', 'λαλεις'), ('λαλεις', 'τωρα'), ('τωρα', 'θες'), ('θες', 'να'), ('να', 'το'), ('το', 'καμνουμε'), ('καμνουμε', 'η'), ('η', 'οι')], [('τουτο', 'πραμα'), ('πραμα', 'εν'), ('εν', 'σολα'), ('σολα', 'παπουτσιου'), ('παπουτσιου', 'εμποτισμενη'), ('εμποτισμενη', 'με'), ('με', 'λια'), ('λια', 'κομμαθκια'), ('κομμαθκια', 'απο'), ('απο', 'κωλο'), ('κωλο', 'κοτοπουλου')], [('τι', 'βλεπουν'), ('βλεπουν', 'τα'), ('τα', 'ματουθκια'), ('ματουθκια', 'μας'), ('μας', 'να'), ('να', 'φκουμε'), ('φκουμε', 'τελικα')], [('τεσσερα', 'τζαι'), ('τζαι', 'τεσσερα'), ('τεσσερα', 'καμνουσιν'), ('καμνουσιν', 'οκτω'), ('οκτω', 'τεσσερα'), ('τεσσερα', 'παλικαρκια'), ('παλικαρκια', 'πασι'), ('πασι', 'στον'), ('στον', 'πολεμο'), ('πολεμο', 'αλλα'), ('αλλα', 'εχουσιν'), ('εχουσιν', 
'τζαι'), ('τζαι', 'στολον')], [('οι', 'χοχοι'), ('χοχοι', 'ουλλοι'), ('ουλλοι', 'εγιναν'), ('εγιναν', 'φανατικοι'), ('φανατικοι', 'ρουμανοι'), ('ρουμανοι', 'ονειρα'), ('ονειρα', 'καμνουσιν'), ('καμνουσιν', 'τρελλα')], [('απο', 'εν'), ('εν', 'εξω'), ('εξω', 'του'), ('του', 'χορου'), ('χορου', 'πολλα'), ('πολλα', 'τραουθκια'), ('τραουθκια', 'ξερει')], [('τουτα', 'τα'), ('τα', 'τραουθκια'), ('τραουθκια', 'λαλει'), ('λαλει', 'της'), ('της', 'μανας'), ('μανας', 'της'), ('της', 'και'), ('και', 'αρεσαν'), ('αρεσαν', 'της')], [('τουτο', 'που'), ('που', 'φκαινουν'), ('φκαινουν', 'τραουδιστες'), ('τραουδιστες', 'που'), ('που', 'εν'), ('εν', 'τους'), ('τους', 'ηξερε'), ('ηξερε', 'η'), ('η', 'μανα'), ('μανα', 'τους'), ('τους', 'ξεπερνα'), ('ξεπερνα', 'με')], [('τρεις', 'ελιες'), ('ελιες', 'τζιε'), ('τζιε', 'μιαν'), ('μιαν', 'ντοματα'), ('ντοματα', 'αγαπω'), ('αγαπω', 'μιαν'), ('μιαν', 'μαυρομματα')], [('ετσι', 'κοπελλαν'), ('κοπελλαν', 'μπορει'), ('μπορει', 'τζαι'), ('τζαι', 'εγω'), ('εγω', 'να'), ('να', 'την'), ('την', 'ερωτευτω')], [('που', 'εν'), ('εν', 'η'), ('η', 'λουλουδου'), ('λουλουδου', 'σιορ'), ('σιορ', 'εσσιει'), ('εσσιει', 'μιση'), ('μιση', 'ωρα'), ('ωρα', 'που'), ('που', 'την'), ('την', 'γυρευκω'), ('γυρευκω', 'να'), ('να', 'συρω'), ('συρω', 'κανενα'), ('κανενα', 'γαρυφαλλο')], [('γυρευκω', 'τη'), ('τη', 'φατσουα'), ('φατσουα', 'με'), ('με', 'το'), ('το', 'ππουρτου'), ('ππουρτου', 'δασκαλε')], [('εν', 'να'), ('να', 'γυρευκω'), ('γυρευκω', 'να'), ('να', 'εβρω'), ('εβρω', 'την'), ('την', 'καμερα'), ('καμερα', 'για'), ('για', 'αορατο'), ('αορατο', 'φακο')], [('αμα', 'παουριζετε'), ('παουριζετε', 'στο'), ('στο', 'τηλεφωνο'), ('τηλεφωνο', 'οτι'), ('οτι', 'θελετε'), ('θελετε', 'ραντεβου'), ('ραντεβου', 'εδω'), ('εδω', 'και'), ('και', 'τωρα'), ('τωρα', 'εν'), ('εν', 'μου'), ('μου', 'διατε'), ('διατε', 'κινητρο'), ('κινητρο', 'να'), ('να', 'εβρω'), ('εβρω', 'ωρα')], [('θελω', 'να'), ('να', 'φυω'), ('φυω', 'να'), ('να', 'εβρω'), ('εβρω', 'μια'), ('μια', 'σπηλια'), ('σπηλια', 'να'), ('να', 'μπω'), ('μπω', 'τζιει'), ('τζιει', 'μεσα'), ('μεσα', 'να'), ('να', 'ζω'), ('ζω', 'με'), ('με', 'τες'), ('τες', 'αρκουδες'), ('αρκουδες', 'με'), ('με', 'τα'), ('τα', 'σιονια')], [('τα', 'ιδια'), ('ιδια', 'ελαλουν'), ('ελαλουν', 'τοτε'), ('τοτε', 'εσιει'), ('εσιει', 'τροπο'), ('τροπο', 'για'), ('για', 'να'), ('να', 'τα'), ('τα', 'εβρω'), ('εβρω', 'ευκολα')], [('ε', 'γεναικα'), ('γεναικα', 'αυριον'), ('αυριον', 'θα'), ('θα', 'εσιει'), ('εσιει', 'σιουρα'), ('σιουρα', 'κουπεπια')], [('που', 'επιττωσα'), ('επιττωσα', 'εφαα'), ('εφαα', 'ενα'), ('ενα', 'σινι'), ('σινι', 'μπουγατσα'), ('μπουγατσα', 'τζαι'), ('τζαι', 'πουπανω'), ('πουπανω', 'κουπεπια'), ('κουπεπια', 'εν'), ('εν', 'ηταν'), ('ηταν', 'τοσο'), ('τοσο', 'νεκατομενα')], [('ο', 'λοος'), ('λοος', 'του'), ('του', 'αδρωπου'), ('αδρωπου', 'ο,τι'), ('ο,τι', 'τζιαι'), ('τζιαι', 'αν'), ('αν', 'ενει')], [('να', 'σας'), ('σας', 'παρακαλεσω'), ('παρακαλεσω', 'χωρις'), ('χωρις', 'να'), ('να', 'θελω'), ('θελω', 'να'), ('να', 'σας'), ('σας', 'προσβαλω'), ('προσβαλω', 'να'), ('να', 'μεν'), ('μεν', 'μου'), ('μου', 'ξανα'), ('ξανα', 'στειλετε'), ('στειλετε', 'οτι'), ('οτι', 'εσιη'), ('εσιη', 'σχεση'), ('σχεση', 'με'), ('με', 'μηνυματα'), ('μηνυματα', 'που'), ('που', 'πρεπει'), ('πρεπει', 'να'), ('να', 'καμω'), ('καμω', 'κοπυ'), ('κοπυ', 'σε'), ('σε', 'φιλους'), ('φιλους', 'ειδαλλως'), ('ειδαλλως', 'εννα'), ('εννα', 'χασω'), ('χασω', 'την'), ('την', 'τυχη'), ('τυχη', 'που'), ('που', 'τρεσιη'), ('τρεσιη', 'που'), ('που', 'τα'), ('τα', 'πουναρκα'), ('πουναρκα', 'μου')], [('ποττε', 
'εν'), ('εν', 'ημουν'), ('ημουν', 'τυχερος'), ('τυχερος', 'αντιθετως'), ('αντιθετως', 'ειμαι'), ('ειμαι', 'τοσο'), ('τοσο', 'καχτος'), ('καχτος', 'τις'), ('τις', 'πλειστες'), ('πλειστες', 'φορες'), ('φορες', 'που'), ('που', 'εσκεφτηκα'), ('εσκεφτηκα', 'να'), ('να', 'χτυπησω'), ('χτυπησω', 'τον'), ('τον', 'καχτο'), ('καχτο', 'σε'), ('σε', 'ταττοο')], [('ειμαι', 'σιουρος'), ('σιουρος', 'οτι'), ('οτι', 'αν'), ('αν', 'δεν'), ('δεν', 'ενοχλησω'), ('ενοχλησω', 'ατομα'), ('ατομα', 'με'), ('με', 'τα'), ('τα', 'μηνυματα'), ('μηνυματα', 'περι'), ('περι', 'θεων'), ('θεων', 'αγγελων'), ('αγγελων', 'και'), ('και', 'τυχης'), ('τυχης', 'ο'), ('ο', 'θεος'), ('θεος', 'εν'), ('εν', 'θα'), ('θα', 'με'), ('με', 'τιμωρησει'), ('τιμωρησει', 'ουτε'), ('ουτε', 'εννα'), ('εννα', 'χασω'), ('χασω', 'το'), ('το', 'θαυμα'), ('θαυμα', 'που'), ('που', 'θα'), ('θα', 'εγινετουν'), ('εγινετουν', 'σε'), ('σε', 'αντιθετη'), ('αντιθετη', 'περιπτωση')], [('αντι', 'να'), ('να', 'μου'), ('μου', 'ξανα'), ('ξανα', 'στειλετε'), ('στειλετε', 'οτιδηποτε'), ('οτιδηποτε', 'αφορα'), ('αφορα', 'μπορειτε'), ('μπορειτε', 'να'), ('να', 'μου'), ('μου', 'πειτε'), ('πειτε', 'ενα'), ('ενα', 'γεια'), ('γεια', 'σου'), ('σου', 'ενα'), ('ενα', 'τι'), ('τι', 'καμνεις'), ('καμνεις', 'χωρις'), ('χωρις', 'να'), ('να', 'υπαρξουν'), ('υπαρξουν', 'στην'), ('στην', 'συνομιλια'), ('συνομιλια', 'μας'), ('μας', 'μακρυσκελες'), ('μακρυσκελες', 'ανουσια'), ('ανουσια', 'μηνυματα'), ('μηνυματα', 'τζιαι'), ('τζιαι', 'εικονες'), ('εικονες', 'που'), ('που', 'αναβοσβηννουν'), ('αναβοσβηννουν', 'οπως'), ('οπως', 'τζινες'), ('τζινες', 'που'), ('που', 'βαλουν'), ('βαλουν', 'οι'), ('οι', 'θκιαες'), ('θκιαες', 'μου'), ('μου', 'τα'), ('τα', 'χριστουγεννα'), ('χριστουγεννα', 'τζιαι'), ('τζιαι', 'παθαινω'), ('παθαινω', 'ενα'), ('ενα', 'ειδος'), ('ειδος', 'επιληψιας')], [('εν', 'καλο'), ('καλο', 'εν'), ('εν', 'το'), ('το', 'εκατεβασα'), ('εκατεβασα', 'ακομα')], [('εν', 'εσιει'), ('εσιει', 'ακομα'), ('ακομα', 'φαινεται')], [('θα', 'ρωτησω'), ('ρωτησω', 'κανενα'), ('κανενα', 'πληροφορηκαριο'), ('πληροφορηκαριο', 'να'), ('να', 'δω'), ('δω', 'τι'), ('τι', 'εννα'), ('εννα', 'μου'), ('μου', 'πει')], [('η', 'δουλεια'), ('δουλεια', 'που'), ('που', 'εγινηκε'), ('εγινηκε', 'εννεν'), ('εννεν', 'μονο'), ('μονο', 'για'), ('για', 'μπυρες')], [('πρεπει', 'να'), ('να', 'φερεις'), ('φερεις', 'τζιαι'), ('τζιαι', 'σουβλακια')], [('χρωστω', 'τα'), ('τα', 'εγω')], [('εννα', 'καμουμε'), ('καμουμε', 'κατι'), ('κατι', 'πυραβλουθκια')], [('βαρκουμαι', 'εκατσα'), ('εκατσα', 'εκαμα'), ('εκαμα', 'ολοκληρο'), ('ολοκληρο', 'καστρο'), ('καστρο', 'που'), ('που', 'τη'), ('τη', 'βαρεμαρα')], [('ως', 'τζιαι'), ('τζιαι', 'εμαθαινα'), ('εμαθαινα', 'τζιαι'), ('τζιαι', 'καμω'), ('καμω', 'τες')], [('εν', 'καθηγητης'), ('καθηγητης', 'ο'), ('ο', 'πελλος')], [('εκαμα', 'μιαν'), ('μιαν', 'ερευνα'), ('ερευνα', 'που'), ('που', 'εδημιουργησα')], [('τζεινο', 'π'), ('π', 'εχουν'), ('εχουν', 'τα'), ('τα', 'τζαινουρκα'), ('τζαινουρκα', 'τζιαι'), ('τζιαι', 'να'), ('να', 'αλλαζουν'), ('αλλαζουν', 'τραουθκια'), ('τραουθκια', 'με'), ('με', 'κινητο'), ('κινητο', 'αρεσαν'), ('αρεσαν', 'μ')], [('α', 'ρε'), ('ρε', 'κοπελια'), ('κοπελια', 'αντρικκο'), ('αντρικκο', 'μ'), ('μ', 'μολις'), ('μολις', 'ερτεις'), ('ερτεις', 'να'), ('να', 'τους'), ('τους', 'καμουμε'), ('καμουμε', 'παρτυ'), ('παρτυ', 'εσσω'), ('εσσω', 'μ')], [('κατσε', 'θκιαβασε'), ('θκιαβασε', 'τζι'), ('τζι', 'ελα'), ('ελα', 'ελλαδα')], [('κατι', 'τρελλο'), ('τρελλο', 'ετοιμαζει'), ('ετοιμαζει', 'ο'), ('ο', 'αντρικκος'), ('αντρικκος', 'παλε')], [('σε', 'ουλλα'), ('ουλλα', 
'τα'), ('τα', 'κοπελια'), ('κοπελια', 'που'), ('που', 'μου'), ('μου', 'ευχηθηκασιν'), ('ευχηθηκασιν', 'σημερα'), ('σημερα', 'λαλω'), ('λαλω', 'τους'), ('τους', 'ευχαριστω'), ('ευχαριστω', 'πολλα')], [('γιατι', 'με'), ('με', 'αφηκετε'), ('αφηκετε', 'μονη'), ('μονη', 'μου'), ('μου', 'μαζι'), ('μαζι', 'τους')], [('αν', 'ακουσετε'), ('ακουσετε', 'κανενα'), ('κανενα', 'μες'), ('μες', 'το'), ('το', 'χωρκο'), ('χωρκο', 'μεν'), ('μεν', 'φοηθειτε'), ('φοηθειτε', 'εγιω'), ('εγιω', 'ημουν')], [('περπατα', 'τζιαι'), ('τζιαι', 'λλιο'), ('λλιο', 'να'), ('να', 'καμεις'), ('καμεις', 'τα'), ('τα', 'αφκα')], [('ποσο', 'σιεσμενοι'), ('σιεσμενοι', 'ειμαστε'), ('ειμαστε', 'που'), ('που', 'τον'), ('τον', 'κοσμο'), ('κοσμο', 'ρε'), ('ρε', 'ενα'), ('ενα', 'προτζεκτ'), ('προτζεκτ', 'να'), ('να', 'καμουμε'), ('καμουμε', 'εν'), ('εν', 'γινεται')], [('ουλλοι', 'εχουν'), ('εχουν', 'μας'), ('μας', 'για'), ('για', 'κκιλιτζιρους'), ('κκιλιτζιρους', 'τζιαι'), ('τζιαι', 'εχουν'), ('εχουν', 'δικαιο'), ('δικαιο', 'τζιαι'), ('τζιαι', 'μεις'), ('μεις', 'μαχουμαστε'), ('μαχουμαστε', 'να'), ('να', 'καμουμε'), ('καμουμε', 'τους'), ('τους', 'πολιτισμενους')], [('ατε', 'ολαν'), ('ολαν', 'τζιαι'), ('τζιαι', 'κανει')], [('κορη', 'εν'), ('εν', 'για'), ('για', 'να'), ('να', 'μπαινει'), ('μπαινει', 'μπροστα'), ('μπροστα', 'που'), ('που', 'καθε'), ('καθε', 'βιντεο'), ('βιντεο', 'που'), ('που', 'τωρα'), ('τωρα', 'τζιαι'), ('τζιαι', 'να'), ('να', 'παει')], [('εν', 'τζι'), ('τζι', 'εικονα')], [('εκαμα', 'ενα'), ('ενα', 'καναλουι'), ('καναλουι', 'για'), ('για', 'τες'), ('τες', 'πελλαρες'), ('πελλαρες', 'που'), ('που', 'καμμω'), ('καμμω', 'εσσω'), ('εσσω', 'ουλλη'), ('ουλλη', 'μερα')], [('το', 'λοιπον'), ('λοιπον', 'ετο'), ('ετο', 'δαμε')], [('αλλο', 'λιο'), ('λιο', 'ετσι'), ('ετσι', 'εννα'), ('εννα', 'καμουμεν')], [('ειδαμεν', 'τα'), ('τα', 'ουλλα'), ('ουλλα', 'πιον')], [('οξα', 'κομα')], [('ευχαριστω', 'σε'), ('σε', 'ολους'), ('ολους', 'σας'), ('σας', 'κοπελια'), ('κοπελια', 'τζιαι'), ('τζιαι', 'στα'), ('στα', 'δικα'), ('δικα', 'σας')], [('ρε', 'πεθκια'), ('πεθκια', 'χτιπατε'), ('χτιπατε', 'μ'), ('μ', 'ενας'), ('ενας', 'το'), ('το', 'κινητο'), ('κινητο', 'μ'), ('μ', 'τζι'), ('τζι', 'εχασα'), ('εχασα', 'το')], [('παιθκια', 'εν'), ('εν', 'ξερω'), ('ξερω', 'για'), ('για', 'σας'), ('σας', 'εγω'), ('εγω', 'ουτε'), ('ουτε', 'φλαουνες'), ('φλαουνες', 'εκαμα'), ('εκαμα', 'ουτε'), ('ουτε', 'αφκα'), ('αφκα', 'εβαψα'), ('εβαψα', 'ουτε'), ('ουτε', 'επιταφιο'), ('επιταφιο', 'εκαταφερα'), ('εκαταφερα', 'να'), ('να', 'παω'), ('παω', 'ουτε'), ('ουτε', 'τιποτε')], [('οποταν', 'μετα'), ('μετα', 'το'), ('το', 'αρχικο'), ('αρχικο', 'μισαωρο'), ('μισαωρο', 'που'), ('που', 'εκαμνα'), ('εκαμνα', 'σαν'), ('σαν', 'το'), ('το', 'διχρονο'), ('διχρονο', 'εσηκωστηκα'), ('εσηκωστηκα', 'ετσαππισα'), ('ετσαππισα', 'εφυτεψα'), ('εφυτεψα', 'εξαπλωσα'), ('εξαπλωσα', 'στον'), ('στον', 'ηλιο'), ('ηλιο', 'τζαι'), ('τζαι', 'εθωρουν'), ('εθωρουν', 'τες'), ('τες', 'μελισσες'), ('μελισσες', 'τζαι'), ('τζαι', 'τες'), ('τες', 'πεταλουδες'), ('πεταλουδες', 'που'), ('που', 'πανω'), ('πανω', 'μου'), ('μου', 'ακουσα'), ('ακουσα', 'τα'), ('τα', 'χελιδονια'), ('χελιδονια', 'ετζημηθηκα'), ('ετζημηθηκα', 'εκατσα'), ('εκατσα', 'κατω'), ('κατω', 'που'), ('που', 'τ'), ('τ', 'αστρα'), ('αστρα', 'τζαι'), ('τζαι', 'ηπια'), ('ηπια', 'κρασουι'), ('κρασουι', 'τζαι'), ('τζαι', 'ακουσα'), ('ακουσα', 'τη'), ('τη', 'μουσικη'), ('μουσικη', 'μου'), ('μου', 'ουλλα'), ('ουλλα', 'εν'), ('εν', 'προσκαιρα')], [('καλη', 'καρθκιαν'), ('καρθκιαν', 'καλα'), ('καλα', 'ποτα'), ('ποτα', 'τζαι'), ('τζαι', 'με'), ('με', 
'υγεια'), ('υγεια', 'τζαι'), ('τζαι', 'αγαπη'), ('αγαπη', 'στη'), ('στη', 'ζωη'), ('ζωη', 'μας')], [('ποιες', 'εκφρασεις'), ('εκφρασεις', 'φακκουν'), ('φακκουν', 'σου'), ('σου', 'στα'), ('στα', 'νευρα'), ('νευρα', 'ρε'), ('ρε', 'κουμπαρε')], [('ριγος', 'χωρκαθκιου'), ('χωρκαθκιου', 'με'), ('με', 'διαπερασε'), ('διαπερασε', 'ειμαι'), ('ειμαι', 'σιουρη'), ('σιουρη', 'τζ'), ('τζ', 'εσενα')], [('νομιζω', 'οπου'), ('οπου', 'τζαι'), ('τζαι', 'να'), ('να', 'παμε'), ('παμε', 'καπου'), ('καπου', 'ενα'), ('ενα', 'δειξουμε'), ('δειξουμε', 'το'), ('το', 'νου'), ('νου', 'μας')], [('τρεσιει', 'τιποτες'), ('τιποτες', 'λαμνε'), ('λαμνε', 'να'), ('να', 'μεν'), ('μεν', 'νευριασω')], [('αμαν', 'το'), ('το', 'δωρο'), ('δωρο', 'που'), ('που', 'πιανεις'), ('πιανεις', 'του'), ('του', 'παρεα'), ('παρεα', 'σου'), ('σου', 'εν'), ('εν', 'καλλυττερο'), ('καλλυττερο', 'που'), ('που', 'το'), ('το', 'δικο'), ('δικο', 'σου')], [('η', 'περιπαιζει'), ('περιπαιζει', 'μας'), ('μας', 'η'), ('η', 'εσιει'), ('εσιει', 'υπομονη'), ('υπομονη', 'γαδαρου')], [('εκαμα', 'το'), ('το', 'επιτελους'), ('επιτελους', 'ουλλο'), ('ουλλο', 'μαλακιες'), ('μαλακιες', 'λαλειτε'), ('λαλειτε', 'τελευταιως')], [('ο', 'θκειος'), ('θκειος', 'μου'), ('μου', 'ηξερεν'), ('ηξερεν', 'τον'), ('τον', 'θκειον'), ('θκειον', 'του')], [('οι', 'πως'), ('πως', 'εσιει'), ('εσιει', 'σκονη'), ('σκονη', 'σημερα'), ('σημερα', 'αλλα'), ('αλλα', 'μεν'), ('μεν', 'κατσετε'), ('κατσετε', 'εξω'), ('εξω', 'να'), ('να', 'πιειτε'), ('πιειτε', 'τον'), ('τον', 'καφε'), ('καφε', 'σας'), ('σας', 'εννα'), ('εννα', 'εσιει'), ('εσιει', 'λλιον'), ('λλιον', 'γευση'), ('γευση', 'λασπη')], [('ρε', 'παιθκια'), ('παιθκια', 'πναστε'), ('πναστε', 'λλιο'), ('λλιο', 'με'), ('με', 'το'), ('το', 'να'), ('να', 'γραψεις'), ('γραψεις', 'στα'), ('στα', 'σιερκα'), ('σιερκα', 'σου'), ('σου', 'θκυο'), ('θκυο', 'λεξεις'), ('λεξεις', 'ουτε'), ('ουτε', 'καν'), ('καν', 'τρυπα'), ('τρυπα', 'στο'), ('στο', 'νερο'), ('νερο', 'εν'), ('εν', 'καμνεις')], [('ηρτεν', 'η'), ('η', 'τσικνοπεμπτη'), ('τσικνοπεμπτη', 'να'), ('να', 'φαμε'), ('φαμε', 'λλιη'), ('λλιη', 'σαλατα')], [('ειμαι', 'κοπελλουι'), ('κοπελλουι', 'τζαι'), ('τζαι', 'καταλαβαινω'), ('καταλαβαινω', 'πολλα'), ('πολλα', 'καλα'), ('καλα', 'ιντα'), ('ιντα', 'που'), ('που', 'εννοει'), ('εννοει', 'αρα'), ('αρα', 'εγερασα')], [('εν', 'ιξερω'), ('ιξερω', 'παω'), ('παω', 'να'), ('να', 'ππεσω'), ('ππεσω', 'σε'), ('σε', 'κανενα'), ('κανενα', 'λακκο')], [('οι', 'ρε'), ('ρε', 'παιθκια'), ('παιθκια', 'τζαι'), ('τζαι', 'ειμαστεν'), ('ειμαστεν', 'κριμα'), ('κριμα', 'ολαν')], [('πρεπει', 'να'), ('να', 'εσιει'), ('εσιει', 'εναν'), ('εναν', 'να'), ('να', 'κανονισει'), ('κανονισει', 'τον'), ('τον', 'μιτσην')], [('εχαρισαμεν', 'σας'), ('σας', 'τουτην'), ('τουτην', 'την'), ('την', 'στρατα')], [('οι', 'πως'), ('πως', 'εν'), ('εν', 'πυρα'), ('πυρα', 'αλλα'), ('αλλα', 'επηρα'), ('επηρα', 'το'), ('το', 'γουρουνακι'), ('γουρουνακι', 'μου'), ('μου', 'περιπατο'), ('περιπατο', 'τζιαι'), ('τζιαι', 'εφερα'), ('εφερα', 'πισω'), ('πισω', 'σουβλα')], [('παπα', 'μου'), ('μου', 'να'), ('να', 'ζησεις'), ('ζησεις', 'κι'), ('κι', 'ο,τι'), ('ο,τι', 'ποθεις'), ('ποθεις', 'γι'), ('γι', 'τη'), ('τη', 'γιορτη'), ('γιορτη', 'σου'), ('σου', 'περρισσευκει'), ('περρισσευκει', 'σου'), ('σου', 'κανενα'), ('κανενα', 'εικοσαευρο')], [('γιατι', 'ολαν'), ('ολαν', 'το'), ('το', 'λαλεις'), ('λαλεις', 'τουτον'), ('τουτον', 'εν'), ('εν', 'πελλαμος')], [('φαε', 'παττιχα'), ('παττιχα', 'εν'), ('εν', 'γλυτζια'), ('γλυτζια', 'μελι')], [('τρωεις', 'παττιχα'), ('παττιχα', 'με'), ('με', 'χαλλουμι'), ('χαλλουμι', 
'λειφκει'), ('λειφκει', 'σου'), ('σου', 'το'), ('το', 'χαλλουμι'), ('χαλλουμι', 'βαλλεις'), ('βαλλεις', 'αλλο'), ('αλλο', 'ενα'), ('ενα', 'κομματι')], [('εμεις', 'που'), ('που', 'το'), ('το', 'εδωκαμεν'), ('εδωκαμεν', 'μια'), ('μια', 'χαρα'), ('χαρα', 'το'), ('το', 'εκαταλαβαμεν')], [('ιντα', 'να'), ('να', 'ζησω'), ('ζησω', 'να'), ('να', 'με'), ('με', 'βασανιατε'), ('βασανιατε', 'τζιαλλο')], [('η', 'μπαταρια'), ('μπαταρια', 'εθυμισε'), ('εθυμισε', 'μ'), ('μ', 'εσενα'), ('εσενα', 'ποττε'), ('ποττε', 'δεν'), ('δεν', 'εχεις')], [('δουλευκουμε', 'για'), ('για', 'να'), ('να', 'παμε'), ('παμε', 'πουποτε'), ('πουποτε', 'τζιαι'), ('τζιαι', 'τελικα'), ('τελικα', 'εν'), ('εν', 'παμε'), ('παμε', 'πουποτε'), ('πουποτε', 'γιατι'), ('γιατι', 'δουλευκουμε')], [('εγω', 'τουτους'), ('τουτους', 'που'), ('που', 'εν'), ('εν', 'ετοιμοι'), ('ετοιμοι', 'την'), ('την', 'ωρα'), ('ωρα', 'που'), ('που', 'εκανονισαμε'), ('εκανονισαμε', 'εν'), ('εν', 'τους'), ('τους', 'εμπιστευκουμε')], [('εσηκωσα', 'το'), ('το', 'σιερι'), ('σιερι', 'μου'), ('μου', 'να'), ('να', 'φωναξω'), ('φωναξω', 'του'), ('του', 'σερβιτορου'), ('σερβιτορου', 'τζιαι'), ('τζιαι', 'που'), ('που', 'συνηθεια'), ('συνηθεια', 'εφκαλα'), ('εφκαλα', 'σελφι')], [('φερτε', 'μου'), ('μου', 'τζιεινον'), ('τζιεινον', 'που'), ('που', 'ειπε'), ('ειπε', 'ο'), ('ο', 'πελατης'), ('πελατης', 'εσιει'), ('εσιει', 'παντα'), ('παντα', 'δικαιο'), ('δικαιο', 'να'), ('να', 'τον'), ('τον', 'δερω'), ('δερω', 'αλλιως'), ('αλλιως', 'εν'), ('εν', 'να'), ('να', 'δερω'), ('δερω', 'τον'), ('τον', 'πελατη')], [('φκαινεις', 'που'), ('που', 'την'), ('την', 'θαλασσα'), ('θαλασσα', 'φακκα'), ('φακκα', 'το'), ('το', 'δαχτυλουι'), ('δαχτυλουι', 'σου'), ('σου', 'στον'), ('στον', 'βραχο'), ('βραχο', 'παιζεις'), ('παιζεις', 'το'), ('το', 'κουλ'), ('κουλ', 'παεις'), ('παεις', 'στο'), ('στο', 'κρεβατακι'), ('κρεβατακι', 'βαλλεις'), ('βαλλεις', 'γυαλια'), ('γυαλια', 'καππελο'), ('καππελο', 'κλαιεις')], [('παπα', 'αρεσκουν'), ('αρεσκουν', 'σου'), ('σου', 'τα'), ('τα', 'τζιαινουρκα'), ('τζιαινουρκα', 'μου'), ('μου', 'τακκουνια')], [('εσεις', 'που'), ('που', 'μιλατε'), ('μιλατε', 'πισω'), ('πισω', 'που'), ('που', 'τη'), ('τη', 'ρασιη'), ('ρασιη', 'μου'), ('μου', 'φυετε'), ('φυετε', 'λιο')], [('εσηκωσα', 'την'), ('την', 'κκελλε'), ('κκελλε', 'μου'), ('μου', 'που'), ('που', 'το'), ('το', 'κινητο')], [('θυμουμαι', 'τη'), ('τη', 'μανα'), ('μανα', 'μου'), ('μου', 'να'), ('να', 'με'), ('με', 'βουρα'), ('βουρα', 'σε'), ('σε', 'ουλλο'), ('ουλλο', 'το'), ('το', 'σπιτι'), ('σπιτι', 'για'), ('για', 'να'), ('να', 'πιω'), ('πιω', 'το'), ('το', 'γαλα'), ('γαλα', 'μου')], [('η', 'μανα'), ('μανα', 'μου'), ('μου', 'παντα'), ('παντα', 'λαλει'), ('λαλει', 'μου'), ('μου', 'οτι'), ('οτι', 'θελει'), ('θελει', 'να'), ('να', 'καμω'), ('καμω', 'κοπελλουθκια'), ('κοπελλουθκια', 'χειροτερα'), ('χειροτερα', 'που'), ('που', 'μενα')], [('για', 'εσας'), ('εσας', 'που'), ('που', 'σας'), ('σας', 'αρεσκει'), ('αρεσκει', 'ο'), ('ο', 'ηχος'), ('ηχος', 'της'), ('της', 'βροσιης')], [('ζητω', 'συγγνωμη'), ('συγγνωμη', 'που'), ('που', 'ουλλους'), ('ουλλους', 'σας')], [('φυε', 'που'), ('που', 'τη'), ('τη', 'κυπρο'), ('κυπρο', 'ωσπου'), ('ωσπου', 'εν'), ('εν', 'γλιορα')], [('η', 'κοινωνια'), ('κοινωνια', 'μας'), ('μας', 'εν'), ('εν', 'πολλα'), ('πολλα', 'συντηρητικη'), ('συντηρητικη', 'παρολο'), ('παρολο', 'που'), ('που', 'πιστευκω'), ('πιστευκω', 'οτι'), ('οτι', 'η'), ('η', 'πλειστη'), ('πλειστη', 'νεολαια'), ('νεολαια', 'εν'), ('εν', 'πιο'), ('πιο', 'προοδευτικη')], [('εχει', 'καποιους'), ('καποιους', 'που'), ('που', 'οντως'), ('οντως', 
'ηρταν'), ('ηρταν', 'να'), ('να', 'δκιαβασουν'), ('δκιαβασουν', 'λιον'), ('λιον', 'ζιλικουρτι')], [('φκαλε', 'φαουσα'), ('φαουσα', 'γαμωτο'), ('γαμωτο', 'μισω'), ('μισω', 'σας'), ('σας', 'ουλλους'), ('ουλλους', 'θκιαολε'), ('θκιαολε', 'μαυρε')], [('εψες', 'εκαμνα'), ('εκαμνα', 'σεξ'), ('σεξ', 'με'), ('με', 'την'), ('την', 'κοπελλουα'), ('κοπελλουα', 'μου'), ('μου', 'τζιαι'), ('τζιαι', 'καταλαθως'), ('καταλαθως', 'ετραβηχτηκεν'), ('ετραβηχτηκεν', 'το'), ('το', 'συρμα'), ('συρμα', 'για'), ('για', 'το'), ('το', 'ρευμα'), ('ρευμα', 'τζιαι'), ('τζιαι', 'εκλεισεν')], [('δηλαδη', 'να'), ('να', 'μας'), ('μας', 'δερνει'), ('δερνει', 'ο'), ('ο', 'αντρας'), ('αντρας', 'δεν'), ('δεν', 'πειραζει'), ('πειραζει', 'εχει'), ('εχει', 'αντρισμο')], [('λαλειτε', 'εχαθηκαν'), ('εχαθηκαν', 'οι'), ('οι', 'ιπποτες'), ('ιπποτες', 'αλλα'), ('αλλα', 'αμαν'), ('αμαν', 'σας'), ('σας', 'προσεγγιζει'), ('προσεγγιζει', 'καποιος'), ('καποιος', 'ειστε'), ('ειστε', 'τοσο'), ('τοσο', 'αππωμενες'), ('αππωμενες', 'εν'), ('εν', 'εχει'), ('εχει', 'χειροτερο'), ('χειροτερο', 'πραμα')], [('νιωθω', 'πολλα'), ('πολλα', 'ασσιημα'), ('ασσιημα', 'αμαν'), ('αμαν', 'μαιρεφκω'), ('μαιρεφκω', 'για'), ('για', 'ενα'), ('ενα', 'λοχο'), ('λοχο', 'γιατι'), ('γιατι', 'εν'), ('εν', 'ηξερω'), ('ηξερω', 'να'), ('να', 'υπολογιζω'), ('υπολογιζω', 'ποσοτητες')], [('ειμαι', 'κυπρια'), ('κυπρια', 'τζαι'), ('τζαι', 'οπου'), ('οπου', 'παω'), ('παω', 'ξεκινουν'), ('ξεκινουν', 'τζαι'), ('τζαι', 'μιλουν'), ('μιλουν', 'μου'), ('μου', 'αγγλικα')], [('ηρτα', 'να'), ('να', 'δω'), ('δω', 'την'), ('την', 'γιαγια'), ('γιαγια', 'μου'), ('μου', 'εβιδωσεν'), ('εβιδωσεν', 'με'), ('με', 'παστον'), ('παστον', 'καναπε'), ('καναπε', 'να'), ('να', 'δουμε'), ('δουμε', 'πετρινο'), ('πετρινο', 'ποταμι'), ('ποταμι', 'τζαι'), ('τζαι', 'εν'), ('εν', 'με'), ('με', 'αφηνει'), ('αφηνει', 'να'), ('να', 'φιω')], [('θωρω', 'γαρους'), ('γαρους', 'τζιαι'), ('τζιαι', 'γαουρες'), ('γαουρες', 'να'), ('να', 'καμνουν'), ('καμνουν', 'καποιους'), ('καποιους', 'που'), ('που', 'νοιαζονται'), ('νοιαζονται', 'για'), ('για', 'τζεινους'), ('τζεινους', 'χωμα'), ('χωμα', 'γιατι'), ('γιατι', 'εν'), ('εν', 'μισιη'), ('μισιη', 'μου'), ('μου', 'ασσιημος'), ('ασσιημος', 'η'), ('η', 'ασσιημη'), ('ασσιημη', 'οξα'), ('οξα', 'πασσιης')], [('το', 'οτι'), ('οτι', 'βαλεις'), ('βαλεις', 'λαικ'), ('λαικ', 'σε'), ('σε', 'τεθκια'), ('τεθκια', 'κορουα'), ('κορουα', 'εν'), ('εν', 'υποσυνηδειτο'), ('υποσυνηδειτο', 'επδ'), ('επδ', 'ελκυει'), ('ελκυει', 'σε'), ('σε', 'εξωτερικα'), ('εξωτερικα', 'εν'), ('εν', 'και'), ('και', 'σημαινει'), ('σημαινει', 'οτι'), ('οτι', 'τερκαζετε'), ('τερκαζετε', 'η'), ('η', 'οτι'), ('οτι', 'θελεις'), ('θελεις', 'να'), ('να', 'καμετε'), ('καμετε', 'σχεση')], [('εν', 'να'), ('να', 'νευριαζω'), ('νευριαζω', 'αησμε'), ('αησμε', 'κορη'), ('κορη', 'μου')], [('υγραινομαι', 'αμαν'), ('αμαν', 'ακουω'), ('ακουω', 'τουτες'), ('τουτες', 'τις'), ('τις', 'λεξεις'), ('λεξεις', 'σαν'), ('σαν', 'τουτες'), ('τουτες', 'εν'), ('εν', 'εσχει')], [('παιθκια', 'χρειαζουμαι'), ('χρειαζουμαι', 'συμβουλη'), ('συμβουλη', 'τι'), ('τι', 'να'), ('να', 'καμω')], [('η', 'καλαμαρου'), ('καλαμαρου', 'βαρκεται'), ('βαρκεται', 'τοσο'), ('τοσο', 'πολλα')], [('εν', 'πολλα'), ('πολλα', 'μεγαλη'), ('μεγαλη', 'εν'), ('εν', 'την'), ('την', 'φωρει'), ('φωρει', 'το'), ('το', 'στομα'), ('στομα', 'μου')], [('εν', 'την'), ('την', 'κκελε'), ('κκελε', 'σου'), ('σου', 'που'), ('που', 'εν'), ('εν', 'να'), ('να', 'σσιησω'), ('σσιησω', 'αλλα'), ('αλλα', 'σσιηστο'), ('σσιηστο', 'εν'), ('εν', 'διω'), ('διω', 'μπακκιρα'), ('μπακκιρα', 'να'), ('να', 
'πιαεις'), ('πιαεις', 'αλλο'), ('αλλο', 'να'), ('να', 'παεννεις'), ('παεννεις', 'θαλλασσα'), ('θαλλασσα', 'με'), ('με', 'τες'), ('τες', 'σοβρακες')], [('μανα', 'μου'), ('μου', 'κοπελια'), ('κοπελια', 'εφαμεν'), ('εφαμεν', 'τζε'), ('τζε', 'φετος'), ('φετος', 'την'), ('την', 'σουβλα'), ('σουβλα', 'μας')], [('ατε', 'μανα'), ('μανα', 'μου'), ('μου', 'να'), ('να', 'βρεξει'), ('βρεξει', 'να'), ('να', 'φαμε'), ('φαμε', 'κανενα'), ('κανενα', 'μανιταρι'), ('μανιταρι', 'που'), ('που', 'εν'), ('εν', 'τζαι'), ('τζαι', 'μουχτιν')], [('φιλε', 'μου'), ('μου', 'εν'), ('εν', 'λια'), ('λια', 'που'), ('που', 'τους'), ('τους', 'ειπες'), ('ειπες', 'ακομα')], [('αν', 'σε'), ('σε', 'πιασει'), ('πιασει', 'κανενας'), ('κανενας', 'και'), ('και', 'πει'), ('πει', 'σου'), ('σου', 'ομως'), ('ομως', 'εχω'), ('εχω', 'ρασιη'), ('ρασιη', 'πισω'), ('πισω', 'μου'), ('μου', 'ημουν'), ('ημουν', 'ουκ'), ('ουκ', 'λοκατζιης'), ('λοκατζιης', 'κλπ'), ('κλπ', 'πετου'), ('πετου', 'να'), ('να', 'ερτει'), ('ερτει', 'να'), ('να', 'καμει'), ('καμει', 'μιαν'), ('μιαν', 'βιδωτην'), ('βιδωτην', 'να'), ('να', 'δει'), ('δει', 'την'), ('την', 'γλυκα')], [('ενηξερουν', 'ποθεν'), ('ποθεν', 'κατουρα'), ('κατουρα', 'η'), ('η', 'ορνιχα'), ('ορνιχα', 'ρε'), ('ρε', 'τουτοοι'), ('τουτοοι', 'τσιαι'), ('τσιαι', 'εκαρτερουσετε'), ('εκαρτερουσετε', 'να'), ('να', 'καμουν'), ('καμουν', 'κατι'), ('κατι', 'καλυτερο')], [('εν', 'για'), ('για', 'τες'), ('τες', 'μπηχτες'), ('μπηχτες', 'ρε'), ('ρε', 'τουτοι')], [('εν', 'εκαταλαβαν'), ('εκαταλαβαν', 'ακομα'), ('ακομα', 'οτι'), ('οτι', 'εν'), ('εν', 'τα'), ('τα', 'περνει'), ('περνει', 'κανενας'), ('κανενας', 'μαζι'), ('μαζι', 'τους')], [('ουλους', 'τρωει'), ('τρωει', 'του'), ('του', 'το'), ('το', 'χωμα'), ('χωμα', 'σε'), ('σε', 'καποια'), ('καποια', 'φαση')], [('αηστους', 'τσιαμε'), ('τσιαμε', 'εγιω'), ('εγιω', 'σιερομαι'), ('σιερομαι', 'που'), ('που', 'εν'), ('εν', 'ετσι'), ('ετσι', 'μαππουροι'), ('μαππουροι', 'τσιλλιαραες')], [('εκατσε', 'ο'), ('ο', 'υπουργος'), ('υπουργος', 'με'), ('με', 'τεσσεροις'), ('τεσσεροις', 'βλακες'), ('βλακες', 'που'), ('που', 'καμνουν'), ('καμνουν', 'πως'), ('πως', 'καταλαβουν'), ('καταλαβουν', 'που'), ('που', 'κυνηγη'), ('κυνηγη', 'και'), ('και', 'ο'), ('ο', 'υπουργος'), ('υπουργος', 'εκαμνε'), ('εκαμνε', 'πως'), ('πως', 'ηξερε'), ('ηξερε', 'που'), ('που', 'κυνηγη'), ('κυνηγη', 'πως'), ('πως', 'καταλαβει'), ('καταλαβει', 'τουτοι'), ('τουτοι', 'ουλοι'), ('ουλοι', 'πληρωνοντε'), ('πληρωνοντε', 'με'), ('με', 'ενα'), ('ενα', 'σορο'), ('σορο', 'λεφτα'), ('λεφτα', 'που'), ('που', 'μπορουν'), ('μπορουν', 'να'), ('να', 'ζησουν'), ('ζησουν', 'πολλες'), ('πολλες', 'οικογενεις'), ('οικογενεις', 'τζε'), ('τζε', 'ηβραν'), ('ηβραν', 'την'), ('την', 'λυση'), ('λυση', 'για'), ('για', 'να'), ('να', 'σωσουν'), ('σωσουν', 'την'), ('την', 'κατασταση'), ('κατασταση', 'να'), ('να', 'κοψουμε'), ('κοψουμε', 'τεσσερις'), ('τεσσερις', 'εξορμησεις'), ('εξορμησεις', 'και'), ('και', 'ελυθηκε'), ('ελυθηκε', 'το'), ('το', 'προβλημα')], [('εφκηκαν', 'και'), ('και', 'μες'), ('μες', 'την'), ('την', 'τηλεοραση'), ('τηλεοραση', 'με'), ('με', 'ενα'), ('ενα', 'καπαρτησμα'), ('καπαρτησμα', 'οτι'), ('οτι', 'κοπελια'), ('κοπελια', 'εμεις'), ('εμεις', 'ειμαστε'), ('ειμαστε', 'εξυπνοι'), ('εξυπνοι', 'και'), ('και', 'ηβραμε'), ('ηβραμε', 'την'), ('την', 'λυση')], [('οχι', 'εσεις'), ('εσεις', 'που'), ('που', 'φωναζετε'), ('φωναζετε', 'τοσα'), ('τοσα', 'χρονια'), ('χρονια', 'εν'), ('εν', 'η'), ('η', 'λαθροθηρια'), ('λαθροθηρια', 'εν'), ('εν', 'ο'), ('ο', 'αλουπος')], [('την', 'αδεια'), ('αδεια', 'επληρωσετε'), ('επληρωσετε', 
'την'), ('την', 'τα'), ('τα', 'λεφτα'), ('λεφτα', 'μας'), ('μας', 'επιασαμε'), ('επιασαμε', 'τα'), ('τα', 'και'), ('και', 'εσεις'), ('εσεις', 'φακκατε'), ('φακκατε', 'τσεραθκιες'), ('τσεραθκιες', 'αφου'), ('αφου', 'εμεις'), ('εμεις', 'παλε'), ('παλε', 'δαμε'), ('δαμε', 'εν'), ('εν', 'να'), ('να', 'ειμαστε'), ('ειμαστε', 'με'), ('με', 'γεματες'), ('γεματες', 'τις'), ('τις', 'τσεπες')], [('μα', 'ελογαριαζαν'), ('ελογαριαζαν', 'χωρις'), ('χωρις', 'τον'), ('τον', 'ξενοδοχο'), ('ξενοδοχο', 'ενα'), ('ενα', 'εχω'), ('εχω', 'να'), ('να', 'τους'), ('τους', 'πω'), ('πω', 'τα'), ('τα', 'λεφτα'), ('λεφτα', 'της'), ('της', 'αδειας'), ('αδειας', 'και'), ('και', 'για'), ('για', 'την'), ('την', 'κοροιδια'), ('κοροιδια', 'και'), ('και', 'μονο'), ('μονο', 'εν'), ('εν', 'να'), ('να', 'την'), ('την', 'πληρωσουν'), ('πληρωσουν', 'ακριβα'), ('ακριβα', 'γιατι'), ('γιατι', 'ο'), ('ο', 'καθενας'), ('καθενας', 'εν'), ('εν', 'να'), ('να', 'βλεπει'), ('βλεπει', 'το'), ('το', 'συμφερον'), ('συμφερον', 'του'), ('του', 'τωρα'), ('τωρα', 'και'), ('και', 'να'), ('να', 'παει')], [('ενοια', 'σας'), ('σας', 'και'), ('και', 'εν'), ('εν', 'εμειναμεν'), ('εμειναμεν', 'μονο'), ('μονο', 'στο'), ('στο', 'να'), ('να', 'του'), ('του', 'πουμεν'), ('πουμεν', 'το'), ('το', 'μονο'), ('μονο', 'που'), ('που', 'ενε'), ('ενε', 'μπορουσα'), ('μπορουσα', 'να'), ('να', 'καμω'), ('καμω', 'και'), ('και', 'εν'), ('εν', 'τζιαμε'), ('τζιαμε', 'που'), ('που', 'εχτιτζιασα'), ('εχτιτζιασα', 'παραπανω'), ('παραπανω', 'ηταν'), ('ηταν', 'να'), ('να', 'πιασω'), ('πιασω', 'την'), ('την', 'θηρα')], [('τον', 'λογο'), ('λογο', 'καταλαβαινετε'), ('καταλαβαινετε', 'τον'), ('τον', 'νομιζω'), ('νομιζω', 'και'), ('και', 'σιουρα'), ('σιουρα', 'εν'), ('εν', 'θα'), ('θα', 'επιαννα'), ('επιαννα', 'τον'), ('τον', 'κουμπαρο'), ('κουμπαρο', 'μου'), ('μου', 'στο'), ('στο', 'λαιμο'), ('λαιμο', 'μου'), ('μου', 'για'), ('για', 'ενναν'), ('ενναν', 'βλακα'), ('βλακα', 'αφου'), ('αφου', 'ηταν'), ('ηταν', 'απο'), ('απο', 'το'), ('το', 'επαγγελματικο'), ('επαγγελματικο', 'του'), ('του', 'περιβαλλον'), ('περιβαλλον', 'και'), ('και', 'σημερα'), ('σημερα', 'ξερετε'), ('ξερετε', 'τα'), ('τα', 'ουλλοι'), ('ουλλοι', 'οτι'), ('οτι', 'οι'), ('οι', 'δουλειες'), ('δουλειες', 'ειναι'), ('ειναι', 'αφαντες')], [('και', 'καποιοι'), ('καποιοι', 'που'), ('που', 'επια'), ('επια', 'ννα'), ('ννα', 'μου'), ('μου', 'πουν'), ('πουν', 'οτι'), ('οτι', 'εν'), ('εν', 'μιτσιοι'), ('μιτσιοι', 'εν'), ('εν', 'κοφκει'), ('κοφκει', 'ο'), ('ο', 'νους'), ('νους', 'τους'), ('τους', 'επιαν'), ('επιαν', 'τζιε'), ('τζιε', 'τζινοι'), ('τζινοι', 'εσσω'), ('εσσω', 'σιεσμενοι'), ('σιεσμενοι', 'γιατι'), ('γιατι', 'εν'), ('εν', 'τουτη'), ('τουτη', 'η'), ('η', 'νοοτροπια'), ('νοοτροπια', 'που'), ('που', 'μας'), ('μας', 'εφαεν')], [('μιλω', 'σας'), ('σας', 'ποιος'), ('ποιος', 'με'), ('με', 'ειδεν'), ('ειδεν', 'και'), ('και', 'δεν'), ('δεν', 'με'), ('με', 'φοβηθηκε')], [('καλα', 'του'), ('του', 'καμες')], [('ε', 'να'), ('να', 'ισιωσει'), ('ισιωσει', 'ο'), ('ο', 'νουρος'), ('νουρος', 'του'), ('του', 'ομως')], [('κοπελια', 'φετος'), ('φετος', 'εν'), ('εν', 'να'), ('να', 'εχουμε'), ('εχουμε', 'μεγαλο'), ('μεγαλο', 'προβλημα'), ('προβλημα', 'με'), ('με', 'τις'), ('τις', 'κουφαες')], [('να', 'παρακαλατε'), ('παρακαλατε', 'να'), ('να', 'βρεξει'), ('βρεξει', 'καλα'), ('καλα', 'περκει'), ('περκει', 'παρει'), ('παρει', 'το'), ('το', 'νερο'), ('νερο', 'κανενα'), ('κανενα', 'αυκο'), ('αυκο', 'γιατι'), ('γιατι', 'κατα'), ('κατα', 'που'), ('που', 'θωρω'), ('θωρω', 'εν'), ('εν', 'να'), ('να', 'μας'), ('μας', 'φαν'), ('φαν', 'φετος')], 
[('εχτος', 'που'), ('που', 'τουτο'), ('τουτο', 'φετος'), ('φετος', 'εν'), ('εν', 'κ'), ('κ', 'πιο'), ('πιο', 'επικινδυνες'), ('επικινδυνες', 'ειχαμε'), ('ειχαμε', 'βαρυχειμωνια'), ('βαρυχειμωνια', 'εν'), ('εν', 'νιστιτζιες'), ('νιστιτζιες', 'εχει'), ('εχει', 'τοσο'), ('τοσο', 'καιρο')], [('αλλες', 'χρονιες'), ('χρονιες', 'εβρισκες'), ('εβρισκες', 'ολοχρονα'), ('ολοχρονα', 'εκυνηγουσαν'), ('εκυνηγουσαν', 'εν'), ('εν', 'ηταν'), ('ηταν', 'τοσο'), ('τοσο', 'το'), ('το', 'δηλητηριο')], [('τουτη', 'την'), ('την', 'περιοδο'), ('περιοδο', 'ομως'), ('ομως', 'εν'), ('εν', 'θανατος')], [('επαρχια', 'λεμεσου'), ('λεμεσου', 'κ'), ('κ', 'εχτες'), ('εχτες', 'και'), ('και', 'σημερα'), ('σημερα', 'ηβραμε')], [('ηντα', 'που'), ('που', 'γινεται'), ('γινεται', 'ρε'), ('ρε', 'κοπελια')], [('ειπα', 'να'), ('να', 'καμω'), ('καμω', 'τουτο'), ('τουτο', 'το'), ('το', 'τοπουι'), ('τοπουι', 'δαμε'), ('δαμε', 'να'), ('να', 'λαλουμε'), ('λαλουμε', 'τα'), ('τα', 'δικα'), ('δικα', 'μας'), ('μας', 'τζιαι'), ('τζιαι', 'ας'), ('ας', 'μεν'), ('μεν', 'καταλαβουν'), ('καταλαβουν', 'οι'), ('οι', 'καλαμαραες')], [('να', 'δουμε'), ('δουμε', 'ποσοι'), ('ποσοι', 'εν'), ('εν', 'να'), ('να', 'συναχτουμε')], [('γραφετε', 'οτι'), ('οτι', 'θελετε'), ('θελετε', 'αλλα'), ('αλλα', 'οι'), ('οι', 'ξιτιμασιες'), ('ξιτιμασιες', 'για'), ('για', 'να'), ('να', 'μεν'), ('μεν', 'μας'), ('μας', 'το'), ('το', 'κλειωσουν')], [('ειπα', 'να'), ('να', 'ξιθαψω'), ('ξιθαψω', 'λιον'), ('λιον', 'τουτο'), ('τουτο', 'το'), ('το', 'θεμαν'), ('θεμαν', 'ρε'), ('ρε', 'κοπεγια')], [('ατε', 'να'), ('να', 'δουμεν'), ('δουμεν', 'ποσοι'), ('ποσοι', 'κυπρεοι'), ('κυπρεοι', 'ειμαστεν'), ('ειμαστεν', 'δαμεσα')], [('ατε', 'ρε'), ('ρε', 'τζε'), ('τζε', 'μεν'), ('μεν', 'ιξιανετε')], [('εν', 'πολλοι'), ('πολλοι', 'οι'), ('οι', 'κυπραιοι'), ('κυπραιοι', 'τελικα'), ('τελικα', 'απ'), ('απ', 'οτι'), ('οτι', 'θωρω'), ('θωρω', 'δαμε')], [('μα', 'πως'), ('πως', 'να'), ('να', 'πεισουν'), ('πεισουν', 'για'), ('για', 'ξενο'), ('ξενο', 'δακτυλο'), ('δακτυλο', 'οταν'), ('οταν', 'ο'), ('ο', 'κυπριακος'), ('κυπριακος', 'κωλοδακτυλος'), ('κωλοδακτυλος', 'εν'), ('εν', 'τοσο'), ('τοσο', 'προφανης')], [('εφκηκα', 'γυρον'), ('γυρον', 'τα'), ('τα', 'μπλογκς'), ('μπλογκς', 'που'), ('που', 'θκιαβαζω'), ('θκιαβαζω', 'να'), ('να', 'πω'), ('πω', 'τα'), ('τα', 'καλαντα')], [('ατε', 'καλη'), ('καλη', 'χρονια'), ('χρονια', 'τζιαι'), ('τζιαι', 'μεν'), ('μεν', 'συγχυζεσαι'), ('συγχυζεσαι', 'πολλα')], [('ο', 'φασισμος'), ('φασισμος', 'μονον'), ('μονον', 'αμαν'), ('αμαν', 'γινει'), ('γινει', 'εξουσια'), ('εξουσια', 'εν'), ('εν', 'επικινδυνος')], [('μανα', 'μου'), ('μου', 'εν'), ('εν', 'ξερετε'), ('ξερετε', 'για'), ('για', 'το'), ('το', 'εγκλημα'), ('εγκλημα', 'ζει'), ('ζει', 'στην'), ('στην', 'κυπρο')], [('αμαν', 'συλλαβουν'), ('συλλαβουν', 'εμπορο'), ('εμπορο', 'ναρκωτικων'), ('ναρκωτικων', 'τζε'), ('τζε', 'εν'), ('εν', 'αλλοδαπος'), ('αλλοδαπος', 'δηλαδη'), ('δηλαδη', 'ακελικος'), ('ακελικος', 'εκ'), ('εκ', 'τουρκιας'), ('τουρκιας', 'η'), ('η', 'κινεζος'), ('κινεζος', 'ρωσσος'), ('ρωσσος', 'η'), ('η', 'αραπης'), ('αραπης', 'αφηνουν'), ('αφηνουν', 'τον')], [('πιαννει', 'μονο'), ('μονο', 'τζεινους'), ('τζεινους', 'που'), ('που', 'ταυτισαν'), ('ταυτισαν', 'ουλλη'), ('ουλλη', 'την'), ('την', 'πολιτικη'), ('πολιτικη', 'τους'), ('τους', 'υπαρξη')], [('τελικα', 'εμας'), ('εμας', 'που'), ('που', 'εν'), ('εν', 'ημασταν'), ('ημασταν', 'τζιαμε'), ('τζιαμε', 'εν'), ('εν', 'να'), ('να', 'μας'), ('μας', 'πει'), ('πει', 'κανενας'), ('κανενας', 'ποιοι'), ('ποιοι', 'εν'), ('εν', 'υπερ'), ('υπερ', 'τζαι'), ('τζαι', 
'ποιοι'), ('ποιοι', 'κατα')], [('εκαμαμεν', 'τον'), ('τον', 'πονο'), ('πονο', 'μας'), ('μας', 'ανεκδοτο'), ('ανεκδοτο', 'τα'), ('τα', 'αγρινα'), ('αγρινα', 'εν'), ('εν', 'πας'), ('πας', 'τα'), ('τα', 'βουνα'), ('βουνα', 'ειμαστεν'), ('ειμαστεν', 'τετραποδα')], [('ε', 'για'), ('για', 'τους'), ('τους', 'χαρακτηρισμους'), ('χαρακτηρισμους', 'που'), ('που', 'λαλεις'), ('λαλεις', 'οτι'), ('οτι', 'εχρησιμοποιαν'), ('εχρησιμοποιαν', 'ο'), ('ο', 'συγκεκριμενος'), ('συγκεκριμενος', 'τοτε'), ('τοτε', 'εν'), ('εν', 'αθυμουμαι'), ('αθυμουμαι', 'ακριβως'), ('ακριβως', 'αν'), ('αν', 'τζαι'), ('τζαι', 'εννα'), ('εννα', 'τα'), ('τα', 'κοιταξω'), ('κοιταξω', 'να'), ('να', 'θυμηθω')], [('εν', 'μπορω'), ('μπορω', 'φυσικα'), ('φυσικα', 'να'), ('να', 'καμω'), ('καμω', 'τον'), ('τον', 'δικηγορο'), ('δικηγορο', 'ισως'), ('ισως', 'τζαι'), ('τζαι', 'οι')], [('μα', 'εν'), ('εν', 'για'), ('για', 'τζιεινο'), ('τζιεινο', 'το'), ('το', 'δειπνον'), ('δειπνον', 'που'), ('που', 'εστησεν'), ('εστησεν', 'που'), ('που', 'λαλεις')], [('οι', 'οι'), ('οι', 'η'), ('η', 'κουβεντα'), ('κουβεντα', 'με'), ('με', 'το'), ('το', 'δειπνο'), ('δειπνο', 'εν'), ('εν', 'μετα')], [('η', 'φωτογραφια'), ('φωτογραφια', 'δαμε'), ('δαμε', 'εν'), ('εν', 'που'), ('που', 'το'), ('το', 'εθνικο'), ('εθνικο', 'συμβουλιο')], [('τζαι', 'εγω'), ('εγω', 'που'), ('που', 'ενομισα'), ('ενομισα', 'πως'), ('πως', 'εν'), ('εν', 'οι'), ('οι', 'ψηφοφοροι'), ('ψηφοφοροι', 'του'), ('του', 'δηκο'), ('δηκο', 'που'), ('που', 'την'), ('την', 'εφκαλαν')], [('περκει', 'να'), ('να', 'ξερεις'), ('ξερεις', 'τζαι'), ('τζαι', 'τι'), ('τι', 'εψηφισα'), ('εψηφισα', 'τζαι'), ('τζαι', 'γιατι'), ('γιατι', 'τζαι'), ('τζαι', 'ας'), ('ας', 'μεν'), ('μεν', 'ειμαι'), ('ειμαι', 'ακελικος'), ('ακελικος', 'αν'), ('αν', 'εθκιεβαζες'), ('εθκιεβαζες', 'την'), ('την', 'αναρτηση'), ('αναρτηση', 'πιο'), ('πιο', 'πανω'), ('πανω', 'μπορει'), ('μπορει', 'να'), ('να', 'το'), ('το', 'επροσεχες')], [('τζιαι', 'τζεινο'), ('τζεινο', 'του'), ('του', 'λαικου'), ('λαικου', 'μετωπου'), ('μετωπου', 'που'), ('που', 'καθε'), ('καθε', 'θκυο'), ('θκυο', 'μερες'), ('μερες', 'φκαλλει'), ('φκαλλει', 'διαγγελμα'), ('διαγγελμα', 'του'), ('του', 'στυλ'), ('στυλ', 'πατριωτες'), ('πατριωτες', 'στ'), ('στ', 'αρματα'), ('αρματα', 'εσιει'), ('εσιει', 'χαζι')], [('εν', 'φανερο'), ('φανερο', 'θελει'), ('θελει', 'σε'), ('σε', 'μανα'), ('μανα', 'μου'), ('μου', 'θελει'), ('θελει', 'σε')], [('φιλε', 'μου'), ('μου', 'ειπα'), ('ειπα', 'σου'), ('σου', 'το'), ('το', 'τζιαι'), ('τζιαι', 'τζιαμε')], [('λες', 'τζιαι'), ('τζιαι', 'εν'), ('εν', 'εξερες'), ('εξερες', 'με'), ('με', 'τι'), ('τι', 'ρεμαλια'), ('ρεμαλια', 'εισιες'), ('εισιες', 'να'), ('να', 'καμεις')], [('εν', 'ο'), ('ο', 'θεος'), ('θεος', 'που'), ('που', 'εφωτισεν'), ('εφωτισεν', 'τον'), ('τον', 'χριστοφια'), ('χριστοφια', 'τζιαι'), ('τζιαι', 'αλλαξεν'), ('αλλαξεν', 'πορεια')], [('οξα', 'αμπα'), ('αμπα', 'τζι'), ('τζι', 'εν'), ('εν', 'τουτοι')], [('για', 'αυτο'), ('αυτο', 'αναμενω'), ('αναμενω', 'να'), ('να', 'κρινω'), ('κρινω', 'που'), ('που', 'το'), ('το', 'αν'), ('αν', 'τζαι'), ('τζαι', 'τι'), ('τι', 'θα'), ('θα', 'συζητηθει'), ('συζητηθει', 'στο'), ('στο', 'πλαισιο'), ('πλαισιο', 'της'), ('της', 'ασφαλειας'), ('ασφαλειας', 'οταν'), ('οταν', 'ερτει'), ('ερτει', 'η'), ('η', 'ωρα')], [('τζιαι', 'σε'), ('σε', 'διαβεβαιω'), ('διαβεβαιω', 'οτι'), ('οτι', 'οταν'), ('οταν', 'εγινην'), ('εγινην', 'η'), ('η', 'επιθεση'), ('επιθεση', 'πας'), ('πας', 'τους'), ('τους', 'πυργους'), ('πυργους', 'ειπαν'), ('ειπαν', 'μου'), ('μου', 'το'), ('το', 'εναν'), ('εναν', 'τεταρτον'), 
('τεταρτον', 'μετα'), ('μετα', 'τζιαι'), ('τζιαι', 'οταν'), ('οταν', 'επεθανεν'), ('επεθανεν', 'ο'), ('ο', 'χατζιηδακης'), ('χατζιηδακης', 'εμαθα'), ('εμαθα', 'το'), ('το', 'την'), ('την', 'ιδιαν'), ('ιδιαν', 'ημεραν')], [('μαλακιες', 'τουτα'), ('τουτα', 'ουλλα'), ('ουλλα', 'ασχολειθειτε'), ('ασχολειθειτε', 'λιο'), ('λιο', 'με'), ('με', 'κανενα'), ('κανενα', 'τηλεσκουπιδι'), ('τηλεσκουπιδι', 'να'), ('να', 'περασει'), ('περασει', 'η'), ('η', 'ωρα'), ('ωρα', 'σας'), ('σας', 'τζαι'), ('τζαι', 'κανει')], [('αφου', 'ξερουμεν'), ('ξερουμεν', 'πως'), ('πως', 'εν'), ('εν', 'μας'), ('μας', 'παν'), ('παν', 'στο'), ('στο', 'εξωτερικο'), ('εξωτερικο', 'γιατι'), ('γιατι', 'σκαλιζουμε'), ('σκαλιζουμε', 'συνεχως'), ('συνεχως', 'το'), ('το', 'θεμα'), ('θεμα', 'ετο'), ('ετο', 'να'), ('να', 'ασχολουμαστε'), ('ασχολουμαστε', 'ο'), ('ο', 'ενας'), ('ενας', 'με'), ('με', 'τον'), ('τον', 'αλλον'), ('αλλον', 'μες'), ('μες', 'την'), ('την', 'κυπρο'), ('κυπρο', 'μεταξυ'), ('μεταξυ', 'μας'), ('μας', 'τζαι'), ('τζαι', 'κανει')], [('μιλουμε', 'για'), ('για', 'καμποση'), ('καμποση', 'προπαγανδα'), ('προπαγανδα', 'πκιο'), ('πκιο', 'γελοιο'), ('γελοιο', 'τζιαι'), ('τζιαι', 'που'), ('που', 'τη'), ('τη', 'σημερινη'), ('σημερινη', 'χωρις'), ('χωρις', 'να'), ('να', 'εν'), ('εν', 'τοσο'), ('τοσο', 'ακραιο'), ('ακραιο', 'επειδη'), ('επειδη', 'δαμε'), ('δαμε', 'εν'), ('εν', 'η'), ('η', 'προπαγανδα'), ('προπαγανδα', 'ενος'), ('ενος', 'κρατους'), ('κρατους', 'προς'), ('προς', 'τα'), ('τα', 'εξω'), ('εξω', 'ασε'), ('ασε', 'που'), ('που', 'τωρα'), ('τωρα', 'εν'), ('εν', 'θα'), ('θα', 'εσιεις'), ('εσιεις', 'υποθεση'), ('υποθεση', 'αμα'), ('αμα', 'καμεις'), ('καμεις', 'αιτηση'), ('αιτηση', 'στον'), ('στον', 'αστρα'), ('αστρα', 'τζιαι'), ('τζιαι', 'δουν'), ('δουν', 'βιογραφικο')], [('ειδα', 'τζαι'), ('τζαι', 'εγω'), ('εγω', 'τεθκοια'), ('τεθκοια', 'οι'), ('οι', 'εν'), ('εν', 'εζησα'), ('εζησα', 'ποτε'), ('ποτε', 'ετσι'), ('ετσι', 'πραμα')], [('να', 'εξερες'), ('εξερες', 'ποσες'), ('ποσες', 'φορες'), ('φορες', 'εκλαψα'), ('εκλαψα', 'θωρωντας'), ('θωρωντας', 'τουτον'), ('τουτον', 'το'), ('το', 'βουνον'), ('βουνον', 'καθε'), ('καθε', 'πρωιν'), ('πρωιν', 'που'), ('που', 'παω'), ('παω', 'δουλειαν'), ('δουλειαν', 'τον'), ('τον', 'τελευταιον'), ('τελευταιον', 'τζαιρον')], [('λαλεις', 'ο'), ('ο', 'συνδιασμος'), ('συνδιασμος', 'του'), ('του', 'ταλεντου'), ('ταλεντου', 'με'), ('με', 'την'), ('την', 'δουλειαν'), ('δουλειαν', 'τζαι'), ('τζαι', 'τη'), ('τη', 'μουσικη'), ('μουσικη', 'γνωσην'), ('γνωσην', 'να'), ('να', 'δια'), ('δια', 'στην'), ('στην', 'δημιουργιαν'), ('δημιουργιαν', 'μιαν'), ('μιαν', 'διοχρονικην'), ('διοχρονικην', 'αξιαν'), ('αξιαν', 'περαν'), ('περαν', 'που'), ('που', 'την'), ('την', 'εμπορικην'), ('εμπορικην', 'τζαι'), ('τζαι', 'ναν'), ('ναν', 'για'), ('για', 'τουτο')], [('εν', 'το'), ('το', 'πιστευκω'), ('πιστευκω', 'πως'), ('πως', 'εξεχασες'), ('εξεχασες', 'πισω'), ('πισω', 'το'), ('το', 'πιο'), ('πιο', 'σημαντικο')], [('νομιζω', 'ειμαι'), ('ειμαι', 'τζιαι'), ('τζιαι', 'γω'), ('γω', 'σε'), ('σε', 'μιαν'), ('μιαν', 'πορταν'), ('πορταν', 'τζιαι'), ('τζιαι', 'στεκουμαι'), ('στεκουμαι', 'χωρις'), ('χωρις', 'να'), ('να', 'την'), ('την', 'ανοιξω'), ('ανοιξω', 'τζιαι'), ('τζιαι', 'χωρις'), ('χωρις', 'να'), ('να', 'παω'), ('παω', 'πισω')], [('τζιαι', 'μιας'), ('μιας', 'τζιαι'), ('τζιαι', 'εφερεν'), ('εφερεν', 'τουντες'), ('τουντες', 'χαζοβιολες'), ('χαζοβιολες', 'η'), ('η', 'κουβεντα'), ('κουβεντα', 'τι'), ('τι', 'παιζει'), ('παιζει', 'με'), ('με', 'σχεδον'), ('σχεδον', 'ουλλες'), ('ουλλες', 'που'), ('που', 'βαλλουν'), ('βαλλουν', 
'φωτογραφιες'), ('φωτογραφιες', 'σελφι'), ('σελφι', 'σε'), ('σε', 'σεξι'), ('σεξι', 'ποζες'), ('ποζες', 'κορτωμενους'), ('κορτωμενους', 'κωλους'), ('κωλους', 'βυζια'), ('βυζια', 'εξω'), ('εξω', 'κλπ'), ('κλπ', 'τζιαι'), ('τζιαι', 'αλλα'), ('αλλα', 'που'), ('που', 'τουτα'), ('τουτα', 'γιατι'), ('γιατι', 'εν'), ('εν', 'πεθανισκετε'), ('πεθανισκετε', 'να'), ('να', 'σας'), ('σας', 'καμουμεν'), ('καμουμεν', 'νεκροψιαν'), ('νεκροψιαν', 'να'), ('να', 'δουμεν'), ('δουμεν', 'τες'), ('τες', 'εσωτερικες'), ('εσωτερικες', 'σας'), ('σας', 'ομορφιες')], [('εσκεφτουμουν', 'ποτε'), ('ποτε', 'εννα'), ('εννα', 'φκαλεις'), ('φκαλεις', 'φαουσαν'), ('φαουσαν', 'να'), ('να', 'πνασει'), ('πνασει', 'η'), ('η', 'κκελλε'), ('κκελλε', 'μου')], [('το', 'θετικον'), ('θετικον', 'εν'), ('εν', 'οτι'), ('οτι', 'σε'), ('σε', 'καθε'), ('καθε', 'περιπτωσην'), ('περιπτωσην', 'εφκαλες'), ('εφκαλες', 'τα'), ('τα', 'τζιαι'), ('τζιαι', 'επνασες'), ('επνασες', 'εστω'), ('εστω', 'τζιαι'), ('τζιαι', 'προσωρινα')], [('εγω', 'ηρτα'), ('ηρτα', 'στο'), ('στο', 'τσακ'), ('τσακ', 'να'), ('να', 'το'), ('το', 'χρησιμοποιησω'), ('χρησιμοποιησω', 'αλλα'), ('αλλα', 'εκρατηθηκα'), ('εκρατηθηκα', 'τζιαι'), ('τζιαι', 'θωρουν'), ('θωρουν', 'με'), ('με', 'ουλλοι'), ('ουλλοι', 'παραξενα'), ('παραξενα', 'λες'), ('λες', 'τζιαι'), ('τζιαι', 'εχω'), ('εχω', 'κατι'), ('κατι', 'στο'), ('στο', 'κουτελλο')], [('πε', 'μου'), ('μου', 'αληθκεια'), ('αληθκεια', 'ρε'), ('ρε', 'ειπες'), ('ειπες', 'ετσι'), ('ετσι', 'ατακα')], [('εν', 'τζιαμε'), ('τζιαμε', 'που'), ('που', 'αλλασσει'), ('αλλασσει', 'η'), ('η', 'ζωη'), ('ζωη', 'σου'), ('σου', 'τζιαι'), ('τζιαι', 'αραιωνουν'), ('αραιωνουν', 'δραματικα'), ('δραματικα', 'οι'), ('οι', 'εξοδοι')], [('κατα', 'τα'), ('τα', 'αλλα'), ('αλλα', 'ο'), ('ο', 'καθενας'), ('καθενας', 'καμνει'), ('καμνει', 'ο,τι'), ('ο,τι', 'θελει'), ('θελει', 'τζιαι'), ('τζιαι', 'ο,τι'), ('ο,τι', 'κοφκει'), ('κοφκει', 'ο'), ('ο', 'νους'), ('νους', 'του')], [('φορω', 'τες'), ('τες', 'χαζες'), ('χαζες', 'ροζ'), ('ροζ', 'ριαρες'), ('ριαρες', 'τζαι'), ('τζαι', 'μολις'), ('μολις', 'φυω'), ('φυω', 'φκαλλω'), ('φκαλλω', 'τες')], [('να', 'σε'), ('σε', 'δω'), ('δω', 'με'), ('με', 'βιλλουθκια'), ('βιλλουθκια', 'πανω'), ('πανω', 'στην'), ('στην', 'κελλε'), ('κελλε', 'τζιαι'), ('τζιαι', 'ειδα'), ('ειδα', 'τα'), ('τα', 'ουλλα')], [('αμπα', 'τζιαι'), ('τζιαι', 'φτασεις'), ('φτασεις', 'τζιαι'), ('τζιαι', 'καμεις'), ('καμεις', 'το'), ('το', 'μια'), ('μια', 'φορα'), ('φορα', 'θα'), ('θα', 'πρεπει'), ('πρεπει', 'να'), ('να', 'το'), ('το', 'καμνεις'), ('καμνεις', 'συνεχεια')], [('γιατι', 'τουτο'), ('τουτο', 'να'), ('να', 'καλυφκει'), ('καλυφκει', 'τες'), ('τες', 'ψυχικες'), ('ψυχικες', 'μας'), ('μας', 'αναγκες'), ('αναγκες', 'τζιαμε'), ('τζιαμε', 'που'), ('που', 'σαφεστατα'), ('σαφεστατα', 'δεν'), ('δεν', 'υπαρχει'), ('υπαρχει', 'σοβαρο'), ('σοβαρο', 'προβλημα'), ('προβλημα', 'ζιαμε'), ('ζιαμε', 'εν'), ('εν', 'πιο'), ('πιο', 'φροντιδα'), ('φροντιδα', 'με'), ('με', 'καλλωπισμο'), ('καλλωπισμο', 'μαζι'), ('μαζι', 'που'), ('που', 'ενιγουεις'), ('ενιγουεις', 'καμνουν'), ('καμνουν', 'το'), ('το', 'τζιαι'), ('τζιαι', 'αντρες'), ('αντρες', 'τζιαι'), ('τζιαι', 'γεναιτζιες'), ('γεναιτζιες', 'που'), ('που', 'τον'), ('τον', 'τζιαιρο'), ('τζιαιρο', 'που'), ('που', 'εσταματησαμεν'), ('εσταματησαμεν', 'να'), ('να', 'ζουμε'), ('ζουμε', 'στες'), ('στες', 'σπηλιες'), ('σπηλιες', 'τζιαι'), ('τζιαι', 'εκοινωνικοποιηθηκαμεν')], [('εν', 'ρητορικες'), ('ρητορικες', 'οι'), ('οι', 'ερωτησεις'), ('ερωτησεις', 'μου'), ('μου', 'εν'), ('εν', 'σαν'), ('σαν', 'να'), ('να', 'τζιαι'), ('τζιαι', 
'μιλω'), ('μιλω', 'τζιαι'), ('τζιαι', 'του'), ('του', 'εαυτου'), ('εαυτου', 'μου')], [('μια', 'κορουα'), ('κορουα', 'που'), ('που', 'ξερω'), ('ξερω', 'εβαλε'), ('εβαλε', 'χειλη'), ('χειλη', 'εν'), ('εν', 'εισιε'), ('εισιε', 'τιποτε'), ('τιποτε', 'πριν'), ('πριν', 'εν'), ('εν', 'αληθκεια')], [('εν', 'θεμα'), ('θεμα', 'ισορροπιας'), ('ισορροπιας', 'πρεπει'), ('πρεπει', 'να'), ('να', 'μεγαλωνουμε'), ('μεγαλωνουμε', 'τα'), ('τα', 'κοπελλουθκια'), ('κοπελλουθκια', 'με'), ('με', 'τσαγανο'), ('τσαγανο', 'με'), ('με', 'δυναμη')], [('φυσικα', 'εσιει'), ('εσιει', 'σημασια'), ('σημασια', 'τζιαι'), ('τζιαι', 'καμνει'), ('καμνει', 'διαφορα'), ('διαφορα', 'το'), ('το', 'πως'), ('πως', 'συμπεριφερεσαι')], [('εσιει', 'ομως'), ('ομως', 'τζιαι'), ('τζιαι', 'καποιους'), ('καποιους', 'αθρωπους'), ('αθρωπους', 'που'), ('που', 'ειτε'), ('ειτε', 'δυσκολευουνται'), ('δυσκολευουνται', 'ειτε'), ('ειτε', 'εν'), ('εν', 'ηξερουν'), ('ηξερουν', 'πως'), ('πως', 'να'), ('να', 'εκφραστουν')], [('εν', 'σημαινει'), ('σημαινει', 'οτι'), ('οτι', 'εν'), ('εν', 'εννουν'), ('εννουν', 'με'), ('με', 'καλο'), ('καλο', 'τροπο'), ('τροπο', 'οσα'), ('οσα', 'λαλουν')], [('εβαλες', 'με'), ('με', 'σε'), ('σε', 'σκεψεις'), ('σκεψεις', 'τωρα'), ('τωρα', 'εν'), ('εν', 'τα'), ('τα', 'θυμουμαι'), ('θυμουμαι', 'τζιαι'), ('τζιαι', 'εγω'), ('εγω', 'μνημη'), ('μνημη', 'ψαρκου'), ('ψαρκου', 'εχω'), ('εχω', 'φιλουθκια')], [('μα', 'που'), ('που', 'να'), ('να', 'αρκεψω'), ('αρκεψω', 'τζιαι'), ('τζιαι', 'που'), ('που', 'να'), ('να', 'τελιωσω')], [('σκεφτου', 'ποσο'), ('ποσο', 'ευκολο'), ('ευκολο', 'εν'), ('εν', 'για'), ('για', 'σενα'), ('σενα', 'να'), ('να', 'παραδεκτεις'), ('παραδεκτεις', 'η'), ('η', 'να'), ('να', 'πεις'), ('πεις', 'οτι'), ('οτι', 'αυνανιζεσαι')], [('τζαι', 'μιλουν'), ('μιλουν', 'σου'), ('σου', 'τζαι'), ('τζαι', 'μιλας'), ('μιλας', 'τους'), ('τους', 'πισω')], [('μιλω', 'σε'), ('σε', 'ουλλον'), ('ουλλον', 'τον'), ('τον', 'κοσμο'), ('κοσμο', 'εν'), ('εν', 'εχω'), ('εχω', 'ετσι'), ('ετσι', 'κολληματα'), ('κολληματα', 'επειδη'), ('επειδη', 'ενα'), ('ενα', 'θεμα'), ('θεμα', 'μπορει'), ('μπορει', 'να'), ('να', 'μεν'), ('μεν', 'μας'), ('μας', 'αφορα'), ('αφορα', 'αμεσα'), ('αμεσα', 'εν'), ('εν', 'σημαινει'), ('σημαινει', 'δεν'), ('δεν', 'γινεται'), ('γινεται', 'γυρον'), ('γυρον', 'μας')], [('το', 'προβλημα'), ('προβλημα', 'εννεν'), ('εννεν', 'η'), ('η', 'συνηθεια'), ('συνηθεια', 'καθαυτη'), ('καθαυτη', 'αλλα'), ('αλλα', 'ο'), ('ο', 'λογος'), ('λογος', 'που'), ('που', 'την'), ('την', 'χρησιμοποιας')], [('αν', 'εισαι'), ('εισαι', 'μονος'), ('μονος', 'σου'), ('σου', 'καλη'), ('καλη', 'ωρα'), ('ωρα', 'τζιαι'), ('τζιαι', 'βλεπεις'), ('βλεπεις', 'τσοντες'), ('τσοντες', 'γιατι'), ('γιατι', 'φοασαι'), ('φοασαι', 'να'), ('να', 'μιλησεις'), ('μιλησεις', 'στο'), ('στο', 'φυλο'), ('φυλο', 'που'), ('που', 'σε'), ('σε', 'ενδιαφερει'), ('ενδιαφερει', 'τοτε'), ('τοτε', 'ναι'), ('ναι', 'εσιεις'), ('εσιεις', 'προβλημα')], [('αν', 'εισαι'), ('εισαι', 'παντρεμενος'), ('παντρεμενος', 'τζιαι'), ('τζιαι', 'βλεπεις'), ('βλεπεις', 'τσοντες'), ('τσοντες', 'γιατι'), ('γιατι', 'εν'), ('εν', 'μπορεις'), ('μπορεις', 'να'), ('να', 'επικοινωνησεις'), ('επικοινωνησεις', 'τις'), ('τις', 'επιθυμιες'), ('επιθυμιες', 'σου'), ('σου', 'με'), ('με', 'τοντην'), ('τοντην', 'συντροφο'), ('συντροφο', 'σου'), ('σου', 'τοτε'), ('τοτε', 'ναι'), ('ναι', 'εσιεις'), ('εσιεις', 'προβλημα')], [('αυτο', 'που'), ('που', 'ηθελα'), ('ηθελα', 'να'), ('να', 'θιξω'), ('θιξω', 'χωρις'), ('χωρις', 'να'), ('να', 'συμφωνησω'), ('συμφωνησω', 'η'), ('η', 'να'), ('να', 'διαφωνησω'), ('διαφωνησω', 'με'), 
('με', 'τους'), ('τους', 'αντρες'), ('αντρες', 'που'), ('που', 'θωρουν'), ('θωρουν', 'πορνο'), ('πορνο', 'ηταν'), ('ηταν', 'το'), ('το', 'ποσο'), ('ποσο', 'πολυ'), ('πολυ', 'πονο'), ('πονο', 'προκαλουν'), ('προκαλουν', 'στις'), ('στις', 'συντροφους'), ('συντροφους', 'τους'), ('τους', 'τζιαι'), ('τζιαι', 'επροσπαθησα'), ('επροσπαθησα', 'να'), ('να', 'το'), ('το', 'δω'), ('δω', 'λλιο'), ('λλιο', 'πιο'), ('πιο', 'σφαιρικα'), ('σφαιρικα', 'το'), ('το', 'θεμα')], [('εν', 'ηξερω'), ('ηξερω', 'αν'), ('αν', 'εν'), ('εν', 'θεμα'), ('θεμα', 'προβληματος'), ('προβληματος', 'η'), ('η', 'θεμα'), ('θεμα', 'κατανοησης'), ('κατανοησης', 'η'), ('η', 'αν'), ('αν', 'εν'), ('εν', 'λογια'), ('λογια', 'της'), ('της', 'παρηορκας'), ('παρηορκας', 'οποταν'), ('οποταν', 'μπορει'), ('μπορει', 'να'), ('να', 'μεν'), ('μεν', 'διαφωνουμεν'), ('διαφωνουμεν', 'καθολου'), ('καθολου', 'στην'), ('στην', 'ουσια')], [('τωρα', 'για'), ('για', 'οσους'), ('οσους', 'εν'), ('εν', 'μονοι'), ('μονοι', 'τους'), ('τους', 'θεωρω'), ('θεωρω', 'το'), ('το', 'πιο'), ('πιο', 'φυσιολογικο'), ('φυσιολογικο', 'εστω'), ('εστω', 'γιατι'), ('γιατι', 'εν'), ('εν', 'πολλοι'), ('πολλοι', 'οι'), ('οι', 'λογοι'), ('λογοι', 'που'), ('που', 'μπορουν'), ('μπορουν', 'να'), ('να', 'οδηγησουν'), ('οδηγησουν', 'εναν'), ('εναν', 'αντρα'), ('αντρα', 'μονο'), ('μονο', 'του'), ('του', 'να'), ('να', 'θωρει'), ('θωρει', 'πορνο')], [('εν', 'ακομα'), ('ακομα', 'πιο'), ('πιο', 'δυσκολο'), ('δυσκολο', 'οι'), ('οι', 'γυναικες'), ('γυναικες', 'να'), ('να', 'το'), ('το', 'κατανοησουν'), ('κατανοησουν', 'και'), ('και', 'να'), ('να', 'το'), ('το', 'δεκτουν'), ('δεκτουν', 'μεν'), ('μεν', 'σου'), ('σου', 'πω'), ('πω', 'τζιαι'), ('τζιαι', 'πολλα'), ('πολλα', 'ασχημο'), ('ασχημο', 'γι'), ('γι', 'αυτες')], [('εφκαλα', 'τα'), ('τα', 'μμαθκια'), ('μμαθκια', 'μου'), ('μου', 'η'), ('η', 'αληθκεια')], [('να', 'μεν'), ('μεν', 'θωρειτε'), ('θωρειτε', 'τιποτε'), ('τιποτε', 'ομως'), ('ομως', 'τζιαι'), ('τζιαι', 'κανει')], [('θκιαβαζω', 'σε'), ('σε', 'αλλα'), ('αλλα', 'το'), ('το', 'εχεις'), ('εχεις', 'ολο'), ('ολο', 'λαθος')], [('πε', 'μου'), ('μου', 'εναν'), ('εναν', 'που'), ('που', 'εν'), ('εν', 'αυνανιζεται'), ('αυνανιζεται', 'σιορ')], [('εν', 'τζιαι'), ('τζιαι', 'σημαινει'), ('σημαινει', 'οτι'), ('οτι', 'φανταζεσαι'), ('φανταζεσαι', 'τζιεινα'), ('τζιεινα', 'που'), ('που', 'βλεπεις')], [('οι', 'αντρες'), ('αντρες', 'εν'), ('εν', 'πιο'), ('πιο', 'οπτικοι'), ('οπτικοι', 'τυποι')], [('εξαρταται', 'που'), ('που', 'την'), ('την', 'ηλικια'), ('ηλικια', 'και'), ('και', 'τες'), ('τες', 'περιστασεις'), ('περιστασεις', 'τωρα'), ('τωρα', 'αν'), ('αν', 'εισαι'), ('εισαι', 'με'), ('με', 'συντροφο'), ('συντροφο', 'τζιαι'), ('τζιαι', 'εξακολουθεις'), ('εξακολουθεις', 'να'), ('να', 'το'), ('το', 'καμνεις'), ('καμνεις', 'εν'), ('εν', 'ηξερω'), ('ηξερω', 'εν'), ('εν', 'ειμαι'), ('ειμαι', 'ειδικη')], [('η', 'αν'), ('αν', 'εν'), ('εν', 'απλα'), ('απλα', 'για'), ('για', 'εκτονωση'), ('εκτονωση', 'τζιαι'), ('τζιαι', 'εν'), ('εν', 'οκ'), ('οκ', 'ο'), ('ο', 'αλλος'), ('αλλος', 'που'), ('που', 'εν'), ('εν', 'μαζι'), ('μαζι', 'σου')], [('εν', 'φυσιολογικο'), ('φυσιολογικο', 'ομως'), ('ομως', 'να'), ('να', 'σαι'), ('σαι', 'σε'), ('σε', 'σχεση'), ('σχεση', 'τζιαι'), ('τζιαι', 'να'), ('να', 'φανταζεσαι'), ('φανταζεσαι', 'διαφορα')], [('καλο', 'θα'), ('θα', 'ηταν'), ('ηταν', 'να'), ('να', 'μην'), ('μην', 'περιστρεφουνται'), ('περιστρεφουνται', 'ουλλα'), ('ουλλα', 'στη'), ('στη', 'ζωη'), ('ζωη', 'σου'), ('σου', 'που'), ('που', 'το'), ('το', 'σεξ'), ('σεξ', 'τζιαι'), ('τζιαι', 'πολλα'), ('πολλα', 'καλλυττερο'), 
('καλλυττερο', 'να'), ('να', 'επιστρεψεις'), ('επιστρεψεις', 'στην'), ('στην', 'πραγματικοτητα'), ('πραγματικοτητα', 'γιατι'), ('γιατι', 'εν'), ('εν', 'τζιαιν'), ('τζιαιν', 'ουλλα'), ('ουλλα', 'οσα'), ('οσα', 'θωρεις'), ('θωρεις', 'τζιαι'), ('τζιαι', 'φανταζεσαι'), ('φανταζεσαι', 'αληθινα')], [('εν', 'τοσο'), ('τοσο', 'εξωφρενικο'), ('εξωφρενικο', 'που'), ('που', 'εν'), ('εν', 'πολλα'), ('πολλα', 'αστειο'), ('αστειο', 'ατε'), ('ατε', 'ρε'), ('ρε', 'εν'), ('εν', 'γινεται')], [('αμπα', 'και'), ('και', 'εκαμναν'), ('εκαμναν', 'απου'), ('απου', 'πει'), ('πει', 'την'), ('την', 'πιο'), ('πιο', 'μεγαλη'), ('μεγαλη', 'τσιοφτα'), ('τσιοφτα', 'για'), ('για', 'να'), ('να', 'θωρουν'), ('θωρουν', 'αντιδρασεις'), ('αντιδρασεις', 'τζιαι'), ('τζιαι', 'να'), ('να', 'καμνουν'), ('καμνουν', 'πλακα')], [('ειμαι', 'κυπρο'), ('κυπρο', 'για'), ('για', 'λιες'), ('λιες', 'μερες'), ('μερες', 'διακοπες'), ('διακοπες', 'τζιαι'), ('τζιαι', 'τα'), ('τα', 'λιοπετριτικα'), ('λιοπετριτικα', 'δινουν'), ('δινουν', 'τζιαι'), ('τζιαι', 'περνουν')], [('μασιαλλα', 'μπραβο'), ('μπραβο', 'σου'), ('σου', 'ειμαι'), ('ειμαι', 'τζι'), ('τζι', 'εγω'), ('εγω', 'κουμερα'), ('κουμερα', 'αλλα'), ('αλλα', 'δεν'), ('δεν', 'εχω'), ('εχω', 'τις'), ('τις', 'ικανοτητες'), ('ικανοτητες', 'τζαι'), ('τζαι', 'τα'), ('τα', 'προσοντα'), ('προσοντα', 'σου'), ('σου', 'ουτε'), ('ουτε', 'καλη'), ('καλη', 'διοργανωτρια'), ('διοργανωτρια', 'ειμαι'), ('ειμαι', 'ουτε'), ('ουτε', 'ευφανταστες'), ('ευφανταστες', 'ιδεες'), ('ιδεες', 'εχω'), ('εχω', 'ετο'), ('ετο', 'απλα'), ('απλα', 'εν'), ('εν', 'πολλα'), ('πολλα', 'καλα'), ('καλα', 'πλασματα'), ('πλασματα', 'οι'), ('οι', 'κουμερες'), ('κουμερες', 'μου'), ('μου', 'τζι'), ('τζι', 'αγαπουν'), ('αγαπουν', 'με')], [('παντως', 'εσυζητουσαμεν'), ('εσυζητουσαμεν', 'τζιαι'), ('τζιαι', 'μεις'), ('μεις', 'με'), ('με', 'την'), ('την', 'παρεα'), ('παρεα', 'για'), ('για', 'την'), ('την', 'κατασταση'), ('κατασταση', 'τζαμε'), ('τζαμε', 'στην'), ('στην', 'πανδωρα'), ('πανδωρα', 'εν'), ('εν', 'εσιει'), ('εσιει', 'ετσι'), ('ετσι', 'πραμα'), ('πραμα', 'χαιρομαι'), ('χαιρομαι', 'που'), ('που', 'εστω'), ('εστω', 'και'), ('και', 'καποιος'), ('καποιος', 'εφκυκεν'), ('εφκυκεν', 'να'), ('να', 'πει'), ('πει', 'κατι')], [('καλα', 'τζιαι'), ('τζιαι', 'συγκρατηθηκες'), ('συγκρατηθηκες', 'εγω'), ('εγω', 'εν'), ('εν', 'μπορω')], [('αφηκα', 'το'), ('το', 'λαπτοπ'), ('λαπτοπ', 'ανοικτο'), ('ανοικτο', 'στο'), ('στο', 'μπλογκ'), ('μπλογκ', 'σου'), ('σου', 'πας'), ('πας', 'το'), ('το', 'τραπεζι'), ('τραπεζι', 'να'), ('να', 'καμω'), ('καμω', 'καφε')], [('εγω', 'παντως'), ('παντως', 'με'), ('με', 'τες'), ('τες', 'φιλες'), ('φιλες', 'μου'), ('μου', 'που'), ('που', 'εν'), ('εν', 'ωραιες'), ('ωραιες', 'τζιαι'), ('τζιαι', 'αρεσκουν'), ('αρεσκουν', 'μου'), ('μου', 'εξωτερικα'), ('εξωτερικα', 'θα'), ('θα', 'επηαινα'), ('επηαινα', 'μαζι'), ('μαζι', 'τους'), ('τους', 'ανετα'), ('ανετα', 'αν'), ('αν', 'ημουν'), ('ημουν', 'σε'), ('σε', 'φαση'), ('φαση', 'τζιαι'), ('τζιαι', 'ειχα'), ('ειχα', 'ευκαιρια')], [('εν', 'θα'), ('θα', 'επηαινα'), ('επηαινα', 'με'), ('με', 'τζεινες'), ('τζεινες', 'που'), ('που', 'εν'), ('εν', 'μου'), ('μου', 'αρεσκουν'), ('αρεσκουν', 'τζιαι'), ('τζιαι', 'θωρω'), ('θωρω', 'τες'), ('τες', 'καθαρα'), ('καθαρα', 'φιλικα')], [('εγω', 'εν'), ('εν', 'εκαταλαβα'), ('εκαταλαβα', 'που'), ('που', 'επηε'), ('επηε', 'λαθος')], [('και', 'μετα'), ('μετα', 'λαλουν'), ('λαλουν', 'για'), ('για', 'τες'), ('τες', 'γυναικες'), ('γυναικες', 'οτι'), ('οτι', 'δινουν'), ('δινουν', 'μικτα'), ('μικτα', 'σηματα')], [('ν', 'υπαρχει'), ('υπαρχει', 'φιλια'), 
('φιλια', 'μεταξυ'), ('μεταξυ', 'αντρων'), ('αντρων', 'τζιαι'), ('τζιαι', 'γυναικων'), ('γυναικων', 'ουλλοι'), ('ουλλοι', 'στο'), ('στο', 'τελος'), ('τελος', 'σκεφτουνται'), ('σκεφτουνται', 'πως'), ('πως', 'να'), ('να', 'σε'), ('σε', 'γωνιασουν')], [('εκαμες', 'τζαι'), ('τζαι', 'εσυ'), ('εσυ', 'λαθος'), ('λαθος', 'ομως'), ('ομως', 'διοτι'), ('διοτι', 'λαλεις'), ('λαλεις', 'πως'), ('πως', 'εν'), ('εν', 'του'), ('του', 'ελαλες'), ('ελαλες', 'τιποτε'), ('τιποτε', 'προσωπικο'), ('προσωπικο', 'αλλα'), ('αλλα', 'ειπες'), ('ειπες', 'του'), ('του', 'για'), ('για', 'την'), ('την', 'παλια'), ('παλια', 'σχεση')], [('πως', 'να'), ('να', 'μεν'), ('μεν', 'του'), ('του', 'γυρισει'), ('γυρισει', 'τζαι'), ('τζαι', 'να'), ('να', 'τριππαρει')], [('εν', 'επηες'), ('επηες', 'παρακατω')], [('και', 'τουτος'), ('τουτος', 'που'), ('που', 'ενει'), ('ενει', 'τωρα'), ('τωρα', 'τζαι'), ('τζαι', 'αφηκε'), ('αφηκε', 'σε'), ('σε', 'ενω'), ('ενω', 'αλλοι'), ('αλλοι', 'βουρουν'), ('βουρουν', 'κομα')], [('συμφωνω', 'τζ'), ('τζ', 'εγω'), ('εγω', 'με'), ('με', 'οσα'), ('οσα', 'εγραψες')], [('εννα', 'με'), ('με', 'πεισεις'), ('πεισεις', 'τζιαι'), ('τζιαι', 'μετα'), ('μετα', 'αν'), ('αν', 'ξαναδιαπιστωσω'), ('ξαναδιαπιστωσω', 'αυτην'), ('αυτην', 'μου'), ('μου', 'τη'), ('τη', 'διαπιστωση'), ('διαπιστωση', 'θα'), ('θα', 'σε'), ('σε', 'εβρω'), ('εβρω', 'να'), ('να', 'μου'), ('μου', 'τραουδας'), ('τραουδας', 'λαλω'), ('λαλω', 'σου'), ('σου', 'το')], [('χωρκαθκιον', 'δεν'), ('δεν', 'ειναι'), ('ειναι', 'τα'), ('τα', 'καρναβαλια'), ('καρναβαλια', 'ειναι'), ('ειναι', 'να'), ('να', 'εν'), ('εν', 'τοσο'), ('τοσο', 'ψηλα'), ('ψηλα', 'η'), ('η', 'μουτη'), ('μουτη', 'τους'), ('τους', 'που'), ('που', 'να'), ('να', 'μεν'), ('μεν', 'μπορεις'), ('μπορεις', 'να'), ('να', 'δουν'), ('δουν', 'λιη'), ('λιη', 'ελαφροτητα'), ('ελαφροτητα', 'γυρω'), ('γυρω', 'τους'), ('τους', 'και'), ('και', 'μεσα'), ('μεσα', 'τους')], [('καμετε', 'χαζι'), ('χαζι', 'σιορ'), ('σιορ', 'εννα'), ('εννα', 'σας'), ('σας', 'καμει'), ('καμει', 'καλο')], [('εν', 'καταλαβω'), ('καταλαβω', 'τι'), ('τι', 'σημαινει'), ('σημαινει', 'χωρκαθκιον'), ('χωρκαθκιον', 'πιον'), ('πιον', 'εβαρεθηκα'), ('εβαρεθηκα', 'να'), ('να', 'ακουω'), ('ακουω', 'το'), ('το', 'εναν'), ('εναν', 'εν'), ('εν', 'χωρκατικον'), ('χωρκατικον', 'το'), ('το', 'αλλο'), ('αλλο', 'εν'), ('εν', 'χωρκατικον')], [('αν', 'εν'), ('εν', 'ετσι'), ('ετσι', 'η'), ('η', 'κυπρος'), ('κυπρος', 'εν'), ('εν', 'ενα'), ('ενα', 'χωρκο'), ('χωρκο', 'μεγαλο'), ('μεγαλο', 'αρα'), ('αρα', 'ειμαστε'), ('ειμαστε', 'ουλλοι'), ('ουλλοι', 'χωρκατοι')], [('και', 'αφηστους'), ('αφηστους', 'τζεινους'), ('τζεινους', 'που'), ('που', 'την'), ('την', 'βρισκουν'), ('βρισκουν', 'να'), ('να', 'παν'), ('παν', 'και'), ('και', 'να'), ('να', 'περασουν'), ('περασουν', 'καλα'), ('καλα', 'ταχα'), ('ταχα', 'φακκα'), ('φακκα', 'μου'), ('μου', 'και'), ('και', 'εμενα'), ('εμενα', 'το'), ('το', 'καρναβαλι'), ('καρναβαλι', 'αλλα'), ('αλλα', 'εν'), ('εν', 'το'), ('το', 'καμνω'), ('καμνω', 'θεμα')], [('εν', 'ηξερω'), ('ηξερω', 'σιουρα'), ('σιουρα', 'μαλλον'), ('μαλλον', 'εν'), ('εν', 'παραλογισμοι'), ('παραλογισμοι', 'του'), ('του', 'πονεμενου'), ('πονεμενου', 'εν'), ('εν', 'σου'), ('σου', 'ξανατυχε'), ('ξανατυχε', 'να'), ('να', 'σκεφτεσαι'), ('σκεφτεσαι', 'λλιο'), ('λλιο', 'καχυποπτα'), ('καχυποπτα', 'αμα'), ('αμα', 'πονεις'), ('πονεις', 'με'), ('με', 'τοσα'), ('τοσα', 'λεφτα'), ('λεφτα', 'που'), ('που', 'μαζευκουνται'), ('μαζευκουνται', 'θα'), ('θα', 'ηθελα'), ('ηθελα', 'να'), ('να', 'γινουνταν'), ('γινουνταν', 'καλλυττερα'), ('καλλυττερα', 'πραματα')], [('να', 
'σαι'), ('σαι', 'καλα'), ('καλα', 'παντα'), ('παντα', 'τζιαι'), ('τζιαι', 'μπραβο'), ('μπραβο', 'στη'), ('στη', 'μανα'), ('μανα', 'σου'), ('σου', 'που'), ('που', 'ενικησεν')], [('εν', 'πολλα'), ('πολλα', 'δυσκολος'), ('δυσκολος', 'τουτος'), ('τουτος', 'ο'), ('ο', 'αγωνας')], [('ξερεις', 'τα'), ('τα', 'τζιαι'), ('τζιαι', 'συ'), ('συ', 'που'), ('που', 'πρωτον'), ('πρωτον', 'σιεριν')], [('ξερεις', 'ο'), ('ο', 'καθενας'), ('καθενας', 'εσιει'), ('εσιει', 'τα'), ('τα', 'δικα'), ('δικα', 'του'), ('του', 'αλλα'), ('αλλα', 'τουτη'), ('τουτη', 'η'), ('η', 'αρρωσκεια'), ('αρρωσκεια', 'εφαεν'), ('εφαεν', 'πολλους'), ('πολλους', 'τζιαι'), ('τζιαι', 'επηρεν'), ('επηρεν', 'τους'), ('τους', 'μιτα'), ('μιτα', 'της')], [('ετο', 'μανα'), ('μανα', 'μου'), ('μου', 'εν'), ('εν', 'ενας'), ('ενας', 'κοσμος'), ('κοσμος', 'για'), ('για', 'αντρες'), ('αντρες', 'ευνουχιστε'), ('ευνουχιστε', 'τους'), ('τους', 'ουλλους'), ('ουλλους', 'τωρα'), ('τωρα', 'που'), ('που', 'το'), ('το', 'σκεφτομαι'), ('σκεφτομαι', 'μεν'), ('μεν', 'τους'), ('τους', 'ευνουχισετε'), ('ευνουχισετε', 'γιατι'), ('γιατι', 'καμνουν'), ('καμνουν', 'και'), ('και', 'ενα'), ('ενα', 'καλο')], [('εθκιαβασα', 'τζαι'), ('τζαι', 'εγω'), ('εγω', 'την'), ('την', 'ποιηση'), ('ποιηση', 'της'), ('της', 'μου'), ('μου', 'το'), ('το', 'εδειξαν'), ('εδειξαν', 'προχτες'), ('προχτες', 'νομιζω'), ('νομιζω', 'εν'), ('εν', 'εχω'), ('εχω', 'να'), ('να', 'σχολιασω'), ('σχολιασω', 'τιποτε')], [('εν', 'εσιει'), ('εσιει', 'μανα'), ('μανα', 'μου'), ('μου', 'σαν'), ('σαν', 'την'), ('την', 'λεμεσο'), ('λεμεσο', 'εχεις'), ('εχεις', 'και'), ('και', 'εσυ'), ('εσυ', 'το'), ('το', 'δικαιο'), ('δικαιο', 'σου'), ('σου', 'λιο'), ('λιο', 'τατσιλλικκι'), ('τατσιλλικκι', 'εχουμε'), ('εχουμε', 'το')], [('εν', 'τουτο'), ('τουτο', 'που'), ('που', 'λαλουμε'), ('λαλουμε', 'καμε'), ('καμε', 'μου'), ('μου', 'τοσο'), ('τοσο', 'τζαι'), ('τζαι', 'θα'), ('θα', 'σου'), ('σου', 'καμω'), ('καμω', 'αλλο'), ('αλλο', 'τοσο')], [('ασε', 'του'), ('του', 'πατας'), ('πατας', 'για'), ('για', 'να'), ('να', 'προλαβεις'), ('προλαβεις', 'να'), ('να', 'μπεις'), ('μπεις', 'τζαι'), ('τζαι', 'ερκεται'), ('ερκεται', 'τζαι'), ('τζαι', 'ο'), ('ο', 'αλλος'), ('αλλος', 'ο'), ('ο', 'βλακας'), ('βλακας', 'τζαι'), ('τζαι', 'πατα'), ('πατα', 'του'), ('του', 'παραπανω'), ('παραπανω', 'για'), ('για', 'να'), ('να', 'σε'), ('σε', 'σκοτωσει'), ('σκοτωσει', 'σιουρα'), ('σιουρα', 'πραματα')], [('παντως', 'σιουρα'), ('σιουρα', 'τουτα'), ('τουτα', 'ουλλα'), ('ουλλα', 'τα'), ('τα', 'χορτα'), ('χορτα', 'που'), ('που', 'λαλεις'), ('λαλεις', 'πιο'), ('πιο', 'πανω'), ('πανω', 'πολλα'), ('πολλα', 'λιοι'), ('λιοι', 'εως'), ('εως', 'κανενας'), ('κανενας', 'αλλος'), ('αλλος', 'λαος'), ('λαος', 'δεν'), ('δεν', 'τα'), ('τα', 'τρωει')], [('εν', 'ανθρωποι'), ('ανθρωποι', 'τζιαι'), ('τζιαι', 'τουτοι'), ('τουτοι', 'ποιος'), ('ποιος', 'ξερει'), ('ξερει', 'ποιες'), ('ποιες', 'δυσκολιες'), ('δυσκολιες', 'τζιαι'), ('τζιαι', 'τι'), ('τι', 'τραυματα'), ('τραυματα', 'κουβαλουν'), ('κουβαλουν', 'τουτες'), ('τουτες', 'οι'), ('οι', 'ψυσιες'), ('ψυσιες', 'τζιαι'), ('τζιαι', 'γιατι'), ('γιατι', 'υιοθετησαν'), ('υιοθετησαν', 'τουτο'), ('τουτο', 'τον'), ('τον', 'τροπο'), ('τροπο', 'ζωης')], [('μπορω', 'να'), ('να', 'συζητω'), ('συζητω', 'με'), ('με', 'τις'), ('τις', 'ωρες'), ('ωρες', 'για'), ('για', 'τουντο'), ('τουντο', 'θεμα'), ('θεμα', 'αλλα'), ('αλλα', 'να'), ('να', 'ξεπεφτουν'), ('ξεπεφτουν', 'τοσο'), ('τοσο', 'πολυ'), ('πολυ', 'καποιοι'), ('καποιοι', 'τζιαι'), ('τζιαι', 'να'), ('να', 'κρινουν'), ('κρινουν', 'να'), ('να', 'πειραζουν'), ('πειραζουν', 'να'), 
('να', 'ενοχλουν'), ('ενοχλουν', 'τζιαι'), ('τζιαι', 'να'), ('να', 'ξετυμαζουν'), ('ξετυμαζουν', 'εν'), ('εν', 'το'), ('το', 'καρτερουσα')], [('ευχαριστω', 'σας'), ('σας', 'ουλλους'), ('ουλλους', 'παρα'), ('παρα', 'πολλα'), ('πολλα', 'για'), ('για', 'το'), ('το', 'ενδιαφερον')], [('σιερουμαι', 'που'), ('που', 'σας'), ('σας', 'αρεσε'), ('αρεσε', 'τζιαι'), ('τζιαι', 'σκεφτεστε'), ('σκεφτεστε', 'το'), ('το', 'ιδιο')], [('προσφατα', 'εβλεπα'), ('εβλεπα', 'τζαι'), ('τζαι', 'εγω'), ('εγω', 'φωτογραφιες'), ('φωτογραφιες', 'που'), ('που', 'το'), ('το', 'δημοτικο'), ('δημοτικο', 'μετα'), ('μετα', 'που'), ('που', 'παρα'), ('παρα', 'πολλα'), ('πολλα', 'χρονια')], [('μα', 'ιντα'), ('ιντα', 'ωραιο'), ('ωραιο', 'ποστ'), ('ποστ', 'τι'), ('τι', 'μας'), ('μας', 'εθυμισες'), ('εθυμισες', 'τωρα')], [('υπομονη', 'με'), ('με', 'τες'), ('τες', 'δουλειες'), ('δουλειες', 'σου'), ('σου', 'τζαι'), ('τζαι', 'γραφε'), ('γραφε', 'τους'), ('τους', 'κουτσομπολιες'), ('κουτσομπολιες', 'να'), ('να', 'μεν'), ('μεν', 'σε'), ('σε', 'κοφτει'), ('κοφτει', 'ακομα'), ('ακομα', 'τζαι'), ('τζαι', 'που'), ('που', 'τζεινους'), ('τζεινους', 'που'), ('που', 'τους'), ('τους', 'διουν'), ('διουν', 'σημασια'), ('σημασια', 'τζαι'), ('τζαι', 'πιστευκουν'), ('πιστευκουν', 'τους')], [('εσιει', 'κανεναν'), ('κανεναν', 'εξαμηνο'), ('εξαμηνο', 'που'), ('που', 'σε'), ('σε', 'ανακαλυψα'), ('ανακαλυψα', 'εν'), ('εν', 'η'), ('η', 'χαρα'), ('χαρα', 'μου'), ('μου', 'να'), ('να', 'σε'), ('σε', 'θκιαβαζω')], [('ακομα', 'βεβαια'), ('βεβαια', 'προσπαθω'), ('προσπαθω', 'να'), ('να', 'καταλαβω'), ('καταλαβω', 'τι'), ('τι', 'δουλεια'), ('δουλεια', 'καμνεις'), ('καμνεις', 'γιατι'), ('γιατι', 'που'), ('που', 'τη'), ('τη', 'μια'), ('μια', 'μιλας'), ('μιλας', 'για'), ('για', 'καλλιτεχνικα'), ('καλλιτεχνικα', 'πραματα'), ('πραματα', 'τζιαι'), ('τζιαι', 'που'), ('που', 'την'), ('την', 'αλλη'), ('αλλη', 'θκιαβαζω'), ('θκιαβαζω', 'κατι'), ('κατι', 'παλια'), ('παλια', 'ποστ'), ('ποστ', 'με'), ('με', 'ψυχολογιες'), ('ψυχολογιες', 'τζιαι'), ('τζιαι', 'κοπελλουθκια')], [('ενιγουει', 'ο,τι'), ('ο,τι', 'τζιαι'), ('τζιαι', 'αν'), ('αν', 'καμνεις'), ('καμνεις', 'στη'), ('στη', 'ζωη'), ('ζωη', 'σου'), ('σου', 'σημασια'), ('σημασια', 'εσιει'), ('εσιει', 'να'), ('να', 'το'), ('το', 'αγαπας')], [('εν', 'δαμε'), ('δαμε', 'που'), ('που', 'επρεπε'), ('επρεπε', 'να'), ('να', 'γραψεις'), ('γραψεις', 'το'), ('το', 'κειμενο'), ('κειμενο', 'με'), ('με', 'το'), ('το', 'σιεσιμο'), ('σιεσιμο', 'τζιαι'), ('τζιαι', 'το'), ('το', 'καζανακι'), ('καζανακι', 'εσιει'), ('εσιει', 'μιαν'), ('μιαν', 'ηλιθια'), ('ηλιθια', 'που'), ('που', 'ξερω'), ('ξερω', 'που'), ('που', 'επειδη'), ('επειδη', 'εσπουδασεν'), ('εσπουδασεν', 'στο'), ('στο', 'εξωτερικο'), ('εξωτερικο', 'λαλει'), ('λαλει', 'οτι'), ('οτι', 'οι'), ('οι', 'αλλοι'), ('αλλοι', 'ουλλοι'), ('ουλλοι', 'εν'), ('εν', 'πρεπει'), ('πρεπει', 'να'), ('να', 'θεωρουνται'), ('θεωρουνται', 'καλλιτεχνες'), ('καλλιτεχνες', 'τζιαι'), ('τζιαι', 'κακοφημει'), ('κακοφημει', 'τους'), ('τους', 'εχθρους'), ('εχθρους', 'της'), ('της', 'τζιαι'), ('τζιαι', 'υποτιμα'), ('υποτιμα', 'τους')], [('εν', 'κατι'), ('κατι', 'καλλιτεχνες'), ('καλλιτεχνες', 'που'), ('που', 'επηαν'), ('επηαν', 'στο'), ('στο', 'εξωτερικον'), ('εξωτερικον', 'τζιαι'), ('τζιαι', 'εψηλωσεν'), ('εψηλωσεν', 'η'), ('η', 'μουττη'), ('μουττη', 'τους')], [('μεν', 'σου'), ('σου', 'πω'), ('πω', 'εν'), ('εν', 'η'), ('η', 'μονη'), ('μονη', 'η'), ('η', 'συγκεκριμενη'), ('συγκεκριμενη', 'που'), ('που', 'ξερω'), ('ξερω', 'τζιαι'), ('τζιαι', 'πρεπει'), ('πρεπει', 'να'), ('να', 'ντρεπεται'), ('ντρεπεται', 'που'), 
('που', 'θεωρει'), ('θεωρει', 'τον'), ('τον', 'εαυτο'), ('εαυτο', 'της'), ('της', 'καλλιτεχνη'), ('καλλιτεχνη', 'ξερει'), ('ξερει', 'τα'), ('τα', 'ουλλα'), ('ουλλα', 'τζιαι'), ('τζιαι', 'εν'), ('εν', 'θελει'), ('θελει', 'να'), ('να', 'ερτει'), ('ερτει', 'κυπρο'), ('κυπρο', 'για'), ('για', 'να'), ('να', 'καμει'), ('καμει', 'καριερα')], [('τουτοι', 'κρατουν'), ('κρατουν', 'οι'), ('οι', 'ιδιοι'), ('ιδιοι', 'τον'), ('τον', 'τοπο'), ('τοπο', 'τους'), ('τους', 'πισω'), ('πισω', 'τζιαι'), ('τζιαι', 'εν'), ('εν', 'πραγματικα'), ('πραγματικα', 'ψυχοπαθεις'), ('ψυχοπαθεις', 'τζιαι'), ('τζιαι', 'πρεπει'), ('πρεπει', 'να'), ('να', 'κλειστουν'), ('κλειστουν', 'σε'), ('σε', 'ιδρυμα')], [('εν', 'θελω'), ('θελω', 'να'), ('να', 'καταγγειλω'), ('καταγγειλω', 'κανεναν'), ('κανεναν', 'αν'), ('αν', 'και'), ('και', 'τους'), ('τους', 'αξιζεν'), ('αξιζεν', 'ετσι'), ('ετσι', 'πραμα'), ('πραμα', 'εν'), ('εν', 'εξαναζησα')], [('μην', 'νομιζεις'), ('νομιζεις', 'πως'), ('πως', 'θα'), ('θα', 'κερδιζα'), ('κερδιζα', 'οτιδηποτε'), ('οτιδηποτε', 'οταν'), ('οταν', 'οι'), ('οι', 'σκαταες'), ('σκαταες', 'εχουν'), ('εχουν', 'μεσο'), ('μεσο', 'αγωνιζεσαι'), ('αγωνιζεσαι', 'αδικα'), ('αδικα', 'τζιαι'), ('τζιαι', 'εν'), ('εν', 'εχω'), ('εχω', 'χρονο'), ('χρονο', 'για'), ('για', 'πεταμα')], [('μονο', 'τζιαι'), ('τζιαι', 'μονο'), ('μονο', 'που'), ('που', 'εκαμαν'), ('εκαμαν', 'ετσι'), ('ετσι', 'ουλλοι'), ('ουλλοι', 'που'), ('που', 'γυρω'), ('γυρω', 'οι'), ('οι', 'μονο'), ('μονο', 'οι'), ('οι', 'νοσοκομοι'), ('νοσοκομοι', 'τζιαι'), ('τζιαι', 'οι'), ('οι', 'μπατσοι'), ('μπατσοι', 'αποδυκνυει'), ('αποδυκνυει', 'πως'), ('πως', 'εν'), ('εν', 'ουλλα'), ('ουλλα', 'που'), ('που', 'την'), ('την', 'κοινωνια'), ('κοινωνια', 'που'), ('που', 'ξεκινουν')], [('εσκεφτηκες', 'οι'), ('οι', 'πολλα'), ('πολλα', 'οι'), ('οι', 'βαρετοι'), ('βαρετοι', 'οι'), ('οι', 'ταξιτζιες'), ('ταξιτζιες', 'που'), ('που', 'αφηνουν'), ('αφηνουν', 'ενα'), ('ενα', 'νιχουι'), ('νιχουι', 'πας'), ('πας', 'το'), ('το', 'δακτυλο'), ('δακτυλο', 'το'), ('το', 'μιτσι'), ('μιτσι', 'ξερεις'), ('ξερεις', 'τζεινο'), ('τζεινο', 'το'), ('το', 'νυχουι'), ('νυχουι', 'που'), ('που', 'καθαριζει'), ('καθαριζει', 'τα'), ('τα', 'αφτια'), ('αφτια', 'τζαι'), ('τζαι', 'τη'), ('τη', 'μουτη')], [('οι', 'νεραιδες'), ('νεραιδες', 'ενναιν'), ('ενναιν', 'μεσα'), ('μεσα', 'εσυ'), ('εσυ', 'δικαιουσαι'), ('δικαιουσαι', 'εν'), ('εν', 'απλα'), ('απλα', 'μια'), ('μια', 'αποψη')], [('για', 'να'), ('να', 'σου'), ('σου', 'περασει'), ('περασει', 'λλιο'), ('λλιο', 'η'), ('η', 'καουρα'), ('καουρα', 'παντως'), ('παντως', 'προτεινω'), ('προτεινω', 'να'), ('να', 'θκιαβασεις'), ('θκιαβασεις', 'τους'), ('τους', 'θκυο'), ('θκυο', 'τομους'), ('τομους', 'με'), ('με', 'ιστοριες')], [('ηβρα', 'τουντην'), ('τουντην', 'φιλεναδα'), ('φιλεναδα', 'που'), ('που', 'λαλεις')], [('εν', 'τζιαι'), ('τζιαι', 'καρτερω'), ('καρτερω', 'τα'), ('τα', 'ουλλα'), ('ουλλα', 'που'), ('που', 'τον'), ('τον', 'θεο'), ('θεο', 'κινω'), ('κινω', 'τζιαι'), ('τζιαι', 'τα'), ('τα', 'ποθκια'), ('ποθκια', 'μου'), ('μου', 'τζιαι'), ('τζιαι', 'τα'), ('τα', 'σιερκα'), ('σιερκα', 'μου')], [('αλλα', 'εσιει'), ('εσιει', 'πραματα'), ('πραματα', 'που'), ('που', 'εν'), ('εν', 'εξαρτουνται'), ('εξαρτουνται', 'μονο'), ('μονο', 'που'), ('που', 'τες'), ('τες', 'κινησεις'), ('κινησεις', 'μου')], [('οσο', 'για'), ('για', 'το'), ('το', 'στομασι'), ('στομασι', 'εν'), ('εν', 'καταλαβει'), ('καταλαβει', 'τιποτε'), ('τιποτε', 'τωρα'), ('τωρα', 'που'), ('που', 'την'), ('την', 'πολλη'), ('πολλη', 'σουβλα'), ('σουβλα', 'τζιαι'), ('τζιαι', 'την'), ('την', 'πολλη'), ('πολλη', 
'παττιχα')], [('βρισκω', 'σε'), ('σε', 'πολλα'), ('πολλα', 'συνειδητοποιημενη'), ('συνειδητοποιημενη', 'για'), ('για', 'το'), ('το', 'θεμα'), ('θεμα', 'τζιαι'), ('τζιαι', 'αφου'), ('αφου', 'ξερεις'), ('ξερεις', 'γιατι'), ('γιατι', 'ξεκινησες'), ('ξεκινησες', 'ξερεις'), ('ξερεις', 'γιατι'), ('γιατι', 'εφυες'), ('εφυες', 'τζιαι'), ('τζιαι', 'εισαι'), ('εισαι', 'αποφασισμενη'), ('αποφασισμενη', 'πιστευκω'), ('πιστευκω', 'θα'), ('θα', 'το'), ('το', 'προσπερασεις')], [('εισαι', 'μια'), ('μια', 'λεβεντια'), ('λεβεντια', 'τζιαι'), ('τζιαι', 'ενα'), ('ενα', 'πλασμα'), ('πλασμα', 'με'), ('με', 'ψυσιην')], [('εισαι', 'τσιακκος'), ('τσιακκος', 'γιατι'), ('γιατι', 'εσιεις'), ('εσιεις', 'αρχες'), ('αρχες', 'δυναμη'), ('δυναμη', 'να'), ('να', 'αντισταθεις')], [('ανοιξες', 'μας'), ('μας', 'για'), ('για', 'λιο'), ('λιο', 'τη'), ('τη', 'ψυσιη'), ('ψυσιη', 'σου'), ('σου', 'τζαι'), ('τζαι', 'ειδαμε'), ('ειδαμε', 'τζαι'), ('τζαι', 'μεις'), ('μεις', 'την'), ('την', 'αληθκεια')], [('πονουν', 'με'), ('με', 'τα'), ('τα', 'μμαθκια'), ('μμαθκια', 'μου'), ('μου', 'αμαν'), ('αμαν', 'δκιαβαζω')], [('εκατσα', 'μιαν'), ('μιαν', 'εφτομαδα'), ('εφτομαδα', 'τζιαι'), ('τζιαι', 'εμαθα'), ('εμαθα', 'να'), ('να', 'γραφω'), ('γραφω', 'στα'), ('στα', 'ελληνικα'), ('ελληνικα', 'επειδη'), ('επειδη', 'μου'), ('μου', 'το'), ('το', 'εζητησετε'), ('εζητησετε', 'τζιαι'), ('τζιαι', 'επειδη'), ('επειδη', 'πιστευκω'), ('πιστευκω', 'εν'), ('εν', 'καλλυττερα')], [('νιωθω', 'περιφανη'), ('περιφανη', 'γιατι'), ('γιατι', 'εν'), ('εν', 'εξαναγραψα'), ('εξαναγραψα', 'ποττε'), ('ποττε', 'μου'), ('μου', 'ελληνικα')], [('μα', 'τι'), ('τι', 'ομορφα'), ('ομορφα', 'που'), ('που', 'ενουν'), ('ενουν', 'μανα'), ('μανα', 'μου'), ('μου', 'τα')], [('πε', 'μου'), ('μου', 'τουτα'), ('τουτα', 'ουλλα'), ('ουλλα', 'που'), ('που', 'σε'), ('σε', 'νευριαζουν'), ('νευριαζουν', 'εν'), ('εν', 'στην'), ('στην', 'κυπρον'), ('κυπρον', 'που'), ('που', 'τα'), ('τα', 'θωρεις'), ('θωρεις', 'η'), ('η', 'εν'), ('εν', 'φαινομενον'), ('φαινομενον', 'στον'), ('στον', 'τοπον'), ('τοπον', 'που'), ('που', 'εισαι'), ('εισαι', 'τωρα')], [('αν', 'δεν'), ('δεν', 'εθκιαβαζεν'), ('εθκιαβαζεν', 'η'), ('η', 'οικουμενη'), ('οικουμενη', 'οτι'), ('οτι', 'σου'), ('σου', 'λαλω'), ('λαλω', 'δαμαι'), ('δαμαι', 'ηταν'), ('ηταν', 'να'), ('να', 'σου'), ('σου', 'πω'), ('πω', 'διαφορα')], [('εφαεν', 'τους'), ('τους', 'ο'), ('ο', 'ηπιος'), ('ηπιος', 'ιμπεριαλισμος'), ('ιμπεριαλισμος', 'που'), ('που', 'ερκεται'), ('ερκεται', 'εν'), ('εν', 'ειδη'), ('ειδη', 'πολιτισμου'), ('πολιτισμου', 'λεμεν'), ('λεμεν', 'τωρα')], [('εκτος', 'που'), ('που', 'τον'), ('τον', 'ταταν'), ('ταταν', 'του'), ('του', 'δεν'), ('δεν', 'τον'), ('τον', 'υποστηριζουν'), ('υποστηριζουν', 'καν'), ('καν', 'οι'), ('οι', 'φιλελευθεροι')], [('τον', 'τζαιρον'), ('τζαιρον', 'που'), ('που', 'ηταν'), ('ηταν', 'πρωταθλητης'), ('πρωταθλητης', 'του'), ('του', 'μπατμιττον'), ('μπατμιττον', 'οσον'), ('οσον', 'τζαιρον'), ('τζαιρον', 'ηταν'), ('ηταν', 'πρωταθλητης'), ('πρωταθλητης', 'ηταν'), ('ηταν', 'του'), ('του', 'ηθους')], [('ρε', 'φιλε'), ('φιλε', 'λες'), ('λες', 'τζι'), ('τζι', 'εν'), ('εν', 'λλιοι'), ('λλιοι', 'που'), ('που', 'εθησαυρισαν'), ('εθησαυρισαν', 'πανω'), ('πανω', 'στον'), ('στον', 'πονο'), ('πονο', 'του'), ('του', 'προσφυγα'), ('προσφυγα', 'τζιαι'), ('τζιαι', 'του'), ('του', 'συγγενη'), ('συγγενη', 'του'), ('του', 'αγνοουμενου')], [('ετυχε', 'μου'), ('μου', 'εχτες'), ('εχτες', 'ειπα'), ('ειπα', 'φευκω'), ('φευκω', 'γεια'), ('γεια', 'σας'), ('σας', 'και'), ('και', 'τους'), ('τους', 'αφησα')], [('εν', 'θα'), ('θα', 'το'), ('το', 
'στηριξω'), ('στηριξω', 'αλλα'), ('αλλα', 'εν'), ('εν', 'τζαι'), ('τζαι', 'επεσεν'), ('επεσεν', 'που'), ('που', 'τον'), ('τον', 'ουρανο'), ('ουρανο', 'ουτε'), ('ουτε', 'εχει'), ('εχει', 'την'), ('την', 'ιδια'), ('ιδια', 'βαση'), ('βαση', 'με'), ('με', 'το'), ('το', 'ειμαι'), ('ειμαι', 'το'), ('το', 'ελαμ'), ('ελαμ', 'τζαι'), ('τζαι', 'δερνω'), ('δερνω', 'μαυρους')], [('πιστεφκουν', 'οτι'), ('οτι', 'πιανουν'), ('πιανουν', 'τες'), ('τες', 'δουλειες'), ('δουλειες', 'πιστεφκουν'), ('πιστεφκουν', 'οτι'), ('οτι', 'ειναι'), ('ειναι', 'εθνοτικα'), ('εθνοτικα', 'ανωτεροι')], [('εν', 'να'), ('να', 'σταματησει'), ('σταματησει', 'η'), ('η', 'αριστερα'), ('αριστερα', 'πκιον'), ('πκιον', 'οξα'), ('οξα', 'στο'), ('στο', 'ιραν'), ('ιραν', 'εν'), ('εν', 'κακοι'), ('κακοι', 'οι'), ('οι', 'ισλαμιστες'), ('ισλαμιστες', 'αλλα'), ('αλλα', 'εν'), ('εν', 'καλη'), ('καλη', 'η'), ('η', 'σαουδικη'), ('σαουδικη', 'αραβια')], [('να', 'μου'), ('μου', 'επιτρεψεις'), ('επιτρεψεις', 'λλιον'), ('λλιον', 'σκληρη'), ('σκληρη', 'κριτικη'), ('κριτικη', 'διοτι'), ('διοτι', 'οποιος'), ('οποιος', 'εδεχτηκεν'), ('εδεχτηκεν', 'βοηθεια'), ('βοηθεια', 'για'), ('για', 'να'), ('να', 'στρωσει'), ('στρωσει', 'μιαν'), ('μιαν', 'εξεγερση'), ('εξεγερση', 'που'), ('που', 'το'), ('το', 'κεφαλαιο'), ('κεφαλαιο', 'στο'), ('στο', 'τελος'), ('τελος', 'εχασεν')], [('εν', 'τζαι'), ('τζαι', 'εκεδρισεν'), ('εκεδρισεν', 'μανια'), ('μανια', 'ρε'), ('ρε', 'που'), ('που', 'την'), ('την', 'εσσιετε')], [('τι', 'εκερδισα'), ('εκερδισα', 'που'), ('που', 'την'), ('την', 'επιλογη'), ('επιλογη', 'μου'), ('μου', 'να'), ('να', 'ανοικω'), ('ανοικω', 'τζιαμε')], [('δεν', 'βγαινει'), ('βγαινει', 'οτι'), ('οτι', 'μας'), ('μας', 'εκαμαν'), ('εκαμαν', 'χατηρκα')], [('τι', 'εκερδισαμεν'), ('εκερδισαμεν', 'που'), ('που', 'τους'), ('τους', 'αδεσμευτους')], [('στελιο', 'εναν'), ('εναν', 'σεντονιν'), ('σεντονιν', 'σαν'), ('σαν', 'τον'), ('τον', 'φρεσκον'), ('φρεσκον', 'τον'), ('τον', 'αεραν')], [('η', 'αληθκεια'), ('αληθκεια', 'δεν'), ('δεν', 'εκαταλαβα'), ('εκαταλαβα', 'πως'), ('πως', 'οι'), ('οι', 'πομπες'), ('πομπες', 'του'), ('του', 'γριβα'), ('γριβα', 'σιουρα'), ('σιουρα', 'αποδυκνειουν'), ('αποδυκνειουν', 'σχεδιο'), ('σχεδιο', 'διχοτομισης')], [('τη', 'ζυριχη'), ('ζυριχη', 'δεν'), ('δεν', 'τη'), ('τη', 'καμνω'), ('καμνω', 'φοινιτζιαν')], [('τζαι', 'να'), ('να', 'σιεις'), ('σιεις', 'παντα'), ('παντα', 'υποψιν'), ('υποψιν', 'σου'), ('σου', 'οτι'), ('οτι', 'το'), ('το', 'να'), ('να', 'εσσιεις'), ('εσσιεις', 'μιαν'), ('μιαν', 'αλλαγη'), ('αλλαγη', 'θεωρητικα'), ('θεωρητικα', 'στο'), ('στο', 'νου'), ('νου', 'εν'), ('εν', 'τζαι'), ('τζαι', 'σημαινει'), ('σημαινει', 'οτι'), ('οτι', 'εννα'), ('εννα', 'σου'), ('σου', 'φκει'), ('φκει', 'τζαι'), ('τζαι', 'στη'), ('στη', 'πραξη'), ('πραξη', 'απαραιτητα')], [('οπως', 'λαλεις'), ('λαλεις', 'να'), ('να', 'μεν'), ('μεν', 'σουζουμεν'), ('σουζουμεν', 'τα'), ('τα', 'ποθκια'), ('ποθκια', 'μας'), ('μας', 'εν'), ('εν', 'ιχρειαζεται')], [('οπως', 'τζαι'), ('τζαι', 'να'), ('να', 'σσιει'), ('σσιει', 'οι'), ('οι', 'πως'), ('πως', 'απλα'), ('απλα', 'ησουν'), ('ησουν', 'εμπορας'), ('εμπορας', 'οι'), ('οι', 'σιορ')], [('καλαν', 'η'), ('η', 'ουσια'), ('ουσια', 'ειναι'), ('ειναι', 'ποιος'), ('ποιος', 'εν'), ('εν', 'να'), ('να', 'την'), ('την', 'κατσει'), ('κατσει', 'του'), ('του', 'αλλου'), ('αλλου', 'τζιαι'), ('τζιαι', 'να'), ('να', 'φανει'), ('φανει', 'πιο'), ('πιο', 'εξυπνος')], [('οξα', 'οτι'), ('οτι', 'βασικα'), ('βασικα', 'τουτη'), ('τουτη', 'η'), ('η', 'κυβερνηση'), ('κυβερνηση', 'χειριζεται'), ('χειριζεται', 'το'), ('το', 'θεμαν'), 
('θεμαν', 'των'), ('των', 'πορων'), ('πορων', 'με'), ('με', 'στοχον'), ('στοχον', 'το'), ('το', 'κοινον'), ('κοινον', 'σσυφφερον')], [('ειναι', 'αληθκεια'), ('αληθκεια', 'οτι'), ('οτι', 'πριν'), ('πριν', 'να'), ('να', 'καλλιτζιεψουν'), ('καλλιτζιεψουν', 'σουζουν'), ('σουζουν', 'ουλλοι'), ('ουλλοι', 'τα'), ('τα', 'ποθκια'), ('ποθκια', 'τους')], [('πολλα', 'φοουμαι'), ('φοουμαι', 'πως'), ('πως', 'το'), ('το', 'ακελ'), ('ακελ', 'κατερριψε'), ('κατερριψε', 'και'), ('και', 'αυτο')], [('απορια', 'της'), ('της', 'χωρκατισσας'), ('χωρκατισσας', 'εκαμναν'), ('εκαμναν', 'κουπες'), ('κουπες', 'μες'), ('μες', 'το'), ('το', 'προεδρικο'), ('προεδρικο', 'εν'), ('εν', 'ηξεραμεν'), ('ηξεραμεν', 'λαλουσιν'), ('λαλουσιν', 'τωρα')], [('ενι', 'ξερω'), ('ξερω', 'εν'), ('εν', 'ειμαι'), ('ειμαι', 'σιουρη'), ('σιουρη', 'ακουσα'), ('ακουσα', 'πως'), ('πως', 'ηδη'), ('ηδη', 'εφκηκαν')], [('βεβαια', 'οπως'), ('οπως', 'εν'), ('εν', 'τα'), ('τα', 'πραματα'), ('πραματα', 'στην'), ('στην', 'ελλαδα'), ('ελλαδα', 'εσιει'), ('εσιει', 'πολλους'), ('πολλους', 'που'), ('που', 'θα'), ('θα', 'πασιν'), ('πασιν', 'με'), ('με', 'τουτα'), ('τουτα', 'τα'), ('τα', 'λεφτα'), ('λεφτα', 'να'), ('να', 'δουλεψουν')], [('κρατω', 'μιτσην'), ('μιτσην', 'καλαθιν'), ('καλαθιν', 'αν'), ('αν', 'οντως'), ('οντως', 'το'), ('το', 'συστημα'), ('συστημα', 'φκαινει'), ('φκαινει', 'μονο'), ('μονο', 'με'), ('με', 'τες'), ('τες', 'αποκοπες')], [('η', 'αληθκεια'), ('αληθκεια', 'ειναι'), ('ειναι', 'οτι'), ('οτι', 'πολλα'), ('πολλα', 'που'), ('που', 'τα'), ('τα', 'θεματα'), ('θεματα', 'που'), ('που', 'εθιξα'), ('εθιξα', 'σιγουρα'), ('σιγουρα', 'ακομα'), ('ακομα', 'ψαχνω'), ('ψαχνω', 'τα')], [('πολλα', 'πραματα'), ('πραματα', 'που'), ('που', 'σημερα'), ('σημερα', 'θεωρουμεν'), ('θεωρουμεν', 'δεδομενα'), ('δεδομενα', 'πιθανον'), ('πιθανον', 'να'), ('να', 'μην'), ('μην', 'ειναι'), ('ειναι', 'πολλα'), ('πολλα', 'συντομα')], [('δες', 'μονον'), ('μονον', 'τες'), ('τες', 'φοβερες'), ('φοβερες', 'αλλαγες'), ('αλλαγες', 'στη'), ('στη', 'ζωη'), ('ζωη', 'του'), ('του', 'αθρωπου'), ('αθρωπου', 'σε'), ('σε', 'μιαν'), ('μιαν', 'γενιαν'), ('γενιαν', 'τι'), ('τι', 'εζησαν'), ('εζησαν', 'οι'), ('οι', 'παππουδες'), ('παππουδες', 'μας'), ('μας', 'στην'), ('στην', 'τεχνολογιαν')], [('φοουμαι', 'με'), ('με', 'νακκον'), ('νακκον', 'μερικες'), ('μερικες', 'φορες'), ('φορες', 'η'), ('η', 'αληθεια'), ('αληθεια', 'γιατι'), ('γιατι', 'νομιζω'), ('νομιζω', 'αν'), ('αν', 'για'), ('για', 'καποιο'), ('καποιο', 'λογο'), ('λογο', 'εν'), ('εν', 'τα'), ('τα', 'καταφερνα'), ('καταφερνα', 'τζαι'), ('τζαι', 'εκαμναν'), ('εκαμναν', 'μου'), ('μου', 'τουτα'), ('τουτα', 'που'), ('που', 'περιγραφεις'), ('περιγραφεις', 'ηταν'), ('ηταν', 'να'), ('να', 'σπαζω'), ('σπαζω', 'μπουκαλες'), ('μπουκαλες', 'πας'), ('πας', 'στες'), ('στες', 'τζεφαλες'), ('τζεφαλες', 'τους'), ('τους', 'να'), ('να', 'σπαζω'), ('σπαζω', 'τα'), ('τα', 'πραματα'), ('πραματα', 'τους'), ('τους', 'που'), ('που', 'εν'), ('εν', 'καπου'), ('καπου', 'που'), ('που', 'εν'), ('εν', 'πρεπει'), ('πρεπει', 'να'), ('να', 'τα'), ('τα', 'πετασσω'), ('πετασσω', 'να'), ('να', 'παιρνω'), ('παιρνω', 'ξιμαρισμενα'), ('ξιμαρισμενα', 'πραματα'), ('πραματα', 'μες'), ('μες', 'στα'), ('στα', 'δωματια'), ('δωματια', 'τους')], [('εννα', 'καμω'), ('καμω', 'την'), ('την', 'πελλη'), ('πελλη', 'τζαι'), ('τζαι', 'σικκιμε')], [('μεν', 'το'), ('το', 'γενικευκεις'), ('γενικευκεις', 'τα'), ('τα', 'κινεζουθκια'), ('κινεζουθκια', 'μου'), ('μου', 'εμενα'), ('εμενα', 'εν'), ('εν', 'πολλα'), ('πολλα', 'καλα')], [('ιντα', 'μπου'), ('μπου', 'θωρεις'), ('θωρεις', 'ρε'), ('ρε', 
'καλαμαρα'), ('καλαμαρα', 'το'), ('το', 'τραγικο'), ('τραγικο', 'ειναι'), ('ειναι', 'πως'), ('πως', 'το'), ('το', 'ξερουμεν'), ('ξερουμεν', 'τζιολας')], [('μπορουμεν', 'να'), ('να', 'το'), ('το', 'συζητουμεν'), ('συζητουμεν', 'για'), ('για', 'ωρες'), ('ωρες', 'αλλα'), ('αλλα', 'σιγουρος'), ('σιγουρος', 'δεν'), ('δεν', 'μπορει'), ('μπορει', 'να'), ('να', 'ειναι'), ('ειναι', 'κανενας')], [('εγω', 'δεν'), ('δεν', 'ηβρα'), ('ηβρα', 'κανεναν'), ('κανεναν', 'να'), ('να', 'μου'), ('μου', 'την'), ('την', 'εξηγησει'), ('εξηγησει', 'αμαν'), ('αμαν', 'φτασουμεν'), ('φτασουμεν', 'σε'), ('σε', 'τουτον'), ('τουτον', 'το'), ('το', 'θεμαν'), ('θεμαν', 'μασουν'), ('μασουν', 'τα'), ('τα', 'λογια'), ('λογια', 'τους'), ('τους', 'ππεφτει'), ('ππεφτει', 'τζιαι'), ('τζιαι', 'λλιον'), ('λλιον', 'αυτοκριτικη'), ('αυτοκριτικη', 'αλλα'), ('αλλα', 'δεν'), ('δεν', 'καταλαβω'), ('καταλαβω', 'τι'), ('τι', 'μου'), ('μου', 'λαλουσιν')], [('εν', 'δυσκολον'), ('δυσκολον', 'τζιαι'), ('τζιαι', 'επικινδυνον'), ('επικινδυνον', 'παντως'), ('παντως', 'να'), ('να', 'φκαλλουμεν'), ('φκαλλουμεν', 'τα'), ('τα', 'γεγονοτα'), ('γεγονοτα', 'οταν'), ('οταν', 'προχωρουμεν'), ('προχωρουμεν', 'με'), ('με', 'συμπερασματα'), ('συμπερασματα', 'οσα'), ('οσα', 'ντοκουμεντα'), ('ντοκουμεντα', 'τζιαι'), ('τζιαι', 'να'), ('να', 'συναξουμεν')], [('εφηρεν', 'μας'), ('μας', 'παλε'), ('παλε', 'ο'), ('ο', 'γιωρκος'), ('γιωρκος', 'κοφτει'), ('κοφτει', 'σε'), ('σε', 'οξα'), ('οξα', 'οι')], [('αν', 'το'), ('το', 'εφκαλεν'), ('εφκαλεν', 'στειλε'), ('στειλε', 'μου'), ('μου', 'αστειεφκω')], [('ο', 'παππους'), ('παππους', 'μου'), ('μου', 'τζ'), ('τζ', 'ο'), ('ο', 'τζυρης'), ('τζυρης', 'μου'), ('μου', 'ηταν'), ('ηταν', 'αριστεροι')], [('κινα', 'μου'), ('μου', 'την'), ('την', 'περιεργια'), ('περιεργια', 'να'), ('να', 'θκιαβασω'), ('θκιαβασω', 'το'), ('το', 'βιβλιο'), ('βιβλιο', 'γιατι'), ('γιατι', 'εχει'), ('εχει', 'πολλα'), ('πολλα', 'πραματα'), ('πραματα', 'που'), ('που', 'δεν'), ('δεν', 'εμπορεσα'), ('εμπορεσα', 'ποττε'), ('ποττε', 'να'), ('να', 'καταλαβω')], [('εννεν', 'μονον'), ('μονον', 'ετσι'), ('ετσι', 'που'), ('που', 'εννα'), ('εννα', 'θεσεις'), ('θεσεις', 'το'), ('το', 'ερωτημαν')], [('εν', 'αθυμουμαι'), ('αθυμουμαι', 'ποιαν'), ('ποιαν', 'αγγυση'), ('αγγυση', 'που'), ('που', 'εδωκαν'), ('εδωκαν', 'ως'), ('ως', 'τα'), ('τα', 'σημερα'), ('σημερα', 'εσταθηκε'), ('εσταθηκε', 'κανενας'), ('κανενας', 'στο'), ('στο', 'λοο'), ('λοο', 'του')], [('ποσο', 'σσιειροτερα'), ('σσιειροτερα', 'εννα'), ('εννα', 'ναι'), ('ναι', 'δηλαδη')], [('τουτο', 'το'), ('το', 'πραμα'), ('πραμα', 'θεωρουμεν'), ('θεωρουμεν', 'οτι'), ('οτι', 'δουλεφκει'), ('δουλεφκει', 'τι'), ('τι', 'θα'), ('θα', 'επρεπε'), ('επρεπε', 'να'), ('να', 'γινει'), ('γινει', 'δηλαδη'), ('δηλαδη', 'για'), ('για', 'να'), ('να', 'μεν'), ('μεν', 'δουλεφκει')], [('εκαμες', 'να'), ('να', 'μετρουμε'), ('μετρουμε', 'τες'), ('τες', 'γραμμες'), ('γραμμες', 'για'), ('για', 'του'), ('του', 'κειμενου'), ('κειμενου', 'για'), ('για', 'να'), ('να', 'δουμε'), ('δουμε', 'τι'), ('τι', 'συμφωνας'), ('συμφωνας', 'πιο'), ('πιο', 'λιο'), ('λιο', 'τζαι'), ('τζαι', 'τι'), ('τι', 'συμφωνας'), ('συμφωνας', 'παραπανω')], [('οσο', 'πιο'), ('πιο', 'γληορα'), ('γληορα', 'φυει'), ('φυει', 'τουτη'), ('τουτη', 'η'), ('η', 'κυβερνηση'), ('κυβερνηση', 'τοσο'), ('τοσο', 'το'), ('το', 'καλυτερο')], [('ποιος', 'μας'), ('μας', 'ειπεν'), ('ειπεν', 'εμας'), ('εμας', 'πως'), ('πως', 'εννε'), ('εννε', 'καλη'), ('καλη', 'η'), ('η', 'λιτοτητα'), ('λιτοτητα', 'η'), ('η', 'επιαεν'), ('επιαεν', 'η'), ('η', 'εννοια'), ('εννοια', 'τους'), ('τους', 
'κεφαλαιουχους'), ('κεφαλαιουχους', 'αν'), ('αν', 'εμεις'), ('εμεις', 'πεινουμεν'), ('πεινουμεν', 'αφου'), ('αφου', 'αλλοσπως'), ('αλλοσπως', 'κερδη'), ('κερδη', 'δεν'), ('δεν', 'φκαινουν')], [('υγειαν', 'τζιαι'), ('τζιαι', 'ευτυχιαν'), ('ευτυχιαν', 'σε'), ('σε', 'ουλλην'), ('ουλλην', 'την'), ('την', 'οικογενειαν'), ('οικογενειαν', 'τζιαι'), ('τζιαι', 'καλον'), ('καλον', 'κουραγιον'), ('κουραγιον', 'κατα'), ('κατα', 'τη'), ('τη', 'βρεφικην'), ('βρεφικην', 'τζιαι'), ('τζιαι', 'τη'), ('τη', 'νηπιακην'), ('νηπιακην', 'ηλικιαν')], [('παντα', 'να'), ('να', 'θυμασαι'), ('θυμασαι', 'μιτσσια'), ('μιτσσια', 'κοπελλουθκια'), ('κοπελλουθκια', 'μιτσσιες'), ('μιτσσιες', 'εγνοιες')], [('αφου', 'οι'), ('οι', 'μιτσιες'), ('μιτσιες', 'εγνοιες'), ('εγνοιες', 'διαρκουν'), ('διαρκουν', 'λιγο'), ('λιγο', 'θα'), ('θα', 'καμουμε'), ('καμουμε', 'οτι'), ('οτι', 'μπορουμε'), ('μπορουμε', 'να'), ('να', 'τες'), ('τες', 'απολαυσουμε')], [('οτι', 'καμνει'), ('καμνει', 'η'), ('η', 'κυβερνηση'), ('κυβερνηση', 'κουστιζει'), ('κουστιζει', 'μονον'), ('μονον', 'στους'), ('στους', 'δανειστες'), ('δανειστες', 'που'), ('που', 'κλωτσουν')], [('τζιαμαι', 'θα'), ('θα', 'δουμεν'), ('δουμεν', 'τες'), ('τες', 'αντοχες'), ('αντοχες', 'της'), ('της', 'πλατειας'), ('πλατειας', 'τες'), ('τες', 'αντοχες'), ('αντοχες', 'της'), ('της', 'κοινωνιας')], [('ελπιζω', 'να'), ('να', 'μεν'), ('μεν', 'παμεν'), ('παμεν', 'σε'), ('σε', 'απλα'), ('απλα', 'πραματα'), ('πραματα', 'μιαν'), ('μιαν', 'διχτατοριαν'), ('διχτατοριαν', 'για'), ('για', 'ουλλους')], [('ασ', 'ησκεφτουμεν'), ('ησκεφτουμεν', 'τζιαι'), ('τζιαι', 'για'), ('για', 'λλοου'), ('λλοου', 'μας'), ('μας', 'με'), ('με', 'τες'), ('τες', 'ομοσπονδιες')], [('τωρα', 'που'), ('που', 'εφααμεν'), ('εφααμεν', 'το'), ('το', 'δολομαν'), ('δολομαν', 'ηρτεν'), ('ηρτεν', 'η'), ('η', 'καψη'), ('καψη', 'του'), ('του', 'αντζιστριου')], [('το', 'παραθυρον'), ('παραθυρον', 'καποιας'), ('καποιας', 'δημοκρατιας'), ('δημοκρατιας', 'ισως'), ('ισως', 'ετελειωσεν'), ('ετελειωσεν', 'μακαρι'), ('μακαρι', 'να'), ('να', 'φτασουν'), ('φτασουν', 'τζιαι'), ('τζιαι', 'λλιον'), ('λλιον', 'τα'), ('τα', 'παιθκια'), ('παιθκια', 'μας'), ('μας', 'την'), ('την', 'δυνατοτηταν')], [('αυτον', 'που'), ('που', 'δεν'), ('δεν', 'εκαμεν'), ('εκαμεν', 'η'), ('η', 'μαλλον'), ('μαλλον', 'που'), ('που', 'το'), ('το', 'κεφαλαιον'), ('κεφαλαιον', 'του'), ('του', 'δεν'), ('δεν', 'τον'), ('τον', 'αφηκεν'), ('αφηκεν', 'να'), ('να', 'καμει'), ('καμει', 'εκαμεν'), ('εκαμεν', 'το'), ('το', 'ο'), ('ο', 'τσιπρας'), ('τσιπρας', 'που'), ('που', 'δεν'), ('δεν', 'ειχεν'), ('ειχεν', 'ακομα'), ('ακομα', 'κεφαλαιον'), ('κεφαλαιον', 'δικον'), ('δικον', 'του'), ('του', 'ετα')], [('δεν', 'ξερω'), ('ξερω', 'για'), ('για', 'ποιαν'), ('ποιαν', 'συζητησην'), ('συζητησην', 'λαλεις')], [('οποτε', 'με'), ('με', 'ερωτουσαν'), ('ερωτουσαν', 'επρεπεν'), ('επρεπεν', 'να'), ('να', 'καμω'), ('καμω', 'διατριβην'), ('διατριβην', 'για'), ('για', 'να'), ('να', 'τους'), ('τους', 'πω'), ('πω', 'τι'), ('τι', 'ηταν'), ('ηταν', 'τουτον'), ('τουτον', 'τζιαι'), ('τζιαι', 'παλε'), ('παλε', 'εν'), ('εν', 'εκαταλαβαιναν')], [('εφκαλλα', 'καλα'), ('καλα', 'λεφτα'), ('λεφτα', 'βεβαια'), ('βεβαια', 'αλλα'), ('αλλα', 'ειχα'), ('ειχα', 'τουτον'), ('τουτον', 'το'), ('το', 'ανικανοποιητον'), ('ανικανοποιητον', 'του'), ('του', 'οτι'), ('οτι', 'εν'), ('εν', 'εφκαλλα'), ('εφκαλλα', 'κατι'), ('κατι', 'χειροπιαστον')], [('τζιαι', 'ξερουν'), ('ξερουν', 'τζιαι'), ('τζιαι', 'οι'), ('οι', 'μιτσιοι'), ('μιτσιοι', 'ναμπου'), ('ναμπου', 'να'), ('να', 'πουν'), ('πουν', 'στο'), ('στο', 'σχολειον'), 
('σχολειον', 'οταν'), ('οταν', 'τους'), ('τους', 'ρωτουν'), ('ρωτουν', 'τι'), ('τι', 'καμνει'), ('καμνει', 'ο'), ('ο', 'παπακης')], [('αρεσκει', 'μου'), ('μου', 'που'), ('που', 'λαλεις'), ('λαλεις', 'αρκετα'), ('αρκετα', 'πολυπλοκα'), ('πολυπλοκα', 'πραματα'), ('πραματα', 'αλλα'), ('αλλα', 'με'), ('με', 'απλη'), ('απλη', 'γλωσσαν')], [('εγω', 'εν'), ('εν', 'θωρω'), ('θωρω', 'ως'), ('ως', 'μονην'), ('μονην', 'ελπιδαν'), ('ελπιδαν', 'την'), ('την', 'αριστεραν')], [('στην', 'κυπρον'), ('κυπρον', 'εν'), ('εν', 'θκιο'), ('θκιο', 'τα'), ('τα', 'κρισιμα'), ('κρισιμα', 'ερωτηματα')], [('κατ', 'εμεναν'), ('εμεναν', 'η'), ('η', 'απαντηση'), ('απαντηση', 'στο'), ('στο', 'πρωτον'), ('πρωτον', 'ερωτημαν'), ('ερωτημαν', 'εν'), ('εν', 'προφανως'), ('προφανως', 'ναι')], [('εσιεις', 'καλην'), ('καλην', 'ψυσιην'), ('ψυσιην', 'γαμω'), ('γαμω', 'το'), ('το', 'στανιο'), ('στανιο', 'σου')], [('εναν', 'κειμενον'), ('κειμενον', 'σου'), ('σου', 'να'), ('να', 'θκαβαζει'), ('θκαβαζει', 'ο'), ('ο', 'αλλος'), ('αλλος', 'νωθει'), ('νωθει', 'το'), ('το', 'να'), ('να', 'τρεσιει'), ('τρεσιει', 'που'), ('που', 'τα'), ('τα', 'περιθωρια')], [('οι', 'διακρισεις'), ('διακρισεις', 'εφκηκαν'), ('εφκηκαν', 'μου'), ('μου', 'σε'), ('σε', 'καλον'), ('καλον', 'τελικα')], [('αμαν', 'τα'), ('τα', 'σκεφτουμαι'), ('σκεφτουμαι', 'τουτα'), ('τουτα', 'ουλλα'), ('ουλλα', 'πιαννει'), ('πιαννει', 'με'), ('με', 'η'), ('η', 'λυπη')], [('κοφκεται', 'μου'), ('μου', 'η'), ('η', 'ορεξη'), ('ορεξη', 'ακομα'), ('ακομα', 'τζιαι'), ('τζιαι', 'για'), ('για', 'τα'), ('τα', 'χαλλουμια'), ('χαλλουμια', 'τζιαι'), ('τζιαι', 'για'), ('για', 'τες'), ('τες', 'παττισιες')], [('εγω', 'μεταναστρια'), ('μεταναστρια', 'ειμαι'), ('ειμαι', 'οποτε'), ('οποτε', 'πολλα'), ('πολλα', 'πραματα'), ('πραματα', 'εν'), ('εν', 'ηξερω')], [('ειδικα', 'το'), ('το', 'χαλλουμιν'), ('χαλλουμιν', 'εν'), ('εν', 'αδυνατον'), ('αδυνατον', 'να'), ('να', 'το'), ('το', 'ευρεις'), ('ευρεις', 'οπως'), ('οπως', 'πρεπει')], [('ηβρα', 'τζι'), ('τζι', 'εγω'), ('εγω', 'εναν'), ('εναν', 'πεθαμενον'), ('πεθαμενον', 'μια'), ('μια', 'φορα'), ('φορα', 'στη'), ('στη', 'γαλλια')], [('εν', 'τον'), ('τον', 'εξερα'), ('εξερα', 'πρωτη'), ('πρωτη', 'φορα'), ('φορα', 'τον'), ('τον', 'εθωρουν'), ('εθωρουν', 'ηταν'), ('ηταν', 'μολις'), ('μολις', 'εβρεθηκα'), ('εβρεθηκα', 'στην'), ('στην', 'πολη')], [('εν', 'εχω'), ('εχω', 'κατι'), ('κατι', 'με'), ('με', 'τα'), ('τα', 'κατοικιδια'), ('κατοικιδια', 'ισα'), ('ισα', 'ισα'), ('ισα', 'αλλα'), ('αλλα', 'οι'), ('οι', 'να'), ('να', 'μεν'), ('μεν', 'γοραζεις'), ('γοραζεις', 'ενα'), ('ενα', 'πραμα'), ('πραμα', 'για'), ('για', 'τον'), ('τον', 'αλλον'), ('αλλον', 'αθρωπο'), ('αθρωπο', 'που'), ('που', 'πεινα'), ('πεινα', 'δεν'), ('δεν', 'νομιζω'), ('νομιζω', 'να'), ('να', 'εκαμα'), ('εκαμα', 'κατι'), ('κατι', 'εκτος'), ('εκτος', 'που'), ('που', 'να'), ('να', 'δουλεψω'), ('δουλεψω', 'πας'), ('πας', 'τες'), ('τες', 'ενοχες'), ('ενοχες', 'μου')], [('πολλα', 'καλη'), ('καλη', 'αναλυση'), ('αναλυση', 'ξερω'), ('ξερω', 'πολλα'), ('πολλα', 'καλα'), ('καλα', 'το'), ('το', 'τραουδιν'), ('τραουδιν', 'αλλα'), ('αλλα', 'πολλα'), ('πολλα', 'που'), ('που', 'τουτα'), ('τουτα', 'που'), ('που', 'εγραψες'), ('εγραψες', 'εν'), ('εν', 'τα'), ('τα', 'εσκεφτηκα')], [('συγγνωμην', 'που'), ('που', 'σας'), ('σας', 'επελλανα')], [('το', 'τραουδιν'), ('τραουδιν', 'εν'), ('εν', 'ωραιον'), ('ωραιον', 'τζιαι'), ('τζιαι', 'ο'), ('ο', 'αλκινοος'), ('αλκινοος', 'εν'), ('εν', 'εσιει'), ('εσιει', 'λαθος')], [('επηεν', 'εν'), ('εν', 'χαμαι'), ('χαμαι', 'ηδη'), ('ηδη', 'πε'), ('πε', 'μας'), ('μας', 'λια'), 
('λια', 'για'), ('για', 'την'), ('την', 'τενεδο')], [('ηυρα', 'το'), ('το', 'μιαν'), ('μιαν', 'φορα'), ('φορα', 'πας'), ('πας', 'το'), ('το', 'μπαλκονι'), ('μπαλκονι', 'αποκλειεται'), ('αποκλειεται', 'να'), ('να', 'σου'), ('σου', 'αρεσκουν'), ('αρεσκουν', 'οι'), ('οι', 'καττοι')], [('τωρα', 'που'), ('που', 'τα'), ('τα', 'εδκιαβασα'), ('εδκιαβασα', 'τουτα'), ('τουτα', 'που'), ('που', 'εγραψες'), ('εγραψες', 'οπως'), ('οπως', 'τα'), ('τα', 'εγραψες'), ('εγραψες', 'αθθυμηθηκαν'), ('αθθυμηθηκαν', 'την'), ('την', 'πρωτη'), ('πρωτη', 'βολαν'), ('βολαν', 'που'), ('που', 'επηα'), ('επηα', 'τζιαμε')], [('τωρα', 'θωρω'), ('θωρω', 'οτι'), ('οτι', 'αμαν'), ('αμαν', 'τα'), ('τα', 'θεωρησεις'), ('θεωρησεις', 'δεδομενα'), ('δεδομενα', 'τα'), ('τα', 'θωρεις'), ('θωρεις', 'με'), ('με', 'την'), ('την', 'ιδιαν'), ('ιδιαν', 'μμαθκιαν')], [('εμας', 'δακατω'), ('δακατω', 'αμα'), ('αμα', 'σε'), ('σε', 'καλιει'), ('καλιει', 'ο'), ('ο', 'αλλος'), ('αλλος', 'πρεπει'), ('πρεπει', 'να'), ('να', 'παρεις'), ('παρεις', 'κατι'), ('κατι', 'αλλα'), ('αλλα', 'ως'), ('ως', 'τζιειαμαι')], [('τζιαι', 'για'), ('για', 'να'), ('να', 'παεις'), ('παεις', 'εσσω'), ('εσσω', 'του'), ('του', 'αλλου'), ('αλλου', 'πρεπει'), ('πρεπει', 'να'), ('να', 'πιασεις'), ('πιασεις', 'ραντεβου')], [('τζιερασμαν', 'εξω'), ('εξω', 'δεν'), ('δεν', 'τζιερνουν'), ('τζιερνουν', 'ιδιως'), ('ιδιως', 'αμαν'), ('αμαν', 'εν'), ('εν', 'περαν'), ('περαν', 'του'), ('του', 'καφε')], [('ατε', 'ατε'), ('ατε', 'αν'), ('αν', 'εχεις'), ('εχεις', 'πολλα'), ('πολλα', 'στενες'), ('στενες', 'σχεσεις'), ('σχεσεις', 'μοιραζεις'), ('μοιραζεις', 'τον'), ('τον', 'λοαρκασμον'), ('λοαρκασμον', 'για'), ('για', 'να'), ('να', 'μεν'), ('μεν', 'υπολογιζεις'), ('υπολογιζεις', 'το'), ('το', 'πιατον'), ('πιατον', 'του'), ('του', 'καθενου'), ('καθενου', 'ξηχωριστα')], [('μια', 'γνωστη'), ('γνωστη', 'μου'), ('μου', 'ηρτεν'), ('ηρτεν', 'να'), ('να', 'φαμεν'), ('φαμεν', 'τζιαι'), ('τζιαι', 'εφερεν'), ('εφερεν', 'κρασιν'), ('κρασιν', 'επειδη'), ('επειδη', 'εν'), ('εν', 'το'), ('το', 'ηπκιαμεν'), ('ηπκιαμεν', 'επηρεν'), ('επηρεν', 'το'), ('το', 'παλε')], [('η', 'μανα'), ('μανα', 'μου'), ('μου', 'παντως'), ('παντως', 'αμμαν'), ('αμμαν', 'επαιζεν'), ('επαιζεν', 'το'), ('το', 'κουδουνιν'), ('κουδουνιν', 'την'), ('την', 'ωραν'), ('ωραν', 'που'), ('που', 'ετρωαμεν'), ('ετρωαμεν', 'ελαλεν'), ('ελαλεν', 'μα'), ('μα', 'ποιος'), ('ποιος', 'εννα'), ('εννα', 'νει'), ('νει', 'ετσι'), ('ετσι', 'ωραν')], [('φερε', 'εναν'), ('εναν', 'πκιατον')], [('πολλα', 'ενδιαφερον'), ('ενδιαφερον', 'τουτον'), ('τουτον', 'που'), ('που', 'λαλεις'), ('λαλεις', 'εθκιεβασα'), ('εθκιεβασα', 'τζιαι'), ('τζιαι', 'γω'), ('γω', 'μιαν'), ('μιαν', 'εθνολογικην'), ('εθνολογικην', 'μελετην'), ('μελετην', 'για'), ('για', 'τον'), ('τον', 'δωρισμον')], [('αννα', 'μου'), ('μου', 'ευκαριστω'), ('ευκαριστω', 'σε'), ('σε', 'για'), ('για', 'την'), ('την', 'πιο'), ('πιο', 'θετικη'), ('θετικη', 'εικονα'), ('εικονα', 'που'), ('που', 'μου'), ('μου', 'εδωκες')], [('φωτογραφιζεται', 'με'), ('με', 'την'), ('την', 'χαρτωμενην'), ('χαρτωμενην', 'του'), ('του', 'αμπα'), ('αμπα', 'τζιαι'), ('τζιαι', 'νομισει'), ('νομισει', 'κανενας'), ('κανενας', 'οτι'), ('οτι', 'εν'), ('εν', 'ππουστης')], [('εν', 'πολλα'), ('πολλα', 'σωστον'), ('σωστον', 'τζιεινον'), ('τζιεινον', 'που'), ('που', 'λαλεις'), ('λαλεις', 'για'), ('για', 'τζιεινους'), ('τζιεινους', 'που'), ('που', 'γραφουν'), ('γραφουν', 'τα'), ('τα', 'αρθρα')], [('η', 'ερωτηση'), ('ερωτηση', 'με'), ('με', 'την'), ('την', 'οποιαν'), ('οποιαν', 'με'), ('με', 'αφηκες'), ('αφηκες', 'εν'), ('εν', 'σαν'), 
('σαν', 'τζιεινα'), ('τζιεινα', 'τα'), ('τα', 'παιχνιθκια'), ('παιχνιθκια', 'που'), ('που', 'παιζουμεν'), ('παιζουμεν', 'αμαν'), ('αμαν', 'πιουμεν'), ('πιουμεν', 'λλιον'), ('λλιον', 'τζιαι'), ('τζιαι', 'διουμεν'), ('διουμεν', 'ο'), ('ο', 'ενας'), ('ενας', 'του'), ('του', 'αλλου'), ('αλλου', 'αδυνατες'), ('αδυνατες', 'επιλογες'), ('επιλογες', 'για'), ('για', 'το'), ('το', 'χαζιν')], [('δυστυχως', 'ετσι'), ('ετσι', 'ενι'), ('ενι', 'πολλα'), ('πολλα', 'πραματα'), ('πραματα', 'φαινοντε'), ('φαινοντε', 'τελια'), ('τελια', 'εξωτερικα'), ('εξωτερικα', 'αλλα'), ('αλλα', 'που'), ('που', 'μεσα'), ('μεσα', 'εν'), ('εν', 'μαυρα')], [('ειδες', 'ιντα'), ('ιντα', 'κακον'), ('κακον', 'μας'), ('μας', 'καμνουν'), ('καμνουν', 'τα'), ('τα', 'κοπελλουθκια'), ('κοπελλουθκια', 'απιστευτα'), ('απιστευτα', 'τα'), ('τα', 'κοπελλουθκια'), ('κοπελλουθκια', 'ποσα'), ('ποσα', 'εχουμεν'), ('εχουμεν', 'να'), ('να', 'μαθουμε'), ('μαθουμε', 'που'), ('που', 'λοου'), ('λοου', 'τους')], [('για', 'το'), ('το', 'πουρκουριν'), ('πουρκουριν', 'δοτζιημασε'), ('δοτζιημασε', 'την'), ('την', 'παραδοσιακην'), ('παραδοσιακην', 'συνταγην'), ('συνταγην', 'μεσα'), ('μεσα', 'στην'), ('στην', 'οποιαν'), ('οποιαν', 'θα'), ('θα', 'βαλεις'), ('βαλεις', 'οτι'), ('οτι', 'χορτικον'), ('χορτικον', 'κοψει'), ('κοψει', 'ο'), ('ο', 'νους'), ('νους', 'σου')], [('εν', 'πραμαν'), ('πραμαν', 'σιορ'), ('σιορ', 'που'), ('που', 'τρια'), ('τρια', 'πραματα'), ('πραματα', 'που'), ('που', 'γραφει'), ('γραφει', 'ο'), ('ο', 'αλλος'), ('αλλος', 'να'), ('να', 'τον'), ('τον', 'συμπαθας'), ('συμπαθας', 'η'), ('η', 'να'), ('να', 'τον'), ('τον', 'αντιπαθας'), ('αντιπαθας', 'με'), ('με', 'σεναν'), ('σεναν', 'εννοειται'), ('εννοειται', 'οτι'), ('οτι', 'τσιαττα'), ('τσιαττα', 'ο'), ('ο', 'νους'), ('νους', 'μου'), ('μου', 'παμπο')], [('πριν', 'λλιον'), ('λλιον', 'τζιαιρον'), ('τζιαιρον', 'εππεσεν'), ('εππεσεν', 'εναν'), ('εναν', 'κοτσιηνολαιμουιν'), ('κοτσιηνολαιμουιν', 'που'), ('που', 'την'), ('την', 'φωλιαν'), ('φωλιαν', 'του'), ('του', 'τζιαι'), ('τζιαι', 'λυπηθηκα'), ('λυπηθηκα', 'το'), ('το', 'πολλα')], [('για', 'μεναν'), ('μεναν', 'φιλε'), ('φιλε', 'το'), ('το', 'παν'), ('παν', 'εν'), ('εν', 'να'), ('να', 'μεν'), ('μεν', 'ξιαννουμεν'), ('ξιαννουμεν', 'ποθθεν'), ('ποθθεν', 'ηρταμεν'), ('ηρταμεν', 'να'), ('να', 'μεν'), ('μεν', 'μεγαλοπιαννουμαστεν'), ('μεγαλοπιαννουμαστεν', 'τζιαι'), ('τζιαι', 'να'), ('να', 'νομιζουμεν'), ('νομιζουμεν', 'οτι'), ('οτι', 'τωρα'), ('τωρα', 'εγινηκαμεν'), ('εγινηκαμεν', 'αστικη'), ('αστικη', 'ταξη'), ('ταξη', 'τζιαι'), ('τζιαι', 'να'), ('να', 'περιφρονουμεν'), ('περιφρονουμεν', 'ανθρωπους'), ('ανθρωπους', 'με'), ('με', 'βασην'), ('βασην', 'την'), ('την', 'κοινωνικην'), ('κοινωνικην', 'ταξην'), ('ταξην', 'τζιαι'), ('τζιαι', 'το'), ('το', 'επαγγελμαν')], [('ειντα', 'καθαρα'), ('καθαρα', 'που'), ('που', 'θωρεις'), ('θωρεις', 'ειντα'), ('ειντα', 'γυαλλια'), ('γυαλλια', 'ηβρες'), ('ηβρες', 'να'), ('να', 'πιασω'), ('πιασω', 'τζιαι'), ('τζιαι', 'γω')], [('ακομα', 'να'), ('να', 'φκουμεν'), ('φκουμεν', 'πας'), ('πας', 'τον'), ('τον', 'αππαρον'), ('αππαρον', 'και'), ('και', 'σουζουμεν'), ('σουζουμεν', 'τα'), ('τα', 'ποθκια'), ('ποθκια', 'μας')], [('παντως', 'το'), ('το', 'να'), ('να', 'θαυμαζεις'), ('θαυμαζεις', 'εναν'), ('εναν', 'τεθκοιον'), ('τεθκοιον', 'ανθρωπο'), ('ανθρωπο', 'που'), ('που', 'εισιεν'), ('εισιεν', 'οραμα'), ('οραμα', 'σιουρα'), ('σιουρα', 'καμνει'), ('καμνει', 'σε'), ('σε', 'να'), ('να', 'τον'), ('τον', 'νοιωθεις'), ('νοιωθεις', 'σαν'), ('σαν', 'δικο'), ('δικο', 'σου')], [('η', 'αριστερη'), ('αριστερη', 'ιδεολογια'), ('ιδεολογια', 
'καμνει'), ('καμνει', 'κακον'), ('κακον', 'μονον'), ('μονον', 'αμαν'), ('αμαν', 'παει'), ('παει', 'στραβα'), ('στραβα', 'η'), ('η', 'δεξια'), ('δεξια', 'εσιει'), ('εσιει', 'το'), ('το', 'μες'), ('μες', 'το'), ('το', 'προγραμμα'), ('προγραμμα', 'της'), ('της', 'να'), ('να', 'καμει'), ('καμει', 'κακον')], [('ειμαι', 'λλιον'), ('λλιον', 'απαισιοδοξος'), ('απαισιοδοξος', 'γενικα'), ('γενικα', 'γιατι'), ('γιατι', 'νομιζω'), ('νομιζω', 'τουντα'), ('τουντα', 'πραματα'), ('πραματα', 'πρεπει'), ('πρεπει', 'ναν'), ('ναν', 'λλιον'), ('λλιον', 'οργανικα'), ('οργανικα', 'που'), ('που', 'την'), ('την', 'βαση'), ('βαση', 'τζιαι'), ('τζιαι', 'πανω')], [('ελπιζουμεν', 'σε'), ('σε', 'εναν'), ('εναν', 'καλλυττερο'), ('καλλυττερο', 'μελλον')], [('ωσπου', 'παρατηρω'), ('παρατηρω', 'τους'), ('τους', 'κυπραιους'), ('κυπραιους', 'καταλαββαινω'), ('καταλαββαινω', 'οτι'), ('οτι', 'η'), ('η', 'φυσικη'), ('φυσικη', 'τους'), ('τους', 'κατασταση'), ('κατασταση', 'εν'), ('εν', 'να'), ('να', 'εν'), ('εν', 'παφτωσιοι'), ('παφτωσιοι', 'να'), ('να', 'δουλευκουν'), ('δουλευκουν', 'στα'), ('στα', 'χωραφκια')], [('μασιαλλα', 'σου'), ('σου', 'κορη'), ('κορη', 'ηταν'), ('ηταν', 'πολλα'), ('πολλα', 'καλη'), ('καλη', 'η'), ('η', 'ιδεα')], [('δηλαδη', 'να'), ('να', 'κοψει'), ('κοψει', 'πισω'), ('πισω', 'η'), ('η', 'κυπρος'), ('κυπρος', 'που'), ('που', 'την'), ('την', 'εξελιξη'), ('εξελιξη', 'για'), ('για', 'να'), ('να', 'εσιετε'), ('εσιετε', 'εσεις'), ('εσεις', 'να'), ('να', 'μας'), ('μας', 'κατακλεφκετε')], [('τουτο', 'που'), ('που', 'επροσεξα'), ('επροσεξα', 'εν'), ('εν', 'οτι'), ('οτι', 'οσοι'), ('οσοι', 'ειμαστεν'), ('ειμαστεν', 'λιο'), ('λιο', 'μπλεγμενοι'), ('μπλεγμενοι', 'στα'), ('στα', 'ακαδημαικα')], [('εθκιαβασα', 'το'), ('το', 'αφιερωμαν'), ('αφιερωμαν', 'σου'), ('σου', 'στον'), ('στον', 'παπα'), ('παπα', 'σας'), ('σας', 'κορη')], [('εν', 'χρειαζουμαι'), ('χρειαζουμαι', 'προσπαθεια'), ('προσπαθεια', 'για'), ('για', 'να'), ('να', 'ευρω'), ('ευρω', 'κατι'), ('κατι', 'να'), ('να', 'πω')], [('σιουρα', 'φκαλλει'), ('φκαλλει', 'πολλη'), ('πολλη', 'θετικη'), ('θετικη', 'ενεργεια'), ('ενεργεια', 'τουτο'), ('τουτο', 'το'), ('το', 'ποστ'), ('ποστ', 'ατε'), ('ατε', 'καλα'), ('καλα', 'εκαμες')], [('το', 'προβλημαν'), ('προβλημαν', 'ειναι'), ('ειναι', 'οτι'), ('οτι', 'πρεπει'), ('πρεπει', 'να'), ('να', 'αρκεψει'), ('αρκεψει', 'με'), ('με', 'μιαν'), ('μιαν', 'καλην'), ('καλην', 'χειρονομιαν')], [('οπως', 'τα'), ('τα', 'λαλεις'), ('λαλεις', 'εχουν'), ('εχουν', 'τα'), ('τα', 'πραματα'), ('πραματα', 'φιλε'), ('φιλε', 'μου')], [('εν', 'μακρυς'), ('μακρυς', 'ο'), ('ο', 'σιειμωνας'), ('σιειμωνας', 'εβαρεθηκα'), ('εβαρεθηκα', 'πιον'), ('πιον', 'θελω'), ('θελω', 'παραλιαν')], [('με', 'τη'), ('τη', 'φοασε'), ('φοασε', 'τρωει'), ('τρωει', 'που'), ('που', 'ουλλα'), ('ουλλα', 'εκαταλαβεν'), ('εκαταλαβεν', 'οτι'), ('οτι', 'διας'), ('διας', 'βαρος'), ('βαρος', 'στο'), ('στο', 'θεμαν'), ('θεμαν', 'του'), ('του', 'φαγιου')], [('εν', 'πολλη'), ('πολλη', 'η'), ('η', 'σημασια'), ('σημασια', 'που'), ('που', 'τους'), ('τους', 'διουμεν'), ('διουμεν', 'σε'), ('σε', 'συγκρισην')], [('ενομησες', 'οτι'), ('οτι', 'καμνω'), ('καμνω', 'ακελικη'), ('ακελικη', 'προπαγανδα'), ('προπαγανδα', 'ρε'), ('ρε', 'χοχοι'), ('χοχοι', 'μα'), ('μα', 'εν'), ('εν', 'να'), ('να', 'μας'), ('μας', 'πελλανετε')], [('εννα', 'με'), ('με', 'πελλανουν'), ('πελλανουν', 'αρκεψα'), ('αρκεψα', 'γυμναστηριον'), ('γυμναστηριον', 'πριν'), ('πριν', 'θκυο'), ('θκυο', 'εφτομαδες'), ('εφτομαδες', 'για'), ('για', 'να'), ('να', 'χασω'), ('χασω', 'κιλα'), ('κιλα', 'ταχα'), ('ταχα', 'καμνω'), ('καμνω', 
'τες'), ('τες', 'γυμναστικες'), ('γυμναστικες', 'που'), ('που', 'μου'), ('μου', 'βαλλει'), ('βαλλει', 'τζιαι'), ('τζιαι', 'ξαναζυγιζουμαι'), ('ξαναζυγιζουμαι', 'εχτες')], [('εβαλα', 'αναμιση'), ('αναμιση', 'κιλο'), ('κιλο', 'γαμω'), ('γαμω', 'το'), ('το', 'σσιηστο'), ('σσιηστο', 'μου'), ('μου', 'ηνταλως'), ('ηνταλως', 'θκιαολον'), ('θκιαολον', 'καμνω'), ('καμνω', 'ασκησεις'), ('ασκησεις', 'να'), ('να', 'χασω'), ('χασω', 'ταχα'), ('ταχα', 'τζιαι'), ('τζιαι', 'γινουμαι'), ('γινουμαι', 'πιο'), ('πιο', 'βαρετος')], [('τουτα', 'τα'), ('τα', 'ονειρα'), ('ονειρα', 'εκαμαμε'), ('εκαμαμε', 'τα'), ('τα', 'μεγαλο'), ('μεγαλο', 'θεμα'), ('θεμα', 'μες'), ('μες', 'τουτες'), ('τουτες', 'τες'), ('τες', 'μερες')], [('τζεινο', 'που'), ('που', 'φοαστε'), ('φοαστε', 'θα'), ('θα', 'το'), ('το', 'παθετε')], [('σαν', 'την'), ('την', 'κυπρον'), ('κυπρον', 'εν'), ('εν', 'εσιει'), ('εσιει', 'ναμπου'), ('ναμπου', 'καμνεις')], [('φιλε', 'μιλω'), ('μιλω', 'σου'), ('σου', 'εχασα'), ('εχασα', 'τα'), ('τα', 'φωτα'), ('φωτα', 'μου')], [('τουντο', 'ποστ'), ('ποστ', 'ηρτεν'), ('ηρτεν', 'πας'), ('πας', 'την'), ('την', 'ωρα')], [('α', 'μανα'), ('μανα', 'μου'), ('μου', 'ειντα'), ('ειντα', 'αππαρα'), ('αππαρα', 'εν'), ('εν', 'τουτη'), ('τουτη', 'αρεσκει'), ('αρεσκει', 'μου'), ('μου', 'πολλα')], [('γιατι', 'ρε'), ('ρε', 'παιδακι'), ('παιδακι', 'μου'), ('μου', 'ρε'), ('ρε', 'κορουδα'), ('κορουδα', 'μου'), ('μου', 'να'), ('να', 'μου'), ('μου', 'καλαμαριζεις'), ('καλαμαριζεις', 'αμαν'), ('αμαν', 'μιλας')], [('νευριαζουμεν', 'με'), ('με', 'τζαι'), ('τζαι', 'εμενα'), ('εμενα', 'πολλα')], [('εγω', 'εν'), ('εν', 'καλαμαριζω'), ('καλαμαριζω', 'αλλα'), ('αλλα', 'βρισκω'), ('βρισκω', 'πολλα'), ('πολλα', 'αστειες'), ('αστειες', 'καποιες'), ('καποιες', 'καλαμαριστικες'), ('καλαμαριστικες', 'εκφρασεις')]]
###Markdown
3.2.3 Chars `Text` objects
###Code
cg_chars_flat = [char for word in cg_words_flat for char in word]
smg_chars_flat = [char for word in smg_words_flat for char in word]
cg_chars_Text = Text(cg_chars_flat)
smg_chars_Text = Text(smg_chars_flat)
cg_chars_Text
###Output
_____no_output_____
###Markdown
3.2.4 Character n-grams 3.2.4.1 Bigrams `Text` objects
###Code
cg_char_bigrams = []
smg_char_bigrams = []
# Pad each word with '_' on both sides so that word-initial and word-final character pairs are captured as bigrams too.
for word in cg_words_flat:
cg_char_bigrams.append(list(ngrams(word, 2, pad_left=True, pad_right=True, left_pad_symbol='_', right_pad_symbol='_')))
for word in smg_words_flat:
smg_char_bigrams.append(list(ngrams(word, 2, pad_left=True, pad_right=True, left_pad_symbol='_', right_pad_symbol='_')))
cg_char_bigrams_flat_tuples = [bigram for bigram_list in cg_char_bigrams for bigram in bigram_list]
smg_char_bigrams_flat_tuples = [bigram for bigram_list in smg_char_bigrams for bigram in bigram_list]
cg_char_bigrams_flat = ['%s%s' % bigram_tuple for bigram_tuple in cg_char_bigrams_flat_tuples]
smg_char_bigrams_flat = ['%s%s' % bigram_tuple for bigram_tuple in smg_char_bigrams_flat_tuples]
cg_char_bigrams_Text = Text(cg_char_bigrams_flat)
smg_char_bigrams_Text = Text(smg_char_bigrams_flat)
cg_char_bigrams_Text
###Output
_____no_output_____
###Markdown
3.2.4.2 Trigrams `Text` objects
###Code
import warnings
cg_char_trigrams = []
smg_char_trigrams = []
with warnings.catch_warnings():
warnings.filterwarnings('ignore',category=DeprecationWarning)
for word in cg_words_flat:
if len(word) > 1: # Trigram features for 1-letter words are useless and encoded by other features I use.
cg_char_trigrams.append(list(ngrams(word, 3, pad_left=True, pad_right=True, left_pad_symbol='_', right_pad_symbol='_')))
for word in smg_words_flat:
if len(word) > 1:
smg_char_trigrams.append(list(ngrams(word, 3, pad_left=True, pad_right=True, left_pad_symbol='_', right_pad_symbol='_')))
# Removing redundant trigrams:
cg_char_trigrams = [trigram_list[1:-1] for trigram_list in cg_char_trigrams]
smg_char_trigrams = [trigram_list[1:-1] for trigram_list in smg_char_trigrams]
cg_char_trigrams_flat_tuples = [trigram for trigram_list in cg_char_trigrams for trigram in trigram_list]
smg_char_trigrams_flat_tuples = [trigram for trigram_list in smg_char_trigrams for trigram in trigram_list]
cg_char_trigrams_flat = ['%s%s%s' % trigram_tuple for trigram_tuple in cg_char_trigrams_flat_tuples]
smg_char_trigrams_flat = ['%s%s%s' % trigram_tuple for trigram_tuple in smg_char_trigrams_flat_tuples]
cg_char_trigrams_Text = Text(cg_char_trigrams_flat)
smg_char_trigrams_Text = Text(smg_char_trigrams_flat)
cg_char_trigrams_Text
###Output
_____no_output_____
###Markdown
4. Analysis 4.1. Corpus size
###Code
print('Number of CG sentences:', len(cg_sents_clean))
print('Number of SMG sentences:', len(smg_sents_clean))
print('Number of words in CG data:', len(cg_words_flat))
print('Number of words in SMG data:', len(smg_words_flat))
###Output
Number of words in CG data: 7026
Number of words in SMG data: 7100
###Markdown
4.2 Most frequent words and characters 4.2.1 Most frequent words 4.2.1.1 CG
###Code
cg_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.2.1.2 SMG
###Code
smg_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.2.2 Most frequent characters 4.2.2.1 CG
###Code
cg_chars_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.2.2.2 SMG
###Code
smg_chars_Text.plot(10)
###Output
_____no_output_____
###Markdown
NB: Recall that 'σ' in the most frequent character charts above includes instances of both 'σ' and 'ς'. 4.3 Most frequent word n-grams 4.3.1 Bigrams 4.3.1.1 CG
###Code
cg_word_bigrams_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.3.1.2 SMG
###Code
smg_word_bigrams_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.4 Most frequent character n-grams 4.4.1 Bigrams 4.4.1.1 CG
###Code
cg_char_bigrams_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.4.1.2 SMG
###Code
smg_char_bigrams_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.4.2 Trigrams 4.4.2.1 CG
###Code
cg_char_trigrams_Text.plot(10)
###Output
_____no_output_____
###Markdown
4.4.2.2 SMG
###Code
smg_char_trigrams_Text.plot(10)
###Output
_____no_output_____ |
Neural Networks and Deep Learning/Week 2 - Logistic Regression as a Neural Network.ipynb | ###Markdown
Binary Classification Logistic regression is a statistical model used to give the probability of a certain class or event, such as fail/pass, dead/alive, win/lose. So let's break this down. What is a statistical model? It is a mathematical model that uses probability and includes some assumptions derived from sample data that can be applied to a population. A statistical model also represents an idealized form of the data-generating process. OK, but then what is a mathematical model? A model is a representation of a system (the thing it describes) using mathematical concepts and language.   Logistic Regression Logistic regression is a learning algorithm for binary classification.  Logistic Regression Cost Function To train the parameters - **w** - **b** we need to define a **cost function**.  The important distinction here is: Loss function - the value that measures how well you're doing on a single training example. Cost function - the value that measures how well you're doing on the entire training set. Gradient Descent Gradient descent allows us to minimize the cost function; in other words, it improves our predictions over the entire training set. A formal definition: gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function.   We use the **derivative** (the slope of a function at a given point) in **gradient descent** to find the minimum of the function. In order to compute it, we must first look at the Computation Graph. Derivatives with a Computation Graph The key takeaway from this example is that when **computing derivatives**, the **most efficient way to do it** is through a **right-to-left** computation following the direction of the red arrows. Logistic Regression Gradient Descent Just to recap, this is the algorithm to implement. As we can see, the algorithm uses 2 **for** loops - over the training set - in the gradient descent computation. In fact, **for** loops are not very efficient when applied to a huge amount of data. Vectorization Vectorization is the **art** of **getting rid** of **for** loops. We use vectorization because it's **much faster** than a **for** loop; see the example below.
###Code
import numpy as np
import time
a = np.random.rand(int(1e6))
b = np.random.rand(int(1e6))
tic = time.time()
c = np.dot(a,b)
toc = time.time()
print("Number of values " + str(c))
print("{} {} ms \n".format("Vectorized version", str(1000*(toc-tic))))
c = 0
tic = time.time()
for i in range(int(1e6)):
c += a[i] * b[i]
toc = time.time()
print("Number of values " + str(c))
print("{} {} ms".format("For loop", str(1000*(toc-tic))))
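
# The same vectorized style applies to the logistic regression gradient itself.
# Minimal sketch on synthetic data; the shapes and names (X, y, w, b_param) are my
# own illustrative choices, not taken from the course assignment.
np.random.seed(0)
m, n_features = 1000, 3
X = np.random.rand(n_features, m)          # each column is one training example
y = (np.random.rand(m) > 0.5) * 1.0        # binary labels
w, b_param, learning_rate = np.zeros(n_features), 0.0, 0.01

z = np.dot(w, X) + b_param                 # forward pass for all m examples at once
a_hat = 1 / (1 + np.exp(-z))               # sigmoid activations
cost = -np.mean(y * np.log(a_hat) + (1 - y) * np.log(1 - a_hat))

dz = a_hat - y                             # backward pass, still no explicit loop
dw = np.dot(X, dz) / m
db = np.mean(dz)

w -= learning_rate * dw                    # one gradient-descent update
b_param -= learning_rate * db
print("Cost after forward pass:", cost)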
###Output
Number of values 249743.245193857
Vectorized version 1.4030933380126953 ms
Number of values 249743.24519385604
For loop 464.6720886230469 ms
|
notebook/DownloadEnwikiDump.ipynb | ###Markdown
Download Files===
###Code
import requests
import json
import os
git_root_dir = !git rev-parse --show-toplevel
git_root_dir = git_root_dir[0]
git_root_dir
res = requests.get("https://dumps.wikimedia.org/enwiki/20210101/dumpstatus.json")
dump_status = json.loads(res.text)
dump_status['jobs'].keys()
jobs = dump_status['jobs']
jobs.keys()
metahistorybz2dump = jobs['metahistorybz2dump']
metahistorybz2dump.keys()
len(metahistorybz2dump['files'])
(35*60)*671 / 60 / 60 / 24
metahistorybz2dump['updated']
metahistorybz2dump['status']
history1_files = []
for file in metahistorybz2dump['files']:
if "history1.xml" in file:
file_url = metahistorybz2dump['files'][file]['url']
history1_files.append(file_url)
len(history1_files)
history1_files
def generate_download_script(file_list, output_filepath, url_base = "https://dumps.wikimedia.org"):
target_dir = "/export/scratch2/wiki_data/enwiki-20210101-pages-meta-history-bz2"
with open(output_filepath, 'w') as outfile:
outfile.write("#!/bin/bash\n")
outfile.write("# This script autogenerated by DownloadEnwikiDump.ipynb\n\n")
outfile.write('echo "Starting download." && \\\n')
for file_url in file_list:
full_url = url_base + file_url
outfile.write(f'echo "Downloading \'{file_url}\'." && \\\n')
outfile.write(f'wget --no-check-certificate -nc -O {target_dir}/{os.path.basename(file_url)} "{full_url}" && \\\n')
outfile.write('echo "Successful." && exit 0\n')
outfile.write('echo "Error downloading." && exit 1\n\n')
output_filepath = os.path.join(git_root_dir, "scripts", "history1_20210101_download.sh")
generate_download_script(history1_files, output_filepath)
[history1_files[-4]]
os.path.basename(history1_files[-4])
history2_files = []
for file in metahistorybz2dump['files']:
if "history2.xml" in file:
file_url = metahistorybz2dump['files'][file]['url']
history2_files.append(file_url)
len(history2_files)
output_filepath = os.path.join(git_root_dir, "scripts", "history2_20200101_download.sh")
generate_download_script(history2_files, output_filepath)
history3_files = []
for file in metahistorybz2dump['files']:
if "history3.xml" in file:
file_url = metahistorybz2dump['files'][file]['url']
history3_files.append(file_url)
len(history3_files)
output_filepath = os.path.join(git_root_dir, "scripts", "history3_20200101_download.sh")
generate_download_script(history3_files, output_filepath)
history_num = 4
history_files = []
for file in metahistorybz2dump['files']:
if f"history{history_num}.xml" in file:
file_url = metahistorybz2dump['files'][file]['url']
history_files.append(file_url)
print(len(history_files))
output_filepath = os.path.join(git_root_dir, "scripts", f"history{history_num}_20200101_download.sh")
generate_download_script(history_files, output_filepath)
history_nums = [6, 7, 8, 9, 10]
history_files = []
for file in metahistorybz2dump['files']:
for history_num in history_nums:
if f"history{history_num}.xml" in file:
file_url = metahistorybz2dump['files'][file]['url']
history_files.append(file_url)
print(len(history_files))
output_filepath = os.path.join(git_root_dir, "scripts", f"history6to10_20200101_download.sh")
generate_download_script(history_files, output_filepath)
history_nums = list(range(11, 21))
print(history_nums)
history_files = []
for file in metahistorybz2dump['files']:
for history_num in history_nums:
if f"history{history_num}.xml" in file:
file_url = metahistorybz2dump['files'][file]['url']
history_files.append(file_url)
print(len(history_files))
output_filepath = os.path.join(git_root_dir, "scripts", f"history11to20_20200101_download.sh")
generate_download_script(history_files, output_filepath)
history_nums = list(range(21, 28))
print(history_nums)
history_files = []
for file in metahistorybz2dump['files']:
for history_num in history_nums:
if f"history{history_num}.xml" in file:
file_url = metahistorybz2dump['files'][file]['url']
history_files.append(file_url)
print(len(history_files))
output_filepath = os.path.join(git_root_dir, "scripts", f"history21to27_20200101_download.sh")
generate_download_script(history_files, output_filepath)
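
# The cells above repeat the same pattern for different ranges of history parts;
# a small helper like this (the function name is my own, not from the original
# notebook) could consolidate them:
def make_history_range_script(dump_job, history_nums, output_filepath):
    history_files = []
    for file in dump_job['files']:
        for history_num in history_nums:
            if f"history{history_num}.xml" in file:
                history_files.append(dump_job['files'][file]['url'])
    print(len(history_files))
    generate_download_script(history_files, output_filepath)

# Equivalent to the cell above:
# make_history_range_script(metahistorybz2dump, range(21, 28),
#     os.path.join(git_root_dir, "scripts", "history21to27_20200101_download.sh"))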
###Output
[21, 22, 23, 24, 25, 26, 27]
202
###Markdown
Stub Meta History Generate scripts for downloading the stub meta history.
###Code
xmlstubsdump = jobs['xmlstubsdump']
history_files = []
for file in xmlstubsdump['files']:
if "stub-meta-history" in file:
file_url = xmlstubsdump['files'][file]['url']
history_files.append(file_url)
print(len(history_files))
def generate_stub_download_script(file_list, output_filepath, url_base = "https://dumps.wikimedia.org"):
target_dir = "/export/scratch2/wiki_data/enwiki-20210101-stub-meta-history-gz"
with open(output_filepath, 'w') as outfile:
outfile.write("#!/bin/bash\n")
outfile.write("# This script autogenerated by DownloadEnwikiDump.ipynb\n\n")
outfile.write('echo "Starting download." && \\\n')
for file_url in file_list:
full_url = url_base + file_url
outfile.write(f'echo "Downloading \'{file_url}\'." && \\\n')
outfile.write(f'wget --no-check-certificate -nc -O {target_dir}/{os.path.basename(file_url)} "{full_url}" && \\\n')
outfile.write('echo "Successful." && exit 0\n')
outfile.write('echo "Error downloading." && exit 1\n\n')
output_filepath = os.path.join(git_root_dir, "scripts", f"stub_history_20210101_download.sh")
generate_stub_download_script(history_files, output_filepath)
###Output
_____no_output_____ |
Chapter 3/Section 3.5.ipynb | ###Markdown
[3.5 Interpreters for Languages with Combination](http://www-inst.eecs.berkeley.edu/~cs61a/sp12/book/interpretation.html#interpreters-for-languages-with-combination) The software running on any modern computer is written in a variety of programming languages. Among them are physical languages, such as the machine language for a particular computer, which deal with the representation of data and control in terms of individual bits of storage and primitive machine instructions. The machine-language programmer is concerned with using the given hardware to build efficient implementations of systems and utilities under limited computational resources. High-level languages are built on top of machine language; they hide the details of data represented as collections of bits and of programs represented as sequences of primitive instructions. These languages have means of combination and abstraction, such as procedure definition, that are appropriate for organizing large-scale software systems. Metalinguistic abstraction -- establishing new languages -- plays an important role in all branches of engineering design. It is particularly important for computer programming, because in programming we can not only conceive of new languages, we can also implement them by building interpreters. An interpreter for a programming language is a function that, when applied to an expression of the language, performs the actions required to evaluate that expression. We now begin a tour of the technology by which programming languages are built on top of other languages. We will first define an interpreter for Calculator, a restricted language with the same syntax as Python call expressions. We will then develop, from scratch, interpreters for the Scheme and Logo languages, which are both dialects of Lisp, the second-oldest language still in wide use today. The interpreters we create will, in a sense, allow us to write fully general programs in Logo. To do so, they will implement the environment model of evaluation that we have developed in this course. 3.5.1 Calculator Our first new language is called Calculator, an expression language for the arithmetic operations of addition, subtraction, multiplication, and division. Calculator has the syntax of Python call expressions, but its operators are more flexible in the number of arguments they accept. For example, the Calculator operators `mul` and `add` accept any number of arguments:
###Code
calc> add(1, 2, 3, 4)
10
calc> mul()
1
###Output
_____no_output_____
###Markdown
The `sub` operator has two behaviors: given a single operand, it negates it; given at least two, it subtracts the remaining arguments from the first. The `div` operator has the semantics of Python's `operator.truediv` and accepts exactly two arguments.
###Code
calc> sub(10, 1, 2, 3)
4
calc> sub(3)
-3
calc> div(15, 12)
1.25
###Output
_____no_output_____
###Markdown
As in Python, nesting call expressions provides the means of combination in the Calculator language. To condense the notation, we use the standard symbols for the operators instead of their names:
###Code
calc> sub(100, mul(7, add(8, div(-12, -3))))
16.0
calc> -(100, *(7, +(8, /(-12, -3))))
16.0
###Output
_____no_output_____
###Markdown
We will implement the Calculator interpreter in Python. That is, we will write a Python program that takes a string as input and returns the result of evaluating it if the input is a well-formed Calculator expression, and raises an appropriate exception otherwise. The core of the Calculator interpreter is a recursive function called `calc_eval`, which evaluates a tree-structured expression object. **Expression trees.** Until now, the expression trees we have referred to when describing the evaluation process have been conceptual entities; we have never explicitly represented expression trees as data in our programs. In order to write an interpreter, we must operate on expressions as data. In this chapter, many of the concepts introduced earlier are finally realized in code. A primitive expression in Calculator is just a number, of type `int` or `float`. All compound expressions are call expressions. A call expression is represented as an instance of the `Exp` class, which has two instance attributes. The `operator` in Calculator is always a string: the name or symbol of an arithmetic operator. The `operands` are either primitive expressions or instances of `Exp` themselves.
###Code
class Exp(object):
"""A call expression in Calculator."""
def __init__(self, operator, operands):
self.operator = operator
self.operands = operands
def __repr__(self):
return 'Exp({0}, {1})'.format(repr(self.operator), repr(self.operands))
def __str__(self):
operand_strs = ', '.join(map(str, self.operands))
return '{0}({1})'.format(self.operator, operand_strs)
###Output
_____no_output_____
###Markdown
An `Exp` instance defines two string methods: the `__repr__` method returns a Python expression, while the `__str__` method returns a Calculator expression.
###Code
Exp('add', [1, 2])
str(Exp('add', [1, 2]))
Exp('add', [1, Exp('mul', [2, 3, 4])])
str(Exp('add', [1, Exp('mul', [2, 3, 4])]))
###Output
_____no_output_____
###Markdown
The final example demonstrates how the `Exp` class represents hierarchy in expression trees by including instances of `Exp` as elements of `operands`.
###Code
def calc_eval(exp):
"""Evaluate a Calculator expression."""
if type(exp) in (int, float):
return exp
elif type(exp) == Exp:
arguments = list(map(calc_eval, exp.operands))
return calc_apply(exp.operator, arguments)
###Output
_____no_output_____
###Markdown
**Evaluation.** The `calc_eval` function takes an expression as an argument and returns its value. It classifies the expression by its form and directs its evaluation. In Calculator, the two syntactic forms of expressions are numbers and call expressions, the latter being instances of `Exp`. Numbers are self-evaluating and can be returned directly from `calc_eval`. Call expressions require function application: they are evaluated by first computing a list of arguments, recursively mapping `calc_eval` over the list of operands, and then applying the operator to those arguments in a second function, `calc_apply`. The Calculator language is simple enough that we can easily express the logic of applying each operator in a single function. In `calc_apply`, each conditional clause corresponds to one operator.
###Code
from operator import mul
from functools import reduce
def calc_apply(operator, args):
"""Apply the named operator to a list of args."""
if operator in ('add', '+'):
return sum(args)
if operator in ('sub', '-'):
if len(args) == 0:
raise TypeError(operator + ' requires at least 1 argument')
if len(args) == 1:
return -args[0]
return sum(args[:1] + [-arg for arg in args[1:]])
if operator in ('mul', '*'):
return reduce(mul, args, 1)
if operator in ('div', '/'):
if len(args) != 2:
raise TypeError(operator + ' requires exactly 2 arguments')
numer, denom = args
return numer/denom
###Output
_____no_output_____
###Markdown
Above, each suite computes the result of a different operator, or raises an appropriate `TypeError` when the arguments are wrong. The `calc_apply` function can be called directly, but it must be passed a list of values as arguments rather than a list of operand expressions.
###Code
calc_apply('+', [1, 2, 3])
calc_apply('-', [10, 1, 2, 3])
calc_apply('*', [])
calc_apply('/', [40, 5])
###Output
_____no_output_____
###Markdown
The job of `calc_eval` is to make the proper call to `calc_apply` by first computing the values of the operand sub-expressions and then passing them to `calc_apply` as arguments. In this way, `calc_eval` can accept nested expressions.
###Code
e = Exp('add', [2, Exp('mul', [4, 6])])
str(e)
calc_eval(e)
###Output
_____no_output_____
###Markdown
The structure of `calc_eval` is an example of dispatching on type: the form of the expression. The first form of expression is a number, which requires no additional evaluation step. In general, primitive expressions that require no additional evaluation step are called self-evaluating. The only self-evaluating expressions in our Calculator language are numbers, but a general-purpose language might also include strings, boolean values, and so on. **Read-eval-print loops.** A typical way of interacting with an interpreter is through a read-eval-print loop, or REPL, a mode of interaction that reads an expression, evaluates it, and prints the result for the user. The Python interactive session is an example of such a loop. An implementation of a REPL can be largely independent of the interpreter it uses. The function `read_eval_print_loop` below takes a line of text from the user as input with the built-in `input` function. It constructs an expression tree using the language-specific `calc_parse` function, which is defined in the following section on parsing. Finally, it prints the result of applying `calc_eval` to the expression tree returned by `calc_parse`.
###Code
def read_eval_print_loop():
"""Run a read-eval-print loop for calculator."""
while True:
expression_tree = calc_parse(input('calc> '))
print(calc_eval(expression_tree))
###Output
_____no_output_____
###Markdown
This version of `read_eval_print_loop` contains all of the essential components of an interactive interface. A sample session might look like this:
###Code
calc> mul(1, 2, 3)
6
calc> add()
0
calc> add(2, div(4, 8))
2.5
###Output
_____no_output_____
###Markdown
This loop implements no mechanism for termination or error handling. We can improve the interface by reporting errors to the user. We can also allow the user to exit the loop by issuing a keyboard interrupt (`Control-C`) or an end-of-file signal (`Control-D`). To enable these improvements, we place the original suite of the `while` statement inside a `try` statement. The first `except` clause handles the `SyntaxError` exceptions raised by `calc_parse`, as well as the `TypeError` and `ZeroDivisionError` exceptions raised by `calc_eval`.
###Code
def read_eval_print_loop():
"""Run a read-eval-print loop for calculator."""
while True:
try:
expression_tree = calc_parse(input('calc> '))
print(calc_eval(expression_tree))
except (SyntaxError, TypeError, ZeroDivisionError) as err:
print(type(err).__name__ + ':', err)
except (KeyboardInterrupt, EOFError): # <Control>-D, etc.
print('Calculation completed.')
return
###Output
_____no_output_____
###Markdown
This loop reports errors without exiting. Rather than terminating the program when an error occurs, restarting the loop after the error message lets users review their expressions. By importing the `readline` module, users can even recall their previous inputs using the up arrow or `Control-P`. The final result provides an informative error-reporting interface:
###Code
calc> add
SyntaxError: expected ( after add
calc> div(5)
TypeError: div requires exactly 2 arguments
calc> div(1, 0)
ZeroDivisionError: division by zero
calc> ^DCalculation completed.
###Output
_____no_output_____
###Markdown
As we generalize our interpreter to languages other than Calculator, we will see that `read_eval_print_loop` is parameterized by a parse function, an evaluation function, and the exception types handled by the `try` statement. Beyond these changes, any REPL can be implemented using the same structure. 3.5.2 Parsing Parsing is the process of generating expression trees from raw text input. Interpreting those expression trees is the job of the evaluation function, but the parser must supply well-formed expression trees to the evaluator. A parser is in fact composed of two components: a lexical analyzer and a syntactic analyzer. First, the lexical analyzer splits the input string into tokens, the minimal syntactic units of the language, such as names and symbols. Second, the syntactic analyzer constructs an expression tree from this sequence of tokens.
###Code
def calc_parse(line):
"""Parse a line of calculator input and return an expression tree."""
tokens = tokenize(line)
expression_tree = analyze(tokens)
if len(tokens) > 0:
raise SyntaxError('Extra token(s): ' + ' '.join(tokens))
return expression_tree
###Output
_____no_output_____
###Markdown
The sequence of tokens is produced by a lexical analyzer called `tokenize` and consumed by a syntactic analyzer called `analyze`. Here, we define `calc_parse`, which accepts only well-formed Calculator expressions. Parsers for some languages are designed to accept multiple expressions delimited by newlines, semicolons, or spaces; we defer that complexity until we introduce the Logo language. **Lexical analysis.** The component that interprets a string as a token sequence is called a tokenizer, or lexical analyzer. In our implementation, the tokenizer is a function called `tokenize`. The Calculator language consists of symbols that include numbers, operator names, and operator-type symbols such as `+`. These symbols are always separated by two kinds of delimiters: commas and parentheses. Each symbol is its own token, as is each comma and parenthesis. Tokens can be separated by adding spaces around them in the input string and then splitting the string at each space.
###Code
def tokenize(line):
"""Convert a string into a list of tokens."""
spaced = line.replace('(',' ( ').replace(')',' ) ').replace(',', ' , ')
return spaced.split()
###Output
_____no_output_____
###Markdown
Tokenizing a well-formed Calculator expression keeps names intact but separates all symbols and delimiters.
###Code
tokenize('add(2, mul(4, 6))')
###Output
_____no_output_____
###Markdown
Languages with a more complex syntax may require a more sophisticated tokenizer. In particular, many tokenizers resolve the syntactic type of each token they return. For example, tokens in Calculator could be classified as operators, names, numbers, or delimiters; such a classification can simplify parsing the token sequence. **Syntactic analysis.** The component that interprets a token sequence as an expression tree is called a syntactic analyzer. In our implementation, syntactic analysis is performed by a recursive function called `analyze`. It is recursive because analyzing a sequence of tokens often involves analyzing a subsequence of those tokens, which itself serves as a sub-branch (e.g., an operand) of a larger expression tree. The recursion generates the hierarchical structure consumed by the evaluator. The `analyze` function expects a list of tokens that begins with a well-formed expression. It analyzes the first token, coercing strings that represent numbers into numeric values. It then considers the two legal expression types in Calculator: a number token is itself a complete, primitive expression tree, while a compound expression begins with an operator followed by a parenthesized, comma-delimited list of operand expressions. We begin with an implementation that does not check for syntax errors.
###Code
def analyze(tokens):
"""Create a tree of nested lists from a sequence of tokens."""
token = analyze_token(tokens.pop(0))
if type(token) in (int, float):
return token
else:
tokens.pop(0) # Remove (
return Exp(token, analyze_operands(tokens))
def analyze_operands(tokens):
"""Read a list of comma-separated operands."""
operands = []
while tokens[0] != ')':
if operands:
tokens.pop(0) # Remove ,
operands.append(analyze(tokens))
tokens.pop(0) # Remove )
return operands
###Output
_____no_output_____
###Markdown
Finally, we need to implement `analyze_token`. The `analyze_token` function converts numeric text into numeric values. Rather than implementing this logic ourselves, we rely on built-in Python type coercion, using the `int` and `float` constructors to convert tokens to those types.
###Code
def analyze_token(token):
"""Return the value of token if it can be analyzed as a number, or token."""
try:
return int(token)
except (TypeError, ValueError):
try:
return float(token)
except (TypeError, ValueError):
return token
###Output
_____no_output_____
###Markdown
Our implementation of `analyze` is now complete; it correctly parses well-formed Calculator expressions into expression trees. Those trees can be converted back into Calculator expressions with the `str` function.
###Code
expression = 'add(2, mul(4, 6))'
analyze(tokenize(expression))
str(analyze(tokenize(expression)))
###Output
_____no_output_____
###Markdown
The `analyze` function should return only well-formed expression trees, so it must detect syntax errors in its input. In particular, it must detect that expressions are complete, correctly delimited, and use only known operators. The revised version below ensures that every step of the syntactic analysis finds the token it expects.
###Code
known_operators = ['add', 'sub', 'mul', 'div', '+', '-', '*', '/']
def analyze(tokens):
"""Create a tree of nested lists from a sequence of tokens."""
assert_non_empty(tokens)
token = analyze_token(tokens.pop(0))
if type(token) in (int, float):
return token
if token in known_operators:
if len(tokens) == 0 or tokens.pop(0) != '(':
raise SyntaxError('expected ( after ' + token)
return Exp(token, analyze_operands(tokens))
else:
raise SyntaxError('unexpected ' + token)
def analyze_operands(tokens):
"""Analyze a sequence of comma-separated operands."""
assert_non_empty(tokens)
operands = []
while tokens[0] != ')':
if operands and tokens.pop(0) != ',':
raise SyntaxError('expected ,')
operands.append(analyze(tokens))
assert_non_empty(tokens)
tokens.pop(0) # Remove )
    return operands
def assert_non_empty(tokens):
"""Raise an exception if tokens is empty."""
if len(tokens) == 0:
raise SyntaxError('unexpected end of line')
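
# Exercising the new error checks (uses calc_parse, calc_eval and the revised
# analyze defined above):
print(calc_eval(calc_parse('add(2, mul(4, 6))')))      # a well-formed expression -> 26
for bad in ['add', 'add(2', 'foo(1, 2)', 'add(2 3)']:
    try:
        calc_parse(bad)
    except SyntaxError as err:
        print(bad, '->', 'SyntaxError:', err)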
###Output
_____no_output_____ |
cop/r/ECM.ipynb | ###Markdown
INF285/ILI285 Computación Científica (Scientific Computing) COP-R Matrix Quadratic Equation Libraries
###Code
import numpy as np
import scipy.sparse.linalg as spla
###Output
_____no_output_____
###Markdown
Question The traditional quadratic equation $\lambda\,x^2+\theta\,x+\gamma=0$ was studied in depth at the beginning of the course in the topic of cancellation errors. Its solution is obtained through a convenient algebraic manipulation; in particular, the solution is the following:\begin{equation}x_{\pm}=\dfrac{-\theta\pm\sqrt{\theta^2-4\,\lambda\,\gamma}}{2\,\lambda}\end{equation}One could extend the quadratic equation in the following way:\begin{equation}\Lambda\,X^2+\Theta\,X+\Gamma=\underline{0}\end{equation}where $\Lambda, \Theta, \Gamma$ and $X \in \mathbb{R}^{n\times n}$, $\underline{0}$ is the $n \times n$ zero matrix and $X^2=X\,X$. A possible simplification could be:\begin{equation}X^2+\Lambda^{-1}\,\Theta\,X+\Lambda^{-1}\,\Gamma=\underline{0}\end{equation}In general this case is a bit more complex than expected, so we will study the following equation instead:\begin{equation}(X+B)^2=C+CX+X^2+D\end{equation}The data files can be obtained from the following link: https://github.com/sct-utfsm/INF-285/tree/master/cop/r/data/ 1. Considering in this case that $B=D=\underline{0}$ and $C=$C1.npy, obtain the infinity norm of $X$, that is, $\|X\|_\infty$. 2. Considering in this case that $C=\underline{0}$, $B=$B2.npy and $D=B^2$, obtain the infinity norm of $X$, that is, $\|X\|_\infty$. 3. Considering in this case that $B=$B3.npy, $C=$C3.npy and $D=\underline{0}$, obtain the infinity norm of $X$, that is, $\|X\|_\infty$. Solution
###Code
# Loading data
C1 = np.load('data/C1.npy')
B2 = np.load('data/B2.npy')
B3 = np.load('data/B3.npy')
C3 = np.load('data/C3.npy')
n = C1.shape[0]
# 1. (X+B)^2=C+C X+X^2 + D
# X^2 = C+C X+X^2
# 0 = C+C X
# -C = C X
# -C^{-1}C=X
# -I=X
X = -np.eye(n)
print(np.linalg.norm(X,np.inf))
# 2. (X+B)^2=C+C X+X^2 + D
# (X+B)^2=X^2 + B^2
# X^2 + B X + X B + B^2 = X^2 + B^2
# B X + X B = 0
X = np.zeros((n,n))
print(np.linalg.norm(X,np.inf))
# 3. (X+B)^2=C+C X+X^2 + D
# (X+B)^2=C+C X+X^2
# X^2+B X +X B + B^2=C+C X+X^2
# B X +X B + B^2=C+C X
# (B - C) X +X B = C - B^2
def compute_matrix_vector_product(x,B,C,n):
X = np.reshape(x,(n,n))
out = np.dot(B-C,X)+np.dot(X,B)
return out.flatten()
afun = spla.LinearOperator((n**2, n**2), matvec = lambda x: compute_matrix_vector_product(x,B3,C3,n))
x, exitCode = spla.gmres(afun, (C3-np.dot(B3,B3)).flatten(), tol=1e-10)
X_GMRes = np.reshape(x,(n,n))
print(np.linalg.norm(X_GMRes,np.inf))
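
# Sanity check for part 3 (not part of the original statement): the GMRes solution
# should satisfy (B3 - C3) X + X B3 = C3 - B3^2, so this residual should be tiny.
residual = np.dot(B3 - C3, X_GMRes) + np.dot(X_GMRes, B3) - (C3 - np.dot(B3, B3))
print('Residual norm:', np.linalg.norm(residual, np.inf))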
###Output
1.0
0.0
29.40492924190162
|
how_to_ac3airborne/datasets/polar/gps_ins.ipynb | ###Markdown
GPS and INS Position and orientation of Polar 5 and Polar 6 are recorded by an on-board GPS sensor and the inertial navigation system (INS). The following example presents the variables recorded by these instruments. Data access To analyse the data, they first have to be loaded by importing the (AC)³ airborne meta data catalogue. To do so, the ac3airborne package has to be installed. More information on how to do that and about the catalog can be found [here](https://github.com/igmk/ac3airborne-intake#ac3airborne-intake-catalogue). Since some of the data, like the preliminary data of the HALO-(AC)3 campaign, is stored on the (AC)3 nextcloud server, username and password as credentials ([registration](https://cloud.ac3-tr.de/index.php/login)) are required and need to be loaded from environment variables.
###Code
import os
ac3cloud_username = os.environ['AC3_USER']
ac3cloud_password = os.environ['AC3_PASSWORD']
import ac3airborne
###Output
_____no_output_____
###Markdown
Get data All flights of Polar 5:
###Code
cat = ac3airborne.get_intake_catalog()
meta = ac3airborne.get_flight_segments()
flights_p5 = {}
for campaign in ['ACLOUD', 'AFLUX', 'MOSAiC-ACA','HALO-AC3']:
flights_p5.update(meta[campaign]['P5'])
###Output
_____no_output_____
###Markdown
All flights of Polar 6:
###Code
flights_p6 = {}
for campaign in ['ACLOUD', 'PAMARCMiP','HALO-AC3']:
flights_p6.update(meta[campaign]['P6'])
###Output
_____no_output_____
###Markdown
```{note}Have a look at the attributes of the xarray dataset `ds_gps_ins` for all relevant information on the dataset, such as author, contact, or citation information.```
###Code
ds_gps_ins = cat['AFLUX']['P5']['GPS_INS']['AFLUX_P5_RF10'].to_dask()
ds_gps_ins
###Output
Invalid MIT-MAGIC-COOKIE-1 key
###Markdown
The dataset `ds_gps_ins` includes the aircraft's position (`lon`, `lat`, `alt`), attitude (`roll`, `pitch`, `heading`), and the ground speed, vertical speed and true air speed (`gs`, `vs`, `tas`). Load flight phase informationPolar 5 flights are divided into segments to easily access start and end times of flight patterns. For more information have a look at the respective [github](https://github.com/igmk/flight-phase-separation) repository.At first we want to load the flight segments of (AC)³ airborne
###Code
meta = ac3airborne.get_flight_segments() # this happened before, but it doesn't hurt.
###Output
_____no_output_____
###Markdown
The following command lists all flight segments into the dictionary `segments`:
###Code
segments = {s.get("segment_id"): {**s, "flight_id": flight["flight_id"]}
for campaign in meta.values()
for platform in campaign.values()
for flight in platform.values()
for s in flight["segments"]
}
###Output
_____no_output_____
###Markdown
In this example, we want to look at a racetrack pattern during `ACLOUD_P5_RF10`.
###Code
seg = segments["AFLUX_P5_RF10_rt01"]
###Output
_____no_output_____
###Markdown
Using the start and end times of the segment, we slice the data to this flight section.
###Code
ds_gps_ins_rt = ds_gps_ins.sel(time=slice(seg["start"], seg["end"]))
###Output
_____no_output_____
###Markdown
Plots
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.colors as mcolors
import numpy as np
import ipyleaflet
from simplification.cutil import simplify_coords_idx
plt.style.use("../../mplstyle/book")
###Output
_____no_output_____
###Markdown
Plot all flights
###Code
def simplify_dataset(ds, tolerance):
indices_to_take = simplify_coords_idx(np.stack([ds.lat.values, ds.lon.values], axis=1), tolerance)
return ds.isel(time=indices_to_take)
# define colors for the flight tracks
colors = [mcolors.to_hex(c)
for c in plt.cm.inferno(np.linspace(0, 1, len(flights_p5)))]
m = ipyleaflet.Map(basemap=ipyleaflet.basemaps.Esri.NatGeoWorldMap,
center=(80., 6), zoom=3)
for (flight_id, flight),color in zip(flights_p5.items(),colors):
mission = flight['mission']
# read gps dataset of flight
if mission == 'HALO-AC3':
ds = cat[mission]['P5']['GPS_INS'][flight_id](user=ac3cloud_username,password=ac3cloud_password).to_dask()
else:
ds = cat[mission]['P5']['GPS_INS'][flight_id].to_dask()
# slice to takeoff and landing times
ds = ds.sel(time=slice(meta[mission]['P5'][flight_id]['takeoff'], meta[mission]['P5'][flight_id]['landing']))
# reduce dataset for plotting
ds_reduced = simplify_dataset(ds, tolerance=1e-5)
track = ipyleaflet.Polyline(
locations=np.stack([ds_reduced.lat.values,
ds_reduced.lon.values], axis=1).tolist(),
color=color,
fill=False,
weight=2,
name=flight_id)
m.add_layer(track)
m.add_control(ipyleaflet.ScaleControl(position='bottomleft'))
m.add_control(ipyleaflet.LegendControl(dict(zip(flights_p5.keys(), colors)),
name="Flights",
position="bottomright"))
m.add_control(ipyleaflet.LayersControl(position='topright'))
m.add_control(ipyleaflet.FullScreenControl())
display(m)
###Output
_____no_output_____
###Markdown
Plot time series of one flight
###Code
fig, ax = plt.subplots(9, 1, sharex=True)
kwargs = dict(s=1, linewidths=0, color='k')
ax[0].scatter(ds_gps_ins.time, ds_gps_ins['alt'], **kwargs)
ax[0].set_ylabel('alt [m]')
ax[1].scatter(ds_gps_ins.time, ds_gps_ins['lon'], **kwargs)
ax[1].set_ylabel('lon [°E]')
ax[2].scatter(ds_gps_ins.time, ds_gps_ins['lat'], **kwargs)
ax[2].set_ylabel('lat [°N]')
ax[3].scatter(ds_gps_ins.time, ds_gps_ins['roll'], **kwargs)
ax[3].set_ylabel('roll [°]')
ax[4].scatter(ds_gps_ins.time, ds_gps_ins['pitch'], **kwargs)
ax[4].set_ylabel('pitch [°]')
ax[5].scatter(ds_gps_ins.time, ds_gps_ins['heading'], **kwargs)
ax[5].set_ylim(-180, 180)
ax[5].set_ylabel('heading [°]')
ax[6].scatter(ds_gps_ins.time, ds_gps_ins['gs'], **kwargs)
ax[6].set_ylabel('gs [kts]')
ax[7].scatter(ds_gps_ins.time, ds_gps_ins['vs'], **kwargs)
ax[7].set_ylabel('vs [m/s]')
ax[8].scatter(ds_gps_ins.time, ds_gps_ins['tas'], **kwargs)
ax[8].set_ylabel('tas [m/s]')
ax[-1].xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
plt.show()
###Output
_____no_output_____
###Markdown
Plot time series of racetrack pattern
###Code
fig, ax = plt.subplots(9, 1, sharex=True)
kwargs = dict(s=1, linewidths=0, color='k')
ax[0].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['alt'], **kwargs)
ax[0].set_ylabel('alt [m]')
ax[1].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['lon'], **kwargs)
ax[1].set_ylabel('lon [°E]')
ax[2].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['lat'], **kwargs)
ax[2].set_ylabel('lat [°N]')
ax[3].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['roll'], **kwargs)
ax[3].set_ylabel('roll [°]')
ax[4].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['pitch'], **kwargs)
ax[4].set_ylabel('pitch [°]')
ax[5].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['heading'], **kwargs)
ax[5].set_ylim(-180, 180)
ax[5].set_ylabel('heading [°]')
ax[6].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['gs'], **kwargs)
ax[6].set_ylabel('gs [kts]')
ax[7].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['vs'], **kwargs)
ax[7].set_ylabel('vs [m/s]')
ax[8].scatter(ds_gps_ins_rt.time, ds_gps_ins_rt['tas'], **kwargs)
ax[8].set_ylabel('tas [m/s]')
ax[-1].xaxis.set_major_formatter(mdates.DateFormatter('%H:%M'))
plt.show()
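
# A small derived quantity from the GPS/INS variables described above (assumes the
# cells above have been run; 'gs' is treated as knots, matching the axis label,
# with 1 kt = 0.514444 m/s).
dt = np.diff(ds_gps_ins_rt.time.values).astype('timedelta64[s]').astype(float)
gs_ms = ds_gps_ins_rt['gs'].values * 0.514444
print('Approx. distance flown in the racetrack: %.1f km' % (np.sum(gs_ms[:-1] * dt) / 1e3))
print('Mean altitude during the racetrack: %.0f m' % float(ds_gps_ins_rt['alt'].mean()))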
###Output
_____no_output_____ |
notebooks/si_04_variance_for_preferred_number_of_clusters.ipynb | ###Markdown
Variance for Preferred Number of Clusters In this notebook, we conduct the sensitivity analysis for our `infomap` input parameter, the preferred number of clusters, as reported in the SI. In short, we investigate the question: How do the clusterings differ if we vary the preferred number of clusters? Preparations
###Code
from cdlib import evaluation, readwrite
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("darkgrid")
plt.rcParams['figure.figsize'] = (12,9)
plt.rcParams['font.size'] = 16
def make_dfs(dataset, preferred_cluster_sizes, years, config_strs):
dfs = []
for year in years:
config_str = config_strs[0]
ref_clustering = readwrite.read_community_json(
f'../../legal-networks-data/{dataset.lower()}/11_cluster_results/{year}_0-0_1-0_-1_a-infomap_n100_m1-0_s0_c1000.json'
)
clusterings = [
readwrite.read_community_json(
f'../../legal-networks-data/{dataset.lower()}/11_cluster_results/{year}_{config_str}.json'
)
for config_str in config_strs
]
df = pd.DataFrame(
{
'Preferred Clusters': p,
'NMI': evaluation.normalized_mutual_information(ref_clustering, c).score,
'Rand': evaluation.adjusted_rand_index(ref_clustering, c).score,
"Year": year
}
for p, c in zip(preferred_cluster_sizes, clusterings)
)
dfs.append(df)
return dfs
def make_boxplot(dfs, pivot_col, y_label, save_path=None):
pd.concat(dfs).pivot(columns='Preferred Clusters', index='Year', values=pivot_col
).boxplot(notch=0, sym="",color=dict(boxes='k', whiskers='k', medians='r', caps='k'))
plt.ylabel(y_label)
plt.xlabel('Preferred Number of Clusters')
labels = plt.gca().get_xticklabels()
labels[0] = 'auto'
plt.gca().set_xticklabels(labels)
plt.tight_layout()
if save_path is not None:
plt.savefig(save_path)
###Output
_____no_output_____
###Markdown
Computing the statistics US
###Code
dataset = 'us'
preferred_cluster_sizes = list(range(0,150+1,10)) + [200]
years = range(1994,2018+1)
config_strs = [
f'0-0_1-0_-1_a-infomap_n{preferred_cluster_size}_m1-0_s0_c1000'
if preferred_cluster_size
else '0-0_1-0_-1_a-infomap_m1-0_s0_c1000'
for preferred_cluster_size in preferred_cluster_sizes
]
dfs = make_dfs(dataset, preferred_cluster_sizes, years, config_strs)
make_boxplot(dfs, 'NMI', 'Normalized Mutual Information', '../graphics/preferred_number_of_modules_nmi_us.pdf')
make_boxplot(dfs, 'Rand', 'Adjusted Rand Index', '../graphics/preferred_number_of_modules_rand_us.pdf')
###Output
_____no_output_____
###Markdown
DE
###Code
dataset = 'de'
preferred_cluster_sizes = list(range(0,150+1,10)) + [200]
years = [f'{year}-01-01' for year in range(1994,2018+1)]
config_strs = [
f'0-0_1-0_-1_a-infomap_n{preferred_cluster_size}_m1-0_s0_c1000'
if preferred_cluster_size
else '0-0_1-0_-1_a-infomap_m1-0_s0_c1000'
for preferred_cluster_size in preferred_cluster_sizes
]
dfs = make_dfs(dataset, preferred_cluster_sizes, years, config_strs)
make_boxplot(dfs, 'NMI', 'Normalized Mutual Information', '../graphics/preferred_number_of_modules_nmi_de.pdf')
make_boxplot(dfs, 'Rand', 'Adjusted Rand Index', '../graphics/preferred_number_of_modules_rand_de.pdf')
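
# A numerical complement to the boxplots above (uses the `dfs` from the DE cells):
# mean NMI across years for each preferred number of clusters.
print(pd.concat(dfs).groupby('Preferred Clusters')['NMI'].mean())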
###Output
_____no_output_____ |
lab00_aprendendo_python.ipynb | ###Markdown
Lab 0: Introdução ao Google Colab Esse Notebook estará disponível em 1. Jupyter notebooksThis webpage is called a Jupyter notebook. A notebook is a place to write programs and view their results. 1.1. Text cellsIn a notebook, each rectangle containing text or code is called a *cell*.Text cells (like this one) can be edited by double-clicking on them. They're written in a simple format called [Markdown](http://daringfireball.net/projects/markdown/syntax) to add formatting and section headings. You don't need to learn Markdown, but you might want to.After you edit a text cell, select the "run cell" button at the top that looks like ▶| to confirm any changes. **Question 1.1.1.** This paragraph is in its own text cell. Try editing it so that **this** sentence is the last sentence in the paragraph, and then select the "run cell" ▶| button on the top. This sentence, for example, should be deleted. So should this one.
###Code
print("Hello, World!")
###Output
_____no_output_____
###Markdown
And this one:
###Code
print("\N{WAVING HAND SIGN}, \N{EARTH GLOBE ASIA-AUSTRALIA}!")
###Output
_____no_output_____
###Markdown
The fundamental building block of Python code is an expression. Cells can contain multiple lines with multiple expressions. When you run a cell, the lines of code are executed in the order in which they appear. Every `print` expression prints a line. Run the next cell and notice the order of the output.
###Code
print("First this line is printed,")
print("and then this one.")
###Output
_____no_output_____
###Markdown
**Question 1.2.1.** Change the cell above so that it prints out: First this line, then the whole 🌏, and then this one.*Hint:* If you're stuck on how to print the Earth symbol, try looking at the print expressions above. 1.3. Writing Jupyter notebooksYou can use Jupyter notebooks for your own projects or documents. They are among the world's most popular programming environments for data science. When you make your own notebook, you'll need to create your own cells for text and code.To add a cell, select the + button in the menu bar. A new cell starts out as text. You can change it to a code cell by selecting it so that it's highlighted, then selecting the drop-down box next to the restart (⟳) button in the menu bar, and choosing Code instead of Markdown.**Question 1.3.1.** Add a code cell below this one. Write code in it that prints out: A whole new cell! ♪🌏♪(That musical note symbol is like the Earth symbol. Its long-form name is `\N{EIGHTH NOTE}`.)Run your cell to verify that it works. 1.4. ErrorsPython is a language, and like natural human languages, it has rules. It differs from natural language in two important ways:1. The rules are *simple*. You can learn most of them in a few weeks and gain reasonable proficiency with the language in a semester.2. The rules are *rigid*. If you're proficient in a natural language, you can understand a non-proficient speaker, glossing over small mistakes. A computer running Python code is not smart enough to do that.Whenever you write code, you'll make mistakes. When you run a code cell that has errors, Python will sometimes produce error messages to tell you what you did wrong.Errors are okay; even experienced programmers make many errors. When you make an error, you just have to find the source of the problem, fix it, and move on.We have made an error in the next cell. Run it and see what happens.
###Code
print("This line is missing something."
###Output
_____no_output_____
###Markdown
You should see something like this (minus our annotations):The last line of the error output attempts to tell you what went wrong. The *syntax* of a language is its structure, and this `SyntaxError` tells you that you have created an illegal structure. "`EOF`" means "end of file," so the message is saying Python expected you to write something more (in this case, a right parenthesis) before finishing the cell.There's a lot of terminology in programming languages. You'll learn as you go. If you are ever having trouble understanding an error message, search the discussion forum. If you don't find an answer, post a question about the error yourself.Try to fix the code above so that you can run the cell and see the intended message instead of an error. 1.5. The KernelThe kernel is a program that executes the code inside your notebook and outputs the results. In the top right of your window, you can see a circle that indicates the status of your kernel. If the circle is empty (⚪), the kernel is idle and ready to execute code. If the circle is filled in (⚫), the kernel is busy running some code. You may run into problems where your kernel is stuck for an excessive amount of time, your notebook is very slow and unresponsive, or your kernel loses its connection. If this happens, try the following steps:1. At the top of your screen, select **Kernel**, then **Interrupt**.2. If that doesn't help, select **Kernel**, then **Restart**. If you do this, you will have to run your code cells from the start of your notebook up until where you paused your work.3. If that doesn't help, restart your server. First, save your work by selecting **File** at the top left of your screen, then **Save and Checkpoint**. Next, select **Control Panel** at the top right. Choose **Stop My Server** to shut it down, then **My Server** to start it back up. Then, navigate back to the notebook you were working on. Submitting Assignments and the Grader In the course, you may have to submit assignments for grading. To submit assignments, you can navigate to the right side of the toolbar and select the button labeled `Submit` as shown below. Please make sure to save your notebook before submitting. To grade assignments in the course, there will be a grader called Gofer Grader. When you submit assignments, the grader goes through your assignment and grades it. At the bottom of an assignment, under the Submission header, you may see some lines of code that look like this: These lines of code import the grader and run all of the tests in the notebook. You can see what the tests are in the notebook by identifying the code cells with the format `check(test/...)`. These tests/checks are placed after a question usually, in order to see if you have gotten the question correct. If the question is correct, you should see something that looks similar to this:If you have gotten the question wrong and therefore "failed" the check, you will see something like this:If you get this result, make sure to go back to your code that was being tested and change it before submitting! 1.6. Completing a labAll assignments in the course will be distributed as notebooks like this one. At the top of each assignment, you'll see a cell like the one below that imports autograder tests. Run it to import the autograder tests.
###Code
# Don't change this cell, just run it
# Import autograder tests
from gofer.ok import check
###Output
_____no_output_____
###Markdown
When you finish a question, you need to check your answer by running the check command below. It's OK to grade multiple times; Gofer will only try to grade your final submission for each question. There are no hidden autograder tests. If you pass all the given autograder tests for a question, you will receive full credit for that question.
###Code
check("tests/q0.py")
###Output
_____no_output_____
###Markdown
The notebook resides on a server that is run by the course staff, and so we have access to it as well. Once you're finished with a lab, use the File menu within the notebook page (below the Jupyter logo) to "Save and Checkpoint" and you're done. You may also check your notebook in its entirety with the following command.
###Code
import glob
from gofer.ok import check_all
display(check_all(glob.glob('tests/q*.py')))
###Output
_____no_output_____ |
8_automated_hyperparameter_search.ipynb | ###Markdown
Hyperparameter Search Neural networks have dozens of hyperparameters that affect their architecture and training process. Moreover, the final performance of the model depends on finding a successful set of values for those hyperparameters, for a given random initialization of the weights. Hyperparameter exploration therefore becomes one of the most tedious and critical parts of training neural networks. To obtain results that are correct, meaningful, and reproducible, this search process needs to be planned and systematized. > hyper-parameter optimization should be regarded as a formal outer loop in the learning process Formally, this process can be described as the minimization of the loss function (or the maximization of performance) treated as a *black-box* function that takes the hyperparameter values as parameters:$$ f(\theta) = loss_\theta(y, \hat{y}) $$$$ \theta^* = argmin_\theta f(\theta) $$where $\theta$ is the set of hyperparameters of the model, $loss$ is the loss between the true labels $y$ and the labels $\hat{y}$ produced by the model, and $f$ is the objective function of the minimization. The main strategies for exploring the hyperparameter space are:* Manual search, where a human chooses the value of each hyperparameter.* Grid search, where a set of possible values is defined for each hyperparameter and one experiment is run for every possible combination.* Random search, where a range of possible values is defined for each hyperparameter and a value is drawn at random from that range for each experiment.* Automated or model-based search, which works like random search except that the choice of each hyperparameter value is conditioned on the results of previous experiments. For more information see the paper [*Algorithms for Hyper-Parameter Optimization*](https://proceedings.neurips.cc/paper/2011/file/86e8f7ab32cfd12577bc2619bc635690-Paper.pdf) The following figure, taken from the paper [*Random Search for Hyper-Parameter Optimization*](https://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a.pdf), shows the impact of the first two strategies for one hyperparameter with a strong influence on the final performance of the model and another with no influence. Not only does grid search require many evaluations to achieve coverage, but the combinations that only vary irrelevant hyperparameters gather no new information. The success of grid search depends on the granularity of the grid adequately covering the relevant values, which are unknown a priori. **Bayesian exploration** is used to address all of these problems. This method models the loss as a Gaussian process and takes the results of previous experiments into account to build a probability distribution of the loss given the hyperparameters:$$ P(loss | \theta)$$To choose a new combination of hyperparameters to try given the previous experiments, the algorithm uses a *surrogate function* to approximate the behaviour of the loss and a *selection function* based on the expected improvement. Roughly, the algorithm follows these steps: 1. Find the set of hyperparameters that maximizes the expected improvement (EI), estimated through the *surrogate function*. 2. Compute the performance of the model with the chosen hyperparameter combination. This corresponds to evaluating the objective function. 3. Update the shape of the *surrogate function* using Bayes' theorem so that it better fits the true distribution $ P(loss | \theta)$ Fortunately, many search algorithms are already implemented and work as black boxes. We will see an example using the hyperopt library.
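###Markdown
Before applying hyperopt to our model, a minimal, self-contained example on a toy objective shows the moving parts (an objective function, a search space, and `fmin` with the TPE algorithm). The toy function and the variable `x` are illustrative choices only:
###Code
from hyperopt import fmin, tpe, hp

def toy_objective(x):
    # Pretend this is an expensive training run; here it is just (x - 3)^2,
    # whose minimum is at x = 3.
    return (x - 3) ** 2

toy_best = fmin(fn=toy_objective,
                space=hp.uniform('x', -10, 10),
                algo=tpe.suggest,
                max_evals=50)
print(toy_best)  # should be close to {'x': 3.0}
###Output
_____no_output_____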
###Code
# If running in colab, you need to update gensim
# !pip install --upgrade gensim
import csv
import functools
import gzip
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import tempfile
import seaborn
from gensim import corpora
from gensim.models import KeyedVectors
from gensim.parsing import preprocessing
from gensim.scripts.glove2word2vec import glove2word2vec
from sklearn import metrics
from sklearn.model_selection import train_test_split
from torch.utils.data import Dataset, DataLoader, IterableDataset
from tqdm.notebook import tqdm, trange
# Ensure version 4.X
import gensim
gensim.__version__
###Output
_____no_output_____
###Markdown
Part 1: Text preprocessing First we read the dataset as explained in the notebook 5_cnns.ipynb.
###Code
# If necessary, download data
# %%bash
# mkdir data
# curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/glove.6B.50d.txt.gz -o ./data/glove.6B.50d.txt.gz
# curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/imdb_reviews.csv.gz -o ./data/imdb_reviews.csv.gz
class IMDBReviewsDataset(Dataset):
def __init__(self, dataset, transform=None):
self.dataset = dataset
self.transform = transform
def __len__(self):
return self.dataset.shape[0]
def __getitem__(self, item):
if torch.is_tensor(item):
item = item.to_list()
item = {
"data": self.dataset.loc[item, "review"],
"target": self.dataset.loc[item, "sentiment"]
}
if self.transform:
item = self.transform(item)
return item
class RawDataProcessor:
def __init__(self,
dataset,
ignore_header=True,
filters=None,
vocab_size=50000):
if filters:
self.filters = filters
else:
self.filters = [
lambda s: s.lower(),
preprocessing.strip_tags,
preprocessing.strip_punctuation,
preprocessing.strip_multiple_whitespaces,
preprocessing.strip_numeric,
preprocessing.remove_stopwords,
preprocessing.strip_short,
]
# Create dictionary based on all the reviews (with corresponding preprocessing)
self.dictionary = corpora.Dictionary(
dataset["review"].map(self._preprocess_string).tolist()
)
# Filter the dictionary and compactify it (make the indices continous)
self.dictionary.filter_extremes(no_below=2, no_above=1, keep_n=vocab_size)
self.dictionary.compactify()
# Add a couple of special tokens
self.dictionary.patch_with_special_tokens({
"[PAD]": 0,
"[UNK]": 1
})
self.idx_to_target = sorted(dataset["sentiment"].unique())
self.target_to_idx = {t: i for i, t in enumerate(self.idx_to_target)}
def _preprocess_string(self, string):
return preprocessing.preprocess_string(string, filters=self.filters)
def _sentence_to_indices(self, sentence):
return self.dictionary.doc2idx(sentence, unknown_word_index=1)
def encode_data(self, data):
return self._sentence_to_indices(self._preprocess_string(data))
def encode_target(self, target):
return self.target_to_idx[target]
def __call__(self, item):
if isinstance(item["data"], str):
data = self.encode_data(item["data"])
else:
data = [self.encode_data(d) for d in item["data"]]
if isinstance(item["target"], str):
target = self.encode_target(item["target"])
else:
target = [self.encode_target(t) for t in item["target"]]
return {
"data": data,
"target": target,
"sentence": item["data"]
}
###Output
_____no_output_____
###Markdown
Splitting off the validation (*dev*) set In deep learning, it is **VERY** important to use a validation set during the hyperparameter search, which can be taken from the training partition. This holds regardless of the search strategy used. It prevents indirect overfitting and keeps a never-before-seen partition of the data available to evaluate the model's real generalization to unseen data.
###Code
dataset = pd.read_csv("./data/imdb_reviews.csv.gz")
preprocess = RawDataProcessor(dataset)
train_indices, test_indices = train_test_split(dataset.index, test_size=0.2, random_state=42)
train_indices, dev_indices = train_test_split(train_indices, test_size=0.2, random_state=42)
train_dataset = IMDBReviewsDataset(dataset.loc[train_indices].reset_index(drop=True), transform=preprocess)
dev_dataset = IMDBReviewsDataset(dataset.loc[dev_indices].reset_index(drop=True), transform=preprocess)
# We won't use test_dataset until the end!
test_dataset = IMDBReviewsDataset(dataset.loc[test_indices].reset_index(drop=True), transform=preprocess)
class PadSequences:
def __init__(self, pad_value=0, max_length=100):
self.pad_value = pad_value
self.max_length = max_length
def __call__(self, items):
data, target = list(zip(*[(item["data"], item["target"]) for item in items]))
seq_lengths = [len(d) for d in data]
max_length = self.max_length
seq_lengths = [min(self.max_length, l) for l in seq_lengths]
data = [d[:l] + [self.pad_value] * (max_length - l)
for d, l in zip(data, seq_lengths)]
return {
"data": torch.LongTensor(data),
"target": torch.FloatTensor(target)
}
###Output
_____no_output_____
###Markdown
Part 2: Skeleton of the neural network We define the model to be trained.
###Code
import torch
import torch.nn as nn
class ImdbLSTM(nn.Module):
def __init__(self,
pretrained_embeddings_path, dictionary, embedding_size,
hidden_layer=32,
num_layers=1, dropout=0., bias=True,
bidirectional=False,
freeze_embedings=True):
super(ImdbLSTM, self).__init__()
output_size = 1
# Create the Embeddings layer and add pre-trained weights
embeddings_matrix = torch.randn(len(dictionary), embedding_size)
embeddings_matrix[0] = torch.zeros(embedding_size)
with gzip.open(pretrained_embeddings_path, "rt") as fh:
for line in fh:
word, vector = line.strip().split(None, 1)
if word in dictionary.token2id:
embeddings_matrix[dictionary.token2id[word]] =\
torch.FloatTensor([float(n) for n in vector.split()])
self.embedding_config = {'freeze': freeze_embedings,
'padding_idx': 0}
self.embeddings = nn.Embedding.from_pretrained(
embeddings_matrix, **self.embedding_config)
# Set our LSTM parameters
self.lstm_config = {'input_size': embedding_size,
'hidden_size': hidden_layer,
'num_layers': num_layers,
'bias': bias,
'batch_first': True,
'dropout': dropout if num_layers > 1 else 0.0,
'bidirectional': bidirectional}
# Set our fully connected layer parameters
self.linear_config = {'in_features': hidden_layer,
'out_features': output_size,
'bias': bias}
# Instanciate the layers
self.lstm = nn.LSTM(**self.lstm_config)
self.droupout_layer = nn.Dropout(dropout)
self.classification_layer = nn.Linear(**self.linear_config)
self.activation = nn.Sigmoid()
def forward(self, inputs):
emb = self.embeddings(inputs)
lstm_out, _ = self.lstm(emb)
# Take last state of lstm, which is a representation of
# the entire text
lstm_out = lstm_out[:, -1, :].squeeze()
lstm_out = self.droupout_layer(lstm_out)
predictions = self.activation(self.classification_layer(lstm_out))
return predictions
###Output
_____no_output_____
###Markdown
We encapsulate the training algorithm inside a parameterizable function. The function should return the results obtained.
###Code
# Some default values
EPOCHS = 2
MAX_SEQUENCE_LEN = 100
import torch.optim as optim
def train_imbd_model(train_dataset, dev_dataset,
pretrained_embeddings_path, dictionary, embedding_size,
batch_size=128, max_sequence_len=MAX_SEQUENCE_LEN,
hidden_layer=32, dropout=0.,
epochs=EPOCHS, lr=0.001, optimizer_class=optim.Adam,
verbose=False):
if verbose:
print_fn = print
else:
print_fn = lambda *x: None
# We define again the data loaders since this code could run in
# parallel
pad_sequeces = PadSequences(max_length=max_sequence_len)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True,
collate_fn=pad_sequeces, drop_last=False)
dev_loader = DataLoader(dev_dataset, batch_size=batch_size, shuffle=False,
collate_fn=pad_sequeces, drop_last=False)
    # We are not going to explore all hyperparameters, only these.
model = ImdbLSTM(pretrained_embeddings_path, dictionary, embedding_size,
hidden_layer=hidden_layer, dropout=dropout)
loss_function = nn.BCELoss()
optimizer = optimizer_class(model.parameters(), lr)
history = {
'train_loss': [],
'test_loss': [],
'test_avp': []
}
for epoch in range(epochs):
model.train()
running_loss = []
print_fn("Epoch", epoch)
for idx, batch in enumerate(train_loader):
optimizer.zero_grad()
output = model(batch["data"])
loss_value = loss_function(output.squeeze(), batch["target"])
loss_value.backward()
optimizer.step()
running_loss.append(loss_value.item())
train_loss = sum(running_loss) / len(running_loss)
print_fn("\t Final train_loss", train_loss)
history['train_loss'].append(train_loss)
model.eval()
running_loss = []
targets = []
predictions = []
for batch in dev_loader:
output = model(batch["data"])
running_loss.append(
loss_function(output.squeeze(), batch["target"]).item()
)
targets.extend(batch["target"].numpy())
# Round up model output to get the predictions.
# What would happen if you change the activation to tanh?
predictions.extend(output.squeeze().round().detach().numpy())
test_loss = sum(running_loss) / len(running_loss)
avp = metrics.average_precision_score(targets, predictions)
print_fn("\t Final test_loss", test_loss)
print_fn("\t Final test_avp", avp)
history['test_loss'].append(test_loss)
history['test_avp'].append(avp)
return history
history = train_imbd_model(
train_dataset, dev_dataset,
pretrained_embeddings_path="./data/glove.6B.50d.txt.gz",
dictionary=preprocess.dictionary, embedding_size=50, verbose=True)
###Output
Epoch 0
Final train_loss 0.6751024463176727
Final test_loss 0.61183714488196
Final test_avp 0.6281198166028745
Epoch 1
Final train_loss 0.633317717552185
Final test_loss 0.6387771065272982
Final test_avp 0.606606246189802
###Markdown
Using hyperopt To use any of the hyperopt algorithms, we need to define an objective function to be minimized. This function receives an object with the hyperparameter values for each experiment and must return a single metric (or a dictionary with a `loss` key associated with that metric). In our case, we will use the *average precision score* obtained on the validation set. We recommend consulting the [official tutorial](https://github.com/hyperopt/hyperopt/wiki/FMin) for more details.
###Code
from hyperopt import STATUS_OK
# define an objective function
def objective_fn(args):
print("Exploring config:", args)
    # These references to train_dataset and dev_dataset are
    # taken from the global context!
history = train_imbd_model(
train_dataset, dev_dataset,
pretrained_embeddings_path="./data/glove.6B.50d.txt.gz",
dictionary=preprocess.dictionary, embedding_size=50,
**args)
# This is the value that will be minimized!
history['loss'] = history['test_avp'][-1] * -1
# These are required keys
history['status'] = STATUS_OK
return history
from hyperopt import hp, fmin, tpe, Trials
# define a search space
space = {
    'lr': hp.loguniform('lr', np.log(0.0001), np.log(0.005)),  # see appendix
'optimizer_class': hp.choice(
'optimizer_class', [optim.Adam, optim.RMSprop]),
'dropout': hp.uniform('dropout', 0, 0.5)
}
# define the Trials object, which will allow us to store
# information from every experiment.
trials = Trials()
# minimize the objective over the space
best = fmin(objective_fn, space, algo=tpe.suggest, max_evals=10, trials=trials)
print("Best hyperparameters:")
print(best)
# We can see the results of each experiment with the trials object.
trials.results
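
# Two convenient ways to inspect the search results (assumes `trials`, `best` and
# `space` from the cells above).
losses = [r['loss'] for r in trials.results]
print('Best average precision found:', -min(losses))
# space_eval maps the index-encoded choices in `best` back to actual values:
from hyperopt import space_eval
print(space_eval(space, best))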
###Output
_____no_output_____
###Markdown
Final recommendations * It is advisable to use a *patience* parameter that stops the training loop when no improvement in validation performance has been detected for n epochs; this helps prevent the model from overfitting (a minimal sketch follows below). * Run a preliminary grid search to determine values for the optimizer, learning rate, batch size, and minimum number of training epochs, since these hyperparameters are highly influential. * It is not necessary to run the hyperparameter search on the whole dataset, nor to train the classifier for all epochs until it starts to diverge; a shorter search can be used to find the most promising regions of values, followed by a second search with fewer iterations but the full training process. * Do not run the search in notebooks; use scripts instead. * Combine hyperopt with mlflow to keep an organized record of the results. * Modify the training loop to save the most recent model with the best metrics on the validation set.
###Code
###Output
_____no_output_____
###Markdown
Appendix: hp.loguniform
According to the official documentation, the `hp.loguniform` distribution:
* Returns a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed.
* When optimizing, this variable is constrained to the interval [exp(low), exp(high)].
Suppose we want our lr values to be distributed logarithmically over the interval [0.0001, 0.005]; then low and high should be log(0.0001) and log(0.005). Let's see what distribution of samples we obtain.
###Code
import numpy
import seaborn
low = numpy.log(0.0001)
high = numpy.log(0.005)
sample_size = 1000
sample = numpy.exp(numpy.random.uniform(low, high, size=sample_size))
seaborn.displot(sample)
sample.max(), sample.min()
###Output
_____no_output_____ |
analyses/Correlations.ipynb | ###Markdown
Correlations
This file contains all of the correlations that we want to calculate, which means we need at least two columns to produce a result. Most of these correlations involve co-op salary or term average. We use these as metrics of student success because they are numeric, which makes them easier to process; other properties are very subjective. Salary and grades are not the most indicative measures of how successful a student is, but with the existing data they are the best indication we have.
###Code
from IPython.display import display
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import json
from collections import defaultdict
from Bucket import Bucket
from Distribution import Distribution
from GradeSalaryHistory import GradeSalaryHistory
pd.set_option('display.max_columns', None)
plt.style.use('ggplot')
# Show matplotlib plots in this notebook
%matplotlib inline
# Setting plot parameters
from pylab import rcParams
params = {
'figure.figsize': (8, 8),
'legend.fontsize': 15
}
rcParams.update(params)
def isnan(a):
return a != a
def isempty(a):
return isnan(a) or not a
def isfloat(value):
try:
float(value)
return not isnan(value)
except ValueError:
return False
# Where to write the buckets
BUCKET_DIR = '../private/buckets/'
GRADE_SALARY_FILENAME = '../private/grade-vs-salary-temp.json'
df = pd.read_csv('../private/results-04-10.csv') # TODO: Write the response file
COOP = ['1', '2', '3', '4', '5', '6']
TERM = ['1a', '1b', '2a', '2b', '3a', '3b', '4a']
MULTI_VAL_COL = ['ethnicity', 'fav_lang', 'preferred_tech_discipline', 'text_editor']
SALARY_COL = ['coop_salary_' + i + '.csv' for i in COOP]
GRADE_COL = ['term_avg_' + i + '.csv' for i in TERM]
def write_buckets(df, col_name):
"""Creates a bucket for each value and then writes the values into a file."""
buckets, aggregate_values = correlate_columns(df, col_name)
with open(BUCKET_DIR + col_name + '_buckets.json', 'w') as f:
result = {}
for i in buckets:
result[i] = buckets[i].summary()
for i in aggregate_values:
result[i] = aggregate_values[i]
f.write(json.dumps(result, indent=True))
def correlate_columns(df, col_name):
"""Generates a bucket for unique column value."""
# Get unique values (some rows have multiple values)
col_values = np.array([])
for i in df[col_name]:
if isfloat(i):
# Floor any data
i = math.floor(float(i))
if isnan(i):
# Don't want to include any NaN
print('Skipping ', i)
continue
# Append separate values if it's a comma separated value
val = str(i)
if ',' in val and col_name in MULTI_VAL_COL:
col_values = np.append(col_values, list(map(str.strip, val.split(','))))
else:
col_values = np.append(col_values, val)
col_values = np.unique(col_values)
# Create a bucket for each column value
buckets = {}
aggregate_values = {
'first_half_grades': defaultdict(list),
'second_half_grades': defaultdict(list),
'first_half_salaries': defaultdict(list),
'second_half_salaries': defaultdict(list)
}
for col_val in col_values:
# Initialize distributions
salaries = {}
grades = {}
for i in TERM:
grades[i] = np.array([])
for i in COOP:
salaries[i] = np.array([])
# Iterate through all rows
for i in range(0, df.shape[0]):
val = df[col_name][i]
if isfloat(val):
val = math.floor(float(val))
# Filter any NaN or non matching values
if isnan(val):
continue
if col_name in MULTI_VAL_COL:
if str(col_val) not in map(str.strip, str(val).split(',')):
continue
else:
if str(col_val) != str(val):
continue
first_half_grade = 0
second_half_grade = 0
first_half_salary = 0
second_half_salary = 0
# Add grades
fhgc = 0 # first half grade count
shgc = 0 # second half grade count
fhsc = 0
shsc = 0
for j in range(0, len(TERM)):
t = TERM[j]
avg = df['term_avg_' + t][i]
if avg == 'exchange':
continue
if isnan(avg):
continue
grades[t] = np.append(grades[t], avg)
if j < 4:
first_half_grade += float(avg)
fhgc += 1
else:
second_half_grade += float(avg)
shgc += 1
# Add salary
for j in range(0, len(COOP)):
c = COOP[j]
salary = df['coop_salary_' + c][i]
if type(salary) == float and math.isnan(salary):
continue
if type(salary) == str and not salary:
continue
if isempty(salary): # Not sure if this is needed
continue
if type(salary) == str and ',' in salary:
salary = salary.replace(',', '')
salaries[c] = np.append(salaries[c], salary)
if j < 3:
first_half_salary += float(salary)
fhsc += 1
else:
second_half_salary += float(salary)
shsc += 1
if col_val == 'Prefer not to say':
# Only one person, don't show their data.
continue
if fhgc > 0:
aggregate_values['first_half_grades'][col_val].append(first_half_grade / fhgc)
if shgc > 0:
aggregate_values['second_half_grades'][col_val].append(second_half_grade / shgc)
if fhsc > 0:
aggregate_values['first_half_salaries'][col_val].append(first_half_salary / fhsc)
if shsc > 0:
aggregate_values['second_half_salaries'][col_val].append(second_half_salary / shsc)
# Create the bucket
buckets[col_val] = Bucket(col_name, col_val,
[Distribution(grades[i].astype(float)) for i in TERM],
[Distribution(salaries[i].astype(float)) for i in COOP])
return buckets, aggregate_values
# From https://github.com/se2018/class-profile/tree/master/analyses
to_correlate = [
'gender',
'ethnicity',
'family_income',
'work_os',
'phone',
'soft_eng_rating',
'se_friendships',
'is_international',
'parents_edu',
'parents_technical',
'admission_avg',
'code_start_age',
'fav_lang',
'num_hackathons',
'side_proj',
'exercise',
'cooking',
'sleep_time',
'preferred_tech_discipline',
'text_editor'
]
for i in to_correlate:
write_buckets(df, i)
###Output
('Skipping ', nan)
('Skipping ', nan)
('Skipping ', nan)
('Skipping ', nan)
('Skipping ', nan)
('Skipping ', nan)
###Markdown
The next section computes the correlation between grades and co-op jobs.
###Code
def create_history(df):
"""Creates an entry for each coop term about the grades and salaries leading up to it."""
result = {}
for i, term in enumerate(COOP):
result[term] = defaultdict(list)
for row in range(0, df.shape[0]):
# Skip any entries that are missing data on the coop
if isempty(df['coop_name_' + term][row]) or isempty(df['coop_salary_' + term][row]):
print('Empty term for row ', row, ', term ', term, 'skipping entry.')
continue
# Process a coop term
term_avgs = np.array([])
salaries = np.array([])
# Get previous grades
for study in range(0, i+1):
val = df['term_avg_' + str(TERM[study])][row]
if isnan(val) or val == 'exchange':
term_avgs = np.append(term_avgs, 0.0)
else:
term_avgs = np.append(term_avgs, math.floor(float(val)))
# Get previous salaries
for coop in range(0, i):
val = str(df['coop_salary_' + COOP[coop]][row])
val = val.replace(',', '')
if isnan(float(val)):
salaries = np.append(salaries, 0)
else:
salaries = np.append(salaries, float((int(float(val)) / 500 * 500)))
location = df['coop_loc_' + term][row]
if isempty(location):
location = ''
salary = df['coop_salary_' + term][row]
# Estimate to the nearest 500*i
salary = float(int(float(str(salary).replace(',', ''))) / 500 * 500)
result[term][str(salary)].append(GradeSalaryHistory(term_avgs, salaries, location, salary).summary())
return result
with open(GRADE_SALARY_FILENAME, 'w') as f:
f.write(json.dumps(create_history(df), indent=True))
###Output
('Empty term for row ', 5, ', term ', '1', 'skipping entry.')
('Empty term for row ', 30, ', term ', '1', 'skipping entry.')
('Empty term for row ', 45, ', term ', '1', 'skipping entry.')
('Empty term for row ', 55, ', term ', '1', 'skipping entry.')
('Empty term for row ', 60, ', term ', '1', 'skipping entry.')
('Empty term for row ', 67, ', term ', '1', 'skipping entry.')
('Empty term for row ', 93, ', term ', '1', 'skipping entry.')
('Empty term for row ', 7, ', term ', '2', 'skipping entry.')
('Empty term for row ', 45, ', term ', '2', 'skipping entry.')
('Empty term for row ', 64, ', term ', '2', 'skipping entry.')
('Empty term for row ', 82, ', term ', '3', 'skipping entry.')
('Empty term for row ', 91, ', term ', '3', 'skipping entry.')
('Empty term for row ', 100, ', term ', '3', 'skipping entry.')
('Empty term for row ', 8, ', term ', '4', 'skipping entry.')
('Empty term for row ', 73, ', term ', '4', 'skipping entry.')
('Empty term for row ', 107, ', term ', '4', 'skipping entry.')
('Empty term for row ', 112, ', term ', '4', 'skipping entry.')
('Empty term for row ', 26, ', term ', '5', 'skipping entry.')
('Empty term for row ', 77, ', term ', '5', 'skipping entry.')
('Empty term for row ', 86, ', term ', '5', 'skipping entry.')
('Empty term for row ', 58, ', term ', '6', 'skipping entry.')
('Empty term for row ', 64, ', term ', '6', 'skipping entry.')
('Empty term for row ', 74, ', term ', '6', 'skipping entry.')
('Empty term for row ', 87, ', term ', '6', 'skipping entry.')
###Markdown
Check for gender vs SE rating
###Code
gender_rating = defaultdict(list)
for i in range(0, df.shape[0]):
gender_rating[df['gender'][i]].append(df['soft_eng_rating'][i])
del gender_rating['Prefer not to say'] # There's only one entry
stats = {}
stats['mean'] = {
'Male': np.mean(gender_rating['Male']),
'Female': np.mean(gender_rating['Female'])
}
stats['stddev'] = {
'Male': np.std(gender_rating['Male']),
'Female': np.std(gender_rating['Female'])
}
gender_rating['stats'] = stats
with open('../private/gender_rating.json', 'w') as f:
f.write(json.dumps(gender_rating, indent=True, default=str))
###Output
_____no_output_____ |
notebooks/Milestone5-Data_Analysis.ipynb | ###Markdown
Milestone 5 - Data Analysis
Jarod
Research Question
Does the release of triple-A multiplayer FPS titles affect the player numbers of CS:GO?
Data Analysis
From the SteamCharts dataset, I extracted the rows about Counter-Strike: Global Offensive, the most popular game on Steam, and wrangled the average players, the average players gained from the previous month, and the four-month and seven-month rolling gain averages for every month recorded in the dataset (a sketch of this wrangling is shown below). This gives us a timeline of the average and of the gain. All we need to do with the timeline is check whether there are any dips in the average players that are not part of a longer pre-existing trend, and compare them to the release dates of triple-A titles like Call of Duty, Battlefield, Halo, Overwatch, and Valorant. From exploring the dashboard, I can see that only the releases of Overwatch and Valorant coincided with significant losses in players, and both of those games fit into a niche similar to the one Counter-Strike: Global Offensive occupies. Valorant competes most closely with Counter-Strike: Global Offensive, as its gameplay is heavily inspired by it; this is further indicated by the roughly 186,000 players who dropped Counter-Strike: Global Offensive in the two months leading up to Valorant's release. While the releases of Call of Duty and Battlefield can reduce the player numbers of Counter-Strike: Global Offensive, they do not occur frequently early on and, in general, do not produce substantial decreases relative to the average player count. Lastly, Halo did not seem to affect the player numbers of Counter-Strike: Global Offensive, as the trend did not fluctuate when it was released. From these observations, I conclude that while you can see decreases in players when other games are released, most of them are insignificant compared to the total player count. Furthermore, the Counter-Strike: Global Offensive community is a dedicated and isolated gaming community with few transient players, only losing many players to competition within its niche (i.e. Valorant).
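The wrangling described above was done for an external dashboard, so no code for it appears in this notebook. The following is only a minimal pandas sketch of how the month-over-month gain and the rolling gain averages could be computed; the file name and the column names (`Game`, `Year`, `Month`, `Avg_players`) are assumptions, not the actual schema.

```python
import pandas as pd

# Hypothetical file and column names; adapt to the actual SteamCharts export.
df = pd.read_csv("SteamCharts.csv")
csgo = (df[df["Game"] == "Counter-Strike: Global Offensive"]
        .sort_values(["Year", "Month"])   # assumes Month is numeric (1-12)
        .reset_index(drop=True))

# Average players gained relative to the previous month.
csgo["Gain"] = csgo["Avg_players"].diff()

# Four-month and seven-month rolling averages of the gain.
csgo["Gain_rolling_4m"] = csgo["Gain"].rolling(window=4).mean()
csgo["Gain_rolling_7m"] = csgo["Gain"].rolling(window=7).mean()
```

Harshal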
###Code
import seaborn as sns
import matplotlib.pyplot as plt
from project_functions2 import load_process_covid
###Output
_____no_output_____
###Markdown
Plot
###Code
df_covid_sorted = load_process_covid('../data/raw/SteamCharts.csv')
df_covid_sorted
sns.barplot(x ='Month', y= 'AvgGainPerGame', hue = 'Year', data = df_covid_sorted)
###Output
_____no_output_____
###Markdown
Final Research Question Analysis (Harshal - Task 4)
Research Question: How has Covid-19 affected the active users of games on Steam?
I used the dataset to find the average gain of players across all games on the platform (the AvgGainPerGame) in order to show that during the months when Covid-19 first triggered lockdowns across the world, more players joined, since everyone was at home and not allowed to go outside. I processed the raw dataset down to specific points in time: January, February, and March of 2019 to show the pre-Covid period, and January, February, and March of 2020 to show the period during Covid (a rough sketch of this aggregation is shown below). I used a bar plot with Month and AvgGainPerGame as the x and y values, grouping the years 2019 and 2020 for each month so you can compare the difference in AvgGainPerGame for that specific month. As you can see, during the months of 2020 the bars are above the x-axis, meaning the average gain is positive, whereas the 2019 bars are mostly negative or significantly lower than the 2020 bars. This indicates that Steam gained many players while Covid was at large during its initial months of 2020, showing that Covid has impacted the Steam platform's player counts because many people found new modes of entertainment in video games!
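The aggregation itself happens inside `load_process_covid` (defined in `project_functions2`, which is not shown in this notebook). The following is only a rough, hypothetical sketch of that kind of processing; the column names `Year`, `Month` and `Gain` are assumptions and not the real implementation.

```python
import pandas as pd

def load_process_covid_sketch(path):
    # Hypothetical columns; the real logic lives in project_functions2.load_process_covid.
    df = pd.read_csv(path)
    # Keep only the months compared in the plot: Jan-Mar 2019 (pre-Covid) and Jan-Mar 2020 (during Covid).
    subset = df[df["Month"].isin(["January", "February", "March"]) & df["Year"].isin([2019, 2020])]
    # Average gain in players per game for each (Year, Month) pair.
    return (subset.groupby(["Year", "Month"], as_index=False)["Gain"]
                  .mean()
                  .rename(columns={"Gain": "AvgGainPerGame"}))
```

Lance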
###Code
import matplotlib.pylab as plt
import seaborn as sns
import numpy as np
from project_functions3 import load_csgo
sns.set_theme(style="ticks",
font_scale=1.3, # This scales the fonts slightly higher
)
plt.rc("axes.spines", top=False, right=False)
###Output
_____no_output_____
###Markdown
Plot
###Code
df_csgo = load_csgo('../data/raw/Steamcharts.csv', '../data/raw/Twitch_game_data.csv')
df_csgo
sns.barplot(x = 'Month', y = 'Avg_players', data = df_csgo, estimator=np.median)
sns.barplot(x = 'Month', y = 'Avg_viewers', data = df_csgo, estimator=np.median)
###Output
_____no_output_____ |
documents/presentation-9/script.ipynb | ###Markdown
Statistical Analysis of Data
Environment Settings
A statistical analysis of the captured data will be performed. The environment configuration is the following:
- A rectangular arena of 3 x 3 meters is used.
- A custom robot similar to an e-puck was used.
- The robot starts in the middle of the arena.
- The robot moves randomly around the environment, avoiding obstacles, for 100 robot steps and is then moved to another random location.
- The data is not normalized in this experiment.
- The robot has 8 sensors that measure the distance between the robot and the walls.
- Some noise was introduced into the robot's sensor measurements using [lookup tables](https://cyberbotics.com/doc/reference/distancesensor) in the Webots simulator, which according to the Webots documentation work as follows: "The first column of the table specifies the input distances, the second column specifies the corresponding desired response values, and the third column indicates the desired standard deviation of the noise. The noise on the return value is computed according to a gaussian random number distribution whose range is calculated as a percent of the response value (two times the standard deviation is often referred to as the signal quality)". The following values were used (see the sketch below):
  - (0, 0, 0.05)
  - (10, 10, 0.05)
- The simulator runs for 25 hours of simulated time (~30 minutes in fast mode).
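As a rough illustration only (this is not part of the simulation code), the two lookup-table rows above amount to a noise model in which the Gaussian standard deviation is 5% of the true response; a minimal sketch:

```python
import numpy as np

def noisy_distance(true_distance, rel_std=0.05):
    # Sketch of the Webots lookup-table noise described above:
    # Gaussian noise whose standard deviation is a fraction (5%) of the response value.
    return true_distance + np.random.normal(0.0, rel_std * true_distance)

# Example: a wall 2.3 m away, as seen by one of the eight distance sensors.
print(noisy_distance(2.3))
```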
###Code
# Install a pip package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install scikit-learn
!{sys.executable} -m pip install keras
import pandas as pd
import tensorflow as tf
import numpy as np
import math
from sklearn.ensemble import RandomForestRegressor
from keras import models
from keras import layers
from keras import regularizers
import matplotlib.pyplot as plt
from keras import optimizers
###Output
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.7/site-packages (0.22)
Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/site-packages (from scikit-learn) (0.14.1)
Requirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.7/site-packages (from scikit-learn) (1.16.1)
Requirement already satisfied: scipy>=0.17.0 in /Users/sebastiangerard/Library/Python/3.7/lib/python/site-packages (from scikit-learn) (1.4.1)
Requirement already satisfied: keras in /usr/local/lib/python3.7/site-packages (2.3.1)
Requirement already satisfied: scipy>=0.14 in /Users/sebastiangerard/Library/Python/3.7/lib/python/site-packages (from keras) (1.4.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in /Users/sebastiangerard/Library/Python/3.7/lib/python/site-packages (from keras) (1.1.0)
Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.7/site-packages (from keras) (1.16.1)
Requirement already satisfied: h5py in /Users/sebastiangerard/Library/Python/3.7/lib/python/site-packages (from keras) (2.9.0)
Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.7/site-packages (from keras) (1.12.0)
Requirement already satisfied: keras-applications>=1.0.6 in /Users/sebastiangerard/Library/Python/3.7/lib/python/site-packages (from keras) (1.0.8)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/site-packages (from keras) (5.2)
###Markdown
First Experiment
###Code
csv_file = 'robot_info_dataset.csv'
df = pd.read_csv(csv_file)
df[['x', 'y', 'theta', 'sensor_1', 'sensor_2','sensor_3','sensor_4','sensor_5','sensor_6','sensor_7', 'sensor_8']].head()
###Output
_____no_output_____
###Markdown
Data pre-processing
The data collection yielded 1,125,965 samples.
###Code
df.shape
df = df.sample(frac=1)
df = df[:1125965]
df.shape
###Output
_____no_output_____
###Markdown
The data set contains some null values, so the corresponding samples are dropped.
###Code
df = df.dropna()
###Output
_____no_output_____
###Markdown
Input and output variables
The data will be split into training and test sets: 80% of the data is used for training and 20% for testing. Validation sets are later drawn from the training data via k-fold cross-validation.
###Code
# train size
test_size_percentage = .2
train_size_percentage = .8
ds_size = df.shape[0]
train_size = int(train_size_percentage * ds_size)
test_size = int(test_size_percentage * ds_size)
# shuffle dataset
sampled_df = df.sample(frac=1)
# separate inputs from outputs
inputs = sampled_df[['x', 'y', 'theta']]
targets = sampled_df[['sensor_1', 'sensor_2', 'sensor_3', 'sensor_4', 'sensor_5', 'sensor_6', 'sensor_7', 'sensor_8']]
# train
train_inputs = inputs[:train_size]
train_targets = targets[:train_size]
# test
test_inputs = inputs[train_size:]
test_targets = targets[train_size:]
inputs.head()
###Output
_____no_output_____
###Markdown
Neural Network
As input, the neural network receives the x, y coordinates and the rotation angle $\theta$. The output is the corresponding sensor measurement; one model per sensor will be created.
###Code
def get_model():
# neural network with a 10-neuron hidden layer
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(3,)))
# model.add(layers.Dropout(0.5))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1))
# rmsprop = optimizers.RMSprop(learning_rate=0.01)
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
return model
def k_fold(sensor_number, num_epochs=10, k=5):
num_val_samples = len(train_inputs) // k
validation_scores = []
histories = []
nmse = []
for i in range(k):
print('processing fold #', i)
val_data = train_inputs[i * num_val_samples: (i + 1) * num_val_samples]
val_targets = train_targets[[sensor_number]][i * num_val_samples: (i + 1) * num_val_samples]
partial_train_data = np.concatenate(
[train_inputs[:i * num_val_samples],
train_inputs[(i + 1) * num_val_samples:]], axis=0)
partial_train_targets = np.concatenate(
[train_targets[[sensor_number]][:i * num_val_samples],
train_targets[[sensor_number]][(i + 1) * num_val_samples:]], axis=0)
model = get_model()
history = model.fit(partial_train_data, partial_train_targets,
validation_data=(val_data, val_targets),
epochs=num_epochs, batch_size=64, verbose=1)
histories.append(history.history)
predictions_targets = model.predict(val_data)
nmse.append(np.mean((predictions_targets - val_targets)**2)/np.var(val_targets))
return histories, nmse
histories, nmse = k_fold('sensor_3', 50, 3)
print("NMSE: ")
print(np.mean(nmse))
num_epochs = 50
val_mae_history = [np.mean([x['val_mae'][i] for x in histories]) for i in range(num_epochs)]
mae_history = [np.mean([x['mae'][i] for x in histories]) for i in range(num_epochs)]
plt.plot(range(3, len(val_mae_history) + 1), val_mae_history[2:], 'ro')
plt.plot(range(3, len(mae_history) + 1), mae_history[2:], 'bo')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend(['test', 'train'], loc='upper left')
plt.show()
val_loss_history = [np.mean([x['val_loss'][i] for x in histories]) for i in range(num_epochs)]
loss_history = [np.mean([x['loss'][i] for x in histories]) for i in range(num_epochs)]
plt.plot(range(1, len(val_loss_history) + 1), val_loss_history, 'ro')
plt.plot(range(1, len(loss_history) + 1), loss_history, 'bo')
plt.xlabel('Epochs')
plt.ylabel('LOSS')
plt.legend(['test', 'train'], loc='upper left')
plt.show()
model = get_model()
history = model.fit(inputs, targets[['sensor_5']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_5.h5")
model = get_model()
history = model.fit(inputs, targets[['sensor_6']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_6.h5")
model = get_model()
history = model.fit(inputs, targets[['sensor_7']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_7.h5")
model = get_model()
history = model.fit(inputs, targets[['sensor_8']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_8.h5")
model = get_model()
history = model.fit(inputs, targets[['sensor_1']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_1.h5")
model = get_model()
history = model.fit(inputs, targets[['sensor_2']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_2.h5")
model = get_model()
history = model.fit(inputs, targets[['sensor_3']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_3.h5")
model = get_model()
history = model.fit(inputs, targets[['sensor_4']], epochs=50, batch_size=64, verbose=1)
history.history['mae']
model.save("nn_sensor_4.h5")
###Output
Epoch 1/50
1125965/1125965 [==============================] - 25s 23us/step - loss: 0.3168 - mae: 0.3743
Epoch 2/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.1257 - mae: 0.2427
Epoch 3/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0994 - mae: 0.2072
Epoch 4/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0871 - mae: 0.1889
Epoch 5/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0799 - mae: 0.1777
Epoch 6/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0756 - mae: 0.1710
Epoch 7/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0720 - mae: 0.1655
Epoch 8/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0695 - mae: 0.1617
Epoch 9/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0678 - mae: 0.1588
Epoch 10/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0658 - mae: 0.1558
Epoch 11/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0640 - mae: 0.1530
Epoch 12/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0632 - mae: 0.1512
Epoch 13/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0622 - mae: 0.1496
Epoch 14/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0608 - mae: 0.1475
Epoch 15/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0601 - mae: 0.1457
Epoch 16/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0593 - mae: 0.1445
Epoch 17/50
1125965/1125965 [==============================] - 26s 24us/step - loss: 0.0588 - mae: 0.1436
Epoch 18/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0579 - mae: 0.1418
Epoch 19/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0577 - mae: 0.1415
Epoch 20/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0571 - mae: 0.1407
Epoch 21/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0566 - mae: 0.1397
Epoch 22/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0562 - mae: 0.1390
Epoch 23/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0556 - mae: 0.1380
Epoch 24/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0552 - mae: 0.1374
Epoch 25/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0545 - mae: 0.1366
Epoch 26/50
1125965/1125965 [==============================] - 27s 24us/step - loss: 0.0543 - mae: 0.1363
Epoch 27/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0538 - mae: 0.1355
Epoch 28/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0535 - mae: 0.1348
Epoch 29/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0530 - mae: 0.1341
Epoch 30/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0526 - mae: 0.1334
Epoch 31/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0524 - mae: 0.1330
Epoch 32/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0520 - mae: 0.1327
Epoch 33/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0517 - mae: 0.1323
Epoch 34/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0514 - mae: 0.1317
Epoch 35/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0513 - mae: 0.1313
Epoch 36/50
1125965/1125965 [==============================] - 27s 24us/step - loss: 0.0511 - mae: 0.1312
Epoch 37/50
1125965/1125965 [==============================] - 29s 26us/step - loss: 0.0507 - mae: 0.1308
Epoch 38/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0505 - mae: 0.1303
Epoch 39/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0503 - mae: 0.1301
Epoch 40/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0500 - mae: 0.1296
Epoch 41/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0497 - mae: 0.1293
Epoch 42/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0495 - mae: 0.1289
Epoch 43/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0491 - mae: 0.1283
Epoch 44/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0490 - mae: 0.1281
Epoch 45/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0488 - mae: 0.1278
Epoch 46/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0486 - mae: 0.1275
Epoch 47/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0486 - mae: 0.1274
Epoch 48/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0484 - mae: 0.1270
Epoch 49/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0485 - mae: 0.1273
Epoch 50/50
1125965/1125965 [==============================] - 26s 23us/step - loss: 0.0480 - mae: 0.1269
|
Univariate/ML/NYCT/UnivariateML_NYCT_(OCSVM+XGBoost).ipynb | ###Markdown
Helper Function
###Code
def getScaledTrainTextDataset(dataset_scaled, trainRate=0.3):
print('Shape: ',dataset_scaled.shape)
train_size = int(len(dataset_scaled)*trainRate)
test_size = len(dataset_scaled) - train_size
print('Trainsize:',train_size, ' - Testsize:',test_size)
train_start_index,train_end_index = 0,train_size
test_start_index,test_end_index = train_size,len(dataset_scaled)
train = dataset_scaled[0:train_size]
test = dataset_scaled[train_size:]
return train,test
# convert an array of values into a dataset matrix
def create_XY_lookback_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1):
a = dataset[i:(i+look_back)]
dataX.append(a)
dataY.append(dataset[i + look_back])
return numpy.array(dataX), numpy.array(dataY)
def create_XY_lookback_dataset_multistepOutput(dataset, look_back=1, look_forward = 1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back-1-look_forward):
a = dataset[i:(i+look_back), 0]
dataX.append(a)
dataY.append(dataset[i + look_back:i+look_back+look_forward, 0])
return numpy.array(dataX), numpy.array(dataY)
def calculateRMSE(testPredict,trainY,testY,inverse_transform=True, verbose= 1):
if inverse_transform:
testPredict_inverse = scaler.inverse_transform(testPredict)
testY_inverse = scaler.inverse_transform([testY])
else:
testPredict_inverse = testPredict
testY_inverse = testY
# calculate root mean squared error
testScore = math.sqrt(mean_squared_error(testY_inverse[0], testPredict_inverse[:,0]))
if verbose == 1:
print('Test Score: %.2f RMSE' % (testScore))
print('Persistent Model Testscore small:',global_testPredict_small, ' - Persistent Model Testscore big:', global_testPredict_big)
return testScore
def calculateRMSE_MultipleOutput(trainPredict,testPredict,trainY,testY,inverse_transform=True, verbose= 1):
if inverse_transform:
trainPredict_inverse = scaler.inverse_transform(trainPredict)
trainY_inverse = scaler.inverse_transform(trainY)
testPredict_inverse = scaler.inverse_transform(testPredict)
testY_inverse = scaler.inverse_transform(testY)
else:
trainPredict_inverse = trainPredict
trainY_inverse = trainY
testPredict_inverse = testPredict
testY_inverse = testY
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY_inverse[0], trainPredict_inverse[:,0]))
testScore = math.sqrt(mean_squared_error(testY_inverse[0], testPredict_inverse[:,0]))
if verbose == 1:
print('Train Score: %.2f RMSE' % (trainScore))
print('Test Score: %.2f RMSE' % (testScore))
print('Persistent Model Testscore small:',global_testPredict_small, ' - Persistent Model Testscore big:', global_testPredict_big)
return trainScore, testScore
def plotErrorPrediction(dataset_scaled, testPredict, trainPredict, ShowTestError=True, inverse_transform=True):
if inverse_transform:
trainPredict_inverse = scaler.inverse_transform(trainPredict)
testPredict_inverse = scaler.inverse_transform(testPredict)
else:
trainPredict_inverse = trainPredict
testPredict_inverse = testPredict
# shift train predictions for plotting
trainPredictPlot = numpy.zeros_like(dataset_scaled)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict_inverse
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset_scaled)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict_inverse
error_test = dataset-testPredictPlot
error_test[np.isnan(error_test)] = 0.
# error_test[error_training<2000] = 0.
error_test = np.abs(error_test)
import seaborn as sns
sns.distplot(error_test[error_test!=0])
print(pandas.DataFrame(error_test[error_test!=0]).describe())
return error_test, trainPredictPlot,testPredictPlot
def plotErrorPredictionValidation(dataset_scaled, validationPredict, testPredict, trainPredict, ShowTestError=True, inverse_transform=True):
if inverse_transform:
trainPredict_inverse = scaler.inverse_transform(trainPredict)
testPredict_inverse = scaler.inverse_transform(testPredict)
else:
trainPredict_inverse = trainPredict
testPredict_inverse = testPredict
# shift train predictions for plotting
trainPredictPlot = numpy.zeros_like(dataset_scaled)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict_inverse
# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset_scaled)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1+len(validationPredict):len(dataset)-1, :] = testPredict_inverse
error_test = dataset-testPredictPlot
error_test[np.isnan(error_test)] = 0.
# error_test[error_training<2000] = 0.
error_test = np.abs(error_test)
import seaborn as sns
sns.distplot(error_test[error_test!=0])
print(pandas.DataFrame(error_test[error_test!=0]).describe())
return error_test, trainPredictPlot,testPredictPlot
###Output
_____no_output_____
###Markdown
OCSVM
###Code
import warnings
from sklearn.cluster import KMeans
from statsmodels.tsa.ar_model import AR
from sklearn.metrics import mean_squared_error
from math import sqrt
from pandas import read_csv
import pandas as pd
from pandas import DataFrame
from pandas import concat
import numpy as np
from matplotlib import pyplot
from sklearn.svm import OneClassSVM
from sklearn import preprocessing
import sys
class OneClassSVM_AnomalyDetection:
def __init__(self,path, window_width, nu, train_rate):
self.df = read_csv(path, header=0, index_col=0, parse_dates=True,squeeze=True)
self.df = self.df.reset_index(drop=True)
self.df.rename(columns={'anomaly':'is_anomaly'}, inplace=True)
self.nu = nu
self.window_width = window_width
series = pd.DataFrame(self.df.iloc[:,0].values)
self.values = DataFrame(series.values)
self.dataframe = concat([self.values.shift(1), self.values], axis=1)
self.dataframe.columns = ['t', 't+1']
self.train_size = int(len(self.values) * train_rate)
# train_labeled, test_labeled = self.dataframe.values[1:self.train_size], self.dataframe.values[self.train_size:]
# self.train_X, self.train_y = train_labeled[:,0], train_labeled[:,1]
# self.test_X, self.test_y = test_labeled[:,0], test_labeled[:,1]
# self.create_persistence()
# X = series.values
# self.train, self.test = X[1:self.train_size], X[self.train_size:]
def __build_sets(self):
train_labeled, test_labeled = self.dataframe.values[1:self.train_size], self.dataframe.values[self.train_size:]
self.train_X, self.train_y = train_labeled[:,0], train_labeled[:,1]
self.test_X, self.test_y = test_labeled[:,0], test_labeled[:,1]
X = self.dataframe.iloc[:,1].values
self.train, self.test = X[1:self.train_size], X[self.train_size:]
def standardize_dataframe(self):
X = self.dataframe.values
self.scalar = preprocessing.StandardScaler().fit(X)
X = self.scalar.transform(X)
self.dataframe = pd.DataFrame(X)
def inverse_standardize_dataframe(self):
X = self.dataframe.values
X = self.scalar.inverse_transform(X)
self.dataframe = pd.DataFrame(X)
def model_persistence(self, x):
return x
def create_persistence(self):
rmse = sqrt(mean_squared_error(self.dataframe['t'].iloc[self.train_size:], self.dataframe['t+1'].iloc[self.train_size::]))
# print('Persistent Model RMSE: %.3f' % rmse)
def fit(self):
self.create_persistence()
self.standardize_dataframe()
self.__build_sets()
self.compute_anomalyScores()
self.inverse_standardize_dataframe()
def getWindowedVectors(self, X):
vectors = []
for i,_ in enumerate(X[:-self.window_width+1]):
vectors.append(X[i:i+self.window_width])
return vectors
def compute_anomalyScores(self):
self.errors = np.zeros_like(self.test)
# compute anomalies
warnings.filterwarnings("ignore")
# history = self.getWindowedVectors(self.train)
for i,_ in enumerate(self.test[:-self.window_width+1]):
sys.stdout.write('\r'+str(i)+':'+str(len(self.test) - self.window_width))
window = self.test[i:i+self.window_width]
window2D = np.zeros((len(window),2))
window2D[:,1] = window
clf=OneClassSVM(nu=self.nu)
clf.fit(window2D)
error = clf.decision_function(window2D)
error[error>0] = 0
self.errors[i:i+self.window_width] += error*-10
# normalize anomaly score
self.errors[:-self.window_width+1] /= self.window_width
for i,error in enumerate(self.test[-self.window_width+1:]):
self.errors[-self.window_width + 1 + i] /=self.window_width-(i+1)
# self.errors_original = self.errors
# scalar = preprocessing.MinMaxScaler((0,1)).fit(self.errors.reshape(-1,1))
# self.errors = scalar.transform(self.errors.reshape(-1,1))*10
def plot(self):
# plot predicted error
pyplot.figure(figsize=(50,5))
pyplot.plot(self.test)
# pyplot.plot(self.predictions, color='red')
pyplot.plot(self.errors, color = 'red', linewidth=0.5)
pyplot.show()
def get_roc_auc(self, plot=True, verbose=True):
# get the predicted errors of the anomaly points
indices = self.df[self.df['is_anomaly']==1].index >self.train_size
true_anomaly_predicted_errors = self.errors[self.df[self.df['is_anomaly']==1].index[indices] - self.train_size ]
if len(true_anomaly_predicted_errors) == 0:
return np.nan
# sort them
true_anomaly_predicted_errors = np.sort(true_anomaly_predicted_errors,axis=0).reshape(-1)
true_anomaly_predicted_errors_extended = np.r_[np.linspace(0,true_anomaly_predicted_errors[0],40)[:-1],true_anomaly_predicted_errors]
true_anomaly_predicted_errors_extended = np.r_[true_anomaly_predicted_errors_extended, true_anomaly_predicted_errors_extended[-1] + np.mean(true_anomaly_predicted_errors_extended)]
# now iterate through the predicted errors from small to big
# for each value, count how many other points have an equal or bigger error
FPR = [] # fp/n https://en.wikipedia.org/wiki/Sensitivity_and_specificity
TPR = [] # tp/p
p = len(true_anomaly_predicted_errors)
Thresholds = []
for predictederror in true_anomaly_predicted_errors_extended:
threshold = predictederror
tp = len(true_anomaly_predicted_errors[true_anomaly_predicted_errors>= threshold])
fp = len(self.errors[self.errors>=threshold])-len(true_anomaly_predicted_errors[true_anomaly_predicted_errors>=threshold])
fpr =fp/len(self.errors)
FPR.append(fpr)
TPR.append(tp/p)
if verbose:
print("Threshold: {0:25} - FP: {1:4} - TP: {2:4} - FPR: {3:21} - TPR: {4:4}".format(threshold,fp, tp, fpr, tp/p))
import matplotlib.pyplot as plt
if plot:
plt.figure()
plt.axis([0, 1, 0, 1])
plt.plot(FPR,TPR)
plt.show()
# This is the AUC
from sklearn.metrics import auc
print('AUC: ' ,auc(FPR,TPR) )
return auc(FPR,TPR)
# iforest = OneClassSVM_AnomalyDetection('Univariate/YahooServiceNetworkTraffic/A1Benchmark/real_1.csv',30,0.7,0.3)
# iforest.fit()
# iforest.plot()
# iforest.get_roc_auc(verbose=False)
###Output
_____no_output_____
###Markdown
Evaluation Results of NYCT
###Code
import os
import datetime
startTime = datetime.datetime.now()
import glob
cnn = OneClassSVM_AnomalyDetection( 'drive/My Drive/MT/Experiments/Univariate/NYC_Taxi/nyc_taxi.csv', 40,0.1,0.3)
cnn.fit()
cnn.get_roc_auc(plot=False,verbose=False)
endTime = datetime.datetime.now()
diff = endTime - startTime
print('Time: ',diff)
cnn = OneClassSVM_AnomalyDetection( 'drive/My Drive/MT/Experiments/Univariate/NYC_Taxi/nyc_taxi.csv', 40,0.1,0.3)
startTime = datetime.datetime.now()
cnn.fit()
endTime = datetime.datetime.now()
diff = endTime - startTime
print('Inference Time: ',diff)
###Output
7184:7184AUC: 0.5210562102565175
Time: 0:00:06.942960
7184:7184Inference Time: 0:00:07.040107
###Markdown
XGBoost
###Code
import warnings
from sklearn.cluster import KMeans
from statsmodels.tsa.ar_model import AR
from sklearn.metrics import mean_squared_error
from math import sqrt
from pandas import read_csv
import pandas as pd
from pandas import DataFrame
from pandas import concat
import numpy as np
from matplotlib import pyplot
from xgboost import XGBRegressor
from sklearn import preprocessing
import sys
class XGBRegressor_AnomalyDetection:
def __init__(self,path, window_width, nu, train_rate):
self.df = read_csv(path, header=0, index_col=0, parse_dates=True,squeeze=True)
self.df = self.df.reset_index(drop=True)
self.df.rename(columns={'anomaly':'is_anomaly'}, inplace=True)
self.nu = nu
self.window_width = window_width
series = pd.DataFrame(self.df.iloc[:,0].values)
self.values = DataFrame(series.values)
self.dataframe = concat([self.values.shift(1), self.values], axis=1)
self.dataframe.columns = ['t', 't+1']
self.train_size = int(len(self.values) * train_rate)
# train_labeled, test_labeled = self.dataframe.values[1:self.train_size], self.dataframe.values[self.train_size:]
# self.train_X, self.train_y = train_labeled[:,0], train_labeled[:,1]
# self.test_X, self.test_y = test_labeled[:,0], test_labeled[:,1]
# self.create_persistence()
# X = series.values
# self.train, self.test = X[1:self.train_size], X[self.train_size:]
def __build_sets(self):
train_labeled, test_labeled = self.dataframe.values[1:self.train_size], self.dataframe.values[self.train_size:]
self.train_X, self.train_y = train_labeled[:,0], train_labeled[:,1]
self.test_X, self.test_y = test_labeled[:,0], test_labeled[:,1]
X = self.dataframe.iloc[:,1].values
self.train, self.test = X[1:self.train_size], X[self.train_size:]
def standardize_dataframe(self):
X = self.dataframe.values
self.scalar = preprocessing.StandardScaler().fit(X)
X = self.scalar.transform(X)
self.dataframe = pd.DataFrame(X)
def inverse_standardize_dataframe(self):
X = self.dataframe.values
X = self.scalar.inverse_transform(X)
self.dataframe = pd.DataFrame(X)
def model_persistence(self, x):
return x
def create_persistence(self):
rmse = sqrt(mean_squared_error(self.dataframe['t'].iloc[self.train_size:], self.dataframe['t+1'].iloc[self.train_size::]))
# print('Persistent Model RMSE: %.3f' % rmse)
def fit(self):
self.create_persistence()
self.__build_sets()
self.compute_anomalyScores()
def getWindowedVectors(self, X):
vectors = []
for i,_ in enumerate(X[:-self.window_width+1]):
vectors.append(X[i:i+self.window_width])
return vectors
def compute_anomalyScores(self):
self.xgb = XGBRegressor()
self.xgb.fit(self.train_X.reshape(-1,1),self.train_y.reshape(-1,1))
self.predictions = self.xgb.predict(self.test_X.reshape(-1,1))
rmse = sqrt(mean_squared_error(self.test, self.predictions))
self.errors = np.absolute(self.test - np.array(self.predictions))
# print('Prediction Test RMSE: %.3f' % rmse)
def plot(self):
# plot predicted error
pyplot.figure(figsize=(50,5))
pyplot.plot(self.test)
pyplot.plot(self.predictions, color='blue')
pyplot.plot(self.errors, color = 'red', linewidth=0.5)
pyplot.show()
def get_roc_auc(self, plot=True, verbose=True):
# get the predicted errors of the anomaly points
indices = self.df[self.df['is_anomaly']==1].index >self.train_size
true_anomaly_predicted_errors = self.errors[self.df[self.df['is_anomaly']==1].index[indices] - self.train_size ]
if len(true_anomaly_predicted_errors) == 0:
return np.nan
# sort them
true_anomaly_predicted_errors = np.sort(true_anomaly_predicted_errors,axis=0).reshape(-1)
true_anomaly_predicted_errors_extended = np.r_[np.linspace(0,true_anomaly_predicted_errors[0],40)[:-1],true_anomaly_predicted_errors]
true_anomaly_predicted_errors_extended = np.r_[true_anomaly_predicted_errors_extended, true_anomaly_predicted_errors_extended[-1] + np.mean(true_anomaly_predicted_errors_extended)]
# now iterate through the predicted errors from small to big
# for each value, count how many other points have an equal or bigger error
FPR = [] # fp/n https://en.wikipedia.org/wiki/Sensitivity_and_specificity
TPR = [] # tp/p
p = len(true_anomaly_predicted_errors)
Thresholds = []
for predictederror in true_anomaly_predicted_errors_extended:
threshold = predictederror
tp = len(true_anomaly_predicted_errors[true_anomaly_predicted_errors>= threshold])
fp = len(self.errors[self.errors>=threshold])-len(true_anomaly_predicted_errors[true_anomaly_predicted_errors>=threshold])
fpr =fp/len(self.errors)
FPR.append(fpr)
TPR.append(tp/p)
if verbose:
print("Threshold: {0:25} - FP: {1:4} - TP: {2:4} - FPR: {3:21} - TPR: {4:4}".format(threshold,fp, tp, fpr, tp/p))
import matplotlib.pyplot as plt
if plot:
plt.figure()
plt.axis([0, 1, 0, 1])
plt.plot(FPR,TPR)
plt.show()
# This is the AUC
from sklearn.metrics import auc
print('AUC: ' ,auc(FPR,TPR) )
return auc(FPR,TPR)
###Output
_____no_output_____
###Markdown
Evaluation Results of NYCT
###Code
import os
import datetime
startTime = datetime.datetime.now()
import glob
cnn = XGBRegressor_AnomalyDetection( 'drive/My Drive/MT/Experiments/Univariate/NYC_Taxi/nyc_taxi.csv', 1000,0.5,0.3)
cnn.fit()
cnn.get_roc_auc(plot=False,verbose=False)
endTime = datetime.datetime.now()
diff = endTime - startTime
print('Time: ',diff)
startTime = datetime.datetime.now()
cnn.xgb.predict(cnn.test_X.reshape(-1,1))
endTime = datetime.datetime.now()
diff = endTime - startTime
print('Inference Time: ',diff)
###Output
[03:44:49] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
AUC: 0.4602617410866643
Time: 0:00:00.139282
Inference Time: 0:00:00.020563
|
notebooks/Tutorial2_Units+Database.ipynb | ###Markdown
AMPEL intro
AMPEL is a software framework designed for processing heterogeneous streamed data. AMPEL was not developed to provide a specific scientific resource, but rather an environment where it is easy to ensure that a scientific program fulfills the strict requirements of the next generation of real-time experiments: efficient and powerful analysis, where provenance and reproducibility are paramount. In particular, guaranteeing the last point requires algorithms (which make real-time decisions) to be separated from infrastructure (which will likely evolve with time and project phase).
An AMPEL _user_ constructs a configuration file which describes every step of how an incoming alert stream should be processed. This can be broken down into selecting which _units_ should be executed, and which _parameters_ each of these should be provided with. An AMPEL _live instance_ executes these units, based on the input data, as requested and stores all intermediate and final data in a database. Provenance/reproducibility is ensured through multiple layers. First, each live instance is run from a container which can be retrieved later and, together with a data archive, replay the full stream. Second, AMPEL contains an extensive set of logs and a transient-specific _Journal_ which details all related actions/decisions. Finally, each unit and channel configuration file is drawn from a specific (tagged) github version. The series of notebooks provided here gradually builds toward a sample full configuration.
Sample science case
Each AMPEL _channel_ is designed with a science goal (or "hypothesis/test") in mind. A much discussed current topic is the origin of the extragalactic neutrino flux observed e.g. by IceCube, with one of the potential sources being supernovae interacting with circumstellar material (SNIIn). We here wish to investigate whether a particular subtype of these, SN2009ip-like SNe with recent previous outbursts, are regularly found within the uncertainty region of neutrino alerts. The steps for this science program would be: identify transients with optical lightcurves compatible with SN2009ip AND which coincide with neutrino alerts. For such targets, obtain follow-up spectroscopy to confirm the classification (i.e. an external reaction).
This notebook - Tutorial 2
This notebook will repeat the analysis presented in the first tutorial, but where all units are read from tagged (github) repositories. From there they can be referenced, be included in a computer-center AMPEL instance and be distributed to the community. These can be found in the `ampel/contrib/sample/` subdirectories of the `Ampel-contrib-sample` repository.
Furthermore, intermediate results will here be stored in a local MongoDB database and the AMPEL schedulers will be used to carry out the requested operations. This notebook thus assumes a mongod instance to be running and accessible through port 27017. (The port can be changed through the `mongo` key of the `ampel_config.yml` file.)
###Code
import os
%load_ext ampel_quick_import
%qi DevAmpelContext AmpelLogger T2Processor T3Processor ChannelModel AlertProcessor TarAlertLoader ChannelModel AbsAlertFilter
AMPEL_CONF = "../../ampel_config.yml"
ALERT_ARCHIVE = '../sample_data/ztfpub_200917_pruned.tar.gz'
# The operation context is created based on a setup configuration file.
# db_prefix sets the DB name to use
ctx = DevAmpelContext.load(
config_file_path = AMPEL_CONF,
db_prefix = "AmpelTutorial",
purge_db = True,
)
# A scientific program, a channel, is added
ctx.add_channel(
name="demo_SN09if",
access=['ZTF', 'ZTF_PUB']
)
# The channel is constructed from two units, each controlled by parameters.
# Lets start with the straightforward filter
filter_conf = {
'min_rb':0.3,
'min_ndet':7,
'min_tspan':10,
'max_tspan' : 200,
'min_gal_lat':15,
}
filter_config_id = ctx.add_config_id( filter_conf )
# The template matching has now been moved into a separate unit:
# T2SNcosmoComp
# where we added some configurability.
match_conf = {
'target_model_name':'v19-2009ip-corr',
'base_model_name':'salt2',
'chi2dof_cut':2.,
'chicomp_scaling':0.5,
}
match_config_id = ctx.add_config_id( match_conf )
# A channel can specify which streams to read, how these should be combined and what units
# should be run on each data combination.
# This is provided as directives to the AlertProcessor, which besides processing the alerts
# also submit tickets to the DB concerning further operations to execute for any transients
# that pass the initial filter stage.
ap = AlertProcessor(
context = ctx,
process_name = "ipyton_notebook_test",
supplier = "ZiAlertSupplier",
log_profile = "debug",
directives = [
{
"channel": "demo_SN09if",
"filter": {"unit": "SimpleDecentFilterCopy","config": filter_config_id
},
"stock_update": "ZiStockIngester",
't0_add': {
"ingester": "ZiAlertContentIngester",
"t1_combine": [
{
"ingester": "PhotoCompoundIngester",
"config": {"combiner": "ZiT1Combiner"},
"t2_compute": {
"ingester": "PhotoT2Ingester",
"config": {"tags": ["ZTF"]},
"units": [
{'unit': 'T2SNcosmoComp',
'config': match_config_id
},
]
}
}
],
}
}
]
)
# Provide a link to the alert collection to use
ap.set_loader(TarAlertLoader(file_path=ALERT_ARCHIVE))
ap.set_iter_max(1000)
ap.run()
t2p = T2Processor(context=ctx, process_name="T2Processor_test", log_profile="debug")
t2p.run()
###Output
_____no_output_____
###Markdown
So far an input data stream has been filtered, and some sort of calculation has been done on the accepted sample. The next step for a full channel is usually some sort of _reaction_. These can vary between sending immediate alarms (e.g. through Slack or GCN), triggering follow-up observations or propagating information (e.g. for inspection in a frontend such as SkyPortal). Such reactions take place in the T3 tier. A simple `T3HelloWorld` unit is used here, but the `react` method can be configured to do most other things; sample T3 units react with TNS, Slack, Dropbox, SkyPortal and GCN.
Key steps in configuring the T3 process come through the `selection` directive, where we select transients that produced the required target match, and the `execute` directive, which regulates which T3 units are run.
###Code
# Test base
t3 = T3Processor(
context=ctx,
process_name = "T3Processor_test",
log_profile = "default", # debug
channel = "demo_SN09if",
directives = [ {
"select": {
"unit": "T3FilteringStockSelector",
"config": {
't2_filter': {
'unit': 'T2SNcosmoComp',
'match': {'target_match': True}
},
}
},
"load": {
"unit": "T3SimpleDataLoader",
"config": {
"directives": ["TRANSIENT", "DATAPOINT", "COMPOUND", "T2RECORD"],
}
},
"run": {
"unit": "T3UnitRunner",
"config": {
"directives": [
{
"project": {
"unit": "T3ChannelProjector",
"config": {
"channel": "demo_SN09if"
}
},
"execute": [
{
"unit": "T3HelloWorld",
"config": {
't2info_from' : ['T2SNcosmoComp']
},
},
]
}
]
}
}
} ]
)
t3.run()
###Output
_____no_output_____ |
Notebooks/.ipynb_checkpoints/12.MachineLearningBulkModulusRandomForest-checkpoint.ipynb | ###Markdown
Predicting bulk moduli with matminer
Fit data mining models to ~10,000 calculated bulk moduli from Materials Project
**Time to complete: 30 minutes**
The tutorial is based on the matminer tutorials from https://github.com/hackingmaterials/matminer_examples. This notebook is an example of using the MP data retrieval tool :code:`retrieve_MP.py` to retrieve computed bulk moduli from the Materials Project database in the form of a pandas dataframe, using matminer's tools to populate the dataframe with descriptors/features from pymatgen, and then fitting regression models from the scikit-learn library to the dataset.
Overview
In this notebook, we will:
1. Load and examine a dataset in a pandas dataframe
2. Add descriptors to the dataframe using matminer
3. Train, compare, and visualize two machine learning methods with scikit-learn and Plotly.
Software installation
To run this and other Python notebooks, I recommend using Jupyter. Here are the steps to install JupyterLab:
- If you do not have Conda already installed on your computer, please follow the instructions at https://docs.conda.io/en/latest/miniconda.html. Then you can proceed to install JupyterLab, which will also install Python if it is not already installed.
- `conda install jupyterlab`
- `conda install ipywidgets`
For the visualization, we are using Plotly. To install it:
- `conda install plotly`
- Install Node.js from https://nodejs.org/en/download/
- `jupyter labextension install [email protected]`
1. Load and process data set
We use matminer to load a data set of computed elastic properties of materials from Materials Project.
1.1 First load needed Python packages
###Code
# filter warnings messages from the notebook
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
# Set pandas view options
pd.set_option('display.width', 800)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# Set path for saved images
import os
if not os.path.exists("images"):
os.mkdir("images")
###Output
_____no_output_____
###Markdown
1.1 Use matminer to obtain data from MP (automatically) in a "pandas" dataframe
###Code
from matminer.data_retrieval.retrieve_MP import MPDataRetrieval
api_key = None # Set your MP API key here. If set as an environment variable 'MAPI_KEY', set it to 'None'
mpr = MPDataRetrieval(api_key) # Create an adapter to the MP Database.
# criteria is to get all entries with elasticity (K_VRH is bulk modulus) data
criteria = {'elasticity.K_VRH': {'$ne': None}}
# properties are the materials attributes we want
# See https://github.com/materialsproject/mapidoc for available properties you can specify
properties = ['pretty_formula', 'spacegroup.symbol', 'elasticity.K_VRH', 'formation_energy_per_atom', 'band_gap',
'e_above_hull', 'density', 'volume', 'nsites']
# get the data!
df_mp = mpr.get_dataframe(criteria=criteria, properties=properties)
print('Number of bulk moduli extracted = ', len(df_mp))
###Output
100%|██████████| 13172/13172 [00:10<00:00, 1231.82it/s]
###Markdown
1.2 Explore the dataset
The data set comes as a pandas DataFrame, which is a kind of "spreadsheet" object in Python. DataFrames have several useful methods you can use to explore and clean the data, some of which we'll explore below.
###Code
df_mp.head()
###Output
_____no_output_____
###Markdown
A pandas DataFrame includes a function called `describe()` that helps determine statistics for the various numerical / categorical columns in the data.
###Code
df_mp.describe()
###Output
_____no_output_____
###Markdown
Sometimes, the `describe()` function will reveal outliers that indicate mistakes in the data, for example negative (hence unphysical) minimum bulk/shear moduli, or maximum bulk/shear moduli that are too high.
The data looks OK at first glance, meaning that there are no clear problems with the ranges of the various properties. Therefore, we won't filter out any data at this stage. Note that the `describe()` function only describes numerical columns by default.
1.3 Filter out unstable entries and negative bulk moduli
The data set above has some entries that correspond to thermodynamically or mechanically unstable materials. We filter these materials out using the distance from the convex hull and `K_VRH` (the Voigt-Reuss-Hill average of the bulk modulus).
###Code
df = df_mp
df = df[df['elasticity.K_VRH'] > 0]
df = df[df['e_above_hull'] < 0.1]
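# Optional check (added): how many entries survive the stability and positivity filters?
print('Entries remaining after filtering:', len(df), 'out of', len(df_mp))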
df.describe()
###Output
_____no_output_____
###Markdown
1.4 Add descriptors/features Create a new descriptor for the volume per atom and add it to the pandas dataframe.
###Code
# add volume per atom descriptor
df['vpa'] = df['volume']/df['nsites']
# explore columns
df.head()
###Output
_____no_output_____
###Markdown
1.5 Add several more descriptors using MatMiner’s pymatgen descriptor getter tools
###Code
from matminer.featurizers.composition import ElementProperty
from matminer.utils.data import PymatgenData
from pymatgen.core import Composition
df["composition"] = df['pretty_formula'].map(lambda x: Composition(x))
dataset = PymatgenData()
descriptors = ['row', 'group', 'atomic_mass',
'atomic_radius', 'boiling_point', 'melting_point', 'X']
stats = ["mean", "std_dev"]
ep = ElementProperty(data_source=dataset, features=descriptors, stats=stats)
df = ep.featurize_dataframe(df, "composition")
#Remove NaN values
df = df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
2. Fit a Linear Regression model using scikit-learn 2.1 Define which column is the target output and which columns are the relevant descriptors The data set above has many columns - we won't need all of them for our modeling. Here we are trying to predict `elasticity.K_VRH` (the Voigt-Reuss-Hill average of the bulk modulus), so we take that column as the target and drop the columns that are neither the target nor useful descriptors.
###Code
# target output column
y = df['elasticity.K_VRH'].values
# possible descriptor columns
excluded = ["elasticity.K_VRH", "pretty_formula",
"volume", "nsites", "spacegroup.symbol", "e_above_hull", "composition"]
X = df.drop(excluded, axis=1)
print("There are {} possible descriptors:\n\n{}".format(X.shape[1], X.columns.values))
###Output
_____no_output_____
###Markdown
2.2 Fit the linear regression model
###Code
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
import numpy as np
lr = LinearRegression()
lr.fit(X, y)
# get fit statistics
print('Training R2 = ' + str(round(lr.score(X, y), 3)))
print('Training RMSE = %.3f' % np.sqrt(mean_squared_error(y_true=y, y_pred=lr.predict(X))))
###Output
_____no_output_____
###Markdown
2.3 Cross validate the results
###Code
from sklearn.model_selection import KFold, cross_val_score, cross_val_predict
# Use 10-fold cross validation (90% training, 10% test)
crossvalidation = KFold(n_splits=10, shuffle=True)
# compute cross validation scores for the linear regression model
r2_scores = cross_val_score(lr, X, y, scoring='r2', cv=crossvalidation, n_jobs=1)
print(r2_scores)
mse_scores = cross_val_score(lr, X, y, scoring='neg_mean_squared_error', cv=crossvalidation, n_jobs=1)
rmse_scores = [np.sqrt(abs(s)) for s in mse_scores]
print('Cross-validation results:')
print('Folds: %i, mean R2: %.3f' % (len(r2_scores), np.mean(np.abs(r2_scores))))
print('Folds: %i, mean RMSE: %.3f' % (len(rmse_scores), np.mean(np.abs(rmse_scores))))
###Output
_____no_output_____
###Markdown
2.4 Scatter density plot of the results with Plotly, colored by a kernel density estimate
###Code
import plotly.graph_objects as PlotlyFig
from scipy import stats
# a line to represent a perfect model with 1:1 prediction
xy_params = {'x_col': [0, 400],
'y_col': [0, 400],
'color': 'black',
'mode': 'lines',
'legend': None,
'text': None,
'size': None}
xx=y
yy=lr.predict(X)
# Calculate the point density
kde = stats.gaussian_kde([xx,yy])
zz = kde([xx,yy])
# Sort the points by density, so that the densest points are plotted last
idx = zz.argsort()
xx, yy, z = xx[idx], yy[idx], zz[idx]
fig = PlotlyFig.Figure(data=PlotlyFig.Scattergl(
x=xx,
y=yy,
mode='markers',
marker=dict(
size=5,
color=z, #set color equal to a variable
colorscale='Viridis', # one of plotly colorscales
),
text=df['pretty_formula']
))
fig.update_layout(xaxis_title='DFT (MP) bulk modulus (GPa)',
yaxis_title='Predicted bulk modulus (GPa)',
title='Linear regression',
width=800,
height=800,
font=dict(family="Helvetica",
size=18, color="black")
)
fig.update_yaxes(scaleanchor="x")
fig.add_trace(PlotlyFig.Scatter(x=[0,400], y=[0,400],
mode='lines'))
fig.update_layout(xaxis=dict(range=[0, 400]),
yaxis=dict(range=[0, 400]),
showlegend=False)
fig.show()
# Save image, can change format by simply changing the file suffix
fig.write_image("images/LinearRegression.jpeg")
###Output
_____no_output_____
###Markdown
Great! We just fit a linear regression model to pymatgen features using matminer and sklearn. Now let’s use a Random Forest model to examine the importance of our features. 3. Follow similar steps for a Random Forest model 3.1 Fit the Random Forest model, get R2 and RMSE
###Code
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=50, random_state=1)
rf.fit(X, y)
print('R2 = ' + str(round(rf.score(X, y), 3)))
print('RMSE = %.3f' % np.sqrt(mean_squared_error(y_true=y, y_pred=rf.predict(X))))
###Output
_____no_output_____
###Markdown
3.2 Cross-validate the results
###Code
# compute cross validation scores for random forest model
scores = cross_val_score(rf, X, y, scoring='neg_mean_squared_error', cv=crossvalidation, n_jobs=-1)
rmse_scores = [np.sqrt(abs(s)) for s in scores]
print('Cross-validation results:')
print('Folds: %i, mean RMSE: %.3f' % (len(scores), np.mean(np.abs(rmse_scores))))
###Output
_____no_output_____
###Markdown
3.3 Plot the random forest model
###Code
# a line to represent a perfect model with 1:1 prediction
xy_params = {'x_col': [0, 400],
'y_col': [0, 400],
'color': 'black',
'mode': 'lines',
'legend': None,
'text': None,
'size': None}
xx=y
#yy=rf.predict(X)
yy=cross_val_predict(rf, X, y, cv=crossvalidation)
# Calculate the point density
kde = stats.gaussian_kde([xx,yy])
zz = kde([xx,yy])
# Sort the points by density, so that the densest points are plotted last
idx = zz.argsort()
xx, yy, z = xx[idx], yy[idx], zz[idx]
fig = PlotlyFig.Figure(data=PlotlyFig.Scattergl(
x=xx,
y=yy,
mode='markers',
marker=dict(
size=5,
color=z, #set color equal to a variable
colorscale='Viridis', # one of plotly colorscales
),
text=df['pretty_formula']
))
fig.update_layout(xaxis_title='DFT (MP) bulk modulus (GPa)',
yaxis_title='Predicted bulk modulus (GPa)',
title='Random Forest Model',
width=800,
height=800,
font=dict(family="Helvetica",
size=18, color="black")
)
fig.update_yaxes(scaleanchor="x")
fig.add_trace(PlotlyFig.Scatter(x=[0,400], y=[0,400],
mode='lines'))
fig.update_layout(xaxis=dict(range=[0, 400]),
yaxis=dict(range=[0, 400]),
showlegend=False)
fig.show()
# Save image, can change format by simply changing the file suffix
fig.write_image("images/RandomForest.jpeg")
###Output
_____no_output_____
###Markdown
This looks clearly better than the linear regression model, which is reflected in the lower cross-validated RMSE. You could (optionally) visualize the training error by replacing `cross_val_predict(rf, X, y, cv=crossvalidation)` in the above cell with `rf.predict(X)`. That would look better still, but would not be an accurate representation of your prediction error. 3.4 Visualize the distribution of training and testing errors
###Code
from sklearn.model_selection import train_test_split
X['pretty_formula'] = df['pretty_formula']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
train_formula = X_train['pretty_formula']
X_train = X_train.drop('pretty_formula', axis=1)
test_formula = X_test['pretty_formula']
X_test = X_test.drop('pretty_formula', axis=1)
rf_reg = RandomForestRegressor(n_estimators=50, random_state=1)
rf_reg.fit(X_train, y_train)
# get fit statistics
print('training R2 = ' + str(round(rf_reg.score(X_train, y_train), 3)))
print('training RMSE = %.3f' % np.sqrt(mean_squared_error(y_true=y_train, y_pred=rf_reg.predict(X_train))))
print('test R2 = ' + str(round(rf_reg.score(X_test, y_test), 3)))
print('test RMSE = %.3f' % np.sqrt(mean_squared_error(y_true=y_test, y_pred=rf_reg.predict(X_test))))
import plotly.graph_objects as PlotlyFig
data_train=y_train-rf_reg.predict(X_train)
data_test=y_test-rf_reg.predict(X_test)
fig = PlotlyFig.Figure()
fig.add_trace(PlotlyFig.Histogram(x=data_train,
name='Training',
histnorm='probability',
xbins=dict(
start=-50.0,
end=50,
size=2
)))
fig.add_trace(PlotlyFig.Histogram(x=data_test,
name='Testing',
histnorm='probability',
xbins=dict(
start=-50.0,
end=50,
size=2
)))
fig.update_layout(xaxis_title='Bulk modulus prediction residual (GPa)',
yaxis_title='Probability',
title='Random Forest Regression Residuals',
barmode='stack',
width=1200,
height=600,
font=dict(family="Helvetica",
size=18, color="black")
)
fig.show()
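# Optionally save this figure as well, mirroring the earlier plotting cells
# (the filename is arbitrary; any supported image suffix works).
fig.write_image("images/RandomForestResiduals.jpeg")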
###Output
_____no_output_____
###Markdown
3.5 Plot the importance of the features we usedLet's see what are the most important features used by the random forest model.
###Code
import plotly.graph_objects as PlotlyFig
importances = rf.feature_importances_
included = X.columns.values
indices = np.argsort(importances)[::-1]
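# Added: also list the ten most important descriptors as plain text, which is
# useful when the interactive plot is not rendered (e.g. in a static export).
for name, score in zip(included[indices][:10], importances[indices][:10]):
    print('{:<35s} {:.3f}'.format(name, score))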
fig = PlotlyFig.Figure(data=PlotlyFig.Bar(
x=included[indices], y=importances[indices]))
fig.update_layout(yaxis_title='Importance',
title='Random Forest Model Features',
width=1200,
height=800,
font=dict(family="Helvetica",
size=18, color="black")
)
fig.show()
###Output
_____no_output_____
###Markdown
Features relating to the material's volume per atom and density, together with the melting and boiling points of its constituent elements, are the most important in the random forest model. This concludes the tutorial! You are now familiar with some of the basic features of data retrieval and machine learning. As an optional extra, the cell below fits a kernel ridge regression model to the same descriptors.
###Code
# Import libraries
from sklearn.kernel_ridge import KernelRidge
# Section 3.4 appended the non-numeric 'pretty_formula' column to X for plotting,
# so drop it again (if present) before fitting a kernel model on the descriptors.
X_num = X.drop('pretty_formula', axis=1, errors='ignore')
krr = KernelRidge(alpha=1.0, kernel='rbf')
krr.fit(X_num, y)
print('R2 = ' + str(round(krr.score(X_num, y), 3)))
print('RMSE = %.3f' % np.sqrt(mean_squared_error(y_true=y, y_pred=krr.predict(X_num))))
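# Added: a cross-validated estimate using the KFold splitter defined earlier;
# as with the other models, the training-set scores above are optimistic.
krr_r2 = cross_val_score(krr, X_num, y, scoring='r2', cv=crossvalidation, n_jobs=1)
print('Cross-validated R2: %.3f' % np.mean(krr_r2))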
###Output
_____no_output_____ |
MelSpecVAE_v1.ipynb | ###Markdown
MelSpecVAE v.1.5

Author: Moisés Horta Valenzuela, 2021
> Website: [moiseshorta.audio](https://moiseshorta.audio)
> Twitter: [@hexorcismos](https://twitter.com/hexorcismos)

MelSpecVAE is a Variational Autoencoder which synthesizes mel-spectrograms that can be inverted into a raw audio waveform. Currently you can train it with any dataset of .wav audio at 44.1 kHz sample rate and 16-bit depth.

> Features:
* Interpolate between 2 different points in the latent space and synthesize the 'in between' sounds.
* Generate short one-shot audio.
* Synthesize arbitrarily long audio samples by generating seeds and sampling from the latent space.

> Credits:
* VAE neural network architecture coded following 'The Sound of AI' YouTube tutorial series by Valerio Velardo.
* Some utility functions from Marco Passini's MelGAN-VC Jupyter Notebook.

> Last update:
* 12.12.2021: Added experimental feature 'Timbre Transfer'
###Code
#@title Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Run the next cells first for training, generating or timbre transfer
###Code
#@title Import Tensorflow and torchaudio
!pip install tensorflow-gpu==2.0.0
!pip install h5py==2.10.0 --force-reinstall
!pip install soundfile #to save wav files
!pip install --no-deps torchaudio==0.5
!pip install git+https://github.com/pvigier/perlin-numpy #for generating perlin and fractal noise
#@title Import libraries
from glob import glob
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np
from numpy import asarray
from numpy.random import randn
from numpy.random import randint
from numpy import linspace
import soundfile as sf
import time
import IPython
import tensorflow as tf
from perlin_numpy import (
generate_fractal_noise_2d, generate_perlin_noise_2d,
)
#@title Hyperparameters
learning_rate = 0.001 #@param {type:"raw"}
num_epochs_to_train = 10#@param {type:"integer"}
batch_size = 32#@param {type:"integer"}
vector_dimension = 64 #@param {type:"integer"}
hop=256 #hop size (window size = 4*hop)
sr=44100 #sampling rate
min_level_db=-100 #reference values to normalize data
ref_level_db=20
LEARNING_RATE = learning_rate
BATCH_SIZE = batch_size
EPOCHS = num_epochs_to_train
VECTOR_DIM=vector_dimension
shape=128 #length of time axis of split spectrograms
spec_split=1
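# Note (added): each training example is a mel-spectrogram patch of size (hop, shape*spec_split),
# i.e. 256 x 128 with the default values above.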
#@title Waveform to Spectrogram conversion
''' Decorsière, Rémi, Peter L. Søndergaard, Ewen N. MacDonald, and Torsten Dau.
"Inversion of auditory spectrograms, traditional spectrograms, and other envelope representations."
IEEE/ACM Transactions on Audio, Speech, and Language Processing 23, no. 1 (2014): 46-56.'''
#ORIGINAL CODE FROM https://github.com/yoyololicon/spectrogram-inversion
import torch
import torch.nn as nn
import torch.nn.functional as F
from tqdm import tqdm
from functools import partial
import math
import heapq
from torchaudio.transforms import MelScale, Spectrogram
torch.set_default_tensor_type('torch.cuda.FloatTensor')
specobj = Spectrogram(n_fft=4*hop, win_length=4*hop, hop_length=hop, pad=0, power=2, normalized=False)
specfunc = specobj.forward
melobj = MelScale(n_mels=hop, sample_rate=sr, f_min=0.)
melfunc = melobj.forward
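# Note (added): with hop=256 this is a 1024-sample STFT window advanced by 256 samples,
# mapped onto a 256-band mel filterbank at 44.1 kHz.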
def melspecfunc(waveform):
specgram = specfunc(waveform)
mel_specgram = melfunc(specgram)
return mel_specgram
def spectral_convergence(input, target):
return 20 * ((input - target).norm().log10() - target.norm().log10())
def GRAD(spec, transform_fn, samples=None, init_x0=None, maxiter=1000, tol=1e-6, verbose=1, evaiter=10, lr=0.002):
spec = torch.Tensor(spec)
samples = (spec.shape[-1]*hop)-hop
if init_x0 is None:
init_x0 = spec.new_empty((1,samples)).normal_(std=1e-6)
x = nn.Parameter(init_x0)
T = spec
criterion = nn.L1Loss()
optimizer = torch.optim.Adam([x], lr=lr)
bar_dict = {}
metric_func = spectral_convergence
bar_dict['spectral_convergence'] = 0
metric = 'spectral_convergence'
init_loss = None
with tqdm(total=maxiter, disable=not verbose) as pbar:
for i in range(maxiter):
optimizer.zero_grad()
V = transform_fn(x)
loss = criterion(V, T)
loss.backward()
optimizer.step()
lr = lr*0.9999
for param_group in optimizer.param_groups:
param_group['lr'] = lr
if i % evaiter == evaiter - 1:
with torch.no_grad():
V = transform_fn(x)
bar_dict[metric] = metric_func(V, spec).item()
l2_loss = criterion(V, spec).item()
pbar.set_postfix(**bar_dict, loss=l2_loss)
pbar.update(evaiter)
return x.detach().view(-1).cpu()
def normalize(S):
return np.clip((((S - min_level_db) / -min_level_db)*2.)-1., -1, 1)
def denormalize(S):
return (((np.clip(S, -1, 1)+1.)/2.) * -min_level_db) + min_level_db
def prep(wv,hop=192):
S = np.array(torch.squeeze(melspecfunc(torch.Tensor(wv).view(1,-1))).detach().cpu())
S = librosa.power_to_db(S)-ref_level_db
return normalize(S)
def deprep(S):
S = denormalize(S)+ref_level_db
S = librosa.db_to_power(S)
wv = GRAD(np.expand_dims(S,0), melspecfunc, maxiter=2500, evaiter=10, tol=1e-8)
return np.array(np.squeeze(wv))
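# Example round trip (added; assumes `wav` is a 1-D float32 numpy array at 44.1 kHz):
#   S  = prep(wav)    # waveform -> normalized mel-spectrogram with values in [-1, 1]
#   wv = deprep(S)    # mel-spectrogram -> waveform via gradient-descent inversion (slow)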
#@title Helper functions
#Generate spectrograms from waveform array
def tospec(data):
specs=np.empty(data.shape[0], dtype=object)
for i in range(data.shape[0]):
x = data[i]
S=prep(x)
S = np.array(S, dtype=np.float32)
specs[i]=np.expand_dims(S, -1)
print(specs.shape)
return specs
#Generate multiple spectrograms with a determined length from single wav file
def tospeclong(path, length=4*44100):
x, sr = librosa.load(path,sr=44100)
x,_ = librosa.effects.trim(x)
loudls = librosa.effects.split(x, top_db=50)
xls = np.array([])
for interv in loudls:
xls = np.concatenate((xls,x[interv[0]:interv[1]]))
x = xls
num = x.shape[0]//length
specs=np.empty(num, dtype=object)
for i in range(num-1):
a = x[i*length:(i+1)*length]
S = prep(a)
S = np.array(S, dtype=np.float32)
try:
sh = S.shape
specs[i]=S
except AttributeError:
print('spectrogram failed')
print(specs.shape)
return specs
#Waveform array from path of folder containing wav files
def audio_array(path):
ls = glob(f'{path}/*.wav')
adata = []
for i in range(len(ls)):
x, sr = tf.audio.decode_wav(tf.io.read_file(ls[i]), 1)
x = np.array(x, dtype=np.float32)
adata.append(x)
return np.array(adata)
#Waveform array from path of folder containing wav files
def single_audio_array(path):
ls = glob(path)
adata = []
for i in range(len(ls)):
x, sr = tf.audio.decode_wav(tf.io.read_file(ls[i]), 1)
x = np.array(x, dtype=np.float32)
adata.append(x)
return np.array(adata)
#Concatenate spectrograms in array along the time axis
def testass(a):
but=False
con = np.array([])
nim = a.shape[0]
for i in range(nim):
im = a[i]
im = np.squeeze(im)
if not but:
con=im
but=True
else:
con = np.concatenate((con,im), axis=1)
return np.squeeze(con)
#Split spectrograms in chunks with equal size
def splitcut(data):
ls = []
mini = 0
minifinal = spec_split*shape #max spectrogram length
for i in range(data.shape[0]-1):
if data[i].shape[1]<=data[i+1].shape[1]:
mini = data[i].shape[1]
else:
mini = data[i+1].shape[1]
if mini>=3*shape and mini<minifinal:
minifinal = mini
for i in range(data.shape[0]):
x = data[i]
if x.shape[1]>=3*shape:
for n in range(x.shape[1]//minifinal):
ls.append(x[:,n*minifinal:n*minifinal+minifinal,:])
ls.append(x[:,-minifinal:,:])
return np.array(ls)
# Generates timestamp string of "day_month_year_hourMin"
def get_time_stamp():
secondsSinceEpoch = time.time()
timeObj = time.localtime(secondsSinceEpoch)
x = ('%d_%d_%d_%d%d' % (timeObj.tm_mday, timeObj.tm_mon, timeObj.tm_year, timeObj.tm_hour, timeObj.tm_min))
return x
###Output
_____no_output_____
###Markdown
Training
###Code
#@title Import folder containing .wav files for training
#Generating Mel-Spectrogram dataset (Uncomment where needed)
#adata: source spectrograms
audio_directory = "/path/to/your/audio/dataset" #@param {type:"string"}
#AUDIO TO CONVERT
awv = audio_array(audio_directory) #get waveform array from folder containing wav files
aspec = tospec(awv) #get spectrogram array
adata = splitcut(aspec) #split spectrogams to fixed
print(np.shape(adata))
#@title Build VAE Neural Network
#VAE
import os
import pickle
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Conv2D, ReLU, BatchNormalization, Flatten, Dense, Reshape, Conv2DTranspose, Activation, Lambda
from tensorflow.keras import backend as K
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.callbacks import ModelCheckpoint
import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
class VAE:
"""
VAE represents a Deep Convolutional autoencoder architecture
with mirrored encoder and decoder components.
"""
def __init__(self,
input_shape, #shape of the input data
conv_filters, #convolutional network filters
conv_kernels, #convNet kernel size
conv_strides, #convNet strides
latent_space_dim):
self.input_shape = input_shape # [28, 28, 1], in this case is 28 x 28 pixels on 1 channel for greyscale
self.conv_filters = conv_filters # is a list for each layer, i.e. [2, 4, 8]
self.conv_kernels = conv_kernels # list of kernels per layer, [1,2,3]
self.conv_strides = conv_strides # stride for each filter [1, 2, 2], note: 2 means you are downsampling the data in half
self.latent_space_dim = latent_space_dim # how many neurons on bottleneck
self.reconstruction_loss_weight = 1000000
self.encoder = None
self.decoder = None
self.model = None
self._num_conv_layers = len(conv_filters)
self._shape_before_bottleneck = None
self._model_input = None
self._build()
def summary(self):
self.encoder.summary()
print("\n")
self.decoder.summary()
print("\n")
self.model.summary()
def _build(self):
self._build_encoder()
self._build_decoder()
self._build_autoencoder()
def compile(self, learning_rate=0.0001):
optimizer = Adam(learning_rate=learning_rate)
self.model.compile(optimizer=optimizer, loss=self._calculate_combined_loss,
metrics=[self._calculate_reconstruction_loss,
self._calculate_kl_loss])
def train(self, x_train, batch_size, num_epochs):
# checkpoint = ModelCheckpoint("best_model.hdf5", monitor='loss', verbose=1,
# save_best_only=True, mode='auto', period=1)
self.model.fit(x_train,
x_train,
batch_size=batch_size,
epochs=num_epochs,
shuffle=True)
#callbacks=[checkpoint])
def save(self, save_folder="."):
self._create_folder_if_it_doesnt_exist(save_folder)
self._save_parameters(save_folder)
self._save_weights(save_folder)
def load_weights(self, weights_path):
self.model.load_weights(weights_path)
def reconstruct(self, spec):
latent_representations = self.encoder.predict(spec)
reconstructed_spec = self.decoder.predict(latent_representations)
return reconstructed_spec, latent_representations
def encode(self, spec):
latent_representation = self.encoder.predict(spec)
return latent_representation
def sample_from_latent_space(self, z):
z_vector = self.decoder.predict(z)
return z_vector
@classmethod
def load(cls, save_folder="."):
parameters_path = os.path.join(save_folder, "parameters.pkl")
with open(parameters_path, "rb") as f:
parameters = pickle.load(f)
autoencoder = VAE(*parameters)
weights_path = os.path.join(save_folder, "weights.h5")
autoencoder.load_weights(weights_path)
return autoencoder
def _calculate_combined_loss(self, y_target, y_predicted):
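        # Weighted sum of the reconstruction (MSE) and KL terms; the large
        # reconstruction_loss_weight keeps the reconstruction term dominant during training.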
reconstruction_loss = self._calculate_reconstruction_loss(y_target, y_predicted)
kl_loss = self._calculate_kl_loss(y_target, y_predicted)
combined_loss = self.reconstruction_loss_weight * reconstruction_loss + kl_loss
return combined_loss
def _calculate_reconstruction_loss(self, y_target, y_predicted):
error = y_target - y_predicted
reconstruction_loss = K.mean(K.square(error), axis=[1, 2, 3])
return reconstruction_loss
def _calculate_kl_loss(self, y_target, y_predicted):
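        # Closed-form KL divergence between N(mu, exp(log_variance)) and the
        # standard normal prior, summed over the latent dimensions.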
kl_loss = -0.5 * K.sum(1 + self.log_variance - K.square(self.mu) -
K.exp(self.log_variance), axis =1)
return kl_loss
def _create_folder_if_it_doesnt_exist(self, folder):
if not os.path.exists(folder):
os.makedirs(folder)
def _save_parameters(self, save_folder):
parameters = [
self.input_shape,
self.conv_filters,
self.conv_kernels,
self.conv_strides,
self.latent_space_dim
]
save_path = os.path.join(save_folder, "parameters.pkl")
with open(save_path, "wb") as f:
pickle.dump(parameters, f)
def _save_weights(self, save_folder):
save_path = os.path.join(save_folder, "weights.h5")
self.model.save_weights(save_path)
#-----------AUTOENCODER----------#
def _build_autoencoder(self):
model_input = self._model_input
model_output = self.decoder(self.encoder(model_input))
self.model = Model(model_input, model_output, name="autoencoder")
#--------------DECODER------------#
def _build_decoder(self):
decoder_input = self._add_decoder_input()
dense_layer = self._add_dense_layer(decoder_input)
reshape_layer = self._add_reshape_layer(dense_layer)
conv_transpose_layers = self._add_conv_transpose_layers(reshape_layer)
decoder_output = self._add_decoder_output(conv_transpose_layers)
self.decoder = Model(decoder_input, decoder_output, name="decoder")
def _add_decoder_input(self):
return Input(shape=self.latent_space_dim, name="decoder_input")
def _add_dense_layer(self, decoder_input):
num_neurons = np.prod(self._shape_before_bottleneck) # [ 1, 2, 4] -> 8
dense_layer = Dense(num_neurons, name="decoder_dense")(decoder_input)
return dense_layer
def _add_reshape_layer(self, dense_layer):
return Reshape(self._shape_before_bottleneck)(dense_layer)
def _add_conv_transpose_layers(self, x):
"""Add conv transpose blocks."""
# Loop through all the conv layers in reverse order and
# stop at the first layer
for layer_index in reversed(range(1, self._num_conv_layers)):
x = self._add_conv_transpose_layer(layer_index, x)
return x
def _add_conv_transpose_layer(self, layer_index, x):
layer_num = self._num_conv_layers - layer_index
conv_transpose_layer = Conv2DTranspose(
filters=self.conv_filters[layer_index],
kernel_size = self.conv_kernels[layer_index],
strides = self.conv_strides[layer_index],
padding = "same",
name=f"decoder_conv_transpose_layer_{layer_num}"
)
x = conv_transpose_layer(x)
x = ReLU(name=f"decoder_relu_{layer_num}")(x)
x = BatchNormalization(name=f"decoder_bn_{layer_num}")(x)
return x
def _add_decoder_output(self, x):
conv_transpose_layer = Conv2DTranspose(
filters = 1,
kernel_size = self.conv_kernels[0],
strides = self.conv_strides[0],
padding = "same",
name=f"decoder_conv_transpose_layer_{self._num_conv_layers}"
)
x = conv_transpose_layer(x)
output_layer = Activation("sigmoid", name="sigmoid_output_layer")(x)
return output_layer
#----------------ENCODER-----------------#
def _build_encoder(self):
encoder_input = self._add_encoder_input()
conv_layers = self._add_conv_layers(encoder_input)
bottleneck = self._add_bottleneck(conv_layers)
self._model_input = encoder_input
self.encoder = Model(encoder_input, bottleneck, name="encoder")
def _add_encoder_input(self):
return Input(shape=self.input_shape, name="encoder_input")
def _add_conv_layers(self, encoder_input):
"""Creates all convolutional blocks in encoder"""
x = encoder_input
for layer_index in range(self._num_conv_layers):
x = self._add_conv_layer(layer_index, x)
return x
def _add_conv_layer(self, layer_index, x):
"""Adds a convolutional block to a graph of layers, consisting
of Conv 2d + ReLu activation + batch normalization.
"""
layer_number = layer_index + 1
conv_layer = Conv2D(
filters= self.conv_filters[layer_index],
kernel_size = self.conv_kernels[layer_index],
strides = self.conv_strides[layer_index],
padding = "same",
name = f"encoder_conv_layer_{layer_number}"
)
x = conv_layer(x)
x = ReLU(name=f"encoder_relu_{layer_number}")(x)
x = BatchNormalization(name=f"encoder_bn_{layer_number}")(x)
return x
#-------------Bottleneck (Latent Space)-------------#
def _add_bottleneck(self, x):
"""Flatten data and add bottleneck with Gaussian sampling (Dense layer)"""
self._shape_before_bottleneck = K.int_shape(x)[1:]
x = Flatten()(x)
self.mu = Dense(self.latent_space_dim,name="mu")(x)
self.log_variance = Dense(self.latent_space_dim,
name="log_variance")(x)
def sample_point_from_normal_distribution(args):
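            # Reparameterization trick: z = mu + sigma * epsilon with epsilon ~ N(0, 1),
            # which keeps the sampling step differentiable.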
mu, log_variance = args
epsilon = K.random_normal(shape=K.shape(self.mu), mean=0.,
stddev=1.)
sampled_point = mu + K.exp(log_variance / 2) * epsilon
return sampled_point
x = Lambda(sample_point_from_normal_distribution,
name="encoder_output")([self.mu, self.log_variance])
return x
print("VAE successfully built")
#@title Training functions
def train(x_train, learning_rate, batch_size, epochs):
vae = VAE(
input_shape = (hop, shape*spec_split, 1),
conv_filters=(512, 256, 128, 64, 32),
conv_kernels=(3, 3, 3, 3, 3),
conv_strides=(2, 2, 2, 2, (2,1)),
latent_space_dim = VECTOR_DIM
)
vae.summary()
vae.compile(learning_rate)
vae.train(x_train, batch_size, epochs)
return vae
def train_tfdata(x_train, learning_rate, epochs=10):
vae = VAE(
input_shape = (hop, 3*shape, 1),
conv_filters=(512, 256, 128, 64, 32),
conv_kernels=(3, 3, 3, 3, 3),
conv_strides=(2, 2, 2, 2, (2,1)),
latent_space_dim = VECTOR_DIM
)
vae.summary()
vae.compile(learning_rate)
vae.train(x_train, num_epochs=epochs)
return vae
def continue_training(checkpoint):
vae = VAE.load(checkpoint)
vae.summary()
vae.compile(LEARNING_RATE)
vae.train(adata,BATCH_SIZE,EPOCHS)
return vae
def load_model(checkpoint):
vae = VAE.load(checkpoint)
vae.summary()
vae.compile(LEARNING_RATE)
return vae
#@title Start training from scratch or resume training
training_run_name = "my_melspecvae_model" #@param {type:"string"}
checkpoint_save_directory = "/path/to/your/checkpoints/" #@param {type:"string"}
resume_training = False #@param {type:"boolean"}
resume_training_checkpoint_path = "/path/to/your/checkpoints/" #@param {type:"string"}
current_time = get_time_stamp()
if not resume_training:
vae = train(adata, LEARNING_RATE, BATCH_SIZE, EPOCHS)
#vae = train_tfdata(dsa, LEARNING_RATE, EPOCHS)
vae.save(f"{checkpoint_save_directory}{training_run_name}_{current_time}_h{hop}_w{shape}_z{VECTOR_DIM}")
else:
vae = continue_training(resume_training_checkpoint_path)
vae.save(f"{checkpoint_save_directory}{training_run_name}_{current_time}_h{hop}_w{shape}_z{VECTOR_DIM}")
###Output
_____no_output_____
###Markdown
Generation
###Code
#@title Build VAE Neural Network
#VAE
import os
import pickle
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Conv2D, ReLU, BatchNormalization, Flatten, Dense, Reshape, Conv2DTranspose, Activation, Lambda
from tensorflow.keras import backend as K
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.callbacks import ModelCheckpoint
import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
class VAE:
"""
VAE represents a Deep Convolutional autoencoder architecture
with mirrored encoder and decoder components.
"""
def __init__(self,
input_shape, #shape of the input data
conv_filters, #convolutional network filters
conv_kernels, #convNet kernel size
conv_strides, #convNet strides
latent_space_dim):
self.input_shape = input_shape # [28, 28, 1], in this case is 28 x 28 pixels on 1 channel for greyscale
self.conv_filters = conv_filters # is a list for each layer, i.e. [2, 4, 8]
self.conv_kernels = conv_kernels # list of kernels per layer, [1,2,3]
self.conv_strides = conv_strides # stride for each filter [1, 2, 2], note: 2 means you are downsampling the data in half
self.latent_space_dim = latent_space_dim # how many neurons on bottleneck
self.reconstruction_loss_weight = 1000000
self.encoder = None
self.decoder = None
self.model = None
self._num_conv_layers = len(conv_filters)
self._shape_before_bottleneck = None
self._model_input = None
self._build()
def summary(self):
self.encoder.summary()
print("\n")
self.decoder.summary()
print("\n")
self.model.summary()
def _build(self):
self._build_encoder()
self._build_decoder()
self._build_autoencoder()
def compile(self, learning_rate=0.0001):
optimizer = Adam(learning_rate=learning_rate)
self.model.compile(optimizer=optimizer, loss=self._calculate_combined_loss,
metrics=[self._calculate_reconstruction_loss,
self._calculate_kl_loss])
def train(self, x_train, batch_size, num_epochs):
# checkpoint = ModelCheckpoint("best_model.hdf5", monitor='loss', verbose=1,
# save_best_only=True, mode='auto', period=1)
self.model.fit(x_train,
x_train,
batch_size=batch_size,
epochs=num_epochs,
shuffle=True)
#callbacks=[checkpoint])
def save(self, save_folder="."):
self._create_folder_if_it_doesnt_exist(save_folder)
self._save_parameters(save_folder)
self._save_weights(save_folder)
def load_weights(self, weights_path):
self.model.load_weights(weights_path)
def reconstruct(self, spec):
latent_representations = self.encoder.predict(spec)
reconstructed_spec = self.decoder.predict(latent_representations)
return reconstructed_spec, latent_representations
def encode(self, spec):
latent_representation = self.encoder.predict(spec)
return latent_representation
def sample_from_latent_space(self, z):
z_vector = self.decoder.predict(z)
return z_vector
@classmethod
def load(cls, save_folder="."):
parameters_path = os.path.join(save_folder, "parameters.pkl")
with open(parameters_path, "rb") as f:
parameters = pickle.load(f)
autoencoder = VAE(*parameters)
weights_path = os.path.join(save_folder, "weights.h5")
autoencoder.load_weights(weights_path)
return autoencoder
def _calculate_combined_loss(self, y_target, y_predicted):
reconstruction_loss = self._calculate_reconstruction_loss(y_target, y_predicted)
kl_loss = self._calculate_kl_loss(y_target, y_predicted)
combined_loss = self.reconstruction_loss_weight * reconstruction_loss + kl_loss
return combined_loss
def _calculate_reconstruction_loss(self, y_target, y_predicted):
error = y_target - y_predicted
reconstruction_loss = K.mean(K.square(error), axis=[1, 2, 3])
return reconstruction_loss
def _calculate_kl_loss(self, y_target, y_predicted):
kl_loss = -0.5 * K.sum(1 + self.log_variance - K.square(self.mu) -
K.exp(self.log_variance), axis =1)
return kl_loss
def _create_folder_if_it_doesnt_exist(self, folder):
if not os.path.exists(folder):
os.makedirs(folder)
def _save_parameters(self, save_folder):
parameters = [
self.input_shape,
self.conv_filters,
self.conv_kernels,
self.conv_strides,
self.latent_space_dim
]
save_path = os.path.join(save_folder, "parameters.pkl")
with open(save_path, "wb") as f:
pickle.dump(parameters, f)
def _save_weights(self, save_folder):
save_path = os.path.join(save_folder, "weights.h5")
self.model.save_weights(save_path)
#-----------AUTOENCODER----------#
def _build_autoencoder(self):
model_input = self._model_input
model_output = self.decoder(self.encoder(model_input))
self.model = Model(model_input, model_output, name="autoencoder")
#--------------DECODER------------#
def _build_decoder(self):
decoder_input = self._add_decoder_input()
dense_layer = self._add_dense_layer(decoder_input)
reshape_layer = self._add_reshape_layer(dense_layer)
conv_transpose_layers = self._add_conv_transpose_layers(reshape_layer)
decoder_output = self._add_decoder_output(conv_transpose_layers)
self.decoder = Model(decoder_input, decoder_output, name="decoder")
def _add_decoder_input(self):
return Input(shape=self.latent_space_dim, name="decoder_input")
def _add_dense_layer(self, decoder_input):
num_neurons = np.prod(self._shape_before_bottleneck) # [ 1, 2, 4] -> 8
dense_layer = Dense(num_neurons, name="decoder_dense")(decoder_input)
return dense_layer
def _add_reshape_layer(self, dense_layer):
return Reshape(self._shape_before_bottleneck)(dense_layer)
def _add_conv_transpose_layers(self, x):
"""Add conv transpose blocks."""
# Loop through all the conv layers in reverse order and
# stop at the first layer
for layer_index in reversed(range(1, self._num_conv_layers)):
x = self._add_conv_transpose_layer(layer_index, x)
return x
def _add_conv_transpose_layer(self, layer_index, x):
layer_num = self._num_conv_layers - layer_index
conv_transpose_layer = Conv2DTranspose(
filters=self.conv_filters[layer_index],
kernel_size = self.conv_kernels[layer_index],
strides = self.conv_strides[layer_index],
padding = "same",
name=f"decoder_conv_transpose_layer_{layer_num}"
)
x = conv_transpose_layer(x)
x = ReLU(name=f"decoder_relu_{layer_num}")(x)
x = BatchNormalization(name=f"decoder_bn_{layer_num}")(x)
return x
def _add_decoder_output(self, x):
conv_transpose_layer = Conv2DTranspose(
filters = 1,
kernel_size = self.conv_kernels[0],
strides = self.conv_strides[0],
padding = "same",
name=f"decoder_conv_transpose_layer_{self._num_conv_layers}"
)
x = conv_transpose_layer(x)
output_layer = Activation("sigmoid", name="sigmoid_output_layer")(x)
return output_layer
#----------------ENCODER-----------------#
def _build_encoder(self):
encoder_input = self._add_encoder_input()
conv_layers = self._add_conv_layers(encoder_input)
bottleneck = self._add_bottleneck(conv_layers)
self._model_input = encoder_input
self.encoder = Model(encoder_input, bottleneck, name="encoder")
def _add_encoder_input(self):
return Input(shape=self.input_shape, name="encoder_input")
def _add_conv_layers(self, encoder_input):
"""Creates all convolutional blocks in encoder"""
x = encoder_input
for layer_index in range(self._num_conv_layers):
x = self._add_conv_layer(layer_index, x)
return x
def _add_conv_layer(self, layer_index, x):
"""Adds a convolutional block to a graph of layers, consisting
of Conv 2d + ReLu activation + batch normalization.
"""
layer_number = layer_index + 1
conv_layer = Conv2D(
filters= self.conv_filters[layer_index],
kernel_size = self.conv_kernels[layer_index],
strides = self.conv_strides[layer_index],
padding = "same",
name = f"encoder_conv_layer_{layer_number}"
)
x = conv_layer(x)
x = ReLU(name=f"encoder_relu_{layer_number}")(x)
x = BatchNormalization(name=f"encoder_bn_{layer_number}")(x)
return x
#-------------Bottleneck (Latent Space)-------------#
def _add_bottleneck(self, x):
"""Flatten data and add bottleneck with Gaussian sampling (Dense layer)"""
self._shape_before_bottleneck = K.int_shape(x)[1:]
x = Flatten()(x)
self.mu = Dense(self.latent_space_dim,name="mu")(x)
self.log_variance = Dense(self.latent_space_dim,
name="log_variance")(x)
def sample_point_from_normal_distribution(args):
mu, log_variance = args
epsilon = K.random_normal(shape=K.shape(self.mu), mean=0.,
stddev=1.)
sampled_point = mu + K.exp(log_variance / 2) * epsilon
return sampled_point
x = Lambda(sample_point_from_normal_distribution,
name="encoder_output")([self.mu, self.log_variance])
return x
print("VAE successfully built")
#@title Load Checkpoint for Generating
checkpoint_load_directory = "/path/to/your/checkpoint" #@param {type:"string"}
#-------LOAD MODEL FOR GENERATING-------------#
vae = VAE.load(checkpoint_load_directory)
print("Loaded checkpoint")
#@title Import synthesis utility functions
#-----TESTING FUNCTIONS ----------- #
def select_spec(spec, labels, num_spec=10):
sample_spec_index = np.random.choice(range(len(spec)), num_spec)
sample_spec = spec[sample_spec_index]
sample_labels = labels[sample_spec_index]
return sample_spec, sample_labels
def plot_reconstructed_spec(spec, reconstructed_spec):
fig = plt.figure(figsize=(15, 3))
num_spec = len(spec)
for i, (image, reconstructed_image) in enumerate(zip(spec, reconstructed_spec)):
image = image.squeeze()
ax = fig.add_subplot(2, num_spec, i + 1)
ax.axis("off")
ax.imshow(image, cmap="gray_r")
reconstructed_image = reconstructed_image.squeeze()
ax = fig.add_subplot(2, num_spec, i + num_spec + 1)
ax.axis("off")
ax.imshow(reconstructed_image, cmap="gray_r")
plt.show()
def plot_spec_encoded_in_latent_space(latent_representations, sample_labels):
plt.figure(figsize=(10, 10))
plt.scatter(latent_representations[:, 0],
latent_representations[:, 1],
cmap="rainbow",
c=sample_labels,
alpha=0.5,
s=2)
plt.colorbar()
plt.show()
#---------------NOISE GENERATOR FUNCTIONS ------------#
def generate_random_z_vect(seed=1001,size_z=1,scale=1.0):
np.random.seed(seed)
x = np.random.uniform(low=(scale * -1.0), high=scale, size=(size_z,VECTOR_DIM))
return x
def generate_z_vect_from_perlin_noise(seed=1001, size_z=1, scale=1.0):
np.random.seed(seed)
x = generate_perlin_noise_2d((size_z, VECTOR_DIM), (1,1))
x = x*scale
return x
def generate_z_vect_from_fractal_noise(seed=1001, size_z=1, scale=1.0,):
np.random.seed(seed)
x = generate_fractal_noise_2d((size_z, VECTOR_DIM), (1,1),)
x = x*scale
return x
#-------SPECTROGRAM AND SOUND SYNTHESIS UTILITY FUNCTIONS -------- #
#Assembling generated Spectrogram chunks into final Spectrogram
def specass(a,spec):
but=False
con = np.array([])
nim = a.shape[0]
for i in range(nim-1):
im = a[i]
im = np.squeeze(im)
if not but:
con=im
but=True
else:
con = np.concatenate((con,im), axis=1)
diff = spec.shape[1]-(nim*shape)
a = np.squeeze(a)
con = np.concatenate((con,a[-1,:,-diff:]), axis=1)
return np.squeeze(con)
#Splitting input spectrogram into different chunks to feed to the generator
def chopspec(spec):
dsa=[]
for i in range(spec.shape[1]//shape):
im = spec[:,i*shape:i*shape+shape]
im = np.reshape(im, (im.shape[0],im.shape[1],1))
dsa.append(im)
imlast = spec[:,-shape:]
imlast = np.reshape(imlast, (imlast.shape[0],imlast.shape[1],1))
dsa.append(imlast)
return np.array(dsa, dtype=np.float32)
#Converting from source Spectrogram to target Spectrogram
def towave_reconstruct(spec, spec1, name, path='../content/', show=False, save=False):
specarr = chopspec(spec)
specarr1 = chopspec(spec1)
print(specarr.shape)
a = specarr
print('Generating...')
ab = specarr1
print('Assembling and Converting...')
a = specass(a,spec)
ab = specass(ab,spec1)
awv = deprep(a)
abwv = deprep(ab)
if save:
print('Saving...')
pathfin = f'{path}/{name}'
sf.write(f'{pathfin}.wav', awv, sr)
print('Saved WAV!')
IPython.display.display(IPython.display.Audio(np.squeeze(abwv), rate=sr))
IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr))
if show:
fig, axs = plt.subplots(ncols=2)
axs[0].imshow(np.flip(a, -2), cmap=None)
axs[0].axis('off')
axs[0].set_title('Reconstructed')
axs[1].imshow(np.flip(ab, -2), cmap=None)
axs[1].axis('off')
axs[1].set_title('Input')
plt.show()
return abwv
#Converting from Z vector generated spectrogram to waveform
def towave_from_z(spec, name, path='../content/', show=False, save=False):
specarr = chopspec(spec)
print(specarr.shape)
a = specarr
print('Generating...')
print('Assembling and Converting...')
a = specass(a,spec)
awv = deprep(a)
if save:
print('Saving...')
pathfin = f'{path}/{name}'
sf.write(f'{pathfin}.wav', awv, sr)
print('Saved WAV!')
IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr))
if show:
fig, axs = plt.subplots(ncols=1)
axs.imshow(np.flip(a, -2), cmap=None)
axs.axis('off')
axs.set_title('Decoder Synthesis')
plt.show()
return awv
#@title Compare resynthesized MelSpec with ground truth MelSpec. Note: Only works if you imported an audio dataset
num_spec_to_resynthesize = 5 #@param {type:"integer"}
num_sample_spec_to_show = num_spec_to_resynthesize
sample_spec, _ = select_spec(adata, adata, num_sample_spec_to_show)
reconstructed_spec, _ = vae.reconstruct(sample_spec)
plot_reconstructed_spec(sample_spec, reconstructed_spec)
reconst = num_sample_spec_to_show
for i in range(reconst):
y = towave_reconstruct(reconstructed_spec[i],sample_spec[i],name='reconstructions',show=True, save=False)
#@title Generate one-shot samples from latent space with random or manual seed
num_samples_to_generate = 10#@param {type:"integer"}
use_seed = False #@param {type:"boolean"}
seed = 0 #@param {type:"slider", min:0, max:4294967295, step:1}
scale_z_vectors = 1.5 #@param {type:"slider", min:-5.0, max:5.0, step:0.1}
save_audio = False #@param {type:"boolean"}
audio_name = "one_shot" #@param {type:"string"}
audio_save_directory = "/content/" #@param {type:"string"}
y = np.random.randint(0, 2**32-1) # generated random int to pass and convert into vector
i=0
while i < num_samples_to_generate:
if not use_seed:
z = generate_random_z_vect(y, num_samples_to_generate,scale=scale_z_vectors)
else:
z = generate_random_z_vect(seed, num_samples_to_generate,scale=scale_z_vectors)
z_sample = np.array(vae.sample_from_latent_space(z))
towave_from_z(z_sample[i], name=f'{audio_name}_{i}',path=audio_save_directory,show=True, save=save_audio)
i+=1
if not use_seed:
print("Generated from seed:", y)
else:
print("Generated from seed:", seed)
#@title Generate arbitrary long audio from latent space with random or custom seed using uniform, Perlin or fractal noise
num_seeds_to_generate = 32#@param {type:"integer"}
noise_type = "uniform" #@param ["uniform", "perlin", "fractal"]
use_seed = False #@param {type:"boolean"}
seed = 0 #@param {type:"slider", min:0, max:4294967295, step:1}
scale_z_vectors = 1.5 #@param {type:"slider", min:-5.0, max:5.0, step:0.1}
save_audio = False #@param {type:"boolean"}
audio_name = "VAE_synthesis2" #@param {type:"string"}
audio_save_directory = "/content" #@param {type:"string"}
y = np.random.randint(0, 2**32-1) # generated random int to pass and convert into vector
if not use_seed:
if noise_type == "uniform":
z = generate_random_z_vect(y, num_seeds_to_generate,scale_z_vectors) # vectors to input into latent space
if noise_type == "perlin":
z = generate_z_vect_from_perlin_noise(y, num_seeds_to_generate,scale_z_vectors) # vectors to input into latent space
if noise_type == "fractal":
z = generate_z_vect_from_fractal_noise(y, num_seeds_to_generate,scale_z_vectors) # vectors to input into latent space
if use_seed:
if noise_type == "uniform":
z = generate_random_z_vect(seed, num_seeds_to_generate,scale_z_vectors) # vectors to input into latent space
if noise_type == "perlin":
z = generate_z_vect_from_perlin_noise(seed, num_seeds_to_generate,scale_z_vectors) # vectors to input into latent space
if noise_type == "fractal":
z = generate_z_vect_from_fractal_noise(seed, num_seeds_to_generate,scale_z_vectors) # vectors to input into latent space
z_sample = np.array(vae.sample_from_latent_space(z))
assembled_spec = testass(z_sample)
towave_from_z(assembled_spec,audio_name,audio_save_directory,show=True,save=save_audio)
if not use_seed:
print("Generated from seed:", y)
else:
print("Generated from seed:", seed)
#@title Interpolate between two seeds for n-amount of steps
use_seed = False #@param {type:"boolean"}
seed_a = 0 #@param {type:"slider", min:0, max:4294967295, step:1}
seed_b = 4294967295 #@param {type:"slider", min:0, max:4294967295, step:1}
num_interpolation_steps = 8#@param {type:"integer"}
scale_z_vectors = 1.5 #@param {type:"slider", min:-5.0, max:5.0, step:0.1}
scale_interpolation_ratio = 1 #@param {type:"slider", min:-5.0, max:5.0, step:0.1}
save_audio = False #@param {type:"boolean"}
audio_name = "random_seeds_interpolation" #@param {type:"string"}
audio_save_directory = "/content/" #@param {type:"string"}
# generate points in latent space as input for the generator
def generate_latent_points(latent_dim, n_samples, n_classes=10):
# generate points in the latent space
x_input = randn(latent_dim * n_samples)
# reshape into a batch of inputs for the network
z_input = x_input.reshape(n_samples, latent_dim)
return z_input
# uniform interpolation between two points in latent space
def interpolate_points(p1, p2,scale, n_steps=10):
# interpolate ratios between the points
ratios = linspace(-scale, scale, num=n_steps)
# linear interpolate vectors
vectors = list()
for ratio in ratios:
v = (1.0 - ratio) * p1 + ratio * p2
vectors.append(v)
return asarray(vectors)
y = np.random.randint(0, 2**32-1)
if not use_seed:
pts = generate_random_z_vect(y,10,scale_z_vectors)
interpolated = interpolate_points(pts[0], pts[1], scale_interpolation_ratio, num_interpolation_steps) # interpolate points in latent space
else:
pts_a = generate_random_z_vect(seed_a,10,scale_z_vectors)
pts_b = generate_random_z_vect(seed_b,10,scale_z_vectors)
interpolated = interpolate_points(pts_a[0], pts_b[0], scale_interpolation_ratio, num_interpolation_steps) # interpolate points in latent space
interp = np.array(vae.sample_from_latent_space(interpolated))
assembled_spec = testass(interp)
towave_from_z(assembled_spec,audio_name,audio_save_directory,show=True, save=save_audio)
if not use_seed:
print("Generated from seed:", y)
else:
print("Generated from seed:", seed)
###Output
_____no_output_____
###Markdown
EXPERIMENTAL: Timbre Transfer. Note: you need to restart the runtime (Runtime > Restart runtime) before each timbre transfer.
###Code
#@title Import wav
input_audio = "your_audio_to_timbre_transfer.wav" #@param {type:"string"}
audio_in = single_audio_array(input_audio)
aspec = tospec(audio_in)
out_data = splitcut(aspec)
#@title Build VAE Neural Network
#VAE
import os
import pickle
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Conv2D, ReLU, BatchNormalization, Flatten, Dense, Reshape, Conv2DTranspose, Activation, Lambda
from tensorflow.keras import backend as K
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import MeanSquaredError
from tensorflow.keras.callbacks import ModelCheckpoint
import numpy as np
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
class VAE:
"""
VAE represents a Deep Convolutional autoencoder architecture
with mirrored encoder and decoder components.
"""
def __init__(self,
input_shape, #shape of the input data
conv_filters, #convolutional network filters
conv_kernels, #convNet kernel size
conv_strides, #convNet strides
latent_space_dim):
self.input_shape = input_shape # [28, 28, 1], in this case is 28 x 28 pixels on 1 channel for greyscale
self.conv_filters = conv_filters # is a list for each layer, i.e. [2, 4, 8]
self.conv_kernels = conv_kernels # list of kernels per layer, [1,2,3]
self.conv_strides = conv_strides # stride for each filter [1, 2, 2], note: 2 means you are downsampling the data in half
self.latent_space_dim = latent_space_dim # how many neurons on bottleneck
self.reconstruction_loss_weight = 1000000
self.encoder = None
self.decoder = None
self.model = None
self._num_conv_layers = len(conv_filters)
self._shape_before_bottleneck = None
self._model_input = None
self._build()
def summary(self):
self.encoder.summary()
print("\n")
self.decoder.summary()
print("\n")
self.model.summary()
def _build(self):
self._build_encoder()
self._build_decoder()
self._build_autoencoder()
def compile(self, learning_rate=0.0001):
optimizer = Adam(learning_rate=learning_rate)
self.model.compile(optimizer=optimizer, loss=self._calculate_combined_loss,
metrics=[self._calculate_reconstruction_loss,
self._calculate_kl_loss])
def train(self, x_train, batch_size, num_epochs):
# checkpoint = ModelCheckpoint("best_model.hdf5", monitor='loss', verbose=1,
# save_best_only=True, mode='auto', period=1)
self.model.fit(x_train,
x_train,
batch_size=batch_size,
epochs=num_epochs,
shuffle=True)
#callbacks=[checkpoint])
def save(self, save_folder="."):
self._create_folder_if_it_doesnt_exist(save_folder)
self._save_parameters(save_folder)
self._save_weights(save_folder)
def load_weights(self, weights_path):
self.model.load_weights(weights_path)
def reconstruct(self, spec):
latent_representations = self.encoder.predict(spec)
reconstructed_spec = self.decoder.predict(latent_representations)
return reconstructed_spec, latent_representations
def encode(self, spec):
latent_representation = self.encoder.predict(spec)
return latent_representation
def sample_from_latent_space(self, z):
z_vector = self.decoder.predict(z)
return z_vector
@classmethod
def load(cls, save_folder="."):
parameters_path = os.path.join(save_folder, "parameters.pkl")
with open(parameters_path, "rb") as f:
parameters = pickle.load(f)
autoencoder = VAE(*parameters)
weights_path = os.path.join(save_folder, "weights.h5")
autoencoder.load_weights(weights_path)
return autoencoder
def _calculate_combined_loss(self, y_target, y_predicted):
reconstruction_loss = self._calculate_reconstruction_loss(y_target, y_predicted)
kl_loss = self._calculate_kl_loss(y_target, y_predicted)
combined_loss = self.reconstruction_loss_weight * reconstruction_loss + kl_loss
return combined_loss
def _calculate_reconstruction_loss(self, y_target, y_predicted):
error = y_target - y_predicted
reconstruction_loss = K.mean(K.square(error), axis=[1, 2, 3])
return reconstruction_loss
def _calculate_kl_loss(self, y_target, y_predicted):
kl_loss = -0.5 * K.sum(1 + self.log_variance - K.square(self.mu) -
K.exp(self.log_variance), axis =1)
return kl_loss
def _create_folder_if_it_doesnt_exist(self, folder):
if not os.path.exists(folder):
os.makedirs(folder)
def _save_parameters(self, save_folder):
parameters = [
self.input_shape,
self.conv_filters,
self.conv_kernels,
self.conv_strides,
self.latent_space_dim
]
save_path = os.path.join(save_folder, "parameters.pkl")
with open(save_path, "wb") as f:
pickle.dump(parameters, f)
def _save_weights(self, save_folder):
save_path = os.path.join(save_folder, "weights.h5")
self.model.save_weights(save_path)
#-----------AUTOENCODER----------#
def _build_autoencoder(self):
model_input = self._model_input
model_output = self.decoder(self.encoder(model_input))
self.model = Model(model_input, model_output, name="autoencoder")
#--------------DECODER------------#
def _build_decoder(self):
decoder_input = self._add_decoder_input()
dense_layer = self._add_dense_layer(decoder_input)
reshape_layer = self._add_reshape_layer(dense_layer)
conv_transpose_layers = self._add_conv_transpose_layers(reshape_layer)
decoder_output = self._add_decoder_output(conv_transpose_layers)
self.decoder = Model(decoder_input, decoder_output, name="decoder")
def _add_decoder_input(self):
return Input(shape=self.latent_space_dim, name="decoder_input")
def _add_dense_layer(self, decoder_input):
num_neurons = np.prod(self._shape_before_bottleneck) # [ 1, 2, 4] -> 8
dense_layer = Dense(num_neurons, name="decoder_dense")(decoder_input)
return dense_layer
def _add_reshape_layer(self, dense_layer):
return Reshape(self._shape_before_bottleneck)(dense_layer)
def _add_conv_transpose_layers(self, x):
"""Add conv transpose blocks."""
# Loop through all the conv layers in reverse order and
# stop at the first layer
for layer_index in reversed(range(1, self._num_conv_layers)):
x = self._add_conv_transpose_layer(layer_index, x)
return x
def _add_conv_transpose_layer(self, layer_index, x):
layer_num = self._num_conv_layers - layer_index
conv_transpose_layer = Conv2DTranspose(
filters=self.conv_filters[layer_index],
kernel_size = self.conv_kernels[layer_index],
strides = self.conv_strides[layer_index],
padding = "same",
name=f"decoder_conv_transpose_layer_{layer_num}"
)
x = conv_transpose_layer(x)
x = ReLU(name=f"decoder_relu_{layer_num}")(x)
x = BatchNormalization(name=f"decoder_bn_{layer_num}")(x)
return x
def _add_decoder_output(self, x):
conv_transpose_layer = Conv2DTranspose(
filters = 1,
kernel_size = self.conv_kernels[0],
strides = self.conv_strides[0],
padding = "same",
name=f"decoder_conv_transpose_layer_{self._num_conv_layers}"
)
x = conv_transpose_layer(x)
output_layer = Activation("sigmoid", name="sigmoid_output_layer")(x)
return output_layer
#----------------ENCODER-----------------#
def _build_encoder(self):
encoder_input = self._add_encoder_input()
conv_layers = self._add_conv_layers(encoder_input)
bottleneck = self._add_bottleneck(conv_layers)
self._model_input = encoder_input
self.encoder = Model(encoder_input, bottleneck, name="encoder")
def _add_encoder_input(self):
return Input(shape=self.input_shape, name="encoder_input")
def _add_conv_layers(self, encoder_input):
"""Creates all convolutional blocks in encoder"""
x = encoder_input
for layer_index in range(self._num_conv_layers):
x = self._add_conv_layer(layer_index, x)
return x
def _add_conv_layer(self, layer_index, x):
"""Adds a convolutional block to a graph of layers, consisting
of Conv 2d + ReLu activation + batch normalization.
"""
layer_number = layer_index + 1
conv_layer = Conv2D(
filters= self.conv_filters[layer_index],
kernel_size = self.conv_kernels[layer_index],
strides = self.conv_strides[layer_index],
padding = "same",
name = f"encoder_conv_layer_{layer_number}"
)
x = conv_layer(x)
x = ReLU(name=f"encoder_relu_{layer_number}")(x)
x = BatchNormalization(name=f"encoder_bn_{layer_number}")(x)
return x
#-------------Bottleneck (Latent Space)-------------#
def _add_bottleneck(self, x):
"""Flatten data and add bottleneck with Gaussian sampling (Dense layer)"""
self._shape_before_bottleneck = K.int_shape(x)[1:]
x = Flatten()(x)
self.mu = Dense(self.latent_space_dim,name="mu")(x)
self.log_variance = Dense(self.latent_space_dim,
name="log_variance")(x)
def sample_point_from_normal_distribution(args):
mu, log_variance = args
epsilon = K.random_normal(shape=K.shape(self.mu), mean=0.,
stddev=1.)
sampled_point = mu + K.exp(log_variance / 2) * epsilon
return sampled_point
x = Lambda(sample_point_from_normal_distribution,
name="encoder_output")([self.mu, self.log_variance])
return x
print("VAE successfully built")
#@title Load Checkpoint for Generating
checkpoint_load_directory = "/path/to/your/checkpoint" #@param {type:"string"}
#-------LOAD MODEL FOR GENERATING-------------#
vae = VAE.load(checkpoint_load_directory)
print("Loaded checkpoint")
#@title Import synthesis utility functions
#-----TESTING FUNCTIONS ----------- #
def select_spec(spec, labels, num_spec=10):
sample_spec_index = np.random.choice(range(len(spec)), num_spec)
sample_spec = spec[sample_spec_index]
sample_labels = labels[sample_spec_index]
return sample_spec, sample_labels
def plot_reconstructed_spec(spec, reconstructed_spec):
fig = plt.figure(figsize=(15, 3))
num_spec = len(spec)
for i, (image, reconstructed_image) in enumerate(zip(spec, reconstructed_spec)):
image = image.squeeze()
ax = fig.add_subplot(2, num_spec, i + 1)
ax.axis("off")
ax.imshow(image, cmap="gray_r")
reconstructed_image = reconstructed_image.squeeze()
ax = fig.add_subplot(2, num_spec, i + num_spec + 1)
ax.axis("off")
ax.imshow(reconstructed_image, cmap="gray_r")
plt.show()
def plot_spec_encoded_in_latent_space(latent_representations, sample_labels):
plt.figure(figsize=(10, 10))
plt.scatter(latent_representations[:, 0],
latent_representations[:, 1],
cmap="rainbow",
c=sample_labels,
alpha=0.5,
s=2)
plt.colorbar()
plt.show()
#---------------NOISE GENERATOR FUNCTIONS ------------#
def generate_random_z_vect(seed=1001,size_z=1,scale=1.0):
np.random.seed(seed)
x = np.random.uniform(low=(scale * -1.0), high=scale, size=(size_z,VECTOR_DIM))
return x
def generate_z_vect_from_perlin_noise(seed=1001, size_z=1, scale=1.0):
np.random.seed(seed)
x = generate_perlin_noise_2d((size_z, VECTOR_DIM), (1,1))
x = x*scale
return x
def generate_z_vect_from_fractal_noise(seed=1001, size_z=1, scale=1.0,):
np.random.seed(seed)
x = generate_fractal_noise_2d((size_z, VECTOR_DIM), (1,1),)
x = x*scale
return x
#-------SPECTROGRAM AND SOUND SYNTHESIS UTILITY FUNCTIONS -------- #
#Assembling generated Spectrogram chunks into final Spectrogram
def specass(a,spec):
but=False
con = np.array([])
nim = a.shape[0]
for i in range(nim-1):
im = a[i]
im = np.squeeze(im)
if not but:
con=im
but=True
else:
con = np.concatenate((con,im), axis=1)
diff = spec.shape[1]-(nim*shape)
a = np.squeeze(a)
con = np.concatenate((con,a[-1,:,-diff:]), axis=1)
return np.squeeze(con)
#Splitting input spectrogram into different chunks to feed to the generator
def chopspec(spec):
dsa=[]
for i in range(spec.shape[1]//shape):
im = spec[:,i*shape:i*shape+shape]
im = np.reshape(im, (im.shape[0],im.shape[1],1))
dsa.append(im)
imlast = spec[:,-shape:]
imlast = np.reshape(imlast, (imlast.shape[0],imlast.shape[1],1))
dsa.append(imlast)
return np.array(dsa, dtype=np.float32)
#Converting from source Spectrogram to target Spectrogram
def towave_reconstruct(spec, spec1, name, path='../content/', show=False, save=False):
specarr = chopspec(spec)
specarr1 = chopspec(spec1)
print(specarr.shape)
a = specarr
print('Generating...')
ab = specarr1
print('Assembling and Converting...')
a = specass(a,spec)
ab = specass(ab,spec1)
awv = deprep(a)
abwv = deprep(ab)
if save:
print('Saving...')
pathfin = f'{path}/{name}'
sf.write(f'{pathfin}.wav', awv, sr)
print('Saved WAV!')
IPython.display.display(IPython.display.Audio(np.squeeze(abwv), rate=sr))
IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr))
if show:
fig, axs = plt.subplots(ncols=2)
axs[0].imshow(np.flip(a, -2), cmap=None)
axs[0].axis('off')
axs[0].set_title('Reconstructed')
axs[1].imshow(np.flip(ab, -2), cmap=None)
axs[1].axis('off')
axs[1].set_title('Input')
plt.show()
return abwv
#Converting from Z vector generated spectrogram to waveform
def towave_from_z(spec, name, path='../content/', show=False, save=False):
specarr = chopspec(spec)
print(specarr.shape)
a = specarr
print('Generating...')
print('Assembling and Converting...')
a = specass(a,spec)
awv = deprep(a)
if save:
print('Saving...')
pathfin = f'{path}/{name}'
sf.write(f'{pathfin}.wav', awv, sr)
print('Saved WAV!')
IPython.display.display(IPython.display.Audio(np.squeeze(awv), rate=sr))
if show:
fig, axs = plt.subplots(ncols=1)
axs.imshow(np.flip(a, -2), cmap=None)
axs.axis('off')
axs.set_title('Decoder Synthesis')
plt.show()
return awv
#@title Perform Timbre Transfer
save_audio = True #@param {type:"boolean"}
audio_name = "timbre_transfered_audio" #@param {type:"string"}
audio_save_directory = "/content/" #@param {type:"string"}
encoded_spec = vae.encode(out_data)
print(np.shape(encoded_spec))
z_sample = np.array(vae.sample_from_latent_space(encoded_spec))
assembled_spec = testass(z_sample)
towave_from_z(assembled_spec, name=audio_name,path=audio_save_directory,show=True, save=save_audio)
###Output
_____no_output_____ |
notebook/core/module/Def.ipynb | ###Markdown
Functions
###Code
def movie1(name):
    print('Movie:', name)  # the return type is not declared
movie1('킹덤')
movie1('청춘 기록')
movie1('테넷')
def movie2(name, genre):  # multiple parameters are allowed
    print('Movie: ' + name)
    print('Genre: ' + genre)
# a missing argument raises an error
movie2('1987','역사')
movie2('종이의 집','범죄 드라마')
def movie3(name, genre, score=5.0):  # default value, a Python feature
    print('Movie: ' + name)
    print('Genre: ' + genre)
    print('Rating: ' + str(score))
movie3('기생충','드라마')  # the default score is used here
movie3('검은 사제들','퇴마',9.5)
# movie3('컨저링2')  # X: error, genre has no default value
def movie4(time, name, genre):
    print('Movie: ' + name)
    print('Time: ' + str(time))  # strings and numbers cannot be joined with +, so convert to str
    print('Genre: ' + genre)
movie4(120,'안시성','역사')
movie4(name='업사이드',time=100,genre='휴먼')  # keyword arguments may be passed in any order
def movie5(*actors):  # variadic arguments are collected into a tuple; no limit on their number or type
    print(type(actors))
    print(actors)
movie5('안시성', '조인성', '김태리')
movie5('맘마미아', '피어스브로스넌', '메릴스트립', '아만다 사이프리드')
movie5('인터스텔라', '메튜 맥커너희', '엔 헤서웨이', 2014)
def movie6(movie, *actors):  # mixing a fixed parameter with variadic arguments
    print(type(actors))
    print(movie)
    print(actors)  # tuple
# the first argument is bound to movie, the remaining ones go into the variadic actors
movie6('안시성', '조인성', '김태리')
movie6('맘마미아', '피어스브로스넌', '메릴스트립', '아만다 사이프리드')
movie6('인터스텔라', '메튜 맥커너희', '엔 헤서웨이', 2014)
def movie7(movie, **actors):  # fixed parameter plus keyword arguments collected into a dictionary
    print(movie)
    print(type(actors))
    print(actors)  # dictionary
# movie7('안시성', '조인성', '김태리')  # X: extra positional arguments are not accepted here
movie7('인터스텔라', actor1='맥커너희', actor2='앤 헤서웨이',
       actor3='마이클 케인')  # dictionary
def season(month):  # by default a Python function does not need to declare a return type
    season = ''
    if month == 1:
        season = 'January'
    elif month == 2:
        season = 'February'
    elif month == 3:
        season = "March"
    else:
        season = "Only input 1 ~ 3"
    return season  # but a function can return a value when needed
test = season(1)  # the function returns a value, so it can be assigned; otherwise it would be None
print(test)
str1 = 'global variable'
def fun1():
    str1 = 'local variable'
    print(str1)
fun1()  # inside the function, the local variable takes precedence over the global one
print(str1)  # outside the function, the global variable is used
str1 = 'global variable'
def fun2():
    global str1  # use the global variable
    str1 = 'value assigned inside the function'
    print(str1)
fun2()
print(str1)  # with the global declaration, the assignment replaces the global value
def fun3():
    print(str1)  # with no local variable of that name, the global one is used automatically
fun3()
def maxno(x,y): # x, y are local variables
if x > y:
return x
else:
return y
su= maxno(100,200)
print(type(su))
print(su)
def reverse(x,y,z): # multiple return values can be declared
return z,y,x
ret=reverse(10,20,30)
print(ret)
r1,r2,r3=reverse(10,20,30)
print(r1,r2,r3)
r1,r2,r3=reverse("A","B","C")
print(r1,r2,r3)
# swap algorithm, an important concept ★★
r1,r2=10,20
print(r1,r2)
r1,r2=r2,r1 # swap the values
print(r1,r2)
# the Java-style way
r1 = 10
r2 = 20
temp=r1
r1=r2
r2=temp
print(r1,r2)
def data():
return(1,2),('A','B')
# a,b,c,d=data() # X: not allowed
ret=data()
print(type(ret))
print(ret)
(a,b),(c,d)=data() # the developer must know the returned structure
print(a,b,c,d)
def data():
    return [1, 2, 3]  # returns a list
ret =data()
print(type(ret))
print(ret)
def data():
return[1,2,3],[4,5,6]
ret =data()
print(type(ret))
print(ret) # a tuple of lists
print(type(ret[0]))
print(ret[0])
print(type(ret[1]))
print(ret[1])
ret1,ret2=data()
print(ret1)
print(ret2)
# a Python file is called a module
###Output
_____no_output_____ |
ppkgl/doitkaggle.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
# Ignore the warnings
import warnings
warnings.filterwarnings('ignore')
# System related and data input controls
import os
# Data manipulation, visualization and useful functions
import pandas as pd
import numpy as np
from itertools import product # iterative combinations
from tqdm import tqdm
import matplotlib.pyplot as plt
import seaborn as sns
# Modeling algorithms
# General(Statistics/Econometrics)
from sklearn import preprocessing
import statsmodels.api as sm
import statsmodels.tsa.api as smt
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy import stats
# Regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor, GradientBoostingRegressor, AdaBoostRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
# Classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
# Model selection
from sklearn.model_selection import train_test_split,cross_validate
from sklearn.model_selection import KFold
from sklearn.model_selection import GridSearchCV
# Evaluation metrics
# for regression
from sklearn.metrics import mean_squared_log_error, mean_squared_error, r2_score, mean_absolute_error
# for classification
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
import pandas as pd
from keras.models import Sequential, Model, load_model
from keras.layers import Input, Dense, Activation, Flatten, Dropout
from keras.layers import SimpleRNN, LSTM, GRU
item_categories = pd.read_csv('/content/drive/MyDrive/data/item_categories.csv')
items = pd.read_csv('/content/drive/MyDrive/data/items.csv')
sales_train = pd.read_csv('/content/drive/MyDrive/data/sales_train.csv') # training data
sample_submission = pd.read_csv('/content/drive/MyDrive/data/sample_submission.csv')
shops = pd.read_csv('/content/drive/MyDrive/data/shops.csv')
test = pd.read_csv('/content/drive/MyDrive/data/test.csv') # test data
raw_data = sales_train
# 1. Remove duplicates from train (drop rows where all of the listed columns have identical values)
# drop_duplicates
subset= ['date','date_block_num','shop_id','item_id','item_cnt_day']
print(raw_data.duplicated(subset=subset).value_counts())
raw_data.drop_duplicates(subset=subset, inplace=True)
# 2935849 - 2935825 = 24 duplicate rows removed
# 2. Since we only need to predict sales for items in test, drop items from train that do not appear in test
#test_shops = test.shop_id.unique()
#test_items = test.item_id.unique()
#raw_data = raw_data[raw_data.shop_id.isin(test_shops)]
#raw_data = raw_data[raw_data.item_id.isin(test_items)]
#raw_data.shape
# more than half the data dropped: from 2935825 down to 1224429 rows
# create data groups for item (22169 items) and item_cate (83 categories)
# item_category_name itself contains a broader parent category
# pickup first-category name
items_g = item_categories['item_category_name'].apply(lambda x: str(x).split(' ')[0])
item_categories['item_group'] = pd.Categorical(items_g).codes
item_c = pd.merge(item_categories, item_categories.loc[:,['item_category_id','item_group']], on=['item_category_id'], how='left')
item_c.head()
item_c = item_c.drop('item_group_y',axis=1)
item_c = item_c.drop('item_category_name',axis=1)
item_c.columns = ['item_category_id','item_group']
item_c
# classification based on the shop name
# shop names also contain a broader category; extract it as city
city = shops.shop_name.apply(lambda x: str.replace(x, '!', '')).apply(lambda x: x.split(' ')[0])
shops['city'] = pd.Categorical(city).codes
shops.head()
shops = shops.drop('shop_name', axis=1)
shops.head()
items.info()
items.head()
itemsinfo = pd.merge(items, item_c)
itemsinfo
len(raw_data)
df = pd.merge(raw_data,itemsinfo)
len(df)
df = pd.merge(df,shops)
df.head()
df = df.drop('item_name',axis=1)
#df = df.drop('item_group',axis=1)
df = df.drop('item_price',axis=1)
df.info()
df.isna().sum()
sns.heatmap(df.corr(), annot=True, cmap='YlOrRd')
#df.columns
#sns.jointplot(x='item_price', y='item_cnt_day', data=df)
#sns.jointplot(x='item_category_id', y='item_cnt_day', data=df)
#sns.jointplot(x='item_id', y='item_cnt_day', data=df)
#sns.jointplot(x='shop_id', y='item_cnt_day', data=df)
#df.columns
#sns.boxplot(x='shop_name', y='item_cnt_day', data=df)
#sns.boxplot(x='date', y='item_cnt_day', data=df)
df.columns
#df2 = df.drop(['item_price','date_block_num'],axis=1)
df
#print(df['item_price'].quantile(0.95))
#print(df['item_price'].quantile(0.005))
#df2 = df2[(df2['item_price'] < df2['item_price'].quantile(0.95)) & (df2['item_price'] > df2['item_price'].quantile(0.005))]
#df2['item_price'].hist()
df.hist()
df2 = df
df2= df2[(df2['item_cnt_day'] < df2['item_cnt_day'].quantile(0.95))]
df2['item_cnt_day'].hist()
df2['date'] = pd.to_datetime(df2['date'])
df3 = df2.set_index(['date'])
df3 = df3.sort_index()
df3 = df3.reset_index()
df3.index.name = 'ID'
df3.head()
df3 = df3.rename_axis('ID').reset_index()
df3.head()
df3 = df3.set_index('date')
df3.head()
dfformerge = df3[['item_id','item_category_id','item_group']]
dfgormerge2 = df3[['shop_id','city']]
dfformerge
dfgormerge2
df4 = df3.groupby(by=['shop_id','item_id'])['item_cnt_day'].resample('M').sum()
#df7 = df3.groupby(by=['shop_id','item_id'])['item_price'].resample('M').sum()
#dfprice = pd.DataFrame(df7)
#dfprice = dfprice.reset_index()
df4
df5 = pd.DataFrame(df4)
df5 = df5.reset_index()
df5.head()
df5.info()
items = items.drop('item_name',axis=1)
df6 = pd.merge(df5, items, how='left', left_on='item_id',right_on='item_id')
df6
dform_du = dfformerge.drop_duplicates(subset = 'item_id')
dform_du
dform_du2 = dfgormerge2.drop_duplicates(subset = 'shop_id')
dform_du2.info()
df6= pd.merge(df5, dform_du, how='left', left_on='item_id', right_on='item_id')
df6= pd.merge(df6, dform_du2, how='left', left_on='shop_id', right_on='shop_id')
df6.info()
#dfsummonth = pd.merge(df5, dfprice)
#dfsummonth
#itemsinfo.head()
#newitemsinfo = itemsinfo.drop(['item_name','item_category_name'],axis=1)
#newitemsinfo.head()
#mergetest = pd.merge(df5, newitemsinfo)
#mergetest2 = pd.merge(df5, newitemsinfo, how='left',left_on='item_id', right_on='item_id')
#mergetest3 = pd.merge(df5, newitemsinfo, how='left',left_on='item_id', right_on='item_id' )
#mergetest3.head()
df6['date'] = pd.to_datetime(df6['date'])
df6=df6.set_index('date')
df6 = df6.sort_index()
df6= df6.reset_index()
df6 = df6.rename_axis('ID').reset_index()
df6
df6 = df6.drop('date', axis=1)
df6
df6 = df6.drop('index', axis=1)
#mergetest3.head()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df6)
X_scaled = scaler.transform(df6)
X_scaled = pd.DataFrame(X_scaled, index=df6.index, columns =df6.columns)
X_scaled.head()
X_scaled.info()
#X = X_scaled.drop(['item_cnt_day'],axis=1)
#y = df6['item_cnt_day']
# try running without normalization
y = df6.loc[:,'item_cnt_day']
X = df6.drop(['item_cnt_day'], axis=1)
len(X_scaled)
len(y)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
from xgboost import XGBRegressor
model_reg = XGBRegressor()
model_reg.fit(X_train, y_train)
from sklearn.metrics import mean_absolute_error, mean_squared_error
from math import sqrt
pred = model_reg.predict(X_test)
print(mean_absolute_error(y_test, pred))
print(sqrt(mean_squared_error(y_test, pred)))
pred.mean()
# determine which features are the most important
from xgboost import plot_importance
import matplotlib.pyplot as plt
%matplotlib inline
plot_importance(model_reg, height=0.9)
test.head()
#testmerge = pd.merge(test, newitemsinfo, how='left', left_on='item_id', right_on='item_id')
#testmerge
#mergefortest = df3.drop(['shop_id', 'item_category_id','item_cnt_day','ID'],axis=1)
#mergefortest
#df3.head()
evalcsv = pd.read_csv('/content/forsubmission.csv')
eval = evalcsv
eval =pd.merge(eval, itemsinfo,how='left',left_on='item_id',right_on='item_id')
eval =pd.merge(eval, shops,how='left',left_on='shop_id',right_on='shop_id')
eval = eval.drop('item_name',axis=1)
test.head()
eval = pd.merge(test, items, how='left', left_on='item_id',right_on='item_id')
eval.head()
eval = eval.drop('item_price',axis=1)
pred = model_reg.predict(eval)
pred.mean()
items.head()
evalcsv.head()
evalcsv = pd.merge(evalcsv, items, how='left',left_on='item_id',right_on='item_id')
evalcsv =evalcsv.drop('item_name',axis=1)
submission =pd.DataFrame(pred)
submission_copy = submission.rename_axis('ID').reset_index()
submission_copy
submission_copy.columns = ['ID', 'item_cnt_month']
submission_copy
submission_copy.mean()
submission_copy.info()
submission_copy['item_cnt_month'] = submission_copy['item_cnt_month'].astype(float)
submission_copy.to_csv('XG_submission3.csv',index=False)
items.head()
len(test)
###Output
_____no_output_____ |
01.14/Class_transcript_01_14_LU.ipynb | ###Markdown
In-class transcript from Lecture 3, January 14, 2019
###Code
# These are the standard imports for CS 111.
# This list may change as the quarter goes on.
import os
import time
import math
import numpy as np
import numpy.linalg as npla
import scipy
from scipy import sparse
from scipy import linalg
import scipy.sparse.linalg as spla
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import axes3d
%matplotlib tk
# Example of an upper triangular matrix
U = np.array([[2,7,1,8],[0,2,8,1],[0,0,8,2],[0,0,0,8]])
print(U)
# Example of a unit lower triangular matrix
L = np.array([[1,0,0,0],[.5,1,0,0],[0,.5,1,0],[-.5,-.5,0,1]])
print(L)
# Experiments with a random matrix
A = np.round(20*np.random.rand(5,5))
print(A)
npla.matrix_rank(A)
npla.norm(A)
npla.cond(A)
# Creating a right-hand side for which we know the answer to Ax=b
xorig = np.round(10*np.random.rand(5))
print("original x:", xorig)
b = A @ xorig
print("right-hand side b:", b)
x = npla.solve(A,b)
print("computed x:", x)
error = x - xorig
print("error in x:", error)
print("relative norm of error:", npla.norm(x-xorig) / npla.norm(x))
residual = b - A@x
print("residual:", residual)
print("relative norm of residual:", npla.norm(residual) / npla.norm(b))
# Now a different matrix A and right-hand side vector b
A = np.array([[ 2. , 7. , 1. , 8. ],
[ 1. , 5.5, 8.5, 5. ],
[ 0. , 1. , 12. , 2.5],
[-1. , -4.5, -4.5, 3.5]])
print("A:", A,'\n')
b = np.array([17. , 2.5, -7. , 10.5])
print("b:", b)
# We used Gaussian elimination on the blackboard to triangularize A, giving U
print(U)
# During Gaussian elimination, we wrote down the multipliers in
# a lower triangular array and then put ones on the diagonal, giving L
print(L)
# The theorem: Gaussian elimination factors A as the product L time U
print( L @ U)
print()
print(A)
def LUfactorNoPiv(A):
"""Factor a square matrix, A == L @ U (no partial pivoting)
Parameters:
A: the matrix.
Outputs (in order):
L: the lower triangular factor, same dimensions as A, with ones on the diagonal
U: the upper triangular factor, same dimensions as A
"""
# Check the input
m, n = A.shape
assert m == n, 'input matrix A must be square'
# Make a copy of the matrix that we will transform into L and U
LU = A.astype(np.float64).copy()
# Eliminate each column in turn
for piv_col in range(n):
# Update the rest of the matrix
pivot = LU[piv_col, piv_col]
assert pivot != 0., "pivot is zero, can't continue"
for row in range(piv_col + 1, n):
multiplier = LU[row, piv_col] / pivot
LU[row, piv_col] = multiplier
LU[row, (piv_col+1):] -= multiplier * LU[piv_col, (piv_col+1):]
# Separate L and U in the result
U = np.triu(LU)
L = LU - U + np.eye(n)
return (L, U)
def Lsolve(L, b):
"""Forward solve a unit lower triangular system Ly = b for y
Parameters:
L: the matrix, must be square, lower triangular, with ones on the diagonal
b: the right-hand side vector
Output:
y: the solution vector to L @ y == b
"""
# Check the input
m, n = L.shape
assert m == n, "matrix L must be square"
assert np.all(np.tril(L) == L), "matrix L must be lower triangular"
assert np.all(np.diag(L) == 1), "matrix L must have ones on the diagonal"
# Make a copy of b that we will transform into the solution
y = b.astype(np.float64).copy()
# Forward solve
for col in range(n):
y[col+1:] -= y[col] * L[col+1:, col]
return y
def Usolve(U, y):
"""Backward solve an upper triangular system Ux = y for x
Parameters:
U: the matrix, must be square, upper triangular, with nonzeros on the diagonal
y: the right-hand side vector
Output:
x: the solution vector to U @ x == y
"""
print("you will write Usolve in hw2")
return
L,U = LUfactorNoPiv(A)
y = Lsolve(L,b)
print("y:", y)
x = Usolve(U,y)
print("\nx:", x)
print("\nresidual norm:", npla.norm(b - A @ x))
A @ [1,0,-1,2]
# But LU factorization (without pivoting) fails if it encounters a zero pivot
A = np.array([[0, 1], [1, 2]])
L,U = LUfactorNoPiv(A)
def LUfactor(A, pivoting = True):
"""Factor a square matrix with partial pivoting, A[p,:] == L @ U
Parameters:
A: the matrix.
pivoting: whether or not to do partial pivoting
Outputs (in order):
L: the lower triangular factor, same dimensions as A, with ones on the diagonal
U: the upper triangular factor, same dimensions as A
p: the permutation vector that permutes the rows of A by partial pivoting
"""
# Check the input
m, n = A.shape
assert m == n, 'input matrix A must be square'
# Initialize p to be the identity permutation
p = np.array(range(n))
# Make a copy of the matrix that we will transform into L and U
LU = A.astype(np.float64).copy()
# Eliminate each column in turn
for piv_col in range(n):
# Choose the pivot row and swap it into place
if pivoting:
piv_row = piv_col + np.argmax(LU[piv_col:, piv_col])
assert LU[piv_row, piv_col] != 0., "can't find nonzero pivot, matrix is singular"
LU[[piv_col, piv_row], :] = LU[[piv_row, piv_col], :]
p[[piv_col, piv_row]] = p[[piv_row, piv_col]]
# Update the rest of the matrix
pivot = LU[piv_col, piv_col]
assert pivot != 0., "pivot is zero, can't continue"
for row in range(piv_col + 1, n):
multiplier = LU[row, piv_col] / pivot
LU[row, piv_col] = multiplier
LU[row, (piv_col+1):] -= multiplier * LU[piv_col, (piv_col+1):]
# Separate L and U in the result
U = np.triu(LU)
L = LU - U + np.eye(n)
return (L, U, p)
# Our 2-by-2 counterexample again
print('A:\n', A)
L, U, p = LUfactor(A)
print('\nL:\n', L)
print('\nU:\n', U)
print('\np: ', p)
# A larger example of LU with partial pivoting
A = np.round(20*np.random.rand(5,5))
print('matrix A:\n', A)
xorig = np.round(10*np.random.rand(5))
print('\noriginal x:', xorig)
b = A @ xorig
print('\nright-hand side b:', b)
# Factor the larger example
L, U, p = LUfactor(A)
print("norm of difference between L times U and permuted A:", npla.norm( L@U - A[p,:]))
# Solve with the larger example
y = Lsolve(L,b[p])
print("y:", y)
x = Usolve(U,y)
print("\nx:", x)
print("\nresidual norm:", npla.norm(b - A @ x))
###Output
y: [340. 184. 85.90909091 284.97916667 103.91794872]
x: [7. 2. 6. 2. 8.]
residual norm: 6.355287432313019e-14
|
notebooks/test_interpolation.ipynb | ###Markdown
First let's make a sample curvilinear grid. The size is given by Nx, and Ny
###Code
# imports used throughout this notebook (assumption: 'interpolate' is the project's
# local helper module providing array2grid and CellTree; cell_tree2d and shapely are third-party)
import numpy as np
import matplotlib.pyplot as plt
import cell_tree2d
import interpolate
from shapely.geometry import MultiPolygon, Polygon, MultiPoint, Point

Nx = 80
Ny = 60
def make_sample_grid(Ny, Nx):
'return sample grid of dimension [Ny, Nx]'
yc, xc = np.mgrid[1:10:Ny*1j, 1:20:Nx*1J]
def rot2d(x, y, ang):
'''rotate vectors by geometric angle'''
xr = x*np.cos(ang) - y*np.sin(ang)
yr = x*np.sin(ang) + y*np.cos(ang)
return xr, yr
x, y = rot2d(xc, (5+yc)**1.2*(3+xc)**0.3, 0.2)
y /= y.ptp()/10.
x /= x.ptp()/10.
x -= x.mean()
y -= y.mean()
return x, y
x, y = make_sample_grid(Ny, Nx)
###Output
_____no_output_____
###Markdown
Now some trial points where the gridded values will be interpolated. These cover a number of test cases at specific grid locations, or random points.
###Code
# Some sample grid locations
x_nodes, y_nodes = x.flatten(), y.flatten()
x_u = 0.5*(x[:, 1:] + x[:, :-1]).flatten()
y_u = 0.5*(y[:, 1:] + y[:, :-1]).flatten()
x_v = 0.5*(x[1:, :] + x[:-1, :]).flatten()
y_v = 0.5*(y[1:, :] + y[:-1, :]).flatten()
x_centers = 0.25*(x[1:, 1:] + x[1:, :-1] + x[:-1, 1:] + x[:-1, :-1]).flatten()
y_centers = 0.25*(y[1:, 1:] + y[1:, :-1] + y[:-1, 1:] + y[:-1, :-1]).flatten()
# centers offset halfway toward the lower right node
x_nc = 0.5*(0.25*(x[1:, 1:] + x[1:, :-1] + x[:-1, 1:] + x[:-1, :-1]) + x[:-1, 1:]).flatten()
y_nc = 0.5*(0.25*(y[1:, 1:] + y[1:, :-1] + y[:-1, 1:] + y[:-1, :-1]) + y[:-1, 1:]).flatten()
x_rand, y_rand = 3.0*np.random.randn(2, 100000)
###Output
_____no_output_____
###Markdown
Change the values of points, xi, and yi to the desired points to test.
###Code
points = np.vstack((x_rand, y_rand)).T
xi, yi = points.T
###Output
_____no_output_____
###Markdown
Generate the cell tree, locate the points
###Code
nodes, faces = interpolate.array2grid(x, y)
squares = np.array([nodes[face] for face in faces])
ct2d = cell_tree2d.CellTree(nodes, faces)
gridpoint_indices = ct2d.locate(points)
###Output
_____no_output_____
###Markdown
Sanity check. See if cell_tree2d really finds the right cell. This can take some time....
###Code
eps = 0.01 # some buffering required if the points are along the edge of a square.
# remove indices with value -1, outside the grid domain
inside = gridpoint_indices >= 0 # good points, inside domain
gridpoint_indices = gridpoint_indices[inside]
points_i = points[inside]
squares_i = squares[gridpoint_indices]
mesh = MultiPolygon([Polygon(p).buffer(eps) for p in squares_i])
trial = MultiPoint([Point(p) for p in points_i])
contains = [m.contains(p) for m, p in zip(mesh, trial)]
assert(np.alltrue(contains))
###Output
_____no_output_____
###Markdown
Now check interpolation. This is fast..
###Code
def zfunc(x, y):
'Sample field for interpolation'
return np.sin(x/10.) + np.cos(y/10.)
ct = interpolate.CellTree(x, y)
loc = ct.locate(points)
zgrid = zfunc(x, y)
zi = loc.interpolate(zgrid)
# use loc.points, as this contains only the points in the domain.
zi_true = zfunc(*loc.points.T)
assert(np.allclose(zi, zi_true, rtol=0.0001))
###Output
_____no_output_____
###Markdown
Plot this up some..
###Code
fig = plt.figure(figsize=(14, 10))
ax = fig.add_subplot(111)
zc = zfunc( 0.25*(x[1:, 1:] + x[1:, :-1] + x[:-1, 1:] + x[:-1, :-1]),
0.25*(y[1:, 1:] + y[1:, :-1] + y[:-1, 1:] + y[:-1, :-1]) )
ax.pcolormesh(x, y, zgrid, cmap='viridis', clim=(0.5, 1.5))
# we only want to plot the points that are within the grid domain.
ax.scatter(loc.points[:, 0], loc.points[:, 1], 10, zi, cmap='viridis', edgecolor='none')
ax.plot(xi, yi, '.k', alpha=0.5, markersize=0.5) # all points
ax.plot(loc.points[:, 0], loc.points[:, 1], '.k', markersize=1) # only points in the domain
ax.set_xlim(-6, 6)
ax.set_ylim(-6, 6)
ax.set_aspect(1.0)
###Output
_____no_output_____ |
Hotels rating prediction/Hotel reviews - baseline solution.ipynb | ###Markdown
Train set, testing set
###Code
from sklearn import linear_model
Z['Combined'] = Z['Title']+' | '+Z['Text']
X_train = Z.loc[train_inds,'Combined'].values
Y_train = Z.loc[train_inds,'Rating'].values
X_test = Z.loc[test_inds,'Combined'].values
Y_test = Z.loc[test_inds,'Rating'].values
text_model = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('model', linear_model.Ridge()),
])
from sklearn.model_selection import GridSearchCV
parameters = {'vect__ngram_range': [(1, 2)],
'model__alpha': 10**linspace(-5,5,6),
}
gs_model = GridSearchCV(text_model, parameters, scoring='neg_mean_squared_error', n_jobs=-1)
gs_model = gs_model.fit(X_train, Y_train)
gs_model.best_score_, gs_model.best_params_
sqrt(-gs_model.best_score_)
###Output
_____no_output_____
###Markdown
Submission
###Code
Z.loc[train_inds].index
print(len(X_train), len(X_test))
concatenate((X_train,[X_test[0]]))
Y_hat = gs_model.predict(X_train)
Y_hat = [20 if i<20 else 100 if i>100 else round(i,4) for i in Y_hat]
Y_hat[6]
S = pd.DataFrame(Y_hat, columns=['Rating'], index = Z.loc[train_inds].index)
S.head()
S.to_csv('Data/solution.csv', index=True)
###Output
_____no_output_____ |
sample-code/notebooks/3-05.ipynb | ###Markdown
Chapter 3: Processing data with pandas. 3-5: Reading various kinds of data
###Code
# Listing 3.5.1: Reading a CSV file
import os
import pandas as pd
base_url = (
"https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/"
)
anime_csv = os.path.join(base_url, "anime.csv")
df = pd.read_csv(anime_csv)
df.head()
# Listing 3.5.2: Specifying the index column by position
# specify the column to use as the index by its position
df = pd.read_csv(anime_csv, index_col=0)
df.head()
# Listing 3.5.3: Specifying the index column by name
# specify the column to use as the index by its name
df = pd.read_csv(anime_csv, index_col="anime_id")
df.head()
# Listing 3.5.4: Specifying column dtypes
df = pd.read_csv(anime_csv, dtype={"members": float})
df.head()
# Listing 3.5.5: Converting to datetime
anime_stock_price_csv = os.path.join(base_url, "anime_stock_price.csv")
df = pd.read_csv(anime_stock_price_csv, parse_dates=["Date"])
df.dtypes
# Listing 3.5.7: Specifying the delimiter
anime_tsv = os.path.join(base_url, "anime.tsv")
df = pd.read_csv(anime_tsv, sep="\t")
# Listing 3.5.8: Reading an Excel file
anime_xlsx = os.path.join(base_url, "anime.xlsx")
df = pd.read_excel(anime_xlsx)
df.head()
# Listing 3.5.9: Reading a sheet specified by name
df = pd.read_excel(anime_xlsx, sheetname="Movie")
# Listing 3.5.10: Reading from SQLite
from urllib.request import urlopen
import sqlite3
anime_db = os.path.join(base_url, "anime.db")
res = urlopen(anime_db)
with open("anime.db", "wb") as f:
f.write(res.read())
with sqlite3.connect(f.name) as conn:
df = pd.read_sql("SELECT * FROM anime", conn)
# Listing 3.5.11: Reading HTML tables
url = "https://docs.python.org/3/py-modindex.html"
tables = pd.read_html(url, index_col=1)
tables[0].loc[:, 1:].dropna().head(10)  # drop empty columns and missing values from the first DataFrame
###Output
_____no_output_____ |
CN DSA/CA problems.ipynb | ###Markdown
Find the Equilibrium Point in an Array
###Code
arr = [2, 3, 10, -10, 4, 2, 9]
n = 7
# complexity O(n^2)
def sum_arr(arr):
ans = 0
for i in range(len(arr)):
ans += arr[i]
return ans
for i in range(n):
if sum_arr(arr[:i]) == sum_arr(arr[i+1:]):
print(i)
break
# complexity O(n)
left = 0
right = sum(arr)
for i in range(n):
right = right - arr[i]
if left == right:
print(i)
break
left += arr[i]
###Output
_____no_output_____
###Markdown
Find the Unique ElementYou have been given an integer array/list(ARR) of size N. Where N is equal to [2M + 1].Now, in the given array/list, 'M' numbers are present twice and one number is present only once.You need to find and return that number which is unique in the array/list. Note:Unique element is always present in the array/list according to the given condition.Input format :The first line contains an Integer 't' which denotes the number of test cases or queries to be run. Then the test cases follow.First line of each test case or query contains an integer 'N' representing the size of the array/list.Second line contains 'N' single space separated integers representing the elements in the array/list.Output Format :For each test case, print the unique element present in the array.Output for every test case will be printed in a separate line.Constraints :1 <= t <= 10^20 <= N <= 10^6Time Limit: 1 secSample Input 1:172 3 1 6 3 6 2Sample Output 1:1Sample Input 2:252 4 7 2 791 3 1 3 6 6 7 10 7Sample Output 2:410
###Code
arr = [2, 3, 2, 4, 6, 3, 6]
n = 7
###Output
_____no_output_____
###Markdown
Solution 1 Complexity = O(n^2)
###Code
def find_unique(n,arr):
for i in range(n):
for j in range(n+1):
if j == n:
return arr[i]
if i != j:
if arr[i] == arr[j]:
break
find_unique(7,arr)
###Output
_____no_output_____
###Markdown
Solution 2: Complexity O(n log n). We need to sort first, which takes O(n log n), and the following scan over adjacent pairs then takes O(n). A merge sort helper is sketched in the code cell below.
###Code
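# merge_sort is not defined in this notebook as shown; a minimal standard in-place
# merge sort sketch (an assumption, any O(n log n) sort would work here):
def merge_sort(a):
    if len(a) <= 1:
        return
    mid = len(a) // 2
    left, right = a[:mid], a[mid:]
    merge_sort(left)
    merge_sort(right)
    i = j = k = 0
    # merge the two sorted halves back into a
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1
        k += 1
    while i < len(left):
        a[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        a[k] = right[j]
        j += 1
        k += 1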
merge_sort(arr)
print(arr)
for i in range(0,n,2):
if arr[i] != arr[i+1]:
print(arr[i])
break
###Output
_____no_output_____
###Markdown
Solution 3: Complexity O(n), solved with the bitwise XOR (^) operator, which returns 0 when a number is XORed with itself. This approach uses 2 properties of XOR:
1. XOR of a number with itself is 0.
2. XOR of a number with 0 is the number itself.
Let us understand this approach with the help of an example: arr[] = 2 3 1 6 3 6 2. Taking their XOR: Answer = 2 ^ 3 ^ 1 ^ 6 ^ 3 ^ 6 ^ 2. Since XOR is associative and commutative, we can write it as: Answer = (2^2) ^ (3^3) ^ 1 ^ (6^6) = 0 ^ 0 ^ 1 ^ 0 = 1. Time complexity of this solution is O(n).
###Code
arr = [2, 3, 1, 6, 3, 6, 2]
n = 7
ans = 0
for i in range(n):
ans = ans ^ arr[i]
print(ans)
###Output
_____no_output_____
###Markdown
Find the duplicate element in the given array. Duplicate in array: You have been given an integer array/list (ARR) of size N which contains numbers from 0 to (N - 2). Each number is present at least once. That is, if N = 5, the array/list constitutes values ranging from 0 to 3, and among these, there is a single integer value that is present twice. You need to find and return that duplicate number present in the array.
Note: The duplicate number is always present in the given array/list.
Input format: The first line contains an integer 't' which denotes the number of test cases or queries to be run. Then the test cases follow. The first line of each test case or query contains an integer 'N' representing the size of the array/list. The second line contains 'N' single-space-separated integers representing the elements in the array/list.
Output format: For each test case, print the duplicate element in the array/list. Output for every test case will be printed in a separate line.
Constraints: 1 <= t <= 10^2, 0 <= N <= 10^6, Time Limit: 1 sec
Sample Input 1:
1
9
0 7 2 5 4 7 1 3 6
Sample Output 1:
7
Sample Input 2:
2
5
0 2 1 3 1
7
0 3 1 5 4 3 2
Sample Output 2:
1
3
Solution: Find duplicate
Problem description: You are given an array of integers of size n which contains numbers from 0 to n - 2. Each number is present at least once. That is, if n = 5, numbers from 0 to 3 are present in the given array at least once and one number is present twice. You need to find and return that duplicate number present in the array.
How to approach?
Approach 1: In this question you need to run two loops: pick an element in the outer loop and then, in the inner loop, check whether that element appears again; if yes, return that element, otherwise move on to the next element. This method doesn't use the other useful information given in the question, namely that the numbers lie between 0 and n-2, and hence it increases the time complexity.
Pseudo code for this approach:
Function Findduplicate: For i = 0 to i less than size: For j = 0 to j less than size: If i not equal to j and arr[i] is equal to arr[j]: Return arr[i]; return minus infinity
Time complexity for this approach: O(n^2), which is not good, hence we move to the next approach.
Approach 2: A better solution for this problem uses the XOR operator. Using XOR, we can solve this problem in a single traversal. The following facts about the XOR operation are useful for this question:
1. If we XOR a number with itself an even number of times, the result is 0.
2. If we XOR a number with itself an odd number of times, the result is the number itself.
3. XOR of a number with 0 gives that number again.
So, if we take the XOR of all the elements present in the array together with every element in the range 0 to n-2, then all the elements of the array except the duplicate element are XORed 2 times and hence their result is 0. But the duplicate element is XORed 3 times, hence its result is the number itself.
Hence, you will get your answer as the duplicate number present in the array. For example, if you are given n=5 and the array is 0 1 3 2 2, then according to this approach we have to XOR all elements present in the array with every element in the range 0 to 3:
Answer = (0^1^3^2^2) ^ (0^1^2^3)
As the XOR operation is associative and commutative, by rearranging:
Answer = (0^0) ^ (1^1) ^ (2^2^2) ^ (3^3) = 0 ^ 0 ^ 2 ^ 0 = 2
Pseudo code for this approach:
Function Findduplicate: answer = 0; For i = 0 to i less than n: answer = answer xor arr[i]; For i = 0 to i less than or equal to n-2: answer = answer xor i; Return answer
Time complexity for this approach: O(n), as you traverse the array only once for XORing.
Approach 3: Another approach is to make use of the condition that all elements lie between 0 and n-2. So first calculate the sum of all natural numbers between 0 and n-2 using the direct formula ((n - 1) * (n - 2)) / 2, and the sum of all elements of the array. Now, subtract the sum of all natural numbers between 0 and n-2 from the sum of all elements of the array. This gives you the duplicate element present in the array.
Pseudo code for this approach:
Function findduplicate: sum = 0; For i = 0 to i less than size: sum = sum + input[i]; n = size; sumOfNaturalNumbers = ((n - 1) * (n - 2)) / 2; return sum - sumOfNaturalNumbers
Time complexity for this approach: O(n), as you traverse the array only once to calculate the sum of all elements.
Let us dry run the code for N = 9, arr[] = 0 7 2 5 4 7 1 3 6:
Sum = 0+7+2+5+4+7+1+3+6 = 35
sumOfNaturalNumbers = 8*7/2 = 28
Output = 35 - 28 = 7
So 7 should get printed.
###Code
arr = [0, 7, 2, 5, 4, 7, 1, 3, 6]
n = len(arr)
print(arr, n)
def find_dup(arr,n):
for i in range(n):
for j in range(i+1, n):
if arr[i] == arr[j]:
return arr[i]
find_dup(arr,n)
merge_sort(arr)
for i in range(n-1):
if arr[i] == arr[i+1]:
print(arr[i])
def find_dup_xor(arr,n):
ans = 0
for i in range(n):
ans = ans ^ arr[i]
for i in range(n-1):
ans = ans ^ i
return ans
find_dup_xor(arr,n)
def sum_arr(arr):
ans = 0
for i in range(len(arr)):
ans += arr[i]
return ans
def find_dup_sum(arr,n):
total_sum = sum_arr(arr)
n2_sum = (n-1)*(n-2) // 2
return total_sum - n2_sum
find_dup_sum(arr,n)
###Output
_____no_output_____
###Markdown
Find the number of pairs of elements in the array whose sum equals the given number
###Code
## Pair sum in array
arr = [1,3,6,2,5,4,3,2,4,20]
n = len(arr)
num = 7
print(arr, n, num)
###Output
_____no_output_____
###Markdown
Solution 1. O(n^2)
###Code
## O(n^2)
c = 0
for i in range(n):
for j in range(i+1,n):
if arr[i] + arr[j] == num:
c += 1
print(c)
###Output
_____no_output_____
###Markdown
Solution 2. O(n) but looping two times
###Code
## creating map
# Space taking but O(n) and taking two for loops
m = [0]*1000
for i in range(n):
m[arr[i]] += 1
twice_count = 0
for i in range(n):
twice_count += m[num - arr[i]]
if num - arr[i] == arr[i]:
twice_count -= 1
print(twice_count//2)
###Output
_____no_output_____
###Markdown
Solution 3. O(n) in single for loop
###Code
## Single loop
m = {}
count = 0
for i in range(n):
if num - arr[i] in m:
count += m[num - arr[i]]
if arr[i] in m:
m[arr[i]] += 1
else:
m[arr[i]] = 1
print(count)
###Output
_____no_output_____
###Markdown
Question: Rotate the array to the left by k positions
###Code
arr = [2,4,3,5,1]
n = len(arr)
k = 3
arr = [1,2,3,4,5,6,7,8]
n= len(arr)
k = 3
###Output
_____no_output_____
###Markdown
By List Slicing (Easy Solution)
###Code
# Slicing method
k = k%n
print(arr[k:] + arr[:k])
###Output
_____no_output_____
###Markdown
By one by one rotation (Bad Solution) O(kn)
###Code
# rotate array
k = k%n
for i in range(k):
s = arr[0]
for i in range(n-1):
arr[i] = arr[i+1]
arr[n-1] = s
print(arr)
###Output
_____no_output_____
###Markdown
Using O(k) extra Space
###Code
k = k%n
# store first k elements in some temp
temp = [0]*k
for i in range(k):
temp[i] = arr[i]
# make ith element as i+k th element
for i in range(n-k):
arr[i] = arr[i+k]
# replace last k elements by the temp
for i in range(n-k,n):
arr[i] = temp[i-n+k]
arr
###Output
_____no_output_____
###Markdown
By Reversing the Array
###Code
# Reverse the Array by Two Pointer Approach
def rev_arr(arr,si,ei):
while si < ei:
arr[si], arr[ei] = arr[ei], arr[si]
si += 1
ei -=1
k = k%n
rev_arr(arr,0,n-1)
print(arr)
# reverse the first n-k elements and reverse the last k elements separately for the desired result
rev_arr(arr,0,n-k-1)
rev_arr(arr,n-k,n-1)
###Output
_____no_output_____
###Markdown
Find the number of triplets in the array whose sum equals num
###Code
arr = [2,-5,8,-6,0,5,10,11,-3]
n= len(arr)
num = 10
###Output
_____no_output_____
###Markdown
Solution 1: Three loops O(n^3)
###Code
count = 0
for i in range(n-2):
for j in range(i+1, n-1):
for k in range(j+1, n):
if arr[i]+arr[j]+arr[k] == num:
count += 1
print(count)
###Output
5
###Markdown
Solution 2: Sort the array and then use the two-pointer approach
###Code
arr = [2,-5,8,-6,0,5,10,11,-3]
n= len(arr)
num = 10
count = 0
arr.sort()
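# fix arr[i] as the first element of the triplet; l and r then scan the remaining sorted
# suffix from both ends, moving inward depending on how the current sum compares to num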
for i in range(n-2):
l = i + 1
r = n - 1
while (l<r):
if arr[i]+arr[l]+arr[r] == num:
count += 1
l+=1
r-=1
elif arr[i]+arr[l]+arr[r] < num:
l+=1
else:
r-=1
print(count)
###Output
5
###Markdown
Solution 3: Storing the map
###Code
count = 0
for i in range(n-1):
s = set()
for j in range(i+1, n):
if num - arr[i] - arr[j] in s:
count += 1
s.add(arr[j])
count
###Output
_____no_output_____ |
class/MODULE_1-python_introduction/.ipynb_checkpoints/10.02_CondicionesBucles-checkpoint.ipynb | ###Markdown
2 - Conditions and Loops
Introduction to Python course - Tecnun, Universidad de Navarra
In this document we will focus on creating conditions and loops. Unlike other programming languages, no braces or *end* statements are used to determine what is included inside the condition or the loop. In Python, all of this is done through indentation. Below we will see some examples. Conditions. The general syntax of conditions is the following:
###Code
a = 1
if a == 1:
    print("a is equal to 1")
###Output
_____no_output_____
###Markdown
The commands used for comparisons are the following:
- **==** and **!=** to check equality or inequality, respectively.
- **>** and **<** to check whether an element is strictly greater or strictly less than another, respectively.
- **>=** and **<=** to check whether an element is greater than or equal to, or less than or equal to, another.
If the condition is met, the check returns a boolean *True* and the lines corresponding to that condition are executed. If, on the contrary, the condition is not satisfied, we get a boolean *False* and the lines corresponding to the condition are not executed. If several conditions need to be checked at once, the boolean operators **and** and **or** can be used. Besides this kind of check, you can test whether a list contains an element using the **in** command. To negate conditions, the boolean operator **not** can be used. As long as it makes sense, these operators can be used with any type of variable. *for* loops. The general syntax of *for* loops is shown in the code cell below. It is important to realize that the command *range(0, 3)* creates a **sequence of numbers between 0 and 2**. The *break* and *continue* commands can be very useful: the first terminates the loop at the moment it is executed, and the second ends the current iteration of the loop and moves on to the next one. Just as with conditions, the variables we use as counters in loops do not have to be numeric. *while* loops. The general syntax of *while* loops is also illustrated in the code cell below. The **+=** operator increases the value of the variable i by the amount written to its right on each iteration. Conversely, the **-=** operator decreases it. As with *for* loops, the *break* and *continue* operators are valid in these loops.
###Code
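# A minimal sketch (not part of the original notebook) illustrating the syntax described
# above: a condition, a for loop over range(0, 3) with continue and break, and a while
# loop using the += operator.
b = 2
if b == 1:
    print("b is equal to 1")
elif b > 1:
    print("b is greater than 1")
else:
    print("b is less than 1")

for i in range(0, 3):
    if i == 1:
        continue          # skip the rest of this iteration
    print("for iteration:", i)

i = 0
while i < 5:
    i += 1
    if i == 4:
        break             # leave the loop early
    print("while iteration:", i)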
###Output
_____no_output_____ |
Semana-18/.ipynb_checkpoints/Tensor Flow-checkpoint.ipynb | ###Markdown
Parallelizing neural network training with TensorFlow
In this section we leave the mathematical rudiments behind and focus on using TensorFlow, one of the most popular deep learning libraries, which implements neural networks more efficiently than any plain NumPy implementation. TensorFlow is a scalable, multiplatform programming interface for implementing and running machine learning algorithms more efficiently, since it can use both the CPU and the GPU; a GPU usually has many more processing cores than a CPU and, combined, they deliver far higher throughput. The most fully developed API of this tool is the Python one, which is why many developers are drawn to this language.
First steps with TensorFlow
https://jakevdp.github.io/PythonDataScienceHandbook/02.01-understanding-data-types.html
###Code
# Creando tensores
# =============================================
import tensorflow as tf
import numpy as np
np.set_printoptions(precision=3)
a = np.array([1, 2, 3], dtype=np.int32)
b = [4, 5, 6]
t_a = tf.convert_to_tensor(a)
t_b = tf.convert_to_tensor(b)
print(t_a)
print(t_b)
# Obteniendo las dimensiones de un tensor
# ===============================================
t_ones = tf.ones((2, 3))
print(t_ones)
t_ones.shape
# Obteniendo los valores del tensor como array
# ===============================================
t_ones.numpy()
# Creando un tensor de valores constantes
# ================================================
const_tensor = tf.constant([1.2, 5, np.pi], dtype=tf.float32)
print(const_tensor)
matriz = np.array([[2, 3, 4, 5], [6, 7, 8, 8]], dtype = np.int32)
matriz
matriz_tf = tf.convert_to_tensor(matriz)
print(matriz_tf, end = '\n'*2)
print(matriz_tf.numpy(), end = '\n'*2)
print(matriz_tf.shape)
###Output
tf.Tensor(
[[2 3 4 5]
[6 7 8 8]], shape=(2, 4), dtype=int32)
[[2 3 4 5]
[6 7 8 8]]
(2, 4)
###Markdown
Manipulating the data types and shape of a tensor
###Code
# Cambiando el tipo de datos del tensor
# ==============================================
print(matriz_tf.dtype)
matriz_tf_n = tf.cast(matriz_tf, tf.int64)
print(matriz_tf_n.dtype)
# Transponiendo un tensor
# =================================================
t = tf.random.uniform(shape=(3, 5))
print(t, end = '\n'*2)
t_tr = tf.transpose(t)
print(t_tr, end = '\n'*2)
# Redimensionando un vector
# =====================================
t = tf.zeros((30,))
print(t, end = '\n'*2)
print(t.shape, end = '\n'*3)
t_reshape = tf.reshape(t, shape=(5, 6))
print(t_reshape, end = '\n'*2)
print(t_reshape.shape)
# Removiendo las dimensiones innecesarias
# =====================================================
t = tf.zeros((1, 2, 1, 4, 1))
print(t, end = '\n'*2)
print(t.shape, end = '\n'*3)
t_sqz = tf.squeeze(t, axis=(2, 4))
print(t_sqz, end = '\n'*2)
print(t_sqz.shape, end = '\n'*3)
print(t.shape, ' --> ', t_sqz.shape)
###Output
tf.Tensor(
[[[[[0.]
[0.]
[0.]
[0.]]]
[[[0.]
[0.]
[0.]
[0.]]]]], shape=(1, 2, 1, 4, 1), dtype=float32)
(1, 2, 1, 4, 1)
tf.Tensor(
[[[0. 0. 0. 0.]
[0. 0. 0. 0.]]], shape=(1, 2, 4), dtype=float32)
(1, 2, 4)
(1, 2, 1, 4, 1) --> (1, 2, 4)
###Markdown
Mathematical operations on tensors
###Code
# Inicializando dos tensores con numeros aleatorios
# =============================================================
tf.random.set_seed(1)
t1 = tf.random.uniform(shape=(5, 2), minval=-1.0, maxval=1.0)
t2 = tf.random.normal(shape=(5, 2), mean=0.0, stddev=1.0)
print(t1, '\n'*2, t2)
# Producto tipo element-wise: elemento a elemento
# =================================================
t3 = tf.multiply(t1, t2).numpy()
print(t3)
# Promedio segun el eje
# ================================================
t4 = tf.math.reduce_mean(t1, axis=None)
print(t4, end = '\n'*3)
t4 = tf.math.reduce_mean(t1, axis=0)
print(t4, end = '\n'*3)
t4 = tf.math.reduce_mean(t1, axis=1)
print(t4, end = '\n'*3)
# suma segun el eje
# =================================================
t4 = tf.math.reduce_sum(t1, axis=None)
print('Suma de todos los elementos:', t4, end = '\n'*3)
t4 = tf.math.reduce_sum(t1, axis=0)
print('Suma de los elementos por columnas:', t4, end = '\n'*3)
t4 = tf.math.reduce_sum(t1, axis=1)
print('Suma de los elementos por filas:', t4, end = '\n'*3)
# Desviacion estandar segun el eje
# =================================================
t4 = tf.math.reduce_std(t1, axis=None)
print('Suma de todos los elementos:', t4, end = '\n'*3)
t4 = tf.math.reduce_std(t1, axis=0)
print('Suma de los elementos por columnas:', t4, end = '\n'*3)
t4 = tf.math.reduce_std(t1, axis=1)
print('Suma de los elementos por filas:', t4, end = '\n'*3)
# Producto entre matrices
# ===========================================
t5 = tf.linalg.matmul(t1, t2, transpose_b=True)
print(t5.numpy(), end = '\n'*2)
# Producto entre matrices
# ===========================================
t6 = tf.linalg.matmul(t1, t2, transpose_a=True)
print(t6.numpy())
# Calculando la norma de un vector
# ==========================================
norm_t1 = tf.norm(t1, ord=2, axis=None).numpy()
print(norm_t1, end='\n'*2)
norm_t1 = tf.norm(t1, ord=2, axis=0).numpy()
print(norm_t1, end='\n'*2)
norm_t1 = tf.norm(t1, ord=2, axis=1).numpy()
print(norm_t1, end='\n'*2)
###Output
1.5818709
[1.303 0.897]
[1.046 0.293 0.504 0.96 0.383]
###Markdown
Splitting, stacking and concatenating tensors
###Code
# Datos a trabajar
# =======================================
tf.random.set_seed(1)
t = tf.random.uniform((6,))
print(t.numpy())
# Partiendo el tensor en un numero determinado de piezas
# ======================================================
t_splits = tf.split(t, num_or_size_splits = 3)
[item.numpy() for item in t_splits]
# Partiendo el tensor segun los tamaños definidos
# ======================================================
tf.random.set_seed(1)
t = tf.random.uniform((6,))
print(t.numpy())
t_splits = tf.split(t, num_or_size_splits=[3, 3])
[item.numpy() for item in t_splits]
print(matriz_tf.numpy())
# m_splits = tf.split(t, num_or_size_splits = 0, axis = 1)
matriz_n = tf.reshape(matriz_tf, shape = (8,))
print(matriz_n.numpy())
m_splits = tf.split(matriz_n, num_or_size_splits = 2)
[item.numpy() for item in m_splits]
# Concatenando tensores
# =========================================
A = tf.ones((3,))
print(A, end ='\n'*2)
B = tf.zeros((2,))
print(B, end ='\n'*2)
C = tf.concat([A, B], axis=0)
print(C.numpy())
# Apilando tensores
# =========================================
A = tf.ones((3,))
print(A, end ='\n'*2)
B = tf.zeros((3,))
print(B, end ='\n'*2)
S = tf.stack([A, B], axis=1)
print(S.numpy())
###Output
tf.Tensor([1. 1. 1.], shape=(3,), dtype=float32)
tf.Tensor([0. 0. 0.], shape=(3,), dtype=float32)
[[1. 0.]
[1. 0.]
[1. 0.]]
###Markdown
More functions and tools at: https://www.tensorflow.org/versions/r2.0/api_docs/python/tf
EXERCISES
1. Create two tensors of shape (4, 6) with random numbers drawn from a standard normal distribution with mean 0.0 and std 1.0. Print them.
2. Multiply the previous tensors in the two ways seen, element-wise and matrix product, performing the two transpositions seen.
3. Compute the means, standard deviations and sums of the elements of the two tensors.
4. Reshape the tensors so that they are now of rank 1.
5. Compute the cosine of the elements of the tensors (check the documentation).
6. Create a rank-1 tensor with 1001 elements, starting at 0 and going up to 30.
7. Loop over the elements of the tensor with a for and print them.
8. Compute the factorials of the numbers from 1 to 30 using the tensor from point 6. Print the result as a DataFrame.
Building input pipelines with tf.data: the TensorFlow Dataset API
When we train a deep NN model, we usually train it incrementally using an iterative optimization algorithm such as stochastic gradient descent, as we have seen in previous classes. The Keras API is a TensorFlow wrapper for building NN models, and it provides a method, `.fit()`, for training them. When the training dataset is fairly small and can be loaded into memory as a tensor, TensorFlow models (built with the Keras API) can use this tensor directly through their .fit() method for training. In typical use cases, however, when the dataset is too large to fit into the computer's memory, we will need to load the data from the main storage device (for example, the hard drive or solid-state drive) in chunks, that is, batch by batch. In addition, we may need to build a data-processing pipeline to apply certain transformations and preprocessing steps to our data, such as mean centering, scaling, or adding noise to augment the training procedure and avoid overfitting. Applying the preprocessing functions manually every time can be quite cumbersome. Fortunately, TensorFlow provides a special class for building efficient and convenient preprocessing pipelines. In this part we will see an overview of the different methods for building a TensorFlow dataset, including dataset transformations and common preprocessing steps.
Creating a TensorFlow Dataset from existing tensors
If the data already exists as a tensor object, a Python list, or a NumPy array, we can easily create a dataset using the `tf.data.Dataset.from_tensor_slices()` function. This function returns an object of the Dataset class, which we can use to iterate through the individual elements of the input dataset:
###Code
# Ejemplo con listas
# ======================================================
a = [1.2, 3.4, 7.5, 4.1, 5.0, 1.0]
ds = tf.data.Dataset.from_tensor_slices(a)
print(ds)
for item in ds:
print(item)
###Output
_____no_output_____
###Markdown
If we want to create batches from this dataset, with a desired batch size of 3, we can do it as follows:
###Code
# Creando lotes de 3 elementos cada uno
# ===================================================
ds_batch = ds.batch(3)
for i, elem in enumerate(ds_batch, 1):
print(f'batch {i}:', elem.numpy())
###Output
_____no_output_____
###Markdown
This will create two batches from this dataset: the first three elements go into batch #1 and the remaining elements into batch #2. The `.batch()` method has an optional argument, `drop_remainder`, which is useful when the number of elements in the tensor is not divisible by the desired batch size. The default value of `drop_remainder` is `False`.
Combining two tensors into a Dataset
Often we may have the data in two (or possibly more) tensors, for example one tensor for features and one tensor for labels. In such cases we need to build a dataset that combines these tensors, which will allow us to retrieve the elements of these tensors as tuples. Suppose we have two tensors, t_x and t_y. The tensor t_x holds our feature values, each of size 3, and t_y stores the class labels. For this example, we first create these two tensors as follows:
###Code
# Datos de ejemplo
# ============================================
tf.random.set_seed(1)
t_x = tf.random.uniform([4, 3], dtype=tf.float32)
t_y = tf.range(4)
print(t_x)
print(t_y)
# Uniendo los dos tensores en un Dataset
# ============================================
ds_x = tf.data.Dataset.from_tensor_slices(t_x)
ds_y = tf.data.Dataset.from_tensor_slices(t_y)
ds_joint = tf.data.Dataset.zip((ds_x, ds_y))
for example in ds_joint:
print('x:', example[0].numpy(),' y:', example[1].numpy())
ds_joint = tf.data.Dataset.from_tensor_slices((t_x, t_y))
for example in ds_joint:
#print(example)
print('x:', example[0].numpy(), ' y:', example[1].numpy())
# Operacion sobre el dataset generado
# ====================================================
ds_trans = ds_joint.map(lambda x, y: (x*2-1.0, y))
for example in ds_trans:
print(' x:', example[0].numpy(), ' y:', example[1].numpy())
###Output
x: [-0.67 0.803 0.262] y: 0
x: [-0.131 -0.416 0.285] y: 1
x: [ 0.952 -0.13 0.32 ] y: 2
x: [0.21 0.273 0.229] y: 3
###Markdown
Shuffle, batch and repeat
To train an NN model using stochastic gradient descent optimization, it is important to feed the training data as randomly shuffled batches. We have already seen above how to create batches by calling the `.batch()` method of a dataset object. Now, in addition to creating batches, we are going to shuffle and re-iterate over the datasets:
###Code
# Mezclando los elementos de un tensor
# ===================================================
tf.random.set_seed(1)
ds = ds_joint.shuffle(buffer_size = len(t_x))
for example in ds:
print(' x:', example[0].numpy(), ' y:', example[1].numpy())
###Output
x: [0.976 0.435 0.66 ] y: 2
x: [0.435 0.292 0.643] y: 1
x: [0.165 0.901 0.631] y: 0
x: [0.605 0.637 0.614] y: 3
###Markdown
where the rows are shuffled without losing the one-to-one correspondence between the entries in x and y. The `.shuffle()` method requires an argument called `buffer_size`, which determines how many elements of the dataset are grouped together before shuffling. The elements of the buffer are retrieved at random, and their place in the buffer is given to the next elements of the original (unshuffled) dataset. Therefore, if we choose a small buffer size, we may not shuffle the dataset perfectly. If the dataset is small, choosing a relatively small buffer size can negatively affect the predictive performance of the NN, since the dataset may not be fully randomized. In practice, however, it usually has no noticeable effect when working with relatively large datasets, which is common in deep learning. Alternatively, to ensure complete randomization during each epoch, we can simply choose a buffer size equal to the number of training examples, as in the code above (`buffer_size = len(t_x)`). Now let's create batches from the ds_joint dataset:
###Code
ds = ds_joint.batch(batch_size = 3, drop_remainder = False)
print(ds)
batch_x, batch_y = next(iter(ds))
print('Batch-x:\n', batch_x.numpy())
print('Batch-y: ', batch_y.numpy())
###Output
Batch-y: [0 1 2]
###Markdown
In addition, when training a model for multiple epochs, we need to shuffle and iterate over the dataset for the desired number of epochs. So let's repeat the batched dataset twice:
###Code
ds = ds_joint.batch(3).repeat(count = 2)
for i,(batch_x, batch_y) in enumerate(ds):
print(i, batch_x.numpy(), batch_y.numpy(), end = '\n'*2)
###Output
0 [[0.165 0.901 0.631]
[0.435 0.292 0.643]
[0.976 0.435 0.66 ]] [0 1 2]
1 [[0.605 0.637 0.614]] [3]
2 [[0.165 0.901 0.631]
[0.435 0.292 0.643]
[0.976 0.435 0.66 ]] [0 1 2]
3 [[0.605 0.637 0.614]] [3]
###Markdown
This results in two copies of each batch. If we change the order of these two operations, that is, batch first and then repeat, the results will be different:
###Code
ds = ds_joint.repeat(count=2).batch(3)
for i,(batch_x, batch_y) in enumerate(ds):
print(i, batch_x.numpy(), batch_y.numpy(), end = '\n'*2)
###Output
0 [[0.165 0.901 0.631]
[0.435 0.292 0.643]
[0.976 0.435 0.66 ]] [0 1 2]
1 [[0.605 0.637 0.614]
[0.165 0.901 0.631]
[0.435 0.292 0.643]] [3 0 1]
2 [[0.976 0.435 0.66 ]
[0.605 0.637 0.614]] [2 3]
###Markdown
Finally, to better understand how these three operations (batch, shuffle and repeat) behave, let's experiment with them in different orders. First, we will combine the operations in the following order: (1) shuffle, (2) batch and (3) repeat:
###Code
# Orden 1: shuffle -> batch -> repeat
tf.random.set_seed(1)
ds = ds_joint.shuffle(4).batch(2).repeat(3)
for i,(batch_x, batch_y) in enumerate(ds):
print(i, batch_x, batch_y.numpy(), end = '\n'*2)
# Order 2: batch -> shuffle -> repeat
tf.random.set_seed(1)
ds = ds_joint.batch(2).shuffle(4).repeat(3)
for i,(batch_x, batch_y) in enumerate(ds):
print(i, batch_x, batch_y.numpy(), end = '\n'*2)
# Order 3: batch -> repeat -> shuffle
tf.random.set_seed(1)
ds = ds_joint.batch(2).repeat(3).shuffle(4)
for i,(batch_x, batch_y) in enumerate(ds):
print(i, batch_x, batch_y.numpy(), end = '\n'*2)
###Output
0 tf.Tensor(
[[0.165 0.901 0.631]
[0.435 0.292 0.643]], shape=(2, 3), dtype=float32) [0 1]
1 tf.Tensor(
[[0.165 0.901 0.631]
[0.435 0.292 0.643]], shape=(2, 3), dtype=float32) [0 1]
2 tf.Tensor(
[[0.976 0.435 0.66 ]
[0.605 0.637 0.614]], shape=(2, 3), dtype=float32) [2 3]
3 tf.Tensor(
[[0.976 0.435 0.66 ]
[0.605 0.637 0.614]], shape=(2, 3), dtype=float32) [2 3]
4 tf.Tensor(
[[0.165 0.901 0.631]
[0.435 0.292 0.643]], shape=(2, 3), dtype=float32) [0 1]
5 tf.Tensor(
[[0.976 0.435 0.66 ]
[0.605 0.637 0.614]], shape=(2, 3), dtype=float32) [2 3]
###Markdown
Fetching available datasets from the tensorflow_datasets library. The tensorflow_datasets library provides a nice collection of freely available datasets for training or evaluating deep learning models. The datasets are well formatted and come with informative descriptions, including the format of the features and labels and their type and dimensionality, as well as a BibTeX citation of the original paper that introduced the dataset. Another advantage is that all of these datasets are prepared and ready to use as tf.data.Dataset objects, so all the functions we covered can be used directly:
###Code
# pip install tensorflow-datasets
import tensorflow_datasets as tfds
print(len(tfds.list_builders()))
print(tfds.list_builders()[:5])
# Working with the MNIST dataset
# ===============================================
mnist, mnist_info = tfds.load('mnist', with_info=True, shuffle_files=False)
print(mnist_info)
print(mnist.keys())
ds_train = mnist['train']
ds_train = ds_train.map(lambda item:(item['image'], item['label']))
ds_train = ds_train.batch(10)
batch = next(iter(ds_train))
print(batch[0].shape, batch[1])
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15, 6))
for i,(image,label) in enumerate(zip(batch[0], batch[1])):
ax = fig.add_subplot(2, 5, i+1)
ax.set_xticks([]); ax.set_yticks([])
ax.imshow(image[:, :, 0], cmap='gray_r')
ax.set_title('{}'.format(label), size=15)
plt.show()
###Output
_____no_output_____
###Markdown
Building an NN model in TensorFlow. The TensorFlow Keras API (tf.keras). Keras is a high-level NN API and was originally developed to run on top of other libraries such as TensorFlow and Theano. Keras provides a user-friendly, modular programming interface that allows prototyping and building complex models in just a few lines of code. Keras can be installed independently from PyPI and then configured to use TensorFlow as its backend engine. Keras is tightly integrated into TensorFlow and its modules are accessible through tf.keras. In TensorFlow 2.0, tf.keras has become the primary and recommended approach for implementing models. This has the advantage that it supports TensorFlow-specific functionality, such as dataset pipelines built with tf.data. The Keras API (tf.keras) makes building an NN model extremely easy. The most commonly used approach for building an NN in TensorFlow is through `tf.keras.Sequential()`, which allows stacking layers to form a network. A stack of layers can be given as a Python list to a model defined as tf.keras.Sequential(). Alternatively, the layers can be added one by one using the .add() method. In addition, tf.keras allows us to define a model by subclassing tf.keras.Model. This gives us more control over the forward pass by defining the call() method for our model class, specifying the forward pass explicitly. Finally, models built using the tf.keras API can be compiled and trained via the .compile() and .fit() methods (a short sketch of this workflow is shown below). Building a linear regression model
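Before building the regression model, here is a minimal sketch of the `Sequential()` / `.add()` / `.compile()` / `.fit()` workflow described above. The layer sizes and input shape are arbitrary placeholders, and the `fit()` call is left commented out because no training data is defined at this point:

```python
import tensorflow as tf

# Option 1: pass a list of layers to Sequential()
seq_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(1)
])

# Option 2: add the layers one by one with .add()
seq_model_v2 = tf.keras.Sequential()
seq_model_v2.add(tf.keras.layers.Dense(16, activation='relu', input_shape=(3,)))
seq_model_v2.add(tf.keras.layers.Dense(1))

# Compile and (later) train with .compile() and .fit()
seq_model.compile(optimizer='sgd', loss='mse')
# seq_model.fit(x_data, y_data, epochs=5)  # x_data, y_data: placeholder training tensors
```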
###Code
X_train = np.arange(10).reshape((10, 1))
y_train = np.array([1.0, 1.3, 3.1, 2.0, 5.0, 6.3, 6.6, 7.4, 8.0, 9.0])
X_train, y_train
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(X_train, y_train, 'o', markersize=10)
ax.set_xlabel('x')
ax.set_ylabel('y')
import tensorflow as tf
X_train_norm = (X_train - np.mean(X_train))/np.std(X_train)
ds_train_orig = tf.data.Dataset.from_tensor_slices((tf.cast(X_train_norm, tf.float32),tf.cast(y_train, tf.float32)))
for i in ds_train_orig:
print(i[0].numpy(), i[1].numpy())
###Output
[-1.5666989] 1.0
[-1.2185436] 1.3
[-0.87038827] 3.1
[-0.52223295] 2.0
[-0.17407766] 5.0
[0.17407766] 6.3
[0.52223295] 6.6
[0.87038827] 7.4
[1.2185436] 8.0
[1.5666989] 9.0
###Markdown
Now, we can define our linear regression model as $z = wx + b$. Here, we are going to use the Keras API. `tf.keras` provides predefined layers for building complex NN models, but to start, we will build a model from scratch:
###Code
class MyModel(tf.keras.Model):
def __init__(self):
super(MyModel, self).__init__()
self.w = tf.Variable(0.0, name='weight')
self.b = tf.Variable(0.0, name='bias')
def call(self, x):
return self.w * x + self.b
model = MyModel()
model.build(input_shape=(None, 1))
model.summary()
###Output
Model: "my_model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
|
tsp.ipynb | ###Markdown
Traveling Salesman Problem We are given a [complete graph](https://en.wikipedia.org/wiki/Complete_graph), which means that every pair of distinct vertices is connected by a unique edge. Each edge has a traveling cost associated with it. We need to find the shortest route that goes through all the nodes and comes back to the original node. Representation We can represent traveling costs as a simple matrix.
###Code
i = float('inf')
t = [
[i, 3, 1, 8],
[3, i, 4, 4],
[1, 4, i, 7],
[8, 4, 7, i],
]
###Output
_____no_output_____
###Markdown
This represents 4 nodes (0 through 3), where the cost of traveling from node 0 to node 3 is
###Code
t[0][3]
t[1][1]
###Output
_____no_output_____
###Markdown
Brute force solution
###Code
from itertools import permutations
def total_cost(matrix, route, loop=True):
# we are going from node a to node b in each step
a = list(route)
b = a[1:] + [a[0]]
# calculate the route cost
cost = 0
for ii, jj in zip(a, b):
cost += matrix[ii][jj]
if not loop:
# exclude path back to start
cost -= matrix[route[0]][route[-1]]
return cost
def brute_force(matrix, start=0, stop=None, loop=True):
if not stop:
stop = len(matrix)
current_min = float('inf')
current_route = []
for route in permutations(range(start, stop)):
route_cost = total_cost(matrix, route, loop)
# check if we got new min route
if route_cost < current_min:
current_min = route_cost
current_route = route
return list(current_route), current_min
###Output
_____no_output_____
###Markdown
Greedy solutionWhat if we just always pick the cheapest available edge from the current node? We start with the first node and repeatedly move to the cheapest node that we haven't visited yet.
###Code
def get_min_node(nodes, visited_set):
min_cost = pow(10, 10)
node_position = None
for position, cost in enumerate(nodes):
# make sure we haven't visited the node yet, we are not trying to go to
# the same node, and current cost is the minimum that we've seen so far
if position in visited_set or isinstance(cost, str) or cost >= min_cost:
continue
min_cost = cost
node_position = position
return node_position
def greedy_recursion(matrix, route):
current_node = route[-1]
min_node = get_min_node(matrix[current_node], set(route))
if min_node is None:
return route
return greedy_recursion(matrix, route + [min_node])
def greedy(matrix, route=None):
"""
    It doesn't return an optimal route, but it is close to the brute force answer.
    The benefit of the algorithm is relative speed and simplicity.
"""
result = greedy_recursion(matrix, route or [0])
    return result, total_cost(matrix, result)
###Output
_____no_output_____
###Markdown
Branch and Bound Implementation was inspired by [this](http://galyautdinov.ru/post/zadacha-kommivoyazhera) post.
###Code
from random import randint
def random_matrix(n, max_distance=10):
return [[ii == jj and float('inf') or randint(1, max_distance) for ii in range(n)] for jj in range(n)]
def copy_matrix(m):
return [[ii for ii in jj] for jj in m]
def min_in(matrix, direction='row'):
vector = []
d = range(len(matrix))
for ii in d:
min_element = float('inf')
for jj in d:
# switch between row and column access
item = direction == 'row' and matrix[ii][jj] or matrix[jj][ii]
if min_element > item:
min_element = item
vector.append(min_element)
return vector
def subtract(matrix, vector, direction='row'):
d = range(len(matrix))
for ii in d:
if vector[ii] == float('inf'):
continue
for jj in d:
if direction == 'row':
matrix[ii][jj] -= vector[ii]
else:
matrix[jj][ii] -= vector[ii]
return matrix
def reduse(matrix):
v1 = min_in(matrix)
redused_rows = subtract(matrix, v1)
v2 = min_in(redused_rows, 'column')
return subtract(redused_rows, v2, 'column')
def find_optimal_segment(matrix):
size = range(len(matrix))
max_value = -1
max_ii = max_jj = -1
for ii in size:
for jj in size:
if matrix[ii][jj] != 0:
continue
# calculate max value for cells that have zeros
min_in_row = min([e for pos, e in enumerate(matrix[ii]) if pos != jj])
min_in_column = min([e[jj] for pos, e in enumerate(matrix) if pos != ii])
current_max = min_in_row + min_in_column
if max_value < current_max:
max_value = current_max
max_ii = ii
max_jj = jj
# close the route back
matrix[max_jj][max_ii] = float('inf')
# "remove" row and column
for jj in size:
matrix[max_ii][jj] = float('inf')
for ii in size:
matrix[ii][max_jj] = float('inf')
return matrix, max_ii, max_jj
def branch_and_bound(matrix):
# reduse modifies matrix in place
# so we copy the original in order
# to not mess it up
m = copy_matrix(matrix)
# initialize variables
path_dict = {}
distance = 0
size = range(len(matrix))
# find optimal pairs
for i in size:
redused = reduse(m)
m, a, b = find_optimal_segment(redused)
path_dict[a] = b
distance += matrix[a][b]
# arrange segments in walking order
path = []
for i in size:
# start with 0 if it is a new path else reassign to second element
a = path and b or i
path.append(a)
b = path_dict[a]
return path, distance
###Output
_____no_output_____
###Markdown
Comparing algorithms
###Code
m = random_matrix(5)
m
branch_and_bound(m)
brute_force(m)
greedy(m)
###Output
_____no_output_____
###Markdown
Look ahead
###Code
test = random_matrix(20)
# define a small matrix for the comparisons below (brute_force is only tractable for small n)
t2 = random_matrix(8)
def sub_route_cost(matrix, route, loop=True):
# we are going from node a to node b in each step
a = list(route)
b = a[1:] + [a[0]]
# calculate the route cost
cost = 0
for ii, jj in zip(a, b):
cost += matrix[ii][jj]
if not loop:
# exclude path back to start
cost -= matrix[route[0]][route[-1]]
return cost
def look_ahead(matrix, k):
size = len(matrix)
if k >= size:
# in this case it should produce similar
# result to brute force
k = size -1
path = []
nodes = list(range(len(matrix)))
# start with the first node
path.append(nodes.pop(0))
while nodes:
current_min = float('inf')
min_route = []
for choice in permutations(nodes, k):
current_route = [path[-1]] + list(choice)
route_cost = sub_route_cost(matrix, current_route, loop=False)
if route_cost < current_min:
current_min = route_cost
min_route = list(choice)
#print('route', min_route, 'cost', current_min)
path += min_route
for ii in min_route:
nodes.remove(ii)
if len(nodes) == 1:
# Nowhere to go from here count the last step
path.append(nodes.pop(0))
#print('last step')
return path, sub_route_cost(matrix, path)
t2
look_ahead(t2, 2)
brute_force(t2)
look_ahead(t2, 4)
look_ahead(t2, 3)
###Output
_____no_output_____
###Markdown
Nearest Neighbor for the TSP
###Code
import numpy as np

def nearest_neighbor(Edges, weights):
"""
    Edges: array of the graph's edges
    weights: distance of each edge
"""
    # Find the lowest-weight edge to start the tour
next_edge = np.argmin(weights)
T = list(Edges[next_edge, :])
ordered_el = [Edges[next_edge]]
S = weights[next_edge]
E = np.delete(Edges, next_edge, axis=0)
d = np.delete(weights, next_edge, axis=0)
while len(E) > 0:
        # Find the minimum-distance edge connected to either end of the tour
next_edge = np.argmin(np.where(np.any(np.isin(E, [T[0], T[-1]]), axis=1), d, np.inf))
        # Check whether the edge attaches to the start or the end of the tour
a, b = E[next_edge]
if a == T[0]:
T.insert(0, b)
elif b == T[0]:
T.insert(0, a)
b, a = a, b
elif a == T[-1]:
T.append(b)
else:
T.append(a)
b, a = a, b
ordered_el.append(E[next_edge])
S += d[next_edge]
        # Discard edges that are no longer valid for the problem
mask = ~np.logical_or(np.any(E == a, axis=1), np.all(np.isin(E, T), axis=1))
E = E[mask]
d = d[mask]
    # Close the tour
a, b = np.sort([T[0], T[-1]])
ordered_el.append([a, b])
for edg, dist in zip(Edges, weights):
if edg[0] == a and edg[1] == b:
S += dist
return T, S, ordered_el
def edges_from_tour(T):
return [(x, y) for x, y in zip(T, T[1:])] + [(T[-1], T[0])]
###Output
_____no_output_____
###Markdown
Solution for a particular n
###Code
from itertools import combinations
import time

import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from matplotlib import animation

n = 10
C = np.loadtxt("data/datos_unicos.txt", max_rows=n)
E = np.array(list(combinations(range(n), 2)))
d = np.linalg.norm(C[E[:, 0], :] - C[E[:, 1], :], axis=1)
t = time.perf_counter()
T, S, ordered_el = nearest_neighbor(E, d)
t = time.perf_counter() - t
print("Numero de ciudades:\t{}\nCosto:\t\t\t{}\nTiempo:\t\t\t{} s".format(n, S, t))
G = nx.Graph()
G.add_weighted_edges_from(zip(E[:, 0], E[:, 1], d))
locs = dict(zip(range(n), C[:n, :]))
edgs = edges_from_tour(T)
nx.draw_networkx(G, locs, edgelist=edgs, node_size=200)
plt.show()
fig, ax = plt.subplots(figsize=(10, 10))
nx.draw_networkx(G, locs, edgelist=[], node_size=200, ax=ax)
def anim_tour(i):
nx.draw_networkx_edges(G, locs, [ordered_el[i]], ax=ax)
#ax.set_aspect('equal')
anim = animation.FuncAnimation(fig, anim_tour, range(len(edgs)), interval=1000)
plt.show()
###Output
_____no_output_____
###Markdown
Comparison against the exact solution
###Code
# N = [5, 10, 20, 30, 40, 50, 75, 100, 125, 150, 200, 250, 300, 400, 500, 600, 634]
# bench_NN = np.zeros([len(N), 3])
# for n, row in zip(N, bench_NN):
# C = np.loadtxt("data/datos_unicos.txt", max_rows=n)
# E = np.array(list(combinations(range(n), 2)))
# d = np.linalg.norm(C[E[:, 0], :] - C[E[:, 1], :], axis=1)
# t = time.perf_counter()
# _, S, oredered_el = nearest_neighbor(E, d)
# t = time.perf_counter() - t
# row[:] =[n, t, S]
# np.savetxt("data/benchmark_NN.txt", bench_NN)
bench_NN = np.loadtxt("data/benchmark_NN.txt")
bench_dantzig = np.loadtxt("data/benchmark_dantzig.txt")
plt.plot(bench_dantzig[:, 0], bench_dantzig[:, 1])
plt.axis([5, 300, 0, 400])
plt.ylabel("Tiempo [s]")
plt.xlabel("Número de Ciudades")
plt.title("Tiempo de Ejecución con PLE")
plt.savefig("img/res_d.png", dpi=300)
plt.show()
plt.plot(bench_NN[:, 0], bench_NN[:, 1])
plt.axis([5, 634, 0, 10])
plt.ylabel("Tiempo [s]")
plt.xlabel("Número de Ciudades")
plt.title("Tiempo de Ejecución con NN")
plt.savefig("img/res_nn.png", dpi=300)
plt.show()
plt.plot(bench_dantzig[:, 0], bench_dantzig[:, 1] - bench_NN[:13, 1])
plt.axis([5, 300, 0, 400])
plt.ylabel("Tiempo [s]")
plt.xlabel("Número de Ciudades")
plt.title("Diferencia en Tiempo de Ejecución")
plt.savefig("img/diff_tiempo.png", dpi=300)
plt.show()
plt.plot(bench_dantzig[:, 0], bench_dantzig[:, 2], label="ILP")
plt.plot(bench_NN[:13, 0], bench_NN[:13, 2], label="Nearest Neighbor")
plt.axis([5, 300, 0, 7000])
plt.ylabel("Cost")
plt.xlabel("Number of Cities")
plt.title("Solution Cost")
plt.legend()
plt.savefig("img/cto_d_nn.png", dpi=300)
plt.show()
###Output
_____no_output_____ |
notebooks/ipca.ipynb | ###Markdown
Iterated PCA (IPCA)
###Code
import os, sys
sys.path.append(os.path.abspath('../src'))
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.decomposition
import xpca
first_year = 2008; last_year = 2020
df = pd.read_parquet('../data/equity_indices.parquet')
df = df[str(first_year):str(last_year)]
df.head()
df.tail()
model = sklearn.decomposition.PCA()
model.fit(df)
Z = model.transform(df)
plt.plot(Z[:,0]);
Z_periods_classical = []
for period in [str(x) for x in range(first_year, last_year+1)]:
df_period = df.loc[period]
model = sklearn.decomposition.PCA()
model.fit(df_period)
Z_period = model.transform(df_period)
Z_periods_classical.append(Z_period)
Z_periods_classical = np.vstack(Z_periods_classical)
fig = plt.figure(figsize=(12, 12))
for i in range(9):
ax = fig.add_subplot(3, 3, i+1)
ax.plot(Z[:,i], Z_periods_classical[:,i], '.')
ax.set_title(f'PC{i+1}')
ax.axis('equal')
ax.tick_params(axis='both', which='major', labelsize=8)
ax.tick_params(axis='both', which='minor', labelsize=8)
ax.tick_params(axis='x', rotation=45)
fig.tight_layout();
Z_periods_ipca = []
model = xpca.IPCA()
for period in [str(x) for x in range(first_year, last_year+1)]:
df_period = df.loc[period]
model.fit(df_period)
Z_period = model.transform(df_period)
Z_periods_ipca.append(Z_period)
Z_periods_ipca = np.vstack(Z_periods_ipca)
fig = plt.figure(figsize=(12, 12))
for i in range(9):
ax = fig.add_subplot(3, 3, i+1)
ax.plot(Z[:,i], Z_periods_ipca[:,i], '.')
ax.set_title(f'PC{i+1}')
ax.axis('equal')
ax.tick_params(axis='both', which='major', labelsize=8)
ax.tick_params(axis='both', which='minor', labelsize=8)
ax.tick_params(axis='x', rotation=45)
fig.tight_layout();
###Output
_____no_output_____ |
Python/7_sentiment_analysis/Sentiment Analysis - Economic News Sentiment with BERT Fine-Tuning.ipynb | ###Markdown
Predicting Economic News Sentiment with BERT on TF Hub If you’ve been following Natural Language Processing over the past year, you’ve probably heard of BERT: Bidirectional Encoder Representations from Transformers. It’s a neural network architecture designed by Google researchers that’s totally transformed what’s state-of-the-art for NLP tasks, like text classification, translation, summarization, and question answering.Now that BERT's been added to [TF Hub](https://www.tensorflow.org/hub) as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMO and GloVE. Alternatively, [finetuning](http://wiki.fast.ai/index.php/Fine_tuning) BERT can provide both an accuracy boost and faster training time in many cases.Here, we'll train a model to predict whether a piece of economic news is positive or negative using BERT in Tensorflow with tf hub. Some code was adapted from [this colab notebook](https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb). Let's get started!
###Code
!pip install --upgrade tensorflow
!pip uninstall protobuf -y
!pip install protobuf
!pip install tensorflow_hub
!pip install bert-tensorflow
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime
###Output
/usr/local/anaconda/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
WARNING: Logging before flag parsing goes to stderr.
W0305 15:30:17.590888 140018860873536 __init__.py:56] Some hub symbols are not available because TensorFlow version is less than 1.14
###Markdown
In addition to the standard libraries we imported above, we'll need to install BERT's python package.
###Code
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
###Output
_____no_output_____
###Markdown
Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist).
###Code
# Set the output directory for saving model file
# Optionally, set a GCP bucket location
OUTPUT_DIR = 'temp_out'#@param {type:"string"}
#@markdown Whether or not to clear/delete the directory and create a new one
DO_DELETE = True #@param {type:"boolean"}
#@markdown Set USE_BUCKET and BUCKET if you want to (optionally) store model output on GCP bucket.
USE_BUCKET = False #@param {type:"boolean"}
BUCKET = 'economic_news_sentiment' #@param {type:"string"}
if USE_BUCKET:
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)
from google.colab import auth
auth.authenticate_user()
if DO_DELETE:
try:
tf.gfile.DeleteRecursively(OUTPUT_DIR)
except:
# Doesn't matter if the directory didn't exist
pass
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
###Output
***** Model output directory: temp_out *****
###Markdown
Data First, let's load the economic news sentiment dataset from a local CSV file. The loading code below was adapted from the IMDB data-loading example in [this Tensorflow tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub).
###Code
# from google.colab import drive
# drive.mount('/content/gdrive')
import os
os.getcwd()
from tensorflow import keras
import os
import re
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
# data = load_directory_data(os.path.join(directory, "economic_sentiment_data.csv"))
data = pd.read_csv(os.path.join(directory, "economic_sentiment_data.csv"))
data = data[['sentence','sentiment','polarity']]
print(data.shape)
return data
# # Merge positive and negative examples, add a polarity column and shuffle.
# def load_dataset(directory):
# data_df = load_directory_data(os.path.join(directory, "economic_sentiment_data.csv"))
# return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
# dataset = tf.keras.utils.get_file(
# fname="Full-Economic-News-DFE-839861.csv",
# origin="https://d1p17r2m4rzlbo.cloudfront.net/wp-content/uploads/2016/03/Full-Economic-News-DFE-839861.csv",
# extract=False)
# print(os.path.dirname(dataset))
full_data_df = load_directory_data(os.path.join('../../data/','raw'))
train_df = full_data_df.iloc[0:3000]
test_df = full_data_df.iloc[3000:]
print(train_df.shape)
print(test_df.shape)
return train_df, test_df
###Output
_____no_output_____
###Markdown
The data is split into 3,000 training examples and the remainder for testing; the optional sub-sampling lines below are left commented out.
###Code
train, test = download_and_load_datasets()
# train = train.sample(5000)
# test = test.sample(5000)
train.columns
###Output
_____no_output_____
###Markdown
For us, our input data is the 'sentence' column and our label is the 'polarity' column (0, 1 for negative and positive, respectively)
###Code
DATA_COLUMN = 'sentence'
LABEL_COLUMN = 'polarity'
# label_list is the list of labels, i.e. True, False or 0, 1 or 'dog', 'cat'
label_list = [0, 1]
###Output
_____no_output_____
###Markdown
Data PreprocessingWe'll need to transform our data into a format BERT understands. This involves two steps. First, we create `InputExample`'s using the constructor provided in the BERT library.- `text_a` is the text we want to classify, which in this case, is the `sentence` field in our Dataframe. - `text_b` is used if we're training a model to understand the relationship between sentences (i.e. is `text_b` a translation of `text_a`? Is `text_b` an answer to the question asked by `text_a`?). This doesn't apply to our task, so we can leave `text_b` blank.- `label` is the label for our example, i.e. 0 or 1 in our case
###Code
# Use the InputExample class from BERT's run_classifier code to create examples from the data
train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
###Output
_____no_output_____
###Markdown
Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library):1. Lowercase our text (if we're using a BERT lowercase model)2. Tokenize it (i.e. "sally says hi" -> ["sally", "says", "hi"])3. Break words into WordPieces (i.e. "calling" -> ["call", "ing"])4. Map our words to indexes using a vocab file that BERT provides5. Add special "CLS" and "SEP" tokens (see the [readme](https://github.com/google-research/bert))6. Append "index" and "segment" tokens to each input (see the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf))Happily, we don't have to worry about most of these details. To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module:
###Code
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
###Output
WARNING:tensorflow:From /usr/local/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py:3632: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
###Markdown
Great--we just learned that the BERT model we're using expects lowercase data (that's what stored in tokenization_info["do_lower_case"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces:
###Code
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
#tokenizer.tokenize(pred_sentences[0])
###Output
_____no_output_____
###Markdown
Using our tokenizer, we'll call `run_classifier.convert_examples_to_features` on our InputExamples to convert them into features BERT understands.
###Code
# We'll set sequences to be at most 128 tokens long.
MAX_SEQ_LENGTH = 128
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
###Output
INFO:tensorflow:Writing example 0 of 3000
###Markdown
Creating a modelNow that we've prepared our data, let's focus on building a model. `create_model` does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a piece of economic news is positive or negative). This strategy of using a mostly trained model is called [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning).
###Code
# model_fn_builder actually creates our model function
# using the passed parameters for num_labels, learning_rate, etc.
def model_fn_builder(num_labels, learning_rate, num_train_steps,
num_warmup_steps):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)
# TRAIN and EVAL
if not is_predicting:
(loss, predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
train_op = bert.optimization.create_optimizer(
loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)
# Calculate evaluation metrics.
def metric_fn(label_ids, predicted_labels):
accuracy = tf.metrics.accuracy(label_ids, predicted_labels)
f1_score = tf.contrib.metrics.f1_score(
label_ids,
predicted_labels)
auc = tf.metrics.auc(
label_ids,
predicted_labels)
recall = tf.metrics.recall(
label_ids,
predicted_labels)
precision = tf.metrics.precision(
label_ids,
predicted_labels)
true_pos = tf.metrics.true_positives(
label_ids,
predicted_labels)
true_neg = tf.metrics.true_negatives(
label_ids,
predicted_labels)
false_pos = tf.metrics.false_positives(
label_ids,
predicted_labels)
false_neg = tf.metrics.false_negatives(
label_ids,
predicted_labels)
return {
"eval_accuracy": accuracy,
"f1_score": f1_score,
"auc": auc,
"precision": precision,
"recall": recall,
"true_positives": true_pos,
"true_negatives": true_neg,
"false_positives": false_pos,
"false_negatives": false_neg
}
eval_metrics = metric_fn(label_ids, predicted_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
else:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
eval_metric_ops=eval_metrics)
else:
(predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
predictions = {
'probabilities': log_probs,
'labels': predicted_labels
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
# Return the actual model function in the closure
return model_fn
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,
num_labels):
"""Creates a classification model."""
bert_module = hub.Module(
BERT_MODEL_HUB,
trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# Use "pooled_output" for classification tasks on an entire sentence.
# Use "sequence_outputs" for token-level output.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
  # Create our own output layer to fine-tune on our sentiment data.
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
# Dropout helps prevent overfitting
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
# Convert labels into one-hot encoding
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))
# If we're predicting, we want predicted labels and the probabiltiies.
if is_predicting:
return (predicted_labels, log_probs)
# If we're train/eval, compute loss between predicted and actual label
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, predicted_labels, log_probs)
###Output
_____no_output_____
###Markdown
Next we'll wrap our model function in a `model_fn_builder` function that adapts our model to work for training, evaluation, and prediction.
###Code
# Compute train and warmup steps from batch size
# These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)
BATCH_SIZE = 32
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 6
# Warmup is a period of time where the learning rate
# is small and gradually increases--usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 500
SAVE_SUMMARY_STEPS = 100
# Compute # train and warmup steps from batch size
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
# Specify output directory and number of checkpoint steps to save
run_config = tf.estimator.RunConfig(
model_dir=OUTPUT_DIR,
save_summary_steps=SAVE_SUMMARY_STEPS,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)
model_fn = model_fn_builder(
num_labels=len(label_list),
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": BATCH_SIZE})
###Output
INFO:tensorflow:Using config: {'_model_dir': 'temp_out', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 500, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f585444aac8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
###Markdown
Next we create an input builder function that takes our training feature set (`train_features`) and produces a generator. This is a pretty standard design pattern for working with Tensorflow [Estimators](https://www.tensorflow.org/guide/estimators).
###Code
# Create an input function for training. drop_remainder = True for using TPUs.
train_input_fn = bert.run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=False)
###Output
_____no_output_____
###Markdown
Now we train our model! For me, using a Colab notebook running on Google's GPUs, my training time was about 14 minutes.
###Code
!pip install dask --upgrade
print(f'Beginning Training!')
current_time = datetime.now()
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training took time ", datetime.now() - current_time)
###Output
Beginning Training!
INFO:tensorflow:Calling model_fn.
###Markdown
Now let's use our test data to see how well our model did:
###Code
test_input_fn = run_classifier.input_fn_builder(
features=test_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=True)
estimator.evaluate(input_fn=test_input_fn, steps=None)
###Output
INFO:tensorflow:Calling model_fn.
###Markdown
Now let's write code to make predictions on new sentences:
###Code
def getPrediction(in_sentences):
labels = ["Negative", "Positive"]
input_examples = [run_classifier.InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, "" is just a dummy label
input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]
pred_sentences = [
'''While the RMB in 2017 was broadly in line with economic fundamentals and desirable policies, the current account surplus was moderately
stronger. This reflects structural distortions and policies that cause excessive savings, such as low social spending. Addressing these distortions and the resulting external imbalance would benefit
both China and the global economy.''',
'''Favorable domestic and external conditions reduced capital outflows and exchange rate pressure. The RMB was broadly stable against the basket published by the China Foreign
Exchange Trade System (CFETS) in 2017, but with more fluctuation versus the dollar, and it has appreciated by about 2 percent in real effective terms in the first half of 2018. The current account
surplus continued to decline but, reflecting distortions and policy gaps that encourage excessive savings, the external position for 2017 is assessed as moderately stronger than the level consistent
with medium-term fundamentals and desirable policies, with the exchange rate broadly in line(Appendix I).''',
'''Large outflows and pressure on the exchange rate could resume due to tighter and more volatile global financial conditions, especially a surging dollar. Investor sentiment
towards emerging markets has recently weakened, and this could intensify, potentially spreading to China.''',
'''. Uncoordinated financial and local government regulatory action could have unintended consequences that trigger disorderly repricing of corporate/LGFV credit risks, losses
for investors, and rollover risks for financial institutions''',
'''But a lack of decisive reforms in deleveraging and rebalancing would add to the Faster reform progress could pave the way for higher and
more sustainable GDP growth, already-high stock of vulnerabilities and worsen resource allocation, leading to more rapidly
diminishing returns over the medium term. This scenario also raises the probability of a disruptive adjustment to Chinese demand which would result in a contractionary impulse to the global
economy, as well as spillovers through commodity prices and financial markets. '''
]
predictions = getPrediction(pred_sentences)
###Output
INFO:tensorflow:Writing example 0 of 5
###Markdown
Voila! We have a sentiment classifier!
###Code
predictions
###Output
_____no_output_____ |
ARC-seismic demo.ipynb | ###Markdown
ARC-seismic | demo Anisotropic Reflection Coefficient seismic modelling software.*** 1. IntroductionARC-seismic uses the Zoeppritz equations to compute exact reflection coefficients for anisotropic elastic media across all angles of incidence (0$^{\circ}-$90$^{\circ}$), including post-critical angles. It consists of 3 individual programs:1. **_zoeppritz.py_** - Comprehensive code for modelling reflection and transmission coefficients for isotropic and anisotropic elastic media in three dimensions.2. **_montecarlo.py_** - A code that uses zoeppritz.py to conduct Monte Carlo simulations of reflection coefficients.3. **_kirchhoff.py_** - Kirchhoff program that uses zoeppritz.py to generate synthetic seismograms of the reflected finite frequency wave-field. The zoeppritz.py code has been validated using exact plane-wave reflection coefficient solutions, and the only condition it requires is that both media possess at least **_monoclinic symmetry_**, defined by an elastic tensor of the form:$$\mathbf{C}_{ij} = \begin{bmatrix}C_{11} & C_{12} & C_{13} & & & C_{16}\\ C_{12} & C_{22} & C_{23} & & & C_{26}\\ C_{13} & C_{23} & C_{33} & & & C_{36}\\ & & & C_{44} & C_{45} & \\ & & & C_{45} & C_{55} & \\ C_{16} & C_{26} & C_{36} & & & C_{66}\end{bmatrix}$$*** 2. Background & MotivationARC-seismic was initially developed to use the anisotropic Zoeppritz equations to investigate a seismic data set, the Buzzard Field, where acoustic Full-Waveform Inversion (FWI) fails (see Figure 1). I was initially surprised when I couldn't find a program online that accurately solved the anisotropic Zoeppritz equations for pre- and post-critical incidence angles. It turns out that developing an accurate and robust anisotropic Zoeppritz modelling program is a challenging task$^{\textbf{(2)}}$, requiring the solution to multiple eigenvalue problems and the correct handling of complex numbers. Using ARC-seismic, it was discovered that the failure of acoustic FWI on the Buzzard Field was a consequence of post-critical elastic effects originating from an unconformable top-chalk interface, which resulted in almost zero post-critical energy being reflected. It was also discovered that post-critical reflection coefficients are highly sensitive to weak/moderate anisotropy within the reflecting medium of an interface.For a detailed case study example, please refer to my thesis: *'The Limitations of Acoustic Full-Waveform Inversion in an Elastic World'*. Figure 1. A) Starting reverse time migrated p-wave velocity model. The high chalk velocities have been verified using common-image gathers. B) Acoustic FWI recovered p-wave velocity model. Acoustic FWI clearly fails to recover the high p-wave velocities of the chalk in the starting model.*** 3. TheoryThe Zoeppritz equations$^{\textbf{(1)}}$ govern the proportion of seismic energy that is reflected and transmitted from an interface between two homogenous elastic media (see Figure 2). 
They are formulated by assuming the solution to the wave equation: $$\left ( \rho\ \delta_{ij}\ \frac{\partial^2 }{\partial t^2}\ -\ \mathbf{C}_{ikjm}\ \frac{\partial^2 }{\partial x_{k}\partial x_{m}} \right )u_{j}=0,\ \ \ i=1,2,3,$$takes the form of an elastic plane wave:$$u_{i} = q_{i}\ exp\left [ i\omega\left ( t\ -\ s_{k}\ x_{k} \right ) \right ],\ \ \ i=1,2,3,$$where $\rho$ is the density, $\mathbf{\delta}_{ij}$ is the Kronecker delta function, $\mathbf{C}_{ikjm}$ is the fourth order elastic tensor, $u_{ij}$ is the displacement, $q_{ij}$ is the polarization vector, $s_{km}$ is the slowness vector and $\omega$ is the angular frequency. This yields the following equation: $$\rho\ q_{i} = \mathbf{C}_{ikjm}\ s_{k}\ s_{m}\ q_{j},\ \ \ i=1,2,3,$$$\mathbf{C}_{ikjm}$ governs the three-dimensional relationship between stress and strain in a medium. It was condensed into a more convenient second order 6$\times$6 matrix, $\mathbf{C}_{ij}$, by exploiting its symmetrical properties. The previous equation can then be rearranged and expressed as:$$\left ( \mathbf{\Gamma}-\rho \mathbf{I}\right )\mathbf{q} = 0$$where $\mathbf{\Gamma}$ is the 3$\times$3 Christoffel matrix, defined for media with monoclinic symmetry as follows:$$\begin{matrix} \Gamma_{11}=C_{11}s_{1}^{2}+C_{66}s_{2}^{2}+C_{55}s_{3}^{2}+2C_{16}s_{1}s_{2}\\ \Gamma_{22}=C_{66}s_{1}^{2}+C_{22}s_{2}^{2}+C_{44}s_{3}^{2}+2C_{26}s_{1}s_{2}\\ \Gamma_{33}=C_{55}s_{1}^{2}+C_{44}s_{2}^{2}+C_{33}s_{3}^{2}+2C_{45}s_{1}s_{2}\\ \Gamma_{12}=\Gamma_{21}=C_{16}s_{1}^{2}+C_{26}s_{2}^{2}+C_{45}s_{3}^{2}+(C_{12}+C_{66})s_{1}s_{2}\\ \Gamma_{13}=\Gamma_{31}=(C_{13}+C_{55})s_{1}s_{3}+(C_{36}+C_{45})s_{2}s_{3}\\ \Gamma_{23}=\Gamma_{32}=(C_{36}+C_{45})s_{1}s_{3}+(C_{23}+C_{44})s_{2}s_{3}.\end{matrix}$$Explicit solutions for the reflection and transmission coefficients of incident P and S-waves were then found by imposing continuity of displacement and traction across the interface$^{\textbf{(1)}}$. Figure 2. Diagram of the system defined by the anisotropic Zoeppritz equations. $Pi$ is the incident p-wave, $Pr$ and $Pt$ are the reflected and transmitted p-waves, $qSHr$, $qSHt$, $qSVr$, $qSVt$ are the reflected and transmitted s-waves polarized in the quasi-horizontal/vertical planes, $\theta$ is the incidence angle and $\phi$ is the azimuth angle.For a comprehensive discussion of the theory and methods used, please refer to my masters thesis.*** References1. Zoeppritz, K. (1919), ‘Erdbebenwellen VIII B, Uber Reflexion and Durchgang seismischer wellen durch Unstetigkeisflachen’, *Gottinger Nachr* **1**, 66–84.2. Schoenberg, M. & Protazio, J. (1990), ‘Zoeppritz rationalized, and generalized to anisotropic media’, *The Journal of the Acoustical Society of America* **88**(S1), S46–S46.3. Aki, K. & Richards, P. (1980), *Quantitative seismology; theory and methods*, Freeman Co., San Francisco.4. Thomsen, L. (1986), ‘Weak elastic anisotropy’, *Geophysics* **51**(10), 1954–1966.5. Bond, W. L. (1943), ‘The mathematics of the physical properties of crystals’, *Bell Labs Technical Journal* **22**(1), 1–72.6. Shearer, P. M. (2009), *Introduction to seismology*, Cambridge University Press. *** Demo 1. Isotropic Reflection Coefficients
###Code
from zoeppritz import * # import zoeppritz.py
###Output
_____no_output_____
###Markdown
First, we need to define the isotropic physical parameters of the upper (1) and lower (2) media. E.g. $vp1$ is the p-wave velocity of the upper medium and $p2$ is the density of the lower medium. Note that all values should be given in **SI units** (e.g. $ms^{-1}$ and $kgm^{-3}$). These are the estimated parameters for the top-chalk interface from Figure 1.
###Code
vp1 = 2000; vp2 = 4000 # p-wave velocities
vs1 = 400; vs2 = 2150 # s-wave velocities
p1 = 2000; p2 = 2600 # densities
###Output
_____no_output_____
###Markdown
To generate a scattering matrix$^{\textbf{(4)}}$, call the **isotropic_zoeppritz()** function, passing in the angle of incidence (i_angle) in degrees. The first element of the first row of the scattering matrix is the complex valued p-wave reflection coefficient ($Rpp$). As shown, the magnitude and phase can be easily extracted.
###Code
Rpp = isotropic_zoeppritz(vp1, vp2, vs1, vs2, p1, p2, i_angle=35)[0][0] # p-wave reflection coefficient
m = abs(Rpp) # magnitude
p = np.degrees(cm.phase(Rpp)) # phase
print(f'magnitude = {m}, phase = {p}')
###Output
magnitude = 0.21887645438940628, phase = -87.20829659064667
###Markdown
In order to make a plot of magnitude and phase vs angle of incidence, call the **isotropic_plot()** function. The solid line represents the magnitude and the dotted line represents the phase shift.
###Code
isotropic_plot(vp1, vp2, vs1, vs2, p1, p2) # angle plot
###Output
_____no_output_____
###Markdown
Assuming we know the depth to the interface, we can easily convert incidence angle to offset by specifying the depth (d) in metres. The maximum offset can be set by using the maxoff parameter.
###Code
isotropic_plot(vp1, vp2, vs1, vs2, p1, p2, d=1000, maxoff=5000) # offset plot
###Output
_____no_output_____
###Markdown
The above plot is an elastic model of reflection coefficients vs offset. If we want to investigate whether acoustic FWI will be effective, as discussed in section 2, we can easily generate an acoustic model by setting the upper and lower s-wave velocities to zero (note that we have to use very small numbers instead of zero to ensure that matrix P is non-singular and invertible).
###Code
isotropic_plot(vp1, vp2, 1e-10, 1e-10, p1, p2, d=1000, maxoff=5000) # acoustic offset plot
###Output
_____no_output_____
###Markdown
As demonstrated above, there is a huge disparity between the elastic and acoustic post-critical reflection coefficients. This means that acoustic FWI will likely fail. Elastic FWI, or methods for mitigating elastic effects, are required in order to proceed with FWI. *** 2. Anisotropic Reflection Coefficients To model media with vertical transverse isotropy (VTI), we can specify Thomsen's anisotropy parameters ($\epsilon$, $\delta$, $\gamma$)$^{\textbf{(4)}}$ and generate elastic tensors for the upper and lower media using the **thomsen_c()** function.
###Code
e1 = 0.1; e2 = 0.2; d1 = 0.05; d2 = 0.1; g1 = 1e-10; g2 = 1e-10 # define some random but realistic Thomsen parameters
C1 = thomsen_c(vp1, vs1, p1, e1, d1, g1) # generate elastic tensors
C2 = thomsen_c(vp2, vs2, p2, e2, d2, g2)
###Output
_____no_output_____
###Markdown
Similarly to before, we can calculate an individual reflection coefficient using **anisotropic_zoeppritz()**, with the addition of the azimuth angle parameter (a_angle) in degrees. The first element of the first row of the first matrix that is returned is the complex valued p-wave reflection coefficient. To generate a plot of magnitude and phase vs incidence angle, call the **anisotropic_plot()** function. Just like before, to convert incidence angle to offset, simply specify the depth (d) in metres. If the results are unstable, try increasing the pre-whitening factor (p_white) to stabilise the solution.
###Code
Rpp = anisotropic_zoeppritz(C1, C2, p1, p2, i_angle=10, a_angle=0)[0][0][0] # p-wave reflection coefficient
m = abs(Rpp) # magnitude
p = np.degrees(cm.phase(Rpp)) # phase
print(f'magnitude = {m}, phase = {p}')
anisotropic_plot(C1, C2, p1, p2, a_angle=0, p_white=1e-7) # angle plot
###Output
_____no_output_____
###Markdown
To model horizontal transverse isotropy (HTI), without losing the convenience of Thomsen's anisotropy parameters, we can apply a 90$^{\circ}$ bond transformation to the elastic tensors$^{\textbf{(5)}}$. This rotates the tensors from VTI to HTI, whilst retaining the same magnitude of anisotropy. Note that for HTI media, the reflection coefficients depend on the azimuth angle (a_angle).
###Code
anisotropic_plot(bond_transformation(C1,90), bond_transformation(C2,90), p1, p2,
a_angle=70, p_white=1e-7) # hti plot
###Output
_____no_output_____
###Markdown
*** 3. Monte Carlo Simulations Monte Carlo simulations can be used to investigate the full range of possible reflection coefficients from a geological interface where the precise lithology and physical parameters are unknown or poorly constrained. Additionally, if you have high quality seismic data available, Monte Carlo simulations can provide constraints on the range of physical parameters that could produce reflections with the amplitude behaviour that is observed in the data. They are also a very effective tool for sensitivity analyses. For instance, to determine the sensitivity of reflection coefficients to small changes in the density, velocity, or anisotropy of the system.
###Code
from montecarlo import * # import montecarlo.py
###Output
_____no_output_____
###Markdown
3.1 Isotropic Simulation To demonstrate an isotropic Monte Carlo simulation, I am going to use the same example as before; investigating the top-chalk interface from Figure 1. The estimated parameters that I used to generate the isotropic reflection coefficient profiles were very poorly constrained. In this case, it makes more sense to estimate the minimum and maximum possible values for each parameter, and then fully explore how the reflection coefficients behave in this possible parameter space using Monte Carlo simulations. In order to conduct an isotropic Monte Carlo simulation, we must specify the **minimum** and **maximum** values for each parameter, and the **number of samples**.
###Code
vp1_min = 2000; vp1_max = 2000 # upper and lower p-wave velocity range
vp2_min = 4000; vp2_max = 4000
vs1_min = 200; vs1_max = 600 # upper and lower s-wave velocity range
vs2_min = 1800; vs2_max = 2500
p1_min = 1800; p1_max = 2200 # upper and lower density range
p2_min = 2400; p2_max = 2800
num_samples = 300 # number of samples used for the simulation
###Output
_____no_output_____
###Markdown
Then, call the **isotropic_monte_carlo()** function. This creates a random uniform distribution of samples for each parameter, which are then used to generate a reflection coefficient profile for each sample. The **isotropic_monte_carlo()** function returns an array of p-wave reflection coefficient profiles of length num_samples and the mean profile (generated using the mean value of each parameter).
###Code
SIM_isotropic, mean = isotropic_monte_carlo(vp1_min, vp1_max, vp2_min, vp2_max, \
vs1_min, vs1_max, vs2_min, vs2_max, \
p1_min, p1_max, p2_min, p2_max, num_samples)
###Output
100%|██████████| 300/300 [00:05<00:00, 53.51it/s]
###Markdown
To visualise the result, use the **monte_carlo_plot()** function. The blue lines are the simulated reflection coefficient profiles and the red line is the mean profile. Probability density functions are calculated and plotted at incidence angles of 15$^{\circ}$, 45$^{\circ}$ and 75$^{\circ}$.
###Code
monte_carlo_plot(SIM_isotropic, mean, pdf=True)
###Output
_____no_output_____
###Markdown
3.2 Anisotropic Simulation I am going to demonstrate how to implement an anisotropic Monte Carlo simulation by showing you how to conduct a sensitivity analysis of p-wave reflection coefficients to weak/moderate amounts of VTI anisotropy. A VTI Monte Carlo simulation is conducted using the same general approach. The only difference is that we need to define the azimuth angle, the minimum and maximum Thomsen parameters, and use the **anisotropic_monte_carlo()** function. Note that this will take longer than the isotropic simulation due to the added complexities and computational cost of solving the anisotropic Zoeppritz equations!
###Code
vp1_min = 2000; vp1_max = 2000 # hold isotropic parameters constant
vp2_min = 4000; vp2_max = 4000
vs1_min = 600; vs1_max = 600
vs2_min = 1800; vs2_max = 1800
p1_min = 2200; p1_max = 2200
p2_min = 2400; p2_max = 2400
e1_min = -0.02; e1_max = 0.2 # upper and lower epsilon ranges
e2_min = -0.02; e2_max = 0.2
d1_min = -0.1; d1_max = 0.2 # upper and lower delta ranges
d2_min = -0.1; d2_max = 0.2
g1_min = 1e-10; g1_max = 1e-10 # upper and lower gamma ranges
g2_min = 1e-10; g2_max = 1e-10
a_angle = 0 # azimuth angle - note that this does not affect the results for VTI anisotropy
num_samples = 300 # number of samples used for the simulation
SIM_vti, mean = anisotropic_monte_carlo(vp1_min, vp1_max, vp2_min, vp2_max, \
vs1_min, vs1_max, vs2_min, vs2_max, \
p1_min, p1_max, p2_min, p2_max, \
e1_min, e1_max, e2_min, e2_max, \
d1_min, d1_max, d2_min, d2_max, \
g1_min, g1_max, g2_min, g2_max, \
a_angle, num_samples)
monte_carlo_plot(SIM_vti, mean, pdf=True)
###Output
_____no_output_____
###Markdown
*** 4. Kirchhoff Synthetic Seismograms Zoeppritz equations assume seismic energy travels in rays with infinite frequency$^{\textbf{(1)}}$. However, seismic waves have finite frequencies and accounting for this will effectively average out the reflection coefficient profile over the Fresnel zone$^{\textbf{(3)}}$. The kirchhoff.py program was developed in order to account for finite frequencies and geometrical spreading by modelling the reflected wave-field. The reflected wave-field for a specific receiver, $\phi_{R}$, is calculated using a Kirchhoff integral of the form: $$\phi_{R}=\frac{1}{4\pi c}\int_{S}\delta\ (t-\frac{r+r_{0}}{c})\ \frac{R_{pp}(\theta_{0})}{r r_{0}}\ (cos\theta_{0}+cos\theta)\ dS\ *\ \frac{\partial w(t,f)}{\partial t}$$where c is the p-wave velocity of the incident medium, $\delta$ is the Dirac delta function, $R_{pp}(\theta_{0})$ is the plane-wave p-wave reflection coefficient at incidence angle $\theta_{0}$, $r_{0}$ and $r$ are the source to surface grid-point and surface grid-point to receiver distances respectively, $\theta$ is the angle between the scattered ray and the surface normal, $w$ is the source time function (Ricker wavelet) and the integral is conducted over surface $S\ ^{\textbf{(6)}}$. For post-critical incidence angles, $\phi_{R}$ becomes complex, requiring an additional phase shift. Therefore, the final Kirchhoff seismogram is given by:$$\phi_{R}^{final}=Re(\phi_{R})\ +\ \textbf{H}\{Im(\phi_{R})\}$$where $\textbf{H}$ is the Hilbert transform, and the real and imaginary component are denoted by $Re$ and $Im$ respectively$^{\textbf{(6)}}$.
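As an illustration of the final phase-shift step, the last equation above maps to a few lines of NumPy/SciPy. This is a minimal sketch only, assuming SciPy is available; it is not the internal kirchhoff.py implementation, which may differ:

```python
import numpy as np
from scipy.signal import hilbert

def finalize_kirchhoff_trace(phi):
    """Return Re(phi) + H{Im(phi)} for a complex-valued Kirchhoff trace.

    scipy.signal.hilbert gives the analytic signal x + iH{x}, so the Hilbert
    transform of Im(phi) is the imaginary part of hilbert(Im(phi)).
    """
    phi = np.asarray(phi)
    return np.real(phi) + np.imag(hilbert(np.imag(phi)))
```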
###Code
from kirchhoff import * # import kirchhoff.py
###Output
_____no_output_____
###Markdown
In order to model the reflected wave-field, we first need to specify the geometry of the system we wish to model.
###Code
rec_min = 50 # minimum receiver distance from source
rec_max = 5000 # maximum receiver distance from source
drec = 100 # receiver spacing (every 100 meters in this case)
d = 1000 # interface depth
###Output
_____no_output_____
###Markdown
Next, we choose some modelling parameters. Note that if the time interval spacing (dt) and grid-point spacing (ds) are too large, the synthetic seismograms will be incorrect. To run this a bit faster you can use a larger grid-point spacing (ds) value. It is recommended that you start small (using the default value or lower) and increase the spacing until the resulting seismograms start to change; the largest spacing for which they remain unchanged is the maximum allowed for the system you have defined.
###Code
w = 500 # width of 3D Kirchhoff integral
ds = 50 # grid-point spacing over the interface
dt = 5e-4 # time interval spacing
f = 6 # frequency in Hz
###Output
_____no_output_____
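As a concrete, purely illustrative version of the convergence check described above: once these parameters are set, recompute the traces with a halved grid-point spacing and compare.

```python
# Sketch of a convergence test: halve ds and compare the resulting seismograms.
# The call signature matches isotropic_synthetic() as used later in this section;
# the tolerance below is arbitrary and should be tuned to the amplitudes involved.
import numpy as np

traces_coarse, _ = isotropic_synthetic(rec_min, rec_max, drec,
                                       vp1, vp2, vs1, vs2, p1, p2, d, w, f, ds=ds, dt=dt)
traces_fine, _ = isotropic_synthetic(rec_min, rec_max, drec,
                                     vp1, vp2, vs1, vs2, p1, p2, d, w, f, ds=ds/2, dt=dt)
print(np.allclose(traces_coarse, traces_fine, atol=1e-8))  # True once ds is small enough
```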
###Markdown
4.1 Isotropic Synthetic Seismograms To generate an isotropic synthetic seismic trace for each receiver in the specified geometry, call **isotropic_synthetic()**, which will return an array of traces (one for each receiver) and a time array. I am using the same isotropic physical parameters (defined in section 3) for modelling the top-chalk interface in Figure 1.
###Code
traces_elastic, time = isotropic_synthetic(rec_min, rec_max, drec,
vp1, vp2, vs1, vs2, p1, p2, d, w, f, ds=ds, dt=dt) # isotropic elastic traces
###Output
100%|██████████| 50/50 [00:07<00:00, 6.95it/s]
###Markdown
We can plot these traces by passing them into the **plot_synthetic()** function along with the time array. For better visualisation, you can amplify the result by setting the scale_fac parameter to a value greater than 1. The ymin and ymax parameters control the time axis limits.
###Code
scale_fac = 1
plot_synthetic(traces_elastic, time, scale_fac, ymin=0.4, ymax=1.4) # plot isotropic elastic traces
###Output
_____no_output_____
###Markdown
Just like in section 3, we can also generate an acoustic version of the reflected wave-field by setting the s-wave velocities to (practically) zero.
###Code
traces_acoustic, time = isotropic_synthetic(rec_min, rec_max, drec,
vp1, vp2, 1e-10, 1e-10, p1, p2, d, w, f, ds=ds, dt=dt) # isotropic acoustic traces
plot_synthetic(traces_acoustic, time, scale_fac, ymin=0.4, ymax=1.4) # plot isotropic acoustic traces
###Output
_____no_output_____
###Markdown
We can also plot the difference between the acoustic and elastic traces to visualise the disparity between the elastic and acoustic reflected wave-fields.
###Code
plot_synthetic(traces_acoustic-traces_elastic, time, scale_fac, ymin=0.4, ymax=1.4) # acoustic - elastic plot
###Output
_____no_output_____
###Markdown
4.2 Anisotropic Synthetic Seismograms To generate anisotropic synthetic traces, we can use the same system geometry and modelling parameters. The only difference is that we need to call the **anisotropic_synthetic()** function, which requires the elastic tensors, only the p-wave velocity of the upper medium, and the azimuth angle. I am using the same anisotropic physical parameters that were defined earlier.
###Code
traces_anisotropic, time = anisotropic_synthetic(rec_min, rec_max, drec,
C1, C2, p1, p2, d, w, f, vp1, a_angle=0, ds=ds, dt=dt, p_white=1e-7) # anisotropic traces
plot_synthetic(traces_anisotropic, time, scale_fac, ymin=0.4, ymax=1.4) # plot anisotropic traces
###Output
_____no_output_____
###Markdown
To visualise the minor differences between the anisotropic and isotropic elastic traces, we can increase the scale factor.
###Code
scale_fac = 4 # increase scale factor to better visualise minor differences
plot_synthetic(traces_anisotropic-traces_elastic, time, scale_fac, ymin=0.4, ymax=1.4) # anisotropic - isotropic elastic
###Output
_____no_output_____ |
testing/sahel_cropmask/1_Extract_training_data.ipynb | ###Markdown
Extracting training data from the ODC* **Products used:** [gm_s2_semiannual](https://explorer.digitalearth.africa/gm_s2_semiannual) DescriptionThis notebook will extract training data over northern Africa using geometries within a shapefile (or geojson). To do this, we rely on a custom `deafrica-sandbox-notebooks` function called `collect_training_data`, contained within the [deafrica_tools.classification](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/blob/minty-fresh-sandbox/Tools/deafrica_tools/classification.py) script.1. Import and preview our training data contained in the file: `'data/sahel_training_data_YYYYMMDD.geojson'`2. Extract training data from the datacube using a custom-defined feature layer function that we can pass to `collect_training_data`. The training data function is stored in the Python file `feature_layer_functions.py` - the functions are stored in a separate file simply to keep this notebook tidy. - **The features used to create the cropland mask are as follows:** - For two seasons, January to June, and July to December: - A geomedian composite of nine Sentinel-2 spectral bands - Three measures of median absolute deviation - NDVI, MNDWI, and LAI - Cumulative rainfall from CHIRPS - Slope from SRTM (not seasonal, obviously) 3. Separate the coordinate values in the returned training data from step 2, and export the coordinates as a text file.4. Export the remaining training data (features other than coordinates) to disk as a text file for use in subsequent scripts*** Getting startedTo run this analysis, run all the cells in the notebook, starting with the "Load packages" cell. Load packages
###Code
%matplotlib inline
import os
import warnings
warnings.filterwarnings("ignore")
import datacube
import numpy as np
import xarray as xr
import geopandas as gpd
from odc.io.cgroups import get_cpu_quota
from datacube.utils.geometry import assign_crs
from datacube.utils.rio import configure_s3_access
configure_s3_access(aws_unsigned=True, cloud_defaults=True)
from deafrica_tools.plotting import map_shapefile
from deafrica_tools.classification import collect_training_data
#import the custom feature layer functions
from feature_layer_functions import gm_mads_two_seasons_training
###Output
_____no_output_____
###Markdown
Analysis parameters* `path`: The path to the input shapefile from which we will extract training data.* `field`: This is the name of column in your shapefile attribute table that contains the class labels. **The class labels must be integers**
###Code
path = 'data/sahel_training_data_20211110.geojson'
output_suffix = '20211110'
field = 'Class'
###Output
_____no_output_____
###Markdown
Automatically find the number of cpus> **Note**: With supervised classification, it's common to have many, many labelled geometries in the training data. `collect_training_data` can parallelize across the geometries to speed up the extraction of training data. Setting `ncpus>1` will automatically trigger the parallelization; however, it's best to set `ncpus=1` to begin with to assist with debugging before triggering the parallelization.
###Code
ncpus=round(get_cpu_quota())
print('ncpus = '+str(ncpus))
###Output
ncpus = 31
###Markdown
Load & preview polygon dataWe can load and preview our input data shapefile using `geopandas`. The shapefile should contain a column with class labels (e.g. 'class'). These labels will be used to train our model. > Remember, the class labels **must** be represented by `integers`.
###Code
# Load input data shapefile
input_data = gpd.read_file(path)
# Preview the first five rows
input_data.head()
# Plot training data in an interactive map
# map_shapefile(input_data, attribute=field)
###Output
_____no_output_____
###Markdown
Now, we can pass this shapefile to `collect_training_data`. For each of the geometries in our shapefile we will extract features in accordance with the function `feature_layer_functions.gm_mads_two_seasons_training`. First, we need to set up a few extra inputs for `collect_training_data` and the datacube. See the function docs [here](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/blob/03b7b41d5f6526ff3f33618f7a0b48c0d10a155f/Scripts/deafrica_classificationtools.pyL650) for more information on these parameters.
###Code
#set up our inputs to collect_training_data
zonal_stats = 'median'
# Set up the inputs for the ODC query
time = ('2019')
measurements = [
"blue",
"green",
"red",
"nir",
"swir_1",
"swir_2",
"red_edge_1",
"red_edge_2",
"red_edge_3",
"bcdev",
"edev",
"sdev"
]
resolution = (-10, 10)
output_crs = 'epsg:6933'
#generate a new datacube query object
query = {
'time': time,
'measurements': measurements,
'resolution': resolution,
'output_crs': output_crs,
'group_by' : 'solar_day',
'resampling': 'bilinear'
}
###Output
_____no_output_____
###Markdown
Extract training data> Remember, if running this function for the first time, it's advisable to set `ncpus=1` to assist with debugging before triggering the parallelization (which won't return errors if something is not working correctly). You can also limit the number of polygons to run the first time by passing in `gdf=input_data[0:5]`, for example.
###Code
%%time
warnings.filterwarnings("ignore")
column_names, model_input = collect_training_data(
gdf=input_data,
dc_query=query,
ncpus=25,
field=field,
zonal_stats=zonal_stats,
fail_threshold=0.0075,
feature_func=gm_mads_two_seasons_training
)
print(column_names)
print('')
print(np.array_str(model_input, precision=2, suppress_small=True))
###Output
['Class', 'blue_S1', 'green_S1', 'red_S1', 'nir_S1', 'swir_1_S1', 'swir_2_S1', 'red_edge_1_S1', 'red_edge_2_S1', 'red_edge_3_S1', 'bcdev_S1', 'edev_S1', 'sdev_S1', 'NDVI_S1', 'LAI_S1', 'MNDWI_S1', 'rain_S1', 'blue_S2', 'green_S2', 'red_S2', 'nir_S2', 'swir_1_S2', 'swir_2_S2', 'red_edge_1_S2', 'red_edge_2_S2', 'red_edge_3_S2', 'bcdev_S2', 'edev_S2', 'sdev_S2', 'NDVI_S2', 'LAI_S2', 'MNDWI_S2', 'rain_S2', 'slope']
[[ 0. 0.15 0.22 ... -0.42 334.4 4.25]
[ 0. 0.13 0.22 ... -0.49 258.16 3.17]
[ 0. 0.13 0.19 ... -0.46 309.86 2.43]
...
[ 1. 0.12 0.15 ... -0.54 632.3 2.43]
[ 1. 0.13 0.18 ... -0.5 530.79 1.67]
[ 1. 0.13 0.17 ... -0.46 530.7 4.25]]
###Markdown
Export training dataOnce we've collected all the training data we require, we can write the data to disk. This will allow us to import the data in the next step(s) of the workflow.
###Code
#set the name and location of the output file
output_file = "results/training_data/sahel_training_data_"+output_suffix+".txt"
#get the index of every column to export (this run keeps all columns, including the class label)
model_col_indices = [column_names.index(var_name) for var_name in column_names]
#Export files to disk
np.savetxt(output_file, model_input[:, model_col_indices], header=" ".join(column_names), fmt="%4f")
###Output
_____no_output_____ |
_posts/CS20SI-1.ipynb | ###Markdown
Welcome to TensorFlow- Welcome- Overview of TensorFlow- Graphs and Sessions Why TensorFlow1. Python API;2. Portability - runs on one or many CPUs, GPUs, servers, or mobile devices;3. Flexibility - from Raspberry Pi, Android, Windows, iOS and Linux up to server clusters;4. Visualization - most notably through TensorBoard.5. Checkpoints - useful for managing many experiments.6. Auto-differentiation - no more hand-derived gradients.7. Large community - plenty of discussion and support.8. A large number of projects already built with TensorFlow. Goals1. Understand TensorFlow's computational-graph approach;2. Explore TensorFlow's built-in functions;3. Learn how to build and structure deep-learning projects.> Off-the-shelf models are not the main purpose of TensorFlow.> TensorFlow provides an extensive suite of functions and classes that allow users to define models from scratch.> And this is what we are going to learn. We are not here to use TensorFlow's ready-made models, but to learn how to build our own models from scratch. Books- TensorFlow for Machine Intelligence- Hands-on Machine Learning with Scikit-learn and TensorFlow- Fundamentals of Deep Learning TensorFlow evolves quickly and books date fast, so follow the [official site](https://www.tensorflow.org/) to keep up with the latest. Two phases with TensorFlow- Phase 1: assemble a graph.- Phase 2: use a session to execute operations in the graph (differentiation and parameter updates happen automatically). What is a tensor?- 0-d: a scalar;- 1-d: a vector;- 2-d: a matrix;- and so on. What is a Session?A __Session__ object encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated - that is, a __Session__ wraps the computing environment in which the _operations_ run and the corresponding _tensors_ are evaluated. Why graphs1. Save computation - only the subgraph relevant to the requested variables is run.2. Computation is broken into small pieces, each of which is easy to auto-differentiate.3. Distributed computation across multiple CPUs, GPUs or other devices becomes straightforward.4. Many common machine-learning models are already trained and visualised as computational graphs. SummaryEven while feeling my way through this chapter, the TensorBoard visualisation came out correctly, which was a nice little surprise. The chapter remains an overall introduction to TensorFlow - who uses it, why, and how. __How to use it__ matters most: we want to make full use of the modules TensorFlow already provides, such as its functions and classes, but the core of the model still has to be built from scratch. It also puts particular emphasis on the Computational Graph as the central idea.
###Code
import tensorflow as tf
x = tf.constant(3, name='x')
y = tf.constant(5, name='y')
add_op = tf.add(x, y)
mul_op = tf.multiply(x, y)
useless = tf.multiply(x, add_op)
pow_op = tf.pow(add_op, mul_op)
with tf.Session() as sess:
print(sess.run(pow_op))
summary_writer = tf.summary.FileWriter('tmp/testtb', sess.graph)
###Output
0
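A note on the printed result: with `x = 3` and `y = 5`, `pow_op` computes `8 ** 15`, which equals `2 ** 45`. The constants are created with the default 32-bit integer dtype, and the low 32 bits of `2 ** 45` are all zero, so the wrapped (overflowed) value returned by the session is `0` rather than the exact answer. A quick check in plain Python:

```python
# why the session prints 0: 8**15 overflows a 32-bit integer
print(8 ** 15)                # 35184372088832 in exact arithmetic
print((8 ** 15) % (2 ** 32))  # 0 -> the value left after int32 wrap-around
```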
|
icpw_annual_time_series.ipynb | ###Markdown
ICPW annual time seriesAt the Task Force meeting in May 2017, it was decided that the TOC trends analysis should include rolling regressions based on the annually aggregated data (rather than calculating a single set of statistics for a small number of pre-specified time periods). Some code outlining an approach for this in Python can be found [here](http://nbviewer.jupyter.org/github/JamesSample/icpw/blob/master/rolling_sens_slope.ipynb), and John has also created a version using R.Heleen has asked me to provide John with annual time series for each site for the period from 1990 to 2012 (see e-mail received from Heleen on 12/05/2017 at 10.01 and also the e-mail chain involving John, Don and Heleen between 19th and 20th May 2017).As a first step, I've modified my previous trends code so that, during the processing, the annual time series are saved as a series of CSVs. The additional lines of code can be found on lines 567 to 574 of [toc_trend_analysis.py](https://github.com/JamesSample/icpw/blob/master/toc_trends_analysis.py).The output CSVs require a small amount of manual cleaning: * Delete the TOC series for station ID 23467 and * Delete the SO4 series for station ID 36561. (See sections 1.2 and 1.3 of [this notebook](https://github.com/JamesSample/icpw/blob/master/toc_trends_oct_2016_part3.ipynb) for justification).Annual time series for climate based on the updated climate data have already been created and are saved here:C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015\CRU_Climate_Data\cru_climate_summaries.xlsxThe temperature series also need correcting based on site elevation, as was done in [this notebook](https://github.com/JamesSample/icpw/blob/master/icpw_climate_trends.ipynb), and the two (climate and water chemistry) datasets then need restructuring and joining into a single output file for John to work with.
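For reference, the elevation correction applied in the code below uses a constant lapse rate of 0.6 °C per 100 m, i.e.$$T_{station} = T_{CRU} + \frac{0.6}{100}\,(z_{pixel} - z_{station})$$so a station lying below its CRU grid-cell elevation is warmed slightly, and a station lying above it is cooled.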
###Code
# Data paths
clim_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\CRU_Climate_Data\cru_climate_summaries.xlsx')
stn_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\CRU_Climate_Data\cru_stn_elevs.csv')
chem_fold = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\annual_chemistry_series')
# For performance, pre-load the climate output file
# (this saves having to read it within the inner loop below,
# which is very slow)
clim_dict = {}
for var in ['pre', 'tmp']:
for tm in ['ann', 'jja', 'jas']:
# Open the climate data
clim_df = pd.read_excel(clim_xls, sheetname='%s_%s' % (var, tm))
clim_dict[(var, tm)] = clim_df
# Read stn elev data
stn_df = pd.read_csv(stn_xls)
# Get list of sites
stn_list = stn_df['stn_id'].unique()
# List to store output
data_list = []
# Loop over stations
for stn in stn_list:
# Read chem data
chem_path = os.path.join(chem_fold, 'stn_%s.csv' % stn)
# Only process the 431 files with chem data
if os.path.exists(chem_path):
# Allow for manually edited stations (see above)
# which now have ';' as the delimiter
if stn in [23467, 36561]:
chem_df = pd.read_csv(chem_path, sep=';')
else:
chem_df = pd.read_csv(chem_path)
chem_df.index = chem_df['YEAR']
# Process climate data
# Dict to store output
data_dict = {}
# Loop over data
for var in ['pre', 'tmp']:
for tm in ['ann', 'jja', 'jas']:
# Get the climate data
clim_df = clim_dict[(var, tm)]
# Filter the climate data for this station
stn_clim_df = clim_df.query('stn_id == @stn')
# Set index
stn_clim_df.index = stn_clim_df['year']
stn_clim_df = stn_clim_df.sort_index()
# Correct temperatures according to lapse rate
if var == 'tmp':
# Get elevations
stn_elev = stn_df.query('stn_id == @stn')['elev_m'].values[0]
px_elev = stn_df.query('stn_id == @stn')['px_elev_m'].values[0]
# If pixel elev is negative (i.e. in sea), correct back to s.l.
if px_elev < 0:
px_elev = 0
# Calculate temperature difference based on 0.6C/100m
t_diff = 0.6 * (px_elev - stn_elev) / 100.
# Apply correction
stn_clim_df['tmp'] = stn_clim_df['tmp'] + t_diff
# Truncate
stn_clim_df = stn_clim_df.query('(year>=1990) & (year<=2012)')
# Add to dict
key = '%s_%s' % (var, tm)
val = stn_clim_df[var]
data_dict[key] = val
# Build output df
stn_clim_df = pd.DataFrame(data_dict)
# Join chem and clim data
df = pd.merge(stn_clim_df, chem_df, how='outer',
left_index=True, right_index=True)
# Get desired columns
# Modified 06/06/2017 to include all pars for Leah
df = df[['pre_ann', 'pre_jas', 'pre_jja', 'tmp_ann', 'tmp_jas',
'tmp_jja', 'Al', 'TOC', 'EH', 'ESO4', 'ECl', 'ESO4_ECl',
'ENO3', 'ESO4X', 'ESO4_ECl', 'ECa_EMg', 'ECaX_EMgX',
'ANC']]
# Transpose
df = df.T
# Add station ID
df.reset_index(inplace=True)
df['station_id'] = stn
# Rename cols
df.columns.name = ''
cols = list(df.columns)
cols[0] = 'var'
df.columns = cols
data_list.append(df)
# Combine results for each site
ann_df = pd.concat(data_list, axis=0)
ann_df.head()
###Output
_____no_output_____
###Markdown
The final step is to join in the site metadata used in the previous analysis.
###Code
# Read site data from previous output
in_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\toc_trends_long_format_update1.xlsx')
props_df = pd.read_excel(in_xls, sheetname='toc_trends_long_format_update1',
keep_default_na=False) # Otherwise 'NA' for North America becomes NaN
# Get just cols of interest
props_df = props_df[['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'lat', 'lon']]
# Drop duplicates
props_df.drop_duplicates(inplace=True)
# Join
ann_df = pd.merge(ann_df, props_df, how='left',
on='station_id')
# Reorder cols
ann_df = ann_df[['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'lat', 'lon', 'var']+range(1990, 2013)]
ann_df.head()
###Output
_____no_output_____
###Markdown
Heleen previously defined various criteria for whether a series should be included in the analysis or not. In order to keep things consistent, it's probably a good idea to include this information here.
###Code
# Read site data from previous output
in_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\toc_trends_long_format_update1.xlsx')
inc_df = pd.read_excel(in_xls, sheetname='toc_trends_long_format_update1')
# Get just cols of interest
inc_df = inc_df[['station_id', 'par_id', 'analysis_period', 'include']]
# Filter to just results for 1990-2012
inc_df = inc_df.query('analysis_period == "1990-2012"')
# Join
ann_df = pd.merge(ann_df, inc_df, how='left',
left_on=['station_id', 'var'],
right_on=['station_id', 'par_id'])
# Reorder cols
ann_df = ann_df[['project_id', 'project_name', 'station_id', 'station_code',
'station_name', 'nfc_code', 'type', 'continent', 'country',
'region', 'subregion', 'lat', 'lon', 'var', 'include']+range(1990, 2013)]
# The climate vars all have data and can be included
ann_df['include'].fillna(value='yes', inplace=True)
# Write output
out_path = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\annual_clim_chem_series_leah.csv')
ann_df.to_csv(out_path, encoding='utf-8')
ann_df.head()
###Output
_____no_output_____
###Markdown
Finally, it's worth checking that this output matches the time series available on the ICPW website and in the plots of the climate data.
###Code
in_xls = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Results\annual_clim_chem_series.xlsx')
df = pd.read_excel(in_xls, sheetname='annual_clim_chem_series')
# Select series
stn_id = 23455
var = 'tmp_jja'
df2 = df.query("(station_id==@stn_id) & (var==@var)")
df2 = df2[range(1990, 2013)].T
df2.columns = [var]
df2.plot(ls='-', marker='o')
###Output
_____no_output_____ |
scripts/Auto Mun/Auto Mun.ipynb | ###Markdown
**Set Up Planet Objects**
###Code
for name, obj in conn.space_center.bodies.items():
if name == 'Kerbin':
kerbin = obj
elif name == 'Mun':
mun = obj
mass_kerbin = 5.2915158e22 # (kg)
mass_mun = 9.7599066e20 # (kg)
G = 6.67408e-11 # (m**3/kg/s**2) gravitational constant
###Output
_____no_output_____
###Markdown
**Reference Frames**
###Code
kerbin_nrrf = kerbin.non_rotating_reference_frame
###Output
_____no_output_____
###Markdown
**Helper Functions**
###Code
def warp_to_node(conn, vessel, node, burn_duration):
# set attitude
vessel.control.sas_mode = conn.space_center.SASMode.maneuver
ang_vel = vessel.angular_velocity(vessel.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
while not oriented:
ang_vel = vessel.angular_velocity(vessel.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
# warp
end_warp = node.ut - burn_duration
conn.space_center.rails_warp_factor = 4
while conn.space_center.ut < (end_warp - 1000):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 3
while conn.space_center.ut < (end_warp - 300):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 2
while conn.space_center.ut < (end_warp - 100):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 1
while conn.space_center.ut < (end_warp - 15):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 0
def execute_node(conn, vessel, node, burn_duration):
while conn.space_center.ut < (node.ut-(burn_duration/2)):
time.sleep(0.1)
while node.remaining_delta_v > 15:
vessel.control.throttle = 1
while node.remaining_delta_v > 0.1:
vessel.control.throttle = 0.05
node.remove()
vessel.control.throttle = 0
###Output
_____no_output_____
###Markdown
**Orbital Mechanics**
###Code
def calc_burn_duration(vessel, dV):
''' Calculates the burn time for a given vessel and a given delta-V. '''
m = vessel.mass
isp = vessel.specific_impulse
thrust = vessel.available_thrust
    # Tsiolkovsky-based burn time: t = (m0 * Isp * g0 / F) * (1 - exp(-|dV| / (Isp * g0)))
    g0 = 9.81
    term1 = (m * isp * g0) / thrust
    term2 = 1 - np.exp(-abs(dV) / (isp * g0))
    return term1 * term2
def calc_circ_orb_speed(r, M):
''' Calculates the orbital speed of a vessel in a circular orbit. '''
return np.sqrt(G*M/r)
def vis_viva(r, a, M):
'''
Calculates the orbital speed of a vessel at some point in an
elliptical orbit given the semi-major axis and mass of orbiting
body.
'''
return np.sqrt(G*M*((2/r)-(1/a)))
def orbital_period(a, M):
'''
Calculates the orbital period of a satellite given
the semi-major axis of the orbit and the mass of the
orbiting body.
'''
mu = G*M
return (2*np.pi)*np.sqrt(a**3/mu)
def v_pe_hyperbolic(M, a, e):
''' Find periapsis speed of hyperbolic orbit. '''
k = G*M # gravitational parameter
return ((-k/a)*(1+e)/(e-1))**0.5
###Output
_____no_output_____
###Markdown
**Kerbin Ascent**This ascent profile is custom to this vehicle. It would be nice to have a generalized script that (at least attempts to) launch any vehicle to orbit.`TODO`: Convert times to use `MET` rather than `time.time()` so that the profile stays stable under physical time warp.
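A sketch of what the `TODO` above might look like (not used in the run below), assuming kRPC's `vessel.met` attribute for mission elapsed time:

```python
# Sketch only: key the pitch-over to MET rather than wall-clock time,
# so physical time warp does not distort the ascent profile.
pitch_start = vessel.met
while (vessel.met - pitch_start) < 40:                      # 40 s of pitching
    elapsed = vessel.met - pitch_start
    vessel.auto_pilot.target_pitch_and_heading(90 - 1.125 * elapsed, 90)
```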
###Code
# vehicle telemetry
telem = vessel.flight(vessel.orbit.body.reference_frame)
# go straight up at first
vessel.auto_pilot.target_pitch_and_heading(90, 90)
vessel.auto_pilot.engage()
time.sleep(1)
# throttle 100% and ignition
vessel.control.throttle = 1
vessel.control.activate_next_stage()
# wait until going 100 m/s straight up
v_speed = telem.vertical_speed
while v_speed < 100:
v_speed = telem.vertical_speed
time.sleep(0.1)
# pitch over at a rate of 1.125 deg/s for 40 s
# this gets vehicle to 45 deg by 10 km
pitch_start = time.time()
pitch_end = pitch_start + 40 # 40 seconds of pitching
time_to_deg = 1.125
while time.time() < pitch_end:
tgt_deg = 90 - ((time.time() - pitch_start) * time_to_deg)
vessel.auto_pilot.target_pitch_and_heading(tgt_deg, 90)
print('Lower atmosphere pitch complete.')
# pitch over at a lower rate until first stage depleted
pitch_start = time.time()
pitch_end = pitch_start + 25 # ~25 seconds until depleted
while time.time() < pitch_end:
tgt_deg = 45 - ((time.time() - pitch_start) * 0.4)
vessel.auto_pilot.target_pitch_and_heading(tgt_deg, 90)
# MECO
stg_1_resrcs = vessel.resources_in_decouple_stage(stage=9, cumulative=False)
stg_1_lqd_fu = stg_1_resrcs.amount('LiquidFuel')
while stg_1_lqd_fu > 0.01:
stg_1_lqd_fu = stg_1_resrcs.amount('LiquidFuel')
vessel.auto_pilot.target_pitch_and_heading(35, 90)
time.sleep(1)
print('MECO.')
# S1 Sep
vessel.control.throttle = 0
vessel.control.activate_next_stage()
time.sleep(1)
print('Stage 1 sep.')
# S2 Ignition
vessel.control.activate_next_stage()
vessel.control.throttle = 1
# pitch over slowly on S2 until apoapsis ~100 km
pitch_start = time.time()
pitch_end = pitch_start + 25
while time.time() < pitch_end:
tgt_deg = 35 - ((time.time() - pitch_start) * 1.0)
vessel.auto_pilot.target_pitch_and_heading(tgt_deg, 90)
apo_alt = vessel.orbit.apoapsis_altitude
while apo_alt < 100000:
apo_alt = vessel.orbit.apoapsis_altitude
vessel.auto_pilot.target_pitch_and_heading(10, 90)
# stabilize for coast to apoapsis
vessel.control.throttle = 0
print('SECO.')
vessel.auto_pilot.disengage()
time.sleep(1)
vessel.control.sas = True
time.sleep(1)
vessel.control.sas_mode = conn.space_center.SASMode.prograde
time.sleep(1)
###Output
Lower atmosphere pitch complete.
MECO.
Stage 1 sep.
SECO.
###Markdown
**Circularize Kerbin Orbit**
###Code
# wait until out of atmosphere
conn.space_center.physics_warp_factor = 3
while vessel.flight().mean_altitude < 70000:
time.sleep(1)
conn.space_center.physics_warp_factor = 0
# fairing & LES sep
time.sleep(3)
vessel.control.activate_next_stage()
print('Fairing sep.')
print('LAS sep.')
print('Out of atmosphere.')
# go to map screen
time.sleep(3)
conn.space_center.camera.mode = conn.space_center.CameraMode.map
# relevant times
time.sleep(1)
ut_set = conn.space_center.ut
t_to_ap = vessel.orbit.time_to_apoapsis
ut_ap = ut_set + t_to_ap
# determine delta-V required to circularize
V_ap_reqd = calc_circ_orb_speed(r=vessel.orbit.apoapsis, M=mass_kerbin)
V_ap_curr = vis_viva(r=vessel.orbit.apoapsis, a=vessel.orbit.semi_major_axis, \
M=mass_kerbin)
dV_node = abs(V_ap_reqd - V_ap_curr)
# create circularization node
circ = vessel.control.add_node(ut=ut_ap)
circ.prograde = dV_node
print('Created circularization maneuver node.')
# set SAS to maneuver and wait until oriented
print('Orienting for maneuver.')
vessel.control.sas_mode = conn.space_center.SASMode.maneuver
time.sleep(3)
ang_vel = vessel.angular_velocity(vessel.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
while not oriented:
ang_vel = vessel.angular_velocity(vessel.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
# time warp to maneuver
print('Warping to maneuver.')
burn_time = calc_burn_duration(vessel, dV_node)
conn.space_center.physics_warp_factor = 3
while conn.space_center.ut < (ut_ap - (burn_time / 2) - 15):
time.sleep(1)
conn.space_center.physics_warp_factor = 0
conn.space_center.camera.mode = conn.space_center.CameraMode.automatic
time.sleep(1)
# execute maneuver
while conn.space_center.ut < (ut_ap - (burn_time / 2)):
time.sleep(0.1)
while circ.remaining_delta_v > 15:
vessel.control.throttle = 1
while circ.remaining_delta_v > 0.1:
vessel.control.throttle = 0.05
circ.remove()
vessel.control.throttle = 0
print('Circularization complete.')
###Output
_____no_output_____
###Markdown
**CSM Detach, Flip, Dock Maneuver**This docking method depends entirely on this craft, and doesn't work generally. It'd be nice to have a general docking script that works for any two craft.
###Code
# set vessel to SAS normal and wait until oriented
print('Orienting to normal... ', end='')
vessel.control.sas_mode = conn.space_center.SASMode.normal
time.sleep(3)
ang_vel = vessel.angular_velocity(vessel.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
while not oriented:
ang_vel = vessel.angular_velocity(vessel.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
print('Done.')
# turn off RCS and undock CSM
vessel.control.rcs = False
time.sleep(1)
vessel.control.activate_next_stage()
# turn on RCS, thrust normal briefly, then stop
vessel.control.rcs = True
time.sleep(1)
vessel.control.forward = 1
time.sleep(0.5)
vessel.control.forward = 0
time.sleep(2)
vessel.control.forward = -1
time.sleep(0.5)
vessel.control.forward = 0
# turn off RCS
vessel.control.rcs = False
print('CSM sep.')
# control CSM from docking port
print('Docking maneuver... ', end='')
CSM_docking_port = vessel.parts.with_title('Clamp-O-Tron Docking Port')[0]
vessel.parts.controlling = CSM_docking_port
# target LM docking port
vessels = conn.space_center.vessels
for v in vessels:
if v.name == 'Auto Mun Lander':
LM = v
elif v.name == 'Auto Mun':
CSM = v
LM_docking_port = LM.parts.docking_ports[0] # DockingPort object
conn.space_center.target_docking_port = LM_docking_port
# set CSM SAS to target
vessel.control.sas_mode = conn.space_center.SASMode.target
time.sleep(1)
# switch to LM
conn.space_center.active_vessel = LM
time.sleep(1)
# control LM from docking port
LM_parts = LM.parts.all
for part in LM_parts:
if part.name == 'dockingPort2':
LM_dp = part # Part object
LM.parts.controlling = LM_dp # Requires Part, not DockingPort
time.sleep(1)
# target CSM docking port
CSM_dp = CSM.parts.docking_ports[0]
conn.space_center.target_docking_port = CSM_dp
time.sleep(1)
# set LM SAS to target
LM.control.sas_mode = conn.space_center.SASMode.target
time.sleep(1)
# switch to CSM
conn.space_center.active_vessel = CSM
time.sleep(1)
CSM.control.sas_mode = conn.space_center.SASMode.target
time.sleep(1)
vessel.control.rcs = True
time.sleep(1)
# thrust slightly in direction of target docking port, turn off RCS, wait until docked
# this requires some Patience & Luck (TM)
n_vessels = len(conn.space_center.vessels)
vessel.control.forward = 1
time.sleep(1.5)
vessel.control.forward = 0
# wait until docked
while len(conn.space_center.vessels) == n_vessels:
time.sleep(1)
# turn off RCS
vessel.control.rcs = False
print('Done.')
###Output
_____no_output_____
###Markdown
**Trans-Munar Injection**
###Code
print('Setting up Trans-Munar Injection.')
# redefine newly docked vessel
CSM_LM_S2 = conn.space_center.active_vessel
# control from LM (for orientation purposes)
for part in CSM_LM_S2.parts.all:
if part.name == 'mk2LanderCabin.v2':
CSM_LM_S2.parts.controlling = part
# set Mun as target
conn.space_center.target_body = mun
time.sleep(1)
# create TMI maneuver node (Hohmann Transfer)
# calculate dV required for maneuver
v_s_hi = calc_circ_orb_speed(r=CSM_LM_S2.orbit.apoapsis, M=mass_kerbin)
v_s_lo = calc_circ_orb_speed(r=CSM_LM_S2.orbit.periapsis, M=mass_kerbin)
v_s = (v_s_hi + v_s_lo) / 2
r_b = CSM_LM_S2.orbit.semi_major_axis
a_tmi = (100000 + 600000 + mun.orbit.apoapsis) / 2
v_pe = vis_viva(r=r_b, a=a_tmi, M=mass_kerbin)
dv = v_pe - v_s
# calculate time until maneuver node
# half-period of transfer orbit
p = orbital_period(a=a_tmi, M=mass_kerbin) # period of one TMI orbit
p_2 = 0.5 * p # time of one half TMI orbit
# arc distance Mun travels in time p_2
v_m = mun.orbit.speed # Mun orbital velocity
s = v_m * p_2 # arc-length distance Mun travels
# angle between Mun current pos and Mun intercept pos
r_m_i = mun.position(kerbin_nrrf)
theta = s / np.linalg.norm(r_m_i) # radians
# Mun position at intercept
x1, y1, z1 = r_m_i[0], r_m_i[1], r_m_i[2]
x2 = x1 * np.cos(theta) - z1 * np.sin(theta)
y2 = 0
z2 = x1 * np.sin(theta) + z1 * np.cos(theta)
r_m_f = (x2, y2, z2)
# find time to burn
r_s_b = [-1 * r_m_f[i] for i in range(len(r_m_f))]
r_s_i = CSM_LM_S2.position(kerbin_nrrf)
r_s_i = [r_s_i[0], 0, r_s_i[2]] # nullify inclination
x1, y1 = r_s_i[0], r_s_i[2]
x2, y2 = r_s_b[0], r_s_b[2]
theta2 = np.arctan2(x1*y2-y1*x2, x1*x2+y1*y2)
if theta2 < 0:
theta2 += 2*np.pi
s_to_b = CSM_LM_S2.orbit.semi_major_axis * theta2
t_to_b = s_to_b / v_s
# create node
ut_b = conn.space_center.ut + t_to_b
tmi = CSM_LM_S2.control.add_node(ut=ut_b)
tmi.prograde = dv
# set SAS to maneuver and wait until oriented
print('Orienting for maneuver.')
CSM_LM_S2.control.sas_mode = conn.space_center.SASMode.maneuver
CSM_LM_S2.control.rcs = True
time.sleep(3)
ang_vel = CSM_LM_S2.angular_velocity(CSM_LM_S2.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.1 for i in range(3))
while not oriented:
ang_vel = CSM_LM_S2.angular_velocity(CSM_LM_S2.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.1 for i in range(3))
# time warp to maneuver
print('Warping to maneuver.')
burn_time = calc_burn_duration(CSM_LM_S2, dv)
conn.space_center.rails_warp_factor = 2
while conn.space_center.ut < (ut_b - (burn_time / 2) - 60):
time.sleep(1)
conn.space_center.rails_warp_factor = 0
conn.space_center.camera.mode = conn.space_center.CameraMode.automatic
time.sleep(1)
print('Burn baby, burn.')
# execute maneuver
while conn.space_center.ut < (ut_b - (burn_time / 2)):
time.sleep(0.1)
while tmi.remaining_delta_v > 15:
CSM_LM_S2.control.throttle = 1
while tmi.remaining_delta_v > 0.1:
CSM_LM_S2.control.throttle = 0.05
tmi.remove()
CSM_LM_S2.control.throttle = 0
CSM_LM_S2.control.rcs = False
print('TMI complete.')
###Output
_____no_output_____
###Markdown
**Outbound Trajectory Correction**
###Code
time.sleep(3)
# separate from Stage 2
for part in CSM_LM_S2.parts.all:
if part.tag == 'Decoupler.S2':
part.decoupler.decouple()
# rename current vessel
CSM_LM = conn.space_center.active_vessel
print('S2 sep.')
for part in CSM_LM.parts.all:
if part.name == 'engineLargeSkipper':
# activate SM engine
part.engine.active = True
elif part.name == 'liquidEngine2-2.v2':
# ensure LM engine deactivated
part.engine.active = False
elif part.name == 'mk1-3pod':
# control from CM
CSM_LM.parts.controlling = part
# create outbound trajectory correction node
print('Set up OTC node.')
ut_otc = conn.space_center.ut + 3000
otc = CSM_LM.control.add_node(ut=ut_otc)
otc.prograde = 12.5
# maneuver to node
CSM_LM.control.sas_mode = conn.space_center.SASMode.maneuver
time.sleep(3)
ang_vel = CSM_LM.angular_velocity(CSM_LM.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
while not oriented:
ang_vel = CSM_LM.angular_velocity(CSM_LM.orbit.body.reference_frame)
oriented = all(ang_vel[i] < 0.01 for i in range(3))
# warp carefully to node
print('Warping to node... ', end='')
conn.space_center.rails_warp_factor = 4
while conn.space_center.ut < (ut_otc - 1000):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 3
while conn.space_center.ut < (ut_otc - 300):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 2
while conn.space_center.ut < (ut_otc - 100):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 1
while conn.space_center.ut < (ut_otc - 15):
time.sleep(0.1)
conn.space_center.rails_warp_factor = 0
print('Done.')
# execute node
print('Cleaning up Mun intercept... ', end='')
while conn.space_center.ut < (ut_otc - 5):
time.sleep(0.1)
while otc.remaining_delta_v > 0.1:
CSM_LM.control.throttle = 0.05
CSM_LM.control.throttle = 0
otc.remove()
vessel = CSM_LM
print('Done.')
###Output
_____no_output_____
###Markdown
**Circularize Lunar Orbit**
###Code
# wait until in Mun SOI
print('Warping to Mun SOI... ', end='')
time.sleep(1.0) # wait for clunkiness to clear
conn.space_center.rails_warp_factor = 5
while vessel.orbit.body.name != 'Mun':
time.sleep(0.1)
time.sleep(1.0) # wait for SOI change clunkiness to clear
conn.space_center.rails_warp_factor = 0
print('Done.')
# create circularization maneuver node
t_to_pe = vessel.orbit.time_to_periapsis
ut_pe = conn.space_center.ut + t_to_pe
speed_pe = v_pe_hyperbolic(M=mass_mun, a=vessel.orbit.semi_major_axis, \
e=vessel.orbit.eccentricity)
speed_circ = calc_circ_orb_speed(r=vessel.orbit.periapsis, M=mass_mun)
dV = -abs(speed_pe - speed_circ) # minus to slow down
circ = vessel.control.add_node(ut=ut_pe, prograde=dV)
# warp to node
burn_duration = calc_burn_duration(vessel, dV)
warp_to_node(conn, vessel, circ, burn_duration)
# execute maneuver
print('Executing node... ', end='')
execute_node(conn, vessel, circ, burn_duration)
print('Done.')
###Output
_____no_output_____
###Markdown
**Crew Transfer**TODO: can't figure out how to transfer crew automatically. For now, need to launch with two pilots, one in each command pod. **Mun Landing**
###Code
# fill lander fuel tanks from SM fuel tanks
# lander detach
# wait until passing over night->day meridian
# deorbit burn
# landing sequence
###Output
_____no_output_____ |
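None of the landing steps above are implemented yet. As a very rough, untested sketch of what the final descent loop might eventually look like (assuming the lander is the active `vessel` and is already on a suborbital trajectory over the Mun), written in the same style of kRPC calls as the ascent code:

```python
# Rough sketch of a final-descent loop -- not a tuned or tested landing controller.
telem = vessel.flight(vessel.orbit.body.reference_frame)
vessel.control.sas = True
vessel.control.sas_mode = conn.space_center.SASMode.retrograde
while telem.surface_altitude > 5:
    # allow a faster descent when high up, slow right down near the surface
    target_speed = -max(2, telem.surface_altitude / 10)
    # throttle up in proportion to how much faster than the target we are falling
    error = target_speed - telem.vertical_speed
    vessel.control.throttle = min(1.0, max(0.0, 0.5 * error))
    time.sleep(0.1)
vessel.control.throttle = 0
```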
Kaggle Datasets/DL/Mushroom Classification/mushroom-ann.ipynb | ###Markdown
Data :
Attribute Information: (classes: edible=e, poisonous=p)
--> cap-shape: bell=b,conical=c,convex=x,flat=f, knobbed=k,sunken=s
--> cap-surface: fibrous=f,grooves=g,scaly=y,smooth=s
--> cap-color: brown=n,buff=b,cinnamon=c,gray=g,green=r,pink=p,purple=u,red=e,white=w,yellow=y
--> bruises: bruises=t,no=f
--> odor: almond=a,anise=l,creosote=c,fishy=y,foul=f,musty=m,none=n,pungent=p,spicy=s
--> gill-attachment: attached=a,descending=d,free=f,notched=n
--> gill-spacing: close=c,crowded=w,distant=d
--> gill-size: broad=b,narrow=n
--> gill-color: black=k,brown=n,buff=b,chocolate=h,gray=g, green=r,orange=o,pink=p,purple=u,red=e,white=w,yellow=y
--> stalk-shape: enlarging=e,tapering=t
--> stalk-root: bulbous=b,club=c,cup=u,equal=e,rhizomorphs=z,rooted=r,missing=?
--> stalk-surface-above-ring: fibrous=f,scaly=y,silky=k,smooth=s
--> stalk-surface-below-ring: fibrous=f,scaly=y,silky=k,smooth=s
--> stalk-color-above-ring: brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y
--> stalk-color-below-ring: brown=n,buff=b,cinnamon=c,gray=g,orange=o,pink=p,red=e,white=w,yellow=y
--> veil-type: partial=p,universal=u
--> veil-color: brown=n,orange=o,white=w,yellow=y
--> ring-number: none=n,one=o,two=t
--> ring-type: cobwebby=c,evanescent=e,flaring=f,large=l,none=n,pendant=p,sheathing=s,zone=z
--> spore-print-color: black=k,brown=n,buff=b,chocolate=h,green=r,orange=o,purple=u,white=w,yellow=y
--> population: abundant=a,clustered=c,numerous=n,scattered=s,several=v,solitary=y
--> habitat: grasses=g,leaves=l,meadows=m,paths=p,urban=u,waste=w,woods=d Importing Libraries & getting Data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('dark_background')
import warnings
warnings.filterwarnings('ignore')
data = pd.read_csv('dataset/mushrooms.csv')
data.head()
data.info()
for i in data.columns:
print('{} consists of : {} unique-values , which are --> {}\n'.format(i ,len(pd.value_counts(data[i])) ,data[i].unique()))
###Output
class consists of : 2 unique-values , which are --> ['p' 'e']
cap-shape consists of : 6 unique-values , which are --> ['x' 'b' 's' 'f' 'k' 'c']
cap-surface consists of : 4 unique-values , which are --> ['s' 'y' 'f' 'g']
cap-color consists of : 10 unique-values , which are --> ['n' 'y' 'w' 'g' 'e' 'p' 'b' 'u' 'c' 'r']
bruises consists of : 2 unique-values , which are --> ['t' 'f']
odor consists of : 9 unique-values , which are --> ['p' 'a' 'l' 'n' 'f' 'c' 'y' 's' 'm']
gill-attachment consists of : 2 unique-values , which are --> ['f' 'a']
gill-spacing consists of : 2 unique-values , which are --> ['c' 'w']
gill-size consists of : 2 unique-values , which are --> ['n' 'b']
gill-color consists of : 12 unique-values , which are --> ['k' 'n' 'g' 'p' 'w' 'h' 'u' 'e' 'b' 'r' 'y' 'o']
stalk-shape consists of : 2 unique-values , which are --> ['e' 't']
stalk-root consists of : 5 unique-values , which are --> ['e' 'c' 'b' 'r' '?']
stalk-surface-above-ring consists of : 4 unique-values , which are --> ['s' 'f' 'k' 'y']
stalk-surface-below-ring consists of : 4 unique-values , which are --> ['s' 'f' 'y' 'k']
stalk-color-above-ring consists of : 9 unique-values , which are --> ['w' 'g' 'p' 'n' 'b' 'e' 'o' 'c' 'y']
stalk-color-below-ring consists of : 9 unique-values , which are --> ['w' 'p' 'g' 'b' 'n' 'e' 'y' 'o' 'c']
veil-type consists of : 1 unique-values , which are --> ['p']
veil-color consists of : 4 unique-values , which are --> ['w' 'n' 'o' 'y']
ring-number consists of : 3 unique-values , which are --> ['o' 't' 'n']
ring-type consists of : 5 unique-values , which are --> ['p' 'e' 'l' 'f' 'n']
spore-print-color consists of : 9 unique-values , which are --> ['k' 'n' 'u' 'h' 'w' 'r' 'o' 'y' 'b']
population consists of : 6 unique-values , which are --> ['s' 'n' 'a' 'v' 'y' 'c']
habitat consists of : 7 unique-values , which are --> ['u' 'g' 'm' 'd' 'p' 'w' 'l']
###Markdown
Handling Missing Values
###Code
data.isnull().sum()
###Output
_____no_output_____
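Note that `isnull()` finds nothing here only because, in this dataset, missing values are encoded as the literal character `'?'` rather than as `NaN` - the attribute list above shows `missing=?` for `stalk-root`, and `'?'` appears among its unique values. A quick check:

```python
# '?' is the dataset's own missing-value marker, so isnull() does not see it
print((data['stalk-root'] == '?').sum(), "stalk-root values are encoded as '?'")
```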
###Markdown
Analysing the Target Variable, i.e. class
###Code
data['class'].value_counts()
sns.countplot(x='class' ,data=data )
###Output
_____no_output_____
###Markdown
Encoding
###Code
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
for i in data.columns:
data[i] = encoder.fit_transform(data[i])
data.head()
data.shape
###Output
_____no_output_____
###Markdown
Correlation
###Code
data.corr()['class'].sort_values(ascending=False)
plt.figure(figsize=(20,10))
sns.heatmap(data.corr() ,annot=True ,cmap='RdYlGn')
plt.show()
###Output
_____no_output_____
###Markdown
Model Train-Test Split
###Code
X = data.drop(['class'] ,axis=1)
y = data['class'].values
X.shape ,y.shape
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Model Building
###Code
import tensorflow as tf
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
def model_building():
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=8, kernel_initializer='uniform',
activation='relu', input_dim=X_train.shape[1]))
model.add(tf.keras.layers.Dense(units=8 ,kernel_initializer='uniform' ,activation='relu'))
model.add(tf.keras.layers.Dense(units=1 ,kernel_initializer='uniform' ,activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy' ,metrics=['accuracy'])
return model
classifier_model = KerasClassifier(build_fn=model_building ,epochs=70 ,batch_size=10)
accuracies = cross_val_score(estimator=classifier_model ,X=X_train ,y=y_train ,cv=2)
mean = accuracies.mean()
std = accuracies.std()   # standard deviation (not variance) across the CV folds
print('Mean CV Accuracy :', str(mean))
print('CV Accuracy Std :', str(std))
model_history = classifier_model.fit(X_train ,y_train ,validation_split=0.20 ,epochs=70 ,batch_size=10)
print(model_history.history.keys())
plt.figure(figsize=(10,5))
plt.plot(model_history.history['accuracy'])
plt.plot(model_history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
plt.figure(figsize=(10,5))
plt.plot(model_history.history['loss'])
plt.plot(model_history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()
###Output
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
|
matematica-para-datascience/Lectures/Lecture-21-Principal-Component-Analysis.ipynb | ###Markdown
Lecture 21: Principal Component Analysis
###Code
import numpy as np
import matplotlib.pyplot as plt
import numpy.linalg as LA
%matplotlib inline
###Output
_____no_output_____
###Markdown
PCA and data preprocessingPrincipal Components Analysis (PCA) is a dimensionality reduction algorithm that can be used to significantly speed up our feature(s) learning algorithm. Mathematically speaking, PCA uses a method called Singular Value Decomposition (SVD), in which the singular value resembles the eigenvalue.Suppose we are training a model to classify our MNIST handwritten digits (28x28 grayscale images). A given training sample in `X_train` is of shape `(784,)`, which has 784 features (dimensions). However, many of these features are somewhat redundant, because the values of adjacent pixels in an image are highly correlated. Concretely, for a training vector $\mathbf{x} = (x_1,\dots, x_{784}) \in \mathbb{R}^{784}$ which is 784 dimensional vectors, with each feature $x_j$ corresponding to the intensity of $j$-th pixel (in the flattened image). Because of the correlation between adjacent pixels, PCA will allow us to approximate the input with a much lower dimensional one, while incurring very little error. (Reading) What does PCA exactly do?PCA will find the most significant "direction" in a dataset, then the second most significant direction, then the third most significant direction,.... so on and so forth.Let $A\in \mathbb{R}^{n\times d}$ matrix, usually we assume $n>d$>Singular Value Decomposition (SVD): any real matrix can be decomposed into the following form:>$$A = U S V^{\top}$$Where $S\in \mathbb{R}^{n\times d}$ is a diagonal matrix whose diagonal entries are non-negative and in decreasing order. $U\in \mathbb{R}^{n\times n}$ and $V\in \mathbb{R}^{d\times d}$ are orthogonal matrices (i.e. columns of $V$ are orthonormal, same for $U$, $U^{\top}U = UU^{\top} = I$). The columns of $V= [\mathbf{v}_1 \mathbf{v}_2 \cdots \mathbf{v}_d]$ are the significant directions we were looking for, which are known as the right singular vectors. Moreover, the columns of $V$ form a $d$-dimensional orthogonal basis.The columns of $U = [\mathbf{u}_1 \mathbf{u}_2 \cdots \mathbf{u}_n]$ are known as the left singular vectors. Sometimes we just say "eigenvectors" instead of "singular vectors".If some singular values are 0 or very small, we can essentially "discard" those singular values and the corresponding eigenvectors, and still get a reasonably good approximation of our data, hence reducing the dimensions.Finally, $U$ (or more precisely $U S$) stores how you write each coordinate vector in terms of the significant directions in $V$. ---- Geometric meaningConsider what happens to the unit sphere in our vector space as it isbeing transformed by the matrix $X$. First, we apply some transformation $V^{\top}$, which is essentially a rotation, since $V^{\top}$ is a matrix with orthonormal rows. A matrix with orthonormal rows just changes the coordinate axes via some rotation or reflection but does no scaling.Next, we apply a scaling defined by $S$, which just scales the dimensions since it is a diagonal matrix. Finally, we rotate again with $U$. In other words, any transformation can be expressed as a rotation followed by a scaling followed by another rotation. ---- Rank-$k$ approximationThe vectors $V= [\mathbf{v}_1 \mathbf{v}_2 \cdots \mathbf{v}_d]$ are such that they describe the most important axis of the data in the following sense. The first eigenvector $\mathbf{v}_1$ describes which direction has the most variance. 
Then since $\mathbf{v}_2, \cdots, \mathbf{v}_d$ are each orthogonal to $\mathbf{v}_1$, this implies that $\mathbf{v}_2$ is the direction (after $\mathbf{v}_1$ has been factored out) that has the most variance.The rank $k$ approximation $A_k \in \mathbb{R}^{n\times d}$ of $A$ is as follows: $$A_k = U S_k V^{\top},$$where $S_k$ is still an $n\times d$ matrix, but with only the first $k$ diagonal entries kept nonzero. In this way, only the first $k$ columns of $U$ and of $V$ ($k$ rows of $V^{\top}$) contribute to $A_k$.---- Example of GaussLet us recall the example from the beginning of this quarter and load `gauss.jpg`, then flatten its color dimension to make a grayscale image.Reference: [https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.linalg.svd.html](https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.linalg.svd.html)
###Code
G = plt.imread("gauss.jpg")
G_bw = np.mean(G, axis=2)
plt.imshow(G_bw, cmap="gray")
U, S, VT = LA.svd(G_bw)
S_mat = np.zeros_like(G_bw)
S_mat[:600] = np.diag(S)
np.allclose(G_bw, U.dot(S_mat.dot(VT)))
###Output
_____no_output_____
###Markdown
Remark: you can also use `@`; the `@` (at) operator performs matrix multiplication and is new in Python 3.5.
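The choice of `k` in the next cell is somewhat arbitrary; one quick way to pick it is to keep enough singular values to retain a given fraction of the total "energy" $\sum_i \sigma_i^2$ (this is essentially what the `n_components=0.95` option will do for us in scikit-learn at the end of this lecture). A sketch, using the `S` computed above:

```python
# fraction of the total energy captured by the first k singular values
energy = np.cumsum(S**2) / np.sum(S**2)
k95 = int(np.searchsorted(energy, 0.95)) + 1   # smallest k reaching 95%
print(k95, "singular values retain 95% of the energy")
```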
###Code
# suppose we only use first k singular values
k = 30
S_k = np.zeros_like(G_bw)
S_k[:k, :k] = np.diag(S[:k])
G_k = (U @ S_k) @ VT
plt.imshow(G_k, cmap="gray")
###Output
_____no_output_____
###Markdown
Why can we discard the smaller singular values?Let us use the following artificially generated data as an example.
###Code
num_samples = 500
mean = np.array([0,0])
covariance = np.array([[12, 9], [ 9, 10]])
X = np.random.multivariate_normal(mean, covariance, num_samples)
plt.scatter(X[:,0], X[:,1], alpha=0.2)
U, S, VT = LA.svd(X)
VT[0,:] # first eigenvectors
VT[1,:] # second eigenvectors
v1 = VT[0,:] * S[0]/20
v2 = VT[1,:] * S[1]/20
plt.figure(figsize=(6,6))
plt.axis([-8,8,-8,8])
plt.scatter(X[:,0], X[:,1], alpha=0.2)
plt.arrow(0, 0, v1[0], v1[1], width=0.1, head_width=0.34, head_length=0.8)
plt.arrow(0, 0, v2[0], v2[1], width=0.1, head_width=0.34, head_length=0.8)
###Output
_____no_output_____
###Markdown
What does the "significance" of a direction mean in data science?Consider using SVD on the dataset matrix $X\in \mathbb{R}^{N\times d}$, where $X$'s $i$-th row corresponds to the $i$-th data sample. There are $N$ data samples and each data point has $d$ dimensions (features). Suppose the dataset is centered, i.e., the mean sample has been subtracted from every row, so that each feature (column) has mean zero. The significant directions (columns of $V$) are then the eigenvectors of the covariance matrix ${\frac{1}{N-1}X^T X}$, and the singular values (entries of $S$) are, up to a factor of $\sqrt{N-1}$, the square roots of the eigenvalues of this covariance matrix (equivalently, they are exactly the square roots of the eigenvalues of $X^T X$).In other words, $S$'s entries measure how much the data spread out along each of those directions. The covariance matrix tells you how correlated each direction is with other directions in the space where the dataset lives. In-class Exercise: MNISTPCA can be used to speed up a machine learning algorithm (logistic regression) on the MNIST dataset.Download `mnist_binary_train.npz` and `mnist_binary_test.npz` from the Canvas Files tab, and load them using the code cell below. Then try the following:* Use `scikit-learn`'s `LogisticRegression` class on the original dataset.* Import `PCA` from `scikit-learn`'s `decomposition` submodule, apply it to the dataset, and re-run the `LogisticRegression` on the reduced dataset.
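Before starting the exercise, here is a quick, self-contained numerical check of the covariance statement above:

```python
import numpy as np

X = np.random.randn(500, 4)
Xc = X - X.mean(axis=0)                      # centre each feature (column)
N = Xc.shape[0]

U, S, VT = np.linalg.svd(Xc, full_matrices=False)
evals, evecs = np.linalg.eigh(Xc.T @ Xc / (N - 1))
evals, evecs = evals[::-1], evecs[:, ::-1]   # eigh sorts ascending; flip to match SVD order

# covariance eigenvalues equal S**2 / (N - 1) ...
print(np.allclose(evals, S**2 / (N - 1)))
# ... and each covariance eigenvector matches a right singular vector up to sign
print(np.allclose(np.abs(np.sum(VT * evecs.T, axis=1)), 1.0))
```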
###Code
data_train = np.load('mnist_binary_train.npz')
data_test = np.load('mnist_binary_test.npz')
X_train, y_train = data_train['X'], data_train['y']
X_test, y_test = data_test['X'], data_test['y']
# use scikit-learn's built-in logistic regression
from sklearn.linear_model import LogisticRegression
import time # measure the time
mnist_reg = LogisticRegression(solver='lbfgs', max_iter=500)
starting_time = time.process_time()
mnist_reg.fit(X_train,y_train)
print("Data fitting takes", time.process_time() - starting_time, "seconds")
mnist_reg.score(X_test,y_test)
###Output
_____no_output_____
###Markdown
Use PCA to reduce the dimensionFrom the [reference of the `PCA`](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html):> If `0 < n_components < 1` and `svd_solver == 'full'`, select the number of components such that the amount of variance that needs to be explained is greater than the percentage specified by n_components.The following cell means that the `PCA` class in `scikit-learn` will choose the minimum number of principal components such that 95% of the variance is retained for our data.
###Code
from sklearn.decomposition import PCA
mnist_pca = PCA(n_components=0.95)
mnist_pca.fit(X_train)
X_train_reduced = mnist_pca.transform(X_train)
X_test_reduced = mnist_pca.transform(X_test)
mnist_reg_pca = LogisticRegression(solver='lbfgs', max_iter=500)
starting_time = time.process_time()
mnist_reg_pca.fit(X_train_reduced,y_train)
print("Data fitting takes", time.process_time() - starting_time, "seconds")
mnist_reg_pca.score(X_test_reduced,y_test)
###Output
_____no_output_____ |
solutions/Practical_4.ipynb | ###Markdown
Practical 4: Modules and Functions - Building Conway's Game of LifeObjectives: In this practical we continue to use functions, modules and conditional statements. We also continue practicing how we access entries from 2D arrays. At the end of this notebook you will have a complete version of Conway's Game of Life which will produce an animation. This will be done through 3 different sections, each of which has an exercise for you to complete: - 1) [Creating different shapes through 2D Numpy array modifications](Part1) * [Exercise 1: Draw still 'life' from Conway's Universe](Exercise1) * [Exercise 2: Draw oscillators and space-ship 'life' from Conway's Universe](Exercise2) - 2) [Creating a function that searches a local neighbourhood for values of '1' and '0'](Part2) * [Exercise 3: Implement the 4 rules of life](Exercise3) * [Exercise 4: Loop through 20 oscillations of the 'Beacon' lifeform](Exercise4) - 3) [Populating Conway's Universe with multiple species](Part3) As with our other notebooks, we will provide you with a template for plotting the results. Also please note that you should not feel pressured to complete every exercise in class. These practicals are designed for you to take outside of class and continue working on them. Proposed solutions to all exercises can be found in the 'Solutions' folder. Please note: After reading the instructions and aims of any exercise, search the code snippets for a note that reads -------'INSERT CODE HERE'------- to identify where you need to write your code Introduction: The gameBefore we get our teeth into the exercises included in this notebook, let's remind ourselves about the basis for Conway's game of life. In Conway's game of life, the Universe is represented as a 2D space [a 2D Numpy array in our case!] on which each cell can either be alive or dead. If we refer to each cell as having one of two states, we can represent this numerically as each cell having either a value of 1 or 0. If we then assume we can draw 2D shapes that represent a 'species', as a collection of live cells, we might find patterns changing over time.Every cell interacts with its neighbours, whether they are horizontally, vertically or diagonally adjacent. There are 4 laws that define these interactions: - Any live cell with fewer than two live neighbours dies, as if by underpopulation. - Any live cell with two or three live neighbours lives on to the next generation. - Any live cell with more than three live neighbours dies, as if by overpopulation. - Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.So, imagine we are at the beginning of time in our 2D Universe. We need to check the status of every cell and make changes according to these laws. After one sweep through our 2D space, or time step, the status of individual cells will change. Numerically, the distribution of '1's and '0's changes across our 2D space. In fact, by defining species as distinct groups of cells of a certain shape, as we move through multiple time steps we find 3 types of patterns emerging: - Still life: These patterns remain fixed on the board - Oscillators: These patterns change shape on every iteration, but return to their initial state after a number of generations. 
 - Space-ships: These patterns end up moving across the board according to the rules that define life and death.From a programming perspective, implementing these rules through any number of time steps requires a number of procedures to be implemented via code: - 1) Defining 2D arrays that represent species in Conway's Universe. - 2) Creating a function that searches the immediate neighbouring space of each cell for 1's and 0's. - 3) Counting the number of 1's and 0's according to the previous point. - 4) Changing the values of each cell according to the 4 laws stated above. - 5) Looping through points 2-4 for any number of time steps.By sequentially following the exercises below, we will eventually build a variant of Conway's game of life. Creating different shapes through 2D Numpy array modifications Before we can run a simulation, let's create distinct species as groups of cells, and thus patterns. This will help us practice creating 2D arrays and populating each cell with either a '0' or '1' depending on what pattern we want to draw. To generate and thus draw each species you will be asked to initialise a 2D Numpy array that repeats the pattern seen in the picture. The code to plot, thus visualise, each pattern is given for you. Still life The pictures in Figures 1 and 2 illustrate common types of still life in Conway's Universe. I've given you some code that reproduces the pattern for 'Block', in the code box below. Read through the code and comments and see if this makes sense.  Figure 1 Figure 2
###Code
#%matplotlib inline #this is to help us retrieve those lovely animations!
import numpy as np #import the numerical python library, numpy. Aliasing it as 'np' is solely for convenience
import matplotlib.pyplot as plt #as per the above, much easier to write over and over again
from matplotlib import animation, rc
# Let's first create our 'Block'. Don't forget, we can call our arrays and matrices anything we want. In this case I'm going to use the name of the pattern we are interested in
Block = np.zeros((4,4),dtype=int) #I'm telling the Python interpreter I want a numpy array that is 4 rows by 4 columns, contains '0' for now and is expecting my data to be of integer type
# What does this look like?
print("An empty array",Block)
# Can you see a matrix of 0s?
# OK, cool. Now let's add some black cells by setting some values to 1. For the Block pattern, this is done as follows:
Block[1,1]=1
Block[1,2]=1
Block[2,1]=1
Block[2,2]=1
# Remember how we refer to elements in an array in Python? Everything starts at 0, so here I'm filling in the central 2x2 block with 1s. Let's check this out numerically:
print("A finished array",Block)
#Now let's plot this to recreate the 'Block' pattern shown in Figure 1.
plt.imshow(Block, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Block')
plt.show()
###Output
An empty array [[0 0 0 0]
[0 0 0 0]
[0 0 0 0]
[0 0 0 0]]
A finished array [[0 0 0 0]
[0 1 1 0]
[0 1 1 0]
[0 0 0 0]]
###Markdown
Exercise 1: Draw still 'life' from Conway's Universe. In this exercise you will need to create a 2D Numpy array that essentially draws both the *Tub* and *Boat* species from Figure 2.
###Code
# We have already imported both Numpy and Matplotlib so no need to import those again.
# Initialise our matrices
Tub = np.zeros((5,5),dtype=int)
Boat = np.zeros((5,5),dtype=int)
#-------'INSERT CODE HERE'-------
# Now add '1's to the currently empty 2D array Tub
Tub [1,2]=1
Tub [2,1]=1
Tub [3,2]=1
Tub [2,3]=1
#--------------------------------
plt.subplot(1, 2, 1).imshow(Tub, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Tub')
#plt.show()
#-------'INSERT CODE HERE'-------
# Now add '1's to the currently empty 2D array Boat
Boat [1,1]=1
Boat [1,2]=1
Boat [2,1]=1
Boat [2,3]=1
Boat [3,2]=1
#--------------------------------
plt.subplot(1, 2, 2).imshow(Boat, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Boat')
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 2: Draw oscillators and space-ship 'life' from Conway's Universe. Following Exercise 1, now do the same for 2 types of both *oscillators* and *space ships*: Toad, Beacon, Glider and Light-weight spaceship (LWSS). Can you replicate the patterns shown in Figures 1 and 3? Check the size of each array you need, accounting for white space around the outside. Use the space below and copy-paste the code we have already used. Figure 3
###Code
#Enter the Python code here to create and then visualise a Toad, Beacon and Glider
#Initialise each matrix
Beacon = np.zeros((6,6),dtype=int)
Toad = np.zeros((6,6),dtype=int)
Glider = np.zeros((5,5),dtype=int)
LWSS = np.zeros((6,7),dtype=int)
#Enter values for '1' where you would like a black square
#-------'INSERT CODE HERE'-------
Beacon [1,1]=1
Beacon [1,2]=1
Beacon [2,1]=1
Beacon [3,4]=1
Beacon [4,3]=1
Beacon [4,4]=1
Toad [2,2]=1
Toad [2,3]=1
Toad [2,4]=1
Toad [3,1]=1
Toad [3,2]=1
Toad [3,3]=1
Glider [1,2]=1
Glider [2,3]=1
Glider [3,1]=1
Glider [3,2]=1
Glider [3,3]=1
LWSS [1,2]=1
LWSS [1,5]=1
LWSS [2,1]=1
LWSS [3,1]=1
LWSS [4,1]=1
LWSS [4,2]=1
LWSS [4,3]=1
LWSS [4,4]=1
LWSS [3,5]=1
#--------------------------------
#Now visualise your results.
plt.subplot(1, 2, 1).imshow(Beacon, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Beacon')
plt.subplot(1, 2, 2).imshow(Toad, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Toad')
plt.show()
plt.subplot(1, 2, 1).imshow(Glider, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Glider')
plt.subplot(1, 2, 2).imshow(LWSS, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('LWSS')
plt.show()
###Output
_____no_output_____
###Markdown
Creating a function that searches a local neighbourhood for values of '1' and '0'. Now that we know how to define a species by modifying values in a 2D array, we also need to create a function that can search the neighbouring space of any cell for the occurrence of '1's or '0's. We are going to perform this operation many times, so creating a function to do this seems a sensible approach. As an example, let's re-create the 2D array that represents the species 'Beacon' and then pass this array into a new function that will search the neighbouring space of every cell to detect a '1' or '0'. In this example I have given you all of the code to perform this operation. Try to understand the syntax used. Does this make sense? First look at the code and then let's formulate the steps in the function as a narrative.
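One small piece of syntax worth pausing on before you read the function: `range()` with two arguments runs from the first value up to, but not including, the second. That is exactly what lets the nested loops visit the row above, the row itself and the row below (and likewise for the columns). A quick, stand-alone illustration (the value of `row` here is arbitrary):
```python
row = 3
print(list(range(row - 1, row + 2)))   # [2, 3, 4]: the row above, the row itself, the row below
```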
###Code
#Initialise the Beacon matrix
Beacon = np.zeros((6,6),dtype=int)
#Enter values for '1' where you would like a black square
Beacon [1,1]=1
Beacon [1,2]=1
Beacon [2,1]=1
Beacon [3,4]=1
Beacon [4,3]=1
Beacon [4,4]=1
# Now define a function that moves through each cell in our 2D array and searches the neighbouring space
# We pass three variables:
# rows - Number of rows in our space to be searched
# cols - Number of columns in our space to be searched
# space - The 2D array space to be searched
def search_each_cell(total_rows,total_cols,space):
# 1) First, we need to start moving through each cell of our 'space'.
# To do this, we will use two nested 'for' loops
for row in range(total_rows):
for col in range(total_cols):
# So 'row' and 'col' define our current cell.
# We now need to search a neighbourhood defined as 1 cell distance around this position
# We thus need two more nested for loops. When searching this neighbouring space, we want
# to count the number of 1's. Thus we also need a variable that we can increment by 1
# everytime we find a value of 1. Lets call this integer variable count
count = 0
for row2 in range(row-1,row+2): # See here that we can define a start and end to our 'range'
for col2 in range(col-1,col+2):
# We need to check if our new position, defined by [row2,col2] is off the board
                    if (row2<0) or (row2>=total_rows) or (col2<0) or (col2>=total_cols):
# Do nothing
pass
elif row2 == row and col2 == col:
# Do nothing, its the cell we already have!
pass
# If we are not off the board or in the same cell as our starting point...
                    # We can check if this new space has a value of 1. If it does, let's count it
else:
if space[row2,col2]>0:
count=count+1
return # At the moment we are not returning anything. Seem odd? We will get back to this.
# call the above function
search_each_cell(6,6,Beacon)
print("Finished function call, nothing to report!")
###Output
Finished function call, nothing to report!
###Markdown
Now let's try to understand what this function is actually doing. As an algorithm, we have the following steps - 1) Pass the 2D Numpy array to the new function along with variables that define the total number of rows and columns - 2) We need to move through every cell and search its local neighbourhood. Moving through each cell is defined by the first two loops that cycle through both the row and column index of our 2D space. The limits are defined by the variables total_rows and total_cols - 3) For each cell, we will want to have an integer variable that counts how many 1's there are in the local neighbourhood. We need to initialise this to 0 for each cell we move through. We call this variable count - 4) Now we need to look at the local space surrounding our cell. For this we need two more nested loops that look 1 row above, 1 row below, 1 column to the left and one to the right. - 5) As we move through this neighbourhood we need to check if we are either off the board OR in the same location as the cell we are interested in! - 6) If none of the above is true, then check if a cell has a value greater than 0. If it does, increment variable count by 1. - 7) For each cell on the board, repeat steps 3-6. - 8) When the entire space has been searched, stop the function and return nothing. Exercise 3 - Implement the 4 rules of life. Now that we have the function that can search the local neighbourhood of any cell and count how many 1's and 0's there are, we can add more code that implements the 4 rules of life and thus keeps the value of our current cell or changes it. Let's remind ourselves what those rules are: - Any live cell with fewer than two live neighbours dies, as if by underpopulation. - Any live cell with two or three live neighbours lives on to the next generation. - Any live cell with more than three live neighbours dies, as if by overpopulation. - Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction. So in this exercise we take the shape that has been passed into our function and create a new shape according to the rules of life. In the exercise you will need to add a series of conditional statements that populate the value of cells in our new shape according to these rules. In other words, we can re-write the above rules as: - If our current cell is alive [=1]: a) If count < 2, current cell = 0 [it dies]. b) If 2<=count<=3, current cell = 1 [it stays alive]. c) If count>3, current cell = 0 [it dies] - If our current cell is dead [=0] a) If count == 3, current cell = 1 [born] Notice the syntax used in the last conditional: `count == 3`. When checking a value we use two equals signs (`==`) because we are not *assigning* a value, as we would in e.g. `x = 4`. In the code snippet below, I have identified where you need to implement these rules. Notice that we plot the 'Beacon' pattern before we call the function and then plot the new 2D space, which should change the pattern. With this in mind, also note that our function now returns a new version of our 2D space, which I have called 'new_space'. If correct, when you run your completed code you should see Figure 4. Figure 4. Please note that where I have added 'INSERT CODE HERE' we are using the correct indentation.
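Purely as an aside, the four rules for a single cell can also be written as a tiny stand-alone helper. The function below is hypothetical (the exercise asks you to write the same logic inline, acting on `space[row,col]`, `new_space[row,col]` and `count`), but it may help you see how the rules map onto conditionals:
```python
def next_state(alive, count):
    if alive:
        return 1 if 2 <= count <= 3 else 0   # survives only with two or three live neighbours
    return 1 if count == 3 else 0            # a dead cell with exactly three live neighbours is born

print(next_state(1, 1))   # 0 -> dies by underpopulation
print(next_state(0, 3))   # 1 -> born by reproduction
```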
###Code
#Initialise the Beacon matrix
Beacon = np.zeros((6,6),dtype=int)
#Enter values for '1' where you would like a black square
Beacon [1,1]=1
Beacon [1,2]=1
Beacon [2,1]=1
Beacon [3,4]=1
Beacon [4,3]=1
Beacon [4,4]=1
# Now define a function that moves through each cell in our 2D array and searches the neighbouring space
# We pass three variables:
# rows - Number of rows in our space to be searched
# cols - Number of columns in our space to be searched
# space - The 2D array space to be searched
def search_each_cell(total_rows,total_cols,space):
new_space = np.zeros((total_rows,total_cols),dtype=int)
# 1) First, we need to start moving through each cell of our 'space'.
# To do this, we will use two nested 'for' loops
for row in range(total_rows):
for col in range(total_cols):
# So 'row' and 'col' define our current cell index.
# We now need to search a neighbourhood defined as 1 cell distance around this position
# We thus need two more nested for loops. When searching this neighbouring space, we want
# to count the number of 1's. Thus we also need a variable that we can increment by 1
# everytime we find a value of 1. Lets call this integer variable count.
count = 0
for row2 in range(row-1,row+2): # See here that we can define a start and end to our 'range'
for col2 in range(col-1,col+2):
# We need to check if our new position, defined by [row2,col2] is off the board
                    if (row2<0) or (row2>=total_rows) or (col2<0) or (col2>=total_cols):
# Do nothing
pass
elif row2 == row and col2 == col:
# Do nothing, its the cell we already have!
pass
# If we are not off the board or in the same cell as our starting point...
                    # We can check if this new space has a value of 1. If it does, let's count it
else:
if space[row2,col2]>0:
count=count+1
#-------'INSERT CODE HERE'-------
# Here you need to introduce conditional statements that act on the value of 'count'
# Read through the narrative provided above and remember to obey the spacing rules
# You will need to check the value of space[row,col] and then, depending on whether
# this is greater than 0 OR equals to 0, implement the rules of life. I have provided
# the first example. Please do try to complete this.
if space[row,col] > 0:
if count < 2:
new_space[row,col] = 0;
elif 2<=count<=3:
new_space[row,col] = 1;
elif count > 3:
new_space[row,col] = 0;
elif space[row,col] == 0:
if count == 3:
new_space[row,col] = 1;
#--------------------------------
return new_space
# call the above function
Beacon_new = search_each_cell(6,6,Beacon)
print("Finished function call, now lets compare our pattern before and after...")
#Now visualise your results.
plt.subplot(1, 2, 1).imshow(Beacon, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Beacon - before')
plt.subplot(1, 2, 2).imshow(Beacon_new, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Beacon - after')
plt.show()
###Output
Finished function call, now lets compare our pattern before and after...
###Markdown
Exercise 4 - Loop through 20 oscillations of the 'Beacon' lifeform. Now that we have built the function that implements the 4 rules of life, all that is left for us to do is call this function a set number of times to simulate evolution across our Universe. In the code box below, drop your conditional statements from above into the relevant place and click 'Run'. Do you see the Beacon shape oscillating? As before, I have provided the code for plotting, but see if the syntax makes sense.
###Code
import numpy as np #import the numerical python library, numpy. Changing the referenced library to 'np' is solely for convenience
import matplotlib.pyplot as plt #as per the above, much easier to write over and over again
from matplotlib import animation, rc
from IPython.display import HTML
from IPython.display import clear_output
import time
#Initialise the Beacon matrix
Beacon = np.zeros((6,6),dtype=int)
#Enter values for '1' where you would like a black square
Beacon [1,1]=1
Beacon [1,2]=1
Beacon [2,1]=1
Beacon [3,4]=1
Beacon [4,3]=1
Beacon [4,4]=1
# Now define a function that moves through each cell in our 2D array and searches the neighbouring space
# We pass three variables:
# rows - Number of rows in our space to be searched
# cols - Number of columns in our space to be searched
# space - The 2D array space to be searched
def search_each_cell(total_rows,total_cols,space):
new_space = np.zeros((total_rows,total_cols),dtype=int)
# 1) First, we need to start moving through each cell of our 'space'.
# To do this, we will use two nested 'for' loops
for row in range(total_rows):
for col in range(total_cols):
# So 'row' and 'col' define our current cell index.
# We now need to search a neighbourhood defined as 1 cell distance around this position
# We thus need two more nested for loops. When searching this neighbouring space, we want
# to count the number of 1's. Thus we also need a variable that we can increment by 1
# everytime we find a value of 1. Lets call this integer variable count
count = 0
for row2 in range(row-1,row+2): # See here that we can define a start and end to our 'range'
for col2 in range(col-1,col+2):
# We need to check if our new position, defined by [row2,col2] is off the board
                    if (row2<0) or (row2>=total_rows) or (col2<0) or (col2>=total_cols):
# Do nothing
pass
elif row2 == row and col2 == col:
# Do nothing, its the cell we already have!
pass
# If we are not off the board or in the same cell as our starting point...
                    # We can check if this new space has a value of 1. If it does, let's count it
else:
if space[row2,col2]>0:
count=count+1
#-------'INSERT CODE HERE'-------
# Here you need to introduce conditional statements that act on the value of 'count'
# Read through the narrative provided above and remember to obey the spacing rules
if space[row,col] > 0:
if count < 2:
new_space[row,col] = 0;
elif 2<=count<=3:
new_space[row,col] = 1;
elif count > 3:
new_space[row,col] = 0;
elif space[row,col] == 0:
if count == 3:
new_space[row,col] = 1;
#--------------------------------
return new_space
fig, ax2 = plt.subplots()
plt.imshow(Beacon, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Beacon oscillating')
plt.show()
# Let us call the function 20 times
# Each time we are given a new shape to plot on our figure.
# Wait 0.2 seconds before moving on to the next iteration
# We should see oscillating behaviour.
for x in range(20):
clear_output(wait=True)
Beacon_new = search_each_cell(6,6,Beacon)
Beacon = Beacon_new
plt.imshow(Beacon_new, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Beacon oscillating')
plt.show()
time.sleep(0.2)
###Output
_____no_output_____
###Markdown
Populating Conway's Universe with multiple species. Now we are going to use the definition of our shapes to populate a miniature Universe in 2D space! Once we have this, following the same procedure as above, we should see some interesting movement! So let's create a space that is big enough for all of our cell types. To do this, we need to create another matrix:
###Code
Universe=np.zeros((50,50),dtype=int)
print(Universe)
###Output
[[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
...
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]
[0 0 0 ... 0 0 0]]
###Markdown
You should now see a snapshot of the Universe matrix that is empty. How do we populate our Universe with individual species? We could enter a value for each cell, but this is laborious. Rather, we are going to use our existing matrices that define our species and place them on the Universe grid. We do that by defining the exact space in the Universe we want our cells to go. This is practice in recognising the correct shape of an array/matrix and matching one to another. For example, look at the code below, which places the top left corner of an LWSS on the cell in the 12th row and 13th column of my Universe and then visualises the results. Don't forget, indexing in Python starts at 0, so for the 12th row and 13th column, I need to refer to element [11,12]. I'm also using the `:` operator, which allows us to select a block of cells bounded by a start and a finish. Why have I chosen the range given below? Feel free to change the values, but if you get the size of the space needed to fit an LWSS wrong, Python will complain that it cannot broadcast one shape into another:
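For instance, a slice that is one row too short fails loudly rather than silently resizing anything. The indices below are arbitrary, and the exact wording of the error may differ slightly between Numpy versions:
```python
# LWSS is 6 rows by 7 columns, but this slice is only 5 rows tall:
# Universe[11:16, 12:19] = LWSS
# -> ValueError: could not broadcast input array from shape (6,7) into shape (5,7)
```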
###Code
#Define the space in the Universe you would like your LWSS to appear
Universe[11:17,12:19] = LWSS
#Now visualise our Universe
plt.imshow(Universe, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Universe [with 1 LWSS]')
plt.show()
###Output
_____no_output_____
###Markdown
To finish this notebook, in the following code box we fill the Universe with a range of species and then run a simulation. Can you see how we have mapped species shapes into our Universe? It is left for you to copy the working function 'search_each_cell' from above to complete the simulation. Have a play with this! What happens if you increase the number of iterations to 300? Please note, we might want to clear our Universe from the above exercise, in which case we could write Universe[:,:]=0, but let's keep it in for now.
###Code
#Define the space in the Universe you would like your different species to appear
Universe[30:36,32:39] = LWSS
Universe[11:17,12:19] = LWSS
Universe[22:28,12:18] = Beacon
Universe[33:39,2:8] = Beacon
Universe[19:25,32:38] = Toad
Universe[1:6,1:6] = Glider
Universe[6:11,25:30] = Boat
plt.imshow(Universe, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Universe [with multiple cell types]')
plt.show()
#-------'INSERT CODE HERE'-------
def search_each_cell(total_rows,total_cols,space):
new_space = np.zeros((total_rows,total_cols),dtype=int)
# 1) First, we need to start moving through each cell of our 'space'.
# To do this, we will use two nested 'for' loops
for row in range(total_rows):
for col in range(total_cols):
# So 'row' and 'col' define our current cell index.
# We now need to search a neighbourhood defined as 1 cell distance around this position
# We thus need two more nested for loops. When searching this neighbouring space, we want
# to count the number of 1's. Thus we also need a variable that we can increment by 1
# everytime we find a value of 1. Lets call this integer variable count
count = 0
for row2 in range(row-1,row+2): # See here that we can define a start and end to our 'range'
for col2 in range(col-1,col+2):
# We need to check if our new position, defined by [row2,col2] is off the board
                    if (row2<0) or (row2>=total_rows) or (col2<0) or (col2>=total_cols):
# Do nothing
pass
elif row2 == row and col2 == col:
# Do nothing, its the cell we already have!
pass
# If we are not off the board or in the same cell as our starting point...
                    # We can check if this new space has a value of 1. If it does, let's count it
else:
if space[row2,col2]>0:
count=count+1
# Here you need to introduce conditional statements that act on the value of 'count'
# Read through the narrative provided above and remember to obey the spacing rules
if space[row,col] > 0:
if count < 2:
new_space[row,col] = 0;
elif 2<=count<=3:
new_space[row,col] = 1;
elif count > 3:
new_space[row,col] = 0;
elif space[row,col] == 0:
if count == 3:
new_space[row,col] = 1;
return new_space
#--------------------------------
fig, ax2 = plt.subplots(figsize=(12, 12))
plt.imshow(Universe, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Universe simulation')
plt.show()
for x in range(100):
clear_output(wait=True)
Universe_new = search_each_cell(50,50,Universe)
Universe = Universe_new
plt.imshow(Universe_new, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Universe simulation')
plt.show()
time.sleep(0.2)
###Output
_____no_output_____ |
Assignments/Assignment_2_final.ipynb | ###Markdown
Second Assignment 1) Create a function called **"even_squared"** that receives an integer value **N**, and returns a list containing, in ascending order, the square of each of the even values, from 1 to N, including N if applicable.
###Code
def even_squared(N):
'''returns a list of squared even values from 1 to N'''
    my_range = range(1, N + 1)   # go up to and including N itself
squares = [value**2 for value in my_range if value%2 ==0]
return (squares)
even_squared(5)
###Output
_____no_output_____
###Markdown
2) Using a while loop and the **input()** function, read an indefinite amount of **integers** until the number read is **-1**. After this process, print two lists on the screen: The first containing the even integers, and the second containing the odd integers. Both must be in ascending order.
###Code
even_list = []
odd_list = []
while True:
i = int(input("Type in an integer: "))
if i == -1:
break
if i%2 ==0:
even_list.append(i)
else:
odd_list.append(i)
print(sorted(even_list))
print(sorted(odd_list))
###Output
Type in an integer: 2
Type in an integer: 4
Type in an integer: 5
Type in an integer: -1
###Markdown
3) Create a function called **"even_account"** that receives a list of integers, counts the number of existing even elements, and returns this count.
###Code
def even_account(l):
'''count the number of given even numbers'''
list1 = len([num for num in l if num%2 ==0])
return list1
my_list = [4, 5, 2, 12]
even_account(my_list)
###Output
_____no_output_____
###Markdown
4) Create a function called **"squared_list"** that receives a list of integers and returns another list whose elements are the squares of the elements of the first.
###Code
def square_list(lis):
    '''return a list of squares of all the elements in the given list'''
squares = [value **2 for value in lis]
return squares
my_test_list = [1, 3]
square_list(my_test_list)
###Output
_____no_output_____
###Markdown
5) Create a function called **"descending"** that receives two lists of integers and returns a single list, which contains all the elements in descending order, and may include repeated elements.
###Code
def descending(list1, list2):
    '''merge the elements of two lists in descending order'''
list3 = (list1 +list2)
list3.sort(reverse=True)
return (list3)
my_first = [2, 4, 3]
my_second = [5, 7, 3]
descending(my_first, my_second)
###Output
_____no_output_____
###Markdown
6) Create a function called **"adding"** that receives a list **A**, and an arbitrary number of integers as input. Return a new list containing the elements of **A** plus the integers passed as input, in the order in which they were given. Here is an example: >```python>>>> A = [10,20,30]>>>> adding(A, 4, 10, 50, 1)> [10, 20, 30, 4, 10, 50, 1]```
###Code
def adding(A, *num):
'''combine the received list and the arbitrary number of integers to a new list
in the order that was given'''
numb = [value for value in num]
my_list = A + numb
    return my_list
A = [1, 2, 3]
adding(A, 4,5)
###Output
[1, 2, 3, 4, 5]
###Markdown
7) Create a function called **"intersection"** that receives two input lists and returns another list with the values that belong to the two lists simultaneously (intersection) without repetition of values and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2, 3]>>>> B = [-1, 2, 3, 6, 8]>>>> intersection(A,B)> [2, 3]```
###Code
def intersection(list1, list2):
    '''receive 2 lists and return one list with the values common to both, without duplicates and in ascending order'''
    my_intersection = []
    for element in list1:
        if element in list2 and element not in my_intersection:
            my_intersection.append(element)
    return sorted(my_intersection)
A = [1, 2, 3]
B = [2, 3, 4]
intersection(A, B)
###Output
_____no_output_____
###Markdown
8) Create a function called **"union"** that receives two input lists and returns another list with the union of the elements of the two received, without repetition of elements and in ascending order. Use only lists (do not use sets); loops and conditionals. See the example: >```python>>>> A = [-2, 0, 1, 2]>>>> B = [-1, 1, 2, 10]>>>> union(A,B)> [-2, ,-1, 0, 1, 2, 10]```
###Code
def union(list1,list2):
'''receive two input lists, return new list with union of
elements without duplicates in ascending order'''
for e in list2:
if e not in list1:
list1.append(e)
list1.sort()
return list1
A = [1, 5, 3]
B = [2, 3, 1]
union(A, B)
###Output
_____no_output_____
###Markdown
9) Generalize the **"intersection"** function so that it receives an indefinite number of lists and returns the intersection of all of them. Call the new function **intersection2**.
###Code
A = [1, 2, 3]
B = [2, 3, 4]
C = [6, 7, 4]
def intersection2(*lists):
    '''generalised intersection function: elements common to every provided list'''
    my_intersection = []
    if not lists:
        return my_intersection
    for item in lists[0]:
        if item not in my_intersection and all(item in other for other in lists[1:]):
            my_intersection.append(item)
    return my_intersection
intersection2(A, B, C)
###Output
_____no_output_____ |
src/ipynbfiles/run.ipynb | ###Markdown
###Code
#
#
# TODO:
# - read file data TIK
# - delete enters TIK
# - read subtitle folders list
# - split by " " and search in TIK
# - count them TIK
# - save to csv
# - sort list by counting TIK
# - remove duplicate TIK
#
#
# - Optional : 1 july 2021
# - write it to the picture for reading better or printing
#
#
#
import csv
from google.colab import drive
drive.mount('/content/drive')
# this block remove blank lines and replace enters with space
def clean_file(filename):
lines = ""
with open(filename) as f:
for line in f:
if not line.strip(): continue # skip the empty line
lines += line
return lines.replace("\n", " ")
#test and track
lines = clean_file("/content/drive/MyDrive/dataset.txt")
# word filters
common_prepositions = "a the about above across after against among around at before behind below beside between by down during for from in inside into near of off on out over through to toward under up with"
common_prepositions = common_prepositions.split()
less_common_prepositions = "aboard along amid as beneath beyond but concerning considering despite except following like minus next onto opposite outside past per plus regarding round save since than till underneath unlike until upon versus via within without"
less_common_prepositions = less_common_prepositions.split()
full_perps = common_prepositions+less_common_prepositions
# define counter number of repeat every string
def count_repeats(input_string):
countdict = []
words = input_string.split() # list of words in a input string
    for i in range(len(words)):
        # count how often this word appears in the list of words (not substring matches)
        countdict.append([words[i], words.count(words[i])])
return countdict
# duplicator remover get a list for input
words = count_repeats(lines)
def remove_duplicates(wordslist):
res = []
    [res.append(x) for x in wordslist if x not in res]
return res
words = remove_duplicates(words)
len(words)
indexwords = []
[indexwords.append(x[0]) for x in words]
len(indexwords)
# remove useless words
junk_list = []
[junk_list.append(x) for x in indexwords if "[" in x and "]" in x or len(x) < 3 or x in full_perps]
print(len(words))
[words.remove(x) for x in words if x[0] in junk_list]
print(len(words))
# sort by repeats
words.sort(key=lambda x:x[1], reverse=True)
words
#save results in a csv file
def save_to_csv():
fields = ['words', 'repeats']
with open('result.csv', 'w') as f:
# using csv.writer method from CSV package
write = csv.writer(f)
write.writerow(fields)
write.writerows(words)
save_to_csv()
###Output
_____no_output_____ |
src/Topics/Notes/modules.ipynb | ###Markdown
###Code
!pip install ExifRead
import exifread
# Open image file for reading (binary mode)
f = open("foto_exif.jpeg", 'rb')
# Return Exif tags
tags = exifread.process_file(f)
for tag in tags.keys():
if tag not in ('JPEGThumbnail', 'TIFFThumbnail', 'Filename', 'EXIF MakerNote'):
print(f"Key: {tag}, value {tags[tag]}")
!pip install requests
from random import randint
import requests
# url = f'http://numbersapi.com/{randint(1,2019)}/year'
# r = requests.get(url)
# print(r.text, url)
class Number:
url = 'http://numbersapi.com'
types = ['trivia', 'math', 'year', 'date']
def __init__(self, number):
self.number = number
self.facts = {}
for type in self.types:
self.facts[type] = self.get_fact(type)
def get_fact(self, type):
url = f'{self.url}/{self.number}/{type}'
r = requests.get(url)
return r.text
def trivia(self):
if not hasattr(self, "_trivia"):
self._trivia = self.get_fact('trivia')
return self._trivia
def year(self):
if not hasattr(self, "_year"):
self._year = self.get_fact('year')
return self._year
n1945 = Number(randint(1,2019))
print(n1945.year())
print(n1945.trivia())
from requests import get
class Weather:
url_base = 'http://api.openweathermap.org/data'
version = 2.5
forecast_endpoint = 'forecast/daily'
api_key = '51b03e8d15d7df69f88921b3c1ba2f69'
@staticmethod
def convert(k):
c = k - 273.15
return 9*c/5 + 32
def __init__(self, city, country, num_days=16):
if num_days < 1 or num_days > 16:
raise Exception('num_days must be between 1 and 16!')
self.city = city
self.country = country
self.days = num_days
def api_call(self):
args = f'q={self.city},{self.country}&APPID={self.api_key}&cnt={self.days}'
url = f'{self.url_base}/{self.version}/{self.forecast_endpoint}?{args}'
r = get(url)
return r.json()
def get_weather(self, day):
if day < 1 or day > self.days:
raise Exception('days must be between 1 and 16!')
data = self.api_call()
return self.convert(data['list'][day-1]['temp']['day'])
nyc = Weather('new york', 'usa', 2)
print(nyc.get_weather(2))
###Output
83.03000000000004
###Markdown
Modules & Packages. In Python, a `module` is a Python source file that contains pre-defined objects like variables, functions, classes, and other items we'll talk about soon. A Python `package`, sometimes used synonymously with the term `library`, is simply a collection of Python modules. The diagram below can show you this hierarchy visually. Essentially, packages and modules are a means of `modularizing` code by grouping functions and objects into specific areas of focus. For instance, the `statsmodels` module ([here](https://www.statsmodels.org/)) contains code useful to a data scientist. The `Pyglet` library ([here](http://www.pyglet.org/)) contains code useful to game developers needing shortcuts for 3D game animation. But a data scientist rarely needs Pyglet, and a game developer rarely needs statsmodels. `Modular programming` allows us to break out modules and packages dealing with specific topics in order to make the standard library more efficient for the general public. It's sort of like "a la carte" code. This becomes especially valuable once you scale your programs. Who needs that extra baggage? Global vs. Local Scope. One of the reasons Python leverages modular programming is that it helps avoid conflicts between `local` and `global` variables by creating separate `namespaces`. `Namespaces` are the places where variables are stored, and they exist on several independent levels, including **local, global, built-in, and nested namespaces**. For instance, the functions `builtins.open()` and `os.open()` are distinguished by their namespaces. Namespaces also aid readability and maintainability by making it clear which module implements a function. At a high level, a variable declared outside a function has `global scope`, meaning you can access it inside or outside functions. A variable declared within a function has `local scope`, which means you can only access it within the object where you created it. If you try to access it outside that, you will get a `NameError` telling you that the variable is not defined. We'll get more into how to use and interpret local and global scope as we dive into modules and functions... Importing Modules & Packages. Importing modules and packages is very easy and saves you a lot of time you'd otherwise spend reinventing the wheel. Modules can even import other modules! The best practice is to place all import statements at the top of your script file so you can easily see everything you've imported at a glance. Importing Modules. Let's look at a few different ways to import modules and their contents. The simplest way to import a module is to simply write `import module_name`. This will allow you to access all the contents within that module. If you want to easily find out exactly what is in your newly imported module, you can call the built-in function `dir()` on it. This will list all types of names: variables, modules, functions, etc.
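To make the scope rules concrete, here is a small, self-contained sketch (the variable and function names are invented purely for illustration):
```python
message = "I am global"           # global scope: visible anywhere in this file

def shout():
    local_message = "I am local"  # local scope: only exists inside shout()
    print(message, "/", local_message)

shout()                           # prints: I am global / I am local
print(message)                    # fine, message is global
# print(local_message)            # would raise NameError: name 'local_message' is not defined
```
Keeping names tucked away in their own namespaces like this is exactly what lets two modules define an `open()` function without clobbering each other.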
###Code
import math
dir(math)
# prints ['__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'acos', 'acosh', 'asin', ... etc.]
###Output
_____no_output_____
###Markdown
You can also import one specific object from a module like this:
###Code
from math import sqrt
sqrt(25) # 5.0
###Output
_____no_output_____
###Markdown
Notice that with a plain `import math`, we have to include the `math.` prefix when calling the function (i.e. `math.sqrt(25)`). Because of *variable scope*, you need to reference the `namespace` where `sqrt` is defined; importing the module alone does not put `sqrt` into your global namespace, it still lives inside the `math` module. Using `from math import sqrt`, as above, brings the name itself into your namespace so you can call `sqrt(25)` directly. However, you can also help avoid verbose code by importing modules and their items like this:
###Code
from math import sqrt as s
s(25) # 5.0
###Output
_____no_output_____
###Markdown
By importing `sqrt` as `s`, you can call the function as `s()` instead of `math.sqrt()`. The same works for modules. Note the difference in how we reference the square root function though...
###Code
import math as m
m.sqrt(25) # 5.0
###Output
_____no_output_____
###Markdown
...we only renamed the module in this import and not the function. So we have to go back to the `module_name.function()` syntax. *However*, because we renamed the module on import, we can reference it in function calls by its shortened name, i.e. `m.sqrt`.
## Managing Dependencies
In addition to "built-in" modules, we have the ability in python to create, distribute and most importantly *consume* community defined python modules.
This is powerful because anyone who builds something useful has the ability to share with the larger python community. Creating and distributing python modules is outside the scope of this class, but we can consume any module we'd like by running the:
pip install [module_name]
Modules can be found in [**PyPI**](https://pypi.org/), or, the Python Package Index. Any registered module in pypi is installable via pip.
However, in order to safely install modules across projects (i.e., perhaps project A requires module 1 v1, but then project B, started a year later, needs to use module 1 v2) we need to create what are called **virtual environments**, isolated python environments where we can safely install our pip modules and rest assured that they don't interfere with other projects / the system at large.
In order to create a virtual environment:
python3 -m venv .env
source .env/bin/activate
The `.env` folder contains everything needed for this **"virtualenv"**. We go *inside* the env by running the `source .env/bin/activate` command. To deactivate (while in the virtualenv):
deactivate
The best part about this is not only can we install our pip modules safely, we can also do this:
pip freeze > requirements.txt
This will collect all the installed pip modules in the virtual env and store into a file (that we are calling `requirements.txt`). This is useful because if we ever wanted to run this software from a different computer, all we would have to do is pull down the python files, create a new virtualenv and then:
pip install -r requirements.txt
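For reference, `requirements.txt` is just a plain text file listing pinned packages, one per line; its exact contents depend on whatever you installed in the virtualenv. Purely as an illustration, it might contain entries along these lines:

    flask==2.2.2
    requests==2.28.1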
Deploy Python bot on Messenger.ipynb | ###Markdown
Deploying Python bot on Facebook Messenger. Let's say you have a chatbot written in Python and you want it available for many people to use. Why not deploy it on Facebook Messenger? In this tutorial, I will show you how to make the necessary setup. Step 1. Installing necessary modules and programs. You need three main things to connect your Python code to Messenger: 1. flask [link](http://flask.pocoo.org/) 2. requests [link](https://2.python-requests.org//en/master/) 3. ngrok [link](https://ngrok.com/product) You can install them by following the links above. If you don't have them already, you are most likely missing other modules required to run them. You need to install all the modules to run the Python code I will post below. Before that, let me explain the purpose of these three modules/programs. `flask` is a module used to create a local server on your computer that can receive messages from Messenger. `ngrok` is a program that will forward an HTTP connection from Messenger to your computer. Lastly, `requests` is a module that will send the reply from your bot back to Messenger. Now, save the following Python code in `server.py` and put it inside a directory where your chatbot code is located.
###Code
from flask import Flask, request
import requests
app = Flask(__name__)
FB_API_URL = 'https://graph.facebook.com/v2.6/me/messages'
VERIFY_TOKEN = ''  # <paste your verify token here>
PAGE_ACCESS_TOKEN = ''  # <paste your page access token here>
@app.route("/webhook",methods=['GET','POST'] )
def listen():
"""This is the main function flask uses to
listen at the `/webhook` endpoint"""
if request.method == 'GET':
return verify_webhook(request)
if request.method == 'POST':
payload = request.json
event = payload['entry'][0]['messaging']
for x in event:
if is_user_message(x):
text = x['message']['text']
sender_id = x['sender']['id']
respond(sender_id, text)
return "ok"
def verify_webhook(req):
if req.args.get("hub.verify_token") == VERIFY_TOKEN:
return req.args.get("hub.challenge")
else:
return "incorrect"
def respond(sender, message):
"""Formulate a response to the user and
pass it on to a function that sends it."""
response = get_bot_response(message)
send_message(sender, response)
def get_bot_response(message):
"""This is just a dummy function, returning a variation of what
the user said. Replace this function with one connected to chatbot."""
return "This is a dummy response to '{}'".format(message)
def is_user_message(message):
"""Check if the message is a message from the user"""
return (message.get('message') and
message['message'].get('text') and
not message['message'].get("is_echo"))
def send_message(recipient_id, text):
"""Send a response to Facebook"""
payload = {
'message': {
'text': text
},
'recipient': {
'id': recipient_id
},
'notification_type': 'regular'
}
auth = {
'access_token': PAGE_ACCESS_TOKEN
}
response = requests.post(
FB_API_URL,
params=auth,
json=payload
)
return response.json()
###Output
_____no_output_____
###Markdown
Step 2. Understanding the code that accesses Messenger messages and sends them to your computer. Now I will explain what the above code does in detail. `flask` will create `app`, which will listen for HTTP requests containing messages sent to `localhost:5000/webhook`, which is your localhost server. Inside the `listen()` function, `request` is used by the `verify_webhook(req)` function to authenticate the connection between your `app` and Facebook. Once the `listen()` function sees that a message from Messenger is valid, it passes it to the `respond(sender, message)` function, which accesses the bot's code through the `get_bot_response(message)` function. Finally, `send_message(recipient_id, text)` will send the bot's response back to Messenger. Note that `VERIFY_TOKEN` and `PAGE_ACCESS_TOKEN` have been left out. `VERIFY_TOKEN` is any string you want to use as a password to let Facebook know that your localhost server wants to listen for messages. We will come back to `PAGE_ACCESS_TOKEN` later. Step 3. Running ngrok to start the localhost server. Once you have installed the ngrok program on your computer, start it and don't turn it off while setting up the connection. In a terminal window, type `ngrok http 5000`. This will set up an HTTP endpoint that will be forwarded to your computer on port 5000. Your HTTP endpoint will look something like `http://9cbec3d0.ngrok.io`.  Step 4. Setting up a Facebook page and getting the Page Access Token. Let's head over to the Facebook Developer website by clicking this [link](https://developers.facebook.com/). If you already have a Facebook account, simply log in. Now, click on `My Apps` in the top right corner and click `Create an App`. Create the display name of your App and click `Create App ID`. You have to do a quick security check to prove that you are not a robot. The display name can be anything you choose. I chose to use BookBot for demonstration purposes. You will be led to your bot's dashboard. First, click `Skip` on the first page you see, as in the image above. Then, if you scroll down, you will see the `Add a Product` section. Click on the `Set Up` button for Messenger. On the next page, scroll down until you see the `Access Tokens` section. You don't have a Page Access Token for your chatbot yet because you don't have a Facebook page yet. What you want to do is create a new page. When you click on `Create a New Page`, you will be led to the following page: Click on either option to create a page. I have decided to create my BookBot page as a community page, but it is up to you. Once you click through a few options for the profile picture, you will finally be able to see your bot's Facebook page. Now, time to get back to the Facebook Developer page for your bot. When you refresh the page, you can select your bot's page to get an Access Token. But you are not quite there yet. You will be told that you have to edit your permissions to get the access token. Just click on `Edit Permissions` and follow the instructions until your bot is linked to the Facebook page. Now you will see the Page Access Token appear. Click on it to copy it to the clipboard and head over to `server.py`. Step 5. Starting the chatbot server. First, copy and paste the Page Access Token into `PAGE_ACCESS_TOKEN` in `server.py`. Save the file. Second, in a separate terminal window, `cd` to the directory where you have your chatbot code and `server.py`. Third, in the terminal window, enter `set FLASK_APP=server.py`. Then enter `flask run`. If everything goes well, you will see something similar to the following screen:
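Before wiring Facebook up to your server, you can sanity-check the verification logic locally. The snippet below is only an illustration: the token value is a placeholder that must match whatever you put in `server.py`, and it simply sends the same kind of GET request Facebook will send during webhook verification while `flask run` is active:
```python
import requests

VERIFY_TOKEN = 'my-secret-token'   # must match the value in server.py

r = requests.get(
    'http://localhost:5000/webhook',
    params={'hub.verify_token': VERIFY_TOKEN, 'hub.challenge': '12345'}
)
print(r.text)   # '12345' if the token matches, 'incorrect' otherwise
```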
Step 6. Setting up the webhook. Go back to the ngrok window and copy the address starting with `https`. Now go over to the Facebook Developer page for your bot. Underneath the `Access Tokens` section, you will see the `Webhooks` section. Click `Subscribe to Event`. Then paste the address you copied into the `Callback URL` field. Make sure to end the address with `/webhook`, because this is the route your flask app listens on. Ignore the callback URL in the following image. Next, fill out `Verify Token` with the `VERIFY_TOKEN` from `server.py`. Finally, click `messages` and `messages_postbacks` in the Subscription Fields and click the `Verify and Save` button. Lastly, choose the page you want to subscribe to and click `Subscribe`. Now you are all set! How do you know? On the navigation pane to the left, if you see the word `Webhook` with a green circle and a check mark, all of the setup has been completed successfully. Step 7. Test your chatbot! Now, if you didn't get any errors, everything has been set up! Once you are on your bot's Facebook page, click on the three-dot icon and click `View as Page Visitor`. Now, the moment of truth! Click on `Send Message` and, when the chatbox pops up, start typing in questions you have trained your bot for! Open the chatbox in Messenger mode and here is what you will see. *Note: the BookBot you see in the examples above has been changed to Testbot due to a technical issue.* You can see the dummy chatbot code from `server.py` showing up in the Messenger chatbox. If you have gotten this far, congratulations! Last but not least, you probably want your chatbot to do more than simply repeat what the user says. You can use the following simple chatbot code to simulate conversation. Copy and save the following as `chatbot.py` and make sure it is in the same directory as `server.py`.
###Code
import re
import random
rules = {"I want (.*)":["What would it mean if you got {0}",
"Why do you want {0}",
"What's stopping you from getting {0}"],
"if (.*)":["Do you really think it's likely that {0}",
"Do you wish that {0}",
"What do you think about {0}",
"Really--if {0}"],
"do you think (.*)":["{0} Absolutely.",
"No chance"],
"do you remember (.*)":["Did you think I would forget {0}",
"Why haven't you been able to forget {0}",
"What about {0}",
"Yes .. and?"]
}
# Define respond()
def robo_respond(message):
# Call match_rule
response, phrase = match_rule(rules, message)
if '{0}' in response:
# Replace the pronouns in the phrase
phrase = replace_pronouns(phrase)
# Include the phrase in the response
response = response.format(phrase)
return response
# Define match_rule()
def match_rule(rules, message):
response, phrase = "default", None
# Iterate over the rules dictionary
for pattern, responses in rules.items():
# Create a match object
match = re.search(pattern, message)
if match is not None:
# Choose a random response
response = random.choice(responses)
if '{0}' in response:
phrase = match.group(1)
# Return the response and phrase
return response, phrase
def replace_pronouns(message):
message = message.lower()
if 'me' in message:
# Replace 'me' with 'you'
return re.sub('me', 'you', message)
if 'my' in message:
# Replace 'my' with 'your'
return re.sub('my', 'your', message)
if 'your' in message:
# Replace 'your' with 'my'
return re.sub('your', 'my', message)
if 'you' in message:
# Replace 'you' with 'I'
return re.sub('you', 'I', message)
return message
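# Optional quick check (not part of the original tutorial): these calls only run when
# chatbot.py is executed directly, not when it is imported by server.py, and the two
# messages are just illustrative inputs matching the patterns defined in `rules` above.
if __name__ == '__main__':
    print(robo_respond("I want a holiday"))
    print(robo_respond("do you remember my birthday"))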
###Output
_____no_output_____ |
ipython notebooks/feature_sets.ipynb | ###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Feature Sets **Learning Objective:** Create a minimal set of features that performs just as well as a more complex feature set So far, we've thrown all of our features into the model. Models with fewer features use fewer resources and are easier to maintain. Let's see if we can build a model on a minimal set of housing features that will perform equally as well as one that uses all the features in the data set. SetupAs before, let's load and prepare the California housing data.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Scale the target to be in units of thousands of dollars.
output_targets["median_house_value"] = (
california_housing_dataframe["median_house_value"] / 1000.0)
return output_targets
# Choose the first 12000 (out of 17000) examples for training.
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
# Choose the last 5000 (out of 17000) examples for validation.
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
# Double-check that we've done the right thing.
print("Training examples summary:")
display.display(training_examples.describe())
print("Validation examples summary:")
display.display(validation_examples.describe())
print("Training targets summary:")
display.display(training_targets.describe())
print("Validation targets summary:")
display.display(validation_targets.describe())
###Output
Training examples summary:
###Markdown
Task 1: Develop a Good Feature Set**What's the best performance you can get with just 2 or 3 features?**A **correlation matrix** shows pairwise correlations, both for each feature compared to the target and for each feature compared to other features.Here, correlation is defined as the [Pearson correlation coefficient](https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient). You don't have to understand the mathematical details for this exercise.Correlation values have the following meanings: * `-1.0`: perfect negative correlation * `0.0`: no correlation * `1.0`: perfect positive correlation
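If you would like to see those three extremes come straight out of pandas, here is a tiny made-up example; the numbers are invented purely so that the columns are perfectly correlated, perfectly anti-correlated, and uncorrelated with column `a`:
```python
import pandas as pd

demo = pd.DataFrame({
    'a': [1, 2, 3, 4],
    'b': [2, 4, 6, 8],      # moves exactly with 'a'     -> correlation +1.0
    'c': [8, 6, 4, 2],      # moves exactly against 'a'  -> correlation -1.0
    'd': [1, -1, -1, 1],    # no linear relationship     -> correlation  0.0
})
print(demo.corr())
```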
###Code
correlation_dataframe = training_examples.copy()
correlation_dataframe["target"] = training_targets["median_house_value"]
correlation_dataframe.corr()
###Output
_____no_output_____
###Markdown
Features that have strong positive or negative correlations with the target will add information to our model. We can use the correlation matrix to find such strongly correlated features.We'd also like to have features that aren't so strongly correlated with each other, so that they add independent information.Use this information to try removing features. You can also try developing additional synthetic features, such as ratios of two raw features.For convenience, we've included the training code from the previous exercise.
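Before that, one handy trick while you experiment (this simply reuses the `correlation_dataframe` built above; it is a suggestion rather than part of the exercise): rank every column by its correlation with the target, then pick candidates from the top and bottom of that ranking.
```python
# Most positively correlated features first, most negatively correlated last.
print(correlation_dataframe.corr()['target'].sort_values(ascending=False))
```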
###Code
def construct_feature_columns(input_features):
"""Construct the TensorFlow Feature Columns.
Args:
input_features: The names of the numerical input features to use.
Returns:
A set of feature columns
"""
return set([tf.feature_column.numeric_column(my_feature)
for my_feature in input_features])
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels

def train_model(
        learning_rate,
        steps,
        batch_size,
        training_examples,
        training_targets,
        validation_examples,
        validation_targets):
    """Trains a linear regression model.

    In addition to training, this function also prints training progress information,
    as well as a plot of the training and validation loss over time.

    Args:
      learning_rate: A `float`, the learning rate.
      steps: A non-zero `int`, the total number of training steps. A training step
        consists of a forward and backward pass using a single batch.
      batch_size: A non-zero `int`, the batch size.
      training_examples: A `DataFrame` containing one or more columns from
        `california_housing_dataframe` to use as input features for training.
      training_targets: A `DataFrame` containing exactly one column from
        `california_housing_dataframe` to use as target for training.
      validation_examples: A `DataFrame` containing one or more columns from
        `california_housing_dataframe` to use as input features for validation.
      validation_targets: A `DataFrame` containing exactly one column from
        `california_housing_dataframe` to use as target for validation.

    Returns:
      A `LinearRegressor` object trained on the training data.
    """
    periods = 10
    steps_per_period = steps / periods

    # Create a linear regressor object.
    my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
    linear_regressor = tf.estimator.LinearRegressor(
        feature_columns=construct_feature_columns(training_examples),
        optimizer=my_optimizer
    )

    # Create input functions.
    training_input_fn = lambda: my_input_fn(training_examples,
                                            training_targets["median_house_value"],
                                            batch_size=batch_size)
    predict_training_input_fn = lambda: my_input_fn(training_examples,
                                                    training_targets["median_house_value"],
                                                    num_epochs=1,
                                                    shuffle=False)
    predict_validation_input_fn = lambda: my_input_fn(validation_examples,
                                                      validation_targets["median_house_value"],
                                                      num_epochs=1,
                                                      shuffle=False)

    # Train the model, but do so inside a loop so that we can periodically assess
    # loss metrics.
    print("Training model...")
    print("RMSE (on training data):")
    training_rmse = []
    validation_rmse = []
    for period in range(0, periods):
        # Train the model, starting from the prior state.
        linear_regressor.train(
            input_fn=training_input_fn,
            steps=steps_per_period,
        )
        # Take a break and compute predictions.
        training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
        training_predictions = np.array([item['predictions'][0] for item in training_predictions])
        validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
        validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
        # Compute training and validation loss.
        training_root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(training_predictions, training_targets))
        validation_root_mean_squared_error = math.sqrt(
            metrics.mean_squared_error(validation_predictions, validation_targets))
        # Occasionally print the current loss.
        print("  period %02d : %0.2f" % (period, training_root_mean_squared_error))
        # Add the loss metrics from this period to our list.
        training_rmse.append(training_root_mean_squared_error)
        validation_rmse.append(validation_root_mean_squared_error)
    print("Model training finished.")

    # Output a graph of loss metrics over periods.
    plt.ylabel("RMSE")
    plt.xlabel("Periods")
    plt.title("Root Mean Squared Error vs. Periods")
    plt.tight_layout()
    plt.plot(training_rmse, label="training")
    plt.plot(validation_rmse, label="validation")
    plt.legend()

    return linear_regressor
###Output
_____no_output_____
###Markdown
Spend 5 minutes searching for a good set of features and training parameters. Then check the solution to see what we chose. Don't forget that different features may require different learning parameters.
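If you want a quick way to shortlist candidates, one option (not part of the official starter code) is to rank features by how strongly they correlate with the target, reusing the `training_examples` and `training_targets` DataFrames prepared earlier:
###Code
# Hedged sketch: rank candidate features by their correlation with the target.
# Assumes `training_examples` and `training_targets` were built in earlier cells,
# as in the rest of this exercise.
correlation_dataframe = training_examples.copy()
correlation_dataframe["target"] = training_targets["median_house_value"]
correlation_dataframe.corr()["target"].sort_values(ascending=False)
###Output
_____no_output_____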
###Code
#
# Your code here: add your features of choice as a list of quoted strings.
#
minimal_features = [
    "median_income",
    "rooms_per_person"
]

assert minimal_features, "You must select at least one feature!"

minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]

#
# Don't forget to adjust these parameters.
#
train_model(
    learning_rate=0.1,
    steps=500,
    batch_size=5,
    training_examples=minimal_training_examples,
    training_targets=training_targets,
    validation_examples=minimal_validation_examples,
    validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 132.25
period 01 : 86.39
period 02 : 85.04
period 03 : 85.00
period 04 : 85.00
period 05 : 85.05
period 06 : 84.82
period 07 : 84.74
period 08 : 84.92
period 09 : 84.72
Model training finished.
###Markdown
Solution

Click below for a solution.
###Code
minimal_features = [
    "median_income",
    "latitude",
]

minimal_training_examples = training_examples[minimal_features]
minimal_validation_examples = validation_examples[minimal_features]

_ = train_model(
    learning_rate=0.01,
    steps=500,
    batch_size=5,
    training_examples=minimal_training_examples,
    training_targets=training_targets,
    validation_examples=minimal_validation_examples,
    validation_targets=validation_targets)
###Output
_____no_output_____
###Markdown
Task 2: Make Better Use of Latitude

Plotting `latitude` vs. `median_house_value` shows that there really isn't a linear relationship there. Instead, there are a couple of peaks, which roughly correspond to Los Angeles and San Francisco.
###Code
plt.scatter(training_examples["latitude"], training_targets["median_house_value"])
###Output
_____no_output_____
###Markdown
**Try creating some synthetic features that do a better job with latitude.**

For example, you could have a feature that maps `latitude` to a value of `|latitude - 38|` and call it `distance_from_san_francisco`. Or you could break the space into 10 different buckets (`latitude_32_to_33`, `latitude_33_to_34`, etc.), each taking the value `1.0` if `latitude` falls within that bucket's range and `0.0` otherwise.

Use the correlation matrix to help guide development, and then add the new features to your model if you find something that looks good.

What's the best validation performance you can get?
###Code
#
# YOUR CODE HERE: Train on a new data set that includes synthetic features based on latitude.
#
selected_examples = pd.DataFrame()
selected_examples["distance"] = abs(training_examples["latitude"] - 38)
selected_examples
###Output
_____no_output_____
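###Markdown
Before jumping to the solution, here is one possible (unofficial) way to try the `distance_from_san_francisco` idea described above, reusing the `train_model` function defined earlier; the hyperparameter values are just starting guesses:
###Code
# Hedged sketch: train on median_income plus a synthetic distance-from-SF feature.
# The DataFrames and train_model come from earlier cells; the hyperparameters
# below are assumptions to experiment with, not recommended settings.
distance_training_examples = pd.DataFrame()
distance_training_examples["median_income"] = training_examples["median_income"]
distance_training_examples["distance_from_san_francisco"] = (training_examples["latitude"] - 38).abs()

distance_validation_examples = pd.DataFrame()
distance_validation_examples["median_income"] = validation_examples["median_income"]
distance_validation_examples["distance_from_san_francisco"] = (validation_examples["latitude"] - 38).abs()

_ = train_model(
    learning_rate=0.01,
    steps=500,
    batch_size=5,
    training_examples=distance_training_examples,
    training_targets=training_targets,
    validation_examples=distance_validation_examples,
    validation_targets=validation_targets)
###Output
_____no_output_____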
###Markdown
Solution

Click below for a solution.

Aside from `latitude`, we'll also keep `median_income`, to compare with the previous results.

We decided to bucketize the latitude. This is fairly straightforward in Pandas using `Series.apply`.
###Code
# Wrap the zip in a list so the ranges can be iterated over more than once;
# a bare zip object would be exhausted after the first call below.
LATITUDE_RANGES = list(zip(range(32, 44), range(33, 45)))

def select_and_transform_features(source_df):
    selected_examples = pd.DataFrame()
    selected_examples["median_income"] = source_df["median_income"]
    for r in LATITUDE_RANGES:
        selected_examples["latitude_%d_to_%d" % r] = source_df["latitude"].apply(
            lambda l: 1.0 if l >= r[0] and l < r[1] else 0.0)
    return selected_examples

selected_training_examples = select_and_transform_features(training_examples)
selected_validation_examples = select_and_transform_features(validation_examples)

_ = train_model(
    learning_rate=0.5,
    steps=500,
    batch_size=5,
    training_examples=selected_training_examples,
    training_targets=training_targets,
    validation_examples=selected_validation_examples,
    validation_targets=validation_targets)
###Output
Training model...
RMSE (on training data):
period 00 : 84.00
period 01 : 85.72
period 02 : 86.31
period 03 : 81.77
period 04 : 80.64
period 05 : 80.75
period 06 : 80.63
period 07 : 79.95
period 08 : 80.15
period 09 : 80.40
Model training finished.
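###Markdown
As an aside (not used in this solution), TensorFlow can also handle this kind of bucketing through `tf.feature_column.bucketized_column`. Wiring it into the model would mean building the feature columns explicitly instead of going through `construct_feature_columns`, so treat the snippet below as an illustrative sketch rather than a drop-in replacement:
###Code
# Hedged sketch: a bucketized feature column over latitude.
# The boundaries roughly mirror the 32-44 degree buckets built manually above;
# using this column would require passing feature columns to the LinearRegressor directly.
latitude_numeric = tf.feature_column.numeric_column("latitude")
latitude_bucketized = tf.feature_column.bucketized_column(
    latitude_numeric, boundaries=[float(b) for b in range(33, 44)])
###Output
_____no_output_____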